1.
Front Neurosci ; 18: 1373515, 2024.
Article in English | MEDLINE | ID: mdl-38765672

ABSTRACT

A growing number of studies apply deep neural networks (DNNs) to recordings of human electroencephalography (EEG) to identify a range of disorders. In many studies, EEG recordings are split into segments, and each segment is randomly assigned to the training or test set. As a consequence, data from individual subjects appears in both the training and the test set. Could high test-set accuracy reflect data leakage from subject-specific patterns in the data, rather than patterns that identify a disease? We address this question by testing the performance of DNN classifiers using segment-based holdout (in which segments from one subject can appear in both the training and test set), and comparing this to their performance using subject-based holdout (where all segments from one subject appear exclusively in either the training set or the test set). In two datasets (one classifying Alzheimer's disease, and the other classifying epileptic seizures), we find that performance on previously-unseen subjects is strongly overestimated when models are trained using segment-based holdout. Finally, we survey the literature and find that the majority of translational DNN-EEG studies use segment-based holdout. Most published DNN-EEG studies may dramatically overestimate their classification performance on new subjects.
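To make the two holdout schemes concrete, here is a minimal sketch (not taken from the paper) assuming a scikit-learn workflow with hypothetical arrays X (segment features), y (diagnostic labels), and subject_id; a grouped split keeps every subject's segments on one side of the train/test boundary, while a plain random split lets them leak into both.

    import numpy as np
    from sklearn.model_selection import train_test_split, GroupShuffleSplit

    # Hypothetical data: 50 subjects, 40 EEG segments each, 128 features per segment,
    # with one diagnostic label per subject copied onto all of that subject's segments.
    rng = np.random.default_rng(0)
    n_subjects, segs_per_subject, n_features = 50, 40, 128
    X = rng.normal(size=(n_subjects * segs_per_subject, n_features))
    y = np.repeat(rng.integers(0, 2, n_subjects), segs_per_subject)
    subject_id = np.repeat(np.arange(n_subjects), segs_per_subject)

    # Segment-based holdout: segments from the same subject can land in both sets,
    # so a classifier can exploit subject-specific patterns (data leakage).
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    # Subject-based holdout: all segments from a given subject stay in one set,
    # so test accuracy reflects generalization to previously unseen subjects.
    splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
    train_idx, test_idx = next(splitter.split(X, y, groups=subject_id))
    X_tr, X_te = X[train_idx], X[test_idx]
    y_tr, y_te = y[train_idx], y[test_idx]

When more than one split is needed, GroupKFold or LeaveOneGroupOut applies the same subject-level constraint across folds.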

2.
Dev Sci ; : e13507, 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38629500

ABSTRACT

Blind adults display language-specificity in their packaging and ordering of events in speech. These differences affect the representation of events in co-speech gesture (gesturing with speech) but not in silent gesture (gesturing without speech). Here we examine when in development blind children begin to show adult-like patterns in co-speech and silent gesture. We studied speech and gestures produced by 30 blind and 30 sighted children learning Turkish, equally divided into three age groups: 5-6, 7-8, and 9-10 years. The children were asked to describe three-dimensional spatial event scenes (e.g., running out of a house), first with speech and then without speech, using only their hands. We focused on physical motion events, which, in blind adults, elicit cross-linguistic differences in speech and co-speech gesture, but cross-linguistic similarities in silent gesture. Our results showed an effect of language on gesture when it was accompanied by speech (co-speech gesture), but not when it was used without speech (silent gesture), across both blind and sighted learners. The language-specific co-speech gesture pattern, for both packaging and ordering semantic elements, was present at the earliest ages we tested in both blind and sighted children. The silent gesture pattern appeared later for blind children than for sighted children for both packaging and ordering. Our findings highlight gesture as a robust and integral aspect of the language acquisition process from an early age, and provide insight into when language does and does not have an effect on gesture, even in blind children who lack visual access to gesture. RESEARCH HIGHLIGHTS: Gestures, when produced with speech (i.e., co-speech gesture), follow language-specific patterns in event representation in both blind and sighted children. Gestures, when produced without speech (i.e., silent gesture), do not follow language-specific patterns in event representation in either blind or sighted children. Language-specific patterns in speech and co-speech gestures are observable at the same time in blind and sighted children. The cross-linguistic similarities in silent gestures emerge slightly later in blind children than in sighted children.

3.
Cereb Cortex ; 30(11): 5821-5829, 2020 10 01.
Article in English | MEDLINE | ID: mdl-32537630

ABSTRACT

How do humans compute approximate number? According to one influential theory, approximate number representations arise in the intraparietal sulcus and are amodal, meaning that they arise independently of any sensory modality. Alternatively, approximate number may be computed initially within sensory systems. Here we tested for sensitivity to approximate number in the visual system using steady-state visual evoked potentials. We recorded electroencephalography from humans while they viewed dotclouds presented at 30 Hz, which alternated in numerosity (ranging from 10 to 20 dots) at 15 Hz. At this rate, each dotcloud backward masked the previous dotcloud, disrupting top-down feedback to visual cortex and preventing conscious awareness of the dotclouds' numerosities. Spectral amplitude at 15 Hz measured over the occipital lobe (Oz) correlated positively with the numerical ratio of the stimuli, even when nonnumerical stimulus attributes were controlled, indicating that subjects' visual systems were differentiating dotclouds on the basis of their numerical ratios. Crucially, subjects were unable to discriminate the numerosities of the dotclouds consciously, indicating that the backward masking of the stimuli disrupted reentrant feedback to visual cortex. Approximate number appears to be computed within the visual system, independently of higher-order areas such as the intraparietal sulcus.
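As an illustration only (not the authors' analysis pipeline), the following sketch shows how the amplitude of a frequency-tagged EEG signal can be read out at the 15 Hz numerosity-alternation rate from a single occipital channel; the sampling rate, recording length, and stand-in signal are all assumptions.

    import numpy as np

    fs = 1000.0                       # assumed sampling rate (Hz)
    t = np.arange(0, 10.0, 1.0 / fs)  # 10 s of assumed single-channel (Oz) data
    # Stand-in signal: a small 15 Hz component buried in noise.
    oz = 0.5 * np.sin(2 * np.pi * 15.0 * t) + np.random.default_rng(0).normal(size=t.size)

    # Single-sided amplitude spectrum via the real FFT.
    spectrum = np.fft.rfft(oz)
    freqs = np.fft.rfftfreq(oz.size, d=1.0 / fs)
    amplitude = 2.0 * np.abs(spectrum) / oz.size

    # Amplitude in the bin closest to the 15 Hz tagging frequency.
    tag_bin = np.argmin(np.abs(freqs - 15.0))
    print(f"amplitude at {freqs[tag_bin]:.2f} Hz: {amplitude[tag_bin]:.3f}")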


Subject(s)
Evoked Potentials, Visual/physiology; Mathematical Concepts; Visual Cortex/physiology; Adult; Consciousness/physiology; Electroencephalography; Female; Humans; Male; Photic Stimulation; Visual Perception/physiology
4.
Cogn Sci ; 42(3): 1001-1014, 2018 04.
Article in English | MEDLINE | ID: mdl-28481418

ABSTRACT

Sighted speakers of different languages vary systematically in how they package and order components of a motion event in speech. These differences influence how semantic elements are organized in gesture, but only when those gestures are produced with speech (co-speech gesture), not without speech (silent gesture). We ask whether the cross-linguistic similarity in silent gesture is driven by the visuospatial structure of the event. We compared 40 congenitally blind adult native speakers of English or Turkish (20/language) to 80 sighted adult speakers (40/language; half with, half without blindfolds) as they described three-dimensional motion scenes. We found an effect of language on co-speech gesture but not on silent gesture: blind speakers of both languages organized their silent gestures as sighted speakers do. Humans may have a natural semantic organization that they impose on events when conveying them in gesture without language, an organization that relies on neither visuospatial cues nor language structure.


Subject(s)
Blindness/psychology; Gestures; Language; Semantics; Speech; Adult; Female; Humans; Male; Turkey
5.
Psychol Sci ; 27(5): 737-47, 2016 05.
Article in English | MEDLINE | ID: mdl-26980154

ABSTRACT

Speakers of all languages gesture, but there are differences in the gestures that they produce. Do speakers learn language-specific gestures by watching others gesture or by learning to speak a particular language? We examined this question by studying the speech and gestures produced by 40 congenitally blind adult native speakers of English and Turkish (n = 20/language), and comparing them with the speech and gestures of 40 sighted adult speakers in each language (20 wearing blindfolds, 20 not wearing blindfolds). We focused on speakers' descriptions of physical motion, which display strong cross-linguistic differences in patterns of speech and gesture use. Congenitally blind speakers of English and Turkish produced speech that resembled the speech produced by sighted speakers of their native language. More important, blind speakers of each language used gestures that resembled the gestures of sighted speakers of that language. Our results suggest that hearing a particular language is sufficient to gesture like a native speaker of that language.


Subject(s)
Gestures; Language; Speech/physiology; Verbal Behavior/physiology; Adult; Cross-Cultural Comparison; Female; Humans; Language Development; Male; Middle Aged; Sign Language; Turkey; Visually Impaired Persons/statistics & numerical data
6.
Cognition ; 148: 10-8, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26707427

ABSTRACT

Languages differ in how they organize events, particularly in the types of semantic elements they express and the arrangement of those elements within a sentence. Here we ask whether these cross-linguistic differences have an impact on how events are represented nonverbally; more specifically, on how events are represented in gestures produced without speech (silent gesture), compared to gestures produced with speech (co-speech gesture). We observed speech and gesture in 40 adult native speakers of English and Turkish (n = 20/language) asked to describe physical motion events (e.g., running down a path), a domain known to elicit distinct patterns of speech and co-speech gesture in English and Turkish speakers. Replicating previous work (Kita & Özyürek, 2003), we found an effect of language on gesture when it was produced with speech: co-speech gestures produced by English speakers differed from co-speech gestures produced by Turkish speakers. However, we found no effect of language on gesture when it was produced on its own: silent gestures produced by English speakers were identical to silent gestures produced by Turkish speakers in how motion elements were packaged and ordered. The findings provide evidence for a natural semantic organization that humans impose on motion events when they convey those events without language.


Subject(s)
Gestures; Language; Speech; Adult; Aged; Female; Humans; Male; Middle Aged; Semantics; Young Adult