1.
PLoS One; 10(8): e0135697, 2015.
Article in English | MEDLINE | ID: mdl-26295970

ABSTRACT

The recognition of object categories is effortlessly accomplished in everyday life, yet its neural underpinnings are not yet fully understood. In this electroencephalography (EEG) study, we used single-trial classification to perform a Representational Similarity Analysis (RSA) of categorical representation of objects in human visual cortex. Brain responses were recorded while participants viewed a set of 72 photographs of objects with a planned category structure. The Representational Dissimilarity Matrix (RDM) used for RSA was derived from confusions of a linear classifier operating on single EEG trials. In contrast to past studies, which derived the RDM from pairwise correlation or classification, we used confusion matrices from multi-class classifications; these provided novel self-similarity measures that were used to derive the overall size of the representational space. We additionally performed classifications on subsets of the brain response in order to identify spatial and temporal EEG components that best discriminated object categories and exemplars. Results from category-level classifications revealed that brain responses to images of human faces formed the most distinct category, while responses to images from the two inanimate categories formed a single category cluster. Exemplar-level classifications produced a broadly similar category structure, as well as sub-clusters corresponding to natural language categories. Spatiotemporal components of the brain response that differentiated exemplars within a category were found to differ from those implicated in differentiating between categories. Our results show that a classification approach can be successfully applied to single-trial scalp-recorded EEG to recover fine-grained object category structure, as well as to identify interpretable spatiotemporal components underlying object processing. Finally, object category can be decoded from purely temporal information recorded at single electrodes.
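As a rough illustration of the classification-to-RDM step described in this abstract, the sketch below builds a representational dissimilarity matrix from the cross-validated confusion matrix of a linear classifier. The synthetic stand-in data, the choice of classifier, and the symmetrisation and normalisation steps are assumptions for the sake of the example, not the authors' exact pipeline.

```python
# Minimal sketch: RDM from the confusion matrix of a multi-class linear
# classifier trained on single-trial features (synthetic data used here).
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n_trials, n_features, n_classes = 720, 64, 6      # e.g. 6 object categories (assumed)
X = rng.standard_normal((n_trials, n_features))   # stand-in for single-trial EEG features
y = rng.integers(0, n_classes, n_trials)          # stand-in category labels

# Cross-validated predictions from a linear classifier on single trials.
y_pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=5)
conf = confusion_matrix(y, y_pred).astype(float)

# Row-normalise counts to confusion probabilities P(predicted | true).
p = conf / conf.sum(axis=1, keepdims=True)

# Similarity = symmetrised confusability; dissimilarity = 1 - similarity.
sim = (p + p.T) / 2.0
rdm = 1.0 - sim
np.fill_diagonal(rdm, 0.0)   # diagonal ("self-similarity") zeroed for the RDM
print(np.round(rdm, 2))
```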


Subject(s)
Image Processing, Computer-Assisted/methods; Pattern Recognition, Visual/physiology; Recognition, Psychology; Visual Cortex/physiology; Visual Pathways/physiology; Adult; Brain Mapping; Electrodes; Electroencephalography; Female; Humans; Male; Middle Aged; Photic Stimulation; Photography; Reaction Time; Visual Cortex/anatomy & histology
2.
PLoS One; 8(6): e65366, 2013.
Article in English | MEDLINE | ID: mdl-23799009

ABSTRACT

This paper presents a new method of analysis by which structural similarities between brain data and linguistic data can be assessed at the semantic level. It shows how to measure the strength of these structural similarities and so determine which of two semantic models fits the brain data better. The first model is derived from WordNet, a lexical database of English compiled by language experts. The second is given by the corpus-based statistical technique of latent semantic analysis (LSA), which detects relations between words that are latent or hidden in text. The brain data are drawn from experiments in which statements about the geography of Europe were presented auditorily to participants who were asked to determine their truth or falsity while electroencephalographic (EEG) recordings were made. The theoretical framework for the analysis of the brain and semantic data derives from axiomatizations of theories such as the theory of differences in utility preference. Using brain-data samples from individual trials time-locked to the presentation of each word, ordinal relations of similarity differences are computed for the brain data and for the linguistic data. In each case those relations that are invariant with respect to the brain and linguistic data, and are correlated with sufficient statistical strength, amount to structural similarities between the brain and linguistic data. Results show that many more statistically significant structural similarities can be found between the brain data and the WordNet-derived data than between the brain data and the LSA-derived data. The work reported here is placed within the context of other recent studies of semantics and the brain. The main contribution of this paper is the new method it presents for the study of semantics and the brain and the focus it permits on networks of relations detected in brain data and represented by a semantic model.
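A minimal sketch of the general idea of comparing ordinal relations of similarity differences between brain-derived and model-derived dissimilarities is given below. The toy word vectors, the distance metrics, and the simple agreement count are illustrative assumptions; they do not reproduce the paper's semiorder construction or its statistical tests.

```python
# Illustrative sketch: how well do two sets of word dissimilarities agree in
# their ordinal structure? (Toy data; not the paper's actual procedure.)
import numpy as np
from itertools import combinations
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

words = ["Paris", "France", "Berlin", "Germany", "Rome"]
rng = np.random.default_rng(1)
brain_features = rng.standard_normal((len(words), 32))    # stand-in ERP features per word
model_features = rng.standard_normal((len(words), 300))   # stand-in semantic vectors per word

d_brain = pdist(brain_features, metric="euclidean")       # condensed pairwise dissimilarities
d_model = pdist(model_features, metric="cosine")

# Ordinal agreement: for every pair of word pairs, do brain and model order
# their dissimilarities the same way?
index_pairs = list(combinations(range(len(d_brain)), 2))
agree = sum(np.sign(d_brain[i] - d_brain[j]) == np.sign(d_model[i] - d_model[j])
            for i, j in index_pairs)
rho, pval = spearmanr(d_brain, d_model)
print(f"ordinal agreement: {agree}/{len(index_pairs)}, Spearman rho: {rho:.2f}")
```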


Subject(s)
Brain/physiology; Semantics; Acoustic Stimulation; Brain Waves; Cluster Analysis; Evoked Potentials; Humans; Photic Stimulation; Psycholinguistics
3.
Proc Natl Acad Sci U S A; 109(50): 20685-90, 2012 Dec 11.
Article in English | MEDLINE | ID: mdl-23185010

ABSTRACT

The neural mechanisms used by the human brain to identify phonemes remain unclear. We recorded the EEG signals evoked by repeated presentation of 12 American English phonemes. A support vector machine model correctly recognized a high percentage of the EEG brain wave recordings represented by their phases, which were expressed as discrete Fourier transform coefficients. We show that phases of the oscillations restricted to the frequency range of 2-9 Hz can be used to successfully recognize brain processing of these phonemes. The recognition rates can be further improved using the scalp tangential electric field and the surface Laplacian around the auditory cortical area, which were derived from the original potential signal. The best rate for the eight initial consonants was 66.7%. Moreover, we found a distinctive phase pattern in the brain for each of these consonants. We then used these phase patterns to recognize the consonants, with a correct rate of 48.7%. In addition, in the analysis of the confusion matrices, we found that significant similarity differences were invariant across the brain and perceptual representations of phonemes. These latter results supported the importance of phonological distinctive features in the neural representation of phonemes.
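The sketch below illustrates the general shape of such a phase-based classification: each epoch is reduced to the DFT phases of its 2-9 Hz bins and fed to a linear support vector machine. The sampling rate, epoch length, cos/sin phase encoding, and synthetic data are assumptions, not the study's actual preprocessing.

```python
# Minimal sketch: classify epochs from DFT phases restricted to 2-9 Hz.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

fs, n_samples, n_channels = 256, 256, 32            # 1-second epochs (assumed)
n_trials, n_phonemes = 600, 12
rng = np.random.default_rng(2)
epochs = rng.standard_normal((n_trials, n_channels, n_samples))  # stand-in EEG epochs
labels = rng.integers(0, n_phonemes, n_trials)                   # stand-in phoneme labels

freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
band = (freqs >= 2) & (freqs <= 9)                   # keep only the 2-9 Hz bins

spectrum = np.fft.rfft(epochs, axis=-1)[..., band]
phases = np.angle(spectrum)
# Encode each phase as (cos, sin) so the classifier sees continuous features
# without the wrap-around discontinuity at +/- pi.
features = np.concatenate([np.cos(phases), np.sin(phases)], axis=-1)
features = features.reshape(n_trials, -1)

scores = cross_val_score(SVC(kernel="linear"), features, labels, cv=5)
print("mean cross-validated accuracy:", scores.mean())
```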


Subject(s)
Brain/physiology; Language; Phonetics; Speech Perception/physiology; Adult; Artificial Intelligence; Brain Mapping; Brain Waves; Electroencephalography; Electroencephalography Phase Synchronization; Female; Humans; Male; Models, Neurological; Psychoacoustics
4.
Neural Comput; 21(11): 3228-69, 2009 Nov.
Article in English | MEDLINE | ID: mdl-19686069

ABSTRACT

The idea of a hierarchical structure of language constituents (phonemes, syllables, words, and sentences) is robust and widely accepted. Empirical similarity differences at every level of this hierarchy have been analyzed in the form of confusion matrices for many years. By normalizing such data so that differences are represented by conditional probabilities, semiorders of similarity differences can be constructed. The intersection of two such orderings is an invariant partial ordering with respect to the two given orders. These invariant partial orderings are the focus of this letter, especially those between perceptual and brain representations, but also those comparing brain images of words generated by auditory or visual presentations. Data from four experiments are analyzed, with some success in finding conceptually significant invariants.
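A toy sketch of the normalisation-and-intersection idea follows: two confusion matrices are row-normalised to conditional probabilities, item pairs are ordered by confusability within each matrix, and only the ordered pairs common to both orderings are kept as the invariant relation. The fixed threshold standing in for the semiorder's indifference band and the random matrices are assumptions for illustration only.

```python
# Illustrative sketch: invariant ordering shared by two confusion matrices.
import numpy as np
from itertools import combinations

def pair_order(conf, eps=0.05):
    """Ordered pairs (a, b) of index pairs where a is more confusable than b by at least eps."""
    p = conf / conf.sum(axis=1, keepdims=True)   # counts -> conditional probabilities
    n = p.shape[0]
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    order = set()
    for a, b in combinations(pairs, 2):
        if p[a] > p[b] + eps:
            order.add((a, b))
        elif p[b] > p[a] + eps:
            order.add((b, a))
    return order

rng = np.random.default_rng(3)
perceptual = rng.integers(1, 50, (4, 4))   # toy perceptual confusion counts
brain = rng.integers(1, 50, (4, 4))        # toy brain-derived confusion counts

# The intersection keeps only order relations that hold in both data sets.
invariant = pair_order(perceptual) & pair_order(brain)
print(f"{len(invariant)} ordered pairs are invariant across the two matrices")
```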


Subject(s)
Brain/physiology; Electroencephalography; Language; Models, Neurological; Perception/physiology; Algorithms; Artifacts; Cluster Analysis; Decision Trees; Psycholinguistics
5.
Neuroimage; 39(3): 1051-63, 2008 Feb 01.
Article in English | MEDLINE | ID: mdl-18023210

ABSTRACT

In brain-imaging research, we are often interested in making quantitative claims about effects across subjects. Given that most imaging data consist of tens to thousands of spatially correlated time series, inter-subject comparisons are typically accomplished with simple combinations of inter-subject data, for example methods relying on group means. Further, these data are frequently taken from reduced channel subsets defined either a priori using anatomical considerations, or functionally using p-value thresholding to choose cluster boundaries. While such methods are effective for data reduction, means are sensitive to outliers, and current methods for subset selection can be somewhat arbitrary. Here, we introduce a novel "partial-ranking" approach to test for inter-subject agreement at the channel level. This non-parametric method effectively tests whether channel concordance is present across subjects, how many channels are necessary for maximum concordance, and which channels are responsible for this agreement. We validate the method on two previously published and two simulated EEG data sets.
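For orientation, the sketch below computes Kendall's coefficient of concordance (W) over each subject's ranking of channels by some per-channel effect measure. This is a standard concordance statistic related to, but not the same as, the partial-ranking test introduced in the paper; the effect measure and the data are invented for illustration.

```python
# Hedged sketch: channel-level agreement across subjects via Kendall's W.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(4)
n_subjects, n_channels = 12, 30
effect = rng.standard_normal((n_subjects, n_channels))   # stand-in per-channel effect sizes

# Rank channels within each subject, then measure agreement of the rankings.
ranks = np.vstack([rankdata(row) for row in effect])
rank_sums = ranks.sum(axis=0)
s = ((rank_sums - rank_sums.mean()) ** 2).sum()
w = 12 * s / (n_subjects ** 2 * (n_channels ** 3 - n_channels))
print("Kendall's W (0 = no agreement, 1 = perfect agreement):", round(w, 3))
```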


Subject(s)
Algorithms; Brain/anatomy & histology; Brain/physiology; Electroencephalography/statistics & numerical data; Image Processing, Computer-Assisted/statistics & numerical data; Analysis of Variance; Brain Mapping; Computer Simulation; Humans; Models, Anatomic