1.
PLoS One ; 10(8): e0135697, 2015.
Article in English | MEDLINE | ID: mdl-26295970

ABSTRACT

The recognition of object categories is effortlessly accomplished in everyday life, yet its neural underpinnings remain not fully understood. In this electroencephalography (EEG) study, we used single-trial classification to perform a Representational Similarity Analysis (RSA) of categorical representation of objects in human visual cortex. Brain responses were recorded while participants viewed a set of 72 photographs of objects with a planned category structure. The Representational Dissimilarity Matrix (RDM) used for RSA was derived from confusions of a linear classifier operating on single EEG trials. In contrast to past studies, which used pairwise correlation or classification to derive the RDM, we used confusion matrices from multi-class classifications, which provided novel self-similarity measures that were used to derive the overall size of the representational space. We additionally performed classifications on subsets of the brain response in order to identify spatial and temporal EEG components that best discriminated object categories and exemplars. Results from category-level classifications revealed that brain responses to images of human faces formed the most distinct category, while responses to images from the two inanimate categories formed a single category cluster. Exemplar-level classifications produced a broadly similar category structure, as well as sub-clusters corresponding to natural language categories. Spatiotemporal components of the brain response that differentiated exemplars within a category were found to differ from those implicated in differentiating between categories. Our results show that a classification approach can be successfully applied to single-trial scalp-recorded EEG to recover fine-grained object category structure, as well as to identify interpretable spatiotemporal components underlying object processing. Finally, object category can be decoded from purely temporal information recorded at single electrodes.
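The confusion-matrix-to-RDM step described above can be sketched in a few lines of numpy. The confusion counts below are hypothetical; the diagonal of the symmetrized confusability matrix plays the role of the self-similarity measure the abstract mentions, and off-diagonal confusability is inverted into dissimilarity:

```python
import numpy as np

# Hypothetical confusion matrix for 4 object categories (rows = true class,
# columns = predicted class), e.g. counts pooled over single-trial classifications.
conf = np.array([
    [50,  5,  3,  2],
    [ 6, 48,  4,  2],
    [ 2,  3, 40, 15],
    [ 1,  2, 14, 43],
], dtype=float)

# Normalize rows to conditional probabilities P(predicted | true).
p = conf / conf.sum(axis=1, keepdims=True)

# Symmetrize, then turn confusability into dissimilarity: categories that
# are rarely confused lie far apart in the representational space.
sim = 0.5 * (p + p.T)
rdm = 1.0 - sim
np.fill_diagonal(rdm, 0.0)
```

With these counts, the two frequently confused categories (rows 2 and 3) end up closer in the RDM than any other pair, mirroring how the two inanimate categories clustered in the study.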


Subject(s)
Image Processing, Computer-Assisted/methods , Pattern Recognition, Visual/physiology , Recognition, Psychology , Visual Cortex/physiology , Visual Pathways/physiology , Adult , Brain Mapping , Electrodes , Electroencephalography , Female , Humans , Male , Middle Aged , Photic Stimulation , Photography , Reaction Time , Visual Cortex/anatomy & histology
3.
PLoS One ; 8(6): e65366, 2013.
Article in English | MEDLINE | ID: mdl-23799009

ABSTRACT

This paper presents a new method of analysis by which structural similarities between brain data and linguistic data can be assessed at the semantic level. It shows how to measure the strength of these structural similarities and so determine the relatively better fit of the brain data with one semantic model over another. The first model is derived from WordNet, a lexical database of English compiled by language experts. The second is given by the corpus-based statistical technique of latent semantic analysis (LSA), which detects relations between words that are latent or hidden in text. The brain data are drawn from experiments in which statements about the geography of Europe were presented auditorily to participants who were asked to determine their truth or falsity while electroencephalographic (EEG) recordings were made. The theoretical framework for the analysis of the brain and semantic data derives from axiomatizations of theories such as the theory of differences in utility preference. Using brain-data samples from individual trials time-locked to the presentation of each word, ordinal relations of similarity differences are computed for the brain data and for the linguistic data. In each case those relations that are invariant with respect to the brain and linguistic data, and are correlated with sufficient statistical strength, amount to structural similarities between the brain and linguistic data. Results show that many more statistically significant structural similarities can be found between the brain data and the WordNet-derived data than the LSA-derived data. The work reported here is placed within the context of other recent studies of semantics and the brain. The main contribution of this paper is the new method it presents for the study of semantics and the brain and the focus it permits on networks of relations detected in brain data and represented by a semantic model.
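A much-simplified sketch of the comparison the paper describes: count the strict ordinal relations among dissimilarities that hold in both the brain-derived and the model-derived data. This counts shared orderings rather than implementing the paper's full semiorder axiomatization, and all matrix values used to exercise it are hypothetical:

```python
import itertools
import numpy as np

def ordinal_agreement(d1, d2):
    """Fraction of strict ordinal relations among pairwise dissimilarities
    that are shared (invariant) between two dissimilarity matrices."""
    iu = np.triu_indices_from(d1, k=1)
    v1, v2 = d1[iu], d2[iu]
    agree = total = 0
    for i, j in itertools.combinations(range(len(v1)), 2):
        s1 = np.sign(v1[i] - v1[j])
        s2 = np.sign(v2[i] - v2[j])
        if s1 != 0 and s2 != 0:   # skip ties: no strict relation to compare
            total += 1
            agree += (s1 == s2)
    return agree / total if total else 0.0
```

A higher agreement score for one semantic model over another would then indicate a relatively better structural fit, in the spirit of the WordNet-versus-LSA comparison.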


Subject(s)
Brain/physiology , Semantics , Acoustic Stimulation , Brain Waves , Cluster Analysis , Evoked Potentials , Humans , Photic Stimulation , Psycholinguistics
4.
Proc Natl Acad Sci U S A ; 109(50): 20685-90, 2012 Dec 11.
Article in English | MEDLINE | ID: mdl-23185010

ABSTRACT

The neural mechanisms used by the human brain to identify phonemes remain unclear. We recorded the EEG signals evoked by repeated presentation of 12 American English phonemes. A support vector machine model correctly recognized a high percentage of the EEG brain wave recordings represented by their phases, which were expressed in discrete Fourier transform coefficients. We show that phases of the oscillations restricted to the frequency range of 2-9 Hz can be used to successfully recognize brain processing of these phonemes. The recognition rates can be further improved using the scalp tangential electric field and the surface Laplacian around the auditory cortical area, which were derived from the original potential signal. The best rate for the eight initial consonants was 66.7%. Moreover, we found a distinctive phase pattern in the brain for each of these consonants. We then used these phase patterns to recognize the consonants, with a correct rate of 48.7%. In addition, in the analysis of the confusion matrices, we found that significant similarity-difference relations were invariant between brain and perceptual representations of phonemes. These latter results supported the importance of phonological distinctive features in the neural representation of phonemes.
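Extracting the phase features the abstract describes is straightforward with a discrete Fourier transform. This is a single-channel sketch of the feature-extraction step only (the tangential-field and surface-Laplacian derivations, and the SVM itself, are omitted); the sampling rate and trial length are hypothetical:

```python
import numpy as np

def phase_features(trial, fs):
    """Phases of DFT coefficients restricted to 2-9 Hz, the band the study
    found sufficient for phoneme discrimination. `trial` is one channel's
    samples; `fs` is the sampling rate in Hz."""
    spec = np.fft.rfft(trial)
    freqs = np.fft.rfftfreq(len(trial), d=1.0 / fs)
    band = (freqs >= 2) & (freqs <= 9)
    return np.angle(spec[band])
```

For a one-second trial sampled at 128 Hz, the frequency resolution is 1 Hz, so the 2-9 Hz band yields eight phase values per channel; these vectors would then be fed to the classifier.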


Subject(s)
Brain/physiology , Language , Phonetics , Speech Perception/physiology , Adult , Artificial Intelligence , Brain Mapping , Brain Waves , Electroencephalography , Electroencephalography Phase Synchronization , Female , Humans , Male , Models, Neurological , Psychoacoustics
5.
Neural Comput ; 23(11): 2974-3000, 2011 Nov.
Article in English | MEDLINE | ID: mdl-21851276

ABSTRACT

This letter develops a framework for EEG analysis and similar applications based on polyharmonic splines. This development overcomes a basic problem with the method of splines in the Euclidean setting: that it does not work on low-degree algebraic surfaces such as spherical and ellipsoidal scalp models. The method's capability is illustrated through simulations on the three-sphere model and using empirical data.
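For orientation, here is a minimal Euclidean polyharmonic RBF interpolation sketch. Note this is the classical setting the letter moves beyond, not its surface formulation on spherical or ellipsoidal scalp models; the electrode positions and values in the test are hypothetical, and the tiny ridge term (plus the omitted polynomial augmentation) is a simplification:

```python
import numpy as np

def polyharmonic_kernel(r, k=2):
    """phi(r) = r^k log r for even k, r^k for odd k (polyharmonic splines)."""
    r = np.asarray(r, dtype=float)
    out = np.zeros_like(r)
    nz = r > 0
    if k % 2 == 0:
        out[nz] = r[nz] ** k * np.log(r[nz])
    else:
        out[nz] = r[nz] ** k
    return out

def fit_spline(points, values, k=2):
    """Solve for RBF weights interpolating `values` at scattered `points`.
    The small ridge guards against a numerically singular kernel matrix."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    A = polyharmonic_kernel(d, k)
    return np.linalg.solve(A + 1e-12 * np.eye(len(points)), values)

def evaluate(points, w, query, k=2):
    """Evaluate the fitted spline at `query` locations."""
    d = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=-1)
    return polyharmonic_kernel(d, k) @ w
```

Applied to electrode positions on a spherical head model, the Euclidean version interpolates through chord distances; the letter's contribution is to make the spline machinery work intrinsically on such low-degree algebraic surfaces.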


Subject(s)
Brain/physiology , Electroencephalography , Signal Processing, Computer-Assisted , Animals , Humans
6.
IEEE Trans Neural Netw ; 22(1): 84-95, 2011 Jan.
Article in English | MEDLINE | ID: mdl-21075723

ABSTRACT

The idea that synchronized oscillations are important in cognitive tasks is receiving significant attention. In this view, single neurons are no longer elementary computational units. Rather, coherent oscillating groups of neurons are seen as nodes of networks performing cognitive tasks. From this assumption, we develop a model of stimulus-pattern learning and recognition. The three most salient features of our model are: 1) a new definition of synchronization; 2) demonstrated robustness in the presence of noise; and 3) pattern learning.
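The coupled-oscillator dynamics behind such models can be illustrated with the standard Kuramoto system, a common stand-in for networks of coherently oscillating neural groups (the paper's own synchronization definition and learning rule are not reproduced here; all parameters are hypothetical):

```python
import numpy as np

def kuramoto_step(theta, omega, K, dt=0.01):
    """One Euler step of the Kuramoto model: each oscillator's phase is
    pulled toward the others with coupling strength K."""
    diff = theta[None, :] - theta[:, None]   # diff[i, j] = theta_j - theta_i
    return theta + dt * (omega + (K / len(theta)) * np.sin(diff).sum(axis=1))

def order_parameter(theta):
    """|r| in [0, 1]; values near 1 indicate full phase synchronization."""
    return abs(np.exp(1j * theta).mean())
```

With identical natural frequencies and sufficiently strong coupling, an initially dispersed population locks into a single coherent phase cluster, the kind of oscillating group treated as a computational node in the model above.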


Subject(s)
Artificial Intelligence , Biological Clocks/physiology , Cortical Synchronization/physiology , Neural Networks, Computer , Pattern Recognition, Automated/standards , Humans
7.
Biol Psychol ; 82(3): 253-9, 2009 Dec.
Article in English | MEDLINE | ID: mdl-19698758

ABSTRACT

In the current study we investigate the EEG response to listening and imagining melodies and explore the possibility of decomposing this response according to musical features, such as rhythm and pitch patterns. A structural model was created based on musical aspects and multiple regression was used to calculate profiles of the contribution of each aspect, in contrast to traditional ERP components. By decomposing the response, we aimed to uncover pronounced ERP contributions for aspects of the encoding of musical structure, assuming a simple additive combination of these. When using a model built up of metric levels and contour direction, 81% of the variance is explained for perceived, and 57% for imagined melodies. The maximum correlation between the parameters found for the same melodic aspect in perception vs. imagery was 0.88, indicating similar processing between tasks. The decomposition method is shown to be a novel analysis method of complex ERP patterns, which allows subcomponents to be investigated within a continuous context.
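The regression decomposition described above reduces to ordinary least squares on a design matrix of musical-feature regressors. The sketch below uses simulated data with hypothetical metric-level and contour-direction profiles and a known additive ground truth:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # time-locked samples across a melody (hypothetical)

# Hypothetical feature profiles per sample: metric level (0-3) and
# contour direction (down / flat / up).
metric = rng.integers(0, 4, n).astype(float)
contour = rng.choice([-1.0, 0.0, 1.0], n)

# Simulated EEG response: an additive combination of the two aspects plus noise,
# matching the abstract's assumption of simple additive combination.
erp = 0.8 * metric - 0.5 * contour + rng.normal(0, 0.3, n)

# Multiple regression: recover a contribution profile (beta) per aspect.
X = np.column_stack([np.ones(n), metric, contour])
beta, *_ = np.linalg.lstsq(X, erp, rcond=None)
pred = X @ beta
r2 = 1 - ((erp - pred) ** 2).sum() / ((erp - erp.mean()) ** 2).sum()
```

In the study this is done per channel and time point, yielding contribution profiles in place of traditional ERP components; comparing the fitted profiles across perception and imagery tasks gives the correlations reported above.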


Subject(s)
Auditory Perception/physiology , Brain/physiology , Evoked Potentials, Auditory/physiology , Imagination/physiology , Music/psychology , Acoustic Stimulation , Adult , Attention/physiology , Brain Mapping , Electroencephalography , Humans , Regression Analysis , Signal Processing, Computer-Assisted
8.
Neural Comput ; 21(11): 3228-69, 2009 Nov.
Article in English | MEDLINE | ID: mdl-19686069

ABSTRACT

The idea of a hierarchical structure of language constituents of phonemes, syllables, words, and sentences is robust and widely accepted. Empirical similarity differences at every level of this hierarchy have been analyzed in the form of confusion matrices for many years. By normalizing such data so that differences are represented by conditional probabilities, semiorders of similarity differences can be constructed. The intersection of two such orderings is an invariant partial ordering with respect to the two given orders. These invariant partial orderings, especially between perceptual and brain representations, but also for comparison of brain images of words generated by auditory or visual presentations, are the focus of this letter. Data from four experiments are analyzed, with some success in finding conceptually significant invariants.
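The core construction, a semiorder from thresholded differences and the invariant partial ordering obtained by intersecting two such orders, can be sketched directly; the threshold and the similarity values in the test are hypothetical:

```python
def partial_order(values, threshold):
    """Semiorder relation on indices: i precedes j when the difference
    values[j] - values[i] exceeds a discrimination threshold."""
    n = len(values)
    return {(i, j) for i in range(n) for j in range(n)
            if values[j] - values[i] > threshold}

def invariant_order(rel_a, rel_b):
    """Intersection of two semiorders: exactly the ordinal relations that
    hold in both datasets, hence invariant with respect to both."""
    return rel_a & rel_b
```

Applied to conditional-probability-normalized confusion data from, say, a perceptual and a brain-derived source, the intersection retains only those similarity-difference relations on which the two representations agree.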


Subject(s)
Brain/physiology , Electroencephalography , Language , Models, Neurological , Perception/physiology , Algorithms , Artifacts , Cluster Analysis , Decision Trees , Psycholinguistics
9.
Neuroimage ; 39(3): 1051-63, 2008 Feb 01.
Article in English | MEDLINE | ID: mdl-18023210

ABSTRACT

In brain-imaging research, we are often interested in making quantitative claims about effects across subjects. Given that most imaging data consist of tens to thousands of spatially correlated time series, inter-subject comparisons are typically accomplished with simple combinations of inter-subject data, for example methods relying on group means. Further, these data are frequently taken from reduced channel subsets defined either a priori using anatomical considerations, or functionally using p-value thresholding to choose cluster boundaries. While such methods are effective for data reduction, means are sensitive to outliers, and current methods for subset selection can be somewhat arbitrary. Here, we introduce a novel "partial-ranking" approach to test for inter-subject agreement at the channel level. This non-parametric method effectively tests whether channel concordance is present across subjects, how many channels are necessary for maximum concordance, and which channels are responsible for this agreement. We validate the method on two previously published and two simulated EEG data sets.
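A standard statistic in the spirit of this concordance test is Kendall's coefficient of concordance W over a subjects-by-channels matrix. The simplified sketch below assumes full rankings with no ties, whereas the paper's method handles partial rankings:

```python
import numpy as np

def kendalls_w(ratings):
    """Kendall's W for an (m subjects x n channels) matrix of rankable
    scores; W = 1 means all subjects rank the channels identically,
    W = 0 means no agreement. Assumes no ties."""
    m, n = ratings.shape
    # Convert each subject's scores to ranks 1..n.
    ranks = ratings.argsort(axis=1).argsort(axis=1) + 1
    R = ranks.sum(axis=0)                  # rank sums per channel
    S = ((R - R.mean()) ** 2).sum()        # spread of rank sums
    return 12 * S / (m ** 2 * (n ** 3 - n))
```

In the channel-selection setting above, one would recompute such a statistic over channel subsets to find how many, and which, channels maximize inter-subject concordance.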


Subject(s)
Algorithms , Brain/anatomy & histology , Brain/physiology , Electroencephalography/statistics & numerical data , Image Processing, Computer-Assisted/statistics & numerical data , Analysis of Variance , Brain Mapping , Computer Simulation , Humans , Models, Anatomic
10.
IEEE Trans Biomed Eng ; 54(3): 436-43, 2007 Mar.
Article in English | MEDLINE | ID: mdl-17355055

ABSTRACT

While magnetoencephalography (MEG) is widely used to identify spatial locations of brain activations associated with various tasks, classification of single trials in stimulus-locked experiments remains an open subject. Very significant single-trial classification results have been published using electroencephalogram (EEG) data, but in the MEG case, the weakness of the magnetic fields originating from the relevant sources relative to external noise, and the high dimensionality of the data, are difficult obstacles to overcome. We present here very significant MEG single-trial mean classification rates of words. The number of words classified varied from seven to nine, and both visual and auditory modalities were studied. These results were obtained by using a variety of blind source separation methods: spatial principal components analysis (PCA), Infomax independent components analysis (Infomax ICA), and second-order blind identification (SOBI). The sources obtained were classified using two methods: linear discriminant classification (LDC) and ν-support vector machine (ν-SVM). The data used here, auditory and visual presentations of words, presented nontrivial classification problems, but with Infomax ICA associated with LDC we obtained high classification rates. Our best single-trial mean classification rate was 60.1% for classification of 900 single trials of nine auditory words. On two-class problems, rates were as high as 97.5%.
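The pipeline shape (unsupervised dimensionality reduction followed by a linear classifier) can be sketched compactly. This uses plain PCA, the simplest of the source-separation methods compared above, and a nearest-class-mean rule as a stripped-down stand-in for LDC; trial counts and dimensions are hypothetical:

```python
import numpy as np

def pca_reduce(X, k):
    """Project trials (n_trials x n_features) onto the top-k principal
    components, reducing the high dimensionality noted above."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def nearest_mean_classify(train_X, train_y, test_X):
    """Assign each test trial to the class with the nearest mean in the
    reduced space (a simplification of linear discriminant classification)."""
    classes = np.unique(train_y)
    means = np.stack([train_X[train_y == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(test_X[:, None, :] - means[None, :, :], axis=-1)
    return classes[d.argmin(axis=1)]
```

Swapping PCA for Infomax ICA or SOBI, and the nearest-mean rule for a full LDC or ν-SVM, recovers the variants the paper benchmarks against each other.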


Subject(s)
Algorithms , Brain Mapping/methods , Evoked Potentials, Auditory/physiology , Evoked Potentials, Visual/physiology , Magnetoencephalography/methods , Pattern Recognition, Automated/methods , Speech Perception/physiology , Cluster Analysis , Diagnosis, Computer-Assisted/methods , Humans , Principal Component Analysis