Results 1 - 20 of 36
1.
Laryngorhinootologie ; 103(4): 252-260, 2024 Apr.
Article in German | MEDLINE | ID: mdl-38565108

ABSTRACT

Language processing can be measured objectively using late components of the evoked brain potential. The most established component in this area of research is the N400, a negativity that peaks at about 400 ms after stimulus onset with a centro-parietal maximum and reflects semantic processing. Its presence, as well as its temporal and quantitative expression, allows conclusions to be drawn about the quality of processing. It is therefore suitable for measuring speech comprehension in special populations, such as cochlear implant (CI) users. The following is an overview of the use of the N400 component as a tool for studying language processes in CI users. We present studies with adult CI users, in which the N400 reflects the quality of speech comprehension with the new hearing device, and studies with children, in which the emergence of the N400 reflects the acquisition of their very first vocabulary.
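N400 effects of the kind described above are conventionally quantified as the mean amplitude difference between incongruent and congruent ERPs in a window around 400 ms. A minimal sketch of that measure; the sampling rate, window, and data are illustrative placeholders, not the pipeline of any study listed here:

```python
# Quantify an N400 effect as the mean voltage difference between
# incongruent and congruent ERPs in a 300-500 ms post-onset window.
# Hypothetical setup: epochs start at stimulus onset, 250 Hz sampling.

def mean_window_amplitude(erp, srate_hz, t_start_s, t_end_s):
    """Mean amplitude of an ERP (list of voltages) in a time window."""
    i0 = int(t_start_s * srate_hz)
    i1 = int(t_end_s * srate_hz)
    window = erp[i0:i1]
    return sum(window) / len(window)

def n400_effect(erp_incongruent, erp_congruent, srate_hz=250):
    """N400 effect: incongruent minus congruent mean amplitude, 300-500 ms."""
    return (mean_window_amplitude(erp_incongruent, srate_hz, 0.3, 0.5)
            - mean_window_amplitude(erp_congruent, srate_hz, 0.3, 0.5))
```

A negative value reflects the expected N400 negativity for incongruent items; its absence (a value near zero) would correspond to a missing N400 effect.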


Subject(s)
Cochlear Implants , Speech Perception , Adult , Child , Female , Humans , Male , Comprehension/physiology , Electroencephalography , Evoked Potentials/physiology , Language , Language Development , Semantics , Speech Perception/physiology
2.
Front Neurosci ; 14: 787, 2020.
Article in English | MEDLINE | ID: mdl-32848560

ABSTRACT

Cochlear implantation constitutes a successful therapy for inner ear deafness, with the majority of patients showing good outcomes. There is, however, still unexplained variability in outcomes, with a number of cochlear implant (CI) users showing major limitations in speech comprehension. The current study used a multimodal diagnostic approach combining single-photon emission computed tomography (SPECT) and electroencephalography (EEG) to examine the mechanisms underlying speech processing in postlingually deafened CI users (N = 21). In one session, the participants performed a speech discrimination task, during which a 96-channel EEG was recorded and the perfusion marker 99mTc-HMPAO was injected intravenously. The SPECT scan was acquired 1.5 h after injection to measure cortical activity during the speech task. The second session included a SPECT scan after injection at rest, without stimulation. Analysis of EEG and SPECT data showed N400 and P600 event-related potentials (ERPs) evoked particularly by semantic violations in the sentences, and enhanced perfusion during the task compared to rest in a temporo-frontal network involving the auditory cortex bilaterally and Broca's area. Moreover, higher performance in tests of word recognition and verbal intelligence correlated strongly with activation in this network during the speech task. However, comparing CI users with lower and higher speech intelligibility [median split with a cutoff of +7.6 dB signal-to-noise ratio (SNR) in the Göttinger sentence test] revealed additional activations of parietal and occipital regions for CI users with higher performance, and stronger activation of superior frontal areas for those with lower performance.
Furthermore, SPECT activity was tightly coupled with EEG and cognitive abilities, as indicated by correlations (1) between cortical activation and EEG amplitudes, N400 (temporal and occipital areas) and P600 (parietal and occipital areas), and (2) between cortical activation in left-sided temporal and bilateral occipital/parietal areas and working memory capacity. These results suggest the recruitment of a temporo-frontal network in CI users during speech processing and a close connection between ERP effects and cortical activation. The observed differences in speech-evoked cortical activation patterns for CI users with higher and lower speech intelligibility suggest distinct processing strategies during speech rehabilitation with a CI.
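The median split used to form the two intelligibility groups can be sketched as below; the subject labels and SNR thresholds are invented for illustration, and in the Göttinger sentence test a lower SNR threshold means better speech-in-noise performance:

```python
from statistics import median

def median_split(snr_by_subject):
    """Median split on speech reception thresholds (SNR in dB).

    Subjects at or below the median threshold are the higher performers,
    since a lower SNR threshold indicates better speech intelligibility.
    Returns (cutoff, higher_performers, lower_performers).
    """
    cutoff = median(snr_by_subject.values())
    higher = sorted(s for s, v in snr_by_subject.items() if v <= cutoff)
    lower = sorted(s for s, v in snr_by_subject.items() if v > cutoff)
    return cutoff, higher, lower
```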

3.
Cereb Cortex ; 30(2): 812-823, 2020 03 21.
Article in English | MEDLINE | ID: mdl-31373629

ABSTRACT

Language is a fundamental part of human cognition. The question of whether language is processed independently of speech, however, is still heavily discussed. The absence of speech in deaf signers offers the opportunity to disentangle language from speech in the human brain. Using probabilistic tractography, we compared brain structural connectivity of adult deaf signers who had learned sign language early in life to that of matched hearing controls. Quantitative comparison of the connectivity profiles revealed that the core language tracts did not differ between signers and controls, confirming that language is independent of speech. In contrast, pathways involved in the production and perception of speech displayed lower connectivity in deaf signers compared to hearing controls. These differences were located in tracts towards the left pre-supplementary motor area and the thalamus when seeding in Broca's area, and in ipsilateral parietal areas and the precuneus with seeds in left posterior temporal regions. Furthermore, the interhemispheric connectivity between the auditory cortices was lower in the deaf than in the hearing group, underlining the importance of the transcallosal connection for early auditory processes. The present results provide evidence for a functional segregation of the neural pathways for language and speech.


Subject(s)
Brain/anatomy & histology , Language , Sign Language , Speech , Adult , Deafness/pathology , Diffusion Magnetic Resonance Imaging , Female , Humans , Male , Neural Pathways/anatomy & histology , Persons With Hearing Impairments , Speech Perception
4.
Sci Rep ; 8(1): 910, 2018 01 17.
Article in English | MEDLINE | ID: mdl-29343736

ABSTRACT

In the present study we explore the implications of acquiring language when relying mainly or exclusively on input from a cochlear implant (CI), a device providing auditory input to otherwise deaf individuals. We focus on the time course of semantic learning in children within their second year of implant use, a period that corresponds to the auditory age at which vocabulary emerges and expands dramatically in normal hearing children. Thirty-two young bilaterally implanted children saw pictures paired with either matching or non-matching auditory words. Their electroencephalographic responses were recorded after 12, 18, and 24 months of implant use, revealing a large dichotomy: some children failed to show semantic processing throughout their second year of CI use, in line with their poor language outcomes. The majority of children, though, demonstrated semantic processing in the form of the so-called N400 effect after only 12 months of implant use, even when their language experience relied exclusively on the implant. This is slightly earlier than observed for normal hearing children of the same auditory age, suggesting that more mature cognitive faculties at the beginning of language acquisition lead to faster semantic learning.


Subject(s)
Deafness/physiopathology , Child, Preschool , Cochlear Implantation/methods , Cochlear Implants , Electroencephalography/methods , Evoked Potentials/physiology , Female , Hearing Tests/methods , Humans , Infant , Language , Language Development , Male , Vocabulary
5.
Otol Neurotol ; 37(9): e360-8, 2016 10.
Article in English | MEDLINE | ID: mdl-27631660

ABSTRACT

OBJECTIVE: To measure electrophysiological correlates of the ability to discriminate basic musical features in pre- and postlingually deafened adult cochlear implant (CI) users. STUDY DESIGN: Electroencephalographic study comparing CI users with matched normal hearing controls. PATIENTS: Thirty-six hearing impaired adults who had used a cochlear implant for 4 to 15 months. Profound hearing impairment was acquired either before (N = 12) or after language acquisition (N = 17). Seven patients suffered from single-sided deafness. METHODS: Presentation of auditory stimuli consisting of four-tone standard musical patterns and deviant patterns varying in pitch, timbre, intensity, and rhythm at two degrees of deviance. Analysis of electrophysiological event-related mismatch responses. RESULTS: Cochlear implant users showed significant mismatch responses to most deviant features, although with significantly smaller mismatch negativity amplitudes than controls. Except for one parameter (pitch), there were no reliable differences between pre- and postlingually deafened CI users. CONCLUSION: Despite the highly reduced complexity of neural auditory stimulation by the cochlear implant device compared with physiological cochlear input, CI users exhibit cortical discriminatory responses to relatively subtle basic tonal alterations.
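Mismatch responses of this kind are conventionally measured on a difference wave, deviant ERP minus standard ERP, with the mismatch negativity taken as the most negative value in a post-deviance search window. A generic sketch under those conventions, with placeholder sampling rate and window (not the authors' analysis code):

```python
# Difference-wave analysis for an oddball paradigm.

def difference_wave(deviant_erp, standard_erp):
    """Point-by-point difference: deviant minus standard ERP."""
    return [d - s for d, s in zip(deviant_erp, standard_erp)]

def mmn_peak(deviant_erp, standard_erp, srate_hz, t_start_s, t_end_s):
    """Most negative value of the difference wave in a search window."""
    diff = difference_wave(deviant_erp, standard_erp)
    i0 = int(t_start_s * srate_hz)
    i1 = int(t_end_s * srate_hz)
    return min(diff[i0:i1])
```

A more negative peak indicates a stronger cortical discriminatory response to the deviant feature.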


Subject(s)
Auditory Perception , Cochlear Implants , Evoked Potentials/physiology , Hearing Loss/surgery , Music , Adult , Aged , Cochlear Implantation , Electroencephalography , Female , Hearing , Hearing Tests , Humans , Male , Middle Aged , Persons With Hearing Impairments
6.
Sci Rep ; 6: 32026, 2016 08 25.
Article in English | MEDLINE | ID: mdl-27558546

ABSTRACT

Direct stimulation of the auditory nerve via a cochlear implant (CI) enables profoundly hearing-impaired people to perceive sounds. Many CI users find language comprehension satisfactory, but music perception is generally considered difficult. However, music contains different dimensions which might be accessible in different ways. We aimed to highlight three main dimensions of music processing in CI users that rely on different processing mechanisms: (1) musical discrimination abilities, (2) access to meaning in music, and (3) subjective music appreciation. All three dimensions were investigated in two CI user groups (post- and prelingually deafened CI users, all implanted as adults) and a matched normal hearing control group. The meaning of music was studied using event-related potentials (with the N400 component as marker) during a music-word priming task, while music appreciation was assessed by questionnaire. The results reveal a double dissociation between the three dimensions of music processing. Despite impaired discrimination abilities in both CI user groups compared to the control group, appreciation was reduced only in postlingual CI users. While musical meaning processing was restorable in postlingual CI users, as shown by an N400 effect, the data of prelingual CI users lacked the N400 effect, indicating previously dysfunctional concept building.


Subject(s)
Cochlear Implants , Hearing Loss/therapy , Music , Adult , Aged , Auditory Perception/physiology , Case-Control Studies , Electroencephalography , Evoked Potentials , Female , Hearing Loss/etiology , Humans , Male , Middle Aged , Pitch Perception/physiology
7.
Sci Rep ; 6: 28259, 2016 06 20.
Article in English | MEDLINE | ID: mdl-27321666

ABSTRACT

In most everyday situations sensorimotor processes are quite complex, because situations often require carrying out several actions in a specific temporal order; i.e., different actions have to be cascaded. While it is known that changes to stimuli affect action cascading mechanisms, it is unknown whether action cascading changes when the sensory stimuli themselves are not manipulated, but the neural architecture processing these stimuli is altered. In the current study we address this question using prelingually deaf subjects as a model, applying a systems neurophysiological approach based on event-related potentials (ERPs) and source localization techniques. We show that prelingually deaf subjects display improvements in action cascading. However, this improvement is most likely due not to changes at the perceptual (P1-ERP) or attentional processing level (N1-ERP), but to changes at the response selection level (P3-ERP). The temporo-parietal junction (TPJ) seems to be important for these effects, because it comprises overlapping networks for the processing of sensory information and the selection of responses. Sensory deprivation thus affects cognitive processes downstream of sensory processing, and only these seem to be important for behavioral improvements in situations requiring complex sensorimotor processes and action cascading.


Subject(s)
Cognition , Deafness/physiopathology , Evoked Potentials , Sensory Deprivation , Adult , Female , Humans , Male , Middle Aged
8.
Front Neurosci ; 10: 68, 2016.
Article in English | MEDLINE | ID: mdl-27013937

ABSTRACT

Children with sensorineural hearing loss may (re)gain hearing with a cochlear implant, a device that transforms sounds into electric pulses and bypasses the dysfunctional inner ear by stimulating the auditory nerve directly with an electrode array. Many implanted children master the acquisition of spoken language successfully, yet we still have little knowledge of the actual input they receive with the implant and, specifically, which language-sensitive cues they hear. This would be important, however, both for understanding the flexibility of the auditory system when presented with stimuli after a (life-)long phase of deprivation and for planning therapeutic intervention. In rhythmic languages the general stress pattern conveys important information about word boundaries. Infant language acquisition relies on such cues and can be severely hampered when this information is missing, as seen in dyslexic children and children with specific language impairment. Here we ask whether children with a cochlear implant perceive differences in stress patterns during their language acquisition phase and, if they do, whether this ability is present directly following implant activation or whether, and how much, time the auditory system needs to adapt to the new sensory modality. We performed a longitudinal ERP study, testing at bimonthly intervals the stress pattern perception of 17 young hearing impaired children (age range: 9-50 months; mean: 22 months) during their first 6 months of implant use. An additional session before implantation served as a control baseline. During a session the children passively listened to an oddball paradigm featuring the disyllable "baba," stressed either on the first or the second syllable (trochaic vs. iambic stress pattern). A group of age-matched normal hearing children participated as controls.
Our results show that, within the first 6 months of implant use, the implanted children develop a negative mismatch response for iambic but not for trochaic deviants, the same result as in the normal hearing controls. Even congenitally deaf children show the same developmental pattern. We therefore conclude (a) that young implanted children have early access to stress pattern information and (b) that they develop ERP responses similar to those of normal hearing children.

9.
J Cogn Neurosci ; 27(12): 2427-41, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26351863

ABSTRACT

One main incentive for supplying hearing impaired children with a cochlear implant is the prospect of oral language acquisition. Only scarce knowledge exists, however, of what congenitally deaf children actually perceive when receiving their first auditory input, and specifically which speech-relevant features they are able to extract from the new modality. We therefore presented congenitally deaf infants and young children implanted before the age of 4 years with an oddball paradigm of long and short vowel variants of the syllable /ba/. We measured the EEG at regular intervals to study their discriminative ability, starting with the first activation of the implant and continuing up to 8 months later. We were thus able to time-track the emerging ability to differentiate one of the most basic linguistic features, vowel length, which bears semantic differentiation and helps in word segmentation. Results show that as early as 2 months after the first auditory input, though not directly after implant activation, these early implanted children differentiated between long and short syllables. Surprisingly, after only 4 months of hearing experience the ERPs had reached the same properties as those of the normal hearing control group, demonstrating the plasticity of the brain with respect to the new modality. We thus show that a simple but linguistically highly relevant feature such as vowel length reaches age-appropriate electrophysiological levels as early as 4 months after the first acoustic stimulation, providing an important basis for further language acquisition.


Subject(s)
Brain/physiopathology , Cochlear Implants , Deafness/physiopathology , Deafness/therapy , Language Development , Speech Perception/physiology , Acoustic Stimulation , Brain/growth & development , Child, Preschool , Cochlear Implantation , Electroencephalography , Evoked Potentials , Female , Humans , Infant , Language Tests , Male , Speech Acoustics , Time Factors
10.
Neuroimage ; 57(2): 624-33, 2011 Jul 15.
Article in English | MEDLINE | ID: mdl-21554964

ABSTRACT

Processing syntax is believed to be a higher cognitive function involving cortical regions outside sensory cortices. In particular, previous studies revealed that early syntactic processes at around 100-200 ms affect brain activations in anterior regions of the superior temporal gyrus (STG), while independent studies showed that pure auditory perceptual processing is related to sensory cortex activations. However, syntax-related modulations of sensory cortices were reported recently, thereby adding diverging findings to the previous studies. The goal of the present magnetoencephalography study was to localize the cortical regions underlying early syntactic processes and those underlying perceptual processes using a within-subject design. Sentences varying the factors syntax (correct vs. incorrect) and auditory space (standard vs. change of interaural time difference (ITD)) were auditorily presented. Both syntactic and auditory spatial anomalies led to very early activations (40-90 ms) in the STG. Around 135 ms after violation onset, differential effects were observed for syntax and auditory space, with syntactically incorrect sentences leading to activations in the anterior STG, whereas ITD changes elicited activations more posterior in the STG. Furthermore, our observations strongly indicate that the anterior and the posterior STG are activated simultaneously when a double violation is encountered. Thus, the present findings provide evidence of a dissociation of speech-related processes in the anterior STG and the processing of auditory spatial information in the posterior STG, compatible with the view of different processing streams in the temporal cortex.


Subject(s)
Brain Mapping , Comprehension/physiology , Speech Perception/physiology , Temporal Lobe/physiology , Acoustic Stimulation , Adult , Female , Humans , Magnetoencephalography , Male , Signal Processing, Computer-Assisted
11.
Neuron ; 59(5): 695-707, 2008 Sep 11.
Article in English | MEDLINE | ID: mdl-18786354

ABSTRACT

Numerous linguistic operations have been assigned to cortical brain areas, but the contributions of subcortical structures to human language processing are still being discussed. Using simultaneous EEG recordings directly from deep brain structures and the scalp, we show that the human thalamus systematically reacts to syntactic and semantic parameters of auditorily presented language in a temporally interleaved manner in coordination with cortical regions. In contrast, two key structures of the basal ganglia, the globus pallidus internus and the subthalamic nucleus, were not found to be engaged in these processes. We therefore propose that syntactic and semantic language analysis is primarily realized within cortico-thalamic networks, whereas a cohesive basal ganglia network is not involved in these essential operations of language analysis.


Subject(s)
Comprehension , Semantics , Thalamus/physiology , Verbal Behavior/physiology , Adult , Aged , Brain Diseases/pathology , Brain Diseases/therapy , Brain Mapping , Deep Brain Stimulation/methods , Electroencephalography , Evoked Potentials, Auditory/physiology , Female , Humans , Language Tests , Male , Middle Aged , Neural Pathways/physiology , Time Factors
12.
Neuroreport ; 19(1): 25-9, 2008 Jan 08.
Article in English | MEDLINE | ID: mdl-18281887

ABSTRACT

The current study investigated the role played by conflict monitoring in a lexical-decision task involving competing word representations, using event-related potentials. We extended the multiple read-out model (Grainger and Jacobs, 1996), a connectionist model of word recognition, to quantify conflict by means of Hopfield Energy, which is defined as the sum of the products of all orthographic word node pair activations within the artificial mental lexicon of this model. With increasing conflict levels in nonwords, a late negativity increased in amplitude (400-600 ms) accompanied by activation of the anterior cingulate cortex and the medial frontal gyrus. The simulated conflict predicted the amplitudes associated with this mediofrontal conflict-monitoring network on an item level, and is consistent with the conflict-monitoring theory.
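The conflict measure defined in this abstract, the sum of the products of all pairwise word-node activations, reduces to a few lines of code. A sketch following that definition; the activation values used in any real simulation would come from the multiple read-out model, which is not reproduced here:

```python
from itertools import combinations

def hopfield_energy(activations):
    """Conflict measure per the abstract's definition: the sum of the
    products of all pairs of orthographic word-node activations.
    Higher values indicate more competing lexical representations."""
    return sum(a * b for a, b in combinations(activations, 2))
```

For example, a nonword that partially activates three word nodes at levels 1, 2, and 3 yields 1*2 + 1*3 + 2*3 = 11, whereas a single dominant activation with no competitors yields 0.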


Subject(s)
Brain Mapping , Conflict, Psychological , Frontal Lobe/physiology , Mental Processes/physiology , Pattern Recognition, Visual/physiology , Adult , Electroencephalography/methods , Evoked Potentials , Female , Functional Laterality , Humans , Male , Models, Neurological , Photic Stimulation
13.
J Child Lang ; 34(3): 601-22, 2007 Aug.
Article in English | MEDLINE | ID: mdl-17822141

ABSTRACT

This study examines the mental processes involved in children's on-line recognition of inflected word forms using event-related potentials (ERPs). Sixty children in three age groups (20 six- to seven-year-olds, 20 eight- to nine-year-olds, 20 eleven- to twelve-year-olds) and 23 adults (tested in a previous study) listened to sentences containing correct or incorrect German noun plural forms. In the two older child groups, as well as in the adult group, over-regularized plural forms elicited brain responses that are characteristic of combinatorial (grammatical) violations. We also found that ERP components associated with language processing change from child to adult with respect to their onsets and their topography. The ERP violation effects obtained for over-regularizations suggest that children (aged eight years and above) and adults employ morphological computation for processing purposes, consistent with dual-mechanism models of inflection. The observed differences between children's and adults' ERP responses are argued to result from children's smaller lexicons and from slower and less efficient processing.


Subject(s)
Brain/physiology , Evoked Potentials/physiology , Generalization, Psychological , Recognition, Psychology/physiology , Adult , Child , Female , Humans , Male
14.
Biol Psychol ; 74(3): 337-46, 2007 Mar.
Article in English | MEDLINE | ID: mdl-17011692

ABSTRACT

Sentence interpretation was examined with event-related brain potentials (ERPs). The ERPs were recorded while participants listened to French sentences containing a subject-modifying relative clause (SRC). These were either correct, semantically incorrect, syntactically incorrect, or doubly (syntactically and semantically) incorrect. The semantic anomaly realized as a selectional restriction violation was associated with an N400. The syntactic anomaly realized as a phrase structure violation in the SRC elicited a frontal negativity between 150 and 600 ms. This negativity was more pronounced in the left than in the right hemisphere in the early time window (150-300 ms). In a later time window (300-600 ms), it was more broadly distributed including anterior and posterior regions, but with a maximum over the anterior recording sites. Finally, a centro-parietal late positivity (P600) was found between 600 and 1000 ms. While syntactic and semantic information in the double violation condition did not interact between 150 and 300 ms, they did interact between 300 and 600 ms. This finding supports serial models of sentence processing that postulate an initial autonomous stage of phrase structure building and a late stage of interaction.


Subject(s)
Arousal/physiology , Cerebral Cortex/physiology , Comprehension/physiology , Evoked Potentials/physiology , Language , Semantics , Speech Perception/physiology , Adult , Brain Mapping , Conflict, Psychological , Contingent Negative Variation , Dominance, Cerebral/physiology , Electroencephalography , Female , Frontal Lobe/physiology , Humans , Male , Mental Recall/physiology , Psycholinguistics , Signal Processing, Computer-Assisted
15.
J Cogn Neurosci ; 18(12): 2030-48, 2006 Dec.
Article in English | MEDLINE | ID: mdl-17129189

ABSTRACT

The present study investigated the role of proficiency in late second-language (L2) processing using comparable stimuli in German and Italian. Both sets of stimuli consisted of simple active sentences including a word category violation, a morphosyntactic agreement violation, or a combination of the two. Four experiments were conducted to study high- and low-proficiency L2 learners of German as well as high- and low-proficiency L2 learners of Italian. High-proficiency L2 learners in both languages showed the same event-related potential (ERP) components as native speakers for all syntactic violations. For the word category violation, they displayed an early anterior negativity (ELAN), an additional negativity reflecting reference-related processes, and a late P600 evidencing processes of reanalysis. For the processing of the morphosyntactic error, an anterior negativity (LAN) and a P600 were observed, whereas for the combined violation, the same ERP components were found as in the pure category violation. In high-proficiency L2 learners, the timing of the processing steps was equivalent to that of native speakers, although some amplitude differences were present. Low-proficiency L2 learners, however, showed qualitative differences in the agreement violation characterized by the absence of the LAN and quantitative differences reflected in a delayed P600 in every violation condition. These findings emphasize that with a high proficiency, late L2 learners can indeed show native-like neural responses with the timing approximating that of native speakers. This challenges the idea that there are fundamental differences in language processing in the brain between natives and late L2 learners.


Subject(s)
Electroencephalography , Language , Adult , Algorithms , Electrooculography , Evoked Potentials/physiology , Female , Germany , Humans , Italy , Male , Psycholinguistics , Psychomotor Performance/physiology
16.
Neuroreport ; 17(14): 1511-4, 2006 Oct 02.
Article in English | MEDLINE | ID: mdl-16957599

ABSTRACT

The current study used event-related brain potentials to investigate lexical-semantic processing of words in sentences spoken by children with specific language impairment and children with normal language development. Children heard correct sentences and sentences with a violation of the selectional restriction of the verb. Control children showed an N400 effect followed by a late positivity for the incorrect sentences. In contrast, children with specific language impairment showed no N400 effect but did show a late, broadly distributed positivity. This absence of the N400 effect is due to a relatively large negativity for correct sentences, suggesting weaker lexical-semantic representations of the verbs and their selectional restrictions in children with specific language impairment.


Subject(s)
Evoked Potentials, Auditory/physiology , Language Disorders/physiopathology , Mental Processes/physiology , Semantics , Acoustic Stimulation/methods , Auditory Perception/physiology , Brain Mapping , Child , Electroencephalography/methods , Female , Humans , Male , Reaction Time/physiology
17.
Brain Res ; 1096(1): 163-72, 2006 Jun 22.
Article in English | MEDLINE | ID: mdl-16769041

ABSTRACT

We studied auditory sentence comprehension using magnetoencephalography while subjects listened to sentences whose correctness they had to judge subsequently. The localization and time course of brain electrical activity during processing of correct and semantically incorrect sentences were estimated by computing brain surface current density within a cortical layer for both conditions. Finally, a region of interest (ROI) analysis was conducted to determine the time course at specific locations. A magnetic N400 was present in six spatially distinct ROIs. Semantic anomalies caused an exclusive involvement of the ventral portion of the left inferior frontal gyrus (BA 47) and the left pars triangularis (BA 45). The anterior parts of the superior (BA 22) and inferior (BA 20/21) temporal gyri were activated bilaterally by both conditions. The activation for the correct condition, however, peaked earlier in both left temporal regions (by approximately 32 ms). In general, activation due to semantic violations was more pronounced, started later, and lasted longer than for correct sentences. The findings reveal a clear left-hemispheric dominance during language processing, indicated first by the mere number of activated regions (four in the left vs. two in the right hemisphere) and second by the observed specificity of the left inferior frontal ROIs to semantic violations. The temporal advantage observed for the correct condition in the left temporal regions supports the notion that the established context eases the processing of the final word. Semantically incorrect words that do not fit the context result in longer integration times.


Subject(s)
Auditory Perception/physiology , Language , Magnetoencephalography , Nerve Net/physiology , Adolescent , Adult , Data Interpretation, Statistical , Female , Functional Laterality/physiology , Humans , Magnetic Resonance Imaging , Male , Psycholinguistics , Temporal Lobe/physiology
18.
J Exp Psychol Learn Mem Cogn ; 32(2): 373-86, 2006 Mar.
Article in English | MEDLINE | ID: mdl-16569153

ABSTRACT

There is a long-standing debate in the area of speech production on the question of whether only words selected for articulation are phonologically activated (as maintained by serial-discrete models) or whether this is also true for their semantic competitors (as maintained by forward-cascading and interactive models). Past research has addressed this issue by testing whether retrieval of a target word (e.g., cat) affects--or is affected by--the processing of a word that is phonologically related to a semantic category coordinate of the target (e.g., doll, related to dog) and has consistently failed to obtain such mediated effects in adult speakers. The authors present a series of experiments demonstrating that mediated effects are present in children (around age 7) and diminish with increasing age. This observation provides further evidence for cascaded models of lexical retrieval.


Subject(s)
Mental Processes/physiology , Phonetics , Semantics , Speech Perception/physiology , Speech/physiology , Adult , Age Factors , Child , Female , Humans , Male , Reaction Time/physiology , Speech Production Measurement/methods , Time Factors
19.
Brain Res ; 1077(1): 144-52, 2006 Mar 10.
Article in English | MEDLINE | ID: mdl-16487499

ABSTRACT

This study uses event-related brain potentials (ERPs) to investigate the processing of morphologically regular and irregular words during auditory comprehension. ERPs were recorded while 23 German-speaking subjects listened to correctly and incorrectly inflected noun plural forms presented in sentential contexts. ERP responses to violations of morphological structure differed from those to lexical (word-level) violations: the former elicited LAN/P600 effects, the latter an enhanced N400 component relative to the correctly inflected plural forms. This difference replicates previous results from visual ERP studies and supports the distinction between combinatorial and memory-based processing of morphologically complex words. In addition, LAN/P600 effects were found to be more prominent in the auditory domain than in a previous visual study using similar materials.


Subject(s)
Comprehension/physiology , Evoked Potentials, Auditory/physiology , Speech Perception/physiology , Adolescent , Adult , Analysis of Variance , Female , Humans , Male , Psycholinguistics , Statistics, Nonparametric , Vocabulary
20.
Brain Res ; 1073-1074: 431-9, 2006 Feb 16.
Article in English | MEDLINE | ID: mdl-16464440

ABSTRACT

Recent neurocognitive studies of visual word recognition provide information about the neuronal networks correlated with processes involved in lexical access and their time course (e.g., Holcomb, Grainger, & O'Rourke (2002), An electrophysiological study of the effects of orthographic neighborhood size on printed word perception, J. Cogn. Neurosci. 14, 938-950; Binder et al. (2003), Neural correlates of lexical access during visual word recognition, J. Cogn. Neurosci. 15, 372-393). These studies relate the orthographic neighborhood density of letter strings to the amount of global lexical activity in the brain, generated by a hypothetical mental lexicon as speculated in an early paper by Jacobs and Carr (1995, Mind mappers and cognitive modelers: toward cross-fertilization, Behav. Brain Sci. 18, 362-363). The present study uses model-generated stimuli theoretically eliciting graded global lexical activity and relates this activity to activation of lexical processing networks using event-related potentials (ERPs). The results from a lexical decision task provide evidence for an effect of lexicality around 350 ms post-stimulus and also a graded effect of global lexical activity for nonwords around 500 ms post-stimulus. The data are interpreted as reflecting two different decision processes: an identification process based on local lexical activity underlying the 'yes' response to words, and a temporal deadline process based on global lexical activity underlying the 'no' response to nonwords.


Subject(s)
Evoked Potentials/physiology , Models, Psychological , Pattern Recognition, Visual/physiology , Vocabulary , Adult , Analysis of Variance , Electroencephalography/methods , Female , Humans , Male , Photic Stimulation/methods , Reaction Time/physiology