Results 1 - 19 of 19
2.
PLoS Biol ; 21(3): e3002046, 2023 03.
Article in English | MEDLINE | ID: mdl-36947552

ABSTRACT

Understanding speech requires mapping fleeting and often ambiguous soundwaves to meaning. While humans are known to exploit their capacity to contextualize to facilitate this process, how internal knowledge is deployed online remains an open question. Here, we present a model that extracts multiple levels of information from continuous speech online. The model applies linguistic and nonlinguistic knowledge to speech processing, by periodically generating top-down predictions and incorporating bottom-up incoming evidence in a nested temporal hierarchy. We show that a nonlinguistic context level provides semantic predictions informed by sensory inputs, which are crucial for disambiguating among multiple meanings of the same word. The explicit knowledge hierarchy of the model enables a more holistic account of the neurophysiological responses to speech compared to using lexical predictions generated by a neural network language model (GPT-2). We also show that hierarchical predictions reduce peripheral processing via minimizing uncertainty and prediction error. With this proof-of-concept model, we demonstrate that the deployment of hierarchical predictions is a possible strategy for the brain to dynamically utilize structured knowledge and make sense of the speech input.
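The disambiguation mechanism described above can be illustrated with a minimal toy sketch (not the authors' model): a nonlinguistic context level supplies a prior over word meanings, which is combined with bottom-up lexical evidence by Bayes' rule. All names, priors, and likelihood values here are hypothetical.

```python
# Toy illustration (not the paper's model): context-conditioned
# disambiguation of a homophone via Bayes' rule,
#   p(meaning | word, context) ∝ p(word | meaning) * p(meaning | context).
# The contexts, meanings, and probabilities below are invented for illustration.

PRIORS = {
    "finance": {"bank_institution": 0.9, "bank_riverside": 0.1},
    "nature":  {"bank_institution": 0.2, "bank_riverside": 0.8},
}
# Both meanings produce the same surface form "bank", so the
# bottom-up likelihood alone cannot disambiguate them.
LIKELIHOOD = {"bank_institution": 1.0, "bank_riverside": 1.0}

def disambiguate(likelihood, context):
    """Posterior over meanings given the word evidence and a context prior."""
    unnorm = {m: likelihood[m] * p for m, p in PRIORS[context].items()}
    total = sum(unnorm.values())
    return {m: v / total for m, v in unnorm.items()}
```

With identical bottom-up evidence, only the context prior shifts the posterior, which is the sense in which top-down predictions resolve lexical ambiguity.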


Subject(s)
Comprehension , Speech Perception , Humans , Comprehension/physiology , Speech , Speech Perception/physiology , Brain/physiology , Language
3.
J Neurosci ; 42(31): 6108-6120, 2022 08 03.
Article in English | MEDLINE | ID: mdl-35760528

ABSTRACT

Speech perception in noisy environments is enhanced by seeing facial movements of communication partners. However, the neural mechanisms by which audio and visual speech are combined are not fully understood. We explore MEG phase-locking to auditory and visual signals in MEG recordings from 14 human participants (6 females, 8 males) who reported words from single spoken sentences. We manipulated the acoustic clarity and visual speech signals such that critical speech information is present in auditory, visual, or both modalities. MEG coherence analysis revealed that both auditory and visual speech envelopes (auditory amplitude modulations and lip aperture changes) were phase-locked to 2-6 Hz brain responses in auditory and visual cortex, consistent with entrainment to syllable-rate components. Partial coherence analysis was used to separate neural responses to correlated audio-visual signals and showed non-zero phase-locking to auditory envelope in occipital cortex during audio-visual (AV) speech. Furthermore, phase-locking to auditory signals in visual cortex was enhanced for AV speech compared with audio-only speech that was matched for intelligibility. Conversely, auditory regions of the superior temporal gyrus did not show above-chance partial coherence with visual speech signals during AV conditions but did show partial coherence in visual-only conditions. Hence, visual speech enabled stronger phase-locking to auditory signals in visual areas, whereas phase-locking of visual speech in auditory regions only occurred during silent lip-reading. Differences in these cross-modal interactions between auditory and visual speech signals are interpreted in line with cross-modal predictive mechanisms during speech perception.

SIGNIFICANCE STATEMENT: Verbal communication in noisy environments is challenging, especially for hearing-impaired individuals.
Seeing facial movements of communication partners improves speech perception when auditory signals are degraded or absent. The neural mechanisms supporting lip-reading or audio-visual benefit are not fully understood. Using MEG recordings and partial coherence analysis, we show that speech information is used differently in brain regions that respond to auditory and visual speech. While visual areas use visual speech to improve phase-locking to auditory speech signals, auditory areas do not show phase-locking to visual speech unless auditory speech is absent and visual speech is used to substitute for missing auditory signals. These findings highlight brain processes that combine visual and auditory signals to support speech understanding.
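The partial coherence measure used in this study can be sketched as follows: ordinary coherence between a brain signal and one speech envelope is re-estimated after the linear contribution of the other (correlated) envelope is regressed out in the spectral domain. This is a minimal SciPy-based sketch under standard Welch spectral estimation, not the authors' analysis pipeline; the function name and parameters are illustrative.

```python
import numpy as np
from scipy.signal import csd, welch

def partial_coherence(x, y, z, fs, nperseg=256):
    """Coherence between 1-D signals x and y after removing the
    linear (spectral) contribution of a third signal z."""
    # Cross- and auto-spectral densities via Welch's method.
    f, Sxy = csd(x, y, fs=fs, nperseg=nperseg)
    _, Sxz = csd(x, z, fs=fs, nperseg=nperseg)
    _, Szy = csd(z, y, fs=fs, nperseg=nperseg)
    _, Sxx = welch(x, fs=fs, nperseg=nperseg)
    _, Syy = welch(y, fs=fs, nperseg=nperseg)
    _, Szz = welch(z, fs=fs, nperseg=nperseg)
    # Partialise the cross-spectrum on z, then normalise as for
    # ordinary magnitude-squared coherence.
    num = np.abs(Sxy - Sxz * Szy / Szz) ** 2
    den = (Sxx - np.abs(Sxz) ** 2 / Szz) * (Syy - np.abs(Szy) ** 2 / Szz)
    return f, num / den
```

If x and y are coherent only because both track z (e.g. a neural response and the lip signal both driven by the acoustic envelope), the partial coherence given z collapses toward zero while ordinary coherence stays high.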


Subject(s)
Auditory Cortex , Speech Perception , Visual Cortex , Acoustic Stimulation , Auditory Cortex/physiology , Auditory Perception , Female , Humans , Lipreading , Male , Speech/physiology , Speech Perception/physiology , Visual Cortex/physiology , Visual Perception/physiology
4.
Neurobiol Lang (Camb) ; 3(4): 665-698, 2022.
Article in English | MEDLINE | ID: mdl-36742011

ABSTRACT

Listening to spoken language engages domain-general multiple demand (MD; frontoparietal) regions of the human brain, in addition to domain-selective (frontotemporal) language regions, particularly when comprehension is challenging. However, there is limited evidence that the MD network makes a functional contribution to core aspects of understanding language. In a behavioural study of volunteers (n = 19) with chronic brain lesions, but without aphasia, we assessed the causal role of these networks in perceiving, comprehending, and adapting to spoken sentences made more challenging by acoustic-degradation or lexico-semantic ambiguity. We measured perception of and adaptation to acoustically degraded (noise-vocoded) sentences with a word report task before and after training. Participants with greater damage to MD but not language regions required more vocoder channels to achieve 50% word report, indicating impaired perception. Perception improved following training, reflecting adaptation to acoustic degradation, but adaptation was unrelated to lesion location or extent. Comprehension of spoken sentences with semantically ambiguous words was measured with a sentence coherence judgement task. Accuracy was high and unaffected by lesion location or extent. Adaptation to semantic ambiguity was measured in a subsequent word association task, which showed that availability of lower-frequency meanings of ambiguous words increased following their comprehension (word-meaning priming). Word-meaning priming was reduced for participants with greater damage to language but not MD regions. Language and MD networks make dissociable contributions to challenging speech comprehension: Using recent experience to update word meaning preferences depends on language-selective regions, whereas the domain-general MD network plays a causal role in reporting words from degraded speech.

5.
Cortex ; 126: 107-118, 2020 05.
Article in English | MEDLINE | ID: mdl-32065956

ABSTRACT

In the healthy human brain, the processing of language is strongly lateralised, usually to the left hemisphere, while the processing of complex non-linguistic sounds recruits brain regions bilaterally. Here we asked whether the anterior temporal lobes, strongly implicated in semantic processing, are critical to this special treatment of spoken words. Nine patients with semantic dementia (SD) and fourteen age-matched controls underwent magnetoencephalography and structural MRI. Voxel based morphometry demonstrated the stereotypical pattern of SD: severe grey matter loss restricted to the anterior temporal lobes, with the left side more affected. During magnetoencephalography, participants listened to word sets in which identity and meaning were ambiguous until word completion, for example PLAYED versus PLATE. Whereas left-hemispheric responses were similar across groups, patients demonstrated increased right hemisphere activity 174-294 msec after stimulus disambiguation. Source reconstructions confirmed recruitment of right-sided analogues of language regions in SD: atrophy of anterior temporal lobes was associated with increased activity in right temporal pole, middle temporal gyrus, inferior frontal gyrus and supramarginal gyrus. Overall, the results indicate that anterior temporal lobes are necessary for normal and efficient lateralised processing of word identity by the language network.


Subject(s)
Functional Laterality , Temporal Lobe , Brain Mapping , Humans , Magnetic Resonance Imaging , Magnetoencephalography , Semantics , Temporal Lobe/diagnostic imaging
6.
J Cogn Neurosci ; 32(3): 403-425, 2020 03.
Article in English | MEDLINE | ID: mdl-31682564

ABSTRACT

Semantically ambiguous words challenge speech comprehension, particularly when listeners must select a less frequent (subordinate) meaning at disambiguation. Using combined magnetoencephalography (MEG) and EEG, we measured neural responses associated with distinct cognitive operations during semantic ambiguity resolution in spoken sentences: (i) initial activation and selection of meanings in response to an ambiguous word and (ii) sentence reinterpretation in response to subsequent disambiguation to a subordinate meaning. Ambiguous words elicited an increased neural response approximately 400-800 msec after their acoustic offset compared with unambiguous control words in left frontotemporal MEG sensors, corresponding to sources in bilateral frontotemporal brain regions. This response may reflect increased demands on processes by which multiple alternative meanings are activated and maintained until later selection. Disambiguating words heard after an ambiguous word were associated with marginally increased neural activity over bilateral temporal MEG sensors and a central cluster of EEG electrodes, which localized to similar bilateral frontal and left temporal regions. This later neural response may reflect effortful semantic integration or elicitation of prediction errors that guide reinterpretation of previously selected word meanings. Across participants, the amplitude of the ambiguity response showed a marginal positive correlation with comprehension scores, suggesting that sentence comprehension benefits from additional processing around the time of an ambiguous word. Better comprehenders may have increased availability of subordinate meanings, perhaps due to higher quality lexical representations and reflected in a positive correlation between vocabulary size and comprehension success.


Subject(s)
Brain/physiology , Comprehension/physiology , Semantics , Speech Perception/physiology , Adult , Electroencephalography , Female , Humans , Magnetoencephalography , Male , Vocabulary , Young Adult
7.
Sci Rep ; 6: 26558, 2016 05 24.
Article in English | MEDLINE | ID: mdl-27217080

ABSTRACT

Rapid and efficient processing of external information by the brain is vital to survival in a highly dynamic environment. The key channel humans use to exchange information is language, but the neural underpinnings of its processing are still not fully understood. We investigated the spatio-temporal dynamics of neural access to word representations in the brain by scrutinising the brain's activity elicited in response to psycholinguistically, visually and phonologically matched groups of familiar words and meaningless pseudowords. Stimuli were briefly presented on the visual-field periphery to experimental participants whose attention was occupied with a non-linguistic visual feature-detection task. The neural activation elicited by these unattended orthographic stimuli was recorded using multi-channel whole-head magnetoencephalography, and the timecourse of lexically-specific neuromagnetic responses was assessed in sensor space as well as at the level of cortical sources, estimated using individual MR-based distributed source reconstruction. Our results demonstrate a neocortical signature of automatic near-instant access to word representations in the brain: activity in the perisylvian language network characterised by specific activation enhancement for familiar words, starting as early as ~70 ms after the onset of unattended word stimuli and underpinned by temporal and inferior-frontal cortices.


Subject(s)
Brain Mapping/methods , Neocortex/physiology , Adult , Female , Humans , Language , Magnetoencephalography , Male , Spatio-Temporal Analysis , Young Adult
8.
Neuropsychologia ; 93(Pt B): 413-424, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27063061

ABSTRACT

Previous studies have demonstrated that efficient neurorehabilitation in post-stroke aphasia leads to clinical language improvements and promotes neuroplasticity. Brain areas frequently implicated in functional restitution of language after stroke comprise perilesional sites in the left hemisphere and homotopic regions in the right hemisphere. However, the neuronal mechanisms underlying therapy-induced language restitution are still largely unclear. In this study, magnetoencephalography was used to investigate neurophysiological changes in a group of chronic aphasia patients who underwent intensive language action therapy (ILAT), also known as constraint-induced aphasia therapy (CIAT). Before and immediately after ILAT, patients' language and communication skills were assessed and their brain responses were recorded during a lexical magnetic mismatch negativity (MMNm) paradigm, presenting familiar spoken words and meaningless pseudowords. After the two-week therapy interval, patients showed significant clinical improvements of language and communication skills. Spatio-temporal dynamics of neuronal changes revealed a significant increase in word-specific neuro-magnetic MMNm activation around 200 ms after stimulus identification points. This enhanced brain response occurred specifically for words and was most pronounced over perilesional areas in the left hemisphere. Therapy-related changes in neuromagnetic activation for words in both hemispheres significantly correlated with performance on a clinical language test. The findings indicate that functional recovery of language in chronic post-stroke aphasia is associated with neuroplastic changes in both cerebral hemispheres, with stronger left-hemispheric contribution during automatic stages of language processing.


Subject(s)
Aphasia/physiopathology , Aphasia/therapy , Brain/physiopathology , Language Therapy , Language , Stroke/physiopathology , Adult , Aged , Aphasia/etiology , Brain Mapping , Chronic Disease , Female , Functional Laterality/physiology , Humans , Magnetoencephalography , Male , Middle Aged , Neuronal Plasticity/physiology , Recovery of Function/physiology , Speech/physiology , Speech Perception/physiology , Stroke/complications , Stroke Rehabilitation , Treatment Outcome
9.
Neuropsychologia ; 68: 126-38, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25576909

ABSTRACT

Theoretical linguistic accounts of lexical ambiguity distinguish between homonymy, where words that share a lexical form have unrelated meanings, and polysemy, where the meanings are related. The present study explored the psychological reality of this theoretical assumption by asking whether there is evidence that homonyms and polysemes are represented and processed differently in the brain. We investigated the time-course of meaning activation of different types of ambiguous words using EEG. Homonyms and polysemes were each further subdivided into two: unbalanced homonyms (e.g., "coach") and balanced homonyms (e.g., "match"); metaphorical polysemes (e.g., "mouth") and metonymic polysemes (e.g., "rabbit"). These four types of ambiguous words were presented as primes in a visual single-word priming delayed lexical decision task employing a long ISI (750 ms). Targets were related to one of the meanings of the primes, or were unrelated. ERPs formed relative to the target onset indicated that the theoretical distinction between homonymy and polysemy was reflected in the N400 brain response. For targets following homonymous primes (both unbalanced and balanced), no effects survived at this long ISI indicating that both meanings of the prime had already decayed. On the other hand, for polysemous primes (both metaphorical and metonymic), activation was observed for both dominant and subordinate senses. The observed processing differences between homonymy and polysemy provide evidence in support of differential neuro-cognitive representations for the two types of ambiguity. We argue that the polysemous senses act collaboratively to strengthen the representation, facilitating maintenance, while the competitive nature of homonymous meanings leads to decay.


Subject(s)
Cerebral Cortex/physiology , Electroencephalography/methods , Evoked Potentials/physiology , Language , Adolescent , Adult , Female , Humans , Male , Repetition Priming , Semantics , Time Factors , Young Adult
10.
Brain Topogr ; 28(2): 279-91, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25403745

ABSTRACT

Effects of intensive language action therapy (ILAT) on automatic language processing were assessed using magnetoencephalography (MEG). Auditory magnetic mismatch negativity (MMNm) responses to words and pseudowords were recorded in twelve patients with chronic aphasia before and immediately after two weeks of ILAT. Following therapy, patients showed significant clinical improvements of auditory comprehension as measured by the Token Test and in word retrieval and naming as measured by the Boston Naming Test. Neuromagnetic responses dissociated between meaningful words and meaningless word-like stimuli ultra-rapidly, approximately 50 ms after acoustic information first allowed for stimulus identification. Over treatment, there was a significant increase in the left-lateralisation of this early word-elicited activation, observed in perilesional fronto-temporal regions. No comparable change was seen for pseudowords. The results may reflect successful, therapy-induced, language restitution in the left hemisphere.


Subject(s)
Aphasia/physiopathology , Aphasia/therapy , Brain/physiopathology , Language Therapy/methods , Speech Perception/physiology , Acoustic Stimulation , Adult , Aged , Aphasia/psychology , Chronic Disease , Female , Humans , Language Tests , Magnetoencephalography , Male , Middle Aged
11.
PLoS One ; 8(11): e76600, 2013.
Article in English | MEDLINE | ID: mdl-24223704

ABSTRACT

The processing of notes and chords which are harmonically incongruous with their context has been shown to elicit two distinct late ERP effects. These effects strongly resemble two effects associated with the processing of linguistic incongruities: a P600, resembling a typical response to syntactic incongruities in language, and an N500, evocative of the N400, which is typically elicited in response to semantic incongruities in language. Despite the robustness of these two patterns in the musical incongruity literature, no consensus has yet been reached as to the reasons for the existence of two distinct responses to harmonic incongruities. This study was the first to use behavioural and ERP data to test two possible explanations for the existence of these two patterns: the musicianship of listeners, and the resolved or unresolved nature of the harmonic incongruities. Results showed that harmonically incongruous notes and chords elicited a late positivity similar to the P600 when they were embedded within sequences which started and ended in the same key (harmonically resolved). The notes and chords which indicated that there would be no return to the original key (leaving the piece harmonically unresolved) were associated with a further P600 in musicians, but with a negativity resembling the N500 in non-musicians. We suggest that the late positivity reflects the conscious perception of a specific element as being incongruous with its context and the efforts of musicians to integrate the harmonic incongruity into its local context as a result of their analytic listening style, while the late negativity reflects the detection of the absence of resolution in non-musicians as a result of their holistic listening style.


Subject(s)
Auditory Perception , Evoked Potentials, Auditory , Acoustic Stimulation , Electroencephalography , Female , Humans , Male , Music , Parietal Lobe/physiology , Semantics , Young Adult
12.
Brain Res ; 1538: 135-50, 2013 Nov 13.
Article in English | MEDLINE | ID: mdl-24064384

ABSTRACT

This study examined the extent to which concreteness influences the acquisition and subsequent processing of novel (low frequency) concepts. Participants were trained on 70 rare English words (35 concrete, 35 abstract) paired with definitions. ERPs were then recorded while participants performed a semantic categorisation (concrete vs. abstract) and a lexical decision task on single-meaning, multi-meaning and the newly acquired words. During training there was a significant effect of concreteness, in that participants were more successful at acquiring concrete concepts. In both the semantic categorisation and the lexical decision task, concreteness effects were evident in the behavioural and in the ERP data for all word types, with concrete words eliciting more negative waveforms than abstract words in the N400 time window. Behaviourally, participants experienced greater difficulty in judging the concreteness of multi-meaning words, yet concreteness effects in the N400 were equally strong for all three word types across both tasks. These findings indicate that concreteness represents a fundamental distinction in the way that items are represented in memory, which is independent of the participant's perceived judgement. They further demonstrate that novel concepts can be acquired rapidly after minimal training, and that the neurophysiological correlates associated with processing novel words are modulated by the specific nature of the conceptual characteristics assigned to the word.


Subject(s)
Brain/physiology , Concept Formation/physiology , Learning/physiology , Semantics , Adolescent , Adult , Evoked Potentials , Female , Humans , Male , Photic Stimulation , Reaction Time , Young Adult
13.
Brain Lang ; 126(2): 217-29, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23800711

ABSTRACT

Are compound words represented as unitary lexical units, or as individual constituents that are processed combinatorially? We investigated the neuro-cognitive processing of compounds using EEG and a passive-listening oddball design in which lexical access and combinatorial processing elicit dissociating Mismatch Negativity (MMN) brain-response patterns. MMN amplitude varied with compound frequency and semantic transparency (the clarity of the relationship between compound and constituent meanings). Opaque compounds elicited an enhanced 'lexical' MMN, reflecting stronger lexical representations, to high- vs. low-frequency compounds. Transparent compounds showed no frequency effect, nor did they differ from pseudo-compounds, reflecting the combination of a reduced 'syntactic' MMN indexing combinatorial links, and an enhanced 'lexical' MMN for real-word compounds compared to pseudo-compounds. We argue that transparent compounds are processed combinatorially alongside parallel lexical access of the whole-form representation, but whole-form access is the dominant mechanism for opaque compounds, particularly those of high frequency. Results support a flexible dual-route account of compound processing.


Subject(s)
Brain/physiology , Evoked Potentials/physiology , Language , Speech Perception/physiology , Adult , Electroencephalography , Female , Humans , Male , Semantics , Signal Processing, Computer-Assisted , Young Adult
14.
Neuroimage ; 71: 187-95, 2013 May 01.
Article in English | MEDLINE | ID: mdl-23298745

ABSTRACT

A controversial issue in neuro- and psycholinguistics is whether regular past-tense forms of verbs are stored lexically or generated productively by the application of abstract combinatorial schemas, for example affixation rules. The success or failure of models in accounting for this particular issue can be used to draw more general conclusions about cognition and the degree to which abstract, symbolic representations and rules are psychologically and neurobiologically real. This debate can potentially be resolved using a neurophysiological paradigm, in which alternative predictions of the brain response patterns for lexical and syntactic processing are put to the test. We used magnetoencephalography (MEG) to record neural responses to spoken monomorphemic words ('hide'), pseudowords ('smide'), regular past-tense forms ('cried') and ungrammatical (overregularised) past-tense forms ('flied') in a passive listening oddball paradigm, in which lexically and syntactically modulated stimuli are known to elicit distinct patterns of the mismatch negativity (MMN) brain response. We observed an enhanced ('lexical') MMN to monomorphemic words relative to pseudowords, but a reversed ('syntactic') MMN to ungrammatically inflected past tenses relative to grammatical forms. This dissociation between responses to monomorphemic and bimorphemic stimuli indicates that regular past tenses are processed more similarly to syntactic sequences than to lexically stored monomorphemic words, suggesting that regular past tenses are generated productively by the application of a combinatorial scheme to their separately represented stems and affixes. We suggest discrete combinatorial neuronal assemblies, which bind classes of sequentially occurring lexical elements into morphologically complex units, as the neurobiological basis of regular past tense inflection.


Subject(s)
Brain Mapping , Brain/physiology , Cognition/physiology , Linguistics , Adolescent , Adult , Female , Humans , Magnetoencephalography , Male , Young Adult
15.
Nat Commun ; 3: 711, 2012 Feb 28.
Article in English | MEDLINE | ID: mdl-22426232

ABSTRACT

Rapid information processing in the human brain is vital to survival in a highly dynamic environment. The key tool humans use to exchange information is spoken language, but the exact speed of the neuronal mechanisms underpinning speech comprehension is still unknown. Here we investigate the time course of neuro-lexical processing by analyzing neuromagnetic brain activity elicited in response to psycholinguistically and acoustically matched groups of words and pseudowords. We show an ultra-early dissociation in cortical activation elicited by these stimulus types, emerging ∼50 ms after acoustic information required for word identification first becomes available. This dissociation is the earliest brain signature of lexical processing of words so far reported, and may help explain the evolutionary advantage of human spoken language.


Subject(s)
Brain Waves/physiology , Cerebral Cortex/physiology , Mental Processes/physiology , Reaction Time , Speech Perception , Acoustic Stimulation , Adolescent , Adult , Brain Mapping , Electroencephalography , Female , Humans , Language , Magnetoencephalography , Male , Speech , Young Adult
16.
Neuropsychologia ; 48(14): 3982-92, 2010 Dec.
Article in English | MEDLINE | ID: mdl-20950633

ABSTRACT

Silent pauses are a common form of disfluency in speech yet little attention has been paid to them in the psycholinguistic literature. The present paper investigates the consequences of such silences for listeners, using an Event-Related Potential (ERP) paradigm. Participants heard utterances ending in predictable or unpredictable words, some of which included a disfluent silence before the target. In common with previous findings using er disfluencies, the N400 difference between predictable and unpredictable words was attenuated for the utterances that included silent pauses, suggesting a reduction in the relative processing benefit for predictable words. An earlier relative negativity, topographically distinct from the N400 effect and identifiable as a Phonological Mismatch Negativity (PMN), was found for fluent utterances only. This suggests that only in the fluent condition did participants perceive the phonology of unpredictable words to mismatch with their expectations. By contrast, for disfluent utterances only, unpredictable words gave rise to a late left frontal positivity, an effect previously observed following ers and disfluent repetitions. We suggest that this effect reflects the engagement of working memory processes that occurs when fluent speech is resumed. Using a surprise recognition memory test, we also show that listeners were more likely to recognise words which had been encountered after silent pauses, demonstrating that silence affects not only the process of language comprehension but also its eventual outcome. We argue that, from a listener's perspective, one critical feature of disfluency is the temporal delay which it adds to the speech signal.


Subject(s)
Recognition, Psychology/physiology , Sound , Speech Perception/physiology , Speech/physiology , Acoustic Stimulation/methods , Adolescent , Adult , Analysis of Variance , Brain Mapping , Electroencephalography/methods , Evoked Potentials/physiology , Female , Humans , Male , Time Factors , Young Adult
17.
Brain Lang ; 111(1): 36-45, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19700188

ABSTRACT

Disfluencies can affect language comprehension, but to date, most studies have focused on disfluent pauses such as er. We investigated whether disfluent repetitions in speech have discernible effects on listeners during language comprehension, and whether repetitions affect the linguistic processing of subsequent words in speech in ways which have been previously observed with ers. We used event-related potentials (ERPs) to measure participants' neural responses to disfluent repetitions of words relative to acoustically identical words in fluent contexts, as well as to unpredictable and predictable words that occurred immediately post-disfluency and in fluent utterances. We additionally measured participants' recognition memories for the predictable and unpredictable words. Repetitions elicited an early onsetting relative positivity (100-400 ms post-stimulus), clearly demonstrating listeners' sensitivity to the presence of disfluent repetitions. Unpredictable words elicited an N400 effect. Importantly, there was no evidence that this effect, thought to reflect the difficulty of semantically integrating unpredictable compared to predictable words, differed quantitatively between fluent and disfluent utterances. Furthermore, there was no evidence that the memorability of words was affected by the presence of a preceding repetition. These findings contrast with previous research which demonstrated an N400 attenuation of, and an increase in memorability for, words that were preceded by an er. However, in a later (600-900 ms) time window, unpredictable words following a repetition elicited a relative positivity. Reanalysis of previous data confirmed the presence of a similar effect following an er. The effect may reflect difficulties in resuming linguistic processing following any disruption to speech.


Subject(s)
Cerebral Cortex/physiology , Evoked Potentials, Auditory/physiology , Speech Intelligibility/physiology , Speech Perception/physiology , Acoustic Stimulation , Adult , Analysis of Variance , Brain Mapping , Electroencephalography , Female , Humans , Language Tests , Male , Mental Recall/physiology , Signal Processing, Computer-Assisted , Time Factors , Vocabulary
18.
J Exp Psychol Learn Mem Cogn ; 34(3): 696-702, 2008 May.
Article in English | MEDLINE | ID: mdl-18444766

ABSTRACT

Filled-pause disfluencies such as um and er affect listeners' comprehension, possibly mediated by attentional mechanisms (J. E. Fox Tree, 2001). However, there is little direct evidence that hesitations affect attention. The current study used an acoustic manipulation of continuous speech to induce event-related potential components associated with attention (mismatch negativity [MMN] and P300) during the comprehension of fluent and disfluent utterances. In fluent cases, infrequently occurring acoustically manipulated target words gave rise to typical MMN and P300 components when compared to nonmanipulated controls. In disfluent cases, where targets were preceded by natural sounding hesitations culminating in the filled pause er, an MMN (reflecting a detection of deviance) was still apparent for manipulated words, but there was little evidence of a subsequent P300. This suggests that attention was not reoriented to deviant words in disfluent cases. A subsequent recognition test showed that nonmanipulated words were more likely to be remembered if they had been preceded by a hesitation. Taken together, these results strongly implicate attention in an account of disfluency processing: Hesitations orient listeners' attention, with consequences for the immediate processing and later representation of an utterance.


Subject(s)
Attention/physiology , Comprehension/physiology , Contingent Negative Variation/physiology , Electroencephalography , Event-Related Potentials, P300/physiology , Semantics , Signal Processing, Computer-Assisted , Speech Perception/physiology , Verbal Behavior/physiology , Adolescent , Adult , Brain Mapping , Cerebral Cortex/physiology , Female , Humans , Male , Reaction Time/physiology
19.
Cognition ; 105(3): 658-68, 2007 Dec.
Article in English | MEDLINE | ID: mdl-17173887

ABSTRACT

Everyday speech is littered with disfluency, often correlated with the production of less predictable words (e.g., Beattie & Butterworth [Beattie, G., & Butterworth, B. (1979). Contextual probability and word frequency as determinants of pauses in spontaneous speech. Language and Speech, 22, 201-211.]). But what are the effects of disfluency on listeners? In an ERP experiment which compared fluent to disfluent utterances, we established an N400 effect for unpredictable compared to predictable words. This effect, reflecting the difference in ease of integrating words into their contexts, was reduced in cases where the target words were preceded by a hesitation marked by the word er. Moreover, a subsequent recognition memory test showed that words preceded by disfluency were more likely to be remembered. The study demonstrates that hesitation affects the way in which listeners process spoken language, and that these changes are associated with longer-term consequences for the representation of the message.


Subject(s)
Cognition , Language , Speech , Vocabulary , Evoked Potentials/physiology , Humans , Semantics , Speech Perception