Results 1 - 20 of 29
1.
Front Psychol ; 10: 318, 2019.
Article in English | MEDLINE | ID: mdl-30858810

ABSTRACT

Facial electromyography research shows that corrugator supercilii ("frowning muscle") activity tracks the emotional valence of linguistic stimuli. Grounded or embodied accounts of language processing take such activity to reflect the simulation or "re-enactment" of emotion, as part of the retrieval of word meaning (e.g., of "furious") and/or of building a situation model (e.g., for "Mark is furious"). However, the same muscle also expresses our primary emotional evaluation of things we encounter. Language-driven affective simulation can easily be at odds with the reader's affective evaluation of what language describes (e.g., when we like Mark being furious). In a previous experiment ('t Hart et al., 2018) we demonstrated that neither language-driven simulation nor affective evaluation alone seems sufficient to explain the corrugator patterns that emerge during online language comprehension in these complex cases. Those results showed support for a multiple-drivers account of corrugator activity, where both simulation and evaluation processes contribute to the activation patterns observed in the corrugator. The study at hand replicates and extends these findings. With more refined control over when precisely affective information became available in a narrative, we again find results that speak against an interpretation of corrugator activity in terms of simulation or evaluation alone, and as such support the multiple-drivers account. Additional evidence suggests that the simulation driver involved reflects simulation at the level of situation model construction, rather than at the level of retrieving concepts from long-term memory. In all, by giving insights into how language-driven simulation meshes with the reader's evaluative responses during an unfolding narrative, this study contributes to the understanding of affective language comprehension.

2.
Front Psychol ; 9: 613, 2018.
Article in English | MEDLINE | ID: mdl-29760671

ABSTRACT

Facial electromyography research shows that corrugator supercilii ("frowning muscle") activity tracks the emotional valence of linguistic stimuli. Grounded or embodied accounts of language processing take such activity to reflect the simulation or "reenactment" of emotion, as part of the retrieval of word meaning (e.g., of "furious") and/or of building a situation model (e.g., for "Mark is furious"). However, the same muscle also expresses our primary emotional evaluation of things we encounter. Language-driven affective simulation can easily be at odds with the reader's affective evaluation of what language describes (e.g., when we like Mark being furious). To examine what happens in such cases, we independently manipulated simulation valence and moral evaluative valence in short narratives. Participants first read about characters behaving in a morally laudable or objectionable fashion: this immediately led to corrugator activity reflecting positive or negative affect. Next, and critically, a positive or negative event befell these same characters. Here, the corrugator response did not track the valence of the event, but reflected both simulation and moral evaluation. This highlights the importance of unpacking coarse notions of affective meaning in language processing research into components that reflect simulation and evaluation. Our results also call for a re-evaluation of the interpretation of corrugator EMG, as well as other affect-related facial muscles and other peripheral physiological measures, as unequivocal indicators of simulation. Research should explore how such measures behave in richer and more ecologically valid language processing, such as narrative, thereby refining our understanding of simulation within a framework of grounded language comprehension.
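
The analysis logic behind a corrugator study of this kind can be sketched as follows. This is only an illustrative outline, assuming rectified EMG epochs and a 2 x 2 crossing of simulation valence and moral evaluation; the array shapes, sampling rate, baseline window, and condition labels are invented for the sketch, not taken from the paper.

```python
import numpy as np
import pandas as pd

# Illustrative inputs (not from the study): rectified corrugator EMG epochs,
# shape (n_trials, n_samples), sampled at 1000 Hz, with a 500 ms pre-stimulus baseline.
rng = np.random.default_rng(0)
fs = 1000
emg = rng.random((80, 3000))                      # 80 trials x 3 s of rectified EMG
trials = pd.DataFrame({
    "simulation_valence": rng.choice(["positive", "negative"], 80),     # valence of the described event
    "moral_evaluation":   rng.choice(["laudable", "objectionable"], 80),  # valence of the character's behavior
})

baseline = emg[:, :int(0.5 * fs)].mean(axis=1)    # mean activity in the 500 ms baseline
post     = emg[:, int(0.5 * fs):].mean(axis=1)    # mean activity after the critical information
trials["corrugator_change"] = post - baseline     # baseline-corrected change score per trial

# Cell means of the 2 x 2 design: simulation valence x moral evaluation.
cell_means = trials.groupby(["simulation_valence", "moral_evaluation"])["corrugator_change"].mean()
print(cell_means)
```

A real analysis would add artifact handling and inferential statistics over participants; the point here is only the crossing of simulation-related and evaluation-related factors.
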

3.
Soc Neurosci ; 12(2): 182-193, 2017 04.
Article in English | MEDLINE | ID: mdl-26985787

ABSTRACT

Insults always sting, but the context in which they are delivered can make the effects even worse. Here we test how the brain processes insults, and whether and how the neurocognitive processing of insults is changed by the presence of a laughing crowd. Event-related potentials showed that insults, compared to compliments, evoked an increase in N400 amplitude (indicating increased lexical-semantic processing) and LPP amplitude (indicating emotional processing) when presented in isolation. When insults were perceived in the presence of a laughing crowd, the difference in N400 amplitude disappeared, while the difference in LPP activation increased. These results show that even without laughter, verbal insults receive additional neural processing over compliments, both at the lexical-semantic and emotional level. The presence of a laughing crowd has a direct effect on the neurocognitive processing of insults, leading to stronger and more prolonged emotional processing.


Subject(s)
Brain/physiology; Laughter; Rejection, Psychology; Social Perception; Speech Perception/physiology; Adolescent; Adult; Analysis of Variance; Electroencephalography; Evoked Potentials; Female; Humans; Interpersonal Relations; Male; Neuropsychological Tests; Young Adult
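
For readers unfamiliar with how N400 and LPP effects like these are quantified, the sketch below shows one common approach: averaging epochs per condition and taking mean amplitudes in fixed latency windows. It assumes MNE-Python and a hypothetical epochs file; the channel selection and window boundaries are typical choices, not necessarily those used in this study.

```python
import mne

# Hypothetical epochs file and condition labels; windows follow common conventions
# (N400: roughly 300-500 ms; LPP: roughly 500-800 ms), not necessarily those of the study.
epochs = mne.read_epochs("subject01-epo.fif")

def mean_amplitude(evoked, tmin, tmax, picks=("Cz", "Pz", "CPz")):
    """Mean amplitude (in volts) over a latency window and a set of centro-parietal channels."""
    return evoked.copy().pick(list(picks)).crop(tmin=tmin, tmax=tmax).data.mean()

for condition in ("insult", "compliment"):
    evoked = epochs[condition].average()          # per-condition average (ERP)
    n400 = mean_amplitude(evoked, 0.30, 0.50)
    lpp  = mean_amplitude(evoked, 0.50, 0.80)
    print(f"{condition}: N400 window = {n400:.2e} V, LPP window = {lpp:.2e} V")
```
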
4.
Neuropsychologia ; 76: 79-91, 2015 Sep.
Article in English | MEDLINE | ID: mdl-25858603

ABSTRACT

In using language, people not only exchange information, but also navigate their social world - for example, they can express themselves indirectly to avoid losing face. In this functional magnetic resonance imaging study, we investigated the neural correlates of interpreting face-saving indirect replies, in a situation where participants only overheard the replies as part of a conversation between two other people, as well as in a situation where the participants were directly addressed themselves. We created a fictional job interview context where indirect replies serve as a natural communicative strategy to attenuate one's shortcomings, and asked fMRI participants to either pose scripted questions and receive answers from three putative job candidates (addressee condition) or to listen to someone else interview the same candidates (overhearer condition). In both cases, the need to evaluate the candidate ensured that participants had an active interest in comprehending the replies. Relative to direct replies, face-saving indirect replies increased activation in medial prefrontal cortex, bilateral temporo-parietal junction (TPJ), bilateral inferior frontal gyrus and bilateral middle temporal gyrus, in active overhearers and active addressees alike, with similar effect size, and comparable to findings obtained in an earlier passive listening study (Basnáková et al., 2014). In contrast, indirectness effects in bilateral anterior insula and pregenual ACC, two regions implicated in emotional salience and empathy, were reliably stronger in addressees than in active overhearers. Our findings indicate that understanding face-saving indirect language requires additional cognitive perspective-taking and other discourse-relevant cognitive processing, to a comparable extent in active overhearers and addressees. Furthermore, they indicate that face-saving indirect language draws upon affective systems more in addressees than in overhearers, presumably because the addressee is the one being managed by a face-saving reply. In all, face-saving indirectness provides a window on the cognitive as well as affect-related neural systems involved in human communication.


Subject(s)
Brain/physiology; Communication; Comprehension/physiology; Social Perception; Speech Perception/physiology; Adult; Brain Mapping; Female; Humans; Interpersonal Relations; Magnetic Resonance Imaging; Male; Young Adult
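
The "indirect versus direct reply" comparisons reported in fMRI studies like this one are, at bottom, contrasts over GLM parameter estimates. The toy single-voxel sketch below illustrates only that logic; the regressors are random placeholders rather than HRF-convolved event models, and nothing here reproduces the study's actual pipeline.

```python
import numpy as np

# Toy single-run, single-voxel GLM illustrating an "indirect minus direct reply" contrast.
# In a real analysis X would contain HRF-convolved condition regressors and nuisance terms.
rng = np.random.default_rng(1)
n_scans = 200
X = np.column_stack([
    rng.random(n_scans),         # regressor 1: indirect replies (placeholder)
    rng.random(n_scans),         # regressor 2: direct replies (placeholder)
    np.ones(n_scans),            # constant term
])
y = rng.random(n_scans)          # BOLD time course of one voxel (placeholder)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)    # least-squares parameter estimates
c = np.array([1.0, -1.0, 0.0])                  # contrast vector: indirect - direct
print(f"contrast estimate (indirect - direct): {c @ beta:.3f}")
```
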
5.
Cereb Cortex ; 24(10): 2572-8, 2014 Oct.
Article in English | MEDLINE | ID: mdl-23645715

ABSTRACT

Even though language allows us to say exactly what we mean, we often use language to say things indirectly, in a way that depends on the specific communicative context. For example, we can use an apparently straightforward sentence like "It is hard to give a good presentation" to convey deeper meanings, like "Your talk was a mess!" One of the big puzzles in language science is how listeners work out what speakers really mean, which is a skill absolutely central to communication. However, most neuroimaging studies of language comprehension have focused on the arguably much simpler, context-independent process of understanding direct utterances. To examine the neural systems involved in getting at contextually constrained indirect meaning, we used functional magnetic resonance imaging as people listened to indirect replies in spoken dialog. Relative to direct control utterances, indirect replies engaged dorsomedial prefrontal cortex, right temporo-parietal junction and insula, as well as bilateral inferior frontal gyrus and right medial temporal gyrus. This suggests that listeners take the speaker's perspective on both cognitive (theory of mind) and affective (empathy-like) levels. In line with classic pragmatic theories, our results also indicate that currently popular "simulationist" accounts of language comprehension fail to explain how listeners understand the speaker's intended message.


Subject(s)
Brain/physiology; Comprehension/physiology; Semantics; Speech Perception/physiology; Adult; Brain Mapping; Female; Humans; Magnetic Resonance Imaging; Male; Young Adult
6.
Front Psychol ; 4: 505, 2013.
Article in English | MEDLINE | ID: mdl-23986725

ABSTRACT

In neurocognitive research on language, the processing principles of the system at hand are usually assumed to be relatively invariant. However, research on attention, memory, decision-making, and social judgment has shown that mood can substantially modulate how the brain processes information. For example, in a bad mood, people typically have a narrower focus of attention and rely less on heuristics. In the face of such pervasive mood effects elsewhere in the brain, it seems unlikely that language processing would remain untouched. In an EEG experiment, we manipulated the mood of participants just before they read texts that confirmed or disconfirmed verb-based expectations about who would be talked about next (e.g., that "David praised Linda because … " would continue about Linda, not David), or that respected or violated a syntactic agreement rule (e.g., "The boys turns"). ERPs showed that mood had little effect on syntactic parsing, but did substantially affect referential anticipation: whereas readers anticipated information about a specific person when they were in a good mood, a bad mood completely abolished such anticipation. A behavioral follow-up experiment suggested that a bad mood did not interfere with verb-based expectations per se, but prevented readers from using that information rapidly enough to predict upcoming reference on the fly, as the sentence unfolds. In all, our results reveal that background mood, a rather unobtrusive affective state, selectively changes a crucial aspect of real-time language processing. This observation fits well with other observed interactions between language processing and affect (emotions, preferences, attitudes, mood), and more generally testifies to the importance of studying "cold" cognitive functions in relation to "hot" aspects of the brain.

7.
Front Psychol ; 3: 190, 2012.
Article in English | MEDLINE | ID: mdl-22715332

ABSTRACT

During spoken language interpretation, listeners rapidly relate the meaning of each individual word to what has been said before. However, spoken words often contain spurious other words, like day in daisy, or dean in sardine. Do listeners also relate the meaning of such unintended, spurious words to the prior context? We used ERPs to look for transient meaning-based N400 effects in sentences that were completely plausible at the level of words intended by the speaker, but contained an embedded word whose meaning clashed with the context. Although carrier words with an initial embedding (day in daisy) did not elicit an embedding-related N400 effect relative to matched control words without embedding, carrier words with a final embedding (dean in sardine) did elicit such an effect. Together with prior work from our lab and the results of a Shortlist B simulation, our findings suggest that listeners do semantically interpret embedded words, albeit not under all conditions. We explain the latter by assuming that the sense-making system adjusts its hypothesis for how to interpret the external input at every new syllable, in line with recent ideas of active sampling in perception.
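
The idea that the sense-making system "adjusts its hypothesis for how to interpret the external input at every new syllable" can be pictured as an incremental re-weighting of lexical candidates as input arrives. The mini-lexicon, priors, and likelihood function below are invented purely for illustration; this is not a reimplementation of Shortlist B.

```python
# Toy illustration of incrementally re-weighting lexical hypotheses as input unfolds.
lexicon = {"day": 0.4, "daisy": 0.3, "dean": 0.2, "sardine": 0.1}   # invented prior word probabilities

def likelihood(word, prefix):
    """Crude likelihood: high if the word is consistent with the input heard so far, else a small leak."""
    return 1.0 if word.startswith(prefix) else 0.01

def update(posterior, prefix):
    """One Bayesian update step over the candidate set, given the input heard so far."""
    unnormalized = {w: p * likelihood(w, prefix) for w, p in posterior.items()}
    total = sum(unnormalized.values())
    return {w: v / total for w, v in unnormalized.items()}

posterior = dict(lexicon)
for prefix in ("d", "da", "dai", "dais", "daisy"):   # letters stand in for incoming phonemes
    posterior = update(posterior, prefix)
    best = max(posterior, key=posterior.get)
    print(f"after '{prefix}': best candidate = {best}, p = {posterior[best]:.2f}")
```
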

8.
Soc Cogn Affect Neurosci ; 7(2): 173-83, 2012 Feb.
Article in English | MEDLINE | ID: mdl-21148175

ABSTRACT

When an adult claims he cannot sleep without his teddy bear, people tend to react with surprise. Language interpretation is, thus, influenced by social context, such as who the speaker is. The present study reveals inter-individual differences in brain reactivity to social aspects of language. Whereas women showed brain reactivity when stereotype-based inferences about a speaker conflicted with the content of the message, men did not. This sex difference in social information processing can be explained by a specific cognitive trait, one's ability to empathize. Individuals who empathize to a greater degree revealed larger N400 effects (as well as a larger increase in γ-band power) to socially relevant information. These results indicate that individuals with high-empathizing skills are able to rapidly integrate information about the speaker with the content of the message, as they make use of voice-based inferences about the speaker to process language in a top-down manner. In contrast, individuals with lower empathizing skills did not use information about social stereotypes in implicit sentence comprehension, but rather took a more bottom-up approach to the processing of these social pragmatic sentences.


Subject(s)
Empathy/physiology; Evoked Potentials/physiology; Language; Adolescent; Adult; Brain/physiology; Brain Mapping; Female; Humans; Individuality; Male; Semantics; Surveys and Questionnaires; Voice/physiology; Young Adult
9.
J Cogn Neurosci ; 22(11): 2618-26, 2010 Nov.
Article in English | MEDLINE | ID: mdl-19702463

ABSTRACT

In an ERP experiment, we examined whether listeners, when making sense of spoken utterances, take into account the meaning of spurious words that are embedded in longer words, either at their onsets (e.g., pie in pirate) or at their offsets (e.g., pain in champagne). In the experiment, Dutch listeners heard Dutch words with initial or final embeddings presented in a sentence context that did or did not support the meaning of the embedded word, while equally supporting the longer carrier word. The N400 at the carrier words was modulated by the semantic fit of the embedded words, indicating that listeners briefly relate the meaning of initial- and final-embedded words to the sentential context, even though these words were not intended by the speaker. These findings help us understand the dynamics of initial sense-making and its link to lexical activation. In addition, they shed new light on the role of lexical competition and the debate concerning the lexical activation of final-embedded words.


Subject(s)
Evoked Potentials, Auditory/physiology; Semantics; Speech Perception/physiology; Vocabulary; Acoustic Stimulation/methods; Electroencephalography/methods; Female; Humans; Male; Reaction Time/physiology; Young Adult
10.
Psychol Sci ; 20(9): 1092-9, 2009 Sep.
Article in English | MEDLINE | ID: mdl-19656340

ABSTRACT

How does the brain respond to statements that clash with a person's value system? We recorded event-related brain potentials while respondents from contrasting political-ethical backgrounds completed an attitude survey on drugs, medical ethics, social conduct, and other issues. Our results show that value-based disagreement is unlocked by language extremely rapidly, within 200 to 250 ms after the first word that indicates a clash with the reader's value system (e.g., "I think euthanasia is an acceptable/unacceptable..."). Furthermore, strong disagreement rapidly influences the ongoing analysis of meaning, which indicates that even very early processes in language comprehension are sensitive to a person's value system. Our results testify to rapid reciprocal links between neural systems for language and for valuation.


Subject(s)
Arousal/physiology; Attitude; Conflict, Psychological; Electroencephalography; Evoked Potentials/physiology; Judgment; Morals; Reading; Social Values; Adult; Brain Mapping; Cerebral Cortex/physiology; Christianity; Female; Humans; Male; Middle Aged; Politics; Religion and Psychology; Semantics
11.
Brain Res ; 1291: 92-101, 2009 Sep 29.
Article in English | MEDLINE | ID: mdl-19631622

ABSTRACT

Prior research has indicated that readers and listeners can use information in the prior discourse to rapidly predict specific upcoming words, as the text is unfolding. Here we used event-related potentials to explore whether the ability to make rapid online predictions depends on a reader's working memory capacity (WMC). Readers with low WMC were hypothesized to differ from high WMC readers either in their overall capability to make predictions (because of their lack of cognitive resources) or in how they deal with information that disconfirms a generated prediction. High and low WMC participants read highly constraining stories that supported the prediction of a specific noun, mixed with coherent but essentially unpredictive 'prime control' stories that contained the same content words as the predictive stories. To test whether readers were anticipating upcoming words, critical nouns were preceded by a determiner whose gender agreed or disagreed with the gender of the expected noun. In predictive stories, both high and low WMC readers displayed an early negative deflection (300-600 ms) to unexpected determiners, which was not present in prime control stories. Only the low WMC participants displayed an additional later negativity (900-1500 ms) to unexpected determiners. This pattern of results suggests that WMC does not influence the ability to anticipate upcoming words per se, but does change the way in which readers deal with information that disconfirms the generated prediction.


Subject(s)
Association; Cerebral Cortex/physiology; Language; Memory, Short-Term/physiology; Adolescent; Adult; Analysis of Variance; Brain Mapping; Cues; Electroencephalography; Evoked Potentials, Visual/physiology; Female; Humans; Male; Photic Stimulation; Reading; Signal Processing, Computer-Assisted
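
One way to picture the group analysis described in this study is as a comparison of per-subject difference waves (unexpected minus expected determiner) in two latency windows, split by working memory capacity. The sketch below uses simulated data and a simple median split; the windows follow the abstract, everything else is an illustrative assumption.

```python
import numpy as np
from scipy import stats

# Illustrative per-subject difference waves (unexpected minus expected determiner),
# shape (n_subjects, n_samples), sampled at 1000 Hz, time zero at determiner onset.
rng = np.random.default_rng(2)
fs = 1000
diff_wave = rng.standard_normal((40, 2000))
wmc_score = rng.random(40)                        # hypothetical working memory capacity scores
high_wmc = wmc_score >= np.median(wmc_score)      # median split into high/low WMC groups

def window_mean(waves, tmin, tmax):
    """Mean of the difference wave in a latency window (seconds from determiner onset)."""
    return waves[:, int(tmin * fs):int(tmax * fs)].mean(axis=1)

early = window_mean(diff_wave, 0.3, 0.6)   # early negativity window (300-600 ms)
late  = window_mean(diff_wave, 0.9, 1.5)   # later negativity window (900-1500 ms)

# Does the later effect differ between high- and low-WMC readers?
res = stats.ttest_ind(late[high_wmc], late[~high_wmc])
print(f"late window, high vs. low WMC: t = {res.statistic:.2f}, p = {res.pvalue:.3f}")
```
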
12.
J Cogn Neurosci ; 21(11): 2085-99, 2009 Nov.
Article in English | MEDLINE | ID: mdl-19016606

ABSTRACT

When interpreting a message, a listener takes into account several sources of linguistic and extralinguistic information. Here we focused on one particular form of extralinguistic information, certain speaker characteristics as conveyed by the voice. Using functional magnetic resonance imaging, we examined the neural structures involved in the unification of sentence meaning and voice-based inferences about the speaker's age, sex, or social background. We found enhanced activation in the inferior frontal gyrus bilaterally (BA 45/47) during listening to sentences whose meaning was incongruent with inferred speaker characteristics. Furthermore, our results showed an overlap in brain regions involved in unification of speaker-related information and those used for the unification of semantic and world knowledge information [inferior frontal gyrus bilaterally (BA 45/47) and left middle temporal gyrus (BA 21)]. These findings provide evidence for a shared neural unification system for linguistic and extralinguistic sources of information and extend the existing knowledge about the role of inferior frontal cortex as a crucial component for unification during language comprehension.


Subject(s)
Comprehension/physiology; Concept Formation/physiology; Frontal Lobe/physiology; Language; Social Perception; Speech Perception/physiology; Adult; Brain Mapping; Female; Humans; Language Tests; Magnetic Resonance Imaging; Male; Psycholinguistics; Reference Values; Voice Quality/physiology; Young Adult
13.
Cereb Cortex ; 19(7): 1493-503, 2009 Jul.
Article in English | MEDLINE | ID: mdl-19001084

ABSTRACT

Sentence comprehension requires the retrieval of single word information from long-term memory, and the integration of this information into multiword representations. The current functional magnetic resonance imaging study explored the hypothesis that the left posterior temporal gyrus supports the retrieval of lexical-syntactic information, whereas left inferior frontal gyrus (LIFG) contributes to syntactic unification. Twenty-eight subjects read sentences and word sequences containing word-category (noun-verb) ambiguous words at critical positions. Regions contributing to the syntactic unification process should show enhanced activation for sentences compared to words, and only within sentences display a larger signal for ambiguous than unambiguous conditions. The posterior LIFG showed exactly this predicted pattern, confirming our hypothesis that LIFG contributes to syntactic unification. The left posterior middle temporal gyrus was activated more for ambiguous than unambiguous conditions (main effect over both sentences and word sequences), as predicted for regions subserving the retrieval of lexical-syntactic information from memory. We conclude that understanding language involves the dynamic interplay between left inferior frontal and left posterior temporal regions.


Subject(s)
Brain Mapping/methods; Cerebral Cortex/physiology; Comprehension/physiology; Evoked Potentials/physiology; Language; Magnetic Resonance Imaging/methods; Semantics; Adolescent; Adult; Female; Humans; Male; Young Adult
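
The predicted activation patterns for unification versus retrieval regions in this study can be written as contrast vectors over the four cells of the 2 x 2 design (sentence/word sequence x ambiguous/unambiguous). The numbers and cell ordering below are assumptions made purely for illustration.

```python
import numpy as np

# Per-condition parameter estimates for one region of interest, in the (assumed) cell order:
# [sentence-ambiguous, sentence-unambiguous, word-ambiguous, word-unambiguous].
beta = np.array([2.1, 1.5, 0.9, 0.9])   # invented numbers, for illustration only

sentences_vs_words     = np.array([1, 1, -1, -1]) @ beta   # main effect predicted for unification regions (LIFG)
ambiguity_in_sentences = np.array([1, -1, 0, 0])  @ beta   # ambiguity effect restricted to sentences
ambiguity_main_effect  = np.array([1, -1, 1, -1]) @ beta   # pattern predicted for lexical retrieval regions (posterior MTG)

print(sentences_vs_words, ambiguity_in_sentences, ambiguity_main_effect)
```
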
14.
Brain Lang ; 106(2): 119-31, 2008 Aug.
Article in English | MEDLINE | ID: mdl-18556057

ABSTRACT

In this event-related brain potential (ERP) study, we examined how semantic and referential aspects of anaphoric noun phrase resolution interact during discourse comprehension. We used a full factorial design that crossed referential ambiguity with semantic incoherence. Ambiguous anaphors elicited a sustained negative shift (Nref effect), and incoherent anaphors elicited an N400 effect. Simultaneously ambiguous and incoherent anaphors elicited an ERP pattern resembling that of the incoherent anaphors. These results suggest that semantic incoherence can preclude readers from engaging in anaphoric inferencing. Furthermore, approximately half of our participants unexpectedly showed common late positive effects to the three types of problematic anaphors. We relate the latter finding to recent accounts of what the P600 might reflect, and to the role of individual differences therein.


Subject(s)
Cognition/physiology; Comprehension/physiology; Evoked Potentials/physiology; Semantics; Speech Perception/physiology; Adult; Brain/physiology; Brain Mapping/methods; Electroencephalography; Female; Humans; Language Tests/statistics & numerical data; Male; Mental Processes/physiology; Models, Psychological; Photic Stimulation/methods; Psycholinguistics/methods; Semantic Differential/statistics & numerical data
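
A full factorial design like this one (referential ambiguity x semantic coherence, within subjects) is typically evaluated with a repeated-measures ANOVA on per-condition mean amplitudes. The sketch below uses statsmodels' AnovaRM on simulated per-subject values; the variable names and numbers are illustrative only.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Illustrative per-subject mean amplitudes for the four anaphor types
# (random numbers; the 2 x 2 crossing mirrors the design described above).
rng = np.random.default_rng(3)
rows = []
for subject in range(24):
    for ambiguity in ("ambiguous", "unambiguous"):
        for coherence in ("coherent", "incoherent"):
            rows.append({
                "subject": subject,
                "ambiguity": ambiguity,
                "coherence": coherence,
                "amplitude": rng.normal(),
            })
data = pd.DataFrame(rows)

anova = AnovaRM(data, depvar="amplitude", subject="subject",
                within=["ambiguity", "coherence"]).fit()
print(anova)   # main effects and the ambiguity x coherence interaction
```
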
15.
J Cogn Neurosci ; 20(4): 580-91, 2008 Apr.
Article in English | MEDLINE | ID: mdl-18052777

ABSTRACT

When do listeners take into account who the speaker is? We asked people to listen to utterances whose content sometimes did not match inferences based on the identity of the speaker (e.g., "If only I looked like Britney Spears" in a male voice, or "I have a large tattoo on my back" spoken with an upper-class accent). Event-related brain responses revealed that the speaker's identity is taken into account as early as 200-300 msec after the beginning of a spoken word, and is processed by the same early interpretation mechanism that constructs sentence meaning based on just the words. This finding is difficult to reconcile with standard "Gricean" models of sentence interpretation in which comprehenders initially compute a local, context-independent meaning for the sentence ("semantics") before working out what it really means given the wider communicative context and the particular speaker ("pragmatics"). Because the observed brain response hinges on voice-based and usually stereotype-dependent inferences about the speaker, it also shows that listeners rapidly classify speakers on the basis of their voices and bring the associated social stereotypes to bear on what is being said. According to our event-related potential results, language comprehension takes very rapid account of the social context, and the construction of meaning based on language alone cannot be separated from the social aspects of language use. The linguistic brain relates the message to the speaker immediately.


Subject(s)
Cerebral Cortex/physiology; Comprehension/physiology; Evoked Potentials/physiology; Psycholinguistics; Social Perception; Speech Perception/physiology; Acoustic Stimulation; Adult; Female; Humans; Male; Reference Values; Semantics; Verbal Behavior/physiology
16.
BMC Neurosci ; 8: 89, 2007 Oct 26.
Article in English | MEDLINE | ID: mdl-17963486

ABSTRACT

BACKGROUND: Recently, several studies have shown that people use contextual information to make predictions about the rest of the sentence or story as the text unfolds. Using event-related potentials (ERPs) we tested whether these on-line predictions are based on a message-level representation of the discourse or on simple automatic activation by individual words. Subjects heard short stories that were highly constraining for one specific noun, or stories that were not specifically predictive but contained the same prime words as the predictive stories. To test whether listeners make specific predictions, critical nouns were preceded by an adjective that was inflected according to, or in contrast with, the gender of the expected noun. RESULTS: When the message of the preceding discourse was predictive, adjectives with an unexpected gender inflection evoked a negative deflection over right-frontal electrodes between 300 and 600 ms. This effect was not present in the prime control context, indicating that the prediction mismatch does not hinge on word-based priming but is based on the actual message of the discourse. CONCLUSION: When listening to a constraining discourse, people rapidly make very specific predictions about the remainder of the story, as the story unfolds. These predictions are not simply based on word-based automatic activation, but take into account the actual message of the discourse.


Subject(s)
Cerebral Cortex/physiology; Mental Processes/physiology; Semantics; Verbal Behavior/physiology; Adolescent; Adult; Brain Mapping; Electroencephalography; Evoked Potentials/physiology; Female; Humans; Language Tests; Male; Reaction Time/physiology; Signal Processing, Computer-Assisted
17.
Neuroimage ; 37(3): 993-1004, 2007 Sep 01.
Article in English | MEDLINE | ID: mdl-17611124

ABSTRACT

In an event-related fMRI study, we examined the cortical networks involved in establishing reference during language comprehension. We compared BOLD responses to sentences containing referentially ambiguous pronouns (e.g., "Ronald told Frank that he..."), referentially failing pronouns (e.g., "Rose told Emily that he...") or coherent pronouns. Referential ambiguity selectively recruited medial prefrontal regions, suggesting that readers engaged in problem-solving to select a unique referent from the discourse model. Referential failure elicited activation increases in brain regions associated with morpho-syntactic processing, and, for those readers who took failing pronouns to refer to unmentioned entities, additional regions associated with elaborative inferencing were observed. The networks activated by these two referential problems did not overlap with the network activated by a standard semantic anomaly. Instead, we observed a double dissociation, in that the systems activated by semantic anomaly are deactivated by referential ambiguity, and vice versa. This inverse coupling may reflect the dynamic recruitment of semantic and episodic processing to resolve semantically or referentially problematic situations. More generally, our findings suggest that neurocognitive accounts of language comprehension need to address not just how we parse a sentence and combine individual word meanings, but also how we determine who's who and what's what during language comprehension.


Subject(s)
Brain/physiology; Cognition/physiology; Evoked Potentials/physiology; Language; Speech Perception/physiology; Adult; Female; Humans; Male
18.
Brain Res ; 1153: 166-77, 2007 Jun 11.
Article in English | MEDLINE | ID: mdl-17466281

ABSTRACT

A discourse context provides a reader with a great deal of information that can provide constraints for further language processing, at several different levels. In this experiment we used event-related potentials (ERPs) to explore whether discourse-generated contextual constraints are based on the precise message of the discourse or, more 'loosely', on the scenario suggested by one or more content words in the text. Participants read constraining stories whose precise message rendered a particular word highly predictable ("The manager thought that the board of directors should assemble to discuss the issue. He planned a...[meeting]") as well as non-constraining control stories that were only biasing in virtue of the scenario suggested by some of the words ("The manager thought that the board of directors need not assemble to discuss the issue. He planned a..."). Coherent words that were inconsistent with the message-level expectation raised in a constraining discourse (e.g., "session" instead of "meeting") elicited a classic centroparietal N400 effect. However, when the same words were only inconsistent with the scenario loosely suggested by earlier words in the text, they elicited a different negativity around 400 ms, with a more anterior, left-lateralized maximum. The fact that the discourse-dependent N400 effect cannot be reduced to scenario-mediated priming reveals that it reflects the rapid use of precise message-level constraints in comprehension. At the same time, the left-lateralized negativity in non-constraining stories suggests that, at least in the absence of strong message-level constraints, scenario-mediated priming does also rapidly affect comprehension.


Subject(s)
Comprehension/physiology; Evoked Potentials/physiology; Language; Reading; Adolescent; Adult; Brain Mapping; Electroencephalography/methods; Female; Humans; Male
19.
Philos Trans R Soc Lond B Biol Sci ; 362(1481): 801-11, 2007 May 29.
Article in English | MEDLINE | ID: mdl-17412680

ABSTRACT

A central and influential idea among researchers of language is that our language faculty is organized according to Fregean compositionality, which states that the meaning of an utterance is a function of the meaning of its parts and of the syntactic rules by which these parts are combined. Since the domain of syntactic rules is the sentence, the implication of this idea is that language interpretation takes place in a two-step fashion. First, the meaning of a sentence is computed. In a second step, the sentence meaning is integrated with information from prior discourse, world knowledge, information about the speaker and semantic information from extra-linguistic domains such as co-speech gestures or the visual world. Here, we present results from recordings of event-related brain potentials that are inconsistent with this classical two-step model of language interpretation. Our data support a one-step model in which knowledge about the context and the world, concomitant information from other modalities, and the speaker are brought to bear immediately, by the same fast-acting brain system that combines the meanings of individual words into a message-level representation. Underlying the one-step model is the immediacy assumption, according to which all available information will immediately be used to co-determine the interpretation of the speaker's message. Functional magnetic resonance imaging data that we collected indicate that Broca's area plays an important role in semantic unification. Language comprehension involves the rapid incorporation of information in a 'single unification space', coming from a broader range of cognitive domains than presupposed in the standard two-step model of interpretation.


Subject(s)
Language; Semantics; Speech Perception; Evoked Potentials; Frontal Lobe/physiology; Humans; Magnetic Resonance Imaging
20.
J Cogn Neurosci ; 19(2): 228-36, 2007 Feb.
Article in English | MEDLINE | ID: mdl-17280512

ABSTRACT

In this event-related brain potentials (ERPs) study, we explored the possibility of selectively tracking referential ambiguity during spoken discourse comprehension. Earlier ERP research has shown that referentially ambiguous nouns (e.g., "the girl" in a two-girl context) elicit a frontal, sustained negative shift relative to unambiguous control words. In the current study, we examined whether this ERP effect reflects "deep" situation model ambiguity or "superficial" textbase ambiguity. We contrasted these different interpretations by investigating whether a discourse-level semantic manipulation that prevents referential ambiguity also averts the elicitation of a referentially induced ERP effect. We compared ERPs elicited by nouns that were referentially nonambiguous but were associated with two discourse entities (e.g., "the girl" with two girls introduced in the context, but one of whom has died or left the scene), with referentially ambiguous and nonambiguous control words. Although the referentially ambiguous nouns elicited a frontal negative shift compared to control words, the "double bound" but referentially nonambiguous nouns did not. These results suggest that it is possible to selectively track referential ambiguity with ERPs at the level that is most relevant to discourse comprehension, the situation model.


Subject(s)
Brain/physiology; Contingent Negative Variation/physiology; Semantics; Speech Perception/physiology; Acoustic Stimulation; Adult; Brain Mapping; Electroencephalography; Female; Humans; Male; Reaction Time; Reference Values