Results 1 - 20 of 26
1.
Front Psychol ; 10: 318, 2019.
Article in English | MEDLINE | ID: mdl-30858810

ABSTRACT

Facial electromyography research shows that corrugator supercilii ("frowning muscle") activity tracks the emotional valence of linguistic stimuli. Grounded or embodied accounts of language processing take such activity to reflect the simulation or "re-enactment" of emotion, as part of the retrieval of word meaning (e.g., of "furious") and/or of building a situation model (e.g., for "Mark is furious"). However, the same muscle also expresses our primary emotional evaluation of things we encounter. Language-driven affective simulation can easily be at odds with the reader's affective evaluation of what language describes (e.g., when we like Mark being furious). In a previous experiment ('t Hart et al., 2018) we demonstrated that neither language-driven simulation nor affective evaluation alone seems sufficient to explain the corrugator patterns that emerge during online language comprehension in these complex cases. Those results showed support for a multiple-drivers account of corrugator activity, where both simulation and evaluation processes contribute to the activation patterns observed in the corrugator. The study at hand replicates and extends these findings. With more refined control over when precisely affective information became available in a narrative, we again find results that speak against an interpretation of corrugator activity in terms of simulation or evaluation alone, and as such support the multiple-drivers account. Additional evidence suggests that the simulation driver involved reflects simulation at the level of situation model construction, rather than at the level of retrieving concepts from long-term memory. In all, by giving insights into how language-driven simulation meshes with the reader's evaluative responses during an unfolding narrative, this study contributes to the understanding of affective language comprehension.

2.
Front Psychol ; 9: 613, 2018.
Article in English | MEDLINE | ID: mdl-29760671

ABSTRACT

Facial electromyography research shows that corrugator supercilii ("frowning muscle") activity tracks the emotional valence of linguistic stimuli. Grounded or embodied accounts of language processing take such activity to reflect the simulation or "reenactment" of emotion, as part of the retrieval of word meaning (e.g., of "furious") and/or of building a situation model (e.g., for "Mark is furious"). However, the same muscle also expresses our primary emotional evaluation of things we encounter. Language-driven affective simulation can easily be at odds with the reader's affective evaluation of what language describes (e.g., when we like Mark being furious). To examine what happens in such cases, we independently manipulated simulation valence and moral evaluative valence in short narratives. Participants first read about characters behaving in a morally laudable or objectionable fashion: this immediately led to corrugator activity reflecting positive or negative affect. Next, and critically, a positive or negative event befell these same characters. Here, the corrugator response did not track the valence of the event, but reflected both simulation and moral evaluation. This highlights the importance of unpacking coarse notions of affective meaning in language processing research into components that reflect simulation and evaluation. Our results also call for a re-evaluation of the interpretation of corrugator EMG, as well as other affect-related facial muscles and other peripheral physiological measures, as unequivocal indicators of simulation. Research should explore how such measures behave in richer and more ecologically valid language processing, such as narrative, thereby refining our understanding of simulation within a framework of grounded language comprehension.
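The 2 x 2 logic of this design (simulation valence crossed with moral evaluative valence) can be illustrated with a minimal analysis sketch in which corrugator activity is summarized per cell as baseline-corrected change during the critical event. Everything below (data layout, sampling rate, baseline and analysis windows, and the values themselves) is an assumption made for illustration; it is not the authors' pipeline.

```python
import numpy as np

# Hypothetical corrugator EMG data: trials x samples, assumed sampled at
# 1000 Hz so indices below correspond to milliseconds. Conditions cross
# the valence of the event befalling the character with the character's
# prior moral behavior. None of these numbers come from the paper.
rng = np.random.default_rng(0)
baseline = slice(0, 500)      # assumed 500 ms pre-event baseline
window = slice(500, 2500)     # assumed 2 s post-onset analysis window

conditions = {
    ("positive_event", "laudable"):      rng.normal(10, 2, (40, 2500)),
    ("positive_event", "objectionable"): rng.normal(11, 2, (40, 2500)),
    ("negative_event", "laudable"):      rng.normal(12, 2, (40, 2500)),
    ("negative_event", "objectionable"): rng.normal(11, 2, (40, 2500)),
}

def corrugator_change(trials):
    """Mean percent change from baseline in the analysis window."""
    base = trials[:, baseline].mean(axis=1, keepdims=True)
    return float((100 * (trials[:, window] - base) / base).mean())

for (event, character), trials in conditions.items():
    print(f"{event:15s} {character:15s} {corrugator_change(trials):6.2f} %")
```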

3.
Soc Neurosci ; 12(2): 182-193, 2017 04.
Article in English | MEDLINE | ID: mdl-26985787

ABSTRACT

Insults always sting, but the context in which they are delivered can make the effects even worse. Here we test how the brain processes insults, and whether and how the neurocognitive processing of insults is changed by the presence of a laughing crowd. Event-related potentials showed that insults, compared to compliments, evoked an increase in N400 amplitude (indicating increased lexical-semantic processing) and LPP amplitude (indicating emotional processing) when presented in isolation. When insults were perceived in the presence of a laughing crowd, the difference in N400 amplitude disappeared, while the difference in LPP activation increased. These results show that even without laughter, verbal insults receive additional neural processing over compliments, both at the lexical-semantic and emotional level. The presence of a laughing crowd has a direct effect on the neurocognitive processing of insults, leading to stronger and more prolonged emotional processing.
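The pattern reported here amounts to an interaction: the insult-minus-compliment difference shrinks in the N400 window and grows in the LPP window once a laughing crowd is added. A minimal difference-of-differences sketch is given below; the condition means, data shapes, and values are invented for illustration and do not come from the study.

```python
import numpy as np

# Hypothetical per-participant mean amplitudes (in microvolts) per condition:
# insult vs. compliment, presented alone vs. with a laughing crowd.
# All numbers are invented to mimic the qualitative pattern described above.
rng = np.random.default_rng(3)
n = 24

def mean_amp(mu):
    """Simulate per-participant mean amplitudes around an assumed mean."""
    return rng.normal(mu, 1.0, n)

windows = {
    # N400 window: insult effect present alone, absent with laughter
    "N400": {("insult", "alone"): mean_amp(-2.0), ("compliment", "alone"): mean_amp(0.0),
             ("insult", "crowd"): mean_amp(-0.2), ("compliment", "crowd"): mean_amp(0.0)},
    # LPP window: insult effect larger with laughter
    "LPP":  {("insult", "alone"): mean_amp(1.5),  ("compliment", "alone"): mean_amp(0.5),
             ("insult", "crowd"): mean_amp(3.0),  ("compliment", "crowd"): mean_amp(0.5)},
}

for window, amps in windows.items():
    diff_alone = amps[("insult", "alone")] - amps[("compliment", "alone")]
    diff_crowd = amps[("insult", "crowd")] - amps[("compliment", "crowd")]
    interaction = diff_crowd - diff_alone   # difference of differences
    print(f"{window}: insult effect alone = {diff_alone.mean():+.2f} uV, "
          f"with crowd = {diff_crowd.mean():+.2f} uV, "
          f"interaction = {interaction.mean():+.2f} uV")
```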


Subject(s)
Brain/physiology , Laughter , Rejection, Psychology , Social Perception , Speech Perception/physiology , Adolescent , Adult , Analysis of Variance , Electroencephalography , Evoked Potentials , Female , Humans , Interpersonal Relations , Male , Neuropsychological Tests , Young Adult
4.
Front Psychol ; 4: 505, 2013.
Article in English | MEDLINE | ID: mdl-23986725

ABSTRACT

In neurocognitive research on language, the processing principles of the system at hand are usually assumed to be relatively invariant. However, research on attention, memory, decision-making, and social judgment has shown that mood can substantially modulate how the brain processes information. For example, in a bad mood, people typically have a narrower focus of attention and rely less on heuristics. In the face of such pervasive mood effects elsewhere in the brain, it seems unlikely that language processing would remain untouched. In an EEG experiment, we manipulated the mood of participants just before they read texts that confirmed or disconfirmed verb-based expectations about who would be talked about next (e.g., that "David praised Linda because … " would continue about Linda, not David), or that respected or violated a syntactic agreement rule (e.g., "The boys turns"). ERPs showed that mood had little effect on syntactic parsing, but did substantially affect referential anticipation: whereas readers anticipated information about a specific person when they were in a good mood, a bad mood completely abolished such anticipation. A behavioral follow-up experiment suggested that a bad mood did not interfere with verb-based expectations per se, but prevented readers from using that information rapidly enough to predict upcoming reference on the fly, as the sentence unfolds. In all, our results reveal that background mood, a rather unobtrusive affective state, selectively changes a crucial aspect of real-time language processing. This observation fits well with other observed interactions between language processing and affect (emotions, preferences, attitudes, mood), and more generally testifies to the importance of studying "cold" cognitive functions in relation to "hot" aspects of the brain.

5.
Front Psychol ; 3: 190, 2012.
Article in English | MEDLINE | ID: mdl-22715332

ABSTRACT

During spoken language interpretation, listeners rapidly relate the meaning of each individual word to what has been said before. However, spoken words often contain spurious other words, like day in daisy, or dean in sardine. Do listeners also relate the meaning of such unintended, spurious words to the prior context? We used ERPs to look for transient meaning-based N400 effects in sentences that were completely plausible at the level of words intended by the speaker, but contained an embedded word whose meaning clashed with the context. Although carrier words with an initial embedding (day in daisy) did not elicit an embedding-related N400 effect relative to matched control words without embedding, carrier words with a final embedding (dean in sardine) did elicit such an effect. Together with prior work from our lab and the results of a Shortlist B simulation, our findings suggest that listeners do semantically interpret embedded words, albeit not under all conditions. We explain the latter by assuming that the sense-making system adjusts its hypothesis for how to interpret the external input at every new syllable, in line with recent ideas of active sampling in perception.

6.
Soc Cogn Affect Neurosci ; 7(2): 173-83, 2012 Feb.
Article in English | MEDLINE | ID: mdl-21148175

ABSTRACT

When an adult claims he cannot sleep without his teddy bear, people tend to react surprised. Language interpretation is, thus, influenced by social context, such as who the speaker is. The present study reveals inter-individual differences in brain reactivity to social aspects of language. Whereas women showed brain reactivity when stereotype-based inferences about a speaker conflicted with the content of the message, men did not. This sex difference in social information processing can be explained by a specific cognitive trait, one's ability to empathize. Individuals who empathize to a greater degree revealed larger N400 effects (as well as a larger increase in γ-band power) to socially relevant information. These results indicate that individuals with high-empathizing skills are able to rapidly integrate information about the speaker with the content of the message, as they make use of voice-based inferences about the speaker to process language in a top-down manner. In contrast, individuals with lower empathizing skills did not use information about social stereotypes in implicit sentence comprehension, but rather took a more bottom-up approach to the processing of these social pragmatic sentences.


Subject(s)
Empathy/physiology , Evoked Potentials/physiology , Language , Adolescent , Adult , Brain/physiology , Brain Mapping , Female , Humans , Individuality , Male , Semantics , Surveys and Questionnaires , Voice/physiology , Young Adult
7.
J Cogn Neurosci ; 22(11): 2618-26, 2010 Nov.
Article in English | MEDLINE | ID: mdl-19702463

ABSTRACT

In an ERP experiment, we examined whether listeners, when making sense of spoken utterances, take into account the meaning of spurious words that are embedded in longer words, either at their onsets (e.g., pie in pirate) or at their offsets (e.g., pain in champagne). In the experiment, Dutch listeners heard Dutch words with initial or final embeddings presented in a sentence context that did or did not support the meaning of the embedded word, while equally supporting the longer carrier word. The N400 at the carrier words was modulated by the semantic fit of the embedded words, indicating that listeners briefly relate the meaning of initial- and final-embedded words to the sentential context, even though these words were not intended by the speaker. These findings help us understand the dynamics of initial sense-making and its link to lexical activation. In addition, they shed new light on the role of lexical competition and the debate concerning the lexical activation of final-embedded words.


Subject(s)
Evoked Potentials, Auditory/physiology , Semantics , Speech Perception/physiology , Vocabulary , Acoustic Stimulation/methods , Electroencephalography/methods , Female , Humans , Male , Reaction Time/physiology , Young Adult
8.
Psychol Sci ; 20(9): 1092-9, 2009 Sep.
Article in English | MEDLINE | ID: mdl-19656340

ABSTRACT

How does the brain respond to statements that clash with a person's value system? We recorded event-related brain potentials while respondents from contrasting political-ethical backgrounds completed an attitude survey on drugs, medical ethics, social conduct, and other issues. Our results show that value-based disagreement is unlocked by language extremely rapidly, within 200 to 250 ms after the first word that indicates a clash with the reader's value system (e.g., "I think euthanasia is an acceptable/unacceptable..."). Furthermore, strong disagreement rapidly influences the ongoing analysis of meaning, which indicates that even very early processes in language comprehension are sensitive to a person's value system. Our results testify to rapid reciprocal links between neural systems for language and for valuation.


Subject(s)
Arousal/physiology , Attitude , Conflict, Psychological , Electroencephalography , Evoked Potentials/physiology , Judgment , Morals , Reading , Social Values , Adult , Brain Mapping , Cerebral Cortex/physiology , Christianity , Female , Humans , Male , Middle Aged , Politics , Religion and Psychology , Semantics
9.
Brain Res ; 1291: 92-101, 2009 Sep 29.
Article in English | MEDLINE | ID: mdl-19631622

ABSTRACT

Prior research has indicated that readers and listeners can use information in the prior discourse to rapidly predict specific upcoming words, as the text is unfolding. Here we used event-related potentials to explore whether the ability to make rapid online predictions depends on a reader's working memory capacity (WMC). Readers with low WMC were hypothesized to differ from high WMC readers either in their overall capability to make predictions (because of their more limited cognitive resources) or in the way they deal with information that disconfirms those predictions. High and low WMC participants read highly constraining stories that supported the prediction of a specific noun, mixed with coherent but essentially unpredictive 'prime control' stories that contained the same content words as the predictive stories. To test whether readers were anticipating upcoming words, critical nouns were preceded by a determiner whose gender agreed or disagreed with the gender of the expected noun. In predictive stories, both high and low WMC readers displayed an early negative deflection (300-600 ms) to unexpected determiners, which was not present in prime control stories. Only the low WMC participants displayed an additional later negativity (900-1500 ms) to unexpected determiners. This pattern of results suggests that WMC does not influence the ability to anticipate upcoming words per se, but does change the way in which readers deal with information that disconfirms the generated prediction.
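The two effects reported here are defined by latency windows taken from the abstract (300-600 ms and 900-1500 ms after determiner onset). The sketch below shows a plain window-based mean-amplitude summary of hypothetical difference waves per WMC group; the epoch timing, sampling rate, and data are assumptions, not the study's actual analysis.

```python
import numpy as np

# Hypothetical ERP difference waves (unexpected minus expected determiners):
# subjects x trials x samples, assumed 500 Hz, epoch from -200 ms to +1600 ms.
# Shapes, sampling rate, and values are illustrative assumptions only.
rng = np.random.default_rng(1)
fs = 500       # assumed sampling rate in Hz
t0 = -0.2      # assumed epoch start in seconds

def window_mean(epochs, start_s, end_s):
    """Mean amplitude over trials and samples in a latency window."""
    i0 = int((start_s - t0) * fs)
    i1 = int((end_s - t0) * fs)
    return epochs[..., i0:i1].mean()

groups = {"high_WMC": rng.normal(0, 1, (12, 40, 900)),
          "low_WMC":  rng.normal(0, 1, (12, 40, 900))}

for group, diff_wave in groups.items():
    early = window_mean(diff_wave, 0.300, 0.600)   # early negativity window
    late = window_mean(diff_wave, 0.900, 1.500)    # later negativity window
    print(f"{group}: 300-600 ms = {early:+.2f} uV, 900-1500 ms = {late:+.2f} uV")
```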


Subject(s)
Association , Cerebral Cortex/physiology , Language , Memory, Short-Term/physiology , Adolescent , Adult , Analysis of Variance , Brain Mapping , Cues , Electroencephalography , Evoked Potentials, Visual/physiology , Female , Humans , Male , Photic Stimulation , Reading , Signal Processing, Computer-Assisted
10.
J Cogn Neurosci ; 21(11): 2085-99, 2009 Nov.
Article in English | MEDLINE | ID: mdl-19016606

ABSTRACT

When interpreting a message, a listener takes into account several sources of linguistic and extralinguistic information. Here we focused on one particular form of extralinguistic information, certain speaker characteristics as conveyed by the voice. Using functional magnetic resonance imaging, we examined the neural structures involved in the unification of sentence meaning and voice-based inferences about the speaker's age, sex, or social background. We found enhanced activation in the inferior frontal gyrus bilaterally (BA 45/47) during listening to sentences whose meaning was incongruent with inferred speaker characteristics. Furthermore, our results showed an overlap in brain regions involved in unification of speaker-related information and those used for the unification of semantic and world knowledge information [inferior frontal gyrus bilaterally (BA 45/47) and left middle temporal gyrus (BA 21)]. These findings provide evidence for a shared neural unification system for linguistic and extralinguistic sources of information and extend the existing knowledge about the role of inferior frontal cortex as a crucial component for unification during language comprehension.


Subject(s)
Comprehension/physiology , Concept Formation/physiology , Frontal Lobe/physiology , Language , Social Perception , Speech Perception/physiology , Adult , Brain Mapping , Female , Humans , Language Tests , Magnetic Resonance Imaging , Male , Psycholinguistics , Reference Values , Voice Quality/physiology , Young Adult
11.
Cereb Cortex ; 19(7): 1493-503, 2009 Jul.
Article in English | MEDLINE | ID: mdl-19001084

ABSTRACT

Sentence comprehension requires the retrieval of single word information from long-term memory, and the integration of this information into multiword representations. The current functional magnetic resonance imaging study explored the hypothesis that the left posterior temporal gyrus supports the retrieval of lexical-syntactic information, whereas left inferior frontal gyrus (LIFG) contributes to syntactic unification. Twenty-eight subjects read sentences and word sequences containing word-category (noun-verb) ambiguous words at critical positions. Regions contributing to the syntactic unification process should show enhanced activation for sentences compared to words, and only within sentences display a larger signal for ambiguous than unambiguous conditions. The posterior LIFG showed exactly this predicted pattern, confirming our hypothesis that LIFG contributes to syntactic unification. The left posterior middle temporal gyrus was activated more for ambiguous than unambiguous conditions (main effect over both sentences and word sequences), as predicted for regions subserving the retrieval of lexical-syntactic information from memory. We conclude that understanding language involves the dynamic interplay between left inferior frontal and left posterior temporal regions.


Subject(s)
Brain Mapping/methods , Cerebral Cortex/physiology , Comprehension/physiology , Evoked Potentials/physiology , Language , Magnetic Resonance Imaging/methods , Semantics , Adolescent , Adult , Female , Humans , Male , Young Adult
12.
Brain Lang ; 106(2): 119-31, 2008 Aug.
Article in English | MEDLINE | ID: mdl-18556057

ABSTRACT

In this event-related brain potential (ERP) study, we examined how semantic and referential aspects of anaphoric noun phrase resolution interact during discourse comprehension. We used a full factorial design that crossed referential ambiguity with semantic incoherence. Ambiguous anaphors elicited a sustained negative shift (Nref effect), and incoherent anaphors elicited an N400 effect. Simultaneously ambiguous and incoherent anaphors elicited an ERP pattern resembling that of the incoherent anaphors. These results suggest that semantic incoherence can preclude readers from engaging in anaphoric inferencing. Furthermore, approximately half of our participants unexpectedly showed common late positive effects to the three types of problematic anaphors. We relate the latter finding to recent accounts of what the P600 might reflect, and to the role of individual differences therein.


Subject(s)
Cognition/physiology , Comprehension/physiology , Evoked Potentials/physiology , Semantics , Speech Perception/physiology , Adult , Brain/physiology , Brain Mapping/methods , Electroencephalography , Female , Humans , Language Tests/statistics & numerical data , Male , Mental Processes/physiology , Models, Psychological , Photic Stimulation/methods , Psycholinguistics/methods , Semantic Differential/statistics & numerical data
13.
J Cogn Neurosci ; 20(4): 580-91, 2008 Apr.
Article in English | MEDLINE | ID: mdl-18052777

ABSTRACT

When do listeners take into account who the speaker is? We asked people to listen to utterances whose content sometimes did not match inferences based on the identity of the speaker (e.g., "If only I looked like Britney Spears" in a male voice, or "I have a large tattoo on my back" spoken with an upper-class accent). Event-related brain responses revealed that the speaker's identity is taken into account as early as 200-300 msec after the beginning of a spoken word, and is processed by the same early interpretation mechanism that constructs sentence meaning based on just the words. This finding is difficult to reconcile with standard "Gricean" models of sentence interpretation in which comprehenders initially compute a local, context-independent meaning for the sentence ("semantics") before working out what it really means given the wider communicative context and the particular speaker ("pragmatics"). Because the observed brain response hinges on voice-based and usually stereotype-dependent inferences about the speaker, it also shows that listeners rapidly classify speakers on the basis of their voices and bring the associated social stereotypes to bear on what is being said. According to our event-related potential results, language comprehension takes very rapid account of the social context, and the construction of meaning based on language alone cannot be separated from the social aspects of language use. The linguistic brain relates the message to the speaker immediately.


Subject(s)
Cerebral Cortex/physiology , Comprehension/physiology , Evoked Potentials/physiology , Psycholinguistics , Social Perception , Speech Perception/physiology , Acoustic Stimulation , Adult , Female , Humans , Male , Reference Values , Semantics , Verbal Behavior/physiology
14.
BMC Neurosci ; 8: 89, 2007 Oct 26.
Article in English | MEDLINE | ID: mdl-17963486

ABSTRACT

BACKGROUND: Recently, several studies have shown that people use contextual information to make predictions about the rest of the sentence or story as the text unfolds. Using event-related potentials (ERPs), we tested whether these on-line predictions are based on a message-level representation of the discourse or on simple automatic activation by individual words. Subjects heard short stories that were highly constraining for one specific noun, or stories that were not specifically predictive but contained the same prime words as the predictive stories. To test whether listeners make specific predictions, critical nouns were preceded by an adjective that was inflected according to, or in contrast with, the gender of the expected noun. RESULTS: When the message of the preceding discourse was predictive, adjectives with an unexpected gender inflection evoked a negative deflection over right-frontal electrodes between 300 and 600 ms. This effect was not present in the prime control context, indicating that the prediction mismatch does not hinge on word-based priming but is based on the actual message of the discourse. CONCLUSION: When listening to a constraining discourse, people rapidly make very specific predictions about the remainder of the story, as the story unfolds. These predictions are not simply based on word-based automatic activation, but take into account the actual message of the discourse.


Subject(s)
Cerebral Cortex/physiology , Mental Processes/physiology , Semantics , Verbal Behavior/physiology , Adolescent , Adult , Brain Mapping , Electroencephalography , Evoked Potentials/physiology , Female , Humans , Language Tests , Male , Reaction Time/physiology , Signal Processing, Computer-Assisted
15.
Neuroimage ; 37(3): 993-1004, 2007 Sep 01.
Article in English | MEDLINE | ID: mdl-17611124

ABSTRACT

In an event-related fMRI study, we examined the cortical networks involved in establishing reference during language comprehension. We compared BOLD responses to sentences containing referentially ambiguous pronouns (e.g., "Ronald told Frank that he..."), referentially failing pronouns (e.g., "Rose told Emily that he...") or coherent pronouns. Referential ambiguity selectively recruited medial prefrontal regions, suggesting that readers engaged in problem-solving to select a unique referent from the discourse model. Referential failure elicited activation increases in brain regions associated with morpho-syntactic processing, and, for those readers who took failing pronouns to refer to unmentioned entities, additional regions associated with elaborative inferencing were observed. The networks activated by these two referential problems did not overlap with the network activated by a standard semantic anomaly. Instead, we observed a double dissociation, in that the systems activated by semantic anomaly are deactivated by referential ambiguity, and vice versa. This inverse coupling may reflect the dynamic recruitment of semantic and episodic processing to resolve semantically or referentially problematic situations. More generally, our findings suggest that neurocognitive accounts of language comprehension need to address not just how we parse a sentence and combine individual word meanings, but also how we determine who's who and what's what during language comprehension.


Subject(s)
Brain/physiology , Cognition/physiology , Evoked Potentials/physiology , Language , Speech Perception/physiology , Adult , Female , Humans , Male
16.
Brain Res ; 1153: 166-77, 2007 Jun 11.
Article in English | MEDLINE | ID: mdl-17466281

ABSTRACT

A discourse context provides a reader with a great deal of information that can provide constraints for further language processing, at several different levels. In this experiment we used event-related potentials (ERPs) to explore whether discourse-generated contextual constraints are based on the precise message of the discourse or, more 'loosely', on the scenario suggested by one or more content words in the text. Participants read constraining stories whose precise message rendered a particular word highly predictable ("The manager thought that the board of directors should assemble to discuss the issue. He planned a...[meeting]") as well as non-constraining control stories that were only biasing in virtue of the scenario suggested by some of the words ("The manager thought that the board of directors need not assemble to discuss the issue. He planned a..."). Coherent words that were inconsistent with the message-level expectation raised in a constraining discourse (e.g., "session" instead of "meeting") elicited a classic centroparietal N400 effect. However, when the same words were only inconsistent with the scenario loosely suggested by earlier words in the text, they elicited a different negativity around 400 ms, with a more anterior, left-lateralized maximum. The fact that the discourse-dependent N400 effect cannot be reduced to scenario-mediated priming reveals that it reflects the rapid use of precise message-level constraints in comprehension. At the same time, the left-lateralized negativity in non-constraining stories suggests that, at least in the absence of strong message-level constraints, scenario-mediated priming does also rapidly affect comprehension.


Subject(s)
Comprehension/physiology , Evoked Potentials/physiology , Language , Reading , Adolescent , Adult , Brain Mapping , Electroencephalography/methods , Female , Humans , Male
17.
J Cogn Neurosci ; 19(2): 228-36, 2007 Feb.
Article in English | MEDLINE | ID: mdl-17280512

ABSTRACT

In this event-related brain potentials (ERPs) study, we explored the possibility of selectively tracking referential ambiguity during spoken discourse comprehension. Earlier ERP research has shown that referentially ambiguous nouns (e.g., "the girl" in a two-girl context) elicit a frontal, sustained negative shift relative to unambiguous control words. In the current study, we examined whether this ERP effect reflects "deep" situation model ambiguity or "superficial" textbase ambiguity. We contrasted these different interpretations by investigating whether a discourse-level semantic manipulation that prevents referential ambiguity also averts the elicitation of a referentially induced ERP effect. We compared ERPs elicited by nouns that were referentially nonambiguous but were associated with two discourse entities (e.g., "the girl" with two girls introduced in the context, but one of whom has died or left the scene), with referentially ambiguous and nonambiguous control words. Although referentially ambiguous nouns elicited a frontal negative shift compared to control words, the "double bound" but referentially nonambiguous nouns did not. These results suggest that it is possible to selectively track referential ambiguity with ERPs at the level that is most relevant to discourse comprehension, the situation model.


Subject(s)
Brain/physiology , Contingent Negative Variation/physiology , Semantics , Speech Perception/physiology , Acoustic Stimulation , Adult , Brain Mapping , Electroencephalography , Female , Humans , Male , Reaction Time , Reference Values
18.
Brain Res ; 1146: 158-71, 2007 May 18.
Article in English | MEDLINE | ID: mdl-16916496

ABSTRACT

The electrophysiology of language comprehension has long been dominated by research on syntactic and semantic integration. However, to understand expressions like "he did it" or "the little girl", combining word meanings in accordance with semantic and syntactic constraints is not enough-readers and listeners also need to work out what or who is being referred to. We review our event-related brain potential research on the processes involved in establishing reference, and present a new experiment in which we examine when and how the implicit causality associated with specific interpersonal verbs affects the interpretation of a referentially ambiguous pronoun. The evidence suggests that upon encountering a singular noun or pronoun, readers and listeners immediately inspect their situation model for a suitable discourse entity, such that they can discriminate between having too many, too few, or exactly the right number of referents within at most half a second. Furthermore, our implicit causality findings indicate that a fragment like "David praised Linda because..." can immediately foreground a particular referent, to the extent that a subsequent "he" is at least initially construed as a syntactic error. In all, our brain potential findings suggest that referential processing is highly incremental, and not necessarily contingent upon the syntax. In addition, they demonstrate that we can use ERPs to relatively selectively keep track of how readers and listeners establish reference.


Subject(s)
Comprehension/physiology , Evoked Potentials/physiology , Frontal Lobe/physiology , Language , Linguistics , Brain Mapping , Coumarins , Electroencephalography , Electrophysiology/methods , Emotions , Humans , Interpersonal Relations
19.
Brain Res ; 1118(1): 155-67, 2006 Nov 06.
Article in English | MEDLINE | ID: mdl-16956594

ABSTRACT

Although we usually have no trouble finding the right antecedent for a pronoun, the co-reference relations between pronouns and antecedents in everyday language are often 'formally' ambiguous. But a pronoun is only really ambiguous if a reader or listener indeed perceives it to be ambiguous. Whether this is the case may depend on at least two factors: the language processing skills of an individual reader, and the contextual bias towards one particular referential interpretation. In the current study, we used event related brain potentials (ERPs) to explore how both these factors affect the resolution of referentially ambiguous pronouns. We compared ERPs elicited by formally ambiguous and non-ambiguous pronouns that were embedded in simple sentences (e.g., "Jennifer Lopez told Madonna that she had too much money."). Individual differences in language processing skills were assessed with the Reading Span task, while the contextual bias of each sentence (up to the critical pronoun) had been assessed in a referential cloze pretest. In line with earlier research, ambiguous pronouns elicited a sustained, frontal negative shift relative to non-ambiguous pronouns at the group-level. The size of this effect was correlated with Reading Span score, as well as with contextual bias. These results suggest that whether a reader perceives a formally ambiguous pronoun to be ambiguous is subtly co-determined by both individual language processing skills and contextual bias.
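The individual-differences claim is a correlation between the size of the ambiguity-related frontal negativity and two predictors, Reading Span score and contextual bias. A minimal sketch of that correlation analysis follows; the participant values and scales are hypothetical, and only the analysis logic mirrors the description above.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-participant values; none of these numbers are real data.
rng = np.random.default_rng(2)
n = 30
reading_span = rng.normal(3.0, 0.7, n)       # Reading Span scores (assumed scale)
contextual_bias = rng.uniform(0.0, 1.0, n)   # cloze-based bias toward one referent
# Effect size per participant: mean amplitude difference, ambiguous minus
# non-ambiguous pronouns (simulated so that it covaries with both predictors).
ambiguity_effect = -0.5 * reading_span - 1.0 * contextual_bias + rng.normal(0, 1, n)

for name, predictor in [("Reading Span", reading_span),
                        ("contextual bias", contextual_bias)]:
    r, p = pearsonr(predictor, ambiguity_effect)
    print(f"Ambiguity effect vs {name}: r = {r:+.2f}, p = {p:.3f}")
```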


Subject(s)
Cerebral Cortex/physiology , Evoked Potentials/physiology , Language , Pattern Recognition, Visual/physiology , Reading , Verbal Behavior/physiology , Adult , Bias , Brain Mapping , Cerebral Cortex/anatomy & histology , Female , Functional Laterality/physiology , Humans , Language Tests , Male , Neuropsychological Tests , Observer Variation , Photic Stimulation , Semantics
20.
J Cogn Neurosci ; 18(7): 1098-111, 2006 Jul.
Article in English | MEDLINE | ID: mdl-16839284

ABSTRACT

In linguistic theories of how sentences encode meaning, a distinction is often made between the context-free rule-based combination of lexical-semantic features of the words within a sentence ("semantics"), and the contributions made by wider context ("pragmatics"). In psycholinguistics, this distinction has led to the view that listeners initially compute a local, context-independent meaning of a phrase or sentence before relating it to the wider context. An important aspect of such a two-step perspective on interpretation is that local semantics cannot initially be overruled by global contextual factors. In two spoken-language event-related potential experiments, we tested the viability of this claim by examining whether discourse context can overrule the impact of the core lexical-semantic feature animacy, considered to be an innate organizing principle of cognition. Two-step models of interpretation predict that verb-object animacy violations, as in "The girl comforted the clock," will always perturb the unfolding interpretation process, regardless of wider context. When presented in isolation, such anomalies indeed elicit a clear N400 effect, a sign of interpretive problems. However, when the anomalies were embedded in a supportive context (e.g., a girl talking to a clock about his depression), this N400 effect disappeared completely. Moreover, given a suitable discourse context (e.g., a story about an amorous peanut), animacy-violating predicates ("the peanut was in love") were actually processed more easily than canonical predicates ("the peanut was salted"). Our findings reveal that discourse context can immediately overrule local lexical-semantic violations, and therefore suggest that language comprehension does not involve an initially context-free semantic analysis.


Subject(s)
Acoustic Stimulation/methods , Electroencephalography/methods , Evoked Potentials, Auditory/physiology , Linguistics/methods , Love , Adult , Female , Humans , Language , Male