1.
Lang Speech ; 66(1): 202-213, 2023 Mar.
Article in English | MEDLINE | ID: mdl-35652369

ABSTRACT

Previous electroencephalography studies have yielded evidence for automatic processing of syntax and lexical stress. However, these studies looked at both effects in isolation, limiting their generalizability to everyday language comprehension. In the current study, we investigated automatic processing of grammatical agreement in the face of variation in lexical stress. Using an oddball paradigm, we measured the Mismatch Negativity (MMN) in Dutch-speaking participants while they listened to Dutch subject-verb sequences (linguistic context) or acoustically similar sequences in which the subject was replaced by filtered noise (nonlinguistic context). The verb forms differed in the inflectional suffix, rendering the subject-verb sequences grammatically correct or incorrect and leading to a difference in the stress pattern of the verb forms. We found that the MMNs were modulated in both the linguistic and the nonlinguistic conditions, suggesting that the processing load induced by variation in lexical stress can hinder early automatic processing of grammatical agreement. However, as the morphological differences between the verb forms correlated with differences in the number of syllables, an interpretation in terms of the prosodic structure of the sequences cannot be ruled out. Future research is needed to determine which of these factors (i.e., lexical stress, syllabic structure) most strongly modulates early syntactic processing.
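For readers unfamiliar with how an MMN is quantified in an oddball design, the sketch below shows the standard deviant-minus-standard difference wave computed from epoched EEG. The array shapes, sampling rate, electrode, and analysis window are illustrative assumptions, not details of this study's pipeline.

```python
# Minimal sketch of a mismatch negativity (MMN) difference wave.
# Assumes epoched EEG arrays of shape (n_trials, n_samples) for one
# electrode (e.g., Fz); shapes, rate, and window are assumptions.
import numpy as np

fs = 500                                  # sampling rate in Hz (assumed)
standard = np.random.randn(400, fs)       # placeholder standard epochs (1 s each)
deviant = np.random.randn(80, fs)         # placeholder deviant epochs

# ERP = average over trials; MMN = deviant ERP minus standard ERP.
erp_standard = standard.mean(axis=0)
erp_deviant = deviant.mean(axis=0)
mmn = erp_deviant - erp_standard

# The MMN is often quantified as the mean amplitude in a
# 100-250 ms post-stimulus window (window choice is an assumption).
window = slice(int(0.100 * fs), int(0.250 * fs))
print(f"Mean MMN amplitude in window: {mmn[window].mean():.3f} (a.u.)")
```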


Subject(s)
Electroencephalography, Language, Humans, Linguistics
2.
Atten Percept Psychophys ; 83(5): 2102-2112, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33786749

ABSTRACT

One of the most influential ideas in the domain of cognition is that of embodied cognition, according to which the experienced world results from an interplay between an organism's physiology and sensorimotor system and its environment. One aspect of this idea is that linguistic information activates sensory representations automatically. For example, hearing the word 'red' would automatically activate sensory representations of this color. But does linguistic information prioritize access to awareness of congruent visual information? Here, we show that linguistic verbal cues accelerate the entry of matching visual targets into awareness, using a breaking continuous flash suppression paradigm. In a speeded reaction time task, observers heard spoken color labels (e.g., red) followed by colored targets that were either congruent (red), incongruent (green), or neutral (a neutral noncolor word) with respect to the labels. Importantly, and in contrast to previous studies investigating a similar question, the incidence of congruent trials was not higher than that of incongruent trials. Our results show that reaction times (RTs) were selectively shortened for congruent verbal-visual pairings, and that this shortening occurred over a wide range of cue-target intervals. We suggest that linguistic verbal information preactivates sensory representations, so that hearing the word 'red' preactivates (visual) sensory information internally.
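As a rough illustration of the congruency comparison described here, the sketch below contrasts per-subject mean breakthrough RTs for congruent versus incongruent cue-target pairings with a paired t-test. All values and the effect size are fabricated placeholders, not the study's data or exact analysis.

```python
# Illustrative congruency analysis for a speeded b-CFS detection task.
# RT values are fabricated; congruent trials are assumed slightly faster.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 30
rt_congruent = rng.normal(1.50, 0.20, n_subjects)    # per-subject mean RT (s)
rt_incongruent = rng.normal(1.60, 0.20, n_subjects)

# Paired comparison: does cue-target congruence shorten breakthrough RTs?
t, p = stats.ttest_rel(rt_congruent, rt_incongruent)
print(f"Mean difference: {(rt_incongruent - rt_congruent).mean() * 1000:.0f} ms")
print(f"t({n_subjects - 1}) = {t:.2f}, p = {p:.4f}")
```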


Subject(s)
Cues, Linguistics, Cognition, Hearing, Humans, Reaction Time
3.
Cogn Emot ; 35(4): 690-704, 2021 06.
Article in English | MEDLINE | ID: mdl-33622178

ABSTRACT

In decision-making, people react differently to positive wordings than to negative ones, which may be caused by a negativity bias: a difference in the emotional force of these wordings. Because emotions are assumed to be activated more strongly in one's mother tongue, we predict a Foreign Language Effect, namely that such framing effects are larger in a native language than in a foreign one. In two experimental studies (N = 475 and N = 503) we tested this prediction for balanced and unbalanced second-language users of Spanish and English and for three types of valence framing effects. In Study 1 we observed risky-choice framing effects and attribute framing effects, but these were always equally large for native and foreign-language speakers. In our second study, we added a footbridge dilemma to the framing materials. Only for this task did we observe a Foreign Language Effect, indicating more utilitarian choices when the dilemma is presented in the L2. Hence, across two studies, we find no Foreign Language Effect for three types of valence framing, but we do find evidence for such an effect in a moral decision task. We discuss several alternative explanations for these results.


Subject(s)
Language, Multilingualism, Decision Making, Emotions, Humans, Morals
4.
Neuroimage ; 227: 117436, 2021 02 15.
Article in English | MEDLINE | ID: mdl-33039619

ABSTRACT

When we feel connected or engaged during social behavior, are our brains in fact "in sync" in a formal, quantifiable sense? Most studies addressing this question use highly controlled tasks with homogeneous subject pools. In an effort to take a more naturalistic approach, we collaborated with art institutions to crowdsource neuroscience data: over the course of 5 years, we collected electroencephalogram (EEG) data from thousands of museum and festival visitors who volunteered to engage in a 10-min face-to-face interaction. Pairs of participants with various levels of familiarity sat inside the Mutual Wave Machine, an artistic neurofeedback installation that translates real-time correlations of each pair's EEG activity into light patterns. Because such inter-participant EEG correlations are prone to noise contamination, in subsequent offline analyses we computed inter-brain coupling using Imaginary Coherence and Projected Power Correlations, two synchrony metrics that are largely immune to instantaneous, noise-driven correlations. When applying these methods to the two subsets of recorded data with the most consistent protocols, we found that pairs' trait empathy, social closeness, engagement, and social behavior (joint action and eye contact) consistently predicted the extent to which their brain activity became synchronized, most prominently in low-alpha (~7-10 Hz) and beta (~20-22 Hz) oscillations. These findings support an account in which shared engagement and joint action drive coupled neural activity and behavior during dynamic, naturalistic social interactions. To our knowledge, this work constitutes a first demonstration that an interdisciplinary, real-world, crowdsourced neuroscience approach may provide a promising method for collecting large, rich datasets pertaining to real-life face-to-face interactions. It also demonstrates how the general public can participate and engage in the scientific process outside of the laboratory. Institutions such as museums and galleries, or any other organization where the public actively engages out of self-motivation, can help facilitate this type of citizen-science research and support the collection of large datasets under scientifically controlled experimental conditions. To further enhance public interest in this out-of-the-lab experimental approach, the data and results of this study are disseminated through a website tailored to the general public (wp.nyu.edu/mutualwavemachine).
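The abstract names Imaginary Coherence as one of its synchrony metrics. A minimal sketch of that metric follows: the imaginary part of the normalized cross-spectrum, which discounts zero-lag (volume-conduction-like) correlations because those contribute only to the real part. The signals, sampling rate, and window length below are assumptions for illustration, not the authors' pipeline.

```python
# Minimal sketch of imaginary coherence between two EEG channels:
#   ImCoh(f) = Im(S_xy(f)) / sqrt(S_xx(f) * S_yy(f))
# Instantaneous (zero-lag) coupling is real-valued, so the imaginary
# part is largely immune to such noise-driven correlations.
import numpy as np
from scipy import signal

fs = 250                                   # sampling rate in Hz (assumed)
rng = np.random.default_rng(1)
x = rng.standard_normal(fs * 60)           # 60 s of channel 1 (placeholder)
y = np.roll(x, 5) + rng.standard_normal(fs * 60)   # lagged copy plus noise

f, sxy = signal.csd(x, y, fs=fs, nperseg=fs * 2)   # cross-spectral density
_, sxx = signal.welch(x, fs=fs, nperseg=fs * 2)    # auto-spectra
_, syy = signal.welch(y, fs=fs, nperseg=fs * 2)

imcoh = np.imag(sxy) / np.sqrt(sxx * syy)

# Report the mean magnitude in the low-alpha band highlighted above.
alpha = (f >= 7) & (f <= 10)
print(f"Mean |ImCoh| in 7-10 Hz: {np.abs(imcoh[alpha]).mean():.3f}")
```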


Subject(s)
Brain/physiology, Empathy/physiology, Social Behavior, Crowdsourcing, Electroencephalography, Humans, Interpersonal Relations, Neurofeedback
5.
Front Psychol ; 10: 318, 2019.
Article in English | MEDLINE | ID: mdl-30858810

ABSTRACT

Facial electromyography research shows that corrugator supercilii ("frowning muscle") activity tracks the emotional valence of linguistic stimuli. Grounded or embodied accounts of language processing take such activity to reflect the simulation or "re-enactment" of emotion, as part of the retrieval of word meaning (e.g., of "furious") and/or of building a situation model (e.g., for "Mark is furious"). However, the same muscle also expresses our primary emotional evaluation of things we encounter. Language-driven affective simulation can easily be at odds with the reader's affective evaluation of what language describes (e.g., when we like Mark being furious). In a previous experiment ('t Hart et al., 2018) we demonstrated that neither language-driven simulation nor affective evaluation alone seem sufficient to explain the corrugator patterns that emerge during online language comprehension in these complex cases. Those results showed support for a multiple-drivers account of corrugator activity, where both simulation and evaluation processes contribute to the activation patterns observed in the corrugator. The study at hand replicates and extends these findings. With more refined control over when precisely affective information became available in a narrative, we again find results that speak against an interpretation of corrugator activity in terms of simulation or evaluation alone, and as such support the multiple-drivers account. Additional evidence suggests that the simulation driver involved reflects simulation at the level of situation model construction, rather than at the level of retrieving concepts from long-term memory. In all, by giving insights into how language-driven simulation meshes with the reader's evaluative responses during an unfolding narrative, this study contributes to the understanding of affective language comprehension.

6.
Front Psychol ; 9: 613, 2018.
Article in English | MEDLINE | ID: mdl-29760671

ABSTRACT

Facial electromyography research shows that corrugator supercilii ("frowning muscle") activity tracks the emotional valence of linguistic stimuli. Grounded or embodied accounts of language processing take such activity to reflect the simulation or "reenactment" of emotion, as part of the retrieval of word meaning (e.g., of "furious") and/or of building a situation model (e.g., for "Mark is furious"). However, the same muscle also expresses our primary emotional evaluation of things we encounter. Language-driven affective simulation can easily be at odds with the reader's affective evaluation of what language describes (e.g., when we like Mark being furious). To examine what happens in such cases, we independently manipulated simulation valence and moral evaluative valence in short narratives. Participants first read about characters behaving in a morally laudable or objectionable fashion: this immediately led to corrugator activity reflecting positive or negative affect. Next, and critically, a positive or negative event befell these same characters. Here, the corrugator response did not track the valence of the event, but reflected both simulation and moral evaluation. This highlights the importance of unpacking coarse notions of affective meaning in language processing research into components that reflect simulation and evaluation. Our results also call for a re-evaluation of the interpretation of corrugator EMG, as well as other affect-related facial muscles and other peripheral physiological measures, as unequivocal indicators of simulation. Research should explore how such measures behave in richer and more ecologically valid language processing, such as narrative, thereby refining our understanding of simulation within a framework of grounded language comprehension.

7.
PLoS One ; 6(9): e24253, 2011.
Article in English | MEDLINE | ID: mdl-21935391

ABSTRACT

Neuropsychological and imaging studies have shown that the left supramarginal gyrus (SMG) is specifically involved in processing spatial terms (e.g., above, left of), which locate places and objects in the world. The current fMRI study focused on the nature and specificity of representing spatial language in the left SMG by combining behavioral and neuronal activation data in blind and sighted individuals. Data from the blind provide an elegant way to test the supramodal representation hypothesis, i.e., that abstract codes represent spatial relations, predicting no activation differences between blind and sighted individuals. Indeed, the left SMG was activated during spatial language processing in both blind and sighted individuals, implying a supramodal representation of spatial and other dimensional relations which does not require visual experience to develop. However, in the absence of vision, functional reorganization of the visual cortex is known to take place. An important consideration with respect to our finding is therefore the amount of functional reorganization during language processing in our blind participants. To assess this, the participants also performed a verb generation task. We observed that occipital areas were activated during covert language generation only in the blind. Additionally, in the first task, functional reorganization was observed for processing language with a high linguistic load. As the visual cortex was not specifically active for spatial contents in the first task, and no reorganization was observed in the SMG, the latter finding further supports the notion that the left SMG is the main node for a supramodal representation of verbal spatial relations.


Subject(s)
Language, Visual Cortex/physiology, Visually Impaired Persons, Adult, Female, Humans, Magnetic Resonance Imaging, Male, Middle Aged
8.
Perception ; 40(6): 725-38, 2011.
Article in English | MEDLINE | ID: mdl-21936300

ABSTRACT

We investigated which reference frames are preferred when matching spatial language to the haptic domain. Sighted, low-vision, and blind participants were tested on a haptic-sentence-verification task where participants had to haptically explore different configurations of a ball and a shoe and judge the relation between them. Results from the spatial relation "above", in the vertical plane, showed that various reference frames are available after haptic inspection of a configuration. Moreover, the pattern of results was similar for all three groups and resembled patterns found for the sighted on visual sentence-verification tasks. In contrast, when judging the spatial relation "in front", in the horizontal plane, the blind showed a markedly different response pattern. The sighted and low-vision participants did not show a clear preference for either the absolute/relative or the intrinsic reference frame when these frames were dissociated. The blind, on the other hand, showed a clear preference for the intrinsic reference frame. In the absence of a dominant cue, such as gravity in the vertical plane, the blind might emphasise the functional relationship between the objects owing to enhanced experience with haptic exploration of objects.


Subject(s)
Blindness/psychology; Discrimination, Psychological; Judgment; Orientation; Pattern Recognition, Physiological; Stereognosis; Adult; Aged; Cues; Depth Perception; Female; Humans; Male; Middle Aged; Touch; Vision, Low/psychology
9.
Q J Exp Psychol (Hove) ; 64(6): 1124-37, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21424986

ABSTRACT

In two experiments, the extent to which mental body representations contain spatial information was examined. Participants were asked to compare distances between various body parts. Similar to what happens when people compare distances on a real visual stimulus, they were faster as the distance differences between body parts became larger (Experiment 1), and this effect could not (only) be explained by the crossing of major bodily categories (umbilicus to knee vs. knee to ankle; Experiment 2). In addition, participants also performed simple animate/inanimate verification on a set of nouns. The nouns describing animate items were names of body parts. A spatial priming effect was found: Verification was faster for body part items preceded by body parts in close spatial proximity. This suggests automatic activation of spatial body information. Taken together, results from the distance comparison task and the property verification task showed that mental body representations contain both categorical and more metric spatial information. These findings are further discussed in terms of recent embodied cognition theories.
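The distance-comparison result described here is a symbolic distance effect: responses speed up as the two compared distances differ more. The toy sketch below shows how such an effect is typically tested, by regressing RT on the distance difference; all numbers are fabricated for illustration and do not come from the study.

```python
# Toy illustration of a symbolic distance effect: reaction time
# decreases as the difference between compared distances grows.
# All values are fabricated; this is not the study's data or analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
distance_diff_cm = rng.uniform(2, 40, 200)                 # |d1 - d2| per trial
rt_ms = 900 - 5.0 * distance_diff_cm + rng.normal(0, 50, 200)

# A reliably negative slope indicates the predicted distance effect.
res = stats.linregress(distance_diff_cm, rt_ms)
print(f"slope = {res.slope:.2f} ms/cm, r = {res.rvalue:.2f}, p = {res.pvalue:.2g}")
```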


Subject(s)
Body Image, Cognition/physiology, Space Perception/physiology, Adult, Analysis of Variance, Cues, Distance Perception/physiology, Female, Humans, Male, Reaction Time/physiology, Students/psychology, Task Performance and Analysis, Young Adult
10.
Acta Psychol (Amst) ; 132(2): 145-56, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19457462

ABSTRACT

In order to find objects or places in the world, you can draw on multiple sources of information, such as visual input, auditory input, and verbal directions. These different sources of information can be combined into a spatial image, which represents configurational characteristics of the world. This paper discusses the findings on the nature of spatial images and the role of spatial language in generating these spatial images in both blind and sighted individuals. Congenitally blind individuals have never experienced visual input, yet they are able to perform several tasks traditionally associated with spatial imagery, such as mental scanning, mental pathway completion, and mental clock-time comparison, though perhaps not always in the same manner as the sighted. Therefore, they offer invaluable insights into the exact nature of spatial images. We will argue that spatial imagery transcends the input from the different modalities to form an abstract mental representation while maintaining connections with those modalities. This suggests that the nature of spatial images is supramodal, which can explain functionally equivalent results from verbal and perceptual inputs for spatial situations, as well as subtle to moderate behavioral differences between the blind and the sighted.


Subject(s)
Blindness/psychology, Imagination/physiology, Language, Space Perception/physiology, Brain Mapping, Humans, Mental Processes, Psycholinguistics