1.
Neuropsychologia; 89: 217-224, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27329686

ABSTRACT

The anterior region of the left superior temporal gyrus/superior temporal sulcus (aSTG/STS) has been implicated in two very different cognitive functions: sentence processing and social-emotional processing. However, the vast majority of the sentence stimuli in previous reports have been of a social or social-emotional nature, suggesting that sentence processing may be confounded with semantic content. To evaluate this possibility, we had subjects read word lists that differed in phrase/constituent size (single words, 3-word phrases, 6-word sentences) and semantic content (social-emotional, social, and inanimate objects) while they were scanned at 7T. This allowed us to investigate whether the aSTG/STS responded to increasing constituent structure (with increased activity as a function of constituent size) with or without regard to a specific domain of concepts, i.e., social and/or social-emotional content. Activity in the left aSTG/STS increased with constituent size. However, this region was also modulated by content, such that responses were stronger for social-emotional concepts than for social or object stimuli. Reading also induced content-type effects in domain-specific semantic regions. Regions preferring social-emotional content included the aSTG/STS, inferior frontal gyrus, posterior STS, lateral fusiform, ventromedial prefrontal cortex, and amygdala, regions belonging to the "social brain", while regions preferring object content included the parahippocampal gyrus, retrosplenial cortex, and caudate, regions involved in object processing. These results suggest that semantic content affects higher-level linguistic processing and should be taken into account in future studies.
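
A minimal sketch of the 3 x 3 factorial design described in this abstract, with constituent size crossed against semantic content. The Python below is illustrative only; the condition labels and layout are assumptions, not materials from the paper:

from itertools import product

# Illustrative 3x3 design from the abstract: constituent size
# (1, 3, or 6 words) crossed with semantic content
# (social-emotional, social, inanimate objects).
sizes = [1, 3, 6]
contents = ["social-emotional", "social", "object"]

# Nine cells; the abstract's prediction is that aSTG/STS activity
# rises with the size factor, so size could serve as a parametric
# modulator in the analysis.
for size, content in product(sizes, contents):
    print(f"condition: {size}-word / {content}")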


Subject(s)
Bias , Emotions/physiology , Semantics , Social Behavior , Temporal Lobe/physiology , Adult , Analysis of Variance , Brain Mapping , Female , Heart Rate/physiology , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Oxygen/blood , Reading , Recognition, Psychology , Temporal Lobe/diagnostic imaging , Young Adult
2.
J Neurosci; 36(17): 4669-80, 2016 Apr 27.
Article in English | MEDLINE | ID: mdl-27122026

ABSTRACT

Synchronized behavior (chanting, singing, praying, dancing) is found in all human cultures and is central to religious, military, and political activities, which require people to act collaboratively and cohesively; however, we know little about the neural underpinnings of many kinds of synchronous behavior (e.g., vocal behavior) or its role in establishing and maintaining group cohesion. In the present study, we measured neural activity using fMRI while participants spoke simultaneously with another person. We manipulated whether the pair spoke the same sentence (allowing synchrony) or different sentences (preventing synchrony), and whether the voice the participant heard was "live" (allowing rich reciprocal interaction) or prerecorded (with no such mutual influence). Synchronous speech was associated with increased activity in posterior and anterior auditory fields. When, and only when, participants spoke with a partner who was both synchronous and "live," the suppression of auditory cortex that is commonly seen as a neural correlate of speech production was absent. Instead, auditory cortex responded as though it were processing another talker's speech. Our results suggest that detecting synchrony leads to a change in the perceptual consequences of one's own actions: they are processed as though they were other- rather than self-produced. This may contribute to our understanding of synchronized behavior as a group-bonding tool.

SIGNIFICANCE STATEMENT: Synchronized human behaviors, such as chanting, dancing, and singing, are cultural universals with functional significance: these activities increase group cohesion and cause participants to like each other and behave more prosocially toward each other. Here we use fMRI to investigate the neural basis of one common form of cohesive synchronized behavior: joint speaking (e.g., the synchronous speech heard in chants, prayers, and pledges). Results showed that joint speech recruits additional right-hemisphere regions outside the classic speech production network. Additionally, we found that a neural marker of self-produced speech, suppression of sensory cortices, did not occur during joint synchronized speech, suggesting that joint synchronized behavior may alter self-other distinctions in sensory processing.
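
A hedged Python sketch of the 2 x 2 manipulation described in this abstract (same vs. different sentence, live vs. prerecorded partner); the labels are illustrative assumptions, not the paper's materials:

from itertools import product

# Illustrative 2x2 design from the abstract: whether the pair spoke
# the same sentence (synchrony possible) and whether the partner's
# voice was live (mutual influence possible).
sentence = ["same", "different"]
partner = ["live", "prerecorded"]

for s, p in product(sentence, partner):
    # Per the abstract, auditory suppression was absent only in the
    # same-sentence, live-partner cell.
    key_cell = (s == "same" and p == "live")
    print(f"{s}/{p}: auditory suppression absent = {key_cell}")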


Subject(s)
Brain/physiology , Social Perception , Speech Perception/physiology , Speech/physiology , Acoustic Stimulation/methods , Adult , Auditory Cortex/physiology , Auditory Perception/physiology , Brain Mapping , Female , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Male