1.
MethodsX ; 7: 100801, 2020.
Article in English | MEDLINE | ID: mdl-32021831

ABSTRACT

Functional localizers allow the definition of regions of interest in the human brain that cannot be delineated by anatomical markers alone. To date, when localizing the body-selective areas of the visual cortex using fMRI, researchers have used static images of bodies and objects. However, there are other relevant brain areas involved in the processing of moving bodies and action interpretation that are missed by these techniques. Typically, these biological motion areas are localized separately using whole and scrambled point-light display stimuli. Currently, one can only localize either the static body-selective areas or the biological motion areas, but not both together. Here, for the first time, using motion-controlled dynamic body and object stimuli, we describe a method for localizing the full dynamic body-selective network of the human brain in one experimental run.
•The method uses dynamic body and object stimuli.
•Low-level local motion information is added as a covariate into the fMRI analysis (a sketch of this step follows the list).
•This localizes the full dynamic body-selective network of the human brain.
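For readers who want to see the covariate step concretely, here is a minimal sketch (not the authors' code) of a first-level GLM in which frame-wise low-level motion energy of the videos enters as a nuisance regressor alongside the body and object conditions. It assumes the nilearn library; the file name, TR, block timings, and the motion-energy vector are all hypothetical.

```python
# Sketch only: a GLM with a low-level motion-energy nuisance covariate.
import numpy as np
import pandas as pd
from nilearn.glm.first_level import make_first_level_design_matrix, FirstLevelModel

t_r = 2.0                              # hypothetical repetition time (s)
n_scans = 200
frame_times = np.arange(n_scans) * t_r

# Hypothetical block onsets for dynamic body vs. dynamic object videos.
events = pd.DataFrame({
    "onset":      [0, 40, 80, 120, 160, 200, 240, 280],
    "duration":   [20] * 8,
    "trial_type": ["body", "object"] * 4,
})

# Hypothetical per-scan local motion energy of the stimuli (e.g., mean
# optic-flow magnitude), resampled to the scan times.
motion_energy = np.random.rand(n_scans)

design = make_first_level_design_matrix(
    frame_times, events,
    add_regs=motion_energy[:, np.newaxis],
    add_reg_names=["motion_energy"],
)

model = FirstLevelModel(t_r=t_r).fit("func.nii.gz", design_matrices=design)
# Body > object, with low-level motion variance absorbed by the covariate.
z_map = model.compute_contrast("body - object")
```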

2.
Dev Cogn Neurosci ; 38: 100660, 2019 08.
Article in English | MEDLINE | ID: mdl-31128318

ABSTRACT

Emotions are strongly conveyed by the human body, and the ability to recognize emotions from body posture or movement is still developing through childhood and adolescence. To date, very few studies have explored how these behavioural observations are paralleled by functional brain development, and no studies have yet explored the development of emotion modulation in these areas. In this study, we used functional magnetic resonance imaging (fMRI) to compare the brain activity of 25 children (age 6-11), 18 adolescents (age 12-17) and 26 adults while they passively viewed short videos of angry, happy or neutral body movements. We observed that, when viewing dynamic bodies in general, adults showed higher activity than children bilaterally in the body-selective areas, namely the extrastriate body area (EBA), fusiform body area (FBA) and posterior superior temporal sulcus (pSTS), as well as in the amygdala (AMY). Adults also showed higher activity than adolescents, but only in the right hemisphere. Crucially, however, we found that there were no age differences in the emotion modulation of activity in these areas. These results indicate, for the first time, that although body-selective activity increases across childhood and adolescence, emotion modulation of these areas is adult-like from 7 years of age.


Subject(s)
Emotions/physiology , Kinesics , Movement/physiology , Photic Stimulation/methods , Visual Cortex/diagnostic imaging , Visual Cortex/growth & development , Adolescent , Adult , Brain Mapping/methods , Child , Female , Humans , Magnetic Resonance Imaging/methods , Male
3.
Neuroimage ; 119: 164-74, 2015 Oct 01.
Article in English | MEDLINE | ID: mdl-26116964

ABSTRACT

fMRI studies increasingly examine functions and properties of non-primary areas of human auditory cortex. However, there is currently no standardized localization procedure to reliably identify specific areas across individuals, akin to the standard 'localizers' available in the visual domain. Here we present an fMRI 'voice localizer' scan allowing rapid and reliable localization of the voice-sensitive 'temporal voice areas' (TVA) of human auditory cortex. We describe results obtained using this standardized localizer scan in a large cohort of normal adult subjects. Most participants (94%) showed bilateral patches of significantly greater response to vocal than to non-vocal sounds along the superior temporal sulcus/gyrus (STS/STG). Individual activation patterns, although reproducible, showed high inter-individual variability in precise anatomical location. Cluster analysis of individual peaks from the large cohort highlighted three bilateral clusters of voice sensitivity, or "voice patches", along the posterior (TVAp), mid (TVAm) and anterior (TVAa) STS/STG, respectively. A series of extra-temporal areas, including bilateral inferior prefrontal cortex and the amygdalae, showed small but reliable voice sensitivity as part of a large-scale cerebral voice network. Stimuli for the voice localizer scan and probabilistic maps in MNI space are available for download.
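The cluster-analysis step lends itself to a short illustration. The sketch below groups individual subjects' voice-sensitivity peak coordinates into patches with k-means; the input file and the choice of k = 3 clusters per hemisphere (matching TVAp/TVAm/TVAa) are assumptions, not the paper's exact pipeline.

```python
# Sketch only: clustering individual peak MNI coordinates into voice patches.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical (n_subjects, 3) array of right-hemisphere peak coordinates,
# one x, y, z triplet per row.
peaks_mni = np.loadtxt("right_hemisphere_voice_peaks.txt")

# Three clusters per hemisphere: posterior (TVAp), mid (TVAm), anterior (TVAa).
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(peaks_mni)
for label, centre in enumerate(km.cluster_centers_):
    print(f"patch {label}: centre at MNI {np.round(centre, 1)}")
```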


Subject(s)
Auditory Cortex/physiology , Individuality , Speech Perception/physiology , Acoustic Stimulation , Adult , Brain Mapping , Dominance, Cerebral , Female , Humans , Magnetic Resonance Imaging , Male , Voice , Young Adult
4.
Front Hum Neurosci ; 8: 941, 2014.
Article in English | MEDLINE | ID: mdl-25484863

ABSTRACT

Our ability to read other people's non-verbal signals gets refined throughout childhood and adolescence. How this is paralleled by brain development has been investigated mainly with regard to face perception, showing a protracted functional development of the face-selective visual cortical areas. Given the importance of whole-body expressions in interpersonal communication, it is also important to understand the development of brain areas sensitive to these social signals. Here we used functional magnetic resonance imaging (fMRI) to compare brain activity in a group of 24 children (age 6-11) and 26 adults while they passively watched short videos of body or object movements. We observed activity in similar regions in both groups, namely the extrastriate body area (EBA), fusiform body area (FBA), posterior superior temporal sulcus (pSTS), amygdala and premotor regions. Adults showed additional activity in the inferior frontal gyrus (IFG). Within the main body-selective regions (EBA, FBA and pSTS), the strength and spatial extent of fMRI signal change were larger in adults than in children. Multivariate Bayesian (MVB) analysis showed that the spatial pattern of neural representation within those regions did not change with age. Our results indicate, for the first time, that body perception, like face perception, is still maturing through the second decade of life.

5.
J Neurosci ; 34(20): 6813-21, 2014 May 14.
Article in English | MEDLINE | ID: mdl-24828635

ABSTRACT

The integration of emotional information from the face and voice of other persons is known to be mediated by a number of "multisensory" cerebral regions, such as the right posterior superior temporal sulcus (pSTS). However, it is not fully clear whether multimodal integration in these regions is attributable to interleaved populations of unisensory neurons responding to face or voice, or rather to multimodal neurons receiving input from the two modalities. Here, we examine this question using functional magnetic resonance adaptation and dynamic audiovisual stimuli in which emotional information was manipulated parametrically and independently in the face and voice via morphing between angry and happy expressions. Healthy human adult subjects were scanned while performing a happy/angry emotion categorization task on a series of such stimuli presented in a fast event-related, continuous carryover design. Subjects integrated both face and voice information when categorizing emotion (with a greater weighting of face information) and showed behavioral adaptation effects both within and across modality. Adaptation also occurred at the neural level: in addition to modality-specific adaptation in visual and auditory cortices, we observed, for the first time, a crossmodal adaptation effect. Specifically, the fMRI signal in the right pSTS was reduced in response to a stimulus in which the facial emotion was similar to the vocal emotion of the preceding stimulus. These results suggest that the integration of emotional information from face and voice in the pSTS involves a detectable proportion of bimodal neurons that combine inputs from visual and auditory cortices.
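The carryover logic can be made concrete with a small worked example. Under the assumption that each stimulus carries a face and a voice morph level between 0 (fully angry) and 1 (fully happy), a crossmodal adaptation predictor for each trial is the distance between the current face level and the preceding stimulus's voice level; the arrays below are invented for illustration.

```python
# Sketch only: a crossmodal carryover (adaptation) predictor.
import numpy as np

face_morph  = np.array([0.0, 0.25, 1.0, 0.5, 0.75])  # per-trial face levels
voice_morph = np.array([0.2, 0.9, 0.1, 0.5, 0.25])   # per-trial voice levels

# Distance between the current face and the previous voice: a small distance
# predicts a reduced (adapted) pSTS response, a large distance predicts
# recovery from adaptation. The first trial has no predecessor.
crossmodal_distance = np.abs(face_morph[1:] - voice_morph[:-1])
crossmodal_distance = np.concatenate([[np.nan], crossmodal_distance])
print(crossmodal_distance)
```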


Subject(s)
Auditory Perception/physiology , Emotions/physiology , Temporal Lobe/physiology , Visual Perception/physiology , Acoustic Stimulation , Adult , Brain Mapping , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Photic Stimulation , Social Perception , Voice
6.
Hum Brain Mapp ; 35(10): 5190-203, 2014 Oct.
Article in English | MEDLINE | ID: mdl-24824165

ABSTRACT

Human beings often observe other people's social interactions without being a part of them. Whereas the involvement of some brain regions (e.g., the amygdala) has been extensively examined, the role of the precuneus remains to be determined. Here we examined the involvement of the precuneus in the third-person perspective on social interaction using functional magnetic resonance imaging (fMRI). Participants performed a socially irrelevant task while watching the biological motion of two agents acting in either typical (congruent with social conventions) or atypical (incongruent with social conventions) ways. Compared with typical displays, atypical displays elicited greater activation in the central and posterior bilateral precuneus, and in frontoparietal and occipital regions. Whereas the right precuneus also responded with greater activation to upside-down than to upright displays, the left precuneus did not. Correlation and effective connectivity analyses added consistent evidence of an interhemispheric asymmetry between the right and left precuneus. These findings suggest that the precuneus reacts to violations of social expectations and plays a crucial role in the third-person perspective on others' interactions, even when the social context is unattended.


Subject(s)
Brain Mapping , Interpersonal Relations , Parietal Lobe/physiology , Adult , Causality , Female , Functional Laterality , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Parietal Lobe/blood supply , Reaction Time/physiology , Young Adult
7.
Cogn Affect Behav Neurosci ; 14(1): 307-18, 2014 Mar.
Article in English | MEDLINE | ID: mdl-23943513

ABSTRACT

It has been proposed that we make sense of the movements of others by observing fluctuations in the kinematic properties of their actions. At the neural level, activity in the human motion complex (hMT+) and posterior superior temporal sulcus (pSTS) has been implicated in this relationship. However, previous neuroimaging studies have largely utilized brief, diminished stimuli, and the role of relevant kinematic parameters in the processing of human action remains unclear. We addressed this issue by showing 12 participants extended-duration natural displays of an actor engaged in two common activities, under passive viewing conditions in an fMRI study. Our region-of-interest analysis focused on three neural areas (hMT+, pSTS, and the fusiform face area) and was accompanied by a whole-brain analysis. The kinematic properties of the actor, particularly the speed of body-part motion and the distance between body parts, were related to activity in hMT+ and pSTS. Whole-brain exploratory analyses revealed additional areas in posterior cortex, frontal cortex, and the cerebellum whose activity was related to these features. These results indicate that the kinematic properties of people's movements are continually monitored during everyday activity as a step toward determining actions and intent.
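The two kinematic features named above can be computed directly from tracked body-part coordinates. The sketch below does so for hypothetical 2-D tracking data; the frame rate, the number of body parts, and the averaging choices are illustrative assumptions rather than the study's exact definitions.

```python
# Sketch only: per-frame speed and inter-part spread from tracked coordinates.
import numpy as np

fps = 25.0
coords = np.random.rand(500, 13, 2)  # 500 frames, 13 body parts, x/y positions

# Speed: frame-to-frame displacement of each part, averaged over parts.
speed = np.linalg.norm(np.diff(coords, axis=0), axis=2) * fps  # (499, 13)
mean_speed = speed.mean(axis=1)

# Distance between body parts: mean pairwise inter-part distance per frame.
diffs = coords[:, :, None, :] - coords[:, None, :, :]          # (500, 13, 13, 2)
pairwise = np.linalg.norm(diffs, axis=3)
mean_spread = pairwise.mean(axis=(1, 2))
```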


Subject(s)
Brain/physiology , Motion Perception/physiology , Biomechanical Phenomena , Brain Mapping , Cerebral Cortex/physiology , Female , Functional Laterality , Humans , Magnetic Resonance Imaging , Male , Neuropsychological Tests , Young Adult
8.
Cortex ; 50: 125-36, 2014 Jan.
Article in English | MEDLINE | ID: mdl-23988132

ABSTRACT

The superior temporal sulcus (STS) has been implicated in a number of functions, including face perception, voice perception, and face-voice integration. However, the nature of the STS preference for these 'social stimuli' remains unclear, as does the location within the STS of specific types of information processing. The aim of this study was to directly examine the properties of the STS in terms of its selective response to social stimuli. We used functional magnetic resonance imaging (fMRI) to scan participants whilst they were presented with auditory, visual, or audiovisual stimuli of people or objects, with the intention of localising areas preferring both faces and voices (i.e., 'people-selective' regions) and audiovisual regions that specifically integrate person-related information. Results highlighted a 'people-selective, heteromodal' region in the trunk of the right STS that was activated by both faces and voices, and a restricted portion of the right posterior STS (pSTS) with an integrative preference for information from people as compared to objects. These results point towards a dedicated role for the STS as a 'social-information processing' centre.
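One common way to formalize "activated by both faces and voices" is a minimum-statistic conjunction of the two contrasts. The sketch below shows that approach on hypothetical z-maps; it is an assumption for illustration, not necessarily the analysis used in the paper.

```python
# Sketch only: a minimum-statistic conjunction of two contrast maps.
import numpy as np
import nibabel as nib

faces_z = nib.load("faces_gt_objects_zmap.nii.gz")    # hypothetical files
voices_z = nib.load("voices_gt_objects_zmap.nii.gz")

# A voxel survives the conjunction only if both contrasts are high there.
conj = np.minimum(faces_z.get_fdata(), voices_z.get_fdata())
conj_img = nib.Nifti1Image(conj, faces_z.affine)
nib.save(conj_img, "people_selective_conjunction.nii.gz")
```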


Subject(s)
Auditory Perception/physiology , Recognition, Psychology/physiology , Temporal Lobe/physiology , Visual Perception/physiology , Adult , Brain Mapping , Face , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Photic Stimulation , Singing , Speech , Voice , Young Adult
9.
Front Hum Neurosci ; 7: 744, 2013.
Article in English | MEDLINE | ID: mdl-24294196

ABSTRACT

In the everyday environment, affective information is conveyed by both the face and the voice. Studies have demonstrated that a concurrently presented voice can alter the way an emotional facial expression is perceived, and vice versa, leading to emotional conflict if the information in the two modalities is mismatched. Additionally, evidence suggests that incongruence of emotional valence activates cerebral networks involved in conflict monitoring and resolution. However, it is currently unclear whether this is due to task difficulty (incongruent stimuli being harder to categorize) or simply to the detection of mismatching information in the two modalities. The aim of the present fMRI study was to examine the neurophysiological correlates of processing incongruent emotional information, independent of task difficulty. Subjects were scanned while judging the emotion of face-voice affective stimuli. Both the face and the voice were parametrically morphed between anger and happiness and then paired in all audiovisual combinations, resulting in stimuli each defined by two separate values: the degree of incongruence between the face and voice, and the degree of clarity of the combined face-voice information. Due to the specific morphing procedure utilized, we hypothesized that the clarity value, rather than the incongruence value, would better reflect task difficulty. Behavioral data revealed that participants integrated face and voice affective information, and that the clarity value, as opposed to the incongruence value, correlated with categorization difficulty. Cerebrally, incongruence was associated with activity in the superior temporal region, an effect that emerged after task difficulty had been accounted for. Overall, our results suggest that activation in the superior temporal region in response to incongruent information cannot be explained simply by task difficulty, and may instead be due to the detection of mismatching information between the two modalities.
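The two stimulus values can be illustrated with a small calculation. Assuming face and voice morph levels between 0 (anger) and 1 (happiness), one plausible formalization (an assumption, since the paper's exact formulas are not reproduced here) takes incongruence as the face-voice disagreement and clarity as the distance of the averaged signal from the ambiguous midpoint.

```python
# Sketch only: hypothetical incongruence and clarity values per stimulus.
def stimulus_values(face: float, voice: float) -> tuple[float, float]:
    incongruence = abs(face - voice)                  # 0 = matched, 1 = opposed
    clarity = abs((face + voice) / 2.0 - 0.5) * 2.0   # 0 = ambiguous, 1 = clear
    return incongruence, clarity

print(stimulus_values(face=0.9, voice=0.1))  # mismatched, ambiguous on average
print(stimulus_values(face=0.9, voice=0.8))  # congruent and clearly happy
```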

10.
Cereb Cortex ; 23(4): 958-66, 2013 Apr.
Article in English | MEDLINE | ID: mdl-22490550

ABSTRACT

Normal listeners effortlessly determine a person's gender by voice, but the cerebral mechanisms underlying this ability remain unclear. Here, we demonstrate two stages of cerebral processing during voice gender categorization. Using voice morphing together with an adaptation-optimized functional magnetic resonance imaging design, we found that secondary auditory cortex, including the anterior part of the temporal voice areas in the right hemisphere, responded primarily to the acoustical distance from the previously heard stimulus. In contrast, a network of bilateral regions involving inferior prefrontal and anterior and posterior cingulate cortex reflected perceived stimulus ambiguity. These findings suggest that voice gender recognition involves neuronal populations along the auditory ventral stream responsible for auditory feature extraction, operating in tandem with the prefrontal cortex in voice gender perception.


Subject(s)
Auditory Perception/physiology , Cerebral Cortex/blood supply , Cerebral Cortex/physiology , Magnetic Resonance Imaging , Sex Characteristics , Voice , Acoustic Stimulation , Adult , Brain Mapping , Female , Humans , Image Processing, Computer-Assisted , Linear Models , Male , Oxygen/blood , Psychometrics , Reaction Time/physiology , Young Adult
11.
Front Psychol ; 3: 89, 2012.
Article in English | MEDLINE | ID: mdl-22485101

ABSTRACT

The "temporal voice areas" (TVAs; Belin et al., 2000) of the human brain show greater neuronal activity in response to human voices than to other categories of non-vocal sounds. However, a direct link between TVA activity and voice perception behavior has not yet been established. Here we show that a functional magnetic resonance imaging measure of activity in the TVAs predicts individual performance at a separately administered voice memory test. This relation holds when general sound memory ability is taken into account. These findings provide the first evidence that the TVAs are specifically involved in voice cognition.

12.
Cereb Cortex ; 22(6): 1263-70, 2012 Jun.
Article in English | MEDLINE | ID: mdl-21828348

ABSTRACT

Social interactions involve more than "just" language. Equally important is a more primitive, nonlinguistic mode of communication that acts in parallel with linguistic processes and drives our decisions to a much greater degree than is generally suspected. Among the "honest signals" that influence our behavior is perceived vocal attractiveness. Not only does vocal attractiveness reflect important biological characteristics of the speaker, it also influences our social perceptions according to the "what sounds beautiful is good" phenomenon. Despite the widespread influence of vocal attractiveness on social interactions revealed by behavioral studies, its neural underpinnings remain unknown. We measured brain activity while participants listened to a series of vocal sounds ("ah") and performed an unrelated task. We found that activity in voice-sensitive auditory and inferior frontal regions was strongly correlated with implicitly perceived vocal attractiveness. Whereas the involvement of auditory areas reflected the processing of acoustic contributors to vocal attractiveness ("distance to mean" and spectrotemporal regularity), activity in inferior prefrontal regions (traditionally involved in speech processing) reflected the overall perceived attractiveness of the voices despite their lack of linguistic content. These results suggest a strong influence of hidden nonlinguistic aspects of communication signals on cerebral activity and provide an objective measure of this influence.
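As an illustration of the "distance to mean" measure, the sketch below standardizes a set of acoustic features and computes each voice's distance from the sample average; the specific features (F0 and formant frequencies) are an assumption.

```python
# Sketch only: distance of each voice from the sample's 'average' voice.
import numpy as np

rng = np.random.default_rng(0)
features = rng.random((40, 3))  # hypothetical: n_voices x (F0, F1, F2)

# Standardize each feature, then take the Euclidean distance from the origin,
# i.e. from the sample mean. Smaller distances mean a more 'average' voice,
# typically rated as more attractive.
z = (features - features.mean(axis=0)) / features.std(axis=0)
distance_to_mean = np.linalg.norm(z, axis=1)
```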


Subject(s)
Acoustic Stimulation/methods , Prefrontal Cortex/physiology , Social Behavior , Speech Perception/physiology , Voice/physiology , Adolescent , Adult , Auditory Perception/physiology , Female , Humans , Male , Young Adult
13.
PLoS One ; 6(4): e19165, 2011 Apr 29.
Article in English | MEDLINE | ID: mdl-21559468

ABSTRACT

In humans, the emotions conveyed by music serve important communicative roles. Despite growing interest in the neural basis of music perception, action and emotion, the majority of previous studies in this area have focused on the auditory aspects of music performances. Here we investigate how the brain processes the emotions elicited by audiovisual music performances. Using event-related functional magnetic resonance imaging, in Experiment 1 we defined the areas responding to audiovisual (musician's movements with music), visual (musician's movements only), and auditory (music only) emotional displays. A region-of-interest analysis then examined whether any of the areas detected in Experiment 1 showed greater activation for emotionally mismatching performances (the musician's movements combined with mismatching emotional sound) than for emotionally matching performances (the movements combined with matching emotional sound), as presented in Experiment 2 to the same participants. The insula and the left thalamus responded consistently to visual, auditory and audiovisual emotional information and showed increased activation for emotionally mismatching displays in comparison with emotionally matching displays. In contrast, the right thalamus responded to audiovisual emotional displays but showed similar activation for emotionally matching and mismatching displays. These results suggest that the insula and left thalamus play an active role in detecting emotional correspondence between auditory and visual information during music performances, whereas the right thalamus plays a different role.


Subject(s)
Emotions/physiology , Music/psychology , Neurons/physiology , Acoustic Stimulation , Adult , Auditory Perception/physiology , Brain/physiology , Brain Mapping/methods , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging/methods , Male
14.
Cereb Cortex ; 21(12): 2820-8, 2011 Dec.
Article in English | MEDLINE | ID: mdl-21531779

ABSTRACT

Temporal voice areas showing larger activity for vocal than for non-vocal sounds have been identified along the superior temporal sulcus (STS); additional voice-sensitive areas have been described in the frontal and parietal lobes. Yet the role of voice-sensitive regions in representing voice identity remains unclear. Using a functional magnetic resonance adaptation design, we aimed to disentangle acoustic-based from identity-based representations of voices. Sixteen participants were scanned while listening to pairs of voices drawn from morphed continua between two initially unfamiliar voices, before and after a voice learning phase. In a given pair, the first and second stimuli could be identical or acoustically different and, at the second session, perceptually similar or different. At both sessions, the right mid-STS/superior temporal gyrus (STG) and the superior temporal pole (sTP) showed sensitivity to acoustical changes. Critically, voice learning induced changes in the acoustical processing of voices in the inferior frontal cortices (IFCs). At the second session only, the right IFC and the left cingulate gyrus showed sensitivity to changes in perceived identity. The processing of voice identity thus appears to be subserved by a large network of brain areas, ranging from the sTP, involved in an acoustic-based representation of unfamiliar voices, to areas along the convexity of the IFC for identity-related processing of familiar voices.


Subject(s)
Auditory Perception/physiology , Brain Mapping , Cerebral Cortex/physiology , Learning/physiology , Voice , Acoustic Stimulation , Female , Humans , Image Interpretation, Computer-Assisted , Magnetic Resonance Imaging , Male , Young Adult