1.
Neurobiol Lang (Camb) ; 4(4): 575-610, 2023.
Article in English | MEDLINE | ID: mdl-38144236

ABSTRACT

Much of the language we encounter in our everyday lives comes in the form of conversation, yet the majority of research on the neural basis of language comprehension has used input from only one speaker at a time. Twenty adults were scanned with functional magnetic resonance imaging while passively observing audiovisual conversations. In a block-design task, participants watched 20 s videos of puppets speaking either to another puppet (the dialogue condition) or directly to the viewer (the monologue condition), while the audio was either comprehensible (played forward) or incomprehensible (played backward). Individually functionally localized left-hemisphere language regions responded more to comprehensible than incomprehensible speech but did not respond differently to dialogue than monologue. In a second task, participants watched videos (1-3 min each) of two puppets conversing with each other, in which one puppet was comprehensible while the other's speech was reversed. All participants saw the same visual input but were randomly assigned which character's speech was comprehensible. In left-hemisphere cortical language regions, the time course of activity was correlated only among participants who heard the same character speaking comprehensibly, despite identical visual input across all participants. For comparison, some individually localized theory of mind regions and right-hemisphere homologues of language regions responded more to dialogue than monologue in the first task, and in the second task, activity in some regions was correlated across all participants regardless of which character was speaking comprehensibly. Together, these results suggest that canonical left-hemisphere cortical language regions are not sensitive to differences between observed dialogue and monologue.
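The key analysis in the second task is an inter-subject correlation of regional time courses, compared within versus between the groups assigned different comprehensible characters. The sketch below is a minimal illustration of that logic, not the authors' pipeline; the time-course arrays and group assignments are synthetic placeholders.

```python
# Minimal inter-subject correlation (ISC) sketch, assuming each participant's
# ROI time course has already been extracted as a 1-D array of length n_TRs.
# Groups A and B differ only in which character's speech was comprehensible.
import numpy as np
from itertools import combinations
from scipy.stats import pearsonr

def mean_pairwise_isc(timecourses):
    """Average Pearson correlation over all pairs of participants."""
    rs = [pearsonr(a, b)[0] for a, b in combinations(timecourses, 2)]
    return float(np.mean(rs))

# Hypothetical data: one time course per participant, split by group assignment.
rng = np.random.default_rng(0)
timecourses_A = [rng.standard_normal(200) for _ in range(10)]
timecourses_B = [rng.standard_normal(200) for _ in range(10)]

within_A = mean_pairwise_isc(timecourses_A)
within_B = mean_pairwise_isc(timecourses_B)
# Between-group ISC: pair each participant in A with each participant in B.
between = float(np.mean([pearsonr(a, b)[0]
                         for a in timecourses_A for b in timecourses_B]))
print(f"within-group ISC: {np.mean([within_A, within_B]):.3f}, "
      f"between-group ISC: {between:.3f}")
```

Under the abstract's result, a left-hemisphere language region would show higher within-group than between-group ISC, whereas regions driven by the shared visual input would show comparable values for both.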

2.
Philos Trans R Soc Lond B Biol Sci ; 376(1838): 20200288, 2021 11 22.
Article in English | MEDLINE | ID: mdl-34601922

ABSTRACT

Cross-cultural research on moral reasoning has brought to the fore the question of whether moral judgements always turn on inferences about the mental states of others. Formal legal systems for assigning blame and punishment typically make fine-grained distinctions about mental states, as illustrated by the concept of mens rea, and experimental studies in the USA and elsewhere suggest everyday moral judgements also make use of such distinctions. On the other hand, anthropologists have suggested that some societies have a morality that is disregarding of mental states, and have marshalled ethnographic and experimental evidence in support of this claim. Here, we argue against the claim that some societies are simply less 'mind-minded' than others about morality. In place of this cultural main effects hypothesis about the role of mindreading in morality, we propose a contextual variability view in which the role of mental states in moral judgement depends on the context and the reasons for judgement. On this view, which mental states are or are not relevant for a judgement is context-specific, and what appear to be cultural main effects are better explained by culture-by-context interactions. This article is part of the theme issue 'The language of cooperation: reputation and honest signalling'.


Subject(s)
Judgment , Morals , Humans , Language , Male , Problem Solving , Punishment
3.
Cortex ; 103: 24-43, 2018 06.
Article in English | MEDLINE | ID: mdl-29554540

ABSTRACT

Individuals with Autism Spectrum Disorders (ASD) report difficulties extracting meaningful information from dynamic and complex social cues, like facial expressions. The nature and mechanisms of these difficulties remain unclear. Here we tested whether that difficulty can be traced to the pattern of activity in "social brain" regions, when viewing dynamic facial expressions. In two studies, adult participants (male and female) watched brief videos of a range of positive and negative facial expressions, while undergoing functional magnetic resonance imaging (Study 1: ASD n = 16, control n = 21; Study 2: ASD n = 22, control n = 30). Patterns of hemodynamic activity differentiated among facial emotional expressions in left and right superior temporal sulcus, fusiform gyrus, and parts of medial prefrontal cortex. In both control participants and high-functioning individuals with ASD, we observed (i) similar responses to emotional valence that generalized across facial expressions and animated social events; (ii) similar flexibility of responses to emotional valence, when manipulating the task-relevance of perceived emotions; and (iii) similar responses to a range of emotions within valence. Altogether, the data indicate that there was little or no group difference in cortical responses to isolated dynamic emotional facial expressions, as measured with fMRI. Difficulties with real-world social communication and social interaction in ASD may instead reflect differences in initiating and maintaining contingent interactions, or in integrating social information over time or context.
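The claim that activity patterns "differentiated among facial emotional expressions" rests on multivoxel pattern classification within each region. A minimal decoding sketch follows, with hypothetical data shapes and a generic linear classifier; it is not the authors' analysis code.

```python
# Sketch of within-ROI decoding of facial expression from voxel patterns.
# X: trial-by-voxel response estimates for one ROI; y: expression labels.
# The shapes, data, and classifier choice are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(1)
n_trials, n_voxels, n_expressions = 120, 300, 4
X = rng.standard_normal((n_trials, n_voxels))      # hypothetical ROI patterns
y = rng.integers(0, n_expressions, size=n_trials)  # hypothetical labels

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"decoding accuracy: {scores.mean():.2f} (chance = {1 / n_expressions:.2f})")
```

Comparing such cross-validated accuracies (or the underlying pattern distances) between the ASD and control groups is the kind of analysis that supports the abstract's "little or no group difference" conclusion.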


Subject(s)
Autism Spectrum Disorder/physiopathology , Cerebral Cortex/physiopathology , Emotions/physiology , Facial Expression , Generalization, Stimulus/physiology , Social Perception , Visual Perception/physiology , Adult , Autism Spectrum Disorder/diagnostic imaging , Autism Spectrum Disorder/psychology , Brain Mapping , Cerebral Cortex/diagnostic imaging , Female , Humans , Magnetic Resonance Imaging , Male , Photic Stimulation , Young Adult
4.
Cogn Neuropsychol ; 33(7-8): 362-377, 2016.
Article in English | MEDLINE | ID: mdl-27978778

ABSTRACT

Observers can deliberately attend to some aspects of a face (e.g. emotional expression) while ignoring others. How do internal goals influence representational geometry in face-responsive cortex? Participants watched videos of naturalistic dynamic faces during MRI scanning. We measured multivariate neural response patterns while participants formed an intention to attend to one facial aspect (age or emotional valence) and then attended to that aspect; we also measured responses to the face's emotional valence independent of attention. Distinct patterns of response to the two tasks were found while forming the intention, in left fronto-lateral but not face-responsive regions, and while attending to the face, in almost all face-responsive regions. Emotional valence was represented in right posterior superior temporal sulcus and medial prefrontal cortex, but could not be decoded when unattended. Shifting the focus of attention thus alters cortical representation of social information, probably reflecting neural flexibility to optimally integrate goals and perceptual input.


Subject(s)
Attention/physiology , Facial Expression , Magnetic Resonance Imaging/methods , Adult , Female , Humans , Male , Young Adult
5.
Cereb Cortex ; 26(4): 1668-83, 2016 Apr.
Article in English | MEDLINE | ID: mdl-25628345

ABSTRACT

A fundamental and largely unanswered question in neuroscience is whether extrinsic connectivity and function are closely related at a fine spatial grain across the human brain. Using a novel approach, we found that the anatomical connectivity of individual gray-matter voxels (determined via diffusion-weighted imaging) alone can predict functional magnetic resonance imaging (fMRI) responses to 4 visual categories (faces, objects, scenes, and bodies) in individual subjects, thus accounting for both functional differentiation across the cortex and individual variation therein. Furthermore, this approach identified the particular anatomical links between voxels that most strongly predict, and therefore plausibly define, the neural networks underlying specific functions. These results provide the strongest evidence to date for a precise and fine-grained relationship between connectivity and function in the human brain, raise the possibility that early-developing connectivity patterns may determine later functional organization, and offer a method for predicting fine-grained functional organization in populations who cannot be functionally scanned.
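The approach amounts to fitting a regression from each voxel's connectivity fingerprint to its functional response and testing the fit in left-out subjects. The toy sketch below illustrates that cross-validated prediction under assumed data shapes; the synthetic arrays and the ridge model are stand-ins, not the study's actual features or model.

```python
# Toy version of predicting a voxel-wise functional response from
# diffusion-based connectivity features, evaluated across left-out subjects.
# All arrays, shapes, and the linear model are illustrative assumptions.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import LeaveOneGroupOut
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n_subjects, n_voxels_per_subj, n_targets = 8, 500, 60
n_samples = n_subjects * n_voxels_per_subj

X = rng.standard_normal((n_samples, n_targets))   # each voxel's connectivity to target regions
beta = rng.standard_normal(n_targets)
y = X @ beta + rng.standard_normal(n_samples)     # synthetic "observed" functional selectivity
groups = np.repeat(np.arange(n_subjects), n_voxels_per_subj)

model = RidgeCV(alphas=np.logspace(-2, 3, 12))
rs = []
for train, test in LeaveOneGroupOut().split(X, y, groups):
    model.fit(X[train], y[train])                 # fit on all-but-one subject
    rs.append(pearsonr(model.predict(X[test]), y[test])[0])
print(f"mean left-out-subject prediction r = {np.mean(rs):.2f}")
```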


Subject(s)
Cerebral Cortex/anatomy & histology , Cerebral Cortex/physiology , Pattern Recognition, Visual/physiology , Adult , Brain Mapping/methods , Diffusion Magnetic Resonance Imaging/methods , Female , Gray Matter/anatomy & histology , Gray Matter/physiology , Humans , Magnetic Resonance Imaging/methods , Male , Neural Pathways/anatomy & histology , Neural Pathways/physiology , Young Adult
6.
Nat Neurosci ; 15(2): 321-7, 2011 Dec 25.
Article in English | MEDLINE | ID: mdl-22197830

ABSTRACT

A fundamental assumption in neuroscience is that brain structure determines function. Accordingly, functionally distinct regions of cortex should be structurally distinct in their connections to other areas. We tested this hypothesis in relation to face selectivity in the fusiform gyrus. By using only structural connectivity, as measured through diffusion-weighted imaging, we were able to predict functional activation to faces in the fusiform gyrus. These predictions outperformed two control models and a standard group-average benchmark. The structure-function relationship discovered from the initial participants was highly robust in predicting activation in a second group of participants, despite differences in acquisition parameters and stimuli. This approach can thus reliably estimate activation in participants who cannot perform functional imaging tasks and is an alternative to group-activation maps. Additionally, we identified cortical regions whose connectivity was highly influential in predicting face selectivity within the fusiform, suggesting a possible mechanistic architecture underlying face processing in humans.


Subject(s)
Brain Mapping , Choice Behavior/physiology , Face , Pattern Recognition, Visual/physiology , Visual Cortex/physiology , Visual Pathways/physiology , Adult , Diffusion Magnetic Resonance Imaging , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Oxygen/blood , Photic Stimulation , Regression Analysis , Visual Cortex/blood supply , Young Adult
7.
Neuroimage ; 56(4): 2356-63, 2011 Jun 15.
Article in English | MEDLINE | ID: mdl-21473921

ABSTRACT

Neuroimaging studies have identified multiple face-selective regions in human cortex, but the functional division of labor between these regions is not yet clear. A central hypothesis, with some empirical support, is that face-selective regions in the superior temporal sulcus (STS) are particularly responsive to dynamic information in faces, whereas the fusiform face area (FFA) computes the static or invariant properties of faces. Here we directly tested this hypothesis by measuring the magnitude of response in each region to both dynamic and static stimuli. Consistent with the hypothesis, we found that the response to movies of faces was not significantly different from the response to static images of faces from these same movies in the right FFA and right occipital face area (OFA). By contrast, the face-selective region in the right posterior STS (pSTS) responded nearly three times as strongly to dynamic faces as to static faces, and a face-selective region in the right anterior STS (aSTS) responded to dynamic faces only. Both of these regions also responded more strongly to moving faces than to moving bodies, indicating that they are preferentially engaged in processing dynamic information from faces, not in more general processing of any dynamic social stimuli. The response to dynamic and static faces was not significantly different in a third face-selective region in the posterior continuation of the STS (pcSTS). The strong selectivity of face-selective regions in the pSTS and aSTS, but not the FFA, OFA or pcSTS, for dynamic face information demonstrates a clear functional dissociation between different face-selective regions, and provides further clues to their function.
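The central measurement is a within-subject comparison of each ROI's response magnitude to dynamic versus static faces, which reduces to a paired test on per-subject estimates. The values in the sketch below are synthetic placeholders, not data from the study.

```python
# Paired comparison of one ROI's responses to dynamic vs. static faces.
# Per-subject percent-signal-change values are synthetic placeholders.
import numpy as np
from scipy.stats import ttest_rel

dynamic_faces = np.array([1.9, 2.1, 1.7, 2.4, 2.0, 1.8, 2.2, 2.3])  # e.g., a pSTS-like ROI
static_faces  = np.array([0.7, 0.8, 0.6, 0.9, 0.7, 0.5, 0.8, 0.9])

t, p = ttest_rel(dynamic_faces, static_faces)
ratio = dynamic_faces.mean() / static_faces.mean()
print(f"dynamic/static ratio = {ratio:.1f}, t = {t:.2f}, p = {p:.2g}")
```

Repeating the same comparison in FFA- or OFA-like ROIs, where the two conditions yield similar values, is what establishes the dissociation the abstract describes.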


Subject(s)
Brain Mapping , Cerebral Cortex/physiology , Motion Perception/physiology , Pattern Recognition, Visual/physiology , Face , Female , Humans , Image Interpretation, Computer-Assisted , Magnetic Resonance Imaging , Male , Motion
9.
Child Dev ; 80(4): 1197-209, 2009.
Article in English | MEDLINE | ID: mdl-19630902

ABSTRACT

Neuroimaging studies with adults have identified cortical regions recruited when people think about other people's thoughts (theory of mind): temporo-parietal junction, posterior cingulate, and medial prefrontal cortex. These same regions were recruited in 13 children aged 6-11 years when they listened to sections of a story describing a character's thoughts compared to sections of the same story that described the physical context. A distinct region in the posterior superior temporal sulcus was implicated in the perception of biological motion. Change in response selectivity with age was observed in just one region. The right temporo-parietal junction was recruited equally for mental and physical facts about people in younger children, but only for mental facts in older children.


Subject(s)
Brain/anatomy & histology , Brain/physiology , Cognition/physiology , Social Perception , Child , Female , Humans , Magnetic Resonance Imaging , Male , Parietal Lobe/physiology , Prefrontal Cortex/physiology , Semantics , Temporal Lobe/physiology
10.
Proc Natl Acad Sci U S A ; 106(27): 11312-7, 2009 Jul 07.
Article in English | MEDLINE | ID: mdl-19553210

ABSTRACT

Humans reason about the mental states of others; this capacity is called Theory of Mind (ToM). In typically developing adults, ToM is supported by a consistent group of brain regions: the bilateral temporoparietal junction (TPJ), medial prefrontal cortex (MPFC), precuneus (PC), and anterior superior temporal sulci (aSTS). How experience and intrinsic biological factors interact to produce this adult functional profile is not known. In the current study, we investigate the role of visual experience in the development of the ToM network by studying congenitally blind adults. In experiment 1, participants listened to stories and answered true/false questions about them. The stories were either about mental or physical representations of reality (e.g., photographs). In experiment 2, participants listened to stories about people's beliefs based on seeing or hearing; people's bodily sensations (e.g., hunger); and control stories without people. Participants judged whether each story had positive or negative valence. We find that ToM brain regions of sighted and congenitally blind adults are similarly localized and functionally specific. In congenitally blind adults, reasoning about mental states leads to activity in bilateral TPJ, MPFC, PC, and aSTS. These brain regions responded more to passages about beliefs than passages about nonbelief representations or passages about bodily sensations. Reasoning about mental states that are based on seeing is furthermore similar in congenitally blind and sighted individuals. Despite their different developmental experience, congenitally blind adults have a typical ToM network. We conclude that the development of neural mechanisms for ToM depends on innate factors and on experiences represented at an abstract level, amodally.


Subject(s)
Nervous System Physiological Phenomena , Psychological Theory , Visually Impaired Persons/psychology , Adult , Auditory Perception , Behavior , Brain Mapping , Culture , Female , Hearing , Humans , Male , Memory , Middle Aged , Organ Specificity , Prefrontal Cortex/physiology , Visual Perception
11.
Neuroimage ; 28(4): 770-7, 2005 Dec.
Article in English | MEDLINE | ID: mdl-16226467

ABSTRACT

Cognitive neuroscientists widely agree on the importance of providing convergent evidence from neuroimaging and lesion studies to establish structure-function relationships. However, such convergent evidence is, in practice, rarely provided. A previous lesion study found a striking double dissociation between two superficially similar social judgment processes, emotion recognition and personality attribution, based on the same body movement stimuli (point-light walkers). Damage to left frontal opercular (LFO) cortices was associated with impairments in personality trait attribution, whereas damage to right postcentral/supramarginal cortices was associated with impairments in emotional state attribution. Here, we present convergent evidence from fMRI in support of this double dissociation, with regions of interest (ROIs) defined by the regions of maximal lesion overlap from the previous study. Subjects learned four emotion words and four trait words, then watched a series of short point-light walker body movement stimuli. After each stimulus, subjects saw either an emotion word or a trait word and rated how well the word described the stimulus. The LFO ROI exhibited greater activity during personality judgments than during emotion judgments. In contrast, the right postcentral/supramarginal ROI exhibited greater activity during emotion judgments than during personality judgments. Follow-up experiments ruled out the possibility that the LFO activation difference was due to word frequency differences. Additionally, we found greater activity in a region of the medial prefrontal cortex previously associated with "theory of mind" tasks when subjects made personality judgments, as compared to emotion judgments.
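The double dissociation amounts to a region-by-judgment-type interaction in per-subject response estimates: opposite condition preferences in the two lesion-defined ROIs. The sketch below illustrates one way such an interaction can be tested, using synthetic placeholder values rather than the study's data.

```python
# Region x judgment-type interaction sketch for a double dissociation.
# Per-subject response estimates are synthetic placeholders; it is assumed
# that the same subjects contribute estimates for both ROIs.
import numpy as np
from scipy.stats import ttest_rel

# Rows = subjects; columns = (personality judgments, emotion judgments).
lfo = np.array([[1.2, 0.6], [1.0, 0.5], [1.4, 0.8], [1.1, 0.7], [1.3, 0.6]])
smg = np.array([[0.5, 1.1], [0.4, 1.0], [0.6, 1.3], [0.5, 1.2], [0.7, 1.4]])

# Difference scores (personality minus emotion) per region, per subject.
lfo_diff = lfo[:, 0] - lfo[:, 1]    # expected positive in the LFO ROI
smg_diff = smg[:, 0] - smg[:, 1]    # expected negative in the postcentral/supramarginal ROI

t, p = ttest_rel(lfo_diff, smg_diff)  # interaction: do the regions' preferences differ?
print(f"region x judgment interaction: t = {t:.2f}, p = {p:.3g}")
```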


Subject(s)
Emotions , Personality , Social Perception , Adult , Brain/anatomy & histology , Brain/physiology , Cerebral Cortex/physiology , Character , Cues , Female , Frontal Lobe/physiology , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Movement , Oxygen/blood , Photic Stimulation , Reaction Time/physiology , Reading , Reproducibility of Results