Results 1 - 13 of 13
1.
Cereb Cortex ; 34(5)2024 May 02.
Article in English | MEDLINE | ID: mdl-38795358

ABSTRACT

We report an investigation of the neural processes involved in the processing of faces and objects in brain-lesioned patient PS, a well-documented case of pure acquired prosopagnosia. We gathered a substantial dataset of high-density electrophysiological recordings from both PS and neurotypical observers. Using representational similarity analysis, we produced time-resolved brain representations in a format that allows direct comparisons across time points, individuals, and computational models. To understand how the lesions in PS's ventral stream affect the temporal evolution of her brain representations, we computed their temporal generalization. We found that PS's early brain representations exhibited an unusual similarity to later representations, implying an excessive generalization of early visual patterns. To reveal the underlying computational deficits, we correlated PS's brain representations with those of deep neural networks (DNNs). The computations underlying PS's brain activity resembled the early layers of a visual DNN more closely than did those of controls, whereas the brain representations of neurotypical observers aligned more strongly than PS's with the model's later layers. We confirmed PS's deficits in high-level brain representations by demonstrating that her brain representations showed less similarity to those of a semantic DNN.
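As a rough illustration of the temporal generalization analysis described above, the following Python sketch correlates time-resolved representational dissimilarity matrices (RDMs) across all pairs of time points; the data and variable names are placeholders, not the authors' actual pipeline.

import numpy as np
from scipy.stats import spearmanr

def temporal_generalization(rdms):
    """Correlate the RDM at each time point with the RDM at every other
    time point, yielding an (n_times, n_times) generalization matrix."""
    n_times = rdms.shape[0]
    tg = np.zeros((n_times, n_times))
    for i in range(n_times):
        for j in range(n_times):
            tg[i, j], _ = spearmanr(rdms[i], rdms[j])
    return tg

# Placeholder data: 100 time points, 190 stimulus pairs (vectorized RDMs).
rng = np.random.default_rng(0)
rdms = rng.normal(size=(100, 190))
tg = temporal_generalization(rdms)
# Unusually high off-diagonal values (early rows correlating with late columns)
# would indicate early representations persisting into later time windows,
# the pattern reported for patient PS.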


Subject(s)
Prosopagnosia , Humans , Prosopagnosia/physiopathology , Female , Adult , Brain/physiopathology , Neural Networks, Computer , Middle Aged , Pattern Recognition, Visual/physiology , Male , Models, Neurological
2.
PNAS Nexus ; 3(3): pgae095, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38516275

ABSTRACT

Why are some individuals better at recognizing faces? Uncovering the neural mechanisms supporting face recognition ability has proven elusive. To tackle this challenge, we used a multimodal data-driven approach combining neuroimaging, computational modeling, and behavioral tests. We recorded the high-density electroencephalographic brain activity of individuals with extraordinary face recognition abilities-super-recognizers-and typical recognizers in response to diverse visual stimuli. Using multivariate pattern analyses, we decoded face recognition abilities from 1 s of brain activity with up to 80% accuracy. To better understand the mechanisms subtending this decoding, we compared representations in the brains of our participants with those in artificial neural network models of vision and semantics, as well as with those involved in human judgments of shape and meaning similarity. Compared to typical recognizers, we found stronger associations between early brain representations of super-recognizers and midlevel representations of vision models as well as shape similarity judgments. Moreover, we found stronger associations between late brain representations of super-recognizers and representations of the artificial semantic model as well as meaning similarity judgments. Overall, these results indicate that important individual variations in brain processing, including neural computations extending beyond purely visual processes, support differences in face recognition abilities. They provide the first empirical evidence for an association between semantic computations and face recognition abilities. We believe that such multimodal data-driven approaches will likely play a critical role in further revealing the complex nature of idiosyncratic face recognition in the human brain.
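As a simplified stand-in for the multivariate decoding described above (not the authors' exact cross-validation scheme), the sketch below classifies single 1 s EEG epochs by the observer's group with a linear classifier; the data here are random placeholders.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64 * 250))   # 200 epochs, 64 channels x 250 samples (1 s)
y = rng.integers(0, 2, size=200)       # 1 = super-recognizer, 0 = typical recognizer

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, dual=False))
scores = cross_val_score(clf, X, y, cv=5)          # 5-fold cross-validated accuracy
print(f"decoding accuracy: {scores.mean():.2f}")   # ~0.5 for random placeholder data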

3.
J Vis ; 24(1): 7, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-38197738

ABSTRACT

Humans communicate internal states through complex facial movements shaped by biological and evolutionary constraints. Although real-life social interactions are flooded with dynamic signals, current knowledge on facial expression recognition mainly arises from studies using static face images. This experimental bias might stem from previous studies consistently reporting that young adults benefit only minimally from the richer dynamic over static information, whereas children, the elderly, and clinical populations benefit strongly (Richoz, Jack, Garrod, Schyns, & Caldara, 2015; Richoz, Jack, Garrod, Schyns, & Caldara, 2018b). These observations point to a near-optimal facial expression decoding system in young adults, almost insensitive to the advantage of dynamic over static cues. Surprisingly, no study has yet tested the idea that such evidence might be rooted in a ceiling effect. To this aim, we asked 70 healthy young adults to perform static and dynamic facial expression recognition of the six basic expressions while parametrically and randomly varying the low-level normalized phase and contrast signal (0%-100%) of the faces. As predicted, when 100% face signals were presented, static and dynamic expressions were recognized with equal efficiency, with the exception of those with the most informative dynamics (i.e., happiness and surprise). However, when less signal was available, dynamic expressions were all better recognized than their static counterparts (peaking at ∼20%). Our data show that facial movements increase our ability to efficiently identify the emotional states of others under the suboptimal visual conditions that can occur in everyday life. Dynamic signals are more effective and sensitive than static ones for decoding all facial expressions of emotion, for all human observers.
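A rough sketch of how face signal can be parametrically degraded, by blending the image's Fourier phase with random phase while keeping its amplitude spectrum; the exact phase and contrast normalization used in the study is not reproduced here.

import numpy as np

def phase_degraded(image, signal=0.2, rng=None):
    """Return an image containing `signal` proportion of the original phase
    information (0.0 = pure phase noise, 1.0 = intact image)."""
    rng = rng or np.random.default_rng()
    spectrum = np.fft.fft2(image)
    amplitude = np.abs(spectrum)
    phase = np.angle(spectrum)
    noise_phase = np.angle(np.fft.fft2(rng.normal(size=image.shape)))
    mixed_phase = signal * phase + (1.0 - signal) * noise_phase
    return np.real(np.fft.ifft2(amplitude * np.exp(1j * mixed_phase)))

face = np.random.rand(256, 256)              # placeholder for a face image
stimulus = phase_degraded(face, signal=0.2)  # ~20% signal, near the dynamic-advantage peak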


Subject(s)
Facial Expression , Facial Recognition , Child , Aged , Young Adult , Humans , Emotions , Happiness , Cues
4.
Neuropsychologia ; 180: 108479, 2023 02 10.
Article in English | MEDLINE | ID: mdl-36623806

ABSTRACT

Healthy observers recognize same-race faces more accurately than other-race faces (i.e., the Same-Race Recognition Advantage, SRRA) but categorize them by race more slowly than other-race faces (i.e., the Other-Race Categorization Advantage, ORCA). Several fMRI studies have reported discrepant bilateral activations in the Fusiform Face Area (FFA) and Occipital Face Area (OFA) correlating with both effects. However, given the nature and limits of fMRI data, whether these face-sensitive regions play an unequivocal causal role in these other-race effects remains to be clarified. To this aim, we tested PS, a well-studied pure case of acquired prosopagnosia with lesions encompassing the left FFA and the right OFA. PS, healthy age-matched controls, and young adults performed two recognition tasks and three race-categorization tasks, using Western Caucasian and East Asian faces normalized for their low-level properties, presented with and without external features, as well as in naturalistic settings. As expected, PS was slower and less accurate than the controls. Crucially, however, the magnitudes of her SRRA and ORCA were comparable to those of the controls in all tasks. Our data show that prosopagnosia does not abolish other-race effects: an intact face system, the left FFA, and/or the right OFA are not critical for eliciting the SRRA and ORCA. Race is a strong visual and social signal that is encoded in a large face-sensitive neural network, robustly tuned for processing same-race faces.
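For clarity, the two effects can be expressed as simple difference scores, as in the toy Python example below (values are illustrative, not data from the study).

import numpy as np

# Placeholder per-participant scores.
acc = {"same_race": np.array([0.85, 0.90, 0.80]),    # recognition accuracy
       "other_race": np.array([0.70, 0.75, 0.72])}
rt = {"same_race": np.array([620.0, 650.0, 600.0]),  # categorization RT (ms)
      "other_race": np.array([560.0, 580.0, 555.0])}

# Same-Race Recognition Advantage: higher recognition accuracy for same-race faces.
srra = acc["same_race"].mean() - acc["other_race"].mean()
# Other-Race Categorization Advantage: faster race categorization of other-race
# faces, i.e. a positive same-minus-other RT difference.
orca = rt["same_race"].mean() - rt["other_race"].mean()
print(f"SRRA = {srra:.2f}, ORCA = {orca:.0f} ms")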


Subject(s)
Prosopagnosia , Female , Humans , Young Adult , Cerebral Cortex/pathology , Magnetic Resonance Imaging , Pattern Recognition, Visual , Prosopagnosia/diagnostic imaging , Recognition, Psychology , White People , East Asian People
5.
Heliyon ; 7(5): e07018, 2021 May.
Article in English | MEDLINE | ID: mdl-34041389

ABSTRACT

During real-life interactions, facial expressions of emotion are perceived dynamically, with multimodal sensory information. In the absence of auditory input, it is unclear how facial expressions are recognized and internally represented by deaf individuals. Few studies have investigated facial expression recognition in deaf signers using dynamic stimuli, and none have included all six basic facial expressions of emotion (anger, disgust, fear, happiness, sadness, and surprise) with stimuli fully controlled for their low-level visual properties, leaving unresolved the question of whether a dynamic advantage exists for deaf observers. In line with the enhancement hypothesis, we hypothesized that the absence of auditory sensory information might have forced the visual system to better process visual (unimodal) signals, and predicted that this greater sensitivity to visual stimuli would result in better recognition performance for dynamic compared to static stimuli, and for deaf signers compared to hearing non-signers in the dynamic condition. To this end, we performed a series of psychophysical studies with deaf signers with early-onset severe-to-profound deafness (dB loss >70) and hearing controls to estimate their ability to recognize the six basic facial expressions of emotion. Using static, dynamic, and shuffled (randomly permuted video frames of an expression) stimuli, we found that deaf observers showed categorization profiles and confusions across expressions similar to those of hearing controls (e.g., confusing surprise with fear). Contrary to our hypothesis, we found no recognition advantage for dynamic over static facial expressions in deaf observers. This observation shows that the decoding of emotional signals from dynamic facial expressions is not superior even in the expert visual system of deaf observers, suggesting that static facial expressions of emotion at their apex carry optimal signals. Deaf individuals match hearing individuals in the recognition of facial expressions of emotion.
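The "shuffled" control condition can be illustrated by a small sketch that randomly permutes the frames of a dynamic expression clip, preserving frame content while destroying temporal structure (illustrative code, not the authors' stimulus-generation script).

import numpy as np

def shuffle_frames(video, rng=None):
    """video: array of shape (n_frames, height, width); returns a copy with
    the frame order randomly permuted."""
    rng = rng or np.random.default_rng()
    order = rng.permutation(video.shape[0])
    return video[order]

clip = np.zeros((30, 128, 128))   # placeholder 30-frame expression clip
shuffled = shuffle_frames(clip)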

6.
J Deaf Stud Deaf Educ ; 24(4): 346-355, 2019 10 01.
Article in English | MEDLINE | ID: mdl-31271428

ABSTRACT

We live in a world of rich dynamic multisensory signals. Hearing individuals rapidly and effectively integrate multimodal signals to decode biologically relevant facial expressions of emotion. Yet, it remains unclear how facial expressions are decoded by deaf adults in the absence of an auditory sensory channel. We thus compared early and profoundly deaf signers (n = 46) with hearing nonsigners (n = 48) on a psychophysical task designed to quantify their recognition performance for the six basic facial expressions of emotion. Using neutral-to-expression image morphs and noise-to-full signal images, we quantified the intensity and signal levels required by observers to achieve expression recognition. Using Bayesian modeling, we found that deaf observers require more signal and intensity to recognize disgust, while reaching comparable performance for the remaining expressions. Our results provide a robust benchmark for the intensity and signal use in deafness and novel insights into the differential coding of facial expressions of emotion between hearing and deaf individuals.
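A simple maximum-likelihood stand-in for the hierarchical Bayesian model mentioned above: fitting a logistic psychometric function to recognition accuracy as a function of morph intensity to estimate the intensity needed for recognition (all values below are placeholders).

import numpy as np
from scipy.optimize import curve_fit

def psychometric(intensity, threshold, slope, guess=1.0 / 6.0):
    """Probability correct in a six-alternative expression task."""
    return guess + (1.0 - guess) / (1.0 + np.exp(-slope * (intensity - threshold)))

intensity = np.linspace(0.0, 1.0, 11)               # neutral-to-full morph levels
accuracy = np.array([.17, .18, .20, .28, .40, .55,  # placeholder group means
                     .70, .82, .90, .94, .96])

params, _ = curve_fit(psychometric, intensity, accuracy, p0=[0.5, 10.0])
print(f"estimated intensity threshold: {params[0]:.2f}")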


Subject(s)
Deafness/psychology , Emotions , Facial Expression , Facial Recognition , Adolescent , Adult , Female , Humans , Male , Sign Language , Young Adult
7.
Perception ; 48(3): 197-213, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30758252

ABSTRACT

The present study examined whether children with autism spectrum disorder (ASD) and typically developing (TD) children differed in the visual perception of food stimuli at both the sensorimotor and affective levels. A potential link between visual perception and food neophobia was also investigated. To these aims, 11 children with ASD and 11 TD children were tested. Pictures of food were used as visual stimuli, and food neophobia was assessed by the parents. Results revealed that children with ASD visually explored the food stimuli for longer than TD children. Complementary analyses revealed that whereas TD children explored multiple-item dishes more than simple-item dishes, children with ASD explored all dishes in a similar way. In addition, children with ASD gave more negative appreciations overall. Moreover, hedonic ratings were negatively correlated with food neophobia scores in children with ASD, but not in TD children. In sum, we show that children with ASD have more difficulty than TD children in liking a food when it is presented visually. Our findings also suggest that time management during the food choice process is a prominent factor that needs to be considered. They also provide new ways of measuring and understanding food neophobia in children with ASD.


Subject(s)
Affect , Autism Spectrum Disorder/complications , Autism Spectrum Disorder/psychology , Phobic Disorders/complications , Phobic Disorders/psychology , Visual Perception , Adolescent , Case-Control Studies , Child , Child, Preschool , Female , Food , Humans , Male , Philosophy , Photic Stimulation
8.
J Vis ; 18(9): 5, 2018 09 04.
Article in English | MEDLINE | ID: mdl-30208425

ABSTRACT

The effective transmission and decoding of dynamic facial expressions of emotion is omnipresent and critical for adapted social interactions in everyday life. Thus, common intuition would suggest an advantage for dynamic facial expression recognition (FER) over the static snapshots routinely used in most experiments. However, although many studies have reported an advantage in the recognition of dynamic over static expressions in clinical populations, results obtained from healthy participants are mixed. To clarify this issue, we conducted a large cross-sectional study investigating FER across the life span, to determine whether age is a critical factor accounting for such discrepancies. More than 400 observers (age range 5-96) performed recognition tasks of the six basic expressions in static, dynamic, and shuffled (temporally randomized frames) conditions, normalized for the amount of energy sampled over time. We applied a Bayesian hierarchical step-linear model to capture the nonlinear relationship between age and FER for the different viewing conditions. While replicating the typical accuracy profiles of FER, we determined the age at which peak efficiency was reached for each expression and found greater accuracy for most dynamic expressions across the life span. This advantage in the elderly population was driven by a significant decrease in performance for static images, twice as large as that observed in young adults. Our data indicate that dynamic stimuli are critical for assessing FER in the elderly population, and invite caution when drawing conclusions from the sole use of static face images to this aim.
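As a simple least-squares stand-in for the Bayesian hierarchical step-linear model (the sketch below is not the authors' model), recognition accuracy can be described by a piecewise-linear function of age, with a rising limb before a peak age and a declining limb after it.

import numpy as np
from scipy.optimize import curve_fit

def step_linear(age, peak_age, peak_acc, rise, decline):
    """Accuracy rises with slope `rise` before `peak_age` and falls with
    slope `decline` after it."""
    return np.where(age <= peak_age,
                    peak_acc - rise * (peak_age - age),
                    peak_acc - decline * (age - peak_age))

ages = np.array([5, 10, 20, 30, 40, 50, 60, 70, 80, 90], dtype=float)
acc = np.array([.55, .68, .80, .82, .80, .78, .74, .68, .60, .52])  # placeholder means

params, _ = curve_fit(step_linear, ages, acc, p0=[30.0, 0.8, 0.01, 0.004])
print(f"estimated peak efficiency at ~{params[0]:.0f} years")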


Subject(s)
Aging/physiology , Emotions , Facial Expression , Facial Recognition/physiology , Adolescent , Adult , Age Factors , Aged , Aged, 80 and over , Bayes Theorem , Child , Child, Preschool , Cross-Sectional Studies , Female , Humans , Male , Middle Aged , Young Adult
10.
Soc Cogn Affect Neurosci ; 12(12): 1959-1971, 2017 12 01.
Article in English | MEDLINE | ID: mdl-29040780

ABSTRACT

The rapid extraction of facial identity and emotional expressions is critical for adapted social interactions. These biologically relevant abilities have been associated with early neural responses on the face-sensitive N170 component. However, it is still unclear whether all facial expressions uniformly modulate the N170, and whether this effect occurs only when emotion categorization is task-relevant. To clarify this issue, we recorded high-resolution electrophysiological signals while 22 observers perceived the six basic expressions plus neutral. We used a repetition suppression paradigm, with an adaptor followed by a target face displaying the same identity and expression (trials of interest). We also included catch trials, to which participants had to react, in which identity (identity task), expression (expression task), or both (dual task) varied on the target face. We extracted single-trial repetition suppression (stRS) responses using a data-driven spatiotemporal approach with a robust hierarchical linear model to isolate adaptation effects on the trials of interest. Regardless of the task, fear was the only expression modulating the N170, eliciting the strongest stRS responses. This observation was corroborated by distinct behavioral performance during the catch trials for this facial expression. Altogether, our data reinforce the view that fear elicits distinct neural processes in the brain, enhancing attention and facilitating the early coding of faces.
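A minimal sketch of the repetition suppression logic, assuming single-trial N170 amplitudes have already been extracted: the stRS index is the adaptor-to-target amplitude change, compared across expressions with a robust (though not hierarchical) linear model; all data below are simulated placeholders.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_trials = 300
# Placeholder single-trial N170 amplitudes (microvolts) at an occipitotemporal
# electrode: targets are less negative than adaptors (suppression).
adaptor = rng.normal(-4.0, 1.0, n_trials)
target = adaptor + rng.normal(1.0, 0.8, n_trials)
strs = target - adaptor                       # single-trial repetition suppression

# Expression of the face pair (0 = neutral baseline, 1..6 = basic expressions).
expression = rng.integers(0, 7, n_trials)
design = sm.add_constant(np.eye(7)[expression][:, 1:])   # dummy-coded expressions
robust_fit = sm.RLM(strs, design, M=sm.robust.norms.HuberT()).fit()
print(robust_fit.params)   # expression-specific deviations in stRS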


Subject(s)
Face , Fear/psychology , Adult , Electroencephalography , Facial Expression , Female , Humans , Linear Models , Male , Photic Stimulation , Psychomotor Performance , Young Adult
11.
Soc Cogn Affect Neurosci ; 12(8): 1334-1341, 2017 08 01.
Article in English | MEDLINE | ID: mdl-28459990

ABSTRACT

Acquired prosopagnosia is characterized by a deficit in face recognition due to diverse brain lesions; interestingly, most prosopagnosic patients with posterior lesions use the mouth instead of the eyes for face identification. Whether this bias is also present for the recognition of facial expressions of emotion has not yet been addressed. We tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions dedicated to facial expression recognition. PS used mostly the mouth to recognize facial expressions, even when the eye area was the most diagnostic. Moreover, PS directed most of her fixations towards the mouth. Her impairment was still largely present when she was instructed to look at the eyes, or when she was forced to look at them. Control participants showed performance comparable to that of PS when only the lower part of the face was available. These observations suggest that the deficits observed in PS with static images are not solely attentional, but are rooted at the level of facial information use. This study corroborates neuroimaging findings suggesting that the Occipital Face Area might play a critical role in extracting facial features that are integrated for both face identification and facial expression recognition in static images.


Subject(s)
Emotions/physiology , Facial Expression , Facial Recognition/physiology , Prosopagnosia/physiopathology , Female , Humans , Male , Middle Aged
12.
PLoS One ; 12(1): e0169325, 2017.
Article in English | MEDLINE | ID: mdl-28060872

ABSTRACT

Early multisensory perceptual experiences shape infants' ability to perform socially relevant visual categorization, such as the extraction of gender, age, and emotion from faces. Here, we investigated whether the multisensory perception of gender is influenced by infant-directed (IDS) or adult-directed (ADS) speech. Six-, 9-, and 12-month-old infants saw side-by-side silent video clips of talking faces (a male and a female) and heard a soundtrack of either a female or a male voice telling a story in IDS or ADS. Infants participated in only one condition, either IDS or ADS. Consistent with earlier work, infants displayed an advantage in matching female relative to male faces and voices. The new finding of the current study was that the extraction of gender from face and voice was stronger at 6 months with ADS than with IDS, whereas at 9 and 12 months, matching did not differ between IDS and ADS. These results indicate that the ability to perceive gender in audiovisual speech is influenced by speech manner. Our data suggest that infants may extract multisensory gender information developmentally earlier when looking at adults engaged in conversation with other adults (i.e., ADS) than when adults are talking directly to them (i.e., IDS). Overall, our findings imply that the circumstances of social interaction may shape early multisensory abilities to perceive gender.


Subject(s)
Auditory Perception , Speech , Visual Perception , Voice , Acoustic Stimulation , Adult , Child Development , Female , Hearing , Humans , Infant , Male , Photic Stimulation , Speech Perception
13.
Cortex ; 65: 50-64, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25638352

ABSTRACT

The human face transmits a wealth of signals that readily provide crucial information for social interactions, such as facial identity and emotional expression. Yet a fundamental question remains unresolved: does the facial information used for identity and for emotional expression categorization tap into common or distinct representational systems? To address this question we tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions assumed to contribute to facial expression (de)coding (i.e., the amygdala, the insula, and the posterior superior temporal sulcus, pSTS). We previously demonstrated that PS does not use information from the eye region to identify faces, but relies on the suboptimal mouth region. PS's abnormal information use for identity, coupled with her neural dissociation, provides a unique opportunity to probe the existence of a dichotomy in the face representational system. To reconstruct the mental models of the six basic facial expressions of emotion in PS and age-matched healthy observers, we used a novel reverse correlation technique tracking information use on dynamic faces. PS was comparable to controls, using all facial features to (de)code facial expressions, with the exception of fear. PS's normal (de)coding of dynamic facial expressions suggests that the face system relies either on distinct representational systems for identity and expression, or on dissociable cortical pathways to access them. Interestingly, PS showed a selective impairment in categorizing many static facial expressions, which could be accounted for by her lesion in the right inferior occipital gyrus. PS's advantage for dynamic facial expressions might instead relate to a functionally distinct and sufficient cortical pathway directly connecting early visual cortex to the spared pSTS. Altogether, our data provide critical insights into the healthy and impaired face systems, question evidence of deficits obtained from patients using static images of facial expressions, and offer novel routes for patient rehabilitation.
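As a rough illustration of a reverse correlation analysis (a bubbles-style sketch with hypothetical variables, not the authors' exact technique), the code below relates random revelation masks to response accuracy to obtain a classification image of the face regions driving correct categorization.

import numpy as np

rng = np.random.default_rng(2)
n_trials, h, w = 500, 64, 64
masks = rng.random((n_trials, h, w)) < 0.1            # random revelation masks
correct = rng.integers(0, 2, n_trials).astype(bool)   # placeholder responses

# Classification image: information sampled more often on correct trials.
ci = masks[correct].mean(axis=0) - masks[~correct].mean(axis=0)
# High values mark the regions (e.g., eyes vs. mouth) whose visibility predicts
# correct expression categorization.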


Subject(s)
Emotions/physiology , Face , Facial Expression , Prosopagnosia/psychology , Visual Perception/physiology , Brain Mapping , Cerebral Cortex/physiopathology , Discrimination, Psychological , Female , Humans , Male , Middle Aged , Models, Psychological , Prosopagnosia/diagnosis