Results 1 - 9 of 9
1.
J Exp Child Psychol; 229: 105622, 2023 May.
Article in English | MEDLINE | ID: mdl-36641829

ABSTRACT

In our daily lives, we routinely look at the faces of others to try to understand how they are feeling. Few studies have examined the perceptual strategies that are used to recognize facial expressions of emotion, and none have attempted to isolate visual information use with eye movements throughout development. Therefore, we recorded the eye movements of children from 5 years of age up to adulthood during recognition of the six "basic emotions" to investigate when perceptual strategies for emotion recognition become mature (i.e., most adult-like). Using iMap4, we identified the eye movement fixation patterns for recognition of the six emotions across age groups in natural viewing and gaze-contingent (i.e., expanding spotlight) conditions. While univariate analyses failed to reveal significant differences in fixation patterns, more sensitive multivariate distance analyses revealed a U-shaped developmental trajectory, with the eye movement strategies of the 17- to 18-year-old group most similar to those of adults for all expressions. A developmental dip in strategy similarity was found for each emotional expression, revealing which age group had the eye movement strategy most distinct from that of the adult group: the 13- to 14-year-olds for sadness recognition; the 11- to 12-year-olds for fear, anger, surprise, and disgust; and the 7- to 8-year-olds for happiness. Recognition performance for happy, angry, and sad expressions did not differ significantly across age groups, but the eye movement strategies for these expressions diverged across groups. Therefore, a unique strategy was not a prerequisite for optimal recognition performance for these expressions. Our data provide novel insights into the developmental trajectories underlying facial expression recognition, a critical ability for adaptive social relations.
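The multivariate distance analysis described above can be made concrete with a short sketch. iMap4 itself is a MATLAB toolbox; the Python analogue below builds duration-weighted, Gaussian-smoothed fixation maps per age group and measures each group's Euclidean distance from the adult map. The image size, smoothing kernel, group labels, and fixation data are all placeholders, not the study's.

import numpy as np
from scipy.ndimage import gaussian_filter

H, W = 384, 256   # stimulus size in pixels (assumed)
SIGMA = 10        # smoothing kernel, roughly 1 degree of visual angle (assumed)

def fixation_map(fixations, durations):
    """Accumulate duration-weighted fixations, then smooth and normalize."""
    fmap = np.zeros((H, W))
    for (y, x), d in zip(fixations, durations):
        fmap[int(y), int(x)] += d
    fmap = gaussian_filter(fmap, SIGMA)
    return fmap / fmap.sum()

rng = np.random.default_rng(0)
groups = {}
for name in ["5-6", "7-8", "9-10", "11-12", "13-14", "15-16", "17-18", "adult"]:
    fixs = np.column_stack([rng.normal(H / 2, 40, 200), rng.normal(W / 2, 30, 200)])
    fixs = np.clip(fixs, 0, [H - 1, W - 1])        # keep fixations on the image
    groups[name] = fixation_map(fixs, rng.uniform(100, 400, 200))

adult = groups["adult"].ravel()
for name, fmap in groups.items():
    dist = np.linalg.norm(fmap.ravel() - adult)    # distance to the adult map
    print(f"{name:>6}: {dist:.2e}")

With real data, plotting these distances by age group would trace the U-shaped developmental trajectory reported above.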


Subject(s)
Facial Expression , Facial Recognition , Adult , Child , Humans , Adolescent , Eye Movements , Emotions , Anger , Happiness
2.
Heliyon; 7(5): e07018, 2021 May.
Article in English | MEDLINE | ID: mdl-34041389

ABSTRACT

During real-life interactions, facial expressions of emotion are perceived dynamically with multimodal sensory information. In the absence of auditory sensory channel inputs, it is unclear how facial expressions are recognized and internally represented by deaf individuals. Few studies have investigated facial expression recognition in deaf signers using dynamic stimuli, and none have included all six basic facial expressions of emotion (anger, disgust, fear, happiness, sadness, and surprise) with stimuli fully controlled for their low-level visual properties, leaving unresolved the question of whether a dynamic advantage exists for deaf observers. We hypothesized, in line with the enhancement hypothesis, that the absence of auditory sensory information might have forced the visual system to better process visual (unimodal) signals, and predicted that this greater sensitivity to visual stimuli would result in better recognition performance for dynamic compared to static stimuli, and for deaf signers compared to hearing non-signers in the dynamic condition. To this end, we performed a series of psychophysical studies with deaf signers with early-onset severe-to-profound deafness (dB loss >70) and hearing controls to estimate their ability to recognize the six basic facial expressions of emotion. Using static, dynamic, and shuffled (randomly permuted video frames of an expression) stimuli, we found that deaf observers showed categorization profiles and confusions across expressions similar to those of hearing controls (e.g., confusing surprise with fear). In contrast to our hypothesis, we found no recognition advantage for dynamic compared to static facial expressions for deaf observers. This observation shows that the decoding of dynamic emotional signals from facial expressions is not superior even in the expert visual system of deaf observers, suggesting that static facial expressions of emotion at the apex already carry optimal signals. Deaf individuals match hearing individuals in the recognition of facial expressions of emotion.
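The "shuffled" condition above has a simple operational definition: keep every frame of an expression video but destroy its temporal dynamics. A minimal sketch, with frame I/O left abstract ('frames' is any sequence of image arrays, an assumption):

import numpy as np

def shuffle_frames(frames, seed=None):
    """Return the same frames in a random temporal order."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(frames))
    return [frames[i] for i in order]

# Example: a dummy 30-frame "video" of 64x64 grayscale images.
video = [np.zeros((64, 64)) + i for i in range(30)]
shuffled = shuffle_frames(video, seed=1)

Because the shuffled clip contains exactly the same images as the dynamic one, any recognition difference between the two conditions can be attributed to temporal structure rather than image content.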

3.
J Deaf Stud Deaf Educ; 24(4): 346-355, 2019 Oct 01.
Article in English | MEDLINE | ID: mdl-31271428

ABSTRACT

We live in a world of rich dynamic multisensory signals. Hearing individuals rapidly and effectively integrate multimodal signals to decode biologically relevant facial expressions of emotion. Yet, it remains unclear how facial expressions are decoded by deaf adults in the absence of an auditory sensory channel. We thus compared early and profoundly deaf signers (n = 46) with hearing nonsigners (n = 48) on a psychophysical task designed to quantify their recognition performance for the six basic facial expressions of emotion. Using neutral-to-expression image morphs and noise-to-full signal images, we quantified the intensity and signal levels required by observers to achieve expression recognition. Using Bayesian modeling, we found that deaf observers require more signal and intensity to recognize disgust while reaching comparable performance for the remaining expressions. Our results provide a robust benchmark for intensity and signal use in deafness and novel insights into the differential coding of facial expressions of emotion between hearing and deaf individuals.
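A sketch in the spirit of the Bayesian modeling mentioned above (not the authors' exact model): place a grid posterior over the threshold of a cumulative-Gaussian psychometric function given per-level accuracy, here for a six-alternative emotion categorization. The signal levels, trial counts, responses, and fixed slope/lapse parameters are invented for illustration.

import numpy as np
from scipy.stats import norm, binom

signal = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])   # proportion of signal shown
n_trials = np.full(6, 40)
n_correct = np.array([8, 12, 21, 30, 36, 39])        # invented responses

thresholds = np.linspace(0.05, 0.8, 301)             # candidate thresholds
slope, guess, lapse = 0.1, 1 / 6, 0.02               # fixed nuisance parameters (assumed)

log_post = np.zeros_like(thresholds)
for i, t in enumerate(thresholds):
    p = guess + (1 - guess - lapse) * norm.cdf(signal, loc=t, scale=slope)
    log_post[i] = binom.logpmf(n_correct, n_trials, p).sum()   # flat prior

post = np.exp(log_post - log_post.max())
post /= post.sum()
print("posterior mean threshold:", (thresholds * post).sum())

Comparing such posterior threshold estimates between groups, per expression, is one way to formalize statements like "deaf observers require more signal to recognize disgust".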


Subject(s)
Deafness/psychology , Emotions , Facial Expression , Facial Recognition , Adolescent , Adult , Female , Humans , Male , Sign Language , Young Adult
4.
Psychosom Med; 81(2): 155-164, 2019.
Article in English | MEDLINE | ID: mdl-30702549

ABSTRACT

OBJECTIVE: Impairments in facial emotion recognition are an underlying factor of deficits in emotion regulation and interpersonal difficulties in mental disorders and are evident in eating disorders (EDs). METHODS: We used a computerized psychophysical paradigm to manipulate parametrically the quantity of signal in facial expressions of emotion (QUEST threshold-seeking algorithm). This was used to measure emotion recognition in 308 adult women (anorexia nervosa [n = 61], bulimia nervosa [n = 58], healthy controls [n = 130], and mixed mental disorders [mixed, n = 59]). The mean (SD) age was 22.84 (3.90) years. The aims were to establish recognition thresholds defining how much information a person needs to recognize a facial emotion expression and to identify deficits in EDs compared with healthy and clinical controls. The stimuli included six basic emotion expressions (fear, anger, disgust, happiness, sadness, surprise), plus a neutral expression. RESULTS: Happiness was discriminated at the lowest threshold and fear at the highest threshold by all groups. There were no threshold differences between groups, except between the mixed and the bulimia nervosa groups with respect to the expression of disgust (F(3,302) = 5.97, p = .001, η² = .056). Emotional clarity, ED pathology, and depressive symptoms did not predict performance (R²Change ≤ .010, F(1,305) ≤ 5.74, p ≥ .079). The confusion matrix did not reveal specific biases in either group. CONCLUSIONS: Overall, within-subject effects were as expected, whereas between-subject effects were marginal and psychopathology did not influence emotion recognition. Facial emotion recognition abilities in women with EDs were similar to those of women with mixed mental disorders and of healthy controls. Although basic facial emotion recognition processes seem to be intact, dysfunctional aspects such as misinterpretation might be important in emotion regulation problems. CLINICAL TRIAL REGISTRATION NUMBER: DRKS-ID: DRKS00005709.
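QUEST (Watson & Pelli, 1983), named above, is a Bayesian staircase: it keeps a posterior over the observer's threshold, tests each trial at the current best estimate, and updates after every response. A minimal sketch, not the study's implementation; the psychometric-function parameters and the simulated observer are assumptions (the guess rate of 1/7 reflects the seven response categories, six emotions plus neutral):

import numpy as np
from scipy.stats import norm

grid = np.linspace(0.0, 1.0, 201)                 # candidate thresholds (signal quantity)
posterior = norm.pdf(grid, loc=0.5, scale=0.25)   # prior belief
posterior /= posterior.sum()

def p_correct(intensity, threshold, slope=0.08, guess=1 / 7, lapse=0.02):
    """Cumulative-Gaussian psychometric function for a 7-category task."""
    return guess + (1 - guess - lapse) * norm.cdf(intensity, threshold, slope)

rng = np.random.default_rng(3)
true_threshold = 0.42                             # simulated observer (assumption)
for trial in range(40):
    test = (grid * posterior).sum()               # place the trial at the posterior mean
    correct = rng.random() < p_correct(test, true_threshold)
    likelihood = p_correct(test, grid) if correct else 1 - p_correct(test, grid)
    posterior *= likelihood
    posterior /= posterior.sum()

print("estimated threshold:", (grid * posterior).sum())

The staircase converges on the signal quantity at which the observer performs at a criterion level, which is the per-expression recognition threshold compared between groups above.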


Subject(s)
Emotional Regulation , Facial Expression , Facial Recognition/physiology , Feeding and Eating Disorders/physiopathology , Social Perception , Adolescent , Adult , Female , Humans , Young Adult
5.
J Exp Child Psychol; 174: 41-59, 2018 Oct.
Article in English | MEDLINE | ID: mdl-29906651

ABSTRACT

Behavioral studies investigating facial expression recognition during development have applied various methods to establish by which age emotional expressions can be recognized. Most commonly, these methods employ static images of expressions at their highest intensity (apex) or morphed expressions of different intensities, but they have not previously been compared. Our aim was to (a) quantify the intensity and signal use for recognition of six emotional expressions from early childhood to adulthood and (b) compare both measures and assess their functional relationship to better understand the use of different measures across development. Using a psychophysical approach, we isolated the quantity of signal necessary to recognize an emotional expression at full intensity and the quantity of expression intensity (using neutral-to-expression image morphs of varying intensities) necessary for each observer to recognize the six basic emotions while maintaining performance at 75%. Both measures revealed that fear and happiness were the most difficult and easiest expressions to recognize across age groups, respectively, a pattern already stable during early childhood. The quantity of signal and intensity needed to recognize expressions of sadness, anger, disgust, and surprise decreased with age. Using a Bayesian update procedure, we then reconstructed the response profiles for both measures. This analysis revealed that intensity and signal processing are similar only during adulthood and, therefore, cannot be straightforwardly compared during development. Altogether, our findings offer novel methodological and theoretical insights and tools for the investigation of the developing affective system.
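The intensity manipulation above can be illustrated with a one-line interpolation. True expression morphs warp facial landmarks between a neutral and an apex image of the same identity; the pixel-wise blend below is only a sketch to make the intensity parameter concrete, and the placeholder arrays stand in for aligned face images.

import numpy as np

def morph(neutral, apex, intensity):
    """intensity in [0, 1]: 0 = neutral face, 1 = full-blown expression."""
    return (1 - intensity) * neutral + intensity * apex

neutral = np.random.default_rng(0).random((128, 128))   # placeholder image
apex = np.random.default_rng(1).random((128, 128))      # placeholder image
levels = [morph(neutral, apex, a) for a in np.linspace(0.1, 1.0, 10)]

An adaptive procedure can then adjust the intensity parameter across trials until the observer recognizes the expression on 75% of trials, yielding the intensity threshold.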


Subject(s)
Aging/psychology , Emotions , Facial Expression , Facial Recognition , Adolescent , Adult , Bayes Theorem , Child , Child, Preschool , Female , Humans , Male , Photic Stimulation/methods , Young Adult
6.
Dev Sci; 18(6): 926-39, 2015 Nov.
Article in English | MEDLINE | ID: mdl-25704672

ABSTRACT

Reading the non-verbal cues from faces to infer the emotional states of others is central to our daily social interactions from very early in life. Despite the relatively well-documented ontogeny of facial expression recognition in infancy, our understanding of the development of this critical social skill throughout childhood into adulthood remains limited. To this end, using a psychophysical approach, we implemented the QUEST threshold-seeking algorithm to parametrically manipulate the quantity of signal available in faces (normalized for contrast and luminance) displaying the six emotional expressions, plus neutral. We thus determined observers' perceptual thresholds for effective discrimination of each emotional expression from 5 years of age up to adulthood. Consistent with previous studies, happiness was most easily recognized, requiring minimal signal (35% on average), whereas fear required the maximum signal (97% on average) across groups. Overall, recognition improved with age for all expressions except happiness and fear, for which all age groups including the youngest remained within the adult range. Uniquely, our findings characterize the recognition trajectories of the six basic emotions into three distinct groupings: expressions that show a steep improvement with age (disgust, neutral, and anger); expressions that show a more gradual improvement with age (sadness and surprise); and those that remain stable from early childhood (happiness and fear), indicating that the coding for these expressions is already mature by 5 years of age. Altogether, our data provide for the first time a fine-grained mapping of the development of facial expression recognition. This approach significantly increases our understanding of the decoding of emotions across development and offers a novel tool to measure impairments for specific facial expressions in developmental clinical populations.
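The stimulus normalization mentioned above ("normalized for contrast and luminance") typically means giving every image the same mean luminance and RMS contrast so that low-level differences cannot drive recognition. The abstract does not specify the authors' procedure; the following is a generic sketch with assumed target values and placeholder images.

import numpy as np

def normalize(img, target_mean=0.5, target_std=0.1):
    """Standardize an image, then rescale to a common mean luminance and RMS contrast."""
    z = (img - img.mean()) / img.std()
    # Clipping to the displayable range can slightly perturb the final statistics.
    return np.clip(z * target_std + target_mean, 0.0, 1.0)

faces = [np.random.default_rng(i).random((256, 256)) for i in range(7)]
normalized = [normalize(f) for f in faces]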


Subject(s)
Emotions/physiology , Facial Expression , Human Development/physiology , Pattern Recognition, Visual/physiology , Adolescent , Age Factors , Analysis of Variance , Bayes Theorem , Child , Face , Female , Humans , Linear Models , Male , Photic Stimulation , Psychometrics , Psychophysics , Young Adult
7.
Dev Sci; 14(5): 1176-84, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21884332

ABSTRACT

Perception and eye movements are affected by culture. Adults from Eastern societies (e.g. China) display a disposition to process information holistically, whereas individuals from Western societies (e.g. Britain) process information analytically. Recently, this pattern of cultural differences has been extended to face processing. Adults from Eastern cultures fixate centrally towards the nose when learning and recognizing faces, whereas adults from Western societies spread fixations across the eye and mouth regions. Although light has been shed on how adults can fixate different areas yet achieve comparable recognition accuracy, the reason why such divergent strategies exist is less certain. Some argue that culture shapes strategies across development, but little direct evidence exists to support this claim. Additionally, it has long been claimed that face recognition in early childhood is largely reliant upon external rather than internal face features, yet recent studies have challenged this theory. To address these issues, we tested children aged 7-12 years from the UK and China with an old/new face recognition paradigm while simultaneously recording their eye movements. Both populations displayed patterns of fixations that were consistent with adults from their respective cultural groups, which 'strengthened' across development as quantified by a pattern classifier analysis. Altogether, these observations suggest that cultural forces may indeed be responsible for shaping eye movements from early childhood. Furthermore, fixations made by both cultural groups almost exclusively landed on internal face regions, suggesting that these features, and not external features, are universally used to achieve face recognition in childhood.
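A pattern classifier analysis of the kind mentioned above asks whether an observer's cultural group can be decoded from their fixation pattern alone; increasing decoding accuracy with age is what "strengthening" means here. The abstract does not name the classifier, so the sketch below uses cross-validated logistic regression on synthetic fixation-map features.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_group, n_features = 30, 100            # e.g., a downsampled 10x10 fixation map
uk = rng.normal(0.0, 1.0, (n_per_group, n_features))
cn = rng.normal(0.3, 1.0, (n_per_group, n_features))   # synthetic group difference
X = np.vstack([uk, cn])
y = np.array([0] * n_per_group + [1] * n_per_group)

acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"decoding accuracy: {acc.mean():.2f} (chance = 0.50)")

Running the same analysis separately for younger and older children, and comparing decoding accuracy, would quantify how the culturally specific strategies strengthen across development.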


Subject(s)
Culture , Eye Movements , Fixation, Ocular , Pattern Recognition, Visual , Recognition, Psychology , Child , China , Face , Facial Expression , Female , Humans , Male , Scotland , Visual Perception
8.
J Vis; 10(6): 21, 2010 Jun 01.
Article in English | MEDLINE | ID: mdl-20884570

ABSTRACT

Culture shapes how people gather information from the visual world. We recently showed that Western observers focus on the eye region during face recognition, whereas Eastern observers fixate predominantly the center of faces, suggesting more effective use of extrafoveal information by Easterners than by Westerners. However, the cultural variation in eye movements during scene perception is a highly debated topic. Additionally, the extent to which those perceptual differences across observers from different cultures rely on modulations of extrafoveal information use remains to be clarified. We used the Blindspot, a gaze-contingent technique designed to dynamically mask central vision, during a visual search task for animals in natural scenes. We parametrically controlled the Blindspot and target animal sizes (0°, 2°, 5°, or 8°). We processed the eye-tracking data using an unbiased, data-driven approach based on fixation maps, and we introduced novel spatiotemporal analyses to finely characterize the dynamics of scene exploration. Both groups of observers, Eastern and Western, showed comparable animal identification performance, which decreased as a function of Blindspot size. Importantly, dynamic analysis of the exploration pathways revealed identical oculomotor strategies for both groups of observers during animal search in scenes. Culture does not impact extrafoveal information use during ecologically valid visual search for animals in natural scenes.
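The Blindspot logic is simple to sketch: on every display frame, occlude a disc centred on the current gaze position so that only extrafoveal vision can be used. Real use requires an eye tracker and a stimulus-presentation library; here the mask is applied to a plain image array, and the gaze position, mask radius, and scene are placeholders.

import numpy as np

def apply_blindspot(scene, gaze_xy, radius_px):
    """Replace a circular region of `scene`, centred on gaze, with mean luminance."""
    h, w = scene.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - gaze_xy[1]) ** 2 + (xx - gaze_xy[0]) ** 2 <= radius_px ** 2
    out = scene.copy()
    out[mask] = scene.mean()      # central vision is masked; periphery stays intact
    return out

scene = np.random.default_rng(0).random((600, 800))
masked = apply_blindspot(scene, gaze_xy=(400, 300), radius_px=80)

Varying radius_px implements the parametric Blindspot sizes (e.g., 0°, 2°, 5°, or 8°, converted to pixels for a given viewing distance).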


Subject(s)
Cultural Diversity , Discrimination, Psychological/physiology , Eye Movements/physiology , Pattern Recognition, Visual/physiology , Adult , Face , Female , Humans , Male
9.
Perception; 39(11): 1491-503, 2010.
Article in English | MEDLINE | ID: mdl-21313946

ABSTRACT

Face processing is widely understood to be a basic, universal visual function effortlessly achieved by people from all cultures and races. The remarkable recognition performance for faces is markedly and specifically affected by picture-plane inversion: the so-called face-inversion effect (FIE), a finding often used as evidence for face-specific mechanisms. However, it has recently been shown that culture shapes the way people deploy eye movements to extract information from faces. Interestingly, the comparable lack of experience with inverted faces across cultures offers a unique opportunity not only to establish the extent to which such cultural perceptual biases in eye movements are robust, but also to assess whether face-specific mechanisms are universally tuned. Here we monitored the eye movements of Western Caucasian (WC) and East Asian (EA) observers while they learned and recognised WC and EA inverted faces. Both groups of observers showed a comparable impairment in recognising inverted faces of both races. WC observers deployed a scattered inverted triangular scanpath with a bias towards the mouth, whereas EA observers uniformly extended the focus of their fixations from the centre towards the eyes. Overall, our data show that cultural perceptual differences in eye movements persist during the FIE, questioning the universality of face-processing mechanisms.


Subject(s)
Cultural Diversity , Discrimination, Psychological/physiology , Eye Movements/physiology , Face , Pattern Recognition, Visual/physiology , Analysis of Variance , Asian People , Cues , Female , Humans , Male , Orientation/physiology , Reaction Time , White People , Young Adult