Results 1 - 20 of 39
1.
Sci Rep ; 13(1): 16120, 2023 09 26.
Article in English | MEDLINE | ID: mdl-37752212

ABSTRACT

Our faces display socially important sex and identity information. How perceptually independent are these facial characteristics? Here, we used a sex categorization task to investigate how changing faces in terms of either their sex or their identity affects sex categorization of those faces, whether these manipulations affect sex categorization similarly when the original faces were personally familiar or unknown, and whether computational models trained for sex classification respond similarly to human observers. Our results show that varying faces along either the sex or the identity dimension affects their sex categorization. When the sex was swapped (e.g., female faces became male-looking; Experiment 1), sex categorization performance differed from that for the original, unchanged faces, and significantly more so for people who were familiar with the original faces than for those who were not. When the identity of the faces was manipulated by caricaturing or anti-caricaturing them (manipulations that augment or diminish idiosyncratic facial information; Experiment 2), sex categorization performance for caricatured, original, and anti-caricatured faces increased in that order, independently of face familiarity. Moreover, our face manipulations had different effects on computational models trained for sex classification and elicited different patterns of responses in humans and in the models. These results not only support the notion that the sex and identity of faces are processed integratively by human observers but also demonstrate that computational models of face categorization may not capture key characteristics of human face categorization.
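
To make the phrase "computational models trained for sex classification" concrete, the following minimal sketch trains a linear classifier on flattened face images using scikit-learn. The data arrays are random placeholders, and the abstract does not specify the authors' actual models, so this is an illustrative assumption rather than their method.

```python
# Illustrative sketch only: a linear sex classifier over flattened face
# images. X and y are random placeholders standing in for real face images
# and sex labels; the study's actual models are not specified here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64 * 64))   # placeholder 64x64 grayscale faces
y = rng.integers(0, 2, size=200)      # placeholder labels: 0 = female, 1 = male

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated sex-classification accuracy: {acc:.2f}")

# A model of this kind can then be probed with sex-swapped or
# (anti-)caricatured versions of the same faces, mirroring Experiments 1-2.
```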


Subject(s)
Face; Pattern Recognition, Visual; Humans; Male; Female; Pattern Recognition, Visual/physiology; Recognition, Psychology/physiology; Computer Simulation
2.
Iperception ; 14(6): 20416695231215604, 2023.
Article in English | MEDLINE | ID: mdl-38222319

ABSTRACT

When seeing an object in a scene, the presumption of seeing that object from a general viewpoint (as opposed to an accidental viewpoint) is a useful heuristic for deciding which of many interpretations of this object is correct. Similar heuristic assumptions about illumination quality might also be used for scene interpretation. Here we tested that assumption and asked whether illumination information helps determine object properties when objects are seen from an accidental viewpoint. Test objects were placed on a flat surface and the illumination was varied while keeping the objects' images constant. Observers judged the shape or rigidity of static or moving simple objects presented from an accidental viewpoint. They also chose which of two seemingly very similar faces was familiar. We found that: (1) objects could appear flat without shadow information but were perceived as volumetric or non-planar in the presence of cast shadows; (2) apparently non-rigid objects became rigid with shadow information; and (3) shading and shadows helped to infer which of two faces was the familiar one. Previous results had shown that cast shadows help determine the spatial layout of objects. Our study shows that other object properties, such as rigidity and 3D shape, can also be disambiguated by shadow information.

3.
Neuroimage ; 246: 118783, 2022 02 01.
Article in English | MEDLINE | ID: mdl-34879251

ABSTRACT

Face and body orientation convey important information for understanding other people's actions, intentions and social interactions. It has been shown that several occipitotemporal areas respond differently to faces or bodies of different orientations. However, whether face and body orientation are processed by partially overlapping or completely separate brain networks remains unclear, as the neural coding of face and body orientation is often investigated separately. Here, we recorded participants' brain activity using fMRI while they viewed faces and bodies shown from three different orientations and attended to either orientation or identity information. Using multivoxel pattern analysis we investigated which brain regions process face and body orientation, respectively, and which regions encode both face and body orientation in a stimulus-independent manner. We found that patterns of neural responses evoked by different stimulus orientations in the occipital face area, extrastriate body area, lateral occipital complex and right early visual cortex could generalise across faces and bodies, suggesting a stimulus-independent encoding of person orientation in occipitotemporal cortex. This finding was consistent across functionally defined regions of interest and a whole-brain searchlight approach. The fusiform face area responded to face but not body orientation, suggesting that orientation responses in this area are face-specific. Moreover, neural responses to orientation were remarkably consistent regardless of whether participants attended to orientation or not. Together, these results demonstrate that face and body orientation are processed in a partially overlapping brain network, with a stimulus-independent neural code for face and body orientation in occipitotemporal cortex.
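
As an illustration of the cross-stimulus generalisation logic reported here (a classifier trained on face-evoked patterns tested on body-evoked patterns, and vice versa), the sketch below uses random placeholder voxel patterns; it is a schematic of the analysis idea, not the study's pipeline.

```python
# Schematic of stimulus-independent orientation decoding: train an
# orientation classifier on face-evoked voxel patterns, test on body-evoked
# patterns (and vice versa). All data are random placeholders.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_trials, n_voxels = 90, 300
X_faces = rng.normal(size=(n_trials, n_voxels))   # placeholder ROI patterns
X_bodies = rng.normal(size=(n_trials, n_voxels))
y = np.repeat([0, 1, 2], n_trials // 3)           # three orientations

clf = LinearSVC()
acc_fb = clf.fit(X_faces, y).score(X_bodies, y)   # train faces, test bodies
acc_bf = clf.fit(X_bodies, y).score(X_faces, y)   # train bodies, test faces
print(f"cross-stimulus decoding accuracy: {(acc_fb + acc_bf) / 2:.2f}")
# Accuracy above chance (1/3) would indicate a shared orientation code.
```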


Subject(s)
Occipital Lobe/physiology; Pattern Recognition, Visual/physiology; Social Perception; Space Perception/physiology; Temporal Lobe/physiology; Adult; Brain Mapping; Facial Recognition/physiology; Female; Humans; Magnetic Resonance Imaging; Male; Middle Aged; Occipital Lobe/diagnostic imaging; Temporal Lobe/diagnostic imaging; Young Adult
4.
Front Psychol ; 12: 718004, 2021.
Article in English | MEDLINE | ID: mdl-34621218

ABSTRACT

The categorization of dominant facial features, such as sex, is a highly relevant function for social interaction. It has been found that attributes of the perceiver, such as their biological sex, influence the perception of sexually dimorphic facial features, with women showing higher recognition performance for female faces than men. However, evidence on how aspects closely related to biological sex influence face sex categorization is scarce. Using a previously validated set of sex-morphed facial images (morphed from male to female and vice versa), we aimed to investigate the influence of the participants' gender role identification and sexual orientation on face sex categorization, in addition to their biological sex. Image ratings and questionnaire data on gender role identification and sexual orientation were collected from 67 adults (34 female). Contrary to previous literature, biological sex per se was not significantly associated with image ratings. However, an influence of participants' sexual attraction and gender role identity became apparent: participants identifying with male gender attributes and reporting attraction toward females perceived masculinized female faces as more male and feminized male faces as more female, compared to participants identifying with female gender attributes and reporting attraction toward males. Considering that we found these effects in a predominantly cisgender and heterosexual sample, investigating face sex perception in individuals identifying with a gender different from their assigned sex (i.e., transgender people) might provide further insights into how assigned sex and gender identity are related.

5.
Cognition ; 216: 104867, 2021 11.
Article in English | MEDLINE | ID: mdl-34364004

ABSTRACT

Average faces have been used frequently in face recognition studies, either as a theoretical concept (e.g., the face norm) or as a tool to manipulate facial attributes (e.g., modifying identity strength). Nonetheless, how the face averaging process (the creation of average faces using an increasing number of faces) changes the resulting averaged faces and our ability to differentiate between them remains to be elucidated. Here we addressed these questions by combining 3D face averaging, eye-movement tracking, and the computation of image-based face similarity. Participants judged whether two average faces showed the same person while we systematically increased their average level (i.e., the number of faces being averaged). Our results showed, with increasing averaging, both a nonlinear increase in the computational similarity between the resulting average faces and a nonlinear decrease in face discrimination performance. Participants' performance dropped from near-ceiling level when two different faces had been averaged together to chance level when 80 faces were mixed. We also found a nonlinear relationship between face similarity and face discrimination performance, which was well fitted by an exponential function. Furthermore, when the comparison task became more challenging, participants made more fixations on the faces. Nonetheless, the distribution of fixations across facial features (eyes, nose, mouth, and the center area of the face) remained unchanged. These results not only set new constraints on the theoretical characterization of the average face and its role in establishing face norms but also offer practical guidance for creating approximate face norms to manipulate face identity.
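
The reported exponential relationship between image-based similarity and discrimination performance can be illustrated with a short curve fit; the similarity and performance values below are invented placeholders, not the study's data.

```python
# Illustrative fit of an exponential linking image-based similarity between
# two average faces to discrimination performance. Data are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(s, a, b, c):
    # performance decays exponentially toward an asymptote as similarity grows
    return a * np.exp(-b * s) + c

similarity = np.array([0.10, 0.30, 0.50, 0.70, 0.85, 0.95])   # placeholder
performance = np.array([0.98, 0.92, 0.81, 0.68, 0.58, 0.52])  # placeholder

params, _ = curve_fit(exp_decay, similarity, performance, p0=(0.5, 2.0, 0.5))
print("fitted a, b, c:", np.round(params, 3))
```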


Subject(s)
Facial Recognition; Mouth; Eye; Eye Movements; Face; Humans
6.
Hum Brain Mapp ; 42(13): 4242-4260, 2021 09.
Article in English | MEDLINE | ID: mdl-34032361

ABSTRACT

Recognising a person's identity often relies on face and body information, and is tolerant to changes in low-level visual input (e.g., viewpoint changes). Previous studies have suggested that face identity is disentangled from low-level visual input in the anterior face-responsive regions. It remains unclear which regions disentangle body identity from variations in viewpoint, and whether face and body identity are encoded separately or combined into a coherent person identity representation. We trained participants to recognise three identities, and then recorded their brain activity using fMRI while they viewed face and body images of these three identities from different viewpoints. Participants' task was to respond to either the stimulus identity or viewpoint. We found consistent decoding of body identity across viewpoint in the fusiform body area, right anterior temporal cortex, middle frontal gyrus and right insula. This finding demonstrates a similar function of fusiform and anterior temporal cortex for bodies as has previously been shown for faces, suggesting these regions may play a general role in extracting high-level identity information. Moreover, we could decode identity across fMRI activity evoked by faces and bodies in the early visual cortex, right inferior occipital cortex, right parahippocampal cortex and right superior parietal cortex, revealing a distributed network that encodes person identity abstractly. Lastly, identity decoding was consistently better when participants attended to identity, indicating that attention to identity enhances its neural representation. These results offer new insights into how the brain develops an abstract neural coding of person identity, shared by faces and bodies.
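
A minimal sketch of cross-viewpoint identity decoding of the kind described here, using a leave-one-viewpoint-out scheme (train on two viewpoints, test on the held-out one). Array shapes and data are hypothetical placeholders; the study's actual MVPA pipeline may differ.

```python
# Schematic leave-one-viewpoint-out identity decoding over placeholder data.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
n_per_cell, n_voxels = 10, 200
n_views, n_ids = 3, 3
# X[v, i] holds trials for viewpoint v and identity i (placeholder patterns)
X = rng.normal(size=(n_views, n_ids, n_per_cell, n_voxels))

scores = []
for test_v in range(n_views):
    train_v = [v for v in range(n_views) if v != test_v]
    X_tr = X[train_v].reshape(-1, n_voxels)
    y_tr = np.tile(np.repeat(np.arange(n_ids), n_per_cell), len(train_v))
    X_te = X[test_v].reshape(-1, n_voxels)
    y_te = np.repeat(np.arange(n_ids), n_per_cell)
    scores.append(LinearSVC().fit(X_tr, y_tr).score(X_te, y_te))
print(f"cross-viewpoint identity decoding accuracy: {np.mean(scores):.2f}")
```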


Subject(s)
Brain Mapping; Cerebral Cortex/physiology; Face; Human Body; Nerve Net/physiology; Pattern Recognition, Visual/physiology; Social Perception; Adult; Cerebral Cortex/diagnostic imaging; Facial Recognition/physiology; Female; Humans; Magnetic Resonance Imaging; Male; Middle Aged; Nerve Net/diagnostic imaging; Space Perception/physiology; Young Adult
7.
Sci Rep ; 11(1): 1927, 2021 01 21.
Article in English | MEDLINE | ID: mdl-33479387

ABSTRACT

Faces can be categorized in various ways, for example as male or female, or as belonging to a specific biogeographic ancestry (race). Here we tested the importance of the main facial features for race perception. We exchanged inner facial features (eyes, mouth or nose), face contour (everything except those features) or texture (surface information) between Asian and Caucasian faces. Features were exchanged one at a time, creating ten facial variations for each Asian/Caucasian face pair. German and Korean participants performed a race classification task on all faces, presented in random order. The results show that eyes and texture are major determinants of perceived biogeographic ancestry for both groups of participants and both face types: inserting these features into a face of another race changed its perceived biogeographic ancestry. Contour, nose and mouth had, in that order, a decreasing and much weaker influence on race perception for both participant groups; exchanging those features did not induce a change of perceived biogeographic ancestry. In our study, all manipulated features were embedded in natural-looking faces, which were shown in an off-frontal view. Our findings confirm and extend previous studies investigating the importance of various facial features for race perception.


Subject(s)
Face/anatomy & histology; Pattern Recognition, Visual/physiology; Visual Perception/physiology; Adult; Analysis of Variance; Asian People/classification; Asian People/genetics; Eye/anatomy & histology; Face/physiology; Female; Humans; Male; Mouth/anatomy & histology; Nose/anatomy & histology; Visual Perception/genetics; White People/classification; White People/genetics; Young Adult
8.
Neuroimage ; 226: 117565, 2021 02 01.
Article in English | MEDLINE | ID: mdl-33221444

ABSTRACT

It has been shown that human faces are processed holistically (i.e. as indecomposable wholes, rather than by their component parts) and that this holistic face processing is linked to brain activity in face-responsive brain regions. Although several brain regions outside of the face-responsive network are also sensitive to relational processing and perceptual grouping, whether these non-face-responsive regions contribute to holistic processing remains unclear. Here, we investigated holistic face processing in the composite face paradigm both within and outside of face-responsive brain regions. We recorded participants' brain activity using fMRI while they performed a composite face task. Behavioural results indicate that participants tended to judge the same top face halves as different when they were aligned with different bottom face halves but not when they were misaligned, demonstrating a composite face effect. Neuroimaging results revealed significant differences in responses to aligned and misaligned faces in the lateral occipital complex (LOC), and trends in the anterior part of the fusiform face area (FFA2) and the transverse occipital sulcus (TOS), suggesting that these regions are sensitive to holistic versus part-based face processing. Furthermore, the retrosplenial cortex (RSC) and the parahippocampal place area (PPA) showed a pattern of neural activity consistent with a holistic representation of face identity, which also correlated with the strength of the behavioural composite face effect. These results suggest that neural activity in brain regions both within and outside of the face-responsive network contributes to the composite face effect.
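
The behavioural composite face effect reported here is commonly quantified as the performance difference between aligned and misaligned trials; the sketch below, with invented per-participant values, shows one simple way to compute such an effect score before correlating it with neural measures.

```python
# Illustrative composite-effect score: the drop in correct "same" judgements
# of identical top halves when aligned with different bottom halves.
# The per-participant accuracies below are invented placeholders.
import numpy as np

aligned = np.array([0.62, 0.58, 0.65, 0.60, 0.55])      # placeholder
misaligned = np.array([0.81, 0.78, 0.85, 0.80, 0.76])   # placeholder

composite_effect = misaligned - aligned   # larger = stronger holistic effect
print("mean composite effect:", composite_effect.mean().round(3))
# Per-participant scores like these can be correlated with neural responses
# (e.g., in RSC or PPA), as in the analysis described above.
```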


Subject(s)
Brain/diagnostic imaging; Facial Recognition/physiology; Adult; Female; Humans; Judgment/physiology; Magnetic Resonance Imaging; Male; Photic Stimulation; Young Adult
9.
Acta Psychol (Amst) ; 210: 103168, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32919093

ABSTRACT

The goal of new adaptive technologies is to allow humans to interact with technical devices, such as robots, in natural ways akin to human interaction. Essential for achieving this goal is an understanding of the factors that support natural interaction. Here, we examined whether human motor control is linked to the visual appearance of the interaction partner. Motor control theories consider kinematic information, but not visual appearance, to be important for the control of motor movements (Flash & Hogan, 1985; Harris & Wolpert, 1998; Viviani & Terzuolo, 1982). We investigated the sensitivity of motor control to visual appearance during the execution of a social interaction, i.e. a high-five. In a novel mixed-reality setup, participants executed a high-five with a three-dimensional, life-size human- or robot-looking avatar. Our results demonstrate that movement trajectories and adjustments to perturbations depended on the visual appearance of the avatar, despite both avatars carrying out identical movements. Moreover, two well-known motor theories (minimum jerk, two-thirds power law) predicted robot-directed better than human-directed interaction trajectories. The dependence of motor control on the human-likeness of the interaction partner suggests that different motor control principles might be at work in object-directed and human-directed interactions.
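
The two motor-control models named above have standard textbook formulations: the minimum-jerk position profile for point-to-point movements (Flash & Hogan) and the two-thirds power law relating movement speed to path curvature. The sketch below implements both equations; all parameter values are illustrative, not taken from the study.

```python
# Textbook forms of the two motor-control models; parameters are illustrative.
import numpy as np

def minimum_jerk(x0, xf, T, n=100):
    # Minimum-jerk position profile: x(t) = x0 + (xf - x0) *
    # (10*tau**3 - 15*tau**4 + 6*tau**5), with tau = t / T.
    t = np.linspace(0.0, T, n)
    tau = t / T
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

traj = minimum_jerk(x0=0.0, xf=0.3, T=0.5)   # e.g., a 30 cm reach in 0.5 s

# Two-thirds power law: tangential velocity v = K * curvature**(-1/3)
# (equivalently, angular velocity scales with curvature**(2/3)).
kappa = np.linspace(0.5, 10.0, 50)           # placeholder curvature values
K = 0.2                                      # illustrative gain factor
v_predicted = K * kappa ** (-1.0 / 3.0)
print(traj[:3], v_predicted[:3])
```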


Subject(s)
Movement; Social Interaction; Biomechanical Phenomena; Humans
10.
Front Integr Neurosci ; 14: 19, 2020.
Article in English | MEDLINE | ID: mdl-32327980

ABSTRACT

Even when we are wearing gloves, we can easily detect whether a surface that we are touching is sticky or not. However, we know little about the similarities between the brain activations elicited by such glove contact and those elicited by direct contact with our bare skin. In this functional magnetic resonance imaging (fMRI) study, we investigated which brain regions represent stickiness intensity information obtained in both touch conditions, i.e., skin contact and glove contact. First, we searched for neural representations mediating stickiness for each touch condition separately and found regions responding in both conditions mainly in the supramarginal gyrus and the secondary somatosensory cortex. Second, we explored whether surface stickiness is encoded in common neural patterns irrespective of how participants touched the sticky stimuli. Using a cross-condition decoding method, we tested whether stickiness intensities could be decoded from fMRI signals evoked by skin contact using a classifier trained on the responses elicited by glove contact, and vice versa. We found shared neural encoding patterns in the bilateral angular gyri and the inferior frontal gyrus (IFG), suggesting that these areas represent stickiness intensity information regardless of how participants touched the sticky stimuli. Interestingly, the neural encoding patterns of these areas were reflected in participants' intensity ratings. This study revealed common and distinct brain activation patterns of tactile stickiness across two different touch conditions, which may broaden our understanding of the neural mechanisms underlying surface texture perception.

11.
J Exp Psychol Learn Mem Cogn ; 46(7): 1309-1327, 2020 Jul.
Article in English | MEDLINE | ID: mdl-31724422

ABSTRACT

Many studies have demonstrated that we can identify a familiar face in an image much better than an unfamiliar one, especially when various degradations or changes (e.g., image distortions, blurring, or new illuminations) have been applied, but few have asked how different types of facial information from familiar faces are stored in memory. Here we investigated how well we remember personally familiar faces in terms of their identity, gender, and race. In three experiments, based on faces personally familiar to our participants, we created sets of face morphs that parametrically varied the faces in terms of identity, sex, or race using a three-dimensional morphable face model. For each familiar face, we presented those face morphs together with the original face and asked participants to pick the correct "real" face among the morph distracters in each set. They were instructed to pick the face that most closely resembled their memory of that familiar person. We found that participants excelled at retrieving the correct familiar faces among the distracters when the faces were manipulated in terms of their idiosyncratic features (their identity information), but they were less sensitive to changes along the gender and race continua. Image similarity analyses indicate that the observed difference cannot be attributed to different levels of image similarity between manipulations. These findings demonstrate that idiosyncratic and categorical face information is represented differently in memory, even for the faces of people we are very familiar with. Implications for current models of face recognition are discussed.


Subject(s)
Facial Recognition/physiology; Mental Recall/physiology; Recognition, Psychology/physiology; Social Perception; Adult; Caricatures as Topic; Female; Gender Identity; Humans; Male; Racial Groups; Young Adult
12.
Neuroimage ; 202: 116085, 2019 11 15.
Article in English | MEDLINE | ID: mdl-31401238

ABSTRACT

Our visual system can easily categorize objects (e.g. faces vs. bodies) and further differentiate them into subcategories (e.g. male vs. female). This ability is particularly important for objects of social significance, such as human faces and bodies. While many studies have demonstrated category selectivity to faces and bodies in the brain, how subcategories of faces and bodies are represented remains unclear. Here, we investigated how the brain encodes two prominent subcategories shared by both faces and bodies, sex and weight, and whether neural responses to these subcategories rely on low-level visual, high-level visual or semantic similarity. We recorded brain activity with fMRI while participants viewed faces and bodies that varied in sex, weight, and image size. The results showed that the sex of bodies can be decoded from both body- and face-responsive brain areas, with the former exhibiting more consistent size-invariant decoding than the latter. Body weight could also be decoded in face-responsive areas and in distributed body-responsive areas, and this decoding was also invariant to image size. The weight of faces could be decoded from the fusiform body area (FBA), and weight could be decoded across face and body stimuli in the extrastriate body area (EBA) and a distributed body-responsive area. The sex of well-controlled faces (e.g. excluding hairstyles) could not be decoded from face- or body-responsive regions. These results demonstrate that both face- and body-responsive brain regions encode information that can distinguish the sex and weight of bodies. Moreover, the neural patterns corresponding to sex and weight were invariant to image size and could sometimes generalize across face and body stimuli, suggesting that such subcategorical information is encoded with a high-level visual or semantic code.


Subject(s)
Brain Mapping/methods; Cerebral Cortex/physiology; Magnetic Resonance Imaging; Pattern Recognition, Visual/physiology; Adult; Facial Recognition/physiology; Female; Humans; Male; Multivariate Analysis; Photic Stimulation; Young Adult
13.
Sci Rep ; 9(1): 77, 2019 01 11.
Article in English | MEDLINE | ID: mdl-30635598

ABSTRACT

Previous human fMRI studies have reported activation of somatosensory areas not only during actual touch, but also during touch observation. However, it has remained unclear how the brain encodes visually evoked tactile intensities. Using an associative learning method, we investigated neural representations of roughness intensities evoked by (a) tactile explorations and (b) visual observation of tactile explorations. Moreover, we explored (c) modality-independent neural representations of roughness intensities using a cross-modal classification method. Case (a) showed significant decoding performance in the anterior cingulate cortex (ACC) and the supramarginal gyrus (SMG), while in case (b), the bilateral posterior parietal cortices, the inferior occipital gyrus, and the primary motor cortex were identified. Case (c) revealed shared neural activity patterns in the bilateral insula, the SMG, and the ACC. Interestingly, the insular cortices were identified only in the cross-modal classification, suggesting their potential role in modality-independent tactile processing. We further examined correlations of confusion patterns between behavioral and neural similarity matrices for each region. Significant correlations were found solely in the SMG, reflecting a close relationship between neural activity in the SMG and roughness intensity perception. The present findings may deepen our understanding of the brain mechanisms underlying intensity perception of tactile roughness.
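
The reported comparison between behavioral and neural similarity matrices can be illustrated by correlating their off-diagonal entries, as sketched below; the matrices are random placeholders, and the study's exact similarity measures are not specified in the abstract.

```python
# Schematic comparison of behavioral and neural similarity (confusion)
# matrices over five roughness levels; both matrices are random placeholders.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_levels = 5
behav = rng.random((n_levels, n_levels))    # placeholder confusion matrix
neural = rng.random((n_levels, n_levels))   # placeholder confusion matrix

iu = np.triu_indices(n_levels, k=1)         # compare off-diagonal entries
rho, p = spearmanr(behav[iu], neural[iu])
print(f"behavior-neural similarity correlation: rho={rho:.2f}, p={p:.3f}")
```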


Subject(s)
Brain Mapping; Somatosensory Cortex/physiology; Touch Perception; Visual Cortex/physiology; Visual Perception; Adult; Female; Gyrus Cinguli/physiology; Healthy Volunteers; Humans; Magnetic Resonance Imaging; Male; Motor Cortex/physiology; Parietal Lobe/physiology; Young Adult
14.
Vision Res ; 157: 242-251, 2019 04.
Article in English | MEDLINE | ID: mdl-29274811

ABSTRACT

Viewing faces in motion or attached to a body, instead of as isolated static faces, improves their subsequent recognition. Here we enhanced the ecological validity of face encoding by having observers physically move in a virtual room populated by life-size avatars. We compared the recognition performance of this active group to that of two control groups. The first control group watched a passive reenactment of the visual experience of the active group. The second control group saw static screenshots of the avatars. All groups performed the same old/new recognition task after learning. Half of the learned faces were shown at test in an orientation close to that experienced during learning, while the others were viewed from a new angle. All observers found novel views more difficult to recognize than familiar ones. Overall, the active group performed better than both other groups. Furthermore, the group learning faces from static images was the only one to perform at chance level in the novel-view condition. These findings suggest that active exploration combined with a dynamic experience of the faces to be learned allows for more robust face recognition, and they point out the value of such techniques for integrating facial visual information and enhancing recognition from novel viewpoints.


Subject(s)
Facial Recognition/physiology; Recognition, Psychology/physiology; Virtual Reality; Adolescent; Adult; Discrimination Learning/physiology; Female; Humans; Male; Motion Perception; Photic Stimulation/methods; Young Adult
15.
Somatosens Mot Res ; 35(3-4): 212-217, 2018.
Article in English | MEDLINE | ID: mdl-30592429

ABSTRACT

The neural substrates of tactile roughness perception have been investigated in many neuroimaging studies, while relatively little effort has been devoted to the investigation of neural representations of visually perceived roughness. In this human fMRI study, we looked for neural activity patterns that could be attributed to five different roughness intensity levels when the stimuli were perceived visually, i.e., in the absence of any tactile sensation. During functional image acquisition, participants viewed video clips displaying a right index fingertip actively exploring the sandpapers that had been used in a preceding behavioural experiment. A whole-brain multivariate pattern analysis identified four brain regions in which visual roughness intensities could be decoded: the bilateral posterior parietal cortex (PPC), the primary somatosensory cortex (S1) extending to the primary motor cortex (M1) in the right hemisphere, and the inferior occipital gyrus (IOG). In a follow-up analysis, we tested for correlations between the decoding accuracies and the tactile roughness discriminability obtained from the preceding behavioural experiment. We could not find any such correlation, although, during scanning, participants were asked to recall the tactilely perceived roughness of the sandpapers. We presume that a better paradigm is needed to reveal any potential visuo-tactile convergence. However, the present study identified brain regions that may subserve the discrimination of different intensities of visual roughness. This finding may help elucidate the neural mechanisms underlying visual roughness perception in the human brain.
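
A whole-brain searchlight analysis repeatedly cross-validates a classifier within small voxel neighbourhoods and maps the resulting accuracies. The sketch below conveys only the idea, using simplified one-dimensional, non-overlapping windows over random placeholder data; real searchlights use overlapping spheres in 3D brain space.

```python
# Toy 1-D "searchlight": cross-validate a classifier in small windows of
# voxels and record an accuracy per window. Data are random placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
n_trials, n_voxels = 50, 1000
X = rng.normal(size=(n_trials, n_voxels))   # placeholder voxel patterns
y = np.repeat(np.arange(5), n_trials // 5)  # five roughness intensity levels

window = 10                                 # voxels per toy "sphere"
acc_map = np.zeros(n_voxels)
for start in range(0, n_voxels, window):
    sl = slice(start, min(start + window, n_voxels))
    acc_map[sl] = cross_val_score(LinearSVC(), X[:, sl], y, cv=5).mean()
print("peak searchlight accuracy:", acc_map.max().round(2))
```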


Subject(s)
Brain/diagnostic imaging; Magnetic Resonance Imaging; Touch Perception/physiology; Visual Perception/physiology; Adult; Analysis of Variance; Brain Mapping; Female; Humans; Image Processing, Computer-Assisted; Male; Oxygen/blood; Photic Stimulation; Young Adult
16.
Front Psychol ; 9: 1355, 2018.
Article in English | MEDLINE | ID: mdl-30123162

ABSTRACT

Faces that move contain rich information about facial form, such as facial features and their configuration, alongside the motion of those features. During social interactions, humans constantly decode and integrate these cues. To fully understand human face perception, it is important to investigate what information dynamic faces convey and how the human visual system extracts and processes information from this visual input. However, partly due to the difficulty of designing well-controlled dynamic face stimuli, many face perception studies still rely on static faces as stimuli. Here, we focus on evidence demonstrating the usefulness of dynamic faces as stimuli, and evaluate different types of dynamic face stimuli to study face perception. Studies based on dynamic face stimuli revealed a high sensitivity of the human visual system to natural facial motion and consistently reported dynamic advantages when static face information is insufficient for the task. These findings support the hypothesis that the human perceptual system integrates sensory cues for robust perception. In the present paper, we review the different types of dynamic face stimuli used in these studies, and assess their usefulness for several research questions. Natural videos of faces are ecological stimuli but provide limited control of facial form and motion. Point-light faces allow for good control of facial motion but are highly unnatural. Image-based morphing is a way to achieve control over facial motion while preserving the natural facial form. Synthetic facial animations allow separation of facial form and motion to study aspects such as identity-from-motion. While synthetic faces are less natural than videos of faces, recent advances in photo-realistic rendering may close this gap and provide naturalistic stimuli with full control over facial motion. We believe that many open questions, such as what dynamic advantages exist beyond emotion and identity recognition and which dynamic aspects drive these advantages, can be addressed adequately with different types of stimuli and will improve our understanding of face perception in more ecological settings.

17.
Neuroimage ; 172: 689-702, 2018 05 15.
Article in English | MEDLINE | ID: mdl-29432802

ABSTRACT

What cortical mechanisms allow humans to easily discern the expression or identity of a face? Subjects detected changes in expression or identity of a stream of dynamic faces while we measured BOLD responses from topographically and functionally defined areas throughout the visual hierarchy. Responses in dorsal areas increased during the expression task, whereas responses in ventral areas increased during the identity task, consistent with previous studies. Similar to ventral areas, early visual areas showed increased activity during the identity task. If visual responses are weighted by perceptual mechanisms according to their magnitude, these increased responses would lead to improved attentional selection of the task-appropriate facial aspect. Alternatively, increased responses could be a signature of a sensitivity enhancement mechanism that improves representations of the attended facial aspect. Consistent with the latter sensitivity enhancement mechanism, attending to expression led to enhanced decoding of exemplars of expression both in early visual and dorsal areas relative to attending identity. Similarly, decoding identity exemplars when attending to identity was improved in dorsal and ventral areas. We conclude that attending to expression or identity of dynamic faces is associated with increased selectivity in representations consistent with sensitivity enhancement.


Subject(s)
Brain/physiology; Facial Expression; Pattern Recognition, Visual/physiology; Adult; Attention/physiology; Brain Mapping/methods; Female; Humans; Magnetic Resonance Imaging/methods; Male; Photic Stimulation; Recognition, Psychology/physiology
18.
J Vis ; 17(13): 11, 2017 11 01.
Article in English | MEDLINE | ID: mdl-29141085

ABSTRACT

The brain can only attend to a fraction of all the information entering the visual system at any given moment. One way of overcoming this so-called bottleneck of selective attention (e.g., J. M. Wolfe, Võ, Evans, & Greene, 2011) is to make use of redundant visual information and extract summarized statistical information about the whole visual scene. Such ensemble representation occurs for low-level features of textures or simple objects, but it has also been reported for complex high-level properties. While the visual system has, for example, been shown to compute summary representations of facial expression, gender, or identity, it is less clear whether perceptual input from all parts of the visual field contributes equally to the ensemble percept. Here we extend the line of ensemble-representation research into the realm of race and examine the possibility that ensemble perception relies on weighting visual information differently depending on whether it originates from the fovea or the visual periphery. We find that observers can judge the mean race of a set of faces, similar to judgments of mean emotion from faces and to ensemble representations in low-level domains of visual processing. We also find that while peripheral faces seem to be taken into account in the ensemble percept, far more weight is given to stimuli presented foveally than peripherally. Whether this precision weighting of information stems from differences in the accuracy with which the visual system processes information across the visual field, or from statistical inferences about the world, needs to be determined by further research.
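
The precision-weighting idea discussed here can be expressed as a weighted mean in which foveally presented faces contribute more to the ensemble percept than peripheral ones; the weights and morph values below are purely illustrative assumptions.

```python
# Illustrative precision-weighted ensemble judgement: the perceived mean
# "race morph level" of a face set, with a larger weight for the foveal item.
# All values below are invented for illustration.
import numpy as np

race_values = np.array([0.2, 0.5, 0.9, 0.4])     # morph level per face (0-1)
is_foveal = np.array([True, False, False, False])

w_fovea, w_periphery = 3.0, 1.0                  # assumed relative weights
weights = np.where(is_foveal, w_fovea, w_periphery)

perceived_mean = np.average(race_values, weights=weights)
unweighted_mean = race_values.mean()
print(f"weighted mean {perceived_mean:.2f} vs unweighted {unweighted_mean:.2f}")
```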


Subject(s)
Asian People; Facial Expression; Fovea Centralis/physiology; Visual Fields/physiology; Visual Perception/physiology; White People; Adult; Emotions; Female; Humans; Judgment; Male; Young Adult
19.
Vision Res ; 135: 10-15, 2017 06.
Article in English | MEDLINE | ID: mdl-28435124

ABSTRACT

Recognizing the actions of others across the whole visual field is required for social interaction. In a previous study, we showed that recognition is very good even when life-size avatars facing the observer carried out actions (e.g. waving) and were presented very far from the fovea (Fademrecht, Bülthoff, & de la Rosa, 2016). Here we explored whether this remarkable performance was due to the life-size avatars facing the observer, which, according to some social cognitive theories (e.g. Schilbach et al., 2013), could potentially activate different social perceptual processes than avatars seen in profile. Participants therefore viewed a life-size stick-figure avatar that carried out motion-captured social actions (greeting actions: handshake, hugging, waving; attacking actions: slapping, punching and kicking) in frontal and profile view. Participants' task was to identify the actions as 'greeting' or 'attack', or to assess the emotional valence of the actions. While recognition accuracy for frontal and profile views did not differ, reaction times were in general significantly faster for profile views (i.e. the moving avatar was seen side-on) than for frontal views (i.e. the action was directed toward the observer). Our results suggest that the remarkably good action recognition performance in the visual periphery was not due to a more socially engaging front-facing view. Although action recognition seems to depend on viewpoint, it remains remarkably accurate even far into the visual periphery.


Subject(s)
Retina/physiology; Visual Fields/physiology; Visual Perception/physiology; Adult; Female; Humans; Male; Pattern Recognition, Visual/physiology; Photic Stimulation; Young Adult
20.
J Exp Psychol Learn Mem Cogn ; 43(7): 1020-1035, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28191990

ABSTRACT

Humans' face-processing ability develops and matures with extensive experience in perceiving, recognizing, and interacting with faces, which move most of the time. However, how facial movements affect one core aspect of this ability, holistic face processing, remains unclear. Here we investigated the influence of rigid facial motion on holistic and part-based face processing by manipulating the presence of facial motion during study and at test in a composite face task. The results showed that rigidly moving faces were processed as holistically as static faces (Experiment 1). Holistic processing of moving faces persisted whether facial motion was presented during study, at test, or both (Experiment 2). Moreover, when faces were inverted to eliminate the contributions of both an upright face template and observers' expertise with upright faces, rigid facial motion facilitated holistic face processing (Experiment 3). Thus, holistic processing represents a general principle of face perception that applies to both static and dynamic faces, rather than being limited to static faces. These results support an emerging view that both perceiver-based and face-based factors contribute to holistic face processing, and they offer new insights into what underlies holistic face processing, how the sources of information supporting it interact, and why facial motion may affect face recognition and holistic face processing differently.


Subject(s)
Facial Recognition/physiology; Motion Perception/physiology; Orientation; Recognition, Psychology/physiology; Adult; Female; Humans; Male; Photic Stimulation; Reaction Time/physiology; Young Adult