Results 1 - 20 of 92
1.
J Exp Psychol Gen ; 153(3): 742-753, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38271012

ABSTRACT

Social class is a powerful hierarchy that determines many privileges and disadvantages. People form impressions of others' social class (like other important social attributes) from facial appearance, and these impressions correlate with stereotype judgments. However, what drives these related subjective judgments remains unknown. That is, what makes someone look like they are of higher or lower social class standing (e.g., rich or poor), and how does this relate to harmful or advantageous stereotypes? We addressed these questions using a perception-based data-driven method to model the specific three-dimensional facial features that drive social class judgments and compared them to those of stereotype-related judgments (competence, warmth, dominance, and trustworthiness), based on White Western culture participants and face stimuli. Using a complementary data-reduction analysis and machine learning approach, we show that social class judgments are driven by a unique constellation of facial features that reflect multiple embedded stereotypes: poor-looking (vs. rich-looking) faces are wider, shorter, and flatter with downturned mouths and darker, cooler complexions, mirroring features of incompetent, cold, and untrustworthy-looking (vs. competent, warm, and trustworthy-looking) faces. Our results reveal the specific facial features that underlie the connection between impressions of social class and stereotype-related social traits, with implications for central social perception theories, including understanding the causal links between stereotype knowledge and social class judgments. We anticipate that our results will inform future interventions designed to interrupt biased perception and social inequalities. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
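The data-driven method summarized above follows the general logic of reverse correlation. A minimal sketch of that logic in Python (the feature space, trial counts, and simulated observer are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_features = 5000, 50   # e.g., 50 abstract 3D face-shape components

# Random face stimuli: each trial perturbs the features randomly
features = rng.standard_normal((n_trials, n_features))

# Simulated binary judgments ("rich" vs. "poor"); in the real experiment
# these come from human observers judging each randomly generated face
true_weights = rng.standard_normal(n_features)
judgments = (features @ true_weights + rng.standard_normal(n_trials)) > 0

# Classification image: mean features of "rich" trials minus "poor" trials
# approximates the facial features that drive the judgment
class_image = features[judgments].mean(axis=0) - features[~judgments].mean(axis=0)
print(np.corrcoef(class_image, true_weights)[0, 1])  # recovery check
```

In the studies themselves, the judgments come from human participants and the features are parameters of a generative 3D face model; comparing the resulting models across judgments (social class vs. competence, warmth, trustworthiness) reveals their shared features.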


Subject(s)
Facial Recognition , Stereotyping , Humans , Social Perception , Attitude , Judgment , Social Class , Facial Expression , Trust
2.
Curr Biol ; 34(1): 213-223.e5, 2024 01 08.
Article in English | MEDLINE | ID: mdl-38141619

ABSTRACT

Communicating emotional intensity plays a vital ecological role because it provides valuable information about the nature and likelihood of the sender's behavior.1,2,3 For example, attack often follows signals of intense aggression if receivers fail to retreat.4,5 Humans regularly use facial expressions to communicate such information.6,7,8,9,10,11 Yet how this complex signaling task is achieved remains unknown. We addressed this question using a perception-based, data-driven method to mathematically model the specific facial movements that receivers use to classify the six basic emotions-"happy," "surprise," "fear," "disgust," "anger," and "sad"-and judge their intensity in two distinct cultures (East Asian, Western European; total n = 120). In both cultures, receivers expected facial expressions to dynamically represent emotion category and intensity information over time, using a multi-component compositional signaling structure. Specifically, emotion intensifiers peaked earlier or later than emotion classifiers and represented intensity using amplitude variations. Emotion intensifiers are also more similar across emotions than classifiers are, suggesting a latent broad-plus-specific signaling structure. Cross-cultural analysis further revealed similarities and differences in expectations that could impact cross-cultural communication. Specifically, East Asian and Western European receivers have similar expectations about which facial movements represent high intensity for threat-related emotions, such as "anger," "disgust," and "fear," but differ on those that represent low threat emotions, such as happiness and sadness. Together, our results provide new insights into the intricate processes by which facial expressions can achieve complex dynamic signaling tasks by revealing the rich information embedded in facial expressions.


Subject(s)
Emotions , Facial Expression , Humans , Anger , Fear , Happiness
3.
Curr Biol ; 33(24): 5505-5514.e6, 2023 12 18.
Article in English | MEDLINE | ID: mdl-38065096

ABSTRACT

Prediction-for-perception theories suggest that the brain predicts incoming stimuli to facilitate their categorization.1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17 However, it remains unknown what the information contents of these predictions are, which hinders mechanistic explanations. This is because typical approaches cast predictions as an underconstrained contrast between two categories18,19,20,21,22,23,24-e.g., faces versus cars, which could lead to predictions of features specific to faces or cars, or features from both categories. Here, to pinpoint the information contents of predictions and thus their mechanistic processing in the brain, we identified the features that enable two different categorical perceptions of the same stimuli. We then trained multivariate classifiers to discern, from dynamic MEG brain responses, the features tied to each perception. With an auditory cueing design, we reveal where, when, and how the brain reactivates visual category features (versus the typical category contrast) before the stimulus is shown. We demonstrate that the predictions of category features have a more direct influence (bias) on subsequent decision behavior in participants than the typical category contrast. Specifically, these predictions are more precisely localized in the brain (lateralized), are more specifically driven by the auditory cues, and their reactivation strength before a stimulus presentation exerts a greater bias on how the individual participant later categorizes this stimulus. By characterizing the specific information contents that the brain predicts and then processes, our findings provide new insights into the brain's mechanisms of prediction for perception.
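Time-resolved multivariate decoding of the kind described is commonly implemented by training a classifier independently at each time point; a hedged sketch with simulated data (the dimensions, injected effect, and all names are assumptions for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_sensors, n_times = 200, 64, 120   # illustrative MEG dimensions
X = rng.standard_normal((n_trials, n_sensors, n_times))
y = rng.integers(0, 2, n_trials)              # feature label tied to each perception
X[y == 1, :8, 40:80] += 0.5                   # inject a decodable signal for the demo

# Train a separate multivariate classifier at each time point; accuracy
# above chance indicates when the feature is represented (in the study,
# reactivation is probed before the stimulus appears)
acc = [cross_val_score(LogisticRegression(max_iter=1000),
                       X[:, :, t], y, cv=5).mean()
       for t in range(n_times)]
print(max(acc), int(np.argmax(acc)))
```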


Subject(s)
Brain , Cues , Humans , Brain/physiology , Brain Mapping , Photic Stimulation
4.
J Neurosci ; 43(29): 5391-5405, 2023 07 19.
Article in English | MEDLINE | ID: mdl-37369588

ABSTRACT

Models of visual cognition generally assume that brain networks predict the contents of a stimulus to facilitate its subsequent categorization. However, understanding prediction and categorization at a network level has remained challenging, partly because we need to reverse engineer their information processing mechanisms from the dynamic neural signals. Here, we used connectivity measures that can isolate the communications of a specific content to reconstruct these network mechanisms in each individual participant (N = 11, both sexes). Each was cued to the spatial location (left vs right) and contents [low spatial frequency (LSF) vs high spatial frequency (HSF)] of a predicted Gabor stimulus that they then categorized. Using each participant's concurrently measured MEG, we reconstructed networks that predict and categorize LSF versus HSF contents for behavior. We found that predicted contents flexibly propagate top down from temporal to lateralized occipital cortex, depending on task demands, under supervisory control of prefrontal cortex. When they reach lateralized occipital cortex, predictions enhance the bottom-up LSF versus HSF representations of the stimulus, all the way from occipital-ventral-parietal to premotor cortex, in turn producing faster categorization behavior. Importantly, content communications are subsets (i.e., 55-75%) of the signal-to-signal communications typically measured between brain regions. Hence, our study isolates functional networks that process the information of cognitive functions.

SIGNIFICANCE STATEMENT An enduring cognitive hypothesis states that our perception is influenced partly by the bottom-up sensory input and partly by top-down expectations. However, cognitive explanations of the dynamic brain network mechanisms that flexibly predict and categorize the visual input according to task demands remain elusive. We addressed these questions with a predictive experimental design that isolates the network communications of cognitive contents from all other communications. Our methods revealed a Prediction Network that flexibly communicates contents from temporal to lateralized occipital cortex, under explicit frontal control, and an occipital-ventral-parietal-frontal Categorization Network that represents the predicted contents of the shown stimulus more sharply, leading to faster behavior. Our framework and results therefore shed new light on the cognitive information processing carried out by dynamic brain activity.


Subject(s)
Brain Mapping , Magnetic Resonance Imaging , Male , Female , Humans , Occipital Lobe , Brain , Cognition , Photic Stimulation , Visual Perception
5.
Sci Adv ; 9(6): eabq8421, 2023 02 10.
Article in English | MEDLINE | ID: mdl-36763663

ABSTRACT

Models are the hallmark of mature scientific inquiry. In psychology, this maturity has been reached in a pervasive question: what models best represent facial expressions of emotion? Several hypotheses propose different combinations of facial movements [action units (AUs)] as best representing the six basic emotions and four conversational signals across cultures. We developed a new framework to formalize such hypotheses as predictive models, compare their ability to predict human emotion categorizations in Western and East Asian cultures, explain the causal role of individual AUs, and explore updated, culture-accented models that improve performance by reducing a prevalent Western bias. Our predictive models also provide a noise ceiling to inform the explanatory power and limitations of different factors (e.g., AUs and individual differences). Thus, our framework provides a new approach to test models of social signals, explain their predictive power, and explore their optimization, with direct implications for theory development.
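One way to read "formalize hypotheses as predictive models" is as cross-validated prediction of human categorizations from hypothesized AU sets; a sketch under that assumption (the AU indices, simulated responses, and classifier choice are made up for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_aus = 1000, 42                       # trials of random AU combinations
aus = rng.integers(0, 2, (n_trials, n_aus))      # which AUs are active per trial
labels = (aus[:, 6] & aus[:, 12]).astype(int)    # simulated "happy" categorizations

# Two competing hypotheses about which AUs signal "happy"; each is scored
# by its held-out prediction accuracy of the human responses
hypotheses = {"AU6+AU12": [6, 12], "AU12 only": [12]}
for name, idx in hypotheses.items():
    score = cross_val_score(LogisticRegression(), aus[:, idx], labels, cv=5).mean()
    print(name, round(score, 3))

# A noise ceiling (one simple variant) would bound any model's accuracy by
# how well one split of observers predicts another split's responses
```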


Subject(s)
Emotions , Facial Expression , Humans , Face , Movement
7.
J Neurosci ; 42(48): 9030-9044, 2022 11 30.
Article in English | MEDLINE | ID: mdl-36280264

ABSTRACT

To date, social and nonsocial decisions have been studied largely in isolation. Consequently, the extent to which social and nonsocial forms of decision uncertainty are integrated using shared neurocomputational resources remains elusive. Here, we address this question using simultaneous electroencephalography (EEG)-functional magnetic resonance imaging (fMRI) in healthy human participants (young adults of both sexes) and a task in which decision evidence in social and nonsocial contexts varies along comparable scales. First, we identify time-resolved build-up of activity in the EEG, akin to a process of evidence accumulation (EA), across both contexts. We then use the endogenous trial-by-trial variability in the slopes of these accumulating signals to construct parametric fMRI predictors. We show that a region of the posterior-medial frontal cortex (pMFC) uniquely explains trial-wise variability in the process of evidence accumulation in both social and nonsocial contexts. We further demonstrate a task-dependent coupling between the pMFC and regions of the human valuation system in dorso-medial and ventro-medial prefrontal cortex across both contexts. Finally, we report domain-specific representations in regions known to encode the early decision evidence for each context. These results are suggestive of a domain-general decision-making architecture, whereupon domain-specific information is likely converted into a "common currency" in medial prefrontal cortex and accumulated for the decision in the pMFC.

SIGNIFICANCE STATEMENT Little work has directly compared social-versus-nonsocial decisions to investigate whether they share common neurocomputational origins. Here, using combined electroencephalography (EEG)-functional magnetic resonance imaging (fMRI) and computational modeling, we offer a detailed spatiotemporal account of the neural underpinnings of social and nonsocial decisions. Specifically, we identify a comparable mechanism of temporal evidence integration driving both decisions and localize this integration process in posterior-medial frontal cortex (pMFC). We further demonstrate task-dependent coupling between the pMFC and regions of the human valuation system across both contexts. Finally, we report domain-specific representations in regions encoding the early, domain-specific, decision evidence. These results suggest a domain-general decision-making architecture, whereupon domain-specific information is converted into a common representation in the valuation system and integrated for the decision in the pMFC.


Subject(s)
Decision Making , Magnetic Resonance Imaging , Young Adult , Male , Female , Humans , Frontal Lobe , Electroencephalography
8.
Trends Cogn Sci ; 26(12): 1090-1102, 2022 12.
Article in English | MEDLINE | ID: mdl-36216674

ABSTRACT

Deep neural networks (DNNs) have become powerful and increasingly ubiquitous tools to model human cognition, and often produce similar behaviors. For example, with their hierarchical, brain-inspired organization of computations, DNNs apparently categorize real-world images in the same way as humans do. Does this imply that their categorization algorithms are also similar? We have framed the question with three embedded degrees that progressively constrain algorithmic similarity evaluations: equivalence of (i) behavioral/brain responses, which is current practice, (ii) the stimulus features that are processed to produce these outcomes, which is more constraining, and (iii) the algorithms that process these shared features, the ultimate goal. To improve DNNs as models of cognition, we develop for each degree an increasingly constrained benchmark that specifies the epistemological conditions for the considered equivalence.


Subject(s)
Brain , Neural Networks, Computer , Humans , Brain/physiology , Algorithms , Cognition
9.
Trends Cogn Sci ; 26(8): 626-630, 2022 08.
Article in English | MEDLINE | ID: mdl-35710894

ABSTRACT

Experimental studies in cognitive science typically focus on the population average effect. An alternative is to test each individual participant and then quantify the proportion of the population that would show the effect: the prevalence, or participant replication probability. We argue that this approach has conceptual and practical advantages.


Subject(s)
Cognitive Science , Humans , Probability
10.
Elife ; 11, 2022 02 17.
Article in English | MEDLINE | ID: mdl-35174783

ABSTRACT

A key challenge in neuroimaging remains to understand where, when, and, in particular, how human brain networks compute over sensory inputs to achieve behavior. To study such dynamic algorithms from mass neural signals, we recorded the magnetoencephalographic (MEG) activity of participants who resolved the classic XOR, OR, and AND functions as overt behavioral tasks (N = 10 participants/task, N-of-1 replications). Each function requires a different computation over the same inputs to produce the task-specific behavioral outputs. In each task, we found that source-localized MEG activity progresses through four computational stages identified within individual participants: (1) initial contralateral representation of each visual input in occipital cortex, (2) a joint linearly combined representation of both inputs in midline occipital cortex and right fusiform gyrus, followed by (3) nonlinear task-dependent input integration in temporal-parietal cortex, and finally (4) behavioral response representation in postcentral gyrus. We demonstrate the specific dynamics of each computation at the level of individual sources. The spatiotemporal patterns of the first two computations are similar across the three tasks; the last two computations are task specific. Our results therefore reveal where, when, and how dynamic network algorithms perform different computations over the same inputs to produce different behaviors.
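The design's key property is that XOR, OR, and AND demand different computations over identical inputs; a trivially small illustration:

```python
# Same two binary inputs; the three task rules require different outputs,
# so any difference in behavior must come from the computation, not the inputs
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "XOR:", a ^ b, "OR:", a | b, "AND:", a & b)
```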


Subject(s)
Brain Mapping/methods , Brain/physiology , Magnetoencephalography/methods , Nerve Net/physiology , Neuroimaging/methods , Visual Perception/physiology , Female , Humans , Male , Photic Stimulation , Temporal Lobe/physiology
11.
Curr Biol ; 32(1): 200-209.e6, 2022 01 10.
Article in English | MEDLINE | ID: mdl-34767768

ABSTRACT

Human facial expressions are complex, multi-component signals that can communicate rich information about emotions,1-5 including specific categories, such as "anger," and broader dimensions, such as "negative valence, high arousal."6-8 An enduring question is how this complex signaling is achieved. Communication theory predicts that multi-component signals could transmit each type of emotion information-i.e., specific categories and broader dimensions-via the same or different facial signal components, with implications for elucidating the system and ontology of facial expression communication.9 We addressed this question using a communication-systems-based method that agnostically generates facial expressions and uses the receiver's perceptions to model the specific facial signal components that represent emotion category and dimensional information to them.10-12 First, we derived the facial expressions that elicit the perception of emotion categories (i.e., the six classic emotions13 plus 19 complex emotions3) and dimensions (i.e., valence and arousal) separately, in 60 individual participants. Comparison of these facial signals showed that they share subsets of components, suggesting that specific latent signals jointly represent-i.e., multiplex-categorical and dimensional information. Further examination revealed these specific latent signals and the joint information they represent. Our results-based on white Western participants, same-ethnicity face stimuli, and commonly used English emotion terms-show that facial expressions can jointly represent specific emotion categories and broad dimensions to perceivers via multiplexed facial signal components. Our results provide insights into the ontology and system of facial expression communication and a new information-theoretic framework that can characterize its complexities.


Subject(s)
Emotions , Facial Expression , Anger , Arousal , Face , Humans
12.
Patterns (N Y) ; 2(10): 100348, 2021 Oct 08.
Article in English | MEDLINE | ID: mdl-34693374

ABSTRACT

Deep neural networks (DNNs) can resolve real-world categorization tasks with apparent human-level performance. However, true equivalence of behavioral performance between humans and their DNN models requires that their internal mechanisms process equivalent features of the stimulus. To develop such feature equivalence, our methodology leveraged an interpretable and experimentally controlled generative model of the stimuli (realistic three-dimensional textured faces). Humans rated the similarity of randomly generated faces to four familiar identities. We predicted these similarity ratings from the activations of five DNNs trained with different optimization objectives. Using information theoretic redundancy, reverse correlation, and the testing of generalization gradients, we show that DNN predictions of human behavior improve because their shape and texture features overlap with those that subsume human behavior. Thus, we must equate the functional features that subsume the behavioral performances of the brain and its models before comparing where, when, and how these features are processed.
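Predicting human similarity ratings from DNN activations is, at its simplest, a regularized regression; a minimal sketch with simulated data (the layer size, ratings, and cross-validation scheme are illustrative assumptions, not the paper's exact pipeline):

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
n_faces, n_units = 500, 2048                            # illustrative DNN layer size
activations = rng.standard_normal((n_faces, n_units))   # DNN responses to the faces
# Human similarity-to-identity ratings (simulated here as depending on a
# small subset of the network's features)
ratings = activations[:, :10].sum(axis=1) + rng.standard_normal(n_faces)

# Predict human ratings from DNN activations; the out-of-sample correlation
# measures how much of the behavior the network's features capture
pred = cross_val_predict(RidgeCV(alphas=np.logspace(-2, 4, 13)),
                         activations, ratings, cv=5)
print(np.corrcoef(pred, ratings)[0, 1])
```

The paper's stronger point is that prediction accuracy alone is not enough: one must also check, e.g., with reverse correlation and generalization tests, that the model and the human use the same stimulus features.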

13.
Elife ; 10, 2021 10 06.
Article in English | MEDLINE | ID: mdl-34612811

ABSTRACT

Within neuroscience, psychology, and neuroimaging, the most frequently used statistical approach is null hypothesis significance testing (NHST) of the population mean. An alternative approach is to perform NHST within individual participants and then infer, from the proportion of participants showing an effect, the prevalence of that effect in the population. We propose a novel Bayesian method to estimate such population prevalence that offers several advantages over population mean NHST. This method provides a population-level inference that is currently missing from study designs with small participant numbers, such as in traditional psychophysics and in precision imaging. Bayesian prevalence delivers a quantitative population estimate with associated uncertainty instead of reducing an experiment to a binary inference. Bayesian prevalence is widely applicable to a broad range of studies in neuroscience, psychology, and neuroimaging. Its emphasis on detecting effects within individual participants can also help address replicability issues in these fields.


Scientists use statistical tools to evaluate observations or measurements from carefully designed experiments. In psychology and neuroscience, these experiments involve studying a randomly selected group of people, looking for patterns in their behaviour or brain activity, to infer things about the population at large. The usual method for evaluating the results of these experiments is to carry out null hypothesis statistical testing (NHST) on the population mean; that is, the average effect in the population that the study participants were selected from. The test asks whether the observed results in the group studied differ from what might be expected if the average effect in the population was zero. However, in psychology and neuroscience studies, people's brain activity and performance on cognitive tasks can differ a lot. This means important effects in individuals can be lost in the overall population average. Ince et al. propose that this shortcoming of NHST can be overcome by shifting the statistical analysis away from the population mean, and instead focusing on effects in individual participants. This led them to create a new statistical approach named Bayesian prevalence. The method looks at effects within each individual in the study and asks how likely it would be to see the same result if the experiment were repeated with a new person chosen from the wider population at random. Using this approach, it is possible to quantify how typical or uncommon an observed effect is in the population, and the uncertainty around this estimate. This differs from NHST, which only provides a binary 'yes or no' answer to the question, 'does this experiment provide sufficient evidence that the average effect in the population is not zero?' Another benefit of Bayesian prevalence is that it can be applied to studies with small numbers of participants which cannot be analysed using other statistical methods. Ince et al. show that Bayesian prevalence can be applied to a range of psychology and neuroimaging experiments, from brain imaging to electrophysiology studies. Using this alternative statistical method could help address issues of replication in these fields where NHST results are sometimes not the same when studies are repeated.
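A grid-approximation sketch of the prevalence posterior under the simple model described (within-participant tests with false-positive rate alpha and sensitivity assumed perfect, so the estimate is a lower bound; the counts are made up):

```python
import numpy as np

alpha = 0.05          # within-participant false-positive rate
k, n = 14, 20         # participants showing a significant effect, out of n tested

# Under prevalence gamma, each participant tests positive with probability
# theta = gamma + (1 - gamma) * alpha (perfect sensitivity assumed)
gamma = np.linspace(0, 1, 10001)
theta = np.clip(gamma + (1 - gamma) * alpha, 1e-12, 1 - 1e-12)

# Uniform prior on gamma; binomial likelihood of k positives in n tests
log_post = k * np.log(theta) + (n - k) * np.log1p(-theta)
post = np.exp(log_post - log_post.max())
post /= post.sum()

map_gamma = gamma[post.argmax()]
cdf = post.cumsum()
lo, hi = gamma[cdf.searchsorted(0.025)], gamma[cdf.searchsorted(0.975)]
print(f"MAP prevalence {map_gamma:.2f}, 95% credible interval [{lo:.2f}, {hi:.2f}]")
```

Unlike a group-mean NHST, the output is a graded population estimate with uncertainty (here a maximum a posteriori value and credible interval) rather than a binary significance verdict.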


Subject(s)
Biostatistics , Neurosciences/statistics & numerical data , Psychology/statistics & numerical data , Bayes Theorem , Data Interpretation, Statistical , Humans , Research Design
14.
Curr Biol ; 31(10): 2243-2252.e6, 2021 05 24.
Article in English | MEDLINE | ID: mdl-33798430

ABSTRACT

Facial attractiveness confers considerable advantages in social interactions,1,2 with preferences likely reflecting psychobiological mechanisms shaped by natural selection. Theories of universal beauty propose that attractive faces comprise features that are closer to the population average3 while optimizing sexual dimorphism.4 However, emerging evidence questions this model as an accurate representation of facial attractiveness,5-7 including representing the diversity of beauty preferences within and across cultures.8-12 Here, we demonstrate that Western Europeans (WEs) and East Asians (EAs) evaluate facial beauty using culture-specific features, contradicting theories of universality. With a data-driven method, we modeled, at both the individual and group levels, the attractive face features of young females (25 years old) in two matched groups each of 40 young male WE and EA participants. Specifically, we generated a broad range of same- and other-ethnicity female faces with naturally varying shapes and complexions. Participants rated each on attractiveness. We then reverse correlated the face features that drive perception of attractiveness in each participant. From these individual face models, we reconstructed a facial attractiveness representation space that explains preference variations. We show that facial attractiveness is distinct both from averageness and from sexual dimorphism in both cultures. Finally, we disentangled attractive face features into those shared across cultures, culture specific, and specific to individual participants, thereby revealing their diversity. Our results have direct theoretical and methodological impact for representing diversity in social perception and for the design of culturally and ethnically sensitive socially interactive digital agents.


Subject(s)
Beauty , Culture , Face , Adult , Asian People , Female , Humans , Male , Sex Characteristics , White People
15.
Emotion ; 21(6): 1324-1339, 2021 Sep.
Article in English | MEDLINE | ID: mdl-32628034

ABSTRACT

Action video game players (AVGPs) display superior performance in various aspects of cognition, especially in perception and top-down attention. The existing literature has examined this performance almost exclusively with stimuli and tasks devoid of any emotional content. Thus, whether the superior performance documented in the cognitive domain extends to the emotional domain remains unknown. We present 2 cross-sectional studies contrasting AVGPs and nonvideo game players (NVGPs) in their ability to perceive facial emotions. Under an enhanced perception account, AVGPs should outperform NVGPs when processing facial emotion. Yet, alternative accounts exist. For instance, under some social accounts, exposure to action video games, which often contain violence, may lower sensitivity to empathy-related expressions such as sadness, happiness, and pain while increasing sensitivity to aggression signals. Finally, under the view that AVGPs excel at learning new tasks (in contrast to the view that they are immediately better at all new tasks), the use of stimuli at which participants are already expert predicts little to no group differences. Study 1 uses drift-diffusion modeling and establishes that AVGPs are comparable to NVGPs at every decision-making stage mediating the discrimination of facial emotions, despite showing a group difference in aggressive behavior. Study 2 uses the reverse inference technique to assess the mental representation of facial emotion expressions, and again documents no group differences. These results indicate that the perceptual benefits associated with action video game play do not extend to overlearned stimuli such as facial emotion, and instead indicate equivalent facial emotion skills in AVGPs and NVGPs. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
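Drift-diffusion modeling decomposes choices and reaction times into latent stages (drift rate, decision boundary, non-decision time); a self-contained simulation sketch of the generative model (parameter values are illustrative, not fitted to the studies' data):

```python
import numpy as np

def simulate_ddm(v, a, t0, n_trials=500, dt=0.001, noise=1.0, seed=0):
    """Simulate choices and RTs: evidence drifts at rate v between bounds 0 and a."""
    rng = np.random.default_rng(seed)
    choices, rts = np.empty(n_trials, dtype=int), np.empty(n_trials)
    for i in range(n_trials):
        x, t = a / 2.0, 0.0            # start equidistant from both bounds
        while 0.0 < x < a:
            x += v * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choices[i], rts[i] = int(x >= a), t + t0   # t0 = non-decision time
    return choices, rts

# Identical parameters for two simulated groups imply identical accuracy and
# RT distributions, mirroring the reported AVGP/NVGP equivalence at every stage
c, rt = simulate_ddm(v=1.2, a=1.5, t0=0.3)
print(c.mean(), rt.mean())
```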


Subject(s)
Video Games , Cross-Sectional Studies , Emotions , Facial Expression , Humans , Perception
16.
Curr Biol ; 30(20): R1277-R1278, 2020 10 19.
Article in English | MEDLINE | ID: mdl-33080203

ABSTRACT

A longstanding debate in the face recognition field concerns the format of face representations in the brain. New face research clarifies some of this mystery by revealing a face-centered format in a patient with a left splenium lesion of the corpus callosum who perceives the right side of faces as 'melted'.


Subject(s)
Facial Recognition , Perceptual Distortion , Brain , Corpus Callosum , Humans , Orientation, Spatial
17.
Philos Trans R Soc Lond B Biol Sci ; 375(1799): 20190705, 2020 05 25.
Article in English | MEDLINE | ID: mdl-32248774

ABSTRACT

The information contents of memory are the cornerstone of the most influential models in cognition. To illustrate, consider that in predictive coding, a prediction implies that specific information is propagated down from memory through the visual hierarchy. Likewise, recognizing the input implies that sequentially accrued sensory evidence is successfully matched with memorized information (categorical knowledge). Although the existing models of prediction, memory, sensory representation and categorical decision are all implicitly cast within an information processing framework, it remains a challenge to precisely specify what this information is, and therefore where, when and how the architecture of the brain dynamically processes it to produce behaviour. Here, we review a framework that addresses these challenges for the studies of perception and categorization-stimulus information representation (SIR). We illustrate how SIR can reverse engineer the information contents of memory from behavioural and brain measures in the context of specific cognitive tasks that involve memory. We discuss two specific lessons from this approach that generally apply to memory studies: the importance of task, to constrain what the brain does, and of stimulus variations, to identify the specific information contents that are memorized, predicted, recalled and replayed. This article is part of the Theo Murphy meeting issue 'Memory reactivation: replaying events past, present and future'.


Subject(s)
Brain/physiology , Cognition/physiology , Memory Consolidation/physiology , Humans
18.
Hum Brain Mapp ; 41(5): 1212-1225, 2020 04 01.
Article in English | MEDLINE | ID: mdl-31782861

ABSTRACT

Fast and accurate face processing is critical for everyday social interactions, but it declines and becomes delayed with age, as measured by both neural and behavioral responses. Here, we addressed the critical challenge of understanding how aging changes neural information processing mechanisms to delay behavior. Young (20-36 years) and older (60-86 years) adults performed the basic social interaction task of detecting a face versus noise while we recorded their electroencephalogram (EEG). In each participant, using a new information theoretic framework we reconstructed the features supporting face detection behavior, and also where, when and how EEG activity represents them. We found that occipital-temporal pathway activity dynamically represents the eyes of the face images for behavior ~170 ms poststimulus, with a 40 ms delay in older adults that underlies their 200 ms behavioral deficit of slower reaction times. Our results therefore demonstrate how aging can change neural information processing mechanisms that underlie behavioral slow down.
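Information-theoretic analyses of this kind typically compute, per time point, the mutual information between a stimulus feature and the recorded signal; a hedged sketch with simulated data (the binning scheme, dimensions, and injected effect are assumptions for illustration):

```python
import numpy as np

def mutual_info(x, y, bins=8):
    """Mutual information (bits) between a discrete label x and a continuous signal y."""
    edges = np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1])
    y_binned = np.digitize(y, edges)                  # equipopulated bins
    joint = np.histogram2d(x, y_binned, bins=(2, bins))[0]
    p = joint / joint.sum()
    px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(4)
n_trials, n_times = 400, 100
feature = rng.integers(0, 2, n_trials)        # e.g., eyes visible vs. masked
eeg = rng.standard_normal((n_trials, n_times))
eeg[feature == 1, 40:50] += 0.8               # injected effect for the demo

# Time course of feature representation; its peak latency is the quantity
# that the study compares between young and older adults
mi = [mutual_info(feature, eeg[:, t]) for t in range(n_times)]
print(int(np.argmax(mi)), max(mi))
```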


Subject(s)
Face , Healthy Aging , Reaction Time/physiology , Visual Perception/physiology , Adult , Aged , Aged, 80 and over , Aging/psychology , Brain Mapping , Electroencephalography , Female , Humans , Magnetic Resonance Imaging , Male , Mental Processes , Middle Aged , Neural Pathways/diagnostic imaging , Neural Pathways/physiology , Occipital Lobe/diagnostic imaging , Occipital Lobe/physiology , Social Interaction , Temporal Lobe/diagnostic imaging , Temporal Lobe/physiology , Young Adult
19.
Nat Hum Behav ; 3(8): 817-826, 2019 08.
Article in English | MEDLINE | ID: mdl-31209368

ABSTRACT

Current cognitive theories are cast in terms of information-processing mechanisms that use mental representations1-4. For example, people use their mental representations to identify familiar faces under various conditions of pose, illumination and ageing, or to draw resemblance between family members. Yet, the actual information contents of these representations are rarely characterized, which hinders knowledge of the mechanisms that use them. Here, we modelled the three-dimensional representational contents of 4 faces that were familiar to 14 participants as work colleagues. The representational contents were created by reverse-correlating identity information generated on each trial with judgements of the face's similarity to the individual participant's memory of this face. In a second study, testing new participants, we demonstrated the validity of the modelled contents using everyday face tasks that generalize identity judgements to new viewpoints, age and sex. Our work highlights that such models of mental representations are critical to understanding generalization behaviour and its underlying information-processing mechanisms.


Subject(s)
Face , Facial Recognition , Memory , Adult , Female , Form Perception , Generalization, Psychological , Humans , Male , Models, Psychological , Young Adult
20.
Curr Biol ; 29(2): 319-326.e4, 2019 01 21.
Article in English | MEDLINE | ID: mdl-30639108

ABSTRACT

Over the past decade, extensive studies of the brain regions that support face, object, and scene recognition suggest that these regions have a hierarchically organized architecture that spans the occipital and temporal lobes [1-14], where visual categorizations unfold over the first 250 ms of processing [15-19]. This same architecture is flexibly involved in multiple tasks that require task-specific representations-e.g. categorizing the same object as "a car" or "a Porsche." While we partly understand where and when these categorizations happen in the occipito-ventral pathway, the next challenge is to unravel how these categorizations happen. That is, how does high-dimensional input collapse in the occipito-ventral pathway to become low dimensional representations that guide behavior? To address this, we investigated what information the brain processes in a visual perception task and visualized the dynamic representation of this information in brain activity. To do so, we developed stimulus information representation (SIR), an information theoretic framework, to tease apart stimulus information that supports behavior from that which does not. We then tracked the dynamic representations of both in magneto-encephalographic (MEG) activity. Using SIR, we demonstrate that a rapid (∼170 ms) reduction of behaviorally irrelevant information occurs in the occipital cortex and that representations of the information that supports distinct behaviors are constructed in the right fusiform gyrus (rFG). Our results thus highlight how SIR can be used to investigate the component processes of the brain by considering interactions between three variables (stimulus information, brain activity, behavior), rather than just two, as is the current norm.


Subject(s)
Occipital Lobe/physiology , Pattern Recognition, Visual/physiology , Temporal Lobe/physiology , Visual Perception/physiology , Humans , Magnetoencephalography , Photic Stimulation