Results 1 - 16 of 16
1.
Proc Natl Acad Sci U S A ; 121(26): e2402282121, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38885383

ABSTRACT

Goal-directed actions are characterized by two main features: the content (i.e., the action goal) and the form, called vitality forms (VF) (i.e., how actions are executed). It is well established that both the action content and the capacity to understand the content of another's action are mediated by a network formed by a set of parietal and frontal brain areas. In contrast, the neural bases of action forms (e.g., gentle or rude actions) have not been characterized. However, there are now studies showing that the observation and execution of actions endowed with VF activate, in addition to the parieto-frontal network, the dorso-central insula (DCI). In the present study, we used dynamic causal modeling (DCM) to establish the direction of information flow during the observation and execution of actions endowed with gentle and rude VF in the human brain. Based on previous fMRI studies, the selected nodes for the DCM comprised the posterior superior temporal sulcus (pSTS), the inferior parietal lobule (IPL), the premotor cortex (PM), and the DCI. Bayesian model comparison showed that, during action observation, two streams arose from pSTS: one toward IPL, concerning the action goal, and one toward DCI, concerning the action vitality forms. During action execution, two streams arose from PM: one toward IPL, concerning the action goal, and one toward DCI, concerning the action vitality forms. This last finding raises an interesting question concerning the possibility of eliciting VF in two distinct ways: cognitively (from PM to DCI) and affectively (from DCI to PM).
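
As an illustration of the model-comparison logic behind DCM (not the authors' code), the sketch below summarizes how posterior model probabilities can be computed from log model evidences under a uniform model prior; the model names and log-evidence values are hypothetical placeholders, and actual DCM estimation would typically be done in SPM.

```python
# Minimal sketch: fixed-effects Bayesian model comparison over candidate DCMs.
# The log-evidence values below are hypothetical placeholders; in practice they
# would come from DCM estimation (e.g., free-energy approximations per model).
import numpy as np

log_evidence = {
    "pSTS->IPL and pSTS->DCI": -11520.3,
    "pSTS->IPL only":          -11544.8,
    "pSTS->DCI only":          -11541.2,
}

models = list(log_evidence)
log_e = np.array([log_evidence[m] for m in models])

# Posterior model probabilities (uniform prior): softmax of the log evidences
log_e -= log_e.max()                      # subtract max for numerical stability
posterior = np.exp(log_e) / np.exp(log_e).sum()

for model, p in zip(models, posterior):
    print(f"{model}: posterior probability = {p:.3f}")
```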


Subjects
Brain Mapping; Goals; Magnetic Resonance Imaging; Humans; Male; Female; Adult; Nerve Net/physiology; Bayes Theorem; Brain/physiology; Brain/diagnostic imaging; Parietal Lobe/physiology; Models, Neurological; Young Adult
2.
Vision Res ; 218: 108380, 2024 May.
Article in English | MEDLINE | ID: mdl-38479050

ABSTRACT

Biological motion perception plays a critical role in various decisions in daily life. Failing to decide accurately in such a perceptual task could have life-threatening consequences. Neurophysiology and computational modeling studies suggest two processes mediating perceptual decision-making: one associated with the accumulation of sensory evidence and the other with response selection. Recent EEG studies in humans have introduced an event-related potential called the Centroparietal Positive Potential (CPP) as a neural marker that tracks sensory evidence accumulation while being effectively distinguishable from the motor-related lateralized readiness potential (LRP). The present study aims to investigate the neural mechanisms of biological motion perception within the framework of perceptual decision-making, a perspective that has been overlooked before. More specifically, we examine whether the CPP tracks the coherence of biological motion stimuli and can be distinguished from the LRP signal. We recorded EEG from human participants while they performed a direction discrimination task on a point-light walker stimulus embedded in various levels of noise. Our behavioral findings revealed shorter reaction times and reduced miss rates as the coherence of the stimuli increased. In addition, the CPP tracked the coherence of the biological motion stimuli with a tendency to reach a common level at the time of response, albeit with a later onset than previously reported in random-dot motion paradigms. Furthermore, the CPP was distinguished from the LRP signal based on its temporal profile. Overall, our results suggest that the mechanisms underlying perceptual decision-making generalize to more complex and socially significant stimuli such as biological motion.
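
For readers unfamiliar with the evidence-accumulation framework invoked here, the following minimal simulation (an illustrative sketch, not the paper's analysis) shows how a drift-diffusion process whose drift rate scales with stimulus coherence produces shorter decision times at higher coherence; all parameter values are assumptions.

```python
# Illustrative sketch: evidence accumulation (drift diffusion) with drift rate
# proportional to coherence, reproducing the qualitative RT-coherence pattern.
import numpy as np

rng = np.random.default_rng(0)

def mean_decision_time(coherence, n_trials=500, threshold=1.0, dt=0.001,
                       noise_sd=0.5, t_max=3.0):
    """Mean time (s) for accumulated evidence to reach the decision bound."""
    n_steps = int(t_max / dt)
    drift = 2.0 * coherence                           # assumed drift-coherence scaling
    steps = drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal((n_trials, n_steps))
    evidence = np.cumsum(steps, axis=1)
    crossed = np.abs(evidence) >= threshold
    first = np.argmax(crossed, axis=1)                # index of first bound crossing
    first = np.where(crossed.any(axis=1), first, n_steps - 1)
    return ((first + 1) * dt).mean()

for coh in (0.1, 0.3, 0.6, 1.0):
    print(f"coherence {coh:.1f}: mean decision time ~ {mean_decision_time(coh):.2f} s")
```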


Subjects
Motion Perception; Humans; Motion Perception/physiology; Evoked Potentials; Reaction Time/physiology; Decision Making/physiology; Contingent Negative Variation
3.
Vision Res ; 214: 108328, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37926626

ABSTRACT

Given its importance for survival and its social significance, biological motion (BM) perception is assumed to occur automatically. Previous behavioral results showed that task-irrelevant BM in the periphery interfered with task performance at the fovea. Under selective attention, BM perception is supported by a network of regions including the occipito-temporal (OTC), parietal, and premotor cortices. Retinotopy studies using BM stimuli have shown distinct maps for its processing under and away from selective attention. Based on these findings, we investigated how bottom-up processing of BM unfolds in the human brain under attentional load when BM is shown away from the focus of attention as a task-irrelevant stimulus. Participants (N = 31) underwent an fMRI study in which they performed an attentionally demanding visual detection task at the fovea while intact or scrambled point-light displays of BM were shown in the periphery. Our results showed a main effect of attentional load in fronto-parietal regions, and both univariate activity maps and multivariate pattern analysis supported the modulation of responses to the task-irrelevant peripheral stimuli by attentional load. However, this effect was not specific to intact BM stimuli and generalized to motion stimuli more broadly, as evidenced by the involvement of motion-sensitive OTC whenever dynamic stimuli were present in the periphery. These results confirm and extend previous work by showing that task-irrelevant distractors can be processed by stimulus-specific regions when enough attentional resources are available. We discuss the implications of these results for future studies.
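
A minimal sketch of the kind of ROI-based multivariate pattern analysis (MVPA) described here: cross-validated decoding of intact versus scrambled peripheral stimuli from voxel patterns, run separately per load condition. The data arrays, ROI size, and trial counts are hypothetical stand-ins, not the study's data.

```python
# Illustrative MVPA sketch: decode intact vs. scrambled BM from simulated ROI
# voxel patterns, separately for low- and high-load conditions.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n_trials, n_voxels = 80, 200                       # hypothetical trial count / ROI size

for load in ("low load", "high load"):
    X = rng.standard_normal((n_trials, n_voxels))  # stand-in for OTC voxel patterns
    y = np.repeat([0, 1], n_trials // 2)           # 0 = scrambled, 1 = intact BM

    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    acc = cross_val_score(clf, X, y, cv=5)
    print(f"{load}: decoding accuracy = {acc.mean():.2f} +/- {acc.std():.2f}")
```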


Subjects
Attention; Motion Perception; Humans; Parietal Lobe; Task Performance and Analysis; Magnetic Resonance Imaging; Visual Perception
4.
J Vis Exp ; (198), 2023 Aug 4.
Article in English | MEDLINE | ID: mdl-37677038

ABSTRACT

Perception of others' actions is crucial for survival, interaction, and communication. Despite decades of cognitive neuroscience research dedicated to understanding the perception of actions, we are still far from developing a neurally inspired computer vision system that approaches human action perception. A major challenge is that actions in the real world consist of temporally unfolding events in space that happen "here and now" and are actable. In contrast, visual perception and cognitive neuroscience research to date have largely studied action perception through 2D displays (e.g., images or videos) that lack the presence of actors in space and time and are therefore limited in affording actability. Despite the growing body of knowledge in the field, these challenges must be overcome for a better understanding of the fundamental mechanisms of the perception of others' actions in the real world. The aim of this study is to introduce a novel setup for conducting naturalistic laboratory experiments with live actors in scenarios that approximate real-world settings. The core element of the setup is a transparent organic light-emitting diode (OLED) screen through which participants can watch the live actions of a physically present actor while the timing of their presentation is precisely controlled. In this work, the setup was tested in a behavioral experiment. We believe that the setup will help researchers reveal fundamental and previously inaccessible cognitive and neural mechanisms of action perception and will be a foundation for future studies investigating social perception and cognition in naturalistic settings.


Subjects
Cognitive Neuroscience; Psychology, Experimental; Humans; Cognition; Communication; Laboratories
5.
Int J Soc Robot ; : 1-17, 2023 Jan 20.
Article in English | MEDLINE | ID: mdl-36694634

ABSTRACT

The present study aims to investigate how gender stereotypes affect people's gender attribution to social robots. To this end, we examined whether a robot can be assigned a gender depending on the action it performs. The study consisted of three stages. In the first stage, we determined masculine and feminine actions via a survey of 54 participants. In the second stage, we selected a gender-neutral robot by having 76 participants rate several robot stimuli on the masculine-feminine spectrum. In the third stage, we created short animation videos in which the gender-neutral robot selected in stage two performed the masculine and feminine actions identified in stage one. We then asked 102 participants to evaluate the robot in the videos on the masculine-feminine spectrum. We asked them to rate the videos according to their own view (self-view) and according to how they thought society would evaluate them (society-view). We also used the Socialization of Gender Norms Scale (SGNS) to identify individual differences in gender attribution to social robots. We found a main effect of action category (feminine vs. masculine) on both self-view and society-view reports, suggesting that a neutral robot was judged to be feminine if it performed feminine actions and masculine if it performed masculine actions. However, the society-view reports were more pronounced than the self-view reports: when the neutral robot performed masculine actions, it was rated as more masculine in the society-view reports than in the self-view reports, and when it performed feminine actions, it was rated as more feminine in the society-view reports than in the self-view reports. In addition, the SGNS predicted the society-view reports (for feminine actions) but not the self-view reports. In sum, our study suggests that people can attribute gender to social robots depending on the task they perform.

6.
Front Hum Neurosci ; 16: 883905, 2022.
Article in English | MEDLINE | ID: mdl-35923750

ABSTRACT

Functional near-infrared spectroscopy (fNIRS) has been attracting increasing interest as a practical, mobile functional brain imaging technology for understanding the neural correlates of social cognition and emotional processing in the human prefrontal cortex (PFC). Considering the cognitive complexity of human-robot interactions, the aim of this study was to explore the neural correlates of emotional processing of congruent and incongruent pairs of human and robot audio-visual stimuli in the human PFC with fNIRS. Hemodynamic responses from the PFC region of 29 subjects were recorded with fNIRS during an experimental paradigm that consisted of auditory and visual presentation of human and robot stimuli. Distinct neural responses to human and robot stimuli were detected in the dorsolateral prefrontal cortex (DLPFC) and orbitofrontal cortex (OFC) regions. Presentation of the robot voice elicited a significantly smaller hemodynamic response than presentation of the human voice in a left OFC channel. Meanwhile, processing of human faces elicited significantly higher hemodynamic activity than processing of robot faces in two left DLPFC channels and a left OFC channel. A significant correlation between the hemodynamic and behavioral responses for the face-voice mismatch effect was found in the left OFC. Our results highlight the potential of fNIRS for unraveling the neural processing of human and robot audio-visual stimuli, which might inform the optimization of social robot designs and help elucidate how the PFC processes such stimuli under naturalistic conditions.

7.
J Neurosci ; 2022 Jul 20.
Article in English | MEDLINE | ID: mdl-35863889

ABSTRACT

Object and action perception in cluttered dynamic natural scenes relies on efficient allocation of limited brain resources to prioritize the attended targets over distractors. It has been suggested that during visual search for objects, the distributed semantic representation of hundreds of object categories is warped to expand the representation of targets. Yet, little is known about whether and where in the brain visual search for action categories modulates semantic representations. To address this fundamental question, we studied brain activity recorded from five subjects (1 female) via functional magnetic resonance imaging while they viewed natural movies and searched for either communication or locomotion actions. We find that attention directed to action categories elicits tuning shifts that warp semantic representations broadly across neocortex, and that these shifts interact with the intrinsic selectivity of cortical voxels for target actions. These results suggest that attention serves to facilitate task performance during social interactions by dynamically shifting semantic selectivity towards target actions, and that tuning shifts are a general feature of conceptual representations in the brain.

SIGNIFICANCE STATEMENT: The ability to swiftly perceive the actions and intentions of others is a crucial skill for humans, which relies on efficient allocation of limited brain resources to prioritize the attended targets over distractors. However, little is known about the nature of high-level semantic representations during natural visual search for action categories. Here we provide the first evidence showing that attention significantly warps semantic representations by inducing tuning shifts in single cortical voxels, broadly spread across occipitotemporal, parietal, prefrontal, and cingulate cortices. This dynamic attentional mechanism can facilitate action perception by efficiently allocating neural resources to accentuate the representation of task-relevant action categories.
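
The tuning-shift analysis can be illustrated with a simple voxelwise encoding-model sketch: fit a category regression model separately for each attention condition and compare the resulting weight profiles per voxel. Everything below (data, dimensions, the ridge penalty) is a simulated assumption, not the authors' pipeline.

```python
# Conceptual sketch of measuring attention-induced tuning shifts with
# condition-specific ridge encoding models fit on simulated data.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_time, n_categories, n_voxels = 600, 30, 50      # hypothetical sizes

stim = rng.random((n_time, n_categories))         # category time courses in the movie

def fit_weights(bold):
    """Return (n_voxels, n_categories) tuning weights from a ridge encoding model."""
    model = Ridge(alpha=10.0).fit(stim, bold)
    return model.coef_

bold_attend_comm = rng.standard_normal((n_time, n_voxels))   # search-for-communication runs
bold_attend_loco = rng.standard_normal((n_time, n_voxels))   # search-for-locomotion runs

w_comm = fit_weights(bold_attend_comm)
w_loco = fit_weights(bold_attend_loco)

# Tuning similarity per voxel: correlation between the two weight profiles
# (lower similarity = larger attention-induced change in tuning)
shift = np.array([np.corrcoef(w_comm[v], w_loco[v])[0, 1] for v in range(n_voxels)])
print(f"median tuning similarity across conditions: {np.median(shift):.2f}")
```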

8.
Brain Sci ; 13(1), 2022 Dec 29.
Article in English | MEDLINE | ID: mdl-36672043

ABSTRACT

The investigation of the perception of others' actions and underlying neural mechanisms has been hampered by the lack of a comprehensive stimulus set covering the human behavioral repertoire. To fill this void, we present a video set showing 100 human actions recorded in natural settings, covering the human repertoire except for emotion-driven (e.g., sexual) actions and those involving implements (e.g., tools). We validated the set using fMRI and showed that observation of the 100 actions activated the well-established action observation network. We also quantified the videos' low-level visual features (luminance, optic flow, and edges). Thus, this comprehensive video set is a valuable resource for perceptual and neuronal studies.
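
As a rough illustration of how such low-level features can be quantified per frame (an assumed OpenCV-based sketch, not the authors' code), the snippet below computes mean luminance, edge density, and optic-flow magnitude for a placeholder video file.

```python
# Rough sketch: per-frame luminance, edge density, and optic-flow magnitude.
# The file path is a placeholder; parameter choices are assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture("action_clip.mp4")        # hypothetical video file
prev_gray = None
luminance, edge_density, flow_mag = [], [], []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    luminance.append(gray.mean())                               # mean pixel intensity
    edges = cv2.Canny(gray, 100, 200)
    edge_density.append((edges > 0).mean())                     # fraction of edge pixels
    if prev_gray is not None:
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        flow_mag.append(np.linalg.norm(flow, axis=2).mean())    # mean flow magnitude
    prev_gray = gray

cap.release()
print(f"mean luminance: {np.mean(luminance):.1f}, "
      f"edge density: {np.mean(edge_density):.3f}, "
      f"optic flow: {np.mean(flow_mag):.3f}")
```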

9.
Neuroimage ; 237: 118220, 2021 Aug 15.
Article in English | MEDLINE | ID: mdl-34058335

ABSTRACT

Action observation is supported by a network of regions in occipito-temporal, parietal, and premotor cortex in primates. Recent research suggests that the parietal node has regions dedicated to different action classes, including manipulation, interpersonal interactions, skin displacement, locomotion, and climbing. The goals of the current study were: 1) to extend this work with new classes of actions that are communicative and specific to humans, and 2) to investigate how parietal cortex differs from occipito-temporal and premotor cortex in representing action classes. Human subjects underwent fMRI scanning while observing three action classes: indirect communication, direct communication, and manipulation, plus two types of control stimuli: static controls, which were static frames from the video clips, and dynamic controls, consisting of temporally scrambled optic flow information. Using univariate analysis, MVPA, and representational similarity analysis, our study presents several novel findings. First, we provide further evidence for the anatomical segregation of different action classes in parietal cortex: we found a new site, in cytoarchitectonic parietal area PFt, that is specific for representing human-specific indirect communicative actions. Second, we found that the discriminability between action classes was higher in parietal cortex than at the other two levels, suggesting the coding of action identity information at this level. Finally, our results advocate the use of control stimuli not just for univariate analysis of complex action videos but also when using multivariate techniques.
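
The level-by-level discriminability comparison can be illustrated with a cross-validated multiclass decoding sketch run within each ROI; the response patterns, voxel counts, and trial numbers below are simulated placeholders rather than the study's data.

```python
# Illustrative sketch: compare action-class discriminability across ROIs via
# cross-validated 3-way decoding on simulated response patterns.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
classes = ["indirect communication", "direct communication", "manipulation"]
rois = {"occipito-temporal": 300, "parietal": 250, "premotor": 200}   # hypothetical voxel counts

y = np.repeat(np.arange(len(classes)), 40)       # 40 hypothetical trials per class

for roi, n_voxels in rois.items():
    X = rng.standard_normal((len(y), n_voxels))  # stand-in for ROI response patterns
    acc = cross_val_score(LinearSVC(max_iter=5000), X, y, cv=5).mean()
    print(f"{roi}: 3-way decoding accuracy = {acc:.2f}")
```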


Subjects
Brain Mapping; Motor Activity/physiology; Nonverbal Communication/physiology; Parietal Lobe/physiology; Social Perception; Visual Perception/physiology; Adult; Female; Humans; Magnetic Resonance Imaging; Male; Parietal Lobe/diagnostic imaging; Young Adult
10.
Eur J Neurosci ; 52(12): 4732-4750, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32745369

ABSTRACT

When observing others' behavior, it is important to perceive not only the identity of the observed actions (OAs), but also the number of times they were performed. Given the mounting evidence implicating posterior parietal cortex in action observation, and in particular in the observation of manipulative actions, the aim of this study was to identify the parietal region, if any, that contributes to the processing of observed manipulative action (OMA) numerosity, using functional magnetic resonance imaging (fMRI). Twenty-one right-handed healthy volunteers performed two discrimination tasks in the scanner, responding to video stimuli in which an actor performed manipulative actions on colored target balls that appeared four times consecutively. The subjects discriminated between two small numerosities of either OMAs ("Action" condition) or ball colors ("Ball" condition). A significant difference between the "Action" and "Ball" conditions was observed in occipito-temporal cortex and the putative human anterior intraparietal (phAIP) area, as well as in the third topographic map of numerosity-selective neurons at the postcentral sulcus (NPC3) of the left parietal cortex. A further region-of-interest analysis of the group-average data showed that, at the single-voxel level, the latter area favored the numerosity of OAs more than any other parietal or occipito-temporal numerosity map. These results suggest that phAIP processes the identity of OMAs, while neighboring NPC3 likely processes the numerosity of the identified OAs.
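
A minimal sketch of the kind of ROI comparison reported here: a paired test on subject-level response estimates for the "Action" versus "Ball" conditions within a numerosity-map ROI. The beta values are hypothetical placeholders.

```python
# Minimal sketch: paired comparison of condition betas within a hypothetical ROI.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_subjects = 21

# Hypothetical mean beta estimates per subject in the NPC3 ROI
beta_action = rng.normal(loc=0.6, scale=0.3, size=n_subjects)
beta_ball = rng.normal(loc=0.4, scale=0.3, size=n_subjects)

t, p = stats.ttest_rel(beta_action, beta_ball)
print(f"NPC3 ROI, Action vs Ball: t({n_subjects - 1}) = {t:.2f}, p = {p:.3f}")
```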


Subjects
Brain Mapping; Parietal Lobe; Cerebral Cortex; Hand; Humans; Magnetic Resonance Imaging; Parietal Lobe/diagnostic imaging; Photic Stimulation
11.
Cortex ; 128: 132-142, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32335327

ABSTRACT

Visual perception of actions is supported by a network of brain regions in the occipito-temporal, parietal, and premotor cortex in the primate brain, known as the Action Observation Network (AON). Although a growing body of research characterizes the functional properties of each node of this network, the communication and direction of information flow between the nodes remain unclear. According to the predictive coding account of action perception (Kilner, Friston, & Frith, 2007a, 2007b), this network is not a purely feedforward system but has backward connections through which prediction error signals are communicated between the regions of the AON. In the present study, we investigated the effective connectivity of the AON in an experimental setting in which the human subjects' predictions about the observed agent were violated, using fMRI and Dynamic Causal Modeling (DCM). We specifically examined the influence of the lowest and highest nodes in the AON hierarchy, pSTS and ventral premotor cortex, respectively, on the middle node, the inferior parietal cortex, during prediction violation. Our DCM results suggest that, during perception of actions that violate people's predictions, the influence on the inferior parietal node arises through a feedback connection from ventral premotor cortex.


Subjects
Brain Mapping; Visual Perception; Brain; Magnetic Resonance Imaging; Photic Stimulation
12.
Neuropsychologia ; 127: 35-47, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30772426

ABSTRACT

Visual processing of actions is supported by a network consisting of occipito-temporal, parietal, and premotor regions in the human brain, known as the Action Observation Network (AON). In the present study, we investigate which aspects of visually perceived actions are represented in this network using fMRI and computational modeling. Human subjects performed an action perception task during scanning. We characterized different aspects of the stimuli, ranging from purely visual properties such as form and motion to higher-level aspects such as intention, using computer vision and categorical modeling. We then linked the models of the stimuli to the three nodes of the AON with representational similarity analysis. Our results show that different nodes of the network represent different aspects of actions. While occipito-temporal cortex performs visual analysis of actions by integrating form and motion information, parietal cortex builds on these visual representations and transforms them into more abstract and semantic representations coding the target of the action, the action type, and the intention. Taken together, these results shed light on the neuro-computational mechanisms that support visual perception of actions and support the view that the AON is a hierarchical system in which successive levels of the cortex code increasingly complex features.
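
The representational similarity analysis (RSA) logic used to link stimulus models to AON nodes can be sketched as follows: build a neural representational dissimilarity matrix (RDM) for an ROI and correlate it with model RDMs capturing different stimulus aspects. All matrices below are simulated stand-ins; the feature spaces and ROI are assumptions.

```python
# RSA sketch: correlate model RDMs with a simulated neural RDM for one ROI.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(11)
n_stimuli = 24

def rdm(patterns):
    """Condition-by-condition dissimilarities (correlation distance), vectorized."""
    return pdist(patterns, metric="correlation")

# Hypothetical feature spaces for the same 24 action clips
model_rdms = {
    "motion energy":   rdm(rng.random((n_stimuli, 100))),
    "form/shape":      rdm(rng.random((n_stimuli, 80))),
    "action category": rdm(rng.random((n_stimuli, 10))),
}
neural_rdm = rdm(rng.standard_normal((n_stimuli, 400)))   # stand-in for parietal ROI patterns

for name, model in model_rdms.items():
    rho, p = spearmanr(model, neural_rdm)
    print(f"parietal ROI vs {name}: Spearman rho = {rho:.2f} (p = {p:.3f})")
```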


Subjects
Cerebral Cortex/diagnostic imaging; Cerebral Cortex/physiology; Brain Mapping; Computer Simulation; Female; Humans; Magnetic Resonance Imaging; Male; Motor Cortex/diagnostic imaging; Motor Cortex/physiology; Neural Networks, Computer; Occipital Lobe/diagnostic imaging; Occipital Lobe/physiology; Parietal Lobe/diagnostic imaging; Parietal Lobe/physiology; Photic Stimulation; Psychomotor Performance/physiology; Temporal Lobe/diagnostic imaging; Temporal Lobe/physiology; Visual Perception; Young Adult
13.
Neuropsychologia ; 114: 181-185, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29704523

ABSTRACT

The uncanny valley refers to humans' negative reaction to almost-but-not-quite-human agents. Theoretical work proposes prediction violation as an explanation for the uncanny valley, but no empirical work has directly tested it. Here, we provide evidence supporting this theory using event-related brain potential recordings from the human scalp. Human subjects were presented with images and videos of three agents while EEG was recorded: a real human, a mechanical robot, and a realistic robot in between. The real human and the mechanical robot had congruent appearance and motion, whereas the realistic robot had incongruent appearance and motion. We hypothesized that the appearance of the agent would provide a context for predicting its movement, and accordingly that perception of the realistic robot would elicit an N400 effect indicating the violation of predictions, whereas the human and the mechanical robot would not. Our data confirmed this hypothesis, suggesting that the uncanny valley can be explained by the violation of one's predictions about human norms when one encounters realistic but artificial human forms. Importantly, our results indicate that the mechanisms underlying the perception of other individuals in our environment are predictive in nature.
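
An illustrative sketch of how an N400-like effect can be quantified (not the authors' pipeline): average the EEG epochs per agent and compare mean amplitude in a 300-500 ms window over centro-parietal channels. The epoch arrays, channel indices, and sampling rate are assumed placeholders.

```python
# Sketch: ERP averaging and N400-window mean amplitude on simulated epochs
# shaped (trials x channels x time).
import numpy as np

rng = np.random.default_rng(5)
sfreq = 250                                        # Hz, assumed sampling rate
times = np.arange(-0.2, 0.8, 1 / sfreq)            # epoch from -200 to 800 ms
window = (times >= 0.3) & (times <= 0.5)           # N400 window
cp_channels = [0, 1, 2]                            # assumed centro-parietal channel indices

def mean_n400(epochs):
    """Mean amplitude (a.u.) in the N400 window over centro-parietal channels."""
    erp = epochs.mean(axis=0)                      # average over trials -> (channels, time)
    return erp[cp_channels][:, window].mean()

epochs = {agent: rng.standard_normal((60, 32, times.size))
          for agent in ("human", "android", "robot")}

for agent, data in epochs.items():
    print(f"{agent}: N400-window amplitude = {mean_n400(data):.2f}")
```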


Subjects
Brain Mapping; Brain/physiology; Evoked Potentials/physiology; Pattern Recognition, Visual/physiology; Social Perception; Adult; Electroencephalography; Female; Humans; Male; Motion; Photic Stimulation; Robotics; Young Adult
14.
Front Hum Neurosci ; 9: 364, 2015.
Article in English | MEDLINE | ID: mdl-26150782

ABSTRACT

Understanding others' actions is essential for functioning in the physical and social world. In the past two decades, research has shown that action perception involves the motor system, supporting theories that we understand others' behavior via embodied motor simulation. Recently, the empirical study of action perception has been facilitated by the use of well-controlled artificial stimuli, such as robots. One broad question this approach can address is which aspects of similarity between the observer and the observed agent facilitate motor simulation. Since humans have evolved among other humans and animals, using artificial stimuli such as robots allows us to probe whether our social perceptual systems are specifically tuned to process other biological entities. In this study, we used humanoid robots with different degrees of human-likeness in appearance and motion, along with electromyography (EMG) to measure muscle activity in participants' arms while they either observed or imitated videos of three agents producing actions with their right arm. The agents were a Human (biological appearance and motion), a Robot (mechanical appearance and motion), and an Android (biological appearance and mechanical motion). Right arm muscle activity increased when participants imitated all agents. Increased muscle activation was also found in the stationary arm, both during imitation and during observation. Furthermore, muscle activity was sensitive to motion dynamics: activity was significantly stronger for imitation of the human than for both mechanical agents. There was also a relationship between the dynamics of the muscle activity and the motion dynamics of the stimuli. Overall, our data indicate that motor simulation is not limited to observation and imitation of agents with a biological appearance, but is also found for robots. However, we also found sensitivity to human motion in the EMG responses. Combining data from multiple methods allows us to obtain a more complete picture of action understanding and the underlying neural computations.
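
A rough sketch of a standard EMG processing pipeline for this kind of comparison (band-pass filter, rectify, low-pass envelope, then compare conditions); the signals are simulated and the filter settings are common but assumed values, not necessarily those used in the study.

```python
# EMG envelope sketch: band-pass, rectify, low-pass, then compare condition means.
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(9)
fs = 1000                                           # Hz, assumed EMG sampling rate

def emg_envelope(signal, fs):
    b, a = butter(4, [20, 450], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, signal)
    rectified = np.abs(filtered)
    b, a = butter(4, 6, btype="lowpass", fs=fs)     # 6 Hz envelope smoothing
    return filtfilt(b, a, rectified)

for agent in ("human", "android", "robot"):
    emg = rng.standard_normal(5 * fs)               # 5 s of simulated arm EMG
    env = emg_envelope(emg, fs)
    print(f"{agent}: mean EMG envelope = {env.mean():.3f} (arbitrary units)")
```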

16.
Front Neurorobot ; 7: 19, 2013.
Article in English | MEDLINE | ID: mdl-24348375

ABSTRACT

The perception of others' actions supports important skills such as communication, intention understanding, and empathy. Are the mechanisms of action processing in the human brain specifically tuned to process biological agents? Humanoid robots can perform recognizable actions but can look and move differently from humans, and as such can be used in experiments to address such questions. Here, we recorded EEG as participants viewed actions performed by three agents. In the Human condition, the agent had biological appearance and motion. The other two conditions featured a state-of-the-art robot in two different appearances: Android, which had biological appearance but mechanical motion, and Robot, which had mechanical appearance and motion. We explored whether sensorimotor mu (8-13 Hz) and frontal theta (4-8 Hz) activity exhibited selectivity for biological entities, in particular whether the visual appearance and/or the motion of the observed agent was biological. Sensorimotor mu suppression has been linked to the motor simulation aspect of action processing (and the human mirror neuron system, MNS), and frontal theta to semantic and memory-related aspects. For all three agents, action observation induced significant attenuation in the power of mu oscillations, with no difference between agents. Thus, mu suppression, considered an index of MNS activity, does not appear to be selective for biological agents. Observation of the Robot resulted in greater frontal theta activity compared to the Android and the Human, whereas the latter two did not differ from each other. Frontal theta thus appears to be sensitive to visual appearance, suggesting that agents that are not sufficiently biological in appearance may impose greater memory processing demands on the observer. Studies combining robotics and neuroscience, such as this one, allow us to explore the neural basis of action processing on the one hand and inform the design of social robots on the other.
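
A minimal sketch of a mu-suppression index consistent with the description above: compare 8-13 Hz power over sensorimotor channels during action observation with a baseline period, expressed as a log ratio. The signals, sampling rate, and channel choice are assumptions for illustration.

```python
# Mu-suppression sketch: log ratio of 8-13 Hz power (observation vs. baseline)
# computed with Welch's method on simulated sensorimotor EEG.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(2)
fs = 500                                            # Hz, assumed sampling rate

def band_power(signal, fs, fmin=8, fmax=13):
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    band = (freqs >= fmin) & (freqs <= fmax)
    return psd[band].mean()

for agent in ("human", "android", "robot"):
    baseline = rng.standard_normal(4 * fs)          # stand-in for pre-stimulus EEG (e.g., C3/C4)
    observation = rng.standard_normal(4 * fs)       # stand-in for action-observation EEG
    mu_index = np.log(band_power(observation, fs) / band_power(baseline, fs))
    print(f"{agent}: mu suppression index = {mu_index:+.3f}")   # negative = suppression
```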
