1.
Front Psychol ; 14: 1221081, 2023.
Article in English | MEDLINE | ID: mdl-37794914

ABSTRACT

A growing body of research suggests that movement aids facial expression recognition. However, less is known about the conditions under which this dynamic advantage occurs. The aim of this research was to test emotion recognition in static and dynamic facial expressions, thereby exploring the role of three featural parameters (prototypicality, ambiguity, and complexity) in human and machine analysis. In two studies, facial expression videos and corresponding images depicting the peak of the target and non-target emotion were presented to human observers and a machine classifier (FACET). Results revealed higher recognition rates for dynamic stimuli compared to non-target images. This benefit disappeared for target-emotion images, which were recognised as well as (or even better than) videos, and which were more prototypical, less ambiguous, and more complex in appearance than non-target images. While prototypicality and ambiguity exerted more predictive power in machine performance, complexity was more indicative of human emotion recognition. Interestingly, recognition performance by the machine was superior to that of humans for both target and non-target images. Together, the findings point towards a compensatory role of dynamic information, particularly when static stimuli lack relevant features of the target emotion. Implications for research using automatic facial expression analysis (AFEA) are discussed.

2.
Neuropsychologia ; 189: 108668, 2023 Oct 10.
Article in English | MEDLINE | ID: mdl-37619935

ABSTRACT

Eye contact with a social robot has been shown to elicit similar psychophysiological responses to eye contact with another human. However, it is becoming increasingly clear that the attention- and affect-related psychophysiological responses differentiate between direct (toward the observer) and averted gaze mainly when viewing embodied faces that are capable of social interaction, whereas pictorial or pre-recorded stimuli have no such capability. It has been suggested that genuine eye contact, as indicated by the differential psychophysiological responses to direct and averted gaze, requires a feeling of being watched by another mind. Therefore, we measured event-related potentials (N170 and frontal P300) with EEG, facial electromyography, skin conductance, and heart rate deceleration responses to seeing a humanoid robot's direct versus averted gaze, while manipulating the impression of the robot's intentionality. The results showed that the N170 and the facial zygomatic responses were greater to direct than to averted gaze of the robot, and independent of the robot's intentionality, whereas the frontal P300 responses were more positive to direct than to averted gaze only when the robot appeared intentional. The study provides further evidence that the gaze behavior of a social robot elicits attentional and affective responses and adds that the robot's seemingly autonomous social behavior plays an important role in eliciting higher-level socio-cognitive processing.

3.
Exp Brain Res ; 241(7): 1739-1756, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37306753

ABSTRACT

In young adults (YA) who practised controlling a virtual tool in augmented reality (AR), the emergence of a sense of body ownership over the tool was associated with the integration of the virtual tool into the body schema (BS). Agency emerged independently of BS plasticity. Here we aimed to replicate these findings in older adults (OA). Although OA are still able to learn new motor tasks, their brain plasticity and learning capacity are reduced. We predicted that OA would be able to gain control over the virtual tool, as indicated by the emergence of agency, but would show less BS plasticity compared to YA. Still, an association between BS plasticity and body ownership was expected. OA were trained in AR to control a virtual gripper to enclose and touch a virtual object. In the visuo-tactile (VT) but not the vision-only (V) condition, vibro-tactile feedback was applied through a CyberTouch II glove when the tool touched the object. BS plasticity was assessed with a tactile distance judgement task in which participants judged distances between two tactile stimuli applied to their right forearm. Participants further rated their perceived ownership and agency after training. As expected, agency emerged during the use of the tool. However, results did not indicate any changes in the BS of the forearm after virtual tool-use training, and an association between BS plasticity and the emergence of body ownership could not be confirmed for OA. As in YA, the practice effect was stronger in the visuo-tactile feedback condition than in the vision-only condition. We conclude that a sense of agency may strongly relate to improvement in tool use in OA independently of alterations in the BS, whereas ownership did not emerge, due to a lack of BS plasticity.


Subject(s)
Augmented Reality , Illusions , Tool Use Behavior , Touch Perception , Young Adult , Humans , Aged , Forearm , Body Image , Hand
4.
Exp Brain Res ; 241(7): 1721-1738, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37306754

ABSTRACT

In this study, we examined whether training with a virtual tool in augmented reality (AR) affects the emergence of ownership and agency over the tool, and whether this relates to changes in the body schema (BS). Thirty-four young adults learned to control a virtual gripper to grasp a virtual object. In the visuo-tactile (VT) but not the vision-only (V) condition, vibro-tactile feedback was applied to the palm, thumb, and index finger through a CyberTouch II glove when the tool touched the object. Changes in the forearm BS were assessed with a tactile distance judgement task (TDJ) in which participants judged distances between two tactile stimuli applied to their right forearm in either proximodistal or mediolateral orientation. Participants further rated their perceived ownership and agency after training. TDJ estimation errors were reduced after training for proximodistal orientations, suggesting that stimuli oriented along the arm axis were perceived as closer together. Higher ownership ratings were associated with a higher performance level, with more BS plasticity (i.e., a stronger reduction in TDJ estimation error), and with training in the VT rather than the V feedback condition. Agency over the tool was achieved independently of BS plasticity. We conclude that the emergence of a sense of ownership, but not agency, depends on performance level and the integration of the virtual tool into the arm representation.


Subject(s)
Body Image , Tool Use Behavior , Young Adult , Humans , Visual Perception , Ownership , Hand
5.
Article in German | MEDLINE | ID: mdl-36650296

ABSTRACT

Artificial intelligence (AI) is becoming increasingly important in healthcare. This development triggers serious concerns that can be summarized by six major "worst-case scenarios". From AI spreading disinformation and propaganda, to a potential new arms race between major powers, to a possible rule of algorithms ("algocracy") based on biased gatekeeper intelligence, the real dangers of an uncontrolled development of AI are by no means to be underestimated, especially in the health sector. However, fear of AI could cause humanity to miss the opportunity to positively shape the development of our society together with an AI that is friendly to us. Use cases in healthcare play a primary role in this discussion, as both the risks and the opportunities of new AI-based systems become particularly clear here. For example, should older people with dementia (PWD) be allowed to entrust aspects of their autonomy to AI-based assistance systems so that they may continue to independently manage other aspects of their daily lives? In this paper, we argue that the classic balancing act between the dangers and opportunities of AI in healthcare can be at least partially overcome by taking a long-term ethical approach toward a symbiotic relationship between humans and AI. We exemplify this approach by showcasing our I-CARE system, an AI-based recommendation system for the tertiary prevention of dementia. This system has been in development since 2015 as the I-CARE Project at the University of Bremen, where it is still being researched today.


Subject(s)
Artificial Intelligence , Dementia , Humans , Aged , Symbiosis , Germany , Delivery of Health Care
6.
Sensors (Basel) ; 24(1)2023 Dec 26.
Article in English | MEDLINE | ID: mdl-38202988

ABSTRACT

This paper provides a comprehensive overview of affective computing systems for facial expression recognition (FER) research in naturalistic contexts. The first section presents an updated account of user-friendly FER toolboxes incorporating state-of-the-art deep learning models and elaborates on their neural architectures, datasets, and performances across domains. These sophisticated FER toolboxes can robustly address a variety of challenges encountered in the wild such as variations in illumination and head pose, which may otherwise impact recognition accuracy. The second section of this paper discusses multimodal large language models (MLLMs) and their potential applications in affective science. MLLMs exhibit human-level capabilities for FER and enable the quantification of various contextual variables to provide context-aware emotion inferences. These advancements have the potential to revolutionize current methodological approaches for studying the contextual influences on emotions, leading to the development of contextualized emotion models.


Subject(s)
Deep Learning , Humans , Facial Expression , Awareness , Emotions , Language
7.
Sensors (Basel) ; 22(9)2022 May 06.
Article in English | MEDLINE | ID: mdl-35591224

ABSTRACT

In this paper, we introduce an approach for future frame prediction based on a single input image. Our method is able to generate an entire video sequence from the information contained in the input frame. We adopt an autoregressive approach in our generation process, i.e., the output from each time step is fed as the input to the next step. Unlike other video prediction methods that use "one-shot" generation, our method preserves many more details from the input image, while also capturing the critical pixel-level changes between frames. We overcome the problem of generation-quality degradation by introducing a "complementary mask" module in our architecture, and we show that this allows the model to focus only on generating the pixels that need to change, while reusing those that should remain static from the previous frame. We empirically validate our method against various video prediction models on the UT Dallas Dataset, and show that our approach is able to generate high-quality, realistic video sequences from one static input image. In addition, we validate the robustness of our method by testing a pre-trained model on the unseen ADFES facial expression dataset. We also provide qualitative results of our model tested on a human action dataset: the Weizmann Action database.
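For readers who want the core mechanism in concrete form, the sketch below illustrates the autoregressive rollout and the complementary-mask blending described in the abstract. The architecture, layer sizes, and names (FramePredictor, rollout) are illustrative assumptions, not the authors' published model.

```python
# Minimal sketch of autoregressive frame prediction with a
# "complementary mask". The network layout is an assumption; only the
# blending rule (regenerate some pixels, copy the rest) follows the
# mechanism described in the abstract.
import torch
import torch.nn as nn

class FramePredictor(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.pixels = nn.Conv2d(32, channels, 3, padding=1)  # candidate pixels
        self.mask = nn.Conv2d(32, 1, 3, padding=1)           # per-pixel mask

    def forward(self, frame):
        h = self.backbone(frame)
        generated = torch.sigmoid(self.pixels(h))  # candidate next frame
        m = torch.sigmoid(self.mask(h))            # 1 = regenerate, 0 = reuse
        # Complementary mask: change only the pixels that need changing,
        # copy the static ones from the previous frame.
        return m * generated + (1.0 - m) * frame

def rollout(model, first_frame, steps):
    """Autoregressive generation: each output is fed back as input."""
    frames, frame = [], first_frame
    for _ in range(steps):
        frame = model(frame)
        frames.append(frame)
    return torch.stack(frames, dim=1)  # (batch, time, C, H, W)

model = FramePredictor()
video = rollout(model, torch.rand(1, 3, 64, 64), steps=8)
print(video.shape)  # torch.Size([1, 8, 3, 64, 64])
```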


Subject(s)
Algorithms , Databases, Factual , Humans
8.
Front Robot AI ; 9: 836462, 2022.
Article in English | MEDLINE | ID: mdl-35265673

ABSTRACT

Social robots are increasingly being studied in educational roles, including as tutees in learning-by-teaching applications. To explore the benefits and drawbacks of using robots in this way, it is important to study how robot tutees compare to traditional learning-by-teaching situations. In this paper, we report the results of a within-subjects field experiment that compared a robot tutee to a human tutee in a Swedish primary school. Sixth-grade students participated in the study as tutors in a collaborative mathematics game where they were responsible for teaching a robot tutee as well as a third-grade student in two separate sessions. Their teacher was present to provide support and guidance for both sessions. Participants' perceptions of the interactions were then gathered through a set of quantitative instruments measuring their enjoyment and willingness to interact with the tutees again, communication and collaboration with the tutees, their understanding of the task, sense of autonomy as tutors, and perceived learning gains for tutor and tutee. The results showed that the two scenarios were comparable with respect to enjoyment and willingness to play again, as well as perceptions of learning gains. However, significant differences were found for communication and collaboration, which participants considered easier with a human tutee. They also felt significantly less autonomous in their roles as tutors with the robot tutee as measured by their stated need for their teacher's help. Participants further appeared to perceive the activity as somewhat clearer and working better when playing with the human tutee. These findings suggest that children can enjoy engaging in peer tutoring with a robot tutee. However, the interactive capabilities of robots will need to improve quite substantially before they can potentially engage in autonomous and unsupervised interactions with children.

9.
Behav Res Methods ; 54(6): 2678-2692, 2022 12.
Article in English | MEDLINE | ID: mdl-34918224

ABSTRACT

The vast majority of research on human emotional tears has relied on posed and static stimulus materials. In this paper, we introduce the Portsmouth Dynamic Spontaneous Tears Database (PDSTD), a free resource comprising video recordings of 24 female encoders depicting a balanced representation of sadness stimuli with and without tears. Encoders watched a neutral film and a self-selected sad film and reported their emotional experience for 9 emotions. Extending this initial validation, we obtained norming data from an independent sample of naïve observers (N = 91, 45 females) who watched videos of the encoders during three time phases (neutral, pre-sadness, sadness), yielding a total of 72 validated recordings. Observers rated the expressions during each phase on 7 discrete emotions, negative and positive valence, arousal, and genuineness. All data were analyzed by means of general linear mixed modelling (GLMM) to account for sources of random variance. Our results confirm the successful elicitation of sadness, and demonstrate the presence of a tear effect, i.e., a substantial increase in perceived sadness for spontaneous dynamic weeping. To our knowledge, the PDSTD is the first database of spontaneously elicited dynamic tears and sadness that is openly available to researchers. The stimuli can be accessed free of charge via OSF from https://osf.io/uyjeg/?view_only=24474ec8d75949ccb9a8243651db0abf .
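As a concrete illustration of the GLMM-based analysis mentioned above, the sketch below fits a linear mixed model to observer ratings with a random intercept per observer. The column names (sadness_rating, phase, tears, observer) and the data file are hypothetical; the authors' exact model specification may differ.

```python
# Hedged sketch of a mixed-model analysis of observer ratings, in the
# spirit of the GLMM approach described in the abstract. All column
# names and the input file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pdstd_ratings.csv")  # hypothetical long-format table

# Perceived sadness as a function of time phase (neutral, pre-sadness,
# sadness) and tears, with a random intercept per observer. Encoder
# could be added as a variance component via vc_formula.
model = smf.mixedlm(
    "sadness_rating ~ C(phase) * tears",
    data=df,
    groups=df["observer"],
    re_formula="1",
)
print(model.fit().summary())
```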


Subject(s)
Female , Humans
10.
Emotion ; 21(2): 447-451, 2021 Mar.
Article in English | MEDLINE | ID: mdl-31829721

ABSTRACT

The majority of research on the judgment of emotion from facial expressions has focused on deliberately posed displays, often sampled from single stimulus sets. Herein, we investigate emotion recognition from posed and spontaneous expressions, comparing classification performance between humans and machine in a cross-corpora investigation. For this, dynamic facial stimuli portraying the six basic emotions were sampled from a broad range of different databases, and then presented to human observers and a machine classifier. Recognition performance by the machine was found to be superior for posed expressions containing prototypical facial patterns, and comparable to humans when classifying emotions from spontaneous displays. In both humans and machine, accuracy rates were generally higher for posed compared to spontaneous stimuli. The findings suggest that automated systems rely on expression prototypicality for emotion classification and may perform just as well as humans when tested in a cross-corpora context.


Subject(s)
Artificial Intelligence/standards , Behavior Observation Techniques/methods , Emotions/physiology , Facial Expression , Recognition, Psychology/physiology , Adolescent , Adult , Female , Humans , Male , Young Adult
11.
Int J Psychol ; 56(3): 454-465, 2021 Jun.
Article in English | MEDLINE | ID: mdl-32935359

ABSTRACT

According to moral typecasting theory, good- and evil-doers (agents) interact with the recipients of their actions (patients) in a moral dyad. When this dyad is completed, mind attribution towards intentionally harmed liminal minds is enhanced. However, from a dehumanisation view, malevolent actions may instead result in a denial of humanness. To contrast both accounts, a visual vignette experiment (N = 253) depicted either malevolent or benevolent intentions towards robotic or human avatars. Additionally, we examined the role of harm-salience by showing patients as either harmed, or still unharmed. The results revealed significantly increased mind attribution towards visibly harmed patients, mediated by perceived pain and expressed empathy. Benevolent and malevolent intentions were evaluated respectively as morally right or wrong, but their impact on the patient was diminished for the robotic avatar. Contrary to dehumanisation predictions, our manipulation of intentions failed to affect mind perception. Nonetheless, benevolent intentions reduced dehumanisation of the patients. Moreover, when pain and empathy were statistically controlled, the effect of intentions on mind perception was mediated by dehumanisation. These findings suggest that perceived intentions might only be indirectly tied to mind perception, and that their role may be better understood when additionally accounting for empathy and dehumanisation.


Subject(s)
Dehumanization , Robotics/methods , Social Perception/psychology , Adult , Female , Humans , Intention , Male
12.
Behav Res Methods ; 53(2): 686-701, 2021 04.
Article in English | MEDLINE | ID: mdl-32804342

ABSTRACT

With a shift in interest toward dynamic expressions, numerous corpora of dynamic facial stimuli have been developed over the past two decades. The present research aimed to test existing sets of dynamic facial expressions (published between 2000 and 2015) in a cross-corpus validation effort. For this, 14 dynamic databases were selected that featured facial expressions of the six basic emotions (anger, disgust, fear, happiness, sadness, surprise) in posed or spontaneous form. In Study 1, a subset of stimuli from each database (N = 162) was presented to human observers and machine analysis, yielding considerable variance in emotion recognition performance across the databases. Classification accuracy further varied with the perceived intensity and naturalness of the displays, with posed expressions being judged more accurately and as more intense, but less natural, compared to spontaneous ones. Study 2 aimed for a full validation of the 14 databases by subjecting the entire stimulus set (N = 3812) to machine analysis. A FACS-based Action Unit (AU) analysis revealed that facial AU configurations were more prototypical in posed than in spontaneous expressions. The prototypicality of an expression in turn predicted emotion classification accuracy, with higher performance observed for more prototypical facial behavior. Furthermore, technical features of each database (i.e., duration, face box size, head rotation, and motion) had a significant impact on recognition accuracy. Together, the findings suggest that existing databases vary in their ability to signal specific emotions, facing a trade-off between realism and ecological validity on the one hand, and expression uniformity and comparability on the other.
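To make the notion of AU prototypicality concrete, the sketch below scores a stimulus by the overlap between its detected Action Units and a canonical AU configuration for the target emotion. The canonical sets are common EMFACS-style textbook mappings and the Dice score is our assumption; the paper's exact metric may differ.

```python
# Sketch of one way to quantify AU prototypicality: Dice overlap between
# the AUs detected in a stimulus and a canonical AU configuration for
# the target emotion. Canonical sets and metric are our assumptions.
CANONICAL_AUS = {
    "happiness": {6, 12},
    "surprise": {1, 2, 5, 26},
    "fear": {1, 2, 4, 5, 7, 20, 26},
    "anger": {4, 5, 7, 23},
    "disgust": {9, 15, 16},
    "sadness": {1, 4, 15},
}

def prototypicality(detected_aus: set[int], emotion: str) -> float:
    """Dice overlap between detected AUs and the canonical set."""
    canonical = CANONICAL_AUS[emotion]
    overlap = len(detected_aus & canonical)
    return 2 * overlap / (len(detected_aus) + len(canonical))

# A posed 'happiness' display showing exactly AU6+AU12 is fully prototypical:
print(prototypicality({6, 12}, "happiness"))   # 1.0
# A spontaneous display with extra/missing AUs scores lower:
print(prototypicality({12, 25}, "happiness"))  # 0.5
```

Under this kind of scoring, the per-stimulus prototypicality value can then serve as a predictor of classification accuracy, alongside the technical features named in the abstract.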


Subject(s)
Emotions , Facial Expression , Anger , Happiness , Humans , Recognition, Psychology
13.
Cogn Sci ; 44(7): e12872, 2020 07.
Article in English | MEDLINE | ID: mdl-33020966

ABSTRACT

A robot's decision to harm a person is sometimes considered to be the ultimate proof of it gaining a human-like mind. Here, we contrasted predictions about attribution of mental capacities from moral typecasting theory, with the denial of agency from dehumanization literature. Experiments 1 and 2 investigated mind perception for intentionally and accidentally harmful robotic agents based on text and image vignettes. Experiment 3 disambiguated agent intention (malevolent and benevolent), and additionally varied the type of agent (robotic and human) using short computer-generated animations. Harmful robotic agents were consistently imbued with mental states to a lower degree than benevolent agents, supporting the dehumanization account. Further results revealed that a human moral patient appeared to suffer less when depicted with a robotic agent than with another human. The findings suggest that future robots may become subject to human-like dehumanization mechanisms, which challenges the established beliefs about anthropomorphism in the domain of moral interactions.


Subject(s)
Robotics , Dehumanization , Humans , Intention , Morals , Social Perception
14.
Front Neurosci ; 14: 400, 2020.
Article in English | MEDLINE | ID: mdl-32410956

ABSTRACT

The ability to automatically assess emotional responses via contact-free video recording taps into a rapidly growing market aimed at predicting consumer choices. If consumer attention and engagement are measurable in a reliable and accessible manner, relevant marketing decisions could be informed by objective data. Although significant advances have been made in automatic affect recognition, several practical and theoretical issues remain largely unresolved. These concern the lack of cross-system validation, a historical emphasis on posed over spontaneous expressions, and more fundamental issues regarding the weak association between subjective experience and facial expressions. To address these limitations, the present paper argues that extant commercial and free facial expression classifiers should be rigorously validated in cross-system research. Furthermore, academics and practitioners must better leverage fine-grained emotional response dynamics, with a stronger emphasis on understanding naturally occurring spontaneous expressions in naturalistic choice settings. We posit that applied consumer research may be better situated to examine facial behavior in socio-emotional contexts rather than in decontextualized laboratory studies, and we highlight how AHAA can be successfully employed in this context. Facial activity should also be considered less as a single outcome variable and more as a starting point for further analyses. Implications of this approach, and potential obstacles that need to be overcome, are discussed within the context of consumer research.

15.
PLoS One ; 15(4): e0231968, 2020.
Article in English | MEDLINE | ID: mdl-32330178

ABSTRACT

In the wake of rapid advances in automatic affect analysis, commercial automatic classifiers for facial affect recognition have attracted considerable attention in recent years. While several options now exist to analyze dynamic video data, less is known about the relative performance of these classifiers, in particular when facial expressions are spontaneous rather than posed. In the present work, we tested eight out-of-the-box automatic classifiers, and compared their emotion recognition performance to that of human observers. A total of 937 videos were sampled from two large databases that conveyed the basic six emotions (happiness, sadness, anger, fear, surprise, and disgust) either in posed (BU-4DFE) or spontaneous (UT-Dallas) form. Results revealed a recognition advantage for human observers over automatic classification. Among the eight classifiers, there was considerable variance in recognition accuracy ranging from 48% to 62%. Subsequent analyses per type of expression revealed that performance by the two best performing classifiers approximated those of human observers, suggesting high agreement for posed expressions. However, classification accuracy was consistently lower (although above chance level) for spontaneous affective behavior. The findings indicate potential shortcomings of existing out-of-the-box classifiers for measuring emotions, and highlight the need for more spontaneous facial databases that can act as a benchmark in the training and testing of automatic emotion recognition systems. We further discuss some limitations of analyzing facial expressions that have been recorded in controlled environments.
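The comparison logic described above reduces to computing per-classifier recognition accuracy against ground-truth labels, split by stimulus type. The sketch below shows one minimal way to do this; the long-format table and its column names are hypothetical, not the study's actual files.

```python
# Minimal sketch of the benchmarking logic: accuracy per classifier,
# split by posed vs. spontaneous stimuli. Data layout is hypothetical.
import pandas as pd

preds = pd.read_csv("classifier_predictions.csv")
# assumed columns: video_id, source ("posed"/"spontaneous"),
#                  true_emotion, classifier, predicted_emotion

preds["correct"] = preds["predicted_emotion"] == preds["true_emotion"]
accuracy = (
    preds.groupby(["classifier", "source"])["correct"]
    .mean()
    .unstack("source")
    .sort_values("posed", ascending=False)
)
print(accuracy)  # one row per classifier: accuracy for posed vs. spontaneous
```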


Subject(s)
Affect , Facial Expression , Recognition, Psychology , Adult , Automation , Female , Humans , Male
16.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 3111-3114, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946546

ABSTRACT

Millions of individuals suffer from impairments that significantly disrupt or completely eliminate their ability to speak. An ideal intervention would restore one's natural ability to physically produce speech. Recent progress has been made in decoding speech-related brain activity to generate synthesized speech. Our vision is to extend these advances toward the goal of restoring physical speech production, using decoded speech-related brain activity to modulate the electrical stimulation of the orofacial musculature involved in speech. In this pilot study, we take a step toward this vision by investigating the feasibility of stimulating orofacial muscles during vocalization in order to alter acoustic production. The results of our study provide a necessary foundation for eventual orofacial stimulation controlled directly by decoded speech-related brain activity.


Subject(s)
Electric Stimulation , Facial Muscles/physiology , Movement , Speech , Brain/physiology , Humans , Pilot Projects
17.
Perception ; 47(12): 1139-1152, 2018 12.
Article in English | MEDLINE | ID: mdl-30411653

ABSTRACT

Previous research has shown that when people read vignettes about the infliction of harm upon an entity appearing to have no more than a liminal mind, their attributions of mind to that entity increased. Here, we investigated whether the presence of a facial wound enhanced the perception of mental capacities (experience and agency) in response to images of robotic and human-like avatars, compared with unharmed avatars. The results revealed that harmed versions of both robotic and human-like avatars were imbued with mind to a higher degree, irrespective of the baseline level of mind attributed to their unharmed counterparts. Perceptions of the capacity for pain mediated attributions of experience, while both pain and empathy mediated attributions of abilities linked to agency. The findings suggest that harm, even when it appears to have been inflicted unintentionally, may augment mind perception for robotic as well as for nearly human entities, at least as long as it is perceived to elicit pain.


Subject(s)
Empathy , Facial Recognition , Pain/psychology , Theory of Mind , Wounds and Injuries/psychology , Female , Humans , Intention , Interpersonal Relations , Male , Morale , Photic Stimulation/methods , Robotics , Signal Detection, Psychological , Young Adult
18.
Evol Psychol ; 16(1): 1474704918761104, 2018.
Article in English | MEDLINE | ID: mdl-29529867

ABSTRACT

Small pupils elicit empathic socioemotional responses comparable to those found for emotional tears. This might be understood in an evolutionary context. Intense emotional tearing increases tear film volume and disturbs tear layer uniformity, resulting in blurry vision. A constriction of the pupils may help to mitigate this handicap, which in turn may have resulted in a perceptual association of both signals. However, direct empirical evidence for a role of pupil size in tearful emotional crying is still lacking. The present study examined socioemotional responses to different pupil sizes, combined with the presence (absence) of digitally added tears superimposed upon expressively neutral faces. Data from 50 subjects showed significant effects of observing digitally added tears in avatars, replicating previous findings for increased perceived sadness elicited by tearful photographs. No significant interactions were found between tears and pupil size. However, small pupils likewise elicited a significantly greater wish to help in observers. Further analysis showed a significant serial mediation of the effects of tears on perceived wish to help via perceived and then felt sadness. For pupil size, only felt sadness emerged as a significant mediator of the wish to help. These findings support the notion that pupil constriction in the context of intense sadness may function to counteract blurry vision. Pupil size, like emotional tears, appears to have acquired value as a social signal in this context.


Subject(s)
Crying/psychology , Emotions/physiology , Empathy/physiology , Facial Expression , Miosis/psychology , Tears , Adolescent , Female , Humans , Male , Photic Stimulation , Young Adult
19.
R Soc Open Sci ; 3(8): 160059, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27853586

ABSTRACT

We study the changes in emotional states induced by reading and participating in online discussions, empirically testing a computational model of online emotional interaction. Using principles of dynamical systems, we quantify changes in valence and arousal through subjective reports, as recorded in three independent studies including 207 participants (110 female). In the context of online discussions, the dynamics of valence and arousal are composed of two forces: an internal relaxation towards baseline values, independent of the emotional charge of the discussion, and a driving force on emotional states that depends on the content of the discussion. The dynamics of valence show the existence of positive and negative tendencies, while arousal increases when reading emotional content regardless of its polarity. Participants' tendency to take part in the discussion increases with positive arousal. When participating in an online discussion, the content of participants' expression depends on their valence, and their arousal significantly decreases afterwards as a regulation mechanism. We illustrate how these results allow the design of agent-based models to reproduce and analyse emotions in online communities. Our work empirically validates the microdynamics of a model of online collective emotions, bridging online data analysis with research in the laboratory.
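The two-force structure described above can be simulated directly with a simple Euler scheme: each affective variable relaxes toward a baseline while discussion content drives it, and arousal responds to the magnitude of the content's charge rather than its sign. All parameter values and the content-input process in the sketch below are our assumptions for illustration, not the paper's fitted model.

```python
# Illustrative simulation of relaxation-plus-drive affect dynamics.
# Parameters and the random content process are assumptions.
import numpy as np

def simulate(steps=200, dt=0.1,
             gamma_v=0.5, gamma_a=0.9,   # relaxation rates
             b_v=0.1, b_a=-0.3):         # baseline valence / arousal
    v, a = b_v, b_a
    rng = np.random.default_rng(0)
    trace = []
    for _ in range(steps):
        # Emotional charge of the content read at this step
        # (random bursts here; discussion-driven in the model).
        content = rng.choice([-1.0, 0.0, 1.0], p=[0.2, 0.6, 0.2])
        dv = -gamma_v * (v - b_v) + 0.8 * content       # signed drive
        da = -gamma_a * (a - b_a) + 0.8 * abs(content)  # arousal rises
        v, a = v + dt * dv, a + dt * da                 # for any polarity
        trace.append((v, a))
    return np.array(trace)

trace = simulate()
print(trace[-1])  # final (valence, arousal)
```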

20.
Brain Res ; 1120(1): 141-50, 2006 Nov 20.
Article in English | MEDLINE | ID: mdl-17010951

ABSTRACT

The aim of the present study was to investigate whether central nervous odor processing is affected by the temporary experience of helplessness. To induce helplessness, an unsolvable social discrimination test in combination with false feedback was used. The EEG was recorded from 60 scalp locations, while two standard odors were presented via a constant-flow olfactometer. Helplessness attenuated olfactory stimulus processing at an early perceptual stage: the P2 and P3-1 amplitudes were reduced in response to both odors. Furthermore, the early potentials (N1, P2 and P3-1) of the chemosensory event-related potential (CSERP) appeared with longer latencies when subjects received negative feedback. The state effects of helplessness resemble the deviations in the CSERP found in depressed patients, suggesting a general mood effect.


Subject(s)
Chemoreceptor Cells/physiology , Emotions/physiology , Odorants , Olfaction Disorders/physiopathology , Smell/physiology , Adolescent , Adult , Analysis of Variance , Brain Mapping , Discrimination, Psychological/physiology , Electroencephalography , Evoked Potentials, Auditory/physiology , Female , Humans , Reaction Time , Surveys and Questionnaires