1.
Neuroimage ; 204: 116216, 2020 01 01.
Article in English | MEDLINE | ID: mdl-31553928

ABSTRACT

Computer-generated (CG) faces are an important visual interface for human-computer interaction in social contexts. Here we investigated whether the human brain processes emotion and gaze similarly in real and carefully matched CG faces. Real faces evoked greater responses in the fusiform face area than CG faces, particularly for fearful expressions. Emotional (angry and fearful) facial expressions evoked similar activations in the amygdala in real and CG faces. Direct as compared with averted gaze elicited greater fMRI responses in the amygdala regardless of facial expression but only for real and not for CG faces. We observed an interaction effect between gaze and emotion (i.e., the shared signal effect) in the right posterior temporal sulcus and other regions, but not in the amygdala, and we found no evidence for different shared signal effects in real and CG faces. Taken together, the present findings highlight similarities (emotional processing in the amygdala) and differences (overall processing in the fusiform face area, gaze processing in the amygdala) in the neural processing of real and CG faces.


Subject(s)
Amygdala/physiology , Brain Mapping , Emotions/physiology , Facial Expression , Facial Recognition/physiology , Fixation, Ocular/physiology , Temporal Lobe/physiology , Adult , Amygdala/diagnostic imaging , Data Display , Female , Humans , Magnetic Resonance Imaging , Male , Temporal Lobe/diagnostic imaging , Young Adult
3.
Front Psychol ; 9: 1362, 2018.
Article in English | MEDLINE | ID: mdl-30123166

ABSTRACT

Virtual as compared with real human characters can elicit a sense of uneasiness in human observers, characterized by lack of familiarity and even feelings of eeriness (the "uncanny valley" hypothesis). Here we test the possibility that this alleged lack of familiarity is literal in the sense that people have lesser perceptual expertise in processing virtual as compared with real human faces. Sixty-four participants took part in a recognition memory study in which they first learned a set of faces and were then asked to recognize them in a testing session. We used real and virtual (computer-rendered) versions of the same faces, presented in either upright or inverted orientation. Real and virtual faces were matched for low-level visual features such as global luminosity and spatial frequency contents. Our results demonstrated a higher response bias toward responding "seen before" for virtual as compared with real faces, which was further explained by a higher false alarm rate for the former. This finding resembles a similar effect for recognizing human faces from other than one's own ethnic groups (the "other race effect"). Virtual faces received clearly higher subjective eeriness ratings than real faces. Our results did not provide evidence of poorer overall recognition memory or lesser inversion effect for virtual faces, however. The higher false alarm rate finding supports the notion that lesser perceptual expertise may contribute to the lack of subjective familiarity with virtual faces. We discuss alternative interpretations and provide suggestions for future research.

4.
Br J Psychol ; 109(3): 421-426, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29806694

ABSTRACT

Virtual reality (VR) promises methodological rigour with the extra benefit of allowing us to study the context-dependent behaviour of individuals in their natural environment. Pan and Hamilton (2018, Br. J. Psychol.) provide a useful overview of methodological recommendations for using VR. Here, we highlight some other aspects of the use of VR. Our first argument is that VR can be useful by virtue of its differences from the normal perceptual environment. That is, by virtue of its relative non-realism and poverty of its perceptual elements, it can actually offer increased clarity with respect to the features of interest for the researcher. Our second argument is that VR exerts its measurable influence more by eliciting an acceptance of the virtual world (i.e., 'suspension of disbelief') rather than by eliciting a true belief of the realism of the VR environment. We conclude by providing a novel suggestion for combining neuroimaging methods with embodied VR that relies on the suspension of disbelief.


Subject(s)
Psychophysics/methods , Virtual Reality , Humans , Psychophysics/trends
5.
J Vis ; 16(9): 5, 2016 07 01.
Article in English | MEDLINE | ID: mdl-27442954

ABSTRACT

Gaze perception has received considerable research attention due to its importance in social interaction. The majority of recent studies have utilized monoscopic pictorial gaze stimuli. However, a monoscopic direct gaze differs from a live or stereoscopic gaze. In the monoscopic condition, both eyes of the observer receive a direct gaze, whereas in live and stereoscopic conditions, only one eye receives a direct gaze. In the present study, we examined the implications of the difference between monoscopic and stereoscopic direct gaze. Moreover, because research has shown that stereoscopy affects the emotions elicited by facial expressions, and facial expressions affect the range of directions where an observer perceives mutual gaze (the "cone of gaze"), we studied the interaction effect of stereoscopy and facial expressions on gaze perception. Forty observers viewed stereoscopic images wherein one eye of the observer received a direct gaze while the other eye received a horizontally averted gaze at five different angles corresponding to five interaxial distances between the cameras in stimulus acquisition. In addition to monoscopic and stereoscopic conditions, the stimuli included neutral, angry, and happy facial expressions. The observers judged the gaze direction and mutual gaze of four lookers. Our results show that the mean of the directions received by the left and right eyes approximated the perceived gaze direction in the stereoscopic semidirect gaze condition. The probability of perceiving mutual gaze in the stereoscopic condition was substantially lower compared with monoscopic direct gaze. Furthermore, stereoscopic semidirect gaze significantly widened the cone of gaze for happy facial expressions.


Subject(s)
Attention/physiology , Emotions/physiology , Facial Expression , Visual Perception/physiology , Adult , Anger , Female , Happiness , Humans , Male , Social Perception
6.
PLoS One ; 11(5): e0153712, 2016.
Article in English | MEDLINE | ID: mdl-27144385

ABSTRACT

Television viewers' attention is increasingly divided between television and "second screens", for example when viewing television broadcasts while following the related social media discussion on a tablet computer. The attentional costs of such multitasking may vary depending on the ebb and flow of the social media channel, such as its emotional contents. In the present study, we tested the hypothesis that negative social media messages would draw more attention than similar positive messages. Specifically, news broadcasts were presented in isolation and with simultaneous positive or negative Twitter messages on a tablet to 38 participants in a controlled experiment. Recognition memory, gaze tracking, cardiac responses, and self-reports were used as attentional indices. The presence of any tweets on the tablet decreased attention to the news broadcasts. As expected, negative tweets drew longer viewing times and elicited more attention to themselves than positive tweets. Negative tweets did not, however, decrease attention to the news broadcasts. Taken together, the present results demonstrate that a negativity bias exists for social media messages in media multitasking; however, this effect does not amplify the overall detrimental effects of media multitasking.


Subject(s)
Attention/physiology , Social Media , Adult , Bias , Female , Humans , Male , Mass Media , Television
7.
Front Psychol ; 7: 105, 2016.
Article in English | MEDLINE | ID: mdl-26903913

ABSTRACT

We investigated how technologically mediating two different components of emotion (communicative expression and physiological state) to group members affects physiological linkage and self-reported feelings in a small group during video viewing. Across conditions, we varied the availability of second-screen text chat (communicative expression) and of a visualization of group-level heart rates and their dyadic linkage (physiological state). Within each four-person group, two participants formed a physically co-located dyad and the other two were individually situated in two separate rooms. We found that text chat always increased heart rate synchrony, whereas the heart rate visualization did so only for non-co-located dyads. We also found that physiological linkage was strongly connected to self-reported social presence. The results encourage further exploration of sharing group members' physiological components of emotion by technological means to enhance mediated communication and strengthen social presence.

8.
Front Psychol ; 6: 390, 2015.
Article in English | MEDLINE | ID: mdl-25914661

ABSTRACT

The uncanny valley hypothesis, first proposed in the 1970s, suggests that almost but not fully humanlike artificial characters will trigger a profound sense of unease. This hypothesis has become widely acknowledged both in the popular media and in scientific research. Surprisingly, empirical evidence for the hypothesis has remained inconsistent. In the present article, we reinterpret the original uncanny valley hypothesis and review empirical evidence for different theoretically motivated uncanny valley hypotheses. The uncanny valley could be understood as the naïve claim that any kind of human-likeness manipulation will lead to experienced negative affinity at close-to-realistic levels. More recent hypotheses have suggested that the uncanny valley would be caused by artificial-human categorization difficulty or by a perceptual mismatch between artificial and human features. The original formulation also suggested that movement would modulate the uncanny valley. The reviewed empirical literature failed to provide consistent support for the naïve uncanny valley hypothesis or the modulatory effects of movement. Results on the categorization difficulty hypothesis were still too scarce to allow drawing firm conclusions. In contrast, good support was found for the perceptual mismatch hypothesis. Taken together, the present review findings suggest that the uncanny valley exists only under specific conditions. More research is still needed to pinpoint the exact conditions under which the uncanny valley phenomenon manifests itself.

9.
Iperception ; 6(6): 2041669515615071, 2015 Dec.
Article in English | MEDLINE | ID: mdl-27551358

ABSTRACT

Mediated facial expressions do not elicit emotions as strongly as real-life facial expressions, possibly due to the low fidelity of pictorial presentations in typical mediation technologies. In the present study, we investigated the extent to which stereoscopy amplifies emotions elicited by images of neutral, angry, and happy facial expressions. The emotional self-reports of positive and negative valence (which were evaluated separately) and arousal of 40 participants were recorded. The magnitude of perceived depth in the stereoscopic images was manipulated by varying the camera base at 15, 40, 65, 90, and 115 mm. The analyses controlled for participants' gender, gender match, emotional empathy, and trait alexithymia. The results indicated that stereoscopy significantly amplified the negative valence and arousal elicited by angry expressions at the most natural (65 mm) camera base, whereas stereoscopy amplified the positive valence elicited by happy expressions in both the narrowed and most natural (15-65 mm) base conditions. Overall, the results indicate that stereoscopy amplifies the emotions elicited by mediated emotional facial expressions when the depth geometry is close to natural. The findings highlight the sensitivity of the visual system to depth and its effect on emotions.

10.
PLoS One ; 9(7): e100318, 2014.
Article in English | MEDLINE | ID: mdl-24983952

ABSTRACT

Previous research indicates that males prefer competition over cooperation, and it is sometimes suggested that females show the opposite behavioral preference. In the present article, we investigate the emotions behind the preferences: Do males exhibit more positive emotions during competitive than cooperative activities, and do females show the opposite pattern? We conducted two experiments where we assessed the emotional responses of same-gender dyads (in total 130 participants, 50 female) during intrinsically motivating competitive and cooperative digital game play using facial electromyography (EMG), skin conductance, heart rate measures, and self-reported emotional experiences. We found higher positive emotional responses (as indexed by both physiological measures and self-reports) during competitive than cooperative play for males, but no differences for females. In addition, we found no differences in negative emotions, and heart rate, skin conductance, and self-reports yielded contradictory evidence for arousal. These results support the hypothesis that males not only prefer competitive over cooperative play, but also exhibit more positive emotional responses during competitive play. In contrast, the results suggest that the emotional experiences of females do not differ between cooperation and competition, which implies that less competitiveness does not mean more cooperativeness. Our results pertain to intrinsically motivated game play, but may also be relevant for other kinds of activities.


Subject(s)
Competitive Behavior , Cooperative Behavior , Emotions , Gender Identity , Adult , Electromyography , Face , Female , Game Theory , Heart Rate/physiology , Humans , Male , Self Report , Sex Characteristics , Sex Factors , Social Behavior
11.
Front Hum Neurosci ; 7: 278, 2013.
Article in English | MEDLINE | ID: mdl-23781195

ABSTRACT

Although the multimodal stimulation provided by modern audiovisual video games is pleasing by itself, the rewarding nature of video game playing depends critically also on the players' active engagement in the gameplay. The extent to which active engagement influences dopaminergic brain reward circuit responses remains unsettled. Here we show that striatal reward circuit responses elicited by successes (wins) and failures (losses) in a video game are stronger during active than vicarious gameplay. Eleven healthy males both played a competitive first-person tank shooter game (active playing) and watched a pre-recorded gameplay video (vicarious playing) while their hemodynamic brain activation was measured with 3-tesla functional magnetic resonance imaging (fMRI). Wins and losses were paired with symmetrical monetary rewards and punishments during active and vicarious playing so that the external reward context remained identical during both conditions. Brain activation was stronger in the orbitomedial prefrontal cortex (omPFC) during winning than losing, both during active and vicarious playing. In contrast, both wins and losses suppressed activations in the midbrain and striatum during active playing; however, the striatal suppression, particularly in the anterior putamen, was more pronounced during loss than win events. Sensorimotor confounds related to joystick movements did not account for the results. Self-ratings indicated losing to be more unpleasant during active than vicarious playing. Our findings demonstrate striatum to be selectively sensitive to self-acquired rewards, in contrast to frontal components of the reward circuit that process both self-acquired and passively received rewards. We propose that the striatal responses to repeated acquisition of rewards that are contingent on game related successes contribute to the motivational pull of video-game playing.

12.
Cereb Cortex ; 23(12): 2829-39, 2013 Dec.
Article in English | MEDLINE | ID: mdl-22952277

ABSTRACT

Winning against an opponent in a competitive video game can be expected to be more rewarding than losing, especially when the opponent is a fellow human player rather than a computer. We show that winning versus losing in a first-person video game activates the brain's reward circuit and the ventromedial prefrontal cortex (vmPFC) differently depending on the type of the opponent. Participants played a competitive tank shooter game against alleged human and computer opponents while their brain activity was measured with functional magnetic resonance imaging. Brain responses to wins and losses were contrasted by fitting an event-related model to the hemodynamic data. Stronger activation to winning was observed in ventral and dorsal striatum as well as in vmPFC. Activation in ventral striatum was associated with participants' self-ratings of pleasure. During winning, ventral striatum showed stronger functional coupling with right insula, and weaker coupling with dorsal striatum, sensorimotor pre- and postcentral gyri, and visual association cortices. The vmPFC and dorsal striatum responses were stronger to winning when the subject was playing against a human rather than a computer. These results highlight the importance of social context in the neural encoding of reward value.


Subject(s)
Basal Ganglia/physiology , Competitive Behavior , Prefrontal Cortex/physiology , Reward , Video Games , Adult , Brain Mapping , Corpus Striatum/physiology , Humans , Magnetic Resonance Imaging , Male , Nerve Net/physiology , Young Adult
13.
J Autism Dev Disord ; 42(6): 1011-24, 2012 Jun.
Article in English | MEDLINE | ID: mdl-21822763

ABSTRACT

fMRI was performed with dynamic facial expressions of fear and happiness to detect differences in valence processing between 25 subjects with autism spectrum disorders (ASDs) and 27 typically developing controls. Valence scaling was abnormal in ASDs: positive valence induced lower deactivation and abnormally strong activity in multiple regions, whereas negative valence increased deactivation in visual areas. The most marked differences between valences were focused on fronto-insular and temporal regions. These findings support the idea that subjects with ASDs may have difficulty in the passive processing of the salience and mirroring of expressions: when the valence scaling of brain activity fails, these areas, in contrast to controls, activate and/or deactivate inappropriately during dynamically presented facial stimuli.


Subject(s)
Brain/physiopathology , Child Development Disorders, Pervasive/physiopathology , Emotions/physiology , Facial Expression , Recognition, Psychology/physiology , Adolescent , Brain Mapping , Child , Child Development Disorders, Pervasive/psychology , Female , Humans , Magnetic Resonance Imaging , Male , Severity of Illness Index , Surveys and Questionnaires , Visual Perception/physiology
14.
Hum Brain Mapp ; 33(10): 2295-305, 2012 Oct.
Article in English | MEDLINE | ID: mdl-21826759

ABSTRACT

Perceived emotional valence of sensory stimuli influences their processing in various cortical and subcortical structures. Recent evidence suggests that negative and positive valences are processed separately, not along a single linear continuum. Here, we examined how the brain is activated when subjects listen to auditory stimuli varying parametrically in perceived valence (very unpleasant-neutral-very pleasant). Seventeen healthy volunteers were scanned at 3 Tesla while listening to International Affective Digital Sounds (IADS-2) in a block design paradigm. We found a strong quadratic U-shaped relationship between valence and blood oxygen level dependent (BOLD) signal strength in the medial prefrontal cortex, auditory cortex, and amygdala. Signals were the weakest for neutral stimuli and increased progressively for more unpleasant or pleasant stimuli. The results strengthen the view that valence is a crucial factor in neural processing of emotions. An alternative explanation is salience, which increases with both negative and positive valences.


Subject(s)
Auditory Perception/physiology , Brain Mapping , Brain/physiology , Emotions/physiology , Adult , Female , Humans , Image Interpretation, Computer-Assisted , Magnetic Resonance Imaging , Male , Young Adult
15.
J Autism Dev Disord ; 42(8): 1606-15, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22068821

ABSTRACT

Audiovisual speech perception was studied in adults with Asperger syndrome (AS) by utilizing the McGurk effect, in which conflicting visual articulation alters the perception of heard speech. The AS group perceived the audiovisual stimuli differently from age-, sex-, and IQ-matched controls. When a voice saying /p/ was presented with a face articulating /k/, the controls predominantly heard /k/. Instead, the AS group heard /k/ and /t/ with almost equal frequency, but with large differences between individuals. There were no differences in gaze direction or unisensory perception between the AS and control participants that could have contributed to the audiovisual differences. We suggest an explanation in terms of weak support from the motor system for audiovisual speech perception in AS.


Subject(s)
Asperger Syndrome/physiopathology , Eye Movements/physiology , Speech Perception/physiology , Acoustic Stimulation , Adult , Female , Humans , Male , Middle Aged , Photic Stimulation , Visual Perception/physiology
16.
Exp Brain Res ; 213(2-3): 283-90, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21660467

ABSTRACT

Individuals with Asperger syndrome (AS) have problems in following conversation, especially in situations where several people are talking. This might result from impairments in audiovisual speech perception, especially from difficulties in focusing attention on speech-relevant visual information and ignoring distracting information. We studied the effect of visual spatial attention on the audiovisual speech perception of adult individuals with AS and matched control participants. Two faces were presented side by side, one uttering /aka/ and the other /ata/, while an auditory stimulus of /apa/ was played. The participants fixated on a central cross and directed their attention to the face that an arrow pointed to, reporting which consonant they heard. We hypothesized that the adults with AS would be more distracted by a competing talking face than the controls. Instead, they were able to covertly attend to the talking face, and they were as distracted by a competing face as the controls. Independently of the attentional effect, there was a qualitative difference in audiovisual speech perception: when the visual articulation was /aka/, the control participants heard /aka/ almost exclusively, while the participants with AS frequently heard /ata/. This finding may relate to difficulties in face-to-face communication in AS.


Subject(s)
Asperger Syndrome/physiopathology , Attention/physiology , Speech Perception/physiology , Visual Perception/physiology , Acoustic Stimulation/methods , Adult , Analysis of Variance , Female , Humans , Male , Middle Aged , Photic Stimulation/methods , Reaction Time , Young Adult
17.
Brain Imaging Behav ; 4(2): 164-76, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20502991

ABSTRACT

This paper assessed the neural systems involved in the processing of dynamic facial expressions in adolescents. The processing of facial expressions changes as a function of age, and it is thus important to understand how healthy adolescent subjects process dynamic facial expressions prior to analyzing disease-related changes. We hypothesized that viewing dynamic facial expressions with opposing valences (happy vs. fearful) induces differential activations and deactivations in the brain. Twenty-seven healthy adolescents (9 female, 18 male; mean age = 14.5 years, range 11.6-17.3 years) were examined using the ASSQ and K-SADS-PL and scanned with 1.5-T fMRI while viewing dynamic facial expressions and mosaic control images. The stimuli activated the same areas as previously reported for dynamic facial expressions in adults. Our results indicated that opposing-valence dynamic facial expressions had differential effects on many cortical structures but not on subcortical limbic structures. The mirror neuron system was activated more during viewing of fearful compared with happy expressions in the bilateral inferior frontal gyrus (IFG) and superior temporal sulcus (STS), with left dominance. We also detected more deactivation in the ventral anterior cingulate gyrus (ACG), suggesting more automatic attentional processing of fearful expressions during passive viewing. Females deactivated the right frontal pole more than males during happy facial expressions, whereas there were no gender differences in fear processing; otherwise, no clear gender or age effects were detected. In conclusion, fearful expressions induced stronger responses in attentional and mirror neuron systems, probably related to fear contagion.


Subject(s)
Brain/physiology , Facial Expression , Fear , Happiness , Social Perception , Visual Perception/physiology , Adolescent , Aging , Brain/growth & development , Brain Mapping , Child , Emotions , Face , Female , Humans , Magnetic Resonance Imaging , Male , Neuropsychological Tests , Photic Stimulation , Sex Characteristics
18.
Neuropsychologia ; 46(7): 1888-97, 2008.
Article in English | MEDLINE | ID: mdl-18314147

ABSTRACT

The theory of 'weak central coherence' [Happe, F., & Frith, U. (2006). The weak coherence account: Detail-focused cognitive style in autism spectrum disorders. Journal of Autism and Developmental Disorders, 36(1), 5-25] implies that persons with autism spectrum disorders (ASDs) have a perceptual bias for local but not for global stimulus features. The recognition of emotional facial expressions representing various different levels of detail has not been studied previously in ASDs. We analyzed the recognition of four basic emotional facial expressions (anger, disgust, fear and happiness) from low-spatial frequencies (overall global shapes without local features) in adults with an ASD. A group of 20 participants with Asperger syndrome (AS) was compared to a group of non-autistic age- and sex-matched controls. Emotion recognition was tested from static and dynamic facial expressions whose spatial frequency contents had been manipulated by low-pass filtering at two levels. The two groups recognized emotions similarly from non-filtered faces and from dynamic vs. static facial expressions. In contrast, the participants with AS were less accurate than controls in recognizing facial emotions from very low-spatial frequencies. The results suggest intact recognition of basic facial emotions and dynamic facial information, but impaired visual processing of global features in ASDs.


Subject(s)
Asperger Syndrome/diagnosis , Emotions , Facial Expression , Recognition, Psychology , Adolescent , Adult , Affective Symptoms/diagnosis , Aged , Attention , Control Groups , Female , Humans , Intelligence Tests , Judgment , Male , Middle Aged , Neuropsychological Tests/statistics & numerical data , Pattern Recognition, Visual , Perceptual Distortion , Perceptual Masking , Photic Stimulation/methods , Prosopagnosia/diagnosis , Space Perception , Visual Perception