Results 1 - 20 of 39
1.
Autism Res ; 17(5): 1041-1052, 2024 May.
Article in English | MEDLINE | ID: mdl-38661256

ABSTRACT

Research has shown that children on the autism spectrum and adults with high levels of autistic traits are less sensitive to audiovisual asynchrony compared to their neurotypical peers. However, this evidence has been limited to simultaneity judgments (SJ), which require participants to consider the timing of two cues together. Given evidence of partly divergent perceptual and neural mechanisms involved in making temporal order judgments (TOJ) and SJ, and given that SJ require a more global type of processing which may be impaired in autistic individuals, here we ask whether the observed differences in audiovisual temporal processing are task- and stimulus-specific. We examined the ability to detect audiovisual asynchrony in a group of 26 autistic adult males and a group of age- and IQ-matched neurotypical males. Participants were presented with beep-flash, point-light drumming, and face-voice displays with varying degrees of asynchrony and asked to make SJ and TOJ. The results indicated that autistic participants were less able to detect audiovisual asynchrony compared to the control group, but this effect was specific to SJ and to more complex social stimuli (e.g., face-voice) with stronger semantic correspondence between the cues, which require a more global type of processing. This indicates that audiovisual temporal processing is not generally different in autistic individuals and that a similar level of performance could be achieved by using a more local type of processing, thus informing multisensory integration theory as well as multisensory training aimed at aiding perceptual abilities in this population.


Subject(s)
Auditory Perception, Autistic Disorder, Judgment, Visual Perception, Humans, Male, Judgment/physiology, Adult, Visual Perception/physiology, Auditory Perception/physiology, Young Adult, Autistic Disorder/physiopathology, Photic Stimulation/methods, Cues, Acoustic Stimulation/methods, Time Perception/physiology, Adolescent
2.
Virtual Real ; 27(3): 2043-2057, 2023.
Article in English | MEDLINE | ID: mdl-37614716

ABSTRACT

Research has shown that high trait anxiety can alter multisensory processing of threat cues (by amplifying integration of angry faces and voices); however, it remains unknown whether differences in multisensory processing play a role in the psychological response to trauma. This study examined the relationship between multisensory emotion processing and intrusive memories over seven days following exposure to an analogue trauma in a sample of 55 healthy young adults. We used an adapted version of the trauma film paradigm, where scenes showing a car accident trauma were presented using virtual reality, rather than a conventional 2D film. Multisensory processing was assessed prior to the trauma simulation using a forced choice emotion recognition paradigm with happy, sad and angry voice-only, face-only, audiovisual congruent (face and voice expressed matching emotions) and audiovisual incongruent expressions (face and voice expressed different emotions). We found that increased accuracy in recognising anger (but not happiness and sadness) in the audiovisual condition relative to the voice- and face-only conditions was associated with more intrusions following VR trauma. Despite previous results linking trait anxiety and intrusion development, no significant influence of trait anxiety on intrusion frequency was observed. Enhanced integration of threat-related information (i.e. angry faces and voices) could lead to overly threatening appraisals of stressful life events and result in greater intrusion development after trauma. Supplementary Information: The online version contains supplementary material available at 10.1007/s10055-023-00784-1.

3.
Virtual Real ; : 1-17, 2023 Mar 24.
Article in English | MEDLINE | ID: mdl-37360806

ABSTRACT

Attention is the ability to actively process specific information within one's environment over longer periods of time while disregarding other details. Attention is an important process that contributes to overall cognitive performance, from performing basic everyday tasks to complex work activities. The use of virtual reality (VR) allows the study of attention processes in realistic environments using ecological tasks. To date, research has focused on the efficacy of VR attention tasks in detecting attention impairment, while the impact of the combination of variables such as mental workload, presence and simulator sickness on both self-reported usability and objective attention task performance in immersive VR has not been examined. The current study tested 87 participants on an attention task in a virtual aquarium using a cross-sectional design. The VR task followed the continuous performance test paradigm, where participants had to respond to correct targets and ignore non-targets over 18 min. Performance was measured using three outcomes: omission errors (failing to respond to correct targets), commission errors (responding to non-targets) and reaction time to correct targets. Measures of self-reported usability, mental workload, presence and simulator sickness were collected. The results showed that only presence and simulator sickness had a significant impact on usability. For performance outcomes, simulator sickness was significantly but weakly associated with omission errors, and not with reaction time or commission errors. Mental workload and presence did not significantly predict performance. Our results suggest that simulator sickness and lack of presence are more likely to negatively impact usability than attention performance, and that usability and attention performance are linked. They highlight the importance of considering factors such as presence and simulator sickness in attention tasks, as these variables can impact usability. Supplementary Information: The online version contains supplementary material available at 10.1007/s10055-023-00782-3.
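For readers unfamiliar with how the three performance outcomes above are typically scored, a minimal sketch follows. It is purely illustrative and not the study's analysis code; the trial fields, data structure and function name are assumptions.

```python
# Illustrative scoring of a continuous performance test (CPT).
# Trial fields ('is_target', 'responded', 'rt') are hypothetical.
from statistics import mean

def score_cpt(trials):
    """Return omission rate, commission rate and mean hit reaction time."""
    targets = [t for t in trials if t['is_target']]
    nontargets = [t for t in trials if not t['is_target']]

    # Omission error: failing to respond to a target.
    omission_rate = sum(not t['responded'] for t in targets) / len(targets)
    # Commission error: responding to a non-target.
    commission_rate = sum(t['responded'] for t in nontargets) / len(nontargets)
    # Reaction time: averaged over correct responses to targets only.
    hit_rts = [t['rt'] for t in targets if t['responded'] and t['rt'] is not None]
    mean_rt = mean(hit_rts) if hit_rts else float('nan')

    return {'omission': omission_rate, 'commission': commission_rate, 'mean_rt': mean_rt}

# Example: two targets (one missed) and one non-target (false alarm).
print(score_cpt([
    {'is_target': True,  'responded': True,  'rt': 0.42},
    {'is_target': True,  'responded': False, 'rt': None},
    {'is_target': False, 'responded': True,  'rt': 0.51},
]))
```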

4.
PLoS One ; 18(3): e0280390, 2023.
Article in English | MEDLINE | ID: mdl-36928040

ABSTRACT

Users' emotions may influence the formation of presence in virtual reality (VR). Users' expectations, state of arousal and personality may also moderate the relationship between emotions and presence. An interoceptive predictive coding model of conscious presence (IPCM) considers presence as a product of the match between predictions of interoceptive emotional states and the actual states evoked by an experience (Seth et al. 2012). The present paper aims to test this model's applicability to VR for both high-arousal and low-arousal emotions. The moderating effect of personality traits on the creation of presence is also investigated. Results show that user expectations about emotional states in VR have an impact on presence; however, this relationship is moderated by the intensity of an emotion, with only high-arousal emotions showing an effect. Additionally, users' personality traits moderated the relationship between emotions and presence. A refined model is proposed that predicts presence in VR by weighting emotions according to their level of arousal and by considering the impact of personality traits.


Subject(s)
Emotions, Virtual Reality, Emotions/physiology, Personality, Arousal, Euphoria
5.
Sci Rep ; 12(1): 20087, 2022 11 22.
Article in English | MEDLINE | ID: mdl-36418441

ABSTRACT

Music involves different senses and is emotional in nature, and musicians show enhanced detection of audio-visual temporal discrepancies and emotion recognition compared to non-musicians. However, whether musical training produces these enhanced abilities or whether they are innate to musicians remains unclear. Thirty-one adult participants were randomly assigned to a music training, music listening, or control group, all of whom completed a one-hour session per week for 11 weeks. The music training group received piano training, the music listening group listened to the same music, and the control group did their homework. Measures of audio-visual temporal discrepancy, facial expression recognition, autistic traits, depression, anxiety, stress and mood were completed and compared from the beginning to the end of training. ANOVA results revealed that only the music training group showed a significant improvement in the detection of audio-visual temporal discrepancies compared to the other groups, for both stimuli (flash-beep and face-voice). However, music training did not improve emotion recognition from facial expressions compared to the control group, while it did reduce levels of depression, stress and anxiety compared to baseline. This RCT study provides the first evidence of a causal effect of music training on improved audio-visual perception that goes beyond the music domain.


Subject(s)
Music, Time Perception, Adult, Humans, Music/psychology, Acoustic Stimulation, Visual Perception, Auditory Perception
6.
J Alzheimers Dis ; 88(4): 1341-1370, 2022.
Article in English | MEDLINE | ID: mdl-35811514

ABSTRACT

BACKGROUND: Mild cognitive impairment (MCI) and dementia result in cognitive decline which can negatively impact everyday functional abilities and quality of life. Virtual reality (VR) interventions could benefit the cognitive abilities of people with MCI and dementia, but evidence is inconclusive. OBJECTIVE: To investigate the efficacy of VR training on global and domain-specific cognition, activities of daily living and quality of life, and to explore the influence of a priori moderators (e.g., immersion type, training type) on the effects of VR training. Adverse effects of VR training were also considered. METHODS: A systematic literature search was conducted on all major databases for randomized controlled trial studies. Two separate meta-analyses were performed on studies with people with MCI and with dementia. RESULTS: Sixteen studies with people with MCI and four studies with people with dementia were included in the respective meta-analyses. Results showed moderate to large effects of VR training on global cognition, attention, memory, and construction and motor performance in people with MCI. Immersion and training type were found to be significant moderators of the effect of VR training on global cognition. For people with dementia, results showed moderate to large improvements after VR training on global cognition, memory, and executive function, but a subgroup analysis was not possible. CONCLUSION: Our findings suggest that VR training is an effective treatment for both people with MCI and people with dementia. These results contribute to the establishment of practical guidelines for VR interventions for patients with cognitive decline.


Subject(s)
Cognitive Dysfunction, Dementia, Virtual Reality, Activities of Daily Living, Cognition, Cognitive Dysfunction/therapy, Dementia/therapy, Humans, Quality of Life, Randomized Controlled Trials as Topic
7.
J Exp Psychol Hum Percept Perform ; 48(9): 926-942, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35862072

ABSTRACT

The tongue is an incredibly complex sensory organ, yet little is known about its tactile capacities compared to the hands. In particular, the tongue receives almost no visual input during development and so may be calibrated differently compared to other tactile senses for spatial tasks. Using a cueing task, via an electro-tactile display, we examined how a tactile cue (to the tongue) or an auditory cue can affect the orientation of attention to electro-tactile targets presented to one of four regions on the tongue. We observed that response accuracy was generally low for the same modality condition, especially at the back of the tongue. This implies that spatial localization ability is diminished either because the tongue is less calibrated by the visual modality or because of its position and orientation inside the body. However, when cues were provided cross-modally, target identification at the back of the tongue seemed to improve. Our findings suggest that, while the brain relies on a general mechanism for spatial (and tactile) attention, the surface of the tongue may not have clear access to these representations of space when solely provided via electro-tactile feedback but can be directed by other sensory modalities. (PsycInfo Database Record (c) 2022 APA, all rights reserved).


Subject(s)
Cues, Touch, Hand, Humans, Reaction Time, Tongue
8.
Sci Rep ; 12(1): 7401, 2022 05 05.
Article in English | MEDLINE | ID: mdl-35513403

ABSTRACT

Individuals are increasingly relying on GPS devices to orient and find their way in their environment, and research has pointed to a negative impact of navigational systems on spatial memory. We used immersive virtual reality (IVR) to examine whether an audio-visual navigational aid can counteract the negative impact of visual-only or auditory-only GPS systems. We also examined the effect of spatial representation preferences and abilities when using different GPS systems. Thirty-four participants completed an IVR driving game including four GPS conditions (no GPS; audio GPS; visual GPS; audio-visual GPS). After driving one of the routes in one of the four GPS conditions, participants were asked to drive to a target landmark they had previously encountered. The audio-visual GPS condition returned more accurate performance than the visual and no-GPS conditions. General orientation ability predicted the distance to the target landmark for the visual and the audio-visual GPS conditions, while landmark preference predicted performance in the audio GPS condition. Finally, the variability in end distance to the target landmark was significantly reduced in the audio-visual GPS condition when compared to the visual and audio GPS conditions. These findings support theories of spatial cognition and inform the optimisation of GPS designs.


Subject(s)
Automobile Driving, Virtual Reality, Cognition, Humans
9.
J Behav Ther Exp Psychiatry ; 74: 101693, 2022 03.
Article in English | MEDLINE | ID: mdl-34563795

ABSTRACT

BACKGROUND: Emotion perception is essential to human interaction and relies on effective integration of emotional cues across sensory modalities. Despite initial evidence for anxiety-related biases in multisensory processing of emotional information, there is no research to date that directly addresses whether the mechanism of multisensory integration is altered by anxiety. Here, we compared audiovisual integration of emotional cues between individuals with low vs. high trait anxiety. METHODS: Participants were 62 young adults who were assessed on their ability to quickly and accurately identify happy, angry and sad emotions from dynamic visual-only, audio-only and audiovisual face and voice displays. RESULTS: The results revealed that individuals in the high anxiety group were more likely to integrate angry faces and voices in a statistically optimal fashion, as predicted by the Maximum Likelihood Estimation model, compared to low anxiety individuals. This means that high anxiety individuals achieved higher precision in correctly recognising anger from angry audiovisual stimuli compared to angry face or voice-only stimuli, and compared to low anxiety individuals. LIMITATIONS: We tested a higher proportion of females, and although this does reflect the higher prevalence of clinical anxiety among females in the general population, potential sex differences in multisensory mechanisms due to anxiety should be examined in future studies. CONCLUSIONS: Individuals with high trait anxiety have multisensory mechanisms that are especially fine-tuned for processing threat-related emotions. This bias may exhaust capacity for processing of other emotional stimuli and lead to overly negative evaluations of social interactions.


Subject(s)
Cues, Voice, Anger, Anxiety/psychology, Emotions, Facial Expression, Female, Humans, Male, Young Adult
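The abstract above (and several later entries in this list) refers to the Maximum Likelihood Estimation (MLE) model of optimal cue combination. As a reference point, here is a minimal sketch of the standard MLE prediction, i.e. the reliability-weighted combination of two cues and the resulting reduction in variance; the numerical values are illustrative and not data from any of these studies.

```python
# Standard MLE (optimal cue combination) prediction; illustrative values only.

def mle_prediction(sigma_1, sigma_2):
    """Given the standard deviations of two unimodal estimates (e.g. face-only
    and voice-only, or audio-only and haptic-only), return the optimal cue
    weights and the predicted standard deviation of the combined estimate."""
    var_1, var_2 = sigma_1 ** 2, sigma_2 ** 2
    w_1 = var_2 / (var_1 + var_2)              # more weight to the more reliable cue
    w_2 = var_1 / (var_1 + var_2)
    var_combined = (var_1 * var_2) / (var_1 + var_2)
    return w_1, w_2, var_combined ** 0.5

# Example: the second cue is twice as precise as the first.
w_1, w_2, sigma_combined = mle_prediction(2.0, 1.0)
print(w_1, w_2, sigma_combined)  # 0.2, 0.8, ~0.89 (lower than either cue alone)
```

Observed bimodal precision at or near this predicted value is what these abstracts describe as "statistically optimal" integration, while precision no better than the best single cue indicates a failure to integrate.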
10.
Behav Brain Res ; 410: 113346, 2021 07 23.
Article in English | MEDLINE | ID: mdl-33964354

ABSTRACT

In everyday life, information from multiple senses is integrated for a holistic understanding of emotion. Despite evidence of atypical multisensory perception in populations with socio-emotional difficulties (e.g., autistic individuals), little research to date has examined how anxiety impacts on multisensory emotion perception. Here we examined whether the level of trait anxiety in a sample of 56 healthy adults affected audiovisual processing of emotion for three types of stimuli: dynamic faces and voices, body motion and dialogues of two interacting agents, and circles and tones. Participants judged emotion from four types of displays - audio-only, visual-only, audiovisual congruent (e.g., angry face and angry voice) and audiovisual incongruent (e.g., angry face and happy voice) - as happy or angry, as quickly as possible. In one task, participants based their emotional judgements on information in one modality while ignoring information in the other, and in a second task they based their judgements on their overall impressions of the stimuli. The results showed that the higher trait anxiety group prioritized the processing of angry cues when combining faces and voices that portrayed conflicting emotions. Individuals in this group were also more likely to benefit from combining congruent face and voice cues when recognizing anger. The multisensory effects of anxiety were found to be independent of the effects of autistic traits. The observed effects of trait anxiety on multisensory processing of emotion may serve to maintain anxiety by increasing sensitivity to social-threat and thus contributing to interpersonal difficulties.


Subject(s)
Anxiety/physiopathology, Autism Spectrum Disorder/physiopathology, Emotions/physiology, Personality/physiology, Social Perception, Speech Perception/physiology, Visual Perception/physiology, Adolescent, Adult, Cues, Facial Recognition/physiology, Female, Humans, Male, Young Adult
11.
Dev Sci ; 24(1): e13001, 2021 01.
Article in English | MEDLINE | ID: mdl-32506580

ABSTRACT

Integrating different senses to reduce sensory uncertainty and increase perceptual precision can have an important compensatory function for individuals with visual impairment and blindness. However, how visual impairment and blindness impact the development of optimal multisensory integration in the remaining senses is currently unknown. Here we first examined how audio-haptic integration develops and changes across the life span in 92 sighted (blindfolded) individuals between 7 and 70 years of age. We used a child-friendly task in which participants had to discriminate different object sizes by touching them and/or listening to them. We assessed whether audio-haptic performance resulted in a reduction of perceptual uncertainty compared to auditory-only and haptic-only performance, as predicted by the maximum-likelihood estimation model. We then compared how this ability develops in 28 children and adults with different levels of visual experience, focussing on low-vision individuals and blind individuals who lost their sight at different ages during development. Our results show that in sighted individuals, adult-like audio-haptic integration develops around 13-15 years of age, and remains stable until late adulthood. While early-blind individuals, even at the youngest ages, integrate audio-haptic information in an optimal fashion, late-blind individuals do not. Optimal integration in low-vision individuals follows a similar developmental trajectory to that of sighted individuals. These findings demonstrate that visual experience is not necessary for optimal audio-haptic integration to emerge, but that consistency of sensory information across development is key for the functional outcome of optimal multisensory integration.


Subject(s)
Touch Perception, Visually Impaired Persons, Adult, Auditory Perception, Blindness, Child, Humans, Touch
12.
Behav Brain Res ; 397: 112922, 2021 01 15.
Article in English | MEDLINE | ID: mdl-32971196

ABSTRACT

During self-guided movements, we optimise performance by combining sensory and self-motion cues optimally, based on their reliability. Discrepancies between such cues and problems in combining them are suggested to underlie some pain conditions. Therefore, we examined whether visuomotor integration is altered in twenty-two participants with upper or lower limb complex regional pain syndrome (CRPS) compared to twenty-four controls. Participants located targets that appeared in the unaffected (CRPS) / dominant (controls) or affected (CRPS) / non-dominant (controls) side of space, using the hand of their unaffected/dominant or affected/non-dominant side of the body. For each side of space and each hand, participants located the target using visual information and no movement (vision only condition), an unseen pointing movement (self-motion only condition), or a visually-guided pointing movement (visuomotor condition). In all four space-by-hand conditions, controls reduced their variability in the visuomotor compared to the vision and self-motion only conditions and in line with a model prediction for optimal integration. Participants with CRPS showed similar evidence of cue combination in two of the four conditions. However, they had better-than-optimal integration for the unaffected hand in the affected space. Furthermore, they did not integrate optimally for the hand of the affected side of the body in unaffected space, but instead relied on the visual information. Our results suggest that people with CRPS can optimally integrate visual and self-motion cues under some conditions, despite lower reliability of self-motion cues, and use different strategies to controls.


Subject(s)
Chronic Pain/physiopathology, Complex Regional Pain Syndromes/physiopathology, Hand/physiopathology, Kinesthesis/physiology, Motor Activity/physiology, Psychomotor Performance/physiology, Visual Perception/physiology, Adult, Conflict, Psychological, Cues, Female, Humans, Male, Middle Aged, Sensorimotor Cortex/physiopathology, Young Adult
13.
Front Psychol ; 11: 1443, 2020.
Article in English | MEDLINE | ID: mdl-32754082

ABSTRACT

Human adults can optimally combine vision with self-motion to facilitate navigation. In the absence of visual input (e.g., dark environments and visual impairments), sensory substitution devices (SSDs), such as The vOICe or BrainPort, which translate visual information into auditory or tactile information, could be used to increase navigation precision when integrated together or with self-motion. In Experiment 1, we compared The vOICe and BrainPort, and assessed them in combination, in an aerial maps task performed by a group of sighted participants. In Experiment 2, we examined whether sighted individuals and a group of visually impaired (VI) individuals could benefit from using The vOICe, with and without self-motion, to accurately navigate a three-dimensional (3D) environment. In both studies, 3D motion tracking data were used to determine the level of precision with which participants performed two different tasks (an egocentric and an allocentric task) under three different conditions (two unisensory conditions and one multisensory condition). In Experiment 1, we found no benefit of using the devices together. In Experiment 2, the sighted participants' performance with The vOICe was almost as good as with self-motion despite a short training period, although we found no benefit (reduction in variability) of using The vOICe and self-motion in combination compared to either in isolation. In contrast, the group of VI participants did benefit from combining The vOICe and self-motion despite the low number of trials. Finally, while both groups became more accurate in their use of The vOICe with increased trials, only the VI group showed an increased level of accuracy in the combined condition. Our findings highlight how exploiting non-visual multisensory integration to develop new assistive technologies could be key to helping blind and VI persons, especially given their difficulty in attaining allocentric information.

14.
J Exp Psychol Hum Percept Perform ; 46(10): 1105-1117, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32718153

ABSTRACT

The brain's ability to integrate information from the different senses is essential for decreasing sensory uncertainty and ultimately limiting errors. Temporal correspondence is one of the key processes that determines whether information from different senses will be integrated and is influenced by both experience- and task-dependent mechanisms in adults. Here we investigated the development of both task- and experience-dependent temporal mechanisms by testing 7-8-year-old children, 10-11-year-old children, and adults in two tasks (simultaneity judgment, temporal order judgment) using audiovisual stimuli with differing degrees of association based on prior experience (low for beep-flash vs. high for face-voice). By fitting an independent channels model to the data, we found that while the experience-dependent mechanism of audiovisual simultaneity perception is already adult-like in 10-11-year-old children, the task-dependent mechanism is still not. These results indicate that differing maturation rates of experience-dependent and task-dependent mechanisms underlie the development of multisensory integration. Understanding this development has important implications for clinical and educational interventions. (PsycInfo Database Record (c) 2020 APA, all rights reserved).


Subject(s)
Auditory Perception/physiology, Human Development/physiology, Psychomotor Performance/physiology, Time Perception/physiology, Visual Perception/physiology, Adult, Child, Female, Humans, Male, Young Adult
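The abstract above mentions fitting an independent channels model to simultaneity and temporal order judgments. A heavily simplified sketch of the core idea follows; the actual model fitted in the paper has additional parameters (e.g. response errors and task-specific decision rules), and every parameter value below is made up for illustration.

```python
# Toy "independent channels" simulation: auditory and visual arrival latencies
# are independent random variables, and a pair is judged simultaneous when the
# latency difference falls inside a decision window. Parameter values are invented.
import random

def p_simultaneous(soa_ms, mu_a=60.0, sd_a=30.0, mu_v=80.0, sd_v=30.0,
                   window_ms=100.0, n_trials=2000, seed=0):
    """soa_ms: visual onset minus auditory onset (negative = vision presented first)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        t_a = rng.gauss(mu_a, sd_a)             # auditory arrival latency
        t_v = soa_ms + rng.gauss(mu_v, sd_v)    # visual arrival latency, shifted by the SOA
        if abs(t_v - t_a) <= window_ms:
            hits += 1
    return hits / n_trials

for soa in (-300, -150, 0, 150, 300):
    print(soa, p_simultaneous(soa))   # proportion of "simultaneous" responses per SOA
```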
15.
Brain Res ; 1723: 146381, 2019 11 15.
Article in English | MEDLINE | ID: mdl-31419429

ABSTRACT

In order to increase perceptual precision, the adult brain dynamically combines redundant information from different senses depending on their reliability. During object size estimation, for example, visual, auditory and haptic information can be integrated to increase the precision of the final size estimate. Young children, however, do not integrate sensory information optimally and instead rely on active touch. Whether this early haptic dominance is reflected in age-related differences in neural mechanisms, and whether it is driven by changes in bottom-up perceptual or top-down attentional processes, has not yet been investigated. Here, we recorded event-related potentials from a group of adults and children aged 5-7 years during an object size perception task using auditory, visual and haptic information. Multisensory information was presented either congruently (conveying the same information) or incongruently (conflicting information). No behavioral responses were required from participants. When haptic size information was available via actively tapping the objects, response amplitudes in the mid-parietal area were significantly reduced by information congruency in children, but not in adults, in the 190-250 ms and 310-370 ms time windows. These findings indicate that during object size perception only children's brain activity is modulated by active touch, supporting a neural maturational shift from sensory dominance in early childhood to optimal multisensory benefit in adulthood.


Subject(s)
Size Perception/physiology, Touch Perception/physiology, Touch/physiology, Adult, Age Factors, Attention/physiology, Brain/physiology, Brain Mapping, Child, Child, Preschool, Evoked Potentials/physiology, Female, Humans, Male, Reproducibility of Results, Visual Perception/physiology
16.
Front Psychol ; 9: 2168, 2018.
Article in English | MEDLINE | ID: mdl-30473677

ABSTRACT

Long-term music training has been shown to affect different cognitive and perceptual abilities. However, it is less well known whether it can also affect the perception of emotion from music, especially purely rhythmic music. Hence, we asked a group of 16 non-musicians, 16 musicians with no drumming experience, and 16 drummers to judge the level of expressiveness, the valence (positive and negative), and the category of emotion perceived from 96 drumming improvisation clips (audio-only, video-only, and audio-video) that varied in several music features (e.g., musical genre, tempo, complexity, drummer's expressiveness, and drummer's style). Our results show that the level and type of music training influence the perceived expressiveness, valence, and emotion from solo drumming improvisation. Overall, non-musicians, non-drummer musicians, and drummers were affected differently by changes in some characteristics of the music performance; for example, musicians (with and without drumming experience) gave greater weight to the visual performance than non-musicians when making their emotional judgments. These findings suggest that besides influencing several cognitive and perceptual abilities, music training also affects how we perceive emotion from music.

17.
Front Hum Neurosci ; 12: 274, 2018.
Article in English | MEDLINE | ID: mdl-30018545

ABSTRACT

Multisensory processing is a core perceptual capability, and the need to understand its neural bases provides a fundamental problem in the study of brain function. Both synchrony and temporal order judgments are commonly used to investigate synchrony perception between different sensory cues and multisensory perception in general. However, extensive behavioral evidence indicates that these tasks do not measure identical perceptual processes. Here we used functional magnetic resonance imaging to investigate how behavioral differences between the tasks are instantiated as neural differences. As these neural differences could manifest at either the sustained (task/state-related) and/or transient (event-related) levels of processing, a mixed block/event-related design was used to investigate the neural response of both time-scales. Clear differences in both sustained and transient BOLD responses were observed between the two tasks, consistent with behavioral differences indeed arising from overlapping but divergent neural mechanisms. Temporal order judgments, but not synchrony judgments, required transient activation in several left hemisphere regions, which may reflect increased task demands caused by an extra stage of processing. Our results highlight that multisensory integration mechanisms can be task dependent, which, in particular, has implications for the study of atypical temporal processing in clinical populations.

18.
Exp Brain Res ; 236(7): 1869-1880, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29687204

ABSTRACT

To overcome differences in physical transmission time and neural processing, the brain adaptively recalibrates the point of simultaneity between auditory and visual signals by adapting to audiovisual asynchronies. Here, we examine whether the prolonged recalibration process for passively sensed visual and auditory signals is affected by naturally occurring multisensory training known to enhance audiovisual perceptual accuracy. Hence, we asked a group of drummers, a group of non-drummer musicians, and a group of non-musicians to judge the audiovisual simultaneity of musical and non-musical audiovisual events, before and after adaptation to two fixed audiovisual asynchronies. We found that recalibration for the musicians and drummers was in the opposite direction (sound leading vision) to that of non-musicians (vision leading sound), and changed with both increased music training and increased perceptual accuracy (i.e., the ability to detect asynchrony). Our findings demonstrate that long-term musical training reshapes the way humans adaptively recalibrate simultaneity between auditory and visual signals.


Subject(s)
Auditory Perception/physiology, Music, Teaching, Visual Perception/physiology, Acoustic Stimulation, Adaptation, Physiological, Adult, Female, Humans, Judgment, Male, Photic Stimulation, Psychophysics, Reaction Time/physiology, Time Factors, Young Adult
19.
Sci Rep ; 6: 29163, 2016 07 06.
Article in English | MEDLINE | ID: mdl-27381183

ABSTRACT

Human adults can optimally integrate visual and non-visual self-motion cues when navigating, while children up to 8 years old cannot. Whether older children can is unknown, limiting our understanding of how our internal multisensory representation of space develops. Eighteen adults and fifteen 10- to 11-year-old children were guided along a two-legged path in darkness (self-motion only), in a virtual room (visual + self-motion), or were shown a pre-recorded walk in the virtual room while standing still (visual only). Participants then reproduced the path in darkness. We obtained a measure of the dispersion of the end-points (variable error) and of their distances from the correct end point (constant error). Only children reduced their variable error when recalling the path in the visual + self-motion condition, indicating combination of these cues. Adults showed a constant error for the combined condition intermediate between those for the single cues, indicative of cue competition, which may explain the lack of near-optimal integration in this group. This suggests that later in childhood humans can gain from optimally integrating spatial cues, even though in the same situation these cues are kept separate in adulthood.


Subject(s)
Aging/physiology, Motion Perception, Space Perception, Vision, Ocular/physiology, Adult, Age Factors, Child, Female, Humans, Male
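The abstract above distinguishes variable error (the dispersion of the reproduced end-points) from constant error (their distance from the correct end point). A minimal sketch of these two measures, following the definitions as worded in the abstract, is given below; the coordinates are made up and this is not the study's analysis code.

```python
# Constant vs. variable error for a set of reproduced path end-points.
# Definitions follow the abstract's wording; coordinates are invented.
import math

def constant_and_variable_error(end_points, target):
    """end_points: list of (x, y) response positions; target: the correct (x, y) end point."""
    n = len(end_points)
    # Constant error: mean distance of the responses from the correct end point.
    constant = sum(math.dist(p, target) for p in end_points) / n
    # Variable error: mean distance of the responses from their own centroid (dispersion).
    centroid = (sum(x for x, _ in end_points) / n, sum(y for _, y in end_points) / n)
    variable = sum(math.dist(p, centroid) for p in end_points) / n
    return constant, variable

print(constant_and_variable_error([(1.2, 0.1), (0.8, -0.2), (1.0, 0.3)], target=(0.0, 0.0)))
```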
20.
Behav Res Methods ; 48(4): 1285-1295, 2016 12.
Article in English | MEDLINE | ID: mdl-26542970

ABSTRACT

We describe the creation of the first multisensory stimulus set that consists of dyadic, emotional, point-light interactions combined with voice dialogues. Our set includes 238 unique clips, which present happy, angry and neutral emotional interactions at low, medium and high levels of emotional intensity between nine different actor dyads. The set was evaluated in a between-design experiment, and was found to be suitable for a broad potential application in the cognitive and neuroscientific study of biological motion and voice, perception of social interactions and multisensory integration. We also detail in this paper a number of supplementary materials, comprising AVI movie files for each interaction, along with text files specifying the three dimensional coordinates of each point-light in each frame of the movie, as well as unprocessed AIFF audio files for each dialogue captured. The full set of stimuli is available to download from: http://motioninsocial.com/stimuli_set/ .


Subject(s)
Audiovisual Aids, Behavioral Research/methods, Emotions, Software, Adult, Female, Humans, Male