Results 1 - 5 of 5
1.
Multisens Res ; : 1-30, 2021 Jul 21.
Article in English | MEDLINE | ID: mdl-34298502

ABSTRACT

Previous studies have examined whether audio-visual integration changes in older age, with some studies reporting age-related differences and others reporting no differences. Most studies have either used very basic and ambiguous stimuli (e.g., flash/beep) or highly contextualized, causally related stimuli (e.g., speech). However, few have used tasks that fall somewhere between the extremes of this continuum, such as those that include contextualized, causally related stimuli that are not speech-based; for example, audio-visual impact events. The present study used a paradigm requiring duration estimates and temporal order judgements (TOJ) of audio-visual impact events. Specifically, the Schutz-Lipscomb illusion, in which the perceived duration of a percussive tone is influenced by the length of the visual striking gesture, was examined in younger and older adults. Twenty-one younger and 21 older adult participants were presented with a visual point-light representation of a percussive impact event (i.e., a marimbist striking their instrument with a long or short gesture) combined with a percussive auditory tone. Participants completed a tone duration judgement task and a TOJ task. Five audio-visual temporal offsets (-400 to +400 ms) and five spatial offsets (from -90 to +90°) were randomly introduced. Results demonstrated that the strength of the illusion did not differ between older and younger adults and was not influenced by spatial or temporal offsets. Older adults showed an 'auditory first bias' when making TOJs. The current findings expand what is known about age-related differences in audio-visual integration by considering them in the context of impact-related events.
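One generic way to quantify the 'auditory first bias' reported above is to fit a psychometric function to the TOJ responses and read off the point of subjective simultaneity (PSS). The sketch below only illustrates that idea and is not the authors' analysis; the offsets follow the -400 to +400 ms range mentioned in the abstract, but the response proportions, the cumulative-Gaussian model, and the sign convention are assumptions.

```python
# Illustrative sketch: estimating the point of subjective simultaneity (PSS)
# from temporal order judgement data. Response proportions are invented;
# the sign convention (negative SOA = audio leads) is an assumption.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

soa = np.array([-400.0, -200.0, 0.0, 200.0, 400.0])        # audio-visual offsets (ms)
p_visual_first = np.array([0.05, 0.20, 0.40, 0.80, 0.95])  # hypothetical data

def cum_gauss(x, mu, sigma):
    """Cumulative-Gaussian psychometric function."""
    return norm.cdf(x, loc=mu, scale=sigma)

(pss, spread), _ = curve_fit(cum_gauss, soa, p_visual_first, p0=[0.0, 100.0])

# Under this convention, a positive PSS means the visual event must physically
# lead before the two orders are reported equally often, i.e., at true
# simultaneity the audio tends to be judged first (an 'auditory first bias').
print(f"PSS = {pss:.1f} ms, spread = {spread:.1f} ms")
```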

2.
Mem Cognit ; 32(1): 51-71, 2004 Jan.
Article in English | MEDLINE | ID: mdl-15078044

ABSTRACT

In this study, we examined the orientation dependency of spatial representations following various learning conditions. We assessed the spatial representations of human participants after they had learned a complex spatial layout via map learning, via navigating within a real environment, or via navigating through a virtual simulation of that environment. Performance was compared between conditions involving (1) multiple- versus single-body orientation, (2) active versus passive learning, and (3) high versus low levels of proprioceptive information. Following learning, the participants were required to produce directional judgments to target landmarks. Results showed that the participants developed orientation-specific spatial representations following map learning and passive learning, as indicated by better performance when tested from the initial learning orientation. These results suggest that neither the number of vantage points nor the level of proprioceptive information experienced is a determining factor; rather, it is the active aspect of direct navigation that leads to the development of orientation-free representations.
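As a concrete illustration of how such directional judgments are commonly scored, the sketch below computes absolute angular pointing error with wrap-around, split by whether the test orientation matched the learning orientation. The data, the alignment coding, and the error measure are assumptions for illustration; the abstract does not specify the scoring procedure.

```python
# Hypothetical scoring sketch: absolute angular error of pointing judgments,
# compared between trials aligned and misaligned with the learning orientation.
# All values are invented.
import numpy as np

def angular_error(judged_deg, correct_deg):
    """Smallest absolute angular difference between two bearings (0-180 deg)."""
    diff = (np.asarray(judged_deg) - np.asarray(correct_deg) + 180.0) % 360.0 - 180.0
    return np.abs(diff)

judged  = np.array([ 12.0, 350.0, 120.0, 160.0])  # hypothetical responses (deg)
correct = np.array([  0.0,  10.0,  90.0, 200.0])  # hypothetical true bearings (deg)
aligned = np.array([True, True, False, False])    # tested from the learning orientation?

err = angular_error(judged, correct)
print("aligned mean error (deg):   ", err[aligned].mean())
print("misaligned mean error (deg):", err[~aligned].mean())
# A larger misaligned error is the 'orientation-specific' pattern the abstract
# reports after map learning and passive learning.
```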


Subject(s)
Orientation , Pattern Recognition, Visual , Social Environment , Space Perception , User-Computer Interface , Walking , Adolescent , Adult , Concept Formation , Female , Humans , Judgment , Male , Memory, Short-Term , Problem Solving
3.
Perception ; 33(1): 49-65, 2004.
Article in English | MEDLINE | ID: mdl-15035328

ABSTRACT

By systematically varying cue availability in the stimulus and response phases of a series of same-modality and cross-modality distance matching tasks, we examined the contributions of static visual information, idiothetic information, and optic flow information. The experiment was conducted in a large-scale, open, outdoor environment. Subjects were presented with information about a distance and were then required to turn 180° before producing a distance estimate. Distance encoding and responding occurred via (i) visually perceived target distance or (ii) distance traversed during either blindfolded or sighted locomotion. The results demonstrated that subjects performed with similar accuracy across all conditions. In conditions in which the stimulus and the response were delivered in the same mode, constant error was minimal when visual information was absent, whereas overestimation was observed when visual information was present. In conditions in which the stimulus and response modes differed, a consistent error pattern was observed. By systematically comparing complementary conditions, we found that the availability of visual information during locomotion (particularly optic flow) led to an 'under-perception' of movement relative to conditions in which visual information was absent during locomotion.
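The constant-error measure referred to above is simply the mean signed difference between produced and presented distances. A minimal sketch with invented distances follows; a near-zero value corresponds to the minimal-constant-error conditions and a positive value to the overestimation the abstract describes.

```python
# Minimal sketch: constant error (mean signed error) for distance matching.
# Target and response distances are invented for illustration.
import numpy as np

def constant_error(produced_m, target_m):
    """Mean signed error in metres; positive = overestimation, negative = underestimation."""
    return float(np.mean(np.asarray(produced_m) - np.asarray(target_m)))

target          = np.array([ 8.0, 12.0, 16.0, 20.0])  # presented distances (m)
blind_walking   = np.array([ 7.9, 12.3, 15.8, 20.2])  # hypothetical responses without vision
sighted_walking = np.array([ 9.1, 13.4, 17.6, 22.0])  # hypothetical responses with vision

print("constant error, no vision:  ", constant_error(blind_walking, target))   # near zero
print("constant error, with vision:", constant_error(sighted_walking, target)) # positive
```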


Subject(s)
Cues , Distance Perception/physiology , Adolescent , Adult , Analysis of Variance , Female , Humans , Locomotion/physiology , Male , Motion Perception/physiology , Psychophysics
4.
Exp Brain Res ; 154(2): 246-54, 2004 Jan.
Article in English | MEDLINE | ID: mdl-14685814

ABSTRACT

One of the fundamental requirements for successful navigation through an environment is the continuous monitoring of distance travelled. To do so, humans normally use one or a combination of visual, proprioceptive/efferent, vestibular, and temporal cues. In the real world, information from one sensory modality is normally congruent with information from other modalities; hence, studying the nature of sensory interactions is often difficult. In order to decouple the natural covariation between different sensory cues, we used virtual reality technology to vary the relation between the information generated from visual sources and the information generated from proprioceptive/efferent sources. When we manipulated the stimuli such that the visual information was coupled in various ways to the proprioceptive/efferent information, human subjects predominantly used visual information to estimate the ratio of two traversed path lengths. Although proprioceptive/efferent information was not used directly, its mere availability increased the accuracy of visually based estimates of relative path length, even when it was inconsistent with the visual information. These results demonstrated that active movement (locomotion) facilitates visual perception of path length travelled.
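The decoupling logic can be made concrete with a small sketch: if the optic-flow gain applied to one path differs from the gain on the other, the visually specified length ratio diverges from the physically walked ratio, so the two cues make different predictions. The gain values, distances, and function below are assumptions used only to illustrate the idea, not the study's actual manipulation.

```python
# Sketch of the cue-decoupling idea: a gain applied to optic flow separates
# the visually specified path length from the physically walked one, so the
# two cues predict different length ratios. All values are invented.
def predicted_ratios(walked_1, walked_2, flow_gain_1=1.0, flow_gain_2=1.0):
    """Return (visually specified ratio, proprioceptively specified ratio)."""
    visual_ratio  = (walked_1 * flow_gain_1) / (walked_2 * flow_gain_2)
    proprio_ratio = walked_1 / walked_2
    return visual_ratio, proprio_ratio

# Two physically identical paths, but optic flow on the first is doubled.
vis, prop = predicted_ratios(walked_1=10.0, walked_2=10.0, flow_gain_1=2.0)
print(f"visual cue predicts ratio {vis:.1f}, proprioception predicts {prop:.1f}")
# Ratio judgements tracking ~2.0 rather than ~1.0 would indicate the visual
# dominance reported in the abstract.
```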


Subject(s)
Cues , Motion Perception/physiology , Orientation/physiology , Proprioception/physiology , Space Perception/physiology , Adult , Exercise Test , Feedback/physiology , Female , Humans , Locomotion/physiology , Male , Models, Neurological , Movement/physiology , Photic Stimulation , Postural Balance/physiology , User-Computer Interface
5.
Cyberpsychol Behav ; 6(5): 509-18, 2003 Oct.
Article in English | MEDLINE | ID: mdl-14583126

ABSTRACT

This study assessed the relative contributions of visual and proprioceptive/motor information during self-motion in a virtual environment using a speed discrimination task. Subjects wore a head-mounted display and rode a stationary bicycle along a straight path in an empty, seemingly infinite hallway with random surface texture. For each trial, subjects were required to pedal the bicycle along two paths at two different speeds (a standard speed and a comparison speed) and subsequently report whether the second speed travelled was faster than the first. The standard speed remained the same while the comparison speed was varied between trials according to the method of constant stimuli. When visual and proprioceptive/motor cues were provided separately or in combination, the speed discrimination thresholds were comparable, suggesting that either cue alone is sufficient. When the relation between visual and proprioceptive information was made inconsistent by varying optic flow gain, the resulting psychometric functions shifted along the horizontal axis (pedalling speed). The degree of separation between these functions indicated that both optic flow and proprioceptive cues contributed to speed estimation, with proprioceptive cues being dominant. These results suggest an important role for proprioceptive information in speed estimation during self-motion.
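For readers unfamiliar with the method of constant stimuli mentioned above, the sketch below builds a trial list of that kind: one fixed standard speed, a fixed set of comparison speeds, and many repetitions of each pairing in random order. The speeds, repetition count, and interval structure are assumptions; the abstract does not report the actual parameters.

```python
# Assumed method-of-constant-stimuli trial list for the two-interval speed
# discrimination task; standard speed, comparison levels, and repetitions
# are invented.
import random

STANDARD = 10.0                              # assumed standard pedalling speed (arbitrary units)
COMPARISONS = [7.0, 8.5, 10.0, 11.5, 13.0]   # assumed comparison speeds
REPS_PER_LEVEL = 20                          # assumed repetitions per comparison level

trials = []
for comparison in COMPARISONS:
    for _ in range(REPS_PER_LEVEL):
        # Randomise whether the standard is travelled first or second.
        first, second = random.choice([(STANDARD, comparison), (comparison, STANDARD)])
        trials.append({"first_speed": first, "second_speed": second,
                       "comparison": comparison})
random.shuffle(trials)

# The proportion of "second path was faster" responses at each comparison
# level traces out the psychometric function whose horizontal shift and slope
# the abstract describes.
print(len(trials), "trials, e.g.", trials[0])
```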


Subject(s)
Computer Simulation , Cues , Discrimination, Psychological/physiology , Kinesthesis/physiology , Mental Processes/physiology , Motion Perception/physiology , User-Computer Interface , Adult , Female , Humans , Male , Motion Sickness/physiopathology , Photic Stimulation , Psychometrics , Reference Values