Results 1 - 20 of 179
1.
Iperception ; 14(6): 20416695231215604, 2023.
Article in English | MEDLINE | ID: mdl-38222319

ABSTRACT

When seeing an object in a scene, the presumption of seeing that object from a general viewpoint (as opposed to an accidental viewpoint) is a useful heuristic for deciding which of many interpretations of the object is correct. Similar heuristic assumptions about illumination might also be used for scene interpretation. Here we tested that assumption and asked whether illumination information helps determine object properties when objects are seen from an accidental viewpoint. Test objects were placed on a flat surface and the illumination was varied while the objects' images were kept constant. Observers judged the shape or rigidity of static or moving simple objects presented in accidental view. They also chose which of two seemingly very similar faces was familiar. We found: (1) Objects appeared flat without shadow information but were perceived as volumetric or non-planar in the presence of cast shadows. (2) Apparently non-rigid objects became rigid with shadow information. (3) Shading and shadows helped to infer which of two faces was the familiar one. Previous results had shown that cast shadows help determine the spatial layout of objects. Our study shows that other object properties, such as rigidity or 3D shape, can be disambiguated by shadow information.

2.
PLoS One ; 16(11): e0259015, 2021.
Article in English | MEDLINE | ID: mdl-34793458

ABSTRACT

In dynamic driving simulators, the experience of operating a vehicle is reproduced by combining visual stimuli generated by graphical rendering with inertial stimuli generated by platform motion. Due to inherent limitations of the platform workspace, inertial stimulation is subject to shortcomings in the form of missing cues, false cues, and/or scaling errors, which negatively affect simulation fidelity. In the present study, we quantify the contribution of active somatosensory stimulation to the perceived intensity of self-motion, relative to that of the other sensory systems. Participants judged the intensity of longitudinal and lateral driving maneuvers in a dynamic driving simulator in passive driving conditions, with and without additional active somatosensory stimulation, as provided by an integrated Active Seat (AS) and Active Belts (AB) system (ASB). The results show that ASB enhances the perceived intensity of sustained decelerations, and increases the precision of acceleration perception overall. Our findings are consistent with models of perception, and indicate that active somatosensory stimulation can indeed be used to improve simulation fidelity.
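The reported precision gain lends itself to a standard psychometric analysis. Below is a minimal sketch, assuming hypothetical two-interval forced-choice proportions (not the study's data), of how precision can be read off as the inverse of a fitted cumulative-Gaussian slope:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical 2AFC data: proportion judged "more intense" vs.
# intensity difference between the two intervals (m/s^2)
diffs = np.array([-0.6, -0.4, -0.2, 0.0, 0.2, 0.4, 0.6])
p_asb = np.array([0.02, 0.08, 0.25, 0.50, 0.75, 0.92, 0.98])
p_base = np.array([0.10, 0.20, 0.35, 0.50, 0.65, 0.80, 0.90])

def psychometric(x, mu, sigma):
    """Cumulative-Gaussian psychometric function."""
    return norm.cdf(x, loc=mu, scale=sigma)

for name, p in [("with ASB", p_asb), ("without ASB", p_base)]:
    (mu, sigma), _ = curve_fit(psychometric, diffs, p, p0=[0.0, 0.3])
    print(f"{name}: sigma = {sigma:.2f}")  # smaller sigma = higher precision
```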


Subject(s)
Automobile Driving , Computer Simulation , Motion Perception/physiology , Vision, Ocular/physiology , Acceleration , Adult , Female , Humans , Male , Psychophysics , Young Adult
3.
Exp Brain Res ; 239(6): 1727-1745, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33779793

ABSTRACT

Previous literature suggests a relationship between individual characteristics of motion perception and the peak frequency of motion sickness sensitivity. Here, we used well-established paradigms to relate motion perception and motion sickness on an individual level. We recruited 23 participants to complete a two-part experiment. In the first part, we determined individual velocity storage time constants from perceived rotation in response to Earth Vertical Axis Rotation (EVAR), and subjective vertical time constants from perceived tilt in response to centrifugation. The cross-over frequency for resolution of the gravito-inertial ambiguity was derived from our data using the Multi Sensory Observer Model (MSOM). In the second part of the experiment, we determined individual motion sickness frequency responses. Participants were exposed to 30-minute sinusoidal fore-aft motions at frequencies of 0.15, 0.2, 0.3, 0.4 and 0.5 Hz, with a peak amplitude of 2 m/s², in five separate sessions approximately 1 week apart. Sickness responses were recorded using both the MIsery SCale (MISC) at 30-s intervals and the Motion Sickness Assessment Questionnaire (MSAQ) at the end of the motion exposure. The average velocity storage and subjective vertical time constants were 17.2 s (STD = 6.8 s) and 9.2 s (STD = 7.17 s). The average cross-over frequency was 0.21 Hz (STD = 0.10 Hz). At the group level, there was no significant effect of frequency on motion sickness. However, considerable individual variability was observed in frequency sensitivities, with some participants being particularly sensitive to the lowest frequencies, whereas others were most sensitive to intermediate or higher frequencies. The frequency of peak sensitivity did not correlate with the velocity storage time constant (r = 0.32, p = 0.26) or the subjective vertical time constant (r = -0.37, p = 0.29). Our prediction of a significant correlation between cross-over frequency and frequency sensitivity was not confirmed (r = 0.26, p = 0.44). However, we did observe a strong positive correlation between the subjective vertical time constant and general motion sickness sensitivity (r = 0.74, p = 0.0006). We conclude that frequency sensitivity is best considered a property unique to the individual. This has important consequences for existing models of motion sickness, which were fitted to group-averaged sensitivities. The correlation between the subjective vertical time constant and motion sickness sensitivity supports the importance of verticality perception during exposure to translational sickness stimuli.
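The MSOM derivation of the cross-over frequency is beyond a short example, but the underlying idea, two complementary filter paths whose gains intersect, can be illustrated with a first-order reduction. The sketch below is an assumption-laden toy, not the MSOM, and it does not reproduce the 0.21 Hz reported above:

```python
import numpy as np

def crossover_frequency_hz(tau_vs, tau_sv):
    """Frequency at which a first-order high-pass filter with time
    constant tau_vs (canal / velocity-storage path, gain
    w*tau_vs / sqrt(1 + (w*tau_vs)^2)) and a first-order low-pass
    filter with time constant tau_sv (subjective-vertical path, gain
    1 / sqrt(1 + (w*tau_sv)^2)) have equal magnitude. Equating the
    two gains gives w = 1 / sqrt(tau_vs * tau_sv)."""
    w = 1.0 / np.sqrt(tau_vs * tau_sv)
    return w / (2.0 * np.pi)

# Group-average time constants reported above; this toy reduction only
# illustrates how two time constants can define a cross-over point.
print(crossover_frequency_hz(17.2, 9.2))  # ~0.013 Hz, not the MSOM's 0.21 Hz
```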


Subject(s)
Motion Perception , Motion Sickness , Humans , Motion , Rotation , Space Perception
4.
PLoS One ; 16(1): e0245295, 2021.
Article in English | MEDLINE | ID: mdl-33465124

ABSTRACT

Illusory self-motion often provokes motion sickness, which is commonly explained in terms of an inter-sensory conflict that is not in accordance with previous experience. Here we address the influence of cognition on motion sickness and show that such a conflict is not provocative when the observer believes that the motion illusion is actually occurring. Illusory self-motion and motion sickness were elicited in healthy human participants who were seated on a stationary rotary chair inside a rotating optokinetic drum. Participants knew that both chair and drum could rotate but were unaware of the actual motion stimulus. Results showed that motion sickness was correlated with the discrepancy between participants' perceived self-motion and their beliefs about the actual motion. Together with general motion sickness susceptibility, this discrepancy accounted for 51% of the variance in motion sickness intensity. This finding sheds new light on the causes of visually induced motion sickness and suggests that it is not governed by an inter-sensory conflict per se, but by beliefs concerning the actual self-motion. This cognitive influence provides a promising tool for the development of new countermeasures.
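The 51% figure suggests a two-predictor regression of sickness intensity on the perception-belief discrepancy and general susceptibility. A minimal sketch with synthetic stand-in data (all arrays and coefficients are hypothetical, not the study's):

```python
import numpy as np

# Hypothetical per-participant values:
# discrepancy: |perceived self-motion - believed actual motion|
# mssq: general motion sickness susceptibility score
# sickness: reported motion sickness intensity
rng = np.random.default_rng(0)
discrepancy = rng.uniform(0, 1, 30)
mssq = rng.uniform(0, 1, 30)
sickness = 2.0 * discrepancy + 1.0 * mssq + rng.normal(0, 0.5, 30)

# Ordinary least squares with an intercept
X = np.column_stack([np.ones_like(discrepancy), discrepancy, mssq])
beta, *_ = np.linalg.lstsq(X, sickness, rcond=None)
pred = X @ beta

# R^2: proportion of variance accounted for by the two predictors
ss_res = np.sum((sickness - pred) ** 2)
ss_tot = np.sum((sickness - sickness.mean()) ** 2)
print(f"R^2 = {1 - ss_res / ss_tot:.2f}")
```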


Subject(s)
Motion Perception/physiology , Motion Sickness/physiopathology , Adult , Cognition/physiology , Female , Healthy Volunteers , Humans , Likelihood Functions , Male , Visual Fields , Young Adult
5.
Appl Ergon ; 90: 103282, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33065467

ABSTRACT

The risk of motion sickness is considerably higher in autonomous vehicles than in human-operated vehicles. Their introduction will therefore require systems that mitigate motion sickness. We investigated whether this can be achieved by augmenting the vehicle interior with additional visualizations. Participants were immersed in motion simulations on a moving-base driving simulator, where they were backward-facing passengers of an autonomous vehicle. Using a Head-Mounted Display, they were presented either with a regular view from inside the vehicle or with augmented views that offered additional cues on the vehicle's present motion, or its motion 500 ms into the future, displayed on the vehicle's interior panels. In contrast to the hypotheses and other recent studies, no difference was found between conditions. The absence of differences between conditions suggests a ceiling effect: providing a regular view may limit motion sickness, but presenting additional visual information beyond this does not further reduce sickness.


Subject(s)
Automobile Driving , Motion Sickness , Cues , Forecasting , Humans , Motion , Motion Sickness/etiology , Motion Sickness/prevention & control
6.
PLoS One ; 15(5): e0233160, 2020.
Article in English | MEDLINE | ID: mdl-32469902

ABSTRACT

To determine its own upright orientation, the brain creates a sense of verticality from a combination of multisensory inputs. To test whether this process is affected by aging, we placed younger and older adults on a motion platform and systematically tilted the orientation of their visual surroundings using an augmented reality setup. In a series of trials, participants adjusted the orientation of the platform until they perceived themselves to be upright. Tilting the visual scene around the roll axis induced a bias in subjective postural vertical determination in the direction of scene tilt in both groups. In the group of older participants, however, the observed peak bias was larger and occurred at larger visual tilt angles. This indicates that the susceptibility to visually induced biases increases with age, possibly caused by a reduced reliability of sensory information.


Subject(s)
Aging/physiology , Orientation, Spatial/physiology , Sitting Position , Standing Position , Visual Perception/physiology , Adult , Aged , Female , Humans , Male , Middle Aged
7.
Front Integr Neurosci ; 14: 19, 2020.
Article in English | MEDLINE | ID: mdl-32327980

ABSTRACT

Even when wearing gloves, we can easily detect whether a surface we are touching is sticky. However, we know little about the similarities between brain activations elicited by such glove contact and by direct contact with our bare skin. In this functional magnetic resonance imaging (fMRI) study, we investigated which brain regions represent stickiness intensity information obtained in both touch conditions, i.e., skin contact and glove contact. First, we searched for neural representations mediating stickiness for each touch condition separately, and found regions responding to both conditions mainly in the supramarginal gyrus and the secondary somatosensory cortex. Second, we explored whether surface stickiness is encoded in common neural patterns irrespective of how participants touched the sticky stimuli. Using a cross-condition decoding method, we tested whether stickiness intensities could be decoded from fMRI signals evoked by skin contact using a classifier trained on the responses elicited by glove contact, and vice versa. Our results revealed shared neural encoding patterns in the bilateral angular gyri and the inferior frontal gyrus (IFG), suggesting that these areas represent stickiness intensity information regardless of how participants touched the sticky stimuli. Interestingly, we observed that the neural encoding patterns of these areas were reflected in participants' intensity ratings. This study revealed common and distinct brain activation patterns of tactile stickiness using two different touch conditions, which may broaden our understanding of the neural mechanisms underlying surface texture perception.
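A minimal sketch of the cross-condition decoding logic, with synthetic voxel patterns standing in for fMRI responses (dimensions, effect sizes, and offsets are all hypothetical):

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_trials, n_voxels, n_levels = 60, 200, 3  # hypothetical sizes

def synth_patterns(shift):
    """Synthetic voxel patterns: stickiness level shifts a shared
    subset of voxels; `shift` mimics a condition-specific offset."""
    labels = rng.integers(0, n_levels, n_trials)  # three stickiness levels
    X = rng.normal(0, 1, (n_trials, n_voxels))
    X[:, :20] += labels[:, None] * 0.8 + shift
    return X, labels

X_glove, y_glove = synth_patterns(shift=0.0)
X_skin, y_skin = synth_patterns(shift=0.5)

# Cross-condition decoding: train on one condition, test on the other
clf = LinearSVC(max_iter=10000).fit(X_glove, y_glove)
print("glove -> skin accuracy:", clf.score(X_skin, y_skin))
clf = LinearSVC(max_iter=10000).fit(X_skin, y_skin)
print("skin -> glove accuracy:", clf.score(X_glove, y_glove))
```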

8.
Exp Brain Res ; 238(3): 699-711, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32060563

ABSTRACT

Inertial motions may be defined in terms of acceleration and jerk, the time-derivative of acceleration. We investigated the relative contributions of these characteristics to the perceived intensity of motions. Participants were seated on a high-fidelity motion platform, and presented with 25 above-threshold 1 s forward (surge) motions that had acceleration values ranging between 0.5 and 2.5 m/s² and jerks between 20 and 60 m/s³, in five steps each. Participants performed two tasks: a magnitude estimation task, where they provided subjective ratings of motion intensity for each motion, and a two-interval forced choice task, where they provided judgments on which motion of a pair was more intense, for all possible combinations of the above motion profiles. Analysis of the data shows that responses on both tasks may be explained by a single model, and that this model should include acceleration only. The finding that perceived motion intensity depends on acceleration only appears inconsistent with previous findings. We show that this discrepancy can be explained by considering the frequency content of the motions, and demonstrate that a linear time-invariant systems model of the otoliths and subsequent processing can account for the present data as well as for previous findings.
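A toy illustration of the LTI account: filtering trapezoidal surge profiles through a first-order system gives peak responses that track acceleration but are nearly unaffected by jerk. The transfer function and time constant below are placeholders, not the study's fitted model:

```python
import numpy as np
from scipy import signal

# First-order low-pass stand-in for otolith + central processing;
# the 0.2 s time constant is hypothetical, not fitted to this study.
tau = 0.2
lti = signal.TransferFunction([1.0], [tau, 1.0])
t = np.linspace(0, 2, 4001)

def surge(accel, jerk, duration=1.0):
    """Trapezoidal acceleration profile: ramp up at `jerk` (m/s^3),
    hold at `accel` (m/s^2), ramp back down by the end of `duration`."""
    rise = accel / jerk
    u = np.clip(t / rise, 0.0, 1.0) * accel
    tail = t > duration - rise
    u[tail] = np.clip((duration - t[tail]) / rise, 0.0, 1.0) * accel
    return u

# Peak responses vary strongly with acceleration but barely with jerk,
# mirroring the conclusion that perceived intensity tracks acceleration.
for a in (0.5, 2.5):
    for j in (20.0, 60.0):
        _, y, _ = signal.lsim(lti, U=surge(a, j), T=t)
        print(f"accel={a} m/s^2, jerk={j} m/s^3 -> peak {y.max():.3f}")
```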


Subject(s)
Acceleration , Motion Perception/physiology , Motion , Visual Perception/physiology , Adult , Female , Humans , Male , Models, Biological , Otolithic Membrane , Vestibule, Labyrinth/physiology , Young Adult
9.
Front Neurosci ; 14: 599226, 2020.
Article in English | MEDLINE | ID: mdl-33510611

ABSTRACT

Percepts of verticality are thought to be constructed as a weighted average of multisensory inputs, but the observed weights differ considerably between studies. In the present study, we evaluate whether this can be explained by differences in how visual, somatosensory and proprioceptive cues contribute to representations of the Head In Space (HIS) and Body In Space (BIS). Ten participants stood on a force plate on top of a motion platform while wearing a visualization device that allowed us to artificially tilt their visual surroundings. They were presented with (in)congruent combinations of visual, platform, and head tilt, and performed Rod & Frame Test (RFT) and Subjective Postural Vertical (SPV) tasks. We also recorded postural responses to evaluate the relation between perception and balance. The perception data show that body tilt, head tilt, and visual tilt affect the HIS and BIS in both experimental tasks. For the RFT task, visual tilt induced considerable biases (≈ 10° for 36° visual tilt) in the direction of the vertical expressed in the visual scene; for the SPV task, participants also adjusted platform tilt to correct for illusory body tilt induced by the visual stimuli, but effects were much smaller (≈ 0.25°). Likewise, postural data from the SPV task indicate that participants slightly shifted their weight to counteract visual tilt (0.3° for 36° visual tilt). The data reveal a striking dissociation of visual effects between the two tasks. We find that the data can be explained well by a model in which percepts of the HIS and BIS are constructed from direct signals from head and body sensors, respectively, and indirect signals based on body and head signals corrected for perceived neck tilt. These findings show that perception of the HIS and BIS derives from the same sensory signals, but with profoundly different weighting factors. We conclude that the different weightings observed between studies likely result from querying distinct latent constructs referenced to the body or head in space.
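A minimal sketch of the described direct/indirect construction, with hypothetical weights (the study's fitted values are not reproduced here):

```python
def weighted(x, y, w):
    """Convex combination of two tilt estimates (degrees)."""
    return w * x + (1.0 - w) * y

def his_bis(head_sensor, body_sensor, neck_sensor, w_his=0.6, w_bis=0.6):
    """Head-in-space and body-in-space percepts, each combining a
    direct sensor estimate with an indirect estimate routed through
    the other segment plus the perceived neck angle. Weights are
    hypothetical placeholders."""
    his_direct = head_sensor
    his_indirect = body_sensor + neck_sensor   # body-in-space + neck tilt
    bis_direct = body_sensor
    bis_indirect = head_sensor - neck_sensor   # head-in-space - neck tilt
    his = weighted(his_direct, his_indirect, w_his)
    bis = weighted(bis_direct, bis_indirect, w_bis)
    return his, bis

# Example: head tilted 10 deg on an upright body, veridical neck signal
print(his_bis(head_sensor=10.0, body_sensor=0.0, neck_sensor=10.0))
```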

10.
Front Neural Circuits ; 13: 68, 2019.
Article in English | MEDLINE | ID: mdl-31736715

ABSTRACT

Spatial orientation relies on a representation of the position and orientation of the body relative to the surrounding environment. When navigating in the environment, this representation must be constantly updated, taking into account the direction, speed, and amplitude of body motion. Visual information plays an important role in this updating process, notably via optical flow. Here, we systematically investigated how the size and the simulated portion of the field of view (FoV) affect the perceived visual speed of human observers. We propose a computational model to account for the patterns in the human data. This model is composed of hierarchical layers of cells that model the neural processing stages of the dorsal visual pathway. Specifically, we consider that the activity of area MT is processed by populations of modeled MST cells that are sensitive to the differential components of the optical flow, thus producing selectivity for specific patterns of optical flow. Our results indicate that the proposed computational model is able to describe the experimental evidence and could be used to predict expected biases of speed perception for conditions in which only some portions of the visual field are visible.
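Only the flow-field front end of such a model is sketched below; the MT/MST population stages are omitted. It illustrates why the visible portion of an expansion field matters: windows that are larger, or placed more peripherally, contain faster flow vectors on average (all parameters hypothetical):

```python
import numpy as np

def mean_flow_speed(fov_deg, center_deg=0.0, n=65, speed=1.0):
    """Mean retinal speed (arbitrary units) of a radial expansion flow
    field, sampled over a square window of width fov_deg centered
    center_deg away from the focus of expansion."""
    half = np.deg2rad(fov_deg) / 2
    c = np.deg2rad(center_deg)
    x, y = np.meshgrid(np.linspace(c - half, c + half, n),
                       np.linspace(-half, half, n))
    u, v = speed * x, speed * y      # expansion flow for forward translation
    return np.mean(np.hypot(u, v))   # a crude speed readout

# Larger windows contain faster vectors on average ...
for fov in (20, 60, 120):
    print("FoV", fov, "deg:", mean_flow_speed(fov))
# ... and so does a small window placed away from the focus of expansion
print("peripheral 20 deg window:", mean_flow_speed(20, center_deg=40))
```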


Subject(s)
Models, Neurological , Optic Flow/physiology , Orientation, Spatial/physiology , Visual Fields/physiology , Visual Pathways/physiology , Computer Simulation , Humans , Motion Perception/physiology , Neurons/physiology , Visual Perception/physiology
11.
Neuroimage ; 197: 120-132, 2019 08 15.
Article in English | MEDLINE | ID: mdl-31028922

ABSTRACT

Distinguishing animate from inanimate objects is fundamental for social perception in humans and animals. Visual motion cues indicative of self-propelled object motion are useful for animacy perception: they can be detected over a wide expanse of visual field, at a distance and in low visibility conditions, can attract attention, and provide clues about object behaviour. However, the neural correlates of animacy perception evoked exclusively by visual motion cues, i.e. not relying on form, background or visual context, are unclear. We aimed to address this question in four psychophysical experiments in humans, two of which were performed during neuroimaging. The stimulus was a single dot with constant form that moved on a blank background and evoked controlled degrees of perceived animacy through parametric variations of self-propelled motion cues. BOLD signals reflecting perceived animacy in a graded manner, irrespective of eye movements, were found in one intraparietal region. Additional whole-brain and region-of-interest analyses revealed no comparable effects in brain regions associated with social processing or other areas. Our study shows that animacy perception evoked solely by visual motion cues, a basic perceptual process in social cognition, engages brain regions not primarily associated with social cognition.


Subject(s)
Cues , Motion Perception/physiology , Parietal Lobe/physiology , Adult , Attention/physiology , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Photic Stimulation , Psychophysics , Young Adult
12.
IEEE Trans Vis Comput Graph ; 25(5): 1887-1897, 2019 05.
Article in English | MEDLINE | ID: mdl-30794512

ABSTRACT

Creating metrically accurate avatars is important for many applications such as virtual clothing try-on, ergonomics, medicine, immersive social media, telepresence, and gaming. Creating avatars that precisely represent a particular individual is challenging, however, due to the need for expensive 3D scanners, privacy issues with photographs or videos, and the difficulty of making accurate tailoring measurements. We overcome these challenges by creating "The Virtual Caliper", which uses VR game controllers to make simple measurements. First, we establish which body measurements users can reliably make on their own body. We find several distance measurements to be good candidates and then verify that these are linearly related to 3D body shape as represented by the SMPL body model. The Virtual Caliper enables novice users to accurately measure themselves and create an avatar with their own body shape. We evaluate the metric accuracy relative to ground truth 3D body scan data, compare the method quantitatively to other avatar creation tools, and perform extensive perceptual studies. We also provide a software application to the community that enables novices to rapidly create avatars in fewer than five minutes. Not only is our approach more rapid than existing methods, it also exports a metrically accurate 3D avatar model that is rigged and skinned.
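The reported linear relation between distance measurements and SMPL shape implies that a simple least-squares mapping suffices once training pairs are available. A minimal sketch with synthetic data (matrix sizes and values are hypothetical, and SMPL itself is not loaded):

```python
import numpy as np

rng = np.random.default_rng(3)
n_subj, n_meas, n_betas = 100, 6, 10  # hypothetical sizes

# Hypothetical training data: tailoring-style distance measurements
# paired with SMPL shape coefficients (betas), linearly related
W_true = rng.normal(0, 1, (n_meas + 1, n_betas))
meas = rng.normal(0, 1, (n_subj, n_meas))
betas = np.hstack([meas, np.ones((n_subj, 1))]) @ W_true

# Least-squares fit of the measurement-to-shape mapping
X = np.hstack([meas, np.ones((n_subj, 1))])
W, *_ = np.linalg.lstsq(X, betas, rcond=None)

# Predict body shape for a new user's measurements
new_user = np.hstack([rng.normal(0, 1, n_meas), 1.0])
print("predicted betas:", new_user @ W)
```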


Subject(s)
Imaging, Three-Dimensional/methods , Virtual Reality , Anthropometry/methods , Body Image , Body Size , Computer Graphics , Computer Systems , Female , Humans , Male , Self Concept , Software , User-Computer Interface
13.
Sci Rep ; 9(1): 77, 2019 01 11.
Article in English | MEDLINE | ID: mdl-30635598

ABSTRACT

Previous human fMRI studies have reported activation of somatosensory areas not only during actual touch, but also during touch observation. However, it has remained unclear how the brain encodes visually evoked tactile intensities. Using an associative learning method, we investigated neural representations of roughness intensities evoked by (a) tactile explorations and (b) visual observation of tactile explorations. Moreover, we explored (c) modality-independent neural representations of roughness intensities using a cross-modal classification method. Case (a) showed significant decoding performance in the anterior cingulate cortex (ACC) and the supramarginal gyrus (SMG), while in case (b) the bilateral posterior parietal cortices, the inferior occipital gyrus, and the primary motor cortex were identified. Case (c) revealed shared neural activity patterns in the bilateral insula, the SMG, and the ACC. Interestingly, the insular cortices were identified only by the cross-modal classification, suggesting a potential role in modality-independent tactile processing. We further examined correlations of confusion patterns between behavioral and neural similarity matrices for each region. Significant correlations were found solely in the SMG, reflecting a close relationship between the neural activity of the SMG and roughness intensity perception. The present findings may deepen our understanding of the brain mechanisms underlying intensity perception of tactile roughness.
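A minimal sketch of the similarity-matrix comparison, correlating the off-diagonal entries of hypothetical behavioral and neural matrices (synthetic data, not the study's):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n = 5  # five roughness intensity levels

# Hypothetical 5x5 similarity (confusion) matrices, symmetrized
behav = rng.random((n, n))
behav = (behav + behav.T) / 2
neural = behav + rng.normal(0, 0.1, (n, n))
neural = (neural + neural.T) / 2

# Compare only the off-diagonal (between-level) entries
iu = np.triu_indices(n, k=1)
rho, p = spearmanr(behav[iu], neural[iu])
print(f"rho = {rho:.2f}, p = {p:.3f}")
```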


Subject(s)
Brain Mapping , Somatosensory Cortex/physiology , Touch Perception , Visual Cortex/physiology , Visual Perception , Adult , Female , Gyrus Cinguli/physiology , Healthy Volunteers , Humans , Magnetic Resonance Imaging , Male , Motor Cortex/physiology , Parietal Lobe/physiology , Young Adult
14.
IEEE Trans Cybern ; 49(3): 768-780, 2019 Mar.
Article in English | MEDLINE | ID: mdl-29993968

ABSTRACT

The human controller (HC) in manual control of a dynamical system often follows a visible and predictable reference path (target). The HC can adopt a control strategy combining closed-loop feedback and an open-loop feedforward response. The effects of the target signal waveform shape and the system dynamics on the human feedforward dynamics are still largely unknown, even for common, stable, vehicle-like dynamics. This paper studies the feedforward dynamics through computer model simulations and compares these to system identification results from human-in-the-loop experimental data. Two target waveform shapes are considered, constant velocity ramp segments and constant acceleration parabola segments. Furthermore, three representative vehicle-like system dynamics are considered: 1) a single integrator (SI); 2) a second-order system; and 3) a double integrator. The analyses show that the HC utilizes a combined feedforward/feedback control strategy for all dynamics with the parabola target, and for the SI and second-order system with the ramp target. The feedforward model parameters are, however, very different between the two target waveform shapes, illustrating the adaptability of the HC to task variables. Moreover, strong evidence of anticipatory control behavior in the HC is found for the parabola target signal. The HC anticipates the future course of the parabola target signal given extensive practice, reflected by negative feedforward time delay estimates.
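A toy discrete-time simulation of the combined strategy for the simplest case above, a single-integrator plant tracking a constant-velocity ramp; the gain is hypothetical, and human delays and remnant noise are omitted:

```python
import numpy as np

dt, T = 0.01, 10.0
t = np.arange(0, T, dt)
target = 2.0 * t                      # constant-velocity ramp target

Kp = 2.0                              # feedback gain (hypothetical)
# Ideal feedforward for a single integrator y' = u is the plant
# inverse applied to the target, i.e. the target rate.
target_rate = np.gradient(target, dt)

y, out = 0.0, []
for k in range(len(t)):
    e = target[k] - y
    u = Kp * e + target_rate[k]       # feedback + feedforward control
    y += u * dt                       # single-integrator plant
    out.append(y)

# With feedforward the tracking error vanishes; dropping the
# target_rate term leaves a steady-state lag of rate/Kp = 1.0.
print("final tracking error:", target[-1] - out[-1])
```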

15.
J Exp Psychol Learn Mem Cogn ; 45(6): 993-1013, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30179037

ABSTRACT

Objects learned within single enclosed spaces (e.g., rooms) can be represented within a single reference frame. In contrast, the representation of navigable spaces (multiple interconnected enclosed spaces) is less well understood. In this study we examined different levels of integration within memory (local, regional, global) when learning object locations in navigable space. Participants consecutively learned two distinctive regions of a virtual environment that eventually converged at a common transition point, and subsequently solved a pointing task. In Experiment 1, pointing latency increased with increasing corridor distance to the target, and additionally when pointing into the other region. Further, pointing was accelerated by alignment with local and regional reference frames when pointing within a region, and by alignment with a global reference frame when pointing across regional boundaries. Thus, participants memorized local corridors, clustered corridors into regions, and integrated globally across the entire environment. Introducing the transition point at the beginning of learning each region in Experiment 2 caused the previous region effects to vanish. Our findings emphasize the importance of locally confined spaces for structuring spatial memory, and suggest that the opportunity to integrate novel spatial information into existing representations early during learning may influence unit formation at the regional level. Further, global representations seem to be consulted only when accessing spatial information beyond regional borders. Our results are inconsistent with conceptions of spatial memory for large-scale environments based either exclusively on local reference frames or on a single reference frame encompassing the whole environment, and rather support a hierarchical representation of space.


Subject(s)
Spatial Memory , Spatial Navigation , Adult , Female , Humans , Male , Psychological Theory , Spatial Learning , Virtual Reality
16.
J Exp Psychol Learn Mem Cogn ; 45(7): 1205-1223, 2019 Jul.
Article in English | MEDLINE | ID: mdl-30047770

ABSTRACT

Most studies on spatial memory refer to the horizontal plane, leaving open the question of whether findings generalize to vertical spaces, where gravity and the visual upright of our surrounding space are salient orientation cues. In three experiments, we examined which reference frame is used to organize memory for vertical locations: one based on the body vertical, the visual-room vertical, or the direction of gravity. Participants judged interobject spatial relationships learned from a vertical layout in a virtual room. During learning and testing, we varied the orientation of the participant's body (upright vs. lying sideways) and of the visually presented room relative to gravity (e.g., rotated by 90° along the frontal plane). Across all experiments, participants made quicker or more accurate judgments when the room was oriented in the same way as during learning with respect to their body, irrespective of their orientation relative to gravity. This suggests that participants employed an egocentric, body-based reference frame for representing vertical object locations. Our study also revealed an effect of body-gravity alignment during testing. Participants recalled spatial relations more accurately when upright, regardless of the body and visual-room orientation during learning. This finding is consistent with a hypothesis of selection conflict between different reference frames. Overall, our results suggest that a body-based reference frame is preferred over salient allocentric reference frames in memory for vertical locations perceived from a single view. Further, memory of vertical space seems to be tuned to work best in the default upright body orientation.


Subject(s)
Mental Recall/physiology , Posture/physiology , Space Perception/physiology , Spatial Memory/physiology , Visual Perception/physiology , Adult , Female , Humans , Male , Young Adult
17.
Somatosens Mot Res ; 35(3-4): 212-217, 2018.
Article in English | MEDLINE | ID: mdl-30592429

ABSTRACT

The neural substrates of tactile roughness perception have been investigated in many neuroimaging studies, while relatively little effort has been devoted to investigating neural representations of visually perceived roughness. In this human fMRI study, we looked for neural activity patterns that could be attributed to five different roughness intensity levels when the stimuli were perceived visually, i.e., in the absence of any tactile sensation. During functional image acquisition, participants viewed video clips displaying a right index fingertip actively exploring the sandpapers that had been used in a preceding behavioural experiment. A whole-brain multivariate pattern analysis found four brain regions in which visual roughness intensities could be decoded: the bilateral posterior parietal cortex (PPC), the primary somatosensory cortex (S1) extending to the primary motor cortex (M1) in the right hemisphere, and the inferior occipital gyrus (IOG). In a follow-up analysis, we tested for correlations between the decoding accuracies and the tactile roughness discriminability obtained from the preceding behavioural experiment. We found no such correlation, although during scanning participants were asked to recall the tactilely perceived roughness of the sandpapers. We presume that a better paradigm is needed to reveal any potential visuo-tactile convergence. However, the present study identified brain regions that may subserve the discrimination of different intensities of visual roughness. This finding may help elucidate the neural mechanisms of visual roughness perception in the human brain.


Subject(s)
Brain/diagnostic imaging , Magnetic Resonance Imaging , Touch Perception/physiology , Visual Perception/physiology , Adult , Analysis of Variance , Brain Mapping , Female , Humans , Image Processing, Computer-Assisted , Male , Oxygen/blood , Photic Stimulation , Young Adult
18.
PLoS One ; 13(12): e0209189, 2018.
Article in English | MEDLINE | ID: mdl-30562381

ABSTRACT

Current neuroscientific models of bodily self-consciousness (BSC) argue that inaccurate integration of sensory signals leads to altered states of BSC. Indeed, using virtual reality technology, observers viewing a fake or virtual body while being exposed to tactile stimulation of the real body can experience illusory ownership of, and mislocalization towards, the virtual body (the Full-Body Illusion, FBI). Among the sensory inputs contributing to BSC, the vestibular system is believed to play a central role due to its importance in estimating self-motion and orientation. This theory is supported by clinical evidence that vestibular loss patients are more prone to altered BSC states, and by recent experimental evidence that visuo-vestibular conflicts can disrupt BSC in healthy individuals. Nevertheless, the contribution of vestibular information and self-motion perception to BSC remains largely unexplored. Here, we investigate the relationship between alterations of BSC and self-motion sensitivity in healthy individuals. Fifteen participants were exposed to visuo-vibrotactile conflicts designed to induce an FBI, and subsequently to visual rotations that evoked illusory self-motion (vection). We found that synchronous visuo-vibrotactile stimulation successfully induced the FBI, and further observed a relationship between the strength of the FBI and the time necessary for complete vection to arise. Specifically, higher self-reported FBI scores across synchronous and asynchronous conditions were associated with shorter vection latencies. Our findings are in agreement with clinical observations that vestibular loss patients have higher FBI susceptibility and lower vection latencies, and argue for increased visual over vestibular dependency during altered states of BSC.


Subject(s)
Illusions , Motion Perception , Self Concept , Touch Perception , Virtual Reality , Visual Perception , Adult , Female , Humans , Illusions/physiology , Male , Motion Perception/physiology , Physical Stimulation , Touch Perception/physiology , Vestibular Diseases/physiopathology , Vestibule, Labyrinth/physiology , Vestibule, Labyrinth/physiopathology , Visual Perception/physiology , Young Adult
19.
Sci Rep ; 8(1): 15131, 2018 10 11.
Article in English | MEDLINE | ID: mdl-30310139

ABSTRACT

In environments where orientation is ambiguous, the visual system uses prior knowledge that lighting tends to come from above to recognize objects, determine which way is up, and reorient the body. Here we investigated the extent to which the assumed light-from-above preference is affected by body orientation and by the orientation of the retina relative to gravity. We tested the ability to extract shape-from-shading in seven human male observers positioned in multiple orientations relative to gravity using a modified KUKA anthropomorphic robot arm. Observers made convex-concave judgments of a central, monocularly viewed stimulus whose shading gradient was consistent with being lit from one of 24 simulated illumination directions. By positioning observers in different roll-tilt orientations relative to gravity, and when supine, we were able to track changes in the light-from-above prior (the orientation at which a shaded disk appears maximally convex). The results confirm previous findings that the light-from-above prior changes with body orientation relative to gravity. Interestingly, the results also varied with retinal orientation, plus an additional component at approximately twice the frequency of retinal orientation. We use a modelling approach to show that the data are well predicted by summing retinal orientation with cross-multiplied utricle and saccule signals of the vestibular system, yielding gravity-dependent biases in the ability to extract shape-from-shading. We conclude that priors such as light coming from above are constantly updated by neural processes that monitor self-orientation, achieving optimal object recognition over moderate deviations from upright posture at the cost of poor recognition when extremely tilted relative to gravity.
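A toy version of the described summation: the product of simplified utricle (sine) and saccule (cosine) signals equals sin(2θ)/2, which supplies the double-frequency component. The coefficients below are hypothetical, not the paper's fitted values:

```python
import numpy as np

def prior_shift(body_tilt_deg, a=0.5, b=20.0):
    """Shift of the light-from-above prior as a weighted sum of
    retinal orientation and cross-multiplied otolith signals.
    Coefficients a and b are hypothetical placeholders."""
    th = np.deg2rad(body_tilt_deg)
    retinal = body_tilt_deg                    # retinal-orientation term
    utricle, saccule = np.sin(th), np.cos(th)  # simplified otolith afferents
    # sin * cos = sin(2*theta) / 2: twice the frequency of body tilt
    return a * retinal + b * utricle * saccule

for tilt in (0, 45, 90, 135, 180):
    print(tilt, round(float(prior_shift(tilt)), 2))
```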


Subject(s)
Gravitation , Orientation, Spatial , Visual Perception , Bayes Theorem , Humans , Lighting , Male , Photic Stimulation
20.
J Vis ; 18(11): 9, 2018 10 01.
Article in English | MEDLINE | ID: mdl-30347100

ABSTRACT

Visual heading estimation is subject to periodic patterns of constant (bias) and variable (noise) error. The nature of the errors, however, appears to differ between studies, showing underestimation in some but overestimation in others. We investigated whether field of view (FOV), the availability of binocular disparity cues, motion profile, and visual scene layout can account for error characteristics, with a potential mediating effect of vection. Twenty participants (12 females) reported heading and rated vection for visual horizontal motion stimuli with headings spanning the full circle, while we systematically varied the above factors. Overall, the results show constant errors away from the fore-aft axis. Error magnitude was affected by FOV, disparity, and scene layout. Variable errors varied with heading angle and depended on scene layout. Higher vection ratings were associated with smaller variable errors. Vection ratings depended on FOV, motion profile, and scene layout, with the highest ratings for a large FOV, a cosine-bell velocity profile, and a ground-plane scene rather than a dot-cloud scene. Although these factors affected error magnitude, differences in error direction were observed only between participants. We show that the observations are consistent with prior beliefs that headings align with the cardinal axes, where the attraction of each axis is an idiosyncratic property.


Subject(s)
Depth Perception/physiology , Motion Perception/physiology , Sensory Thresholds/physiology , Adult , Cues , Female , Humans , Individuality , Male , Photic Stimulation/methods , Young Adult