Results 1 - 13 of 13

1.
Hum Mov Sci ; 96: 103250, 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38964027

ABSTRACT

Movement sonification can improve motor control in both healthy subjects (e.g., learning or refining a sport skill) and those with sensorimotor deficits (e.g., stroke patients and deafferented individuals). It is not known whether improved motor control and learning from movement sonification are driven by feedback-based real-time ("online") trajectory adjustments, adjustments to internal models over multiple trials, or both. We searched for evidence of online trajectory adjustments (muscle twitches) in response to movement sonification feedback by comparing the kinematics and error of reaches made with online (i.e., real-time) and terminal sonification feedback. We found that reaches made with online feedback were significantly jerkier than reaches made with terminal feedback, indicating increased muscle twitching (i.e., online trajectory adjustment). Using a between-subjects design, we found that online feedback was associated with improved motor learning of a reach path and target over terminal feedback; however, using a within-subjects design, we found that switching participants who had learned with online sonification feedback to terminal feedback was associated with a decrease in error. Thus, our results suggest that, with our task and sonification, movement sonification leads to online trajectory adjustments that improve internal models over multiple trials but are not, in themselves, helpful online corrections.
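
The study above uses jerk, the third time-derivative of position, as a marker of online trajectory adjustments. As a rough illustration of how such a metric can be computed from sampled reach kinematics, here is a minimal Python sketch; the function name, sampling rate and simulated reach profiles are illustrative assumptions, not material from the paper.

    import numpy as np

    def mean_squared_jerk(position, dt):
        # Jerk is the third time-derivative of position; approximate it with
        # repeated finite differences and summarize it as a mean square.
        velocity = np.gradient(position, dt)
        acceleration = np.gradient(velocity, dt)
        jerk = np.gradient(acceleration, dt)
        return np.mean(jerk ** 2)

    # Hypothetical 1 s reach sampled at 200 Hz: a smooth minimum-jerk profile
    # versus the same profile with small superimposed corrections ("twitches").
    t = np.linspace(0.0, 1.0, 200)
    dt = t[1] - t[0]
    smooth_reach = 0.3 * (10 * t**3 - 15 * t**4 + 6 * t**5)
    corrected_reach = smooth_reach + 0.002 * np.sin(40 * np.pi * t)

    print(mean_squared_jerk(smooth_reach, dt))      # lower
    print(mean_squared_jerk(corrected_reach, dt))   # higher: more online adjustment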

2.
NPJ Microgravity ; 10(1): 28, 2024 Mar 13.
Article in English | MEDLINE | ID: mdl-38480736

ABSTRACT

Self-motion perception is a multi-sensory process that involves visual, vestibular, and other cues. When perception of self-motion is induced using only visual motion, vestibular cues indicate that the body remains stationary, which may bias an observer's perception. When lowering the precision of the vestibular cue by, for example, lying down or by adapting to microgravity, these biases may decrease, accompanied by a decrease in precision. To test this hypothesis, we used a move-to-target task in virtual reality. Astronauts and Earth-based controls were shown a target at a range of simulated distances. After the target disappeared, forward self-motion was induced by optic flow. Participants indicated when they thought they had arrived at the target's previously seen location. Astronauts completed the task on Earth (supine and sitting upright) prior to space travel, early and late in space, and early and late after landing. Controls completed the experiment on Earth using a similar regime, with the supine posture used to simulate being in space. While variability was similar across all conditions, the supine posture led to significantly higher gains (target distance/perceived travel distance) than the sitting posture for the astronauts pre-flight and early post-flight, but not late post-flight. No difference was detected between the astronauts' performance on Earth and onboard the ISS, indicating that judgments of traveled distance were largely unaffected by long-term exposure to microgravity. Overall, this constitutes mixed evidence as to whether non-visual cues to travel distance are integrated with relevant visual cues when self-motion is simulated using optic flow alone.
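
The gain reported above is defined as simulated target distance divided by the distance actually traveled when the participant responded. Below is a minimal sketch of that computation, assuming traveled distance is obtained by integrating the optic-flow speed profile; the speed profile, trial values and function names are invented for illustration.

    import numpy as np

    def traveled_distance(speed, dt):
        # Distance covered by visually simulated self-motion: integral of speed over time.
        return np.sum(speed) * dt

    def response_gain(target_distance, distance_at_response):
        # Gain as defined in the abstract: target distance / produced travel distance.
        # Gains above 1 mean the participant responded before covering the full distance.
        return target_distance / distance_at_response

    # Illustrative trial: constant 2 m/s forward optic flow, button press after 3.5 s,
    # for a target simulated 8 m ahead.
    dt = 0.01
    speed = np.full(350, 2.0)             # 3.5 s of motion at 2 m/s
    d = traveled_distance(speed, dt)      # 7.0 m
    print(response_gain(8.0, d))          # ~1.14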

3.
PLoS One ; 19(3): e0295110, 2024.
Article in English | MEDLINE | ID: mdl-38483949

ABSTRACT

To interact successfully with moving objects in our environment we need to be able to predict their behavior. Predicting the position of a moving object requires an estimate of its velocity. When flow parsing during self-motion is incomplete (that is, when some of the retinal motion created by self-motion is incorrectly attributed to object motion), object velocity estimates become biased. Further, the process of flow parsing should add noise and lead to object velocity judgements being more variable during self-motion. Biases and lowered precision in velocity estimation should then translate to biases and lowered precision in motion extrapolation. We investigated this relationship between self-motion, velocity estimation and motion extrapolation with two tasks performed in a realistic virtual reality (VR) environment: first, participants were shown a ball moving laterally that disappeared after a certain time. They then indicated by button press when they thought the ball would have hit a target rectangle positioned in the environment. While the ball was visible, participants sometimes experienced simultaneous visual lateral self-motion in either the same or in the opposite direction of the ball. The second task was a two-interval forced-choice task in which participants judged which of two motions was faster: in one interval they saw the same ball they observed in the first task, while in the other they saw a ball cloud whose speed was controlled by a PEST staircase. While observing the single ball, they were again moved visually either in the same or opposite direction as the ball or they remained static. We found the expected biases in estimated time-to-contact, while for the speed estimation task this was only the case when the ball and observer were moving in opposite directions. Our hypotheses regarding precision were largely unsupported by the data. Overall, we draw several conclusions from this experiment: first, incomplete flow parsing can affect motion prediction. Further, it suggests that time-to-contact estimation and speed judgements are determined by partially different mechanisms. Finally, and perhaps most strikingly, there appear to be certain compensatory mechanisms at play that allow for much higher-than-expected precision when observers are experiencing self-motion, even when self-motion is simulated only visually.


Subjects
Motion Perception, Humans, Motion (Physics), Time Factors, Retina, Bias
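
Because the prediction task above asks when an occluded ball would have hit the target, any bias in the ball's estimated speed maps directly onto a time-to-contact bias. The toy sketch below illustrates that link under incomplete flow parsing; the flow-parsing gain, speeds, distances and sign convention are illustrative assumptions, not values from the study.

    def perceived_object_speed(world_speed, self_motion_speed, flow_parsing_gain=0.8):
        # Incomplete flow parsing: only a fraction (the gain) of the retinal motion
        # caused by self-motion is subtracted, so the remainder is attributed to the
        # object. Sign convention: positive self-motion opposes the ball's motion.
        retinal_speed = world_speed + self_motion_speed
        return retinal_speed - flow_parsing_gain * self_motion_speed

    def predicted_time_to_contact(distance_to_target, perceived_speed):
        return distance_to_target / perceived_speed

    ball_speed = 1.5      # m/s, lateral (illustrative)
    distance = 3.0        # m from disappearance point to the target rectangle
    true_ttc = distance / ball_speed

    for self_motion in (-1.0, 0.0, 1.0):  # same direction, static, opposite direction
        v_hat = perceived_object_speed(ball_speed, self_motion)
        print(self_motion, round(predicted_time_to_contact(distance, v_hat), 3), true_ttc)
    # Opposite-direction self-motion inflates perceived speed and shortens the
    # predicted time-to-contact; same-direction self-motion does the reverse.
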
4.
Perception ; 53(3): 197-207, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38304970

ABSTRACT

Aristotle believed that objects fell at a constant velocity. However, Galileo Galilei showed that when an object falls, gravity causes it to accelerate. Regardless, Aristotle's claim raises the possibility that people's visual perception of falling motion might be biased away from acceleration towards constant velocity. We tested this idea by requiring participants to judge whether a ball moving in a simulated naturalistic setting appeared to accelerate or decelerate as a function of its motion direction and the amount of acceleration/deceleration. We found that the point of subjective constant velocity (PSCV) differed between up and down but not between left and right motion directions. The PSCV difference between up and down indicated that more acceleration was needed for a downward-falling object to appear at constant velocity than for an upward "falling" object. We found no significant differences in sensitivity to acceleration for the different motion directions. Generalized linear mixed modeling determined that participants relied predominantly on acceleration when making these judgments. Our results support the idea that Aristotle's belief may in part be due to a bias that reduces the perceived magnitude of acceleration for falling objects, a bias not revealed in previous studies of the perception of visual motion.


Subjects
Motion Perception, Humans, Acceleration, Visual Perception, Gravitation
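
The point of subjective constant velocity (PSCV) reported above is the acceleration level at which "accelerating" and "decelerating" responses are equally likely. The sketch below shows one simple way to extract a PSCV by fitting a logistic psychometric function to simulated binary judgments; the study itself used generalized linear mixed modeling, and all data and parameter values here are invented.

    import numpy as np
    from scipy.optimize import curve_fit

    def psychometric(accel, pscv, slope):
        # Probability of responding "accelerating" as a function of signed acceleration.
        return 1.0 / (1.0 + np.exp(-(accel - pscv) / slope))

    # Simulated observer: needs +0.4 m/s^2 of physical acceleration for the motion to
    # look like constant velocity (a bias away from perceiving acceleration).
    rng = np.random.default_rng(1)
    accels = np.repeat(np.linspace(-2.0, 2.0, 9), 40)
    responses = rng.random(accels.size) < psychometric(accels, pscv=0.4, slope=0.5)

    (pscv_hat, slope_hat), _ = curve_fit(psychometric, accels,
                                         responses.astype(float), p0=[0.0, 1.0])
    print(pscv_hat, slope_hat)   # recovered PSCV close to 0.4
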
5.
Sci Rep ; 13(1): 20075, 2023 Nov 16.
Article in English | MEDLINE | ID: mdl-37974023

ABSTRACT

Changes in perceived eye height influence visually perceived object size in both the real world and in virtual reality. In virtual reality, conflicts can arise between the eye height in the real world and the eye height simulated in a VR application. We hypothesized that participants would be influenced more by variation in simulated eye height when they had a clear expectation about their eye height in the real world, such as when sitting or standing, and less so when they did not have a clear estimate of the distance between their eyes and the real-life ground plane, e.g., when lying supine. Using virtual reality, 40 participants compared the height of a red square simulated at three different distances (6, 12, and 18 m) against the length of a physical stick (38.1 cm) held in their hands. They completed this task in all combinations of four real-life postures (supine, sitting, standing, standing on a table) and three simulated eye heights that corresponded to each participant's real-world eye height (on average, 123 cm sitting, 161 cm standing, and 201 cm on the table). Confirming previous results, the square's perceived size varied inversely with simulated eye height. Variations in simulated eye height affected participants' perception of size significantly more when sitting than in the other postures (supine, standing, standing on a table). This shows that real-life posture can influence the perception of size in VR. However, since simulated eye height did not affect size estimates less in the supine than in the standing posture, our hypothesis that humans would be more influenced by variations in eye height when they had a reliable estimate of the distance between their eyes and the ground plane in the real world was not fully confirmed.


Subjects
Posture, Size Perception, Humans, Standing Position, Eye, Sitting Position
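
The size judgments above rest on the geometric link between eye height, the angles an object standing on the ground subtends relative to eye level, and its physical size. The sketch below works through that horizon-ratio style geometry and shows why, for a fixed physical square, raising the simulated eye height shrinks the inferred size when the observer scales the scene by their own assumed eye height; the square's size and distance are assumed for illustration, while the three eye heights are the study's reported means.

    import math

    def retinal_angles(object_height, distance, simulated_eye_height):
        # Angles (rad) from eye level down to the object's base and up to its top,
        # for an object standing on the ground plane.
        down_to_base = math.atan2(simulated_eye_height, distance)
        up_to_top = math.atan2(object_height - simulated_eye_height, distance)
        return down_to_base, up_to_top

    def perceived_height(down_to_base, up_to_top, assumed_eye_height):
        # Invert the same geometry, but scale it with the observer's assumed eye height.
        inferred_distance = assumed_eye_height / math.tan(down_to_base)
        return assumed_eye_height + inferred_distance * math.tan(up_to_top)

    object_height, distance = 1.0, 12.0   # a 1 m square simulated 12 m away (assumed)
    assumed = 1.61                        # observer assumes a standing eye height of 1.61 m
    for simulated in (1.23, 1.61, 2.01):  # mean sitting / standing / on-table eye heights
        angles = retinal_angles(object_height, distance, simulated)
        print(simulated, round(perceived_height(*angles, assumed), 3))
    # Higher simulated eye height -> smaller inferred square, i.e., perceived size
    # varies inversely with simulated eye height, as in the abstract.
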
6.
PLoS One ; 18(1): e0267983, 2023.
Article in English | MEDLINE | ID: mdl-36716328

ABSTRACT

To interact successfully with moving objects in our environment we need to be able to predict their behavior. Predicting the position of a moving object requires an estimate of its velocity. When flow parsing during self-motion is incomplete (that is, when some of the retinal motion created by self-motion is incorrectly attributed to object motion), object velocity estimates become biased. Further, the process of flow parsing should add noise and lead to object velocity judgements being more variable during self-motion. Biases and lowered precision in velocity estimation should then translate to biases and lowered precision in motion extrapolation. We investigate this relationship between self-motion, velocity estimation and motion extrapolation with two tasks performed in a realistic virtual reality (VR) environment: first, participants are shown a ball moving laterally that disappears after a certain time. They then indicate by button press when they think the ball would have hit a target rectangle positioned in the environment. While the ball is visible, participants sometimes experience simultaneous visual lateral self-motion in either the same or in the opposite direction of the ball. The second task is a two-interval forced-choice task in which participants judge which of two motions is faster: in one interval they see the same ball they observed in the first task, while in the other they see a ball cloud whose speed is controlled by a PEST staircase. While observing the single ball, they are again moved visually either in the same or opposite direction as the ball or they remain static. We expect participants to overestimate the speed of a ball that moves opposite to their simulated self-motion (speed estimation task), which should then lead them to underestimate the time it takes the ball to reach the target rectangle (prediction task). Seeing the ball during visually simulated self-motion should increase variability in both tasks. We expect performance in the two tasks to be correlated, in both accuracy and precision.


Subjects
Motion Perception, Humans, Motion (Physics), Time Factors, Retina, Bias
7.
Atten Percept Psychophys ; 84(1): 25-46, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34704212

ABSTRACT

Judging object speed during observer self-motion requires disambiguating retinal stimulation from two sources: self-motion and object motion. According to the Flow Parsing hypothesis, observers estimate their own motion, then subtract the corresponding retinal motion from the total retinal stimulation and interpret the remaining stimulation as pertaining to object motion. Subtracting noisier self-motion information from retinal input should lead to a decrease in precision. Furthermore, when self-motion is only simulated visually, self-motion is likely to be underestimated, yielding an overestimation of target speed when target and observer move in opposite directions and an underestimation when they move in the same direction. We tested this hypothesis with a two-alternative forced-choice task in which participants judged which of two motions, presented in an immersive 3D environment, was faster. One motion interval contained a ball cloud whose speed was selected dynamically according to a PEST staircase, while the other contained one big target travelling laterally at a fixed speed. While viewing the big target, participants were either static or experienced visually simulated lateral self-motion in the same or opposite direction of the target. Participants were not significantly biased in either motion profile, and precision was only significantly lower when participants moved visually in the direction opposite to the target. We conclude that, when immersed in an ecologically valid 3D environment with rich self-motion cues, participants perceive an object's speed accurately at a small precision cost, even when self-motion is simulated only visually.


Subjects
Motion Perception, Cues, Humans, Motion (Physics), Photic Stimulation, Retina, Visual Perception
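
The Flow Parsing account summarized above amounts to subtracting an estimate of the self-motion component from the total retinal motion; an underestimated, noisy self-motion estimate then predicts both a directional bias and extra variability in the recovered object speed. The Monte Carlo sketch below illustrates those predictions (not the paper's actual findings, which showed no significant bias); the parsing gain, noise levels and speeds are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(7)

    def perceived_speeds(target_speed, self_speed, n=10_000,
                         flow_parsing_gain=0.85, retinal_sd=0.05, self_sd=0.15):
        # Noisy retinal motion minus a noisier, underestimated (gain < 1) self-motion
        # estimate. Positive self_speed = observer moves opposite to the target,
        # which adds to the target's retinal speed.
        retinal = target_speed + self_speed + rng.normal(0.0, retinal_sd, n)
        self_estimate = (flow_parsing_gain * self_speed
                         + rng.normal(0.0, self_sd, n) * (self_speed != 0))
        return retinal - self_estimate

    target = 1.2  # m/s (illustrative)
    for self_speed, label in ((0.0, "static"), (0.8, "opposite"), (-0.8, "same direction")):
        v = perceived_speeds(target, self_speed)
        print(f"{label:>14}: mean {v.mean():.3f} m/s, sd {v.std():.3f} m/s")
    # Predicted pattern: overestimation for "opposite", underestimation for
    # "same direction", and extra variability whenever self-motion must be parsed out.
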
9.
Sci Rep ; 11(1): 7108, 2021 Mar 29.
Article in English | MEDLINE | ID: mdl-33782443

ABSTRACT

In a 2-alternative forced-choice protocol, observers judged the duration of ball motions shown on an immersive virtual-reality display as approaching in the sagittal plane along parabolic trajectories compatible with Earth gravity effects. In different trials, the ball shifted along the parabolas with one of three different laws of motion: constant tangential velocity, constant vertical velocity, or gravitational acceleration. Only the latter motion was fully consistent with Newton's laws in the Earth gravitational field, whereas the motions with constant velocity profiles obeyed the spatio-temporal constraint of parabolic paths dictated by gravity but violated the kinematic constraints. We found that the discrimination of duration was accurate and precise for all types of motions, but the discrimination for the trajectories at constant tangential velocity was slightly but significantly more precise than that for the trajectories at gravitational acceleration or constant vertical velocity. The results are compatible with a heuristic internal representation of gravity effects that can be engaged when viewing projectiles shifting along parabolic paths compatible with Earth gravity, irrespective of the specific kinematics. Opportunistic use of a moving frame attached to the target may favour visual tracking of targets with constant tangential velocity, accounting for the slightly superior duration discrimination.
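
The three laws of motion above trace the same Earth-gravity parabola but differ in how the target progresses along it over time. The sketch below shows one way such stimuli could be parameterized, sampling each law at equal time steps; the launch velocities are assumed values, not the study's stimulus parameters.

    import numpy as np

    g = 9.81
    v0x, v0y = 3.0, 4.0                  # assumed launch velocities (m/s)
    T = 2 * v0y / g                      # flight time of the natural parabola
    n = 200

    # Law 1: gravitational acceleration (natural projectile motion), equal time steps.
    t = np.linspace(0.0, T, n)
    x_grav = v0x * t
    y_grav = v0y * t - 0.5 * g * t**2

    # Law 2: same spatial path traversed at constant vertical speed.
    # Equal time steps now mean equal steps in height: up to the apex, then back down.
    apex = v0y**2 / (2 * g)
    y_cv = np.concatenate([np.linspace(0.0, apex, n // 2), np.linspace(apex, 0.0, n // 2)])
    disc = np.sqrt(np.maximum(v0y**2 - 2 * g * y_cv, 0.0))
    x_cv = np.where(np.arange(n) < n // 2,
                    v0x * (v0y - disc) / g,     # ascending branch of the parabola
                    v0x * (v0y + disc) / g)     # descending branch

    # Law 3: same spatial path traversed at constant tangential speed.
    # Equal time steps mean equal steps in arc length along the path.
    s = np.concatenate([[0.0], np.cumsum(np.hypot(np.diff(x_grav), np.diff(y_grav)))])
    s_uniform = np.linspace(0.0, s[-1], n)
    x_ct = np.interp(s_uniform, s, x_grav)
    y_ct = np.interp(s_uniform, s, y_grav)

    print(x_grav[-1], x_cv[-1], x_ct[-1])   # all three traversals end at the same point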

10.
PLoS One ; 15(8): e0236732, 2020.
Article in English | MEDLINE | ID: mdl-32813686

ABSTRACT

Humans expect downwards moving objects to accelerate and upwards moving objects to decelerate. These results have been interpreted as humans maintaining an internal model of gravity. We have previously suggested an interpretation of these results within a Bayesian framework of perception: earth gravity could be represented as a Strong Prior that overrules noisy sensory information (Likelihood) and therefore attracts the final percept (Posterior) very strongly. Based on this framework, we use published data from a timing task involving gravitational motion to determine the mean and the standard deviation of the Strong Earth Gravity Prior. To obtain its mean, we refine a model of mean timing errors we proposed in a previous paper (Jörges & López-Moliner, 2019), while expanding the range of conditions under which it yields adequate predictions of performance. This underscores our previous conclusion that the gravity prior is likely to be very close to 9.81 m/s². To obtain the standard deviation, we identify different sources of sensory and motor variability reflected in timing errors. We then model timing responses based on quantitative assumptions about these sensory and motor errors for a range of standard deviations of the earth gravity prior, and find that a standard deviation of around 2 m/s² yields the best fit. This value is likely to represent an upper bound, as there are strong theoretical reasons, along with supporting empirical evidence, for the standard deviation of the earth gravity prior being lower than this value.


Subjects
Gravitation, Models, Statistical, Adult, Earth (Planet), Female, Humans, Male, Young Adult
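
The modeling above treats earth gravity as a Gaussian prior (mean near 9.81 m/s², standard deviation around 2 m/s²) that is combined with a noisy sensory likelihood. The sketch below shows that standard conjugate-Gaussian combination and how a narrow prior pulls the posterior toward 9.81 m/s²; the likelihood values are arbitrary illustrations, not quantities fitted in the paper.

    def gaussian_posterior(prior_mean, prior_sd, like_mean, like_sd):
        # Conjugate combination of two Gaussians: precision-weighted mean, combined SD.
        w_prior = 1.0 / prior_sd**2
        w_like = 1.0 / like_sd**2
        post_mean = (w_prior * prior_mean + w_like * like_mean) / (w_prior + w_like)
        post_sd = (w_prior + w_like) ** -0.5
        return post_mean, post_sd

    prior_mean, prior_sd = 9.81, 2.0       # values discussed in the abstract
    like_mean, like_sd = 0.7 * 9.81, 4.0   # a noisy percept of a simulated 0.7 g scene (assumed)
    print(gaussian_posterior(prior_mean, prior_sd, like_mean, like_sd))
    # The posterior mean lands much closer to 9.81 than to the simulated 6.87 m/s^2,
    # which is the sense in which a reliable prior "overrules" discrepant sensory evidence.
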
11.
Sci Rep ; 9(1): 14094, 2019 Oct 01.
Article in English | MEDLINE | ID: mdl-31575901

ABSTRACT

There is evidence that humans rely on an earth gravity (9.81 m/s²) prior for a series of tasks involving perception and action, the reason being that gravity helps predict future positions of moving objects. Eye movements in turn are partially guided by predictions about observed motion. Thus, the question arises whether knowledge about gravity is also used to guide eye movements: if humans rely on a representation of earth gravity for the control of eye movements, earth-gravity-congruent motion should elicit improved visual pursuit. In a pre-registered experiment, we presented participants (n = 10) with parabolic motion governed by six different gravities (-1/0.7/0.85/1/1.15/1.3 g), two initial vertical velocities and two initial horizontal velocities in a 3D environment. Participants were instructed to follow the target with their eyes. We tracked their gaze and computed the visual gain (velocity of the eyes divided by velocity of the target) as a proxy for the quality of pursuit. An LMM analysis with gravity condition as fixed effect and intercepts varying per subject showed that the gain was lower for -1 g than for 1 g (by -0.13, SE = 0.005). This model was significantly better than a null model without gravity as fixed effect (p < 0.001), supporting our hypothesis. A comparison of 1 g and the remaining gravity conditions revealed that 1.15 g (by 0.043, SE = 0.005) and 1.3 g (by 0.065, SE = 0.005) were associated with lower gains, while 0.7 g (by 0.054, SE = 0.005) and 0.85 g (by 0.029, SE = 0.005) were associated with higher gains. This model was again significantly better than a null model (p < 0.001), contradicting our hypothesis. Post-hoc analyses revealed that confounds in the 0.7/0.85/1/1.15/1.3 g conditions may be responsible for these contradictory results. Despite these discrepancies, our data provide some support for the hypothesis that internalized knowledge about earth gravity guides eye movements.


Subjects
Eye Movements, Gravitation, Adult, Eye Movements/physiology, Female, Humans, Male, Motion (Physics), Motion Perception, Photic Stimulation, Young Adult
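
The pursuit measure above, visual gain, is simply eye velocity divided by target velocity. The sketch below computes it from position traces; the sampling rate, the traces and the median summary are illustrative assumptions rather than the study's analysis pipeline.

    import numpy as np

    def pursuit_gain(eye_pos, target_pos, dt):
        # Visual gain as defined in the abstract: eye velocity divided by target
        # velocity, computed sample-wise and summarized here by the median.
        eye_vel = np.gradient(eye_pos, dt)
        target_vel = np.gradient(target_pos, dt)
        valid = np.abs(target_vel) > 1e-6     # avoid dividing by near-zero target velocity
        return np.median(eye_vel[valid] / target_vel[valid])

    # Illustrative trial at 100 Hz: the eye covers only 90% of the target's displacement.
    dt = 0.01
    t = np.arange(0.0, 1.0, dt)
    target = 10.0 * t          # target position (deg), constant 10 deg/s
    eye = 9.0 * t              # eye position (deg), constant 9 deg/s
    print(pursuit_gain(eye, target, dt))   # ~0.9
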
12.
Vision Res ; 149: 47-58, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29913247

ABSTRACT

Evidence suggests that humans rely on an earth gravity prior for sensory-motor tasks like catching or reaching. Even under earth-discrepant conditions, this prior biases perception and action towards assuming a gravitational downwards acceleration of 9.81 m/s². This can be particularly detrimental in interactions with virtual environments employing earth-discrepant gravity conditions for their visual presentation. The present study thus investigates how well humans discriminate visually presented gravities and which cues they use to extract gravity from the visual scene. To this end, we employed a two-interval forced-choice design. In Experiment 1, participants had to judge which of two presented parabolas had the higher underlying gravity. We used two initial vertical velocities, two horizontal velocities and a constant target size. Experiment 2 added a manipulation of the reliability of the target size. Experiment 1 shows that participants have generally high discrimination thresholds for visually presented gravities, with Weber fractions ranging from 13% to beyond 30%. We identified the rate of change of the elevation angle (γ) and of the visual angle (θ) as major cues. Experiment 2 furthermore suggests that size variability has a small influence on discrimination thresholds, while at the same time larger size variability increases reliance on γ and decreases reliance on θ. All in all, even though humans use all the available information, they display low precision when extracting the governing gravity from a visual scene, which might further limit our capability to adapt to earth-discrepant gravity conditions with visual information alone.


Subjects
Gravitation, Judgment/physiology, Motion Perception/physiology, Acceleration, Adult, Bayes Theorem, Cues, Discrimination, Psychological, Female, Humans, Male, Middle Aged, Optic Flow/physiology, Photic Stimulation/methods, Reproducibility of Results, Virtual Reality, Young Adult
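
The two cues identified above, the rate of change of the elevation angle (γ) and of the visual angle (θ), follow directly from the simulated geometry of a ball of known physical size. The sketch below computes both for a parabolic approach; the launch parameters, ball size and viewing distance are assumed purely for illustration.

    import numpy as np

    g, ball_diameter = 9.81, 0.25   # gravity (m/s^2) and an assumed ball size (m)
    v0y, vz = 4.0, 6.0              # assumed vertical launch and approach speeds (m/s)
    z0 = 15.0                       # assumed initial horizontal distance to the observer (m)

    t = np.linspace(0.0, 2 * v0y / g, 300)
    height = v0y * t - 0.5 * g * t**2   # ball height relative to eye level (launched at eye level)
    distance = z0 - vz * t              # horizontal distance to the observer

    elevation = np.arctan2(height, distance)                                      # gamma
    visual_angle = 2 * np.arctan2(ball_diameter / 2, np.hypot(height, distance))  # theta

    gamma_dot = np.gradient(elevation, t)      # cue 1: rate of change of the elevation angle
    theta_dot = np.gradient(visual_angle, t)   # cue 2: rate of change of the visual angle
    print(gamma_dot[:3], theta_dot[:3])
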
13.
Front Hum Neurosci ; 11: 203, 2017.
Article in English | MEDLINE | ID: mdl-28503140

ABSTRACT

In the future, humans are likely to be exposed to environments with altered gravity conditions, be it only visually (Virtual and Augmented Reality) or visually and bodily (space travel). As visually and bodily perceived gravity, as well as an internalized representation of earth gravity, are involved in a series of tasks such as catching, grasping, body orientation estimation and spatial inferences, humans will need to adapt to these new gravity conditions. Performance under earth-gravity-discrepant conditions has been shown to be relatively poor, and the few studies conducted on gravity adaptation are rather discouraging. Especially in VR on earth, conflicts between bodily and visual gravity cues seem to make a full adaptation to visually perceived earth-discrepant gravities nearly impossible, and even in space, when visual and bodily cues are congruent, adaptation is extremely slow. We invoke a Bayesian framework for gravity-related perceptual processes, in which earth gravity holds the status of a so-called "strong prior". Like other strong priors, the gravity prior has developed through years and years of experience in an earth-gravity environment. For this reason, the reliability of this representation is extremely high, and it overrules any sensory information to the contrary. While other factors, such as the multisensory nature of gravity perception, also need to be taken into account, we present the strong prior account as a unifying explanation for empirical results in gravity perception and adaptation to earth-discrepant gravities.
