Results 1 - 5 of 5
1.
Sci Rep; 14(1): 15549, 2024 Jul 5.
Article in English | MEDLINE | ID: mdl-38969745

ABSTRACT

Interacting with objects in our environment requires determining their locations, often with respect to surrounding objects (i.e., allocentrically). According to the scene grammar framework, these usually small, local objects are movable within a scene and represent the lowest level of a scene's hierarchy. How do higher hierarchical levels of scene grammar influence allocentric coding for memory-guided actions? Here, we focused on the effect of large, immovable objects (anchors) on the encoding of local object positions. In a virtual reality study, participants (n = 30) viewed one of four possible scenes (two kitchens or two bathrooms), each containing two anchors connected by a shelf, on which three local objects (congruent with one of the anchors) were presented (Encoding). The scene was then re-presented (Test) with (1) the local objects missing and (2) one of the anchors either shifted (Shift) or not (No shift). Participants then saw a floating local object (the target), which they grabbed and placed back on the shelf in its remembered position (Response). Eye-tracking data revealed that both local objects and anchors were fixated, with a preference for local objects. Additionally, anchors guided allocentric coding of local objects despite being task-irrelevant. Overall, anchors implicitly influence the spatial coding of local object locations for memory-guided actions within naturalistic (virtual) environments.
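The Shift manipulation suggests a straightforward analysis: if anchors contribute to allocentric coding, placements in Shift trials should be displaced in the direction of the anchor shift. Below is a minimal Python sketch of such a computation; the function, variable names, and example values are hypothetical illustrations, not the study's actual analysis code or data.

```python
import numpy as np

def anchor_shift_weight(placed_x, encoded_x, anchor_shift_x):
    """Fraction of the anchor shift carried over into each placement
    (1.0 = placement follows the anchor fully, 0.0 = no anchor influence)."""
    displacement = np.asarray(placed_x) - np.asarray(encoded_x)
    return displacement / anchor_shift_x

# Hypothetical example: anchor shifted 0.2 m to the right at Test.
placed = [0.12, 0.05, 0.09]    # placement x-positions (m), one per trial
encoded = [0.0, 0.0, 0.0]      # original object x-positions (m)
weights = anchor_shift_weight(placed, encoded, anchor_shift_x=0.2)
print(weights.mean())          # mean proportion of the shift adopted
```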


Subjects
Semantics, Virtual Reality, Humans, Female, Male, Adult, Young Adult, Space Perception/physiology, Memory/physiology
2.
J Vis; 24(7): 10, 2024 Jul 2.
Article in English | MEDLINE | ID: mdl-38995109

ABSTRACT

A current focus in sensorimotor research is the study of human perception and action in increasingly naturalistic tasks and visual environments. This is further enabled by the recent commercial success of virtual reality (VR) technology, which allows for highly realistic but well-controlled three-dimensional (3D) scenes. VR enables a multitude of ways to interact with virtual objects, but such interaction techniques are only rarely evaluated and compared before being selected for a sensorimotor experiment. Here, we compare different response techniques for a memory-guided action task in which participants indicated the position of a previously seen 3D object in a VR scene: pointing, using a virtual laser pointer of short or unlimited length, and placing, using either the target object itself or a generic reference cube. The response techniques differed in the availability of 3D object cues and in whether participants had to physically walk to the remembered object position. Object placement was the most accurate technique but also the slowest, due to repeated repositioning; when placing objects, participants tended to match the original object's orientation. In contrast, the laser pointer was fastest but least accurate, with the short pointer offering a good speed-accuracy compromise. Our findings can help researchers select appropriate methods when studying naturalistic visuomotor behavior in virtual environments.
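Comparing response techniques along a speed-accuracy trade-off reduces to two summary measures per technique: placement error and response time. Here is a minimal sketch of such a comparison; the names and example values are hypothetical, not the study's code or data.

```python
import numpy as np

def placement_error(responses, targets):
    """Mean 3D Euclidean distance (m) between response and target positions."""
    d = np.linalg.norm(np.asarray(responses) - np.asarray(targets), axis=1)
    return d.mean()

# Hypothetical trials for one response technique: (x, y, z) in metres.
targets   = [(0.5, 1.0, 2.0), (0.3, 1.0, 1.5)]
responses = [(0.48, 1.02, 2.05), (0.35, 0.98, 1.46)]
times_s   = [4.1, 3.8]  # response times in seconds

print(f"error = {placement_error(responses, targets):.3f} m, "
      f"time = {np.mean(times_s):.1f} s")
```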


Subjects
Virtual Reality, Humans, Male, Female, Adult, Young Adult, Psychomotor Performance/physiology, Cues, Photic Stimulation/methods
3.
J Neurophysiol; 130(1): 104-116, 2023 Jul 1.
Article in English | MEDLINE | ID: mdl-37283453

ABSTRACT

Pupillary responses have been reliably identified for cognitive and motor tasks, but less is known about their relation to mentally simulated movements (known as motor imagery). Previous work found pupil dilations during the execution of simple finger movements, where peak pupillary dilation scaled with the complexity of the finger movement and the force required. Recently, pupillary dilations were also reported during imagery of grasping and piano playing. Here, we examined whether pupillary responses are sensitive to the dynamics of the underlying motor task for both executed and imagined reach movements. Participants reached, or imagined reaching, to one of three targets placed at different distances from a start position. Both executed and imagined movement times scaled with target distance and were highly correlated, confirming previous work and suggesting that participants did imagine the respective movement. Increased pupillary dilation was evident during motor execution compared with rest, with stronger dilations for larger movements. Pupil dilations also occurred during motor imagery; however, they were generally weaker than those during motor execution and were not influenced by imagined movement distance. Instead, dilations during motor imagery resembled pupil responses obtained during a nonmotor imagery task (imagining a previously viewed painting). Our results demonstrate that pupillary responses can reliably capture the dynamics of an executed goal-directed reaching movement, but suggest that pupillary responses during imagined reaching reflect general cognitive processes rather than motor-specific components related to the simulated dynamics of the sensorimotor system.

NEW & NOTEWORTHY Pupil size is influenced by the performance of cognitive and motor tasks. Here, we demonstrate that pupil size increases not only during execution but also during mental simulation of goal-directed reaching movements. However, pupil dilations scale with the movement amplitude of executed but not of imagined movements, and are similar during motor imagery and a nonmotor imagery task.
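A common way to quantify the dilations described here is to baseline-correct each trial's pupil trace and take its peak. The sketch below illustrates one such computation; the sampling rate, baseline window, and example trace are hypothetical, not taken from the study.

```python
import numpy as np

def peak_dilation(pupil_trace, baseline_window, fs=120):
    """Peak pupil dilation relative to a pre-movement baseline.

    pupil_trace: pupil diameter samples (mm) for one trial
    baseline_window: (start_s, end_s) of the baseline interval
    fs: sampling rate in Hz
    """
    trace = np.asarray(pupil_trace, dtype=float)
    i0, i1 = (int(t * fs) for t in baseline_window)
    baseline = trace[i0:i1].mean()
    return trace.max() - baseline

# Hypothetical trial: 1 s baseline followed by a smooth transient dilation.
fs = 120
trial = np.concatenate([np.full(fs, 3.0),                 # baseline at 3.0 mm
                        3.0 + 0.4 * np.hanning(2 * fs)])  # dilation bump
print(f"{peak_dilation(trial, (0.0, 1.0), fs):.2f} mm")   # ~0.40 mm
```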


Subjects
Imagination, Pupil, Humans, Pupil/physiology, Imagination/physiology, Movement/physiology, Time, Upper Extremity, Psychomotor Performance/physiology
4.
Behav Res Methods; 55(2): 570-582, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35322350

ABSTRACT

Virtual reality (VR) is a powerful tool for researchers due to its potential to study dynamic human behavior in highly naturalistic environments while retaining full control over the presented stimuli. Thanks to advances in consumer hardware, VR devices are now very affordable and increasingly include technologies such as eye tracking, further extending potential research applications. Rendering engines such as Unity, Unreal, or Vizard enable researchers to easily create complex VR environments. However, implementing the experimental design can still pose a challenge, and these packages do not provide out-of-the-box support for trial-based behavioral experiments. Here, we present a Python toolbox designed to facilitate common tasks when developing experiments using the Vizard VR platform. It includes functionality for creating, randomizing, and presenting trial-based experimental designs and for saving results to standardized file formats. Moreover, the toolbox greatly simplifies continuous recording of eye and body movements using any hardware supported in Vizard. We further implement and describe a simple goal-directed reaching task in VR and show sample data recorded from five volunteers. The toolbox, example code, and data are all available on GitHub under an open-source license. We hope that our toolbox can simplify VR experiment development, reduce code duplication, and aid reproducibility and open-science efforts.
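The trial-handling pattern the abstract describes (create, randomize, present, save) can be sketched with the Python standard library alone. The following outline is illustrative only and deliberately avoids naming the toolbox's actual API; see the GitHub repository for the real interface.

```python
import csv
import itertools
import random

# Build a fully crossed, randomized trial list (hypothetical factors).
factors = {"target_distance": [0.3, 0.5, 0.7], "scene": ["kitchen", "bathroom"]}
trials = [dict(zip(factors, combo))
          for combo in itertools.product(*factors.values())]
random.shuffle(trials)

results = []
for i, trial in enumerate(trials):
    # ... present stimuli and collect the response in the VR engine here ...
    trial.update(trial_num=i, response_time=None)  # filled in during the run
    results.append(trial)

# Save results to a standardized, analysis-friendly CSV file.
with open("results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=results[0].keys())
    writer.writeheader()
    writer.writerows(results)
```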


Subjects
User-Computer Interface, Virtual Reality, Humans, Reproducibility of Results, Software
5.
J Eye Mov Res; 15(3), 2022.
Article in English | MEDLINE | ID: mdl-37125009

ABSTRACT

A growing number of virtual reality devices now include eye tracking technology, which can facilitate oculomotor and cognitive research in VR and enable use cases such as foveated rendering. These applications require different levels of tracking performance, often measured as spatial accuracy and precision. While manufacturers report data quality estimates for their devices, these typically represent ideal performance and may not reflect real-world data quality. Additionally, it is unclear how accuracy and precision change across sessions within the same participant or between devices, and how performance is influenced by vision correction. Here, we measured the spatial accuracy and precision of the Vive Pro Eye built-in eye tracker across a range of 30 visual degrees horizontally and vertically. Participants completed ten measurement sessions over multiple days, allowing us to evaluate calibration reliability. Accuracy and precision were highest for central gaze and decreased with greater eccentricity on both axes. Calibration was successful in all participants, including those wearing contact lenses or glasses, although glasses yielded significantly lower performance. We further found differences in accuracy (but not precision) between two Vive Pro Eye headsets, and estimated participants' inter-pupillary distance. Our metrics suggest high calibration reliability and can serve as a baseline for the eye tracking performance to be expected in VR experiments.
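Spatial accuracy and precision as used here are commonly defined as the mean angular offset of gaze from the target and the RMS of sample-to-sample angular differences, respectively. Below is a minimal sketch of those two metrics, assuming unit gaze direction vectors; the data are simulated, not from the study.

```python
import numpy as np

def angular_offset(gaze, target):
    """Angle (deg) between each unit gaze vector and a unit target vector."""
    gaze = np.asarray(gaze, dtype=float)
    cos = np.clip(gaze @ np.asarray(target, dtype=float), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def accuracy_precision(gaze, target):
    """Accuracy = mean offset from target; precision = RMS of
    sample-to-sample angular differences (both in degrees)."""
    gaze = np.asarray(gaze, dtype=float)
    offsets = angular_offset(gaze, target)
    s2s = np.degrees(np.arccos(np.clip(
        np.sum(gaze[:-1] * gaze[1:], axis=1), -1.0, 1.0)))
    return offsets.mean(), np.sqrt(np.mean(s2s ** 2))

# Simulated fixation samples scattered around a straight-ahead target.
rng = np.random.default_rng(0)
target = np.array([0.0, 0.0, 1.0])
noisy = rng.normal(0, 0.005, size=(100, 3)) + target
gaze = noisy / np.linalg.norm(noisy, axis=1, keepdims=True)
acc, prec = accuracy_precision(gaze, target)
print(f"accuracy = {acc:.2f} deg, precision = {prec:.2f} deg")
```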
