1.
Sci Rep ; 14(1): 8596, 2024 Apr 13.
Article in English | MEDLINE | ID: mdl-38615047

ABSTRACT

A popular technique for modulating visual input during search is the use of gaze-contingent windows. However, these are often rather uncomfortable, giving the impression of visual impairment. To counteract this, we asked participants in this study to search through both illuminated and dark three-dimensional scenes using a more naturalistic flashlight with which they could illuminate the rooms. In a surprise incidental memory task, we tested memory for the identities and locations of objects encountered during search. Importantly, we tested this study design both in immersive virtual reality (VR; Experiment 1) and on a desktop computer screen (Experiment 2). As hypothesized, searching with a flashlight increased search difficulty and memory usage during search. We found a memory benefit for distractor identities in the flashlight condition in VR, but not in the computer-screen experiment. Surprisingly, location memory was comparable across search conditions despite the enormous difference in visual input. Subtle differences across experiments appeared only in VR after accounting for previous recognition performance, hinting at a benefit of flashlight search in VR. Our findings highlight that removing visual information does not necessarily impair location memory, and that screen experiments using virtual environments can elicit the same major effects as VR setups.
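
To make the gaze-contingent flashlight idea concrete, here is a minimal Python/NumPy sketch (not the authors' implementation; the function name, window radius, falloff, and frame size are illustrative assumptions) in which per-pixel brightness depends on distance from the current gaze position:

# A minimal sketch of a gaze-contingent "flashlight": pixels are attenuated
# by their distance from the current gaze position, so only a disc around
# fixation stays illuminated. Parameter values are arbitrary assumptions.
import numpy as np

def flashlight_mask(height, width, gaze_xy, radius_px, falloff_px=40.0):
    """Return a per-pixel brightness multiplier in [0, 1]."""
    ys, xs = np.mgrid[0:height, 0:width]
    dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
    # Fully lit inside the radius, smooth exponential falloff outside.
    return np.where(dist <= radius_px, 1.0,
                    np.exp(-(dist - radius_px) / falloff_px))

# Usage: darken a frame everywhere except around the tracked gaze point.
frame = np.random.rand(480, 640, 3)  # stand-in for a rendered scene
lit = frame * flashlight_mask(480, 640, gaze_xy=(320, 240), radius_px=80)[..., None]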

2.
Policy Insights Behav Brain Sci ; 10(2): 317-323, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37900910

ABSTRACT

Extended reality (XR, including augmented and virtual reality) creates a powerful intersection between information technology and cognitive, clinical, and education sciences. XR technology has long captured the public imagination, and its development is the focus of major technology companies. This article demonstrates the potential of XR to (1) deliver behavioral insights, (2) transform clinical treatments, and (3) improve learning and education. However, without appropriate policy, funding, and infrastructural investment, many research institutions will struggle to keep pace with the advances and opportunities of XR. To realize the full potential of XR for basic and translational research, funding should incentivize (1) appropriate training, (2) open software solutions, and (3) collaborations between complementary academic and industry partners. Bolstering the XR research infrastructure with the right investments and incentives is vital for delivering on the potential for transformative discoveries, innovations, and applications.

3.
J Vis ; 22(4): 12, 2022 03 02.
Article in English | MEDLINE | ID: mdl-35323868

ABSTRACT

Central and peripheral vision during visual tasks have been extensively studied on two-dimensional screens, highlighting their perceptual and functional disparities. This study has two objectives: replicating on-screen gaze-contingent experiments that remove the central or peripheral field of view in virtual reality, and identifying visuo-motor biases specific to the exploration of 360° scenes with a wide field of view. Our results are useful for vision modelling, with applications in gaze position prediction (e.g., content compression and streaming). We ask how previous on-screen findings translate to conditions where observers can use their head to explore stimuli. We implemented a gaze-contingent paradigm to simulate loss of vision in virtual reality while participants freely viewed omnidirectional natural scenes. This protocol allows the simulation of vision loss with an extended field of view (>80°) and the study of the head's contribution to visual attention. The time course of visuo-motor variables in our pure free-viewing task reveals long fixations and short saccades during the first seconds of exploration, contrary to the literature on instruction-guided visual tasks. We show that the effect of vision loss is reflected primarily in eye movements, in a manner consistent with the two-dimensional screen literature. We hypothesize that head movements mainly serve to explore the scenes during free viewing; the presence of masks did not significantly impact head scanning behaviours. We present new fixational and saccadic visuo-motor tendencies in a 360° context that we hope will help in the creation of gaze prediction models dedicated to virtual reality.


Subject(s)
Fixation, Ocular , Virtual Reality , Eye Movements , Humans , Saccades , Visual Perception
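
The fixation and saccade variables analyzed above can be recovered from raw gaze samples with a simple velocity-threshold (I-VT) classifier. The sketch below is a common convention, not the paper's exact pipeline; data shapes and the threshold value are assumptions.

# A minimal I-VT sketch: classify samples as saccadic when gaze speed exceeds
# a threshold, then delimit fixations and saccades as runs of constant label.
import numpy as np

def ivt_events(t, x, y, vel_thresh_deg_s=100.0):
    """t in seconds; x, y in degrees of visual angle (NumPy arrays)."""
    speed = np.hypot(np.gradient(x, t), np.gradient(y, t))
    is_sacc = speed > vel_thresh_deg_s
    fix_durations, sacc_amplitudes = [], []
    edges = np.flatnonzero(np.diff(is_sacc.astype(int))) + 1
    for seg in np.split(np.arange(len(t)), edges):
        if is_sacc[seg[0]]:
            # Saccade amplitude: straight-line distance start to end.
            sacc_amplitudes.append(np.hypot(x[seg[-1]] - x[seg[0]],
                                            y[seg[-1]] - y[seg[0]]))
        else:
            # Fixation duration: elapsed time within the run.
            fix_durations.append(t[seg[-1]] - t[seg[0]])
    return fix_durations, sacc_amplitudes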
4.
J Eye Mov Res ; 15(3)2022.
Article in English | MEDLINE | ID: mdl-37215533

ABSTRACT

Image inversion is a powerful tool for investigating cognitive mechanisms of visual perception. However, studies have mainly used inversion in paradigms presented on two-dimensional computer screens. It remains open whether the disruptive effects of inversion also hold in more naturalistic scenarios. In our study, we used scene inversion in virtual reality in combination with eye tracking to investigate the mechanisms of repeated visual search through three-dimensional immersive indoor scenes. Scene inversion affected all gaze and head measures except fixation durations and saccade amplitudes. Surprisingly, our behavioral results did not entirely follow our hypotheses: while search efficiency dropped significantly in inverted scenes, participants did not use more memory, as measured by search time slopes. This indicates that, despite the disruption, participants did not try to compensate for the increased difficulty by relying more on memory. Our study highlights the importance of investigating classical experimental paradigms in more naturalistic scenarios to advance research on everyday human behavior.
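
The "search time slope" logic used above can be illustrated in a few lines: if participants build up memory across repeated searches of the same scene, search time should fall with repetition, giving a negative slope. The numbers below are hypothetical, chosen only to show the computation.

# A minimal sketch: least-squares slope of search time over repetition number.
import numpy as np

def search_time_slope(repetition, search_time_s):
    slope, _intercept = np.polyfit(repetition, search_time_s, deg=1)
    return slope

reps = np.array([1, 2, 3, 4, 5])
upright = np.array([4.1, 3.2, 2.6, 2.2, 2.0])   # hypothetical: memory used
inverted = np.array([5.9, 5.6, 5.7, 5.4, 5.5])  # hypothetical: flat slope
print(search_time_slope(reps, upright), search_time_slope(reps, inverted))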

5.
Brain Sci ; 11(9)2021 Sep 15.
Article in English | MEDLINE | ID: mdl-34573274

ABSTRACT

We wish to make the following correction to the published paper "Effects of Transient Loss of Vision on Head and Eye Movements during Visual Search in a Virtual Environment" [...].

6.
J Vis ; 21(7): 3, 2021 07 06.
Article in English | MEDLINE | ID: mdl-34251433

ABSTRACT

Visual search in natural scenes is a complex task relying on peripheral vision to detect potential targets and central vision to verify them. This segregation of the visual fields has been established largely through on-screen experiments. We conducted a gaze-contingent experiment in virtual reality to test how the established roles of central and peripheral vision translate to more natural settings. The use of everyday scenes in virtual reality allowed us to study visual attention with a fairly ecological protocol that cannot be implemented in the real world. Central or peripheral vision was masked during visual search, with target objects selected according to scene semantic rules. Analyzing the resulting search behavior, we found that target objects that were not spatially constrained to a probable location within the scene impacted search measures negatively. Our results diverge from on-screen studies in that search performance was only slightly affected by central vision loss. In particular, a central mask did not impact verification times when the target was grammatically constrained to an anchor object. Our findings demonstrate that the role of central vision (up to 6 degrees of eccentricity) in identifying objects in natural scenes seems to be minor, while the role of peripheral preprocessing of targets in immersive real-world searches may have been underestimated by on-screen experiments.


Subject(s)
Virtual Reality , Humans , Scotoma , Vision, Ocular , Visual Fields , Visual Perception
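
The masking logic described above reduces to a visibility test on angular eccentricity from the gaze direction. This sketch shows that test in isolation (vector math only, no rendering engine); function names and the no-mask default are assumptions, while the 6° radius comes from the abstract.

# A minimal sketch: a scene point is visible or hidden depending on its
# angular eccentricity from gaze and on the mask type.
import numpy as np

def eccentricity_deg(gaze_dir, point_dir):
    """Angle between two unit direction vectors, in degrees."""
    cos = np.clip(np.dot(gaze_dir, point_dir), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def visible(gaze_dir, point_dir, mask="central", radius_deg=6.0):
    ecc = eccentricity_deg(gaze_dir, point_dir)
    if mask == "central":     # central vision masked: only periphery visible
        return ecc > radius_deg
    if mask == "peripheral":  # peripheral vision masked: only center visible
        return ecc <= radius_deg
    return True               # no mask

gaze = np.array([0.0, 0.0, 1.0])  # looking straight ahead
obj = np.array([np.sin(np.radians(10)), 0.0, np.cos(np.radians(10))])
print(visible(gaze, obj, mask="central"))  # True: 10 deg > 6 deg radius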
7.
Brain Sci ; 10(11)2020 Nov 12.
Article in English | MEDLINE | ID: mdl-33198116

ABSTRACT

Central and peripheral fields of view extract information of different quality and serve different roles during visual tasks. Past research has studied this dichotomy on-screen, in conditions remote from natural situations where the scene would be omnidirectional and the entire field of view could be of use. In this study, participants looked for objects in simulated everyday rooms in virtual reality. By implementing a gaze-contingent protocol, we masked central or peripheral vision (masks of 6° radius) during trials. We analyzed the impact of vision loss on visuo-motor variables related to fixations (duration) and saccades (amplitude and relative direction). An important novelty is that we segregated eye, head, and overall gaze movements in our analyses. Additionally, we studied these measures after separating trials into two search phases (scanning and verification). Our results generally replicate the past on-screen literature and shed light on the respective roles of eye and head movements. We showed that the scanning phase is dominated by short fixations and long saccades to explore, and the verification phase by long fixations and short saccades to analyze. Our findings indicate that eye movements are strongly driven by visual stimulation, while head movements serve a higher behavioral goal of exploring omnidirectional scenes. Moreover, losing central vision has a smaller impact than reported on-screen, hinting at the importance of peripheral scene processing for visual search with an extended field of view. Our findings provide more information about how knowledge gathered on-screen may transfer to more natural conditions, and attest to the experimental usefulness of eye tracking in virtual reality.
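
The eye/head segregation mentioned above rests on the identity that gaze-in-world equals head orientation plus eye-in-head rotation. The sketch below simplifies angles to one-dimensional horizontal azimuths in degrees (real data are three-dimensional rotations); the sample values are hypothetical.

# A minimal sketch: decompose gaze shifts into eye-in-head and head parts.
import numpy as np

def decompose_gaze(gaze_world_deg, head_world_deg):
    """Return (eye-in-head, head, gaze) shift contributions per sample step."""
    eye_in_head = np.asarray(gaze_world_deg) - np.asarray(head_world_deg)
    return np.diff(eye_in_head), np.diff(head_world_deg), np.diff(gaze_world_deg)

# Hypothetical samples: a large gaze shift carried mostly by the head.
gaze = [0.0, 10.0, 45.0]
head = [0.0, 2.0, 40.0]
print(decompose_gaze(gaze, head))  # eye: [8, -3], head: [2, 38], gaze: [10, 35]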

8.
J Vis ; 19(14): 22, 2019 12 02.
Article in English | MEDLINE | ID: mdl-31868896

ABSTRACT

Visual field defects are a worldwide concern, and the proportion of the population experiencing vision loss is ever increasing. Macular degeneration and glaucoma are among the four leading causes of permanent vision loss. Identifying and characterizing visual field losses from gaze alone could prove crucial in the future for screening tests, rehabilitation therapies, and monitoring. In this experiment, 54 participants took part in a free-viewing task over visual scenes while experiencing artificial scotomas (central and peripheral) of varying radii in a gaze-contingent paradigm. We studied the importance of a set of gaze features as predictors to best differentiate between artificial scotoma conditions. Linear mixed models were used to measure differences between scotoma conditions. Correlation and factorial analyses revealed redundancies in our data. Finally, hidden Markov models and recurrent neural networks were implemented as classifiers in order to measure the predictive usefulness of gaze features. The results show distinct saccade direction biases depending on scotoma type. We demonstrate that saccade relative angle, amplitude, and peak velocity are the best features for distinguishing between artificial scotomas in a free-viewing task. Finally, we discuss the usefulness of our protocol and analyses as a gaze-feature identifier tool that discriminates between artificial scotomas of different types and sizes.


Subject(s)
Scotoma/physiopathology , Visual Field Tests/methods , Visual Fields , Adult , Blindness , Female , Glaucoma/physiopathology , Humans , Macular Degeneration/physiopathology , Male , Markov Chains , Middle Aged , Neural Networks, Computer , Saccades , Vision Disorders , Young Adult
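
The three most discriminative features named above can be computed from a single saccade's samples as shown below. These are common conventions, not the paper's exact code; input shapes and the wrapping convention for the relative angle are assumptions.

# A minimal sketch: saccade amplitude, peak velocity, and relative angle
# from one saccade's samples (x, y in degrees, t in seconds; NumPy arrays).
import numpy as np

def saccade_features(t, x, y, prev_direction_rad=None):
    amplitude = np.hypot(x[-1] - x[0], y[-1] - y[0])
    speed = np.hypot(np.gradient(x, t), np.gradient(y, t))
    peak_velocity = speed.max()
    direction = np.arctan2(y[-1] - y[0], x[-1] - x[0])
    # Relative angle: turn between this saccade and the previous one,
    # wrapped to [-pi, pi).
    relative_angle = None
    if prev_direction_rad is not None:
        d = direction - prev_direction_rad
        relative_angle = (d + np.pi) % (2 * np.pi) - np.pi
    return amplitude, peak_velocity, direction, relative_angle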
9.
Neural Netw ; 109: 19-30, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30388430

ABSTRACT

Different studies have shown the efficiency of feed-forward neural networks in categorizing basic emotional facial expressions. However, recent findings in psychology and cognitive neuroscience suggest that visual recognition is not a purely bottom-up process but likely involves top-down recurrent connectivity. In the present computational study, we compared the performance of a purely bottom-up neural network (a standard multi-layer perceptron, MLP) with that of a neural network involving recurrent top-down connections (a simple recurrent network, SRN) in anticipating emotional expressions. In two complementary simulations, results revealed that the SRN outperformed the MLP at ambiguous intensities in the temporal sequence, when the emotions were not fully depicted but sufficient contextual information (from previous time frames) was provided. Taken together, these results suggest that, despite the cost of recurrent connections in terms of energy and processing time for biological organisms, they can provide a substantial advantage for the fast recognition of uncertain visual signals.


Subject(s)
Anticipation, Psychological , Emotions , Facial Expression , Neural Networks, Computer , Synapses , Adult , Anticipation, Psychological/physiology , Brain Mapping/methods , Emotions/physiology , Female , Humans , Magnetic Resonance Imaging , Male , Synapses/physiology
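
The architectural contrast above can be made concrete in a few lines of NumPy: an MLP sees each frame independently, while a simple recurrent (Elman) network carries a hidden state across frames and thus has temporal context. Weights are random and layer sizes arbitrary; this illustrates information flow, not the paper's trained models.

# A minimal sketch contrasting an MLP with a simple recurrent network (SRN).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 32, 16, 6  # e.g. 6 basic emotion categories (assumed)

W_in = rng.normal(scale=0.1, size=(n_hid, n_in))
W_rec = rng.normal(scale=0.1, size=(n_hid, n_hid))  # SRN only
W_out = rng.normal(scale=0.1, size=(n_out, n_hid))

def mlp_step(x):
    return W_out @ np.tanh(W_in @ x)

def srn_step(x, h_prev):
    h = np.tanh(W_in @ x + W_rec @ h_prev)  # context from past frames
    return W_out @ h, h

frames = rng.normal(size=(10, n_in))  # a short expression clip (stand-in)
h = np.zeros(n_hid)
for x in frames:
    y_mlp = mlp_step(x)        # same input -> same output, no history
    y_srn, h = srn_step(x, h)  # output also depends on preceding frames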