Results 1 - 5 of 5
1.
Cortex ; 169: 65-80, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37862831

ABSTRACT

Previous research has demonstrated that humans combine multiple sources of spatial information, such as self-motion and landmark cues, while navigating through an environment. However, it is unclear whether this involves comparing multiple representations obtained from different sources during navigation (parallel hypothesis) or building a representation first based on self-motion cues and then combining it with landmarks later (serial hypothesis). We tested these two hypotheses (parallel vs. serial) in an active navigation task using wireless mobile scalp EEG recordings. Participants walked through an immersive virtual hallway with or without conflicts between self-motion and landmarks (i.e., intersections) and pointed toward the starting position of the hallway. We used the oscillatory signals recorded with mobile wireless scalp EEG to identify when participant representations based on self-motion versus landmark cues might have first emerged. We found that path segments that included intersections were more strongly associated with later pointing error, regardless of when they appeared during encoding. We also found that the frontal-midline theta and posterior alpha oscillatory signals in the earliest segments of navigation involving intersections contained sufficient information to decode condition (i.e., conflicting vs. not conflicting). Together, these findings suggest that intersections play a pivotal role in the early development of spatial representations, and that memory representations for the geometry of walked paths likely develop early during navigation, in support of the parallel hypothesis.


Subject(s)
Cues , Electroencephalography , Humans
2.
bioRxiv ; 2023 Jul 16.
Article in English | MEDLINE | ID: mdl-37131721

ABSTRACT

Previous research has demonstrated that humans combine multiple sources of spatial information, such as self-motion and landmark cues, while navigating through an environment. However, it is unclear whether this involves comparing multiple representations obtained from different sources during navigation (parallel hypothesis) or building a representation first based on self-motion cues and then combining it with landmarks later (serial hypothesis). We tested these two hypotheses (parallel vs. serial) in an active navigation task using wireless mobile scalp EEG recordings. Participants walked through an immersive virtual hallway with or without conflicts between self-motion and landmarks (i.e., intersections) and pointed toward the starting position of the hallway. We used the oscillatory signals recorded with mobile wireless scalp EEG to identify when participant representations based on self-motion vs. landmark cues might have first emerged. We found that path segments that included intersections were more strongly associated with later pointing error, regardless of when they appeared during encoding. We also found that the frontal-midline theta and posterior alpha oscillatory signals in the earliest segments of navigation involving intersections contained sufficient information to decode condition (i.e., conflicting vs. not conflicting). Together, these findings suggest that intersections play a pivotal role in the early development of spatial representations, and that memory representations for the geometry of walked paths likely develop early during navigation, in support of the parallel hypothesis.

3.
Behav Brain Res ; 426: 113835, 2022 05 24.
Article in English | MEDLINE | ID: mdl-35292332

ABSTRACT

Previous research indicates that while animals that locomote on surfaces show more variable and less precise spatial coding vertically than horizontally, animals that fly do not demonstrate a horizontal advantage (Hayman et al., 2011; Yartsev and Ulanovsky, 2013). The current study investigated whether humans' localization is more variable vertically than horizontally across different locomotion modes. In an immersive virtual room, participants learned the locations of objects presented on one wall. They then locomoted from a location on the floor to each object and replaced the objects from memory. One group of participants (the flying group) flew three-dimensionally along their viewing direction by pushing a joystick. The second group (the floor-wall group) locomoted only on the floor and the wall, along the projection of the viewing direction onto the current travelling surface. The third group pressed a button to be teleported from the floor to the wall and then locomoted on the wall (the wall-only group). The results showed that the variance of localization error was larger vertically than horizontally in the flying and floor-wall groups but that this pattern reversed in the wall-only group. In addition, while both the flying and wall-only groups locomoted straight towards the target location, the floor-wall group locomoted straight towards the projection of the target location onto the ground rather than straight towards the wall, indicating that the floor-wall group tried to avoid horizontal movement on the wall. These results suggest that, for humans, a horizontal advantage occurs when encoding the locations of objects presented on a wall, whereas a vertical advantage occurs during locomotion on the wall.


Subject(s)
Diptera , Spatial Navigation , Animals , Humans , Learning , Locomotion , Space Perception
4.
Front Aging Neurosci ; 13: 640188, 2021.
Article in English | MEDLINE | ID: mdl-33912024

ABSTRACT

Older adults typically perform worse on spatial navigation tasks, although whether this is due to degradation of memory or to an impairment in using specific strategies has yet to be determined. One issue with some past studies is that older adults are tested on desktop-based virtual reality, a technology many report being unfamiliar with. Even when familiarity is controlled for, these paradigms reduce the information-rich, three-dimensional experience of navigating to a simple two-dimensional task that uses a mouse and keyboard (or joystick) as the means of ambulation. Here, we use a wireless head-mounted display and free ambulation to create a fully immersive virtual Morris water maze in which we compare the navigation of older and younger adults. Older and younger adults learned the locations of hidden targets from the same and from different start points. Across the conditions tested, older adults remembered target locations less precisely than younger adults. Importantly, however, they performed comparably from the same viewpoint and from a switched viewpoint, suggesting that they could generalize their memory for the location of a hidden target to a new point of view. When we implicitly moved one of the distal cues to determine whether older adults used an allocentric (multiple landmarks) or beaconing (single landmark) strategy to remember the hidden target, both older and younger adults showed comparable degrees of reliance on allocentric and beacon cues. These findings support the hypothesis that while older adults have less precise spatial memories, they maintain the ability to use various strategies when navigating.

5.
Q J Exp Psychol (Hove) ; 74(5): 889-909, 2021 May.
Article in English | MEDLINE | ID: mdl-33234009

ABSTRACT

This study investigated to what extent humans can encode spatial relations between different surfaces (i.e., floor, walls, and ceiling) in a three-dimensional (3D) space and extend their headings on the floor to other surfaces when locomoting to walls (90° pitch) and the ceiling (180° pitch). In immersive virtual reality environments, participants first learned a layout of objects on the ground. They then navigated to testing planes: the south (or north) wall facing up, or the ceiling via a wall facing north (or south). Participants locomoted to the walls with pitch rotations indicated by visual and idiothetic cues (Experiment 1) or by visual cues only (Experiment 2), and to the ceiling with visual pitch rotations only (Experiment 3). Using their memory of the objects' locations, they either reproduced the object layout on the testing plane or performed a Judgement of Relative Direction (JRD) task ("imagine standing at object A, facing B, point to C") with imagined headings of south and north on the ground. The results showed that participants who locomoted onto the wall with idiothetic cues performed better in the JRD task for an imagined heading from which their physical heading was extended (e.g., an imagined heading of north at the north wall). In addition, participants who reproduced the layout of objects on the ceiling from a perspective extended from the ground also showed a sensorimotor alignment effect predicted by an extended heading. These results indicate that humans encode spatial relations between different surfaces and extend headings via pitch rotations three-dimensionally, especially with idiothetic cues.


Subject(s)
Imagination , Space Perception , Cues , Humans , Judgment , Learning