Results 1 - 20 of 87
1.
Sci Rep ; 14(1): 8858, 2024 04 17.
Article in English | MEDLINE | ID: mdl-38632303

ABSTRACT

It is often assumed that rendering an alert signal more salient yields faster responses to this alert. Yet, there might be a trade-off between attracting attention and distracting from task execution. Here we tested this in four behavioral experiments with eye tracking using an abstract alert-signal paradigm. Participants performed a visual discrimination task (primary task) while occasional alert signals occurred in the visual periphery, accompanied by a congruently lateralized tone. Participants had to respond to the alert before proceeding with the primary task. When the visual salience (contrast) or auditory salience (tone intensity) of the alert was increased, participants directed their gaze to the alert more quickly. This confirms that more salient alerts attract attention more efficiently. Increasing auditory salience yielded quicker responses for the alert and primary tasks, apparently confirming faster responses altogether. However, increasing visual salience did not yield similar benefits: instead, it increased the time between fixating the alert and responding, as high-salience alerts interfered with alert-task execution. Such task interference by high-salience alert signals counteracts their more efficient attentional guidance. The design of alert signals must be adapted to a "sweet spot" that optimizes this stimulus-dependent trade-off between maximally rapid attentional orienting and minimal task interference.
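To make the reported trade-off concrete: if orienting time falls with salience while post-fixation interference grows, total alert response time has an interior minimum, the "sweet spot". A minimal numerical sketch, with purely illustrative functional forms and constants (not the study's data):

```python
import numpy as np

# Toy model of the salience trade-off; all functional forms and constants
# are illustrative assumptions, not fitted to the study's data.
salience = np.linspace(0.01, 1.0, 200)           # normalized alert salience
t_orient = 0.25 + 0.30 * np.exp(-4 * salience)   # time to fixate the alert (s)
t_execute = 0.40 + 0.25 * salience ** 2          # fixation-to-response time (s)
t_total = t_orient + t_execute

best = np.argmin(t_total)
print(f"illustrative sweet spot: salience ≈ {salience[best]:.2f}, "
      f"total RT ≈ {t_total[best]:.3f} s")
```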


Subject(s)
Attention , Visual Perception , Humans , Reaction Time/physiology , Attention/physiology , Visual Perception/physiology , Records , Discrimination, Psychological
2.
PLoS One ; 19(3): e0301136, 2024.
Article in English | MEDLINE | ID: mdl-38547114

ABSTRACT

Gaze is an important and potent social cue to direct others' attention towards specific locations. However, in many situations, directional symbols, like arrows, fulfill a similar purpose. Motivated by the overarching question of how artificial systems can effectively communicate directional information, we conducted two cueing experiments. In both experiments, participants were asked to identify peripheral targets appearing on the screen and respond to them as quickly as possible with a button press. Prior to the appearance of the target, a cue was presented in the center of the screen. In Experiment 1, cues were either faces or arrows that gazed or pointed in one direction but were non-predictive of the target location. Consistent with earlier studies, we found a reaction-time benefit for the side the arrow or the gaze was directed to. Extending beyond earlier research, we found that this effect was indistinguishable between the vertical and the horizontal axis and between faces and arrows. In Experiment 2, we used 100% "counter-predictive" cues; that is, the target always occurred on the side opposite to the direction of gaze or arrow. With cues without inherent directional meaning (color), we controlled for general learning effects. Despite the close quantitative match between non-predictive gaze and non-predictive arrow cues observed in Experiment 1, the reaction-time benefit for counter-predictive arrows over neutral cues is more robust than the corresponding benefit for counter-predictive gaze. This suggests that, if matched for efficacy towards their inherent direction, gaze cues are harder to override or reinterpret than arrows. This difference can be of practical relevance, for example, when designing cues in the context of human-machine interaction.
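The reaction-time benefit referred to here is the standard cue-validity (congruency) effect: mean RT on incongruent trials minus mean RT on congruent trials, computed per cue type. A minimal sketch of that computation; the trial table and its column names are hypothetical:

```python
import pandas as pd

# Hypothetical trial table; column names and RTs are illustrative.
trials = pd.DataFrame({
    "cue_type":  ["gaze", "gaze", "arrow", "arrow"] * 2,
    "congruent": [True, False] * 4,   # did the target appear on the cued side?
    "rt_ms":     [312, 338, 309, 341, 305, 330, 315, 344],
})

# Cue-validity effect per cue type: RT(incongruent) - RT(congruent).
means = trials.groupby(["cue_type", "congruent"])["rt_ms"].mean().unstack()
effect = means[False] - means[True]
print(effect)  # positive values = RT benefit for the cued side
```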


Subject(s)
Cues , Fixation, Ocular , Humans , Attention , Reaction Time , Motivation
3.
J Neurophysiol ; 130(4): 1028-1040, 2023 10 01.
Article in English | MEDLINE | ID: mdl-37701952

ABSTRACT

When humans walk, it is important for them to have some measure of the distance they have traveled. Typically, many cues from different modalities are available, as humans perceive both the environment around them (for example, through vision and haptics) and their own walking. Here, we investigate the contribution of visual cues and nonvisual self-motion cues to distance reproduction when walking on a treadmill through a virtual environment, by separately manipulating the speed of the treadmill belt and of the virtual environment. Using mobile eye tracking, we also investigate how our participants sampled the visual information through gaze. We show that, as predicted, both modalities affected how participants (N = 28) reproduced a distance. Participants weighted nonvisual self-motion cues more strongly than visual cues, corresponding also to their respective reliabilities, but with some interindividual variability. Those who looked more toward those parts of the visual scene that contained cues to speed and distance tended also to weight visual information more strongly, although this correlation was nonsignificant, and participants generally directed their gaze toward visually informative areas of the scene less than expected. As measured by motion capture, participants adjusted their gait patterns to the treadmill speed but not to the walked distance. In sum, we show in a naturalistic virtual environment how humans use different sensory modalities when reproducing distances and how the use of these cues differs between participants and depends on information sampling.

NEW & NOTEWORTHY: Combining virtual reality with treadmill walking, we measured the relative importance of visual cues and nonvisual self-motion cues for distance reproduction. Participants used both cues but put more weight on self-motion; the weight on visual cues tended to correlate with looking at visually informative areas. Participants overshot distances, especially when self-motion was slow; they adjusted their steps to self-motion cues but not to visual cues. Our work thus quantifies the multimodal contributions to distance reproduction.
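Weighting cues according to their reliabilities, as described above, follows the standard maximum-likelihood cue-integration scheme: each cue is weighted by its inverse variance (w_i proportional to 1/σ_i²). A minimal sketch with illustrative numbers (not the study's estimates):

```python
import numpy as np

def combine_cues(estimates, sigmas):
    """Reliability-weighted (inverse-variance) cue combination."""
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    w /= w.sum()
    return float(np.dot(w, estimates)), w

# Illustrative numbers: visual and self-motion distance estimates (m)
# with assumed noise levels (m); not the study's estimates.
d_hat, weights = combine_cues(estimates=[10.5, 9.2], sigmas=[2.0, 1.0])
print(d_hat, weights)  # the more reliable self-motion cue gets weight 0.8
```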


Subject(s)
Motion Perception , Virtual Reality , Humans , Cues , Walking , Gait
4.
J Vis ; 23(8): 8, 2023 08 01.
Article in English | MEDLINE | ID: mdl-37548959

ABSTRACT

Gaze is a powerful cue for directing attention. We investigate how the interpretation of an abstract figure as gaze modulates its efficacy as an attentional cue. In each trial, two vertical lines on a central disk moved to one side (left or right). Independent of this "feature-cued" side, a target (black disk) subsequently appeared on one side. After 300 trials (phase 1), participants watched a video of a human avatar walking away. For one group, the avatar wore a helmet that visually matched the central disk and looked at black disks to either side. The other group's video was unrelated to the cueing task. After another 300 trials (phase 2), videos were swapped between groups; 300 further trials (phase 3) followed. In all phases, participants responded more quickly to targets appearing on the feature-cued side. There was a significant interaction between group and phase for reaction times: in phase 3, the group that had just watched the avatar with the helmet showed a reduced advantage for the feature-cued side. Hence, interpreting the disk as a turning head seen from behind counteracts the cueing by the motion of the disk. This suggests that the mere perceptual interpretation of an abstract stimulus as gaze yields social cueing effects.


Subject(s)
Cues , Fixation, Ocular , Humans , Attention , Reaction Time
5.
iScience ; 26(5): 106599, 2023 May 19.
Article in English | MEDLINE | ID: mdl-37250300

ABSTRACT

Humans can quickly adapt their behavior to changes in the environment. Classical reversal learning tasks mainly measure how well participants can disengage from a previously successful behavior, but not how alternative responses are explored. Here, we propose a novel 5-choice reversal learning task with alternating position-reward contingencies to study exploration behavior after a reversal. We compare human exploratory saccade behavior with a prediction obtained from a neuro-computational model of the basal ganglia. A new synaptic plasticity rule for learning the connectivity between the subthalamic nucleus (STN) and external globus pallidus (GPe) results in exploration biases to previously rewarded positions. The model simulations and human data both show that, with increasing experimental experience, exploration becomes limited to those positions that have been rewarded in the past. Our study demonstrates how quite complex behavior may result from a simple sub-circuit within the basal ganglia pathways.
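The exploration bias reported here can be illustrated, far more simply than with the basal ganglia model itself, by softmax action selection over learned values plus a decaying trace of past rewards. The following sketch is an illustrative stand-in, not the authors' STN-GPe plasticity rule; all parameters are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pos, alpha, decay = 5, 0.2, 0.99
values = np.zeros(n_pos)   # current value estimate per position
trace = np.zeros(n_pos)    # decaying trace of past rewards

def choose(beta=3.0, bias=0.5):
    """Softmax choice; the reward trace biases exploration toward
    positions that were rewarded earlier in the session."""
    logits = beta * (values + bias * trace)
    p = np.exp(logits - logits.max())
    return rng.choice(n_pos, p=p / p.sum())

rewarded = 0
for trial in range(300):
    if trial and trial % 50 == 0:          # position-reward contingency reverses
        rewarded = (rewarded + 2) % n_pos
    a = choose()
    r = 1.0 if a == rewarded else 0.0
    values[a] += alpha * (r - values[a])   # delta-rule value update
    trace = decay * trace
    trace[a] += (1 - decay) * r

print(np.round(trace, 3))  # nonzero traces mark previously rewarded positions
```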

6.
IEEE Trans Vis Comput Graph ; 29(5): 2220-2229, 2023 May.
Article in English | MEDLINE | ID: mdl-37027735

ABSTRACT

Using a map in an unfamiliar environment requires identifying correspondences between elements of the map's allocentric representation and elements in egocentric views. Aligning the map with the environment can be challenging. Virtual reality (VR) allows learning about unfamiliar environments in a sequence of egocentric views that correspond closely to the perspectives and views experienced in the actual environment. We compared three methods of preparing for localization and navigation tasks performed by teleoperating a robot in an office building: studying a floor plan of the building and two forms of VR exploration. One group of participants studied the floor plan, a second group explored a faithful VR reconstruction of the building from a normal-sized avatar's perspective, and a third group explored the VR from a giant-sized avatar's perspective. All methods contained marked checkpoints. The subsequent tasks were identical for all groups. The self-localization task required participants to indicate the approximate location of the robot in the environment. The navigation task required navigating between checkpoints. Participants took less time to learn with the giant VR perspective and with the floor plan than with the normal VR perspective. Both VR learning methods significantly outperformed the floor plan in the self-localization task. Navigation was performed more quickly after learning in the giant perspective than after learning in the normal perspective or from the floor plan. We conclude that the normal perspective, and especially the giant perspective, in VR are viable options for preparing for teleoperation in unfamiliar environments when a virtual model of the environment is available.

7.
J Exp Psychol Gen ; 152(7): 2040-2051, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36848107

ABSTRACT

Objects influence attention allocation; when a location within an object is cued, participants react faster to targets appearing at a different location within this object than on a different object. Despite consistent demonstrations of this object-based effect, there is no agreement regarding its underlying mechanisms. To test the most common hypothesis, that attention spreads automatically along the cued object, we utilized a continuous, response-free measurement of attentional allocation that relies on the modulation of the pupillary light response. In Experiments 1 and 2, attentional spreading was not encouraged, because the target appeared often (60%) at the cued location and considerably less often at other locations (20% within the same object and 20% on another object). In Experiment 3, spreading was encouraged, because the target appeared equally often at one of the three possible locations within the cued object (cued end, middle, uncued end). In all experiments, we added gray-to-black and gray-to-white luminance gradients to the objects. By cueing the gray ends of the objects, we could track attention: if attention indeed spreads automatically along objects, then pupil size should be greater when the gray-to-black object is cued (because attention spreads toward its darker areas) than when the gray-to-white object is cued, regardless of the target-location probability. However, unequivocal evidence of attentional spreading was only found when spreading was encouraged. These findings do not support an automatic spreading of attention. Instead, they suggest that attentional spreading along the object is guided by cue-target contingencies.
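The logic of the pupillometric readout: if attention spreads into the darker end of the cued object, the pupil should be larger after cueing the gray-to-black object than after cueing the gray-to-white object. A minimal sketch of that condition contrast; the epoched arrays, their shapes, and the analysis window are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical epoched pupil data (trials x samples) per cue condition;
# the sizes, values, and analysis window below are illustrative.
pupil_cue_to_black = rng.normal(5.02, 0.10, size=(120, 500))
pupil_cue_to_white = rng.normal(4.98, 0.10, size=(120, 500))

window = slice(200, 400)  # post-cue samples to average over (assumed)
mean_black = pupil_cue_to_black[:, window].mean()
mean_white = pupil_cue_to_white[:, window].mean()

# A positive difference (larger pupil when the gray-to-black object is
# cued) would be the signature of attention spreading along the object.
print(mean_black - mean_white)
```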


Subject(s)
Attention , Pupil , Humans , Reaction Time/physiology , Attention/physiology , Cues
8.
Exp Brain Res ; 241(3): 765-780, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36725725

ABSTRACT

Walking is a complex task. To prevent falls and injuries, gait needs to constantly adjust to the environment. This requires information from various sensory systems; in turn, moving through the environment continuously changes the available sensory information. Visual information is available from a distance and is therefore most critical when negotiating difficult terrain. To effectively sample visual information, humans adjust their gaze to the terrain or, in laboratory settings, when facing motor perturbations. During activities of daily living, however, only a fraction of sensory and cognitive resources can be devoted to ensuring safe gait. How do humans deal with challenging walking conditions when they face high cognitive load? Young, healthy participants (N = 24) walked on a treadmill through a virtual but naturalistic environment. Occasionally, their gait was experimentally perturbed to induce slipping. We varied cognitive load by asking participants in some blocks to count backward in steps of seven; orthogonally, we varied whether visual cues indicated upcoming perturbations. We replicated earlier findings on how humans adjust their gaze and gait rapidly and flexibly on various time scales: eye and head movements responded in a partially compensatory pattern, and visual cues mostly affected eye movements. Interestingly, the cognitive task mainly affected head orientation. During the cognitive task, we found no clear signs of a less stable gait or of a cautious gait mode, but evidence that participants adapted their gait less to the perturbations than without the secondary task. In sum, cognitive load affects head orientation and impairs the ability to adjust to gait perturbations.


Subject(s)
Activities of Daily Living , Cognition , Humans , Gait , Walking/psychology , Cues
9.
J Acoust Soc Am ; 152(5): 2758, 2022 11.
Article in English | MEDLINE | ID: mdl-36456271

ABSTRACT

Sequential auditory scene analysis (ASA) is often studied using sequences of two alternating tones, such as ABAB or ABA_, with "_" denoting a silent gap and "A" and "B" denoting sine tones differing in frequency (nominally low and high). Many studies implicitly assume that the specific arrangement (ABAB vs ABA_, as well as low-high-low vs high-low-high within ABA_) plays a negligible role, such that decisions about the tone pattern can be governed by other considerations. To explicitly test this assumption, we systematically compared different tone patterns for two-tone sequences in three experiments. Participants were asked to report whether they perceived the sequences as originating from a single sound source (integrated) or from two interleaved sources (segregated). Results indicate that core findings of sequential ASA, such as an effect of frequency separation on the proportion of integrated and segregated percepts, are similar across the different patterns during prolonged listening. However, at sequence onset, the integrated percept was more likely to be reported in ABA_ low-high-low than in ABA_ high-low-high sequences. This asymmetry is important for models of sequential ASA, since the formation of percepts at onset is an integral part of understanding how auditory interpretations build up.
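For concreteness, an ABA_ sequence is a repeating triplet of two pure tones followed by a silent gap. A minimal synthesis sketch; the frequencies, durations, and repetition count are illustrative choices, not the study's parameters:

```python
import numpy as np

fs = 44100                  # sample rate (Hz)
dur = 0.125                 # tone/gap duration (s); illustrative
f_a, f_b = 440.0, 554.0     # "low" A and "high" B tone frequencies; illustrative

t = np.arange(int(fs * dur)) / fs
ramp = np.minimum(1.0, np.minimum(t, t[::-1]) / 0.01)  # 10-ms on/off ramps

def tone(freq):
    return np.sin(2 * np.pi * freq * t) * ramp

# One ABA_ triplet (here low-high-low plus silent gap), repeated 20 times.
triplet = np.concatenate([tone(f_a), tone(f_b), tone(f_a), np.zeros_like(t)])
sequence = np.tile(triplet, 20)
```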


Subject(s)
Auditory Perception , Auscultation , Humans , Sound
10.
Neuroimage ; 263: 119601, 2022 11.
Article in English | MEDLINE | ID: mdl-36064139

ABSTRACT

Sensory consequences of one's own action are often perceived as less intense, and lead to reduced neural responses, compared to externally generated stimuli. Presumably, such sensory attenuation is due to predictive mechanisms based on the motor command (efference copy). However, sensory attenuation has also been observed outside the context of voluntary action, namely when stimuli are temporally predictable. Here, we aimed at disentangling the effects of motor and temporal predictability-based mechanisms on the attenuation of sensory action consequences. During fMRI data acquisition, participants (N = 25) judged which of two visual stimuli was brighter. In predictable blocks, the stimuli appeared temporally aligned with their button press (active) or aligned with an automatically generated cue (passive). In unpredictable blocks, stimuli were presented with a variable delay after button press/cue, respectively. Eye tracking was performed to investigate pupil-size changes and to ensure proper fixation. Self-generated stimuli were perceived as darker and led to less neural activation in visual areas than their passive counterparts, indicating sensory attenuation for self-generated stimuli independent of temporal predictability. Pupil size was larger during self-generated stimuli, which correlated negatively with the blood oxygenation level dependent (BOLD) response: the larger the pupil, the smaller the BOLD amplitude in visual areas. Our results suggest that sensory attenuation in visual cortex is driven by action-based predictive mechanisms rather than by temporal predictability. This effect may be related to changes in pupil diameter. Altogether, these results emphasize the role of the efference copy in the processing of sensory action consequences.


Subject(s)
Psychomotor Performance , Visual Cortex , Humans , Psychomotor Performance/physiology , Pupil , Visual Perception/physiology , Visual Cortex/diagnostic imaging , Visual Cortex/physiology
11.
Trends Neurosci ; 45(8): 635-647, 2022 08.
Article in English | MEDLINE | ID: mdl-35662511

ABSTRACT

The course of pupillary constriction and dilation provides an easy-to-access, inexpensive, and noninvasive readout of brain activity. We propose a new taxonomy of factors affecting the pupil and link these to associated neural underpinnings in an ascending hierarchy. In addition to two well-established low-level factors (light level and focal distance), we suggest two further intermediate-level factors, alerting and orienting, and a higher-level factor, executive functioning. Alerting, orienting, and executive functioning (including their respective underlying neural circuitries) overlap with the three principal attentional networks, making pupil size an integrated readout of distinct states of attention. As a now widespread technique, pupillometry is ready to provide meaningful applications and constitutes a viable part of the psychophysiological toolbox.


Subject(s)
Attention , Pupil , Attention/physiology , Executive Function , Humans , Pupil/physiology
12.
J Vis ; 21(8): 11, 2021 08 02.
Article in English | MEDLINE | ID: mdl-34351396

ABSTRACT

Most humans can walk effortlessly across uniform terrain, even when they do not pay much attention to it. However, most natural terrain is far from uniform, and we need visual information to maintain stable gait. Recent advances in mobile eye-tracking technology have made it possible to study, in natural environments, how terrain affects gaze and thus the sampling of visual information. However, natural environments provide only limited experimental control, and some conditions cannot safely be tested. Typical laboratory setups, in contrast, are far from natural settings for walking. We used a setup consisting of a dual-belt treadmill, a 240° projection screen, floor projection, three-dimensional optical motion tracking, and mobile eye tracking to investigate eye, head, and body movements during perturbed and unperturbed walking in a controlled yet naturalistic environment. In two experiments (N = 22 each), we simulated terrain difficulty by repeatedly inducing slipping through rapid acceleration of either of the two belts, either unpredictably (Experiment 1) or sometimes preceded by visual cues (Experiment 2). We quantified the distinct roles of eye and head movements for adjusting gaze on different time scales. While motor perturbations mainly influenced head movements, eye movements were primarily affected by the presence of visual cues. This was true both immediately following slips and, to a lesser extent, over the course of entire 5-min blocks. We found adapted gaze parameters already after the first perturbation in each block, with little transfer between blocks. In conclusion, gaze-gait interactions in experimentally perturbed yet naturalistic walking are adaptive, flexible, and effector specific.


Subject(s)
Gait , Walking , Adaptation, Physiological , Eye Movements , Head Movements , Humans
13.
PLoS One ; 16(6): e0252370, 2021.
Article in English | MEDLINE | ID: mdl-34086770

ABSTRACT

In multistability, a constant stimulus induces alternating perceptual interpretations. For many forms of visual multistability, the transition from one interpretation to another ("perceptual switch") is accompanied by a dilation of the pupil. Here we ask whether the same holds for auditory multistability, specifically auditory streaming. Two tones were played in alternation, yielding four distinct interpretations: the tones can be perceived as one integrated percept (single sound source), or as segregated with either tone or both tones in the foreground. We found that the pupil dilates significantly around the time a perceptual switch is reported ("multistable condition"). When participants instead responded to actual stimulus changes that closely mimicked the multistable perceptual experience ("replay condition"), the pupil dilated more around such responses than in multistability. This still held when the data were corrected for the pupil response to the stimulus change as such. Hence, active responses to an exogenous stimulus change trigger a stronger or temporally more confined pupil dilation than responses to an endogenous perceptual switch. In another condition, participants randomly pressed the buttons used for reporting multistability. In Study 1, this "random condition" failed to sufficiently mimic the temporal pattern of multistability. By adapting the instructions, in Study 2 we obtained a response pattern more similar to the multistable condition. In this case, the pupil dilated significantly around the random button presses. Albeit numerically smaller, this pupil response was not significantly different from that in the multistable condition. While there are several possible explanations (related, e.g., to the decision to respond), this underlines the difficulty of isolating a purely perceptual effect in multistability. Our data extend previous findings from visual to auditory multistability. They highlight methodological challenges in interpreting such data and suggest possible approaches to meet them, including a novel stimulus to simulate the experience of perceptual switches in auditory streaming.
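The core analysis behind such findings is event-locked averaging: cutting the continuous pupil trace into epochs around each report and baseline-correcting before averaging. A minimal sketch, with assumed sampling rate and window sizes:

```python
import numpy as np

def epoch_pupil(trace, events, fs=60, pre=1.0, post=3.0):
    """Cut baseline-corrected epochs around event samples.

    trace: 1-D pupil-diameter signal; events: sample indices of reports.
    The sampling rate and the 1 s pre / 3 s post window are assumptions.
    """
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = []
    for ev in events:
        if ev - n_pre < 0 or ev + n_post > len(trace):
            continue                             # skip truncated epochs
        seg = trace[ev - n_pre:ev + n_post]
        epochs.append(seg - seg[:n_pre].mean())  # subtract pre-event baseline
    return np.array(epochs)

# The grand-average dilation around switches (vs. replay responses) is then
# epoch_pupil(trace, switch_events).mean(axis=0), compared across conditions.
```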


Subject(s)
Auditory Perception/physiology , Acoustic Stimulation/methods , Adult , Female , Humans , Male , Pupil/physiology , Sound , Visual Perception/physiology
14.
Front Neurosci ; 15: 656913, 2021.
Article in English | MEDLINE | ID: mdl-34108857

ABSTRACT

How vision guides gaze in realistic settings has been researched for decades. Human gaze behavior is typically measured in laboratory settings that are well controlled but feature-reduced and movement-constrained, in sharp contrast to real-life gaze control, which combines eye, head, and body movements. Previous real-world research has shown environmental factors such as terrain difficulty to affect gaze; however, real-world settings are difficult to control or replicate. Virtual reality (VR) offers the experimental control of a laboratory, yet approximates the freedom and visual complexity of the real world (RW). We measured gaze data in 8 healthy young adults during walking in the RW and during simulated locomotion in VR. Participants walked along a pre-defined path inside an office building, which included different terrains such as long corridors and flights of stairs. In VR, participants followed the same path in a detailed virtual reconstruction of the building. We devised a novel hybrid control strategy for movement in VR: participants did not physically translate; forward movements were controlled by a hand-held device, while rotational movements were executed physically and transferred to the VR. We found significant effects of terrain type (flat corridor, staircase up, and staircase down) on gaze direction, on the spatial spread of gaze direction, and on the angular distribution of gaze-direction changes. The factor world (RW vs VR) affected the angular distribution of gaze-direction changes, saccade frequency, and head-centered vertical gaze direction. The latter effect vanished when referencing gaze to a world-fixed coordinate system and was likely due to specifics of headset placement, which cannot confound any other analyzed measure. Importantly, we did not observe a significant interaction between the factors world and terrain for any of the tested measures. This indicates that differences between terrain types are not modulated by the world. The overall dwell time on navigational markers did not differ between worlds. The similar dependence of gaze behavior on terrain in the RW and in VR indicates that our VR setup captures real-world constraints remarkably well. High-fidelity VR combined with naturalistic movement control therefore has the potential to narrow the gap between the experimental control of a lab and ecologically valid settings.

15.
Perception ; 50(4): 343-366, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33840288

ABSTRACT

A major objective of perception is the reduction of uncertainty about the outside world. Eye-movement research has demonstrated that attention and oculomotor control can subserve the function of decreasing uncertainty in vision. Here, we ask whether a similar effect exists for awareness in binocular rivalry, when two distinct stimuli presented to the two eyes compete for awareness. We tested whether this competition can be biased by uncertainty about the stimuli and their relevance for a perceptual task. Specifically, we had perceptually difficult stimuli (i.e., carrying high perceptual uncertainty) compete with perceptually easy stimuli (low perceptual uncertainty). Using a no-report paradigm and reading the dominant stimulus continuously from the observers' eye movements, we find that the perceptually difficult stimulus becomes more dominant than the easy one. This difference is enhanced by the stimuli's relevance for the task. In trials with a task, the difference in dominance emerges quickly, peaks before the response, and then persists throughout the trial (a further 10 s). However, the difference is already present in blocks before task instruction and is still observable when the stimuli have ceased to be task relevant. This shows that perceptual uncertainty persistently increases perceptual dominance, an effect that is magnified by task relevance.


Subject(s)
Attention , Vision, Binocular , Eye , Eye Movements , Humans , Sensation
16.
Vision Res ; 182: 69-88, 2021 05.
Article in English | MEDLINE | ID: mdl-33610002

ABSTRACT

In multistability, perceptual interpretations ("percepts") of ambiguous stimuli alternate over time. There is considerable debate as to whether similar regularities govern the first percept after stimulus onset and percepts during prolonged presentation. We address this question in a visual pattern-component rivalry paradigm by presenting two overlaid drifting gratings, which participants perceived as individual gratings passing in front of each other ("segregated") or as a plaid ("integrated"). We varied the enclosed angle ("opening angle") between the gratings (Experiments 1 and 2) and stimulus orientation (Experiment 2). The relative number of integrated percepts increased monotonically with opening angle. The point of equality, where half of the percepts were integrated, was at a smaller opening angle at onset than during prolonged viewing. The functional dependence of the relative number of integrated percepts on opening angle showed a steeper curve at onset than during prolonged viewing. Dominance durations of integrated percepts were longer at onset than during prolonged viewing and increased with opening angle. The general pattern persisted when stimuli were rotated (Experiment 2), despite some perceptual preference for cardinal motion directions over oblique directions. Analysis of eye movements, specifically the slow phase of the optokinetic nystagmus (OKN), confirmed the veridicality of participants' reports and provided a temporal characterization of percept formation after stimulus onset. Together, our results show that the first percept after stimulus onset exhibits a different dependence on stimulus parameters than percepts during prolonged viewing. This underlines the distinct role of the first percept in multistability.
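Reading the percept from the OKN slow phase rests on a simple principle: during a segregated percept the eyes track one grating's drift direction, during an integrated percept they track the plaid's combined direction. A minimal classification sketch; the direction vectors and the nearest-direction rule are illustrative simplifications, not the study's algorithm:

```python
import numpy as np

def classify_percept(eye_vel, dir_grating1, dir_grating2, dir_plaid):
    """Label each OKN slow-phase velocity sample by the nearest expected
    tracking direction (unit 2-D vectors). An illustrative simplification
    of eye-movement-based percept readout."""
    dirs = np.stack([dir_grating1, dir_grating2, dir_plaid])
    labels = np.array(["segregated-1", "segregated-2", "integrated"])
    v = eye_vel / np.linalg.norm(eye_vel, axis=1, keepdims=True)
    return labels[np.argmax(v @ dirs.T, axis=1)]   # max cosine similarity

# Example: gratings drifting up-left and down-left, plaid drifting left.
d1, d2 = np.array([-0.71, 0.71]), np.array([-0.71, -0.71])
dp = np.array([-1.0, 0.0])
vel = np.array([[-0.9, 0.6], [-1.1, 0.1]])
print(classify_percept(vel, d1, d2, dp))  # ['segregated-1' 'integrated']
```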


Subject(s)
Nystagmus, Optokinetic , Vision, Binocular , Humans , Photic Stimulation
17.
Sci Rep ; 10(1): 22057, 2020 12 16.
Article in English | MEDLINE | ID: mdl-33328485

ABSTRACT

Whether fixation selection in real-world scenes is guided by image salience or by objects has been a matter of scientific debate. To contrast the two views, we compared effects of location-based and object-based visual salience in young and older (65+ years) adults. Generalized linear mixed models were used to assess the unique contribution of salience to fixation selection in scenes. When analysing fixation guidance without recourse to objects, visual salience predicted whether image patches were fixated or not. This effect was reduced for the elderly, replicating an earlier finding. When using objects as the unit of analysis, we found that highly salient objects were more frequently selected for fixation than objects with low visual salience. Interestingly, this effect was larger for older adults. We also analysed where viewers fixate within objects once they are selected. A preferred viewing location close to the centre of the object was found for both age groups. The results support the view that objects are important units of saccadic selection. Reconciling the salience view with the object view, we suggest that visual salience contributes to prioritization among objects. Moreover, the data point towards an increasing relevance of object-bound information with increasing age.
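The "unique contribution of salience" in such analyses is typically the salience coefficient in a (generalized linear mixed) logistic model of whether a patch or object was fixated. A minimal fixed-effects-only sketch on simulated data; a full GLMM would add random effects for subjects and scenes, and all names and values here are illustrative:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated object-level data: salience, object size, and whether the
# object was fixated; all names and values are illustrative.
salience = rng.uniform(0, 1, 1000)
size = rng.uniform(0, 1, 1000)
p_fix = 1 / (1 + np.exp(-(-1.0 + 2.0 * salience + 0.5 * size)))
fixated = rng.binomial(1, p_fix)

X = sm.add_constant(np.column_stack([salience, size]))
fit = sm.Logit(fixated, X).fit(disp=0)
print(fit.params)  # a positive salience coefficient means salient objects
                   # are more likely to be selected for fixation
```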


Subject(s)
Fixation, Ocular/physiology , Saccades/physiology , Visual Perception/physiology , Adolescent , Adult , Aged , Aged, 80 and over , Female , Humans , Male , Young Adult
18.
J Vis ; 20(4): 15, 2020 04 09.
Article in English | MEDLINE | ID: mdl-32330229

ABSTRACT

Fixation durations provide insights into processing demands. We investigated factors controlling fixation durations during scene viewing in two experiments. In Experiment 1, we tested the degree to which fixation durations adapt to global scene processing difficulty by manipulating the contrast (from original contrast to isoluminant) and saturation (original vs. grayscale) of the entire scene. We observed longer fixation durations for lower levels of contrast and longer fixation durations for grayscale than for color scenes. Thus, fixation durations were globally slowed as visual information became more and more degraded, making scene processing increasingly difficult. In Experiment 2, we investigated two possible sources for this slow-down. We used "checkerboard" stimuli in which unmodified patches alternated with patches from which luminance information had been removed (isoluminant patches). Fixation durations showed an inverted immediacy effect (longer, rather than shorter, fixation durations on unmodified patches) along with a parafoveal-on-foveal effect (shorter fixation durations when an unmodified patch was fixated next). This effect was stronger when the currently fixated patch was isoluminant as opposed to unmodified. Our results suggest that peripheral scene information substantially affects fixation durations and are consistent with the notion of competition among the current and potential future fixation locations.


Subject(s)
Fixation, Ocular/physiology , Visual Perception/physiology , Adult , Female , Humans , Male , Time Factors , Young Adult
19.
Neuroimage ; 210: 116549, 2020 04 15.
Article in English | MEDLINE | ID: mdl-31954844

ABSTRACT

The brain has been theorized to employ inferential processes to overcome the problem of uncertainty. Inference is thought to underlie neural processes in disparate domains such as value-based decision-making and perception. Value-based decision-making commonly involves deliberation, a time-consuming process that requires conscious consideration of decision variables. Perception, by contrast, is thought to be automatic and effortless. Both processes may nonetheless call on a general neural system to resolve uncertainty. We addressed this question by directly comparing uncertainty signals in visual perception and an economic task using fMRI. We presented the same individuals with different versions of a bi-stable figure (the Necker cube) and with a gambling task during fMRI acquisition. We experimentally varied uncertainty, either about the perceptual state or about the financial outcome. We found that inferential errors in the gambling task, indexed by a formal account of surprise, yielded BOLD responses in the anterior insula, in line with earlier findings. Moreover, perceptual uncertainty and surprise in the Necker cube task yielded similar responses in the anterior insula. These results suggest that uncertainty, irrespective of domain, engages a common brain region, the anterior insula. These findings provide empirical evidence that the brain interacts with its environment through inferential processes.
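A standard formal account of surprise is Shannon surprise: the negative log probability of the observed outcome under the agent's current beliefs. A minimal sketch with illustrative probabilities:

```python
import numpy as np

def shannon_surprise(p_outcome):
    """Surprise of an observed outcome with probability p: -log p (nats)."""
    return -np.log(p_outcome)

# Illustrative gambles: a win expected with p = 0.8 is barely surprising,
# whereas the same win expected with p = 0.1 is highly surprising.
print(shannon_surprise(0.8))  # ≈ 0.22 nats
print(shannon_surprise(0.1))  # ≈ 2.30 nats
```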


Subject(s)
Brain Mapping , Cerebral Cortex/physiology , Decision Making/physiology , Pattern Recognition, Visual/physiology , Uncertainty , Adult , Cerebral Cortex/diagnostic imaging , Female , Humans , Magnetic Resonance Imaging , Male , Young Adult