1.
Elife ; 12, 2023 09 26.
Article in English | MEDLINE | ID: mdl-37750670

ABSTRACT

How does the human brain combine information across the eyes? It has been known for many years that cortical normalization mechanisms implement 'ocularity invariance': equalizing neural responses to spatial patterns presented either monocularly or binocularly. Here, we used a novel combination of electrophysiology, psychophysics, pupillometry, and computational modeling to ask whether this invariance also holds for flickering luminance stimuli with no spatial contrast. We find dramatic violations of ocularity invariance for these stimuli, both in the cortex and in the subcortical pathways that govern pupil diameter. Specifically, we find substantial binocular facilitation in both pathways, with the effect strongest in the cortex. Near-linear binocular additivity (instead of ocularity invariance) was also found using a perceptual luminance matching task. Ocularity invariance is, therefore, not a ubiquitous feature of visual processing, and the brain appears to repurpose a generic normalization algorithm for different visual functions by adjusting the amount of interocular suppression.
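The normalization account summarized in this abstract can be illustrated with a toy two-stage gain-control computation. The following is a minimal sketch, assuming illustrative parameter values (suppression weight w, exponent m, saturation constant s) rather than the authors' fitted model; it shows how the strength of interocular suppression determines whether binocular responses are equalized with monocular ones (ocularity invariance) or facilitated.

```python
# Toy two-stage binocular gain-control model. All parameter values
# are illustrative assumptions, not the paper's fitted estimates.

def binocular_response(c_left, c_right, w=1.0, m=2.0, s=1.0):
    """Stage 1: each eye's input is divisively suppressed by the other
    eye, weighted by w. Stage 2: the monocular outputs are summed."""
    left = c_left**m / (s + c_left + w * c_right)
    right = c_right**m / (s + c_right + w * c_left)
    return left + right

c = 10.0  # arbitrary input strength

# Strong interocular suppression (w = 1): monocular and binocular
# responses are nearly equal -- 'ocularity invariance'.
print(binocular_response(c, 0.0, w=1.0))  # ~9.1 (monocular)
print(binocular_response(c, c, w=1.0))    # ~9.5 (binocular)

# Weak suppression (w = 0): the binocular response roughly doubles,
# analogous to the facilitation reported here for luminance flicker.
print(binocular_response(c, 0.0, w=0.0))  # ~9.1 (monocular)
print(binocular_response(c, c, w=0.0))    # ~18.2 (binocular)
```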


Subject(s)
Eye, Visual Perception, Humans, Animals, Algorithms, Birds, Brain
2.
Proc Biol Sci ; 290(2000): 20230415, 2023 06 14.
Article in English | MEDLINE | ID: mdl-37282539

ABSTRACT

It is unclear whether our brain extracts and processes time information using a single, centralized mechanism or through a network of distributed mechanisms that are specific to modality and time range. Visual adaptation has previously been used to investigate the mechanisms underlying time perception for millisecond intervals. Here, we investigated whether a well-known duration after-effect induced by motion adaptation in the sub-second range (referred to as 'perceptual timing') also occurs in the supra-second range (called 'interval timing'), which is more accessible to cognitive control. Participants judged the relative duration of two intervals after spatially localized adaptation to drifting motion. Adaptation substantially compressed the apparent duration of a 600 ms stimulus in the adapted location, whereas it had a much weaker effect on a 1200 ms interval. Discrimination thresholds after adaptation improved slightly relative to baseline, implying that the duration effect cannot be ascribed to changes in attention or to noisier estimates. A novel computational model of duration perception can explain both these results and the bidirectional shifts of perceived duration after adaptation reported in other studies. We suggest that adaptation to visual motion can be used as a tool to investigate the mechanisms underlying time perception at different time scales.
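The weaker after-effect at 1200 ms has an intuitive reading: if adaptation transiently slows the accumulation of subjective time and then recovers, a longer interval includes more recovered time and is proportionally less compressed. Below is a minimal pacemaker-style sketch of that idea under assumed parameters (gain_drop, tau); it is a generic illustration, not the computational model proposed in the paper.

```python
# Toy pacemaker-accumulator sketch: adaptation transiently reduces the
# rate of subjective time, recovering with time constant tau (ms).
# Parameters are assumptions chosen only to illustrate the logic.
import numpy as np

def perceived_duration(duration_ms, adapted=False, gain_drop=0.4, tau=400.0):
    t = np.arange(duration_ms)
    rate = np.ones_like(t, dtype=float)
    if adapted:
        rate -= gain_drop * np.exp(-t / tau)  # transient slowing
    return rate.sum()  # accumulated subjective time

for d in (600, 1200):
    ratio = perceived_duration(d, adapted=True) / perceived_duration(d)
    print(d, round(ratio, 2))
# 600 -> ~0.79 (about 21% compression); 1200 -> ~0.87 (about 13%),
# mirroring the stronger sub-second effect described above.
```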


Subject(s)
Adaptation, Physiological; Time Factors
3.
Front Neurosci ; 14: 581706, 2020.
Article in English | MEDLINE | ID: mdl-33362456

ABSTRACT

Two stereoscopic cues that underlie the perception of motion-in-depth (MID) are changes in retinal disparity over time (CD) and interocular velocity differences (IOVD). These cues have independent spatiotemporal sensitivity profiles, depend upon different low-level stimulus properties, and are potentially processed along separate cortical pathways. Here, we ask whether these MID cues code for different motion directions: do they give rise to discriminable patterns of neural signals, and is there evidence for their convergence onto a single "motion-in-depth" pathway? To answer this, we use a decoding algorithm to test whether, and when, patterns of electroencephalogram (EEG) signals measured across the full scalp in response to CD- and IOVD-isolating stimuli moving toward or away in depth can be distinguished. We find that both MID cue type and 3D-motion direction can be decoded at different points in the EEG timecourse, and that direction decoding cannot be accounted for by static disparity information. Remarkably, we find evidence for late processing convergence: IOVD motion direction can be decoded relatively late in the timecourse by a decoder trained on CD stimuli, and vice versa. We conclude that early CD and IOVD direction decoding performance depends upon fundamentally different low-level stimulus features, but that later stages of decoding may be driven by a central, shared pathway that is agnostic to these features. Overall, these data are the first to show that neural responses to CD and IOVD cues moving toward and away in depth can be decoded from EEG signals, and that different aspects of MID cues contribute to decoding performance at different points along the EEG timecourse.
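The cross-cue test at the heart of this result (train a decoder on CD trials, evaluate it on IOVD trials, and vice versa) follows a standard time-resolved decoding recipe. Here is a minimal sketch assuming scikit-learn and EEG arrays shaped (trials, channels, timepoints); the array names and placeholder data are hypothetical, and the paper's actual pipeline may differ.

```python
# Time-resolved cross-cue decoding sketch. Array names and shapes are
# assumptions: X_* is (n_trials, n_channels, n_times), y_* holds
# toward/away direction labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def cross_decode(X_train, y_train, X_test, y_test):
    """Fit a classifier per timepoint on one cue's trials and score it
    on the other cue's trials at the same timepoint."""
    n_times = X_train.shape[2]
    acc = np.empty(n_times)
    for t in range(n_times):
        clf = make_pipeline(StandardScaler(),
                            LogisticRegression(max_iter=1000))
        clf.fit(X_train[:, :, t], y_train)
        acc[t] = clf.score(X_test[:, :, t], y_test)
    return acc

# Placeholder random data (80 trials, 64 channels, 200 timepoints);
# with real EEG, sustained above-chance accuracy late in the epoch
# would indicate a shared, cue-agnostic stage.
rng = np.random.default_rng(0)
X_cd, X_iovd = rng.standard_normal((2, 80, 64, 200))
y_cd, y_iovd = rng.integers(0, 2, size=(2, 80))
print(cross_decode(X_cd, y_cd, X_iovd, y_iovd).mean())  # ~0.5 (chance)
```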
