Results 1 - 8 of 8
1.
J Neurosci ; 43(30): 5537-5545, 2023 07 26.
Article in English | MEDLINE | ID: mdl-37344235

ABSTRACT

Hierarchical predictive coding networks are a general model of sensory processing in the brain. Under neural delays, these networks have been suggested to naturally generate oscillatory activity in approximately the α frequency range (∼8-12 Hz). This suggests that α oscillations, a prominent feature of EEG recordings, may be a spectral "fingerprint" of predictive sensory processing. Here, we probed this possibility by investigating whether oscillations over the visual cortex predictively encode visual information. Specifically, we examined whether their power carries information about the position of a moving stimulus, in a temporally predictive fashion. In two experiments (N = 32, 18 female; N = 34, 17 female), participants viewed an apparent-motion stimulus moving along a circular path while EEG was recorded. To investigate the encoding of stimulus-position information, we developed a method of deriving probabilistic spatial maps from oscillatory power estimates. With this method, we demonstrate that it is possible to reconstruct the trajectory of a moving stimulus from α/low-β oscillations, tracking its position even across unexpected motion reversals. We also show that future position representations are activated in the absence of direct visual input, demonstrating that temporally predictive mechanisms manifest in α/β band oscillations. In a second experiment, we replicate these findings and show that the encoding of information in this range is not driven by visual entrainment. By demonstrating that occipital α/β oscillations carry stimulus-related information, in a temporally predictive fashion, we provide empirical evidence of these rhythms as a spectral "fingerprint" of hierarchical predictive processing in the human visual system.

SIGNIFICANCE STATEMENT: "Hierarchical predictive coding" is a general model of sensory information processing in the brain. When in silico predictive coding models are constrained by neural transmission delays, their activity naturally oscillates in roughly the α range (∼8-12 Hz). Using time-resolved EEG decoding, we show that neural rhythms in this approximate range (α/low-β) over the human visual cortex predictively encode the position of a moving stimulus. From the amplitude of these oscillations, we are able to reconstruct the stimulus' trajectory, revealing signatures of temporally predictive processing. This provides direct neural evidence linking occipital α/β rhythms to predictive visual processing, supporting the emerging view of such oscillations as a potential spectral "fingerprint" of hierarchical predictive processing in the human visual system.
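Probabilistic spatial maps of the kind described above can in principle be derived with any generative decoder over band-limited power. The following is a minimal sketch on synthetic data using a Gaussian naive-Bayes decoder; it is not the authors' actual pipeline, and the channel count, tuning model, and noise level are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pos, n_chan, n_trials = 8, 16, 50

# Hypothetical tuning: each channel's alpha-band power is modulated by
# stimulus position (invented for this toy example).
tuning = rng.normal(0, 1, (n_pos, n_chan))

def simulate_power(pos):
    # One trial's alpha-power estimate per channel for a stimulus at `pos`.
    return tuning[pos] + rng.normal(0, 0.5, n_chan)

# Training: per-position mean and spread of power (Gaussian naive Bayes).
train = {p: np.array([simulate_power(p) for _ in range(n_trials)])
         for p in range(n_pos)}
mu = np.array([train[p].mean(0) for p in range(n_pos)])
sd = np.array([train[p].std(0) for p in range(n_pos)]) + 1e-9

def position_map(power):
    # Log-likelihood of the observed power under each position's model,
    # normalized into a probabilistic spatial map over positions.
    ll = -0.5 * (((power - mu) / sd) ** 2 + 2 * np.log(sd)).sum(axis=1)
    p = np.exp(ll - ll.max())
    return p / p.sum()

pmap = position_map(simulate_power(3))
print(pmap.argmax())  # most probable stimulus position
```

Applied at successive timepoints, such maps trace out a decoded trajectory, which is the sense in which the stimulus path can be "reconstructed" from oscillatory power.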


Subject(s)
Alpha Rhythm , Visual Cortex , Humans , Female , Visual Perception , Brain , Sensation , Electroencephalography
2.
Elife ; 12, 2023 01 19.
Article in English | MEDLINE | ID: mdl-36656268

ABSTRACT

When interacting with the dynamic world, the brain receives outdated sensory information, due to the time required for neural transmission and processing. In motion perception, the brain may overcome these fundamental delays through predictively encoding the position of moving objects using information from their past trajectories. In the present study, we evaluated this proposition using multivariate analysis of high temporal resolution electroencephalographic data. We tracked neural position representations of moving objects at different stages of visual processing, relative to the real-time position of the object. During early stimulus-evoked activity, position representations of moving objects were activated substantially earlier than the equivalent activity evoked by unpredictable flashes, aligning the earliest representations of moving stimuli with their real-time positions. These findings indicate that the predictability of straight trajectories enables full compensation for the neural delays accumulated early in stimulus processing, but that delays still accumulate across later stages of cortical processing.


The survival of animals depends on their ability to respond to different stimuli quickly and efficiently. From flies fluttering away when a swatter approaches, to deer running at the sight of a lion, to humans ducking to escape a looming punch, fast reactions to harmful stimuli are what keep us (and other fauna) from getting injured or seriously maimed. This entire process is orchestrated by the nervous system, where cells called neurons carry signals from our senses to higher processing centres in the brain, allowing us to react appropriately.

However, this relay from the sensory organs to the brain accumulates delays: it takes time for signals to be transmitted from cell to cell, and also for the brain to process these signals. This means that the information received by our brains is usually outdated, which could lead to delayed responses. Experiments in cats and monkeys have shown that the brain can compensate for these delays by predicting how objects might move in the immediate future, essentially extrapolating the trajectories of objects moving in a predictable manner. This might explain why rabbits run in an impulsive zigzag manner when trying to escape a predator: if they change direction often enough, the predator may not be able to predict where they are going next.

Johnson et al. wanted to find out whether human brains can also compensate for delays in processing the movement of objects and, if so, at what point (early or late) in the processing pipeline the compensation occurs. To do this, they used electroencephalography (EEG), which records the average electrical activity of neurons in a region of the brain over time, while volunteers were presented with both static and moving stimuli.
The data showed that the volunteers' brains responded to moving stimuli significantly faster than to static stimuli in the same position on the screen, essentially tracking the real-time position of the moving stimulus. Johnson et al. further analysed and compared the EEG recordings for moving versus static stimuli to demonstrate that compensation for processing delays occurred early in the processing journey. Indeed, the compensation likely happens before the signal reaches a part of the brain called the visual cortex, which processes stimuli from sight. Any delays accrued beyond this point were not compensated for.

Johnson et al. clearly demonstrate that the human brain can work around its own shortcomings to allow us to perceive moving objects in real time. These findings start to explain, for example, how sportspersons are able to catch fast-moving balls and return serves coming at them at speeds of approximately 200 kilometres per hour. The results also lay the foundation for studying processing delays in other senses, such as hearing and touch.


Subject(s)
Motion Perception , Visual Perception , Visual Perception/physiology , Motion Perception/physiology , Brain/physiology , Reaction Time/physiology , Synaptic Transmission , Photic Stimulation
3.
Cortex ; 138: 191-202, 2021 05.
Article in English | MEDLINE | ID: mdl-33711770

ABSTRACT

Establishing the real-time position of a moving object poses a challenge to the visual system due to neural processing delays. While sensory information is travelling through the visual hierarchy, the object continues moving and information about its position becomes outdated. By extrapolating the position of a moving object along its trajectory, predictive mechanisms might effectively decrease the processing time associated with these objects. Here, we use time-resolved decoding of electroencephalographic (EEG) data from an apparent motion paradigm to demonstrate the interaction of two separate predictive mechanisms. First, we reveal predictive latency advantages for position representations as soon as the second object in an apparent motion sequence appears - even before the stimulus contains any physical motion energy. This is consistent with the existence of omni-directional, within-layer waves of sub-threshold activity that bring neurons coding for adjacent positions closer to their firing threshold, thereby reducing the processing time of the second stimulus in one of those positions. Second, we show that an additional direction-specific latency advantage emerges from the third sequence position onward, once the direction of the apparent motion stimulus is uniquely determined. Because the receptive fields of early visual areas are too small to encompass sequential apparent motion positions (as evidenced by the lack of latency modulation for the second stimulus position), this latency advantage most likely arises from descending predictions from higher to lower visual areas through feedback connections. Finally, we reveal that the same predictive activation that facilitates the processing of the object in its expected position needs to be overcome when the object's trajectory unexpectedly reverses, causing an additional latency disadvantage for stimuli that violate predictions.
Altogether, our results suggest that two complementary mechanisms interact to form and revise predictions in visual motion processing, modulating the latencies of neural position representations at different levels of visual processing.


Subject(s)
Motion Perception , Visual Cortex , Electroencephalography , Humans , Motion , Photic Stimulation , Visual Perception
4.
Cortex ; 136: 140-146, 2021 03.
Article in English | MEDLINE | ID: mdl-33461733

ABSTRACT

Our brains can represent expected future states of our sensory environment. Recent work has shown that, when we expect a specific stimulus to appear at a specific time, we can predictively generate neural representations of that stimulus even before it is physically presented. These observations raise two exciting questions: Are pre-activated sensory representations used for perceptual decision-making? And, do we transiently perceive an expected stimulus that does not actually appear? To address these questions, we propose that pre-activated neural representations provide sensory evidence that is used for perceptual decision-making. This can be understood within the framework of the Diffusion Decision Model as an early accumulation of decision evidence in favour of the expected percept. Our proposal makes novel predictions relating to expectation effects on neural markers of decision evidence accumulation, and also provides an explanation for why we sometimes perceive stimuli that are expected, but do not appear.
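The Diffusion Decision Model framing above can be made concrete with a toy simulation. Here, expectation is modelled as pre-accumulated evidence, i.e., a head start toward the expected bound; the parameter values (drift, bound, the 0.3 head start) are arbitrary illustrative choices, not estimates from the paper:

```python
import random

def ddm_trial(drift, start=0.0, bound=1.0, noise=1.0, dt=0.001, rng=random):
    """One Diffusion Decision Model trial: evidence accumulates from `start`
    until it reaches +bound ("stimulus present") or -bound ("absent")."""
    x, t = start, 0.0
    while abs(x) < bound:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0, 1)
        t += dt
    return t, x > 0

random.seed(1)

mean_rt = lambda trials: sum(t for t, _ in trials) / len(trials)
hit_rate = lambda trials: sum(hit for _, hit in trials) / len(trials)

# Expectation as a head start toward the expected bound: faster decisions.
neutral  = [ddm_trial(drift=1.0, start=0.0) for _ in range(500)]
expected = [ddm_trial(drift=1.0, start=0.3) for _ in range(500)]

# When the expected stimulus never appears (drift = 0), the head start alone
# sometimes drives the process to the "present" bound: a false percept.
absent_neutral  = [ddm_trial(drift=0.0, start=0.0) for _ in range(500)]
absent_expected = [ddm_trial(drift=0.0, start=0.3) for _ in range(500)]

faster = mean_rt(expected) < mean_rt(neutral)
more_false_percepts = hit_rate(absent_expected) > hit_rate(absent_neutral)
print(faster, more_false_percepts)
```

The second comparison is the sketch's analogue of the proposal's final point: a biased starting point makes it more likely that noise alone reaches the "expected" bound, so expected-but-absent stimuli are sometimes reported.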


Subject(s)
Decision Making , Visual Perception , Brain , Humans , Photic Stimulation
5.
Proc Natl Acad Sci U S A ; 117(13): 7510-7515, 2020 03 31.
Article in English | MEDLINE | ID: mdl-32179666

ABSTRACT

The transmission of sensory information through the visual system takes time. As a result of these delays, the visual information available to the brain always lags behind the timing of events in the present moment. Compensating for these delays is crucial for functioning within dynamic environments, since interacting with a moving object (e.g., catching a ball) requires real-time localization of the object. One way the brain might achieve this is via prediction of anticipated events. Using time-resolved decoding of electroencephalographic (EEG) data, we demonstrate that the visual system represents the anticipated future position of a moving object, showing that predictive mechanisms activate the same neural representations as afferent sensory input. Importantly, this activation is evident before sensory input corresponding to the stimulus position is able to arrive. Finally, we demonstrate that, when predicted events do not eventuate, sensory information arrives too late to prevent the visual system from representing what was expected but never presented. Taken together, we demonstrate how the visual system can implement predictive mechanisms to preactivate sensory representations, and argue that this might allow it to compensate for its own temporal constraints, allowing us to interact with dynamic visual environments in real time.
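Time-resolved decoding of this kind trains a separate classifier at every timepoint and asks when stimulus information first becomes available. Below is a toy sketch on synthetic epochs with a nearest-centroid decoder; the injected onsets of 35 vs. 50 samples are arbitrary stand-ins for moving vs. unpredictable stimuli, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(2)
n_times, n_chan = 100, 32

def epoch(onset, pattern, noise=1.0):
    # Toy EEG epoch: a stimulus-specific spatial pattern appears at `onset`.
    data = rng.normal(0, noise, (n_times, n_chan))
    data[onset:] += pattern
    return data

patterns = rng.normal(0, 1, (2, n_chan))   # two stimulus positions
moving_onset, flash_onset = 35, 50         # hypothetical onsets (in samples)

def decode_onset(onset, n_trials=40):
    # Train/test a nearest-centroid decoder independently at each timepoint;
    # report the first timepoint where test accuracy clearly exceeds chance.
    X = np.stack([epoch(onset, patterns[c])
                  for _ in range(n_trials) for c in (0, 1)])
    y = np.tile([0, 1], n_trials)
    half = len(y) // 2                     # first half trains, second half tests
    acc = np.zeros(n_times)
    for t in range(n_times):
        mu0 = X[:half][y[:half] == 0, t].mean(0)
        mu1 = X[:half][y[:half] == 1, t].mean(0)
        d0 = ((X[half:, t] - mu0) ** 2).sum(1)
        d1 = ((X[half:, t] - mu1) ** 2).sum(1)
        acc[t] = ((d1 < d0) == y[half:]).mean()
    return int(np.argmax(acc > 0.75))

onset_moving = decode_onset(moving_onset)
onset_flash = decode_onset(flash_onset)
print(onset_moving < onset_flash)
```

Comparing the decodable-onset times for the two conditions is the logic behind the latency advantage reported above: position information for predictable moving stimuli becomes available earlier than for unpredictable flashes at the same location.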


Subject(s)
Motion Perception/physiology , Vision, Ocular/physiology , Visual Perception/physiology , Adult , Brain/physiology , Electroencephalography , Female , Humans , Male , Models, Neurological , Photic Stimulation , Psychomotor Performance/physiology , Reaction Time/physiology , Visual Pathways/physiology
6.
J Vis ; 19(13): 9, 2019 11 01.
Article in English | MEDLINE | ID: mdl-31715632

ABSTRACT

In the flash-grab effect, when a disk is flashed on a moving background at the moment it reverses direction, the perceived location of the disk is strongly displaced in the direction of the motion that follows the reversal. Here, we ask whether increased expectation of the reversal reduces its effect on the motion-induced shift, as suggested by predictive coding models with first order predictions. Across four experiments we find that when the reversal is expected, the illusion gets stronger, not weaker. We rule out accumulating motion adaptation as a contributing factor. The pattern of results cannot be accounted for by first-order predictions of location. Instead, it appears that second-order predictions of event timing play a role. Specifically, we conclude that temporal expectation causes a transient increase in temporal attention, boosting the strength of the motion signal and thereby increasing the strength of the illusion.


Subject(s)
Motion Perception/physiology , Pattern Recognition, Visual/physiology , Photic Stimulation , Adult , Female , Humans , Illusions/physiology , Male , Young Adult
7.
J Vis ; 19(2): 3, 2019 02 01.
Article in English | MEDLINE | ID: mdl-30725096

ABSTRACT

Motion-induced position shifts constitute a broad class of visual illusions in which motion and position signals interact in the human visual pathway. In such illusions, the presence of visual motion distorts the perceived positions of objects in nearby space. Predictive mechanisms, which could contribute to compensating for processing delays due to neural transmission, have been given as an explanation. However, such mechanisms have struggled to explain why we do not usually perceive objects extrapolated beyond the end of their trajectory. Advocates of this interpretation have proposed a "correction-for-extrapolation" mechanism to explain this: When the object motion ends abruptly, this mechanism corrects the overextrapolation by shifting the perceived object location backwards to its actual location. However, such a mechanism has so far not been empirically demonstrated. Here, we use a novel version of the flash-grab illusion to demonstrate this mechanism. In the flash-grab effect, a target is flashed on a moving background that abruptly changes direction, leading to the mislocalization of the target. Here, we manipulate the angle of the direction change to dissociate the contributions of the background motion before and after the flash. Consistent with previous reports, we observe that perceptual mislocalization in the flash-grab illusion is mainly driven by motion after the flash. Importantly, however, we reveal a small but consistent mislocalization component in the direction opposite to the direction of the first motion sequence. This provides empirical support for the proposed correction-for-extrapolation mechanism, and therefore corroborates the interpretation that motion-induced position shifts might result from predictive interactions between motion and position signals.


Subject(s)
Illusions/physiology , Motion Perception/physiology , Humans , Judgment , Pattern Recognition, Visual , Psychophysics , Visual Pathways/physiology
8.
J Exp Psychol Hum Percept Perform ; 42(11): 1716-1723, 2016 11.
Article in English | MEDLINE | ID: mdl-27428779

ABSTRACT

The pupillary light response has been shown not to be a purely reflexive mechanism but to be sensitive to higher order perceptual processes, such as covert visual attention. In the present study we examined whether the pupillary light response is modulated by stimuli that are not physically present but are maintained in visual working memory. In all conditions, displays contained both bright and dark stimuli. Participants were instructed to covertly attend and encode either the bright or the dark stimuli, which then had to be maintained in visual working memory for a subsequent change-detection task. The pupil was smaller in response to encoding bright stimuli compared to dark stimuli. However, this effect did not sustain during the maintenance phase. This was the case even when brightness was directly relevant for the working memory task. These results reveal that the encoding of task-relevant and physically present information in visual working memory is reflected in the pupil. In contrast, the pupil is not sensitive to the maintenance of task-relevant but no longer visible stimuli. One interpretation of our results is that the pupil optimizes its size for perception of stimuli during encoding; however, once stimuli are no longer visible (during maintenance), an "optimal" pupil size no longer serves a purpose, and the pupil may therefore cease to reflect the brightness of the memorized stimuli.


Subject(s)
Attention/physiology , Memory, Short-Term/physiology , Pupil/physiology , Visual Perception/physiology , Adolescent , Adult , Female , Humans , Male , Young Adult