Results 1 - 20 of 79
1.
Front Psychol ; 14: 1180561, 2023.
Article in English | MEDLINE | ID: mdl-37663341

ABSTRACT

Our brain employs mechanisms to adapt to changing visual conditions. In addition to natural changes in our physiology and those in the environment, our brain is also capable of adapting to "unnatural" changes, such as inverted visual inputs generated by inverting prisms. In this study, we examined the brain's capability to adapt to hyperspaces. We generated four-dimensional (4D) spatial stimuli in virtual reality and tested the ability to distinguish between rigid and non-rigid motion. We found that observers are able to differentiate rigid and non-rigid motion of hypercubes (4D) with a performance comparable to that obtained using cubes (3D). Moreover, observers' performance improved when they were provided with a more immersive 3D experience but remained robust against increasing shape variations. At this juncture, we characterize our findings as "3 1/2 D perception" since, while we show the ability to extract and use 4D information, we do not yet have evidence of a complete phenomenal 4D experience.
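
A minimal sketch of how such stimuli can be constructed, assuming a standard tesseract parameterization, a rigid rotation in a single 4D plane, and a perspective projection from 4D to 3D; the function names and parameter values are illustrative, not taken from the study:

```python
# Sketch only: build a hypercube, apply a rigid 4D rotation, project to 3D.
import itertools
import numpy as np

def hypercube_vertices():
    """Return the 16 vertices of a unit tesseract centered at the origin."""
    return np.array(list(itertools.product([-1.0, 1.0], repeat=4)))

def rotation_4d(theta, axes=(0, 3)):
    """Rigid rotation by angle theta in the plane spanned by two axes (default: x-w)."""
    R = np.eye(4)
    i, j = axes
    R[i, i] = np.cos(theta)
    R[i, j] = -np.sin(theta)
    R[j, i] = np.sin(theta)
    R[j, j] = np.cos(theta)
    return R

def project_to_3d(points_4d, viewer_distance=3.0):
    """Perspective projection from 4D to 3D coordinates."""
    w = points_4d[:, 3]
    scale = viewer_distance / (viewer_distance - w)
    return points_4d[:, :3] * scale[:, None]

# Rigid motion: every vertex undergoes the same 4D rotation before projection.
verts = hypercube_vertices()
rotated = verts @ rotation_4d(np.pi / 8).T
stimulus_3d = project_to_3d(rotated)   # what would be rendered in VR
```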

2.
Vision Res ; 202: 108142, 2023 01.
Article in English | MEDLINE | ID: mdl-36423519

ABSTRACT

The perception of motion not only depends on the detection of motion signals but also on choosing and applying reference-frames according to which motion is interpreted. Here we propose a neural model that implements the common-fate principle for reference-frame selection. The model starts with a retinotopic layer of directionally-tuned motion detectors. The Gestalt common-fate principle is applied to the activities of these detectors to represent, in two neural populations, the direction and the magnitude (speed) of the reference-frame. The output activities of retinotopic motion-detectors are decomposed using the direction of the reference-frame. The direction and magnitude of the reference-frame are then applied to these decomposed motion-vectors to generate activities that reflect relative-motion perception, i.e., the perception of motion with respect to the prevailing reference-frame. We simulated this model for classical relative-motion stimuli, viz., the three-dot, rotating-wheel, and point-walker (biological motion) paradigms, and found the model's performance to be close to theoretical vector-decomposition values. In the three-dot paradigm, the model predicted perceived curved trajectories for the target dot when its horizontal velocity was slower or faster than that of the flanking dots. We tested this prediction in two psychophysical experiments and found good qualitative and quantitative agreement between the model and the data. Our results show that a simple neural network using solely motion information can account for the perception of group and relative motion.
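
As a rough illustration of the vector-decomposition idea, the sketch below estimates a reference frame from the common motion of a group of elements and subtracts it to obtain relative motion. It is a bare-bones stand-in for, not an implementation of, the published neural model, and the three-dot velocities are illustrative:

```python
# Common-fate reference frame by simple vector decomposition (sketch).
import numpy as np

def decompose_motion(velocities):
    """velocities: (n_elements, 2) retinotopic velocity vectors.
    Returns the common (reference-frame) velocity and the residual
    relative-motion vectors, i.e., motion with respect to that frame."""
    velocities = np.asarray(velocities, dtype=float)
    common = velocities.mean(axis=0)      # common-fate estimate of the frame
    relative = velocities - common        # perceived relative motion
    return common, relative

# Three-dot-like example: flankers move rightward; the target also moves up.
retinal = np.array([[2.0, 0.0],    # upper flanker
                    [2.0, 1.0],    # target (rightward + upward)
                    [2.0, 0.0]])   # lower flanker
frame, rel = decompose_motion(retinal)
# frame is about [2.0, 0.33]; rel gives each dot's motion relative to the group.
```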


Subject(s)
Motion Perception , Humans , Motion , Neural Networks, Computer
3.
Atten Percept Psychophys ; 84(6): 1886-1900, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35729455

ABSTRACT

In our daily lives, the visual system receives a plethora of visual information that competes for the brain's limited processing capacity. Nevertheless, not all visual information is useful for our cognitive, emotional, social, and ultimately survival purposes. Therefore, the brain employs mechanisms to select critical information and thereby optimizes its limited resources. Attention is the selective process that serves such a function. In particular, covert spatial attention - attending to a particular location in the visual field without eye movements - improves spatial resolution and paradoxically deteriorates temporal resolution. The neural correlates underlying these attentional effects still remain elusive. In this work, we tested the predictions of a neural model that explains these phenomena based on interactions between channels with different spatiotemporal sensitivities - namely, the magnocellular (transient) and parvocellular (sustained) channels. More specifically, our model postulates that spatial attention enhances activities in the parvocellular pathway, thereby producing improved performance in spatial resolution tasks. However, the enhancement of parvocellular activities leads to decreased magnocellular activities due to parvo-magno inhibitory interactions. As a result, spatial attention hampers temporal resolution. We compared the predictions of the model to psychophysical data and show that our model can account qualitatively and quantitatively for the effects of spatial attention on spatial and temporal acuity.
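
The sketch below illustrates, in deliberately toy form, the postulated interaction: attention boosts parvocellular activity, which in turn suppresses magnocellular activity. The functional form and parameter values are assumptions for illustration, not the model's equations:

```python
# Toy parvo-magno interaction under spatial attention (sketch).
def channel_responses(drive_p, drive_m, attention_gain=1.0, inhibition=0.5):
    """Return (parvo, magno) activities. attention_gain multiplies the parvo
    drive; inhibition scales parvo-to-magno suppression (illustrative values)."""
    parvo = attention_gain * drive_p
    magno = max(0.0, drive_m - inhibition * parvo)   # parvo-magno inhibition
    return parvo, magno

neutral  = channel_responses(1.0, 1.0, attention_gain=1.0)   # (1.0, 0.5)
attended = channel_responses(1.0, 1.0, attention_gain=1.5)   # (1.5, 0.25)
# Parvo (a proxy for spatial resolution) rises with attention, while magno
# (a proxy for temporal resolution) falls, mirroring the trade-off described above.
```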


Subject(s)
Attention , Eye Movements , Brain , Humans , Photic Stimulation , Visual Fields
4.
Vision (Basel) ; 6(1)2022 Mar 01.
Article in English | MEDLINE | ID: mdl-35324600

ABSTRACT

Human memory consists of sensory memory (SM), short-term memory (STM), and long-term memory (LTM). SM has a large capacity but decays rapidly. STM has limited capacity but lasts longer. The traditional view of these memory systems resembles a leaky hourglass, the large top and bottom portions representing the large capacities of SM and LTM, whereas the narrow portion in the middle represents the limited capacity of STM. The "leak" in the top part of the hourglass depicts the rapid decay of the contents of SM. However, it was recently shown that major bottlenecks for motion processing exist prior to STM, and the "leaky hourglass" model was replaced by a "leaky flask" model with a narrower top part to capture bottlenecks prior to STM. The leaky flask model was based on data from one study, and the first goal of the current paper was to test whether the leaky flask model would generalize to a different set of data. The second goal of the paper was to explore various block-diagram models for memory systems and determine the one best supported by the data. We expressed these block-diagram models in terms of statistical mixture models and, by using the Bayesian information criterion (BIC), found that a model with four components, viz., SM, attention, STM, and guessing, provided the best fit to our data. In summary, we generalized previous findings about early qualitative and quantitative bottlenecks, as expressed in the leaky flask model, and showed that a four-process model can provide a good explanation for how visual information is processed and stored in memory.
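
For readers unfamiliar with BIC-based model selection, the following sketch shows how competing block-diagram models, once expressed as mixture models and fit to the same data, could be compared. The model names, log-likelihoods, and trial count are placeholders for illustration, not values from the paper:

```python
# BIC comparison of candidate memory models (illustrative numbers only).
import numpy as np

def bic(log_likelihood, n_params, n_obs):
    """BIC = k * ln(n) - 2 * ln(L); lower values indicate better support."""
    return n_params * np.log(n_obs) - 2.0 * log_likelihood

# Hypothetical fits of two candidate models to the same response-error data:
candidates = {
    "SM + STM + guessing":             {"loglik": -512.3, "k": 5},
    "SM + attention + STM + guessing": {"loglik": -498.7, "k": 7},  # four components
}
n_obs = 400  # number of trials (placeholder)
scores = {name: bic(m["loglik"], m["k"], n_obs) for name, m in candidates.items()}
best = min(scores, key=scores.get)   # the model with the lowest BIC is preferred
```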

5.
Front Neurorobot ; 15: 658450, 2021.
Article in English | MEDLINE | ID: mdl-34966265

ABSTRACT

Newborns demonstrate innate abilities in coordinating their sensory and motor systems through reflexes. One notable characteristic is circular reactions, which consist of self-generated motor actions that lead to correlated sensory and motor activities. This paper describes a model for goal-directed reaching based on circular reactions and exocentric reference-frames. The model is built using physiologically plausible visual processing modules and arm-control neural networks. The model incorporates map representations with ego- and exo-centric reference frames for sensory inputs, vector representations for motor systems, as well as local associative learning that results from arm explorations. The integration of these modules is simulated and tested in a three-dimensional spatial environment using Unity3D. The results show that, through self-generated activities, the model self-organizes to generate accurate arm movements that are tolerant to various sources of noise.
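
A simplified sketch of the circular-reaction idea, using a two-joint planar arm and a nearest-neighbour lookup as a stand-in for the paper's neural networks; all functions and parameters are illustrative assumptions:

```python
# Motor babbling pairs self-generated commands with sensed outcomes (sketch).
import numpy as np

rng = np.random.default_rng(0)

def forward_kinematics(angles, lengths=(0.3, 0.25)):
    """Two-joint planar arm: joint angles -> hand position (a stand-in for
    the sensed, exocentric hand location)."""
    a1, a2 = angles
    l1, l2 = lengths
    x = l1 * np.cos(a1) + l2 * np.cos(a1 + a2)
    y = l1 * np.sin(a1) + l2 * np.sin(a1 + a2)
    return np.array([x, y])

# Circular reactions: random commands and their correlated sensory outcomes.
commands = rng.uniform(0, np.pi, size=(500, 2))
outcomes = np.array([forward_kinematics(c) for c in commands])

def reach(target_xy):
    """Goal-directed reaching: recall the command whose remembered outcome
    lies closest to the visual target."""
    idx = np.argmin(np.linalg.norm(outcomes - target_xy, axis=1))
    return commands[idx]

best_command = reach(np.array([0.2, 0.35]))
```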

6.
Vision (Basel) ; 5(4)2021 Dec 13.
Article in English | MEDLINE | ID: mdl-34941656

ABSTRACT

The first stage of the Atkinson-Shiffrin model of human memory is a sensory memory (SM). The visual component of the SM was shown to operate within a retinotopic reference frame. However, a retinotopic SM (rSM) is unable to account for vision under natural viewing conditions because, for example, motion information needs to be analyzed across space and time. For this reason, the SM store of the Atkinson-Shiffrin model has been extended to include a non-retinotopic component (nrSM). In this paper, we analyze findings from two experimental paradigms and show drastically different properties of rSM and nrSM. We show that nrSM involves complex processes such as motion-based reference frames and Gestalt grouping, which establish object identities across space and time. We also describe a quantitative model for nrSM and show drastic differences between the spatiotemporal properties of rSM and nrSM. Since the reference-frame of the latter is non-retinotopic and motion-stream based, we suggest that the spatiotemporal properties of the nrSM are in accordance with the spatiotemporal properties of the motion system. Overall, these findings indicate that, unlike the traditional rSM, which is a relatively passive store, nrSM exhibits sophisticated processing properties to manage the complexities of ecological perception.

7.
J Vis ; 21(12): 4, 2021 11 01.
Article in English | MEDLINE | ID: mdl-34739035

ABSTRACT

Information about a moving object is usually poor at each retinotopic location because photoreceptor activation is short, noisy, and affected by shadows, reflections from other objects, and so on. Integration across the motion trajectory may yield a much better estimate of the object's features. Using the sequential metacontrast paradigm, we have shown previously that features indeed integrate along a motion trajectory within a long-lasting window of unconscious processing. In the sequential metacontrast paradigm, a percept of two diverging streams is elicited by the presentation of a central line followed by a sequence of flanking pairs of lines. When several lines are spatially offset, the offsets integrate mandatorily for several hundreds of milliseconds along the motion trajectory of the streams. We propose that, within these long-lasting windows, stimuli are first grouped based on Gestalt principles. These grouping processes establish reference frames that are used to attribute features. Features are then integrated according to their respective reference frame. Here, using occlusion and bouncing effects, we show that such grouping operations are indeed in place. We found that features integrate only when the spatiotemporal integrity of the object is preserved. Moreover, when several moving objects are present, only features belonging to the same object integrate. Overall, our results show that feature integration is a deliberate strategy of the brain, and long-lasting windows of processing can be seen as periods of sense-making.
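
A toy sketch of the proposed grouping-then-integration logic: offsets sum only across elements attributed to the same motion stream. The function and values below are illustrative, not the paradigm's actual analysis:

```python
# Mandatory feature integration within a motion stream (illustrative sketch).
def integrated_offset(offsets, stream_labels, stream):
    """Sum the offsets of all elements attributed to one motion stream;
    offsets belonging to other objects do not contribute."""
    return sum(o for o, s in zip(offsets, stream_labels) if s == stream)

# Two diverging streams; only offsets within stream "A" integrate with each other.
offsets       = [+1, -1, +1, 0]          # arbitrary offset units
stream_labels = ["A", "B", "A", "B"]
print(integrated_offset(offsets, stream_labels, "A"))   # -> 2
```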


Subject(s)
Motion Perception , Humans , Motion , Photic Stimulation
8.
Vision Res ; 188: 96-114, 2021 11.
Article in English | MEDLINE | ID: mdl-34304144

ABSTRACT

Under ecological conditions, the luminance impinging on the retina varies within a dynamic range of 220 dB. Stimulus contrast can also vary drastically within a scene and eye movements leave little time for sampling luminance. Given these fundamental problems, the human brain allocates a significant amount of resources and deploys both structural and functional solutions that work in tandem to compress this range. Here we propose a new dynamic neural model built upon well-established canonical neural mechanisms. The model consists of two feed-forward stages. The first stage encodes the stimulus spatially and normalizes its activity by extracting contrast and discounting the background luminance. These normalized activities allow a second stage to implement a contrast-dependent spatial-integration strategy. We show how the properties of this model can account for adaptive properties of motion discrimination, integration, and segregation.
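
As a hedged illustration of the first-stage idea (discounting background luminance so that responses reflect contrast rather than absolute light level), a Weber-contrast computation is sketched below; it is a stand-in for, not an implementation of, the dynamic neural model described above:

```python
# Discounting background luminance compresses the dynamic range (sketch).
import numpy as np

def weber_contrast(luminance_patch):
    """Responses are driven by local deviations from the mean luminance,
    divided by that mean, so the absolute light level largely drops out."""
    L = np.asarray(luminance_patch, dtype=float)
    background = L.mean()
    return (L - background) / background

dim_scene    = weber_contrast([0.9, 1.0, 1.1])        # low luminance
bright_scene = weber_contrast([90.0, 100.0, 110.0])   # 100x brighter
# Both give [-0.1, 0.0, 0.1]: the 100-fold luminance difference is discounted.
```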


Subject(s)
Motion Perception , Contrast Sensitivity , Discrimination, Psychological , Humans , Motion , Pattern Recognition, Visual , Photic Stimulation
9.
Brain Struct Funct ; 226(9): 3067-3081, 2021 Dec.
Article in English | MEDLINE | ID: mdl-33779794

ABSTRACT

Metacontrast masking is a powerful illusion for investigating the dynamics of perceptual processing and for controlling conscious visual perception. However, the neural mechanisms underlying this fundamental investigative tool are still debated. In the present study, we examined metacontrast masking across different contrast polarities by employing a contour-discrimination task combined with electroencephalography (EEG). When the target and mask had the same contrast polarity, a typical U-shaped metacontrast function was observed. A change in mask polarity (i.e., opposite mask polarity) shifted this masking function to a monotonically increasing function such that target visibility was strongly suppressed at stimulus onset asynchronies shorter than 50 ms. This transition in the metacontrast function has typically been interpreted as an increase in intrachannel inhibition of the sustained activities functionally linked to object visibility and identity. Our EEG analyses revealed an early (160-300 ms) and a late (300-550 ms) spatiotemporal cluster associated with this effect of polarity. The early cluster was mainly over occipital and parieto-occipital scalp sites. On the other hand, the later modulations of the evoked activities were centered over parietal and centro-parietal sites. Since both of these clusters occurred beyond 160 ms, the EEG results point to late recurrent inhibitory mechanisms. Although the findings here do not directly preclude other proposed mechanisms for metacontrast, they highlight the involvement of recurrent intrachannel inhibition in metacontrast masking.


Subject(s)
Form Perception , Perceptual Masking , Consciousness , Contrast Sensitivity , Electroencephalography , Visual Perception
10.
J Vis ; 20(7): 33, 2020 07 01.
Article in English | MEDLINE | ID: mdl-32729906

ABSTRACT

Humans make two to four rapid eye movements (saccades) per second, which, surprisingly, do not lead to abrupt changes in vision. On the contrary, we perceive a stable world. Hence, an important question is how information is integrated across saccades. To investigate this question, we used the sequential metacontrast paradigm (SQM), in which two expanding streams of lines are presented. When one line is spatially offset, the other lines are perceived as being offset, too. When more lines are offset, all offsets integrate mandatorily; that is, observers cannot report the individual offsets but perceive one integrated offset. Here, we asked observers to make a saccade during the SQM. Even though the saccades caused a highly disrupted motion trajectory on the retina, offsets presented before and after the saccade integrated mandatorily. When observers made no saccade and the streams were displaced on the screen so that a similarly disrupted retinal image occurred as in the previous condition, no integration occurred. We suggest that trans-saccadic integration and perception are determined by object identity in spatiotopic coordinates and not by the retinal image.
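
The spatiotopic-coordinate claim can be illustrated with a trivial sketch: combining retinal position with eye position leaves the object's world location unchanged across a saccade. The numbers below are illustrative only:

```python
# Spatiotopic location = retinal location + gaze position (sketch).
def spatiotopic(retinotopic_xy, eye_position_xy):
    """World (screen) coordinates recovered from retinal coordinates and gaze."""
    return (retinotopic_xy[0] + eye_position_xy[0],
            retinotopic_xy[1] + eye_position_xy[1])

before_saccade = spatiotopic((2.0, 0.0), (0.0, 0.0))   # gaze at the origin
after_saccade  = spatiotopic((-3.0, 0.0), (5.0, 0.0))  # gaze jumped 5 deg rightward
assert before_saccade == after_saccade                  # same spatiotopic object
```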


Subject(s)
Form Perception/physiology , Retina/physiology , Saccades/physiology , Adult , Female , Humans , Male , Photic Stimulation/methods , Young Adult
11.
Vision Res ; 174: 10-21, 2020 09.
Article in English | MEDLINE | ID: mdl-32505832

ABSTRACT

The early visual system is organized retinotopically. However, under ecological viewing conditions, motion perception occurs in non-retinotopic coordinates. Even though many studies have revealed the central role of non-retinotopic processes, very little is known about their mechanisms and neural correlates. Tadin and colleagues found that increasing the spatial size of a high-contrast drifting Gabor impairs motion-direction discrimination, whereas the opposite occurs with a low-contrast stimulus. The results were proposed to reflect an adaptive center-surround antagonism, whereby at low contrast the excitatory center dominates, whereas at high contrast suppressive surround mechanisms become more effective. Because ecological vision is non-retinotopic, we tested the hypothesis that the non-retinotopic system also processes motion information by means of an adaptive center-surround mechanism. We used the Ternus-Pikler display, designed to provide either a retinotopic or a non-retinotopic reference-frame. Our results suggest that the non-retinotopic processes underlying motion perception are also mediated by an adaptive center-surround mechanism.
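
A toy sketch of the adaptive center-surround idea, using an assumed difference-of-Gaussians-style form in which surround suppression scales with contrast; the parameters are illustrative and not drawn from Tadin and colleagues or from this study:

```python
# Contrast-dependent center-surround antagonism (illustrative sketch).
import numpy as np

def response(size, contrast, sigma_c=1.0, sigma_s=3.0, k_surround=0.9):
    """Response to a stimulus of a given size: excitatory center minus a
    suppressive surround whose strength grows with contrast."""
    center   = 1.0 - np.exp(-(size / sigma_c) ** 2)
    surround = 1.0 - np.exp(-(size / sigma_s) ** 2)
    return contrast * (center - k_surround * contrast * surround)

small, large = 1.0, 6.0
print(response(large, 0.9) < response(small, 0.9))   # True: suppression at high contrast
print(response(large, 0.1) > response(small, 0.1))   # True: summation at low contrast
```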


Subject(s)
Motion Perception , Retina , Humans , Motion , Photic Stimulation
12.
J Vis ; 19(12): 7, 2019 10 01.
Article in English | MEDLINE | ID: mdl-31621805

ABSTRACT

Perception depends on reference frames. For example, the "true" cycloidal motion trajectory of a reflector on a bike's wheel is invisible because we perceive the reflector's motion relative to the bike's motion trajectory, which serves as a reference frame. To understand such object-based motion perception, we have suggested a "two-stage" model in which reference frames are first computed based on perceptual grouping (the bike) and features are then attributed (the reflector's motion) based on group membership. The overarching goal of this study was to investigate how multiple features (i.e., motion, shape, and color) interact with attention to determine retinotopic or nonretinotopic reference frames. We found that, whereas tracking by focal attention can generate nonretinotopic reference frames, the effect is rather small compared with motion-based grouping. Combined, our results support the two-stage model and clarify how various features and cues can work in conjunction or in competition to determine prevailing groups. These groups in turn establish reference frames according to which features are processed and bound together.


Subject(s)
Attention/physiology , Color Perception/physiology , Eye Movements/physiology , Motion Perception/physiology , Cues , Female , Humans , Male , Optic Nerve/physiology , Retina/physiology , Spatio-Temporal Analysis , Young Adult
13.
Front Psychol ; 10: 3000, 2019.
Article in English | MEDLINE | ID: mdl-32038384

ABSTRACT

We live in a three-dimensional (3D) spatial world; however, our retinas receive a pair of 2D projections of the 3D environment. By using multiple cues, such as disparity, motion parallax, and perspective, our brains can construct 3D representations of the world from the 2D projections on our retinas. These 3D representations underlie our 3D perception of the world and are mapped into our motor systems to generate accurate sensorimotor behaviors. Three-dimensional perceptual and sensorimotor capabilities emerge during development: the physiology of the growing baby changes, necessitating an ongoing re-adaptation of the mapping between 3D sensory representations and motor coordinates. This adaptation continues into adulthood and is quite general, allowing us to cope with joint-space changes (longer arms due to growth), changes in skull and eye size (while retaining accurate eye movements), and so on. A fundamental question is whether our brains are inherently limited to 3D representations of the environment because we live in a 3D world, or whether they have the inherent capability and plasticity to represent arbitrary dimensions, with 3D representations emerging simply because our development and learning take place in a 3D world. Here, we review research related to the inherent capabilities and limitations of brain plasticity in terms of its spatial representations and discuss whether, with appropriate training, humans can build perceptual and sensorimotor representations of 4D spatial environments, and how the presence or absence of a solid and direct 4D representation can reveal underlying neural representations of space.

14.
Conscious Cogn ; 62: 135-147, 2018 07.
Article in English | MEDLINE | ID: mdl-29625859

ABSTRACT

Unconscious visual stimuli can affect conscious perception: for example, an invisible prime can affect responses to a subsequent target. The invisible interpretation of an ambiguous figure can have similar effects. Invisibility in these situations is typically explained by stimulus suppression in early, retinotopic brain areas. We have previously argued that invisibility is closely linked to Gestalt ("object") organization principles. For example, motion is typically perceived in non-retinotopic, object-centered coordinates rather than in retinotopic coordinates. Such is the case for a bicycle reflector that is perceived as circling, although its retinotopic trajectory is cycloidal. Here, we used a modified Ternus-Pikler display in which, just as in everyday vision, the retinotopic motion is invisible and the non-retinotopic motion is perceived. Nevertheless, the invisible retinotopic motion can strongly degrade the conscious non-retinotopic motion percept. This effect cannot be explained by inhibition at a retinotopic processing stage.


Subject(s)
Motion Perception , Retina/physiology , Consciousness , Female , Fixation, Ocular , Humans , Male , Motion , Photic Stimulation , Rotation , Unconscious, Psychology , Young Adult
15.
Vision (Basel) ; 2(4)2018 Oct 10.
Article in English | MEDLINE | ID: mdl-31735902

ABSTRACT

To use its finite resources efficiently, the visual system selects only a subset of the rich sensory information for further processing. Visual masking and spatial attention control the information transfer from visual sensory memory to visual short-term memory. There is still a debate about whether these two processes operate independently or interact, with empirical evidence supporting both positions. However, recent studies pointed out that earlier studies showing significant interactions between common-onset masking and attention suffered from ceiling and/or floor effects. Our review of previous studies reporting metacontrast-attention interactions revealed similar artifacts. Therefore, we investigated metacontrast-attention interactions by using an experimental paradigm in which ceiling/floor effects were avoided. We also examined whether metacontrast masking is differently influenced by endogenous and exogenous attention. We analyzed the mean absolute magnitude of response errors and their statistical distribution. When targets are masked, our results support the hypothesis that manipulations of the level of metacontrast and of endogenous/exogenous attention have largely independent effects. Moreover, statistical modeling of the distribution of response errors suggests weak interactions modulating the probability of "guessing" behavior for some observers under both types of attention. Nevertheless, our data suggest that any joint effect of attention and metacontrast can be adequately explained by their independent and additive contributions.
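
The additivity claim can be illustrated with a simple two-factor decomposition: if attention and masking contribute independently, the cell means are reproduced by the main effects alone. The numbers below are made up solely for illustration and are not data from the study:

```python
# Checking an additive (no-interaction) account of mean absolute errors (sketch).
import numpy as np

# Rows: attention (neutral, cued); columns: mask strength (weak, strong).
errors = np.array([[12.0, 20.0],
                   [ 8.0, 16.0]])   # made-up errors, e.g., in degrees

grand = errors.mean()
row_effect = errors.mean(axis=1, keepdims=True) - grand
col_effect = errors.mean(axis=0, keepdims=True) - grand
additive_prediction = grand + row_effect + col_effect
interaction = errors - additive_prediction   # all zeros here: purely additive effects
```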

16.
J Vis ; 17(9): 6, 2017 08 01.
Article in English | MEDLINE | ID: mdl-28800368

ABSTRACT

The motion of the parts of an object is usually perceived relative to the object, i.e., nonretinotopically, rather than in retinal coordinates. For example, we perceive a reflector to rotate on the wheel of a moving bicycle even though its trajectory is cycloidal on the retina. The rotation is perceived because the motion of the object (the bicycle) is discounted from the motion of its parts (the reflector). It seems that the visual system can easily compute the object motion and subtract it from the part motion. Bikes usually move rather predictably. Given the complexity of real-world motion computations, including many ill-posed problems such as the motion correspondence problem, the predictability of an object's motion may be essential for nonretinotopic perception. Here, we used the Ternus-Pikler display to investigate this question. Performance was not impaired when contrast polarity, shape, and motion trajectories changed unpredictably. Our findings suggest that predictability is not crucial for nonretinotopic motion processing.


Subject(s)
Motion Perception/physiology , Retina/physiology , Visual Cortex/physiology , Adolescent , Female , Humans , Male , Rotation , Young Adult
17.
Vision Res ; 133: 37-46, 2017 04.
Article in English | MEDLINE | ID: mdl-28185858

ABSTRACT

Levelt's Propositions are central to understanding a wide range of multistable perceptual phenomena, but it is unclear whether they extend to perceptual multistability involving interocular grouping. We presented split-grating stimuli with complementary halves of the same color (either red or green) to human subjects. The subjects reported four percepts in alternation: the two stimuli presented to each eye (half red and half green), as well as the two single-color (all red or all green), interocularly grouped percepts. Increasing color saturation led to increased reports of the single-color percepts in most subjects, indicating increased predominance of grouped percepts (Levelt's Proposition I). This increase in predominance was due to a decrease in the average dominance duration of single-eye percepts, with grouped-percept dominance largely unaffected. This agrees with a generalization of Levelt's Proposition II, as the average dominance duration of the stronger (in this case, single-eye) percept was primarily affected by changes in stimulus strength. Moreover, in agreement with Levelt's Proposition III, the alternation rate between percepts increased as the difference in the strength of the percepts decreased.
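
For reference, the sketch below shows how predominance and alternation rate are commonly computed from dominance durations in this kind of rivalry experiment; the durations used are illustrative, not data from the study:

```python
# Standard rivalry summary statistics from dominance durations (sketch).
def predominance(durations_a, durations_b):
    """Fraction of total dominance time spent in percept A."""
    total_a, total_b = sum(durations_a), sum(durations_b)
    return total_a / (total_a + total_b)

def alternation_rate(n_switches, trial_seconds):
    """Perceptual switches per second over a trial."""
    return n_switches / trial_seconds

# Example: single-eye dominance durations (s) vs. grouped-percept durations (s).
print(predominance([2.1, 1.8, 2.4], [3.0, 2.7, 3.3]))   # about 0.41
print(alternation_rate(12, 60.0))                        # 0.2 switches per second
```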


Subject(s)
Color Perception/physiology , Dominance, Ocular/physiology , Models, Theoretical , Visual Perception/physiology , Adult , Female , Functional Laterality , Humans , Male , Vision, Binocular/physiology
18.
Atten Percept Psychophys ; 79(3): 888-910, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28092077

ABSTRACT

The goal of this study was to investigate the reference frames used in the perceptual encoding and storage of visual motion information. In our experiments, observers viewed multiple moving objects and reported the direction of motion of a randomly selected item. Using a vector-decomposition technique, we computed performance during smooth pursuit with respect to a spatiotopic (nonretinotopic) and a retinotopic component and compared them with performance during fixation, which served as the baseline. For the stimulus-encoding stage, which precedes memory, we found that the reference frame depends on the stimulus set size. For a single moving target, the spatiotopic reference frame made the most significant contribution, with some additional contribution from the retinotopic reference frame. When the number of items increased (set sizes 3 to 7), the spatiotopic reference frame was able to account for performance. Finally, when the number of items became larger than 7, the distinction between reference frames vanished. We interpret this finding as a switch to a more abstract, nonmetric encoding of motion direction. We found that the retinotopic reference frame was not used in memory. Taken together with other studies, our results suggest that, whereas a retinotopic reference frame may be employed for controlling eye movements, perception and memory use primarily nonretinotopic reference frames. Furthermore, the use of nonretinotopic reference frames appears to be capacity limited. In the case of complex stimuli, the visual system may use perceptual grouping in order to simplify the complexity of the stimuli or resort to a nonmetric, abstract coding of motion information.
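
A minimal sketch of the vector-decomposition logic during smooth pursuit, under the standard assumption that retinal velocity equals screen velocity minus eye velocity; the notation is an illustrative assumption, not the authors' analysis code:

```python
# Spatiotopic vs. retinotopic velocity components during pursuit (sketch).
import numpy as np

def decompose(spatiotopic_velocity, eye_velocity):
    """Return (spatiotopic, retinotopic) velocity components of an item."""
    spatiotopic = np.asarray(spatiotopic_velocity, dtype=float)
    retinotopic = spatiotopic - np.asarray(eye_velocity, dtype=float)
    return spatiotopic, retinotopic

# An item moving upward on the screen while the eyes pursue a rightward target:
spat, ret = decompose([0.0, 3.0], [4.0, 0.0])
# Reports aligned with `spat` support a spatiotopic reference frame; reports
# aligned with `ret` ([-4, 3]) would support a retinotopic one.
```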


Subject(s)
Memory, Short-Term/physiology , Motion Perception/physiology , Pursuit, Smooth/physiology , Space Perception/physiology , Adult , Female , Humans , Male , Young Adult
19.
Atten Percept Psychophys ; 79(2): 593-602, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27834045

ABSTRACT

Under natural viewing conditions, a large amount of information reaches our senses, and the visual system uses attention and perceptual grouping to reduce the complexity of stimuli in order to make real-time perception possible. Prior studies have shown that attention and perceptual grouping operate in synergy: exogenous attention is deployed not only to the cued item but also to the entire group. Here, we investigated how attention and perceptual grouping operate during the formation and dissolution of groups. Our results showed that reaction times are longer in the presence of perceptual groups than they are for ungrouped stimuli. On the other hand, attentional benefits of perceptual grouping were observed during both the formation and the dissolution of groups. The dynamics were similar during group formation and dissolution, showing a gradual effect that takes approximately half a second to reach its maximum level. In the case of group dissolution, the attentional benefits persisted for about a quarter of a second after the dissolution of the group. Taken together, our results reveal the dynamics of how attention and grouping work in synergy during the transient periods when groups form or dissolve.


Subject(s)
Attention/physiology , Photic Stimulation/methods , Reaction Time/physiology , Time Perception/physiology , Visual Perception/physiology , Adult , Cues , Female , Humans , Male , Young Adult
20.
Psychiatry Res ; 246: 461-465, 2016 Dec 30.
Article in English | MEDLINE | ID: mdl-27792975

ABSTRACT

Schizophrenia impairs cognitive functions as much as perception. For example, patients perceive global motion in random-dot kinematograms less strongly because, as has been argued, the integration of the dots into a single Gestalt is complex and therefore deteriorated. Similarly, the perception of apparent motion is impaired because filling in the illusory trajectory requires complex processing. Here, we investigated very complex motion processing using the Ternus-Pikler display. First, we tested whether the perception of global apparent motion is impaired in schizophrenia patients compared to healthy controls. The task requires both the grouping of multiple elements into a coherent Gestalt and the filling-in of its illusory motion trajectory. Second, we tested the perception of rotation in the same stimulus, which in addition requires the computation of non-retinotopic motion. Contrary to earlier studies, patients were not impaired in either task and even tended to perform better than controls. The results suggest that complex visual processing itself is not impaired in schizophrenia patients.


Subject(s)
Motion Perception/physiology , Schizophrenia/physiopathology , Adult , Female , Humans , Male , Middle Aged , Motion Perception/classification