1.
J Vis ; 21(8): 14, 2021 08 02.
Article in English | MEDLINE | ID: mdl-34374744

ABSTRACT

Eye movements produce shifts in the positions of objects in the retinal image, but observers are able to integrate these shifting retinal images into a coherent representation of visual space. This ability is thought to be mediated by attention-dependent saccade-related neural activity that is used by the visual system to anticipate the retinal consequences of impending eye movements. Previous investigations of the perceptual consequences of this predictive activity typically infer attentional allocation using indirect measures such as accuracy or reaction time. Here, we investigated the perceptual consequences of saccades using an objective measure of attentional allocation, reverse correlation. Human observers executed a saccade while monitoring a flickering target object flanked by flickering distractors and reported whether the average luminance of the target was lighter or darker than the background. Successful task performance required subjects to integrate visual information across the saccade. A reverse correlation analysis yielded a spatiotemporal "psychophysical kernel" characterizing how different parts of the stimulus contributed to the luminance decision throughout each trial. Just before the saccade, observers integrated luminance information from a distractor located at the post-saccadic retinal position of the target, indicating a predictive perceptual updating of the target. Observers did not integrate information from distractors placed in alternative locations, even when they were nearer to the target object. We also observed simultaneous predictive perceptual updating for two spatially distinct targets. These findings suggest both that shifting neural representations mediate the coherent representation of visual space, and that these shifts have significant consequences for transsaccadic perception.
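The reverse-correlation logic described above can be illustrated with a toy simulation: a simulated observer integrates luminance noise from one location before a "saccade" and from a remapped location afterwards, and the psychophysical kernel recovers that spatiotemporal weighting. All numbers, locations, and the simulated observer below are hypothetical, not the study's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_frames, n_locs = 5000, 10, 3  # trials, stimulus frames, screen locations

# Luminance noise at each location on every frame (hypothetical stimulus ensemble)
stim = rng.normal(0.0, 1.0, size=(n_trials, n_frames, n_locs))

# Toy observer: integrates location 0 before a "saccade" at frame 5 and
# location 2 afterwards, mimicking predictive remapping of the target
weights = np.zeros((n_frames, n_locs))
weights[:5, 0] = 1.0
weights[5:, 2] = 1.0
decision_var = (stim * weights).sum(axis=(1, 2)) + rng.normal(0, 1, n_trials)
said_lighter = decision_var > 0

# Psychophysical kernel: mean stimulus on "lighter" minus "darker" trials.
# Nonzero entries reveal which locations drove the decision at each frame.
kernel = stim[said_lighter].mean(axis=0) - stim[~said_lighter].mean(axis=0)
```

The kernel is large at the integrated location/frame combinations and near zero elsewhere, which is how the analysis localizes predictive updating in space and time.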


Subject(s)
Eye Movements , Saccades , Humans , Photic Stimulation , Reaction Time , Retina , Vision, Ocular , Visual Perception
2.
J Vis ; 19(4): 1, 2019 04 01.
Article in English | MEDLINE | ID: mdl-30933237

ABSTRACT

Although studies of visual search have repeatedly demonstrated that visual clutter impairs search performance in natural scenes, these studies have not attempted to disentangle the effects of search set size from those of clutter per se. Here, we investigate the effect of natural image clutter on performance in an overt search for categorical targets when the search set size is controlled. Observers completed a search task that required detecting and localizing common objects in a set of natural images. The images were sorted into high- and low-clutter conditions based on the clutter metric by Bravo and Farid (2008). The search set size was varied independently by fixing the number and positions of potential targets across set size conditions within a block of trials. Within each fixed set size condition, search times increased as a function of increasing clutter, suggesting that clutter degrades overt search performance independently of set size.


Subject(s)
Pattern Recognition, Visual/physiology , Size Perception/physiology , Adult , Attention , Fixation, Ocular/physiology , Humans , Young Adult
3.
J Neurosci ; 38(47): 10069-10079, 2018 11 21.
Article in English | MEDLINE | ID: mdl-30282725

ABSTRACT

How do cortical responses to local image elements combine to form a spatial pattern of population activity in primate V1? Here, we used voltage-sensitive dye imaging, which measures summed membrane potential activity, to examine the rules that govern lateral interactions between the representations of two small local oriented elements in macaque (Macaca mulatta) V1. We find strong subadditive and mostly orientation-independent interactions for nearby elements [2-4 mm inter-element distance (IED)] that gradually become linear at larger separations (>6 mm IED). These results are consistent with a population gain control model describing nonlinear V1 population responses to single oriented elements. However, because of the accelerating membrane-potential-to-spiking nonlinearity, the model predicts supra-additive lateral interactions of spiking responses at intermediate separations for a range of locations between the two elements, consistent with some prior facilitatory effects observed in electrophysiology and psychophysics. Overall, our results suggest that population-level lateral interactions in V1 are primarily explained by a simple orientation-independent contrast gain control mechanism.

SIGNIFICANCE STATEMENT Interactions between representations of simple visual elements such as oriented edges in primary visual cortex (V1) are thought to contribute to our ability to easily integrate contours and segment surfaces, but the mechanisms that govern these interactions remain largely unknown. Our study provides novel evidence that lateral interactions at the population level are governed by a simple contrast gain-control mechanism, and we show how this divisive gain-control mechanism can give rise to apparently facilitatory spiking responses.
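The central claim, that a divisive gain control makes responses to nearby element pairs subadditive while responses to well-separated pairs sum roughly linearly, can be sketched in one dimension. The point-spread width, normalization constant, and separations below are illustrative choices, not fitted values from the study.

```python
import numpy as np

x = np.linspace(-8.0, 8.0, 401)  # cortical position (mm) along one V1 axis

def drive(center_mm, sigma_mm=1.0):
    """Feedforward drive from one small oriented element (Gaussian point spread)."""
    return np.exp(-0.5 * ((x - center_mm) / sigma_mm) ** 2)

def response(drv, sigma_n=0.5):
    """Divisive contrast gain control: pooled drive divides itself."""
    return drv / (sigma_n + drv)

ratios = {}
for sep in (2.0, 8.0):  # inter-element cortical separation (mm)
    d1, d2 = drive(-sep / 2), drive(+sep / 2)
    paired = response(d1 + d2)              # population response to both elements
    summed = response(d1) + response(d2)    # sum of single-element responses
    ratios[sep] = paired.max() / summed.max()
```

At the small separation the overlapping drives share one normalization pool, so the paired response falls well below the linear prediction (ratio clearly below 1); at the large separation the pools barely interact and the ratio approaches 1.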


Subject(s)
Contrast Sensitivity/physiology , Form Perception/physiology , Photic Stimulation/methods , Visual Cortex/physiology , Visual Pathways/physiology , Action Potentials/physiology , Animals , Macaca mulatta , Male
4.
Psychol Rev ; 125(3): 391-408, 2018 04.
Article in English | MEDLINE | ID: mdl-29733665

ABSTRACT

Maintaining a continuous, stable perception of the visual world relies on the ability to integrate information from previous fixations with the current one. An essential component of this integration is trans-saccadic memory (TSM), memory for information across saccades. TSM capacity may play a limiting role in tasks requiring efficient trans-saccadic integration, such as multiple-fixation visual search tasks. We estimated TSM capacity and investigated its relationship to visual short-term memory (VSTM) using two visual search tasks, one in which participants maintained fixation while saccades were simulated and another where participants made a sequence of actual saccades. We derived a memory-limited ideal observer model to estimate lower-bounds on memory capacities from human search performance. Analysis of the single-fixation search task resulted in capacity estimates (4-8 bits) consistent with those reported for traditional VSTM tasks. However, analysis of the multiple-fixation search task resulted in capacity estimates (15-32 bits) significantly larger than those measured for VSTM. Our results suggest that TSM plays an important role in visual search tasks, that the effective capacity of TSM is greater than or equal to that of VSTM, and that the TSM capacity of human observers significantly limits performance in multiple-fixation visual search tasks.


Subject(s)
Memory, Short-Term/physiology , Saccades/physiology , Visual Perception/physiology , Adult , Humans
5.
J Vis ; 17(9): 13, 2017 08 01.
Article in English | MEDLINE | ID: mdl-28837969

ABSTRACT

Uncertainty regarding the position of the search target is a fundamental component of visual search. However, due to perceptual limitations of the human visual system, this uncertainty can arise from intrinsic, as well as extrinsic, sources. The current study sought to characterize the role of intrinsic position uncertainty (IPU) in overt visual search and to determine whether it significantly limits human search performance. After completing a preliminary detection experiment to characterize sensitivity as a function of visual field position, observers completed a search task that required localizing a Gabor target within a field of synthetic luminance noise. The search experiment included two clutter conditions designed to modulate the effect of IPU across search displays of varying set size. In the Cluttered condition, the display was tiled uniformly with feature clutter to maximize the effects of IPU. In the Uncluttered condition, the clutter at irrelevant locations was removed to attenuate the effects of IPU. Finally, we derived an IPU-constrained ideal searcher model, limited by the IPU measured in human observers. Ideal searchers were simulated based on the detection sensitivity and fixation sequences measured for individual human observers. The IPU-constrained ideal searcher predicted performance trends similar to those exhibited by the human observers. In the Uncluttered condition, performance decreased steeply as a function of increasing set size. However, in the Cluttered condition, the effect of IPU dominated and performance was approximately constant as a function of set size. Our findings suggest that IPU substantially limits overt search performance, especially in crowded displays.
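The qualitative effect of position uncertainty on detection can be sketched with a toy max-rule observer: as more candidate locations must be monitored, false maxima from noise dilute detectability even though the target itself is unchanged. This is a generic stand-in demonstration, not the paper's IPU-constrained ideal searcher, and the d' value and set sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, dprime = 100_000, 2.0

pc = {}
for m in (1, 4, 16):  # number of candidate target positions being monitored
    # Two-interval trial: the target adds d' at one location of the signal interval
    noise_int = rng.normal(0.0, 1.0, (n_trials, m))
    signal_int = rng.normal(0.0, 1.0, (n_trials, m))
    signal_int[:, 0] += dprime
    # Max-rule observer: choose the interval whose largest response is bigger
    pc[m] = (signal_int.max(axis=1) > noise_int.max(axis=1)).mean()
```

Proportion correct falls monotonically as the number of monitored positions grows, mirroring how uncertainty (whether extrinsic, from clutter, or intrinsic, from the observer's own position noise) degrades search performance.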


Subject(s)
Cues , Discrimination, Psychological/physiology , Uncertainty , Visual Fields/physiology , Visual Perception/physiology , Adult , Humans , Psychophysics/methods
6.
Vision Res ; 113(Pt B): 155-68, 2015 Aug.
Article in English | MEDLINE | ID: mdl-25988753

ABSTRACT

When we search for visual targets in a cluttered background we systematically move our eyes around to bring different regions of the scene into foveal view. We explored how visual search behavior changes when the fovea is not functional, as is the case in scotopic vision. Scotopic contrast sensitivity is significantly lower overall, with a functional scotoma in the fovea. We found that in scotopic search, for a medium- and a low-spatial-frequency target, individuals made longer lasting fixations that were not broadly distributed across the entire search display but tended to peak in the upper center, especially for the medium-frequency target. The distributions of fixation locations are qualitatively similar to those of an ideal searcher that has human scotopic detectability across the visual field, and interestingly, these predicted distributions are different from those predicted by an ideal searcher with human photopic detectability. We conclude that although there are some qualitative differences between human and ideal search behavior, humans make principled adjustments in their search behavior as ambient light level decreases.


Subject(s)
Attention/physiology , Contrast Sensitivity/physiology , Fovea Centralis/physiopathology , Lighting , Night Vision/physiology , Adult , Analysis of Variance , Eye Movements/physiology , Female , Fixation, Ocular/physiology , Humans , Male , Photic Stimulation/methods , Visual Fields/physiology , Young Adult
7.
Nat Neurosci ; 16(10): 1477-83, 2013 Oct.
Article in English | MEDLINE | ID: mdl-24036915

ABSTRACT

Mammalian primary visual cortex (V1) is topographically organized such that the pattern of neural activation in V1 reflects the location and spatial extent of visual elements in the retinal image, but it is unclear whether this organization contributes to visual perception. We combined computational modeling, voltage-sensitive dye imaging (VSDI) in behaving monkeys and behavioral measurements in humans to investigate whether the large-scale topography of V1 population responses influences shape judgments. Specifically, we used a computational model to design visual stimuli that had the same physical shape, but were predicted to elicit variable V1 response spread. We confirmed these predictions with VSDI. Finally, we designed a behavioral task in which human observers judged the shapes of these stimuli and found that their judgments were systematically distorted by the spread of V1 activity. This illusion suggests that the topographic pattern of neural population responses in visual cortex contributes to visual perception.


Subject(s)
Action Potentials/physiology , Brain Mapping/methods , Form Perception/physiology , Illusions/physiology , Visual Cortex/physiology , Animals , Female , Forecasting , Humans , Illusions/psychology , Macaca mulatta , Male , Photic Stimulation/methods
8.
J Vis ; 10(2): 2.1-15, 2010 Feb 04.
Article in English | MEDLINE | ID: mdl-20462303

ABSTRACT

Existing studies of sensory integration demonstrate how the reliabilities of perceptual cues or features influence perceptual decisions. However, these studies tell us little about the influence of feature reliability on visual learning. In this article, we study the implications of feature reliability for perceptual learning in the context of binary classification tasks. We find that finite sets of training data (i.e., the stimuli and corresponding class labels used on training trials) contain different information about a learner's parameters associated with reliable versus unreliable features. In particular, the statistical information provided by a finite number of training trials strongly constrains the set of possible parameter values associated with unreliable features, but only weakly constrains the parameter values associated with reliable features. Analyses of human subjects' performances reveal that subjects were sensitive to this statistical information. Additional analyses examine why subjects were sub-optimal visual learners.


Subject(s)
Discrimination, Psychological/physiology , Learning/physiology , Models, Neurological , Visual Perception/physiology , Bayes Theorem , Humans , Logistic Models , Photic Stimulation/methods , Reproducibility of Results
9.
J Vis ; 8(2): 3.1-16, 2008 Feb 15.
Article in English | MEDLINE | ID: mdl-18318629

ABSTRACT

A number of studies have demonstrated that people often integrate information from multiple perceptual cues in a statistically optimal manner when judging properties of surfaces in a scene. For example, subjects typically weight the information based on each cue to a degree that is inversely proportional to the variance of the distribution of a scene property given a cue's value. We wanted to determine whether subjects similarly use information about the reliabilities of arbitrary low-level visual features when making image-based discriminations, as in visual texture discrimination. To investigate this question, we developed a modification of the classification image technique and conducted two experiments that explored subjects' discrimination strategies using this improved technique. We created a basis set consisting of 20 low-level features and created stimuli by linearly combining the basis vectors. Subjects were trained to discriminate between two prototype signals corrupted with Gaussian feature noise. When we analyzed subjects' classification images over time, we found that they modified their decision strategies in a manner consistent with optimal feature integration, giving greater weight to reliable features and less weight to unreliable features. We conclude that optimal integration is not a characteristic specific to conventional visual cues or to judgments involving three-dimensional scene properties. Rather, just as researchers have previously demonstrated that people are sensitive to the reliabilities of conventionally defined cues when judging the depth or slant of a surface, we demonstrate that they are likewise sensitive to the reliabilities of arbitrary low-level features when making image-based discriminations.
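The optimal-integration benchmark against which such classification images can be compared has a simple closed form under independent Gaussian feature noise: each feature's weight is proportional to its signal difference divided by its noise variance. The numbers below (equal per-feature signal, a linear ramp of noise SDs across a hypothetical 20-feature basis) are illustrative only.

```python
import numpy as np

# Hypothetical 20-feature basis: every feature carries the same prototype
# difference, but the Gaussian feature-noise SD grows across features
n_feat = 20
delta = np.ones(n_feat)                 # prototype A minus prototype B, per feature
sigma = np.linspace(0.5, 3.0, n_feat)   # noise SD: feature 0 reliable, feature 19 not

# Optimal linear discriminant for independent Gaussian feature noise:
# w_i proportional to delta_i / sigma_i**2, so reliable features dominate
w = delta / sigma**2
w /= np.abs(w).sum()

# Overall discriminability attained by the ideal observer
dprime_ideal = np.sqrt(np.sum(delta**2 / sigma**2))
```

With these values the most reliable feature receives 36 times the weight of the least reliable one (the squared ratio of their noise SDs), which is the pattern the subjects' classification images converged toward with training.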


Subject(s)
Discrimination, Psychological/physiology , Learning/physiology , Pattern Recognition, Visual/physiology , Humans , Photic Stimulation
10.
J Vis ; 7(1): 4, 2007 Jan 16.
Article in English | MEDLINE | ID: mdl-17461672

ABSTRACT

Visual scientists have shown that people are capable of perceptual learning in a large variety of circumstances. Are there constraints on such learning? We propose a new constraint on early perceptual learning, namely, that people are capable of parameter learning: they can modify their knowledge of the prior probabilities of scene variables or of the statistical relationships among scene and perceptual variables that are already considered to be potentially dependent. However, they are not capable of structure learning: they cannot learn new relationships among variables that are not considered to be potentially dependent, even when placed in novel environments in which these variables are strongly related. These ideas are formalized using the notation of Bayesian networks. We report the results of five experiments that evaluate whether subjects can demonstrate cue acquisition, meaning that they can learn that a sensory signal is a cue to a perceptual judgment. In Experiment 1, subjects were placed in a novel environment that resembled natural environments in the sense that it contained systematic relationships among scene and perceptual variables that are normally dependent. In this case, cue acquisition requires parameter learning and, as predicted, subjects succeeded in learning a new cue. In Experiments 2-5, subjects were placed in novel environments that did not resemble natural environments: they contained systematic relationships among scene and perceptual variables that are not normally dependent. Cue acquisition requires structure learning in these cases. Consistent with our hypothesis, subjects failed to learn new cues in Experiments 2-5. Overall, the results suggest that the mechanisms of early perceptual learning are biased such that people can only learn new contingencies between scene and sensory variables that are considered to be potentially dependent.


Subject(s)
Bayes Theorem , Learning/physiology , Motion Perception/physiology , Neural Networks, Computer , Cues , Humans , Light , Sound , Vision Disparity
11.
Neural Comput ; 18(3): 660-82, 2006 Mar.
Article in English | MEDLINE | ID: mdl-16483412

ABSTRACT

Investigators debate the extent to which neural populations use pair-wise and higher-order statistical dependencies among neural responses to represent information about a visual stimulus. To study this issue, three statistical decoders were used to extract the information in the responses of model neurons about the binocular disparities present in simulated pairs of left-eye and right-eye images: (1) the full joint probability decoder considered all possible statistical relations among neural responses as potentially important; (2) the dependence tree decoder also considered all possible relations as potentially important, but it approximated high-order statistical correlations using a computationally tractable procedure; and (3) the independent response decoder, which assumed that neural responses are statistically independent, meaning that all correlations should be zero and thus can be ignored. Simulation results indicate that high-order correlations among model neuron responses contain significant information about binocular disparities and that the amount of this high-order information increases rapidly as a function of neural population size. Furthermore, the results highlight the potential importance of the dependence tree decoder to neuroscientists as a powerful but still practical way of approximating high-order correlations among neural responses.
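A dependence tree of the kind compared above is conventionally built with the Chow-Liu construction: estimate pairwise mutual information between all response pairs, then keep a maximum-weight spanning tree. The sketch below fits one to simulated binary responses with a chain dependency; the neuron count, flip probability, and sample size are arbitrary choices for illustration.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

# Simulated binary responses of 4 model neurons with a chain dependency:
# each neuron copies its neighbor but flips with probability 0.2
n_samples, n_cells = 20_000, 4
r = np.zeros((n_samples, n_cells), dtype=int)
r[:, 0] = rng.random(n_samples) < 0.5
for j in range(1, n_cells):
    flip = rng.random(n_samples) < 0.2
    r[:, j] = np.where(flip, 1 - r[:, j - 1], r[:, j - 1])

def pairwise_mi(a, b):
    """Plug-in estimate of the mutual information (nats) of two binary variables."""
    mi = 0.0
    for va in (0, 1):
        for vb in (0, 1):
            p_ab = np.mean((a == va) & (b == vb))
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (np.mean(a == va) * np.mean(b == vb)))
    return mi

# Dependence-tree skeleton (Chow-Liu): a maximum-weight spanning tree over
# pairwise mutual information, added greedily while avoiding cycles (Kruskal-style)
edges = sorted(combinations(range(n_cells), 2),
               key=lambda e: pairwise_mi(r[:, e[0]], r[:, e[1]]), reverse=True)
tree, components = [], [{i} for i in range(n_cells)]
for i, j in edges:
    ci = next(c for c in components if i in c)
    cj = next(c for c in components if j in c)
    if ci is not cj:
        tree.append((i, j))
        components.remove(ci)
        components.remove(cj)
        components.append(ci | cj)
```

The recovered tree links each neuron to its chain neighbor, capturing the second-order dependencies while remaining tractable: decoding then needs only the tree's pairwise marginals rather than the full joint distribution.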


Subject(s)
Neurons/physiology , Retina/physiology , Vision, Binocular/physiology , Visual Cortex/physiology , Visual Fields/physiology , Visual Pathways/physiology , Action Potentials/physiology , Humans , Models, Neurological , Neural Networks, Computer , Photic Stimulation , Signal Processing, Computer-Assisted , Synaptic Transmission/physiology , Visual Perception/physiology