Results 1 - 4 of 4
1.
J Vis; 18(4): 12, 2018 Apr 1.
Article in English | MEDLINE | ID: mdl-29710302

ABSTRACT

Little is known about distance discrimination in real scenes, especially at long distances. This is not surprising given the logistical difficulties of making such measurements. To circumvent these difficulties, we collected 81 stereo images of outdoor scenes, together with precisely registered range images that provided the ground-truth distance at each pixel location. We then presented the stereo images in the correct viewing geometry and measured the ability of human subjects to discriminate the distance between locations in the scene, as a function of absolute distance (3 m to 30 m) and the angular spacing between the locations being compared (2°, 5°, and 10°). Measurements were made for binocular and monocular viewing. Thresholds for binocular viewing were quite small at all distances (Weber fractions less than 1% at 2° spacing and less than 4% at 10° spacing). Thresholds for monocular viewing were higher than those for binocular viewing out to distances of 15-20 m, beyond which they were the same. Using standard cue-combination analysis, we also estimated what the thresholds would be based on binocular-stereo cues alone. With two exceptions, we show that the entire pattern of results is consistent with what one would expect from classical studies of binocular disparity thresholds and separation/size discrimination thresholds measured with simple laboratory stimuli. The first exception is some deviation from the expected pattern at close distances (especially for monocular viewing). The second exception is that thresholds in natural scenes are lower, presumably because of the rich figural cues contained in natural images.
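
The "standard cue-combination analysis" mentioned in the abstract is usually the rule that independent cue reliabilities (inverse squared thresholds) add. A minimal sketch of how a stereo-only threshold could be inferred from the measured binocular and monocular thresholds is given below; the function name and the example Weber fractions are illustrative assumptions, not values from the study.

```python
import numpy as np

def stereo_only_threshold(t_bino, t_mono):
    """Infer the threshold supported by binocular-stereo cues alone.

    Assumes the standard independent-cue combination rule, in which
    reliabilities (1 / threshold**2) add:
        1/t_bino**2 = 1/t_mono**2 + 1/t_stereo**2
    Inputs may be scalars or arrays of Weber fractions; returns inf
    where the binocular threshold is not lower than the monocular one
    (i.e., where the stereo cue contributes nothing measurable).
    """
    t_bino = np.asarray(t_bino, dtype=float)
    t_mono = np.asarray(t_mono, dtype=float)
    inv_var = 1.0 / t_bino**2 - 1.0 / t_mono**2
    return np.where(inv_var > 0,
                    1.0 / np.sqrt(np.maximum(inv_var, 1e-12)),
                    np.inf)

# Illustrative (made-up) Weber fractions at three distances:
print(stereo_only_threshold([0.005, 0.01, 0.03], [0.02, 0.02, 0.03]))
```

Note that when the binocular and monocular thresholds are equal, as the abstract reports beyond 15-20 m, the inferred stereo-only threshold is unbounded, consistent with the stereo cue contributing no additional precision at those distances.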


Subject(s)
Cues; Distance Perception/physiology; Vision, Binocular/physiology; Vision, Monocular/physiology; Visual Perception/physiology; Adult; Depth Perception/physiology; Humans; Male; Young Adult
2.
J Vis; 16(13): 2, 2016 Oct 1.
Article in English | MEDLINE | ID: mdl-27738702

ABSTRACT

Estimating three-dimensional (3D) surface orientation (slant and tilt) is an important first step toward estimating 3D shape. Here, we examine how three local image cues from the same location (disparity gradient, luminance gradient, and dominant texture orientation) should be combined to estimate 3D tilt in natural scenes. We collected a database of natural stereoscopic images with precisely co-registered range images that provide the ground-truth distance at each pixel location. We then analyzed the relationship between ground-truth tilt and image cue values. Our analysis is free of assumptions about the joint probability distributions and yields the Bayes optimal estimates of tilt, given the cue values. Rich results emerge: (a) typical tilt estimates are only moderately accurate and strongly influenced by the cardinal bias in the prior probability distribution; (b) when cue values are similar, or when slant is greater than 40°, estimates are substantially more accurate; (c) when luminance and texture cues agree, they often veto the disparity cue, and when they disagree, they have little effect; and (d) simplifying assumptions common in the cue-combination literature are often justified for estimating tilt in natural scenes. The fact that tilt estimates are typically not very accurate is consistent with subjective impressions from viewing small patches of natural scene. The fact that estimates are substantially more accurate for a subset of image locations is also consistent with subjective impressions and with the hypothesis that perceived surface orientation, at more global scales, is achieved by interpolation or extrapolation from estimates at key locations.
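
The analysis described above maps local cue values to tilt estimates nonparametrically, directly from the ground-truth database. The sketch below shows one plausible way to build such a mapping with a binned lookup table and circular means; the binning scheme, the implied loss function, and the variable names are assumptions, not the paper's actual procedure.

```python
import numpy as np

def build_tilt_table(cues, tilt_deg, n_bins=16):
    """Nonparametric mapping from local image cues to a tilt estimate.

    cues     : (N, 3) array of cue values (e.g., disparity-, luminance-,
               and texture-derived quantities) sampled from a ground-truth
               natural-scene database (hypothetical input format).
    tilt_deg : (N,) ground-truth tilt at the same locations, in degrees.

    Each cue is quantile-binned; for every occupied joint bin we store the
    circular mean of the ground-truth tilts, which serves as the estimate
    for new samples whose cue values fall in that bin.
    """
    cues = np.asarray(cues, dtype=float)
    edges = [np.quantile(cues[:, k], np.linspace(0, 1, n_bins + 1))
             for k in range(cues.shape[1])]
    idx = np.stack(
        [np.clip(np.searchsorted(e, cues[:, k], side="right") - 1, 0, n_bins - 1)
         for k, e in enumerate(edges)],
        axis=1,
    )
    rad = np.deg2rad(np.asarray(tilt_deg, dtype=float))
    sums = {}
    for key, s, c in zip(map(tuple, idx), np.sin(rad), np.cos(rad)):
        acc = sums.setdefault(key, [0.0, 0.0])
        acc[0] += s
        acc[1] += c
    # Circular mean per joint cue bin, wrapped to [0, 360) degrees.
    table = {k: np.rad2deg(np.arctan2(s, c)) % 360.0 for k, (s, c) in sums.items()}
    return edges, table
```

Because no parametric form is imposed on the joint distribution of cues and tilt, this kind of table-based estimate is "free of assumptions" in the sense the abstract describes, at the cost of requiring a large ground-truth sample per bin.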


Subject(s)
Cues; Imaging, Three-Dimensional; Orientation; Pattern Recognition, Visual/physiology; Depth Perception; Humans; Photography/instrumentation; Probability
3.
J Vis; 11(10), 2011 Sep 27.
Article in English | MEDLINE | ID: mdl-21954297

ABSTRACT

Ganglion cells in the peripheral retina have lower density and larger receptive fields than in the fovea. Consequently, the visual signals relayed from the periphery have substantially lower resolution than those relayed by the fovea. The information contained in peripheral ganglion cell responses can be quantified by how well they predict the foveal ganglion cell responses to the same stimulus. We constructed a model of human ganglion cell outputs by combining existing measurements of the optical transfer function with the receptive field properties and sampling densities of midget (P) ganglion cells. We then simulated a spatial population of P-cell responses to image patches sampled from a large collection of luminance-calibrated natural images. Finally, we characterized the population response to each image patch, at each eccentricity, with two parameters of the spatial power spectrum of the responses: the average response contrast (standard deviation of the response patch) and the falloff in power with spatial frequency. The primary finding is that the optimal estimate of response contrast in the fovea is dependent on both the response contrast and the steepness of the falloff observed in the periphery. Humans could exploit this information when decoding peripheral signals to estimate contrasts, estimate blur levels, or select the most informative locations for saccadic eye movements.
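
The two-parameter summary of each simulated response patch (response contrast and the falloff of power with spatial frequency) can be sketched as below. The frequency handling and the simple log-log fit are assumptions standing in for the paper's exact procedure.

```python
import numpy as np

def response_patch_summary(patch):
    """Summarize a model ganglion-cell response patch with two numbers:
    response contrast (standard deviation of the patch) and spectral
    falloff (slope of log power vs. log spatial frequency, by least squares).
    """
    patch = np.asarray(patch, dtype=float)
    contrast = patch.std()

    # Power spectrum with the mean (DC) component removed.
    spectrum = np.fft.fftshift(np.fft.fft2(patch - patch.mean()))
    power = np.abs(spectrum) ** 2
    ny, nx = patch.shape
    fy, fx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(ny)),
                         np.fft.fftshift(np.fft.fftfreq(nx)), indexing="ij")
    radius = np.hypot(fx, fy)
    keep = radius > 0  # exclude the DC sample

    # Fit power ~ frequency**slope in log-log coordinates; the slope is the
    # falloff parameter (a simplified stand-in for the paper's analysis).
    slope, _ = np.polyfit(np.log(radius[keep]), np.log(power[keep] + 1e-12), 1)
    return contrast, slope

# Demo on a white-noise "patch"; real inputs would be simulated P-cell responses.
rng = np.random.default_rng(1)
print(response_patch_summary(rng.standard_normal((64, 64))))
```

With these two numbers computed at several eccentricities, the decoding question in the abstract becomes a regression from peripheral (contrast, falloff) pairs to the foveal response contrast for the same image patch.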


Subject(s)
Contrast Sensitivity/physiology; Fovea Centralis/physiology; Retinal Ganglion Cells/physiology; Space Perception/physiology; Visual Cortex/physiology; Humans; Photic Stimulation/methods
4.
J Neurosci; 28(17): 4356-67, 2008 Apr 23.
Article in English | MEDLINE | ID: mdl-18434514

ABSTRACT

Recent studies have shown that humans effectively take into account task variance caused by intrinsic motor noise when planning fast hand movements. However, previous evidence suggests that humans have greater difficulty accounting for arbitrary forms of stochasticity in their environment, both in economic decision making and sensorimotor tasks. We hypothesized that humans can learn to optimize movement strategies when environmental randomness can be experienced and thus implicitly learned over several trials, especially if it mimics the kinds of randomness for which subjects might have generative models. We tested the hypothesis using a task in which subjects had to rapidly point at a target region partly covered by three stochastic penalty regions introduced as "defenders." At movement completion, each defender jumped to a new position drawn randomly from fixed probability distributions. Subjects earned points when they hit the target, unblocked by a defender, and lost points otherwise. Results indicate that after approximately 600 trials, subjects approached optimal behavior. We further tested whether subjects simply learned a set of stimulus-contingent motor plans or the statistics of defenders' movements by training subjects with one penalty distribution and then testing them on a new penalty distribution. Subjects immediately changed their strategy to achieve the same average reward as subjects who had trained with the second penalty distribution. These results indicate that subjects learned the parameters of the defenders' jump distributions and used this knowledge to optimally plan their hand movements under conditions involving stochastic rewards and penalties.
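
The optimal strategy against which subjects were compared amounts to choosing the aim point that maximizes expected gain given the motor noise and the defenders' jump distributions. A minimal Monte Carlo sketch of that computation follows; the function signature, isotropic noise model, and parameter values are illustrative assumptions, not the paper's model.

```python
import numpy as np

def best_aim_point(candidate_aims, defender_sampler, gain_fn,
                   motor_sd=5.0, n_sim=2000, seed=0):
    """Choose the aim point with the highest simulated expected gain.

    candidate_aims   : (K, 2) array of candidate mean movement endpoints.
    defender_sampler : callable(rng, n) -> n random defender configurations
                       drawn from their (learned) jump distributions.
    gain_fn          : callable(endpoints, defenders) -> (n,) points earned,
                       encoding the task's reward/penalty structure.
    motor_sd         : isotropic endpoint noise standing in for the
                       subject's intrinsic motor variability.
    """
    rng = np.random.default_rng(seed)
    aims = np.atleast_2d(np.asarray(candidate_aims, dtype=float))
    defenders = defender_sampler(rng, n_sim)
    expected_gain = []
    for aim in aims:
        # Simulate noisy endpoints around this aim point and average the gain.
        endpoints = aim + rng.normal(0.0, motor_sd, size=(n_sim, aims.shape[1]))
        expected_gain.append(float(np.mean(gain_fn(endpoints, defenders))))
    best = int(np.argmax(expected_gain))
    return aims[best], expected_gain
```

In this framing, the abstract's finding is that after roughly 600 trials subjects behave as if they carry out a computation like this implicitly, and that swapping in a new defender distribution immediately shifts their aim points accordingly.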


Subject(s)
Hand/physiology; Learning/physiology; Psychomotor Performance/physiology; Reaction Time/physiology; Reward; Adolescent; Adult; Female; Humans; Male; Photic Stimulation/methods; Stochastic Processes