Results 1 - 20 of 26
1.
J Vis ; 23(13): 4, 2023 Nov 01.
Article in English | MEDLINE | ID: mdl-37930689

ABSTRACT

Horizontal disparity has been recognized as the primary signal driving stereoscopic depth since the invention of the stereoscope in the 1830s. It has a unique status in our understanding of binocular vision. The direction of offset of the eyes gives the disparities of corresponding image point locations across the two retinas a strong horizontal bias. Beyond the retina, other factors give shape to the effective disparity direction used by visual mechanisms. The influence of orientation is examined here. I argue that horizontal disparity is an inflection point along a continuum of effective directions, and its role in stereo vision can be reinterpreted. The pointwise geometric justification for its special status neglects the oriented structural elements of spatial vision, its physiological support is equivocal, and psychophysical support of its special status may partially reflect biased stimulus sampling. The literature shows that horizontal disparity plays no particular role in the processing of one-dimensional stimuli, a reflection of the stereo aperture problem. The resulting depth is non-veridical, even non-transitive. Although one-dimensional components contribute to the stereo depth of visual objects generally, two-dimensional stimuli appear not to inherit the aperture problem. However, a look at the two-dimensional stimuli that predominate in experimental studies shows regularities in orientation that give a new perspective on horizontal disparity.
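As a compact way of stating the stereo aperture problem invoked above (my notation, not the paper's): a one-dimensional stimulus carries disparity information only along the axis perpendicular to its orientation. With \hat{u} = (\cos\theta, \sin\theta) the unit vector along the bars and \hat{n} = (-\sin\theta, \cos\theta) its normal, a physical disparity vector \mathbf{d} is effectively reduced to

    \mathbf{d}_{\mathrm{eff}} = (\mathbf{d} \cdot \hat{n})\,\hat{n},

so the usable disparity direction is set by stimulus orientation rather than fixed at horizontal, which is the continuum of effective directions the abstract describes.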


Subject(s)
Retina , Vision, Binocular , Humans
2.
Cogn Psychol ; 132: 101443, 2022 02.
Article in English | MEDLINE | ID: mdl-34856532

ABSTRACT

Logic and common sense say that judging two stimuli as "same" is the converse of judging them as "different". Empirically, however, 'Same'-'Different' judgment data are anomalous in two major ways. The fast-'Same' effect violates the expectation that 'Same' reaction time (RT) should be predictable by extrapolating from 'Different' RT. The criterion effect violates the expectation that RTs measured when sameness is defined by a conjunction of matching attributes should predict RTs measured when sameness is defined by a disjunction of matching attributes. The two criteria are symmetrical, yet empirically they differ greatly, disjunctive judgments being by far the slower of the two. This study sought the sources of these two effects. With the aid of a cue, a selective-comparison method deconfounded the contributions of stimulus encoding and comparisons to the two effects. The results were paradoxical. Each additional irrelevant (uncued) letter in a random string incremented RT for conjunctive judgments as much as an additional relevant letter did. Yet irrelevant letters were not compared and relevant letters had to be compared. These results appeared again in a second experiment that used words as stimuli. Contrary to intuition, a distinct comparison mechanism, the heart of relative judgment models, is not necessary in judgments of sameness and difference. It is shown here that encoding can carry out the comparison function without the operation of a separate comparison mechanism. Attention mediates the process by selecting from the set of stimulus alternatives, thereby partitioning the set into the 'Same' and 'Different' subsets. The fast-'Same' and criterion effects result from a structural limitation on what attention can select at any one time. With attention mediating the task, 'Same'-'Different' judgments become, in effect, the outcome of testing a hypothesis, bridging the distinction between absolute stimulus identification and relative judgments.


Subject(s)
Attention , Judgment , Humans , Reaction Time
3.
PLoS One ; 14(7): e0219052, 2019.
Article in English | MEDLINE | ID: mdl-31356649

ABSTRACT

The stereo correspondence problem exists because false matches between the images from multiple sensors camouflage the true (veridical) matches. True matches are correspondences between image points that have the same generative source; false matches are correspondences between similar image points that have different sources. This problem of selecting true matches among false ones must be overcome by both biological and artificial stereo systems in order for them to be useful depth sensors. The proposed re-examination of this fundamental issue shows that false matches form a symmetrical pattern in the array of all possible matches, with true matches forming the axis of symmetry. The patterning of false matches can therefore be used to locate true matches and derive the depth profile of the surface that gave rise to them. This reverses the traditional strategy, which treats false matches as noise. The new approach is particularly well-suited to extract the 3-D locations and shapes of camouflaged surfaces and to work in scenes characterized by high degrees of clutter. We demonstrate that the symmetry of false-match signals can be exploited to identify surfaces in random-dot stereograms. This strategy permits novel depth computations for target detection, localization, and identification by machine-vision systems, accounts for physiological and psychophysical findings that are otherwise puzzling, and makes possible new ways of combining stereo and motion signals.
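To make the proposed strategy concrete, the following is a minimal Python sketch for a single pair of 1-D scanlines; the normalized-correlation match measure, window sizes, and symmetry score are illustrative assumptions, not the algorithm implemented in the paper.

    import numpy as np

    def match_strength(left, right, i, j, win=3):
        # Normalized correlation between small windows centered on left[i]
        # and right[j]; a stand-in for whatever match measure is used.
        a = np.asarray(left, float)[max(i - win, 0): i + win + 1]
        b = np.asarray(right, float)[max(j - win, 0): j + win + 1]
        n = min(len(a), len(b))
        if n == 0:
            return 0.0
        a, b = a[:n] - a[:n].mean(), b[:n] - b[:n].mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-9
        return float((a * b).sum() / denom)

    def symmetry_disparity(left, right, i, max_d=8, k=4):
        # Pick the candidate disparity about which the neighboring (false)
        # matches are most symmetric, rather than the single strongest match.
        col = {d: match_strength(left, right, i, i + d)
               for d in range(-max_d, max_d + 1)}
        best_d, best_score = 0, -np.inf
        for d in col:
            score = -sum(abs(col.get(d + m, 0.0) - col.get(d - m, 0.0))
                         for m in range(1, k + 1))
            if score > best_score:
                best_d, best_score = d, score
        return best_d

Calling symmetry_disparity(left_scanline, right_scanline, i) at each pixel i of a random-dot stereogram yields a depth profile that exploits the symmetry of false matches rather than discarding them as noise.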


Subject(s)
Depth Perception/physiology , Imaging, Three-Dimensional/statistics & numerical data , Algorithms , Animals , Computer Simulation , Humans , Image Processing, Computer-Assisted/statistics & numerical data , Photic Stimulation , Psychophysics , Vision Disparity/physiology
4.
Vision Res ; 158: 19-30, 2019 05.
Article in English | MEDLINE | ID: mdl-30771360

ABSTRACT

Stereoscopic depth is most useful when it comes from relative rather than absolute disparities. However, the depth perceived from relative disparities can vary with stimulus parameters that have no connection with depth or are irrelevant to the task. We investigated observers' ability to judge the stereo depth of task-relevant stimuli while ignoring irrelevant stimuli. The calculation of depth from disparity differs for 1-D and 2-D stimuli and we investigated the role this difference plays in observers' ability to selectively process relevant information. We show that the presence of irrelevant disparities affects perceived depth differently depending on stimulus dimensionality. Observers could not ignore disparities of irrelevant stimuli when they judged the relative depth between a 1-D stimulus (a grating) and a 2-D stimulus (a plaid). Yet these irrelevant disparities did not affect judgments of the relative depth between 2-D stimuli. Two processes contributing to stereo depth were identified, only one of which computes depth from a horizontal disparity metric and permits attentional selection. The other uses all stimuli, relevant and irrelevant, to calculate an effective disparity direction for comparing disparity magnitudes. These processes produce inseparable effects in most data sets. Using multiple disparity directions and comparing 1-D and 2-D stimuli can distinguish them.


Subject(s)
Attention/physiology , Depth Perception/physiology , Judgment , Vision, Binocular/physiology , Humans , Lighting , Photic Stimulation , Vision Disparity/physiology
5.
Vision Res ; 105: 137-50, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25449161

ABSTRACT

The separation between the eyes shapes the distribution of binocular disparities and gives a special role to horizontal disparities. However, for one-dimensional stimuli, disparity direction, like motion direction, is linked to stimulus orientation. This makes the perceived depth of one-dimensional stimuli orientation dependent and generally non-veridical. It also allows perceived depth to violate transitivity. Three stimuli, A, B, and C, can be arranged such that A > B (stimulus A is seen as farther than stimulus B when they are presented together) and B > C, yet A ⩽ C. This study examines how the visual system handles the depth of A, B, and C when they are presented together, forming a pairwise inconsistent stereo display. Observers' depth judgments of displays containing a grating and two plaids resolved transitivity violations among the component stimulus pairs. However, these judgments were inconsistent with judgments of the same stimuli within depth-consistent displays containing no transitivity violations. To understand the contribution of individual disparity signals, observers were instructed in subsequent experiments to judge the depth of a subset of display stimuli. This attentional instruction was ineffective; relevant and irrelevant stimuli contributed equally to depth judgments. Thus, the perceived depth separating a pair of stimuli depended on the disparities of the other stimuli presented concurrently. This context dependence of stereo depth can be approximated by an obligatory pooling and comparison of the disparities of one- and two-dimensional stimuli along an axis defined locally by the stimuli.
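One way to write down the pooling-and-comparison approximation described in the last sentence, offered purely as an illustration (the specific pooling rule here is an assumption, not the paper's fitted model): given the disparity vectors \mathbf{d}_1, \ldots, \mathbf{d}_n of all stimuli in the display, take the local comparison axis to be

    \hat{a} = \frac{\sum_i \mathbf{d}_i}{\lVert \sum_i \mathbf{d}_i \rVert}

and order the stimuli in depth by the signed projections \mathbf{d}_i \cdot \hat{a}. Because \hat{a} depends on every stimulus present, the depth seen between any pair changes when other disparities are added to the display, which is the context dependence the experiments report.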


Subject(s)
Depth Perception/physiology , Vision, Binocular/physiology , Attention/physiology , Female , Humans , Judgment , Photic Stimulation/methods , Psychophysics , Vision Disparity/physiology
6.
J Vis ; 10(4): 25.1-15, 2010 Apr 29.
Article in English | MEDLINE | ID: mdl-20465343

ABSTRACT

Horizontal binocular disparity has long been the conventional predictor of stereo depth. Surprisingly, an alternative predictor fares just as well. This alternative predicts the relative depth of two stimuli from the relation between their disparity vectors, without regard to horizontal disparities. These predictions can differ; horizontal disparities accurately predict the perceived depth of a grating and a plaid only when the grating is vertical, while the vector calculation accurately predicts it at all except near-horizontal grating orientations. For spatially two-dimensional stimulus pairs, such as plaids, dots, and textures, the predictions cannot be distinguished when the stimuli have the same disparity direction or when the disparity direction of one of the stimuli is horizontal or has a magnitude of zero. These are the conditions that have prevailed in earlier studies. We tested whether the perceived depth of two-dimensional stimuli depends on relative horizontal disparity magnitudes or on relative disparity magnitudes along a disparity axis. On both measures tested (depth matches and depth-interval matches), the perceived depth of plaids varied with their horizontal disparities and not with disparity direction differences as observed for grating-plaid pairs. Differences in disparity directions as great as 120 degrees did not affect depth judgments. This result, though opposite to the grating-plaid data, is consistent with them and provides a view into the construction of orientation-invariant disparity representations.


Subject(s)
Depth Perception/physiology , Orientation/physiology , Photic Stimulation/methods , Female , Humans , Lighting
7.
J Vis ; 9(10): 3.1-19, 2009 Sep 04.
Article in English | MEDLINE | ID: mdl-19810784

ABSTRACT

Even though binocular disparity is a very well-studied cue to depth, the function relating disparity and perceived depth has been characterized only for the case of horizontal disparities. We sought to determine the general relationship between disparity and depth for a particular set of stimuli. The horizontal disparity direction is a special case, albeit an especially important one. Non-horizontal disparities arise from a number of sources under natural viewing conditions. Moreover, they are implicit in patterns that are one-dimensional, such as gratings, lines, and edges, and in one-dimensional components of two-dimensional patterns, where a stereo matching direction is not well-defined. What function describes perceived depth in these cases? To find out, we measured the phase disparities that produced depth matches between a reference stimulus and a test stimulus. The reference stimulus was two-dimensional, a plaid; the test stimulus was one-dimensional, a grating. We find that horizontal disparity is no more important than other disparity directions in determining depth matches between these two stimuli. As a result, a grating and a plaid appear equal in depth when their horizontal disparities are, in general, unequal. Depth matches are well predicted by a simple disparity vector calculation; they survive changes in component parameters that conserve these vector quantities. The disparity vector rule also describes how the disparities of 1-D components might contribute to the perceived depth of 2-D stimuli.
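A plausible reading of the "simple disparity vector calculation", written out here as an assumption rather than quoted from the paper: the grating's disparity is defined only along its normal \hat{n} (the stereo aperture problem), so a grating with disparity vector \mathbf{d}_g and a plaid with disparity vector \mathbf{d}_p should appear at the same depth when their disparity components along that axis agree,

    \mathbf{d}_p \cdot \hat{n} = \mathbf{d}_g \cdot \hat{n}.

Under this rule the horizontal disparities of the two stimuli at a depth match are generally unequal, as reported above, and coincide only when the grating is vertical, i.e., when \hat{n} is horizontal.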


Subject(s)
Depth Perception , Photic Stimulation/methods , Vision Disparity , Humans , Orientation , Psychophysics
8.
Vision Res ; 49(17): 2209-16, 2009 Aug.
Article in English | MEDLINE | ID: mdl-19540869

ABSTRACT

Binocular disparities have a straightforward geometric relation to object depth, but the computation that humans use to turn disparity signals into depth percepts is neither straightforward nor well understood. One seemingly solid result, which came out of Wheatstone's work in the 1830s, is that the sign and magnitude of horizontal disparity predict the perceived depth of an object: 'positive' horizontal disparities yield the perception of 'far' depth, 'negative' horizontal disparities yield the perception of 'near' depth, and variations in the magnitude of horizontal disparity monotonically increase or decrease the perceived extent of depth. Here we show that this classic link between horizontal disparity and the perception of 'near' versus 'far' breaks down when the stimuli are one-dimensional. For these stimuli, horizontal is not a privileged disparity direction. Instead of relying on horizontal disparities to determine their depth relative to that of two-dimensional stimuli, the visual system uses a disparity calculation that is non-veridical yet well suited to deal with the joint coding of disparity and orientation.
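For reference, the "straightforward geometric relation" for small depth intervals: with interocular separation I, viewing distance D, and a depth offset \Delta D \ll D, the horizontal disparity in radians is approximately

    \delta \approx \frac{I \, \Delta D}{D^{2}},

so uncrossed (positive) disparities correspond to objects beyond fixation ('far'), crossed (negative) disparities to objects in front of it ('near'), and the magnitude grows monotonically with the depth interval. This is the classic link that the abstract shows breaking down for one-dimensional stimuli.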


Subject(s)
Depth Perception/physiology , Pattern Recognition, Visual/physiology , Vision Disparity/physiology , Humans , Photic Stimulation/methods , Psychometrics , Psychophysics , Vision, Binocular/physiology
9.
J Vis ; 9(11): 23.1-20, 2009 Oct 26.
Article in English | MEDLINE | ID: mdl-20053086

ABSTRACT

Humans can recover 3-D structure from the projected 2-D motion field of a rotating object, a phenomenon called structure from motion (SFM). Current models of SFM perception are limited to the case in which objects rotate about a frontoparallel axis. However, as our recent psychophysical studies showed, frontoparallel axes of rotation are not representative of the general case. Here we present the first model to address the problem of SFM perception for the general case of rotations around an arbitrary axis. The SFM computation is cast as a two-stage process. The first stage computes the structure perpendicular to the axis of rotation. The second stage corrects for the slant of the axis of rotation. For cylinders, the computed object shape is invariant with respect to the observer's viewpoint (that is, perceived shape does not change with a change in the direction of the axis of rotation). The model uses template matching to estimate global parameters such as the angular speed of rotation, which are then used to compute the local depth structure. The model provides quantitative predictions that agree well with current psychophysical data for both frontoparallel and non-frontoparallel rotations.
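The first stage rests on a standard piece of SFM geometry that is worth stating explicitly (the slant-correction second stage is specific to the model and is not reproduced here): under orthographic projection, a point rotating rigidly about a frontoparallel axis with angular speed \omega has image speed

    \dot{x} = \omega z

(up to the sign set by the rotation direction), where z is the point's depth relative to the rotation axis. Once the global parameter \omega is estimated, here by template matching, local depth follows as z = \dot{x} / \omega.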


Subject(s)
Depth Perception/physiology , Form Perception/physiology , Models, Neurological , Motion Perception/physiology , Algorithms , Humans , Imaging, Three-Dimensional , Psychophysics , Rotation
10.
J Math Psychol ; 53(2): 86-91, 2009 Apr 01.
Article in English | MEDLINE | ID: mdl-20161280

ABSTRACT

It is often assumed that the space we perceive is Euclidean, although this idea has been challenged by many authors. Here we show that, if spatial cues are combined as described by Maximum Likelihood Estimation, Bayesian, or equivalent models, as appears to be the case, then Euclidean geometry cannot describe our perceptual experience. Rather, our perceptual spatial structure would be better described as belonging to an arbitrarily curved Riemannian space.
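For concreteness, the cue-combination rule at issue is the standard reliability-weighted average: two cues giving estimates s_1 and s_2 of the same spatial quantity, with variances \sigma_1^2 and \sigma_2^2, are combined as

    \hat{s} = \frac{\sigma_2^{2} s_1 + \sigma_1^{2} s_2}{\sigma_1^{2} + \sigma_2^{2}}, \qquad
    \sigma_{\hat{s}}^{2} = \frac{\sigma_1^{2} \sigma_2^{2}}{\sigma_1^{2} + \sigma_2^{2}},

which is the Maximum Likelihood (and, with a flat prior, Bayesian) solution. The paper's argument is that if perceived spatial quantities arise from combinations of this form, the resulting perceptual space is better described as Riemannian than Euclidean.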

11.
Neurocomputing (Amst) ; 71(7-9): 1629-1641, 2008 Mar.
Article in English | MEDLINE | ID: mdl-19255615

ABSTRACT

We introduce a model for the computation of structure-from-motion based on the physiology of visual cortical areas MT and MST. The model assumes that the perception of depth from motion is related to the firing of a subset of MT neurons tuned to both velocity and disparity. The model's MT neurons are connected to each other laterally to form modulatory receptive-field surrounds that are gated by feedback connections from area MST. This allows the building up of a depth map from motion in area MT, even in the absence of disparity in the input. Depth maps from motion and from stereo are combined by a weighted average at a final stage. The model's predictions for the interaction between motion and stereo cues agree with previous psychophysical data, both when the cues are consistent with each other and when they are contradictory. In particular, the model shows nonlinearities as a result of early interactions between motion and stereo before their depth maps are averaged. The two cues interact in a way that represents an alternative to the "modified weak fusion" model of depth-cue combination.
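As a minimal sketch of the final combination stage only (the MT lateral connections and MST gating are not reproduced, and the fixed weights below are placeholders for whatever reliability weighting the model uses):

    import numpy as np

    def combine_depth_maps(depth_from_motion, depth_from_stereo,
                           w_motion=0.5, w_stereo=0.5):
        # Weighted average of the two depth maps at the model's final stage.
        motion = np.asarray(depth_from_motion, float)
        stereo = np.asarray(depth_from_stereo, float)
        return (w_motion * motion + w_stereo * stereo) / (w_motion + w_stereo)

Note that the nonlinearities mentioned in the abstract arise before this stage, from interactions between the motion and stereo signals within MT, so a weighted average alone does not capture the full model.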

12.
J Vis ; 7(7): 3.1-18, 2007 May 23.
Article in English | MEDLINE | ID: mdl-17685799

ABSTRACT

Humans can recover the structure of a 3D object from motion cues alone. Recovery of structure from motion (SFM) from the projected 2D motion field of a rotating object has been studied almost exclusively in one particular condition, that in which the axis of rotation lies in the frontoparallel plane. Here, we assess the ability of humans to recover SFM in the general case, where the axis of rotation may be slanted out of the frontoparallel plane. Using elliptical cylinders whose cross section was constant along the axis of rotation, we find that, across a range of parameters, subjects accurately matched the simulated shape of the cylinder regardless of how much the axis of rotation was inclined away from the frontoparallel plane. Yet, we also find that subjects do not perceive the inclination of the axis of rotation veridically. This combination of results violates a relationship between perceived angle of inclination and perceived shape that must hold if SFM is to be recovered from the instantaneous velocity field. The contradiction can be resolved if the angular speed of rotation is not consistently estimated from the instantaneous velocity field. This, in turn, predicts that variation in object size along the axis of rotation can cause depth-order violations along the line of sight. This prediction was verified using rotating circular cones as stimuli. Thus, as the axis of rotation changes its inclination, shape constancy is maintained through a trade-off. Humans perceive the structure of the object relative to a changing axis of rotation as unchanging by introducing an inconsistency between the perceived speed of rotation and the first-order optic flow. The observed depth-order violations are the cost of the trade-off.


Subject(s)
Depth Perception , Form Perception , Motion Perception , Perceptual Distortion , Humans , Photic Stimulation/methods , Rotation
13.
J Neurosci ; 26(36): 9098-106, 2006 Sep 06.
Article in English | MEDLINE | ID: mdl-16957066

ABSTRACT

The left and right eyes receive subtly different images from a visual scene. Binocular disparities of retinal image locations are correlated with variation in the depth of objects in the scene and make stereoscopic depth perception possible. Disparity stereoscopically specifies a stimulus; changing the stimulus in a way that conserves its disparity leaves the stimulus stereoscopically unchanged. Therefore, a person's ability to use stereo to see the depth separating any two objects should depend only on the disparities of the objects, which in turn depend on where the objects are, not what they are. However, I find that the disparity difference between two stimuli by itself predicts neither stereoacuity nor perceived depth. Human stereo vision is shown here to be most sensitive at detecting the relative depth of two gratings when they are parallel. Rotating one grating by as little as 10 degrees lowers sensitivity. The rotation can make a perceptible depth separation invisible, although it changes neither the relative nor absolute disparities of the gratings, only their relative orientations. The effect of relative orientation is not confined to stimuli that, like gratings, vary along one dimension or to stimuli perceived to have a dominant orientation. Rather, it is the relative orientation of the one-dimensional components of stimuli, even broadband stimuli, that matters. This limit on stereoscopic depth perception appears to be intrinsic to the visual system's computation of disparity; by taking place within orientation bands, the computation renders the coding of disparity inseparable from the coding of orientation.


Subject(s)
Depth Perception/physiology , Orientation/physiology , Pattern Recognition, Visual/physiology , Task Performance and Analysis , Vision, Binocular/physiology , Humans , Psychomotor Performance , Vision Disparity/physiology
14.
Vision Res ; 46(28): 4646-74, 2006 Dec.
Article in English | MEDLINE | ID: mdl-16808957

ABSTRACT

Seeking to understand how people recognize objects, we have examined how they identify letters. We expected this 26-way classification of familiar forms to challenge the popular notion of independent feature detection ("probability summation"), but find instead that this theory parsimoniously accounts for our results. We measured the contrast required for identification of a letter briefly presented in visual noise. We tested a wide range of alphabets and scripts (English, Arabic, Armenian, Chinese, Devanagari, Hebrew, and several artificial ones), three- and five-letter words, and various type styles, sizes, contrasts, durations, and eccentricities, with observers ranging widely in age (3 to 68) and experience (none to fluent). Foreign alphabets are learned quickly. In just three thousand trials, new observers attain the same proficiency in letter identification as fluent readers. Surprisingly, despite this training, the observers, like clinical letter-by-letter readers, have the same meager memory span for random strings of these characters as observers seeing them for the first time. We compare performance across tasks and stimuli that vary in difficulty by pitting the human against the ideal observer and expressing the results as efficiency. We find that efficiency for letter identification is independent of duration, overall contrast, and eccentricity, and only weakly dependent on size, suggesting that letters are identified by a similar computation across this wide range of viewing conditions. Efficiency is also independent of age and years of reading. However, efficiency does vary across alphabets and type styles, with more complex forms yielding lower efficiencies, as one might expect from Gestalt theories of perception. In fact, we find that efficiency is inversely proportional to perimetric complexity (perimeter squared over "ink" area) and nearly independent of everything else. This, and the surprisingly fixed ratio of detection and identification thresholds, indicate that identifying a letter is mediated by detection of about 7 visual features.
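Two of the quantities in this abstract can be computed directly. The Python sketch below shows one way to do so for a binary letter image; the boundary-pixel estimate of perimeter and the definition of efficiency as a squared ratio of threshold contrasts are assumed conventions, not code from the study.

    import numpy as np

    def perimetric_complexity(ink_mask):
        # Perimeter squared over "ink" area. ink_mask is a boolean image of
        # the letter; perimeter is approximated by counting ink pixels that
        # have at least one non-ink 4-neighbour.
        ink = np.asarray(ink_mask, bool)
        area = ink.sum()
        padded = np.pad(ink, 1, constant_values=False)
        interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                    padded[1:-1, :-2] & padded[1:-1, 2:])
        perimeter = (ink & ~interior).sum()
        return perimeter ** 2 / area

    def efficiency(ideal_threshold_contrast, human_threshold_contrast):
        # Ratio of threshold contrast energies; energy scales with the
        # square of contrast.
        return (ideal_threshold_contrast / human_threshold_contrast) ** 2

The reported result is then that efficiency is roughly inversely proportional to perimetric_complexity of the alphabet and nearly independent of everything else.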


Subject(s)
Contrast Sensitivity , Pattern Recognition, Visual , Adolescent , Adult , Aged , Aging/psychology , Child , Child, Preschool , Discrimination Learning , Humans , Language , Mathematics , Mental Recall , Middle Aged , Photic Stimulation/methods , Reading , Sensory Thresholds , Size Perception , Time Factors
15.
Vision Res ; 46(8-9): 1230-41, 2006 Apr.
Article in English | MEDLINE | ID: mdl-16356526

ABSTRACT

A spatially flat stimulus is perceived as varying in depth if its velocity structure is consistent with that of a three-dimensional (3D) object. This is structure from motion (SFM). We asked if the converse effect also exists. A motion-from-structure effect would skew an object's perceived velocity structure to make it more consistent with the 3D structure provided by its depth cues. This proposed phenomenon should be opposite in sign from velocity constancy and could potentially interfere with it. Previous tests of velocity constancy compared stimuli presented at different times, not simultaneously. This explains why a reversal of SFM has not been previously reported, as it is expected to appear only for simultaneous presentations. We tested this prediction using random-dot stereograms to define two adjacent moving surfaces separated in stereoscopic depth. We found that subjects did not perceive velocity constancy with either simultaneous or sequential stimulus presentations. For sequential presentations, subjects matched retinal speeds, in agreement with previous work. However, for simultaneous presentations, the nearer surface was seen as moving faster when both surfaces were moving with the same retinal speed, an effect opposite in polarity from velocity constancy and a signature of the motion-from-structure phenomenon.


Subject(s)
Form Perception/physiology , Motion Perception/physiology , Optical Illusions , Calibration , Humans , Psychophysics , Vision Disparity , Vision, Binocular
16.
Vision Res ; 46(8-9): 1307-17, 2006 Apr.
Article in English | MEDLINE | ID: mdl-16356530

ABSTRACT

There are two possible binocular mechanisms for the detection of motion in depth. One is based on disparity changes over time and the other is based on interocular velocity differences. It has previously been shown that disparity changes over time can produce the perception of motion in depth. However, existing psychophysical and physiological data are inconclusive as to whether interocular velocity differences play a role in motion in depth perception. We studied this issue using the motion aftereffect, the illusory motion of static patterns that follows adaptation to real motion. We induced a differential motion aftereffect to the two eyes and then tested for motion in depth in a stationary random-dot pattern seen with both eyes. It has been shown previously that a differential translational motion aftereffect produces a strong perception of motion in depth. We show here that a rotational motion aftereffect inhibits this perception of motion in depth, even though a real rotation induces motion in depth. A non-horizontal translational motion aftereffect did not inhibit motion in depth. Together, our results strongly suggest that (1) pure interocular velocity differences can produce motion in depth, and (2) the illusory changes in position from the motion aftereffect are generated relatively late in the visual hierarchy, after binocular combination.


Subject(s)
Adaptation, Psychological , Visual Perception/physiology , Depth Perception/physiology , Figural Aftereffect , Humans , Motion Perception/physiology , Psychophysics , Rotation
17.
Vision Res ; 45(21): 2786-98, 2005 Oct.
Article in English | MEDLINE | ID: mdl-16023695

ABSTRACT

An object moving in depth produces retinal images that change in position over time by different amounts in the two eyes. This allows stereoscopic perception of motion in depth to be based on either one or both of two different visual signals: inter-ocular velocity differences, and binocular disparity change over time. Disparity change over time can produce the perception of motion in depth. However, demonstrating the same for inter-ocular velocity differences has proved elusive because of the difficulty of isolating this cue from disparity change (the inverse can easily be done). No physiological data are available, and existing psychophysical data are inconclusive as to whether inter-ocular velocity differences are used in primate vision. Here, we use motion adaptation to assess the contribution of inter-ocular velocity differences to the perception of motion in depth. If inter-ocular velocity differences contribute to motion in depth, we would expect that discriminability of direction of motion in depth should be improved after adaptation to frontoparallel motion. This is because an inter-ocular velocity difference is a comparison between two monocular frontoparallel motion signals, and because frontoparallel speed discrimination improves after motion adaptation. We show that adapting to frontoparallel motion does improve both frontoparallel speed discrimination and motion-in-depth direction discrimination. No improvement would be expected if only disparity change over time contributes to motion in depth. Furthermore, we found that frontoparallel motion adaptation diminishes discrimination of both speed and direction of motion in depth in dynamic random dot stereograms, in which changing disparity is the only cue available. The results provide strong evidence that inter-ocular velocity differences contribute to the perception of motion in depth and thus that the human visual system contains mechanisms for detecting differences in velocity between the two eyes' retinal images.
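The two candidate signals can be written side by side, which makes clear why isolating them is difficult: with x_L(t) and x_R(t) the horizontal retinal positions of a tracked feature in the left and right eyes,

    \text{IOVD: } \dot{x}_L - \dot{x}_R, \qquad
    \text{CD: } \frac{d}{dt}\,(x_L - x_R),

the two expressions are algebraically identical for a single feature and differ only in the order of operations (differentiate each eye's position first versus take the disparity first). They therefore have to be dissociated experimentally, as done here with motion adaptation and with dynamic random-dot stereograms that carry only the changing-disparity cue.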


Subject(s)
Depth Perception/physiology , Eye Movements/physiology , Motion Perception/physiology , Vision, Binocular/physiology , Adaptation, Physiological , Discrimination, Psychological/physiology , Humans , Photic Stimulation/methods , Sensory Thresholds/physiology , Vision Disparity/physiology
18.
Vision Res ; 45(9): 1165-76, 2005 Apr.
Article in English | MEDLINE | ID: mdl-15707925

ABSTRACT

Variants of a lightness effect described by Todorovic (Todorovic, D. (1997). Lightness and junctions. Perception, 26, 379) were studied to quantify the failure of lightness constancy as a function of target luminance and target size. Todorovic's effect is similar to White's effect. Simultaneous lightness contrast appears to operate selectively between stimuli belonging to the same perceptual group, and not between stimuli of equal proximity belonging to different perceptual groups. We found that mid-gray targets grouped with a white contextual stimulus were matched on average to a darker-than-veridical gray. Those grouped with a black contextual stimulus were matched on average veridically. This is consistent with 'anchoring' effects observed in simple two-stimulus displays. However, target luminance had an effect that was not captured by mid-level target luminance data or data averaged across target luminances. For both white and black contextual stimuli, light-gray targets were matched to a darker-than-veridical gray and the direction of this error shifted toward the lighter-than-veridical direction as the luminance of the target was lowered. The result was a constant difference between the perceived lightnesses of targets presented with white and black contextual stimuli. Target size had no effect on perceived lightness. These data imply that the Todorovic-White effect can be characterized as lightness assimilation rather than as lightness contrast. By accounting for compression as well as the Todorovic-White effect, assimilation is the more general explanation.


Subject(s)
Lighting , Visual Perception/physiology , Adult , Aged , Contrast Sensitivity , Female , Humans , Male , Middle Aged , Psychophysics
19.
J Vis ; 5(10): 783-92, 2005 Nov 23.
Article in English | MEDLINE | ID: mdl-16441185

ABSTRACT

Stereoacuity thresholds, measured with bar targets, rise as the absolute disparity of the bars is increased. One explanation for this rise is that, as the bars are moved away from the fixation plane, the stereo system uses coarser mechanisms to encode the bars' disparity; coarse mechanisms are insensitive to small changes in target disparity, resulting in higher thresholds. To test this explanation, we measured stereoacuity with a 6-deg-wide, 3-cpd grating presented in a rectangular envelope. We varied the disparity of the grating and its edges (envelope) parametrically from 0 to 20 arcmin (i.e., through one full period). To force observers to make judgments based on carrier disparity, we then varied the interocular phase incrementally from trial to trial while keeping edge disparity fixed for a given block of trials. The pedestal phase disparity of the grating necessarily cycles through 360 degrees, back to zero disparity, as the edge disparity increases monotonically from 0 to 20 arcmin. Unlike mechanisms that respond to bars, the mechanism that responds to the interocular phase disparity of the grating should have the same sensitivity at 20 arcmin disparity (360 degrees of phase) as it has at zero disparity. So, if stereoacuity were determined by the most sensitive mechanism, thresholds should oscillate with the pedestal phase disparity. However, these gratings are perceived in depth at the disparity of their edges. If stereoacuity were instead determined by the stereo matching operations that generate perceived depth, thresholds should rise monotonically with increasing edge disparity. We found that the rise in grating thresholds with increasing edge disparity was monotonic and virtually identical to the rise in thresholds observed for bars. Stereoacuity is contingent on stereo matching.
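The design hinges on a small piece of arithmetic worth making explicit: a 3-cpd carrier has a period of 1/3 deg = 20 arcmin, so an edge (envelope) disparity of \Delta arcmin corresponds to a carrier phase disparity of

    \phi = 360^{\circ} \times \frac{\Delta \bmod 20}{20},

which returns to 0^{\circ} at \Delta = 20 arcmin. A mechanism reading the carrier's interocular phase should therefore be exactly as sensitive at 20 arcmin of edge disparity as at zero, which is the oscillation prediction that the observed monotonic rise in thresholds rules out.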


Subject(s)
Depth Perception/physiology , Visual Acuity/physiology , Adult , Contrast Sensitivity , Discrimination, Psychological , Humans , Photic Stimulation/methods , Sensory Thresholds , Space Perception , Visual Perception
20.
J Vis ; 4(7): 524-38, 2004 Jun 29.
Article in English | MEDLINE | ID: mdl-15330699

ABSTRACT

Stereo matching of a textured surface is, in principle, ambiguous because of the quasi-repetitive nature of texture. Here, we used a perfectly repetitive texture, namely a sinusoidal grating, to examine human stereo matching for repetitive patterns. Observers matched the depth of a vertical grating segment, 6-deg wide and presented in a rectangular envelope at or near the disparity of the segment edges. The interocular phase of the carrier also influenced stereo matching, producing shifts in depth arrayed around the plane specified by the edges. The limiting disparity for the edge matches was 40-60 arcmin, independent of the spatial frequency of the carrier. One explanation for these results is that first-order disparity energy mechanisms, tuned to lower spatial frequencies, respond to the edge disparities, while showing little response to the interocular phase of the carrier. In principle, these first-order low frequency mechanisms could account for edge-based stereo matching at high contrasts. But, edge matching is also observed at carrier contrasts as low as 5%, where these low frequency mechanisms are unlikely to detect the grating stimulus. This result suggests that edge matching for gratings depends on coarse-scale second-order stereo mechanisms, similar to the second-order mechanisms that have been proposed for encoding two-dimensional texture. We conclude that stereo matching of gratings (or any other texture) depends on a combination of responses in both coarse-scale second-order and fine-scale first-order disparity mechanisms.


Subject(s)
Contrast Sensitivity/physiology , Depth Perception/physiology , Pattern Recognition, Visual/physiology , Vision, Binocular/physiology , Female , Humans , Male , Sensory Thresholds