Results 1 - 20 of 30

1.
Spat Vis ; 20(4): 337-95, 2007.
Article in English | MEDLINE | ID: mdl-17594799

ABSTRACT

How do visual form and motion processes cooperate to compute object motion when each process separately is insufficient? Consider, for example, a deer moving behind a bush. Here the partially occluded fragments of motion signals available to an observer must be coherently grouped into the motion of a single object. A 3D FORMOTION model comprises five important functional interactions involving the brain's form and motion systems that address such situations. Because the model's stages are analogous to areas of the primate visual system, we refer to the stages by corresponding anatomical names. In one of these functional interactions, 3D boundary representations, in which figures are separated from their backgrounds, are formed in cortical area V2. These depth-selective V2 boundaries select motion signals at the appropriate depths in MT via V2-to-MT signals. In another, motion signals in MT disambiguate locally incomplete or ambiguous boundary signals in V2 via MT-to-V1-to-V2 feedback. The third functional property concerns resolution of the aperture problem along straight moving contours by propagating the influence of unambiguous motion signals generated at contour terminators or corners. Here, sparse 'feature tracking signals' from, for example, line ends are amplified to overwhelm numerically superior ambiguous motion signals along line segment interiors. In the fourth, a spatially anisotropic motion grouping process takes place across perceptual space via MT-MST feedback to integrate veridical feature-tracking and ambiguous motion signals to determine a global object motion percept. The fifth property uses the MT-MST feedback loop to convey an attentional priming signal from higher brain areas back to V1 and V2. The model's use of mechanisms such as divisive normalization, endstopping, cross-orientation inhibition, and long-range cooperation is described. Simulated data include: the degree of motion coherence of rotating shapes observed through apertures, the coherent vs. element motion percepts separated in depth during the chopsticks illusion, and the rigid vs. nonrigid appearance of rotating ellipses.
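
The following sketch is only a toy illustration of the aperture-problem mechanism described in this abstract, not the published 3D FORMOTION code; the array sizes, gains, and iteration count are invented. It shows how a few strong feature-tracking signals at a contour's ends can come to dominate the numerically superior ambiguous signals along its interior when divisive normalization is combined with neighbor-to-neighbor grouping feedback.

```python
import numpy as np

# Toy sketch (not the published model): N positions along a moving contour.
# Interior positions carry ambiguous direction evidence spread over D
# direction channels; the two contour ends carry sharp "feature-tracking"
# evidence for the true direction (channel TRUE_DIR).
N, D, TRUE_DIR = 21, 8, 2
votes = np.full((N, D), 0.5)                    # broad, ambiguous evidence everywhere
votes[0, TRUE_DIR] = votes[-1, TRUE_DIR] = 5.0  # strong terminator signals

for _ in range(50):
    # divisive normalization: directions compete within each position
    norm = votes / (1.0 + votes.sum(axis=1, keepdims=True))
    # grouping: blend each position with its neighbors so confident
    # terminator signals propagate along the contour (the wraparound of
    # np.roll treats the two ends as neighbors, which is harmless here)
    spread = 0.5 * norm + 0.25 * (np.roll(norm, 1, axis=0) + np.roll(norm, -1, axis=0))
    votes = votes + 2.0 * spread                # feedback boosts consistent evidence

print(votes.argmax(axis=1))                     # every position converges on TRUE_DIR
```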


Subject(s)
Form Perception/physiology, Motion Perception/physiology, Visual Cortex/physiology, Humans, Models, Theoretical
2.
Vision Res ; 41(19): 2521-53, 2001 Sep.
Article in English | MEDLINE | ID: mdl-11483182

ABSTRACT

A neural model is developed of how motion integration and segmentation processes, both within and across apertures, compute global motion percepts. Figure-ground properties, such as occlusion, influence which motion signals determine the percept. For visible apertures, a line's extrinsic terminators do not specify true line motion. For invisible apertures, a line's intrinsic terminators create veridical feature-tracking signals. Sparse feature-tracking signals can be amplified before they propagate across position and are integrated with ambiguous motion signals within line interiors. This integration process determines the global percept. It is the result of several processing stages: directional transient cells respond to image transients and input to a directional short-range filter that selectively boosts feature-tracking signals with the help of competitive signals. Then, a long-range filter inputs to directional cells that pool signals over multiple orientations, opposite contrast polarities, and depths. These stages are proposed to occur no later than cortical area MT. The directional cells activate a directional grouping network, proposed to occur within cortical area MST, within which directions compete to determine a local winner. Enhanced feature-tracking signals typically win over ambiguous motion signals. Model MST cells that encode the winning direction feed back to model MT cells, where they boost directionally consistent cell activities and suppress inconsistent activities over the spatial region to which they project. This feedback accomplishes directional and depthful motion capture within that region. Model simulations include the barberpole illusion, motion capture, the spotted barberpole, the triple barberpole, the occluded translating square illusion, motion transparency and the chopsticks illusion. Qualitative explanations of illusory contours from translating terminators and plaid adaptation are also given.
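
A compact caricature of the MT-to-MST-to-MT grouping loop this abstract describes follows; the gains, array sizes, and iteration count are invented for illustration, and this is not the published model.

```python
import numpy as np

# MT cells carry noisy local direction evidence; MST pools them over space,
# directions compete, and the winner feeds back to boost consistent MT
# activity and suppress the rest ("motion capture").
rng = np.random.default_rng(0)
P, D = 16, 8                      # spatial positions, direction channels
mt = rng.random((P, D))           # noisy local evidence
mt[:, 3] += 0.3                   # weak global bias toward direction 3

for _ in range(5):
    mst = mt.sum(axis=0)          # spatial pooling per direction
    winner = mst.argmax()         # winner-take-all directional competition
    gain = np.full(D, 0.8)        # feedback suppresses inconsistent directions
    gain[winner] = 1.5            # ...and boosts the winning one
    mt = mt * gain

print("captured direction:", winner)
print("per-position winners:", mt.argmax(axis=1))   # all pulled to the global winner
```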


Subject(s)
Models, Neurological, Motion Perception/physiology, Computer Simulation, Humans, Optical Illusions, Visual Cortex/physiology
3.
J Cogn Neurosci ; 13(1): 102-20, 2001 Jan 01.
Article in English | MEDLINE | ID: mdl-11224912

ABSTRACT

Smooth pursuit eye movements (SPEMs) are eye rotations that are used to maintain fixation on a moving target. Such rotations complicate the interpretation of the retinal image, because they nullify the retinal motion of the target, while generating retinal motion of stationary objects in the background. This poses a problem for the oculomotor system, which must track the stabilized target image while suppressing the optokinetic reflex, which would move the eye in the direction of the retinal background motion (opposite to the direction in which the target is moving). Similarly, the perceptual system must estimate the actual direction and speed of moving objects in spite of the confounding effects of the eye rotation. This paper proposes a neural model to account for the ability of primates to accomplish these tasks. The model simulates the neurophysiological properties of cell types found in the superior temporal sulcus of the macaque monkey, specifically the medial superior temporal (MST) region. These cells process signals related to target motion and background motion, and they receive an efference copy of eye velocity during pursuit movements. The model focuses on the interactions between cells in the ventral and dorsal subdivisions of MST, which are hypothesized to process target velocity and background motion, respectively. The model explains how these signals can be combined to account for behavioral data about pursuit maintenance and perceptual data from human studies, including the Aubert-Fleischl phenomenon and the Filehne illusion, thereby clarifying the functional significance of neurophysiological data about these MST cell properties. It is suggested that the connectivity used in the model may represent a general strategy used by the brain in analyzing the visual world.
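
The velocity bookkeeping behind the Aubert-Fleischl and Filehne effects can be caricatured with one line of arithmetic; the sketch below is a generic linear-compensation account with invented numbers, not the MST circuit itself.

```python
# Perceived velocity modeled as retinal velocity plus an efference-copy
# estimate of eye velocity; a compensation gain below 1 reproduces the
# direction of both illusions qualitatively (all values are illustrative).
def perceived_velocity(world_v, eye_v, efference_gain=0.8):
    retinal_v = world_v - eye_v                  # image motion on the retina
    return retinal_v + efference_gain * eye_v    # incomplete compensation

# Aubert-Fleischl: a 10 deg/s target looks slower when pursued than when fixated.
print(perceived_velocity(10.0, 10.0))   # 8.0 deg/s while pursuing
print(perceived_velocity(10.0, 0.0))    # 10.0 deg/s while fixating

# Filehne illusion: a stationary background seems to drift opposite to pursuit.
print(perceived_velocity(0.0, 10.0))    # -2.0 deg/s
```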


Subject(s)
Cerebral Cortex/physiology, Models, Neurological, Motion Perception/physiology, Pursuit, Smooth/physiology, Animals, Brain/physiology, Brain Mapping, Electric Stimulation, Humans, Neurons/physiology, Photic Stimulation, Vision Disparity/physiology
4.
Neural Netw ; 13(6): 571-88, 2000 Jul.
Article in English | MEDLINE | ID: mdl-10987511

ABSTRACT

The visual cortex has a laminar organization whose circuits form functional columns in cortical maps. How this laminar architecture supports visual percepts is not well understood. A neural model proposes how the laminar circuits of V1 and V2 generate perceptual groupings that maintain sensitivity to the contrasts and spatial organization of scenic cues. The model can decisively choose which groupings cohere and survive, even while balanced excitatory and inhibitory interactions preserve contrast-sensitive measures of local boundary likelihood or strength. In the model, excitatory inputs from lateral geniculate nucleus (LGN) activate layers 4 and 6 of V1. Layer 6 activates an on-center off-surround network of inputs to layer 4. Together these layer 4 inputs preserve analog sensitivity to LGN input contrasts. Layer 4 cells excite pyramidal cells in layer 2/3, which activate monosynaptic long-range horizontal excitatory connections between layer 2/3 pyramidal cells, and short-range disynaptic inhibitory connections mediated by smooth stellate cells. These interactions support inward perceptual grouping between two or more boundary inducers, but not outward grouping from a single inducer. These boundary signals feed back to layer 4 via the layer 6-to-4 on-center off-surround network. This folded feedback joins cells in different layers into functional columns while selecting winning groupings. Layer 6 in V1 also sends top-down signals to LGN using an on-center off-surround network, which suppresses LGN cells that do not receive feedback, while selecting, enhancing, and synchronizing activity of those that do. The model is used to simulate psychophysical and neurophysiological data about perceptual grouping, including various Gestalt grouping laws.
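
The "analog sensitivity to LGN input contrasts" attributed to the layer 6-to-4 on-center off-surround network is the standard property of a shunting network: total activity is normalized while input ratios are preserved. A minimal steady-state sketch follows (textbook form with illustrative constants, not the full laminar model).

```python
import numpy as np

# Steady state of a feedforward shunting on-center off-surround network:
# each cell is excited by its own input and inhibited by the pooled inputs,
# so x_i = B * I_i / (A + sum(I)).
def shunting_steady_state(inputs, A=1.0, B=1.0):
    inputs = np.asarray(inputs, dtype=float)
    return B * inputs / (A + inputs.sum())

print(shunting_steady_state([1, 2, 4]))      # ratios 1:2:4 preserved
print(shunting_steady_state([10, 20, 40]))   # same ratios, total activity stays bounded
```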


Subject(s)
Models, Neurological, Nerve Net/physiology, Pattern Recognition, Visual/physiology, Visual Cortex/physiology, Animals, Biofeedback, Psychology/physiology, Geniculate Bodies/cytology, Geniculate Bodies/physiology, Humans, Nerve Net/cytology, Perceptual Closure/physiology, Pyramidal Cells/cytology, Pyramidal Cells/physiology, Retina/cytology, Retina/physiology, Visual Cortex/cytology, Visual Pathways/cytology, Visual Pathways/physiology
5.
Cereb Cortex ; 9(8): 878-95, 1999 Dec.
Article in English | MEDLINE | ID: mdl-10601006

ABSTRACT

Cells in the dorsal medial superior temporal cortex (MSTd) process optic flow generated by self-motion during visually guided navigation. A neural model shows how interactions between well-known neural mechanisms (log polar cortical magnification, Gaussian motion-sensitive receptive fields, spatial pooling of motion-sensitive signals and subtractive extraretinal eye movement signals) lead to emergent properties that quantitatively simulate neurophysiological data about MSTd cell properties and psychophysical data about human navigation. Model cells match MSTd neuron responses to optic flow stimuli placed in different parts of the visual field, including position invariance, tuning curves, preferred spiral directions, direction reversals, average response curves and preferred locations for stimulus motion centers. The model shows how the preferred motion direction of the most active MSTd cells can explain human judgments of self-motion direction (heading), without using complex heading templates. The model explains when extraretinal eye movement signals are needed for accurate heading perception, and when retinal input is sufficient, and how heading judgments depend on scene layouts and rotation rates.
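
A toy version of the heading computation described above, with invented geometry: translational flow expands radially from the focus of expansion (FOE), pursuit adds an approximately uniform component over a small field, and subtracting an extraretinal estimate of that component before locating the FOE recovers the heading. This illustrates the idea only; it is not the MSTd model itself.

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(200, 2))          # image positions (deg)
true_foe = np.array([0.3, -0.1])                 # heading direction in the image
k = 1.0                                          # expansion rate (1/s)
pursuit_flow = np.array([0.5, 0.0])              # roughly uniform flow added by pursuit

flow = k * (pts - true_foe) + pursuit_flow       # retinal flow field

def estimate_foe(pts, flow):
    # least-squares fit of v = k * (p - foe), one axis at a time
    kx, bx = np.polyfit(pts[:, 0], flow[:, 0], 1)
    ky, by = np.polyfit(pts[:, 1], flow[:, 1], 1)
    return np.array([-bx / kx, -by / ky])

print(estimate_foe(pts, flow))                    # biased heading estimate
print(estimate_foe(pts, flow - pursuit_flow))     # ~[0.3, -0.1] after subtracting the eye-movement signal
```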


Subject(s)
Eye Movements/physiology, Models, Neurological, Retina/physiology, Visual Cortex/physiology, Visual Pathways/physiology, Algorithms, Humans, Retina/anatomy & histology, Temporal Lobe/anatomy & histology, Temporal Lobe/physiology, Visual Cortex/anatomy & histology, Visual Pathways/anatomy & histology
6.
Spat Vis ; 12(4): 421-59, 1999.
Article in English | MEDLINE | ID: mdl-10493095

ABSTRACT

An element-arrangement pattern is composed of two types of elements arranged differently in different regions of a pattern. Rapid texture segregation depends on spontaneously discriminating the difference in the arrangement of the elements. Five experiments investigated the perceived segregation of patterns composed of two types of squares arranged in vertical stripes in the top and bottom regions and in a checkerboard arrangement in the middle region. The squares were either equal in luminance and different in hue or equal in hue and different in luminance. The rated similarities of the two hues in a pattern failed to predict perceived segregation. For a given background luminance, the perceived segregation was predicted by the square-root of the sum of the squares of the differences in the outputs of the L - M + S and L + M - S opponent channels, where L, M, and S were the cone contrasts of the long-, medium-, and short-wavelength receptors. The perceived similarity of the two hues in a pattern was not affected by the background luminance but was a function of cone excitations instead. For patterns differing in hue and equal in luminance, perceived segregation was an inverse function of the background luminance. A white background decreased the perceived segregation, but a black background did not. The effect of background luminance was not on the discrimination of the individual hues. The two hues making up a texture pattern were clearly distinguishable on a white background. A white background interfered with the discrimination of the vertical and diagonal columns of squares that distinguished the texture regions. For patterns differing in luminance and equal in hue, black and white backgrounds decreased the perceived segregation. The results indicate that adapting to an achromatic luminance distant from the luminance of the squares increased the Weber threshold for discriminating luminance differences, but did not increase the Weber threshold for discriminating hue differences. The experiments also revealed that luminance was the primary factor affecting perceived segregation and that perceived brightness was secondary. The results are consistent with the hypothesis that perceived segregation in element-arrangement patterns is primarily a function of the differences in the outputs of relatively early filtering mechanisms that encode pattern differences prior to the specification of the element shapes and their properties.
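
The quoted predictor can be written out directly; the cone-contrast values below are made up for illustration, and the channel weights are taken literally from the abstract's L - M + S and L + M - S definitions rather than from the paper's fitted model.

```python
import math

def opponent_outputs(L, M, S):
    # opponent-channel outputs from cone contrasts, as stated in the abstract
    return (L - M + S, L + M - S)

def predicted_segregation(cones_a, cones_b):
    o1a, o2a = opponent_outputs(*cones_a)
    o1b, o2b = opponent_outputs(*cones_b)
    return math.hypot(o1a - o1b, o2a - o2b)   # square root of the sum of squared differences

# (L, M, S) cone contrasts of the two square types (illustrative values)
print(predicted_segregation((0.20, 0.05, 0.10), (0.05, 0.20, 0.02)))
```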


Subject(s)
Color Perception/physiology, Pattern Recognition, Visual/physiology, Color Perception Tests, Computer Terminals, Humans, Photic Stimulation, Predictive Value of Tests, Reference Values, Sensory Thresholds, Visual Cortex/physiology
7.
J Opt Soc Am A Opt Image Sci Vis ; 16(5): 953-78, 1999 May.
Article in English | MEDLINE | ID: mdl-10234852

ABSTRACT

A neural model of motion perception simulates psychophysical data concerning first-order and second-order motion stimuli, including the reversal of perceived motion direction with distance from the stimulus (gamma display), and data about directional judgments as a function of relative spatial phase or spatial and temporal frequency. Many other second-order motion percepts that have been ascribed to a second non-Fourier processing stream can also be explained in the model by interactions between ON and OFF cells within a single, neurobiologically interpreted magnocellular processing stream. Yet other percepts may be traced to interactions between form and motion processing streams, rather than to processing within multiple motion processing streams. The model hereby explains why monkeys with lesions of the parvocellular layers, but not of the magnocellular layers, of the lateral geniculate nucleus (LGN) are capable of detecting the correct direction of second-order motion, why most cells in area MT are sensitive to both first-order and second-order motion, and why, after 2-amino-4-phosphonobutyrate injections that selectively block retinal ON bipolar cells, cortical cells are sensitive only to the motion of a moving bright bar's trailing edge. Magnocellular LGN cells show relatively transient responses, whereas parvocellular LGN cells show relatively sustained responses. Correspondingly, the model bases its directional estimates on the outputs of model ON and OFF transient cells that are organized in opponent circuits wherein antagonistic rebounds occur in response to stimulus offset. Center-surround interactions convert these ON and OFF outputs into responses of lightening and darkening cells that are sensitive both to direct inputs and to rebound responses in their receptive field centers and surrounds. The total pattern of activity increments and decrements is used by subsequent processing stages (spatially short-range filters, competitive interactions, spatially long-range filters, and directional grouping cells) to determine the perceived direction of motion.
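
The opponent ON/OFF rebound invoked here is commonly modeled with habituating transmitter gates; the sketch below is a generic gated-dipole-style discretization with invented constants, not the paper's circuit.

```python
import numpy as np

T = 300
stim = np.zeros(T); stim[50:150] = 1.0   # a light switched on, then off
J = 0.2                                  # tonic arousal to both channels
A, B, C = 0.01, 1.0, 0.05                # transmitter recovery and depletion rates

z_on = z_off = B                         # habituative transmitter gates
on_out, off_out = [], []
for t in range(T):
    s_on, s_off = J + stim[t], J                    # phasic input drives the ON channel
    z_on  += A * (B - z_on)  - C * s_on  * z_on     # gates deplete with use
    z_off += A * (B - z_off) - C * s_off * z_off    # and slowly recover
    on_out.append(max(s_on * z_on - s_off * z_off, 0.0))   # opponent subtraction
    off_out.append(max(s_off * z_off - s_on * z_on, 0.0))

# the OFF channel rebounds transiently after stimulus offset
print("peak OFF rebound after offset:", max(off_out[150:]))
```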


Subject(s)
Geniculate Bodies/physiology, Models, Neurological, Motion Perception/physiology, Animals, Fourier Analysis, Geniculate Bodies/anatomy & histology, Geniculate Bodies/injuries, Haplorhini, Humans, Mathematics, Photic Stimulation, Psychophysics
8.
Vision Res ; 38(19): 2963-71, 1998 Oct.
Article in English | MEDLINE | ID: mdl-9797991

ABSTRACT

The motion after-effect (MAE) can be elicited by adapting observers to global motion of randomly distributed dots before they view a display containing dots moving in random directions, but no global motion. Experiments by others have shown that if the adaptation stimulus contains two directions of motion, the MAE points opposite to the vector sum of the adapting directions. The present study investigated whether such vector addition in the MAE could also occur if the two directions of motion were presented to separate eyes. Observers were adapted to different, but not opposite, directions of motion in the two eyes. Either the left eye, the right eye, or both eyes were tested. Observers reported the direction of perceived motion during the test. When they saw the test stimulus with both eyes, observers reported seeing motion in the direction opposite that of the vector sum of the adaptation directions. In the monocular test conditions observers reported MAE directions opposite to the corresponding monocular adaptation directions. In a second experiment we verified that subjects had interocular transfer of the MAE. Together these results are consistent with a model in which (1) addition of adaptation directions occurs at a binocular site; (2) directional adaptation occurs at a monocular site; and (3) monocular adaptation is able to change the threshold for obtaining an MAE at the binocular site, thus acting like binocular adaptation in interocular transfer of the MAE.
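
The vector-summation account can be checked with two lines of trigonometry; the adapting directions below are examples, not the study's stimuli.

```python
import numpy as np

def unit(deg):
    r = np.deg2rad(deg)
    return np.array([np.cos(r), np.sin(r)])

left_adapt, right_adapt = 30.0, 90.0                 # adapting directions (deg), one per eye
vector_sum = unit(left_adapt) + unit(right_adapt)    # summation at a binocular site
mae_dir = np.rad2deg(np.arctan2(-vector_sum[1], -vector_sum[0])) % 360

print(mae_dir)   # 240 deg: opposite the 60-deg vector sum of the two adapters
```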


Subject(s)
Afterimage, Motion Perception, Adaptation, Ocular, Adult, Humans, Male, Vision Tests, Vision, Binocular, Vision, Monocular
9.
Vision Res ; 38(18): 2769-86, 1998 Sep.
Article in English | MEDLINE | ID: mdl-9775325

ABSTRACT

A neural network model of visual motion perception and speed discrimination is presented. The model shows how a distributed population code of speed tuning that realizes a size-speed correlation can be derived from the simplest mechanisms whereby activations of multiple spatially short-range filters of different size are transformed into speed-tuned cell responses. These mechanisms use transient cell responses to moving stimuli, output thresholds that covary with filter size, and competition. These mechanisms are proposed to occur in the V1-to-MT cortical processing stream. The model reproduces empirically derived speed discrimination curves and simulates data showing how visual speed perception and discrimination can be affected by stimulus contrast, duration, dot density and spatial frequency. Model motion mechanisms are analogous to mechanisms that have been used to model 3-D form and figure-ground perception. The model forms the front end of a larger motion processing system that has been used to simulate how global motion capture occurs, and how spatial attention is drawn to moving forms. It provides a computational foundation for an emerging neural theory of 3-D form and motion perception.
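
One way to see how output thresholds that covary with filter size can create a size-speed correlation from untuned responses is the caricature below; the response rule and all constants are invented for illustration and are not the paper's front end.

```python
import numpy as np

sizes  = np.array([1.0, 2.0, 4.0, 8.0])       # spans of the short-range filters
speeds = np.array([0.5, 1.0, 2.0, 4.0, 8.0])  # stimulus speeds
tau    = 1.0                                  # temporal integration window

# raw activation: distance travelled within the window, capped by the filter span
raw = np.minimum(speeds[None, :] * tau, sizes[:, None])
out = np.maximum(raw - 0.4 * sizes[:, None], 0.0)    # threshold covaries with size

for v, col in zip(speeds, out.T):
    print(f"speed {v:4.1f} -> most active filter size {sizes[col.argmax()]}")
```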


Subject(s)
Motion Perception/physiology, Neural Networks, Computer, Vision, Ocular/physiology, Form Perception, Humans
10.
Vision Res ; 38(24): 3883-98, 1998 Dec.
Article in English | MEDLINE | ID: mdl-10211381

ABSTRACT

The influence of monocular occlusion cues on the perceived direction of motion of barber pole patterns is examined. Unlike previous studies that have emphasized the importance of binocular disparity, we find that monocular cues strongly influence the perceived motion direction and can even override binocular depth cues. The difference in motion bias for occluders with and without disparity cues is relatively small. Additionally, although 'T-junctions' aligned with occluders are particularly important, they are not strictly necessary for creating a change in motion perception. Finally, the amount of motion bias differs for several stimulus configurations, suggesting that the extrinsic/intrinsic classification of terminators is not all-or-none.


Subject(s)
Motion Perception/physiology, Optical Illusions/physiology, Pattern Recognition, Visual/physiology, Vision, Monocular/physiology, Contrast Sensitivity, Cues, Depth Perception/physiology, Humans, Mathematics, Vision Disparity/physiology
11.
Vision Res ; 38(20): 3083-93, 1998 Oct.
Article in English | MEDLINE | ID: mdl-9893817

ABSTRACT

When an expansion flow field of moving dots is overlapped by planar motion, observers perceive an illusory displacement of the focus of expansion (FOE) in the direction of the planar motion (Duffy and Wurtz, Vision Research, 1993;33:1481-1490). The illusion may be a consequence of induced motion, wherein an induced component of motion relative to planar dots is added to the motions of expansion dots to produce the FOE shift. While such a process could be mediated by local 'center-surround' receptive fields, the effect could also be due to a higher level process which detects and subtracts large-field planar motion from the flow field. We probed the mechanisms underlying this illusion by adding varying amounts of rotation to the expansion stimulus, and by varying the speed and size of the planar motion field. The introduction of rotation into the stimulus produces an illusory shift in a direction perpendicular to the planar motion. Larger FOE shifts were perceived for greater speeds and sizes of planar motion fields, although the speed effect saturated at high speeds. While the illusion appears to share a common mechanism with center-surround induced motion, our results also point to involvement of a more global mechanism that subtracts coherent planar motion from the flow field. Such a process might help to maintain visual stability during eye movements.
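
The induced-motion account suggested here has a simple back-of-envelope form (numbers invented; this is not the paper's analysis).

```python
# Expansion flow: v = k * (p - foe). If planar motion t induces an opposite
# component -t on the expansion dots, their effective flow becomes
# k * (p - foe) - t = k * (p - (foe + t/k)), i.e. the FOE appears shifted by
# t/k in the direction of the planar motion; the shift grows with planar
# speed and shrinks with expansion rate.
k = 2.0   # expansion rate (1/s)
t = 1.0   # planar-motion speed (deg/s)
print("predicted illusory FOE shift:", t / k, "deg toward the planar motion")
```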


Subject(s)
Motion Perception/physiology, Optical Illusions/physiology, Cues, Humans, Male, Mathematics, Pattern Recognition, Visual/physiology, Psychophysics, Rotation, Time Factors, Visual Fields
12.
Trends Neurosci ; 20(3): 106-11, 1997 Mar.
Article in English | MEDLINE | ID: mdl-9061863

ABSTRACT

How the brain generates visual percepts is a central problem in neuroscience. We propose a detailed neural model of how lateral geniculate nuclei and the interblob cortical stream through V1 and V2 generate context-sensitive perceptual groupings from visual inputs. The model suggests a functional role for cortical layers, columns, maps and networks, and proposes homologous circuits for V1 and V2 with larger-scale processing in V2. An integrated treatment of interlaminar, horizontal, orientational and endstopping cortical interactions and a role for corticogeniculate feedback in grouping are proposed. Modeled circuits simulate parametric psychophysical data about boundary grouping and illusory contour formation.


Subject(s)
Feedback/physiology, Visual Cortex/physiology, Visual Perception/physiology, Animals, Models, Neurological
13.
Perception ; 26(11): 1353-66, 1997.
Article in English | MEDLINE | ID: mdl-9616466
14.
Percept Psychophys ; 58(8): 1293-305, 1996 Nov.
Article in English | MEDLINE | ID: mdl-8961838

ABSTRACT

Lightness constancy in complex scenes requires that the visual system take account of information concerning variations of illumination falling on visible surfaces. Three experiments on the perception of lightness for three-dimensional (3-D) curved objects show that human observers are better able to perform this accounting for certain scenes than for others. The experiments investigate the effect of object curvature, illumination direction, and object shape on lightness perception. Lightness constancy was quite good when a rich local gray-level context was provided. Deviations occurred when both illumination and reflectance changed along the surface of the objects. Does the perception of a 3-D surface and illuminant layout help calibrate lightness judgments? Our results showed a small but consistent improvement in lightness matches for ellipsoid shapes, relative to flat rectangle shapes, under illumination conditions that produce similar image gradients. Illumination change over 3-D forms is therefore taken into account in lightness perception.


Subject(s)
Contrast Sensitivity, Depth Perception, Light, Orientation, Pattern Recognition, Visual, Adult, Attention, Discrimination Learning, Female, Humans, Lighting, Male, Optical Illusions, Psychophysics
15.
Vision Res ; 36(12): 1745-60, 1996 Jun.
Article in English | MEDLINE | ID: mdl-8759444

ABSTRACT

An element-arrangement pattern is composed of two types of elements that differ in the ways in which they are arranged in different regions of the pattern. We report experiments on the perceived segregation of chromatic element-arrangement patterns composed of equal-size red and blue squares as the luminances of the surround, the interspaces and the background (surround plus interspaces) are varied. Perceived segregation was markedly reduced by increasing the luminance of the interspaces. Perceived segregation was approximately constant for constant ratios of interspace luminance to square luminance and increased with the contrast ratio of the squares. Unlike achromatic element-arrangement patterns composed of squares differing in lightness [Beck et al (1991). Vision Research, 32, 719-743], perceived segregation did not decrease when the luminance of the interspaces was below that of the squares. Similar results were obtained for red and yellow, red and green, green and yellow, green and blue, and blue and yellow squares. High-intensity interspaces did not interfere with perceived segregation based on edge alignment. Stereoscopic cues that caused the squares composing the element-arrangement pattern to be seen in front of the interspaces did not greatly improve perceived segregation. One explanation of the results is in terms of inhibitory interactions among achromatic and chromatic cortical cells tuned to spatial frequency and orientation. Alternatively, the results may be explained in terms of how the luminance of the interspaces affects the grouping of the squares for encoding surface representations. Neither explanation accounts fully for the data, and both mechanisms may be involved.


Subject(s)
Color Perception/physiology, Pattern Recognition, Visual/physiology, Fixation, Ocular, Humans, Lighting, Male, Rotation, Vision Disparity, Visual Cortex/physiology
16.
Vis Neurosci ; 12(6): 1027-52, 1995.
Article in English | MEDLINE | ID: mdl-8962825

ABSTRACT

A neural network model is developed to explain how visual thalamocortical interactions give rise to boundary percepts such as illusory contours and surface percepts such as filled-in brightnesses. Top-down feedback interactions are needed in addition to bottom-up feed-forward interactions to simulate these data. One feedback loop is modeled between lateral geniculate nucleus (LGN) and cortical area V1, and another within cortical areas V1 and V2. The first feedback loop realizes a matching process which enhances LGN cell activities that are consistent with those of active cortical cells, and suppresses LGN activities that are not. This corticogeniculate feedback, being endstopped and oriented, also enhances LGN ON cell activations at the ends of thin dark lines, thereby leading to enhanced cortical brightness percepts when the lines group into closed illusory contours. The second feedback loop generates boundary representations, including illusory contours, that coherently bind distributed cortical features together. Brightness percepts form within the surface representations through a diffusive filling-in process that is contained by resistive gating signals from the boundary representations. The model is used to simulate illusory contours and surface brightness induced by Ehrenstein disks, Kanizsa squares, Glass patterns, and café wall patterns in single contrast, reverse contrast, and mixed contrast configurations. These examples illustrate how boundary and surface mechanisms can generate percepts that are highly context-sensitive, including how illusory contours can be amodally recognized without being seen, how model simple cells in V1 respond preferentially to luminance discontinuities using inputs from both LGN ON and OFF cells, how model bipole cells in V2 with two colinear receptive fields can help to complete curved illusory contours, how short-range simple cell groupings and long-range bipole cell groupings can sometimes generate different outcomes, and how model double-opponent, filling-in and boundary segmentation mechanisms in V4 interact to generate surface brightness percepts in which filling-in of enhanced brightness and darkness can occur before the net brightness distribution is computed by double-opponent interactions.
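
The boundary-gated, diffusive filling-in stage can be illustrated in one dimension; the sketch below is a caricature with invented sizes, gains, and iteration count, not the published surface system.

```python
import numpy as np

# Feature (contrast) signals spread between neighboring cells, except across
# gaps where a boundary signal blocks the diffusion, so each boundary-enclosed
# compartment fills in toward its clamped feature value.
N = 40
feature  = np.zeros(N); feature[5] = 1.0; feature[30] = 0.3   # sparse feature signals
boundary = np.zeros(N - 1); boundary[19] = 1.0                # boundary between cells 19 and 20
perm = 1.0 - boundary                                         # diffusion permeability per gap

v = feature.copy()
for _ in range(2000):
    flow = perm * (v[1:] - v[:-1])     # flux through each gap, gated by boundaries
    v[:-1] += 0.2 * flow
    v[1:]  -= 0.2 * flow
    v[5], v[30] = 1.0, 0.3             # feature signals act as clamped sources

print(v[:20].round(2))   # left compartment fills in to ~1.0
print(v[20:].round(2))   # right compartment fills in to ~0.3
```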


Subject(s)
Contrast Sensitivity/physiology, Geniculate Bodies/physiology, Vision, Binocular/physiology, Visual Cortex/physiology, Computer Simulation, Feedback, Forecasting, Humans, Models, Neurological, Neural Pathways
17.
Vision Res ; 35(15): 2201-23, 1995 Aug.
Article in English | MEDLINE | ID: mdl-7667932

ABSTRACT

A neural network model of brightness perception is developed to account for a wide variety of data, including the classical phenomenon of Mach bands, low- and high-contrast missing fundamental, luminance staircases, and non-linear contrast effects associated with sinusoidal waveforms. The model builds upon previous work on filling-in models that produce brightness profiles through the interaction of boundary and feature signals. Boundary computations that are sensitive to luminance steps and to continuous luminance gradients are presented. A new interpretation of feature signals through the explicit representation of contrast-driven and luminance-driven information is provided and directly addresses the issue of brightness "anchoring". Computer simulations illustrate the model's competencies.


Subject(s)
Contrast Sensitivity/physiology, Light, Neural Networks, Computer, Visual Perception/physiology, Humans, Mathematics, Photometry
18.
Psychol Rev ; 101(3): 470-89, 1994 Jul.
Article in English | MEDLINE | ID: mdl-7938340

ABSTRACT

Visual search data are given a unified quantitative explanation by a model of how spatial maps in the parietal cortex and object recognition categories in the inferotemporal cortex deploy attentional resources as they reciprocally interact with visual representations in the prestriate cortex. The model visual representations are organized into multiple boundary and surface representations. Visual search in the model is initiated by organizing multiple items that lie within a given boundary or surface representation into a candidate search grouping. These items are compared with object recognition categories to test for matches or mismatches. Mismatches can trigger deeper searches and recursive selection of new groupings until a target object is identified. The model provides an alternative to Feature Integration and Guided Search models.


Subject(s)
Space Perception, Visual Perception, Algorithms, Color Perception, Form Perception, Humans, Reaction Time
19.
Vision Res ; 34(8): 1089-104, 1994 Apr.
Article in English | MEDLINE | ID: mdl-8160417

ABSTRACT

An analysis of the reset of visual cortical circuits responsible for the binding or segmentation of visual features into coherent visual forms yields a model that explains properties of visual persistence. The reset mechanisms prevent massive smearing of visual percepts in response to rapidly moving images. The model simulates relationships among psychophysical data showing inverse relations of persistence to flash luminance and duration, greater persistence of illusory contours than real contours, a U-shaped temporal function for persistence of illusory contours, a reduction of persistence due to adaptation with a stimulus of like orientation, and an increase of persistence with spatial separation of a masking stimulus. The model suggests that a combination of habituative, opponent, and endstopping mechanisms prevents smearing and limits persistence. Earlier work with the model has analyzed data about boundary formation, texture segregation, shape-from-shading, and figure-ground separation. Thus, several types of data support each model mechanism, and new predictions are made.


Subject(s)
Form Perception/physiology, Visual Cortex/physiology, Adaptation, Ocular, Habituation, Psychophysiologic, Humans, Light, Mathematics, Models, Neurological, Optical Illusions/physiology, Perceptual Masking/physiology, Rotation, Time Factors
20.
Vision Res ; 33(16): 2253-70, 1993 Nov.
Article in English | MEDLINE | ID: mdl-8273291

ABSTRACT

Illusory contours can be induced along directions approximately colinear to edges or approximately perpendicular to the ends of lines. Using a rating scale procedure we explored the relation between the two types of inducers by systematically varying the thickness of inducing elements to result in varying amounts of "edge-like" or "line-like" induction. Inducers for our illusory figures consisted of concentric rings with arcs missing. Observers judged the clarity and brightness of illusory figures as the number of arcs, their thicknesses, and spacings were parametrically varied. Degree of clarity and amount of induced brightness were both found to be inverted-U functions of the number of arcs. These results mandate that any valid model of illusory contour formation must account for interference effects between parallel lines or between those neural units responsible for completion of boundary signals in directions perpendicular to the ends of thin lines. Line width was found to have an effect on both clarity and brightness, a finding inconsistent with those models which employ only completion perpendicular to inducer orientation.


Subject(s)
Form Perception/physiology, Optical Illusions/physiology, Adult, Humans, Light, Male, Pattern Recognition, Visual/physiology, Psychophysics