1.
Annu Rev Vis Sci ; 2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38848608

ABSTRACT

Our visual systems are remarkably adept at deriving the shape and material properties of surfaces even when only one image of a surface is available. This ability implies that a single image of a surface contains potent information about both surface shape and material. However, from a computational perspective, the problem of deriving surface shape and material is formally ill posed. Any given image could be due to many combinations of shape, material, and illumination. Early computational models required prior knowledge about two of the three scene variables to derive the third. However, such models are biologically implausible because our visual systems are tasked with extracting all relevant scene variables from images simultaneously. This review describes recent progress in understanding how the visual system solves this problem by identifying complex forms of image structure that support its ability to simultaneously derive the shape and material properties of surfaces from images.

2.
Curr Biol ; 33(12): 2528-2534.e5, 2023 06 19.
Article in English | MEDLINE | ID: mdl-37196655

ABSTRACT

The human visual system extracts both the three-dimensional (3D) shape and the material properties of surfaces from single images [1-14]. Understanding this remarkable ability is difficult because the problem of extracting both shape and material is formally ill posed: information about one appears to be needed to recover the other [14-17]. Recent work has suggested that a particular class of image contours formed by a surface curving smoothly out of sight (self-occluding contours) contains information that co-specifies both surface shape and material for opaque surfaces [18]. However, many natural materials are light permeable (translucent); it is unknown whether there is information along self-occluding contours that can be used to distinguish opaque and translucent materials. Here, we present physical simulations, which demonstrate that variations in intensity generated by opaque and translucent materials are linked to different shape attributes of self-occluding contours. Psychophysical experiments demonstrate that the human visual system exploits the different forms of intensity-shape covariation along self-occluding contours to distinguish opaque and translucent materials. These results provide insight into how the visual system manages to solve the putatively ill-posed problem of extracting both the shape and material properties of 3D surfaces from images.


Subject(s)
Form Perception , Humans , Visual Perception , Depth Perception
3.
Trends Cogn Sci ; 27(1): 98-110, 2023 01.
Article in English | MEDLINE | ID: mdl-36372694

ABSTRACT

Our visual experience of the world relies on the interaction of light with the different substances, surfaces, and objects in our environment. These optical interactions generate images that contain a conflated mixture of different scene variables, which our visual system must somehow disentangle to extract information about the shape and material properties of the world. Such problems have historically been considered to be ill-posed, but recent work suggests that there are complex patterns of covariation in light that co-specify the 3D shape and material properties of surfaces. This work provides new insights into how the visual system acquired the ability to solve problems that have historically been considered intractable.


Subject(s)
Form Perception , Humans
4.
Curr Biol ; 32(6): 1387-1394.e3, 2022 03 28.
Article in English | MEDLINE | ID: mdl-35139361

ABSTRACT

The human visual system can derive information about three-dimensional (3D) shape from the structure of light reflected by surfaces. Most research on single static images has focused on the 3D shape information contained in variations of brightness caused by interactions between the illumination and local surface orientation ("shading") [1-6]. Although color can enhance the recovery of surface shading when color and brightness vary independently [7-9], there is no evidence that color alone provides any information about 3D shape. Here, we show that the wavelength-dependent reflectance of chromatic materials provides information about the 3D shape of translucent materials. We show that different wavelengths of light undergo varying degrees of subsurface light transport, which generates multiple forms of spatial structure: wavelengths that are weakly reflected generate shading-like image structure, linked to 3D surface orientation, whereas wavelengths that penetrate more deeply into the material are primarily constrained by the direction of surface curvature (convexities and concavities) [10]. Psychophysical experiments demonstrate that the enhanced perception of 3D shape in chromatic translucent surfaces arises from the shading structure generated by weakly reflected wavelengths, which, in turn, generates correlated spatial variations in saturation. These results demonstrate a new functional role for color in the perception of the 3D shape of translucent materials.


Subject(s)
Form Perception , Color Perception , Depth Perception , Humans , Imaging, Three-Dimensional
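The intuition in the abstract above can be illustrated with a toy Python sketch. It models wavelength-dependent subsurface transport as differential lateral blurring of a 1D shading profile: the weakly penetrating channel preserves shading-like structure while the deeply penetrating channel loses it, so a simple saturation proxy covaries with shading. The height profile, light angle, blur widths, and the box-blur stand-in for subsurface transport are illustrative assumptions, not the physical simulation used in the paper.

```python
import numpy as np

# Toy 1D "surface": height profile with bumps, lit obliquely.
x = np.linspace(0, 4 * np.pi, 512)
height = np.sin(x) + 0.3 * np.sin(3.1 * x)
slope = np.gradient(height, x)

# Lambertian-style shading from the local slope (illustrative, not a renderer).
normal_angle = np.arctan2(1.0, -slope)        # orientation of the surface normal
light_angle = np.deg2rad(60.0)                # oblique light direction
shading = np.clip(np.cos(normal_angle - light_angle), 0.0, None)

def blur(signal, width):
    """Crude stand-in for lateral subsurface light transport: a box blur."""
    kernel = np.ones(width) / width
    return np.convolve(signal, kernel, mode="same")

# Hypothetical channels: a weakly penetrating wavelength keeps the shading
# structure, a deeply penetrating wavelength diffuses it laterally.
shallow_channel = blur(shading, 3)
deep_channel = blur(shading, 81)

# A simple saturation proxy: the difference between the two channels.
saturation = np.abs(shallow_channel - deep_channel)

# The shallow channel (and hence saturation) tracks the shading gradients,
# which is the covariation the abstract links to chromatic translucency.
print("corr(shading, shallow channel):",
      np.corrcoef(shading, shallow_channel)[0, 1].round(3))
print("corr(shading, deep channel):   ",
      np.corrcoef(shading, deep_channel)[0, 1].round(3))
```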
5.
Proc Natl Acad Sci U S A ; 118(14), 2021 04 06.
Article in English | MEDLINE | ID: mdl-33811143

ABSTRACT

The problem of extracting the three-dimensional (3D) shape and material properties of surfaces from images is considered to be inherently ill posed. It is thought that a priori knowledge of 3D shape is needed to infer material properties, or that knowledge of material properties is needed to derive 3D shape. Here, we show that there is information in images that cospecifies both the material composition and the 3D shape of light-permeable (translucent) materials. Specifically, we show that the intensity gradients generated by subsurface scattering, the shape of self-occluding contours, and the distribution of specular reflections covary in systematic ways that are diagnostic of both the surface's 3D shape and its material properties. These sources of image covariation emerge from being causally linked to a common environmental source: 3D surface curvature. We show that these sources of covariation take the form of "photogeometric constraints," which link variations in intensity (photometric constraints) to the sign and direction of 3D surface curvature (geometric constraints). We experimentally demonstrate that this covariation generates emergent cues that the visual system exploits to derive the 3D shape and material properties of translucent surfaces, and we demonstrate the potency of these cues by constructing counterfeit images that evoke vivid percepts of 3D shape and translucency. The concepts of covariation and cospecification articulated herein suggest a principled conceptual path forward for identifying emergent cues that can be used to solve problems in vision that have historically been assumed to be ill posed.

6.
Elife ; 8, 2019 07 12.
Article in English | MEDLINE | ID: mdl-31298655

ABSTRACT

The human visual system is tasked with recovering the different physical sources of optical structure that generate our retinal images. Separate research has focused on understanding how the visual system estimates (a) environmental sources of image structure and (b) blur induced by the eye's limited focal range, but little is known about how the visual system distinguishes environmental sources from optical defocus. Here, we present evidence that this is a fundamental perceptual problem and provide insights into how and when the visual system succeeds and fails in solving it. We show that fully focused surface shading can be misperceived as defocused and that optical blur can be misattributed to the material properties and shape of surfaces. We further reveal how these misperceptions depend on the relationship between shading gradients and sharp contours, and conclude that computations of blur are inherently linked to computations of surface shape, material, and illumination.


Subject(s)
Form Perception/physiology , Visual Perception/physiology , Humans , Optical Phenomena , Photic Stimulation
7.
Vision Res ; 158: 200-207, 2019 05.
Article in English | MEDLINE | ID: mdl-30876910

ABSTRACT

The mechanisms responsible for generating illusory contours are thought to fulfil an adaptive role in providing estimates of missing contour fragments generated by partial camouflage. One striking apparent counter-example to this view was described in Current Biology 21 (2011) 492-496, which showed that illusory contours can arise in motion displays depicting visible discs occluding and disoccluding thin contours. These motion sequences generate illusory contours even though the illusory contours play no necessary role in accounting for the occlusion and disocclusion of the thin contours. The present work sought to characterize more precisely the quantitative dependence of these 'irrational' contours on the relative contrasts in the image. We show that the perceived strength of the illusory contours generated by these displays depends monotonically on the relative contrast of the occluding and occluded contours, and that previous attempts to measure their strength with a method of adjustment appear to be contaminated by response bias. We further show that these illusory contours also arise when the occluding discs are rendered transparent and exhibit similar forms of contrast dependencies. These findings reveal a general methodological problem that can arise with methods of adjustment and provide quantitative data that may be used to identify the neural mechanisms responsible for the genesis of illusory contours and their perceived strength.


Subject(s)
Form Perception/physiology , Illusions/physiology , Motion Perception/physiology , Contrast Sensitivity/physiology , Humans , Models, Neurological
8.
Curr Biol ; 29(2): 306-311.e3, 2019 01 21.
Article in English | MEDLINE | ID: mdl-30612905

ABSTRACT

The human visual system is remarkably adept at extracting the three-dimensional (3D) shape of surfaces from images of smoothly shaded surfaces (shape from shading). Most research into this remarkable perceptual ability has focused on understanding how the visual system derives a specific representation of 3D shape when it is known (or assumed) that shading and self-occluding contours are the sole causes of image structure [1-11]. But there is an even more fundamental problem that must be solved before any such analysis can take place: how does the visual system determine when it's viewing a shaded surface? Here, we present theoretical analyses showing that there is statistically reliable information generated along the bounding contours of smoothly curved surfaces that the visual system uses to identify surface shading. This information can be captured by two photogeometric constraints that link the shape of bounding contours to the distributions of shading intensity along the contours: one that links shading intensity to the local orientations along bounding contours and a second that links shading intensity to bounding contour curvature. We show that these constraints predict the perception of shading for surfaces with smooth self-occluding contours and a widely studied class of bounding contours (planar cuts). The results provide new insights into the information that the visual system exploits to distinguish surface shading from other sources of image structure and offer a coherent explanation of the influence of bounding contours on the perception of surface shading and 3D shape.


Subject(s)
Contrast Sensitivity , Depth Perception , Form Perception , Pattern Recognition, Visual , Cues , Humans , Lighting
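The intensity-orientation constraint described in the abstract above can be sketched in a few lines of Python. Along a smooth self-occluding contour the surface normal lies in the image plane, pointing along the outward contour normal, so Lambertian intensity at the contour is predicted by the contour's local orientation. The circular contour (a sphere's occluding boundary), the light direction, and the sampling below are illustrative choices, not the stimuli used in the study.

```python
import numpy as np

# Occluding contour of a sphere: a circle, parametrised by angle t.
t = np.linspace(0, 2 * np.pi, 360, endpoint=False)
contour_normal = np.stack([np.cos(t), np.sin(t)], axis=1)   # outward 2D normal

# Hypothetical light direction projected into the image plane.
light = np.array([np.cos(np.deg2rad(120)), np.sin(np.deg2rad(120))])

# Along the self-occluding contour, the 3D surface normal coincides with the
# outward contour normal, so Lambertian intensity there is fixed by contour
# orientation (clipped at the shadow boundary).
intensity = np.clip(contour_normal @ light, 0.0, None)

# Intensity peaks where the contour normal points toward the light and falls
# to zero where it points away -- a lawful intensity/orientation covariation.
peak = t[np.argmax(intensity)]
print("light direction: 120 deg; intensity peaks at contour orientation:",
      round(np.rad2deg(peak), 1), "deg")
```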
9.
Proc Natl Acad Sci U S A ; 114(52): 13840-13845, 2017 12 26.
Article in English | MEDLINE | ID: mdl-29229812

ABSTRACT

A fundamental problem in extracting scene structure is distinguishing different physical sources of image structure. Light reflected by an opaque surface covaries with local surface orientation, whereas light transported through the body of a translucent material does not. This suggests that the visual system may use the covariation of local surface orientation and intensity as a cue to the opacity of surfaces. We tested this hypothesis by manipulating the contrast of luminance gradients and the surface geometries to which they belonged and assessed how these manipulations affected the perception of surface opacity/translucency. We show that (i) identical luminance gradients can appear either translucent or opaque depending on the relationship between luminance and perceived 3D surface orientation, (ii) illusory percepts of translucency can be induced by embedding opaque surfaces in diffuse light fields that eliminate the covariation between surface orientation and intensity, and (iii) illusory percepts of opacity can be generated when transparent materials are embedded in a light field that generates images where surface orientation and intensity covary. Our results provide insight into how the visual system distinguishes opaque surfaces and light-permeable materials and why discrepancies arise between the perception and physics of opacity and translucency. These results suggest that the most significant information used to compute the perceived opacity and translucency of surfaces arises at a level of representation where 3D shape is made explicit.


Subject(s)
Contrast Sensitivity/physiology , Perceptual Masking/physiology , Female , Humans , Male
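A minimal Python sketch of the covariation cue proposed in the abstract above, assuming a toy height field, a Lambertian rendering for the opaque case, and a noisy, orientation-independent image as a stand-in for the translucent (or diffusely lit) case; none of these correspond to the stimuli actually used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bumpy height field.
n = 128
y, x = np.mgrid[0:n, 0:n] / n * 2 * np.pi
height = np.sin(2 * x) * np.cos(3 * y) + 0.5 * np.sin(5 * x + 1)

# Surface normals from the height gradients.
hy, hx = np.gradient(height)
normals = np.dstack([-hx, -hy, np.ones_like(height)])
normals /= np.linalg.norm(normals, axis=2, keepdims=True)

# Directional light -> Lambertian image: intensity covaries with orientation.
light = np.array([0.5, 0.3, 0.8])
light = light / np.linalg.norm(light)
opaque_img = np.clip(normals @ light, 0.0, None)

# Stand-in for a translucent material (or an opaque surface in a fully
# diffuse light field): intensity largely decoupled from local orientation.
translucent_img = 0.6 + 0.05 * rng.standard_normal(height.shape)

# The proposed cue: how strongly intensity covaries with surface orientation
# (here summarised by the n . l term).
ndotl = (normals @ light).ravel()
print("opaque-like image, corr(intensity, n.l):     ",
      np.corrcoef(opaque_img.ravel(), ndotl)[0, 1].round(3))
print("translucent-like image, corr(intensity, n.l):",
      np.corrcoef(translucent_img.ravel(), ndotl)[0, 1].round(3))
```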
10.
J Vis ; 17(3): 17, 2017 03 01.
Article in English | MEDLINE | ID: mdl-28355629

ABSTRACT

Previous studies have shown that the perceived three-dimensional (3D) shape of objects depends on their material composition. The majority of this work has focused on glossy, flat-matte, or velvety materials. Here, we studied perceived 3D shape of translucent materials. We manipulated the spatial frequency of surface relief perturbations of translucent and opaque objects. Observers indicated which of two surfaces appeared to have more bumps. They also judged local surface orientation using gauge probe figures. We found that translucent surfaces appeared to have fewer bumps than opaque surfaces with the same 3D shape (Experiment 1), particularly when self-occluding contours were hidden from view (Experiment 2). We also found that perceived local curvature was underestimated for translucent objects relative to opaque objects, and that estimates of perceived local surface orientation were similarly correlated with luminance for images of both opaque and translucent objects (Experiment 3). These findings suggest that the perceived mesoscopic shape of completely matte translucent objects can be underestimated due to a decline in the steepness of luminance gradients relative to those of opaque objects.


Subject(s)
Form Perception/physiology , Imaging, Three-Dimensional , Surface Properties , Adult , Contrast Media , Depth Perception/physiology , Humans , Orientation, Spatial
11.
Iperception ; 7(5): 2041669516671566, 2016.
Article in English | MEDLINE | ID: mdl-27733897

ABSTRACT

Specular surfaces and refractive media are difficult to distinguish from each other because they both generate distorted images of the surrounding lighting environment. Whereas convex refractive objects invert the orientation of the horizon so the sky appears beneath the ground plane, convex specular surfaces preserve the orientation of the horizon so the sky appears above the ground. Here, we show that a refractive transparent object can be made to appear specular and opaque simply by rotating the image by 180°. This result suggests that the visual system relies on information tied to the orientation of the horizon to distinguish between refractive and specular objects.

12.
J Vis ; 16(1): 5, 2016.
Article in English | MEDLINE | ID: mdl-26756175

ABSTRACT

Specular and matte surfaces can project identical images if the surface geometry and light field are appropriately configured. Our previous work has shown that the visual system can exploit stereopsis and contour cues to 3D shape to disambiguate different surface reflectance interpretations. Here, we test whether material perception depends on information about surface geometry provided by structure from motion and shape from texture. Different surface textures were superimposed on a fixed pattern of luminance gradients to generate two different 3D shape interpretations. Each shape interpretation of the luminance gradients promoted a different experience of surface reflectance and illumination direction, which varied from a specular surface in frontal illumination to a comparatively matte surface in grazing illumination. The shape that appeared most specular exhibited the steepest derivatives of luminance with respect to surface orientation, consistent with physical differences between specular and diffuse reflectance. The effect of apparent shape on perceived reflectance occurred for a variety of surface textures that provided either structure from motion, shape from texture, or both optical sources of shape information. In conjunction with previous findings (Marlow, Todorovic, & Anderson, 2015; Marlow & Anderson, 2015), these results suggest that any cue that provides sufficient information about 3D shape can also be used to derive material properties from the rate that luminance varies as a function of surface curvature.


Subject(s)
Cues , Form Perception/physiology , Motion Perception/physiology , Depth Perception/physiology , Humans , Imaging, Three-Dimensional , Orientation , Photic Stimulation/methods
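The rate at which luminance varies with surface orientation, which the abstract above links to perceived specularity, can be sketched numerically. The luminance profile, the two candidate shapes, and the median-based summary below are illustrative assumptions rather than the paper's analysis: the same luminance gradient interpreted as a shallow shape yields a steeper luminance/orientation derivative than when it is interpreted as a deeply curved shape, and the steeper interpretation is the one predicted to look more specular.

```python
import numpy as np

# A fixed 1D luminance profile shared by two different 3D shape interpretations.
x = np.linspace(-1, 1, 256)
luminance = 0.5 + 0.5 * np.cos(np.pi * x)          # bright centre, dark edges

# Two hypothetical surface interpretations of the same image region:
# a deeply curved corrugation and a much shallower one.
deep_height = 0.8 * np.cos(np.pi * x)
shallow_height = 0.1 * np.cos(np.pi * x)

def luminance_vs_orientation(height):
    """Median |dL/d(slant)|: how fast luminance changes per unit change in
    surface orientation under this shape interpretation."""
    slant = np.arctan(np.gradient(height, x))       # local surface slant
    dL = np.gradient(luminance, x)
    dslant = np.gradient(slant, x)
    valid = np.abs(dslant) > 1e-6                   # avoid division by ~0
    return np.median(np.abs(dL[valid] / dslant[valid]))

# The same luminance gradient spread over a small range of orientations
# (the shallow shape) implies a steeper luminance/orientation derivative.
print("deep shape    |dL/dslant|:", round(luminance_vs_orientation(deep_height), 3))
print("shallow shape |dL/dslant|:", round(luminance_vs_orientation(shallow_height), 3))
```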
13.
Vision Res ; 115(Pt B): 199-208, 2015 Oct.
Article in English | MEDLINE | ID: mdl-25982712

ABSTRACT

Retinal image structure is due to a complex mixture of physical sources that includes the surface's 3D shape, light-reflectance and transmittance properties, and the light field. The visual system can somehow discriminate between these different sources of image structure and recover information about the objects and surfaces in the scene. There has been significant debate about the nature of the representations that are used to derive surface reflectance properties such as specularity (gloss). Specularity could be derived either directly from 2D image properties or by exploiting information that can only be derived from representations in which 3D shape has been made explicit. We recently provided evidence that 3D shape information can play a critical role in the perception of material specularity, but the shape manipulation in our prior study also significantly changed 2D image properties (Marlow, Todorovic, & Anderson, 2015). Here, we held fixed all monocularly visible 2D image properties and manipulated 3D shape stereoscopically. When binocularly fused, the depicted 3D shapes induced striking transformations in the surfaces' apparent material properties, which vary from matte to 'metallic'. Our psychophysical measurements of perceived specularity reveal that 3D shape information can play a critical role in material perception for both singly-curved surfaces and more complex geometries that curve in two directions. These results provide strong evidence that the perception of material specularity can depend on physical constraints derived from representations in which three-dimensional shape has been made explicit.


Subject(s)
Form Perception/physiology , Surface Properties , Analysis of Variance , Humans , Photic Stimulation/methods , Vision, Binocular/physiology , Young Adult
14.
Curr Biol ; 25(6): R221-R222, 2015 Mar 16.
Article in English | MEDLINE | ID: mdl-25784037

ABSTRACT

Retinal image structure arises from the interaction between a surface's three-dimensional shape, its reflectance and transmittance properties, and the surrounding light field. Any local image structure can be generated by an infinite number of different combinations of surface properties, which suggests that the visual system must somehow constrain the possible scene interpretations. Research on this problem has searched for such constraints in statistical regularities of two-dimensional image structure [1,2]. Here, we present a new class of displays in which the perception of material properties cannot be explained with two-dimensional image properties. The displays manipulate the perceived three-dimensional shape of identical luminance gratings, and demonstrate that perceived three-dimensional shape can alter perceived surface reflectance.


Subject(s)
Depth Perception/physiology , Form Perception/physiology , Humans , Light , Models, Psychological , Surface Properties
15.
Perception ; 43(9): 977-88, 2014.
Article in English | MEDLINE | ID: mdl-25420336

ABSTRACT

One current view is that subjective contours may involve high-level detection of a salient shape with back propagation to early visual areas where small receptive fields allow for scrutiny of relevant details. This idea applies to Kanizsa-type figures. However, Gillam and Chan (2002, Psychological Science, 13, 279-282), using figures based on Gillam's graphic 'New York Titanic' (Gillam, 1997, Thresholds: Limits of Perception, New York: Arts Magazine), showed that strong subjective contours can be seen along the linearly aligned edges of a set of shapes if occlusion cues of 'extrinsic edge' and 'entropy contrast' are strong. Here we compared ratings of the strength of subjective contours along linear alignments with those seen in Kanizsa figures. The strongest subjective contour for a single set of linearly aligned shapes was similar in strength to the edges of a Kanizsa square (controlling for support ratio) despite the lack of a salient region. The addition of a second set of linearly aligned inducers consistent with a common surface increased subjective-contour strength, as did having four rather than two 'pacmen' in the Kanizsa figure, indicating a role for surface support. We argue that linear subjective contours allow for the investigation of certain occlusion cues and the interactions between them that are not easily explored with Kanizsa figures.


Subject(s)
Form Perception/physiology , Optical Illusions/physiology , Pattern Recognition, Visual/physiology , Adult , Humans , Random Allocation
16.
J Vis ; 14(7), 2014 Jun 02.
Article in English | MEDLINE | ID: mdl-24891438

ABSTRACT

The appearance of surface texture depends on the identification of edge contours in an image generated by local variations in reflectance. These edges in the image need to be distinguished from diffuse shading gradients caused by the interaction of light with surface relief. To understand how the brain performs this separation, we generated textures with orientation flows that were initially congruent with the diffuse shading flow of planar surfaces. We found that rotating textures relative to shading increased the appearance of surface pigmentation, which was well explained by an increase in the variation of local orientation fields with increasing offset of texture gradients (Experiment 1). We obtained similar findings when rotating texture flow relative to the diffuse shading of spherical surfaces with global curvature (Experiment 2). In a second set of experiments, we found that perceived pigmentation of spherical surfaces depended on the perceived orientation of the light field; rotating images of spherical surfaces reduced both perceived pigmentation (Experiment 3) and perceived global texture contrast in an objective task (Experiment 4). The dependence of perceived texture on image orientation suggests that the separation of texture flow from shading depends on an assumed light source from above bias. These findings support the view that separation of texture flow from shading, and thus perceived pigmentation, depend not only on the local structure of orientation fields in an image, but also on midlevel representations of shading and illuminance flow.


Subject(s)
Contrast Sensitivity/physiology , Light , Pattern Recognition, Visual/physiology , Depth Perception/physiology , Humans , Orientation , Surface Properties
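A rough Python sketch of the orientation-field idea in the abstract above: rotating a striped texture relative to a smooth shading flow increases the circular variance of the local gradient orientation field. The stimuli, the gradient-based orientation estimate, and the variance measure are simplified assumptions, not those used in the experiments.

```python
import numpy as np

def orientation_variance(img):
    """Circular variance of local gradient orientations (angles are doubled
    so that theta and theta + pi count as the same orientation)."""
    gy, gx = np.gradient(img)
    theta = np.arctan2(gy, gx)
    weights = np.hypot(gx, gy)
    c = np.sum(weights * np.cos(2 * theta)) / np.sum(weights)
    s = np.sum(weights * np.sin(2 * theta)) / np.sum(weights)
    return 1.0 - np.hypot(c, s)

n = 256
y, x = np.mgrid[0:n, 0:n] / n * 2 * np.pi

# Smooth "shading" flow oriented along one axis, plus a striped "texture".
shading = np.sin(y)
texture_aligned = 0.3 * np.sign(np.sin(8 * y))          # congruent with shading
texture_rotated = 0.3 * np.sign(np.sin(8 * x))          # rotated 90 degrees

aligned = shading + texture_aligned
rotated = shading + texture_rotated

# Rotating the texture flow relative to the shading flow increases the
# variation of the local orientation field, the cue the abstract links to
# increased perceived pigmentation.
print("orientation variance, congruent flows:", round(orientation_variance(aligned), 3))
print("orientation variance, rotated texture:", round(orientation_variance(rotated), 3))
```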
17.
J Vis ; 13(14), 2013 Dec 02.
Article in English | MEDLINE | ID: mdl-24297776

ABSTRACT

Image structure is generated by distinct physical sources that include 3-D surface geometry, surface reflectance, and the light field. A fundamental problem in midlevel vision involves understanding how, and how well, the visual system separates images into their underlying sources. A large body of recent work has shown that the perception of gloss exhibits significant dependencies on both 3-D surface geometry and the light field in which the surface is embedded. We recently proposed that these dependencies arise from the visual system relying on a restricted set of dimensions of specular image structure that are heuristically used to estimate surface gloss (Marlow, Kim, & Anderson, 2012). Here, we report a series of studies that manipulate surface geometry and the structure of the light field to prospectively test whether these image cues predict perceived gloss for a broad range of surface geometries and light fields. We identify generative constraints on each of our proposed cues that allow us to predict which cue will vary most for a given surface geometry, reflectance function, and light field. Our psychophysical data reveal that our generative manipulations succeed in promoting the significance of the intended cue, which in turn increases their predictive value in observers' judgments of gloss. Our results suggest that the perception of gloss in static, monocular images relies on a heuristic weighting of cues to specular image structure, and offer a unified explanation of why surface geometry and the light field modulate perceived gloss.


Subject(s)
Contrast Sensitivity/physiology , Cues , Surface Properties , Visual Perception/physiology , Humans , Imaging, Three-Dimensional , Light
18.
Curr Biol ; 22(20): 1909-13, 2012 Oct 23.
Article in English | MEDLINE | ID: mdl-22959347

ABSTRACT

The amount and spectral content of the light reflected by most natural surfaces depend on the structure of the light field, the observer's viewing position, and 3D surface geometry, particularly for specular (glossy) surfaces. A growing body of data has demonstrated that perceived gloss can vary as a function of a surface's 3D shape and its illumination field, but there is currently no explanation for these effects. Here, we show that the perception of gloss can be understood as a direct consequence of image properties that covary with surface geometry and the illumination field. We show that different illumination fields can generate qualitatively different patterns of interaction between perceived gloss and 3D surface geometry. Despite the complexity and variability of these interactions, we demonstrate that the perception (and misperception) of gloss is well predicted by the way that each illumination field modulates the size, contrast, sharpness, and depth of specular reflections. Our results provide a coherent explanation of the effects of extrinsic scene variables on perceived gloss, and our methods suggest a general technique for assessing the role of specific image properties in modulating our visual experience of material properties.


Subject(s)
Contrast Sensitivity , Form Perception , Illusions , Visual Perception , Depth Perception , Humans , Light , Photic Stimulation , Psychophysics
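The abstract above attributes variations in perceived gloss to the size, contrast, sharpness, and depth of specular reflections. The Python sketch below computes simple stand-in statistics for three of these cues from a toy specular-component image; the cue formulas, threshold, and stimuli are illustrative assumptions rather than the measures used in the paper, and depth (which requires binocular or geometric information) is omitted.

```python
import numpy as np

def specular_cues(specular_img, threshold=0.5):
    """Illustrative image statistics for a (separately available) specular
    component: coverage, contrast, and edge sharpness of the highlights.
    These formulas are stand-ins, not the measures used in the paper."""
    highlights = specular_img > threshold
    coverage = highlights.mean()                                  # "size"
    contrast = specular_img[highlights].mean() - specular_img.mean()
    gy, gx = np.gradient(specular_img)
    sharpness = np.hypot(gx, gy)[highlights].mean()               # edge steepness
    return coverage, contrast, sharpness

# Toy specular components: a compact, sharp highlight versus a large,
# blurred one (as different illumination fields might produce).
n = 128
y, x = np.mgrid[0:n, 0:n]
r2 = (x - n / 2) ** 2 + (y - n / 2) ** 2
sharp_highlight = np.exp(-r2 / (2 * 4.0 ** 2))
blurred_highlight = np.exp(-r2 / (2 * 20.0 ** 2))

for name, img in [("sharp", sharp_highlight), ("blurred", blurred_highlight)]:
    cov, con, shp = specular_cues(img)
    print(f"{name:8s} coverage={cov:.3f}  contrast={con:.3f}  sharpness={shp:.4f}")
```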
19.
Nat Neurosci ; 15(11): 1590-5, 2012 Nov.
Article in English | MEDLINE | ID: mdl-23001059

ABSTRACT

Our visual system relies on the image structure generated by the interaction of light with objects to infer their material properties. One widely studied surface property is gloss, which can provide information that an object is smooth, shiny or wet. Studies have historically focused on the role of specular highlights in modulating perceived gloss. Here we show in human observers that glossy surfaces can generate both bright specular highlights and dark specular 'lowlights', and that the presence of either is sufficient to generate compelling percepts of gloss. We show that perceived gloss declines when the image structure generated by specular lowlights is blurred or misaligned with surrounding surface shading and that perceived gloss can arise from the presence of lowlights in surface regions isolated from highlights. These results suggest that the image structure generated by specular highlights and lowlights is used to construct our experience of surface gloss.


Subject(s)
Contrast Sensitivity/physiology , Depth Perception/physiology , Form Perception/physiology , Adaptation, Psychological , Analysis of Variance , Humans , Lighting , Photic Stimulation , Psychophysics , Reaction Time , Surface Properties