Results 1 - 20 of 53
1.
Vision Res ; 222: 108450, 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38964164

ABSTRACT

One well-established characteristic of early visual processing is the contrast sensitivity function (CSF), which describes how sensitivity varies with the spatial frequency (SF) content of the visual input. The CSF prompted the development of a now standard model of spatial vision. It represents the visual input by activity in orientation- and SF-selective channels which are nonlinearly recombined to predict a perceptual decision. The standard spatial vision model has been extensively tested with sinusoidal gratings at low contrast because their narrow SF spectra isolate the underlying SF-selective mechanisms. It is less well studied how these mechanisms account for sensitivity to more behaviourally relevant stimuli such as sharp edges at high contrast (i.e. object boundaries), which abound in the natural environment and have broader SF spectra. Here, we probe sensitivity to edges (2-AFC, edge localization) in the presence of broadband and narrowband noises. We use Cornsweet luminance profiles with peak frequencies at 0.5, 3 and 9 cpd as edge stimuli. To test how well mechanisms underlying sinusoidal contrast sensitivity can account for edge sensitivity, we implement a single- and a multi-scale model building upon standard spatial vision model components. Both models account for most of the data but also systematically deviate in their predictions, particularly in the presence of pink noise and for the lowest SF edge. These deviations might indicate a transition from contrast- to luminance-based detection at low SFs. Alternatively, they might point to a missing component in current spatial vision models.
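To make the stimulus description concrete, below is a minimal Python sketch of a Cornsweet-like edge profile, approximated as a step edge minus a low-pass-filtered copy of itself (i.e. a high-pass-filtered step). The function name, the Gaussian approximation, and the parameter values are illustrative assumptions, not the authors' stimulus code.

```python
# Minimal sketch, not the published implementation: a Cornsweet-like
# luminance profile built as a step edge minus a blurred copy of itself.
# Smaller blur scales shift the profile's peak spatial frequency upwards.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def cornsweet_like_profile(n_pix=512, sigma_pix=20.0, contrast=0.5):
    """1D Cornsweet-like profile around a mean luminance of 1.0."""
    step = np.where(np.arange(n_pix) < n_pix // 2, -1.0, 1.0)
    profile = step - gaussian_filter1d(step, sigma_pix)  # high-pass step
    profile /= np.abs(profile).max()                     # unit amplitude
    return 1.0 + contrast * profile

edge = cornsweet_like_profile(sigma_pix=20.0, contrast=0.5)
```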

2.
Sci Rep ; 14(1): 3967, 2024 Feb 17.
Article in English | MEDLINE | ID: mdl-38368485

ABSTRACT

The eye's natural aging influences our ability to focus on close objects. Without optical correction, all adults will suffer from blurry close vision starting in their 40s. In effect, different optical corrections are necessary for near and far vision. Current state-of-the-art glasses offer a gradual change of correction across the field of view for any distance by using Progressive Addition Lenses (PALs). However, an inevitable side effect of PALs is geometric distortion, which causes the swim effect, a phenomenon of unstable perception of the environment leading to discomfort for many wearers. Unfortunately, little is known about the relationship between lens distortions and their perceptual effects, that is, between the complex physical distortions on the one hand and their subjective severity on the other. We show that perceived distortion can be measured as a psychophysical scaling function using a VR experiment with accurately simulated PAL distortions. Despite the multi-dimensional space of physical distortions, the measured perception is well represented as a 1D scaling function; distortions are perceived as weaker with negative far corrections, suggesting an advantage for short-sighted people. Beyond that, our results demonstrate that psychophysical scaling with ordinal embedding methods can investigate complex perceptual phenomena like lens distortions that affect geometry, stereo, and motion perception. Our approach provides a new perspective on lens design based on modeling visual processing that could be applied beyond distortions. We anticipate that future PAL designs could be improved using our method to minimize subjectively discomforting distortions rather than merely optimizing physical parameters.

3.
Behav Brain Sci ; 46: e412, 2023 Dec 06.
Article in English | MEDLINE | ID: mdl-38054281

ABSTRACT

Neither the hype exemplified in some exaggerated claims about deep neural networks (DNNs), nor the gloom expressed by Bowers et al., does justice to DNNs as models in vision science: DNNs rapidly evolve, and today's limitations are often tomorrow's successes. In addition, providing explanations as well as prediction and image-computability are model desiderata; none should be favoured at the expense of the others.


Subject(s)
Neural Networks, Computer , Humans
4.
J Vis ; 23(7): 4, 2023 Jul 03.
Article in English | MEDLINE | ID: mdl-37410494

ABSTRACT

In laboratory object recognition tasks based on undistorted photographs, both adult humans and deep neural networks (DNNs) perform close to ceiling. Unlike adults, whose object recognition performance is robust against a wide range of image distortions, DNNs trained on standard ImageNet (1.3M images) perform poorly on distorted images. However, the last two years have seen impressive gains in DNN distortion robustness, predominantly achieved through ever larger training datasets, orders of magnitude larger than ImageNet. Although this simple brute-force approach is very effective in achieving human-level robustness in DNNs, it raises the question of whether human robustness, too, is simply due to extensive experience with (distorted) visual input during childhood and beyond. Here we investigate this question by comparing the core object recognition performance of 146 children (aged 4-15 years) against adults and against DNNs. We find, first, that already 4- to 6-year-olds show remarkable robustness to image distortions and outperform DNNs trained on ImageNet. Second, we estimated the number of images children had been exposed to during their lifetime; compared with various DNNs, children achieve their high robustness with relatively little data. Third, when recognizing objects, children (like adults but unlike DNNs) rely heavily on shape rather than texture cues. Together our results suggest that the remarkable robustness to distortions emerges early in the developmental trajectory of human object recognition and is unlikely the result of a mere accumulation of experience with distorted visual input. Even though current DNNs match human performance regarding robustness, they seem to rely on different and more data-hungry strategies to do so.


Subject(s)
Neural Networks, Computer , Visual Perception , Humans , Adult , Child
5.
Annu Rev Vis Sci ; 9: 501-524, 2023 09 15.
Article in English | MEDLINE | ID: mdl-37001509

ABSTRACT

Deep neural networks (DNNs) are machine learning algorithms that have revolutionized computer vision due to their remarkable successes in tasks like object classification and segmentation. The success of DNNs as computer vision algorithms has led to the suggestion that DNNs may also be good models of human visual perception. In this article, we review evidence regarding current DNNs as adequate behavioral models of human core object recognition. To this end, we argue that it is important to distinguish between statistical tools and computational models and to understand model quality as a multidimensional concept in which clarity about modeling goals is key. Reviewing a large number of psychophysical and computational explorations of core object recognition performance in humans and DNNs, we argue that DNNs are highly valuable scientific tools but that, as of today, DNNs should only be regarded as promising, but not yet adequate, computational models of human core object recognition behavior. On the way, we dispel several myths surrounding DNNs in vision science.


Subject(s)
Neural Networks, Computer , Visual Perception , Humans , Algorithms , Machine Learning
6.
J Vis ; 22(13): 5, 2022 12 01.
Article in English | MEDLINE | ID: mdl-36469015

ABSTRACT

Vision researchers are interested in mapping complex physical stimuli to perceptual dimensions. Such a mapping can be constructed using multidimensional psychophysical scaling or ordinal embedding methods. Both methods infer coordinates that agree as much as possible with the observer's judgments so that perceived similarity corresponds with distance in the inferred space. However, a fundamental problem of all methods that construct scalings in multiple dimensions is that the inferred representation can only reflect perception if the scale has the correct dimension. Here we propose a statistical procedure to overcome this limitation. The critical elements of our procedure are i) measuring the scale's quality by the number of correctly predicted triplets and ii) performing a statistical test to assess if adding another dimension to the scale improves triplet accuracy significantly. We validate our procedure through extensive simulations. In addition, we study the properties and limitations of our procedure using "real" data from various behavioral datasets from psychophysical experiments. We conclude that our procedure can reliably identify (a lower bound on) the number of perceptual dimensions for a given dataset.
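As a rough illustration of the two critical elements named above, the sketch below computes triplet accuracy for a given embedding and runs a simple binomial comparison on held-out triplets to decide whether adding a dimension helps. The embeddings are assumed to come from any ordinal-embedding routine, and the specific test shown is a crude stand-in, not the published procedure.

```python
# Minimal sketch (not the published implementation): (i) triplet accuracy
# of an embedding and (ii) a crude binomial test on held-out triplets
# comparing a d-dimensional and a (d+1)-dimensional embedding.
import numpy as np
from scipy.stats import binomtest

def triplet_correct(X, triplets):
    """For triplets (i, j, k) meaning 'i is more similar to j than to k',
    return a boolean array: does embedding X predict each triplet?"""
    i, j, k = triplets[:, 0], triplets[:, 1], triplets[:, 2]
    d_ij = np.linalg.norm(X[i] - X[j], axis=1)
    d_ik = np.linalg.norm(X[i] - X[k], axis=1)
    return d_ij < d_ik

def dimension_improves(X_d, X_d1, heldout_triplets, alpha=0.05):
    """Among held-out triplets on which the two embeddings disagree,
    is the higher-dimensional one correct more than half the time?"""
    ok_d = triplet_correct(X_d, heldout_triplets)
    ok_d1 = triplet_correct(X_d1, heldout_triplets)
    disagree = ok_d != ok_d1
    n = int(np.sum(disagree))
    if n == 0:
        return False
    wins_d1 = int(np.sum(ok_d1 & disagree))
    return binomtest(wins_d1, n, p=0.5, alternative="greater").pvalue < alpha
```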


Subject(s)
Judgment , Humans
7.
J Vis ; 22(4): 17, 2022 03 02.
Article in English | MEDLINE | ID: mdl-35353153

ABSTRACT

Color constancy is our ability to perceive constant colors across varying illuminations. Here, we trained deep neural networks to be color constant and evaluated their performance with varying cues. Inputs to the networks consisted of two-dimensional images of simulated cone excitations derived from three-dimensional (3D) rendered scenes of 2,115 different 3D shapes, with spectral reflectances of 1,600 different Munsell chips, illuminated under 278 different natural illuminations. The models were trained to classify the reflectance of the objects. Testing was done with four new illuminations with equally spaced CIEL*a*b* chromaticities, two along the daylight locus and two orthogonal to it. High levels of color constancy were achieved with different deep neural networks, and constancy was higher along the daylight locus. When gradually removing cues from the scene, constancy decreased. Both ResNets and classical ConvNets of varying degrees of complexity performed well. However, DeepCC, our simplest sequential convolutional network, represented colors along the three color dimensions of human color vision, while ResNets showed a more complex representation.


Subject(s)
Color Perception , Color Vision , Humans , Lighting , Photic Stimulation , Retinal Cone Photoreceptor Cells
8.
J Vis ; 20(9): 14, 2020 09 02.
Article in English | MEDLINE | ID: mdl-32955551

ABSTRACT

In this article, we address the problem of measuring and analyzing sensation, the subjective magnitude of one's experience. We do this in the context of the method of triads: The sensation of the stimulus is evaluated via relative judgments of the following form: "Is stimulus \(S_i\) more similar to stimulus \(S_j\) or to stimulus \(S_k\)?" We propose to use ordinal embedding methods from machine learning to estimate the scaling function from the relative judgments. We review two relevant and well-known methods in psychophysics that are partially applicable in our setting: nonmetric multidimensional scaling (NMDS) and the method of maximum likelihood difference scaling (MLDS). Considering various scaling functions, we perform an extensive set of simulations to demonstrate the performance of the ordinal embedding methods. We show that in contrast to existing approaches, our ordinal embedding approach allows, first, to obtain reasonable scaling functions from comparatively few relative judgments and, second, to estimate multidimensional perceptual scales. In addition to the simulations, we analyze data from two real psychophysics experiments using ordinal embedding methods. Our results show that in the one-dimensional perceptual scale, our ordinal embedding approach works as well as MLDS, while in higher dimensions, only our ordinal embedding methods can produce a desirable scaling function. To make our methods widely accessible, we provide an R-implementation and general rules of thumb on how to use ordinal embedding in the context of psychophysics.
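A minimal sketch of a simulated observer in the method of triads follows, under an assumed Thurstonian-style noise model (the paper's simulations may differ in detail); the scaling function, noise level, and variable names are illustrative only.

```python
# Hedged sketch of a simulated observer answering triad queries: each
# stimulus S is mapped to an internal value psi(S) plus Gaussian noise,
# and the observer reports whether S_i appears closer to S_j or to S_k.
import numpy as np

rng = np.random.default_rng(0)

def answer_triad(psi, i, j, k, noise_sd=0.1):
    """Return True if the simulated observer judges S_i closer to S_j."""
    vi, vj, vk = psi[[i, j, k]] + rng.normal(0.0, noise_sd, size=3)
    return abs(vi - vj) < abs(vi - vk)

# Example: a compressive (square-root) perceptual scale over 10 stimuli.
stimuli = np.linspace(0.0, 1.0, 10)
psi = np.sqrt(stimuli)
triads = [(i, j, k) for i in range(10) for j in range(10) for k in range(10)
          if len({i, j, k}) == 3]
responses = [answer_triad(psi, *t) for t in triads]
# `responses` would then be fed to an ordinal-embedding routine to recover
# the scale up to a monotone transformation.
```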


Subject(s)
Visual Perception/physiology , Humans , Judgment , Psychophysics
9.
Iperception ; 11(3): 2041669520927038, 2020.
Article in English | MEDLINE | ID: mdl-32537119

ABSTRACT

One of the most important tasks for humans is the attribution of causes and effects in all walks of life. The first systematic study of the visual perception of causality, often referred to as phenomenal causality, was conducted by Albert Michotte using his now well-known launching events paradigm. Launching events are the seeming collision and seeming transfer of movement between two objects; in Michotte's original experiments these were abstract, featureless stimuli. Here, we study the relation between causal ratings for launching events in Michotte's setting and launching collisions in a photorealistically computer-rendered setting. We presented launching events with differing temporal gaps, the same launching processes with photorealistic billiard balls, as well as photorealistic billiard balls with realistic motion dynamics, that is, an initial rebound of the first ball after collision and a short sliding phase of the second ball due to momentum and friction. We found that providing the normal launching stimulus with realistic visuals led to lower causal ratings, but realistic visuals together with realistic motion dynamics evoked higher ratings. Two-dimensional versus three-dimensional presentation, on the other hand, did not affect phenomenal causality. We discuss our results in terms of intuitive physics as well as cue conflict.

10.
Atten Percept Psychophys ; 81(8): 2968-2970, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31529209

ABSTRACT

We discovered an error in the implementation of the function used to generate radial frequency (RF) distortions in our article (Wallis, Tobias, Bethge, & Wichmann, 2017).

11.
J Vis ; 19(6): 5, 2019 06 03.
Article in English | MEDLINE | ID: mdl-31173630

ABSTRACT

Scene viewing is used to study attentional selection in complex but still controlled environments. One of the main observations on eye movements during scene viewing is the inhomogeneous distribution of fixation locations: While some parts of an image are fixated by almost all observers and are inspected repeatedly by the same observer, other image parts remain unfixated by observers even after long exploration intervals. Here, we apply spatial point process methods to investigate the relationship between pairs of fixations. More precisely, we use the pair correlation function, a powerful statistical tool, to evaluate dependencies between fixation locations along individual scanpaths. We demonstrate that aggregation of fixation locations within 4° is stronger than expected from chance. Furthermore, the pair correlation function reveals stronger aggregation of fixations when the same image is presented a second time. We use simulations of a dynamical model to show that a narrower spatial attentional span may explain differences in pair correlations between the first and the second inspection of the same image.
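For concreteness, here is a deliberately naive Python sketch of a pair correlation estimate for fixation locations in a rectangular image window; it ignores edge correction and the other refinements of proper spatial point process estimators, so it only illustrates the quantity being analysed, not the published analysis.

```python
# Naive pair correlation estimate g(r) for fixation locations, ignoring
# boundary (edge) effects; g(r) ~ 1 under complete spatial randomness.
import numpy as np

def pair_correlation(points, width, height, r_edges):
    """points: (n, 2) fixation coordinates; r_edges: histogram bin edges."""
    n = len(points)
    area = width * height
    lam = n / area                                   # estimated intensity
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]                   # unordered pair distances
    counts, _ = np.histogram(d, bins=r_edges)
    r_mid = 0.5 * (r_edges[:-1] + r_edges[1:])
    dr = np.diff(r_edges)
    # Expected unordered pair counts under CSR, ignoring boundary effects.
    expected = 0.5 * n * lam * 2.0 * np.pi * r_mid * dr
    return r_mid, counts / expected
```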


Subject(s)
Eye Movements/physiology , Fixation, Ocular/physiology , Models, Statistical , Adolescent , Adult , Attention , Female , Form Perception/physiology , Humans , Male , Memory, Long-Term/physiology , Probability , Young Adult
12.
Elife ; 8, 2019 04 30.
Article in English | MEDLINE | ID: mdl-31038458

ABSTRACT

We subjectively perceive our visual field with high fidelity, yet peripheral distortions can go unnoticed and peripheral objects can be difficult to identify (crowding). Prior work showed that humans could not discriminate images synthesised to match the responses of a mid-level ventral visual stream model when information was averaged in receptive fields with a scaling of about half their retinal eccentricity. This result implicated ventral visual area V2, approximated 'Bouma's Law' of crowding, and has subsequently been interpreted as a link between crowding zones, receptive field scaling, and our perceptual experience. However, this experiment never assessed natural images. We find that humans can easily discriminate real and model-generated images at V2 scaling, requiring scales at least as small as V1 receptive fields to generate metamers. We speculate that explaining why scenes look as they do may require incorporating segmentation and global organisational constraints in addition to local pooling.
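As a very crude, assumption-laden illustration of pooling regions that scale with retinal eccentricity (region size roughly scaling × eccentricity), the sketch below simply averages pixel intensities within log-polar bins around the fixation point; the actual models discussed pool rich texture statistics rather than raw pixels, and all parameter values here are placeholders.

```python
# Crude stand-in for eccentricity-scaled pooling: average a grayscale image
# within log-polar regions whose width grows as scaling * eccentricity.
import numpy as np

def foveated_average(img, fix_xy, scaling=0.5, r_min=5.0):
    """img: 2D grayscale array; fix_xy: (x, y) fixation in pixels."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    dx, dy = x - fix_xy[0], y - fix_xy[1]
    ecc = np.hypot(dx, dy)
    ang = np.arctan2(dy, dx)
    # Radial bins spaced so each bin's width ~ scaling * eccentricity.
    r_idx = np.floor(np.log(np.maximum(ecc, r_min) / r_min)
                     / np.log1p(scaling)).astype(int)
    n_ang = max(4, int(np.ceil(2 * np.pi / scaling)))
    a_idx = ((ang + np.pi) / (2 * np.pi) * n_ang).astype(int) % n_ang
    region = r_idx * n_ang + a_idx
    out = np.zeros_like(img, dtype=float)
    for rid in np.unique(region):
        mask = region == rid
        out[mask] = img[mask].mean()
    return out
```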


Subject(s)
Pattern Recognition, Visual/physiology , Visual Fields/physiology , Visual Perception/physiology , Crowding/psychology , Discrimination, Psychological , Fixation, Ocular/physiology , Humans , Perceptual Masking , Photic Stimulation , Space Perception/physiology
13.
J Vis ; 19(3): 1, 2019 03 01.
Article in English | MEDLINE | ID: mdl-30821809

ABSTRACT

Bottom-up and top-down as well as low-level and high-level factors influence where we fixate when viewing natural scenes. However, the importance of each of these factors and how they interact remains a matter of debate. Here, we disentangle these factors by analyzing their influence over time. For this purpose, we develop a saliency model that is based on the internal representation of a recent early spatial vision model to measure the low-level, bottom-up factor. To measure the influence of high-level, bottom-up features, we use a recent deep neural network-based saliency model. To account for top-down influences, we evaluate the models on two large data sets with different tasks: first, a memorization task and, second, a search task. Our results lend support to a separation of visual scene exploration into three phases: the first saccade, an initial guided exploration characterized by a gradual broadening of the fixation density, and a steady state that is reached after roughly 10 fixations. Saccade-target selection during the initial exploration and in the steady state is related to similar areas of interest, which are better predicted when including high-level features. In the search data set, fixation locations are determined predominantly by top-down processes. In contrast, the first fixation follows a different fixation density and contains a strong central fixation bias. Nonetheless, first fixations are guided strongly by image properties, and as early as 200 ms after image onset, fixations are better predicted by high-level information. We conclude that any low-level, bottom-up factors are mainly limited to the generation of the first saccade. All saccades are better explained when high-level features are considered, and later, this high-level, bottom-up control can be overruled by top-down influences.


Subject(s)
Eye Movements/physiology , Fixation, Ocular/physiology , Eye Movement Measurements , Female , Humans , Male , Memory/physiology , Neural Networks, Computer , Photic Stimulation , Saccades/physiology , Vision, Ocular/physiology , Young Adult
14.
Sci Rep ; 9(1): 1635, 2019 02 07.
Article in English | MEDLINE | ID: mdl-30733470

ABSTRACT

When searching for a target in a natural scene, it has been shown that both the target's visual properties and its similarity to the background influence whether and how fast humans are able to find it. So far, it was unclear whether searchers adjust the dynamics of their eye movements (e.g., fixation durations, saccade amplitudes) to the target they search for. In our experiment, participants searched natural scenes for six artificial targets with different spatial frequency content throughout eight consecutive sessions. High-spatial-frequency targets led to smaller saccade amplitudes and shorter fixation durations than low-spatial-frequency targets if target identity was known. If a saccade was programmed in the same direction as the previous saccade, fixation durations and successive saccade amplitudes were not influenced by target type. Visual saliency and empirical fixation density at the endpoints of saccades which maintain direction were comparatively low, indicating that these saccades were less selective. Our results suggest that searchers adjust their eye movement dynamics to the search target efficiently, since previous research has shown that low spatial frequencies are visible farther into the periphery than high spatial frequencies. We interpret the saccade-direction specificity of our effects as reflecting an underlying separation into a default scanning mechanism and a selective, target-dependent mechanism.


Subject(s)
Eye Movements/physiology , Adolescent , Adult , Female , Fixation, Ocular , Humans , Male , Nontherapeutic Human Experimentation , Photic Stimulation , Saccades , Spatial Processing , Time Factors , Young Adult
15.
J Vis ; 17(13): 3, 2017 11 01.
Article in English | MEDLINE | ID: mdl-29094148

ABSTRACT

When watching the image of a natural scene on a computer screen, observers initially move their eyes toward the center of the image, a reliable experimental finding termed the central fixation bias. This systematic tendency in eye guidance likely masks attentional selection driven by image properties and top-down cognitive processes. Here, we show that the central fixation bias can be reduced by delaying the initial saccade relative to image onset. In four scene-viewing experiments we manipulated observers' initial gaze position and delayed their first saccade by a specific time interval relative to the onset of an image. We analyzed the distance to image center over time and show that the central fixation bias of initial fixations was significantly reduced after delayed saccade onsets. We additionally show that selection of the initial saccade target strongly depended on the first saccade latency. A previously published model of saccade generation was extended with a central activation map on the initial fixation whose influence declined with increasing saccade latency. This extension was sufficient to replicate the central fixation bias from our experiments. Our results suggest that the central fixation bias is generated by default activation as a response to the sudden image onset and that this default activation pattern decreases over time. Thus, it may often be preferable to use a modified version of the scene viewing paradigm that decouples image onset from the start signal for scene exploration to explicitly reduce the central fixation bias.
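The model extension described above can be sketched as a mixture of an image-based activation map and a central Gaussian whose weight declines with first-saccade latency; the exponential decay, the Gaussian width, and all names below are assumptions for illustration, not the published model's exact form.

```python
# Hedged sketch: activation map for the first saccade target as a mixture
# of an image-based map and a central Gaussian whose weight decays with
# the latency of the first saccade.
import numpy as np

def first_saccade_map(image_map, latency_ms, tau_ms=300.0, center_sd_frac=0.2):
    h, w = image_map.shape
    y, x = np.mgrid[0:h, 0:w]
    center = np.exp(-(((x - w / 2) / (center_sd_frac * w)) ** 2
                      + ((y - h / 2) / (center_sd_frac * h)) ** 2) / 2.0)
    center /= center.sum()
    img = image_map / image_map.sum()
    weight = np.exp(-latency_ms / tau_ms)      # central influence decays
    combined = weight * center + (1.0 - weight) * img
    return combined / combined.sum()
```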


Subject(s)
Attention/physiology , Fixation, Ocular/physiology , Saccades/physiology , Adolescent , Adult , Eye Movements , Female , Humans , Male , Photic Stimulation/methods , Young Adult
16.
J Vis ; 17(12): 5, 2017 10 01.
Article in English | MEDLINE | ID: mdl-28983571

ABSTRACT

Our visual environment is full of texture ("stuff" like cloth, bark, or gravel, as distinct from "things" like dresses, trees, or paths), and humans are adept at perceiving subtle variations in material properties. To investigate image features important for texture perception, we psychophysically compare a recent parametric model of texture appearance (convolutional neural network [CNN] model) that uses the features encoded by a deep CNN (VGG-19) with two other models: the venerable Portilla and Simoncelli model and an extension of the CNN model in which the power spectrum is additionally matched. Observers discriminated model-generated textures from original natural textures in a spatial three-alternative oddity paradigm under two viewing conditions: when test patches were briefly presented to the near-periphery ("parafoveal") and when observers were able to make eye movements to all three patches ("inspection"). Under parafoveal viewing, observers were unable to discriminate 10 of 12 original images from CNN model images, and remarkably, the simpler Portilla and Simoncelli model performed slightly better than the CNN model (11 textures). Under foveal inspection, matching CNN features captured appearance substantially better than the Portilla and Simoncelli model (nine compared to four textures), and including the power spectrum improved appearance matching for two of the three remaining textures. None of the models we test here could produce indiscriminable images for one of the 12 textures under the inspection condition. While deep CNN (VGG-19) features can often be used to synthesize textures that humans cannot discriminate from natural textures, there is currently no uniformly best model for all textures and viewing conditions.


Subject(s)
Eye Movements/physiology , Neural Networks, Computer , Pattern Recognition, Visual/physiology , Visual Perception/physiology , Fovea Centralis/physiology , Humans , Photic Stimulation
17.
J Vis ; 17(12): 12, 2017 10 01.
Article in English | MEDLINE | ID: mdl-29053781

ABSTRACT

A large part of classical visual psychophysics was concerned with the fundamental question of how pattern information is initially encoded in the human visual system. From these studies a relatively standard model of early spatial vision emerged, based on spatial frequency and orientation-specific channels followed by an accelerating nonlinearity and divisive normalization: contrast gain-control. Here we implement such a model in an image-computable way, allowing it to take arbitrary luminance images as input. Testing our implementation on classical psychophysical data, we find that it explains contrast detection data including the ModelFest data, contrast discrimination data, and oblique masking data, using a single set of parameters. Leveraging the advantage of an image-computable model, we test our model against a recent dataset using natural images as masks. We find that the model explains these data reasonably well, too. To explain data obtained at different presentation durations, our model requires different parameters to achieve an acceptable fit. In addition, we show that contrast gain-control with the fitted parameters results in a very sparse encoding of luminance information, in line with notions from efficient coding. Translating the standard early spatial vision model to be image-computable resulted in two further insights: First, the nonlinear processing requires a denser sampling of spatial frequency and orientation than optimal coding suggests. Second, the normalization needs to be fairly local in space to fit the data obtained with natural image masks. Finally, our image-computable model can serve as a tool in future quantitative analyses: It allows optimized stimuli to be used to test the model and variants of it, with potential applications as an image-quality metric. In addition, it may serve as a building block for models of higher level processing.
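The cascade named above (oriented, SF-selective channels, an accelerating nonlinearity, and spatially local divisive normalization) can be sketched roughly as below; filter shapes, exponents, and constants are placeholders rather than the fitted parameters of the paper.

```python
# Hedged skeleton of an image-computable early spatial vision cascade:
# Gabor filter bank -> accelerating nonlinearity -> divisive normalization
# pooled across channels and only locally in space.
import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import gaussian_filter

def gabor_pair(sf_cpp, theta, sigma_pix):
    """Even/odd Gabor kernels at spatial frequency sf_cpp (cycles/pixel)."""
    size = int(6 * sigma_pix) // 2 * 2 + 1
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * sigma_pix**2))
    carrier = 2 * np.pi * sf_cpp * xr
    return env * np.cos(carrier), env * np.sin(carrier)

def channel_responses(img, sfs_cpp=(0.02, 0.05, 0.12), n_ori=8,
                      p=2.4, q=2.0, sigma_norm=0.05, pool_sigma_pix=8.0):
    img = img - img.mean()                        # crude contrast signal
    energy = []
    for sf in sfs_cpp:
        for k in range(n_ori):
            even, odd = gabor_pair(sf, k * np.pi / n_ori, sigma_pix=1.0 / (2 * sf))
            e = fftconvolve(img, even, mode="same")
            o = fftconvolve(img, odd, mode="same")
            energy.append(np.sqrt(e**2 + o**2))   # phase-invariant energy
    energy = np.stack(energy)                     # (channels, H, W)
    excitation = energy ** p                      # accelerating nonlinearity
    # Divisive normalization: pool across channels, but only locally in space.
    pool = gaussian_filter(energy ** q,
                           sigma=(0, pool_sigma_pix, pool_sigma_pix)).sum(axis=0)
    return excitation / (sigma_norm ** q + pool)
```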


Subject(s)
Computer Simulation , Contrast Sensitivity/physiology , Orientation/physiology , Pattern Recognition, Visual/physiology , Psychophysics/methods , Space Perception/physiology , Spatial Navigation/physiology , Humans
18.
Psychol Rev ; 124(4): 505-524, 2017 07.
Article in English | MEDLINE | ID: mdl-28447811

ABSTRACT

Dynamical models of cognition play an increasingly important role in driving theoretical and experimental research in psychology. Therefore, parameter estimation, model analysis and comparison of dynamical models are of essential importance. In this article, we propose a maximum likelihood approach for model analysis in a fully dynamical framework that includes time-ordered experimental data. Our methods can be applied to dynamical models for the prediction of discrete behavior (e.g., movement onsets); in particular, we use a dynamical model of saccade generation in scene viewing as a case study for our approach. For this model, the likelihood function can be computed directly by numerical simulation, which enables more efficient parameter estimation, including Bayesian inference to obtain reliable estimates and corresponding credible intervals. Using hierarchical models, inference is even possible for individual observers. Furthermore, our likelihood approach can be used to compare different models. In our example, the dynamical framework is shown to outperform nondynamical statistical models. Additionally, the likelihood-based evaluation differentiates between model variants that produced indistinguishable predictions on hitherto used statistics. Our results indicate that the likelihood approach is a promising framework for dynamical cognitive models.
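The general idea of computing a likelihood for time-ordered discrete data by numerical simulation can be sketched as follows; the interface, the toy model, and the histogram-based density are illustrative assumptions, and the paper's actual procedure for the saccade model is more elaborate.

```python
# Hedged sketch: at each step, simulate the model many times conditioned on
# the observed history, turn the simulated outcomes into a discrete
# probability map, and accumulate the log-probability of the observed event.
import numpy as np

def simulated_log_likelihood(simulate_next, observed, grid_shape,
                             n_sim=5000, eps=1e-12, rng=None):
    """observed: sequence of (row, col) fixation indices on a grid."""
    rng = np.random.default_rng() if rng is None else rng
    logL = 0.0
    for t in range(1, len(observed)):
        samples = simulate_next(observed[:t], n_sim, rng)   # (n_sim, 2)
        prob = np.zeros(grid_shape)
        np.add.at(prob, (samples[:, 0], samples[:, 1]), 1.0)
        prob = (prob + eps) / (prob + eps).sum()            # smoothed density
        logL += np.log(prob[tuple(observed[t])])
    return logL

# Toy model (purely illustrative): the next fixation is a noisy re-fixation
# of the previous location, clipped to the grid.
def toy_simulate_next(history, n_sim, rng, grid_shape=(60, 80), sd=5.0):
    last = np.asarray(history[-1], dtype=float)
    samples = rng.normal(last, sd, size=(n_sim, 2)).round().astype(int)
    samples[:, 0] = np.clip(samples[:, 0], 0, grid_shape[0] - 1)
    samples[:, 1] = np.clip(samples[:, 1], 0, grid_shape[1] - 1)
    return samples
```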


Subject(s)
Bayes Theorem , Cognition , Likelihood Functions , Models, Statistical , Computer Simulation , Humans
19.
J Vis ; 17(1): 37, 2017 01 01.
Article in English | MEDLINE | ID: mdl-28135347

ABSTRACT

Maximum likelihood difference scaling (MLDS) is a method for the estimation of perceptual scales based on the judgment of differences in stimulus appearance (Maloney & Yang, 2003). MLDS has recently also been used to estimate near-threshold discrimination performance (Devinck & Knoblauch, 2012). Using MLDS as a psychophysical method for sensitivity estimation is potentially appealing, because MLDS has been reported to need less data than forced-choice procedures, and naive observers in particular report preferring suprathreshold comparisons to JND-style threshold tasks. Here we compare two methods, MLDS and two-interval forced-choice (2-IFC), regarding their capability to estimate sensitivity assuming an underlying signal-detection model. We first examined the theoretical equivalence between both methods using simulations. We found that they disagreed in their estimation only when sensitivity was low, or when one of the assumptions on which MLDS is based was violated. Furthermore, we found that the confidence intervals derived from MLDS had low coverage; i.e., they were too narrow, underestimating the true variability. Subsequently we compared MLDS and 2-IFC empirically using a slant-from-texture task. The amount of agreement between sensitivity estimates from the two methods varied substantially across observers. We discuss possible reasons for the observed disagreements, most notably violations of the MLDS model assumptions. We conclude that in the present example MLDS and 2-IFC could equally be used to estimate sensitivity to differences in slant, with MLDS having the benefit of being more efficient and more pleasant for observers, but the disadvantage of unsatisfactory coverage.
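A minimal sketch of the common signal-detection forward model under which the two methods can be compared follows; the perceptual function, noise level, and trial structure are simplifications chosen for illustration, not the paper's exact simulation setup.

```python
# Hedged sketch: one internal response psi(x) + Gaussian noise drives both
# a 2-IFC judgment and an MLDS-style comparison of two stimulus differences.
import numpy as np

rng = np.random.default_rng(1)

def psi(x):
    return x ** 0.5              # illustrative compressive perceptual function

def trial_2ifc(x1, x2, noise_sd=0.05):
    """Observer picks the interval with the larger internal response."""
    r1 = psi(x1) + rng.normal(0, noise_sd)
    r2 = psi(x2) + rng.normal(0, noise_sd)
    return (r2 > r1) == (x2 > x1)               # True if the response is correct

def trial_mlds_quadruple(a, b, c, d, noise_sd=0.05):
    """Observer judges whether the pair (a, b) differs more than (c, d)."""
    ra, rb, rc, rd = psi(np.array([a, b, c, d])) + rng.normal(0, noise_sd, 4)
    return abs(ra - rb) > abs(rc - rd)
```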


Subject(s)
Choice Behavior , Likelihood Functions , Pattern Recognition, Visual/physiology , Adult , Female , Humans , Judgment , Male , Probability , Psychophysics , Signal Detection, Psychological , Young Adult
20.
Atten Percept Psychophys ; 79(3): 850-862, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28054276

ABSTRACT

When visual features in the periphery are close together they become difficult to recognize: something is present but it is unclear what. This is called "crowding". Here we investigated sensitivity to features in highly familiar shapes (letters) by applying spatial distortions. In Experiment 1, observers detected which of four peripherally presented (8 deg of retinal eccentricity) target letters was distorted (spatial 4AFC). The letters were presented either isolated or surrounded by four undistorted flanking letters, and distorted with one of two types of distortion at a range of distortion frequencies and amplitudes. The bandpass noise distortion ("BPN") technique causes spatial distortions in Cartesian space, whereas radial frequency distortion ("RF") causes shifts in polar coordinates. Detecting distortions in target letters was more difficult in the presence of flanking letters, consistent with the effect of crowding. The BPN distortion type showed evidence of tuning, with sensitivity to distortions peaking at approximately 6.5 c/deg for unflanked letters. The presence of flanking letters caused this peak to rise to approximately 8.5 c/deg. In contrast to the tuning observed for BPN distortions, RF distortion sensitivity increased as the radial frequency of distortion increased. In a series of follow-up experiments, we found that sensitivity to distortions was reduced when flanking letters were also distorted, that this held when observers were required to report which target letter was undistorted, and that this held when flanker distortions were always detectable. The perception of geometric distortions in letter stimuli is impaired by visual crowding.
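A bandpass-noise ("BPN") distortion in the spirit of the technique described above can be sketched as follows; the log-Gaussian filter shape, the bandwidth parameterisation, and all default values are assumptions for illustration, not the exact stimulus code used in the study.

```python
# Hedged sketch: two independent white-noise fields are bandpass filtered in
# the Fourier domain around a peak frequency and used as horizontal and
# vertical pixel displacement maps to warp a letter image.
import numpy as np
from scipy.ndimage import map_coordinates

def bpn_distort(img, freq_cpd, pix_per_deg, amplitude_pix,
                bandwidth_oct=1.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    h, w = img.shape
    fy = np.fft.fftfreq(h) * pix_per_deg            # cycles per degree
    fx = np.fft.fftfreq(w) * pix_per_deg
    f = np.hypot(*np.meshgrid(fx, fy))
    # Log-Gaussian bandpass filter centred on freq_cpd (assumed shape).
    filt = np.exp(-(np.log(np.where(f > 0, f, 1e-9) / freq_cpd)) ** 2
                  / (2 * (bandwidth_oct * np.log(2)) ** 2))
    filt[f == 0] = 0.0

    def noise_field():
        n = np.fft.ifft2(np.fft.fft2(rng.standard_normal((h, w))) * filt).real
        return amplitude_pix * n / np.abs(n).max()

    dy, dx = noise_field(), noise_field()
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    return map_coordinates(img, [yy + dy, xx + dx], order=1, mode="nearest")
```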


Subject(s)
Pattern Recognition, Visual/physiology , Reading , Space Perception/physiology , Vision, Binocular/physiology , Adult , Female , Humans , Male , Young Adult