1.
Front Psychol ; 13: 958787, 2022.
Article in English | MEDLINE | ID: mdl-36591105

ABSTRACT

Lightness Illusions (Contrast, Assimilation, and Natural Scenes with Edges and Gradients) show that appearances do not correlate with the light sent from the scene to the eye. Lightness Illusions begin with a control experiment that includes two identical Gray Regions-Of-Interest (GrayROIs) that have equal appearances in uniform surrounds. The Illusion experiment modifies "the rest of the scene" to make these GrayROIs appear different from each other. Our visual system performs complex spatial transformations of scene-luminance patterns using two independent spatial mechanisms: optical and neural. First, optical veiling glare transforms scene luminances into a different light pattern on the receptors, called retinal contrasts. This article provides a new Python program that calculates retinal contrast. Equal scene luminances become unequal retinal contrasts. Uniform scene segments become nonuniform retinal gradients; darker regions acquire substantial scattered light; and the retinal range of light changes. The glare on each receptor is the sum of the individual contributions from every other scene segment. Glare responds to the content of the entire scene; it is a scene-dependent optical transformation. Lightness Illusions are intended to demonstrate how our "brain sees" using simple, uniform patterns. However, the after-glare pattern of light on the receptors is a morass of high- and low-slope gradients. Quantitative measurements and pseudocolor renderings are needed to appreciate the magnitude and spatial patterns of glare, because glare's gradients are invisible on inspection. Illusions are generated by neural responses to "the rest of the scene." The neural network's input is the simultaneous array of all receptors' responses. Neural processing performs vision's second scene-dependent spatial transformation and generates appearances in Illusions and Natural Scenes. "Glare's Paradox" is that glare adds more redistributed light to GrayROIs that appear darker, and less light to those that appear lighter. This article describes nine experiments in which neural spatial-image processing overcompensates for the effects of glare. This article studies the first step in imaging: scene-dependent glare. Despite its near invisibility, glare modifies all quantitative measurements of images. This article reveals glare's modification of the input data used in quantitative image analysis, models of vision, and visual image-quality metrics. Glare redefines the challenges in modeling Lightness Illusions. Neural spatial processing is more powerful than we realized.
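
The optical step described above, in which each receptor's glare is the sum of scattered contributions from every other scene segment, amounts to convolving the scene-luminance array with a glare spread function. The sketch below illustrates that idea in Python under stated assumptions: it uses a simplified, radially symmetric power-law kernel rather than the CIE standard GSF, and every name and parameter is illustrative rather than taken from the article's program.

import numpy as np
from scipy.signal import fftconvolve

def glare_kernel(size, strength=0.1, falloff=2.0):
    """Simplified, radially symmetric glare spread function (illustrative,
    not the CIE standard): a central peak plus power-law scatter tails."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r = np.hypot(x, y)
    tails = 1.0 / (1.0 + r) ** falloff
    tails[half, half] = 0.0
    tails *= strength / tails.sum()       # fraction of light scattered to other receptors
    kernel = tails
    kernel[half, half] = 1.0 - strength   # fraction imaged directly
    return kernel

def retinal_contrast(scene_luminance, kernel):
    """Each receptor receives its direct light plus the summed scatter from
    every other scene segment (convolution with the glare kernel)."""
    retinal = fftconvolve(scene_luminance, kernel, mode="same")
    return retinal / retinal.max()        # express as relative retinal contrast

# Two identical GrayROIs in different surrounds end up with unequal
# retinal contrasts once glare redistributes the surround's light.
scene = np.full((256, 256), 0.2)
scene[:, :128] = 1.0                      # bright half of the surround
scene[64:96, 32:64] = 0.5                 # GrayROI in the bright surround
scene[64:96, 192:224] = 0.5               # identical GrayROI in the dark surround
retina = retinal_contrast(scene, glare_kernel(129))
print(retina[80, 48], retina[80, 208])    # unequal despite equal scene luminances

With this toy scene, the GrayROI sitting in the bright surround receives more scattered light than its twin in the dark surround, which is the direction of "Glare's Paradox" the abstract describes, since the ROI on the bright surround is the one that appears darker in a simultaneous-contrast display.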

2.
Front Psychol ; 8: 2079, 2017.
Article in English | MEDLINE | ID: mdl-29387023

ABSTRACT

This paper describes a computer program for calculating the contrast image on the human retina from an array of scene luminances. We used achromatic transparency targets and measured the test targets' luminances with meters. We used the CIE standard Glare Spread Function (GSF) to calculate the array of retinal contrasts. This paper describes the CIE standard, the calculation, and the analysis techniques used to compare the calculated retinal image with observer data. The paper also describes in detail the techniques for accurate measurement of HDR scenes, conversion of measurements to input data arrays, and calculation of the retinal image, including open-source MATLAB code; pseudocolor visualization of HDR images that exceed the range of standard displays; and comparison of observed sensations with retinal stimuli.
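
The pseudocolor visualization of HDR images that exceed a standard display's range can be sketched as a log-scaled false-color rendering. The fragment below is a rough Python analogue (the code published with the paper is MATLAB); the colormap, scaling, and file name are assumptions, not the authors' exact procedure.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm

def pseudocolor_hdr(luminance, out_path="retinal_contrast.png"):
    """Render an HDR luminance/contrast array whose dynamic range exceeds a
    standard display by mapping log-luminance onto a color scale."""
    floor = luminance[luminance > 0].min()
    positive = np.clip(luminance, floor, None)   # keep values valid for a log scale
    fig, ax = plt.subplots(figsize=(5, 4))
    im = ax.imshow(positive, cmap="turbo",
                   norm=LogNorm(vmin=positive.min(), vmax=positive.max()))
    fig.colorbar(im, ax=ax, label="relative luminance (log scale)")
    ax.set_axis_off()
    fig.savefig(out_path, dpi=200, bbox_inches="tight")
    plt.close(fig)

# Example: a 4-decade synthetic HDR ramp that an 8-bit display cannot show directly.
demo = np.logspace(0, 4, 256)[None, :] * np.ones((64, 1))
pseudocolor_hdr(demo)

The point of the false-color step is that equal ratios of luminance map to equal color steps, so gradients spanning four decades remain visible even though a conventional display can only reproduce two to three.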

3.
Biol Cybern ; 94(3): 192-214, 2006 Mar.
Article in English | MEDLINE | ID: mdl-16402245

ABSTRACT

In this paper we present a parallel artificial cortical network, inspired by the human visual system, which enhances the salient contours of an image. The network consists of independent processing elements organized into hypercolumns, which concurrently process the distinct orientations of all the edges of the image. These processing elements are a new set of orientation kernels appropriate for the discrete lattice of the hypercolumns. The Gestalt laws of proximity and continuity that describe the process of saliency extraction in the human brain are encoded by means of weights. These weights interconnect the kernels according to a novel connection pattern based on co-exponentiality. The output of every kernel is modulated by the outputs of its neighboring kernels according to a new affinity function, which takes into account the degree of difference between the facilitation of the kernel's two lobes. Saliency enhancement results from the local interactions between the kernels. The network was tested on real and synthetic images and shows promising results for both. Comparisons with other methods of the same scope demonstrate that the proposed method performs adequately. Furthermore, it exhibits O(N) complexity, with execution times faster than any reported by other methods so far, even though it runs on a conventional PC.


Subject(s)
Artificial Intelligence; Models, Neurological; Orientation; Pattern Recognition, Visual/physiology; Visual Cortex/physiology; Humans; Pattern Recognition, Automated
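
Below is a rough Python sketch of the orientation-decomposition and neighbor-facilitation ideas behind a network like the one in entry 3. It uses generic Gabor-like kernels and a crude averaging neighborhood as stand-ins for the paper's discrete-lattice kernels, co-exponential connection pattern, and affinity function, so it illustrates the general scheme rather than the published method.

import numpy as np
from scipy.ndimage import convolve

def orientation_kernels(n_orientations=8, size=9, sigma=2.0, freq=0.25):
    """Generic Gabor-like edge detectors, one per hypercolumn orientation.
    These are illustrative stand-ins for the paper's orientation kernels."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    kernels = []
    for k in range(n_orientations):
        theta = np.pi * k / n_orientations
        xr = x * np.cos(theta) + y * np.sin(theta)
        yr = -x * np.sin(theta) + y * np.cos(theta)
        g = (np.exp(-(xr**2 / (2 * sigma**2) + yr**2 / (2 * (2 * sigma)**2)))
             * np.cos(2 * np.pi * freq * xr))
        kernels.append(g - g.mean())      # zero mean, so flat regions respond weakly
    return kernels

def saliency(image, kernels, facilitation=0.5):
    """Concurrent responses of all orientation kernels, followed by a simple
    local facilitation that boosts responses supported by their neighborhood
    (a crude stand-in for the Gestalt proximity/continuity weights)."""
    responses = np.stack([np.abs(convolve(image, k, mode="nearest")) for k in kernels])
    box = np.ones((5, 5)) / 25.0
    support = np.stack([convolve(r, box, mode="nearest") for r in responses])
    modulated = responses * (1.0 + facilitation * support)
    return modulated.max(axis=0)          # winning orientation response per pixel

image = np.zeros((128, 128))
image[40:90, 40:90] = 1.0                 # synthetic square: its contour should pop out
sal = saliency(image, orientation_kernels())
print(sal.shape, sal.max())

On the synthetic square, the contour pixels should end up with the largest modulated responses, which is the qualitative behavior the paper's saliency enhancement aims for; the published method additionally exploits the hypercolumn lattice and affinity function to achieve its O(N) running time.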