Results 1 - 5 of 5
1.
Behav Brain Sci ; 46: e389, 2023 Dec 06.
Article in English | MEDLINE | ID: mdl-38054295

ABSTRACT

Bowers et al. argue against deep neural networks (DNNs) as good models of human visual perception. From our color perspective, we feel their view rests on three misconceptions: a misrepresentation of the state of the art of color perception; a misjudgment of the type of model required to move the field forward; and the attribution of shortcomings to DNN research that are already being resolved.


Subject(s)
Neural Networks, Computer , Visual Perception , Humans , Color Perception , Emotions , Social Perception
2.
Elife ; 11, 2022 12 13.
Article in English | MEDLINE | ID: mdl-36511778

ABSTRACT

Color is a prime example of categorical perception, yet it is unclear why and how color categories emerge. On the one hand, prelinguistic infants and several animal species treat color categorically. On the other hand, recent modeling endeavors have successfully used communicative concepts as the driving force behind color categories. Rather than modeling categories directly, we investigate whether color categories can emerge as a result of acquiring visual skills. Specifically, we asked whether color is represented categorically in a convolutional neural network (CNN) trained to recognize objects in natural images. We systematically trained new output layers on top of the CNN for a color classification task and, probing novel colors, found category borders that are largely invariant to the training colors. The border locations were confirmed using an evolutionary algorithm that relies on the principle of categorical perception. A psychophysical experiment on human observers, analogous to our primary CNN experiment, showed that these borders agree to a large degree with human category boundaries. These results provide evidence that the development of basic visual skills can contribute to the emergence of a categorical representation of color.
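To make the probing setup concrete, here is a minimal, hypothetical sketch (not the authors' code): an ImageNet-pretrained backbone is frozen, a new output layer is trained on a small number of color classes, and novel colors are then probed to locate category borders. The choice of ResNet-18, the number of classes, and the colored_patch helper are illustrative assumptions.

```python
# Hypothetical sketch: new color-classification readout on a frozen object-recognition CNN.
import torch
import torch.nn as nn
from torchvision import models

n_color_classes = 4  # assumed number of training color categories

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():          # freeze the object-recognition features
    p.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, n_color_classes)  # new output layer

def colored_patch(rgb, size=224):
    """Uniform patch of a single RGB color, shaped as a network input batch."""
    return torch.tensor(rgb).view(3, 1, 1).expand(3, size, size).unsqueeze(0).float()

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# ... train backbone.fc on uniformly colored stimuli with known category labels ...

# Probing: sweep novel colors and record the predicted category; a border lies
# wherever the argmax prediction flips between neighboring colors.
backbone.eval()
with torch.no_grad():
    prediction = backbone(colored_patch([0.2, 0.5, 0.8])).argmax(dim=1)
```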


Subject(s)
Neural Networks, Computer , Visual Perception , Animals , Infant , Humans , Communication , Color
3.
J Vis ; 22(4): 17, 2022 03 02.
Article in English | MEDLINE | ID: mdl-35353153

ABSTRACT

Color constancy is our ability to perceive constant colors across varying illuminations. Here, we trained deep neural networks to be color constant and evaluated their performance with varying cues. Inputs to the networks consisted of two-dimensional images of simulated cone excitations derived from three-dimensional (3D) rendered scenes of 2,115 different 3D shapes, with the spectral reflectances of 1,600 different Munsell chips, illuminated under 278 different natural illuminations. The models were trained to classify the reflectance of the objects. Testing used four novel illuminations with equally spaced CIE L*a*b* chromaticities, two along the daylight locus and two orthogonal to it. High levels of color constancy were achieved with different deep neural networks, and constancy was higher along the daylight locus. When cues were gradually removed from the scene, constancy decreased. Both ResNets and classical ConvNets of varying degrees of complexity performed well. However, DeepCC, our simplest sequential convolutional network, represented colors along the three color dimensions of human color vision, while ResNets showed a more complex representation.
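As a rough illustration of the kind of input described above, the snippet below computes simulated cone excitations for one surface under one illuminant. All spectra here are random stand-ins, not the Munsell reflectances, natural illuminations, or rendered 3D scenes used in the study.

```python
# Stand-in spectral computation of LMS cone excitations (illustrative only).
import numpy as np

wavelengths = np.arange(400, 701, 10)                      # nm, coarse sampling
reflectance = np.random.rand(len(wavelengths))             # stand-in surface reflectance
illuminant = np.random.rand(len(wavelengths))              # stand-in illumination spectrum
cone_sensitivities = np.random.rand(3, len(wavelengths))   # stand-in L, M, S curves

# Cone excitation = sum over wavelength of reflectance * illuminant * sensitivity
lms = cone_sensitivities @ (reflectance * illuminant)      # shape (3,)
```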


Subject(s)
Color Perception , Color Vision , Humans , Lighting , Photic Stimulation , Retinal Cone Photoreceptor Cells
4.
Vision Res ; 182: 89-100, 2021 05.
Article in English | MEDLINE | ID: mdl-33611127

ABSTRACT

In this work, we examined the color tuning of units in the hidden layers of the AlexNet, VGG-16, and VGG-19 convolutional neural networks and its relevance for the successful recognition of an object. We first selected the patches to which each unit is maximally responsive among the 1.2 million images of the ImageNet training dataset. We segmented these patches using a k-means clustering algorithm on their chromatic distribution. We then independently varied the color of these segments, in both hue and chroma, to measure the units' chromatic tuning. The models exhibited properties at times similar to, and at times opposed to, the known chromatic processing of biological systems. We found that, similarly to the most anterior occipital visual areas in primates, the last convolutional layer exhibited high color sensitivity. We also found the gradual emergence of single-opponent to double-opponent kernels. Contrary to cells in the visual system, however, these kernels were selective for hues that gradually shift from being broadly distributed in early layers to falling mainly along the blue-orange axis in late layers. In addition, we found that the classification performance of our models varied with the color of the stimuli in a way that followed the kernels' chromatic properties. Performance was highest for the colors to which the kernels responded maximally, and images that activated color-sensitive kernels were more likely to be misclassified when their color was changed. These observations were shared by all three networks, suggesting that they are general properties of current convolutional neural networks trained for object recognition.
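The following sketch illustrates the general patch-segmentation and hue-manipulation idea under simplifying assumptions: k-means is run on raw RGB pixel values of a random stand-in patch rather than on the chromatic distributions of actual maximally activating ImageNet crops, and the hue of one segment is rotated before the patch would be fed back through the network.

```python
# Illustrative sketch: segment a patch by pixel color, then rotate one segment's hue.
import numpy as np
from sklearn.cluster import KMeans
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

patch = np.random.rand(64, 64, 3)                      # stand-in for a maximally activating crop
pixels = patch.reshape(-1, 3)

labels = KMeans(n_clusters=3, n_init=10).fit_predict(pixels)   # chromatic segments
segment = labels.reshape(64, 64) == 0                  # pick one segment

hsv = rgb_to_hsv(patch)
hsv[segment, 0] = (hsv[segment, 0] + 0.25) % 1.0       # rotate the segment's hue
recolored = hsv_to_rgb(hsv)
# `recolored` would then be passed through the network to measure how the
# unit's activation changes as a function of the hue shift.
```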


Subject(s)
Neural Networks, Computer , Visual Perception , Algorithms , Color
5.
J Opt Soc Am A Opt Image Sci Vis ; 35(4): B334-B346, 2018 Apr 01.
Article in English | MEDLINE | ID: mdl-29603962

ABSTRACT

Deep convolutional neural networks are a class of machine-learning algorithms capable of solving non-trivial tasks, such as object recognition, with human-like performance. Little is known about the exact computations that deep neural networks learn, and to what extent these computations are similar to those performed by the primate brain. Here, we investigate how color information is processed in the different layers of the AlexNet deep neural network, originally trained to classify over 1.2 million images of objects in their natural contexts. We found that the color-responsive units in the first layer of AlexNet learned linear features and were broadly tuned to two directions in color space, analogous to what is known of color-responsive cells in the primate thalamus. Moreover, these directions were decorrelated and led to statistically efficient representations, similar to the cardinal directions of the second-stage color mechanisms in primates. We also found that, in analogy to the early stages of the primate visual system, chromatic and achromatic information was segregated in the early layers of the network. Units in the higher layers of AlexNet exhibited, on average, lower responsivity to color than units at earlier stages.
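One simple, hypothetical way to inspect first-layer color tuning is sketched below: each learned AlexNet filter is split into an achromatic (channel-mean) component and a chromatic residual, and the fraction of energy in the chromatic part is compared across filters. This is an illustrative analysis under our own assumptions, not the paper's exact procedure.

```python
# Illustrative analysis: chromatic vs. achromatic energy of AlexNet's first-layer filters.
import torch
from torchvision import models

alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
w = alexnet.features[0].weight.detach()          # first conv layer, shape (64, 3, 11, 11)

achromatic = w.mean(dim=1, keepdim=True)         # luminance-like component per filter
chromatic = w - achromatic                       # color-opponent residual

chromatic_ratio = chromatic.pow(2).sum(dim=(1, 2, 3)) / w.pow(2).sum(dim=(1, 2, 3))
print(chromatic_ratio.sort(descending=True).values[:5])   # most color-driven filters
```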
