Results 1 - 7 of 7
1.
Opt Express ; 27(18): 25611-25633, 2019 Sep 02.
Article in English | MEDLINE | ID: mdl-31510431

ABSTRACT

With a very simple implementation, regression-based color constancy (CC) methods have recently achieved very competitive performance by applying a correction matrix to the results of low-level CC algorithms. However, most regression-based methods, e.g., Corrected Moment (CM), apply the same correction matrix to all test images. Considering that the captured image color is determined by various factors (e.g., illuminant and surface reflectance), it is clearly suboptimal to apply the same correction to different test images without accounting for the intrinsic differences among them. In this work, we first mathematically analyze the key factors that influence the performance of regression-based CC, and then design principled rules to automatically select suitable training images and learn an optimal correction matrix for each test image. With this strategy, the original regression-based CC (e.g., CM) is clearly improved and obtains more competitive performance on four widely used benchmark datasets. Although this work focuses on improving the regression-based CM method, a noteworthy aspect of the proposed automatic training-data selection strategy is its applicability to several representative regression-based approaches to the color constancy problem.
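The per-image correction idea above can be sketched as follows: for each test image, select the training images whose low-level illuminant estimates lie closest (in angle) to the test image's estimate, then fit a 3x3 correction matrix by least squares. This is a minimal illustration of the general strategy, not the paper's exact selection rules; the function names and the choice of k are assumptions.

```python
import numpy as np

def angular_distance(a, b):
    # Angle (radians) between two RGB illuminant vectors.
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def select_training_images(est_test, est_train, k=3):
    # Pick the k training images whose low-level CC estimates are
    # closest in angle to the test image's estimate.
    d = np.array([angular_distance(est_test, e) for e in est_train])
    return np.argsort(d)[:k]

def learn_correction_matrix(est, gt):
    # Least-squares 3x3 matrix M such that gt ~= est @ M.
    M, *_ = np.linalg.lstsq(est, gt, rcond=None)
    return M

def corrected_estimate(est_test, M):
    # Apply the per-image correction and renormalize.
    e = est_test @ M
    return e / np.linalg.norm(e)
```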

2.
IEEE Trans Image Process ; 28(11): 5580-5595, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31180853

ABSTRACT

We propose an underwater image enhancement model inspired by the morphology and function of the teleost fish retina, aiming to address the degradation of underwater images caused by blurring and nonuniform color bias. In particular, the feedback from color-sensitive horizontal cells to cones, together with a red-channel compensation, corrects the nonuniform color bias. The center-surround opponent mechanism of the bipolar cells, and the feedback from amacrine cells to interplexiform cells and then to horizontal cells, serve to enhance the edges and contrast of the output image. The ganglion cells, with their color-opponent mechanism, are used for color enhancement and color correction. Finally, we adopt a luminance-based fusion strategy to reconstruct the enhanced image from the outputs of the ON and OFF pathways of the fish retina. Our model uses global statistics (i.e., image contrast) to automatically guide the design of each low-level filter, making the main parameters self-adaptive. Extensive qualitative and quantitative evaluations on various underwater scenes validate the competitive performance of our technique. Our model also significantly improves the accuracy of transmission-map estimation and local feature-point matching on underwater images. Our method is a single-image approach that requires no specialized prior on the underwater condition or scene structure.


Subject(s)
Color Vision/physiology , Image Processing, Computer-Assisted/methods , Models, Neurological , Retina/physiology , Algorithms , Animals , Fishes/physiology , Retinal Ganglion Cells/physiology , Signal Processing, Computer-Assisted , Water/physiology
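The red-channel compensation step mentioned above can be illustrated with a common heuristic: transfer energy from the green channel into the attenuated red channel in proportion to their mean difference. This is a simplified stand-in, not the paper's exact horizontal-cell feedback; the formula and the alpha parameter are assumptions.

```python
import numpy as np

def red_channel_compensation(img, alpha=1.0):
    # img: float RGB image in [0, 1], shape (H, W, 3). Underwater
    # scenes attenuate red strongly; borrow energy from the green
    # channel where red is weak (a common compensation heuristic,
    # not the paper's exact horizontal-cell feedback).
    r, g = img[..., 0], img[..., 1]
    comp = r + alpha * (g.mean() - r.mean()) * (1.0 - r) * g
    out = img.copy()
    out[..., 0] = np.clip(comp, 0.0, 1.0)
    return out
```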
3.
IEEE Trans Image Process ; 28(9): 4387-4400, 2019 Sep.
Article in English | MEDLINE | ID: mdl-30946665

ABSTRACT

Multi-illuminant color constancy (MCC) is a challenging task. In this paper, we propose a novel model, motivated by the bottom-up and top-down mechanisms of the human visual system (HVS), to estimate the spatially varying illumination in a scene. The motivation for the bottom-up estimation comes from our finding that the bright and dark parts of a scene play different roles in encoding illuminants. However, handling the color shift of large colorful objects is difficult with pure bottom-up processing. We therefore introduce a top-down constraint inspired by findings in visual psychophysics, in which high-level information (e.g., a prior on light source colors) plays a key role in visual color constancy. To implement the top-down hypothesis, we simply learn a color mapping between the illuminant distribution estimated by bottom-up processing and the ground-truth maps provided by the dataset. We evaluated our model on four datasets, and the results show that it obtains very competitive performance compared with state-of-the-art MCC algorithms. Moreover, the robustness of our model is underscored by the fact that all results were obtained with either the same parameters across datasets or parameters learned from the inputs, mimicking how the HVS operates. We also show color correction results on real-world images taken from the web.
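The top-down color mapping described above can be sketched as a learned linear map from bottom-up illuminant maps to ground-truth maps. The ridge-regularized 3x3 form used here is an assumption for illustration; the paper's learned mapping may be richer.

```python
import numpy as np

def learn_topdown_mapping(bu_maps, gt_maps, lam=1e-3):
    # Fit a ridge-regularized 3x3 linear map M taking per-pixel
    # bottom-up illuminant estimates to the ground-truth maps
    # (a minimal stand-in for the paper's learned color mapping).
    X = np.concatenate([m.reshape(-1, 3) for m in bu_maps])
    Y = np.concatenate([m.reshape(-1, 3) for m in gt_maps])
    M = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ Y)
    return M

def apply_mapping(bu_map, M):
    # Refine an (H, W, 3) bottom-up illuminant map with the
    # learned top-down mapping.
    return (bu_map.reshape(-1, 3) @ M).reshape(bu_map.shape)
```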

4.
J Opt Soc Am A Opt Image Sci Vis ; 34(8): 1448-1462, 2017 Aug 01.
Article in English | MEDLINE | ID: mdl-29036112

ABSTRACT

Recovering the true scene colors from a color-biased image, by simultaneously discounting the effects of the scene illuminant and the camera spectral sensitivity (CSS), is an ill-posed problem. Most color constancy (CC) models first estimate the illuminant color, which is then removed from the color-biased image to obtain an image as if taken under white light, without explicitly considering the effect of CSS on CC. This paper first studies the CSS effect on illuminant estimation in inter-dataset CC (inter-CC), i.e., training a CC model on one dataset and testing it on another captured with a distinct CSS. We show a clear degradation of existing CC models in the inter-CC setting. We then propose a simple way to overcome this degradation: quickly learn a transform matrix between the two distinct CSSs (CSS-1 and CSS-2), use the learned matrix to convert the data rendered under CSS-1 (both the illuminant ground truth and the color-biased images) into CSS-2, and then train and apply the CC model on the color-biased images under CSS-2, without the burdensome acquisition of a training set under CSS-2. Extensive experiments on synthetic and real images show that our method clearly improves the inter-CC performance of traditional CC algorithms. We suggest that, by taking the CSS effect into account, one is more likely to obtain truly color-constant images that are invariant to changes in both the illuminant and the camera sensor.
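The core of the pipeline above, learning a 3x3 transform between two camera spectral sensitivities and using it to convert CSS-1 data into CSS-2 space, can be sketched with a least-squares fit. This is an illustrative sketch; the paper's exact learning procedure may differ.

```python
import numpy as np

def learn_css_transform(resp_css1, resp_css2):
    # resp_cssX: (N, 3) camera responses of two sensors (CSS-1 and
    # CSS-2) to the same N color patches. Solve
    # resp_css2 ~= resp_css1 @ T for a 3x3 transform T.
    T, *_ = np.linalg.lstsq(resp_css1, resp_css2, rcond=None)
    return T

def convert_to_css2(data_css1, T):
    # Map an (H, W, 3) image or an (N, 3) set of illuminants
    # rendered under CSS-1 into CSS-2 response space.
    return (data_css1.reshape(-1, 3) @ T).reshape(data_css1.shape)
```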

5.
IEEE Trans Image Process ; 25(3): 1219-32, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26766375

ABSTRACT

In this paper, we propose a novel model for computational color constancy, inspired by the remarkable ability of the human visual system (HVS) to perceive the colors of objects as largely constant while the light source color changes. The proposed model imitates the color processing mechanisms of the retina, the first stage of the HVS, from the adaptation emerging in the layers of cone photoreceptors and horizontal cells (HCs) to the color-opponent mechanism and the disinhibition effect of the non-classical receptive field in the layer of retinal ganglion cells (RGCs). In particular, HC modulation provides a global color correction with cone-specific lateral gain control, and the subsequent RGCs refine the processing with iterative adaptation until all three opponent channels reach their stable states (i.e., produce stable outputs). Instead of explicitly estimating the scene illuminant(s), as most existing algorithms do, our model directly removes the effect of the scene illuminant. Evaluations on four commonly used color constancy datasets show that the proposed model produces competitive results compared with state-of-the-art methods, for scenes under either single or multiple illuminants. The results indicate that single opponency, especially the disinhibitory effect emerging in the subunit-structured receptive-field surround of RGCs, plays an important role in removing the scene illuminant(s) by inherently distinguishing the spatial structure of surfaces from extended illuminant(s).


Subject(s)
Color Perception/physiology , Image Processing, Computer-Assisted/methods , Models, Neurological , Retina/physiology , Retinal Ganglion Cells/physiology , Algorithms , Humans
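The iterate-until-stable adaptation described in the abstract can be illustrated with a deliberately simplified, gray-world-style loop. The real model uses local, receptive-field-based gains; this sketch assumes a single global gain per channel and keeps only the idea of iterating until the three channels stabilize.

```python
import numpy as np

def iterative_adaptation(img, max_iter=50, tol=1e-6):
    # Repeatedly rescale each cone channel by a global gain derived
    # from the channel means, stopping once the update is below tol
    # (i.e., all three channels have reached a stable state).
    out = img.astype(float).copy()
    for _ in range(max_iter):
        means = out.mean(axis=(0, 1))
        gains = means.mean() / np.maximum(means, 1e-8)
        new = out * gains
        if np.abs(new - out).max() < tol:
            out = new
            break
        out = new
    return out
```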
6.
IEEE Trans Pattern Anal Mach Intell ; 37(10): 1973-85, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26353182

ABSTRACT

The double-opponent (DO) color-sensitive cells in the primary visual cortex (V1) of the human visual system (HVS) have long been recognized as a physiological basis of color constancy. In this work we propose a new color constancy model that imitates the functional properties of the HVS from the single-opponent (SO) cells in the retina to the DO cells in V1 and possible neurons in higher visual cortices. The idea behind the proposed double-opponency-based color constancy (DOCC) model originates from the observation that the color distribution of the DO cells' responses to color-biased images coincides well with the vector denoting the light source color. The illuminant color is then easily estimated by pooling the responses of DO cells in separate channels of LMS space, using a sum or max pooling mechanism. Extensive evaluations on three commonly used datasets (including tests with dataset-dependent optimal parameters as well as intra- and inter-dataset cross-validation) show that our physiologically inspired DOCC model produces quite competitive results compared with state-of-the-art approaches, yet with a relatively simple implementation and without requiring fine-tuning for each dataset.


Subject(s)
Color Perception/physiology , Models, Theoretical , Visual Cortex/physiology , Computational Biology , Humans , Retina/cytology
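The pooling step of the DOCC model, estimating the illuminant by summing or taking the maximum of rectified DO responses per channel, can be sketched as follows. The input layout and rectification are assumptions; computing the DO responses themselves is outside this sketch.

```python
import numpy as np

def estimate_illuminant(do_responses, pooling="sum"):
    # do_responses: (H, W, 3) double-opponent responses, one spatial
    # map per (L, M, S) channel. Rectify, pool each channel over
    # space with sum or max, and return the unit-norm pooled vector,
    # which is taken to point along the light source color.
    flat = np.maximum(do_responses.reshape(-1, 3), 0.0)
    e = flat.sum(axis=0) if pooling == "sum" else flat.max(axis=0)
    return e / np.linalg.norm(e)
```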
7.
Front Comput Neurosci ; 9: 151, 2015.
Article in English | MEDLINE | ID: mdl-26733857

ABSTRACT

The mammalian retina appears far smarter than scientists have long believed. Inspired by the visual processing mechanisms in the retina, from the layer of photoreceptors to the layer of retinal ganglion cells (RGCs), we propose a computational model for haze removal from a single input image, an important problem in image enhancement. In particular, the bipolar cells serve to roughly remove the low-frequency component of the haze, and the amacrine cells modulate the output of the cone bipolar cells to compensate for the loss of detail by increasing image contrast. The RGCs, with their disinhibitory receptive-field surround, then refine the local haze removal and enhance image detail. Results on a variety of real-world and synthetic hazy images show that the proposed model yields results comparable to, or even better than, state-of-the-art methods, with the advantage of simultaneously dehazing and enhancing a single hazy image through a simple and straightforward implementation.
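The bipolar/amacrine stages described above (removing a low-frequency haze estimate and boosting the residual detail) can be roughly sketched as an unsharp-masking-style operation on a single grayscale channel. This is a crude stand-in for the retinal circuitry, with the box blur and the strength parameter as assumptions.

```python
import numpy as np

def low_pass(img, k=7):
    # Crude k x k box blur (stands in for the bipolar-cell
    # low-frequency pathway); img is a 2D float array.
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def dehaze(img, strength=1.0, k=7):
    # Estimate the low-frequency (haze-like) component, then
    # re-amplify the residual detail, standing in for the
    # amacrine/RGC contrast compensation.
    base = low_pass(img, k)
    detail = img - base
    return np.clip(img + strength * detail, 0.0, 1.0)
```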
