1.
IEEE Trans Pattern Anal Mach Intell ; 42(5): 1286-1287, 2020 May.
Article in English | MEDLINE | ID: mdl-31265383

ABSTRACT

The ColorChecker dataset is one of the most widely used image sets for evaluating and ranking illuminant estimation algorithms. However, this single set of images has at least three different sets of ground truth (i.e., correct answers) associated with it. In the literature, it is often asserted that one algorithm is better than another when the algorithms in question have been tuned and tested with different ground truths. In this short correspondence, we present some of the background as to why the three existing ground truths differ and go on to propose a new, single, recommended set of correct answers. Experiments reinforce the importance of this work: we show that the total ordering of a set of algorithms may be reversed depending on whether the new or the legacy ground-truth data is used.
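The practical point above, that algorithm rankings can flip with the choice of ground truth, can be illustrated with a small sketch in Python/NumPy. The helper names and the two ground-truth arrays gt_legacy and gt_new are hypothetical; only the angular-error formula follows the standard definition.

```python
import numpy as np

def angular_error(est, gt):
    """Angle in degrees between estimated and ground-truth illuminant RGBs."""
    est = est / np.linalg.norm(est, axis=-1, keepdims=True)
    gt = gt / np.linalg.norm(gt, axis=-1, keepdims=True)
    cos = np.clip(np.sum(est * gt, axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def rank_algorithms(estimates, ground_truth):
    """Rank algorithms (best first) by median angular error over a dataset.

    estimates:    dict name -> (N, 3) array of per-image illuminant estimates
    ground_truth: (N, 3) array of per-image ground-truth illuminants
    """
    medians = {name: np.median(angular_error(e, ground_truth))
               for name, e in estimates.items()}
    return sorted(medians, key=medians.get)

# The same estimates ranked against two different ground-truth sets
# (e.g. a legacy set and a re-derived one) can yield different orderings:
# ranking_legacy = rank_algorithms(estimates, gt_legacy)
# ranking_new    = rank_algorithms(estimates, gt_new)
```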

2.
IEEE Trans Pattern Anal Mach Intell ; 39(7): 1482-1488, 2017 Jul.
Article in English | MEDLINE | ID: mdl-27333601

ABSTRACT

The angle between the RGBs of the measured and estimated illuminant colors, the recovery angular error, has been used to evaluate the performance of illuminant estimation algorithms. However, we noticed that this metric is not in line with how the illuminant estimates are used. Normally, the illuminant estimates are "divided out" from the image to, hopefully, provide image colors that are not confounded by the color of the light. However, even when the reproduction result is the same, the same scene can have a large range of recovery errors. In this work, the scale of the problem with the recovery error is quantified. Next, we propose a new metric for evaluating illuminant estimation algorithms, called the reproduction angular error, which is defined as the angle between the RGBs of a white surface when the actual and the estimated illuminations are "divided out". Our new metric ties algorithm performance to how the illuminant estimates are used. For a given algorithm, adopting the new reproduction angular error leads to different optimal parameters. Further, the ranked list of best-to-worst algorithms changes when the reproduction angular error is used. The importance of using an appropriate performance metric is established.
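A minimal sketch of the two metrics discussed above, assuming unnormalized RGB illuminant vectors and a von Kries style per-channel correction; the function names are illustrative.

```python
import numpy as np

def recovery_angular_error(est, gt):
    """Angle between estimated and measured illuminant RGBs (degrees)."""
    cos = np.dot(est, gt) / (np.linalg.norm(est) * np.linalg.norm(gt))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def reproduction_angular_error(est, gt):
    """Angle between the white surface reproduced with the estimate
    'divided out' and the ideal achromatic white [1, 1, 1] (degrees)."""
    reproduced = np.asarray(gt, float) / np.asarray(est, float)  # per-channel correction of a white patch
    white = np.ones(3)
    cos = np.dot(reproduced, white) / (np.linalg.norm(reproduced) * np.linalg.norm(white))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```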

3.
IEEE Trans Image Process ; 23(9): 3855-68, 2014 Sep.
Article in English | MEDLINE | ID: mdl-25051548

ABSTRACT

The aim of color constancy is to remove the effect of the color of the light source. As color constancy is inherently an ill-posed problem, most existing color constancy algorithms are based on specific imaging assumptions (e.g., the gray-world and white-patch assumptions). In this paper, 3D geometry models are used to determine which color constancy method to use for the different geometrical regions (depth/layer) found in images. The aim is to classify images into stages (rough 3D geometry models). According to the stage models, images are divided into stage regions using hard and soft segmentation. After that, the best color constancy method is selected for each geometry depth. To this end, we propose a method to combine color constancy algorithms by investigating the relation between depth, local image statistics, and color constancy. Image statistics are then exploited per depth to select the proper color constancy method. Our approach opens the possibility of estimating multiple illuminants by distinguishing nearby light sources from distant illumination. Experiments on state-of-the-art data sets show that the proposed algorithm outperforms state-of-the-art single color constancy algorithms with an improvement of almost 50% in median angular error. When using a perfect classifier (i.e., all test images are correctly classified into stages), the proposed method achieves an improvement of 52% in median angular error compared with the best-performing single color constancy algorithm.
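As a rough illustration of the per-depth selection idea, the sketch below applies a (possibly different) standard estimator to each segmented region. The stage segmentation and the statistics-driven method selection are assumed to exist elsewhere; all names here are hypothetical and not the paper's implementation.

```python
import numpy as np

def gray_world(region):
    """Estimate the illuminant of an (N, 3) pixel region as its mean RGB."""
    return region.mean(axis=0)

def white_patch(region):
    """Estimate the illuminant as the per-channel maximum (max-RGB)."""
    return region.max(axis=0)

def estimate_per_region(image, labels, chooser):
    """Apply a per-region estimator to each depth/stage region.

    image:   (H, W, 3) float array
    labels:  (H, W) integer region labels from a stage segmentation
    chooser: maps a region label to an estimator, e.g. {0: gray_world, 1: white_patch}
    """
    estimates = {}
    for lab in np.unique(labels):
        pixels = image[labels == lab]
        estimates[lab] = chooser[lab](pixels)
    return estimates
```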

4.
J Opt Soc Am A Opt Image Sci Vis ; 30(9): 1871-84, 2013 Sep 01.
Article in English | MEDLINE | ID: mdl-24323269

ABSTRACT

We measure the color fidelity of visual scenes that are rendered under different (simulated) illuminants and shown on a calibrated LCD display. Observers make triad illuminant comparisons involving the renderings from two chromatic test illuminants and one achromatic reference illuminant shown simultaneously. Four chromatic test illuminants are used: two along the daylight locus (yellow and blue), and two perpendicular to it (red and green). The observers select the rendering having the best color fidelity, thereby indirectly judging which of the two test illuminants induces the smallest color differences compared to the reference. Both multicolor test scenes and natural scenes are studied. The multicolor scenes are synthesized and represent ellipsoidal distributions in CIELAB chromaticity space having the same mean chromaticity but different chromatic orientations. We show that, for those distributions, color fidelity is best when the vector of the illuminant change (pointing from neutral to chromatic) is parallel to the major axis of the scene's chromatic distribution. For our selection of natural scenes, which generally have much broader chromatic distributions, we measure a higher color fidelity for the yellow and blue illuminants than for red and green. Scrambled versions of the natural images are also studied to exclude possible semantic effects. We quantitatively predict the average observer response (i.e., the illuminant probability) with four types of models, differing in the extent to which they incorporate information processing by the visual system. Results show different levels of performance for the models, and different levels for the multicolor scenes and the natural scenes. Overall, models based on the scene averaged color difference have the best performance. We discuss how color constancy algorithms may be improved by exploiting knowledge of the chromatic distribution of the visual scene.


Subject(s)
Color Perception/physiology ; Image Processing, Computer-Assisted/methods ; Adult ; Algorithms ; Color ; Computer Graphics ; Computer Simulation ; Humans ; Male ; Middle Aged ; Normal Distribution ; Observer Variation ; User-Computer Interface ; Vision, Ocular
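The best-performing model class in the abstract above, prediction from the scene-averaged color difference, can be sketched as below. This assumes sRGB renderings in [0, 1] and the simple CIE76 difference; the paper's exact model and color-difference formula may differ.

```python
import numpy as np
from skimage.color import rgb2lab, deltaE_cie76

def scene_averaged_delta_e(rendering_test, rendering_ref):
    """Mean CIE76 color difference between two sRGB renderings of a scene.

    Both inputs are (H, W, 3) float arrays in [0, 1]. Under this simple model,
    a lower average difference would correspond to higher predicted fidelity.
    """
    lab_test = rgb2lab(rendering_test)
    lab_ref = rgb2lab(rendering_ref)
    return float(deltaE_cie76(lab_ref, lab_test).mean())
```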
5.
IEEE Trans Pattern Anal Mach Intell ; 34(5): 918-29, 2012 May.
Article in English | MEDLINE | ID: mdl-22442121

ABSTRACT

Edge-based color constancy methods make use of image derivatives to estimate the illuminant. However, different edge types exist in real-world images, such as material, shadow, and highlight edges, and these edge types may have a distinct influence on the performance of illuminant estimation. Therefore, in this paper, an extensive analysis is provided of the influence of different edge types on the performance of edge-based color constancy methods. First, an edge-based taxonomy is presented, classifying edge types based on their photometric properties (e.g., material, shadow-geometry, and highlights). Then, a performance evaluation of edge-based color constancy is provided using these different edge types. From this evaluation, it is derived that specular and shadow edge types are more valuable than material edges for estimating the illuminant. To this end, the (iterative) weighted Gray-Edge algorithm is proposed, in which these edge types are given more emphasis in the estimation of the illuminant. On images recorded under controlled circumstances, the proposed iterative weighted Gray-Edge algorithm based on highlights reduces the median angular error by approximately 25 percent. In an uncontrolled environment, improvements in angular error of up to 11 percent are obtained with respect to regular edge-based color constancy.
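A minimal sketch of the weighting idea, assuming a precomputed per-pixel weight map that emphasizes specular and shadow edges. The iterative re-weighting described in the abstract is omitted, and the derivative filters and parameter values are illustrative choices, not the paper's exact settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def weighted_gray_edge(image, weights, sigma=1.0, p=6):
    """Weighted Gray-Edge sketch: illuminant from weighted image derivatives.

    image:   (H, W, 3) float array
    weights: (H, W) per-pixel weights, e.g. larger on specular/shadow edges
    """
    est = np.zeros(3)
    for c in range(3):
        smoothed = gaussian_filter(image[..., c], sigma)
        grad = np.hypot(sobel(smoothed, axis=0), sobel(smoothed, axis=1))
        est[c] = (weights * grad ** p).sum() ** (1.0 / p)
    return est / np.linalg.norm(est)
```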

6.
IEEE Trans Image Process ; 21(2): 697-707, 2012 Feb.
Article in English | MEDLINE | ID: mdl-21859624

ABSTRACT

Color constancy algorithms are generally based on the simplifying assumption that the spectral distribution of the light source is uniform across the scene. However, in reality, this assumption is often violated due to the presence of multiple light sources. In this paper, we address more realistic scenarios where the uniform light-source assumption is too restrictive. First, a methodology is proposed to extend existing algorithms by applying color constancy locally to image patches, rather than globally to the entire image. After local (patch-based) illuminant estimation, these estimates are combined into more robust estimates, and a local correction is applied based on a modified diagonal model. Quantitative and qualitative experiments on spectral and real images show that the proposed methodology reduces the influence of two light sources simultaneously present in one scene. If the chromatic difference between the two illuminants is more than 1°, the proposed framework outperforms algorithms based on the uniform light-source assumption (with an error reduction of up to approximately 30%). Otherwise, when the chromatic difference is less than 1° and the scene can be considered to contain one (approximately) uniform light source, the performance of the proposed framework is similar to that of global color constancy methods.
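The local, patch-based strategy can be sketched as below, using gray-world as the base estimator. The robust combination of patch estimates and the modified diagonal correction mentioned in the abstract are left out, and the patch size is an arbitrary illustrative value.

```python
import numpy as np

def local_gray_world(image, patch=32):
    """Patch-wise illuminant estimates: gray-world applied locally.

    Returns an (n_patches, 3) array of unit-norm per-patch estimates, which
    could then be combined (e.g. clustered or averaged) into one estimate
    per light source.
    """
    h, w, _ = image.shape
    estimates = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            block = image[y:y + patch, x:x + patch].reshape(-1, 3)
            e = block.mean(axis=0)
            estimates.append(e / np.linalg.norm(e))
    return np.array(estimates)
```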

7.
IEEE Trans Image Process ; 20(9): 2475-89, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21342844

ABSTRACT

Computational color constancy is a fundamental prerequisite for many computer vision applications. This paper presents a survey of recent developments and state-of-the-art methods. Several criteria are proposed for assessing the approaches. A taxonomy of existing algorithms is proposed, and methods are separated into three groups: static methods, gamut-based methods, and learning-based methods. Further, the experimental setup is discussed, including an overview of publicly available datasets. Finally, various freely available methods, some of which are considered to be state of the art, are evaluated on two datasets.

8.
IEEE Trans Pattern Anal Mach Intell ; 33(4): 687-98, 2011 Apr.
Article in English | MEDLINE | ID: mdl-20421672

ABSTRACT

Existing color constancy methods are all based on specific assumptions about the spatial and spectral characteristics of images. As a consequence, no algorithm can be considered universal. However, with the large variety of available methods, the question is how to select the method that performs best for a specific image. To achieve selection and combination of color constancy algorithms, natural image statistics are used in this paper to identify the most important characteristics of color images. Then, based on these image characteristics, the proper color constancy algorithm (or best combination of algorithms) is selected for a specific image. To capture the image characteristics, the Weibull parameterization (e.g., grain size and contrast) is used. It is shown that the Weibull parameterization is related to the image attributes to which the evaluated color constancy methods are sensitive. A mixture-of-Gaussians (MoG) classifier is used to learn the correlation and weighting between the Weibull parameters and the image attributes (number of edges, amount of texture, and SNR). The output of the classifier is the selection of the best-performing color constancy method for a given image. Experimental results show a large improvement over state-of-the-art single algorithms. On a data set of more than 11,000 images, an increase in color constancy performance of up to 20 percent (median angular error) can be obtained compared to the best-performing single algorithm. Further, it is shown that for certain scene categories, one specific color constancy algorithm can be used instead of the classifier that considers several algorithms.


Subject(s)
Algorithms ; Image Enhancement/methods ; Image Processing, Computer-Assisted/methods ; Color/standards ; Reproducibility of Results ; Semantics ; Sensitivity and Specificity
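A hedged sketch of how Weibull parameters of the kind used above might be extracted from an image's edge responses. The filters, the fitting procedure, and the mapping of shape and scale to grain size and contrast are illustrative assumptions and may differ from the paper's exact method.

```python
import numpy as np
from scipy.ndimage import sobel
from scipy.stats import weibull_min

def weibull_edge_statistics(gray_image):
    """Fit a Weibull distribution to the gradient-magnitude histogram.

    Returns (shape, scale): shape is loosely related to grain size / texture
    and scale to contrast, the kind of statistics a classifier could use to
    pick a color constancy method for the image.
    """
    grad = np.hypot(sobel(gray_image, axis=0), sobel(gray_image, axis=1))
    grad = grad[grad > 0]
    shape, _, scale = weibull_min.fit(grad, floc=0.0)
    return shape, scale
```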
9.
J Opt Soc Am A Opt Image Sci Vis ; 26(10): 2243-56, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19798406

ABSTRACT

Color constancy algorithms are often evaluated using a distance measure based on mathematical principles, such as the angular error. However, it is unknown whether these distance measures correlate with human vision. Therefore, the main goal of this paper is to analyze the correlation between several performance measures and the quality, as obtained from psychophysical experiments, of the output images generated by various color constancy algorithms. Subsequent issues that are addressed are the distribution of performance measures, suggesting additional and alternative statistics that can be reported to summarize performance over a large set of images, and the perceptual significance of obtained improvements, i.e., how large an improvement should be before the difference becomes noticeable to a human observer.


Subject(s)
Algorithms ; Color Perception ; Adult ; Distance Perception ; Female ; Humans ; Lighting ; Male ; Models, Biological ; Observer Variation ; Young Adult
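Since the abstract above points out that a single summary number can hide the shape of the error distribution, the small sketch below computes a few alternative summaries over a set of per-image angular errors. The particular statistics chosen are common in the color constancy literature but are not necessarily the ones the paper recommends.

```python
import numpy as np

def error_summary(errors):
    """Summary statistics for a set of per-image angular errors (degrees)."""
    q1, med, q3 = np.percentile(errors, [25, 50, 75])
    return {
        "mean": float(np.mean(errors)),
        "median": float(med),
        "trimean": float((q1 + 2 * med + q3) / 4),  # robust alternative summary
        "95th_percentile": float(np.percentile(errors, 95)),
        "max": float(np.max(errors)),
    }
```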
10.
IEEE Trans Image Process ; 16(9): 2207-14, 2007 Sep.
Article in English | MEDLINE | ID: mdl-17784594

ABSTRACT

Color constancy is the ability to measure the colors of objects independent of the color of the light source. A well-known color constancy method is based on the gray-world assumption, which assumes that the average reflectance of surfaces in the world is achromatic. In this paper, we propose a new hypothesis for color constancy, namely the gray-edge hypothesis, which assumes that the average edge difference in a scene is achromatic. Based on this hypothesis, we propose an algorithm for color constancy. Contrary to existing color constancy algorithms, which are computed from the zero-order structure of images, our method is based on the derivative structure of images. Furthermore, we propose a framework that unifies a variety of known algorithms (gray-world, max-RGB, Minkowski norm) with the newly proposed gray-edge and higher-order gray-edge algorithms. The quality of the various instantiations of the framework is tested and compared to state-of-the-art color constancy methods on two large data sets of images recording objects under a large number of different light sources. The experiments show that the proposed color constancy algorithms obtain results comparable to the state-of-the-art color constancy methods with the merit of being computationally more efficient.


Subject(s)
Algorithms ; Color ; Colorimetry/methods ; Image Enhancement/methods ; Image Interpretation, Computer-Assisted/methods ; Reproducibility of Results ; Sensitivity and Specificity
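A compact sketch of the unifying framework described in the abstract above, in which the illuminant estimate is a Minkowski p-norm of the (possibly smoothed) n-th order image structure. Only orders 0 and 1 are shown, and the parameter defaults and filter choices are illustrative, not the paper's exact settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def grey_framework(image, n=1, p=6, sigma=1.0):
    """Unified gray-world / max-RGB / gray-edge framework sketch.

    n=0, p=1:    gray-world  (mean RGB is achromatic)
    n=0, p=inf:  max-RGB     (per-channel maximum)
    n=1, p>=1:   gray-edge   (Minkowski norm of first-order derivatives)

    image: (H, W, 3) float array; returns a unit-norm illuminant estimate.
    """
    est = np.zeros(3)
    for c in range(3):
        chan = gaussian_filter(image[..., c], sigma) if sigma > 0 else image[..., c]
        if n == 0:
            resp = np.abs(chan)
        else:  # first-order derivative magnitude; higher orders omitted for brevity
            resp = np.hypot(sobel(chan, axis=0), sobel(chan, axis=1))
        est[c] = resp.max() if np.isinf(p) else (resp ** p).mean() ** (1.0 / p)
    return est / np.linalg.norm(est)
```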