ABSTRACT
In this paper we present T1K+, a very large, heterogeneous database of high-quality texture images acquired under variable conditions. T1K+ contains 1129 classes of textures ranging from natural subjects to food, textile samples, construction materials, etc. T1K+ allows the design of experiments specifically aimed at understanding the issues related to texture classification and retrieval. To ease the exploration of the database, all 1129 classes are hierarchically organized into 5 thematic categories and 266 sub-categories. To complete our study, we present an evaluation of hand-crafted and learned visual descriptors in supervised texture classification tasks.
ABSTRACT
In this work we present SpliNet, a novel CNN-based method that estimates a global color transform for the enhancement of raw images. The method is designed to improve the perceived quality of the images by reproducing the ability of an expert in the field of photo editing. The transformation applied to the input image is found by a convolutional neural network specifically trained for this purpose. More precisely, the network takes as input a raw image and produces as output one set of control points for each of the three color channels. Then, the control points are interpolated with natural cubic splines and the resulting functions are globally applied to the values of the input pixels to produce the output image. Experimental results compare favorably against recent state-of-the-art methods on the MIT-Adobe FiveK dataset. Furthermore, we also propose an extension of SpliNet in which a single neural network is used to model the style of multiple reference retouchers by embedding them into a user space. The style of new users can be reproduced without retraining the network, after a quick modeling stage in which they are positioned in the user space on the basis of their preferences on a very small set of retouched images.
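The interpolation-and-apply step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the number of control points, and their placement at evenly spaced input positions are all assumptions made for the example; only the use of natural cubic splines applied globally per channel comes from the abstract.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def apply_spline_transform(image, control_points_per_channel, n_points=10):
    """Apply a global per-channel tone curve defined by control points.

    image: float array in [0, 1] of shape (H, W, 3).
    control_points_per_channel: three arrays of length n_points, giving the
    curve's output values at evenly spaced input positions (an assumed
    parameterization; the network in the paper would predict these values).
    """
    xs = np.linspace(0.0, 1.0, n_points)
    out = np.empty_like(image)
    for c in range(3):
        # Natural cubic spline: second derivative is zero at both endpoints.
        spline = CubicSpline(xs, control_points_per_channel[c], bc_type="natural")
        # The same 1D curve is applied globally to every pixel of channel c.
        out[..., c] = np.clip(spline(image[..., c]), 0.0, 1.0)
    return out
```

With identity control points (output equal to input position on each channel), the transform leaves the image unchanged, which is a convenient sanity check.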
ABSTRACT
In this paper, we present a three-stage method for the estimation of the color of the illuminant in RAW images. The first stage uses a convolutional neural network that has been specially designed to produce multiple local estimates of the illuminant. The second stage, given the local estimates, determines the number of illuminants in the scene. Finally, the local illuminant estimates are refined by non-linear local aggregation, resulting in a global estimate in the case of a single illuminant. An extensive comparison with both local and global state-of-the-art illuminant estimation methods, on standard datasets with single and multiple illuminants, proves the effectiveness of our method.
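The final pooling step can be illustrated with a short sketch. The abstract does not specify which non-linear aggregation is used, so a per-channel median (a simple, outlier-robust non-linear pool) is used here purely as a stand-in; the function name is also an assumption.

```python
import numpy as np

def aggregate_local_estimates(local_estimates):
    """Combine per-patch illuminant estimates into one global estimate.

    local_estimates: array-like of shape (N, 3), one RGB illuminant
    estimate per local patch. The per-channel median is a placeholder
    for the paper's (unspecified) non-linear aggregation: it downweights
    outlier patches that a plain mean would be pulled toward.
    """
    local_estimates = np.asarray(local_estimates, dtype=np.float64)
    global_est = np.median(local_estimates, axis=0)
    norm = np.linalg.norm(global_est)
    # Return a unit-norm illuminant color (only chromaticity matters).
    return global_est / norm if norm > 0 else global_est
```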
ABSTRACT
The recognition of color texture under varying lighting conditions remains an open issue. Several features have been proposed for this purpose, ranging from traditional statistical descriptors to features extracted with neural networks. Still, it is not completely clear under what circumstances one feature outperforms the others. In this paper, we report an extensive comparison of old and new texture features, with and without a color normalization step, with a particular focus on how these features are affected by small and large variations in the lighting conditions. The evaluation is performed on a new texture database, which includes 68 samples of raw food acquired under 46 conditions that present single and combined variations of light color, direction, and intensity. The database allows us to systematically investigate the robustness of texture descriptors across large variations of imaging conditions.
ABSTRACT
This paper presents a texture descriptor for color texture classification specially designed to be robust against changes in the illumination conditions. The descriptor combines a histogram of local binary patterns (LBPs) with a novel feature measuring the distribution of local color contrast. The proposed descriptor is invariant with respect to rotations and translations of the image plane and with respect to several transformations in the color space. We evaluated the proposed descriptor on the Outex test suite, by measuring the classification accuracy in the case in which training and test images have been acquired under different illuminants. The results obtained show that our descriptor outperforms the original LBP approach and its color variants, even when these are computed after color normalization. Moreover, it also outperforms several other color texture descriptors in the state of the art.
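The local binary pattern component referenced above can be sketched in a few lines. This is the basic 8-neighbor, radius-1 formulation only, shown to make the mechanism concrete; it is not the descriptor proposed in the paper, which combines a (rotation-invariant) LBP histogram with the novel local color contrast feature.

```python
import numpy as np

def lbp_histogram(gray):
    """Normalized histogram of basic 8-neighbor LBP codes (radius 1).

    gray: 2D grayscale array. Each interior pixel is compared with its
    8 neighbors; each comparison contributes one bit, yielding a code
    in [0, 255]. Texture is summarized by the code histogram.
    """
    g = np.asarray(gray, dtype=np.float64)
    h, w = g.shape
    center = g[1:-1, 1:-1]
    # Neighbor offsets in a fixed circular order around the center pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=np.int64)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = g[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes += (neighbor >= center).astype(np.int64) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(np.float64)
    return hist / hist.sum()
```

On a constant image every neighbor ties with its center, so all bits are set and the whole mass of the histogram falls on code 255.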
ABSTRACT
In this work, we investigate how illuminant estimation techniques can be improved by taking into account automatically extracted information about the content of the images. We considered indoor/outdoor classification because the images of these classes present different content and are usually taken under different illumination conditions. We have designed different strategies for the selection and the tuning of the most appropriate algorithm (or combination of algorithms) for each class. We also considered the adoption of an uncertainty class, which corresponds to the images for which the indoor/outdoor classifier is not confident enough. The illuminant estimation algorithms considered here are derived from the framework recently proposed by Van de Weijer and Gevers. We present a procedure to automatically tune the algorithms' parameters. We have tested the proposed strategies on a suitable subset of the widely used Funt and Ciurea dataset. Experimental results clearly demonstrate that classification-based strategies outperform general-purpose algorithms.
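The Van de Weijer and Gevers framework mentioned above expresses many classical illuminant estimators as instances of one Minkowski-norm formula, roughly e_c ∝ (∫ |∂ⁿ f_cσ(x)|ᵖ dx)^(1/p), where n is the derivative order, p the norm, and σ a Gaussian smoothing scale. The sketch below, with the function name assumed and only n ∈ {0, 1} supported for brevity, shows how different (n, p, σ) settings recover familiar algorithms such as Grey-World and first-order Grey-Edge.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_illuminant(image, n=0, p=1, sigma=0):
    """Minkowski-norm illuminant estimate (framework of Van de Weijer et al.).

    image: float array of shape (H, W, 3).
    n=0, p=1, sigma=0   -> Grey-World
    n=0, p=6, sigma=0   -> Shades-of-Gray
    n=1, p=1, sigma>0   -> first-order Grey-Edge
    Only n in {0, 1} is handled here, for brevity.
    """
    est = np.zeros(3)
    for c in range(3):
        channel = image[..., c].astype(np.float64)
        if sigma > 0:
            channel = gaussian_filter(channel, sigma)  # smoothed f^sigma
        if n == 1:
            gy, gx = np.gradient(channel)
            channel = np.hypot(gx, gy)  # gradient magnitude
        # Minkowski p-norm of the (derivative of the) channel.
        est[c] = np.mean(np.abs(channel) ** p) ** (1.0 / p)
    norm = np.linalg.norm(est)
    return est / norm if norm > 0 else est
```

Tuning (n, p, σ), as well as choosing different settings for indoor and outdoor images, is exactly the kind of per-class adaptation the strategies above exploit.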