Results 1 - 15 of 15
1.
IEEE Trans Pattern Anal Mach Intell ; 42(5): 1286-1287, 2020 May.
Article in English | MEDLINE | ID: mdl-31265383

ABSTRACT

The ColorChecker dataset is one of the most widely used image sets for evaluating and ranking illuminant estimation algorithms. However, this single set of images has at least three different sets of ground truth (i.e., correct answers) associated with it. In the literature, one algorithm is often asserted to be better than another even when the algorithms in question have been tuned and tested against different ground truths. In this short correspondence we present some of the background on why the three existing ground truths differ, and go on to propose a single, recommended set of correct answers. Experiments reinforce the importance of this work: we show that the total ordering of a set of algorithms may be reversed depending on whether the new or the legacy ground-truth data is used.
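A ranking reversal of the kind described above can be sketched numerically. The sketch below uses the standard recovery angular error; all algorithm estimates and both ground-truth sets are invented toy numbers, chosen only to show that the ordering of two algorithms can flip when the ground truth changes.

```python
import numpy as np

def angular_error(est, gt):
    """Recovery angular error (degrees) between an estimated and a true illuminant RGB."""
    cos = np.dot(est, gt) / (np.linalg.norm(est) * np.linalg.norm(gt))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def mean_error(alg, gt):
    """Mean angular error of one algorithm over a set of images."""
    return np.mean([angular_error(e, g) for e, g in zip(alg, gt)])

# Two algorithms' illuminant estimates for three images (invented).
alg_a = np.array([[1.0, 0.9, 0.7], [0.8, 1.0, 0.9], [1.0, 1.0, 1.0]])
alg_b = np.array([[1.0, 1.0, 0.8], [0.7, 1.0, 1.0], [0.9, 1.0, 1.1]])

# Two different ground-truth sets for the same three images (invented).
gt_legacy = np.array([[1.0, 0.95, 0.75], [0.75, 1.0, 0.95], [1.0, 1.05, 1.05]])
gt_new    = np.array([[1.0, 1.0, 0.85], [0.7, 0.95, 1.0], [0.9, 1.05, 1.1]])

err_a_legacy, err_b_legacy = mean_error(alg_a, gt_legacy), mean_error(alg_b, gt_legacy)
err_a_new, err_b_new = mean_error(alg_a, gt_new), mean_error(alg_b, gt_new)
# With the legacy ground truth, A ranks ahead of B; with the new one, the order flips.
```

The same set of estimates thus supports opposite conclusions about which algorithm is "better", which is exactly why a single agreed ground truth matters.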

2.
IEEE J Biomed Health Inform ; 23(2): 779-786, 2019 03.
Article in English | MEDLINE | ID: mdl-29993758

ABSTRACT

We propose a novel approach to identify one of the most significant dermoscopic criteria in the diagnosis of cutaneous melanoma: the blue-white structure (BWS). In this paper, we achieve this goal in a multiple instance learning (MIL) framework using only image-level labels indicating whether the feature is present or not. To this aim, each image is represented as a bag of (nonoverlapping) regions, where each region may or may not be identified as an instance of BWS. A probabilistic graphical model is trained (in MIL fashion) to predict the bag (image) labels. As output, we predict the classification label for the image (i.e., the presence or absence of BWS in each image) and we also localize the feature in the image. Experiments are conducted on a challenging dataset, with BWS detection outperforming state-of-the-art techniques. This study broadens the scope of modeling for computerized image analysis of skin lesions; in particular, it proposes a framework for identifying dermoscopic local features from weakly labeled data.
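The MIL assumption used above, that an image is positive if and only if at least one of its regions contains the feature, can be sketched with a simple max-pooling rule. The region scores and threshold below are hypothetical stand-ins; the paper trains a probabilistic graphical model rather than hard-coding this rule.

```python
import numpy as np

def bag_label_and_localization(instance_scores, threshold=0.5):
    """Predict an image-level (bag) label from per-region (instance) scores and
    localize the highest-scoring region, a stand-in for feature localization."""
    scores = np.asarray(instance_scores)
    label = bool(scores.max() > threshold)   # bag positive iff some region is
    region = int(scores.argmax())            # region most responsible for the label
    return label, region

# One image split into 4 non-overlapping regions with hypothetical BWS scores.
label, region = bag_label_and_localization([0.1, 0.2, 0.8, 0.3])
# label -> True (BWS present), region -> 2 (where the structure was found)
```

Note that the same mechanism yields both outputs described in the abstract: the image-level classification and a localization of the feature.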


Subject(s)
Dermoscopy/methods , Image Interpretation, Computer-Assisted/methods , Algorithms , Databases, Factual , Humans , Melanoma/diagnostic imaging , Skin Neoplasms/diagnostic imaging , Supervised Machine Learning
3.
Int J Biomed Imaging ; 2016: 4868305, 2016.
Article in English | MEDLINE | ID: mdl-28096807

ABSTRACT

Cutaneous melanoma is the most life-threatening form of skin cancer. Although advanced melanoma is often considered incurable, if detected and excised early, the prognosis is promising. Today, clinicians use computer vision in an increasing number of applications to aid early detection of melanoma through dermatological image analysis (dermoscopy images, in particular). Colour assessment is essential for the clinical diagnosis of skin cancers. Due to this diagnostic importance, many studies have either focused on or employed colour features as a constituent part of their skin lesion analysis systems. These studies range from using low-level colour features, such as simple statistical measures of colours occurring in the lesion, to availing themselves of high-level semantic features such as the presence of blue-white veil, globules, or colour variegation in the lesion. This paper provides a retrospective survey and critical analysis of contributions in this research direction.

4.
J Opt Soc Am A Opt Image Sci Vis ; 32(12): 2384-96, 2015 Dec 01.
Article in English | MEDLINE | ID: mdl-26831392

ABSTRACT

This paper describes a novel approach to image fusion for color display. Our goal is to generate an output image whose gradient matches that of the input as closely as possible. We achieve this using a constrained contrast mapping paradigm in the gradient domain, where the structure tensor of a high-dimensional gradient representation is mapped exactly to that of a low-dimensional gradient field which is then reintegrated to form an output. Constraints on output colors are provided by an initial RGB rendering. Initially, we motivate our solution with a simple "ansatz" (educated guess) for projecting higher-D contrast onto color gradients, which we expand to a more rigorous theorem to incorporate color constraints. The solution to these constrained optimizations is closed-form, allowing for simple and hence fast and efficient algorithms. The approach can map any N-D image data to any M-D output and can be used in a variety of applications using the same basic algorithm. In this paper, we focus on the problem of mapping N-D inputs to 3D color outputs. We present results in five applications: hyperspectral remote sensing, fusion of color and near-infrared or clear-filter images, multilighting imaging, dark flash, and color visualization of magnetic resonance imaging diffusion-tensor imaging.
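The structure-tensor step at the heart of the method above can be sketched per pixel: the 2×2 structure tensor of an N-channel gradient is mapped onto a single 2-vector with the same contrast. This shows only the unconstrained "ansatz"; the paper's constrained, closed-form color mapping is not reproduced here.

```python
import numpy as np

def equivalent_scalar_gradient(jacobian):
    """jacobian: (N, 2) array of per-channel x/y derivatives at one pixel.
    Returns a 2-vector whose outer product matches the leading part of the
    N-channel structure tensor (the Di Zenzo contrast)."""
    s = jacobian.T @ jacobian              # 2x2 structure tensor of the N-D gradient
    evals, evecs = np.linalg.eigh(s)       # eigenvalues in ascending order
    v = evecs[:, -1]                       # direction of maximal contrast
    return np.sqrt(evals[-1]) * v          # magnitude = sqrt of the top eigenvalue

# Example: two channels with gradients along x only; contrast adds in quadrature.
g = equivalent_scalar_gradient(np.array([[3.0, 0.0], [4.0, 0.0]]))
# |g| == 5.0, along the x axis (up to the usual sign ambiguity)
```

Reintegrating such a field (or, in the paper, constraining it against an initial RGB rendering) then gives the fused output.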

5.
Article in English | MEDLINE | ID: mdl-25333100

ABSTRACT

We describe a technique that employs the stochastic Latent Topic Models framework to quantify melanin and hemoglobin content in dermoscopy images. Such information has useful implications for the analysis of skin hyperpigmentation and for the classification of skin diseases. The proposed method outperforms existing approaches while allowing more rigorous, probabilistic modeling than previous work.


Subject(s)
Dermoscopy/methods , Hemoglobins/metabolism , Image Interpretation, Computer-Assisted/methods , Melanins/metabolism , Skin Neoplasms/diagnosis , Skin Neoplasms/metabolism , Biomarkers, Tumor/metabolism , Data Interpretation, Statistical , Humans , Molecular Imaging/methods , Reproducibility of Results , Sensitivity and Specificity
6.
IEEE Trans Pattern Anal Mach Intell ; 36(5): 860-73, 2014 May.
Article in English | MEDLINE | ID: mdl-26353222

ABSTRACT

Exemplar-based learning or, equally, nearest neighbor methods have recently gained interest from researchers in a variety of computer science domains because of the prevalence of large amounts of accessible data and storage capacity. In computer vision, these types of techniques have been successful in several problems such as scene recognition, shape matching, image parsing, character recognition, and object detection. Applying the concept of exemplar-based learning to the problem of color constancy seems odd at first glance since, in the first place, similar nearest neighbor images are not usually affected by precisely similar illuminants and, in the second place, gathering a dataset consisting of all possible real-world images, including indoor and outdoor scenes and for all possible illuminant colors and intensities, is indeed impossible. In this paper, we instead focus on surfaces in the image and address the color constancy problem by unsupervised learning of an appropriate model for each surface in the training images. We find nearest neighbor models for each surface in a test image and estimate its illumination by comparing the statistics of pixels belonging to nearest neighbor surfaces and the target surface. The final illumination estimate results from combining these estimated illuminants over surfaces to generate a unique estimate. We show that the method performs very well on standard datasets compared to current color constancy algorithms, including when learning on one image dataset is applied to tests from a different dataset. The proposed method has the advantage of handling multi-illuminant situations, which is not possible for most current methods since they assume the color of the illuminant is constant over the whole image. We show a technique to overcome the multiple-illuminant situation using the proposed method and test it on images with two distinct sources of illumination using a multiple-illuminant color constancy dataset. The concept proposed here is a completely new approach to the color constancy problem and provides a simple learning-based framework.
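A radically simplified stand-in for the surface-wise nearest-neighbor idea can be sketched as follows. Here a "surface model" is just a mean RGB, each test surface borrows the known illuminant of its nearest training surface in chromaticity space, and the per-surface estimates are combined by a median; the paper instead learns richer per-surface statistics and compares pixel distributions. All data below are invented.

```python
import numpy as np

def estimate_illuminant(test_surfaces, train_surfaces, train_illums):
    """test_surfaces: (m, 3) mean RGBs of surfaces in the test image.
    train_surfaces: (n, 3) mean RGBs of training surfaces.
    train_illums:   (n, 3) illuminant RGB under which each training surface was seen.
    Returns a unit-norm global illuminant estimate."""
    estimates = []
    for s in test_surfaces:
        # nearest training surface in chromaticity (intensity-normalized) space
        d = np.linalg.norm(train_surfaces / train_surfaces.sum(1, keepdims=True)
                           - s / s.sum(), axis=1)
        estimates.append(train_illums[d.argmin()])
    est = np.median(estimates, axis=0)     # combine per-surface estimates
    return est / np.linalg.norm(est)

# Invented toy data: a test surface close to the first training surface.
train_surfaces = np.array([[0.4, 0.4, 0.2], [0.2, 0.3, 0.5]])
train_illums = np.array([[1.0, 1.0, 1.0], [1.0, 0.8, 0.6]])
est = estimate_illuminant(np.array([[0.41, 0.39, 0.2]]), train_surfaces, train_illums)
```

Because each surface votes independently, the same machinery extends naturally to scenes lit by more than one illuminant, as the abstract notes.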

7.
Med Image Comput Comput Assist Interv ; 16(Pt 3): 453-60, 2013.
Article in English | MEDLINE | ID: mdl-24505793

ABSTRACT

Skin lesions are often composed of various colours. The presence of multiple colours with an irregular distribution can signal malignancy. Among common colours under dermoscopy, blue-grey (blue-white veil) is a strong indicator of malignant melanoma. Since it is not always easy to visually identify and recognize this feature, a computerised automatic colour analysis method can provide the clinician with an objective second opinion. In this paper, we put forward an innovative method, through colour analysis and computer vision techniques, to automatically detect and segment blue-white veil areas in dermoscopy images. The proposed method is an attempt to mimic the human perception of lesion colours, and outperforms the state of the art, as shown in our experiments.
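The flavor of such per-pixel colour analysis can be illustrated with a hypothetical hard-coded rule: flag pixels that are bluish and desaturated. The hue range and thresholds below are invented for illustration; the paper learns its colour decision from data rather than hard-coding one.

```python
import colorsys

def is_blue_white_veil(rgb, hue_range=(0.5, 0.7), max_sat=0.6, min_val=0.3):
    """rgb: a pixel with channel values in [0, 1]. Returns True for pixels that
    are bluish in hue, not too saturated, and not too dark, a crude proxy
    for the blue-grey / blue-white veil colour class."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return hue_range[0] <= h <= hue_range[1] and s <= max_sat and v >= min_val

# A pale blue pixel vs. a dark brown (pigmented lesion) pixel.
# is_blue_white_veil((0.6, 0.7, 0.9)) -> True
# is_blue_white_veil((0.35, 0.2, 0.1)) -> False
```

Applying such a classifier to every pixel and keeping connected positive regions yields a segmentation of candidate veil areas, which is the kind of output the method above produces.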


Subject(s)
Algorithms , Artificial Intelligence , Colorimetry/methods , Dermoscopy/methods , Image Interpretation, Computer-Assisted/methods , Melanoma/pathology , Pattern Recognition, Automated/methods , Skin Neoplasms/pathology , Humans , Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
8.
Med Image Comput Comput Assist Interv ; 15(Pt 1): 315-22, 2012.
Article in English | MEDLINE | ID: mdl-23285566

ABSTRACT

In this paper we propose a new log-chromaticity 2-D colour space, an extension of previous approaches, which succeeds in removing confounding factors from dermoscopic images: (i) the effects of the particular camera characteristics for the camera system used in forming RGB images; (ii) the colour of the light used in the dermoscope; (iii) shading induced by imaging non-flat skin surfaces; and (iv) light intensity, removing the effect of light-intensity falloff toward the edges of the dermoscopic image. In the context of a blind source separation of the underlying colour, we arrive at intrinsic melanin and hemoglobin images, whose properties are then used in supervised learning to achieve excellent malignant vs. benign skin lesion classification. In addition, we propose using the geometric-mean of colour for skin lesion segmentation based on simple grey-level thresholding, with results outperforming the state of the art.
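A minimal sketch of a 2-D log-chromaticity coordinate of the kind described above: dividing each pixel by its geometric mean cancels any per-pixel scalar (shading and intensity falloff), and taking logs makes an illuminant change additive. The particular 2-D parameterization below (keeping the first two of the three zero-sum log-ratios) is one simple choice, not necessarily the paper's.

```python
import numpy as np

def log_chromaticity(rgb):
    """rgb: (..., 3) strictly positive values. Returns (..., 2) coordinates
    invariant to any per-pixel scaling of the RGB triplet."""
    rgb = np.asarray(rgb, dtype=float)
    geo_mean = rgb.prod(axis=-1, keepdims=True) ** (1.0 / 3.0)
    log_chrom = np.log(rgb / geo_mean)     # three log-ratios summing to zero
    # the three values live in a 2-D plane; keep the first two as coordinates
    return log_chrom[..., :2]

# Shading/intensity invariance: scaling a pixel leaves the coordinate unchanged.
p = np.array([0.2, 0.5, 0.3])
# log_chromaticity(p) == log_chromaticity(2.7 * p)
```

It is this invariance that lets the subsequent blind source separation focus on the intrinsic melanin and hemoglobin content rather than on imaging conditions.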


Subject(s)
Hemoglobins/metabolism , Melanins/metabolism , Skin Neoplasms/diagnosis , Skin Neoplasms/pathology , Skin/pathology , Algorithms , Area Under Curve , Colorimetry/methods , Dermoscopy/methods , Diagnostic Imaging/methods , Early Detection of Cancer/methods , Hemoglobins/chemistry , Humans , Image Processing, Computer-Assisted , Melanins/chemistry , Melanoma/diagnosis , Melanoma/metabolism , Models, Statistical , Nevus, Epithelioid and Spindle Cell/diagnosis
9.
IEEE Trans Image Process ; 20(10): 2827-36, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21788194

ABSTRACT

In computer vision there are many applications where it is advantageous to process an image in the gradient domain and then reintegrate the gradient field: important examples include shadow removal, lightness calculation, and data fusion. A serious problem with this approach is that the reconstruction step often introduces artefacts (commonly, smoothed and smeared edges) to the recovered image. This is a result of the inherent ill-posedness of reintegrating a nonintegrable field. Artefacts can be diminished, but not removed, by using complex to highly complex reintegration techniques. Here, we present a remarkably simple (and on the face of it naive) algorithm for reconstructing gradient fields. Suppose we start with a multichannel original and from it derive a (possibly one of many) 1-D gradient field; for many applications, the derived gradient field will be nonintegrable. Here, we propose a lookup-table-based map relating the multichannel original to a reconstructed scalar output image whose gradient best matches the target gradient field. The idea, at base, is that if we learn how to map the gradients of the multichannel original onto the desired output gradient, then, using the lookup table (LUT) constraint, we effectively derive the mapping from the multichannel input to the desired, reintegrated, image output. While this map could take a variety of forms, here we derive the best map from the multichannel gradient as a (nonlinear) function of the input to each of the target scalar gradients. In this framework, reconstruction is a simple equation-solving exercise of low dimensionality. One obvious application of our method is to the image-fusion problem, e.g., the problem of converting a color or higher-D image into grayscale. We show, through extensive experiments and complementary theoretical arguments, that our straightforward method preserves the target contrast as well as previous, more complex reintegration methods do, but without artefacts and at a substantially lower computational cost. Finally, we demonstrate the generality of the method by applying it to gradient field reconstruction in an additional area, the shading recovery problem.
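A highly simplified sketch of the constrained-map idea: force the output to be a function of the multichannel input (here, just a linear combination of channels rather than a lookup table) and solve for that function so the output's gradient best matches a target gradient field. Reintegration then never happens explicitly. The target below is chosen so the exact answer is known.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((16, 16, 3))                  # multichannel original

# Target gradient field: the gradient of the channel mean, standing in for an
# arbitrary derived (and in general nonintegrable) field.
target = np.mean(img, axis=2)
ty, tx = np.gradient(target)

# Gradients of each input channel.
gy, gx = np.gradient(img, axis=(0, 1))

# Solve  min_w || sum_c w_c * grad(img_c) - target_grad ||^2  by least squares.
A = np.concatenate([gx.reshape(-1, 3), gy.reshape(-1, 3)])
b = np.concatenate([tx.ravel(), ty.ravel()])
w, *_ = np.linalg.lstsq(A, b, rcond=None)

out = img @ w                                  # reconstructed scalar image
# Because the target is the gradient of mean(img), w recovers [1/3, 1/3, 1/3].
```

The equation-solving exercise really is low-dimensional: three unknowns here, regardless of image size, which is the source of the method's speed.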

10.
IEEE Trans Med Imaging ; 30(7): 1314-27, 2011 Jul.
Article in English | MEDLINE | ID: mdl-21296705

ABSTRACT

A method for visualizing manifold-valued medical image data is proposed. The method operates on images in which each pixel is assumed to be sampled from an underlying manifold. For example, each pixel may contain a high dimensional vector, such as the time activity curve (TAC) in a dynamic positron emission tomography (dPET) or a dynamic single photon emission computed tomography (dSPECT) image, or the positive semi-definite tensor in a diffusion tensor magnetic resonance image (DTMRI). A nonlinear mapping reduces the dimensionality of the pixel data to achieve two goals: distance preservation and embedding into a perceptual color space. We use multidimensional scaling distance-preserving mapping to render similar pixels (e.g., DT or TAC pixels) with perceptually similar colors. The 3D CIELAB perceptual color space is adopted as the range of the distance preserving mapping, with a final similarity transform mapping colors to a maximum gamut size. Similarity between pixels is either determined analytically as geodesics on the manifold of pixels or is approximated using manifold learning techniques. In particular, dissimilarity between DTMRI pixels is evaluated via a Log-Euclidean Riemannian metric respecting the manifold of the rank 3, second-order positive semi-definite DTs, whereas the dissimilarity between TACs is approximated via ISOMAP. We demonstrate our approach via artificial high-dimensional, manifold-valued data, as well as case studies of normal and pathological clinical brain and heart DTMRI, dPET, and dSPECT images. Our results demonstrate the effectiveness of our approach in capturing, in a perceptually meaningful way, important features in the data.
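The distance-preserving step can be sketched with classical multidimensional scaling: embed the pixels in 3-D so that Euclidean distances reproduce the given dissimilarities; a further similarity transform into the CIELAB gamut (not shown here) then makes similar pixels perceptually similar colors. The toy dissimilarity matrix below comes from points on a line, so the embedding is exact.

```python
import numpy as np

def classical_mds(d, dim=3):
    """d: (n, n) symmetric dissimilarity matrix. Returns an (n, dim) embedding
    whose pairwise Euclidean distances best approximate d."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                # double-centered Gram matrix
    evals, evecs = np.linalg.eigh(b)
    idx = np.argsort(evals)[::-1][:dim]        # top eigenpairs
    return evecs[:, idx] * np.sqrt(np.maximum(evals[idx], 0.0))

# Distances between 4 points on a line are reproduced exactly by the embedding.
pts = np.array([[0.0], [1.0], [2.0], [4.0]])
d = np.abs(pts - pts.T)
emb = classical_mds(d, dim=3)
```

For manifold-valued pixels the input dissimilarities would instead be geodesic (e.g., Log-Euclidean for DTs) or ISOMAP-approximated (for TACs), as the abstract describes.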


Subject(s)
Diffusion Tensor Imaging/methods , Image Processing, Computer-Assisted/methods , Models, Biological , Positron-Emission Tomography/methods , Tomography, Emission-Computed, Single-Photon/methods , Algorithms , Brain Neoplasms/pathology , Color , Corpus Callosum/anatomy & histology , Diagnostic Imaging , Glioblastoma/pathology , Heart/anatomy & histology , Humans , Kidney Diseases/pathology , Multiple Sclerosis/pathology , Nonlinear Dynamics , Putamen/anatomy & histology , Regression Analysis
11.
IEEE Trans Pattern Anal Mach Intell ; 29(6): 959-75, 2007 Jun.
Article in English | MEDLINE | ID: mdl-17431296

ABSTRACT

We present a novel convex programming scheme to solve matching problems, focusing on the challenging problem of matching in a large search range and with cluttered background. Matching is formulated as metric labeling with L1 regularization terms, for which we propose a novel linear programming relaxation method and an efficient successive convexification implementation. The unique feature of the proposed relaxation scheme is that a much smaller set of basis labels is used to represent the original label space. This greatly reduces the size of the searching space. A successive convexification scheme solves the labeling problem in a coarse to fine manner. Importantly, the original cost function is reconvexified at each stage, in the new focus region only, and the focus region is updated so as to refine the searching result. This makes the method well-suited for large label set matching. Experiments demonstrate successful applications of the proposed matching scheme in object detection, motion estimation, and tracking.
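The search-space reduction behind successive convexification can be caricatured in one dimension: evaluate a small set of basis labels first, then refine only inside a focus region around the coarse winner. This toy mimics only the coarse-to-fine label search, not the LP relaxation, the L1 regularization, or the reconvexification of the paper; it assumes a reasonably smooth cost.

```python
import numpy as np

def coarse_to_fine_match(cost, coarse_step=8):
    """cost: (n,) matching cost per candidate label. Returns the best label
    found by a coarse pass over basis labels plus a local refinement."""
    n = len(cost)
    basis = np.arange(0, n, coarse_step)           # reduced set of basis labels
    center = basis[np.argmin(cost[basis])]         # coarse winner
    lo, hi = max(0, center - coarse_step), min(n, center + coarse_step + 1)
    return lo + int(np.argmin(cost[lo:hi]))        # refine in the focus region

# A smooth cost with its minimum at label 37 is found despite the coarse pass.
cost = (np.arange(100) - 37.0) ** 2
best = coarse_to_fine_match(cost)
```

Only n/coarse_step + 2*coarse_step costs are inspected instead of n, which is the point of representing the label space by a much smaller basis set.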


Subject(s)
Algorithms , Artificial Intelligence , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Subtraction Technique , Information Storage and Retrieval/methods , Numerical Analysis, Computer-Assisted , Programming, Linear , Reproducibility of Results , Sensitivity and Specificity , Signal Processing, Computer-Assisted
12.
J Opt Soc Am A Opt Image Sci Vis ; 24(2): 294-303, 2007 Feb.
Article in English | MEDLINE | ID: mdl-17206246

ABSTRACT

The measured light spectrum is the result of an illuminant interacting with a surface. The illuminant spectral power distribution multiplies the surface spectral reflectance function to form a color signal: the light spectrum that gives rise to our perception. Disambiguation of the two factors, illuminant and surface, is difficult without prior knowledge. Previously [IEEE Trans. Pattern Anal. Mach. Intell. 12, 966 (1990); J. Opt. Soc. Am. A 21, 1825 (2004)], one approach to this problem applied a finite-dimensional basis function model to recover the separate illuminant and surface reflectance components that make up the color signal, using principal component bases for lights and for reflectances. We introduce the idea of making use of finite-dimensional models of logarithms of spectra for this problem. Recognizing that multiplications turn into additions in such a formulation, we can replace the original iterative method with a direct, analytic algorithm with no iteration, resulting in a speedup of several orders of magnitude. Moreover, in the new, logarithm-based approach, it is straightforward to further design new basis functions, for both illuminant and reflectance simultaneously, such that the initial basis function coefficients derived from the input color signal are optimally mapped onto separate coefficients that produce spectra that more closely approximate the illuminant and the surface reflectance for any given dimensionality. This is accomplished by using an extra bias correction step that maps the analytically determined basis function coefficients onto the optimal coefficient set, separately for lights and surfaces, for the training set. The analytic equation plus the bias correction is then used for unknown input color signals.
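The log-domain trick can be sketched directly: since a color signal is illuminant times reflectance, its logarithm is log-illuminant plus log-reflectance, and with finite-dimensional bases for the two log-spectra the separation becomes one linear solve instead of an iteration. The bases and coefficients below are invented toy data, not the paper's trained principal-component bases.

```python
import numpy as np

wl = np.linspace(400, 700, 31)                 # sample wavelengths, nm
x = (wl - 550) / 150                           # normalized spectral axis

# Toy 2-D bases for log-illuminant and log-reflectance (invented).
basis_e = np.stack([np.ones_like(x), x], axis=1)
basis_s = np.stack([np.sin(np.pi * x), np.cos(np.pi * x)], axis=1)

# Synthesize a color signal from known coefficients: C = E * S.
ce_true, cs_true = np.array([0.2, -0.5]), np.array([0.3, -0.8])
color_signal = np.exp(basis_e @ ce_true) * np.exp(basis_s @ cs_true)

# In the log domain, one least-squares solve separates the two factors.
A = np.hstack([basis_e, basis_s])
coeffs, *_ = np.linalg.lstsq(A, np.log(color_signal), rcond=None)
ce, cs = coeffs[:2], coeffs[2:]                # recovered illuminant/surface coefficients
```

Because log C = B_e c_e + B_s c_s is linear in the unknown coefficients, no iteration is needed, which is the source of the speedup the abstract reports.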

13.
IEEE Trans Pattern Anal Mach Intell ; 28(1): 59-68, 2006 Jan.
Article in English | MEDLINE | ID: mdl-16402619

ABSTRACT

This paper is concerned with the derivation of a progression of shadow-free image representations. First, we show that adopting certain assumptions about lights and cameras leads to a 1D, gray-scale image representation which is illuminant invariant at each image pixel. We show that as a consequence, images represented in this form are shadow-free. We then extend this 1D representation to an equivalent 2D, chromaticity representation. We show that in this 2D representation, it is possible to relight all the image pixels in the same way, effectively deriving a 2D image representation which is additionally shadow-free. Finally, we show how to recover a 3D, full color shadow-free image representation by first (with the help of the 2D representation) identifying shadow edges. We then remove shadow edges from the edge-map of the original image by edge in-painting and we propose a method to reintegrate this thresholded edge map, thus deriving the sought-after 3D shadow-free image.
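The 1-D invariant can be sketched as a projection. Under the paper's assumptions (roughly, Planckian illumination and narrowband sensors), changing the illuminant shifts every surface's 2-D log-chromaticity along one shared direction; projecting onto the orthogonal direction yields a per-pixel scalar that is illuminant invariant, hence shadow-free. The direction below is an invented example, not a calibrated one.

```python
import numpy as np

light_dir = np.array([0.6, 0.8])               # shared illuminant-variation direction (toy)
invariant_dir = np.array([-0.8, 0.6])          # orthogonal, illuminant-invariant axis

def invariant(log_chrom):
    """Project 2-D log-chromaticities onto the illuminant-invariant axis."""
    return np.asarray(log_chrom) @ invariant_dir

# A toy surface seen lit and in shadow: same surface, chromaticity shifted
# along light_dir because the shadowed region sees a different illuminant.
surface_lit = np.array([0.1, -0.3])
surface_shadow = surface_lit + 0.7 * light_dir
# invariant(surface_lit) == invariant(surface_shadow), so the shadow vanishes
```

The 2-D and 3-D shadow-free representations in the paper build on this same invariant, using it to identify which edges in the original image are shadow edges.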


Subject(s)
Algorithms , Artifacts , Artificial Intelligence , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Pattern Recognition, Automated/methods , Information Storage and Retrieval/methods
14.
IEEE Trans Vis Comput Graph ; 11(2): 207-16, 2005.
Article in English | MEDLINE | ID: mdl-15747643

ABSTRACT

To make a spectral representation of color practicable for volume rendering, a new low-dimensional subspace method is used to act as the carrier of spectral information. With that model, spectral light material interaction can be integrated into existing volume rendering methods at almost no penalty. In addition, slow rendering methods can profit from the new technique of postillumination: generating spectral images in real time for arbitrary light spectra under a fixed viewpoint. Thus, the capability of spectral rendering to create distinct impressions of a scene under different lighting conditions is established as a method of real-time interaction. Although we use an achromatic opacity in our rendering, we show how spectral rendering permits different data set features to be emphasized or hidden as long as they have not been entirely obscured. The use of postillumination is an order of magnitude faster than changing the transfer function and repeating the projection step. To put the user in control of the spectral visualization, we devise a new widget, a "light-dial," for interactively changing the illumination and include a usability study of this new light space exploration tool. Applied to spectral transfer functions, different lights bring out or hide specific qualities of the data. In conjunction with postillumination, this provides a new means for preparing data for visualization and forms a new degree of freedom for guided exploration of volumetric data sets.
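The postillumination idea can be sketched as follows: if each rendered pixel stores the scene's response to a small set of basis lights (precomputed during the slow projection step), re-rendering under any new light spectrum is just a per-pixel linear combination, so no re-projection of the volume is needed. The basis spectra and responses below are invented toy data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_basis, n_wl = 3, 31
basis_lights = rng.random((n_basis, n_wl))     # spectra of the basis lights (toy)

# Precomputed once by the slow projection step: per-pixel response to each basis light.
pixel_responses = rng.random((64, 64, n_basis))

def postilluminate(responses, new_light, basis):
    """Express the new light spectrum in the basis (least squares), then relight
    every pixel by a dot product, cheap enough for interactive light changes."""
    w, *_ = np.linalg.lstsq(basis.T, new_light, rcond=None)
    return responses @ w

# A new light that lies in the span of the basis is re-rendered exactly.
new_light = 0.5 * basis_lights[0] + 0.3 * basis_lights[2]
img = postilluminate(pixel_responses, new_light, basis_lights)
```

A "light-dial" style interaction then amounts to recomputing only the weights w and the final dot product per frame, which is why it runs in real time.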


Subject(s)
Algorithms , Color , Computer Graphics , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Information Storage and Retrieval/methods , User-Computer Interface , Computer Simulation , Numerical Analysis, Computer-Assisted , Online Systems , Pattern Recognition, Automated/methods , Photometry/methods , Software
15.
J Opt Soc Am A Opt Image Sci Vis ; 20(7): 1181-93, 2003 Jul.
Article in English | MEDLINE | ID: mdl-12868625

ABSTRACT

It is often the case that multiplications of whole spectra, component by component, must be carried out, for example when light reflects from or is transmitted through materials. This leads to particularly taxing calculations, especially in spectrally based ray tracing or radiosity in graphics, making a full-spectrum method prohibitively expensive. Nevertheless, using full spectra is attractive because of the many important phenomena that can be modeled only by using all the physics at hand. We apply to the task of spectral multiplication a method previously used in modeling RGB-based light propagation. We show that we can often multiply spectra without carrying out spectral multiplication. In previous work [J. Opt. Soc. Am. A 11, 1553 (1994)] we developed a method called spectral sharpening, which took camera RGBs to a special sharp basis that was designed to render illuminant change simple to model. Specifically, in the new basis, one can effectively model illuminant change by using a diagonal matrix rather than the 3 × 3 linear transform that results from a three-component finite-dimensional model [G. Healey and D. Slater, J. Opt. Soc. Am. A 11, 3003 (1994)]. We apply this idea of sharpening to the set of principal components vectors derived from a representative set of spectra that might reasonably be encountered in a given application. With respect to the sharp spectral basis, we show that spectral multiplications can be modeled as the multiplication of the basis coefficients. These new product coefficients applied to the sharp basis serve to accurately reconstruct the spectral product. Although the method is quite general, we show how to use spectral modeling by taking advantage of metameric surfaces, ones that match under one light but not another, for tasks such as volume rendering. The use of metamers allows a user to pick out or merge different volume structures in real time simply by changing the lighting.
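The reason a sharp basis makes spectral products cheap can be shown in the idealized limit: if the basis functions had perfectly disjoint support (the limiting case of spectral sharpening), the component-wise product of two spectra would be represented exactly by the component-wise product of their basis coefficients. Real sharpened bases only approximate this; the box-function basis below is an idealization, not the paper's sharpened principal components.

```python
import numpy as np

n_bins, n_basis = 30, 3
basis = np.zeros((n_bins, n_basis))
for k in range(n_basis):                       # box functions with disjoint support
    basis[10 * k: 10 * (k + 1), k] = 1.0

c1 = np.array([0.8, 0.3, 0.5])                 # basis coefficients of spectrum 1
c2 = np.array([0.2, 0.9, 0.4])                 # basis coefficients of spectrum 2

s1, s2 = basis @ c1, basis @ c2                # reconstruct the full spectra
product_full = s1 * s2                         # expensive per-wavelength product
product_cheap = basis @ (c1 * c2)              # coefficient-wise product instead
# For this disjoint-support basis the two reconstructions agree exactly.
```

In a renderer this replaces an n_bins-long multiplication per interaction with an n_basis-long one, which is what makes full-spectrum ray tracing and radiosity affordable.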
