Results 1 - 20 of 25
1.
Entropy (Basel) ; 26(3)2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38539782

ABSTRACT

The partial information decomposition (PID) framework is concerned with decomposing the information that a set of (two or more) random variables (the sources) has about another variable (the target) into three types of information: unique, redundant, and synergistic. Classical information theory alone does not provide a unique way to decompose information in this manner and additional assumptions have to be made. One often overlooked way to achieve this decomposition is using a so-called measure of union information, which quantifies the information that is present in at least one of the sources, and from which a synergy measure stems. In this paper, we introduce a new measure of union information based on adopting a communication channel perspective, compare it with existing measures, and study some of its properties. We also include a comprehensive critical review of characterizations of union information and synergy measures that have been proposed in the literature.
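To see concretely why classical information theory alone cannot separate these components, consider the canonical XOR example: neither source alone carries any information about the target, yet together they determine it fully, so the single bit must be counted as purely synergistic. A minimal numpy sketch (not from the paper) verifying the three classical mutual-information quantities:

```python
import numpy as np

def mi(joint):
    """Mutual information (in bits) from a 2-D joint probability table."""
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz])))

# T = S1 xor S2, with S1, S2 independent and uniform: p(s1, s2, t).
p = np.zeros((2, 2, 2))
for s1 in (0, 1):
    for s2 in (0, 1):
        p[s1, s2, s1 ^ s2] = 0.25

i1 = mi(p.sum(axis=1))     # I(S1; T) = 0: each source alone is useless
i2 = mi(p.sum(axis=0))     # I(S2; T) = 0
i12 = mi(p.reshape(4, 2))  # I((S1, S2); T) = 1 bit, entirely synergistic
```

Any decomposition satisfying the Williams-Beer axioms must assign this bit to the synergy term, since both unique and redundant contributions are bounded by the individual mutual informations, which are zero here.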

2.
Entropy (Basel) ; 25(7)2023 Jun 25.
Article in English | MEDLINE | ID: mdl-37509922

ABSTRACT

The partial information decomposition (PID) framework is concerned with decomposing the information that a set of random variables has with respect to a target variable into three types of components: redundant, synergistic, and unique. Classical information theory alone does not provide a unique way to decompose information in this manner, and additional assumptions have to be made. Recently, Kolchinsky proposed a new general axiomatic approach to obtain measures of redundant information based on choosing an order relation between information sources (equivalently, order between communication channels). In this paper, we exploit this approach to introduce three new measures of redundant information (and the resulting decompositions) based on well-known preorders between channels, contributing to the enrichment of the PID landscape. We relate the new decompositions to existing ones, study several of their properties, and provide examples illustrating their novelty. As a side result, we prove that any preorder that satisfies Kolchinsky's axioms yields a decomposition that meets the axioms originally introduced by Williams and Beer when they first proposed PID.

3.
Neural Netw ; 127: 193-203, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32387926

ABSTRACT

In this paper, we introduce a neural network framework for semi-supervised clustering with pairwise (must-link or cannot-link) constraints. In contrast to existing approaches, we decompose semi-supervised clustering into two simpler classification tasks: the first stage uses a pair of Siamese neural networks to label the unlabeled pairs of points as must-link or cannot-link; the second stage uses the fully pairwise-labeled dataset produced by the first stage in a supervised neural-network-based clustering method. The proposed approach is motivated by the observation that binary classification (such as assigning pairwise relations) is usually easier than multi-class clustering with partial supervision. On the other hand, being classification-based, our method solves only well-defined classification problems, rather than less well specified clustering tasks. Extensive experiments on various datasets demonstrate the high performance of the proposed method.
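To see why the second stage is well posed once every pair is labeled, note that with perfect must-link/cannot-link labels the clusters are exactly the connected components of the must-link graph. A toy numpy sketch, with an oracle standing in for the first-stage Siamese networks (names and setup are illustrative, not the paper's code):

```python
import numpy as np

def clusters_from_pairwise(must_link):
    """Recover cluster labels as connected components of the must-link graph
    (the idealized version of the second classification stage)."""
    n = must_link.shape[0]
    labels = -np.ones(n, dtype=int)
    cur = 0
    for i in range(n):
        if labels[i] < 0:               # start a new component
            stack = [i]
            labels[i] = cur
            while stack:                # depth-first traversal
                j = stack.pop()
                for k in np.flatnonzero(must_link[j]):
                    if labels[k] < 0:
                        labels[k] = cur
                        stack.append(k)
            cur += 1
    return labels

y = np.array([0, 0, 1, 1, 2])                      # ground-truth clusters
M = (y[:, None] == y[None, :]).astype(int)         # oracle pairwise labels
lab = clusters_from_pairwise(M)
```

In practice the first-stage labels are noisy, which is why the paper uses a learned clustering network rather than hard graph components; the sketch only illustrates why fully pairwise-labeled data makes the clustering task well defined.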


Subject(s)
Neural Networks, Computer , Supervised Machine Learning , Cluster Analysis , Databases, Factual/trends , Supervised Machine Learning/trends
4.
Article in English | MEDLINE | ID: mdl-31021796

ABSTRACT

This paper introduces a new approach to patch-based image restoration based on external datasets and importance sampling. The minimum mean squared error (MMSE) estimate of the image patches, the computation of which requires solving a multidimensional (typically intractable) integral, is approximated using samples from an external dataset. The new method, which can be interpreted as a generalization of the external non-local means (NLM), uses self-normalized importance sampling to efficiently approximate the MMSE estimates. The use of self-normalized importance sampling endows the proposed method with great flexibility, namely regarding the statistical properties of the measurement noise. The effectiveness of the proposed method is shown in a series of experiments using both generic large-scale and class-specific external datasets.
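A minimal sketch of the core self-normalized importance sampling step, under the assumption of additive Gaussian noise and with a small synthetic stand-in for the external dataset (function name and toy data are illustrative, not the paper's implementation):

```python
import numpy as np

def snis_mmse(y, patches, sigma):
    """Approximate the MMSE estimate of a clean patch from noisy observation y,
    using candidate patches from an external dataset as importance samples
    with self-normalized weights proportional to the Gaussian likelihood."""
    log_w = -np.sum((patches - y) ** 2, axis=1) / (2 * sigma ** 2)
    log_w -= log_w.max()          # stabilize before exponentiation
    w = np.exp(log_w)
    w /= w.sum()                  # self-normalization
    return w @ patches            # weighted average approximates the MMSE

rng = np.random.default_rng(0)
clean = np.ones(16)                                       # toy "patch"
noisy = clean + 0.1 * rng.standard_normal(16)
dataset = clean + 0.05 * rng.standard_normal((500, 16))   # external samples
est = snis_mmse(noisy, dataset, sigma=0.1)
```

The self-normalization is what gives the flexibility noted in the abstract: only the likelihood of the noise model enters the weights, so non-Gaussian noise only changes the `log_w` line.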

5.
Article in English | MEDLINE | ID: mdl-30222572

ABSTRACT

We propose a new approach to image fusion, inspired by the recent plug-and-play (PnP) framework. In PnP, a denoiser is treated as a black-box and plugged into an iterative algorithm, taking the place of the proximity operator of some convex regularizer, which is formally equivalent to a denoising operation. This approach offers flexibility and excellent performance, but convergence may be hard to analyze, as most state-of-the-art denoisers lack an explicit underlying objective function. Here, we propose using a scene-adapted denoiser (i.e., targeted to the specific scene being imaged) plugged into the iterations of the alternating direction method of multipliers (ADMM). This approach, which is a natural choice for image fusion problems, not only yields state-of-the-art results, but it also allows proving convergence of the resulting algorithm. The proposed method is tested on two different problems: hyperspectral fusion/sharpening and fusion of blurred-noisy image pairs.
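The PnP-ADMM iteration described above can be sketched in a few lines. Here a simple moving-average filter stands in for the black-box denoiser and the data term is plain denoising fidelity; both are illustrative simplifications, not the scene-adapted denoiser or fusion model of the paper:

```python
import numpy as np

def box_denoiser(z, k=5):
    """Stand-in 'black-box' denoiser: a simple moving average."""
    return np.convolve(z, np.ones(k) / k, mode="same")

def pnp_admm(y, denoiser, rho=1.0, iters=30):
    """PnP-ADMM for the data term ||x - y||^2 / 2: the denoiser takes the
    place of the proximity operator of the (implicit) regularizer."""
    x, v, u = y.copy(), y.copy(), np.zeros_like(y)
    for _ in range(iters):
        x = (y + rho * (v - u)) / (1 + rho)  # prox of the quadratic data term
        v = denoiser(x + u)                  # denoiser replaces the prox step
        u = u + x - v                        # scaled dual (multiplier) update
    return x

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
signal = np.sin(2 * np.pi * t)
noisy = signal + 0.5 * rng.standard_normal(t.size)
rec = pnp_admm(noisy, box_denoiser)
```

The convergence question raised in the abstract is visible here: the moving average has no explicit objective function it is the prox of, which is exactly why the paper's use of a denoiser class with provable properties matters.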

6.
Appl Spectrosc ; 71(6): 1148-1156, 2017 Jun.
Article in English | MEDLINE | ID: mdl-27852875

ABSTRACT

The monitoring of biopharmaceutical products using Fourier transform infrared (FT-IR) spectroscopy relies on calibration techniques involving the acquisition of spectra of bioprocess samples along the process. The most commonly used method for that purpose is partial least squares (PLS) regression, under the assumption that a linear model is valid. Despite being successful in the presence of small nonlinearities, linear methods may fail in the presence of strong nonlinearities. This paper studies the potential usefulness of nonlinear regression methods for predicting, from in situ near-infrared (NIR) and mid-infrared (MIR) spectra acquired in high-throughput mode, biomass and plasmid concentrations in Escherichia coli DH5-α cultures producing the plasmid model pVAX-LacZ. The linear methods PLS and ridge regression (RR) are compared with their kernel (nonlinear) versions, kPLS and kRR, as well as with the (also nonlinear) relevance vector machine (RVM) and Gaussian process regression (GPR). For the systems studied, RR provided better predictive performance than the remaining methods. Moreover, the results suggest further investigation on larger data sets in those cases where no difference in predictive accuracy could be found between a linear method and its kernelized version. The benefit of nonlinear methods should, however, be weighed against the additional computational cost of tuning their extra parameters, especially when the less computationally demanding linear methods studied here are able to successfully monitor the variables of interest.
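For readers unfamiliar with the kernelized variants, a minimal sketch of kernel ridge regression with an RBF kernel, the nonlinear counterpart of RR compared in the study (toy data; hyperparameters are illustrative, not those tuned in the paper):

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """RBF (Gaussian) kernel matrix between row-sample matrices A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit_predict(X, y, Xq, gamma=1.0, alpha=1e-3):
    """Kernel ridge regression: solve (K + alpha*I) c = y, predict K(q, X) c.
    With a linear kernel this reduces to ordinary ridge regression (RR)."""
    coef = np.linalg.solve(rbf(X, X, gamma) + alpha * np.eye(len(X)), y)
    return rbf(Xq, X, gamma) @ coef

X = np.linspace(-2.0, 2.0, 80)[:, None]
y = np.sinc(X[:, 0])                    # a mildly nonlinear target
pred = krr_fit_predict(X, y, X)
```

The extra parameters referred to in the abstract are visible here: besides the regularization weight `alpha` shared with RR, the kernel version also requires tuning `gamma`.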


Subject(s)
Bioreactors , Nonlinear Dynamics , Plasmids , Spectroscopy, Fourier Transform Infrared , Biomass , Escherichia coli/genetics , Escherichia coli/metabolism , Plasmids/genetics , Plasmids/metabolism
7.
IEEE Trans Image Process ; 23(1): 466-77, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24144664

ABSTRACT

This paper presents a new method to estimate the parameters of two types of blur, linear uniform motion (approximated by a line characterized by angle and length) and out-of-focus (modeled as a uniform disk characterized by its radius), for blind restoration of natural images. The method is based on the spectrum of the blurred images and relies on a weak assumption that is valid for most natural images: the power spectrum is approximately isotropic and has a power-law decay with spatial frequency. We introduce two modifications to the Radon transform, which allow the identification of the blur spectrum pattern of the two types of blur mentioned above. The blur parameters are identified by fitting an appropriate function that accounts separately for the natural image spectrum and the blur frequency response. The accuracy of the proposed method is validated by simulations, and its effectiveness is assessed by testing the algorithm on real natural blurred images and comparing it with state-of-the-art blind deconvolution methods.


Subject(s)
Algorithms , Artifacts , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Computer Simulation , Linear Models , Reproducibility of Results , Sensitivity and Specificity
8.
IEEE Trans Image Process ; 22(7): 2751-63, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23591491

ABSTRACT

Image deblurring (ID) is an ill-posed problem typically addressed by using regularization, or prior knowledge, on the unknown image (and also on the blur operator, in the blind case). ID is often formulated as an optimization problem, where the objective function includes a data term encouraging the estimated image (and blur, in blind ID) to explain the observed data well (typically, the squared norm of a residual) plus a regularizer that penalizes solutions deemed undesirable. The performance of this approach depends critically (among other things) on the relative weight of the regularizer (the regularization parameter) and on the number of iterations of the algorithm used to address the optimization problem. In this paper, we propose new criteria for adjusting the regularization parameter and/or the number of iterations of ID algorithms. The rationale is that if the recovered image (and blur, in blind ID) is well estimated, the residual image is spectrally white; conversely, a poorly deblurred image typically exhibits structured artifacts (e.g., ringing, oversmoothness), yielding residuals that are not spectrally white. The proposed criterion is particularly well suited to a recent blind ID algorithm that uses continuation, i.e., slowly decreases the regularization parameter along the iterations; in this case, choosing this parameter and deciding when to stop are one and the same thing. Our experiments show that the proposed whiteness-based criteria yield improvements in SNR, on average, only 0.15 dB below those obtained by (clairvoyantly) stopping the algorithm at the best SNR. We also illustrate the proposed criteria on non-blind ID, reporting results that are competitive with state-of-the-art criteria (such as Monte Carlo-based GSURE and projected SURE), which, however, are not applicable for blind ID.
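The whiteness rationale can be illustrated with a crude 1-D measure: the energy of the normalized residual autocorrelation at nonzero lags, which is near zero for a white residual and large for a structured one. This is a sketch of the idea only, not the paper's exact criterion:

```python
import numpy as np

def whiteness_score(residual):
    """Crude whiteness measure: mean squared normalized autocorrelation
    at nonzero lags (near 0 for a spectrally white residual)."""
    r = residual - residual.mean()
    ac = np.correlate(r, r, mode="full") / np.dot(r, r)
    mid = len(ac) // 2
    lags = np.concatenate([ac[:mid], ac[mid + 1:]])  # drop zero lag (= 1)
    return float(np.mean(lags ** 2))

rng = np.random.default_rng(2)
white = rng.standard_normal(1000)                           # white residual
structured = np.convolve(white, np.ones(10) / 10, "same")   # correlated one
w_white = whiteness_score(white)
w_struct = whiteness_score(structured)
```

A parameter-selection rule in this spirit would sweep the regularization parameter (or iteration count) and keep the setting whose residual scores lowest.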

9.
IEEE Trans Image Process ; 22(5): 1712-25, 2013 May.
Article in English | MEDLINE | ID: mdl-23193235

ABSTRACT

The analysis of moving objects in image sequences (video) has been one of the major themes in computer vision. In this paper, we focus on video-surveillance tasks; more specifically, we consider pedestrian trajectories and propose modeling them through a small set of motion/vector fields together with a space-varying switching mechanism. Despite the diversity of motion patterns that can occur in a given scene, we show that it is often possible to find a relatively small number of typical behaviors, and model each of these behaviors by a "simple" motion field. We increase the expressiveness of the formulation by allowing the trajectories to switch from one motion field to another, in a space-dependent manner. We present an expectation-maximization algorithm to learn all the parameters of the model, and apply it to trajectory classification tasks. Experiments with both synthetic and real data support the claims about the performance of the proposed approach.

10.
J Integr Bioinform ; 9(3): 207, 2012 Jul 24.
Article in English | MEDLINE | ID: mdl-22829578

ABSTRACT

Biclustering has been recognized as a remarkably effective method for discovering local temporal expression patterns and unraveling potential regulatory mechanisms, essential to understanding complex biomedical processes, such as disease progression and drug response. In this work, we propose a classification approach based on meta-biclusters (a set of similar biclusters) applied to prognostic prediction. We use real clinical expression time series to predict the response of patients with multiple sclerosis to treatment with interferon-β. As compared to previous approaches, the main advantages of this strategy are the interpretability of the results and the reduction of data dimensionality, due to biclustering. This would allow the identification of the genes and time points which are most promising for explaining different types of response profiles, according to clinical knowledge. We assess the impact of different unsupervised and supervised discretization techniques on the classification accuracy. The experimental results show that, in many cases, the use of these discretization methods improves the classification accuracy, as compared to the use of the original features.


Subject(s)
Algorithms , Computational Biology/methods , Gene Expression Regulation , Cluster Analysis , Humans , Time Factors , Workflow
11.
IEEE Trans Image Process ; 20(3): 681-95, 2011 Mar.
Article in English | MEDLINE | ID: mdl-20840899

ABSTRACT

We propose a new fast algorithm for solving one of the standard approaches to ill-posed linear inverse problems (IPLIP), where a (possibly nonsmooth) regularizer is minimized under the constraint that the solution explains the observations sufficiently well. Although the regularizer and constraint are usually convex, several particular features of these problems (huge dimensionality, nonsmoothness) preclude the use of off-the-shelf optimization tools and have stimulated a considerable amount of research. In this paper, we propose a new efficient algorithm to handle one class of constrained problems (often known as basis pursuit denoising) tailored to image recovery applications. The proposed algorithm, which belongs to the family of augmented Lagrangian methods, can be used to deal with a variety of imaging IPLIP, including deconvolution and reconstruction from compressive observations (such as MRI), using either total-variation or wavelet-based (or, more generally, frame-based) regularization. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which sufficient conditions for convergence are known; we show that these conditions are satisfied by the proposed algorithm. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is a strong contender for the state-of-the-art.

12.
J Acoust Soc Am ; 128(4): 1747-54, 2010 Oct.
Article in English | MEDLINE | ID: mdl-20968348

ABSTRACT

Low noise surfaces have been increasingly considered as a viable and cost-effective alternative to acoustical barriers. However, road planners and administrators frequently lack information on the correlation between the type of road surface and the resulting noise emission profile. To address this problem, a method to identify and classify different types of road pavements was developed, whereby near field road noise is analyzed using statistical learning methods. The vehicle rolling sound signal near the tires and close to the road surface was acquired by two microphones in a special arrangement which implements the Close-Proximity method. A set of features, characterizing the properties of the road pavement, was extracted from the corresponding sound profiles. A feature selection method was used to automatically select those that are most relevant in predicting the type of pavement, while reducing the computational cost. A set of different types of road pavement segments were tested and the performance of the classifier was evaluated. Results of pavement classification performed during a road journey are presented on a map, together with geographical data. This procedure leads to a considerable improvement in the quality of road pavement noise data, thereby increasing the accuracy of road traffic noise prediction models.


Subject(s)
Automobiles , City Planning , Hydrocarbons , Models, Statistical , Noise, Transportation , Signal Processing, Computer-Assisted , Acoustics/instrumentation , Fourier Analysis , Porosity , Pressure , Sound Spectrography
13.
IEEE Trans Image Process ; 19(12): 3133-45, 2010 Dec.
Article in English | MEDLINE | ID: mdl-20833604

ABSTRACT

Much research has been devoted to the problem of restoring Poissonian images, notably for medical and astronomical applications. However, the restoration of these images using state-of-the-art regularizers (such as those based upon multiscale representations or total variation) is still an active research area, since the associated optimization problems are quite challenging. In this paper, we propose an approach to deconvolving Poissonian images, which is based upon an alternating direction optimization method. The standard regularization [or maximum a posteriori (MAP)] restoration criterion, which combines the Poisson log-likelihood with a (nonsmooth) convex regularizer (log-prior), leads to hard optimization problems: the log-likelihood is nonquadratic and nonseparable, the regularizer is nonsmooth, and there is a nonnegativity constraint. Using standard convex analysis tools, we present sufficient conditions for existence and uniqueness of solutions of these optimization problems, for several types of regularizers: total-variation, frame-based analysis, and frame-based synthesis. We attack these problems with an instance of the alternating direction method of multipliers (ADMM), which belongs to the family of augmented Lagrangian algorithms. We study sufficient conditions for convergence and show that these are satisfied, either under total-variation or frame-based (analysis and synthesis) regularization. The resulting algorithms are shown to outperform alternative state-of-the-art methods, both in terms of speed and restoration accuracy.


Subject(s)
Algorithms , Image Enhancement/methods , Pattern Recognition, Automated/methods
14.
IEEE Trans Image Process ; 19(9): 2345-56, 2010 Sep.
Article in English | MEDLINE | ID: mdl-20378469

ABSTRACT

We propose a new fast algorithm for solving one of the standard formulations of image restoration and reconstruction which consists of an unconstrained optimization problem where the objective includes an l2 data-fidelity term and a nonsmooth regularizer. This formulation allows both wavelet-based (with orthogonal or frame-based representations) and total-variation regularization. Our approach is based on a variable splitting to obtain an equivalent constrained optimization formulation, which is then addressed with an augmented Lagrangian method. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which convergence has been proved. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is faster than current state-of-the-art methods.

15.
IEEE Trans Image Process ; 19(7): 1720-30, 2010 Jul.
Article in English | MEDLINE | ID: mdl-20215071

ABSTRACT

Multiplicative noise (also known as speckle noise) models are central to the study of coherent imaging systems, such as synthetic aperture radar and sonar, and ultrasound and laser imaging. These models introduce two additional layers of difficulties with respect to the standard Gaussian additive noise scenario: (1) the noise is multiplied by (rather than added to) the original image; (2) the noise is not Gaussian, with Rayleigh and Gamma being commonly used densities. These two features of multiplicative noise models preclude the direct application of most state-of-the-art algorithms, which are designed for solving unconstrained optimization problems where the objective has two terms: a quadratic data term (log-likelihood), reflecting the additive and Gaussian nature of the noise, plus a convex (possibly nonsmooth) regularizer (e.g., a total variation or wavelet-based regularizer/prior). In this paper, we address these difficulties by: (1) converting the multiplicative model into an additive one by taking logarithms, as proposed by some other authors; (2) using variable splitting to obtain an equivalent constrained problem; and (3) dealing with this optimization problem using the augmented Lagrangian framework. A set of experiments shows that the proposed method, which we name MIDAL (multiplicative image denoising by augmented Lagrangian), yields state-of-the-art results both in terms of speed and denoising performance.
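Step (1), the log transform, can be illustrated directly: taking logarithms converts unit-mean Gamma speckle into additive (non-Gaussian) noise, after which additive-noise machinery applies. The snippet below uses a toy constant image; in practice a bias correction accompanies the final exponentiation, since the log of unit-mean speckle has a nonzero mean:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.full(10000, 4.0)                  # constant toy "image"
L = 8                                    # number of looks (Gamma shape)
speckle = rng.gamma(L, 1.0 / L, x.size)  # unit-mean Gamma speckle
y = x * speckle                          # multiplicative observation model
z = np.log(y)                            # additive: z = log(x) + log(speckle)
x_hat = np.exp(z.mean())                 # naive estimate of the constant x
```

The slight downward bias of `x_hat` (since E[log(speckle)] < 0 for unit-mean Gamma noise) is exactly the kind of effect a log-domain method must compensate for.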

16.
Anal Chem ; 82(4): 1462-9, 2010 Feb 15.
Article in English | MEDLINE | ID: mdl-20095581

ABSTRACT

A rapid detection of the nonauthenticity of suspect tablets is a key first step in the fight against pharmaceutical counterfeiting. The chemical characterization of these tablets is the logical next step to evaluate their impact on patient health and help authorities in tracking their source. Hyperspectral unmixing of near-infrared (NIR) image data is an emerging effective technology to infer the number of compounds, their spectral signatures, and the mixing fractions in a given tablet, with a resolution of a few tens of micrometers. In a linear mixing scenario, hyperspectral vectors belong to a simplex whose vertices correspond to the spectra of the compounds present in the sample. SISAL (simplex identification via split augmented Lagrangian), MVSA (minimum volume simplex analysis), and MVES (minimum-volume enclosing simplex) are recent algorithms designed to identify the vertices of the minimum volume simplex containing the spectral vectors and the mixing fractions at each pixel (vector). This work demonstrates the usefulness of these techniques, based on minimum volume criteria, for unmixing NIR hyperspectral data of tablets. The experiments herein reported show that SISAL/MVSA and MVES largely outperform MCR-ALS (multivariate curve resolution-alternating least-squares), which is considered the state-of-the-art in spectral unmixing for analytical chemistry. These experiments are based on synthetic data (studying the effect of noise and the presence/absence of pure pixels) and on a real data set composed of NIR images of counterfeit tablets.
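The hard problem the cited algorithms solve is identifying the simplex vertices themselves; the sketch below only illustrates the easier inverse step of recovering mixing fractions once the endmember (pure-compound) spectra are known, on synthetic data, with the sum-to-one constraint appended as an extra least-squares equation (nonnegativity not enforced):

```python
import numpy as np

rng = np.random.default_rng(6)
M = rng.random((30, 3))                # 3 endmember spectra over 30 bands
a_true = np.array([0.6, 0.3, 0.1])     # mixing fractions at one pixel
y = M @ a_true                         # linear mixing model: y = M a

# Recover the fractions given the endmembers: least squares with the
# sum-to-one constraint appended as one extra equation.
A = np.vstack([M, np.ones((1, 3))])
b = np.concatenate([y, [1.0]])
a_hat = np.linalg.lstsq(A, b, rcond=None)[0]
```

In the minimum-volume methods (SISAL, MVSA, MVES), the matrix `M` is itself unknown and is estimated as the vertex set of the smallest simplex enclosing the observed spectral vectors; once those vertices are found, the per-pixel fractions follow from a step like the one above.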


Subject(s)
Fraud , Pharmaceutical Preparations/analysis , Pharmaceutical Preparations/chemistry , Spectrophotometry, Infrared , Tablets , Time Factors
17.
Anal Chim Acta ; 641(1-2): 46-51, 2009 May 08.
Article in English | MEDLINE | ID: mdl-19393365

ABSTRACT

According to the WHO definition for counterfeit medicines, several categories can be established, e.g., medicines containing the correct active pharmaceutical ingredient (API) but different excipients, medicines containing low levels of API, no API or even a substitute API. Obviously, these different scenarios will have different detrimental effects on a patient's health. Establishing the degree of risk to the patient through determination of the composition of counterfeit medicines found in the market place is thus of paramount importance. In this work, classical least squares was used for predicting the composition of counterfeit Heptodin tablets found in a market survey. Near infrared chemical imaging (NIR-CI) was used as a non-destructive measurement technique. No prior knowledge about the origin and composition of the tablets was available. Good API (i.e., lamivudine) predictions were obtained, especially for tablets containing a high API (close to the authentic) dose. Concentration maps of each pure material, i.e., the API (lamivudine) and the excipients microcrystalline cellulose, sodium starch glycollate, rice starch and talc, were estimated. For every pixel in all 12 counterfeit tablets, less than 1% of the energy was left unexplained by the model (residual percentage). The similarities among tablets with respect to the total API percentage determined, as well as the corresponding concentration maps, support the classification of the tablets into the different groups obtained in previous work.


Subject(s)
Chemistry, Pharmaceutical/methods , Fraud , Spectroscopy, Near-Infrared/methods , Tablets/chemistry , Antiviral Agents/analysis , Humans , Lamivudine/analysis , Least-Squares Analysis
18.
IEEE Trans Image Process ; 16(12): 2980-91, 2007 Dec.
Article in English | MEDLINE | ID: mdl-18092597

ABSTRACT

Standard formulations of image/signal deconvolution under wavelet-based priors/regularizers lead to very high-dimensional optimization problems involving the following difficulties: the non-Gaussian (heavy-tailed) wavelet priors lead to objective functions which are nonquadratic, usually nondifferentiable, and sometimes even nonconvex; the presence of the convolution operator destroys the separability which underlies the simplicity of wavelet-based denoising. This paper presents a unified view of several recently proposed algorithms for handling this class of optimization problems, placing them in a common majorization-minimization (MM) framework. One of the classes of algorithms considered (when using quadratic bounds on nondifferentiable log-priors) shares the infamous "singularity issue" (SI) of "iteratively reweighted least squares" (IRLS) algorithms: the possibility of having to handle infinite weights, which may cause both numerical and convergence issues. In this paper, we prove several new results which strongly support the claim that the SI does not compromise the usefulness of this class of algorithms. Exploiting the unified MM perspective, we introduce a new algorithm, resulting from using l1 bounds for nonconvex regularizers; the experiments confirm the superior performance of this method, when compared to the one based on quadratic majorization. Finally, an experimental comparison of the several algorithms, reveals their relative merits for different standard types of scenarios.
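The quadratic bound that produces the IRLS weights, and its "singularity issue", can be seen in a tiny denoising instance: majorizing lam*||x||_1 via |x| <= (x^2/|x_k| + |x_k|)/2 yields 1/|x_k| weights that blow up as x_k approaches 0. The sketch below (not the paper's algorithms) handles the singularity with a simple floor:

```python
import numpy as np

def irls_l1_denoise(y, lam, iters=50, eps=1e-8):
    """MM denoising of min ||x - y||^2/2 + lam*||x||_1 via the quadratic
    bound |x| <= (x^2/|x_k| + |x_k|)/2; the 1/|x_k| IRLS weights are the
    'singularity issue', sidestepped here by flooring |x_k| at eps."""
    x = y.copy()
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(x), eps)  # IRLS weights (may be huge)
        x = y / (1.0 + lam * w)               # closed-form majorizer minimum
    return x

y = np.array([3.0, 0.5, -2.0, 0.05])
x = irls_l1_denoise(y, lam=1.0)
```

For this separable problem the MM iteration converges to the soft-thresholding solution, with small entries driven to (numerically) zero, illustrating why the infinite weights need not compromise the method.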


Subject(s)
Algorithms , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Signal Processing, Computer-Assisted , Information Storage and Retrieval/methods , Numerical Analysis, Computer-Assisted , Reproducibility of Results , Sensitivity and Specificity
19.
IEEE Trans Image Process ; 16(12): 2992-3004, 2007 Dec.
Article in English | MEDLINE | ID: mdl-18092598

ABSTRACT

Iterative shrinkage/thresholding (IST) algorithms have been recently proposed to handle a class of convex unconstrained optimization problems arising in image restoration and other linear inverse problems. This class of problems results from combining a linear observation model with a nonquadratic regularizer (e.g., total variation or wavelet-based regularization). It happens that the convergence rate of these IST algorithms depends heavily on the linear observation operator, becoming very slow when this operator is ill-conditioned or ill-posed. In this paper, we introduce two-step IST (TwIST) algorithms, exhibiting much faster convergence rate than IST for ill-conditioned problems. For a vast class of nonquadratic convex regularizers (l(p) norms, some Besov norms, and total variation), we show that TwIST converges to a minimizer of the objective function, for a given range of values of its parameters. For noninvertible observation operators, we introduce a monotonic version of TwIST (MTwIST); although the convergence proof does not apply to this scenario, we give experimental evidence that MTwIST exhibits similar speed gains over IST. The effectiveness of the new methods is experimentally confirmed on problems of image deconvolution and of restoration with missing samples.
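For reference, the one-step IST iteration that TwIST accelerates is just a gradient step on the data term followed by soft thresholding. A minimal numpy sketch on a synthetic sparse recovery problem (plain IST with an l1 regularizer, not TwIST; problem sizes and lam are illustrative):

```python
import numpy as np

def soft(v, t):
    """Soft thresholding: the proximity operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ist(A, y, lam, iters=200):
    """IST for min_x ||y - A x||^2 / 2 + lam * ||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft(x + step * (A.T @ (y - A @ x)), step * lam)
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((50, 100)) / np.sqrt(50)   # well-conditioned operator
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [3.0, -2.0, 4.0]             # sparse ground truth
y = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = ist(A, y, lam=0.05)
```

TwIST modifies the update to combine the two previous iterates (a two-step recursion), which is what yields the large speedups on ill-conditioned operators; the well-conditioned toy above is just to show the basic iteration converging.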


Subject(s)
Algorithms , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Signal Processing, Computer-Assisted , Reproducibility of Results , Sensitivity and Specificity
20.
IEEE Trans Pattern Anal Mach Intell ; 27(6): 957-68, 2005 Jun.
Article in English | MEDLINE | ID: mdl-15943426

ABSTRACT

Recently developed methods for learning sparse classifiers are among the state-of-the-art in supervised learning. These methods learn classifiers that incorporate weighted sums of basis functions with sparsity-promoting priors encouraging the weight estimates to be either significantly large or exactly zero. From a learning-theoretic perspective, these methods control the capacity of the learned classifier by minimizing the number of basis functions used, resulting in better generalization. This paper presents three contributions related to learning sparse classifiers. First, we introduce a true multiclass formulation based on multinomial logistic regression. Second, by combining a bound optimization approach with a component-wise update procedure, we derive fast exact algorithms for learning sparse multiclass classifiers that scale favorably in both the number of training samples and the feature dimensionality, making them applicable even to large data sets in high-dimensional feature spaces. To the best of our knowledge, these are the first algorithms to perform exact multinomial logistic regression with a sparsity-promoting prior. Third, we show how nontrivial generalization bounds can be derived for our classifier in the binary case. Experimental results on standard benchmark data sets attest to the accuracy, sparsity, and efficiency of the proposed methods.
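A miniature version of the ingredients, multinomial (softmax) log-loss plus an l1 sparsity-promoting prior, can be sketched with proximal-gradient updates standing in for the paper's exact bound-optimization algorithm (all names, data, and hyperparameters are illustrative):

```python
import numpy as np

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)   # numerical stability
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def sparse_mlr(X, y, n_classes, lam=0.01, lr=0.5, iters=500):
    """Multinomial logistic regression with an l1 (sparsity-promoting) prior,
    fit by proximal gradient: gradient step on the log-loss, then soft
    thresholding, which drives many weights exactly to zero."""
    n, d = X.shape
    W = np.zeros((d, n_classes))
    Y = np.eye(n_classes)[y]                          # one-hot labels
    for _ in range(iters):
        G = X.T @ (softmax(X @ W) - Y) / n            # log-loss gradient
        W = W - lr * G
        W = np.sign(W) * np.maximum(np.abs(W) - lr * lam, 0.0)  # l1 prox
    return W

rng = np.random.default_rng(5)
X = rng.standard_normal((300, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)         # only 2 features matter
W = sparse_mlr(X, y, n_classes=2)
acc = float(np.mean(np.argmax(softmax(X @ W), axis=1) == y))
```

The capacity-control argument in the abstract corresponds to the thresholding step: weights on uninformative features are shrunk toward exact zeros, so the learned classifier uses few basis functions.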


Subject(s)
Algorithms , Artificial Intelligence , Information Storage and Retrieval/methods , Models, Statistical , Pattern Recognition, Automated/methods , Cluster Analysis , Computer Simulation , Models, Biological , Regression Analysis