1.
IEEE Trans Neural Netw Learn Syst ; 32(5): 2209-2223, 2021 May.
Article in English | MEDLINE | ID: mdl-32609616

ABSTRACT

Nonnegative blind source separation (nBSS) is often a challenging inverse problem, namely when the mixing system is ill-conditioned. In this work, we focus on an important nBSS instance, known as hyperspectral unmixing (HU) in remote sensing. HU is a matrix factorization problem that aims to factor the observed data into the so-called endmember matrix, holding the material hyperspectral signatures, and the abundance matrix, holding the material fractions at each image pixel. The hyperspectral signatures are usually highly correlated, leading to a fast decay of the singular values (and, hence, a high condition number) of the endmember matrix, so HU often corresponds to an ill-conditioned nBSS scenario. We introduce a new theoretical framework to attack such tough scenarios via the John ellipsoid (JE) in functional analysis. The idea is to identify the maximum-volume ellipsoid inscribed in the data convex hull and then affinely map this ellipsoid into a Euclidean ball. By applying the same affine mapping to the data mixtures, we prove that the endmember matrix associated with the mapped data has condition number 1, the lowest possible, and that these (preconditioned) endmembers form a regular simplex. Exploiting this regular structure, we design a novel nBSS criterion with a provable identifiability guarantee and devise an algorithm to realize the criterion. Moreover, for the first time, the optimization problem for computing the JE is exactly solved for a large-scale instance; our solver employs a split augmented Lagrangian shrinkage algorithm with all proximal operators solved in closed form. The competitiveness of the proposed method is illustrated by numerical simulations and real-data experiments.
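As a quick illustration of the ill-conditioning that motivates the JE preconditioning, the toy NumPy snippet below (not the paper's algorithm; all sizes and signatures are synthetic assumptions) shows how highly correlated endmember columns produce a fast singular-value decay and a large condition number.

```python
import numpy as np

# Toy illustration only: correlated endmember signatures give a fast
# singular-value decay and a large condition number, i.e., the
# ill-conditioned nBSS regime that the JE preconditioning targets.
rng = np.random.default_rng(0)
L, N = 200, 4                                   # bands, endmembers (arbitrary)
base = np.abs(np.sin(np.linspace(0.0, 3.0, L)))[:, None]
A = base + 0.05 * np.abs(rng.standard_normal((L, N)))   # highly correlated columns
print("condition number:", np.linalg.cond(A))
s = np.linalg.svd(A, compute_uv=False)
print("normalized singular values:", np.round(s / s[0], 4))  # fast decay
```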

2.
IEEE Trans Cybern ; 50(10): 4469-4480, 2020 Oct.
Article in English | MEDLINE | ID: mdl-31794410

ABSTRACT

Combining a high-spatial-resolution multispectral image (HR-MSI) with a low-spatial-resolution hyperspectral image (LR-HSI) has become a common way to enhance the spatial resolution of the HSI. The existing state-of-the-art LR-HSI and HR-MSI fusion methods are mostly based on matrix factorization, where the matrix data representation may fail to fully exploit the inherent structures of the 3-D HSI. We propose a nonlocal sparse tensor factorization approach, called NLSTF_SMBF, for the semiblind fusion of HSI and MSI. The proposed method decomposes the HSI into smaller full-band patches (FBPs), which, in turn, are factored as dictionaries of the three HSI modes and a sparse core tensor. This decomposition allows the fusion problem to be solved as the estimation of a sparse core tensor and three dictionaries for each FBP. Similar FBPs are clustered together and assumed to share the same dictionaries, so as to exploit the nonlocal self-similarities of the HSI. For each group, we learn the dictionaries from the observed HR-MSI and LR-HSI, and the corresponding sparse core tensor of each FBP is computed via tensor sparse coding. Two distinctive features of NLSTF_SMBF are that: 1) it is blind with respect to the point spread function (PSF) of the hyperspectral sensor and 2) it copes with spatially variant PSFs. The experimental results provide evidence of the advantages of the NLSTF_SMBF method over the existing state-of-the-art methods, namely in semiblind scenarios.
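For intuition about the nonlocal grouping step only, here is a hedged sketch (patch size, stride, and the k-means criterion are illustrative assumptions, not the paper's exact setup) of extracting full-band patches from an HSI cube and clustering similar patches.

```python
import numpy as np
from sklearn.cluster import KMeans

# Sketch of the nonlocal grouping step: extract full-band patches (FBPs)
# from an HSI cube and cluster similar ones; each cluster would then share
# its mode dictionaries in NLSTF_SMBF.
def extract_fbps(hsi, p=6, stride=6):
    H, W, B = hsi.shape
    patches, coords = [], []
    for i in range(0, H - p + 1, stride):
        for j in range(0, W - p + 1, stride):
            patches.append(hsi[i:i + p, j:j + p, :].ravel())
            coords.append((i, j))
    return np.array(patches), coords

hsi = np.random.rand(60, 60, 31)                 # synthetic stand-in cube
fbps, coords = extract_fbps(hsi)
labels = KMeans(n_clusters=10, n_init=5).fit_predict(fbps)
```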

3.
Article in English | MEDLINE | ID: mdl-31021796

ABSTRACT

This paper introduces a new approach to patch-based image restoration based on external datasets and importance sampling. The minimum mean squared error (MMSE) estimate of the image patches, whose computation requires solving a multidimensional (typically intractable) integral, is approximated using samples from an external dataset. The new method, which can be interpreted as a generalization of external non-local means (NLM), uses self-normalized importance sampling to efficiently approximate the MMSE estimates. The use of self-normalized importance sampling endows the proposed method with great flexibility, namely regarding the statistical properties of the measurement noise. The effectiveness of the proposed method is shown in a series of experiments using both generic large-scale and class-specific external datasets.
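The core estimator can be summarized in a few lines. The sketch below assumes, purely for illustration, a Gaussian likelihood for the noisy patch; the point of the method is precisely that other noise models can be plugged in by changing the (unnormalized) weights.

```python
import numpy as np

# Self-normalized importance sampling (SNIS) approximation of the MMSE
# estimate of a clean patch x from a noisy patch y, using external clean
# patches {x_k} as samples (a generalization of external non-local means).
def snis_mmse(y, external_patches, sigma):
    # unnormalized log-weights = log-likelihood of y given each candidate patch
    logw = -np.sum((external_patches - y) ** 2, axis=1) / (2.0 * sigma ** 2)
    logw -= logw.max()                       # numerical stability
    w = np.exp(logw)
    w /= w.sum()                             # self-normalization
    return w @ external_patches              # weighted average ~ MMSE estimate
```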

4.
Sensors (Basel) ; 18(11)2018 Nov 16.
Article in English | MEDLINE | ID: mdl-30453582

ABSTRACT

This paper proposes a novel algorithm for image phase retrieval, i.e., for recovering complex-valued images from the amplitudes of noisy linear combinations (often the Fourier transform) of the sought complex images. The algorithm is developed within the alternating projection framework and is designed to achieve high performance for heavily noisy (Poissonian or Gaussian) observations. The estimation of the target images is reformulated as a sparse regression, often termed sparse coding, in the complex domain. This is accomplished by learning a complex-domain dictionary from the data it represents via matrix factorization with sparsity constraints on the code (i.e., the regression coefficients). Our algorithm, termed dictionary learning phase retrieval (DLPR), jointly learns this dictionary and reconstructs the unknown target image. The effectiveness of DLPR is illustrated through experiments conducted on complex images, simulated and real, where it shows noticeable advantages over state-of-the-art competitors.
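For context, the minimal alternating-projection core that DLPR builds on can be sketched as follows; the dictionary-learning/sparse-coding prior that defines DLPR is deliberately omitted here, so this is only the classical magnitude-constraint loop.

```python
import numpy as np

# Classical alternating-projection loop for Fourier phase retrieval:
# enforce the measured magnitudes in the Fourier domain, return to the
# image domain (DLPR would apply its learned sparse prior here), repeat.
def alt_proj_phase_retrieval(mag, x0, iters=200):
    x = x0.astype(complex)
    for _ in range(iters):
        X = np.fft.fft2(x)
        X = mag * np.exp(1j * np.angle(X))   # keep phase, replace magnitude
        x = np.fft.ifft2(X)                  # back to image domain
    return x
```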

5.
Article in English | MEDLINE | ID: mdl-30222572

ABSTRACT

We propose a new approach to image fusion, inspired by the recent plug-and-play (PnP) framework. In PnP, a denoiser is treated as a black box and plugged into an iterative algorithm, taking the place of the proximity operator of some convex regularizer, which is formally equivalent to a denoising operation. This approach offers flexibility and excellent performance, but convergence may be hard to analyze, as most state-of-the-art denoisers lack an explicit underlying objective function. Here, we propose using a scene-adapted denoiser (i.e., one targeted to the specific scene being imaged) plugged into the iterations of the alternating direction method of multipliers (ADMM). This approach, which is a natural choice for image fusion problems, not only yields state-of-the-art results but also allows proving convergence of the resulting algorithm. The proposed method is tested on two different problems: hyperspectral fusion/sharpening and fusion of blurred-noisy image pairs.
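A generic PnP-ADMM skeleton makes the idea concrete; `data_prox` and `denoise` below are user-supplied placeholders (the paper instantiates the latter with a scene-adapted denoiser), so this is a sketch of the iteration structure rather than the exact fusion algorithm.

```python
import numpy as np

# Plug-and-play ADMM skeleton: the proximity operator of the regularizer is
# replaced by a black-box denoiser.
def pnp_admm(y, data_prox, denoise, shape, rho=1.0, iters=50):
    x = np.zeros(shape); v = np.zeros(shape); u = np.zeros(shape)
    for _ in range(iters):
        x = data_prox(v - u, y, rho)      # data-fidelity subproblem
        v = denoise(x + u)                # regularization step = denoising
        u = u + x - v                     # scaled dual update
    return x
```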

6.
Article in English | MEDLINE | ID: mdl-29994767

ABSTRACT

Fusing a low-spatial-resolution hyperspectral image (LR-HSI) with a high-spatial-resolution multispectral image (HR-MSI) to obtain a high-spatial-resolution hyperspectral image (HR-HSI) has attracted increasing interest in recent years. In this paper, we propose a coupled sparse tensor factorization (CSTF)-based approach for fusing such images. In the proposed CSTF method, we consider an HR-HSI as a three-dimensional tensor and recast the fusion problem as the estimation of a core tensor and dictionaries of the three modes. The high spatial-spectral correlations in the HR-HSI are modeled by incorporating a regularizer that promotes sparse core tensors. The estimation of the dictionaries and the core tensor is formulated as a coupled tensor factorization of the LR-HSI and of the HR-MSI. Experiments on two remotely sensed HSIs demonstrate the superiority of the proposed CSTF algorithm over current state-of-the-art HSI-MSI fusion approaches.
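To make the "core tensor plus three mode dictionaries" structure concrete, here is a plain truncated HOSVD in NumPy; it is only an illustration of that decomposition, since CSTF additionally imposes sparsity on the core and couples the LR-HSI and HR-MSI observations.

```python
import numpy as np

def mode_mult(T, M, mode):
    # multiply tensor T along `mode` by matrix M (i.e., M @ mode-unfolding)
    Tm = np.moveaxis(T, mode, 0)
    out = M @ Tm.reshape(Tm.shape[0], -1)
    return np.moveaxis(out.reshape((M.shape[0],) + Tm.shape[1:]), 0, mode)

def hosvd(T, ranks):
    # truncated HOSVD: one "dictionary" (factor) per mode plus a core tensor
    factors = []
    for mode, r in enumerate(ranks):
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = mode_mult(core, U.T, mode)
    return core, factors
```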

7.
Appl Spectrosc ; 71(6): 1148-1156, 2017 Jun.
Article in English | MEDLINE | ID: mdl-27852875

ABSTRACT

The monitoring of biopharmaceutical products using Fourier transform infrared (FT-IR) spectroscopy relies on calibration techniques involving the acquisition of spectra of bioprocess samples along the process. The most commonly used method for that purpose is partial least squares (PLS) regression, under the assumption that a linear model is valid. Despite being successful in the presence of small nonlinearities, linear methods may fail in the presence of strong nonlinearities. This paper studies the potential usefulness of nonlinear regression methods for predicting, from in situ near-infrared (NIR) and mid-infrared (MIR) spectra acquired in high-throughput mode, biomass and plasmid concentrations in Escherichia coli DH5-α cultures producing the plasmid model pVAX-LacZ. The linear methods PLS and ridge regression (RR) are compared with their kernel (nonlinear) versions, kPLS and kRR, as well as with the (also nonlinear) relevance vector machine (RVM) and Gaussian process regression (GPR). For the systems studied, RR provided better predictive performance than the remaining methods. Moreover, whenever no difference in predictive accuracy between a linear method and its kernelized version could be found, the results point to further investigation on larger data sets. The use of nonlinear methods should, in any case, be weighed against the additional computational cost of tuning their extra parameters, especially when the less computationally demanding linear methods studied here are able to successfully monitor the variables of interest.


Subjects
Bioreactors, Nonlinear Dynamics, Plasmids, Fourier Transform Infrared Spectroscopy, Biomass, Escherichia coli/genetics, Escherichia coli/metabolism, Plasmids/genetics, Plasmids/metabolism
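As a rough illustration of the linear-versus-kernel comparison carried out in the study, the sketch below fits ridge regression and kernel ridge regression to synthetic spectra-like data (a stand-in for the NIR/MIR spectra; all sizes, targets, and hyperparameters are illustrative assumptions).

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 200))                          # 80 "spectra", 200 channels
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=80)    # toy target (e.g., biomass)

for name, model in [("RR", Ridge(alpha=1.0)),
                    ("kRR", KernelRidge(alpha=1.0, kernel="rbf", gamma=1e-3))]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(name, round(r2, 3))                           # cross-validated R^2
```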
8.
IEEE Trans Image Process ; 25(11): 5266-80, 2016 11.
Article in English | MEDLINE | ID: mdl-27576251

ABSTRACT

In image deconvolution problems, the diagonalization of the underlying operators by means of the fast Fourier transform (FFT) usually yields very large speedups. When there are incomplete observations (e.g., in the case of unknown boundaries), standard deconvolution techniques normally either involve non-diagonalizable operators, resulting in rather slow methods, or use inexact convolution models, resulting in artifacts in the restored images. In this paper, we propose a new deconvolution framework for images with incomplete observations that allows us to work with diagonalized convolution operators, and is therefore very fast. We iteratively alternate the estimation of the unknown pixels and of the deconvolved image, using, e.g., an FFT-based deconvolution method. This framework is an efficient, high-quality alternative to existing ways of dealing with the image boundaries, such as edge tapering, and it can be used with any fast deconvolution method. We give an example in which a state-of-the-art method that assumes periodic boundary conditions is extended, using this framework, to unknown boundary conditions. Furthermore, we propose a specific implementation of this framework, based on the alternating direction method of multipliers (ADMM). We provide a proof of convergence for the resulting algorithm, which can be seen as a "partial" ADMM, in which not all variables are dualized. We report experimental comparisons with other primal-dual methods, where the proposed one performed at the level of the state of the art. Four different kinds of applications were tested in the experiments: deconvolution, deconvolution with inpainting, superresolution, and demosaicing, all with unknown boundaries.
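The fast inner step of such a framework can be sketched as a standard FFT-domain (Wiener-type) deconvolution under periodic boundary conditions; the framework described above would alternate a step like this with re-estimation of the unknown boundary pixels (the quadratic regularization below is chosen only for brevity).

```python
import numpy as np

# FFT-based deconvolution with a quadratic (Tikhonov-like) penalty, valid
# under periodic boundary conditions, where the convolution is diagonalized.
def fft_deconv(y, psf, reg=1e-2):
    H = np.fft.fft2(psf, s=y.shape)          # frequency response of the blur
    Y = np.fft.fft2(y)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + reg)
    return np.real(np.fft.ifft2(X))
```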

9.
IEEE Trans Image Process ; 25(10): 4565-79, 2016 10.
Article in English | MEDLINE | ID: mdl-27416597

ABSTRACT

This paper presents three hyperspectral mixture models jointly with Bayesian algorithms for supervised hyperspectral unmixing. Based on the residual component analysis model, the proposed general formulation assumes the linear model to be corrupted by an additive term whose expression can be adapted to account for nonlinearities (NLs), endmember variability (EV), or mismodeling effects (MEs). The NL effect is introduced by considering a polynomial expression that is related to bilinear models. The proposed new formulation of EV accounts for shape and scale endmember changes while enforcing a smooth spectral/spatial variation. The ME formulation considers the effect of outliers and copes with some types of EV and NL. The known constraints on the parameters of each observation model are modeled via suitable priors. The posterior distribution associated with each Bayesian model is optimized using a coordinate descent algorithm, which allows the computation of the maximum a posteriori estimator of the unknown model parameters. The proposed mixture and Bayesian models and their estimation algorithms are validated on both synthetic and real images, showing competitive results regarding the quality of the inferences and the computational complexity when compared with state-of-the-art algorithms.

10.
IEEE Trans Image Process ; 25(1): 274-88, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26540685

ABSTRACT

Remote sensing hyperspectral images (HSIs) are quite often low rank, in the sense that the data belong to a low-dimensional subspace/manifold. This has recently been exploited for the fusion of low-spatial-resolution HSIs with high-spatial-resolution multispectral images in order to obtain super-resolution HSIs. Most approaches adopt an unmixing or a matrix factorization perspective. The derived methods have led to state-of-the-art results when the spectral information lies in a low-dimensional subspace/manifold. However, if the subspace/manifold dimensionality spanned by the complete data set is large, i.e., larger than the number of multispectral bands, the performance of these methods decreases, mainly because the underlying sparse regression problem is severely ill-posed. In this paper, we propose a local approach to cope with this difficulty. Fundamentally, we exploit the fact that real-world HSIs are locally low rank, that is, pixels acquired from a given spatial neighborhood span a very low-dimensional subspace/manifold, i.e., of dimension lower than or equal to the number of multispectral bands. Thus, we propose to partition the image into patches and solve the data fusion problem independently for each patch. This way, in each patch the subspace/manifold dimensionality is low enough that the problem is no longer ill-posed. We propose two alternative approaches to define the hyperspectral super-resolution through local dictionary learning using endmember induction algorithms. We also explore two alternatives to define the local regions, using sliding windows and binary partition trees. The effectiveness of the proposed approaches is illustrated with synthetic and semireal data.
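The "locally low rank" observation is easy to check numerically; the sketch below estimates the spectral subspace dimension of a small spatial patch by thresholding the cumulative singular-value energy (the threshold is an illustrative assumption).

```python
import numpy as np

# Estimate the spectral subspace dimension spanned by the pixels of a patch:
# for real HSIs this local dimension is typically much lower than the global one.
def local_rank(hsi_patch, energy=0.999):
    Y = hsi_patch.reshape(-1, hsi_patch.shape[-1]).T     # bands x pixels
    s = np.linalg.svd(Y, compute_uv=False)
    c = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(c, energy) + 1)
```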

11.
IEEE Trans Image Process ; 24(12): 5800-11, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26452285

ABSTRACT

This paper presents a new Bayesian collaborative sparse regression method for linear unmixing of hyperspectral images. Our contribution is twofold. First, we propose a new Bayesian model for structured sparse regression in which the supports of the sparse abundance vectors are a priori spatially correlated across pixels (i.e., materials are spatially organized rather than randomly distributed at a pixel level). This prior information is encoded in the model through a truncated multivariate Ising Markov random field, which also takes into consideration the facts that pixels cannot be empty (i.e., there is at least one material present in each pixel) and that different materials may exhibit different degrees of spatial regularity. Second, we propose an advanced Markov chain Monte Carlo algorithm to estimate the posterior probabilities that materials are present or absent in each pixel and, conditioned on the maximum marginal a posteriori configuration of the support, to compute the minimum mean squared error estimates of the abundance vectors. A remarkable property of this algorithm is that it self-adjusts the values of the parameters of the Markov random field, thus relieving practitioners from setting regularization parameters by cross-validation. The performance of the proposed methodology is finally demonstrated through a series of experiments with synthetic and real data and comparisons with other algorithms from the literature.

12.
IEEE Trans Neural Netw Learn Syst ; 25(10): 1894-908, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25291741

ABSTRACT

In this paper, we study the separation of synchronous sources (SSS) problem, which deals with the separation of sources whose phases are synchronous. This problem cannot be addressed through independent component analysis methods because synchronous sources are statistically dependent. We present a two-step algorithm, called phase locked matrix factorization (PLMF), to perform SSS. We also show that SSS is identifiable under some assumptions and that any global minimum of PLMF's cost function is a desirable solution for SSS. We extensively study the algorithm on simulated data and conclude that it can perform SSS with various numbers of sources and sensors and with various phase lags between the sources, both in the ideal (i.e., perfectly synchronous and noiseless) case and with various levels of additive noise in the observed signals and of phase jitter in the sources.
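A toy example shows why independence-based methods are ruled out: two perfectly phase-locked sources (a common phase trajectory with a constant lag) are strongly dependent. The signals below are synthetic and purely illustrative.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000)
phase = 2 * np.pi * 7 * t + 0.5 * np.sin(2 * np.pi * t)   # shared phase dynamics
s1 = np.cos(phase)
s2 = np.cos(phase + np.pi / 4)                            # constant lag = synchrony
print("sample correlation:", round(np.corrcoef(s1, s2)[0, 1], 3))  # far from 0
```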

13.
IEEE Trans Image Process ; 23(1): 466-77, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24144664

ABSTRACT

This paper presents a new method to estimate the parameters of two types of blur, linear uniform motion (approximated by a line characterized by its angle and length) and out-of-focus (modeled as a uniform disk characterized by its radius), for the blind restoration of natural images. The method is based on the spectrum of the blurred images and relies on a weak assumption, valid for most natural images: the power spectrum is approximately isotropic and decays according to a power law of the spatial frequency. We introduce two modifications of the Radon transform, which allow the identification of the blur spectrum pattern of the two types of blur mentioned above. The blur parameters are identified by fitting an appropriate function that accounts separately for the natural image spectrum and the blur frequency response. The accuracy of the proposed method is validated by simulations, and its effectiveness is assessed by testing the algorithm on real, naturally blurred images and comparing it with state-of-the-art blind deconvolution methods.


Subjects
Algorithms, Artifacts, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Automated Pattern Recognition/methods, Computer Simulation, Linear Models, Reproducibility of Results, Sensitivity and Specificity
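For intuition only, a crude spectrum-based heuristic in the spirit of (but much simpler than) the modified Radon transforms used in the paper can be sketched as follows; the angle-selection rule and all parameters are assumptions, not the paper's estimator.

```python
import numpy as np
from skimage.transform import radon

# Crude sketch: the log power spectrum of a motion-blurred image contains an
# oriented ripple pattern; projecting it over angles can expose the blur angle.
def blur_angle_estimate(blurred):
    F = np.fft.fftshift(np.fft.fft2(blurred))
    logspec = np.log1p(np.abs(F))
    angles = np.arange(0, 180)
    sinogram = radon(logspec, theta=angles, circle=False)
    return angles[np.argmax(sinogram.var(axis=0))]   # angle with strongest structure
```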
14.
IEEE Trans Neural Netw ; 22(9): 1419-34, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21791409

ABSTRACT

Synchrony (or phase-locking) phenomena are known to be present in multiple oscillating systems, such as electrical circuits, lasers, chemical reactions, and human neurons. If the measurements of these systems cannot detect the individual oscillators but only a superposition of them, as in brain electrophysiological signals (electro- and magnetoencephalogram), spurious phase locking will be detected. Current source-extraction techniques attempt to undo this superposition by making assumptions about the data that are not valid when the underlying sources are phase-locked. Statistical independence of the sources is one such invalid assumption, as phase-locked sources are dependent. In this paper, we introduce methods for source separation and clustering that make adequate assumptions for data where synchrony is present, and show with simulated data that they perform well even in cases where independent component analysis and other well-known source-separation methods fail. The results in this paper provide a proof of concept that synchrony-based techniques are useful for low-noise applications.


Subjects
Brain/cytology, Neurological Models, Neurons/physiology, Computer-Assisted Signal Processing, Algorithms, Brain Mapping, Cluster Analysis, Computer Simulation, Fourier Analysis, Humans, Oscillometry
15.
IEEE Trans Image Process ; 20(3): 681-95, 2011 Mar.
Article in English | MEDLINE | ID: mdl-20840899

ABSTRACT

We propose a new fast algorithm for solving one of the standard approaches to ill-posed linear inverse problems (IPLIP), where a (possibly nonsmooth) regularizer is minimized under the constraint that the solution explains the observations sufficiently well. Although the regularizer and constraint are usually convex, several particular features of these problems (huge dimensionality, nonsmoothness) preclude the use of off-the-shelf optimization tools and have stimulated a considerable amount of research. In this paper, we propose a new efficient algorithm to handle one class of constrained problems (often known as basis pursuit denoising) tailored to image recovery applications. The proposed algorithm, which belongs to the family of augmented Lagrangian methods, can be used to deal with a variety of imaging IPLIP, including deconvolution and reconstruction from compressive observations (such as MRI), using either total-variation or wavelet-based (or, more generally, frame-based) regularization. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which convergence sufficient conditions are known; we show that these conditions are satisfied by the proposed algorithm. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is a strong contender for the state-of-the-art.

16.
IEEE Trans Image Process ; 19(12): 3133-45, 2010 Dec.
Article in English | MEDLINE | ID: mdl-20833604

ABSTRACT

Much research has been devoted to the problem of restoring Poissonian images, namely for medical and astronomical applications. However, the restoration of these images using state-of-the-art regularizers (such as those based upon multiscale representations or total variation) is still an active research area, since the associated optimization problems are quite challenging. In this paper, we propose an approach to deconvolving Poissonian images, which is based upon an alternating direction optimization method. The standard regularization [or maximum a posteriori (MAP)] restoration criterion, which combines the Poisson log-likelihood with a (nonsmooth) convex regularizer (log-prior), leads to hard optimization problems: the log-likelihood is nonquadratic and nonseparable, the regularizer is nonsmooth, and there is a nonnegativity constraint. Using standard convex analysis tools, we present sufficient conditions for existence and uniqueness of solutions of these optimization problems, for several types of regularizers: total-variation, frame-based analysis, and frame-based synthesis. We attack these problems with an instance of the alternating direction method of multipliers (ADMM), which belongs to the family of augmented Lagrangian algorithms. We study sufficient conditions for convergence and show that these are satisfied, either under total-variation or frame-based (analysis and synthesis) regularization. The resulting algorithms are shown to outperform alternative state-of-the-art methods, both in terms of speed and restoration accuracy.


Subjects
Algorithms, Image Enhancement/methods, Automated Pattern Recognition/methods
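One concrete ingredient used by ADMM schemes of this kind is the closed-form proximity operator of the Poisson negative log-likelihood, which is the positive root of a per-pixel quadratic; the sketch below shows only that operator, not the full algorithm.

```python
import numpy as np

# Elementwise prox of f(z) = z - y*log(z) with parameter rho:
# argmin_z f(z) + (rho/2)(z - v)^2, given by the positive root of
# rho*z^2 + (1 - rho*v)*z - y = 0.
def prox_poisson(v, y, rho):
    b = rho * v - 1.0
    return (b + np.sqrt(b ** 2 + 4.0 * rho * y)) / (2.0 * rho)
```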
17.
IEEE Trans Image Process ; 19(9): 2345-56, 2010 Sep.
Article in English | MEDLINE | ID: mdl-20378469

ABSTRACT

We propose a new fast algorithm for solving one of the standard formulations of image restoration and reconstruction, which consists of an unconstrained optimization problem where the objective includes an l2 data-fidelity term and a nonsmooth regularizer. This formulation allows both wavelet-based (with orthogonal or frame-based representations) and total-variation regularization. Our approach is based on a variable splitting to obtain an equivalent constrained optimization formulation, which is then addressed with an augmented Lagrangian method. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which convergence has been proved. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is faster than the current state-of-the-art methods.
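A minimal ADMM instance for the simplest member of this problem class (an l2 data term plus an l1 regularizer, with a direct solve of the quadratic subproblem) looks as follows; it is a small-scale sketch, not the paper's frame-based or total-variation variants.

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# ADMM for min_x 0.5*||Ax - y||^2 + lam*||x||_1 (suitable for small problems;
# the explicit matrix inverse below would be replaced by FFTs or frames in imaging).
def admm_l1(A, y, lam, rho=1.0, iters=200):
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))
    Aty = A.T @ y
    for _ in range(iters):
        x = M @ (Aty + rho * (z - u))        # quadratic subproblem
        z = soft(x + u, lam / rho)           # l1 proximity (soft threshold)
        u = u + x - z                        # scaled dual update
    return z
```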

18.
IEEE Trans Image Process ; 19(7): 1720-30, 2010 Jul.
Article in English | MEDLINE | ID: mdl-20215071

ABSTRACT

Multiplicative noise (also known as speckle noise) models are central to the study of coherent imaging systems, such as synthetic aperture radar and sonar, and ultrasound and laser imaging. These models introduce two additional layers of difficulties with respect to the standard Gaussian additive noise scenario: (1) the noise is multiplied by (rather than added to) the original image; (2) the noise is not Gaussian, with Rayleigh and Gamma being commonly used densities. These two features of multiplicative noise models preclude the direct application of most state-of-the-art algorithms, which are designed for solving unconstrained optimization problems where the objective has two terms: a quadratic data term (log-likelihood), reflecting the additive and Gaussian nature of the noise, plus a convex (possibly nonsmooth) regularizer (e.g., a total variation or wavelet-based regularizer/prior). In this paper, we address these difficulties by: (1) converting the multiplicative model into an additive one by taking logarithms, as proposed by some other authors; (2) using variable splitting to obtain an equivalent constrained problem; and (3) dealing with this optimization problem using the augmented Lagrangian framework. A set of experiments shows that the proposed method, which we name MIDAL (multiplicative image denoising by augmented Lagrangian), yields state-of-the-art results both in terms of speed and denoising performance.
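Step (1) of the approach is easy to illustrate with a toy simulation: taking logarithms turns the multiplicative model y = x·n into an additive one, after which additive-noise machinery applies. The image and the unit-mean Gamma speckle below are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.full((64, 64), 5.0)                                # toy "clean" image
n = rng.gamma(shape=4.0, scale=1.0 / 4.0, size=x.shape)   # unit-mean Gamma speckle
y = x * n                                                 # multiplicative model
z = np.log(y)                                             # = log(x) + log(n): additive
```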

19.
Anal Chem ; 82(4): 1462-9, 2010 Feb 15.
Article in English | MEDLINE | ID: mdl-20095581

ABSTRACT

Rapid detection of the non-authenticity of suspect tablets is a key first step in the fight against pharmaceutical counterfeiting. The chemical characterization of these tablets is the logical next step to evaluate their impact on patient health and help authorities in tracking their source. Hyperspectral unmixing of near-infrared (NIR) image data is an emerging and effective technology to infer the number of compounds, their spectral signatures, and the mixing fractions in a given tablet, with a resolution of a few tens of micrometers. In a linear mixing scenario, the hyperspectral vectors belong to a simplex whose vertices correspond to the spectra of the compounds present in the sample. SISAL (simplex identification via split augmented Lagrangian), MVSA (minimum volume simplex analysis), and MVES (minimum-volume enclosing simplex) are recent algorithms designed to identify the vertices of the minimum-volume simplex containing the spectral vectors and the mixing fractions at each pixel (vector). This work demonstrates the usefulness of these techniques, based on minimum-volume criteria, for unmixing NIR hyperspectral data of tablets. The experiments herein reported show that SISAL/MVSA and MVES largely outperform MCR-ALS (multivariate curve resolution-alternating least squares), which is considered the state of the art in spectral unmixing for analytical chemistry. These experiments are based on synthetic data (studying the effect of noise and the presence/absence of pure pixels) and on a real data set composed of NIR images of counterfeit tablets.


Subjects
Fraud, Pharmaceutical Preparations/analysis, Pharmaceutical Preparations/chemistry, Infrared Spectrophotometry, Tablets, Time Factors
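As a hedged sketch of a downstream step only (not of SISAL/MVSA/MVES themselves): once the endmember spectra have been identified, per-pixel mixing fractions can be estimated with sum-to-one-augmented nonnegative least squares, as below; the augmentation weight is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import nnls

# Estimate nonnegative, approximately sum-to-one abundances for each pixel,
# given an endmember matrix E (bands x p) and data Y (bands x pixels).
def abundances(E, Y, delta=1e3):
    Ea = np.vstack([E, delta * np.ones(E.shape[1])])   # append sum-to-one row
    out = []
    for y in Y.T:
        ya = np.concatenate([y, [delta]])
        a, _ = nnls(Ea, ya)
        out.append(a)
    return np.array(out).T
```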
20.
J Opt Soc Am A Opt Image Sci Vis ; 26(9): 2093-106, 2009 Sep.
Article in English | MEDLINE | ID: mdl-19721694

ABSTRACT

An absolute phase estimation algorithm for interferometric applications is introduced. The approach is Bayesian. Besides coping with the 2π-periodic sinusoidal nonlinearity in the observations, the proposed methodology assumes a first-order Markov random field prior and a maximum a posteriori probability (MAP) viewpoint. For computing the MAP solution, we provide a combinatorial suboptimal algorithm that involves a multiprecision sequence. At the coarsest precision, it unwraps the phase using, essentially, the previously introduced PUMA algorithm [IEEE Trans. Image Process. 16, 698 (2007)], which blindly detects discontinuities and yields a piecewise-smooth unwrapped phase. In the subsequent increasing-precision iterations, the proposed algorithm denoises each piecewise-smooth region, thanks to the previously detected locations of the discontinuities. For each precision, we map the problem into a sequence of binary optimizations, which we tackle by computing min-cuts on appropriate graphs. This unified rationale for both phase unwrapping and denoising inherits the fast performance of graph min-cut algorithms. In a set of experimental results, we illustrate the effectiveness of the proposed approach.
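For basic intuition about what "unwrapping" means (the 1-D textbook case only, not the paper's multiprecision graph-cut method), the snippet below wraps a smooth phase to (-π, π] and recovers it with NumPy's 1-D unwrapping.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 500)
true_phase = 12 * np.pi * t ** 2              # smooth absolute phase
wrapped = np.angle(np.exp(1j * true_phase))   # observed: wrapped to (-pi, pi]
recovered = np.unwrap(wrapped)                # 1-D unwrapping
print(np.allclose(recovered, true_phase))     # True: slowly varying case
```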
