Results 1 - 19 of 19
1.
IEEE Trans Neural Netw Learn Syst ; 32(5): 2209-2223, 2021 May.
Article in English | MEDLINE | ID: mdl-32609616

ABSTRACT

Nonnegative blind source separation (nBSS) is often a challenging inverse problem, particularly when the mixing system is ill-conditioned. In this work, we focus on an important nBSS instance, known as hyperspectral unmixing (HU) in remote sensing. HU is a matrix factorization problem aimed at factoring the data into the so-called endmember matrix, holding the material hyperspectral signatures, and the abundance matrix, holding the material fractions at each image pixel. The hyperspectral signatures are usually highly correlated, leading to a fast decay of the singular values (and, hence, a high condition number) of the endmember matrix, so HU often constitutes an ill-conditioned nBSS scenario. We introduce a new theoretical framework to tackle such challenging scenarios via the John ellipsoid (JE) from functional analysis. The idea is to identify the maximum-volume ellipsoid inscribed in the data convex hull and then affinely map that ellipsoid into a Euclidean ball. By applying the same affine mapping to the data mixtures, we prove that the endmember matrix associated with the mapped data has condition number 1, the lowest possible, and that these (preconditioned) endmembers form a regular simplex. Exploiting this regular structure, we design a novel nBSS criterion with a provable identifiability guarantee and devise an algorithm to realize the criterion. Moreover, for the first time, the optimization problem for computing the JE is solved exactly for a large-scale instance; our solver employs a split augmented Lagrangian shrinkage algorithm in which all proximal operators have closed-form solutions. The competitiveness of the proposed method is illustrated by numerical simulations and real-data experiments.

2.
IEEE Trans Cybern ; 50(10): 4469-4480, 2020 Oct.
Article in English | MEDLINE | ID: mdl-31794410

ABSTRACT

Combining a high-spatial-resolution multispectral image (HR-MSI) with a low-spatial-resolution hyperspectral image (LR-HSI) has become a common way to enhance the spatial resolution of the HSI. The existing state-of-the-art LR-HSI and HR-MSI fusion methods are mostly based on matrix factorization, but a matrix representation of the data makes it hard to fully exploit the inherent structures of the 3-D HSI. We propose a nonlocal sparse tensor factorization approach, called NLSTF_SMBF, for the semiblind fusion of HSI and MSI. The proposed method decomposes the HSI into smaller full-band patches (FBPs), which, in turn, are factored as dictionaries of the three HSI modes and a sparse core tensor. This decomposition allows the fusion problem to be solved by estimating a sparse core tensor and three dictionaries for each FBP. To exploit the nonlocal self-similarities of the HSI, similar FBPs are clustered together and assumed to share the same dictionaries. For each group, we learn the dictionaries from the observed HR-MSI and LR-HSI, and the corresponding sparse core tensor of each FBP is computed via tensor sparse coding. Two distinctive features of NLSTF_SMBF are that: 1) it is blind with respect to the point spread function (PSF) of the hyperspectral sensor and 2) it copes with spatially variant PSFs. The experimental results provide evidence of the advantages of the NLSTF_SMBF method over the existing state-of-the-art methods, namely in semiblind scenarios.

3.
Sensors (Basel) ; 18(11)2018 Nov 16.
Article in English | MEDLINE | ID: mdl-30453582

ABSTRACT

This paper proposes a novel algorithm for image phase retrieval, i.e., for recovering complex-valued images from the amplitudes of noisy linear combinations (often the Fourier transform) of the sought complex images. The algorithm is developed within the alternating projection framework and aims at high performance for heavily noisy (Poissonian or Gaussian) observations. The estimation of the target images is reformulated as a sparse regression, often termed sparse coding, in the complex domain. This is accomplished by learning a complex-domain dictionary from the data it represents via matrix factorization with sparsity constraints on the code (i.e., the regression coefficients). Our algorithm, termed dictionary learning phase retrieval (DLPR), jointly learns this dictionary and reconstructs the unknown target image. The effectiveness of DLPR is illustrated through experiments conducted on complex images, simulated and real, where it shows noticeable advantages over the state-of-the-art competitors.

4.
Article in English | MEDLINE | ID: mdl-30222572

ABSTRACT

We propose a new approach to image fusion, inspired by the recent plug-and-play (PnP) framework. In PnP, a denoiser is treated as a black box and plugged into an iterative algorithm, taking the place of the proximity operator of some convex regularizer, which is formally equivalent to a denoising operation. This approach offers flexibility and excellent performance, but convergence may be hard to analyze, as most state-of-the-art denoisers lack an explicit underlying objective function. Here, we propose using a scene-adapted denoiser (i.e., one targeted to the specific scene being imaged) plugged into the iterations of the alternating direction method of multipliers (ADMM). This approach, which is a natural choice for image fusion problems, not only yields state-of-the-art results but also allows proving convergence of the resulting algorithm. The proposed method is tested on two different problems: hyperspectral fusion/sharpening and fusion of blurred-noisy image pairs.

5.
Article in English | MEDLINE | ID: mdl-29994767

ABSTRACT

Fusing a low-spatial-resolution hyperspectral image (LR-HSI) with a high-spatial-resolution multispectral image (HR-MSI) to obtain a high-spatial-resolution hyperspectral image (HR-HSI) has attracted increasing interest in recent years. In this paper, we propose a coupled sparse tensor factorization (CSTF)-based approach for fusing such images. In the proposed CSTF method, we consider an HR-HSI as a three-dimensional tensor and recast the fusion problem as the estimation of a core tensor and dictionaries of the three modes. The high spatial-spectral correlations in the HR-HSI are modeled by incorporating a regularizer that promotes sparse core tensors. The estimation of the dictionaries and the core tensor is formulated as a coupled tensor factorization of the LR-HSI and of the HR-MSI. Experiments on two remotely sensed HSIs demonstrate the superiority of the proposed CSTF algorithm over current state-of-the-art HSI-MSI fusion approaches.

6.
Appl Spectrosc ; 71(6): 1148-1156, 2017 Jun.
Article in English | MEDLINE | ID: mdl-27852875

ABSTRACT

The monitoring of biopharmaceutical products using Fourier transform infrared (FT-IR) spectroscopy relies on calibration techniques involving the acquisition of spectra of bioprocess samples along the process. The most commonly used method for that purpose is partial least squares (PLS) regression, under the assumption that a linear model is valid. Despite being successful in the presence of small nonlinearities, linear methods may fail in the presence of strong nonlinearities. This paper studies the potential usefulness of nonlinear regression methods for predicting, from in situ near-infrared (NIR) and mid-infrared (MIR) spectra acquired in high-throughput mode, biomass and plasmid concentrations in Escherichia coli DH5-α cultures producing the plasmid model pVAX-LacZ. The linear methods PLS and ridge regression (RR) are compared with their kernel (nonlinear) versions, kPLS and kRR, as well as with the (also nonlinear) relevance vector machine (RVM) and Gaussian process regression (GPR). For the systems studied, RR provided better predictive performance than the remaining methods. Moreover, the results point to the need for further investigation, on larger data sets, in the cases where no difference in predictive accuracy between a linear method and its kernelized version could be found. The use of nonlinear methods must, however, be weighed against the additional computational cost of tuning their extra parameters, especially when the less computationally demanding linear methods studied here are able to successfully monitor the variables of interest.
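As a toy illustration of the linear-versus-kernel comparison above (the data and hyperparameters below are invented for illustration, not the paper's spectra or models), ridge regression and its RBF-kernel version can be sketched in a few lines:

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge regression (RR): w = (X^T X + lam*I)^{-1} X^T y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def kernel_ridge(X, y, lam, gamma):
    """Kernel RR with an RBF kernel: alpha = (K + lam*I)^{-1} y."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    return np.linalg.solve(K + lam * np.eye(len(y)), y), K

# On data that is genuinely linear, plain RR recovers the weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.01 * rng.normal(size=100)
w_hat = ridge(X, y, 1e-3)
alpha, K = kernel_ridge(X, y, 1e-3, 0.1)
```

On such a linear synthetic problem the plain ridge solution already recovers the weights, mirroring the finding that the kernelized versions bring no accuracy gain worth their extra tuning cost.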


Subject(s)
Bioreactors , Nonlinear Dynamics , Plasmids , Spectroscopy, Fourier Transform Infrared , Biomass , Escherichia coli/genetics , Escherichia coli/metabolism , Plasmids/genetics , Plasmids/metabolism
7.
IEEE Trans Image Process ; 25(10): 4565-79, 2016 10.
Article in English | MEDLINE | ID: mdl-27416597

ABSTRACT

This paper presents three hyperspectral mixture models jointly with Bayesian algorithms for supervised hyperspectral unmixing. Based on the residual component analysis model, the proposed general formulation assumes the linear model to be corrupted by an additive term whose expression can be adapted to account for nonlinearities (NLs), endmember variability (EV), or mismodeling effects (MEs). The NL effect is introduced by considering a polynomial expression that is related to bilinear models. The proposed new formulation of EV accounts for shape and scale endmember changes while enforcing a smooth spectral/spatial variation. The ME formulation considers the effect of outliers and copes with some types of EV and NL. The known constraints on the parameter of each observation model are modeled via suitable priors. The posterior distribution associated with each Bayesian model is optimized using a coordinate descent algorithm, which allows the computation of the maximum a posteriori estimator of the unknown model parameters. The proposed mixture and Bayesian models and their estimation algorithms are validated on both synthetic and real images showing competitive results regarding the quality of the inferences and the computational complexity, when compared with the state-of-the-art algorithms.

8.
IEEE Trans Image Process ; 25(1): 274-88, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26540685

ABSTRACT

Remote sensing hyperspectral images (HSIs) are quite often low rank, in the sense that the data belong to a low-dimensional subspace/manifold. This has recently been exploited for the fusion of low-spatial-resolution HSI with high-spatial-resolution multispectral images in order to obtain super-resolution HSI. Most approaches adopt an unmixing or a matrix factorization perspective. The derived methods have led to state-of-the-art results when the spectral information lies in a low-dimensional subspace/manifold. However, if the subspace/manifold dimensionality spanned by the complete data set is large, i.e., larger than the number of multispectral bands, the performance of these methods degrades, mainly because the underlying sparse regression problem is severely ill-posed. In this paper, we propose a local approach to cope with this difficulty. Fundamentally, we exploit the fact that real-world HSIs are locally low rank, that is, pixels acquired from a given spatial neighborhood span a very low-dimensional subspace/manifold, i.e., of dimension lower than or equal to the number of multispectral bands. Thus, we propose to partition the image into patches and solve the data fusion problem independently for each patch. This way, in each patch the subspace/manifold dimensionality is low enough that the problem is no longer ill-posed. We propose two alternative approaches to hyperspectral super-resolution through local dictionary learning using endmember induction algorithms. We also explore two alternatives for defining the local regions, using sliding windows and binary partition trees. The effectiveness of the proposed approaches is illustrated with synthetic and semi-real data.

9.
IEEE Trans Image Process ; 23(1): 466-77, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24144664

ABSTRACT

This paper presents a new method to estimate the parameters of two types of blur, linear uniform motion (approximated by a line characterized by angle and length) and out-of-focus (modeled as a uniform disk characterized by its radius), for blind restoration of natural images. The method is based on the spectrum of the blurred images and rests on a weak assumption, valid for most natural images: the power spectrum is approximately isotropic and has a power-law decay with spatial frequency. We introduce two modifications of the Radon transform, which allow the identification of the blur spectrum pattern of the two types of blur mentioned above. The blur parameters are identified by fitting an appropriate function that accounts separately for the natural image spectrum and the blur frequency response. The accuracy of the proposed method is validated by simulations, and its effectiveness is assessed by testing the algorithm on real natural blurred images and comparing it with state-of-the-art blind deconvolution methods.
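The spectral-zero idea behind such methods can be seen in one dimension (a simplified sketch, not the paper's 2-D Radon-based procedure): a uniform motion blur of length L is a boxcar whose frequency response vanishes at multiples of 1/L, so the first spectral zero reveals L.

```python
import numpy as np

# Uniform motion blur of length L = boxcar kernel; its DFT magnitude has
# zeros every N/L bins, so the first zero bin gives the blur length.
N, L = 256, 8
h = np.zeros(N)
h[:L] = 1.0 / L                       # boxcar blur kernel
H = np.abs(np.fft.fft(h))
# First frequency bin where the response (numerically) vanishes:
first_zero = 1 + int(np.argmax(H[1:N // 2] < 1e-9))
L_est = round(N / first_zero)
```

In practice the zeros of a real blurred image are masked by the image spectrum and noise, which is why the paper fits a parametric model that separates the power-law image spectrum from the blur response.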


Subject(s)
Algorithms , Artifacts , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Computer Simulation , Linear Models , Reproducibility of Results , Sensitivity and Specificity
10.
IEEE Trans Image Process ; 20(3): 681-95, 2011 Mar.
Article in English | MEDLINE | ID: mdl-20840899

ABSTRACT

We propose a new fast algorithm for solving one of the standard approaches to ill-posed linear inverse problems (IPLIP), where a (possibly nonsmooth) regularizer is minimized under the constraint that the solution explains the observations sufficiently well. Although the regularizer and constraint are usually convex, several particular features of these problems (huge dimensionality, nonsmoothness) preclude the use of off-the-shelf optimization tools and have stimulated a considerable amount of research. In this paper, we propose a new efficient algorithm to handle one class of constrained problems (often known as basis pursuit denoising) tailored to image recovery applications. The proposed algorithm, which belongs to the family of augmented Lagrangian methods, can be used to deal with a variety of imaging IPLIP, including deconvolution and reconstruction from compressive observations (such as MRI), using either total-variation or wavelet-based (or, more generally, frame-based) regularization. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which sufficient conditions for convergence are known; we show that these conditions are satisfied by the proposed algorithm. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is a strong contender for the state-of-the-art.

11.
IEEE Trans Image Process ; 19(12): 3133-45, 2010 Dec.
Article in English | MEDLINE | ID: mdl-20833604

ABSTRACT

Much research has been devoted to the problem of restoring Poissonian images, namely for medical and astronomical applications. However, the restoration of these images using state-of-the-art regularizers (such as those based upon multiscale representations or total variation) is still an active research area, since the associated optimization problems are quite challenging. In this paper, we propose an approach to deconvolving Poissonian images, which is based upon an alternating direction optimization method. The standard regularization [or maximum a posteriori (MAP)] restoration criterion, which combines the Poisson log-likelihood with a (nonsmooth) convex regularizer (log-prior), leads to hard optimization problems: the log-likelihood is nonquadratic and nonseparable, the regularizer is nonsmooth, and there is a nonnegativity constraint. Using standard convex analysis tools, we present sufficient conditions for existence and uniqueness of solutions of these optimization problems, for several types of regularizers: total-variation, frame-based analysis, and frame-based synthesis. We attack these problems with an instance of the alternating direction method of multipliers (ADMM), which belongs to the family of augmented Lagrangian algorithms. We study sufficient conditions for convergence and show that these are satisfied, either under total-variation or frame-based (analysis and synthesis) regularization. The resulting algorithms are shown to outperform alternative state-of-the-art methods, both in terms of speed and restoration accuracy.


Subject(s)
Algorithms , Image Enhancement/methods , Pattern Recognition, Automated/methods
12.
IEEE Trans Image Process ; 19(9): 2345-56, 2010 Sep.
Article in English | MEDLINE | ID: mdl-20378469

ABSTRACT

We propose a new fast algorithm for solving one of the standard formulations of image restoration and reconstruction which consists of an unconstrained optimization problem where the objective includes an l2 data-fidelity term and a nonsmooth regularizer. This formulation allows either wavelet-based (with orthogonal or frame-based representations) or total-variation regularization. Our approach is based on a variable splitting to obtain an equivalent constrained optimization formulation, which is then addressed with an augmented Lagrangian method. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which convergence has been proved. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is faster than the current state-of-the-art methods.

13.
IEEE Trans Image Process ; 19(7): 1720-30, 2010 Jul.
Article in English | MEDLINE | ID: mdl-20215071

ABSTRACT

Multiplicative noise (also known as speckle noise) models are central to the study of coherent imaging systems, such as synthetic aperture radar and sonar, and ultrasound and laser imaging. These models introduce two additional layers of difficulties with respect to the standard Gaussian additive noise scenario: (1) the noise is multiplied by (rather than added to) the original image; (2) the noise is not Gaussian, with Rayleigh and Gamma being commonly used densities. These two features of multiplicative noise models preclude the direct application of most state-of-the-art algorithms, which are designed for solving unconstrained optimization problems where the objective has two terms: a quadratic data term (log-likelihood), reflecting the additive and Gaussian nature of the noise, plus a convex (possibly nonsmooth) regularizer (e.g., a total variation or wavelet-based regularizer/prior). In this paper, we address these difficulties by: (1) converting the multiplicative model into an additive one by taking logarithms, as proposed by some other authors; (2) using variable splitting to obtain an equivalent constrained problem; and (3) dealing with this optimization problem using the augmented Lagrangian framework. A set of experiments shows that the proposed method, which we name MIDAL (multiplicative image denoising by augmented Lagrangian), yields state-of-the-art results both in terms of speed and denoising performance.
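Step (1) of the approach above, converting the multiplicative model into an additive one by taking logarithms, can be sketched on a toy example (a constant "image" with unit-mean Gamma speckle; the variable-splitting and augmented-Lagrangian steps are not shown):

```python
import numpy as np

# Multiplicative (speckle) model: y = x * n, with Gamma-distributed n.
rng = np.random.default_rng(2)
x = 5.0                                          # true (constant) intensity
n = rng.gamma(shape=10, scale=0.1, size=1000)    # unit-mean speckle noise
y = x * n
# Taking logarithms makes the model additive: log y = log x + log n,
# so standard additive-noise estimators can be applied to log y.
z = np.log(y)
x_hat = np.exp(np.mean(z))   # naive estimate; slightly biased low, since E[log n] < 0
```

The log-domain noise log n is no longer Gaussian, which is why the paper still needs the augmented Lagrangian machinery rather than an off-the-shelf Gaussian denoiser.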

14.
Anal Chem ; 82(4): 1462-9, 2010 Feb 15.
Article in English | MEDLINE | ID: mdl-20095581

ABSTRACT

A rapid detection of the nonauthenticity of suspect tablets is a key first step in the fight against pharmaceutical counterfeiting. The chemical characterization of these tablets is the logical next step to evaluate their impact on patient health and help authorities in tracking their source. Hyperspectral unmixing of near-infrared (NIR) image data is an emerging effective technology to infer the number of compounds, their spectral signatures, and the mixing fractions in a given tablet, with a resolution of a few tens of micrometers. In a linear mixing scenario, hyperspectral vectors belong to a simplex whose vertices correspond to the spectra of the compounds present in the sample. SISAL (simplex identification via split augmented Lagrangian), MVSA (minimum volume simplex analysis), and MVES (minimum-volume enclosing simplex) are recent algorithms designed to identify the vertices of the minimum volume simplex containing the spectral vectors and the mixing fractions at each pixel (vector). This work demonstrates the usefulness of these techniques, based on minimum volume criteria, for unmixing NIR hyperspectral data of tablets. The experiments herein reported show that SISAL/MVSA and MVES largely outperform MCR-ALS (multivariate curve resolution-alternating least-squares), which is considered the state-of-the-art in spectral unmixing for analytical chemistry. These experiments are based on synthetic data (studying the effect of noise and the presence/absence of pure pixels) and on a real data set composed of NIR images of counterfeit tablets.
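Under the linear mixing model described above, once the endmember signatures are known, the mixing fractions at a pixel follow from a least-squares solve; the sketch below uses a made-up endmember matrix (estimating the simplex vertices themselves is what SISAL/MVSA/MVES do, and is not shown):

```python
import numpy as np

# Linear mixing model: each pixel spectrum y = E @ a, where the columns
# of E are endmember signatures and a holds the mixing fractions
# (nonnegative, summing to one). E and a below are invented.
E = np.array([[0.9, 0.1, 0.3],
              [0.2, 0.8, 0.3],
              [0.1, 0.1, 0.9],
              [0.5, 0.4, 0.2]])        # 4 spectral bands, 3 endmembers
a_true = np.array([0.5, 0.3, 0.2])
y = E @ a_true
# With known endmembers, the fractions follow from least squares:
a_hat, *_ = np.linalg.lstsq(E, y, rcond=None)
```

Geometrically, y lies inside the simplex whose vertices are the columns of E; minimum-volume methods fit the smallest such simplex around the observed cloud of spectra.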


Subject(s)
Fraud , Pharmaceutical Preparations/analysis , Pharmaceutical Preparations/chemistry , Spectrophotometry, Infrared , Tablets , Time Factors
15.
Anal Chim Acta ; 641(1-2): 46-51, 2009 May 08.
Article in English | MEDLINE | ID: mdl-19393365

ABSTRACT

According to the WHO definition of counterfeit medicines, several categories can be established, e.g., medicines containing the correct active pharmaceutical ingredient (API) but different excipients, medicines containing low levels of API, no API, or even a substitute API. Obviously, these different scenarios will have different detrimental effects on a patient's health. Establishing the degree of risk to the patient through determination of the composition of counterfeit medicines found in the marketplace is thus of paramount importance. In this work, classical least squares was used for predicting the composition of counterfeit Heptodin tablets found in a market survey. Near-infrared chemical imaging (NIR-CI) was used as a non-destructive measurement technique. No prior knowledge about the origin and composition of the tablets was available. Good API (i.e., lamivudine) predictions were obtained, especially for tablets containing a high (close to the authentic) API dose. Concentration maps of each pure material, i.e., the API (lamivudine) and the excipients microcrystalline cellulose, sodium starch glycollate, rice starch, and talc, were estimated. Less than 1% of the energy was left unexplained by the model (residual percentage) for every pixel in all 12 counterfeit tablets. The similarities among tablets with respect to the total API percentage determined, as well as the corresponding concentration maps, support the classification of the tablets into the different groups obtained in previous work.
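A minimal sketch of classical least squares (CLS) with the residual-percentage check mentioned above, on synthetic spectra (the pure-component matrix and concentrations below are invented, not the Heptodin data):

```python
import numpy as np

# CLS: a pixel spectrum y is modeled as S @ c + r, with the columns of S
# holding pure-component spectra (API and excipients) and c the
# concentrations; r is the residual the model leaves unexplained.
rng = np.random.default_rng(3)
S = rng.random((50, 5))            # 50 wavelengths, 5 pure components (synthetic)
c_true = np.array([0.4, 0.2, 0.2, 0.1, 0.1])
y = S @ c_true + 0.001 * rng.normal(size=50)
c_hat, *_ = np.linalg.lstsq(S, y, rcond=None)
# Percentage of the spectrum's energy not explained by the model:
resid_pct = 100 * np.sum((y - S @ c_hat) ** 2) / np.sum(y ** 2)
```

The residual percentage is the per-pixel quantity the abstract reports as staying below 1% across all tablets.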


Subject(s)
Chemistry, Pharmaceutical/methods , Fraud , Spectroscopy, Near-Infrared/methods , Tablets/chemistry , Antiviral Agents/analysis , Humans , Lamivudine/analysis , Least-Squares Analysis
16.
IEEE Trans Image Process ; 16(12): 2980-91, 2007 Dec.
Article in English | MEDLINE | ID: mdl-18092597

ABSTRACT

Standard formulations of image/signal deconvolution under wavelet-based priors/regularizers lead to very high-dimensional optimization problems involving the following difficulties: the non-Gaussian (heavy-tailed) wavelet priors lead to objective functions which are nonquadratic, usually nondifferentiable, and sometimes even nonconvex; the presence of the convolution operator destroys the separability which underlies the simplicity of wavelet-based denoising. This paper presents a unified view of several recently proposed algorithms for handling this class of optimization problems, placing them in a common majorization-minimization (MM) framework. One of the classes of algorithms considered (when using quadratic bounds on nondifferentiable log-priors) shares the infamous "singularity issue" (SI) of "iteratively reweighted least squares" (IRLS) algorithms: the possibility of having to handle infinite weights, which may cause both numerical and convergence issues. In this paper, we prove several new results which strongly support the claim that the SI does not compromise the usefulness of this class of algorithms. Exploiting the unified MM perspective, we introduce a new algorithm, resulting from using l1 bounds for nonconvex regularizers; the experiments confirm the superior performance of this method, when compared to the one based on quadratic majorization. Finally, an experimental comparison of the several algorithms reveals their relative merits for different standard types of scenarios.
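The IRLS-style algorithm class discussed above, with the usual epsilon safeguard against the singularity issue, can be sketched for an l1-regularized least-squares problem (a simplified illustration on synthetic data, not the paper's wavelet deconvolution setup):

```python
import numpy as np

def irls_l1(A, b, lam, iters=100, eps=1e-6):
    """IRLS for 0.5||Ax-b||^2 + lam*||x||_1, via the quadratic bound
    |t| <= t^2/(2|t_k|) + |t_k|/2. The weight lam/|x_i| blows up as
    x_i -> 0 (the 'singularity issue'); eps caps it, a common safeguard."""
    AtA, Atb = A.T @ A, A.T @ b
    x = np.ones(A.shape[1])
    for _ in range(iters):
        W = np.diag(lam / (np.abs(x) + eps))   # reweighted ridge penalty
        x = np.linalg.solve(AtA + W, Atb)
    return x

rng = np.random.default_rng(4)
A = rng.normal(size=(30, 50))
x_true = np.zeros(50)
x_true[[5, 20, 33]] = [1.5, -2.0, 1.0]
b = A @ x_true
x_hat = irls_l1(A, b, lam=0.05)
```

Each iteration solves a reweighted ridge problem; coefficients that shrink toward zero receive ever-larger weights, which is exactly where the singularity issue arises.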


Subject(s)
Algorithms , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Signal Processing, Computer-Assisted , Information Storage and Retrieval/methods , Numerical Analysis, Computer-Assisted , Reproducibility of Results , Sensitivity and Specificity
17.
IEEE Trans Image Process ; 16(12): 2992-3004, 2007 Dec.
Article in English | MEDLINE | ID: mdl-18092598

ABSTRACT

Iterative shrinkage/thresholding (IST) algorithms have recently been proposed to handle a class of convex unconstrained optimization problems arising in image restoration and other linear inverse problems. This class of problems results from combining a linear observation model with a nonquadratic regularizer (e.g., total variation or wavelet-based regularization). However, the convergence rate of these IST algorithms depends heavily on the linear observation operator, becoming very slow when this operator is ill-conditioned or ill-posed. In this paper, we introduce two-step IST (TwIST) algorithms, exhibiting a much faster convergence rate than IST for ill-conditioned problems. For a vast class of nonquadratic convex regularizers (lp norms, some Besov norms, and total variation), we show that TwIST converges to a minimizer of the objective function for a given range of values of its parameters. For noninvertible observation operators, we introduce a monotonic version of TwIST (MTwIST); although the convergence proof does not apply to this scenario, we give experimental evidence that MTwIST exhibits similar speed gains over IST. The effectiveness of the new methods is experimentally confirmed on problems of image deconvolution and of restoration with missing samples.


Subject(s)
Algorithms , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Signal Processing, Computer-Assisted , Reproducibility of Results , Sensitivity and Specificity
18.
IEEE Trans Image Process ; 16(3): 698-709, 2007 Mar.
Article in English | MEDLINE | ID: mdl-17357730

ABSTRACT

Phase unwrapping is the inference of absolute phase from modulo-2π phase. This paper introduces a new energy minimization framework for phase unwrapping. The considered objective functions are first-order Markov random fields. We provide an exact energy minimization algorithm whenever the corresponding clique potentials are convex, namely for the classical Lp-norm phase unwrapping with p ≥ 1. Its complexity is K T(n, 3n), where K is the length of the absolute phase domain measured in 2π units and T(n, m) is the complexity of a max-flow computation in a graph with n nodes and m edges. For nonconvex clique potentials, often used owing to their discontinuity-preserving ability, we face an NP-hard problem for which we devise an approximate solution. Both algorithms solve integer optimization problems by computing a sequence of binary optimizations, each one solved by graph cut techniques. Accordingly, we name the two algorithms PUMA, for phase unwrapping max-flow/min-cut. A set of experimental results illustrates the effectiveness of the proposed approach and its competitiveness in comparison with state-of-the-art phase unwrapping algorithms.
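A one-dimensional special case conveys the idea (NumPy's np.unwrap implements the classical 1-D rule of adding multiples of 2π wherever successive samples jump by more than π; PUMA's contribution is the 2-D energy-minimization formulation, which is not shown here):

```python
import numpy as np

# Wrapped phase lives in (-pi, pi]; unwrapping infers the absolute phase.
t = np.linspace(0, 4 * np.pi, 200)
true_phase = 1.5 * t                           # smooth absolute phase ramp
wrapped = np.angle(np.exp(1j * true_phase))    # observed modulo-2*pi phase
unwrapped = np.unwrap(wrapped)                 # classical 1-D unwrapping
```

The 1-D rule is exact only when the true phase changes by less than π between samples; in 2-D, noise and discontinuities break this assumption, which motivates the graph-cut energy minimization of the paper.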


Subject(s)
Algorithms , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Magnetic Resonance Imaging/methods , Reproducibility of Results , Sensitivity and Specificity
19.
IEEE Trans Image Process ; 15(4): 937-51, 2006 Apr.
Article in English | MEDLINE | ID: mdl-16579380

ABSTRACT

Image deconvolution is formulated in the wavelet domain under the Bayesian framework. The well-known sparsity of the wavelet coefficients of real-world images is modeled by heavy-tailed priors belonging to the Gaussian scale mixture (GSM) class, i.e., priors given by a linear (finite or infinite) combination of Gaussian densities. This class includes, among others, the generalized Gaussian, the Jeffreys, and the Gaussian mixture priors. Necessary and sufficient conditions are stated under which the prior induced by a thresholding/shrinking denoising rule is a GSM. This result is then used to show that the prior induced by the "nonnegative garrote" thresholding/shrinking rule, herein termed the garrote prior, is a GSM. To compute the maximum a posteriori estimate, we propose a new generalized expectation maximization (GEM) algorithm, where the missing variables are the scale factors of the GSM densities. The maximization step of the underlying expectation maximization algorithm is replaced with a linear stationary second-order iterative method. The result is a GEM algorithm of O(N log N) computational complexity. In a series of benchmark tests, the proposed approach outperforms or performs similarly to state-of-the-art methods, demanding comparable (in some cases, much less) computational complexity.
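The nonnegative garrote rule whose induced prior is analyzed above has a simple closed form, sketched below (just the shrinkage rule itself; the GEM machinery is not shown):

```python
import numpy as np

def garrote(y, lam):
    """Nonnegative garrote shrinkage: 0 for |y| <= lam, else y - lam^2/y.
    Unlike soft thresholding, its bias vanishes for large |y|."""
    safe = np.where(y == 0, 1.0, y)            # avoid division by zero at y = 0
    return np.where(np.abs(y) > lam, y - lam ** 2 / safe, 0.0)

vals = np.array([-3.0, -0.5, 0.0, 0.5, 2.0])
shrunk = garrote(vals, 1.0)
```

Small coefficients are zeroed exactly (promoting sparsity) while large ones are shrunk by only lam^2/y, which interpolates between hard and soft thresholding behavior.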


Subject(s)
Algorithms , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Information Storage and Retrieval/methods , Signal Processing, Computer-Assisted , Bayes Theorem , Computer Simulation , Models, Statistical , Numerical Analysis, Computer-Assisted , Reproducibility of Results , Sensitivity and Specificity