Results 1 - 20 of 38
1.
Article in English | MEDLINE | ID: mdl-33001800

ABSTRACT

Ultrasound (US) image restoration from radio frequency (RF) signals is generally addressed by deconvolution techniques mitigating the effect of the system point spread function (PSF). Most existing methods estimate the tissue reflectivity function (TRF) from so-called fundamental US images, based on an image model that assumes linear US wave propagation. However, several human tissues, as well as tissues with contrast agents, behave nonlinearly when interacting with US waves, leading to harmonic images. This work takes this nonlinearity into account in the context of TRF restoration by considering both fundamental and harmonic RF signals. Starting from two observation models (for the fundamental and harmonic images), TRF estimation is expressed as the minimization of a cost function defined as the sum of two data fidelity terms and one sparsity-based regularization stabilizing the solution. The strong attenuation of harmonic echoes with depth is integrated into the direct model that relates the observed harmonic image to the TRF. The benefit of the proposed method is demonstrated on synthetic and in vivo data, and its performance is compared with other restoration methods.
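The minimization described above (two quadratic data fidelity terms plus a sparsity-promoting regularization) can be sketched with plain iterative soft thresholding. The snippet below is a toy Python/NumPy analogue, not the paper's implementation; the matrices `H_f` and `H_h` stand in for the fundamental and harmonic observation operators, and all names are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_two_terms(y_f, y_h, H_f, H_h, lam=0.01, n_iter=500):
    """Minimize ||y_f - H_f x||^2 + ||y_h - H_h x||^2 + lam * ||x||_1
    by iterative soft thresholding (ISTA)."""
    # Step size from the Lipschitz constant of the smooth part.
    L = np.linalg.norm(H_f.T @ H_f + H_h.T @ H_h, 2)
    tau = 1.0 / (2.0 * L)
    x = np.zeros(H_f.shape[1])
    for _ in range(n_iter):
        # Joint gradient step on both data fidelity terms...
        grad = 2.0 * (H_f.T @ (H_f @ x - y_f) + H_h.T @ (H_h @ x - y_h))
        # ...followed by the l1 proximal step.
        x = soft_threshold(x - tau * grad, tau * lam)
    return x
```

Each iteration takes one gradient step on the two data terms jointly and then applies the l1 proximal operator; an accelerated (FISTA-style) variant would converge faster.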

2.
Nat Commun ; 11(1): 5929, 2020 11 23.
Article in English | MEDLINE | ID: mdl-33230217

ABSTRACT

Non-line-of-sight (NLOS) imaging is a rapidly growing field seeking to form images of objects outside the field of view, with potential applications in autonomous navigation, reconnaissance, and even medical imaging. The critical challenge of NLOS imaging is that diffuse reflections scatter light in all directions, resulting in weak signals and a loss of directional information. To address this problem, we propose a method for seeing around corners that derives angular resolution from vertical edges and longitudinal resolution from the temporal response to a pulsed light source. We introduce an acquisition strategy, scene response model, and reconstruction algorithm that enable the formation of 2.5-dimensional representations (a plan view plus heights) and a 180° field of view for large-scale scenes. Our experiments demonstrate accurate reconstructions of hidden rooms up to 3 meters in each dimension despite a small scan aperture (1.5-centimeter radius) and only 45 measurement locations.

3.
Article in English | MEDLINE | ID: mdl-32142435

ABSTRACT

This paper introduces a new fusion method for magnetic resonance (MR) and ultrasound (US) images, which aims at combining the advantages of each modality, i.e., the good contrast and signal-to-noise ratio of the MR image and the good spatial resolution of the US image. The proposed algorithm is based on two inverse problems, performing a super-resolution of the MR image and a denoising of the US image. A polynomial function is introduced to model the relationship between the gray levels of the two modalities. The resulting inverse problem is solved using a proximal alternating linearized minimization framework. The accuracy and practical benefit of the fusion algorithm are demonstrated quantitatively and qualitatively on synthetic and experimental phantom data.
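The polynomial link between the gray levels of the two modalities can be illustrated with a simple least-squares fit. The Python/NumPy sketch below uses synthetic intensities (the variables `mr` and `us` are illustrative stand-ins); in the actual method this link is estimated jointly with the super-resolution and denoising subproblems.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for co-registered MR and US gray levels,
# related by an unknown low-order polynomial plus noise.
mr = rng.uniform(0.0, 1.0, 5000)
us = 0.2 + 0.9 * mr - 0.5 * mr ** 2 + 0.01 * rng.standard_normal(5000)

# Fit a degree-3 polynomial mapping MR intensities to US intensities.
coeffs = np.polyfit(mr, us, deg=3)
us_pred = np.polyval(coeffs, mr)
rmse = np.sqrt(np.mean((us_pred - us) ** 2))  # residual at the noise level
```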

4.
Nat Commun ; 10(1): 4984, 2019 11 01.
Article in English | MEDLINE | ID: mdl-31676824

ABSTRACT

Single-photon lidar has emerged as a prime candidate technology for depth imaging through challenging environments. Until now, a major limitation has been the significant amount of time required for the analysis of the recorded data. Here we present a new computational framework for real-time three-dimensional (3D) scene reconstruction from single-photon data. By combining statistical models with highly scalable computational tools from the computer graphics community, we demonstrate 3D reconstruction of complex outdoor scenes with processing times of the order of 20 ms, where the lidar data were acquired in broad daylight from distances of up to 320 metres. The proposed method can handle an unknown number of surfaces in each pixel, allowing for target detection and imaging through cluttered scenes. This enables robust, real-time target reconstruction of complex moving scenes, paving the way for single-photon lidar at video rates for practical 3D imaging applications.

5.
IEEE Trans Med Imaging ; 38(3): 741-752, 2019 03.
Article in English | MEDLINE | ID: mdl-30235121

ABSTRACT

This paper introduces a robust 2-D cardiac motion estimation method. The problem is formulated as an energy minimization with an optical flow-based data fidelity term and two regularization terms imposing spatial smoothness and the sparsity of the motion field in an appropriate cardiac motion dictionary. Robustness to outliers, such as imaging artefacts and anatomical motion boundaries, is introduced using robust weighting functions for the data fidelity term as well as for the spatial and sparse regularizations. The motion fields and the weights are computed jointly using an iteratively re-weighted minimization strategy. The proposed robust approach is evaluated on synthetic data and realistic simulation sequences with available ground-truth by comparing its performance with state-of-the-art algorithms. Finally, the proposed method is validated using two sequences of in vivo images. The obtained results demonstrate the benefit of the proposed approach for 2-D cardiac ultrasound imaging.
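The iteratively re-weighted strategy can be illustrated on a 1-D robust regression toy problem: residual-dependent weights down-weight outliers at each pass. This is a generic IRLS sketch with Huber weights, not the paper's motion estimator; all names and constants are illustrative.

```python
import numpy as np

def irls_robust_fit(A, b, n_iter=30, c=1.345):
    """Robust least squares via iteratively re-weighted least squares
    (IRLS) with Huber weights: large residuals get small weights."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # ordinary LS start
    for _ in range(n_iter):
        r = b - A @ x
        # Robust scale estimate from the median absolute deviation.
        s = np.median(np.abs(r)) / 0.6745 + 1e-12
        u = np.abs(r) / s
        w = np.where(u <= c, 1.0, c / np.maximum(u, 1e-12))  # Huber weights
        Aw = A * w[:, None]
        # Weighted normal equations (A^T W A) x = A^T W b.
        x = np.linalg.solve(A.T @ Aw, A.T @ (w * b))
    return x
```

With 10% gross outliers, the ordinary least-squares fit is pulled away from the true line while the re-weighted fit stays close to it.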


Subject(s)
Echocardiography/methods; Heart/diagnostic imaging; Image Processing, Computer-Assisted/methods; Algorithms; Artifacts; Computer Simulation; Echocardiography, Doppler; Humans; Image Interpretation, Computer-Assisted/methods; Reproducibility of Results; Ultrasonography
6.
IEEE Trans Med Imaging ; 38(6): 1524-1531, 2019 06.
Article in English | MEDLINE | ID: mdl-30507496

ABSTRACT

Available super-resolution techniques for 3-D images are either computationally inefficient prior-knowledge-based iterative techniques or deep learning methods that require a large database of known low-resolution and high-resolution image pairs. A recently introduced tensor-factorization-based approach offers a fast solution without the use of known image pairs or strict prior assumptions. In this paper, this factorization framework is investigated for single-image resolution enhancement with an offline estimate of the system point spread function. The technique is applied to 3-D cone beam computed tomography for dental image resolution enhancement. To demonstrate the efficiency of our method, it is compared to a recent state-of-the-art iterative technique using low-rank and total variation regularizations. In contrast to this comparative technique, the proposed reconstruction technique gives a two-order-of-magnitude improvement in running time: 2 min compared with 2 h for a dental volume of 282×266×392 voxels. Furthermore, it also offers slightly improved quantitative results (peak signal-to-noise ratio and segmentation quality). Another advantage of the presented technique is its low number of hyperparameters. As demonstrated in this paper, the framework is not sensitive to small changes in its parameters, making it easy to use.


Subject(s)
Cone-Beam Computed Tomography/methods; Imaging, Three-Dimensional/methods; Radiography, Dental/methods; Tooth/diagnostic imaging; Algorithms; Databases, Factual; Humans
7.
Article in English | MEDLINE | ID: mdl-30507510

ABSTRACT

Compressive spectral imagers reduce the number of sampled pixels by coding and combining the spectral information. However, sampling compressed information with simultaneously high spatial and high spectral resolution demands expensive high-resolution sensors. This work introduces a model allowing data from high-spatial/low-spectral and low-spatial/high-spectral resolution compressive sensors to be fused. Based on this model, the compressive fusion process is formulated as an inverse problem that minimizes an objective function defined as the sum of a quadratic data fidelity term and smoothness and sparsity regularization penalties. The parameters of the different sensors are optimized, and the choice of an appropriate regularization is studied in order to improve the quality of the high-resolution reconstructed images. Simulation results on synthetic and real data, with different CS imagers, demonstrate the quality of the proposed fusion method.

8.
IEEE Trans Image Process ; 27(1): 64-77, 2018.
Article in English | MEDLINE | ID: mdl-28922120

ABSTRACT

This paper introduces a new method for cardiac motion estimation in 2-D ultrasound images. The motion estimation problem is formulated as an energy minimization, whose data fidelity term is built using the assumption that the images are corrupted by multiplicative Rayleigh noise. In addition to a classical spatial smoothness constraint, the proposed method exploits the sparse properties of the cardiac motion to regularize the solution via an appropriate dictionary learning step. The proposed method is evaluated on one data set with available ground-truth, including four sequences of highly realistic simulations. The approach is also validated on both healthy and pathological sequences of in vivo data. We evaluate the method in terms of motion estimation accuracy and strain errors and compare the performance with state-of-the-art algorithms. The results show that the proposed method gives competitive results for the considered data. Furthermore, the in vivo strain analysis demonstrates that meaningful clinical interpretation can be obtained from the estimated motion vectors.

9.
Biomed Opt Express ; 8(12): 5450-5467, 2017 Dec 01.
Article in English | MEDLINE | ID: mdl-29296480

ABSTRACT

Detecting skin lentigo in reflectance confocal microscopy images is an important and challenging problem. This imaging modality has not yet been widely investigated for this problem, and few automatic processing techniques exist. Those available are mostly based on machine learning approaches and rely on numerous classical image features that lead to high computational costs given the very large resolution of these images. This paper presents a detection method with very low computational complexity that is able to identify the skin depth at which the lentigo can be detected. The proposed method performs a multiresolution decomposition of the image obtained at each skin depth. The distribution of image pixels at a given depth can be approximated accurately by a generalized Gaussian distribution whose parameters depend on the decomposition scale, resulting in a very low-dimensional parameter space. SVM classifiers are then investigated to classify the scale parameter of this distribution, allowing real-time detection of lentigo. The method is applied to 45 patients (healthy and with lentigo) from a clinical study, where a sensitivity of 81.4% and a specificity of 83.3% are achieved. Our results show that lentigo is identifiable at depths between 50 µm and 60 µm, corresponding to the average location of the dermoepidermal junction. This result is in agreement with clinical practice, which characterizes lentigo by assessing the disorganization of the dermoepidermal junction.
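The generalized-Gaussian modeling step can be sketched with classical moment matching: the ratio (E|x|)²/E[x²] determines the shape parameter. This is a generic estimator, not the paper's pipeline (which classifies the scale parameter across decomposition levels); the function name is illustrative.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma

def ggd_shape(x):
    """Moment-matching estimate of the shape parameter beta of a
    generalized Gaussian distribution p(x) ~ exp(-(|x|/alpha)^beta).
    The ratio (E|x|)^2 / E[x^2] equals
    Gamma(2/beta)^2 / (Gamma(1/beta) * Gamma(3/beta));
    we invert that relation numerically."""
    r = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    f = lambda b: gamma(2.0 / b) ** 2 / (gamma(1.0 / b) * gamma(3.0 / b)) - r
    return brentq(f, 0.1, 10.0)  # bracket covers heavy- to light-tailed shapes
```

For Gaussian data the estimate approaches beta = 2, for Laplacian data beta = 1, which is the property such shape-based classifiers rely on.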

10.
Neuroimage ; 144(Pt A): 142-152, 2017 01 01.
Article in English | MEDLINE | ID: mdl-27639353

ABSTRACT

This paper deals with EEG source localization. The aim is to perform spatially coherent focal localization and recover temporal EEG waveforms, which can be useful in certain clinical applications. A new hierarchical Bayesian model is proposed with a multivariate Bernoulli-Laplacian structured sparsity prior for brain activity. This distribution approximates a mixed ℓ2,0 pseudo-norm regularization in a Bayesian framework. A partially collapsed Gibbs sampler is proposed to draw samples asymptotically distributed according to the posterior of the proposed Bayesian model. The generated samples are used to estimate the brain activity and the model hyperparameters jointly in an unsupervised framework. Two different kinds of Metropolis-Hastings moves are introduced to accelerate the convergence of the Gibbs sampler. The first move is based on multiple dipole shifts within each MCMC chain, whereas the second exploits proposals associated with different MCMC chains. Experiments with focal synthetic data show that the proposed algorithm is more robust and has a higher recovery rate than the weighted ℓ2,1 mixed-norm regularization. Using real data, the proposed algorithm finds sources that are spatially coherent with state-of-the-art methods, namely a multiple sparse prior approach and the Champagne algorithm. In addition, the method estimates waveforms showing peaks at meaningful timestamps. This information can be valuable for characterizing activity spread.


Subject(s)
Brain/physiology; Electroencephalography/methods; Evoked Potentials/physiology; Auditory Perception/physiology; Bayes Theorem; Facial Recognition/physiology; Humans; Models, Statistical
11.
IEEE Trans Image Process ; 26(1): 426-438, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27810822

ABSTRACT

Recent work has shown that existing powerful Bayesian hyperspectral unmixing algorithms can be significantly improved by incorporating the inherent local spatial correlations between pixel class labels via the use of Markov random fields. Here we propose a new Bayesian approach to joint hyperspectral unmixing and image classification in which the previous assumption of stochastic abundance vectors is relaxed to a formulation whereby a common abundance vector is assumed for the pixels in each class. This allows us to avoid stochastic reparameterizations and, instead, we propose a symmetric Dirichlet distribution model with adjustable parameters for the common abundance vector of each class. Inference over the proposed model is achieved via a hybrid Gibbs sampler, and in particular, simulated annealing is introduced for the label estimation in order to avoid the local-trap problem. Experiments on a synthetic image and a popular, publicly available real data set indicate that the proposed model is faster than and outperforms the existing approach quantitatively and qualitatively. Moreover, for appropriate choices of the Dirichlet parameter, it is shown that the proposed approach has the capability to induce sparsity in the inferred abundance vectors. It is demonstrated that this offers increased robustness in cases where the preprocessing endmember extraction algorithms overestimate the number of active endmembers present in a given scene.
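The sparsity-inducing effect of the symmetric Dirichlet prior is easy to visualize: for concentration parameters below one, most of the mass of each sampled abundance vector concentrates on a few components. A minimal NumPy illustration (the parameter values are arbitrary, chosen only to contrast the two regimes):

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric Dirichlet samples over 8 "endmembers": alpha < 1 pushes
# mass onto a few components (near-sparse abundances), alpha > 1
# spreads it out almost uniformly.
sparse_like = rng.dirichlet(alpha=[0.1] * 8, size=1000)
diffuse = rng.dirichlet(alpha=[10.0] * 8, size=1000)

# Fraction of components carrying less than 1% of the total mass.
near_zero_sparse = np.mean(sparse_like < 0.01)
near_zero_diffuse = np.mean(diffuse < 0.01)
```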

12.
IEEE Trans Image Process ; 25(9): 3979-90, 2016 09.
Article in English | MEDLINE | ID: mdl-27305679

ABSTRACT

Hyperspectral unmixing aims at identifying the reference spectral signatures composing a hyperspectral image and their relative abundance fractions in each pixel. In practice, the identified signatures may vary spectrally from one image to another due to varying acquisition conditions, thus inducing possibly significant estimation errors. Against this background, the hyperspectral unmixing of several images acquired over the same area is of considerable interest. Indeed, such an analysis enables the endmembers of the scene to be tracked and the corresponding endmember variability to be characterized. Sequential endmember estimation from a set of hyperspectral images is expected to provide improved performance when compared with methods analyzing the images independently. However, the significant size of hyperspectral data precludes the use of batch procedures to jointly estimate the mixture parameters of a sequence of hyperspectral images. Provided that each elementary component is present in at least one image of the sequence, we propose to perform online hyperspectral unmixing accounting for temporal endmember variability. The online hyperspectral unmixing is formulated as a two-stage stochastic program, which can be solved using a stochastic approximation. The performance of the proposed method is evaluated on synthetic and real data. Finally, a comparison with independent unmixing algorithms illustrates the benefit of the proposed strategy.
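The batch-versus-online distinction can be illustrated with a minimal stochastic-approximation loop that refines an estimate one observation at a time instead of storing and processing the whole sequence. This is a Kaczmarz-style toy analogue, not the paper's two-stage stochastic program; all names are illustrative.

```python
import numpy as np

def online_solve(stream, dim):
    """Refine an estimate from a stream of (a, b) observations with
    b = a . x_true, one observation at a time (online), never holding
    the full data set in memory."""
    x = np.zeros(dim)
    for a, b in stream:
        # Kaczmarz-style correction: project the current estimate
        # onto the hyperplane defined by the new observation.
        x += (b - a @ x) / (a @ a) * a
    return x
```

Each update touches only the latest observation, which is what makes such schemes viable when the data volume rules out batch estimation.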

13.
NMR Biomed ; 29(7): 918-31, 2016 07.
Article in English | MEDLINE | ID: mdl-27166741

ABSTRACT

Magnetic resonance spectroscopic imaging (MRSI) is a non-invasive technique able to provide the spatial distribution of relevant biochemical compounds commonly used as biomarkers of disease. Information provided by MRSI can offer valuable insight for the diagnosis, treatment and follow-up of several diseases such as cancer or neurological disorders. Obtaining accurate metabolite concentrations from in vivo MRSI signals is a crucial requirement for the clinical utility of this technique. Despite the numerous publications on the topic, accurate quantification is still a challenging problem due to the low signal-to-noise ratio of the data, overlap of spectral lines and the presence of nuisance components. We propose a novel quantification method, which alleviates these limitations by exploiting a spatio-spectral regularization scheme. In contrast to previous methods, the regularization terms are not expressed directly on the parameters being sought, but on appropriate transformed domains. In order to quantify all signals simultaneously in the MRSI grid, while introducing prior information, a fast proximal optimization algorithm is proposed. Experiments on synthetic MRSI data demonstrate that the error in the estimated metabolite concentrations is reduced by a mean of 41% with the proposed scheme. Results on in vivo brain MRSI data show the benefit of the proposed approach, which is able to fit overlapping peaks correctly and to capture metabolites that are missed by single-voxel methods due to their lower concentrations. Copyright © 2016 John Wiley & Sons, Ltd.


Subject(s)
Algorithms; Brain Neoplasms/metabolism; Brain/metabolism; Image Enhancement/methods; Magnetic Resonance Spectroscopy/methods; Molecular Imaging/methods; Signal Processing, Computer-Assisted; Biomarkers, Tumor/metabolism; Humans; Image Interpretation, Computer-Assisted/methods; Reproducibility of Results; Sensitivity and Specificity; Signal-To-Noise Ratio; Spatio-Temporal Analysis
14.
IEEE Trans Image Process ; 25(8): 3736-50, 2016 08.
Article in English | MEDLINE | ID: mdl-27187959

ABSTRACT

This paper proposes a joint segmentation and deconvolution Bayesian method for medical ultrasound (US) images. Contrary to piecewise homogeneous images, US images exhibit heavy characteristic speckle patterns correlated with the tissue structures. The generalized Gaussian distribution (GGD) has been shown to be one of the most relevant distributions for characterizing the speckle in US images. Thus, we propose a GGD-Potts model defined by a label map coupling US image segmentation and deconvolution. The Bayesian estimators of the unknown model parameters, including the US image, the label map, and all the hyperparameters, are difficult to express in closed form. Thus, we investigate a Gibbs sampler to generate samples distributed according to the posterior of interest. These generated samples are finally used to compute the Bayesian estimators of the unknown parameters. The performance of the proposed Bayesian model is compared with existing approaches via several experiments conducted on realistic synthetic data and in vivo US images.

15.
IEEE Trans Image Process ; 25(8): 3683-97, 2016 08.
Article in English | MEDLINE | ID: mdl-27187960

ABSTRACT

This paper addresses the problem of single image super-resolution (SR), which consists of recovering a high-resolution image from its blurred, decimated, and noisy version. The existing algorithms for single image SR use different strategies to handle the decimation and blurring operators. In addition to the traditional first-order gradient methods, recent techniques investigate splitting-based methods dividing the SR problem into up-sampling and deconvolution steps that can be easily solved. Instead of following this splitting strategy, we propose to deal with the decimation and blurring operators simultaneously by taking advantage of their particular properties in the frequency domain, leading to a new fast SR approach. Specifically, an analytical solution is derived and implemented efficiently for the Gaussian prior or any other regularization that can be formulated into an l2-regularized quadratic model, i.e., an l2-l2 optimization problem. The flexibility of the proposed SR scheme is shown through the use of various priors/regularizations, ranging from generic image priors to learning-based approaches. In the case of non-Gaussian priors, we show how the analytical solution derived from the Gaussian case can be embedded into traditional splitting frameworks, allowing the computation cost of existing algorithms to be decreased significantly. Simulation results conducted on several images with different priors illustrate the effectiveness of our fast SR approach compared with existing techniques.
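The closed-form frequency-domain idea can be illustrated in a simplified setting. The Python/NumPy sketch below drops the decimation operator and solves only the blurring term, which reduces the l2-l2 problem to a pointwise Wiener-type filter in the Fourier domain; the full method additionally exploits the structure of the decimation matrix. Function and variable names are illustrative.

```python
import numpy as np

def l2l2_deconv(y, psf, lam):
    """Closed-form l2-l2 deconvolution under circular convolution:
    argmin_x ||y - h * x||^2 + lam * ||x||^2 is solved pointwise in
    the Fourier domain (a simplified sketch: blurring only,
    no decimation)."""
    H = np.fft.fft2(psf, s=y.shape)
    X = np.conj(H) * np.fft.fft2(y) / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))
```

Because the solution is evaluated pointwise in frequency, its cost is that of a pair of FFTs, which is what makes such analytical steps attractive inside splitting schemes.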

16.
IEEE Trans Image Process ; 25(3): 1136-51, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26685243

ABSTRACT

Mixing phenomena in hyperspectral images depend on a variety of factors, such as the resolution of observation devices, the properties of materials, and how these materials interact with incident light in the scene. Different parametric and nonparametric models have been considered to address hyperspectral unmixing problems. The simplest one is the linear mixing model. Nevertheless, it has been recognized that the mixing phenomena can also be nonlinear. The corresponding nonlinear analysis techniques are necessarily more challenging and complex than those employed for linear unmixing. Within this context, it makes sense to detect the nonlinearly mixed pixels in an image prior to its analysis, and then employ the simplest possible unmixing technique to analyze each pixel. In this paper, we propose a technique for detecting nonlinearly mixed pixels. The detection approach is based on the comparison of the reconstruction errors obtained with a Gaussian process regression model and with a linear regression model. The two errors are combined into a detection statistic for which a probability density function can be reasonably approximated. We also propose an iterative endmember extraction algorithm to be employed in combination with the detection algorithm. The proposed detect-then-unmix strategy, which consists of extracting endmembers, detecting nonlinearly mixed pixels and unmixing, is tested with synthetic and real images.
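The detect-then-unmix idea, comparing the reconstruction error of a linear model against that of a more flexible one, can be sketched as follows. The paper uses Gaussian process regression for the flexible model; this toy substitutes an explicit bilinear basis, and all names are illustrative.

```python
import numpy as np

def nonlinearity_statistic(pixel, E):
    """Toy detection statistic: difference between the residual of a
    linear mixing fit and that of a fit augmented with pairwise
    endmember products. Large values suggest nonlinear mixing."""
    # Linear fit: pixel ~ E @ a
    a, *_ = np.linalg.lstsq(E, pixel, rcond=None)
    err_lin = np.sum((pixel - E @ a) ** 2)
    # Augmented fit with bilinear interaction terms e_i * e_j.
    n = E.shape[1]
    B = np.column_stack([E] + [E[:, i] * E[:, j]
                               for i in range(n) for j in range(i, n)])
    b, *_ = np.linalg.lstsq(B, pixel, rcond=None)
    err_nl = np.sum((pixel - B @ b) ** 2)
    return err_lin - err_nl
```

Thresholding this statistic separates pixels that the linear model explains well from those needing a nonlinear treatment, mirroring the detect-then-unmix pipeline.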

17.
IEEE Trans Image Process ; 24(12): 4904-17, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26302517

ABSTRACT

This paper presents an unsupervised Bayesian algorithm for hyperspectral image unmixing that accounts for endmember variability. The pixels are modeled by a linear combination of endmembers weighted by their corresponding abundances. However, the endmembers are assumed to be random in order to capture their variability across the image. Additive noise is also considered in the proposed model, which generalizes the normal compositional model. The proposed algorithm exploits the whole image to benefit from both spectral and spatial information. It estimates both the mean and the covariance matrix of each endmember in the image. This allows the behavior of each material to be analyzed and its variability to be quantified within the scene. A spatial segmentation is also obtained based on the estimated abundances. In order to estimate the parameters associated with the proposed Bayesian model, we propose to use a Hamiltonian Monte Carlo algorithm. The performance of the resulting unmixing strategy is evaluated through simulations conducted on both synthetic and real data.

18.
IEEE Trans Image Process ; 24(11): 4109-21, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26208345

ABSTRACT

This paper proposes a fast multi-band image fusion algorithm, which combines a high-spatial, low-spectral resolution image and a low-spatial, high-spectral resolution image. The widely accepted forward model is used to form the likelihoods of the observations. Maximizing the likelihoods leads to solving a Sylvester equation. By exploiting the properties of the circulant and downsampling matrices associated with the fusion problem, a closed-form solution of the corresponding Sylvester equation is obtained explicitly, eliminating any iterative update step. Coupled with the alternating direction method of multipliers and the block coordinate descent method, the proposed algorithm can easily be generalized to incorporate prior information for the fusion problem, allowing Bayesian estimation. Simulation results show that the proposed algorithm achieves the same performance as existing algorithms while significantly decreasing their computational complexity.
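The core computational step, solving a Sylvester equation AX + XB = C directly rather than by iterative updates, is available off the shelf. A minimal SciPy illustration on a toy instance (the matrices here are arbitrary well-conditioned stand-ins, not the fusion operators):

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)

# Toy, well-conditioned coefficient matrices (diagonal shift keeps
# the spectra of A and -B disjoint, so the equation is solvable).
A = rng.standard_normal((5, 5)) + 5.0 * np.eye(5)
B = rng.standard_normal((4, 4)) + 5.0 * np.eye(4)
X_true = rng.standard_normal((5, 4))
C = A @ X_true + X_true @ B

# Closed-form (Bartels-Stewart) solution of A X + X B = C.
X = solve_sylvester(A, B, C)
```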

19.
IEEE Trans Biomed Eng ; 62(12): 2888-98, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26126270

ABSTRACT

Source localization in electroencephalography has received an increasing amount of interest in the last decade. Solving the underlying ill-posed inverse problem usually requires choosing an appropriate regularization. The usual l2 norm has been considered and provides solutions with low computational complexity. However, in several situations, realistic brain activity is believed to be focused in a few focal areas. In these cases, the l2 norm is known to overestimate the activated spatial areas. One solution to this problem is to promote sparse solutions, for instance based on the l1 norm, which is easy to handle with optimization techniques. In this paper, we consider the use of an l0 + l1 norm to enforce sparse source activity (by ensuring the solution has few nonzero elements) while regularizing the nonzero amplitudes of the solution. More precisely, the l0 pseudo-norm handles the positions of the nonzero elements while the l1 norm constrains the values of their amplitudes. We use a Bernoulli-Laplace prior to introduce this combined l0 + l1 norm in a Bayesian framework. The proposed Bayesian model is shown to favor sparsity while jointly estimating the model hyperparameters using a Markov chain Monte Carlo sampling technique. We apply the model to both simulated and real EEG data, showing that the proposed method provides better results than the l2 and l1 norm regularizations in the presence of pointwise sources. A comparison with a recent method based on multiple sparse priors is also conducted.
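The division of labor between the norms can be illustrated through their proximal operators: the l2 penalty only shrinks, the l1 penalty zeroes and shrinks, and the combined l0 + l1 penalty decides which coefficients to keep and then shrinks the survivors. A scalar-wise NumPy sketch (deterministic MAP analogues, not the paper's MCMC scheme):

```python
import numpy as np

def prox_l2(y, lam):
    """l2 shrinkage: scales every coefficient, never exactly zero."""
    return y / (1.0 + lam)

def prox_l1(y, lam):
    """Soft thresholding: zeros small entries, shrinks the rest."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def prox_l0_l1(y, lam0, lam1):
    """Combined penalty lam0 * ||x||_0 + lam1 * ||x||_1: keep a
    coefficient only when doing so lowers the per-coordinate cost
    0.5*(y - x)^2 + lam1*|x| + lam0, then shrink it."""
    x = prox_l1(y, lam1)
    keep = 0.5 * (y - x) ** 2 + lam1 * np.abs(x) + lam0 < 0.5 * y ** 2
    return np.where(keep, x, 0.0)
```

The l0 term thus governs which positions are active while the l1 term controls the amplitudes of the active ones, mirroring the roles the abstract assigns to the two norms.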


Subject(s)
Electroencephalography/methods; Signal Processing, Computer-Assisted; Adult; Bayes Theorem; Brain/physiology; Humans; Male; Markov Chains; Monte Carlo Method
20.
IEEE Trans Image Process ; 24(8): 2540-51, 2015 Aug.
Article in English | MEDLINE | ID: mdl-25915958

ABSTRACT

Texture characterization is a central element in many image processing applications. Multifractal analysis is a useful signal and image processing tool, yet the accurate estimation of multifractal parameters for image texture remains a challenge. This is mainly because current estimation procedures consist of performing linear regressions across frequency scales of the 2D dyadic wavelet transform, for which only a few such scales are computable for images. The strongly non-Gaussian nature of multifractal processes, combined with their complicated dependence structure, makes it difficult to develop suitable models for parameter estimation. Here, we propose a Bayesian procedure that addresses the difficulties in the estimation of the multifractality parameter. The originality of the procedure is threefold: the construction of a generic semiparametric statistical model for the logarithm of wavelet leaders; the formulation of Bayesian estimators associated with this model and with the set of parameter values admitted by multifractal theory; and the exploitation of a suitable Whittle approximation within the Bayesian model, which enables the otherwise infeasible evaluation of the posterior distribution associated with the model. Performance is assessed numerically for several 2D multifractal processes, for several image sizes and a large range of process parameters. The procedure yields significant benefits over current benchmark estimators in terms of estimation performance and ability to discriminate between the two most commonly used classes of multifractal process models. The gains in performance are particularly pronounced for small image sizes, notably enabling for the first time the analysis of image patches as small as 64 × 64 pixels.
