Results 1 - 20 of 37
1.
Gait Posture ; 113: 67-74, 2024 May 28.
Article in English | MEDLINE | ID: mdl-38850852

ABSTRACT

INTRODUCTION: Foot and ankle alignment plays a pivotal role in human gait and posture. Traditional assessment methods, relying on 2D standing radiographs, present limitations in capturing the dynamic 3D nature of foot alignment during weight-bearing and are prone to observer error. This study aims to integrate weight-bearing CT (WBCT) imaging and advanced deep learning (DL) techniques to automate and enhance quantification of 3D foot and ankle alignment. METHODS: Thirty-two patients who underwent a WBCT of the foot and ankle were retrospectively included. After training and validation of a 3D nnU-Net model on 45 cases to automate the segmentation into bony models, 35 clinically relevant 3D measurements were automatically computed using a custom-made tool. Automated measurements were assessed for accuracy against manual measurements, while the latter were analyzed for inter-observer reliability. RESULTS: DL segmentation results showed a mean Dice coefficient of 0.95 and a mean Hausdorff distance of 1.41 mm. Good to excellent reliability and a mean prediction error of under 2 degrees were found for all angles except the talonavicular coverage angle and the distal metatarsal articular angle. CONCLUSION: This study introduces a fully automated framework for quantifying foot and ankle alignment, showcasing reliability comparable to current clinical practice measurements. This operator-friendly and time-efficient tool holds promise for implementation in clinical settings, benefiting both radiologists and surgeons. Future studies are encouraged to assess the tool's impact on streamlining image assessment workflows in a clinical environment.
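
A minimal sketch of the two segmentation quality metrics reported above (Dice coefficient and Hausdorff distance), computed here for a pair of toy binary 3D masks; the voxel spacing and the spheres are placeholders, not the study's data or pipeline.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred, ref):
    """Dice overlap between two boolean volumes."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum())

def hausdorff_mm(pred, ref, voxel_size=(1.0, 1.0, 1.0)):
    """Symmetric Hausdorff distance (in mm) between the voxel coordinates of two masks."""
    p = np.argwhere(pred) * np.asarray(voxel_size)
    r = np.argwhere(ref) * np.asarray(voxel_size)
    return max(directed_hausdorff(p, r)[0], directed_hausdorff(r, p)[0])

# Toy example: two slightly offset spheres on a 48^3 grid.
zz, yy, xx = np.mgrid[:48, :48, :48]
a = (zz - 24) ** 2 + (yy - 24) ** 2 + (xx - 24) ** 2 < 10 ** 2
b = (zz - 26) ** 2 + (yy - 24) ** 2 + (xx - 24) ** 2 < 10 ** 2
print(dice_coefficient(a, b), hausdorff_mm(a, b))
```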

2.
Sensors (Basel) ; 24(5)2024 Feb 22.
Article in English | MEDLINE | ID: mdl-38474954

ABSTRACT

Generative models have the potential to revolutionize 3D extended reality. A primary obstacle is that augmented and virtual reality need real-time computing. Current state-of-the-art methods for random point cloud generation are not fast enough for these applications. We introduce a vector-quantized variational autoencoder model (VQVAE) that can synthesize high-quality point clouds in milliseconds. Unlike previous work in VQVAEs, our model offers a compact sample representation suitable for conditional generation and data exploration with potential applications in rapid prototyping. We achieve this result by combining architectural improvements with an innovative approach for probabilistic random generation. First, we rethink current parallel point cloud autoencoder structures, and we propose several solutions to improve robustness, efficiency and reconstruction quality. Notable contributions in the decoder architecture include an innovative computation layer to process the shape semantic information, an attention mechanism that helps the model focus on different areas and a filter to cover possible sampling errors. Secondly, we introduce a parallel sampling strategy for VQVAE models consisting of a double encoding system, where a variational autoencoder learns how to generate the complex discrete distribution of the VQVAE, not only allowing quick inference but also describing the shape with a few global variables. We compare the proposed decoder and our VQVAE model with established and concurrent work, and we demonstrate the validity of each individual contribution.
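
As a rough illustration of the vector-quantization step at the heart of any VQVAE, the sketch below replaces each latent vector with its nearest codebook entry; the codebook size, latent dimension and random data are arbitrary placeholders, not the model described above.

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 64))     # K codewords of dimension D (placeholder sizes)
latents = rng.normal(size=(1024, 64))     # encoder outputs, one row per point/feature

# Squared Euclidean distance from every latent to every codeword, via the expansion
# ||z - e||^2 = ||z||^2 - 2 z.e + ||e||^2 (avoids building a huge 3D array).
d2 = (latents ** 2).sum(1, keepdims=True) - 2 * latents @ codebook.T + (codebook ** 2).sum(1)
indices = d2.argmin(axis=1)               # discrete codes (what a learned prior would model)
quantized = codebook[indices]             # what the decoder receives

print(indices.shape, quantized.shape)     # (1024,), (1024, 64)
```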

3.
Comput Methods Programs Biomed ; 245: 108044, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38290289

ABSTRACT

BACKGROUND: The field of dermatological image analysis using deep neural networks includes the semantic segmentation of skin lesions, pivotal for lesion analysis, pathology inference, and diagnoses. While biases in neural network-based dermatoscopic image classification against darker skin tones due to dataset imbalance and contrast disparities are acknowledged, a comprehensive exploration of skin color bias in lesion segmentation models is lacking. It is imperative to address and understand the biases in these models. METHODS: Our study comprehensively evaluates skin tone bias within prevalent neural networks for skin lesion segmentation. Since no information about skin color exists in widely used datasets, to quantify the bias we use three distinct skin color estimation methods: Fitzpatrick skin type estimation, Individual Typology Angle estimation as well as manual grouping of images by skin color. We assess bias across common models by training a variety of U-Net-based models on three widely-used datasets with 1758 different dermoscopic and clinical images. We also evaluate commonly suggested methods to mitigate bias. RESULTS: Our findings expose a significant and large correlation between segmentation performance and skin color, revealing consistent challenges in segmenting lesions for darker skin tones across diverse datasets. Using various methods of skin color quantification, we have found significant bias in skin lesion segmentation against darker-skinned individuals when evaluated both in and out-of-sample. We also find that commonly used methods for bias mitigation do not result in any significant reduction in bias. CONCLUSIONS: Our findings suggest a pervasive bias in most published lesion segmentation methods, given our use of commonly employed neural network architectures and publicly available datasets. In light of our findings, we propose recommendations for unbiased dataset collection, labeling, and model development. This presents the first comprehensive evaluation of fairness in skin lesion segmentation.
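
One of the skin-colour proxies named above, the Individual Typology Angle (ITA), is straightforward to compute; below is a minimal sketch under the usual definition ITA = arctan((L* - 50) / b*) in degrees, applied to (assumed) lesion-free skin pixels. The image/mask names are placeholders, and the grouping thresholds vary between studies.

```python
import numpy as np
from skimage import color

def estimate_ita(rgb_image, skin_mask):
    """Median ITA (degrees) over pixels flagged as healthy skin; rgb_image: uint8 or float RGB."""
    lab = color.rgb2lab(rgb_image)                      # CIELAB conversion
    L, b = lab[..., 0][skin_mask], lab[..., 2][skin_mask]
    return float(np.median(np.degrees(np.arctan2(L - 50.0, b))))

def ita_group(ita):
    """Common ITA bands (higher angle = lighter skin); cut-offs differ across papers."""
    for lower, name in [(55, "very light"), (41, "light"), (28, "intermediate"),
                        (10, "tan"), (-30, "brown")]:
        if ita > lower:
            return name
    return "dark"
```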


Subject(s)
Deep Learning , Skin Diseases , Humans , Skin Pigmentation , Dermoscopy/methods , Skin Diseases/diagnostic imaging , Skin/diagnostic imaging , Image Processing, Computer-Assisted/methods
4.
Sensors (Basel) ; 23(19)2023 Sep 28.
Article in English | MEDLINE | ID: mdl-37836959

ABSTRACT

High-quality data are of utmost importance for any deep-learning application. However, acquiring such data and their annotation is challenging. This paper presents a GPU-accelerated simulator that enables the generation of high-quality, perfectly labelled data for any Time-of-Flight sensor, including LiDAR. Our approach optimally exploits the 3D graphics pipeline of the GPU, significantly decreasing data generation time while preserving compatibility with all real-time rendering engines. The presented algorithms are generic and allow users to perfectly mimic the unique sampling pattern of any such sensor. To validate our simulator, two neural networks are trained for denoising and semantic segmentation. To bridge the gap between reality and simulation, a novel loss function is introduced that requires only a small set of partially annotated real data. It enables the learning of classes for which no labels are provided in the real data, hence dramatically reducing annotation efforts. With this work, we hope to provide means for alleviating the data acquisition problem that is pertinent to deep-learning applications.
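
The loss described above requires only partial annotations; a minimal sketch of that general idea is a cross-entropy that simply ignores pixels carrying an "unlabelled" id, so supervision comes only from annotated pixels. The ignore id and toy numbers are placeholders, and the class-transfer aspect of the paper's loss is not reproduced.

```python
import numpy as np

IGNORE = 255  # placeholder id marking unlabelled pixels

def partial_cross_entropy(logits, labels):
    """logits: (N, C) raw scores; labels: (N,) ints, with IGNORE where no label exists."""
    valid = labels != IGNORE
    if not np.any(valid):
        return 0.0
    z = logits[valid]
    z = z - z.max(axis=1, keepdims=True)                 # numerically stable softmax
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-log_probs[np.arange(valid.sum()), labels[valid]].mean())

# Toy usage: 4 pixels, 3 classes, two pixels unlabelled.
logits = np.array([[2.0, 0.1, 0.1], [0.2, 1.5, 0.0], [1.0, 1.0, 1.0], [0.0, 0.0, 3.0]])
labels = np.array([0, IGNORE, 2, IGNORE])
print(partial_cross_entropy(logits, labels))
```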

5.
IEEE Trans Neural Netw Learn Syst ; 34(7): 3269-3283, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37053063

ABSTRACT

Image denoising and classification are typically conducted separately and sequentially according to their respective objectives. In such a setup, where the two tasks are decoupled, the denoising operation does not optimally serve the classification task and sometimes even deteriorates it. We introduce here a unified deep learning framework for joint denoising and classification of high-dimensional images, and we apply it in particular to hyperspectral imaging. Earlier works on joint image denoising and classification are very scarce, and to the best of our knowledge, no deep learning models have yet been proposed or studied for this type of multitask image processing. A key component in our joint learning model is a compound loss function, designed in such a way that the denoising and classification operations benefit each other iteratively during the learning process. Hyperspectral images (HSIs) are particularly challenging for both denoising and classification due to their high dimensionality and varying noise statistics across the bands. We argue that a well-designed end-to-end deep learning framework for joint denoising and classification is superior to current deep learning approaches for processing HSI data, and we substantiate this with results on real HSI images in remote sensing. We experimentally show that the proposed joint learning framework substantially improves the classification performance compared to the common deep learning approaches in HSI processing, and as a by-product, the denoising results are enhanced as well, especially in terms of the semantic content, benefiting from the classification.
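
In its simplest form, a compound loss of the kind described above is a weighted sum of a reconstruction term and a classification term; the sketch below uses plain MSE and cross-entropy with a weight alpha as an illustrative assumption, not the exact objective of the paper.

```python
import numpy as np

def compound_loss(denoised, clean, class_logits, class_labels, alpha=0.5):
    """Weighted sum of a denoising (MSE) term and a classification (cross-entropy) term."""
    mse = np.mean((denoised - clean) ** 2)                       # denoising term
    z = class_logits - class_logits.max(axis=1, keepdims=True)   # stable softmax
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_p[np.arange(len(class_labels)), class_labels].mean()
    return alpha * mse + (1.0 - alpha) * ce

# Toy usage with random "images" and logits.
rng = np.random.default_rng(0)
print(compound_loss(rng.random((2, 8, 8)), rng.random((2, 8, 8)),
                    rng.normal(size=(2, 5)), np.array([1, 3])))
```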


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Knowledge , Semantics
6.
Sensors (Basel) ; 23(2)2023 Jan 05.
Article in English | MEDLINE | ID: mdl-36679429

ABSTRACT

Medical images are often very large, which presents a challenge in terms of memory requirements when training machine learning models. Commonly, the images are downsampled to overcome this challenge, but this leads to a loss of information. We present Segment-then-Segment, a general approach for training semantic segmentation neural networks on much smaller input sizes. To reduce the input size, we use image crops instead of downscaling. One neural network performs the initial segmentation on a downscaled image. This segmentation is then used to take the most salient crops of the full-resolution image with the surrounding context. Each crop is segmented using a second specially trained neural network. The segmentation masks of each crop are joined to form the final output image. We evaluate our approach on multiple medical image modalities (microscopy, colonoscopy, and CT) and show that this approach greatly improves segmentation performance with small network input sizes when compared to baseline models trained on downscaled images, especially in terms of pixel-wise recall.
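
A skeleton of such a two-stage pipeline, reconstructed from the abstract alone: a coarse mask from the downscaled image proposes salient regions, each full-resolution crop is re-segmented, and the crop masks are pasted back. The two models are passed in as callables returning per-pixel probabilities, and padding/overlap handling is deliberately omitted.

```python
import numpy as np
from scipy import ndimage
from skimage.transform import resize

def segment_then_segment(image, coarse_model, fine_model, coarse_size=(256, 256), margin=32):
    """image: 2D array; coarse_model/fine_model: callables returning masks in [0, 1]."""
    h, w = image.shape[:2]
    small = resize(image, coarse_size, preserve_range=True, anti_aliasing=True)
    coarse = coarse_model(small) > 0.5                            # coarse binary mask
    coarse_full = resize(coarse.astype(float), (h, w), order=0) > 0.5

    out = np.zeros((h, w), dtype=bool)
    labelled, _ = ndimage.label(coarse_full)
    for ys, xs in ndimage.find_objects(labelled):                 # one box per salient region
        y0, y1 = max(ys.start - margin, 0), min(ys.stop + margin, h)
        x0, x1 = max(xs.start - margin, 0), min(xs.stop + margin, w)
        crop = image[y0:y1, x0:x1]                                # full-resolution crop + context
        out[y0:y1, x0:x1] |= fine_model(crop) > 0.5               # paste the fine mask back
    return out
```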


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Image Processing, Computer-Assisted/methods , Machine Learning , Semantics
7.
IEEE Trans Neural Netw Learn Syst ; 34(11): 9259-9273, 2023 Nov.
Article in English | MEDLINE | ID: mdl-35294365

ABSTRACT

Band selection (BS) effectively reduces the spectral dimension of a hyperspectral image (HSI) by selecting relatively few representative bands, which allows efficient processing in subsequent tasks. Existing unsupervised BS methods based on subspace clustering are built on matrix-based models, where each band is reshaped as a vector. They encode the correlation of data only in the spectral mode (dimension) and neglect strong correlations between different modes, i.e., spatial modes and spectral mode. Another issue is that the subspace representation of bands is performed in the raw data space, where the dimension is often excessively high, resulting in a less efficient and less robust performance. To address these issues, in this article, we propose a tensor-based subspace clustering model for hyperspectral BS. Our model is developed on the well-known Tucker decomposition. The three factor matrices and a core tensor in our model jointly encode the multimode correlations of HSI, effectively avoiding destruction of the tensor structure and loss of information. In addition, we propose well-motivated heterogeneous regularizations (HRs) on the factor matrices by taking into account the important local and global properties of HSI along three dimensions, which facilitates the learning of the intrinsic cluster structure of bands in the low-dimensional subspaces. Instead of learning the correlations of bands in the original domain, as is common for matrix-based models, our model naturally learns the band correlations in a low-dimensional latent feature space, which is derived by the projections of two factor matrices associated with spatial dimensions, leading to a computationally efficient model. More importantly, the latent feature space is learned in a unified framework. We also develop an efficient algorithm to solve the resulting model. Experimental results on benchmark datasets demonstrate that our model yields improved performance compared to the state-of-the-art.
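
The Tucker structure the model builds on can be sketched with a plain HOSVD in numpy: an HSI cube is factored into a small core tensor and three factor matrices, one per mode. The ranks and random cube below are placeholders, and the paper's regularizations and clustering step are not reproduced.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the remaining axes."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd(X, ranks):
    """Truncated higher-order SVD: factor matrices from each unfolding, then the core."""
    factors = [np.linalg.svd(unfold(X, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = X.copy()
    for mode, U in enumerate(factors):           # core = X  x_1 U1^T  x_2 U2^T  x_3 U3^T
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

X = np.random.rand(50, 50, 100)                  # toy cube: rows x cols x bands
core, (U1, U2, U3) = hosvd(X, ranks=(10, 10, 20))
print(core.shape, U1.shape, U2.shape, U3.shape)  # (10, 10, 20) (50, 10) (50, 10) (100, 20)
```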

8.
Magn Reson Med ; 85(3): 1397-1413, 2021 03.
Article in English | MEDLINE | ID: mdl-33009866

ABSTRACT

PURPOSE: Echo planar imaging (EPI) is commonly used to acquire the many volumes needed for high angular resolution diffusion imaging (HARDI), posing a higher risk for artifacts, such as distortion and deformation. An alternative to EPI is fast spin echo (FSE) imaging, which has fewer artifacts but is inherently slower. The aim is to use compressed sensing to accelerate FSE such that a HARDI data set can be acquired in a time comparable to EPI. METHODS: Compressed sensing was applied in either q-space or simultaneously in k-space and q-space, by undersampling the k-space in the phase-encoding direction or retrospectively eliminating diffusion directions for different degrees of undersampling. To test the replicability of the acquisition and reconstruction, brain data were acquired from six mice, and a numerical phantom experiment was performed. All HARDI data were analyzed individually using constrained spherical deconvolution, and the apparent fiber density and complexity metric were evaluated, together with whole-brain tractography. RESULTS: The apparent fiber density and complexity metric showed relatively minor differences when only q-space undersampling was used, but deteriorated when k-space undersampling was applied. Likewise, the tract density weighted image showed good results when only q-space undersampling was applied using 15 directions or more, but information was lost when fewer volumes or k-space undersampling were used. CONCLUSION: It was found that acquiring 15 to 20 diffusion directions with a full k-space and reconstructing them using compressed sensing could suffice for a replicable measurement of quantitative measures in mice, where areas near the sinuses and ear cavities are untainted by signal loss.
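
Retrospective q-space undersampling amounts to keeping a subset of the acquired diffusion directions; a plausible way to do that while keeping the subset angularly uniform is a greedy max-min-angle selection, sketched below. This illustrates the general idea only, not necessarily the subsampling scheme used in the study.

```python
import numpy as np

def select_directions(bvecs, n_keep):
    """bvecs: (N, 3) gradient directions; returns indices of n_keep well-spread directions."""
    bvecs = bvecs / np.linalg.norm(bvecs, axis=1, keepdims=True)
    chosen = [0]                                          # arbitrary starting direction
    while len(chosen) < n_keep:
        cos = np.abs(bvecs @ bvecs[chosen].T)             # antipodal directions count as equal
        min_angle = np.arccos(np.clip(cos, -1.0, 1.0)).min(axis=1)
        min_angle[chosen] = -1.0                          # never re-pick a chosen direction
        chosen.append(int(min_angle.argmax()))            # farthest from the current set
    return np.array(chosen)

rng = np.random.default_rng(1)
dirs = rng.normal(size=(60, 3))                           # toy 60-direction shell
print(select_directions(dirs, 15))                        # e.g. the 15-direction case
```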


Subject(s)
Artifacts , Echo-Planar Imaging , Animals , Diffusion Tensor Imaging , Image Processing, Computer-Assisted , Mice , Phantoms, Imaging , Retrospective Studies
9.
Cardiovasc Eng Technol ; 11(6): 725-747, 2020 12.
Article in English | MEDLINE | ID: mdl-33140174

ABSTRACT

BACKGROUND: Preservation and improvement of heart and vessel health is the primary motivation behind cardiovascular disease (CVD) research. Development of advanced imaging techniques can improve our understanding of disease physiology and serve as a monitor for disease progression. Various image processing approaches have been proposed to extract parameters of cardiac shape and function from different cardiac imaging modalities with an overall intention of providing full cardiac analysis. Due to differences in image modalities, the selection of an appropriate segmentation algorithm may be a challenging task. PURPOSE: This paper presents a comprehensive and critical overview of research on whole heart, bi-ventricle and left atrium segmentation methods from computed tomography (CT), magnetic resonance imaging (MRI) and echocardiography (echo). The paper aims to: (1) summarize the considerable challenges of cardiac image segmentation, (2) provide a comparison of the segmentation methods, (3) classify significant contributions in the field and (4) critically review approaches in terms of their performance and accuracy. CONCLUSION: The methods described are classified, based on the segmentation approach used, into (1) edge-based segmentation methods, (2) model-fitting segmentation methods and (3) machine and deep learning segmentation methods, and are further split based on the targeted cardiac structure. Edge-based methods are mostly semi-automatic and allow end-user interaction, which provides physicians with extra control over the final segmentation. Model-fitting methods are very robust and resistant to the high variability in image contrast and overall image quality. Nevertheless, they are often time-consuming and require appropriate models with prior knowledge. While emerging deep learning segmentation approaches provide unprecedented performance in some specific scenarios and under appropriate training, their performance depends highly on the data quality and on the amount and accuracy of the provided annotations.


Subject(s)
Algorithms , Echocardiography , Heart Diseases/diagnostic imaging , Heart/diagnostic imaging , Magnetic Resonance Imaging , Radiographic Image Interpretation, Computer-Assisted , Tomography, X-Ray Computed , Biomechanical Phenomena , Heart/physiopathology , Heart Diseases/physiopathology , Hemodynamics , Humans , Predictive Value of Tests , Reproducibility of Results , Ventricular Function, Left , Ventricular Function, Right
10.
Sensors (Basel) ; 20(18)2020 Sep 15.
Article in English | MEDLINE | ID: mdl-32942592

ABSTRACT

Supervised hyperspectral image (HSI) classification relies on accurate label information. However, it is not always possible to collect perfectly accurate labels for training samples. This motivates the development of classifiers that are sufficiently robust to some reasonable amounts of errors in data labels. Despite the growing importance of this aspect, it has not yet been sufficiently studied in the literature. In this paper, we analyze the effect of erroneous sample labels on probability distributions of the principal components of HSIs, and in this way provide a statistical analysis of the resulting uncertainty in classifiers. Building on the theory of imprecise probabilities, we develop a novel robust dynamic classifier selection (R-DCS) model for data classification with erroneous labels. In particular, spectral and spatial features are extracted from HSIs to construct two individual classifiers for the dynamic selection. The proposed R-DCS model is based on the robustness of the classifiers' predictions: the extent to which a classifier can be altered without changing its prediction. We provide three possible selection strategies for the proposed model with different computational complexities and apply them on three benchmark data sets. Experimental results demonstrate that the proposed model outperforms the individual classifiers it selects from and is more robust to errors in labels compared to widely adopted approaches.

11.
Sensors (Basel) ; 20(11)2020 Jun 03.
Article in English | MEDLINE | ID: mdl-32503338

ABSTRACT

Reconstruction of magnetic resonance images (MRI) benefits from incorporating a priori knowledge about statistical dependencies among the representation coefficients. Recent results demonstrate that modeling intraband dependencies with Markov Random Field (MRF) models enables superior reconstructions compared to inter-scale models. In this paper, we develop a novel reconstruction method, which includes a composite prior based on an MRF model and Total Variation (TV). We use an anisotropic MRF model and propose an original data-driven method for the adaptive estimation of its parameters. From a Bayesian perspective, we define a new position-dependent type of regularization and derive a compact reconstruction algorithm with a novel soft-thresholding rule. Experimental results show the effectiveness of this method compared to the state of the art in the field.
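
The soft-thresholding building block mentioned above is compact enough to sketch directly: the classical rule shrinks every coefficient by the same amount, while a position-dependent threshold (here just an arbitrary per-coefficient array) shrinks each coefficient by its own amount. The paper's specific adaptive rule is not reproduced.

```python
import numpy as np

def soft_threshold(x, tau):
    """Elementwise soft-thresholding; tau may be a scalar or an array shaped like x."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

coeffs = np.array([-3.0, -0.4, 0.2, 1.5, 4.0])
print(soft_threshold(coeffs, 1.0))                                   # uniform threshold
print(soft_threshold(coeffs, np.array([0.5, 0.5, 1.0, 2.0, 2.0])))   # position-dependent thresholds
```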

12.
J Imaging ; 6(2)2020 Feb 11.
Article in English | MEDLINE | ID: mdl-34460553

ABSTRACT

Multichannel images, i.e., images of the same object or scene taken in different spectral bands or with different imaging modalities/settings, are common in many applications. For example, multispectral images contain several wavelength bands and hence have richer information than color images. Multichannel magnetic resonance imaging and multichannel computed tomography images are common in medical imaging diagnostics, and multimodal images are also routinely used in art investigation. All the methods for grayscale images can be applied to multichannel images by processing each channel/band separately. However, this requires vast computational time, especially for the task of searching for overlapping patches similar to a given query patch. To address this problem, we propose a three-dimensional orthonormal tree-structured Haar transform (3D-OTSHT) targeting a fast full-search equivalent for three-dimensional block matching in multichannel images. The use of a three-dimensional integral image significantly reduces the time needed to obtain the 3D-OTSHT coefficients. We demonstrate the superior performance of the proposed block matching.
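
The 3D integral image ("summed volume table") that makes those coefficients cheap is itself a one-liner of cumulative sums: once it is built, any box sum over the volume costs eight lookups regardless of the box size. A minimal sketch on toy data:

```python
import numpy as np

def integral_volume(vol):
    """Cumulative sums along all three axes, zero-padded so box queries need no edge cases."""
    s = vol.cumsum(0).cumsum(1).cumsum(2)
    return np.pad(s, ((1, 0), (1, 0), (1, 0)))

def box_sum(ii, z0, y0, x0, z1, y1, x1):
    """Sum of vol[z0:z1, y0:y1, x0:x1] via 3D inclusion-exclusion on the padded table ii."""
    return (ii[z1, y1, x1] - ii[z0, y1, x1] - ii[z1, y0, x1] - ii[z1, y1, x0]
            + ii[z0, y0, x1] + ii[z0, y1, x0] + ii[z1, y0, x0] - ii[z0, y0, x0])

vol = np.random.rand(32, 32, 8)                   # e.g. a small multichannel stack
ii = integral_volume(vol)
print(np.isclose(box_sum(ii, 2, 3, 1, 10, 12, 5), vol[2:10, 3:12, 1:5].sum()))  # True
```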

13.
Comput Biol Med ; 104: 163-174, 2019 01.
Article in English | MEDLINE | ID: mdl-30481731

ABSTRACT

BACKGROUND: Percutaneous left atrial appendage (LAA) closure (placement of an occluder to close the appendage) is a novel procedure for stroke prevention in patients suffering from atrial fibrillation. The closure procedure planning requires accurate LAA measurements, which can be obtained from computed tomography (CT) images. METHOD: We propose a novel semi-automatic LAA segmentation method from 3D coronary CT angiography (CCTA) images. The method segments the LAA, proposes the location for the occluder placement (a delineation plane between the left atrium and LAA) and calculates measurements needed for closure procedure planning. The method requires only two inputs from the user: a threshold value and a single seed point inside the LAA. The proposed location of the delineation plane can be intuitively corrected if necessary. Measurements are calculated from the segmented LAA according to the final delineation plane. RESULTS: Performance of the proposed method is validated on 17 CCTA images, manually segmented by two medical doctors. We achieve average Dice coefficient overlaps of 92.52% and 91.63% against the ground truth segmentations. The average Dice coefficient overlap between the two ground truth segmentations is 92.66%. Our proposed LAA orifice localization is evaluated against the desired location of the LAA orifice determined by the expert. The average distance between our proposed location and the desired location is 2.51 mm. CONCLUSION: Segmentation results show high correspondence to the ground truth segmentations. The occluder placement method shows high accuracy, which indicates potential for use in clinical procedure planning.
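
The two user inputs (a threshold and one seed point) suggest the basic mechanism sketched below: threshold the volume, then keep only the connected component containing the seed. This reproduces just that elementary step, not the full LAA pipeline or the delineation-plane proposal; the toy volume is made up.

```python
import numpy as np
from scipy import ndimage

def seeded_threshold_segmentation(volume, seed, threshold):
    """volume: 3D array; seed: (z, y, x) index inside the target; threshold: intensity cut-off."""
    mask = volume >= threshold
    labelled, _ = ndimage.label(mask)                    # default 6-connectivity in 3D
    seed_label = labelled[tuple(seed)]
    if seed_label == 0:
        raise ValueError("seed does not lie inside the thresholded region")
    return labelled == seed_label

# Toy usage: a bright blob embedded in noise.
vol = np.random.normal(0.0, 10.0, size=(40, 40, 40))
vol[15:25, 15:25, 15:25] += 200.0
print(seeded_threshold_segmentation(vol, seed=(20, 20, 20), threshold=100.0).sum())
```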


Subject(s)
Algorithms , Angiography , Atrial Appendage , Atrial Fibrillation , Imaging, Three-Dimensional , Tomography, X-Ray Computed , Aged , Atrial Appendage/diagnostic imaging , Atrial Appendage/physiopathology , Atrial Fibrillation/diagnostic imaging , Atrial Fibrillation/physiopathology , Female , Heart Atria/diagnostic imaging , Heart Atria/physiopathology , Humans , Male , Middle Aged
14.
IEEE Trans Med Imaging ; 36(10): 2104-2115, 2017 10.
Article in English | MEDLINE | ID: mdl-28858789

ABSTRACT

Recent research in compressed sensing of magnetic resonance imaging (CS-MRI) emphasizes the importance of modeling structured sparsity, either in the acquisition or in the reconstruction stages. Subband coefficients of typical images show certain structural patterns, which can be viewed in terms of fixed groups (like wavelet trees) or statistically (certain configurations are more likely than others). Wavelet tree models have already demonstrated excellent performance in MRI recovery from partial data. However, much less attention has been given in CS-MRI to statistically modeling the spatial clustering of subband data, although the potential of such models has been indicated. In this paper, we propose a practical CS-MRI reconstruction algorithm making use of a Markov random field prior model for spatial clustering of subband coefficients and an efficient optimization approach based on proximal splitting. The results demonstrate an improved reconstruction performance compared with both standard CS-MRI methods and recent related methods.
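
Proximal splitting in its simplest (ISTA-like) form alternates a gradient step on the data-fidelity term with a proximal step on the prior. The sketch below uses a plain l1 penalty in the image domain as a stand-in for the paper's MRF spatial-clustering prior; only the overall splitting pattern is meant to carry over, and all sizes and parameters are placeholders.

```python
import numpy as np

def ista_cs_mri(kspace, mask, lam=0.05, n_iter=100):
    """kspace: masked k-space samples; mask: boolean sampling pattern of the same shape."""
    x = np.zeros_like(kspace)
    for _ in range(n_iter):
        residual = mask * np.fft.fft2(x, norm="ortho") - kspace       # data misfit
        x = x - np.fft.ifft2(mask * residual, norm="ortho")           # gradient step (step size 1)
        mag = np.abs(x)                                               # proximal step:
        x = x * np.maximum(mag - lam, 0.0) / np.maximum(mag, 1e-12)   # complex soft-thresholding
    return x

# Toy usage: a sparse image sampled at roughly 30% of k-space.
rng = np.random.default_rng(0)
img = np.zeros((64, 64)); img[rng.integers(0, 64, 40), rng.integers(0, 64, 40)] = 1.0
mask = rng.random((64, 64)) < 0.3
y = mask * np.fft.fft2(img, norm="ortho")
print(np.abs(ista_cs_mri(y, mask) - img).mean())
```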


Subject(s)
Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Algorithms , Animals , Brain/diagnostic imaging , Humans , Markov Chains , Mice
15.
Sensors (Basel) ; 17(9)2017 Sep 12.
Article in English | MEDLINE | ID: mdl-28895908

ABSTRACT

Sparse representation has been extensively investigated for hyperspectral image (HSI) classification and led to substantial improvements in the performance over the traditional methods, such as support vector machine (SVM). However, the existing sparsity-based classification methods typically assume Gaussian noise, neglecting the fact that HSIs are often corrupted by different types of noise in practice. In this paper, we develop a robust classification model that admits realistic mixed noise, which includes Gaussian noise and sparse noise. We combine a model for mixed noise with a prior on the representation coefficients of input data within a unified framework, which produces three kinds of robust classification methods based on sparse representation classification (SRC), joint SRC and joint SRC on a super-pixels level. Experimental results on simulated and real data demonstrate the effectiveness of the proposed method and clear benefits from the introduced mixed-noise model.
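
The sparse representation classification (SRC) backbone that the robust variants above build on can be sketched briefly: code a test spectrum over a dictionary of training spectra, then assign the class whose atoms alone reconstruct it best. The mixed-noise model of the paper is not reproduced; the OMP solver, sparsity level and toy data are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(x, dictionary, labels, n_nonzero=5):
    """x: (B,) test spectrum; dictionary: (B, N) training spectra as columns; labels: (N,)."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    coef = omp.fit(dictionary, x).coef_                      # sparse code over all atoms
    residuals = {}
    for c in np.unique(labels):
        coef_c = np.where(labels == c, coef, 0.0)            # keep class-c coefficients only
        residuals[c] = np.linalg.norm(x - dictionary @ coef_c)
    return min(residuals, key=residuals.get)                 # class with the smallest residual

# Toy usage: 3 classes of 10 random atoms each; the query is a noisy copy of a class-0 atom.
rng = np.random.default_rng(0)
D = rng.normal(size=(50, 30)); labels = np.repeat([0, 1, 2], 10)
x = D[:, 3] + 0.01 * rng.normal(size=50)
print(src_classify(x, D, labels))                            # expected: 0
```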

16.
PLoS One ; 11(3): e0149778, 2016.
Article in English | MEDLINE | ID: mdl-26930054

ABSTRACT

Diffusion Weighted (DW) MRI allows for the non-invasive study of water diffusion inside living tissues. As such, it is useful for the investigation of human brain white matter (WM) connectivity in vivo through fiber tractography (FT) algorithms. Many DW-MRI tailored restoration techniques and FT algorithms have been developed. However, it is not clear how accurately these methods reproduce the WM bundle characteristics in real-world conditions, such as in the presence of noise, partial volume effect, and a limited spatial and angular resolution. The difficulty lies in the lack of a realistic brain phantom on the one hand, and a sufficiently accurate way of modeling the acquisition-related degradation on the other. This paper proposes a software phantom that approximates a human brain to a high degree of realism and that can incorporate complex brain-like structural features. We refer to it as a Diffusion BRAIN (D-BRAIN) phantom. Also, we propose an accurate model of a (DW) MRI acquisition protocol to allow for validation of methods in realistic conditions with data imperfections. The phantom model simulates anatomical and diffusion properties for multiple brain tissue components, and can serve as a ground-truth to evaluate FT algorithms, among others. The simulation of the acquisition process allows one to include noise, partial volume effects, and limited spatial and angular resolution in the images. In this way, the effect of image artifacts on, for instance, fiber tractography can be investigated with great detail. The proposed framework enables reliable and quantitative evaluation of DW-MR image processing and FT algorithms at the level of large-scale WM structures. The effect of noise levels and other data characteristics on cortico-cortical connectivity and tractography-based grey matter parcellation can be investigated as well.
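
The elementary building block behind such a phantom, simulating the diffusion-weighted signal of one voxel from a diffusion tensor, follows the standard relation S(g) = S0 * exp(-b * g^T D g). The sketch below shows only that single-tensor case; the actual phantom adds multiple tissue compartments, noise and acquisition effects.

```python
import numpy as np

def dwi_signal(S0, b, bvecs, D):
    """S0: unweighted signal; b: b-value (s/mm^2); bvecs: (N, 3) gradient directions; D: 3x3 tensor."""
    bvecs = bvecs / np.linalg.norm(bvecs, axis=1, keepdims=True)
    exponents = np.einsum("ni,ij,nj->n", bvecs, D, bvecs)      # g^T D g per direction
    return S0 * np.exp(-b * exponents)

# Toy usage: a prolate tensor aligned with x, i.e. a single coherent fibre population.
D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])                          # diffusivities in mm^2/s
g = np.eye(3)                                                  # measure along x, y and z
print(dwi_signal(100.0, 1000.0, g, D))                         # strongest attenuation along x
```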


Subject(s)
Algorithms , Brain/diagnostic imaging , Computational Biology/methods , Image Processing, Computer-Assisted/methods , Computational Biology/instrumentation , Humans , Image Processing, Computer-Assisted/instrumentation , Models, Anatomic , Nerve Net/diagnostic imaging , Phantoms, Imaging , Radiography , Reproducibility of Results , White Matter/diagnostic imaging
17.
Neuroimage ; 120: 441-55, 2015 Oct 15.
Article in English | MEDLINE | ID: mdl-26142273

ABSTRACT

Diffusion-weighted (DW) magnetic resonance imaging (MRI) is a unique method to investigate microstructural tissue properties noninvasively and is one of the most popular methods for studying the brain white matter in vivo. To obtain reliable statistical inferences with diffusion MRI, however, there are still many challenges, such as acquiring high-quality DW-MRI data (e.g., high SNR and high resolution), careful data preprocessing (e.g., correcting for subject motion and eddy current induced geometric distortions), choosing the appropriate diffusion approach (e.g., diffusion tensor imaging (DTI), diffusion kurtosis imaging (DKI), or diffusion spectrum MRI), and applying a robust analysis strategy (e.g., tractography based or voxel based analysis). Notwithstanding the numerous efforts to optimize many steps in this complex and lengthy diffusion analysis pipeline, to date, a well-known artifact in MRI--i.e., Gibbs ringing (GR)--has largely gone unnoticed or been deemed insignificant as a potential confound in quantitative DW-MRI analysis. Considering the recent explosion of diffusion MRI in biomedical and clinical applications, a systematic and comprehensive investigation is necessary to understand the influence of GR on the estimation of diffusion measures. In this work, we demonstrate with simulations and experimental DW-MRI data that diffusion estimates are significantly affected by GR artifacts, and we show that an off-the-shelf GR correction procedure based on total variation can already alleviate this issue substantially.
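
The Gibbs ringing mechanism itself is easy to reproduce in one dimension: a sharp edge reconstructed from a truncated Fourier spectrum oscillates near the edge, with an overshoot of roughly 9% of the jump that narrows but does not shrink as more coefficients are kept. A minimal demonstration on a synthetic profile:

```python
import numpy as np

n = 512
signal = np.zeros(n); signal[n // 4: 3 * n // 4] = 1.0            # ideal sharp-edged profile
spectrum = np.fft.fftshift(np.fft.fft(signal))

k_keep = 64                                                       # keep only 64 central k-space samples
window = np.zeros(n); window[n // 2 - k_keep // 2: n // 2 + k_keep // 2] = 1.0
truncated = np.real(np.fft.ifft(np.fft.ifftshift(spectrum * window)))

print("max overshoot:", truncated.max() - 1.0)                    # about 0.09 (the Gibbs overshoot)
```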


Subject(s)
Artifacts , Brain/anatomy & histology , Diffusion Magnetic Resonance Imaging/methods , Diffusion Magnetic Resonance Imaging/standards , Anisotropy , Brain/physiology , Computer Simulation , Humans
18.
IEEE Trans Image Process ; 24(1): 444-56, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25420260

ABSTRACT

In this paper, we first introduce a general approach for context-aware patch-based image inpainting, where textural descriptors are used to guide and accelerate the search for well-matching (candidate) patches. A novel top-down splitting procedure divides the image into variable size blocks according to their context, thereby constraining the search for candidate patches to nonlocal image regions with matching context. This approach can be employed to improve the speed and performance of virtually any (patch-based) inpainting method. We apply this approach to the so-called global image inpainting with the Markov random field (MRF) prior, where the MRF encodes a priori knowledge about the consistency of neighboring image patches. We solve the resulting optimization problem with an efficient low-complexity inference method. Experimental results demonstrate the potential of the proposed approach in inpainting applications like scratch, text, and object removal. Improvement and significant acceleration of a related global MRF-based inpainting method are also evident.
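
The guiding idea can be sketched as a patch search restricted to positions whose context label matches that of the target patch, with candidates ranked by sum of squared differences over the known pixels. Here the context map is assumed to be a precomputed integer label per pixel (e.g., quantized local texture); the paper's descriptors, block splitting and MRF optimization are not reproduced.

```python
import numpy as np

def best_candidate(image, context_map, target_top_left, patch=9):
    """image: 2D array with NaN marking missing pixels; context_map: int label per pixel."""
    y0, x0 = target_top_left
    target = image[y0:y0 + patch, x0:x0 + patch]
    known = ~np.isnan(target)                              # compare only on known pixels
    best, best_cost = None, np.inf
    H, W = image.shape
    for y in range(H - patch + 1):
        for x in range(W - patch + 1):
            if context_map[y, x] != context_map[y0, x0] or (y, x) == (y0, x0):
                continue                                   # skip mismatching context
            cand = image[y:y + patch, x:x + patch]
            if np.isnan(cand).any():
                continue                                   # candidate must be fully known
            cost = float(((cand[known] - target[known]) ** 2).sum())
            if cost < best_cost:
                best, best_cost = (y, x), cost
    return best
```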

19.
PLoS One ; 9(6): e98937, 2014.
Article in English | MEDLINE | ID: mdl-24915203

ABSTRACT

Today, many MRI reconstruction techniques exist for undersampled MRI data. Regularization-based techniques inspired by compressed sensing allow for the reconstruction of undersampled data that would lead to an ill-posed reconstruction problem. Parallel imaging enables the reconstruction of MRI images from undersampled multi-coil data that leads to a well-posed reconstruction problem. Autocalibrating pMRI techniques are pMRI techniques in which no explicit knowledge of the coil sensitivities is required. The first purpose of this paper is to derive a novel autocalibration approach for pMRI that allows for the estimation and use of smooth, but high-bandwidth coil profiles instead of a compactly supported kernel. These high-bandwidth models adhere more accurately to the physics of an antenna system. The second purpose of this paper is to demonstrate the feasibility of a parameter-free reconstruction algorithm that combines autocalibrating pMRI and compressed sensing. Therefore, we present several techniques for automatic parameter estimation in MRI reconstruction. Experiments show that a higher reconstruction accuracy can be achieved using high-bandwidth coil models and that the automatic parameter choices yield acceptable results.


Subject(s)
Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Models, Theoretical , Algorithms , Calibration , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods
20.
Comput Med Imaging Graph ; 38(3): 179-89, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24405817

ABSTRACT

Aortic stiffness has proven to be an important diagnostic and prognostic factor of many cardiovascular diseases, as well as an estimate of overall cardiovascular health. Pulse wave velocity (PWV) represents a good measure of aortic stiffness, while aortic distensibility is used as an aortic elasticity index. Obtaining the PWV and the aortic distensibility from magnetic resonance imaging (MRI) data requires diverse segmentation tasks, namely the extraction of the aortic center line and the segmentation of aortic regions, combined with signal processing methods for the analysis of the pulse wave. In our study, non-contrast abdominal MRI images of healthy volunteers (22 data sets) were used for the sake of non-invasive analysis, and contrast-enhanced magnetic resonance (MR) images were used for the aortic examination of Marfan syndrome patients (8 data sets). In this research, we present a novel robust segmentation technique for the PWV and aortic distensibility calculation as a complete image processing toolbox. We introduce a novel graph-based method, robust to artifacts and noise, for the extraction of the thoraco-abdominal aortic centerline and the calculation of its length from 3-D MRI data. Moreover, we design a new projection-based segmentation method for transverse aortic region delineation in cardiac magnetic resonance (CMR) images which is robust to a high presence of artifacts. Finally, we propose a novel method for the analysis of velocity curves in order to obtain pulse wave propagation times. To validate the proposed method, we compare the obtained results with aortic centerlines and region segmentations manually determined by an expert, while the PWV measurements were compared to validated software (LUMC, Leiden, the Netherlands). The obtained results show the high accuracy and effectiveness of our method for aortic PWV and distensibility calculation.
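
For reference, the two indices themselves follow standard definitions: PWV is the aortic path length divided by the pulse transit time, and distensibility relates the cyclic change in cross-sectional area to the pulse pressure. A minimal sketch with made-up numbers purely to show the units (not values from the study):

```python
def pulse_wave_velocity(path_length_mm, transit_time_ms):
    """PWV in m/s from centerline length (mm) and pulse transit time (ms)."""
    return (path_length_mm / 1000.0) / (transit_time_ms / 1000.0)

def distensibility(area_max_mm2, area_min_mm2, pulse_pressure_mmhg):
    """Aortic distensibility in 1/mmHg: relative area change per unit pulse pressure."""
    return (area_max_mm2 - area_min_mm2) / (area_min_mm2 * pulse_pressure_mmhg)

print(pulse_wave_velocity(350.0, 70.0))        # 5.0 m/s
print(distensibility(720.0, 640.0, 40.0))      # 0.003125 per mmHg
```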


Subject(s)
Aorta/physiopathology , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Angiography/methods , Magnetic Resonance Imaging, Cine/methods , Marfan Syndrome/physiopathology , Pulsatile Flow , Pulse Wave Analysis/methods , Algorithms , Elastic Modulus , Humans , Image Enhancement/methods , Imaging, Three-Dimensional/methods , Marfan Syndrome/diagnosis , Reproducibility of Results , Sensitivity and Specificity , Vascular Resistance