Results 1 - 16 of 16
1.
NPJ Digit Med ; 7(1): 120, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38724581

ABSTRACT

Distribution shifts remain a problem for the safe application of regulated medical AI systems and may impact their real-world performance if undetected. Postmarket shifts can occur, for example, if algorithms developed on data from various acquisition settings and a heterogeneous population are predominantly applied in hospitals with lower-quality data acquisition or other centre-specific acquisition factors, or where some ethnicities are over-represented. Distribution shift detection could therefore be important for monitoring AI-based medical products during postmarket surveillance. We implemented and evaluated three deep learning-based shift detection techniques (classifier-based, deep kernel, and multiple univariate Kolmogorov-Smirnov tests) on simulated shifts in a dataset of 130,486 retinal images. We trained a deep learning classifier for diabetic retinopathy grading. We then simulated population shifts by changing the prevalence of patients' sex, ethnicity, and co-morbidities, and example acquisition shifts by changes in image quality. We observed classification subgroup performance disparities with respect to image quality, patient sex, ethnicity, and co-morbidity presence. The sensitivity at detecting referable diabetic retinopathy ranged from 0.50 to 0.79 across ethnicities. This motivates the need for detecting shifts after deployment. Classifier-based tests performed best overall, with perfect detection rates for quality and co-morbidity subgroup shifts at a sample size of 1,000. It was the only method to detect shifts in patient sex, but required large sample sizes (>30,000). All methods identified easier-to-detect out-of-distribution shifts with small (≤300) sample sizes. We conclude that effective tools exist for detecting clinically relevant distribution shifts. In particular, classifier-based tests can be easily implemented as components in the postmarket surveillance strategy of medical device manufacturers.
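Of the three detectors, the multiple univariate Kolmogorov-Smirnov test is the simplest to reproduce. The sketch below (not the paper's implementation) applies a two-sample KS test per feature dimension with a Bonferroni correction, assuming images have already been reduced to feature vectors; the function name and the synthetic Gaussian features are illustrative:

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_shift_test(reference, deployment, alpha=0.05):
    """Two-sample KS test per feature dimension, Bonferroni-corrected;
    flags a distribution shift if any single feature rejects the null."""
    reference = np.asarray(reference)
    deployment = np.asarray(deployment)
    n_features = reference.shape[1]
    p_values = np.array([
        ks_2samp(reference[:, j], deployment[:, j]).pvalue
        for j in range(n_features)
    ])
    shift_detected = bool((p_values < alpha / n_features).any())
    return shift_detected, p_values

rng = np.random.default_rng(0)
ref = rng.normal(size=(1000, 8))     # reference (development) features
batch = rng.normal(size=(1000, 8))   # postmarket batch
shifted = batch.copy()
shifted[:, 0] += 1.0                 # simulate a mean shift in one feature
detected, p_vals = ks_shift_test(ref, shifted)
```

In practice the feature vectors would come from an intermediate layer of the deployed model rather than raw draws as here.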

2.
Bioengineering (Basel) ; 10(2)2023 Feb 18.
Article in English | MEDLINE | ID: mdl-36829761

ABSTRACT

Magnetic resonance imaging (MRI) offers strong soft tissue contrast but suffers from long acquisition times and requires tedious annotation from radiologists. Traditionally, these challenges have been addressed separately with reconstruction and image analysis algorithms. To see whether performance could be improved by treating both as an end-to-end task, we hosted the K2S challenge, in which participants segmented knee bones and cartilage from 8× undersampled k-space. We curated the 300-patient K2S dataset of multicoil raw k-space data and radiologist quality-checked segmentations. Eighty-seven teams registered for the challenge, and there were 12 submissions, varying in methodology from serial reconstruction and segmentation, to end-to-end networks, to one that eschewed a reconstruction algorithm altogether. Four teams produced strong submissions, with the winner achieving a weighted Dice similarity coefficient of 0.910 ± 0.021 across knee bones and cartilage. Interestingly, there was no correlation between reconstruction and segmentation metrics. Further analysis showed the top four submissions were suitable for downstream biomarker analysis, largely preserving cartilage thicknesses and key bone shape features with respect to ground truth. K2S thus showed the value of considering reconstruction and image analysis as end-to-end tasks, as this leaves room for optimization while more realistically reflecting the long-term use case of tools being developed by the MR community.

3.
IEEE Trans Med Imaging ; 41(7): 1885-1896, 2022 07.
Article in English | MEDLINE | ID: mdl-35143393

ABSTRACT

Undersampling the k-space during MR acquisitions saves time; however, it results in an ill-posed inversion problem with an infinite set of images as possible solutions. Traditionally, this is tackled as a reconstruction problem by searching for a single "best" image out of this solution set according to some chosen regularization or prior. This approach, however, misses the possibility of other solutions and hence ignores the uncertainty in the inversion process. In this paper, we propose a method that instead returns multiple images which are possible under the acquisition model and the chosen prior, thereby capturing the uncertainty in the inversion process. To this end, we introduce a low-dimensional latent space and model the posterior distribution of the latent vectors given the acquisition data in k-space, from which we can sample in the latent space and obtain the corresponding images. We use a variational autoencoder for the latent model and the Metropolis-adjusted Langevin algorithm for the sampling. We evaluate our method on two datasets: images from the Human Connectome Project and in-house measured multi-coil images. We compare against five alternative methods. Results indicate that the proposed method produces images that match the measured k-space data better than the alternatives, while showing realistic structural variability. Furthermore, in contrast to the compared methods, the proposed method yields higher uncertainty in the undersampled phase-encoding direction, as expected.
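The sampling step can be illustrated in isolation. Below is a generic Metropolis-adjusted Langevin sampler applied to a toy Gaussian stand-in for the posterior over a 2-D latent vector; the function names and target density are illustrative assumptions, not the paper's model:

```python
import numpy as np

def mala_sample(log_p, grad_log_p, x0, step=0.1, n_steps=5000, seed=0):
    """Metropolis-adjusted Langevin algorithm: a Langevin-diffusion proposal
    followed by a Metropolis-Hastings accept/reject step, targeting p(x)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        prop = (x + 0.5 * step * grad_log_p(x)
                + np.sqrt(step) * rng.normal(size=x.shape))

        def log_q(a, b):
            # log density of proposing a from b (asymmetric Gaussian proposal)
            diff = a - b - 0.5 * step * grad_log_p(b)
            return -np.sum(diff ** 2) / (2.0 * step)

        log_alpha = log_p(prop) - log_p(x) + log_q(x, prop) - log_q(prop, x)
        if np.log(rng.uniform()) < log_alpha:
            x = prop
        samples.append(x.copy())
    return np.array(samples)

# toy stand-in for the latent posterior: a standard 2-D Gaussian
log_p = lambda z: -0.5 * np.sum(z ** 2)
grad_log_p = lambda z: -z
chain = mala_sample(log_p, grad_log_p, x0=np.zeros(2))
```

In the paper's setting each latent sample would be decoded through the VAE into an image; here the chain is inspected directly.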


Subject(s)
Connectome , Image Processing, Computer-Assisted , Algorithms , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods
4.
Arthritis Care Res (Hoboken) ; 74(6): 929-936, 2022 06.
Article in English | MEDLINE | ID: mdl-33337584

ABSTRACT

OBJECTIVE: To study the longitudinal performance of fully automated cartilage segmentation in knees with radiographic osteoarthritis (OA), we evaluated the sensitivity to change in progressor knees from the Foundation for the National Institutes of Health OA Biomarkers Consortium for the automated versus the previously reported manual expert segmentation, and we determined whether differences in progression rates between predefined cohorts can be detected by the fully automated approach. METHODS: The OA Initiative Biomarker Consortium was a nested case-control study. Progressor knees had both medial tibiofemoral radiographic joint space width loss (≥0.7 mm) and a persistent increase in Western Ontario and McMaster Universities Osteoarthritis Index pain scores (≥9 on a 0-100 scale) 2 years after baseline (n = 194), whereas non-progressor knees met neither criterion (n = 200). Deep learning automated algorithms trained on radiographic OA knees or on knees of a healthy reference cohort (HRC) were used to automatically segment medial femorotibial compartment (MFTC) and lateral femorotibial cartilage on baseline and 2-year follow-up magnetic resonance images. Findings were compared with previously published manual expert segmentation. RESULTS: The mean ± SD MFTC cartilage loss in the progressor cohort was -181 ± 245 µm by manual segmentation (standardized response mean [SRM] -0.74), -144 ± 200 µm by the radiographic OA-based model (SRM -0.72), and -69 ± 231 µm by HRC-based model segmentation (SRM -0.30). Cohen's d for rates of progression in the progressor versus the non-progressor cohort was -0.84 (P < 0.001) for manual, -0.68 (P < 0.001) for the automated radiographic OA model, and -0.14 (P = 0.18) for the automated HRC model segmentation.
CONCLUSION: A fully automated deep learning segmentation approach not only displays sensitivity to change of longitudinal cartilage thickness loss in knee OA similar to that of manual expert segmentation, but also effectively differentiates longitudinal rates of cartilage thickness loss between cohorts with different progression profiles.
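The two effect-size statistics reported in this abstract are straightforward to compute. A minimal sketch with synthetic change scores (the draws below are illustrative, not the study data):

```python
import numpy as np

def standardized_response_mean(change):
    """SRM: mean longitudinal change divided by the SD of the change."""
    change = np.asarray(change, dtype=float)
    return change.mean() / change.std(ddof=1)

def cohens_d(group_a, group_b):
    """Cohen's d between two groups, using a pooled standard deviation."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

# synthetic cartilage-change values in micrometres, shaped like the cohorts
rng = np.random.default_rng(1)
progressors = rng.normal(-181, 245, size=194)
non_progressors = rng.normal(-20, 150, size=200)
srm = standardized_response_mean(progressors)
d = cohens_d(progressors, non_progressors)
```

A more negative SRM or d indicates a larger standardized loss (or group difference) relative to its variability.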


Subject(s)
Cartilage, Articular , Deep Learning , Osteoarthritis, Knee , Algorithms , Biomarkers , Cartilage, Articular/diagnostic imaging , Cartilage, Articular/pathology , Case-Control Studies , Disease Progression , Humans , Knee Joint/diagnostic imaging , Knee Joint/pathology , Magnetic Resonance Imaging/methods , National Institutes of Health (U.S.) , Osteoarthritis, Knee/diagnostic imaging , Osteoarthritis, Knee/pathology , United States
5.
Med Image Anal ; 68: 101934, 2021 02.
Article in English | MEDLINE | ID: mdl-33385699

ABSTRACT

Supervised learning-based segmentation methods typically require large amounts of annotated training data to generalize well at test time. In medical applications, curating such datasets is not a favourable option because acquiring a large number of annotated samples from experts is time-consuming and expensive. Consequently, numerous methods have been proposed in the literature for learning with limited annotated examples. Unfortunately, the approaches proposed in the literature have not yet yielded significant gains over random data augmentation for image segmentation, and random augmentations themselves do not yield high accuracy. In this work, we propose a novel task-driven data augmentation method for learning with limited labeled data, in which the synthetic data generator is optimized for the segmentation task. The generator models intensity and shape variations using two sets of transformations: additive intensity transformations and deformation fields. Both transformations are optimized using labeled as well as unlabeled examples in a semi-supervised framework. Our experiments on three medical datasets, namely cardiac, prostate and pancreas, show that the proposed approach significantly outperforms standard augmentation and semi-supervised approaches for image segmentation in the limited annotation setting. The code is made publicly available at https://github.com/krishnabits001/task_driven_data_augmentation.
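As a rough illustration of the two transformation families, the sketch below applies a smooth additive intensity field and a smooth deformation field to a toy image. Here both fields are drawn at random, whereas in the proposed method the generators are learned and optimized for the segmentation task:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def augment(image, intensity_amp=0.1, deform_amp=3.0, sigma=8.0, seed=0):
    """One random augmentation: additive intensity transform plus
    deformation field, both spatially smooth (random here, learned in the paper)."""
    rng = np.random.default_rng(seed)

    def smooth_field(amp):
        # Gaussian-smoothed noise, normalized to peak magnitude `amp`
        f = gaussian_filter(rng.normal(size=image.shape), sigma)
        return amp * f / (np.abs(f).max() + 1e-8)

    out = image + smooth_field(intensity_amp)        # intensity transform
    dy, dx = smooth_field(deform_amp), smooth_field(deform_amp)
    ys, xs = np.meshgrid(np.arange(image.shape[0]),
                         np.arange(image.shape[1]), indexing="ij")
    # resample the intensity-transformed image along the deformed grid
    return map_coordinates(out, [ys + dy, xs + dx], order=1)

img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0      # toy "anatomy"
aug = augment(img)
```

The same deformation field would be applied to the label map (with nearest-neighbour interpolation) so that image and annotation stay aligned.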


Subject(s)
Prostate , Supervised Machine Learning , Humans , Male
6.
J Cardiovasc Magn Reson ; 22(1): 60, 2020 08 20.
Article in English | MEDLINE | ID: mdl-32814579

ABSTRACT

BACKGROUND: Tissue characterisation with cardiovascular magnetic resonance (CMR) parametric mapping has the potential to detect and quantify both focal and diffuse alterations in myocardial structure not assessable by late gadolinium enhancement. Native T1 mapping in particular has shown promise as a useful biomarker to support diagnostic, therapeutic and prognostic decision-making in ischaemic and non-ischaemic cardiomyopathies. METHODS: Convolutional neural networks (CNNs) with Bayesian inference are a category of artificial neural networks that model the uncertainty of the network output. This study presents an automated framework for tissue characterisation from native shortened modified Look-Locker inversion recovery (ShMOLLI) T1 mapping at 1.5 T using a Probabilistic Hierarchical Segmentation (PHiSeg) network. In addition, we use the uncertainty information provided by the PHiSeg network in a novel automated quality control (QC) step to identify uncertain T1 values. The PHiSeg network and QC were validated against manual analysis on a UK Biobank cohort containing healthy subjects and chronic cardiomyopathy patients (N=100 for the PHiSeg network and N=700 for the QC). We used the proposed method to obtain reference T1 ranges for the left ventricular (LV) myocardium in healthy subjects as well as in common clinical cardiac conditions. RESULTS: T1 values computed from automatic and manual segmentations were highly correlated (r=0.97). Bland-Altman analysis showed good agreement between the automated and manual measurements. The average Dice metric was 0.84 for the LV myocardium. The sensitivity of detection of erroneous outputs was 91%. Finally, T1 values were automatically derived from 11,882 CMR exams from the UK Biobank. For the healthy cohort, the mean (SD) corrected T1 values were 926.61 (45.26) ms, 934.39 (43.25) ms and 927.56 (50.36) ms for the global myocardium, interventricular septum and free wall, respectively.
CONCLUSIONS: The proposed pipeline allows for automatic analysis of myocardial native T1 mapping and includes a QC process to detect potentially erroneous results. T1 reference values were presented for healthy subjects and common clinical cardiac conditions from the largest T1-mapping cohort to date.
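The Bland-Altman agreement check mentioned in the results reduces to a bias and limits of agreement. A minimal sketch with simulated T1 values around the reported healthy range (not UK Biobank data):

```python
import numpy as np

def bland_altman(auto_vals, manual_vals):
    """Bland-Altman statistics: bias (mean difference between methods)
    and the 95% limits of agreement (bias ± 1.96 SD of the differences)."""
    a, m = np.asarray(auto_vals, float), np.asarray(manual_vals, float)
    diff = a - m
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# simulated paired T1 measurements in ms: automated = manual + small error
rng = np.random.default_rng(2)
manual = rng.normal(930, 45, size=100)
auto = manual + rng.normal(1.0, 10.0, size=100)
bias, (lo, hi) = bland_altman(auto, manual)
```

Narrow limits of agreement around a near-zero bias indicate the automated measurements can substitute for the manual ones.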


Subject(s)
Cardiomyopathies/diagnostic imaging , Image Interpretation, Computer-Assisted , Magnetic Resonance Imaging , Myocardium/pathology , Neural Networks, Computer , Automation , Bayes Theorem , Cardiomyopathies/pathology , Cardiomyopathies/physiopathology , Case-Control Studies , Humans , Predictive Value of Tests , Quality Control , Reproducibility of Results , Stroke Volume , Uncertainty , Ventricular Function, Left
7.
MAGMA ; 33(4): 483-493, 2020 Aug.
Article in English | MEDLINE | ID: mdl-31872357

ABSTRACT

OBJECTIVE: Segmentation of thigh muscle and adipose tissue is important for the understanding of musculoskeletal diseases such as osteoarthritis. Therefore, the purpose of this work is (a) to evaluate whether a fully automated approach provides accurate segmentation of muscle and adipose tissue cross-sectional areas (CSA) compared with manual segmentation, and (b) to evaluate the validity of this method based on a previous clinical study. MATERIALS AND METHODS: The segmentation method is based on a U-Net architecture trained on 250 manually segmented thighs from the Osteoarthritis Initiative (OAI). The clinical evaluation is performed on a hold-out test set of bilateral thighs from 48 subjects with unilateral knee pain. RESULTS: The segmentation time of the method is < 1 s, and the method demonstrated high agreement with manual segmentation (Dice similarity coefficient: 0.96 ± 0.01). In the clinical study, the automated method shows that, similar to manual segmentation (-5.7 ± 7.9%, p < 0.001, effect size: 0.69), painful knees display significantly lower quadriceps CSAs than contralateral painless knees (-5.6 ± 7.6%, p < 0.001, effect size: 0.73). DISCUSSION: Automated segmentation of thigh muscle and adipose tissue has high agreement with manual segmentation and can replicate the effect size seen in a clinical study on osteoarthritic pain.
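The agreement metric used here, the Dice similarity coefficient, reduces to a few lines; a sketch on toy binary masks:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum() + eps)

# two overlapping square "muscle" masks on a toy cross-section
auto = np.zeros((100, 100), dtype=bool)
manual = np.zeros((100, 100), dtype=bool)
auto[20:80, 20:80] = True
manual[25:85, 25:85] = True
dsc = dice_coefficient(auto, manual)   # ≈ 0.84 for these toy squares
```

A Dice value of 1.0 means perfect overlap; 0.96 as reported above indicates near-complete agreement of the segmented areas.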


Subject(s)
Magnetic Resonance Imaging/methods , Osteoarthritis, Knee/diagnostic imaging , Pain Measurement/methods , Pattern Recognition, Automated , Adipose Tissue/diagnostic imaging , Aged , Automation , Deep Learning , Diagnosis, Computer-Assisted , Female , Humans , Knee Joint , Male , Middle Aged , Muscle, Skeletal/diagnostic imaging , Neural Networks, Computer , Pain
8.
IEEE Trans Med Imaging ; 38(7): 1633-1642, 2019 07.
Article in English | MEDLINE | ID: mdl-30571618

ABSTRACT

Algorithms for magnetic resonance (MR) image reconstruction from undersampled measurements exploit prior information to compensate for missing k-space data. Deep learning (DL) provides a powerful framework for extracting such information from existing image datasets through learning and then using it for reconstruction. Leveraging this, recent methods employed DL to learn mappings from undersampled to fully sampled images using paired datasets of undersampled and corresponding fully sampled images, integrating prior knowledge implicitly. In this letter, we propose an alternative approach that learns the probability distribution of fully sampled MR images using unsupervised DL, specifically variational autoencoders (VAEs), and uses this as an explicit prior term in reconstruction, completely decoupling the encoding operation from the prior. The resulting reconstruction algorithm enjoys a powerful image prior to compensate for missing k-space data without requiring paired datasets for training, and is not prone to the associated sensitivities, such as deviations in undersampling patterns or coil settings between training and test time. We evaluated the proposed method with T1-weighted images from a publicly available dataset, multi-coil complex images acquired from healthy volunteers (N=8), and images with white matter lesions. The proposed algorithm, using the VAE prior, produced visually high-quality reconstructions and achieved low RMSE values, outperforming most of the alternative methods on the same dataset. On multi-coil complex data, the algorithm yielded accurate magnitude and phase reconstruction results. In the experiments on images with white matter lesions, the method faithfully reconstructed the lesions.


Subject(s)
Deep Learning , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Algorithms , Connectome , Humans
9.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 714-717, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30440496

ABSTRACT

Measurement of head biometrics from fetal ultrasonography images is of key importance in monitoring the healthy development of fetuses. However, the accurate measurement of relevant anatomical structures is subject to large inter-observer variability in the clinic. To address this issue, an automated method utilizing fully convolutional networks (FCNs) is proposed to determine measurements of fetal head circumference (HC) and biparietal diameter (BPD). An FCN was trained on approximately 2000 2D ultrasound images of the head, with annotations provided by 45 different sonographers during routine screening examinations, to perform semantic segmentation of the head. An ellipse is fitted to the resulting segmentation contours to mimic the annotation typically produced by a sonographer. The model's performance was compared with inter-observer variability, where two experts manually annotated 100 test images. Mean absolute model-expert error was slightly better than inter-observer error for HC (1.99 mm vs. 2.16 mm) and comparable for BPD (0.61 mm vs. 0.59 mm), as well as for the Dice coefficient (0.980 vs. 0.980). Our results demonstrate that the model performs at a level similar to a human expert and learns to produce accurate predictions from a large dataset annotated by many sonographers. Additionally, measurements are generated in near real-time at 15 fps on a GPU, which could speed up clinical workflow for both skilled and trainee sonographers.
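Deriving HC from the fitted ellipse requires an ellipse perimeter formula; one common choice is Ramanujan's approximation, sketched below with hypothetical semi-axes (the abstract does not specify which perimeter formula was used):

```python
import math

def ellipse_circumference(a, b):
    """Ramanujan's second approximation to the perimeter of an ellipse
    with semi-axes a and b (exact for a circle, highly accurate otherwise)."""
    h = ((a - b) / (a + b)) ** 2
    return math.pi * (a + b) * (1.0 + 3.0 * h / (10.0 + math.sqrt(4.0 - 3.0 * h)))

# hypothetical semi-axes in mm for a fitted fetal head ellipse
hc = ellipse_circumference(60.0, 45.0)   # head circumference from the perimeter
bpd = 2 * 45.0                           # biparietal diameter from the minor axis
```

In this sketch the minor axis stands in for the BPD; clinically, BPD is measured along a specific anatomical direction, so this is an approximation.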


Subject(s)
Head , Neural Networks, Computer , Ultrasonography, Prenatal , Biometry , Cephalometry , Female , Humans , Pregnancy
10.
IEEE Trans Med Imaging ; 37(11): 2514-2525, 2018 11.
Article in English | MEDLINE | ID: mdl-29994302

ABSTRACT

Delineation of the left ventricular cavity, myocardium, and right ventricle from cardiac magnetic resonance images (multi-slice 2-D cine MRI) is a common clinical task to establish diagnosis. The automation of the corresponding tasks has thus been the subject of intense research over the past decades. In this paper, we introduce the "Automatic Cardiac Diagnosis Challenge" (ACDC) dataset, the largest publicly available and fully annotated dataset for the purpose of cardiac MRI (CMR) assessment. The dataset contains data from 150 CMR recordings acquired on multiple scanners, with reference measurements and classifications from two medical experts. The overarching objective of this paper is to measure how far state-of-the-art deep learning methods can go in assessing CMR, i.e., segmenting the myocardium and the two ventricles as well as classifying pathologies. In the wake of the 2017 MICCAI-ACDC challenge, we report results from deep learning methods provided by nine research groups for the segmentation task and four groups for the classification task. Results show that the best methods faithfully reproduce the expert analysis, leading to a mean correlation score of 0.97 for the automatic extraction of clinical indices and an accuracy of 0.96 for automatic diagnosis. These results clearly open the door to highly accurate and fully automatic analysis of cardiac MR images. We also identify scenarios in which deep learning methods are still failing. Both the dataset and detailed results are publicly available online, and the platform will remain open for new submissions.


Subject(s)
Cardiac Imaging Techniques/methods , Deep Learning , Heart/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Databases, Factual , Female , Heart Diseases/diagnostic imaging , Humans , Male
11.
IEEE Trans Med Imaging ; 36(11): 2204-2215, 2017 11.
Article in English | MEDLINE | ID: mdl-28708546

ABSTRACT

Identifying and interpreting fetal standard scan planes during 2-D ultrasound mid-pregnancy examinations are highly complex tasks, which require years of training. Apart from guiding the probe to the correct location, it can be equally difficult for a non-expert to identify relevant structures within the image. Automatic image processing can provide tools to help experienced as well as inexperienced operators with these tasks. In this paper, we propose a novel method based on convolutional neural networks, which can automatically detect 13 fetal standard views in freehand 2-D ultrasound data as well as provide a localization of the fetal structures via a bounding box. An important contribution is that the network learns to localize the target anatomy using weak supervision based on image-level labels only. The network architecture is designed to operate in real-time while providing optimal output for the localization task. We present results for real-time annotation, retrospective frame retrieval from saved videos, and localization on a very large and challenging dataset consisting of images and video recordings of full clinical anomaly screenings. We found that the proposed method achieved an average F1-score of 0.798 in a realistic classification experiment modeling real-time detection, and obtained a 90.09% accuracy for retrospective frame retrieval. Moreover, an accuracy of 77.8% was achieved on the localization task.


Subject(s)
Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Ultrasonography, Prenatal/methods , Algorithms , Female , Humans , Pregnancy , Video Recording
12.
IEEE Trans Med Imaging ; 36(4): 960-971, 2017 04.
Article in English | MEDLINE | ID: mdl-28113339

ABSTRACT

We present a novel retrospective self-gating method based on manifold alignment (MA), which enables reconstruction of free-breathing, high spatial and temporal resolution abdominal magnetic resonance imaging sequences. Based on a radial golden-angle acquisition trajectory, our method enables a multidimensional self-gating signal to be extracted from the k-space data for more accurate motion representation. The k-space radial profiles are evenly divided into a number of overlapping groups based on their radial angles. MA is then used to simultaneously learn and align the low-dimensional manifolds of all groups and embed them into a common manifold. In the manifold, k-space profiles that represent similar respiratory positions are close to each other. Image reconstruction is performed by combining radial profiles with evenly distributed angles that are close in the manifold. Our method was evaluated on both 2-D and 3-D synthetic and in vivo data sets. On the synthetic data sets, our method achieved high correlation with the ground truth in terms of image intensity and virtual navigator values. Using the in vivo data, compared with a state-of-the-art approach based on center-of-k-space gating, our method was able to make use of much richer profile data for self-gating, resulting in statistically significantly better quantitative measurements in terms of organ sharpness and image gradient entropy.
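The golden-angle trajectory and the angle-based grouping of radial profiles can be sketched as follows; the group count, width, and overlap below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

GOLDEN_ANGLE = 111.246117975  # degrees, for golden-angle radial acquisition

def profile_angles(n_profiles):
    """Acquisition angles of successive radial k-space profiles, modulo 180°."""
    return (np.arange(n_profiles) * GOLDEN_ANGLE) % 180.0

def group_by_angle(angles, n_groups=4, overlap=0.5):
    """Divide radial profiles into overlapping groups by angle, so that each
    group spans a contiguous angular band (band geometry is illustrative)."""
    width = 180.0 / n_groups * (1.0 + overlap)
    centres = np.arange(n_groups) * 180.0 / n_groups
    groups = []
    for c in centres:
        # circular angular distance on the 180-degree wrap
        dist = np.abs((angles - c + 90.0) % 180.0 - 90.0)
        groups.append(np.where(dist <= width / 2.0)[0])
    return groups

angles = profile_angles(200)
groups = group_by_angle(angles)
```

Each group would then yield its own low-dimensional manifold, and the overlap between groups provides the shared structure that MA exploits when aligning them.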


Subject(s)
Magnetic Resonance Imaging , Abdomen , Humans , Image Enhancement , Respiration , Retrospective Studies
13.
Med Image Anal ; 35: 83-100, 2017 01.
Article in English | MEDLINE | ID: mdl-27343436

ABSTRACT

Respiratory motion poses significant challenges in image-guided interventions. In emerging treatments such as MR-guided HIFU or MR-guided radiotherapy, it may cause significant misalignments between interventional road maps obtained pre-procedure and the anatomy during the treatment, and may affect intra-procedural imaging such as MR thermometry. Patient-specific respiratory motion models provide a solution to this problem. They establish a correspondence between the patient motion and simpler surrogate data which can be acquired easily during the treatment. Patient motion can then be estimated during the treatment by acquiring only the simpler surrogate data. In the majority of classical motion modelling approaches, once the correspondence between the surrogate data and the patient motion is established, it cannot be changed unless the model is recalibrated. However, breathing patterns are known to change significantly within the time frame of MR-guided interventions. Thus, the classical motion modelling approach may yield inaccurate motion estimations when the relation between the motion and the surrogate data changes over the duration of the treatment, and frequent recalibration may not be feasible. We propose a novel methodology for motion modelling which has the ability to automatically adapt to new breathing patterns. This is achieved by choosing the surrogate data in such a way that it can be used to estimate the current motion in 3D as well as to update the motion model. In particular, in this work, we use 2D MR slices from different slice positions to build as well as to apply the motion model. We implemented such an autoadaptive motion model by extending our previous work on manifold alignment. We demonstrate a proof of principle of the proposed technique on cardiac-gated data of the thorax and evaluate its adaptive behaviour on realistic synthetic data containing two breathing types generated from 6 volunteers, and on real data from 4 volunteers.
On synthetic data the autoadaptive motion model yielded 21.45% more accurate motion estimations compared to a non-adaptive motion model 10 min after a change in breathing pattern. On real data we demonstrated the method's ability to maintain motion estimation accuracy despite a drift in the respiratory baseline. Due to the cardiac gating of the imaging data, the method is currently limited to one update per heart beat and the calibration requires approximately 12 min of scanning. Furthermore, the method has a prediction latency of 800 ms. These limitations may be overcome in future work by altering the acquisition protocol.


Subject(s)
Algorithms , Computer Simulation , Magnetic Resonance Imaging/methods , Movement , Respiration , Respiratory Mechanics/physiology , Humans , Motion
14.
Inf Process Med Imaging ; 24: 363-74, 2015.
Article in English | MEDLINE | ID: mdl-26221687

ABSTRACT

Manifold alignment can be used to reduce the dimensionality of multiple medical image datasets into a single globally consistent low-dimensional space. This may be desirable in a wide variety of problems, from fusion of different imaging modalities for Alzheimer's disease classification to 4DMR reconstruction from 2D MR slices. Unfortunately, most existing manifold alignment techniques require either a set of prior correspondences or comparability between the datasets in high-dimensional space, which is often not possible. We propose a novel technique for the 'self-alignment' of manifolds (SAM) from multiple dissimilar imaging datasets without prior correspondences or inter-dataset image comparisons. We quantitatively evaluate the method on 4DMR reconstruction from realistic, synthetic sagittal 2D MR slices from 6 volunteers and real data from 4 volunteers. Additionally, we demonstrate the technique for the compounding of two free breathing 3D ultrasound views from one volunteer. The proposed method performs significantly better for 4DMR reconstruction than state-of-the-art image-based techniques.


Subject(s)
Algorithms , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Magnetic Resonance Imaging , Pattern Recognition, Automated/methods , Subtraction Technique , Ultrasonography/methods , Humans , Image Enhancement/methods , Magnetic Resonance Imaging/methods , Reproducibility of Results , Sensitivity and Specificity
15.
Med Image Anal ; 18(7): 939-52, 2014 Oct.
Article in English | MEDLINE | ID: mdl-24972374

ABSTRACT

Respiratory motion is a complicating factor in PET imaging as it leads to blurring of the reconstructed images which adversely affects disease diagnosis and staging. Existing motion correction techniques are often based on 1D navigators which cannot capture the inter- and intra-cycle variabilities that may occur in respiration. MR imaging is an attractive modality for estimating such motion more accurately, and the recent emergence of hybrid PET/MR systems allows the combination of the high molecular sensitivity of PET with the versatility of MR. However, current MR imaging techniques cannot achieve good image contrast inside the lungs in 3D. 2D slices, on the other hand, have excellent contrast properties inside the lungs due to the in-flow of previously unexcited blood, but lack the coverage of 3D volumes. In this work we propose an approach for the robust, navigator-less reconstruction of dynamic 3D volumes from 2D slice data. Our technique relies on the fact that data acquired at different slice positions have similar low-dimensional representations which can be extracted using manifold learning. By aligning these manifolds we are able to obtain accurate matchings of slices with regard to respiratory position. The approach naturally models all respiratory variabilities. We compare our method against two recently proposed MR slice stacking methods for the correction of PET data: a technique based on a 1D pencil beam navigator, and an image-based technique. On synthetic data with a known ground truth our proposed technique produces significantly better reconstructions than all other examined techniques. On real data without a known ground truth the method gives the most plausible reconstructions and high consistency of reconstruction. Lastly, we demonstrate how our method can be applied for the respiratory motion correction of simulated PET/MR data.


Subject(s)
Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Magnetic Resonance Imaging/methods , Multimodal Imaging , Positron-Emission Tomography/methods , Respiratory Mechanics/physiology , Algorithms , Humans , Image Enhancement/methods , Movement/physiology
16.
Inf Process Med Imaging ; 23: 232-43, 2013.
Article in English | MEDLINE | ID: mdl-24683972

ABSTRACT

Respiratory motion is a complicating factor for many applications in medical imaging and there is significant interest in dynamic imaging that can be used to estimate such motion. Magnetic resonance imaging (MRI) is an attractive modality for motion estimation but current techniques cannot achieve good image contrast inside the lungs. Manifold learning is a powerful tool to discover the underlying structure of high-dimensional data. Aligning the manifolds of multiple datasets can be useful to establish relationships between different types of data. However, the current state-of-the-art in manifold alignment is not robust to the wide variations in manifold structure that may occur in clinical datasets. In this work we propose a novel, fully automatic technique for the simultaneous alignment of large numbers of manifolds with varying manifold structure. We apply the technique to reconstruct high-resolution and high-contrast dynamic 3D MRI images from multiple 2D datasets for the purpose of respiratory motion estimation. The proposed method is validated on synthetic data with known ground truth and real data. We demonstrate that our approach can be applied to reconstruct significantly more accurate and consistent dynamic images of the lungs compared to the current state-of-the-art in manifold alignment.


Subject(s)
Imaging, Three-Dimensional/methods , Lung/anatomy & histology , Lung/physiology , Magnetic Resonance Imaging/methods , Pattern Recognition, Automated/methods , Respiratory Mechanics/physiology , Subtraction Technique , Algorithms , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Movement/physiology , Reproducibility of Results , Sensitivity and Specificity