Results 1 - 12 of 12
1.
IEEE Trans Med Imaging ; PP, 2024 Mar 28.
Article in English | MEDLINE | ID: mdl-38546999

ABSTRACT

Accurate myocardial segmentation is crucial in the diagnosis and treatment of myocardial infarction (MI), especially in Late Gadolinium Enhancement (LGE) cardiac magnetic resonance (CMR) images, where the infarcted myocardium exhibits greater brightness. However, segmentation annotations for LGE images are usually not available. Although knowledge gained from CMR images of other modalities with ample annotations, such as balanced-Steady State Free Precession (bSSFP), can be transferred to LGE images, the difference in image distribution between the two modalities (i.e., domain shift) usually causes a significant degradation in model performance. To alleviate this, we propose an end-to-end Variational autoencoder based feature Alignment Module Combining Explicit and Implicit features (VAMCEI). We first re-derive the Kullback-Leibler (KL) divergence between the posterior distributions of the two domains as a measure of the global distribution distance. Second, we calculate a prototype contrastive loss between the two domains, pulling together the prototypes of the same category across domains and pushing apart the prototypes of different categories within or across domains. Finally, a domain discriminator is added to the output space, which indirectly aligns the feature distributions and forces the extracted features to be more favorable for segmentation. In addition, by combining CycleGAN and VAMCEI, we propose a more refined multi-stage unsupervised domain adaptation (UDA) framework for myocardial structure segmentation. We conduct extensive experiments on the MSCMRSeg 2019, MyoPS 2020 and MM-WHS 2017 datasets, and the results demonstrate that our framework outperforms state-of-the-art methods.
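A minimal sketch of the global distribution-distance term described above: the closed-form KL divergence between two diagonal-Gaussian VAE posteriors. The PyTorch framing and all names are illustrative assumptions, not the paper's released code.

```python
import torch

def kl_diag_gaussians(mu_s, logvar_s, mu_t, logvar_t):
    """KL( N(mu_s, var_s) || N(mu_t, var_t) ) for diagonal covariances.

    Inputs are (batch, latent_dim) tensors from the source/target encoders.
    """
    var_s, var_t = logvar_s.exp(), logvar_t.exp()
    kl = 0.5 * (logvar_t - logvar_s + (var_s + (mu_s - mu_t) ** 2) / var_t - 1.0)
    return kl.sum(dim=1).mean()  # sum over latent dims, average over batch
```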

2.
J Cardiovasc Magn Reson ; 26(1): 101005, 2024.
Article in English | MEDLINE | ID: mdl-38302000

ABSTRACT

BACKGROUND: The prognostic value of left ventricular (LV) myocardial trabecular complexity on cardiovascular magnetic resonance (CMR) in dilated cardiomyopathy (DCM) remains unknown. This study aimed to evaluate the prognostic value of LV myocardial trabecular complexity using fractal analysis in patients with DCM. METHODS: Consecutive patients with DCM who underwent CMR between March 2017 and November 2021 at two hospitals were prospectively enrolled. The primary endpoint was defined as the combination of all-cause death and heart failure hospitalization; cardiac death alone was defined as the secondary endpoint. LV trabecular complexity was quantified by measuring the fractal dimension (FD) of the endocardial border based on fractal geometry on CMR. Cox proportional hazards regression and Kaplan-Meier survival analysis were used to examine the associations between variables and outcomes. The incremental prognostic value of FD was assessed in nested models. RESULTS: A total of 403 patients with DCM (49.31 ± 14.68 years, 69% male) were recruited. After a median follow-up of 43 months (interquartile range, 28-55 months), 87 and 24 patients reached the primary and secondary endpoints, respectively. Age, heart rate, New York Heart Association functional class >II, N-terminal pro-B-type natriuretic peptide, LV ejection fraction, LV end-diastolic volume index, LV end-systolic volume index, LV mass index, presence of late gadolinium enhancement, global FD, LV mean apical FD, and LV maximal apical FD were univariably associated with the outcomes (all P < 0.05). After multivariate adjustment, LV maximal apical FD remained a significant independent predictor of outcome [hazard ratio = 1.179 (1.116, 1.246), P < 0.001]. The addition of LV maximal apical FD to the nested models added incremental prognostic value over other common clinical and imaging risk factors (all P < 0.001; C-statistic: 0.84-0.88, P < 0.001). CONCLUSION: LV maximal apical FD was an independent predictor of adverse clinical outcomes in patients with DCM and provided incremental prognostic value over conventional clinical and imaging risk factors.
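Fractal dimension of an endocardial border is conventionally estimated by box counting: cover the binary border with grids of shrinking box size and regress log(box count) on log(size). A generic NumPy sketch, with grid sizes chosen for illustration rather than taken from the study's pipeline:

```python
import numpy as np

def box_counting_fd(border, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension (FD) of a 2D binary border mask."""
    counts = []
    h, w = border.shape
    for s in sizes:
        # pad so an integer grid of s-by-s boxes tiles the image exactly
        H, W = -(-h // s) * s, -(-w // s) * s
        padded = np.zeros((H, W), dtype=bool)
        padded[:h, :w] = border
        blocks = padded.reshape(H // s, s, W // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())  # occupied boxes
    # FD is the negative slope of log(count) against log(box size)
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```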


Subject(s)
Cardiomyopathy, Dilated , Fractals , Magnetic Resonance Imaging, Cine , Predictive Value of Tests , Ventricular Function, Left , Humans , Male , Female , Cardiomyopathy, Dilated/physiopathology , Cardiomyopathy, Dilated/diagnostic imaging , Cardiomyopathy, Dilated/mortality , Middle Aged , Prognosis , Adult , Risk Factors , Prospective Studies , Time Factors , Risk Assessment , Heart Ventricles/diagnostic imaging , Heart Ventricles/physiopathology , Aged , Image Interpretation, Computer-Assisted , Heart Failure/physiopathology , Heart Failure/diagnostic imaging , Heart Failure/mortality , Ventricular Remodeling
3.
IEEE Trans Med Imaging ; 43(6): 2180-2190, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38265913

ABSTRACT

Chest radiography is the most common radiology examination for thoracic disease diagnosis, such as pneumonia. The tremendous number of available chest X-rays has prompted data-driven deep learning models for constructing computer-aided diagnosis systems for thoracic diseases. However, in realistic radiology practice, a deep learning-based model often suffers from performance degradation when trained on data with noisy labels, possibly caused by different types of annotation biases. To this end, we present a novel stochastic neural ensemble learning (SNEL) framework for robust thoracic disease diagnosis using chest X-rays. The core idea of our method is to learn from noisy labels by constructing model ensembles and designing noise-robust loss functions. Specifically, we propose a fast neural ensemble method that collects parameters simultaneously across model instances and along optimization trajectories. Moreover, we propose a loss function that both optimizes a robust measure and characterizes the diversity of the ensemble. We evaluated the proposed SNEL method on three publicly available hospital-scale chest X-ray datasets. The experimental results indicate that our method outperforms competing methods, demonstrating its effectiveness and robustness in learning from noisy labels. Our code is available at https://github.com/hywang01/SNEL.
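One common way to realize "collecting parameters along optimization trajectories" is stochastic-weight-averaging-style snapshot averaging, sketched below as a generic stand-in; the actual SNEL procedure is in the repository linked above.

```python
import copy
import torch

def average_snapshots(model, snapshots):
    """Build an ensemble member by averaging parameter snapshots.

    `snapshots` is a list of {param_name: tensor} dicts saved during training.
    """
    avg = copy.deepcopy(model)
    with torch.no_grad():
        for name, p in avg.named_parameters():
            p.copy_(torch.stack([s[name] for s in snapshots]).mean(dim=0))
    return avg

# during training, e.g. once per epoch after a warm-up period:
# snapshots.append({n: p.detach().clone() for n, p in model.named_parameters()})
```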


Subject(s)
Deep Learning , Radiography, Thoracic , Humans , Radiography, Thoracic/methods , Thoracic Diseases/diagnostic imaging , Stochastic Processes , Radiographic Image Interpretation, Computer-Assisted/methods , Algorithms , Databases, Factual , Neural Networks, Computer
4.
Comput Med Imaging Graph ; 108: 102263, 2023 09.
Article in English | MEDLINE | ID: mdl-37487363

ABSTRACT

Deformable medical image registration achieves fast and accurate alignment between two images, enabling medical professionals to analyze images of different subjects in a unified anatomical space. As such, it plays an important role in many medical image studies. Current deep learning (DL)-based approaches to image registration directly learn the spatial transformation from one image to another, relying on a convolutional neural network plus ground truth or similarity metrics. However, these methods use only a global similarity energy function to evaluate the similarity of a pair of images, which ignores the similarity of regions of interest (ROIs) within the images. This can limit registration accuracy and affect the analysis of specific ROIs. Additionally, DL-based methods often estimate the global spatial transformation directly, without considering the local spatial transformations of ROIs within the images. To address these issues, we propose a novel global-local transformation network with a region similarity constraint that maximizes the similarity of ROIs within the images and estimates global and local spatial transformations simultaneously. Experiments conducted on four public 3D MRI datasets demonstrate that the proposed method achieves the highest registration accuracy and generalization among state-of-the-art methods.
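A region similarity constraint of this kind amounts to adding an ROI-restricted term to the usual global similarity energy. A minimal sketch, assuming MSE as the base similarity (the paper's metric may differ) and an illustrative weight `lam`:

```python
import torch

def region_similarity_loss(warped, fixed, roi_mask, lam=0.5):
    """Global dissimilarity plus an ROI-restricted term (illustrative)."""
    global_term = torch.mean((warped - fixed) ** 2)
    roi = roi_mask.bool()
    local_term = torch.mean((warped[roi] - fixed[roi]) ** 2)
    return global_term + lam * local_term
```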


Subject(s)
Magnetic Resonance Imaging , Neural Networks, Computer , Humans , Image Processing, Computer-Assisted/methods
5.
IEEE J Biomed Health Inform ; 27(7): 3408-3419, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37040240

ABSTRACT

Detailed information about the substructures of the whole heart is usually vital in the diagnosis of cardiovascular diseases and in 3D modeling of the heart. Deep convolutional neural networks have been demonstrated to achieve state-of-the-art performance in 3D cardiac structure segmentation. However, when dealing with high-resolution 3D data, current methods employing tiling strategies usually degrade segmentation performance due to GPU memory constraints. This work develops a two-stage multi-modality whole heart segmentation strategy that adopts an improved Combination of Faster R-CNN and 3D U-Net (CFUN+). More specifically, the bounding box of the heart is first detected by Faster R-CNN, and then the original Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) images of the heart aligned with the bounding box are input into 3D U-Net for segmentation. The proposed CFUN+ method redefines the bounding box loss function by replacing the previous Intersection over Union (IoU) loss with the Complete Intersection over Union (CIoU) loss. Meanwhile, the integration of an edge loss makes the segmentation results more accurate and also improves convergence speed. The proposed method achieves an average Dice score of 91.1% on the Multi-Modality Whole Heart Segmentation (MM-WHS) 2017 challenge CT dataset, which is 5.2% higher than the baseline CFUN model and represents state-of-the-art segmentation results. In addition, the segmentation time for a single heart has been dramatically reduced from a few minutes to less than 6 seconds.
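CIoU augments plain IoU with penalties for center-point distance and aspect-ratio mismatch. Below is the standard 2D formulation in PyTorch for illustration; the paper applies the idea to heart bounding boxes, and the 3D extension is analogous.

```python
import math
import torch

def ciou_loss(pred, target, eps=1e-7):
    """Complete-IoU loss for (N, 4) boxes in (x1, y1, x2, y2) format."""
    # intersection / union
    ix1, iy1 = torch.max(pred[:, 0], target[:, 0]), torch.max(pred[:, 1], target[:, 1])
    ix2, iy2 = torch.min(pred[:, 2], target[:, 2]), torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(0) * (iy2 - iy1).clamp(0)
    wp, hp = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    wt, ht = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]
    iou = inter / (wp * hp + wt * ht - inter + eps)
    # squared center distance over squared enclosing-box diagonal
    cxp, cyp = (pred[:, 0] + pred[:, 2]) / 2, (pred[:, 1] + pred[:, 3]) / 2
    cxt, cyt = (target[:, 0] + target[:, 2]) / 2, (target[:, 1] + target[:, 3]) / 2
    rho2 = (cxp - cxt) ** 2 + (cyp - cyt) ** 2
    ex1, ey1 = torch.min(pred[:, 0], target[:, 0]), torch.min(pred[:, 1], target[:, 1])
    ex2, ey2 = torch.max(pred[:, 2], target[:, 2]), torch.max(pred[:, 3], target[:, 3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + eps
    # aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (torch.atan(wt / (ht + eps)) - torch.atan(wp / (hp + eps))) ** 2
    alpha = v / (1 - iou + v + eps)
    return (1 - iou + rho2 / c2 + alpha * v).mean()
```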


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Humans , Image Processing, Computer-Assisted/methods , Thorax , Magnetic Resonance Imaging/methods , Heart/diagnostic imaging
6.
Comput Biol Med ; 157: 106748, 2023 05.
Article in English | MEDLINE | ID: mdl-36958235

ABSTRACT

Fluorodeoxyglucose Positron Emission Tomography (FDG-PET) provides metabolic information, while Computed Tomography (CT) provides the anatomical context of tumors. Combined PET-CT segmentation helps in computer-assisted tumor diagnosis, staging, and treatment planning. Current state-of-the-art models mainly rely on early or late fusion techniques. These methods, however, rarely learn PET-CT complementary features and cannot efficiently correlate anatomical and metabolic features. These drawbacks can be removed by intermediate fusion; however, it produces inaccurate segmentations when the modalities contain heterogeneous textures, and it requires massive computation. In this work, we propose AATSN (Anatomy Aware Tumor Segmentation Network), which extracts anatomical CT features and then fuses them with PET features at an intermediate stage through a fusion-attention mechanism. Our anatomy-aware fusion-attention mechanism fuses selectively useful CT and PET features instead of the full feature set, which not only improves network performance but also requires fewer resources. Furthermore, our model scales to both 2D images and 3D volumes. The proposed model is rigorously trained, tested, evaluated, and compared to the state of the art through several ablation studies on the largest available datasets. We achieve a Dice score of 0.8104 and a median HD95 of 2.11 in the 3D setup, and a Dice score of 0.6756 in the 2D setup. We demonstrate that AATSN achieves a significant performance gain while remaining lightweight compared to state-of-the-art methods. The implications of AATSN include improved tumor delineation for diagnosis, analysis, and radiotherapy treatment.
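A fusion-attention block of the kind described can be sketched as CT features gating which PET features pass through. This is an illustrative module under that assumption, not the exact AATSN block:

```python
import torch
import torch.nn as nn

class FusionAttention(nn.Module):
    """Illustrative fusion-attention: CT context gates PET features."""

    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels * 2, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, ct_feat, pet_feat):
        attn = self.gate(torch.cat([ct_feat, pet_feat], dim=1))  # values in [0, 1]
        # pass attended PET features, filling the remainder with anatomical context
        return attn * pet_feat + (1 - attn) * ct_feat
```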


Subject(s)
Neoplasms , Positron Emission Tomography Computed Tomography , Humans , Positron Emission Tomography Computed Tomography/methods , Tomography, X-Ray Computed/methods , Positron-Emission Tomography/methods , Radiotherapy Planning, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods
7.
Comput Biol Med ; 151(Pt A): 106218, 2022 12.
Article in English | MEDLINE | ID: mdl-36308898

ABSTRACT

BACKGROUND: Myocardial pathology segmentation plays a crucial role in the diagnosis and treatment of myocardial infarction (MI). However, manual segmentation is time-consuming and labor-intensive, and requires substantial professional knowledge and clinical experience. METHODS: In this work, we develop an automatic and accurate coarse-to-fine myocardial pathology segmentation framework based on the U-Net++ and EfficientSeg models. The U-Net++ network with deep supervision is first applied to delineate the cardiac structures from multi-sequence cardiac magnetic resonance (CMR) images and generate a coarse segmentation map. The coarse segmentation map, together with the three-sequence CMR data, is then sent to EfficientSeg-B1 for fine segmentation, that is, further segmentation of myocardial scar and edema areas. In addition, we replace the original cross-entropy term with the Focal loss to encourage the model to pay more attention to the pathological areas. RESULTS: The proposed segmentation approach is tested on the public Myocardial Pathology Segmentation Challenge (MyoPS 2020) dataset. Experimental results demonstrate that our solution achieves an average Dice score of 0.7148 ± 0.2213 for scar, an average Dice score of 0.7439 ± 0.1011 for edema + scar, and a final average score of 0.7294 on the 20 testing sets, all of which outperform the first-place method in the competition. Moreover, extensive ablation experiments show that the two-stage strategy with the Focal loss greatly improves the segmentation quality of pathological areas. CONCLUSION: Given its effectiveness and superiority, our method can further facilitate myocardial pathology segmentation in medical practice.
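The Focal loss replaces cross-entropy by down-weighting well-classified pixels so the rare pathological areas dominate the gradient. A standard multi-class form is sketched below; the gamma default is an assumption, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, target, gamma=2.0):
    """Multi-class focal loss for dense segmentation.

    logits: (N, C, H, W) raw scores; target: (N, H, W) integer class map.
    """
    logp = F.log_softmax(logits, dim=1)
    logp_t = logp.gather(1, target.unsqueeze(1)).squeeze(1)  # log p of true class
    p_t = logp_t.exp()
    return (-(1 - p_t) ** gamma * logp_t).mean()  # (1-p)^gamma damps easy pixels
```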


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Humans , Image Processing, Computer-Assisted/methods , Cicatrix , Magnetic Resonance Imaging/methods , Heart
8.
Comput Biol Med ; 136: 104726, 2021 09.
Article in English | MEDLINE | ID: mdl-34371318

ABSTRACT

BACKGROUND: A novel Generative Adversarial Network (GAN) based bidirectional cross-modality unsupervised domain adaptation (GBCUDA) framework is developed for cardiac image segmentation, which can effectively tackle the degradation of a network's segmentation performance when adapting to a target domain without ground truth labels. METHOD: GBCUDA uses GAN for image alignment, applies adversarial learning to extract image features, and gradually enhances the domain invariance of the extracted features. The shared encoder performs an end-to-end learning task in which features that differ between the two domains complement each other. A self-attention mechanism is incorporated into the GAN, which can generate details using cues from all feature positions. Furthermore, spectral normalization is implemented to stabilize GAN training, and a knowledge distillation loss is introduced to process high-level feature maps in order to better complete the cross-modality segmentation task. RESULTS: The effectiveness of the proposed unsupervised domain adaptation framework is tested on the Multi-Modality Whole Heart Segmentation (MM-WHS) Challenge 2017 dataset. The proposed method improves the average Dice from 74.1% to 81.5% for the four cardiac substructures and reduces the average symmetric surface distance (ASD) from 7.0 to 5.8 on CT images. For MRI images, our framework trained on CT images gives an average Dice of 59.2% and reduces the average ASD from 5.7 to 4.9. CONCLUSIONS: The evaluation results demonstrate our method's effectiveness on domain adaptation and its superiority over current state-of-the-art domain adaptation methods.
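Spectral normalization is available directly in PyTorch, and a SAGAN-style self-attention layer lets each position attend to all others. A compact sketch combining both, with illustrative channel arithmetic (this is the generic construction, not the GBCUDA implementation):

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class SelfAttention2d(nn.Module):
    """Minimal SAGAN-style self-attention with spectrally normalized convs."""

    def __init__(self, c):  # assumes c >= 8 so the bottleneck c // 8 is valid
        super().__init__()
        self.q = spectral_norm(nn.Conv2d(c, c // 8, 1))
        self.k = spectral_norm(nn.Conv2d(c, c // 8, 1))
        self.v = spectral_norm(nn.Conv2d(c, c, 1))
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        n, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)   # (N, HW, C//8)
        k = self.k(x).flatten(2)                   # (N, C//8, HW)
        attn = torch.softmax(q @ k, dim=-1)        # (N, HW, HW) over key positions
        v = self.v(x).flatten(2)                   # (N, C, HW)
        out = (v @ attn.transpose(1, 2)).view(n, c, h, w)
        return x + self.gamma * out
```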


Subject(s)
Heart , Image Processing, Computer-Assisted , Heart/diagnostic imaging , Magnetic Resonance Imaging
9.
Comput Methods Programs Biomed ; 208: 106197, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34102562

ABSTRACT

Accurate and automatic segmentation of the hippocampus plays a vital role in the diagnosis and treatment of nervous system diseases. However, due to the anatomical variability of different subjects, registered atlas images are not always perfectly aligned with the target image, so hippocampus segmentation remains challenging. In this paper, we propose a robust discriminative label fusion method under the multi-atlas framework. It is a patch-embedding label fusion method based on a conditional random field (CRF) model that integrates metric learning and graph cuts in a unified formulation. Unlike most current label fusion methods with fixed (non-learned) distance metrics, a novel distance metric learning is presented to enhance discriminative observation and is embedded into the unary potential function. In particular, Bayesian inference is utilized to extend a classic distance metric learning method, in which large-margin constraints are used instead of pairwise constraints to obtain a more robust distance metric. Pairwise homogeneity is fully considered in the spatial prior term based on classification labels and voxel intensity. The resulting integrated formulation is globally minimized by the efficient graph cuts algorithm. Further, a sparse patch-based method is utilized to polish the obtained segmentation results in label space. The proposed method is evaluated on the IABA and ADNI datasets for hippocampus segmentation. The Dice scores achieved by our method are 87.2%, 87.8%, 88.2% and 88.9% for the left and right hippocampus on the two datasets, while the best Dice scores obtained by other methods are 86.0%, 86.9%, 86.8% and 88.0%, respectively. Experiments show that our approach achieves higher accuracy than state-of-the-art methods. We hope the proposed model can be combined with other promising distance measurement algorithms.
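At its core, distance metric learning for label fusion replaces the fixed Euclidean patch distance with a learned Mahalanobis form. A minimal sketch, with M assumed to be learned elsewhere and all names illustrative:

```python
import numpy as np

def learned_patch_distance(x, y, M):
    """Squared Mahalanobis distance d_M(x, y) = (x - y)^T M (x - y).

    M is a learned positive semi-definite matrix; with M = I this reduces
    to the fixed Euclidean metric most label fusion methods use.
    """
    d = x - y
    return float(d @ M @ d)

# illustrative use: weight an atlas patch's label vote by learned similarity
# w = np.exp(-learned_patch_distance(target_patch, atlas_patch, M))
```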


Subject(s)
Image Interpretation, Computer-Assisted , Magnetic Resonance Imaging , Algorithms , Bayes Theorem , Hippocampus/diagnostic imaging , Humans
10.
Comput Methods Programs Biomed ; 206: 106142, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34004500

ABSTRACT

BACKGROUND AND OBJECTIVE: Automatic cardiac segmentation plays a crucial role in the diagnosis and quantification of cardiovascular diseases. METHODS: This paper proposes a new cardiac segmentation method for short-axis Magnetic Resonance Imaging (MRI) images, called attention U-Net architecture with input image pyramid and deep supervised output layers (AID), which can fully automatically learn to pay attention to target structures of various sizes and shapes. During training, the model learns to emphasize the desired features and suppress irrelevant areas in the original images, effectively improving the accuracy of cardiac segmentation. At the same time, we introduce the Focal Tversky Loss (FTL), which effectively addresses the severe imbalance between the target class and the background class in cardiac image segmentation. To obtain a better representation of intermediate features, we add a multi-scale input pyramid to the attention network. RESULTS: The proposed cardiac segmentation technique is tested on the public Left Ventricle Segmentation Challenge (LVSC) dataset, achieving 0.75, 0.87 and 0.92 for Jaccard index, sensitivity and specificity, respectively. Experimental results demonstrate that the proposed method improves segmentation accuracy compared with the standard U-Net and achieves performance comparable to the most advanced fully automated methods. CONCLUSIONS: Given its effectiveness and advantages, the proposed method can facilitate cardiac segmentation of short-axis MRI images in clinical practice.
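The Focal Tversky Loss weights false negatives and false positives asymmetrically via the Tversky index, then focuses on hard examples through an exponent. A minimal binary sketch; the hyperparameter values shown are common defaults from the FTL literature, not necessarily those of the paper:

```python
import torch

def focal_tversky_loss(probs, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss for a foreground probability map vs. a binary target."""
    tp = (probs * target).sum()
    fn = ((1 - probs) * target).sum()        # weighted by alpha (recall emphasis)
    fp = (probs * (1 - target)).sum()        # weighted by beta
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1 - tversky) ** gamma            # gamma < 1 focuses on hard cases
```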


Subject(s)
Magnetic Resonance Imaging, Cine , Neural Networks, Computer , Heart/diagnostic imaging , Heart Ventricles , Image Processing, Computer-Assisted , Magnetic Resonance Imaging
11.
Int J Numer Method Biomed Eng ; 36(7): e3348, 2020 07.
Article in English | MEDLINE | ID: mdl-32368868

ABSTRACT

Intravascular ultrasound (IVUS) has been widely used to capture cross-sectional lumen frames of the inner wall of coronary arteries. This imaging modality provides detailed information on lumen contour shape, which is important for clinical diagnosis and analysis of cardiovascular diseases. Numerous learning-based techniques have recently become popular for coronary artery segmentation due to their impressive results. In this work, a supervised machine learning method for coronary artery lumen segmentation with high accuracy and minimal user interaction is designed. A fully discriminative lumen segmentation method is developed that jointly learns the classifier on which the weak learners rely and the features of that classifier. Additionally, the theoretical underpinnings of the Gradient Boosting framework used in this work and its quadratic approximation are presented. The proposed algorithm is tested on the public datasets of the lumen boundary detection in IVUS challenge held at MICCAI 2011, achieving a higher average Jaccard similarity of 96.8% and a lower mean error distance of 0.55 (in Cartesian coordinates), which shows higher accuracy compared to existing learning-based methods. Moreover, three real patient IVUS datasets are used to evaluate the performance of the proposed coronary artery lumen segmentation algorithm, which achieves lower percent errors in lumen area of 1.861% ± 0.965%, 1.968% ± 0.864%, and 1.671% ± 0.584%, respectively, compared to the manually measured lumen area (ground truth). The proposed lumen segmentation method is found to be superior to the latest learning-based segmentation techniques. Given its efficiency and robustness, our method has great potential for IVUS image processing and coronary artery segmentation and quantification. NOVELTY STATEMENT: The main contributions are summarized as follows: a detailed review of related work on learning-based coronary artery lumen segmentation in intravascular ultrasound images; a fully discriminative lumen segmentation method that jointly learns the classifier our weak learners rely on and the features of that classifier; and the theoretical supports of the Gradient Boosting framework and its quadratic approximation used in this work.
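The quadratic approximation mentioned above refers to a second-order Taylor expansion of the loss, which gives each boosting step a closed-form Newton update. A generic sketch of the resulting leaf value (the regularization term and names are assumptions, not the authors' exact learner):

```python
import numpy as np

def newton_leaf_value(grad, hess, reg=1.0):
    """Optimal leaf value under the quadratic (2nd-order Taylor) approximation.

    Minimizing sum_i [g_i * w + 0.5 * h_i * w^2] + 0.5 * reg * w^2 over the
    examples in a leaf yields the closed form w* = -sum(g) / (sum(h) + reg).
    """
    return -np.sum(grad) / (np.sum(hess) + reg)
```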


Subject(s)
Algorithms , Coronary Vessels , Supervised Machine Learning , Coronary Vessels/diagnostic imaging , Cross-Sectional Studies , Humans , Ultrasonography , Ultrasonography, Interventional
12.
Cardiovasc Eng Technol ; 7(2): 159-69, 2016 06.
Article in English | MEDLINE | ID: mdl-27140197

ABSTRACT

CT angiography (CTA) is a clinically indicated test for the assessment of coronary luminal stenosis that requires centerline extraction. There is currently no centerline extraction algorithm that is automatic, real-time and highly accurate. Therefore, we sought to (i) develop a hybrid approach incorporating fast marching and Runge-Kutta based methods for the extraction of coronary artery centerlines from CTA; (ii) evaluate the accuracy of the present method against Van's method using ground truth centerlines as a reference; and (iii) evaluate the coronary lumen area derived from our centerline method against intravascular ultrasound (IVUS) as the standard of reference. The proposed method was found to be more computationally efficient and performed better than Van's method in terms of overlap measures (i.e., OV: [Formula: see text] vs. [Formula: see text]; OF: [Formula: see text] vs. [Formula: see text]; and OT: [Formula: see text] vs. [Formula: see text], all [Formula: see text]). In comparison with IVUS-derived coronary lumen area, the proposed approach was more accurate than Van's method. This hybrid approach incorporating fast marching and Runge-Kutta based methods offers fast and accurate extraction of the centerline as well as the lumen area, and may garner wider clinical potential as a real-time coronary stenosis assessment tool.
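A hybrid of this kind typically computes a fast-marching travel-time map from a seed and then backtracks the centerline by Runge-Kutta descent on its gradient. The 2D sketch below uses scikit-fmm (assumed installed) for the travel time and a simple RK4 loop; all parameters and names are illustrative, not the authors' implementation.

```python
import numpy as np
import skfmm  # scikit-fmm; assumed available

def extract_centerline(speed, seed, end, step=0.5, n_max=10000):
    """Fast-marching travel time from `seed`, then RK4 descent from `end`.

    speed: 2D array of propagation speeds (e.g., high inside the vessel);
    assumes the whole domain is reachable from the seed.
    """
    phi = np.ones_like(speed)
    phi[seed] = -1                        # zero level set at the seed point
    T = np.asarray(skfmm.travel_time(phi, speed))
    gy, gx = np.gradient(T)

    def vel(p):                           # normalized descent direction
        i = int(np.clip(round(p[0]), 0, T.shape[0] - 1))
        j = int(np.clip(round(p[1]), 0, T.shape[1] - 1))
        g = np.array([gy[i, j], gx[i, j]])
        return -g / (np.linalg.norm(g) + 1e-8)

    p, path = np.array(end, float), [np.array(end, float)]
    for _ in range(n_max):
        k1 = vel(p); k2 = vel(p + 0.5 * step * k1)
        k3 = vel(p + 0.5 * step * k2); k4 = vel(p + step * k3)
        p = p + (step / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(p.copy())
        if np.linalg.norm(p - np.array(seed, float)) < 1.0:
            break                         # reached the seed
    return np.array(path)
```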


Subject(s)
Algorithms , Coronary Angiography/methods , Image Processing, Computer-Assisted/methods , Databases, Factual , Humans , Reproducibility of Results