Results 1 - 14 of 14
1.
Article in English | MEDLINE | ID: mdl-38969836

ABSTRACT

Heart failure (HF) is associated with high rates of morbidity and mortality. The value of deep learning survival prediction models using chest radiographs in patients with heart failure is currently unclear. The aim of our study was to develop and validate a deep learning survival prediction model using chest X-ray (DLSPCXR) in patients with HF. The study retrospectively enrolled 353 patients with HF who underwent chest X-ray (CXR) at our institution between March 2012 and March 2017. The dataset was randomly divided into training (n = 247) and validation (n = 106) sets. Univariate and multivariate Cox analyses were conducted on the training dataset to develop clinical and imaging survival prediction models. The DLSPCXR was trained, and the selected clinical parameters were incorporated into DLSPCXR to establish a new model called DLSPinteg. Discrimination performance was evaluated using the time-dependent area under the receiver operating characteristic curve (TD AUC) at 1-, 3-, and 5-year survival. DeLong's test was employed to compare AUCs between models. The risk-discrimination capability of the optimal model was evaluated with the Kaplan-Meier curve. In multivariable Cox analysis, older age, higher N-terminal pro-B-type natriuretic peptide (NT-proBNP), systolic pulmonary artery pressure (sPAP) > 50 mmHg, New York Heart Association (NYHA) functional class III-IV, and cardiothoracic ratio (CTR) ≥ 0.62 on CXR were independent predictors of poor prognosis in patients with HF. Based on receiver operating characteristic (ROC) curve analysis, DLSPCXR predicted 5-year survival better than the imaging Cox model in the validation cohort (AUC: 0.757 vs. 0.561, P = 0.01). DLSPinteg, the optimal model, outperformed the clinical Cox model (AUC: 0.826 vs. 0.633, P = 0.03) and the imaging Cox model (AUC: 0.826 vs. 0.555, P < 0.001), and was numerically higher than DLSPCXR alone (AUC: 0.826 vs. 0.767, P = 0.06).
Deep learning models using chest radiographs can predict survival in patients with heart failure with acceptable accuracy.
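The abstract evaluates risk discrimination with a Kaplan-Meier curve. As a reference point only (not the authors' code), a minimal pure-Python sketch of the product-limit estimator; it processes tied event times one subject at a time (which yields the same product) and assumes censored subjects at a tied time appear after deaths in the input:

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.
    times: follow-up times; events: 1 = event observed, 0 = censored.
    Returns a list of (time, S(t)) pairs at each observed event time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    curve = []
    for i in order:
        if events[i] == 1:
            # Each observed event multiplies S(t) by the survival fraction.
            surv *= (at_risk - 1) / at_risk
            curve.append((times[i], surv))
        at_risk -= 1  # events and censored subjects both leave the risk set
    return curve
```

Censored subjects reduce the risk set without stepping the curve down, which is what separates this estimator from a naive survival fraction.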

2.
Eur Radiol ; 2024 May 15.
Article in English | MEDLINE | ID: mdl-38750169

ABSTRACT

OBJECTIVES: To evaluate signal enhancement ratio (SER) for tissue characterization and prognosis stratification in pancreatic adenocarcinoma (PDAC), with quantitative histopathological analysis (QHA) as the reference standard. METHODS: This retrospective study included 277 PDAC patients who underwent multi-phase contrast-enhanced (CE) MRI and whole-slide imaging (WSI) from three centers (2015-2021). SER is defined as (SIlt - SIpre)/(SIea - SIpre), where SIpre, SIea, and SIlt represent the signal intensity of the tumor in pre-contrast, early-, and late post-contrast images, respectively. Deep-learning algorithms were implemented to quantify the stroma, epithelium, and lumen of PDAC on WSIs. Correlation, regression, and Bland-Altman analyses were utilized to investigate the associations between SER and QHA. The prognostic significance of SER on overall survival (OS) was evaluated using Cox regression analysis and Kaplan-Meier curves. RESULTS: The internal dataset comprised 159 patients, which was further divided into training, validation, and internal test datasets (n = 60, 41, and 58, respectively). Sixty-five and 53 patients were included in two external test datasets. Excluding lumen, SER demonstrated significant correlations with stroma (r = 0.29-0.74, all p < 0.001) and epithelium (r = -0.23 to -0.71, all p < 0.001) across a wide post-injection time window (range, 25-300 s). Bland-Altman analysis revealed a small bias between SER and QHA for quantifying stroma/epithelium in individual training, validation (all within ± 2%), and three test datasets (all within ± 4%). Moreover, SER-predicted low stromal proportion was independently associated with worse OS (HR = 1.84 (1.17-2.91), p = 0.009) in training and validation datasets, which remained significant across three combined test datasets (HR = 1.73 (1.25-2.41), p = 0.001). CONCLUSION: SER of multi-phase CE-MRI allows for tissue characterization and prognosis stratification in PDAC. 
CLINICAL RELEVANCE STATEMENT: The signal enhancement ratio of multi-phase CE-MRI can serve as a novel imaging biomarker for characterizing tissue composition and holds the potential for improving patient stratification and therapy in PDAC.
KEY POINTS: Imaging biomarkers are needed to better characterize tumor tissue in pancreatic adenocarcinoma. Signal enhancement ratio (SER)-predicted stromal/epithelial proportion showed good agreement with histopathology measurements across three distinct centers. SER-predicted stromal proportion was demonstrated to be an independent prognostic factor for OS in PDAC.
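The abstract defines SER as (SIlt - SIpre)/(SIea - SIpre). A direct one-line translation of that formula (illustrative helper, using hypothetical signal-intensity values):

```python
def signal_enhancement_ratio(si_pre, si_early, si_late):
    """SER = (SI_lt - SI_pre) / (SI_ea - SI_pre), per the abstract's definition.
    si_pre, si_early, si_late: tumor signal intensity in the pre-contrast,
    early post-contrast, and late post-contrast images."""
    return (si_late - si_pre) / (si_early - si_pre)
```

An SER below 1 indicates washout between the early and late phases; values above 1 indicate persistent enhancement, the pattern the abstract links to stromal proportion.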

3.
Int J Surg ; 110(2): 740-749, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38085810

ABSTRACT

BACKGROUND: Undetectable occult liver metastases preclude long-term survival in pancreatic ductal adenocarcinoma (PDAC). This study aimed to develop a radiomics-based model to predict occult liver metastases and assess its prognostic capacity for survival. MATERIALS AND METHODS: Patients who underwent surgical resection and were pathologically proven to have PDAC were recruited retrospectively from five tertiary hospitals between January 2015 and December 2020. Radiomics features were extracted from tumors, and the radiomics-based model was developed in the training cohort using LASSO-logistic regression. The model's performance was assessed in the internal and external validation cohorts using the area under the receiver operating characteristic curve (AUC). The association of the model's risk stratification with progression-free survival (PFS) and overall survival (OS) was then examined using Cox regression analysis and the log-rank test. RESULTS: A total of 438 patients [mean (SD) age, 62.0 (10.0) years; 255 (58.2%) male] were divided into the training cohort (n = 235), internal validation cohort (n = 100), and external validation cohort (n = 103). The radiomics-based model yielded AUCs of 0.73 (95% CI: 0.66-0.80), 0.72 (95% CI: 0.62-0.80), and 0.71 (95% CI: 0.61-0.80) in the training, internal validation, and external validation cohorts, respectively, higher than the preoperative clinical model. The model's risk stratification was an independent predictor of PFS and OS (all P < 0.05). Furthermore, patients in the model's high-risk group had consistently shorter PFS and OS at each TNM stage (all P < 0.05). CONCLUSION: The proposed radiomics-based model provides a promising tool to predict occult liver metastases and carries significant prognostic value.


Subject(s)
Carcinoma, Pancreatic Ductal , Liver Neoplasms , Pancreatic Neoplasms , Humans , Male , Middle Aged , Female , Radiomics , Retrospective Studies , Pancreatic Neoplasms/diagnostic imaging , Pancreatic Neoplasms/surgery , Carcinoma, Pancreatic Ductal/diagnostic imaging , Carcinoma, Pancreatic Ductal/surgery , Liver Neoplasms/diagnostic imaging , Liver Neoplasms/surgery
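The AUCs reported above are rank statistics. As a minimal sketch (not the study's pipeline), the AUC can be computed directly as the probability that a random positive case outscores a random negative one, with half-credit for ties:

```python
def roc_auc(scores, labels):
    """AUC as P(score_pos > score_neg) with 0.5 credit for ties
    (equivalent to the normalized Mann-Whitney U statistic)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Count pairwise "wins" of positives over negatives.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This O(P x N) form is fine for cohort-sized data; production code would use a sort-based O(n log n) implementation.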
4.
IEEE Trans Med Imaging ; 42(4): 910-921, 2023 04.
Article in English | MEDLINE | ID: mdl-36331637

ABSTRACT

The low-dose computed tomography (LDCT) technique, which reduces radiation harm to the human body, is attracting increasing interest in the medical imaging field. Because image quality is degraded by the low radiation dose, LDCT exams require specialized reconstruction methods or denoising algorithms. However, most recent effective methods overlook the inner structure of the original projection data (sinogram), which limits their denoising ability. The inner structure of the sinogram represents special characteristics of the data in the sinogram domain; by maintaining this structure while denoising, the noise can be markedly suppressed. We therefore propose an LDCT denoising network, the Sinogram Inner-Structure Transformer (SIST), which reduces noise by exploiting the inner structure in the sinogram domain. Specifically, we study the CT imaging mechanism and the statistical characteristics of the sinogram to design a sinogram inner-structure loss, comprising global and local inner-structure terms, for restoring high-quality CT images. In addition, we propose a sinogram transformer module to better extract sinogram features: its self-attention mechanism exploits interrelations between projections at different view angles, achieving outstanding performance in sinogram denoising. Furthermore, to improve performance in the image domain, we propose an image reconstruction module that denoises complementarily in both the sinogram and image domains.


Subject(s)
Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Humans , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Algorithms , Endoscopy
5.
Article in English | MEDLINE | ID: mdl-35895657

ABSTRACT

In this work, we address the task of few-shot medical image segmentation (MIS) with a novel framework based on the learning registration to learn segmentation (LRLS) paradigm. To cope with the lack of authenticity, diversity, and robustness in existing LRLS frameworks, we propose the better registration better segmentation (BRBS) framework with three main contributions that are experimentally shown to have substantial practical merit. First, we improve authenticity in the registration-based generation program with a knowledge consistency constraint strategy that constrains the registration network to learn according to domain knowledge. This yields semantic-aligned and topology-preserved registration, allowing the generation program to output new data with great spatial and style authenticity. Second, we study the diversity of the generation process in depth and propose a space-style sampling program, which models the transformation path of style and spatial change between a few atlases and numerous unlabeled images. Sampling along these transformation paths supplies much more diverse spatial and style features to the generated data, effectively improving diversity. Third, we are the first to highlight robustness in the learning of segmentation under the LRLS paradigm, and we propose mix misalignment regularization, which simulates misalignment distortion and constrains the network to reduce its degree of fitting in misaligned regions, building regularization for these regions and improving the robustness of segmentation learning. Without bells and whistles, our approach achieves new state-of-the-art performance in few-shot MIS on two challenging tasks, outperforming existing LRLS-based few-shot methods.
We believe this novel and effective framework will provide a powerful few-shot benchmark for the medical imaging field and efficiently reduce the costs of medical image research. All of our code will be made publicly available online.

6.
Int J Comput Assist Radiol Surg ; 17(6): 1115-1124, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35384552

ABSTRACT

PURPOSE: Clinical rib fracture diagnosis via computed tomography (CT) screening has attracted much attention in recent years. However, automated and accurate segmentation remains a challenging task due to the large volumes of 3D CT data involved. Down-sampling is often required to meet computational constraints, but it may degrade segmentation performance. METHODS: A new multi-angle projection network (MAPNet) is proposed for accurately segmenting rib fractures with a deep learning approach. The method incorporates multi-angle projection images to complementarily and comprehensively extract rib characteristics using a rib extraction (RE) module and fracture features using a fracture segmentation (FS) module. A multi-angle projection fusion (MPF) module is designed to fuse multi-angle spatial features. RESULTS: MAPNet captures more detailed rib fracture features than several commonly used segmentation networks. Our method achieves better performance in accuracy (88.06 ± 6.97%), sensitivity (89.26 ± 5.69%), and specificity (87.58 ± 7.66%), as well as in classical criteria such as Dice (85.41 ± 3.35%), intersection over union (IoU, 80.37 ± 4.63%), and Hausdorff distance (HD, 4.34 ± 3.1). CONCLUSION: We propose a rib fracture segmentation technique for automatic fracture diagnosis. The method avoids down-sampling of 3D CT data through a projection technique. Experimental results show excellent potential for clinical application.


Subject(s)
Deep Learning , Rib Fractures , Humans , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Rib Fractures/diagnostic imaging , Tomography, X-Ray Computed/methods
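The Dice and IoU criteria reported above have simple set-overlap definitions. A minimal sketch over flat binary masks (illustrative only, not the paper's evaluation code):

```python
def dice_and_iou(pred, target):
    """Dice and IoU for binary masks given as flat 0/1 sequences.
    Dice = 2|A∩B| / (|A| + |B|); IoU = |A∩B| / |A∪B|."""
    inter = sum(p * t for p, t in zip(pred, target))
    p_sum, t_sum = sum(pred), sum(target)
    union = p_sum + t_sum - inter
    # Convention: two empty masks count as a perfect match.
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou
```

Note the fixed relationship Dice = 2·IoU / (1 + IoU): Dice is always at least as large as IoU, which is why the paper's Dice (85.41%) exceeds its IoU (80.37%).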
7.
IEEE J Biomed Health Inform ; 26(8): 3938-3949, 2022 08.
Article in English | MEDLINE | ID: mdl-35254999

ABSTRACT

Susceptibility weighted imaging (SWI) is a routine magnetic resonance imaging (MRI) sequence that combines magnitude and high-pass filtered phase images to qualitatively enhance image contrasts related to tissue susceptibility. Tremendous amounts of high-pass filtered phase data, with low signal-to-noise ratio and incomplete background field removal, have thus been collected under default clinical settings. Since SWI cannot quantitatively estimate susceptibility, deriving quantitative susceptibility mapping (QSM) directly from these redundant phase data is non-trivial, yet doing so would effectively promote the mining of previously collected SWI data. To this end, a novel deep-learning-based SWI-to-QSM network (S2Q-Net) is proposed for QSM reconstruction from SWI high-pass filtered phase data. S2Q-Net first estimates the edge maps of QSM to integrate an edge prior into its features, helping the network reconstruct QSM with realistic and clear tissue boundaries. Furthermore, a novel Second-order Cross Dense Block is proposed in S2Q-Net, which captures rich inter-region interactions to provide more non-local phase information related to local tissue susceptibility. Experimental results on both simulated and in-vivo data indicate its superiority over all compared deep-learning-based QSM reconstruction methods.


Subject(s)
Brain , Magnetic Resonance Imaging , Brain/pathology , Brain Mapping/methods , Contrast Media , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods
8.
IEEE J Biomed Health Inform ; 26(3): 1177-1187, 2022 03.
Article in English | MEDLINE | ID: mdl-34232899

ABSTRACT

Deformable medical image registration estimates the deformation that aligns the regions of interest (ROIs) of two images to the same spatial coordinate system. However, recent unsupervised registration models have only correspondence ability without perception, causing misalignment of blurred anatomies and distortion of task-irrelevant backgrounds. Label-constrained (LC) registration models embed perception ability via labels, but the lack of texture constraints in labels and the expensive labeling costs cause distortion within ROIs and overfitted perception. We propose the first few-shot deformable medical image registration framework, Perception-Correspondence Registration (PC-Reg), which embeds perception ability into registration models with only a few labels, greatly improving registration accuracy and reducing distortion. 1) We propose Perception-Correspondence Decoupling, which decouples the perception and correspondence actions of registration into two CNNs. Independent optimization and feature representations thus become available, avoiding interference with the correspondence caused by the lack of texture constraints. 2) For few-shot learning, we propose Reverse Teaching, which aligns labeled and unlabeled images to each other to provide supervision for the structure and style knowledge in unlabeled images, generating additional training data. These data reversely teach our perception CNN more style and structure knowledge, improving its generalization ability. Our experiments on three datasets with only five labels demonstrate that PC-Reg has competitive registration accuracy and an effective distortion-reducing ability. Compared with LC-VoxelMorph (λ = 1), we achieve Reg-DSC improvements of 12.5%, 6.3%, and 1.0% on the three datasets, revealing the framework's great potential in clinical application.


Subject(s)
Image Processing, Computer-Assisted , Unsupervised Machine Learning , Algorithms , Humans , Image Processing, Computer-Assisted/methods , Perception
9.
Artif Intell Med ; 121: 102181, 2021 11.
Article in English | MEDLINE | ID: mdl-34763803

ABSTRACT

Automatic detection of arrhythmia from the electrocardiogram (ECG) is of great significance for the prevention and treatment of cardiovascular diseases. In a convolutional neural network, the ECG signal is converted into multiple feature channels with equal weights through the convolution operation. Multiple feature channels provide richer and more comprehensive information, but they also contain redundant information that can affect arrhythmia diagnosis, so channels containing arrhythmia information should receive attention and larger weights. In this paper, we introduce the Squeeze-and-Excitation (SE) block for the first time for automatic detection of multiple types of arrhythmias from the ECG. Our algorithm combines a residual convolutional module with the SE block to extract features from the original ECG signal. The SE block adaptively enhances discriminative features and suppresses noise by explicitly modeling the interdependence between channels, adaptively integrating information from the different ECG feature channels. One-dimensional convolution over the time dimension extracts temporal information, and the shortcut connection of the SE-residual convolutional module makes the network easier to optimize. Thanks to the network's powerful ability to extract discriminative arrhythmia features across multiple channels, our framework needs no extra data preprocessing, including the denoising used by other methods; this improves efficiency and preserves the collected biological information without loss. Experiments were conducted with the 12-lead ECG dataset of the China Physiological Signal Challenge (CPSC) 2018 and the dataset of the PhysioNet/Computing in Cardiology (CinC) Challenge 2017. The results show that our model achieves strong performance and has great clinical potential.


Subject(s)
Arrhythmias, Cardiac , Electrocardiography , Algorithms , Arrhythmias, Cardiac/diagnosis , Disease Progression , Humans , Neural Networks, Computer
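The squeeze-excite-scale idea can be illustrated with a deliberately simplified toy. This sketch uses two scalar weights (w1, w2, hypothetical stand-ins for the block's learned fully connected layers, which in a real SE block mix information across channels) just to show the three stages:

```python
import math

def se_reweight(channels, w1, w2):
    """Toy squeeze-and-excitation over a list of 1-D feature channels.
    channels: list of equal-length signal lists.
    w1, w2: scalar weights standing in for the two learned FC layers."""
    # Squeeze: global average pool, one summary value per channel.
    squeezed = [sum(c) / len(c) for c in channels]
    # Excitation: bottleneck nonlinearity (ReLU) then a sigmoid gate.
    hidden = [max(0.0, w1 * s) for s in squeezed]
    gates = [1.0 / (1.0 + math.exp(-w2 * h)) for h in hidden]
    # Scale: reweight each channel by its gate.
    return [[g * x for x in c] for g, c in zip(gates, channels)]
```

In the paper's setting the gates would learn to amplify channels carrying arrhythmia-discriminative features and damp redundant ones; here the weights are fixed inputs purely for illustration.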
10.
Med Image Anal ; 71: 102055, 2021 07.
Article in English | MEDLINE | ID: mdl-33866259

ABSTRACT

Three-dimensional (3D) integrated renal structures (IRS) segmentation targets segmenting the kidneys, renal tumors, arteries, and veins in one inference. Clinicians will benefit from a 3D IRS visual model for accurate preoperative planning and intraoperative guidance of laparoscopic partial nephrectomy (LPN). However, no success has been reported in 3D IRS segmentation, owing to inherent challenges in grayscale distribution: low contrast caused by the narrow task-dependent distribution range of regions of interest (ROIs), and the network's representation preferences caused by distribution variation across images. In this paper, we propose the Meta Grayscale Adaptive Network (MGANet), the first deep learning framework to simultaneously segment the kidney, renal tumors, arteries, and veins on CTA images in one inference. It innovates in two collaborative aspects: 1) Grayscale Interest Search (GIS) adaptively focuses segmentation networks on task-dependent grayscale distributions by scaling the window width and center with two cross-correlated coefficients for the first time, thus learning fine-grained representations for fine segmentation. 2) Meta Grayscale Adaptive (MGA) learning applies an image-level meta-learning strategy: it represents diverse robust features from multiple distributions, perceives each distribution's characteristics, and generates model parameters to fuse features dynamically according to each image's distribution, thus adapting to grayscale distribution variation. The study enrolled 123 patients, and the average Dice coefficients of the renal structures reach up to 87.9%. Fine selection of the task-dependent grayscale distribution ranges and personalized fusion of multiple representations across distributions lead to better 3D IRS segmentation quality. Extensive experiments with promising results on renal structures reveal powerful segmentation accuracy and great clinical significance for renal cancer treatment.


Subject(s)
Image Processing, Computer-Assisted , Kidney Neoplasms , Humans , Kidney/diagnostic imaging , Kidney Neoplasms/diagnostic imaging , Kidney Neoplasms/surgery
11.
Med Image Anal ; 67: 101838, 2021 01.
Article in English | MEDLINE | ID: mdl-33129148

ABSTRACT

Automatic and accurate esophageal lesion classification and segmentation are of great significance for clinically estimating the lesion status of esophageal diseases and establishing suitable diagnostic schemes. Due to individual variations and the visual similarities of lesions in shape, color, and texture, current clinical methods remain subject to high risk and time consumption. In this paper, we propose an Esophageal Lesion Network (ELNet) for automatic esophageal lesion classification and segmentation using deep convolutional neural networks (DCNNs). The method automatically integrates dual-view contextual lesion information to extract global and local features for esophageal lesion classification, and a lesion-specific segmentation network is proposed for automatic pixel-level esophageal lesion annotation. On an established clinical large-scale database of 1051 white-light endoscopic images, ten-fold cross-validation was used for method validation. Experimental results show that the proposed framework achieves classification with sensitivity of 0.9034, specificity of 0.9718, and accuracy of 0.9628, and segmentation with sensitivity of 0.8018, specificity of 0.9655, and accuracy of 0.9462. These results indicate that our method enables efficient, accurate, and reliable esophageal lesion diagnosis in clinics.


Subject(s)
Neural Networks, Computer , Humans
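Ten-fold cross-validation, as used above, partitions the cohort so every image serves as test data exactly once. A minimal fold-splitting sketch (contiguous folds for clarity; real protocols typically shuffle or stratify first):

```python
def k_fold_indices(n, k=10):
    """Partition indices 0..n-1 into k contiguous, near-equal folds,
    returning a (train_indices, test_indices) pair per fold."""
    # The first n % k folds absorb one extra sample each.
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for s in sizes:
        folds.append(list(range(start, start + s)))
        start += s
    return [
        ([i for f in folds[:j] + folds[j + 1:] for i in f], folds[j])
        for j in range(k)
    ]
```

For the 1051-image database here, one fold holds 106 images and the other nine hold 105 each; the reported metrics are then aggregated across the ten test folds.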
12.
Phys Med Biol ; 65(23): 235053, 2020 12 05.
Article in English | MEDLINE | ID: mdl-32698172

ABSTRACT

Pulmonary nodule false-positive reduction is of great significance for automated nodule detection in the clinical diagnosis of low-dose computed tomography (LDCT) lung cancer screening. Due to intra-nodule variations between individuals and the visual similarity between true nodules and false positives appearing as soft tissue in LDCT images, current clinical practice remains subject to high risk and time consumption. In this paper, we propose a multi-dimensional nodule detection network (MD-NDNet) for automatic nodule false-positive reduction using deep convolutional neural networks (DCNNs). The method collaboratively integrates multi-dimensional nodule information: it extracts inter-plane volumetric correlation features with three-dimensional CNNs (3D CNNs) and spatial correlation features from the sagittal, coronal, and axial planes with two-dimensional CNNs (2D CNNs) equipped with an attention module. To accommodate nodule candidates of different sizes and shapes, a multi-scale ensemble strategy is employed for weighted probability aggregation. The proposed method is evaluated on the LUNA16 challenge dataset from ISBI 2016 with ten-fold cross-validation. Experimental results show that the framework achieves a CPM score of 0.9008, indicating that our method enables efficient, accurate, and reliable pulmonary nodule detection for clinical diagnosis.


Subject(s)
Early Detection of Cancer/methods , Lung Neoplasms/pathology , Neural Networks, Computer , Solitary Pulmonary Nodule/pathology , False Positive Reactions , Humans , Imaging, Three-Dimensional/methods , Lung Neoplasms/diagnostic imaging , Solitary Pulmonary Nodule/diagnostic imaging , Tomography, X-Ray Computed/methods
13.
IEEE Trans Med Imaging ; 39(5): 1690-1702, 2020 05.
Article in English | MEDLINE | ID: mdl-31765307

ABSTRACT

The integration of segmentation and direct quantification of the left ventricle (LV) from paired apical views (i.e., apical 4-chamber and 2-chamber together) of an echo sequence clinically achieves comprehensive cardiac assessment: multiview segmentation for anatomical morphology, and multidimensional quantification for contractile function. Direct quantification of the LV, i.e., automatically quantifying multiple LV indices directly from the image via task-aware feature representation and regression, avoids the accumulated error of inter-step targets. This integration provides a stereoscopic reflection of cardiac activity jointly from the paired orthogonal cross-view sequences, overcoming the limited observation of a single plane. We propose a K-shaped Unified Network (K-Net), the first end-to-end framework to simultaneously segment the LV from apical 4-chamber and 2-chamber views and directly quantify the LV in terms of major- and minor-axis dimensions (1D), area (2D), and volume (3D), in sequence.
It works via four components: 1) the K-Net architecture with the Attention Junction enables heterogeneous task learning of a pixel-wise classification segmentation task and an image-wise regression quantification task; it interactively introduces information from segmentation to jointly promote a spatial attention map that guides quantification to focus on the LV-related region, and transfers quantification feedback as a global constraint on segmentation; 2) the Bi-ResLSTMs, distributed layer by layer in K-Net, hierarchically extract spatial-temporal information in the echo sequence, with bidirectional recurrence and short-cut connections to model spatial-temporal information across all frames; 3) the Information Valve, tailing the Bi-ResLSTMs, selectively exchanges information among the views, stimulating complementary information and suppressing redundant information to enable efficient cross-flow for each view; 4) the Evolution Loss comprehensively guides sequential data learning, with a static constraint on frame values and a dynamic constraint on inter-frame value changes. Experiments show that K-Net achieves high performance, with a Dice coefficient up to 91.44% and a mean absolute error of the major-axis dimension down to 2.74 mm, revealing its clinical potential.


Subject(s)
Heart Ventricles , Heart , Heart Ventricles/diagnostic imaging
14.
Med Image Anal ; 58: 101554, 2019 12.
Article in English | MEDLINE | ID: mdl-31546227

ABSTRACT

Accurate direct estimation of left ventricle (LV) multitype indices from two-dimensional (2D) echocardiograms of paired apical views, i.e., paired apical four-chamber (A4C) and two-chamber (A2C), is of great significance for clinically evaluating cardiac function, enabling a comprehensive assessment across multiple dimensions and views. Yet it is extremely challenging and has never been attempted, owing to significantly varied LV shape and appearance across subjects and along the cardiac cycle, the complexity introduced by the paired but different views, unexploited inter-frame relatedness of indices that hampers effectiveness, and low image quality that prevents segmentation. We propose a paired-views LV network (PV-LVNet) to automatically and directly estimate LV multitype indices from paired echo apical views. Based on a newly designed Res-circle Net, the PV-LVNet robustly locates the LV and automatically crops the LV region of interest from the A4C and A2C sequences with a location module and image resampling, then accurately and consistently estimates 7 indices across multiple dimensions (1D, 2D, and 3D) and views (A2C, A4C, and the union of A2C + A4C) with an indices module. Experiments show that our method achieves high performance, with accuracy up to 2.85 mm mean absolute error and internal consistency up to 0.974 Cronbach's α for cardiac index estimation. These results indicate that our method enables efficient, accurate, and reliable cardiac function diagnosis in clinical practice.


Subject(s)
Echocardiography , Heart Ventricles/diagnostic imaging , Image Enhancement/methods , Neural Networks, Computer , Humans
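The internal-consistency figure above is a Cronbach's α. As a reference (illustrative helper, not the paper's evaluation code), the standard formula α = k/(k-1) · (1 - Σ var(item_i) / var(total)) in pure Python:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.
    items: list of item-score lists, one list per item,
    with subjects in the same order in every list."""

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    k = len(items)
    n = len(items[0])
    item_var = sum(var(it) for it in items)
    # Total score per subject, summed across items.
    totals = [sum(it[j] for it in items) for j in range(n)]
    return k / (k - 1) * (1 - item_var / var(totals))
```

When items move together, the total-score variance dwarfs the summed item variances and α approaches 1, which is why 0.974 indicates highly consistent index estimates.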