1.
Acad Radiol; 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38458886

ABSTRACT

RATIONALE AND OBJECTIVES: To develop a dual generative-adversarial-network (GAN) cascaded network (DGCN) for generating super-resolution computed tomography (SRCT) images from normal-resolution CT (NRCT) images, and to evaluate the performance of the DGCN on multi-center datasets.

MATERIALS AND METHODS: This retrospective study included 278 patients with chest CT from two hospitals between January 2020 and June 2023. Each patient underwent all three examinations: NRCT (512×512 matrix CT images with a voxel size of 0.70 mm × 0.70 mm × 1.0 mm), high-resolution CT (HRCT, 1024×1024 matrix CT images with a voxel size of 0.35 mm × 0.35 mm × 1.0 mm), and ultra-high-resolution CT (UHRCT, 1024×1024 matrix CT images with a voxel size of 0.17 mm × 0.17 mm × 0.5 mm). Initially, a deep chest CT super-resolution residual network (DCRN) was built to generate HRCT from NRCT. The DCRN was then used as a pre-trained model for training the DGCN, which further enhances resolution along all three axes and ultimately yields SRCT. Peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), Fréchet inception distance (FID), subjective evaluation scores, and objective evaluation parameters related to pulmonary nodule segmentation on the testing set were recorded and analyzed.

RESULTS: The DCRN obtained a PSNR of 52.16, an SSIM of 0.9941, an FID of 137.713, and an average diameter difference of 0.0981 mm. On 39 testing cases, the DGCN obtained a PSNR of 46.50, an SSIM of 0.9990, an FID of 166.421, and an average diameter difference of 0.0981 mm. There were no significant differences between the SRCT and UHRCT images in the subjective evaluation.

CONCLUSION: Our model substantially improved the generation of HRCT and SRCT images and outperformed established methods in image quality and clinical segmentation accuracy on both the internal and external testing datasets.
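For context on the image-quality metrics reported above, the short Python sketch below shows one standard way to compute PSNR and SSIM with scikit-image. The random stand-in volumes, their shapes, and the intensity range are illustrative assumptions only, not the authors' code or data.

# PSNR/SSIM between a reference volume and a generated volume.
# All arrays here are random stand-ins; shapes and the Hounsfield-unit-like
# intensity range are assumptions for illustration.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.uniform(-1000.0, 400.0, size=(8, 256, 256))          # stand-in UHRCT
generated = reference + rng.normal(0.0, 5.0, size=reference.shape)   # stand-in SRCT

data_range = float(reference.max() - reference.min())
psnr = peak_signal_noise_ratio(reference, generated, data_range=data_range)
ssim = structural_similarity(reference, generated, data_range=data_range)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")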

2.
Front Oncol; 12: 892890, 2022.
Article in English | MEDLINE | ID: mdl-35747810

ABSTRACT

Objective: This study aimed to develop effective artificial intelligence (AI) diagnostic models, based on CT images of pulmonary nodules alone, on descriptive and quantitative clinical or image features, or on a combination of both, to differentiate benign from malignant ground-glass nodules (GGNs) and thereby assist in deciding on surgical intervention.

Methods: The study included a total of 867 nodules (benign nodules: 112; malignant nodules: 755) with postoperative pathological diagnoses from two centers. Three AI approaches were adopted to discriminate between benign and malignant GGNs: a) an image-based deep learning approach that builds a deep neural network (DNN); b) a machine learning approach based on the clinical and image features of the nodules; and c) a fusion diagnostic model integrating the original images with the clinical and image features. Model performance was evaluated on an internal test dataset (the "Changzheng dataset") and an independent test dataset collected from an external institute (the "Longyan dataset"). In addition, the automatic diagnostic models were compared with manual evaluations by two radiologists on the Longyan dataset.

Results: The image-based deep learning model achieved an appealing diagnostic performance, yielding area under the receiver operating characteristic curve (AUC) values of 0.75 (95% confidence interval [CI]: 0.62, 0.89) and 0.76 (95% CI: 0.61, 0.90) on the Changzheng and Longyan datasets, respectively. The clinical feature-based machine learning model performed well on the Changzheng dataset (AUC, 0.80 [95% CI: 0.64, 0.96]) but poorly on the Longyan dataset (AUC, 0.62 [95% CI: 0.42, 0.83]). The fusion diagnostic model achieved the best performance on both the Changzheng dataset (AUC, 0.82 [95% CI: 0.71, 0.93]) and the Longyan dataset (AUC, 0.83 [95% CI: 0.70, 0.96]), and it achieved a better specificity (0.69) than the radiologists (0.33-0.44) on the Longyan dataset.

Conclusion: The deep learning models, both the image-based model and the fusion model, can assist radiologists in differentiating benign from malignant nodules for the precise management of patients with GGNs.
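As a concrete illustration of the fusion approach described above (model c), the minimal PyTorch sketch below concatenates a CNN image embedding with a clinical/image feature vector before a shared classifier head. The layer sizes, the 64×64 patch size, and the 12-feature count are hypothetical choices for the sketch, not the authors' published architecture.

import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    # Hypothetical fusion model: CNN image embedding + tabular clinical features.
    def __init__(self, n_clinical_features: int = 12):
        super().__init__()
        # Tiny CNN encoder for a single-channel nodule patch (e.g., 64x64).
        self.image_encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> 32-dim embedding
        )
        # Classifier head over the concatenated image + clinical features.
        self.head = nn.Sequential(
            nn.Linear(32 + n_clinical_features, 32), nn.ReLU(),
            nn.Linear(32, 1),  # single logit: malignant vs. benign
        )

    def forward(self, image, clinical):
        fused = torch.cat([self.image_encoder(image), clinical], dim=1)
        return self.head(fused)

model = FusionClassifier()
logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 12))
print(logits.shape)  # torch.Size([4, 1])

Training such a head end-to-end lets the image branch and the tabular features compensate for each other, which is one plausible reason a fusion model can generalize across centers better than a feature-only model.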
