Results 1 - 6 of 6
1.
Acad Radiol ; 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38458886

ABSTRACT

RATIONALE AND OBJECTIVES: To develop a Dual generative-adversarial-network (GAN) Cascaded Network (DGCN) for generating super-resolution CT (SRCT) images from normal-resolution CT (NRCT) images, and to evaluate the performance of DGCN on multi-center datasets.

MATERIALS AND METHODS: This retrospective study included 278 patients with chest CT from two hospitals between January 2020 and June 2023. Each patient had all three examinations: NRCT (512 × 512 matrix, resolution 0.70 × 0.70 × 1.0 mm), high-resolution CT (HRCT, 1024 × 1024 matrix, resolution 0.35 × 0.35 × 1.0 mm), and ultra-high-resolution CT (UHRCT, 1024 × 1024 matrix, resolution 0.17 × 0.17 × 0.5 mm). Initially, a deep chest CT super-resolution residual network (DCRN) was built to generate HRCT from NRCT. Subsequently, the DCRN was used as a pre-trained model for training the DGCN to further enhance resolution along all three axes, ultimately yielding SRCT. PSNR, SSIM, FID, subjective evaluation scores, and objective evaluation parameters related to pulmonary nodule segmentation in the testing set were recorded and analyzed.

RESULTS: DCRN obtained a PSNR of 52.16, an SSIM of 0.9941, an FID of 137.713, and an average diameter difference of 0.0981 mm. DGCN obtained a PSNR of 46.50, an SSIM of 0.9990, an FID of 166.421, and an average diameter difference of 0.0981 mm on 39 testing cases. There were no significant differences between the SRCT and UHRCT images in subjective evaluation.

CONCLUSION: Our model markedly improved the generation of HRCT and SRCT images and outperformed established methods in image quality and clinical segmentation accuracy across both internal and external testing datasets.
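For readers who want to reproduce the image-quality metrics quoted above, the short Python sketch below computes PSNR and SSIM for a pair of CT slices with scikit-image. The HU normalization range and the array names are assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch (not the authors' code): computing PSNR and SSIM
# between a generated super-resolution slice and a reference UHRCT slice.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_slice(sr_slice: np.ndarray, ref_slice: np.ndarray,
                   hu_min: float = -1024.0, hu_max: float = 3071.0):
    """Return (PSNR, SSIM) for two CT slices given in Hounsfield units."""
    # Normalize both slices to [0, 1] over an assumed HU range so the
    # metrics use a well-defined data range.
    sr = np.clip((sr_slice - hu_min) / (hu_max - hu_min), 0.0, 1.0)
    ref = np.clip((ref_slice - hu_min) / (hu_max - hu_min), 0.0, 1.0)
    psnr = peak_signal_noise_ratio(ref, sr, data_range=1.0)
    ssim = structural_similarity(ref, sr, data_range=1.0)
    return psnr, ssim

# Toy usage with random data standing in for real slices.
rng = np.random.default_rng(0)
ref = rng.normal(0.0, 200.0, size=(1024, 1024))
sr = ref + rng.normal(0.0, 5.0, size=ref.shape)
print(evaluate_slice(sr, ref))
```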

2.
J Digit Imaging ; 36(5): 2138-2147, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37407842

ABSTRACT

To develop a deep learning-based model for detecting rib fractures on chest X-ray and to evaluate its performance in a multicenter study. Chest digital radiography (DR) images from 18,631 subjects were used for training, testing, and validation of the deep learning fracture detection model. We first built a pretrained model, a simple framework for contrastive learning of visual representations (SimCLR), using contrastive learning on the training set. SimCLR was then used as the backbone of a fully convolutional one-stage (FCOS) object detection network to identify rib fractures on chest X-ray images. The detection performance of the network for four different types of rib fractures was evaluated on the testing set: 127 images from Data-CZ and 109 images from Data-CH annotated with the four fracture types. For Data-CZ, the sensitivities of the detection model with no pretraining, ImageNet pretraining, and DR pretraining were 0.465, 0.735, and 0.822, respectively, with an average of five false positives per scan in all cases. For the Data-CH test set, the sensitivities of the three pretraining settings were 0.403, 0.655, and 0.748. Across the four fracture types, the detection model performed best for displaced fractures, with sensitivities of 0.873 and 0.774 at five false positives per scan for the Data-CZ and Data-CH test sets, respectively, followed by nondisplaced fractures, buckle fractures, and old fractures. A pretrained model can thus significantly improve deep learning-based rib fracture detection on X-ray images, reducing missed diagnoses and improving diagnostic efficacy.
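As a rough illustration of how sensitivity and false positives per scan can be tallied from a detector's box outputs, here is a minimal NumPy sketch; the IoU threshold and data layout are assumptions, not the paper's evaluation protocol.

```python
# Illustrative sketch (not the paper's evaluation code): per-scan sensitivity
# and false positives for box detections, matched to ground truth by IoU.
import numpy as np

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def evaluate(preds, gts, iou_thr=0.3):
    """preds/gts: lists (one entry per image) of lists of boxes.
    Returns (sensitivity, mean false positives per image)."""
    tp, fp, n_gt = 0, 0, 0
    for boxes, truths in zip(preds, gts):
        matched = [False] * len(truths)
        n_gt += len(truths)
        for box in boxes:
            ious = [iou(box, t) for t in truths]
            best = int(np.argmax(ious)) if ious else -1
            if best >= 0 and ious[best] >= iou_thr and not matched[best]:
                matched[best] = True
                tp += 1
            else:
                fp += 1
    return tp / max(n_gt, 1), fp / max(len(preds), 1)

# Toy usage with two images.
preds = [[(10, 10, 50, 50), (200, 200, 240, 240)], [(5, 5, 40, 40)]]
gts = [[(12, 12, 48, 52)], [(100, 100, 140, 140)]]
print(evaluate(preds, gts))
```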


Subject(s)
Rib Fractures , Humans , Rib Fractures/diagnostic imaging , Tomography, X-Ray Computed/methods , X-Rays , Radiography , Retrospective Studies
3.
J Digit Imaging ; 36(5): 2278-2289, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37268840

ABSTRACT

Image quality control (QC) is crucial for the accurate diagnosis of knee diseases using radiographs. However, the manual QC process is subjective, labor intensive, and time-consuming. In this study, we aimed to develop an artificial intelligence (AI) model to automate the QC procedure typically performed by clinicians. We proposed a fully automatic AI-based QC model for knee radiographs that uses a high-resolution net (HR-Net) to identify predefined key points in the images. Geometric calculations then transform the identified key points into three QC criteria, namely, the anteroposterior (AP) and lateral (LAT) overlap ratios and the LAT flexion angle. The proposed model was trained and validated on 2212 knee plain radiographs from 1208 patients, with an additional 1572 knee radiographs from 753 patients collected from six external centers for external validation. For the internal validation cohort, the AI model and clinicians showed high intraclass correlation coefficients (ICCs) for the AP fibular head overlap ratio, the LAT fibular head overlap ratio, and the LAT knee flexion angle, with values of 0.952, 0.895, and 0.993, respectively. For the external validation cohort, the ICCs were also high, with values of 0.934, 0.856, and 0.991, respectively. There were no significant differences between the AI model and clinicians for any of the three QC criteria, and the AI model required significantly less measurement time than clinicians. The experimental results demonstrate that the AI model performs comparably to clinicians while requiring less time, so the proposed model has great potential as a convenient tool for automating the QC procedure for knee radiographs in clinical practice.
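To make the geometric step concrete, the sketch below shows one way detected key points could be converted into a lateral flexion angle; the key-point names and coordinates are hypothetical, and the exact definitions used by the authors may differ.

```python
# Illustrative sketch (assumed key-point definitions, not the paper's):
# turning detected key points into a LAT knee flexion angle.
import numpy as np

def flexion_angle(femur_prox, femur_dist, tibia_prox, tibia_dist):
    """Angle in degrees between the femoral and tibial shaft axes,
    each axis defined by a proximal and a distal key point (x, y)."""
    femur_axis = np.asarray(femur_dist, float) - np.asarray(femur_prox, float)
    tibia_axis = np.asarray(tibia_dist, float) - np.asarray(tibia_prox, float)
    cos = np.dot(femur_axis, tibia_axis) / (
        np.linalg.norm(femur_axis) * np.linalg.norm(tibia_axis))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical key points (pixel coordinates) from a lateral radiograph.
print(flexion_angle((250, 80), (300, 400), (300, 400), (320, 760)))
```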


Subject(s)
Artificial Intelligence , Knee Joint , Humans , Knee Joint/diagnostic imaging , Quality Control , Radiography
4.
Med Phys ; 50(6): 3612-3622, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36542389

ABSTRACT

BACKGROUND: Ultra-high-resolution computed tomography (UHRCT) has shown great potential for the detection of pulmonary diseases. However, UHRCT scanning generally increases scanning time and radiation exposure. Super-resolution is therefore an increasingly active application in CT imaging, since it promises higher resolution without the added radiation dose. Recent work has shown that convolutional neural networks, especially generative adversarial network (GAN)-based models, can generate high-resolution CT from phantom images or simulated low-resolution data without extra dose; studies using clinical CT, particularly lung images, remain rare because paired datasets are difficult to collect.

PURPOSE: To generate clinical lung UHRCT from low-resolution computed tomography (LRCT) using a GAN model.

METHODS: Forty-three clinical scans with paired LRCT and UHRCT were collected. Paired patches were selected using structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) thresholds. A relativistic GAN with gradient guidance was trained to learn the mapping from LRCT to UHRCT. The performance of the proposed method was evaluated using PSNR and SSIM. A reader study with a five-point Likert score (five for the worst, one for the best) was also conducted to assess the method in terms of general quality, diagnostic confidence, sharpness, and denoising level.

RESULTS: Our method achieved a PSNR of 32.60 ± 2.92 and an SSIM of 0.881 ± 0.057 on our clinical CT dataset, outperforming other state-of-the-art methods based on simulated scenarios. Moreover, the reader study showed that our method reached good clinical performance in general quality (1.14 ± 0.36), diagnostic confidence (1.36 ± 0.49), sharpness (1.07 ± 0.27), and denoising level (1.29 ± 0.61) compared with other SR methods.

CONCLUSION: This study demonstrated the feasibility of generating UHRCT images from LRCT without longer scanning time or increased radiation dose.
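For context on the adversarial objective named above, the following sketch shows the standard relativistic average GAN losses in PyTorch. It omits the gradient-guidance term and is a generic formulation for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the authors' implementation): relativistic average
# GAN losses for a critic C scoring real (UHRCT) and generated patches.
import torch
import torch.nn.functional as F

def ra_discriminator_loss(real_logits: torch.Tensor, fake_logits: torch.Tensor):
    """Real patches should score above the average fake, and vice versa.
    (In training, fake_logits come from the generator and are detached
    for the discriminator update.)"""
    real_rel = real_logits - fake_logits.mean()
    fake_rel = fake_logits - real_logits.mean()
    loss_real = F.binary_cross_entropy_with_logits(real_rel, torch.ones_like(real_rel))
    loss_fake = F.binary_cross_entropy_with_logits(fake_rel, torch.zeros_like(fake_rel))
    return loss_real + loss_fake

def ra_generator_loss(real_logits: torch.Tensor, fake_logits: torch.Tensor):
    """Generator side: push fake scores above the average real score."""
    real_rel = real_logits - fake_logits.mean()
    fake_rel = fake_logits - real_logits.mean()
    loss_real = F.binary_cross_entropy_with_logits(real_rel, torch.zeros_like(real_rel))
    loss_fake = F.binary_cross_entropy_with_logits(fake_rel, torch.ones_like(fake_rel))
    return loss_real + loss_fake

# Toy usage with random critic outputs for a batch of 8 patches.
real, fake = torch.randn(8, 1), torch.randn(8, 1)
print(ra_discriminator_loss(real, fake).item(), ra_generator_loss(real, fake).item())
```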


Subject(s)
Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Neural Networks, Computer , Lung , Signal-To-Noise Ratio
5.
Acta Radiol ; 64(3): 1184-1193, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36039494

ABSTRACT

BACKGROUND: Differentiating benign schwannoma from its malignant counterparts by neuroimaging alone is not always straightforward and remains confounding in many cases, owing to atypical imaging presentations encountered in the clinic and the lack of specific diagnostic markers.

PURPOSE: To construct and validate a novel deep learning model based on multi-source magnetic resonance imaging (MRI) for automatically differentiating malignant spinal schwannoma from benign.

MATERIAL AND METHODS: We retrospectively reviewed MRI data from 119 patients with an initial diagnosis of benign or malignant spinal schwannoma confirmed by postoperative pathology. A novel convolutional neural network (CNN)-based deep learning model named GAIN-CP (Guided Attention Inference Network with Clinical Priors) was constructed. An ablation study with fivefold cross-validation and cross-source experiments was conducted to validate the model. Diagnostic performance of the GAIN-CP model, a conventional radiomics model, and radiologist-based clinical assessment was compared using the area under the receiver operating characteristic curve (AUC) and balanced accuracy (BAC).

RESULTS: The AUC of the proposed GAIN method was 0.83, outperforming the radiomics method (0.65) and the radiologists' evaluations (0.67). By incorporating both the image data and the clinical prior features, GAIN-CP achieved an AUC of 0.95. GAIN-CP also achieved the best performance in the fivefold cross-validation and cross-source experiments.

CONCLUSION: The novel GAIN-CP method can successfully distinguish malignant from benign spinal schwannoma using the provided multi-source MR images, showing good prospects for clinical diagnosis.
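The comparison metrics are straightforward to compute; below is a minimal scikit-learn sketch for AUC and balanced accuracy, with a hypothetical 0.5 decision threshold that the abstract does not specify.

```python
# Illustrative sketch (not the study's code): comparing classifiers by AUC
# and balanced accuracy (BAC) given predicted malignancy probabilities.
import numpy as np
from sklearn.metrics import roc_auc_score, balanced_accuracy_score

def auc_and_bac(y_true, y_prob, threshold=0.5):
    """y_true: 0 = benign, 1 = malignant; y_prob: predicted probability of
    malignancy. The 0.5 decision threshold is an assumption."""
    auc = roc_auc_score(y_true, y_prob)
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    bac = balanced_accuracy_score(y_true, y_pred)
    return auc, bac

# Toy usage with eight hypothetical cases.
y_true = [0, 0, 0, 1, 1, 1, 0, 1]
y_prob = [0.10, 0.40, 0.35, 0.80, 0.70, 0.55, 0.60, 0.20]
print(auc_and_bac(y_true, y_prob))
```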


Subject(s)
Magnetic Resonance Imaging , Neurilemmoma , Humans , Retrospective Studies , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Neurilemmoma/diagnostic imaging , Radiologists
6.
J Thorac Dis ; 13(3): 1327-1337, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33841926

ABSTRACT

BACKGROUND: The peri-tumor microenvironment plays an important role in the occurrence, growth, and metastasis of cancer. The aim of this study was to explore the value of a CT image-based deep learning model of tumors and peri-tumors in predicting the invasiveness of ground-glass nodules (GGNs).

METHODS: Preoperative thin-section chest CT images were reviewed retrospectively in 622 patients with a total of 687 pulmonary GGNs. GGNs were classified according to clinical management strategies as invasive lesions (IAC) or non-invasive lesions (AAH, AIS, and MIA). Two volumes of interest (VOIs) were identified on CT: the gross tumor volume (GTV) and the gross volume of tumor incorporating the peritumoral region (GPTV). A three-dimensional (3D) DenseNet was used to model and predict GGN invasiveness, with five-fold cross-validation. GTV and GPTV were used as inputs for the comparison models. Prediction performance was evaluated by sensitivity, specificity, and area under the receiver operating characteristic curve (AUC).

RESULTS: The GTV-based model successfully predicted GGN invasiveness, with an AUC of 0.921 (95% CI, 0.896-0.937). Using GPTV, the AUC increased to 0.955 (95% CI, 0.939-0.971).

CONCLUSIONS: The deep learning method performed well in predicting GGN invasiveness, and the GPTV-based model was more effective than the GTV-based model.
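One common way to obtain a GPTV-style input is to expand the GTV mask by a fixed physical margin; the SciPy sketch below illustrates this idea, with the 4 mm margin and voxel spacing chosen purely for illustration (the paper's margin definition is not stated here).

```python
# Illustrative sketch (not the authors' code): deriving a GPTV mask by
# expanding the GTV segmentation with an assumed peritumoral margin.
import numpy as np
from scipy import ndimage

def gptv_from_gtv(gtv_mask: np.ndarray, margin_mm: float, spacing_mm):
    """Expand a binary 3D GTV mask by margin_mm in physical space.
    spacing_mm gives the voxel size along (z, y, x) in millimetres."""
    # Distance (in mm) from every voxel to the nearest tumor voxel;
    # voxels inside the GTV get distance 0.
    dist = ndimage.distance_transform_edt(~gtv_mask.astype(bool),
                                          sampling=spacing_mm)
    return dist <= margin_mm

# Toy example: a small cuboid "nodule" expanded by a hypothetical 4 mm margin.
gtv = np.zeros((16, 64, 64), dtype=bool)
gtv[6:10, 28:36, 28:36] = True
gptv = gptv_from_gtv(gtv, margin_mm=4.0, spacing_mm=(1.0, 0.7, 0.7))
print(int(gtv.sum()), int(gptv.sum()))
```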
