1.
Radiol Artif Intell ; 5(3): e220246, 2023 May.
Article in English | MEDLINE | ID: mdl-37293349

ABSTRACT

Purpose: To develop a deep learning approach that enables ultra-low-dose (1% of the standard clinical dosage of 3 MBq/kg), ultrafast whole-body PET reconstruction in cancer imaging. Materials and Methods: In this Health Insurance Portability and Accountability Act-compliant study, serial fluorine 18-labeled fluorodeoxyglucose PET/MRI scans of pediatric patients with lymphoma were retrospectively collected from two cross-continental medical centers between July 2015 and March 2020. Global similarity between baseline and follow-up scans was used to develop Masked-LMCTrans, a longitudinal multimodality coattentional convolutional neural network (CNN) transformer that provides interaction and joint reasoning between serial PET/MRI scans from the same patient. Image quality of the reconstructed ultra-low-dose PET was evaluated in comparison with a simulated standard 1% PET image. The performance of Masked-LMCTrans was compared with that of CNNs with pure convolution operations (the classic U-Net family), and the effect of different CNN encoders on feature representation was assessed. Statistical differences in the structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), and visual information fidelity (VIF) were assessed with the Wilcoxon signed rank test. Results: The study included 21 patients (mean age, 15 years ± 7 [SD]; 12 female) in the primary cohort and 10 patients (mean age, 13 years ± 4; 6 female) in the external test cohort. Masked-LMCTrans-reconstructed follow-up PET images demonstrated significantly less noise and more detailed structure than the simulated 1% extremely ultra-low-dose PET images. SSIM, PSNR, and VIF were significantly higher for Masked-LMCTrans-reconstructed PET (P < .001), with improvements of 15.8%, 23.4%, and 186%, respectively. Conclusion: Masked-LMCTrans achieved high-quality reconstruction of 1% low-dose whole-body PET images. Keywords: Pediatrics, PET, Convolutional Neural Network (CNN), Dose Reduction. Supplemental material is available for this article. © RSNA, 2023.
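The image-quality evaluation described in this abstract (SSIM and PSNR scores compared with a Wilcoxon signed rank test) can be sketched with standard Python libraries. The outline below runs on synthetic arrays and is not the authors' pipeline; VIF is omitted because it is not available in scikit-image, and all variable names, shapes, and noise levels are assumptions for illustration only.

import numpy as np
from scipy.stats import wilcoxon
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def quality_scores(recon, reference):
    """Return (SSIM, PSNR) for one volume; both arrays share the same shape."""
    data_range = float(reference.max() - reference.min())
    ssim = structural_similarity(reference, recon, data_range=data_range)
    psnr = peak_signal_noise_ratio(reference, recon, data_range=data_range)
    return ssim, psnr

# Synthetic stand-ins for a standard-dose reference, a denoised reconstruction,
# and a noisy simulated 1% low-dose input (illustrative only, no real PET data).
rng = np.random.default_rng(0)
reference = rng.random((32, 64, 64))
recon = reference + 0.02 * rng.standard_normal(reference.shape)
lowdose = reference + 0.20 * rng.standard_normal(reference.shape)
print(quality_scores(recon, reference), quality_scores(lowdose, reference))

# Paired comparison across cases: per-slice SSIMs stand in here for the
# per-scan scores the study would collect over its cohort.
ssim_recon = [structural_similarity(reference[i], recon[i], data_range=1.0) for i in range(32)]
ssim_lowdose = [structural_similarity(reference[i], lowdose[i], data_range=1.0) for i in range(32)]
stat, p_value = wilcoxon(ssim_recon, ssim_lowdose)  # paired, two-sided by default
print(f"Wilcoxon statistic = {stat:.1f}, P = {p_value:.3g}")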

2.
Eur J Nucl Med Mol Imaging ; 50(5): 1337-1350, 2023 04.
Article in English | MEDLINE | ID: mdl-36633614

ABSTRACT

PURPOSE: To provide a holistic and complete comparison of the five most advanced AI models for the augmentation of low-dose 18F-FDG PET data over the entire dose reduction spectrum. METHODS: In this multicenter study, five AI models were investigated for restoring low-count whole-body PET/MRI: convolutional benchmarks (U-Net, the enhanced deep super-resolution network [EDSR], and a generative adversarial network [GAN]) and the most advanced image restoration transformer models in computer vision to date (the Swin transformer image restoration network [SwinIR] and EDSR-ViT [vision transformer]). The models were evaluated against six groups of count levels representing simulated 75%, 50%, 25%, 12.5%, 6.25%, and 1% (extremely ultra-low-count) fractions of the clinical standard 3 MBq/kg 18F-FDG dose. The comparisons were performed on two independent cohorts, (1) a primary cohort from Stanford University and (2) a cross-continental external validation cohort from Tübingen University, to ensure that the findings are generalizable. A total of 476 original-count and simulated low-count whole-body PET/MRI scans were included in this analysis. RESULTS: For low-count PET restoration on the primary cohort, the mean structural similarity index (SSIM) scores at the 6.25% dose level were 0.898 (95% CI, 0.887-0.910) for EDSR, 0.893 (0.881-0.905) for EDSR-ViT, 0.873 (0.859-0.887) for GAN, 0.885 (0.873-0.898) for U-Net, and 0.910 (0.900-0.920) for SwinIR. SwinIR and U-Net were then evaluated separately at each simulated radiotracer dose level. In the primary Stanford cohort, the mean diagnostic image quality (DIQ; 5-point Likert scale) scores for SwinIR restoration were 5 (SD, 0) at the 75% dose level, 4.50 (0.535) at 50%, 3.75 (0.463) at 25%, 3.25 (0.463) at 12.5%, 4 (0.926) at 6.25%, and 2.5 (0.534) at 1%. CONCLUSION: Compared with low-count PET images, which are nearly or entirely nondiagnostic at higher dose reduction levels (doses as low as 6.25%), both SwinIR and U-Net significantly improve the diagnostic quality of PET images. A radiotracer dose reduction to 1% of the current clinical standard remains beyond the reach of current AI techniques.
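As a rough illustration of how the per-model, per-dose-level SSIM summaries above (means with 95% CIs) could be tabulated, the sketch below groups synthetic per-scan scores by model and dose level. The pandas layout, column names, score values, and normal-approximation interval are all assumptions, not the study's statistical code.

import numpy as np
import pandas as pd

def mean_ci(scores, z=1.96):
    """Mean and normal-approximation 95% CI of a set of per-scan scores."""
    scores = np.asarray(scores, dtype=float)
    m = scores.mean()
    half = z * scores.std(ddof=1) / np.sqrt(len(scores))
    return m, m - half, m + half

# Synthetic per-scan SSIMs, one row per (model, dose, patient); illustrative only.
rng = np.random.default_rng(0)
rows = [{"model": m, "dose": d, "ssim": rng.normal(0.9, 0.02)}
        for m in ["U-Net", "SwinIR"] for d in ["6.25%", "1%"] for _ in range(10)]
df = pd.DataFrame(rows)

# One (mean, lower, upper) tuple per (model, dose) group.
summary = df.groupby(["model", "dose"])["ssim"].apply(mean_ci)
print(summary)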


Subject(s)
Artificial Intelligence , Fluorodeoxyglucose F18 , Humans , Drug Tapering , Positron-Emission Tomography/methods , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods
3.
Radiol Artif Intell ; 4(3): e210174, 2022 May.
Article in English | MEDLINE | ID: mdl-35652118

ABSTRACT

Purpose: To develop a deep learning-based risk stratification system for thyroid nodules using US cine images. Materials and Methods: In this retrospective study, 192 biopsy-confirmed thyroid nodules (175 benign, 17 malignant) in 167 unique patients (mean age, 56 years ± 16 [SD]; 137 women) undergoing cine US between April 2017 and May 2018 with American College of Radiology (ACR) Thyroid Imaging Reporting and Data System (TI-RADS)-structured radiology reports were evaluated. A deep learning-based system that exploits the cine images obtained during three-dimensional volumetric thyroid scans and outputs malignancy risk was developed and compared, using fivefold cross-validation, against a two-dimensional (2D) deep learning-based model (Static-2DCNN), a radiomics-based model using cine images (Cine-Radiomics), and the ACR TI-RADS level, with histopathologic diagnosis as the ground truth. The system was then used to revise the ACR TI-RADS recommendations, and its diagnostic performance was compared against that of the original ACR TI-RADS. Results: The system achieved a higher average area under the receiver operating characteristic curve (AUC, 0.88) than Static-2DCNN (0.72, P = .03) and tended toward higher average AUC than Cine-Radiomics (0.78, P = .16) and the ACR TI-RADS level (0.80, P = .21). The system downgraded recommendations for 92 benign and two malignant nodules and upgraded none. The revised recommendations achieved higher specificity (139 of 175, 79.4%) than the original ACR TI-RADS (47 of 175, 26.9%; P < .001), with no difference in sensitivity (12 of 17, 71% and 14 of 17, 82%, respectively; P = .63). Conclusion: The risk stratification system using US cine images had higher diagnostic performance than prior models and improved the specificity of ACR TI-RADS when used to revise its recommendations. Keywords: Neural Networks, US, Abdomen/GI, Head/Neck, Thyroid, Computer Applications-3D, Oncology, Diagnosis, Supervised Learning, Transfer Learning, Convolutional Neural Network (CNN). Supplemental material is available for this article. © RSNA, 2022.
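The fivefold cross-validated AUC evaluation against histopathologic ground truth described above can be outlined as below. The logistic regression classifier and random features are placeholders for the cine-image deep learning system; only the fold structure, the class counts from the abstract, and the evaluation metric mirror the study's description.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

# Synthetic features: one vector per nodule, with labels matching the reported
# class balance (17 malignant, 175 benign). Purely illustrative data.
rng = np.random.default_rng(0)
X = rng.standard_normal((192, 16))
y = np.r_[np.ones(17), np.zeros(175)]
rng.shuffle(y)

# Stratified fivefold cross-validation; the per-fold AUCs would be averaged and
# compared across candidate models.
aucs = []
for train, test in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    aucs.append(roc_auc_score(y[test], clf.predict_proba(X[test])[:, 1]))
print(f"mean AUC over 5 folds: {np.mean(aucs):.2f}")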
