Results 1 - 20 of 76
1.
J Imaging Inform Med ; 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38858260

ABSTRACT

To develop a robust segmentation model, encoding the underlying features/structures of the input data is essential to discriminate the target structure from the background. To enrich the extracted feature maps, contrastive learning and self-learning techniques are employed, particularly when the size of the training dataset is limited. In this work, we set out to investigate the impact of contrastive learning and self-learning on the performance of deep learning-based semantic segmentation. To this end, three different datasets were employed for brain tumor and hippocampus delineation from MR images (BraTS and Decathlon datasets, respectively) and kidney segmentation from CT images (Decathlon dataset). Since data augmentation techniques are also aimed at enhancing the performance of deep learning methods, a deformable data augmentation technique was proposed and compared with contrastive learning and self-learning frameworks. The segmentation accuracy for the three datasets was assessed with and without applying data augmentation, contrastive learning, and self-learning to individually investigate the impact of these techniques. The self-learning and deformable data augmentation techniques exhibited comparable performance with Dice indices of 0.913 ± 0.030 and 0.920 ± 0.022 for kidney segmentation, 0.890 ± 0.035 and 0.898 ± 0.027 for hippocampus segmentation, and 0.891 ± 0.045 and 0.897 ± 0.040 for lesion segmentation, respectively. These two approaches significantly outperformed contrastive learning and the original model, which yielded Dice indices of 0.871 ± 0.039 and 0.868 ± 0.042 for kidney segmentation, 0.872 ± 0.045 and 0.865 ± 0.048 for hippocampus segmentation, and 0.870 ± 0.049 and 0.860 ± 0.058 for lesion segmentation, respectively. The combination of self-learning with deformable data augmentation led to a robust segmentation model with no outliers in the outcomes. This work demonstrated the beneficial impact of self-learning and deformable data augmentation on organ and lesion segmentation, where no additional training datasets are needed.
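The abstract does not specify the deformable augmentation model, so the following is a minimal sketch of one common choice: elastic deformation via a Gaussian-smoothed random displacement field applied identically to an image and its segmentation mask. The parameters alpha and sigma are illustrative assumptions, not the paper's settings.

```python
# Hypothetical sketch of elastic (deformable) data augmentation; the paper's
# exact deformation model and parameters are not given in the abstract.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform(image, mask, alpha=30.0, sigma=4.0, seed=None):
    """Warp a 2D image and its label mask with one smooth random field."""
    rng = np.random.default_rng(seed)
    shape = image.shape
    # Smooth random displacements: Gaussian-filtered noise scaled by alpha.
    dy = gaussian_filter(rng.uniform(-1, 1, shape), sigma) * alpha
    dx = gaussian_filter(rng.uniform(-1, 1, shape), sigma) * alpha
    yy, xx = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing="ij")
    coords = np.array([yy + dy, xx + dx])
    warped_image = map_coordinates(image, coords, order=1, mode="reflect")
    warped_mask = map_coordinates(mask, coords, order=0, mode="reflect")  # nearest: keep labels intact
    return warped_image, warped_mask
```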

2.
Phys Med Biol ; 69(11)2024 May 30.
Article in English | MEDLINE | ID: mdl-38744305

ABSTRACT

This review casts a spotlight on intraoperative positron emission tomography (PET) scanners and the distinctive challenges they confront. Specifically, these systems contend with the necessity of partial coverage geometry, essential for ensuring adequate access to the patient. This inherently leans them towards limited-angle PET imaging, bringing along its array of reconstruction and geometrical sensitivity challenges. Compounding this, the need for real-time imaging in navigation systems mandates rapid acquisition and reconstruction times. For these systems, the emphasis is on dependable PET image reconstruction (without significant artefacts) while rapid processing takes precedence over the spatial resolution of the system. In contrast, specimen PET imagers are unburdened by the geometrical sensitivity challenges, thanks to their ability to leverage full coverage PET imaging geometries. For these devices, the focus shifts: high spatial resolution imaging takes precedence over rapid image reconstruction. This review concurrently probes into the technical complexities of both intraoperative and specimen PET imaging, shedding light on their recent designs, inherent challenges, and technological advancements.


Subject(s)
Image Processing, Computer-Assisted , Operating Rooms , Positron-Emission Tomography , Positron-Emission Tomography/instrumentation , Humans , Image Processing, Computer-Assisted/methods
3.
Clin Genitourin Cancer ; 22(3): 102076, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38593599

ABSTRACT

The objective of this work was to review comparisons of the efficacy of 68Ga-PSMA-11 (prostate-specific membrane antigen) PET/CT and multiparametric magnetic resonance imaging (mpMRI) in the detection of prostate cancer among patients undergoing initial staging prior to radical prostatectomy or experiencing recurrent prostate cancer, based on histopathological data. A comprehensive search was conducted in PubMed and Web of Science, and relevant articles were analyzed with various parameters, including year of publication, study design, patient count, age, PSA (prostate-specific antigen) value, Gleason score, standardized uptake value (SUVmax), detection rate, treatment history, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and PI-RADS (prostate imaging reporting and data system) scores. Only studies directly comparing PSMA-PET and mpMRI were considered, while those examining combined accuracy or focusing on either modality alone were excluded. In total, 24 studies comprising 1717 patients were analyzed, with the most common indication for screening being staging, followed by relapse. The findings indicated that 68Ga-PSMA-PET/CT effectively diagnosed prostate cancer in patients with suspected or confirmed disease, and both methods exhibited comparable efficacy in identifying lesion-specific information. However, notable heterogeneity was observed, highlighting the necessity for standardization of imaging and histopathology systems to mitigate inter-study variability. Future research should prioritize evaluating the combined diagnostic performance of both modalities to enhance sensitivity and reduce unnecessary biopsies. Overall, the utilization of PSMA-PET and mpMRI in combination holds substantial potential for significantly advancing the diagnosis and management of prostate cancer.


Subject(s)
Gallium Isotopes , Gallium Radioisotopes , Multiparametric Magnetic Resonance Imaging , Neoplasm Recurrence, Local , Positron Emission Tomography Computed Tomography , Prostatic Neoplasms , Humans , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Prostatic Neoplasms/metabolism , Male , Neoplasm Recurrence, Local/diagnostic imaging , Neoplasm Recurrence, Local/metabolism , Positron Emission Tomography Computed Tomography/methods , Multiparametric Magnetic Resonance Imaging/methods , Edetic Acid/analogs & derivatives , Oligopeptides , Radiopharmaceuticals , Prostate-Specific Antigen/blood , Prostate-Specific Antigen/metabolism , Prostatectomy , Neoplasm Staging
4.
Quant Imaging Med Surg ; 14(3): 2146-2164, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38545051

ABSTRACT

Background: Positron emission tomography (PET) imaging encounters the obstacle of partial volume effects, arising from its limited intrinsic resolution, giving rise to (I) considerable bias, particularly for structures comparable in size to the point spread function (PSF) of the system; and (II) blurred image edges and blending of textures along the borders. We set out to build a deep learning-based framework for predicting partial volume corrected full-dose (FD + PVC) images from either standard or low-dose (LD) PET images without requiring any anatomical data, in order to provide a joint solution for partial volume correction and denoising of LD PET images. Methods: We trained a modified encoder-decoder U-Net network with standard-of-care or LD PET images as the input and FD + PVC images generated by six different PVC methods as the target. These six PVC approaches include geometric transfer matrix (GTM), multi-target correction (MTC), region-based voxel-wise correction (RBV), iterative Yang (IY), reblurred Van-Cittert (RVC), and Richardson-Lucy (RL). The proposed models were evaluated using standard criteria, such as peak signal-to-noise ratio (PSNR), root mean squared error (RMSE), structural similarity index (SSIM), relative bias, and absolute relative bias. Results: Different levels of error were observed across these partial volume correction methods; errors were relatively smaller for GTM with an SSIM of 0.63 for LD and 0.29 for FD, IY with an SSIM of 0.63 for LD and 0.67 for FD, RBV with an SSIM of 0.57 for LD and 0.65 for FD, and RVC with an SSIM of 0.89 for LD and 0.94 for FD PVC approaches. However, large quantitative errors were observed for the MTC with an RMSE of 2.71 for LD and 2.45 for FD and RL with an RMSE of 5 for LD and 3.27 for FD PVC approaches. Conclusions: We found that the proposed framework could effectively perform joint denoising and partial volume correction for PET images with either LD or FD input PET data. When no magnetic resonance imaging (MRI) images are available, the developed deep learning models could be used for partial volume correction on LD or standard PET-computed tomography (PET-CT) scans as an image quality enhancement technique.
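For reference, the evaluation criteria named above reduce to a few lines; this is a minimal sketch using scikit-image, and the data_range normalization choice is an assumption rather than the paper's exact protocol.

```python
# Sketch of the reported image quality metrics (RMSE, PSNR, SSIM) between a
# predicted and a reference image; data_range handling is an assumption.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(pred, ref):
    data_range = float(ref.max() - ref.min())
    rmse = float(np.sqrt(np.mean((pred - ref) ** 2)))
    psnr = peak_signal_noise_ratio(ref, pred, data_range=data_range)
    ssim = structural_similarity(ref, pred, data_range=data_range)
    return {"RMSE": rmse, "PSNR": psnr, "SSIM": ssim}
```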

5.
Phys Med ; 119: 103315, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38377837

ABSTRACT

PURPOSE: This work set out to propose an attention-based deep neural network to predict partial volume corrected images from PET data without utilizing anatomical information. METHODS: An attention-based convolutional neural network (ATB-Net) is developed to predict PVE-corrected images in brain PET imaging by concentrating on anatomical areas of the brain. The performance of the deep neural network for performing PVC without using anatomical images was evaluated for two PVC methods, namely the iterative Yang (IY) and reblurred Van-Cittert (RVC) approaches. The RVC and IY PVC approaches were applied to PET images to generate the reference images. The U-Net network for partial volume correction was trained twice: once without the attention module and once with the attention module concentrating on the anatomical brain regions. RESULTS: Regarding the peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and root mean square error (RMSE) metrics, the proposed ATB-Net outperformed the standard U-Net model (without the attention compartment). For the RVC technique, the ATB-Net performed only marginally better than the U-Net; however, for the IY method, which is a region-wise method, the attention-based approach resulted in a substantial improvement. The mean absolute relative SUV difference and mean absolute relative bias improved by 38.02% and 91.60% for the RVC method and 77.47% and 79.68% for the IY method when using the ATB-Net model, respectively. CONCLUSIONS: Our results suggest that, without using anatomical data, the attention-based DL model can perform PVC on PET images.
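The abstract does not detail the ATB-Net architecture; the PyTorch sketch below shows an additive attention gate of the kind commonly inserted into U-Net skip connections, as one plausible form of such an attention module. Channel sizes, and the assumption that the gating and skip feature maps share spatial dimensions, are illustrative.

```python
# Hypothetical additive attention gate (in the style of attention U-Nets);
# not the paper's verified architecture. Assumes gate and skip features have
# the same spatial size (e.g., after upsampling the gating signal).
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, gate_ch, skip_ch, inter_ch):
        super().__init__()
        self.wg = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.wx = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, kernel_size=1), nn.Sigmoid())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, gate, skip):
        # Attention coefficients re-weight the skip features spatially,
        # letting the decoder concentrate on selected regions.
        a = self.psi(self.relu(self.wg(gate) + self.wx(skip)))
        return skip * a
```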


Subject(s)
Brain , Fluorodeoxyglucose F18 , Brain/diagnostic imaging , Neural Networks, Computer , Positron-Emission Tomography/methods , Signal-To-Noise Ratio , Image Processing, Computer-Assisted/methods
6.
Med Phys ; 51(2): 870-880, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38197492

ABSTRACT

BACKGROUND: Attenuation and scatter correction is crucial for quantitative positron emission tomography (PET) imaging. Direct attenuation correction (AC) in the image domain using deep learning approaches has been recently proposed for combined PET/MR and standalone PET modalities lacking transmission scanning devices or anatomical imaging. PURPOSE: In this study, different input settings were considered in the model training to investigate deep learning-based AC in the image space. METHODS: Three different deep learning methods were developed for direct AC in the image space: (i) use of non-attenuation-corrected PET images as input (NonAC-PET), (ii) use of attenuation-corrected PET images with a simple two-class AC map (composed of soft tissue and background air) obtained from NonAC-PET images (PET segmentation-based AC [SegAC-PET]), and (iii) use of both NonAC-PET and SegAC-PET images in a Double-Channel fashion to predict the ground truth CT-based attenuation-corrected PET images (CTAC-PET). Since a simple two-class AC map (generated from NonAC-PET images) can easily be produced, this work assessed the added value of incorporating SegAC-PET images into direct AC in the image space. A 4-fold cross-validation scheme was adopted to train and evaluate the different models using 80 brain 18F-Fluorodeoxyglucose PET/CT images. The voxel-wise and region-wise accuracy of the models was examined by measuring the standardized uptake value (SUV) quantification bias in different regions of the brain. RESULTS: The overall root mean square error (RMSE) for the Double-Channel setting was 0.157 ± 0.08 SUV in the whole brain region, while RMSEs of 0.214 ± 0.07 and 0.189 ± 0.14 SUV were observed for the NonAC-PET and SegAC-PET models, respectively. A mean SUV bias of 0.01 ± 0.26% was achieved by the Double-Channel model regarding the activity concentration in the cerebellum region, as opposed to 0.08 ± 0.28% and 0.05 ± 0.28% SUV biases for the networks that uniquely used NonAC-PET or SegAC-PET as input, respectively. SegAC-PET images, with an SUV bias of -1.15 ± 0.54%, served as a benchmark for clinically accepted errors. In general, the Double-Channel network, relying on both SegAC-PET and NonAC-PET images, outperformed the other AC models. CONCLUSION: Since the generation of two-class AC maps from non-AC PET images is straightforward, the current study investigated the potential added value of incorporating SegAC-PET images into a deep learning-based direct AC approach. Altogether, compared with models that use only NonAC-PET or SegAC-PET images, the Double-Channel deep learning network exhibited superior attenuation correction accuracy.


Subject(s)
Deep Learning , Positron Emission Tomography Computed Tomography , Fluorodeoxyglucose F18 , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Positron-Emission Tomography/methods , Brain/diagnostic imaging
7.
Ann Nucl Med ; 38(1): 31-70, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37952197

ABSTRACT

We focus on reviewing state-of-the-art developments of dedicated PET scanners with irregular geometries and the potential of different aspects of multifunctional PET imaging. First, we discuss advances in non-conventional PET detector geometries. Then, we present innovative designs of organ-specific dedicated PET scanners for breast, brain, prostate, and cardiac imaging. We also review challenges and possible artifacts introduced by image reconstruction algorithms for PET scanners with irregular geometries, such as non-cylindrical and partial angular coverage geometries, and how they can be addressed. Finally, we address some open issues regarding the cost/benefit analysis of dedicated PET scanners, how far theoretical conceptual designs are from the market/clinic, and strategies to reduce fabrication costs without compromising performance.


Subject(s)
Image Processing, Computer-Assisted , Positron-Emission Tomography , Humans , Phantoms, Imaging , Positron-Emission Tomography/methods , Image Processing, Computer-Assisted/methods , Brain , Algorithms
8.
Radiol Phys Technol ; 17(1): 124-134, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37980315

ABSTRACT

This study aimed to assist doctors in detecting early-stage lung cancer. To achieve this, a hierarchical system that can detect nodules in the lungs using computed tomography (CT) images was developed. In the initial phase, a preexisting model (YOLOv5s) was used to detect lung nodules. A 0.3 confidence threshold was established for identifying nodules in this phase to enhance the model's sensitivity. The primary objective of the hierarchical model was to locate and categorize all lung nodules while minimizing the false-negative rate. Following the analysis of the results from the first phase, a novel 3D convolutional neural network (CNN) classifier was developed to examine and categorize the potential nodules detected by the YOLOv5s model. The objective was to create a detection framework characterized by an extremely low false positive rate and high accuracy. The Lung Nodule Analysis 2016 (LUNA 16) dataset was used to evaluate the effectiveness of this framework. This dataset comprises 888 CT scans that include the positions of 1186 nodules and 400,000 non-nodular regions in the lungs. The YOLOv5s technique yielded numerous incorrect detections owing to its low confidence level. Nevertheless, the addition of a 3D classification system significantly enhanced the precision of nodule identification. By integrating the outcomes of the YOLOv5s approach using a 30% confidence limit and the 3D CNN classification model, the overall system achieved 98.4% nodule detection accuracy and an area under the curve of 98.9%. Despite producing some false negatives and false positives, the suggested method for identifying lung nodules from CT scans is promising as a valuable aid in decision-making for nodule detection.
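The exact second-stage classifier is not described in the abstract; the PyTorch sketch below illustrates a small 3D CNN patch classifier of the kind used to accept or reject YOLOv5s candidates (nodule vs. non-nodule). Layer sizes are assumptions.

```python
# Illustrative 3D CNN candidate classifier; the paper's actual architecture
# is not given in the abstract, so all layer choices here are assumptions.
import torch
import torch.nn as nn

class NoduleClassifier3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        # Global pooling makes the head independent of the input patch size.
        self.head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, 2))

    def forward(self, x):  # x: (N, 1, D, H, W) CT patch around a 2D detection
        return self.head(self.features(x))
```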


Subject(s)
Lung Neoplasms , Solitary Pulmonary Nodule , Humans , Lung Neoplasms/diagnostic imaging , Solitary Pulmonary Nodule/diagnostic imaging , Tomography, X-Ray Computed/methods , Lung/diagnostic imaging , Neural Networks, Computer , Radiographic Image Interpretation, Computer-Assisted/methods
9.
Magn Reson Imaging Clin N Am ; 31(4): 503-515, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37741638

ABSTRACT

More than a decade has passed since the first commercial whole-body hybrid PET/MR scanner was deployed in the clinic. The major advantages and limitations of this technology have been investigated from technical and medical perspectives. Despite the remarkable advantages associated with hybrid PET/MR imaging, such as reduced radiation dose and fully simultaneous functional and structural imaging, this technology faced major challenges in terms of mutual interference between the MRI and PET components, in addition to the complexity of achieving quantitative imaging owing to the intricate MRI-guided attenuation correction in PET/MRI. In this review, the latest technical developments in PET/MRI technology as well as the state-of-the-art solutions to the major challenges of quantitative PET/MR imaging are discussed.


Subject(s)
Magnetic Resonance Imaging , Positron-Emission Tomography , Humans , Magnetic Resonance Imaging/methods , Multimodal Imaging , Technology
10.
Insights Imaging ; 14(1): 141, 2023 Aug 25.
Article in English | MEDLINE | ID: mdl-37620554

ABSTRACT

PURPOSE: This study focuses on assessing the performance of active learning techniques to train a brain MRI glioma segmentation model. METHODS: The publicly available training dataset provided for the 2021 RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge was used in this study, consisting of 1251 multi-institutional, multi-parametric MR images. Post-contrast T1, T2, and T2 FLAIR images as well as ground truth manual segmentation were used as input for the model. The data were split into a training set of 1151 cases and a testing set of 100 cases, with the testing set remaining constant throughout. Deep convolutional neural network segmentation models were trained using the NiftyNet platform. To test the viability of active learning in training a segmentation model, an initial reference model was trained using all 1151 training cases, followed by two additional models using only 575 cases and 100 cases. The resulting predicted segmentations of these two additional models on the remaining training cases were then appended to the training dataset for additional training. RESULTS: It was demonstrated that an active learning approach for manual segmentation can lead to comparable model performance for segmentation of brain gliomas (0.906 reference Dice score vs 0.868 active learning Dice score) while only requiring manual annotation for 28.6% of the data. CONCLUSION: The active learning approach when applied to model training can drastically reduce the time and labor spent on preparation of ground truth training data. CRITICAL RELEVANCE STATEMENT: Active learning concepts were applied to a deep learning-assisted segmentation of brain gliomas from MR images to assess their viability in reducing the required amount of manually annotated ground truth data in model training. KEY POINTS: • This study focuses on assessing the performance of active learning techniques to train a brain MRI glioma segmentation model. • The active learning approach for manual segmentation can lead to comparable model performance for segmentation of brain gliomas. • Active learning when applied to model training can drastically reduce the time and labor spent on preparation of ground truth training data.
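Schematically, the described procedure is one round of pool-based active learning: train on a small labeled subset, predict on the remaining pool, and fold the reviewed predictions back into the training set. The sketch below is a generic skeleton with caller-supplied functions, not NiftyNet code.

```python
# Generic active-learning round; train_fn, predict_fn, and review_fn are
# caller-supplied placeholders (review_fn would return an expert-corrected
# segmentation), not NiftyNet API calls.
def active_learning_round(train_fn, predict_fn, review_fn, labeled, pool):
    """Seed model -> pseudo-label the pool -> retrain on the enlarged set."""
    model = train_fn(labeled)
    pseudo = [(case, review_fn(case, predict_fn(model, case))) for case in pool]
    return train_fn(labeled + pseudo)
```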

11.
J Med Signals Sens ; 13(2): 118-128, 2023.
Article in English | MEDLINE | ID: mdl-37448548

ABSTRACT

Background: Computed tomography (CT) scanning is one of the main tools used to diagnose and grade COVID-19 progression. To avoid the side effects of CT imaging, low-dose CT imaging is of crucial importance to reduce population absorbed dose. However, this approach introduces considerable noise levels in CT images. Methods: In this light, we set out to simulate four reduced dose levels (60% dose, 40% dose, 20% dose, and 10% dose) of standard CT imaging using Beer-Lambert's law across 49 patients infected with COVID-19. Then, three denoising filters, namely Gaussian, bilateral, and median, were applied to the different low-dose CT images, the quality of which was assessed prior to and after the application of the various filters via calculation of peak signal-to-noise ratio, root mean square error (RMSE), structural similarity index measure, and relative CT-value bias, separately for the lung tissue and whole body. Results: The quantitative evaluation indicated that 10%-dose CT images have inferior quality (with RMSE = 322.1 ± 104.0 HU and bias = 11.44% ± 4.49% in the lung) even after the application of the denoising filters. The bilateral filter exhibited superior performance to suppress the noise and recover the underlying signals in low-dose CT images compared to the other denoising techniques. The bilateral filter led to RMSE and bias of 100.21 ± 16.47 HU and -0.21% ± 1.20%, respectively, in the lung regions for 20%-dose CT images, compared to the Gaussian filter with RMSE = 103.46 ± 15.70 HU and bias = 1.02% ± 1.68% and the median filter with RMSE = 129.60 ± 18.09 HU and bias = -6.15% ± 2.24%. Conclusions: The 20%-dose CT imaging followed by bilateral filtering offered a reasonable compromise between image quality and patient dose reduction.
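A minimal sketch of the dose-reduction idea: under Beer-Lambert attenuation, a lower incident photon flux yields noisier transmitted counts, which propagate into noisier attenuation values. Applying this directly to attenuation values (rather than a full projection pipeline) and the incident flux I0 are simplifying assumptions, not the paper's exact method; the bilateral filter shown afterwards is the denoiser that performed best in the study.

```python
# Simplified low-dose simulation via Beer-Lambert with Poisson counting noise;
# I0 and the image-domain shortcut are assumptions for illustration only.
import numpy as np
from skimage.restoration import denoise_bilateral

def simulate_low_dose(mu_image, dose_fraction=0.2, I0=1e5, seed=0):
    rng = np.random.default_rng(seed)
    # Beer-Lambert: transmitted counts for incident flux I0 * dose_fraction.
    transmitted = dose_fraction * I0 * np.exp(-mu_image)
    noisy_counts = np.clip(rng.poisson(transmitted), 1, None)  # avoid log(0)
    return -np.log(noisy_counts / (dose_fraction * I0))        # back to attenuation

# Edge-preserving denoising of a (rescaled, float) low-dose slice, e.g.:
# denoised = denoise_bilateral(low_dose, sigma_color=0.05, sigma_spatial=2)
```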

12.
J Digit Imaging ; 36(4): 1588-1596, 2023 08.
Article in English | MEDLINE | ID: mdl-36988836

ABSTRACT

The existing deep learning-based denoising methods predicting standard-dose PET images (S-PET) from low-dose versions (L-PET) rely solely on a single dose level of PET images as the input to the deep learning network. In this work, we exploited prior knowledge in the form of multiple low-dose levels of PET images to estimate the S-PET images. To this end, a high-resolution ResNet architecture was utilized to predict S-PET images from 6% and 4% L-PET images. For the 6% L-PET imaging, two models were developed; the first was trained using a single input of 6% L-PET images, while the second was trained using three inputs of 6%, 4%, and 2% L-PET images to predict S-PET images. Similarly, for 4% L-PET imaging, a model was trained using a single input of 4% low-dose data, and a three-channel model was developed taking 4%, 3%, and 2% L-PET images as input. The performance of the four models was evaluated using the structural similarity index (SSI), peak signal-to-noise ratio (PSNR), and root mean square error (RMSE) within the entire head region and malignant lesions. The 4% multi-input model led to improved SSI and PSNR and a significant decrease in RMSE by 22.22% and 25.42% within the entire head region and malignant lesions, respectively. Furthermore, the 4% multi-input network remarkably decreased the lesions' SUVmean bias and SUVmax bias by 64.58% and 37.12% compared to the single-input network. In addition, the 6% multi-input network decreased the RMSE within the entire head region, the RMSE within the lesions, the lesions' SUVmean bias, and the SUVmax bias by 37.5%, 39.58%, 86.99%, and 45.60%, respectively. This study demonstrated the significant benefits of using prior knowledge in the form of multiple L-PET images to predict S-PET images.


Subject(s)
Image Processing, Computer-Assisted , Positron-Emission Tomography , Humans , Positron-Emission Tomography/methods , Signal-To-Noise Ratio , Image Processing, Computer-Assisted/methods
13.
Eur J Nucl Med Mol Imaging ; 50(7): 1881-1896, 2023 06.
Article in English | MEDLINE | ID: mdl-36808000

ABSTRACT

PURPOSE: Partial volume effect (PVE) is a consequence of the limited spatial resolution of PET scanners. PVE can cause the intensity values of a particular voxel to be underestimated or overestimated due to the effect of surrounding tracer uptake. We propose a novel partial volume correction (PVC) technique to overcome the adverse effects of PVE on PET images. METHODS: Two hundred and twelve clinical brain PET scans, including 50 18F-Fluorodeoxyglucose (18F-FDG), 50 18F-Flortaucipir, 36 18F-Flutemetamol, and 76 18F-FluoroDOPA, and their corresponding T1-weighted MR images were enrolled in this study. The Iterative Yang technique was used for PVC as a reference or surrogate of the ground truth for evaluation. A cycle-consistent adversarial network (CycleGAN) was trained to directly map non-PVC PET images to PVC PET images. Quantitative analysis using various metrics, including structural similarity index (SSIM), root mean squared error (RMSE), and peak signal-to-noise ratio (PSNR), was performed. Furthermore, voxel-wise and region-wise correlations of activity concentration between the predicted and reference images were evaluated through joint histogram and Bland and Altman analysis. In addition, radiomic analysis was performed by calculating 20 radiomic features within 83 brain regions. Finally, a voxel-wise two-sample t-test was used to compare the predicted PVC PET images with reference PVC images for each radiotracer. RESULTS: The Bland and Altman analysis showed the largest and smallest variance for 18F-FDG (95% CI: -0.29, +0.33 SUV; mean = 0.02 SUV) and 18F-Flutemetamol (95% CI: -0.26, +0.24 SUV; mean = -0.01 SUV), respectively. The PSNR was lowest (29.64 ± 1.13 dB) for 18F-FDG and highest (36.01 ± 3.26 dB) for 18F-Flutemetamol. The smallest and largest SSIM were achieved for 18F-FDG (0.93 ± 0.01) and 18F-Flutemetamol (0.97 ± 0.01), respectively. The average relative error for the kurtosis radiomic feature was 3.32%, 9.39%, 4.17%, and 4.55%, while it was 4.74%, 8.80%, 7.27%, and 6.81% for the NGLDM_contrast feature for 18F-Flutemetamol, 18F-FluoroDOPA, 18F-FDG, and 18F-Flortaucipir, respectively. CONCLUSION: An end-to-end CycleGAN PVC method was developed and evaluated. Our model generates PVC images from the original non-PVC PET images without requiring additional anatomical information, such as MRI or CT. Our model eliminates the need for accurate registration, segmentation, or PET scanner system response characterization. In addition, no assumptions regarding anatomical structure size, homogeneity, boundary, or background level are required.
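For orientation, the core of CycleGAN training is the cycle-consistency penalty that constrains the learned non-PVC-to-PVC mapping; a minimal PyTorch sketch follows, where the two generator networks are caller-supplied placeholders and the weight lam is an assumption.

```python
# Cycle-consistency term of a CycleGAN (adversarial terms omitted for
# brevity); g_ab and g_ba are placeholder generator modules.
import torch

def cycle_loss(g_ab, g_ba, real_a, real_b, lam=10.0):
    rec_a = g_ba(g_ab(real_a))  # non-PVC -> PVC -> non-PVC should round-trip
    rec_b = g_ab(g_ba(real_b))  # PVC -> non-PVC -> PVC should round-trip
    return lam * (torch.mean(torch.abs(rec_a - real_a)) +
                  torch.mean(torch.abs(rec_b - real_b)))
```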


Subject(s)
Aniline Compounds , Fluorodeoxyglucose F18 , Humans , Positron-Emission Tomography/methods , Brain/diagnostic imaging , Image Processing, Computer-Assisted/methods
14.
Eur Radiol ; 33(5): 3243-3252, 2023 May.
Article in English | MEDLINE | ID: mdl-36703015

ABSTRACT

OBJECTIVES: This study aimed to improve patient positioning accuracy by relying on a CT localizer and a deep neural network to optimize image quality and radiation dose. METHODS: We included 5754 chest CT axial and anterior-posterior (AP) images from two different centers, C1 and C2. After pre-processing, images were split into training (80%) and test (20%) datasets. A deep neural network was trained to generate 3D axial images from the AP localizer. The geometric centerlines of patient bodies were indicated by creating a bounding box on the predicted images. The distance between the body centerline, estimated by the deep learning model and ground truth (BCAP), was compared with patient mis-centering during manual positioning (BCMP). We evaluated the performance of our model in terms of distance between the lung centerline estimated by the deep learning model and the ground truth (LCAP). RESULTS: The error in terms of BCAP was -0.75 ± 7.73 mm and 2.06 ± 10.61 mm for C1 and C2, respectively. This error was significantly lower than BCMP, which achieved an error of 9.35 ± 14.94 and 13.98 ± 14.5 mm for C1 and C2, respectively. The absolute BCAP was 5.7 ± 5.26 and 8.26 ± 6.96 mm for C1 and C2, respectively. The LCAP metric was 1.56 ± 10.8 and -0.27 ± 16.29 mm for C1 and C2, respectively. The error in terms of BCAP and LCAP was higher for larger patients (p-value < 0.01). CONCLUSION: The accuracy of the proposed method was comparable to available alternative methods, carrying the advantage of being free from errors related to objects blocking the camera visibility. KEY POINTS: • Patient mis-centering in the anterior-posterior direction (AP) is a common problem in clinical practice which can degrade image quality and increase patient radiation dose. • We proposed a deep neural network for automatic patient positioning using only the CT image localizer, achieving a performance comparable to alternative techniques, such as the external 3D visual camera. • The advantage of the proposed method is that it is free from errors related to objects blocking the camera visibility and that it could be implemented on imaging consoles as a patient positioning support tool.
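The centerline step can be sketched as follows: a body mask is extracted from a predicted axial slice, and the bounding-box center along the vertical (anterior-posterior) axis is compared with the image center. The threshold and the mm conversion via pixel spacing are assumptions for illustration.

```python
# Sketch of the AP mis-centering measurement on one predicted axial slice in
# HU-like units; the body-mask threshold is an illustrative assumption.
import numpy as np

def ap_offset_mm(predicted_slice, pixel_spacing_mm, threshold=-300):
    body_mask = predicted_slice > threshold           # crude body segmentation
    rows = np.nonzero(body_mask)[0]
    body_center = (rows.min() + rows.max()) / 2.0     # bounding-box center, AP axis
    image_center = (predicted_slice.shape[0] - 1) / 2.0
    return (body_center - image_center) * pixel_spacing_mm
```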


Subject(s)
Neural Networks, Computer , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Imaging, Three-Dimensional , Patient Positioning/methods , Image Processing, Computer-Assisted/methods
15.
Eur J Nucl Med Mol Imaging ; 50(4): 1034-1050, 2023 03.
Article in English | MEDLINE | ID: mdl-36508026

ABSTRACT

PURPOSE: Attenuation correction and scatter compensation (AC/SC) are two main steps toward quantitative PET imaging, which remain challenging in PET-only and PET/MRI systems. These can be effectively tackled via deep learning (DL) methods. However, trustworthy and generalizable DL models commonly require well-curated, heterogeneous, and large datasets from multiple clinical centers. At the same time, owing to legal/ethical issues and privacy concerns, forming a large collective, centralized dataset poses significant challenges. In this work, we aimed to develop a DL-based model in a multicenter setting without direct sharing of data using federated learning (FL) for AC/SC of PET images. METHODS: Non-attenuation/scatter corrected and CT-based attenuation/scatter corrected (CT-ASC) 18F-FDG PET images of 300 patients were enrolled in this study. The dataset consisted of 6 different centers, each with 50 patients, with scanner, image acquisition, and reconstruction protocols varying across the centers. CT-based ASC PET images served as the standard reference. All images were reviewed to include high-quality and artifact-free PET images. Both corrected and uncorrected PET images were converted to standardized uptake values (SUVs). We used a modified nested U-Net utilizing residual U-blocks in a U-shaped architecture. We evaluated two FL models, namely sequential (FL-SQ) and parallel (FL-PL), and compared their performance with the baseline centralized (CZ) learning model, wherein the data were pooled to one server, as well as center-based (CB) models, in which a model was built and evaluated separately for each center. Data from each center were divided to contribute to training (30 patients), validation (10 patients), and test sets (10 patients). Final evaluations and reports were performed on 60 patients (10 patients from each center). RESULTS: In terms of percent SUV absolute relative error (ARE%), both FL-SQ (CI:12.21-14.81%) and FL-PL (CI:11.82-13.84%) models demonstrated excellent agreement with the centralized framework (CI:10.32-12.00%), while FL-based algorithms improved model performance by over 11% compared to the CB training strategy (CI: 22.34-26.10%). Furthermore, the Mann-Whitney test between different strategies revealed no significant differences between CZ and FL-based algorithms (p-value > 0.05) in center-categorized mode. At the same time, a significant difference was observed between the different training approaches on the overall dataset (p-value < 0.05). In addition, voxel-wise comparison, with respect to reference CT-ASC, exhibited similar performance for images predicted by CZ (R2 = 0.94), FL-SQ (R2 = 0.93), and FL-PL (R2 = 0.92), while the CB model achieved a far lower coefficient of determination (R2 = 0.74). Despite the strong correlations between CZ and FL-based methods compared to reference CT-ASC, a slight underestimation of predicted voxel values was observed. CONCLUSION: Deep learning-based models provide promising results toward quantitative PET image reconstruction. Specifically, we developed two FL models and compared their performance with center-based and centralized models. The proposed FL-based models achieved higher performance compared to center-based models, comparable with centralized models. Our work provided strong empirical evidence that the FL framework can fully benefit from the generalizability and robustness of DL models used for AC/SC in PET, while obviating the need for the direct sharing of datasets between clinical imaging centers.
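As a sketch of the parallel variant (FL-PL), one round of federated averaging can be expressed as a weighted aggregation of locally trained model weights; weighting by per-center sample count is an assumption about the exact protocol.

```python
# One aggregation round of federated averaging (FedAvg-style); each center
# trains locally, the server only sees weights. Weighting scheme is assumed.
import numpy as np

def fedavg(center_weights, center_sizes):
    """center_weights: one list of NumPy arrays (layers) per center."""
    total = float(sum(center_sizes))
    averaged = []
    for layer_across_centers in zip(*center_weights):
        averaged.append(sum(w * (n / total)
                            for w, n in zip(layer_across_centers, center_sizes)))
    return averaged
```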


Subject(s)
Deep Learning , Image Processing, Computer-Assisted , Humans , Image Processing, Computer-Assisted/methods , Positron Emission Tomography Computed Tomography , Positron-Emission Tomography/methods , Magnetic Resonance Imaging/methods
16.
J Digit Imaging ; 36(2): 574-587, 2023 04.
Article in English | MEDLINE | ID: mdl-36417026

ABSTRACT

In this study, an inter-fraction organ deformation simulation framework for locally advanced cervical cancer (LACC), which considers anatomical flexibility, rigidity, and motion within an image deformation, was proposed. Data included 57 CT scans (7202 2D slices) of patients with LACC, randomly divided into train (n = 42) and test (n = 15) datasets. In addition to the CT images and the corresponding RT structures (bladder, cervix, and rectum), the bone was segmented, and the couches were eliminated. A correlated stochastic field of the same size as the target image (used for deformation) was simulated to produce a general random deformation. The deformation field was optimized to have a maximum amplitude in the rectum region, a moderate amplitude in the bladder region, and an amplitude as small as possible within bony structures. The DIRNet is a convolutional neural network that consists of convolutional regressor, spatial transformation, and resampling blocks; it was implemented with different parameter settings. Mean Dice indices of 0.89 ± 0.02, 0.96 ± 0.01, and 0.93 ± 0.02 were obtained for the cervix, bladder, and rectum (defined as organs at risk), respectively. Furthermore, mean average symmetric surface distances of 1.61 ± 0.46 mm for the cervix, 1.17 ± 0.15 mm for the bladder, and 1.06 ± 0.42 mm for the rectum were achieved. In addition, mean Jaccard indices of 0.86 ± 0.04 for the cervix, 0.93 ± 0.01 for the bladder, and 0.88 ± 0.04 for the rectum were observed on the test dataset (15 subjects). Deep learning-based non-rigid image registration is, therefore, proposed for high-dose-rate brachytherapy in inter-fraction cervical cancer, since it outperformed conventional algorithms.


Subject(s)
Brachytherapy , Deep Learning , Uterine Cervical Neoplasms , Female , Humans , Brachytherapy/methods , Radiotherapy Dosage , Radiotherapy Planning, Computer-Assisted/methods , Rectum , Uterine Cervical Neoplasms/diagnostic imaging , Uterine Cervical Neoplasms/radiotherapy
17.
Eur J Radiol ; 157: 110602, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36410091

ABSTRACT

PURPOSE: Extracting water equivalent diameter (DW), as a good descriptor of patient size, from the CT localizer before the spiral scan not only minimizes truncation errors due to the limited scan field-of-view but also enables prior size-specific dose estimation as well as scan protocol optimization. This study proposed a unified methodology to measure patient size, shape, and attenuation parameters from a 2D anterior-posterior localizer image using deep learning algorithms without the need for labor-intensive vendor-specific calibration procedures. METHODS: 3D CT chest images and 2D localizers were collected for 4005 patients. A modified U-NET architecture was trained to predict the 3D CT images from their corresponding localizer scans. The algorithm was tested on 648 and 138 external cases with fixed and variable table height positions. To evaluate the performance of the prediction model, structural similarity index measure (SSIM), body area, body contour, Dice index, and water equivalent diameter (DW) were calculated and compared between the predicted 3D CT images and the ground truth (GT) images in a slicewise manner. RESULTS: The average age of the patients included in this study (1827 male and 1554 female) was 53.8 ± 17.9 (18-120) years. The DW, tube current, and CTDIvol measured on original axial images in the external 138 cases group were significantly larger than those of the external 648 cases (P < 0.05). The SSIM and Dice index calculated between the prediction and GT for body contour were 0.998 ± 0.001 and 0.950 ± 0.016, respectively. The average percentage error in the calculation of DW was 2.7 ± 3.5%. The error in the DW calculation was more considerable in larger patients (p-value < 0.05). CONCLUSIONS: We developed a model to predict the patient size, shape, and attenuation factors slice-by-slice prior to spiral scanning. The model exhibited remarkable robustness to table height variations. The estimated parameters are helpful for patient dose reduction and protocol optimization.
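For reference, the standard water equivalent diameter computation (in the spirit of AAPM Report 220) from one axial slice in Hounsfield units is sketched below; this is the quantity computed on the predicted slices. The body-mask threshold is an assumption.

```python
# DW from a single axial CT slice: water-equivalent area from HU values,
# then the diameter of the circle with that area.
import numpy as np

def water_equivalent_diameter(hu_slice, pixel_area_mm2, body_mask=None):
    if body_mask is None:
        body_mask = hu_slice > -300          # crude body threshold; an assumption
    hu = hu_slice[body_mask]
    area_w = np.sum(hu / 1000.0 + 1.0) * pixel_area_mm2  # water-equivalent area
    return 2.0 * np.sqrt(area_w / np.pi)                 # DW in mm
```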


Subject(s)
Deep Learning , Humans , Female , Male , Adult , Middle Aged , Aged , Thorax , Tomography, X-Ray Computed , Algorithms , Calibration
18.
Sci Rep ; 12(1): 14817, 2022 09 01.
Article in English | MEDLINE | ID: mdl-36050434

ABSTRACT

We aimed to construct a prediction model based on computed tomography (CT) radiomics features to classify COVID-19 patients into severe-, moderate-, mild-, and non-pneumonic. A total of 1110 patients were studied from a publicly available dataset with 4-class severity scoring performed by a radiologist (based on CT images and clinical features). The entire lungs were segmented, followed by resizing, bin discretization, and radiomic feature extraction. We utilized two feature selection algorithms, namely bagging random forest (BRF) and multivariate adaptive regression splines (MARS), each coupled to a classifier, namely multinomial logistic regression (MLR), to construct multiclass classification models. The dataset was divided into 50% (555 samples), 20% (223 samples), and 30% (332 samples) for training, validation, and untouched test datasets, respectively. Subsequently, nested cross-validation was performed on train/validation to select the features and tune the models. All predictive power indices were reported based on the testing set. The performance of multi-class models was assessed using precision, recall, F1-score, and accuracy based on the 4 × 4 confusion matrices. In addition, the areas under the receiver operating characteristic curves (AUCs) for multi-class classifications were calculated and compared for both models. Using BRF, 23 radiomic features were selected, 11 from first-order, 9 from GLCM, 1 from GLRLM, 1 from GLDM, and 1 from shape. Ten features were selected using the MARS algorithm, namely 3 from first-order, 1 from GLDM, 1 from GLRLM, 1 from GLSZM, 1 from shape, and 3 from GLCM features. The mean absolute deviation, skewness, and variance from first-order and flatness from shape, and cluster prominence from GLCM features and Gray Level Non-Uniformity Normalized from GLRLM were selected by both BRF and MARS algorithms. All selected features by BRF or MARS were significantly associated with four-class outcomes as assessed within MLR (all p values < 0.05). BRF + MLR and MARS + MLR resulted in pseudo-R2 prediction performances of 0.305 and 0.253, respectively. Meanwhile, there was a significant difference between the feature selection models when using a likelihood ratio test (p value = 0.046). Based on confusion matrices for BRF + MLR and MARS + MLR algorithms, the precision was 0.856 and 0.728, the recall was 0.852 and 0.722, whereas the accuracy was 0.921 and 0.861, respectively. AUCs (95% CI) for multi-class classification were 0.846 (0.805-0.887) and 0.807 (0.752-0.861) for BRF + MLR and MARS + MLR algorithms, respectively. Our models based on the utilization of radiomic features, coupled with machine learning, were able to accurately classify patients according to the severity of pneumonia, thus highlighting the potential of this emerging paradigm in the prognostication and management of COVID-19 patients.
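A hedged sketch of the modeling stage with scikit-learn: a random-forest importance ranking stands in here for the paper's bagging random forest (BRF) selector, feeding the selected features into a multinomial logistic regression. Hyperparameters are assumptions.

```python
# Feature selection (forest-importance proxy for BRF) followed by multinomial
# logistic regression, mirroring the BRF + MLR pipeline described above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

def fit_severity_model(X_train, y_train, n_features=23):
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    rf.fit(X_train, y_train)
    top = np.argsort(rf.feature_importances_)[::-1][:n_features]  # keep top features
    mlr = LogisticRegression(max_iter=5000)  # handles multiclass multinomially
    mlr.fit(X_train[:, top], y_train)
    return top, mlr
```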


Subject(s)
COVID-19 , Algorithms , COVID-19/diagnostic imaging , Humans , Machine Learning , ROC Curve , Tomography, X-Ray Computed/methods
19.
Hum Brain Mapp ; 43(16): 5032-5043, 2022 11.
Article in English | MEDLINE | ID: mdl-36087092

ABSTRACT

We aim to synthesize brain time-of-flight (TOF) PET images/sinograms from their corresponding non-TOF information in the image space (IS) and sinogram space (SS) to increase the signal-to-noise ratio (SNR) and contrast of abnormalities, and decrease the bias in tracer uptake quantification. One hundred forty clinical brain 18F-FDG PET/CT scans were collected to generate TOF and non-TOF sinograms. The TOF sinograms were split into seven time bins (0, ±1, ±2, ±3). The predicted TOF sinograms were reconstructed, and the performance of both models (IS and SS) was compared with the reference TOF and non-TOF data. Wide-ranging quantitative and statistical analysis metrics, including structural similarity index metric (SSIM) and root mean square error (RMSE), as well as 28 radiomic features for 83 brain regions, were extracted to evaluate the performance of the CycleGAN model. SSIM and RMSE of 0.99 ± 0.03, 0.98 ± 0.02 and 0.12 ± 0.09, 0.16 ± 0.04 were achieved for the generated TOF-PET images in IS and SS, respectively. They were 0.97 ± 0.03 and 0.22 ± 0.12, respectively, for non-TOF-PET images. The Bland & Altman analysis revealed that the lowest tracer uptake value bias (-0.02%) and minimum variance (95% CI: -0.17%, +0.21%) were achieved for TOF-PET images generated in IS. For malignant lesions, the contrast in the test dataset was enhanced from 3.22 ± 2.51 for non-TOF to 3.34 ± 0.41 and 3.65 ± 3.10 for TOF PET in SS and IS, respectively. The implemented CycleGAN is capable of generating TOF from non-TOF PET images to achieve better image quality.


Subject(s)
Deep Learning , Fluorodeoxyglucose F18 , Humans , Image Processing, Computer-Assisted/methods , Positron-Emission Tomography , Positron Emission Tomography Computed Tomography , Brain/diagnostic imaging
20.
Iran J Med Sci ; 47(5): 440-449, 2022 09.
Article in English | MEDLINE | ID: mdl-36117575

ABSTRACT

Background: Automated image segmentation is an essential step in quantitative image analysis. This study assesses the performance of a deep learning-based model for lung segmentation from computed tomography (CT) images of normal and COVID-19 patients. Methods: A descriptive-analytical study was conducted from December 2020 to April 2021 on the CT images of patients from various educational hospitals affiliated with Mashhad University of Medical Sciences (Mashhad, Iran). Of the selected images and corresponding lung masks of 1,200 confirmed COVID-19 patients, 1,080 were used to train a residual neural network. The performance of the residual network (ResNet) model was evaluated on two distinct external test datasets, namely the remaining 120 COVID-19 patients and 120 normal patients. Different evaluation metrics, such as the Dice similarity coefficient (DSC), mean absolute error (MAE), relative mean Hounsfield unit (HU) difference, and relative volume difference, were calculated to assess the accuracy of the predicted lung masks. The Mann-Whitney U test was used to assess the difference between the corresponding values in the normal and COVID-19 patients. P<0.05 was considered statistically significant. Results: The ResNet model achieved DSCs of 0.980 and 0.971 and relative mean HU differences of -2.679% and -4.403% for the normal and COVID-19 patients, respectively. Comparable performance in lung segmentation of normal and COVID-19 patients indicated the model's accuracy for identifying lung tissue in the presence of COVID-19-associated infections, although slightly better performance was observed in normal patients. Conclusion: The ResNet model provides accurate and reliable automated lung segmentation of COVID-19-infected lung tissue. A preprint version of this article was published on arXiv before formal peer review (https://arxiv.org/abs/2104.02042).
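For completeness, the Dice similarity coefficient used to score the predicted lung masks against the reference masks reduces to a few lines:

```python
# Dice similarity coefficient between a predicted and a reference binary mask.
import numpy as np

def dice(pred_mask, ref_mask, eps=1e-8):
    pred = pred_mask.astype(bool)
    ref = ref_mask.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum() + eps)
```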


Subject(s)
COVID-19 , COVID-19/diagnostic imaging , Humans , Lung/diagnostic imaging , Neural Networks, Computer , Thorax , Tomography, X-Ray Computed/methods