Results 1 - 20 of 29
1.
J Endourol ; 2024 May 16.
Article in English | MEDLINE | ID: mdl-38695176

ABSTRACT

Background: Differential kidney function assessment is an important part of the preoperative evaluation for various urological interventions. It is obtained through dedicated nuclear medicine imaging and is not yet available through conventional imaging. Objective: We assess whether differential kidney function can be obtained through evaluation of contrast-enhanced computed tomography (CT) using a combination of deep learning and (2D and 3D) radiomic features. Methods: All patients who underwent kidney nuclear scanning at Mayo Clinic sites between 2018 and 2022 were identified, and CT scans of the kidneys obtained within a 3-month interval before or after the nuclear scans were extracted. Patients who underwent a urological or radiological intervention within this time frame were excluded. A segmentation model was used to segment both kidneys. 2D and 3D radiomic features were extracted and compared between the two kidneys to compute delta radiomics and assess their ability to predict differential kidney function. Performance was reported using receiver operating characteristics, sensitivity, and specificity. Results: Studies from Arizona and Rochester formed our internal dataset (n = 1,159). Studies from Florida were processed separately as an external test set to validate generalizability. We obtained 323 studies from our internal sites and 39 studies from external sites. The best results were obtained by a random forest model trained on 3D delta-radiomics features, which achieved an area under the curve (AUC) of 0.85 and 0.81 on the internal and external test sets, respectively, with specificity and sensitivity of 0.84 and 0.68 on the internal set and 0.70 and 0.65 on the external set. Conclusion: The proposed automated pipeline can derive important differential kidney function information from contrast-enhanced CT and reduce the need for dedicated nuclear scans in early-stage differential kidney function assessment.
Clinical Impact: We establish a machine learning methodology for assessing differential kidney function from routine CT without the need for expensive and radioactive nuclear medicine scans.
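As a rough illustration of the delta-radiomics step described above: the abstract does not give the exact delta definition, so the symmetric relative difference below is an assumed, scale-free stand-in for whatever comparison the pipeline actually uses.

```python
import numpy as np

def delta_radiomics(left_feats, right_feats, eps=1e-8):
    """Symmetric relative difference between paired kidney feature vectors.

    Assumption: the abstract only says features are "compared between the
    two kidneys"; this is one common, scale-free way to do that.
    """
    left = np.asarray(left_feats, dtype=float)
    right = np.asarray(right_feats, dtype=float)
    return (left - right) / (np.abs(left) + np.abs(right) + eps)

# One delta vector per study would feed the downstream random forest.
delta = delta_radiomics([10.0, 2.5, 0.8], [8.0, 2.5, 1.2])
```

A feature identical in both kidneys yields a delta of zero, so the classifier sees only asymmetries between the two organs.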

2.
medRxiv ; 2024 Jan 11.
Article in English | MEDLINE | ID: mdl-38260571

ABSTRACT

Background: We aimed to create an opportunistic screening strategy using multitask deep learning to stratify prediction of coronary artery calcium (CAC) and associated cardiovascular risk from frontal chest x-rays (CXR) and minimal data from electronic health records (EHR). Methods: In this retrospective study, 2,121 patients with available computed tomography (CT) scans and corresponding CXR images were collected internally (Mayo Enterprise), with calculated CAC scores binned into three categories (0, 1-99, and 100+) as ground truth for model training. Models trained internally were tested on multiple external datasets (domestic (EUH) and foreign (VGHTPE)) with significant racial and ethnic differences, and classification performance was compared. Findings: Classification among the 0, 1-99, and 100+ CAC categories was moderate on both the internal test set and the external datasets, reaching average F1-scores of 0.66 for Mayo, 0.62 for EUH, and 0.61 for VGHTPE. For the clinically relevant binary task of 0 vs 400+ CAC classification, the model reached an average AUROC of 0.84 across the internal test and external datasets. Interpretation: The fusion model trained on CXR predicted CAC scores better (0.84 average AUROC on internal and external datasets) than existing state-of-the-art models evaluated internally (0.73 AUROC), with robust performance on external datasets. Our proposed model may therefore serve as a robust, first-pass opportunistic screening method for cardiovascular risk from routine chest radiographs. For community use, the trained model and inference code can be downloaded under an academic open-source license from https://github.com/jeong-jasonji/MTL_CAC_classification . Funding: The study was partially supported by National Institutes of Health award 1R01HL155410-01A1.
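The three-way CAC binning used as ground truth above maps directly onto a digitize call; a minimal sketch of that labeling step (bin edges taken from the abstract, function name hypothetical):

```python
import numpy as np

def bin_cac(scores):
    """Map Agatston CAC scores to the paper's training categories:
    0 -> class 0, 1-99 -> class 1, 100+ -> class 2."""
    return np.digitize(np.asarray(scores, dtype=float), bins=[1, 100])

labels = bin_cac([0, 50, 99, 100, 812])  # -> [0, 1, 1, 2, 2]
```

With `np.digitize`'s default right-open bins, a score lands in class i when it is at least `bins[i-1]` and below `bins[i]`, which matches the 0 / 1-99 / 100+ split exactly.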

3.
JACC Cardiovasc Imaging ; 17(4): 349-360, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37943236

ABSTRACT

BACKGROUND: Constrictive pericarditis (CP) is an uncommon but reversible cause of diastolic heart failure if appropriately identified and treated. However, its diagnosis remains a challenge for clinicians. Artificial intelligence may enhance the identification of CP. OBJECTIVES: The authors proposed a deep learning approach based on transthoracic echocardiography to differentiate CP from restrictive cardiomyopathy. METHODS: Patients with a confirmed diagnosis of CP or cardiac amyloidosis (CA) (as the representative disease of restrictive cardiomyopathy) at Mayo Clinic Rochester from January 2003 to December 2021 were identified to extract baseline demographics. The apical 4-chamber view from transthoracic echocardiography studies was used as input data. The patients were split 60:20:20 into training, validation, and held-out test sets for the ResNet50 deep learning model. Model performance (differentiating CP and CA) was evaluated in the test set with the area under the curve. GradCAM was used for model interpretation. RESULTS: A total of 381 patients were identified, including 184 (48.3%) CP and 197 (51.7%) CA cases. The mean age was 68.7 ± 11.4 years, and 72.8% were male. ResNet50 achieved an area under the curve of 0.97 on the two-class classification task (CP vs CA). The GradCAM heatmap showed activation around the ventricular septal area. CONCLUSIONS: With a standard apical 4-chamber view, our artificial intelligence model provides a platform to facilitate the detection of CP, allowing for improved workflow efficiency and prompt referral for more advanced evaluation and intervention.


Subject(s)
Cardiomyopathy, Restrictive , Deep Learning , Pericarditis, Constrictive , Humans , Male , Middle Aged , Aged , Aged, 80 and over , Female , Cardiomyopathy, Restrictive/diagnostic imaging , Pericarditis, Constrictive/diagnostic imaging , Artificial Intelligence , Predictive Value of Tests , Echocardiography , Diagnosis, Differential
4.
J Med Imaging (Bellingham) ; 10(5): 054502, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37840850

ABSTRACT

Purpose: The inherent characteristics of transthoracic echocardiography (TTE) images, such as low signal-to-noise ratio and acquisition variations, can limit the direct use of TTE images in the development and generalization of deep learning models. We therefore propose an automated framework to address common challenges in generalizing echocardiography deep learning models, applied to the difficult task of differentiating constrictive pericarditis (CP) from cardiac amyloidosis (CA). Approach: Patients with a confirmed diagnosis of CP or CA and normal cases from Mayo Clinic Rochester and Arizona were identified to extract baseline demographics and the apical 4-chamber view from TTE studies. We propose an innovative preprocessing and image generalization framework to process the images for training the ResNet50, ResNeXt101, and EfficientNetB2 models, with ablation studies to quantify the contribution of each processing step to the final classification performance. Results: The models were initially trained and validated on 720 unique TTE studies from Mayo Rochester and further validated on 225 studies from Mayo Arizona. With our proposed generalization framework, EfficientNetB2 generalized the best, with an average area under the curve (AUC) of 0.96 (±0.01) and 0.83 (±0.03) on the Rochester and Arizona test sets, respectively. Conclusions: Leveraging the proposed generalization techniques, we developed an echocardiography-based deep learning model that can accurately differentiate CP from CA and normal cases, and applied the model to images from two sites. The proposed framework can be further extended to the development of other echocardiography-based deep learning models.

5.
Abdom Radiol (NY) ; 48(11): 3537-3549, 2023 11.
Article in English | MEDLINE | ID: mdl-37665385

ABSTRACT

PURPOSE: To develop and assess the utility of synthetic dual-energy CT (sDECT) images generated from single-energy CT (SECT) using two state-of-the-art generative adversarial network (GAN) architectures for artificial intelligence-based image translation. METHODS: In this retrospective study, 734 patients (389 female; mean age 62.8 ± 14.9 years) who underwent enhanced DECT of the chest, abdomen, and pelvis between January 2018 and June 2019 were included. Using 70-keV images as inputs (n = 141,009) and 50-keV, iodine, and virtual unenhanced (VUE) images as outputs, separate models were trained using Pix2PixHD and CycleGAN. Model performance on the test set (n = 17,839) was evaluated using mean squared error, structural similarity index, and peak signal-to-noise ratio. To objectively test the utility of these models, synthetic iodine material density and 50-keV images were generated from SECT images of 16 patients with gastrointestinal bleeding scanned at another institution, and the conspicuity of gastrointestinal bleeding on sDECT was compared to portal venous phase SECT. Synthetic VUE images were also generated from 37 patients who underwent a CT urogram at another institution, and model performance was compared to true unenhanced images. RESULTS: sDECT images from both Pix2PixHD and CycleGAN were qualitatively indistinguishable from true DECT by a board-certified radiologist (average accuracy 64.5%). Pix2PixHD had better quantitative performance than CycleGAN (e.g., structural similarity index for iodine: 87% vs. 46%, p-value < 0.001). sDECT using Pix2PixHD showed increased conspicuity for gastrointestinal bleeding and better removal of iodine on synthetic VUE compared to CycleGAN. CONCLUSIONS: sDECT generated from SECT using Pix2PixHD may afford some of the advantages of DECT.
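Two of the image-quality metrics used above are simple to state precisely; a minimal sketch of MSE and PSNR as they would apply to paired synthetic/true DECT slices (the `data_range` value is an assumption, since the abstract does not state the intensity span used):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images of identical shape."""
    return float(np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2))

def psnr(a, b, data_range=4096.0):
    """Peak signal-to-noise ratio in dB.

    data_range is the assumed CT intensity span; the paper does not
    report the value it used.
    """
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(data_range ** 2 / m)
```

Higher PSNR means the synthetic image deviates less from the true DECT target; identical images give infinite PSNR.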


Subject(s)
Iodine , Radiography, Dual-Energy Scanned Projection , Humans , Contrast Media , Tomography, X-Ray Computed/methods , Retrospective Studies , Artificial Intelligence , Radiography, Dual-Energy Scanned Projection/methods , Gastrointestinal Hemorrhage
6.
Nutrients ; 15(11)2023 May 26.
Article in English | MEDLINE | ID: mdl-37299451

ABSTRACT

Stress-induced depression and anxiety (DA) are closely connected to gastrointestinal inflammation and dysbiosis, which can suppress brain-derived neurotrophic factor (BDNF) in the brain. Herein, we isolated the BDNF expression-inducing probiotics Lactobacillus casei HY2782 and Bifidobacterium lactis HY8002 in lipopolysaccharide-stimulated SH-SY5Y cells. Then, we investigated the effects of HY2782, HY8002, anti-inflammatory L-theanine, and their supplement (PfS, probiotics-fermented L-theanine-containing supplement) on DA in mice exposed to restraint stress (RS) or the fecal microbiota of patients with inflammatory bowel disease and depression (FMd). Oral administration of HY2782, HY8002, or L-theanine alleviated RS-induced DA-like behaviors. They also decreased RS-induced hippocampal interleukin (IL)-1β and IL-6 levels, as well as NF-κB-positive cell numbers, blood corticosterone levels, and colonic IL-1β and IL-6 levels and NF-κB-positive cell numbers. L-theanine suppressed DA-like behaviors and inflammation-related marker levels more potently than the probiotics. However, the probiotics increased RS-suppressed hippocampal BDNF levels and BDNF+NeuN+ cell numbers more potently than L-theanine. Furthermore, HY2782 and HY8002 suppressed RS-increased Proteobacteria and Verrucomicrobia populations in the gut microbiota. In particular, they increased Lachnospiraceae and Lactobacillaceae populations, which are closely positively associated with hippocampal BDNF expression, and suppressed Sutterellaceae, Helicobacteraceae, Akkermansiaceae, and Enterobacteriaceae populations, which are closely positively associated with hippocampal IL-1β expression. HY2782 and HY8002 potently alleviated FMd-induced DA-like behaviors and increased FMd-suppressed BDNF and serotonin levels and BDNF-positive neuronal cell numbers in the brain. They also reduced blood corticosterone levels and colonic IL-1β and IL-6 levels.
However, L-theanine only weakly, and not significantly, alleviated FMd-induced DA-like behaviors and gut inflammation. PfS, the supplement containing anti-inflammatory L-theanine fermented with BDNF expression-inducing probiotics (HY2782, HY8002, Streptococcus thermophilus, and Lactobacillus acidophilus), alleviated DA-like behaviors, inflammation-related biomarker levels, and gut dysbiosis more than the probiotics or L-theanine alone. Based on these findings, a combination of BDNF expression-inducing probiotics with anti-inflammatory L-theanine may additively or synergistically alleviate DA and gut dysbiosis by regulating gut microbiota-mediated inflammation and BDNF expression, thereby being beneficial for DA.


Subject(s)
Lacticaseibacillus casei , Neuroblastoma , Probiotics , Mice , Humans , Animals , NF-kappa B/metabolism , Brain-Derived Neurotrophic Factor/metabolism , Depression/etiology , Depression/therapy , Corticosterone , Dysbiosis , Interleukin-6 , Anxiety/therapy , Anxiety/microbiology , Inflammation/therapy , Probiotics/pharmacology , Probiotics/therapeutic use , Anti-Inflammatory Agents
7.
J Imaging ; 9(2)2023 Feb 18.
Article in English | MEDLINE | ID: mdl-36826967

ABSTRACT

AIMS: Increased left ventricular (LV) wall thickness is frequently encountered in transthoracic echocardiography (TTE). While accurate and early diagnosis is clinically important, given the differences in available therapeutic options and prognosis, an extensive workup is often required to establish the diagnosis. We propose the first echo-based, automated deep learning model with a fusion architecture to facilitate the evaluation and diagnosis of increased LV wall thickness. METHODS AND RESULTS: Patients with an established diagnosis of increased LV wall thickness (hypertrophic cardiomyopathy (HCM), cardiac amyloidosis (CA), or hypertensive heart disease (HTN)/others) between 1/2015 and 11/2019 at Mayo Clinic Arizona were identified. The cohort was split 80%/10%/10% into training, validation, and testing sets, respectively. Six baseline TTE views were used to optimize a pre-trained InceptionResnetV2 model, and each view model's output was used to train a meta-learner under a fusion architecture. Model performance was assessed by multiclass area under the receiver operating characteristic curve (AUROC). A total of 586 patients were used for the final analysis (194 HCM, 201 CA, and 191 HTN/others). The mean age was 55.0 years, and 57.8% were male. Among the individual view-dependent models, the apical 4-chamber model performed best (AUROC: HCM: 0.94, CA: 0.73, and HTN/other: 0.87). The final fusion model outperformed all the view-dependent models (AUROC: HCM: 0.93, CA: 0.90, and HTN/other: 0.92). CONCLUSION: The echo-based InceptionResnetV2 fusion model can accurately classify the main etiologies of increased LV wall thickness and can facilitate the process of diagnosis and workup.
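The fusion step above stacks the per-view model outputs and feeds them to a meta-learner. As a simplified stand-in for that learned combiner (the actual meta-learner architecture is not specified in the abstract), late fusion can be sketched as a weighted average of per-view class probabilities:

```python
import numpy as np

def fuse_views(view_probs, view_weights=None):
    """Late-fusion stand-in: weighted average of per-view class probabilities.

    view_probs: (n_views, n_classes) softmax outputs from the view models.
    Assumption: the paper trains a meta-learner on these stacked outputs;
    a weighted average only approximates that idea.
    """
    p = np.asarray(view_probs, dtype=float)
    w = np.ones(len(p)) if view_weights is None else np.asarray(view_weights, float)
    fused = np.average(p, axis=0, weights=w)
    return fused / fused.sum()

fused = fuse_views([[0.8, 0.1, 0.1],   # e.g. apical 4-chamber model output
                    [0.2, 0.6, 0.2]])  # e.g. a second view's model output
```

The fused vector remains a valid probability distribution, and disagreeing views are arbitrated rather than letting any single view dominate.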

8.
J Imaging ; 9(2)2023 Feb 20.
Article in English | MEDLINE | ID: mdl-36826969

ABSTRACT

Echocardiography is an integral part of the diagnosis and management of cardiovascular disease. The use and application of artificial intelligence (AI) is a rapidly expanding field in medicine, aiming to improve consistency and reduce interobserver variability. AI can be successfully applied to echocardiography to address variance during image acquisition and interpretation, and AI and machine learning can aid in the diagnosis and management of cardiovascular disease. Accurate echocardiographic interpretation depends heavily on the operator's subjective knowledge and level of experience, to a greater extent than other imaging modalities such as computed tomography, nuclear imaging, and magnetic resonance imaging. AI technologies offer new opportunities for echocardiography to produce accurate, automated, and more consistent interpretations. This review discusses machine learning as a subfield of AI in relation to image interpretation and how it can improve the diagnostic performance of echocardiography. It also surveys the published literature on the value of AI and its potential to improve patient care.

10.
Med Phys ; 50(1): 274-283, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36203393

ABSTRACT

BACKGROUND: Multimodality positron emission tomography/computed tomography (PET/CT) imaging combines the anatomical information of CT with the functional information of PET. In the diagnosis and treatment of many cancers, such as non-small cell lung cancer (NSCLC), PET/CT imaging allows more accurate delineation of tumor or involved lymph nodes for radiation planning. PURPOSE: In this paper, we propose a hybrid regional network method for automatically segmenting lung tumors from PET/CT images. METHODS: The hybrid regional network architecture synthesizes the functional and anatomical information from the two image modalities, while the mask region-based convolutional neural network (R-CNN) and scoring components refine the regional location and the quality of the output segmentation. This model consists of five major subnetworks: a dual feature representation network (DFRN), a region proposal network (RPN), a tumor-wise R-CNN, a mask-Net, and a score head. Given a PET/CT image as input, the DFRN extracts feature maps from the PET and CT images. Then, the RPN and R-CNN work together to localize lung tumors and reduce the image size and feature map size by removing irrelevant regions. The mask-Net segments the tumor within a volume-of-interest (VOI), with the score head evaluating the segmentation produced by the mask-Net. Finally, the segmented tumor within the VOI is mapped back to the volumetric coordinate system based on the location information derived via the RPN and R-CNN. We trained, validated, and tested the proposed neural network using 100 PET/CT images of patients with NSCLC. A fivefold cross-validation study was performed. The segmentation was evaluated with two indicators: (1) multiple metrics, including the Dice similarity coefficient, Jaccard index, 95th percentile Hausdorff distance, mean surface distance (MSD), residual mean square distance, and center-of-mass distance; and (2) Bland-Altman analysis and volumetric Pearson correlation analysis.
RESULTS: In fivefold cross-validation, this method achieved Dice and MSD of 0.84 ± 0.15 and 1.38 ± 2.2 mm, respectively. A new PET/CT can be segmented in 1 s by this model. External validation on The Cancer Imaging Archive dataset (63 PET/CT images) indicates that the proposed model has superior performance compared to other methods. CONCLUSION: The proposed method shows great promise to automatically delineate NSCLC tumors on PET/CT images, thereby allowing for a more streamlined clinical workflow that is faster and reduces physician effort.


Subject(s)
Carcinoma, Non-Small-Cell Lung , Lung Neoplasms , Humans , Positron Emission Tomography Computed Tomography/methods , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/pathology , Carcinoma, Non-Small-Cell Lung/diagnostic imaging , Neural Networks, Computer , Multimodal Imaging , Image Processing, Computer-Assisted/methods
11.
J Med Imaging (Bellingham) ; 9(3): 035504, 2022 May.
Article in English | MEDLINE | ID: mdl-35769344

ABSTRACT

Purpose: In recent years, the development and exploration of deeper and more complex deep learning models has been on the rise. However, the availability of large heterogeneous datasets to support efficient training of deep learning models is lacking. While linear image transformations have traditionally been used for augmentation, the recent development of generative adversarial networks (GANs) could theoretically allow us to generate an unlimited amount of data from the real distribution to support deep learning model training. Recently, the Radiological Society of North America (RSNA) curated a multiclass hemorrhage detection challenge dataset that includes over 800,000 images, but all high-performing models were trained using traditional data augmentation techniques. Given the wide variety of options, augmentation for image classification often follows a trial-and-error policy. Approach: We designed a conditional DCGAN (cDCGAN) and, in parallel, trained multiple popular GAN models to use as online augmentations, comparing them to traditional augmentation methods for the hemorrhage case study. Results: Our experiments show that for the super-minority class, epidural hemorrhage, cDCGAN augmentation yielded at least a 2× performance improvement over the traditionally augmented model using the same classifier configuration. Conclusion: This shows that for complex and imbalanced datasets, traditional class-imbalance solutions may not be sufficient, and more complex and diverse data augmentation methods such as GANs may be required.
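Before the conditional generator can rebalance the classes described above, the pipeline must decide how many synthetic samples each class needs; a minimal sketch of that bookkeeping step (the counts and function name are illustrative, not the RSNA dataset's actual sizes):

```python
import numpy as np

def synthetic_budget(class_counts):
    """Number of GAN-generated images each class needs to match the majority.

    Stand-in for the cDCGAN online-augmentation step: the conditional
    generator would be asked for this many class-specific samples.
    """
    counts = np.asarray(class_counts)
    return (counts.max() - counts).tolist()

# Hypothetical counts with a super-minority last class (e.g. epidural).
budget = synthetic_budget([120000, 30000, 2000])
```

The majority class needs no synthetic samples, while the super-minority class receives by far the largest generation budget, which is exactly where the cDCGAN augmentation paid off in the study.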

12.
J Digit Imaging ; 35(2): 137-152, 2022 04.
Article in English | MEDLINE | ID: mdl-35022924

ABSTRACT

In recent years, generative adversarial networks (GANs) have gained tremendous popularity for various imaging-related tasks such as artificial image generation to support AI training. GANs are especially useful for medical imaging tasks, where training datasets are usually limited in size and heavily imbalanced against the diseased class. We present a systematic review, following the PRISMA guidelines, of recent GAN architectures used for medical image analysis, to help readers make an informed decision before employing GANs in developing medical image classification and segmentation models. We extracted 54 papers published between January 2015 and August 2020 that highlight the capabilities and applications of GANs in medical imaging and met our inclusion criteria for meta-analysis. Our results show four main GAN architectures used for segmentation or classification in medical imaging. We provide a comprehensive overview of recent trends in the application of GANs in clinical diagnosis through medical image segmentation and classification, and ultimately share experiences for task-based GAN implementations.


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Humans , Image Processing, Computer-Assisted/methods
13.
Ultrasound J ; 13(1): 24, 2021 Apr 20.
Article in English | MEDLINE | ID: mdl-33877462

ABSTRACT

BACKGROUND: Ultrasound was first introduced in clinical dermatology in 1979. Since that time, ultrasound technology has continued to develop along with its popularity and utility. Today, high-frequency ultrasound (HFUS), or ultrasound using a frequency of at least 10 megahertz (MHz), allows for high-resolution imaging of the skin from the stratum corneum to the deep fascia. This non-invasive and easy-to-interpret tool allows physicians to assess skin findings in real-time, enabling enhanced diagnostic, management, and surgical capabilities. In this review, we discuss how HFUS fits into the landscape of skin imaging. We provide a brief history of its introduction to dermatology, explain key principles of ultrasonography, and review its use in characterizing normal skin, common neoplasms of the skin, dermatologic diseases and cosmetic dermatology. CONCLUSION: As frequency advancements in ultrasonography continue, the broad applications of this imaging modality will continue to grow. HFUS is a fast, safe and readily available tool that can aid in diagnosing, monitoring and treating dermatologic conditions by providing more objective assessment measures.

14.
Phys Med Biol ; 65(18): 185009, 2020 09 18.
Article in English | MEDLINE | ID: mdl-32674075

ABSTRACT

The segmentation of neoplasms is an important part of radiotherapy treatment planning, monitoring disease progression, and predicting patient outcome. In the brain, functional magnetic resonance imaging (MRI) like dynamic susceptibility contrast enhanced (DSCE) or T1-weighted dynamic contrast enhanced (DCE) perfusion MRI are important tools for diagnosis. They play a crucial role in providing pre-operative assessment of tumor histology, grading, and biopsy guidance. However, the manual contouring of these neoplasms is tedious, expensive, time-consuming, and vulnerable to inter-observer variability. In this work, we propose a 3D mask region-based convolutional neural network (R-CNN) method to automatically segment brain tumors in DSCE MRI perfusion images. As our goal is to simultaneously localize and segment the tumor, our training process contained both a region-of-interest (ROI) localization and regression with voxel-wise segmentation. The combination of classification loss, ROI location and size regression loss, and segmentation loss were used to supervise the proposed network. We retrospectively investigated 21 patients' perfusion images, with between 50 and 70 perfusion time point volumes, a total of 1260 3D volumes. Tumor contours were automatically segmented by our proposed method and compared against other state-of-the-art methods and those delineated by physicians as the ground truth. The results of our method demonstrated good agreement with the ground truth contours. The average DSC, precision, recall, Hausdorff distance, mean surface distance (MSD), root MSD, and center of mass distance were 0.90 ± 0.04, 0.91 ± 0.04, 0.90 ± 0.06, 7.16 ± 5.78 mm, 0.45 ± 0.34 mm, 1.03 ± 0.72 mm, and 0.86 ± 0.91 mm, respectively. These results support the feasibility of our method in accurately localizing and segmenting brain tumors in DSCE perfusion MRI. Our 3D Mask R-CNN segmentation method in DSCE perfusion imaging has great promise for future clinical use.


Subject(s)
Brain Neoplasms/diagnostic imaging , Contrast Media , Imaging, Three-Dimensional/methods , Neural Networks, Computer , Perfusion Imaging , Humans , Observer Variation , Retrospective Studies
15.
IEEE Trans Med Imaging ; 39(7): 2302-2315, 2020 07.
Article in English | MEDLINE | ID: mdl-31985414

ABSTRACT

Accurate and automatic multi-needle detection in three-dimensional (3D) ultrasound (US) is a key step of treatment planning for US-guided brachytherapy. However, most current studies are concentrated on single-needle detection by only using a small number of images with a needle, regardless of the massive database of US images without needles. In this paper, we propose a workflow for multi-needle detection by considering the images without needles as auxiliary. Concretely, we train position-specific dictionaries on 3D overlapping patches of auxiliary images, where we develop an enhanced sparse dictionary learning method by integrating spatial continuity of 3D US, dubbed order-graph regularized dictionary learning. Using the learned dictionaries, target images are reconstructed to obtain residual pixels which are then clustered in every slice to yield centers. With the obtained centers, regions of interest (ROIs) are constructed via seeking cylinders. Finally, we detect needles by using the random sample consensus algorithm per ROI and then locate the tips by finding the sharp intensity drops along the detected axis for every needle. Extensive experiments were conducted on a phantom dataset and a prostate dataset of 70/21 patients without/with needles. Visualization and quantitative results show the effectiveness of our proposed workflow. Specifically, our method can correctly detect 95% of needles with a tip location error of 1.01 mm on the prostate dataset. This technique provides accurate multi-needle detection for US-guided HDR prostate brachytherapy, facilitating the clinical workflow.
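The final needle-detection step above runs the random sample consensus algorithm within each ROI; a minimal sketch of RANSAC fitting a 3D line (the needle axis) to candidate voxels, with hypothetical tolerance and iteration defaults not taken from the paper:

```python
import numpy as np

def ransac_line(points, n_iter=200, tol=1.0, rng=None):
    """Fit a 3D line (needle axis) to candidate voxels with RANSAC.

    points: (N, 3) candidate coordinates inside one ROI cylinder.
    Returns (point_on_line, unit_direction, inlier_mask).
    """
    rng = np.random.default_rng(rng)
    pts = np.asarray(points, dtype=float)
    best_inliers = np.zeros(len(pts), dtype=bool)
    best = None
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        d = pts[j] - pts[i]
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d = d / norm
        # Perpendicular distance of every point to the candidate line.
        diff = pts - pts[i]
        dist = np.linalg.norm(diff - np.outer(diff @ d, d), axis=1)
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best = inliers, (pts[i], d)
    return best[0], best[1], best_inliers
```

Once the axis is found, the tip can be located by scanning intensities along the recovered direction for a sharp drop, as the workflow describes.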


Subject(s)
Brachytherapy , Prostatic Neoplasms , Humans , Imaging, Three-Dimensional , Male , Needles , Ultrasonography
16.
Quant Imaging Med Surg ; 9(7): 1201-1213, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31448207

ABSTRACT

BACKGROUND: Glioblastoma is the most aggressive brain tumor, with poor prognosis. The purpose of this study is to improve the tissue characterization of these highly heterogeneous tumors using delta-radiomic features of images from dynamic susceptibility contrast enhanced (DSC) magnetic resonance imaging (MRI). METHODS: Twenty-five patients with histopathologically confirmed gliomas (13 high-grade (HG) and 12 low-grade (LG)) who underwent the standard brain tumor MRI protocol, including DSC MRI, were included. Tumor regions on all DSC MRI images were registered to and contoured in T2-weighted fluid-attenuated inversion recovery (FLAIR) images. These contours and their contralateral regions of normal tissue were used to extract delta-radiomic features before applying feature selection. The most informative and non-redundant features were selected to train a random forest to differentiate HG and LG gliomas. A leave-one-out cross-validation random forest was then applied to classify these tumors for grading. Finally, a majority-voting method was applied to reduce binarization bias and to combine the results of the various feature lists. RESULTS: Analysis of the predictions showed that the reported method consistently predicted the tumor grade correctly for 24 of 25 patients (accuracy 0.96). The mean prediction accuracy was 0.950 ± 0.091 for HG and 0.850 ± 0.255 for LG, and the area under the receiver operating characteristic curve (AUC) was 0.94. CONCLUSIONS: This study shows that delta-radiomic features derived from DSC MRI data can be used to characterize and determine tumor grade. Radiomic features from DSC MRI may also help elucidate the underlying tumor biology and response to therapy.
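The leave-one-out cross-validation loop described above is small enough to sketch directly. The scaffold below is generic; the paper uses a random forest, so the nearest-centroid classifier plugged in here is only a hypothetical stand-in to keep the example self-contained:

```python
import numpy as np

def loocv_predict(X, y, fit_predict):
    """Leave-one-out cross-validation scaffold.

    fit_predict(X_train, y_train, x_test) -> predicted label.
    The paper uses a random forest; any classifier can be plugged in.
    """
    X, y = np.asarray(X, float), np.asarray(y)
    preds = []
    for i in range(len(X)):
        mask = np.arange(len(X)) != i  # hold out one sample per fold
        preds.append(fit_predict(X[mask], y[mask], X[i]))
    return np.array(preds)

def nearest_centroid(X_train, y_train, x):
    """Simple stand-in classifier: predict the class of the closest centroid."""
    classes = np.unique(y_train)
    cents = np.array([X_train[y_train == c].mean(axis=0) for c in classes])
    return classes[np.argmin(np.linalg.norm(cents - x, axis=1))]
```

With only 25 patients, leave-one-out makes full use of the cohort: each patient is graded by a model that never saw that patient during training.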

17.
Med Phys ; 46(2): 601-618, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30471129

ABSTRACT

PURPOSE: Quantitative cone beam CT (CBCT) imaging is increasing in demand for precise image-guided radiotherapy because it provides a foundation for advanced image-guided techniques, including accurate treatment setup, online tumor delineation, and patient dose calculation. However, CBCT is currently limited to patient setup in the clinic because of severe issues in its image quality. In this study, we develop a learning-based approach to improve CBCT image quality for extended clinical applications. MATERIALS AND METHODS: An auto-context model is integrated into a machine learning framework to iteratively generate corrected CBCT (CCBCT) with high image quality. The first step is data preprocessing to build the training dataset, in which uninformative image regions are removed, noise is reduced, and CT and CBCT images are aligned. After a CBCT image is divided into a set of patches, the most informative and salient anatomical features are extracted to train random forests. Within each patch, an alternating random forest is applied to create a CCBCT patch as the output. Moreover, an iterative refinement strategy is exercised to enhance the image quality of CCBCT. Finally, all the CCBCT patches are integrated to reconstruct the final CCBCT image. RESULTS: The learning-based CBCT correction algorithm was evaluated using the leave-one-out cross-validation method applied to a cohort of 12 patients' brain data and 14 patients' pelvis data. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC) indexes, and spatial nonuniformity (SNU) in the selected regions of interest (ROIs) were used to quantify the proposed algorithm's correction accuracy and generate the following results: mean MAE = 12.81 ± 2.04 and 19.94 ± 5.44 HU, mean PSNR = 40.22 ± 3.70 and 31.31 ± 2.85 dB, mean NCC = 0.98 ± 0.02 and 0.95 ± 0.01, and SNU = 2.07 ± 3.36% and 2.07 ± 3.36% for the brain and pelvis data, respectively.
CONCLUSION: Preliminary results demonstrated that the novel learning-based correction method can significantly improve CBCT image quality. Hence, the proposed algorithm is of great potential in improving CBCT's image quality to support its clinical utility in CBCT-guided adaptive radiotherapy.


Subject(s)
Cone-Beam Computed Tomography , Image Processing, Computer-Assisted/methods , Machine Learning , Artifacts , Brain/diagnostic imaging , Humans , Pelvis/diagnostic imaging , Radiation Dosage
19.
J Med Imaging (Bellingham) ; 5(3): 034001, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30155512

ABSTRACT

Magnetic resonance imaging (MRI) provides a number of advantages over computed tomography (CT) for radiation therapy treatment planning; however, MRI lacks the key electron density information necessary for accurate dose calculation. We propose a dictionary-learning-based method to derive electron density information from MRIs. Specifically, we first partition a given MR image into a set of patches, for which we use a joint dictionary learning method to directly predict a CT patch as a structured output. A feature selection method is then used to ensure prediction robustness. Finally, we combine all the predicted CT patches to obtain the final prediction for the given MR image. This prediction technique was validated for a clinical application using 14 patients with brain MR and CT images. The peak signal-to-noise ratio (PSNR), mean absolute error (MAE), normalized cross-correlation (NCC) indices, and similarity index (SI) for the air, soft-tissue, and bone regions were used to quantify the prediction accuracy. The mean ± std of PSNR, MAE, and NCC were 22.4 ± 1.9 dB, 82.6 ± 26.1 HU, and 0.91 ± 0.03 for the 14 patients. The SIs for the air, soft-tissue, and bone regions were 0.98 ± 0.01, 0.88 ± 0.03, and 0.69 ± 0.08. These indices demonstrate the CT prediction accuracy of the proposed learning-based method. This CT image prediction technique could be used as a tool for MRI-based radiation treatment planning, or for PET attenuation correction in a PET/MRI scanner.
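The patch-partition and recombination steps that bracket the dictionary prediction above can be sketched concretely. This is a 2D simplification with hypothetical patch size and stride (the paper's actual patch parameters are not given in the abstract), and the learned per-patch prediction is elided:

```python
import numpy as np

def extract_patches(img, size, stride):
    """Slide a square window over a 2D image; returns patches and top-left coords."""
    patches, coords = [], []
    for r in range(0, img.shape[0] - size + 1, stride):
        for c in range(0, img.shape[1] - size + 1, stride):
            patches.append(img[r:r + size, c:c + size])
            coords.append((r, c))
    return np.array(patches), coords

def recombine(patches, coords, shape):
    """Average overlapping predicted patches back into a full image."""
    out = np.zeros(shape, float)
    weight = np.zeros(shape, float)
    size = patches.shape[1]
    for p, (r, c) in zip(patches, coords):
        out[r:r + size, c:c + size] += p
        weight[r:r + size, c:c + size] += 1.0
    return out / np.maximum(weight, 1.0)
```

In the real pipeline each MR patch would be replaced by its dictionary-predicted CT patch before recombination; averaging the overlaps smooths seams between adjacent predictions.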

20.
J Med Imaging (Bellingham) ; 5(4): 043504, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30840748

ABSTRACT

We develop a learning-based method to generate patient-specific pseudo computed tomography (CT) from routinely acquired magnetic resonance imaging (MRI) for potential MRI-based radiotherapy treatment planning. The proposed pseudo CT (PCT) synthesis method consists of a training stage and a synthesizing stage. During the training stage, patch-based features are extracted from MRIs. Using feature selection, the most informative features are identified as an anatomical signature to train a sequence of alternating random forests based on an iterative refinement model. During the synthesizing stage, we feed the anatomical signatures extracted from an MRI into the sequence of well-trained forests for PCT synthesis. Our PCT was compared with the original CT (ground truth) to quantitatively assess the synthesis accuracy. The mean absolute error, peak signal-to-noise ratio, and normalized cross-correlation indices were 60.87 ± 15.10 HU, 24.63 ± 1.73 dB, and 0.954 ± 0.013 for 14 patients' brain data and 29.86 ± 10.4 HU, 34.18 ± 3.31 dB, and 0.980 ± 0.025 for 12 patients' pelvic data, respectively. We have investigated a learning-based approach to synthesize CTs from routine MRIs and demonstrated its feasibility and reliability. The proposed PCT synthesis technique can be a useful tool for MRI-based radiation treatment planning.
