Results 1 - 20 of 27
1.
Scand J Urol ; 59: 90-97, 2024 May 02.
Article in English | MEDLINE | ID: mdl-38698545

ABSTRACT

OBJECTIVE: To evaluate whether artificial intelligence (AI)-based automatic image analysis utilising convolutional neural networks (CNNs) can be used to evaluate computed tomography urography (CTU) for the presence of urinary bladder cancer (UBC) in patients with macroscopic hematuria. METHODS: Our study included patients who had undergone evaluation for macroscopic hematuria. A CNN-based AI model was trained and validated on the CTUs included in the study on a dedicated research platform (Recomia.org). Sensitivity and specificity were calculated to assess the performance of the AI model. Cystoscopy findings were used as the reference method. RESULTS: The training cohort comprised a total of 530 patients. Following the optimisation process, we developed the final version of our AI model. Subsequently, we utilised the model in the validation cohort, which included an additional 400 patients (including 239 patients with UBC). The AI model had a sensitivity of 0.83 (95% confidence interval [CI] 0.76-0.89), specificity of 0.76 (95% CI 0.67-0.84), and a negative predictive value (NPV) of 0.97 (95% CI 0.95-0.98). The majority of tumours in the false negative group (n = 24) were solitary (67%) and smaller than 1 cm (50%), with the majority of patients having cTaG1-2 (71%). CONCLUSIONS: We developed and tested an AI model for automatic image analysis of CTUs to detect UBC in patients with macroscopic hematuria. The model showed promising results, with a high detection rate and a high NPV. Further development could reduce the need for invasive investigations and help prioritise patients with serious tumours.
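As an illustration of the performance measures reported above, the following minimal sketch computes sensitivity, specificity, and NPV with Wilson 95% confidence intervals from binary AI predictions against a cystoscopy reference. The data and all names are synthetic assumptions for illustration, not material from the study.

```python
import numpy as np
from sklearn.metrics import confusion_matrix


def wilson_ci(k, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion k/n."""
    p = k / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half


def diagnostic_metrics(y_true, y_pred):
    """Sensitivity, specificity and NPV (with CIs) for binary labels (1 = UBC on cystoscopy)."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "sensitivity": (tp / (tp + fn), wilson_ci(tp, tp + fn)),
        "specificity": (tn / (tn + fp), wilson_ci(tn, tn + fp)),
        "NPV": (tn / (tn + fn), wilson_ci(tn, tn + fn)),
    }


# Synthetic example: 400 "patients", a noisy binary AI read against the reference
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 400)
y_pred = np.where(rng.random(400) < 0.8, y_true, 1 - y_true)
for name, (value, ci) in diagnostic_metrics(y_true, y_pred).items():
    print(f"{name}: {value:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```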


Subject(s)
Artificial Intelligence , Hematuria , Tomography, X-Ray Computed , Urinary Bladder Neoplasms , Urography , Humans , Hematuria/etiology , Hematuria/diagnostic imaging , Urinary Bladder Neoplasms/diagnostic imaging , Urinary Bladder Neoplasms/complications , Male , Aged , Female , Tomography, X-Ray Computed/methods , Urography/methods , Middle Aged , Neural Networks, Computer , Sensitivity and Specificity , Aged, 80 and over , Retrospective Studies , Adult
2.
Clin Physiol Funct Imaging ; 44(4): 332-339, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38563413

ABSTRACT

BACKGROUND: We developed a fully automated artificial intelligence (AI)-based method for detecting suspected lymph node metastases in prostate-specific membrane antigen (PSMA) positron emission tomography-computed tomography (PET-CT) images of prostate cancer patients by using data augmentation that adds synthetic lymph node metastases to the images to expand the training set. METHODS: Synthetic data were derived from original training images to which new synthetic lymph node metastases were added. Thus, the original training set from a previous study (n = 420) was expanded by one synthetic image for every original image (n = 840), which was used to train an AI model. The performance of the AI model was compared to that of nuclear medicine physicians and a previously developed AI model. The human readers were alternately used as a reference and compared to either another reading or an AI model. RESULTS: The new AI model had an average sensitivity of 84% for detecting lymph node metastases compared with 78% for human readings. Our previously developed AI method without synthetic data had an average sensitivity of 79%. The number of false positive lesions was slightly higher for the new AI model (average 3.3 instances per patient) compared to human readings and the previous AI model (average 2.8 instances per patient), while the number of false negative lesions was lower. CONCLUSIONS: Creating synthetic lymph node metastases, as a form of data augmentation, on [18F]PSMA-1007 PET-CT images improved the sensitivity of an AI model for detecting suspected lymph node metastases. However, the number of false positive lesions increased somewhat.
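The abstract does not describe how the synthetic metastases were generated, so the sketch below only illustrates the general augmentation idea: pasting a synthetic focal hotspot into a PET SUV volume to create an additional training image. The lesion shape, intensity, and all names are assumptions for illustration.

```python
import numpy as np


def add_synthetic_hotspot(pet_suv, centre, radius_vox=3.0, peak_suv=6.0):
    """Return a copy of a 3D PET SUV volume with a Gaussian-shaped focal uptake added.

    pet_suv: 3D numpy array of SUV values; centre: (z, y, x) voxel index of the lesion.
    The synthetic lesion is combined with the background by a voxel-wise maximum.
    """
    zz, yy, xx = np.indices(pet_suv.shape)
    d2 = (zz - centre[0]) ** 2 + (yy - centre[1]) ** 2 + (xx - centre[2]) ** 2
    lesion = peak_suv * np.exp(-d2 / (2.0 * radius_vox**2))
    return np.maximum(pet_suv, lesion)


# Toy volume with background-like SUVs and one synthetic "lymph node"
pet = np.random.default_rng(0).gamma(shape=2.0, scale=0.5, size=(64, 64, 64))
augmented = add_synthetic_hotspot(pet, centre=(32, 40, 28))
```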


Subject(s)
Glutamate Carboxypeptidase II , Lymph Nodes , Lymphatic Metastasis , Positron Emission Tomography Computed Tomography , Predictive Value of Tests , Prostatic Neoplasms , Humans , Male , Positron Emission Tomography Computed Tomography/methods , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Glutamate Carboxypeptidase II/metabolism , Lymph Nodes/diagnostic imaging , Lymph Nodes/pathology , Reproducibility of Results , Antigens, Surface/metabolism , Artificial Intelligence , Automation , Aged , Image Interpretation, Computer-Assisted/methods , Middle Aged , Radiopharmaceuticals
3.
Adv Radiat Oncol ; 9(3): 101383, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38495038

ABSTRACT

Purpose: Meticulous manual delineations of the prostate and the surrounding organs at risk are necessary for prostate cancer radiation therapy to avoid side effects to the latter. This process is time-consuming and hampered by inter- and intraobserver variability, both of which could be alleviated by artificial intelligence (AI). This study aimed to evaluate the performance of AI compared with manual organ delineations on computed tomography (CT) scans for radiation treatment planning. Methods and Materials: Manual delineations of the prostate, urinary bladder, and rectum of 1530 patients with prostate cancer who received curative radiation therapy from 2006 to 2018 were included. Approximately 50% of those CT scans were used as a training set, 25% as a validation set, and 25% as a test set. Patients with hip prostheses were excluded because of metal artifacts. After training and fine-tuning with the validation set, automated delineations of the prostate and organs at risk were obtained for the test set. Sørensen-Dice similarity coefficient, mean surface distance, and Hausdorff distance were used to evaluate the agreement between the manual and automated delineations. Results: The median Sørensen-Dice similarity coefficient between the manual and AI delineations was 0.82, 0.95, and 0.88 for the prostate, urinary bladder, and rectum, respectively. The median mean surface distance and Hausdorff distance were 1.7 and 9.2 mm for the prostate, 0.7 and 6.7 mm for the urinary bladder, and 1.1 and 13.5 mm for the rectum, respectively. Conclusions: Automated CT-based organ delineation for prostate cancer radiation treatment planning is feasible and shows good agreement with manually performed contouring.
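For reference, a minimal sketch of the three agreement metrics used above (Sørensen-Dice similarity coefficient, mean surface distance, and Hausdorff distance) for a pair of binary masks; it assumes numpy/scipy and simple boolean volumes, and is not the evaluation code used in the study.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt


def dice(a, b):
    """Sørensen-Dice similarity coefficient between two boolean masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())


def surface_distances(a, b, spacing):
    """Distances (mm) from the surface voxels of mask a to the surface of mask b."""
    surf_a = a & ~binary_erosion(a)
    surf_b = b & ~binary_erosion(b)
    dist_to_b = distance_transform_edt(~surf_b, sampling=spacing)
    return dist_to_b[surf_a]


def msd_and_hausdorff(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric mean surface distance and Hausdorff distance between two masks."""
    d_ab = surface_distances(a, b, spacing)
    d_ba = surface_distances(b, a, spacing)
    return (d_ab.mean() + d_ba.mean()) / 2.0, max(d_ab.max(), d_ba.max())


# Toy example: two slightly offset spheres standing in for manual vs. AI contours
zz, yy, xx = np.indices((40, 40, 40))
manual = (zz - 20) ** 2 + (yy - 20) ** 2 + (xx - 20) ** 2 < 10**2
auto = (zz - 21) ** 2 + (yy - 20) ** 2 + (xx - 19) ** 2 < 10**2
print(dice(manual, auto), msd_and_hausdorff(manual, auto))
```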

4.
Article in English | MEDLINE | ID: mdl-38456971

ABSTRACT

PURPOSE: Multiple myeloma (MM) is a highly heterogeneous disease with wide variations in patient outcome. [18F]FDG PET/CT can provide prognostic information in MM, but it is hampered by issues regarding standardization of scan interpretation. Our group has recently demonstrated the feasibility of automated, volumetric assessment of bone marrow (BM) metabolic activity on PET/CT using a novel artificial intelligence (AI)-based tool. Accordingly, the aim of the current study was to investigate the prognostic role of whole-body calculations of BM metabolism in patients with newly diagnosed MM using this AI tool. MATERIALS AND METHODS: Forty-four previously untreated MM patients underwent whole-body [18F]FDG PET/CT. Automated PET/CT image segmentation and volumetric quantification of BM metabolism were based on an initial CT-based segmentation of the skeleton, its transfer to the standardized uptake value (SUV) PET images, subsequent application of different SUV thresholds, and refinement of the resulting regions using postprocessing. In the present analysis, ten different uptake thresholds (AI approaches), based on reference organs or absolute SUV values, were applied for definition of pathological tracer uptake and subsequent calculation of the whole-body metabolic tumor volume (MTV) and total lesion glycolysis (TLG). Correlation analysis was performed between the automated PET values and histopathological results of the BM as well as patients' progression-free survival (PFS) and overall survival (OS). Receiver operating characteristic (ROC) curve analysis was used to investigate the discrimination performance of MTV and TLG for prediction of 2-year PFS. The prognostic performance of the new Italian Myeloma criteria for PET Use (IMPeTUs) was also investigated. RESULTS: Median follow-up [95% CI] of the patient cohort was 110 months [105-123 months]. AI-based BM segmentation and calculation of MTV and TLG were feasible in all patients. A significant, positive, moderate correlation was observed between the automated quantitative whole-body PET/CT parameters, MTV and TLG, and BM plasma cell infiltration for all ten [18F]FDG uptake thresholds. With regard to PFS, univariable analysis for both MTV and TLG predicted patient outcome reasonably well for all AI approaches. Adjusting for cytogenetic abnormalities and BM plasma cell infiltration rate, multivariable analysis also showed prognostic significance for high MTV, which defined pathological [18F]FDG uptake in the BM relative to the liver. In terms of OS, univariable and multivariable analysis showed that whole-body MTV, again mainly using liver uptake as reference, was significantly associated with shorter survival. In line with these findings, ROC curve analysis showed that MTV and TLG, assessed using liver-based cut-offs, could predict 2-year PFS rates. The application of IMPeTUs showed that the number of focal hypermetabolic BM lesions and extramedullary disease had an adverse effect on PFS. CONCLUSIONS: The AI-based, whole-body calculations of BM metabolism via the parameters MTV and TLG not only correlate with the degree of BM plasma cell infiltration, but also predict patient survival in MM. In particular, the parameter MTV, using the liver uptake as reference for BM segmentation, provides solid prognostic information for disease progression.
In addition to highlighting the prognostic significance of automated, global volumetric estimation of metabolic tumor burden, these data open up new perspectives towards solving the complex problem of interpreting PET scans in MM with a simple, fast, and robust method that is not affected by operator-dependent interventions.
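To make the whole-body MTV and TLG calculations concrete, here is a minimal sketch of how they could be derived from an SUV volume, a CT-based skeletal mask, and a reference-based uptake threshold. The liver-based threshold shown in the comment mirrors one of the approaches described above, but the function and all variable names are illustrative assumptions, not the study's implementation.

```python
import numpy as np


def mtv_tlg(suv, skeleton_mask, threshold, voxel_volume_ml):
    """Whole-body metabolic tumour volume (ml) and total lesion glycolysis above an SUV
    threshold, restricted to a CT-derived skeletal mask."""
    lesion = skeleton_mask & (suv > threshold)
    mtv = lesion.sum() * voxel_volume_ml
    tlg = float(suv[lesion].mean()) * mtv if lesion.any() else 0.0  # TLG = SUVmean x MTV
    return mtv, tlg


# One liver-based threshold, analogous to the reference-organ approaches described above:
# threshold = 1.1 * np.median(suv[liver_mask])
```

In the study, ten such thresholds (reference-organ based and absolute SUV cut-offs) were applied to the same skeletal masks.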

5.
Eur J Nucl Med Mol Imaging ; 50(12): 3697-3708, 2023 10.
Article in English | MEDLINE | ID: mdl-37493665

ABSTRACT

PURPOSE: [18F]FDG PET/CT is an imaging modality of high performance in multiple myeloma (MM). Nevertheless, the inter-observer reproducibility in PET/CT scan interpretation may be hampered by the different patterns of bone marrow (BM) infiltration in the disease. Although many approaches have been recently developed to address the issue of standardization, none can yet be considered a standard method in the interpretation of PET/CT. We herein aim to validate a novel three-dimensional deep learning-based tool on PET/CT images for automated assessment of the intensity of BM metabolism in MM patients. MATERIALS AND METHODS: Whole-body [18F]FDG PET/CT scans of 35 consecutive, previously untreated MM patients were studied. All patients were investigated in the context of an open-label, multicenter, randomized, active-controlled, phase 3 trial (GMMG-HD7). Qualitative (visual) analysis classified the PET/CT scans into three groups based on the presence and number of focal [18F]FDG-avid lesions as well as the degree of diffuse [18F]FDG uptake in the BM. The proposed automated method for BM metabolism assessment is based on an initial CT-based segmentation of the skeleton, its transfer to the SUV PET images, the subsequent application of different SUV thresholds, and refinement of the resulting regions using postprocessing. In the present analysis, six different SUV thresholds (Approaches 1-6) were applied for the definition of pathological tracer uptake in the skeleton [Approach 1: liver SUVmedian × 1.1 (axial skeleton), gluteal muscles SUVmedian × 4 (extremities). Approach 2: liver SUVmedian × 1.5 (axial skeleton), gluteal muscles SUVmedian × 4 (extremities). Approach 3: liver SUVmedian × 2 (axial skeleton), gluteal muscles SUVmedian × 4 (extremities). Approach 4: ≥ 2.5. Approach 5: ≥ 2.5 (axial skeleton), ≥ 2.0 (extremities). Approach 6: SUVmax liver]. Using the resulting masks, subsequent calculations of the whole-body metabolic tumor volume (MTV) and total lesion glycolysis (TLG) in each patient were performed. A correlation analysis was performed between the automated PET values and the results of the visual PET/CT analysis as well as the histopathological, cytogenetic, and clinical data of the patients. RESULTS: BM segmentation and calculation of MTV and TLG after the application of the deep learning tool were feasible in all patients. A significant positive correlation (p < 0.05) was observed between the results of the visual analysis of the PET/CT scans for the three patient groups and the MTV and TLG values after the employment of all six [18F]FDG uptake thresholds. In addition, there were significant differences between the three patient groups with regard to their MTV and TLG values for all applied thresholds of pathological tracer uptake. Furthermore, we could demonstrate a significant, moderate, positive correlation of BM plasma cell infiltration and plasma levels of β2-microglobulin with the automated quantitative PET/CT parameters MTV and TLG after utilization of Approaches 1, 2, 4, and 5. CONCLUSIONS: The automated, volumetric, whole-body PET/CT assessment of the BM metabolic activity in MM is feasible with the herein applied method and correlates with clinically relevant parameters in the disease. This methodology offers a potentially reliable tool in the direction of optimization and standardization of PET/CT interpretation in MM.
Based on the present promising findings, the deep learning-based approach will be further evaluated in future prospective studies with larger patient cohorts.


Subject(s)
Multiple Myeloma , Positron Emission Tomography Computed Tomography , Humans , Artificial Intelligence , Bone Marrow/metabolism , Fluorodeoxyglucose F18/metabolism , Glycolysis , Multiple Myeloma/diagnostic imaging , Multiple Myeloma/pathology , Prognosis , Radiopharmaceuticals , Reproducibility of Results , Retrospective Studies , Tumor Burden
6.
Nucl Med Mol Imaging ; 57(2): 110-116, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36998589

ABSTRACT

Purpose: Classification of focal skeleton/bone marrow uptake (BMU) can be challenging. The aim is to investigate whether an artificial intelligence-based method (AI), which highlights suspicious focal BMU, increases interobserver agreement among a group of physicians from different hospitals classifying Hodgkin's lymphoma (HL) patients staged with [18F]FDG PET/CT. Methods: Forty-eight patients staged with [18F]FDG PET/CT at Sahlgrenska University Hospital between 2017 and 2018 were reviewed twice, 6 months apart, regarding focal BMU. During the second review, the 10 physicians also had access to AI-based advice regarding focal BMU. Results: Each physician's classifications were pairwise compared with the classifications made by all the other physicians, resulting in 45 unique pairs of comparisons both without and with AI advice. The agreement between the physicians increased significantly when AI advice was available, measured as an increase in mean kappa values from 0.51 (range 0.25-0.80) without AI advice to 0.61 (range 0.19-0.94) with AI advice (p = 0.005). The majority of the physicians agreed with the AI-based method in 40 (83%) of the 48 cases. Conclusion: An AI-based method significantly increases interobserver agreement among physicians working at different hospitals by highlighting suspicious focal BMU in HL patients staged with [18F]FDG PET/CT.
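The agreement figures above are means of pairwise Cohen's kappa values over all unique reader pairs; a small sketch of that computation with scikit-learn is shown below, using an invented three-reader toy example rather than the study's 10 readers and 48 cases.

```python
from itertools import combinations

import numpy as np
from sklearn.metrics import cohen_kappa_score


def mean_pairwise_kappa(readings):
    """Mean Cohen's kappa over all unique reader pairs.

    readings: dict mapping reader id -> per-case classifications (0/1)."""
    kappas = [cohen_kappa_score(readings[a], readings[b])
              for a, b in combinations(readings, 2)]
    return float(np.mean(kappas)), kappas


# Toy example (0 = no focal BMU, 1 = focal BMU); 10 readers would give 45 pairs
readings = {
    "reader1": [0, 1, 1, 0, 0, 1, 0, 1, 0, 1],
    "reader2": [0, 1, 0, 0, 0, 1, 0, 1, 1, 1],
    "reader3": [0, 1, 1, 0, 1, 1, 0, 1, 0, 0],
}
mean_kappa, _ = mean_pairwise_kappa(readings)
print(f"mean pairwise kappa = {mean_kappa:.2f}")
```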

7.
Diagnostics (Basel) ; 12(9)2022 Aug 30.
Article in English | MEDLINE | ID: mdl-36140502

ABSTRACT

Here, we aimed to develop and validate a fully automated artificial intelligence (AI)-based method for the detection and quantification of suspected prostate tumour/local recurrence, lymph node metastases, and bone metastases from [18F]PSMA-1007 positron emission tomography-computed tomography (PET-CT) images. Images from 660 patients were included. Segmentations by one expert reader served as the ground truth. A convolutional neural network (CNN) was developed and trained on a training set, and the performance was tested on a separate test set of 120 patients. The AI method was compared with manual segmentations performed by several nuclear medicine physicians. Assessment of tumour burden (total lesion volume (TLV) and total lesion uptake (TLU)) was performed. The sensitivity of the AI method was, on average, 79% for detecting prostate tumour/recurrence, 79% for lymph node metastases, and 62% for bone metastases. The corresponding average sensitivities of the nuclear medicine physicians were 78%, 78%, and 59%, respectively. The correlations of TLV and TLU between the AI method and the nuclear medicine physicians were all statistically significant and ranged from R = 0.53 to R = 0.83. In conclusion, the development of an AI-based method for prostate cancer detection with sensitivity on par with nuclear medicine physicians was possible. The developed AI tool is freely available for researchers.

8.
Eur J Nucl Med Mol Imaging ; 49(10): 3412-3418, 2022 08.
Article in English | MEDLINE | ID: mdl-35475912

ABSTRACT

PURPOSE: The aim of this study was to develop and validate an artificial intelligence (AI)-based method using convolutional neural networks (CNNs) for the detection of pelvic lymph node metastases in scans obtained using [18F]PSMA-1007 positron emission tomography-computed tomography (PET-CT) from patients with high-risk prostate cancer. The second goal was to make the AI-based method available to other researchers. METHODS: [18F]PSMA PET-CT scans were collected from 211 patients. Suspected pelvic lymph node metastases were marked by three independent readers. A CNN was developed and trained on a training and validation group of 161 of the patients. The performance of the AI method and the inter-observer agreement between the three readers were assessed in a separate test group of 50 patients. RESULTS: The sensitivity of the AI method for detecting pelvic lymph node metastases was 82%, and the corresponding sensitivity for the human readers was 77% on average. The average number of false positives was 1.8 per patient. A total of 5-17 false negative lesions were found in the whole cohort, depending on which reader was used as the reference. The method is available for researchers at www.recomia.org. CONCLUSION: This study shows that AI can achieve a sensitivity on par with that of physicians with a reasonable number of false positives. The difficulty in achieving high inter-observer sensitivity emphasizes the need for automated methods. On the road to qualifying AI tools for clinical use, independent validation is critical and allows performance to be assessed in studies from different hospitals. Therefore, we have made our AI tool freely available to other researchers.


Subject(s)
Nuclear Medicine , Physicians , Prostatic Neoplasms , Artificial Intelligence , Gallium Radioisotopes , Humans , Lymphatic Metastasis/diagnostic imaging , Male , Positron Emission Tomography Computed Tomography/methods , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Radiopharmaceuticals
9.
EJNMMI Phys ; 9(1): 6, 2022 Feb 03.
Article in English | MEDLINE | ID: mdl-35113252

ABSTRACT

BACKGROUND: Metabolic positron emission tomography/computed tomography (PET/CT) parameters describing tumour activity contain valuable prognostic information, but performing the measurements manually leads to both intra- and inter-reader variability and is too time-consuming in clinical practice. The use of modern artificial intelligence-based methods offers new possibilities for automated and objective image analysis of PET/CT data. PURPOSE: We aimed to train a convolutional neural network (CNN) to segment and quantify tumour burden in [18F]-fluorodeoxyglucose (FDG) PET/CT images and to evaluate the association between CNN-based measurements and overall survival (OS) in patients with lung cancer. A secondary aim was to make the method available to other researchers. METHODS: A total of 320 consecutive patients referred for FDG PET/CT due to suspected lung cancer were retrospectively selected for this study. Two nuclear medicine specialists manually segmented abnormal FDG uptake in all of the PET/CT studies. One-third of the patients were assigned to a test group. Survival data were collected for this group. The CNN was trained to segment lung tumours and thoracic lymph nodes. Total lesion glycolysis (TLG) was calculated from the CNN-based and manual segmentations. Associations between TLG and OS were investigated using a univariate Cox proportional hazards regression model. RESULTS: The test group comprised 106 patients (median age, 76 years (IQR 61-79); n = 59 female). Both CNN-based TLG (hazard ratio 1.64, 95% confidence interval 1.21-2.21; p = 0.001) and manual TLG (hazard ratio 1.54, 95% confidence interval 1.14-2.07; p = 0.004) estimations were significantly associated with OS. CONCLUSION: Fully automated CNN-based TLG measurements of PET/CT data were significantly associated with OS in patients with lung cancer. This type of measurement may be of value for the management of future patients with lung cancer. The CNN is publicly available for research purposes.
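A minimal sketch of the survival analysis described above: a univariate Cox proportional hazards model relating a CNN-based TLG measurement to overall survival, here using the lifelines package on a synthetic dataframe. The column names and data are invented; the study's actual analysis is only summarised in the abstract.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic test-group dataframe: one row per patient
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "tlg": rng.lognormal(mean=4.0, sigma=1.0, size=106),   # stand-in for CNN-based TLG
    "os_months": rng.exponential(scale=24.0, size=106),    # follow-up time
    "event": rng.integers(0, 2, size=106),                 # 1 = death observed, 0 = censored
})

cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="event")   # univariate model: TLG only
cph.print_summary()                                        # hazard ratio = exp(coef)
```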

10.
Clin Imaging ; 81: 54-59, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34598006

ABSTRACT

BACKGROUND: Osteoporosis is an underdiagnosed and undertreated disease worldwide. Recent studies have highlighted the use of simple vertebral trabecular attenuation values for opportunistic osteoporosis screening. Meanwhile, machine learning has been used to accurately segment large parts of the human skeleton. PURPOSE: To evaluate a fully automated deep learning-based method for lumbar vertebral segmentation and measurement of vertebral volumetric trabecular attenuation values. MATERIAL AND METHODS: A deep learning-based method for automated segmentation of bones was retrospectively applied to non-contrast CT scans of 1008 patients (mean age 57 years; 472 female, 536 male). Each vertebral segmentation was automatically reduced by 7 mm in all directions in order to avoid cortical bone. The mean and median volumetric attenuation values from Th12 to L4 were obtained and plotted against patient age and sex. L1 values were further analyzed to facilitate comparison with previous studies. RESULTS: The mean L1 attenuation values decreased linearly with age by 2.2 HU per year (age > 30; 95% CI: -2.4 to -2.0; R2 = 0.3544). The mean L1 attenuation value of the entire cohort was 140 ± 54 HU. CONCLUSIONS: With results closely matching those of previous studies, we believe that our fully automated deep learning-based method can be used to obtain lumbar volumetric trabecular attenuation values, which can be used for opportunistic screening of osteoporosis in patients undergoing CT scans for other reasons.
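A sketch of the measurement described above: shrink an automatically segmented vertebral mask by a fixed margin so that only trabecular bone remains, then take the mean and median HU inside the reduced mask. The 7 mm margin comes from the abstract; the erosion strategy and all names are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import binary_erosion


def trabecular_attenuation(ct_hu, vertebra_mask, voxel_size_mm, margin_mm=7.0):
    """Mean and median HU inside a vertebral mask reduced by a fixed margin.

    ct_hu: 3D CT volume in Hounsfield units; vertebra_mask: boolean mask of one vertebra;
    voxel_size_mm: (z, y, x) voxel spacing. Assumes roughly isotropic voxels, so the
    margin is approximated by repeated one-voxel erosions.
    """
    iterations = max(1, int(round(margin_mm / float(np.mean(voxel_size_mm)))))
    core = binary_erosion(vertebra_mask, iterations=iterations)
    values = ct_hu[core]
    return float(values.mean()), float(np.median(values))
```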


Subject(s)
Deep Learning , Osteoporosis , Absorptiometry, Photon , Bone Density , Female , Humans , Lumbar Vertebrae/diagnostic imaging , Male , Middle Aged , Osteoporosis/diagnostic imaging , Retrospective Studies
11.
Sci Rep ; 11(1): 23905, 2021 12 13.
Article in English | MEDLINE | ID: mdl-34903773

ABSTRACT

To develop a fully automatic model capable of reliably quantifying epicardial adipose tissue (EAT) volumes and attenuation in large scale population studies to investigate their relation to markers of cardiometabolic risk. Non-contrast cardiac CT images from the SCAPIS study were used to train and test a convolutional neural network based model to quantify EAT by: segmenting the pericardium, suppressing noise-induced artifacts in the heart chambers, and, if image sets were incomplete, imputing missing EAT volumes. The model achieved a mean Dice coefficient of 0.90 when tested against expert manual segmentations on 25 image sets. Tested on 1400 image sets, the model successfully segmented 99.4% of the cases. Automatic imputation of missing EAT volumes had an error of less than 3.1% with up to 20% of the slices in image sets missing. The most important predictors of EAT volumes were weight and waist, while EAT attenuation was predicted mainly by EAT volume. A model with excellent performance, capable of fully automatic handling of the most common challenges in large scale EAT quantification has been developed. In studies of the importance of EAT in disease development, the strong co-variation with anthropometric measures needs to be carefully considered.


Subject(s)
Adipose Tissue/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Machine Learning , Pericardium/diagnostic imaging , Female , Humans , Image Interpretation, Computer-Assisted/standards , Male , Mass Screening/methods , Middle Aged , Software/standards
12.
Eur Radiol Exp ; 5(1): 50, 2021 11 19.
Article in English | MEDLINE | ID: mdl-34796422

ABSTRACT

BACKGROUND: Radical cystectomy for urinary bladder cancer is a procedure associated with a high risk of complications and poor overall survival (OS) due to both patient and tumour factors. Sarcopenia is one such patient factor. We have developed a fully automated artificial intelligence (AI)-based image analysis tool for segmenting skeletal muscle of the torso and calculating the muscle volume. METHODS: All patients who had undergone radical cystectomy for urinary bladder cancer at Sahlgrenska University Hospital between 2011 and 2019, and who had a pre-operative computed tomography of the abdomen within 90 days of surgery, were included in the study. All patients' CT studies were analysed with the automated AI-based image analysis tool. Clinical data for the patients were retrieved from the Swedish National Register for Urinary Bladder Cancer. Muscle volumes dichotomised by the median for each sex were analysed with Cox regression for OS and logistic regression for 90-day high-grade complications. The study was approved by the Swedish Ethical Review Authority (2020-03985). RESULTS: Out of 445 patients who underwent surgery, 299 (67%) had CT studies available for analysis. The automated AI-based tool failed to segment the muscle volume in seven (2%) patients. Cox regression analysis showed an independent significant association with OS (HR 1.62; 95% CI 1.07-2.44; p = 0.022). Logistic regression did not show any association with high-grade complications. CONCLUSION: The fully automated AI-based CT image analysis provides a low-cost and meaningful clinical measure that is an independent biomarker for OS following radical cystectomy.
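As a small illustration of the analysis set-up described above, the sketch below dichotomises AI-derived muscle volumes by the sex-specific median, the form in which they would enter the Cox and logistic regression models; the dataframe and column names are invented for the example.

```python
import pandas as pd

# Invented example data: AI-derived torso muscle volume per patient
df = pd.DataFrame({
    "sex": ["M", "M", "F", "F", "M", "F"],
    "muscle_volume_l": [7.2, 5.9, 4.1, 4.8, 6.5, 3.9],
})

# Low muscle volume = below the median within each sex
df["low_muscle_volume"] = df.groupby("sex")["muscle_volume_l"].transform(
    lambda v: v < v.median()
)
print(df)
```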


Subject(s)
Cystectomy , Urinary Bladder Neoplasms , Artificial Intelligence , Cystectomy/adverse effects , Female , Humans , Male , Muscle, Skeletal/diagnostic imaging , Retrospective Studies , Urinary Bladder Neoplasms/diagnostic imaging , Urinary Bladder Neoplasms/surgery
13.
EJNMMI Res ; 11(1): 105, 2021 Oct 12.
Article in English | MEDLINE | ID: mdl-34637028

ABSTRACT

BACKGROUND: Since three-dimensional segmentation of the cardiac region in 123I-metaiodobenzylguanidine (MIBG) studies has not been established, this study aimed to achieve organ segmentation using a convolutional neural network (CNN) with 123I-MIBG single photon emission computed tomography (SPECT) imaging, to calculate heart counts and washout rates (WR) automatically, and to compare them with conventional quantitation based on planar imaging. METHODS: We assessed 48 patients (aged 68.4 ± 11.7 years) with heart and neurological diseases, including chronic heart failure, dementia with Lewy bodies, and Parkinson's disease. All patients were assessed by early and late 123I-MIBG planar and SPECT imaging. The CNN was initially trained to individually segment the lungs and liver on early and late SPECT images. The segmentation masks were aligned, the CNN was then trained to directly segment the heart, and all models were evaluated using fourfold cross-validation. The CNN-based average heart counts and WR were calculated and compared with those determined using planar parameters. The CNN-based SPECT and conventional planar heart counts were corrected for physical decay, injected dose of 123I-MIBG, and body weight. We also divided WR into normal and abnormal groups using linear regression lines determined by the relationship between planar and CNN-based WR, and then analyzed the agreement between them. RESULTS: The CNN segmented the cardiac region in patients with normal and reduced uptake. The CNN-based SPECT heart counts significantly correlated with conventional planar heart counts with and without background correction and with the planar heart-to-mediastinum ratio (R2 = 0.862, 0.827, and 0.729, respectively; p < 0.0001). The CNN-based WR also correlated with planar WR with and without background correction and with WR based on heart-to-mediastinum ratios (R2 = 0.584, 0.568, and 0.507, respectively; p < 0.0001). Contingency table findings of high and low WR (cutoffs: 34% and 30% for planar and SPECT studies, respectively) showed 87.2% agreement between the CNN-based and planar methods. CONCLUSIONS: The CNN could create segmentations from SPECT images, and average heart counts and WR were reliably calculated three-dimensionally, which might be a novel approach to quantifying SPECT images of innervation.
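The washout rate itself is a simple quantity once early and late heart counts are available; a common planar-style formulation with background subtraction and 123I decay correction is sketched below. The exact normalisation used in the study (injected dose, body weight) is summarised in the abstract and not reproduced here, and all names are illustrative.

```python
T_HALF_I123_H = 13.22  # physical half-life of 123I in hours


def decay_correct(counts, elapsed_h):
    """Correct late counts for physical decay of 123I back to the early time point."""
    return counts * 2.0 ** (elapsed_h / T_HALF_I123_H)


def washout_rate(heart_early, heart_late, elapsed_h, bkg_early=0.0, bkg_late=0.0):
    """Washout rate (%) from early/late heart counts, with optional background
    subtraction and decay correction of the late value."""
    early = heart_early - bkg_early
    late = decay_correct(heart_late - bkg_late, elapsed_h)
    return 100.0 * (early - late) / early


# Example: late imaging roughly 3 h after the early acquisition
print(washout_rate(heart_early=1200.0, heart_late=800.0, elapsed_h=3.0))
```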

14.
Scand J Urol ; 55(6): 427-433, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34565290

ABSTRACT

OBJECTIVE: Artificial intelligence (AI) offers new opportunities for objective quantitative measurements of imaging biomarkers from positron-emission tomography/computed tomography (PET/CT). Clinical image reporting relies predominantly on observer-dependent visual assessment and easily accessible measures like SUVmax, which represents lesion uptake in a relatively small amount of tissue. Our hypothesis was that measurements of the total volume and lesion uptake of the entire tumour would better reflect the disease's activity, and carry greater prognostic significance, than conventional measurements. METHODS: An AI-based algorithm was trained to automatically measure the prostate and its tumour content in PET/CT of 145 patients. The algorithm was then tested retrospectively on 285 high-risk patients, who were examined using 18F-choline PET/CT for primary staging between April 2008 and July 2015. Prostate tumour volume, tumour fraction of the prostate gland, lesion uptake of the entire tumour, and SUVmax were obtained automatically. Associations between these measurements, age, PSA, Gleason score and prostate cancer-specific survival were studied, using a Cox proportional-hazards regression model. RESULTS: Twenty-three patients died of prostate cancer during follow-up (median survival 3.8 years). Total tumour volume of the prostate (p = 0.008), tumour fraction of the gland (p = 0.005), total lesion uptake of the prostate (p = 0.02), and age (p = 0.01) were significantly associated with disease-specific survival, whereas SUVmax (p = 0.2), PSA (p = 0.2), and Gleason score (p = 0.8) were not. CONCLUSION: AI-based assessments of total tumour volume and lesion uptake were significantly associated with disease-specific survival in this patient cohort, whereas SUVmax and Gleason scores were not. The AI-based approach appears well-suited for clinically relevant patient stratification and monitoring of individual therapy.


Subject(s)
Positron Emission Tomography Computed Tomography , Prostatic Neoplasms , Artificial Intelligence , Biomarkers , Humans , Male , Prostatic Neoplasms/diagnostic imaging , Retrospective Studies
15.
Sci Rep ; 11(1): 10382, 2021 05 17.
Article in English | MEDLINE | ID: mdl-34001922

ABSTRACT

To develop an artificial intelligence (AI)-based method for the detection of focal skeleton/bone marrow uptake (BMU) in patients with Hodgkin's lymphoma (HL) undergoing staging with FDG-PET/CT. The results of the AI in a separate test group were compared to the interpretations of independent physicians. The skeleton and bone marrow were segmented using a convolutional neural network. The training of the AI was based on 153 untreated patients. Bone uptake significantly higher than the mean BMU was marked as abnormal, and an index, based on the total squared abnormal uptake, was computed to identify focal uptake. Patients with an index above a predefined threshold were interpreted as having focal uptake. As the test group, 48 untreated patients with biopsy-proven HL who had undergone a staging FDG-PET/CT between 2017 and 2018 were retrospectively included. Ten physicians classified the 48 cases regarding focal skeleton/BMU. The majority of the physicians agreed with the AI in 39/48 cases (81%) regarding focal skeleton/bone marrow involvement. Inter-observer agreement between the physicians was moderate, with a kappa of 0.51 (range 0.25-0.80). An AI-based method can be developed to highlight suspicious focal skeleton/BMU in HL patients staged with FDG-PET/CT. Inter-observer agreement regarding focal BMU is moderate among nuclear medicine physicians.
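A minimal sketch of the index idea described above: sum the squared SUV excess over a reference level inside the skeletal mask and compare it with a predefined threshold. The abstract does not specify exactly how "significantly higher than the mean BMU" was defined, so a mean-plus-standard-deviations cut-off is assumed here purely for illustration.

```python
import numpy as np


def focal_bmu_index(suv, skeleton_mask, n_sd=3.0):
    """Index of focal skeletal/bone marrow uptake: total squared SUV excess over a
    reference level (mean BMU + n_sd standard deviations) inside the skeleton mask."""
    bmu = suv[skeleton_mask]
    reference = bmu.mean() + n_sd * bmu.std()
    excess = np.clip(bmu - reference, 0.0, None)
    return float(np.sum(excess**2))


def has_focal_uptake(index_value, threshold):
    """Binary read-out: the scan is interpreted as having focal uptake above the threshold."""
    return index_value > threshold
```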


Subject(s)
Artificial Intelligence , Bone Marrow/metabolism , Hodgkin Disease/diagnosis , Skeleton/diagnostic imaging , Adolescent , Adult , Aged , Aged, 80 and over , Biological Transport/genetics , Biopsy , Bone Marrow/diagnostic imaging , Child , Female , Fluorodeoxyglucose F18/administration & dosage , Hodgkin Disease/diagnostic imaging , Hodgkin Disease/metabolism , Hodgkin Disease/pathology , Humans , Male , Middle Aged , Multimodal Imaging , Musculoskeletal System/diagnostic imaging , Musculoskeletal System/metabolism , Neural Networks, Computer , Positron Emission Tomography Computed Tomography , Radiopharmaceuticals/administration & dosage , Skeleton/metabolism , Skeleton/pathology , Young Adult
16.
EJNMMI Phys ; 8(1): 32, 2021 Mar 25.
Article in English | MEDLINE | ID: mdl-33768311

ABSTRACT

BACKGROUND: [18F]-fluorodeoxyglucose (FDG) positron emission tomography with computed tomography (PET-CT) is a well-established modality in the work-up of patients with suspected or confirmed diagnosis of lung cancer. Recent research efforts have focused on extracting theragnostic and textural information from manually indicated lung lesions. Both semi-automatic and fully automatic use of artificial intelligence (AI) to localise and classify FDG-avid foci has been demonstrated. To fully harness AI's usefulness, we have developed a method which both automatically detects abnormal lung lesions and calculates the total lesion glycolysis (TLG) on FDG PET-CT. METHODS: One hundred twelve patients (59 females and 53 males) who underwent FDG PET-CT due to suspected or for the management of known lung cancer were studied retrospectively. These patients were divided into a training group (59%; n = 66), a validation group (20.5%; n = 23) and a test group (20.5%; n = 23). A nuclear medicine physician manually segmented abnormal lung lesions with increased FDG uptake in all PET-CT studies. The AI-based method was trained to segment the lesions based on the manual segmentations. TLG was then calculated from the manual and AI-based measurements, respectively, and analysed with Bland-Altman plots. RESULTS: The AI tool detected lesions with a sensitivity of 90%. One small lesion was missed in each of two patients, both of whom had a larger lesion that was correctly detected. The positive and negative predictive values were 88% and 100%, respectively. The correlation between manual and AI TLG measurements was strong (R2 = 0.74). The bias was 42 g, and the 95% limits of agreement ranged from -736 to 819 g. Agreement was particularly high for smaller lesions. CONCLUSIONS: The AI-based method is suitable for the detection of lung lesions and automatic calculation of TLG in small- to medium-sized tumours. In a clinical setting, it would add value by sorting out negative examinations, allowing care to be prioritised and focused on patients with potentially malignant lesions.
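The Bland-Altman figures above (bias and 95% limits of agreement) follow directly from the paired differences; a small sketch is shown below with invented TLG values in place of the study data.

```python
import numpy as np


def bland_altman(manual, automated):
    """Bias and 95% limits of agreement between paired measurements (here TLG in grams)."""
    diff = np.asarray(automated, dtype=float) - np.asarray(manual, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)


# Invented paired TLG values from manual and AI-based segmentations
manual = [120.0, 340.0, 90.0, 1500.0, 60.0]
automated = [130.0, 310.0, 95.0, 1650.0, 58.0]
bias, (low, high) = bland_altman(manual, automated)
print(f"bias = {bias:.0f} g, 95% limits of agreement = ({low:.0f}, {high:.0f}) g")
```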

17.
Eur Radiol Exp ; 5(1): 11, 2021 03 11.
Article in English | MEDLINE | ID: mdl-33694046

ABSTRACT

BACKGROUND: Body composition is associated with survival outcome in oncological patients, but it is not routinely calculated. Manual segmentation of subcutaneous adipose tissue (SAT) and muscle is time-consuming and therefore limited to a single CT slice. Our goal was to develop an artificial-intelligence (AI)-based method for automated quantification of three-dimensional SAT and muscle volumes from CT images. METHODS: Ethical approvals from Gothenburg and Lund Universities were obtained. Convolutional neural networks were trained to segment SAT and muscle using manual segmentations on CT images from a training group of 50 patients. The method was applied to a separate test group of 74 cancer patients, who had two CT studies each with a median interval between the studies of 3 days. Manual segmentations in a single CT slice were used for comparison. The accuracy was measured as overlap between the automated and manual segmentations. RESULTS: The accuracy of the AI method was 0.96 for SAT and 0.94 for muscle. The average differences in volumes were significantly lower than the corresponding differences in areas in a single CT slice: 1.8% versus 5.0% (p < 0.001) for SAT and 1.9% versus 3.9% (p < 0.001) for muscle. The 95% confidence intervals for predicted volumes in an individual subject from the corresponding single CT slice areas were in the order of ± 20%. CONCLUSIONS: The AI-based tool for quantification of SAT and muscle volumes showed high accuracy and reproducibility and provided a body composition analysis that is more relevant than manual analysis of a single CT slice.


Subject(s)
Artificial Intelligence , Tomography, X-Ray Computed , Body Composition , Humans , Neural Networks, Computer , Reproducibility of Results
18.
Clin Physiol Funct Imaging ; 41(1): 62-67, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32976691

ABSTRACT

INTRODUCTION: Lymph node metastases are a key prognostic factor in prostate cancer (PCa), but detecting lymph node lesions from PET/CT images is a subjective process resulting in inter-reader variability. Artificial intelligence (AI)-based methods can provide an objective image analysis. We aimed to develop and validate an AI-based tool for the detection of lymph node lesions. METHODS: A group of 399 patients with biopsy-proven PCa who had undergone 18F-choline PET/CT for staging prior to treatment were used to train (n = 319) and test (n = 80) the AI-based tool. The tool consisted of convolutional neural networks using complete PET/CT scans as inputs. In the test set, the AI-based lymph node detections were compared to those of two independent readers. The association with PCa-specific survival was investigated. RESULTS: The AI-based tool detected more lymph node lesions than Reader B (98 vs. 87/117; p = .045) using Reader A as reference. The AI-based tool and Reader A showed similar performance (90 vs. 87/111; p = .63) using Reader B as reference. The number of lymph node lesions detected by the AI-based tool, PSA, and curative treatment were significantly associated with PCa-specific survival. CONCLUSION: This study shows the feasibility of using an AI-based tool for automated and objective interpretation of PET/CT images that can provide assessments of lymph node lesions comparable with those of experienced readers, as well as prognostic information, in PCa patients.


Subject(s)
Artificial Intelligence , Image Interpretation, Computer-Assisted/methods , Lymphatic Metastasis/diagnostic imaging , Positron Emission Tomography Computed Tomography/methods , Prostatic Neoplasms/pathology , Adult , Aged , Aged, 80 and over , Feasibility Studies , Humans , Lymph Nodes/diagnostic imaging , Male , Middle Aged , Predictive Value of Tests , Survival Analysis
19.
Clin Transl Radiat Oncol ; 25: 37-45, 2020 Nov.
Article in English | MEDLINE | ID: mdl-33005756

ABSTRACT

BACKGROUND: It is time-consuming for oncologists to delineate volumes for radiotherapy treatment in computed tomography (CT) images. Automatic delineation based on image processing exists, but with varied accuracy and moderate time savings. Using a convolutional neural network (CNN), delineation of volumes is faster and more accurate. We have used CTs with the annotated structure sets to train and evaluate a CNN. MATERIAL AND METHODS: The CNN is a standard segmentation network modified to minimize memory usage. We used CTs and structure sets from 75 cervical cancers and 191 anorectal cancers receiving radiation therapy at Skåne University Hospital 2014-2018. Five structures were investigated: left/right femoral heads, bladder, bowel bag, and clinical target volume of lymph nodes (CTVNs). Dice score and mean surface distance (MSD, mm) were used to evaluate accuracy, and one oncologist qualitatively evaluated the auto-segmentations. RESULTS: Median Dice/MSD scores for anorectal cancer were 0.91-0.92/1.93-1.86 for the femoral heads, 0.94/2.07 for the bladder, and 0.83/6.80 for the bowel bag. Median Dice/MSD scores for cervical cancer were 0.93-0.94/1.42-1.49 for the femoral heads, 0.84/3.51 for the bladder, 0.88/5.80 for the bowel bag, and 0.82/3.89 for the CTVNs. In the qualitative evaluation, performance on the femoral head and bladder auto-segmentations was mostly excellent, but a larger proportion of the CTVN auto-segmentations were not acceptable. DISCUSSION: It is possible to train a CNN with high overlap using structure sets as ground truth. Manually delineated pelvic volumes from structure sets do not always strictly follow volume boundaries and are sometimes inaccurately defined, which leads to similar inaccuracies in the CNN output. More consistently annotated data are needed to achieve higher CNN accuracy and to enable future clinical implementation.

20.
EJNMMI Phys ; 7(1): 51, 2020 Aug 04.
Article in English | MEDLINE | ID: mdl-32754893

ABSTRACT

BACKGROUND: Artificial intelligence (AI) is about to transform medical imaging. The Research Consortium for Medical Image Analysis (RECOMIA), a not-for-profit organisation, has developed an online platform to facilitate collaboration between medical researchers and AI researchers. The aim is to minimise the time and effort researchers need to spend on technical aspects, such as transfer, display, and annotation of images, as well as legal aspects, such as de-identification. The purpose of this article is to present the RECOMIA platform and its AI-based tools for organ segmentation in computed tomography (CT), which can be used for extraction of standardised uptake values from the corresponding positron emission tomography (PET) image. RESULTS: The RECOMIA platform includes modules for (1) local de-identification of medical images, (2) secure transfer of images to the cloud-based platform, (3) display functions available using a standard web browser, (4) tools for manual annotation of organs or pathology in the images, (5) deep learning-based tools for organ segmentation or other customised analyses, (6) tools for quantification of segmented volumes, and (7) an export function for the quantitative results. The AI-based tool for organ segmentation in CT currently handles 100 organs (77 bones and 23 soft tissue organs). The segmentation is based on two convolutional neural networks (CNNs): one network to handle organs with multiple similar instances, such as vertebrae and ribs, and one network for all other organs. The CNNs have been trained using CT studies from 339 patients. Experienced radiologists annotated organs in the CT studies. The performance of the segmentation tool, measured as the mean Dice index on a manually annotated test set with 10 representative organs, was 0.93 for all foreground voxels, and the mean Dice index over the organs was 0.86 (0.82 for the soft tissue organs and 0.90 for the bones). CONCLUSION: The paper presents a platform that provides deep learning-based tools that can perform basic organ segmentations in CT, which can then be used to automatically obtain different measurements in the corresponding PET image. The RECOMIA platform is available on request at www.recomia.org for research purposes.
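To illustrate how a CT-derived organ segmentation can be used to read out values from the co-registered PET image, the sketch below computes body-weight-normalised SUV and simple per-organ SUV statistics, assuming the organ mask has already been resampled to the PET grid. The functions and names are illustrative assumptions, not the RECOMIA API.

```python
import numpy as np


def suv_from_activity(activity_bq_per_ml, injected_dose_bq, body_weight_g):
    """Body-weight-normalised SUV from a decay-corrected activity concentration image."""
    return activity_bq_per_ml * body_weight_g / injected_dose_bq


def organ_suv_stats(suv, organ_mask):
    """SUVmean, SUVmax and SUVmedian inside a CT-derived organ mask on the PET grid."""
    values = suv[organ_mask]
    return {
        "SUVmean": float(values.mean()),
        "SUVmax": float(values.max()),
        "SUVmedian": float(np.median(values)),
    }
```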
