1.
J Nucl Med ; 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38844359

ABSTRACT

The integration of automated whole-body tumor segmentation using 18F-FDG PET/CT images represents a pivotal shift in oncologic diagnostics, enhancing the precision and efficiency of tumor burden assessment. This editorial examines the transition toward automation, propelled by advancements in artificial intelligence, notably through deep learning techniques. We highlight the current availability of commercial tools and the academic efforts that have set the stage for these developments. Further, we comment on the challenges of data diversity, validation needs, and regulatory barriers. The role of metabolic tumor volume and total lesion glycolysis as vital metrics in cancer management underscores the significance of this evaluation. Despite promising progress, we call for increased collaboration across academia, clinical users, and industry to better realize the clinical benefits of automated segmentation, thus helping to streamline workflows and improve patient outcomes in oncology.

2.
Cancer Imaging ; 24(1): 51, 2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38605408

ABSTRACT

The evolution of Positron Emission Tomography (PET), culminating in the Total-Body PET (TB-PET) system, represents a paradigm shift in medical imaging. This paper explores the transformative role of Artificial Intelligence (AI) in enhancing clinical and research applications of TB-PET imaging. Clinically, TB-PET's superior sensitivity facilitates rapid imaging, low-dose imaging protocols, improved diagnostic capabilities and higher patient comfort. In research, TB-PET shows promise in studying systemic interactions and enhancing our understanding of human physiology and pathophysiology. In parallel, AI's integration into PET imaging workflows-spanning from image acquisition to data analysis-marks a significant development in nuclear medicine. This review delves into the current and potential roles of AI in augmenting TB-PET/CT's functionality and utility. We explore how AI can streamline current PET imaging processes and pioneer new applications, thereby maximising the technology's capabilities. The discussion also addresses necessary steps and considerations for effectively integrating AI into TB-PET/CT research and clinical practice. The paper highlights AI's role in enhancing TB-PET's efficiency and addresses the challenges posed by TB-PET's increased complexity. In conclusion, this exploration emphasises the need for a collaborative approach in the field of medical imaging. We advocate for shared resources and open-source initiatives as crucial steps towards harnessing the full potential of the AI/TB-PET synergy. This collaborative effort is essential for revolutionising medical imaging, ultimately leading to significant advancements in patient care and medical research.


Subject(s)
Artificial Intelligence , Positron Emission Tomography Computed Tomography , Humans , Positron-Emission Tomography
4.
J Nucl Med ; 64(7): 1145-1153, 2023 07.
Article in English | MEDLINE | ID: mdl-37290795

ABSTRACT

We introduce the Fast Algorithm for Motion Correction (FALCON) software, which allows correction of both rigid and nonlinear motion artifacts in dynamic whole-body (WB) images, irrespective of the PET/CT system or the tracer. Methods: Motion was corrected using affine alignment followed by a diffeomorphic approach to account for nonrigid deformations. In both steps, images were registered using multiscale image alignment. Moreover, the frames suitable for successful motion correction were automatically estimated by calculating the initial normalized cross-correlation metric between the reference frame and the other moving frames. To evaluate motion correction performance, WB dynamic image sequences from 3 different PET/CT systems (Biograph mCT, Biograph Vision 600, and uEXPLORER) using 6 different tracers (18F-FDG, 18F-fluciclovine, 68Ga-PSMA, 68Ga-DOTATATE, 11C-Pittsburgh compound B, and 82Rb) were considered. Motion correction accuracy was assessed using 4 different measures: change in volume mismatch between individual WB image volumes to assess gross body motion, change in displacement of a large organ (liver dome) within the torso due to respiration, change in intensity in small tumor nodules due to motion blur, and constancy of activity concentration levels. Results: Motion correction decreased gross body motion artifacts and reduced volume mismatch across dynamic frames by about 50%. Moreover, large-organ motion correction was assessed on the basis of correction of liver dome motion, which was removed entirely in about 70% of all cases. Motion correction also improved tumor intensity, resulting in an average increase in tumor SUVs of 15%. Large deformations seen in gated cardiac 82Rb images were managed without leading to anomalous distortions or substantial intensity changes in the resulting images. Finally, the constancy of activity concentration levels was reasonably preserved (<2% change) in large organs before and after motion correction.
Conclusion: FALCON allows fast and accurate correction of rigid and nonrigid WB motion artifacts while being insensitive to scanner hardware or tracer distribution, making it applicable to a wide range of PET imaging scenarios.
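The frame-selection step described above — scoring each moving frame against the reference frame with normalized cross-correlation before attempting registration — can be sketched as follows. This is an illustrative sketch, not FALCON's actual code; the function names and the acceptance threshold are assumptions.

```python
import numpy as np

def normalized_cross_correlation(reference: np.ndarray, moving: np.ndarray) -> float:
    """Normalized cross-correlation between two image volumes, in [-1, 1]."""
    ref = reference - reference.mean()
    mov = moving - moving.mean()
    denom = np.sqrt((ref ** 2).sum() * (mov ** 2).sum())
    if denom == 0:
        return 0.0
    return float((ref * mov).sum() / denom)

def select_correctable_frames(frames, reference, threshold=0.5):
    """Return indices of frames whose NCC with the reference exceeds a
    threshold; only these would be passed on to motion correction.
    The threshold value here is a placeholder assumption."""
    return [i for i, frame in enumerate(frames)
            if normalized_cross_correlation(reference, frame) >= threshold]
```

Frames with low correlation (e.g., very early frames whose tracer distribution differs radically from the reference) would be excluded rather than mis-registered.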


Subject(s)
Motion , Positron Emission Tomography Computed Tomography , Positron Emission Tomography Computed Tomography/methods , Automation , Whole Body Imaging/methods , Time Factors , Humans , Software , Neoplasms/diagnostic imaging
6.
J Nucl Med ; 63(12): 1941-1948, 2022 12.
Article in English | MEDLINE | ID: mdl-35772962

ABSTRACT

We introduce multiple-organ objective segmentation (MOOSE) software that generates subject-specific, multiorgan segmentation using data-centric artificial intelligence principles to facilitate high-throughput systemic investigations of the human body via whole-body PET imaging. Methods: Image data from 2 PET/CT systems were used in training MOOSE. For noncerebral structures, 50 whole-body CT images were used, 30 of which were acquired from healthy controls (14 men and 16 women), and 20 datasets were acquired from oncology patients (14 men and 6 women). Noncerebral tissues consisted of 13 abdominal organs, 20 bone segments, subcutaneous fat, visceral fat, psoas muscle, and skeletal muscle. An expert panel manually segmented all noncerebral structures except for subcutaneous fat, visceral fat, and skeletal muscle, which were semiautomatically segmented using thresholding. A majority-voting algorithm was used to generate a reference-standard segmentation. From the 50 CT datasets, 40 were used for training and 10 for testing. For cerebral structures, 34 18F-FDG PET/MRI brain image volumes were used from 10 healthy controls (5 men and 5 women imaged twice) and 14 nonlesional epilepsy patients (7 men and 7 women). Only 18F-FDG PET images were considered for training: 24 and 10 of 34 volumes were used for training and testing, respectively. The Dice score coefficient (DSC) was used as the primary metric, and the average symmetric surface distance as a secondary metric, to evaluate the automated segmentation performance. Results: An excellent overlap between the reference labels and MOOSE-derived organ segmentations was observed: 92% of noncerebral tissues showed DSCs of more than 0.90, whereas a few organs exhibited lower DSCs (e.g., adrenal glands [0.72], pancreas [0.85], and bladder [0.86]). The median DSCs of brain subregions derived from PET images were lower. 
Only 29% of the brain segments had a median DSC of more than 0.90, whereas segmentation of 60% of regions yielded a median DSC of 0.80-0.89. The results of the average symmetric surface distance analysis demonstrated that the average distance between the reference standard and the automatically segmented tissue surfaces (organs, bones, and brain regions) lies within the size of image voxels (2 mm). Conclusion: The proposed segmentation pipeline allows automatic segmentation of 120 unique tissues from whole-body 18F-FDG PET/CT images with high accuracy.
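The primary evaluation metric above, the Dice score coefficient, has a compact definition for binary label masks. A minimal sketch (the empty-mask convention is an assumption, not stated in the abstract):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice score coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * np.logical_and(pred, truth).sum() / denom)
```

A DSC of 0.90, the cutoff used in the results above, means the overlapping volume is 90% of the average of the two segmentation volumes.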


Subject(s)
Fluorodeoxyglucose F18 , Positron Emission Tomography Computed Tomography , Male , Humans , Female , Positron Emission Tomography Computed Tomography/methods , Artificial Intelligence , Human Body , Semantics , Image Processing, Computer-Assisted/methods
7.
J Nucl Med ; 62(6): 871-879, 2021 06 01.
Article in English | MEDLINE | ID: mdl-33246982

ABSTRACT

This work set out to develop a motion-correction approach aided by conditional generative adversarial network (cGAN) methodology that allows reliable, data-driven determination of involuntary subject motion during dynamic 18F-FDG brain studies. Methods: Ten healthy volunteers (5 men/5 women; mean age ± SD, 27 ± 7 y; weight, 70 ± 10 kg) underwent a test-retest 18F-FDG PET/MRI examination of the brain (n = 20). The imaging protocol consisted of a 60-min PET list-mode acquisition contemporaneously acquired with MRI, including MR navigators and a 3-dimensional time-of-flight MR angiography sequence. Arterial blood samples were collected as a reference standard representing the arterial input function (AIF). Training of the cGAN was performed using 70% of the total datasets (n = 16, randomly chosen), which were corrected for motion using MR navigators. The resulting cGAN mappings (between individual frames and the reference frame [55-60 min after injection]) were then applied to the test dataset (remaining 30%, n = 6), producing artificially generated low-noise images from early high-noise PET frames. These low-noise images were then coregistered to the reference frame, yielding 3-dimensional motion vectors. Performance of cGAN-aided motion correction was assessed by comparing the image-derived input function (IDIF) extracted from a cGAN-aided motion-corrected dynamic sequence with the AIF based on the areas under the curves (AUCs). Moreover, clinical relevance was assessed through direct comparison of the average cerebral metabolic rates of glucose (CMRGlc) values in gray matter calculated using the AIF and the IDIF. Results: The absolute percentage difference between AUCs derived using the motion-corrected IDIF and the AIF was 1.2% ± 0.9%. The gray matter CMRGlc values determined using these 2 input functions differed by less than 5% (2.4% ± 1.7%).
Conclusion: A fully automated data-driven motion-compensation approach was established and tested for 18F-FDG PET brain imaging. cGAN-aided motion correction enables the translation of noninvasive clinical absolute quantification from PET/MR to PET/CT by allowing the accurate determination of motion vectors from the PET data itself.
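The AUC-based comparison of the IDIF against the AIF amounts to trapezoidal integration of the two curves over a shared time grid and taking the relative difference. A minimal sketch under that assumption (function name and time grid are illustrative):

```python
import numpy as np

def auc_percent_difference(times, idif, aif):
    """Absolute percentage difference between the areas under the IDIF
    and AIF curves, using trapezoidal integration on a shared time grid."""
    t = np.asarray(times, float)

    def auc(curve):
        y = np.asarray(curve, float)
        # trapezoidal rule: sum of interval-wise average heights times widths
        return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(t)))

    return abs(auc(idif) - auc(aif)) / auc(aif) * 100.0
```

In practice both curves would be sampled from the dynamic frames and the arterial samples interpolated onto the same grid before comparison.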


Subject(s)
Brain/diagnostic imaging , Fluorodeoxyglucose F18 , Image Processing, Computer-Assisted/methods , Movement , Neural Networks, Computer , Positron-Emission Tomography , Humans , Magnetic Resonance Imaging
8.
Methods ; 188: 4-19, 2021 04.
Article in English | MEDLINE | ID: mdl-33068741

ABSTRACT

State-of-the-art patient management frequently mandates the investigation of both anatomy and physiology of the patients. Hybrid imaging modalities such as PET/MRI, PET/CT and SPECT/CT have the ability to provide both structural and functional information of the investigated tissues in a single examination. With the introduction of such advanced hardware fusion, new challenges arise, such as the exceedingly large amount of multi-modality data, which requires novel approaches to extracting maximal clinical information from large sets of multi-dimensional imaging data. Artificial intelligence (AI) has emerged as one of the leading technologies that has shown promise in facilitating highly integrative analysis of multi-parametric data. Specifically, the usefulness of AI algorithms in the medical imaging field has been heavily investigated in the realms of (1) image acquisition and reconstruction, (2) post-processing and (3) data mining and modelling. Here, we aim to provide an overview of the challenges encountered in hybrid imaging and discuss how AI algorithms can facilitate potential solutions. In addition, we highlight the pitfalls and challenges in using advanced AI algorithms in the context of hybrid imaging and provide suggestions for building robust AI solutions that enable reproducible and transparent research.


Subject(s)
Artificial Intelligence , Data Mining , Image Processing, Computer-Assisted/methods , Multimodal Imaging/methods , Datasets as Topic , Humans
9.
Med Phys ; 47(10): 4786-4799, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32679623

ABSTRACT

PURPOSE: We developed a target-based cone beam computed tomography (CBCT) imaging framework for optimizing an unconstrained three-dimensional (3D) source-detector trajectory by incorporating prior image information. Our main aim is to enable a CBCT system to provide topical information about the target using a limited angle noncircular scan orbit with a minimal number of projections. Such a customized trajectory should include enough information to sufficiently reconstruct a particular volume of interest (VOI) under kinematic constraints, which may result from the patient size or additional surgical or radiation therapy-related equipment. METHODS: A patient-specific model from a prior diagnostic computed tomography (CT) volume is used as a digital phantom for CBCT trajectory simulations. Selection of the best projection views is accomplished through maximizing an objective function fed by the imaging quality provided by different x-ray positions on the digital phantom data. The final optimized trajectory includes a limited angular range and a minimal number of projections which can be applied to a C-arm device capable of general source-detector positioning. The performance of the proposed framework is investigated in experiments involving an in-house-built box phantom including spherical targets as well as an Alderson-Rando head phantom. In order to quantify the image quality of the reconstructed image, we use the average full-width-half-maximum (FWHMavg) for the spherical target and feature similarity index (FSIM), universal quality index (UQI), and contrast-to-noise ratio (CNR) for an anatomical target. RESULTS: Our experiments based on both the box and head phantom showed that optimized trajectories could achieve a comparable image quality in the VOI with respect to the standard C-arm circular CBCT while using approximately one quarter of the projections.
We achieved a relative deviation <7% for FWHMavg between the reconstructed images from the optimized trajectories and the standard C-arm CBCT for all spherical targets. Furthermore, for the anatomical target, the relative deviation of FSIM, UQI, and CNR between the reconstructed image related to the proposed trajectory and the standard C-arm circular CBCT was found to be 5.06%, 6.89%, and 8.64%, respectively. We also compared our proposed trajectories to circular trajectories with angular sampling equivalent to that of the optimized trajectories. Our results show that optimized trajectories can outperform simple partial circular trajectories in the VOI in terms of image quality. Typically, an angular range between 116° and 152° was used for the optimized trajectories. CONCLUSION: We demonstrated that applying limited angle noncircular trajectories with optimized orientations in 3D space can provide a suitable image quality for particular image targets and has a potential for limited angle and low-dose CBCT-based interventions under strong spatial constraints.
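Of the image-quality metrics reported above, the contrast-to-noise ratio has the simplest form. Definitions vary between papers, so the sketch below (absolute mean difference over background standard deviation) is one common convention and not necessarily the exact formula used in this study:

```python
import numpy as np

def contrast_to_noise_ratio(target_roi, background_roi):
    """CNR as |mean(target) - mean(background)| / std(background).
    Inputs are arrays of voxel intensities sampled from the two
    regions of interest."""
    target = np.asarray(target_roi, float)
    background = np.asarray(background_roi, float)
    return float(abs(target.mean() - background.mean()) / background.std())
```

Relative deviation between two trajectories would then be computed on the CNR values obtained from the same ROIs in each reconstruction.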


Subject(s)
Algorithms , Cone-Beam Computed Tomography , Humans , Image Processing, Computer-Assisted , Phantoms, Imaging , Radionuclide Imaging
10.
Cancer Imaging ; 20(1): 38, 2020 Jun 09.
Article in English | MEDLINE | ID: mdl-32517801

ABSTRACT

Oncological diseases account for a significant portion of the burden on public healthcare systems with associated costs driven primarily by complex and long-lasting therapies. Through the visualization of patient-specific morphology and functional-molecular pathways, cancerous tissue can be detected and characterized non-invasively, so as to provide referring oncologists with essential information to support therapy management decisions. Following the onset of stand-alone anatomical and functional imaging, we witness a push towards integrating molecular image information through various methods, including anato-metabolic imaging (e.g., PET/CT), advanced MRI, optical or ultrasound imaging. This perspective paper highlights a number of key technological and methodological advances in imaging instrumentation related to anatomical, functional, molecular medicine and hybrid imaging, which is understood as the hardware-based combination of complementary anatomical and molecular imaging. These include novel detector technologies for ionizing radiation used in CT and nuclear medicine imaging, and novel system developments in MRI and optical as well as opto-acoustic imaging. We will also highlight new data processing methods for improved non-invasive tissue characterization. Following a general introduction to the role of imaging in oncology patient management, we introduce imaging methods with well-defined clinical applications and potential for clinical translation. For each modality, we report first on the status quo and then point to perceived technological and methodological advances in a subsequent "status go" section.
Considering the breadth and dynamics of these developments, this perspective ends with a critical reflection on where the authors, with the majority of them being imaging experts with a background in physics and engineering, believe imaging methods will be in a few years from now. Overall, methodological and technological medical imaging advances are geared towards increased image contrast, the derivation of reproducible quantitative parameters, an increase in volume sensitivity and a reduction in overall examination time. To ensure full translation to the clinic, this progress in technologies and instrumentation is complemented by advances in relevant acquisition and image-processing protocols and improved data analysis. To this end, we should accept diagnostic images as "data", and - through the wider adoption of advanced analysis, including machine learning approaches and a "big data" concept - move to the next stage of non-invasive tumour phenotyping. The scans we will be reading in 10 years from now will likely be composed of highly diverse multi-dimensional data from multiple sources, which mandate the use of advanced and interactive visualization and analysis platforms powered by Artificial Intelligence (AI) for real-time data handling by cross-specialty clinical experts with a domain knowledge that will need to go beyond that of plain imaging.


Subject(s)
Image Processing, Computer-Assisted/methods , Medical Oncology/trends , Multimodal Imaging/methods , Neoplasms/diagnostic imaging , Artificial Intelligence , Humans , Magnetic Resonance Imaging/methods , Medical Oncology/methods , Multimodal Imaging/trends , Radionuclide Imaging/methods , Ultrasonography/methods
11.
Front Neurosci ; 14: 252, 2020.
Article in English | MEDLINE | ID: mdl-32269510

ABSTRACT

In the past, determination of absolute values of cerebral metabolic rate of glucose (CMRGlc) in clinical routine was rarely carried out due to the invasive nature of arterial sampling. With the advent of combined PET/MR imaging technology, CMRGlc values can be obtained non-invasively, thereby providing the opportunity to take advantage of fully quantitative data in clinical routine. However, CMRGlc values display high physiological variability, presumably due to fluctuations in the intrinsic activity of the brain at rest. To reduce CMRGlc variability associated with these fluctuations, the objective of this study was to determine whether functional connectivity measures derived from resting-state fMRI (rs-fMRI) could be used to correct for these fluctuations in intrinsic brain activity. METHODS: We studied 10 healthy volunteers who underwent a test-retest dynamic [18F]FDG-PET study using a fully integrated PET/MR system (Siemens Biograph mMR). To validate the non-invasive derivation of an image-derived input function based on combined analysis of PET and MR data, arterial blood samples were obtained. Using the arterial input function (AIF), parametric images representing CMRGlc were determined using the Patlak graphical approach. Both directed functional connectivity (dFC) and undirected functional connectivity (uFC) were determined between nodes in six major networks (Default mode network, Salience, L/R Executive, Attention, and Sensory-motor network) using either a bivariate-correlation (R coefficient) or a Multi-Variate AutoRegressive (MVAR) model. In addition, the performance of a regional connectivity measure, the fractional amplitude of low frequency fluctuations (fALFF), was also investigated. RESULTS: The average intrasubject variability for CMRGlc values between test and retest was determined to be 14% ± 8%, with an average inter-subject variability of 25% at test and 15% at retest.
The average CMRGlc value (µmol/100 g/min) across all networks was 39 ± 10 at test and increased slightly to 43 ± 6 at retest. The R, MVAR and fALFF coefficients showed relatively large test-retest variability in comparison to the inter-subject variability, resulting in poor reliability (intraclass correlation in the range of 0.11-0.65). More importantly, no significant relationship was found between the R coefficients (for uFC), MVAR coefficients (for dFC) or fALFF and corresponding CMRGlc values for any of the six major networks. DISCUSSION: Measurement of functional connectivity within established brain networks did not provide a means to decrease the inter- or intrasubject variability of CMRGlc values. As such, our results indicate that connectivity measures derived from rs-fMRI acquired contemporaneously with PET imaging are not suited for correction of CMRGlc variability associated with intrinsic fluctuations of resting-state brain activity. Thus, given the observed substantial inter- and intrasubject variability of CMRGlc values, the relevance of absolute quantification for clinical routine is presently uncertain.

12.
J Nucl Med ; 61(2): 276-284, 2020 02.
Article in English | MEDLINE | ID: mdl-31375567

ABSTRACT

We describe a fully automated processing pipeline to support the noninvasive absolute quantification of the cerebral metabolic rate for glucose (CMRGlc) in a clinical setting. This pipeline takes advantage of "anatometabolic" information associated with fully integrated PET/MRI. Methods: Ten healthy volunteers (5 men and 5 women; 27 ± 7 y old; 70 ± 10 kg) underwent a test-retest 18F-FDG PET/MRI examination of the brain. The imaging protocol consisted of a 60-min PET list-mode acquisition with parallel MRI acquisitions, including 3-dimensional time-of-flight MR angiography, MRI navigators, and a T1-weighted MRI scan. State-of-the-art MRI-based attenuation correction was derived from T1-weighted MRI (pseudo-CT [pCT]). For validation purposes, a low-dose CT scan was also performed. Arterial blood samples were collected as the reference standard (arterial input function [AIF]). The developed pipeline allows the derivation of an image-derived input function (IDIF), which is subsequently used to create CMRGlc maps by means of a Patlak analysis. The pipeline also includes motion correction using the MRI navigator sequence as well as a novel partial-volume correction that accounts for background heterogeneity. Finally, CMRGlc maps are used to generate a normative database to facilitate the detection of metabolic abnormalities in future patient scans. To assess the performance of the developed pipeline, IDIFs extracted by both CT-based attenuation correction (CT-IDIF) and MRI-based attenuation correction (pCT-IDIF) were compared with the reference standard (AIF) using the absolute percentage difference between the areas under the curves as well as the absolute percentage difference in regional CMRGlc values. Results: The absolute percentage differences between the areas under the curves for CT-IDIF and pCT-IDIF were determined to be 1.4% ± 1.0% and 3.4% ± 2.6%, respectively.
The absolute percentage difference in regional CMRGlc values based on CT-IDIF and pCT-IDIF differed by less than 6% from the reference values obtained from the AIF. Conclusion: By taking advantage of the capabilities of fully integrated PET/MRI, we developed a fully automated computational pipeline that allows the noninvasive determination of regional CMRGlc values in a clinical setting. This methodology might facilitate the proliferation of fully quantitative imaging into the clinical arena and, as a result, might contribute to improved diagnostic efficacy.
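The Patlak analysis used to turn an input function into CMRGlc maps fits a straight line to transformed tissue and plasma curves; its slope, Ki, scales to CMRGlc via the plasma glucose level and the lumped constant. A minimal single-region sketch (the t* cutoff and sampling grid are illustrative assumptions, and the real pipeline operates voxel-wise):

```python
import numpy as np

def patlak_ki(t, tissue_tac, plasma_tac, t_star=20.0):
    """Estimate the Patlak slope Ki by linear regression of
    C_t(t)/C_p(t) against (integral_0^t C_p dtau)/C_p(t), restricted
    to times t >= t_star where the plot is expected to be linear."""
    t = np.asarray(t, float)
    ct = np.asarray(tissue_tac, float)
    cp = np.asarray(plasma_tac, float)
    # running trapezoidal integral of the plasma input function
    cp_int = np.concatenate(
        ([0.0], np.cumsum((cp[1:] + cp[:-1]) / 2.0 * np.diff(t))))
    mask = (t >= t_star) & (cp > 0)
    x = cp_int[mask] / cp[mask]
    y = ct[mask] / cp[mask]
    ki, _intercept = np.polyfit(x, y, 1)
    return float(ki)

# CMRGlc is then Ki * (plasma glucose concentration / lumped constant).
```

Applied per voxel with the IDIF as the plasma curve, this yields the parametric CMRGlc maps described above.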


Subject(s)
Brain/diagnostic imaging , Brain/metabolism , Glucose/metabolism , Magnetic Resonance Imaging , Multimodal Imaging , Positron-Emission Tomography , Adult , Female , Humans , Male