Results 1 - 20 of 132
1.
Res Sq ; 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38978576

ABSTRACT

Over 85 million computed tomography (CT) scans are performed annually in the US, of which approximately one quarter focus on the abdomen. Given the current shortage of both general and specialized radiologists, there is a large impetus to use artificial intelligence to alleviate the burden of interpreting these complex imaging studies while simultaneously using the images to extract novel physiological insights. Prior state-of-the-art approaches for automated medical image interpretation leverage vision language models (VLMs) that utilize both the image and the corresponding textual radiology reports. However, current medical VLMs are generally limited to 2D images and short reports. To overcome these shortcomings for abdominal CT interpretation, we introduce Merlin - a 3D VLM that leverages both structured electronic health records (EHR) and unstructured radiology reports for pretraining without requiring additional manual annotations. We train Merlin using a high-quality clinical dataset of paired CT scans (6+ million images from 15,331 CTs), EHR diagnosis codes (1.8+ million codes), and radiology reports (6+ million tokens). We comprehensively evaluate Merlin on 6 task types and 752 individual tasks. The non-adapted (off-the-shelf) tasks include zero-shot findings classification (31 findings), phenotype classification (692 phenotypes), and zero-shot cross-modal retrieval (image to findings and image to impressions), while model-adapted tasks include 5-year chronic disease prediction (6 diseases), radiology report generation, and 3D semantic segmentation (20 organs). We perform internal validation on a test set of 5,137 CTs, and external validation on 7,000 clinical CTs and on two public CT datasets (VerSe, TotalSegmentator). Beyond these clinically relevant evaluations, we assess the efficacy of various network architectures and training strategies to show that Merlin performs favorably compared with existing task-specific baselines.
We derive data scaling laws to empirically assess training data needs for requisite downstream task performance. Furthermore, unlike conventional VLMs that require hundreds of GPUs for training, we perform all training on a single GPU. This computationally efficient design can help democratize foundation model training, especially for health systems with compute constraints. We plan to release our trained models, code, and dataset, pending manual removal of all protected health information.
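Data scaling laws of the kind derived above are typically obtained by fitting a power law, error ≈ a·N^(−b), to downstream performance measured at several training-set sizes; the fit is linear in log-log space. A minimal sketch with made-up data points (none of the numbers below come from the paper):

```python
import numpy as np

# Illustrative (made-up) downstream error at increasing training-set sizes;
# none of these numbers come from the paper.
n_samples = np.array([1_000, 2_000, 5_000, 10_000, 15_000])
error = np.array([0.40, 0.33, 0.26, 0.21, 0.19])

# A power law error = a * N**(-b) is linear in log-log space:
#   log(error) = log(a) - b * log(N)
slope, intercept = np.polyfit(np.log(n_samples), np.log(error), 1)
a, b = np.exp(intercept), -slope

def predicted_error(n):
    """Extrapolate the fitted scaling law to a new dataset size."""
    return a * n ** (-b)

print(f"fitted exponent b = {b:.3f}")
print(f"predicted error at N=30,000: {predicted_error(30_000):.3f}")
```

Such an extrapolation is what lets one estimate how much additional training data a requisite downstream performance would need.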

2.
AJR Am J Roentgenol ; 2024 May 29.
Article in English | MEDLINE | ID: mdl-38809123

ABSTRACT

Artificial intelligence (AI) is transforming medical imaging of adult patients. However, its utilization in pediatric oncology imaging remains constrained, in part due to the inherent data scarcity associated with childhood cancers. Pediatric cancers are rare, and imaging technologies are evolving rapidly, leading to insufficient data of a particular type to effectively train these algorithms. The small market size of pediatrics compared to adults could also contribute to this challenge, as market size is a driver of commercialization. This article provides an overview of the current state of AI applications for pediatric cancer imaging, including applications for medical image acquisition, processing, reconstruction, segmentation, diagnosis, staging, and treatment response monitoring. While current developments are promising, impediments due to diverse anatomies of growing children and nonstandardized imaging protocols have led to limited clinical translation thus far. Opportunities include leveraging reconstruction algorithms to achieve accelerated low-dose imaging and automating the generation of metric-based staging and treatment monitoring scores. Transfer-learning of adult-based AI models to pediatric cancers, multi-institutional data sharing, and ethical data privacy practices for pediatric patients with rare cancers will be keys to unlocking AI's full potential for clinical translation and improved outcomes for these young patients.

3.
Article in English | MEDLINE | ID: mdl-38814528

ABSTRACT

PURPOSE: AI-assisted techniques for lesion registration and segmentation have the potential to make CT-based tumor follow-up assessment faster and less reader-dependent. However, empirical evidence on the advantages of AI-assisted volumetric segmentation for lymph node and soft tissue metastases in follow-up CT scans is lacking. The aim of this study was to assess the efficiency, quality, and inter-reader variability of an AI-assisted workflow for volumetric segmentation of lymph node and soft tissue metastases in follow-up CT scans. Three hypotheses were tested: (H1) Assessment time for follow-up lesion segmentation is reduced using an AI-assisted workflow. (H2) The quality of the AI-assisted segmentation is non-inferior to the quality of fully manual segmentation. (H3) The inter-reader variability of the resulting segmentations is reduced with AI assistance. MATERIALS AND METHODS: The study retrospectively analyzed 126 lymph nodes and 135 soft tissue metastases from 55 patients with stage IV melanoma. Three radiologists from two institutions performed both AI-assisted and manual segmentation, and the results were statistically analyzed and compared to a manual segmentation reference standard. RESULTS: AI-assisted segmentation reduced user interaction time significantly by 33% (222 s vs. 336 s), achieved similar Dice scores (0.80-0.84 vs. 0.81-0.82) and decreased inter-reader variability (median Dice 0.85-1.0 vs. 0.80-0.82; ICC 0.84 vs. 0.80), compared to manual segmentation. CONCLUSION: The findings of this study support the use of AI-assisted registration and volumetric segmentation for lymph node and soft tissue metastases in follow-up CT scans. The AI-assisted workflow achieved significant time savings, similar segmentation quality, and reduced inter-reader variability compared to manual segmentation.
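The Dice scores used above to compare AI-assisted and manual segmentations measure the volumetric overlap between two binary masks. A minimal sketch with toy masks (the values are illustrative, not the study's):

```python
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), in [0, 1]."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total else 1.0

# Two toy "segmentations" of the same lesion by different readers.
reader1 = np.zeros((8, 8), dtype=bool)
reader1[2:6, 2:6] = True            # 16-pixel square
reader2 = np.zeros((8, 8), dtype=bool)
reader2[3:7, 2:6] = True            # same square, shifted down by one row

print(f"Dice = {dice(reader1, reader2):.2f}")
```

The same function applies per lesion; inter-reader variability can then be summarized as the distribution of pairwise Dice values across readers.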

4.
Nat Med ; 30(4): 1134-1142, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38413730

ABSTRACT

Analyzing vast textual data and summarizing key information from electronic health records imposes a substantial burden on how clinicians allocate their time. Although large language models (LLMs) have shown promise in natural language processing (NLP) tasks, their effectiveness on a diverse range of clinical summarization tasks remains unproven. Here we applied adaptation methods to eight LLMs, spanning four distinct clinical summarization tasks: radiology reports, patient questions, progress notes and doctor-patient dialogue. Quantitative assessments with syntactic, semantic and conceptual NLP metrics reveal trade-offs between models and adaptation methods. A clinical reader study with 10 physicians evaluated summary completeness, correctness and conciseness; in most cases, summaries from our best-adapted LLMs were deemed either equivalent (45%) or superior (36%) compared with summaries from medical experts. The ensuing safety analysis highlights challenges faced by both LLMs and medical experts, as we connect errors to potential medical harm and categorize types of fabricated information. Our research provides evidence of LLMs outperforming medical experts in clinical text summarization across multiple tasks. This suggests that integrating LLMs into clinical workflows could alleviate documentation burden, allowing clinicians to focus more on patient care.


Subjects
Documentation, Semantics, Humans, Electronic Health Records, Natural Language Processing, Physician-Patient Relations
5.
Magn Reson Med ; 92(1): 289-302, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38282254

ABSTRACT

PURPOSE: To estimate pixel-wise predictive uncertainty for deep learning-based MR image reconstruction and to examine the impact of domain shifts and architecture robustness. METHODS: Uncertainty prediction could provide a measure for robustness of deep learning (DL)-based MR image reconstruction from undersampled data. DL methods bear the risk of inducing reconstruction errors such as in-painting of unrealistic structures or missing pathologies. These errors may be obscured by the visual realism of DL reconstructions and thus remain undiscovered. Furthermore, most methods are task-agnostic and not well calibrated to domain shifts. We propose a strategy that estimates aleatoric (data) and epistemic (model) uncertainty, which entails training a deep ensemble (epistemic) with a negative log-likelihood (aleatoric) loss in addition to the conventionally applied loss terms. The proposed procedure can be paired with any DL reconstruction, enabling investigations of their predictive uncertainties on a pixel level. Five different architectures were investigated on the fastMRI database. The impact of in-distribution and out-of-distribution data (changes to undersampling pattern, imaging contrast, imaging orientation, anatomy, and pathology) on the examined uncertainty was explored. RESULTS: Predictive uncertainty could be captured and showed good correlation with the normalized mean squared error. Uncertainty was primarily focused along the aliased anatomies and on hyperintense and hypointense regions. The proposed uncertainty measure was able to detect disease prevalence shifts. Distinct predictive uncertainty patterns were observed for changing network architectures. CONCLUSION: The proposed approach enables aleatoric and epistemic uncertainty prediction for DL-based MR reconstruction with an interpretable examination on a pixel level.
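The deep-ensemble strategy described above yields a standard per-pixel decomposition: aleatoric uncertainty is the ensemble mean of the predicted variances, epistemic uncertainty the variance of the predicted means. A sketch with random stand-in network outputs (not the paper's models or data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in outputs of an ensemble of M networks: each member predicts a
# per-pixel mean and a per-pixel variance (the variance head is what a
# Gaussian log-likelihood loss trains). Values here are random placeholders.
M, H, W = 5, 4, 4
member_means = rng.normal(loc=1.0, scale=0.1, size=(M, H, W))
member_vars = rng.uniform(0.01, 0.05, size=(M, H, W))

# Standard deep-ensemble decomposition of predictive uncertainty:
aleatoric = member_vars.mean(axis=0)   # average predicted data noise
epistemic = member_means.var(axis=0)   # disagreement between members
total = aleatoric + epistemic          # total predictive variance per pixel

print(f"mean aleatoric = {aleatoric.mean():.4f}")
print(f"mean epistemic = {epistemic.mean():.4f}")
```

Because the decomposition is computed per pixel, the resulting maps can be overlaid on the reconstruction for exactly the kind of pixel-level inspection the abstract describes.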


Subjects
Deep Learning, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Humans, Magnetic Resonance Imaging/methods, Image Processing, Computer-Assisted/methods, Uncertainty, Algorithms, Brain/diagnostic imaging, Databases, Factual
6.
PLoS One ; 19(1): e0296253, 2024.
Article in English | MEDLINE | ID: mdl-38180971

ABSTRACT

BACKGROUND: Checkpoint inhibitors have drastically improved the therapy of patients with advanced melanoma. 18F-FDG-PET/CT parameters might act as biomarkers for response and survival and thus can identify patients that do not benefit from immunotherapy. However, little literature exists on the association of baseline 18F-FDG-PET/CT parameters with progression free survival (PFS), best overall response (BOR), and overall survival (OS). MATERIALS AND METHODS: Using a whole tumor volume segmentation approach, we investigated in a retrospective registry study (n = 50) whether pre-treatment 18F-FDG-PET/CT parameters of three subgroups (tumor burden, tumor glucose uptake and non-tumoral hematopoietic tissue metabolism), can act as biomarkers for the primary endpoints PFS and BOR as well as for the secondary endpoint OS. RESULTS: Compared to the sole use of clinical parameters, baseline 18F-FDG-PET/CT parameters did not significantly improve a Cox proportional-hazard model for PFS (C-index/AIC: 0.70/225.17 and 0.68/223.54, respectively; p = 0.14). A binomial logistic regression analysis for BOR was not statistically significant (χ2(15) = 16.44, p = 0.35), with a low amount of explained variance (Nagelkerke's R2 = 0.38). Mean FDG uptake of the spleen contributed significantly to a Cox proportional-hazard model for OS (HR 3.55, p = 0.04). CONCLUSIONS: The present study could not confirm the capability of the pre-treatment 18F-FDG-PET/CT parameters tumor burden, tumor glucose uptake and non-tumoral hematopoietic tissue metabolism to act as biomarkers for PFS and BOR in metastatic melanoma patients receiving first-line immunotherapy. The documented potential of 18F-FDG uptake by immune-mediating tissues such as the spleen to act as a biomarker for OS has been reproduced.
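The C-index reported above for the Cox models is Harrell's concordance index: the fraction of comparable patient pairs in which the model assigns the higher risk to the patient who experiences the event first. A small illustration in pure Python with invented follow-up data:

```python
from itertools import combinations

def c_index(times, events, risk_scores):
    """Harrell's concordance index: among comparable patient pairs, the
    fraction in which the higher predicted risk belongs to the patient
    who experiences the event earlier (ties in score count half)."""
    concordant, comparable = 0.0, 0
    for i, j in combinations(range(len(times)), 2):
        if times[i] == times[j]:
            continue
        first, second = (i, j) if times[i] < times[j] else (j, i)
        if not events[first]:
            continue  # earlier subject censored -> pair not comparable
        comparable += 1
        if risk_scores[first] > risk_scores[second]:
            concordant += 1
        elif risk_scores[first] == risk_scores[second]:
            concordant += 0.5
    return concordant / comparable

# Invented follow-up data: time (months), event indicator, model risk score.
times = [5, 8, 12, 20, 30]
events = [1, 1, 0, 1, 0]
risks = [2.1, 0.9, 1.0, 1.8, 0.3]
print(f"C-index = {c_index(times, events, risks):.2f}")
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect concordance, which is why values around 0.68-0.70, as in the study, indicate only moderate discrimination.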


Subjects
Melanoma, Neoplasms, Second Primary, Humans, Melanoma/diagnostic imaging, Melanoma/drug therapy, Fluorodeoxyglucose F18, Positron Emission Tomography Computed Tomography, Progression-Free Survival, Retrospective Studies, Immunotherapy, Biomarkers, Glucose
7.
Res Sq ; 2023 Oct 30.
Article in English | MEDLINE | ID: mdl-37961377

ABSTRACT

Sifting through vast textual data and summarizing key information from electronic health records (EHR) imposes a substantial burden on how clinicians allocate their time. Although large language models (LLMs) have shown immense promise in natural language processing (NLP) tasks, their efficacy on a diverse range of clinical summarization tasks has not yet been rigorously demonstrated. In this work, we apply domain adaptation methods to eight LLMs, spanning six datasets and four distinct clinical summarization tasks: radiology reports, patient questions, progress notes, and doctor-patient dialogue. Our thorough quantitative assessment reveals trade-offs between models and adaptation methods in addition to instances where recent advances in LLMs may not improve results. Further, in a clinical reader study with ten physicians, we show that summaries from our best-adapted LLMs are preferable to human summaries in terms of completeness and correctness. Our ensuing qualitative analysis highlights challenges faced by both LLMs and human experts. Lastly, we correlate traditional quantitative NLP metrics with reader study scores to enhance our understanding of how these metrics align with physician preferences. Our research marks the first evidence of LLMs outperforming human experts in clinical text summarization across multiple tasks. This implies that integrating LLMs into clinical workflows could alleviate documentation burden, empowering clinicians to focus more on personalized patient care and the inherently human aspects of medicine.

8.
PLoS One ; 18(11): e0292993, 2023.
Article in English | MEDLINE | ID: mdl-37934735

ABSTRACT

Aging is an important risk factor for disease, leading to morphological changes that can be assessed on computed tomography (CT) scans. We propose a deep learning model for automated age estimation based on CT scans of the thorax and abdomen generated in a clinical routine setting. These predictions could serve as imaging biomarkers to estimate a "biological" age that better reflects a patient's true physical condition. A pre-trained ResNet-18 model was modified to predict chronological age as well as to quantify its aleatoric uncertainty. The model was trained using 1653 non-pathological CT scans of the thorax and abdomen of subjects aged between 20 and 85 years in a 5-fold cross-validation scheme. Generalization performance as well as robustness and reliability were assessed on a publicly available test dataset consisting of thorax-abdomen CT scans of 421 subjects. Score-CAM saliency maps were generated for interpretation of model outputs. We achieved a mean absolute error of 5.76 ± 5.17 years with a mean uncertainty of 5.01 ± 1.44 years after 5-fold cross-validation. A mean absolute error of 6.50 ± 5.17 years with a mean uncertainty of 6.39 ± 1.46 years was obtained on the test dataset. CT-based age estimation accuracy was largely uniform across all age groups and between male and female subjects. The generated saliency maps highlighted especially the lumbar spine and abdominal aorta. This study demonstrates that accurate and generalizable deep learning-based automated age estimation is feasible using clinical CT image data. The trained model proved to be robust and reliable. Methods of uncertainty estimation and saliency analysis improved the interpretability.
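The headline metric above, mean absolute error stratified by age group, is straightforward to compute; a sketch with simulated predictions whose error level loosely mirrors the reported ~6 years (the data are synthetic, not the study's):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic predictions of a CT age-estimation model; the ~6-year error
# level loosely mirrors the reported accuracy, but no study data is used.
true_age = rng.uniform(20, 85, size=400)
pred_age = true_age + rng.normal(0.0, 6.0, size=400)

abs_err = np.abs(pred_age - true_age)
print(f"overall MAE = {abs_err.mean():.2f} years")

# MAE stratified by age group, to check for uniform accuracy across ages.
bin_edges = [20, 40, 60, 85]
for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
    sel = (true_age >= lo) & (true_age < hi)
    print(f"MAE for ages {lo}-{hi}: {abs_err[sel].mean():.2f} years")
```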


Subjects
Deep Learning, Humans, Male, Adult, Female, Young Adult, Middle Aged, Aged, Aged, 80 and over, Reproducibility of Results, Image Processing, Computer-Assisted/methods, Tomography, X-Ray Computed, Abdomen/diagnostic imaging, Thorax/diagnostic imaging
9.
Diagnostics (Basel) ; 13(20)2023 Oct 14.
Article in English | MEDLINE | ID: mdl-37892030

ABSTRACT

BACKGROUND: The aim of this study was to investigate whether the combination of radiomics and clinical parameters in a machine-learning model offers additive information compared with the use of only clinical parameters in predicting the best response, progression-free survival after six months, as well as overall survival after six and twelve months in patients with stage IV malignant melanoma undergoing first-line targeted therapy. METHODS: A baseline machine-learning model using clinical variables (demographic parameters and tumor markers) was compared with an extended model using clinical variables and radiomic features of the whole tumor burden, utilizing repeated five-fold cross-validation. Baseline CTs of 91 stage IV malignant melanoma patients, all treated in the same university hospital, were identified in the Central Malignant Melanoma Registry and all metastases were volumetrically segmented (n = 4727). RESULTS: Compared with the baseline model, the extended radiomics model did not add significantly more information to the best-response prediction (AUC [95% CI] 0.548 (0.188, 0.808) vs. 0.487 (0.139, 0.743)), the prediction of PFS after six months (AUC [95% CI] 0.699 (0.436, 0.958) vs. 0.604 (0.373, 0.867)), or the overall survival prediction after six and twelve months (AUC [95% CI] 0.685 (0.188, 0.967) vs. 0.766 (0.433, 1.000) and AUC [95% CI] 0.554 (0.163, 0.781) vs. 0.616 (0.271, 1.000), respectively). CONCLUSIONS: The results showed no additional value of baseline whole-body CT radiomics for best-response prediction, progression-free survival prediction for six months, or six-month and twelve-month overall survival prediction for stage IV melanoma patients receiving first-line targeted therapy. These results need to be validated in a larger cohort.
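The AUC values compared above can be computed directly as the probability that a random positive case outranks a random negative one (the Mann-Whitney formulation). A toy example with invented labels and scores:

```python
def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney formulation: the probability that a
    randomly chosen positive scores higher than a randomly chosen
    negative, with ties counting half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented best-response labels and model scores.
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.6, 0.4, 0.7, 0.3, 0.2, 0.1]
print(f"AUC = {roc_auc(labels, scores):.3f}")
```

An AUC near 0.5, as in several of the study's radiomics models, means the scores rank positives no better than chance, which is why the wide confidence intervals there are decisive.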

10.
Nuklearmedizin ; 62(5): 296-305, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37802057

ABSTRACT

BACKGROUND: Artificial intelligence (AI) applications have become increasingly relevant across a broad spectrum of settings in medical imaging. Due to the large amount of imaging data that is generated in oncological hybrid imaging, AI applications are desirable for lesion detection and characterization in primary staging, therapy monitoring, and recurrence detection. Given the rapid developments in machine learning (ML) and deep learning (DL) methods, the role of AI will have significant impact on the imaging workflow and will eventually improve clinical decision making and outcomes. METHODS AND RESULTS: The first part of this narrative review discusses current research with an introduction to artificial intelligence in oncological hybrid imaging and key concepts in data science. The second part reviews relevant examples with a focus on applications in oncology as well as discussion of challenges and current limitations. CONCLUSION: AI applications have the potential to leverage the diagnostic data stream with high efficiency and depth to facilitate automated lesion detection, characterization, and therapy monitoring to ultimately improve quality and efficiency throughout the medical imaging workflow. The goal is to generate reproducible, structured, quantitative diagnostic data for evidence-based therapy guidance in oncology. However, significant challenges remain regarding application development, benchmarking, and clinical implementation. KEY POINTS:
· Hybrid imaging generates a large amount of multimodality medical imaging data with high complexity and depth.
· Advanced tools are required to enable fast and cost-efficient processing along the whole radiology value chain.
· AI applications promise to facilitate the assessment of oncological disease in hybrid imaging with high quality and efficiency for lesion detection, characterization, and response assessment. The goal is to generate reproducible, structured, quantitative diagnostic data for evidence-based oncological therapy guidance.
· Selected applications in three oncological entities (lung, prostate, and neuroendocrine tumors) demonstrate how AI algorithms may impact imaging-based tasks in hybrid imaging and potentially guide clinical decision making.


Subjects
Artificial Intelligence, Radiology, Machine Learning, Multimodal Imaging
11.
Radiat Oncol ; 18(1): 148, 2023 Sep 06.
Article in English | MEDLINE | ID: mdl-37674171

ABSTRACT

BACKGROUND: Target volume definition for curative radiochemotherapy in head and neck cancer is crucial since the predominant recurrence pattern is local. Additional diagnostic imaging like MRI is increasingly used, yet it is usually hampered by different patient positioning compared to radiotherapy. In this study, we investigated the impact of diagnostic MRI in treatment position for target volume delineation. METHODS: We prospectively analyzed patients who were suitable and agreed to undergo an MRI in treatment position with immobilization devices prior to radiotherapy planning from 2017 to 2019. Target volume delineation for the primary tumor was first performed using all available information except for the MRI and subsequently with additional consideration of the co-registered MRI. The derived volumes were compared by subjective visual judgment and by quantitative mathematical methods. RESULTS: Sixteen patients were included and underwent the planning CT, MRI and subsequent definitive radiochemotherapy. In 69% of the patients, there were visually relevant changes to the gross tumor volume (GTV) by use of the MRI. In 44%, the GTV_MRI would not have been covered completely by the planning target volume (PTV) of the CT-only contour. Yet, median Hausdorff and DSI values did not reflect these differences. The 3-year local control rate was 94%. CONCLUSIONS: Adding a diagnostic MRI in RT treatment position is feasible and results in relevant changes in target volumes in the majority of patients.
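Hausdorff-type distances such as those used above summarize contour disagreement; the 95th-percentile variant discards the worst 5% of surface distances and is therefore less outlier-sensitive than the maximum. A sketch on toy 2D contours (illustrative coordinates only):

```python
import numpy as np

def hd95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between two point sets
    (e.g. contour voxels of two delineations)."""
    a = np.asarray(points_a, float)
    b = np.asarray(points_b, float)
    # Pairwise Euclidean distances between all points of A and B.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Nearest-neighbour distance from each point of A to B, and vice versa.
    d_ab, d_ba = d.min(axis=1), d.min(axis=0)
    return np.percentile(np.concatenate([d_ab, d_ba]), 95)

# Two toy contours, one shifted by 1 mm along x.
contour_1 = [(x, 0.0) for x in range(10)]
contour_2 = [(x + 1.0, 0.0) for x in range(10)]
print(f"HD95 = {hd95(contour_1, contour_2):.2f} mm")
```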


Subjects
Head and Neck Neoplasms, Radiation Oncology, Humans, Magnetic Resonance Imaging, Chemoradiotherapy, Head and Neck Neoplasms/diagnostic imaging, Head and Neck Neoplasms/radiotherapy, Patient Positioning
12.
Nat Biomed Eng ; 7(8): 1014-1027, 2023 08.
Article in English | MEDLINE | ID: mdl-37277483

ABSTRACT

In oncology, intratumoural heterogeneity is closely linked with the efficacy of therapy, and can be partially characterized via tumour biopsies. Here we show that intratumoural heterogeneity can be characterized spatially via phenotype-specific, multi-view learning classifiers trained with data from dynamic positron emission tomography (PET) and multiparametric magnetic resonance imaging (MRI). Classifiers trained with PET-MRI data from mice with subcutaneous colon cancer quantified phenotypic changes resulting from an apoptosis-inducing targeted therapeutic and provided biologically relevant probability maps of tumour-tissue subtypes. When applied to retrospective PET-MRI data of patients with liver metastases from colorectal cancer, the trained classifiers characterized intratumoural tissue subregions in agreement with tumour histology. The spatial characterization of intratumoural heterogeneity in mice and patients via multimodal, multiparametric imaging aided by machine-learning may facilitate applications in precision oncology.


Subjects
Multiparametric Magnetic Resonance Imaging, Neoplasms, Animals, Mice, Magnetic Resonance Imaging/methods, Retrospective Studies, Precision Medicine, Positron-Emission Tomography/methods, Machine Learning
13.
Radiol Artif Intell ; 5(3): e220246, 2023 May.
Article in English | MEDLINE | ID: mdl-37293349

ABSTRACT

Purpose: To develop a deep learning approach that enables ultra-low-dose, 1% of the standard clinical dosage (3 MBq/kg), ultrafast whole-body PET reconstruction in cancer imaging. Materials and Methods: In this Health Insurance Portability and Accountability Act-compliant study, serial fluorine 18-labeled fluorodeoxyglucose PET/MRI scans of pediatric patients with lymphoma were retrospectively collected from two cross-continental medical centers between July 2015 and March 2020. Global similarity between baseline and follow-up scans was used to develop Masked-LMCTrans, a longitudinal multimodality coattentional convolutional neural network (CNN) transformer that provides interaction and joint reasoning between serial PET/MRI scans from the same patient. Image quality of the reconstructed ultra-low-dose PET was evaluated in comparison with a simulated standard 1% PET image. The performance of Masked-LMCTrans was compared with that of CNNs with pure convolution operations (classic U-Net family), and the effect of different CNN encoders on feature representation was assessed. Statistical differences in the structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), and visual information fidelity (VIF) were assessed by two-sample testing with the Wilcoxon signed rank test. Results: The study included 21 patients (mean age, 15 years ± 7 [SD]; 12 female) in the primary cohort and 10 patients (mean age, 13 years ± 4; six female) in the external test cohort. Masked-LMCTrans-reconstructed follow-up PET images demonstrated significantly less noise and more detailed structure compared with simulated 1% extremely ultra-low-dose PET images. SSIM, PSNR, and VIF were significantly higher for Masked-LMCTrans-reconstructed PET (P < .001), with improvements of 15.8%, 23.4%, and 186%, respectively.
Conclusion: Masked-LMCTrans achieved high image quality in the reconstruction of 1% ultra-low-dose whole-body PET images.
Keywords: Pediatrics, PET, Convolutional Neural Network (CNN), Dose Reduction
Supplemental material is available for this article. © RSNA, 2023.
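Of the image-quality metrics above, PSNR is the simplest to reproduce: it is a log-scaled inverse of the mean squared error against the reference. A sketch with synthetic images standing in for reconstructions:

```python
import numpy as np

def psnr(reference, image, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(image, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(0)
reference = rng.random((64, 64))  # stand-in ground-truth image in [0, 1]
# Two synthetic "reconstructions" with different residual noise levels.
noisy = np.clip(reference + rng.normal(0, 0.05, reference.shape), 0, 1)
denoised = np.clip(reference + rng.normal(0, 0.02, reference.shape), 0, 1)

print(f"noisy    PSNR = {psnr(reference, noisy):.1f} dB")
print(f"denoised PSNR = {psnr(reference, denoised):.1f} dB")
```

SSIM and VIF are perception-oriented rather than purely pixel-wise, which is why studies typically report all three together.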

14.
Invest Radiol ; 58(5): 346-354, 2023 05 01.
Article in English | MEDLINE | ID: mdl-36729536

ABSTRACT

OBJECTIVES: The UK Biobank (UKBB) and German National Cohort (NAKO) are among the largest cohort studies, capturing a wide range of health-related data from the general population, including comprehensive magnetic resonance imaging (MRI) examinations. The purpose of this study was to demonstrate how MRI data from these large-scale studies can be jointly analyzed and to derive comprehensive quantitative image-based phenotypes across the general adult population. MATERIALS AND METHODS: Image-derived features of abdominal organs (volumes of liver, spleen, kidneys, and pancreas; volumes of kidney hilum adipose tissue; and fat fractions of liver and pancreas) were extracted from T1-weighted Dixon MRI data of 17,996 participants of UKBB and NAKO based on quality-controlled deep learning generated organ segmentations. To enable valid cross-study analysis, we first analyzed the data generating process using methods of causal discovery. We subsequently harmonized data from UKBB and NAKO using the ComBat approach for batch effect correction. We finally performed quantile regression on harmonized data across studies providing quantitative models for the variation of image-derived features stratified for sex and dependent on age, height, and weight. RESULTS: Data from 8791 UKBB participants (49.9% female; age, 63 ± 7.5 years) and 9205 NAKO participants (49.1% female, age: 51.8 ± 11.4 years) were analyzed. Analysis of the data generating process revealed direct effects of age, sex, height, weight, and the data source (UKBB vs NAKO) on image-derived features. Correction of data source-related effects resulted in markedly improved alignment of image-derived features between UKBB and NAKO. Cross-study analysis on harmonized data revealed comprehensive quantitative models for the phenotypic variation of abdominal organs across the general adult population. 
CONCLUSIONS: Cross-study analysis of MRI data from UKBB and NAKO as proposed in this work can be helpful for future joint data analyses across cohorts linking genetic, environmental, and behavioral risk factors to MRI-derived phenotypes and provide reference values for clinical diagnostics.
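The ComBat harmonization step described above can be approximated, for intuition, by a plain location-scale correction that aligns each study's mean and variance to the pooled data; full ComBat additionally shrinks the batch parameters with empirical Bayes and preserves biological covariates. A sketch with simulated liver volumes (the offsets are invented, not the cohorts' values):

```python
import numpy as np

def harmonize(values, batch):
    """Simplified location-scale batch correction: align each batch's mean
    and variance to the pooled values. Full ComBat additionally applies
    empirical Bayes shrinkage of the batch parameters and keeps covariates."""
    values = np.asarray(values, float)
    batch = np.asarray(batch)
    grand_mean, grand_std = values.mean(), values.std()
    out = np.empty_like(values)
    for b in np.unique(batch):
        sel = batch == b
        standardized = (values[sel] - values[sel].mean()) / values[sel].std()
        out[sel] = standardized * grand_std + grand_mean
    return out

rng = np.random.default_rng(0)
# Simulated liver volumes (L) with an invented offset between "studies".
ukbb = rng.normal(1.50, 0.20, 500)
nako = rng.normal(1.65, 0.25, 500)   # shifted scanner/protocol effect
values = np.concatenate([ukbb, nako])
batch = np.array(["ukbb"] * 500 + ["nako"] * 500)

corrected = harmonize(values, batch)
gap_before = abs(ukbb.mean() - nako.mean())
gap_after = abs(corrected[:500].mean() - corrected[500:].mean())
print(f"batch mean gap: {gap_before:.3f} L -> {gap_after:.3f} L")
```

Removing the data-source effect this way is what makes the downstream quantile regression across both cohorts valid.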


Subjects
Biological Specimen Banks, Magnetic Resonance Imaging, Humans, Female, Male, Magnetic Resonance Imaging/methods, Cohort Studies, Abdomen/diagnostic imaging, United Kingdom
15.
Eur J Nucl Med Mol Imaging ; 50(5): 1337-1350, 2023 04.
Article in English | MEDLINE | ID: mdl-36633614

ABSTRACT

PURPOSE: To provide a holistic and complete comparison of the five most advanced AI models in the augmentation of low-dose 18F-FDG PET data over the entire dose reduction spectrum. METHODS: In this multicenter study, five AI models were investigated for restoring low-count whole-body PET/MRI, covering convolutional benchmarks - U-Net, enhanced deep super-resolution network (EDSR), generative adversarial network (GAN) - and the most cutting-edge image reconstruction transformer models in computer vision to date - Swin transformer image restoration network (SwinIR) and EDSR-ViT (vision transformer). The models were evaluated against six groups of count levels representing the simulated 75%, 50%, 25%, 12.5%, 6.25%, and 1% (extremely ultra-low-count) of the clinical standard 3 MBq/kg 18F-FDG dose. The comparisons were performed on two independent cohorts - (1) a primary cohort from Stanford University and (2) a cross-continental external validation cohort from Tübingen University - in order to ensure the findings are generalizable. A total of 476 original count and simulated low-count whole-body PET/MRI scans were incorporated into this analysis. RESULTS: For low-count PET restoration on the primary cohort, the mean structural similarity index (SSIM) scores for dose 6.25% were 0.898 (95% CI, 0.887-0.910) for EDSR, 0.893 (0.881-0.905) for EDSR-ViT, 0.873 (0.859-0.887) for GAN, 0.885 (0.873-0.898) for U-Net, and 0.910 (0.900-0.920) for SwinIR. Subsequently, the performance of SwinIR and U-Net was also evaluated at each simulated radiotracer dose level. Using the primary Stanford cohort, the mean diagnostic image quality (DIQ; 5-point Likert scale) scores of SwinIR restoration were 5 (SD, 0) for dose 75%, 4.50 (0.535) for dose 50%, 3.75 (0.463) for dose 25%, 3.25 (0.463) for dose 12.5%, 4 (0.926) for dose 6.25%, and 2.5 (0.534) for dose 1%.
CONCLUSION: Compared to the low-count PET images, which were nearly or entirely nondiagnostic at the higher dose reduction levels (up to 6.25%), both SwinIR and U-Net significantly improved the diagnostic quality of PET images. A radiotracer dose reduction to 1% of the current clinical standard radiotracer dose remains out of reach for current AI techniques.


Subjects
Artificial Intelligence, Fluorodeoxyglucose F18, Humans, Deprescriptions, Positron-Emission Tomography/methods, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods
16.
Comput Med Imaging Graph ; 104: 102174, 2023 03.
Article in English | MEDLINE | ID: mdl-36640485

ABSTRACT

Medical image segmentation has seen significant progress through the use of supervised deep learning. Hereby, large annotated datasets were employed to reliably segment anatomical structures. To reduce the requirement for annotated training data, self-supervised pre-training strategies on non-annotated data were designed. Especially contrastive learning schemes operating on dense pixel-wise representations have been introduced as an effective tool. In this work, we expand on this strategy and leverage inherent anatomical similarities in medical imaging data. We apply our approach to the task of semantic segmentation in a semi-supervised setting with limited amounts of annotated volumes. Trained alongside a segmentation loss in one single training stage, a contrastive loss aids to differentiate between salient anatomical regions that conform to the available annotations. Our approach builds upon the work of Jabri et al. (2020), who proposed cyclical contrastive random walks (CCRW) for self-supervision on palindromes of video frames. We adapt this scheme to operate on entries of paired embedded image slices. Using paths of cyclical random walks bypasses the need for negative samples, as commonly used in contrastive approaches, enabling the algorithm to discriminate among relevant salient (anatomical) regions implicitly. Further, a multi-level supervision strategy is employed, ensuring adequate representations of local and global characteristics of anatomical structures. The effectiveness of reducing the amount of required annotations is shown on three MRI datasets. A median increase of 8.01 and 5.90 pp in the Dice Similarity Coefficient (DSC) compared to our baseline could be achieved across all three datasets in the case of one and two available annotated examples per dataset.


Subjects
Algorithms , Image Processing, Computer-Assisted , Supervised Machine Learning
17.
Clin Transl Radiat Oncol ; 38: 1-5, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36299279

ABSTRACT

Background: Online adaptive MR-guided radiotherapy allows for the reduction of safety margins in dose-escalated treatment of rectal tumors. With smaller margins, precise tumor delineation becomes more critical. In the present study we investigated the impact of rectal ultrasound gel filling on interobserver variability in the delineation of primary rectal tumor volumes. Methods: Six patients with locally advanced rectal cancer were scanned on a 1.5 T MRI-Linac without (MRI_e) and with (MRI_f) transanal application of 100 cc of ultrasound gel. Eight international radiation oncologists with expertise in the treatment of gastrointestinal cancers delineated the gross tumor volume (GTV) on both MRI scans. MRI_f scans were provided to the participating centers only after the MRI_e scans had been returned. Interobserver variability was analyzed either by comparing the observers' delineations with a reference delineation (approach 1) or by forming all possible pairs of observers (approach 2). The Dice Similarity Index (DICE) and the 95% Hausdorff distance (95%HD) were calculated. Results: Rectal ultrasound gel filling was well tolerated by all patients. Overall, interobserver agreement was superior in MRI_f scans based on median DICE (0.81 vs 0.74, p < 0.005 for approach 1 and 0.76 vs 0.64, p < 0.0001 for approach 2) and 95%HD (6.9 mm vs 4.2 mm for approach 1, p = 0.04 and 8.9 mm vs 6.1 mm, p = 0.04 for approach 2). Delineated median tumor volumes and interquartile ranges were 26.99 cc [18.01-50.34 cc] in MRI_e and 44.20 cc [19.72-61.59 cc] in MRI_f scans, respectively (p = 0.012). Conclusions: Although limited by the small number of patients, in this study the application of rectal ultrasound gel resulted in higher interobserver agreement in rectal GTV delineation. Endorectal gel filling might be a useful tool for future dose escalation strategies.
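The two agreement metrics used in this study can be computed directly from binary masks. The sketch below is generic illustration, not the study's analysis code; for the 95% Hausdorff distance it uses all foreground voxels rather than extracted surface voxels, a common simplification.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(a, b):
    """Dice Similarity Coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric Hausdorff distance (in mm) between masks.

    distance_transform_edt(~m) gives, for every voxel, the Euclidean
    distance to the nearest foreground voxel of m; indexing with the
    other mask keeps only distances measured from its foreground.
    """
    a, b = a.astype(bool), b.astype(bool)
    dist_to_b = distance_transform_edt(~b, sampling=spacing)[a]
    dist_to_a = distance_transform_edt(~a, sampling=spacing)[b]
    return np.percentile(np.hstack([dist_to_b, dist_to_a]), 95)
```

Identical masks yield DICE 1.0 and 95%HD 0.0; the `sampling` argument converts voxel offsets into millimetres for anisotropic scans.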

18.
Rofo ; 195(2): 105-114, 2023 02.
Article in English | MEDLINE | ID: mdl-36170852

ABSTRACT

BACKGROUND: Artificial intelligence (AI) applications have become increasingly relevant across a broad spectrum of settings in medical imaging. Due to the large amount of imaging data generated in oncological hybrid imaging, AI applications are desirable for lesion detection and characterization in primary staging, therapy monitoring, and recurrence detection. Given the rapid developments in machine learning (ML) and deep learning (DL) methods, AI will have a significant impact on the imaging workflow and will eventually improve clinical decision making and outcomes. METHODS AND RESULTS: The first part of this narrative review discusses current research, with an introduction to artificial intelligence in oncological hybrid imaging and key concepts in data science. The second part reviews relevant examples with a focus on applications in oncology, as well as a discussion of challenges and current limitations. CONCLUSION: AI applications have the potential to leverage the diagnostic data stream with high efficiency and depth to facilitate automated lesion detection, characterization, and therapy monitoring, ultimately improving quality and efficiency throughout the medical imaging workflow. The goal is to generate reproducible, structured, quantitative diagnostic data for evidence-based therapy guidance in oncology. However, significant challenges remain regarding application development, benchmarking, and clinical implementation. KEY POINTS:
· Hybrid imaging generates a large amount of multimodality medical imaging data with high complexity and depth.
· Advanced tools are required to enable fast and cost-efficient processing along the whole radiology value chain.
· AI applications promise to facilitate the assessment of oncological disease in hybrid imaging with high quality and efficiency for lesion detection, characterization, and response assessment. The goal is to generate reproducible, structured, quantitative diagnostic data for evidence-based oncological therapy guidance.
· Selected applications in three oncological entities (lung, prostate, and neuroendocrine tumors) demonstrate how AI algorithms may impact imaging-based tasks in hybrid imaging and potentially guide clinical decision making.
CITATION FORMAT: Feuerecker B, Heimer M, Geyer T et al. Artificial Intelligence in Oncological Hybrid Imaging. Fortschr Röntgenstr 2023; 195: 105-114.


Subjects
Algorithms , Artificial Intelligence , Male , Humans , Machine Learning , Medical Oncology , Multimodal Imaging
19.
Sci Rep ; 12(1): 18733, 2022 11 04.
Article in English | MEDLINE | ID: mdl-36333523

ABSTRACT

Large epidemiological studies such as the UK Biobank (UKBB) or the German National Cohort (NAKO) provide unprecedented health-related data on the general population, aiming to better understand the determinants of health and disease. As part of these studies, Magnetic Resonance Imaging (MRI) is performed in a subset of participants, allowing for phenotypic and functional characterization of different organ systems. Due to the large amount of imaging data, automated image analysis is required; this can be performed using deep learning methods, e.g., for automated organ segmentation. In this paper we describe a computational pipeline for automated segmentation of abdominal organs on MRI data from 20,000 participants of UKBB and NAKO and provide results of the quality control process. We found that approximately 90% of datasets showed no relevant segmentation errors, while relevant errors occurred in a varying proportion of datasets depending on the organ of interest. Image-derived features based on automated organ segmentations showed relevant deviations of varying degree in the presence of segmentation errors. These results show that large-scale, deep learning-based abdominal organ segmentation on MRI data is feasible with overall high accuracy, but visual quality control remains an important step to ensure the validity of downstream analyses in large epidemiological imaging studies.
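As an illustration of the image-derived features mentioned above, organ volume can be read directly off a segmentation mask and the voxel spacing, and implausible values can be flagged as a cheap pre-screen before visual quality control. This is a hypothetical sketch; the function names and any thresholds are assumptions, not part of the described pipeline.

```python
import numpy as np

def organ_volume_ml(mask, voxel_spacing_mm):
    """Image-derived feature: organ volume in millilitres from a binary mask.

    voxel_spacing_mm: per-axis voxel edge lengths in mm; their product is
    the voxel volume in mm^3, divided by 1000 to obtain millilitres.
    """
    voxel_ml = np.prod(voxel_spacing_mm) / 1000.0
    return mask.astype(bool).sum() * voxel_ml

def flag_outliers(volumes_ml, low, high):
    """Simple QC pre-screen: indices of cases whose volume falls outside
    a plausible [low, high] range and should go to visual review."""
    return [i for i, v in enumerate(volumes_ml) if not (low <= v <= high)]
```

Range checks like this catch gross segmentation failures (empty or runaway masks) but, as the abstract notes, cannot replace visual quality control.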


Subjects
Biological Specimen Banks , Image Processing, Computer-Assisted , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging , Quality Control , United Kingdom
20.
Insights Imaging ; 13(1): 159, 2022 Oct 04.
Article in English | MEDLINE | ID: mdl-36194301

ABSTRACT

BACKGROUND: Lesion/tissue segmentation on digital medical images enables biomarker extraction, image-guided therapy delivery, treatment response measurement, and training/validation for developing artificial intelligence algorithms and workflows. To ensure data reproducibility, criteria for standardised segmentation are critical but currently unavailable. METHODS: A modified Delphi process initiated by the European Imaging Biomarker Alliance (EIBALL) of the European Society of Radiology (ESR) and the European Organisation for Research and Treatment of Cancer (EORTC) Imaging Group was undertaken. Three multidisciplinary task forces addressed modality and image acquisition, segmentation methodology itself, and standards and logistics. Devised survey questions were fed via a facilitator to expert participants. The 58 respondents to Round 1 were invited to participate in Rounds 2-4, with each subsequent round informed by the responses of the previous rounds. RESULTS/CONCLUSIONS: Items with ≥ 75% consensus are considered recommendations. These include system performance certification; thresholds for image signal-to-noise, contrast-to-noise and tumour-to-background ratios; spatial resolution; and artefact levels. Direct, iterative, and machine or deep learning reconstruction methods and the use of a mixture of CE-marked and verified research tools were agreed upon, and the use of specified reference standards and validation processes was considered essential. Operator training and refreshment were considered mandatory for clinical trials and clinical research. Items with 60-74% agreement require reporting (site-specific accreditation for clinical research, minimal pixel number within the segmented lesion, use of post-reconstruction algorithms, operator refresher training for clinical practice). Items with < 60% agreement are outside current recommendations for segmentation (frequency of system performance tests, use of only CE-marked tools, board certification of operators, frequency of operator refresher training). Recommendations by anatomical area are also specified.
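The consensus thresholds described in this Delphi process (≥ 75% for a recommendation, 60-74% requiring reporting, below 60% outside recommendations) amount to a simple tally over survey responses. The sketch below is a hypothetical illustration of that classification rule, not part of the actual process or its tooling.

```python
from collections import Counter

def consensus_level(responses):
    """Fraction of respondents agreeing with the modal answer of an item."""
    counts = Counter(responses)
    return counts.most_common(1)[0][1] / len(responses)

def classify_item(responses):
    """Map an item's agreement level to the categories used in the
    consensus process: >= 75% -> recommendation, 60-74% -> reporting
    required, below 60% -> outside current recommendations."""
    c = consensus_level(responses)
    if c >= 0.75:
        return "recommendation"
    if c >= 0.60:
        return "reporting required"
    return "outside recommendations"
```

For example, eight of ten experts agreeing on an item would place it in the recommendation category, while a six-of-ten split would only require reporting.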
