1.
Radiology; 309(1): e230806, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37787671

ABSTRACT

Background: Clinicians consider both imaging and nonimaging data when diagnosing diseases; however, current machine learning approaches primarily consider data from a single modality.

Purpose: To develop a neural network architecture capable of integrating multimodal patient data and to compare its performance with that of models incorporating a single modality for diagnosing up to 25 pathologic conditions.

Materials and Methods: In this retrospective study, imaging and nonimaging patient data were extracted from the Medical Information Mart for Intensive Care (MIMIC) database and an internal database comprising chest radiographs and clinical parameters of inpatients in the intensive care unit (ICU) (January 2008 to December 2020). The MIMIC and internal data sets were each split into training (n = 33 893, n = 28 809), validation (n = 740, n = 7203), and test (n = 1909, n = 9004) sets. A novel transformer-based neural network architecture was trained to diagnose up to 25 conditions using nonimaging data alone, imaging data alone, or multimodal data. Diagnostic performance was assessed using area under the receiver operating characteristic curve (AUC) analysis.

Results: The MIMIC and internal data sets included 36 542 patients (mean age, 63 years ± 17 [SD]; 20 567 male patients) and 45 016 patients (mean age, 66 years ± 16; 27 577 male patients), respectively. The multimodal model showed improved diagnostic performance for all pathologic conditions. For the MIMIC data set, the mean AUC was 0.77 (95% CI: 0.77, 0.78) when both chest radiographs and clinical parameters were used, compared with 0.70 (95% CI: 0.69, 0.71; P < .001) for chest radiographs alone and 0.72 (95% CI: 0.72, 0.73; P < .001) for clinical parameters alone. These findings were confirmed on the internal data set.

Conclusion: A model trained on imaging and nonimaging data outperformed models trained on only one type of data for diagnosing multiple diseases in patients in an ICU setting. © RSNA, 2023. Supplemental material is available for this article. See also the editorial by Kitamura and Topol in this issue.
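
As a rough illustration of the multimodal fusion described above, the sketch below combines a CNN-derived image token with one learned token per clinical parameter in a small transformer encoder. The ResNet-18 backbone, layer sizes, and 25-label head are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class MultimodalClassifier(nn.Module):
    """Sketch: fuse a radiograph embedding with tabular clinical parameters."""

    def __init__(self, n_clinical: int, n_labels: int = 25, d_model: int = 256):
        super().__init__()
        # Image branch: ResNet reduced to a single d_model-sized token.
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, d_model)
        self.image_encoder = backbone
        # Tabular branch: project each scalar clinical value to its own token.
        self.clinical_proj = nn.Linear(1, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8,
                                           batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_labels)

    def forward(self, image, clinical):
        img_tok = self.image_encoder(image).unsqueeze(1)       # (B, 1, d)
        clin_tok = self.clinical_proj(clinical.unsqueeze(-1))  # (B, n_clinical, d)
        tokens = torch.cat([img_tok, clin_tok], dim=1)
        fused = self.fusion(tokens).mean(dim=1)                # pool all tokens
        return self.head(fused)                                # one logit per label

model = MultimodalClassifier(n_clinical=12)
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 12))
```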


Subjects
Deep Learning; Humans; Male; Middle Aged; Aged; Retrospective Studies; Radiography; Databases, Factual; Inpatients
2.
Sci Rep; 13(1): 10666, 2023 Jul 1.
Article in English | MEDLINE | ID: mdl-37393383

ABSTRACT

When clinicians assess the prognosis of patients in intensive care, they take imaging and non-imaging data into account. In contrast, many traditional machine learning models rely on only one of these modalities, limiting their potential in medical applications. This work proposes and evaluates a transformer-based neural network as a novel AI architecture that integrates multimodal patient data, i.e., imaging data (chest radiographs) and non-imaging data (clinical data). We evaluate the performance of our model in a retrospective study with 6,125 patients in intensive care. We show that the combined model (area under the receiver operating characteristic curve [AUROC] of 0.863) is superior to the radiographs-only model (AUROC = 0.811, p < 0.001) and the clinical data-only model (AUROC = 0.785, p < 0.001) when tasked with predicting in-hospital survival per patient. Furthermore, we demonstrate that our proposed model is robust in cases where not all (clinical) data points are available.
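
The AUROC comparisons reported above can be approximated with a paired bootstrap over patients, as in the minimal sketch below; the paper's exact statistical test may differ (e.g., DeLong's test), and all inputs here are placeholders.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_diff(y, p_combined, p_single, n_boot=2000, seed=0):
    """Bootstrap the AUROC difference between two models on the same patients."""
    rng = np.random.default_rng(seed)
    diffs = []
    n = len(y)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)            # resample patients with replacement
        if len(np.unique(y[idx])) < 2:         # AUROC needs both classes present
            continue
        diffs.append(roc_auc_score(y[idx], p_combined[idx]) -
                     roc_auc_score(y[idx], p_single[idx]))
    diffs = np.asarray(diffs)
    # Two-sided p-value: fraction of bootstrap differences crossing zero.
    p = 2 * min((diffs <= 0).mean(), (diffs >= 0).mean())
    return diffs.mean(), p
```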


Subjects
Critical Care; Diagnostic Imaging; Humans; Retrospective Studies; Area Under Curve; Electric Power Supplies
3.
Sci Rep; 13(1): 12098, 2023 Jul 26.
Article in English | MEDLINE | ID: mdl-37495660

ABSTRACT

Although generative adversarial networks (GANs) can produce large datasets, their limited diversity and fidelity have recently been addressed by denoising diffusion probabilistic models (DDPMs), which have demonstrated superiority in natural image synthesis. In this study, we introduce Medfusion, a conditional latent DDPM designed for medical image generation, and evaluate its performance against GANs, which currently represent the state of the art. Medfusion was trained and compared with StyleGAN-3 using fundoscopy images from the AIROGS dataset, radiographs from the CheXpert dataset, and histopathology images from the CRCDX dataset. Based on previous studies, Progressively Growing GAN (ProGAN) and Conditional GAN (cGAN) were used as additional baselines on the CheXpert and CRCDX datasets, respectively. Medfusion exceeded GANs in terms of diversity (recall), achieving better scores of 0.40 compared with 0.19 in the AIROGS dataset, 0.41 compared with 0.02 (cGAN) and 0.24 (StyleGAN-3) in the CRCDX dataset, and 0.32 compared with 0.17 (ProGAN) and 0.08 (StyleGAN-3) in the CheXpert dataset. Furthermore, Medfusion exhibited equal or higher fidelity (precision) across all three datasets. Our study shows that Medfusion constitutes a promising alternative to GAN-based models for generating high-quality medical images, with improved diversity and fewer artifacts in the generated images.
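
The precision (fidelity) and recall (diversity) scores quoted above are commonly computed with k-NN manifold estimates over feature embeddings. A minimal sketch of that idea follows; the random features and k = 3 are placeholders, and real evaluations would use embeddings from a pretrained encoder.

```python
import numpy as np
from scipy.spatial.distance import cdist

def knn_radii(feats, k=3):
    """Distance from each point to its k-th nearest neighbour in the same set."""
    d = cdist(feats, feats)
    np.fill_diagonal(d, np.inf)       # exclude self-distances
    return np.sort(d, axis=1)[:, k - 1]

def coverage(a, b, radii_b):
    """Fraction of points in `a` that fall inside some k-NN ball of set `b`."""
    return (cdist(a, b) <= radii_b[None, :]).any(axis=1).mean()

real = np.random.rand(500, 64)        # placeholder feature embeddings
fake = np.random.rand(500, 64)
precision = coverage(fake, real, knn_radii(real))   # fidelity of generated images
recall = coverage(real, fake, knn_radii(fake))      # diversity of generated images
```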


Subjects
Artifacts; Mental Recall; Diffusion; Models, Statistical; Ophthalmoscopy; Image Processing, Computer-Assisted
4.
Sci Rep; 13(1): 7303, 2023 May 5.
Article in English | MEDLINE | ID: mdl-37147413

ABSTRACT

Recent advances in computer vision have shown promising results in image generation. Diffusion probabilistic models have generated realistic images from textual input, as demonstrated by DALL-E 2, Imagen, and Stable Diffusion. However, their use in medicine, where imaging data typically comprises three-dimensional volumes, has not been systematically evaluated. Synthetic images may play a crucial role in privacy-preserving artificial intelligence and can also be used to augment small datasets. We show that diffusion probabilistic models can synthesize high-quality medical data for magnetic resonance imaging (MRI) and computed tomography (CT). For quantitative evaluation, two radiologists rated the quality of the synthesized images regarding "realistic image appearance", "anatomical correctness", and "consistency between slices". Furthermore, we demonstrate that synthetic images can be used in self-supervised pre-training and improve the performance of breast segmentation models when data is scarce (Dice scores, 0.91 [without synthetic data], 0.95 [with synthetic data]).
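
The reported segmentation gain is measured with the Dice coefficient; a minimal NumPy version for binary masks is sketched below (the smoothing term is an illustrative convention, not taken from the paper).

```python
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice overlap of two binary masks; eps guards against empty masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```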


Subjects
Artificial Intelligence; Imaging, Three-Dimensional; Magnetic Resonance Imaging; Tomography, X-Ray Computed; Models, Statistical; Image Processing, Computer-Assisted/methods
5.
Radiology; 307(1): e220510, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36472534

ABSTRACT

Background: Supine chest radiography for bedridden patients in intensive care units (ICUs) is one of the most frequently ordered imaging studies worldwide.

Purpose: To evaluate the diagnostic performance of a neural network-based model trained on structured semiquantitative radiologic reports of bedside chest radiographs.

Materials and Methods: For this retrospective single-center study, bedside chest radiographs of children and adults in the ICU of a university hospital, acquired from January 2009 to December 2020, were reported using a structured, itemized template. Ninety-eight radiologists rated the radiographs semiquantitatively for the severity of disease patterns. These data were used to train a neural network to identify cardiomegaly, pulmonary congestion, pleural effusion, pulmonary opacities, and atelectasis. A held-out internal test set (100 radiographs from 100 patients), assessed independently by an expert panel of six radiologists, provided the ground truth. Individual assessments by each of these six radiologists, by two nonradiologist physicians in the ICU, and by the neural network were compared with the ground truth. Separately, the nonradiologist physicians assessed the images without and with preliminary readings provided by the neural network. The weighted Cohen κ coefficient was used to measure agreement between the readers and the ground truth.

Results: A total of 193 566 radiographs from 45 016 patients (mean age, 66 years ± 16 [SD]; 61% men) were included and divided into training (n = 122 294; 64%), validation (n = 31 243; 16%), and test (n = 40 029; 20%) sets. The neural network exhibited higher agreement with the majority vote of the expert panel (κ = 0.86) than any individual radiologist did (κ = 0.81 to 0.84). When the neural network provided preliminary readings, the reports of the nonradiologist physicians improved considerably (aided vs unaided, κ = 0.87 vs 0.79; P < .001).

Conclusion: A neural network trained with structured semiquantitative bedside chest radiography reports enabled nonradiologist physicians to produce interpretations that agreed more closely with the consensus reading of expert radiologists. © RSNA, 2022. Supplemental material is available for this article. See also the editorial by Wielpütz in this issue.
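
The agreement metric used throughout this study, the weighted Cohen κ, is available directly in scikit-learn. In the sketch below, the quadratic weighting and the 0-4 severity grades are assumptions for illustration; the study does not specify them here.

```python
from sklearn.metrics import cohen_kappa_score

# Illustrative ordinal severity grades (0 = absent ... 4 = severe).
reader       = [0, 2, 3, 1, 4, 2, 0, 3]
ground_truth = [0, 2, 2, 1, 4, 3, 1, 3]

# Quadratic weights penalize large grade disagreements more than small ones.
kappa = cohen_kappa_score(reader, ground_truth, weights="quadratic")
print(f"weighted kappa = {kappa:.2f}")
```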


Subjects
Artificial Intelligence; Radiography, Thoracic; Male; Adult; Child; Humans; Aged; Female; Retrospective Studies; Radiography, Thoracic/methods; Lung; Radiography
6.
Diagnostics (Basel); 12(2), 2022 Jan 19.
Article in English | MEDLINE | ID: mdl-35204338

ABSTRACT

Machine learning results based on radiomic analysis are often not transferable. A potential reason for this is the variability of radiomic features caused by differing human-made segmentations. The aim of this study was therefore to provide a comprehensive inter-reader reliability analysis of radiomic features in five clinical image datasets and to assess the association between inter-reader reliability and survival prediction. We analyzed 4598 tumor segmentations in both computed tomography and magnetic resonance imaging data. We used a neural network to generate 100 additional segmentation outlines for each tumor and performed a reliability analysis of the radiomic features. To demonstrate clinical utility, we predicted patient survival based on all features and on the most reliable features. Survival prediction models for both the computed tomography and magnetic resonance imaging datasets demonstrated less statistical spread and superior survival prediction when based on the most reliable features. Mean concordance indices were Cmean = 0.58 [most reliable] vs. Cmean = 0.56 [all] (p < 0.001, CT) and Cmean = 0.58 vs. Cmean = 0.57 (p = 0.23, MRI). Thus, a preceding reliability analysis and selection of the most reliable radiomic features improves a model's ability to predict patient survival across clinical imaging modalities and tumor entities.
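
The concordance index (C-index) used to compare the survival models above can be computed with lifelines, as sketched below with purely illustrative toy data.

```python
from lifelines.utils import concordance_index

survival_days = [120, 340, 80, 500, 210]   # observed follow-up times
risk_score    = [0.9, 0.3, 1.1, 0.1, 0.6]  # model output: higher = higher risk
observed      = [1, 1, 0, 1, 1]            # 0 = censored patient

# lifelines expects higher scores to mean *longer* survival, so negate risk.
c = concordance_index(survival_days, [-r for r in risk_score], observed)
print(f"C-index = {c:.2f}")
```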

8.
Nat Commun; 12(1): 4315, 2021 Jul 14.
Article in English | MEDLINE | ID: mdl-34262044

ABSTRACT

Unmasking the decision-making process of machine learning models is essential for implementing diagnostic support systems in clinical practice. Here, we demonstrate that adversarially trained models can significantly enhance the usability of pathology detection compared with their standard counterparts. We let six experienced radiologists rate the interpretability of saliency maps in datasets of X-rays, computed tomography, and magnetic resonance imaging scans. Significant improvements are found for our adversarial models, which are further improved by the application of dual-batch normalization. Contrary to previous research on adversarially trained models, we find that the accuracy of such models equals that of standard models when sufficiently large datasets and dual-batch normalization are used. To ensure transferability, we additionally validate our results on an external test set of 22,433 X-rays. These findings elucidate that different paths for adversarial and real images are needed during training to achieve state-of-the-art results with superior clinical interpretability.
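
A single adversarial training step can be sketched as below with an FGSM perturbation; the paper's actual setup (attack type, epsilon, and the dual-batch normalization routing) is more involved, and all names here are placeholders.

```python
import torch
import torch.nn.functional as F

def adversarial_step(model, optimizer, x, y, epsilon=4 / 255):
    # Craft an FGSM perturbation from the loss gradient w.r.t. the input.
    x_req = x.clone().requires_grad_(True)
    grad, = torch.autograd.grad(F.cross_entropy(model(x_req), y), x_req)
    x_adv = (x + epsilon * grad.sign()).clamp(0, 1)

    optimizer.zero_grad()
    # Train jointly on clean and adversarial images; dual-batch normalization
    # would route the two batches through separate BN statistics (omitted here).
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```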


Subjects
Neural Networks, Computer; Radiographic Image Interpretation, Computer-Assisted/methods; Humans; Machine Learning; Reproducibility of Results
9.
IEEE Trans Med Imaging; 40(12): 3543-3554, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34138702

ABSTRACT

The emergence of deep learning has considerably advanced the state of the art in cardiac magnetic resonance (CMR) segmentation. Many techniques have been proposed over the last few years, bringing the accuracy of automated segmentation close to human performance. However, these models have all too often been trained and validated using cardiac imaging samples from single clinical centres or homogeneous imaging protocols. This has prevented the development and validation of models that generalize across different clinical centres, imaging conditions, or scanner vendors. To promote further research and scientific benchmarking in the field of generalizable deep learning for cardiac segmentation, this paper presents the results of the Multi-Centre, Multi-Vendor and Multi-Disease Cardiac Segmentation (M&Ms) Challenge, which was recently organized as part of the MICCAI 2020 Conference. A total of 14 teams submitted different solutions to the problem, combining various baseline models, data augmentation strategies, and domain adaptation techniques. The obtained results indicate the importance of intensity-driven data augmentation, as well as the need for further research to improve generalizability towards unseen scanner vendors or new imaging protocols. Furthermore, we present a new resource of 375 heterogeneous CMR datasets acquired using four different scanner vendors in six hospitals and three different countries (Spain, Canada and Germany), which we provide as an open-access resource for the community to enable future research in the field.
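
"Intensity-driven data augmentation" of the kind highlighted above can be as simple as random gamma and brightness/contrast jitter applied to each scan; the ranges below are illustrative choices, not the challenge teams' exact settings.

```python
import numpy as np

def intensity_augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Randomly perturb image intensities to simulate scanner/protocol shifts."""
    img = (img - img.min()) / (np.ptp(img) + 1e-7)              # normalize to [0, 1]
    img = img ** rng.uniform(0.7, 1.5)                          # random gamma
    img = img * rng.uniform(0.9, 1.1) + rng.uniform(-0.1, 0.1)  # contrast/brightness
    return np.clip(img, 0.0, 1.0)

augmented = intensity_augment(np.random.rand(224, 224), np.random.default_rng(0))
```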


Subjects
Heart; Magnetic Resonance Imaging; Cardiac Imaging Techniques; Heart/diagnostic imaging; Humans
10.
Sci Adv; 6(49), 2020 Dec.
Article in English | MEDLINE | ID: mdl-33268370

ABSTRACT

Computer vision (CV) has the potential to change medicine fundamentally. Expert knowledge provided by CV can enhance diagnosis. Unfortunately, existing algorithms often remain below expectations, as the databases used for training are usually too small, incomplete, and heterogeneous in quality. Moreover, data protection is a serious obstacle to the exchange of data. To overcome these limitations, we propose to use generative models (GMs) to produce high-resolution synthetic radiographs that do not contain any personal identification information. Blinded analyses by CV and radiology experts confirmed the high similarity of synthesized and real radiographs. The combination of pooled GMs improves the performance of CV algorithms trained on smaller datasets, and the integration of synthesized data into patient data repositories can compensate for underrepresented disease entities. By integrating federated learning strategies, even hospitals with few datasets can contribute to and benefit from GM training.
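
The federated learning strategies mentioned above typically pool model weights rather than patient data. A minimal FedAvg-style sketch follows, assuming PyTorch state dicts; the paper proposes federated GM training in general, not necessarily this exact averaging scheme.

```python
import torch

@torch.no_grad()
def federated_average(state_dicts, n_samples):
    """Average model weights from several hospitals, weighted by dataset size.

    state_dicts: list of model.state_dict() from locally trained models.
    n_samples: list of local training-set sizes (standard FedAvg weighting).
    """
    total = sum(n_samples)
    avg = {}
    for key in state_dicts[0]:
        avg[key] = sum(sd[key] * (n / total)
                       for sd, n in zip(state_dicts, n_samples))
    return avg  # load into a model via model.load_state_dict(avg)
```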

11.
Sci Rep; 10(1): 12688, 2020 Jul 29.
Article in English | MEDLINE | ID: mdl-32728098

ABSTRACT

Identifying image features that are robust with respect to segmentation variability is a tough challenge in radiomics. So far, this problem has mainly been tackled in test-retest analyses. In this work we analyse radiomics feature reproducibility in two phases: first with manual segmentations provided by four expert readers, and second with probabilistic automated segmentations generated by a recently developed neural network (PHiSeg). We test feature reproducibility on three publicly available datasets of lung, kidney and liver lesions. We find consistent results across both manual and automated segmentations in all three datasets and show that some radiomic features are robust against segmentation variability while others exhibit poor reproducibility under differing segmentations. By providing a detailed analysis of the robustness of the most common radiomics features across several datasets, we envision that more reliable and reproducible radiomic models can be built on this work in the future.
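
One simple way to screen features for the robustness studied here is to compare each feature's spread across segmentation variants of the same lesion, as in the sketch below; the coefficient-of-variation criterion and threshold are illustrative stand-ins for the ICC-style reliability measures such analyses typically use.

```python
import numpy as np

def stable_features(values: np.ndarray, threshold: float = 0.05) -> np.ndarray:
    """Flag reproducible radiomic features for one lesion.

    values: array of shape (n_segmentations, n_features), one row per
            segmentation variant of the same lesion.
    Returns a boolean mask of features whose coefficient of variation
    across segmentations stays below `threshold`.
    """
    cv = values.std(axis=0) / (np.abs(values.mean(axis=0)) + 1e-9)
    return cv < threshold

mask = stable_features(np.random.rand(100, 50))   # placeholder feature matrix
```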


Subjects
Neoplasms/diagnostic imaging; Radiographic Image Interpretation, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Automation; Databases, Factual; Humans; Neural Networks, Computer; Observer Variation
12.
Eur Radiol Exp; 4(1): 20, 2020 Apr 6.
Article in English | MEDLINE | ID: mdl-32249336

ABSTRACT

BACKGROUND: To evaluate whether machine learning algorithms allow the prediction of Child-Pugh classification on clinical multiphase computed tomography (CT).

METHODS: A total of 259 patients who underwent diagnostic abdominal CT (unenhanced, contrast-enhanced arterial, and venous phases) were included in this retrospective study. Child-Pugh scores were determined based on laboratory and clinical parameters. Linear regression (LR), random forest (RF), and convolutional neural network (CNN) algorithms were used to predict the Child-Pugh class. Their performance was compared with the predictions of experienced radiologists (ERs). Spearman correlation coefficients and accuracy were assessed for all predictive models. Additionally, a binary classification into low disease severity (Child-Pugh class A) and advanced disease severity (Child-Pugh class ≥ B) was performed.

RESULTS: Eleven imaging features exhibited a significant correlation with Child-Pugh class when adjusted for multiple comparisons. Significant correlations between predicted and measured Child-Pugh classes were observed (ρLR = 0.35, ρRF = 0.32, ρCNN = 0.51, ρERs = 0.60; p < 0.001). Significantly better accuracies for the prediction of Child-Pugh classes versus the no-information rate were found for the CNN and ERs (p ≤ 0.034), but not for LR and RF (p ≥ 0.384). For binary severity classification, the area under the curve at receiver operating characteristic analysis was significantly lower (p ≤ 0.042) for LR (0.71) and RF (0.69) than for the CNN (0.80) and ERs (0.76), without significant differences between the CNN and ERs (p = 0.144).

CONCLUSIONS: The performance of a CNN in assessing Child-Pugh class based on multiphase abdominal CT images is comparable to that of ERs.
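
The core of the evaluation above, Spearman correlation plus accuracy against the no-information rate, is easy to reproduce in outline; the class labels below are illustrative placeholders (0/1/2 standing in for Child-Pugh A/B/C).

```python
import numpy as np
from scipy.stats import spearmanr

y_true = np.array([0, 0, 1, 2, 1, 0, 2, 1])   # measured Child-Pugh classes
y_pred = np.array([0, 1, 1, 2, 1, 0, 1, 1])   # model-predicted classes

rho, p = spearmanr(y_true, y_pred)            # ordinal agreement
accuracy = (y_true == y_pred).mean()
# No-information rate: accuracy of always predicting the most frequent class.
nir = np.bincount(y_true).max() / len(y_true)
print(f"rho = {rho:.2f} (p = {p:.3f}), accuracy = {accuracy:.2f}, NIR = {nir:.2f}")
```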


Subjects
Liver Diseases/classification; Liver Diseases/diagnostic imaging; Machine Learning; Tomography, X-Ray Computed/methods; Aged; Contrast Media; Female; Humans; Iohexol/analogs & derivatives; Male; Middle Aged; Retrospective Studies
13.
Front Comput Neurosci; 13: 73, 2019.
Article in English | MEDLINE | ID: mdl-31780915

ABSTRACT

Prediction of overall survival based on multimodal MRI of brain tumor patients is a difficult problem. Although survival also depends on factors that cannot be assessed via preoperative MRI, such as surgical outcome, encouraging results for MRI-based survival analysis have been published for different datasets. We assess whether and how established radiomic approaches as well as novel methods can predict overall survival of brain tumor patients on the BraTS challenge dataset. This dataset consists of multimodal preoperative images of 211 glioblastoma patients from several institutions with reported resection status and known survival. In the official challenge setting, only patients with a reported gross total resection (GTR) are taken into account. We therefore evaluated previously published methods as well as different machine learning approaches on the BraTS dataset. For different types of resection status, these approaches were compared to a baseline: a linear regression on patient age only. This naive approach won 3rd place out of 26 participants in the BraTS 2018 survival prediction challenge. Previously published radiomic signatures show significant correlations with patient survival and predictive value for patients with a reported subtotal resection. However, for patients with reported GTR, none of the evaluated approaches was able to outperform the age-only baseline in a cross-validation setting, explaining the poor performance of radiomics-based approaches in the BraTS 2018 challenge.
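
The age-only baseline described above is essentially a one-feature linear regression. A minimal reproduction of the idea (not the authors' exact pipeline, and with made-up numbers) looks like this:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

age = np.array([[54.0], [61.0], [47.0], [70.0], [58.0]])   # years (illustrative)
survival = np.array([520, 310, 690, 150, 410])             # days (illustrative)

baseline = LinearRegression().fit(age, survival)           # survival ~ a*age + b
predicted_days = baseline.predict(np.array([[63.0]]))
```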

14.
Radiology; 290(2): 290-297, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30422086

ABSTRACT

Purpose: To compare the diagnostic performance of radiomic analysis (RA) and a convolutional neural network (CNN) with that of radiologists for the classification of contrast agent-enhancing lesions as benign or malignant at multiparametric breast MRI.

Materials and Methods: Between August 2011 and August 2015, 447 patients with 1294 enhancing lesions (787 malignant, 507 benign; median size, 15 mm ± 20) were evaluated. Lesions were manually segmented by one breast radiologist. RA was performed by using L1 regularization and principal component analysis. The CNN used a deep residual neural network with 34 layers. All algorithms were also retrained on half the number of lesions (n = 647). Machine interpretations were compared with prospective interpretations by three breast radiologists. The standard of reference was histologic analysis or follow-up. Areas under the receiver operating characteristic curve (AUCs) were used to compare diagnostic performance.

Results: The CNN trained on the full cohort was superior to the one trained on the half-size cohort (AUC, 0.88 vs 0.83, respectively; P = .01), but there was no difference for RA with L1 regularization (AUC, 0.81 vs 0.80, respectively; P = .76) or RA with principal component analysis (AUC, 0.78 vs 0.78, respectively; P = .93). Using the full cohort, CNN performance (AUC, 0.88; 95% confidence interval: 0.86, 0.89) was better than that of RA with L1 regularization (AUC, 0.81; 95% confidence interval: 0.79, 0.83; P < .001) and RA with principal component analysis (AUC, 0.78; 95% confidence interval: 0.76, 0.80; P < .001). However, the CNN was inferior to breast radiologist interpretation (AUC, 0.98; 95% confidence interval: 0.96, 0.99; P < .001).

Conclusion: A convolutional neural network was superior to radiomic analysis for the classification of enhancing lesions as benign or malignant at multiparametric breast MRI. Both approaches were inferior to radiologists' performance; however, more training data will further improve the performance of the convolutional neural network, but not that of the radiomics algorithms. © RSNA, 2018. Online supplemental material is available for this article.
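
The two radiomic variants compared above can be sketched as scikit-learn pipelines; the logistic-regression classifier, feature counts, and data below are assumptions for illustration, since the abstract names only the L1 and PCA steps.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.random.rand(200, 100)        # placeholder: 200 lesions x 100 radiomic features
y = np.random.randint(0, 2, 200)    # placeholder: benign (0) vs malignant (1)

# Variant 1: L1-regularized classifier (sparse feature selection).
l1_model = make_pipeline(StandardScaler(),
                         LogisticRegression(penalty="l1", solver="liblinear"))
# Variant 2: PCA dimensionality reduction before classification.
pca_model = make_pipeline(StandardScaler(), PCA(n_components=10),
                          LogisticRegression())

auc_l1 = cross_val_score(l1_model, X, y, scoring="roc_auc").mean()
auc_pca = cross_val_score(pca_model, X, y, scoring="roc_auc").mean()
```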


Subjects
Breast Neoplasms/diagnostic imaging; Breast/diagnostic imaging; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Neural Networks, Computer; Female; Humans; Prospective Studies
15.
Annu Int Conf IEEE Eng Med Biol Soc; 2016: 3531-3534, 2016 Aug.
Article in English | MEDLINE | ID: mdl-28269060

ABSTRACT

Manual palpation is still the gold standard for assessing pulse presence during cardiopulmonary resuscitation (CPR) by professional rescuers. However, this method is unreliable, time-consuming, and subjective. A reliable, quick, and objective assessment of pulse presence in cardiac arrest situations to assist professional rescuers therefore remains an unmet need. Accelerometers are a promising sensor modality for pulse palpation technology, and pulse detection at the carotid artery with them has been demonstrated to be feasible. This study extends previous work by presenting an algorithm for automatic, accelerometer-based detection of pulse presence at the carotid site during CPR. We show that accelerometers may be helpful in the automated detection of pulse presence during CPR.
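
A generic version of such pulse detection is sketched below: band-pass filter the carotid-site accelerometer signal around plausible heart-rate frequencies, then count peaks. The cut-offs, sampling rate, and thresholds are illustrative, not the paper's algorithm.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_pulse(accel: np.ndarray, fs: float = 100.0) -> bool:
    """Decide whether a pulse is present in a 1-D accelerometer trace."""
    # Keep only frequencies corresponding to ~40-210 beats per minute.
    b, a = butter(4, [0.7, 3.5], btype="band", fs=fs)
    filtered = filtfilt(b, a, accel)
    # Peaks at least 0.3 s apart and above one standard deviation.
    peaks, _ = find_peaks(filtered, distance=int(fs * 0.3),
                          height=filtered.std())
    rate_hz = len(peaks) / (len(accel) / fs)
    return 0.7 <= rate_hz <= 3.5   # pulse plausibly present?

present = detect_pulse(np.random.randn(1000))   # placeholder 10 s recording
```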


Subjects
Accelerometry/instrumentation; Algorithms; Cardiopulmonary Resuscitation/methods; Heart Rate Determination/methods; Accelerometry/methods; Aged; Carotid Arteries; Equipment Design; Heart Arrest/diagnosis; Heart Arrest/therapy; Heart Rate Determination/instrumentation; Humans; Male; Middle Aged; Palpation