Results 1 - 15 of 15
1.
Lancet Digit Health ; 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38981834

ABSTRACT

BACKGROUND: Chest x-ray is a basic, cost-effective, and widely available imaging method that is used for static assessments of organic diseases and anatomical abnormalities, but its ability to estimate dynamic measurements such as pulmonary function is unknown. We aimed to estimate two major pulmonary functions from chest x-rays. METHODS: In this retrospective model development and validation study, we trained, validated, and externally tested a deep learning-based artificial intelligence (AI) model to estimate forced vital capacity (FVC) and forced expiratory volume in 1 s (FEV1) from chest x-rays. We included consecutively collected results of spirometry and any associated chest x-rays that had been obtained between July 1, 2003, and Dec 31, 2021, from five institutions in Japan (labelled institutions A-E). Eligible x-rays had been acquired within 14 days of spirometry and were labelled with the FVC and FEV1. X-rays from three institutions (A-C) were used for training, validation, and internal testing, with the testing dataset being independent of the training and validation datasets, and then x-rays from the two other institutions (D and E) were used for independent external testing. Performance for estimating FVC and FEV1 was evaluated by calculating the Pearson's correlation coefficient (r), intraclass correlation coefficient (ICC), mean square error (MSE), root mean square error (RMSE), and mean absolute error (MAE) compared with the results of spirometry. FINDINGS: We included 141 734 x-ray and spirometry pairs from 81 902 patients from the five institutions. The training, validation, and internal test datasets included 134 307 x-rays from 75 768 patients (37 718 [50%] female, 38 050 [50%] male; mean age 56 years [SD 18]), and the external test datasets included 2137 x-rays from 1861 patients (742 [40%] female, 1119 [60%] male; mean age 65 years [SD 17]) from institution D and 5290 x-rays from 4273 patients (1972 [46%] female, 2301 [54%] male; mean age 63 years [SD 17]) from institution E. External testing for FVC yielded r values of 0·91 (99% CI 0·90-0·92) for institution D and 0·90 (0·89-0·91) for institution E, ICC of 0·91 (99% CI 0·90-0·92) and 0·89 (0·88-0·90), MSE of 0·17 L² (99% CI 0·15-0·19) and 0·17 L² (0·16-0·19), RMSE of 0·41 L (99% CI 0·39-0·43) and 0·41 L (0·39-0·43), and MAE of 0·31 L (99% CI 0·29-0·32) and 0·31 L (0·30-0·32). External testing for FEV1 yielded r values of 0·91 (99% CI 0·90-0·92) for institution D and 0·91 (0·90-0·91) for institution E, ICC of 0·90 (99% CI 0·89-0·91) and 0·90 (0·90-0·91), MSE of 0·13 L² (99% CI 0·12-0·15) and 0·11 L² (0·10-0·12), RMSE of 0·37 L (99% CI 0·35-0·38) and 0·33 L (0·32-0·35), and MAE of 0·28 L (99% CI 0·27-0·29) and 0·25 L (0·25-0·26). INTERPRETATION: This deep learning model allowed estimation of FVC and FEV1 from chest x-rays, showing high agreement with spirometry. The model offers an alternative to spirometry for assessing pulmonary function, which is especially useful for patients who are unable to undergo spirometry, and might enhance the customisation of CT imaging protocols based on insights gained from chest x-rays, improving the diagnosis and management of lung diseases. Future studies should investigate the performance of this AI model in combination with clinical information to enable more appropriate and targeted use. FUNDING: None.
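As an illustration of the agreement statistics reported above, a minimal Python sketch follows. The authors' code is not published, so the variable names and the choice of ICC variant, ICC(2,1) (two-way random effects, absolute agreement, single measurement), are assumptions:

```python
import numpy as np
from scipy import stats

def agreement_metrics(measured, estimated):
    """Pearson r, ICC(2,1), MSE, RMSE, and MAE between paired measurements."""
    measured = np.asarray(measured, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    r, _ = stats.pearsonr(measured, estimated)
    mse = np.mean((measured - estimated) ** 2)
    out = {"r": r, "MSE": mse, "RMSE": np.sqrt(mse),
           "MAE": np.mean(np.abs(measured - estimated))}
    # ICC(2,1): two-way random effects, absolute agreement, single measurement.
    data = np.column_stack([measured, estimated])   # n subjects x k = 2 "raters"
    n, k = data.shape
    grand = data.mean()
    ms_rows = k * np.sum((data.mean(axis=1) - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((data.mean(axis=0) - grand) ** 2) / (k - 1)
    resid = data - data.mean(axis=1, keepdims=True) - data.mean(axis=0) + grand
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    out["ICC"] = (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
    return out

# Hypothetical spirometry FVC values (L) and model estimates for five patients.
print(agreement_metrics([3.2, 2.1, 4.0, 2.8, 3.5], [3.0, 2.3, 3.9, 3.0, 3.4]))
```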

2.
Sci Rep ; 14(1): 2911, 2024 02 05.
Article in English | MEDLINE | ID: mdl-38316892

ABSTRACT

This study created an image-to-image translation model that synthesizes diffusion tensor images (DTI) from conventional diffusion-weighted images (DWI), and validated the similarity between the original and synthetic DTI. Thirty-two healthy volunteers were prospectively recruited. DTI and DWI were obtained with six and three directions of the motion probing gradient (MPG), respectively. Identical imaging planes were paired to train the image-to-image translation model, which synthesized one MPG direction from the DWI; this process was repeated for each of the six MPG directions. Regions of interest (ROIs) in the lentiform nucleus, thalamus, posterior limb of the internal capsule, posterior thalamic radiation, and splenium of the corpus callosum were created and applied to maps derived from the original and synthetic DTI. The mean values and signal-to-noise ratio (SNR) of the original and synthetic maps for each ROI were compared, and Bland-Altman plots of the original versus synthetic data were evaluated. Although the synthetic data in the test dataset showed larger standard deviations for all values and lower SNR than the original data, the Bland-Altman plots showed similar distributions for the paired measurements. Synthetic DTI could thus be generated from conventional DWI with an image-to-image translation model.
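A Bland-Altman comparison like the one described reduces to the bias and 95% limits of agreement of the paired differences. A minimal sketch with hypothetical ROI values (the real analysis used per-ROI map statistics not reproduced here):

```python
import numpy as np

def bland_altman(original, synthetic):
    """Bias and 95% limits of agreement between paired ROI measurements."""
    original = np.asarray(original, dtype=float)
    synthetic = np.asarray(synthetic, dtype=float)
    diff = synthetic - original            # per-subject difference
    bias = diff.mean()                     # systematic offset
    spread = 1.96 * diff.std(ddof=1)       # half-width of the limits of agreement
    return bias, (bias - spread, bias + spread)

# Hypothetical fractional anisotropy values for one ROI across five subjects.
original = [0.71, 0.68, 0.74, 0.70, 0.69]
synthetic = [0.70, 0.69, 0.72, 0.71, 0.67]
print(bland_altman(original, synthetic))
```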


Subjects
Deep Learning, White Matter, Humans, Corpus Callosum/diagnostic imaging, Signal-to-Noise Ratio, Internal Capsule, Diffusion Magnetic Resonance Imaging/methods
3.
Lancet Healthy Longev ; 4(9): e478-e486, 2023 09.
Article in English | MEDLINE | ID: mdl-37597530

ABSTRACT

BACKGROUND: Chest radiographs are widely available and cost-effective; however, their usefulness as a biomarker of ageing using multi-institutional data remains underexplored. The aim of this study was to develop a biomarker of ageing from chest radiography and examine the correlation between the biomarker and diseases. METHODS: In this retrospective, multi-institutional study, we trained, tuned, and externally tested an artificial intelligence (AI) model to estimate the age of healthy individuals using chest radiographs as a biomarker. For the biomarker modelling phase of the study, we used healthy chest radiographs consecutively collected between May 22, 2008, and Dec 28, 2021, from three institutions in Japan. Data from two institutions were used for training, tuning, and internal testing, and data from the third institution were used for external testing. To evaluate the performance of the AI model in estimating ages, we calculated the correlation coefficient, mean square error, root mean square error, and mean absolute error. The correlation investigation phase of the study included chest radiographs from individuals with a known disease that were consecutively collected between Jan 1, 2018, and Dec 31, 2021, from an additional two institutions in Japan. We investigated the odds ratios (ORs) for various diseases given the difference between the AI-estimated age and chronological age (ie, the difference-age). FINDINGS: We included 101 296 chest radiographs from 70 248 participants across five institutions. In the biomarker modelling phase, the external test dataset from 3467 healthy participants included 8046 radiographs. Between the AI-estimated age and chronological age, the correlation coefficient was 0·95 (99% CI 0·95-0·95), the mean square error was 15·0 years² (99% CI 14·0-15·0), the root mean square error was 3·8 years (99% CI 3·8-3·9), and the mean absolute error was 3·0 years (99% CI 3·0-3·1). In the correlation investigation phase, the external test datasets from 34 197 participants with a known disease included 34 197 radiographs. The ORs for difference-age were as follows: 1·04 (99% CI 1·04-1·05) for hypertension; 1·02 (1·01-1·03) for hyperuricaemia; 1·05 (1·03-1·06) for chronic obstructive pulmonary disease; 1·08 (1·06-1·09) for interstitial lung disease; 1·05 (1·03-1·06) for chronic renal failure; 1·04 (1·03-1·06) for atrial fibrillation; 1·03 (1·02-1·04) for osteoporosis; and 1·05 (1·03-1·06) for liver cirrhosis. INTERPRETATION: The AI-estimated age using chest radiographs showed a strong correlation with chronological age in the healthy cohorts. Furthermore, in cohorts of individuals with known diseases, the difference between estimated age and chronological age correlated with various chronic diseases. The use of this biomarker might pave the way for enhanced risk stratification methodologies, individualised therapeutic interventions, and innovative early diagnostic and preventive approaches towards age-associated pathologies. FUNDING: None. TRANSLATION: For the Japanese translation of the abstract see Supplementary Materials section.
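Disease odds ratios per year of difference-age are what a logistic regression on difference-age yields. A simulated, hedged sketch follows; the study's actual model and any covariate adjustment are not reproduced, and all variable names and data are invented:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
difference_age = rng.normal(0, 4, n)        # AI-estimated minus chronological age
# Simulated disease whose log-odds rise with difference-age; the true OR here
# is exp(0.05) ~ 1.05, the same magnitude as several ORs in the abstract.
p = 1 / (1 + np.exp(-(-2.0 + 0.05 * difference_age)))
disease = rng.binomial(1, p)

X = sm.add_constant(difference_age)
fit = sm.Logit(disease, X).fit(disp=0)
print("OR per +1 year difference-age:", np.exp(fit.params[1]))
print("99% CI:", np.exp(fit.conf_int(alpha=0.01)[1]))   # 99% CI, as in the abstract
```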


Subjects
Aging, Artificial Intelligence, Humans, Japan, Retrospective Studies, Biomarkers
4.
Radiology ; 308(2): e223016, 2023 08.
Article in English | MEDLINE | ID: mdl-37526545

ABSTRACT

Background Carbon 11 (11C)-methionine is a useful PET radiotracer for the management of patients with glioma, but radiation exposure and lack of molecular imaging facilities limit its use. Purpose To generate synthetic methionine PET images from contrast-enhanced (CE) MRI through an artificial intelligence (AI)-based image-to-image translation model and to compare its performance for grading and prognosis of gliomas with that of real PET. Materials and Methods An AI-based model to generate synthetic methionine PET images from CE MRI was developed and validated from patients who underwent both methionine PET and CE MRI at a university hospital from January 2007 to December 2018 (institutional data set). Pearson correlation coefficients for the maximum and mean tumor to background ratio (TBRmax and TBRmean, respectively) of methionine uptake and the lesion volume between synthetic and real PET were calculated. Two additional open-source glioma databases of preoperative CE MRI without methionine PET were used as the external test set. Using the TBRs, the area under the receiver operating characteristic curve (AUC) for classifying high-grade and low-grade gliomas and overall survival were evaluated. Results The institutional data set included 362 patients (mean age, 49 years ± 19 [SD]; 195 female, 167 male; training, n = 294; validation, n = 34; test, n = 34). In the internal test set, Pearson correlation coefficients were 0.68 (95% CI: 0.47, 0.81), 0.76 (95% CI: 0.59, 0.86), and 0.92 (95% CI: 0.85, 0.95) for TBRmax, TBRmean, and lesion volume, respectively. The external test set included 344 patients with gliomas (mean age, 53 years ± 15; 192 male, 152 female; high grade, n = 269). The AUC for TBRmax was 0.81 (95% CI: 0.75, 0.86) and the overall survival analysis showed a significant difference between the high (2-year survival rate, 27%) and low (2-year survival rate, 71%; P < .001) TBRmax groups. Conclusion The AI-based model-generated synthetic methionine PET images strongly correlated with real PET images and showed good performance for glioma grading and prognostication. Published under a CC BY 4.0 license. Supplemental material is available for this article.
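The tumour-to-background ratios (TBRmax, TBRmean) are ROI statistics over the PET volume. A minimal sketch; the array and mask names are assumptions, not the authors' pipeline:

```python
import numpy as np

def tumor_to_background_ratios(pet_volume, tumor_mask, background_mask):
    """TBRmax and TBRmean: tumor uptake relative to mean background uptake."""
    tumor = pet_volume[tumor_mask.astype(bool)]
    background_mean = pet_volume[background_mask.astype(bool)].mean()
    return tumor.max() / background_mean, tumor.mean() / background_mean

# Toy example: tumor voxels [4.0, 5.0], background voxels [1.0, 1.5].
pet = np.array([4.0, 5.0, 1.0, 1.5])
tbr_max, tbr_mean = tumor_to_background_ratios(
    pet, np.array([1, 1, 0, 0]), np.array([0, 0, 1, 1]))
print(tbr_max, tbr_mean)  # 4.0 and 3.6
```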


Subjects
Brain Neoplasms, Glioma, Humans, Male, Female, Middle Aged, Methionine, Brain Neoplasms/diagnostic imaging, Brain Neoplasms/pathology, Artificial Intelligence, Positron-Emission Tomography/methods, Neoplasm Grading, Glioma/diagnostic imaging, Glioma/pathology, Magnetic Resonance Imaging/methods, Racemethionine
5.
Lancet Digit Health ; 5(8): e525-e533, 2023 08.
Article in English | MEDLINE | ID: mdl-37422342

ABSTRACT

BACKGROUND: Chest radiography is a common and widely available examination. Although cardiovascular structures-such as cardiac shadows and vessels-are visible on chest radiographs, the ability of these radiographs to estimate cardiac function and valvular disease is poorly understood. Using datasets from multiple institutions, we aimed to develop and validate a deep-learning model to simultaneously detect valvular disease and cardiac functions from chest radiographs. METHODS: In this model development and validation study, we trained, validated, and externally tested a deep learning-based model to classify left ventricular ejection fraction, tricuspid regurgitant velocity, mitral regurgitation, aortic stenosis, aortic regurgitation, mitral stenosis, tricuspid regurgitation, pulmonary regurgitation, and inferior vena cava dilation from chest radiographs. The chest radiographs and associated echocardiograms were collected from four institutions between April 1, 2013, and Dec 31, 2021: we used data from three sites (Osaka Metropolitan University Hospital, Osaka, Japan; Habikino Medical Center, Habikino, Japan; and Morimoto Hospital, Osaka, Japan) for training, validation, and internal testing, and data from one site (Kashiwara Municipal Hospital, Kashiwara, Japan) for external testing. We evaluated the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and accuracy. FINDINGS: We included 22 551 radiographs associated with 22 551 echocardiograms obtained from 16 946 patients. The external test dataset featured 3311 radiographs from 2617 patients with a mean age of 72 years [SD 15], of whom 49·8% were male and 50·2% were female. The AUCs, accuracy, sensitivity, and specificity for this dataset were 0·92 (95% CI 0·90-0·95), 86% (85-87), 82% (75-87), and 86% (85-88) for classifying the left ventricular ejection fraction at a 40% cutoff, 0·85 (0·83-0·87), 75% (73-76), 83% (80-87), and 73% (71-75) for classifying the tricuspid regurgitant velocity at a 2·8 m/s cutoff, 0·89 (0·86-0·92), 85% (84-86), 82% (76-87), and 85% (84-86) for classifying mitral regurgitation at the none-mild versus moderate-severe cutoff, 0·83 (0·78-0·88), 73% (71-74), 79% (69-87), and 72% (71-74) for classifying aortic stenosis, 0·83 (0·79-0·87), 68% (67-70), 88% (81-92), and 67% (66-69) for classifying aortic regurgitation, 0·86 (0·67-1·00), 90% (89-91), 83% (36-100), and 90% (89-91) for classifying mitral stenosis, 0·92 (0·89-0·94), 83% (82-85), 87% (83-91), and 83% (82-84) for classifying tricuspid regurgitation, 0·86 (0·82-0·90), 69% (68-71), 91% (84-95), and 68% (67-70) for classifying pulmonary regurgitation, and 0·85 (0·81-0·89), 86% (85-88), 73% (65-81), and 87% (86-88) for classifying inferior vena cava dilation. INTERPRETATION: The deep learning-based model can accurately classify cardiac functions and valvular heart diseases using information from digital chest radiographs. This model can classify values typically obtained from echocardiography in a fraction of the time, with low system requirements and the potential to be continuously available in areas where echocardiography specialists are scarce or absent. FUNDING: None.
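Each (AUC, accuracy, sensitivity, specificity) quadruple above comes from comparing model scores against the echocardiographic label at a fixed cutoff. A minimal scikit-learn sketch; the threshold and data are illustrative:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

def classification_metrics(y_true, y_score, threshold=0.5):
    """AUC plus sensitivity/specificity/accuracy at a probability threshold."""
    auc = roc_auc_score(y_true, y_score)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {"AUC": auc,
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "accuracy": (tp + tn) / (tp + tn + fp + fn)}

# Example: classify LVEF < 40% from hypothetical model scores.
print(classification_metrics([0, 1, 1, 0], [0.2, 0.9, 0.6, 0.4], threshold=0.5))
```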


Subjects
Heart Valve Diseases, Mitral Valve Insufficiency, Humans, Male, Female, Aged, Retrospective Studies, Artificial Intelligence, Stroke Volume, Ventricular Function, Left, Heart Valve Diseases/complications, Heart Valve Diseases/diagnosis, Mitral Valve Insufficiency/complications, Mitral Valve Insufficiency/diagnostic imaging, Radiography
6.
Eur Respir Rev ; 32(168)2023 Jun 30.
Article in English | MEDLINE | ID: mdl-37286217

ABSTRACT

BACKGROUND: Deep learning (DL), a subset of artificial intelligence (AI), has been applied to pneumothorax diagnosis to aid physician diagnosis, but no meta-analysis has been performed. METHODS: A search of multiple electronic databases through September 2022 was performed to identify studies that applied DL for pneumothorax diagnosis using imaging. Meta-analysis via a hierarchical model to calculate the summary area under the curve (AUC) and pooled sensitivity and specificity for both DL and physicians was performed. Risk of bias was assessed using a modified Prediction Model Study Risk of Bias Assessment Tool. RESULTS: In 56 of the 63 primary studies, pneumothorax was identified from chest radiography. The total AUC was 0.97 (95% CI 0.96-0.98) for both DL and physicians. The total pooled sensitivity was 84% (95% CI 79-89%) for DL and 85% (95% CI 73-92%) for physicians and the pooled specificity was 96% (95% CI 94-98%) for DL and 98% (95% CI 95-99%) for physicians. More than half of the original studies (57%) had a high risk of bias. CONCLUSIONS: Our review found the diagnostic performance of DL models was similar to that of physicians, although the majority of studies had a high risk of bias. Further pneumothorax AI research is needed.
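The review pooled estimates with a hierarchical bivariate model; as a simplified stand-in, here is univariate random-effects (DerSimonian-Laird) pooling of logit-transformed sensitivities. This conveys the idea of between-study heterogeneity but is not the exact model used:

```python
import numpy as np

def dersimonian_laird_pool(tp, n):
    """Random-effects pooling of proportions (e.g., sensitivities) on the logit scale."""
    tp, n = np.asarray(tp, float), np.asarray(n, float)
    p = (tp + 0.5) / (n + 1.0)                  # continuity-corrected proportions
    y = np.log(p / (1 - p))                     # logit transform
    v = 1 / (tp + 0.5) + 1 / (n - tp + 0.5)     # within-study variances (logit scale)
    w = 1 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)          # Cochran's Q
    tau2 = max(0.0, (q - (len(y) - 1)) /
               (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1 / (v + tau2)                       # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    return 1 / (1 + np.exp(-y_re))              # back-transform to a proportion

# Hypothetical per-study true positives and diseased-case counts.
print(dersimonian_laird_pool(tp=[45, 80, 30], n=[50, 100, 40]))
```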


Subjects
Deep Learning, Pneumothorax, Humans, Pneumothorax/diagnostic imaging, Artificial Intelligence, Sensitivity and Specificity, Diagnostic Imaging
7.
J Orthop Sci ; 2023 May 24.
Article in English | MEDLINE | ID: mdl-37236873

ABSTRACT

BACKGROUND: Early diagnosis of rotator cuff tears is essential for appropriate and timely treatment. Although radiography is the most used technique in clinical practice, it is difficult to accurately rule out rotator cuff tears with radiography as an initial imaging modality. Deep learning-based artificial intelligence has recently been applied in medicine, especially diagnostic imaging. This study aimed to develop a deep learning algorithm as a screening tool for rotator cuff tears based on radiography. METHODS: We used 2803 shoulder radiographs of the true anteroposterior view to develop the deep learning algorithm. Radiographs were labeled 0 (intact or low-grade partial-thickness rotator cuff tears) or 1 (high-grade partial-thickness or full-thickness rotator cuff tears). The diagnosis of rotator cuff tears was determined based on arthroscopic findings. The diagnostic performance of the deep learning algorithm was assessed by calculating the area under the curve (AUC), sensitivity, negative predictive value (NPV), and negative likelihood ratio (LR-) on the test datasets, using a cutoff value chosen on the validation datasets to give high expected sensitivity. Furthermore, the diagnostic performance for each rotator cuff tear size was evaluated. RESULTS: At this high-sensitivity cutoff, the AUC, sensitivity, NPV, and LR- were 0.82, 84/92 (91.3%), 102/110 (92.7%), and 0.16, respectively. The sensitivity, NPV, and LR- for full-thickness rotator cuff tears were 69/73 (94.5%), 102/106 (96.2%), and 0.10, respectively, while the diagnostic performance for partial-thickness rotator cuff tears was lower, with sensitivity of 15/19 (78.9%), NPV of 102/106 (96.2%), and LR- of 0.39. CONCLUSIONS: Our algorithm had a high diagnostic performance for full-thickness rotator cuff tears. The deep learning algorithm based on shoulder radiography can help screen for rotator cuff tears when an appropriate cutoff value is set. LEVEL OF EVIDENCE: Level III: Diagnostic Study.
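The reported screening statistics follow from the 2x2 confusion table. A minimal sketch; the abstract gives tp = 84, fn = 8, tn = 102 but not the false-positive count, so the fp value below is a placeholder chosen only to make the example run:

```python
def screening_metrics(tp, fn, tn, fp):
    """Sensitivity, NPV, and negative likelihood ratio from a 2x2 table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)
    lr_minus = (1 - sensitivity) / specificity
    return sensitivity, npv, lr_minus

# Reported counts: sensitivity 84/92 (tp=84, fn=8) and NPV 102/110 (tn=102, fn=8);
# fp=86 is a placeholder, not a value from the paper.
print(screening_metrics(tp=84, fn=8, tn=102, fp=86))
```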

8.
Cancers (Basel) ; 15(7)2023 Apr 04.
Article in English | MEDLINE | ID: mdl-37046801

ABSTRACT

We aimed to develop a deep learning (DL) model to predict postoperative early recurrence (within 2 years) of hepatocellular carcinoma (HCC) based on contrast-enhanced computed tomography (CECT) imaging. This study included 543 patients who underwent initial hepatectomy for HCC, randomly divided into training, validation, and test datasets at a ratio of 8:1:1. Several clinical variables and arterial-phase CECT images were used to create the predictive models for early recurrence, implemented using convolutional neural networks with a multilayer perceptron as the classifier. Furthermore, the Youden index was used to discriminate between high- and low-risk groups, and the importance of each explanatory variable for early recurrence was calculated using permutation importance. The DL predictive model achieved area under the curve values of 0.73 (validation dataset) and 0.71 (test dataset). Postoperative early recurrence incidences in the high- and low-risk groups were 73% and 30%, respectively (p = 0.0057). Permutation importance demonstrated that, among the explanatory variables, the CECT imaging analysis had the highest importance value. We thus developed a DL model to predict postoperative early HCC recurrence; DL-based analysis is effective for determining treatment strategies in patients with HCC.
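Selecting the high-/low-risk cutoff with the Youden index works as follows; a minimal scikit-learn sketch with invented recurrence labels and risk scores:

```python
import numpy as np
from sklearn.metrics import roc_curve

def youden_cutoff(y_true, y_score):
    """Risk-score threshold maximizing Youden's J = sensitivity + specificity - 1."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    return thresholds[np.argmax(tpr - fpr)]

# Hypothetical early-recurrence labels and model risk scores.
labels = [0, 0, 1, 0, 1, 1, 0, 1]
scores = [0.1, 0.3, 0.8, 0.4, 0.7, 0.9, 0.2, 0.5]
cut = youden_cutoff(labels, scores)
print(cut, [int(s >= cut) for s in scores])  # threshold and high-risk assignments
```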

9.
J Digit Imaging ; 36(1): 178-188, 2023 02.
Article in English | MEDLINE | ID: mdl-35941407

ABSTRACT

Accurate estimation of mortality and time to death at admission for COVID-19 patients is important and several deep learning models have been created for this task. However, there are currently no prognostic models which use end-to-end deep learning to predict time to event for admitted COVID-19 patients using chest radiographs and clinical data. We retrospectively implemented a new artificial intelligence model combining DeepSurv (a multilayer-perceptron implementation of the Cox proportional hazards model) and a convolutional neural network (CNN) using 1356 COVID-19 inpatients. For comparison, we also prepared DeepSurv only with clinical data, DeepSurv only with images (CNNSurv), and Cox proportional hazards models. Clinical data and chest radiographs at admission were used to estimate patient outcome (death or discharge) and duration to the outcome. Harrell's concordance index (c-index) for the DeepSurv with CNN model was 0.82 (0.75-0.88), significantly higher than for the DeepSurv only with clinical data model (c-index = 0.77 (0.69-0.84), p = 0.011), CNNSurv (c-index = 0.70 (0.63-0.79), p = 0.001), and the Cox proportional hazards model (c-index = 0.71 (0.63-0.79), p = 0.001). These results suggest that the time-to-event prognosis model became more accurate when chest radiographs and clinical data were used together.
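Harrell's c-index can be computed with lifelines. A sketch on simulated survival data; the study's actual inputs (radiographs plus clinical data) and model are not reproduced here:

```python
import numpy as np
from lifelines.utils import concordance_index

# Simulated follow-up times (days), model risk scores, and event indicators
# (1 = death observed, 0 = discharged/censored) for illustration only.
rng = np.random.default_rng(0)
time = rng.exponential(20, 200)
risk = -time + rng.normal(0, 5, 200)     # higher risk should mean shorter survival
event = rng.binomial(1, 0.6, 200)

# lifelines expects scores where *higher* predicts *longer* survival, so negate risk.
c_index = concordance_index(time, -risk, event)
print(f"Harrell's c-index: {c_index:.2f}")
```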


Subjects
COVID-19, Deep Learning, Humans, Artificial Intelligence, Retrospective Studies, Radiography
10.
Br J Radiol ; 95(1140): 20220058, 2022 Dec 01.
Article in English | MEDLINE | ID: mdl-36193755

ABSTRACT

OBJECTIVES: The purpose of this study was to develop an artificial intelligence-based model to prognosticate COVID-19 patients at admission by combining clinical data and chest radiographs. METHODS: This retrospective study used the Stony Brook University COVID-19 dataset of 1384 inpatients. After exclusions, 1356 patients were randomly divided into training (1083) and test datasets (273). We implemented three artificial intelligence models, which classified mortality, ICU admission, or ventilation risk. Each model had three submodels with different inputs: clinical data, chest radiographs, and both. We showed the importance of the variables using SHapley Additive exPlanations (SHAP) values. RESULTS: The mortality prediction model was best overall with area under the curve, sensitivity, specificity, and accuracy of 0.79 (0.72-0.86), 0.74 (0.68-0.79), 0.77 (0.61-0.88), and 0.74 (0.69-0.79) for the clinical data-based model; 0.77 (0.69-0.85), 0.67 (0.61-0.73), 0.81 (0.67-0.92), 0.70 (0.64-0.75) for the image-based model, and 0.86 (0.81-0.91), 0.76 (0.70-0.81), 0.77 (0.61-0.88), 0.76 (0.70-0.81) for the mixed model. The mixed model had the best performance (p value < 0.05). The radiographs ranked fourth for prognostication overall, and first of the inpatient tests assessed. CONCLUSIONS: These results suggest that prognosis models become more accurate if AI-derived chest radiograph features and clinical data are used together. ADVANCES IN KNOWLEDGE: This AI model evaluates chest radiographs together with clinical data in order to classify patients as having high or low mortality risk. This work shows that chest radiographs taken at admission have significant COVID-19 prognostic information compared to clinical data other than age and sex.
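SHAP-based variable ranking, as used here, can be sketched with the shap package. The tabular gradient-boosting model and synthetic features below are stand-ins for the study's mixed image-plus-clinical submodels:

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical tabular features (e.g., age, sex, labs) and a binary outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 500) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Mean absolute SHAP value per feature gives a global importance ranking.
importance = np.abs(shap_values).mean(axis=0)
print(np.argsort(-importance))  # features ordered from most to least important
```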


Subjects
COVID-19, Humans, COVID-19/diagnostic imaging, Artificial Intelligence, Retrospective Studies, Radiography, Prognosis
11.
Radiol Artif Intell ; 4(2): e210221, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35391769

ABSTRACT

Purpose: To develop an artificial intelligence-based model to detect mitral regurgitation on chest radiographs. Materials and Methods: This retrospective study included echocardiographs and associated chest radiographs consecutively collected at a single institution between July 2016 and May 2019. Associated radiographs were those obtained within 30 days of echocardiography. These radiographs were labeled as positive or negative for mitral regurgitation on the basis of the echocardiographic reports and were divided into training, validation, and test datasets. An artificial intelligence model was developed by using the training dataset and was tuned by using the validation dataset. To evaluate the model, the area under the curve, sensitivity, specificity, accuracy, positive predictive value, and negative predictive value were assessed by using the test dataset. Results: This study included a total of 10 367 images from 5270 patients. The training dataset included 8240 images (4216 patients), the validation dataset included 1073 images (527 patients), and the test dataset included 1054 images (527 patients). The area under the curve, sensitivity, specificity, accuracy, positive predictive value, and negative predictive value in the test dataset were 0.80 (95% CI: 0.77, 0.82), 71% (95% CI: 67, 75), 74% (95% CI: 70, 77), 73% (95% CI: 70, 75), 68% (95% CI: 64, 72), and 77% (95% CI: 73, 80), respectively. Conclusion: The developed deep learning-based artificial intelligence model may differentiate patients with and without mitral regurgitation by using chest radiographs. Keywords: Computer-aided Diagnosis (CAD), Cardiac, Heart, Valves, Supervised Learning, Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms. Supplemental material is available for this article. © RSNA, 2022.
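Pairing each radiograph with an echocardiogram within 30 days is the kind of step pandas merge_asof handles. A hypothetical sketch; the table and column names are invented, not from the paper:

```python
import pandas as pd

# Hypothetical tables: one row per chest radiograph, one per echocardiogram.
xrays = pd.DataFrame({
    "patient_id": [1, 1, 2],
    "xray_date": pd.to_datetime(["2018-01-05", "2018-06-01", "2018-03-10"])})
echos = pd.DataFrame({
    "patient_id": [1, 2],
    "echo_date": pd.to_datetime(["2018-01-20", "2018-03-01"]),
    "mr_positive": [1, 0]})

# Pair each radiograph with the nearest same-patient echocardiogram,
# keeping only pairs acquired within 30 days of each other.
paired = pd.merge_asof(xrays.sort_values("xray_date"),
                       echos.sort_values("echo_date"),
                       left_on="xray_date", right_on="echo_date",
                       by="patient_id", direction="nearest",
                       tolerance=pd.Timedelta(days=30))
labeled = paired.dropna(subset=["mr_positive"])  # unpaired radiographs dropped
print(labeled)
```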

12.
Eur Radiol ; 32(9): 5890-5897, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35357542

ABSTRACT

OBJECTIVE: The purpose of this study was to develop an artificial intelligence (AI)-based model to detect features of atrial fibrillation (AF) on chest radiographs. METHODS: This retrospective study included consecutively collected chest radiographs of patients who had echocardiography at our institution from July 2016 to May 2019. Eligible radiographs had been acquired within 30 days of the echocardiography. These radiographs were labeled as AF-positive or AF-negative based on the associated electronic medical records; then, each patient was randomly divided into training, validation, and test datasets in an 8:1:1 ratio. A deep learning-based model to classify radiographs as with or without AF was trained on the training dataset, tuned with the validation dataset, and evaluated with the test dataset. RESULTS: The training dataset included 11,105 images (5637 patients; 3145 male, mean age ± standard deviation, 68 ± 14 years), the validation dataset included 1388 images (704 patients, 397 male, 67 ± 14 years), and the test dataset included 1375 images (706 patients, 395 male, 68 ± 15 years). Applying the model to the validation and test datasets gave a respective area under the curve of 0.81 (95% confidence interval, 0.78-0.85) and 0.80 (0.76-0.84), sensitivity of 0.76 (0.70-0.81) and 0.70 (0.64-0.76), specificity of 0.75 (0.72-0.77) and 0.74 (0.72-0.77), and accuracy of 0.75 (0.72-0.77) and 0.74 (0.71-0.76). CONCLUSION: Our AI can identify AF on chest radiographs, which provides a new way for radiologists to infer AF. KEY POINTS: • A deep learning-based model was trained to detect atrial fibrillation in chest radiographs, showing that there are indicators of atrial fibrillation visible even on static images. • The validation and test datasets each gave a solid performance with area under the curve, sensitivity, and specificity of 0.81, 0.76, and 0.75, respectively, for the validation dataset, and 0.80, 0.70, and 0.74, respectively, for the test dataset. • The saliency maps highlighted anatomical areas consistent with those reported for atrial fibrillation on chest radiographs, such as the atria.
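The 8:1:1 patient-level split described here (no patient appearing in more than one set) maps onto scikit-learn's GroupShuffleSplit. A minimal sketch with invented image and patient identifiers:

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

def patient_level_split(image_ids, patient_ids, seed=0):
    """8:1:1 train/validation/test split with no patient spanning two sets."""
    image_ids = np.asarray(image_ids)
    patient_ids = np.asarray(patient_ids)
    # First carve off ~20% of patients, then halve that into validation and test.
    first = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=seed)
    train_idx, rest_idx = next(first.split(image_ids, groups=patient_ids))
    second = GroupShuffleSplit(n_splits=1, test_size=0.5, random_state=seed)
    val_rel, test_rel = next(second.split(image_ids[rest_idx],
                                          groups=patient_ids[rest_idx]))
    return train_idx, rest_idx[val_rel], rest_idx[test_rel]

# 40 hypothetical radiographs from 20 patients (two images each).
images = np.array([f"xr_{i:03d}" for i in range(40)])
patients = np.repeat(np.arange(20), 2)
train, val, test = patient_level_split(images, patients)
print(len(train), len(val), len(test))  # roughly 32 / 4 / 4
```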


Subjects
Artificial Intelligence, Atrial Fibrillation, Deep Learning, Aged, Aged, 80 and over, Atrial Fibrillation/diagnostic imaging, Female, Humans, Male, Middle Aged, Radiography, Radiography, Thoracic/methods, Retrospective Studies
13.
Ann Nucl Med ; 36(5): 468-478, 2022 May.
Article in English | MEDLINE | ID: mdl-35182328

ABSTRACT

OBJECTIVE: It is important to detect parathyroid adenomas by parathyroid scintigraphy with 99m-technetium sestamibi (99mTc-MIBI) before surgery. This study aimed to develop and validate deep learning (DL)-based models to detect parathyroid adenoma in patients with primary hyperparathyroidism from parathyroid scintigrams with 99mTc-MIBI. METHODS: DL-based models for detecting parathyroid adenoma in early- and late-phase parathyroid scintigrams were developed and evaluated separately. The training dataset used to train the models was collected from 192 patients (165 adenoma cases, mean age: 64 years ± 13, 145 women) and the validation dataset used to tune the models was collected from 45 patients (30 adenoma cases, mean age: 67 years ± 12, 37 women). The images were collected from patients who were pathologically diagnosed with parathyroid adenomas or in whom no lesions could be detected by either parathyroid scintigraphy or ultrasonography at our institution from June 2010 to March 2019. The models were tested on a dataset collected from 44 patients (30 adenoma cases, mean age: 67 years ± 12, 38 women) who underwent scintigraphy from April 2019 to March 2020. The models' lesion-based sensitivity and mean false positive indications per image (mFPI) were assessed with the test dataset. RESULTS: In the test dataset, the sensitivity was 82% [95% confidence interval 72-92%] with an mFPI of 0.44 for the early-phase model, and 83% [73-92%] with an mFPI of 0.31 for the delayed-phase model. CONCLUSIONS: The DL-based models were able to detect parathyroid adenomas with a high sensitivity using parathyroid scintigraphy with 99m-technetium sestamibi.
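Lesion-based sensitivity and mFPI are simple ratios over the test set. A minimal sketch; the counts below are illustrative values consistent with the reported delayed-phase figures, not published counts, and the matching of candidate marks to lesions is abstracted away:

```python
def lesion_sensitivity_and_mfpi(n_detected_lesions, n_lesions,
                                n_false_positives, n_images):
    """Lesion-based sensitivity and mean false positive indications per image."""
    return n_detected_lesions / n_lesions, n_false_positives / n_images

# Illustrative: 25 of 30 adenomas found, 14 false positives across 44 images
# gives sensitivity ~0.83 and mFPI ~0.31, matching the delayed-phase result.
print(lesion_sensitivity_and_mfpi(25, 30, 14, 44))
```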


Subjects
Adenoma, Deep Learning, Hyperparathyroidism, Primary, Parathyroid Neoplasms, Adenoma/complications, Adenoma/diagnostic imaging, Aged, Female, Humans, Hyperparathyroidism, Primary/diagnostic imaging, Hyperparathyroidism, Primary/pathology, Male, Middle Aged, Parathyroid Glands/diagnostic imaging, Parathyroid Glands/pathology, Parathyroid Neoplasms/diagnostic imaging, Parathyroid Neoplasms/surgery, Radionuclide Imaging, Radiopharmaceuticals, Sensitivity and Specificity, Technetium, Technetium Tc 99m Sestamibi
14.
Eur Heart J Digit Health ; 3(1): 20-28, 2022 Mar.
Article in English | MEDLINE | ID: mdl-36713993

ABSTRACT

Aims: We aimed to develop models to detect aortic stenosis (AS) from chest radiographs-one of the most basic imaging tests-with artificial intelligence. Methods and results: We used 10 433 retrospectively collected digital chest radiographs from 5638 patients to train, validate, and test three deep learning models. Chest radiographs were collected from patients who had also undergone echocardiography at a single institution between July 2016 and May 2019. These were labelled from the corresponding echocardiography assessments as AS-positive or AS-negative. The radiographs were separated on a patient basis into training [8327 images from 4512 patients, mean age 65 ± (standard deviation) 15 years], validation (1041 images from 563 patients, mean age 65 ± 14 years), and test (1065 images from 563 patients, mean age 65 ± 14 years) datasets. The soft voting-based ensemble of the three developed models had the best overall performance for predicting AS with an area under the receiver operating characteristic curve, sensitivity, specificity, accuracy, positive predictive value, and negative predictive value of 0.83 (95% confidence interval 0.77-0.88), 0.78 (0.67-0.86), 0.71 (0.68-0.73), 0.71 (0.68-0.74), 0.18 (0.14-0.23), and 0.97 (0.96-0.98), respectively, in the validation dataset and 0.83 (0.78-0.88), 0.83 (0.74-0.90), 0.69 (0.66-0.72), 0.71 (0.68-0.73), 0.23 (0.19-0.28), and 0.97 (0.96-0.98), respectively, in the test dataset. Conclusion: Deep learning models using chest radiographs have the potential to differentiate between radiographs of patients with and without AS. Lay Summary: We created artificial intelligence (AI) models using deep learning to identify aortic stenosis (AS) from chest radiographs. Three AI models were developed and evaluated with 10 433 retrospectively collected radiographs and labelled from echocardiography reports. The ensemble AI model could detect AS in a test dataset with an area under the receiver operating characteristic curve of 0.83 (95% confidence interval 0.78-0.88). Since chest radiography is a cost-effective and widely available imaging test, our model can provide an additive resource for the detection of AS.
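Soft voting, the ensembling method named above, averages the member models' predicted probabilities before thresholding. A minimal sketch; the probabilities below are made up:

```python
import numpy as np

def soft_vote(probabilities, threshold=0.5):
    """Average per-model AS probabilities, then threshold the ensemble score."""
    mean_prob = np.mean(probabilities, axis=0)
    return mean_prob, (mean_prob >= threshold).astype(int)

# Hypothetical probabilities from the three models for four radiographs.
probs = [[0.2, 0.7, 0.9, 0.4],
         [0.3, 0.6, 0.8, 0.5],
         [0.1, 0.8, 0.7, 0.3]]
print(soft_vote(probs))  # ensemble scores [0.2, 0.7, 0.8, 0.4], labels [0, 1, 1, 0]
```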

15.
BMC Cancer ; 21(1): 1120, 2021 Oct 18.
Article in English | MEDLINE | ID: mdl-34663260

ABSTRACT

BACKGROUND: We investigated the performance improvement of physicians with varying levels of chest radiology experience when using a commercially available artificial intelligence (AI)-based computer-assisted detection (CAD) software to detect lung cancer nodules on chest radiographs from multiple vendors. METHODS: Chest radiographs and their corresponding chest CT scans were retrospectively collected from one institution between July 2017 and June 2018. Two radiologists from among the authors annotated pathologically proven lung cancer nodules on the chest radiographs while referencing CT. Eighteen readers (nine general physicians and nine radiologists) from nine institutions interpreted the chest radiographs. The readers interpreted the radiographs alone and then reinterpreted them referencing the CAD output. Suspected nodules were enclosed with a bounding box. These bounding boxes were judged correct if there was significant overlap with the ground truth, specifically, if the intersection over union was 0.3 or higher. The sensitivity, specificity, accuracy, PPV, and NPV of the readers' assessments were calculated. RESULTS: In total, 312 chest radiographs were collected as a test dataset, including 59 malignant images (59 lung cancer nodules) and 253 normal images. The model provided a modest boost to the readers' sensitivity, particularly helping general physicians. The performance of general physicians improved from 0.47 to 0.60 for sensitivity, from 0.96 to 0.97 for specificity, from 0.87 to 0.90 for accuracy, from 0.75 to 0.82 for PPV, and from 0.89 to 0.91 for NPV, while the performance of radiologists improved from 0.51 to 0.60 for sensitivity, from 0.96 to 0.96 for specificity, from 0.87 to 0.90 for accuracy, from 0.76 to 0.80 for PPV, and from 0.89 to 0.91 for NPV. The overall with-CAD to without-CAD performance ratios for sensitivity, specificity, accuracy, PPV, and NPV were 1.22 (1.14-1.30), 1.00 (1.00-1.01), 1.03 (1.02-1.04), 1.07 (1.03-1.11), and 1.02 (1.01-1.03), respectively. CONCLUSION: The AI-based CAD was able to improve the ability of physicians to detect lung cancer nodules in chest radiographs. The use of a CAD model can indicate regions physicians may have overlooked during their initial assessment.
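The IoU >= 0.3 matching criterion can be computed directly from box corners. A minimal sketch; the coordinates are illustrative:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A reader's box counts as a correct detection when IoU >= 0.3.
print(iou((10, 10, 50, 50), (20, 20, 60, 60)) >= 0.3)  # True (IoU ~ 0.39)
```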


Subjects
Lung Neoplasms/diagnostic imaging, Radiographic Image Interpretation, Computer-Assisted/methods, Radiography, Thoracic/methods, Solitary Pulmonary Nodule/diagnostic imaging, Adult, Aged, Aged, 80 and over, Deep Learning, Female, General Practitioners, Humans, Lung/diagnostic imaging, Male, Middle Aged, Radiologists, Retrospective Studies, Sensitivity and Specificity