Results 1 - 12 of 12
2.
Nat Commun ; 14(1): 4039, 2023 07 07.
Article in English | MEDLINE | ID: mdl-37419921

ABSTRACT

Deep learning (DL) models can harness electronic health records (EHRs) to predict diseases and extract radiologic findings for diagnosis. With ambulatory chest radiographs (CXRs) frequently ordered, we investigated detecting type 2 diabetes (T2D) by combining radiographic and EHR data using a DL model. Our model, developed from 271,065 CXRs and 160,244 patients, was tested on a prospective dataset of 9,943 CXRs. Here we show the model effectively detected T2D with a ROC AUC of 0.84 and a 16% prevalence. The algorithm flagged 1,381 cases (14%) as suspicious for T2D. External validation at a distinct institution yielded a ROC AUC of 0.77, with 5% of patients subsequently diagnosed with T2D. Explainable AI techniques revealed correlations between specific adiposity measures and high predictivity, suggesting CXRs' potential for enhanced T2D screening.
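The evaluation step described above (ROC AUC on a prospective test set, with a fixed fraction of cases flagged as suspicious) can be sketched as follows. This is a minimal illustration, not the study's code: the labels and scores are synthetic stand-ins generated to match the reported prevalence and flagging rate.

```python
# Minimal sketch of the reported evaluation: ROC AUC on held-out predictions,
# plus a threshold chosen so a fixed fraction of cases is flagged.
# Scores and labels here are synthetic stand-ins for the study's model outputs.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 9943                           # size of the prospective test set
labels = rng.random(n) < 0.16      # ~16% T2D prevalence, as in the study
# Synthetic scores: positives shifted upward so the AUC is informative.
scores = rng.normal(loc=labels.astype(float), scale=1.0)

auc = roc_auc_score(labels, scores)

# Flag the top 14% of scores as "suspicious for T2D" (the study flagged 14%).
threshold = np.quantile(scores, 1 - 0.14)
flagged = scores >= threshold

print(f"AUC: {auc:.2f}, flagged: {flagged.mean():.0%}")
```

A quantile-based threshold like this fixes the flagged fraction rather than the operating point; a deployment would more likely pick the threshold from a desired sensitivity or specificity on a validation set.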


Subject(s)
Deep Learning , Diabetes Mellitus, Type 2 , Humans , Diabetes Mellitus, Type 2/diagnostic imaging , Radiography, Thoracic/methods , Prospective Studies , Radiography
3.
Acad Radiol ; 30(4): 739-748, 2023 04.
Article in English | MEDLINE | ID: mdl-35690536

ABSTRACT

RATIONALE AND OBJECTIVES: Computed tomography (CT) is preferred for evaluating solitary pulmonary nodules (SPNs), but access or availability may be lacking; in addition, overlapping anatomy can hinder detection of SPNs on chest radiographs. We developed and evaluated the clinical feasibility of a deep learning algorithm to generate digitally reconstructed tomography (DRT) images of the chest from digitally reconstructed frontal and lateral radiographs (DRRs) and use them to detect SPNs. METHODS: This single-institution retrospective study included 637 patients with noncontrast helical CT of the chest (mean age 68 years, median age 69 years, standard deviation 11.7 years; 355 women) between 11/2012 and 12/2020, with SPNs measuring 10-30 mm. A deep learning model was trained on 562 patients, validated on 60 patients, and tested on the remaining 15 patients. Diagnostic performance (SPN detection) from planar radiography (DRRs and CT scanograms, PR) alone or with DRT was evaluated by two radiologists in an independent blinded fashion. The quality of the DRT SPN image in terms of nodule size and location, morphology, and opacity was also evaluated and compared to the ground-truth CT images. RESULTS: Diagnostic performance was higher from DRT plus PR than from PR alone (area under the receiver operating characteristic curve 0.95-0.98 versus 0.80-0.85; p < 0.05). DRT plus PR enabled diagnosis of SPNs in 11 more patients than PR alone. Interobserver agreement was 0.82 for DRT plus PR and 0.89 for PR alone; interobserver agreement for size and location, morphology, and opacity of the DRT SPN was 0.94, 0.68, and 0.38, respectively. CONCLUSION: For SPN detection, DRT plus PR showed better diagnostic performance than PR alone. Deep learning can be used to generate DRT images and improve detection of SPNs.


Subject(s)
Deep Learning , Lung Neoplasms , Solitary Pulmonary Nodule , Humans , Female , Aged , Solitary Pulmonary Nodule/diagnostic imaging , Feasibility Studies , Retrospective Studies , Tomography, X-Ray Computed/methods , Lung Neoplasms/diagnostic imaging
4.
Lancet Digit Health ; 4(6): e406-e414, 2022 06.
Article in English | MEDLINE | ID: mdl-35568690

ABSTRACT

BACKGROUND: Previous studies in medical imaging have shown disparate abilities of artificial intelligence (AI) to detect a person's race, yet there is no known correlation for race on medical imaging that would be obvious to human experts when interpreting the images. We aimed to conduct a comprehensive evaluation of the ability of AI to recognise a patient's racial identity from medical images. METHODS: Using private (Emory CXR, Emory Chest CT, Emory Cervical Spine, and Emory Mammogram) and public (MIMIC-CXR, CheXpert, National Lung Cancer Screening Trial, RSNA Pulmonary Embolism CT, and Digital Hand Atlas) datasets, we evaluated, first, performance quantification of deep learning models in detecting race from medical images, including the ability of these models to generalise to external environments and across multiple imaging modalities. Second, we assessed possible confounding of anatomic and phenotypic population features by assessing the ability of these hypothesised confounders to detect race in isolation using regression models, and by re-evaluating the deep learning models by testing them on datasets stratified by these hypothesised confounding variables. Last, by exploring the effect of image corruptions on model performance, we investigated the underlying mechanism by which AI models can recognise race. FINDINGS: In our study, we show that standard AI deep learning models can be trained to predict race from medical images with high performance across multiple imaging modalities, which was sustained under external validation conditions (x-ray imaging [area under the receiver operating characteristics curve (AUC) range 0·91-0·99], CT chest imaging [0·87-0·96], and mammography [0·81]). We also showed that this detection is not due to proxies or imaging-related surrogate covariates for race (eg, performance of possible confounders: body-mass index [AUC 0·55], disease distribution [0·61], and breast density [0·61]). 
Finally, we provide evidence to show that the ability of AI deep learning models persisted over all anatomical regions and frequency spectra of the images, suggesting that efforts to control this behaviour when it is undesirable will be challenging and demand further study. INTERPRETATION: The results from our study emphasise that the ability of AI deep learning models to predict self-reported race is itself not the issue of importance. However, our finding that AI can accurately predict self-reported race, even from corrupted, cropped, and noised medical images, often when clinical experts cannot, creates an enormous risk for all model deployments in medical imaging. FUNDING: National Institute of Biomedical Imaging and Bioengineering, MIDRC grant of National Institutes of Health, US National Science Foundation, National Library of Medicine of the National Institutes of Health, and Taiwan Ministry of Science and Technology.


Subject(s)
Deep Learning , Lung Neoplasms , Artificial Intelligence , Early Detection of Cancer , Humans , Retrospective Studies
5.
J Am Coll Radiol ; 19(1 Pt B): 184-191, 2022 01.
Article in English | MEDLINE | ID: mdl-35033309

ABSTRACT

PURPOSE: The aim of this study was to assess racial/ethnic and socioeconomic disparities in the difference between atherosclerotic vascular disease prevalence measured by a multitask convolutional neural network (CNN) deep learning model using frontal chest radiographs (CXRs) and the prevalence reflected by administrative hierarchical condition category codes in two cohorts of patients with coronavirus disease 2019 (COVID-19). METHODS: A CNN model, previously published, was trained to predict atherosclerotic disease from ambulatory frontal CXRs. The model was then validated on two cohorts of patients with COVID-19: 814 ambulatory patients from a suburban location (presenting from March 14, 2020, to October 24, 2020, the internal ambulatory cohort) and 485 hospitalized patients from an inner-city location (hospitalized from March 14, 2020, to August 12, 2020, the external hospitalized cohort). The CNN model predictions were validated against electronic health record administrative codes in both cohorts and assessed using the area under the receiver operating characteristic curve (AUC). The CXRs from the ambulatory cohort were also reviewed by two board-certified radiologists and compared with the CNN-predicted values for the same cohort to produce a receiver operating characteristic curve and the AUC. The atherosclerosis diagnosis discrepancy, Δvasc, referring to the difference between the predicted value and presence or absence of the vascular disease HCC categorical code, was calculated. Linear regression was performed to determine the association of Δvasc with the covariates of age, sex, race/ethnicity, language preference, and social deprivation index. Logistic regression was used to look for an association between the presence of any hierarchical condition category codes with Δvasc and other covariates. 
RESULTS: The CNN prediction for vascular disease from frontal CXRs in the ambulatory cohort had an AUC of 0.85 (95% confidence interval, 0.82-0.89) and in the hospitalized cohort had an AUC of 0.69 (95% confidence interval, 0.64-0.75) against the electronic health record data. In the ambulatory cohort, the consensus radiologists' reading had an AUC of 0.89 (95% confidence interval, 0.86-0.92) relative to the CNN. Multivariate linear regression of Δvasc in the ambulatory cohort demonstrated significant negative associations with non-English-language preference (β = -0.083, P < .05) and Black or Hispanic race/ethnicity (β = -0.048, P < .05) and positive associations with age (β = 0.005, P < .001) and sex (β = 0.044, P < .05). For the hospitalized cohort, age was also significant (β = 0.003, P < .01), as was social deprivation index (β = 0.002, P < .05). The Δvasc variable (odds ratio [OR], 0.34), Black or Hispanic race/ethnicity (OR, 1.58), non-English-language preference (OR, 1.74), and site (OR, 0.22) were independent predictors of having one or more hierarchical condition category codes (P < .01 for all) in the combined patient cohort. CONCLUSIONS: A CNN model was predictive of aortic atherosclerosis in two cohorts (one ambulatory and one hospitalized) with COVID-19. The discrepancy between the CNN model and the administrative code, Δvasc, was associated with language preference in the ambulatory cohort; in the hospitalized cohort, this discrepancy was associated with social deprivation index. The absence of administrative code(s) was associated with Δvasc in the combined cohorts, suggesting that Δvasc is an independent predictor of health disparities. This may suggest that biomarkers extracted from routine imaging studies and compared with electronic health record data could play a role in enhancing value-based health care for traditionally underserved or disadvantaged patients for whom barriers to care exist.
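The discrepancy analysis described above can be sketched as follows: Δvasc is the model-predicted probability minus the binary vascular-disease code, then regressed on covariates. This is an illustrative sketch on synthetic data, and the covariates shown (age, sex) are only a subset of those in the study.

```python
# Hedged sketch of the discrepancy analysis: Δvasc = predicted probability
# minus the 0/1 vascular-disease HCC code, followed by an ordinary least
# squares fit of Δvasc on covariates. All values here are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 814                                          # ambulatory cohort size
pred = rng.random(n)                             # CNN-predicted probability
hcc_code = (rng.random(n) < 0.3).astype(float)   # vascular-disease code present?
delta_vasc = pred - hcc_code                     # the Δvasc discrepancy

# Covariates: an intercept column, age, and sex (0/1).
age = rng.integers(20, 90, n)
sex = rng.integers(0, 2, n)
X = np.column_stack([np.ones(n), age, sex])

# Least-squares fit; beta holds (intercept, age, sex) coefficients.
beta, *_ = np.linalg.lstsq(X, delta_vasc, rcond=None)
print("coefficients (intercept, age, sex):", beta)
```

With synthetic noise the coefficients are near zero; the study's point is that on real data Δvasc varied systematically with language preference, race/ethnicity, and deprivation, which plain least squares like this can surface.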


Subject(s)
COVID-19 , Carcinoma, Hepatocellular , Deep Learning , Liver Neoplasms , Ethnicity , Humans , Radiography , Retrospective Studies , SARS-CoV-2 , Social Deprivation
6.
PLOS Digit Health ; 1(8): e0000057, 2022 Aug.
Article in English | MEDLINE | ID: mdl-36812559

ABSTRACT

We validate a deep learning model predicting comorbidities from frontal chest radiographs (CXRs) in patients with coronavirus disease 2019 (COVID-19) and compare the model's performance with hierarchical condition category (HCC) and mortality outcomes in COVID-19. The model was trained and tested on 14,121 ambulatory frontal CXRs from 2010 to 2019 at a single institution, modeling select comorbidities using the value-based Medicare Advantage HCC Risk Adjustment Model. Sex, age, HCC codes, and risk adjustment factor (RAF) score were used. The model was validated on frontal CXRs from 413 ambulatory patients with COVID-19 (internal cohort) and on initial frontal CXRs from 487 COVID-19 hospitalized patients (external cohort). The discriminatory ability of the model was assessed using receiver operating characteristic (ROC) curves compared to the HCC data from electronic health records, and predicted age and RAF score were compared using correlation coefficient and absolute mean error. The model predictions were used as covariables in logistic regression models to evaluate the prediction of mortality in the external cohort. Predicted comorbidities from frontal CXRs, including diabetes with chronic complications, obesity, congestive heart failure, arrhythmias, vascular disease, and chronic obstructive pulmonary disease, had a total area under ROC curve (AUC) of 0.85 (95% CI: 0.85-0.86). The ROC AUC of predicted mortality for the model was 0.84 (95% CI: 0.79-0.88) for the combined cohorts. This model using only frontal CXRs predicted select comorbidities and RAF score in both internal ambulatory and external hospitalized COVID-19 cohorts and was discriminatory of mortality, supporting its potential use in clinical decision making.

7.
Acad Radiol ; 28(8): 1151-1158, 2021 08.
Article in English | MEDLINE | ID: mdl-34134940

ABSTRACT

RATIONALE AND OBJECTIVES: The clinical prognosis of outpatients with coronavirus disease 2019 (COVID-19) remains difficult to predict, with outcomes including asymptomatic, hospitalization, intubation, and death. Here we determined the prognostic value of an outpatient chest radiograph, together with an ensemble of deep learning algorithms predicting comorbidities and airspace disease to identify patients at a higher risk of hospitalization from COVID-19 infection. MATERIALS AND METHODS: This retrospective study included outpatients with COVID-19 confirmed by reverse transcription-polymerase chain reaction testing who received an ambulatory chest radiography between March 17, 2020 and October 24, 2020. In this study, full admission was defined as hospitalization within 14 days of the COVID-19 test for > 2 days with supplemental oxygen. Univariate analysis and machine learning algorithms were used to evaluate the relationship between the deep learning model predictions and hospitalization for > 2 days. RESULTS: The study included 413 patients, 222 men (54%), with a median age of 51 years (interquartile range, 39-62 years). Fifty-one patients (12.3%) required full admission. A boosted decision tree model produced the best prediction. Variables included patient age, frontal chest radiograph predictions of morbid obesity, congestive heart failure and cardiac arrhythmias, and radiographic opacity, with an internally validated area under the curve (AUC) of 0.837 (95% CI: 0.791-0.883) on a test cohort. CONCLUSION: Deep learning analysis of single frontal chest radiographs was used to generate combined comorbidity and pneumonia scores that predict the need for supplemental oxygen and hospitalization for > 2 days in patients with COVID-19 infection with an AUC of 0.837 (95% confidence interval: 0.791-0.883). Comorbidity scoring may prove useful in other clinical scenarios.
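The modeling step above (a boosted decision tree over age, CXR-derived comorbidity predictions, and radiographic opacity) can be sketched with a standard gradient-boosting pipeline. The features and labels below are invented, and scikit-learn's GradientBoostingClassifier stands in for whatever boosting implementation the study used.

```python
# Sketch of a boosted-tree admission model on synthetic data: age plus four
# scores standing in for the CXR-derived predictions (obesity, CHF,
# arrhythmia, opacity), predicting hospitalization > 2 days.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 413                                # cohort size from the study
age = rng.integers(18, 90, n)
scores = rng.random((n, 4))            # synthetic CXR-derived scores in [0, 1]
X = np.column_stack([age, scores])
# Synthetic outcome correlated with age and the first score, plus noise.
y = (age / 90 + scores[:, 0] + rng.normal(0, 0.3, n)) > 1.1

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"held-out AUC: {auc:.2f}")
```

At this cohort size, a proper estimate would use cross-validation or bootstrapping rather than a single split; the single split here just keeps the sketch short.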


Subject(s)
COVID-19 , Deep Learning , Oxygen/therapeutic use , Adult , COVID-19/diagnostic imaging , COVID-19/therapy , Female , Hospitalization , Humans , Male , Middle Aged , Radiography, Thoracic , Retrospective Studies
8.
Acad Radiol ; 17(6): 795-8, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20457420

ABSTRACT

RATIONALE AND OBJECTIVES: Comprehensive training in cardiac imaging during radiology residency is imperative if radiologists are to maintain a significant role in this rapidly growing field. In this study, radiology chief residents were surveyed to assess the current status of cardiac imaging training in radiology residency programs. The responses to this survey may be helpful in understanding current trends in cardiac imaging training and how such training can be improved in the future. MATERIALS AND METHODS: Chief residents at accredited radiology residency programs were sent an e-mail with a link to a 17-question Web-based survey. The survey assessed the organization of cardiac imaging training in each residency program, imaging modalities incorporated into cardiac imaging training, the role of residents on cardiac imaging rotations, and attitudes of residents about their cardiac imaging training and the future of cardiac imaging. RESULTS: Responses were obtained from 52 of 112 (46%) programs. Seventy-one percent had at least one dedicated cardiac imaging rotation during their residencies. Fifty-two percent and 62% of respondents reported <5 hours of cardiac imaging-related case conferences and didactic lectures per year, respectively. Most had cardiac computed tomography or magnetic resonance imaging incorporated into their cardiac imaging training. Although 92% felt that cardiac imaging training is important, only 17% felt that they currently received adequate training in cardiac imaging. CONCLUSIONS: The majority of residency programs represented in this survey had at least one dedicated cardiac imaging rotation for their residents. Most of these programs had few cardiac imaging-related conferences and lectures per year. Although most chief residents believed that cardiac imaging training is important, only a minority felt that they currently received adequate training in cardiac imaging.


Subject(s)
Cardiology/statistics & numerical data , Diagnostic Imaging/statistics & numerical data , Educational Measurement , Internship and Residency/statistics & numerical data , Physicians/statistics & numerical data , Radiology/education , Radiology/statistics & numerical data , Attitude of Health Personnel , Illinois
9.
Emerg Radiol ; 16(3): 243-5, 2009 May.
Article in English | MEDLINE | ID: mdl-18414910

ABSTRACT

Enteric duplication cysts are rare congenital anomalies that may occur anywhere along the gastrointestinal tract, most commonly involving the small bowel. The distal ileum, jejunum, and duodenum are affected in descending order of frequency. We describe a case of biliary dilatation and duodenal intussusception caused by an enteric duplication cyst in an adult patient. To our knowledge, there are no other reported cases of this entity in an adult in the English literature. Multidetector computed tomography (MDCT) findings are emphasized, and the value of multiplanar reformation (MPR) in forming a correct preoperative differential diagnosis is discussed.


Subject(s)
Biliary Tract Diseases/diagnostic imaging , Cysts/diagnostic imaging , Duodenal Diseases/diagnostic imaging , Intussusception/diagnostic imaging , Adult , Biliary Tract Diseases/etiology , Cysts/complications , Dilatation, Pathologic , Duodenal Diseases/etiology , Humans , Intussusception/etiology , Male , Tomography, X-Ray Computed
10.
Acad Radiol ; 14(4): 426-30, 2007 Apr.
Article in English | MEDLINE | ID: mdl-17368211

ABSTRACT

RATIONALE AND OBJECTIVE: We sought to develop a Bayesian filter that could distinguish positive radiology computed tomography (CT) reports of appendicitis from negative reports with no appendicitis. MATERIALS AND METHODS: Standard unstructured electronic text radiology reports containing the key word appendicitis were obtained using a Java-based text search engine from a hospital General Electric PACS system. A total of 500 randomly selected reports from multiple radiologists were then manually categorized and merged into two separate text files: 250 positive reports and 250 negative findings of appendicitis. The two text files were then processed by the freely available UNIX-based software dbacl 1.9, a digramic Bayesian classifier for text recognition, on a Linux based Pentium 4 system. The software was then trained on the two separate merged text files categories of positive and negative appendicitis. The ability of the Bayesian filter to discriminate between negative and positive reports of appendicitis was then tested on 100 randomly selected reports of appendicitis: 50 positive cases and 50 negative cases. RESULTS: The training time for the Bayesian filter was approximately 2 seconds. The Bayesian filter subsequently was able to categorize 50 of 50 positive reports of appendicitis and 50 of 50 reports of negative appendicitis, in less than 10 seconds. CONCLUSION: A Bayesian filter system can be used to quickly categorize radiology report findings and automatically determine after training, with a high degree of accuracy, whether the reports have text findings of a specific diagnosis. The Bayesian filter can potentially be applied to any type of radiologic report finding and any relevant category.
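The study used dbacl, a digramic (character-bigram) Bayesian classifier. An equivalent pipeline can be sketched with scikit-learn, pairing a character-bigram vectorizer with multinomial naive Bayes. The report snippets below are invented examples, not text from the study.

```python
# Character-bigram naive Bayes text classifier, approximating dbacl's
# digramic model with scikit-learn components.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented report snippets standing in for the 250 + 250 training reports.
positive = [
    "dilated appendix with periappendiceal fat stranding, acute appendicitis",
    "findings consistent with acute appendicitis, appendicolith present",
]
negative = [
    "normal appendix, no evidence of appendicitis",
    "appendix not visualized, no secondary signs of appendicitis",
]

# analyzer="char" with ngram_range=(2, 2) counts character bigrams,
# the same feature family dbacl's digramic model uses.
clf = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(2, 2)),
    MultinomialNB(),
)
clf.fit(positive + negative, ["positive"] * 2 + ["negative"] * 2)

print(clf.predict(["acute appendicitis with fat stranding"])[0])
```

With only four training snippets this is a toy; at the study's scale (500 reports), the same pipeline trains in well under the ~2 seconds reported for dbacl on 2007-era hardware.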


Subject(s)
Appendicitis/diagnostic imaging , Bayes Theorem , Decision Making, Computer-Assisted , Natural Language Processing , Radiology Information Systems , Tomography, X-Ray Computed , Algorithms , Diagnosis, Differential , False Negative Reactions , False Positive Reactions , Humans
11.
Cereb Cortex ; 13(9): 904-10, 2003 Sep.
Article in English | MEDLINE | ID: mdl-12902389

ABSTRACT

In understanding the brain's response to extensive practice and development of high-level, expert skill, a key question is whether the same brain structures remain involved throughout the different stages of learning and a form of adaptation occurs, or a new functional circuit is formed with some structures dropping off and others joining. After training subjects on a set of complex motor tasks (tying knots), we utilized fMRI to observe that in subjects who learned the task well new regional activity emerged in posterior medial structures, i.e. the posterior cingulate gyrus. Activation associated with weak learning of the knots involved areas that mediate visual spatial computations. Brain activity associated with no substantive learning indicated involvement of areas dedicated to the declarative aspects learning such as the anterior cingulate and prefrontal cortex. The new activation for the pattern of strong learning has alternate interpretations involving either retrieval during episodic memory or a shift toward non-executive cognitive control of the task. While these interpretations are not resolved, the study makes clear that single time-point images of motor skill can be misleading because the brain structures that implement action can change following practice.


Subject(s)
Brain/physiology , Learning/physiology , Motor Skills/physiology , Psychomotor Performance/physiology , Adult , Brain Mapping , Cognition/physiology , Female , Gyrus Cinguli/physiology , Humans , Magnetic Resonance Imaging , Male , Memory/physiology , Prefrontal Cortex/physiology , Visual Cortex/physiology
12.
Neuroimage ; 18(1): 117-26, 2003 Jan.
Article in English | MEDLINE | ID: mdl-12507449

ABSTRACT

The functional neuroanatomy associated with processing single words incidentally, outside focal attention, was investigated. We asked subjects (n = 15) to listen, focus on, and comprehend a story narrative, and then single, unrelated but meaningful words were intruded into the ongoing narrative. We also manipulated the type of intruded word, using either neutral or emotionally valent words, to evaluate the extent of semantic processing and a potential encoding advantage for one type of material. Analyses emphasized the areas of activation unique to the intruded words as distinguished from the narrative text. Subjects were normal, healthy adults (n = 15). Compared to narrative text, the intruded words were associated with activation in the right middle temporal gyrus (BA 39) and posterior cingulate/precuneus regions (BA 30, 23). We conclude that the intruded words did make contact with word-level lexical but not necessarily semantic structures in the middle temporal region. The data suggested that the intruded words were processed by a "nonexecutive" monitoring system implemented by a pairing of activation in posterior, medial structures such as the posterior cingulate with deactivation in brain stem structures. This pattern induced a shift to more passive, less effortful, nonstrategic monitoring of the words. Thus, attention processing, not semantic processing, changes best characterized the brain activation unique to the intruded words. This posterior, medial region is discussed as a substrate dedicated to processing a second, incidental stream of information and thereby providing a crucial mechanism for implementing dual processing of the kind examined here.


Subject(s)
Attention/physiology , Cerebral Cortex/physiology , Image Processing, Computer-Assisted , Imaging, Three-Dimensional , Magnetic Resonance Imaging , Semantics , Speech Perception/physiology , Adult , Auditory Pathways/physiology , Brain Mapping , Brain Stem/physiology , Emotions/physiology , Female , Gyrus Cinguli/physiology , Humans , Male , Nerve Net/physiology , Temporal Lobe/physiology , Visual Cortex/physiology