1.
Res Sq ; 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38978576

ABSTRACT

Over 85 million computed tomography (CT) scans are performed annually in the US, of which approximately one quarter focus on the abdomen. Given the current shortage of both general and specialized radiologists, there is a strong impetus to use artificial intelligence to alleviate the burden of interpreting these complex imaging studies while simultaneously using the images to extract novel physiological insights. Prior state-of-the-art approaches for automated medical image interpretation leverage vision language models (VLMs) that utilize both the image and the corresponding textual radiology reports. However, current medical VLMs are generally limited to 2D images and short reports. To overcome these shortcomings for abdominal CT interpretation, we introduce Merlin - a 3D VLM that leverages both structured electronic health records (EHR) and unstructured radiology reports for pretraining without requiring additional manual annotations. We train Merlin on a high-quality clinical dataset of paired CT scans (6+ million images from 15,331 CTs), EHR diagnosis codes (1.8+ million codes), and radiology reports (6+ million tokens). We comprehensively evaluate Merlin on 6 task types and 752 individual tasks. The non-adapted (off-the-shelf) tasks include zero-shot findings classification (31 findings), phenotype classification (692 phenotypes), and zero-shot cross-modal retrieval (image to findings and image to impressions), while model-adapted tasks include 5-year chronic disease prediction (6 diseases), radiology report generation, and 3D semantic segmentation (20 organs). We perform internal validation on a test set of 5,137 CTs, and external validation on 7,000 clinical CTs and on two public CT datasets (VerSe, TotalSegmentator). Beyond these clinically relevant evaluations, we assess the efficacy of various network architectures and training strategies to show that Merlin compares favorably with existing task-specific baselines. We derive data scaling laws to empirically assess training data needs for requisite downstream task performance. Furthermore, unlike conventional VLMs that require hundreds of GPUs for training, we perform all training on a single GPU. This computationally efficient design can help democratize foundation model training, especially for health systems with compute constraints. We plan to release our trained models, code, and dataset, pending manual removal of all protected health information.
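The abstract does not spell out the pretraining objective; the sketch below illustrates one standard way a 3D CT encoder could be aligned with report text, using a CLIP-style contrastive loss. The encoder classes, shapes, and loss choice are assumptions for illustration, not the Merlin implementation.

```python
# Hypothetical sketch of CLIP-style image-report alignment for 3D CT volumes.
# The encoders are assumed to map a volume / a tokenized report to a fixed-size vector.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastivePretrainer(nn.Module):
    def __init__(self, image_encoder: nn.Module, text_encoder: nn.Module):
        super().__init__()
        self.image_encoder = image_encoder  # CT volume -> (B, d) features
        self.text_encoder = text_encoder    # report tokens -> (B, d) features
        self.logit_scale = nn.Parameter(torch.tensor(2.659))  # log(1/0.07), as in CLIP

    def forward(self, volumes: torch.Tensor, report_tokens: torch.Tensor) -> torch.Tensor:
        img = F.normalize(self.image_encoder(volumes), dim=-1)
        txt = F.normalize(self.text_encoder(report_tokens), dim=-1)
        logits = self.logit_scale.exp() * img @ txt.t()  # (B, B) pairwise similarities
        targets = torch.arange(img.size(0), device=img.device)
        # Symmetric InfoNCE loss: matched volume/report pairs lie on the diagonal.
        return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```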

3.
Nat Commun ; 14(1): 4039, 2023 07 07.
Article in English | MEDLINE | ID: mdl-37419921

ABSTRACT

Deep learning (DL) models can harness electronic health records (EHRs) to predict diseases and extract radiologic findings for diagnosis. With ambulatory chest radiographs (CXRs) frequently ordered, we investigated detecting type 2 diabetes (T2D) by combining radiographic and EHR data using a DL model. Our model, developed from 271,065 CXRs and 160,244 patients, was tested on a prospective dataset of 9,943 CXRs. Here we show that the model effectively detected T2D with an ROC AUC of 0.84 at a disease prevalence of 16%. The algorithm flagged 1,381 cases (14%) as suspicious for T2D. External validation at a distinct institution yielded an ROC AUC of 0.77, with 5% of patients subsequently diagnosed with T2D. Explainable AI techniques revealed correlations between specific adiposity measures and high predictivity, suggesting the potential of CXRs for enhanced T2D screening.
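As an illustration of how imaging and tabular EHR inputs might be combined, here is a minimal late-fusion sketch. The backbone, feature sizes, and fusion head are assumptions chosen for brevity and are not taken from the published model.

```python
# Hypothetical late-fusion classifier: CXR image features + tabular EHR features -> T2D logit.
import torch
import torch.nn as nn
import torchvision.models as models

class FusionClassifier(nn.Module):
    def __init__(self, n_ehr_features: int = 32):
        super().__init__()
        backbone = models.resnet18(weights=None)  # any CXR-appropriate CNN would do
        backbone.fc = nn.Identity()               # expose the 512-d pooled image feature
        self.image_branch = backbone              # expects 3-channel input (grayscale CXRs replicated)
        self.ehr_branch = nn.Sequential(nn.Linear(n_ehr_features, 64), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(512 + 64, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, cxr: torch.Tensor, ehr: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.image_branch(cxr), self.ehr_branch(ehr)], dim=1)
        return self.head(feats)  # logit; apply sigmoid to obtain a T2D probability
```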


Subject(s)
Deep Learning , Diabetes Mellitus, Type 2 , Humans , Diabetes Mellitus, Type 2/diagnostic imaging , Radiography, Thoracic/methods , Prospective Studies , Radiography
4.
Br J Radiol ; 95(1139): 20210688, 2022 Oct 01.
Article in English | MEDLINE | ID: mdl-36062807

ABSTRACT

OBJECTIVE: Chest X-rays are the most commonly performed diagnostic examinations. An artificial intelligence (AI) system that evaluates the images quickly and accurately could help reduce workload and streamline patient management, and an automated assistant may shorten interpretation time in daily practice. We aim to investigate whether radiology residents consider the recommendations of an AI system in their final decisions, and to assess the diagnostic performance of the residents and the AI system. METHODS: Posteroanterior (PA) chest X-rays with confirmed diagnoses were evaluated by 10 radiology residents. After interpretation, the residents reviewed the evaluations of the AI algorithm and made their final decisions. The diagnostic performance of the residents without AI and after reviewing the AI results was compared. RESULTS: Across all radiological findings, the residents had a mean sensitivity of 37.9% (vs 39.8% with AI support) and a mean specificity of 93.9% (vs 93.9% with AI support). The residents obtained a mean AUC of 0.660 vs 0.669 with AI support. The AI algorithm's diagnostic accuracy, measured by overall mean AUC, was 0.789. No significant difference was detected between decisions taken with and without the support of AI. CONCLUSION: Although the AI algorithm's diagnostic accuracy was higher than that of the residents, the radiology residents did not change their final decisions after reviewing the AI recommendations. For users to benefit from these tools, the recommendations of the AI system must be more precise. ADVANCES IN KNOWLEDGE: This research provides information about the willingness or resistance of radiologists to work with AI technologies via diagnostic performance tests. It also shows the diagnostic performance of an existing AI algorithm, determined using real-life data.


Subject(s)
Artificial Intelligence , Radiology , Humans , X-Rays , Radiology/methods , Algorithms , Radiologists
5.
NPJ Digit Med ; 5(1): 89, 2022 Jul 11.
Article in English | MEDLINE | ID: mdl-35817953

ABSTRACT

Solid-organ transplantation is a life-saving treatment for end-stage organ disease in highly selected patients. Alongside the tremendous progress of the last several decades, new challenges have emerged. The growing disparity between organ demand and supply requires optimal patient/donor selection and matching. Improvements in long-term graft and patient survival require data-driven diagnosis and management of post-transplant complications. The growing abundance of clinical, genetic, radiologic, and metabolic data in transplantation has led to increasing interest in applying machine-learning (ML) tools that can uncover hidden patterns in large datasets. ML algorithms have been applied to predictive modeling of waitlist mortality, donor-recipient matching, survival prediction, diagnosis and prediction of post-transplant complications, and optimization of immunosuppression and management. In this review, we provide insight into the various applications of ML in transplant medicine, why they were used to evaluate specific clinical questions, and the potential of ML to transform the care of transplant recipients. Thirty-six articles were selected after a comprehensive search of the following databases: Ovid MEDLINE; Ovid MEDLINE Epub Ahead of Print and In-Process & Other Non-Indexed Citations; Ovid Embase; Cochrane Database of Systematic Reviews (Ovid); and Cochrane Central Register of Controlled Trials (Ovid). In summary, these studies showed that ML techniques hold great potential to improve the outcomes of transplant recipients. Future work is required to improve the interpretability of these algorithms, ensure generalizability through larger-scale external validation, and establish the infrastructure needed to permit clinical integration.

6.
Front Immunol ; 13: 867443, 2022.
Article in English | MEDLINE | ID: mdl-35401501

ABSTRACT

Early T-cell development is precisely controlled by E proteins, which include the HEB/TCF12 and E2A/TCF3 transcription factors, together with NOTCH1 and pre-T cell receptor (TCR) signaling. Importantly, perturbations of early T-cell regulatory networks are implicated in leukemogenesis. NOTCH1 gain-of-function mutations invariably lead to T-cell acute lymphoblastic leukemia (T-ALL), whereas inhibition of E proteins accelerates leukemogenesis. Thus, NOTCH1, pre-TCR, E2A and HEB functions are intertwined, but how these pathways contribute individually or synergistically to leukemogenesis remains to be documented. To directly address these questions, we leveraged Cd3e-deficient mice, in which pre-TCR signaling and progression through β-selection are abrogated, to dissect and decouple the roles of pre-TCR, NOTCH1, E2A and HEB in SCL/TAL1-induced T-ALL, via the use of Notch1 gain-of-function transgenic (Notch1ICtg) and Tcf12+/- or Tcf3+/- heterozygous mice. As a result, we now provide evidence that both HEB and E2A restrain cell proliferation at the β-selection checkpoint, while the clonal expansion of SCL-LMO1-induced pre-leukemic stem cells in T-ALL is uniquely dependent on Tcf12 gene dosage. At the molecular level, HEB protein levels are decreased via proteasomal degradation at the leukemic stage, pointing to a reversible loss-of-function mechanism. Moreover, in SCL-LMO1-induced T-ALL, loss of one Tcf12 allele is sufficient to bypass pre-TCR signaling, which is required for Notch1 gain-of-function mutations and for progression to T-ALL. In contrast, Tcf12 monoallelic deletion does not accelerate Notch1IC-induced T-ALL, indicating that Tcf12 and Notch1 operate in the same pathway. Finally, we identify a tumor suppressor gene set downstream of HEB, exhibiting significantly lower expression levels in pediatric T-ALL compared to B-ALL and brain cancer samples, the three most frequent pediatric cancers. In summary, our results indicate a tumor suppressor function for HEB/TCF12 in T-ALL that mitigates cell proliferation controlled by NOTCH1 in pre-leukemic stem cells and prevents NOTCH1-driven progression to T-ALL.


Subject(s)
Precursor T-Cell Lymphoblastic Leukemia-Lymphoma , Animals , Basic Helix-Loop-Helix Transcription Factors/metabolism , Humans , Mice , Precursor T-Cell Lymphoblastic Leukemia-Lymphoma/genetics , Proto-Oncogene Proteins/metabolism , Receptor, Notch1/genetics , Receptor, Notch1/metabolism , Receptors, Antigen, T-Cell , T-Cell Acute Lymphocytic Leukemia Protein 1 , T-Lymphocytes/metabolism , Transcription Factors/metabolism
11.
Cureus ; 12(7): e9448, 2020 Jul 28.
Article in English | MEDLINE | ID: mdl-32864270

ABSTRACT

INTRODUCTION: The need to streamline patient management for coronavirus disease-19 (COVID-19) has become more pressing than ever. Chest X-rays (CXRs) provide a non-invasive (potentially bedside) tool to monitor the progression of the disease. In this study, we present a severity score prediction model for COVID-19 pneumonia based on frontal chest X-ray images. Such a tool can gauge the severity of COVID-19 lung infections (and pneumonia in general) and can be used for escalation or de-escalation of care as well as for monitoring treatment efficacy, especially in the ICU. METHODS: Images from a public COVID-19 database were scored retrospectively by three blinded experts in terms of the extent of lung involvement as well as the degree of opacity. A neural network model pre-trained on large (non-COVID-19) chest X-ray datasets was used to construct features from the COVID-19 images that are predictive for our task. RESULTS: Training a regression model on a subset of the outputs from this pre-trained chest X-ray model predicts our geographic extent score (range 0-8) with a mean absolute error (MAE) of 1.14 and our lung opacity score (range 0-6) with an MAE of 0.78. CONCLUSIONS: These results indicate that our model's ability to gauge the severity of COVID-19 lung infections could be used for escalation or de-escalation of care as well as for monitoring treatment efficacy, especially in the ICU. To enable follow-up work, we make our code, labels, and data available online.
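The abstract describes fitting a regressor on features produced by a pre-trained chest X-ray network; the sketch below shows what that pipeline could look like with an off-the-shelf linear regressor. The feature-extraction step is a placeholder, and the model choice and split are assumptions rather than the published implementation.

```python
# Minimal sketch: regress expert severity scores on features from a pre-trained CXR model.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

def fit_severity_regressor(features: np.ndarray, scores: np.ndarray):
    """features: (n, d) outputs of a pre-trained CXR model for n images (placeholder source);
    scores: (n,) expert-assigned geographic extent scores, range 0-8."""
    X_train, X_test, y_train, y_test = train_test_split(
        features, scores, test_size=0.3, random_state=0
    )
    reg = LinearRegression().fit(X_train, y_train)
    preds = np.clip(reg.predict(X_test), 0, 8)  # keep predictions in the valid score range
    return reg, mean_absolute_error(y_test, preds)
```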

12.
Bioinformatics ; 36(Suppl_1): i417-i426, 2020 07 01.
Article in English | MEDLINE | ID: mdl-32657403

ABSTRACT

MOTIVATION: The recent development of sequencing technologies has revolutionized our understanding of the inner workings of the cell as well as the way disease is treated. A single RNA sequencing (RNA-Seq) experiment, however, measures tens of thousands of parameters simultaneously. While the results are information rich, data analysis is a challenge. Dimensionality reduction methods help with this task by extracting patterns from the data and compressing it into compact vector representations. RESULTS: We present the factorized embeddings (FE) model, a self-supervised deep learning algorithm that simultaneously learns gene and sample representation spaces by tensor factorization. We ran the model on RNA-Seq data from two large-scale cohorts and observed that the sample representation captures information on single-gene and global gene expression patterns. Moreover, we found that the gene representation space was organized such that tissue-specific genes, highly correlated genes, and genes participating in the same GO terms were grouped together. Finally, we compared the vector representation of samples learned by the FE model to those of other similar models on 49 regression tasks. We report that the representations trained with FE ranked first or second in all of the tasks, surpassing other representations, sometimes by a considerable margin. AVAILABILITY AND IMPLEMENTATION: A toy example in the form of a Jupyter Notebook, as well as the code and trained embeddings for this project, can be found at: https://github.com/TrofimovAssya/FactorizedEmbeddings. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
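To make the idea concrete, here is a minimal sketch of a factorized-embeddings-style model in which gene and sample embeddings are learned jointly by predicting the expression value of each (sample, gene) pair. Embedding sizes and the MLP architecture are illustrative assumptions, not the published configuration (see the repository linked above for the actual code).

```python
# Illustrative factorized-embeddings-style model: joint gene/sample embeddings
# trained to reconstruct expression for (sample, gene) pairs.
import torch
import torch.nn as nn

class FactorizedEmbeddings(nn.Module):
    def __init__(self, n_samples: int, n_genes: int, sample_dim: int = 2, gene_dim: int = 50):
        super().__init__()
        self.sample_emb = nn.Embedding(n_samples, sample_dim)
        self.gene_emb = nn.Embedding(n_genes, gene_dim)
        self.mlp = nn.Sequential(
            nn.Linear(sample_dim + gene_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, sample_idx: torch.Tensor, gene_idx: torch.Tensor) -> torch.Tensor:
        # Each (sample, gene) index pair is mapped to a predicted expression value.
        x = torch.cat([self.sample_emb(sample_idx), self.gene_emb(gene_idx)], dim=-1)
        return self.mlp(x).squeeze(-1)

# Training would minimize MSE between predicted and observed expression for each pair,
# so that the learned embedding tables become the sample and gene representation spaces.
```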


Subject(s)
Algorithms , RNA , Sequence Analysis, RNA
13.
PLoS Biol ; 18(4): e3000678, 2020 04.
Article in English | MEDLINE | ID: mdl-32243449

ABSTRACT

Histological atlases of the cerebral cortex, such as those made famous by Brodmann and von Economo, are invaluable for understanding human brain microstructure and its relationship with functional organization in the brain. However, these existing atlases are limited to small numbers of manually annotated samples from a single cerebral hemisphere, measured from 2D histological sections. We present the first whole-brain quantitative 3D laminar atlas of the human cerebral cortex. It was derived from a 3D histological atlas of the human brain at 20-micrometer isotropic resolution (BigBrain), using a convolutional neural network to automatically segment the cortical layers in both hemispheres. Our approach overcomes many of the historical challenges with measurement of histological thickness in 2D, and the resultant laminar atlas provides an unprecedented level of precision and detail. We utilized this BigBrain cortical atlas to test whether previously reported thickness gradients, as measured by MRI in sensory and motor processing cortices, were present in a histological atlas of cortical thickness, and to determine which cortical layers contribute to these gradients. Cortical thickness increased across sensory processing hierarchies, primarily driven by layers III, V, and VI. In contrast, motor-frontal cortices showed the opposite pattern, with decreases in total and pyramidal layer thickness from motor to frontal association cortices. These findings illustrate how this laminar atlas will provide a link between single-neuron morphology, mesoscale cortical layering, macroscopic cortical thickness, and, ultimately, functional neuroanatomy.
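As a toy illustration of the kind of laminar measurement such an atlas enables, the sketch below sums per-layer thickness along a single labelled cortical depth profile. The 20-micrometer step matches the stated BigBrain resolution; the label scheme (1-6 for layers I-VI) and the profile itself are assumptions for illustration.

```python
# Toy per-layer thickness measurement along a layer-labelled cortical depth profile.
import numpy as np

def layer_thicknesses(profile_labels: np.ndarray, step_um: float = 20.0) -> dict:
    """profile_labels: 1-D array of layer labels (1-6) sampled along a cortical depth profile;
    returns thickness in micrometres for each layer present in the labelling scheme."""
    return {layer: float(np.sum(profile_labels == layer)) * step_um for layer in range(1, 7)}

# Example: a short synthetic profile crossing layers I-VI.
profile = np.array([1, 1, 2, 2, 2, 3, 3, 3, 3, 4, 4, 5, 5, 5, 6, 6, 6, 6])
print(layer_thicknesses(profile))  # {1: 40.0, 2: 60.0, 3: 80.0, 4: 40.0, 5: 60.0, 6: 80.0}
```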


Subject(s)
Cerebral Cortex/anatomy & histology , Cerebral Cortex/diagnostic imaging , Imaging, Three-Dimensional/methods , Brain/diagnostic imaging , Humans , Magnetic Resonance Imaging , Neural Networks, Computer