1.
Res Sq ; 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38978576

ABSTRACT

Over 85 million computed tomography (CT) scans are performed annually in the US, of which approximately one quarter focus on the abdomen. Given the current shortage of both general and specialized radiologists, there is a strong impetus to use artificial intelligence to alleviate the burden of interpreting these complex imaging studies while simultaneously using the images to extract novel physiological insights. Prior state-of-the-art approaches for automated medical image interpretation leverage vision-language models (VLMs) that utilize both the image and the corresponding textual radiology report. However, current medical VLMs are generally limited to 2D images and short reports. To overcome these shortcomings for abdominal CT interpretation, we introduce Merlin, a 3D VLM that leverages both structured electronic health records (EHR) and unstructured radiology reports for pretraining without requiring additional manual annotations. We train Merlin on a high-quality clinical dataset of paired CT scans (6+ million images from 15,331 CTs), EHR diagnosis codes (1.8+ million codes), and radiology reports (6+ million tokens). We comprehensively evaluate Merlin on 6 task types and 752 individual tasks. The non-adapted (off-the-shelf) tasks include zero-shot findings classification (31 findings), phenotype classification (692 phenotypes), and zero-shot cross-modal retrieval (image to findings and image to impressions), while model-adapted tasks include 5-year chronic disease prediction (6 diseases), radiology report generation, and 3D semantic segmentation (20 organs). We perform internal validation on a test set of 5,137 CTs, and external validation on 7,000 clinical CTs and on two public CT datasets (VerSe, TotalSegmentator). Beyond these clinically relevant evaluations, we assess the efficacy of various network architectures and training strategies, showing that Merlin performs favorably relative to existing task-specific baselines. We derive data scaling laws to empirically assess the training data needed for the requisite downstream task performance. Furthermore, unlike conventional VLMs that require hundreds of GPUs for training, we perform all training on a single GPU. This computationally efficient design can help democratize foundation model training, especially for health systems with compute constraints. We plan to release our trained models, code, and dataset, pending manual removal of all protected health information.
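
The abstract does not specify Merlin's training objective in detail. As a rough, hedged illustration of how a 3D image encoder can be pretrained jointly against report text and structured EHR codes, the sketch below pairs a CLIP-style contrastive loss with a multi-label diagnosis-code loss. The encoder modules, the ehr_head classifier, and the equal loss weighting are assumptions for illustration, not Merlin's actual design.

```python
# Hypothetical sketch of a CLIP-style pretraining step for a 3D vision-language
# model with auxiliary EHR supervision; Merlin's real objective may differ.
import torch
import torch.nn.functional as F

def contrastive_step(ct_encoder, text_encoder, ehr_head,
                     ct_volumes, report_tokens, ehr_codes, temperature=0.07):
    """One pretraining step pairing 3D CT embeddings with report embeddings,
    plus multi-label supervision from structured EHR diagnosis codes."""
    img = F.normalize(ct_encoder(ct_volumes), dim=-1)        # (B, D) volume embeddings
    txt = F.normalize(text_encoder(report_tokens), dim=-1)   # (B, D) report embeddings

    logits = img @ txt.t() / temperature                     # (B, B) similarity matrix
    targets = torch.arange(img.size(0), device=img.device)   # matched pairs on the diagonal
    loss_itc = (F.cross_entropy(logits, targets) +
                F.cross_entropy(logits.t(), targets)) / 2    # symmetric InfoNCE

    # EHR diagnosis codes treated as a multi-hot target vector (assumption).
    loss_ehr = F.binary_cross_entropy_with_logits(ehr_head(img), ehr_codes.float())
    return loss_itc + loss_ehr
```

In a setup like this, zero-shot findings classification and cross-modal retrieval reduce to comparing a CT embedding against embeddings of candidate finding or impression text.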

2.
Int J Mol Sci ; 25(10)2024 May 17.
Article in English | MEDLINE | ID: mdl-38791508

ABSTRACT

Cryogenic electron tomography (cryoET) is a powerful tool in structural biology, enabling detailed 3D imaging of biological specimens at nanometer resolution. Despite its potential, cryoET faces challenges such as the missing wedge problem, which limits reconstruction quality due to incomplete data collection angles. Recently, supervised deep learning methods leveraging convolutional neural networks (CNNs) have substantially addressed this issue; however, their pretraining requirements render them susceptible to inaccuracies and artifacts, particularly when representative training data are scarce. To overcome these limitations, we introduce a proof-of-concept unsupervised learning approach using coordinate networks (CNs) that optimizes network weights directly against the input projections. This eliminates the need for pretraining and reduces reconstruction runtime by 3-20× compared to supervised methods. Our in silico results show improved shape completion and reduced missing wedge artifacts, assessed through several voxel-based image quality metrics in real space and a novel directional Fourier Shell Correlation (FSC) metric. Our study illuminates the benefits and considerations of both supervised and unsupervised approaches, guiding the development of improved reconstruction strategies.
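
As a hedged illustration of the core idea (fitting a coordinate network directly to the measured tilt projections, with no pretraining), the sketch below optimizes an MLP over 3D positions so that its simulated projections match the input tilt series. The network size, grid resolution, rotation model, and plain MSE loss are simplifications, not the paper's implementation.

```python
# Minimal, hypothetical sketch of unsupervised tomographic reconstruction with
# a coordinate network; the paper's actual encoding and forward model may differ.
import math
import torch
import torch.nn as nn

class CoordinateNet(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xyz):          # xyz: (N, 3) coordinates in [-1, 1]
        return self.net(xyz)         # predicted density at each coordinate

def render_projection(model, angle, size=64):
    """Rotate the sampling grid about the y-axis by the tilt angle (radians)
    and integrate predicted density along the beam (z) direction."""
    lin = torch.linspace(-1, 1, size)
    x, y, z = torch.meshgrid(lin, lin, lin, indexing="ij")
    coords = torch.stack([x, y, z], dim=-1).reshape(-1, 3)
    c, s = math.cos(angle), math.sin(angle)
    rot = torch.tensor([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    density = model(coords @ rot.T).reshape(size, size, size)
    return density.sum(dim=-1)       # line integral along z -> 2D projection

def reconstruct(projections, angles, steps=2000, lr=1e-3):
    """Optimize the network weights so rendered projections match measured ones."""
    model = CoordinateNet()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = sum(((render_projection(model, a) - p) ** 2).mean()
                   for p, a in zip(projections, angles))
        loss.backward()
        opt.step()
    return model
```

In this sketch, angles inside the missing wedge are simply never supervised; any fill-in comes from the network's implicit smoothness prior rather than from pretrained examples.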


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Unsupervised Machine Learning; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Electron Microscope Tomography/methods; Cryoelectron Microscopy/methods; Algorithms; Deep Learning
3.
medRxiv ; 2024 May 07.
Article in English | MEDLINE | ID: mdl-38766040

ABSTRACT

Analyzing the anatomic shapes of tissues and organs is pivotal for accurate disease diagnostics and clinical decision-making. One prominent disease that depends on anatomic shape analysis is osteoarthritis, which affects 30 million Americans. To advance osteoarthritis diagnostics and prognostics, we introduce ShapeMed-Knee, a 3D shape dataset with 9,376 high-resolution, medical-imaging-based 3D shapes of both femur bone and cartilage. In addition to the data, ShapeMed-Knee includes two benchmarks for assessing reconstruction accuracy and five clinical prediction tasks that assess the utility of learned shape representations. Leveraging ShapeMed-Knee, we develop and evaluate a novel hybrid explicit-implicit neural shape model, which achieves up to 40% better reconstruction accuracy than both a statistical shape model and an implicit neural shape model. Our hybrid models achieve state-of-the-art performance for preserving cartilage biomarkers; they are also the first models to successfully predict localized structural features of osteoarthritis, outperforming shape models and convolutional neural networks applied to raw magnetic resonance images and segmentations. The ShapeMed-Knee dataset provides medical evaluations of how well models reconstruct multiple anatomic surfaces and embed meaningful disease-specific information. ShapeMed-Knee reduces barriers to applying 3D modeling in medicine, and our benchmarks highlight that advances in 3D modeling can enhance diagnosis and risk stratification for complex diseases. The dataset, code, and benchmarks will be made freely accessible.
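
The abstract does not detail how the explicit and implicit components are combined. The sketch below shows one plausible arrangement: an explicit statistical (mode-based) surface model alongside an implicit latent-conditioned signed-distance network. The module names, dimensions, and template size are illustrative assumptions, not the ShapeMed-Knee architecture.

```python
# Hypothetical hybrid explicit-implicit shape representation; the published
# model may combine its components differently.
import torch
import torch.nn as nn

class HybridShapeModel(nn.Module):
    def __init__(self, n_modes=32, latent_dim=64, hidden=256, n_vertices=1024):
        super().__init__()
        # Explicit part: linear shape modes mapping a code to template vertex offsets.
        self.mean_shape = nn.Parameter(torch.zeros(n_vertices, 3))
        self.modes = nn.Parameter(torch.randn(n_modes, n_vertices, 3) * 0.01)
        # Implicit part: an MLP predicting signed distance at arbitrary query points.
        self.sdf = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def explicit_surface(self, shape_code):          # shape_code: (n_modes,)
        """Explicit reconstruction: mean shape plus weighted shape modes."""
        return self.mean_shape + torch.einsum("m,mvd->vd", shape_code, self.modes)

    def implicit_sdf(self, points, latent):          # points: (N, 3), latent: (latent_dim,)
        """Implicit reconstruction: signed distance at query points, conditioned
        on a per-shape latent vector."""
        z = latent.expand(points.shape[0], -1)
        return self.sdf(torch.cat([points, z], dim=-1)).squeeze(-1)
```

Fitting a new knee under this sketch would amount to optimizing the shape code and latent vector against observed surface points; that workflow is likewise an assumption.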

4.
bioRxiv ; 2024 Apr 28.
Article in English | MEDLINE | ID: mdl-38712113

ABSTRACT

Cryogenic electron tomography (cryoET) is a powerful tool in structural biology, enabling detailed 3D imaging of biological specimens at nanometer resolution. Despite its potential, cryoET faces challenges such as the missing wedge problem, which limits reconstruction quality due to incomplete data collection angles. Recently, supervised deep learning methods leveraging convolutional neural networks (CNNs) have substantially addressed this issue; however, their pretraining requirements render them susceptible to inaccuracies and artifacts, particularly when representative training data are scarce. To overcome these limitations, we introduce a proof-of-concept unsupervised learning approach using coordinate networks (CNs) that optimizes network weights directly against the input projections. This eliminates the need for pretraining and reduces reconstruction runtime by 3-20× compared to supervised methods. Our in silico results show improved shape completion and reduced missing wedge artifacts, assessed through several voxel-based image quality metrics in real space and a novel directional Fourier Shell Correlation (FSC) metric. Our study illuminates the benefits and considerations of both supervised and unsupervised approaches, guiding the development of improved reconstruction strategies.
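
The directional FSC metric is introduced in the paper itself and its exact definition is not given in this abstract. The sketch below shows one plausible reading: a standard Fourier Shell Correlation restricted to frequencies inside a cone of Fourier-space directions, so correlation can be reported separately for well-sampled versus missing-wedge directions. The cone construction and parameters are assumptions.

```python
# Hedged sketch of a cone-restricted ("directional") Fourier Shell Correlation;
# the paper's own definition may differ.
import numpy as np

def directional_fsc(vol_a, vol_b, axis=(0.0, 0.0, 1.0),
                    cone_half_angle_deg=30.0, n_shells=32):
    """Correlate two volumes shell-by-shell in Fourier space, using only
    frequencies whose direction lies within a cone around `axis`."""
    fa, fb = np.fft.fftn(vol_a), np.fft.fftn(vol_b)
    grids = np.meshgrid(*[np.fft.fftfreq(s) for s in vol_a.shape], indexing="ij")
    freq = np.stack(grids, axis=-1)                      # per-voxel frequency vector
    radius = np.linalg.norm(freq, axis=-1)
    axis = np.asarray(axis) / np.linalg.norm(axis)
    with np.errstate(invalid="ignore", divide="ignore"):
        cos_angle = np.abs(freq @ axis) / radius         # angle to the cone axis
    in_cone = cos_angle >= np.cos(np.deg2rad(cone_half_angle_deg))

    edges = np.linspace(0.0, radius.max(), n_shells + 1)
    fsc = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (radius >= lo) & (radius < hi) & in_cone
        num = np.sum(fa[mask] * np.conj(fb[mask]))
        den = np.sqrt(np.sum(np.abs(fa[mask]) ** 2) * np.sum(np.abs(fb[mask]) ** 2))
        fsc.append(np.real(num) / den if den > 0 else 0.0)
    return np.array(fsc)
```

Comparing the curve for a cone aligned with the missing wedge against one aligned with well-sampled directions would quantify the anisotropy the abstract describes.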

5.
Nat Med ; 30(4): 1134-1142, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38413730

ABSTRACT

Analyzing vast textual data and summarizing key information from electronic health records imposes a substantial burden on how clinicians allocate their time. Although large language models (LLMs) have shown promise in natural language processing (NLP) tasks, their effectiveness on a diverse range of clinical summarization tasks remains unproven. Here we applied adaptation methods to eight LLMs, spanning four distinct clinical summarization tasks: radiology reports, patient questions, progress notes and doctor-patient dialogue. Quantitative assessments with syntactic, semantic and conceptual NLP metrics reveal trade-offs between models and adaptation methods. A clinical reader study with 10 physicians evaluated summary completeness, correctness and conciseness; in most cases, summaries from our best-adapted LLMs were deemed either equivalent (45%) or superior (36%) compared with summaries from medical experts. The ensuing safety analysis highlights challenges faced by both LLMs and medical experts, as we connect errors to potential medical harm and categorize types of fabricated information. Our research provides evidence of LLMs outperforming medical experts in clinical text summarization across multiple tasks. This suggests that integrating LLMs into clinical workflows could alleviate documentation burden, allowing clinicians to focus more on patient care.
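
As a minimal illustration of scoring adapted-LLM summaries against expert references with automatic NLP metrics, the sketch below computes ROUGE-L (surface overlap) and BERTScore (embedding similarity) using the open-source rouge-score and bert-score packages. These stand in for, and do not reproduce, the study's exact syntactic, semantic, and conceptual metric suite.

```python
# Hedged sketch of automatic summary evaluation; requires:
#   pip install rouge-score bert-score
from rouge_score import rouge_scorer
from bert_score import score as bertscore

def evaluate_summaries(model_summaries, expert_summaries):
    """Average ROUGE-L F1 and BERTScore F1 of model summaries vs. references."""
    scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
    rouge_l = [
        scorer.score(ref, hyp)["rougeL"].fmeasure
        for ref, hyp in zip(expert_summaries, model_summaries)
    ]
    # BERTScore compares contextual embeddings rather than surface overlap.
    _, _, f1 = bertscore(model_summaries, expert_summaries, lang="en")
    return {"rougeL": sum(rouge_l) / len(rouge_l),
            "bertscore_f1": f1.mean().item()}
```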


Subject(s)
Documentation; Semantics; Humans; Electronic Health Records; Natural Language Processing; Physician-Patient Relations
6.
Res Sq ; 2023 Oct 30.
Article in English | MEDLINE | ID: mdl-37961377

ABSTRACT

Sifting through vast textual data and summarizing key information from electronic health records (EHR) imposes a substantial burden on how clinicians allocate their time. Although large language models (LLMs) have shown immense promise in natural language processing (NLP) tasks, their efficacy on a diverse range of clinical summarization tasks has not yet been rigorously demonstrated. In this work, we apply domain adaptation methods to eight LLMs, spanning six datasets and four distinct clinical summarization tasks: radiology reports, patient questions, progress notes, and doctor-patient dialogue. Our thorough quantitative assessment reveals trade-offs between models and adaptation methods, in addition to instances where recent advances in LLMs may not improve results. Further, in a clinical reader study with ten physicians, we show that summaries from our best-adapted LLMs are preferable to human summaries in terms of completeness and correctness. Our ensuing qualitative analysis highlights challenges faced by both LLMs and human experts. Lastly, we correlate traditional quantitative NLP metrics with reader study scores to enhance our understanding of how these metrics align with physician preferences. Our research provides the first evidence of LLMs outperforming human experts in clinical text summarization across multiple tasks. This implies that integrating LLMs into clinical workflows could alleviate documentation burden, empowering clinicians to focus more on personalized patient care and the inherently human aspects of medicine.
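
For the final step the abstract describes, relating automatic metric scores to physician reader-study ratings, a rank correlation is one straightforward choice. The sketch below uses SciPy's Spearman correlation over per-summary scores; the input fields and example values are hypothetical, not the study's data.

```python
# Hedged sketch of correlating an automatic NLP metric with reader-study ratings.
from scipy.stats import spearmanr

def metric_reader_correlation(metric_scores, reader_scores):
    """Spearman rank correlation between per-summary metric values and
    per-summary physician ratings (e.g., completeness or correctness)."""
    rho, p_value = spearmanr(metric_scores, reader_scores)
    return {"spearman_rho": rho, "p_value": p_value}

# Example usage with made-up numbers:
# metric_reader_correlation([0.42, 0.55, 0.61, 0.38], [3, 4, 5, 2])
```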
