1.
Res Sq ; 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38978576

ABSTRACT

Over 85 million computed tomography (CT) scans are performed annually in the US, of which approximately one quarter focuses on the abdomen. Given the current shortage of both general and specialized radiologists, there is a large impetus to use artificial intelligence to alleviate the burden of interpreting these complex imaging studies while simultaneously using the images to extract novel physiological insights. Prior state-of-the-art approaches for automated medical image interpretation leverage vision language models (VLMs) that utilize both the image and the corresponding textual radiology reports. However, current medical VLMs are generally limited to 2D images and short reports. To overcome these shortcomings for abdominal CT interpretation, we introduce Merlin, a 3D VLM that leverages both structured electronic health records (EHR) and unstructured radiology reports for pretraining without requiring additional manual annotations. We train Merlin using a high-quality clinical dataset of paired CT scans (6+ million images from 15,331 CTs), EHR diagnosis codes (1.8+ million codes), and radiology reports (6+ million tokens). We comprehensively evaluate Merlin on 6 task types and 752 individual tasks. The non-adapted (off-the-shelf) tasks include zero-shot findings classification (31 findings), phenotype classification (692 phenotypes), and zero-shot cross-modal retrieval (image to findings and image to impressions), while model-adapted tasks include 5-year chronic disease prediction (6 diseases), radiology report generation, and 3D semantic segmentation (20 organs). We perform internal validation on a test set of 5,137 CTs, and external validation on 7,000 clinical CTs and on two public CT datasets (VerSe, TotalSegmentator). Beyond these clinically relevant evaluations, we assess the efficacy of various network architectures and training strategies to show that Merlin compares favorably to existing task-specific baselines. We derive data scaling laws to empirically assess how much training data is needed to reach the requisite downstream task performance. Furthermore, unlike conventional VLMs that require hundreds of GPUs for training, we perform all training on a single GPU. This computationally efficient design can help democratize foundation model training, especially for health systems with compute constraints. We plan to release our trained models, code, and dataset, pending manual removal of all protected health information.
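The abstract describes pairing 3D CT volume embeddings with report and EHR text for pretraining. Below is a minimal sketch of the kind of symmetric contrastive (CLIP-style) objective commonly used for such image-text alignment; all names, tensor shapes, and the temperature value are illustrative assumptions, not Merlin's actual implementation.

```python
# Minimal sketch of a CLIP-style contrastive objective for paired
# (CT volume, report) embeddings. Shapes and temperature are assumptions.
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb: torch.Tensor, text_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired embeddings."""
    image_emb = F.normalize(image_emb, dim=-1)       # (B, D)
    text_emb = F.normalize(text_emb, dim=-1)         # (B, D)
    logits = image_emb @ text_emb.t() / temperature  # (B, B) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    # Each volume should match its own report, and vice versa.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Example: batch of 8 paired embeddings with dimension 512.
loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
print(loss.item())
```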

2.
Eur Radiol ; 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38683384

ABSTRACT

OBJECTIVES: To develop and validate an open-source artificial intelligence (AI) algorithm to accurately detect contrast phases in abdominal CT scans. MATERIALS AND METHODS: This retrospective study developed an AI algorithm trained on 739 abdominal CT exams (2016 to 2021) from 200 unique patients, covering 1545 axial series. We segmented five key anatomic structures (aorta, portal vein, inferior vena cava, renal parenchyma, and renal pelvis) using TotalSegmentator, a deep learning-based tool for multi-organ segmentation, combined with a rule-based approach to extract the renal pelvis. Radiomics features were extracted from the anatomical structures for use in a gradient-boosting classifier to identify four contrast phases: non-contrast, arterial, venous, and delayed. Internal and external validation were performed using the F1 score and other classification metrics; external validation used the "VinDr-Multiphase CT" dataset. RESULTS: The training dataset consisted of 172 patients (mean age, 70 years ± 8, 22% women), and the internal test set included 28 patients (mean age, 68 years ± 8, 14% women). In internal validation, the classifier achieved an accuracy of 92.3%, with an average F1 score of 90.7%. During external validation, the algorithm maintained an accuracy of 90.1%, with an average F1 score of 82.6%. Shapley feature attribution analysis indicated that renal and vascular radiodensity values were the most important for phase classification. CONCLUSION: An open-source and interpretable AI algorithm accurately detects contrast phases in abdominal CT scans, with high accuracy and F1 scores in internal and external validation, confirming its generalization capability. CLINICAL RELEVANCE STATEMENT: Contrast phase detection in abdominal CT scans is a critical step for downstream AI applications, deploying algorithms in the clinical setting, and for quantifying imaging biomarkers, ultimately allowing for better diagnostics and increased access to diagnostic imaging. KEY POINTS: Digital Imaging and Communications in Medicine labels are inaccurate for determining the abdominal CT scan phase. AI can accurately discriminate the contrast phase. Accurate contrast phase determination aids downstream AI applications and biomarker quantification.
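The pipeline described above (per-organ radiodensity features feeding a gradient-boosting classifier over four phases) can be sketched as follows. The mask names, feature choices, and synthetic data are assumptions for illustration, not the authors' code.

```python
# Illustrative sketch: simple radiodensity features per segmented structure,
# then a gradient-boosting classifier for the four contrast phases.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score

STRUCTURES = ["aorta", "portal_vein", "inferior_vena_cava",
              "renal_parenchyma", "renal_pelvis"]

def phase_features(ct_hu: np.ndarray, masks: dict) -> np.ndarray:
    """Mean and std of Hounsfield units inside each segmented structure."""
    feats = []
    for name in STRUCTURES:
        voxels = ct_hu[masks[name] > 0]
        feats += [voxels.mean(), voxels.std()]
    return np.array(feats)

# Placeholder feature matrix and labels (0=non-contrast, 1=arterial,
# 2=venous, 3=delayed) standing in for real per-series features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2 * len(STRUCTURES)))
y = rng.integers(0, 4, size=200)
clf = GradientBoostingClassifier(random_state=0).fit(X, y)
print(f1_score(y, clf.predict(X), average="macro"))
```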

3.
NPJ Digit Med ; 6(1): 74, 2023 Apr 26.
Article in English | MEDLINE | ID: mdl-37100953

ABSTRACT

Advancements in deep learning and computer vision provide promising solutions for medical image analysis, potentially improving healthcare and patient outcomes. However, the prevailing paradigm of training deep learning models requires large quantities of labeled training data, which is both time-consuming and cost-prohibitive to curate for medical images. Self-supervised learning can contribute significantly to the development of robust medical imaging models by learning useful representations from large unlabeled medical datasets. In this review, we provide consistent descriptions of different self-supervised learning strategies and present a systematic review of papers published between 2012 and 2022 on PubMed, Scopus, and ArXiv that applied self-supervised learning to medical imaging classification. We screened a total of 412 relevant studies and included 79 papers for data extraction and analysis. With this comprehensive effort, we synthesize the collective knowledge of prior work and provide implementation guidelines for future researchers interested in applying self-supervised learning to their development of medical imaging classification models.
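The two-stage pattern the review surveys (pretrain an encoder on unlabeled images with a pretext task, then fine-tune a small head on limited labels) can be illustrated with a toy sketch. The encoder architecture and the rotation-prediction pretext task are arbitrary choices made for brevity, not a recommendation from the review.

```python
# Minimal sketch of the self-supervised pretrain-then-finetune workflow.
import torch
import torch.nn as nn

encoder = nn.Sequential(                                 # toy 2D encoder
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten())               # -> (B, 32)

# Stage 1: pretext task (here, rotation prediction) on unlabeled data.
rotation_head = nn.Linear(32, 4)                         # 0/90/180/270 degrees
pretext = nn.Sequential(encoder, rotation_head)

# Stage 2: freeze the pretrained encoder, train a label-efficient probe.
for p in encoder.parameters():
    p.requires_grad = False
classifier = nn.Sequential(encoder, nn.Linear(32, 2))    # e.g. disease vs normal

x = torch.randn(8, 1, 64, 64)                            # dummy image batch
print(pretext(x).shape, classifier(x).shape)
```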

4.
J Nucl Cardiol ; 30(3): 986-1000, 2023 06.
Article in English | MEDLINE | ID: mdl-36045250

ABSTRACT

PURPOSE: The aim of this study was to assess and compare the arterial uptake of the inflammatory macrophage-targeting PET tracer [64Cu]Cu-DOTATATE in patients with no or known cardiovascular disease (CVD). METHODS: Seventy-nine patients who had undergone [64Cu]Cu-DOTATATE PET/CT imaging for neuroendocrine neoplasm disease were retrospectively allocated to three groups: controls with no known CVD risk factors (n = 22), patients with CVD risk factors (n = 24), or patients with known ischemic CVD (n = 33). Maximum, mean-of-max, and most-diseased-segment (mds) standardized uptake value (SUV) and target-to-background ratio (TBR) metrics were measured and reported for the carotid arteries and the aorta. Bland-Altman plots were used to assess reproducibility between reviewers. RESULTS: For the carotid arteries, SUVmax (P = .03), SUVmds (P = .05), TBRmax (P < .01), TBRmds (P < .01), and mean-of-max TBR (P = .01) showed an overall group-wise difference in uptake. In the aorta, a group-wise difference was observed only for TBRmds (P = .04). Overall, reproducibility of the reported uptake metrics was excellent for SUVs and good to excellent for TBRs for both the carotid arteries and the aorta. CONCLUSION: Using [64Cu]Cu-DOTATATE PET imaging as a marker of atherosclerotic inflammation, we were able to demonstrate differences in some of the most frequently reported uptake metrics in patients with different degrees of CVD. Carotid artery measurements, whether as maximum uptake values or most-diseased-segment analysis, discriminated best between the groups.
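For readers unfamiliar with the reported metric families, the following sketch shows the usual definitions of TBR (arterial SUV normalized to a blood-pool SUVmean), a most-diseased-segment average, and Bland-Altman limits of agreement between two reviewers. The formulas follow common conventions; the numbers and function names are illustrative assumptions, not the study's data or code.

```python
# Illustrative sketch of TBR, most-diseased-segment averaging, and
# Bland-Altman agreement between two readers. Values are made up.
import numpy as np

def tbr(arterial_suv_max: np.ndarray, bloodpool_suv_mean: float) -> np.ndarray:
    """Per-slice TBRmax: arterial SUVmax divided by blood-pool SUVmean."""
    return arterial_suv_max / bloodpool_suv_mean

def bland_altman(reader_a: np.ndarray, reader_b: np.ndarray):
    """Bias and 95% limits of agreement between two reviewers' measurements."""
    diff = reader_a - reader_b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa

suv_slices = np.array([2.1, 2.4, 2.8, 2.6, 2.2])
tbr_max = tbr(suv_slices, bloodpool_suv_mean=1.3)
mds = np.sort(tbr_max)[-3:].mean()            # most-diseased-segment average
print(mds, bland_altman(suv_slices, suv_slices * 0.95 + 0.1))
```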


Subjects
Atherosclerosis , Positron Emission Tomography Computed Tomography , Humans , Reproducibility of Results , Benchmarking , Retrospective Studies , Fluorodeoxyglucose F18 , Positron-Emission Tomography/methods , Carotid Arteries , Inflammation
5.
Viruses ; 14(11)2022 10 29.
Article in English | MEDLINE | ID: mdl-36366500

ABSTRACT

The principal presumption of phage display biopanning is that the naïve library contains an unbiased repertoire of peptides, and thus, the enriched variants derive from the affinity selection of an entirely random peptide pool. In the current study, we utilized deep sequencing to characterize the widely used Ph.D.™-12 phage display peptide library (New England Biolabs). The next-generation sequencing (NGS) data indicated the presence of stop codons and a high abundance of wild-type clones in the naïve library, which collectively result in a reduced effective size of the library. The analysis of the DNA sequence logo and global and position-specific frequency of amino acids demonstrated significant bias in the nucleotide and amino acid composition of the library inserts. Principal component analysis (PCA) uncovered the existence of four distinct clusters in the naïve library, and the investigation of peptide frequency distribution revealed a broad range of unequal abundances for peptides. Taken together, our data provide strong evidence that the naïve library exhibits substantial departures from randomness at the nucleotide, amino acid, and peptide levels, even though it has not undergone any selective pressure for target binding. This non-uniform sequence representation arises from both M13 phage biology and technical errors in library construction. Our findings highlight the paramount importance of the qualitative assessment of naïve phage display libraries prior to biopanning.
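The compositional analyses mentioned above (position-specific amino acid frequencies and PCA over per-peptide composition) can be sketched as below. The toy 12-mer peptides and variable names are assumptions; a real analysis would run over millions of NGS reads.

```python
# Illustrative sketch: position-specific amino acid frequency matrix and a
# PCA of per-peptide composition, as used to detect bias in a 12-mer library.
from collections import Counter
import numpy as np
from sklearn.decomposition import PCA

AA = "ACDEFGHIKLMNPQRSTVWY"
peptides = ["ACDEFGHIKLMN", "ACDEFGHIKLMW", "MNPQRSTVWYAC"]  # toy 12-mers

# Position-specific frequencies: rows = positions, columns = amino acids.
freq = np.zeros((12, len(AA)))
for pep in peptides:
    for pos, aa in enumerate(pep):
        freq[pos, AA.index(aa)] += 1
freq /= len(peptides)

# Per-peptide composition vectors, then PCA to look for clusters.
comp = np.array([[Counter(p)[a] / 12 for a in AA] for p in peptides])
pcs = PCA(n_components=2).fit_transform(comp)
print(freq.shape, pcs.shape)
```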


Subjects
High-Throughput Nucleotide Sequencing , Peptide Library , Peptides/chemistry , Amino Acids/genetics , Nucleotides
6.
EJNMMI Phys ; 9(1): 51, 2022 Jul 30.
Article in English | MEDLINE | ID: mdl-35907082

ABSTRACT

BACKGROUND: Myocardial perfusion imaging (MPI) using positron emission tomography (PET) tracers is an essential tool in investigating diseases and treatment responses in cardiology. 82Rubidium (82Rb)-PET imaging is advantageous for MPI due to its short half-life, but cannot be used for small animal research due to the long positron range. We aimed to correct for this, enabling MPI with 82Rb-PET in rats. METHODS: The effect of positron range correction (PRC) on 82Rb-PET was examined using two phantoms and in vivo on rats. A NEMA NU-4-inspired phantom was used for image quality evaluation (%standard deviation (%SD), spillover ratio (SOR) and recovery coefficient (RC)). A cardiac phantom was used for assessing spatial resolution. Two rats underwent rest 82Rb-PET to optimize the number of iterations, type of PRC, and respiratory gating. RESULTS: NEMA NU-4 metrics (no PRC vs PRC): %SD 0.087 versus 0.103; SOR (air) 0.022 versus 0.002, SOR (water) 0.059 versus 0.019; RC (3 mm) 0.219 versus 0.584, RC (4 mm) 0.300 versus 0.874, RC (5 mm) 0.357 versus 1.197. Cardiac phantom full width at half maximum (FWHM) and full width at tenth maximum (FWTM) (no PRC vs. PRC): FWHM 6.73 mm versus 3.26 mm (true: 3 mm), FWTM 9.27 mm versus 7.01 mm. The in vivo scans with respiratory gating had a homogeneous myocardium clearly distinguishable from the blood pool. CONCLUSION: PRC improved the spatial resolution for the phantoms and in vivo at the expense of slightly more noise. Combined with respiratory gating, the spatial resolution achieved using PRC should allow for quantitative MPI in small animals.
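The phantom metrics reported above follow standard definitions; the sketch below shows how FWHM/FWTM can be read off a 1D intensity profile and how a recovery coefficient relates a rod measurement to the uniform region. The profile and values are synthetic and only illustrate the formulas, not the study's reconstruction or data.

```python
# Illustrative sketch of FWHM/FWTM from a line profile and a recovery
# coefficient, using a synthetic Gaussian-like profile.
import numpy as np

def width_at_fraction(profile: np.ndarray, spacing_mm: float, frac: float) -> float:
    """Width of a peaked profile at `frac` of its maximum (0.5 -> FWHM, 0.1 -> FWTM)."""
    level = frac * profile.max()
    above = np.where(profile >= level)[0]
    return (above[-1] - above[0]) * spacing_mm

def recovery_coefficient(rod_mean: float, uniform_mean: float) -> float:
    """Measured rod activity relative to the uniform (true) region."""
    return rod_mean / uniform_mean

x = np.linspace(-10, 10, 201)                      # 0.1 mm sample spacing
profile = np.exp(-x**2 / (2 * 1.4**2))             # synthetic wall profile
print(width_at_fraction(profile, 0.1, 0.5),        # FWHM
      width_at_fraction(profile, 0.1, 0.1),        # FWTM
      recovery_coefficient(rod_mean=0.58, uniform_mean=1.0))
```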

7.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 1124-1127, 2021 11.
Article in English | MEDLINE | ID: mdl-34891485

ABSTRACT

Deep learning has had an increasing impact on medical classification problems in recent years, with models trained to high performance. However, neural networks require large amounts of labeled data, which for medical data can be expensive and cumbersome to obtain. We propose a semi-supervised setup using an unsupervised variational autoencoder combined with a supervised classifier to distinguish between atrial fibrillation and non-atrial fibrillation using ECG records from the MIT-BIH Atrial Fibrillation Database. The proposed model was compared to a fully-supervised convolutional neural network at different proportions of labeled and unlabeled data (1%-50% labeled and the remaining unlabeled). The results demonstrate that the semi-supervised approach was superior to the fully-supervised one when using as little as 5% (5,594 samples) of labeled data, achieving an accuracy of 98.7%. The work provides a proof of concept and demonstrates that the proposed semi-supervised setup can train high-accuracy models with small amounts of labeled data.
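A minimal sketch of the semi-supervised idea follows: a VAE learns a latent representation from all ECG segments, while a small classifier on the latent mean is trained only on the labeled subset. The architecture, segment length, and loss weighting are assumptions made for illustration, not the paper's exact model.

```python
# Sketch of a VAE + classifier semi-supervised setup for AF detection.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ECGVAE(nn.Module):
    def __init__(self, seg_len=250, latent=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(seg_len, 128), nn.ReLU())
        self.mu, self.logvar = nn.Linear(128, latent), nn.Linear(128, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(),
                                 nn.Linear(128, seg_len))
        self.clf = nn.Linear(latent, 2)              # AF vs non-AF head

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar, self.clf(mu)

def loss_fn(x, y, recon, mu, logvar, logits, labeled_mask):
    recon_l = F.mse_loss(recon, x)                                # all segments
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    sup = F.cross_entropy(logits[labeled_mask], y[labeled_mask])  # labeled only
    return recon_l + kl + sup

model = ECGVAE()
x = torch.randn(32, 250); y = torch.randint(0, 2, (32,))
mask = torch.zeros(32, dtype=torch.bool); mask[:2] = True         # ~5% labeled
print(loss_fn(x, y, *model(x), mask))
```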


Subjects
Atrial Fibrillation , Neural Networks, Computer , Atrial Fibrillation/diagnosis , Electrocardiography , Humans