1.
bioRxiv ; 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38746183

ABSTRACT

Background: Training Large Language Models (LLMs) with in-domain data can significantly enhance their performance, leading to more accurate and reliable question-answering (QA) systems essential for supporting clinical decision-making and educating patients. Methods: This study introduces LLMs trained on in-domain, well-curated ophthalmic datasets. We also present an open-source, substantial ophthalmic language dataset for model training. Our LLMs (EYE-Llama) were first pre-trained on an ophthalmology-specific dataset, including paper abstracts, textbooks, EyeWiki, and Wikipedia articles. Subsequently, the models underwent fine-tuning using a diverse range of QA datasets. The LLMs at each stage were then compared to baseline Llama 2, ChatDoctor, and ChatGPT (GPT-3.5) models using four distinct test sets and evaluated quantitatively (accuracy, F1 score, and BERTScore) and qualitatively by two ophthalmologists. Results: When evaluated on the American Academy of Ophthalmology (AAO) test set with BERTScore as the metric, our models surpassed both Llama 2 and ChatDoctor in F1 score and performed on par with ChatGPT, which has 175 billion parameters (EYE-Llama: 0.57, Llama 2: 0.56, ChatDoctor: 0.56, ChatGPT: 0.57). When evaluated on the MedMCQA test set, the fine-tuned models demonstrated higher accuracy than the Llama 2 and ChatDoctor models (EYE-Llama: 0.39, Llama 2: 0.33, ChatDoctor: 0.29); however, ChatGPT outperformed EYE-Llama with an accuracy of 0.55. When tested on the PubMedQA set, the fine-tuned model showed improved accuracy over the Llama 2, ChatGPT, and ChatDoctor models (EYE-Llama: 0.96, Llama 2: 0.90, ChatGPT: 0.93, ChatDoctor: 0.92). Conclusion: The study shows that pre-training and fine-tuning LLMs like EYE-Llama enhance their performance in specific medical domains. Our EYE-Llama models surpass baseline Llama 2 in all evaluations, highlighting the effectiveness of specialized LLMs in medical QA systems. (Funded by NEI R15EY035804 (MNA) and a UNC Charlotte Faculty Research Grant (MNA).)
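
A minimal sketch of the quantitative evaluation described above: BERTScore for free-form answers and accuracy for multiple-choice sets such as MedMCQA. The answer strings and choice labels below are hypothetical, and this is not the authors' evaluation code.

```python
# Hedged sketch of QA evaluation: BERTScore for free-form answers,
# accuracy for multiple-choice questions. All inputs are hypothetical.
from bert_score import score as bert_score  # pip install bert-score
from sklearn.metrics import accuracy_score

predictions = ["Anti-VEGF injections are first-line therapy for ...",
               "Panretinal photocoagulation is indicated when ..."]
references = ["Intravitreal anti-VEGF agents are the first-line ...",
              "Panretinal photocoagulation should be performed for ..."]

# BERTScore returns per-pair precision, recall, and F1 tensors.
P, R, F1 = bert_score(predictions, references, lang="en")
print(f"Mean BERTScore F1: {F1.mean().item():.2f}")

# Accuracy over predicted answer letters for a multiple-choice test set.
pred_choices, gold_choices = ["A", "C", "B"], ["A", "B", "B"]
print(f"Accuracy: {accuracy_score(gold_choices, pred_choices):.2f}")
```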

2.
medRxiv ; 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38464168

ABSTRACT

Purpose: This study explores the feasibility of using generative machine learning (ML) to translate optical coherence tomography (OCT) images into OCT angiography (OCTA) images, potentially bypassing the need for specialized OCTA hardware. Methods: We implemented a generative adversarial network framework that includes a 2D vascular segmentation model and a 2D OCTA image translation model. The study utilized a public dataset of 500 patients, divided into subsets based on resolution and disease status, to validate the quality of translated OCTA (TR-OCTA) images. The validation employed several quality and quantitative metrics to compare the translated images with ground-truth OCTAs (GT-OCTA). We then quantitatively compared vascular features generated in TR-OCTAs with those in GT-OCTAs to assess the feasibility of using TR-OCTA for objective disease diagnosis. Results: TR-OCTAs showed high image quality in both the 3-mm and 6-mm datasets (high resolution, with moderate structural similarity and contrast quality compared with GT-OCTAs). There were slight discrepancies in vascular metrics, especially in diseased patients. Blood vessel features such as tortuosity and vessel perimeter index showed better agreement with GT-OCTA than density features, which are affected by local vascular distortions. Conclusion: This study presents a promising solution to the limitations of OCTA adoption in clinical practice by using vascular features from TR-OCTA for disease detection. Translational relevance: This study has the potential to significantly enhance the diagnostic process for retinal diseases by making detailed vascular imaging more widely available and reducing dependency on costly OCTA equipment.
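
To make the translation setup concrete, here is a minimal pix2pix-style training step for OCT-to-OCTA translation. It assumes a generator G and a conditional discriminator D defined elsewhere (e.g., a U-Net and a PatchGAN) and illustrates only the adversarial plus L1 objective; it is a sketch under those assumptions, not the authors' architecture or code.

```python
# Hedged sketch of one adversarial training step for OCT -> OCTA translation.
# G, D, and the optimizers are assumed to be constructed elsewhere.
import torch
import torch.nn.functional as F

def translation_step(G, D, opt_G, opt_D, oct_img, gt_octa, l1_weight=100.0):
    # --- Discriminator: real (OCT, GT-OCTA) vs fake (OCT, TR-OCTA) pairs ---
    tr_octa = G(oct_img)
    d_real = D(torch.cat([oct_img, gt_octa], dim=1))
    d_fake = D(torch.cat([oct_img, tr_octa.detach()], dim=1))
    loss_D = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # --- Generator: fool D while staying close to the ground truth in L1 ---
    d_fake = D(torch.cat([oct_img, tr_octa], dim=1))
    loss_G = (F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
              + l1_weight * F.l1_loss(tr_octa, gt_octa))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_G.item(), loss_D.item()
```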

3.
Front Med (Lausanne) ; 10: 1259017, 2023.
Article in English | MEDLINE | ID: mdl-37901412

ABSTRACT

This paper presents a federated learning (FL) approach to training deep learning models for classifying age-related macular degeneration (AMD) from optical coherence tomography image data. We employ residual network and vision transformer encoders for the normal vs. AMD binary classification, integrating four distinct domain adaptation techniques to address domain shift caused by heterogeneous data distributions across institutions. Experimental results indicate that FL strategies can achieve performance competitive with centralized models even though each local model sees only a portion of the training data. Notably, the Adaptive Personalization FL strategy stood out in our evaluations, consistently delivering high performance across all tests thanks to its additional local model. Furthermore, the study provides valuable insights into the efficacy of simpler architectures for image classification, particularly in scenarios where data privacy and decentralization are critical, for both encoders. It suggests future exploration of deeper models and other FL strategies for a more nuanced understanding of these models' performance. Data and code are available at https://github.com/QIAIUNCC/FL_UNCC_QIAI.
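
As context for the FL strategies compared here, the sketch below shows plain FedAvg-style aggregation, the usual baseline in which a server averages client weights in proportion to local dataset size. This is an illustrative baseline only; the paper's strategies (including Adaptive Personalization) are implemented in the linked repository.

```python
# Hedged sketch of FedAvg-style weight aggregation across institutions.
# Assumes floating-point parameters/buffers; integer buffers (e.g.,
# BatchNorm batch counters) may need separate handling in practice.
import copy
import torch

def federated_average(client_state_dicts, client_sizes):
    """Weighted average of client model parameters by local dataset size."""
    total = float(sum(client_sizes))
    avg = copy.deepcopy(client_state_dicts[0])
    for key in avg:
        avg[key] = sum(sd[key] * (n / total)
                       for sd, n in zip(client_state_dicts, client_sizes))
    return avg

# Usage: server broadcasts `avg` back to clients, each client trains
# locally for a few epochs, and the round repeats.
```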

4.
Sci Rep ; 13(1): 6047, 2023 04 13.
Article in English | MEDLINE | ID: mdl-37055475

ABSTRACT

Diabetic retinopathy (DR) is a major cause of vision impairment in diabetic patients worldwide. Given its prevalence, early clinical diagnosis is essential to improve treatment management of DR patients. Despite recent demonstrations of successful machine learning (ML) models for automated DR detection, there is a significant clinical need for robust models that can be trained with smaller datasets and still perform with high diagnostic accuracy on independent clinical datasets (i.e., high model generalizability). Toward this need, we developed a self-supervised contrastive learning (CL)-based pipeline for classification of referable vs. non-referable DR. Self-supervised CL-based pretraining yields enhanced data representations and therefore enables the development of robust and generalizable deep learning (DL) models, even with small labeled datasets. We integrated a neural style transfer (NST) augmentation into the CL pipeline to produce models with better representations and initializations for detecting DR in color fundus images. We compare our CL-pretrained model against two state-of-the-art baseline models pretrained with ImageNet weights. We further investigate model performance with reduced labeled training data (down to 10 percent) to test robustness when training with small labeled datasets. The model was trained and validated on the EyePACS dataset and tested independently on clinical datasets from the University of Illinois at Chicago (UIC). Compared with the baseline models, our CL-pretrained FundusNet model achieved a higher area under the receiver operating characteristic (ROC) curve (AUC) (0.91 [CI 0.898 to 0.930] vs. 0.80 [0.783 to 0.820] and 0.83 [0.801 to 0.853] on UIC data). At 10 percent labeled training data, the FundusNet AUC was 0.81 (0.78 to 0.84) vs. 0.58 (0.56 to 0.64) and 0.63 (0.60 to 0.66) for the baseline models when tested on the UIC dataset. CL-based pretraining with NST significantly improves DL classification performance, helps the model generalize well (transferable from EyePACS to UIC data), and allows training with small annotated datasets, thereby reducing the ground-truth annotation burden on clinicians.
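
For readers unfamiliar with contrastive pretraining, below is a minimal NT-Xent (SimCLR-style) loss over two augmented views of each fundus image, where one view could be the NST-stylized copy. This is an illustrative sketch of the general technique, not the FundusNet implementation.

```python
# Hedged sketch of the NT-Xent contrastive loss: each image appears twice
# (two augmented views); the other view is the positive, all else negatives.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """z1, z2: (N, d) projection embeddings of two views of the same N images."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d), unit norm
    sim = z @ z.t() / temperature                        # cosine similarities
    sim.fill_diagonal_(float("-inf"))                    # exclude self-pairs
    n = z1.size(0)
    # The positive for sample i is its other view: i <-> i + n.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)
```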


Subject(s)
Diabetes Mellitus , Diabetic Retinopathy , Humans , Diabetic Retinopathy/diagnosis , Neural Networks, Computer , Algorithms , Machine Learning , Fundus Oculi
5.
Radiol Artif Intell ; 4(3): e210174, 2022 May.
Article in English | MEDLINE | ID: mdl-35652118

ABSTRACT

Purpose: To develop a deep learning-based risk stratification system for thyroid nodules using US cine images. Materials and Methods: In this retrospective study, 192 biopsy-confirmed thyroid nodules (175 benign, 17 malignant) in 167 unique patients (mean age, 56 years ± 16 [SD], 137 women) undergoing cine US between April 2017 and May 2018 with American College of Radiology (ACR) Thyroid Imaging Reporting and Data System (TI-RADS)-structured radiology reports were evaluated. A deep learning-based system that exploits the cine images obtained during three-dimensional volumetric thyroid scans and outputs malignancy risk was developed and compared, using fivefold cross-validation, against a two-dimensional (2D) deep learning-based model (Static-2DCNN), a radiomics-based model using cine images (Cine-Radiomics), and the ACR TI-RADS level, with histopathologic diagnosis as ground truth. The system was used to revise the ACR TI-RADS recommendation, and its diagnostic performance was compared against the original ACR TI-RADS. Results: The system achieved higher average area under the receiver operating characteristic curve (AUC, 0.88) than Static-2DCNN (0.72, P = .03) and tended toward higher average AUC than Cine-Radiomics (0.78, P = .16) and ACR TI-RADS level (0.80, P = .21). The system downgraded recommendations for 92 benign and two malignant nodules and upgraded none. The revised recommendation achieved higher specificity (139 of 175, 79.4%) than the original ACR TI-RADS (47 of 175, 26.9%; P < .001), with no difference in sensitivity (12 of 17, 71% and 14 of 17, 82%, respectively; P = .63). Conclusion: The risk stratification system using US cine images had higher diagnostic performance than prior models and improved specificity of ACR TI-RADS when used to revise the ACR TI-RADS recommendation. Keywords: Neural Networks, US, Abdomen/GI, Head/Neck, Thyroid, Computer Applications-3D, Oncology, Diagnosis, Supervised Learning, Transfer Learning, Convolutional Neural Network (CNN). Supplemental material is available for this article. © RSNA, 2022.
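
The fivefold cross-validated AUC comparison used above can be sketched as follows, assuming per-nodule feature vectors, binary malignancy labels, and a caller-supplied fit-and-score routine; this is an illustrative outline, not the authors' pipeline.

```python
# Hedged sketch of fivefold cross-validated AUC estimation for a binary
# malignancy classifier. `fit_predict` is a hypothetical user function.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

def cross_validated_auc(fit_predict, features, labels, n_splits=5, seed=0):
    """fit_predict(train_X, train_y, test_X) -> malignancy scores for test_X."""
    aucs = []
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in skf.split(features, labels):
        scores = fit_predict(features[train_idx], labels[train_idx],
                             features[test_idx])
        aucs.append(roc_auc_score(labels[test_idx], scores))
    return float(np.mean(aucs))  # average AUC across the five folds
```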

6.
Quant Imaging Med Surg ; 11(3): 1102-1119, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33654680

ABSTRACT

Quantitative retinal imaging is essential for eye disease detection, staging classification, and treatment assessment. It is known that different eye diseases or severity stages can affect the artery and vein systems in different ways. Therefore, differential artery-vein (AV) analysis can improve the performance of quantitative retinal imaging. In this article, we provide a brief summary of technical rationales and clinical applications of differential AV analysis in fundus photography, optical coherence tomography (OCT), and OCT angiography (OCTA).

7.
Ophthalmol Retina ; 3(10): 826-834, 2019 10.
Article in English | MEDLINE | ID: mdl-31227330

ABSTRACT

PURPOSE: To correlate quantitative OCT angiography (OCTA) biomarkers with clinical features and to predict the extent of visual improvement after ranibizumab treatment for diabetic macular edema (DME) using OCTA biomarkers. DESIGN: Retrospective, longitudinal study in Taiwan. PARTICIPANTS: Fifty eyes of 50 patients with DME and 22 eyes of 22 persons who were healthy except for cataract or refractive error, from 1 hospital. METHODS: Each eye underwent OCT angiography (RTVue XR Avanti System with AngioVue software version 2017.1; Optovue, Fremont, CA), and 3 × 3-mm² en face OCTA images of the superficial and deep layers were obtained at baseline and after 3 monthly injections of ranibizumab in the study group. OCT angiography images were also acquired from the control group. MAIN OUTCOME MEASURES: Five OCTA biomarkers, including foveal avascular zone (FAZ) area (FAZ-A), FAZ contour irregularity (FAZ-CI), average vessel caliber (AVC), vessel tortuosity (VT), and vessel density (VD), were analyzed comprehensively. Best-corrected visual acuity (BCVA) and central retinal thickness (CRT) were also obtained. Student t tests were used to compare OCTA biomarkers between the study and control groups. Linear regression models were used to evaluate correlations between baseline OCTA biomarkers and the changes in BCVA and CRT after treatment. RESULTS: Eyes with DME had larger AVC, VT, FAZ-A, and FAZ-CI and lower VD than those in the control group (P < 0.001 for all). After the loading ranibizumab treatment, these OCTA biomarkers improved but did not return to normal levels. Among all biomarkers, higher inner parafoveal VD in the superficial layer at baseline correlated most significantly with visual gain after treatment in a multiple regression model adjusted for CRT and ellipsoid zone disruption (P < 0.001). For predicting visual improvement, outer parafoveal VD in the superficial layer at baseline showed the largest area under the receiver operating characteristic curve (0.787; P = 0.004). No baseline OCTA biomarker showed a significant correlation specifically with anatomic improvement. CONCLUSIONS: For eyes with DME, parafoveal VD in the superficial layer at baseline was an independent predictor of visual improvement after the loading ranibizumab treatment.
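
The statistical analysis described above can be outlined as follows: a multiple regression of BCVA change on baseline biomarkers with covariate adjustment, plus a ROC analysis of a single baseline biomarker for predicting visual improvement. The file and column names are hypothetical, not from the paper.

```python
# Hedged sketch of the biomarker analysis: OLS regression with adjustment,
# then ROC AUC for one baseline predictor. Data and columns are hypothetical.
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

df = pd.read_csv("dme_octa_biomarkers.csv")  # hypothetical per-eye table

# BCVA change regressed on baseline inner parafoveal VD (superficial layer),
# adjusted for baseline CRT and ellipsoid zone disruption.
X = sm.add_constant(df[["inner_parafoveal_vd_scp",
                        "crt_baseline", "ez_disruption"]])
model = sm.OLS(df["bcva_change"], X).fit()
print(model.summary())

# ROC AUC of baseline outer parafoveal VD for predicting visual improvement
# (binary outcome column).
auc = roc_auc_score(df["visual_improvement"], df["outer_parafoveal_vd_scp"])
print(f"AUC: {auc:.3f}")
```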


Subject(s)
Diabetic Retinopathy/drug therapy , Fluorescein Angiography/methods , Macula Lutea/pathology , Macular Edema/drug therapy , Ranibizumab/administration & dosage , Tomography, Optical Coherence/methods , Visual Acuity , Angiogenesis Inhibitors/administration & dosage , Diabetic Retinopathy/complications , Diabetic Retinopathy/diagnosis , Female , Follow-Up Studies , Fundus Oculi , Humans , Intravitreal Injections , Macula Lutea/drug effects , Macular Edema/diagnosis , Macular Edema/etiology , Male , Middle Aged , Prognosis , Retrospective Studies , Vascular Endothelial Growth Factor A/antagonists & inhibitors
8.
Sci Rep ; 8(1): 8768, 2018 06 08.
Article in English | MEDLINE | ID: mdl-29884832

ABSTRACT

In conventional fundus photography, trans-pupillary illumination delivers illuminating light to the interior of the eye through the peripheral area of the pupil, and only the central part of the pupil can be used for collecting imaging light. Therefore, the field of view of conventional fundus cameras is limited, and pupil dilation is required for evaluating the retinal periphery, which is frequently affected by diabetic retinopathy (DR), retinopathy of prematurity (ROP), and other chorioretinal conditions. We report here a nonmydriatic wide-field fundus camera employing trans-pars-planar illumination, which delivers illuminating light through the pars plana, an area outside of the pupil. Trans-pars-planar illumination frees the entire pupil for imaging purposes only, so wide-field fundus photography can be readily achieved with less pupil dilation. For proof-of-concept testing, we demonstrated a prototype instrument, built entirely from off-the-shelf components, that achieves 90° fundus view coverage in single-shot fundus images without the need for pharmacologic pupil dilation.


Subject(s)
Fluorescein Angiography/instrumentation , Fundus Oculi , Retinal Vessels/diagnostic imaging , Diabetic Retinopathy/diagnostic imaging , Equipment Design , Humans , Lighting , Retinopathy of Prematurity/diagnostic imaging
9.
Opt Lett ; 43(11): 2551-2554, 2018 Jun 01.
Article in English | MEDLINE | ID: mdl-29856427

ABSTRACT

A portable fundus imager is essential for emerging telemedicine screening and point-of-care examination of eye diseases. However, existing portable fundus cameras have limited field of view (FOV) and frequently require pupillary dilation. We report here a miniaturized indirect ophthalmoscopy-based nonmydriatic fundus camera with a snapshot FOV up to 67° external angle, which corresponds to a 101° eye angle. The wide-field fundus camera consists of a near-infrared light source (LS) for retinal guidance and a white LS for color retinal imaging. By incorporating digital image registration and glare elimination methods, a dual-image acquisition approach was used to achieve reflection artifact-free fundus photography.
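
The dual-image acquisition idea (register two captures, then combine them to remove reflection artifacts) can be illustrated with the OpenCV sketch below. The registration method (ORB feature matching with a homography) and the glare-suppression rule (per-pixel minimum, since glare is bright) are assumptions for illustration; the paper's exact algorithms are not reproduced here.

```python
# Hedged sketch: register frame B onto frame A, then fuse with a per-pixel
# minimum so that bright reflection artifacts present in only one frame
# are suppressed. Registration/fusion choices are illustrative assumptions.
import cv2
import numpy as np

def fuse_dual_capture(img_a, img_b):
    gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)

    # ORB keypoints and brute-force Hamming matching between the two frames.
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(gray_a, None)
    kp_b, des_b = orb.detectAndCompute(gray_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)[:200]

    # Robust homography mapping frame B coordinates into frame A.
    src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    warped_b = cv2.warpPerspective(img_b, H, (img_a.shape[1], img_a.shape[0]))
    return np.minimum(img_a, warped_b)  # glare is bright; the minimum suppresses it
```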


Subject(s)
Fundus Oculi , Miniaturization , Ophthalmoscopes , Ophthalmoscopy/methods , Photography/methods , Diagnostic Techniques, Ophthalmological , Eye Diseases/diagnosis , Humans , Infrared Rays , Mydriatics/administration & dosage , Telemedicine