Results 1 - 6 of 6
1.
medRxiv ; 2024 Feb 13.
Article in English | MEDLINE | ID: mdl-38405807

ABSTRACT

Stargardt disease and age-related macular degeneration are the leading causes of blindness in the juvenile and geriatric populations, respectively. The formation of atrophic regions of the macula is a hallmark of the end stages of both diseases. The progression of these diseases is tracked using various imaging modalities, two of the most common being fundus autofluorescence (FAF) imaging and spectral-domain optical coherence tomography (SD-OCT). This study investigates the use of longitudinal FAF and SD-OCT imaging data (months 0, 6, 12, and 18) for predictive modelling of future atrophy in Stargardt disease and geographic atrophy. To achieve this objective, we develop a set of novel deep convolutional neural networks, enhanced with recurrent network units for longitudinal prediction and with concurrent learning of ensemble network units (termed ReConNet), which take advantage of improved retinal layer features beyond mean intensity features. Using FAF images, the neural network presented in this paper achieved mean (± standard deviation, SD) and median Dice coefficients of 0.895 (± 0.086) and 0.922 for Stargardt atrophy, and 0.864 (± 0.113) and 0.893 for geographic atrophy. Using SD-OCT images for Stargardt atrophy, the network achieved mean and median Dice coefficients of 0.882 (± 0.101) and 0.906, respectively. When predicting only the interval growth of the atrophic lesions from FAF images, mean (± SD) and median Dice coefficients of 0.557 (± 0.094) and 0.559 were achieved for Stargardt atrophy, and 0.612 (± 0.089) and 0.601 for geographic atrophy. Prediction performance on OCT images is comparable to that on FAF, which opens a new, more efficient, and practical avenue for assessing atrophy progression in clinical trials and retina clinics beyond the widely used FAF. These results are highly encouraging for high-performance interval-growth prediction when more frequent or longer-term longitudinal data become available, which is the pressing next step of our ongoing research.
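The Dice coefficients reported above measure the overlap between predicted and observed atrophy segmentations. As a rough illustration of how both the full-lesion and interval-growth variants described in the abstract could be computed from binary masks, here is a minimal NumPy sketch; the mask names, shapes, and example lesions are assumptions for illustration, not part of the study's actual pipeline.

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice coefficient between two binary masks of the same shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Hypothetical binary atrophy masks (1 = atrophic pixel) at baseline and month 18.
baseline     = np.zeros((256, 256), dtype=np.uint8)
future_truth = np.zeros((256, 256), dtype=np.uint8)
future_pred  = np.zeros((256, 256), dtype=np.uint8)
baseline[100:140, 100:140] = 1      # lesion at month 0
future_truth[95:150, 95:150] = 1    # observed lesion at month 18
future_pred[98:148, 98:148] = 1     # model-predicted lesion at month 18

# Full-lesion Dice: overlap of the entire predicted vs. observed future lesion.
print("full-lesion Dice:", dice(future_pred, future_truth))

# Interval-growth Dice: only the newly atrophic area (future lesion minus baseline).
growth_truth = np.logical_and(future_truth, np.logical_not(baseline))
growth_pred  = np.logical_and(future_pred, np.logical_not(baseline))
print("interval-growth Dice:", dice(growth_pred, growth_truth))
```

The interval-growth variant is the harder task because the denominator is restricted to the (much smaller) newly atrophic region, which is consistent with the lower Dice values the abstract reports for growth-only prediction.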

2.
JAMA Netw Open ; 6(8): e2330320, 2023 08 01.
Article in English | MEDLINE | ID: mdl-37606922

ABSTRACT

Importance: Large language models (LLMs) like ChatGPT appear capable of performing a variety of tasks, including answering patient eye care questions, but have not yet been evaluated in direct comparison with ophthalmologists. It remains unclear whether LLM-generated advice is accurate, appropriate, and safe for eye patients. Objective: To evaluate the quality of ophthalmology advice generated by an LLM chatbot in comparison with ophthalmologist-written advice. Design, Setting, and Participants: This cross-sectional study used deidentified data from an online medical forum in which patient questions received responses written by American Academy of Ophthalmology (AAO)-affiliated ophthalmologists. A masked panel of 8 board-certified ophthalmologists was asked to distinguish between answers generated by the ChatGPT chatbot and human answers. Posts were dated between 2007 and 2016; data were accessed in January 2023, and the analysis was performed between March and May 2023. Main Outcomes and Measures: Identification of chatbot and human answers on a 4-point scale (likely or definitely artificial intelligence [AI] vs likely or definitely human) and evaluation of responses for presence of incorrect information, alignment with perceived consensus in the medical community, likelihood to cause harm, and extent of harm. Results: A total of 200 pairs of user questions and answers by AAO-affiliated ophthalmologists were evaluated. The mean (SD) accuracy for distinguishing between AI and human responses was 61.3% (9.7%). Of 800 evaluations of chatbot-written answers, 168 answers (21.0%) were marked as human-written, while 517 of 800 human-written answers (64.6%) were marked as AI-written. Compared with human answers, chatbot answers were more frequently rated as probably or definitely written by AI (prevalence ratio [PR], 1.72; 95% CI, 1.52-1.93). The likelihood of chatbot answers containing incorrect or inappropriate material was comparable with that of human answers (PR, 0.92; 95% CI, 0.77-1.10), and chatbot answers did not differ from human answers in terms of likelihood of harm (PR, 0.84; 95% CI, 0.67-1.07) or extent of harm (PR, 0.99; 95% CI, 0.80-1.22). Conclusions and Relevance: In this cross-sectional study of human-written and AI-generated responses to 200 eye care questions from an online advice forum, a chatbot appeared capable of responding to long user-written eye health posts and largely generated appropriate responses that did not differ significantly from ophthalmologist-written responses in terms of incorrect information, likelihood of harm, extent of harm, or deviation from ophthalmologist community standards. Additional research is needed to assess patient attitudes toward LLM-augmented ophthalmologists vs fully autonomous AI content generation, to evaluate the clarity and acceptability of LLM-generated answers from the patient perspective, to test the performance of LLMs in a greater variety of clinical contexts, and to determine an optimal manner of utilizing LLMs that is ethical and minimizes harm.
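The comparisons above are reported as prevalence ratios (PRs) with 95% confidence intervals. As a hedged illustration of how such a ratio can be computed from 2x2 counts, the sketch below uses a simple unadjusted Wald-type interval on the log scale; the counts are hypothetical and this is not the exact estimation method used in the study.

```python
import math

def prevalence_ratio(a: int, n1: int, c: int, n0: int, z: float = 1.96):
    """Prevalence ratio of group 1 vs. group 0 with a Wald-type 95% CI.

    a / n1: flagged answers / total answers in group 1 (e.g., chatbot)
    c / n0: flagged answers / total answers in group 0 (e.g., human)
    """
    p1, p0 = a / n1, c / n0
    pr = p1 / p0
    se_log = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n0)
    lo = math.exp(math.log(pr) - z * se_log)
    hi = math.exp(math.log(pr) + z * se_log)
    return pr, lo, hi

# Hypothetical counts: 120/800 chatbot answers vs. 130/800 human answers flagged.
pr, lo, hi = prevalence_ratio(120, 800, 130, 800)
print(f"PR = {pr:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
```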


Subjects
Artificial Intelligence, Ophthalmologists, Humans, Cross-Sectional Studies, Software, Language
3.
Front Med (Lausanne) ; 10: 1157016, 2023.
Article in English | MEDLINE | ID: mdl-37122330

ABSTRACT

Purpose: The purpose of this study was to develop a model to predict whether glaucoma will progress to the point of requiring surgery within the following year, using data from electronic health records (EHRs), including both structured data and free-text progress notes. Methods: A cohort of adult glaucoma patients was identified from the EHR at Stanford University between 2008 and 2020, with data including free-text clinical notes, demographics, diagnosis codes, prior surgeries, and clinical information, including intraocular pressure, visual acuity, and central corneal thickness. Words from patients' notes were mapped to ophthalmology domain-specific neural word embeddings. Word embeddings and structured clinical data were combined as inputs to deep learning models to predict whether a patient would undergo glaucoma surgery in the following 12 months using the previous 4-12 months of clinical data. We also evaluated models using only structured data inputs (regression-, tree-, and deep-learning-based models) and models using only text inputs. Results: Of the 3,469 glaucoma patients included in our cohort, 26% underwent surgery. The baseline penalized logistic regression model achieved an area under the receiver operating characteristic curve (AUC) of 0.873 and an F1 score of 0.750, compared with the best tree-based model (random forest, AUC 0.876; F1 0.746), the deep learning structured features model (AUC 0.885; F1 0.757), the deep learning clinical free-text features model (AUC 0.767; F1 0.536), and the deep learning model with both the structured clinical features and free-text features (AUC 0.899; F1 0.745). Discussion: Fusion models combining text and EHR structured data successfully and accurately predicted glaucoma progression to surgery. Future research incorporating imaging data could further optimize this predictive approach and be translated into clinical decision support tools.
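The fusion model described above combines free-text note features (ophthalmology domain-specific word embeddings) with structured clinical variables in a single network. Below is a minimal PyTorch sketch of that general idea; the embedding dimension, feature counts, and layer sizes are illustrative assumptions, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class FusionModel(nn.Module):
    """Toy fusion network: mean-pooled note embeddings + structured EHR features."""

    def __init__(self, embed_dim: int = 300, n_structured: int = 40, hidden: int = 64):
        super().__init__()
        self.text_branch = nn.Sequential(
            nn.Linear(embed_dim, hidden), nn.ReLU(), nn.Dropout(0.3))
        self.struct_branch = nn.Sequential(
            nn.Linear(n_structured, hidden), nn.ReLU(), nn.Dropout(0.3))
        self.head = nn.Linear(2 * hidden, 1)  # logit for surgery within 12 months

    def forward(self, note_embedding: torch.Tensor, structured: torch.Tensor) -> torch.Tensor:
        # note_embedding: (batch, embed_dim) mean of word vectors from progress notes
        # structured:     (batch, n_structured) demographics, IOP, visual acuity, CCT, etc.
        fused = torch.cat([self.text_branch(note_embedding),
                           self.struct_branch(structured)], dim=1)
        return self.head(fused).squeeze(1)

# Hypothetical forward pass on random inputs.
model = FusionModel()
logits = model(torch.randn(8, 300), torch.randn(8, 40))
probs = torch.sigmoid(logits)  # predicted probability of surgery in the next year
```

Concatenating the two branch outputs before the final layer lets the classifier weigh note-derived signals against structured measurements, which is the general pattern the study's fusion results (AUC 0.899) suggest is beneficial.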

4.
Ocul Immunol Inflamm ; 31(9): 1884-1886, 2023 Nov.
Article in English | MEDLINE | ID: mdl-36049048

ABSTRACT

We present a unique case of acute-on-chronic left frontal sinusitis with an abscess in the left orbit, complicated by granulomatosis with polyangiitis and a structural defect of the orbit, among the 87-year-old patient's other health-related conditions. Urgent transfer to tertiary care and diagnostic, surgical, and multidisciplinary management were necessary to achieve a favorable clinical outcome: the eye was left undamaged and the infection did not spread to the brain. This report also reviews the relevant literature.


Subjects
Granulomatosis with Polyangiitis, Orbital Cellulitis, Orbital Diseases, Humans, Aged, 80 and over, Orbit/diagnostic imaging, Orbit/pathology, Granulomatosis with Polyangiitis/complications, Granulomatosis with Polyangiitis/diagnosis, Orbital Diseases/diagnosis, Orbital Diseases/etiology
5.
Cureus ; 14(9): e29630, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36320948

ABSTRACT

Our case report describes the management of a unique penetrating orbital injury. The intraorbital foreign body was an approximately 22 cm long metal dishwasher spring hook lodged in the left orbital apex. An ophthalmological follow-up a couple of weeks after the removal surgery revealed that the patient had developed orbital apex syndrome. We present this unique case so that physicians, medical students, and other emergency and medical professionals can learn about the diagnostic, surgical, and multidisciplinary management necessary to achieve a favorable clinical outcome.

6.
Cureus ; 14(6): e26309, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35911266

ABSTRACT

Pemphigus vulgaris (PV) is an autoimmune disorder affecting the skin and mucous membranes. The condition may be confused with a number of disorders, including Stevens-Johnson syndrome (SJS), toxic epidermal necrolysis (TEN), and erythema multiforme (EM), all of which are life-threatening. Immunohistological and histochemical analyses remain the optimal methods for differentiating these diseases. There is still insufficient evidence regarding the true incidence of ocular disease in PV and its distinct clinical types. This report reviews the case of a 62-year-old male with atypical ocular pemphigus vulgaris along with the relevant literature.
