1.
J Pediatr Ophthalmol Strabismus ; : 1-7, 2024 May 30.
Article in English | MEDLINE | ID: mdl-38815099

ABSTRACT

PURPOSE: To evaluate the quality, reliability, and readability of online patient educational materials on leukocoria.

METHODS: In this cross-sectional study, the Google search engine was queried for the terms "leukocoria" and "white pupil." The first 50 results for each search term were evaluated against predefined inclusion criteria, excluding duplicates, peer-reviewed papers, forum posts, paywalled content, and multimedia links. Sources were categorized as "institutional" or "private." Three independent raters assessed each website for quality and reliability using the DISCERN, Health on the Net Code of Conduct (HONcode), and JAMA criteria. Readability was evaluated using seven formulas: Flesch Reading Ease (FRE), Flesch-Kincaid Grade Level (FKGL), Simple Measure of Gobbledygook (SMOG) Index, Automated Readability Index (ARI), Linsear Write (LW), Gunning Fog Index (GFI), and Coleman-Liau Index (CLI).

RESULTS: A total of 51 websites were included. Quality, assessed by the DISCERN tool, showed a median score of 4, denoting moderate to high quality, with no significant differences between institutional and private sites or between search terms. HONcode scores indicated variable reliability and trustworthiness (median: 10, range: 3 to 16), with institutional sites excelling in financial disclosure and advertisement differentiation. Both institutional and private sites performed well in reliability and accountability as measured by the JAMA benchmark criteria (median: 3; range: 1 to 4). Readability, averaging an 11.3 ± 3.7 grade level, did not differ significantly between site types or search terms, consistently exceeding the recommended sixth-grade level for patient educational materials.

CONCLUSIONS: The patient educational materials on leukocoria demonstrated moderate to high quality, commendable reliability, and accountability. However, readability scores were above the recommended level for the layperson. [J Pediatr Ophthalmol Strabismus. 20XX;X(X):XX-XX.].
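As context for the readability formulas named in the abstract above, here is a minimal Python sketch of three of them (FRE, FKGL, and SMOG). The syllable counter is a crude vowel-group heuristic of my own; validated readability tools use dictionaries and more careful rules, so treat this only as an illustration of the arithmetic behind the scores.

```python
import math
import re

def count_syllables(word: str) -> int:
    """Crude estimate: count vowel groups, dropping a trailing silent 'e'."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def readability(text: str) -> dict:
    """Compute FRE, FKGL, and SMOG from raw text statistics."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    wps = len(words) / len(sentences)   # words per sentence
    spw = syllables / len(words)        # syllables per word
    return {
        # Flesch Reading Ease: higher = easier (60-70 is plain English)
        "FRE": 206.835 - 1.015 * wps - 84.6 * spw,
        # Flesch-Kincaid Grade Level: U.S. school grade
        "FKGL": 0.39 * wps + 11.8 * spw - 15.59,
        # SMOG Index: grade level driven by polysyllabic word density
        "SMOG": 1.0430 * math.sqrt(polysyllables * 30 / len(sentences)) + 3.1291,
    }
```

A passage of short, monosyllabic sentences scores near or below sixth grade on FKGL and SMOG, while typical medical-website prose, as the study found, lands several grades higher.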

2.
Am J Ophthalmol ; 265: 28-38, 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38614196

ABSTRACT

PURPOSE: To evaluate the quality, readability, and accuracy of large language model (LLM)-generated patient education materials (PEMs) on childhood glaucoma, and the models' ability to improve the readability of existing online information.

DESIGN: Cross-sectional comparative study.

METHODS: We evaluated responses of ChatGPT-3.5, ChatGPT-4, and Bard to 3 separate prompts requesting that they write PEMs on "childhood glaucoma." Prompt A required that PEMs be "easily understandable by the average American." Prompt B required that PEMs be written "at a 6th-grade level using Simple Measure of Gobbledygook (SMOG) readability formula." We then compared the responses' quality (DISCERN questionnaire, Patient Education Materials Assessment Tool [PEMAT]), readability (SMOG, Flesch-Kincaid Grade Level [FKGL]), and accuracy (Likert misinformation scale). To assess improvement of the readability of existing online information, Prompt C requested that each LLM rewrite 20 resources from a Google search of the keyword "childhood glaucoma" to the American Medical Association-recommended "6th-grade level." Rewrites were compared on key metrics such as readability, complex words (≥3 syllables), and sentence count.

RESULTS: All 3 LLMs generated PEMs of high quality, understandability, and accuracy (DISCERN ≥4, ≥70% PEMAT understandability, misinformation score = 1). Prompt B responses were more readable than Prompt A responses for all 3 LLMs (P ≤ .001). ChatGPT-4 generated the most readable PEMs compared to ChatGPT-3.5 and Bard (P ≤ .001). Although Prompt C responses showed consistent reductions in mean SMOG and FKGL scores, only ChatGPT-4 achieved the specified 6th-grade reading level (SMOG: 4.8 ± 0.8; FKGL: 3.7 ± 1.9).

CONCLUSIONS: LLMs can serve as strong supplemental tools for generating high-quality, accurate, and novel PEMs and for improving the readability of existing PEMs on childhood glaucoma.
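The Prompt C rewrite comparison above can be illustrated with a short Python sketch that scores an original passage against its rewrite on two of the metrics the study names: complex words (≥3 syllables) and sentence count. The syllable heuristic and the example sentences are placeholders of my own, not the study's actual pipeline or texts.

```python
import re

def syllable_count(word: str) -> int:
    """Rough vowel-group heuristic; production tools use dictionaries."""
    return max(len(re.findall(r"[aeiouy]+", word.lower())), 1)

def rewrite_metrics(original: str, rewrite: str) -> dict:
    """Compare an LLM rewrite against its source on word count,
    sentence count, and complex words (>=3 syllables)."""
    def stats(text: str) -> dict:
        words = re.findall(r"[A-Za-z]+", text)
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        complex_words = sum(1 for w in words if syllable_count(w) >= 3)
        return {
            "words": len(words),
            "sentences": len(sentences),
            "complex_words": complex_words,
        }
    return {"original": stats(original), "rewrite": stats(rewrite)}
```

A successful plain-language rewrite should show fewer complex words and shorter sentences (often more of them) than the source passage.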
