1.
Medicine (Baltimore); 103(25): e38569, 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-38905405

ABSTRACT

We aimed to examine internet patient education materials (PEMs) on "Child Pain" in terms of readability, reliability, quality, and content. For this observational study, a search was performed on February 28, 2024, using the keywords "Child Pain," "Pediatric Pain," and "Children Pain" in the Google search engine. The readability of the PEMs was assessed using computer-based readability formulas (Flesch Reading Ease Score [FRES], Flesch-Kincaid Grade Level [FKGL], Automated Readability Index [ARI], Gunning Fog [GFOG], Coleman-Liau score [CL], Linsear Write [LW], and Simple Measure of Gobbledygook [SMOG]). The reliability and quality of the websites were determined using the Journal of the American Medical Association (JAMA) score, the Global Quality Score (GQS), and the DISCERN score. Ninety-six PEM websites were included in our study. The FRES was 64 (32-84), the FKGL 8.24 (4.01-15.19), the ARI 8.95 (4.67-17.38), the GFOG 11 (7.1-19.2), the CL 10.1 (6.95-15.64), the LW 8.08 (3.94-19.0), and the SMOG 8.1 (4.98-13.93). The readability level of the PEMs was statistically higher than the sixth-grade level with all formulas (P = .011 for FRES; P < .001 for GFOG, ARI, FKGL, CL, and SMOG) except the LW formula (P = .112). The websites had moderate-to-low reliability and quality; health-related websites had the highest quality as measured by the JAMA score. We found a weak negative correlation between the Blexb score and the JAMA score (P = .013). Compared with the sixth-grade level recommended by the American Medical Association and the National Institutes of Health, the readability grade level of internet-based PEMs on child pain is quite high, while their reliability and quality are moderate to low. Poorly readable, low-quality PEMs can cause parental anxiety and unnecessary hospital admissions. PEMs on issues that threaten public health should be prepared with attention to readability recommendations (a sketch of how these readability formulas can be computed follows this record).


Subjects
Comprehension, Internet, Parents, Humans, Child, Parents/psychology, Health Literacy, Pain, Patient Education as Topic/methods, Reproducibility of Results, Consumer Health Information/standards
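The seven readability formulas named in this abstract are standard, published formulas, but the record does not say which software computed them. As a minimal, hedged sketch, the same seven values could be reproduced for any PEM text with the third-party Python package textstat; the package choice and the input file name are assumptions, not the authors' method:

    # Sketch: computing the seven readability scores named in the abstract
    # for one patient education material (PEM) text. Assumes the third-party
    # "textstat" package (pip install textstat); the input file is hypothetical.
    import textstat

    pem_text = open("pem_page.txt", encoding="utf-8").read()  # hypothetical input

    scores = {
        "FRES": textstat.flesch_reading_ease(pem_text),
        "FKGL": textstat.flesch_kincaid_grade(pem_text),
        "ARI": textstat.automated_readability_index(pem_text),
        "GFOG": textstat.gunning_fog(pem_text),
        "CL": textstat.coleman_liau_index(pem_text),
        "LW": textstat.linsear_write_formula(pem_text),
        "SMOG": textstat.smog_index(pem_text),
    }

    for name, value in scores.items():
        print(f"{name}: {value:.2f}")

    # For FRES, higher values mean easier text (60-70 is roughly plain English);
    # for the grade-level formulas (FKGL, ARI, GFOG, CL, LW, SMOG), any value
    # above 6 exceeds the sixth-grade level recommended by the AMA and NIH.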
2.
Medicine (Baltimore); 103(18): e38009, 2024 May 03.
Article in English | MEDLINE | ID: mdl-38701313

ABSTRACT

A subdural hematoma is defined as a collection of blood in the subdural space between the dura mater and the arachnoid. Subdural hematoma is a condition that neurosurgeons frequently encounter, and it has acute, subacute, and chronic forms. Its incidence in adults is reported to be 1.72 to 20.60 per 100,000 people annually. Our study aimed to evaluate the quality, reliability, and readability of the answers given by ChatGPT, Bard, and Perplexity to questions about "Subdural Hematoma." In this observational, cross-sectional study, we asked ChatGPT, Bard, and Perplexity separately to provide the 100 most frequently asked questions about "Subdural Hematoma." The responses from all three chatbots were analyzed for readability, quality, reliability, and adequacy. When the median readability scores of the ChatGPT, Bard, and Perplexity answers were compared with the sixth-grade reading level, a statistically significant difference was observed with all formulas (P < .001), and all three chatbots' responses were found to be difficult to read. Bard's responses were more readable than ChatGPT's (P < .001) and Perplexity's (P < .001) for all scores evaluated. Although the evaluated calculators yielded differing results, Perplexity's answers were more readable than ChatGPT's (P < .05). Bard's answers had the best GQS scores (P < .001), while Perplexity's responses had the best Journal of the American Medical Association and modified DISCERN scores (P < .001). The current capabilities of ChatGPT, Bard, and Perplexity are inadequate in terms of the quality and readability of "Subdural Hematoma"-related text content. The readability standard for patient education materials set by the American Medical Association, the National Institutes of Health, and the United States Department of Health and Human Services is at or below grade 6, and the readability levels of the responses of artificial intelligence applications such as ChatGPT, Bard, and Perplexity are significantly higher than this recommended sixth-grade level (a sketch of this median-versus-grade-6 comparison follows this record).


Subjects
Artificial Intelligence, Comprehension, Subdural Hematoma, Humans, Cross-Sectional Studies, Reproducibility of Results
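The comparison reported in this abstract (median readability of chatbot answers versus the sixth-grade benchmark) can be illustrated with a short sketch. The record does not state which tools or statistical test the authors used, so the textstat package, SciPy's one-sample Wilcoxon signed-rank test, and the example answers below are assumptions for illustration only:

    # Sketch: testing whether the readability grade level of chatbot answers
    # exceeds the recommended sixth-grade level. `responses` is a hypothetical
    # list of answer texts from one chatbot; textstat and scipy are assumed
    # tools, not those reported by the authors.
    import textstat
    from scipy.stats import wilcoxon

    responses = [
        "A subdural hematoma is an accumulation of blood between the dura mater and the arachnoid membrane, usually caused by traumatic rupture of bridging veins.",
        "Chronic subdural hematomas frequently occur in elderly patients receiving anticoagulation and may require burr-hole evacuation when symptomatic.",
        "Acute subdural hematomas with significant midline shift are generally managed with urgent craniotomy and hematoma evacuation.",
    ]

    fkgl_scores = [textstat.flesch_kincaid_grade(text) for text in responses]
    median_fkgl = sorted(fkgl_scores)[len(fkgl_scores) // 2]

    # One-sample Wilcoxon signed-rank test against the grade-6 benchmark:
    # are the differences (score - 6) shifted above zero?
    stat, p_value = wilcoxon([s - 6 for s in fkgl_scores], alternative="greater")

    print(f"median FKGL = {median_fkgl:.2f}")
    print(f"Wilcoxon statistic = {stat}, one-sided P = {p_value:.3f}")

In the study itself, 100 answers per chatbot were scored, and the same kind of comparison was repeated for each readability formula and each chatbot.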