Results 1 - 3 of 3
1.
J Hand Surg Am; 2024 Jul 06.
Article in English | MEDLINE | ID: mdl-38970600

ABSTRACT

PURPOSE: To address patient health literacy, the American Medical Association and the National Institutes of Health recommend that the readability of patient education materials not exceed an eighth-grade reading level. However, patient-facing materials often remain above this recommended level. Current online calculators provide readability scores but cannot give text-specific feedback that could streamline the simplification of patient materials. The purpose of this study was to evaluate Chat Generative Pretrained Transformer (ChatGPT) 3.5 as a tool for optimizing patient-facing hand surgery education materials through reading-level analysis and simplification.
METHODS: The readability of 18 patient-facing hand surgery education materials was scored with a traditional online reading-level calculator and with ChatGPT 3.5, and the scores were compared. The original excerpts were then entered into ChatGPT 3.5 and simplified by the artificial intelligence tool. The simplified excerpts were scored with the same calculators.
RESULTS: The readability scores for the original excerpts from the online calculator and ChatGPT 3.5 were similar. The simplified excerpts scored lower than the originals, with a mean grade level of 7.28, below the recommended maximum of 8.
CONCLUSIONS: Using ChatGPT 3.5 for simplification and readability analysis of patient-facing hand surgery materials is efficient and may help convey important health information. ChatGPT 3.5 produced readability scores comparable to those of traditional readability calculators and provided excerpt-specific feedback. It was also able to simplify materials to the recommended grade levels.
CLINICAL RELEVANCE: By confirming ChatGPT 3.5's ability to assess and simplify patient education materials, this study offers a practical approach to potentially improving patient comprehension, engagement, and health outcomes in clinical settings.
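The grade levels referenced above are typically produced by standard readability formulas. As an illustration only (the abstract does not name the online calculator used), the sketch below computes a Flesch-Kincaid Grade Level for an excerpt; the vowel-group syllable heuristic and the sample sentence are assumptions for demonstration, not the study's tool or data.

```python
# Minimal sketch of a reading-level check like the one the study automates.
# The syllable heuristic is crude and only for illustration.
import re

def count_syllables(word: str) -> int:
    """Rough syllable estimate: count groups of consecutive vowels."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid Grade Level:
    0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

if __name__ == "__main__":
    excerpt = "Carpal tunnel syndrome happens when a nerve in your wrist is squeezed."
    print(f"Estimated grade level: {flesch_kincaid_grade(excerpt):.1f}")
```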

2.
J Burn Care Res; 2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38895848

ABSTRACT

Despite the growing incidence of burn injuries globally and advancements in physical recovery, the psychological aspect of burn trauma recovery remains inadequately addressed. This review aims to consolidate the existing literature on posttraumatic stress disorder (PTSD) and depression in adult burn survivors, recognizing the need for a holistic approach to burn recovery that encompasses both physical and mental health. A comprehensive analysis of 156 studies revealed significant variation in methodological approaches, which complicates the creation of standardized protocols for mental health assessment in burn care. Key findings include the identification of a wide range of psychological assessment tools and a substantial research gap in low- and middle-income countries, where the majority of burn injuries occur. Only 7.0% of the studies assessed interventions for PTSD or depression, indicating a lack of focus on treatment modalities. The studies identified demographic factors, patient history, psychosocial factors, burn injury characteristics, and treatment course as risk factors for PTSD and depression after burn injury. The review highlights the need for early screening, intervention, and attention to subjective experiences related to burn injury, as these are strong predictors of long-term psychological distress. It also emphasizes the complexity of addressing psychological distress in burn survivors and the need for more standardized practices for assessing PTSD and depression in this population.

3.
Plast Reconstr Surg Glob Open; 12(2): e5575, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38313589

ABSTRACT

Background: To address patient health literacy, the American Medical Association recommends that the readability of patient education materials not exceed a sixth-grade reading level; the National Institutes of Health recommends no greater than an eighth-grade reading level. However, patient-facing materials in plastic surgery often remain above the recommended average reading level. The purpose of this study was to evaluate ChatGPT 3.5 as a tool for optimizing patient-facing craniofacial education materials.
Methods: Eighteen patient-facing craniofacial education materials were evaluated for readability by a traditional calculator and by ChatGPT 3.5, and the resulting scores were compared. The original excerpts were then entered into ChatGPT 3.5 and simplified by the artificial intelligence tool. The simplified excerpts were scored by the same calculators.
Results: The difference in scores for the original excerpts between the online calculator and ChatGPT 3.5 was not significant (P = 0.441). The simplified excerpts' scores were significantly lower than the originals (P < 0.001), and the mean grade level of the simplified excerpts was 7.78, below the recommended maximum of 8.
Conclusions: Using ChatGPT 3.5 for simplification and readability analysis of patient-facing craniofacial materials is efficient and may help convey important health information. ChatGPT 3.5 produced readability scores comparable to those of traditional readability calculators and provided excerpt-specific feedback. It was also able to simplify materials to the recommended grade levels. With human oversight, we validate this tool for readability analysis and simplification.
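The before/after comparison reported above (P < 0.001) is a paired test across the 18 original and simplified excerpts. The sketch below shows one plausible way to run that kind of analysis with a paired t-test; the scores and the choice of test are illustrative assumptions, not the study's data or its stated statistical method.

```python
# Hedged sketch of a paired before/after comparison of readability scores.
# The grade-level values are hypothetical examples for 6 excerpts.
from scipy import stats

original_scores   = [11.2, 10.5, 12.0, 9.8, 10.9, 11.5]   # before simplification
simplified_scores = [7.9,  7.4,  8.1,  6.9, 7.6,  8.0]    # after ChatGPT simplification

t_stat, p_value = stats.ttest_rel(original_scores, simplified_scores)
mean_simplified = sum(simplified_scores) / len(simplified_scores)

print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
print(f"Mean simplified grade level: {mean_simplified:.2f} (target <= 8)")
```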
