Results 1 - 20 of 730
1.
OTO Open ; 8(3): e137, 2024.
Article in English | MEDLINE | ID: mdl-39015736

ABSTRACT

Objective: To evaluate the readability, understandability, actionability, and accuracy of online resources covering vestibular migraine (VM). Study Design: Cross-sectional descriptive study. Setting: Digital collection of websites appearing in Google searches. Methods: Google searches were conducted to identify common online resources for VM. We examined readability using the Flesch Reading Ease (FRE) and Flesch-Kincaid Grade Level scores, understandability and actionability using the Patient Education Materials Assessment Tool (PEMAT), and accuracy by comparing the website contents to the consensus definition of "probable vestibular migraine." Results: Eleven of the most popular websites were analyzed. Flesch-Kincaid Grade Level scores averaged at the 13th-grade level (range: 9th-18th). FRE scores averaged 35.5 (range: 9.1-54.4). No website had a readability grade level at the US Agency for Healthcare Research and Quality's recommended 5th-grade level or an equivalent FRE score of 90 or greater. Understandability scores varied, ranging from 49% to 88% (mean 70%). Actionability scores varied more, ranging from 12% to 87% (mean 44%). There was substantial inter-rater agreement for both PEMAT understandability scoring (mean κ = 0.76, SD = 0.08) and actionability scoring (mean κ = 0.65, SD = 0.08). Three sites included all 3 "probable vestibular migraine" diagnostic criteria as worded in the consensus statement. Conclusion: The quality of online resources for VM is poor overall in terms of readability, actionability, and agreement with diagnostic criteria.
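The Flesch indices and Cohen's kappa reported in this abstract are simple closed-form statistics. A minimal Python sketch using the standard published coefficients; note the syllable counter is a crude vowel-group heuristic, not the dictionary-based counting professional calculators use, so scores will differ slightly:

```python
import re

def count_syllables(word: str) -> int:
    # Crude approximation: one syllable per group of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_metrics(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    wps = len(words) / len(sentences)  # words per sentence
    spw = sum(count_syllables(w) for w in words) / len(words)  # syllables per word
    fre = 206.835 - 1.015 * wps - 84.6 * spw
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    return fre, fkgl

def cohens_kappa(a: list, b: list) -> float:
    """Agreement between two raters on the same items, corrected for chance."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    p_e = sum((a.count(c) / n) * (b.count(c) / n) for c in set(a) | set(b))
    return (p_o - p_e) / (1 - p_e)
```

On the conventional Landis-Koch bands, the study's mean κ values of 0.76 and 0.65 both fall in the "substantial agreement" range (0.61-0.80).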

2.
Surg Endosc ; 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39009725

ABSTRACT

INTRODUCTION: Health literacy is the ability of individuals to use basic health information and services to make well-informed decisions. Low health literacy among surgical patients has been associated with nonadherence to preoperative and/or discharge instructions as well as poor comprehension of surgery. It likely poses a barrier to patients considering foregut surgery, which requires an understanding of different treatment options and specific diet instructions. The objective of this study was to assess and compare the readability of online patient education materials (PEM) for foregut surgery. METHODS: Using Google, the terms "anti-reflux surgery," "GERD surgery," and "foregut surgery" were searched, and a total of 30 webpages from universities and national organizations were selected. The readability of the text was assessed with seven instruments: Flesch Reading Ease formula (FRE), Gunning Fog (GF), Flesch-Kincaid Grade Level (FKGL), Coleman-Liau Index (CL), Simple Measure of Gobbledygook (SMOG), Automated Readability Index (ARI), and Linsear Write Formula (LWF). Mean readability scores were calculated with standard deviations. We performed a qualitative analysis gathering characteristics such as type of information (preoperative or postoperative), organization, use of multimedia, and inclusion of a version in another language. RESULTS: The overall average readability of the top PEM for foregut surgery was at a 12th-grade level. There was only one resource at the recommended sixth-grade reading level. Nearly half of the PEM included some form of multimedia. CONCLUSIONS: The American Medical Association and the National Institutes of Health have recommended that PEM be written at the 5th- to 6th-grade level. The majority of online PEM for foregut surgery are above the recommended reading level. This may be a barrier for patients seeking foregut surgery. Surgeons should be aware of the potential gaps in their patients' understanding to help them make informed decisions and improve overall health outcomes.
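Four of the seven instruments named in the methods (GF, SMOG, CL, ARI) are published closed-form formulas. A sketch with their standard coefficients, again approximating syllables by vowel groups, so results will drift slightly from the dedicated calculators the study used:

```python
import math
import re

def _syllables(word: str) -> int:
    # Crude vowel-group heuristic; real tools use dictionary lookups.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def grade_level_indices(text: str) -> dict[str, float]:
    """Gunning Fog, SMOG, Coleman-Liau, and ARI grade-level estimates."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    letters = sum(len(w) for w in words)
    complex_words = sum(1 for w in words if _syllables(w) >= 3)
    n_s, n_w = len(sentences), len(words)
    return {
        "gunning_fog": 0.4 * (n_w / n_s + 100 * complex_words / n_w),
        "smog": 1.0430 * math.sqrt(complex_words * 30 / n_s) + 3.1291,
        "coleman_liau": 0.0588 * (letters / n_w * 100)
                        - 0.296 * (n_s / n_w * 100) - 15.8,
        "ari": 4.71 * (letters / n_w) + 0.5 * (n_w / n_s) - 21.43,
    }
```

Each index maps sentence length and word complexity onto an approximate US school grade, which is why the studies above report results as grade levels.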

4.
Saf Health Work ; 15(2): 192-199, 2024 Jun.
Article in English | MEDLINE | ID: mdl-39035793

ABSTRACT

Background: Safety data sheets (SDSs) are hazard communication materials that accompany chemicals/hazardous products in the workplace. Many SDSs contain dense, technical text, which places considerable comprehension demands on workers, especially those with lower literacy skills. The goal of this study was to assess SDSs for readability, comprehensibility, and suitability (i.e., fit to the target audience). Methods: The Suitability Assessment of Materials (SAM) tool assessed SDSs for suitability and readability. We then amended the SAM tool to further assess SDSs for comprehensibility factors. Both the original and amended SAM tools were used to score 45 randomly selected SDSs for content, literacy demand, graphics, and layout/typography. Results: SDSs performed poorly in terms of readability, suitability, and comprehensibility. The mean readability scores were Flesch-Kincaid Grade Level (9.6), Gunning Fog Index (11.0), Coleman-Liau Index (13.7), and Simple Measure of Gobbledygook Index (10.7), all above the recommended reading level. The original SAM graded SDSs as "not suitable" for suitability and readability. When the amended SAM was used, the mean total SAM score increased, but the SDSs were still considered "not suitable" once comprehensibility considerations were added. The amended SAM tool better identified content-related issues specific to SDSs that make it difficult for a reader to understand the material. Conclusions: In terms of readability, comprehensibility, and suitability, SDSs perform poorly in their primary role as a hazard communication tool, thereby putting workers at risk. The amended SAM tool could be used when writing SDSs to ensure that the information is more easily understandable for all audiences.

5.
J Oral Rehabil ; 2024 Jul 21.
Article in English | MEDLINE | ID: mdl-39034447

ABSTRACT

BACKGROUND: Temporomandibular disorders (TMD) are a prevalent ailment with a global impact, affecting a substantial number of individuals. While some individuals receive treatment from orthodontists for TMD, a significant proportion obtain knowledge through websites. OBJECTIVES: Our purpose was to evaluate, from a patient-oriented perspective, the readability of the home pages of the 10 most prominent websites devoted to TMD. We also determined what level of education would be needed to get an overview of the information on the websites under scrutiny. This approach ensures that our findings are centred on the patient experience, providing insights into how accessible and understandable websites about TMD are. METHODS: We determined the top 10 patient-focused English-language websites by searching for 'temporomandibular disorders' with the 'no country redirect' plugin of the Google Chrome browser (www.google.com/ncr). The readability of the texts was assessed using the Gunning Fog Index (GFI), Coleman-Liau Index (CLI), Automated Readability Index (ARI), Simple Measure of Gobbledygook (SMOG), Flesch-Kincaid Grade Level (FKGL), and Flesch Reading Ease (FRE) (https://readabilityformulas.com). RESULTS: The mean Flesch Reading Ease score was 48.67, with a standard deviation of 15.04, and these websites require an average of 13.49 years of formal education (GFI), with a standard deviation of 2.62, for ease of understanding. CONCLUSION: Our research indicates that a significant proportion of websites related to TMD are written at a level of complexity that exceeds the reading comprehension of the general population.

6.
Neurosurg Focus ; 57(1): E6, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38950429

ABSTRACT

OBJECTIVE: Concussions are self-limited forms of mild traumatic brain injury (TBI). Gradual return to play (RTP) is crucial to minimizing the risk of second impact syndrome. Online patient educational materials (OPEM) are often used to guide decision-making. Previous literature has reported that grade-level readability of OPEM is higher than recommended by the American Medical Association and the National Institutes of Health. The authors evaluated the readability of OPEM on concussion and RTP. METHODS: An online search engine was used to identify websites providing OPEM on concussion and RTP. Text specific to concussion and RTP was extracted from each website and readability was assessed using the following six standardized indices: Flesch Reading Ease (FRE), Flesch-Kincaid Grade Level, Gunning Fog Index, Coleman-Liau Index, Simple Measure of Gobbledygook Index, and Automated Readability Index. One-way ANOVA and Tukey's post hoc test were used to compare readability across sources of information. RESULTS: There were 59 concussion and RTP articles, and readability levels exceeded the recommended 6th grade level, irrespective of the source of information. Academic institutions published OPEM at simpler readability levels (higher FRE scores). Private organizations published OPEM at more complex (higher) grade-level readability levels in comparison with academic and nonprofit institutions (p < 0.05). CONCLUSIONS: The readability of OPEM on RTP after concussions exceeds the literacy of the average American. There is a critical need to modify the concussion and RTP OPEM to improve comprehension by a broad audience.


Subjects
Brain Concussion , Comprehension , Patient Education as Topic , Brain Concussion/prevention & control , Humans , Patient Education as Topic/methods , Patient Education as Topic/standards , Internet , Return to Sport , Reading
7.
J Hand Surg Am ; 2024 Jul 06.
Article in English | MEDLINE | ID: mdl-38970600

ABSTRACT

PURPOSE: To address patient health literacy, the American Medical Association and the National Institutes of Health recommend that the readability of patient education materials not exceed an eighth-grade reading level. However, patient-facing materials often remain above the recommended average reading level. Current online calculators provide readability scores; however, they lack the ability to provide text-specific feedback, which may streamline the process of simplifying patient materials. The purpose of this study was to evaluate Chat Generative Pretrained Transformer (ChatGPT) 3.5 as a tool for optimizing patient-facing hand surgery education materials through reading level analysis and simplification. METHODS: The readability of 18 patient-facing hand surgery education materials was assessed by both a traditional online reading-level calculator and ChatGPT 3.5. The original excerpts were then entered into ChatGPT 3.5 and simplified by the artificial intelligence tool. The simplified excerpts were scored by the same calculators. RESULTS: The readability scores for the original excerpts from the online calculator and ChatGPT 3.5 were similar. The simplified excerpts' scores were lower than the originals', with a mean of 7.28, below the recommended maximum of 8. CONCLUSIONS: The use of ChatGPT 3.5 for the simplification and readability analysis of patient-facing hand surgery materials is efficient and may help facilitate the conveyance of important health information. ChatGPT 3.5 rendered readability scores comparable with traditional readability calculators, in addition to excerpt-specific feedback. It was also able to simplify materials to the recommended grade levels. CLINICAL RELEVANCE: By confirming ChatGPT 3.5's ability to assess and simplify patient education materials, this study offers a practical solution for potentially improving patient comprehension, engagement, and health outcomes in clinical settings.

8.
J Psychiatr Res ; 177: 53-58, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38972265

ABSTRACT

Self-report questionnaires are commonly used in depression research with little consideration of their reading ease. This study aimed to increase the reading ease of the commonly used Quick Inventory of Depressive Symptoms Self Report (QIDS-SR) and assess the impact of the change in wording on the measure's psychometric properties. The study had three phases: 1) Flesch-Kincaid readability statistics of the original and modified wording were compared; 2) a sample of n = 95 participants rated the modified wording for perceived change in meaning and ease of understanding; 3) a second sample of n = 136 participants completed two versions of the QIDS-SR (original, modified, or one of each) alongside the Beck Depression Inventory (BDI). The internal consistency, test-retest reliability, and convergent validity of the modified version were assessed. The modified QIDS-SR had significantly higher reading ease, was considered easier to understand, and was not perceived to have a significant change in meaning. Its psychometric properties were unaffected. The wording of the questionnaire was successfully simplified to increase its accessibility, and this had no notable impact on the psychometric properties of the measure.
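The abstract reports internal consistency without naming the statistic; Cronbach's alpha is the conventional measure for self-report scales, so the sketch below is illustrative rather than a reconstruction of the study's actual analysis. It uses population variances, with one inner list of respondent scores per questionnaire item:

```python
def cronbach_alpha(items: list[list[float]]) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances)/variance(totals))."""
    k = len(items)       # number of items
    n = len(items[0])    # number of respondents

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Total score per respondent across all items.
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))
```

Rewording items changes alpha only if it changes how item scores covary, which is consistent with the finding that simplification left the psychometric properties intact.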

9.
J Food Prot ; 87(9): 100323, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38960323

ABSTRACT

In many jurisdictions, foodservice workers are required to obtain food handler certification via written examination before being able to work. This study investigated the effect of the readability (the ease with which one can read and comprehend written text) of food handler exam questions on exam performance. It was hypothesized that the reduction in cognitive load from improving the readability of exam questions would lead to improved scores. Participants received training in personal hygiene and basic food safety and were tested on their knowledge using questions worded with both the traditional phrasing and updated phrasing with improved readability. The results indicate that improved readability significantly improved scores on the personal hygiene section but not on the basic food safety section. These results are due, in part, to the types of cognitive load (intrinsic vs. extraneous) required to solve different types of problems.

10.
Lasers Med Sci ; 39(1): 183, 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39014050

ABSTRACT

Just as tattoos continue to increase in popularity, many people with tattoos also seek removal, often due to career concerns. Prospective clients interested in laser tattoo removal may research the procedure online, as the internet increasingly becomes a resource for preliminary health information. However, it is important that online health information on the topic be of high quality and accessible to all patients. To assess this, we analyzed 77 websites from a Google search query using the terms "Laser tattoo removal patient Information" and "Laser tattoo removal patient Instructions." The websites were evaluated for readability, using multiple validated indices, and for comprehensiveness. We found that websites had a broad readability range, from elementary to college level, though most were above the recommended eighth-grade reading level. Less than half of the websites adequately discussed the increased risk of pigmentary complications in clients with skin of color or emphasized the importance of consulting with a board-certified dermatologist or plastic surgeon before the procedure. Over 90% of the websites noted that multiple laser treatments are likely needed for complete clearance of tattoos. The findings from our study underscore a significant gap in the accessibility and quality of online information for patients considering laser tattoo removal, particularly in addressing specific risks for patients with darker skin tones and in emphasizing the need to consult a board-certified physician before undergoing the procedure. It is important that online resources for laser tattoo removal be appropriately written to allow better decision-making, realistic expectations, and future satisfaction for potential clients interested in the procedure.


Subjects
Comprehension , Internet , Tattooing , Humans , Consumer Health Information/standards , Patient Education as Topic , Laser Therapy/methods , Health Literacy
11.
Neuroophthalmology ; 48(4): 257-266, 2024.
Article in English | MEDLINE | ID: mdl-38933748

ABSTRACT

Most cases of optic neuritis (ON) occur in women and in patients between the ages of 15 and 45 years, a key demographic of individuals who seek health information using the internet. As clinical providers strive to ensure patients have accessible information to understand their condition, assessing the standard of online resources is essential. This study aimed to assess the quality, content, accountability, and readability of freely available online information on optic neuritis. This cross-sectional study analyzed 11 freely available medical sites with information on optic neuritis and used PubMed as a gold standard for comparison. Twelve questions were composed to cover the information most relevant to patients, and each website was independently examined by four neuro-ophthalmologists. Readability was analyzed using an online readability tool, and the Journal of the American Medical Association (JAMA) benchmarks, four criteria designed to assess the quality of health information, were used to evaluate the accountability of each website. On average, websites scored 27.98 (SD ± 9.93, 95% CI 24.96-31.00) of 48 potential points (58.3%) across the twelve questions. There were significant differences in the comprehensiveness and accuracy of content across websites (p < .001). The mean reading grade level of websites was 11.90 (SD ± 2.52, 95% CI 8.83-15.25). No website achieved all four JAMA benchmarks. Interobserver reliability was robust between three of the four neuro-ophthalmologist (NO) reviewers (ρ = 0.77 between NO3 and NO2, ρ = 0.91 between NO3 and NO1, ρ = 0.74 between NO2 and NO1; all p < .05). The quality of freely available online information detailing optic neuritis varies by source, with significant room for improvement. The material presented is difficult to interpret and exceeds the recommended reading level for health information. Most websites reviewed did not provide comprehensive information regarding non-therapeutic aspects of the disease. Ophthalmology organizations should be encouraged to create content that is more accessible to the general public.

12.
Br J Hosp Med (Lond) ; 85(6): 1-9, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-38941972

ABSTRACT

Aims/Background Seroma formation is the most common complication following breast surgery. However, there is little evidence on the readability of online patient education materials on this issue. This study aimed to assess the accessibility and readability of the relevant online information. Methods This systematic review of the literature identified 37 relevant websites for further analysis. The readability of each online article was assessed using a range of readability formulae. Results The average Flesch Reading Ease score for all patient education materials was 53.9 (± 21.9) and the average Flesch-Kincaid reading grade level was 7.32 (± 3.1), suggesting they were 'fairly difficult' to read and above the recommended reading level. Conclusion Online patient education materials regarding post-surgery breast seroma are at a higher-than-recommended reading grade level for the public. Improvement would allow all patients, regardless of literacy level, to access such resources to aid decision-making around undergoing breast surgery.


Subjects
Comprehension , Health Literacy , Internet , Patient Education as Topic , Seroma , Humans , Seroma/etiology , Patient Education as Topic/methods , Female , Postoperative Complications , Breast Diseases/surgery , Mastectomy/adverse effects , Consumer Health Information/standards
13.
Nurs Rep ; 14(2): 1338-1352, 2024 May 24.
Article in English | MEDLINE | ID: mdl-38921711

ABSTRACT

(1) Background: The wording of informed consent forms could hinder comprehension and thus patients' autonomous choice. The objective of this study was to analyze the readability and comprehension of anesthesia informed consent forms in a Spanish county hospital. (2) Methods: Descriptive, cross-sectional study of patients who were scheduled to undergo anesthetic techniques. The readability of the forms was analyzed using the INFLESZ tool and their subjective comprehension using an ad hoc questionnaire. (3) Results: The analyzed forms presented 'somewhat difficult' readability. A total of 44.2% of the patients decided not to read the form, mainly because they had previously undergone surgery with the same anesthetic technique. The language used in the forms was considered inadequate by 49.5% of the patients, and 53.3% did not comprehend it in its entirety. A statistically significant negative correlation of age and INFLESZ readability score with the overall questionnaire score was found. A statistically significant association was observed as a function of age and educational level with the different criteria of the questionnaire. (4) Conclusions: The anesthesia informed consent forms presented low readability with limited comprehension. It would be necessary to improve their wording to favor comprehension and to guarantee patients' freedom of choice.

14.
JPRAS Open ; 41: 33-36, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38872866

ABSTRACT

Purpose: Ensuring that educational materials geared toward transgender and gender-diverse patients are comprehensible can mitigate barriers to accessing gender-affirming care and understanding postoperative care. This study evaluates the readability of online patient resources related to gender-affirming vaginoplasty. Methods: Online searches for vaginoplasty were conducted in January 2023 using two search engines. The readability scores of the top ten websites and their associated hyperlinked webpages were derived using ten validated readability tests. Results: A total of 40 pages were assessed from the vaginoplasty searches. The average reading grade level for all the webpages with relevant educational materials was 13.3 (i.e., college level), exceeding the American Medical Association's recommended 6th grade reading level. Conclusion: Complex patient resources may impede patients' understanding of gender-affirming vaginoplasty. Online patient education resources should be created that are more accessible to patients with diverse reading comprehension capabilities.

16.
Clin Neuropsychol ; : 1-20, 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38902221

ABSTRACT

Objective: Despite varying opinions, little research has examined how best to write pediatric neuropsychology reports. Method: This study gathered input from 230 parents on how text difficulty (reading level) and visual emphasis (bullets, underline, italics) affect report readability and utility. We focused on the most-read report section: summary/impressions. Each parent rated the readability and usefulness of a generic summary/impressions section written in four different styles. The four styles crossed text difficulty (high school vs. collegiate) with use of visual emphasis (absent vs. present). Results: Parents found versions with easier text to be more clearly written, easier to follow, and easier to find information in (p<.001). Parents rated those with harder text as overly detailed, complex, hard to understand, and hard to read (p<.001). Visual emphasis made it easier to find key information and made the text easier to follow and understand, but primarily for versions written in difficult text (interaction p≤.026). After rating all four styles, parents picked their preference. They most often picked versions written in easier text with visual emphasis (p<.001). Conclusions: Findings support writing styles that use easier text difficulty and visual emphasis.

17.
Front Surg ; 11: 1373843, 2024.
Article in English | MEDLINE | ID: mdl-38903865

ABSTRACT

Purpose: This study aims to evaluate the effectiveness of ChatGPT-4, an artificial intelligence (AI) chatbot, in providing accurate and comprehensible information to patients regarding otosclerosis surgery. Methods: On October 20, 2023, 15 hypothetical questions were posed to ChatGPT-4 to simulate physician-patient interactions about otosclerosis surgery. Responses were evaluated by three independent ENT specialists using the DISCERN scoring system. The readability was evaluated using multiple indices: Flesch Reading Ease (FRE), Flesch-Kincaid Grade Level (FKGL), Gunning Fog Index (Gunning FOG), Simple Measure of Gobbledygook (SMOG), Coleman-Liau Index (CLI), and Automated Readability Index (ARI). Results: The responses from ChatGPT-4 received DISCERN scores ranging from poor to excellent, with an overall score of 50.7 ± 8.2. The readability analysis indicated that the texts were above the 6th-grade level, suggesting they may not be easily comprehensible to the average reader. There was a significant positive correlation between the referees' scores. Despite providing correct information in over 90% of the cases, the study highlights concerns regarding the potential for incomplete or misleading answers and the high readability level of the responses. Conclusion: While ChatGPT-4 shows potential in delivering health information accurately, its utility is limited by the level of readability of its responses. The study underscores the need for continuous improvement in AI systems to ensure the delivery of information that is both accurate and accessible to patients with varying levels of health literacy. Healthcare professionals should supervise the use of such technologies to enhance patient education and care.

18.
Iowa Orthop J ; 44(1): 47-58, 2024.
Article in English | MEDLINE | ID: mdl-38919356

ABSTRACT

Background: Patients often access online resources to educate themselves prior to undergoing elective surgery such as carpal tunnel release (CTR). The purpose of this study was to evaluate available online resources regarding CTR on objective measures of readability (syntax reading grade level), understandability (ability to convey key messages in a comprehensible manner), and actionability (providing actions the reader may take). Methods: The study conducted two independent Google searches for "Carpal Tunnel Surgery" and, among the top 50 results, analyzed articles aimed at educating patients about CTR. Readability was assessed using six different indices: Flesch-Kincaid Grade Level Index, Flesch Reading Ease, Gunning Fog Index, Simple Measure of Gobbledygook (SMOG) Index, Coleman-Liau Index, and Automated Readability Index. The Patient Education Materials Assessment Tool evaluated understandability and actionability on a 0-100% scale. Spearman's correlation assessed relationships between these metrics and Google search ranks, with p<0.05 indicating statistical significance. Results: Of the 39 websites meeting the inclusion criteria, the mean readability grade level exceeded 9, with the lowest being 9.4 ± 1.5 (SMOG Index). Readability did not correlate with Google search ranking (lowest p=0.25). Mean understandability and actionability were 59% ± 15 and 26% ± 24, respectively. Only 28% of the articles used visual aids, and few provided concise summaries or clear, actionable steps. Notably, lower reading grade levels were linked to higher actionability scores (p ≤ 0.02 for several indices), but no readability metrics significantly correlated with understandability. Google search rankings showed no significant association with either understandability or actionability scores. Conclusion: Online educational materials for CTR score poorly in readability, understandability, and actionability. Quality metrics do not appear to affect Google search rankings. The poor quality-metric scores found in our study highlight a need for hand specialists to improve online patient resources, especially in an era emphasizing shared decision-making in healthcare. Level of Evidence: IV.
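Spearman's correlation, used in this study to relate readability metrics to Google search rank, reduces to a simple rank-difference formula when there are no tied values. A minimal sketch of that no-ties case (tied ranks would instead require Pearson correlation on averaged ranks):

```python
def spearman_rho(x: list[float], y: list[float]) -> float:
    """Spearman rank correlation, no-ties formula:
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

Because it operates on ranks rather than raw values, it is the natural choice for search-rank data, where the outcome variable is ordinal by construction.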


Subjects
Carpal Tunnel Syndrome , Comprehension , Health Literacy , Internet , Patient Education as Topic , Humans , Patient Education as Topic/methods , Carpal Tunnel Syndrome/surgery , Reading
19.
Iowa Orthop J ; 44(1): 151-158, 2024.
Article in English | MEDLINE | ID: mdl-38919367

ABSTRACT

Background: The National Institutes of Health (NIH) and the American Medical Association (AMA) recommend that online health information be written at a maximum 6th-grade reading level. The aim of this study was to evaluate online resources regarding shoulder arthroscopy using measures of readability, understandability, and actionability: syntax reading grade level and the Patient Education Materials Assessment Tool (PEMAT-P). Methods: An online Google™ search for "shoulder arthroscopy" was performed. From the top 50 results, websites directed at educating patients were included. News and scientific articles, audiovisual materials, industry websites, and unrelated materials were excluded. Readability was calculated using objective algorithms: Flesch-Kincaid Grade Level (FKGL), Simple Measure of Gobbledygook (SMOG) grade, Coleman-Liau Index (CLI), and Gunning Fog Index (GFI). The PEMAT-P was used to assess understandability and actionability, with a 70% score threshold. Scores were compared across academic institutions, private practices, and commercial health publishers. The correlation between search rank and readability, understandability, and actionability was calculated. Results: Two independent searches yielded 53 websites, with 44 (83.02%) meeting inclusion criteria. No mean readability score was below a 10th-grade reading level. Only one website scored at or below the 6th-grade reading level. Mean understandability and actionability scores were 63.02% ± 12.09 and 29.77% ± 20.63, neither of which met the PEMAT threshold. Twelve (27.27%) websites met the understandability threshold, while none met the actionability threshold. Institution categories scored similarly in understandability (61.71%, 62.68%, and 63.67% among academic, private practice, and commercial health publishers, respectively; p=0.9536). No readability or PEMAT score correlated with search rank. Conclusion: Online shoulder arthroscopy patient education materials score poorly in readability, understandability, and actionability. Only one website scored at the NIH- and AMA-recommended reading level, and only 27.27% of websites exceeded the 70% PEMAT score for understandability. None met the actionability threshold. Future efforts should improve online resources to optimize patient education and facilitate informed decision-making. Level of Evidence: IV.


Subjects
Arthroscopy , Comprehension , Health Literacy , Internet , Patient Education as Topic , Humans , Patient Education as Topic/methods , United States , Shoulder Joint/surgery
20.
Int J Pediatr Otorhinolaryngol ; 181: 111998, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38830271

ABSTRACT

OBJECTIVES: This study examined the potential of ChatGPT as an accurate and readable source of information for parents seeking guidance on adenoidectomy, tonsillectomy, and ventilation tube insertion surgeries (ATVtis). METHODS: ChatGPT was tasked with identifying the top 15 most frequently asked questions by parents on internet search engines for each of the three specific surgical procedures. We removed repeated questions from the initial set of 45. Subsequently, we asked ChatGPT to generate answers to the remaining 33 questions. Seven highly experienced otolaryngologists individually assessed the accuracy of the responses using a four-level grading scale, from completely incorrect to comprehensive. The readability of responses was determined using the Flesch Reading Ease (FRE) and Flesch-Kincaid Grade Level (FKGL) scores. The questions were categorized into four groups: Diagnosis and Preparation Process, Surgical Information, Risks and Complications, and Postoperative Process. Responses were then compared based on accuracy grade, FRE, and FKGL scores. RESULTS: Seven evaluators each assessed 33 AI-generated responses, providing a total of 231 evaluations. Among the evaluated responses, 167 (72.3%) were classified as 'comprehensive.' Sixty-two responses (26.8%) were categorized as 'correct but inadequate,' and two responses (0.9%) were assessed as 'some correct, some incorrect.' None of the responses were judged 'completely incorrect' by any assessor. The average FRE and FKGL scores were 57.15 (±10.73) and 9.95 (±1.91), respectively. Upon analyzing the responses from ChatGPT, 3 (9.1%) were at or below the sixth-grade reading level recommended by the American Medical Association (AMA). No significant differences were found between the groups regarding readability and accuracy scores (p > 0.05). CONCLUSIONS: ChatGPT can provide accurate answers to questions on various topics related to ATVtis.
However, ChatGPT's answers may be too complex for some readers, as they are generally written at a high school level. This is above the sixth-grade reading level recommended for patient information by the AMA. According to our study, more than three-quarters of the AI-generated responses were at or above the 10th-grade reading level, raising concerns about the ChatGPT text's readability.


Subjects
Adenoidectomy , Comprehension , Parents , Tonsillectomy , Humans , Tonsillectomy/methods , Parents/psychology , Middle Ear Ventilation , Female , Male , Internet , Child , Surveys and Questionnaires , Health Literacy