Results 1 - 20 of 763
1.
Ann Plast Surg ; 92(5S Suppl 3): S361-S365, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38689420

ABSTRACT

BACKGROUND: Public interest in gender affirmation surgery has surged over the last decade. This spike in interest, combined with extensive free online medical knowledge, has led many to the Internet for more information on this complicated procedure. This study aimed to evaluate the quality of online information on metoidioplasty. METHODS: Google Trends data for searches on "metoidioplasty" from 2004 to the present were assessed. The term "metoidioplasty" was searched on three popular search engines (Google, Yahoo, and Bing), and the first 100 websites from each search were extracted for inclusion (Fig. 1). Exclusion criteria included duplicates, websites requiring fees, photo libraries, and irrelevant websites. Websites were assigned a score (out of 36) using the modified Ensuring Quality Information for Patients (EQIP) instrument, which grades patient materials on content (18 points), identification (6 points), and structure (12 points). ChatGPT was also queried for metoidioplasty-related information, and its responses were analyzed using EQIP. RESULTS: Google Trends analysis indicated that relative search interest in "metoidioplasty" has more than quadrupled since 2013 (Fig. 2). Of the 93 websites included, only 2 received an EQIP score greater than 27 (6%). Website scores ranged from 7 to 33, with a mean of 18.6 ± 4.8. Mean scores were highest for websites made by health departments (22.3) and lowest for those made by encyclopedias and academic institutions (16.0). The most frequent website types were research articles, web portals, hospital websites, and private practice sites, which averaged scores of 18.2, 19.7, 19.0, and 17.8, respectively. Health department sites averaged the highest content points (11.25), and academic institutions averaged the lowest (5.5). The average content score across all websites was 7.9 of 18. ChatGPT received a total score of 29: 17 for content, 2 for identification, and 10 for structure.
The artificial intelligence chatbot received the second highest score among all included online resources. CONCLUSIONS: Despite the continued use of search engines, the quality of online information on metoidioplasty remains exceptionally poor across most website developers. This study demonstrates the need to improve these resources, especially as interest in gender-affirming surgery continues to grow. ChatGPT and other artificial intelligence chatbots may be efficient and reliable alternatives for those seeking to understand complex medical information.


Subject(s)
Artificial Intelligence , Internet , Humans , Sex Reassignment Surgery/methods , Female , Male , Consumer Health Information/standards , Search Engine , Patient Education as Topic
2.
PeerJ ; 12: e17264, 2024.
Article in English | MEDLINE | ID: mdl-38803580

ABSTRACT

Background: Irritable bowel syndrome (IBS) is a functional gastrointestinal disorder (FGID) with heterogeneous clinical presentations. There are no clear testing parameters for its diagnosis, and the complex pathophysiology of IBS and the limited time that doctors have to spend with patients make it difficult to adequately educate patients in the outpatient setting. An increased awareness of IBS means that patients are more likely to self-diagnose and self-manage IBS based on their own symptoms. These factors may make patients more likely to turn to Internet resources. Wikipedia is the most popular online encyclopedia among English-speaking users and has been validated in numerous studies. However, in Mandarin-speaking regions, the Baidu Encyclopedia is most commonly used. There have been no studies on the reliability, readability, and objectivity of IBS information on the two sites. This is an urgent issue as these platforms are accessed by approximately 1.45 billion people. Objective: We compared the IBS content on Wikipedia (in English) and Baidu Baike (in Chinese), two online encyclopedias, in terms of reliability, readability, and objectivity. Methods: The Baidu Encyclopedia (in Chinese) and Wikipedia (in English) were evaluated based on the Rome IV IBS definitions and diagnoses. All possible synonyms and derivatives for IBS and IBS-related FGIDs were screened and identified. Two gastroenterology experts scored the articles from both sites using the DISCERN instrument, the Journal of the American Medical Association scoring system (JAMA), and the Global Quality Score (GQS). Results: Wikipedia scored higher overall on DISCERN (p < .0001), JAMA (p < .0001), and GQS (p < .05) than the Baidu Encyclopedia. Specifically, Wikipedia scored higher in DISCERN Section 1 (p < .0001), DISCERN Section 2 (p < .01), DISCERN Section 3 (p < .001), and the General DISCERN score (p < .0001) than the Baidu Encyclopedia. Both sites had low DISCERN Section 2 scores (p = .18).
Wikipedia also had a larger percentage of high quality scores in total DISCERN, DISCERN Section 1, and DISCERN Section 3 (p < .0001, p < .0001, and p < .0004, respectively, based on the "above 3" (60%) rule). Conclusions: Wikipedia provides more reliable, higher quality, and more objective IBS-related health information than the Baidu Encyclopedia. However, the information quality of both sites should be improved. Medical professionals and institutions should collaborate with these online platforms to offer better health information for IBS.


Subject(s)
Internet , Irritable Bowel Syndrome , Irritable Bowel Syndrome/diagnosis , Humans , Comprehension , Encyclopedias as Topic , Reproducibility of Results , Consumer Health Information/standards
3.
PLoS One ; 19(5): e0303308, 2024.
Article in English | MEDLINE | ID: mdl-38781283

ABSTRACT

BACKGROUND: This study assesses the quality and readability of Arabic online information about orthodontic pain. With the increasing reliance on the internet for health information, especially among Arabic speakers, it is critical to ensure the accuracy and comprehensiveness of available content. Our methodology involved a systematic search using the Arabic term for "orthodontic pain" in Google, Bing, and Yahoo. This search yielded 193,856 results, from which 74 websites were selected based on predefined criteria, excluding duplicates, scientific papers, and non-Arabic content. MATERIALS AND METHODS: For quality assessment, we used the DISCERN instrument, the Journal of the American Medical Association (JAMA) benchmarks, and the Health on the Net (HON) code. Readability was evaluated using the Simplified Measure of Gobbledygook (SMOG), Flesch Reading Ease Score (FRES), and Flesch-Kincaid Grade Level (FKGL) scores. RESULTS: Results indicated that none of the websites received the HONcode seal. The DISCERN assessment showed a median total score of 14.96 (± 5.65), with low overall quality ratings. Among the JAMA benchmarks, currency was the most achieved aspect, observed in 45 websites (60.81%), but none met all four criteria simultaneously. Readability scores suggested that the content was generally understandable, with a median FKGL score of 6.98 and a median SMOG score of 3.98, indicating middle school-level readability. CONCLUSION: This study reveals a significant gap in the quality of Arabic online resources on orthodontic pain, highlighting the need for improved standards and reliability. Most websites failed to meet established quality criteria, underscoring the necessity for more accurate and trustworthy health information for Arabic-speaking patients.
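The readability indices used here (and in several of the studies below) are closed-form functions of sentence, word, and syllable counts. A minimal sketch of the standard published formulas follows; the counts passed in are illustrative values, not data from any of these studies:

```python
import math

def flesch_reading_ease(words, sentences, syllables):
    # Flesch Reading Ease: higher is easier; 60-70 is "plain English".
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    # Flesch-Kincaid Grade Level: approximate US school grade required.
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def smog_index(polysyllables, sentences):
    # SMOG: grade level estimated from words of 3+ syllables,
    # normalized to a 30-sentence sample.
    return 1.043 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291

# Hypothetical counts for a 300-word patient information page
# (20 sentences, 450 syllables, 25 polysyllabic words):
print(round(flesch_reading_ease(300, 20, 450), 1))
print(round(flesch_kincaid_grade(300, 20, 450), 1))
print(round(smog_index(25, 20), 1))
```

These formulas were developed for English; scores reported for Arabic or Chinese text (as in several entries here) rely on adapted or language-specific instruments, so the constants above should be read only as the English-language baselines.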


Subject(s)
Comprehension , Internet , Humans , Consumer Health Information/standards , Language , Pain , Arabs , Reading
4.
J Alzheimers Dis ; 99(2): 667-678, 2024.
Article in English | MEDLINE | ID: mdl-38701143

ABSTRACT

Background: With the increasing popularity of the internet, a growing number of patients and their companions are actively seeking health-related information online. Objective: The aim of this study was to assess the quality and readability of online information about Alzheimer's disease (AD) in China. Methods: A total of 263 qualified AD-related web pages from different businesses, governments, and hospitals were obtained. The quality of the web pages was assessed using the DISCERN tool, and the readability of the web pages was assessed using a readability measurement website suitable for the Chinese language. The differences in readability and quality between different types of web pages were investigated, and the correlation between quality and readability was analyzed. Results: The mean overall DISCERN score was 40.93±7.5. The government group scored significantly higher than the commercial and hospital groups. The mean readability score was 12.74±1.27, and the commercial group had the lowest readability score. There was a positive correlation between DISCERN scores and readability scores. Conclusions: This study presents an evaluation of the quality and readability of health information pertaining to AD in China. The findings indicate that there is a need to enhance the quality and readability of web pages about AD in China. Recommendations for improvement are proposed in light of these findings.


Subject(s)
Alzheimer Disease , Comprehension , Internet , Humans , China , Consumer Health Information/standards , Health Literacy
5.
Cancer Med ; 13(9): e7167, 2024 May.
Article in English | MEDLINE | ID: mdl-38676385

ABSTRACT

BACKGROUND: Gynaecological cancer symptoms are often vague and non-specific. Quality health information is central to timely cancer diagnosis and treatment. The aim of this study was to identify and evaluate the quality of online text-based patient information resources regarding gynaecological cancer symptoms. METHODS: A targeted website search and Google search were conducted to identify health information resources published by the Australian government and non-government health organisations. Resources were classified by topic (gynaecological health, gynaecological cancers, cancer, general health); assessed for reading level (Simple Measure of Gobbledygook, SMOG) and difficulty (Flesch Reading Ease, FRE); and assessed for understandability and actionability (Patient Education Materials Assessment Tool, PEMAT, 0-100), where higher scores indicate better understandability/actionability. Seven criteria were used to assess cultural inclusivity specific to Aboriginal and Torres Strait Islander people; resources that met 3-5 items were deemed moderately inclusive and those meeting 6 or more items inclusive. RESULTS: A total of 109 resources were identified and 76% provided information on symptoms in the context of gynaecological cancers. The average readability was equivalent to a grade 10 reading level on the SMOG and classified as 'difficult to read' on the FRE. The mean PEMAT scores were 95% (range 58-100) for understandability and 13% (range 0-80) for actionability. Five resources were evaluated as being moderately culturally inclusive. No resource met all the benchmarks. CONCLUSIONS: This study highlights the inadequate quality of online resources available on pre-diagnosis gynaecological cancer symptom information. Resources should be revised in line with the recommended standards for readability, understandability and actionability and to meet the needs of a culturally diverse population.


Subject(s)
Genital Neoplasms, Female , Internet , Humans , Female , Genital Neoplasms, Female/diagnosis , Australia , Consumer Health Information/standards , Patient Education as Topic/methods , Comprehension , Health Literacy
6.
Colorectal Dis ; 26(5): 1014-1027, 2024 May.
Article in English | MEDLINE | ID: mdl-38561871

ABSTRACT

AIM: The aim was to examine the quality of online patient information resources for patients considering parastomal hernia treatment. METHODS: A Google search was conducted using lay search terms for patient-facing sources on parastomal hernia. The quality of the content was assessed using the validated DISCERN instrument. Readability of written content was established using the Flesch-Kincaid score. Sources were also assessed against the essential content and process standards from the National Institute for Health and Care Excellence (NICE) framework for shared decision making support tools. Content analysis was also undertaken to explore what the sources covered and to identify any commonalities across the content. RESULTS: Fourteen sources were identified and assessed using the identified tools. The mean Flesch-Kincaid reading ease score was 43.61, suggesting that the information was difficult to read. The overall quality of the identified sources was low based on the pooled analysis of the DISCERN and Flesch-Kincaid scores, and when assessed against the criteria in the NICE standards framework for shared decision making tools. Content analysis identified eight categories encompassing 59 codes, which highlighted considerable variation between sources. CONCLUSIONS: The current information available to patients considering parastomal hernia treatment is of low quality and often does not contain enough information on treatment options for patients to be able to make an informed decision about the best treatment for them. There is a need for high-quality information, ideally co-produced with patients, to provide patients with the necessary information to allow them to make informed decisions about their treatment options when faced with a symptomatic parastomal hernia.


Subject(s)
Internet , Patient Education as Topic , Humans , Consumer Health Information/standards , Surgical Stomas/adverse effects , Incisional Hernia/surgery , Comprehension , Herniorrhaphy
7.
Surg Endosc ; 38(5): 2887-2893, 2024 May.
Article in English | MEDLINE | ID: mdl-38443499

ABSTRACT

INTRODUCTION: Generative artificial intelligence (AI) chatbots have recently been posited as potential sources of online medical information for patients making medical decisions. Existing online patient-oriented medical information has repeatedly been shown to be of variable quality and difficult readability. Therefore, we sought to evaluate the content and quality of AI-generated medical information on acute appendicitis. METHODS: A modified DISCERN assessment tool, comprising 16 distinct criteria each scored on a 5-point Likert scale (score range 16-80), was used to assess AI-generated content. Readability was determined using the Flesch Reading Ease (FRE) and Flesch-Kincaid Grade Level (FKGL) scores. Four popular chatbots, ChatGPT-3.5 and ChatGPT-4, Bard, and Claude-2, were prompted to generate medical information about appendicitis. Three investigators independently scored the generated texts blinded to the identity of the AI platforms. RESULTS: ChatGPT-3.5, ChatGPT-4, Bard, and Claude-2 had overall mean (SD) quality scores of 60.7 (1.2), 62.0 (1.0), 62.3 (1.2), and 51.3 (2.3), respectively, on a scale of 16-80. Inter-rater reliability was 0.81, 0.75, 0.81, and 0.72, respectively, indicating substantial agreement. Claude-2 demonstrated a significantly lower mean quality score compared to ChatGPT-4 (p = 0.001), ChatGPT-3.5 (p = 0.005), and Bard (p = 0.001). Bard was the only AI platform that listed verifiable sources, while Claude-2 provided fabricated sources. All chatbots except for Claude-2 advised readers to consult a physician if experiencing symptoms. Regarding readability, FKGL and FRE scores of ChatGPT-3.5, ChatGPT-4, Bard, and Claude-2 were 14.6 and 23.8, 11.9 and 33.9, 8.6 and 52.8, and 11.0 and 36.6, respectively, indicating difficult readability at a college reading level. CONCLUSION: AI-generated medical information on appendicitis scored favorably upon quality assessment, but most chatbots either fabricated sources or provided none at all.
Additionally, overall readability far exceeded recommended levels for the public. Generative AI platforms demonstrate measured potential for patient education and engagement about appendicitis.
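The modified DISCERN scheme described above (16 items, each rated 1-5, total range 16-80, averaged across blinded raters) reduces to simple aggregation. A minimal sketch with made-up ratings; the rater scores below are illustrative, not data from the study:

```python
from statistics import mean, stdev

def discern_total(item_scores):
    # Sum of 16 Likert items (1-5 each) -> total in the range 16..80.
    assert len(item_scores) == 16, "modified DISCERN uses 16 criteria"
    assert all(1 <= s <= 5 for s in item_scores), "each item is a 1-5 Likert rating"
    return sum(item_scores)

# Three hypothetical raters scoring one chatbot's generated text:
rater_totals = [
    discern_total([4] * 8 + [3] * 8),    # 32 + 24 = 56
    discern_total([4] * 6 + [3] * 10),   # 24 + 30 = 54
    discern_total([4] * 10 + [3] * 6),   # 40 + 18 = 58
]
print(f"mean (SD): {mean(rater_totals):.1f} ({stdev(rater_totals):.1f})")
```

This mirrors the "mean (SD)" presentation used in the abstract; the study additionally reports an inter-rater reliability statistic, which is computed across raters rather than from a single rater's totals.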


Subject(s)
Appendicitis , Artificial Intelligence , Humans , Comprehension , Internet , Consumer Health Information/standards , Patient Education as Topic/methods
8.
Liver Int ; 44(6): 1373-1382, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38441405

ABSTRACT

BACKGROUND & AIMS: Short videos, crucial for disseminating health information on metabolic dysfunction-associated steatotic liver disease (MASLD), lack a clear evaluation of quality and reliability. This study aimed to assess the quality and reliability of MASLD-related videos on Chinese platforms. METHODS: Video samples were collected from three platforms (TikTok, Kwai and Bilibili) during the period from November 2019 to July 2023. Two independent reviewers evaluated the integrity of the information contained therein by scoring six key aspects of its content: definition, epidemiology, risk factors, outcomes, diagnosis and treatment. The quality and reliability of the videos were assessed using the Journal of the American Medical Association (JAMA) criteria, the Global Quality Score (GQS) and the modified DISCERN score. RESULTS: A total of 198 videos were included. The video content exhibited an overall unsatisfactory quality, with a primary emphasis on risk factors and treatment, while diagnosis and epidemiology were seldom addressed. Regarding the sources of the videos, the GQS and modified DISCERN scores varied significantly between the platforms (p = .003), although they had generally similar JAMA scores (p = .251). Videos created by medical professionals differed significantly in terms of JAMA scores (p = .046) compared to those created by nonmedical professionals, but there were no statistically significant differences in GQS (p = .923) or modified DISCERN scores (p = .317). CONCLUSIONS: The overall quality and reliability of the videos were poor and varied between platforms and uploaders. Platforms and healthcare professionals should strive to provide more reliable health-related information regarding MASLD.


Subject(s)
Video Recording , Humans , Reproducibility of Results , China/epidemiology , Risk Factors , Non-alcoholic Fatty Liver Disease/epidemiology , Non-alcoholic Fatty Liver Disease/therapy , Fatty Liver/diagnosis , Fatty Liver/therapy , Consumer Health Information/standards
9.
Eye Contact Lens ; 50(6): 243-248, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38477759

ABSTRACT

OBJECTIVES: To determine the compliance of online vendors with the UK Opticians Act 1989 Section 27 requirements and safety regulations for cosmetic contact lens (CCL) sales, and the quality of online CCL health information. METHODS: The top 50 websites selling CCLs on each of three search engines, namely Google, Yahoo, and Bing, were selected. Duplicates were removed, and the remaining websites were systematically analyzed in February 2023. UK legal authorization for CCL sales was assessed using the Opticians Act Section 27, and safety regulation compliance was determined by the presence of Conformité Européenne (CE) marking. The quality and reliability of online information were graded using the DISCERN (16-80) and JAMA (0-4) scores by two independent reviewers. RESULTS: Forty-seven eligible websites were analyzed. Only six (12.7%) met the UK legal authorization for CCL sales. Forty-nine different brands of CCLs were sold on these websites, of which 13 (26.5%) had no CE marking. The mean DISCERN and JAMA benchmark scores were 26 ± 12.2 and 1.3 ± 0.6, respectively (intraclass correlation scores: 0.99 for both). CONCLUSIONS: A significant number of websites provide consumers with easy, unsafe, and unregulated access to CCLs. Most online stores do not meet the requirements set out in the Opticians Act for CCL sales in the United Kingdom. A significant number of CCLs lack CE marking, while the average quality of information on websites selling CCLs is poor. Together, these pose a risk to consumers purchasing CCLs from unregulated websites, and therefore, further stringent regulations on the online sales of these products are needed.


Subject(s)
Consumer Health Information , Internet , Humans , United Kingdom , Consumer Health Information/standards , Cosmetics/standards , Contact Lenses , Consumer Product Safety/legislation & jurisprudence , Consumer Product Safety/standards
12.
Aesthetic Plast Surg ; 48(9): 1688-1697, 2024 May.
Article in English | MEDLINE | ID: mdl-38360956

ABSTRACT

BACKGROUND: Eyelid ptosis is an underestimated pathology deeply affecting patients' quality of life. The Internet has increasingly become the major source of information regarding health care, and patients often browse websites to acquire an initial knowledge of the subject. However, there is a lack of data concerning the quality of available information on eyelid ptosis and its treatment. We systematically evaluated the quality of online information on eyelid ptosis using the "Ensuring Quality Information for Patients" (EQIP) scale. MATERIALS AND METHODS: Google, Yahoo, and Bing were searched for the keywords "Eyelid ptosis," "Eyelid ptosis surgery," and "Blepharoptosis." The first 50 hits were included, evaluating the quality of information with the expanded EQIP tool. Websites in English and intended for general non-medical public use were included. Irrelevant documents, videos, pictures, blogs, and articles with no access were excluded. RESULTS: Of 138 eligible websites, 79 (57.7%) addressed more than 20 EQIP items, with an overall median score of 20.2. Only 2% discussed procedure complication rates. Most failed to disclose severe complications or quantify risks, and fewer than 18% clarified the potential need for additional treatments. Surgical procedure details were lacking, and there was insufficient information about pre-/postoperative precautions for patients. The quality of online information has not improved since the COVID-19 pandemic. CONCLUSIONS: This study highlights the urgent requirement for improved patient-oriented websites adhering to international standards for plastic and oculoplastic surgery. Healthcare providers should effectively guide their patients in finding trustworthy and reliable information on eyelid ptosis correction. LEVEL OF EVIDENCE V: This journal requires that authors assign a level of evidence to each article.
For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.


Subject(s)
Blepharoplasty , Blepharoptosis , Internet , Humans , Blepharoptosis/surgery , Blepharoplasty/methods , Consumer Health Information/standards , Patient Education as Topic/methods , Female , Male
13.
Eur J Vasc Endovasc Surg ; 67(5): 738-745, 2024 May.
Article in English | MEDLINE | ID: mdl-38185375

ABSTRACT

OBJECTIVE: This study aimed to assess the quality of patient information material regarding elective abdominal aortic aneurysm (AAA) repair on the internet using the Modified Ensuring Quality Information for Patients (MEQIP) tool. METHODS: A qualitative assessment of internet based patient information was performed. The 12 most used search terms relating to AAA repair were identified using Google Trends, with the first 10 pages of websites retrieved for each term searched. Duplicates were removed, and information for patients undergoing elective AAA repair was selected. Further exclusion criteria were marketing material, academic journals, videos, and non-English language sites. The remaining websites were then MEQIP scored independently by two reviewers, producing a final score by consensus. RESULTS: A total of 1,297 websites were identified, with 235 (18.1%) eligible for analysis. The median MEQIP score was 18 (interquartile range [IQR] 14, 21) out of a possible 36. The highest score was 33. Websites in the 99th percentile scored > 27, with four of these six sites representing online copies of hospital patient information leaflets; however, hospital sites overall had lower median MEQIP scores than most other institution types. MEQIP subdomain median scores were: content, 8 (IQR 6, 11); identification, 3 (IQR 1, 3); and structure, 7 (IQR 6, 9). Of the analysed websites, 77.9% originated from the USA (median score 17) and 12.8% originated in the UK (median score 22). Search engine ranking was related to website institution type but had no correlation with MEQIP score. CONCLUSION: When assessed by the MEQIP tool, most websites regarding elective AAA repair are of questionable quality. This is in keeping with studies in other surgical and medical fields. Search engine ranking is not a reliable measure of the quality of patient information material regarding elective AAA repair.
Health practitioners should be aware of this issue as well as the whereabouts of high quality material to which patients can be directed.


Subject(s)
Aortic Aneurysm, Abdominal , Consumer Health Information , Elective Surgical Procedures , Internet , Patient Education as Topic , Aortic Aneurysm, Abdominal/surgery , Humans , Elective Surgical Procedures/standards , Patient Education as Topic/standards , Consumer Health Information/standards , Vascular Surgical Procedures/standards
14.
J Craniofac Surg ; 35(4): 1157-1159, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38284877

ABSTRACT

The aim of this study was to evaluate the quality of the information on YouTube regarding night guards (NGs). YouTube was systematically searched using the keyword "night guards." Two independent reviewers examined the first 100 videos and exclusion criteria were applied. Descriptive characteristics of the remaining 60 videos were recorded. In addition, the purpose, target audience, and source of the included videos were collected. A 12-point content scale (CS) was used to evaluate video content, and the Global Quality Scale (GQS) was used to determine video quality. Statistical analyses were performed using the Kruskal-Wallis and Mann-Whitney tests, and the correlation between scores was evaluated using Spearman rho. Of the included videos, 50% were uploaded by dentists/health institutions, 26% by commercial sources, and 24% by laypersons. The aim of 80% of the videos was to inform laypeople, while 14% aimed to inform professionals only. The most frequently discussed content (59.3%) was the production stages of NGs. The mean CS and GQS scores of the videos were 2.06 ± 1.35 (poor) and 1.71 ± 0.88 (generally poor), respectively. A positive correlation was found between the CS and GQS scores (r = 0.447). YouTube videos were found to be poor in terms of both content and quality. Since NGs for treating bruxism will always be a trending topic for patients on social media, the content of YouTube videos should be checked and enriched by professionals so that patients can access accurate information, especially about NGs obtained over the counter.


Subject(s)
Social Media , Video Recording , Humans , Consumer Health Information/standards , Dental Devices, Home Care , Patient Education as Topic , Sleep Bruxism
15.
J Pediatr Ophthalmol Strabismus ; 61(3): 198-203, 2024.
Article in English | MEDLINE | ID: mdl-38112390

ABSTRACT

PURPOSE: To evaluate the quality, reliability, technical quality, and readability of online information related to childhood glaucoma. METHODS: In this cross-sectional study, no human subjects were studied. Analysis was done for online websites on childhood glaucoma. The terms "childhood glaucoma," "pediatric glaucoma," "congenital glaucoma," "buphthalmos," and "big eyes" were entered into the Google search engine and the first 100 search results were assessed for quality, reliability, technical quality, and readability. Peer-reviewed articles, patient forum posts, dictionary definitions, and websites that appeared as targeted ads, were not in English, or were not focused on humans were excluded. Each website was evaluated for (1) quality and reliability using the DISCERN, HONcode, and JAMA criteria; (2) technical quality assessing 11 technical aspects; and (3) readability using six separate criteria (Flesch Reading Ease score, Flesch-Kincaid Grade Level, Gunning Fog Index score, the Simple Measure of Gobbledygook Index, Coleman-Liau Index, and Automated Readability Index). RESULTS: The median scores for the DISCERN, HONcode, and JAMA criteria were 2.6 (range = 1 to 4.75; 1 = worst, 5 = best), 10 (range = 0 to 16; 0 = worst, 16 = best), and 2 (range = 0 to 4; 0 = worst, 4 = best), respectively. The median technical quality score was 0.7. Readability was poor among most websites, with a median Flesch-Kincaid Grade Level score of 9.3. The median Gunning Fog Index score was 9.8. There were statistically significantly higher JAMA scores and Gunning Fog Index scores among the private websites compared to the institutional websites. However, institutional websites had higher technical quality. CONCLUSIONS: Online information on childhood glaucoma had poor to moderate quality and reliability. The technical quality is good; however, most websites' readability was above the recommended 5th to 6th grade reading level. [J Pediatr Ophthalmol Strabismus. 2024;61(3):198-203.].


Subject(s)
Comprehension , Glaucoma , Internet , Humans , Cross-Sectional Studies , Reproducibility of Results , Child , Search Engine , Consumer Health Information/standards
16.
Exp Dermatol ; 32(8): 1317-1321, 2023 08.
Article in English | MEDLINE | ID: mdl-36815282

ABSTRACT

Generalized pustular psoriasis (GPP) is a multisystem disease with potentially life-threatening adverse effects. As patients increasingly seek health information online, and as the landscape for GPP changes, the quality of online health information (OHI) becomes progressively more important. This paper is the first of its kind to examine the quality, comprehensiveness and readability of online health information for GPP. Similar to pre-existing studies evaluating OHI, this paper examines five key search terms for GPP: three medical and two lay terms. For each search term, the results were evaluated based on HONcode accreditation, an enhanced DISCERN analysis and a number of readability indices. Of the 500 websites evaluated, 84 (16.8%) were HONcode-accredited. Mean DISCERN scores of all websites were 74.9% and 38.6% for the website reliability and treatment sections, respectively, demonstrating key gaps in the comprehensiveness and reliability of GPP-specific OHI. Additionally, only 4/100 websites (4%) analysed for readability were written at the NIH-recommended sixth-grade level. Academic websites were significantly more difficult to read than governmental websites. This further exacerbates the patient information gap, particularly for patients with low health literacy, who may already be at higher risk of not receiving timely medical care.


Subject(s)
Comprehension , Consumer Health Information , Internet , Psoriasis , Humans , Consumer Health Information/standards , Access to Information
17.
Front Public Health ; 11: 1344212, 2023.
Article in English | MEDLINE | ID: mdl-38259733

ABSTRACT

Background: Health education about Helicobacter pylori (H. pylori) is one of the most effective methods to prevent H. pylori infection and standardize H. pylori eradication treatment. Short videos enable people to absorb and remember information more easily and are an important source of health education. This study aimed to assess the information quality of H. pylori-related videos on Chinese short video-sharing platforms. Methods: A total of 242 H. pylori-related videos were retrieved from the three Chinese short video-sharing platforms with the most users: TikTok, Bilibili, and Kwai. The Global Quality Score (GQS) and the modified DISCERN tool were used to assess the quality and content of videos, respectively. Additionally, comparative analyses of videos based on different sources and common H. pylori issues were also conducted. Results: The median GQS and DISCERN scores for the H. pylori-related videos analyzed in this study were both 2. Non-gastroenterologists posted the most H. pylori-related videos (136/242, 56.2%). Videos from gastroenterologists (51/242, 21.0%) had the highest GQS and DISCERN scores, with a median of 3. Few videos had content on family-based H. pylori infection control and management (5.8%), whether all H. pylori-positive patients need to undergo eradication treatment (27.7%), or the adverse effects of H. pylori eradication therapy (16.1%). Conclusion: Generally, the content and quality of the information in H. pylori-related videos were unsatisfactory, and the quality of a video correlated with its source. Videos from gastroenterologists provided more correct guidance with higher-quality information on the prevention and treatment of H. pylori infection.


Subject(s)
Health Education , Helicobacter Infections , Helicobacter pylori , Social Media , Humans , Asian People , Cross-Sectional Studies , Health Education/standards , Information Sources , Consumer Health Information/standards , Helicobacter Infections/prevention & control , Helicobacter Infections/therapy , China , Video Recording , Gastroenterology
18.
Clin Exp Dermatol ; 47(3): 606-608, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34767665

ABSTRACT

YouTube® is a powerful resource for disseminating health information to the general public. We assessed the quality, understandability and evidence-based content of YouTube videos discussing hidradenitis suppurativa treatments, categorized by source: dermatologists, other healthcare professionals, or patients. We found that patient videos were the most popular among viewers but had the worst ratings for quality and understandability of the information presented. Moreover, patient videos were the least likely to recommend evidence-based treatments. Moving forward, dermatologists and patient advocates should partner to create engaging, educational online content that helps patients take ownership of their own health.


Subject(s)
Consumer Health Information/standards , Hidradenitis Suppurativa/therapy , Social Media/standards , Evidence-Based Medicine , Humans
20.
J Korean Med Sci ; 36(45): e303, 2021 Nov 22.
Article in English | MEDLINE | ID: mdl-34811977

ABSTRACT

BACKGROUND: YouTube has become an increasingly popular educational tool and an important source of healthcare information. We investigated the reliability and quality of the information in Korean-language YouTube videos about gout. METHODS: We performed a comprehensive electronic search on April 2, 2021, using the keywords "gout," "acute gout," "gouty arthritis," "gout treatment," and "gout attack," and identified 140 videos in the Korean language. Two rheumatologists then categorized the videos into three groups: "useful," "misleading," and "personal experience." Reliability was assessed using a five-item questionnaire modified from the DISCERN validation tool, and overall quality was scored with the Global Quality Scale (GQS). RESULTS: Among the 140 videos identified, 105 (75.0%), 29 (20.7%), and 6 (4.3%) were categorized as "useful," "misleading," and "personal experience," respectively. Most videos in the "useful" group were created by rheumatologists (70.5%). The mean DISCERN and GQS scores in the "useful" group (3.3 ± 1.0 and 3.8 ± 0.7) were higher than those in the "misleading" (0.9 ± 1.0 and 1.9 ± 0.6) and "personal experience" groups (0.8 ± 1.2 and 2.0 ± 0.8) (P < 0.001 for both tools). CONCLUSION: Approximately 75% of YouTube videos containing educational material on gout were useful; however, we observed some inaccuracies in the medical information provided. Healthcare professionals should closely monitor media content and actively participate in developing videos that provide accurate medical information.


Subject(s)
Consumer Health Information/standards , Gout/pathology , Social Media , Gout/diagnosis , Gout/therapy , Humans , Information Dissemination , Republic of Korea , Rheumatologists/psychology