Results 1 - 20 of 193
1.
JMIR Form Res ; 8: e56594, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39088820

ABSTRACT

BACKGROUND: The development of internet technology has greatly increased the ability of patients with chronic obstructive pulmonary disease (COPD) to obtain health information, giving patients more initiative in patient-physician decision-making. However, concerns about the quality of online health information can discourage patients from searching the web at all. It is therefore necessary to evaluate the current state of Chinese-language internet information on COPD. OBJECTIVE: This study aims to evaluate the quality of COPD treatment information on the Chinese internet. METHODS: Using the standard disease name "慢性阻塞性肺疾病" ("chronic obstructive pulmonary disease" in Chinese) and the commonly used public search terms "慢阻肺" ("COPD") and "肺气肿" ("emphysema"), each combined with the keyword "治疗" ("treatment"), we queried the desktop web versions of the Baidu, Sogou, and 360 search engines and screened the first 50 links returned by each from July to August 2021. All included websites were restricted to Chinese. The DISCERN tool was used to evaluate the websites. RESULTS: A total of 96 websites were included and analyzed. The mean overall DISCERN score across websites was 30.4 (SD 10.3; range 17.3-58.7; low quality); no website reached the maximum DISCERN score of 75, and the mean score per item was 2.0 (SD 0.7; range 1.2-3.9). Mean DISCERN scores differed significantly between search terms, with "chronic obstructive pulmonary disease" scoring highest. CONCLUSIONS: The quality of COPD information on the Chinese internet is poor, mainly because of the low reliability and relevance of treatment information, which can easily lead consumers to inappropriate treatment choices. Among commonly used disease search terms, "chronic obstructive pulmonary disease" had the highest DISCERN score. Consumers are advised to use standard disease names when searching online, as the information retrieved is relatively more reliable.
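As context for the scores above: DISCERN totals the ratings of its 15 scored items (each rated 1-5), giving the 75-point maximum mentioned in the abstract. A minimal sketch of that aggregation, with hypothetical item ratings:

```python
# Aggregate DISCERN ratings for one website (illustrative only).
# DISCERN rates 15 items from 1 (criterion not met) to 5 (fully met):
# Q1-Q8 cover reliability, Q9-Q15 cover treatment information,
# so the maximum total is 75. These ratings are hypothetical.
item_ratings = [2, 3, 1, 2, 2, 3, 2, 1,   # Q1-Q8: reliability
                2, 2, 3, 1, 2, 2, 2]      # Q9-Q15: treatment information

total = sum(item_ratings)                  # out of 75
mean_per_item = total / len(item_ratings)  # out of 5
print(f"DISCERN total: {total}/75, mean per item: {mean_per_item:.1f}")
```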

2.
Cureus ; 16(6): e62762, 2024 Jun.
Article in English | MEDLINE | ID: mdl-39036142

ABSTRACT

Researchers investigated the quality of selected TikTok educational content regarding gastroesophageal reflux disease (GERD). One hundred TikTok videos that met the inclusion criteria were analyzed using DISCERN, a tool that evaluates the quality of consumer health information on the internet. There was no substantial difference in DISCERN scores between physician and non-physician content creators. Nevertheless, both groups consistently scored low (<3) in areas such as providing sources of information, indicating the publication date of their sources, discussing treatment risks, and outlining potential consequences if no treatment is pursued.

3.
Int Ophthalmol ; 44(1): 329, 2024 Jul 18.
Article in English | MEDLINE | ID: mdl-39026115

ABSTRACT

PURPOSE: To evaluate the quality and reliability of YouTube videos as an educational resource about myopia. METHODS: The videos were identified by searching YouTube with the keywords 'myopia' and 'nearsightedness', using the website's default search settings. The number of views, likes, dislikes, view ratio, source of the upload, country of origin, video type, and described treatment techniques were assessed. Each video was evaluated using the DISCERN, Journal of the American Medical Association (JAMA), Ensuring Quality Information for Patients (EQIP), Health On the Net Code of Conduct Certification (HONcode), and the Global Quality Score (GQS) scales. RESULTS: A total of 112 videos were included. The classification of videos by source indicated that the top three contributors were health channels (30 videos [26.8%]), physicians (24 videos [21.4%]), and academic centers (19 videos [16.9%]). Most of these videos originated from the United States (74 videos [66.1%]) and focused on the pathophysiology (n = 89, 79.4%) and the treatment (n = 77, 68.7%) of myopia. Statistical comparisons among the groups of video sources showed no significant difference in the mean DISCERN score (p = 0.102). However, significant differences were noted in the JAMA (p = 0.011), GQS (p = 0.009), HONcode (p = 0.011), and EQIP (p = 0.002) scores. CONCLUSIONS: This study underscored the variability in the quality and reliability of YouTube videos related to myopia, with most content ranging from 'weak to moderate' quality based on the DISCERN and GQS scales, yet appearing to be 'excellent' according to the HONcode and EQIP scales. Videos uploaded by physicians generally exhibited higher standards, highlighting the importance of expert involvement in online health information dissemination. Given the potential risks of accessing incorrect medical data that can affect the decision-making processes of patients, caution should be exercised when using online content as a source of information.


Subject(s)
Myopia , Social Media , Video Recording , Humans , Myopia/therapy , Myopia/physiopathology , Social Media/standards , Reproducibility of Results , Information Dissemination/methods , Patient Education as Topic/methods , Patient Education as Topic/standards
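Several of the scales named in the preceding abstract are simple checklists. As one example, the JAMA benchmark awards one point each for authorship, attribution, disclosure, and currency; a minimal sketch, with hypothetical flags:

```python
# JAMA benchmark scoring (illustrative): one point per criterion met, 0-4.
criteria = {
    "authorship":  True,   # authors and credentials identified
    "attribution": False,  # sources and references cited
    "disclosure":  False,  # ownership, sponsorship, conflicts disclosed
    "currency":    True,   # posting/update dates shown
}
jama_score = sum(criteria.values())
print(f"JAMA benchmark score: {jama_score}/4")
```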
4.
BMC Oral Health ; 24(1): 798, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39010000

ABSTRACT

BACKGROUND: The aim of this study was to evaluate the content and quality of videos about bruxism treatments on YouTube, a platform frequently used by patients today to obtain information. METHODS: A YouTube search was performed using the keywords "bruxism treatment" and "teeth grinding treatment". The "sort by relevance" filter was used for both search terms, and the first 150 videos were saved. A total of 139 videos that met the study criteria were included. Videos were classified as poor, moderate, or excellent based on a usefulness score that evaluated content quality. The modified DISCERN tool was also used to evaluate video quality. Additionally, videos were categorized by upload source, target audience, and video type. The types of treatments mentioned in the videos and the videos' demographic data were recorded. RESULTS: According to the usefulness score, 59% of the videos were poor quality, 36.7% moderate quality, and 4.3% excellent quality. Moderate-quality videos had a higher interaction index than excellent-quality videos (p = 0.039). Excellent-quality videos were longer than moderate- and poor-quality videos (p = 0.024, p = 0.002). Videos with poor-quality content had significantly lower DISCERN scores than videos with moderate- (p < 0.001) and excellent-quality content (p = 0.008). There was a significantly positive, moderate (r = 0.446) correlation between DISCERN scores and content usefulness scores (p < 0.001), and only a weak positive correlation between DISCERN scores and video length (r = 0.359; p < 0.001). Videos uploaded by physiotherapists had significantly more views per day and a higher viewing rate than videos uploaded by medical doctors (p = 0.037), university-hospital-institute channels (p = 0.024), and dentists (p = 0.006). They also had notably more likes and comments than videos uploaded by medical doctors (p = 0.023 and p = 0.009, respectively), university-hospital-institute channels (p = 0.003 and p = 0.008), and dentists (p = 0.002 and p = 0.002). CONCLUSIONS: Although the majority of YouTube videos about bruxism treatments are produced by professionals, most contain limited information, which may leave patients uncertain about treatment methods. Health professionals should warn patients about this potentially misleading content and direct them to reliable sources.


Subject(s)
Bruxism , Social Media , Video Recording , Humans , Bruxism/therapy , Reproducibility of Results
5.
Clin Genitourin Cancer ; 22(5): 102145, 2024 Jun 29.
Article in English | MEDLINE | ID: mdl-39033711

ABSTRACT

AIM: To examine the reliability of ChatGPT in evaluating the quality of the medical content of the most-watched videos related to urological cancers on YouTube. MATERIAL AND METHODS: In March 2024, a playlist was created of the 20 most-watched YouTube videos for each type of urological cancer. The video texts were evaluated by ChatGPT and by a urology specialist using the DISCERN-5 and Global Quality Scale (GQS) questionnaires, and the results were compared using the Kruskal-Wallis test. RESULTS: For the prostate, bladder, renal, and testicular cancer videos, the median (IQR) DISCERN-5 scores were 4 [1], 3 [0], 3 [2], and 3 [1] from the human evaluator (P = .11) and 3 [1.75], 3 [1], 3 [2], and 3 [0] from ChatGPT (P = .4); the corresponding GQS scores were 4 [1.75], 3 [0.75], 3.5 [2], and 3.5 [1] from the human evaluator (P = .12) and 4 [1], 3 [0.75], 3 [1], and 3.5 [1] from ChatGPT (P = .1), with no significant differences between the scores. The repeatability of the ChatGPT responses was similar across cancer types: 25% for prostate cancer, 30% for bladder cancer, 30% for renal cancer, and 35% for testicular cancer (P = .92). No statistically significant difference was determined between the median (IQR) DISCERN-5 and GQS scores given by the human evaluator and ChatGPT for videos about prostate, bladder, renal, and testicular cancer (P > .05). CONCLUSION: Although ChatGPT is successful in evaluating the medical quality of video texts, the results should be interpreted with caution, as their repeatability is low.
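A minimal sketch of the Kruskal-Wallis comparison described above, using SciPy and hypothetical DISCERN-5 scores for the four video groups:

```python
# Compare DISCERN-5 score distributions across four cancer-video groups
# with the Kruskal-Wallis H test (all scores hypothetical).
from scipy.stats import kruskal

prostate   = [4, 3, 4, 5, 3]
bladder    = [3, 3, 2, 3, 4]
renal      = [3, 2, 4, 3, 3]
testicular = [3, 4, 3, 3, 2]

h_stat, p_value = kruskal(prostate, bladder, renal, testicular)
print(f"H = {h_stat:.2f}, p = {p_value:.3f}")
```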

6.
Front Surg ; 11: 1373843, 2024.
Article in English | MEDLINE | ID: mdl-38903865

ABSTRACT

Purpose: This study aims to evaluate the effectiveness of ChatGPT-4, an artificial intelligence (AI) chatbot, in providing accurate and comprehensible information to patients regarding otosclerosis surgery. Methods: On October 20, 2023, 15 hypothetical questions were posed to ChatGPT-4 to simulate physician-patient interactions about otosclerosis surgery. Responses were evaluated by three independent ENT specialists using the DISCERN scoring system. The readability was evaluated using multiple indices: Flesch Reading Ease (FRE), Flesch-Kincaid Grade Level (FKGL), Gunning Fog Index (Gunning FOG), Simple Measure of Gobbledygook (SMOG), Coleman-Liau Index (CLI), and Automated Readability Index (ARI). Results: The responses from ChatGPT-4 received DISCERN scores ranging from poor to excellent, with an overall score of 50.7 ± 8.2. The readability analysis indicated that the texts were above the 6th-grade level, suggesting they may not be easily comprehensible to the average reader. There was a significant positive correlation between the referees' scores. Despite providing correct information in over 90% of the cases, the study highlights concerns regarding the potential for incomplete or misleading answers and the high readability level of the responses. Conclusion: While ChatGPT-4 shows potential in delivering health information accurately, its utility is limited by the level of readability of its responses. The study underscores the need for continuous improvement in AI systems to ensure the delivery of information that is both accurate and accessible to patients with varying levels of health literacy. Healthcare professionals should supervise the use of such technologies to enhance patient education and care.
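Two of the indices named above have simple closed forms. A minimal sketch of the standard Flesch Reading Ease and Flesch-Kincaid Grade Level formulas, with hypothetical counts for one response:

```python
# Flesch formulas from their standard definitions (counts hypothetical).
words, sentences, syllables = 180, 9, 310

fre = 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
fkgl = 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

print(f"Flesch Reading Ease: {fre:.1f}")          # higher = easier to read
print(f"Flesch-Kincaid Grade Level: {fkgl:.1f}")  # US school-grade estimate
```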

7.
Oral Maxillofac Surg ; 28(3): 1431-1436, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38910212

ABSTRACT

PURPOSE: In the digital era, the internet is the go-to source of information, and patients often seek insights on medical conditions like TMJ ankylosis. YouTube, a popular platform, is widely used for this purpose. However, YouTube's lack of regulation means it can host unreliable content. Hence, the primary objective of this study is to assess the scientific quality of YouTube videos concerning TMJ ankylosis. MATERIALS AND METHODS: This study analyzed 59 TMJ ankylosis-related videos, assessed by two Oral and Maxillofacial Surgery specialists. Data on the video source, duration, upload date, time elapsed since upload, total views, likes, dislikes, comments, interaction index, and viewing rate were collected and analyzed. Video quality was assessed using the Global Quality Scale (GQS) and the Quality Criteria for Consumer Health Information (DISCERN), comparing health professionals with non-health professionals. RESULTS: Videos from health professionals scored better (GQS 3.21 ± 0.94; DISCERN 3.03 ± 0.75) than videos from non-health professionals (GQS 3.0 ± 1.04; DISCERN 2.81 ± 1.13), showing greater reliability and better quality (p < 0.01). CONCLUSION: YouTube should not be relied on as a trustworthy source of high-quality, reliable information on TMJ ankylosis. Healthcare professionals must be prepared to address ambiguous or misleading information and to prioritize building trustworthy relationships with patients through accurate diagnostic and therapeutic processes.


Subject(s)
Ankylosis , Social Media , Temporomandibular Joint Disorders , Video Recording , Humans , Ankylosis/surgery , Consumer Health Information/standards , Information Dissemination , Information Sources
8.
BMC Public Health ; 24(1): 1216, 2024 May 02.
Article in English | MEDLINE | ID: mdl-38698404

ABSTRACT

BACKGROUND: Acute pancreatitis (AP) is a common acute digestive system disorder, and patients often turn to TikTok for AP-related information. However, the quality of the platform's videos on AP has not been thoroughly investigated. OBJECTIVE: The main purpose of this study is to evaluate the quality of videos about AP on TikTok; the secondary purpose is to study factors related to video quality. METHODS: AP-related videos were retrieved from TikTok and screened and analyzed against predefined inclusion and exclusion criteria, with relevant data extracted and compiled for evaluation. Video quality was scored using the DISCERN instrument and the Health on the Net (HONcode) score, complemented by a newly introduced Acute Pancreatitis Content Score (APCS). Pearson correlation analysis was used to assess the correlation between video quality scores and user engagement metrics such as likes, comments, favorites, retweets, and video duration. RESULTS: A total of 111 TikTok videos were included for analysis; the video publishers comprised physicians (89.18%), news media organizations (13.51%), individual users (5.41%), and medical institutions (0.9%). The majority of videos focused on AP-related educational content (64.87%), followed by personal experiences (19.81%) and physicians' diagnostic and treatment records (15.32%). The mean scores for DISCERN, HONcode, and APCS were 33.05 ± 7.87, 3.09 ± 0.93, and 1.86 ± 1.30, respectively. Videos posted by physicians scored highest (DISCERN 35.17 ± 7.02, HONcode 3.31 ± 0.56, APCS 1.94 ± 1.34). According to the APCS, the main contents focused on etiology (n = 55, 49.5%) and clinical presentations (n = 36, 32.4%), followed by treatment (n = 24, 21.6%), severity (n = 20, 18.0%), prevention (n = 19, 17.1%), pathophysiology (n = 17, 15.3%), definitions (n = 13, 11.7%), examinations (n = 10, 9%), and other related content. None of the three evaluation tools' scores correlated with the number of followers, likes, comments, favorites, or retweets. However, DISCERN (r = 0.309) and APCS (r = 0.407) scores showed a significant positive correlation with video duration, while the HONcode score did not. CONCLUSIONS: The general quality of TikTok videos related to AP is poor; however, content posted by medical professionals shows relatively higher quality, predominantly focusing on clinical presentations and etiologies. The correlation between video duration and quality ratings suggests that a combined approach incorporating guideline-based content criteria can comprehensively evaluate AP-related content on TikTok.


Subject(s)
Pancreatitis , Video Recording , Humans , Pancreatitis/therapy , Pancreatitis/diagnosis , Reproducibility of Results , Acute Disease , Social Media
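A minimal sketch of the Pearson correlation step described in the preceding abstract, with hypothetical per-video DISCERN scores and durations:

```python
# Correlate DISCERN scores with video duration (values hypothetical).
from scipy.stats import pearsonr

discern_scores   = [28, 33, 41, 25, 37, 30, 45, 29]
duration_seconds = [45, 60, 180, 30, 120, 75, 240, 50]

r, p_value = pearsonr(discern_scores, duration_seconds)
print(f"r = {r:.3f}, p = {p_value:.4f}")
```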
9.
Indian J Psychiatry ; 66(4): 352-359, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38778845

ABSTRACT

Background: Management of dementia involves a multidisciplinary approach that also requires active participation from family members and caregivers, so easy access to information about dementia care is pertinent. The internet is an emerging source of such information. Aim: To perform a comparative assessment of patient-oriented online information on the treatment of dementia available on English- and Hindi-language web pages. Methods: An observational study was conducted online through a general internet search engine (www.google.com). Web pages containing patient-oriented information on the treatment of dementia in English and Hindi were reviewed to assess their content quality, esthetics, and interactivity. Appropriate descriptive and inferential statistics were computed using the Statistical Package for the Social Sciences. Results: A total of 70 web pages met the eligibility criteria. Content quality assessed using the DISCERN score was significantly higher for English web pages than for Hindi web pages (P < 0.01). About 72.4% (21/29) of English web pages, but only 9.8% (4/41) of Hindi web pages, had a total DISCERN score of 40 or above, indicating good quality. For esthetics, the median score for English pages was significantly higher than for Hindi pages (P < 0.01). Web pages with Health On Net (HON) certification had significantly better content quality. Conclusion: Our study revealed a scarcity of good-quality online information about dementia and its treatment, especially in Hindi. English-language websites showed better content quality than Hindi websites. The HON Code label might serve lay people as an indicator of better content quality among online resources on dementia treatment.

10.
PeerJ ; 12: e17264, 2024.
Article in English | MEDLINE | ID: mdl-38803580

ABSTRACT

Background: Irritable bowel syndrome (IBS) is a functional gastrointestinal disorder (FGID) with heterogeneous clinical presentations. There are no clear testing parameters for its diagnosis, and the complex pathophysiology of IBS, together with the limited time doctors can spend with patients, makes it difficult to adequately educate patients in the outpatient setting. Increased awareness of IBS means that patients are more likely to self-diagnose and self-manage based on their own symptoms. These factors may make patients more likely to turn to internet resources. Wikipedia is the most popular online encyclopedia among English-speaking users and has been validated in numerous studies, whereas in Mandarin-speaking regions the Baidu Encyclopedia (Baidu Baike) is most commonly used. There have been no studies on the reliability, readability, and objectivity of IBS information on the two sites. This is an urgent issue, as these platforms are accessed by approximately 1.45 billion people. Objective: We compared the IBS content on Wikipedia (in English) and Baidu Baike (in Chinese) in terms of reliability, readability, and objectivity. Methods: The Baidu Encyclopedia (in Chinese) and Wikipedia (in English) were evaluated against the Rome IV IBS definitions and diagnoses. All possible synonyms and derivatives for IBS and IBS-related FGIDs were screened and identified. Two gastroenterology experts scored the articles on both sites using the DISCERN instrument, the Journal of the American Medical Association (JAMA) scoring system, and the Global Quality Score (GQS). Results: Wikipedia scored higher overall on DISCERN (p < .0001), JAMA (p < .0001), and GQS (p < .05) than the Baidu Encyclopedia. Specifically, Wikipedia scored higher on DISCERN Section 1 (p < .0001), DISCERN Section 2 (p < .01), DISCERN Section 3 (p < .001), and the general DISCERN score (p < .0001). Both sites had low DISCERN Section 2 scores (p = .18). Wikipedia also had a larger percentage of high quality scores in total DISCERN, DISCERN Section 1, and DISCERN Section 3 (p < .0001, p < .0001, p < .0004, respectively, based on the above-3 (60%) rule). Conclusions: Wikipedia provides more reliable, higher-quality, and more objective IBS-related health information than the Baidu Encyclopedia, though the information quality of both sites should be improved. Medical professionals and institutions should collaborate with these online platforms to offer better health information on IBS.


Subject(s)
Internet , Irritable Bowel Syndrome , Irritable Bowel Syndrome/diagnosis , Humans , Comprehension , Encyclopedias as Topic , Reproducibility of Results , Consumer Health Information/standards
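The abstract above does not name the test behind its two-site p-values; a common choice for comparing ordinal DISCERN scores between two independent groups is the Mann-Whitney U test, sketched here with hypothetical per-article scores:

```python
# Compare DISCERN scores between two sites (hypothetical data; the
# original study's exact test is not stated in the abstract).
from scipy.stats import mannwhitneyu

wikipedia_scores = [52, 48, 55, 44, 50, 47]
baidu_scores     = [35, 40, 32, 38, 30, 36]

u_stat, p_value = mannwhitneyu(wikipedia_scores, baidu_scores,
                               alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")
```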
11.
SSM Popul Health ; 26: 101677, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38766549

ABSTRACT

Background: Several pelvic-area cancers exhibit high incidence rates, and their surgical treatment can result in adverse effects such as urinary and fecal incontinence, significantly impacting patients' quality of life. Post-surgery incontinence is a significant concern, with prevalence ranging from 25% to 45% for urinary incontinence and from 9% to 68% for fecal incontinence. Cancer survivors are increasingly turning to YouTube as a platform to connect with others, yet caution is warranted, as misinformation is prevalent. Objective: This study aims to evaluate the information quality of YouTube videos about post-surgical incontinence after pelvic-area cancer surgery. Methods: A YouTube search for "Incontinence after cancer surgery" yielded 108 videos, which were subsequently analyzed. Several quality assessment tools were utilized, including DISCERN, GQS, JAMA, PEMAT, and MQ-VET. Statistical analyses, such as descriptive statistics and intercorrelation tests, were employed to assess video characteristics, popularity, educational value, quality, and reliability. In addition, artificial intelligence techniques (PCA, t-SNE, and UMAP) were used for data analysis, with heat-map and hierarchical-clustering-dendrogram techniques validating the machine-learning results. Results: The quality scales correlated highly with one another (p < 0.01), and the artificial intelligence techniques produced clear cluster representations of the dataset samples, reinforced by the heat map and hierarchical clustering dendrogram. Conclusions: YouTube videos on "Incontinence after Cancer Surgery" present high quality across multiple scales. The use of AI tools such as PCA, t-SNE, and UMAP is highlighted for clustering large health datasets, improving data visualization, pattern recognition, and complex healthcare analysis.
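A minimal sketch of the dimensionality-reduction step described above, projecting per-video quality scores into 2-D with scikit-learn; the data are hypothetical, and UMAP (available via the separate umap-learn package) is omitted:

```python
# Project 108 videos x 5 quality scales (DISCERN, GQS, JAMA, PEMAT,
# MQ-VET) into 2-D for cluster inspection. Scores are hypothetical.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
scores = rng.normal(loc=3.0, scale=1.0, size=(108, 5))

pca_2d  = PCA(n_components=2).fit_transform(scores)
tsne_2d = TSNE(n_components=2, perplexity=30,
               random_state=0).fit_transform(scores)
print(pca_2d.shape, tsne_2d.shape)  # (108, 2) (108, 2)
```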

12.
Cureus ; 16(4): e58603, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38765432

ABSTRACT

Background This cross-sectional study aimed to assess the readability of strabismus-related websites and the quality of their content. Methodology The study evaluated websites on strabismus using the Atesman and Bezirci-Yilmaz readability formulas, which have been validated for Turkish. Texts were taken from the first 50 websites returned by a Google search for "strabismus treatment" and assessed for Turkish reading level and information reliability; 41 of the first 50 websites were reviewed. Two senior ophthalmologists independently scored the sites with the JAMA and DISCERN indexes and assessed the credibility of their content. Results The Bezirci-Yilmaz readability index indicated that the websites were readable by individuals with an average of 10.5 ± 2.3 years of education. The websites scored an average of 55.2 ± 7.9 on the Atesman readability formula, indicating that they were readable by students in the 11th-12th grade. The websites had an average JAMA score of 0.8 ± 0.7 points and a DISCERN score of 34.2 ± 8.6 points, indicating low-quality content. Conclusions The reading level required by websites providing information on strabismus was considerably higher than Turkey's average educational level. Websites should not only be easy to read, so that strabismus patients can learn about their condition, but should also provide higher-quality strabismus content.
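A minimal sketch of the Atesman formula, assuming its commonly cited coefficients (score = 198.825 - 40.175 × syllables-per-word - 2.610 × words-per-sentence); the counts are hypothetical:

```python
# Atesman readability for Turkish text (coefficients as commonly cited;
# word/sentence/syllable counts below are hypothetical).
words, sentences, syllables = 220, 12, 560

atesman = 198.825 - 40.175 * (syllables / words) - 2.610 * (words / sentences)
print(f"Atesman score: {atesman:.1f}")  # 0-100 scale; higher = easier
```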

13.
Int J Pediatr Otorhinolaryngol ; 180: 111955, 2024 May.
Article in English | MEDLINE | ID: mdl-38640574

ABSTRACT

PURPOSE: Online resources are increasingly being utilised by patients to guide their clinical decision-making, as an alternative or supplement to the traditional clinician-patient relationship. YouTube, an online repository of user- and community-generated videos, is one of the most popular websites globally. We undertook a study to examine the quality of information presented in YouTube videos related to tonsillectomy. METHODS: We completed a systematic search of YouTube in May 2023 and identified 88 videos for inclusion. Videos were published in English, focussed on tonsillectomy and tonsillectomy recovery, and were greater than 2 min in length. We recorded video quality metrics, and two authors independently analysed the quality of information using three validated quality assessment tools described in the literature: the modified DISCERN, the Global Quality Score (GQS), and the JAMA Benchmark Criteria. RESULTS: The overall quality of the information was low, with mean quality scores of modified DISCERN 1.8 ± 1.3, GQS 2.6 ± 1.2, and JAMA Benchmark Criteria 1.6 ± 0.7. Information published by medical sources, including medical professionals, healthcare organisations, and medical education channels, scored significantly higher than that from non-medical sources across all quality measures and was of moderate overall quality and usefulness: modified DISCERN (2.5 ± 1.1 vs 0.8 ± 0.9, z = -6.0, p < 0.001), GQS (3.2 ± 1.0 vs 1.7 ± 0.9, z = -5.7, p < 0.001), and JAMA (1.9 ± 0.8 vs 1.1 ± 0.3, z = -5.2, p < 0.001). Videos published during or after 2018 scored higher on the modified DISCERN (z = -3.2, p = 0.001) but not on the GQS or JAMA criteria. Video quality metrics such as total view count, likes, comments, and channel subscriber count did not correlate with higher video quality. However, amongst videos published by authoritative medical sources, total view count correlated positively with higher modified DISCERN scores (p = 0.037). CONCLUSION: The overall quality and usefulness of YouTube videos on tonsillectomy are low, but information published by authoritative medical sources scores significantly higher. Clinicians should be mindful of patients' increasing use of online information sources such as YouTube when counselling them. Further research is needed in the medical community to create engaging, high-quality content to provide guidance for patients.


Subject(s)
Social Media , Tonsillectomy , Video Recording , Humans , Tonsillectomy/education , Information Dissemination/methods , Patient Education as Topic/standards , Patient Education as Topic/methods
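The modified DISCERN used in video studies like the one above is typically a five-item yes/no checklist scored 0-5; a minimal sketch, with the question wording paraphrased and the answers hypothetical:

```python
# Modified DISCERN for videos (illustrative): one point per "yes", 0-5.
answers = {
    "Are the aims clear and achieved?":              True,
    "Are reliable sources of information used?":     False,
    "Is the information balanced and unbiased?":     True,
    "Are additional sources of information listed?": False,
    "Are areas of uncertainty mentioned?":           False,
}
modified_discern = sum(answers.values())
print(f"Modified DISCERN: {modified_discern}/5")
```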
14.
OTO Open ; 8(1): e118, 2024.
Article in English | MEDLINE | ID: mdl-38504881

ABSTRACT

Objective: To understand the quality of informational Graves' disease (GD) videos on YouTube in terms of treatment decision-making quality and inclusion of American Thyroid Association (ATA) treatment guidelines. Study Design: Cross-sectional cohort. Setting: Informational YouTube videos on the subject "Graves' Disease treatment." Method: The top 50 videos returned by our query were assessed using the DISCERN instrument, a validated algorithm that rates treatment-related information discretely from excellent (≥4.5) to very poor (<1.9). Videos were also screened for ATA guideline inclusion. Descriptive statistics were used for cohort characterization, and univariate and multivariate linear regressions characterized factors associated with DISCERN scores. Significance was set at P < .05. Results: The videos averaged 57,513.43 views (SD = 162,579.25), 1054.70 likes (SD = 2329.77), and 168.80 comments (SD = 292.97). Most were patient education (52%) or patient experience (24%) videos. A minority (40%) were made by thyroid specialists (endocrinologists, endocrine surgeons, or otolaryngologists). Under half (44%) did not mention all three treatment modalities, and 54% did not mention any ATA recommendations. Overall, videos displayed poor reliability (mean = 2.26, SD = 0.67), treatment information quality (mean = 2.29, SD = 0.75), and overall video quality (mean = 2.47, SD = 1.07). Physician videos were associated with fewer likes, views, and comments (P < .001) but higher DISCERN reliability (P = .015) and overall scores (P = .019). Longer videos (P = .015), patient accounts (P = .013), and patient-experience videos (P = .002) were associated with lower scores. Conclusion: The GD treatment content most readily available on YouTube varies significantly in the quality of its medical information, which may contribute to suboptimal disease understanding, especially for patients highly engaged with online health information sources.
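A minimal sketch of banding a mean DISCERN rating using the two cut-points stated above; the handling of the intermediate range is an assumption, since the abstract gives only the end-point bands:

```python
# Band a mean DISCERN rating (1-5 scale). Only the "excellent" and
# "very poor" cut-points come from the abstract; the middle band is
# left unresolved here rather than guessing its cut-points.
def discern_band(mean_score: float) -> str:
    if mean_score >= 4.5:
        return "excellent"                  # stated cut-point
    if mean_score < 1.9:
        return "very poor"                  # stated cut-point
    return "intermediate (poor/fair/good)"  # cut-points not given

print(discern_band(2.29))  # mean treatment-information quality above
```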

15.
Photodermatol Photoimmunol Photomed ; 40(2): e12958, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38489300

ABSTRACT

BACKGROUND/PURPOSE: Vitiligo is a depigmenting disorder that affects up to 2% of the population. Due to the relatively high prevalence of this disease and its psychological impact on patients, decisions concerning treatment can be difficult. As patients increasingly seek health information online, the caliber of online health information (OHI) becomes crucial in patients' decisions regarding their care. We aimed to assess the quality and readability of OHI regarding phototherapy in the management of vitiligo. METHODS: Similar to previously published studies assessing OHI, we used 5 medical search terms as a proxy for online searches made by patients. Results for each search term were assessed using an enhanced DISCERN analysis, Health On the Net code of conduct (HONcode) accreditation guidelines, and several readability indices. The DISCERN analysis is a validated questionnaire used to assess the quality of OHI, while HONcode accreditation is a marker of site reliability. RESULTS: Of the 500 websites evaluated, 174 were HONcode-accredited (35%). Mean DISCERN scores for all websites were 58.9% and 51.7% for website reliability and treatment sections, respectively. Additionally, 0/130 websites analyzed for readability scored at the NIH-recommended sixth-grade reading level. CONCLUSION: These analyses shed light on the shortcomings of OHI regarding phototherapy treatment for vitiligo, which could exacerbate disparities for patients who are already at higher risk of worse health outcomes.


Subject(s)
Consumer Health Information , Vitiligo , Humans , Comprehension , Vitiligo/therapy , Reproducibility of Results , Phototherapy , Internet
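A minimal sketch of the sixth-grade-level check mentioned above, using one widely used readability index (SMOG, from its standard formula); the counts are hypothetical:

```python
# SMOG grade from its standard formula (counts hypothetical).
import math

sentences = 24
polysyllables = 42  # words of three or more syllables

smog = 1.0430 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291
print(f"SMOG grade: {smog:.1f}; meets 6th-grade target: {smog <= 6.0}")
```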
16.
Saudi Pharm J ; 32(4): 101997, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38426034

ABSTRACT

Background: The goal of this study was to identify and evaluate Arabic YouTube videos on bipolar disorder (BD) as a resource for patient education. Methods: A cross-sectional evaluation of Arabic-language YouTube videos as a source of information for patients with BD was performed. The study was observational and, because it did not involve human subjects, followed the STROBE guidelines wherever possible. Video quality was assessed using the validated DISCERN instrument. The search strategy involved entering the term "bipolar disorder" in the YouTube search bar, and only videos in Arabic were included. Results: A total of 58 videos were included after removing duplicates and videos unrelated to BD. The most common source of videos was the "others" category (38%), followed by physicians (33%), educational channels (26%), and hospitals (3%). Videos covering symptoms and prognosis mostly fell into the "others" category (41%), whereas videos covering treatment options were mainly created by physicians (41%). Videos including a personal story mainly belonged to the "others" category (67%). Conclusion: There is still a significant shortage of visual health-related instructional resources, and this study highlights the poor quality of videos about serious illnesses like BD. Evaluating and promoting the creation of visual health-related educational resources should be a primary goal of future studies.

17.
BJUI Compass ; 5(2): 224-229, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38371202

ABSTRACT

Objectives: The objectives of this study are, first, to assess the current level of experience and teaching practices for suprapubic catheter (SPC) change at our institution and, second, to assess the quality of YouTube videos as an educational tool for teaching SPC change. Methods: A survey regarding SPC change was conducted among 40 junior medical officers (JMOs) at our institution. The first 20 YouTube videos on SPC change were included for analysis, and JAMA and DISCERN scores were calculated for each video. Using linear regression, the association between the collected variables and the assigned JAMA and DISCERN scores was determined. Results: The survey showed that 18 (45%) of JMOs had performed an SPC change; none had received formal teaching. The consensus was that the quality of the YouTube videos was poor. There was a statistically significant positive correlation between the scores assigned to videos by the two scoring systems (Pearson's r 0.81, p < 0.001). There was no statistically significant association between video quality, as measured by either scoring system, and the number of views, and no association between any video characteristic and the JAMA or DISCERN score was found. Conclusion: An SPC change is often required of JMOs; however, this skill is not formally taught. The quality of YouTube videos describing an SPC change is poor.
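A minimal sketch of the linear-regression step described above, regressing DISCERN score on one video characteristic (view count); all values are hypothetical:

```python
# Simple linear regression of DISCERN score on view count
# (hypothetical values).
from scipy.stats import linregress

views          = [1200, 5400, 300, 9800, 2500, 700, 4100]
discern_scores = [30, 35, 22, 33, 41, 27, 38]

fit = linregress(views, discern_scores)
print(f"slope = {fit.slope:.5f}, r = {fit.rvalue:.3f}, p = {fit.pvalue:.3f}")
```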

18.
Clin Oral Implants Res ; 35(5): 498-509, 2024 May.
Article in English | MEDLINE | ID: mdl-38396373

ABSTRACT

PURPOSE: To critically appraise the quality and reliability of YouTube videos on peri-implant diseases and conditions as a source of information for patients, students, and young clinicians. MATERIALS AND METHODS: In March 2023, electronic searches were performed on the YouTube website to identify videos related to peri-implant diseases and conditions; only the 250 most relevant English-language videos with durations between 3 and 30 min were considered for final analyses. Following the eligibility criteria, videos were evaluated for their demographic data, including number of views; numbers of likes, dislikes, and comments; days since upload; duration; and number of subscribers. Two assessors independently evaluated the quality and reliability of the included videos using the DISCERN and Video Information and Quality Index (VIQI) tools. Statistical analyses were performed using the Kruskal-Wallis test and Spearman correlation analysis (α = 0.05). RESULTS: A total of 69 videos were included for in-depth analysis. The average DISCERN and VIQI scores were 35.04 ± 6.3 and 14.18 ± 2.46, with 53 videos categorized as "poor" quality on the DISCERN tool. Spearman rank correlation showed strong agreement between the DISCERN and VIQI scores (r = .753; p < .001). Nevertheless, across different upload sources, no statistically significant differences were found in video demographics, interaction index, or DISCERN and VIQI scores. CONCLUSIONS: Although YouTube videos on peri-implant diseases and conditions present accurate preliminary information, their reliability remains uncertain. We therefore urge the relevant policymakers to recognize, endorse, and produce high-quality videos for accurate information dissemination.


Subject(s)
Social Media , Video Recording , Humans , Cross-Sectional Studies , Reproducibility of Results , Dental Implants , Peri-Implantitis
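A minimal sketch of the Spearman rank correlation reported above between the DISCERN and VIQI scores, with hypothetical paired ratings:

```python
# Spearman rank correlation between paired DISCERN and VIQI scores
# (hypothetical ratings).
from scipy.stats import spearmanr

discern = [35, 28, 40, 33, 37, 25, 42, 31]
viqi    = [14, 11, 17, 13, 15, 10, 18, 12]

rho, p_value = spearmanr(discern, viqi)
print(f"rho = {rho:.3f}, p = {p_value:.4f}")
```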
19.
Int J Gynaecol Obstet ; 166(1): 419-425, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38366748

ABSTRACT

OBJECTIVES: Back pain during pregnancy is a common issue that impacts the quality of life for many women. YouTube has become an increasingly popular source of health information. Pregnant women often turn to YouTube for advice on managing back pain, but the quality of available videos is highly variable. This study aimed to assess the quality and comprehensiveness of YouTube videos related to back pain during pregnancy. METHODS: A YouTube search was conducted using the keyword "back pain in pregnancy", and the first 100 resulting videos were included in the study. After a thorough review and exclusion of ineligible videos, the final sample consisted of 71 videos. Various parameters such as the number of views, likes, viewer interaction, video age, uploaded source (healthcare or nonhealthcare), and video length were evaluated for all videos. RESULTS: Regarding the source of the videos, 44 (61.9%) were created by healthcare professionals, while 27 (38%) were created by nonprofessionals. Videos created by healthcare professionals had significantly higher scores in terms of DISCERN score, Journal of the American Medical Association (JAMA) score, and Global Quality Scale (GQS) (P < 0.001). Our findings indicate a statistically significant and strong positive correlation among the three scoring systems (P < 0.001). CONCLUSION: Videos created by healthcare professionals were generally of higher quality, but many videos were still rated as low-moderate quality. The majority of videos focused on self-care strategies, with fewer discussing other treatment options. Our findings highlight the need for improved quality and comprehensiveness of YouTube videos on back pain during pregnancy.


Subject(s)
Back Pain , Pregnancy Complications , Social Media , Video Recording , Humans , Female , Pregnancy , Back Pain/therapy , Pregnancy Complications/therapy , Consumer Health Information/standards
20.
J Pers Med ; 14(1)2024 Jan 18.
Article in English | MEDLINE | ID: mdl-38248805

ABSTRACT

The aim of our study was to evaluate the potential role of artificial intelligence tools like ChatGPT in patient education. To do this, we assessed both the quality and readability of information provided by ChatGPT 3.5 and 4 in relation to anterior cruciate ligament (ACL) injury and treatment. ChatGPT 3.5 and 4 were used to answer common patient queries relating to ACL injuries and treatment. The quality of the information was assessed using the DISCERN criteria. Readability was assessed with seven readability formulae: the Flesch-Kincaid Reading Grade Level, the Flesch Reading Ease Score, the Raygor Estimate, the SMOG, the Fry, the FORCAST, and the Gunning Fog. The mean reading grade level (RGL) was compared with the recommended 8th-grade reading level, the mean RGL among adults in America, and the perceived quality and mean RGL of answers given by ChatGPT 3.5 and 4 were also compared. Both ChatGPT 3.5 and 4 yielded DISCERN scores suggesting "good" quality of information, with ChatGPT 4 slightly outperforming 3.5. However, readability levels for both versions significantly exceeded the average 8th-grade reading level of American patients: ChatGPT 3.5 had a mean RGL of 18.08 and ChatGPT 4 a mean RGL of 17.9, exceeding the average American reading grade level by 10.08 and 9.9 grade levels, respectively. While ChatGPT can provide reliable, good-quality information on ACL injuries and treatment options, the readability of its content may limit its utility. Additionally, the consistent lack of source citation represents a significant area of concern for patients and clinicians alike. If AI is to play a role in patient education, it must reliably produce information that is accurate, easily comprehensible, and clearly sourced.
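A minimal sketch of the pipeline described above: pose a patient query to a chat model and grade the reply's readability. The model name and the use of the openai and textstat packages are assumptions for illustration, not the study's actual tooling:

```python
# Query a chat model and score the reply's reading grade level.
# Model name and packages are illustrative assumptions.
import textstat
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
reply = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": "What are my treatment options for an ACL tear?"}],
).choices[0].message.content

grade = textstat.flesch_kincaid_grade(reply)
print(f"FKGL: {grade:.1f} (meets 8th-grade target: {grade <= 8.0})")
```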
