Results 1 - 5 of 5
1.
BMJ Evid Based Med ; 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38862202

ABSTRACT

OBJECTIVES: The objectives of this study are to assess the reporting of evidence-based healthcare (EBHC) e-learning interventions using the Guideline for Reporting Evidence-based practice Educational interventions and Teaching (GREET) checklist and to explore factors associated with compliant reporting. DESIGN: Methodological cross-sectional study. METHODS: Based on the criteria used in an earlier systematic review, we included studies comparing EBHC e-learning with any other form of EBHC training or no EBHC training. We searched Medline, Embase, ERIC, CINAHL, CENTRAL, SCOPUS, Web of Knowledge, PsycInfo, ProQuest and Best Evidence Medical Education up to 4 January 2023. Screening of titles, abstracts and full-text articles, as well as data extraction, was done independently by two authors. For each study, we assessed adherence to each of the 17 GREET items and extracted information on possible predictors. Adequacy of reporting for each GREET item was judged as yes (complete information provided), no (no information provided), unclear (insufficient information provided) or not applicable, when the item was clearly of no relevance to the intervention described (such as item 8, details about the instructors, in studies that used an electronic, self-paced intervention without any tutoring). Studies' adherence to the GREET checklist was presented as percentages and absolute numbers. We performed univariate analysis to assess the association of potential predictors with adherence to the GREET checklist. We summarised results descriptively. RESULTS: We included 40 studies, the majority of which assessed e-learning or blended learning and mostly involved medical and other healthcare students. None of the studies fully reported all the GREET items. Overall, the median number of GREET items met (received yes) per study was 8 and the third quartile (Q3) was 9 (minimum 4, maximum 14). When we used Q3 of the number of items met as the cut-off point, adherence to the GREET reporting checklist was poor, with 7 out of 40 studies (17.5%) reporting the checklist items at an acceptable level (adhering to at least 10 of the 17 items). For 3 items, 80% of included studies reported complete information (received yes): item 1 (brief description of intervention), item 4 (evidence-based practice content) and item 6 (educational strategies). Items for which 50% of included studies reported complete information were item 9 (modes of delivery), item 11 (schedule) and item 12 (time spent on learning). Items for which 70% or more of included studies provided no information (received no) were item 7 (incentives) and item 13 (adaptations), each with 70% of studies receiving no; item 14 (modifications of educational interventions), with 95% receiving no; item 16 (any processes to determine whether the materials and educational strategies used in the educational intervention were delivered as originally planned), with 93% receiving no; and item 17 (intervention delivery according to schedule), with 100% receiving no. Studies published after September 2016 showed slight improvements in nine reporting items. In the logistic regression models, using the Q3 cut-off point (10 items or above), the odds of acceptable adherence to the GREET guideline were 7.5 times higher if adherence to another reporting guideline (Consolidated Standards of Reporting Trials, Strengthening the Reporting of Observational Studies in Epidemiology, etc) was reported for a given study type (p=0.039); a higher number of study authors also increased the odds of adherence to the GREET guidance by 18% (p=0.037). CONCLUSIONS: Studies assessing educational interventions on EBHC e-learning still adhere poorly to the GREET checklist. Using other reporting guidelines increased the odds of better GREET reporting. Journals should call for the appropriate use of reporting guidelines in future studies on teaching EBHC to increase transparency of reporting, decrease unnecessary research duplication and facilitate uptake of research evidence. STUDY REGISTRATION NUMBER: The Open Science Framework (https://doi.org/10.17605/OSF.IO/V86FR).
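The logistic regression findings above can be made concrete with a small sketch. The odds ratios (7.5 for reporting adherence to another guideline, 1.18 per additional author) are taken from the abstract; the intercept is a made-up placeholder, not a value from the study:

```python
import math

# Odds ratios reported in the abstract; the intercept is hypothetical.
OR_OTHER_GUIDELINE = 7.5   # odds multiplier if another reporting guideline was followed
OR_PER_AUTHOR = 1.18       # +18% odds per additional study author
INTERCEPT = -3.0           # assumed baseline log-odds (NOT from the study)

def predicted_probability(other_guideline: bool, n_authors: int) -> float:
    """Probability of acceptable GREET adherence under the sketched model."""
    log_odds = (INTERCEPT
                + math.log(OR_OTHER_GUIDELINE) * int(other_guideline)
                + math.log(OR_PER_AUTHOR) * n_authors)
    return 1 / (1 + math.exp(-log_odds))

# An odds ratio multiplies the odds, not the probability: for any fixed
# number of authors, odds(True) / odds(False) equals 7.5 exactly.
p0 = predicted_probability(False, 5)
p1 = predicted_probability(True, 5)
odds_ratio = (p1 / (1 - p1)) / (p0 / (1 - p0))
```

This illustrates why the abstract reports the effect as "7.5 times higher odds" rather than a probability difference: the probability change depends on the baseline, while the odds ratio does not.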

2.
Account Res ; : 1-30, 2024 May 05.
Article in English | MEDLINE | ID: mdl-38704659

ABSTRACT

Although reproducibility is central to the scientific method, understanding of it within the research community remains insufficient. We aimed to explore perceptions of research reproducibility among stakeholders within academia, learn about possible barriers to and facilitators of reproducibility-related practices, and gather their suggestions for the Croatian Reproducibility Network website. We conducted four focus groups with researchers, teachers, editors, research managers, and policymakers from Croatia (n = 23). The participants observed a lack of consensus on the core definitions of reproducibility, both in general and between disciplines. They noted that incentivization and recognition of reproducibility-related practices by publishers and institutions, alongside comprehensive education adapted to the researcher's career stage, could help with implementing reproducibility. Education was considered essential to these efforts, as it could help create a research culture based on good reproducibility-related practices and behavior rather than one driven by mandates or career advancement; this was found to be particularly relevant for growing reproducibility efforts globally. Regarding the Croatian Reproducibility Network website, the participants suggested that we adapt the content to users from different disciplines and career stages and offer guidance and tools for reproducibility, using them to present core reproducibility concepts. Our findings could inform other initiatives focused on improving research reproducibility.

3.
Sci Rep ; 14(1): 6016, 2024 03 12.
Article in English | MEDLINE | ID: mdl-38472285

ABSTRACT

This cross-sectional study compared plain language summaries (PLSs) from medical and non-medical organizations regarding conclusiveness, readability and textual characteristics. All Cochrane (medical PLSs, n = 8638) and Campbell Collaboration and International Initiative for Impact Evaluation (non-medical PLSs, n = 163) PLSs of the latest versions of systematic reviews published until 10 November 2022 were analysed. PLSs were classified into three conclusiveness categories (conclusive, inconclusive and unclear) using a machine learning tool for medical PLSs and by two experts for non-medical PLSs. A higher proportion of non-medical PLSs were conclusive (17.79% vs 8.40%, P < 0.0001); non-medical PLSs also had higher readability (median number of years of education needed to read the text with ease 15.23 (interquartile range (IQR) 14.35 to 15.96) vs 15.51 (IQR 14.31 to 16.77), P = 0.010) and used more words (median 603 (IQR 539.50 to 658.50) vs 345 (IQR 202 to 476), P < 0.001). Language analysis showed that medical PLSs scored higher for disgust and fear, while non-medical PLSs scored higher for positive emotions. The observed differences between the medical and non-medical fields may be attributable to differences in publication methodologies or disciplinary conventions. This approach to analysing PLSs is crucial for enhancing the overall quality of PLSs and knowledge translation to the general public.
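A "years of education needed to read the text with ease" figure like those in the abstract is typically produced by a grade-level readability formula. The SMOG index is one common choice (an assumption here; the abstract does not name the exact formula used):

```python
import math

def smog_grade(polysyllable_words: int, sentences: int) -> float:
    """SMOG grade: years of education needed to comprehend a text,
    estimated from counts of words with 3+ syllables and sentences."""
    if sentences == 0:
        raise ValueError("need at least one sentence")
    return 3.1291 + 1.0430 * math.sqrt(polysyllable_words * 30 / sentences)

# A hypothetical summary with 150 polysyllabic words across 30 sentences:
grade = smog_grade(150, 30)
```

With these made-up counts the formula yields a grade of roughly 15.9, in the same range as the medians reported in the abstract; note that a *lower* grade means an *easier* text, which is why 15.23 vs 15.51 corresponds to higher readability.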


Subject(s)
Comprehension , Language , Cross-Sectional Studies , Systematic Reviews as Topic , Reading
4.
J Glob Health ; 13: 06050, 2023 Oct 27.
Article in English | MEDLINE | ID: mdl-37883198

ABSTRACT

Background: During health emergencies, leading healthcare organisations, such as the World Health Organization (WHO), the European Centre for Disease Prevention and Control (ECDC), and the United States Centers for Disease Control and Prevention (CDC), provide guidance for public health response. Previous studies have evaluated clinical practice guidelines (CPGs) produced in response to epidemics or pandemics, yet few have focused on public health guidelines and recommendations. To address this gap, we assessed health systems guidance (HSG) produced by the WHO, the ECDC, and the CDC for the 2009 H1N1 and COVID-19 pandemics. Methods: We extracted HSG for the H1N1 and COVID-19 pandemics from the organisations' dedicated repositories and websites. After screening the retrieved documents for eligibility, five assessors evaluated them using the Appraisal of Guidelines for Research & Evaluation - Health Systems (AGREE-HS) tool to assess the completeness and transparency of reporting according to the five AGREE-HS domains: "Topic", "Participants", "Methods", "Recommendations", and "Implementability". Results: Following the screening process, we included 108 HSG in the analysis. We observed statistically significant differences between the H1N1 and COVID-19 pandemics, with HSG issued during COVID-19 receiving higher AGREE-HS scores. The HSG produced by the CDC had significantly lower overall scores and single-domain scores compared to the WHO and ECDC. However, all HSG scored relatively low, below the scale midpoint of 40 total points (possible range = 10-70), indicating incomplete reporting. The HSG produced by all three organisations received a median score <4 (range = 1-7) for the "Participants", "Methods", and "Implementability" domains. Conclusions: There is still significant progress to be made in the quality and completeness of reporting in HSG issued during pandemics, especially regarding methodological approaches and the composition of the guidance development team.
Due to their significant impact and importance for healthcare systems globally, HSG issued during future healthcare crises should adhere to best reporting practices to increase uptake by stakeholders and ensure public trust in healthcare organisations.
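The per-domain medians reported above can be sketched as a simple aggregation: each guidance document receives a 1-7 score on each of the five AGREE-HS domains, and a median is then taken per domain across documents. The scores below are hypothetical, not the study's data:

```python
from statistics import median

DOMAINS = ["Topic", "Participants", "Methods", "Recommendations", "Implementability"]

def domain_medians(ratings_per_doc: list[dict[str, float]]) -> dict[str, float]:
    """Median score per AGREE-HS domain across a set of guidance documents."""
    return {d: median(doc[d] for doc in ratings_per_doc) for d in DOMAINS}

# Made-up 1-7 scores for three documents, mimicking the abstract's pattern
# of weaker "Participants", "Methods" and "Implementability" domains:
docs = [
    {"Topic": 5, "Participants": 2, "Methods": 3, "Recommendations": 5, "Implementability": 2},
    {"Topic": 6, "Participants": 3, "Methods": 2, "Recommendations": 4, "Implementability": 3},
    {"Topic": 5, "Participants": 2, "Methods": 3, "Recommendations": 5, "Implementability": 2},
]
meds = domain_medians(docs)
```

With this toy data the "Participants", "Methods" and "Implementability" medians fall below 4, the threshold the abstract uses to flag incomplete reporting.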


Subject(s)
COVID-19 , Influenza A Virus, H1N1 Subtype , Humans , Pandemics/prevention & control , COVID-19/epidemiology , Delivery of Health Care , Health Promotion
5.
Front Med (Lausanne) ; 10: 1220999, 2023.
Article in English | MEDLINE | ID: mdl-38196834

ABSTRACT

Objective: To evaluate the impact of research design on the perceived effectiveness of a medical treatment among researchers, healthcare workers (HCWs) and consumers in Croatia. Methods: A cross-sectional study was conducted from November 2021 to February 2022 using an online survey. The participants were researchers, HCWs and consumers from Croatia. The survey had six scenarios about the same medical treatment, presented within different study designs and in random order. Participants were asked to assess on a scale from 1 to 10 whether the descriptions presented a sufficient level of evidence to conclude that the treatment was effective. Results: For researchers (n = 97), as the number of participants and the degree of variable control in the study design increased, the perceived level of sufficient evidence also increased significantly. Among consumers (n = 286) and HCWs (n = 201), no significant differences in scores were observed between the cross-sectional study, cohort study, randomised controlled trial (RCT) and systematic review. Conclusion: There is a need to implement educational courses on basic research methodology at lower levels of education and as part of Continuing Medical Education for all stakeholders in the healthcare system. Trial registration: This study was registered on the Open Science Framework prior to study commencement (https://osf.io/t7xmf).
