Results 1 - 20 of 341
1.
Res Synth Methods ; 2024 Sep 05.
Article in English | MEDLINE | ID: mdl-39234960

ABSTRACT

Conducting high-quality overviews of reviews (OoR) is time-consuming. Because the quality of systematic reviews (SRs) varies, it is necessary to critically appraise SRs when conducting an OoR. A well-established appraisal tool is A Measurement Tool to Assess Systematic Reviews (AMSTAR) 2, which takes about 15-32 min per application. To save time, we developed two fast-and-frugal decision trees (FFTs) for assessing the methodological quality of SRs for an OoR, applied either during the full-text screening stage (Screening FFT) or to the resulting pool of SRs (Rapid Appraisal FFT). To build a data set for developing the FFTs, we identified published AMSTAR 2 appraisals. Overall confidence ratings of the AMSTAR 2 were used as the criterion and the 16 items as cues. One thousand five hundred nineteen appraisals were obtained from 24 publications and divided into training and test data sets. The resulting Screening FFT consists of three items and correctly identifies all non-critically low-quality SRs (sensitivity of 100%) but has a positive predictive value of 59%. The three-item Rapid Appraisal FFT correctly identifies 80% of the high-quality SRs and 97% of the low-quality SRs, resulting in an accuracy of 95%. The FFTs require about 10% of the 16 AMSTAR 2 items. The Screening FFT may be applied during full-text screening to exclude SRs of critically low quality. The Rapid Appraisal FFT may be applied to the final SR pool to identify SRs that might be of high methodological quality.
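As a concrete illustration of how such an FFT operates, here is a minimal Python sketch: each cue is checked in turn, and each check can trigger an immediate exit, with the final cue exiting in both directions. The three cues shown are hypothetical placeholders; the abstract does not name which AMSTAR 2 items the published Screening FFT actually uses.

```python
# Sketch of a fast-and-frugal tree (FFT) for screening systematic reviews.
# The cue names are invented for illustration, not the published FFT's items.
def screening_fft(sr):
    """Return 'exclude' (critically low quality) or 'keep'."""
    # Cue 1: a "No" exits immediately to 'exclude' (hypothetical item).
    if not sr["protocol_registered"]:
        return "exclude"
    # Cue 2: same exit structure (hypothetical item).
    if not sr["adequate_search"]:
        return "exclude"
    # Cue 3: the final cue exits in both directions (hypothetical item).
    return "keep" if sr["justified_exclusions"] else "exclude"

# Example: two cues pass, the final cue fails, so the SR is excluded.
print(screening_fft({"protocol_registered": True,
                     "adequate_search": True,
                     "justified_exclusions": False}))  # -> 'exclude'
```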

2.
Environ Evid ; 13(1): 1, 2024 Feb 07.
Article in English | MEDLINE | ID: mdl-39294842

ABSTRACT

To inform environmental policy and practice, researchers estimate the effects of interventions/exposures by conducting primary research (e.g., impact evaluations) or secondary research (e.g., evidence reviews). If these estimates are derived from poorly conducted or poorly reported research, they could misinform policy and practice by providing biased estimates. Many types of bias have been described, especially in the health and medical sciences. We aimed to map all types of bias from the literature that are relevant to estimating causal effects in the environmental sector. Types of bias were initially identified using the Catalogue of Bias (catalogofbias.org) and by reviewing key publications (n = 11) that previously collated and described biases. We identified 121 (out of 206) types of bias that were relevant to estimating causal effects in the environmental sector. We provide a general interpretation of every relevant type of bias, covered by seven risk-of-bias domains for primary research (risk of confounding biases; post-intervention/exposure selection biases; misclassified/mismeasured comparison biases; performance biases; detection biases; outcome reporting biases; outcome assessment biases) and four domains for secondary research (risk of searching biases; screening biases; study appraisal and data coding/extraction biases; data synthesis biases). Our collation should help scientists and decision makers in the environmental sector become more aware of the nature of bias in the estimation of causal effects. Future research is needed to formalise the definitions of the collated types of bias, such as through decomposition using mathematical formulae.

3.
Dent Clin North Am ; 68(4): 785-797, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39244257

ABSTRACT

Today, it is common for medically complex patients who are receiving multiple medications to seek routine and emergent dental care. It is essential for the practitioner to recognize and comprehend the impact of such medications on the patient's ability to tolerate the planned dental treatment and on dental treatment outcomes. An active appraisal of the current literature is essential to stay abreast of emerging findings and understand their treatment implications. This article outlines the process of such active critical appraisal, illustrating key paradigms of the models that describe the impact of medications on treatment outcomes.


Subject(s)
Dental Care for Chronically Ill , Humans , Treatment Outcome , Polypharmacy
4.
J Clin Epidemiol ; 175: 111512, 2024 Aug 31.
Article in English | MEDLINE | ID: mdl-39222724

ABSTRACT

BACKGROUND AND OBJECTIVE: Randomized controlled trials (RCTs) inform health-care decisions. Unfortunately, some published RCTs contain false data, and some appear to have been entirely fabricated. Systematic reviews are performed to identify and synthesize all RCTs which have been conducted on a given topic. This means that any of these 'problematic studies' are likely to be included, but there are no agreed methods for identifying them. The INveStigating ProblEmatic Clinical Trials in Systematic Reviews (INSPECT-SR) project is developing a tool to identify problematic RCTs in systematic reviews of health care-related interventions. The tool will guide the user through a series of 'checks' to determine a study's authenticity. The first objective in the development process is to assemble a comprehensive list of checks to consider for inclusion. METHODS: We assembled an initial list of checks for assessing the authenticity of research studies, with no restriction to RCTs, and categorized these into five domains: Inspecting results in the paper; Inspecting the research team; Inspecting conduct, governance, and transparency; Inspecting text and publication details; Inspecting the individual participant data. We implemented this list as an online survey, and invited people with expertise and experience of assessing potentially problematic studies to participate through professional networks and online forums. Participants were invited to provide feedback on the checks on the list, and were asked to describe any additional checks they knew of, which were not featured in the list. RESULTS: Extensive feedback on an initial list of 102 checks was provided by 71 participants based in 16 countries across five continents. Fourteen new checks were proposed across the five domains, and suggestions were made to reword checks on the initial list. An updated list of checks was constructed, comprising 116 checks. Many participants expressed a lack of familiarity with statistical checks, and emphasized the importance of feasibility of the tool. CONCLUSION: A comprehensive list of trustworthiness checks has been produced. The checks will be evaluated to determine which should be included in the INSPECT-SR tool. PLAIN LANGUAGE SUMMARY: Systematic reviews draw upon evidence from randomized controlled trials (RCTs) to find out whether treatments are safe and effective. The conclusions from systematic reviews are often very influential, and inform both health-care policy and individual treatment decisions. However, it is now clear that the results of many published RCTs are not genuine. In some cases, the entire study may have been fabricated. It is not usual for the veracity of RCTs to be questioned during the process of compiling a systematic review. As a consequence, these "problematic studies" go unnoticed, and are allowed to contribute to the conclusions of influential systematic reviews, thereby influencing patient care. This prompts the question of how these problematic studies could be identified. In this study, we created an extensive list of checks that could be performed to try to identify these studies. We started by assembling a list of checks identified in previous research, and conducting a survey of experts to ask whether they were aware of any additional methods, and to give feedback on the list. As a result, a list of 116 potential "trustworthiness checks" was created. 
In subsequent research, we will evaluate these checks to see which should be included in a tool, INveStigating ProblEmatic Clinical Trials in Systematic Reviews, which can be used to detect problematic studies.

5.
Front Environ Health ; 3: 2024 Feb 20.
Article in English | MEDLINE | ID: mdl-39087068

ABSTRACT

This article provides a summary and critical appraisal of the systematic review conducted by Alidoust et al. (1) regarding the various effects of housing on both physical and psychological well-being. We aim to discuss the review's findings against existing published evidence to draw out policy and practical implications. Our mini-review illuminates a wide range of housing-related factors that affect health, from which we derive evidence-based policy implications and outline avenues for future research. This mini-review is part of the wider Rapid Conversion of Evidence Summaries (RaCES) program, which aims to critically appraise systematic reviews and highlight evidence-based policy and practice implications.

6.
JMIR Med Educ ; 10: e50545, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39177012

ABSTRACT

Background: Text-generating artificial intelligence (AI) such as ChatGPT offers many opportunities and challenges in medical education. Acquiring the practical skills necessary for using AI in a clinical context is crucial, especially in medical education. Objective: This explorative study aimed to investigate the feasibility of integrating ChatGPT into teaching units and to evaluate the course and the importance of AI-related competencies for medical students. Since one possible application of ChatGPT in the medical field is the generation of information for patients, we further investigated how such information is perceived by students in terms of persuasiveness and quality. Methods: ChatGPT was integrated into 3 different teaching units of a blended learning course for medical students. Using a mixed methods approach, quantitative and qualitative data were collected. As baseline data, we assessed students' characteristics, including their openness to digital innovation. The students evaluated the integration of ChatGPT into the course and shared their thoughts regarding the future of text-generating AI in medical education. The course was evaluated based on the Kirkpatrick Model, with satisfaction, learning progress, and applicable knowledge considered as key assessment levels. In the ChatGPT-integrated teaching units, students evaluated, in a self-experience experiment, the persuasiveness of videos featuring patient information with respect to treatment expectations, and critically reviewed patient information written using ChatGPT 3.5 from different prompts. Results: A total of 52 medical students participated in the study. The comprehensive evaluation of the course revealed high levels of satisfaction, learning progress, and applicability, specifically in relation to the ChatGPT-integrated teaching units. Furthermore, all evaluation levels were associated with one another. Higher openness to digital innovation was associated with higher satisfaction and, to a lesser extent, with higher applicability. AI-related competencies in other courses of the medical curriculum were perceived as highly important by medical students. Qualitative analysis highlighted potential use cases of ChatGPT in teaching and learning. In the ChatGPT-integrated teaching units, students rated patient information generated using a basic ChatGPT prompt as "moderate" in terms of comprehensibility, patient safety, and the correct application of the communication rules taught during the course. The students' ratings improved considerably with an extended prompt. The same text, however, showed the smallest increase in treatment expectations when compared with information provided by humans (patient, clinician, and expert) via videos. Conclusions: This study offers valuable insights into integrating the development of AI competencies into a blended learning course. Integration of ChatGPT enhanced the learning experience for medical students.


Subject(s)
Artificial Intelligence , Curriculum , Students, Medical , Humans , Students, Medical/psychology , Male , Female , Education, Medical, Undergraduate/methods , Perception , Teaching/standards , Adult , Surveys and Questionnaires
7.
J Clin Epidemiol ; 174: 111460, 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39025376

ABSTRACT

OBJECTIVES: Risk of bias (RoB) assessment is a critical part of any systematic review (SR). There are multiple tools available for assessing RoB of the studies included in a SR. The conduct of these assessments in intervention SRs are addressed by three items in AMSTAR-2, considered the preferred tool for critically appraising an intervention SR. This study focuses attention on item 9, which assesses the ability of a RoB tool to adequately address sources of bias, particularly in randomized trials (RCTs) and nonrandomized studies of interventions (NRSI). Our main objective is to report the detailed results of our examination of both Cochrane and non-Cochrane RoB tools and distinguish those that meet AMSTAR-2 item 9 appraisal standards. STUDY DESIGN AND SETTING: We identified critical appraisal tools reported in a sample of 126 SRs reporting on interventions for persons with cerebral palsy published from 2014 to 2021. Eligible tools were those that had been used to assess the primary studies included in these SRs and for which assessment results were reported in enough detail to allow appraisal of the tool. We identified the version of the tool applied as original, modified, or novel and established the applicable study designs as intended by the tools' developers. We then evaluated the potential ability of these tools to assess the four sources of bias specified in AMSTAR-2 item 9 for RCTs and NRSI. We adapted item 9 to appraise tools applied to single-case experimental designs, which we also encountered in this sample of SRs. RESULTS: Most of the eligible tools are recognized by name in the published literature and were applied in the original or modified form. Modifications were applied with considerable variability across the sample. Of the 37 tools we examined, those judged to fully meet the appraisal standards for RCTs included all the Cochrane tools, the original and modified Downs and Black Checklist, and the quality assessment standard for a cross-over study by Ding et al; for NRSI, these included all the Cochrane tools, the original and modified Downs and Black Checklist, and the Research Triangle Institute item bank on Risk of Bias and Precision of Observational Studies for NRSI. In general, tools developed for a specific study design were judged to meet the appraisal standards fully or partially for that design. These results suggest it is unlikely that a single tool will be adequate by AMSTAR-2 item 9 appraisal standards for an intervention SR that includes studies of various designs. CONCLUSION: To our knowledge, this is the first resource providing SR authors with practical information about the appropriateness and adequacy of RoB tools by the appraisal standards specified in AMSTAR-2 item 9 for RCTs and NRSI. We propose similar methods for appraisal of tools applied to single-case experimental design. We encourage authors to seek contemporary RoB tools developed for use in healthcare-related intervention SRs and designed to evaluate relevant study design features. The tools should address attributes unique to the review topic and research question but not be subjected to unjustified and excessive modifications. We promote recognition of the potential shortcomings of both Cochrane and non-Cochrane RoB tools, even those that perform well by AMSTAR-2 item 9 appraisal standards.

8.
J Clin Epidemiol ; 174: 111480, 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39047919

ABSTRACT

OBJECTIVES: Current standards for systematic reviews (SRs) require adequate conduct and complete reporting of risk of bias (RoB) assessments of the individual studies included in the review. We investigated the conduct and reporting of RoB assessments in a sample of SRs of interventions for persons with cerebral palsy (CP). STUDY DESIGN AND SETTING: We included SRs published from 2014 to 2021. Authors worked in pairs to independently extract data on the characteristics of the SRs and to rate their conduct and reporting. The conduct of RoB assessment was appraised with the three AMSTAR-2 items related to RoB assessment. Reporting completeness was evaluated using the two items related to RoB assessment within studies in the PRISMA 2020 guidelines. We used descriptive statistics to report the consensus data, in accordance with our protocol. RESULTS: We included 145 SRs. Among the 128 (88.3%) SRs that assessed RoB, the standards for AMSTAR-2 item 9 (use of an adequate RoB tool) were partially or fully satisfied in 73 (57.0%). Across the 128 SRs that assessed RoB, 46 (35.9%) accounted for RoB in interpreting the SR's findings and, of the 49 that included a meta-analysis, 11 (22.4%) discussed the impact of RoB on the synthesis. Of the 128 SRs, 123 (96.1%) named the RoB tool used for at least one of the included study designs, 96 (75.0%) specified the RoB items assessed, 89 (69.5%) reported the findings for each item, 81 (63.2%) fully reported the processes for RoB assessment, 68 (53.1%) reported how an overall RoB judgment was reached, and 74 (57.8%) reported an overall RoB assessment for every study. CONCLUSION: The selection and application of RoB tools in this sample of SRs of interventions for CP are comparable to those reported in other recent studies. However, most SRs in this sample did not fully meet the appraisal standards of AMSTAR-2 regarding the adequacy of the RoB tool applied and other aspects of RoB assessment conduct; Cochrane SRs were a notable exception. Overall, reporting of RoB assessments was somewhat better than conduct, perhaps reflecting the more widespread uptake of the PRISMA guidelines. Our findings may be generalizable to some extent, considering the extensive literature reporting widespread inadequacies in health care-related intervention SRs and reports from other specialties documenting similar RoB assessment deficiencies. As such, this study should remind authors, peer reviewers, and journal editors to follow the RoB assessment reporting guidelines of PRISMA 2020 and to understand the corresponding critical appraisal standards of AMSTAR-2. We recommend a shift of focus from documenting inadequate RoB assessments and well-known deficiencies in other components of SRs toward implementing changes to address these problems, along with plans to evaluate their effectiveness.

9.
Toxicol Sci ; 201(2): 240-253, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-38964352

ABSTRACT

To support the development of appraisal tools for assessing the quality of in vitro studies, we developed a method for literature-based discovery of study assessment criteria, used the method to create an item bank of assessment criteria of potential relevance to in vitro studies, and analyzed the item bank to discern and critique current approaches for appraisal of in vitro studies. We searched four research indexes and included any document that identified itself as an appraisal tool for in vitro studies, was a systematic review that included a critical appraisal step, or was a reporting checklist for in vitro studies. We abstracted, normalized, and categorized all criteria applied by the included appraisal tools to create an "item bank" database of issues relevant to the assessment of in vitro studies. The resulting item bank consists of 676 unique appraisal concepts from 67 appraisal tools. We believe this item bank is the single most comprehensive resource of its type to date, should be of high utility for future tool development exercises, and provides a robust methodology for grounding tool development in the existing literature. Although we set out to develop an item bank specifically targeting in vitro studies, we found that many of the assessment concepts we discovered are readily applicable to other study designs. Item banks can be of significant value as a resource; however, there are important challenges in developing, maintaining, and extending them of which researchers should be aware.


Subject(s)
Research Design , Humans , In Vitro Techniques , Databases, Factual , Animals
10.
Cureus ; 16(5): e59658, 2024 May.
Article in English | MEDLINE | ID: mdl-38836144

ABSTRACT

Critical appraisal is a crucial step in evidence-based practice, enabling researchers to evaluate the credibility and applicability of research findings. Healthcare professionals are encouraged to cultivate critical appraisal skills to assess the trustworthiness and value of available evidence. This process involves scrutinizing key components of a research publication, understanding the strengths and weaknesses of the study, and assessing its relevance to a specific context. It is essential for researchers to become familiar with the core elements of a research article and utilize key questions and guidelines to rigorously assess a study. This paper aims to provide an overview of the critical appraisal process. By understanding the main points of critical appraisal, researchers can assess the quality, relevance, and reliability of articles, thereby enhancing the validity of their findings and decision-making processes.

11.
J Allergy Clin Immunol Pract ; 12(7): 1695-1704, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38703820

ABSTRACT

Atopic dermatitis (AD) or eczema is a chronic inflammatory skin disease characterized by dry, itchy, and inflamed skin. We review emerging concepts and clinical evidence addressing the pathogenesis and prevention of AD. We examine several interventions ranging from skin barrier enhancement strategies to probiotics, prebiotics, and synbiotics; and conversely, from antimicrobial exposure to vitamin D and omega fatty acid supplementation; breastfeeding and hydrolyzed formula; and house dust mite avoidance and immunotherapy. We appraise the available evidence base within the context of the Grades of Recommendation, Assessment, Development, and Evaluation approach. We also contextualize our findings in relation to concepts relating AD and individual-patient allergic life trajectories versus a linear concept of the atopic march and provide insights into future knowledge gaps and clinical trial design considerations that must be addressed in forthcoming research. Finally, we provide implementation considerations to detect population-level differences in AD risk. Major international efforts are required to provide definitive evidence regarding what works and what does not for preventing AD.


Subject(s)
Dermatitis, Atopic , Humans , Dermatitis, Atopic/prevention & control , Animals , Probiotics/therapeutic use , Prebiotics
12.
J Clin Epidemiol ; 171: 111392, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38740313

ABSTRACT

OBJECTIVES: To assess the extent to which the overall quality of evidence indicates changes to be expected in intervention effect estimates when new data become available. METHODS: We conducted a meta-epidemiological study. We obtained evidence from meta-analyses of randomized trials in Cochrane reviews addressing the same health-care question that were updated with the inclusion of additional data between January 2016 and May 2021. We extracted the reported effect estimates with 95% confidence intervals (CIs) from meta-analyses and the corresponding GRADE (Grading of Recommendations Assessment, Development, and Evaluation) assessments of any intervention comparison for the primary outcome in the first and the last updated review version. We considered the reported overall quality (certainty) of evidence (CoE) and specific evidence limitations (none, serious, or very serious for risk of bias, imprecision, inconsistency, and/or indirectness). We assessed the change in pooled effect estimates between the original and updated evidence using the ratio of odds ratios (ROR), absolute ratio of odds ratios (aROR), ratio of standard errors (RoSE), direction of effects, and level of statistical significance. RESULTS: High CoE without limitations characterized 19.3% (n = 29) of the 150 included original Cochrane reviews. Updating with additional data did not systematically change their effect estimates (mean ROR 1.00; 95% CI 0.99-1.02), which deviated 1.06-fold from the older estimates (median aROR; interquartile range [IQR] 1.01-1.15), gained precision (median RoSE 0.87; IQR 0.76-1.00), and maintained the same direction and the same level of statistical significance in 93% (27 of 29) of cases. Lower CoE with limitations characterized the remaining 121 original reviews, graded as moderate CoE in 30.0% (45 of 150), low CoE in 32.0% (48 of 150), and very low CoE in 18.7% (28 of 150). Their updates showed larger absolute deviations (median aROR 1.12-1.33) and larger gains in precision (median RoSE 0.78-0.86), without clear and consistent differences between these CoE categories. Changes in effect direction or statistical significance were also more common in lower-quality evidence, again to a similar extent across categories (no change in 75.6%, 64.6%, and 75.0% for moderate, low, and very low CoE, respectively). As limitations increased, effect estimates deviated more (aROR 1.05 with zero limitations, 1.11 with one, 1.25 with two, 1.24 with three) and changes in direction or significance became more frequent (93.2% stable with no limitations, 74.5% with one, 68.2% with two, and 61.5% with three). CONCLUSION: High-quality evidence without methodological deficiencies is trustworthy and stable, providing reliable intervention effect estimates when updated with new data. Evidence of moderate and lower quality may be equally prone to instability and cannot indicate whether available effect estimates are true, exaggerated, or underestimated.
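A hedged sketch of the change metrics named above, under their usual meta-epidemiological definitions (the review's exact formulas may differ): ROR divides the updated odds ratio by the original one, aROR folds that ratio so it always measures deviation away from 1, and RoSE compares standard errors recovered from the reported 95% CIs. The numbers in the usage line are illustrative only.

```python
# Change metrics for an updated meta-analysis: ROR, aROR, and RoSE.
import math

Z = 1.96  # normal quantile for a 95% confidence interval

def se_from_ci(lower, upper):
    """Standard error of ln(OR) recovered from a 95% CI for the OR."""
    return (math.log(upper) - math.log(lower)) / (2 * Z)

def change_metrics(or_old, ci_old, or_new, ci_new):
    ror = or_new / or_old                              # ratio of odds ratios
    aror = math.exp(abs(math.log(ror)))                # absolute ROR, always >= 1
    rose = se_from_ci(*ci_new) / se_from_ci(*ci_old)   # < 1 means gained precision
    return ror, aror, rose

# Illustrative numbers only: an update moving OR 0.80 -> 0.85 with a narrower CI.
print(change_metrics(0.80, (0.60, 1.07), 0.85, (0.70, 1.03)))
```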


Subject(s)
Randomized Controlled Trials as Topic , Humans , Randomized Controlled Trials as Topic/standards , Evidence-Based Medicine/standards , Evidence-Based Medicine/methods , Meta-Analysis as Topic , Systematic Reviews as Topic/methods
13.
Trials ; 25(1): 286, 2024 Apr 27.
Article in English | MEDLINE | ID: mdl-38678289

ABSTRACT

BACKGROUND: The fragility index is a statistical measure of the robustness or "stability" of a statistically significant result. It has been adapted to assess the robustness of statistically significant outcomes from randomized controlled trials. By hypothetically switching some non-responders to responders, for instance, this metric measures how many individuals would need to have responded for a statistically significant finding to become non-significant. The purpose of this study is to assess the fragility index of randomized controlled trials evaluating opioid substitution and antagonist therapies for opioid use disorder. This will indicate the robustness of trials in the field and the confidence that should be placed in their outcomes, potentially identifying ways to improve clinical research in the field. This is especially important as opioid use disorder has become a global epidemic, and the incidence of opioid-related fatalities has climbed 500% in the past two decades. METHODS: Six databases were searched from inception to September 25, 2021, for randomized controlled trials evaluating opioid substitution and antagonist therapies for opioid use disorder and meeting the requirements for fragility index calculation. Specifically, we included all parallel-arm or two-by-two factorial design RCTs that assessed the effectiveness of any opioid substitution and antagonist therapy using a binary primary outcome and reported a statistically significant result. The fragility index of each study was calculated using the methods described by Walsh and colleagues. The risk of bias of included studies was assessed using the Revised Cochrane Risk of Bias tool for randomized trials. RESULTS: Ten studies with a median sample size of 82.5 (interquartile range [IQR] 58-179; range 52-226) were eligible for inclusion. Overall risk of bias was deemed low in seven studies, of some concern in two studies, and high in one study. The median fragility index was 7.5 (IQR 4-12; range 1-26). CONCLUSIONS: Our results suggest that approximately eight participants are needed to overturn the conclusions of the majority of trials in opioid use disorder. Future work should focus on maximizing transparency in the reporting of study results by reporting confidence intervals and fragility indices and emphasizing the clinical relevance of findings. TRIAL REGISTRATION: PROSPERO CRD42013006507. Registered on November 25, 2013.
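For readers unfamiliar with the metric, here is a minimal Python sketch of the fragility index calculation as commonly described (following the approach attributed to Walsh and colleagues): non-events in the arm with fewer events are hypothetically switched to events one at a time, recomputing a two-sided Fisher's exact test, until statistical significance is lost. The trial counts in the example are invented for illustration.

```python
# Fragility index: number of event switches needed for p to cross alpha.
from scipy.stats import fisher_exact

def fragility_index(events_a, total_a, events_b, total_b, alpha=0.05):
    """Pass the arm with fewer events first; returns the fragility index."""
    _, p = fisher_exact([[events_a, total_a - events_a],
                         [events_b, total_b - events_b]])
    if p >= alpha:
        return 0  # the result is not statistically significant to begin with
    flips = 0
    # Hypothetically switch non-responders to responders in arm A.
    while p < alpha and events_a < total_a:
        events_a += 1
        flips += 1
        _, p = fisher_exact([[events_a, total_a - events_a],
                             [events_b, total_b - events_b]])
    return flips

# Illustrative two-arm trial: 15/50 responders vs 30/50 responders.
print(fragility_index(15, 50, 30, 50))
```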


Subject(s)
Narcotic Antagonists , Opiate Substitution Treatment , Opioid-Related Disorders , Randomized Controlled Trials as Topic , Humans , Analgesics, Opioid/therapeutic use , Analgesics, Opioid/adverse effects , Data Interpretation, Statistical , Narcotic Antagonists/therapeutic use , Narcotic Antagonists/adverse effects , Opiate Substitution Treatment/methods , Opioid-Related Disorders/drug therapy , Research Design , Treatment Outcome
14.
Semin Perinatol ; 48(3): 151900, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38653625

ABSTRACT

Quality improvement (QI) has become an integral part of healthcare. Despite efforts to improve the reporting of QI through frameworks such as the SQUIRE 2.0 guidelines, there is no standard or well-accepted guide for evaluating published QI for rigor, validity, generalizability, and applicability. Users' Guides for the evaluation of published clinical research have been employed routinely for over 25 years; however, similar tools for the critical appraisal of QI are limited and uncommonly used. In this article we propose an approach to guide the critical review of QI reports, focused on evaluating the methodology, the improvement results, and the applicability and feasibility of implementation in other settings. The resulting Quality Improvement Critical Knowledge (QUICK) Tool can be used by those reviewing manuscripts submitted for publication, as well as by healthcare providers seeking to understand how to apply published QI to their local context.


Subject(s)
Quality Improvement , Humans , Guidelines as Topic
16.
Aust Occup Ther J ; 71(4): 552-564, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38472150

ABSTRACT

INTRODUCTION: Evidence-based practice supports clinical decision-making by using multiple sources of evidence arising from research and practice. Research evidence develops through empirical study, while practice evidence arises through clinical experience, client preferences, and the practice context. Although occupational therapists have embraced the paradigm of evidence-based practice, some studies have identified limits in the availability and use of research, which can lead to reliance on other forms of evidence. This study aimed to understand how Australian occupational therapists use practice evidence, manage potential bias, and enhance trustworthiness. The potential use of a critical appraisal tool for practice evidence was also explored. METHODS: A 42-item questionnaire was developed to address the study aims. It consisted of 7-point Likert-scale, ordinal, and free-text questions. Likert scales were collapsed into binary scales and analysed using SPSS. Ordinal data were graphed, and free-text responses were analysed using manifest content analysis. RESULTS: Most respondents (82%) indicated that practice evidence is an important informant of practice and is used alongside research evidence. Almost all respondents (98%) expressed confusion when reconciling discrepancies between research and practice evidence. There was general acknowledgement that practice evidence is prone to bias (82%), yet 92% were confident in trusting their own practice evidence. Most respondents (74.5%) undertook some measures to appraise practice evidence, and almost all respondents (90%) agreed they would refer to a critical appraisal tool that helped them evaluate practice evidence. CONCLUSION: Occupational therapists in this study routinely use practice evidence arising from their own experience, client perspectives, and their practice context to inform clinical decision-making. While they agreed that practice evidence is prone to bias and misinterpretation, they generally trusted their own practice evidence. Participants indicated they needed guidance to critically appraise their practice evidence and supported the development of a critical appraisal tool for this purpose.


Subject(s)
Evidence-Based Practice , Occupational Therapists , Occupational Therapy , Humans , Australia , Occupational Therapy/organization & administration , Occupational Therapy/standards , Occupational Therapists/psychology , Female , Male , Attitude of Health Personnel , Surveys and Questionnaires , Adult , Middle Aged , Trust
17.
Fertil Steril ; 121(6): 918-920, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38309515

ABSTRACT

When evidence from randomized controlled trials about the effectiveness and safety of an intervention is unclear, researchers may choose to review the nonrandomized evidence. All systematic reviews pose considerable challenges, and the level of methodological expertise required to undertake a useful review of nonrandomized intervention studies is both high and often severely underestimated. Using the example of the endometrial receptivity array, we review some common, critical flaws in systematic reviews of this nature, including errors in critical appraisal and meta-analysis.


Subject(s)
Observational Studies as Topic , Humans , Observational Studies as Topic/methods , Observational Studies as Topic/standards , Female , Meta-Analysis as Topic , Systematic Reviews as Topic/methods , Systematic Reviews as Topic/standards , Research Design/standards , Endometrium/pathology , Evidence-Based Medicine/standards , Pregnancy
18.
Res Synth Methods ; 15(3): 512-522, 2024 May.
Article in English | MEDLINE | ID: mdl-38316610

ABSTRACT

Systematic reviews (SRs) have an important role in the healthcare decision-making practice. Assessing the overall confidence in the results of SRs using quality assessment tools, such as "A MeaSurement Tool to Assess Systematic Reviews 2" (AMSTAR 2), is crucial since not all SRs are conducted using the most rigorous methods. In this article, we introduce a free, open-source R package called "amstar2Vis" (https://github.com/bougioukas/amstar2Vis) that provides easy-to-use functions for presenting the critical appraisal of SRs, based on the items of AMSTAR 2 checklist. An illustrative example is outlined, describing the steps involved in creating a detailed table with the item ratings and the overall confidence ratings, generating a stacked bar plot that shows the distribution of ratings as percentages of SRs for each AMSTAR 2 item, and creating a "ggplot2" graph that shows the distribution of overall confidence ratings ("Critically Low," "Low," "Moderate," or "High"). We expect "amstar2Vis" to be useful for overview authors and methodologists who assess the quality of SRs with AMSTAR 2 checklist and facilitate the production of pertinent publication-ready tables and figures. Future research and applications could further investigate the functionality or potential improvements of our package.
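The amstar2Vis package itself is written in R, and its exact function names are not reproduced here; as a language-neutral illustration of the kind of figure the abstract describes, the following matplotlib sketch draws a stacked bar plot of item ratings (Yes / Partial Yes / No) as percentages of appraised SRs for each AMSTAR 2 item. All counts are fabricated placeholders.

```python
# Stacked bar plot of hypothetical AMSTAR 2 item ratings across a set of SRs.
import matplotlib.pyplot as plt
import numpy as np

items = [f"Item {i}" for i in range(1, 17)]   # the 16 AMSTAR 2 items
rng = np.random.default_rng(0)                # fabricated example data
yes = rng.integers(20, 80, 16).astype(float)
partial = rng.integers(0, 20, 16).astype(float)
no = 100.0 - yes - partial                    # each row sums to 100%

fig, ax = plt.subplots(figsize=(8, 5))
left = np.zeros(16)
for label, vals in [("Yes", yes), ("Partial Yes", partial), ("No", no)]:
    ax.barh(items, vals, left=left, label=label)
    left += vals
ax.set_xlabel("Percentage of systematic reviews")
ax.invert_yaxis()                             # show Item 1 at the top
ax.legend()
plt.tight_layout()
plt.show()
```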


Subject(s)
Checklist , Software , Systematic Reviews as Topic , Humans , Reproducibility of Results , Research Design , Evidence-Based Medicine , Algorithms , Meta-Analysis as Topic
19.
Cureus ; 16(1): e52746, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38384650

ABSTRACT

The reliability and relevance of medical literature are significant concerns in the post-COVID-19 era, in which misinformation and disinformation are serious threats. This practice guide provides an overview of practical strategies for critically appraising the reliability of research publications. These strategies include critically appraising the effectiveness and constraints of various approaches to disseminating medical information, choosing appropriate medical literature resources, navigating library databases, screening the literature retrieved by a search, and screening individual publications. We also discuss the importance of considering study limitations and the relevance of the results for research or use in the medical arena. In-depth critical appraisal of medical or clinical research evidence requires expertise, insight into research methodologies, and a grasp of the issues in each field. By harnessing the wealth of reliable and relevant information available in the medical literature through the above steps, we can mitigate potentially misleading information and stay at the forefront of our respective fields.

20.
MethodsX ; 12: 102610, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38371462

ABSTRACT

Cross-sectional studies are commonly used to study human health and disease but are especially susceptible to bias. This scoping review aims to identify and describe available tools for assessing the risk of bias (RoB) in cross-sectional studies and to compile the key bias concepts relevant to cross-sectional studies into an item bank. Using the JBI scoping review methodology, the strategy to locate relevant RoB concepts and tools combines database searches, prospective review of PROSPERO registry records, and consultation with knowledge users and content experts. English-language records will be included if they describe tools, checklists, or instruments that describe or permit assessment of RoB for cross-sectional studies. Systematic reviews will be included if they consider eligible RoB tools or use RoB tools to assess the RoB of cross-sectional studies. All records will be independently screened, selected, and extracted by one researcher and checked by a second. An analytic framework will be used to structure the data extraction. Results of the scoping review are pending and will be used to inform the future selection of RoB tools and to consider whether development of a new RoB tool for cross-sectional studies is needed.
