Results 1 - 20 of 25
1.
Dev Med Child Neurol ; 66(4): 415-421, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37528533

ABSTRACT

Many sources document problems that jeopardize the trustworthiness of systematic reviews. This is a major concern given their potential to influence patient care and impact people's lives. Responsibility for producing trustworthy conclusions on the evidence in systematic reviews is borne primarily by authors, who need the necessary training and resources to correctly report on the current knowledge base. Peer reviewers and editors are also accountable; they must ensure that published systematic reviews demonstrate proper methods. To support all these stakeholders, we attempt to distill the sprawling guidance currently available in our recent co-publication about best tools and practices for systematic reviews. We specifically address how to meet methodological conduct standards applicable to key components of systematic reviews. In this complementary invited review, we place these standards in the context of good scholarship principles for systematic review development. Our intention is to reach a broad audience and potentially improve the trustworthiness of evidence syntheses published in the developmental medicine literature and beyond.


Subject(s)
Systematic Reviews as Topic , Systematic Reviews as Topic/standards
2.
Br J Pharmacol ; 181(1): 180-210, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37282770

ABSTRACT

Data continue to accumulate indicating that many systematic reviews are methodologically flawed, biased, redundant, or uninformative. Some improvements have occurred in recent years based on empirical methods research and standardization of appraisal tools; however, many authors do not routinely or consistently apply these updated methods. In addition, guideline developers, peer reviewers, and journal editors often disregard current methodological standards. Although extensively acknowledged and explored in the methodological literature, most clinicians seem unaware of these issues and may automatically accept evidence syntheses (and clinical practice guidelines based on their conclusions) as trustworthy. A plethora of methods and tools are recommended for the development and evaluation of evidence syntheses. It is important to understand what these are intended to do (and cannot do) and how they can be utilized. Our objective is to distill this sprawling information into a format that is understandable and readily accessible to authors, peer reviewers, and editors. In doing so, we aim to promote appreciation and understanding of the demanding science of evidence synthesis among stakeholders. We focus on well-documented deficiencies in key components of evidence syntheses to elucidate the rationale for current standards. The constructs underlying the tools developed to assess reporting, risk of bias, and methodological quality of evidence syntheses are distinguished from those involved in determining overall certainty of a body of evidence. Another important distinction is made between those tools used by authors to develop their syntheses as opposed to those used to ultimately judge their work. Exemplar methods and research practices are described, complemented by novel pragmatic strategies to improve evidence syntheses. The latter include preferred terminology and a scheme to characterize types of research evidence. We organize best practice resources in a Concise Guide that can be widely adopted and adapted for routine implementation by authors and journals. Appropriate, informed use of these is encouraged, but we caution against their superficial application and emphasize their endorsement does not substitute for in-depth methodological training. By highlighting best practices with their rationale, we hope this guidance will inspire further evolution of methods and tools that can advance the field.


Subject(s)
Systematic Reviews as Topic , Systematic Reviews as Topic/standards , Research Design
3.
JBJS Rev ; 11(6), 2023 Jun 01.
Article in English | MEDLINE | ID: mdl-37285444

ABSTRACT

» Data continue to accumulate indicating that many systematic reviews are methodologically flawed, biased, redundant, or uninformative. Some improvements have occurred in recent years based on empirical methods research and standardization of appraisal tools; however, many authors do not routinely or consistently apply these updated methods. In addition, guideline developers, peer reviewers, and journal editors often disregard current methodological standards. Although extensively acknowledged and explored in the methodological literature, most clinicians seem unaware of these issues and may automatically accept evidence syntheses (and clinical practice guidelines based on their conclusions) as trustworthy. » A plethora of methods and tools are recommended for the development and evaluation of evidence syntheses. It is important to understand what these are intended to do (and cannot do) and how they can be utilized. Our objective is to distill this sprawling information into a format that is understandable and readily accessible to authors, peer reviewers, and editors. In doing so, we aim to promote appreciation and understanding of the demanding science of evidence synthesis among stakeholders. We focus on well-documented deficiencies in key components of evidence syntheses to elucidate the rationale for current standards. The constructs underlying the tools developed to assess reporting, risk of bias, and methodological quality of evidence syntheses are distinguished from those involved in determining overall certainty of a body of evidence. Another important distinction is made between those tools used by authors to develop their syntheses as opposed to those used to ultimately judge their work. » Exemplar methods and research practices are described, complemented by novel pragmatic strategies to improve evidence syntheses. The latter include preferred terminology and a scheme to characterize types of research evidence. We organize best practice resources in a Concise Guide that can be widely adopted and adapted for routine implementation by authors and journals. Appropriate, informed use of these is encouraged, but we caution against their superficial application and emphasize their endorsement does not substitute for in-depth methodological training. By highlighting best practices with their rationale, we hope this guidance will inspire further evolution of methods and tools that can advance the field.


Subject(s)
Systematic Reviews as Topic , Systematic Reviews as Topic/standards
4.
BMC Infect Dis ; 23(1): 383, 2023 Jun 08.
Article in English | MEDLINE | ID: mdl-37286949

ABSTRACT

Data continue to accumulate indicating that many systematic reviews are methodologically flawed, biased, redundant, or uninformative. Some improvements have occurred in recent years based on empirical methods research and standardization of appraisal tools; however, many authors do not routinely or consistently apply these updated methods. In addition, guideline developers, peer reviewers, and journal editors often disregard current methodological standards. Although extensively acknowledged and explored in the methodological literature, most clinicians seem unaware of these issues and may automatically accept evidence syntheses (and clinical practice guidelines based on their conclusions) as trustworthy. A plethora of methods and tools are recommended for the development and evaluation of evidence syntheses. It is important to understand what these are intended to do (and cannot do) and how they can be utilized. Our objective is to distill this sprawling information into a format that is understandable and readily accessible to authors, peer reviewers, and editors. In doing so, we aim to promote appreciation and understanding of the demanding science of evidence synthesis among stakeholders. We focus on well-documented deficiencies in key components of evidence syntheses to elucidate the rationale for current standards. The constructs underlying the tools developed to assess reporting, risk of bias, and methodological quality of evidence syntheses are distinguished from those involved in determining overall certainty of a body of evidence. Another important distinction is made between those tools used by authors to develop their syntheses as opposed to those used to ultimately judge their work. Exemplar methods and research practices are described, complemented by novel pragmatic strategies to improve evidence syntheses. The latter include preferred terminology and a scheme to characterize types of research evidence. We organize best practice resources in a Concise Guide that can be widely adopted and adapted for routine implementation by authors and journals. Appropriate, informed use of these is encouraged, but we caution against their superficial application and emphasize their endorsement does not substitute for in-depth methodological training. By highlighting best practices with their rationale, we hope this guidance will inspire further evolution of methods and tools that can advance the field.


Subject(s)
Systematic Reviews as Topic , Humans , Reference Standards
5.
JBI Evid Synth ; 21(9): 1699-1731, 2023 Sep 01.
Article in English | MEDLINE | ID: mdl-37282594

ABSTRACT

Data continue to accumulate indicating that many systematic reviews are methodologically flawed, biased, redundant, or uninformative. Some improvements have occurred in recent years based on empirical methods research and standardization of appraisal tools; however, many authors do not routinely or consistently apply these updated methods. In addition, guideline developers, peer reviewers, and journal editors often disregard current methodological standards. Although extensively acknowledged and explored in the methodological literature, most clinicians seem unaware of these issues and may automatically accept evidence syntheses (and clinical practice guidelines based on their conclusions) as trustworthy. A plethora of methods and tools are recommended for the development and evaluation of evidence syntheses. It is important to understand what these are intended to do (and cannot do) and how they can be utilized. Our objective is to distill this sprawling information into a format that is understandable and readily accessible to authors, peer reviewers, and editors. In doing so, we aim to promote appreciation and understanding of the demanding science of evidence synthesis among stakeholders. We focus on well-documented deficiencies in key components of evidence syntheses to elucidate the rationale for current standards. The constructs underlying the tools developed to assess reporting, risk of bias, and methodological quality of evidence syntheses are distinguished from those involved in determining overall certainty of a body of evidence. Another important distinction is made between those tools used by authors to develop their syntheses as opposed to those used to ultimately judge their work. Exemplar methods and research practices are described, complemented by novel pragmatic strategies to improve evidence syntheses. The latter include preferred terminology and a scheme to characterize types of research evidence. We organize best practice resources in a Concise Guide that can be widely adopted and adapted for routine implementation by authors and journals. Appropriate, informed use of these is encouraged, but we caution against their superficial application and emphasize their endorsement does not substitute for in-depth methodological training. By highlighting best practices with their rationale, we hope this guidance will inspire further evolution of methods and tools that can advance the field.


Subject(s)
Systematic Reviews as Topic , Humans
6.
J Pediatr Rehabil Med ; 16(2): 241-273, 2023.
Article in English | MEDLINE | ID: mdl-37302044

ABSTRACT

Data continue to accumulate indicating that many systematic reviews are methodologically flawed, biased, redundant, or uninformative. Some improvements have occurred in recent years based on empirical methods research and standardization of appraisal tools; however, many authors do not routinely or consistently apply these updated methods. In addition, guideline developers, peer reviewers, and journal editors often disregard current methodological standards. Although extensively acknowledged and explored in the methodological literature, most clinicians seem unaware of these issues and may automatically accept evidence syntheses (and clinical practice guidelines based on their conclusions) as trustworthy. A plethora of methods and tools are recommended for the development and evaluation of evidence syntheses. It is important to understand what these are intended to do (and cannot do) and how they can be utilized. Our objective is to distill this sprawling information into a format that is understandable and readily accessible to authors, peer reviewers, and editors. In doing so, we aim to promote appreciation and understanding of the demanding science of evidence synthesis among stakeholders. We focus on well-documented deficiencies in key components of evidence syntheses to elucidate the rationale for current standards. The constructs underlying the tools developed to assess reporting, risk of bias, and methodological quality of evidence syntheses are distinguished from those involved in determining overall certainty of a body of evidence. Another important distinction is made between those tools used by authors to develop their syntheses as opposed to those used to ultimately judge their work. Exemplar methods and research practices are described, complemented by novel pragmatic strategies to improve evidence syntheses. The latter include preferred terminology and a scheme to characterize types of research evidence. We organize best practice resources in a Concise Guide that can be widely adopted and adapted for routine implementation by authors and journals. Appropriate, informed use of these is encouraged, but we caution against their superficial application and emphasize their endorsement does not substitute for in-depth methodological training. By highlighting best practices with their rationale, we hope this guidance will inspire further evolution of methods and tools that can advance the field.

7.
Acta Anaesthesiol Scand ; 67(9): 1148-1177, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37288997

ABSTRACT

Data continue to accumulate indicating that many systematic reviews are methodologically flawed, biased, redundant, or uninformative. Some improvements have occurred in recent years based on empirical methods research and standardization of appraisal tools; however, many authors do not routinely or consistently apply these updated methods. In addition, guideline developers, peer reviewers, and journal editors often disregard current methodological standards. Although extensively acknowledged and explored in the methodological literature, most clinicians seem unaware of these issues and may automatically accept evidence syntheses (and clinical practice guidelines based on their conclusions) as trustworthy. A plethora of methods and tools are recommended for the development and evaluation of evidence syntheses. It is important to understand what these are intended to do (and cannot do) and how they can be utilized. Our objective is to distill this sprawling information into a format that is understandable and readily accessible to authors, peer reviewers, and editors. In doing so, we aim to promote appreciation and understanding of the demanding science of evidence synthesis among stakeholders. We focus on well-documented deficiencies in key components of evidence syntheses to elucidate the rationale for current standards. The constructs underlying the tools developed to assess reporting, risk of bias, and methodological quality of evidence syntheses are distinguished from those involved in determining overall certainty of a body of evidence. Another important distinction is made between those tools used by authors to develop their syntheses as opposed to those used to ultimately judge their work. Exemplar methods and research practices are described, complemented by novel pragmatic strategies to improve evidence syntheses. The latter include preferred terminology and a scheme to characterize types of research evidence. We organize best practice resources in a Concise Guide that can be widely adopted and adapted for routine implementation by authors and journals. Appropriate, informed use of these is encouraged, but we caution against their superficial application and emphasize their endorsement does not substitute for in-depth methodological training. By highlighting best practices with their rationale, we hope this guidance will inspire further evolution of methods and tools that can advance the field.


Subject(s)
Systematic Reviews as Topic
8.
Syst Rev ; 12(1): 96, 2023 Jun 08.
Article in English | MEDLINE | ID: mdl-37291658

ABSTRACT

Data continue to accumulate indicating that many systematic reviews are methodologically flawed, biased, redundant, or uninformative. Some improvements have occurred in recent years based on empirical methods research and standardization of appraisal tools; however, many authors do not routinely or consistently apply these updated methods. In addition, guideline developers, peer reviewers, and journal editors often disregard current methodological standards. Although extensively acknowledged and explored in the methodological literature, most clinicians seem unaware of these issues and may automatically accept evidence syntheses (and clinical practice guidelines based on their conclusions) as trustworthy. A plethora of methods and tools are recommended for the development and evaluation of evidence syntheses. It is important to understand what these are intended to do (and cannot do) and how they can be utilized. Our objective is to distill this sprawling information into a format that is understandable and readily accessible to authors, peer reviewers, and editors. In doing so, we aim to promote appreciation and understanding of the demanding science of evidence synthesis among stakeholders. We focus on well-documented deficiencies in key components of evidence syntheses to elucidate the rationale for current standards. The constructs underlying the tools developed to assess reporting, risk of bias, and methodological quality of evidence syntheses are distinguished from those involved in determining overall certainty of a body of evidence. Another important distinction is made between those tools used by authors to develop their syntheses as opposed to those used to ultimately judge their work. Exemplar methods and research practices are described, complemented by novel pragmatic strategies to improve evidence syntheses. The latter include preferred terminology and a scheme to characterize types of research evidence. We organize best practice resources in a Concise Guide that can be widely adopted and adapted for routine implementation by authors and journals. Appropriate, informed use of these is encouraged, but we caution against their superficial application and emphasize their endorsement does not substitute for in-depth methodological training. By highlighting best practices with their rationale, we hope this guidance will inspire further evolution of methods and tools that can advance the field.


Subject(s)
Systematic Reviews as Topic
14.
NeuroRehabilitation ; 49(1): 161-164, 2021.
Article in English | MEDLINE | ID: mdl-34366300

ABSTRACT

BACKGROUND: Botulinum toxin A (BoNT-A) is a well-accepted treatment for the medical management of spasticity in children with cerebral palsy (CP). OBJECTIVE: To assess the efficacy and safety of BoNT-A compared with other treatment options in managing lower limb spasticity in children with CP. METHODS: A summary of the Cochrane Review update by Blumetti et al. (2019), with comments. RESULTS: This review included 31 randomized controlled trials (1508 participants). Compared with usual care/physiotherapy, the evidence is very uncertain about the effect of BoNT-A on gait, function, ankle joint range of motion (ROM), satisfaction, and ankle spasticity in children with CP. Compared with placebo/sham, BoNT-A probably benefits these same outcomes, although the results for function are contradictory. BoNT-A may not be more effective than serial casting at improving gait, function, ankle ROM and spasticity at any time point. However, it may be more effective than an orthosis at medium-term follow-up for hip ROM and adductor spasticity, but not function. The rate of adverse events with BoNT-A is similar to placebo/sham and serial casting. CONCLUSIONS: Evidence for the effectiveness and safety of BoNT-A for the management of lower limb spasticity in children with CP is uncertain, with better quality evidence available from studies of placebo/sham than non-placebo controls. To produce high-quality evidence, future studies need to improve their methodological quality and increase sample sizes.


Subject(s)
Botulinum Toxins, Type A , Cerebral Palsy , Neuromuscular Agents , Botulinum Toxins, Type A/therapeutic use , Cerebral Palsy/complications , Cerebral Palsy/drug therapy , Child , Humans , Lower Extremity , Muscle Spasticity/drug therapy , Muscle Spasticity/etiology , Neuromuscular Agents/therapeutic use , Randomized Controlled Trials as Topic , Treatment Outcome
15.
Dev Med Child Neurol ; 63(11): 1316-1326, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34091900

ABSTRACT

AIM: To evaluate the methodological quality of recent systematic reviews of interventions for children with cerebral palsy (CP) in order to determine the level of confidence in the reviews' conclusions. METHOD: A comprehensive search of 22 databases identified eligible systematic reviews with and without meta-analysis published worldwide from 2015 to 2019. We independently extracted data and used A Measurement Tool to Assess Systematic Reviews-2 (AMSTAR-2) to appraise methodological quality. RESULTS: Eighty-three systematic reviews met strict eligibility criteria. Most were from Europe and Latin America and reported on rehabilitative interventions. AMSTAR-2 appraisal found critically low confidence in 88% (n=73) of the reviews because of multiple and varied deficiencies. Only 7% (n=6) had no AMSTAR-2 critical domain deficiency. The number of systematic reviews increased fivefold from 2015 to 2019; however, quality did not improve over time. INTERPRETATION: Most of these systematic reviews are considered unreliable according to AMSTAR-2. Current recommendations for treating children with CP based on these flawed systematic reviews need re-evaluation. Findings are comparable to reports from other areas of medicine, despite the general perception that systematic reviews are high-level evidence. The required use of current, widely accepted guidance for conducting and reporting systematic reviews by authors, peer reviewers, and editors is critical to ensure reliable, unbiased, and transparent systematic reviews.

What this paper adds:
- Confidence was critically low in the conclusions of 88% of systematic reviews about interventions for children with CP.
- Quality issues in the sample were not limited to systematic reviews of non-randomized trials, or to those about certain populations of CP or interventions.
- The inclusion of meta-analysis did not improve the level of confidence in these systematic reviews.
- The number of systematic reviews on this topic increased over the 5 search years, but their methodological quality did not improve.
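As a quick check on the proportions quoted in this abstract, a minimal sketch follows that tallies AMSTAR-2 overall-confidence ratings and converts counts to percentages. Only the "critically low" count (n=73 of 83) comes from the abstract; the split of the remaining ten reviews across the other levels is invented so the example runs.

```python
# Hypothetical tally of AMSTAR-2 overall-confidence ratings for 83 reviews.
# Only the "critically low" count (n=73) is taken from the abstract; the
# remaining split across levels is invented for illustration.
from collections import Counter

ratings = (["critically low"] * 73 + ["low"] * 4
           + ["moderate"] * 4 + ["high"] * 2)
counts = Counter(ratings)
total = len(ratings)  # 83 systematic reviews in the appraised sample
for level in ["critically low", "low", "moderate", "high"]:
    n = counts[level]
    print(f"{level}: n={n} ({100 * n / total:.0f}%)")
# critically low: n=73 (88%), matching the percentage quoted above.
```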


Subject(s)
Cerebral Palsy/rehabilitation , Systematic Reviews as Topic/standards , Child , Humans
18.
Dev Med Child Neurol ; 54(7): 606-11, 2012 Jul.
Article in English | MEDLINE | ID: mdl-22577944

ABSTRACT

AIM: The aim of this study was to evaluate the interrater reliability and convergent validity of the American Academy for Cerebral Palsy and Developmental Medicine's (AACPDM) methodology for conducting systematic reviews (group design studies). METHOD: Four clinicians independently rated 24 articles for the level of evidence and conduct using AACPDM methodology. Study conduct was also assessed using the Effective Public Health Practice Project scale. Raters were randomly assigned to one of two pairs to resolve discrepancies. The level of agreement between individual raters and pairs was calculated using kappa (α=0.05) and intraclass correlations (ICCs; α=0.05). Spearman's rank correlation coefficient was calculated to evaluate the relationship between raters' categorization of quality categories using the two tools. RESULTS: There was acceptable agreement between raters (κ=0.77; p<0.001; ICC=0.90) and between assigned pairs (κ=0.83; p<0.001; ICC=0.96) for the level of evidence ratings. There was acceptable agreement between pairs for four of the seven conduct questions (κ=0.53-0.87). ICCs (all raters) for conduct category ratings (weak, moderate, and strong) also indicated good agreement (ICC=0.76). Spearman's rho indicated a significant positive correlation for the overall quality category comparisons of the two tools (0.52; p<0.001). CONCLUSIONS: The AACPDM rating system has acceptable interrater reliability. Evaluation of its study quality ratings demonstrated reasonable agreement when compared with a similar tool.
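The agreement statistics reported above can be illustrated with a short, self-contained sketch. All ratings below are randomly generated demonstration data, not the AACPDM study's, and the ICC is omitted for brevity. Cohen's kappa is computed directly from its definition, kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e is agreement expected by chance.

```python
# Minimal sketch of two agreement statistics cited in this abstract.
# The ratings are invented illustration data; the helper is hypothetical.
import numpy as np
from scipy.stats import spearmanr

def cohens_kappa(r1, r2):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    p_o = np.mean(r1 == r2)  # observed proportion of exact agreement
    # Expected agreement if the two raters were statistically independent.
    p_e = sum(np.mean(r1 == c) * np.mean(r2 == c)
              for c in np.union1d(r1, r2))
    return (p_o - p_e) / (1 - p_e)

rng = np.random.default_rng(0)
# Invented level-of-evidence ratings (I-V, coded 1-5) for 24 articles.
rater1 = rng.integers(1, 6, size=24)
# Second rater agrees ~80% of the time, otherwise rates at random.
rater2 = np.where(rng.random(24) < 0.8, rater1,
                  rng.integers(1, 6, size=24))
print(f"kappa = {cohens_kappa(rater1, rater2):.2f}")

# Spearman's rho between two ordinal quality scales (weak/moderate/strong,
# coded 1-3), again on invented data.
tool_a = rng.integers(1, 4, size=24)
tool_b = np.clip(tool_a + rng.integers(-1, 2, size=24), 1, 3)
rho, p = spearmanr(tool_a, tool_b)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```

On common benchmarks (e.g., Landis and Koch), a kappa of 0.77 as reported here falls in the "substantial agreement" range (0.61-0.80).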


Subject(s)
Cerebral Palsy , Child Development , Observer Variation , Reproducibility of Results , Review Literature as Topic , Surveys and Questionnaires/standards , Child Behavior , Child, Preschool , Humans , Infant , Societies, Medical
20.
Am J Phys Med Rehabil ; 87(7): 556-66, 2008 Jul.
Article in English | MEDLINE | ID: mdl-18574347

ABSTRACT

OBJECTIVE: To investigate the safety of single and repeated multilevel injections of botulinum toxin (BoNT) alone or a combination of phenol and BoNT performed under general anesthesia in children with chronic muscle spasticity. DESIGN: Retrospective cohort study. Data from 336 children who received a total of 764 treatments were analyzed. Mean age was 7.4 years, and 90% had a diagnosis of cerebral palsy. RESULTS: The overall complication rate was 6.8%, similar to rates reported in comparable studies of BoNT alone and of combined BoNT and phenol. Of the total number of injection sessions, 1.2% had anesthesia-related complications and 6.3% had injection-related complications; none resulted in death or long-term morbidity. Injection-related complications were most frequently local symptoms of short duration. These were comparable with those reported previously, except that this series included a rare occurrence of dysesthesias (0.4%) with phenol injections. Complications occurred more frequently in patients injected with a combination of phenol and BoNT than with BoNT alone, but no single causal factor could be implicated. No increase in complications with repeat injections was observed, and complication rates did not correlate with the dosage of either agent. CONCLUSIONS: Although these procedures are not without adverse effects, this series suggests that the potential benefits outweigh the risks.
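The rates quoted above can be checked with a small sketch that also attaches a confidence interval, which the abstract does not report. The event count is back-calculated from the stated 6.8% of 764 sessions and rounded, so it is illustrative rather than the study's raw tally.

```python
# Hypothetical sketch: overall complication rate per injection session with
# a 95% Wilson score interval. Counts are reconstructed from the reported
# percentages, not taken from the study's dataset.
import math

def wilson_ci(events, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = events / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

sessions = 764
complications = round(0.068 * sessions)  # ~52 sessions with a complication
low, high = wilson_ci(complications, sessions)
print(f"rate = {complications / sessions:.1%}, 95% CI {low:.1%}-{high:.1%}")
```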


Subject(s)
Botulinum Toxins/adverse effects , Cerebral Palsy/drug therapy , Hemiplegia/drug therapy , Muscle Spasticity/drug therapy , Phenol/adverse effects , Quadriplegia/drug therapy , Age Factors , Anti-Dyskinesia Agents/administration & dosage , Anti-Dyskinesia Agents/adverse effects , Anti-Infective Agents, Local/adverse effects , Botulinum Toxins/administration & dosage , Child , Chronic Disease , Female , Humans , Male , Phenol/administration & dosage , Retrospective Studies , Treatment Outcome