1.
Article in English | MEDLINE | ID: mdl-38666758

ABSTRACT

In psychological studies, multivariate outcomes measured on the same individuals are often encountered. Effects originating from these outcomes are consequently dependent. Multivariate meta-analysis examines the relationships of multivariate outcomes by estimating the mean effects and their variance-covariance matrices from series of primary studies. In this paper we discuss a unified modelling framework for multivariate meta-analysis that also incorporates measurement error corrections. We focus on two types of effect sizes, standardized mean differences (d) and correlations (r), that are common in psychological studies. Using generalized least squares estimation, we outline estimated mean vectors and variance-covariance matrices for d and r that are corrected for measurement error. Given the burgeoning research involving multivariate outcomes, and the largely overlooked ramifications of measurement error, we advocate addressing measurement error while conducting multivariate meta-analysis to enhance the replicability of psychological research.
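The abstract does not reproduce the estimators themselves, so the following is only a minimal sketch of generalized least squares (GLS) pooling of dependent effect sizes with known within-study covariance matrices, together with the classical attenuation correction for outcome unreliability. The function names, the illustrative numbers, and the specific correction shown are assumptions for illustration, not the paper's exact formulas.

```python
import numpy as np

def correct_d_for_unreliability(d, var_d, reliability):
    """Classical attenuation correction for a standardized mean difference:
    divide d by the square root of the outcome reliability (a simple sketch;
    the paper's measurement error corrections may differ in detail)."""
    return d / np.sqrt(reliability), var_d / reliability

def gls_pooled_mean(effect_vectors, cov_matrices):
    """GLS estimate of the mean effect vector across studies that each report
    a p-vector of dependent effect sizes with a known covariance matrix."""
    p = len(effect_vectors[0])
    W_sum = np.zeros((p, p))
    Wy_sum = np.zeros(p)
    for y_i, V_i in zip(effect_vectors, cov_matrices):
        W_i = np.linalg.inv(V_i)        # inverse-covariance weights
        W_sum += W_i
        Wy_sum += W_i @ np.asarray(y_i, float)
    cov_mean = np.linalg.inv(W_sum)     # covariance of the pooled mean vector
    return cov_mean @ Wy_sum, cov_mean

# Two hypothetical studies, each reporting two correlated d values
effects = [np.array([0.40, 0.25]), np.array([0.55, 0.30])]
covs = [np.array([[0.040, 0.012], [0.012, 0.050]]),
        np.array([[0.030, 0.010], [0.010, 0.040]])]
mean_d, cov_mean = gls_pooled_mean(effects, covs)
print(mean_d, np.sqrt(np.diag(cov_mean)))
```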

2.
Behav Res Methods ; 52(5): 2020-2030, 2020 10.
Article in English | MEDLINE | ID: mdl-32157601

ABSTRACT

While both methodological and applied work on Bayesian meta-analysis have flourished, Bayesian modeling of differences between groups of studies remains scarce in meta-analyses in psychology, education, and the social sciences. On rare occasions when Bayesian approaches have been used, non-informative prior distributions have been chosen. However, more informative prior distributions have recently garnered popularity. We propose a group-specific weakly informative prior distribution for the between-studies standard-deviation parameter in meta-analysis. The proposed prior distribution incorporates a frequentist estimate of the between-studies standard deviation as the noncentrality parameter in a folded noncentral t distribution. This prior distribution is then separately modeled for each subgroup of studies, as determined by a categorical factor. Use of the new prior distribution is shown in two extensive examples based on a published meta-analysis on psychological interventions aimed at increasing optimism. We compare the folded noncentral t prior distribution to several non-informative prior distributions. We conclude with discussion, limitations, and avenues for further development of Bayesian meta-analysis in psychology and the social sciences.
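As a rough illustration of the proposed prior, the sketch below evaluates the density of a folded noncentral t distribution on a grid, with the noncentrality set to a hypothetical frequentist estimate of the between-studies standard deviation. The degrees of freedom, scale, and grid values are illustrative assumptions, not the paper's recommended settings.

```python
import numpy as np
from scipy.stats import nct

def folded_nct_pdf(tau, df, noncentrality, scale=1.0):
    """Density of a noncentral t distribution folded at zero, evaluated at
    tau >= 0; folding yields a prior supported on the nonnegative reals,
    which suits a between-studies standard deviation parameter."""
    tau = np.asarray(tau, dtype=float)
    dens = (nct.pdf(tau / scale, df, noncentrality)
            + nct.pdf(-tau / scale, df, noncentrality)) / scale
    return np.where(tau >= 0, dens, 0.0)

# Hypothetical group-specific prior: center the fold near a frequentist
# estimate of tau = 0.15 obtained from one subgroup of studies.
tau_grid = np.linspace(0.0, 1.0, 201)
prior_density = folded_nct_pdf(tau_grid, df=3, noncentrality=0.15)
```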


Subject(s)
Meta-Analysis as Topic , Psychology , Social Sciences , Bayes Theorem
3.
J Clin Epidemiol ; 89: 77-83, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28365305

ABSTRACT

OBJECTIVE: To identify variables that must be coded when synthesizing primary studies that use quasi-experimental designs. STUDY DESIGN AND SETTING: All quasi-experimental (QE) designs. RESULTS: When designing a systematic review of QE studies, potential sources of heterogeneity, both theory-based and methodological, must be identified. We outline key components of inclusion criteria for syntheses of quasi-experimental studies, provide recommendations for coding content-relevant and methodological variables, and outline the distinction between bivariate effect sizes and partial (i.e., adjusted) effect sizes. The designs and controls used are viewed as most important; potential sources of bias and confounding are also addressed. CONCLUSION: Careful consideration must be given to inclusion criteria and to the coding of theoretical and methodological variables during the design phase of a synthesis of quasi-experimental studies. The success of a meta-regression analysis relies on the data available to the meta-analyst; omission of critical moderator variables (i.e., effect modifiers) will undermine the conclusions of a meta-analysis.
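As one way to operationalize these coding recommendations, the sketch below defines a hypothetical coding record for a single QE study. Every field name and value is an illustrative assumption, not the authors' coding scheme.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class QEStudyRecord:
    """One row of a hypothetical coding sheet for a synthesis of QE studies."""
    study_id: str
    design: str                                              # e.g., "interrupted time series"
    controls_used: List[str] = field(default_factory=list)   # design/statistical controls
    effect_size: Optional[float] = None
    effect_size_type: str = "bivariate"                      # "bivariate" or "partial" (adjusted)
    covariates_adjusted: List[str] = field(default_factory=list)
    moderators: dict = field(default_factory=dict)           # theory-based effect modifiers
    bias_notes: Optional[str] = None                         # potential sources of bias/confounding

record = QEStudyRecord(
    study_id="Study_A",
    design="interrupted time series",
    controls_used=["baseline trend", "seasonal adjustment"],
    effect_size=0.21,
    effect_size_type="partial",
    covariates_adjusted=["age", "baseline score"],
    moderators={"setting": "school"},
)
```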


Subject(s)
Data Collection/methods , Non-Randomized Controlled Trials as Topic/statistics & numerical data , Guidelines as Topic , Humans , Research Design
4.
J Clin Epidemiol ; 89: 84-91, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28365308

ABSTRACT

OBJECTIVE: To outline issues of importance for analytic approaches to the synthesis of quasi-experiments (QEs) and to provide a statistical model for use in such analyses. STUDY DESIGN AND SETTING: We drew on the statistics, epidemiology, and social-science methodology literatures to outline methods for synthesizing QE studies, discussing the design and conduct of QEs, effect sizes from QEs, and moderator variables for the analysis of those effect sizes. RESULTS: Biases, confounding, design complexities, and comparisons across designs pose serious challenges to syntheses of QEs. We identify key components of meta-analyses of QEs, including the aspects of QE study design to be coded and analyzed. Of utmost importance are the design and statistical controls implemented in the QEs. Such controls, and any potential sources of bias and confounding, must be modeled in the analyses, along with aspects of the interventions and populations studied. Because of such controls, effect sizes from QEs are more complex than those from randomized experiments. We present a statistical meta-regression model that incorporates important features of the QEs under review. CONCLUSION: Meta-analyses of QEs pose particular challenges, but thorough coding of intervention characteristics and study methods, along with careful analysis, should allow for sound inferences.
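The abstract describes but does not reproduce the meta-regression model, so the following is a minimal inverse-variance-weighted (fixed-effects) meta-regression sketch in its spirit. The study values and the single "statistical controls" moderator are invented for illustration, and the paper's model may include additional terms.

```python
import numpy as np

def meta_regression(effects, variances, moderators):
    """Fixed-effects meta-regression: weight each effect by its inverse variance.

    effects    : k effect sizes from the QE studies
    variances  : k sampling variances
    moderators : (k x p) design matrix of coded study features
                 (design type, controls used, population, intervention, ...)
    Returns coefficient estimates and their standard errors.
    """
    y = np.asarray(effects, float)
    W = np.diag(1.0 / np.asarray(variances, float))
    X = np.column_stack([np.ones(len(y)), np.asarray(moderators, float)])
    XtWX_inv = np.linalg.inv(X.T @ W @ X)
    beta = XtWX_inv @ X.T @ W @ y        # weighted least squares slopes
    se = np.sqrt(np.diag(XtWX_inv))      # standard errors under the fixed-effects model
    return beta, se

# Hypothetical data: 5 QE studies, one dummy moderator for "statistical controls used"
beta, se = meta_regression(
    effects=[0.30, 0.10, 0.45, 0.22, 0.05],
    variances=[0.02, 0.03, 0.05, 0.02, 0.04],
    moderators=[[1], [0], [1], [1], [0]],
)
```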


Subject(s)
Models, Statistical , Non-Randomized Controlled Trials as Topic/methods , Non-Randomized Controlled Trials as Topic/statistics & numerical data , Humans , Meta-Analysis as Topic , Research Design
5.
J Clin Epidemiol ; 89: 43-52, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28351693

ABSTRACT

OBJECTIVES: Rigorous and transparent bias assessment is a core component of high-quality systematic reviews. We assess modifications to existing risk-of-bias approaches to incorporate rigorous quasi-experimental approaches with selection on unobservables: nonrandomized studies that use design-based approaches to control for unobservable sources of confounding, such as difference-in-differences studies, instrumental variables, interrupted time series, natural experiments, and regression-discontinuity designs. STUDY DESIGN AND SETTING: We review existing risk-of-bias tools. Drawing on these tools, we present domains of bias and suggest directions for evaluation questions. RESULTS: The review suggests that existing risk-of-bias tools provide, to varying degrees, incomplete and insufficiently transparent criteria for assessing the validity of these designs. We then present an approach to evaluating the internal validity of quasi-experiments with selection on unobservables. CONCLUSION: We conclude that tools for nonrandomized studies of interventions need to be developed further to incorporate evaluation questions for quasi-experiments with selection on unobservables.


Subject(s)
Bias , Non-Randomized Controlled Trials as Topic/statistics & numerical data , Humans , Research Design , Risk Assessment
6.
J Sport Exerc Psychol ; 38(5): 441-457, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27633956

ABSTRACT

Research linking the "quiet eye" (QE) period to subsequent performance has not been systematically synthesized. In this paper we review the literature on the link between the two through nonintervention (Synthesis 1) and intervention (Synthesis 2) studies. In the first synthesis, 27 studies with 38 effect sizes resulted in a large mean effect (d = 1.04) reflecting differences between experts' and novices' QE periods, and a moderate effect size (d = 0.58) comparing QE periods for successful and unsuccessful performances within individuals. Studies reporting QE duration as a percentage of the total time revealed a larger mean effect size than studies reporting an absolute duration (in milliseconds). The second synthesis of 9 articles revealed very large effect sizes for both the quiet-eye period (d = 1.53) and performance (d = 0.84). QE also showed some ability to predict performance effects across studies.


Subject(s)
Achievement , Athletic Performance , Attention , Eye Movements , Fixation, Ocular , Practice, Psychological , Professional Competence , Psychomotor Performance , Humans , Orientation, Spatial , Spatial Navigation
7.
West J Nurs Res ; 37(4): 517-35, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25142707

ABSTRACT

A relatively novel type of meta-analysis, the model-driven meta-analysis, involves the quantitative synthesis of descriptive, correlational data and is useful for identifying key predictors of health outcomes and informing clinical guidelines. Few such meta-analyses have been conducted, and thus large bodies of research remain unsynthesized and uninterpreted for application in health care. We describe the unique challenges of conducting a model-driven meta-analysis, focusing primarily on issues related to locating a sample of published and unpublished primary studies, extracting and verifying descriptive and correlational data, and conducting the analyses. A current meta-analysis of research on predictors of key health outcomes in diabetes is used to illustrate our main points.
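One building block of a model-driven meta-analysis is pooling the reported correlations before fitting the theoretical model to the pooled matrix. The sketch below shows a simple fixed-effects, Fisher-z pooling that skips correlations a study did not report; the function name, the weighting by n - 3, and the example values are assumptions for illustration, not the authors' procedure.

```python
import numpy as np

def pool_correlations(corr_matrices, sample_sizes):
    """Pool correlation matrices across studies on the Fisher-z scale,
    weighting each study by n - 3 and skipping entries a study did not
    report (coded as np.nan)."""
    r = np.array(corr_matrices, dtype=float)                 # shape (k, p, p)
    observed = ~np.isnan(r)
    z = np.arctanh(np.clip(np.nan_to_num(r), -0.999, 0.999))
    w = (np.asarray(sample_sizes, float) - 3)[:, None, None] * observed
    pooled_r = np.tanh((w * z).sum(axis=0) / w.sum(axis=0))  # back-transform
    np.fill_diagonal(pooled_r, 1.0)
    return pooled_r

# Two hypothetical studies; the second did not report r(x1, y).
R1 = [[1.00, 0.30, 0.20],
      [0.30, 1.00, 0.40],
      [0.20, 0.40, 1.00]]
R2 = [[1.00, 0.25, np.nan],
      [0.25, 1.00, 0.35],
      [np.nan, 0.35, 1.00]]
pooled = pool_correlations([R1, R2], sample_sizes=[120, 200])
```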


Subject(s)
Diabetes Mellitus, Type 2/therapy , Models, Theoretical , Outcome Assessment, Health Care/methods , Humans
8.
Res Synth Methods ; 5(3): 235-53, 2014 Sep.
Article in English | MEDLINE | ID: mdl-26052849

ABSTRACT

A common assumption in meta-analysis is that effect sizes are independent. When correlated effect sizes are analyzed using traditional univariate techniques, this assumption is violated. This research assesses the impact of dependence arising from treatment-control studies with multiple endpoints on the homogeneity measures Q and I² in scenarios using the unbiased standardized-mean-difference effect size. Univariate and multivariate meta-analysis methods are examined. Conditions included different overall outcome effects, study sample sizes, numbers of studies, between-outcomes correlations, dependency structures, and ways of computing the correlation. The univariate approach used typical fixed-effects analyses, whereas the multivariate approach used generalized least-squares (GLS) estimates of a fixed-effects model, weighted by the inverse variance-covariance matrix. Increased dependence among effect sizes led to increased Type I error rates from univariate models. When effect sizes were strongly dependent, error rates were drastically higher than nominal levels regardless of study sample size and number of studies. In contrast, using GLS estimation to account for multiple-endpoint dependency maintained error rates within nominal levels. Mean I² values, however, were not greatly affected by increased amounts of dependency. Last, we point out that the between-outcomes correlation should be estimated as a pooled within-groups correlation rather than with a full-sample estimator that ignores treatment/control group membership.
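To make the quantities under study concrete, the sketch below computes Q and I² for a univariate fixed-effects analysis that treats the effects as independent (the analysis shown to inflate Type I error when endpoints are dependent), and pools the between-outcomes correlation within treatment and control groups. The variable names, the Fisher-z weighting, and the example numbers are illustrative assumptions rather than the study's exact procedures.

```python
import numpy as np

def q_and_i2(effects, variances):
    """Homogeneity statistic Q and the I^2 index from a univariate
    fixed-effects analysis that treats the effect sizes as independent."""
    y = np.asarray(effects, float)
    w = 1.0 / np.asarray(variances, float)
    mu = np.sum(w * y) / np.sum(w)             # inverse-variance pooled mean
    Q = np.sum(w * (y - mu) ** 2)              # weighted sum of squared deviations
    df = len(y) - 1
    I2 = max(0.0, (Q - df) / Q) * 100.0 if Q > 0 else 0.0
    return Q, I2

def pooled_within_groups_correlation(r_treat, n_treat, r_ctrl, n_ctrl):
    """Pool the between-outcomes correlation within the treatment and control
    groups (Fisher-z scale, weighted by n - 3) instead of using a full-sample
    correlation that ignores group membership."""
    z = np.arctanh(r_treat) * (n_treat - 3) + np.arctanh(r_ctrl) * (n_ctrl - 3)
    return np.tanh(z / (n_treat + n_ctrl - 6))

Q, I2 = q_and_i2([0.20, 0.50, 0.35, 0.60], [0.04, 0.05, 0.03, 0.06])
rho = pooled_within_groups_correlation(0.45, 60, 0.50, 60)
```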


Subject(s)
Clinical Trials as Topic/statistics & numerical data , Data Interpretation, Statistical , Endpoint Determination/methods , Meta-Analysis as Topic , Models, Statistical , Outcome Assessment, Health Care/methods , Algorithms , Computer Simulation , Multivariate Analysis , Reproducibility of Results , Sample Size , Sensitivity and Specificity
9.
Res Synth Methods ; 4(2): 127-43, 2013 Jun.
Article in English | MEDLINE | ID: mdl-26053653

ABSTRACT

Regression methods are widely used by researchers in many fields, yet methods for synthesizing regression results are scarce. This study proposes using a factored likelihood method, originally developed to handle missing data, to synthesize regression models that involve different predictors. The method uses the correlations reported in the regression studies to calculate synthesized standardized slopes: available correlations are used to estimate missing ones through a series of regressions, allowing correlations among variables to be synthesized as if each included study had contained all of the same variables. Monte Carlo simulation showed that the method is highly accurate and stable under fixed-effects models. An example demonstrates the steps for calculating the synthesized slopes with sweep operators. By rearranging the predictors in the included regression models, or by omitting a relatively small number of correlations from those models, the factored likelihood method can be applied to many situations that involve the synthesis of linear models. Limitations and other possible methods for synthesizing more complicated models are discussed. Copyright © 2012 John Wiley & Sons, Ltd.
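A central computational tool mentioned in the abstract is the sweep operator. The sketch below gives a generic implementation and shows how sweeping a (synthesized) correlation matrix on the predictor indices yields standardized regression slopes; the correlation values are invented for illustration, and the full factored likelihood machinery for estimating missing correlations is not reproduced here.

```python
import numpy as np

def sweep(A, k):
    """Apply the sweep operator to symmetric matrix A on pivot index k."""
    A = np.asarray(A, dtype=float).copy()
    d = A[k, k]
    row, col = A[k, :].copy(), A[:, k].copy()
    A -= np.outer(col, row) / d          # a_ij - a_ik * a_kj / a_kk
    A[k, :] = row / d
    A[:, k] = col / d
    A[k, k] = -1.0 / d
    return A

# Sweeping a correlation matrix on the predictor indices leaves the
# standardized slopes of the remaining variable in the swept columns.
# Variable order: x1, x2, y (illustrative correlations).
R = np.array([[1.0, 0.3, 0.5],
              [0.3, 1.0, 0.4],
              [0.5, 0.4, 1.0]])
S = sweep(sweep(R, 0), 1)
std_slopes = S[:2, 2]                    # beta_x1, beta_x2 for predicting y
```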

11.
Multivariate Behav Res ; 45(2): 213-38, 2010 Mar 31.
Article in English | MEDLINE | ID: mdl-26760284

ABSTRACT

We examined the degree of dependence between standardized-mean-difference effect sizes in multiple-treatment studies in meta-analysis, using the correlation formula provided by Gleser and Olkin (1994). To explore the impact of group size and of the values of the true multiple-treatment effect sizes, we simplified the formula for the correlation in terms of the ratio of group sizes under conditions of equal sample and effect sizes. The results showed that the group-size ratio affects the correlation between effects much more than do the values of the effect sizes: a relatively smaller control group and large effect sizes of the same sign were associated with stronger dependence. We also showed that ignoring the dependence between individual standardized-mean-difference effect sizes always decreases the precision of differences in mean effects across studies in a meta-analysis. The loss of precision was largest when treatment groups were much larger than the control group, regardless of the size of the effects or the number of studies in the meta-analysis.
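The sketch below does not reproduce the Gleser and Olkin (1994) covariance formula itself; it only illustrates, with invented numbers, why a positive covariance induced by a shared control group matters for contrasts between effect sizes: ignoring it overstates Var(d1 - d2) and therefore understates the precision of the difference.

```python
import numpy as np

def var_of_difference(v1, v2, cov12):
    """Variance of the contrast between two effect sizes that share a
    control group: Var(d1 - d2) = v1 + v2 - 2 * Cov(d1, d2)."""
    return v1 + v2 - 2.0 * cov12

# Illustrative values for two treatment arms compared against one control arm.
v1, v2 = 0.05, 0.05
for rho in (0.0, 0.3, 0.6):          # correlation induced by the shared control group
    cov12 = rho * np.sqrt(v1 * v2)
    correct = var_of_difference(v1, v2, cov12)
    naive = v1 + v2                  # analysis that ignores the dependence
    print(f"rho={rho:.1f}  Var(d1-d2)={correct:.3f}  naive={naive:.3f}")
```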

12.
Subst Use Misuse ; 42(6): 985-1007, 2007.
Article in English | MEDLINE | ID: mdl-17613959

ABSTRACT

This article reports the results of a meta-analysis of the effects of a set of community coalitions that implemented science-based substance use prevention interventions as part of a State Incentive Grant (SIG) in Kentucky. The analysis assessed direct effects on the prevalence of substance use among adolescents as well as which "risk" and "protective" factors mediated the coalition effects. In addition, we tested whether implementing multiple science-based prevention interventions enhanced the effects of coalitions on youth substance use. Short-term results (using 8th-grade data) showed no significant decreases in any of the six substance-use prevalence outcomes and, in fact, a significant though small increase in the prevalence of use of one substance (inhalants). Sustained results (using 10th-grade data), however, showed significant though small decreases in three of the six outcomes: past-month prevalence of cigarette use, alcohol use, and binge drinking. We found evidence that these sustained effects were mediated by two posited risk factors: friends' drug use and perceived availability of drugs. Finally, the number of science-based prevention interventions implemented in schools within the coalitions did not moderate the effects of the coalitions on the prevalence of drug use. Study limitations are noted.


Subject(s)
Community Mental Health Services/organization & administration , Community-Institutional Relations , Evidence-Based Medicine , Negotiating , Substance-Related Disorders/therapy , Humans , Risk Factors , Substance-Related Disorders/prevention & control
13.
Psychol Methods ; 11(1): 72-86, 2006 Mar.
Article in English | MEDLINE | ID: mdl-16594768

ABSTRACT

Four types of analysis are commonly applied to data from structured Rater × Ratee designs. These types are characterized by the unit of analysis, which is either raters or ratees, and by the design used, which is either a between-units or a within-unit design. The four types of analysis are quite different, and they therefore give rise to effect sizes that differ in their substantive interpretations. In most cases, effect sizes based on between-ratee analyses have the least ambiguous meaning and will best serve the aims of meta-analysts and primary researchers. Effect sizes that arise from within-unit designs confound the strength of an effect with its homogeneity. Nonetheless, the authors identify how a range of such effect-size types can appropriately serve the aims of meta-analysis.


Subject(s)
Data Interpretation, Statistical , Meta-Analysis as Topic , Research Design/statistics & numerical data , Adult , Beauty , Face , Female , Humans , Judgment , Likelihood Functions , Male , Observer Variation , Reference Values , Reproducibility of Results
14.
Psychol Aging ; 20(2): 272-84, 2005 Jun.
Article in English | MEDLINE | ID: mdl-16029091

ABSTRACT

A meta-analysis examined data from 36 studies linking physical activity to well-being in older adults without clinical disorders. The weighted mean change effect size for treatment groups (d_C = 0.24) was almost three times the mean for control groups (d_C = 0.09). Aerobic training was the most beneficial mode (d_C = 0.29), and moderate-intensity activity was the most beneficial intensity (d_C = 0.34). Longer exercise duration was less beneficial for several types of well-being, though findings were inconclusive. Physical activity had the strongest effects on self-efficacy (d_C = 0.38), and improvements in cardiovascular status, strength, and functional capacity were linked to improvements in overall well-being. Social-cognitive theory is used to explain the effect of physical activity on well-being.


Subject(s)
Activities of Daily Living , Aging , Exercise , Quality of Life , Adult , Aged , Aged, 80 and over , Cardiovascular Diseases/prevention & control , Female , Humans , Male , Mental Health , Middle Aged , Self Efficacy , Social Behavior