Results 1 - 20 of 43
1.
Behav Res Methods; 55(7): 3494-3503, 2023 Oct.
Article in English | MEDLINE | ID: mdl-36223007

ABSTRACT

Currently, the design standards for single-case experimental designs (SCEDs) are based on validity considerations as prescribed by the What Works Clearinghouse. However, there is also a need for design considerations such as power, based on statistical analyses. We derive and compute power for (AB)^k designs with multiple cases, which are common in SCEDs. Our computations show that effect size has the greatest impact on power, followed by the number of subjects and then the number of phase reversals. An effect size of 0.75 or higher, at least one set of phase reversals (i.e., where k > 1), and at least three subjects yielded high power. The latter two conditions agree with current standards, which call for at least an ABAB design or a multiple baseline design with three subjects to meet design standards. An effect size of 0.75 or higher is not uncommon in SCEDs either. Autocorrelations, the number of time points per phase, and intraclass correlations had a smaller but non-negligible impact on power. In sum, the power analyses in the present study show that the conditions needed to meet power requirements are not unreasonable in SCEDs. The software code to compute power is available on GitHub for readers' use.
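
A minimal Monte Carlo sketch of this kind of power computation follows; the authors' actual code is on GitHub, and this simplified stand-in assumes AR(1) within-case errors, a random case intercept, and a plain t-test on per-case phase-mean differences rather than the authors' derivations.

# Hedged sketch: simulation-based power for an (AB)^k design with multiple cases.
power_abk <- function(n_cases = 3, k = 2, n_time = 5, d = 0.75,
                      rho = 0.2, icc = 0.2, n_sims = 2000, alpha = 0.05) {
  phase <- rep(rep(0:1, times = k), each = n_time)  # A = 0, B = 1, k reversals
  len   <- length(phase)
  hits  <- 0
  for (s in seq_len(n_sims)) {
    ymat <- replicate(n_cases, {
      u <- rnorm(1, 0, sqrt(icc / (1 - icc)))       # case-level random intercept
      e <- as.numeric(arima.sim(list(ar = rho), n = len, sd = sqrt(1 - rho^2)))
      u + d * phase + e                             # unit within-case variance
    })
    diffs <- apply(ymat, 2, function(y) mean(y[phase == 1]) - mean(y[phase == 0]))
    if (t.test(diffs)$p.value < alpha) hits <- hits + 1
  }
  hits / n_sims
}
power_abk()  # e.g., estimated power for d = 0.75, k = 2, and three cases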


Subject(s)
Research Design , Humans
2.
Eval Rev; 42(2): 248-280, 2018 Apr.
Article in English | MEDLINE | ID: mdl-30060688

ABSTRACT

BACKGROUND: Randomized experiments yield unbiased estimates of treatment effect, but such experiments are not always feasible, so researchers have searched for conditions under which randomized and nonrandomized experiments can yield the same answer. This search requires well-justified and informative correspondence criteria, that is, criteria by which we can judge whether the results from an appropriately adjusted nonrandomized experiment well approximate results from randomized experiments. Past criteria have relied exclusively on frequentist statistics, asking, for example, whether results agree in sign or statistical significance or whether results differ significantly from each other. OBJECTIVES: In this article, we show how Bayesian correspondence criteria offer more varied, nuanced, and informative answers than those from frequentist approaches. RESEARCH DESIGN: We describe the conceptual bases of Bayesian correspondence criteria and then illustrate many possibilities using an example that compares results from a randomized experiment to results from a parallel nonequivalent comparison group experiment in which participants could choose their condition. RESULTS: Results suggest that, in this case, the quasi-experiment reasonably approximated the randomized experiment. CONCLUSIONS: We conclude with a discussion of the advantages (computation of relevant quantities, interpretation, and estimation of quantities of interest for policy), disadvantages, and limitations of Bayesian correspondence criteria. We believe that in most circumstances, the advantages of Bayesian approaches far outweigh the disadvantages.
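
As a toy illustration of the flavor of such criteria (not the authors' analysis), one Bayesian correspondence quantity is the posterior probability that the two estimates lie within a chosen tolerance of each other; the sketch below assumes normal posterior approximations, and all numbers are hypothetical placeholders.

# Hedged sketch of one Bayesian correspondence criterion: P(|quasi - RCT| < delta).
set.seed(1)
n_draws <- 1e5
rct   <- rnorm(n_draws, mean = 0.30, sd = 0.08)  # posterior draws, RCT effect
quasi <- rnorm(n_draws, mean = 0.25, sd = 0.10)  # posterior draws, quasi-experiment
delta <- 0.10                                    # tolerance defining "correspondence"
mean(abs(quasi - rct) < delta)                   # posterior P(correspondence)
quantile(quasi - rct, c(.025, .975))             # credible interval for the difference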


Subject(s)
Bayes Theorem , Empirical Research , Evaluation Studies as Topic , Randomized Controlled Trials as Topic , Bias , Propensity Score , Research Design
3.
J Appl Behav Anal; 49(3): 656-73, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27174301

ABSTRACT

The published literature often underrepresents studies that do not find evidence for a treatment effect; this is often called publication bias. Literature reviews that fail to include such studies may overestimate the size of an effect. Only a few studies have examined publication bias in single-case design (SCD) research, but those studies suggest that publication bias may occur. This study surveyed SCD researchers about publication preferences in response to simulated SCD results that show a range of small to large effects. Results suggest that SCD researchers are more likely to submit manuscripts that show large effects for publication and are more likely to recommend acceptance of manuscripts that show large effects when they act as a reviewer. A nontrivial minority of SCD researchers (4% to 15%) would drop 1 or 2 cases from the study if the effect size is small and then submit for publication. This article ends with a discussion of implications for publication practices in SCD research.


Subject(s)
Publication Bias , Research Design , Research Personnel/psychology , Humans , Research Personnel/statistics & numerical data , Surveys and Questionnaires
4.
J Clin Epidemiol; 76: 82-8, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27079848

ABSTRACT

OBJECTIVES: We reanalyzed data from a previous randomized crossover design that administered high or low doses of intravenous immunoglobulin (IgG) to 12 patients with hypogammaglobulinaemia over 12 time points, with crossover after time 6. The objective was to see if results corresponded when analyzed as a set of single-case experimental designs vs. as a usual randomized controlled trial (RCT). STUDY DESIGN AND SETTINGS: Two blinded statisticians independently analyzed results. One analyzed the RCT comparing mean outcomes of group A (high dose IgG) to group B (low dose IgG) at the usual trial end point (time 6 in this case). The other analyzed all 12 time points for the group B patients as six single-case experimental designs analyzed together in a Bayesian nonlinear framework. RESULTS: In the randomized trial, group A [M = 794.93; standard deviation (SD) = 90.48] had significantly higher serum IgG levels at time six than group B (M = 283.89; SD = 71.10) (t = 10.88; df = 10; P < 0.001), yielding a mean difference of MD = 511.05 [standard error (SE) = 46.98]. For the single-case experimental designs, the effect from an intrinsically nonlinear regression was also significant and comparable in size with overlapping confidence intervals: MD = 495.00, SE = 54.41, and t = 495.00/54.41 = 9.10. Subsequent exploratory analyses indicated that how trend was modeled made a difference to these conclusions. CONCLUSIONS: The results of single-case experimental designs accurately approximated results from an RCT, although more work is needed to understand the conditions under which this holds.
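
The reported trial result can be reproduced from the summary statistics alone; the snippet below assumes six patients per arm, as implied by df = 10.

# Reproducing the reported two-sample t-test from the abstract's summary
# statistics (n = 6 per arm is an assumption implied by df = 10).
mA <- 794.93; sdA <- 90.48; nA <- 6   # group A: high-dose IgG at time 6
mB <- 283.89; sdB <- 71.10; nB <- 6   # group B: low-dose IgG at time 6
sp <- sqrt(((nA - 1) * sdA^2 + (nB - 1) * sdB^2) / (nA + nB - 2))  # pooled SD
se <- sp * sqrt(1 / nA + 1 / nB)                                   # standard error
c(MD = mA - mB, SE = se, t = (mA - mB) / se)  # ~511.04, ~46.98, ~10.88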


Subject(s)
Agammaglobulinemia/drug therapy , Biomedical Research/methods , Immunoglobulins/administration & dosage , Randomized Controlled Trials as Topic , Research Design , Statistics as Topic/methods , Administration, Intravenous , Bayes Theorem , Dose-Response Relationship, Drug , Humans , Time Factors
5.
J Clin Epidemiol; 76: 18-46, 2016 Aug.
Article in English | MEDLINE | ID: mdl-26272791

ABSTRACT

N-of-1 trials are a useful tool for clinicians who want to determine the effectiveness of a treatment in a particular individual. The reporting of N-of-1 trials has been variable and incomplete, hindering their usefulness for clinical decision making and for future researchers. This document presents the CONSORT (Consolidated Standards of Reporting Trials) extension for N-of-1 trials (CENT 2015). CENT 2015 extends the CONSORT 2010 guidance to facilitate the preparation and appraisal of reports of an individual N-of-1 trial or a series of prospectively planned, multiple, crossover N-of-1 trials. CENT 2015 elaborates on 14 items of the CONSORT 2010 checklist, totalling 25 checklist items (44 sub-items), and recommends diagrams to help authors document the progress of one or more participants through a trial or series of trials, as applicable. Examples of good reporting and evidence-based rationales for CENT 2015 checklist items are provided.


Subject(s)
Biomedical Research/standards , Clinical Trials as Topic/standards , Guidelines as Topic , Publishing/standards , Research Design/standards , Research Report/standards , Terminology as Topic , Humans
6.
Res Synth Methods; 6(3): 246-64, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26212600

ABSTRACT

This article examines the impact of meta-analysis and explores why it was developed when it was, by the scholars it was, in the social sciences of the 1970s. For the first question, impact, it uses citation network analysis: that impact is visible in the sciences, the arts and humanities, and in such contemporaneous developments as multilevel modeling, medical statistics, qualitative methods, program evaluation, and single-case design. Using a constrained snowball sample of citations, we highlight key articles that are either most highly cited or most central to the systematic review network. The article then examines why meta-analysis emerged in the social sciences of the 1970s through the work of Gene Glass, Robert Rosenthal, and Frank Schmidt, each of whom developed similar theories of meta-analysis at about the same time. It ends by explaining how Simonton's chance configuration theory and Campbell's evolutionary epistemology can illuminate why meta-analysis occurred with these scholars when it did, and not in the medical sciences.


Subject(s)
Biomedical Research/history , Clinical Trials as Topic/history , Data Interpretation, Statistical , Meta-Analysis as Topic , Research Design , Review Literature as Topic , History, 20th Century , History, 21st Century
7.
Res Synth Methods; 6(3): 219-20, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26097018

ABSTRACT

This issue of Research Synthesis Methods is devoted to discussion of the origins of modern meta-analysis. Three articles are by pioneers in the development of meta-analysis: Gene Glass, Frank Schmidt, and Robert Rosenthal, who reflect on their own experiences of how they made these developments. The fourth article, by William Shadish, analyzes the impact of meta-analysis and the reasons why it developed at the time it did and by the people it did. These articles are followed by commentaries from Douglas Altman, Iain Chalmers, Harris Cooper, Kay Dickersin, Larry Hedges, David Hoaglin, and Hannah Rothstein, each of whom comments on the four target articles and offers their own perspective on how and why meta-analysis developed when and how it did.


Subject(s)
Biomedical Research/history , Clinical Trials as Topic/history , Data Interpretation, Statistical , Meta-Analysis as Topic , Research Design , Review Literature as Topic , History, 20th Century , History, 21st Century
10.
Psychol Methods; 20(1): 26-42, 2015 Mar.
Article in English | MEDLINE | ID: mdl-24885341

ABSTRACT

Single-case designs (SCDs) are short time series that assess intervention effects by measuring units repeatedly over time in both the presence and absence of treatment. This article introduces a statistical technique for analyzing SCD data that has not been much used in psychological and educational research: generalized additive models (GAMs). In parametric regression, the researcher must choose a functional form to impose on the data, for example, that trend over time is linear. GAMs reverse this process by letting the data inform the choice of functional form. In this article we review the problem that trend poses in SCDs, discuss how current SCD analytic methods approach trend, describe GAMs as a possible solution, suggest a GAM model testing procedure for examining the presence of trend in SCDs, present a small simulation to show the statistical properties of GAMs, and illustrate the procedure on 3 examples of different lengths. Results suggest that GAMs may be very useful both as a form of sensitivity analysis for checking the plausibility of assumptions about trend and as a primary data analysis strategy for testing treatment effects. We conclude with a discussion of some problems with GAMs and some future directions for research on the application of GAMs to SCDs.
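
A minimal sketch of the idea, using R's mgcv package, is shown below; it is not the authors' model-testing procedure, and the data are simulated placeholders.

# Hedged sketch: letting the data choose the trend's functional form with a GAM.
library(mgcv)
set.seed(1)
d <- data.frame(time  = 1:20,
                phase = factor(rep(c("A", "B"), each = 10)),  # 10 points per phase
                y     = c(rnorm(10, 5), rnorm(10, 8)))        # level shift in phase B
m_linear <- gam(y ~ phase + time, data = d, method = "REML")            # linear trend imposed
m_smooth <- gam(y ~ phase + s(time, k = 8), data = d, method = "REML")  # data-driven trend
summary(m_smooth)        # an edf near 1 suggests a linear trend is plausible
AIC(m_linear, m_smooth)  # informal comparison of the two trend specifications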


Subject(s)
Biomedical Research/statistics & numerical data , Models, Statistical , Research Design/statistics & numerical data , Humans
11.
J Sch Psychol; 52(2): 109-22, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24606971

ABSTRACT

The last 10 years have seen great progress in the analysis and meta-analysis of single-case designs (SCDs). This special issue includes five articles that provide an overview of current work on that topic, including standardized mean difference statistics, multilevel models, Bayesian statistics, and generalized additive models. Each article analyzes a common example and presents syntax or macros for carrying out the analyses. These articles are followed by commentaries from single-case design researchers and journal editors. This introduction briefly describes each article and then discusses several issues that must be addressed before we can know which analyses will eventually be best to use in SCD research. These issues include modeling trend, modeling error covariances, computing standardized effect size estimates, assessing statistical power, incorporating more accurate models of outcome distributions, exploring whether Bayesian statistics can improve estimation given the small samples common in SCDs, and the need for annotated syntax and graphical user interfaces that make complex statistics accessible to SCD researchers. The article then discusses reasons why SCD researchers are likely to incorporate statistical analyses into their research more often in the future, including changing expectations and contingencies regarding SCD research from outside SCD communities, changes and diversity within SCD communities, corrections of erroneous beliefs about the relationship between SCD research and statistics, and demonstrations of how statistics can help SCD researchers better meet their goals.


Subject(s)
Data Interpretation, Statistical , Meta-Analysis as Topic , Research Design/standards , Humans
12.
J Sch Psychol; 52(2): 123-47, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24606972

ABSTRACT

This article presents a d-statistic for single-case designs that is in the same metric as the d-statistic used in between-subjects designs such as randomized experiments, and offers some reasons why such a statistic would be useful in SCD research. The d has a formal statistical development, is accompanied by appropriate power analyses, and can be estimated using user-friendly SPSS macros. We discuss both advantages and disadvantages of d compared to other approaches such as previous d-statistics, overlap statistics, and multilevel modeling. It requires at least three cases for computation and assumes normally distributed outcomes and stationarity, assumptions that are discussed in some detail. We also show how to test these assumptions. The core of the article then demonstrates in depth how to compute d for one study, including estimation of the autocorrelation and the ratio of between-case variance to total variance (between-case plus within-case variance), how to compute power using a macro, and how to use the d to conduct a meta-analysis of studies using single-case designs in the free program R, including syntax in an appendix. This syntax includes how to read data, compute fixed- and random-effects average effect sizes, prepare a forest plot and a cumulative meta-analysis, estimate various influence statistics to identify studies contributing to heterogeneity and effect size, and do various kinds of publication bias analyses. This d may prove useful for both the analysis and meta-analysis of data from SCDs.
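
The meta-analytic steps listed above map onto standard functions in the metafor package; the sketch below is an assumed equivalent, not the article's appendix syntax, and the effect sizes are hypothetical.

# Hedged sketch of the meta-analytic workflow using metafor (hypothetical data).
library(metafor)
dat <- data.frame(yi = c(0.9, 1.2, 0.6, 1.5, 0.8),       # per-study d estimates
                  vi = c(0.10, 0.15, 0.08, 0.20, 0.12))  # sampling variances
fe <- rma(yi, vi, data = dat, method = "FE")    # fixed-effect average effect size
re <- rma(yi, vi, data = dat, method = "REML")  # random-effects average effect size
forest(re)                  # forest plot
cumul(re, order = dat$vi)   # cumulative meta-analysis, here ordered by precision
influence(re)               # influence statistics for heterogeneity/effect size
funnel(re)                  # funnel plot for publication bias
regtest(re)                 # regression test for funnel-plot asymmetry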


Subject(s)
Data Interpretation, Statistical , Meta-Analysis as Topic , Research Design/standards , Humans
13.
J Sch Psychol; 52(2): 149-78, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24606973

ABSTRACT

This article shows how to apply generalized additive models and generalized additive mixed models to single-case design data. These models excel at detecting the functional form between two variables (often called trend), that is, whether trend exists, and if it does, what its shape is (e.g., linear or nonlinear). In many respects, however, these models are also an ideal vehicle for analyzing single-case designs because they can consider the level, trend, variability, overlap, immediacy of effect, and phase consistency that single-case design researchers examine when interpreting a functional relation. We show how these models can be implemented in a wide variety of ways to test whether treatment is effective, whether cases differ from each other, whether treatment effects vary over cases, and whether trend varies over cases. We illustrate diagnostic statistics and graphs, and we discuss overdispersion of data in detail, with examples of quasibinomial models for overdispersed data, including how to compute dispersion and quasi-AIC fit indices in generalized additive models. We show how generalized additive mixed models can be used to estimate autoregressive models and random effects, and we discuss the limitations of the mixed models compared to generalized additive models. We provide extensive annotated syntax for doing all these analyses in the free computer program R.
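
A condensed sketch of one such model, with a random case intercept and AR(1) errors in the mgcv package, is given below; it is an assumed simplification, not the article's annotated syntax, and the data are simulated placeholders.

# Hedged sketch: generalized additive mixed model for multiple-case data with a
# random case intercept and lag-1 autoregressive errors (the quasibinomial and
# diagnostic extensions discussed in the article are omitted).
library(mgcv)   # attaches nlme, which provides corAR1()
set.seed(1)
scd <- data.frame(case  = factor(rep(1:3, each = 20)),
                  time  = rep(1:20, times = 3),
                  phase = factor(rep(rep(c("A", "B"), each = 10), times = 3)))
scd$y <- 5 + 2 * (scd$phase == "B") + rnorm(60)
m <- gamm(y ~ phase + s(time),
          random      = list(case = ~ 1),              # random intercept per case
          correlation = corAR1(form = ~ time | case),  # lag-1 autocorrelation
          data        = scd)
summary(m$gam)   # smooth (trend) and parametric (phase) terms
summary(m$lme)   # variance components and the AR(1) estimate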


Subject(s)
Models, Statistical , Research Design/standards , Humans
14.
Neuropsychol Rehabil; 24(3-4): 528-53, 2014.
Article in English | MEDLINE | ID: mdl-23862576

ABSTRACT

We describe a standardised mean difference statistic (d) for single-case designs that is equivalent to the usual d in between-groups experiments. We show how it can be used to summarise treatment effects over cases within a study, to conduct power analyses when planning new studies and grant proposals, and to meta-analyse effects across studies of the same question. We discuss limitations of this d-statistic and possible remedies for them. Even so, this d-statistic is better founded statistically than other effect size measures for single-case designs, and unlike many general linear model approaches such as multilevel modelling or generalised additive models, it produces a standardised effect size that can be integrated over studies with different outcome measures. SPSS macros for both effect size computation and power analysis are available.


Subject(s)
Research Design/statistics & numerical data , Humans , Meta-Analysis as Topic
15.
Psychol Methods; 18(3): 385-405, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23834421

ABSTRACT

Several authors have proposed the use of multilevel models to analyze data from single-case designs. This article extends that work in 2 ways. First, examples are given of how to estimate these models when the single-case designs have features that have not been considered by past authors. These include the use of polynomial coefficients to model nonlinear change, the modeling of counts (Poisson distributed) or proportions (binomially distributed) as outcomes, the use of 2 different ways of modeling treatment effects in ABAB designs, and applications of these models to alternating treatment and changing criterion designs. Second, issues that arise when multilevel models are used for the analysis of single-case designs are discussed; such issues can form part of an agenda for future research on this topic. These include statistical power and assumptions, applications to more complex single-case designs, the role of exploratory data analyses, extensions to other kinds of outcome variables and sampling distributions, and other statistical programs that can be used to do such analyses.
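
For the count-outcome case the article describes, a minimal sketch using lme4 follows; this is one of several programs capable of fitting such models, the article does not prescribe it, and the data are simulated.

# Hedged sketch: multilevel Poisson model for count outcomes in an AB-type
# single-case dataset (illustrative only; names and data are hypothetical).
library(lme4)
set.seed(3)
counts <- data.frame(case  = factor(rep(1:4, each = 16)),
                     time  = rep(1:16, times = 4),
                     phase = rep(rep(0:1, each = 8), times = 4))  # A = 0, B = 1
counts$y <- rpois(64, lambda = exp(1 + 0.6 * counts$phase))       # count outcome
m <- glmer(y ~ phase + scale(time) + (1 | case), data = counts, family = poisson)
summary(m)  # the phase coefficient is the treatment effect on the log-count scale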


Subject(s)
Linear Models , Multilevel Analysis/methods , Humans , Models, Statistical , Research Design/standards
16.
Behav Res Methods; 45(3): 813-21, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23239070

ABSTRACT

Researchers in the single-case design tradition have debated the size and importance of the observed autocorrelations in those designs. All of the past estimates of the autocorrelation in that literature have taken the observed autocorrelation estimates as the data to be used in the debate. However, estimates of the autocorrelation are subject to great sampling error when the design has a small number of time points, as is typically the situation in single-case designs. Thus, a given observed autocorrelation may greatly over- or underestimate the corresponding population parameter. This article presents Bayesian estimates of the autocorrelation that greatly reduce the role of sampling error, as compared to past estimators. Simpler empirical Bayes estimates are presented first, in order to illustrate the fundamental notions of autocorrelation sampling error and shrinkage, followed by fully Bayesian estimates, and the difference between the two is explained. Scripts to do the analyses are available as supplemental materials. The analyses are illustrated using two examples from the single-case design literature. Bayesian estimation warrants wider use, not only in debates about the size of autocorrelations, but also in statistical methods that require an independent estimate of the autocorrelation to analyze the data.
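
The empirical Bayes logic can be illustrated with simple precision-weighted shrinkage toward a grand mean; this is a schematic stand-in for the article's supplemental scripts, and the autocorrelations and series lengths below are hypothetical.

# Hedged sketch: empirical Bayes shrinkage of lag-1 autocorrelations.
r <- c(0.45, -0.20, 0.10, 0.60, 0.05)  # observed lag-1 autocorrelations (hypothetical)
n <- c(8, 10, 12, 9, 15)               # series lengths
v <- 1 / n                             # rough large-sample sampling variances
w <- 1 / v                             # precision weights
r_bar <- sum(w * r) / sum(w)           # precision-weighted grand mean
Q    <- sum(w * (r - r_bar)^2)         # heterogeneity statistic
tau2 <- max(0, (Q - (length(r) - 1)) /
              (sum(w) - sum(w^2) / sum(w)))  # method-of-moments between-series variance
B    <- v / (v + tau2)                 # shrinkage weight: larger for shorter series
r_eb <- B * r_bar + (1 - B) * r        # empirical Bayes (shrunken) estimates
cbind(observed = r, shrunken = round(r_eb, 3))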


Subject(s)
Bayes Theorem , Models, Statistical , Data Interpretation, Statistical , Humans , Regression Analysis , Research Design , Sample Size , Selection Bias
18.
Res Synth Methods; 4(4): 324-41, 2013 Dec.
Article in English | MEDLINE | ID: mdl-26053946

ABSTRACT

Single-case designs are a class of research methods for evaluating treatment effects by measuring outcomes repeatedly over time while systematically introducing different conditions (e.g., treatment and control) to the same individual. The designs are used across fields such as behavior analysis, clinical psychology, special education, and medicine. Emerging standards for single-case designs have focused attention on methods for summarizing and meta-analyzing findings and on the need for effect size indices that are comparable to those used in between-subjects designs. In previous work, we discussed how to define and estimate an effect size that is directly comparable to the standardized mean difference often used in between-subjects research, based on data from a particular type of single-case design, the treatment reversal or (AB)^k design. This paper extends the effect size measure to another type of single-case study, the multiple baseline design. We propose estimation methods for the effect size and its variance, study the estimators using simulation, and demonstrate the approach in two applications.
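
Setting aside the article's small-sample bias and autocorrelation corrections, the general form of such a design-comparable effect size can be sketched from multilevel variance components; everything below is a hypothetical illustration, not the proposed estimator.

# Hedged sketch: an uncorrected design-comparable d from a multiple baseline
# dataset, standardizing by the sum of between-case and within-case variance.
library(nlme)
set.seed(4)
mb <- data.frame(case = factor(rep(1:3, each = 20)),
                 time = rep(1:20, times = 3),
                 trt  = unlist(lapply(c(8, 11, 14),  # staggered intervention starts
                                      function(s) as.integer(1:20 >= s))))
mb$y <- 2 * mb$trt + rep(rnorm(3, 0, 0.7), each = 20) + rnorm(60)
fit    <- lme(y ~ trt, random = ~ 1 | case, data = mb)
b      <- fixef(fit)["trt"]                                   # treatment effect
tau2   <- as.numeric(VarCorr(fit)["(Intercept)", "Variance"]) # between-case variance
sigma2 <- fit$sigma^2                                         # within-case variance
unname(b / sqrt(tau2 + sigma2))  # uncorrected analogue of the proposed d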


Subject(s)
Research Design/statistics & numerical data , Research Design/standards , Biostatistics , Controlled Clinical Trials as Topic/standards , Controlled Clinical Trials as Topic/statistics & numerical data , Data Interpretation, Statistical , Humans , Linear Models , Meta-Analysis as Topic , Models, Statistical
19.
Psychol Methods; 17(2): 244-54, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22563844

ABSTRACT

Although randomized studies have high internal validity, the generalizability of the estimated causal effect from randomized clinical trials to real-world clinical or educational practice may be limited. We consider the implications of randomized assignment to treatment, as compared with choice of preferred treatment as it occurs in real-world conditions. Compliance, engagement, or motivation may be better with a preferred treatment, and this can complicate the generalizability of results from randomized trials. The doubly randomized preference trial (DRPT) is a hybrid randomized and nonrandomized design that allows estimation of the causal effect of randomization versus treatment preference. In the DRPT, individuals are first randomized to either randomized assignment or choice assignment. Those in the randomized assignment group are then randomized to treatment or control, and those in the choice group receive their preference of treatment versus control. Using the potential outcomes framework, we apply the algebra of conditional independence to show how the DRPT can be used to derive an unbiased estimate of the causal effect of randomization versus preference for each of the treatment and comparison conditions. We also show how these results can be implemented using full matching on the propensity score. The methodology is illustrated with a DRPT of introductory psychology students who were randomized to randomized assignment or preference of mathematics versus vocabulary training. We found a small to moderate benefit of preference versus randomization with respect to the mathematics outcome for those who received mathematics training.
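
A sketch of full matching on the propensity score with the MatchIt package is given below; the covariates, data, and model are hypothetical stand-ins, not the study's actual implementation.

# Hedged sketch: full matching on the propensity score for the choice arm of a
# DRPT-like design (hypothetical data; method = "full" also requires optmatch).
library(MatchIt)
set.seed(2)
d <- data.frame(chose_math = rbinom(200, 1, 0.5),  # self-selected condition
                pretest    = rnorm(200),           # hypothetical covariates
                math_anx   = rnorm(200))
m <- matchit(chose_math ~ pretest + math_anx, data = d,
             method = "full", distance = "glm")    # PS from logistic regression
summary(m)           # covariate balance before and after matching
md <- match.data(m)  # matched data with weights for the outcome analysis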


Subject(s)
Models, Statistical , Patient Preference , Randomized Controlled Trials as Topic/methods , Randomized Controlled Trials as Topic/statistics & numerical data , Research Design/statistics & numerical data , Causality , Data Interpretation, Statistical , Female , Humans , Male , Mathematics/education , Patient Compliance/psychology , Propensity Score , Random Allocation , Research Subjects/psychology , Selection Bias , Students , Treatment Outcome , Vocabulary
20.
J Marital Fam Ther; 38(1): 281-304, 2012 Jan.
Article in English | MEDLINE | ID: mdl-22283391

ABSTRACT

This meta-analysis summarizes results from k = 24 studies comparing Brief Strategic Family Therapy, Functional Family Therapy, Multidimensional Family Therapy, or Multisystemic Therapy to treatment-as-usual, an alternative therapy, or a control group in the treatment of adolescent substance abuse and delinquency. Additionally, the authors reviewed and applied three advanced meta-analysis methods: influence analysis, multivariate meta-analysis, and publication bias analysis. The results suggested that, as a group, the four family therapies had statistically significant but modest effects compared to treatment-as-usual (d = 0.21; k = 11) and compared to alternative therapies (d = 0.26; k = 11). The effect of family therapy compared to control was larger (d = 0.70; k = 4) but not statistically significant, probably because of low power. There was insufficient evidence to determine whether the various models differed in their effectiveness relative to each other. Influence analyses suggested that three studies had a large effect on aggregate effect sizes and heterogeneity statistics. Moderator and multivariate analyses were largely underpowered but will be useful as this literature grows.


Subject(s)
Adolescent Behavior , Family Therapy/methods , Juvenile Delinquency/rehabilitation , Substance-Related Disorders/therapy , Adolescent , Adolescent Health Services/organization & administration , Cognitive Behavioral Therapy/methods , Combined Modality Therapy/methods , Comorbidity , Evidence-Based Medicine , Humans , Juvenile Delinquency/statistics & numerical data , Psychotherapy, Group/methods , Randomized Controlled Trials as Topic , Substance-Related Disorders/epidemiology