Results 1 - 20 of 41
1.
Sch Psychol ; 2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38602822

ABSTRACT

To make individuals' responses to intervention over time transparent in systematic reviews of single-case experimental designs, we developed a method of estimating and graphing fine-grained effect sizes. Fine-grained effect sizes are both case- and time-specific and thus provide more nuanced information than effect size estimates that average effects across time, across cases, or both. We demonstrate the method for estimating fine-grained effect sizes under three different baseline stability assumptions: outcome stability, level stability, and trend stability. We then use the method to graph individual effect trajectories from three single-case experimental design studies that examined the impact of self-management interventions on students identified with autism. We conclude by discussing limitations associated with estimating and graphing fine-grained effect sizes and directions for further development. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
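
Editor's note: the sketch below is a minimal Python illustration of one way to compute case- and time-specific effect sizes under a level-stability assumption (project the baseline mean forward and compare each treatment-phase observation against it, scaled by the baseline SD). The function name, scaling choice, and data are illustrative assumptions, not the authors' exact estimator.

```python
import numpy as np

def fine_grained_effects(baseline, treatment):
    """Case- and time-specific effect estimates for one case (illustrative).

    Under a level-stability assumption, the baseline mean is projected
    forward and each treatment-phase observation is compared to it.
    """
    baseline = np.asarray(baseline, dtype=float)
    treatment = np.asarray(treatment, dtype=float)
    projected = baseline.mean()              # projected level absent intervention
    scale = baseline.std(ddof=1)             # baseline SD as a scaling unit
    return (treatment - projected) / scale   # one effect size per time point

# Example: one case from a hypothetical multiple-baseline study
baseline_phase = [2, 3, 2, 4, 3]
treatment_phase = [5, 6, 8, 7, 9, 9]
print(np.round(fine_grained_effects(baseline_phase, treatment_phase), 2))
```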

2.
Multivariate Behav Res ; 57(2-3): 298-317, 2022.
Article in English | MEDLINE | ID: mdl-32996335

ABSTRACT

To conduct a multilevel meta-analysis of multiple single-case experimental design (SCED) studies, the individual participant data (IPD) can be analyzed in one or two stages. In the one-stage approach, a multilevel model is estimated based on the raw data. In the two-stage approach, an effect size is calculated for each participant, and these effect sizes and their sampling variances are subsequently combined to estimate a meta-analytic multilevel model. The multilevel model in the two-stage approach has fewer parameters to estimate, in exchange for reducing the raw data to effect sizes. In this paper we explore how the one-stage and two-stage IPD approaches can be applied in the context of meta-analysis of single-case designs. Both approaches are compared for several single-case designs of increasing complexity. Through a simulation study we show that the two-stage approach obtains better convergence rates for more complex models, but that model estimation is not necessarily faster. The point estimates of the fixed effects are unbiased for both approaches across all models, thus confirming results from methodological research on IPD meta-analysis of group-comparison designs. In light of these results, we discuss the implementation of both methods in R.
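
Editor's note: the sketch below illustrates only the two-stage logic, in simplified form and in Python rather than R (the paper discusses R implementations): stage one computes a per-participant effect estimate and its sampling variance; stage two pools them with inverse-variance weights. A real two-stage analysis would fit a meta-analytic multilevel model rather than this fixed-effect pooling; the data and function names are made up.

```python
import numpy as np

def stage_one(baseline, treatment):
    """Per-participant effect (mean shift) and its sampling variance."""
    b, t = np.asarray(baseline, float), np.asarray(treatment, float)
    effect = t.mean() - b.mean()
    var = b.var(ddof=1) / len(b) + t.var(ddof=1) / len(t)
    return effect, var

def stage_two(effects, variances):
    """Inverse-variance pooled effect and its standard error."""
    w = 1.0 / np.asarray(variances, float)
    pooled = np.sum(w * np.asarray(effects)) / np.sum(w)
    return pooled, np.sqrt(1.0 / np.sum(w))

# Three participants from one hypothetical SCED study
data = [([3, 4, 3, 5], [7, 8, 9, 8]),
        ([2, 2, 3, 2], [5, 6, 5, 7]),
        ([4, 5, 4, 4], [6, 7, 8, 7])]

effects, variances = zip(*(stage_one(b, t) for b, t in data))
print(stage_two(effects, variances))
```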


Subject(s)
Research Design , Computer Simulation , Humans , Multilevel Analysis
3.
Behav Res Methods ; 54(4): 1701-1714, 2022 08.
Article in English | MEDLINE | ID: mdl-34608614

ABSTRACT

Researchers conducting small-scale cluster randomized controlled trials (RCTs) during the pilot testing of an intervention often look for evidence of promise to justify an efficacy trial. We developed a method to test for intervention effects that is adaptive (i.e., responsive to data exploration), requires few assumptions, and is statistically valid (i.e., controls the type I error rate), by adapting masked visual analysis techniques to cluster RCTs. We illustrate the creation of masked graphs and their analysis using data from a pilot study in which 15 high school programs were randomly assigned to either business as usual or an intervention developed to promote psychological and academic well-being in 9th grade students in accelerated coursework. We conclude that in small-scale cluster RCTs there can be benefits of testing for effects without a priori specification of a statistical model or test statistic.
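
Editor's note: the statistical validity of the approach rests on the random assignment of clusters. Below is a generic cluster-level randomization (permutation) test in Python with made-up program-level means; it shows the randomization logic only, not the masked visual analysis procedure described in the paper.

```python
from itertools import combinations
import numpy as np

# Hypothetical program-level outcome means: 8 intervention, 7 control programs
intervention = np.array([6.1, 5.8, 6.4, 5.9, 6.6, 6.0, 6.3, 5.7])
control = np.array([5.2, 5.6, 5.1, 5.9, 5.4, 5.0, 5.5])
pooled = np.concatenate([intervention, control])
observed = intervention.mean() - control.mean()

# Re-divide the 15 clusters into every possible 8-vs-7 split
splits = list(combinations(range(len(pooled)), len(intervention)))
count = 0
for idx in splits:
    mask = np.zeros(len(pooled), dtype=bool)
    mask[list(idx)] = True
    if pooled[mask].mean() - pooled[~mask].mean() >= observed:
        count += 1

print(f"one-sided randomization p-value: {count / len(splits):.4f}")
```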


Subject(s)
Models, Statistical , Research Design , Cluster Analysis , Humans , Randomized Controlled Trials as Topic
4.
Sch Psychol ; 36(4): 255-260, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34292045

ABSTRACT

Results from research indicate that writing is a critical skill linked to several academic outcomes. To promote improvements in writing quantity and quality, interventions might target increasing students' academic engagement during time designated for writing practice. The purpose of this study was to implement an evidence-based classwide behavioral intervention, the Good Behavior Game (GBG), during daily writing practice time in two classrooms. Participants (n = 45) were students in one Grade 1 class and one Grade 2 class at an elementary school in a large suburb in the northeastern U.S. Findings based on visual analysis and multilevel modeling indicate that students, on average, wrote more words (quantity) and more correct writing sequences (quality) when the GBG was played than when it was not. Implications include the need for replication studies to extend these findings and to explore how school psychologists might use behavioral interventions to promote improved engagement and academic output in the classroom. (PsycInfo Database Record (c) 2021 APA, all rights reserved).


Subject(s)
Schools , Students , Behavior Therapy , Humans , Writing
5.
J Sch Psychol ; 86: 169-177, 2021 06.
Article in English | MEDLINE | ID: mdl-34051912

ABSTRACT

Single-case researchers often implement multiple-baseline designs as their preferred methodology for intervention evaluations. Recent writings and empirical investigations have argued in favor of incorporating various forms of randomization into such designs for the purpose of elevating the intervention study's internal validity and scientific credibility. In this article, we consider a variety of randomized multiple-baseline designs and associated randomization statistical tests, along with their potential strengths and limitations. In what amounts to a practical guide, we refer school psychology researchers to these versatile randomization procedures for planning and executing their intervention studies.
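
Editor's note: as a companion to the designs discussed above, here is a generic Python sketch of a start-point randomization test for a multiple-baseline design: each case has a set of permissible intervention start points, the observed statistic is the mean baseline-to-treatment change across cases, and the p-value is the statistic's rank among all permissible start-point combinations. This shows the general logic only, not any one of the specific published procedures; the data are hypothetical.

```python
from itertools import product
import numpy as np

# Hypothetical outcome series and the actually used start points for three cases.
# For the test to be valid, the actual starts must have been randomly drawn
# from the candidate sets at the design stage.
series = [np.array([2, 3, 2, 3, 6, 7, 8, 7, 8, 9]),
          np.array([4, 4, 5, 4, 4, 5, 8, 9, 9, 10]),
          np.array([3, 2, 3, 3, 2, 3, 3, 7, 8, 8])]
actual_starts = [4, 6, 7]                 # index of first treatment observation
candidate_starts = [range(3, 8)] * 3      # permissible start points per case

def statistic(starts):
    """Mean of (treatment mean - baseline mean) across cases for given starts."""
    return np.mean([s[k:].mean() - s[:k].mean() for s, k in zip(series, starts)])

observed = statistic(actual_starts)
all_stats = [statistic(s) for s in product(*candidate_starts)]
p = np.mean([s >= observed for s in all_stats])
print(f"observed = {observed:.2f}, randomization p-value = {p:.3f}")
```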


Subject(s)
Psychology, Educational , Research Design , Humans
6.
Prev Sci ; 22(6): 811-825, 2021 08.
Article in English | MEDLINE | ID: mdl-33544310

ABSTRACT

The paper describes the applicability and acceptability of a selective intervention-Motivation, Assessment, and Planning (MAP)-for high school students that was developed based on the principles of motivational interviewing (MI) and tailored to the unique needs and strengths of students taking accelerated coursework, specifically Advanced Placement (AP) and International Baccalaureate (IB) classes. In addition to detailing the intervention in terms of MI spirit, processes, and relational and technical skills, we report applicability and acceptability data from a second iteration of MAP implementation in eight AP/IB programs in a Southeastern state during spring 2018. We analyzed quantitative and qualitative acceptability data from 121 high school freshmen (97 from AP and 24 from IB courses), as well as the seven MAP coaches who were trained using the Motivational Interview Training and Assessment System (Frey et al. 2017). To gain perspectives from the intended end users of the refined MAP, 12 school counselors and school psychologists who were not trained in MAP evaluated the intervention and provided qualitative and quantitative data on applicability and acceptability. All three stakeholder groups (students, coaches, and school mental health staff) rated and described the intervention as highly acceptable and appropriate for addressing the social-emotional needs of adolescents in AP/IB classes.


Subject(s)
Motivational Interviewing , Adolescent , Curriculum , Humans , Motivation , Schools , Students
7.
Dev Neurorehabil ; 24(2): 130-143, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33393404

ABSTRACT

Objective: There is growing interest in the potential benefits of applying Bayesian estimation to multilevel models of SCED data. Methodological studies have shown that Bayesian estimation resolves convergence issues, can be adequate for small samples, and can improve the accuracy of the variance components. Despite these potential benefits, the lack of accessible software code makes it difficult for applied researchers to implement Bayesian estimation in their studies. The purpose of this article is to illustrate a feasible way to implement Bayesian estimation using OpenBUGS software to analyze a complex SCED model in which within-participant variability and autocorrelation may differ across cases. Method: Using data extracted from a published study, step-by-step guidance in analyzing the data with OpenBUGS is provided, covering (1) model specification, (2) prior distributions, (3) data entry, (4) model estimation, (5) convergence criteria, and (6) posterior inferences and interpretations. Result: Full code for the analysis is provided.
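
Editor's note: the article's code targets OpenBUGS. As a rough Python analogue (assuming PyMC and ArviZ are installed), the sketch below specifies a simple multilevel AB model with case-specific treatment effects and case-specific residual SDs, using made-up data and omitting the autocorrelation structure the article also handles.

```python
import numpy as np
import pymc as pm
import arviz as az

# Hypothetical long-format SCED data: outcome, 0/1 phase, integer case index
y = np.array([2, 3, 2, 6, 7, 8, 4, 4, 5, 9, 9, 10], dtype=float)
phase = np.array([0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1])
case = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])
n_cases = 2

with pm.Model():
    beta0 = pm.Normal("beta0", 0, 10)                 # average baseline level
    beta1 = pm.Normal("beta1", 0, 10)                 # average treatment effect
    tau = pm.HalfNormal("tau", 5)                     # between-case SD of the effect
    u = pm.Normal("u", 0, tau, shape=n_cases)         # case-specific effect deviations
    sigma = pm.HalfNormal("sigma", 5, shape=n_cases)  # case-specific residual SD

    mu = beta0 + (beta1 + u[case]) * phase
    pm.Normal("y_obs", mu, sigma[case], observed=y)

    idata = pm.sample(1000, tune=1000, chains=2, random_seed=1)

print(az.summary(idata, var_names=["beta0", "beta1", "tau"]))
```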


Subject(s)
Single-Case Studies as Topic/methods , Software/standards , Bayes Theorem , Humans , Multilevel Analysis
8.
Educ Psychol Meas ; 81(1): 61-89, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33456062

ABSTRACT

Factor mixture modeling (FMM) has been increasingly used to investigate unobserved population heterogeneity. This study examined the issue of covariate effects with FMM in the context of measurement invariance testing. Specifically, the impact of excluding and misspecifying covariate effects on measurement invariance testing and class enumeration was investigated via Monte Carlo simulations. Data were generated based on FMM models with (1) a zero covariate effect, (2) a covariate effect on the latent class variable, and (3) covariate effects on both the latent class variable and the factor. For each population model, different analysis models that excluded or misspecified covariate effects were fitted. Results highlighted the importance of including proper covariates in measurement invariance testing and evidenced the utility of a model comparison approach in searching for the correct specification of covariate effects and the level of measurement invariance. This approach was demonstrated using an empirical data set. Implications for methodological and applied research are discussed.

9.
Behav Res Methods ; 52(6): 2460-2479, 2020 12.
Article in English | MEDLINE | ID: mdl-32441032

ABSTRACT

In the context of single-case experimental designs, replication is crucial. On the one hand, the replication of the basic effect within a study is necessary for demonstrating experimental control. On the other hand, replication across studies is required for establishing the generality of the intervention effect. Moreover, the "replicability crisis" presents a more general context further emphasizing the need for assessing consistency in replications. In the current text, we focus on replication of effects within a study, and we specifically discuss the consistency of effects. Our proposal for assessing the consistency of effects refers to one of the promising data analytical techniques, multilevel models, also known as hierarchical linear models or mixed effects models. One option is to check, for each case in a multiple-baseline design, whether the confidence interval for the individual treatment effect excludes zero. This is relevant for assessing whether the effect is replicated as being non-null. However, we consider that it is more relevant and informative to assess, for each case, whether the confidence interval for the random effects includes zero (i.e., whether the fixed effect estimate is a plausible value for each individual effect). This is relevant for assessing whether the effect is consistent in size, with the additional requirement that the fixed effect itself is different from zero. The proposal for assessing consistency is illustrated with real data and is implemented in free user-friendly software.
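
Editor's note: a simplified, per-case version of the consistency logic can be written directly (the paper embeds it in a multilevel model and provides user-friendly software): compute each case's effect and interval, check whether the interval excludes zero (non-null replication), and check whether the overall effect is a plausible value for that case (consistency in size). The data, the plain t-based intervals, and the crude averaging below are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def case_effect(baseline, treatment, level=0.95):
    """Mean shift for one case with a t-based confidence interval."""
    b, t = np.asarray(baseline, float), np.asarray(treatment, float)
    effect = t.mean() - b.mean()
    se = np.sqrt(b.var(ddof=1) / len(b) + t.var(ddof=1) / len(t))
    df = len(b) + len(t) - 2
    half = stats.t.ppf(0.5 + level / 2, df) * se
    return effect, (effect - half, effect + half)

cases = {"case 1": ([2, 3, 2, 3], [6, 7, 8, 7]),
         "case 2": ([4, 4, 5, 4], [8, 9, 9, 10]),
         "case 3": ([3, 2, 3, 3], [4, 5, 4, 5])}

effects = {name: case_effect(b, t) for name, (b, t) in cases.items()}
pooled = np.mean([e for e, _ in effects.values()])  # crude stand-in for the fixed effect

for name, (effect, (lo, hi)) in effects.items():
    non_null = lo > 0 or hi < 0                      # effect replicated as non-null?
    consistent = lo <= pooled <= hi                  # overall effect plausible for this case?
    print(f"{name}: effect={effect:.2f}, CI=({lo:.2f}, {hi:.2f}), "
          f"non-null={non_null}, overall effect plausible={consistent}")
```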


Subject(s)
Research Design , Software , Humans , Multilevel Analysis
10.
Behav Res Methods ; 52(5): 2008-2019, 2020 10.
Article in English | MEDLINE | ID: mdl-32144730

ABSTRACT

The focus of the current study is on handling the dependence among multiple regression coefficients representing the treatment effects when meta-analyzing data from single-case experimental studies. We compare the results of applying three different multilevel meta-analytic models (i.e., a univariate multilevel model avoiding the dependence, a multivariate multilevel model ignoring covariance at higher levels, and a multivariate multilevel model modeling the existing covariance) to deal with the dependent effect sizes. The results indicate better estimates of the overall treatment effects and variance components when a multivariate multilevel model is applied, regardless of whether the existing covariance is modeled or ignored. These findings confirm the robustness of multilevel modeling to misspecifying the existing covariance at the case and study level in terms of estimating the overall treatment effects and variance components. The results also show that the overall treatment effect estimates are unbiased regardless of the underlying model, but the between-case and between-study variance components are biased in certain conditions. In addition, the between-study variance estimates are particularly biased when the number of studies is smaller than 40 (i.e., 10 or 20) and the true value of the between-case variance is relatively large (i.e., 8). The observed bias is larger for the between-case variance estimates than for the between-study variance estimates when the true between-case variance is relatively small (i.e., 0.5).
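
Editor's note: to make the modeling choices concrete, the sketch below pools two dependent coefficients per case (say, a change in level and a change in slope) with generalized least squares, either using the full block-diagonal sampling covariance or zeroing out the within-case covariance. It is a fixed-effect simplification of the multivariate multilevel models compared in the paper, with made-up numbers.

```python
import numpy as np
from scipy.linalg import block_diag

# Per-case estimates of (level change, slope change) and their 2x2 sampling covariances
estimates = [np.array([3.0, 0.4]), np.array([2.4, 0.6]), np.array([3.5, 0.3])]
covs = [np.array([[0.30, 0.05], [0.05, 0.02]]),
        np.array([[0.25, 0.04], [0.04, 0.03]]),
        np.array([[0.40, 0.06], [0.06, 0.02]])]

def gls_pool(estimates, covs):
    """GLS estimate of the two overall effects given per-case sampling covariances."""
    y = np.concatenate(estimates)
    X = np.vstack([np.eye(2)] * len(estimates))   # each case contributes both effects
    V_inv = np.linalg.inv(block_diag(*covs))
    beta_cov = np.linalg.inv(X.T @ V_inv @ X)
    return beta_cov @ X.T @ V_inv @ y

modeled = gls_pool(estimates, covs)                                      # covariance modeled
ignored = gls_pool(estimates, [np.diag(np.diag(c)) for c in covs])       # covariance set to zero
print("overall (level, slope), covariance modeled:", np.round(modeled, 2))
print("overall (level, slope), covariance ignored:", np.round(ignored, 2))
```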


Subject(s)
Multilevel Analysis , Multivariate Analysis , Bias
11.
Behav Res Methods ; 52(1): 177-192, 2020 02.
Article in English | MEDLINE | ID: mdl-30972557

ABSTRACT

The MultiSCED web application has been developed to assist applied researchers in behavioral sciences to apply multilevel modeling to quantitatively summarize single-case experimental design (SCED) studies through a user-friendly point-and-click interface embedded within R. In this paper, we offer a brief introduction to the application, explaining how to define and estimate the relevant multilevel models and how to interpret the results numerically and graphically. The use of the application is illustrated through a re-analysis of an existing meta-analytic dataset. By guiding applied researchers through MultiSCED, we aim to make use of the multilevel modeling technique for combining SCED data across cases and across studies more comprehensible and accessible.


Subject(s)
Multilevel Analysis , Behavioral Sciences , Research Design
12.
J Exp Anal Behav ; 112(3): 334-348, 2019 11.
Article in English | MEDLINE | ID: mdl-31709560

ABSTRACT

Following up on articles recently published in this journal, the present contribution tells (some of) "the rest of the story" about the value of randomization in single-case intervention research investigations. Invoking principles of internal, statistical-conclusion, and external validity, we begin by emphasizing the critical distinction between design randomization and analysis randomization, along with the necessary correspondence between the two. Four different types of single-case design-and-analysis randomization are then discussed. The persistent negative influence of serially dependent single-case outcome observations is highlighted, accompanied by examples of inappropriate applications of parametric and nonparametric tests that have appeared in the literature. We conclude by presenting valid applications of single-case randomization procedures in various single-case intervention contexts, with specific reference to a freely available Excel-based software package that can be accessed to incorporate the present randomization schemes into a wide variety of single-case intervention designs and analyses.


Subject(s)
Behavioral Research/methods , Data Interpretation, Statistical , Random Allocation , Single-Case Studies as Topic/methods , Humans , Randomized Controlled Trials as Topic/methods , Statistics as Topic , Statistics, Nonparametric
13.
Multivariate Behav Res ; 54(5): 666-689, 2019.
Article in English | MEDLINE | ID: mdl-30857444

ABSTRACT

In single-case research, the multiple-baseline (MB) design provides the opportunity to estimate the treatment effect based not only on within-series comparisons of treatment-phase to baseline-phase observations, but also on time-specific between-series comparisons of observations from cases that have started treatment to those that are still in baseline. For analyzing MB studies, two types of linear mixed modeling methods have been proposed: the within- and between-series models. In principle, these models were developed under normality assumptions; however, normality may not always hold in practical settings. Therefore, this study aimed to investigate the robustness of the within- and between-series models when data are non-normal. A Monte Carlo study was conducted with four statistical approaches, defined by crossing two analytic decisions: (a) whether to use a within- or between-series estimate of effect and (b) whether to use restricted maximum likelihood or Markov chain Monte Carlo estimation. The results showed that the treatment effect estimates of the four approaches had minimal bias, that within-series estimates were more precise than between-series estimates, and that confidence interval coverage was frequently acceptable but varied across conditions and methods of estimation. Applications and implications are discussed based on the findings.
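
Editor's note: the within- versus between-series distinction can be illustrated with plain arithmetic on a toy three-case multiple-baseline dataset (the paper's actual analyses use linear mixed models): a within-series estimate contrasts a case's own treatment and baseline observations, while a between-series estimate contrasts, at a given time point, the cases already in treatment with those still in baseline.

```python
import numpy as np

# Three cases, ten time points; treatment starts at index 3, 5, and 7
series = np.array([[2, 3, 2, 6, 7, 8, 7, 8, 9, 8],
                   [4, 4, 5, 4, 4, 8, 9, 9, 10, 9],
                   [3, 2, 3, 3, 2, 3, 3, 7, 8, 8]], dtype=float)
starts = np.array([3, 5, 7])
n_cases, n_times = series.shape

# Within-series: each case's treatment mean minus its own baseline mean, averaged
within = np.array([series[i, starts[i]:].mean() - series[i, :starts[i]].mean()
                   for i in range(n_cases)])
print("within-series estimate :", np.round(within.mean(), 2))

# Between-series: at each time point where both groups exist, compare cases
# already in treatment with cases still in baseline, then average over time
diffs = []
for t in range(n_times):
    in_tx = series[starts <= t, t]
    in_bl = series[starts > t, t]
    if len(in_tx) and len(in_bl):
        diffs.append(in_tx.mean() - in_bl.mean())
print("between-series estimate:", np.round(np.mean(diffs), 2))
```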


Subject(s)
Bayes Theorem , Behavioral Research/methods , Likelihood Functions , Computer Simulation , Humans , Linear Models , Markov Chains , Monte Carlo Method
14.
Behav Res Methods ; 51(6): 2477-2497, 2019 12.
Article in English | MEDLINE | ID: mdl-30105444

ABSTRACT

When (meta-)analyzing single-case experimental design (SCED) studies by means of hierarchical or multilevel modeling, applied researchers almost exclusively rely on the linear mixed model (LMM). This type of model assumes that the residuals are normally distributed. However, very often SCED studies consider outcomes of a discrete rather than a continuous nature, like counts, percentages or rates. In those cases the normality assumption does not hold. The LMM can be extended into a generalized linear mixed model (GLMM), which can account for the discrete nature of SCED count data. In this simulation study, we look at the effects of misspecifying an LMM for SCED count data simulated according to a GLMM. We compare the performance of a misspecified LMM and of a GLMM in terms of goodness of fit, fixed effect parameter recovery, type I error rate, and power. Because the LMM and the GLMM do not estimate identical fixed effects, we provide a transformation to compare the fixed effect parameter recovery. The results show that, compared to the GLMM, the LMM has worse performance in terms of goodness of fit and power. Performance in terms of fixed effect parameter recovery is equally good for both models, and in terms of type I error rate the LMM performs better than the GLMM. Finally, we provide some guidelines for applied researchers about aspects to consider when using an LMM for analyzing SCED count data.
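
Editor's note: as a small numeric illustration of why the two sets of fixed effects are not directly comparable (this is a generic back-transformation, not necessarily the exact one used in the paper): a Poisson GLMM with a log link estimates a multiplicative treatment effect, so the change in expected count implied by its fixed effects is exp(b0 + b1) - exp(b0), which is on the same additive scale as an LMM's treatment effect. The coefficient values below are made up.

```python
import numpy as np

# Hypothetical fixed effects from a Poisson GLMM with a log link
b0 = 1.10   # log of the expected baseline count
b1 = 0.65   # log rate ratio for the treatment phase

rate_ratio = np.exp(b1)                      # multiplicative effect on counts
count_change = np.exp(b0 + b1) - np.exp(b0)  # additive change in expected counts

print(f"expected baseline count : {np.exp(b0):.2f}")
print(f"rate ratio (GLMM scale) : {rate_ratio:.2f}")
print(f"implied count change    : {count_change:.2f}  # comparable to an LMM effect")
```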


Subject(s)
Behavioral Research/statistics & numerical data , Computer Simulation , Linear Models , Research Design/statistics & numerical data , Humans , Longitudinal Studies
15.
BMJ Open ; 8(11): e024057, 2018 Nov 28.
Article in English | MEDLINE | ID: mdl-30498047

ABSTRACT

INTRODUCTION: A multitiered system of supports (MTSS) represents a widely adopted public health approach to education in the USA. Researchers agree that professional learning is critical for educators to implement the critical components of MTSS; however, professional learning approaches vary in their designs and targeted outcomes. While researchers increasingly focus their inquiries on professional learning for MTSS, no systematic research review exists. OBJECTIVES: The primary objectives for this mixed-methods review are to (1) understand how professional learning focused on MTSS has been operationalised, (2) determine the impact of professional learning on educator (eg, knowledge) and implementation (eg, data-based decision-making processes) outcomes and (3) understand the contextual variables that influence professional learning in the USA. We aim to determine which elements of professional learning improve educators' capacity to implement MTSS. METHODS AND ANALYSIS: We will include studies that use quantitative and qualitative methods. PsycInfo, PubMed, CINAHL and ERIC will be the primary research databases used to search for studies published from January 1997 to May 2018. We will also search the US Institute of Education Sciences and Office of Special Education Programs websites, ProQuest, Google Scholar, Science Watch and MSN. Finally, we will search the proceedings of relevant conferences, examine the reference lists of studies that pass full screening and contact authors for additional work. Data extraction will include participant demographics, intervention details, study design, outcomes, analyses and key findings. We will conduct a quality assessment and analyse the data using effect size and thematic analyses. ETHICS AND DISSEMINATION: Institutional review board or ethics approval is not needed for this review of already published works. We will disseminate the findings through presentations at state, national and international conferences; presentations to stakeholders and agencies; publication in peer-reviewed journals; and posts to organisational and agency websites.


Subject(s)
Education, Professional , Learning , School Teachers , Schools , Teaching , Decision Making , Education, Nonprofessional , Humans , Public Health , Research Design , Students , United States , Systematic Reviews as Topic
16.
Educ Psychol Meas ; 78(2): 253-271, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29795955

ABSTRACT

Wording effects associated with positively and negatively worded items have been found in many scales. Such effects may threaten construct validity and introduce systematic bias in the interpretation of results. A variety of models have been applied to address wording effects, such as the correlated uniqueness model and the correlated traits and correlated methods model. This study presents the multilevel bifactor approach to handling wording effects of mixed-format scales used in a multilevel context. The Students Confident in Mathematics scale is used to illustrate this approach. Results from comparing a series of models showed that positive and negative wording effects were present at both the within and the between levels. When the wording effects were ignored, the within-level predictive validity of the Students Confident in Mathematics scale was close to that under the multilevel bifactor model. However, at the between level, a lower validity coefficient was observed when ignoring the wording effects. Implications for applied researchers are discussed.

17.
Res Dev Disabil ; 79: 77-87, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29289405

ABSTRACT

BACKGROUND: When developmental disabilities researchers use multiple-baseline designs they are encouraged to delay the start of an intervention until the baseline stabilizes or until preceding cases have responded to intervention. Using ongoing visual analyses to guide the timing of the start of the intervention can help to resolve potential ambiguities in the graphical display; however, these forms of response-guided experimentation have been criticized as a potential source of bias in treatment effect estimation and inference. AIMS AND METHODS: Monte Carlo simulations were used to examine the bias and precision of average treatment effect estimates obtained from multilevel models of four-case multiple-baseline studies with series lengths that varied from 19 to 49 observations per case. We varied the size of the average treatment effect, the factors used to guide intervention decisions (baseline stability, response to intervention, both, or neither), and whether the ongoing analysis was masked or not. RESULTS: None of the methods of responding to the data led to appreciable bias in the treatment effect estimates. Furthermore, as timing-of-intervention decisions became responsive to more factors, baselines became longer and treatment effect estimates became more precise. CONCLUSIONS: Although the study was conducted under limited conditions, the response-guided practices did not lead to substantial bias. By extending baseline phases they reduced estimation error and thus improved the treatment effect estimates obtained from multilevel models.


Subject(s)
Data Accuracy , Outcome Assessment, Health Care , Developmental Disabilities/therapy , Humans , Monte Carlo Method , Outcome Assessment, Health Care/methods , Outcome Assessment, Health Care/standards , Patient Selection , Research Design , Time-to-Treatment/standards , Treatment Outcome
18.
Res Dev Disabil ; 79: 97-115, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29289406

ABSTRACT

BACKGROUND: Methodological rigor is a fundamental factor in the validity and credibility of the results of a meta-analysis. AIM: Following increasing interest in single-case experimental design (SCED) meta-analyses, the current study investigates the methodological quality of SCED meta-analyses. METHODS AND PROCEDURES: We assessed the methodological quality of 178 SCED meta-analyses published between 1985 and 2015 using the modified Revised Assessment of Multiple Systematic Reviews (R-AMSTAR) checklist. OUTCOMES AND RESULTS: The main finding of the current review is that the methodological quality of SCED meta-analyses has increased over time but is still low according to the R-AMSTAR checklist. A remarkable percentage of the studies (93.80% of the included SCED meta-analyses) did not even reach the midpoint score (22, on a scale of 0-44). The mean and median methodological quality scores were 15.57 and 16, respectively. Relatively high scores were observed for "providing the characteristics of the included studies" and "doing comprehensive literature search". The key areas of deficiency were "reporting an assessment of the likelihood of publication bias" and "using the methods appropriately to combine the findings of studies". CONCLUSIONS AND IMPLICATIONS: Although the results of the current review reveal that the methodological quality of SCED meta-analyses has increased over time, more effort is still needed to improve it.


Subject(s)
Meta-Analysis as Topic , Practice Guidelines as Topic/standards , Research Design/standards , Data Accuracy , Humans , Reproducibility of Results , Sample Size
19.
Dev Neurorehabil ; 21(5): 290-311, 2018 Jul.
Article in English | MEDLINE | ID: mdl-27367902

ABSTRACT

In three simulation investigations, we examined the statistical properties of several different randomization-test procedures for analyzing the data from single-case multiple-baseline intervention studies. Two procedures (Wampold-Worsham and Revusky) are associated with single fixed intervention start points and three are associated with randomly determined intervention start points. Of the latter three, one (Koehler-Levin) is an existing procedure that has been previously examined and the other two (modified Revusky and restricted Marascuilo-Busk) are modifications and extensions of existing procedures. All five procedures were found to maintain their Type I error probabilities at acceptable levels. In most of the conditions investigated here, two of the random start-point procedures (Koehler-Levin and restricted Marascuilo-Busk) were more powerful than the others with respect to detecting immediate abrupt intervention effects. For designs in which it is not possible to include the same series lengths for all cases, either the modified Revusky or restricted Marascuilo-Busk procedure is recommended.


Subject(s)
Neuropsychological Tests/standards , Random Allocation , Humans , Sample Size
20.
J Appl Behav Anal ; 50(4): 701-716, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28887866

ABSTRACT

We developed masked visual analysis (MVA) as a structured complement to traditional visual analysis. The purpose of the present investigation was to compare the effects of computer-simulated MVA of a four-case multiple-baseline (MB) design in which the phase lengths are determined by an ongoing visual analysis (i.e., response-guided) versus those in which the phase lengths are established a priori (i.e., fixed criteria). We observed an acceptably low probability (less than .05) of false detection of treatment effects. The probability of correctly detecting a true effect frequently exceeded .80 and was higher when: (a) the masked visual analyst extended phases based on an ongoing visual analysis, (b) the effects were larger, (c) the effects were more immediate and abrupt, and (d) the effects of random and extraneous error factors were simpler. Our findings indicate that MVA is a valuable combined methodological and data-analysis tool for single-case intervention researchers.


Subject(s)
Data Display , Data Interpretation, Statistical , Monte Carlo Method , Computer Simulation , Humans , Research Design