Results 1 - 4 of 4
1.
Behav Res Methods ; 2024 Mar 20.
Article in English | MEDLINE | ID: mdl-38509268

ABSTRACT

Psychologists are increasingly interested in whether treatment effects vary in randomized controlled trials. A number of tests for such heterogeneity have been proposed in the causal inference literature; they differ in the sample statistic they use (the variances of the experimental and control groups, their empirical distribution functions, or specific quantiles) and in whether they rely on distributional assumptions or on a Fisher randomization procedure. In this manuscript, we present the results of a simulation study in which we examine the performance of the different tests while varying the amount of treatment effect heterogeneity, the type of underlying distribution, the sample size, and whether an additional covariate is considered. Altogether, our results suggest that researchers should use a randomization test to optimally control the Type I error rate. Furthermore, all tests studied have low power in small and moderate samples, even when the heterogeneity of the treatment effect is substantial. This suggests that current tests for treatment effect heterogeneity require much larger samples than those collected in current research.
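
The variance-based statistic combined with a Fisher randomization procedure described above can be sketched in a few lines. The following Python snippet is an illustration, not the authors' implementation or simulation code (the data and seed are made up): it permutes the treatment labels to approximate the randomization distribution of the difference in outcome variances between the two groups.

```python
# Minimal sketch of a Fisher randomization test for treatment effect
# heterogeneity, using the difference in outcome variances between the
# treated and control groups as the test statistic (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def variance_diff(y, t):
    """Test statistic: Var(Y | treated) - Var(Y | control)."""
    return y[t == 1].var(ddof=1) - y[t == 0].var(ddof=1)

def randomization_test(y, t, n_perm=5000):
    """Approximate the randomization distribution by permuting treatment labels."""
    observed = variance_diff(y, t)
    perm_stats = np.array([variance_diff(y, rng.permutation(t)) for _ in range(n_perm)])
    # Two-sided p-value: proportion of permuted statistics at least as extreme.
    return observed, np.mean(np.abs(perm_stats) >= np.abs(observed))

# Made-up data: a heterogeneous effect inflates the variance in the treated group.
n = 100
t = rng.integers(0, 2, n)
y = rng.normal(size=n) + t * rng.normal(0.5, 1.0, n)
stat, p = randomization_test(y, t)
print(f"variance difference = {stat:.3f}, randomization p = {p:.3f}")
```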

2.
Article in English | MEDLINE | ID: mdl-37922115

ABSTRACT

Psychotherapy has been proven to be effective on average, though patients respond very differently to treatment. Understanding which characteristics are associated with treatment effect heterogeneity can help to customize therapy to the individual patient. In this tutorial, we describe different meta-learners, which are flexible algorithms that can be used to estimate personalized treatment effects. More specifically, meta-learners decompose treatment effect estimation into multiple prediction tasks, each of which can be solved by any machine learning model. We begin by reviewing the assumptions necessary for interpreting the estimated treatment effects as causal, and then give an overview of key machine learning concepts. Throughout the article, we use an illustrative data example to show how the different meta-learners can be implemented in R. We also point out how current popular practices in psychotherapy research fit into the meta-learning framework. Finally, we show how heterogeneous treatment effects can be analyzed, and point out some challenges in the implementation of meta-learners.
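
As an illustration of the decomposition idea, the snippet below sketches a T-learner, one commonly discussed meta-learner: it fits separate outcome models for treated and control units and takes the difference of their predictions as the estimated conditional average treatment effect (CATE). The tutorial itself works in R; this Python sketch with scikit-learn and simulated data only shows the structure and is not the article's code.

```python
# Minimal T-learner sketch: any regression model can be plugged in as the base learner.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)

def t_learner_cate(X, y, t, base_model=GradientBoostingRegressor):
    """Fit separate outcome models for treated and control units and
    estimate the CATE as the difference of their predictions."""
    model_treated = base_model().fit(X[t == 1], y[t == 1])
    model_control = base_model().fit(X[t == 0], y[t == 0])
    return model_treated.predict(X) - model_control.predict(X)

# Made-up data: the treatment effect depends on the first covariate.
n, p = 500, 3
X = rng.normal(size=(n, p))
t = rng.integers(0, 2, n)
y = X[:, 0] + t * (0.5 + X[:, 0]) + rng.normal(scale=0.5, size=n)
cate_hat = t_learner_cate(X, y, t)
print("mean estimated CATE:", cate_hat.mean().round(2))
```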

3.
Stat Med ; 42(23): 4147-4176, 2023 Oct 15.
Article in English | MEDLINE | ID: mdl-37532119

ABSTRACT

There has been growing interest in using nonparametric machine learning approaches for propensity score estimation in order to foster robustness against misspecification of the propensity score model. However, the vast majority of studies have focused on single-level data settings, and research on nonparametric propensity score estimation in clustered data settings is scarce. In this article, we extend existing research by describing a general algorithm for incorporating random effects into a machine learning model, which we implemented for generalized boosted modeling (GBM). In a simulation study, we investigated the performance of logistic regression, GBM, and Bayesian additive regression trees for inverse probability of treatment weighting (IPW) when the data are clustered, the treatment exposure mechanism is nonlinear, and unmeasured cluster-level confounding is present. For each approach, we compared fixed and random effects propensity score models to single-level models and evaluated their use in both marginal and clustered IPW. We additionally investigated the performance of the standard Super Learner and the balance Super Learner. The results showed that when there was no unmeasured confounding, logistic regression resulted in moderate bias in both marginal and clustered IPW, whereas the nonparametric approaches were unbiased. In the presence of cluster-level confounding, fixed and random effects models greatly reduced bias compared to single-level models in marginal IPW, with fixed effects GBM and fixed effects logistic regression performing best. Finally, clustered IPW was overall preferable to marginal IPW, and the balance Super Learner outperformed the standard Super Learner, though neither worked as well as its best candidate model.


Subject(s)
Multilevel Analysis , Observational Studies as Topic , Propensity Score , Humans , Bayes Theorem , Bias , Computer Simulation , Logistic Models
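
To make the IPW setup concrete, the sketch below estimates propensity scores with gradient boosting on a covariate plus cluster indicator dummies (a simplified "fixed effects" specification) and computes a marginal IPW estimate of the average treatment effect. It does not reproduce the article's random-effects GBM algorithm, clustered IPW variant, or Super Learner comparisons; the data, model settings, and variable names are illustrative assumptions.

```python
# Simplified IPW sketch with a boosted propensity score model that includes
# cluster dummies; illustrative only, not the article's method.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)

# Made-up clustered data: 20 clusters with a cluster-level confounder.
n_clusters, n_per = 20, 50
cluster = np.repeat(np.arange(n_clusters), n_per)
u = rng.normal(size=n_clusters)[cluster]            # cluster-level confounder
x = rng.normal(size=n_clusters * n_per)
t = rng.binomial(1, 1 / (1 + np.exp(-(x + u))))      # treatment depends on x and u
y = 2 * t + x + u + rng.normal(size=len(x))          # true ATE = 2

# "Fixed effects" propensity score model: covariate plus one-hot cluster dummies.
X = pd.get_dummies(pd.DataFrame({"x": x, "cluster": cluster.astype(str)}))
ps = GradientBoostingClassifier().fit(X, t).predict_proba(X)[:, 1]

# Marginal IPW estimate of the average treatment effect.
w = t / ps + (1 - t) / (1 - ps)
ate = np.average(y[t == 1], weights=w[t == 1]) - np.average(y[t == 0], weights=w[t == 0])
print(f"IPW ATE estimate: {ate:.2f}")
```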
4.
Multivariate Behav Res ; 58(5): 911-937, 2023.
Article in English | MEDLINE | ID: mdl-36602080

ABSTRACT

Gradient tree boosting is a powerful machine learning technique that has shown good performance in predicting a variety of outcomes. However, when applied to hierarchical (e.g., longitudinal or clustered) data, the predictive performance of gradient tree boosting may be harmed by ignoring the hierarchical structure, and may be improved by accounting for it. Tree-based methods such as regression trees and random forests have already been extended to hierarchical data settings by combining them with the linear mixed effects model (MEM). In the present article, we add to this literature by proposing two algorithms to estimate a combination of the MEM and gradient tree boosting. We report on two simulation studies that (i) investigate the predictive performance of the two MEM boosting algorithms and (ii) compare them to standard gradient tree boosting, standard random forest, and other existing methods for hierarchical data (MEM, MEM random forests, model-based boosting, Bayesian additive regression trees [BART]). We found substantial improvements in the predictive performance of our MEM boosting algorithms over standard boosting when the random effects were non-negligible. MEM boosting as well as BART showed a predictive performance similar to the correctly specified MEM (i.e., the benchmark model), and overall outperformed the model-based boosting and random forest approaches.


Subject(s)
Algorithms , Machine Learning , Bayes Theorem , Computer Simulation , Linear Models
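
The idea of combining a mixed effects model with gradient tree boosting can be illustrated with a rough alternating scheme: fit a boosting model on the outcome minus the current random intercepts, then re-estimate the intercepts from the residuals with simple shrinkage. The Python sketch below is a deliberately simplified assumption, not the authors' MEM boosting algorithms, and uses scikit-learn's GradientBoostingRegressor with made-up data.

```python
# Rough sketch of alternating between gradient tree boosting and
# random-intercept estimation for clustered data (illustrative only).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def mem_boost(X, y, cluster, n_iter=5, shrink=10.0):
    """Alternate between boosting on y minus the random intercepts and
    re-estimating the intercepts from the boosting residuals."""
    b = np.zeros(cluster.max() + 1)                  # random intercepts per cluster
    model = None
    for _ in range(n_iter):
        model = GradientBoostingRegressor().fit(X, y - b[cluster])
        resid = y - model.predict(X)
        for c in np.unique(cluster):                 # crude shrunken cluster means
            r = resid[cluster == c]
            b[c] = r.sum() / (len(r) + shrink)
    return model, b

# Made-up clustered data with non-negligible random intercepts.
rng = np.random.default_rng(3)
n_clusters, n_per = 30, 20
cluster = np.repeat(np.arange(n_clusters), n_per)
X = rng.normal(size=(n_clusters * n_per, 2))
y = np.sin(X[:, 0]) + rng.normal(size=n_clusters)[cluster] + rng.normal(scale=0.3, size=len(cluster))
model, b = mem_boost(X, y, cluster)
pred = model.predict(X) + b[cluster]
print("training RMSE:", np.sqrt(np.mean((y - pred) ** 2)).round(3))
```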