Results 1 - 20 of 26
1.
J Exp Psychol Hum Percept Perform ; 49(1): 87-107, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36355703

ABSTRACT

Top-down information is known to play an important role in the control of visual attention. Often, evidence for top-down attention control is also interpreted as evidence for voluntary attention control. However, this latter theoretical interpretation is not warranted, because volition is typically defined in terms of a conscious feeling that prior intentions led to a subsequent action, and this aspect of performance has not been assessed in previous studies. Accordingly, the present study used the construct of "agency" within the context of the spatial cuing paradigm to examine the relation between top-down and voluntary attention control. Growth-curve modeling of two experiments consistently showed that standard manipulations of top-down information in the spatial cuing paradigm do not have the same effect on all participants. In particular, a slight majority of individuals (~60%) exhibited the expected pattern, reporting more agency when they performed visual search with the aid of an informative (arrow or onset) cue than with an uninformative cue or without any cue at all. More importantly, a substantial minority (~40%) exhibited the opposite pattern, reporting more agency when they performed visual search with an uninformative cue or without any cue at all. We conclude that the relation between top-down and voluntary attention control is not straightforward and must be studied using methods that are sensitive to individual differences.


Subject(s)
Cues , Volition , Humans , Emotions , Consciousness , Individuality , Reaction Time
2.
Multivariate Behav Res ; 55(2): 277-299, 2020.
Article in English | MEDLINE | ID: mdl-31264449

ABSTRACT

Despite the wide application of longitudinal studies, they are often plagued by missing data and attrition. The majority of methodological approaches focus on participant retention or modern missing data analysis procedures. This paper, however, takes a new approach by examining how researchers may supplement the sample with additional participants. First, refreshment samples use the same selection criteria as the initial study. Second, replacement samples identify auxiliary variables that may help explain patterns of missingness and select new participants based on those characteristics. A simulation study compares these two strategies for a linear growth model with five measurement occasions. Overall, the results suggest that refreshment samples lead to less relative bias, greater relative efficiency, and more acceptable coverage rates than replacement samples or not supplementing the missing participants in any way. Refreshment samples also have high statistical power. The comparative strengths of the refreshment approach are further illustrated through a real data example. These findings have implications for assessing change over time when researching at-risk samples with high levels of permanent attrition.
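To make the two supplementation strategies concrete, here is a minimal simulation sketch in the spirit of the study's design (a random-intercept, random-slope linear growth model with five occasions). All parameter values, the 30% attrition rate, and the refreshment timing are illustrative assumptions, not the article's settings.

```python
# Minimal sketch of a refreshment-sample simulation for a linear growth
# model with five measurement occasions; parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, occasions = 200, 5
times = np.arange(occasions)

def simulate_growth(n):
    # Random intercepts and slopes around fixed effects (10, 1).
    intercepts = 10 + rng.normal(0, 2, n)
    slopes = 1 + rng.normal(0, 0.5, n)
    return (intercepts[:, None] + slopes[:, None] * times
            + rng.normal(0, 1, (n, occasions)))

y = simulate_growth(n)

# Permanent attrition: 30% of participants miss the last two occasions.
dropouts = rng.random(n) < 0.30
y[dropouts, 3:] = np.nan

# Refreshment: new participants drawn from the SAME population (same
# selection criteria), entering at the fourth occasion.
refresh = simulate_growth(dropouts.sum())
refresh[:, :3] = np.nan
supplemented = np.vstack([y, refresh])
print(supplemented.shape)  # (n + number refreshed, 5)
```

A replacement-sample version would instead draw the new participants conditional on auxiliary variables thought to explain the missingness, rather than from the original selection criteria.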


Subject(s)
Behavioral Research/methods , Data Interpretation, Statistical , Longitudinal Studies , Research Design , Computer Simulation , Humans , Monte Carlo Method , Research Subjects
3.
Psychol Methods ; 25(1): 71-87, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31192625

ABSTRACT

In randomized pretest-posttest experimental designs, experimental treatments are commonly compared through an additive difference, and the assumed additive treatment effects correspond to additive differences between pre- and posttreatment measures. However, treatments may instead differ by ratios, making the treatment effects multiplicative. Logarithmic-transformed ANOVA (LANOVA) and logarithmic-transformed ANCOVA (LANCOVA), reparameterizations of log-log regression models, are proposed to test multiplicative effects in randomized pretest-posttest experimental designs. In addition, a new effect size measure is proposed for treatment effects that are multiplicative rather than additive. A model selection strategy, sample size planning, and power calculations for the proposed methods are also provided. Simulation studies compared the Type I error rates and power of the proposed methods to those of symmetrized change analysis, ANOVA, ANCOVA, gain score analysis, and ANCOVA with a logarithmic-transformed dependent variable, under both additive and multiplicative population effects. An empirical data analysis follows to show the interpretational difference between multiplicative and additive effects. Although logarithmic transformations are most often recommended to address skewness, our article shows how a log transformation can be used to reconceptualize the fundamental nature of the treatment effect. Finally, recommendations, limitations, and future directions are discussed.
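As a hedged sketch of the core idea (simulated data, not the article's LANCOVA implementation): regressing the log of the posttest on treatment and the log of the pretest makes the treatment effect a ratio on the original scale.

```python
# LANCOVA-style analysis as a log-log regression: the treatment
# coefficient, exponentiated, is a multiplicative effect (a ratio).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 100
treat = rng.integers(0, 2, n)            # randomized assignment
pre = rng.lognormal(mean=2.0, sigma=0.4, size=n)
# Simulated truth: posttest = pretest^0.8 * 1.25^treat * noise.
post = pre**0.8 * 1.25**treat * rng.lognormal(0, 0.2, n)

X = sm.add_constant(np.column_stack([treat, np.log(pre)]))
fit = sm.OLS(np.log(post), X).fit()
ratio = np.exp(fit.params[1])            # estimated multiplicative effect
print(f"estimated treatment ratio: {ratio:.2f}")  # typically near 1.25
```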


Subject(s)
Data Interpretation, Statistical , Psychology/methods , Randomized Controlled Trials as Topic/methods , Research Design , Humans , Psychology/standards , Randomized Controlled Trials as Topic/standards , Research Design/standards
4.
Multivariate Behav Res ; 54(3): 382-403, 2019.
Article in English | MEDLINE | ID: mdl-30663381

ABSTRACT

Person-mean centering has been recommended for disaggregating between-person and within-person effects when modeling time-varying predictors, and multilevel modeling textbooks have recommended global standardization for standardizing fixed effects. This study evaluates whether and when person-mean centering followed by global standardization can accurately estimate fixed-effects within-person relations (the estimand of interest in this study) in multilevel modeling. We analytically derived that global standardization generally yields inconsistent (asymptotically biased) estimates of this estimand when between-person differences in within-person standard deviations exist and the average within-person relation is nonzero. Alternatively, a person-mean-SD standardization (P-S) approach yields consistent estimates. Our simulation results further revealed (1) how misleading the results from global standardization can be under various circumstances and (2) that the P-S approach produced accurate estimates and satisfactory coverage rates of fixed-effects within-person relations when the number of occasions was 30 or more (in many conditions, performance was satisfactory with 10 or 20 occasions). A daily diary data example, focused on emotional complexity, is used to empirically illustrate the approaches. Researchers should choose standardization approaches based on theoretical considerations and should clearly describe the purpose and procedure of standardization in research articles.
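A minimal pandas sketch of the P-S approach described here, contrasted with global standardization; column names and data are illustrative assumptions.

```python
# Person-mean centering followed by person-mean-SD (P-S) standardization
# of a time-varying predictor, versus global standardization.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "id": np.repeat(np.arange(50), 30),        # 50 people, 30 occasions
    "x": rng.normal(0, 1, 1500),
})
g = df.groupby("id")["x"]
df["x_pc"] = df["x"] - g.transform("mean")     # person-mean centered
df["x_ps"] = df["x_pc"] / g.transform("std")   # P-S standardized

# Global standardization divides the centered scores by one pooled SD;
# per this article, that can bias fixed-effects within-person relations
# when within-person SDs differ across people.
df["x_global"] = df["x_pc"] / df["x_pc"].std()
```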


Subject(s)
Algorithms , Data Interpretation, Statistical , Multilevel Analysis/standards , Computer Simulation , Humans
6.
Psychol Sci ; 28(11): 1547-1562, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28902575

ABSTRACT

The sample size necessary to obtain a desired level of statistical power depends in part on the population value of the effect size, which is, by definition, unknown. A common approach to sample-size planning uses the sample effect size from a prior study as an estimate of the population value of the effect to be detected in the future study. Although this strategy is intuitively appealing, effect-size estimates, taken at face value, are typically not accurate estimates of the population effect size because of publication bias and uncertainty. We show that the use of this approach often results in underpowered studies, sometimes to an alarming degree. We present an alternative approach that adjusts sample effect sizes for bias and uncertainty, and we demonstrate its effectiveness for several experimental designs. Furthermore, we discuss an open-source R package, BUCSS, and user-friendly Web applications that we have made available to researchers so that they can easily implement our suggested methods.
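BUCSS is an R package; as a language-neutral illustration of why the adjustment matters (not the BUCSS procedure itself), a normal-approximation calculation shows how strongly the required sample size depends on whether the face-value effect size or a hypothetically shrunken value is used.

```python
# Required n per group for a two-sample comparison, at the published
# effect estimate versus a smaller, hypothetically bias-adjusted value.
from scipy.stats import norm

def n_per_group(d, power=0.80, alpha=0.05):
    # Normal-approximation sample size for a two-sample t test.
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z / d) ** 2

print(round(n_per_group(0.50)))  # ~63 per group at the face-value estimate
print(round(n_per_group(0.30)))  # ~174 per group after hypothetical shrinkage
```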


Subject(s)
Data Interpretation, Statistical , Publication Bias , Sample Size , Uncertainty , Humans
7.
Multivariate Behav Res ; 52(3): 305-324, 2017.
Article in English | MEDLINE | ID: mdl-28266872

ABSTRACT

Psychology is undergoing a replication crisis. The discussion surrounding this crisis has centered on mistrust of previous findings. Researchers planning replication studies often use the original study sample effect size as the basis for sample size planning. However, this strategy ignores uncertainty and publication bias in estimated effect sizes, resulting in overly optimistic calculations. A psychologist who intends to obtain power of .80 in the replication study, and performs calculations accordingly, may have an actual power lower than .80. We performed simulations to reveal the magnitude of the difference between actual and intended power based on common sample size planning strategies and assessed the performance of methods that aim to correct for effect size uncertainty and/or bias. Our results imply that even if original studies reflect actual phenomena and were conducted in the absence of questionable research practices, popular approaches to designing replication studies may result in a low success rate, especially if the original study is underpowered. Methods correcting for bias and/or uncertainty generally had higher actual power, but were not a panacea for an underpowered original study. Thus, it becomes imperative that 1) original studies are adequately powered and 2) replication studies are designed with methods that are more likely to yield the intended level of power.
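The gap between intended and actual power is easy to reproduce. The following Monte Carlo sketch uses illustrative settings (true d = 0.25, original n = 30 per group), normal approximations, and a simple significance filter standing in for publication bias; it is not the article's exact simulation design.

```python
# Intended versus actual power: originals are "published" only if
# significant, and replications are planned on the inflated estimates.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
true_d, n_orig, alpha = 0.25, 30, 0.05
z_crit = norm.ppf(1 - alpha / 2)

# Observed standardized effects from underpowered original studies.
se_orig = np.sqrt(2 / n_orig)
d_obs = rng.normal(true_d, se_orig, 100_000)
published = d_obs[np.abs(d_obs) / se_orig > z_crit]   # publication filter

# Plan each replication for 80% power at the published effect size...
n_rep = 2 * ((z_crit + norm.ppf(0.80)) / published) ** 2
# ...then evaluate actual power at the TRUE effect size.
actual = 1 - norm.cdf(z_crit - true_d / np.sqrt(2 / n_rep))
print(f"mean actual power: {actual.mean():.2f}")      # well below .80
```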


Subject(s)
Psychology/methods , Research Design , Statistics as Topic , Computer Simulation , Humans , Reproducibility of Results , Software , Statistics as Topic/methods
8.
Psychol Methods ; 21(2): 175-88, 2016 06.
Article in English | MEDLINE | ID: mdl-26950731

ABSTRACT

Time-varying predictors in multilevel models are a useful tool for longitudinal research, whether they are the research variable of interest or are controlling for variance to allow greater power for other variables. However, standard recommendations to fix the effect of time-varying predictors may rest on an assumption that is unlikely to hold in reality and may influence results. A simulation study illustrates that treating the time-varying predictor as fixed may allow analyses to converge, but the analyses have poor coverage of the true fixed effect when the time-varying predictor has a random effect in reality. A second simulation study shows that treating the time-varying predictor as random may have poor convergence, except when negative variance estimates are allowed. Although negative variance estimates are uninterpretable, the simulations show that estimates of the fixed effect of the time-varying predictor are as accurate for these cases as for cases with positive variance estimates, and that treating the time-varying predictor as random while allowing negative variance estimates performs well whether the time-varying predictor is fixed or random in reality. Because of the difficulty of interpreting negative variance estimates, 2 procedures are suggested for selecting between fixed-effect and random-effect models: comparing fixed-effect and constrained random-effect models with a likelihood ratio test, or fitting a fixed-effect model when an unconstrained random-effect model produces negative variance estimates. The performance of these 2 procedures is compared.
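A sketch of the first suggested selection procedure using statsmodels, on illustrative data. The plain chi-square(2) p-value shown is conservative, since the correct reference distribution at the variance boundary is a chi-square mixture.

```python
# Compare a fixed-effect model for a time-varying predictor against a
# random-effect model with a likelihood ratio test.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(8)
n_id, n_t = 100, 10
df = pd.DataFrame({
    "id": np.repeat(np.arange(n_id), n_t),
    "x": rng.normal(0, 1, n_id * n_t),
})
slopes = rng.normal(0.5, 0.3, n_id)          # truly random effect of x
df["y"] = slopes[df["id"]] * df["x"] + rng.normal(0, 1, len(df))

fixed = smf.mixedlm("y ~ x", df, groups=df["id"]).fit(reml=False)
random = smf.mixedlm("y ~ x", df, groups=df["id"],
                     re_formula="~x").fit(reml=False)

lr = 2 * (random.llf - fixed.llf)
print(f"LR = {lr:.1f}, p <= {chi2.sf(lr, df=2):.4f}")  # conservative p
```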


Subject(s)
Models, Statistical , Multilevel Analysis , Humans , Likelihood Functions , Time
9.
Psychol Methods ; 21(1): 1-12, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26214497

ABSTRACT

As the field of psychology struggles to trust published findings, replication research has begun to become more of a priority to both scientists and journals. With this increasing emphasis placed on reproducibility, it is essential that replication studies be capable of advancing the field. However, we argue that many researchers have been interpreting the meaning of replication only narrowly, designing studies with a simple significant-versus-nonsignificant framework in mind. Although this interpretation may be desirable in some cases, we develop a variety of additional "replication goals" that researchers could consider when planning studies. Even when researchers are aware of these goals, we show that they are rarely used in practice, as results are typically analyzed in a manner appropriate only to a simple significance test. We discuss each goal conceptually, explain appropriate analysis procedures, and provide 1 or more examples to illustrate these analyses in practice. We hope that these various goals will allow researchers to develop a more nuanced understanding of replication that can be flexible enough to answer the various questions that researchers might seek to understand.


Subject(s)
Behavioral Research/methods , Data Interpretation, Statistical , Reproducibility of Results , Behavioral Research/standards , Humans
10.
Am Psychol ; 70(6): 487-98, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26348332

ABSTRACT

Psychology has recently been viewed as facing a replication crisis because efforts to replicate past study findings frequently do not show the same result. Often, the first study showed a statistically significant result but the replication does not. Questions then arise about whether the first study results were false positives, and whether the replication study correctly indicates that there is truly no effect after all. This article suggests these so-called failures to replicate may not be failures at all, but rather are the result of low statistical power in single replication studies, and the result of failure to appreciate the need for multiple replications in order to have enough power to identify true effects. We provide examples of these power problems and suggest some solutions using Bayesian statistics and meta-analysis. Although the need for multiple replication studies may frustrate those who would prefer quick answers to psychology's alleged crisis, the large sample sizes typically needed to provide firm evidence will almost always require concerted efforts from multiple investigators. As a result, it remains to be seen how many of the recently claimed failures to replicate will be supported or instead may turn out to be artifacts of inadequate sample sizes and single study replications.
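A small sketch of the power arithmetic behind this argument (normal approximation; illustrative numbers, not the article's examples): pooling several replications meta-analytically can reach power that no single replication of ordinary size attains.

```python
# Approximate power of one two-group study versus a fixed-effect
# meta-analysis pooling k equally sized studies of the same design.
import numpy as np
from scipy.stats import norm

def power(d, n_per_group, k=1, alpha=0.05):
    # Pooling k equal studies multiplies the effective sample size by k.
    se = np.sqrt(2 / (k * n_per_group))
    return 1 - norm.cdf(norm.ppf(1 - alpha / 2) - d / se)

print(f"single replication, n=50/group: {power(0.20, 50):.2f}")   # ~.17
print(f"meta-analysis of k=10 studies:  {power(0.20, 50, k=10):.2f}")  # ~.89
```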


Subject(s)
Psychology/methods , Research Design , Bayes Theorem , Humans , Reproducibility of Results , Sample Size
11.
Psychol Methods ; 20(1): 63-83, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25822206

ABSTRACT

This article extends current discussion of how to disaggregate between-person and within-person effects with longitudinal data using multilevel models. Our main focus is on the 2 issues of centering and detrending. Conceptual and analytical work demonstrates the similarities and differences among 3 centering approaches (no centering, grand-mean centering, and person-mean centering) and the relations and differences among various detrending approaches (no detrending, detrending X only, detrending Y only, and detrending both X and Y). Two real data analysis examples in psychology are provided to illustrate the differences in the results of using different centering and detrending methods for the disaggregation of between- and within-person effects. Simulation studies were conducted to further compare the various centering and detrending approaches under a wider span of conditions. Recommendations of how to perform centering, whether detrending is needed or not, and how to perform detrending if needed are made and discussed.
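As one concrete possibility among the approaches compared here, person-specific linear detrending of a time-varying predictor can be done with per-person OLS residuals; column names and data below are illustrative assumptions.

```python
# Detrend a time-varying X within person: remove each person's linear
# time trend, leaving residuals for within-person analysis.
import numpy as np
import pandas as pd

rng = np.random.default_rng(11)
df = pd.DataFrame({
    "id": np.repeat(np.arange(40), 20),
    "t": np.tile(np.arange(20), 40),
})
df["x"] = 0.1 * df["t"] + rng.normal(0, 1, len(df))   # trend plus noise

def detrend(group):
    b = np.polyfit(group["t"], group["x"], deg=1)      # person-specific trend
    group["x_dt"] = group["x"] - np.polyval(b, group["t"])
    return group

df = df.groupby("id", group_keys=False).apply(detrend)
# x_dt is both person-mean centered and free of each person's time trend.
```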


Subject(s)
Models, Psychological , Models, Statistical , Multilevel Analysis , Humans
12.
Br J Math Stat Psychol ; 68(2): 246-67, 2015 May.
Article in English | MEDLINE | ID: mdl-25098455

ABSTRACT

We analytically derive the fixed-effects estimates in unconditional linear growth curve models by typical linear mixed-effects modelling (TLME) and by a pattern-mixture (PM) approach with random-slope-dependent, two-missing-pattern, missing-not-at-random (MNAR) longitudinal data. Results showed that when the missingness mechanism is random-slope-dependent MNAR, TLME estimates of both the mean intercept and mean slope are biased because incorrect weights are used in the estimation. More specifically, the estimate of the mean slope is biased towards the mean slope for completers, whereas the estimate of the mean intercept is biased in the opposite direction. We also discuss why the PM approach can provide unbiased fixed-effects estimates for random-coefficients-dependent MNAR data but does not work well for missing-at-random or outcome-dependent MNAR data. A small simulation study illustrates the results and compares TLME and PM. Results from an empirical data analysis showed that the conceptual finding generalizes to real conditions even when some assumptions of the analytical derivation are not met. Implications of the analytical and empirical results are discussed, and sensitivity analysis is suggested for longitudinal data analysis with missing data.
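The weighting contrast at issue can be stated compactly. The following is a hedged reconstruction from the abstract, not the article's own equations: with pi_c and pi_d the observed proportions of completers and dropouts, the PM estimator weights pattern-specific estimates by those proportions, whereas TLME implicitly weights by precision, which misweights the patterns when the slope distribution differs across them.

```latex
% Pattern-mixture fixed-effects estimate (hedged reconstruction): weight
% each missing-data pattern by its observed proportion.
\hat{\beta}_{\mathrm{PM}} = \hat{\pi}_c\,\hat{\beta}_c + \hat{\pi}_d\,\hat{\beta}_d,
\qquad \hat{\pi}_c + \hat{\pi}_d = 1 .
```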


Subject(s)
Behavioral Research/statistics & numerical data , Bias , Data Interpretation, Statistical , Linear Models , Longitudinal Studies , Psychometrics/statistics & numerical data , Schizophrenia/diagnosis , Schizophrenic Psychology , Empirical Research , Humans
13.
Psychol Methods ; 19(2): 188-210, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24079928

ABSTRACT

Randomized longitudinal designs are commonly used in psychological and medical studies to investigate the treatment effect of an intervention or an experimental drug. Traditional linear mixed-effects models for randomized longitudinal designs are limited to maximum-likelihood methods that assume data are missing at random (MAR). In practice, because longitudinal data are often likely to be missing not at random (MNAR), the traditional mixed-effects model might lead to biased estimates of treatment effects. In such cases, an alternative approach is to utilize pattern-mixture models. In this article, a Monte Carlo simulation study compares the traditional mixed-effects model and 2 different approaches to pattern-mixture models (i.e., the differencing-averaging method and the averaging-differencing method) across different missing mechanisms (i.e., MAR, random-coefficient-dependent MNAR, or outcome-dependent MNAR) and different types of treatment-condition-based missingness. Results suggest that the traditional mixed-effects model is well suited for analyzing data with the MAR mechanism whereas the proposed pattern-mixture averaging-differencing model has the best overall performance for analyzing data with the MNAR mechanism. No method was found that could provide unbiased estimates under every missing mechanism, leading to a practical suggestion that researchers need to consider why data are missing and should also consider performing a sensitivity analysis to ascertain the extent to which their results are consistent across various missingness assumptions. Applications of different estimation methods are also illustrated using a real-data example.
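On a plain reading of the method names (an assumption on our part, not the article's notation), the two pattern-mixture estimators of the occasion-specific treatment effect differ in the order of operations: averaging-differencing first averages pattern-specific estimates within each arm, weighting by observed pattern proportions, and then differences between arms.

```latex
% Averaging-differencing (hedged reconstruction): T = treatment arm,
% C = control arm, p indexes missing-data patterns within an arm.
\hat{\Delta}_{\mathrm{AD}}
  = \sum_{p}\hat{\pi}_{p}^{(T)}\hat{\mu}_{p}^{(T)}
  - \sum_{p}\hat{\pi}_{p}^{(C)}\hat{\mu}_{p}^{(C)} .
```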


Subject(s)
Patient Dropouts , Randomized Controlled Trials as Topic/methods , Data Interpretation, Statistical , Humans , Linear Models , Longitudinal Studies , Patient Dropouts/statistics & numerical data , Research Design , Treatment Outcome
14.
Multivariate Behav Res ; 48(3): 301-39, 2013 May.
Article in English | MEDLINE | ID: mdl-26741846

ABSTRACT

Mediational studies are often of interest in psychology because they explore the underlying relationship between 2 constructs. Previous research has shown that cross-sectional designs are prone to biased estimates of longitudinal mediation parameters. The sequential design has become a popular alternative to the cross-sectional design for assessing mediation. This design is a compromise between the cross-sectional and longitudinal designs because it incorporates time in the model but has only 1 measurement each of X, M, and Y. As such, this design follows the recommendation of the MacArthur group approach, which stresses the importance of multiple waves of data for studying mediation. These 2 designs were compared to see whether the sequential design assesses longitudinal mediation more accurately than the cross-sectional design. Specifically, analytic expressions are derived for the bias of estimated direct and indirect effects as calculated from the sequential design when the actual mediational process follows a longitudinal autoregressive model. It was found that, in general, the sequential design does not assess longitudinal mediation more accurately than the cross-sectional design. As a result, neither design can be depended on to assess longitudinal mediation accurately.

15.
Multivariate Behav Res ; 46(5): 816-41, 2011 Sep 30.
Article in English | MEDLINE | ID: mdl-26736047

ABSTRACT

Maxwell and Cole (2007) showed that cross-sectional approaches to mediation typically generate substantially biased estimates of longitudinal parameters in the special case of complete mediation. However, their results did not apply to the more typical case of partial mediation. We extend their previous work by showing that substantial bias can also occur with partial mediation. In particular, cross-sectional analyses can imply the existence of a substantial indirect effect even when the true longitudinal indirect effect is zero. Thus, a variable that is found to be a strong mediator in a cross-sectional analysis may not be a mediator at all in a longitudinal analysis. In addition, we show that very different combinations of longitudinal parameter values can lead to essentially identical cross-sectional correlations, raising serious questions about the interpretability of cross-sectional mediation data. More generally, researchers are encouraged to consider a wide variety of possible mediation models beyond simple cross-sectional models, including but not restricted to autoregressive models of change.

16.
Psychol Methods ; 15(1): 1-2, 2010 Mar.
Article in English | MEDLINE | ID: mdl-20230098

ABSTRACT

Causality plays a fundamental role in scientific explanation. This introduction describes 2 target articles and 3 commentaries on 2 influential perspectives on causal inference, one developed by Donald Campbell and the other developed by Donald Rubin. One goal of this special section is to introduce Rubin's causal model to psychologists who may be largely unfamiliar with it. Another goal is to compare Rubin's conceptualization with Campbell's perspective, to enrich readers' understanding of both views. All of the authors of this special section perceive many similarities between the 2 approaches. Even so, by comparing and contrasting the 2 perspectives, the authors also believe that it is possible to strengthen both approaches.
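For readers new to Rubin's model, its core definition is standard and fits in one line (textbook notation, not quoted from the target articles): each unit has two potential outcomes, only one of which is ever observed, and causal effects are defined by their contrast.

```latex
% Unit-level causal effect and average treatment effect (ATE) in
% Rubin's potential-outcomes notation.
\tau_i = Y_i(1) - Y_i(0), \qquad
\mathrm{ATE} = \mathbb{E}\!\left[\,Y(1) - Y(0)\,\right].
```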


Subject(s)
Causality , Psychological Theory , Humans
17.
Annu Rev Clin Psychol ; 5: 71-96, 2009.
Article in English | MEDLINE | ID: mdl-19327026

ABSTRACT

The relation between risk and outcome consists of myriad, complex, longitudinal processes. To study these relations requires research designs and statistical methods that are sensitive to the longitudinal structure of the risk, the outcome, and the risk-outcome relation. This review presents four longitudinal characteristics that can complicate psychopathology risk-outcome research. We represent each complication with an example data set. We demonstrate how conventional statistical approaches can yield highly misleading results. Finally, we review alternative statistical approaches that can handle these complications quite well.


Subject(s)
Mental Disorders/psychology , Models, Statistical , Adult , Humans , Logistic Models , Longitudinal Studies , Mathematical Computing , Mental Disorders/diagnosis , Risk Factors , Young Adult
19.
Annu Rev Psychol ; 59: 537-63, 2008.
Article in English | MEDLINE | ID: mdl-17937603

ABSTRACT

This review examines recent advances in sample size planning, not only from the perspective of an individual researcher, but also with regard to the goal of developing cumulative knowledge. Psychologists have traditionally thought of sample size planning in terms of power analysis. Although we review recent advances in power analysis, our main focus is the desirability of achieving accurate parameter estimates, either instead of or in addition to obtaining sufficient power. Accuracy in parameter estimation (AIPE) has taken on increasing importance in light of recent emphasis on effect size estimation and formation of confidence intervals. The review provides an overview of the logic behind sample size planning for AIPE and summarizes recent advances in implementing this approach in designs commonly used in psychological research.
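A textbook instance of the AIPE logic (normal approximation with known variance; an illustration, not a formula quoted from the review): to make the full width of a (1 - alpha) confidence interval for a two-group mean difference no larger than w, the per-group sample size must be roughly

```latex
% Per-group n so that the full CI width for a two-group mean difference
% is at most w (known sigma, normal approximation): solve
% 2 z_{1-\alpha/2} \sqrt{2\sigma^2/n} = w for n.
n \approx \frac{8\, z_{1-\alpha/2}^{2}\, \sigma^{2}}{w^{2}} .
```

For example, sigma = 1 and w = 0.4 give n of roughly 192 per group at alpha = .05, a target set by estimation accuracy rather than by power.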


Subject(s)
Models, Psychological , Psychology/methods , Psychology/statistics & numerical data , Confidence Intervals , Humans , Linear Models , Sampling Studies
20.
Psychol Methods ; 12(1): 23-44, 2007 Mar.
Article in English | MEDLINE | ID: mdl-17402810

ABSTRACT

Most empirical tests of mediation utilize cross-sectional data despite the fact that mediation consists of causal processes that unfold over time. The authors considered the possibility that longitudinal mediation might occur under either of two different models of change: (a) an autoregressive model or (b) a random effects model. For both models, the authors demonstrated that cross-sectional approaches to mediation typically generate substantially biased estimates of longitudinal parameters even under the ideal conditions when mediation is complete. In longitudinal models where variable M completely mediates the effect of X on Y, cross-sectional estimates of the direct effect of X on Y, the indirect effect of X on Y through M, and the proportion of the total effect mediated by M are often highly misleading.
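A minimal stationary autoregressive formulation of the kind analyzed here (notation assumed for illustration): with stabilities s_M and s_Y and cross-lagged paths a (X to M) and b (M to Y), the longitudinal indirect effect of X on Y two lags later is the product ab, which the product of cross-sectional path estimates need not approximate.

```latex
% A minimal stationary autoregressive mediation model; the indirect
% effect of X_{t-2} on Y_t runs through M_{t-1} and equals a*b.
M_t = s_M\, M_{t-1} + a\, X_{t-1} + e_{M,t}, \qquad
Y_t = s_Y\, Y_{t-1} + b\, M_{t-1} + e_{Y,t} .
```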


Subject(s)
Models, Psychological , Psychology, Applied/methods , Psychology, Applied/statistics & numerical data , Bias , Cross-Sectional Studies , Humans , Longitudinal Studies , Negotiating