Results 1 - 20 of 46
1.
Psychol Methods ; 2023 May 11.
Article in English | MEDLINE | ID: mdl-37166855

ABSTRACT

Planning an appropriate sample size for a study involves considering several issues. Two important considerations are cost constraints and variability inherent in the population from which data will be sampled. Methodologists have developed sample size planning methods for two or more populations when testing for equivalence or noninferiority/superiority for a linear contrast of population means. Cost constraints and variance heterogeneity among populations have also been considered. We extend these methods by developing a theory of sequential procedures for testing the equivalence or noninferiority/superiority of a linear contrast of population means under cost constraints, which we prove make effective use of the allocated resources. Our method, due to the sequential framework, does not require prespecified values of unknown population variance(s), which have historically been an impediment to designing studies. Importantly, our method does not assume a specific type of distribution for the data in the relevant population from which the observations are sampled, as we make our developments in a data distribution-free context. We provide an illustrative example to show how the implementation of the proposed approach can be useful in applied research. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
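A minimal sketch in base R of the general idea, not the authors' procedure: observations are collected in batches from two groups, and sampling stops once the lower confidence limit for the mean contrast clears a noninferiority margin or a sampling budget is exhausted. The margin, batch size, costs, and the use of a t-based interval (rather than the article's distribution-free development) are all illustrative assumptions.

```r
## Illustrative only: sequential check of noninferiority for a two-group mean contrast.
## Margin, batch size, costs, and the t-based interval are assumed, not the paper's rule.
set.seed(1)
margin     <- -0.5    # noninferiority margin for mu_x - mu_y (assumed)
cost_per_n <- 10      # cost per observation (assumed)
budget     <- 4000    # total sampling budget (assumed)
batch      <- 5       # observations added per group at each stage

x <- rnorm(10, mean = 0.1)   # pilot data, group 1
y <- rnorm(10, mean = 0.0)   # pilot data, group 2
repeat {
  ci   <- t.test(x, y, conf.level = 0.95)$conf.int      # CI for mu_x - mu_y
  cost <- cost_per_n * (length(x) + length(y))
  if (ci[1] > margin || cost >= budget) break            # stop: margin cleared or budget hit
  x <- c(x, rnorm(batch, mean = 0.1))                    # otherwise, sample more
  y <- c(y, rnorm(batch, mean = 0.0))
}
c(n_per_group = length(x), lower_CL = ci[1], total_cost = cost)
```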

2.
Psychol Methods ; 2022 Jul 21.
Article in English | MEDLINE | ID: mdl-35862114

ABSTRACT

Replication is central to scientific progress. Because of widely reported replication failures, replication has received increased attention in psychology, sociology, education, management, and related fields in recent years. Replication studies have generally been assessed dichotomously, designated either a "success" or "failure" based entirely on the outcome of a null hypothesis significance test (i.e., p < .05 or p > .05, respectively). However, alternative definitions of success depend on researchers' goals for the replication. Previous work on alternative definitions of success has focused on the analysis phase of replication. However, the design of the replication is also important, as emphasized by the adage "an ounce of prevention is worth a pound of cure." One critical component of design often ignored or oversimplified in replication studies is sample size planning; indeed, the details here are crucial. Sample size planning for replication studies should correspond to the method by which success will be evaluated. Researchers have received little guidance, some of which is misguided, on sample size planning for replication goals other than the aforementioned dichotomous null hypothesis significance testing approach. In this article, we describe four different replication goals. Then, we formalize sample size planning methods for each of the four goals. This article aims to provide clarity on the procedures for sample size planning for each goal, with examples and syntax provided to show how each procedure can be used in practice. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
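As a small, hedged illustration of how the planning target changes with the replication goal (not the article's own procedures), the R snippet below contrasts planning for 90% power against a smallest effect of interest with planning for a sufficiently narrow confidence interval via MBESS::ss.aipe.smd; the effect size and width values are assumed.

```r
## Two replication goals, two sample sizes (illustrative values).
# Goal A: 90% power to detect a smallest effect of interest of d = 0.30
power.t.test(delta = 0.30, sd = 1, power = 0.90, sig.level = 0.05)$n

# Goal B: a 95% CI for the standardized mean difference no wider than 0.25 units
library(MBESS)                      # install.packages("MBESS") if needed
ss.aipe.smd(delta = 0.30, conf.level = 0.95, width = 0.25)
```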

3.
Psychol Methods ; 25(4): 496-515, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32191106

ABSTRACT

Mediation analysis is an important approach for investigating causal pathways. One approach used in mediation analysis is the test of an indirect effect, which seeks to measure how an independent variable affects an outcome variable through 1 or more mediators. However, in many situations the proposed tests of indirect effects, including popular confidence interval-based methods, tend to produce poor Type I error rates when mediation does not occur and, more generally, only allow dichotomous decisions of "not significant" or "significant" with regard to the statistical conclusion. To remedy these issues, we propose a new method, a likelihood ratio test (LRT), that uses nonlinear constraints in what we term the model-based constrained optimization (MBCO) procedure. The MBCO procedure (a) offers a more robust Type I error rate than existing methods; (b) provides a p value, which serves as a continuous measure of compatibility of data with the hypothesized null model (not just a dichotomous reject or fail-to-reject decision rule); (c) allows simple and complex hypotheses about mediation (i.e., 1 or more mediators; different mediational pathways); and (d) allows the mediation model to use observed or latent variables. The MBCO procedure is based on a structural equation modeling framework (even if latent variables are not specified) with specialized fitting routines, namely the use of nonlinear constraints. We advocate using the MBCO procedure to test hypotheses about an indirect effect in addition to reporting a confidence interval to capture uncertainty about the indirect effect because this combination transcends existing methods. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
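A rough sketch of the core idea in lavaan, assuming simulated variables x, m, and y: the mediation model is fit with and without a nonlinear constraint that forces the product of paths to zero, and the two fits are compared with a likelihood ratio (chi-square difference) test. This is an approximation for illustration, not the authors' MBCO implementation.

```r
## Sketch: likelihood ratio test of an indirect effect via a nonlinear constraint.
## Simulated x, m, y stand in for real data; this is not the MBCO code itself.
library(lavaan)
set.seed(1)
n <- 200
x <- rnorm(n); m <- 0.4 * x + rnorm(n); y <- 0.5 * m + 0.2 * x + rnorm(n)
dat <- data.frame(x, m, y)

model_free <- '
  m ~ a * x
  y ~ b * m + cp * x
  ab := a * b            # indirect effect
'
model_null <- '
  m ~ a * x
  y ~ b * m + cp * x
  a * b == 0             # nonlinear constraint: indirect effect fixed at zero
'
fit_free <- sem(model_free, data = dat)
fit_null <- sem(model_null, data = dat)
anova(fit_null, fit_free)  # chi-square difference test of H0: a*b = 0
```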


Subject(s)
Data Interpretation, Statistical; Models, Statistical; Psychology/methods; Humans
4.
Multivariate Behav Res ; 55(2): 188-210, 2020.
Article in English | MEDLINE | ID: mdl-31179751

ABSTRACT

Complex mediation models, such as a two-mediator sequential model, have become more prevalent in the literature. To test an indirect effect in a two-mediator model, we conducted a large-scale Monte Carlo simulation study of the Type I error, statistical power, and confidence interval coverage rates of 10 frequentist and Bayesian confidence/credible intervals (CIs) for normally and nonnormally distributed data. The simulation included never-studied methods and conditions (e.g., Bayesian CIs with flat and weakly informative priors, two model-based bootstrap methods, and two nonnormality conditions) as well as understudied methods (e.g., profile-likelihood, Monte Carlo with maximum likelihood standard errors [MC-ML], and Monte Carlo with robust standard errors [MC-Robust]). The popular BC bootstrap showed inflated Type I error rates and CI under-coverage. We recommend different methods depending on the purpose of the analysis. For testing the null hypothesis of no mediation, we recommend MC-ML, profile-likelihood, and the two Bayesian methods. To report a CI, if the data have a multivariate normal distribution, we recommend MC-ML, profile-likelihood, and the two Bayesian methods; otherwise, for multivariate nonnormal data, we recommend the percentile bootstrap. We argue that the best method for testing hypotheses is not necessarily the best method for CI construction, which is consistent with the findings we present.
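A compact sketch of one of the recommended intervals, the Monte Carlo method with maximum likelihood standard errors, for a sequential two-mediator model; the data-generating values are assumptions, and this is not the simulation code from the article.

```r
## Monte Carlo (ML) confidence interval for the indirect effect x -> m1 -> m2 -> y.
## Data-generating values are illustrative, not from the article.
set.seed(42)
n  <- 300
x  <- rnorm(n)
m1 <- 0.4 * x  + rnorm(n)
m2 <- 0.5 * m1 + rnorm(n)
y  <- 0.3 * m2 + rnorm(n)

f1 <- lm(m1 ~ x); f2 <- lm(m2 ~ x + m1); f3 <- lm(y ~ x + m1 + m2)
est <- c(coef(f1)["x"], coef(f2)["m1"], coef(f3)["m2"])          # a, b1, b2
se  <- c(summary(f1)$coef["x", 2], summary(f2)$coef["m1", 2],
         summary(f3)$coef["m2", 2])

R     <- 20000
draws <- rnorm(R, est[1], se[1]) * rnorm(R, est[2], se[2]) * rnorm(R, est[3], se[3])
quantile(draws, c(0.025, 0.975))   # 95% Monte Carlo CI for a * b1 * b2
```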


Subject(s)
Behavioral Research/methods; Confidence Intervals; Models, Statistical; Multivariate Analysis; Bayes Theorem; Computer Simulation; Humans; Monte Carlo Method
5.
Psychol Methods ; 24(4): 492-515, 2019 Aug.
Article in English | MEDLINE | ID: mdl-30829512

ABSTRACT

Correlation coefficients are effect size measures that are widely used in psychology and related disciplines for quantifying the degree of relationship between two variables, where different correlation coefficients are used to describe different types of relationships for different types of data. We develop methods for constructing a sufficiently narrow confidence interval for 3 different population correlation coefficients with a specified upper bound on the confidence interval width (e.g., .10 units) at a specified level of confidence (e.g., 95%). In particular, we develop methods for Pearson's r, Kendall's tau, and Spearman's rho. Our methods solve an important problem because existing methods of study design for correlation coefficients generally require the use of supposed but typically unknowable population values as input parameters. We develop sequential estimation procedures and prove their desirable properties in order to obtain sufficiently narrow confidence intervals for population correlation coefficients without using supposed values of population parameters, doing so in a distribution-free environment. In sequential estimation procedures, supposed values of population parameters are not needed for sample size planning; instead, stopping rules are developed that, once satisfied, provide a rule-based stop to the sampling of additional units. In particular, data in sequential estimation procedures are collected in stages, whereby at each stage the estimated population values are updated and the stopping rule is evaluated. Correspondingly, the final sample size required to obtain a sufficiently narrow confidence interval is not known a priori, but is based on the outcome of the study. Additionally, we extend our methods to the squared multiple correlation coefficient under the assumption of multivariate normality. We demonstrate the effectiveness of our sequential procedure using a Monte Carlo simulation study. We provide freely available R code to implement the methods in the MBESS package. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
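A toy sketch of the stopping-rule logic for Pearson's r in base R: after a pilot sample, one pair is added at a time until the 95% confidence interval is narrower than .10 units. It uses the ordinary Fisher-z interval from cor.test rather than the article's distribution-free interval, and the pilot size and population correlation are assumed values.

```r
## Toy stopping rule: sample pairs until the 95% CI for Pearson's r is narrower
## than .10 units. Not the article's distribution-free procedure.
set.seed(7)
rho   <- 0.4                        # assumed population correlation
width <- 0.10                       # target CI width
draw_pairs <- function(k) {         # correlated bivariate normal pairs
  x <- rnorm(k); y <- rho * x + sqrt(1 - rho^2) * rnorm(k); cbind(x, y)
}
dat <- draw_pairs(30)               # pilot sample (assumed size)
repeat {
  ci <- cor.test(dat[, 1], dat[, 2])$conf.int
  if (diff(ci) <= width) break      # stopping rule satisfied
  dat <- rbind(dat, draw_pairs(1))  # otherwise collect one more pair
}
c(final_n = nrow(dat), r = cor(dat[, 1], dat[, 2]))
```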


Subject(s)
Correlation of Data; Statistics as Topic/methods; Humans
6.
J Gen Psychol ; 146(3): 325-338, 2019.
Article in English | MEDLINE | ID: mdl-30905317

ABSTRACT

The Pearson correlation coefficient can be translated to a common language effect size, which shows the probability of obtaining a certain value on one variable, given the value on the other variable. This common language effect size makes the size of a correlation coefficient understandable to laypeople. Three examples are provided to demonstrate the application of the common language effect size in interpreting Pearson correlation coefficients and multiple correlation coefficients.
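One common translation of this kind, under a bivariate normality assumption, is Dunlap's formula CL = arcsin(r)/pi + .5, the probability that a case above the mean on one variable is also above the mean on the other; the snippet below computes it and checks it by simulation. This is a generic illustration and may not match the exact quantities used in the article's examples.

```r
## Common language effect size for a correlation (bivariate normal assumption):
## the probability that a case above the mean on X is also above the mean on Y.
cl_from_r <- function(r) asin(r) / pi + 0.5

cl_from_r(0.40)   # r = .40 translates to a probability of about .63

## Quick simulation check of the formula
set.seed(123)
r <- 0.40
x <- rnorm(1e6); y <- r * x + sqrt(1 - r^2) * rnorm(1e6)
mean(y > 0 & x > 0) / mean(x > 0)
```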


Subject(s)
Language; Probability; Humans
7.
Psychol Methods ; 24(1): 20-35, 2019 Feb.
Article in English | MEDLINE | ID: mdl-29863377

ABSTRACT

Clustered data are common in many fields. Some prominent examples of clustering are employees clustered within supervisors, students within classrooms, and clients within therapists. Many methods exist that explicitly consider the dependency introduced by a clustered data structure, but the multitude of available options has resulted in rigid disciplinary preferences. For example, those working in the psychological, organizational behavior, medical, and educational fields generally prefer mixed effects models, whereas those working in economics, behavioral finance, and strategic management generally prefer fixed effects models. However, increasingly interdisciplinary research has caused lines that separate the fields grounded in psychology and those grounded in economics to blur, leading to researchers encountering unfamiliar statistical methods commonly found in other disciplines. Persistent discipline-specific preferences can be particularly problematic because (a) each approach has certain limitations that can restrict the types of research questions that can be appropriately addressed, and (b) analyses based on the statistical modeling decisions common in one discipline can be difficult to understand for researchers trained in alternative disciplines. This can impede cross-disciplinary collaboration and limit the ability of scientists to make appropriate use of research from adjacent fields. This article discusses the differences between mixed effects and fixed effects models for clustered data, reviews each approach, and helps to identify when each approach is optimal. We then discuss the within-between specification, which blends advantageous properties of each framework into a single model. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
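A brief sketch of the three specifications on simulated clustered data, assuming the lme4 package: a mixed (random-intercept) model, a fixed effects model with cluster dummies, and the within-between specification that splits the predictor into a cluster mean and a within-cluster deviation. Variable names and data-generating values are illustrative.

```r
## Mixed effects vs. fixed effects vs. within-between on made-up clustered data.
library(lme4)
set.seed(2)
J <- 50; nj <- 10                                   # 50 clusters of 10
cluster <- factor(rep(1:J, each = nj))
u <- rep(rnorm(J, sd = 0.7), each = nj)             # cluster-level effects
x <- rnorm(J * nj) + 0.5 * u                        # predictor related to clusters
y <- 0.4 * x + u + rnorm(J * nj)
d <- data.frame(y, x, cluster)

fit_mixed <- lmer(y ~ x + (1 | cluster), data = d)  # mixed (random intercept) model
fit_fixed <- lm(y ~ x + cluster, data = d)          # fixed effects via cluster dummies

d$x_between <- ave(d$x, d$cluster)                  # cluster means
d$x_within  <- d$x - d$x_between                    # within-cluster deviations
fit_wb <- lmer(y ~ x_within + x_between + (1 | cluster), data = d)  # within-between
summary(fit_wb)$coefficients
```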


Subject(s)
Cluster Analysis; Models, Statistical; Multilevel Analysis; Psychology/methods; Humans
8.
Am Psychol ; 73(7): 899-917, 2018 10.
Article in English | MEDLINE | ID: mdl-29469579

ABSTRACT

The potential for big data to provide value for psychology is significant. However, the pursuit of big data remains an uncertain and risky undertaking for the average psychological researcher. In this article, we address some of this uncertainty by discussing the potential impact of big data on the type of data available for psychological research, addressing the benefits and most significant challenges that emerge from these data, and organizing a variety of research opportunities for psychology. Our article yields two central insights. First, we highlight that big data research efforts are more readily accessible than many researchers realize, particularly with the emergence of open-source research tools, digital platforms, and instrumentation. Second, we argue that opportunities for big data research are diverse and differ both in their fit for varying research goals and in the challenges they bring about. Ultimately, our outlook for researchers in psychology using and benefiting from big data is cautiously optimistic. Although not all big data efforts are suited for all researchers or all areas within psychology, big data research prospects are diverse, expanding, and promising for psychology and related disciplines. (PsycINFO Database Record (c) 2018 APA, all rights reserved).


Subject(s)
Big Data; Psychology; Research Design; Humans; Machine Learning
9.
Psychol Methods ; 23(2): 244-261, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29172614

ABSTRACT

Mediation analysis has become one of the most popular statistical methods in the social sciences. However, many currently available effect size measures for mediation have limitations that restrict their use to specific mediation models. In this article, we develop a measure of effect size that addresses these limitations. We show how modification of a currently existing effect size measure results in a novel effect size measure with many desirable properties. We also derive an expression for the bias of the sample estimator for the proposed effect size measure and propose an adjusted version of the estimator. We present a Monte Carlo simulation study conducted to examine the finite sampling properties of the adjusted and unadjusted estimators, which shows that the adjusted estimator is effective at recovering the true value it estimates. Finally, we demonstrate the use of the effect size measure with an empirical example. We provide freely available software so that researchers can immediately implement the methods we discuss. Our developments here extend the existing literature on effect sizes and mediation by developing a potentially useful method of communicating the magnitude of mediation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).


Subject(s)
Biomedical Research/methods; Data Interpretation, Statistical; Models, Statistical; Monte Carlo Method; Humans
10.
Psychol Methods ; 23(2): 226-243, 2018 Jun.
Article in English | MEDLINE | ID: mdl-28383948

ABSTRACT

Sequential estimation is a well-recognized approach to inference in statistical theory. In sequential estimation, the sample size is not specified at the start of the study; instead, accumulating study outcomes are used to evaluate a predefined stopping rule that determines whether sampling should continue or stop. In this article we develop a general theory for a sequential estimation procedure for constructing a narrow confidence interval for a general class of effect sizes with a specified level of confidence (e.g., 95%) and a specified upper bound on the confidence interval width. Our method does not require prespecified, yet usually unknowable, population values of certain parameters for certain types of distributions, thus offering advantages compared to commonly used approaches to sample size planning. Importantly, we make our developments in a distribution-free environment and thus do not make untenable assumptions about the population from which observations are sampled. Our work is thus very general, timely given the interest in effect sizes, and widely applicable in the context of estimating a general class of effect sizes. (PsycINFO Database Record (c) 2018 APA, all rights reserved).


Subject(s)
Biomedical Research/methods; Models, Statistical; Research Design; Biomedical Research/standards; Confidence Intervals; Humans; Research Design/standards; Sample Size
11.
Psychol Sci ; 28(11): 1547-1562, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28902575

ABSTRACT

The sample size necessary to obtain a desired level of statistical power depends in part on the population value of the effect size, which is, by definition, unknown. A common approach to sample-size planning uses the sample effect size from a prior study as an estimate of the population value of the effect to be detected in the future study. Although this strategy is intuitively appealing, effect-size estimates, taken at face value, are typically not accurate estimates of the population effect size because of publication bias and uncertainty. We show that the use of this approach often results in underpowered studies, sometimes to an alarming degree. We present an alternative approach that adjusts sample effect sizes for bias and uncertainty, and we demonstrate its effectiveness for several experimental designs. Furthermore, we discuss an open-source R package, BUCSS, and user-friendly Web applications that we have made available to researchers so that they can easily implement our suggested methods.
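The BUCSS adjustments themselves are not reproduced here; the base R simulation below only illustrates the underlying problem: when pilot estimates must be significant to be published, the surviving effect sizes are inflated, and a follow-up planned from them achieves less than its nominal 80% power. All numeric values are assumptions.

```r
## Why taking a published effect size at face value underpowers a replication (toy demo).
set.seed(99)
d_true <- 0.25; n_pilot <- 40                       # true effect and pilot n per group
d_obs <- replicate(5000, {
  g1 <- rnorm(n_pilot, d_true); g2 <- rnorm(n_pilot, 0)
  if (t.test(g1, g2)$p.value < .05)
    (mean(g1) - mean(g2)) / sqrt((var(g1) + var(g2)) / 2) else NA
})
d_pub <- d_obs[!is.na(d_obs)]                       # only "significant" pilots survive
mean(d_pub)                                          # inflated relative to d_true = .25

n_plan <- ceiling(power.t.test(delta = mean(d_pub), power = .80)$n)
power.t.test(n = n_plan, delta = d_true)$power       # actual power falls below .80
```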


Subject(s)
Data Interpretation, Statistical; Publication Bias; Sample Size; Uncertainty; Humans
12.
Psychol Methods ; 22(1): 94-113, 2017 03.
Article in English | MEDLINE | ID: mdl-27607545

ABSTRACT

The standardized mean difference is a widely used effect size measure. In this article, we develop a general theory for estimating the population standardized mean difference by minimizing both the mean square error of the estimator and the total sampling cost. Fixed sample size methods, in which the sample size is planned before the start of a study, cannot simultaneously minimize both the mean square error of the estimator and the total sampling cost. To overcome this limitation of the current state of affairs, this article develops a purely sequential sampling procedure, which provides an estimate of the sample size required to achieve a sufficiently accurate estimate with minimum expected sampling cost. Performance of the purely sequential procedure is examined via a simulation study to show that our analytic developments are highly accurate. Additionally, we provide freely available functions in R to implement the algorithm of the purely sequential procedure. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
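The purely sequential procedure is not sketched here; for orientation only, the lines below show the fixed-sample quantities it builds on, a standardized mean difference and its confidence interval via MBESS::ci.smd, computed on made-up data.

```r
## Fixed-sample building blocks only (not the sequential procedure): a standardized
## mean difference and its confidence interval via MBESS, on made-up data.
library(MBESS)
set.seed(5)
g1 <- rnorm(50, mean = 0.45); g2 <- rnorm(50, mean = 0)
d  <- (mean(g1) - mean(g2)) /
      sqrt((49 * var(g1) + 49 * var(g2)) / (50 + 50 - 2))    # pooled-SD d
ci.smd(smd = d, n.1 = 50, n.2 = 50, conf.level = 0.95)
```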


Subject(s)
Research Design; Sample Size; Algorithms; Humans
13.
Multivariate Behav Res ; 51(5): 627-648, 2016.
Article in English | MEDLINE | ID: mdl-27712116

ABSTRACT

The coefficient of variation is an effect size measure with many potential uses in psychology and related disciplines. We propose a general theory for sequential estimation of the population coefficient of variation that considers both the sampling error and the study cost, importantly without specific distributional assumptions. Fixed sample size planning methods, commonly used in psychology and related fields, cannot simultaneously minimize both the sampling error and the study cost. The sequential procedure we develop is the first sequential sampling procedure developed for estimating the coefficient of variation. We first present a method of planning a pilot sample size after the research goals are specified by the researcher. Then, after collecting a sample as large as the estimated pilot sample size, a check is performed to assess whether the conditions necessary to stop the data collection have been satisfied. If not, an additional observation is collected and the check is performed again. This process continues, sequentially, until a stopping rule involving a risk function is satisfied. Our method ensures that the sampling error and the study costs are considered simultaneously so that the cost is not higher than necessary for the tolerable sampling error. We also demonstrate a variety of properties of the distribution of the final sample size for five different distributions under a variety of conditions with a Monte Carlo simulation study. In addition, we provide freely available functions via the MBESS package in R to implement the methods discussed.
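A minimal sketch of the estimand and its sampling uncertainty rather than the article's cost-aware sequential rule: the coefficient of variation with a percentile bootstrap interval in base R, on simulated data with assumed values.

```r
## Coefficient of variation with a percentile bootstrap CI (illustrative only;
## the article's sequential, cost-aware procedure is not shown here).
set.seed(11)
x  <- rgamma(60, shape = 4, rate = 1)      # made-up positive, skewed sample
cv <- function(v) sd(v) / mean(v)
cv(x)

boot <- replicate(5000, cv(sample(x, replace = TRUE)))
quantile(boot, c(0.025, 0.975))            # 95% percentile bootstrap CI for the CV
```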


Subject(s)
Analysis of Variance; Research Design; Algorithms; Computer Simulation; Data Interpretation, Statistical; Humans; Models, Statistical; Monte Carlo Method; Research/economics; Risk; Software
14.
Psychol Methods ; 21(1): 69-92, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26962759

ABSTRACT

A composite score is the sum of a set of components. For example, a total test score can be defined as the sum of the individual items. The reliability of composite scores is of interest in a wide variety of contexts due to their widespread use and applicability to many disciplines. The psychometric literature has devoted considerable time to discussing how to best estimate the population reliability value. However, all point estimates of a reliability coefficient fail to convey the uncertainty associated with the estimate as it estimates the population value. Correspondingly, a confidence interval is recommended to convey the uncertainty with which the population value of the reliability coefficient has been estimated. However, many confidence interval methods for bracketing the population reliability coefficient exist and it is not clear which method is most appropriate in general or in a variety of specific circumstances. We evaluate these confidence interval methods for 4 reliability coefficients (coefficient alpha, coefficient omega, hierarchical omega, and categorical omega) under a variety of conditions with 3 large-scale Monte Carlo simulation studies. Our findings lead us to generally recommend bootstrap confidence intervals for hierarchical omega for continuous items and categorical omega for categorical items. All of the methods we discuss are implemented in the freely available R language and environment via the MBESS package.
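A minimal usage sketch of the recommendation for continuous items, assuming hypothetical item data: MBESS::ci.reliability with a bootstrap interval for hierarchical omega. The data, number of bootstrap replications, and argument values are illustrative and may need adjusting across MBESS versions.

```r
## Bootstrap CI for hierarchical omega on continuous items (hypothetical data).
library(MBESS)
set.seed(3)
latent <- rnorm(250)
items  <- as.data.frame(sapply(1:6, function(i) 0.7 * latent + rnorm(250)))

ci.reliability(data = items, type = "hierarchical",
               interval.type = "perc", B = 1000, conf.level = 0.95)
```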


Subject(s)
Confidence Intervals; Psychometrics/methods; Reproducibility of Results; Humans; Psychometrics/standards
15.
Multivariate Behav Res ; 51(1): 86-105, 2016.
Article in English | MEDLINE | ID: mdl-26881959

ABSTRACT

To draw valid inference about an indirect effect in a mediation model, there must be no omitted confounders. No omitted confounders means that there are no common causes of hypothesized causal relationships. When the no-omitted-confounder assumption is violated, inference about indirect effects can be severely biased and the results potentially misleading. Despite the increasing attention to address confounder bias in single-level mediation, this topic has received little attention in the growing area of multilevel mediation analysis. A formidable challenge is that the no-omitted-confounder assumption is untestable. To address this challenge, we first analytically examined the biasing effects of potential violations of this critical assumption in a two-level mediation model with random intercepts and slopes, in which all the variables are measured at Level 1. Our analytic results show that omitting a Level 1 confounder can yield misleading results about key quantities of interest, such as Level 1 and Level 2 indirect effects. Second, we proposed a sensitivity analysis technique to assess the extent to which potential violation of the no-omitted-confounder assumption might invalidate or alter the conclusions about the indirect effects observed. We illustrated the methods using an empirical study and provided computer code so that researchers can implement the methods discussed.
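Not the article's analytic results or sensitivity analysis; the short simulation below, simplified to a single level, only shows how omitting a confounder of the mediator-outcome relationship can make a truly null indirect effect look nonzero. All data-generating values are assumptions.

```r
## Omitted-confounder bias in an indirect effect (single-level simplification).
set.seed(8)
n <- 5000
conf <- rnorm(n)                                   # the omitted confounder
x <- rnorm(n)
m <- 0.4 * x + 0.6 * conf + rnorm(n)
y <- 0.0 * m + 0.6 * conf + 0.2 * x + rnorm(n)     # true b = 0, so true a*b = 0

ab_naive    <- coef(lm(m ~ x))["x"] * coef(lm(y ~ m + x))["m"]
ab_adjusted <- coef(lm(m ~ x + conf))["x"] * coef(lm(y ~ m + x + conf))["m"]
c(naive = ab_naive, adjusted = ab_adjusted)        # naive estimate is spuriously nonzero
```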


Subject(s)
Confounding Factors, Epidemiologic; Models, Statistical; Multilevel Analysis/methods; Algorithms; Behavioral Research/methods; Cluster Analysis
16.
J Appl Psychol ; 100(6): 1798-810, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26011719

ABSTRACT

Fundamental to the definition of abusive supervision is the notion that subordinates are often victims of a pattern of mistreatment (Tepper, 2000). However, little research has examined the processes through which such destructive relational patterns emerge. In this study, we draw from and extend the multimotive model of reactions to interpersonal threat (Smart Richman & Leary, 2009) to formulate and test hypotheses about how employees' emotional and behavioral responses may ameliorate or worsen supervisors' abuse. To test this model, we collected 6 waves of data from a sample of 244 employees. Results revealed reciprocal relationships between abusive supervision and both supervisor-directed counterproductive behavior and supervisor-directed avoidance. Whereas the abusive supervision–counterproductive behavior relationship was partially driven by anger, the abusive supervision–avoidance relationship was partially mediated by fear. These findings suggest that some may find themselves in abusive relationships, in part, because their own reactions to mistreatment can, perhaps unknowingly, reinforce abusive behavior.


Subject(s)
Anger; Bullying; Employment/psychology; Fear/psychology; Interpersonal Relations; Social Behavior; Adult; Female; Humans; Male; Personnel Management
17.
Front Psychol ; 5: 337, 2014.
Article in English | MEDLINE | ID: mdl-24904445

ABSTRACT

Classification using standard statistical methods such as linear discriminant analysis (LDA) or logistic regression (LR) presumes knowledge of group membership prior to the development of an algorithm for prediction. However, in many real-world applications, members of the same nominal group might in fact come from different subpopulations on the underlying construct. For example, individuals diagnosed with depression will not all have the same levels of this disorder, though for the purposes of LDA or LR they will be treated in the same manner. The goal of this simulation study was to examine the performance of several methods for group classification in the case where within-group membership was not homogeneous. For example, suppose there are 3 known groups but within each group two unknown classes. Several approaches were compared, including LDA, LR, classification and regression trees (CART), generalized additive models (GAM), and mixture discriminant analysis (MIXDA). Results of the study indicated that CART and mixture discriminant analysis were the most effective tools for situations in which known groups were not homogeneous, whereas LDA, LR, and GAM had the highest rates of misclassification. Implications of these results for theory and practice are discussed.
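A toy R sketch of two of the compared classifiers on data where one nominal group mixes two latent subgroups, with MASS::lda and rpart::rpart standing in for the LDA and CART conditions; the data-generating setup is assumed and is not the study's simulation design.

```r
## LDA vs. CART when one nominal group hides two subpopulations (toy illustration).
library(MASS)    # lda()
library(rpart)   # rpart()
set.seed(4)
n <- 300
xA <- c(rnorm(n / 2, -2), rnorm(n / 2, 2))   # group A: mixture of two latent classes
xB <- rnorm(n, 0)                            # group B: a single homogeneous class
d  <- data.frame(x = c(xA, xB), group = factor(rep(c("A", "B"), each = n)))

pred_lda  <- predict(lda(group ~ x, data = d))$class
pred_cart <- predict(rpart(group ~ x, data = d, method = "class"), type = "class")
c(lda_error  = mean(pred_lda  != d$group),
  cart_error = mean(pred_cart != d$group))   # compare training misclassification rates
```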

18.
J Appl Psychol ; 99(2): 199-221, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24099348

ABSTRACT

Historically, organizational and personality psychologists have ignored within-individual variation in personality across situations or have treated it as measurement error. However, we conducted a 10-day experience sampling study consistent with whole trait theory (Fleeson, 2012), which conceptualizes personality as a system of stable tendencies and patterns of intraindividual variation along the dimensions of the Big Five personality traits (Costa & McCrae, 1992). The study examined whether (a) internal events (i.e., motivation), performance episodes, and interpersonal experiences at work predict deviations from central tendencies in trait-relevant behavior, affect, and cognition (i.e., state personality), and (b) there are individual differences in responsiveness to work experiences. Results revealed that personality at work exhibited both stability and variation within individuals. Trait measures predicted average levels of trait manifestation in daily behavior at work, whereas daily work experiences (i.e., organizational citizenship, interpersonal conflict, and motivation) predicted deviations from baseline tendencies. Additionally, correlations of neuroticism with standard deviations in the daily personality variables suggest that, although work experiences influence state personality, people higher in neuroticism exhibit higher levels of intraindividual variation in personality than do those who are more emotionally stable.


Subject(s)
Anxiety Disorders/psychology; Individuality; Motivation/physiology; Personality/physiology; Social Behavior; Work/psychology; Adult; Conflict, Psychological; Female; Humans; Male; Neuroticism
19.
Mem Cognit ; 41(7): 1079-95, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23532591

ABSTRACT

Prior knowledge in the domain of mathematics can sometimes interfere with learning and performance in that domain. One of the best examples of this phenomenon is in students' difficulties solving equations with operations on both sides of the equal sign. Elementary school children in the U.S. typically acquire incorrect, operational schemata rather than correct, relational schemata for interpreting equations. Researchers have argued that these operational schemata are never unlearned and can continue to affect performance for years to come, even after relational schemata are learned. In the present study, we investigated whether and how operational schemata negatively affect undergraduates' performance on equations. We monitored the eye movements of 64 undergraduate students while they solved a set of equations that are typically used to assess children's adherence to operational schemata (e.g., 3 + 4 + 5 = 3 + __). Participants did not perform at ceiling on these equations, particularly when under time pressure. Converging evidence from performance and eye movements showed that operational schemata are sometimes activated instead of relational schemata. Eye movement patterns reflective of the activation of relational schemata were specifically lacking when participants solved equations by adding up all the numbers or adding the numbers before the equal sign, but not when they used other types of incorrect strategies. These findings demonstrate that the negative effects of acquiring operational schemata extend far beyond elementary school.


Subject(s)
Eye Movements/physiology; Mathematical Concepts; Problem Solving/physiology; Adult; Educational Measurement; Eye Movement Measurements; Humans; Young Adult
20.
Psychol Methods ; 17(2): 137-52, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22545595

ABSTRACT

The call for researchers to report and interpret effect sizes and their corresponding confidence intervals has never been stronger. However, there is confusion in the literature on the definition of effect size, and consequently the term is used inconsistently. We propose a definition for effect size, discuss 3 facets of effect size (dimension, measure/index, and value), outline 10 corollaries that follow from our definition, and review ideal qualities of effect sizes. Our definition of effect size is general and subsumes many existing definitions of effect size. We define effect size as a quantitative reflection of the magnitude of some phenomenon that is used for the purpose of addressing a question of interest. Our definition of effect size is purposely more inclusive than the way many have defined and conceptualized effect size, and it is unique with regard to linking effect size to a question of interest. Additionally, we review some important developments in the effect size literature and discuss the importance of accompanying an effect size with an interval estimate that acknowledges the uncertainty with which the population value of the effect size has been estimated. We hope that this article will facilitate discussion and improve the practice of reporting and interpreting effect sizes.


Subject(s)
Data Interpretation, Statistical; Research Design/statistics & numerical data; Statistics as Topic/methods; Terminology as Topic; Confidence Intervals; Guidelines as Topic; Humans; Sample Size