Results 1 - 9 of 9
1.
Behav Res Methods ; 51(3): 1216-1243, 2019 06.
Article in English | MEDLINE | ID: mdl-29934696

ABSTRACT

In this study, two approaches were employed to calculate how large the sample size needs to be in order to achieve a desired statistical power to detect a significant group-by-time interaction in longitudinal intervention studies-a power analysis method, based on derived formulas using ordinary least squares estimates, and an empirical method, based on restricted maximum likelihood estimates. The performance of both procedures was examined under four different scenarios: (a) complete data with homogeneous variances, (b) incomplete data with homogeneous variances, (c) complete data with heterogeneous variances, and (d) incomplete data with heterogeneous variances. Several interesting findings emerged from this research. First, in the presence of heterogeneity, larger sample sizes are required in order to attain a desired nominal power. The second interesting finding is that, when there is attrition, the sample size requirements can be quite large. However, when attrition is anticipated, derived formulas enable the power to be calculated on the basis of the final number of subjects that are expected to complete the study. The third major finding is that the direct mathematical formulas allow the user to rigorously determine the sample size required to achieve a specified power level. Therefore, when data can be assumed to be missing at random, the solution presented can be adopted, given that Monte Carlo studies have indicated that it is very satisfactory. We illustrate the proposed method using real data from two previously published datasets.
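The empirical approach described above can be sketched in a few lines of simulation code. The sketch below is an illustrative Monte Carlo power calculation for a group-by-time interaction; it fits a plain OLS model with a normal critical value and ignores within-subject correlation, so it is a simplified stand-in for the authors' procedures, not their actual method, and the function name and defaults are assumptions.

```python
import numpy as np

def empirical_power(n_per_group=50, n_times=4, effect=0.5, sigma=1.0,
                    n_sims=2000, alpha_crit=1.96, seed=0):
    """Estimate power to detect a group-by-time interaction by simulation.

    Generates two-group longitudinal data in which the treatment group's
    mean grows by `effect` per occasion, fits y ~ group + time + group:time
    by OLS, and returns the rejection rate of the interaction term
    (normal approximation for the critical value).
    """
    rng = np.random.default_rng(seed)
    group = np.repeat([0, 1], n_per_group * n_times)
    time = np.tile(np.arange(n_times), 2 * n_per_group)
    X = np.column_stack(
        [np.ones_like(time), group, time, group * time]).astype(float)
    XtX_inv = np.linalg.inv(X.T @ X)      # design is fixed across replicates
    rejections = 0
    for _ in range(n_sims):
        y = effect * group * time + rng.normal(0.0, sigma, size=X.shape[0])
        beta = XtX_inv @ X.T @ y
        resid = y - X @ beta
        s2 = resid @ resid / (X.shape[0] - X.shape[1])
        se = np.sqrt(s2 * XtX_inv[3, 3])  # SE of the interaction coefficient
        if abs(beta[3] / se) > alpha_crit:
            rejections += 1
    return rejections / n_sims
```

To find a required sample size with this tool, one would increase `n_per_group` until the returned rate reaches the desired power; the derived formulas in the article replace this search with a direct calculation.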


Subjects
Sample Size, Likelihood Functions, Longitudinal Studies, Statistical Models, Monte Carlo Method
2.
Psicothema ; 30(4): 434-441, 2018 11.
Article in English | MEDLINE | ID: mdl-30353846

ABSTRACT

BACKGROUND: A multivariate extension of the Brown-Forsythe (MBF) procedure can be used for the analysis of partially repeated measure designs (PRMD) when the covariance matrices are arbitrary. However, the MBF procedure requires complete data over time for each subject, which is a significant limitation of this procedure. This article provides the rules for pooling the results obtained after applying the same MBF analysis to each of the imputed datasets of a PRMD. METHOD: Monte Carlo methods are used to evaluate the proposed solution (MI-MBF), in terms of control of Type I and Type II errors. For comparative purposes, the MBF analysis based on the complete original dataset (OD-MBF) and the covariance pattern model based on an unstructured matrix (CPM-UN) were studied. RESULTS: Robustness and power results showed that the MI-MBF method performed slightly worse than tests based on CPM-UN when the homogeneity assumption was met, but slightly better when that assumption was not met. We also note that without assuming equality of covariance matrices, little power was sacrificed by using the MI-MBF method in place of the OD-MBF method. CONCLUSIONS: The results of this study suggest that the MI-MBF method performs well and could be of practical use.
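The article's specific pooling rules for the MBF statistic are not reproduced here, but the standard machinery for combining a scalar estimate across multiply imputed datasets is Rubin's rules, which the following minimal sketch implements (the function name and interface are illustrative):

```python
def pool_rubin(estimates, variances):
    """Pool a scalar estimate across m imputed datasets via Rubin's rules.

    Returns the pooled estimate, its total variance, and the degrees of
    freedom of the reference t distribution.
    """
    m = len(estimates)
    qbar = sum(estimates) / m                    # pooled point estimate
    ubar = sum(variances) / m                    # within-imputation variance
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)  # between-imputation
    t = ubar + (1 + 1 / m) * b                   # total variance
    r = (1 + 1 / m) * b / ubar                   # relative variance increase
    df = (m - 1) * (1 + 1 / r) ** 2 if r > 0 else float("inf")
    return qbar, t, df
```

Pooling an F-type statistic such as the MBF test requires an extension of these rules to multi-parameter hypotheses, which is the contribution the abstract describes.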


Subjects
Data Analysis, Statistics as Topic/methods
3.
Psicothema (Oviedo) ; 28(3): 330-339, Aug. 2016. illus
Article in English | IBECS | ID: ibc-154631

ABSTRACT

BACKGROUND: S. Usami (2014) describes a method to realistically determine sample size in longitudinal research using a multilevel model. The present research extends the aforementioned work to situations where it is likely that the assumption of homogeneity of the errors across groups is not met and the error term does not follow a scaled identity covariance structure. METHOD: For this purpose, we followed a procedure based on transforming the variance components of the linear growth model and the parameter related to the treatment effect into specific and easily understandable indices. At the same time, we provide the appropriate statistical machinery for researchers to use when data loss is unavoidable, and changes in the expected value of the observed responses are not linear. RESULTS: The empirical powers based on unknown variance components were virtually the same as the theoretical powers derived from the use of statistically processed indexes. CONCLUSIONS: The main conclusion of the study is the accuracy of the proposed method to calculate sample size in the described situations with the stipulated power criteria.




Subjects
Psychometrics/methods, Longitudinal Studies, Sample Size, Statistical Data Interpretation, Research Design, Case-Control Studies, Analysis of Variance, Reproducibility of Results
4.
Psicothema ; 28(3): 330-9, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27448269

ABSTRACT

BACKGROUND: S. Usami (2014) describes a method to realistically determine sample size in longitudinal research using a multilevel model. The present research extends the aforementioned work to situations where it is likely that the assumption of homogeneity of the errors across groups is not met and the error term does not follow a scaled identity covariance structure. METHOD: For this purpose, we followed a procedure based on transforming the variance components of the linear growth model and the parameter related to the treatment effect into specific and easily understandable indices. At the same time, we provide the appropriate statistical machinery for researchers to use when data loss is unavoidable, and changes in the expected value of the observed responses are not linear. RESULTS: The empirical powers based on unknown variance components were virtually the same as the theoretical powers derived from the use of statistically processed indexes. CONCLUSIONS: The main conclusion of the study is the accuracy of the proposed method to calculate sample size in the described situations with the stipulated power criteria.


Subjects
Data Accuracy, Longitudinal Studies, Research Design, Sample Size, Statistical Models
5.
Multivariate Behav Res ; 50(1): 75-90, 2015.
Article in English | MEDLINE | ID: mdl-26609744

ABSTRACT

This article uses Monte Carlo techniques to examine the effect of heterogeneity of variance in multilevel analyses in terms of relative bias, coverage probability, and root mean square error (RMSE). For all simulated datasets, the parameters were estimated by the restricted maximum-likelihood (REML) method, both assuming homogeneity and incorporating heterogeneity into the multilevel models. We find that (a) the estimates of the fixed parameters are unbiased, but the associated standard errors are frequently biased when heterogeneity is ignored; by contrast, the standard errors of the fixed effects are almost always accurate when heterogeneity is taken into account; (b) the estimates of the random parameters are slightly overestimated; (c) both the homogeneous and the heterogeneous models underestimate the standard errors of the variance component estimates; however, when heterogeneity is taken into account, REML yields correct standard errors at the lowest level and less underestimated standard errors at the highest level; and (d) in terms of RMSE, REML accounting for heterogeneity outperforms REML assuming homogeneity, with a particularly large improvement for the fixed parameters. On this basis, we conclude that the solution presented can be uniformly adopted. We illustrate the process using a real dataset.
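A deliberately simplified illustration of why ignoring heterogeneity biases the standard errors of fixed effects: in the plain two-group case, a pooled-variance standard error can badly understate the heterogeneity-aware one when group sizes and variances are unequal. This is a textbook analogue of the phenomenon, not the article's multilevel model, and both function names are illustrative.

```python
import math

def se_pooled(s1, n1, s2, n2):
    """SE of a mean difference assuming a common residual variance
    (the 'homogeneity' model)."""
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    return math.sqrt(sp2 * (1 / n1 + 1 / n2))

def se_hetero(s1, n1, s2, n2):
    """SE allowing each group its own variance
    (the 'heterogeneity' model, i.e., the Welch form)."""
    return math.sqrt(s1**2 / n1 + s2**2 / n2)
```

With a small high-variance group (s1 = 3, n1 = 10) and a large low-variance group (s2 = 1, n2 = 100), the pooled SE is about 0.43 while the heterogeneity-aware SE is about 0.95, so homogeneity-based inference would be badly anticonservative here.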


Subjects
Likelihood Functions, Statistical Models, Monte Carlo Method, Multilevel Analysis, Probability, Regression Analysis
6.
An. psicol ; 30(2): 756-771, May 2014.
Article in Spanish | IBECS | ID: ibc-121814

ABSTRACT



Quasi-experimental research aims to test a causal hypothesis by manipulating (at least) one independent variable in settings where, for logistic or ethical reasons, the research units cannot be randomly assigned to groups. Because many decisions at the social level are made on the basis of studies with these characteristics, meticulous planning of the treatment application, of the control exercised over the research process, and of the data analysis is imperative. In 2013 quasi-experimental designs turned 50 years old, and this work is a homage to Campbell and to all the researchers who, day after day, contribute ideas to improve some aspect of the quasi-experimental method. Through a review of the quasi-experimental studies published over an 11-year period in three psychology journals, we highlight several aspects concerning care for the method. We close by proposing the concept of Structured Validity, which, in short, is the guiding thread that every investigation should follow so that the hypotheses addressing its objectives can be tested with full guarantees, particularly in quasi-experimental research.


Subjects
Humans, Research Design, Clinical Psychology/trends
7.
Behav Res Methods ; 43(1): 18-36, 2011 Mar.
Article in English | MEDLINE | ID: mdl-21287107

ABSTRACT

This study examined the performance of the model selection criteria available in the major statistical packages for both the mean model and the covariance structure. Unbalanced designs due to missing data, involving both a moderate and a large number of repeated measurements and varying total sample sizes, were investigated. The study also investigated the impact of using different estimation strategies for the information criteria, of different adjustments for calculating the criteria, and of different distribution shapes. Overall, we found that the ability of the consistent criteria, in any of their examined forms, to select the correct model was better under simple covariance patterns than under complex ones, and vice versa for the efficient criteria. The simulation studies covered in this paper also revealed that, regardless of the estimation method used, consistent criteria based on the number of subjects were more effective than consistent criteria based on the total number of observations, and vice versa for the efficient criteria. Furthermore, the results indicated that, given a dataset with missing values, the efficient criteria were more affected than the consistent criteria by lack of normality.
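The distinction between efficient criteria (e.g., AIC) and consistent criteria (e.g., BIC), and the choice of sample-size basis for the consistent penalty, can be sketched generically as follows (a textbook formulation, not tied to any particular package):

```python
import math

def aic(loglik, k):
    """Efficient criterion: AIC = -2 logL + 2k."""
    return -2 * loglik + 2 * k

def bic(loglik, k, n):
    """Consistent criterion: BIC = -2 logL + k log(n).

    `n` may be taken as the number of subjects or as the total number
    of observations; that choice is exactly what the study compares.
    """
    return -2 * loglik + k * math.log(n)
```

With 50 subjects observed on 6 occasions, the penalty is k*log(50) on a subjects basis but k*log(300) on an observations basis, which is why the two versions of a consistent criterion can rank candidate covariance structures differently.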


Subjects
Behavioral Sciences/statistics & numerical data, Statistical Models, Algorithms, Analysis of Variance, Bayes Theorem, Computer Simulation, Statistical Data Interpretation, Humans, Likelihood Functions, Longitudinal Studies/statistics & numerical data, Reproducibility of Results, Research Design, Sample Size
8.
An. psicol ; 26(2): 400-409, Jul.-Dec. 2010. tab
Article in Spanish | IBECS | ID: ibc-81975

ABSTRACT



This article evaluated the robustness of several approaches for analyzing repeated measures designs when the assumptions of normality and multisample sphericity are violated, separately and jointly. Specifically, the authors' work compares the performance of two resampling methods, bootstrapping and permutation tests, with that of the usual analysis of variance (ANOVA) model and of the mixed linear model procedure adjusted by the Kenward-Roger solution available in SAS PROC MIXED. The authors found that the permutation test outperformed the other three methods when the normality and sphericity assumptions did not hold. In contrast, when the normality and multisample sphericity assumptions were violated, the results clearly revealed that the Bootstrap-F test generally provided better control of Type I error rates than the permutation test and the mixed linear model approach. The performance of the ANOVA approach was considerably affected by the presence of heterogeneity and the lack of sphericity, but scarcely by the absence of normality.
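As one concrete instance of the resampling ideas compared above, here is a hedged sketch of a sign-flipping permutation test for a single within-subject contrast, a far simpler setting than the full repeated measures designs studied in the article (function name and defaults are illustrative):

```python
import numpy as np

def sign_flip_test(diffs, n_perms=5000, seed=0):
    """Permutation test for a within-subject condition difference.

    Under H0 the sign of each subject's difference score is exchangeable,
    so the observed |mean difference| is compared with the distribution
    obtained by randomly flipping the signs of the difference scores.
    """
    rng = np.random.default_rng(seed)
    diffs = np.asarray(diffs, dtype=float)
    observed = abs(diffs.mean())
    signs = rng.choice([-1.0, 1.0], size=(n_perms, diffs.size))
    perm_means = np.abs((signs * diffs).mean(axis=1))
    # add-one correction keeps the p-value strictly positive
    return (1 + np.sum(perm_means >= observed)) / (n_perms + 1)
```

Because the reference distribution is built from the data themselves, no normality or sphericity assumption is invoked, which is the appeal of the resampling approaches evaluated in the study.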


Subjects
Humans, Male, Female, Child, Adolescent, Child Rearing/psychology, Punishment/psychology, Parents/psychology, Education/methods, Family Relations, Gender Identity
9.
Psychol Rep ; 102(3): 643-56, 2008 Jun.
Article in English | MEDLINE | ID: mdl-18763432

ABSTRACT

The Type I error rates and powers of three recent tests for analyzing nonorthogonal factorial designs under departures from the assumptions of homogeneity and normality were evaluated using Monte Carlo simulation. Specifically, this work compared the performance of the modified Brown-Forsythe procedure, the generalization of Box's method proposed by Brunner, Dette, and Munk, and the mixed-model procedure adjusted by the Kenward-Roger solution available in the SAS statistical package. With regard to robustness, the three approaches adequately controlled the Type I error when the data were generated from symmetric distributions; however, this study's results indicate that, when the data were drawn from asymmetric distributions, the modified Brown-Forsythe approach controlled the Type I error slightly better than the other procedures. With regard to sensitivity, the highest power rates were obtained when the analyses were done with the MIXED procedure of the SAS program. Furthermore, the results also showed that, when the data were generated from symmetric distributions, little power was sacrificed by using the generalization of Box's method in place of the modified Brown-Forsythe procedure.
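The study concerns the modified Brown-Forsythe procedure for nonorthogonal factorial designs; that generalization is not reproduced here, but the classic Brown-Forsythe statistic it builds on, for one-way designs with unequal variances, can be sketched as:

```python
import numpy as np

def brown_forsythe_F(groups):
    """Brown-Forsythe F* for comparing group means under unequal variances.

    F* = sum n_j (mean_j - grand)^2 / sum (1 - n_j/N) s_j^2.
    Unlike the classical ANOVA F, the denominator does not pool the group
    variances into a single error term, which is what makes the statistic
    robust to heterogeneity.
    """
    groups = [np.asarray(g, dtype=float) for g in groups]
    n = np.array([g.size for g in groups])
    means = np.array([g.mean() for g in groups])
    variances = np.array([g.var(ddof=1) for g in groups])
    grand = np.concatenate(groups).mean()
    numerator = np.sum(n * (means - grand) ** 2)
    denominator = np.sum((1 - n / n.sum()) * variances)
    return numerator / denominator
```

The modified procedure evaluated in the article extends this idea to factorial hypotheses with approximate degrees of freedom; the sketch only shows the core statistic.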


Subjects
Psychological Models, Psychological Tests, Factor Analysis, Humans