Results 1 - 20 of 30
1.
PLoS One ; 19(4): e0297906, 2024.
Article in English | MEDLINE | ID: mdl-38635512

ABSTRACT

The literature on leadership and personal competencies exhibits limitations in terms of construct definition, behavioral specification, and valid theory-based measurement strategies. An explanatory design with latent variables and the statistical software SAS 9.4 were used for the adaptation to Spanish and validation of the Leadership Virtues Questionnaire, applied to work and organizational psychologists and to people who exercise leadership functions in Chile. The levels of inter-judge agreement for the Spanish-language adaptation and the first-order confirmatory factor analysis with four dimensions show insufficient statistical indices for the absolute, comparative and parsimonious fit. However, a second-order confirmatory factor analysis with two dimensions presents a satisfactory fit for the item, model, and parameter matrices. The measurement of Virtuous Leadership would provide relevant inputs for further evaluation and training based on ethical competencies aimed at improving management, which would, in turn, allow for its treatment as an independent variable to generate an ethical organizational culture.


Subject(s)
Leadership , Virtues , Humans , Chile , Organizational Culture , Surveys and Questionnaires , Reproducibility of Results
2.
Psychol Methods ; 2023 Feb 16.
Article in English | MEDLINE | ID: mdl-36795436

ABSTRACT

This article discusses the robustness of the multivariate analysis of covariance (MANCOVA) test for an emergent variable system and proposes a modification of this test to obtain adequate information from heterogeneous normal observations. The proposed approach for testing potential effects in heterogeneous MANCOVA models can be adopted effectively, regardless of the degree of heterogeneity and sample size imbalance. As our method was not designed to handle missing values, we also show how to derive the formulas for pooling the results of multiple-imputation-based analyses into a single final estimate. Results of simulation studies and real-data analyses show that the proposed combining rules provide adequate coverage and power. Based on the current evidence, the two solutions suggested could be effectively used by researchers for testing hypotheses, provided that the data conform to normality. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
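
The pooling step referred to here follows the general logic of Rubin's combining rules. As a reference point, the standard scalar form over m imputed datasets is shown below (Q-hat_j is the completed-data estimate and U-hat_j its variance from imputation j); the article derives the analogous multivariate rules for the MANCOVA test, which are not reproduced here.

```latex
% Standard Rubin combining rules for a scalar quantity estimated on each of
% m imputed datasets (reference form only; the article derives the analogous
% rules for the heterogeneous MANCOVA test itself).
\[
\bar{Q} = \frac{1}{m}\sum_{j=1}^{m}\hat{Q}_{j}, \qquad
\bar{U} = \frac{1}{m}\sum_{j=1}^{m}\hat{U}_{j}, \qquad
B = \frac{1}{m-1}\sum_{j=1}^{m}\bigl(\hat{Q}_{j}-\bar{Q}\bigr)^{2}, \qquad
T = \bar{U} + \Bigl(1+\frac{1}{m}\Bigr)B
\]
```

Inference for the pooled estimate is then based on the combined point estimate and the total variance T, which adds the between-imputation variability to the average within-imputation variance.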

3.
Psicothema (Oviedo) ; 32(3): 399-409, Aug. 2020. tab
Article in English | IBECS | ID: ibc-199781

ABSTRACT

BACKGROUND: This study analyzes the effectiveness of different information criteria for selecting covariance structures, extending the analysis to different missing data mechanisms and to the maintenance and adjustment of the mean structures and matrices. METHOD: The Monte Carlo method was used with 1,000 simulations, SAS 9.4 statistical software and a partially repeated measures design (p=2; q=5). The following variables were manipulated: a) the complexity of the model; b) sample size; c) pairing of covariance matrices and sample sizes; d) dispersion matrices; e) the type of distribution of the variable; and f) the non-response mechanism. RESULTS: The results show that all information criteria worked well in Scenario 1 for normal and non-normal distributions with homogeneity and heterogeneity of variances. However, in Scenarios 2 and 3, all were accurate with the ARH matrix, whereas AIC, AICCR and HQICR worked better with the TOEP and UN matrices. When the distribution was not normal, AIC and AICCR were only accurate in Scenario 3 (more heterogeneous and unstructured matrices), with complete cases, MAR and MCAR. CONCLUSIONS: In order to select the matrix correctly, it is advisable to analyze the heterogeneity, sample size and distribution of the data.


Subject(s)
Humans , Analysis of Variance , Models, Statistical , Reproducibility of Results , Sensitivity and Specificity , Monte Carlo Method
4.
Psicothema ; 32(3): 399-409, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32711676

ABSTRACT

BACKGROUND: This study analyzes the effectiveness of different information criteria for selecting covariance structures, extending the analysis to different missing data mechanisms and to the maintenance and adjustment of the mean structures and matrices. METHOD: The Monte Carlo method was used with 1,000 simulations, SAS 9.4 statistical software and a partially repeated measures design (p=2; q=5). The following variables were manipulated: a) the complexity of the model; b) sample size; c) pairing of covariance matrices and sample sizes; d) dispersion matrices; e) the type of distribution of the variable; and f) the non-response mechanism. RESULTS: The results show that all information criteria worked well in Scenario 1 for normal and non-normal distributions with homogeneity and heterogeneity of variances. However, in Scenarios 2 and 3, all were accurate with the ARH matrix, whereas AIC, AICCR and HQICR worked better with the TOEP and UN matrices. When the distribution was not normal, AIC and AICCR were only accurate in Scenario 3 (more heterogeneous and unstructured matrices), with complete cases, MAR and MCAR. CONCLUSIONS: In order to select the matrix correctly, it is advisable to analyze the heterogeneity, sample size and distribution of the data.
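
For reference, the criteria compared in this study have the following standard forms, where the maximized (restricted) log-likelihood, the number of covariance parameters d, and a sample size n appear; n may be defined as either the number of subjects or the total number of observations. The REML-specific AICCR and HQICR variants used in the article apply particular corrections that are not reproduced here, so treat these as the textbook forms only.

```latex
% Standard forms of the efficient criteria (AIC, AICC) and the consistent
% criteria (BIC, HQIC); smaller values indicate a better-fitting covariance
% structure. \ell is the maximized (restricted) log-likelihood, d the number
% of covariance parameters, and n the chosen sample-size definition.
\[
\mathrm{AIC} = -2\ell + 2d, \qquad
\mathrm{AICC} = -2\ell + \frac{2dn}{n-d-1}, \qquad
\mathrm{BIC} = -2\ell + d\ln n, \qquad
\mathrm{HQIC} = -2\ell + 2d\ln\bigl(\ln n\bigr)
\]
```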


Subject(s)
Data Interpretation, Statistical , Research Design/statistics & numerical data , Sensitivity and Specificity
5.
Behav Res Methods ; 51(3): 1216-1243, 2019 06.
Article in English | MEDLINE | ID: mdl-29934696

ABSTRACT

In this study, two approaches were employed to calculate how large the sample size needs to be in order to achieve a desired statistical power to detect a significant group-by-time interaction in longitudinal intervention studies: a power analysis method, based on formulas derived from ordinary least squares estimates, and an empirical method, based on restricted maximum likelihood estimates. The performance of both procedures was examined under four different scenarios: (a) complete data with homogeneous variances, (b) incomplete data with homogeneous variances, (c) complete data with heterogeneous variances, and (d) incomplete data with heterogeneous variances. Several interesting findings emerged from this research. First, in the presence of heterogeneity, larger sample sizes are required in order to attain a desired nominal power. Second, when there is attrition, the sample size requirements can be quite large; however, when attrition is anticipated, the derived formulas enable the power to be calculated on the basis of the final number of subjects expected to complete the study. Third, the direct mathematical formulas allow the user to rigorously determine the sample size required to achieve a specified power level. Therefore, when data can be assumed to be missing at random, the solution presented can be adopted, given that Monte Carlo studies have indicated that it is very satisfactory. We illustrate the proposed method using real data from two previously published datasets.
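
As a point of reference, the empirical (simulation-based) route can be sketched as follows: simulate the longitudinal design, fit an ordinary least squares model with a group-by-time term, and record how often the interaction is declared significant. The sketch below is a deliberately simplified illustration under assumed parameter values (independent errors, no within-subject correlation, no attrition); it does not reproduce the authors' derived formulas or their REML-based procedure.

```python
# Minimal Monte Carlo power sketch for a group-by-time interaction, estimated
# by repeatedly simulating the design and fitting an OLS model. Simplified:
# errors are independent across waves and there is no attrition, so this is an
# illustration of the workflow only, not the article's method.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def empirical_power(n_per_group=50, n_waves=4, interaction=0.15,
                    sigma=1.0, alpha=0.05, n_sims=2000):
    time = np.tile(np.arange(n_waves), 2 * n_per_group)
    group = np.repeat([0, 1], n_per_group * n_waves)
    X = np.column_stack([np.ones_like(time), group, time, group * time]).astype(float)
    df_resid = X.shape[0] - X.shape[1]
    rejections = 0
    for _ in range(n_sims):
        y = 0.5 + 0.2 * time + interaction * group * time + rng.normal(0, sigma, len(time))
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        mse = resid @ resid / df_resid
        cov = mse * np.linalg.inv(X.T @ X)
        t_val = beta[3] / np.sqrt(cov[3, 3])          # group-by-time coefficient
        p_val = 2 * stats.t.sf(abs(t_val), df_resid)
        rejections += p_val < alpha
    return rejections / n_sims

print(f"Estimated power: {empirical_power():.3f}")
```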


Subject(s)
Sample Size , Likelihood Functions , Longitudinal Studies , Models, Statistical , Monte Carlo Method
6.
Psicothema (Oviedo) ; 30(4): 434-441, Nov. 2018. tab
Article in English | IBECS | ID: ibc-178700

ABSTRACT

BACKGROUND: A multivariate extension of the Brown-Forsythe (MBF) procedure can be used for the analysis of partially repeated measures designs (PRMD) when the covariance matrices are arbitrary. However, the MBF procedure requires complete data over time for each subject, which is a significant limitation. This article provides the rules for pooling the results obtained after applying the same MBF analysis to each of the imputed datasets of a PRMD. METHOD: Monte Carlo methods are used to evaluate the proposed solution (MI-MBF) in terms of control of Type I and Type II errors. For comparative purposes, the MBF analysis based on the complete original dataset (OD-MBF) and the covariance pattern model based on an unstructured matrix (CPM-UN) were also studied. RESULTS: Robustness and power results showed that the MI-MBF method performed slightly worse than tests based on CPM-UN when the homogeneity assumption was met, but slightly better when that assumption was not met. We also note that, without assuming equality of covariance matrices, little power was sacrificed by using the MI-MBF method in place of the OD-MBF method. CONCLUSIONS: The results of this study suggest that the MI-MBF method performs well and could be of practical use.


Subject(s)
Humans , Statistics as Topic , Statistics as Topic/methods
7.
Psicothema ; 30(4): 434-441, 2018 11.
Article in English | MEDLINE | ID: mdl-30353846

ABSTRACT

BACKGROUND: A multivariate extension of the Brown-Forsythe (MBF) procedure can be used for the analysis of partially repeated measures designs (PRMD) when the covariance matrices are arbitrary. However, the MBF procedure requires complete data over time for each subject, which is a significant limitation. This article provides the rules for pooling the results obtained after applying the same MBF analysis to each of the imputed datasets of a PRMD. METHOD: Monte Carlo methods are used to evaluate the proposed solution (MI-MBF) in terms of control of Type I and Type II errors. For comparative purposes, the MBF analysis based on the complete original dataset (OD-MBF) and the covariance pattern model based on an unstructured matrix (CPM-UN) were also studied. RESULTS: Robustness and power results showed that the MI-MBF method performed slightly worse than tests based on CPM-UN when the homogeneity assumption was met, but slightly better when that assumption was not met. We also note that, without assuming equality of covariance matrices, little power was sacrificed by using the MI-MBF method in place of the OD-MBF method. CONCLUSIONS: The results of this study suggest that the MI-MBF method performs well and could be of practical use.
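
A minimal sketch of the workflow this abstract describes (run the same analysis on each imputed dataset, then pool the per-imputation results) is shown below for a scalar estimate using Rubin-style rules. It illustrates the pooling idea only; the article's MBF-specific combining rules are not reproduced, and the per-imputation numbers are invented.

```python
# Sketch of the multiple-imputation workflow: apply the same analysis to each
# imputed dataset and pool the per-imputation results. Shown for a scalar
# estimate with Rubin-style combining rules; the article's rules for the
# multivariate Brown-Forsythe test are NOT reproduced here.
import numpy as np

def pool_scalar(estimates, variances):
    """Pool m completed-data estimates and their variances."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    q_bar = estimates.mean()            # pooled point estimate
    u_bar = variances.mean()            # within-imputation variance
    b = estimates.var(ddof=1)           # between-imputation variance
    t = u_bar + (1 + 1 / m) * b         # total variance
    return q_bar, t

# Hypothetical per-imputation results (e.g., a mean difference and its
# variance computed on each of m = 5 imputed datasets).
est = [1.12, 1.05, 1.20, 0.98, 1.10]
var = [0.040, 0.043, 0.039, 0.045, 0.041]
q, t = pool_scalar(est, var)
print(f"pooled estimate = {q:.3f}, total variance = {t:.3f}")
```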


Subject(s)
Data Analysis , Statistics as Topic/methods
8.
Front Psychol ; 9: 556, 2018.
Article in English | MEDLINE | ID: mdl-29731731

ABSTRACT

It is practically impossible to avoid losing data in the course of an investigation, and the consequences can be severe enough to invalidate the results of a study. This paper describes some of the most likely causes of missing data in clinical psychology research and the consequences they may have for statistical and substantive inferences. When it is necessary to recover the missing information, analyzing the data can become extremely complex. We summarize the experts' recommendations regarding the most powerful procedures for performing this task, the advantages each one has over the others, the elements that can or should influence our choice, and the procedures that are not recommended except in very exceptional cases. We conclude by offering four pieces of advice on which all the experts agree and which should be heeded at all times in order to proceed with the greatest possible success. Finally, we show the pernicious effects of missing data on the statistical results and on the substantive or clinical conclusions. For this purpose, we deleted data at different percentage rates under two missingness mechanisms, MCAR and MAR, from the complete datasets of two very different real studies, and then analyzed the available data using listwise deletion. One study used a quasi-experimental non-equivalent control group design, and the other a completely randomized experimental design.
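
The missingness mechanisms mentioned here are easy to mimic. The sketch below (with a made-up dataset and assumed variable names, not the two real studies analyzed in the paper) imposes MCAR and MAR missingness on a complete outcome and shows how the listwise-deletion estimate can drift away from the complete-data estimate, particularly under MAR.

```python
# Illustrative sketch: impose MCAR and MAR missingness on a complete outcome
# and compare the listwise-deletion estimate of the group difference with the
# complete-data estimate. The data-generating model is an assumption made for
# this example, not the article's data.
import numpy as np

rng = np.random.default_rng(7)
n = 400
group = rng.integers(0, 2, n)                 # treatment indicator
baseline = rng.normal(0, 1, n)                # fully observed covariate
y = 0.5 * group + 0.8 * baseline + rng.normal(0, 1, n)   # complete outcome

rate = 0.30                                   # approximate proportion of missing y

# MCAR: every case has the same probability of being missing.
miss_mcar = rng.random(n) < rate

# MAR: missingness depends on the observed baseline score (higher baseline,
# higher chance of a missing outcome), but not on y itself.
p_mar = 1 / (1 + np.exp(-2 * (baseline - np.quantile(baseline, 1 - rate))))
miss_mar = rng.random(n) < p_mar

def group_diff(keep):
    """Listwise deletion: use only cases with an observed outcome."""
    return y[keep & (group == 1)].mean() - y[keep & (group == 0)].mean()

print("complete data :", round(group_diff(np.ones(n, bool)), 3))
print("MCAR, listwise:", round(group_diff(~miss_mcar), 3))
print("MAR,  listwise:", round(group_diff(~miss_mar), 3))
```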

9.
Psicothema (Oviedo) ; 28(3): 330-339, Aug. 2016. ilus
Article in English | IBECS | ID: ibc-154631

ABSTRACT

BACKGROUND: S. Usami (2014) describes a method to realistically determine sample size in longitudinal research using a multilevel model. The present research extends that work to situations where the assumption of homogeneity of the errors across groups is likely to be unmet and the error term does not follow a scaled identity covariance structure. METHOD: For this purpose, we followed a procedure based on transforming the variance components of the linear growth model and the parameter related to the treatment effect into specific and easily understandable indices. At the same time, we provide the appropriate statistical machinery for researchers to use when data loss is unavoidable and changes in the expected value of the observed responses are not linear. RESULTS: The empirical powers based on unknown variance components were virtually the same as the theoretical powers derived from the use of the transformed statistical indices. CONCLUSIONS: The main conclusion of the study is the accuracy of the proposed method for calculating sample size in the described situations with the stipulated power criteria.


Subject(s)
Psychometrics/methods , Longitudinal Studies , Sample Size , Data Interpretation, Statistical , Research Design , Case-Control Studies , Analysis of Variance , Reproducibility of Results
10.
Psicothema ; 28(3): 330-9, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27448269

ABSTRACT

BACKGROUND: S. Usami (2014) describes a method to realistically determine sample size in longitudinal research using a multilevel model. The present research extends the aforementioned work to situations where it is likely that the assumption of homogeneity of the errors across groups is not met and the error term does not follow a scaled identity covariance structure. METHOD: For this purpose, we followed a procedure based on transforming the variance components of the linear growth model and the parameter related to the treatment effect into specific and easily understandable indices. At the same time, we provide the appropriate statistical machinery for researchers to use when data loss is unavoidable, and changes in the expected value of the observed responses are not linear. RESULTS: The empirical powers based on unknown variance components were virtually the same as the theoretical powers derived from the use of statistically processed indexes. CONCLUSIONS: The main conclusion of the study is the accuracy of the proposed method to calculate sample size in the described situations with the stipulated power criteria.


Subject(s)
Data Accuracy , Longitudinal Studies , Research Design , Sample Size , Models, Statistical
11.
Multivariate Behav Res ; 50(1): 75-90, 2015.
Article in English | MEDLINE | ID: mdl-26609744

ABSTRACT

This article uses Monte Carlo techniques to examine the effect of heterogeneity of variance in multilevel analyses in terms of relative bias, coverage probability, and root mean square error (RMSE). For all simulated data sets, the parameters were estimated using the restricted maximum-likelihood (REML) method both assuming homogeneity and incorporating heterogeneity into multilevel models. We find that (a) the estimates for the fixed parameters are unbiased, but the associated standard errors are frequently biased when heterogeneity is ignored; by contrast, the standard errors of the fixed effects are almost always accurate when heterogeneity is considered; (b) the estimates for the random parameters are slightly overestimated; (c) both the homogeneous and heterogeneous models produce standard errors of the variance component estimates that are underestimated; however, taking heterogeneity into account, the REML-estimations give correct estimates of the standard errors at the lowest level and lead to less underestimated standard errors at the highest level; and (d) from the RMSE point of view, REML accounting for heterogeneity outperforms REML assuming homogeneity; a considerable improvement has been particularly detected for the fixed parameters. Based on this, we conclude that the solution presented can be uniformly adopted. We illustrate the process using a real dataset.
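
The three evaluation criteria named in this abstract have standard definitions; the short sketch below computes them for a handful of made-up replicate estimates (illustration only, not the article's simulation output).

```python
# Evaluation criteria used in the abstract, computed over simulation
# replicates: relative bias, coverage probability of nominal 95% intervals,
# and root mean square error (RMSE). The numbers are invented for illustration.
import numpy as np

def relative_bias(estimates, true_value):
    return (np.mean(estimates) - true_value) / true_value

def coverage(lower_bounds, upper_bounds, true_value):
    lower_bounds = np.asarray(lower_bounds)
    upper_bounds = np.asarray(upper_bounds)
    return np.mean((lower_bounds <= true_value) & (true_value <= upper_bounds))

def rmse(estimates, true_value):
    estimates = np.asarray(estimates)
    return np.sqrt(np.mean((estimates - true_value) ** 2))

true_beta = 0.50
est = np.array([0.52, 0.47, 0.55, 0.49, 0.51])   # replicate estimates
se = np.array([0.04, 0.05, 0.04, 0.05, 0.04])    # replicate standard errors
print("relative bias:", round(relative_bias(est, true_beta), 4))
print("coverage     :", coverage(est - 1.96 * se, est + 1.96 * se, true_beta))
print("RMSE         :", round(rmse(est, true_beta), 4))
```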


Subject(s)
Likelihood Functions , Models, Statistical , Monte Carlo Method , Multilevel Analysis , Probability , Regression Analysis
12.
An. psicol ; 30(2): 756-771, May 2014.
Article in Spanish | IBECS | ID: ibc-121814

ABSTRACT

Quasi-experimental research aims to test a causal hypothesis by manipulating (at least) one independent variable when, for logistical or ethical reasons, the research units cannot be assigned to groups at random. Because many decisions at the social level are made on the basis of research with these characteristics, such studies demand meticulous planning of the application of the treatment, of control over the research process, and of the data analysis. In 2013, quasi-experimental designs turned 50 years old, and this work is a tribute to Campbell and to all the researchers who, day after day, contribute ideas to improve some aspect of the quasi-experimental method. Drawing on a review of the quasi-experimental studies published over an 11-year period in three psychology journals, we highlight some aspects concerning care for the method. We close by proposing the concept of Structured Validity, which, in short, is the guiding thread that every investigation should follow in order to reliably test the hypotheses that address its objectives, particularly in quasi-experimental research.


Subject(s)
Humans , Research Design , Psychology, Clinical/trends
13.
Psicothema (Oviedo) ; 25(4): 520-528, Oct.-Dec. 2013. tab, ilus
Article in English | IBECS | ID: ibc-115901

ABSTRACT

Background: Likelihood-based methods can work poorly when the residuals are not normally distributed and the variances across clusters are heterogeneous. Method: The performance of two estimation methods for fitting multilevel models, the non-parametric residual bootstrap (RB) and restricted maximum likelihood (REML), is compared through simulation studies in terms of bias, coverage, and precision. Results: We find that (a) both methods produce unbiased estimates of the fixed parameters but biased estimates of the random parameters, although REML was more prone to give biased estimates of the variance components; (b) the RB method yields substantial reductions in the difference between nominal and actual confidence interval coverage compared with the REML method; and (c) for the square root of the mean squared error (RMSE) of the fixed effects, the RB method performed slightly better than the REML method. For the variance components, however, the RB method did not offer a systematic improvement over the REML method in terms of RMSE. Conclusions: It can be stated that the RB method is, in general, superior to the REML method when assumptions are violated.


Subject(s)
Humans , Male , Female , Likelihood Functions , Psychometrics/methods , Psychometrics/statistics & numerical data , Statistics as Topic , Multilevel Analysis/instrumentation , Multilevel Analysis/methods , Multilevel Analysis/trends , Confidence Intervals , Analysis of Variance
14.
Psicothema ; 25(4): 520-8, 2013.
Article in English | MEDLINE | ID: mdl-24124787

ABSTRACT

BACKGROUND: Likelihood-based methods can work poorly when the residuals are not normally distributed and the variances across clusters are heterogeneous. METHOD: The performance of two estimation methods for fitting multilevel models, the non-parametric residual bootstrap (RB) and restricted maximum likelihood (REML), is compared through simulation studies in terms of bias, coverage, and precision. RESULTS: We find that (a) both methods produce unbiased estimates of the fixed parameters but biased estimates of the random parameters, although REML was more prone to give biased estimates of the variance components; (b) the RB method yields substantial reductions in the difference between nominal and actual confidence interval coverage compared with the REML method; and (c) for the square root of the mean squared error (RMSE) of the fixed effects, the RB method performed slightly better than the REML method. For the variance components, however, the RB method did not offer a systematic improvement over the REML method in terms of RMSE. CONCLUSIONS: It can be stated that the RB method is, in general, superior to the REML method when assumptions are violated.
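
As a rough illustration of the RB idea for a simple random-intercept model, the sketch below resamples the estimated cluster effects and level-1 residuals, rebuilds bootstrap responses, and refits the model. It is a simplified sketch: it omits the centering/reflation adjustments usually applied to the residuals and uses statsmodels' MixedLM rather than the authors' implementation; the variable names and data-generating values are assumptions.

```python
# Simplified non-parametric residual bootstrap for a random-intercept model.
# Illustrative only: no residual reflation, small bootstrap size, simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulated two-level data: 30 clusters of 10 observations each (assumed setup).
n_groups, n_per = 30, 10
g = np.repeat(np.arange(n_groups), n_per)
x = rng.normal(size=n_groups * n_per)
u = rng.normal(0, 0.7, n_groups)                        # cluster effects
y = 1.0 + 0.5 * x + u[g] + rng.normal(0, 1, len(x))     # level-1 errors
df = pd.DataFrame({"y": y, "x": x, "g": g})

fit = smf.mixedlm("y ~ x", df, groups=df["g"]).fit()
fixed = fit.fe_params                                    # fixed-effect estimates
u_hat = np.array([re.iloc[0] for re in fit.random_effects.values()])
e_hat = np.asarray(df["y"] - fit.fittedvalues)           # level-1 residuals

boot_slopes = []
for _ in range(200):                                     # small B for illustration
    u_star = rng.choice(u_hat, size=n_groups, replace=True)
    e_star = rng.choice(e_hat, size=len(x), replace=True)
    df["y_star"] = fixed["Intercept"] + fixed["x"] * x + u_star[g] + e_star
    refit = smf.mixedlm("y_star ~ x", df, groups=df["g"]).fit()
    boot_slopes.append(refit.fe_params["x"])

print("bootstrap 95% CI for slope:",
      np.percentile(boot_slopes, [2.5, 97.5]).round(3))
```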


Subject(s)
Likelihood Functions , Statistics, Nonparametric , Analysis of Variance , Bias
15.
Behav Res Methods ; 43(1): 18-36, 2011 Mar.
Article in English | MEDLINE | ID: mdl-21287107

ABSTRACT

This study examined the performance of the selection criteria available in the major statistical packages for both the mean model and the covariance structure. Unbalanced designs due to missing data, involving both moderate and large numbers of repeated measurements and varying total sample sizes, were investigated. The study also investigated the impact of using different estimation strategies for the information criteria, of different adjustments for calculating the criteria, and of different distribution shapes. Overall, we found that the ability of the consistent criteria, in any of their examined forms, to select the correct model was better under simple covariance patterns than under complex covariance patterns, and vice versa for the efficient criteria. The simulation studies covered in this paper also revealed that, regardless of the method of estimation used, the consistent criteria based on the number of subjects were more effective than the consistent criteria based on the total number of observations, and vice versa for the efficient criteria. Furthermore, the results indicated that, given a dataset with missing values, the efficient criteria were more affected than the consistent criteria by the lack of normality.


Subject(s)
Behavioral Sciences/statistics & numerical data , Models, Statistical , Algorithms , Analysis of Variance , Bayes Theorem , Computer Simulation , Data Interpretation, Statistical , Humans , Likelihood Functions , Longitudinal Studies/statistics & numerical data , Reproducibility of Results , Research Design , Sample Size
16.
An. psicol ; 26(2): 400-409, Jul.-Dec. 2010. tab
Article in Spanish | IBECS | ID: ibc-81975

ABSTRACT

This article evaluated the robustness of several approaches for analyzing repeated measures designs when the assumptions of normality and multisample sphericity are violated separately and jointly. Specifically, the authors' work compares the performance of two resampling methods, bootstrapping and permutation tests, with the performance of the usual analysis of variance (ANOVA) model and the mixed linear model procedure adjusted by the Kenward-Roger solution available in SAS PROC MIXED. The authors found that the permutation test outperformed the other three methods when the normality and sphericity assumptions did not hold. In contrast, when the normality and multisample sphericity assumptions were violated, the results clearly revealed that the Bootstrap-F test provided generally better control of Type I error rates than the permutation test and the mixed linear model approach. The performance of the ANOVA approach was considerably influenced by the presence of heterogeneity and the lack of sphericity, but scarcely affected by the absence of normality.
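
The Bootstrap-F logic referred to here follows a general recipe: impose the null hypothesis by centering each group at its own mean, resample within groups, and compare the observed F statistic with the resulting bootstrap distribution. The sketch below illustrates that recipe for a one-way heteroscedastic layout only; it is not the repeated measures implementation evaluated in the article, and the group sizes and variances are invented.

```python
# Generic Bootstrap-F sketch for a between-group effect under heteroscedasticity:
# center each group at its own mean so resampling is done under H0, resample
# within groups, and compare the observed F with the bootstrap distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical heteroscedastic groups (unequal sizes and variances).
groups = [rng.normal(0.0, 1.0, 20),
          rng.normal(0.3, 2.0, 35),
          rng.normal(0.6, 3.0, 50)]

f_obs = stats.f_oneway(*groups).statistic

centered = [g - g.mean() for g in groups]           # impose the null hypothesis
B = 5000
f_boot = np.empty(B)
for b in range(B):
    resampled = [rng.choice(g, size=len(g), replace=True) for g in centered]
    f_boot[b] = stats.f_oneway(*resampled).statistic

p_boot = np.mean(f_boot >= f_obs)                   # bootstrap p-value
crit = np.quantile(f_boot, 0.95)                    # bootstrap critical value
print(f"observed F = {f_obs:.3f}, bootstrap critical value = {crit:.3f}, p = {p_boot:.3f}")
```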


Subject(s)
Humans , Male , Female , Child , Adolescent , Child Rearing/psychology , Punishment/psychology , Parents/psychology , Education/methods , Family Relations , Gender Identity
17.
Rev. latinoam. psicol ; 42(2): 289-309, May-Aug. 2010. tab
Article in Spanish | LILACS | ID: lil-637075

ABSTRACT

In this research we examine the behaviour of five univariate statistics for analyzing data from a split-plot design. Four of them assume that the underlying dispersion matrix is non-spherical, but with a clear distinction between two alternatives: two procedures presuppose that the correlation between the data follows no particular structure, and the other two assume first-order serial autocorrelation. All of them were compared with regard to their robustness for testing the within-subject sources of variation (treatment and interaction) under non-normality in the absence of sphericity, both when there was first-order serial autocorrelation and when the underlying correlation was arbitrary. The results show that when the distribution is non-normal and symmetric, all the procedures show a Type I error rate similar to that obtained under a normal distribution. As the degree of skewness and kurtosis increases, all the procedures experience an alteration in their Type I error rate that depends on the structure of the covariance matrix underlying the data. Across the set of conditions studied, the most robust procedures were HCH, JN and LEC.

18.
Psicológica (Valencia, Ed. impr.) ; 31(1): 129-148, Jan.-Apr. 2010. tab
Article in Spanish | IBECS | ID: ibc-75796

ABSTRACT

The aim of this research was to compare the robustness of two heteroscedastic test statistics, the Welch-James statistic developed by Johansen (WJ) and the Box-type statistic developed by Brunner, Dette and Munk (BDM), together with the (non-heteroscedastic) General Linear Model (GLM), in two different ways depending on how the critical value is calculated: on the one hand, when the critical values are based on theoretical values (WJ, BDM and GLM, respectively), and on the other, when they are obtained by bootstrap resampling (WJB, BDMB and GLMB, respectively). To this end, a simulation study was carried out on a factorial design lacking homogeneity, normality and orthogonality. The results show that when the relation between cell size and variance size was positive, the WJ procedure was the most robust, and when the relation was negative, the most robust procedure was WJB. Both procedures behaved liberally when the shape of the distribution was skewed, increasingly so as the degree of inequality of the cell sizes and the heterogeneity of the variances increased.


Subject(s)
Humans , Male , Female , Analysis of Variance , Statistics as Topic/methods , Statistics as Topic/organization & administration , Statistics as Topic/trends , Data Interpretation, Statistical , Factor Analysis, Statistical , Bias , Observer Variation
19.
Psicothema ; 20(4): 969-73, 2008 Nov.
Article in Spanish | MEDLINE | ID: mdl-18940112

ABSTRACT

The current paper proposes a solution that generalizes the ideas of Brown and Forsythe to the problem of testing hypotheses in two-way classification designs with a heteroscedastic error structure. Unlike the standard analysis of variance, the proposed approach does not require the homogeneity assumption. A comprehensive simulation study, in which the sample size of the cells, the relationship between cell sizes and unequal variances, the degree of variance heterogeneity, and the population distribution shape were systematically manipulated, shows that the proposed approximation was generally robust when normality and homogeneity were jointly violated.
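
For context, the one-way statistic of Brown and Forsythe (1974), which the paper generalizes to two-way heteroscedastic designs, replaces the pooled error term of the classical F with a weighted combination of the group variances (standard textbook form; the paper's two-way generalization is not reproduced here).

```latex
% One-way Brown-Forsythe statistic for comparing J group means with
% heterogeneous variances: n_j and s_j^2 are the size and variance of group j,
% N is the total sample size, and the null distribution is approximated by an
% F distribution with Satterthwaite-type degrees of freedom.
\[
F^{*} =
\frac{\sum_{j=1}^{J} n_j\,\bigl(\bar{Y}_{j}-\bar{Y}\bigr)^{2}}
     {\sum_{j=1}^{J}\bigl(1-\tfrac{n_j}{N}\bigr)\,s_{j}^{2}}
\]
```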


Subject(s)
Models, Psychological , Surveys and Questionnaires , Factor Analysis, Statistical , Humans , Psychology/methods , Psychology/statistics & numerical data
20.
Psychol Rep ; 102(3): 643-56, 2008 Jun.
Article in English | MEDLINE | ID: mdl-18763432

ABSTRACT

The Type I error rates and powers of three recent tests for analyzing nonorthogonal factorial designs under departures from the assumptions of homogeneity and normality were evaluated using Monte Carlo simulation. Specifically, this work compared the performance of the modified Brown-Forsythe procedure, the generalization of Box's method proposed by Brunner, Dette, and Munk, and the mixed-model procedure adjusted by the Kenward-Roger solution available in the SAS statistical package. With regard to robustness, the three approaches adequately controlled the Type I error when the data were generated from symmetric distributions; however, this study's results indicate that, when the data were drawn from asymmetric distributions, the modified Brown-Forsythe approach controlled the Type I error slightly better than the other procedures. With regard to sensitivity, the highest power rates were obtained when the analyses were done with the MIXED procedure of the SAS program. Furthermore, the results also indicated that, when the data were generated from symmetric distributions, little power was sacrificed by using the generalization of Box's method in place of the modified Brown-Forsythe procedure.


Subject(s)
Models, Psychological , Psychological Tests , Factor Analysis, Statistical , Humans