Results 1 - 20 of 42
1.
Sci Rep ; 13(1): 5536, 2023 Apr 04.
Article in English | MEDLINE | ID: mdl-37015939

ABSTRACT

Climate change is a critical issue of our time, and its causes, pathways, and forecasts remain topics of broad discussion. In this paper, we present a novel data-driven pathway analysis framework to identify the key processes behind mean global temperature and sea level rise, and to forecast the magnitude of their increase from the present to 2100. Based on historical data and dynamic statistical modeling alone, we have established the causal pathways that connect increasing greenhouse gas emissions to increasing global mean temperature and sea level, with intermediate links encompassing humidity, sea ice coverage, and glacier mass, but not sunspot numbers. Our results indicate that if no action is taken to curb anthropogenic greenhouse gas emissions, the global average temperature would rise to an estimated 3.28 °C (2.46-4.10 °C) above its pre-industrial level, while the global sea level would be an estimated 573 mm (474-671 mm) above its 2021 mean by 2100. However, if countries adhere to the greenhouse gas emission regulations outlined in the 2021 United Nations Climate Change Conference (COP26), the rise in global temperature would lessen to an average increase of 1.88 °C (1.43-2.33 °C) above its pre-industrial level, albeit still higher than the targeted 1.5 °C, while the sea level increase would reduce to 449 mm (389-509 mm) above its 2021 mean by 2100.

2.
Psychol Methods ; 27(4): 519-540, 2022 Aug.
Article in English | MEDLINE | ID: mdl-34166048

ABSTRACT

In real data analysis with structural equation modeling, data are unlikely to be exactly normally distributed. If we ignore this non-normality, the parameter estimates, standard error estimates, and model fit statistics from normal-theory-based methods such as maximum likelihood (ML) and normal-theory-based generalized least squares (GLS) estimation are unreliable. On the other hand, the asymptotically distribution-free (ADF) estimator does not rely on any distributional assumption but cannot demonstrate its efficiency advantage at small and modest sample sizes. Methods that adopt misspecified loss functions, including ridge GLS (RGLS), can provide better estimates and inferences than the normal-theory-based methods and the ADF estimator in some cases. We propose a distributionally weighted least squares (DLS) estimator and expect it to perform better than the existing generalized least squares estimators because it combines normal-theory-based and ADF-based generalized least squares estimation. Computer simulation results suggest that model-implied covariance based DLS (DLSM) provides relatively accurate and efficient estimates in terms of RMSE. In addition, the empirical standard errors, the relative biases of standard error estimates, and the Type I error rates of the Jiang-Yuan rank-adjusted model fit test statistic (TJY) in DLSM were competitive with the classical methods including ML, GLS, and RGLS. The performance of DLSM depends on its tuning parameter a. We illustrate how to implement DLSM and select the optimal a by a bootstrap procedure in a real data example.
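
To make the combination concrete, a minimal sketch of the weight-matrix blend behind a DLS-type estimator follows, assuming hypothetical inputs gamma_n and gamma_adf for the normal-theory and ADF estimates of the asymptotic covariance matrix of the sample covariances; this illustrates the idea, not the authors' implementation.

```python
# Minimal sketch of the weight-matrix blend behind a DLS-type estimator.
# gamma_n and gamma_adf are hypothetical placeholders for the normal-theory
# and ADF estimates of the asymptotic covariance matrix of the sample
# covariances; the tuning parameter a mixes them as described above.
import numpy as np

def dls_weight(gamma_n: np.ndarray, gamma_adf: np.ndarray, a: float) -> np.ndarray:
    """Blend the two covariance estimates, then invert to get the GLS weight."""
    assert 0.0 <= a <= 1.0
    return np.linalg.inv(a * gamma_n + (1.0 - a) * gamma_adf)

def gls_discrepancy(s: np.ndarray, sigma: np.ndarray, w: np.ndarray) -> float:
    """Quadratic-form discrepancy F = (s - sigma)' W (s - sigma), where s and
    sigma stack the nonduplicated elements of the sample and model-implied
    covariance matrices."""
    r = s - sigma
    return float(r @ w @ r)
```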


Subject(s)
Least-Squares Analysis , Bias , Computer Simulation , Humans , Latent Class Analysis , Sample Size
3.
Psychometrika ; 86(4): 861-868, 2021 12.
Article in English | MEDLINE | ID: mdl-34401978

ABSTRACT

Sijtsma and Pfadt (Psychometrika, 2021) provide a wide-ranging defense of the use of coefficient alpha. Alpha is practical and useful when its limitations are acceptable. This paper discusses several methodologies for reliability, some new here, that go beyond alpha and were not emphasized by Sijtsma and Pfadt. Bentler's (Psychometrika 33:335-345, 1968, https://doi.org/10.1007/BF02289328) combined factor analysis (FA) and classical test theory (CTT) model, FACTT, provides a key conceptual foundation.


Subject(s)
Reproducibility of Results , Factor Analysis, Statistical , Psychometrics
4.
Am J Kidney Dis ; 71(4): 461-468, 2018 04.
Article in English | MEDLINE | ID: mdl-29128411

ABSTRACT

BACKGROUND: The Centers for Medicare & Medicaid Services require that dialysis patients' health-related quality of life be assessed annually. The primary instrument used for this purpose is the Kidney Disease Quality of Life 36-Item Short-Form Survey (KDQOL-36), which includes the SF-12 as its generic core and 3 kidney disease-targeted scales: Burden of Kidney Disease, Symptoms and Problems of Kidney Disease, and Effects of Kidney Disease. Despite its broad use, there has been limited evaluation of the KDQOL-36's psychometric properties. STUDY DESIGN: Secondary analyses of data collected by the Medical Education Institute to evaluate the reliability and factor structure of the KDQOL-36 scales. SETTING & PARTICIPANTS: KDQOL-36 responses from 70,786 dialysis patients in 1,381 US dialysis facilities that permitted data analysis were collected from June 1, 2015, through May 31, 2016, as part of routine clinical assessment. MEASUREMENTS & OUTCOMES: We assessed the KDQOL-36 scales' internal consistency reliability and dialysis facility-level reliability using coefficient alpha and 1-way analysis of variance. We evaluated the KDQOL-36's factor structure using item-to-total scale correlations and confirmatory factor analysis. Construct validity was examined using correlations between SF-12 and KDQOL-36 scales and "known groups" analyses. RESULTS: Each of the KDQOL-36's kidney disease-targeted scales had acceptable internal consistency reliability (α=0.83-0.85) and facility-level reliability (r=0.75-0.83). Item-scale correlations and a confirmatory factor analysis model supported the KDQOL-36's original factor structure. Construct validity was supported by large correlations between the SF-12 Physical Component Summary and Mental Component Summary (r=0.40-0.52) and the KDQOL-36 scale scores, as well as significant differences on the scale scores between patients receiving different types of dialysis, diabetic and nondiabetic patients, and patients who were employed full-time versus not. LIMITATIONS: Use of secondary data from a clinical registry. CONCLUSIONS: The study provides support for the reliability and construct validity of the KDQOL-36 scales for assessment of health-related quality of life among dialysis patients.


Subject(s)
Kidney Diseases/psychology , Psychometrics/methods , Quality of Life , Registries , Surveys and Questionnaires , Adolescent , Adult , Aged , Aged, 80 and over , Female , Follow-Up Studies , Humans , Kidney Diseases/epidemiology , Kidney Diseases/therapy , Male , Middle Aged , Morbidity , Renal Dialysis , Reproducibility of Results , Retrospective Studies , United States/epidemiology , Young Adult
5.
Psychol Methods ; 22(3): 527-540, 2017 09.
Article in English | MEDLINE | ID: mdl-27732051

ABSTRACT

Internal consistency reliability coefficients based on classical test theory, such as α, ω, λ4, model-based ρxx, and the greatest lower bound ρglb, are computed as ratios of estimated common variance to total variance. They omit specific variance. As a result, they are downward-biased and may fail to predict external criteria (McCrae et al., 2011). Some approaches for incorporating specific variance into reliability estimates are proposed and illustrated. The resulting specificity-enhanced coefficients α+, ω+, λ4+, ρxx+, and ρglb+ provide improved estimates of reliability and thus may be worth reporting in addition to their classical counterparts. The correction for attenuation, Spearman-Brown, and maximal reliability formulas are also extended to allow for specificity. Limitations, future work, and implications are discussed, including the role of specificity in quantifying the extent to which items represent important facets or nuances (McCrae, 2015) of content.
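
To make the variance-ratio framing concrete, here is a minimal sketch of classical coefficient alpha computed from a persons-by-items score matrix; the specificity-enhanced coefficients proposed in the paper are not reproduced here.

```python
# Classical coefficient alpha as a common-to-total variance ratio; this is
# the standard formula, not the specificity-enhanced alpha+ from the paper.
import numpy as np

def coefficient_alpha(X: np.ndarray) -> float:
    """Coefficient alpha for an (n_persons, k_items) score matrix."""
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = X.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)
```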


Subject(s)
Factor Analysis, Statistical , Models, Statistical , Psychometrics , Reproducibility of Results , Sensitivity and Specificity , Humans
6.
Psychometrika ; 81(4): 907-920, 2016 12.
Article in English | MEDLINE | ID: mdl-27734297

ABSTRACT

Classical test theory reliability coefficients are said to be population specific. Reliability generalization, a meta-analysis method, is the main procedure for evaluating the stability of reliability coefficients across populations. A new approach is developed to evaluate the degree of invariance of reliability coefficients to population characteristics. Factor or common variance of a reliability measure is partitioned into parts that are, and are not, influenced by control variables, resulting in a partition of reliability into a covariate-dependent and a covariate-free part. The approach can be implemented in a single sample and can be applied to a variety of reliability coefficients.


Subject(s)
Analysis of Variance , Factor Analysis, Statistical , Reproducibility of Results , Brain/anatomy & histology , Data Interpretation, Statistical , Humans , Intelligence , Organ Size , Psychometrics
7.
Psychometrika ; 80(1): 182-95, 2015 Mar.
Article in English | MEDLINE | ID: mdl-24306557

ABSTRACT

Extending the theory of lower bounds to reliability based on splits given by Guttman (Psychometrika 10:255-282, 1945), this paper introduces quantile lower bound coefficients λ4(Q) that refer to cumulative proportions of potential locally optimal "split-half" coefficients falling below a particular point Q in the distribution of split-halves based on different partitions of variables into two sets. Interesting quantile values are Q=0.05, 0.50, 0.95, 1.00, with λ4(0.05) ≤ λ4(0.50) ≤ λ4(0.95) ≤ λ4(1.0). Only the global optimum λ4(1.0), Guttman's maximal λ4, has previously been considered interesting, but in small samples it substantially overestimates population reliability ρ. The three coefficients λ4(0.05), λ4(0.50), and λ4(0.95) provide new lower bounds to reliability. The smallest, λ4(0.05), provides the most protection against capitalizing on chance associations, and thus against overestimation; λ4(0.50) is the median of these coefficients; and λ4(0.95) tends to overestimate reliability but exhibits less bias than previous estimators. Computational theory, an algorithm, and publicly available R code to compute these coefficients are provided. Simulation studies evaluate the performance of these coefficients and compare them to coefficient alpha and the greatest lower bound under several population reliability structures.
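
A minimal sketch of the quantile idea follows, assuming uniformly random rather than locally optimal splits (the paper's algorithm and its R code target the latter):

```python
# Quantile split-half lower bounds in the spirit of lambda4(Q): sample
# random partitions of the items into two halves, compute Guttman's
# split-half lambda4 for each, then read off the quantiles of interest.
import numpy as np

def lambda4_split(X: np.ndarray, half: np.ndarray) -> float:
    """Guttman's split-half coefficient for one partition (boolean mask)."""
    a = X[:, half].sum(axis=1)                 # half-test total scores
    b = X[:, ~half].sum(axis=1)
    return 4.0 * np.cov(a, b, ddof=1)[0, 1] / np.var(a + b, ddof=1)

def lambda4_quantiles(X, n_splits=1000, qs=(0.05, 0.50, 0.95, 1.00), seed=0):
    rng = np.random.default_rng(seed)
    k = X.shape[1]
    vals = []
    for _ in range(n_splits):
        half = np.zeros(k, dtype=bool)
        half[rng.choice(k, k // 2, replace=False)] = True
        vals.append(lambda4_split(X, half))
    return dict(zip(qs, np.quantile(vals, qs)))
```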


Subject(s)
Models, Statistical , Psychometrics , Reproducibility of Results , Algorithms , Humans , Psychometrics/methods
8.
Long Range Plann ; 47(3): 138-145, 2014 Jun 01.
Article in English | MEDLINE | ID: mdl-24926106

ABSTRACT

Rigdon (2012) suggests that partial least squares (PLS) can be improved by killing it, that is, by making it into a different methodology based on components. We provide some history on problems with component-type methods and develop some implications of Rigdon's suggestion. It seems more appropriate to maintain and improve PLS as far as possible, but also to freely utilize alternative models and methods when those are more relevant in certain data analytic situations. Huang's (2013) new consistent and efficient PLSe2 methodology is suggested as a candidate for an improved PLS.

9.
Front Psychol ; 5: 1515, 2014.
Article in English | MEDLINE | ID: mdl-25709585

ABSTRACT

Asymptotically optimal correlation structure methods with binary data can break down in small samples. A new correlation structure methodology based on a recently developed odds-ratio (OR) approximation to the tetrachoric correlation coefficient is proposed as an alternative to the LPB approach of Lee et al. (1995). Unweighted least squares (ULS) estimation with robust standard errors and generalized least squares (GLS) estimation methods were compared. Confidence intervals and tests for individual model parameters exhibited the best performance using the OR approach with ULS estimation. The goodness-of-fit chi-square test exhibited the best Type I error control using the LPB approach with ULS estimation.
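
The specific OR approximation evaluated in the paper is not given in the abstract; as an illustration of the general idea, here is Digby's (1983) classic odds-ratio approximation to the tetrachoric correlation:

```python
# Illustrative only: Digby's (1983) odds-ratio approximation to the
# tetrachoric correlation, r ~ (OR**0.75 - 1) / (OR**0.75 + 1), computed
# from the cell counts of a 2x2 table (all cells assumed nonzero). The
# "recently developed" approximation studied in the paper is a refinement
# in the same spirit and is not reproduced here.
def tetrachoric_digby(n11: int, n10: int, n01: int, n00: int) -> float:
    odds_ratio = (n11 * n00) / (n10 * n01)
    t = odds_ratio ** 0.75
    return (t - 1.0) / (t + 1.0)
```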

10.
J Subst Abuse Treat ; 46(3): 374-81, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24238716

ABSTRACT

The current study focuses on the relationships among a trauma history, a substance use history, chronic homelessness, and the mediating role of recent emotional distress in predicting drug treatment participation among homeless adults. We explored the predictors of participation in substance abuse treatment because enrolling and retaining clients in substance abuse treatment programs is always a challenge, particularly among homeless people. Participants were 853 homeless adults from Los Angeles, California. Using structural equation models, findings indicated that trauma history, substance use history, and chronicity of homelessness were associated with one another and were significant predictors of greater recent emotional distress. The most notable result was that recent emotional distress predicted less participation in current substance abuse treatment (both formal and self-help), whereas a substance use history alone predicted significantly more participation in treatment. Implications concerning treatment engagement and difficulties in obtaining appropriate dual-diagnosis services for homeless mentally distressed individuals are discussed.


Subject(s)
Ill-Housed Persons , Patient Participation , Substance-Related Disorders/therapy , Adult , Affective Symptoms/psychology , Aged , Female , Humans , Male , Middle Aged
11.
Span J Psychol ; 16: E76, 2013.
Article in English | MEDLINE | ID: mdl-24230939

ABSTRACT

Both the family and school environments influence adolescents' violence, but there is little research focusing simultaneously on the two contexts. This study analyzed the role of positive family and classroom environments as protective factors for adolescents' violence against authority (parent abuse and teacher abuse) and the relations between antisocial behavior and child-to-parent violence or student-to-teacher violence. The sample comprised 687 Spanish students aged 12-16 years, who responded to the Family Environment Scale (FES) and the Classroom Environment Scale (CES). Structural Equation Modeling was used to test our model of violent behavior towards authority based on Catalano and Hawkins' Social Developmental Model (1996). Perceived family cohesion and organization showed an inverse association with parent abuse, suggesting that a positive family environment was a protective factor for the development of violence against parents. Family and classroom environments had direct effects on adolescents' violence against authority, and antisocial behavior showed a mediating effect in this relationship. The model accounted for 81% of the variance in violence against authority. As family environment was a better predictor of violence against authority than school environment, intervention efforts to reduce rates of adolescent violence should focus on helping parents to increase family cohesion and to manage conflictive relationships with their children.


Subject(s)
Adolescent Behavior/psychology , Antisocial Personality Disorder/psychology , Family Relations , Social Environment , Violence/psychology , Adolescent , Child , Faculty , Female , Humans , Male , Models, Psychological , Parent-Child Relations , Schools
12.
Stat Med ; 32(24): 4229-39, 2013 Oct 30.
Article in English | MEDLINE | ID: mdl-23640746

ABSTRACT

High-dimensional longitudinal data involving latent variables that cannot be quantified directly, such as depression and anxiety, are often encountered in the biomedical and social sciences. Multiple responses are used to characterize these latent quantities, and repeated measures are collected to capture their trends over time. Furthermore, substantive research questions may concern issues, such as interrelated trends among latent variables, that can only be addressed by modeling them jointly. Although statistical analysis of univariate longitudinal data is well developed, methods for modeling multivariate high-dimensional longitudinal data are still under development. In this paper, we propose a latent factor linear mixed model (LFLMM) for analyzing this type of data. The model combines factor analysis with multivariate linear mixed models. Under this framework, we reduce the high-dimensional responses to low-dimensional latent factors via the factor analysis model, and then use the multivariate linear mixed model to study the longitudinal trends of these latent factors. We develop an expectation-maximization algorithm to estimate the model, use simulation studies to investigate its computational properties and to compare the LFLMM with other approaches for high-dimensional longitudinal data analysis, and illustrate the practical usefulness of the model with a real data example.
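
Schematically, such a two-stage model can be written as below; the notation is chosen here for illustration and may differ from the paper's.

```latex
% Measurement model: factor analysis reduces responses y_{it} to factors f_{it}.
y_{it} = \mu + \Lambda f_{it} + \varepsilon_{it}
% Structural model: a multivariate linear mixed model for the factor trends,
% with fixed effects \beta and subject-specific random effects b_i.
f_{it} = X_{it}\beta + Z_{it} b_i + e_{it}, \qquad b_i \sim N(0, \Psi)
```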


Subject(s)
Data Interpretation, Statistical , Factor Analysis, Statistical , Linear Models , Longitudinal Studies , Aged , Algorithms , Cognition/physiology , Female , Humans , Male , Physical Fitness/physiology , Physical Fitness/psychology
13.
Struct Equ Modeling ; 20(1): 148-156, 2013 Jan 01.
Article in English | MEDLINE | ID: mdl-23418401

ABSTRACT

Recently, a new mean-scaled and skewness-adjusted test statistic was developed for evaluating structural equation models in small samples and with potentially nonnormal data, but this statistic has received only limited evaluation. The performance of this statistic is compared with normal-theory maximum likelihood and two well-known robust test statistics. A modification to the Satorra-Bentler scaled statistic is developed for the condition in which the sample size is smaller than the degrees of freedom. The behavior of the four test statistics is evaluated with a Monte Carlo confirmatory factor analysis study that varies seven sample sizes and three distributional conditions obtained using Headrick's fifth-order transformation to nonnormality. The new statistic performs badly in most conditions except under the normal distribution. The goodness-of-fit χ2 test based on maximum likelihood estimation performed well under normal distributions as well as under a condition of asymptotic robustness. The Satorra-Bentler scaled test statistic performed best overall, while the mean-scaled and variance-adjusted test statistic outperformed the others at small and moderate sample sizes under certain distributional conditions.
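
For reference, the standard Satorra-Bentler scaling that the modification builds on divides the ML statistic by an estimated scaling constant; the paper adapts the estimation of this constant when the sample size falls below the degrees of freedom d.

```latex
% Standard Satorra-Bentler scaling: \hat{\Gamma} estimates the asymptotic
% covariance matrix of the sample covariances and \hat{U} is the usual
% residual weight matrix.
T_{SB} = \frac{T_{ML}}{\hat{c}}, \qquad
\hat{c} = \frac{\operatorname{tr}(\hat{U}\hat{\Gamma})}{d}
```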

14.
J Stat Comput Simul ; 83(1): 25-36, 2013.
Article in English | MEDLINE | ID: mdl-23329857

ABSTRACT

The item factor analysis model for investigating multidimensional latent spaces has proved useful. Parameter estimation in this model requires computationally demanding high-dimensional integrations. While several approaches to approximating such integrations have been proposed, they suffer from various computational difficulties. This paper proposes a Nesting Monte Carlo Expectation-Maximization (MCEM) algorithm for item factor analysis with binary data. Simulation studies and a real data example suggest that the Nesting MCEM approach can significantly improve computational efficiency while also enjoying the good properties of stable convergence and easy implementation.

15.
Comput Stat Data Anal ; 57(1): 392-403, 2013 Jan.
Article in English | MEDLINE | ID: mdl-22904587

ABSTRACT

Based on the Bayes modal estimate of factor scores in binary latent variable models, this paper proposes two new limited-information estimators for the factor analysis model with a logistic link function for binary data. The estimators are based on Bernoulli distributions up to the second and third order, with maximum likelihood estimation and Laplace approximations to the required integrals. These estimators and two existing limited-information weighted least squares estimators are studied empirically. The limited-information estimators compare favorably to full-information estimators based on marginal maximum likelihood, MCMC, and a multinomial distribution with a Laplace approximation methodology. Among the various estimators, Maydeu-Olivares and Joe's (2005) weighted least squares limited-information estimators, implemented with Laplace approximations for probabilities, are shown in a simulation to have the smallest root mean square errors.

16.
Span J Psychol ; 16: e76.1-e76.13, 2013.
Article in English | IBECS | ID: ibc-116440

ABSTRACT

Both the family and school environments influence adolescents’ violence, but there is little research focusing simultaneously on the two contexts. This study analyzed the role of positive family and classroom environments as protective factors for adolescents’ violence against authority (parent abuse and teacher abuse) and the relations between antisocial behavior and child-to-parent violence or student-to-teacher violence. The sample comprised 687 Spanish students aged 12-16 years, who responded to the Family Environment Scale (FES) and the Classroom Environment Scale (CES). Structural Equation Modeling was used to test our model of violent behavior towards authority based on Catalano and Hawkins’ Social Developmental Model (1996). Perceived family cohesion and organization showed an inverse association with parent abuse, suggesting that a positive family environment was a protective factor for the development of violence against parents. Family and classroom environments had direct effects on adolescents’ violence against authority, and antisocial behavior showed a mediating effect in this relationship. The model accounted for 81% of the variance in violence against authority. As family environment was a better predictor of violence against authority than school environment, intervention efforts to reduce rates of adolescent violence should focus on helping parents to increase family cohesion and to manage conflictive relationships with their children.


Subject(s)
Humans , Male , Female , Adolescent , Social Environment , Family/psychology , Adolescent Behavior/physiology , Adolescent Behavior/psychology , Students/psychology , Violence/psychology , Psychology, Adolescent/methods , Psychology, Adolescent/standards , Psychology, Adolescent/trends , Brief Psychiatric Rating Scale/standards , Psychometrics/methods
17.
Multivariate Behav Res ; 47(3): 442-447, 2012 Jan 01.
Article in English | MEDLINE | ID: mdl-23180888

ABSTRACT

Molenaar (2003, 2011) showed that a common factor model could be transformed into an equivalent model without factors, involving only observed variables and residual errors. He called this invertible transformation the Houdini transformation. His derivation involved concepts from time series and state space theory. This paper verifies the Houdini transformation on a general latent variable model using algebraic methods. The results show that the Houdini transformation is illusory, in the sense that the Houdini-transformed model remains a latent variable model. Contrary to common belief, a model that is a path model with only observed variables and residual errors may, in fact, be a latent variable model.

18.
Multivariate Behav Res ; 47(3): 448-462, 2012 Jan 01.
Article in English | MEDLINE | ID: mdl-23144511

ABSTRACT

Goodness-of-fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square, but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra and Bentler's mean-scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds a new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of the Satorra-Bentler statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic, in order to improve its robustness in small samples. A simple simulation study shows that this third-moment-adjusted statistic asymptotically performs on par with previously proposed methods and, at very small sample sizes, offers superior Type I error rates under a properly specified model. Data from Mardia, Kent, and Bibby's study of students tested for their ability in five content areas, with exams that were either open or closed book, were used to illustrate the real-world performance of this statistic.

19.
Multivariate Behav Res ; 47(4): 547-65, 2012 Jul.
Article in English | MEDLINE | ID: mdl-26777669

ABSTRACT

This article develops a procedure based on copulas to simulate multivariate nonnormal data that satisfy a prespecified variance-covariance matrix. The covariance matrix used can comply with a specific moment structure form (e.g., a factor analysis or a general structural equation model). Thus, the method is particularly useful for Monte Carlo evaluation of structural equation models within the context of nonnormal data. The new procedure for nonnormal data simulation is theoretically described and also implemented in the widely used R environment. The quality of the method is assessed by Monte Carlo simulations. A 1-sample test on the observed covariance matrix based on the copula methodology is proposed. This new test for evaluating the quality of a simulation is defined through a particular structural model specification and is robust against normality violations.
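
A minimal Gaussian-copula (NORTA-style) sketch of the general approach follows; calibrating the normal correlation matrix so that the output matches a prespecified covariance exactly is the nontrivial step the paper's method addresses, and is not done here.

```python
# Draw multivariate normal deviates, map them to uniforms through the normal
# CDF, then through the inverse CDFs of the desired nonnormal margins. The
# correlation matrix R is used as-is for illustration only.
import numpy as np
from scipy import stats

def copula_sample(R: np.ndarray, margins, n: int, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(len(margins)), R, size=n)
    u = stats.norm.cdf(z)                       # uniforms with dependence R
    return np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(margins)])

# Example: three skewed margins with a common latent correlation of 0.3.
R = np.full((3, 3), 0.3)
np.fill_diagonal(R, 1.0)
X = copula_sample(R, [stats.expon(), stats.lognorm(0.5), stats.gamma(2.0)], n=1000)
```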

20.
Psychometrika ; 77(3): 442-54, 2012 Jul.
Article in English | MEDLINE | ID: mdl-27519775

ABSTRACT

Bi-factor analysis is a form of confirmatory factor analysis originally introduced by Holzinger and Swineford (Psychometrika 2:41-54, 1937). The bi-factor model has a general factor, a number of group factors, and an explicit bi-factor structure. Jennrich and Bentler (Psychometrika 76:537-549, 2011) introduced an exploratory form of bi-factor analysis that does not require one to provide an explicit bi-factor structure a priori. They use exploratory factor analysis and a bi-factor rotation criterion designed to produce a rotated loading matrix that has an approximate bi-factor structure. Among other things, this can be used as an aid in finding an explicit bi-factor structure for use in a confirmatory bi-factor analysis. They considered only orthogonal rotation. The purpose of this paper is to consider oblique rotation and to compare it with orthogonal rotation. Because there are many more oblique rotations of an initial loading matrix than orthogonal rotations, one expects oblique results to approximate a bi-factor structure better than orthogonal rotations, and this is indeed the case. A surprising result arises when oblique bi-factor rotation methods are applied to ideal data.
