Results 1 - 20 of 32
1.
Educ Psychol Meas ; 81(4): 791-810, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34267401

ABSTRACT

The population discrepancy between unstandardized and standardized reliability of homogeneous multicomponent measuring instruments is examined. Within a latent variable modeling framework, it is shown that the standardized reliability coefficient for unidimensional scales can be markedly higher than the corresponding unstandardized reliability coefficient, or alternatively substantially lower than the latter. Based on these findings, it is recommended that scholars avoid estimating, reporting, interpreting, or using standardized scale reliability coefficients in empirical research, unless they have strong reasons to consider standardizing the original components of utilized scales.
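
The core contrast is easy to reproduce outside the authors' latent variable modeling framework. Below is a minimal numpy sketch (an illustration, not the article's estimator): the unstandardized coefficient is computed from the item covariance matrix and its standardized analogue from the correlation matrix, with hypothetical loadings and error variances chosen so the two diverge.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-factor (congeneric) data: unequal loadings and error variances,
# exactly the setting where the two coefficients can diverge.
n = 100_000
loadings = np.array([0.9, 0.8, 0.5, 0.3])
err_sd = np.array([0.3, 0.6, 1.0, 1.4])
eta = rng.standard_normal(n)
X = eta[:, None] * loadings + rng.standard_normal((n, 4)) * err_sd

def alpha(M):
    """Coefficient alpha from a covariance (or correlation) matrix M."""
    p = M.shape[0]
    return p / (p - 1) * (1 - np.trace(M) / M.sum())

S = np.cov(X, rowvar=False)        # unstandardized: covariance matrix
R = np.corrcoef(X, rowvar=False)   # standardized: correlation matrix
print(f"unstandardized alpha: {alpha(S):.3f}")
print(f"standardized alpha:   {alpha(R):.3f}")
```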

2.
Educ Psychol Meas ; 80(3): 604-612, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32425221

ABSTRACT

This note raises caution that a finding of a marked pseudo-guessing parameter for an item within a three-parameter item response model could be spurious in a population with substantial unobserved heterogeneity. A numerical example is presented in which the two-parameter logistic model generates the data on a multi-item measuring instrument within each of two latent classes, yet the three-parameter logistic model fitted to the pooled data is associated with a considerable pseudo-guessing parameter estimate for one of the items. The implications of the reported results for empirical educational research are subsequently discussed.
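
The mechanism can be illustrated with a short numpy/scipy sketch (hypothetical parameter values, not the article's example): within each of two classes the item follows a 2PL, yet the class-weighted pooled response function levels off well above zero at low ability, which a fitted 3PL would absorb into a pseudo-guessing estimate.

```python
import numpy as np
from scipy.stats import norm

def p2pl(theta, a, b):
    """Two-parameter logistic item response function."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Two latent classes with different ability distributions and item parameters.
w1, w2 = 0.5, 0.5          # class proportions
m1, m2 = 0.0, -1.5         # class ability means (sd = 1 in each class)
a1, b1 = 1.5, 0.5          # item is hard in class 1 ...
a2, b2 = 1.0, -2.5         # ... but very easy in class 2

theta = np.linspace(-4, 4, 9)
# P(class | theta) by Bayes' rule, then the pooled response function.
d1 = w1 * norm.pdf(theta, m1)
d2 = w2 * norm.pdf(theta, m2)
post1 = d1 / (d1 + d2)
pooled = post1 * p2pl(theta, a1, b1) + (1 - post1) * p2pl(theta, a2, b2)

for t, p in zip(theta, pooled):
    print(f"theta = {t:+.1f}   P(correct) = {p:.3f}")
# At very low theta the pooled curve stays well above zero -- a 3PL fitted
# to such data would report a sizable pseudo-guessing estimate.
```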

3.
Educ Psychol Meas ; 80(1): 199-209, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31933499

ABSTRACT

Equating of psychometric scales and tests is frequently required and conducted in educational, behavioral, and clinical research. Construct comparability or equivalence between measuring instruments is a necessary condition for making decisions about linking and equating resulting scores. This article is concerned with a widely applicable method for examining whether two scales or tests cannot be equated. A latent variable modeling method is discussed that can be used to evaluate whether the tests, or parts thereof, measure latent constructs that are distinct from each other. The approach can be routinely used before an equating procedure is undertaken, in order to assess whether equating could be meaningfully carried out to begin with. The procedure is readily applicable in empirical research using popular software. The method is illustrated with data from dementia screening test batteries administered as part of two studies designed to evaluate a wide range of biomarkers throughout the progression from normal aging to dementia or Alzheimer's disease.
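
As a rough sketch of how such a check might be run in practice, the following uses the open-source semopy package (an assumption; the abstract does not name the software, and the file and variable names are placeholders). It compares a two-factor model with freely correlated factors against a single-factor model, the nested special case in which both tests measure one and the same construct.

```python
# pip install semopy
import pandas as pd
from scipy.stats import chi2
from semopy import Model, calc_stats

data = pd.read_csv("scores.csv")  # placeholder: columns x1..x3, y1..y3

two_factor = Model("""
F1 =~ x1 + x2 + x3
F2 =~ y1 + y2 + y3
F1 ~~ F2
""")
one_factor = Model("""
F =~ x1 + x2 + x3 + y1 + y2 + y3
""")
two_factor.fit(data)
one_factor.fit(data)

s2, s1 = calc_stats(two_factor), calc_stats(one_factor)
d_chi2 = s1["chi2"].iloc[0] - s2["chi2"].iloc[0]
d_dof = s1["DoF"].iloc[0] - s2["DoF"].iloc[0]
p = chi2.sf(d_chi2, d_dof)
print(f"LR test of 'single construct': chi2({d_dof}) = {d_chi2:.2f}, p = {p:.4f}")
# A small p-value indicates the two scales measure distinct constructs,
# cautioning against equating them.
```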

4.
Educ Psychol Meas ; 79(6): 1198-1209, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31619845

ABSTRACT

This note highlights and illustrates the links between item response theory and classical test theory in the context of polytomous items. An item response modeling procedure is discussed that can be used for point and interval estimation of the individual true score on any item in a measuring instrument or item set following the popular and widely applicable graded response model. The method contributes to the body of research on the relationships between classical test theory and item response theory and is illustrated on empirical data.
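
A minimal numpy sketch of the idea, assuming known (hypothetical) graded response model parameters: the item true score is the model-implied expected item response at a person's ability value, and because that expectation is monotonically increasing in theta, an interval estimate follows by transforming the endpoints of theta's confidence interval.

```python
import numpy as np

def grm_probs(theta, a, bs):
    """Category probabilities under the graded response model."""
    # P(X >= k | theta) at each ordered threshold, padded with 1 and 0.
    cum = 1 / (1 + np.exp(-a * (theta - np.asarray(bs))))
    cum = np.concatenate(([1.0], cum, [0.0]))
    return cum[:-1] - cum[1:]          # P(X = 0), ..., P(X = K)

def item_true_score(theta, a, bs):
    """Expected item score E(X | theta): the item true score."""
    probs = grm_probs(theta, a, bs)
    return np.dot(np.arange(len(probs)), probs)

# Hypothetical item: discrimination 1.4, four ordered categories (0..3).
a, bs = 1.4, [-1.0, 0.2, 1.3]
theta_hat, se = 0.5, 0.30              # person's ability estimate and its SE

point = item_true_score(theta_hat, a, bs)
lo = item_true_score(theta_hat - 1.96 * se, a, bs)  # E(X|theta) is monotone
hi = item_true_score(theta_hat + 1.96 * se, a, bs)  # increasing in theta
print(f"true score estimate: {point:.3f}, 95% CI: ({lo:.3f}, {hi:.3f})")
```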

5.
Educ Psychol Meas ; 79(3): 598-609, 2019 Jun.
Article in English | MEDLINE | ID: mdl-31105325

ABSTRACT

Longitudinal studies have steadily grown in popularity across the educational and behavioral sciences, particularly with the increased availability of technological devices that allow the easy collection of repeated measures on multiple dimensions of substantive relevance. This article discusses a procedure that can be used to evaluate population differences in within-person (intraindividual) variability in such longitudinal investigations. The method is based on an application of the latent variable modeling methodology within a two-level modeling framework. The approach is used to obtain point and interval estimates of the differences in within-person variance and in the strength of correlative effects of repeated measures between normal and very mildly demented persons in a longitudinal study of a diagnostic cognitive test assessing verbal episodic memory.
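
As a simplified stand-in for the two-level latent variable model (not the article's procedure), one can estimate each person's within-person variance across the repeated measures and bootstrap an interval for the group difference; all data below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def group_diff(y_a, y_b):
    """Difference in mean within-person variance between two groups;
    rows are persons, columns are repeated measurements."""
    return y_a.var(axis=1, ddof=1).mean() - y_b.var(axis=1, ddof=1).mean()

# Placeholder data: 80 normal and 40 very mildly demented persons, 5 waves.
normal = rng.normal(0, 1.0, size=(80, 5))
demented = rng.normal(0, 1.5, size=(40, 5))

est = group_diff(normal, demented)
boot = np.array([
    group_diff(normal[rng.integers(0, 80, 80)],
               demented[rng.integers(0, 40, 40)])
    for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"variance difference: {est:.3f}, 95% bootstrap CI: ({lo:.3f}, {hi:.3f})")
```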

6.
Educ Psychol Meas ; 79(2): 399-412, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30911199

ABSTRACT

This note confronts the common use of a single coefficient alpha as an index of reliability for a multicomponent measurement instrument in a heterogeneous population. In finite mixture settings, two or more alpha coefficients could instead be meaningfully associated with a given instrument, and such settings are increasingly likely in empirical educational and psychological research. It is argued that in these situations explicit examination of class invariance in the alpha coefficient must precede any statement about its possible value in the studied population. The approach also permits evaluation of between-class alpha differences as well as point and interval estimation of the within-class alpha coefficients. The method can similarly be used (a) with known class membership, when distinct (sub)populations are investigated, their number is known beforehand, and membership in them is observed for the studied persons, and (b) in settings where only the number of latent classes in the population under investigation is known. The outlined procedure is illustrated with numerical data.
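
For the known-class case (a), a minimal sketch follows: within-class alphas with a bootstrap interval for their difference (placeholder data; the latent-class case (b) would additionally require estimating class membership with a finite mixture model, which is not shown).

```python
import numpy as np

rng = np.random.default_rng(2)

def cronbach_alpha(X):
    """Sample coefficient alpha for an n-by-p item score matrix."""
    S = np.cov(X, rowvar=False)
    p = S.shape[0]
    return p / (p - 1) * (1 - np.trace(S) / S.sum())

# Placeholder data for two known classes answering the same 6-item scale.
class1 = rng.multivariate_normal(np.zeros(6), np.eye(6) * 0.5 + 0.5, size=300)
class2 = rng.multivariate_normal(np.zeros(6), np.eye(6) * 0.8 + 0.2, size=300)

diff = cronbach_alpha(class1) - cronbach_alpha(class2)
boot = np.array([
    cronbach_alpha(class1[rng.integers(0, 300, 300)])
    - cronbach_alpha(class2[rng.integers(0, 300, 300)])
    for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"alpha difference: {diff:.3f}, 95% bootstrap CI: ({lo:.3f}, {hi:.3f})")
# A CI excluding zero signals class-variant alpha; reporting a single
# population alpha would then be misleading.
```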

7.
Educ Psychol Meas ; 79(1): 200-210, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30636788

ABSTRACT

This note discusses the merits of coefficient alpha and the conditions under which they hold, in light of recent critical publications that overlook significant research findings from the past several decades. That earlier research demonstrated the empirical relevance and utility of coefficient alpha under specific circumstances. The article highlights the fact that, as an index of multiple-component measuring instrument reliability, coefficient alpha is a dependable reliability estimator when those circumstances obtain. Alpha should therefore remain in service when these conditions are fulfilled rather than be abandoned.

8.
Educ Psychol Meas ; 79(4): 796-807, 2019 Aug.
Article in English | MEDLINE | ID: mdl-32655184

ABSTRACT

Building on prior research on the relationships between key concepts in item response theory and classical test theory, this note contributes to highlighting their important and useful links. A readily and widely applicable latent variable modeling procedure is discussed that can be used for point and interval estimation of the individual person true score on any item in a unidimensional multicomponent measuring instrument or item set under consideration. The method adds to the body of research on the connections between classical test theory and item response theory. The outlined estimation approach is illustrated on empirical data.

9.
Educ Psychol Meas ; 78(6): 1123-1135, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30559517

ABSTRACT

A readily applicable procedure is discussed that allows evaluation of the discrepancy between the popular coefficient alpha and the reliability coefficient of a scale with second-order factorial structure, a structure frequently of relevance in empirical educational and psychological research. The approach is developed within the framework of the widely used latent variable modeling methodology and permits point and interval estimation of the slippage of alpha from scale reliability in a population under investigation. The method is useful when examining the consistency of complex-structure measuring instruments assessing higher order latent constructs and, under its assumptions, represents a generally recommendable alternative to coefficient alpha. The outlined procedure is illustrated using data from an authoritarianism study.
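
The population slippage is straightforward to compute once a second-order model is written down. The following numpy sketch uses hypothetical parameter values: it builds the model-implied covariance matrix and compares coefficient alpha with the composite (true score) reliability.

```python
import numpy as np

# Hypothetical second-order model: global factor G -> three first-order
# factors, each measured by three items (all values illustrative).
gamma = np.array([0.8, 0.7, 0.6])            # second-order loadings
psi = 1 - gamma**2                           # disturbance variances
Phi = np.outer(gamma, gamma) + np.diag(psi)  # first-order factor covariances

Lam = np.zeros((9, 3))                       # item loadings, simple structure
Lam[0:3, 0] = [0.9, 0.7, 0.5]
Lam[3:6, 1] = [0.8, 0.6, 0.4]
Lam[6:9, 2] = [0.9, 0.8, 0.3]
Theta = np.diag(1 - np.diag(Lam @ Phi @ Lam.T))  # unit item variances

Sigma = Lam @ Phi @ Lam.T + Theta            # model-implied covariance matrix
one = np.ones(9)

reliability = one @ (Lam @ Phi @ Lam.T) @ one / (one @ Sigma @ one)
alpha = 9 / 8 * (1 - np.trace(Sigma) / Sigma.sum())
print(f"composite reliability: {reliability:.3f}")
print(f"coefficient alpha:     {alpha:.3f}  (slippage: {reliability - alpha:.3f})")
```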

10.
Educ Psychol Meas ; 78(3): 504-516, 2018 Jun.
Article in English | MEDLINE | ID: mdl-30140104

ABSTRACT

This article outlines a procedure for examining the degree to which a common factor may dominate additional factors in a multicomponent measuring instrument consisting of binary items. The procedure rests on an application of the latent variable modeling methodology and accounts for the discrete nature of the manifest indicators. The method provides point and interval estimates of (a) the proportion of the variance explained by all factors that is due to the common (global) factor and (b) the proportion of the variance explained by all factors that is due to some or all other (local) factors. The discussed approach can also be readily used as a means of assessing approximate unidimensionality when considering application of unidimensional versus multidimensional item response modeling. The procedure is similarly applicable in the case of highly discrete (e.g., Likert-type) ordinal items and is illustrated with a numerical example.
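
Under a bifactor-type representation with standardized loadings, proportion (a) reduces to the explained common variance (ECV). A minimal sketch with hypothetical loadings:

```python
import numpy as np

# Hypothetical standardized bifactor loadings for 8 binary items:
# one global factor plus local (group) factors.
lam_global = np.array([0.7, 0.6, 0.65, 0.55, 0.6, 0.5, 0.7, 0.45])
lam_local = np.array([0.4, 0.35, 0.3, 0.45, 0.25, 0.3, 0.2, 0.35])

common_global = np.sum(lam_global**2)   # common variance from global factor
common_local = np.sum(lam_local**2)     # common variance from local factors

ecv = common_global / (common_global + common_local)
print(f"proportion of common variance due to the global factor: {ecv:.3f}")
print(f"proportion due to the local factors: {1 - ecv:.3f}")
# A high global share supports treating the instrument as approximately
# unidimensional for item response modeling purposes.
```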

11.
Educ Psychol Meas ; 78(4): 708-712, 2018 Aug.
Article in English | MEDLINE | ID: mdl-30147123

ABSTRACT

This note extends the results of the 2016 article by Raykov, Marcoulides, and Li to the case of correlated errors in a set of observed measures subjected to principal component analysis. It is shown that when at least two measures are fallible, the probability is zero for any principal component, and in particular the first, to be error-free. In conjunction with the findings in Raykov et al., it is concluded that no principal component can in practice be perfectly reliable for a set of observed variables that are not all free of measurement error, whether or not their error terms correlate.

12.
Educ Psychol Meas ; 78(1): 167-174, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29795951

ABSTRACT

This article extends the procedure outlined by Raykov, Marcoulides, and Tong for testing congruence of latent constructs to the setting of binary items and clustering effects. In this widely encountered setting in contemporary educational and psychological research, the method can be used to examine whether two or more homogeneous multicomponent instruments with distinct components measure the same construct. The approach is useful in scale construction and development research as well as in construct validation investigations. The discussed method is illustrated with data from a scholastic aptitude assessment study.

13.
Educ Psychol Meas ; 78(2): 343-352, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29795959

ABSTRACT

A latent variable modeling method for studying measurement invariance when evaluating latent constructs with multiple binary or binary-scored items with no guessing is outlined. The approach extends the continuous-indicator procedure described by Raykov and colleagues, similarly utilizes the false discovery rate approach to multiple testing, and permits one to locate violations of measurement invariance in loading or threshold parameters. The discussed method does not require selection of a reference observed variable and is directly applicable for studying differential item functioning with one- or two-parameter item response models. The extended procedure is illustrated on an empirical data set.
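
The false discovery rate step can be carried out with the Benjamini-Hochberg procedure, for example via statsmodels; the per-parameter p-values below are placeholders standing in for the invariance tests produced by the latent variable model fits.

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Placeholder p-values from per-item tests of loading/threshold equality
# across groups (one test per parameter, as produced by the model fits).
pvals = np.array([0.001, 0.240, 0.013, 0.650, 0.048, 0.003, 0.380, 0.090])
items = [f"item{i + 1}" for i in range(len(pvals))]

reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for item, p, pa, r in zip(items, pvals, p_adj, reject):
    flag = "noninvariant" if r else "invariant"
    print(f"{item}: p = {p:.3f}, BH-adjusted = {pa:.3f} -> {flag}")
```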

14.
Educ Psychol Meas ; 78(5): 905-917, 2018 Oct.
Article in English | MEDLINE | ID: mdl-32655175

ABSTRACT

Validity coefficients for multicomponent measuring instruments are known to be attenuated by measurement error, which also affects their associated standard errors and the results of statistical tests concerning population parameter values. To account for measurement error, a latent variable modeling approach is discussed that allows point and interval estimation of the relationship of an underlying latent factor to a criterion variable in a setting more general than the commonly considered homogeneous psychometric test case. The method is particularly helpful in validity studies for scales with a second-order factorial structure, as it allows evaluation of the relationship between the second-order factor and a criterion variable. The procedure is similarly useful in studies of discriminant, convergent, concurrent, and predictive validity of measuring instruments with complex latent structure, and is readily applicable when measuring interrelated traits that share a common variance source. The outlined approach is illustrated using data from an authoritarianism study.
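
The attenuation the article addresses can be illustrated with the classical disattenuation formula, a simplification of the latent variable approach: the observed scale-criterion correlation understates the factor-criterion correlation by the square root of the composite's reliability. A numpy sketch with hypothetical values:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate a latent factor, a 5-item scale measuring it, and a criterion.
n, rho_true = 200_000, 0.6            # true factor-criterion correlation
eta = rng.standard_normal(n)
criterion = rho_true * eta + np.sqrt(1 - rho_true**2) * rng.standard_normal(n)
X = eta[:, None] * 0.7 + rng.standard_normal((n, 5)) * np.sqrt(1 - 0.49)

scale = X.sum(axis=1)
r_obs = np.corrcoef(scale, criterion)[0, 1]

# Composite reliability of the sum score (known here by construction).
rel = (5 * 0.7)**2 / ((5 * 0.7)**2 + 5 * (1 - 0.49))
r_corrected = r_obs / np.sqrt(rel)
print(f"observed validity: {r_obs:.3f}")
print(f"disattenuated:     {r_corrected:.3f}  (true value: {rho_true})")
```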

15.
Educ Psychol Meas ; 77(1): 165-178, 2017 Jan.
Article in English | MEDLINE | ID: mdl-29795908

ABSTRACT

The measurement error in principal components extracted from a set of fallible measures is discussed and evaluated. It is shown that as long as one or more measures in a given set of observed variables contain error of measurement, so does any principal component obtained from the set. The error variance in any principal component is shown to be (a) bounded from below by the smallest error variance of a variable in the analyzed set and (b) bounded from above by the largest error variance of a variable in that set. In the case of a unidimensional set of analyzed measures, it is pointed out that the reliability and criterion validity of any principal component are bounded from above by the corresponding coefficients of the optimal linear combination with maximal reliability and criterion validity (for a criterion unrelated to the error terms in the individual measures). The discussed psychometric features of principal components are illustrated on a numerical data set.
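
The bounds follow because a principal component's error variance is a convex combination of the item error variances (its weight vector has unit length). A quick numeric check with hypothetical values:

```python
import numpy as np

# Hypothetical one-factor measures with known error (uniqueness) variances.
loadings = np.array([0.9, 0.8, 0.7, 0.6, 0.5])
err_var = np.array([0.19, 0.36, 0.51, 0.64, 0.75])
Sigma = np.outer(loadings, loadings) + np.diag(err_var)

# Principal components of the population covariance matrix.
_, vecs = np.linalg.eigh(Sigma)
for k, w in enumerate(vecs.T[::-1], start=1):  # largest eigenvalue first
    pc_err = w @ np.diag(err_var) @ w          # error variance of w'X
    print(f"PC{k}: error variance = {pc_err:.3f}")

print(f"bounds: [{err_var.min():.3f}, {err_var.max():.3f}]")
# Every component's error variance is strictly positive and falls within
# the bounds, so no principal component is error-free.
```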

16.
Educ Psychol Meas ; 77(2): 351-361, 2017 Apr.
Article in English | MEDLINE | ID: mdl-29795917

ABSTRACT

This note is concerned with examining the relationship between within-group and between-group variances in two-level nested designs. A latent variable modeling approach is outlined that permits point and interval estimation of their ratio and allows their comparison in a multilevel study. The procedure can also be used to test various hypotheses about the discrepancy between these two variances and to assist with interpreting their relationship in empirical investigations. The method can further be utilized as an addendum to point and interval estimation of the popular intraclass correlation coefficient in hierarchical designs. The discussed approach is illustrated with a numerical example.
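
As a moment-based stand-in for the latent variable method, the one-way random-effects ANOVA estimators give the two variance components, their ratio, and the intraclass correlation for a balanced design; the data below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)

# Balanced two-level placeholder data: 50 clusters of size 20.
J, n = 50, 20
cluster_means = rng.normal(0, np.sqrt(0.3), size=J)       # between var 0.3
y = cluster_means[:, None] + rng.normal(0, 1.0, (J, n))   # within var 1.0

# One-way random-effects ANOVA variance-component estimators.
msb = n * np.var(y.mean(axis=1), ddof=1)       # between-cluster mean square
msw = np.mean(np.var(y, axis=1, ddof=1))       # within-cluster mean square
sigma2_w = msw
sigma2_b = max((msb - msw) / n, 0.0)

print(f"within-group variance:  {sigma2_w:.3f}")
print(f"between-group variance: {sigma2_b:.3f}")
print(f"within/between ratio:   {sigma2_w / sigma2_b:.2f}")
print(f"ICC: {sigma2_b / (sigma2_b + sigma2_w):.3f}")
```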

17.
Educ Psychol Meas ; 76(2): 325-338, 2016 Apr.
Article in English | MEDLINE | ID: mdl-29795868

ABSTRACT

The frequently neglected and often misunderstood relationship between classical test theory and item response theory is discussed for the unidimensional case with binary measures and no guessing. It is pointed out that popular item response models can be directly obtained from classical test theory-based models by accounting for the discrete nature of the observed items. Two distinct observational equivalence approaches are outlined, each of which can be used to obtain the item response models from corresponding classical test theory-based models. Conversely, the classical test theory models can be furnished from the item response models by the reverse application of either approach.
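
One of the two routes is easy to verify by simulation: dichotomizing a classical true-score-plus-normal-error variable at a threshold yields exactly the two-parameter normal-ogive response probabilities. A sketch with hypothetical values:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)

# Classical model: latent continuous response = true score + normal error,
# dichotomized at a threshold tau (all values hypothetical).
tau, err_sd = 0.4, 0.8
a, b = 1 / err_sd, tau      # implied normal-ogive item parameters

for theta in (-2.0, -1.0, 0.0, 1.0, 2.0):
    y_star = theta + rng.normal(0, err_sd, 500_000)
    simulated = (y_star > tau).mean()
    model = norm.cdf(a * (theta - b))
    print(f"theta = {theta:+.1f}: simulated {simulated:.4f} vs ogive {model:.4f}")
# The dichotomized CTT variable reproduces the two-parameter normal-ogive
# model, illustrating the observational equivalence the note describes.
```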

18.
Educ Psychol Meas ; 76(5): 873-884, 2016 Oct.
Article in English | MEDLINE | ID: mdl-29795892

ABSTRACT

A latent variable modeling procedure is discussed that can be used to test if two or more homogeneous multicomponent instruments with distinct components are measuring the same underlying construct. The method is widely applicable in scale construction and development research and can also be of special interest in construct validation studies. The approach can be readily utilized in empirical settings with observed measure nonnormality and/or incomplete data sets. The procedure is based on testing model nesting restrictions, and it can be similarly employed to examine the collapsibility of latent variables evaluated by multidimensional measuring instruments. The outlined method is illustrated with two data examples.

19.
Educ Psychol Meas ; 76(6): 1026-1044, 2016 Dec.
Article in English | MEDLINE | ID: mdl-29795899

ABSTRACT

A method for evaluating the validity of multicomponent measurement instruments in heterogeneous populations is discussed. The procedure can be used for point and interval estimation of the criterion validity of linear composites in populations representing mixtures of an unknown number of latent classes. The approach also permits evaluation of between-class validity differences as well as of within-class validity coefficients. The method can similarly be used (a) with known class membership, when distinct populations are investigated, their number is known beforehand, and membership in them is observed for the studied subjects, and (b) in settings where only the number of latent classes is known. The discussed procedure is illustrated with numerical data.

20.
Educ Psychol Meas ; 75(1): 146-156, 2015 Feb.
Article in English | MEDLINE | ID: mdl-29795816

ABSTRACT

A direct approach to point and interval estimation of Cronbach's coefficient alpha for multiple-component measuring instruments is outlined. The procedure is based on a latent variable modeling application with widely circulated software. As a by-product, using sample data, the method permits ascertaining whether the population discrepancy between alpha and the composite reliability coefficient may be practically negligible for a given empirical setting. The outlined approach is illustrated with numerical data.
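
A software-agnostic sketch of the same two outputs (not the latent variable modeling application itself): a bootstrap point and interval estimate of alpha, plus a one-factor fit (here via scikit-learn's FactorAnalysis) to gauge the alpha versus composite-reliability discrepancy. All data are placeholders.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(7)

def cronbach_alpha(X):
    """Sample coefficient alpha for an n-by-p item score matrix."""
    S = np.cov(X, rowvar=False)
    p = S.shape[0]
    return p / (p - 1) * (1 - np.trace(S) / S.sum())

# Placeholder congeneric data for a 5-component instrument.
lam = np.array([0.8, 0.75, 0.7, 0.65, 0.6])
eta = rng.standard_normal(400)
X = eta[:, None] * lam + rng.standard_normal((400, 5)) * np.sqrt(1 - lam**2)

a_hat = cronbach_alpha(X)
boot = np.array([cronbach_alpha(X[rng.integers(0, 400, 400)])
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])

# One-factor fit to gauge the alpha vs composite-reliability discrepancy.
fa = FactorAnalysis(n_components=1).fit(X)
load = fa.components_.ravel()
omega = load.sum()**2 / (load.sum()**2 + fa.noise_variance_.sum())
print(f"alpha: {a_hat:.3f}, 95% bootstrap CI: ({lo:.3f}, {hi:.3f})")
print(f"composite reliability (omega): {omega:.3f}")
```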
