1.
J Appl Meas ; 17(1): 14-34, 2016.
Article in English | MEDLINE | ID: mdl-26784376

ABSTRACT

The authors investigated the effect of missing completely at random (MCAR) item responses on partial credit model (PCM) parameter estimates in a longitudinal study of Positive Affect. Participants were 307 adults from the older cohort of the Notre Dame Study of Health and Well-Being (Bergeman and Deboeck, 2014) who completed questionnaires including Positive Affect items for 56 days. Additional MCAR missingness was then induced by randomly replacing 20%, 50%, or 70% of the responses on each item and each day with missing values, on top of the missing data already present. Results indicated that item locations and person trait level measures diverged from the original estimates as the level of degradation from induced missing data increased. In addition, standard errors of these estimates increased with the level of degradation. Thus, MCAR missingness does degrade the quality and precision of PCM estimates.


Subject(s)
Affect , Artifacts , Data Interpretation, Statistical , Models, Statistical , Personality Tests , Surveys and Questionnaires , Aged , Aged, 80 and over , Algorithms , Computer Simulation , Female , Humans , Longitudinal Studies , Male , Middle Aged , Psychometrics/methods , Sample Size
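The MCAR degradation step described in the abstract above can be sketched in a few lines. This is a hypothetical illustration of the design (function name, signature, and data are assumptions, not the authors' code): a chosen fraction of the still-observed responses is replaced with missing values, layered on top of whatever missingness the data already contain.

```python
import random

def induce_mcar(responses, missing_rate, rng):
    """Randomly replace a fraction of the *observed* responses with None.

    Illustrative sketch of MCAR degradation, not the study's actual code.
    Values that are already missing (None) are left untouched, so induced
    missingness adds to the missing data already present.
    """
    out = list(responses)
    observed = [i for i, r in enumerate(out) if r is not None]
    n_drop = round(missing_rate * len(observed))
    for i in rng.sample(observed, n_drop):
        out[i] = None
    return out

# One person's responses to 10 items on one day (None = already missing),
# degraded at the 50% level as in the middle study condition.
rng = random.Random(2016)
day_responses = [3, 2, None, 4, 1, 2, 3, None, 4, 2]
degraded = induce_mcar(day_responses, 0.5, rng)
```

Because selection is purely random and ignores the response values themselves, the induced missingness is MCAR by construction.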
2.
Struct Equ Modeling ; 23(4): 532-543, 2016.
Article in English | MEDLINE | ID: mdl-28936107

ABSTRACT

Reliability has a long history as one of the key psychometric properties of a test. However, a given test might not measure all people equally reliably: test scores from some individuals may carry considerably greater error than scores from others. This study proposed two approaches that use intraindividual variation to estimate test reliability for each person. A simulation study showed that both the parallel tests approach and the structural equation modeling approach recovered the simulated reliability coefficients. In an empirical study in which 45 women completed the Positive and Negative Affect Schedule (PANAS) daily for 45 consecutive days, separate reliability estimates were then generated for each person. Results showed that reliability estimates of the PANAS varied substantially from person to person. The methods provided in this article apply to tests measuring changeable attributes and require repeated measures across time for each individual. The article also provides a set of parallel forms of the PANAS.
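The parallel tests approach above can be illustrated with a minimal sketch. Under classical parallel-tests assumptions, the correlation between a person's scores on two parallel forms, computed across that person's repeated daily measurements, estimates that person's reliability. The function and the daily scores below are illustrative assumptions, not the authors' implementation.

```python
def person_reliability(form_a, form_b):
    """Pearson correlation between two parallel-form scores across days.

    Under classical parallel-tests assumptions this correlation estimates
    one person's reliability; a minimal sketch, not the study's SEM code.
    """
    n = len(form_a)
    ma, mb = sum(form_a) / n, sum(form_b) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(form_a, form_b))
    va = sum((a - ma) ** 2 for a in form_a)
    vb = sum((b - mb) ** 2 for b in form_b)
    return cov / (va * vb) ** 0.5

# Hypothetical daily scores on two parallel half-forms for one person:
a = [12, 15, 11, 18, 14, 16, 13]
b = [13, 14, 10, 17, 15, 16, 12]
r_person = person_reliability(a, b)
```

Repeating this computation per participant yields the person-specific reliability estimates the abstract describes; people whose two forms track each other closely day to day get estimates near 1.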

3.
J Appl Meas ; 13(2): 146-64, 2012.
Article in English | MEDLINE | ID: mdl-22805359

ABSTRACT

Positive affect (PA) and negative affect (NA) are important constructs in health and well-being research. Good longitudinal measurement is crucial to conducting meaningful research on relationships between affect, health, and well-being across the lifespan. One common affect measure, the PANAS, has been evaluated thoroughly with factor analysis, but not with Rasch-based latent trait models (RLTMs) such as the Partial Credit Model (PCM), and not longitudinally. Current longitudinal RLTMs can computationally handle only a few occasions of data. The present study compares four methods of anchoring PCMs across 56 occasions to longitudinally evaluate the psychometric properties of the PANAS plus additional items. Anchoring item parameters on mean parameter values across occasions produced more desirable results than using no anchor, using first-occasion parameters as anchors, or allowing anchor values to vary across occasions. Results indicated problems with NA items, including poor category utilization, gaps in the item distribution, and a lack of easy-to-endorse items. PA items had much more desirable psychometric qualities.


Subject(s)
Affect , Models, Statistical , Mood Disorders/classification , Mood Disorders/diagnosis , Psychometrics/methods , Aged , Aged, 80 and over , Computer Simulation , Female , Humans , Male , Mood Disorders/psychology , Reproducibility of Results , Sensitivity and Specificity
4.
Psychol Assess ; 24(3): 738-50, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22250596

ABSTRACT

[Correction Notice: An Erratum for this article was reported in Vol 24(3) of Psychological Assessment (see record 2012-04601-001). The article contained a number of errors which are corrected in the erratum.] Despite general consensus over the value of measuring self-reported offending, discrepancies exist in methods of scoring self-reported offending and the length of the reference period over which offending is assessed. This analysis compared the concurrent interassociations and longitudinal predictive strength of diversity, frequency, and severity offending scores measured over the past 6 months and diversity and severity scores measured "ever" between assessments. For violent offending, different scorings were highly correlated and equally predictive of adulthood offending. For nonviolent offending, there was significant continuity in diversity and severity-weighted diversity scores over the transition to adulthood but not in nonviolent frequency or severity-weighted frequency scores. Results support the use of offending diversity scores rather than offending frequency scores and highlight the importance of examining nonviolent and violent offending as separate constructs.


Subject(s)
Criminals/psychology , Juvenile Delinquency/psychology , Surveys and Questionnaires/standards , Adolescent , Adult , Criminals/classification , Female , Humans , Juvenile Delinquency/classification , Longitudinal Studies , Predictive Value of Tests , Reproducibility of Results , Self Report , Severity of Illness Index , Time Factors , Young Adult
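The scoring contrast in the abstract above can be made concrete with a small sketch. A diversity score counts how many distinct offense types a respondent endorses, while a severity-weighted diversity score sums a severity weight for each endorsed type. The function names, the three offense types, and the weights here are illustrative assumptions, not the instrument's actual coding scheme.

```python
def diversity_score(endorsed):
    """Diversity: count of distinct offense types endorsed (True/False)."""
    return sum(1 for e in endorsed if e)

def severity_weighted_diversity(endorsed, severity_weights):
    """Sum the severity weight of each endorsed offense type.

    Illustrative sketch only; the weights are hypothetical, not the
    instrument's published severity ratings.
    """
    return sum(w for e, w in zip(endorsed, severity_weights) if e)

# Three hypothetical offense types with assumed severity weights:
endorsed = [True, False, True]
weights = [3, 1, 5]
```

Frequency scores, by contrast, would sum self-reported counts of each act; the abstract's finding favors the endorsement-based diversity scores above over such counts.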