Results 1 - 4 of 4
1.
Educ Psychol Meas ; 84(4): 810-834, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39055098

ABSTRACT

Low-stakes test performance commonly reflects both examinee ability and examinee effort. Examinees exhibiting low effort may be identified through rapid guessing behavior throughout an assessment. A plethora of methods has been proposed to adjust scores once rapid guesses have been identified, but these have been plagued by strong assumptions or the removal of examinees. In this study, we illustrate how an IRTree model can be used to adjust examinee ability estimates for rapid guessing behavior. Our approach is flexible: it does not assume independence between rapid guessing behavior and the trait of interest (e.g., ability), nor does it necessitate the removal of examinees who engage in rapid guessing. In addition, our method uniquely allows for the simultaneous modeling of a disengagement latent trait alongside the trait of interest. The results indicate the model is quite useful for estimating individual differences among examinees on the disengagement latent trait and for providing more precise measurement of examinee ability relative to models that ignore rapid guesses or accommodate them in other ways. A simulation study reveals that our model yields less biased estimates of the trait of interest for individuals with rapid responses, regardless of sample size and the rapid response rate in the sample. We conclude with a discussion of extensions of the model and directions for future research.
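To make the recoding step concrete, the following is a minimal sketch of the pseudo-item expansion commonly used in IRTree treatments of rapid guessing, assuming a per-item response-time threshold is available. The function name, threshold rule, and array layout are illustrative assumptions, not the authors' implementation.

    # Recode raw responses into two IRTree pseudo-item sets:
    # an engagement node (effortful vs. rapid guess) and an accuracy
    # node that is scored only when the response was effortful.
    import numpy as np

    def irtree_recode(responses, response_times, thresholds):
        # responses, response_times: (n_persons x n_items) arrays
        # thresholds: length n_items array of rapid-guess RT cutoffs
        engaged = (response_times >= thresholds).astype(float)   # 1 = effortful, 0 = rapid guess
        accuracy = np.where(engaged == 1, responses, np.nan)     # accuracy is missing for rapid guesses
        return engaged, accuracy

Each pseudo-item set can then be modeled with its own latent trait (disengagement and ability, respectively), with the two traits allowed to correlate rather than being assumed independent.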

2.
Appl Psychol Meas ; 46(7): 571-588, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36131840

ABSTRACT

Previous researchers have adopted either an item perspective or an examinee perspective on position effects, exploring the relationships between position effects and item or examinee variables separately. In contrast, we adopted an integrated perspective, exploring the relationships among position effects, item variables, and examinee variables simultaneously. We evaluated the degree to which position effects on two separate low-stakes tests, administered to two different samples, were moderated by item variables (item length, number of response options, mental taxation, and presence of a graphic) and examinee variables (effort, change in effort, and gender). Items exhibited significant negative linear position effects on both tests, with the magnitude of the position effects varying from item to item. Longer items were more prone to position effects than shorter items; however, the level of mental taxation required to answer an item, the presence of a graphic, and the number of response options were not related to position effects. Examinee effort levels, change-in-effort patterns, and gender did not moderate the relationships between position effects and item features.
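One common way to express an item-specific linear position effect in an explanatory IRT framework is sketched below; the notation is generic and is not necessarily the parameterization used in this study.

    \operatorname{logit} P(X_{pi} = 1) = \theta_p - \bigl(\beta_i + \gamma_i \,\mathrm{pos}_{pi}\bigr)

Here \theta_p is examinee ability, \beta_i is item difficulty, \mathrm{pos}_{pi} is the position at which examinee p received item i, and a positive \gamma_i corresponds to a negative (declining) position effect whose magnitude can vary from item to item.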

3.
Multivariate Behav Res ; 53(1): 74-89, 2018.
Article in English | MEDLINE | ID: mdl-28952787

ABSTRACT

The term "multilevel meta-analysis" is encountered not only in applied research studies, but in multilevel resources comparing traditional meta-analysis to multilevel meta-analysis. In this tutorial, we argue that the term "multilevel meta-analysis" is redundant since all meta-analysis can be formulated as a special kind of multilevel model. To clarify the multilevel nature of meta-analysis the four standard meta-analytic models are presented using multilevel equations and fit to an example data set using four software programs: two specific to meta-analysis (metafor in R and SPSS macros) and two specific to multilevel modeling (PROC MIXED in SAS and HLM). The same parameter estimates are obtained across programs underscoring that all meta-analyses are multilevel in nature. Despite the equivalent results, not all software programs are alike and differences are noted in the output provided and estimators available. This tutorial also recasts distinctions made in the literature between traditional and multilevel meta-analysis as differences between meta-analytic choices, not between meta-analytic models, and provides guidance to inform choices in estimators, significance tests, moderator analyses, and modeling sequence. The extent to which the software programs allow flexibility with respect to these decisions is noted, with metafor emerging as the most favorable program reviewed.


Subject(s)
Data Interpretation, Statistical; Meta-Analysis as Topic; Multilevel Analysis/methods; Software; Humans; Models, Statistical
4.
J Appl Meas ; 4(1): 24-42, 2003.
Article in English | MEDLINE | ID: mdl-12700429

ABSTRACT

The purpose of the present investigation was to systematically examine the effectiveness of the Sympson-Hetter exposure control technique and rotated content balancing, relative to no exposure control and no content rotation, in a computerized adaptive testing (CAT) system based on the partial credit model. A series of simulated fixed- and variable-length CATs were run using two data sets generated for multiple content areas and three item pool sizes. The 2 (exposure control) × 2 (content rotation) × 2 (test length) × 3 (item pool size) × 2 (data set) design yielded a total of 48 conditions. Results show that while both procedures can be used with no deleterious effect on measurement precision, the gains in exposure control, pool utilization, and item overlap are quite modest. Difficulties involved in setting the exposure control parameters for small item pools call into question the utility of the Sympson-Hetter technique with such pools.
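For context, a minimal sketch of the Sympson-Hetter probabilistic filter is given below, assuming the exposure control parameters have already been set through iterative calibration simulations; the function and variable names are illustrative and do not reproduce the study's implementation.

    # Sympson-Hetter filter: the best available item is administered
    # only with probability equal to its exposure control parameter;
    # otherwise the next-best candidate is considered.
    import random

    def sympson_hetter_select(ranked_items, K):
        # ranked_items: item ids ordered from most to least informative
        #   at the current ability estimate
        # K: exposure control probabilities (0 < K[i] <= 1) obtained from
        #   prior calibration runs targeting a maximum exposure rate
        for item in ranked_items:
            if random.random() <= K[item]:
                return item
        return ranked_items[-1]   # fallback if every candidate is filtered out

The calibration step that sets K is where small item pools become difficult to handle, consistent with the difficulties in setting exposure control parameters noted in the abstract.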


Subject(s)
Computers; Educational Measurement; Models, Theoretical; Automation; Decision Making; Humans; Reproducibility of Results