Results 1 - 4 of 4
1.
J Learn Disabil ; 56(1): 58-71, 2023.
Article in English | MEDLINE | ID: mdl-36065510

ABSTRACT

As access to higher education increases, it is important to monitor students with special needs to facilitate the provision of appropriate resources and support. Although metrics such as the ACT's (formerly American College Testing) "reading readiness" benchmark provide insight into how many students may need such resources, they do not specify why a student may need support or how to provide that support. Increasingly, students are bringing reading comprehension struggles to college. The Multiple-choice Online Causal Comprehension Assessment-College (MOCCA-College) is a new diagnostic reading comprehension assessment designed to identify who is a poor comprehender and to diagnose why. Using reliability coefficients, receiver-operating characteristic (ROC) curve analysis, and correlations, this study reports findings from the first year of a 3-year study to validate the assessment with 988 postsecondary students who took MOCCA-College, a subset of whom also provided data on other reading assessments (i.e., ACT, n = 377; Scholastic Aptitude Test [SAT], n = 192; and Nelson-Denny Reading Test [NDRT], n = 78). Despite some limitations (e.g., the sample is predominantly female and drawn from 4-year institutions), results indicate that MOCCA-College has good internal reliability and that its scores correlate with other reading assessments. Through a series of analyses of variance (ANOVAs), we also report how students identified by MOCCA-College as good and poor comprehenders differ in demographics, cognitive processes used while reading, overall comprehension ability, and scores on admissions tests. Findings are discussed in terms of using MOCCA-College to gauge which students may be at risk of reading comprehension difficulties, identify why they may be struggling, and inform actionable instructional changes based on comprehension-processing data.
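The abstract names its analyses (internal reliability, ROC-curve analysis, correlations, ANOVAs) but not the code behind them. Below is a minimal Python sketch of two of those checks, Cronbach's alpha and ROC AUC, run on simulated item responses; the variable names, simulated data, and poor-comprehender cutoff are hypothetical illustrations, not taken from the study.

```python
# Minimal sketch (not the authors' code): Cronbach's alpha and ROC AUC on
# simulated MOCCA-College-style item data. All names and values are hypothetical.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Simulate 988 examinees x 40 dichotomous items driven by a common ability factor.
n_students, n_items = 988, 40
ability = rng.normal(size=n_students)
p_correct = 1 / (1 + np.exp(-(ability[:, None] - rng.normal(size=n_items))))
items = rng.binomial(1, p_correct)

# Cronbach's alpha as an internal-reliability coefficient.
k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)
total_var = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# ROC analysis: how well the total score separates a (simulated) poor-comprehender flag.
total_score = items.sum(axis=1)
poor_comprehender = (ability < -0.5).astype(int)      # hypothetical criterion
auc = roc_auc_score(poor_comprehender, -total_score)  # lower scores flag poor comprehenders

print(f"Cronbach's alpha = {alpha:.2f}, AUC = {auc:.2f}")
```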


Subject(s)
Cognition, Reading, Humans, Reproducibility of Results, Universities
2.
Psychol Methods ; 2022 Jul 04.
Article in English | MEDLINE | ID: mdl-35786984

ABSTRACT

A regression model of predictor trade-offs is described. Each regression parameter equals the expected change in Y obtained by trading 1 point from one predictor to a second predictor. The model applies to predictor variables that sum to a constant T for all observations; for example, proportions summing to T = 1.0 or percentages summing to T = 100 for each observation. If predictor variables sum to a constant T for all observations and a least squares solution exists, the predicted values for the criterion variable Y are uniquely determined, but there is an infinite set of linear regression weights, and the familiar interpretation of regression weights does not apply. However, the regression weights are determined up to an additive constant, so differences in regression weights βv − βv* are uniquely determined, readily estimable, and interpretable: βv − βv* is the expected increase in Y given a transfer of 1 point from variable v* to variable v. The model is applied to multiple-choice test items that have four response categories, one correct and three incorrect. Results indicate that the expected outcome depends not just on the student's number of correct answers but also on how the student's incorrect responses are distributed over the three incorrect response types. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
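A small numerical sketch can make the identification argument concrete: when the predictors sum to a constant, only differences between regression weights are estimable, and those differences do not depend on which category is treated as the reference. The simulated counts, coefficient values, and category labels below are hypothetical, not from the article.

```python
# Minimal sketch (not from the article): with compositional predictors, only
# differences between regression weights are identified. All data are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Counts of responses in four categories (one correct, three incorrect),
# summing to 40 items for every examinee.
counts = rng.multinomial(40, [0.55, 0.20, 0.15, 0.10], size=n).astype(float)
true_beta = np.array([1.0, -0.2, -0.5, -0.8])  # identified only up to an additive constant
y = counts @ true_beta + rng.normal(scale=2.0, size=n)

def fit_dropping(col):
    """OLS with an intercept and one category dropped (the reference)."""
    keep = [j for j in range(4) if j != col]
    X = np.column_stack([np.ones(n), counts[:, keep]])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    full = np.zeros(4)
    full[keep] = coef[1:]  # the dropped category implicitly has weight 0
    return full

b_drop_D = fit_dropping(3)  # weights expressed relative to category D
b_drop_C = fit_dropping(2)  # weights expressed relative to category C

# Pairwise differences agree across parameterizations, even though the
# individual weights differ by an additive constant.
print(b_drop_D[0] - b_drop_D[1])  # ~ true_beta[0] - true_beta[1] = 1.2
print(b_drop_C[0] - b_drop_C[1])  # same value
```

The printed difference is the model's trade-off interpretation: the expected change in Y from moving one response out of category B and into category A.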

3.
Educ Psychol Meas ; 79(1): 65-84, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30636782

ABSTRACT

Prior research suggests that subscores from a single achievement test seldom add value over a single total score. Such subscores typically correspond to subcontent areas within the total content domain, but content subdomains might not provide a sound basis for subscores. Using scores on an inferential reading comprehension test from 625 third, fourth, and fifth graders, two new methods of creating subscores were explored. Three subscores were based on the types of incorrect answers given by students; the fourth was based on temporal efficiency in giving correct answers. All four scores were reliable. The three subscores based on incorrect answers added value and validity. In logistic regression analyses predicting failure to reach proficiency on a statewide test, models including subscores fit better than the model with a single total score. Including the pattern of incorrect responses improved fit in all three grades, whereas including the comprehension efficiency score modestly improved fit in fourth and fifth grades but not in third grade. Area under the curve (AUC) statistics from receiver operating characteristic (ROC) curves were higher for models including subscores than for those without. Implications for using models with and without subscores are illustrated and discussed.
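A hedged sketch of the kind of model comparison described above: logistic regression predicting failure to reach proficiency with and without error-type subscores, evaluated by ROC AUC. The simulated scores, error-type counts, and outcome below are illustrative stand-ins, not the study's data.

```python
# Minimal sketch (not the authors' analysis): comparing logistic regression
# models with and without error-type subscores via ROC AUC. Data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 625

total_score = rng.normal(50, 10, n)                    # total comprehension score
error_types = rng.multinomial(20, [0.5, 0.3, 0.2], n)  # counts of three incorrect-answer types
X_total = total_score[:, None]
X_sub = np.column_stack([X_total, error_types])

# Simulated outcome: failure to reach proficiency depends on the total score
# and on how errors are distributed across types.
logit = -0.1 * (total_score - 50) + 0.15 * error_types[:, 2]
fail = rng.binomial(1, 1 / (1 + np.exp(-logit)))

for name, X in [("total only", X_total), ("total + subscores", X_sub)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, fail, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```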

4.
J Psycholinguist Res ; 45(3): 553-74, 2016 Jun.
Article in English | MEDLINE | ID: mdl-25833811

ABSTRACT

Words can be informative linguistic markers of psychological constructs. The purpose of this study is to examine associations between word use and the process of making meaningful connections to a text while reading (i.e., inference generation). To achieve this purpose, think-aloud data from third- through fifth-grade students ([Formula: see text]) reading narrative texts were hand-coded for inferences. These data were also processed with a computer text analysis tool, Linguistic Inquiry and Word Count (LIWC), to obtain percentages of word use in the following categories: cognitive mechanism words, nonfluencies, and nine types of function words. Findings indicate that cognitive mechanism words were an independent, positive predictor of connections to background knowledge (i.e., elaborative inference generation), and nonfluencies were an independent, negative predictor of connections within the text (i.e., bridging inference generation). Function words did not contribute unique variance toward predicting inference generation. These findings are discussed in the context of a cognitive reflection model and the differences between bridging and elaborative inference generation. In addition, potential practical implications for intelligent tutoring systems and computer-based methods of inference identification are presented.
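A minimal sketch of the general pipeline described above: compute word-category percentages from think-aloud text and regress hand-coded inference counts on them. The tiny word lists, example responses, and inference counts below are toy stand-ins; the study used the proprietary LIWC dictionaries and a much larger hand-coded corpus.

```python
# Minimal sketch (not the study's pipeline): word-category percentages as
# predictors of hand-coded inference counts. Word lists and data are toy stand-ins.
import numpy as np

COGNITIVE_MECH = {"because", "think", "know", "maybe", "cause", "if"}
NONFLUENCY = {"um", "uh", "er", "hm"}

def category_percentages(text):
    """Percentage of tokens falling in each (toy) word category."""
    tokens = text.lower().split()
    n = max(len(tokens), 1)
    cog = 100 * sum(t in COGNITIVE_MECH for t in tokens) / n
    nonflu = 100 * sum(t in NONFLUENCY for t in tokens) / n
    return cog, nonflu

# Hypothetical think-aloud responses paired with hand-coded inference counts.
responses = [
    ("um I think the dog ran because it was scared", 2),
    ("uh the boy um went home", 0),
    ("maybe she knew the storm would come because of the clouds", 3),
    ("he opened the door", 1),
]
X = np.array([[1.0, *category_percentages(t)] for t, _ in responses])  # intercept, cog %, nonfluency %
y = np.array([c for _, c in responses], dtype=float)

# Ordinary least squares: inference count ~ cognitive-mechanism % + nonfluency %.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["intercept", "cognitive_mech_pct", "nonfluency_pct"], coef.round(3))))
```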


Subject(s)
Cognition, Psycholinguistics, Reading, Thinking, Child, Female, Humans, Male