Results 1 - 4 of 4
1.
J Sch Psychol; 83: 66-88, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33276856

ABSTRACT

The purpose of this study was to support the development and initial validation of the Intervention Selection Profile (ISP)-Skills, a brief 14-item teacher rating scale intended to inform the selection and delivery of instructional interventions at Tier 2. Teacher participants (n = 196) rated five students from their classroom across four measures (total student n = 877). These measures included the ISP-Skills and three criterion tools: Social Skills Improvement System (SSIS), Devereux Student Strengths Assessment (DESSA), and Academic Competence Evaluation Scales (ACES). Diagnostic classification modeling (DCM) suggested that an expert-created Q-matrix, which specified relations between ISP-Skills items and hypothesized latent attributes, provided good fit to item data. DCM also indicated that ISP-Skills items functioned as intended, with the magnitude of item ratings corresponding to the model-implied probability of attribute mastery. DCM was then used to generate skill profiles for each student, which included scores representing the probability of students mastering each of eight skills. Correlational analyses revealed large convergent relations between ISP-Skills probability scores and theoretically aligned subscales from the criterion measures. Discriminant validity was not supported, as ISP-Skills scores were also highly related to all other criterion subscales. Receiver operating characteristic (ROC) curve analyses informed the selection of cut scores for each ISP-Skills scale. Review of classification accuracy statistics associated with these cut scores (e.g., sensitivity and specificity) suggested they reliably differentiated students with below-average, average, and above-average skills. Implications for practice and directions for future research are discussed, including those related to the examination of ISP-Skills treatment utility.
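The ROC-based cut-score procedure the abstract describes can be illustrated in a few lines. The sketch below uses entirely hypothetical data (simulated ratings and a binary risk criterion; no names or values come from the study) and selects the threshold maximizing Youden's J, one common selection rule; the abstract does not state which rule the authors used.

```python
# A minimal sketch of ROC-based cut-score selection, under the assumption
# that lower teacher ratings signal higher risk. All data are simulated.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
n = 300
at_risk = rng.integers(0, 2, size=n)                        # hypothetical criterion labels
ratings = rng.normal(loc=3.0 - at_risk, scale=1.0, size=n)  # lower rating -> higher risk

# roc_curve expects higher scores to indicate the positive class,
# so flip the sign of the ratings.
fpr, tpr, thresholds = roc_curve(at_risk, -ratings)

# Youden's J = sensitivity + specificity - 1; pick the cut that maximizes it.
j = tpr - fpr
best = np.argmax(j)
cut = -thresholds[best]  # undo the sign flip to report on the rating scale

print(f"AUC = {roc_auc_score(at_risk, -ratings):.3f}")
print(f"cut score = {cut:.2f}, sensitivity = {tpr[best]:.2f}, "
      f"specificity = {1 - fpr[best]:.2f}")
```

Reviewing sensitivity and specificity at each candidate threshold, as the study did, amounts to scanning the (tpr, 1 - fpr) pairs this code produces rather than committing to a single rule.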


Subject(s)
Behavior Rating Scale/standards; Students/psychology; Academic Performance; Adult; Child; Child Behavior/psychology; Emotions; Female; Humans; Male; Reproducibility of Results; Schools; Sensitivity and Specificity; Social Skills
2.
J Sch Psychol; 68: 129-141, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29861023

ABSTRACT

In accordance with an argument-based approach to validation, the purpose of the current study was to yield evidence relating to Social, Academic, and Emotional Behavior Risk Screener (SAEBRS) score interpretation. Bifactor item response theory analyses were performed to examine SAEBRS item functioning. Structural equation modeling (SEM) was used to simultaneously evaluate intra- and inter-scale relationships, expressed through (a) a measurement model specifying a bifactor structure to SAEBRS items, and (b) a structural model specifying convergent and discriminant relations with an outcome measure (i.e., Behavioral and Emotional Screening System [BESS]). Finally, hierarchical omega coefficients were calculated to evaluate the model-based internal reliability of each SAEBRS scale. IRT analyses supported the fit of the bifactor model, indicating that items adequately discriminated between moderate- and high-risk students. SEM results further supported the fit of the latent bifactor measurement model, yielding superior fit relative to alternative models (i.e., unidimensional and correlated factors). SEM analyses also indicated that the latent SAEBRS-Total Behavior factor was a statistically significant predictor of all BESS subscales, the SAEBRS-Academic Behavior factor predicted the BESS Adaptive Skills subscales, and the SAEBRS-Emotional Behavior factor predicted the BESS Internalizing Problems subscale. Hierarchical omega coefficients indicated the SAEBRS-Total Behavior factor was associated with adequate reliability. In contrast, after accounting for the total scale, each of the SAEBRS subscales was associated with somewhat limited reliability, suggesting variability in these scores is largely driven by the Total Behavior scale. Implications for practice and future research are discussed.
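The hierarchical omega coefficient the abstract reports has a closed form given a standardized bifactor loading matrix. The sketch below uses invented loadings (not the SAEBRS estimates) purely to show the computation: the squared sum of general-factor loadings divided by the model-implied total score variance.

```python
# A minimal sketch of omega-total and hierarchical omega (omega-h) from a
# standardized bifactor solution. The loading values are made up.
import numpy as np

# Rows = items; col 0 = general factor, cols 1+ = group (specific) factors.
loadings = np.array([
    [0.70, 0.40, 0.00],
    [0.65, 0.35, 0.00],
    [0.60, 0.30, 0.00],
    [0.55, 0.00, 0.45],
    [0.60, 0.00, 0.40],
    [0.50, 0.00, 0.35],
])

general = loadings[:, 0]
groups = loadings[:, 1:]
uniqueness = 1.0 - (loadings ** 2).sum(axis=1)  # item error variances

# Model-implied total score variance: general + group + error components.
total_var = (general.sum() ** 2
             + sum(groups[:, k].sum() ** 2 for k in range(groups.shape[1]))
             + uniqueness.sum())

omega_total = (total_var - uniqueness.sum()) / total_var
omega_h = general.sum() ** 2 / total_var  # reliable variance due to the general factor alone

print(f"omega-total = {omega_total:.3f}, omega-hierarchical = {omega_h:.3f}")
```

A large gap between omega-total and omega-h for a subscale is exactly the pattern the abstract describes: subscale variance that is reliable overall but largely attributable to the general (Total Behavior) factor.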


Subject(s)
Child Behavior Disorders/diagnosis; Emotions/physiology; Problem Behavior/psychology; Students/psychology; Child; Child Behavior Disorders/psychology; Female; Humans; Male; Mass Screening; Psychometrics; Reproducibility of Results; Risk Assessment; Schools
3.
Multivariate Behav Res; 50(1): 128, 2015.
Article in English | MEDLINE | ID: mdl-26609749
4.
J Pers Assess; 95(2): 129-140, 2013.
Article in English | MEDLINE | ID: mdl-23030794

ABSTRACT

Confirmatory factor analytic studies of psychological measures showing item responses to be multidimensional do not provide sufficient guidance for applied work. Demonstrating that item response data are multifactorial in this way does not necessarily (a) mean that a total scale score is an inadequate indicator of the intended construct, (b) demand creating and scoring subscales, or (c) require specifying a multidimensional measurement model in research using structural equation modeling (SEM). To better inform these important decisions, more fine-grained psychometric analyses are necessary. We describe 3 established, but seldom used, psychometric approaches that address 4 distinct questions: (a) To what degree do total scale scores reflect reliable variation on a single construct? (b) Is the scoring and reporting of subscale scores justified? (c) If justified, how much reliable variance do subscale scores provide after controlling for a general factor? and (d) Can multidimensional item response data be represented by a unidimensional measurement model in SEM, or are multidimensional measurement models (e.g., second-order, bifactor) necessary to achieve unbiased structural coefficients? In the discussion, we provide guidance for applied researchers on how best to interpret the results from applying these methods and review their limitations.
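Two bifactor-based indices often used to address questions like (c) and (d) are omega-hs (reliable subscale variance remaining after controlling for the general factor) and explained common variance (ECV, the share of common variance due to the general factor). The sketch below is illustrative only, reusing the hypothetical loading-matrix layout from the earlier sketch (col 0 = general factor, cols 1+ = group factors); it does not reproduce the article's own computations.

```python
# A minimal sketch of ECV and omega-hs from a standardized bifactor
# solution with hypothetical loadings.
import numpy as np

loadings = np.array([
    [0.70, 0.40, 0.00],
    [0.65, 0.35, 0.00],
    [0.60, 0.30, 0.00],
    [0.55, 0.00, 0.45],
    [0.60, 0.00, 0.40],
    [0.50, 0.00, 0.35],
])
general, groups = loadings[:, 0], loadings[:, 1:]
uniqueness = 1.0 - (loadings ** 2).sum(axis=1)

# ECV: proportion of common variance explained by the general factor.
# Values near 1 suggest a unidimensional measurement model may yield
# largely unbiased structural coefficients in SEM.
ecv = (general ** 2).sum() / (loadings ** 2).sum()

# omega-hs: reliable variance a subscale score provides after removing
# the general factor. Each group factor's items are those with nonzero
# loadings on that factor.
for k in range(groups.shape[1]):
    items = groups[:, k] != 0
    sub_var = (general[items].sum() ** 2
               + groups[items, k].sum() ** 2
               + uniqueness[items].sum())
    omega_hs = groups[items, k].sum() ** 2 / sub_var
    print(f"subscale {k + 1}: omega-hs = {omega_hs:.3f}")

print(f"ECV = {ecv:.3f}")
```

Low omega-hs values would argue against scoring and reporting subscales, while a high ECV would support representing the item data with a single general dimension, mirroring the decision framework the abstract lays out.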


Subject(s)
Models, Psychological; Personality Tests; Factor Analysis, Statistical; Humans; Psychometrics; Research Design