Results 1 - 7 of 7
1.
Article in English | MEDLINE | ID: mdl-38726687

ABSTRACT

Oral reading fluency (ORF) assessments are commonly used as curriculum-based measurements to screen at-risk readers and to evaluate the effectiveness of interventions. Following standard practice in item response theory (IRT), calibrated passage parameter estimates are currently used as if they were population values in model-based ORF scoring. However, unaccounted-for calibration errors may bias ORF score estimates and, in particular, lead to underestimated standard errors (SEs) of ORF scores. We therefore consider an approach that incorporates the calibration errors into latent variable scores, and we derive SEs of ORF scores based on the delta method to incorporate the calibration uncertainty. A simulation study evaluates the recovery of point estimates and SEs of latent variable scores and ORF scores under various simulated conditions. Results suggest that ignoring calibration errors leads to underestimated latent variable score SEs and ORF score SEs, especially when the calibration sample is small.
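The delta-method propagation of calibration uncertainty can be sketched generically. This is a minimal illustration, not the authors' implementation: `delta_method_se` and its numerical gradient are assumptions, and the function `f` stands in for the mapping from calibrated passage parameters to a score.

```python
import numpy as np

def delta_method_se(f, gamma_hat, gamma_cov, eps=1e-6):
    """Delta-method SE of f(gamma): sqrt(g' V g), where g is the
    numerical gradient of f at the calibrated estimates gamma_hat
    and V is the calibration covariance matrix gamma_cov."""
    gamma_hat = np.asarray(gamma_hat, dtype=float)
    g = np.zeros_like(gamma_hat)
    for i in range(len(gamma_hat)):
        step = np.zeros_like(gamma_hat)
        step[i] = eps
        # central difference approximation to df/dgamma_i
        g[i] = (f(gamma_hat + step) - f(gamma_hat - step)) / (2 * eps)
    return float(np.sqrt(g @ np.asarray(gamma_cov) @ g))
```

The calibration-error variance computed this way would be added to the usual sampling variance of the score, which is how ignoring it produces underestimated SEs.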

2.
Educ Psychol Meas; 84(1): 190-209, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38250506

ABSTRACT

Words read correctly per minute (WCPM) is the reporting score metric in oral reading fluency (ORF) assessments, which are widely used as part of curriculum-based measurements to screen at-risk readers and to monitor the progress of students who receive interventions. As with other assessments that use multiple forms, equating is necessary when WCPM scores obtained from multiple ORF passages are to be compared both between and within students. This article proposes a model-based approach for equating WCPM scores. A simulation study was conducted to evaluate the performance of the model-based equating approach along with several observed-score equating methods under an external anchor test design.
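As a point of reference for the equating problem, a classical observed-score linear equating step (matching the first two moments of two forms) can be sketched as follows. This is an illustrative assumption, not the model-based method the article proposes; `linear_equate` is a hypothetical helper.

```python
import numpy as np

def linear_equate(x_scores, y_scores, x_new):
    """Map scores from form X onto the scale of form Y by matching
    the first two moments (classical linear observed-score equating)."""
    mx, sx = np.mean(x_scores), np.std(x_scores, ddof=1)
    my, sy = np.mean(y_scores), np.std(y_scores, ddof=1)
    return my + (sy / sx) * (np.asarray(x_new, dtype=float) - mx)
```

A score at the mean of form X maps to the mean of form Y, and the spread is rescaled by the ratio of standard deviations; anchor-test designs refine this by estimating the moments through a common anchor.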

3.
J Appl Stat; 50(15): 3157-3176, 2023.
Article in English | MEDLINE | ID: mdl-37969542

ABSTRACT

The paper considers parameter estimation in count data models using penalized likelihood methods. The motivating data consist of multiple independent count variables with a moderate sample size per variable, collected during the assessment of oral reading fluency (ORF) in school-aged children. A sample of fourth-grade students were each given one of ten available passages to read, with the passages differing in length and difficulty. The observed number of words read incorrectly (WRI) is used to measure ORF. Three models are considered for WRI scores: the binomial, the zero-inflated binomial, and the beta-binomial. We aim to efficiently estimate passage difficulty, a quantity expressed as a function of the underlying model parameters. Two types of penalty functions are considered, with the respective goals of shrinking parameter estimates closer to zero or closer to one another. A simulation study evaluates the efficacy of the shrinkage estimates using mean squared error (MSE) as the metric. Substantial reductions in MSE relative to unpenalized maximum likelihood are observed. The paper concludes with an analysis of the motivating ORF data.
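A minimal sketch of the second penalty type (shrinking estimates toward one another) for the binomial model: a quadratic penalty on the logit error rates pulls per-passage estimates toward their common mean. The function name and exact penalty form are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def shrunken_difficulties(wri, n_words, lam=1.0):
    """Penalized binomial MLE of per-passage error rates.
    wri[j] = words read incorrectly on passage j, out of n_words[j] words.
    The penalty lam * sum((theta_j - mean(theta))^2) shrinks the logit
    difficulties toward one another; lam=0 gives the unpenalized MLE."""
    wri = np.asarray(wri, dtype=float)
    n_words = np.asarray(n_words, dtype=float)

    def neg_pen_loglik(theta):
        p = expit(theta)  # error probability per passage
        ll = np.sum(wri * np.log(p) + (n_words - wri) * np.log1p(-p))
        penalty = lam * np.sum((theta - theta.mean()) ** 2)
        return -ll + penalty

    res = minimize(neg_pen_loglik, np.zeros(len(wri)), method="BFGS")
    return expit(res.x)
```

With a large `lam` the estimates collapse toward the pooled rate, trading bias for variance, which is the mechanism behind the MSE reductions reported above.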

4.
Educ Psychol Meas; 80(5): 847-869, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32855562

ABSTRACT

Oral reading fluency (ORF), used by teachers and school districts across the country to screen and progress monitor at-risk readers, has been documented as a good indicator of reading comprehension and overall reading competence. In traditional ORF administration, students are given one minute to read a grade-level passage, after which the assessor calculates the words correct per minute (WCPM) fluency score by subtracting the number of incorrectly read words from the total number of words read aloud. As part of a larger effort to develop an improved ORF assessment system, this study expands on and demonstrates the performance of a new model-based estimate of WCPM based on a recently developed latent-variable psychometric model of speed and accuracy for ORF data. The proposed method was applied to a data set collected from 58 fourth-grade students who read four passages (a total of 260 words). The proposed model-based WCPM scores were also evaluated through a simulation study with respect to sample size and number of passages read.
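The traditional WCPM calculation described in the abstract is simple to state in code (a sketch; the helper name is an assumption, and the time argument generalizes the usual one-minute administration):

```python
def wcpm(words_read, errors, seconds):
    """Traditional words-correct-per-minute fluency score:
    (words read aloud - incorrectly read words), scaled to one minute."""
    return (words_read - errors) * 60.0 / seconds
```

For example, a student who reads 120 words with 10 errors in the standard 60-second window scores 110 WCPM; the model-based estimate studied in the article replaces this raw calculation with a latent-variable model of speed and accuracy.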

5.
Biometrics; 75(4): 1133-1144, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31260084

ABSTRACT

Errors-in-variables models in high-dimensional settings pose two challenges in application. First, the number of observed covariates is larger than the sample size, while only a small number of covariates are true predictors under an assumption of model sparsity. Second, the presence of measurement error can result in severely biased parameter estimates, and also affects the ability of penalized methods such as the lasso to recover the true sparsity pattern. A new estimation procedure called SIMulation-SELection-EXtrapolation (SIMSELEX) is proposed. This procedure makes double use of lasso methodology. First, the lasso is used to estimate sparse solutions in the simulation step, after which a group lasso is implemented to do variable selection. The SIMSELEX estimator is shown to perform well in variable selection, and has significantly lower estimation error than naive estimators that ignore measurement error. SIMSELEX can be applied in a variety of errors-in-variables settings, including linear models, generalized linear models, and Cox survival models. It is furthermore shown in the Supporting Information how SIMSELEX can be applied to spline-based regression models. A simulation study is conducted to compare the SIMSELEX estimators to existing methods in the linear and logistic model settings, and to evaluate performance compared to naive methods in the Cox and spline models. Finally, the method is used to analyze a microarray dataset that contains gene expression measurements of favorable histology Wilms tumors.


Subject(s)
Models, Statistical; Scientific Experimental Error; Gene Expression Profiling; Humans; Linear Models; Logistic Models; Methods; Microarray Analysis/statistics & numerical data; Proportional Hazards Models; Sample Size; Wilms Tumor/genetics
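SIMSELEX builds on simulation-extrapolation (SIMEX). The underlying SIMEX idea can be sketched for simple linear regression with known measurement-error variance: add extra noise at increasing levels, track how the naive slope degrades, and extrapolate the trend back to the no-error case. This is a generic SIMEX illustration under assumed inputs, not SIMSELEX itself (no lasso, no group-lasso selection step).

```python
import numpy as np

def simex_slope(w, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), B=200, seed=0):
    """SIMEX for simple linear regression when w = x + u, u ~ N(0, sigma_u^2).
    For each lambda, add noise with variance lambda * sigma_u^2, average the
    naive slope over B replicates, then extrapolate a quadratic fit in
    lambda back to lambda = -1 (i.e., zero measurement error)."""
    rng = np.random.default_rng(seed)
    w, y = np.asarray(w, dtype=float), np.asarray(y, dtype=float)
    lam_grid, slopes = [0.0], [np.polyfit(w, y, 1)[0]]  # naive slope at lambda=0
    for lam in lambdas:
        b = [np.polyfit(w + rng.normal(0.0, sigma_u * np.sqrt(lam), len(w)),
                        y, 1)[0]
             for _ in range(B)]
        lam_grid.append(lam)
        slopes.append(np.mean(b))
    # quadratic extrapolant evaluated at lambda = -1
    coefs = np.polyfit(lam_grid, slopes, 2)
    return float(np.polyval(coefs, -1.0))
```

In SIMSELEX the same simulation step is paired with lasso fits at each noise level, and a group lasso across levels decides which coefficients survive before extrapolation.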
6.
Stat Med; 37(25): 3679-3692, 2018 Nov 10.
Article in English | MEDLINE | ID: mdl-30003564

ABSTRACT

It is important to properly correct for measurement error when estimating density functions associated with biomedical variables. Estimators that adjust for measurement error are broadly referred to as density deconvolution estimators. While most methods in the literature assume the distribution of the measurement error to be fully known, a recently proposed method based on the empirical phase function (EPF) can handle the situation in which the measurement error distribution is unknown. The EPF density estimator has only been considered in the context of additive and homoscedastic measurement error; however, the measurement error of many biomedical variables is heteroscedastic in nature. In this paper, we develop a phase function approach for density deconvolution when the measurement error has unknown distribution and is heteroscedastic. A weighted EPF (WEPF) is proposed in which the weights adjust for the heteroscedasticity of the measurement error. The asymptotic properties of the WEPF estimator are evaluated. Simulation results show that the weighting can yield large decreases in mean integrated squared error when estimating the phase function. The estimation of the weights from replicate observations is also discussed. Finally, a deconvolution density estimator constructed from the WEPF is compared with an existing deconvolution estimator that adjusts for heteroscedasticity but assumes the measurement error distribution to be fully known. The WEPF estimator proves competitive, especially considering that it relies on minimal assumptions about the distribution of the measurement error.


Subject(s)
Data Interpretation, Statistical; Statistical Distributions; Bias; Humans; Models, Statistical; Statistics, Nonparametric
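A weighted empirical phase function can be sketched directly from its definition: the weighted empirical characteristic function of the contaminated data, normalized to unit modulus. This is a minimal illustration under assumed inputs; the authors' estimator, including how the weights are estimated from replicates, is more involved.

```python
import numpy as np

def weighted_epf(w, weights, t):
    """Weighted empirical phase function at frequencies t: the weighted
    empirical characteristic function of the data w, divided by its
    modulus so only phase information (location/asymmetry) remains."""
    w = np.asarray(w, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize observation weights
    ecf = np.array([np.sum(weights * np.exp(1j * tt * w)) for tt in t])
    return ecf / np.abs(ecf)
```

The phase function is invariant to convolution with any symmetric error distribution, which is what lets the approach sidestep full knowledge of the error law; the weights downweight observations measured with larger error variance.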
7.
Biometrics; 72(4): 1369-1377, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27061196

ABSTRACT

For the classical, homoscedastic measurement error model, moment reconstruction (Freedman et al., 2004, 2008) and moment-adjusted imputation (Thomas et al., 2011) are appealing, computationally simple imputation-like methods for general model fitting. As in classical regression calibration, the idea is to replace the unobserved variable subject to measurement error with a proxy that can be used in a variety of analyses. Moment reconstruction and moment-adjusted imputation differ from regression calibration in that they attempt to match multiple features of the latent variable, as well as some of the latent variable's relationships with the response and additional covariates. In this note, we consider a problem where true exposure is generated by a complex, nonlinear random effects modeling process, and we develop analogues of moment reconstruction and moment-adjusted imputation for this case. This general model includes classical measurement errors, Berkson measurement errors, mixtures of Berkson and classical errors, and problems that are not measurement error problems but in which the data-generating process for true exposure is nonetheless a complex, nonlinear random effects process. The methods are illustrated using the National Institutes of Health-AARP Diet and Health Study, where the latent variable is a dietary pattern score called the Healthy Eating Index-2005. We also show how our general model includes methods used in radiation epidemiology as a special case. Simulations are used to illustrate the methods.


Subject(s)
Models, Statistical; Regression Analysis; Computer Simulation; Feeding Behavior; Humans; Logistic Models; Nutrition Surveys/statistics & numerical data
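Moment reconstruction in its simplest classical, homoscedastic form (matching the conditional mean and variance of the latent variable given the response) can be sketched as follows. This illustrates the Freedman et al. idea, not the article's nonlinear random-effects generalization; the function name is an assumption.

```python
import numpy as np

def moment_reconstruct(w, y, sigma_u):
    """Moment reconstruction for classical error w = x + u, u ~ N(0, sigma_u^2),
    with a discrete outcome y. Within each level of y, rescale w around its
    conditional mean so the proxy matches E(X|Y) and Var(X|Y) = Var(W|Y) - sigma_u^2."""
    w = np.asarray(w, dtype=float)
    y = np.asarray(y)
    x_mr = np.empty_like(w)
    for level in np.unique(y):
        idx = (y == level)
        m, v_w = w[idx].mean(), w[idx].var(ddof=1)
        v_x = max(v_w - sigma_u**2, 0.0)  # conditional variance of true exposure
        x_mr[idx] = m + np.sqrt(v_x / v_w) * (w[idx] - m)
    return x_mr
```

Because the proxy reproduces the first two conditional moments of the true exposure given the response, plugging it into a downstream regression largely removes the attenuation that the raw error-prone measurement would cause.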