ABSTRACT
In this study, we investigated the effect of the Positive Behavior Skills (PBS) program on students' attendance and academic learning. We used propensity score matching and growth modeling to compare PBS and non-PBS schools on (a) English language arts (ELA), (b) mathematics, and (c) attendance. PBS schools had a statistically significantly higher annual growth rate in attendance than the matched non-PBS schools, with attendance in PBS schools growing 0.38 percentage points faster per year. Due to the word limit, the results for the other two school outcomes (ELA, mathematics) were not presented. Implications of the study were discussed given the nature of the PBS program and the general context of the literature.
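The matching step named above can be sketched in code. This is a minimal, hypothetical illustration of one-to-one nearest-neighbor propensity score matching without replacement; the school labels and scores are invented, and in practice the scores would come from a logistic model of PBS adoption on school covariates, not be supplied by hand.

```python
def match_nearest(treated, controls):
    """Greedy 1:1 nearest-neighbor matching without replacement.

    treated / controls: dicts mapping unit id -> propensity score.
    Returns a dict pairing each treated unit with its closest
    still-unmatched control on the propensity score.
    """
    pool = dict(controls)  # copy so we can remove matched controls
    pairs = {}
    for t_id, t_score in sorted(treated.items(), key=lambda kv: kv[1]):
        if not pool:
            break
        # closest remaining control by absolute score distance
        c_id = min(pool, key=lambda c: abs(pool[c] - t_score))
        pairs[t_id] = c_id
        del pool[c_id]
    return pairs


# Hypothetical PBS schools and a hypothetical non-PBS pool
pbs = {"A": 0.62, "B": 0.45, "C": 0.80}
non_pbs = {"X": 0.60, "Y": 0.48, "Z": 0.79, "W": 0.30}

print(match_nearest(pbs, non_pbs))  # -> {'B': 'Y', 'A': 'X', 'C': 'Z'}
```

Matching in ascending score order and removing each matched control keeps any one control from serving as the counterfactual for several treated schools; the unmatched control ("W") is simply dropped from the comparison.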
Subject(s)
Language, Schools, Humans, Mathematics, Program Evaluation, Students
ABSTRACT
Portfolio evaluation is the evaluation of multiple projects with a common purpose. While logic models have been used in many ways to support evaluation, and data visualization has been used widely to present and communicate evaluation findings, the literature on combining logic models with data visualization for portfolio evaluation is surprisingly limited. Using data from a sample portfolio of 209 projects that aims to improve the system of early care and education (ECE), this study illustrated how to use logic model and data visualization techniques to conduct a portfolio evaluation by answering two evaluation questions: "To what extent are the elements of a logic model (strategies, sub-strategies, activities, outcomes, and impacts) reflected in the sample portfolio?" and "Which dominant paths through the logic model were illuminated by the data visualization technique?" For the first question, the visualization technique illuminated several dominant strategies, sub-strategies, activities, and outcomes. For the second question, our visualization techniques made it convenient to identify critical paths through the logic model. Implications for both program evaluation and program planning were discussed.
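The tallying behind a "dominant paths" display can be sketched briefly. The project codings below are hypothetical (they are not the study's 209-project data), and the code shows only the counting step that a Sankey-style or flow visualization of a logic model typically sits on top of: each project is coded as a strategy-activity-outcome path, and path frequencies reveal the dominant routes.

```python
from collections import Counter

# Hypothetical codings: (strategy, activity, outcome) per project
projects = [
    ("workforce", "training", "teacher quality"),
    ("workforce", "training", "teacher quality"),
    ("access", "subsidies", "enrollment"),
    ("workforce", "coaching", "teacher quality"),
]

# Frequency of each full path through the logic model
path_counts = Counter(projects)

# The most common path is the dominant route for the portfolio
dominant = path_counts.most_common(1)[0]
print(dominant)  # -> (('workforce', 'training', 'teacher quality'), 2)
```

The same counts, aggregated pairwise (strategy to activity, activity to outcome), would supply the link widths for a flow diagram of the portfolio.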
Subject(s)
Logic, Models, Theoretical, Program Evaluation/methods, Humans, Organizational Objectives, Program Development
ABSTRACT
Outcome evaluation is central to program evaluation and has become increasingly important in the age of accountability. Typically, outcome evaluation is conducted for a single program from a single perspective. In real-life settings, however, many programs exist within a system, and their effects can be viewed from various perspectives. The authors illustrate a typology of program effects in a system, moving from the paradigm of a single program's single effect to that of a set of programs' multiple effects. Methodological implications are discussed.
Subject(s)
Behavioral Research/organization & administration, Education/organization & administration, Program Evaluation/methods, Behavioral Research/standards, Child, Child, Preschool, Education/standards, Female, Humans, Program Evaluation/standards, Quality Improvement/organization & administration, Reproducibility of Results, Socioeconomic Factors
ABSTRACT
Evaluation of program impact in the field of education has been a controversial topic over the years. Although randomized controlled trials have great advantages for causal inference, they often raise ethical and economic concerns in practice. As an alternative, well-designed quasi-experimental designs may provide valid evidence of program impact. In this article, we presented an evaluation case of a district-wide early learning improvement program. To strike a balance between practicability and academic rigor, we developed comparison groups from multiple perspectives and used a series of tests consistent with What Works Clearinghouse (WWC) 3.0 standards to reach the most valid comparisons. Implications for evaluation practice were discussed.
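One test commonly run when vetting a quasi-experimental comparison group is a baseline-equivalence check on a standardized mean difference; WWC-style standards treat baseline differences up to roughly 0.25 SD as acceptable with statistical adjustment. The sketch below (hypothetical data, pooled-SD formula) shows only that single check, not the study's full test series.

```python
import math


def standardized_mean_difference(treat, comp):
    """Baseline standardized mean difference using the pooled SD,
    as in WWC-style baseline-equivalence checks."""
    def mean(xs):
        return sum(xs) / len(xs)

    def ss(xs, m):  # sum of squared deviations
        return sum((x - m) ** 2 for x in xs)

    mt, mc = mean(treat), mean(comp)
    pooled_sd = math.sqrt((ss(treat, mt) + ss(comp, mc))
                          / (len(treat) + len(comp) - 2))
    return (mt - mc) / pooled_sd


# Hypothetical baseline scores for comparison and treatment schools
treatment = [2.0, 3.0, 4.0]
comparison = [1.0, 2.0, 3.0]

smd = standardized_mean_difference(treatment, comparison)
print(abs(smd) <= 0.25)  # equivalence check at the 0.25 SD threshold
```

Identical samples give an SMD of exactly 0; the hypothetical samples above differ by a full pooled SD and would fail the 0.25 threshold, flagging the comparison group as non-equivalent at baseline.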