1.
Eval Program Plann ; 80: 101799, 2020 Feb 24.
Article in English | MEDLINE | ID: mdl-32106004

ABSTRACT

Value for Money (VfM) is an evaluative question about the merit, worth, and significance of resource use in social programs. Although VfM is a critical component of evidence-based programming, it is often overlooked or avoided by evaluators and decision-makers. A framework for evaluating VfM across the dimensions of economy, effectiveness, efficiency, and equity has emerged in response to limitations of traditional economic evaluation. This framework for assessing VfM integrates methods for engaging stakeholders in evaluative thinking to increase acceptance and utilization of evaluations that address questions of resource use. In this review, we synthesize literature on the VfM framework and position it within a broader theory of Utilization-Focused Evaluation (UFE). We then examine mechanisms through which the VfM framework may contribute to increased evaluation use. Finally, we outline avenues for future research on VfM evaluation.

2.
Eval Program Plann ; 76: 101677, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31302512

ABSTRACT

Several evaluation models exist for investigating unintended outcomes, including goal-free and systems evaluation. Yet methods for collecting and analyzing data on unintended outcomes remain under-utilized. Ripple Effects Mapping (REM) is a promising qualitative evaluation method with a wide range of program planning and evaluation applications. In situations where program results are likely to occur over time within complex settings, this method is useful for uncovering both intended and unintended outcomes. REM applies an Appreciative Inquiry facilitation technique to engage stakeholders in visually mapping sequences of program outcomes. Although it has been used to evaluate community development and health promotion initiatives, further methodological guidance for applying REM is still needed. The purpose of this paper is to contribute to the methodological development of evaluating unintended outcomes and extend the foundations of REM by describing steps for integrating it with grounded theory.
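A mapping session of this kind yields chains of outcomes that ripple outward from program activities. As an illustration only (the paper describes a facilitated, visual group process, not a software tool), such a ripple map can be represented as a simple tree in which each outcome node records whether it was intended and carries the grounded-theory codes assigned during analysis. The class, its fields, and the example chain below are hypothetical.

```python
# Hypothetical sketch: a ripple-effect map stored as a tree of outcome nodes.
# Field names and the example chain are illustrative, not taken from the paper.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Outcome:
    description: str                 # outcome reported by stakeholders
    intended: Optional[bool] = None  # None until classified during analysis
    codes: List[str] = field(default_factory=list)        # grounded-theory codes
    ripples: List["Outcome"] = field(default_factory=list)  # downstream effects

    def add_ripple(self, description: str) -> "Outcome":
        child = Outcome(description)
        self.ripples.append(child)
        return child

# Example chain elicited in a mapping session (illustrative data):
root = Outcome("Residents attended the community nutrition workshops", intended=True)
garden = root.add_ripple("Participants started a shared vegetable garden")
garden.intended = False              # an unintended outcome surfaced by the mapping
garden.codes.append("community capacity")
```

Traversing such a tree makes it straightforward to tally intended versus unintended outcomes at each ripple level once coding is complete.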


Subjects
Grounded Theory, Program Evaluation/methods, Humans
3.
J Psychosoc Nurs Ment Health Serv ; 56(4): 18-22, 2018 Apr 01.
Article in English | MEDLINE | ID: mdl-29328358

ABSTRACT

A faculty team developed the 4-week Recovery-Based Interprofessional Distance Education (RIDE) rotation for graduate students in their disciplines. The evaluation team identified the Team Development Measure (TDM) as a potential alternative to reflect team development during the RIDE rotation. The TDM, completed anonymously online, was piloted with the second student cohort (N = 18) to complete the RIDE rotation. The overall pretest mean was 60.73 points (SD = 11.85) out of a possible 100 points, indicating that students anticipated their RIDE team would function at a moderately high level during the 4-week rotation. The overall posttest mean, indicating student perceptions of actual team functioning, was 72.71 points (SD = 23.31), an average increase of 11.98 points. Although not statistically significant, Cohen's effect size (d = 0.43) indicates an observed difference of moderate magnitude. No other published work has used the TDM as a pre-/posttest measure of team development. The authors believe the TDM has several advantages as a measure of student response to interprofessional education offerings, particularly in graduate students with prior experience on health care teams. Further work is needed to validate and extend the findings of this pilot study. [Journal of Psychosocial Nursing and Mental Health Services, 56(4), 18-22.].
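For readers who want to retrace the effect-size arithmetic, the sketch below computes a pooled-SD Cohen's d from the pretest and posttest summary statistics reported above. The abstract does not state which variant of d was used, so the formula is an assumption, and, as the comments note, it does not reproduce the reported d = 0.43 exactly.

```python
# Sketch of a pooled-standard-deviation Cohen's d using the summary statistics
# reported in the abstract. Assumption: the article does not specify which d
# variant was used; this is the common pooled-SD form for two equal-sized sets
# of scores and gives a different value from the reported d = 0.43.
import math

def cohens_d(mean1: float, sd1: float, mean2: float, sd2: float) -> float:
    pooled_sd = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)  # equal group sizes assumed
    return (mean2 - mean1) / pooled_sd

d = cohens_d(60.73, 11.85, 72.71, 23.31)
print(round(d, 2))  # ~0.65 under this formula; the reported 0.43 presumably
                    # reflects a different variant (e.g., one based on the
                    # posttest SD or on paired difference scores)
```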


Subjects
Interprofessional Relations, Patient Care Team, Problem-Based Learning, Students, Health Occupations, Surveys and Questionnaires, Cooperative Behavior, Education, Graduate, Female, Humans, Male, Pilot Projects
4.
Postgrad Med J ; 91(1078): 423-30, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26253921

ABSTRACT

BACKGROUND: Although biostatistics and clinical epidemiology are essential for comprehending medical evidence, research has consistently shown low and variable knowledge among postgraduate medical trainees. At the same time, the complexity of statistical methods used in top-tier medical journals has increased. AIMS: To develop the Biostatistics and Clinical Epidemiology Skills (BACES) assessment by (1) establishing content validity evidence for the BACES; (2) examining the fit of the BACES items to an Item Response Theory (IRT) model; and (3) comparing IRT item estimates with traditional Classical Test Theory (CTT) indices. METHODS: Thirty multiple-choice questions were written to focus on interpreting clinical epidemiological and statistical methods. Content validity was assessed through a four-person expert review. The instrument was administered to 150 residents across three academic medical centres in the southern USA during the autumn of 2013. Responses were fit to a two-parameter logistic IRT model, and the item difficulty, discrimination, and examinee ability estimates were compared with traditional CTT item statistics. RESULTS: 147 assessments were used for analysis (mean (SD) score 14.38 (3.38)). Twenty-six items, 13 devoted to statistics and 13 to clinical epidemiology, successfully fit a two-parameter logistic IRT model. These estimates also correlated significantly with their comparable CTT values. CONCLUSIONS: The strength of the BACES instrument was supported by (1) establishing content validity evidence; (2) fitting a sample of 147 residents' responses to an IRT model; and (3) correlating the IRT estimates with their CTT values, making it a flexible yet rigorous instrument for measuring biostatistical and clinical epidemiological knowledge.
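For reference, the two-parameter logistic model named in the abstract expresses the probability that examinee j answers item i correctly in terms of the examinee's ability θ_j, the item discrimination a_i, and the item difficulty b_i (standard formulation; the abstract itself does not reproduce the equation):

```latex
P(X_{ij} = 1 \mid \theta_j) = \frac{1}{1 + \exp\left[-a_i\,(\theta_j - b_i)\right]}
```

The fitted a_i and b_i are the IRT item estimates that the abstract reports comparing with CTT item statistics, which conventionally means the proportion of examinees answering correctly (difficulty) and the point-biserial item-total correlation (discrimination).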


Subjects
Biostatistics, Clinical Competence/standards, Education, Medical, Continuing, Educational Measurement, Electronic Data Processing, Evidence-Based Practice, Humans, Iowa
5.
Eval Program Plann ; 52: 142-7, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26051793

ABSTRACT

Reflective practice (RP), one of six essential competency domains in evaluation identified by Stevahn, King, Ghere, and Minnema (2005), refers to thinking critically about one's evaluation practice, alone or with other people, and using critical insights to improve one's practice. Currently, evaluators have minimal guidance in navigating this essential professional competency, professed to be a necessary part of their practice. This article focuses on how RP can serve as a tool for evaluators through the use of the "DATA" integrated RP framework, developed by Peters (1991, 2009). DATA is an acronym with each letter standing for a different step in the process of reflective practice. The "D" step of the acronym focuses on (D)escribing what is or has been happening in practice. The "A" step refers to (A)nalyzing the current state of practice: why is this happening the way it is? The "T" concentrates on a practice-oriented form of (T)heorizing, which comes from analysis and serves as a basis for the resulting (A)ct. The last "A" focuses on the specifics of an action plan to change one's evaluation practice in light of the practical theory developed through theorizing. This paper describes the DATA model and introduces the application of the framework in a practice context.


Subjects
Professional Competence/standards, Program Evaluation/standards, Humans, Judgment, Models, Organizational, Program Evaluation/methods