Results 1 - 20 of 27
1.
Article in English | MEDLINE | ID: mdl-32742547

ABSTRACT

The Summer Institutes on Scientific Teaching (SI) is a faculty development workshop in which science, technology, engineering, and mathematics (STEM) instructors, particularly from biology, are trained in the Scientific Teaching (ST) pedagogy. While participants have generally reported positive experiences, we aimed to assess how the SI affected participants' teaching practices. Building on a previously developed taxonomy of ST practices, we surveyed SI participants from the 2004-2014 SI classes regarding specific ST practices. Participants' self-reported use and implementation of ST practices increased immediately after SI attendance as well as over a longer time frame, suggesting that implementation persisted and even increased with time. However, instructors reported implementation gains for some practices more than others. The practices with the highest gains were engaging students in their own learning, using learning goals in course design, employing formative assessment, developing overarching course learning goals, representing science as a process, and facilitating group discussion activities. We propose that the ST practices showing the greatest gains may serve as beneficial focal points for professional development programs, while practices with smaller gains may require modified dissemination approaches or support structures.

2.
J Clin Transl Sci ; 4(1): 74, 2020 Feb.
Article in English | MEDLINE | ID: mdl-32257415

ABSTRACT

[This corrects the article DOI: 10.1017/cts.2019.387.].

3.
J Clin Transl Sci ; 5(1): e22, 2020 Jul 22.
Article in English | MEDLINE | ID: mdl-33948245

ABSTRACT

The critical processes driving successful research translation remain understudied. We describe a mixed-method case study protocol for analyzing translational research that has led to the successful development and implementation of innovative health interventions. An overarching goal of these case studies is to describe systematically the chain of events between basic, fundamental scientific discoveries and the adoption of evidence-based health applications, including description of varied, long-term impacts. The case study approach isolates many of the key factors that enable the successful translation of research into practice and provides compelling evidence connecting the intervention to measurable changes in health and medical practice, public health outcomes, and other broader societal impacts. The goal of disseminating this protocol is to systematize a rigorous approach, which can enhance reproducibility, promote the development of a large collection of comparable studies, and enable cross-case analyses. This approach, an application of the "science of translational science," will lead to a better understanding of key research process markers, timelines, and potential points of leverage for intervention that may help facilitate decisions, processes, and policies to speed the sustainable translational process. Case studies are effective communication vehicles to demonstrate both accountability and the impacts of the public's investment in research.

4.
J Appl Gerontol ; 39(6): 677-680, 2020 06.
Article in English | MEDLINE | ID: mdl-30058433

ABSTRACT

Objectives: The Cornell Research-to-Practice (RTP) Consensus Workshop Model is a strategy for bridging the gap between aging research and practice but lacks a technique for evaluating the relative importance of ideas. This project assessed the feasibility of adding a quantitative survey to the RTP model to address this gap. Method: Older adults with cancer (OACs), OAC caregivers, researchers, clinicians, and advocacy organization representatives participated in a RTP workshop on implementing psychological interventions for OACs. Following an in-person workshop, participants completed surveys assessing the relative importance of barriers and strategies for psychological intervention implementation. Results: Seventeen of 35 participants completed the survey, the majority of whom were likely clinicians. Barriers and strategies to implementation rated as having the greatest impact were associated with the care team and institutional factors. Conclusion: Quantitative ratings add novel information to the RTP model that could potentially enhance the model's impact on aging research and practice.


Subject(s)
Aging , Neoplasms/psychology , Aged , Consensus , Geriatrics , Humans , Surveys and Questionnaires
5.
J Clin Transl Sci ; 3(2-3): 59-64, 2019 Jun.
Article in English | MEDLINE | ID: mdl-31660229

ABSTRACT

The purpose of the article is to describe the progress of the Clinical and Translational Science Award (CTSA) Program to address the evaluation-related recommendations made by the 2013 Institute of Medicine's review of the CTSA Program and guidelines published in CTS Journal the same year (Trochim et al., Clinical and Translational Science 2013; 6(4): 303-309). We utilize data from a 2018 national survey of evaluators administered to all 64 CTSA hubs and a content analysis of the role of evaluation in the CTSA Program Funding Opportunity Announcements to document progress. We present four new opportunities for further strengthening CTSA evaluation efforts: (1) continue to build the collaborative evaluation infrastructure at local and national levels; (2) make better use of existing data; (3) strengthen and augment the common metrics initiative; and (4) pursue internal and external opportunities to evaluate the CTSA program at the national level. This article will be of significant interest to the funders of the CTSA Program and the multiple stakeholders in the larger consortium and will promote dialog from the broad range of CTSA stakeholders about further strengthening the CTSA Program's evaluation.

6.
Eval Program Plann ; 60: 176-185, 2017 02.
Article in English | MEDLINE | ID: mdl-27596122

ABSTRACT

This paper considers the origins and development of the concept mapping methodology, a summary of its growth, and its influence in a variety of fields. From initial discussions with graduate students, through the rise of the theory-driven approach to program evaluation and the development of a theoretical framework for conceptualization methodology, the paper highlights some of the key early efforts and pilot projects that culminated in a 1989 special issue on the method in Evaluation and Program Planning that brought the method to the attention of the field of evaluation. The paper details the thinking that led to the standard version of the method (the analytic sequence, "bridging" index, and pattern matching) and the development of the software for accomplishing it. A bibliometric analysis shows that the rate of citation continues to increase, where it has grown geographically and institutionally, that the method has been used in a wide variety of disciplines and specialties, and that the literature had an influence on the field. The article concludes with a critical appraisal of some of the key aspects of the approach that warrant further development.


Subject(s)
Cluster Analysis , Empirical Research , Group Processes , Research Design , Bibliometrics , Cooperative Behavior , History, 20th Century , Humans , Reproducibility of Results , Software Design
7.
Eval Program Plann ; 60: 166-175, 2017 02.
Article in English | MEDLINE | ID: mdl-27780609

ABSTRACT

Concept mapping was developed in the 1980s as a unique integration of qualitative (group process, brainstorming, unstructured sorting, interpretation) and quantitative (multidimensional scaling, hierarchical cluster analysis) methods designed to enable a group of people to articulate and depict graphically a coherent conceptual framework or model of any topic or issue of interest. This introduction provides the basic definition and description of the methodology for the newcomer and describes the steps typically followed in its most standard canonical form (preparation, generation, structuring, representation, interpretation and utilization). It also introduces this special issue which reviews the history of the methodology, describes its use in a variety of contexts, shows the latest ways it can be integrated with other methodologies, considers methodological advances and developments, and sketches a vision of the future of the method's evolution.


Subject(s)
Cluster Analysis , Empirical Research , Group Processes , Research Design , Cooperative Behavior , Humans
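The canonical analytic sequence described above (unstructured sorting aggregated into a similarity matrix, multidimensional scaling, then hierarchical clustering) can be sketched in a few lines. The sort data, statement count, and cluster count below are invented for illustration, and classical (Torgerson) MDS stands in for whatever scaling variant a given concept-mapping implementation actually uses.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical sort data: each participant groups six brainstormed
# statements (0-5) into piles; co-sorted statements count as similar.
sorts = [
    [{0, 1, 2}, {3, 4, 5}],
    [{0, 1}, {2, 3}, {4, 5}],
    [{0, 1, 2}, {3, 4}, {5}],
]
n = 6

# Aggregate the individual sorts into a co-occurrence matrix,
# then convert to distances (frequently co-sorted -> small distance).
co = np.zeros((n, n))
for piles in sorts:
    for pile in piles:
        for i in pile:
            for j in pile:
                co[i, j] += 1
dist = len(sorts) - co

# Classical (Torgerson) MDS: double-center the squared distances and
# keep the top two eigenvectors as 2-D point coordinates.
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (dist ** 2) @ J
vals, vecs = np.linalg.eigh(B)
top = np.argsort(vals)[::-1][:2]
points = vecs[:, top] * np.sqrt(np.clip(vals[top], 0, None))

# Hierarchical clustering of the point map into candidate clusters.
Z = linkage(points, method="ward")
clusters = fcluster(Z, t=2, criterion="maxclust")
print(clusters)
```

With this toy data the method recovers the two groups of statements the participants implicitly agreed on, which is the core idea behind the interpretable cluster maps the methodology produces.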
8.
Clin Transl Sci ; 8(5): 451-9, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26073891

ABSTRACT

The National Institutes of Health (NIH) Roadmap for Medical Research initiative, funded by the NIH Common Fund and offered through the Clinical and Translational Science Award (CTSA) program, developed more than 60 unique models for achieving the NIH goal of accelerating discoveries toward better public health. The variety of these models enabled participating academic centers to experiment with different approaches to fit their research environment. A central challenge related to the diversity of approaches is the ability to determine the success and contribution of each model. This paper describes the effort by the Evaluation Key Function Committee to develop and test a methodology for identifying a set of common metrics to assess the efficiency of clinical research processes and for pilot testing these processes for collecting and analyzing metrics. The project involved more than one-fourth of all CTSAs and resulted in useful information regarding the challenges in developing common metrics, the complexity and costs of acquiring data for the metrics, and limitations on the utility of the metrics in assessing clinical research performance. The results of this process led to the identification of lessons learned and recommendations for development and use of common metrics to evaluate the CTSA effort.


Subject(s)
Clinical Trials as Topic/methods , Clinical Trials as Topic/standards , Research Design/standards , Research Support as Topic/statistics & numerical data , Translational Research, Biomedical/methods , Translational Research, Biomedical/standards , Awards and Prizes , Benchmarking/standards , Clinical Trials as Topic/economics , Ethics Committees, Research/standards , Feasibility Studies , Humans , National Institutes of Health (U.S.) , Pilot Projects , Research Support as Topic/economics , Time Factors , Translational Research, Biomedical/economics , United States
9.
Eval Program Plann ; 45: 127-39, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24780281

ABSTRACT

Evolutionary theory, developmental systems theory, and evolutionary epistemology provide deep theoretical foundations for understanding programs, their development over time, and the role of evaluation. This paper relates core concepts from these powerful bodies of theory to program evaluation. Evolutionary Evaluation is operationalized in terms of program and evaluation evolutionary phases, which are in turn aligned with multiple types of validity. The model of Evolutionary Evaluation incorporates Chen's conceptualization of bottom-up versus top-down program development. The resulting framework has important implications for many program management and evaluation issues. The paper illustrates how an Evolutionary Evaluation perspective can illuminate important controversies in evaluation using the example of the appropriate role of randomized controlled trials, an example that encourages a rethinking of "evidence-based programs". From an Evolutionary Evaluation perspective, prevailing interpretations of rigor and mandates for evidence-based programs pose significant challenges to program evolution. This perspective also illuminates the consequences of misalignment between program and evaluation phases; the importance of supporting both researcher-derived and practitioner-derived programs; and the need for variation and evolutionary phase diversity within portfolios of programs.


Subject(s)
Program Development/methods , Program Evaluation/methods , Research Design , Systems Theory , Humans , Randomized Controlled Trials as Topic
10.
Clin Transl Sci ; 6(4): 303-9, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23919366

ABSTRACT

The National Center for Advancing Translational Sciences (NCATS), a part of the National Institutes of Health, currently funds the Clinical and Translational Science Awards (CTSAs), a national consortium of 61 medical research institutions in 30 states and the District of Columbia. The program seeks to transform the way biomedical research is conducted, speed the translation of laboratory discoveries into treatments for patients, engage communities in clinical research efforts, and train a new generation of clinical and translational researchers. An endeavor as ambitious and complex as the CTSA program requires high-quality evaluations in order to show that the program is well implemented, efficiently managed, and demonstrably effective. In this paper, the Evaluation Key Function Committee of the CTSA Consortium presents an overall framework for evaluating the CTSA program and offers policies to guide the evaluation work. The guidelines set forth are designed to serve as a tool for education within the CTSA community by illuminating key issues and practices that should be considered during evaluation planning, implementation, and utilization. Additionally, these guidelines can provide a basis for ongoing discussions about how the principles articulated in this paper can most effectively be translated into operational reality.


Subject(s)
Guidelines as Topic , Program Evaluation , Translational Research, Biomedical , Awards and Prizes , Policy , Translational Research, Biomedical/organization & administration
11.
Eval Health Prof ; 36(4): 478-91, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23925706

ABSTRACT

Assessing the value of clinical and translational research funding on accelerating the translation of scientific knowledge is a fundamental issue faced by the National Institutes of Health (NIH) and its Clinical and Translational Awards (CTSAs). To address this issue, the authors propose a model for measuring the return on investment (ROI) of one key CTSA program, the clinical research unit (CRU). By estimating the economic and social inputs and outputs of this program, this model produces multiple levels of ROI: investigator, program, and institutional estimates. A methodology, or evaluation protocol, is proposed to assess the value of this CTSA function, with specific objectives, methods, descriptions of the data to be collected, and how data are to be filtered, analyzed, and evaluated. This article provides an approach CTSAs could use to assess the economic and social returns on NIH and institutional investments in these critical activities.


Subject(s)
Investments/economics , Models, Economic , Program Evaluation/economics , Translational Research, Biomedical/economics , Awards and Prizes , Humans , National Institutes of Health (U.S.) , United States
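The multi-level ROI roll-up this abstract describes (investigator, program, and institutional estimates) can be illustrated with a minimal sketch. The figures and the simple (returns - investment) / investment formula are assumptions for illustration only; the proposed model estimates economic and social inputs and outputs far more carefully.

```python
# Hypothetical project-level data for a clinical research unit (CRU).
projects = [
    {"investigator": "A", "invested": 120_000, "returns": 180_000},
    {"investigator": "A", "invested": 80_000,  "returns": 70_000},
    {"investigator": "B", "invested": 200_000, "returns": 320_000},
]

def roi(invested, returned):
    """Simple return on investment: net gain relative to outlay."""
    return (returned - invested) / invested

# Investigator-level estimates: aggregate each investigator's projects.
by_inv = {}
for p in projects:
    tot = by_inv.setdefault(p["investigator"], {"invested": 0, "returns": 0})
    tot["invested"] += p["invested"]
    tot["returns"] += p["returns"]
for name, t in sorted(by_inv.items()):
    print(name, round(roi(t["invested"], t["returns"]), 3))

# Program-level estimate: aggregate across all projects.
total_in = sum(p["invested"] for p in projects)
total_out = sum(p["returns"] for p in projects)
print("program", round(roi(total_in, total_out), 3))
```

The same roll-up pattern extends upward to the institutional level by aggregating across programs.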
12.
Prog Community Health Partnersh ; 6(3): 311-20, 2012.
Article in English | MEDLINE | ID: mdl-22982844

ABSTRACT

BACKGROUND: Community engagement has been a cornerstone of National Institute of Allergy and Infectious Diseases (NIAID)'s HIV/AIDS clinical trials programs since 1990. Stakeholders now consider this critical to success, hence the impetus to develop evaluation approaches. OBJECTIVES: The purpose was to assess the extent to which community advisory boards (CABs) at HIV/AIDS trials sites are being integrated into research activities. METHODS: CABs and research staff (RS) at NIAID research sites were surveyed for how each viewed (a) the frequency of activities indicative of community involvement, (b) the means for identifying, prioritizing, and supporting CAB needs, and (c) mission and operational challenges. RESULTS: Overall, CABs and RS share similar views about the frequency of community involvement activities. Cluster analysis reveals three groups of sites based on activity frequency ratings, including a group notable for CAB-RS discordance. CONCLUSIONS: Assessing differences between community and researcher perceptions about the frequency of and challenges posed by specific engagement activities may prove useful in developing evaluation tools for assessing community engagement in collaborative research settings.


Subject(s)
Advisory Committees/organization & administration , Clinical Trials as Topic/methods , Community-Institutional Relations , HIV Infections/therapy , National Institute of Allergy and Infectious Diseases (U.S.)/organization & administration , Acquired Immunodeficiency Syndrome/therapy , Communication , Community-Based Participatory Research/organization & administration , Cooperative Behavior , Humans , Needs Assessment , United States
14.
Sci Transl Med ; 4(118): 118cm2, 2012 Jan 25.
Article in English | MEDLINE | ID: mdl-22277966

ABSTRACT

Clinical research is burdened by inefficiencies and complexities and has a poor record of trial completion. The Clinical and Translational Science Award (CTSA) Consortium, including more than 60 clinical research institutions, supports a unified national effort to become, in effect, a virtual national laboratory designed to identify, implement, evaluate, and extend process improvements across all parts of clinical research, from conception to completion. If adequately supported by academic health centers, industry, and funding agencies, the Consortium could become a test bed for improvements that can dramatically reduce wasteful complexity, thus increasing the likelihood of clinical trial completion.


Subject(s)
Laboratories/organization & administration , Translational Research, Biomedical/organization & administration , User-Computer Interface , Awards and Prizes , Clinical Trials as Topic/economics , Humans , Laboratories/economics , Translational Research, Biomedical/economics
15.
Stat Med ; 30(23): 2778-82; author reply 2783-4, 2011 Oct 15.
Article in English | MEDLINE | ID: mdl-21953713

ABSTRACT

The paper 'Evaluation Metrics for Biostatistical and Epidemiological Collaborations' of Rubio et al. represents an important initial advance in the evaluation of biostatistics, epidemiology, and research design (BERD). The authors present a sensible three-domain model (collaboration with investigators, application of BERD-related methods, and discovery of new BERD methodologies), rightly acknowledge the importance of team science, and break new ground in illustrating that the Clinical and Translational Science Awards can function as a kind of national laboratory for the development and exploration of measures and metrics. Building upon these gains, there are several future considerations worthy of subsequent serious attention: strengthening the connection between BERD evaluation and both the science of team science and the field of evaluation; facing the challenges of operationalization of the conceptual domains; augmenting the work of Rubio et al. with standard evaluative models; and anticipating the need for multiplistic mixed methods and experimental and quasi-experimental complements to the proposed BERD metrics. Several common pitfalls will also be important to avoid, including the tendency to conflate the meaning of 'metrics' and 'measures' and the potential for a premature rush to adopt national 'standards' before adequately pilot testing the initial set of methods they have worked so diligently to develop.


Subject(s)
Biostatistics/methods , Epidemiologic Methods , Research Design/standards , Humans
16.
PLoS One ; 6(3): e17428, 2011 Mar 04.
Article in English | MEDLINE | ID: mdl-21394198

ABSTRACT

Evaluative bibliometrics uses advanced techniques to assess the impact of scholarly work in the context of other scientific work and usually compares the relative scientific contributions of research groups or institutions. Using publications from the National Institute of Allergy and Infectious Diseases (NIAID) HIV/AIDS extramural clinical trials networks, we assessed the presence, performance, and impact of papers published in 2006-2008. Through this approach, we sought to expand traditional bibliometric analyses beyond citation counts to include normative comparisons across journals and fields, visualization of co-authorship across the networks, and assessment of the inclusion of publications in reviews and syntheses. Specifically, we examined the research output of the networks in terms of the a) presence of papers in the scientific journal hierarchy ranked on the basis of journal influence measures, b) performance of publications on traditional bibliometric measures, and c) impact of publications in comparisons with similar publications worldwide, adjusted for journals and fields. We also examined collaboration and interdisciplinarity across the initiative, through network analysis and modeling of co-authorship patterns. Finally, we explored the uptake of network produced publications in research reviews and syntheses. Overall, the results suggest the networks are producing highly recognized work, engaging in extensive interdisciplinary collaborations, and having an impact across several areas of HIV-related science. The strengths and limitations of the approach for evaluation and monitoring research initiatives are discussed.


Subject(s)
Acquired Immunodeficiency Syndrome , Bibliometrics , Biomedical Research/standards , Clinical Trials as Topic/standards , National Institute of Allergy and Infectious Diseases (U.S.) , National Institutes of Health (U.S.) , Cooperative Behavior , Evaluation Studies as Topic , Interdisciplinary Studies/standards , Journal Impact Factor , Periodicals as Topic/standards , Program Evaluation , United States
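One common way to normalize citation counts across journals and fields, in the spirit of the adjusted impact comparisons this abstract describes, is a mean normalized citation score: each paper's citations divided by the mean citations of comparable papers (same field and publication year), averaged over the set. The baselines below are invented for illustration, and the study's own normalization may well differ.

```python
# Hypothetical publication records: observed citations and the mean
# citations of comparable papers worldwide (field/year baseline).
papers = [
    {"cites": 42, "field_baseline": 21.0},
    {"cites": 10, "field_baseline": 20.0},
    {"cites": 90, "field_baseline": 30.0},
]

# Per-paper normalized impact: ratio to the field baseline.
ratios = [p["cites"] / p["field_baseline"] for p in papers]

# Mean normalized citation score for the portfolio;
# values above 1 indicate above-baseline impact.
mncs = sum(ratios) / len(ratios)
print(round(mncs, 3))
```

This kind of ratio is what lets a heavily cited paper in a low-citation field register as more impactful than a moderately cited paper in a citation-dense one.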
17.
Res Eval ; 19(4): 239-250, 2010 Oct 01.
Article in English | MEDLINE | ID: mdl-21552512

ABSTRACT

New discoveries in basic science are creating extraordinary opportunities to design novel biomedical preventions and therapeutics for human disease. But the clinical evaluation of these new interventions is, in many instances, being hindered by a variety of legal, regulatory, policy and operational factors, few of which enhance research quality, the safety of study participants or research ethics. With the goal of helping increase the efficiency and effectiveness of clinical research, we have examined how the integration of utilization-focused evaluation with elements of business process modeling can reveal opportunities for systematic improvements in clinical research. Using data from the NIH global HIV/AIDS clinical trials networks, we analyzed the absolute and relative times required to traverse defined phases associated with specific activities within the clinical protocol lifecycle. Using simple median duration and Kaplan-Meier survival analysis, we show how such time-based analyses can provide a rationale for the prioritization of research process analysis and re-engineering, as well as a means for statistically assessing the impact of policy modifications, resource utilization, re-engineered processes and best practices. Successfully applied, this approach can help researchers be more efficient in capitalizing on new science to speed the development of improved interventions for human disease.
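The median-duration and Kaplan-Meier analysis of protocol lifecycle phases can be sketched as below. The durations and censoring flags are hypothetical; in the actual analysis each record would be the time a protocol spent in a defined lifecycle phase, with protocols still in that phase treated as censored.

```python
import numpy as np

# Hypothetical days spent in one protocol phase (e.g., scientific review
# to activation); completed=0 marks protocols still open (censored).
durations = np.array([30, 45, 45, 60, 90, 120, 150, 210])
completed = np.array([1,  1,  1,  1,  0,  1,   0,   1])

print(f"median duration: {np.median(durations):.0f} days")

# Kaplan-Meier estimate: at each completion time, multiply the survival
# probability S(t) by (1 - events / number still at risk).
order = np.argsort(durations, kind="stable")
surv = 1.0
at_risk = len(durations)
for t, done in zip(durations[order], completed[order]):
    if done:
        surv *= 1 - 1 / at_risk
        print(f"t={t:4d} days  S(t)={surv:.3f}")
    at_risk -= 1
```

Comparing such curves before and after a policy change is what gives the statistical handle on re-engineering impact that the abstract describes.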

18.
Health Res Policy Syst ; 7: 12, 2009 May 21.
Article in English | MEDLINE | ID: mdl-19460164

ABSTRACT

Globally, health research organizations are called upon to re-examine their policies and practices to more efficiently and effectively address current scientific and social needs, as well as increasing public demands for accountability. Through a case study approach, the authors examine an effort undertaken by the National Institute of Allergy & Infectious Diseases (part of the National Institutes of Health, Department of Health & Human Services, United States Government) to develop an evaluation system for its recently restructured HIV/AIDS clinical trials program. The challenges in designing, operationalizing, and managing global clinical trials programs are considered in the context of large scale scientific research initiatives. Through a process of extensive stakeholder input, a framework of success factors was developed that enables both a prospective view of the elements that must be addressed in an evaluation of this research and a current state assessment of the extent to which the goals of the restructuring are understood by stakeholders across the DAIDS clinical research networks.

19.
Am J Prev Med ; 35(2 Suppl): S151-60, 2008 Aug.
Article in English | MEDLINE | ID: mdl-18619395

ABSTRACT

PURPOSE: As the science of team science evolves, the development of measures that assess important processes related to working in transdisciplinary teams is critical. Therefore, the purpose of this paper is to present the psychometric properties of scales measuring collaborative processes and transdisciplinary integration. METHODS: Two hundred sixteen researchers and research staff participating in the Transdisciplinary Tobacco Use Research Centers (TTURC) Initiative completed the TTURC researcher survey. Confirmatory factor analyses were used to verify the hypothesized factor structures. Descriptive data pertinent to these scales and their associations with other constructs were included to further examine the properties of the scales. RESULTS: Overall, the hypothesized factor structures, with some minor modifications, were validated. A total of four scales were developed, three to assess collaborative processes (satisfaction with the collaboration, impact of collaboration, trust and respect) and one to assess transdisciplinary integration. All scales were found to have adequate internal consistency (i.e., Cronbach alphas were all >0.70); were correlated with intermediate markers of collaborations (e.g., the collaboration and transdisciplinary-integration scales were positively associated with the perception of a center's making good progress in creating new methods, new science and models, and new interventions); and showed some ability to detect group differences. CONCLUSIONS: This paper provides valid tools that can be utilized to examine the underlying processes of team science, an important step toward advancing the science of team science.


Subject(s)
Cooperative Behavior , Group Processes , Interdisciplinary Communication , Psychometrics/statistics & numerical data , Research Personnel/statistics & numerical data , Attitude , Humans , Interprofessional Relations , Models, Psychological , Personal Satisfaction , Program Evaluation , Reproducibility of Results , Surveys and Questionnaires
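Cronbach's alpha, the internal-consistency criterion this abstract reports (all scales >0.70), has a short closed form: alpha = (k/(k-1)) * (1 - sum of item variances / variance of total scores). The ratings below are invented illustration data, not the TTURC survey.

```python
import numpy as np

# Hypothetical 5-point ratings: 6 respondents x 4 items of a single
# collaboration scale (illustrative data only).
X = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 2, 3, 3],
])

k = X.shape[1]                              # number of items
item_var = X.var(axis=0, ddof=1).sum()      # sum of per-item variances
total_var = X.sum(axis=1).var(ddof=1)       # variance of respondents' totals
alpha = k / (k - 1) * (1 - item_var / total_var)
print(f"alpha = {alpha:.3f}")
```

Alpha rises when items covary strongly relative to their individual noise, which is why it serves as the standard check that a scale's items are measuring one underlying construct.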
20.
Am J Prev Med ; 35(2 Suppl): S196-203, 2008 Aug.
Article in English | MEDLINE | ID: mdl-18619400

ABSTRACT

Improving population health requires understanding and changing societal structures and functions, but countervailing forces sometimes undermine those changes, thus reflecting the adaptive complexity inherent in public health systems. The purpose of this paper is to propose systems thinking as a conceptual rubric for the practice of team science in public health, and transdisciplinary, translational research as a catalyst for promoting the functional efficiency of science. The paper lays a foundation for the conceptual understanding of systems thinking and transdisciplinary research, and provides illustrative examples within and beyond public health. A set of recommendations for a systems-centric approach to translational science is presented.


Subject(s)
Cooperative Behavior , Group Processes , Interdisciplinary Communication , Public Health , Systems Theory , Humans , Public Health Administration , Science/organization & administration