Results 1 - 7 of 7
1.
Med Teach; 35(1): 15-26, 2013.
Article in English | MEDLINE | ID: mdl-23102151

ABSTRACT

BACKGROUND: Despite thousands of publications over the past 90 years on the assessment of teaching effectiveness, there is still confusion, misunderstanding, and hand-to-hand combat on several topics that seem to pop up over and over again on listservs, blogs, articles, books, and medical education/teaching conference programs. If you are measuring teaching performance in face-to-face, blended/hybrid, or online courses, then you are probably struggling with one or more of these topics or flashpoints. AIM: To decrease the popping and struggling by providing a state-of-the-art update of research and practices and a "consumer's guide to trouble-shooting these flashpoints." METHODS: Five flashpoints are defined, the salient issues and research described, and, finally, specific, concrete recommendations for moving forward are proffered. Those flashpoints are: (1) student ratings vs. multiple sources of evidence; (2) sources of evidence vs. decisions: which come first?; (3) quality of "home-grown" rating scales vs. commercially-developed scales; (4) paper-and-pencil vs. online scale administration; and (5) standardized vs. unstandardized online scale administrations. The first three relate to the sources of evidence chosen and the last two pertain to online administration issues. RESULTS: Many medical schools/colleges, and higher education in general, fall far short of their potential and the available technology to comprehensively assess teaching effectiveness. Specific recommendations were given to improve the quality and variety of the sources of evidence used for formative and summative decisions and their administration procedures. CONCLUSIONS: Multiple sources of evidence, collected through online administration when possible, can furnish a solid foundation from which to infer teaching effectiveness and contribute to fair and equitable decisions about faculty contract renewal, merit pay, and promotion and tenure.


Subject(s)
Faculty, Medical/standards; Teaching/standards; Education, Medical; Evaluation Studies as Topic; Humans
2.
Med Teach; 31(12): 1073-80, 2009 Dec.
Article in English | MEDLINE | ID: mdl-19995170

ABSTRACT

BACKGROUND: Student ratings have dominated as the primary and, frequently, only measure of teaching performance at colleges and universities for the past 50 years. Recently, there has been a trend toward augmenting those ratings with other data sources to broaden and deepen the evidence base. The 360° multisource feedback (MSF) model, used in management and industry for half a century and in clinical medicine for the last decade, seemed like a best fit to evaluate teaching performance and professionalism. AIM: To adapt the 360° MSF model to the assessment of teaching performance and professionalism of medical school faculty. METHODS: The salient characteristics of the MSF models in industry and medicine were extracted from the literature. These characteristics, along with 14 sources of evidence from eight possible raters (students, self, peers, outside experts, mentors, alumni, employers, and administrators) based on the research in higher education, were adapted to formative and summative decisions. RESULTS: Three 360° MSF models were generated for three different decisions: (1) formative decisions and feedback about teaching improvement; (2) summative decisions and feedback for merit pay and contract renewal; and (3) formative decisions and feedback about professional behaviors in the academic setting. The characteristics of each model were listed. Finally, a top-10 list of the most persistent and, perhaps, intractable psychometric issues in executing these models was suggested to guide future research. CONCLUSIONS: The 360° MSF model appears to be a useful framework for implementing a multisource evaluation of faculty teaching performance and professionalism in medical schools. This model can provide more accurate, reliable, fair, and equitable decisions than one based on a single source alone.
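
The abstract describes the 360° MSF models conceptually; as a minimal illustrative sketch (not the authors' scoring method), the Python snippet below shows one way ratings from several of the eight rater sources might be combined into a single weighted summative score. The source names, ratings, and weights are all hypothetical.

```python
# Illustrative sketch only: the article defines its 360-degree MSF models
# conceptually; this toy aggregation (source names, ratings, and weights
# all hypothetical) simply shows one way multiple rater sources could be
# combined into a single weighted summative score.
from statistics import mean

# Hypothetical mean ratings on a 1-5 scale from four of the eight
# possible rater sources named in the abstract.
ratings = {
    "students": [4.2, 4.5, 3.9],
    "peers": [4.0, 4.4],
    "self": [4.6],
    "administrators": [3.8],
}

# Hypothetical policy weights; a real summative model would set these
# deliberately and document them.
weights = {"students": 0.4, "peers": 0.3, "self": 0.1, "administrators": 0.2}

composite = sum(weights[s] * mean(vals) for s, vals in ratings.items())
print(f"Weighted composite teaching score: {composite:.2f}")
```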


Subject(s)
Education, Medical, Undergraduate/standards; Employee Performance Appraisal/methods; Faculty, Medical/standards; Professional Competence; Consumer Behavior; Education, Medical, Undergraduate/methods; Feedback; Humans; Psychometrics; Schools, Medical; Students, Medical; Teaching/standards; United States
3.
Cancer Nurs; 31(6): 452-61, 2008.
Article in English | MEDLINE | ID: mdl-18987512

ABSTRACT

The purpose of this cross-sectional, correlational study was to describe stomatitis-related pain in women with breast cancer undergoing autologous hematopoietic stem cell transplant. The hypotheses that significant, positive relationships would exist between oral pain and stomatitis, state anxiety, depression, and alteration in swallowing were tested. Stomatitis, the sensory dimension of oral pain, and state anxiety were hypothesized to most accurately predict overall oral pain intensity. Thirty-two women were recruited at 2 East Coast comprehensive cancer centers. Data were collected on bone marrow transplantation day +7 ± 24 hours using the Painometer, Oral Mucositis Index-20, Oral Assessment Guide, State-Trait Anxiety Inventory, and Beck Depression Inventory. Data analysis included descriptive statistics, correlations, and stepwise multiple regression. All participants had stomatitis; 47% had oral pain, with a subset reporting continuous moderate to severe oral pain despite pain management algorithms. Significant, positive associations were found between oral pain, stomatitis, and alteration in swallowing, and between oral pain during swallowing and alteration in swallowing. Oral pain was not significantly correlated with state anxiety or depression. Sensory and affective oral pain intensity most accurately predicted overall oral pain intensity. Future research needs to explore the factors that affect perception of and response to stomatitis-related oropharyngeal pain, as well as individual patient response to opioid treatment.
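
As a hedged illustration of the analytic steps the abstract names (correlations, then regression predicting overall oral pain intensity), the sketch below runs a Pearson correlation and an ordinary least squares fit on simulated data. The data are synthetic placeholders, the plain OLS stands in for the stepwise procedure, and nothing here reproduces the study's results.

```python
# Hypothetical sketch of the kind of analysis the abstract describes.
# All data below are simulated placeholders, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 32  # sample size matching the abstract
sensory = rng.normal(5, 2, n)                      # sensory pain intensity
affective = 0.8 * sensory + rng.normal(0, 1, n)    # affective pain intensity
overall = 0.6 * sensory + 0.3 * affective + rng.normal(0, 1, n)

# Bivariate correlation between one predictor and the outcome.
r, p = stats.pearsonr(sensory, overall)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")

# Ordinary least squares with the two strongest predictors, standing in
# for the stepwise procedure reported in the abstract.
X = np.column_stack([np.ones(n), sensory, affective])
beta, *_ = np.linalg.lstsq(X, overall, rcond=None)
print(f"intercept {beta[0]:.2f}, sensory {beta[1]:.2f}, affective {beta[2]:.2f}")
```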


Subject(s)
Breast Neoplasms/complications; Hematopoietic Stem Cell Transplantation/adverse effects; Pain/etiology; Stomatitis/etiology; Transplantation, Autologous/adverse effects; Acute Disease; Adult; Algorithms; Anxiety; Breast Neoplasms/therapy; Cross-Sectional Studies; Depression; Female; Health Status Indicators; Humans; Middle Aged; Pain Measurement; Psychological Tests; Psychometrics; Regression Analysis; Statistics as Topic; Stomatitis/complications
4.
J Health Adm Educ; 24(2): 97-116, 2007.
Article in English | MEDLINE | ID: mdl-18214074
5.
Arch Phys Med Rehabil; 86(10): 1901-9, 2005 Oct.
Article in English | MEDLINE | ID: mdl-16213229

ABSTRACT

OBJECTIVES: To assess the reliability and validity of the Pediatric Quality of Life Inventory, version 4.0 (PedsQL), and to compare them with those of the Behavior Rating Inventory of Executive Function (BRIEF) among children with traumatic brain injury (TBI). DESIGN: Prospective cohort study that documented the health-related quality of life of 391 children at 3 and 12 months postinjury. SETTING: Four level I pediatric trauma centers. PARTICIPANTS: Children (age range, 5-15 y) hospitalized with a TBI or an extremity fracture. INTERVENTIONS: Not applicable. MAIN OUTCOME MEASURES: Parent-reported PedsQL and BRIEF scale scores. RESULTS: Both the PedsQL and BRIEF scales showed good internal consistency (PedsQL alpha range, .74-.93; BRIEF alpha range, .82-.98) and test-retest reliability (PedsQL r range, .75-.90; BRIEF r range, .82-.92). Factor analysis revealed that most PedsQL items loaded most highly on their conceptually derived scale. The PedsQL cognitive function scale detected the largest differences among groups of children with varying severities of TBI, as well as parents' assessments of change in cognition postinjury. CONCLUSIONS: Although the reliability of the 2 instruments is comparable, the PedsQL discriminates better among children with TBI. The PedsQL is a promising instrument for measuring the health of children after TBI.
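
The two reliability indices reported above, Cronbach's alpha for internal consistency and Pearson r for test-retest reliability, can be computed as in the generic sketch below. The item responses are simulated for illustration; this is not PedsQL or BRIEF data, and the formula is the standard one, not anything specific to the study.

```python
# Generic sketch of the two reliability statistics named in the abstract,
# run on simulated item responses (not PedsQL or BRIEF data).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency for a respondents-by-items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(100, 1))                    # shared trait
responses = latent + rng.normal(0.0, 0.7, (100, 8))   # 8 correlated items
print(f"alpha = {cronbach_alpha(responses):.2f}")

# Test-retest reliability: correlate total scores across two occasions.
time1 = responses.sum(axis=1)
time2 = time1 + rng.normal(0, 1, 100)                 # simulated retest
print(f"test-retest r = {np.corrcoef(time1, time2)[0, 1]:.2f}")
```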


Subject(s)
Brain Injuries/complications; Quality of Life; Surveys and Questionnaires; Abbreviated Injury Scale; Adolescent; Brain Injuries/psychology; Child; Child, Preschool; Cognition Disorders/diagnosis; Extremities/injuries; Factor Analysis, Statistical; Female; Fractures, Bone/complications; Humans; Interviews as Topic; Male; Prospective Studies; Reproducibility of Results; United States
6.
Acad Med; 80(1): 66-71, 2005 Jan.
Article in English | MEDLINE | ID: mdl-15618097

ABSTRACT

"Mentor" is a term widely used in academic medicine but for which there is no consensus on an operational definition. Further, criteria are rarely reported for evaluating the effectiveness of mentoring. This article presents the work of an Ad Hoc Faculty Mentoring Committee whose tasks were to define "mentorship," specify concrete characteristics and responsibilities of mentors that are measurable, and develop new tools to evaluate the effectiveness of the mentoring relationship. The committee developed two tools: the Mentorship Profile Questionnaire, which describes the characteristics and outcome measures of the mentoring relationship from the perspective of the mentee, and the Mentorship Effectiveness Scale, a 12-item six-point agree-disagree-format Likert-type rating scale, which evaluates 12 behavioral characteristics of the mentor. These instruments are explained and copies are provided. Psychometric issues, including the importance of content-related validity evidence, response bias due to acquiescence and halo effects, and limitations on collecting reliability evidence, are examined in the context of the mentor-mentee relationship. Directions for future research are suggested.


Subject(s)
Faculty, Medical/standards; Interprofessional Relations; Mentors/psychology; Employee Performance Appraisal; Humans; Professional Role; Psychometrics/instrumentation; Social Responsibility; Surveys and Questionnaires
7.
Int J Nurs Educ Scholarsh; 1: Article 10, 2004.
Article in English | MEDLINE | ID: mdl-16646875

ABSTRACT

Peer observation of classroom and clinical teaching has received increased attention over the past decade in schools of nursing as a way to augment student ratings of teaching effectiveness. One essential ingredient is the scale used to evaluate performance. A five-step systematic procedure for adapting, writing, and building any peer observation scale is described. The differences between developing a classroom observation scale and an appraisal scale for observing clinical instructors are examined. Psychometric issues peculiar to observation scales are discussed in terms of content validity, eight types of response bias, and interobserver reliability. The application of the scales in one school of nursing, as part of a triangulation of methods with student ratings and the teaching portfolio, is illustrated. Copies of the scales are also provided.
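
Interobserver reliability, one of the psychometric issues raised above, is commonly indexed with Cohen's kappa, which corrects raw agreement for agreement expected by chance. The sketch below computes kappa for two hypothetical observers rating the same teaching episodes; the ratings and categories are invented for illustration and are not from the article.

```python
# Illustrative sketch of interobserver reliability via Cohen's kappa.
# The observers, categories, and ratings below are hypothetical.
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    """Chance-corrected agreement between two raters over the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

a = ["effective", "effective", "marginal", "effective", "marginal"]
b = ["effective", "marginal", "marginal", "effective", "marginal"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.62 for these toy ratings
```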


Subject(s)
Education, Nursing/standards; Peer Review; Teaching/standards; Faculty, Nursing/standards; Humans; Observer Variation; Psychometrics