2.
AEM Educ Train ; 1(3): 243-249, 2017 Jul.
Article in English | MEDLINE | ID: mdl-30051042

ABSTRACT

OBJECTIVES: Multisource feedback (MSF) has potential value in learner assessment, but it has been neither broadly implemented nor studied in emergency medicine (EM). This study aimed to adapt existing MSF instruments for emergency department implementation, measure feasibility, and collect initial validity evidence to support score interpretation for learner assessment.

METHODS: Residents from eight U.S. EM residency programs completed a self-assessment and were assessed by eight physicians, eight nonphysician colleagues, and 25 patients using unique instruments. The instruments included a five-point rating scale to assess interpersonal and communication skills, professionalism, systems-based practice, practice-based learning and improvement, and patient care. MSF feasibility was measured by the percentage of residents who collected the target number of instruments. To develop internal structure validity evidence, Cronbach's alpha was calculated as a measure of internal consistency.

RESULTS: A total of 125 residents collected a mean of 7.0 physician assessments (n = 752), 6.7 nonphysician assessments (n = 775), and 17.8 patient assessments (n = 2,100), with respective response rates of 67.2%, 75.2%, and 77.5%. Cronbach's alpha values for physicians, nonphysicians, patients, and self were 0.97, 0.97, 0.96, and 0.96, respectively.

CONCLUSIONS: This study demonstrated that MSF implementation is feasible, although challenging. The tool and its scale demonstrated excellent internal consistency. EM educators may find the adaptation process and tools applicable to their learners.
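
Cronbach's alpha, the internal-consistency statistic reported above, is straightforward to compute from a respondents-by-items score matrix. The sketch below is illustrative only: the item count, scale, and data are assumptions, not the study's data.

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = ratings.shape[1]                          # number of items
    item_vars = ratings.var(axis=0, ddof=1)       # per-item variance across respondents
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative data: 8 raters scoring one resident on 5 competencies (1-5 scale).
rng = np.random.default_rng(0)
scores = rng.integers(3, 6, size=(8, 5)).astype(float)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```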

3.
Acad Emerg Med ; 19(12): 1319-22, 2012 Dec.
Article in English | MEDLINE | ID: mdl-23230895

ABSTRACT

The 2012 Academic Emergency Medicine (AEM) consensus conference "Education Research In Emergency Medicine: Opportunities, Challenges, and Strategies for Success" convened a diverse group of stakeholders in medical education to target gaps in emergency medicine (EM) education research and identify priorities for future study. A total of 175 registrants collaborated in preparatory and conference-day activities to explore subtopics in educational interventions, learner assessment, faculty development, and research funding and infrastructure. The conference was punctuated by didactic sessions led by key international medical education experts and ended with consensus formation in many domains. This issue of AEM presents the exciting results of this process.


Subject(s)
Consensus Development Conferences as Topic; Education, Medical/methods; Emergency Medicine/education; Research/education; Humans
4.
Acad Emerg Med ; 19(12): 1462-7, 2012 Dec.
Article in English | MEDLINE | ID: mdl-23279252

ABSTRACT

This 2012 Academic Emergency Medicine consensus conference breakout session was devoted to the task of identifying the history and current state of faculty development in education research in emergency medicine (EM). The participants set a future agenda for successful faculty development in education research. A number of education research and content experts collaborated during the session. This article summarizes existing academic and medical literature, expert opinions, and audience consensus to report our agreement and findings related to the promotion of faculty development.


Subject(s)
Biomedical Research/education; Education, Medical/methods; Emergency Medicine/education; Faculty/standards; Staff Development/methods; Education, Medical/standards; Humans; Staff Development/standards
5.
Acad Emerg Med ; 18(5): 504-12, 2011 May.
Article in English | MEDLINE | ID: mdl-21569169

ABSTRACT

OBJECTIVES: Effective real-time feedback is critical to medical education. This study tested the hypothesis that an educational intervention related to feedback would improve emergency medicine (EM) faculty and resident physician satisfaction with feedback.

METHODS: This was a cluster-randomized, controlled study of 15 EM residency programs in 2007-2008. An educational intervention was created that combined a feedback curriculum with a card system designed to promote timely, effective feedback. Sites were randomized either to receive the intervention or to continue their current feedback method. All participants completed a Web-based survey before and after the intervention period. The primary outcome was overall feedback satisfaction on a 10-point scale. Additional items addressed specific aspects of feedback. Responses were compared using a generalized estimating equations model, adjusting for confounders and baseline differences between groups. The study was designed to achieve at least 80% power to detect a one-point difference in overall satisfaction (α = 0.05).

RESULTS: Response rates for the pre- and postintervention surveys were 65.9% and 47.3% for faculty and 64.7% and 56.9% for residents. Residents in the intervention group reported a mean overall increase in feedback satisfaction scores compared to those in the control group (mean increase 0.96 points, standard error [SE] ± 0.44, p = 0.03) and significantly higher satisfaction with the quality, amount, and timeliness of feedback. There were no significant differences in mean scores for overall and specific aspects of satisfaction between the faculty physician intervention and control groups.

CONCLUSIONS: An intervention designed to improve real-time feedback in the ED resulted in higher resident satisfaction with feedback received, but did not affect faculty satisfaction with the feedback given.
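
For readers who want to see the shape of such an analysis, a minimal sketch of a linear generalized estimating equations model with clustering by residency program, using statsmodels, follows. The column names, input file, and exchangeable working correlation are illustrative assumptions, not details taken from the study.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical long-format data: one row per survey response, with
# satisfaction (1-10), arm (0 = control, 1 = intervention),
# period (0 = pre, 1 = post), and program (cluster id).
df = pd.read_csv("feedback_survey.csv")  # assumed file layout

# Linear GEE with responses clustered within programs; the arm:period
# interaction estimates the intervention effect on the change in satisfaction.
model = sm.GEE.from_formula(
    "satisfaction ~ arm * period",
    groups="program",
    data=df,
    family=sm.families.Gaussian(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```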


Subject(s)
Attitude of Health Personnel; Education, Medical, Graduate/methods; Emergency Medicine/education; Knowledge of Results, Psychological; Medical Staff, Hospital/psychology; Physicians/psychology; Cluster Analysis; Curriculum; Emergency Service, Hospital; Faculty; Female; Humans; Internet; Internship and Residency; Linear Models; Male
6.
Acad Emerg Med ; 17 Suppl 2: S13-5, 2010 Oct.
Article in English | MEDLINE | ID: mdl-21199078

ABSTRACT

In 2010 the Council of Emergency Medicine Residency Directors (CORD) established an Academy for Scholarship in Education in Emergency Medicine to define, promote, recognize, and reward excellence in education, education research, and education leadership in emergency medicine. In this article we describe the mission and aims of the Academy. Academies for medical educators are widespread in medical schools today and have produced many benefits both for faculty and for educational programs. Little effort, however, has been devoted to such a model in graduate medical education specialty societies. While CORD and other emergency medicine organizations have developed numerous initiatives to advance excellence in education, we believe that this effort will be accelerated if housed in the form of an Academy that emphasizes scholarship in teaching and other education activities. The CORD Academy for Scholarship in Education in Emergency Medicine is a new model for promoting excellence in education in graduate medical education specialty societies.


Subject(s)
Emergency Medicine/education; Fellowships and Scholarships/organization & administration; Academies and Institutes/organization & administration; Education, Medical/standards; Emergency Medicine/organization & administration; United States
7.
Acad Emerg Med ; 16 Suppl 2: S32-6, 2009 Dec.
Article in English | MEDLINE | ID: mdl-20053207

ABSTRACT

Over the past 25 years, research performed by emergency physicians (EPs) has demonstrated that bedside ultrasound (US) can improve the care of emergency department (ED) patients. At the request of the Council of Emergency Medicine Residency Directors (CORD), leaders in the field of emergency medicine (EM) US met to delineate, in consensus fashion, a model US curriculum for EM residency training programs. The goal of this article is to provide a framework for US education for EM residents, and these guidelines should serve as a foundation for the growth of resident education in EM US. Their intent is to provide minimum education standards for all EM residency programs to refer to when establishing an emergency ultrasound (EUS) training program. The document focuses on US curriculum, US education, and competency assessment. The authors consider EUS skills critical to the development of an emergency physician: the use of US in the management of critically ill patients improves patient care, and a minimum skill set should therefore be mandatory for all graduating EM residents. The US education provided to EM residents should be structured to allow residents to incorporate US into daily clinical practice. Image acquisition and interpretation alone are insufficient; the ability to integrate findings with patient care and to apply them in a busy clinical environment should be stressed.


Subject(s)
Clinical Competence; Competency-Based Education/standards; Curriculum/standards; Emergency Medicine/education; Internship and Residency/standards; Ultrasonography; Humans; United States
8.
Acad Emerg Med ; 16 Suppl 2: S51-7, 2009 Dec.
Article in English | MEDLINE | ID: mdl-20053212

ABSTRACT

OBJECTIVES: Developed by the Council of Emergency Medicine Residency Directors (CORD), the standardized direct observation assessment tool (SDOT) is an evaluation instrument used to assess residents' clinical skills in the emergency department (ED). In a previous study examining the inter-rater agreement of the tool, faculty scored simulated resident-patient encounters. The objective of the present study was to evaluate the inter-rater agreement of the SDOT in real-time evaluations of residents in the ED.

METHODS: This was a multicenter, prospective, observational study in which faculty raters were paired to simultaneously observe and independently evaluate a resident's clinical performance using the SDOT. Data collected from eight emergency medicine (EM) residency programs produced 99 unique resident-patient encounters and reported on 26 individual behaviors related to specific core competencies, global evaluation scores for each core competency, and an overall clinical competency score. Inter-rater agreement was assessed using percentage agreement analyses with three constructs: exact agreement, liberal agreement, and binary (pass/fail) agreement.

RESULTS: Inter-rater agreement between faculty raters varied according to the category of measure used. Exact agreement ranged from poor to good, depending on the measure: the overall competency score (good), the competency score for each of the six core competencies (poor to good), and the individual item scores (fair to very good). Liberal agreement and binary agreement were excellent for the overall competency score and for the competency score for each of the six core competencies, and very good to excellent for the individual item scores.

CONCLUSIONS: The SDOT demonstrated excellent inter-rater agreement when analyzed with liberal agreement or dichotomized as a pass/fail measure, and fair to good agreement for most measures under exact agreement. The SDOT can be useful and reliable for evaluating residents' clinical skills in the ED, particularly as it relates to marginal performance.
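
The three agreement constructs reduce to simple comparisons of paired rater scores. A minimal sketch follows; the one-point tolerance for liberal agreement and the pass/fail cut point are illustrative assumptions, since the study's exact definitions are not given in the abstract.

```python
import numpy as np

def agreement(rater_a: np.ndarray, rater_b: np.ndarray, pass_cut: int = 3):
    """Percentage agreement between paired rater scores on one item.

    exact   -- identical scores
    liberal -- scores within one point of each other (assumed tolerance)
    binary  -- same side of an assumed pass/fail cut point
    """
    exact = np.mean(rater_a == rater_b)
    liberal = np.mean(np.abs(rater_a - rater_b) <= 1)
    binary = np.mean((rater_a >= pass_cut) == (rater_b >= pass_cut))
    return exact, liberal, binary

# Illustrative paired scores from 99 encounters on a 1-5 item;
# the second rater is generated to correlate with the first.
rng = np.random.default_rng(1)
a = rng.integers(1, 6, size=99)
b = np.clip(a + rng.integers(-1, 2, size=99), 1, 5)
exact, liberal, binary = agreement(a, b)
print(f"exact={exact:.0%}  liberal={liberal:.0%}  binary={binary:.0%}")
```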


Subject(s)
Clinical Competence; Internship and Residency; Humans; Prospective Studies; Reproducibility of Results
9.
Acad Emerg Med ; 16 Suppl 2: S76-81, 2009 Dec.
Article in English | MEDLINE | ID: mdl-20053217

ABSTRACT

OBJECTIVES: Effective feedback is critical to medical education. Little is known about emergency medicine (EM) attending and resident physician perceptions of feedback. The focus of this study was to examine perceptions of the educational feedback that attending physicians give to residents in the clinical environment of the emergency department (ED). The authors compared attending and resident satisfaction with real-time feedback and hypothesized that the two groups would report different overall satisfaction with the feedback they currently give and receive in the ED.

METHODS: This observational study surveyed attending and resident physicians at 17 EM residency programs through web-based surveys. The primary outcome was overall satisfaction with feedback in the ED, ranked on a 10-point scale. Additional survey items addressed specific aspects of feedback. Responses were compared using a linear generalized estimating equation (GEE) model for overall satisfaction, a logistic GEE model for dichotomized responses, and an ordinal logistic GEE model for ordinal responses.

RESULTS: Three hundred seventy-three of 525 (71%) attending physicians and 356 of 596 (60%) residents completed the survey. Attending physicians were more satisfied with overall feedback (mean score 5.97 vs. 5.29, p < 0.001) and with timeliness of feedback (odds ratio [OR] = 1.56, 95% confidence interval [CI] = 1.23 to 2.00; p < 0.001) than residents. Attending physicians were also more likely to rate the quality of feedback as very good or excellent for positive feedback, constructive feedback, feedback on procedures, documentation, management of ED flow, and evidence-based decision-making. Attending physicians reported time constraints as the top obstacle to giving feedback and were more likely than residents to report that feedback is usually attending initiated (OR = 7.09, 95% CI = 3.53 to 14.31; p < 0.001).

CONCLUSIONS: Attending physician satisfaction with the quality, timeliness, and frequency of feedback given is higher than resident physician satisfaction with feedback received. Attending and resident physicians have differing perceptions of who initiates feedback and how long it takes to provide effective feedback. Knowledge of these differences in perceptions about feedback may be used to direct future educational efforts to improve feedback in the ED.
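
To complement the linear GEE sketch shown earlier, here is how the dichotomized comparison could look as a logistic GEE, with the coefficient exponentiated into an odds ratio of the kind reported above. Again, the column names, input file, and correlation structure are illustrative assumptions, not the study's specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: one row per respondent, clustered by residency program,
# with timely (1 = rated feedback as timely), attending (1 = attending,
# 0 = resident), and program (cluster id).
df = pd.read_csv("perceptions_survey.csv")  # assumed file layout

# Logistic GEE for a dichotomized response; exponentiating the coefficient
# and its confidence bounds yields the odds ratio and 95% CI.
model = sm.GEE.from_formula(
    "timely ~ attending",
    groups="program",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
or_est = np.exp(result.params["attending"])
ci_low, ci_high = np.exp(result.conf_int().loc["attending"])
print(f"OR = {or_est:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```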


Subject(s)
Emergency Medicine/education; Internship and Residency; Knowledge of Results, Psychological; Medical Staff, Hospital; Teaching; Adult; Cross-Sectional Studies; Female; Humans; Male