Results 1 - 20 of 26
1.
AEM Educ Train ; 7(Suppl 1): S68-S77, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37383834

ABSTRACT

Background: Addressing racism in emergency medicine education is vital for providing optimal training and assessment of physicians in the specialty, developing physicians with the skills necessary to advocate for their patients, and recruiting and retaining a diverse group of physicians. To form a prioritized research agenda, the Society for Academic Emergency Medicine (SAEM) conducted a consensus conference at its annual meeting in May 2022 on addressing racism in emergency medicine, which included a subgroup on education. Methods: The education workgroup summarized the current literature on addressing racism in emergency medicine education, identified critical knowledge gaps, and created a consensus-driven research agenda. We used a nominal group technique and a modified Delphi process to develop priority questions for research. We then distributed a pre-conference survey to conference registrants to rate priority areas for research. During the consensus conference, group leaders provided an overview and background describing the rationale for the preliminary research question list. Attendees were then involved in discussions to help modify and develop research questions. Results: Nineteen questions were initially selected by the education workgroup as potential areas for research. The workgroup's next round of consensus building narrowed these to ten questions for inclusion in the pre-conference survey. No questions in the pre-conference survey reached consensus. After robust discussion and voting by workgroup members and attendees at the consensus conference, six questions were determined to be priority research areas. Conclusions: We believe recognizing and addressing racism in emergency medicine education is imperative. Critical gaps in curriculum design, assessment, bias training, allyship, and the learning environment negatively impact training programs. These gaps must be prioritized for research, as they can have adverse effects on recruitment, the ability to promote a safe learning environment, patient care, and patient outcomes.
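
The abstract does not state the consensus threshold used in the Delphi rounds. As a minimal sketch, the Python snippet below scores one round under two illustrative assumptions: a 1-9 priority rating scale and a 70% agreement cutoff; the question texts are hypothetical.

    # Sketch of a consensus-threshold check for one modified-Delphi round.
    # The 1-9 scale, the 70% cutoff, and the questions are assumptions.
    def reaches_consensus(ratings, cutoff=7, agreement=0.70):
        """True if the share of panelists rating >= cutoff meets the agreement bar."""
        return sum(r >= cutoff for r in ratings) / len(ratings) >= agreement

    survey = {
        "Q1: efficacy of bias training": [8, 9, 7, 6, 8, 9, 7, 8],
        "Q2: allyship curricula":        [5, 6, 7, 4, 8, 5, 6, 7],
    }
    for question, ratings in survey.items():
        verdict = "consensus" if reaches_consensus(ratings) else "no consensus"
        print(f"{question}: {verdict}")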

2.
J Am Coll Health ; : 1-6, 2023 Jan 03.
Article in English | MEDLINE | ID: mdl-36595635

ABSTRACT

Objective: To examine how in-person classroom instruction was related to risk of SARS-CoV-2 infection in undergraduate students. Participants: Indiana University undergraduate students (n = 69,606) enrolled in Fall 2020, when courses with in-person and remote instruction options were available. Methods: Students participated weekly in mandatory SARS-CoV-2 RT-PCR asymptomatic testing by random selection, supplemented with symptomatic testing as needed. We used log-binomial regression models to estimate the association between number of in-person credit hours and the risk of SARS-CoV-2 infection over the course of the semester. Results: Overall 5,786 SARS-CoV-2 cases were observed. Increased in-person credit hour exposures were not associated with increased risk of SARS-CoV-2 overall [aRR (95% CI): 0.98 (0.97,0.99)], nor within specific subgroups (Greek affiliation and class). Conclusions: In-person instruction did not appear to increase SARS-CoV-2 transmission in a university setting with rigorous protective measures in place, prior to mass vaccine rollout and prior to delta variant emergence.
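
The log-binomial model named in the methods estimates relative risk directly by fitting a binomial GLM with a log link, so exponentiated coefficients are risk ratios. A minimal sketch on synthetic data follows; the variable names (credit_hours, greek) and effect sizes are illustrative assumptions, not the study's data.

    # Log-binomial regression: exp(coefficient) is an adjusted relative risk.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 5000
    credit_hours = rng.integers(0, 16, n)   # in-person credit hours (synthetic)
    greek = rng.integers(0, 2, n)           # Greek affiliation indicator (synthetic)
    p = np.clip(0.08 * np.exp(-0.01 * credit_hours), 0, 1)
    infected = rng.binomial(1, p)           # simulated SARS-CoV-2 positivity

    X = sm.add_constant(np.column_stack([credit_hours, greek]))
    fit = sm.GLM(infected, X,
                 family=sm.families.Binomial(sm.families.links.Log())).fit()
    print("RR per credit hour:", np.exp(fit.params[1]))
    print("95% CI:", np.exp(fit.conf_int()[1]))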

4.
AEM Educ Train ; 5(2): e10496, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33842811

ABSTRACT

OBJECTIVES: Uniformly training physicians to provide safe, high-quality care requires reliable assessment tools to ensure learner competency. The consensus-derived National Clinical Assessment Tool in Emergency Medicine (NCAT-EM) has been adopted by clerkships across the country. Analysis of large-scale deidentified data from a consortium of users is reported. METHODS: Thirteen sites entered data into a Web-based platform resulting in over 6,400 discrete NCAT-EM assessments from 748 students and 704 assessors. Reliability, internal consistency analysis, and factorial analysis of variance for hypothesis generation were performed. RESULTS: All categories on the NCAT-EM rating scales and professionalism subdomains were used. Clinical rating scale and global assessment scores were positively skewed, similar to other assessments commonly used in emergency medicine (EM). Professionalism lapses were noted in <1% of assessments. Cronbach's alpha was >0.8 for each site; however, interinstitutional variability was significant. M4 students scored higher than M3 students, and EM-bound students scored higher than non-EM-bound students. There were site-specific differences based on number of prior EM rotations, but no overall association. There were differences in scores based on assessor faculty rank and resident training year, but not by years in practice. There were site-specific differences based on student sex, but overall no difference. CONCLUSIONS: To our knowledge, this is the first large-scale multi-institutional implementation of a single clinical assessment tool. This study demonstrates the feasibility of a unified approach to clinical assessment across multiple diverse sites. Challenges remain in determining appropriate score distributions and improving consistency in scoring between sites.
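
Cronbach's alpha, the internal-consistency statistic reported per site, compares summed item variances with the variance of the total score. A small self-contained computation on synthetic ratings (a minimal sketch, not the NCAT-EM data):

    # Cronbach's alpha for a ratings matrix (rows = assessments, cols = items).
    import numpy as np

    def cronbach_alpha(items):
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    rng = np.random.default_rng(1)
    ability = rng.normal(3.5, 0.8, size=(200, 1))            # latent trainee ability
    ratings = np.clip(np.round(ability + rng.normal(0, 0.5, (200, 6))), 1, 5)
    print(f"alpha = {cronbach_alpha(ratings):.2f}")          # should exceed 0.8 here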

5.
Acad Med ; 94(10): 1498-1505, 2019 10.
Article in English | MEDLINE | ID: mdl-31219811

ABSTRACT

PURPOSE: This study examined applicant reactions to the Association of American Medical Colleges Standardized Video Interview (SVI) during its first year of operational use in emergency medicine (EM) residency program selection to identify strategies to improve applicants' SVI experience and attitudes. METHOD: Individuals who self-classified as EM applicants applying in the Electronic Residency Application Service 2018 cycle and who completed the SVI in summer 2017 were invited to participate in 2 surveys. Survey 1, which focused on procedural issues, was administered immediately after SVI completion. Survey 2, which focused on applicants' SVI experience, was administered in fall 2017, after SVI scores were released. RESULTS: The response rates for surveys 1 and 2 were 82.3% (2,906/3,532) and 58.7% (2,074/3,532), respectively. Applicant reactions varied by aspect of the SVI studied and their SVI total scores. Most applicants were satisfied with most procedural aspects of the SVI, but most were not satisfied with the SVI overall or with their total SVI scores. About 20% to 30% of applicants held neutral opinions about most aspects of the SVI. Negative reactions to the SVI were stronger among applicants who scored lower on the SVI. CONCLUSIONS: Applicants had generally negative reactions to the SVI. Most were skeptical of its ability to assess the target competencies and its potential to add value to the selection process. Applicant acceptance and appreciation of the SVI will be critical to its adoption by the graduate medical education community.


Subject(s)
Attitude , Education, Medical, Graduate , Emergency Medicine/education , Interviews as Topic , Personal Satisfaction , Personnel Selection , Female , Humans , Internship and Residency , Male
6.
Acad Med ; 94(10): 1489-1497, 2019 10.
Article in English | MEDLINE | ID: mdl-30870151

ABSTRACT

PURPOSE: Innovative tools are needed to shift residency selection toward a more holistic process that balances academic achievement with other competencies important for success in residency. The authors evaluated the feasibility of the AAMC Standardized Video Interview (SVI) and evidence of the validity of SVI total scores. METHOD: The SVI, developed by the Association of American Medical Colleges, consists of six questions designed to assess applicants' interpersonal and communication skills and knowledge of professionalism. Study 1 was conducted in 2016 for research purposes. Study 2 was an operational pilot administration in 2017; SVI data were available for use in residency selection by emergency medicine programs for the 2018 application cycle. Descriptive statistics, correlations, and standardized mean differences were used to examine data. RESULTS: Study 1 included 855 applicants; Study 2 included 3,532 applicants. SVI total scores were relatively normally distributed. There were small correlations between SVI total scores and United States Medical Licensing Examination Step exam scores, Alpha Omega Alpha Honor Medical Society membership, and Gold Humanism Honor Society membership. There were no-to-small group differences in SVI total scores by gender and race/ethnicity, and small-to-medium differences by applicant type. CONCLUSIONS: Findings provide initial evidence of the validity of SVI total scores and suggest that these scores provide different information than academic metrics. Use of the SVI, as part of a holistic screening process, may help program directors widen the pool of applicants invited to in-person interviews and may signal that programs value interpersonal and communication skills and professionalism.
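
Standardized mean differences, one of the statistics the methods list, express a group gap in pooled standard-deviation units (Cohen's d). A minimal sketch with synthetic stand-ins for SVI total scores:

    # Cohen's d with pooled SD; the score distributions below are synthetic.
    import numpy as np

    def cohens_d(a, b):
        na, nb = len(a), len(b)
        pooled = np.sqrt(((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1))
                         / (na + nb - 2))
        return (np.mean(a) - np.mean(b)) / pooled

    rng = np.random.default_rng(2)
    group_a = rng.normal(19.1, 2.8, 400)   # hypothetical applicant group
    group_b = rng.normal(18.8, 2.9, 450)   # hypothetical comparison group
    print(f"SMD = {cohens_d(group_a, group_b):.2f}")   # "no-to-small" is |d| < ~0.2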


Subject(s)
Education, Medical, Graduate , Interviews as Topic , Personnel Selection , Professional Competence , Emergency Medicine/education , Female , General Surgery/education , Humans , Internal Medicine/education , Internship and Residency , Male , Pediatrics/education , Reproducibility of Results
7.
Acad Med ; 94(10): 1506-1512, 2019 10.
Article in English | MEDLINE | ID: mdl-30893064

ABSTRACT

PURPOSE: To evaluate how emergency medicine residency programs perceived and used Association of American Medical Colleges (AAMC) Standardized Video Interview (SVI) total scores and videos during the Electronic Residency Application Service 2018 cycle. METHOD: Study 1 (November 2017) used a program director survey to evaluate user reactions to the SVI following the first year of operational use. Study 2 (January 2018) analyzed program usage of SVI video responses using data collected through the AAMC Program Director's Workstation. RESULTS: Results from the survey (125/175 programs; 71% response rate) and video usage analysis suggested programs viewed videos out of curiosity and to understand the range of SVI total scores. Programs were more likely to view videos for attendees of U.S. MD-granting medical schools and applicants with higher United States Medical Licensing Examination Step 1 scores, but there were no differences by gender or race/ethnicity. More than half of programs that did not use SVI total scores in their selection processes were unsure of how to incorporate them (36/58; 62%) and wanted additional research on utility (33/58; 57%). More than half of programs indicated being at least somewhat likely to use SVI total scores (55/97; 57%) and videos (52/99; 53%) in the future. CONCLUSIONS: Program reactions on the utility and ease of use of SVI total scores were mixed. Survey results indicate programs used the SVI cautiously in their selection processes, consistent with AAMC recommendations. Future user surveys will help the AAMC gauge improvements in user acceptance and familiarity with the SVI.


Subject(s)
Emergency Medicine/education , Internship and Residency , Interviews as Topic , Personnel Selection , Professional Competence , Education, Medical, Graduate , Humans
8.
West J Emerg Med ; 19(1): 66-74, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29383058

ABSTRACT

INTRODUCTION: Clinical assessment of medical students in emergency medicine (EM) clerkships is a highly variable process that presents unique challenges and opportunities. Currently, clerkship directors use institution-specific tools with unproven validity and reliability that may or may not address competencies valued most highly in the EM setting. Standardization of assessment practices and development of a common, valid, specialty-specific tool would benefit EM educators and students. METHODS: A two-day national consensus conference was held in March 2016 in the Clerkship Directors in Emergency Medicine (CDEM) track at the Council of Residency Directors in Emergency Medicine (CORD) Academic Assembly in Nashville, TN. The goal of this conference was to standardize assessment practices and to create a national clinical assessment tool for use in EM clerkships across the country. Conference leaders synthesized the literature, articulated major themes and questions pertinent to clinical assessment of students in EM, clarified the issues, and outlined the consensus-building process prior to consensus-building activities. RESULTS: The first day of the conference was dedicated to developing consensus on these key themes in clinical assessment. The second day of the conference was dedicated to discussing and voting on proposed domains to be included in the national clinical assessment tool. A modified Delphi process was initiated after the conference to reconcile questions and items that did not reach an a priori level of consensus. CONCLUSION: The final tool, the National Clinical Assessment Tool for Medical Students in Emergency Medicine (NCAT-EM) is presented here.


Subject(s)
Clinical Clerkship/standards , Clinical Competence/standards , Consensus , Educational Measurement/standards , Emergency Medicine/education , Students, Medical , Surveys and Questionnaires/standards , Delphi Technique , Education, Medical , Emergency Service, Hospital , Humans , Leadership , Models, Organizational , Physician Executives , United States
9.
Int J Med Educ ; 8: 192-204, 2017 May 29.
Article in English | MEDLINE | ID: mdl-28557777

ABSTRACT

OBJECTIVES: This study aimed to assess residents' and fellows' knowledge of finance principles that may affect their personal financial health. METHODS: A cross-sectional, anonymous, web-based survey was administered to a convenience sample of residents and fellows at two academic medical centers. Respondents answered 20 questions on personal finance and 28 questions about their own financial planning, attitudes, and debt. Questions regarding satisfaction with one's financial condition and investment-risk tolerance used a 10-point Likert scale (1=lowest, 10=highest). Of 2,010 trainees, 422 (21%) responded (median age 30 years; interquartile range, 28-33). RESULTS: The mean quiz score was 52.0% (SD = 19.1). Of 299 (71%) respondents with student loan debt, 144 (48%) owed over $200,000. Many respondents had other debt, including 86 (21%) with credit card debt. Of 262 respondents with retirement savings, 142 (52%) had saved less than $25,000. Respondents' mean satisfaction with their current personal financial condition was 4.8 (SD = 2.5) and investment-risk tolerance was 5.3 (SD = 2.3). Indebted trainees reported lower satisfaction than trainees without debt (4.4 vs. 6.2, F (1,419) = 41.57, p < .001). Knowledge was moderately correlated with investment-risk tolerance (r=0.41, p < .001), and weakly correlated with satisfaction with financial status (r=0.23, p < .001). CONCLUSIONS: Residents and fellows had low financial literacy and investment-risk tolerance, high debt, and deficits in their financial preparedness. Adding personal financial education to the medical education curriculum would benefit trainees. Providing education in areas such as budgeting, estate planning, investment strategies, and retirement planning early in training can offer significant long-term benefits.
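
As a sketch of the two main analyses reported (a Pearson correlation between quiz score and risk tolerance, and an F-test comparing satisfaction by debt status), the following uses synthetic data shaped to the summary statistics above; it is illustrative, not the survey dataset.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    quiz = rng.normal(52, 19, 422)                      # quiz percent correct
    risk_tol = 0.05 * quiz + rng.normal(2.7, 2.1, 422)  # 10-point risk tolerance
    r, p = stats.pearsonr(quiz, risk_tol)
    print(f"r = {r:.2f}, p = {p:.3g}")

    sat_debt = rng.normal(4.4, 2.5, 299)        # satisfaction, indebted trainees
    sat_no_debt = rng.normal(6.2, 2.3, 123)     # satisfaction, debt-free trainees
    f, p = stats.f_oneway(sat_debt, sat_no_debt)
    print(f"F = {f:.2f}, p = {p:.3g}")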


Subject(s)
Education, Medical/methods , Fellowships and Scholarships/statistics & numerical data , Financing, Personal/statistics & numerical data , Internship and Residency/statistics & numerical data , Academic Medical Centers , Adult , Cross-Sectional Studies , Curriculum , Financial Management/methods , Humans , Internet , Surveys and Questionnaires
11.
J Emerg Med ; 51(6): 705-711, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27614539

ABSTRACT

BACKGROUND: Assessment practices in emergency medicine (EM) clerkships have not been previously described. Clinical assessment frequently relies on global ratings of clinical performance, or "shift cards," although these tools have not been standardized or studied. OBJECTIVE: We sought to characterize assessment practices in EM clerkships, with particular attention to shift cards. METHODS: A survey regarding assessment practices was administered to a national sample of EM clerkship directors (CDs). Descriptive statistics were compiled and regression analyses were performed. RESULTS: One hundred seventy-two CDs were contacted, and 100 (58%) agreed to participate. The most heavily weighted assessment methods in final grades were shift cards (66%) and written examinations (21-26%), but there was considerable variability in grading algorithms. EM faculty (100%) and senior residents (69%) were most commonly responsible for assessment, and assessors were often preassigned (71%). Forty-four percent of CDs reported immediate completion of shift cards, 27% within 1 to 2 days, and 20% within a week. Only 40% reported return rates >75%. Thirty percent of CDs did not permit students to review individual evaluations, and 54% of the remainder deidentified evaluations before student review. Eighty-six percent had never performed psychometric analysis on their assessment tools. Sixty-five percent of CDs were satisfied with their shift cards, but 90% supported the development of a national tool. CONCLUSION: There is substantial variability in assessment practices between EM clerkships, raising concern regarding the comparability of grades between institutions. CDs rely on shift cards in grading despite the lack of evidence of validity and inconsistent process variables. Standardization of assessment practices may improve the assessment of EM students.


Subject(s)
Clinical Clerkship , Educational Measurement/methods , Emergency Medicine/education , Students, Medical , Clinical Clerkship/methods , Clinical Competence , Humans , Prospective Studies , Surveys and Questionnaires
12.
J Emerg Med ; 50(2): 302-7, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26602424

ABSTRACT

BACKGROUND: Evaluation of medical students rotating through the emergency department (ED) is an important formative and summative assessment method. Intuitively, delaying evaluation should affect the reliability of this assessment method; however, the effect of evaluation timing on scoring is unknown. OBJECTIVE: A quality-improvement project evaluating the timing of end-of-shift ED evaluations at the University of Arizona was performed to determine whether delay in evaluation affected the score. METHODS: End-of-shift ED evaluations completed on behalf of fourth-year medical students from July 2012 to March 2013 were reviewed. Forty-seven students were evaluated 547 times by 46 residents and attendings. Evaluation scores were means of anchored Likert scales (1-5) for the domains of energy/interest, fund of knowledge, judgment/problem-solving ability, clinical skills, personal effectiveness, and systems-based practice. Date of shift, date of evaluation, and score were collected. Linear regression was performed to determine whether timing of the evaluation had an effect on evaluation score. RESULTS: Data were complete for 477 of 547 evaluations (87.2%). Mean evaluation score was 4.1 (range 2.3-5, standard deviation 0.62). Evaluations took a mean of 8.5 days (median 4 days, range 0-59 days, standard deviation 9.77 days) to complete. Delay in evaluation had no significant effect on score (p = 0.983). CONCLUSIONS: The evaluation score was not affected by timing of the evaluation. Variance in scores was similar for both immediate and delayed evaluations. Considerable amounts of time and energy are expended tracking down delayed evaluations; this activity does not impact a student's final grade.
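
The regression described is a single-predictor linear model of score on evaluation delay. A minimal sketch with synthetic data in which score is generated independently of delay, mirroring the null result:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    delay_days = rng.integers(0, 60, 477)                # days from shift to evaluation
    score = np.clip(rng.normal(4.1, 0.62, 477), 1, 5)    # generated independent of delay

    result = stats.linregress(delay_days, score)
    print(f"slope = {result.slope:.4f}, p = {result.pvalue:.3f}")  # expect p >> 0.05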


Subject(s)
Clinical Clerkship , Educational Measurement/standards , Emergency Medicine/education , Clinical Competence , Education, Medical, Undergraduate , Educational Measurement/methods , Educational Status , Humans , Quality Improvement , Time Factors
13.
West J Emerg Med ; 16(6): 919-22, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26594290

ABSTRACT

INTRODUCTION: In April 2013, the National Board of Medical Examiners (NBME) released an Advanced Clinical Examination (ACE) in emergency medicine (EM). In addition to this new resource, CDEM (Clerkship Directors in EM) provides two online, high-quality, internally validated examinations. National usage statistics are available for all three examinations; however, it is currently unknown how students entering an EM residency perform as compared to the entire national cohort. This information may help educators interpret examination scores of both EM-bound and non-EM-bound students. OBJECTIVES: The objective of this study was to compare EM clerkship examination performance of students who matched into an EM residency in 2014 with that of students who did not. Comparisons were made using the EM-ACE and both versions of the National fourth-year medical student (M4) EM examinations. METHODS: In this retrospective multi-institutional cohort study, the EM-ACE and either Version 1 (V1) or Version 2 (V2) of the National EM M4 examination were given to students taking a fourth-year EM rotation at five institutions between April 2013 and February 2014. We collected examination performance (the scaled EM-ACE score and percent correct on the EM M4 exams) and 2014 NRMP Match status. Student's t-tests were performed on the examination averages of students who matched in EM compared with those who did not. RESULTS: A total of 606 students from five different institutions took both the EM-ACE and one of the EM M4 exams; 94 (15.5%) students matched in EM in the 2014 Match. The mean scores for EM-bound students on the EM-ACE, V1, and V2 of the EM M4 exams were 70.9 (n=47, SD=9.0), 84.4 (n=36, SD=5.2), and 83.3 (n=11, SD=6.9), respectively. Mean scores for non-EM-bound students were 68.0 (n=256, SD=9.7), 82.9 (n=243, SD=6.5), and 74.5 (n=13, SD=5.9). There was a significant difference in mean scores between EM-bound and non-EM-bound students for the EM-ACE (p=0.05) and V2 (p<0.01), but not V1 (p=0.18), of the National EM M4 examination. CONCLUSION: Students who successfully matched in EM performed better on all three exams at the end of their EM clerkship.
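
The comparison described is a two-sample Student's t-test on exam means. A minimal sketch with synthetic scores shaped to the reported EM-ACE means and SDs:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    em_bound = rng.normal(70.9, 9.0, 47)        # scaled EM-ACE, matched in EM
    non_em_bound = rng.normal(68.0, 9.7, 256)   # scaled EM-ACE, did not match

    t, p = stats.ttest_ind(em_bound, non_em_bound)
    print(f"t = {t:.2f}, p = {p:.3f}")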


Subject(s)
Clinical Clerkship , Educational Measurement/methods , Emergency Medicine/education , Clinical Clerkship/standards , Clinical Clerkship/statistics & numerical data , Clinical Competence/statistics & numerical data , Educational Measurement/standards , Educational Measurement/statistics & numerical data , Emergency Medicine/standards , Humans , Internship and Residency , Retrospective Studies , United States
14.
West J Emerg Med ; 16(6): 957-60, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26594299

ABSTRACT

INTRODUCTION: There is great variation in the knowledge base of Emergency Medicine (EM) interns in July. The first objective knowledge assessment during residency does not occur until eight months later, in February, when the American Board of EM (ABEM) administers the in-training examination (ITE). In 2013, the National Board of Medical Examiners (NBME) released the EM Advanced Clinical Examination (EM-ACE), an assessment intended for fourth-year medical students. Administration of the EM-ACE to interns at the start of residency may provide an earlier opportunity to assess the new EM residents' knowledge base. The primary objective of this study was to determine the correlation of the NBME EM-ACE, given early in residency, with the EM ITE. Secondary objectives included determination of the correlation of the United States Medical Licensing Examination (USMLE) Step 1 or 2 scores with early intern EM-ACE and ITE scores and the effect, if any, of clinical EM experience on examination correlation. METHODS: This was a multi-institutional, observational study. Entering EM interns at six residencies took the EM-ACE in July 2013 and the ABEM ITE in February 2014. We collected scores for the EM-ACE and ITE, age, gender, weeks of clinical EM experience in residency prior to the ITE, and USMLE Step 1 and 2 scores. Pearson's correlation and linear regression were performed. RESULTS: Sixty-two interns took the EM-ACE and the ITE. The Pearson's correlation coefficient between the ITE and the EM-ACE was 0.62. R-squared was 0.5 (adjusted 0.4). The coefficient of determination was 0.41 (95% CI [0.3-0.8]). For every increase of one in the scaled EM-ACE score, we observed a 0.4% increase in the EM in-training score. In a linear regression model using all available variables (EM-ACE, gender, age, clinical exposure to EM, and USMLE Step 1 and Step 2 scores), only the EM-ACE score was significantly associated with the ITE (p<0.05). We observed significant collinearity among the EM-ACE, ITE, and USMLE scores. Gender, age, and number of weeks of EM prior to the ITE had no effect on the relationship between the EM-ACE and the ITE. CONCLUSION: Given early during intern year, the EM-ACE score showed positive correlation with the ITE. Clinical EM experience prior to the in-training exam did not affect the correlation.
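
A sketch of the two analyses named: a Pearson correlation between EM-ACE and ITE, then a multi-predictor linear model in which only the EM-ACE term should reach significance. The column names and data are illustrative assumptions.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(6)
    n = 62
    df = pd.DataFrame({
        "em_ace": rng.normal(70, 9, n),
        "step1": rng.normal(230, 15, n),
        "em_weeks": rng.integers(0, 20, n),
    })
    df["ite"] = 0.4 * df["em_ace"] + rng.normal(40, 5, n)   # ITE driven by EM-ACE only

    print(f"r = {df['em_ace'].corr(df['ite']):.2f}")        # Pearson's correlation
    fit = smf.ols("ite ~ em_ace + step1 + em_weeks", data=df).fit()
    print(fit.pvalues)                                      # only em_ace significant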


Subject(s)
Educational Measurement/methods , Emergency Medicine/education , Internship and Residency , Adult , Clinical Competence/statistics & numerical data , Educational Measurement/standards , Educational Measurement/statistics & numerical data , Female , Humans , Licensure, Medical , Linear Models , Male , Time Factors , United States
16.
West J Emerg Med ; 16(1): 138-42, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25671023

ABSTRACT

INTRODUCTION: Since 2011, two online, validated exams for fourth-year emergency medicine (EM) students have been available (National EM M4 Exams). In 2013 the National Board of Medical Examiners offered the Advanced Clinical Examination in Emergency Medicine (EM-ACE). All of these exams are now in widespread use; however, there are no data on how they correlate. This study evaluated the correlation between the EM-ACE exam and the National EM M4 Exams. METHODS: From May 2013 to April 2014 the EM-ACE and one version of the EM M4 exam were administered sequentially to fourth-year EM students at five U.S. medical schools. Data collected included institution, gross and scaled scores, and version of the EM M4 exam. We performed Pearson's correlation and random effects linear regression. RESULTS: 305 students took the EM-ACE and Version 1 (V1) or Version 2 (V2) of the EM M4 exams (281 and 24, respectively). The mean percent correct for the exams was as follows: EM-ACE 74.9 (SD 9.82), V1 83.0 (SD 6.39), V2 78.5 (SD 7.70). Pearson's correlation coefficient for V1/EM-ACE was 0.53 (0.43 scaled) and for V2/EM-ACE was 0.58 (0.41 scaled). The coefficient of determination for V1/EM-ACE was 0.73 and for V2/EM-ACE 0.71 (0.65 and 0.49 for scaled scores). The R-squared values were 0.28 and 0.30 (0.18 and 0.13 scaled), respectively. There was a significant cluster effect by institution. CONCLUSION: There was moderate positive correlation between student scores on the EM-ACE exam and the National EM M4 Exams.
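
The random effects linear regression with a cluster effect by institution can be expressed as a mixed model with institution as the grouping factor. A minimal sketch on synthetic data (five hypothetical sites with site-level shifts):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)
    n = 305
    df = pd.DataFrame({
        "institution": rng.integers(0, 5, n),
        "em_ace": rng.normal(74.9, 9.8, n),
    })
    site_shift = df["institution"].map({0: -3.0, 1: 0.0, 2: 2.0, 3: 4.0, 4: -1.0})
    df["em_m4"] = 0.4 * df["em_ace"] + site_shift + rng.normal(50, 4, n)

    # A random intercept per institution captures the cluster effect.
    fit = smf.mixedlm("em_m4 ~ em_ace", df, groups=df["institution"]).fit()
    print(fit.summary())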


Subject(s)
Clinical Clerkship , Education, Medical, Undergraduate , Educational Measurement/methods , Emergency Medicine/education , Clinical Competence , Humans , Linear Models , Prospective Studies , United States
17.
West J Emerg Med ; 15(4): 419-23, 2014 Jul.
Article in English | MEDLINE | ID: mdl-25035747

ABSTRACT

INTRODUCTION: The standard letter of recommendation in emergency medicine (SLOR) was developed to standardize the evaluation of applicants, improve inter-rater reliability, and discourage grade inflation. The primary objective of this study was to describe the distribution of categorical variables on the SLOR in order to characterize scoring tendencies of writers. METHODS: We performed a retrospective review of all SLORs written on behalf of applicants to the three Emergency Medicine residency programs in the University of Arizona Health Network (i.e. the University Campus program, the South Campus program and the Emergency Medicine/Pediatrics combined program) in 2012. All "Qualifications for Emergency Medicine" and "Global Assessment" variables were analyzed. RESULTS: 1457 SLORs were reviewed, representing 26.7% of the total number of Electronic Residency Application Service applicants for the academic year. Letter writers were most likely to use the highest/most desirable category on "Qualifications for EM" variables (50.7%) and to use the second highest category on "Global Assessments" (43.8%). For 4-point scale variables, 91% of all responses were in one of the top two ratings. For 3-point scale variables, 94.6% were in one of the top two ratings. Overall, the lowest/least desirable ratings were used less than 2% of the time. CONCLUSIONS: SLOR letter writers do not use the full spectrum of categories for each variable proportionately. Despite the attempt to discourage grade inflation, nearly all variable responses on the SLOR are in the top two categories. Writers use the lowest categories less than 2% of the time. Program Directors should consider tendencies of SLOR writers when reviewing SLORs of potential applicants to their programs.


Subject(s)
Emergency Medicine/education , Personnel Selection/standards , School Admission Criteria , Writing/standards , Arizona , Education, Medical, Graduate , Educational Measurement , Humans , Retrospective Studies
18.
J Emerg Med ; 47(2): 216-22, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24930443

ABSTRACT

BACKGROUND: A few studies suggest that an increasing clinical workload does not adversely affect quality of teaching in the Emergency Department (ED); however, the impact of clinical teaching on productivity is unknown. OBJECTIVES: The primary objective of this study was to determine whether there was a difference in relative value units (RVUs) billed by faculty members when an acting internship (AI) student is on shift. Secondary objectives include comparing RVUs billed by individual faculty members and in different locations. METHODS: A matched case-control study design was employed, comparing the RVUs generated during shifts with an Emergency Medicine (EM) AI (cases) to shifts without an AI (controls). Case shifts were matched with control shifts for individual faculty member, time (day, swing, night), location, and, whenever possible, day of the week. Outcome measures were gross, procedural, and critical care RVUs. RESULTS: There were 140 shifts worked by AI students during the study period; 18 were unmatchable, and 21 were night shifts that crossed two dates of service and were not included. There were 101 well-matched shift pairs retained for analysis. Gross, procedural, and critical care RVUs billed did not differ significantly in case vs. control shifts (53.60 vs. 53.47, p=0.95; 4.30 vs. 4.27, p=0.96; 3.36 vs. 3.41, respectively, p=0.94). This effect was consistent across sites and for all faculty members. CONCLUSIONS: An AI student had no adverse effect on overall, procedural, or critical care clinical billing in the academic ED. When matched with experienced educators, career-bound fourth-year students do not detract from clinical productivity.
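
Because case and control shifts were matched one-to-one, a paired comparison is the natural analysis; the abstract does not name the exact test, so the paired t-test below is an assumption, run on synthetic RVU pairs.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)
    control_rvus = rng.normal(53.5, 12, 101)           # matched shifts without an AI
    case_rvus = control_rvus + rng.normal(0, 6, 101)   # shifts with an AI, no true effect

    t, p = stats.ttest_rel(case_rvus, control_rvus)
    print(f"t = {t:.2f}, p = {p:.2f}")   # expect a non-significant difference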


Subject(s)
Academic Medical Centers/statistics & numerical data , Efficiency, Organizational/statistics & numerical data , Emergency Medicine/education , Emergency Service, Hospital/statistics & numerical data , Internship and Residency/statistics & numerical data , Case-Control Studies , Efficiency , Emergency Medicine/statistics & numerical data , Faculty, Medical/statistics & numerical data , Humans , Workload
19.
J Emerg Med ; 46(4): 544-50, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24113483

ABSTRACT

BACKGROUND: The Standardized Letter of Recommendation (SLOR) was developed in an attempt to standardize the evaluation of applicants to an emergency medicine (EM) residency. OBJECTIVE: Our aim was to determine whether the Global Assessment Score (GAS) and Likelihood of Matching Assessment (LOMA) of the SLOR for applicants applying to an EM residency are affected by the experience of the letter writer. We describe the distribution of GAS and LOMA grades and compare the GAS and LOMA scores to the length of time an applicant knew the letter writer and the number of EM rotations. METHODS: We conducted a retrospective review of all SLORs written for all applicants applying to three EM residency programs for the 2012 match. The median number of letters written the previous year was compared across the four GAS and LOMA scores using an equality-of-medians test and a test for trend, to determine whether higher GAS and LOMA scores were associated with less experienced letter writers. Distributions of the scores were determined, and the length of time a letter writer knew an applicant and the number of EM rotations were compared with GAS and LOMA scores. RESULTS: There were 917 applicants, representing 27.6% of the total applicant pool for the 2012 United States EM residency match; 1253 SLORs were analyzed for GAS and 1246 for LOMA. The highest scores on the GAS and LOMA were associated with the lowest median number of letters written the previous year (equality-of-medians test across groups, p < 0.001; test for trend, p < 0.001). Less than 3% of applicants received the lowest score for GAS and LOMA. Among letter writers who knew an applicant for more than 1 year, 45.3% gave a GAS score of "Outstanding" and 53.4% gave a LOMA of "Very Competitive," compared with 31.7% and 39.6%, respectively, if the letter writer knew the applicant for 1 year or less (p = 0.002; p = 0.005). The number of EM rotations was not associated with GAS and LOMA scores. CONCLUSIONS: SLORs written by less experienced letter writers were more likely to have a GAS of "Outstanding" (p < 0.001) and a LOMA of "Very Competitive" (p < 0.001) than those written by more experienced letter writers. The overall distribution of GAS and LOMA was heavily weighted toward the highest scores. The length of time a letter writer knew an applicant was significantly associated with GAS and LOMA scores.
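
An equality-of-medians test like the one described is available in SciPy as Mood's median test; whether the study used that exact variant is not stated, so treat this as a sketch. The letters-written counts per GAS group are synthetic.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(9)
    letters_by_gas = [
        rng.poisson(3, 80),    # "Outstanding": least experienced writers
        rng.poisson(5, 120),   # "Excellent"
        rng.poisson(8, 90),    # "Very good"
        rng.poisson(9, 30),    # "Good"
    ]
    stat, p, grand_median, table = stats.median_test(*letters_by_gas)
    print(f"chi2 = {stat:.2f}, p = {p:.3g}, grand median = {grand_median}")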


Subject(s)
Correspondence as Topic , Educational Measurement/standards , Emergency Medicine/education , Personnel Selection/standards , Professional Competence , Writing , Clinical Clerkship , Education, Medical, Graduate , Humans , Internship and Residency , Retrospective Studies , Time Factors
20.
PLoS One ; 8(9): e73832, 2013.
Article in English | MEDLINE | ID: mdl-24058494

ABSTRACT

The science of surveillance is rapidly evolving, driven by the treatment of public health information and preparedness as national security issues, by new information technologies, and by health reform. As the Emergency Department has become a much more utilized venue for acute care, it has also become a more attractive data source for disease surveillance. In recent years, influenza surveillance from the Emergency Department has increased in scope and breadth and has resulted in innovative and increasingly accepted methods of surveillance for influenza and influenza-like illness (ILI). We undertook a systematic review of published Emergency Department-based influenza and ILI syndromic surveillance systems. A PubMed search using the keywords "syndromic", "surveillance", "influenza" and "emergency" was performed. Manuscripts were included in the analysis if they described (1) data from an Emergency Department, (2) surveillance of influenza or ILI, and (3) syndromic or clinical data. Meeting abstracts were excluded. The references of included manuscripts were examined for additional studies. A total of 38 manuscripts met the inclusion criteria, describing 24 discrete syndromic surveillance systems. Emergency Department-based influenza syndromic surveillance has been described worldwide. A wide variety of clinical data were used for surveillance, including chief complaint/presentation, preliminary or discharge diagnosis, free-text analysis of the entire medical record, Google Flu Trends, calls to teletriage and help lines, ambulance dispatch calls, case reports of H1N1 in the media, markers of ED crowding, and admission and Left Without Being Seen rates. Syndromes used to capture influenza rates were nearly always related to ILI (i.e., fever +/- a respiratory or constitutional complaint); however, other syndromes used for surveillance included fever alone, "respiratory complaint", and seizure. Two very large surveillance networks, the North American DiSTRIBuTE network and the European Triple S system, have collected large-scale Emergency Department-based influenza and ILI syndromic surveillance data. Syndromic surveillance for influenza and ILI from the Emergency Department is becoming more prevalent as a measure of yearly influenza outbreaks.
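
The described PubMed keyword search can be reproduced programmatically through NCBI E-utilities, for example with Biopython's Entrez module. A minimal sketch; the email address is a placeholder, and today's result count will differ from the review's 2013 search.

    from Bio import Entrez

    Entrez.email = "you@example.org"   # NCBI requires a contact address; placeholder
    handle = Entrez.esearch(
        db="pubmed",
        term="syndromic AND surveillance AND influenza AND emergency",
        retmax=100,
    )
    record = Entrez.read(handle)
    handle.close()
    print(record["Count"], "records; first IDs:", record["IdList"][:5])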


Subject(s)
Disease Outbreaks , Emergency Service, Hospital/statistics & numerical data , Influenza, Human/epidemiology , Medical Records/statistics & numerical data , Public Health Surveillance/methods , Databases, Bibliographic , Europe/epidemiology , Hospitalization/statistics & numerical data , Humans , Influenza A Virus, H1N1 Subtype/isolation & purification , Influenza, Human/diagnosis , Influenza, Human/pathology , Influenza, Human/virology , North America/epidemiology , Prevalence , Public Health Informatics/statistics & numerical data