1.
Med Educ Online; 18: 22495, 2013 Nov 19.
Article in English | MEDLINE | ID: mdl-24256741

ABSTRACT

BACKGROUND: The meaningful use (MU) of electronic medical records (EMRs) is being implemented in three stages. Key objectives of stage one include electronic analysis of data entered into structured fields, use of decision-support tools (e.g., checking drug-drug interactions [DDIs]), and electronic information exchange. OBJECTIVE: The authors assessed the performance of medical students on 10 stage-one MU tasks and measured the correlation between students' MU performance and both subsequent end-of-clerkship professionalism assessments and their grades on an end-of-year objective structured clinical examination (OSCE). PARTICIPANTS: Two hundred and twenty-two third-year medical students on the internal medicine (IM) clerkship. DESIGN/MAIN MEASURES: From July 2010 to February 2012, all students viewed 15 online tutorials covering MU competencies. The authors measured student MU documentation and performance in the chart of a virtual patient using a fully functional training EMR. Specific MU measurements included adding a new problem, a new medication, an advance directive, smoking status, and the results of screening tests; performing a DDI check (in which a major interaction was probable); and communicating a plan for this interaction. KEY RESULTS: A total of 130 MU errors were identified. Sixty-eight (30.6%) students had at least one error, and 30 (13.5%) had more than one (range 2-6). Of the 130 errors, 90 (69.2%) were errors in structured data entry. Errors occurred in medication dosing and instructions (18%), DDI identification (12%), documenting smoking status (15%), and colonoscopy results (23%). Students with MU errors demonstrated poorer performance on end-of-clerkship professionalism assessments (r = -0.112, p = 0.048) and lower OSCE history-taking (r = -0.165, p = 0.008) and communication scores (r = -0.173, p = 0.006).
CONCLUSIONS: MU errors among medical students are common and correlate with subsequent poor performance in multiple educational domains. These results indicate that without assessment and feedback, a substantial minority of students may not be ready to progress to more advanced MU tasks.
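The effect sizes reported above are small Pearson correlations. As an illustrative sketch only (the data below are hypothetical, not from the study), a Pearson r between per-student error counts and assessment scores can be computed as:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical data: MU error counts vs. OSCE communication scores
errors = [0, 0, 1, 2, 0, 3, 1, 0]
scores = [88, 92, 80, 75, 90, 70, 82, 85]
r = pearson_r(errors, scores)  # negative: more errors, lower scores
```

A negative r of the magnitude reported in the abstract (around -0.1 to -0.2) indicates a weak inverse association, consistent with the authors' conclusion of a correlation rather than a strong predictive relationship.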


Subjects
Educational Measurement/methods; Meaningful Use/organization & administration; Professional Competence/standards; Students, Medical; Clinical Clerkship; Education, Medical, Undergraduate; Electronic Health Records; Humans; Internal Medicine/education; Michigan
2.
Teach Learn Med; 25(4): 292-9, 2013.
Article in English | MEDLINE | ID: mdl-24112197

ABSTRACT

BACKGROUND: We developed, implemented, and assessed a web-based clinical evaluation application (the CEX app) for Internet-enabled mobile devices, including mobile phones. The app displays problem-specific checklists that correspond to training problems created by the Clerkship Directors in Internal Medicine. PURPOSE: We hypothesized that use of the CEX app for directly observing students' clinical skills would be feasible and acceptable, and would demonstrate adequate reliability and validity. METHODS: Between July 2010 and February 2012, 266 third-year medical students completed 5 to 10 formative CEXs during their internal medicine clerkship. The observers (attendings and residents) who performed the CEX used the app to guide and document their observations, to record the time they spent observing and giving feedback to the students, and to record their overall satisfaction with the CEX app. Interrater reliability and validity were assessed with 17 observers who viewed 6 videotaped student-patient encounters, and by measuring the correlation between student CEX scores and their scores on subsequent standardized-patient objective structured clinical examinations (OSCEs). RESULTS: A total of 2,523 CEXs were completed by 411 observers. The average number of evaluations per student was 9.8 (± 1.8 SD), and the average number of CEXs completed per observer was 6 (± 11.8 SD). Observers spent less than 10 minutes on 45.3% of the CEXs and on 68.6% of the feedback sessions. An overwhelming majority of observers (90.6%) reported satisfaction with the CEX. Interrater reliability among the observers viewing the videotapes was 0.69, and their ratings discriminated between competent and noncompetent performances. Student CEX grades, however, did not correlate with their end-of-third-year OSCE scores. CONCLUSIONS: The use of this CEX app is feasible, and it captures students' clinical performance data with a high rate of user satisfaction.
Our embedded checklists had adequate interrater reliability and concurrent validity. The grades recorded in the app, however, were not predictive of subsequent student performance.
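The abstract reports an interrater reliability of 0.69 without naming the coefficient used. As a hedged illustration of one common chance-corrected agreement statistic for categorical checklist items, Cohen's kappa between two raters could be computed as follows (the rater data here are hypothetical, not from the study):

```python
def cohens_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two raters
    over the same set of categorical ratings."""
    n = len(rater1)
    categories = set(rater1) | set(rater2)
    # Observed proportion of exact agreement
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Agreement expected by chance, from each rater's marginal rates
    p_exp = sum((rater1.count(c) / n) * (rater2.count(c) / n)
                for c in categories)
    if p_exp == 1.0:  # both raters used a single category throughout
        return 1.0
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical checklist items scored done (1) / not done (0) by two observers
a = [1, 1, 0, 1, 0, 1, 1, 0]
b = [1, 1, 0, 0, 0, 1, 1, 1]
kappa = cohens_kappa(a, b)
```

Note that with 17 observers rating 6 encounters, the study likely used a multi-rater statistic such as an intraclass correlation coefficient rather than pairwise kappa; this sketch only illustrates the general idea of chance-corrected agreement.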


Subjects
Clinical Clerkship; Clinical Competence/standards; Mobile Applications; Observation/methods; Students, Medical; Checklist; Feasibility Studies; Humans; Internal Medicine/education; Michigan