Results 1 - 6 of 6
1.
Acad Pediatr ; 18(5): 535-541, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29325913

ABSTRACT

OBJECTIVE: Effective self-directed educational tools are invaluable. Our objective was to determine whether a self-directed, web-based oral case presentation module would improve medical students' oral case presentations compared with the usual curriculum, and with efficacy similar to that of structured faculty feedback sessions on oral presentations.

METHODS: We conducted a pragmatic multicenter cluster randomized controlled trial among medical students rotating in pediatric clerkships at 7 US medical schools. In the clerkship's first 14 days, subjects were instructed to complete an online Computer-Assisted Learning in Pediatrics Program (CLIPP) oral case presentation module, attend an in-person faculty-led case presentation feedback session, or do neither (control). At the clerkship's end, evaluators blinded to intervention status rated the quality of students' oral case presentations on a 10-point scale. We conducted intention-to-treat multivariable analyses clustered on clerkship block.

RESULTS: Study participants included 256 CLIPP (32.5%), 263 feedback (33.3%), and 270 control (34.2%) subjects. Only 51.1% of CLIPP subjects completed the assigned presentation module, whereas 98.5% of feedback subjects participated in presentation feedback sessions. Compared with controls, oral presentation quality was significantly higher in the feedback group (adjusted difference in mean quality, 0.28; 95% confidence interval, 0.08 to 0.49) and trended toward being significantly higher in the CLIPP group (0.19; 95% confidence interval, -0.006 to 0.38). The quality of presentations in the CLIPP and feedback groups was not significantly different (-0.10; 95% confidence interval, -0.31 to 0.11).

CONCLUSIONS: The quality of oral case presentations delivered by students randomized to complete the CLIPP module did not differ from that of students assigned to faculty-led presentation feedback sessions and was not statistically superior to control.


Subject(s)
Clinical Clerkship/methods; Education, Distance/methods; Formative Feedback; Pediatrics/education; Adult; Female; Humans; Internet; Male; Multivariate Analysis; Program Evaluation; Schools, Medical; Students, Medical; United States; Young Adult
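The adjusted mean differences reported above come from regressions with inference clustered on clerkship block. Below is a minimal sketch of that kind of analysis in Python with statsmodels, assuming a hypothetical per-student dataset with columns quality, arm, and block; the authors' actual model specification and covariates are not given in the abstract.

```python
# Sketch: adjusted difference in mean presentation quality between arms,
# with standard errors clustered on clerkship block (hypothetical data).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("presentations.csv")  # hypothetical columns: quality, arm, block

# OLS of quality on study arm, control as reference, cluster-robust SEs.
model = smf.ols("quality ~ C(arm, Treatment(reference='control'))", data=df)
fit = model.fit(cov_type="cluster", cov_kwds={"groups": df["block"]})

print(fit.params)                # adjusted differences in mean quality
print(fit.conf_int(alpha=0.05))  # 95% confidence intervals
```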
2.
MedEdPORTAL ; 13: 10603, 2017 Jul 21.
Article in English | MEDLINE | ID: mdl-30800805

ABSTRACT

INTRODUCTION: We developed, revised, and implemented self-directed rater training materials in the course of a validity study for a written Pediatric History and Physical Exam Evaluation (P-HAPEE) rubric.

METHODS: Core training materials consist of a single-page instruction sheet, a sample written history and physical (H&P), and a detailed answer key. We iteratively revised the materials based on reviewer comments and pilot testing. During the validity study, 18 attending physicians and 5 senior residents underwent self-directed training, scored 10 H&Ps, and completed a rubric utility survey. We have since implemented the P-HAPEE rubric and self-directed rater training in a pediatric clerkship. Based on input from reviewers, study raters, faculty members, and medical student users, we have also developed and implemented optional supplemental training materials.

RESULTS: Pilot testing indicated that training takes approximately 1 hour. While reviewers endorsed the training format, several suggested having optional supplemental materials available. Nineteen of 23 volunteer study raters completed the rubric utility survey. All described the rubric as good or very good and indicated strong to very strong interest in continued use.

DISCUSSION: The P-HAPEE rubric offers a novel, practical, reliable, and valid method for supervising physicians to assess pediatric written H&Ps and can be implemented using brief, self-directed rater training.

3.
Acad Pediatr ; 17(1): 68-73, 2017.
Article in English | MEDLINE | ID: mdl-27521461

ABSTRACT

OBJECTIVE: The written history and physical examination (H&P) is an underutilized source of medical trainee assessment. The authors describe the development of, and validity evidence for, the Pediatric History and Physical Exam Evaluation (P-HAPEE) rubric: a novel tool for evaluating written H&Ps.

METHODS: Using an iterative process, the authors drafted, revised, and implemented the 10-item rubric at 3 academic institutions in 2014. Eighteen attending physicians and 5 senior residents each scored 10 third-year medical student H&Ps. Inter-rater reliability (IRR) was determined using intraclass correlation coefficients. Cronbach α was used to report internal consistency, and Spearman rank-order correlations were used to determine relationships between rubric items. Raters provided a global assessment, recorded the time to review and score each H&P, and completed a rubric utility survey.

RESULTS: Overall intraclass correlation was 0.85, indicating adequate IRR. Global assessment IRR was 0.89. IRR for low- and high-quality H&Ps was significantly greater than for medium-quality ones but did not differ on the basis of rater category (attending physician vs senior resident), note format (electronic health record vs nonelectronic), or student diagnostic accuracy. Cronbach α was 0.93. The highest correlation between an individual item and the total score was for assessment (0.84); the highest interitem correlation was between assessment and differential diagnosis (0.78). Mean time to review and score an H&P was 16.3 minutes; residents took significantly longer than attending physicians. All raters described rubric utility as "good" or "very good" and endorsed continued use.

CONCLUSIONS: The P-HAPEE rubric offers a novel, practical, reliable, and valid method for supervising physicians to assess pediatric written H&Ps.


Subject(s)
Documentation/standards; Medical History Taking; Pediatrics/education; Physical Examination; Clinical Competence; Education, Medical, Undergraduate; Factor Analysis, Statistical; Humans; Reproducibility of Results
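The reliability statistics reported above (intraclass correlations, Cronbach α, Spearman rank-order correlations) are standard and straightforward to reproduce. Below is a sketch using the pingouin library on hypothetical data with made-up file and column names; it illustrates the statistics themselves, not the authors' actual analysis code.

```python
# Sketch: rubric reliability statistics on hypothetical rating data.
import pandas as pd
import pingouin as pg

# Long format: one row per (H&P, rater) pair.
ratings = pd.read_csv("ratings_long.csv")  # hypothetical: hp_id, rater, total_score

# Intraclass correlation coefficients for inter-rater reliability.
icc = pg.intraclass_corr(data=ratings, targets="hp_id",
                         raters="rater", ratings="total_score")
print(icc[["Type", "ICC", "CI95%"]])

# Cronbach's alpha over the 10 rubric items (wide: one row per H&P).
items = pd.read_csv("rubric_items_wide.csv")  # hypothetical: item1 ... item10
print(pg.cronbach_alpha(data=items))

# Spearman rank-order correlation between two rubric items.
print(items["item1"].corr(items["item2"], method="spearman"))
```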
4.
Glob Pediatr Health ; 3: 2333794X16669013, 2016.
Article in English | MEDLINE | ID: mdl-27689103

ABSTRACT

The objective of this study was to determine whether a training module improves the auscultation skills of medical students at the University of Maryland School of Medicine. Second-year medical students completed pretests on 12 heart sounds, followed by a 45-minute training module on clinical auscultation, with retesting immediately after the intervention and again during their third-year pediatrics clerkship. The control group consisted of third-year medical students who did not receive the intervention. There was a 23% improvement in the identification of heart sounds postintervention (P < .001). Diastolic and valvular murmurs were poorly identified both pre- and postintervention. The intervention group's accuracy declined by 6% by the following academic year. The intervention group was superior to the control group at identifying the tested heart sounds (49% vs 43%, P = .04). The accuracy of second-year medical students in identifying heart sounds improved after a brief training module.
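The comparisons above are pre/post and between-group tests of identification accuracy. The abstract does not name the tests used, so the following is only a sketch of how such comparisons might be run with scipy, on made-up accuracy scores rather than the study's data.

```python
# Sketch: pre/post and intervention-vs-control comparisons of
# heart-sound identification accuracy (illustrative numbers only).
import numpy as np
from scipy import stats

# Hypothetical per-student proportion correct on the 12 heart sounds.
pre = np.array([0.30, 0.42, 0.25, 0.50, 0.33])
post = np.array([0.58, 0.67, 0.50, 0.67, 0.58])

# Paired test for within-group improvement after the module.
print(stats.ttest_rel(post, pre))

# Unpaired test: intervention vs control in the third year.
control = np.array([0.42, 0.45, 0.38, 0.50, 0.41])
print(stats.ttest_ind(post, control, equal_var=False))
```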

5.
J Grad Med Educ ; 7(1): 53-8, 2015 Mar.
Article in English | MEDLINE | ID: mdl-26217423

ABSTRACT

BACKGROUND: Pediatricians underestimate the prevalence of substance misuse among children and adolescents and often fail to screen and intervene in practice. The American Academy of Pediatrics recommends training in Screening, Brief Intervention, and Referral to Treatment (SBIRT), but training outcomes and skill acquisition are rarely assessed.

OBJECTIVE: We compared the effects of online versus in-person SBIRT training on pediatrics residents' knowledge, attitudes, behaviors, and skills.

METHODS: Forty pediatrics residents were randomized to receive either online or in-person training. Skills were assessed through pre- and posttraining standardized patient interviews, which 2 trained coders scored for SBIRT-adherent and -nonadherent behaviors and global skills. Thirty-two residents also completed pre- and postsurveys of their substance use knowledge, attitudes, and behaviors (KABs). Two-way repeated-measures multivariate analyses of variance (MANOVAs) and analyses of variance (ANOVAs) were used to assess group differences in skill acquisition and KABs.

RESULTS: Both groups demonstrated skill improvement from pre- to postassessment and increased their knowledge, self-reported behaviors, confidence, and readiness, with no significant between-group differences. Follow-up univariate analyses indicated that, while both groups increased their SBIRT-adherent skills, the online training group displayed more "undesirable" behaviors posttraining.

CONCLUSIONS: This study indicates that brief training, online or in-person, can increase pediatrics residents' SBIRT skills, knowledge, self-reported behaviors, confidence, and readiness. The findings further suggest that in-person training may have incremental benefit in teaching residents what not to do.


Subject(s)
Clinical Competence; Computer-Assisted Instruction; Education, Medical, Graduate; Internship and Residency; Pediatrics/education; Referral and Consultation; Substance Abuse Detection/standards; Adolescent; Child; Educational Measurement; Female; Health Knowledge, Attitudes, Practice; Humans; Interviews as Topic; Male; Psychotherapy, Brief
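The group-by-time analysis described above is a mixed (between x within) repeated-measures design. Below is a sketch of one way to run such an analysis in Python with pingouin's mixed_anova, using hypothetical column names; note this is a mixed ANOVA, whereas the authors report MANOVAs and ANOVAs whose exact specification the abstract does not give.

```python
# Sketch: group (online vs in-person) x time (pre vs post) mixed ANOVA
# on SBIRT skill scores (hypothetical long-format data).
import pandas as pd
import pingouin as pg

# One row per resident per assessment: resident, group, time, score.
skills = pd.read_csv("sbirt_skills_long.csv")  # hypothetical file

aov = pg.mixed_anova(dv="score", within="time", between="group",
                     subject="resident", data=skills)
print(aov)  # main effects of time and group, plus their interaction
```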
6.
Pediatrics ; 134(5): 965-71, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25349316

ABSTRACT

OBJECTIVE: To measure the effects of participating in structured oral presentation evaluation sessions early in pediatric clerkships on students' subsequent presentations.

METHODS: We conducted a single-blind, 3-arm, cluster randomized controlled trial during pediatric clerkships at Boston University School of Medicine, University of Maryland School of Medicine, Oregon Health & Science University, and Case Western Reserve University School of Medicine. Blocks of students at each school were randomly assigned to experience either (1) no formal presentation feedback (control) or a small-group presentation feedback session early in the pediatric clerkship, in which students gave live presentations and received feedback from faculty who rated their presentations using a (2) single-item (simple) or (3) 18-item (detailed) evaluation form. At the clerkship's end, the overall quality of subjects' presentations was rated by faculty blinded to randomization status, and subjects reported whether their presentations had improved. Analyses included multivariable linear and logistic regressions clustered on clerkship block that controlled for medical school.

RESULTS: A total of 476 participants were evenly divided among the 3 arms, which had similar characteristics. Compared with controls, presentation quality was significantly associated with participation in detailed (coefficient: 0.38; 95% confidence interval [CI]: 0.07 to 0.69) but not simple (coefficient: 0.16; 95% CI: -0.12 to 0.43) feedback sessions. Similarly, student self-report of presentation improvement was significantly associated with participation in detailed (odds ratio: 2.16; 95% CI: 1.11 to 4.18) but not simple (odds ratio: 1.89; 95% CI: 0.91 to 3.93) feedback sessions.

CONCLUSIONS: Small-group presentation feedback sessions led by faculty using a detailed evaluation form resulted in clerkship students delivering higher-quality oral presentations compared with controls.


Subject(s)
Clinical Clerkship/standards; Clinical Competence/standards; Communication; Feedback; Pediatrics/education; Pediatrics/standards; Adult; Clinical Clerkship/methods; Female; Humans; Male; Pediatrics/methods; Program Evaluation/standards; Single-Blind Method; Young Adult
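The odds ratios above come from a logistic regression clustered on clerkship block and adjusted for medical school. Below is a minimal statsmodels sketch of that arm of the analysis, with hypothetical column names (improved, arm, school, block); it illustrates the technique, not the authors' actual code.

```python
# Sketch: odds of self-reported presentation improvement by study arm,
# controlling for school, clustered on clerkship block (hypothetical data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("feedback_trial.csv")  # hypothetical: improved, arm, school, block

logit = smf.logit(
    "improved ~ C(arm, Treatment(reference='control')) + C(school)", data=df)
fit = logit.fit(cov_type="cluster", cov_kwds={"groups": df["block"]})

print(np.exp(fit.params))      # odds ratios
print(np.exp(fit.conf_int()))  # 95% CIs for the odds ratios
```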