1.
Med Teach ; : 1-7, 2024 Mar 14.
Article in English | MEDLINE | ID: mdl-38484293

ABSTRACT

Constructed-response questions (CRQs) require effective marking schemes to ensure that the intended learning objectives and/or professional competencies are appropriately addressed and that valid inferences regarding examinee competence are drawn from such assessments. While the educational literature on writing rubrics has proliferated in recent years, it is largely targeted at classroom use and formative purposes. There is comparatively little guidance on how to develop appropriate marking schemes for summative assessment contexts. These different purposes mean that different principles and practices apply to marking schemes for examinations. In this article, we draw on the educational literature, as well as our own practical experience of working with medical and health professional educators on their questions and marking schemes, to offer 12 key principles or tips for designing and implementing effective marking schemes.

2.
J Med Educ Curric Dev ; 9: 23821205221081813, 2022.
Article in English | MEDLINE | ID: mdl-35237723

ABSTRACT

Rubrics are utilized extensively in tertiary contexts to assess student performance on written tasks; however, their use for the assessment of research projects has received little attention. In particular, there is little evidence on the reliability of examiner judgements according to rubric type (generic or task-specific) in a research context. This research examines the concordance between pairs of examiners assessing a medical student research project during a two-year period using a generic rubric, followed by a two-year implementation of task-specific rubrics. Following examiner feedback, and in light of the available literature, we expected the task-specific rubrics to increase the consistency of examiner judgements and reduce the need for arbitration due to discrepant marks. In contrast, results showed that generic rubrics provided greater consistency of examiner judgements and fewer arbitrations than the task-specific rubrics. These findings have practical implications for educational practice in the assessment of research projects and contribute valuable empirical evidence to inform the development and use of rubrics in medical education.
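
As a rough illustration of the concordance analysis described above (not the study's actual method or data), the sketch below compares paired examiner marks and flags projects whose marks differ by more than an assumed arbitration threshold. The 0-100 scale, the threshold of 10 marks, and the example scores are all hypothetical.

```python
# Hypothetical sketch: summarising agreement between paired examiner marks
# and estimating how often arbitration would be triggered.
# Assumes marks on a 0-100 scale and a fixed arbitration threshold.

from statistics import mean

def concordance_summary(pairs, arbitration_threshold=10):
    """Summarise agreement for (examiner_a, examiner_b) mark pairs."""
    diffs = [abs(a - b) for a, b in pairs]
    arbitrations = sum(d > arbitration_threshold for d in diffs)
    return {
        "mean_absolute_difference": mean(diffs),
        "arbitration_rate": arbitrations / len(pairs),
    }

# Example: three double-marked projects assessed with a generic rubric.
generic_rubric_pairs = [(72, 75), (64, 61), (80, 93)]
print(concordance_summary(generic_rubric_pairs))
```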

3.
J Med Educ Curric Dev ; 6: 2382120519849411, 2019.
Article in English | MEDLINE | ID: mdl-31206032

ABSTRACT

BACKGROUND: Development of diagnostic reasoning (DR) is fundamental to medical students' training, but assessing DR is challenging. Several written assessment formats focus on DR but cannot assess it dynamically. Oral assessment formats have strengths but have largely lost favour due to concerns about low reliability and lack of standardization. Medical schools and specialist medical colleges value many forms of oral assessment (eg, long case, Objective Structured Clinical Examination [OSCE], viva voce) but are increasingly searching for ways to standardize these formats. We sought to develop and trial a Standardized Case-Based Discussion (SCBD), a highly standardized and interactive oral assessment of DR.

METHODS: Two initial cohorts of medical students (n = 319 and n = 342) participated in the SCBD as part of their assessments. All students watch a video trigger (based on an authentic clinical case) and discuss their DR with an examiner for 15 minutes. Examiners probe students' DR and assess how students respond to new standardized clinical information. An online examiner training module clearly articulates the expected student performance standards. We used student achievement and student and examiner perceptions to gauge the performance of this new assessment format over 2 implementation years.

RESULTS: The SCBD was feasible to implement for a large student cohort and was acceptable to students and examiners. Most students and all examiners agreed that the SCBD discussion provided useful information on students' DR. The assessment had acceptable internal consistency, and its associations with other assessment formats were small and positive, suggesting that the SCBD measures a related yet novel construct.

CONCLUSIONS: Rigorous, standardized oral assessments have a place in a programme of assessment in initial medical training because they provide opportunities to explore DR that are limited in other formats. We plan to incorporate the SCBD into our clinical assessments for the first year of clinical training, where teaching and assessing basic DR is emphasized. We will also further explore examiners' understanding of, and approach to, assessing DR.
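
To make the psychometric summary above concrete: the abstract reports acceptable internal consistency and small positive associations with other assessment formats, but does not specify which statistics were used. The sketch below shows one common way such figures could be computed (Cronbach's alpha and a Pearson correlation); the item scores and written-exam totals are invented for illustration and are not the study's data.

```python
# Illustrative only: Cronbach's alpha for internal consistency and a Pearson
# correlation between two assessment formats. All data below are hypothetical.

import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: 2D array, rows = students, columns = scored items."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]
    sum_item_variances = item_scores.var(axis=0, ddof=1).sum()
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_variances / total_variance)

# Hypothetical SCBD item scores for five students on four scored components.
scbd_items = [[3, 4, 3, 5], [2, 3, 3, 4], [4, 4, 5, 5], [1, 2, 2, 3], [3, 3, 4, 4]]
print(round(cronbach_alpha(scbd_items), 2))

# Association with another assessment format (e.g. a written exam total).
scbd_totals = np.sum(scbd_items, axis=1)
written_totals = np.array([62, 55, 71, 48, 66])  # hypothetical totals
print(round(np.corrcoef(scbd_totals, written_totals)[0, 1], 2))
```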

4.
Nutr Diet ; 75(2): 235-243, 2018 04.
Article in English | MEDLINE | ID: mdl-29314662

ABSTRACT

AIM: Health professionals seeking employment in foreign countries are commonly required to undertake competency assessment in order to practice. The present study aims to outline the development and validation of a written examination for Dietetic Skills Recognition (DSR), to assess the knowledge, skills, capabilities and professional judgement of overseas-educated dietitians against the competency standards applied to dietetic graduates in Australia.

METHODS: The present study reviews the design, rationale, validation and outcomes of a multiple choice question (MCQ) written examination for overseas-educated dietitians based on 5 years of administration. The validity of the exam is evaluated using Messick's validity framework, which focuses on five potential sources of validity evidence: content, internal structure, relationships with other variables, response process and consequences. The reference point for the exam pass mark or "cutscore" is the minimum standard required for safe practice.

RESULTS: In total, 114 candidates have completed the MCQ examination at least once, with an overall pass rate of 52% on the first attempt. Pass rates are higher for candidates from countries where dietetic education more closely reflects the Australian model. While the pass rate for each exam tends to vary with each cohort, the cutscore has remained relatively stable over eight administrations.

CONCLUSIONS: The findings provide important data supporting the validity of the MCQ exam. A more complete evaluation of the validity of the exam must be sought within the context of the whole DSR program of assessment. The DSR written component may serve as a model for the use of the MCQ format by dietetic and other professional credentialing organisations.


Subject(s)
Credentialing; Educational Measurement; Nutritionists/education; Writing; Australia; Competency-Based Education/standards; Educational Measurement/standards; Foreign Professional Personnel/education; Humans; Models, Educational
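
As a minimal sketch of how a fixed cutscore translates into the first-attempt pass rate reported in the abstract above, assuming percentage scores and an illustrative cutscore of 60% (the study's actual cutscore and candidate data are not reproduced here):

```python
# Minimal sketch, not the study's data: applying a fixed cutscore to
# first-attempt MCQ percentages and reporting the resulting pass rate.

def first_attempt_pass_rate(scores, cutscore):
    """Proportion of candidates whose first-attempt score meets the cutscore."""
    passes = sum(score >= cutscore for score in scores)
    return passes / len(scores)

candidate_scores = [48, 55, 61, 72, 58, 66, 70, 45]  # hypothetical percentages
print(f"Pass rate: {first_attempt_pass_rate(candidate_scores, cutscore=60):.0%}")
```
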
5.
Acad Med ; 92(6): 780-784, 2017 06.
Article in English | MEDLINE | ID: mdl-28557942

ABSTRACT

PROBLEM: Professionalism is a critical attribute of medical graduates, but its measurement is challenging. The authors sought to assess final-year medical students' knowledge of appropriate professional behavior across a broad range of workplace situations.

APPROACH: Situational judgement tests (SJTs) are widely used in applicant selection to assess judgement or decision making in work-related settings, as well as attributes such as empathy, integrity, and resilience. In 2014, the authors developed three 40-item SJTs with scenarios relevant to interns (first-year junior doctors) and delivered the tests to final-year medical students to assess aspects of professionalism. As preparation, students discussed SJT-style scenarios; after the tests, they completed an evaluation. The authors applied the Angoff method for the standard-setting process, delivered electronic individualized feedback reports to students post-test, and provided remediation for students failing to meet the cut score.

OUTCOMES: Evaluation revealed that the tests positively affected students' learning and that students accepted them as an assessment tool. Validity and reliability were acceptable. Implementation costs were initially high but will be recouped over time.

NEXT STEPS: Recent improvements include changes to pass requirements, question revision based on reliability testing, and provision of detailed item-level feedback. Work is currently under way to expand the item bank and to introduce the tests earlier in the course. Future research will explore the correlation of SJT performance with other measures of professionalism and focus on the impact of SJTs on professionalism and on interns' ability to deal with challenging workplace situations.


Subject(s)
Education, Medical, Undergraduate/standards; Educational Measurement/methods; Judgment; Professionalism/education; Professionalism/standards; Students, Medical; Adult; Curriculum; Decision Making; Empathy; Female; Humans; Male; Psychometrics; Reproducibility of Results; Surveys and Questionnaires; United States; Young Adult
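
The abstract above notes that the Angoff method was used for standard setting. The sketch below shows the basic Angoff calculation as commonly described (each judge estimates the probability that a borderline candidate answers each item correctly, and the per-item means are summed to give the cut score); the judge ratings are invented and do not reflect the study's panel.

```python
# Basic Angoff calculation: the cut score is the expected test score of a
# minimally competent ("borderline") candidate, obtained by averaging judges'
# per-item probability estimates and summing across items.
# The ratings below are invented for illustration.

def angoff_cut_score(ratings_by_judge):
    """ratings_by_judge: one list of per-item probabilities (0-1) per judge;
    all judges rate the same items in the same order."""
    n_judges = len(ratings_by_judge)
    n_items = len(ratings_by_judge[0])
    item_means = [
        sum(judge[i] for judge in ratings_by_judge) / n_judges
        for i in range(n_items)
    ]
    return sum(item_means)

# Three judges rating a five-item test.
ratings = [
    [0.6, 0.7, 0.5, 0.8, 0.4],
    [0.7, 0.6, 0.6, 0.9, 0.5],
    [0.5, 0.7, 0.5, 0.7, 0.4],
]
print(f"Angoff cut score: {angoff_cut_score(ratings):.1f} out of 5 items")
```

In practice, panels typically discuss discrepant item ratings and may iterate before a final cut score is adopted; the sketch shows only the core calculation.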