Results 1 - 11 of 11
1.
Med Educ ; 2023 Nov 28.
Article in English | MEDLINE | ID: mdl-38017648

ABSTRACT

INTRODUCTION: Test-enhanced learning (TEL) is an impactful teaching and learning strategy that prioritises active learner engagement through regular testing and review. While it is clear that meaningful feedback optimises the effects of TEL, the ideal timing of this feedback (i.e. immediate or delayed) in a medical education setting is unclear. METHOD: Forty-one second-year medical students were recruited from the University of Melbourne. Participants were given a multiple-choice question test with a mix of immediate (i.e. post-item) and delayed (i.e. post-item-block) conceptual feedback. Students were then tested on near and far transfer items in an immediate post-test and at a one-week follow-up. RESULTS: A logistic mixed-effects model was used to predict the probability of successful near and far transfer. As expected, participants tended to score lower on far transfer items than on near transfer items. In addition, a correct initial response on a parent question predicted subsequent correct responding. Contrary to our hypotheses, the effect of feedback timing was non-significant: there was no discernible difference between immediate and delayed feedback. DISCUSSION: These findings suggest that the timing of feedback delivery (post-item versus post-item-block) does not influence the efficacy of TEL in this medical education setting. We therefore suggest that educators may weigh practical factors when determining appropriate TEL feedback timing in their setting.
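
For readers unfamiliar with the analysis named above, the following sketch shows one way such a logistic mixed-effects model could be fitted in Python. It is an illustration only, not the authors' code; the file and column names are hypothetical.

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# One row per item response: correctness (0/1), transfer distance (near/far),
# feedback timing (immediate/delayed), initial parent-item accuracy, student ID.
df = pd.read_csv("transfer_responses.csv")  # hypothetical data file

model = BinomialBayesMixedGLM.from_formula(
    "correct ~ C(transfer) + C(feedback_timing) + initial_correct",  # fixed effects
    {"student": "0 + C(student_id)"},  # random intercept per participant
    data=df,
)
result = model.fit_vb()  # variational Bayes approximation to the posterior
print(result.summary())
```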

2.
Perspect Med Educ ; 9(5): 307-313, 2020 10.
Article in English | MEDLINE | ID: mdl-32789664

ABSTRACT

INTRODUCTION: The role of feedback in test-enhanced learning is an understudied area with the potential to improve student learning. This study investigates the influence of different forms of post-test feedback on retention and transfer of biomedical knowledge within a test-enhanced learning framework. METHODS: Sixty-four participants from a Canadian and an Australian medical school sat two single-best-answer formative multiple-choice tests one week apart. We compared the effects of conceptually focused, response-oriented, and simple right/wrong feedback on a learner's ability to correctly answer new (transfer) questions. On the first test occasion, participants received parent items with feedback, and then attempted items closely related to (near transfer) and more distant from (far transfer) the parent items. In a repeat test at one week, participants were given different near and far transfer versions of the parent items. Feedback type, and near and far transfer items, were randomized within and across participants, as sketched below. RESULTS: Analysis demonstrated that response-oriented and conceptually focused feedback were superior to traditional right/wrong feedback for both types of transfer task and in both immediate and final retention test performance. However, there was no statistically significant difference between the response-oriented and conceptually focused groups on near or far transfer problems, nor any difference in performance between the initial test occasion and the retention test one week later. As in most studies of transfer, participants' far transfer scores were lower than their near transfer scores. DISCUSSION: Right/wrong feedback appears to have limited potential to augment test-enhanced learning. Our work suggests that item-level feedback and feedback that identifies and elaborates on key conceptual knowledge are two important areas for future research on learning, retention and transfer.
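
A minimal sketch of one way the within-participant randomization described above could be implemented; the balancing scheme and names are assumptions for illustration, not the authors' protocol.

```python
import random

FEEDBACK_TYPES = ["right_wrong", "response_oriented", "conceptual"]

def assign_feedback(parent_items, seed=None):
    """Randomly assign a feedback condition to each parent item for one
    participant, keeping the three conditions (near-)equally represented."""
    rng = random.Random(seed)
    items = list(parent_items)
    rng.shuffle(items)
    # Cycle through the conditions over the shuffled items.
    return {item: FEEDBACK_TYPES[i % len(FEEDBACK_TYPES)]
            for i, item in enumerate(items)}

# Seeding with a participant ID keeps each participant's assignment
# reproducible while varying assignments across participants.
print(assign_feedback(range(12), seed=42))
```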


Subject(s)
Educational Measurement/standards , Feedback , Educational Measurement/methods , Educational Measurement/statistics & numerical data , Humans , Knowledge , Ontario , Schools, Medical/organization & administration , Schools, Medical/statistics & numerical data , Victoria
3.
Med Educ ; 54(11): 1075-1076, 2020 11.
Article in English | MEDLINE | ID: mdl-32845028

Subject(s)
Students , Humans
4.
MedEdPublish (2016) ; 9: 214, 2020.
Article in English | MEDLINE | ID: mdl-38073825

ABSTRACT

Objective Structured Clinical Examinations (OSCEs) are extensively used for clinical assessment in the health professions. However, current social distancing requirements (including on-campus bans) at many universities have made the co-location of participants for large-cohort OSCEs impossible. While there is a developing literature on remote OSCEs, particularly in response to the COVID-19 pandemic, it is dominated by approaches dealing with small participant numbers. This paper describes our recent large-scale (n = 361 candidates) implementation of a remotely delivered two-station OSCE. The planning for this OSCE was extensive and involved comprehensive candidate, examiner and simulated patient orientation and training. Our processes were explicitly designed to develop platform familiarity for all participants and included building on remote tutorial experiences and device testing. Our remote OSCE design and logistics made use of existing enterprise solutions, including videoconferencing, survey and collaboration platforms, and allowed extra time between candidates in case of technical issues. We describe our process in detail, including examiner, simulated patient and candidate perspectives, to assist other institutions in understanding and adopting our approach. Although logistically complex, we have demonstrated that it is possible to deliver a remote OSCE assessment involving a large student cohort with a limited number of stations using commonly available enterprise solutions. We recognise it would be ideal to sample more broadly across stations and examiners, yet given the constraints of our current COVID-19-impacted environment, we believe this to be an appropriate compromise for a non-graduating cohort at this time.

5.
JMIR Med Educ ; 3(2): e17, 2017 Oct 02.
Article in English | MEDLINE | ID: mdl-28970187

ABSTRACT

BACKGROUND: Medical students have access to a wide range of learning resources, many of which have been specifically developed for, or identified and recommended to, them by curriculum developers or teaching staff. There is an expectation that students will access and use these resources to support their self-directed learning. However, medical educators lack detailed and reliable data about which of these resources students use to support their learning and how this use relates to key learning events or activities. OBJECTIVE: The purpose of this study was to comprehensively document first-year medical students' selection and use of online learning resources to support their bioscience learning within a case-based curriculum, and to assess these data against our expectations of student learning resource requirements and use. METHODS: Study data were drawn from 2 sources: a survey of student learning resource selection and use (2013 cohort; n=326) and access logs from the medical school learning platform (2012 cohort; n=337). The paper-based survey, distributed to all first-year students, was designed to assess the frequency and types of online learning resources accessed by students and included items about their perceptions of the usefulness, quality, and reliability of various resource types and sources. Of 237 surveys returned, 118 complete responses were analyzed (36.2% response rate). Usage logs from the learning platform for an entire semester were processed to provide estimates of first-year student resource use on an individual and cohort-wide basis according to method of access, resource type, and learning event. RESULTS: According to the survey data, students accessed learning resources via the learning platform several times per week on average, slightly more often than they accessed resources from other online sources. Google and Wikipedia were the most frequently used nonuniversity sites, while scholarly information sites (eg, online journals and scholarly databases) were accessed relatively infrequently. Students were more likely to select learning resources on the recommendation of peers than of teaching staff. The overwhelming majority of the approximately 70,000 resource accesses students made via the learning platform were to lecture notes, each accessed an average of 167 times. By comparison, recommended journal articles and (online) textbook chapters were accessed only 49 and 31 times each, respectively. The number and type of learning resources accessed by students through the learning platform was highly variable, with a cluster analysis revealing that a quarter of students accessed very few resources in this way. CONCLUSIONS: Medical students have easy access to a wide range of quality learning resources, and while some make good use of the resources recommended to them, many ignore most and access the remainder infrequently. Learning analytics can provide useful measures of student resource access through university learning platforms but cannot account for resources accessed via external online sources or shared via social media.
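
A rough sketch of the kind of log aggregation and cluster analysis described above, assuming a flat export of the learning platform's access logs; the column names and number of clusters are illustrative, not taken from the paper.

```python
import pandas as pd
from sklearn.cluster import KMeans

# Access log with one row per resource hit: student ID, resource type, timestamp.
logs = pd.read_csv("lms_access_logs.csv")  # hypothetical export

# Per-student counts of each resource type (lecture notes, journal articles, ...).
usage = logs.pivot_table(index="student_id", columns="resource_type",
                         values="timestamp", aggfunc="count", fill_value=0)

# Group students by usage profile; the paper reports a cluster of students
# who accessed very few resources through the platform.
usage["cluster"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(usage)
print(usage.groupby("cluster").mean())
```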

6.
Med Educ ; 51(9): 963-973, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28833428

ABSTRACT

OBJECTIVE: Self-regulation is recognised as a requisite skill for professional practice. This study is part of a programme of research designed to explore efficient methods of feedback that improve medical students' ability to self-regulate their learning. Our aim was to clarify how students respond to different forms and content of written feedback and to explore the impact on study behaviour and knowledge acquisition. METHODS: Year 2 students in a 4-year graduate-entry medical programme, who completed four formative progress tests during the academic year, were randomised into three groups receiving different feedback reports. All reports included the proportion correct overall and by clinical rotation. One group received feedback reports including lists of the clinical presentations relating to questions answered correctly and incorrectly; another group received reports containing this same information in combination with response certitude. The final group received reports involving normative comparisons. Baseline progress test performance quartile groupings (a proxy for academic ability) were determined by results on the first progress test. A mixed-methods approach with triangulation of research findings was used to interpret results. Outcomes of interest included progress test scores, summative examination results and measures derived from study diaries, questionnaires and semi-structured interviews. RESULTS: Of the three types of feedback provided in this experiment, feedback containing normative comparisons resulted in inferior test performance for students in the lowest performance quartile group. This type of feedback appeared to stimulate general rather than examination-focused study. CONCLUSIONS: Medical students are often considered relatively homogeneous and high achieving, yet the results of this study suggest caution when providing them with normative feedback indicating poorer performance relative to their peers. Much work remains to explore efficient methods of providing written feedback that improves medical students' ability to self-regulate their learning, particularly for those students who have the most room for improvement.
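
To make the report contents concrete, here is a minimal sketch of how the proportion-correct summaries and baseline quartile groupings described above could be computed; the data layout and names are hypothetical.

```python
import pandas as pd

# One row per progress-test item response: student ID, clinical rotation, correct (0/1).
df = pd.read_csv("progress_test_responses.csv")  # hypothetical file

overall = df.groupby("student_id")["correct"].mean()
by_rotation = df.pivot_table(index="student_id", columns="rotation",
                             values="correct", aggfunc="mean")

# Quartile groupings on the first test serve as a proxy for academic ability.
quartile = pd.qcut(overall, 4, labels=["Q1", "Q2", "Q3", "Q4"])
report = by_rotation.assign(overall=overall, quartile=quartile)
print(report.head())
```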


Subject(s)
Clinical Competence , Education, Medical, Undergraduate , Educational Measurement/methods , Formative Feedback , Learning , Students, Medical/psychology , Humans , Peer Group
7.
Perspect Med Educ ; 6(5): 356-361, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28819803

ABSTRACT

Large-scale interview and simulation-based assessments such as objective structured clinical examinations (OSCEs) and multiple mini interviews (MMIs) are logistically complex to administer, generate large volumes of assessment data, and are strong candidates for the adoption of computer-based marking systems. Adoption of new technologies can be challenging, and technical failures, which are relatively commonplace, can delay and/or create resistance to ongoing implementation. This paper reports on the adoption process of an electronic marking system for OSCEs and MMIs following an unsuccessful initial trial. It describes how, after the initial setback, a staged implementation, progressing from small to larger-scale assessments, single to multiple assessment types, and lower- to higher-stakes assessments, was used to successfully adopt and embed iPad-based marking within our medical school. Critical factors in the success of this approach included thorough appraisal and selection of technologies, rigorous assurance of system reliability and security, constant review and refinement, and careful attention to implementation and end-user training. Engagement of stakeholders is also crucial, especially in the case of previous failures or setbacks. The early identification and recruitment of staff to provide specific expertise and support for the adoption of an innovation helps to facilitate this process, with four key roles proposed: innovation advocate, champion, expert and sponsor.

8.
Acad Med ; 92(6): 780-784, 2017 06.
Article in English | MEDLINE | ID: mdl-28557942

ABSTRACT

PROBLEM: Professionalism is a critical attribute of medical graduates. Its measurement is challenging. The authors sought to assess final-year medical students' knowledge of appropriate professional behavior across a broad range of workplace situations. APPROACH: Situational judgement tests (SJTs) are used widely in applicant selection to assess judgement or decision making in work-related settings as well as attributes such as empathy, integrity, and resilience. In 2014, the authors developed three 40-item SJTs with scenarios relevant to interns (first-year junior doctors) and delivered the tests to final-year medical students to assess aspects of professionalism. As preparation, students discussed SJT-style scenarios; after the tests they completed an evaluation. The authors applied the Angoff method for the standard-setting process, delivered electronic individualized feedback reports to students post test, and provided remediation for students failing to meet the cut score. OUTCOMES: Evaluation revealed that the tests positively affected students' learning and that students accepted them as an assessment tool. Validity and reliability were acceptable. Implementation costs were initially high but will be recouped over time. NEXT STEPS: Recent improvements include changes to pass requirements, question revision based on reliability testing, and provision of detailed item-level feedback. Work is currently under way to expand the item bank and to introduce tests earlier in the course. Future research will explore correlation of SJT performance with other measures of professionalism and focus on the impact of SJTs on professionalism and interns' ability to deal with challenging workplace situations.
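
The Angoff standard-setting process mentioned above reduces to a simple computation: each judge estimates, per item, the probability that a minimally competent candidate answers correctly, and the cut score is the sum of the per-item means. A minimal sketch with illustrative numbers (not data from this study):

```python
# Each inner list holds one judge's per-item probability estimates that a
# minimally competent (borderline) candidate answers the item correctly.
judge_ratings = [
    [0.6, 0.7, 0.5, 0.8],  # judge 1
    [0.5, 0.8, 0.6, 0.7],  # judge 2
    [0.7, 0.6, 0.6, 0.9],  # judge 3
]

# Average across judges for each item, then sum across items.
item_means = [sum(col) / len(col) for col in zip(*judge_ratings)]
cut_score = sum(item_means)  # expected raw score of a borderline candidate
print(f"Cut score: {cut_score:.2f} out of {len(item_means)} items")
```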


Subject(s)
Education, Medical, Undergraduate/standards , Educational Measurement/methods , Judgment , Professionalism/education , Professionalism/standards , Students, Medical , Adult , Curriculum , Decision Making , Empathy , Female , Humans , Male , Psychometrics , Reproducibility of Results , Surveys and Questionnaires , United States , Young Adult
9.
Stud Health Technol Inform ; 245: 447-451, 2017.
Article in English | MEDLINE | ID: mdl-29295134

ABSTRACT

Computer-aided learning systems (e-learning systems) can help medical students gain more experience with diagnostic reasoning and decision making. Within this context, providing feedback that matches students' needs (i.e. personalised feedback) is both critical and challenging. In this paper, we describe the development of a machine learning model to support medical students' diagnostic decisions. Machine learning models were trained on 208 clinical cases presenting with abdominal pain, to predict five diagnoses. We assessed which of these models are likely to be most effective for use in an e-learning tool that allows students to interact with a virtual patient. The broader goal is to utilise these models to generate personalised feedback based on the specific patient information requested by students and their active diagnostic hypotheses.
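
As an illustration only: with roughly 200 cases and five diagnostic labels, a cross-validated baseline classifier of the following shape is one plausible starting point. The feature layout and model choice are assumptions; the abstract does not name the specific models used.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# 208 abdominal-pain cases, one row each: numerically encoded clinical
# findings plus one of five diagnosis labels (column names are hypothetical).
cases = pd.read_csv("abdominal_pain_cases.csv")
X = cases.drop(columns=["diagnosis"])
y = cases["diagnosis"]

# With ~200 cases, honest cross-validation matters more than model choice;
# a random forest is just one reasonable baseline.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean 5-fold CV accuracy: {scores.mean():.2f}")
```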


Subject(s)
Decision Making , Education, Medical, Undergraduate , Machine Learning , Students, Medical , Abdominal Pain/diagnosis , Abdominal Pain/therapy , Clinical Competence , Feedback , Humans , Learning
11.
Stud Health Technol Inform ; 168: 57-64, 2011.
Article in English | MEDLINE | ID: mdl-21893912

ABSTRACT

INTRODUCTION: Electronic Health Record (EHR) systems are an increasingly important feature of the national healthcare system [1]. However, little research has investigated the impact this will have on medical students' learning. As part of an innovative technology platform for a new masters-level program in medicine, we are developing a student-centred EHR system for clinical education. A prototype was trialed with medical students over several weeks during 2010. This paper reports on the findings of the trial, which had the overall aim of assisting our understanding of how trainee doctors might use an EHR system for learning and communication in a clinical setting. BACKGROUND: In primary care and hospital settings, EHR systems offer potential benefits to medical students' learning: longitudinal tracking of clinical progress towards established learning objectives [2]; capacity to search across a substantial body of records [3]; integration with online medical databases [3]; and development of expertise in creating, accessing and managing high-quality EHRs [4]. While concerns have been raised that EHR systems may alter the interaction between teachers and students [3], and may negatively influence physician-patient communication [6], there is general consensus that the EHR is changing the current practice environment and that teaching practice needs to respond. METHODS: Final-year medical students on clinical placement at a large university teaching hospital were recruited for the trial. Following a four-week period of use, semi-structured interviews were conducted with 10 participants. Audio-recorded interviews were transcribed and the data analysed for emerging themes. Study participants were also surveyed about the importance of EHR systems in general, their familiarity with them, and their general perceptions of sharing patient records. CONCLUSIONS: Medical students in this pilot study identified a number of educational, practical and administrative advantages that the student-centred EHR system offered over their existing ad hoc procedures for recording patient encounters. Findings from this preliminary study point to the need to introduce students to EHR systems from their earliest clinical encounters, to instruct them in their use, and to closely integrate learning activities based on the EHR system with established learning objectives. Further research is required to evaluate the impact of student-centred EHR systems on learning outcomes.


Subject(s)
Education, Medical , Electronic Health Records , Students, Medical , Data Collection , Hospitals, Teaching , Humans , Teaching/methods , Victoria