1.
J Obstet Gynaecol India ; 74(3): 256-264, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38974743

ABSTRACT

Background: Multiple-choice questions (MCQs) can be used to assess higher-order skills and to build a question bank. Aim: To perform post-validation item analysis of MCQs constructed by medical faculty for formative assessment of final-year medical students, and to explore the association of the difficulty index (p value) and discrimination index (DI) with distractor efficiency (DE). Methods: An observational study involving 50 final-year MBBS students and 10 faculty members was conducted over one year (October 2020 to September 2021). After the faculty were trained in item analysis, an MCQ test was assembled from 25 peer-reviewed MCQs (five each from various subtopics of the subject). The test results were used to calculate the facility value (FV), DI, and DE of each item. Student and faculty feedback was also obtained on a five-point Likert scale. Results: The mean FV was 46.3 ± 19.3 and 64% of questions were difficult; the mean DI was 0.3 ± 0.1 and 92% of questions could differentiate between the high achievers' group (HAG) and the low achievers' group (LAG); the mean DE was 82% ± 19.8 and 48% of items had no non-functional distractors (NFDs). Conclusion: MCQs can be used to assess all levels of Bloom's taxonomy. Item analysis identified 23 of the 25 MCQs as suitable for the question bank.
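
The item statistics reported above (facility value, discrimination index between high- and low-achiever groups, and distractor efficiency) follow standard formulas. Below is a minimal Python sketch of those computations; the response matrix, the 27% group fraction, and the option count are illustrative assumptions, not data from the study.

```python
import numpy as np

def item_analysis(responses, key, n_options=4, group_frac=0.27):
    """responses: (students x items) matrix of chosen option indices; key: correct option per item."""
    responses = np.asarray(responses)
    key = np.asarray(key)
    correct = (responses == key).astype(float)           # 1 where the keyed option was chosen
    totals = correct.sum(axis=1)
    order = np.argsort(-totals)                           # rank students, best first
    g = max(1, int(round(group_frac * len(totals))))      # size of high/low achiever groups
    high, low = order[:g], order[-g:]

    stats = []
    for i in range(responses.shape[1]):
        fv = correct[:, i].mean() * 100                         # facility value, % correct
        di = correct[high, i].mean() - correct[low, i].mean()   # discrimination index
        counts = np.array([(responses[:, i] == o).sum() for o in range(n_options)])
        distractors = np.delete(counts, key[i])
        functional = (distractors / len(responses) >= 0.05).sum()   # chosen by >= 5% of students
        de = functional / len(distractors) * 100                    # distractor efficiency, %
        stats.append({"FV": fv, "DI": di, "DE": de})
    return stats

# Toy run: 6 students, 2 four-option items, keyed options 0 and 2.
demo = [[0, 2], [0, 2], [0, 1], [1, 2], [0, 3], [2, 0]]
print(item_analysis(demo, key=[0, 2]))
```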

3.
J Microbiol Biol Educ ; : e0004724, 2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38869278

ABSTRACT

Many 4-year public institutions face significant pedagogical challenges due to the high ratio of students to teaching team members. To address this issue, we developed a workflow in the programming language R to rapidly grade multiple-choice questions, adjust for errors, and grade answer-dependent style multiple-choice questions, thus shifting the teaching team's time commitment back to student interaction. We provide an example of answer-dependent style multiple-choice questions and demonstrate how the output allows for discrete analysis of questions based on various categories such as Fundamental Statements or Bloom's Taxonomy Levels. Additionally, we show how student demographics can be easily integrated to yield a holistic perspective on student performance in a course. The workflow offers dynamic grading opportunities for multiple-choice questions and versatility through its adaptability to assessment analyses. This approach to multiple-choice questions allows instructors to pinpoint factors affecting student performance and respond to changes to foster a healthy learning environment.
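
The abstract describes an R workflow; as a rough Python analogue (the one language used for sketches here), the snippet below shows scripted grading with an error adjustment and an answer-dependent item whose key depends on the student's previous response. The question identifiers, keys, and credit rule are all invented.

```python
import csv, io

ANSWER_KEY = {"q1": "B", "q2": {"A", "C"}}   # q2: either option accepted after an error adjustment

def grade(row):
    score = 0
    score += row["q1"] == ANSWER_KEY["q1"]
    score += row["q2"] in ANSWER_KEY["q2"]
    # answer-dependent item: q3 is correct only relative to the student's q2 choice
    expected_q3 = "D" if row["q2"] == "A" else "C"
    score += row["q3"] == expected_q3
    return score

# Toy response file for three students.
sample = io.StringIO("student,q1,q2,q3\ns1,B,A,D\ns2,B,C,D\ns3,A,C,C\n")
for row in csv.DictReader(sample):
    print(row["student"], grade(row))
```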

4.
Cureus ; 16(5): e59778, 2024 May.
Article in English | MEDLINE | ID: mdl-38846235

ABSTRACT

In recent years, healthcare education providers have highlighted a conscious shift towards increasing clinical competence via assessments that promote more active learning. Despite this, multiple-choice questions remain among the most prevalent forms of assessment. The literature justifies multiple-choice testing by its high levels of validity and reliability. Education providers also benefit from the lower resource and cost requirements of question development and from the ease of adapting questions to accommodate neurodiversity. However, when these (and other) variables are tested via a structured approach to their utility, it becomes clear that the advantages depend largely on the quality of the questions written, the level of clinical competence learners are expected to attain, and the extent to which confounding variables such as differential attainment are addressed. This review discusses attempts at improving the utility of multiple-choice question testing in modern healthcare curricula, as well as the impact of these modifications on performance.

5.
Postgrad Med J ; 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38840505

ABSTRACT

ChatGPT's role in creating multiple-choice questions (MCQs) is growing, but the validity of these artificial-intelligence-generated questions is unclear. This literature review was conducted to address the urgent need to understand the application of ChatGPT in generating MCQs for medical education. Following the database search and screening of 1920 studies, we found 23 relevant studies. We extracted the prompts used for MCQ generation and assessed the validity evidence of the MCQs. The findings showed that prompts varied, including referencing specific exam styles and adopting specific personas, which aligns with recommended prompt-engineering tactics. The validity evidence covered various domains and showed mixed accuracy rates, with some studies indicating quality comparable to human-written questions and others highlighting differences in difficulty and discrimination levels, alongside a significant reduction in question-creation time. Despite this efficiency, we highlight the necessity of careful review and the need for further research to optimize the use of ChatGPT in question generation. Main messages: Ensure high-quality outputs by using well-designed prompts; medical educators should prioritize detailed, clear ChatGPT prompts when generating MCQs. Avoid using ChatGPT-generated MCQs directly in examinations without thorough review, to prevent inaccuracies and ensure relevance. Leverage ChatGPT's potential to streamline the test development process, enhancing efficiency without compromising quality.

6.
Adv Med Educ Pract ; 15: 393-400, 2024.
Article in English | MEDLINE | ID: mdl-38751805

ABSTRACT

Introduction: This research investigated the capabilities of ChatGPT-4 compared to medical students in answering MCQs using the revised Bloom's Taxonomy as a benchmark. Methods: A cross-sectional study was conducted at The University of the West Indies, Barbados. ChatGPT-4 and medical students were assessed on MCQs from various medical courses using computer-based testing. Results: The study included 304 MCQs. Students demonstrated good knowledge, with 78% correctly answering at least 90% of the questions. However, ChatGPT-4 achieved a higher overall score (73.7%) compared to students (66.7%). Course type significantly affected ChatGPT-4's performance, but revised Bloom's Taxonomy levels did not. A detailed association check between program levels and Bloom's taxonomy levels for correct answers by ChatGPT-4 showed a highly significant correlation (p<0.001), reflecting a concentration of "remember-level" questions in preclinical and "evaluate-level" questions in clinical courses. Discussion: The study highlights ChatGPT-4's proficiency in standardized tests but indicates limitations in clinical reasoning and practical skills. This performance discrepancy suggests that the effectiveness of artificial intelligence (AI) varies based on course content. Conclusion: While ChatGPT-4 shows promise as an educational tool, its role should be supplementary, with strategic integration into medical education to leverage its strengths and address limitations. Further research is needed to explore AI's impact on medical education and student performance across educational levels and courses.
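
The "association check" between programme level and Bloom's taxonomy level for ChatGPT-4's correct answers is typically a chi-square test of independence. A hedged sketch follows; the contingency counts are made up and only the procedure is shown, not the study's data.

```python
from scipy.stats import chi2_contingency

# Invented counts of ChatGPT-4's correct answers by programme level and Bloom's level.
#                 remember  understand  apply  evaluate
table = [[40,       25,        10,      5],   # preclinical
         [10,       20,        30,     35]]   # clinical

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")   # small p suggests level and Bloom's tier are associated
```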

7.
BMC Med Educ ; 24(1): 569, 2024 May 24.
Article in English | MEDLINE | ID: mdl-38790034

ABSTRACT

BACKGROUND: Online question banks are the most widely used education resource amongst medical students. Despite this there is an absence of literature outlining how and why they are used by students. Drawing on Deci and Ryan's self-determination theory, our study aimed to explore why and how early-stage medical students use question banks in their learning and revision strategies. METHODS: The study was conducted at Newcastle University Medical School (United Kingdom and Malaysia). Purposive, convenience and snowball sampling of year two students were employed. Ten interviews were conducted. Thematic analysis was undertaken iteratively, enabling exploration of nascent themes. Data collection ceased when no new perspectives were identified. RESULTS: Students' motivation to use question banks was predominantly driven by extrinsic motivators, with high-stakes exams and fear of failure being central. Their convenience and perceived efficiency promoted autonomy and thus motivation. Rapid feedback cycles and design features consistent with gamification were deterrents to intrinsic motivation. Potentially detrimental patterns of question bank use were evident: cueing, avoidance and memorising. Scepticism regarding veracity of question bank content was absent. CONCLUSIONS: We call on educators to provide students with guidance about potential pitfalls associated with question banks and to reflect on potential inequity of access to these resources.


Subjects
Motivation, Qualitative Research, Medical Students, Humans, Medical Students/psychology, Malaysia, United Kingdom, Educational Measurement, Female, Undergraduate Medical Education, Male, Internet
8.
BMC Med Educ ; 24(1): 599, 2024 May 30.
Article in English | MEDLINE | ID: mdl-38816855

ABSTRACT

BACKGROUND: Item difficulty plays a crucial role in assessing students' understanding of the concept being tested. The difficulty of each item needs to be carefully adjusted to ensure the achievement of the evaluation's objectives. Therefore, this study aimed to investigate whether repeated item development training for medical school faculty improves the accuracy of predicting item difficulty in multiple-choice questions. METHODS: A faculty development program was implemented to enhance the prediction of each item's difficulty index, ensure the absence of item defects, and maintain the general principles of item development. The interrater reliability between the predicted, actual, and corrected item difficulty was assessed before and after the training, using either the kappa index or the correlation coefficient, depending on the characteristics of the data. A total of 62 faculty members participated in the training. Their predictions of item difficulty were compared with the analysis results of 260 items taken by 119 fourth-year medical students in 2016 and 316 items taken by 125 fourth-year medical students in 2018. RESULTS: Before the training, significant agreement between the predicted and actual item difficulty indices was observed for only one medical subject, Cardiology (K = 0.106, P = 0.021). However, after the training, significant agreement was noted for four subjects: Internal Medicine (K = 0.092, P = 0.015), Cardiology (K = 0.318, P = 0.021), Neurology (K = 0.400, P = 0.043), and Preventive Medicine (r = 0.577, P = 0.039). Furthermore, significant agreement was observed between the predicted and actual difficulty indices across all subjects when analyzing the average difficulty of all items (r = 0.144, P = 0.043). Regarding the actual difficulty index by subject, Neurology exceeded the desired difficulty range of 0.45-0.75 in 2016. By 2018, however, all subjects fell within this range. CONCLUSION: Repeated item development training, which includes predicting each item's difficulty index, can enhance faculty members' ability to predict and adjust item difficulty accurately. To ensure that the difficulty of the examination aligns with its intended purpose, item development training can be beneficial. Further studies on faculty development are necessary to explore these benefits more comprehensively.


Subjects
Educational Measurement, Medical Faculty, Humans, Educational Measurement/methods, Reproducibility of Results, Medical Students, Undergraduate Medical Education, Male, Female
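
The two agreement statistics reported in the study above (a kappa index for categorical difficulty and a correlation coefficient for continuous difficulty) can be sketched as follows. The difficulty bins built around the 0.45-0.75 desired range and the example values are assumptions for illustration, not the study's data.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score
from scipy.stats import pearsonr

def to_category(p_values, low=0.45, high=0.75):
    # 0 = harder than desired, 1 = within the desired range, 2 = easier than desired
    return np.digitize(p_values, [low, high])

predicted = np.array([0.40, 0.55, 0.70, 0.80, 0.60, 0.50])   # faculty predictions (invented)
actual    = np.array([0.35, 0.60, 0.65, 0.85, 0.72, 0.44])   # observed item difficulty (invented)

kappa = cohen_kappa_score(to_category(predicted), to_category(actual))   # categorical agreement
r, p = pearsonr(predicted, actual)                                       # continuous agreement
print(f"kappa={kappa:.2f}, r={r:.2f} (p={p:.3f})")
```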
9.
BMC Med Educ ; 24(1): 448, 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38658906

ABSTRACT

OBJECTIVES: This study aimed to investigate the utility of the RAND/UCLA appropriateness method (RAM) in validating expert consensus-based multiple-choice questions (MCQs) on electrocardiograms (ECGs). METHODS: Following the RAM user's manual, nine panelists comprising various experts who routinely handle ECGs were asked to reach a consensus in three phases: a preparatory phase (round 0), an online test phase (round 1), and a face-to-face expert panel meeting (round 2). In round 0, the objectives and timeline of the study were explained to the nine expert panelists along with a summary of relevant literature. In round 1, 100 ECG questions prepared by two skilled cardiologists were answered, and the success rate was calculated by dividing the number of correct answers by 9. Furthermore, the questions were stratified into "Appropriate," "Discussion," or "Inappropriate" according to the median score and interquartile range (IQR) of the appropriateness ratings of the nine panelists. In round 2, the validity of the 100 ECG questions was discussed in an expert panel meeting in light of the round 1 results and finally reassessed as "Appropriate," "Candidate," "Revision," or "Defer." RESULTS: In round 1, the average success rate of the nine experts was 0.89. Using the median score and IQR, 54 questions were classified as "Discussion." In the round 2 expert panel meeting, 23% of the original 100 questions were ultimately deemed inappropriate, although they had been prepared by two skilled cardiologists. Most of the 46 questions categorized as "Appropriate" by median score and IQR in round 1 were still considered "Appropriate" after round 2 (44/46, 95.7%). CONCLUSIONS: The use of the median score and IQR allowed for a more objective determination of question validity. The RAM may help select appropriate questions, contributing to the preparation of higher-quality tests.


Subjects
Electrocardiography, Humans, Consensus, Reproducibility of Results, Clinical Competence/standards, Educational Measurement/methods, Cardiology/standards
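
A rough sketch of the round-1 rule described above: each question's nine appropriateness ratings are summarised by median and IQR and sorted into "Appropriate", "Discussion", or "Inappropriate". The cut-offs are assumed from the usual RAND/UCLA convention (median 7-9 appropriate, 1-3 inappropriate, a wide IQR treated as disagreement); the abstract does not state the study's exact thresholds.

```python
import numpy as np

def classify(ratings):
    """ratings: nine panel ratings on a 1-9 appropriateness scale."""
    ratings = np.asarray(ratings)
    median = np.median(ratings)
    q1, q3 = np.percentile(ratings, [25, 75])
    wide_iqr = (q3 - q1) > 2          # treat a wide spread as panel disagreement
    if median >= 7 and not wide_iqr:
        return "Appropriate"
    if median <= 3 and not wide_iqr:
        return "Inappropriate"
    return "Discussion"

print(classify([8, 9, 7, 8, 9, 8, 7, 9, 8]))   # -> Appropriate
print(classify([2, 9, 5, 8, 3, 7, 4, 9, 6]))   # -> Discussion
```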
10.
Eur J Dent Educ ; 2024 Mar 08.
Article in English | MEDLINE | ID: mdl-38456591

ABSTRACT

INTRODUCTION: The effectiveness of multiple-choice questions (MCQs) in dental education is pivotal to student performance and knowledge advancement. However, their optimal implementation requires exploration to enhance the benefits. MATERIALS AND METHODS: An educational tool incorporating MCQs was administered from the 5th to the 10th semester in a dental curriculum. After each MCQ, the students filled out a questionnaire linked to the learning management system. Four cohorts across four semesters generated 2300 data points, analysed by Spearman correlation and mixed-model regression analysis. RESULTS: The analysis demonstrated a significant correlation between early exam preparation and improved student performance. Independent study hours and lecture attendance emerged as significant predictors, accounting for approximately 10.27% of the variance in student performance on MCQs. While the number of MCQs taken showed an inverse relationship with study hours, the perceived clarity of these questions correlated positively with academic achievement. CONCLUSION: MCQs have proven effective in enhancing student learning and knowledge within the discipline. Our analysis underscores the important role of independent study and consistent lecture attendance in positively influencing MCQ scores. The study provides valuable insights into using MCQs as a practical tool for dental student learning. Moreover, the clarity of assessment tools such as MCQs remains pivotal in influencing student outcomes. This study underscores the multifaceted nature of learning experiences in dental education and the importance of bridging the gap between student expectations and actual performance.
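
The two analyses named in the abstract, a Spearman rank correlation and a regression-based estimate of variance explained, can be sketched as below. The variables and synthetic data are invented, and ordinary least squares stands in here for the study's mixed-model regression.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LinearRegression

# Synthetic data standing in for the questionnaire responses.
rng = np.random.default_rng(0)
study_hours = rng.uniform(0, 20, 100)
lectures_attended = rng.integers(0, 15, 100)
mcq_score = 40 + 1.2 * study_hours + 0.8 * lectures_attended + rng.normal(0, 10, 100)

rho, p = spearmanr(study_hours, mcq_score)                       # rank correlation
X = np.column_stack([study_hours, lectures_attended])
r2 = LinearRegression().fit(X, mcq_score).score(X, mcq_score)    # share of variance explained
print(f"spearman rho={rho:.2f} (p={p:.3f}), R^2={r2:.2%}")
```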

11.
BMC Med Educ ; 24(1): 354, 2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38553693

ABSTRACT

BACKGROUND: Writing multiple choice questions (MCQs) for medical exams is challenging. It requires extensive medical knowledge, time, and effort from medical educators. This systematic review focuses on the application of large language models (LLMs) in generating medical MCQs. METHODS: The authors searched for studies published up to November 2023. Search terms focused on LLM-generated MCQs for medical examinations. Non-English studies, studies outside the year range, and studies not focusing on AI-generated multiple-choice questions were excluded. MEDLINE was used as the search database. Risk of bias was evaluated using a tailored QUADAS-2 tool. RESULTS: Overall, eight studies published between April 2023 and October 2023 were included. Six studies used ChatGPT 3.5, while two employed GPT-4. Five studies showed that LLMs can produce competent questions valid for medical exams. Three studies used LLMs to write medical questions but did not evaluate the validity of the questions. One study conducted a comparative analysis of different models, and another compared LLM-generated questions with those written by humans. All studies presented faulty questions that were deemed inappropriate for medical exams, and some questions required additional modification in order to qualify. Two studies were at high risk of bias. CONCLUSIONS: LLMs can be used to write MCQs for medical examinations, but their limitations cannot be ignored. Further study in this field is essential and more conclusive evidence is needed. Until then, LLMs may serve as a supplementary tool for writing medical examinations. The study followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines.


Subjects
Knowledge, Language, Humans, Factual Databases, Writing
12.
Med Teach ; : 1-3, 2024 Feb 10.
Article in English | MEDLINE | ID: mdl-38340312

ABSTRACT

WHAT IS THE EDUCATIONAL CHALLENGE?: A fundamental challenge in medical education is creating high-quality, clinically relevant multiple-choice questions (MCQs). ChatGPT-based automatic item generation (AIG) methods need well-designed prompts, but their use is hindered by the time-consuming process of copying and pasting, a lack of know-how among medical teachers, and the generalist nature of standard ChatGPT, which often lacks medical context. WHAT ARE THE PROPOSED SOLUTIONS?: The Case-based MCQ Generator, a custom GPT, addresses these challenges. It was built with GPT Builder, a platform designed by OpenAI for customizing ChatGPT to meet specific needs, to allow users to generate case-based MCQs. With this tool, which is free to those who hold a ChatGPT Plus subscription, health professions educators can easily select a prompt, input a learning objective or item-specific test point, and generate clinically relevant questions. WHAT ARE THE POTENTIAL BENEFITS TO A WIDER GLOBAL AUDIENCE?: It enhances the efficiency of MCQ generation and ensures the generation of contextually relevant questions, surpassing the capabilities of standard ChatGPT. It streamlines the MCQ creation process by integrating prompts published in the medical education literature, eliminating the need for manual prompt input. WHAT ARE THE NEXT STEPS?: Future development aims at sustainability and at addressing ethical and accessibility issues. It requires regular updates, integration of new prompts from emerging health professions education literature, and a supportive digital ecosystem around the tool. Accessibility, especially for educators in low-resource countries, is vital and demands alternative access models to overcome financial barriers.

13.
Eur J Clin Pharmacol ; 80(5): 729-735, 2024 May.
Article in English | MEDLINE | ID: mdl-38353690

ABSTRACT

PURPOSE: Artificial intelligence, specifically large language models such as ChatGPT, offers potentially valuable benefits in question (item) writing. This study aimed to determine the feasibility of generating case-based multiple-choice questions using ChatGPT in terms of item difficulty and discrimination levels. METHODS: This study involved 99 fourth-year medical students who participated in a rational pharmacotherapy clerkship based on the WHO 6-Step Model. In response to a prompt that we provided, ChatGPT generated ten case-based multiple-choice questions on hypertension. Following an expert panel, two of these multiple-choice questions were incorporated into a medical school exam without any changes. Based on the administration of the test, we evaluated their psychometric properties, including item difficulty, item discrimination (point-biserial correlation), and the functionality of the options. RESULTS: Both questions exhibited acceptable levels of point-biserial correlation, above the threshold of 0.30 (0.41 and 0.39). However, one question had three non-functional options (options chosen by fewer than 5% of the exam participants) while the other had none. CONCLUSIONS: The findings showed that the questions can effectively differentiate between students who perform at high and low levels, which also points to the potential of ChatGPT as an artificial intelligence tool in test development. Future studies may use the prompt to generate items and enhance the external validity of these results by gathering data from diverse institutions and settings.


Subjects
Hypertension, Medical Students, Humans, Artificial Intelligence, Medical Schools
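
The two psychometric checks applied to the ChatGPT-generated items above, the point-biserial correlation against the 0.30 threshold and the flagging of non-functional options chosen by fewer than 5% of examinees, are sketched below with invented response data.

```python
import numpy as np
from scipy.stats import pointbiserialr

# Invented exam data: total scores and whether each examinee got this item right.
total_scores = np.array([18, 15, 14, 12, 11, 10, 9, 8, 7, 5])
item_correct = np.array([1,  1,  1,  1,  0,  1,  0, 0, 0, 0])
rpb, p = pointbiserialr(item_correct, total_scores)
print(f"point-biserial = {rpb:.2f} (acceptability threshold 0.30)")

# Option functionality: flag distractors chosen by fewer than 5% of examinees.
choices = np.array(list("AAABBCAACC"))            # option picked by each examinee; "A" assumed correct
freq = {opt: np.mean(choices == opt) for opt in "ABCD"}
non_functional = [opt for opt, f in freq.items() if opt != "A" and f < 0.05]
print("non-functional options:", non_functional)
```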
14.
J Dent Educ ; 88(5): 533-543, 2024 May.
Article in English | MEDLINE | ID: mdl-38314889

ABSTRACT

PURPOSE: Item analysis of multiple-choice questions (MCQs) is an essential tool for identifying items that can be stored, revised, or discarded to build a quality MCQ bank. This study analyzed MCQs using item analysis to develop a pool of valid and reliable items, and investigated stakeholders' perceptions of MCQs in a written summative assessment (WSA) based on this analysis. METHODS: In this descriptive study, 55 questions from each WSA in preclinical removable prosthodontics for fourth-year undergraduate dentistry students from 2016 to 2019 were subjected to item analysis. Items were categorized according to their difficulty index (DIF I) and discrimination index (DI). Students (2021-2022) were then assessed using this question bank. Students' perceptions of the assessment and feedback from faculty members were collected using a questionnaire with a five-point Likert scale. RESULTS: When both indices (DIF I and DI) were combined, 144 of the 220 items (65.5%) were retained in the question bank, 66 (30%) required revision before incorporation, and only 10 (4.5%) were discarded. The mean DIF I and DI values for the 220 MCQs were 69% (standard deviation [SD] = 19) and 0.22 (SD = 0.16), respectively. The mean questionnaire scores from students and the feedback scores from faculty members ranged from 3.50 to 4.04 and from 4 to 5, respectively, indicating that stakeholders tended to agree and strongly agree, respectively, with the proposed statements. CONCLUSION: This study assisted the prosthodontics department in creating a set of prevalidated questions with known difficulty and discrimination capacity.


Subjects
Dental Education, Educational Measurement, Prosthodontics, Prosthodontics/education, Humans, Dental Education/methods, Educational Measurement/methods, Dental Students/psychology, Surveys and Questionnaires, Stakeholder Participation
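
A small sketch of the store / revise / discard triage described in the study above. The cut-offs used here (DIF I between 30% and 80%, DI of at least 0.15) are conventional values assumed for illustration; the abstract does not give the department's exact thresholds.

```python
def triage(dif_percent, di):
    """Classify an item from its difficulty index (%) and discrimination index."""
    good_difficulty = 30 <= dif_percent <= 80
    good_discrimination = di >= 0.15
    if good_difficulty and good_discrimination:
        return "retain"
    if good_difficulty or good_discrimination:
        return "revise"
    return "discard"

for item in [(69, 0.22), (85, 0.20), (90, 0.05)]:
    print(item, "->", triage(*item))
```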
15.
Med Teach ; : 1-8, 2024 Jan 26.
Article in English | MEDLINE | ID: mdl-38277134

ABSTRACT

Peer-led assessment (PLA) has gained increasing prominence within health professions education as an effective means of engaging learners in the process of assessment writing and practice. Involving students in various stages of the assessment lifecycle, including item writing, quality assurance, and feedback, not only facilitates the creation of high-quality item banks with minimal faculty input but also promotes the development of students' assessment literacy and fosters their growth as teachers. The advantages of involving students in the generation of assessments are evident from a pedagogical standpoint, benefiting both students and faculty. However, faculty members may face uncertainty when it comes to implementing such approaches effectively. To address this concern, this paper presents twelve tips that offer guidance on important considerations for the successful implementation of peer-led assessment schemes in the context of health professions education.

16.
Article in English | MEDLINE | ID: mdl-38224412

ABSTRACT

Given the high prevalence of multiple-choice examinations with formula scoring in medical training, several studies have tried to identify factors beyond students' degree of knowledge that influence their response patterns. This study aims to measure the effect of students' attitudes towards risk and ambiguity on their number of correct, wrong, and blank answers. In October 2018, 233 third-year medical students from the Faculty of Medicine of the University of Porto, in Porto, Portugal, completed a questionnaire assessing their attitudes towards risk and ambiguity and their aversion to ambiguity in medicine. Simple and multiple regression models and the respective regression coefficients were used to measure the association between the students' attitudes and their answers in two examinations taken in June 2018. Having an intermediate level of ambiguity aversion in medicine (as opposed to a very high or low level) was associated with a significant increase in the number of correct answers and a decrease in the number of blank answers in the first examination. In the second examination, high levels of ambiguity aversion in medicine were associated with a decrease in the number of wrong answers. Attitude towards risk, tolerance for ambiguity, and gender showed no significant association with the number of correct, wrong, or blank answers in either examination. Students' ambiguity aversion in medicine is correlated with their performance in multiple-choice examinations with negative marking. We therefore suggest planning and implementing counselling sessions with medical students on the possible impact of ambiguity aversion on their performance in multiple-choice questions with negative marking.

17.
Eur J Dent Educ ; 28(2): 655-662, 2024 May.
Article in English | MEDLINE | ID: mdl-38282273

ABSTRACT

Multiple-choice questions (MCQs) are the most popular item type used in knowledge-based assessments in undergraduate and postgraduate healthcare education. MCQs allow candidates' knowledge to be assessed across a broad range of learning outcomes in a single assessment. Single-best-answer (SBA) MCQs are the most versatile and commonly used format. Although writing MCQs may seem straightforward, producing good-quality MCQs is challenging and warrants a range of quality checks before an item is deemed suitable for inclusion in an assessment. Like all assessments, MCQ-based examinations must be aligned with the learning outcomes and learning opportunities provided to the students. This paper provides evidence-based guidance on the effective use of MCQs in student assessments, not only to make decisions regarding student progression but also to build an academic environment that promotes assessment as a driver for learning. Practical tips are provided for producing authentic MCQ items, along with appropriate pre- and post-assessment reviews, the use of standard setting, and psychometric evaluation of MCQ-based assessments. Institutions need to develop an academic culture that fosters transparency, openness, equality and inclusivity. In line with contemporary educational principles, teamwork amongst teaching faculty, administrators and students is essential to establish effective learning and assessment practices.


Subjects
Dental Education, Educational Measurement, Humans, Students, Learning, Writing
18.
Curr Pharm Teach Learn ; 16(3): 174-177, 2024 03.
Article in English | MEDLINE | ID: mdl-38218657

ABSTRACT

INTRODUCTION: The purpose of this study was to describe the effect on question performance of converting multiple choice questions (MCQs) that include an "all of the above" (AOTA) answer option to a "select all that apply" (SATA) question type. METHODS: A summative assessment at the end of the first professional pharmacy year comprised approximately 50 multiple choice questions covering material from all courses taught. Eight questions contained AOTA answer options and were converted to SATA items in the subsequent year by eliminating the AOTA option and including the words "select all that apply" in the stem. The majority of the other questions on the exam remained the same between the two years. Item difficulty, item discrimination, point biserial, and distractor efficiency were used to compare the MCQs on the exams in the two years. RESULTS: The AOTA questions were significantly easier and less discriminating than the SATA items. The performance of the remaining questions on the exam did not differ between the years. Distractor efficiency increased significantly when the questions were converted to SATA items. CONCLUSIONS: MCQs with AOTA answer options are discouraged because poor item construction results in poor discrimination between high- and low-performing students. AOTA questions are easily converted to the SATA format, and the result is a more difficult and more discriminating question in which every answer option is chosen, preventing students from easily guessing the correct answer.


Subjects
Educational Measurement, Medical Students, Succinimides, Sulfides, Humans, Educational Measurement/methods
19.
Healthcare (Basel) ; 12(2)2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38275562

ABSTRACT

This study investigates the effectiveness of the Script Concordance Test (SCT) in enhancing clinical reasoning skills within paramedic education. Focusing on the Medical University of Lublin, we evaluated the SCT's application across two cohorts of paramedic students, aiming to understand its potential to improve decision-making skills in emergency scenarios. Our approach, informed by Van der Vleuten's assessment framework, revealed that while the SCT's correlation with traditional methods like multiple-choice questions (MCQs) was limited, its formative nature significantly contributed to improved performance in summative assessments. These findings suggest that the SCT can be an effective tool in paramedic training, particularly in strengthening cognitive abilities critical for emergency responses. The study underscores the importance of incorporating innovative assessment tools like SCTs in paramedic curricula, not only to enhance clinical reasoning but also to prepare students for effective emergency responses. Our research contributes to the ongoing efforts in refining paramedic education and highlights the need for versatile assessment strategies in preparing future healthcare professionals for diverse clinical challenges.

20.
Biochem Mol Biol Educ ; 52(2): 156-164, 2024.
Article in English | MEDLINE | ID: mdl-37929789

ABSTRACT

Retrieval practice is an evidence-based approach to teaching; here, we evaluate the use of PeerWise for embedding retrieval practice into summative assessment. PeerWise allows anonymous authoring, sharing, answering, rating, and feedback on peer-authored multiple choice questions. PeerWise was embedded as a summative assessment in a large first-year introductory biochemistry module. Engagement with five aspects of the tool was evaluated against student performance in coursework, the exam, and overall module outcome. Results indicated a weak-to-moderate positive but significant correlation between engagement with PeerWise and assessment performance. Student feedback showed PeerWise had a polarizing effect; the majority recognized its benefits as a learning and revision tool, but a minority strongly disliked it, complaining of a lack of academic moderation and of questions irrelevant to the module. PeerWise can be considered a helpful learning tool for some students and a means of embedding retrieval practice into summative assessment.


Subjects
Educational Measurement, Students, Humans, Educational Measurement/methods, Learning, Biochemistry, Feedback, Teaching