Results 1 - 20 of 63
1.
BMC Med Educ ; 22(1): 178, 2022 Mar 15.
Article in English | MEDLINE | ID: covidwho-1745460

ABSTRACT

BACKGROUND: Objective Structured Clinical Examinations (OSCEs) are a common form of assessment used across medical schools in the UK to assess clinical competence and practical skills and are traditionally held in an in-person format. Medical students have often prepared for such exams through in-person peer-assisted learning (PAL); however, due to the COVID-19 pandemic, many in-person teaching sessions transitioned to online formats. There is currently a paucity of research on the utility of virtual PAL OSCE sessions, so we carried out a national pilot study to determine the feasibility of virtual OSCE teaching via feedback from participants and examiners. METHODS: A total of 85 students from 19 UK-based medical schools, with eight students based internationally, attended a series of online OSCE workshops delivered via Zoom®. All students and examiners completed a feedback questionnaire at the end of each session, including questions on pre- and post-workshop confidence in three OSCE domains: history-taking, communication and data interpretation. A five-point Likert scale was used to self-report confidence, and the results were analysed using the Mann-Whitney U test after assessing for normality with the Shapiro-Wilk test. RESULTS: Student feedback showed an increase in confidence for all three OSCE domains after each event (p < 0.001), with 69.4% agreeing or strongly agreeing that online OSCE sessions could sufficiently prepare them for in-person exams. Questionnaire feedback revealed that 97.6% of students and 86.7% of examiners agreed that virtual OSCE teaching would be useful for preparing for in-person OSCE examinations after the pandemic. CONCLUSION: Most participants in the virtual OSCE sessions reported an improvement in their confidence in history-taking, communication and data interpretation skills.
Of the participants and examiners that had also experienced in-person OSCE examinations, the majority also reported that they found virtual OSCE sessions to be as engaging and as interactive as in-person teaching. This study has demonstrated that virtual OSCE workshops are a feasible option with the potential to be beneficial beyond the pandemic. However, more studies are required to assess the overall impact on student learning and to determine the value of virtual OSCE workshops on exam performance.
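As an aside on the analysis described in this abstract: a Mann-Whitney U comparison of pre- and post-workshop Likert ratings can be sketched as below. The ratings are invented for illustration, and this hand-rolled version returns only the U statistic (in practice one would use `scipy.stats.mannwhitneyu`, which also provides a p-value).

```python
import numpy as np

def mann_whitney_u(x, y):
    """U statistic for sample x vs. y, using average ranks for ties."""
    combined = np.concatenate([x, y])
    order = combined.argsort(kind="stable")
    sorted_vals = combined[order]
    ranks = np.empty(len(combined))
    i = 0
    while i < len(sorted_vals):                  # assign average ranks to tied values
        j = i
        while j < len(sorted_vals) and sorted_vals[j] == sorted_vals[i]:
            j += 1
        ranks[order[i:j]] = (i + j + 1) / 2.0    # mean of ranks i+1 .. j
        i = j
    r1 = ranks[: len(x)].sum()
    return r1 - len(x) * (len(x) + 1) / 2.0

# Hypothetical 5-point Likert confidence ratings before and after a workshop
pre = np.array([2, 3, 2, 3, 2, 4, 3, 2])
post = np.array([4, 4, 3, 5, 4, 5, 4, 3])
print("U =", mann_whitney_u(pre, post))
```

A small U (relative to the maximum n1 × n2) indicates that the pre-workshop ratings tend to fall below the post-workshop ones.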


Subject(s)
COVID-19 , Students, Medical , COVID-19/epidemiology , Educational Measurement/methods , Humans , Pandemics , Pilot Projects
2.
Phys Eng Sci Med ; 45(1): 273-278, 2022 Mar.
Article in English | MEDLINE | ID: covidwho-1637994

ABSTRACT

The COVID-19 pandemic has caused a shift from on-campus to remote online examinations, which are usually difficult to invigilate. Closed-ended question formats, such as true-false (TF), are particularly suited to these examination conditions, as they allow automatic marking by computer software. While previous studies have reported the score characteristics of TF questions in conventional supervised examinations, this study investigates the efficacy of using TF questions in online, unsupervised examinations at the undergraduate level in Biomedical Engineering. We examine the TF and other question-type scores of 57 students across three examinations held in 2020 under online, unsupervised conditions. Our analysis shows a significantly larger coefficient of variation (CV) in scores for TF questions (42.7%) than for other question types (22.3%). The high CV in TF questions may be explained by different answering strategies among students, with 13.3 ± 17.2% of TF questions left unanswered (zero marks) and 16.4 ± 11.5% of TF questions guessed incorrectly (negative marks awarded). In unsupervised, open-book examinations, where sharing of answers among students is a potential risk, questions that induce a larger variation in responses may be desirable for differentiating among students. We also observed a significant relationship (r = 0.64, p < 0.05) between TF scores and overall subject scores, indicating that TF questions are an effective predictor of overall student performance. Our results from this initial analysis suggest that TF questions are useful for assessing biomedical-themed content in online, unsupervised examinations and are encouraging for their ongoing use in future assessments.
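The coefficient-of-variation comparison at the centre of this abstract is straightforward to sketch; the scores below are invented for illustration, not taken from the study.

```python
import numpy as np

def coeff_of_variation(scores):
    """CV (%) = sample standard deviation / mean x 100."""
    return scores.std(ddof=1) / scores.mean() * 100.0

# Hypothetical per-student percentage scores on TF vs. other question types
tf_scores = np.array([80., 35., 60., 95., 20., 70., 50., 88.])
other_scores = np.array([65., 55., 60., 75., 50., 70., 62., 68.])

print(f"TF CV:    {coeff_of_variation(tf_scores):.1f}%")
print(f"Other CV: {coeff_of_variation(other_scores):.1f}%")
```

A larger CV for TF questions, as in the study, indicates that they spread students out more relative to the average score.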


Subject(s)
Biomedical Engineering , COVID-19 , Educational Measurement/methods , Humans , Pandemics , SARS-CoV-2
4.
Anesth Analg ; 133(5): 1331-1341, 2021 Nov 01.
Article in English | MEDLINE | ID: covidwho-1566542

ABSTRACT

In 2020, the coronavirus disease 2019 (COVID-19) pandemic interrupted the administration of the APPLIED Examination, the final part of the American Board of Anesthesiology (ABA) staged examination system for initial certification. In response, the ABA developed, piloted, and implemented an Internet-based "virtual" form of the examination to allow administration of both components of the APPLIED Exam (Standardized Oral Examination and Objective Structured Clinical Examination) when it was impractical and unsafe for candidates and examiners to travel and have in-person interactions. This article describes the development of the ABA virtual APPLIED Examination, including its rationale, examination format, technology infrastructure, candidate communication, and examiner training. Although the logistics are formidable, we report a methodology for successfully introducing a large-scale, high-stakes, 2-element, remote examination that replicates previously validated assessments.


Subject(s)
Anesthesiology/education , COVID-19/epidemiology , Certification/methods , Computer-Assisted Instruction/methods , Educational Measurement/methods , Specialty Boards , Anesthesiology/standards , COVID-19/prevention & control , Certification/standards , Clinical Competence/standards , Computer-Assisted Instruction/standards , Educational Measurement/standards , Humans , Internship and Residency/methods , Internship and Residency/standards , Specialty Boards/standards , United States/epidemiology
5.
Lang Speech Hear Serv Sch ; 52(3): 769-775, 2021 07 07.
Article in English | MEDLINE | ID: covidwho-1545676

ABSTRACT

Purpose: The COVID-19 pandemic has necessitated a quick shift to virtual speech-language services; however, only a small percentage of speech-language pathologists (SLPs) had previously engaged in telepractice. The purpose of this clinical tutorial is (a) to describe how the Early Language and Literacy Acquisition in Children with Hearing Loss study, a longitudinal study involving speech-language assessment with children with and without hearing loss, transitioned from in-person to virtual assessment and (b) to provide tips for optimizing virtual assessment procedures. Method: We provide an overview of our decision making during the transition to virtual assessment. Additionally, we report on a pilot study that calculated test-retest reliability from in-person to virtual assessment for a subset of our preschool-age participants. Results: Our pilot study revealed that most speech-language measures had high or adequate test-retest reliability when administered in a virtual environment. Where low reliability occurred, the measures involved were generally timed ones. Conclusions: Speech-language assessment can be conducted successfully in a virtual environment for preschool children with hearing loss. We provide suggestions for clinicians to consider when preparing for virtual assessment sessions. Supplemental Material: https://doi.org/10.23641/asha.14787834
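The abstract does not specify which reliability coefficient was used; a Pearson correlation between in-person and virtual administrations is one common choice and can be sketched as below, with invented scores for illustration.

```python
import numpy as np

# Hypothetical in-person (test) and virtual (retest) scores on one measure,
# one entry per preschool participant
in_person = np.array([88., 72., 95., 60., 81., 77., 90., 68.])
virtual = np.array([85., 70., 97., 63., 80., 75., 88., 71.])

# Pearson r as a simple test-retest reliability index; thresholds vary,
# but r >= 0.70 is a commonly used floor for "adequate" reliability
r = np.corrcoef(in_person, virtual)[0, 1]
print(f"test-retest r = {r:.2f}, adequate: {r >= 0.70}")
```

For timed measures, where the abstract reports lower reliability, r computed this way would be expected to fall below such a threshold.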


Subject(s)
Child Language , Education of Hearing Disabled , Educational Measurement/methods , Hearing Loss , Speech-Language Pathology/methods , Telemedicine/methods , COVID-19 , Child, Preschool , Educational Measurement/economics , Family , Humans , Pandemics , Pilot Projects , Speech-Language Pathology/economics , Surveys and Questionnaires , Telemedicine/economics
6.
Med Teach ; 44(3): 300-308, 2022 Mar.
Article in English | MEDLINE | ID: covidwho-1475585

ABSTRACT

The COVID-19 pandemic presented an enormous and immediate challenge to assessing clinical skills in healthcare professionals. Many institutions were unable to deliver established face-to-face assessment methods such as Objective Structured Clinical Examinations (OSCEs). Assessors needed to rapidly institute alternative assessment methods to ensure that candidates met the clinical competences required for progression. Using a systematic review, we aimed to evaluate the feasibility and acceptability of remote methods of clinical skills assessment, including remote structured clinical assessments and the submission of video recordings. We searched for studies reporting on Remote Clinical Assessments or their variants in MEDLINE, Embase and the Cochrane Library from 2000 to March 2021. Twenty-eight studies were included in the review; 20 related to remote structured clinical examinations or OSCEs and eight reported the use of video submissions. The participants of the different studies included medical students, nursing students, dental students and doctors in training. A variety of online platforms were utilised, including Zoom, Skype, webcams, and Adobe Connect. The studies found that delivery of remote clinical assessments is possible and provides an alternative method of assessing many clinical skills, but most also acknowledge limitations and challenges. Remote assessments are acceptable to both candidates and examiners and, where measured, show moderate agreement with on-site clinical assessments. Current evidence is based on studies with low methodological quality and, for the most part, small sample sizes.


Subject(s)
COVID-19 , Students, Medical , Clinical Competence , Educational Measurement/methods , Humans , Pandemics , Physical Examination
7.
Eur J Dent Educ ; 26(2): 377-383, 2022 May.
Article in English | MEDLINE | ID: covidwho-1406548

ABSTRACT

INTRODUCTION: During the COVID-19 pandemic, dental schools were required to reformat their curricula to accommodate regulations mandated to protect the health of students and faculty. For students enrolled in the Operative Dentistry preclinical courses at the Harvard School of Dental Medicine (HSDM), this modified curriculum included frontloading the course with lectures delivered remotely, followed by in-person laboratory exercises of learned concepts. The aim of this article was to determine the impact that the modifications had on student performance and student self-evaluation capabilities. MATERIALS AND METHODS: Thirty-eight students were introduced to this restructured course. Their performance in a final multiple-choice (MC) examination, four preclinical laboratory competency assessments (class II amalgam preparation and restoration, class III composite preparation and restoration) and their self-assessment of these preclinical competency assessments were then compared with the pre-COVID pandemic (P-CP) classes from years 2014 to 2019 (n = 216 students). Linear regressions were performed to determine differences in mean faculty scores, self-assessment scores, student-faculty score gaps (S-F gaps) and absolute S-F gaps seen between the class impacted by the pandemic and the P-CP classes. RESULTS: The results demonstrated that students during the COVID-19 pandemic (D-CP) had a higher average faculty score in all four preclinical laboratory competency assessments and in the final MC examination. In addition, the S-F gap was smaller in this cohort as compared with the P-CP classes. CONCLUSION: Despite the challenges of restructuring the preclinical curricula, D-CP students performed better than their P-CP predecessors in multiple facets of this Operative Dentistry course including self-assessment accuracy.


Subject(s)
COVID-19 , Dentistry, Operative , Clinical Competence , Curriculum , Dentistry, Operative/education , Diagnostic Self Evaluation , Education, Dental/methods , Educational Measurement/methods , Humans , Pandemics , Self-Assessment , Students, Dental
9.
J Surg Oncol ; 124(2): 193-199, 2021 Aug.
Article in English | MEDLINE | ID: covidwho-1378940

ABSTRACT

Telesimulation (TS), the process of using the internet to link educators and trainees at locations remote from one another, harnesses the powers of technology to enable access to high-quality simulation-based education and assessment to learners across the globe. From its first uses in the teaching and assessment of laparoscopic skills to more recent interpretations during the current pandemic, TS has shown promise in helping educators to address pressing dilemmas in medical education.


Subject(s)
Education, Distance/methods , Education, Medical, Graduate/methods , Educational Measurement/methods , Simulation Training/methods , Specialties, Surgical/education , Educational Technology , Global Health , Humans , International Cooperation , Internet
12.
PLoS One ; 16(8): e0254340, 2021.
Article in English | MEDLINE | ID: covidwho-1341496

ABSTRACT

The COVID-19 pandemic has impelled the majority of schools and universities around the world to switch to remote teaching. One of the greatest challenges in online education is preserving the academic integrity of student assessments. The lack of direct supervision by instructors during final examinations poses a significant risk of academic misconduct. In this paper, we propose a new approach to detecting potential cases of cheating on the final exam using machine learning techniques. We treat the issue of identifying the potential cases of cheating as an outlier detection problem. We use students' continuous assessment results to identify abnormal scores on the final exam. However, unlike a standard outlier detection task in machine learning, the student assessment data requires us to consider its sequential nature. We address this issue by applying recurrent neural networks together with anomaly detection algorithms. Numerical experiments on a range of datasets show that the proposed method achieves a remarkably high level of accuracy in detecting cases of cheating on the exam. We believe that the proposed method would be an effective tool for academics and administrators interested in preserving the academic integrity of course assessments.
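The paper's approach combines recurrent networks with anomaly detection; as a much-simplified, non-recurrent stand-in for illustration only, one can predict each student's final-exam score from their continuous-assessment average and flag unusually large positive residuals. All data below are simulated, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: 200 students, 5 continuous-assessment scores each
continuous = rng.normal(70, 10, size=(200, 5)).clip(0, 100)
final = (continuous.mean(axis=1) + rng.normal(0, 5, size=200)).clip(0, 100)
final[:3] += 25          # simulate 3 students with suspiciously inflated finals
final = final.clip(0, 100)

# Predict the final from the continuous-assessment mean and flag students whose
# final exceeds the prediction by an outlying margin (z-score of the residual)
pred = continuous.mean(axis=1)
residual = final - pred
z = (residual - residual.mean()) / residual.std(ddof=1)
flagged = np.where(z > 2.5)[0]
print("flagged students:", flagged)
```

The sequential modelling in the paper serves the same purpose as the residual here: it supplies an expected final score against which the observed score can be judged anomalous.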


Subject(s)
Education, Distance , Educational Measurement , Fraud , Lie Detection , Machine Learning , Algorithms , COVID-19/epidemiology , Datasets as Topic , Deception , Education, Distance/methods , Education, Distance/organization & administration , Educational Measurement/methods , Educational Measurement/standards , Humans , Models, Theoretical , Pandemics , SARS-CoV-2 , Universities
14.
Ann Glob Health ; 87(1): 68, 2021.
Article in English | MEDLINE | ID: covidwho-1325927

ABSTRACT

Introduction: The COVID-19 pandemic has forced a new look at, and modernization of, both the obligations and the approaches needed to achieve best practices in global health learning. These best practices have moved beyond traditional, face-to-face (F2F), classroom-based didactics to innovative online, asynchronous and synchronous instructional design and the information and communication technology (ICT) tools that support it. But in moving to this higher level of online in-service and pre-service training, key obligations (e.g., stopping neocolonialism, cultural humility, reversing brain drain, gender equity) must guide the modernization of instructional design and the supporting ICT. To positively impact global health training, educators must meet the needs of learners where they are. Purpose: We describe a set of multi-communication methods, e-Learning principles, strategies, and ICT approaches that educators can use to pivot content delivery from traditional F2F classroom didactics into the modern era. These best practices in both obligations and approaches rest on thoughtful, modern strategies of instructional design and ICT. Approach: We harnessed our collective experience in global health training to present insights on the guiding principles, strategies, and ICT environment central to developing learning curricula that meet trainee needs, and on how these can be actualized. Specifically, we describe five strategies: 1. Individualize learning; 2. Provide experiential learning; 3. Mentor … Mentor … Mentor; 4. Reinforce learning through assessment; and 5. Use information and communication technology and tools to support learning. Discussion: We offer a vision, a set of guiding principles, and five strategies for successful curriculum delivery in the modern era so that global health training can be made available to a wider audience more efficiently and effectively.


Subject(s)
Education, Distance/methods , Global Health/education , Learning , Mentoring/methods , Problem-Based Learning/methods , Educational Measurement/methods , Humans , International Cooperation
16.
Clin Ter ; 172(4): 284-304, 2021 Jul 05.
Article in English | MEDLINE | ID: covidwho-1304849

ABSTRACT

During the COVID-19 pandemic, many Italian universities had numerous students attending hospital wards. Training was necessary to prepare health care students to implement good practices regarding COVID-19 and to minimize contagion among students carrying out internships. In February 2020, a course was created to guide health personnel in appropriately addressing the health emergency posed by the new coronavirus, making use of the scientific evidence available at the time as well as official sources of information and updates. The aim of this study was the development and validation of a useful tool to evaluate the progress in knowledge regarding COVID-19 of students in degree courses for the health care professions. The reliability of the test was assessed using Cronbach's alpha (α) coefficient, while the responsiveness of the test between T0 and T1 was measured with a Student's t-test. The standard error of measurement was used to calculate the minimal detectable change of the tool. The test is made up of 31 items with four multiple-choice answers, one of which is correct. Fifteen bachelor's degree courses at the Sapienza University of Rome were enrolled, for a total population of 1,017 students from different course years. The test showed good internal consistency, with a Cronbach's α of 0.82. The item-total analysis also showed good results, with homogeneous α values from 0.80 to 0.82 for each item. The Student's t-test showed a difference of 3.59 between T0 and T1 (p < 0.001). The minimal detectable change was 0.47. The test is a useful tool for assessing progress in skills regarding COVID-19 for students in bachelor's degree courses in the health professions. It allows the improvement and acquisition of skills as well as a qualitative analysis of the organization of internship degree courses.
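The validation statistics named in this abstract (Cronbach's α, the standard error of measurement, and the minimal detectable change) can be sketched as below. The item matrix is invented for illustration, and the MDC formula shown (1.96 × √2 × SEM, i.e. the 95% level) is a common convention rather than something the abstract specifies.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_students, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 0/1 scoring for 8 students on 5 multiple-choice items
scores = np.array([
    [1, 1, 1, 1, 1],
    [1, 1, 1, 1, 0],
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [1, 1, 1, 0, 1],
    [0, 0, 1, 0, 0],
], dtype=float)

alpha = cronbach_alpha(scores)
sd_total = scores.sum(axis=1).std(ddof=1)
sem = sd_total * np.sqrt(1 - alpha)      # standard error of measurement
mdc95 = 1.96 * np.sqrt(2) * sem          # minimal detectable change at 95%
print(f"alpha = {alpha:.2f}, SEM = {sem:.2f}, MDC95 = {mdc95:.2f}")
```

Higher α shrinks the SEM, and with it the score change needed before a real gain in knowledge can be distinguished from measurement noise.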


Subject(s)
COVID-19/diagnosis , COVID-19/therapy , Education, Distance/statistics & numerical data , Educational Measurement/methods , Educational Measurement/statistics & numerical data , Health Knowledge, Attitudes, Practice , Health Personnel/statistics & numerical data , Adult , Cross-Sectional Studies , Female , Humans , Italy/epidemiology , Male , Middle Aged , Pandemics , Reproducibility of Results , SARS-CoV-2 , Students/statistics & numerical data , Surveys and Questionnaires , Young Adult
18.
Acad Med ; 96(9): 1236-1238, 2021 09 01.
Article in English | MEDLINE | ID: covidwho-1281877

ABSTRACT

The COVID-19 pandemic interrupted administration of the United States Medical Licensing Examination (USMLE) Step 2 Clinical Skills (CS) exam in March 2020 due to public health concerns. As the scope and magnitude of the pandemic became clearer, the initial plans by the USMLE program's sponsoring organizations (NBME and Federation of State Medical Boards) to resume Step 2 CS in the short-term shifted to long-range plans to relaunch an exam that could harness technology and reduce infection risk. Insights about ongoing changes in undergraduate and graduate medical education and practice environments, coupled with challenges in delivering a transformed examination during a pandemic, led to the January 2021 decision to permanently discontinue Step 2 CS. Despite this, the USMLE program considers assessment of clinical skills to be critically important. The authors believe this decision will facilitate important advances in assessing clinical skills. Factors contributing to the decision included concerns about achieving desired goals within desired time frames; a review of enhancements to clinical skills training and assessment that have occurred since the launch of Step 2 CS in 2004; an opportunity to address safety and health concerns, including those related to examinee stress and wellness during a pandemic; a review of advances in the education, training, practice, and delivery of medicine; and a commitment to pursuing innovative assessments of clinical skills. USMLE program staff continue to seek input from varied stakeholders to shape and prioritize technological and methodological enhancements to guide development of clinical skills assessment. The USMLE program's continued exploration of constructs and methods by which communication skills, clinical reasoning, and physical examination may be better assessed within the remaining components of the exam provides opportunities for examinees, educators, regulators, the public, and other stakeholders to provide input.


Subject(s)
Clinical Competence/standards , Educational Measurement/methods , Licensure, Medical/standards , COVID-19/prevention & control , Educational Measurement/standards , Humans , Licensure, Medical/trends , United States
19.
Rev Clin Esp (Barc) ; 221(8): 456-463, 2021 Oct.
Article in English | MEDLINE | ID: covidwho-1275670

ABSTRACT

BACKGROUND AND OBJECTIVES: The COVID-19 pandemic has forced universities to move the completion of university studies online. Spain's National Conference of Medical School Deans coordinates an objective, structured clinical competency assessment called the Objective Structured Clinical Examination (OSCE), which consists of 20 face-to-face test sections for students in their sixth year of study. As a result of the pandemic, a computer-based case simulation OSCE (CCS-OSCE) has been designed. The objective of this article is to describe the creation, administration, and development of the test. MATERIALS AND METHODS: This work is a descriptive study of the CCS-OSCE from its planning stages in April 2020 to its administration in June 2020. RESULTS: The CCS-OSCE evaluated the competences of anamnesis, exploration, clinical judgment, ethical aspects, interprofessional relations, prevention, and health promotion. No technical or communication skills were evaluated. The CCS-OSCE consisted of ten test sections, each of which had a 12-min time limit and ranged from six to 21 questions (mean: 1.1 min/question). The CCS-OSCE used the virtual campus platform of each of the 16 participating medical schools, which had a total of 2829 students in their sixth year of study. It was jointly held on two dates in June 2020. CONCLUSIONS: The CCS-OSCE made it possible to bring together the various medical schools and carry out interdisciplinary work. The CCS-OSCE conducted may be similar to Step 3 of the United States Medical Licensing Examination.


Subject(s)
COVID-19 , Clinical Competence/standards , Computer Simulation , Educational Measurement/methods , Schools, Medical , Humans , Spain
20.
Curr Pharm Teach Learn ; 13(9): 1174-1179, 2021 09.
Article in English | MEDLINE | ID: covidwho-1275244

ABSTRACT

INTRODUCTION: In response to the COVID-19 pandemic, most universities in North America transitioned to online instruction and assessment in March 2020. Undergraduate pharmacy students in years one to three of two four-year entry-to-practice programs at a university in Canada were administered open-book examinations to complete their didactic winter-term courses in pharmaceutical sciences; behavioural, social, and administrative sciences; and pharmacotherapeutics. We examine the impacts of the switch to open-book examinations on final exam characteristics. METHODS: The ratios and correlations of final exam and midterm grades in 2020, when final exams were open-book, and in 2019, when finals were closed-book, were calculated and compared. RESULTS: In 2020, the ratios of final exam to midterm exam scores for five out of seven courses were significantly larger than in 2019. In contrast, for all but one course, the correlations between midterm and final examination grades showed no significant difference from 2019 to 2020. CONCLUSIONS: Compared with 2019, when finals were administered in a closed-book format, the sudden shift to an open-book format for final exams in 2020 appears to be associated with the final exams becoming easier relative to midterms. However, when considering how final and midterm exam grades correlate year over year, all but one class showed no significant difference. These findings suggest that making exams open-book may change how they can inform criterion-referenced or absolute grading decisions, but not norm-referenced or rank-based decisions.
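The two summary statistics this abstract compares, a final-to-midterm score ratio and a midterm-final correlation, can be sketched as below. The grades are invented for illustration, not taken from the study.

```python
import numpy as np

# Hypothetical course grades (%) for one course in each year
midterm_2019 = np.array([70., 65., 80., 55., 75., 60., 85., 68.])
final_2019 = np.array([68., 62., 78., 50., 74., 58., 82., 65.])   # closed-book
midterm_2020 = np.array([72., 66., 81., 58., 74., 61., 84., 70.])
final_2020 = np.array([80., 74., 88., 70., 83., 72., 92., 79.])   # open-book

for year, mid, fin in [("2019", midterm_2019, final_2019),
                       ("2020", midterm_2020, final_2020)]:
    ratio = fin.mean() / mid.mean()     # ratio > 1 suggests an "easier" final
    r = np.corrcoef(mid, fin)[0, 1]     # rank-order stability across the two exams
    print(f"{year}: final/midterm ratio = {ratio:.2f}, correlation r = {r:.2f}")
```

A rising ratio with a stable correlation mirrors the paper's conclusion: open-book finals look easier in absolute terms while preserving how students rank relative to one another.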


Subject(s)
Education, Distance/methods , Educational Measurement/methods , Educational Measurement/statistics & numerical data , Educational Status , Students, Pharmacy/statistics & numerical data , Canada , Humans , Universities