2.
BMJ Open Qual ; 13(Suppl 2)2024 May 07.
Article in English | MEDLINE | ID: mdl-38719519

ABSTRACT

INTRODUCTION: Safe practice in medicine and dentistry has been a global priority area in which large knowledge gaps are present. Patient safety strategies aim at preventing unintended damage to patients that can be caused by healthcare practitioners. One of the components of patient safety is safe clinical practice. Patient safety efforts will help in ensuring safe dental practice through early detection and limiting of non-preventable errors. A valid and reliable instrument is required to assess the knowledge of dental students regarding patient safety. OBJECTIVE: To determine the psychometric properties of a written test to assess safe dental practice in undergraduate dental students. MATERIAL AND METHODS: A test comprising 42 one-best-answer multiple-choice questions was administered to final-year students (n = 52) of a private dental college. Items were developed according to National Board of Medical Examiners item-writing guidelines. The content of the test was determined in consultation with dental experts (professors or associate professors). These experts assessed each item for language clarity (A: clear; B: ambiguous) and relevance (1: essential; 2: useful, not necessary; 3: not essential). Ethical approval was obtained from the concerned dental college. Statistical analysis was done in SPSS V.25, in which descriptive analysis, item analysis and Cronbach's alpha were computed. RESULT: The test scores had a reliability (Cronbach's alpha) of 0.722 before and 0.855 after removing 15 items. CONCLUSION: A reliable and valid test was developed that will help to assess the knowledge of dental students regarding safe dental practice. This can guide medical educationists in developing or improving patient safety curricula to ensure safe dental practice.
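For readers who want to reproduce the reliability analysis outside SPSS, the sketch below shows how Cronbach's alpha can be computed from an examinee-by-item score matrix. It is a minimal illustration with randomly generated responses, not the study's data; the function name and the toy dimensions (52 examinees, 42 items) are assumptions for demonstration only.

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: examinees x items matrix of item scores (0/1 for MCQ items)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total test scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Toy data: 52 examinees x 42 dichotomous items (random, for illustration only)
rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(52, 42)).astype(float)
print(f"alpha = {cronbach_alpha(responses):.3f}")
```

Item analysis in the same spirit would drop items one at a time and recompute alpha, which is how removing 15 weak items can raise reliability from 0.722 to 0.855.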


Subject(s)
Educational Measurement , Patient Safety , Psychometrics , Humans , Psychometrics/instrumentation , Psychometrics/methods , Patient Safety/standards , Patient Safety/statistics & numerical data , Surveys and Questionnaires , Educational Measurement/methods , Educational Measurement/statistics & numerical data , Educational Measurement/standards , Reproducibility of Results , Students, Dental/statistics & numerical data , Students, Dental/psychology , Education, Dental/methods , Education, Dental/standards , Male , Female , Clinical Competence/statistics & numerical data , Clinical Competence/standards
3.
BMC Anesthesiol ; 24(1): 188, 2024 May 27.
Article in English | MEDLINE | ID: mdl-38802780

ABSTRACT

BACKGROUND: Ethiopia made a national licensing examination (NLE) for associate clinician anesthetists a requirement for entry into the practice workforce. However, there is limited empirical evidence on whether the NLE scores of associate clinicians predict the quality of health care they provide in low-income countries. This study aimed to assess the association between anesthetists' NLE scores and three selected quality-of-patient-care indicators. METHODS: A multicenter longitudinal observational study was conducted between January 8 and February 7, 2023, to collect quality of care (QoC) data on surgical patients attended by anesthetists (n = 56) who had taken the Ethiopian anesthetist NLE since 2019. The three QoC indicators were adherence to standards for safe anesthesia practice, critical incidents, and patient satisfaction. The medical records of 991 patients were reviewed to determine adherence to standards for safe anesthesia practice and critical incidents. A total of 400 patients responded to the patient satisfaction survey. Multivariable regressions were employed to determine whether the anesthetist NLE score predicted the QoC indicators. RESULTS: The mean percentage of safe anesthesia practice standards met was 69.14%, and the mean satisfaction score was 85.22%. There were 1,120 critical incidents among 911 patients, with three out of five experiencing at least one. After controlling for patient, anesthetist, facility, and clinical care-related confounding variables, the NLE score predicted the occurrence of critical incidents. For every 1-percentage-point increase in the total NLE score, the odds of developing one or more critical incidents decreased by 18% (aOR = 0.82; 95% CI = 0.70-0.96; p = 0.016). No statistically significant associations existed between the other two QoC indicators and NLE scores. CONCLUSION: The NLE score had an inverse relationship with the occurrence of critical incidents, supporting the validity of the examination in assessing graduates' ability to provide safe and effective care. The lack of an association with the other two QoC indicators requires further investigation. Our findings may help improve education quality and the impact of NLEs in Ethiopia and beyond.
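The abstract reports an adjusted odds ratio from a multivariable logistic regression. As a hedged illustration of how such an estimate is obtained, the sketch below fits an unadjusted logistic model to simulated data with statsmodels; the simulated scores, effect size, and variable names are invented for demonstration and do not reflect the study's adjusted model or data.

```python
import numpy as np
import statsmodels.api as sm

# Simulated data (illustrative only): NLE percentage score and whether the
# patient experienced at least one critical incident.
rng = np.random.default_rng(1)
nle_score = rng.uniform(50, 95, size=500)
p_incident = 1 / (1 + np.exp(-(2.0 - 0.05 * nle_score)))  # higher score -> lower risk
incident = rng.binomial(1, p_incident)

X = sm.add_constant(nle_score)          # intercept + score as the only predictor
fit = sm.Logit(incident, X).fit(disp=0)
aor = np.exp(fit.params[1])             # odds multiplier per 1-point score increase
print(f"Odds ratio per 1%-point increase: {aor:.2f}")
```

The exponentiated coefficient is what the abstract reports as aOR = 0.82, i.e. an 18% reduction in the odds of a critical incident per additional percentage point on the NLE.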


Subject(s)
Anesthetists , Patient Satisfaction , Quality of Health Care , Humans , Ethiopia , Longitudinal Studies , Male , Female , Adult , Quality of Health Care/standards , Anesthetists/standards , Middle Aged , Anesthesiology/standards , Clinical Competence/standards , Educational Measurement/methods , Educational Measurement/standards
6.
Curr Pharm Teach Learn ; 16(6): 465-468, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38582641

ABSTRACT

BACKGROUND AND PURPOSE: To describe one institution's approach to transforming high-stakes objective structured clinical examinations (OSCEs) from norm-referenced to criterion-referenced standard setting and to evaluate the impact of these changes on OSCE performance and pass rates. EDUCATIONAL ACTIVITY AND SETTING: The OSCE writing team at the college selected a modified Angoff method appropriate for high-stakes assessments to replace the two-standard-deviation method previously used. Each member of the OSCE writing team independently reviewed the analytical checklist and calculated a passing score for active stations on OSCEs. The group then met to determine a final passing score for each station. The team also determined critical cut points for each station, when indicated. After administration of the OSCEs, scores, pass rates, and need for remediation were compared with the previous norm-referenced method. Descriptive statistics were used to summarize the data. FINDINGS: OSCE scores remained relatively unchanged after the switch to a criterion-referenced method, but the number of remediators increased by up to 2.6-fold. In the first year, the average score increased from 86.8% to 91.7% while the remediation rate increased from 2.8% to 7.4%. In the third year, the average increased from 90.9% to 92% while the remediation rate increased from 6% to 15.6%. Likewise, the fourth-year average increased from 84.9% to 87.5% while the remediation rate increased from 4.4% to 9%. SUMMARY: Transition to a modified Angoff method did not impact the average OSCE score but did increase the number of remediations.
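A modified Angoff cut score is typically the mean, across judges, of the estimated probability that a borderline (minimally competent) candidate earns each checklist item. The sketch below illustrates that arithmetic with hypothetical ratings; the judge estimates and station size are invented and do not represent the college's checklists.

```python
import numpy as np

# Hypothetical modified Angoff ratings: each judge estimates the probability
# that a borderline student earns each checklist item (illustrative values).
ratings = np.array([
    [0.90, 0.70, 0.60, 0.80, 0.50],   # judge 1
    [0.80, 0.60, 0.70, 0.90, 0.40],   # judge 2
    [0.85, 0.65, 0.60, 0.85, 0.50],   # judge 3
])

item_cut = ratings.mean(axis=0)        # consensus expectation per checklist item
station_pass = item_cut.mean() * 100   # station passing score as a percentage
print(f"Station pass score: {station_pass:.1f}%")
```

Unlike the two-standard-deviation approach, this cut score depends only on expert judgment about item difficulty, not on how a given cohort happens to perform.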


Subject(s)
Educational Measurement , Humans , Educational Measurement/methods , Educational Measurement/statistics & numerical data , Educational Measurement/standards , Clinical Competence/standards , Clinical Competence/statistics & numerical data , Education, Pharmacy/methods , Education, Pharmacy/standards , Education, Pharmacy/statistics & numerical data
7.
JMIR Med Educ ; 10: e55048, 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38686550

ABSTRACT

Background: The deployment of OpenAI's ChatGPT-3.5 and its subsequent versions, ChatGPT-4 and ChatGPT-4 With Vision (4V; also known as "GPT-4 Turbo With Vision"), has notably influenced the medical field. Having demonstrated remarkable performance in medical examinations globally, these models show potential for educational applications. However, their effectiveness in non-English contexts, particularly in Chile's medical licensing examinations (a critical step for medical practitioners in Chile), is less explored. This gap highlights the need to evaluate ChatGPT's adaptability to diverse linguistic and cultural contexts. Objective: This study aims to evaluate the performance of ChatGPT versions 3.5, 4, and 4V in the EUNACOM (Examen Único Nacional de Conocimientos de Medicina), a major medical examination in Chile. Methods: Three official practice drills (540 questions) from the University of Chile, mirroring the EUNACOM's structure and difficulty, were used to test ChatGPT versions 3.5, 4, and 4V. Each of the 3 ChatGPT versions was given 3 attempts per drill. Responses to questions during each attempt were systematically categorized and analyzed to assess accuracy rates. Results: All versions of ChatGPT passed the EUNACOM drills. Specifically, versions 4 and 4V outperformed version 3.5, achieving average accuracy rates of 79.32% and 78.83%, respectively, compared to 57.53% for version 3.5 (P<.001). Version 4V, however, did not outperform version 4 (P=.73), despite its additional visual capabilities. We also evaluated ChatGPT's performance in different medical areas of the EUNACOM and found that versions 4 and 4V consistently outperformed version 3.5. Across the different medical areas, version 3.5 displayed the highest accuracy in psychiatry (69.84%), while versions 4 and 4V achieved the highest accuracy in surgery (90.00% and 86.11%, respectively). Versions 3.5 and 4 had the lowest performance in internal medicine (52.74% and 75.62%, respectively), while version 4V had the lowest performance in public health (74.07%). Conclusions: This study reveals ChatGPT's ability to pass the EUNACOM, with distinct proficiencies across versions 3.5, 4, and 4V. Notably, advancements in artificial intelligence (AI) have not led to significant enhancements in performance on image-based questions. The variations in proficiency across medical fields suggest the need for more nuanced AI training. Additionally, the study underscores the importance of exploring innovative approaches to using AI to augment human cognition and enhance the learning process. Such advancements have the potential to significantly influence medical education, fostering not only knowledge acquisition but also the development of critical thinking and problem-solving skills among health care professionals.
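As a hedged illustration of how accuracy rates between two model versions might be compared, the snippet below runs a chi-square test on correct/incorrect counts chosen to roughly match the reported 57.53% and 79.32% accuracies over 540 questions and 3 attempts. The counts are reconstructed approximations, and the authors' actual statistical procedure may differ.

```python
from scipy.stats import chi2_contingency

# Illustrative counts over 540 questions x 3 attempts (1,620 responses per version),
# chosen to roughly echo the reported accuracies; not the study's raw data.
total = 1620
correct_v35 = 932    # ~57.5% for ChatGPT-3.5
correct_v4 = 1285    # ~79.3% for ChatGPT-4

table = [
    [correct_v35, total - correct_v35],
    [correct_v4, total - correct_v4],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.3g}")
```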


Subject(s)
Educational Measurement , Licensure, Medical , Chile , Humans , Educational Measurement/methods , Educational Measurement/standards , Clinical Competence/standards , Male , Female
8.
Am J Pharm Educ ; 88(5): 100701, 2024 May.
Article in English | MEDLINE | ID: mdl-38641172

ABSTRACT

As first-time pass rates on the North American Pharmacist Licensure Examination (NAPLEX) continue to decrease, pharmacy educators are left questioning the dynamics causing the decline and how to respond. Institutional and student factors both influence first-time NAPLEX pass rates. Pharmacy schools established before 2000, those housed within an academic medical center, and public rather than private schools have been associated with higher first-time NAPLEX pass rates. However, these factors alone do not sufficiently explain the issues surrounding first-time pass rates. Changes to the NAPLEX blueprint may also have influenced first-time pass rates. The number of existing pharmacy schools, combined with decreasing numbers of applicants and the effects of the COVID-19 pandemic, should also be considered as potential causes of decreased first-time pass rates. In this commentary, factors associated with first-time NAPLEX pass rates are discussed along with possible responses for the Academy to consider.


Subject(s)
COVID-19 , Education, Pharmacy , Educational Measurement , Licensure, Pharmacy , Schools, Pharmacy , Humans , Educational Measurement/standards , Schools, Pharmacy/standards , COVID-19/epidemiology , Students, Pharmacy , Pharmacists , United States
9.
Med Educ ; 58(6): 730-736, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38548481

ABSTRACT

OBJECTIVE: This study explored how the Syrian crisis, training conditions, and relocation influenced the National Medical Examination (NME) scores of final-year medical students. METHODS: Results of the NME were used to denote the performance of final-year medical students between 2014 and 2021. The NME is a mandatory standardised test that measures the knowledge and competence of students in various clinical subjects. We categorised the data into two periods: period-I (2014-2018) and period-II (2019-2021). Period-I represents students who trained under hostile circumstances, which refer to the devastating effects of a decade-long Syrian crisis. Period-II represents the post-hostilities phase, which is marked by a deepening economic crisis. RESULTS: Collected data included test scores for a total of 18 312 final-year medical students from nine medical schools (from six public and three private universities). NME scores improved significantly in period-II compared with period-I (p < 0.0001). Campus location or relocation during the crisis affected the results significantly, with higher scores from students of medical schools located in lower-risk regions compared with those from medical schools located in high-risk regions (p < 0.0001), both during and after the hostilities. Also, students of medical schools relocated to lower-risk regions scored significantly less than those of medical schools located in high-risk regions (p < 0.0001), but their scores remained inferior to those of students of medical schools that were originally located in lower-risk regions (p < 0.0001). CONCLUSION: Academic performance of final-year medical students can be adversely affected by crises and conflicts, with a clear tendency toward recovery upon crisis resolution. The study underscores the importance of maintaining and safeguarding the infrastructure of educational institutions, especially during times of crisis. Governments and educational authorities should prioritise resource allocation to ensure that medical schools have access to essential services, learning resources, and teaching personnel.
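The abstract compares NME score distributions between the two periods but does not name the test used. As one hedged way such a comparison could be run, the sketch below applies a Mann-Whitney U test to simulated score distributions; the means, spreads, and sample sizes are invented for illustration only.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Simulated NME scores (illustrative only) for the two study periods.
rng = np.random.default_rng(3)
period_1 = rng.normal(60, 10, size=1000)   # crisis years (2014-2018)
period_2 = rng.normal(65, 10, size=800)    # post-hostilities years (2019-2021)

stat, p = mannwhitneyu(period_1, period_2, alternative="two-sided")
print(f"U = {stat:.0f}, p = {p:.2g}")
```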


Subject(s)
Educational Measurement , Students, Medical , Syria , Humans , Educational Measurement/methods , Educational Measurement/standards , Clinical Competence/standards , Schools, Medical , Education, Medical, Undergraduate , Education, Medical
10.
J Osteopath Med ; 124(6): 257-265, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38498662

ABSTRACT

CONTEXT: The National Board of Osteopathic Medical Examiners (NBOME) administers the Comprehensive Osteopathic Medical Licensing Examination of the United States (COMLEX-USA), a three-level examination designed for licensure for the practice of osteopathic medicine. The examination design for COMLEX-USA Level 3 (L3) was changed in September 2018 to a two-day computer-based examination with two components: a multiple-choice question (MCQ) component with single best answer and a clinical decision-making (CDM) case component with extended multiple-choice (EMC) and short answer (SA) questions. Continued validation of the L3 examination, especially with the new design, is essential for the appropriate interpretation and use of the test scores. OBJECTIVES: The purpose of this study is to gather evidence to support the validity of the L3 examination scores under the new design utilizing sources of evidence based on Kane's validity framework. METHODS: Kane's validity framework contains four components of evidence to support the validity argument: Scoring, Generalization, Extrapolation, and Implication/Decision. In this study, we gathered data from various sources and conducted analyses to provide evidence that the L3 examination is validly measuring what it is supposed to measure. These include reviewing content coverage of the L3 examination, documenting scoring and reporting processes, estimating the reliability and decision accuracy/consistency of the scores, quantifying associations between the scores from the MCQ and CDM components and between scores from different competency domains of the L3 examination, exploring the relationships between L3 scores and scores from a performance-based assessment that measures related constructs, performing subgroup comparisons, and describing and justifying the criterion-referenced standard setting process. The analysis data contains first-attempt test scores for 8,366 candidates who took the L3 examination between September 2018 and December 2019. The performance-based assessment utilized as a criterion measure in this study is COMLEX-USA Level 2 Performance Evaluation (L2-PE). RESULTS: All assessment forms were built through the automated test assembly (ATA) procedure to maximize parallelism in terms of content coverage and statistical properties across the forms. Scoring and reporting follows industry-standard quality-control procedures. The inter-rater reliability of SA rating, decision accuracy, and decision consistency for pass/fail classifications are all very high. There is a statistically significant positive association between the MCQ and the CDM components of the L3 examination. The patterns of associations, both within the L3 subscores and with L2-PE domain scores, fit with what is being measured. The subgroup comparisons by gender, race, and first language showed expected small differences in mean scores between the subgroups within each category and yielded findings that are consistent with those described in the literature. The L3 pass/fail standard was established through implementation of a defensible criterion-referenced procedure. CONCLUSIONS: This study provides some additional validity evidence for the L3 examination based on Kane's validity framework. The validity of any measurement must be established through ongoing evaluation of the related evidence. The NBOME will continue to collect evidence to support validity arguments for the COMLEX-USA examination series.
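Among the evidence sources listed is the inter-rater reliability of short-answer (SA) ratings. The NBOME's actual reliability estimation procedure is not detailed in the abstract, so the snippet below is only a generic illustration of one common agreement statistic, Cohen's kappa, applied to two raters' hypothetical acceptable/not-acceptable judgments.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical ratings of the same 20 short-answer responses by two raters
# (1 = acceptable, 0 = not acceptable); illustrative data only.
rater_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1]
rater_b = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1]

kappa = cohen_kappa_score(rater_a, rater_b)  # chance-corrected agreement
print(f"Cohen's kappa = {kappa:.2f}")
```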


Subject(s)
Educational Measurement , Licensure, Medical , Osteopathic Medicine , United States , Humans , Educational Measurement/methods , Educational Measurement/standards , Licensure, Medical/standards , Osteopathic Medicine/education , Osteopathic Medicine/standards , Reproducibility of Results , Clinical Competence/standards
12.
J Neurosci Nurs ; 56(3): 86-91, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38451926

ABSTRACT

BACKGROUND: To measure the effectiveness of an educational intervention, it is essential to develop high-quality, validated tools to assess a change in knowledge or skills after an intervention. An identified gap within the field of neurology is the lack of a universal test to examine knowledge of neurological assessment. METHODS: This instrument development study was designed to determine whether neuroscience knowledge as demonstrated in a Neurologic Assessment Test (NAT) was normally distributed across healthcare professionals who treat patients with neurologic illness. The variables of time, knowledge, accuracy, and confidence were individually explored and analyzed in SAS. RESULTS: The mean (standard deviation) time spent by 135 participants to complete the NAT was 12.9 (3.2) minutes. The mean knowledge score was 39.5 (18.2), mean accuracy was 46.0 (15.7), and mean confidence was 84.4 (24.4). Despite comparatively small standard deviations, Shapiro-Wilk tests indicate that time spent, knowledge, accuracy, and confidence are non-normally distributed (P < .0001). The Cronbach α was 0.7816 considering all 3 measures (knowledge, accuracy, and confidence); this improved to an α of 0.8943 when only knowledge and accuracy were included in the model. The amount of time spent was positively associated with higher accuracy (r² = 0.04, P < .05), higher knowledge was positively associated with higher accuracy (r² = 0.6543, P < .0001), and higher knowledge was positively associated with higher confidence (r² = 0.4348, P < .0001). CONCLUSION: The scores for knowledge, confidence, and accuracy each had a slightly skewed distribution around a point estimate with a standard deviation smaller than the mean. This suggests initial content validity in the NAT. There is adequate initial construct validity to support using the NAT as an outcome measure for projects that measure change in knowledge. Although improvements can be made, the NAT does have adequate construct and content validity for initial use.
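The normality testing and score associations described here can be reproduced with standard routines. The sketch below runs a Shapiro-Wilk test and a Pearson correlation on simulated scores whose means and standard deviations loosely mirror those reported; the data are synthetic and the original analysis was run in SAS, so this is illustrative only.

```python
import numpy as np
from scipy.stats import shapiro, pearsonr

# Simulated NAT-style scores for 135 participants (illustrative only).
rng = np.random.default_rng(4)
knowledge = np.clip(rng.normal(39.5, 18.2, size=135), 0, 100)
accuracy = np.clip(0.5 * knowledge + rng.normal(25, 8, size=135), 0, 100)

w, p_normal = shapiro(knowledge)            # test of normality
r, p_corr = pearsonr(knowledge, accuracy)   # association between scores
print(f"Shapiro-Wilk p = {p_normal:.3f}; r^2 = {r**2:.2f} (p = {p_corr:.2g})")
```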


Subject(s)
Health Personnel , Neurologic Examination , Humans , Neurologic Examination/standards , Neurologic Examination/methods , Health Personnel/education , Reproducibility of Results , Clinical Competence/standards , Female , Male , Adult , Neuroscience Nursing , Health Knowledge, Attitudes, Practice , Nervous System Diseases/nursing , Nervous System Diseases/diagnosis , Educational Measurement/methods , Educational Measurement/standards
14.
Indian Pediatr ; 61(5): 463-468, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38400729

ABSTRACT

India introduced competency-based medical education (CBME) in the year 2019. There is often confusion between terms like ability, skill, and competency. The provided curriculum encourages teaching and assessing skills rather than competencies. Though competency includes skill, it is more than a mere skill, and ignoring the other aspects like communication, ethics, and professionalism can compromise the teaching of competencies as well as their intended benefits to the patient and the society. The focus on skills also undermines the assessment of relevant knowledge. This paper clarifies the differences between ability, skill, and competency, and re-emphasizes the role of relevant knowledge and its assessment throughout clinical training. It is also emphasized that competency assessment is not a one-shot process; rather, it must be a longitudinal process where the assessment should bring out the achievement level of the student. Many of the components of competencies are not assessable by purely objective methods and there is a need to use expert subjective judgments, especially for the formative and classroom assessments. A mentor adds to the success of a competency-based curriculum.


Subject(s)
Clinical Competence , Competency-Based Education , Curriculum , Humans , Clinical Competence/standards , India , Competency-Based Education/standards , Curriculum/standards , Educational Measurement/methods , Educational Measurement/standards , Education, Medical/standards , Education, Medical/methods
16.
Acad Med ; 99(5): 534-540, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38232079

ABSTRACT

PURPOSE: Learner development and promotion rely heavily on narrative assessment comments, but narrative assessment quality is rarely evaluated in medical education. Educators have developed tools such as the Quality of Assessment for Learning (QuAL) tool to evaluate the quality of narrative assessment comments; however, scoring the comments generated in medical education assessment programs is time intensive. The authors developed a natural language processing (NLP) model for applying the QuAL score to narrative supervisor comments. METHOD: Samples of 2,500 Entrustable Professional Activities assessments were randomly extracted and deidentified from the McMaster (1,250 comments) and Saskatchewan (1,250 comments) emergency medicine (EM) residency training programs during the 2019-2020 academic year. Comments were rated using the QuAL score by 25 EM faculty members and 25 EM residents. The results were used to develop and test an NLP model to predict the overall QuAL score and QuAL subscores. RESULTS: All 50 raters completed the rating exercise. Approximately 50% of the comments had perfect agreement on the QuAL score, with the remaining resolved by the study authors. Creating a meaningful suggestion for improvement was the key differentiator between high- and moderate-quality feedback. The overall QuAL model predicted the exact human-rated score or 1 point above or below it in 87% of instances. Overall model performance was excellent, especially regarding the subtasks on suggestions for improvement and the link between resident performance and improvement suggestions, which achieved 85% and 82% balanced accuracies, respectively. CONCLUSIONS: This model could save considerable time for programs that want to rate the quality of supervisor comments, with the potential to automatically score a large volume of comments. This model could be used to provide faculty with real-time feedback or as a tool to quantify and track the quality of assessment comments at faculty, rotation, program, or institution levels.
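The abstract does not describe the model architecture, so the sketch below is only a generic supervised text-classification baseline (TF-IDF features with logistic regression), not the authors' NLP model. The example comments and QuAL labels are invented; real training would use the 2,500 human-rated comments.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set: supervisor comments with human-assigned QuAL scores (0-5).
comments = [
    "Good shift.",
    "Managed the resuscitation well; next time verbalize your airway plan earlier.",
    "Strong history taking; work on narrowing the differential before ordering tests.",
    "Needs improvement.",
]
qual_scores = [0, 5, 4, 1]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(comments, qual_scores)

# Score a new, unseen comment (output is a predicted QuAL label).
print(model.predict(["Excellent communication; consider documenting your reasoning for disposition."]))
```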


Subject(s)
Competency-Based Education , Internship and Residency , Natural Language Processing , Humans , Competency-Based Education/methods , Internship and Residency/standards , Clinical Competence/standards , Narration , Educational Measurement/methods , Educational Measurement/standards , Emergency Medicine/education , Faculty, Medical/standards
17.
J Med Syst ; 47(1): 86, 2023 Aug 15.
Article in English | MEDLINE | ID: mdl-37581690

ABSTRACT

ChatGPT, a language model developed by OpenAI, uses a 175 billion parameter Transformer architecture for natural language processing tasks. This study aimed to compare the knowledge and interpretation ability of ChatGPT with those of medical students in China by administering the Chinese National Medical Licensing Examination (NMLE) to both ChatGPT and medical students. We evaluated the performance of ChatGPT in three years' worth of the NMLE, which consists of four units. At the same time, the exam results were compared to those of medical students who had studied for five years at medical colleges. ChatGPT's performance was lower than that of the medical students, and ChatGPT's correct answer rate was related to the year in which the exam questions were released. ChatGPT's knowledge and interpretation ability for the NMLE were not yet comparable to those of medical students in China. It is probable that these abilities will improve through deep learning.


Subject(s)
Artificial Intelligence , Educational Measurement , Licensure , Medicine , Students, Medical , Humans , Asian People , China , Knowledge , Language , Medicine/standards , Licensure/standards , Students, Medical/statistics & numerical data , Educational Measurement/standards
18.
Natl Med J India ; 36(5): 323-326, 2023.
Article in English | MEDLINE | ID: mdl-38759987

ABSTRACT

Background Reflective practice is an integral component of continuing professional development. However, assessing written reflective narrations is complex and difficult. A rubric is a potential tool that can overcome this difficulty. We aimed to develop, validate and estimate the inter-rater reliability of an analytical rubric used for assessing reflective narration. Methods A triangulation type of mixed-methods design (Qual: nominal group technique, Quan: analytical follow-up design, and Qual: open-ended responses) was adopted to achieve the study objectives. Faculty members involved in the active surveillance of COVID-19 participated in the development of the assessment rubric. The reflective narrations of medical interns were assessed by postgraduates with and without the rubric. Steps recommended by the assessment committee of the University of Hawaii were followed to develop the rubric. The content validity index and inter-rater reliability measures were estimated. Results An analytical rubric with eight criteria and four mastery levels, yielding a maximum score of 40, was developed. There was a significant difference in the mean scores obtained by interns when rated without and with the developed rubric. Kendall's coefficient of concordance, a measure of agreement among more than two scorers, was higher after using the rubric. Conclusion Our attempt to develop an analytical rubric for assessing reflective narration was successful in terms of a high content validity index and better inter-rater concordance. The same process can be replicated to develop similar analytical rubrics in the future.
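Kendall's coefficient of concordance (W) can be computed directly from a raters-by-subjects score matrix. The sketch below applies the standard formula (without a tie correction) to invented rubric scores; the number of raters, narrations, and score values are assumptions for illustration.

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w(scores):
    """Kendall's W for a raters x subjects matrix (tie correction omitted for brevity)."""
    m, n = scores.shape
    ranks = np.apply_along_axis(rankdata, 1, scores)   # rank subjects within each rater
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical: 4 postgraduate raters scoring 10 reflective narrations (max score 40).
rng = np.random.default_rng(2)
base = rng.uniform(15, 38, size=10)
scores = np.clip(base + rng.normal(0, 2, size=(4, 10)), 0, 40)
print(f"Kendall's W = {kendalls_w(scores):.2f}")
```

Higher W after introducing the rubric would indicate, as the authors report, greater concordance among scorers.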


Subject(s)
COVID-19 , Humans , Reproducibility of Results , SARS-CoV-2 , Observer Variation , Internship and Residency , Educational Measurement/methods , Educational Measurement/standards
19.
Educ. med. super ; 36(2)jun. 2022. ilus, tab
Article in Spanish | LILACS, CUMED | ID: biblio-1404547

ABSTRACT

Introduction: The training of medical-surgical specialists (residents) takes place in hospitals where healthcare and teaching-learning activities converge. Knowledge about this dual setting is essential for identifying opportunities to optimize the quality and effectiveness of both activities. Objective: To construct a scale for measuring perceptions of the teaching-learning environment in the clinical practice of residents in training in Colombia. Methods: A Likert-type scale was designed by adapting the Association for Medical Education in Europe guide Developing Questionnaires for Educational Research, following these steps: literature review, review of Colombian regulations regarding university hospitals, synthesis of evidence, development of items, face validation by experts, and administration of the questionnaire to residents. Results: The Clinical Practice Environment Scale (EAPRAC) was constructed on the basis of activity theory and workplace-based situated learning. Initially, 46 questions were defined and, after face validation, 39 items remained, distributed across seven domains: academic processes, teaching staff, teaching-service agreements, well-being, academic infrastructure, care infrastructure, and organization and management. Administration of the scale to residents revealed no comprehension problems; therefore, it was not necessary to refine the number or content of the items. Conclusions: The constructed scale has face validity according to expert peers and residents, which allows content validity and reproducibility to be assessed in a later phase.


Subject(s)
Humans , Teaching , Knowledge , Learning , Health Management , Education, Medical , Educational Measurement/standards , Evaluation Studies as Topic , Hospitals/standards
20.
Med Teach ; 44(4): 353-359, 2022 04.
Article in English | MEDLINE | ID: mdl-35104191

ABSTRACT

Health professions education has undergone significant changes over the last few decades, including the rise of competency-based medical education, a shift to authentic workplace-based assessments, and increased emphasis on programmes of assessment. Despite these changes, there is still a commonly held assumption that objectivity always leads to and is the only way to achieve fairness in assessment. However, there are well-documented limitations to using objectivity as the 'gold standard' to which assessments are judged. Fairness, on the other hand, is a fundamental quality of assessment and a principle that almost no one contests. Taking a step back and changing perspectives to focus on fairness in assessment may help re-set a traditional objective approach and identify an equal role for subjective human judgement in assessment alongside objective methods. This paper explores fairness as a fundamental quality of assessments. This approach legitimises human judgement and shared subjectivity in assessment decisions alongside objective methods. Widening the answer to the question: 'What is fair assessment' to include not only objectivity but also expert human judgement and shared subjectivity can add significant value in ensuring learners are better equipped to be the health professionals required of the 21st century.


Subject(s)
Competency-Based Education , Educational Measurement/methods , Educational Measurement/standards , Health Occupations/education , Workplace , Humans , Judgment