Results 1 - 20 of 255
1.
JMIR Med Educ ; 10: e58126, 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38952022

ABSTRACT

Background: Multiple-choice examinations are frequently used in German dental schools. However, details regarding the item types used and the scoring methods applied are lacking. Objective: This study aims to gain insight into the current use of multiple-choice items (ie, questions) in summative examinations in German undergraduate dental training programs. Methods: A paper-based 10-item questionnaire regarding the assessment methods, multiple-choice item types, and scoring methods used was designed. The pilot-tested questionnaire was mailed to the deans of studies and to the heads of the Department of Operative/Restorative Dentistry at all 30 dental schools in Germany in February 2023. Statistical analysis was performed using the Fisher exact test (P<.05). Results: The response rate was 90% (27/30 dental schools). All responding dental schools used multiple-choice examinations for summative assessments. Examinations were delivered electronically by 70% (19/27) of the dental schools. Almost all dental schools used single-choice Type A items (24/27, 89%), which accounted for the largest number of items at approximately half of the dental schools (13/27, 48%). Other item types (eg, conventional multiple-select items, Multiple-True-False, and Pick-N) were used by fewer dental schools (≤67%, up to 18 out of 27 dental schools). For the multiple-select item types, the applied scoring methods varied considerably (ie, awarding [intermediate] partial credit and requirements for partial credit). Dental schools with the capability for electronic examinations used multiple-select items slightly more often (14/19, 74% vs 4/8, 50%), but this difference was not statistically significant (P=.38). Dental schools used items either individually or as key feature problems consisting of a clinical case scenario followed by a number of items focusing on critical treatment steps (15/27, 56%).
Not a single school used alternative testing methods (eg, answer-until-correct). A formal item review process was established at about half of the dental schools (15/27, 56%). Conclusions: Summative assessment methods vary widely among German dental schools. In particular, large variability was found in the use and scoring of multiple-select multiple-choice items.


Subject(s)
Education, Dental , Educational Measurement , Germany , Humans , Surveys and Questionnaires , Educational Measurement/methods , Education, Dental/methods , Schools, Dental
2.
Article in English | MEDLINE | ID: mdl-38981117

ABSTRACT

OBJECTIVES: We describe new curriculum materials for engaging secondary school students in exploring the "big data" in the NIH All of Us Research Program's Public Data Browser and the co-design processes used to collaboratively develop the materials. We also describe the methods used to develop and validate assessment items for studying the efficacy of the materials for student learning as well as preliminary findings from these studies. MATERIALS AND METHODS: Secondary-level biology teachers from across the United States participated in a 2.5-day Co-design Summer Institute. After learning about the All of Us Research Program and its Data Browser, they collaboratively developed learning objectives and initial ideas for learning experiences related to exploring the Data Browser and big data. The Genetic Science Learning Center team at the University of Utah further developed the educators' ideas. Additional teachers and their students participated in classroom pilot studies to validate a 22-item instrument that assesses students' knowledge. Educators completed surveys about the materials and their experiences. RESULTS: The "Exploring Big Data with the All of Us Data Browser" curriculum module includes 3 data exploration guides that engage students in using the Data Browser, 3 related multimedia pieces, and teacher support materials. Pilot testing showed substantial growth in students' understanding of key big data concepts and research applications. DISCUSSION AND CONCLUSION: Our co-design process provides a model for educator engagement. The new curriculum module serves as a model for introducing secondary students to big data and precision medicine research by exploring diverse real-world datasets.

3.
J Intell ; 12(6)2024 May 31.
Article in English | MEDLINE | ID: mdl-38921691

ABSTRACT

Standard learning assessments like multiple-choice questions measure what students know but not how their knowledge is organized. Recent advances in cognitive network science provide quantitative tools for modeling the structure of semantic memory, revealing key learning mechanisms. In two studies, we examined the semantic memory networks of undergraduate students enrolled in an introductory psychology course. In Study 1, we administered a cumulative multiple-choice test of psychology knowledge, the Intro Psych Test, at the end of the course. To estimate semantic memory networks, we administered two verbal fluency tasks: domain-specific fluency (naming psychology concepts) and domain-general fluency (naming animals). Based on their performance on the Intro Psych Test, we categorized students into a high-knowledge or low-knowledge group, and compared their semantic memory networks. Study 1 (N = 213) found that the high-knowledge group had semantic memory networks that were more clustered, with shorter distances between concepts (across both the domain-specific [psychology] and domain-general [animal] categories) compared to the low-knowledge group. In Study 2 (N = 145), we replicated and extended these findings in a longitudinal study, collecting data near the start and end of the semester. In addition to replicating Study 1, we found the semantic memory networks of high-knowledge students became more interconnected over time, across both domain-general and domain-specific categories. These findings suggest that successful learners show a distinct semantic memory organization, characterized by high connectivity and short path distances between concepts, highlighting the utility of cognitive network science for studying variation in student learning.
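As a toy illustration of the two network measures these studies compare (local clustering and path distance between concepts), the sketch below computes both on a small hand-built graph. The actual estimation of semantic networks from verbal-fluency data uses dedicated methods not shown here; only the measures themselves are illustrated.

```python
from collections import deque

# Undirected graph as an adjacency dict; nodes and edges are invented
# for illustration and do not come from the study's data.
triangle_plus = {
    "dog": {"cat", "wolf"},
    "cat": {"dog", "wolf"},
    "wolf": {"dog", "cat", "bear"},
    "bear": {"wolf"},
}

def clustering(graph, node):
    """Local clustering coefficient: fraction of neighbor pairs that are linked."""
    nbrs = list(graph[node])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in graph[nbrs[i]])
    return 2 * links / (k * (k - 1))

def avg_shortest_path(graph):
    """Mean BFS distance over all ordered pairs of connected nodes."""
    total, pairs = 0, 0
    for src in graph:
        dist = {src: 0}
        queue = deque([src])
        while queue:  # breadth-first search from src
            u = queue.popleft()
            for v in graph[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for node, d in dist.items():
            if node != src:
                total += d
                pairs += 1
    return total / pairs
```

A "more clustered" network with "shorter distances" would show higher clustering coefficients and a lower average shortest path than a sparser one.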

4.
BMC Med Educ ; 24(1): 636, 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38844972

ABSTRACT

BACKGROUND: General practitioner interns need to acquire the expected clinical, communication, personal and professional competencies. Internship evaluations use qualitative evaluation tables to assess competency acquisition. However, there is no standardised evaluation table used in France. Some faculties use the exhaustive, precise, and manageable Exceler evaluation tool. We aim to evaluate the opinions of general practice interns in Brest about the acceptability and feasibility of using the Exceler evaluation tool to monitor competency acquisition during internships. METHODS: This qualitative study used intern focus groups. Six open-ended questions with optional follow-up questions were asked. Cards from the Dixit® game were used to guide and facilitate discussion. Open, axial, then integrative analysis of the verbatim transcripts was performed. RESULTS: This is the first study to use focus groups to evaluate intern opinions about GP internship evaluations. Participants felt that the quality of existing evaluations was insufficient and that it was difficult to monitor their progress. Adapting evaluations to individual profiles and backgrounds seemed necessary. Exceler appeared to be a possible solution due to its content validity, flexibility of use and accessibility. However, there were comments about possible modifications. CONCLUSIONS: Analysing the opinions of tutors, supervisors and other practice centres could help identify potential barriers and reveal solutions to facilitate its implementation and use. TRIAL REGISTRATION: Not applicable.


Subject(s)
Clinical Competence , Feasibility Studies , Focus Groups , General Practice , Internship and Residency , Qualitative Research , Humans , Internship and Residency/standards , Clinical Competence/standards , General Practice/education , Educational Measurement/methods , Male , Female , Adult , France , Attitude of Health Personnel
5.
Rev Infirm ; 73(301): 32-34, 2024 May.
Article in French | MEDLINE | ID: mdl-38796242

ABSTRACT

In the context of smoking cessation, the shared educational assessment (BEP, bilan éducatif partagé) enables us to assess the smoker's needs, define specific objectives and set up appropriate educational workshops. This multidisciplinary approach helps smokers to maintain their smoking cessation. The BEP is the first step in the educational process, exploring the various classic dimensions of therapeutic patient education (TPE) and then defining an action plan based on the priorities identified.


Subject(s)
Patient Education as Topic , Smoking Cessation , Humans , Smoking Cessation/methods , Patient Education as Topic/methods
6.
J Dent Educ ; 88(5): 533-543, 2024 May.
Article in English | MEDLINE | ID: mdl-38314889

ABSTRACT

PURPOSE: Item analysis of multiple-choice questions (MCQs) is an essential tool for identifying items that can be stored, revised, or discarded to build a quality MCQ bank. This study analyzed MCQs based on item analysis to develop a pool of valid and reliable items and investigate stakeholders' perceptions regarding MCQs in a written summative assessment (WSA) based on this item analysis. METHODS: In this descriptive study, 55 questions each from 2016 to 2019 of WSA in preclinical removable prosthodontics for fourth-year undergraduate dentistry students were analyzed for item analysis. Items were categorized according to their difficulty index (DIF I) and discrimination index (DI). Students (2021-2022) were assessed using this question bank. Students' perceptions of and feedback from faculty members concerning this assessment were collected using a questionnaire with a five-point Likert scale. RESULTS: Of 220 items when both indices (DIF I and DI) were combined, 144 (65.5%) were retained in the question bank, 66 (30%) required revision before incorporation into the question bank, and only 10 (4.5%) were discarded. The mean DIF I and DI values were 69% (standard deviation [Std.Dev] = 19) and 0.22 (Std.Dev = 0.16), respectively, for 220 MCQs. The mean scores from the questionnaire for students and feedback from faculty members ranged from 3.50 to 4.04 and from 4 to 5, respectively, indicating that stakeholders tended to agree and strongly agree, respectively, with the proposed statements. CONCLUSION: This study assisted the prosthodontics department in creating a set of prevalidated questions with known difficulty and discrimination capacity.
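The difficulty index (DIF I) and discrimination index (DI) used above follow standard classical-test-theory definitions; a minimal sketch, assuming dichotomously scored items and the conventional upper/lower 27% split (the article does not specify its exact computation):

```python
def item_analysis(responses, n_extreme=None):
    """Classical item analysis for dichotomously scored MCQs.

    responses: list of per-student lists of 0/1 item scores.
    Returns, per item, the difficulty index (proportion correct overall)
    and the discrimination index (proportion correct in the top group
    minus the bottom group, ranked by total score).
    """
    n_students = len(responses)
    n_items = len(responses[0])
    if n_extreme is None:
        n_extreme = max(1, round(0.27 * n_students))  # conventional 27% split
    ranked = sorted(responses, key=sum, reverse=True)
    upper, lower = ranked[:n_extreme], ranked[-n_extreme:]

    results = []
    for i in range(n_items):
        dif = sum(s[i] for s in responses) / n_students
        di = (sum(s[i] for s in upper) - sum(s[i] for s in lower)) / n_extreme
        results.append({"item": i, "difficulty": dif, "discrimination": di})
    return results
```

Items would then be retained, revised, or discarded against thresholds on both indices, as the study describes.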


Subject(s)
Education, Dental , Educational Measurement , Prosthodontics , Prosthodontics/education , Humans , Education, Dental/methods , Educational Measurement/methods , Students, Dental/psychology , Surveys and Questionnaires , Stakeholder Participation
7.
Article in English | MEDLINE | ID: mdl-38387881

ABSTRACT

PURPOSE: Despite educational mandates to assess resident teaching competence, limited instruments with validity evidence exist for this purpose. Existing instruments do not allow faculty to assess resident-led teaching in a large group format or whether teaching was interactive. This study gathers validity evidence on the use of the Resident-led Large Group Teaching Assessment Instrument (Relate), an instrument used by faculty to assess resident teaching competency. Relate comprises 23 behaviors divided into six elements: learning environment, goals and objectives, content of talk, promotion of understanding and retention, session management, and closure. METHODS: Messick's unified validity framework was used for this study. Investigators used video recordings of resident-led teaching from three pediatric residency programs to develop Relate and a rater guidebook. Faculty were trained on instrument use through frame-of-reference training. Resident teaching at all sites was video-recorded during 2018-2019. Two trained faculty raters assessed each video. Descriptive statistics on performance were obtained. Validity evidence sources include: rater training effect (response process), reliability and variability (internal structure), and impact on Milestones assessment (relations to other variables). RESULTS: Forty-eight videos, from 16 residents, were analyzed. Rater training improved inter-rater reliability from 0.04 to 0.64. The Φ-coefficient reliability was 0.50. There was a significant correlation between overall Relate performance and the pediatric teaching Milestone, r = 0.34, P = .019. CONCLUSION: Relate provides validity evidence with sufficient reliability to measure resident-led large-group teaching competence.
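The abstract reports inter-rater reliability improving from 0.04 to 0.64 after rater training but does not name the coefficient used. Cohen's kappa, a common choice for two raters assigning categorical ratings, is sketched here purely for illustration:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters.

    rater_a, rater_b: equal-length lists of categorical ratings.
    """
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed agreement: proportion of items rated identically.
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independent marginal rating distributions.
    p_exp = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                for c in categories)
    if p_exp == 1.0:
        return 1.0  # degenerate case: both raters used a single category
    return (p_obs - p_exp) / (1 - p_exp)
```

Kappa of 0 means agreement no better than chance, which is roughly what the pre-training figure of 0.04 indicates; values above about 0.6 are conventionally read as substantial agreement.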


Subject(s)
Internship and Residency , Humans , United States , Child , Reproducibility of Results , Clinical Competence , Educational Measurement , Faculty
8.
MethodsX ; 12: 102531, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38204981

ABSTRACT

Evaluating text-based answers obtained in educational settings or behavioral studies is time-consuming and resource-intensive. Applying novel artificial intelligence tools such as ChatGPT might support the process. Still, currently available implementations do not allow for automated and case-specific evaluations of large numbers of student answers. To counter this limitation, we developed flexible software and a user-friendly web application that enable researchers and educators to use cutting-edge artificial intelligence technologies by providing an interface that combines large language models with options to specify questions of interest, sample solutions, and evaluation instructions for automated answer scoring. We validated the method in an empirical study and found high reliability between the software's scores and expert ratings. Hence, the present software constitutes a valuable tool to facilitate and enhance text-based answer evaluation.
• Generative AI-enhanced software for customizable, case-specific, and automated grading of large amounts of text-based answers.
• Open-source software and web application for direct implementation and adaptation.
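The interface idea described (combining a question, a sample solution, and evaluation instructions into one scoring request per student answer) might be sketched as follows. The function names and prompt layout are assumptions for illustration, not the actual software's API, and the model call itself is left abstract:

```python
def build_grading_prompt(question, sample_solution, instructions, answer):
    """Assemble one case-specific grading prompt per student answer.

    Layout is a hypothetical example; the published tool may differ.
    """
    return (
        f"Question:\n{question}\n\n"
        f"Sample solution:\n{sample_solution}\n\n"
        f"Evaluation instructions:\n{instructions}\n\n"
        f"Student answer:\n{answer}\n\n"
        "Return a score and a one-sentence justification."
    )

def grade_answers(question, sample_solution, instructions, answers, ask_model):
    """Score many answers with the same rubric.

    ask_model: any callable that sends a prompt string to a language
    model and returns its text response (deliberately left abstract).
    """
    return [ask_model(build_grading_prompt(question, sample_solution,
                                           instructions, a))
            for a in answers]
```

Keeping the model call behind a plain callable makes the scoring logic testable without network access, which matches the validation-against-expert-ratings workflow the abstract describes.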

9.
Rev. bras. educ. méd ; 48(1): e014, 2024. tab, graf
Article in Portuguese | LILACS-Express | LILACS | ID: biblio-1535559

ABSTRACT


Abstract Introduction: With the evolution of medical education towards competency-based curricula, the need has emerged to reconfigure curricula and assessment methods, with an increased focus on the professional practice setting, leading to the use of tools such as the Mini-CEX (Mini-Clinical Evaluation Exercise). Objective: To evaluate the use of the Mini-CEX strategy as an assessment method in medical residency programs. Method: This is a scoping review; the search performed on PubMed resulted in 578 articles. After applying the Joanna Briggs Institute methodology for inclusion and exclusion, 24 cross-sectional studies were selected. Results: The selected articles reported studies conducted between 1995 and 2021, on various continents and in both clinical and surgical residency programs, including outpatient, inpatient, and emergency settings. The Mini-CEX was shown to be applicable in the context of medical residency, as it is a direct observational assessment of the care provided by the resident physician in various practice settings such as outpatient clinics, inpatient wards, and emergency departments. It involves an observation time ranging from 10 to 40 minutes, allows for the evaluation of various aspects of medical care, including history taking, physical examination, clinical reasoning, and counseling, and provides an opportunity for feedback on the residents' performance. Conclusion: The Mini-CEX is a tool that is easy to implement and promotes a high degree of satisfaction among stakeholders; it could be used routinely in medical residency programs.

10.
Rev Med Interne ; 44(12): 632-640, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37923588

ABSTRACT

INTRODUCTION: Several studies suggest the relevance of healthcare simulation to prepare future doctors to deliver bad news. As such, we designed a role-play workshop to train first-year residents enrolled in Lille University School of Medicine to break bad news. The objective of this work is to report on our experience of this training and to assess its educational value through its capacity to satisfy residents' expectations, to induce a feeling of ease towards bad news disclosure, and to change trainees' preconceptions regarding these situations. METHODS: The training consisted of a 45-minute heuristic reflective activity, aimed at identifying residents' preconceptions regarding bad news disclosure, followed by four 30-minute role-plays in which they played the parts of the physician, the patient and/or their relatives. Trainees were asked to answer 2 questionnaires (pre- and post-training), exploring previous experiences, preconceived ideas regarding bad news disclosure and workshop satisfaction. RESULTS: Almost all residents felt very satisfied with the workshop, which they regarded as formative (91%) and not too stressful (89%). The majority felt "more capable" (53% vs. 83%) and "more comfortable" (27% vs. 62%) delivering bad news, especially regarding "finding the right words" (12% vs. 22%). Trainees tended to overestimate their skills before the workshop and lowered their assessment of their performance after attending the training, especially when they played the role of a patient in the simulation. CONCLUSION: Healthcare role-play seems an interesting technique for training in breaking bad news. Placing residents in the role of patients or relatives is an active approach that encourages reflexivity.


Subject(s)
Internship and Residency , Physician-Patient Relations , Humans , Truth Disclosure , Universities , Educational Status
11.
Article in English | LILACS | ID: biblio-1551410

ABSTRACT

The objective is to present an instrument for the daily assessment of attitudes and professionalism of medical students in theoretical-practical activities. The instrument was developed by professors, based on the manuals of the program for student integration with the community, on the program's pedagogical project, and on the National Curricular Guidelines for Undergraduate Programs in Medicine. The professors were consulted in weekly 50-minute meetings held between August and November 2016. At the end of the process, a version of the instrument was consolidated with five items and six descriptors to discriminate learning situations, enabling competency-based assessment from the simplest to the most complex level. The instrument ensures that points considered important in medical training in theoretical-practical activities are not overlooked.



Subject(s)
Schools, Medical , Educational Measurement , Feedback , Academic Performance
12.
JMIR Res Protoc ; 12: e49955, 2023 Oct 24.
Article in English | MEDLINE | ID: mdl-37874640

ABSTRACT

BACKGROUND: There has been a significant increase in the use of e-learning for global and public health education recently, especially following the COVID-19 pandemic. e-Learning holds the potential to offer equal opportunities, overcoming barriers like physical limitations and training costs. However, its effectiveness remains debated, with institutions unprepared for the sudden shift during the pandemic. To effectively evaluate the outcomes of e-learning, a standardized and rigorous approach is necessary. However, the existing literature on this subject often lacks standardized assessment tools and theoretical foundations, leading to ambiguity in the evaluation process. Consequently, it becomes imperative to identify a clear theoretical foundation and practical approach for evaluating global and public health e-learning outcomes. OBJECTIVE: This protocol for a scoping review aims to map the state of e-learning evaluation in global and public health education to determine the existing theoretical evaluation frameworks, methods, tools, and domains and the gaps in research and practice. METHODS: The scoping review will be conducted following the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines. The initial search was performed in PubMed, Education Resource Information Center, Web of Science, and Scopus to identify peer-reviewed articles that report on the use of evaluation and assessment for e-learning training. The search strings combined the concepts of e-learning, public health, and health science education, along with evaluation and frameworks. After the initial search, a screening process will be carried out to determine the relevance of the identified studies to the research question. Data related to the characteristics of the included studies, the characteristics of the e-learning technology used in the studies, and the study outcomes will be extracted from the eligible articles. 
The extracted data will then undergo a structured, descriptive, quantitative, and qualitative content analysis to synthesize the information from the selected studies. RESULTS: Initial database searches yielded a total of 980 results. Duplicates have been removed, and title and abstract screening of the 805 remaining articles is underway. Quantitative and qualitative findings from the reviewed articles will be presented to address the study objective. CONCLUSIONS: This scoping review will provide global and public health educators with a comprehensive overview of the current state of e-learning evaluation. By identifying existing e-learning frameworks and tools, the findings will offer valuable guidance for further advancements in global and public health e-learning evaluation. The study will also enable the creation of a comprehensive, evidence-based e-learning evaluation framework and tools, which will improve the quality and accountability of global health and public health education. Ultimately, this will contribute to better health outcomes. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/49955.

13.
BMC Med Educ ; 23(1): 788, 2023 Oct 24.
Article in English | MEDLINE | ID: mdl-37875929

ABSTRACT

Pass/fail (P/F) grading has emerged as an alternative to tiered clerkship grading. Systematically evaluating the existing literature and surveying program directors' (PDs') perspectives on these consequential changes can guide educators in addressing inequalities in academia and students aiming to improve their residency applications. In our survey, a total of 1578 unique PD responses (63.1%) were obtained across 29 medical specialties. With the changes to the United States Medical Licensing Examination (USMLE), responses showed increased importance of core clerkships with the implementation of Step 2 CK cutoffs. PDs believed core clerkship performance was a reliable representation of an applicant's preparedness for residency, particularly in the Accreditation Council for Graduate Medical Education's (ACGME) Medical Knowledge and Patient Care and Procedural Skills competencies. PDs disagreed with P/F core clerkships because they make it more difficult to objectively compare applicants. No statistically significant differences were found in PDs' preferential selection when comparing applicants from tiered and P/F core clerkship grading systems. If core clerkships adopted P/F scoring, PDs would place further emphasis on narrative assessment, sub-internship evaluation, reference letters, academic awards, professional development, and medical school prestige. In the meta-analysis of 6 studies with 2,118 participants, adjusted scaled scores (mean differences from an equal-variance model) showed that residents from tiered clerkship grading systems did not differ statistically significantly from residents from P/F systems in overall performance, learning ability, work habits, personal evaluations, residency selection, and educational evaluation. Overall, our dual study suggests that while PDs do not favor P/F core clerkships, they do not have a selection preference and do not report a difference in performance between applicants from P/F and tiered core clerkship grading systems, providing fertile ground for institutions to examine the feasibility of adopting P/F grading for core clerkships.


Subject(s)
Clinical Clerkship , Internship and Residency , Students, Medical , Humans , United States , Educational Measurement , Accreditation , Licensure, Medical
14.
Kans J Med ; 16: 234-236, 2023.
Article in English | MEDLINE | ID: mdl-37791030

ABSTRACT

Introduction: Encounters for preoperative assessments are common within primary care offices, so it is imperative that family medicine residents learn how to perform preoperative evaluations. We assessed family medicine residents' knowledge of preoperative evaluation in preparation for surgery by providing a pre- and post-test alongside a didactic seminar. Methods: A didactic seminar on preoperative evaluations was presented at a family medicine resident didactics session by two senior anesthesiology residents. A 16-question, multiple-choice test was used as both a pre-test and post-test to assess family medicine residents' knowledge. Results: A total of 31 participants took the pre-test (residents = 24; medical students = 7), and 30 participants took the post-test (residents = 23; medical students = 7). Mean scores and standard deviations were calculated for both tests, with average scores of 37.50% ± 10.58% and 45.42% ± 11.12% on the pre- and post-test, respectively. Using the Kruskal-Wallis test, residents showed a significant improvement in test scores following the didactic presentation (p = 0.041), while overall results (residents and medical students) also showed a significant difference (p = 0.004). Conclusions: Educating family medicine residents and medical students on preoperative evaluation produced significant, quantifiable gains in knowledge following a brief didactic presentation. Given the current gap between guidelines and practice, our results emphasize the need for a formal medical school and residency-based curriculum related to preoperative patient evaluation.

15.
Korean J Med Educ ; 35(3): 285-290, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37670524

ABSTRACT

PURPOSE: This study investigates the characteristics of different item types to assess learning outcomes and explore the educational implications that can be obtained from the results of learning outcome assessments. METHODS: Forty-five second-year premedical students participated in this study. Multiple choice question (MCQ) and short essay question (SEQ) scores and pass rates for 10 learning outcomes were analyzed. Descriptive statistics and correlation analysis were used to analyze the data. RESULTS: The correlation analysis indicated that there was a significant correlation between SEQs and pass rate but there was no significant correlation between MCQs and pass rate. Some students with identical scores on the MCQs had different scores on the SEQs or on the learning outcomes. CONCLUSION: This study showed that students' achievement of learning outcomes can be assessed using various types of questions in outcome-based education.


Subject(s)
Academic Success , Learning , Humans , Students , Students, Premedical
16.
Pharm. pract. (Granada, Internet) ; 21(3): 1-8, jul.-sep. 2023. tab
Article in English | IBECS | ID: ibc-226183

ABSTRACT

Background: Neuropsychiatric disease is common globally. It is vital to train pharmacists to provide patient-centered care in neuropsychiatry. Objective: To evaluate the impact of student-created vignettes on students' knowledge and ability to assess and manage patients with neuropsychiatric diseases, and to evaluate their experience. Methods: Several learning/assessment methodologies within the Therapeutics III course were utilized, including a major assignment in which students created vignettes about neuropsychiatric diseases. A framework guided students in creating the vignettes, covering conception, design, and administration. Created vignettes were evaluated based on a validated scoring guide. Mean scores in various assessments were compared using Spearman's rank-order correlation. Students evaluated their experience on a 5-point Likert-type scale (1 = Strongly Disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly Agree). Results: Overall, students' performance on the assignment was excellent, with an average score of 92%. A significant correlation existed between the vignette assignment and assessments covering neuropsychiatric disease. Most students agreed they were made aware of what needed to be done (95%) and that the instructions about elements to include, designs, and delivery mechanisms were sufficient (93.4%, 86.7%, and 93.4%, respectively). Most students agreed that developing the vignette was stimulating, engaging, and enjoyable (93.3%, 90%, and 88.3%, respectively). Students stated they felt confident in their scientific background knowledge (88.3%), in employing communication strategies with patients (85%) and their families (83.3%), and in promoting and supporting patients with these diseases.


Subject(s)
Humans , Students, Pharmacy , Neuropsychiatry/education , Knowledge , Mental Disorders , Educational Measurement , Curriculum , Personal Satisfaction
18.
Percept Mot Skills ; 130(4): 1732-1761, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37137162

ABSTRACT

Scholars refer to individuals who have been immersed in digital environments and who make easy use of digital languages to interact with the world as "digital natives," and Teo proposed four attributes of digital natives to illustrate their behavioral tendencies. We aimed to expand Teo's framework and to develop and validate the Scale of Digital Native Attributes (SDNA) for measuring cognitive and social interactive attributes of digital natives. Based on pre-test results, we retained 10 attributes and 37 SDNA items, with 3-4 items in each sub-dimension. We then recruited 887 Taiwanese undergraduates as respondents and conducted confirmatory factor analysis to establish construct validity. Moreover, the SDNA correlated with several other related measurements to demonstrate satisfactory criterion-related validity. Internal consistency was evaluated by McDonald's Omega (ω) and Cronbach's α coefficient, showing satisfactory reliability. This preliminary tool is now ready for cross validation and temporal reliability testing in further research.


Subject(s)
Language , Students , Humans , Reproducibility of Results , Cognition , Surveys and Questionnaires , Psychometrics
19.
Clin Anat ; 36(7): 986-992, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37212241

ABSTRACT

Student success in basic medical science courses is typically determined by their individual performance on examinations of various types. Previous research both within and outside medical education has shown that the use of educational assessment activities can increase learning as demonstrated by performance on subsequent examinations, a phenomenon known as the testing effect. Activities primarily designed and used for assessment and evaluation purposes can also be used as teaching opportunities. We developed a method for measuring and evaluating student accomplishment in a preclinical basic science course that incorporates both individual and collaborative efforts, encourages and rewards active participation, does not compromise the reliability of the assessment outcome and is perceived by the students as helpful and valuable. The approach involved a two-part assessment activity composed of an individual examination and a small group examination with each component differentially weighted in determining an overall examination score. We found that the method was successful in encouraging collaborative efforts during the group component and provided valid measures of student grasp of the subject matter. We describe the development and implementation of the method, provide data derived from its use in a preclinical basic science course and discuss factors to be addressed when utilizing this approach to ensure fairness and reliability of the outcome. We include brief summary comments from students regarding their impressions of the value of this method.
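The differentially weighted two-part score described above can be sketched in a few lines. The 80/20 weighting here is an assumption for illustration; the article does not state the weights it used:

```python
def overall_exam_score(individual, group, w_individual=0.8):
    """Combine individual and small-group exam scores (same 0-100 scale).

    w_individual: weight on the individual component; the remainder goes
    to the group component. 0.8 is a hypothetical default, not the
    article's actual weighting.
    """
    return w_individual * individual + (1 - w_individual) * group
```

Weighting the individual component heavily preserves the reliability of the assessment outcome, while the smaller group component still rewards the collaborative effort the method is designed to encourage.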


Subject(s)
Education, Medical , Educational Measurement , Humans , Reproducibility of Results , Educational Measurement/methods , Learning , Curriculum
20.
Soins ; 68(873): 28-31, 2023 Mar.
Article in French | MEDLINE | ID: mdl-37037640

ABSTRACT

The MOTIV-SEP therapeutic education program for people with multiple sclerosis integrates the cognitive, emotional, behavioral and social components of therapeutic compliance. These components allow for a person-centered approach, in order to help patients better navigate the important stage of starting treatment, and to anticipate and reduce barriers to compliance.


Subject(s)
Multiple Sclerosis , Humans , Male , Female , Multiple Sclerosis/therapy , Patient Compliance