Results 1 - 20 of 894
1.
JMIR Med Educ ; 10: e58126, 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38952022

ABSTRACT

Background: Multiple-choice examinations are frequently used in German dental schools. However, details regarding the item types used and the scoring methods applied are lacking. Objective: This study aims to gain insight into the current use of multiple-choice items (ie, questions) in summative examinations in German undergraduate dental training programs. Methods: A paper-based 10-item questionnaire regarding the assessment methods, multiple-choice item types, and scoring methods used was designed. The pilot-tested questionnaire was mailed to the deans of studies and to the heads of the Department of Operative/Restorative Dentistry at all 30 dental schools in Germany in February 2023. Statistical analysis was performed using the Fisher exact test (P<.05). Results: The response rate was 90% (27/30 dental schools). All responding dental schools used multiple-choice examinations for summative assessments. Examinations were delivered electronically by 70% (19/27) of the dental schools. Almost all dental schools used single-choice Type A items (24/27, 89%), which accounted for the largest number of items in approximately half of the dental schools (13/27, 48%). Other item types (eg, conventional multiple-select items, Multiple-True-False, and Pick-N) were used by fewer dental schools (≤67%, up to 18 out of 27 dental schools). For the multiple-select item types, the scoring methods applied varied considerably (ie, whether [intermediate] partial credit was awarded and what was required to earn it). Dental schools with the capability for electronic examinations used multiple-select items slightly more often (14/19, 74% vs 4/8, 50%), but this difference was not statistically significant (P=.38). Dental schools used items either individually or as key feature problems consisting of a clinical case scenario followed by a number of items focusing on critical treatment steps (15/27, 56%).
Not a single school used alternative testing methods (eg, answer-until-correct). A formal item review process was established at about half of the dental schools (15/27, 56%). Conclusions: Summative assessment methods vary widely among German dental schools. In particular, considerable variability was found in the use and scoring of multiple-select multiple-choice items.
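The scoring variants the survey distinguishes (all-or-nothing, partial credit, and Multiple-True-False-style credit for every option decision) can be made concrete. A minimal Python sketch with a hypothetical multiple-select item; the option sets and scoring rules are illustrative only, not taken from any surveyed school:

```python
# Hypothetical multiple-select item: key is the set of correct options,
# response is the set of options the examinee marked.

def dichotomous(key, response):
    """All-or-nothing scoring: full credit only for an exact match."""
    return 1.0 if response == key else 0.0

def partial_credit(key, response):
    """Fraction of correct options selected (one common Pick-N rule;
    many variants exist, e.g. with penalties for wrong selections)."""
    if not key:
        return 0.0
    return len(key & response) / len(key)

def balanced_credit(key, response, n_options):
    """Multiple-True-False-style scoring: credit for every correct
    decision, i.e. correct selections plus correct omissions."""
    correct_marks = len(key & response)
    correct_blanks = (n_options - len(key)) - len(response - key)
    return (correct_marks + correct_blanks) / n_options

key = {"A", "C", "E"}
resp = {"A", "C", "D"}                        # two right picks, one wrong
print(dichotomous(key, resp))                 # 0.0
print(round(partial_credit(key, resp), 2))    # 0.67
print(round(balanced_credit(key, resp, 5), 1))  # 0.6
```

The spread between 0.0, 0.67, and 0.6 for the same response illustrates why the abstract's finding of heterogeneous scoring rules matters: the same answer can earn very different credit depending on the rule in force.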


Subject(s)
Education, Dental , Educational Measurement , Germany , Humans , Surveys and Questionnaires , Educational Measurement/methods , Education, Dental/methods , Schools, Dental
2.
J Chiropr Educ ; 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38852943

ABSTRACT

OBJECTIVE: Since 1963, the Canadian Chiropractic Examining Board has conducted competency examinations for individuals seeking licensure to practice chiropractic in Canada. To maintain currency with changes in practice, examination content and methodology have been regularly updated since that time. This paper describes the process used by the Canadian Chiropractic Examining Board to restructure the examination to ensure it was current and to align it with the Federation of Canadian Chiropractic's 2018 Canadian Chiropractic Entry-to-Practice Competency Profile. METHODS: A subject-matter-expert committee developed proposed candidate outcomes (indicators) for a new examination, derived from the competency profile. A national survey of practice was undertaken to determine the importance and frequency of use of the profile's enabling competencies. Survey results, together with other practice-based data and further subject-matter-expert input, were used to validate the indicators and to create a new structure for the examination. RESULTS: The new examination combines single-focus and case-based multiple-choice questions with OSCE (objective structured clinical examination) methodology. Content mapping and item weighting were determined by a blueprinting committee and are provided. CONCLUSION: Administration of the new examination commenced in early 2024.

3.
Technol Health Care ; 2024 Apr 27.
Article in English | MEDLINE | ID: mdl-38788102

ABSTRACT

BACKGROUND: Dental education is considered a complex, challenging, and often stressful educational process. The acquisition of psychomotor skills by undergraduate students is an important step toward becoming a successful professional in many health fields. During undergraduate training, the class II cavity preparation exercise is of utmost importance in dentistry. OBJECTIVE: To compare class II cavities prepared by students after a hands-on live demonstration versus a pre-recorded video demonstration, using well-organised evaluation rubrics. METHOD: Preclinical dental students (n=50) were divided into two groups. The students in group I (n=25) attended a hands-on live demonstration performed by one faculty member, while students in group II (n=25) watched a 15-minute pre-recorded procedural video on a projector. Both groups were then asked to prepare a class II cavity for amalgam involving the disto-occlusal surface of a mandibular second molar articulated on a jaw model (TRU LON study model, Jayna industries, Ghaziabad U.P., India). Following completion of the preparations, all teeth were collected and the prepared cavities were graded according to prespecified rubrics. Scores were presented as means and standard deviations. Statistical analysis was performed using SPSS software. A paired t-test was used to compare scores between groups. RESULTS: The study shows that the video-supported demonstration of a cavity preparation was better than the live hands-on demonstration, with a higher mean score in the procedural video group than in the live demonstration group (p = 0.000133). CONCLUSION: A pre-recorded video-supported demonstration, along with guidance by a tutor, may be a viable alternative to a hands-on live demonstration for cavity preparation procedures during undergraduate dental training. Moreover, rubric methods can be implemented in the teaching of various preclinical exercises in conservative dentistry and endodontics.

4.
BMC Med Educ ; 24(1): 555, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38773470

ABSTRACT

BACKGROUND: The Progress Test is an individual assessment administered to all students at the same time and on a regular basis. The test was introduced into the undergraduate medical education of a consortium of schools to build a programmatic assessment integrated into teaching. This paper presents the results of four serial applications of the progress test and the method used to give students feedback. METHODS: The assessment comprises 120 items offered online by means of a personal password. Items are authored by faculty, peer-reviewed, and approved by a committee of experts. The items are classified by five major areas, by topics used by the National Board of Medical Examiners, and by medical specialties related to the national Unified Health System. Scoring uses Item Response Theory with a Rasch model analysis, which accounts for item difficulty. RESULTS: Student participation increased across the four editions of the test, relative to the number of enrollments. Median performance increased in comparisons between sequential years in all tests except test 1, the first test offered to the schools. Between consecutive years of education (2nd vs 1st, 4th vs 3rd, and 5th vs 4th), median scores increased from progress tests 2 through 4. The final undergraduate year showed only a limited increase compared to the 5th year. There is a consistent increase in the median, although with fluctuations between the observed intervals. CONCLUSION: The progress test promoted regular feedback among students, teachers, and coordinators, and paved the road to the engagement needed to construct an institutional programmatic assessment.
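The Rasch scoring mentioned above models the probability of a correct response as a logistic function of the gap between a person's ability θ and the item's difficulty b, which is how the correction "considers the difficulty of the item". A brief illustrative sketch with toy ability and difficulty values (not the study's data):

```python
import math

def rasch_p(theta, b):
    """Rasch (one-parameter logistic) probability that a person of
    ability theta answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def expected_score(theta, difficulties):
    """Expected raw score for one person over a set of items."""
    return sum(rasch_p(theta, b) for b in difficulties)

# When ability exactly matches item difficulty, success probability is 50%:
print(rasch_p(0.5, 0.5))  # 0.5

# Expected score on three items of difficulty -1, 0, +1 for an
# average-ability person (theta = 0); by symmetry this is about 1.5:
print(expected_score(0.0, [-1.0, 0.0, 1.0]))
```

Under this model, two students with the same raw score on item sets of different difficulty receive different ability estimates, which is what makes serial progress-test editions comparable.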


Subject(s)
Education, Medical, Undergraduate , Educational Measurement , Humans , Educational Measurement/methods , Students, Medical
5.
Curr Pharm Teach Learn ; 16(7): 102101, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38702261

ABSTRACT

INTRODUCTION: Artificial intelligence (AI), particularly ChatGPT, is becoming more and more prevalent in the healthcare field for tasks such as disease diagnosis and medical record analysis. The objective of this study was to evaluate the proficiency and accuracy of ChatGPT across different domains of clinical pharmacy cases and queries. METHODS: The study compared ChatGPT's responses to pharmacotherapy cases and questions from McGraw Hill's NAPLEX® Review Questions, 4th edition, pertaining to 10 different chronic conditions, against the answers provided by the book's authors. The proportion of correct responses was collected and analyzed using the Statistical Package for the Social Sciences (SPSS) version 29. RESULTS: When tested in English, ChatGPT had substantially higher mean scores than when tested in Turkish: the mean accuracy scores for English and Turkish were 0.41 ± 0.49 and 0.32 ± 0.46, respectively (p = 0.18). Responses to queries beginning with "Which of the following is correct?" were considerably more accurate than those beginning with "Mark all the incorrect answers" (0.66 ± 0.47 vs 0.16 ± 0.36, p = 0.01 in English; 0.50 ± 0.50 vs 0.14 ± 0.34, p < 0.05 in Turkish). CONCLUSION: ChatGPT displayed a moderate level of accuracy when responding to English queries but only a slight level of accuracy when responding to Turkish queries, contingent upon the question format. Improving the accuracy of ChatGPT in languages other than English requires the incorporation of several components. The integration of the English version of ChatGPT into clinical practice has the potential to improve the effectiveness, precision, and standard of patient care by supplementing personal expertise and professional judgment. However, it is crucial to utilize the technology as an adjunct to, not a replacement for, human decision-making and critical thinking.


Subject(s)
Artificial Intelligence , Humans , Turkey , Reproducibility of Results , Artificial Intelligence/standards , Surveys and Questionnaires , Language
6.
J Med Imaging Radiat Sci ; 55(4): 101426, 2024 May 25.
Article in English | MEDLINE | ID: mdl-38797622

ABSTRACT

BACKGROUND: The aim of this study was to describe the proficiency of ChatGPT (GPT-4) on certification-style exams from the Canadian Association of Medical Radiation Technologists (CAMRT) and to describe its performance across multiple exam attempts. METHODS: ChatGPT was prompted with questions from CAMRT practice exams in the disciplines of radiological technology, magnetic resonance imaging (MRI), nuclear medicine, and radiation therapy (87-98 questions each). ChatGPT attempted each exam five times. Exam performance was evaluated using descriptive statistics, stratified by discipline and question type (knowledge, application, critical thinking). Light's kappa was used to assess agreement in answers across attempts. RESULTS: Using a passing grade of 65%, ChatGPT passed the radiological technology exam only once (20%), MRI all five times (100%), nuclear medicine three times (60%), and radiation therapy all five times (100%). ChatGPT's performance was best on knowledge questions across all disciplines except radiation therapy; it performed worst on critical thinking questions. Agreement in ChatGPT's responses across attempts was substantial within the disciplines of radiological technology, MRI, and nuclear medicine, and almost perfect for radiation therapy. CONCLUSION: ChatGPT (GPT-4) was able to pass certification-style exams for radiation technologists and therapists, but its performance varied between disciplines. The algorithm demonstrated substantial to almost perfect agreement in the responses it provided across multiple exam attempts. Future research evaluating ChatGPT's performance on standardized tests should consider using repeated measures.
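Light's kappa, used above to quantify consistency across the five attempts, is simply the mean of Cohen's kappa over all pairs of raters (here, attempts). A self-contained sketch with toy answer lists, not the CAMRT data:

```python
from itertools import combinations

def cohen_kappa(a, b):
    """Cohen's kappa for two raters' categorical labels on the same items."""
    assert len(a) == len(b)
    n = len(a)
    cats = set(a) | set(b)
    po = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)  # chance
    if pe == 1.0:
        return 1.0
    return (po - pe) / (1 - pe)

def lights_kappa(ratings):
    """Light's kappa: mean pairwise Cohen's kappa over all rater pairs."""
    pairs = list(combinations(ratings, 2))
    return sum(cohen_kappa(a, b) for a, b in pairs) / len(pairs)

# Three exam 'attempts' answering the same six questions (options A-D);
# the third attempt differs on one question:
attempts = [
    ["A", "B", "C", "A", "D", "B"],
    ["A", "B", "C", "A", "D", "B"],
    ["A", "B", "C", "B", "D", "B"],
]
print(round(lights_kappa(attempts), 3))  # 0.846
```

On the conventional Landis-Koch scale, 0.61-0.80 is "substantial" and above 0.80 "almost perfect" agreement, the bands the abstract refers to.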

7.
Rev Med Interne ; 45(6): 327-334, 2024 Jun.
Article in French | MEDLINE | ID: mdl-38643040

ABSTRACT

INTRODUCTION: Objective Structured Clinical Examinations (OSCEs) assess professional performance in a simulated environment. Following their integration into the reform of the 2nd cycle of medical studies (R2C), this pedagogical modality was implemented in France. This study investigates the variability of students' OSCE scores as well as their inter-rater reproducibility. METHODS: This single-center retrospective study covered several sessions of evaluative OSCE circuits conducted between January 2022 and June 2023. The variables collected were: baseline situation family, competency domain, and presence of a standardized participant for stations; gender and professional status for evaluators; and scores (global, clinical, and communication skills), number of previously completed OSCE circuits, and faculty scores for students. RESULTS: The variability of the overall score was explained mainly (79.7%, 95% CI [77.4; 82.0]) by the station factor. The student factor and the circuit factor explained 7.5% [12.9; 20.2] and <0.01% [2×10^-13; 2×10^-9], respectively. The inter-rater intraclass correlation coefficient was 87.2% [86.4; 87.9] for the global score. Station characteristics (starting situation, domain) and evaluator characteristics (gender, status) were significantly associated with score variations. CONCLUSION: This first study of the variability of OSCE circuit scores in France shows good reproducibility, with an influence of station characteristics. To standardize circuits, variability linked to the competency domain should be considered as well.


Subject(s)
Clinical Competence , Educational Measurement , Observer Variation , Students, Medical , Humans , Educational Measurement/methods , Educational Measurement/standards , Retrospective Studies , Clinical Competence/standards , Clinical Competence/statistics & numerical data , Female , France , Male , Students, Medical/statistics & numerical data , Reproducibility of Results
8.
HCA Healthc J Med ; 5(1): 49-54, 2024.
Article in English | MEDLINE | ID: mdl-38560390

ABSTRACT

Background: We endeavored to create an evidence-based curriculum to improve general surgery residents' fund of knowledge. Global and resident-specific interventions were employed to this end. These interventions were monitored via weekly multiple-choice question results and American Board of Surgery In-Training Examination (ABSITE) performance. Methods: This study was performed prospectively over a 2-year period. A structured textbook review with testing was implemented for all residents. A focused textbook question-writing assignment and a Surgical Council on Resident Education (SCORE)-based individualized learning plan (ILP) were implemented for residents scoring below the 35th percentile on the ABSITE. Results: Curriculum implementation resulted in a statistically significant reduction in the number of residents scoring below the 35th percentile, from 50% to 30.8% (P = .023). All residents initially scoring below the 35th percentile were successfully remediated over the study period. The program's average ABSITE percentile score increased from 38.5 to 51.4 over the 2-year period. Conclusion: Structured textbook review and testing, combined with a question-writing assignment and a SCORE-focused ILP, successfully remediated residents scoring below the 35th percentile and improved the general surgery residency's ABSITE performance.

9.
Anaesth Intensive Care ; : 310057X241234676, 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38649296

ABSTRACT

The role of self-assessment in workplace-based assessment remains contested. However, anaesthesia trainees need to learn to judge the quality of their own work. Entrustment scales have facilitated a shared understanding of performance standards among supervisors by aligning assessment ratings with everyday clinical supervisory decisions. We hypothesised that if the entrustment scale similarly helped trainees in their self-assessment, there would be substantial agreement between supervisor and trainee ratings. We collected separate mini-clinical evaluation exercises forms from 113 anaesthesia trainee-supervisor pairs from three hospitals in Australia and New Zealand. We calculated the agreement between trainee and supervisor ratings using Pearson and intraclass correlation coefficients. We also tested for associations with demographic variables and examined narrative comments for factors influencing rating. We found ratings agreed in 32% of cases, with 66% of trainee ratings within one point of the supervisor rating on a nine-point scale. The correlation between trainee and supervisor ratings was 0.71, and the degree of agreement measured by the intraclass correlation coefficient was 0.67. With higher supervisor ratings, trainee ratings better correlated with supervisor ratings. We found no strong association with demographic variables. Possible explanations of divergent ratings included one party being unaware of a vital aspect of the performance and different interpretations of the prospective nature of the scale. The substantial concordance between trainee and supervisor ratings supports the contention that the entrustment scale helped produce a shared understanding of the desired performance standard. Discussion between trainees and supervisors on the reasoning underlying their respective judgements would provide further opportunities to enhance this shared understanding.
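The two agreement figures reported above (ratings within one point, and the correlation between trainee and supervisor) are straightforward to compute. A minimal sketch with invented nine-point ratings; the data are toy values, not the study's:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between paired ratings."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def agreement_within(x, y, tol=1):
    """Proportion of pairs whose ratings differ by at most tol points."""
    return sum(abs(a - b) <= tol for a, b in zip(x, y)) / len(x)

# Toy trainee/supervisor ratings on a nine-point entrustment scale:
trainee    = [6, 7, 5, 8, 6, 4, 7, 9]
supervisor = [6, 8, 5, 7, 7, 5, 7, 9]
print(agreement_within(trainee, supervisor))      # 1.0 (all within one point)
print(round(pearson_r(trainee, supervisor), 2))
```

The study's intraclass correlation additionally penalizes systematic offsets between raters (not just lack of linear association), which is why it can sit below the Pearson coefficient, as it does here (0.67 vs 0.71).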

10.
J Adv Med Educ Prof ; 12(2): 111-117, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38660432

ABSTRACT

Introduction: The Direct Observation of Practical Skills (DOPS) test is a valuable method for clinical assessment. This study aimed to implement the DOPS test to assess procedural skills in community dentistry courses and to evaluate its effects on mastery learning and on the satisfaction of professors and students at the Tabriz faculty of dentistry in 2021-2022. Methods: In a quasi-experimental study, 60 dentistry students from one class were assigned to a study group (n=30) and a control group (n=30) by permuted block randomization. In the study group, skills related to fluoride therapy, fissure sealant therapy, and health education were evaluated by DOPS; in the control group, these skills were evaluated by traditional methods. Each test was repeated three times. Finally, the satisfaction of students in the study group was assessed by a questionnaire. The chi-square test was used to compare qualitative variables, and repeated-measures ANOVA was used to compare mean scores across the three stages and the two groups. A P value less than 0.05 was considered significant. Data were analyzed using SPSS 16 software. Results: A significant difference in the mean scores for fluoride therapy, pit and fissure sealant therapy, and health education was seen between the study and control groups (P<0.001), and a significant increase in these skills was observed in the third stage of assessment in the study group (P<0.001). Professors' and students' satisfaction with the DOPS test was considerably high. Conclusion: The DOPS method had more impact than conventional evaluation on the learning of fluoride therapy, pit and fissure sealant therapy, and health education in dentistry students. The professors' and students' satisfaction level with DOPS was high. The advantages of the DOPS method are student-centeredness, objectivity, and appropriate feedback.

11.
J Chiropr Educ ; 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38621691

ABSTRACT

OBJECTIVE: To evaluate the association of basic science curriculum delivery method, together with other academic and demographic factors, with National Board of Chiropractic Examiners (NBCE) part I pass rates. METHODS: This was a retrospective cohort study of students from 3 campuses of 1 chiropractic institution who matriculated in 2018 or 2020. COVID-19 regulations required online delivery of the basic science curriculum for students in the 2020 cohorts, whereas students in the 2018 cohorts experienced traditional classroom delivery. A general linear model estimated odds ratios for passing NBCE part I, comparing individual online cohorts with the combined classroom cohort while adjusting for academic and demographic variables. RESULTS: A total of 968 students were included, 55% from the classroom cohort. The spring 2020 cohort had the fewest students with bachelor's degrees (59%) and more students with high in-program grade point averages (GPA; 61%), along with the lowest estimated odds ratio (0.80, 95% CI: 0.73-0.87) for passing vs the classroom cohort. The fall 2020 cohort had significantly higher odds (1.06, 95% CI: 1.00-1.03) of passing vs the classroom cohort. Additional predictors included main campus matriculation, white ethnicity, a bachelor's degree, no alternative admission status, and in-program GPA. Students with a high (vs low) in-program GPA had 36% higher odds of passing. CONCLUSION: Compared to the classroom cohort, the spring 2020 cohort had the lowest odds and the fall 2020 cohort the highest odds of passing part I. In-program GPA had the strongest association with passing. These results provide information on how curriculum delivery affects board exam performance.

12.
Med Sci Educ ; 34(2): 363-370, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38686154

ABSTRACT

The aim of this study was to assess the publication fate of research projects performed during the research year by students enrolled in a Master's degree (MSc) of surgical science and to identify factors associated with subsequent publication. An anonymous online survey of 35 questions was sent to students enrolled in the MSc of surgical science between 2013 and 2020. The questionnaire covered the student's characteristics, the topic and supervision of the research project developed during the research year, and the dissemination of the research work. Data regarding publication were collected using the PubMed database. Factors associated with publication were identified by univariate analysis. Among 361 students, 26% completed the survey. Among respondents, the publication rate of research projects was 53.7%. The median time interval between the end of the research year and the date of publication was 2 (1-3) years. The student was listed as first author in 70.6% of publications. Factors associated with publication of the research work were the student's previous publications (P = 0.041) and presentation of the research work at academic conferences (P = 0.005). The most frequently mentioned cause of non-publication was failure to complete the research work. Among respondents, the publication rate of research performed during the MSc was high, which emphasizes the quality of the work carried out by the students and their involvement. Significant efforts must be undertaken to encourage the enrollment of residents in scientific research. Supplementary Information: The online version contains supplementary material available at 10.1007/s40670-023-01973-y.

13.
Article in English | MEDLINE | ID: mdl-38502461

ABSTRACT

According to the World Health Organization, integrative medicine needs to be safe, effective, and of quality. In 2010, the American Society of Teachers of Family Medicine approved 19 competencies for teaching integrative medicine to residents. In 2018, the University of Rennes created a course, "Integrative Medicine and Complementary Therapies". Until then, the only feedback on the course was the students' opinions. We investigated its impact on medical students' social representation. We performed a sociological analysis of students' social representations before and after the course. A social representation reflects the way an individual constructs his or her universe of beliefs and ideas. In response to the prompt "What word or group of words comes to mind when you hear people speak of integrative medicine and complementary therapies?", students were asked to provide 5 words/phrases, rank their importance, and indicate their attitude towards them. The frequency and importance of these words/phrases were used to construct social representations (with central cores, and primary and secondary peripheries) before and after the course. Among the 101 students registered, 59 provided complete responses before the course and 63 after. Before the course, the central core comprised "hypnosis" and "alternative medicine"; after, "complementary care" and "global care". A first periphery, "acupuncture" and "homeopathy", was identified only before the course. After the course, 4 new contrasting elements appeared: "integration with conventional treatment", "patient's choice", "personalisation of care", and "caring relationship of trust". This teaching course positively affected students' social representation of integrative medicine and might promote its use in their future practice.

14.
Adv Physiol Educ ; 48(2): 407-413, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38545641

ABSTRACT

Emotional intelligence (EI) has a positive correlation with the academic performance of medical students. However, why this correlation exists needs further exploration. We hypothesized that the capability of answering higher-order knowledge questions (HOQs) is higher in students with higher EI. Hence, we assessed the correlation between EI and the capability of medical students to answer HOQs in physiology. First-year undergraduate medical students (n = 124) from an Indian medical college were recruited as a convenience sample. EI was assessed by the Schutte Self-Report Emotional Intelligence Test (SSEIT), a 33-item self-administered validated questionnaire. A specially designed objective examination with 15 lower-order and 15 higher-order multiple-choice questions was conducted. The correlation between examination score and EI score was tested by Pearson's correlation coefficient. Data from 92 students (33 females and 59 males) with a mean age of 20.14 ± 1.87 yr were analyzed. Overall, students scored 53.37 ± 14.07% in the examination, with 24.46 ± 9.1 in HOQs and 28.91 ± 6.58 in lower-order knowledge questions (LOQs). They had a mean SSEIT score of 109.58 ± 46.2. The correlation coefficient of the SSEIT score with total marks was r = 0.29 (P = 0.0037), with HOQs r = 0.41 (P < 0.0001), and with LOQs r = 0.14 (P = 0.19). Hence, there is a positive correlation between EI and the capability of medical students to answer HOQs in physiology. This study may be the foundation for further exploration of the capability of answering HOQs in other subjects.NEW & NOTEWORTHY This study assessed the correlation between emotional intelligence (EI) and the capability of medical students to answer higher-order knowledge questions (HOQs) in the specific context of physiology. The finding reveals one of the multifaceted dimensions of the relationship between EI and academic performance.
This novel perspective opens the door to further investigations of this relationship in other subjects and other dimensions, to understand why students with higher EI have higher academic performance.


Subject(s)
Education, Medical, Undergraduate , Emotional Intelligence , Physiology , Students, Medical , Humans , Students, Medical/psychology , Emotional Intelligence/physiology , Female , Male , Physiology/education , Young Adult , Education, Medical, Undergraduate/methods , Educational Measurement/methods , Surveys and Questionnaires
15.
Physiother Can ; 76(1): 111-120, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38465297

ABSTRACT

Purpose: Clinical education and assessment of students' performance during clinical placements are key components of Canadian entry-to-practice physiotherapy curricula and are important in developing entry-level physiotherapy practitioners. The Canadian Physiotherapy Assessment of Clinical Performance (ACP) is the measure currently used to assess physiotherapy student performance on clinical placements in most of the entry-to-practice physiotherapy programmes across Canada. The release of the 2017 Competency Profile by the National Physiotherapy Advisory Group resulted in a revision of the existing ACP. The purpose of this study is to report the process used to develop a revised version of the ACP based on the 2017 Competency Profile, henceforth called the ACP 2.0. Method: Using a multistage process, we sought input from Canadian clinical education academics, an expert consultant panel, and physiotherapists across Canada using a questionnaire, meetings, and an online survey, respectively. Results: Twelve of 15 clinical education academics responded to the questionnaire. The expert consultant panel (n = 12) met three times. A total of 144 physiotherapists initiated the national online survey and met the inclusion criteria; 84 completed it. In the ACP 2.0, rating scales and comment boxes were grouped, and additional text was added to 12 items for further clarification. The ACP 2.0 has 18 items and 9 comment boxes in addition to summative comments, in contrast to the original ACP's 21 items and 9 comment boxes. Conclusions: In November 2020, Canadian clinical education academics reviewed the proposed draft ACP 2.0 and unanimously accepted it for implementation in Canadian physiotherapy university programmes.



16.
Psychometrika ; 89(1): 296-316, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38332224

ABSTRACT

In psychological research and practice, a person's scores on two different traits or abilities are often compared. Such within-person comparisons require that the measurements have equal units (EU) and/or equal origins, assumptions that are rarely validated. We describe a multidimensional SEM/IRT model from the literature and, using principles of conjoint measurement, show that its expected response variables satisfy the axioms of additive conjoint measurement for measurement on a common scale. In an application to Quality of Life data, the EU analysis is used as a pre-processing step to derive a simple-structure Quality of Life model with three dimensions expressed in equal units. The results are used to address questions that can only be answered by scores expressed in equal units. When the EU model fits the data, scores in the corresponding simple-structure model have added validity in that they can address questions that cannot otherwise be addressed. Limitations and the need for further research are discussed.


Subject(s)
Models, Statistical , Psychometrics , Quality of Life , Humans , Psychometrics/methods
17.
J Dent Educ ; 88(5): 596-605, 2024 May.
Article in English | MEDLINE | ID: mdl-38348732

ABSTRACT

PURPOSE/OBJECTIVE: Commission on Dental Accreditation (CODA) accreditation standard 2-19 states predoctoral dental schools must assess students' ability to function successfully as the leader of the oral health care team. This study aimed to explore how dental schools incorporate leadership training into their curriculum to better understand the leadership skills students learn, the ways students engage in leadership training, and the opportunities students have to practice leadership skills with their peers. METHODS: The aim of this 2022 qualitative phenomenology study was to use semi-structured interviews with academic Deans at CODA-accredited dental schools and one subject matter expert to uncover types of cognitive, behavioral, and environmental factors influencing leadership training through the lens of social cognitive theory. All interviews were recorded on Zoom, transcribed, de-identified, and analyzed for recurring themes using NVivo. Eight academic Deans and one subject matter expert participated in the study. RESULTS: Four major themes emerged from the data: leadership is essential for dental professionals, leadership is incorporated into the curricula in diverse ways, students most often engage in leadership training opportunities with dental peers and interprofessional opportunities could be expanded, and dental schools often face barriers to incorporating leadership training. Vertically integrated case presentations and team-based practice management simulations are meaningful leadership development activities. Prominent barriers include time constraints, lack of faculty champions with teaching leadership expertise, and prioritizing the development of hand skills. CONCLUSION: Standard practices for student leadership development and assessment do not appear to exist across dental school curricula. Findings support the need for a leadership development framework.


Subject(s)
Curriculum , Education, Dental , Leadership , Qualitative Research , Education, Dental/methods , Education, Dental/standards , Humans , Schools, Dental , Students, Dental/psychology
18.
J Dent Educ ; 88(5): 631-638, 2024 May.
Article in English | MEDLINE | ID: mdl-38390731

ABSTRACT

PURPOSE/OBJECTIVES: The ability to give and receive feedback is a key skill to develop during predoctoral dental education, and peer feedback in particular offers distinct benefits, including a different understanding of the material (peers are closer to one another in their knowledge development) and a lighter load for overburdened instructors. However, it is unclear whether peer feedback matches the quality of instructor feedback. METHODS: Dental students in two different graduation years provided quantitative and qualitative peer feedback on a case-based oral and maxillofacial pathology simulation. The data from these exercises were aggregated and analyzed to compare the quality of qualitative feedback with course examination scores. Student perceptions of peer feedback were also recorded. RESULTS: The mean quality of feedback was not correlated with course examination scores, although the number of times students gave and received high-quality feedback was correlated with examination scores. Student feedback was overall of lower quality than instructor feedback, though there was no significant difference between instructor feedback quality and the maximum student feedback quality received. Student perceptions of the utility of feedback were positive. CONCLUSION: While instructor feedback is more reliable and consistent, our findings suggest that in most instances, at least one peer in a moderate-sized group can approximate the quality of instructor feedback on case-based assignments.


Subject(s)
Education, Dental , Faculty, Dental , Peer Group , Students, Dental , Education, Dental/methods , Education, Dental/standards , Humans , Students, Dental/psychology , Feedback , Formative Feedback , Educational Measurement/methods
19.
BMC Nurs ; 23(1): 108, 2024 Feb 08.
Article in English | MEDLINE | ID: mdl-38326865

ABSTRACT

BACKGROUND: Novice nurses providing care in acute conditions should perform satisfactorily. Accurate and appropriate evaluation of novice nurses' performance in acute situations is essential for planning interventions to improve the quality of patient care. This study was conducted to translate the Perception to Care in Acute Situations (PCAS-P) scale into Persian and evaluate its psychometric properties in novice nurses. METHODS: In this methodological study, 236 novice nurses were selected by convenience sampling. The 17-item PCAS-P scale was translated into Persian through a forward-backward process, and this version was used for psychometric evaluation. Face validity, content validity, and construct validity were assessed, the latter using confirmatory factor analysis; internal consistency and stability reliability were also calculated. The data were analyzed using SPSS and AMOS software. RESULTS: The PCAS-P scale preserved the meaning of the original English version and was clear, explicit, and understandable for novice nurses. Confirmatory factor analysis showed that the Persian version is consistent with the proposed model and confirmed the fit of the three-factor structure. Cronbach's alpha, McDonald's omega, Coefficient H, and the average inter-item correlation were excellent for the overall scale and its dimensions, and the three latent factors showed good convergent and discriminant validity. Additionally, the average-measures ICC was 0.944 (95% CI 0.909 to 0.969). CONCLUSION: The PCAS-P scale is valid and reliable for measuring novice nurses' perception of care in acute situations.
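Internal-consistency coefficients like the Cronbach's alpha reported in this abstract can be computed from any respondents-by-items score matrix. A minimal sketch in Python with NumPy; the function name and the score matrix below are invented for illustration, not data from the study:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Illustrative data only: 6 respondents answering a 4-item Likert scale.
scores = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
    [3, 2, 3, 3],
])
alpha = cronbach_alpha(scores)
```

In practice, coefficients such as McDonald's omega and the ICC would be obtained from dedicated packages (e.g., factor-analysis or mixed-model routines) rather than hand-rolled.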

20.
Digit Health ; 10: 20552076241233144, 2024.
Article in English | MEDLINE | ID: mdl-38371244

ABSTRACT

Introduction: Since its release by OpenAI in November 2022, numerous studies have subjected ChatGPT to various tests to evaluate its performance on medical exams. The objective of this study is to evaluate ChatGPT's accuracy and logical reasoning across all 10 subjects featured in Stage 1 of the Senior Professional and Technical Examinations for Medical Doctors (SPTEMD) in Taiwan, with questions in both Chinese and English. Methods: In this study, we tested ChatGPT-4 on SPTEMD Stage 1. The model was presented with multiple-choice questions extracted from three separate tests conducted in February 2022, July 2022, and February 2023. These questions encompass 10 subjects, namely biochemistry and molecular biology, anatomy, embryology and developmental biology, histology, physiology, microbiology and immunology, parasitology, pharmacology, pathology, and public health. Subsequently, we analyzed the model's accuracy for each subject. Results: In all three tests, ChatGPT achieved scores surpassing the 60% passing threshold, resulting in an overall average score of 87.8%. Notably, its best performance was in biochemistry, where it garnered an average score of 93.8%. Conversely, the performance of the generative pre-trained transformer (GPT)-4 assistant on anatomy, parasitology, and embryology was not as strong, and its scores were highly variable in embryology and parasitology. Conclusion: This study demonstrates ChatGPT's competence across the subjects of SPTEMD Stage 1 and suggests that it could be a helpful tool not only for exam preparation but also for improving the accessibility of medical education and supporting continuing education for medical students and professionals.
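The per-subject accuracies and the 60% pass check described above reduce to simple ratios over correct/total counts. A minimal sketch; the subject tallies below are made up for illustration and are not the actual SPTEMD results:

```python
# Hypothetical (correct, total) counts per subject -- illustrative only.
results = {
    "biochemistry and molecular biology": (45, 48),
    "anatomy": (30, 40),
    "parasitology": (14, 20),
}

def accuracy(correct: int, total: int) -> float:
    """Fraction of questions answered correctly."""
    return correct / total

per_subject = {s: accuracy(c, t) for s, (c, t) in results.items()}

# Overall score pools all questions, so larger subjects weigh more.
overall = sum(c for c, _ in results.values()) / sum(t for _, t in results.values())
passed = overall >= 0.60  # SPTEMD uses a 60% passing threshold
```

Pooling questions (rather than averaging per-subject percentages) is one modeling choice; the two differ whenever subjects have unequal question counts.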
