Results 1 - 13 of 13
1.
Korean Medical Education Review ; (3): 79-92, 2022.
Article in English | WPRIM | ID: wpr-938793

ABSTRACT

With increasing demands for medical care from society, the medical system, and the general public, and with rapid changes in doctors’ awareness, the competencies required of doctors are also changing. The goal of this study was to develop a doctor’s competency framework from the patient’s perspective and to use it as the basis for developing milestones and entrustable professional activities for each period of medical student education and resident training. To this end, a big data analysis using topic modeling was performed on domestic and international research papers (2011–2020), domestic newspaper articles (2016–2020), and domestic social networking service data (2016–2020) related to doctors’ competencies. Delphi surveys were conducted twice with 28 medical education experts. In addition, a survey on doctors’ competencies was conducted among 1,000 citizens, 407 nurses, 237 medical students, 361 residents, and 200 specialists. Through this process, six core competencies, 16 sub-competencies, and 47 competencies were derived as subject-oriented doctor’s competencies. The core competencies were: (1) competency related to disease and health as an expert; (2) competency related to patients as a communicator; (3) competency related to colleagues as a collaborator; (4) competency related to society as a health care leader; (5) competency related to oneself as a professional; and (6) competency related to academics as a scholar who contributes to the development of medicine.

2.
Journal of Korean Medical Science ; : e74-2022.
Article in English | WPRIM | ID: wpr-925946

ABSTRACT

Background: There is no national survey on medical school faculty members’ burnout in Korea. This study aimed to investigate burnout levels and explore possible factors related to burnout among faculty members of Korean medical schools. Methods: An anonymous online questionnaire was distributed to 40 Korean medical schools from October 2020 to December 2020. Burnout was measured by a modified and revalidated version of the Maslach Burnout Inventory-Human Services Survey. Results: A total of 996 faculty members participated in the survey. Of them, 855 answered the burnout questions, and 829 completed all the questions in the questionnaire. A significant number of faculty members showed a high level of burnout in each sub-dimension: 34% in emotional exhaustion, 66.3% in depersonalization, and 92.4% in reduced personal accomplishment. A total of 31.5% of faculty members revealed a high level of burnout in two sub-dimensions, while 30.5% revealed a high level of burnout in all three sub-dimensions. Women faculty members and those younger than 40 reported significantly higher emotional exhaustion and depersonalization. Faculty working long hours (≥80 hours/week) showed the highest reduced personal accomplishment scores (F=4.023, P=0.018). The most significant stressor or burnout source was “excessive regulation by the government or university.” Research was the most stressful task, while education was the least stressful. Conclusion: This first nationwide study indicates that a significant number of faculty members in Korean medical schools suffer from a high level of burnout. Further studies are necessary to identify the burnout rate, related factors, and strategies to overcome physician burnout.

3.
Korean Medical Education Review ; (3): 154-159, 2021.
Article in English | WPRIM | ID: wpr-918371

ABSTRACT

The coronavirus disease 2019 (COVID-19) pandemic has profoundly impacted all aspects of undergraduate, postgraduate, and continuing medical education. Only the focus of medical education—care for patients and communities—has remained an integral part of all of the above sectors. Several challenges have been experienced by learners and educators as the education and training of future doctors has continued in the midst of this crisis, including the cancellation of face-to-face classes and training, reduced patient encounter opportunities, fairness issues in online assessments, disruption of patient interview-based exams, reflections on the role of doctors in society, and mental health-related problems linked to isolation and concerns about infection. In response to these disruptions, educators and institutions have rapidly deployed educational innovations. Schools have adopted educational strategies to overcome these challenges by implementing novel education delivery methods in an online format, providing clinical experiences through simulation or telehealth methods, introducing online assessment tools with formative purposes, encouraging learners’ involvement in nonclinical activities such as community service, and making available resources and programs to sustain learners’ mental health and wellness. During the COVID-19 pandemic, educators and institutions have faced drastic changes in medical education worldwide. At the same time, the quantitative expansion of online education has caused other problems, such as the lack of human collaboration. The long-term effects of the COVID-19 pandemic on medical education need to be studied further.

4.
Korean Medical Education Review ; (3): 185-193, 2021.
Article in English | WPRIM | ID: wpr-918367

ABSTRACT

Students must be familiar with clinical skills before starting clinical practice to ensure patient safety and enable efficient learning. However, performance is mainly tested in the third or fourth year of medical school, and studies using a validity framework have not been reported in Korea. We analyzed the validity of a performance test conducted among second-year students, classifying the evidence into content, response process, internal structure, relationships with other variables, and consequences according to Messick’s framework. The analysis showed that content validity was secured by developing cases according to a pre-determined blueprint. The quality of the response process was controlled by training and calibrating raters. Regarding the internal structure, (1) reliability estimated by generalizability theory was acceptable (coefficients of 0.724 and 0.786 for day 1 and day 2, respectively), and (2) the relevant domains showed proper correlations, while the clinical performance examination (CPX) and objective structured clinical examination (OSCE) showed weaker relationships. OSCE/CPX scores were correlated with other variables, especially grade point average and oral structured exam scores. The consequences of this assessment were (1) prompting students to learn clinical skills and study on their own, while causing excessive stress for students who lacked motivation; (2) reminding educators of the need to apply practical teaching methods and to give feedback on test results; and (3) providing an opportunity for faculty to consider developing support programs. It is necessary to develop the blueprint more precisely according to students’ level and to verify the validity of the response process with statistical methods.

5.
Journal of Educational Evaluation for Health Professions ; : 42-2020.
Article in English | WPRIM | ID: wpr-891557

ABSTRACT

This study assessed the clinical performance of 150 third-year medical students in Busan, Korea in a whole-task emergency objective structured clinical examination station that simulated a patient with palpitations visiting the emergency department. The examination was conducted from November 25 to 27, 2019. Clinical performance was assessed as the number and percentage of students who performed history-taking (HT), a physical examination (PE), an electrocardiography (ECG) study, patient education (Ed), and clinical reasoning (CR), which were items on the checklist. It was found that 18.0% of students checked the patient’s pulse, 51.3% completed an ECG study, and 57.9% explained the results to the patient. A sizable proportion (38.0%) of students did not even attempt an ECG study. In a whole-task emergency station, students showed good performance on HT and CR, but unsatisfactory results for PE, the ECG study, and Ed. Clinical skills educational programs for these students should focus more on PE, timely diagnostic tests, and sufficient Ed.

7.
Korean Medical Education Review ; (3): 131-142, 2020.
Article | WPRIM | ID: wpr-836859

ABSTRACT

A systematic educational program evaluation system for continuous quality improvement in undergraduate medical education is essential. Monitoring and evaluation (M&E) are two distinct but complementary processes in an evaluation system that emphasizes formative purposes. Monitoring involves regular data collection to track processes and results, while evaluation requires periodic judgment for improvement. We recently implemented an educational evaluation using the M&E concept in a medical school. The evaluation system consists of two loops, one at the lesson/course level and the other at the phase/graduation level. We conducted evaluation activities in four stages: planning, monitoring, evaluation, and improvement. In the planning stage, we clarified the purpose of the evaluation, formulated a plan to engage stakeholders, determined evaluation criteria and indicators, and developed an evaluation plan. Next, during the monitoring stage, we developed evaluation instruments and methods and then collected data. In the evaluation stage, we analyzed the results and judged them against the criteria of the two loops. Finally, we reviewed the evaluation results with stakeholders to make improvements. We recognized several problems, including an excessive burden, a lack of expertise, insufficient consideration of stakeholders’ evaluation questions, and inefficient data collection. We need to share the value of evaluation and build the system gradually.

8.
Korean Medical Education Review ; (3): 51-58, 2019.
Article in Korean | WPRIM | ID: wpr-760444

ABSTRACT

Although ‘assessment for learning’ rather than ‘assessment of learning’ has been emphasized recently, how students learn before examinations remains unclear. The purpose of this study was to investigate pre-assessment learning activities (PALA) and to find the mechanism factors (MF) that influence those activities. Moreover, we compared the PALA and MF of written exams with those of the clinical performance examination/objective structured clinical examination (CPX/OSCE) in third-year (N=121) and fourth-year (N=108) medical students. Through a literature review and discussion, questionnaires with a 5-point Likert scale were developed to measure PALA and MF. PALA had the constructs of cognitive and meta-cognitive activities, and MF had sub-components of personal, interpersonal, and environmental factors. Cronbach's α coefficient was used to calculate survey reliability, while the Pearson correlation coefficient and multiple regression analysis were used to investigate the influence of MF on PALA. A paired t-test was applied to compare the PALA and MF of written exams with those of CPX/OSCE in third- and fourth-year students. The Pearson correlation coefficients between PALA and MF were 0.479 for written exams and 0.508 for CPX/OSCE. MF explained 24.1% of the variance in PALA for written exams and 25.9% for CPX/OSCE. Both PALA and MF showed significant differences between written exams and CPX/OSCE in third-year students, whereas fourth-year students showed no differences. Educators need to consider the MFs that influence PALA to encourage 'assessment for learning'.


Subject(s)
Humans , Education, Medical, Undergraduate , Educational Measurement , Learning , Students, Medical
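The abstract above reports that mechanism factors explained 24.1% and 25.9% of the variance in pre-assessment learning activities via multiple regression. As a minimal sketch of how such an explained-variance figure (R²) is obtained, here is an ordinary least squares fit on hypothetical data; the variable names and simulated scores are illustrative assumptions, not the study's data:

```python
import numpy as np

# Hypothetical 5-point Likert data: three mechanism-factor scores
# (personal, interpersonal, environmental) predicting a PALA score.
rng = np.random.default_rng(0)
n = 120
X = rng.integers(1, 6, size=(n, 3)).astype(float)          # predictors
y = 0.4 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 1, n)    # outcome

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ beta

# R^2: share of outcome variance explained by the predictors.
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(round(r2, 3))
```

Multiplying R² by 100 gives the "explained x% of the variance" phrasing used in the abstract.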
9.
Journal of Educational Evaluation for Health Professions ; : 4-2018.
Article in English | WPRIM | ID: wpr-764472

ABSTRACT

PURPOSE: The objective of this study was to evaluate the authenticity, acceptability, and feasibility of a hybrid station that combined a standardized patient encounter and a simulated Papanicolaou test. METHODS: We introduced a hybrid station in the routine clinical skills examination (CSE) for 335 third-year medical students at 4 universities in Korea from December 1 to December 3, 2014. After the tests, we conducted an anonymous survey on the authenticity, acceptability, and feasibility of the hybrid station. RESULTS: A total of 334 medical students and 17 professors completed the survey. A majority of the students (71.6%) and professors (82.4%) agreed that the hybrid station was more authentic than the standard CSE. Over 60 percent of the students and professors responded that the station was acceptable for assessing the students' competence. Most of the students (75.2%) and professors (82.4%) assessed the required tasks as being feasible after reading the instructions. CONCLUSION: Our results showed that the hybrid CSE station was a highly authentic, acceptable, and feasible way to assess medical students' performance.


Subject(s)
Humans , Anonyms and Pseudonyms , Clinical Competence , Gynecology , Korea , Mental Competency , Papanicolaou Test , Patient Simulation , Students, Medical
10.
Korean Medical Education Review ; (3): 32-43, 2018.
Article in Korean | WPRIM | ID: wpr-760419

ABSTRACT

Although student research programs have been implemented worldwide, research programs during premedical school have unique characteristics. The purpose of this study was to evaluate the factors that influence the effects of premedical research programs. Eighty second-year premedical students at Pusan National University were included in the study. Effect elements and influential factors were extracted through reference reviews and in-depth individual interviews. A Likert-scale questionnaire was developed using the extracted elements and factors, and Cronbach's α coefficient was used to analyze the reliability of the survey. The mean and standard deviation for each question were calculated to evaluate educational effectiveness and learning satisfaction, and the influence of each factor was analyzed using correlation analysis. Students' research skills and knowledge improved in the short term; however, interest in research or in a career as a researcher did not increase. Student interest, participation, and contributions were important factors. Among professor-related factors, passion, considerateness, and teaching method, including the level of the lesson, were influential. Curriculum implementation, support, and guidance were also influential, whereas the evaluation system was not a factor. To improve student research programs, these factors influencing educational effectiveness and learning satisfaction should be considered.


Subject(s)
Humans , Curriculum , Education , Education, Premedical , Learning , Program Evaluation , Students, Medical , Students, Premedical , Teaching
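Several of the studies above report Cronbach's α as the reliability coefficient for their questionnaires. The following is a minimal sketch of the standard formula, α = k/(k−1) · (1 − Σ item variances / total-score variance), applied to hypothetical item scores (not any of the authors' data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-item Likert questionnaire answered by 8 respondents,
# with deliberately consistent answers across items.
scores = np.array([
    [4, 5, 4, 4, 5],
    [2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4],
    [3, 3, 3, 4, 3],
    [1, 2, 1, 2, 2],
    [4, 4, 5, 4, 4],
    [2, 3, 2, 2, 3],
    [5, 5, 4, 5, 5],
])
print(round(cronbach_alpha(scores), 3))
```

Because the items here co-vary strongly, α comes out high; uncorrelated items would drive it toward zero.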
11.
Korean Journal of Medical Education ; : 31-40, 2018.
Article in English | WPRIM | ID: wpr-713377

ABSTRACT

PURPOSE: The aim of this study was to examine the clinical performance and determine the performance patterns of medical students in standardized patient (SP)-based examinations of domestic violence (DV). METHODS: The clinical performance scores in a DV station with an SP of third-year (n=111, in 2014) and fourth-year (n=143, in 2016) medical students of five universities in the Busan-Gyeongnam Clinical Skills Examination Consortium were analyzed in this study. The scenarios and checklists of the DV cases were developed by the case development committee of the consortium. The students’ performance was compared with that in other SP stations. The items of the checklists were categorized into six domains to determine the performance pattern of students investigating DV: disclosure strategy (D), DV-related history taking (H), checking the perpetrator’s psychosocial state (P), checking the victim’s condition (V), negotiating and persuading the interviewee (N), and providing information about DV (I). RESULTS: Medical students showed poorer performance in the DV station than in the other SP stations in the same examination. Most students confirmed the perpetrator and commented on confidentiality but ignored the perpetrator’s state and the patient’s physical and psychological condition. The students performed well in domains D, H, and I but poorly in domains P, V, and N. CONCLUSION: Medical students showed poor clinical performance in the DV station. They performed an ‘event-oriented interview’ rather than ‘patient-centered’ communication. An integrated educational program on DV should be established to improve students’ clinical performance.


Subject(s)
Child , Humans , Checklist , Child Abuse , Clinical Competence , Confidentiality , Disclosure , Domestic Violence , Education, Medical, Undergraduate , Negotiating , Students, Medical
13.
Korean Journal of Medical Education ; : 35-47, 2016.
Article in English | WPRIM | ID: wpr-76112

ABSTRACT

PURPOSE: The purpose of this study was to investigate the reliability and validity of a new clinical performance examination (CPX) for assessing the clinical reasoning skills and evaluating the clinical reasoning ability of students. METHODS: Third-year medical students (n=313) in the Busan-Gyeongnam consortium in 2014 were included in the study. One of 12 stations was developed to assess clinical reasoning abilities. The scenario and checklists of the station were revised by six experts. The chief complaint of the case was rhinorrhea, accompanied by fever, headache, and vomiting. The checklists focused on identifying the main problem and approaching it systematically. Students interviewed the patient and recorded a subjective and objective findings, assessment, and plan (SOAP) note within 15 minutes. Two professors assessed each student simultaneously. We performed statistical analysis on their scores and a survey. RESULTS: The Cronbach’s α of the station was 0.878, and the Cohen κ coefficient between graders was 0.785. Students agreed that the CPX was an adequate tool to evaluate their performance, but some graders argued that the CPX failed to secure its validity because of their lack of understanding of the case. One hundred eight students (34.5%) identified the essential problem early, and only 58 (18.5%) performed systematic history taking and physical examination. One hundred seventy-three (55.3%) communicated the correct diagnosis to the patient. Most students had trouble writing the SOAP note. CONCLUSION: To ensure reliability and validity, interrater agreement should be secured. Students’ clinical reasoning skills were insufficient. Students need to be trained in problem identification, reasoning skills, and accurate record-keeping.


Subject(s)
Humans , Checklist , Clinical Competence , Communication , Comprehension , Education, Medical, Undergraduate , Educational Measurement/standards , Medical History Taking , Medical Records , Observer Variation , Physical Examination , Physician-Patient Relations , Problem-Based Learning , Reproducibility of Results , Republic of Korea , Schools, Medical , Students, Medical , Surveys and Questionnaires , Thinking , Universities
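The last abstract reports a Cohen κ of 0.785 between the two graders. As a minimal sketch of how inter-rater κ is computed from paired categorical ratings, here is the standard formula κ = (p_o − p_e)/(1 − p_e) applied to illustrative checklist marks (the ratings below are invented, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical judgments on the same items."""
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of items rated identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the raters judged independently,
    # based on each rater's marginal label frequencies.
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum(ca[lab] * cb[lab] for lab in labels) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two graders marking the same 10 checklist items as done (1) / not done (0).
grader_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
grader_2 = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]
print(round(cohens_kappa(grader_1, grader_2), 3))  # → 0.524
```

Unlike raw percent agreement (0.8 here), κ discounts the agreement expected by chance, which is why values such as the reported 0.785 are read as substantial rather than merely high.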