Results 1 - 4 of 4
1.
Journal of Educational Evaluation for Health Professions ; : 10-2019.
Article in Korean | WPRIM | ID: wpr-937914

ABSTRACT

PURPOSE: This study aimed to explore students' cognitive patterns while solving clinical problems in three different types of assessments: the clinical performance examination (CPX), multimedia case-based assessment (CBA), and modified essay question (MEQ), and thereby to understand how different assessment types stimulate different patterns of thinking. METHODS: A total of 6 test-performance cases from 2 fourth-year medical students were used in this cross-case study. Data were collected through one-on-one interviews using a stimulated recall protocol, in which students were shown videos of themselves taking each assessment and asked to elaborate on what they were thinking. The unit of analysis was the smallest phrase or sentence in the participants' narratives that represented a meaningful cognitive occurrence. The narrative data were reorganized chronologically and then analyzed according to the hypothetico-deductive reasoning framework for clinical reasoning. RESULTS: Both participants demonstrated similar proportional frequencies of clinical reasoning patterns on the same clinical assessments. The results also revealed that the three assessment types may stimulate different patterns of clinical reasoning. For example, the CPX strongly promoted the participants' reasoning related to inquiry strategy, while the MEQ strongly promoted hypothesis generation. Similarly, data analysis and synthesis by the participants were more strongly stimulated by the CBA than by the other assessment types. CONCLUSION: This study found that different assessment designs stimulated different patterns of thinking during problem-solving. This finding can contribute to the search for ways to improve current clinical assessments. Importantly, the research method used in this study can be utilized as an alternative way to examine the validity of clinical assessments.

2.
Korean Journal of Medical Education ; : 101-109, 2017.
Article in English | WPRIM | ID: wpr-213563

ABSTRACT

PURPOSE: Hypothetico-deductive reasoning (HDR) is an essential learning activity and a learning outcome in problem-based learning (PBL). It is important for medical students to engage in the HDR process through argumentation during their small group discussions in PBL. This study aimed to analyze the quality of preclinical medical students' argumentation according to each phase of HDR in PBL. METHODS: Participants were 15 first-year preclinical students divided into two small groups. A set of three 2-hour discussion sessions from each of the two groups during a 1-week-long PBL unit on the cardiovascular system was audio-recorded. The arguments constructed by the students were analyzed using a coding scheme, which included four types of argumentation (Type 0: incomplete, Type 1: claim only, Type 2: claim with data, and Type 3: claim with data and warrant). The mean frequency of each type of argumentation according to each HDR phase across the two small groups was calculated. RESULTS: During small group discussions, Type 1 arguments were generated most often (frequency=120.5, 43%), whereas the least common were Type 3 arguments (frequency=24.5, 8.7%) among the four types of arguments. CONCLUSION: The results of this study revealed that the students predominantly made claims without proper justifications; they often omitted data for supporting their claims or did not provide warrants to connect the claims and data. The findings suggest instructional interventions to enhance the quality of medical students' arguments in PBL, including promoting students' comprehension of the structure of argumentation for HDR processes and questioning.
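The frequency analysis described above (tallying coded argument types across small groups and averaging) can be sketched as follows. The group labels and coded utterances here are hypothetical placeholders for illustration; the study's actual coding was performed on audio-recorded transcripts using the four-type scheme (Type 0 through Type 3).

```python
from collections import Counter

# Hypothetical coded utterances (argument Types 0-3) from two small groups;
# placeholder data, not the study's actual transcripts.
group_codes = {
    "group_a": [1, 1, 2, 0, 1, 3, 2, 1],
    "group_b": [1, 2, 1, 0, 1, 2, 3, 1],
}

def mean_type_frequency(groups):
    """Mean frequency of each argumentation type across the groups."""
    totals = Counter()
    for codes in groups.values():
        totals.update(codes)
    n_groups = len(groups)
    return {arg_type: totals[arg_type] / n_groups for arg_type in sorted(totals)}

print(mean_type_frequency(group_codes))
```

Proportional shares (e.g., the reported 43% for Type 1) follow by dividing each mean frequency by the total count of arguments.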


Subject(s)
Humans , Cardiovascular System , Clinical Coding , Comprehension , Learning , Problem-Based Learning , Students, Medical
3.
Korean Journal of Medical Education ; : 169-178, 2016.
Article in English | WPRIM | ID: wpr-32289

ABSTRACT

PURPOSE: The quality of problem representation is critical for developing students' problem-solving abilities in problem-based learning (PBL). This study investigates preclinical students' experience with standardized patients (SPs) as a problem representation method compared to using video cases in PBL. METHODS: A cohort of 99 second-year preclinical students from Inje University College of Medicine (IUCM) responded to a Likert scale questionnaire on their learning experiences after they had experienced both video cases and SPs in PBL. The questionnaire consisted of 14 items with eight subcategories: problem identification, hypothesis generation, motivation, collaborative learning, reflective thinking, authenticity, patient-doctor communication, and attitude toward patients. RESULTS: The results reveal that, compared to using video cases, using SPs gave the preclinical students significantly more positive experiences in boosting patient-doctor communication skills; the perceived authenticity of their clinical situations; the development of proper attitudes toward patients; and motivation, reflective thinking, and collaborative learning. The SPs also provided more challenges than the video cases during problem identification and hypothesis generation. CONCLUSION: SPs are more effective than video cases in delivering higher levels of authenticity in clinical problems for PBL. The interaction with SPs engages preclinical students in deeper thinking and discussion; growth of communication skills; development of proper attitudes toward patients; and motivation. Considering the higher cost of SPs compared with video cases, SPs could be used most advantageously during the preclinical period in the IUCM curriculum.


Subject(s)
Humans , Cohort Studies , Curriculum , Learning , Methods , Motivation , Problem-Based Learning , Thinking
4.
Korean Journal of Medical Education ; : 31-40, 2014.
Article in Korean | WPRIM | ID: wpr-13949

ABSTRACT

PURPOSE: The purpose of this study was to explore the relationships among medical students' assessments of peers' group presentations, instructors' assessments of those presentations, and students' educational achievements in other assignments and tests. METHODS: A total of 101 first-year students from a medical school participated in the study. The students' educational achievements in a 4-week-long integrated curriculum were analyzed. Students' final grades comprised the following evaluation criteria: two written tests (60%), 15 group reports (25%), one individual report (7%), and four group presentations (15%). We compared scores on the group presentations assessed by the peers and the two instructors. Furthermore, we compared peers' assessment scores with each component of the evaluation criteria. RESULTS: Pearson correlation analysis showed a significant correlation between the assessments by peers and instructors (r=0.775, p<0.001). Peer assessment scores also correlated significantly with scores for the group assignments (r=0.777, p<0.001), final grades in the curriculum (r=0.345, p<0.001), and scores for individual assignments (r=0.334, p<0.001); however, no significant correlation was observed between the peer-assessed group presentation scores and the two written test scores. CONCLUSION: Peer assessments may be a reliable and valid method for evaluating medical students' performances in an integrated curriculum, especially when the assessments are applied to academic processes, such as presentations, with explicit evaluation and judgment criteria. Peer assessments of group presentations might assess different learning domains than written tests, which primarily evaluate limited medical knowledge and clinical reasoning.
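The Pearson product-moment correlation used throughout the analysis above can be sketched as follows. The score lists here are invented placeholders for illustration; the study's reported coefficients (e.g., r=0.775 between peer and instructor assessments) come from its real data of 101 students.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient of two equal-length samples."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Covariance numerator and standard-deviation denominators (unnormalized).
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sy = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical peer and instructor presentation scores, for illustration only.
peer_scores = [85, 78, 92, 70, 88]
instructor_scores = [83, 80, 90, 72, 86]
print(round(pearson_r(peer_scores, instructor_scores), 3))
```

A coefficient near +1 indicates the two raters rank the presentations almost identically, which is the pattern the study reports between peers and instructors.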


Subject(s)
Humans , Curriculum , Education , Educational Status , Group Processes , Judgment , Learning , Methods , Peer Review , Schools, Medical , Self-Evaluation Programs , Students, Medical