Results 1 - 4 of 4
1.
BMC Med Educ ; 22(1): 616, 2022 Aug 12.
Article in English | MEDLINE | ID: mdl-35962381

ABSTRACT

BACKGROUND: Multiple mini-interviews (MMI) are used to assess non-academic attributes for selection into medicine and other healthcare professions. It remains unclear whether different MMI station formats (discussions, role-plays, collaboration) assess different dimensions. METHODS: Based on the station formats of the 2018 and 2019 Integrated French MMI (IFMMI), which comprised five discussion, three role-play and two collaboration stations, the authors performed confirmatory factor analysis (CFA) using the lavaan 0.6-5 R package, comparing a one-factor solution to a three-factor solution for the scores of the 2018 (n = 1438) and 2019 (n = 1440) IFMMI cohorts across three medical schools in Quebec, Canada. RESULTS: The three-factor solution was retained, with discussion, role-play and collaboration station scores all loading adequately on their respective factors. Furthermore, all three factors had moderate-to-high covariance (range 0.44 to 0.64). Model fit was also excellent, with a comparative fit index (CFI) of 0.983 (good if > 0.9), a Tucker-Lewis index (TLI) of 0.976 (good if > 0.95), a standardized root mean square residual (SRMR) of 0.021 (good if < 0.08) and a root mean square error of approximation (RMSEA) of 0.023 (good if < 0.08) for 2018, with similar results for 2019. In comparison, the single-factor solution showed poorer fit (CFI = 0.819, TLI = 0.767, SRMR = 0.049 and RMSEA = 0.070). CONCLUSIONS: The IFMMI assessed three dimensions corresponding to station formats, a finding that was consistent across the two cohorts. This suggests that different station formats may assess different skills, which has implications for the choice of appropriate reliability metrics and the interpretation of scores. Further studies should characterize the underlying constructs associated with each station format and examine differential predictive validity across formats.


Subject(s)
School Admission Criteria , Schools, Medical , Canada , Humans , Psychometrics , Reproducibility of Results
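
As a rough illustration of the analysis described in the abstract above, the sketch below fits a one-factor and a three-factor confirmatory factor model with the lavaan R package named in the methods. The data frame mmi_scores and the station score column names (d1-d5, r1-r3, c1-c2) are placeholders assumed for illustration; the actual IFMMI variable names and data are not given in the abstract.

library(lavaan)

# Hypothetical data frame: one row per candidate, one column per station score.
# d1-d5 = discussion stations, r1-r3 = role-play stations, c1-c2 = collaboration stations.
one_factor <- '
  mmi =~ d1 + d2 + d3 + d4 + d5 + r1 + r2 + r3 + c1 + c2
'
three_factor <- '
  discussion    =~ d1 + d2 + d3 + d4 + d5
  roleplay      =~ r1 + r2 + r3
  collaboration =~ c1 + c2
'

fit_1f <- cfa(one_factor,   data = mmi_scores)
fit_3f <- cfa(three_factor, data = mmi_scores)

# Fit indices reported in the abstract (CFI, TLI, SRMR, RMSEA).
fitMeasures(fit_3f, c("cfi", "tli", "srmr", "rmsea"))

# Standardized loadings and factor covariances (the abstract reports covariances of 0.44 to 0.64).
summary(fit_3f, standardized = TRUE)

# Likelihood-ratio comparison of the nested one- and three-factor models.
anova(fit_1f, fit_3f)

A comparison like the anova() call above is one common way to justify retaining the more complex three-factor solution when it fits substantially better than the single-factor model.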
2.
Teach Learn Med ; 28(4): 375-384, 2016.
Article in English | MEDLINE | ID: mdl-27294400

ABSTRACT

Construct: The purpose of this study was to provide initial evidence of the validity of written case summaries as assessments of clinical problem representation in a classroom setting. BACKGROUND: To solve clinical problems, clinicians must form a clear representation of the issues. In the clinical setting, oral case presentations, or summaries, are used to assess learners' ability to gather, synthesize, and "translate" pertinent case information. This ability can be assessed in Objective Structured Clinical Examination and virtual patient settings using oral or written case summaries. Evidence of their validity in these settings includes adequate interrater agreement and moderate correlation with other assessments of clinical reasoning. We examined the use of written case summaries in a classroom setting as part of an examination designed to assess clinical reasoning. APPROACH: We developed and implemented written examinations for 2 preclerkship general practice courses in Years 4 and 5 of a 7-year curriculum. Examinations included 8 case summary questions in Year 4 and 5 case summary questions in Year 5. Seven hundred students participated. Cases were scored using 3 criteria: extraction of pertinent findings, semantic quality, and global ratings. We examined the item parameters (using classical test theory) and the generalizability of case summary items. We also computed correlations between case summary scores and scores on other questions within the examination. RESULTS: Item parameters were acceptable (average item difficulty = 0.49-0.73 and 0.59-0.68 in Years 4 and 5; average point-biserials = 0.21-0.24 and 0.18-0.21). Scores were moderately generalizable (G coefficients = 0.40-0.50), with case specificity a substantial source of measurement error (10.2%-19.5% of variance); scoring and rater effects were small. Correlations with related constructs were low to moderate. CONCLUSIONS: There is good evidence regarding the scoring and generalizability of written case summaries for the assessment of clinical problem representation. Further evidence regarding the extrapolation and implications of these assessments is warranted.


Subject(s)
Clinical Competence , Educational Measurement , Physical Examination , Education, Medical , Humans , Reproducibility of Results
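
As a hedged sketch of how the G coefficients and variance components reported above can be obtained, the code below estimates a crossed student-by-case model with the lme4 R package and computes a relative generalizability coefficient. The lme4 package, the data frame summaries, and its columns (student, case, score) are assumptions made for illustration; the abstract does not state which software or estimation approach the authors used.

library(lme4)

# Hypothetical long-format data: one row per student x case summary,
# with columns student, case and score.
fit <- lmer(score ~ 1 + (1 | student) + (1 | case), data = summaries)

vc <- as.data.frame(VarCorr(fit))
v_student <- vc$vcov[vc$grp == "student"]   # true-score (person) variance
v_case    <- vc$vcov[vc$grp == "case"]      # case difficulty variance
v_resid   <- vc$vcov[vc$grp == "Residual"]  # person x case interaction confounded with error

# Relative G coefficient for a test built from n_cases case summaries
# (e.g., the 8 case summary questions of the Year 4 examination).
n_cases <- 8
g_rel <- v_student / (v_student + v_resid / n_cases)
g_rel

The share of total variance attributable to v_resid in such a model roughly corresponds to the case-specificity error the abstract identifies as a substantial source of measurement error.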
3.
JMIR Res Protoc ; 5(1): e26, 2016 Feb 17.
Article in English | MEDLINE | ID: mdl-26888076

ABSTRACT

BACKGROUND: Helping trainees develop appropriate clinical reasoning abilities is a challenging goal in an environment where clinical situations are marked by high levels of complexity and unpredictability. The benefit of simulation-based education for assessing clinical reasoning skills has rarely been reported. More specifically, it is unclear whether clinical reasoning is better acquired when the instructor's input occurs entirely after the scenario or when it is integrated during the scenario. Based on educational principles of the dual-process theory of clinical reasoning, a new simulation approach called Simulation with Iterative Discussions (SID) is introduced. The instructor interrupts the flow of the scenario at three key moments of the reasoning process (data gathering, integration, and confirmation). After each stop, the scenario is continued where it was interrupted. Finally, a brief general debriefing ends the session. The System 1 process of clinical reasoning is assessed through verbalization during management of the case, and System 2 during the iterative discussions, without feedback being provided. OBJECTIVE: The aim of this study is to evaluate the effectiveness of Simulation with Iterative Discussions versus the classical simulation approach in developing the reasoning skills of General Pediatrics and Neonatal-Perinatal Medicine residents. METHODS: This will be a prospective, exploratory, randomized study conducted at Sainte-Justine Hospital in Montreal, QC, between January and March 2016. All postgraduate year (PGY) 1 to 6 residents will be invited to complete one 30-minute, audio- and video-recorded, complex high-fidelity simulation (either SID or classical) covering a similar neonatology topic. Pre- and post-simulation questionnaires will be completed, and a semistructured interview will be conducted after each simulation. Data analyses will use SPSS and NVivo software. RESULTS: This study is in its preliminary stages, and the results are expected to be available by April 2016. CONCLUSIONS: This will be the first study to explore a new simulation approach designed to enhance clinical reasoning. By assessing reasoning processes more closely throughout a simulation session, we believe that Simulation with Iterative Discussions will be an interesting and more effective approach for students. The findings of the study will benefit medical educators, education programs, and medical students.

4.
J Appl Meas ; 11(4): 337-51, 2010.
Article in English | MEDLINE | ID: mdl-21164224

ABSTRACT

Questionnaire-based inquiries make it possible to obtain data rather quickly and at relatively low cost, but a number of factors may influence respondents' answers and affect the validity of the data. Some of these factors are related to the individuals and the environment, while others are directly related to the characteristics of the questionnaire and its items: the text introducing the questionnaire, the order in which the items are presented, the number of response categories and their labels on the proposed scale, and the wording of the items. This article focuses on the last of these points; its goal is to show how diagnostic features developed around Rasch modelling can be used to study the impact of item wording in opinion/perception questionnaires on the responses obtained and on the location of the anchor points of the item response scale.


Subject(s)
Surveys and Questionnaires , Data Interpretation, Statistical , Humans , Models, Statistical , Perception , Public Opinion
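
As an illustrative sketch of the kind of Rasch-based diagnostics discussed above, the code below fits a rating scale model with the eRm R package and inspects the category thresholds (the anchor points of the response scale) and item fit statistics. The response matrix responses and the choice of the eRm package are assumptions made for illustration; the article does not specify the software used.

library(eRm)

# Hypothetical matrix of polytomous responses: rows = respondents,
# columns = questionnaire items, categories coded 0..k. Differently worded
# versions of an item could be compared by fitting the model to each version.
fit_rsm <- RSM(responses)        # Andrich rating scale model

thresholds(fit_rsm)              # category threshold locations (anchor points)

pp <- person.parameter(fit_rsm)  # person ability estimates
itemfit(pp)                      # infit / outfit statistics, item by item

Shifts in the estimated threshold locations between wording variants are one way to quantify how item wording moves the anchor points of the response scale.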