Results 1 - 20 of 166
1.
Curr Pharm Teach Learn ; 16(10): 102134, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38955063

ABSTRACT

INTRODUCTION: Entrustable Professional Activities (EPAs) are tasks that professionals within a field perform autonomously. EPAs are incorporated in workplace-based assessment tools to assist training and professional development. Few studies have evaluated the use of medication history-taking EPAs in pharmacy practice, and none have sought stakeholder feedback on their use. This study evaluates the quality of the medication history-taking EPA utilized in South Australian public hospitals and the usability of its assessment tool. METHODS: A voluntary online questionnaire was conducted from July 15th to September 2nd 2021 to gather stakeholders' opinions on the use of the medication history-taking EPA. The questionnaire was developed based on tools identified in the literature and utilized 14 open-text and five-point Likert scale questions. It was distributed using Survey Monkey® to a purposive sample of staff and students. RESULTS: 82 responses were received from 218 surveys distributed, yielding a response rate of 38%. Respondents believed the EPA promotes learner development (90.6%) and the provision of useful feedback (83%). 94.3% considered the EPA easy to use, but only 56.6% indicated that using it fit easily within their workday. Time constraints and the presence of context-specific descriptors were commonly perceived as limitations. Some stakeholders indicated a lack of understanding of entrustment decisions. CONCLUSION: The EPA and its assessment tool were perceived to have good quality and usability. Reducing the length of the tool, broadening its applicability across contexts, and improving user understanding of entrustment decision-making may support better use of the tool.

2.
Med Sci Educ ; 34(3): 537-541, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38887399

ABSTRACT

Students as Teachers programs are prevalent, though assessments within these programs are lacking. A workplace-based assessment for clinical teaching was developed to foster formative feedback and support learner growth. Feedback narratives were analyzed to identify student teaching behaviors and revealed the themes of medical knowledge, professionalism, communication, and teaching skills, with subcategories of clinical relevance, learner-stage appropriateness, use of evidence-based teaching strategies, learning environment, feedback, and time appropriateness. This analysis supports the use of the assessment form for student teachers in the clinical environment, as students received construct-relevant feedback from various raters while teaching in multiple settings.

3.
BMC Med Educ ; 24(1): 659, 2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38872142

ABSTRACT

OBJECTIVES: Workplace-based assessment (WBA) has been vigorously criticized by medical educators for not fulfilling its educational purpose. A comprehensive exploration of stakeholders' needs regarding WBA is essential to optimize its implementation in clinical practice. METHOD: Three homogeneous focus groups were conducted with three groups of stakeholders: General Practitioner (GP) trainees, GP trainers, and GP tutors. Due to COVID-19 measures, we opted for an online asynchronous format to enable participation. A constructivist grounded theory approach was employed to identify stakeholders' needs for using WBA. RESULTS: Three core needs for WBA were identified in the analysis. Within GP training, stakeholders found WBA essential primarily for establishing learning goals, secondarily for assessment purposes, and lastly for providing or receiving feedback. CONCLUSION: All stakeholders perceive WBA as valuable when it fosters learning. The identified needs were notably influenced by agency, trust, availability, and mutual understanding, which acted as facilitating factors. Embracing these insights can illuminate the landscape of workplace learning culture for clinical educators and guide a successful implementation of WBA.


Subject(s)
COVID-19 , Focus Groups , Grounded Theory , Needs Assessment , Workplace , Humans , Female , Male , Clinical Competence , SARS-CoV-2 , General Practitioners/education
4.
J Surg Educ ; 81(7): 967-972, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38816336

ABSTRACT

OBJECTIVE: Workplace-based assessments (WBAs) play an important role in the assessment of surgical trainees. Because these assessment tools are used by a multitude of faculty, inter-rater reliability is important to consider when interpreting WBA data. Although there is evidence supporting the validity of many of these tools, inter-rater reliability evidence is lacking. This study aimed to evaluate the inter-rater reliability of multiple operative WBA tools used in general surgery residency. DESIGN: General surgery residents and teaching faculty were recorded during 6 general surgery operations. Nine faculty raters each reviewed 6 videos and rated each resident on performance (using the Society for Improving Medical Professional Learning (SIMPL) Performance Scale and the Operative Performance Rating System (OPRS) scale), entrustment (using the ten Cate Entrustment-Supervision Scale), and autonomy (using the Zwisch Scale). The ratings were reviewed for inter-rater reliability using percent agreement and intraclass correlations. PARTICIPANTS: Nine faculty members viewed the videos and assigned ratings for multiple WBAs. RESULTS: Absolute intraclass correlation coefficients for each scale ranged from 0.33 to 0.47. CONCLUSIONS: All single-item WBA scales had low to moderate inter-rater reliability. While rater training may improve inter-rater reliability for single observations, many observations by many raters are needed to reliably assess trainee performance in the workplace.
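
The analysis above relies on absolute-agreement intraclass correlations across nine raters and six videos. As a rough illustration only, the Python sketch below computes ICC(A,1) (two-way random effects, absolute agreement, single measure) from a subjects-by-raters matrix; the toy data and the choice of this particular ICC variant are assumptions for illustration, not the study's actual data or code.

```python
import numpy as np

def icc_absolute_single(ratings: np.ndarray) -> float:
    """ICC(A,1): two-way random effects, absolute agreement, single measure.

    ratings: an (n subjects/videos) x (k raters) array with no missing cells.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()   # subjects
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()   # raters
    ss_total = ((ratings - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Illustrative only: 6 operative videos rated by 9 faculty on a 1-4 scale.
rng = np.random.default_rng(0)
toy_ratings = rng.integers(1, 5, size=(6, 9)).astype(float)
print(round(icc_absolute_single(toy_ratings), 2))
```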


Subject(s)
Clinical Competence , Educational Measurement , General Surgery , Internship and Residency , Workplace , General Surgery/education , Reproducibility of Results , Humans , Educational Measurement/methods , Education, Medical, Graduate/methods , Video Recording , Faculty, Medical , Male , Female
5.
BMC Med Educ ; 24(1): 549, 2024 May 17.
Article in English | MEDLINE | ID: mdl-38760773

ABSTRACT

BACKGROUND: In medical education, Entrustable Professional Activities (EPAs) have been gaining momentum for the last decade. To be implemented successfully, such novel educational interventions must accommodate competing needs: those of curriculum designers and those of users in practice. METHODS: We employed a participatory research design, engaging diverse stakeholders in designing an EPA framework. This iterative approach allowed for continuous refinement, shaping a comprehensive blueprint comprising 60 EPAs. Our approach involved two iterative cycles. In the first cycle, we used a modified Delphi methodology with clinical competence committee (CCC) members, asking them whether each EPA should be included. In the second cycle, we used semi-structured interviews with General Practitioner (GP) trainers and trainees to explore their perceptions of the framework and refine it accordingly. RESULTS: During the first cycle, 14 CCC members agreed that all 60 EPAs should be included in the framework. Regarding the formulation of individual EPAs, 20 comments were given and 16 adaptations were made to enhance clarity. In the second cycle, the semi-structured interviews with trainers and trainees echoed the same findings, emphasizing the need for the EPA framework to improve workplace-based assessment and its relevance to real-world clinical scenarios. However, trainees and trainers expressed concerns about implementation challenges, such as the large number of EPAs to be assessed and the perception of EPAs as potentially high-stakes. CONCLUSION: Accommodating competing stakeholders' needs during the design process can significantly enhance EPA implementation. Recognizing users as experts in their own experiences empowers them, enabling a priori identification of implementation barriers and potential pitfalls. By embracing a collaborative approach in which diverse stakeholders contribute their unique viewpoints, we can create effective educational interventions for complex assessment challenges.


Subject(s)
Clinical Competence , Competency-Based Education , Curriculum , Humans , General Practitioners/education , Delphi Technique , Education, Medical, Graduate , Interviews as Topic , Stakeholder Participation , Community-Based Participatory Research
6.
BMC Med Educ ; 24(1): 487, 2024 May 02.
Article in English | MEDLINE | ID: mdl-38698352

ABSTRACT

BACKGROUND: Workplace-based assessment (WBA) used in postgraduate medical education relies on physician supervisors' feedback. However, in a training environment where supervisors are unavailable to assess certain aspects of a resident's performance, nurses are well positioned to do so. The Ottawa Resident Observation Form for Nurses (O-RON) was developed to capture nurses' assessment of trainee performance, and results have demonstrated strong evidence for validity in Orthopedic Surgery. However, different clinical settings may impact a tool's performance. This project studied the use of the O-RON in three different specialties at the University of Ottawa. METHODS: O-RON forms were distributed on Internal Medicine, General Surgery, and Obstetrical wards at the University of Ottawa over nine months. Validity evidence related to quantitative data was collected. Exit interviews with nurse managers were performed and content was thematically analyzed. RESULTS: 179 O-RONs were completed on 30 residents. With four forms per resident, the O-RON's reliability was 0.82. Global judgement responses and frequency of concerns were correlated (r = 0.627, P < 0.001). CONCLUSIONS: Consistent with the original study, the findings demonstrated strong evidence for validity. However, the number of forms collected was less than expected. Exit interviews identified factors impacting form completion, including clinical workloads and interprofessional dynamics.
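
The abstract reports a reliability of 0.82 when four forms per resident are combined, without detailing the reliability model used. Purely as an illustration, the sketch below applies the Spearman-Brown relationship between single-form and aggregate reliability; the assumption that this relationship applies, and the back-calculated single-form value, are mine and not taken from the paper.

```python
def spearman_brown(single_form_r: float, m: int) -> float:
    """Reliability of the mean of m parallel forms (Spearman-Brown prophecy)."""
    return m * single_form_r / (1 + (m - 1) * single_form_r)

def implied_single_form_r(aggregate_r: float, m: int) -> float:
    """Back out the single-form reliability implied by an m-form aggregate."""
    return aggregate_r / (m - aggregate_r * (m - 1))

# If four O-RON forms together yield 0.82, the implied single-form
# reliability under this (assumed) model is about 0.53.
single = implied_single_form_r(0.82, 4)
print(round(single, 2), round(spearman_brown(single, 4), 2))
```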


Subject(s)
Clinical Competence , Internship and Residency , Psychometrics , Humans , Reproducibility of Results , Female , Male , Educational Measurement/methods , Ontario , Internal Medicine/education
7.
Cureus ; 16(4): e58073, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38738047

ABSTRACT

BACKGROUND: Few studies have methodically compiled the body of research on the competency-based medical education (CBME) assessment process or pinpointed knowledge gaps about its structure. Thus, the goals of the study were to thoroughly examine the competency-based medical education assessment framework and to create a model assessment framework applicable in the Indian setting. METHODS: PubMed, MEDLINE (Ovid), EMBASE (Ovid), Scopus, Web of Science, and Google Scholar were the databases searched. The search was restricted to English-language publications about competency-based education and assessment methods published between January 2006 and December 2020. A descriptive overview of the included research (in tabular form) served as the foundation for the data synthesis. RESULTS: The databases provided 732 records, of which 36 fulfilled the inclusion and exclusion criteria. The 36 studies comprised a mix of randomized controlled trials, focus group interviews, questionnaire studies, cross-sectional studies, qualitative studies (3), mixed-method studies, and others. The papers were published in 10 different journals, with the greatest number in BMC Medical Education (18). The average quality score for included studies was 62.53% (range: 35.71-83.33%). Most authors were from the UK (7), followed by the USA (5). The included studies were grouped into seven categories based on their dominant focus: moving away from a behavioristic approach to a constructivist approach to assessment (1 study), formative assessment (FA) and feedback (10 studies), hurdles in the implementation of feedback (4 studies), utilization of computer- or online-based formative tests with automated feedback (5 studies), video feedback (2 studies), e-learning platforms for formative assessment (4 studies), and studies related to workplace-based assessment (WBA)/mini-clinical evaluation exercise (mini-CEX)/direct observation of procedural skills (DOPS) (10 studies). CONCLUSIONS: Various constructivist techniques, such as concept maps, portfolios, and rubrics, can be used for assessment. Self-regulated learning, peer feedback, online formative assessment, online computer-based formative tests with automated feedback, computerized web-based objective structured clinical examination (OSCE) evaluation systems, and the use of narrative feedback instead of numerical scores in the mini-CEX are all ways to increase student involvement in the design and implementation of formative assessment.

8.
Adv Med Educ Pract ; 15: 37-46, 2024.
Article in English | MEDLINE | ID: mdl-38223750

ABSTRACT

Background: Workplace-Based Assessment (WPBA) has been widely utilized for assessing performance in training sites for both formative and summative purposes. With the recent change in the duration of the family medicine (FM) training program in Saudi Arabia from four years to three years, the possible impact of such a change on assessment needs to be investigated. The objective was to explore the experiences of FM residents regarding the use of WPBA as an assessment tool for improving clinical teaching at King Abdulaziz Hospital (KAH), Al Ahsa, Saudi Arabia. Methods: The study used an exploratory qualitative phenomenological approach targeting family medicine residents at KAH. Purposive sampling techniques were used. In this descriptive study, data were collected through one-to-one semi-structured interviews guided by directive prompts. All recorded interviews were transcribed verbatim. An inductive analytical approach was applied for thematic analysis of the transcripts. Results: Fifteen participants were individually interviewed until data saturation was reached. The themes that emerged were organized into the categories of underlying principles of WPBA, the impact of the learning environment, associated opportunities and challenges, and making WPBA more effective. Participants expressed that the orientation provided by the program was insufficient, although the core principles were clear to them. They valued senior peers' support and encouragement in creating a positive learning environment. However, time limits, workload, and a lack of ideal implementation reduced the educational value and effectiveness of WPBA among senior residents. Conclusion: The study examined residents' experiences with WPBA and concluded that low levels of satisfaction were attributed to implementation-related problems. Improvements should be made primarily in two areas: better use of available resources and more systematic prior planning. Revision of the selection and assignment process was suggested, in addition to the implementation of the new curriculum. The research will assist stakeholders in selecting and carrying out evaluation techniques that will enhance residents' abilities.

9.
Anaesth Intensive Care ; 52(1): 6-15, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38006613

ABSTRACT

In 2023, a Diploma of Rural Generalist Anaesthesia (DipRGA) was implemented across Australia. Developed collaboratively by the Australian and New Zealand College of Anaesthetists (ANZCA), the Australian College of Rural and Remote Medicine (ACRRM) and the Royal Australian College of General Practitioners (RACGP), the 12-month qualification is completed during or following ACRRM or RACGP Rural Generalist Fellowship training. Focused on the needs of rural and remote communities for elective and emergency surgery, maternity care, resuscitative care for medical illness or injury, and stabilisation for retrieval, the DipRGA supports rural generalist anaesthetists working within collaborative teams in geographically isolated settings. The goal is a graduate who can anaesthetise American Society of Anesthesiologists physical status class 1, 2 and stable 3 patients for elective surgery, provide obstetric anaesthesia and analgesia, anaesthetise paediatric patients and undertake advanced crisis care within their scope of practice. Crucially, they also recognise both limitations of their skills and local resources available when considering whether to provide care, defer, refer or transfer patients. DipRGA curriculum design commenced by adapting the ANZCA specialist training curriculum with consideration of the training approach of both the ACRRM and the RACGP, particularly the rural and remote context. Curriculum content is addressed in seven entrustable professional activities supported by workplace-based assessments and multisource feedback. Trainees are supervised by rural generalist anaesthetists and specialist anaesthetists, and complete flexible learning activities to accommodate geographical dispersion. Standardised summative assessments include an early test of knowledge and an examination, adapted from the ACRRM structured assessment using multiple patient scenarios.


Subject(s)
Anesthesia, Obstetrical , Anesthesiology , Maternal Health Services , Rural Health Services , Humans , Female , Child , Pregnancy , Australia , Anesthesiology/education
10.
J Pediatr Surg ; 59(1): 31-36, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37845126

ABSTRACT

PURPOSE: Identifying the number of cases required for a fellow to achieve competence has been challenging. Workplace-based assessment (WBA) systems make collecting performance data practical and create the opportunity to translate WBA ratings into probabilistic statements about a fellow's likelihood of performing to a given standard on a subsequent assessment opportunity. METHODS: We compared data from two pediatric surgery training programs that used the performance rating scale from the Society for Improving Medical Professional Learning (SIMPL). We used a Bayesian generalized linear mixed effects model to examine the relationship between past and future performance for three procedures: Laparoscopic Inguinal Hernia Repair, Laparoscopic Gastrostomy Tube Placement, and Pyloromyotomy. RESULTS: For site one, 26 faculty assessed 9 fellows on 16 procedures, yielding 1094 ratings, of which 778 (71%) earned practice-ready ratings. For site two, 25 faculty rated 3 fellows on 4 unique procedures, yielding 234 ratings, of which 151 (65%) were deemed practice-ready. We identified similar model-based future performance expectations, with prior practice-ready ratings having a similar average effect across both sites (Site one, B = 0.25; Site two, B = 0.25). Similar numbers of prior practice-ready ratings were needed for Laparoscopic G-Tube Placement (Site one = 13; Site two = 14), while greater differences were observed for Laparoscopic Inguinal Hernia Repair (Site one = 10; Site two = 15) and Pyloromyotomy (Site one = 10; Site two = 15). CONCLUSION: Our approach to modeling operative performance data is effective at determining future practice readiness of pediatric surgery fellows across multiple faculty and fellow groups. This method could be used to establish minimum case number requirements. TYPE OF STUDY: Original manuscript, Study of Diagnostic Test. LEVEL OF EVIDENCE: II.
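
The abstract describes translating prior SIMPL ratings into the probability of a practice-ready rating on the next assessment, with an average per-rating effect of B = 0.25 at both sites. The sketch below is a deliberately simplified, hypothetical version of that idea: a plain logistic curve that borrows the reported slope but uses a made-up intercept and omits the random effects, so the printed probabilities are illustrative only.

```python
import math

def p_practice_ready(n_prior_ready: int,
                     intercept: float = -2.5,   # invented placeholder value
                     slope: float = 0.25) -> float:
    """Probability the next rating is practice-ready under a simple logistic model.

    The slope echoes the average per-rating effect reported for both sites;
    the intercept is an invented placeholder, not a value from the study,
    and the mixed-effects structure of the actual model is omitted.
    """
    log_odds = intercept + slope * n_prior_ready
    return 1.0 / (1.0 + math.exp(-log_odds))

for n_prior in (0, 5, 10, 15):
    print(n_prior, round(p_practice_ready(n_prior), 2))
```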


Subject(s)
Hernia, Inguinal , Internship and Residency , Laparoscopy , Specialties, Surgical , Child , Humans , Hernia, Inguinal/surgery , Bayes Theorem , Clinical Competence , Specialties, Surgical/education , Laparoscopy/education
11.
J Surg Educ ; 81(1): 17-24, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38036389

ABSTRACT

OBJECTIVE: To examine the readiness of general surgery residents in their final year of training to perform 5 common surgical procedures based on their documented performance during training. DESIGN: Intraoperative performance ratings were analyzed using a Bayesian mixed effects approach, adjusting for rater, trainee, procedure, case complexity, and postgraduate year (PGY) as random effects as well as month in academic year and cumulative, procedure-specific performance per trainee as fixed effects. This model was then used to estimate each PGY 5 trainee's final probability of being able to independently perform each procedure. The actual, documented competency rates for individual trainees were then identified across each of the 5 most common general surgery procedures: appendectomy, cholecystectomy, ventral hernia repair, groin hernia repair, and partial colectomy. SETTING: This study was conducted using data from members of the SIMPL collaborative. PARTICIPANTS: A total of 17,248 evaluations of 927 PGY5 general surgery residents were analyzed from 2015 to 2021. RESULTS: The percentage of residents who requested a SIMPL rating during their PGY5 year and achieved a ≥90% probability of being rated as independent, or "Practice-Ready," was 97.4% for appendectomy, 82.4% for cholecystectomy, 43.5% for ventral hernia repair, 24% for groin hernia repair, and 5.3% for partial colectomy. CONCLUSIONS: There is substantial variation in the demonstrated competency of general surgery residents to perform several common surgical procedures at the end of their training. This variation in readiness calls for careful study of how surgical residents can become more adequately prepared to enter independent practice.


Subject(s)
General Surgery , Hernia, Inguinal , Hernia, Ventral , Internship and Residency , Humans , Bayes Theorem , Clinical Competence , Education, Medical, Graduate/methods , Hernia, Inguinal/surgery , Hernia, Ventral/surgery , General Surgery/education
12.
Article in English | MEDLINE | ID: mdl-38010576

ABSTRACT

First impressions can influence rater-based judgments, but their contribution to rater bias is unclear. Research suggests raters can overcome first impressions in experimental exam contexts with explicit first impressions, but these findings may not generalize to a workplace context with implicit first impressions. The study had two aims: first, to assess whether first impressions affect raters' judgments when workplace performance changes; second, to assess whether explicitly stating these impressions affects subsequent ratings compared to implicitly formed first impressions. Physician raters viewed six videos in which learner performance either changed (Strong to Weak or Weak to Strong) or remained consistent. Raters were assigned to one of two groups. Group one (n = 23, Explicit) made a first impression global rating (FIGR), then scored learners using the Mini-CEX. Group two (n = 22, Implicit) scored learners at the end of the video solely with the Mini-CEX. For the Explicit group, in the Strong to Weak condition, the FIGR (M = 5.94) was higher than the Mini-CEX global rating (GR) (M = 3.02, p < .001). In the Weak to Strong condition, the FIGR (M = 2.44) was lower than the Mini-CEX GR (M = 3.96, p < .001). There was no difference between the FIGR and the Mini-CEX GR in the consistent condition (M = 6.61 and M = 6.65 respectively, p = .84). There were no statistically significant differences in any of the conditions when comparing the two groups' Mini-CEX GRs. Therefore, raters adjusted their judgments based on the learners' performances. Furthermore, raters who made their first impressions explicit showed similar rater bias to raters who followed a more naturalistic process.
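
The abstract reports mean differences between first-impression global ratings and Mini-CEX global ratings without naming the statistical test used. One plausible analysis for within-rater comparisons of this kind is a paired t-test, sketched below with fabricated placeholder scores rather than study data.

```python
import numpy as np
from scipy import stats

# Fabricated per-rater scores for the Explicit group in one condition:
# a first-impression global rating (FIGR) and a final Mini-CEX global rating.
rng = np.random.default_rng(1)
figr = rng.normal(5.9, 1.0, size=23)          # placeholder values
mini_cex_gr = rng.normal(3.0, 1.0, size=23)   # placeholder values

# Paired comparison across the same 23 raters (one plausible analysis choice).
t_stat, p_value = stats.ttest_rel(figr, mini_cex_gr)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```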

13.
Educ Prim Care ; 34(5-6): 268-276, 2023.
Article in English | MEDLINE | ID: mdl-38011869

ABSTRACT

BACKGROUND: In GP training, identifying early predictors of poor summative examination performance can be challenging. We aimed to establish whether external clinical teaching visit (ECTV) performance, measured using a validated instrument (the GP Registrar Competency Assessment Grid, GPR-CAG), is predictive of Royal Australian College of General Practitioners (RACGP) Fellowship examination performance. METHODS: A retrospective cohort study was conducted including GP registrars in New South Wales/Australian Capital Territory with ECTV data recorded during their first training term (GPT1), between 2014 and 2018, who attempted at least one Fellowship examination. Independent variables of interest included the four GPR-CAG factors assessed in GPT1 ('patient-centredness/caring', 'formulating hypotheses/management plans', 'professional responsibilities', 'physical examination skills'). Outcomes of interest included individual scores on the three summative examinations (Applied Knowledge Test (AKT), Key Feature Problem (KFP), and Objective Structured Clinical Examination (OSCE)) and overall pass/fail status. Univariable and multivariable regression analyses were performed. RESULTS: Univariably, there were statistically significant associations (p < 0.01) between all four GPR-CAG factors and all four summative examination outcomes, except for 'formulating hypotheses/management plans' and OSCE score (p = 0.07). On multivariable analysis, each factor was significantly associated (p < 0.05) with at least one exam outcome, and 'physical examination skills' was significantly associated (p < 0.05) with all four exam outcomes. DISCUSSION: ECTV performance, via GPR-CAG scores, is predictive of RACGP Fellowship exam performance. The univariable findings highlight the pragmatic utility of ECTVs in flagging registrars who are at risk of poor exam performance, facilitating early intervention. The multivariable associations of GPR-CAG scores and examination performance suggest that these scores provide predictive ability beyond that of other known predictors.
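
The study fits univariable and multivariable regressions of examination outcomes on the four GPR-CAG factors. As a rough sketch of the multivariable step, the snippet below fits an ordinary least squares model with statsmodels; the column names, simulated values, and coefficients are hypothetical stand-ins for the study's variables, not its data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical registrar-level data; column names are illustrative stand-ins
# for the four GPR-CAG factors and one exam outcome (e.g. an AKT score).
rng = np.random.default_rng(42)
n = 200
df = pd.DataFrame({
    "patient_centredness": rng.normal(0, 1, n),
    "hypotheses_management": rng.normal(0, 1, n),
    "professional_responsibilities": rng.normal(0, 1, n),
    "physical_exam_skills": rng.normal(0, 1, n),
})
df["akt_score"] = (60
                   + 2.0 * df["physical_exam_skills"]
                   + 1.0 * df["patient_centredness"]
                   + rng.normal(0, 5, n))

# Multivariable model: all four factors entered together.
model = smf.ols(
    "akt_score ~ patient_centredness + hypotheses_management"
    " + professional_responsibilities + physical_exam_skills",
    data=df,
).fit()
print(model.summary().tables[1])
```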


Subject(s)
Clinical Competence , General Practice , Humans , Retrospective Studies , Australia , General Practice/education , Family Practice/education
14.
BMC Med Educ ; 23(1): 832, 2023 Nov 06.
Article in English | MEDLINE | ID: mdl-37932732

ABSTRACT

BACKGROUND: South Africa (SA) is on the brink of implementing workplace-based assessments (WBA) in all medical specialist training programmes in the country. Although competency-based medical education (CBME) has been in place for about two decades, WBA offers new and interesting challenges. The literature indicates that WBA has resource, regulatory, educational and social complexities, so implementing it requires a careful approach to this complex challenge. To date, insufficient exploration of WBA practices, experiences, perceptions, and aspirations in healthcare has been undertaken in South Africa or Africa. The aim of this study was to identify factors that could impact WBA implementation from the perspectives of medical specialist educators. The outcomes reported are themes derived from potential barriers and enablers to WBA implementation in the SA context. METHODS: This paper reports on the qualitative data generated from a mixed methods study that employed a parallel convergent design, utilising a self-administered online questionnaire to collect data from participants. Data were analysed thematically and inductively. RESULTS: The themes that emerged were: structural readiness for WBA; staff capacity to implement WBA; quality assurance; and the social dynamics of WBA. CONCLUSIONS: Participants demonstrated impressive levels of insight into their respective working environments, producing an extensive list of barriers and enablers. Despite significant structural and social barriers, this cohort perceives the impending implementation of WBA to be a positive development in registrar training in South Africa. We make recommendations for future research and to the medical specialist educational leaders in SA.


Subject(s)
Educational Measurement , Internship and Residency , Humans , Educational Measurement/methods , South Africa , Workplace , Education, Medical, Graduate/methods , Clinical Competence
15.
J Surg Educ ; 80(10): 1370-1377, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37596105

ABSTRACT

OBJECTIVE: To demonstrate the value of integrating surgical resident Entrustable Professional Activity (EPA) data into a learning analytics platform that provides meaningful feedback for formative and summative decision-making. DESIGN: Description of the Surgical Entrustable Professional Activities (SEPA) analytics dashboard, and examples of summary analytics and intuitive display features. SETTING: Department of Surgery, University of Wisconsin Hospital and Clinics. PARTICIPANTS: Surgery residents, faculty, and residency program administrators. RESULTS: We outline the major functionalities of the SEPA dashboard and offer concrete examples of how these features are utilized by various stakeholders to support progressive entrustment decisions for surgical residents. CONCLUSIONS: Our intuitive analytics platform allows for seamless integration of SEPA microassessment data to support Clinical Competency Committee (CCC) decisions for resident evaluation and provides point of training feedback to faculty and trainees in support of progressive autonomy.

16.
Am J Surg ; 226(5): 588-595, 2023 11.
Article in English | MEDLINE | ID: mdl-37481408

ABSTRACT

BACKGROUND: This study quantifies the number of observations required to reliably assess the operative competence of Core Surgical Trainees (CSTs) in Ireland, using the Supervised Structured Assessment of Operative Performance (SSAOP) tool. METHODS: SSAOPs (April 2016-February 2021) were analysed across a mix of undifferentiated procedures, as well as for three commonly performed general surgery procedures in CST: appendicectomy, abdominal wall hernia repair, and skin/subcutaneous lesion excision. Generalizability and Decision studies determined the number of observations required to achieve dependability indices ≥0.8, appropriate for use in high-stakes assessment. RESULTS: A total of 2,294 SSAOPs were analysed. Four assessors, each observing 10 cases, can generate scores sufficiently reliable for use in high-stakes assessments. Focusing on a selection of core procedures yields more favourable reliability indices. CONCLUSION: Trainers should conduct repeated assessments across a smaller number of procedures to improve reliability. Programs should increase the assessor mix to yield sufficient dependability indices for high-stakes assessment.
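
The reliability analysis above rests on Generalizability and Decision studies, which project a dependability index (often written Φ) for different numbers of assessors and cases. The function below is a much-simplified D-study sketch for a crossed person-by-assessor-by-case design; every variance component is an invented placeholder, so the printed values only illustrate how dependability grows with more observations and are not the SSAOP estimates.

```python
def phi_coefficient(var_person: float, var_assessor: float, var_case: float,
                    var_residual: float, n_assessors: int, n_cases: int) -> float:
    """Dependability (Phi) for a simplified crossed D-study design.

    Absolute error variance shrinks as the assessor and case facets are
    averaged; all variance components passed in are illustrative placeholders.
    """
    abs_error = (var_assessor / n_assessors
                 + var_case / n_cases
                 + var_residual / (n_assessors * n_cases))
    return var_person / (var_person + abs_error)

# Sweep a small design grid and flag combinations reaching the 0.8 threshold.
for n_a in (2, 4, 6):
    for n_c in (5, 10, 20):
        phi = phi_coefficient(0.30, 0.05, 0.20, 0.60, n_a, n_c)
        print(f"{n_a} assessors x {n_c} cases: Phi = {phi:.2f}"
              + ("  (>= 0.8)" if phi >= 0.8 else ""))
```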


Subject(s)
Clinical Competence , Internship and Residency , Humans , Reproducibility of Results , Educational Measurement , Ireland
17.
J Surg Educ ; 80(9): 1302-1310, 2023 09.
Article in English | MEDLINE | ID: mdl-37481412

ABSTRACT

BACKGROUND: Surgical training quality is critical to ensure that trainees receive adequate preparation to perform surgical procedures independently and that patients receive safe, effective, and high-quality care. Numerous surgical training quality indicators have been proposed, investigated and implemented. However, the existing evidence base for these indicators is limited, with most studies originating from English-speaking, high-income countries. OBJECTIVES: This scoping review aimed to identify the range of quality indicators that have been proposed and evaluated in the literature, and to critically evaluate the existing evidence base for these indicators. METHODS: A systematic literature search was conducted using MEDLINE and Embase databases to identify studies reporting on surgical training quality indicators. A total of 68 articles were included in the review. RESULTS: Operative volume is the most commonly cited indicator and has been investigated for its effects on trainee exam performance and career progression. Other indicators include operative diversity, workplace-based assessments, regular evaluation and feedback, academic achievements, formal teaching, learning agreements, and direct observation of procedural skills. However, these indicators are largely based on qualitative analyses and expert opinions and have not been validated quantitatively using clear outcome measures for trainees and patients. CONCLUSIONS: Future research is necessary to establish evidence-based indicators of high-quality surgical training, including in low-resource settings. Quantitative and qualitative studies are required to validate existing indicators and to identify new indicators that are relevant to diverse surgical training environments. Lastly, any approach to surgical training quality must prioritize the benefit to both trainees and patients, ensuring training success, career progression, and patient safety.


Subject(s)
Academic Success , Benchmarking , Humans , Clinical Competence , Educational Measurement , Learning
18.
Br J Anaesth ; 131(3): 503-509, 2023 09.
Article in English | MEDLINE | ID: mdl-37349239

ABSTRACT

Over the past century, education has been a core component for improving patient safety. The initial focus was developing a curriculum and an assessment process. In recent decades, the value of work-based learning has come to the fore. Learning from work, or experiential learning, requires reflection, which is critically dependent on external feedback. Conceptions of feedback have moved from a transactional information transfer from the supervisor to the trainee to a learner-centred and collaborative process occurring in a complex socio-cultural environment. In this narrative review we describe the evolution of the feedback conversation, provide a model synthesising the core concepts of feedback, and offer some guidance for the development of effective feedback in anaesthesia education.


Subject(s)
Anesthesia , Education, Medical, Graduate , Humans , Feedback , Curriculum , Communication , Clinical Competence
19.
Indian J Orthop ; 57(5): 714-717, 2023 May.
Article in English | MEDLINE | ID: mdl-37122673

ABSTRACT

Introduction: The Mini-CEX supports the judicious assessment of competencies in authentic settings by simultaneously assessing trainees' clinical skills and providing feedback on their performance. Because assessment of M.B.B.S. interns' competency in clinical examination skills in the Department of Orthopaedics is lacking, this study was undertaken to introduce the Mini-CEX for M.B.B.S. interns by sensitising faculty and interns. Materials and Methods: A quasi-experimental study was conducted from June to December 2020 among 60 interns posted in the Department of Orthopaedics. After IEC clearance and written informed consent were obtained from the study participants, interns were sensitised and exposed to five Mini-CEX clinical encounters with eight faculty, each involving examination of a patient with a knee or other joint disorder in the outpatient or inpatient clinical setting. The study tool was the Mini-CEX questionnaire developed by the American Board of Internal Medicine (ABIM). Case-specific feedback was provided to interns using the sandwich technique. The reflections and perceptions of interns and faculty were obtained after completion of all Mini-CEX encounters. Results: 96.7% of first encounters were conducted in the outpatient department. On average, one Mini-CEX encounter lasted 17 minutes. Interns had overall domain scores ranging from 5.38 to 5.58. Comparison of mean scores showed a statistically significant improvement (p < 0.0001). All the assessors were satisfied with the Mini-CEX as an assessment tool. Conclusion: Interns and faculty opined that the Mini-CEX improves clinical examination skills and professional development, as the focus is on outcomes and the learning process, with multiple sampling in a longitudinal manner.

20.
Nurse Educ Today ; 126: 105836, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37167832

ABSTRACT

BACKGROUND: Educational and health care organizations that prepare meta-assessors to fulfill their role in the assessment of trainees' performance based on reported observations have little literature to rely on. While the assessment of trainees' performance based on reported observations has been operationalized, we have yet to fully understand the elements that can affect its quality. Closing this gap in the literature will provide valuable insight that could inform the implementation and quality monitoring of the assessment of trainees' performance based on reported observations. OBJECTIVES: The purpose of this study was to explore the elements to consider in the assessment of trainees' performance based on reported observations from the perspectives of meta-assessors. METHODS: The authors adopted Sandelowski's qualitative descriptive approach to interview nurse meta-assessors from two nursing programs. A semi-structured interview guide was used to document the elements to consider in the assessment of nursing trainees' performance based on reported observations, and a survey was used to collect sociodemographic data. The authors conducted a thematic analysis of the interview transcripts. RESULTS: Thirteen meta-assessors participated in the study. Three core themes were identified: (1) meta-assessors' appropriation of their perceived assessment roles and activities, (2) team climate of information sharing, and (3) challenges associated with the assessment of trainees' performance based on reported observations. Each theme comprises several subthemes. CONCLUSIONS: To optimize the quality of the assessment of trainees' performance based on reported observations and ratings, health professions education programs might consider how to better clarify the meta-assessor's roles and activities, and how interventions could be created to promote a climate of information sharing and address the challenges identified. This work will guide educational and health care organizations in better preparing and supporting meta-assessors and preceptors.


Subject(s)
Clinical Competence , Humans , Surveys and Questionnaires , Educational Status