Results 1 - 9 of 9
1.
Learn Environ Res ; 26(1): 161-175, 2023.
Article in English | MEDLINE | ID: mdl-35574193

ABSTRACT

The learning environment comprises the psychological, social, cultural and physical setting in which learning occurs, and it influences student motivation and success. The purpose of the present study was to explore qualitatively, from the perspectives of both students and faculty, the key elements of the learning environment that supported or hindered student learning. We recruited a total of 22 students and 9 faculty to participate in either a focus group or an individual interview session about their perceptions of the learning environment at their university. We analyzed the data using a directed content analysis and organized the themes around the three key dimensions of personal development, relationships, and institutional culture. Within each of these dimensions, we identified subthemes that facilitated or impeded student learning and faculty work. We also discussed similarities between the subthemes identified by students and those identified by faculty.

2.
Learn Environ Res ; 25(1): 59-73, 2022.
Article in English | MEDLINE | ID: mdl-33519295

ABSTRACT

The desire to support student learning and professional development, together with accreditation requirements, creates a need to evaluate the learning environment of educational programs. The Health Education Learning Environment Survey (HELES) is a recently developed global measure of the learning environment for health professions programs. This paper provides evidence of the applicability of the HELES for evaluating the learning environment across four health professions programs: medicine, nursing, occupational therapy and pharmaceutical sciences. Two consecutive years of HELES data were collected from each program at a single university (year 1 = 552 students; year 2 = 745 students) using an anonymous online survey. Reliability analyses across programs and administration years supported the reliability of the tool. Two-way factorial ANOVAs with program and administration year as the independent variables indicated statistically and practically significant differences across programs for four of the seven scales. Overall, these results support the use of the HELES to evaluate student perceptions of the learning environment across multiple health professions programs.
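
A minimal sketch of the two-way factorial ANOVA described above (program x administration year) on a single scale score. The data frame layout, column names, and scores are invented for the example and are not HELES data.

```python
# Illustrative two-way factorial ANOVA: program x administration year on one
# scale score. All data below are fabricated; only the analysis design
# (two crossed factors plus their interaction) mirrors the abstract.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Long format: one row per respondent, two respondents per program-year cell.
df = pd.DataFrame({
    "scale_score": [3.8, 4.0, 4.1, 4.3, 3.5, 3.6, 4.0, 4.2,
                    3.9, 4.1, 3.6, 3.8, 4.2, 4.4, 3.7, 3.9],
    "program": ["medicine", "medicine", "nursing", "nursing",
                "ot", "ot", "pharmacy", "pharmacy"] * 2,
    "year": ["y1"] * 8 + ["y2"] * 8,
})

model = ols("scale_score ~ C(program) * C(year)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # F tests for program, year, interaction
```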

3.
Eval Health Prof ; 43(3): 162-168, 2020 09.
Article in English | MEDLINE | ID: mdl-30832508

ABSTRACT

The learning environment can be broadly conceptualized as the physical, social, and psychological context in which learning and socialization take place. While there is now an expectation that health professions education programs should monitor the quality of their learning environment, existing measures have been criticized for lacking a theoretical foundation and sufficient validity evidence. Guided by Moos's learning environment framework, this study developed and preliminarily validated a global measure of the learning environment. Three pilot tests, conducted on 1,040 undergraduate medical students, refined the measure into the 35-item Health Education Learning Environment Survey (HELES), which consists of six subscales: peer relationships, faculty relationships, work-life balance, clinical skills development, expectations, and educational setting and resources. A final validation study, conducted on another sample of 347 medical students, confirmed its factor structure and examined its reliability and its relation to the Medical School Learning Environment Survey (MSLES). Subscale reliabilities ranged from .78 to .89. The HELES correlated with the MSLES at .79. These results indicate that the HELES can provide a valid and reliable assessment of the learning environment of medical students and, as such, can be used to inform accreditation and program planning in health professions programs.
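
One common way to obtain the subscale reliabilities mentioned above is Cronbach's alpha, computed from an items-by-respondents matrix. The sketch below uses simulated responses, not HELES data; the function itself is the standard alpha formula.

```python
# Cronbach's alpha for a single subscale; rows = respondents, columns = items.
# The response matrix is simulated purely for illustration.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
true_score = rng.normal(size=(50, 1))                          # shared trait per respondent
fake_items = true_score + rng.normal(scale=0.8, size=(50, 5))  # 5 correlated items
print(round(cronbach_alpha(fake_items), 2))
```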


Subject(s)
Education, Medical/organization & administration , Environment , Learning , Surveys and Questionnaires/standards , Adult , Clinical Competence , Faculty, Medical , Female , Humans , Interpersonal Relations , Male , Reproducibility of Results , Work-Life Balance , Young Adult
4.
Acad Med ; 92(11S Association of American Medical Colleges Learn Serve Lead: Proceedings of the 56th Annual Research in Medical Education Sessions): S100-S109, 2017 11.
Article in English | MEDLINE | ID: mdl-29065030

ABSTRACT

PURPOSE: The importance of confidence for learning and performance makes learners' perceptions of readiness for the next level of training valuable indicators of curricular success. The "Readiness for Clerkship" (RfC) and "Readiness for Residency" (RfR) surveys have been shown to provide reliable ratings of the relative effectiveness of various aspects of training. This study examines the generalizability of those results. METHOD: Surveys were administered at four medical schools approximately four months after the start of clerkship and eight months after the start of residency during 2013-2015. Collected data were anonymized. A total of 647 medical students and 483 residents participated. RESULTS: Reliabilities of G = 0.8 could be obtained with only 6 to 12 medical students and 8 to 15 residents. Within MD programs, no meaningful differences in item ratings were observed across cohorts. Residents in each school consistently rated themselves higher than clerkship students on the majority of Medical Expert and Communicator competencies common to both surveys. Similar strengths and weaknesses were identified across programs, but differences were observed on five clerkship items and one residency item. CONCLUSIONS: Across four MD programs, the RfC and RfR surveys provided reliable ratings of the relative effectiveness of aspects of training with small numbers of respondents. The capacity of these surveys to efficiently identify the strengths and weaknesses perceived by cohorts of learners may thereby facilitate program improvement.
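
The respondent counts needed to reach G = 0.8 come from a decision-study projection of the measurement design. The sketch below inverts a Spearman-Brown-type projection to find the smallest number of raters needed for a target coefficient; the single-rater value of 0.30 is an assumption for illustration, since the abstract reports only the resulting ranges.

```python
# Decision-study style projection: minimum number of raters needed to reach a
# target generalizability coefficient, given an assumed single-rater coefficient.
import math

def raters_needed(g_single: float, g_target: float = 0.80) -> int:
    """Invert G_k = k*g1 / (1 + (k-1)*g1) and round up to a whole rater."""
    k = (g_target * (1 - g_single)) / (g_single * (1 - g_target))
    return math.ceil(k)

print(raters_needed(0.30))  # -> 10 raters for G = 0.80 under this assumption
```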


Subject(s)
Clinical Clerkship , Clinical Competence , Education, Medical, Undergraduate , Internship and Residency , Self Concept , Students, Medical , Adult , Female , Humans , Male , Program Evaluation , Surveys and Questionnaires , Young Adult
5.
Adv Health Sci Educ Theory Pract ; 21(2): 359-73, 2016 May.
Article in English | MEDLINE | ID: mdl-26297481

ABSTRACT

Educators often seek to demonstrate the equivalence of groups, such as whether students achieve comparable success regardless of the site at which they trained. A methodological consideration that is often underappreciated is how to operationalize equivalence. This study examined whether a distribution-based approach, based on effect size, can identify an appropriate equivalence threshold for medical education data. Thirty-nine individuals rated program site equivalence on a series of simulated pairwise bar graphs representing one of four measures with which they had prior experience: (1) undergraduate academic achievement, (2) a student experience survey, (3) an Objective Structured Clinical Exam global rating scale, or (4) a licensing exam. Descriptive statistics and repeated measures ANOVA examined the effects on equivalence ratings of (a) the difference between means, (b) variability in scores, and (c) which program site (the larger or smaller) scored higher. The equivalence threshold was defined as the point at which 50 % of participants rated the sites as non-equivalent. Across the four measures, the equivalence thresholds converged to an average effect size of Cohen's d = 0.57 (range of 0.50-0.63). This corresponded to an average mean difference of 10 % (range of 3-13 %). These results are discussed in reference to findings from the health-related quality of life field, which has demonstrated that d = 0.50 represents a consistent threshold for perceived change. This study provides preliminary, empirically based guidance for defining an equivalence threshold for researchers and evaluators conducting equivalence tests.
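
A distribution-based equivalence check of this kind reduces to computing a standardized mean difference and comparing it with the chosen threshold. The sketch below applies the d = 0.50 cut-off discussed above to simulated scores from two sites; the scores and sample sizes are fabricated for the example.

```python
# Cohen's d between two program sites, compared against a d = 0.50 threshold.
# Site scores are simulated; only the thresholding logic reflects the abstract.
import numpy as np

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Standardized mean difference using the pooled standard deviation."""
    n_a, n_b = len(a), len(b)
    pooled_var = ((n_a - 1) * a.var(ddof=1) + (n_b - 1) * b.var(ddof=1)) / (n_a + n_b - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(1)
site_large = rng.normal(loc=72, scale=10, size=120)   # e.g. exam scores in percent
site_small = rng.normal(loc=68, scale=10, size=40)

d = cohens_d(site_large, site_small)
print(f"d = {d:.2f}: {'non-equivalent' if abs(d) > 0.50 else 'equivalent'} at the 0.50 threshold")
```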


Subject(s)
Education, Medical/standards , Educational Measurement/standards , Research/standards , Surveys and Questionnaires/standards , Adolescent , Adult , Data Interpretation, Statistical , Female , Humans , Male , Middle Aged , Young Adult
6.
Acad Med ; 90(11 Suppl): S36-42, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26505099

ABSTRACT

BACKGROUND: Health professions programs continue to search for meaningful and efficient ways to evaluate the quality of education they provide and to support ongoing program improvement. Despite flaws inherent in self-assessment, recent research suggests that aggregated self-assessments reliably rank aspects of competence attained during preclerkship MD training. Given the novelty of those observations, the purpose of this study was to test their generalizability by evaluating an MD program as a whole. METHOD: The Readiness for Residency Survey (RfR) was developed and aligned with the published Readiness for Clerkship Survey (RfC), but focused on the competencies expected to be achieved at graduation. The RfC and RfR were administered electronically four months after the start of clerkship and six months after the start of residency, respectively. Generalizability and decision studies examined the extent to which specific competencies were achieved relative to one another. RESULTS: The reliability of scores assigned by a single resident was G = 0.32. However, a reliability of G = 0.80 could be obtained by averaging over as few as nine residents. Whereas highly rated competencies in the RfC resided within the CanMEDS domains of Professional, Communicator, and Collaborator, five additional Medical Expert competencies emerged as strengths when the program was evaluated after completion by residents. CONCLUSIONS: Aggregated resident self-assessments obtained using the RfR reliably differentiate aspects of competence attained over four years of undergraduate training. The RfR and RfC together can be used as evaluation tools to identify areas of strength and weakness in an undergraduate medical education program.
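
The jump from G = 0.32 for a single resident to roughly G = 0.80 when averaging over about nine residents follows from the standard projection used in decision studies. The short sketch below reproduces that arithmetic; only the 0.32 figure is taken from the abstract.

```python
# Projected generalizability coefficient when ratings are averaged over n raters,
# starting from the single-rater coefficient reported above (G = 0.32).
def projected_g(g_single: float, n_raters: int) -> float:
    return n_raters * g_single / (1 + (n_raters - 1) * g_single)

for n in (1, 5, 9, 15):
    print(n, round(projected_g(0.32, n), 2))  # 1 -> 0.32, 9 -> 0.81, 15 -> 0.88
```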


Subject(s)
Clinical Clerkship , Clinical Competence , Education, Medical, Undergraduate , Internship and Residency , Self-Assessment , Humans , Program Evaluation , Reproducibility of Results , Surveys and Questionnaires
7.
Acad Med ; 90(5): 684-90, 2015 May.
Article in English | MEDLINE | ID: mdl-25629950

ABSTRACT

PURPOSE: Accreditation standards require medical schools to use comparable assessment methods to ensure students in rotation-based clerkships and longitudinal integrated clerkships (LICs) achieve the same learning objectives. The National Board of Medical Examiners (NBME) Clinical Science Subject Examinations (subject exams) are commonly used, but an integrated examination like the NBME Comprehensive Clinical Science Examination (CCSE) may be better suited for LICs. This study examined the comparability of the CCSE and five commonly required subject exams. METHOD: In 2009-2010, third-year medical students in rotation-based clerkships at the University of British Columbia Faculty of Medicine completed subject exams in medicine, obstetrics-gynecology, pediatrics, psychiatry, and surgery for summative purposes following each rotation and a year-end CCSE for formative purposes. Data for 205 students were analyzed to determine the relationship between scores on the CCSE (and its five discipline subscales) and the five subject exams, and the impact of clerkship rotation order. RESULTS: The correlation between the CCSE score and the average score on the five subject exams was high (0.80-0.93). Four subject exam scores were significant predictors of the CCSE score, and scores on the subject exams explained 65%-87% of CCSE score variance. Scores on each subject exam, but not rotation order, were statistically significant in predicting corresponding CCSE discipline subscale scores. CONCLUSIONS: The results provide evidence that these five subject exams and the CCSE measure similar constructs. This suggests that assessment of clerkship-year students' knowledge using the CCSE is comparable to assessment using this set of subject exams.
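
The variance-explained figures above come from regressing the comprehensive exam score on the set of subject-exam scores. The sketch below mirrors that kind of analysis with simulated scores; none of the numbers are NBME data, and the mean-plus-noise data-generating process is assumed only so the example runs.

```python
# Multiple regression of a comprehensive exam score on five subject-exam scores.
# All scores are simulated; the R^2 printed is the share of variance explained.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n_students = 205
subject_scores = rng.normal(loc=75, scale=8, size=(n_students, 5))  # five disciplines
comprehensive = subject_scores.mean(axis=1) + rng.normal(scale=4, size=n_students)

model = LinearRegression().fit(subject_scores, comprehensive)
print(f"R^2 = {model.score(subject_scores, comprehensive):.2f}")
```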


Subject(s)
Clinical Clerkship/methods , Clinical Competence , Clinical Medicine/education , Education, Medical/methods , Educational Measurement/methods , Schools, Medical , Students, Medical , British Columbia , Educational Measurement/standards , Humans , Learning , Retrospective Studies
8.
Acad Med ; 87(10): 1355-60, 2012 Oct.
Article in English | MEDLINE | ID: mdl-22914522

ABSTRACT

PURPOSE: To examine whether aggregated self-assessment data on clerkship readiness can provide meaningful information for evaluating the effectiveness of an educational program. METHOD: The 39-item Readiness for Clerkship survey was developed during academic year 2009-2010 using several key competence documents and expert review. The survey was completed by two cohorts of students (179 from the class of 2011 in February 2010; 171 from the class of 2012 in November 2010) and by their clinical preceptors (384 for the class of 2011; 419 for the class of 2012). Descriptive statistics, Pearson correlation coefficients, ANOVA, and generalizability and decision studies were used to determine whether ratings could differentiate among aspects of a training program. RESULTS: When self-assessments were aggregated across students, their judgments aligned very well with those of faculty raters. The correlation of average scores, calculated for each item between faculty and students, was r=0.88 for 2011 and r=0.91 for 2012. This was only slightly lower than the near-perfect correlations of item averages within groups across successive years (r=0.99 for faculty; r=0.98 for students). Generalizability and decision analyses revealed that adequate interrater reliability can be achieved in this domain with fewer students (9-21) than faculty (26-45). CONCLUSIONS: These results provide evidence that, when aggregated, student self-assessment data from the Readiness for Clerkship survey provide valid data for use in program evaluation that align well with an external standard.
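
The student-faculty alignment reported above rests on aggregating ratings item by item before correlating the two groups. The sketch below reproduces that aggregation logic with simulated ratings; the item-level signal shared by both groups is an assumption made so the example yields a high correlation.

```python
# Correlate item means aggregated over students with item means aggregated over
# faculty. Ratings are simulated; only the aggregation logic mirrors the abstract.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n_items = 39
item_signal = rng.normal(loc=3.5, scale=0.5, size=n_items)   # shared item-level signal

student_ratings = item_signal + rng.normal(scale=0.8, size=(179, n_items))
faculty_ratings = item_signal + rng.normal(scale=0.8, size=(384, n_items))

r, _ = pearsonr(student_ratings.mean(axis=0), faculty_ratings.mean(axis=0))
print(f"item-level correlation of aggregated ratings: r = {r:.2f}")
```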


Subject(s)
Clinical Clerkship , Education, Medical, Undergraduate/standards , Educational Measurement/methods , Program Evaluation/methods , Self-Assessment , Surveys and Questionnaires , Analysis of Variance , British Columbia , Clinical Competence , Factor Analysis, Statistical , Humans , Models, Statistical , Observer Variation
9.
Assessment ; 15(1): 60-71, 2008 Mar.
Article in English | MEDLINE | ID: mdl-18258732

ABSTRACT

Body image measures have largely been developed with younger female samples. Before these measures can be applied to men and to middle-aged and older women, and used to make gender and age comparisons, they must exhibit adequate cross-group measurement invariance. This study examined the age and gender cross-group measurement invariance of the Appearance Schemas Inventory-Revised (ASI-R) and the Body Image Quality of Life Inventory (BIQLI) with a sample of 1,262 adults (422 men and 840 women) aged 18 to 98 years. For the ASI-R, all groups met requirements for configural and metric invariance. Scalar invariance was found only for the three age groups, which indicated that mean comparisons may be conducted across gender for young, middle-aged, and older adults but should not be conducted across age groups within either gender. Results for the BIQLI indicated that observed mean comparisons may be conducted across all age and gender groups.
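
The invariance levels referred to above form a nested hierarchy of constraints on a multi-group measurement model. The notation below is generic (one factor, item j, person i, group g) and is an illustrative summary, not the published ASI-R or BIQLI scoring model.

```latex
% Measurement model for item j, person i, group g:
x_{ij}^{(g)} = \tau_j^{(g)} + \lambda_j^{(g)}\,\xi_i^{(g)} + \varepsilon_{ij}^{(g)}
% Configural invariance: the same items load on the same factor in every group.
% Metric invariance:  \lambda_j^{(g)} = \lambda_j \ \text{for all } g  (equal loadings)
% Scalar invariance:  \lambda_j^{(g)} = \lambda_j \ \text{and} \ \tau_j^{(g)} = \tau_j
%                     (equal loadings and intercepts), the condition required before
%                     observed means can be compared across groups.
```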


Subject(s)
Body Image , Quality of Life , Adolescent , Adult , Age Factors , Aged , Aged, 80 and over , Female , Humans , Male , Middle Aged , Psychological Tests , Psychometrics , Sex Factors , Surveys and Questionnaires