Results 1 - 20 of 129
1.
BMC Med Res Methodol ; 20(1): 293, 2020 Dec 3.
Article in English | MEDLINE | ID: mdl-33267819

ABSTRACT

BACKGROUND: Scores on an outcome measurement instrument depend on the type and settings of the instrument used, how instructions are given to patients, how professionals administer and score the instrument, etc. The impact of all these sources of variation on scores can be assessed in studies on reliability and measurement error, if properly designed and analyzed. The aim of this study was to develop standards to assess the quality of studies on reliability and measurement error of clinician-reported outcome measurement instruments, performance-based outcome measurement instruments, and laboratory values. METHODS: We conducted a 3-round Delphi study involving 52 panelists. RESULTS: Consensus was reached on how a comprehensive research question can be deduced from the design of a reliability study to determine how the results of a study inform us about the quality of the outcome measurement instrument at issue. Consensus was reached on the components of outcome measurement instruments, i.e. the potential sources of variation. Next, we reached consensus on standards on design requirements (n = 5), standards on preferred statistical methods for reliability (n = 3) and measurement error (n = 2), and their ratings on a four-point scale. There was one term for a component and one rating of one standard on which no consensus was reached; these therefore required a decision by the steering committee. CONCLUSION: We developed a tool that enables researchers with and without thorough knowledge of measurement properties to assess the quality of a study on reliability and measurement error of outcome measurement instruments.
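
The abstract does not name the preferred statistical methods, but in this literature reliability is usually expressed as an intraclass correlation coefficient (ICC) and measurement error as the standard error of measurement (SEM). A minimal sketch of both, assuming a two-way random-effects design with patients crossed with raters (an assumption; the paper's standards cover more sources of variation) and illustrative data:

```python
import numpy as np

def icc_sem(scores):
    """ICC(2,1) for absolute agreement and the SEM, from an
    n-subjects x k-raters score matrix (two-way random effects)."""
    X = np.asarray(scores, dtype=float)
    n, k = X.shape
    grand = X.mean()
    ss_rows = k * ((X.mean(axis=1) - grand) ** 2).sum()      # subjects
    ss_cols = n * ((X.mean(axis=0) - grand) ** 2).sum()      # raters
    ss_err = ((X - grand) ** 2).sum() - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    icc = (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
    sem = np.sqrt(max((ms_c - ms_e) / n, 0.0) + ms_e)        # rater + residual variance
    return icc, sem

ratings = np.array([[7, 8, 7], [5, 5, 6], [9, 9, 8], [4, 5, 4]])  # 4 patients, 3 raters
print(icc_sem(ratings))
```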


Subject(s)
Delphi Technique , Bias , Consensus , Humans , Reproducibility of Results
2.
Med Teach ; 42(2): 213-220, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31622126

ABSTRACT

Introduction: Programmatic assessment (PA) is an approach to assessment aimed at optimizing learning, which continues to gain educational momentum. However, the theoretical underpinnings of PA have not been clearly described. An explanation of the theoretical underpinnings of PA will allow educators to gain a better understanding of this approach and, perhaps, facilitate its use and effective implementation. The purpose of this article is twofold: first, to describe salient theoretical perspectives on PA; second, to examine how theory may help educators to develop effective PA programs, helping to overcome challenges around PA. Results: We outline a number of learning theories that underpin key educational principles of PA: constructivist and social constructivist theory supporting meaning making and longitudinality; cognitivist and cognitive development orientations scaffolding the practice of a continuous feedback process; theory of instructional design underpinning assessment as learning; and self-determination theory (SDT), self-regulated learning theory (SRL), and principles of deliberate practice providing theoretical tenets for student agency and accountability. Conclusion: The construction of a plausible and coherent link between key educational principles of PA and learning theories should enable educators to pose new and important inquiries, reflect on their assessment practices, and help overcome future challenges in the development and implementation of PA in their programs.


Subject(s)
Educational Measurement , Formative Feedback , Learning , Cognition , Humans , Students
3.
Eur J Dent Educ ; 22 Suppl 1: 21-27, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29601682

ABSTRACT

Assessments are widely used in dental education to record the academic progress of students and ultimately determine whether they are ready to begin independent dental practice. Whilst some would consider this a "rite-of-passage" of learning, the concept of assessment in education is being challenged to allow the evolution of "assessment for learning." This serves as an economical use of learning resources whilst allowing our learners to prove their knowledge and skills and demonstrate competence. The Association for Dental Education in Europe and the American Dental Education Association held a joint international meeting in London in May 2017, allowing experts in dental education to come together for the purposes of Shaping the Future of Dental Education. Assessment in a Global Context was one topic in which international leaders could discuss different methods of assessment, identifying the positives and the pitfalls and critiquing the method of implementation to determine the optimum assessment for a learner studying to be a healthcare professional. A post-workshop survey identified that educators were thinking differently about assessment: instead of working as individuals providing isolated assessments, the general consensus was that a longitudinally orientated, systematic and programmatic approach to assessment provided greater reliability and improved the ability to demonstrate learning.


Subject(s)
Education, Dental/standards , Educational Measurement , International Cooperation , Clinical Competence/standards , Congresses as Topic , Education , Education, Dental/methods , Education, Dental/trends , Educational Measurement/methods , Educational Measurement/standards , Forecasting , Humans
4.
Med Teach ; 39(11): 1174-1181, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28784026

ABSTRACT

BACKGROUND: In clerkships, students are expected to self-regulate their learning. How clinical departments and their routine approaches to clerkships influence students' self-regulated learning (SRL) is unknown. AIM: This study explores how characteristic routines of clinical departments influence medical students' SRL. METHODS: Six focus groups including 39 purposively sampled participants from one Dutch university were organized to study, from a constructivist paradigm and using grounded theory methodology, how characteristic routines of clinical departments influenced medical students' SRL. The focus groups were audio recorded, transcribed verbatim and analyzed iteratively using constant comparison and open, axial and interpretive coding. RESULTS: Students described that clinical departments influenced their SRL through routines which affected the professional relationships they could engage in and their perception of a department's invested effort in them. Students' SRL in a clerkship can be supported by enabling them to engage others in their SRL and by having them feel that effort is invested in their learning. CONCLUSIONS: Our study gives practical insight into how clinical departments influence students' SRL. Clinical departments can affect students' motivation to engage in SRL, the variety of SRL strategies that students can use, and how meaningful students perceive their SRL experiences to be.


Subject(s)
Clinical Clerkship/organization & administration , Self-Control/psychology , Students, Medical/psychology , Workplace/psychology , Adult , Clinical Competence , Cooperative Behavior , Environment , Female , Focus Groups , Grounded Theory , Humans , Interpersonal Relations , Learning , Male , Motivation , Netherlands , Patient Care Team/organization & administration , Self Efficacy , Young Adult
5.
BMC Med Educ ; 15: 237, 2015 Dec 30.
Article in English | MEDLINE | ID: mdl-26715145

ABSTRACT

BACKGROUND: Evaluations of clinical assessments that use judgement-based methods have frequently shown them to have sub-optimal reliability and internal validity evidence for their interpretation and intended use. The aim of this study was to enhance that validity evidence by an evaluation of the internal validity and reliability of competency constructs from supervisors' end-of-term summative assessments for prevocational medical trainees. METHODS: The populations were medical trainees preparing for full registration as a medical practitioner (n = 74) and supervisors who undertook ≥2 end-of-term summative assessments (n = 349) from a single institution. Confirmatory factor analysis was used to evaluate assessment internal construct validity. The hypothesised competency construct model to be tested, identified by exploratory factor analysis, had a theoretical basis established in the workplace-psychology literature. Comparisons were made with competing models of potential competency constructs, including the competency construct model of the original assessment. The optimal model for the competency constructs was identified using model fit and measurement invariance analysis. Construct homogeneity was assessed by Cronbach's α. Reliability measures were variance components of individual competency items and the identified competency constructs, and the number of assessments needed to achieve adequate reliability of R > 0.80. RESULTS: The hypothesised competency constructs of "general professional job performance", "clinical skills" and "professional abilities" provide a good model fit to the data, and a better fit than all alternative models. Model fit indices were χ²/df = 2.8; RMSEA = 0.073 (CI 0.057-0.088); CFI = 0.93; TLI = 0.95; SRMR = 0.039; WRMR = 0.93; AIC = 3879; BIC = 4018. The optimal model had adequate measurement invariance, with nested analysis of important population subgroups supporting the presence of full metric invariance. Reliability estimates for the competency construct "general professional job performance" indicated a resource-efficient and reliable assessment for such a construct (6 assessments for an R > 0.80). Item homogeneity was good (Cronbach's α = 0.899). Other competency constructs are resource intensive, requiring ≥11 assessments for a reliable assessment score. CONCLUSION: Internal validity and reliability of clinical competence assessments using judgement-based methods are acceptable when the actual competency constructs used by assessors are adequately identified. Validation for interpretation and use of supervisors' assessments in local training schemes is feasible using standard methods for gathering validity evidence.
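
The "6 assessments for an R > 0.80" style of result is what the Spearman-Brown prophecy formula produces once the reliability of a single assessment is known from the variance components. A minimal sketch; the single-assessment reliability of 0.40 below is illustrative (chosen because it yields six assessments), not a value reported in the paper:

```python
import math

def composite_reliability(r_single, n):
    """Spearman-Brown: reliability of the mean of n parallel assessments."""
    return n * r_single / (1 + (n - 1) * r_single)

def assessments_needed(r_single, target=0.80):
    """Smallest n whose mean score reaches the target reliability."""
    n = target * (1 - r_single) / (r_single * (1 - target))
    return math.ceil(round(n, 9))  # round first to avoid float noise at exact solutions

print(assessments_needed(0.40))        # -> 6
print(composite_reliability(0.40, 6))  # -> 0.8
```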


Subject(s)
Clinical Competence/standards , Educational Measurement/standards , Medical Staff, Hospital/standards , Administrative Personnel/standards , Australia , Certification/standards , Educational Measurement/methods , Factor Analysis, Statistical , Female , Humans , Judgment , Male , Psychometrics , Reproducibility of Results
6.
BMC Med Educ ; 15: 140, 2015 Aug 26.
Article in English | MEDLINE | ID: mdl-26306762

ABSTRACT

BACKGROUND: Problem-based learning (PBL) is a powerful learning activity, but fidelity to intended models may slip and student engagement wane, negatively impacting learning processes and outcomes. One potential solution to this degradation is to encourage self-assessment in the PBL tutorial. Self-assessment is a central component of the self-regulation of student learning behaviours. There are few measures to investigate self-assessment relevant to PBL processes. We developed a Self-assessment Scale on Active Learning and Critical Thinking (SSACT) to address this gap. We wished to demonstrate evidence of its validity in the context of PBL by exploring its internal structure. METHODS: We used a mixed methods approach to scale development. We developed scale items from a qualitative investigation, a literature review, and consideration of previously existing tools used for study of the PBL process. Expert review panels evaluated its content; a process of validation subsequently reduced the pool of items. We used structural equation modelling to undertake a confirmatory factor analysis (CFA) of the SSACT and computed coefficient alpha. RESULTS: The 14-item SSACT consisted of two domains, "active learning" and "critical thinking." The factorial validity of the SSACT was evidenced by all items loading significantly on their expected factors, a good model fit for the data, and good stability across two independent samples. Each subscale had good internal reliability (>0.8) and the subscales correlated strongly with each other. CONCLUSIONS: The SSACT has sufficient evidence of its validity to support its use in the PBL process to encourage students to self-assess. The implementation of the SSACT may assist students to improve the quality of their learning in achieving PBL goals such as critical thinking and self-directed learning.
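
The internal reliability figure (> 0.8 per subscale) is coefficient alpha computed from item-level scores. A minimal sketch with hypothetical response data; real SSACT responses, not random numbers, are what produced values above 0.8:

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for a respondents x items array of numeric scores."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum() / X.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(120, 7))  # hypothetical: 120 students, one 7-item subscale
print(cronbach_alpha(responses))               # near 0 for random data, by design
```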


Subject(s)
Educational Measurement/methods , Problem-Based Learning/methods , Students, Medical/psychology , Humans , Learning , Reproducibility of Results , Self-Assessment , Thinking
7.
Med Teach ; 37(7): 641-646, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25410481

ABSTRACT

Programmatic assessment is an integral approach to the design of an assessment program with the intent to optimise its learning function, its decision-making function and its curriculum quality-assurance function. Individual methods of assessment, purposefully chosen for their alignment with the curriculum outcomes and their information value for the learner, the teacher and the organisation, are seen as individual data points. The information value of these individual data points is maximised by giving feedback to the learner. There is a decoupling of assessment moment and decision moment. Intermediate and high-stakes decisions are based on multiple data points after a meaningful aggregation of information and supported by rigorous organisational procedures to ensure their dependability. Self-regulation of learning, through analysis of the assessment information and the attainment of the ensuing learning goals, is scaffolded by a mentoring system. Programmatic assessment-for-learning can be applied to any part of the training continuum, provided that the underlying learning conception is constructivist. This paper provides concrete recommendations for implementation of programmatic assessment.

8.
Nurse Educ Today ; 35(2): 341-6, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25497139

ABSTRACT

Although competency-based education is well established in health care education, research shows that the competencies do not always match the reality of clinical workplaces. Therefore, there is a need to design feasible and evidence-based competency frameworks that fit the workplace reality. This theoretical paper outlines a competency-based framework, designed to facilitate learning, assessment and supervision in clinical workplace education. Integration is the cornerstone of this holistic competency framework.


Subject(s)
Clinical Competence , Competency-Based Education/methods , Education, Nursing , Educational Measurement , Workplace
9.
Midwifery ; 31(1): 90-4, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25017173

ABSTRACT

BACKGROUND: increasingly, reflection is highlighted as integral to core practice competencies, but empirical research into the relationship between reflection and performance in the clinical workplace is scarce. AIM: this study investigated the relationship between reflection ability and clinical performance. METHODS: we designed a cross-sectional and a retrospective-longitudinal cohort study. Data from first, second and third year midwifery students were collected to study the variables 'clinical performance' and 'reflection ability'. Data were analysed with SPSS for Windows, Release 20.0. Descriptive statistics, Pearson's product moment correlation coefficients (r) and r² values were computed to investigate associations between the research variables. FINDINGS: the results showed a moderate observed correlation between reflection ability and clinical performance scores. When adopting a cross-sectional perspective, all correlation values were significant (p<0.01) and above 0.4, with the exception of the third year correlations. Assuming perfect reliability in the measurement, the adjusted correlations for year 2 and year 3 indicated a high association between reflection ability and clinical performance (>0.6). The results based on the retrospective-longitudinal data set explained a moderate proportion of the variance after correction for attenuation. Finally, the results indicate that 'reflection ability' scores of earlier years are significantly related to 'clinical performance' scores of subsequent years. These results suggest (1) that reflection ability is linked to clinical performance; (2) that written reflections are an important, but not the sole, way to assess professional competence; and (3) that reflection is a contributor to clinical performance improvement. CONCLUSIONS: the data showed a moderate but significant relationship between 'reflection ability' and 'clinical performance' scores in the clinical practice of midwifery students. Reflection therefore seems an important component of professional competence.
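
The adjusted correlations mentioned here come from the classical correction for attenuation: the observed correlation is divided by the square root of the product of the two measures' reliabilities. A minimal sketch; the reliability values are placeholders, not figures from the study:

```python
import math

def disattenuate(r_obs, rel_x, rel_y):
    """Correct an observed correlation for unreliability in both measures."""
    return r_obs / math.sqrt(rel_x * rel_y)

# e.g. an observed r of 0.45 with reliabilities of 0.75 and 0.70 (placeholders)
print(disattenuate(0.45, 0.75, 0.70))  # -> ~0.62
```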


Subject(s)
Clinical Competence/standards , Students, Nursing/psychology , Belgium , Cohort Studies , Cross-Sectional Studies , Education, Nursing, Graduate , Female , Humans , Pregnancy , Retrospective Studies
10.
Perspect Med Educ ; 3(3): 222-232, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24925627

ABSTRACT

Educational practice and educational research are not aligned with each other. Current educational practice relies heavily on information transmission or content delivery to learners. Yet evidence shows that delivery is only a minor part of learning. To illustrate the directions we might take to find better educational strategies, six areas of educational evidence are briefly reviewed. The flipped classroom idea is proposed to shift our expenditure and focus in education. All information delivery could be web distributed, thus creating more time for other, more expensive educational strategies to support the learner. In research, our focus should shift from comparing one curriculum with another to research that explains why things work in education and under which conditions. This may generate ideas for creative designers to develop new educational strategies. These best practices should be shared and further researched. At the same time, attention should be paid to implementation and to the realization that teachers learn in a way very similar to the people they teach. If we take the evidence seriously, our educational practice will look quite different to the way it does now.

11.
Nurse Educ Pract ; 14(4): 441-6, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24780309

ABSTRACT

BACKGROUND: Self-directed learning is an educational concept that has received increasing attention. The recent workplace literature, however, reports problems with the facilitation of self-directed learning in clinical practice. We developed the Midwifery Assessment and Feedback Instrument (MAFI) as a framework to facilitate self-directed learning. In the present study, we sought clinical supervisors' perceptions of the usefulness of the MAFI. METHODS: Interviews with fifteen clinical supervisors were audio-taped, transcribed verbatim and analysed thematically using Atlas-Ti software for qualitative data analysis. RESULTS: Four themes emerged from the analysis: (1) the competency-based educational structure promotes the setting of realistic learning outcomes and a focus on competency development, (2) instructing students to write reflections facilitates student-centred supervision, (3) creating a feedback culture is necessary to achieve continuity in supervision, and (4) integrating feedback and assessment might facilitate competency development on the condition that evidence is discussed during assessment meetings. Supervisors stressed the need for direct observation and for instruction on how to facilitate a self-directed learning process. CONCLUSION: The MAFI appears to be a useful framework to promote self-directed learning in clinical practice. The effect can be advanced by creating a feedback and assessment culture in which learners and supervisors share the responsibility for developing self-directed learning.


Subject(s)
Attitude of Health Personnel , Clinical Competence , Competency-Based Education/organization & administration , Educational Measurement , Midwifery/education , Nurse Administrators/psychology , Programmed Instructions as Topic , Belgium , Feedback , Female , Humans , Pregnancy , Program Evaluation
12.
Med Teach ; 36(7): 602-7, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24787531

ABSTRACT

BACKGROUND: The development of reflective learning skills is a continuous process that needs scaffolding. It can be described as a continuum, with the focus of reflection differing in granularity from recent, concrete activities to global competency development. AIM: To explore learners' perceptions regarding the effects of two reflective writing activities designed to stimulate reflection at different degrees of granularity during clinical training. METHODS: A total of 142 respondents (students and recent graduates) completed a questionnaire. Quantitative and qualitative data were triangulated. RESULTS: Immediate reflection-on-action was perceived to be more valuable than delayed reflection-on-competency-development because it facilitated day-to-day improvement. Delayed reflection was perceived to facilitate overall self-assessment, self-confidence and continuous improvement, but this perception was found mainly among graduates. Detailed reflection immediately after a challenging learning experience and broad reflection on progress appeared to serve different learning goals and consequently require different arrangements regarding feedback and timing. CONCLUSIONS: Granularity of focus has consequences for scaffolding reflective learning, with immediate reflection on concrete events and reflection on long-term progress requiring different approaches. Learners appeared to prefer immediate reflection-on-action.


Subject(s)
Clinical Competence/standards , Midwifery/education , Problem-Based Learning/standards , Self-Assessment , Students, Health Occupations/psychology , Belgium , Humans , Problem-Based Learning/methods , Program Evaluation , Surveys and Questionnaires , Time Factors
13.
Med Teach ; 35(9): 772-8, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23808841

ABSTRACT

BACKGROUND: Although the literature suggests that reflection has a positive impact on learning, there is a paucity of evidence to support this notion. AIM: We investigated feedback and reflection in relation to the likelihood that feedback will be used to inform action plans. We hypothesised that feedback and reflection present a cumulative sequence (i.e. trainees only reflect on their performance when trainers have provided specific feedback) and we hypothesised a supplementary effect of reflection. METHOD: We analysed copies of assessment forms containing trainees' reflections and trainers' feedback on observed clinical performance. We determined whether the response patterns revealed cumulative sequences in line with the Guttman scale. We further examined the relationship between reflection, feedback and the mean number of specific comments related to an action plan (ANOVA), and we calculated two effect sizes. RESULTS: Both hypotheses were confirmed by the results. The response pattern showed an almost perfect fit with the Guttman scale (0.99), and reflection appears to have a supplementary effect on the variable action plan. CONCLUSIONS: Reflection only occurs when a trainer has provided specific feedback; trainees who reflect on their performance are more likely to make use of feedback. These results confirm findings and suggestions reported in the literature.
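
The 0.99 figure is a coefficient of reproducibility: one minus the proportion of responses deviating from the ideal cumulative (Guttman) pattern. A minimal sketch for binary indicators, using the common convention that each respondent's ideal pattern is fixed by their total score; the data and item ordering are illustrative:

```python
import numpy as np

def reproducibility(responses):
    """responses: forms x items (0/1), items ordered from most to least often endorsed.
    Each cell deviating from the ideal pattern implied by the row total counts as an error."""
    R = np.asarray(responses)
    errors = 0
    for row in R:
        ideal = np.zeros_like(row)
        ideal[: int(row.sum())] = 1   # e.g. a total of 2 out of 3 implies (1, 1, 0)
        errors += int((row != ideal).sum())
    return 1 - errors / R.size

forms = np.array([[1, 1, 0], [1, 0, 0], [1, 1, 1], [1, 0, 1]])
print(reproducibility(forms))         # -> 0.833...; values near 1 indicate a cumulative scale
```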


Subject(s)
Education, Medical, Graduate/methods , Educational Measurement , Feedback , General Practice/education , Self-Assessment , Cross-Sectional Studies , Female , Humans , Male , Netherlands
14.
Adv Health Sci Educ Theory Pract ; 18(5): 1087-102, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23494202

ABSTRACT

In recent years, postgraduate assessment programmes around the world have embraced workplace-based assessment (WBA) and its related tools. Despite their widespread use, results of studies on the validity and reliability of these tools have been variable. Although in many countries decisions about residents' continuation of training and certification as a specialist are based on the composite results of different WBAs collected in a portfolio, to our knowledge, the reliability of such a WBA toolbox has never been investigated. Using generalisability theory, we analysed the separate and composite reliability of three WBA tools [mini-Clinical Evaluation Exercise (mini-CEX), direct observation of procedural skills (DOPS), and multisource feedback (MSF)] included in a resident portfolio. G-studies and D-studies of 12,779 WBAs from a total of 953 residents showed that a reliability coefficient of 0.80 was obtained for eight mini-CEXs, nine DOPS, and nine MSF rounds, whilst the same reliability was found for seven mini-CEXs, eight DOPS, and one MSF when combined in a portfolio. At the end of the first year of residency, a portfolio with five mini-CEXs, six DOPS, and one MSF afforded reliable judgement. The results support the conclusion that several WBA tools combined in a portfolio can be a feasible and reliable method for high-stakes judgements.
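
The D-studies behind these numbers project a generalisability (G) coefficient for the mean of n observations: person variance divided by person variance plus error variance shrunk by n. A minimal sketch with made-up variance components, chosen so the coefficient crosses 0.80 at eight observations to mirror the mini-CEX result; the paper's actual components are not given in the abstract:

```python
def g_coefficient(var_person, var_error, n):
    """D-study projection: G coefficient for the mean of n observations."""
    return var_person / (var_person + var_error / n)

var_p, var_e = 0.5, 1.0   # made-up components; G reaches 0.80 once n >= 4 * var_e / var_p
for n in range(1, 10):
    print(n, round(g_coefficient(var_p, var_e, n), 3))  # crosses 0.80 at n = 8
```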


Subject(s)
Education, Medical, Graduate , Educational Measurement/methods , Medicine/standards , Workplace , Female , Humans , Internship and Residency , Male , Netherlands , Reproducibility of Results
15.
Adv Health Sci Educ Theory Pract ; 18(3): 375-96, 2013 Aug.
Article in English | MEDLINE | ID: mdl-22592323

ABSTRACT

Weaknesses in the nature of rater judgments are generally considered to compromise the utility of workplace-based assessment (WBA). In order to gain insight into the underpinnings of rater behaviours, we investigated how raters form impressions of and make judgments on trainee performance. Using theoretical frameworks of social cognition and person perception, we explored raters' implicit performance theories, use of task-specific performance schemas and the formation of person schemas during WBA. We used think-aloud procedures and verbal protocol analysis to investigate schema-based processing by experienced (N = 18) and inexperienced (N = 16) raters (supervisor-raters in general practice residency training). Qualitative data analysis was used to explore schema content and usage. We quantitatively assessed rater idiosyncrasy in the use of performance schemas and we investigated effects of rater expertise on the use of (task-specific) performance schemas. Raters used different schemas in judging trainee performance. We developed a normative performance theory comprising seventeen inter-related performance dimensions. Levels of rater idiosyncrasy were substantial and unrelated to rater expertise. Experienced raters made significantly more use of task-specific performance schemas compared to inexperienced raters, suggesting more differentiated performance schemas in experienced raters. Most raters started to develop person schemas the moment they began to observe trainee performance. The findings further our understanding of processes underpinning judgment and decision making in WBA. Raters make and justify judgments based on personal theories and performance constructs. Raters' information processing seems to be affected by differences in rater expertise. The results of this study can help to improve rater training, the design of assessment instruments and decision making in WBA.


Subject(s)
Clinical Competence/standards , Educational Measurement/methods , Educational Measurement/standards , Humans , Internship and Residency/standards , Physicians/standards , Video Recording
16.
Adv Health Sci Educ Theory Pract ; 18(4): 701-25, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23053869

ABSTRACT

Supervisor assessments are critical for both formative and summative assessment in the workplace. Supervisor ratings remain an important source of such assessment in many educational jurisdictions even though there is ambiguity about their validity and reliability. The aims of this evaluation are to explore: (1) the construct validity of ward-based supervisor competency assessments; (2) the reliability of supervisors for observing any overarching domain constructs identified (factors); (3) the stability of factors across subgroups of contexts, supervisors and trainees; and (4) the position of the observations compared to the established literature. Evaluated assessments were all those used to judge intern (trainee) suitability to become an unconditionally registered medical practitioner in the Australian Capital Territory, Australia in 2007-2008. Initial construct identification was by traditional exploratory factor analysis (EFA) using principal component analysis with Varimax rotation. Factor stability was explored by EFA of subgroups in different contexts, such as hospital type, and of different types of supervisors and trainees. The unit of analysis was each assessment, and all available assessments were included without aggregation of any scores to obtain the factors. Reliability of identified constructs was assessed by variance components analysis of the summed trainee scores for each factor and by the number of assessments needed to provide an acceptably reliable assessment using the construct, the reliability unit of analysis being the score for each factor for every assessment. For the 374 assessments from 74 trainees and 73 supervisors, the EFA resulted in 3 factors identified from the scree plot, accounting for only 68% of the variance: factor 1 had features of a "general professional job performance" competency (eigenvalue 7.630; variance 54.5%); factor 2, "clinical skills" (eigenvalue 1.036; variance 7.4%); and factor 3, "professional and personal" competency (eigenvalue 0.867; variance 6.2%). The percentages of trainee score variance for the summed competency item scores for factors 1, 2 and 3 were 40.4%, 27.4% and 22.9%, respectively. The numbers of assessments needed to give a reliability coefficient of 0.80 were 6, 11 and 13, respectively. The factor structure remained stable for subgroups of female trainees, Australian graduate trainees, the central hospital, surgeons, staff specialists, visiting medical officers and the separation into single years. Physicians as supervisors, male trainees and male supervisors each showed a different grouping of items within 3 factors, all of which had competency items that collapsed into the predefined "face value" constructs of competence. These observations add new insights compared to the established literature. For this setting, most supervisors appear to be assessing a dominant construct domain which is similar to a general professional job performance competency. This global construct consists of individual competency items that supervisors spontaneously align, and it has acceptable assessment reliability. However, factor structure instability between different populations of supervisors and trainees means that subpopulations of trainees may be assessed differently and that some subpopulations of supervisors are assessing the same trainees with different constructs than other supervisors. The lack of competency criterion standardisation of supervisors' assessments brings into question the validity of this assessment method as currently used.
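
The extraction step reported here (principal components of the item correlation matrix, retention judged from the scree plot) can be sketched with numpy alone; Varimax rotation would then be applied to the retained loadings. A minimal sketch with hypothetical data; the 14-item count is an assumption, as the abstract does not state how many items the form contained:

```python
import numpy as np

rng = np.random.default_rng(1)
scores = rng.normal(size=(374, 14))       # hypothetical: 374 assessments x 14 items

corr = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]  # eigenvalues of the correlation matrix, descending

print(eigvals[:4].round(3))               # inspect the leading eigenvalues for a scree "elbow"
print(round(eigvals[:3].sum() / eigvals.sum(), 3))  # variance share of the first 3 components
# The paper retained 3 factors (eigenvalues 7.630, 1.036, 0.867) explaining 68% of the
# variance; random data will not reproduce that structure.
```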


Subject(s)
Clinical Competence/standards , Employee Performance Appraisal/standards , Medical Staff, Hospital , Australian Capital Territory , Factor Analysis, Statistical , Female , Humans , Male , Reproducibility of Results
17.
Ned Tijdschr Tandheelkd ; 119(6): 302-5, 2012 Jun.
Article in Dutch | MEDLINE | ID: mdl-22812268

ABSTRACT

Educational research has shown not only that student characteristics are of major importance for study success, but also that education does make a difference. Essentially, teaching is about stimulating students to invest time in learning and to use that time as effectively as possible. Assessment, goal-orientated work, and feedback have a major effect. The teacher is the key figure. With the aim of better understanding teaching and learning, educational researchers increasingly use findings from other disciplines. A pitfall is to apply the findings of educational research without taking into consideration the context and the specific characteristics of students and teachers. Because of the large number of factors that influence the results of education, educational science has been referred to as 'the hardest science of all'.


Subject(s)
Education, Dental , Psychology, Educational , Students, Dental/psychology , Teaching/methods , Humans , Learning , Motivation
18.
Med Teach ; 34 Suppl 1: S32-6, 2012.
Article in English | MEDLINE | ID: mdl-22409188

ABSTRACT

It has been shown that medical students have a higher rate of depressive symptoms than the general population and age- and sex-matched peers. This study aimed to estimate the prevalence of depressive symptoms among the medical students of a large school following a traditional curriculum, and its relation to personal background variables. A descriptive-analytic, cross-sectional study was conducted in a medical school in Riyadh, Saudi Arabia. The medical students of King Saud University in Riyadh, Saudi Arabia, were screened for depressive symptoms using the 21-item Beck Depression Inventory. A high prevalence of depressive symptoms (48.2%) was found; these were either mild (21%), moderate (17%), or severe (11%). The presence and severity of depressive symptoms had a statistically significant association with early academic years (p < 0.001) and female gender (p < 0.002). The high prevalence of depressive symptoms is an alarming sign and calls for remedial action, particularly for the junior and female students.


Subject(s)
Depression/epidemiology , Depressive Disorder/epidemiology , Education, Medical, Undergraduate/methods , Stress, Psychological/psychology , Students, Medical/psychology , Cross-Sectional Studies , Education, Medical, Undergraduate/standards , Educational Status , Female , Humans , Male , Prevalence , Psychiatric Status Rating Scales , Saudi Arabia/epidemiology , Sex Factors , Stress, Psychological/complications , Stress, Psychological/etiology , Young Adult
20.
Med Teach ; 34(3): 205-14, 2012.
Article in English | MEDLINE | ID: mdl-22364452

ABSTRACT

We propose a model for programmatic assessment in action, which simultaneously optimises assessment for learning and assessment for decision making about learner progress. This model is based on a set of assessment principles that are interpreted from empirical research. It specifies cycles of training, assessment and learner support activities that are complemented by intermediate and final moments of evaluation of aggregated assessment data points. A key principle is that individual data points are maximised for learning and feedback value, whereas high-stakes decisions are based on the aggregation of many data points. Expert judgement plays an important role in the programme. Fundamental is the notion of sampling and bias reduction to deal with the inevitable subjectivity of this type of judgement. Bias reduction is further sought in procedural assessment strategies derived from criteria for qualitative research. We discuss a number of challenges and opportunities around the proposed model. One of its prime virtues is that it enables assessment to move beyond the dominant psychometric discourse, with its focus on individual instruments, towards a systems approach to assessment design underpinned by empirically grounded theory.


Subject(s)
Educational Measurement/methods , Program Evaluation/methods , Decision Making , Humans , Models, Educational