Results 1 - 20 of 23
1.
Med Teach ; 42(2): 213-220, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31622126

ABSTRACT

Introduction: Programmatic assessment (PA) is an approach to assessment, aimed at optimizing learning, that continues to gain educational momentum. However, the theoretical underpinnings of PA have not been clearly described. An explanation of these theoretical underpinnings will allow educators to gain a better understanding of this approach and, perhaps, facilitate its use and effective implementation. The purpose of this article is twofold: first, to describe salient theoretical perspectives on PA; second, to examine how theory may help educators to develop effective PA programs and to overcome challenges around PA. Results: We outline a number of learning theories that underpin key educational principles of PA: constructivist and social constructivist theory supporting meaning making and longitudinality; cognitivist and cognitive development orientations scaffolding the practice of a continuous feedback process; theory of instructional design underpinning assessment as learning; and self-determination theory (SDT), self-regulated learning (SRL) theory and principles of deliberate practice providing theoretical tenets for student agency and accountability. Conclusion: The construction of a plausible and coherent link between key educational principles of PA and learning theories should enable educators to pose new and important inquiries, reflect on their assessment practices and overcome future challenges in the development and implementation of PA in their programs.


Subject(s)
Educational Measurement , Formative Feedback , Learning , Cognition , Humans , Students
2.
Med Teach ; 37(7): 641-646, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25410481

ABSTRACT

Programmatic assessment is an integral approach to the design of an assessment program with the intent to optimise its learning function, its decision-making function and its curriculum quality-assurance function. Individual methods of assessment, purposefully chosen for their alignment with the curriculum outcomes and their information value for the learner, the teacher and the organisation, are seen as individual data points. The information value of these individual data points is maximised by giving feedback to the learner. There is a decoupling of assessment moment and decision moment. Intermediate and high-stakes decisions are based on multiple data points after a meaningful aggregation of information and supported by rigorous organisational procedures to ensure their dependability. Self-regulation of learning, through analysis of the assessment information and the attainment of the ensuing learning goals, is scaffolded by a mentoring system. Programmatic assessment-for-learning can be applied to any part of the training continuum, provided that the underlying learning conception is constructivist. This paper provides concrete recommendations for implementation of programmatic assessment.

3.
Adv Health Sci Educ Theory Pract ; 18(3): 375-96, 2013 Aug.
Article in English | MEDLINE | ID: mdl-22592323

ABSTRACT

Weaknesses in the nature of rater judgments are generally considered to compromise the utility of workplace-based assessment (WBA). In order to gain insight into the underpinnings of rater behaviours, we investigated how raters form impressions of and make judgments on trainee performance. Using theoretical frameworks of social cognition and person perception, we explored raters' implicit performance theories, use of task-specific performance schemas and the formation of person schemas during WBA. We used think-aloud procedures and verbal protocol analysis to investigate schema-based processing by experienced (N = 18) and inexperienced (N = 16) raters (supervisor-raters in general practice residency training). Qualitative data analysis was used to explore schema content and usage. We quantitatively assessed rater idiosyncrasy in the use of performance schemas and we investigated effects of rater expertise on the use of (task-specific) performance schemas. Raters used different schemas in judging trainee performance. We developed a normative performance theory comprising seventeen inter-related performance dimensions. Levels of rater idiosyncrasy were substantial and unrelated to rater expertise. Experienced raters made significantly more use of task-specific performance schemas compared to inexperienced raters, suggesting more differentiated performance schemas in experienced raters. Most raters started to develop person schemas the moment they began to observe trainee performance. The findings further our understanding of processes underpinning judgment and decision making in WBA. Raters make and justify judgments based on personal theories and performance constructs. Raters' information processing seems to be affected by differences in rater expertise. The results of this study can help to improve rater training, the design of assessment instruments and decision making in WBA.


Subject(s)
Clinical Competence/standards , Educational Measurement/methods , Educational Measurement/standards , Humans , Internship and Residency/standards , Physicians/standards , Video Recording
4.
Med Teach ; 34(3): 205-14, 2012.
Article in English | MEDLINE | ID: mdl-22364452

ABSTRACT

We propose a model for programmatic assessment in action, which simultaneously optimises assessment for learning and assessment for decision making about learner progress. This model is based on a set of assessment principles that are interpreted from empirical research. It specifies cycles of training, assessment and learner support activities that are complemented by intermediate and final moments of evaluation on aggregated assessment data points. A key principle is that individual data points are maximised for learning and feedback value, whereas high-stakes decisions are based on the aggregation of many data points. Expert judgement plays an important role in the programme. Fundamental is the notion of sampling and bias reduction to deal with the inevitable subjectivity of this type of judgement. Bias reduction is further sought in procedural assessment strategies derived from criteria for qualitative research. We discuss a number of challenges and opportunities around the proposed model. One of its prime virtues is that it enables assessment to move beyond the dominant psychometric discourse, with its focus on individual instruments, towards a systems approach to assessment design underpinned by empirically grounded theory.
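To make the decoupling of feedback moments and decision moments concrete, the sketch below models a learner record in which every assessment moment is stored as a feedback-rich data point and a progress decision is only drafted once many points have been aggregated. This is a minimal illustrative sketch: the class names, the 20-point threshold and the committee wording are assumptions, not part of the published model.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class DataPoint:
    """One low-stakes assessment moment: rich in feedback, never decisive on its own."""
    instrument: str   # e.g. "mini-CEX", "progress test" (illustrative labels)
    score: float      # any aggregable metric, here scaled 0-1
    feedback: str     # the learning value of the data point

@dataclass
class LearnerRecord:
    data_points: list[DataPoint] = field(default_factory=list)

    def add(self, point: DataPoint) -> None:
        # Every data point is fed back to the learner immediately;
        # no single point triggers a pass/fail decision.
        self.data_points.append(point)

    def draft_decision(self, minimum_points: int = 20) -> str:
        """Intermediate/high-stakes evaluation over aggregated data points only.
        The numeric summary merely informs a committee's expert judgement."""
        if len(self.data_points) < minimum_points:
            return "insufficient evidence: defer the decision and keep sampling"
        overall = mean(p.score for p in self.data_points)
        return (f"refer to committee: mean score {overall:.2f} "
                f"over {len(self.data_points)} data points")
```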


Subject(s)
Educational Measurement/methods , Program Evaluation/methods , Decision Making , Humans , Models, Educational
5.
Anaesth Intensive Care ; 39(1): 107-15, 2011 Jan.
Article in English | MEDLINE | ID: mdl-21375100

ABSTRACT

The Competency-Based Training program in Intensive Care Medicine in Europe identified 12 competency domains. Professionalism was given a prominence equal to technical ability. However, little information pertaining to fellows' views on professionalism is available. A nationwide qualitative study was performed. The moderator asked participants to clarify the terms professionalism and professional behaviour, and to explore the questions "How do you learn the mentioned aspects?" and "What ways of learning do you find useful or superfluous?". Qualitative data analysis software (MAXQDA2007) facilitated analysis using an inductive coding approach. Thirty-five fellows across eight groups participated. The themes most frequently addressed were communication, keeping distance and boundaries, medical knowledge and expertise, respect, teamwork, leadership, and organisation and management. Medical knowledge, expertise and technical skills seem to become more tacit as training progresses. Topics could be categorised into themes of workplace-based learning: gathering practical experience, following examples and receiving feedback on action, including learning from one's own and others' mistakes. Formal teaching courses (e.g. on communication) and scheduled sessions addressing aspects of professionalism were also valued. The emerging themes considered most relevant for intensivists were adequate communication skills and keeping boundaries with patients and relatives. Professionalism is mainly learned 'on the job' from role models in the intensive care unit. Formal teaching courses and sessions addressing aspects of professionalism were nevertheless valued, and learning from one's own and others' mistakes was considered especially useful. Self-reflection as a starting point for learning professionalism was stressed.


Subject(s)
Clinical Competence/statistics & numerical data , Critical Care , Internship and Residency , Social Perception , Adult , Attitude of Health Personnel , Communication , Focus Groups , Humans , Intensive Care Units , Leadership , Mentors , Netherlands , Physician-Patient Relations
6.
Adv Health Sci Educ Theory Pract ; 16(2): 151-65, 2011 May.
Article in English | MEDLINE | ID: mdl-20882335

ABSTRACT

Traditional psychometric approaches towards assessment tend to focus exclusively on quantitative properties of assessment outcomes. This may limit more meaningful educational approaches towards workplace-based assessment (WBA). Cognition-based models of WBA argue that assessment outcomes are determined by cognitive processes in raters that are very similar to reasoning, judgment and decision making in professional domains such as medicine. The present study explores the cognitive processes that underlie judgment and decision making by raters when observing performance in the clinical workplace. It specifically focuses on how differences in rating experience influence information processing by raters. Verbal protocol analysis was used to investigate how experienced and non-experienced raters select and use observational data to arrive at judgments and decisions about trainees' performance in the clinical workplace. Differences between experienced and non-experienced raters were assessed with respect to time spent on information analysis and representation of trainee performance; performance scores; and information processing, using qualitative-based quantitative analysis of verbal data. Results showed expert-novice differences in the time needed for representation of trainee performance, depending on the complexity of the rating task. Experts paid more attention to situation-specific cues in the assessment context, and they generated significantly more interpretations and fewer literal descriptions of observed behaviors. There were no significant differences in rating scores. Overall, our findings seemed to be consistent with other findings in expertise research, supporting theories underlying cognition-based models of assessment in the clinical workplace. Implications for WBA are discussed.


Subject(s)
Clinical Competence , Cognition , Educational Measurement/methods , General Practitioners/education , Health Knowledge, Attitudes, Practice , Decision Making , Educational Status , Humans , Judgment , Statistics, Nonparametric , Task Performance and Analysis , Verbal Learning , Workplace
7.
Best Pract Res Clin Obstet Gynaecol ; 24(6): 703-19, 2010 Dec.
Article in English | MEDLINE | ID: mdl-20510653

ABSTRACT

This article presents lessons learnt from experiences with the assessment of professional competence. Based on Miller's pyramid, a distinction is made between established assessment technology for assessing 'knows', 'knows how' and 'shows how' and more recent developments in the assessment of (clinical) performance at the 'does' level. Some general lessons are derived from research on, and experiences with, the established assessment technology. Here, many paradoxes are revealed and empirical outcomes are often counterintuitive. Instruments for assessing the 'does' level are classified and described, and additional general lessons for this area of performance assessment are derived. These lessons can also be read as general principles of assessment (programmes) and may provide theoretical building blocks to underpin appropriate and state-of-the-art assessment practices.


Subject(s)
Clinical Competence , Clinical Medicine/education , Education, Medical/standards , Educational Measurement/methods , Competency-Based Education , Humans , Models, Educational , Observer Variation , Physicians , Reproducibility of Results , Research Design
8.
Adv Health Sci Educ Theory Pract ; 15(3): 379-93, 2010 Aug.
Article in English | MEDLINE | ID: mdl-19821042

ABSTRACT

Research on assessment in medical education has strongly focused on individual measurement instruments and their psychometric quality. Without detracting from the value of this research, such an approach is not sufficient for high-quality assessment of competence as a whole. A programmatic approach is advocated, which presupposes criteria for designing comprehensive assessment programmes and for assuring their quality. The paucity of research relevant to programmatic assessment, and especially to its development, prompted us to embark on a research project to develop design principles for programmes of assessment. We conducted focus group interviews to explore the experiences and views of nine assessment experts concerning good practices and new ideas about theoretical and practical issues in programmes of assessment. The discussion was analysed, mapping all aspects relevant to design onto a framework, which was iteratively adjusted to fit the data until saturation was reached. The overarching framework for designing programmes of assessment consists of six dimensions: Goals, Programme in Action, Support, Documenting, Improving and Accounting. The model described in this paper can help to frame programmes of assessment; it not only provides a common language, but also a comprehensive picture of the dimensions to be covered when formulating design principles. It helps to identify areas of assessment in which ample research and development has been done but, more importantly, it also helps to detect underserved areas. A guiding principle in the design of assessment programmes is fitness for purpose: high-quality assessment can only be defined in terms of its goals.


Subject(s)
Educational Measurement/methods , Learning , Models, Educational , Program Development , Teaching , Curriculum , Focus Groups , Goals , Humans , Program Evaluation , Tape Recording
9.
Med Teach ; 31(10): e464-8, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19877854

ABSTRACT

BACKGROUND: The role of knowledge in postgraduate medical education has often been discussed. However, recent insights from cognitive psychology and the study of deliberate practice recognize that expert problem solving requires a well-organized knowledge database. This implies that postgraduate assessment should include knowledge testing. Longitudinal assessment, such as progress testing, seems a promising approach for assessing knowledge growth in postgraduate training. AIMS: To evaluate the validity and reliability of a national progress test in postgraduate Obstetrics and Gynaecology training. METHODS: Data from 10 years of postgraduate progress testing were analyzed for reliability using Cronbach's alpha and for construct validity using one-way ANOVA with a post hoc Scheffe test. RESULTS: Average reliability with true-false questions was 0.50, which is moderate at best. After the introduction of multiple-choice questions, average reliability improved to 0.65. Construct validity, or discriminative power, could only be demonstrated with some certainty between training year 1 and the higher training years. CONCLUSION: The validity and reliability of the current progress test in postgraduate Obstetrics and Gynaecology training are unsatisfactory. Suggestions for improvement of both test construct and test content are provided.
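As a rough illustration of the analysis described above, the sketch below computes Cronbach's alpha from an examinee-by-item score matrix and runs a one-way ANOVA of total scores across training years. The data are randomly generated stand-ins and the Scheffe post hoc step is only indicated in a comment, so this is a minimal sketch of the method, not a reproduction of the study's analysis.

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (examinees x items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Simulated stand-in data: rows = examinees, columns = items (1 = correct, 0 = wrong).
rng = np.random.default_rng(0)
scores = rng.integers(0, 2, size=(120, 60))
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")

# Construct validity: do mean total scores rise with (hypothetical) training year?
years = rng.integers(1, 7, size=120)                 # training years 1-6
totals = scores.sum(axis=1)
groups = [totals[years == y] for y in np.unique(years)]
f_stat, p_val = stats.f_oneway(*groups)              # one-way ANOVA across years
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_val:.3f}")
# A Scheffe post hoc comparison (available in e.g. scikit-posthocs) would then
# show which pairs of training years actually differ.
```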


Subject(s)
Educational Measurement/methods , Gynecology , Internship and Residency , Knowledge , Obstetrics , Clinical Competence , Humans , Program Evaluation , Reproducibility of Results
10.
Adv Health Sci Educ Theory Pract ; 13(2): 203-11, 2008 May.
Article in English | MEDLINE | ID: mdl-17043915

ABSTRACT

BACKGROUND: Establishing credible, defensible and acceptable passing scores for written tests is a challenge for health professions educators. Angoff procedures are often used to establish pass/fail decisions for written and performance tests. In an Angoff procedure, judges' expertise and professional skills are assumed to influence their ratings of the items during standard setting. The purpose of this study was, first, to investigate the impact of judges' item-related knowledge on their judgement of item difficulty and, second, to determine the stability of differences between judges. METHOD: Thirteen judges were presented with two sets of 60 items on different occasions. They were asked not only to judge the difficulty of the items but also to answer them, without the benefit of the answer key. For each of the 120 items, an Angoff estimate and an item score were obtained. The relationship between the Angoff estimate and the item score was examined by applying a regression analysis to the 60 (Angoff estimate, score) pairs for each judge on each occasion. RESULTS AND CONCLUSIONS: This study shows that in standard setting the judgement of an individual item reflects not only the difficulty of the item but also the inherent stringency of the judge and his or her subject-related knowledge. Considerable variation between judges in their stringency was found, and Angoff estimates were significantly affected by whether or not a judge knew the answer to the item. These findings stress the importance of a careful selection process for Angoff judges when making pass/fail decisions in health professions education. They imply that judges should be selected who are not only capable of conceptualising the 'minimally competent student', but who are also capable of answering all the items.
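The per-judge regression described above can be sketched roughly as follows: for one judge on one occasion, the Angoff estimate for each item is regressed on whether the judge answered that item correctly, so the intercept indexes the judge's baseline stringency and the slope the effect of knowing the answer. The data below are simulated placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Simulated stand-in data for one judge on one occasion: 60 items, each with
# the judge's own answer (1 = correct, 0 = incorrect) and an Angoff estimate
# (expected proportion of minimally competent students answering correctly).
rng = np.random.default_rng(1)
judge_correct = rng.integers(0, 2, size=60).astype(float)
angoff = np.clip(0.45 + 0.15 * judge_correct + rng.normal(0, 0.10, 60), 0, 1)

# Regress the Angoff estimate on the judge's own item score: the intercept
# indexes the judge's baseline stringency; the slope captures how much knowing
# the answer shifts the judged difficulty.
fit = stats.linregress(judge_correct, angoff)
print(f"intercept (baseline stringency): {fit.intercept:.2f}")
print(f"slope (effect of knowing the answer): {fit.slope:.2f}")
print(f"p-value for the slope: {fit.pvalue:.3f}")
```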


Subject(s)
Clinical Competence , Educational Measurement/methods , Judgment , Humans , Knowledge , Observer Variation , Reproducibility of Results
11.
Ned Tijdschr Geneeskd ; 149(49): 2752-5, 2005 Dec 03.
Article in Dutch | MEDLINE | ID: mdl-16375022

ABSTRACT

There has been considerable change in the field of assessment of medical competence. At the moment, competency-orientated assessment, the 'mini-CEX' (brief clinical evaluation exercise) and portfolios are quite popular. These methods are based on research findings indicating that medical competence is better described as a collection of complex tasks (so-called competencies) that a doctor must be able to perform than as the sum of knowledge, skills, problem-solving ability and attitudes. The mini-CEX is a method for assessing medical competence reliably and validly in a practical setting. Using a portfolio, information on the student's competence can be collated and evaluated from various sources, including the mini-CEX. As such, a portfolio has much in common with a patient chart.


Subject(s)
Clinical Competence/standards , Education, Medical/standards , Education, Medical, Graduate/standards , Educational Measurement , Health Knowledge, Attitudes, Practice , Humans , Netherlands
12.
Med Teach ; 26(8): 719-25, 2004 Dec.
Article in English | MEDLINE | ID: mdl-15763876

ABSTRACT

The practice of assessment is governed by an interesting paradox. On the one hand, good assessment requires substantial resources, which may exceed the capacity of a single institution, and we have reason to doubt the quality of our in-house examinations. On the other hand, our parsimony with regard to resources makes us reluctant to pool efforts and share test material. This paper reports on an initiative to share test material across different medical schools. Three medical schools in The Netherlands have successfully set up a partnership for a specific testing method: progress testing. At present, these three schools collaboratively produce high-quality test items. The jointly produced progress tests are administered concurrently by these three schools and by one other school, which buys the test. The steps taken in establishing this partnership are described, and results are presented to illustrate the unique sort of information that is obtained by cross-institutional assessment. In addition, plans to improve test content and procedure and to expand the partnership are outlined. Eventually, the collaboration may even extend to other test formats. This article is intended to give evidence of the feasibility and exciting potential of between-school collaboration in test development and test administration. Our experience has demonstrated that such collaboration has excellent potential to combine economic benefits with educational advantages that exceed what is achievable by individual schools.


Subject(s)
Cooperative Behavior , Education, Medical/standards , Educational Measurement/methods , Interinstitutional Relations , Schools, Medical , Educational Measurement/standards , Humans , Netherlands
13.
Med Educ ; 37 Suppl 1: 65-71, 2003 Nov.
Article in English | MEDLINE | ID: mdl-14641641

ABSTRACT

CONTEXT: Simulation-based testing methods have been developed to meet the need for assessment procedures that are both authentic and well-structured. It is widely acknowledged that, although the authenticity of a procedure may contribute to its validity, authenticity alone is never a sufficient factor. AIM: In this paper we describe the mainstream development of various simulation-based approaches, with their strengths and weaknesses. The purpose is not to provide a review based on an extensive meta-analysis but to present crucial factors in the development of these methods and their implications for current and future developments. METHOD: The description of these simulation-based instruments uses a subdivision according to the layers of Miller's pyramid: written and computer-based simulations are aimed at measuring the 'knows how' layer; observation-based techniques such as standardised patient-based examinations and objective structured clinical examinations target the 'shows how' layer; and practice performance measures assess performance at the 'does' layer. CONCLUSION: In all simulations, case specificity was found to pose the most prominent threat to reliability, while too much structure threatened to trivialise the assessment. The conclusion is that authentic and reliable assessment is predicated on a wise balance between efficiency and adequate content sampling.


Subject(s)
Clinical Competence/standards , Education, Medical, Undergraduate/methods , Educational Measurement/methods , Patient Simulation , Curriculum , Education, Medical, Undergraduate/trends , Humans
14.
Med Educ ; 36(10): 910-7, 2002 Oct.
Article in English | MEDLINE | ID: mdl-12390457

ABSTRACT

BACKGROUND: While much is now known about how to assess the competence of medical practitioners in a controlled environment, less is known about how to measure the performance in practice of experienced doctors working in their own environments. The performance of doctors depends increasingly on how well they function in teams and on how well the health care system around them functions. METHODS: This paper reflects the combined experience of a group of experienced education researchers and the results of literature searches on performance assessment methods. CONCLUSION: Measuring competence is different from measuring performance. Components of performance could be re-conceptualised within a different domain structure. Assessment methods may have a different utility from that in competence assessment and, indeed, a different utility according to the purpose of the assessment. An exploration of the utility of potential performance assessment methods suggests significant gaps that indicate priority areas for research and development.


Subject(s)
Clinical Competence/standards , Physicians, Family/standards , Education, Medical/standards , Educational Measurement , Humans , Quality of Health Care , Reproducibility of Results
15.
Med Educ ; 36(10): 925-30, 2002 Oct.
Article in English | MEDLINE | ID: mdl-12390459

ABSTRACT

INTRODUCTION: An essential element of practice performance assessment involves combining the results of various procedures in order to see the whole picture. This must be derived from both objective and subjective assessment, as well as from a combination of quantitative and qualitative assessment procedures. Because of the severe consequences an assessment of practice performance may have, it is essential that the procedure is both defensible to the stakeholders and fair, in that it distinguishes well between good performers and underperformers. LESSONS FROM COMPETENCE ASSESSMENT: Large samples of behaviour are always necessary because of the domain specificity of competence and performance. The test content is considerably more important than the test format in determining which competency is being measured, and it is important to recognise that the process of problem solving is more idiosyncratic than its outcome. It is advisable to add some structure to the assessment but to refrain from over-structuring, as this tends to trivialise the measurement. IMPLICATIONS FOR PRACTICE PERFORMANCE ASSESSMENT: A practice performance assessment should use multiple instruments. The reproducibility of subjective parts should be increased not by over-structuring, but by sampling across sources of bias. As many sources of bias may exist, sampling across all of them may not prove feasible. Therefore, a more project-orientated approach is suggested, using a range of instruments. At various timepoints during any assessment with a particular instrument, questions should be raised as to whether the sampling is sufficient with respect to the quantity and quality of the observations, and whether the totality of assessments across instruments is sufficient to see 'the whole picture'. This policy is embedded within a larger organisational and health care context.


Subject(s)
Clinical Competence/standards , Education, Medical/standards , Physicians, Family/standards , Educational Measurement , Humans , Quality of Health Care/standards
16.
Med Educ ; 35(4): 348-56, 2001 Apr.
Article in English | MEDLINE | ID: mdl-11318998

ABSTRACT

PURPOSE: To assess whether case-based questions elicit different thinking processes from factual knowledge-based questions. METHOD: 20 general practitioners (GPs) and 20 students solved case-based questions and matched factual knowledge-based questions while thinking aloud. Verbatim protocols were analysed. Five indicators were defined: extent of protocols; immediate responses; re-reading of information given in the stem or case after the question had been read; order of re-reading information; and type of consideration, i.e. 'true-false' type or 'vector' type, that is, a deliberation that has a magnitude and a direction. RESULTS: Cases elicited longer protocols than factual knowledge questions. Students re-read more of the given information than GPs. GPs gave an immediate response on twice as many occasions as students. GPs re-ordered the case information, whereas students re-read the information in the order in which it was presented. This ordering difference was not found in the factual knowledge questions. Factual knowledge questions mainly led to 'true-false' considerations, whereas cases mainly elicited 'vector' considerations. CONCLUSION: Short case-based questions lead to thinking processes that represent problem-solving ability better than those elicited by factual knowledge questions.


Subject(s)
Education, Medical/methods , Educational Measurement/methods , Family Practice/education , Problem Solving , Humans , Netherlands , Thinking
17.
Med Teach ; 22(6): 592-600, 2000.
Article in English | MEDLINE | ID: mdl-21275695

ABSTRACT

This article reviews consistent research findings concerning the assessment of clinical competence during the clerkship phase of the undergraduate medical training programme, addressing reliability, validity, effects on the training programme and learning behaviour, acceptability and costs. Subsequently, research findings on the clinical clerkship as a learning environment are discussed, demonstrating that the clinical attachment provides a rather unstructured educational framework. Five fundamental questions (why, what, when, how, who) are addressed to generate general suggestions for improving assessment on the basis of the evidence on assessment and clinical training. Good assessment requires a thoughtful compromise between what is achievable and what is ideal. It is argued that educational effects are eminently important in this compromise, particularly in the unstructured clinical setting. Maximizing educational effects can be achieved in combination with improvements to other measurement qualities of the assessment. Two concrete examples are provided to illustrate the recommended assessment strategies.

18.
Adv Health Sci Educ Theory Pract ; 4(3): 233-244, 1999.
Article in English | MEDLINE | ID: mdl-12386481

ABSTRACT

Comparisons between PBL and non-PBL medical schools on problem-solving ability often show no differences. This could be either because no difference in problem-solving skills exists or because the instruments used are inadequate. In this study a case-based examination using a key-feature approach was used to compare two medical schools in the Netherlands, one with a PBL curriculum (Maastricht) and one with a programme halfway through a transition from a non-PBL towards a PBL curriculum (Groningen). Differences were found both in proficiency scores and in the pattern of response times, both supporting the assumption that a PBL approach leads to a higher level of problem-solving ability. The effect size, however, is not as large as originally assumed by PBL proponents. Conclusions must be drawn with caution, but it seems likely that a test based on large numbers of short cases is the most sensitive for detecting differences in problem-solving ability between students from different curricula.

19.
Med Teach ; 21(2): 144-50, 1999.
Article in English | MEDLINE | ID: mdl-21275728

ABSTRACT

In the assessment of problem solving, the use of short case-based testing is a promising development. In this approach an examination consists of a large number of short cases, each of which contains a small number of questions aimed at essential decisions. Writing such cases, however, is not easy. This article provides a description of this type of examination and describes strategies and pitfalls in writing these cases. The strategies pertain to the selection of essential decisions, the careful writing of cases and questions, and the selection of question formats.

20.
Med Educ ; 30(1): 44-9, 1996 Jan.
Article in English | MEDLINE | ID: mdl-8736188

ABSTRACT

This study investigates the cueing effect occurring in multiple-choice questions. Two parallel tests with matching contents were administered. By means of a computer program, examinees of different training levels and professional expertise were presented with the same set of 35 cases (derived from patient problems in general practice) twice: the first time the cases were linked to open-ended questions; the second time they were linked to multiple-choice questions. The examinees were 75 medical students from three different years of training, 25 residents in training for general practice and 25 experienced general practitioners. Across groups, total test scores showed a difference in mean scores between the two formats, together with a high inter-test correlation. Within each level of expertise, differences in mean scores and high correlations were also found. The data were further explored per group of examinees. Two types of cueing effect were found: positive cueing (examinees were cued towards the correct answer) and negative cueing (examinees were cued towards an incorrect answer). These effects were found at all levels of expertise and in almost all items; however, both effects declined with increasing expertise. Positive cueing mainly occurred in difficult items, whereas negative cueing mainly occurred in easy items.
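The format comparison described above can be illustrated with a rough sketch: for paired open-ended and multiple-choice responses to the same cases, positive cueing shows up as items missed in the open-ended format but answered correctly in the multiple-choice format, and negative cueing as the reverse. The response matrices below are simulated placeholders, not the study's data.

```python
import numpy as np

# Simulated stand-in data: paired results on the same 35 cases, answered first
# with open-ended questions and then with multiple-choice questions
# (1 = correct, 0 = incorrect).
rng = np.random.default_rng(2)
open_ended = rng.integers(0, 2, size=(100, 35))          # 100 examinees x 35 cases
flip = rng.random((100, 35)) < 0.15                      # ~15% of answers change with format
mcq = np.where(flip, 1 - open_ended, open_ended)

# Positive cueing: wrong on the open-ended version, correct on the MCQ version.
# Negative cueing: correct on the open-ended version, wrong on the MCQ version.
positive_rate = ((open_ended == 0) & (mcq == 1)).mean()
negative_rate = ((open_ended == 1) & (mcq == 0)).mean()
print(f"positive cueing rate: {positive_rate:.1%}")
print(f"negative cueing rate: {negative_rate:.1%}")

# Inter-test correlation of total scores across the two formats.
r = np.corrcoef(open_ended.sum(axis=1), mcq.sum(axis=1))[0, 1]
print(f"inter-test correlation: {r:.2f}")
```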


Subject(s)
Computers , Education, Medical, Undergraduate , Educational Measurement , Adult , Evaluation Studies as Topic , Humans , Netherlands