Results 1 - 3 of 3
1.
Br J Biomed Sci ; 81: 12229, 2024.
Article in English | MEDLINE | ID: mdl-38854458

ABSTRACT

This paper describes the successful implementation of an assessment literacy strategy within a Biomedical Sciences degree. Teaching was aligned with an assessment literacy framework and aimed to prepare undergraduates for a literature comprehension assessment. Students were introduced to the purpose of the assessment, and an adapted Miller's pyramid model illustrated how the assessment contributed to competency development during their degree. Students read primary research papers and answered questions relating to the publications. They were then introduced to the processes of assessment and collaboratively graded answers of different standards. Finally, student and faculty grades were compared, differences considered, and key characteristics of answers discussed. Most students reported that they understood more about assessment standards than they had prior to the intervention [139/159 (87.4%)] and felt it had helped prepare them for their exam [138/159 (86.8%)]. The majority also reported increased confidence in evaluating data [118/159 (74.2%)], communicating their reasoning [113/159 (71.1%)], and considering what a reader needs to know [127/159 (79.9%)]. Students were asked to state the most important thing they had learned from the assessment literacy teaching. Notably, no responses referred to domain-specific knowledge. In total, 129 free-text responses were mapped to the University of Edinburgh graduate attribute framework: 93 (72.1%) statements mapped to the graduate attribute category "Research and Enquiry," 66 (51.2%) to "Communication," and 21 (16.3%) to "Personal and Intellectual Autonomy." To explore any longer-term impact of the assessment literacy teaching, a focus group was held with students from the same cohort, 2 years after the original intervention. Themes from this part of the study included that the teaching had provided insights into the standards and expectations for the assessment, and into the benefits of domain-specific knowledge. A variety of aspects related to graduate attributes were also identified. Here, assessment literacy as a vehicle for graduate attribute development was an unexpected outcome. We propose that by explicitly engaging students with purpose, process, standards, and expectations, assessment literacy strategies may be used to raise awareness of developmental progression and to enhance skills, aptitudes, and dispositions beneficial to Biomedical Sciences academic achievement and life after university.


Subject(s)
Curriculum; Educational Measurement; Humans; Educational Measurement/methods; Literacy; Male; Female; Students/psychology; Comprehension
2.
J Vet Med Educ ; 48(2): 158-162, 2021 Apr.
Article in English | MEDLINE | ID: mdl-32149588

ABSTRACT

Assessment literacy is increasingly recognized as an important concept to consider when developing assessment strategies for courses and programs. Assessment literacy approaches support students in understanding assessment expectations and help them both understand and optimize their performance in assessment. In this teaching tip, a model for assessment literacy that builds on the well-known Miller's Pyramid model for assessment in clinical disciplines is proposed and contextualized. The model shifts thinking from the assessment methods themselves to the activities that need to be built into curricula to ensure that assessment literacy is addressed at each level of the pyramid. The teaching tip provides specific examples at each level. Finally, the relevance of this work to overall curriculum design is emphasized.


Subject(s)
Education, Veterinary; Literacy; Animals; Curriculum; Humans; Students
3.
J Vet Med Educ ; 44(4): 686-691, 2017.
Article in English | MEDLINE | ID: mdl-28581915

ABSTRACT

Comparative judgment in assessment is a process whereby repeated comparison of two items (e.g., assessment answers) yields an accurate ranking of all the submissions. In adaptive comparative judgment (ACJ), technology is used to automate the process and present pairs of pieces of work over iterative cycles. An online ACJ system was used to present students with work prepared by a previous cohort at the same stage of their studies. Objective marks given to the work by experienced faculty were compared with the rankings given to the work by a cohort of veterinary students (n=154). Each student was required to review and judge 20 answers provided by the previous cohort to a free-text short-answer question. The time that students spent on the judgment tasks was recorded, and students were asked to reflect on their experiences after engaging with the task. There was a strong positive correlation between student ranking and faculty marking. A weak positive correlation was found between the time students spent on the judgments and their performance on the part of their own examination that contained questions in the same format. Slightly fewer than half of the students agreed that the exercise was a good use of their time, but 78% agreed that they had learned from the process. Qualitative data highlighted different levels of benefit, from the simplest aspect of learning more about the topic to an appreciation of the more generic lessons to be learned.


Subject(s)
Education, Veterinary/methods; Judgment; Students, Medical/psychology; Feedback; Humans; Internet; Self-Evaluation Programs; Surveys and Questionnaires
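The pairwise-comparison ranking that underpins ACJ can be illustrated with a standard statistical model. The sketch below is not taken from the paper; it is a minimal Python implementation of the Bradley-Terry model, one common way of turning a set of (winner, loser) judgments into an overall ranking. The function name and iteration count are illustrative assumptions.

```python
from collections import defaultdict

def bradley_terry_rank(comparisons, iterations=100):
    """Estimate a strength score for each item from pairwise judgments.

    comparisons: list of (winner, loser) pairs from repeated comparisons.
    Returns the items sorted best-first by estimated strength.
    (Illustrative sketch of the Bradley-Terry MM update, not the
    specific algorithm used by any particular ACJ system.)
    """
    items = {item for pair in comparisons for item in pair}
    strength = {item: 1.0 for item in items}  # uniform starting estimate
    wins = defaultdict(int)
    for winner, _ in comparisons:
        wins[winner] += 1

    for _ in range(iterations):
        new = {}
        for item in items:
            # Sum 1/(s_i + s_j) over every comparison involving this item.
            denom = 0.0
            for winner, loser in comparisons:
                if item in (winner, loser):
                    other = loser if item == winner else winner
                    denom += 1.0 / (strength[item] + strength[other])
            new[item] = wins[item] / denom if denom else strength[item]
        # Rescale so the average strength stays fixed across iterations.
        total = sum(new.values())
        strength = {i: s * len(items) / total for i, s in new.items()}

    return sorted(items, key=strength.get, reverse=True)
```

With three answers where A beats B, B beats C, and A beats C, the function returns the ranking A, B, C; with many such judgments over iterative cycles, the estimated strengths converge toward a stable ordering of all submissions.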