Results 1 - 6 of 6
1.
Rev. bras. educ. méd; 47(2): e067, 2023. tab, graf
Article in Portuguese | LILACS-Express | LILACS | ID: biblio-1449623

ABSTRACT


Abstract: Introduction: Student assessment is an essential component of every educational program. Learning the basic sciences is essential for making clinical knowledge meaningful to healthcare students. However, most item-writer training focuses on formulating clinical questions and does not include a specific approach to basic science questions. Experience Report: Workshops on writing items that apply basic science knowledge were carried out with the aim of preparing a test to be administered at the end of the basic cycle of six health courses. The instructional material was prepared by the authors, who offered the workshops online. A distinguishing feature of this training was the use of models for writing item lead-ins with defined contexts, combining asynchronous preparation with a synchronous meeting. After each workshop, surveys were administered to assess participants' satisfaction and learning. Most participants rated the workshop as good or very good and reported an increase in their perceived ability to write single-best-answer multiple-choice questions; at the end, only 7% reported feeling unprepared to write an item following good practices. The quality of the items improved, using the difficulty and discrimination indexes as a reference. Discussion: There is evidence of the value of faculty development in improving the quality of the questions produced. The proposed workshop format was well rated by participants and contributed to the quality of the tests administered at the end of the basic science cycle. Conclusion: Strategies such as this one improve assessments within the school and contribute to the organization of external exams.

2.
Article | IMSEAR | ID: sea-217741

ABSTRACT

Background: Multiple-choice questions (MCQs) are a preferred assessment tool because of their objectivity and ease of scoring, and item analysis reveals how each MCQ functions as a test item. Aims and Objectives: The aims of the study were (i) to carry out item analysis of MCQs used in formative assessment to establish their validity and (ii) to carry out a post-validation item analysis of MCQs given to 1st MBBS students in anatomy, so that the results could inform further action. Materials and Methods: 45 MCQs were administered to 112 students of 1st MBBS as a formative assessment. The difficulty index and discrimination index were calculated. Results: The mean difficulty index was 56.67 ± 22.09, and the mean discrimination index was 0.35 ± 0.23. The distribution of easy, moderate, and difficult MCQs was 20%, 67%, and 13%, respectively. About 20% of MCQs had a poor discrimination index, 20% an acceptable one, 27% a good one, and 33% an excellent one. No item discriminated negatively, and all distractors were functional. Very easy and very difficult items had poor discrimination indexes. Conclusion: Most items had moderate difficulty and good to excellent discrimination. Too-easy and too-difficult items showed poor discrimination; the absence of negatively discriminating items and of non-functional distractors suggests that the MCQs were well framed.
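The two indices reported in the abstract above come from classical test theory. A minimal sketch, not the study's code, with invented data: the difficulty index is the percentage of examinees answering the item correctly, and the discrimination index compares the upper and lower scoring groups (a common convention uses the top and bottom 27%).

```python
def difficulty_index(responses):
    """Percentage of examinees answering the item correctly (0-100)."""
    return 100.0 * sum(responses) / len(responses)

def discrimination_index(responses, total_scores, fraction=0.27):
    """Upper-group minus lower-group proportion correct on this item.

    responses[i] is 1 if examinee i answered the item correctly, else 0;
    total_scores[i] is that examinee's total test score. The extreme
    groups each hold `fraction` of the examinees (27% by convention).
    """
    n = len(responses)
    k = max(1, round(fraction * n))
    # Rank examinees by total score, ascending.
    order = sorted(range(n), key=lambda i: total_scores[i])
    lower, upper = order[:k], order[-k:]
    p_upper = sum(responses[i] for i in upper) / k
    p_lower = sum(responses[i] for i in lower) / k
    return p_upper - p_lower

# Illustrative item: the five highest scorers got it right, the five
# lowest got it wrong, so it discriminates perfectly (D = 1.0).
item = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
totals = [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
print(difficulty_index(item))                 # → 50.0
print(discrimination_index(item, totals))     # → 1.0
```

On the usual cut-offs, an item with a difficulty index between roughly 30 and 70 and a discrimination index above 0.35 would fall in the "moderate difficulty, excellent discrimination" bands the abstract describes.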

3.
Article | IMSEAR | ID: sea-198321

ABSTRACT

Background: The accurate, reliable, and timely assessment of students is an essential domain of teaching in medical professional courses. Multiple Choice Questions (MCQs) are a time-tested method for the ready assessment of undergraduate students. They evaluate students' cognitive knowledge but not their professional skills, and it is often said that MCQs emphasize recall of factual information rather than conceptual understanding and interpretation of concepts. Objectives: The main objective of the study was to analyse the items by item analysis and to select those good enough to be incorporated reliably into a future question bank. Materials and Methods: This study was done in the Department of Anatomy, AIIMS, Patna. A total of 396 first-year MBBS students of different batches took an MCQ test comprising 60 questions in two sessions. During the evaluation of the MCQs, each correct response was awarded one mark and no marks were awarded for incorrect responses. Each item was analysed for difficulty index, discrimination index, and distractor effectiveness. Results: The overall means of the facilitative value, discrimination index, distractor effectiveness, and correlation coefficient were 66.09 (±21.55), 0.26 (±0.16), 18.84 (±10.45), and 0.55 (±0.22), respectively. Conclusion: MCQs should be framed according to Bloom's classification to assess the cognitive, affective, as well as psychomotor domains of the students. MCQs with poor or negative discrimination should be reframed and analysed again.
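The third metric in the abstract above, distractor effectiveness, rests on the notion of a "functional" distractor: one selected by at least some minimum share of examinees (5% is the usual threshold). A hedged sketch with invented response counts, not the study's code:

```python
def functional_distractors(option_counts, key, threshold=0.05):
    """Return the distractors chosen by at least `threshold` of examinees.

    option_counts maps option label -> number of examinees choosing it;
    key is the label of the correct answer. Options other than the key
    selected by fewer than 5% of examinees are "non-functional" and are
    candidates for revision.
    """
    total = sum(option_counts.values())
    return [opt for opt, n in option_counts.items()
            if opt != key and n / total >= threshold]

# Hypothetical item answered by 100 examinees, key = "A":
# "D" attracts only 3% of examinees, so it is non-functional.
counts = {"A": 60, "B": 25, "C": 12, "D": 3}
print(functional_distractors(counts, key="A"))   # → ['B', 'C']
```

Item-level distractor effectiveness is then often reported as the fraction of an item's distractors that are functional (here 2 of 3).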

4.
Article in English | IMSEAR | ID: sea-175336

ABSTRACT

Background: Single-best-answer multiple-choice questions (MCQs) consist of a question (the stem), one correct or best response (the key), and two or more incorrect options (the distractors) from which examinees must distinguish the key. Item analysis is the process of collecting, summarizing, and using information from students' responses to assess the quality of test items. Classical test theory is the most widely followed method of item analysis, determining reliability by calculating the difficulty index (P score), the discriminating index (D score), and distractor effectiveness. Aim: This study aimed to calculate the P score and distractor effectiveness, and to find the relationship between them. Materials and Methods: In this cross-sectional study, 65 items answered by 120 first-year MBBS students were studied for item analysis. The difficulty index and distractor effectiveness were calculated for each item. Distractors were identified and classified as functioning or non-functioning. The interrelationship between P score and distractor effectiveness was calculated and analyzed with Epi Info 7 software. Results: We found that items with two functioning distractors were more difficult than the others, followed by items with three functioning distractors. Conclusion: Distractors affect the item difficulty index and thereby also affect the quality of the assessment.

5.
Korean Journal of Family Medicine ; : 352-357, 2011.
Article in English | WPRIM | ID: wpr-84294

ABSTRACT

BACKGROUND: The in-training examination (ITE) is a cognitive examination similar to the written test, but it differs from the Clinical Practice Examination of the Korean Academy of Family Medicine (KAFM) Certification Examination (CE). The objective of this study was to estimate the positive predictive value of the KAFM-ITE for identifying residents at risk of poor performance on the three types of KAFM-CE. METHODS: A total of 372 residents who completed the KAFM-CE in 2011 were included. We compared the mean KAFM-CE scores by ITE experience. We evaluated the correlation and the positive predictive value (PPV) of the ITE for the multiple-choice question (MCQ) scores of the 1st written test and 2nd slide examination, the total clinical practice examination scores, and the total 2nd test score. RESULTS: 275 of the 372 residents had completed the ITE. Those who completed the ITE had significantly higher MCQ scores on the 1st written test than those who did not. The correlation of ITE scores with the 1st written MCQ (0.627) was the highest among the types of CE. The PPV of the ITE score for the 1st written MCQ score was 0.672; for the other examinations, it ranged from 0.376 to 0.502. CONCLUSION: The KAFM ITE score has an acceptable positive predictive value and could be used as part of a comprehensive evaluation system for residents in the cognitive field.
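The positive predictive value used in the abstract above is the standard screening quantity: of the residents flagged as at-risk by the ITE, the fraction who actually performed poorly on the certification exam. A minimal sketch with hypothetical flags and outcomes, not the study's data:

```python
def positive_predictive_value(flagged, poor_performance):
    """PPV = true positives / all positives.

    flagged[i] is True if resident i fell below the ITE cutoff;
    poor_performance[i] is True if resident i scored poorly on the CE.
    """
    true_pos = sum(1 for f, p in zip(flagged, poor_performance) if f and p)
    all_pos = sum(flagged)
    return true_pos / all_pos if all_pos else 0.0

# Hypothetical cohort: 3 residents flagged by the ITE, 2 of whom
# actually performed poorly on the CE.
flagged = [True, True, True, False]
poor = [True, True, False, False]
print(positive_predictive_value(flagged, poor))   # → 0.666...
```

A PPV of 0.672, as reported for the 1st written MCQ, would mean about two-thirds of the residents the ITE flags go on to perform poorly on that component.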


Subjects
Humans, Certification
6.
Medical Education ; : 119-124, 2010.
Article in Japanese | WPRIM | ID: wpr-363053

ABSTRACT

We analyzed inadvertent human errors during 3-day trial examinations for the National Examination for Physicians. Sixth-year medical students sat for 2 different examinations consisting of 500 multiple-choice questions and chose either 1 or 2 correct answers. After the first examination, the students verified their errors and were provided with educational guidance to prevent inadvertent errors. 1) More than half of the students made inadvertent errors during the examination. 2) The errors occurred when the students solved questions or marked the answer sheets. 3) Most of the errors were either the selection of the wrong number of answer options (i.e., a 2-choice selection was required, but only 1 choice was selected) or the selection of choices that differed from the intended choices when the answer sheets were marked. 4) After the students were taught how to avoid errors, the mean number of errors per examination per student decreased significantly from 2.1 to 1.0. 5) To our knowledge, this is the first report to show the educational effectiveness of a method to decrease the rate of inadvertent errors during examinations.
