Results 1 - 7 of 7
1.
Mongolian Medical Sciences ; : 80-82, 2018.
Article in English | WPRIM | ID: wpr-973095

ABSTRACT

Introduction: Clinical skills training at medical schools gives future doctors the opportunity to handle patients with proper care: to diagnose disease, provide first aid, treatment, nursing and counselling, to work through complex problems, and to maintain the ethical attitude expected of a physician. To achieve this objective, the level of knowledge, skills and attitudes students have acquired must be assessed.
Goal: To analyze the basic clinical skills assessment tasks and to identify the level of knowledge and skills of students who completed the second year of the medical program at “Ach” Medical University in the 2016-2017 academic year.
Materials and Methods: A descriptive design was used to estimate the reliability of the assignments, the difficulty factor of the tasks and the Hoffsten score, based on the tasks and performance at each station, and the indicators were compared.
Results: Based on examinees’ success rates at the 5 stations, the Hoffsten score was 68 percent at the clinical interview station, 64 percent at the physical examination station, 71 percent at the diagnostic station, 70 percent at the laboratory station and 70 percent at the nursing station.
Conclusion: The clinical interview, nursing and visual diagnostic stations showed a high difficulty factor (DF > 95), and the laboratory and physical examination stations assessed students at a difficulty factor above 80 (DF > 80); the Hoffsten score for passing the basic clinical skills exam was set at 70 percent.
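The abstract reports a per-station difficulty factor (DF) without stating its formula. Below is a minimal sketch of one common convention, assuming DF is the mean percentage score examinees achieve at a station; the station names and score matrix are illustrative, not the study's data.

```python
# Hypothetical sketch: station-level difficulty factor taken as the mean
# percentage score of all examinees at that station (one common convention;
# the paper does not state its exact formula, so treat this as an assumption).
import numpy as np

# Illustrative data: rows = examinees, columns = stations (percent scores, 0-100).
stations = ["clinical interview", "physical exam", "diagnostics", "laboratory", "nursing"]
scores = np.array([
    [72, 60, 80, 75, 68],
    [90, 55, 65, 70, 74],
    [66, 70, 77, 62, 71],
], dtype=float)

for name, col in zip(stations, scores.T):
    df_percent = col.mean()  # difficulty factor as the mean % score at the station
    print(f"{name}: DF = {df_percent:.1f}%")
```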

2.
Mongolian Medical Sciences ; : 65-74, 2018.
Article in English | WPRIM | ID: wpr-973093

ABSTRACT

Introduction: One quality assurance measure for medical schools is the achievement of graduating students in assessments of the knowledge, skills and attitudes in which they were trained.
Goal: To analyze the theoretical and practical examination tasks and to identify the level of knowledge of students who graduated from “Ach” Medical University during the 2015-2016 academic year.
Materials and Methods: A cross-sectional, descriptive study analysed the theoretical and practical examination performance of 261 graduating students of the bachelor programs in Medicine, Dentistry, Traditional Medicine and Nursing at Ach Medical University of Mongolia /AMU/, calculating the reliability coefficient, difficulty factor, discrimination index and Hoffsten score.
Results: The reliability coefficient of the graduation exam met the requirement at 0.94-0.96. In the analysis of the 300 test items per classroom of graduates, 70 percent (n=202) had a weak discrimination index, and more than 50 percent had a difficulty factor indicating the items were too easy. The Hoffsten passing score was 70 percent for medical graduates, 87 percent for traditional medicine, 79 percent for dentistry and 80 percent for nursing.
Conclusions: The reliability coefficient of the graduates’ theoretical exam was acceptable for all programs, but every program showed weak discrimination (DI ≤ 0), and the Hoffsten score was 70% and above, lowest for the medical field. The graduation exam items could not discriminate graduates’ levels of knowledge and skills, and the difficulty factor showed the exam was very easy.
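The indicators named here (difficulty factor, discrimination index) follow classical item analysis. The sketch below shows the standard computation on dichotomously scored items, assuming the usual upper/lower 27% definition of the discrimination index, which the abstract does not spell out; the response matrix is simulated, not the graduates' data.

```python
# Sketch of classical item analysis on dichotomously scored test data
# (1 = correct, 0 = incorrect). The 27% upper/lower-group definition of the
# discrimination index is a common convention and an assumption here.
import numpy as np

rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(261, 300))  # illustrative: 261 examinees x 300 items

totals = responses.sum(axis=1)
order = np.argsort(totals)
n_group = int(round(0.27 * responses.shape[0]))  # size of the upper/lower 27% groups
lower, upper = responses[order[:n_group]], responses[order[-n_group:]]

difficulty = responses.mean(axis=0)                       # proportion correct per item
discrimination = upper.mean(axis=0) - lower.mean(axis=0)  # DI = p_upper - p_lower

print("items too easy (p > 0.80):", int((difficulty > 0.80).sum()))
print("items with weak DI (DI <= 0.19):", int((discrimination <= 0.19).sum()))
```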

3.
Malaysian Journal of Public Health Medicine ; : 1-7, 2016.
Article in English | WPRIM | ID: wpr-626841

ABSTRACT

Comparable selection methods that use an interview as one of the selection criteria are applied in many countries globally; however, the interview procedure and its reliability vary. A semi-structured interview procedure was developed by the Faculty of Medicine at Universiti Sultan Zainal Abidin to make the final selection of shortlisted candidates seeking to study medicine at this institution in the 2015-2016 intake of the MBBS program. Interviews were held by multiple panels, each comprising two members who scored candidates independently. This article investigates the inter-rater reliability of the interviewers in the quality assessment of candidates seeking to join the Faculty of Medicine at Universiti Sultan Zainal Abidin, Malaysia. An observational study was conducted across all candidates shortlisted on merit for formal selection through the interview procedure. Data reflecting candidates’ characteristics and qualities were collected as quantitative scores, and inter-rater reliability was estimated with the intraclass correlation coefficient. A moderate difference in means (SD) among interviewers, varying from 37.61 (3.48) to 42.12 (0.60), was observed. The reliability of scores varied between 0.50 and 0.65, significant at p < 0.05 for the majority of assessors; however, among the 4 panels of assessors the intraclass correlation coefficient was between 0.70 and 0.90 (p < 0.001). Assessment of candidates’ performance based on observation did not reach the satisfactory level of intraclass correlation coefficient (ICC ≥ 0.70). Given the higher discrepancy in inter-rater scores in some cases, continuing faculty development in interviewing skills and calibration workshops are recommended to improve the reliability and validity of selection through the interview procedure in the future.
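The intraclass correlation coefficient quoted for the interview panels can be computed from a two-way ANOVA decomposition. The sketch below uses the single-rater, two-way random-effects form (Shrout & Fleiss ICC(2,1)) with illustrative panel scores rather than the study's data; the abstract does not state which ICC form was used, so that choice is an assumption.

```python
# Sketch of a two-way random-effects, single-rater ICC (Shrout & Fleiss ICC(2,1)).
import numpy as np

def icc_2_1(x: np.ndarray) -> float:
    """x: matrix of scores, rows = candidates, columns = raters."""
    n, k = x.shape
    grand = x.mean()
    row_means, col_means = x.mean(axis=1), x.mean(axis=0)
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between-candidates mean square
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between-raters mean square
    sse = ((x - grand) ** 2).sum() - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))                        # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Illustrative: 6 candidates scored by a panel of 2 interviewers.
scores = np.array([[38, 40], [42, 43], [35, 37], [41, 40], [39, 42], [36, 35]], dtype=float)
print(f"ICC(2,1) = {icc_2_1(scores):.2f}")
```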

4.
Malaysian Journal of Public Health Medicine ; : 7-15, 2016.
Article in English | WPRIM | ID: wpr-626840

ABSTRACT

The one-best-answer (OBA) multiple-choice question is considered a more effective tool for testing higher-order thinking, in terms of reliability and validity, than objective (multiple true/false) items. Determining the quality of OBA questions, however, requires item analysis of the difficulty index (PI) and discrimination index (DI), as well as of distractor efficiency (DE) based on functional distractors (FD) and non-functional distractors (NFD). Moreover, flaws in item structuring should not be allowed to affect students’ performance through error of measurement; the standard error of measurement (SEM) can be used to calculate a band of scores that reduces the impact of such error in assessment. The present study evaluates the quality of 30 OBA items administered in the Professional II examination in order to apply corrective measures and produce quality items for the question bank. The mean (SD) score of the 30 OBA items was 61.11 (7.495), and reliability (internal consistency) as Cronbach’s alpha was 0.447. Of the 30 OBA items, 11 (36.66%) with PI = 0.31-0.60 and 12 items (40.00%) with DI ≥ 0.19 fell into the category to be retained in the question bank, 6 items (20.00%) into the category to be revised (DI ≤ 0.19), and the remaining 12 items (40.00%) into the category to be discarded for a poor or negative DI. Of a total of 120 distractors, 63 (52.5%) were non-functional (NFD) and 57 (47.5%) were functional. Twenty-eight items (93.33%) contained 1-4 NFD and only 2 items (6.66%) had no NFD. The distractor efficiency of the 28 items with NFD and the 2 items without NFD showed 7 items each with 1 NFD (75% DE) and 4 NFD (0% DE), 10 items with 2 NFD (50% DE) and 4 items with 3 NFD (25% DE). The SEM calculated for the OBA was ±5.51 and, with the borderline cut-off point set at ≥45%, a band score within 1 SD (68%) was generated for the OBA. The high frequency of difficult or easy items and the moderate to poor discrimination suggest the need for corrective measures on the items. The increased number of NFD and the low DE in this study indicate the difficulty teaching faculty have in developing plausible distractors for OBA questions. The standard error of measurement should be used to calculate a band of scores when making pass/fail decisions on borderline students.
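The SEM-based band score described here follows directly from the classical formula SEM = SD * sqrt(1 - reliability). The sketch below uses the summary figures quoted in the abstract (SD = 7.495, alpha = 0.447, borderline cut-off 45%), plus the usual distractor-efficiency convention; treating a distractor chosen by fewer than 5% of examinees as non-functional is an assumption, not a detail stated in the abstract.

```python
# Sketch: standard error of measurement (SEM) and a 1-SEM score band around
# a borderline cut-off, using the summary figures quoted in the abstract.
import math

sd, alpha, cutoff = 7.495, 0.447, 45.0
sem = sd * math.sqrt(1 - alpha)        # SEM = SD * sqrt(1 - reliability)
band = (cutoff - sem, cutoff + sem)    # ~68% band for a borderline score

print(f"SEM = +/-{sem:.2f}")
print(f"borderline band = {band[0]:.1f} to {band[1]:.1f} percent")

# Distractor efficiency for a 5-option OBA item (4 distractors): each
# non-functional distractor (commonly one chosen by < 5% of examinees,
# an assumption here) reduces DE by 25 percentage points.
def distractor_efficiency(n_nonfunctional: int, n_distractors: int = 4) -> float:
    return 100.0 * (n_distractors - n_nonfunctional) / n_distractors

print([distractor_efficiency(nfd) for nfd in range(5)])  # DE for 0-4 NFD per item
```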

5.
Mongolian Medical Sciences ; : 47-51, 2016.
Article in English | WPRIM | ID: wpr-975603

ABSTRACT

Background: Health professional licensing was introduced in Mongolia in 1999. Medical school graduates must pass the health professional licensing exam (HPLE) to be registered. The HPLE success rate has reportedly decreased over the last few years among graduates who passed the final theoretical exam (FTE), and no research has been conducted to explain this trend. This research conducts a comparative assessment of the MCQs used for both the HPLE and the FTE.
Goal: To analyze the examinations and tests used to identify the level of medical knowledge of students who graduated as medical doctors from “Ach” Medical University during 2011-2015.
Materials and Methods: This is a cross-sectional descriptive study. It employed a statistical analysis of 2950 MCQs (24 versions) used for the HPLE by the Health Development Center of the MOH (N=16) and for the FTE by “Ach” Medical University (N=8) between 2011 and 2015. Test sheets of the HPLE (N=728) and FTE (N=686) were assessed to identify test reliability, difficulty index and discrimination index using the QuickSCORE II program of a Scantron ES-2010 test-reading machine.
Results: The success rate was much higher on the FTE than on the HPLE between 2011 and 2015. The HPLE success rate decreased dramatically from 2013 (87%) to 2014 (4%) and 2015 (24%), while the FTE success rate remained stable at almost 100%. The FTE reliability coefficients for 2011-2015 met the requirement at 0.92-0.96; the HPLE reliability coefficients for 2013 and 2014 did not. Of all the MCQs used, 97% on the FTE and 80% on the HPLE had a positive discrimination index, meaning they could distinguish medical school graduates’ knowledge.
Conclusion: Our findings confirm that HPLE success rates among medical school graduates are quite low. The reliability coefficients of the HPLE tests were lower (KR20 = 0.66-0.86) than those of the FTE (KR20 = 0.92-0.96), and the 2014 and 2015 tests in particular were more difficult and had a high percentage of negatively discriminating items. Test scores on the HPLE and FTE for 2011-2015 show a direct linear correlation.
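The KR20 coefficients reported for the HPLE and FTE come from dichotomously scored answer data and the Kuder-Richardson 20 formula. Below is a sketch on simulated responses, not the actual exam data.

```python
# Sketch of the Kuder-Richardson 20 (KR-20) reliability coefficient for
# dichotomously scored MCQ data (1 = correct, 0 = incorrect).
import numpy as np

def kr20(responses: np.ndarray) -> float:
    """responses: examinees x items matrix of 0/1 scores."""
    k = responses.shape[1]
    p = responses.mean(axis=0)                    # proportion correct per item
    q = 1 - p
    total_var = responses.sum(axis=1).var(ddof=1) # variance of total scores
    return (k / (k - 1)) * (1 - (p * q).sum() / total_var)

# Illustrative data: items share a common ability factor, so KR-20 is high.
rng = np.random.default_rng(1)
ability = rng.normal(size=(200, 1))
demo = (ability + rng.normal(size=(200, 50)) > 0).astype(int)
print(f"KR-20 = {kr20(demo):.2f}")
```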

6.
Univ. psychol ; 13(1): 217-226, Jan.-Mar. 2014. illus., tab.
Article in Spanish | LILACS | ID: lil-726972

ABSTRACT

The reliability of test scores is one of the most important psychometric properties of a psychological test. However, tests are often used to make dichotomous classifications of people, as in psychopathological screening or personnel selection. In such cases, conventional reliability coefficients are not appropriate for estimating the accuracy of the classifications. This paper presents Livingston’s K² coefficient (1972, 1973) and demonstrates its use through two empirical examples for estimating the reliability of a classification made with a psychological test.
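Livingston’s K², as commonly presented, adjusts a conventional reliability coefficient for how far the score distribution sits from the classification cut score. A minimal sketch with illustrative numbers follows; the formula used is the textbook form, not taken from this paper.

```python
# Sketch of Livingston's K2 for the reliability of a pass/fail classification,
# as commonly presented: it combines the conventional reliability, the observed
# score variance, and the distance of the mean from the cut score.
def livingston_k2(reliability: float, mean: float, variance: float, cutoff: float) -> float:
    d2 = (mean - cutoff) ** 2
    return (variance * reliability + d2) / (variance + d2)

# Illustrative example: alpha = 0.80, mean = 70, SD = 10 (variance 100), cut score = 60.
print(f"K2 = {livingston_k2(0.80, 70.0, 100.0, 60.0):.2f}")
# Prints 0.90: higher than alpha, because the cut score lies well away from the mean.
```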


Subject(s)
Psychological Tests, Data Accuracy
7.
Rev. medica electron ; 34(1): 1-6, Jan.-Feb. 2012.
Article in Spanish | LILACS | ID: lil-629890

ABSTRACT

An evaluative study was carried out to determine the precision of a written examination by means of a reliability analysis. The examination, consisting of 30 questions exploring a certain type of professional knowledge, was administered to 45 people, and a database of the scores was created in SPSS for Windows, version 16. Cronbach’s alpha coefficient was calculated for different parts of the examination, together with discrimination coefficients, to determine the variables most relevant to the reliability of the exam. The main results are presented in tables. A negative value was initially obtained for the reliability of the exam; by removing questions and rescaling the scores, an exam of acceptable reliability was obtained. It was concluded that reliability analysis is an effective procedure for increasing the precision of an examination.
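The procedure described, computing Cronbach’s alpha and then deleting questions to raise it, can be sketched as follows. The data are simulated stand-ins for the original 30-question, 45-person exam, and SPSS’s “alpha if item deleted” output is approximated by simply recomputing alpha without each item.

```python
# Sketch: Cronbach's alpha plus an "alpha if item deleted" scan, the kind of
# reliability analysis the abstract describes. Data are illustrative.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
data = rng.integers(0, 2, size=(45, 30)).astype(float)  # illustrative 45 x 30 exam

alpha_full = cronbach_alpha(data)
alpha_if_deleted = [cronbach_alpha(np.delete(data, j, axis=1)) for j in range(data.shape[1])]
worst = int(np.argmax(alpha_if_deleted))
print(f"alpha = {alpha_full:.3f}; dropping item {worst} gives {alpha_if_deleted[worst]:.3f}")
```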


Subject(s)
Humans, Statistics as Topic/methods, Data Interpretation, Statistical