Results 1 - 20 of 32
1.
J Intell ; 11(2)2023 Feb 16.
Article in English | MEDLINE | ID: mdl-36826935

ABSTRACT

As a component of many intelligence test batteries, figural matrices tests are an effective way to assess reasoning, which is considered a core ability of intelligence. Traditionally, the sum of correct items is used as a performance indicator (total solution procedure). However, recent advances in the development of computer-based figural matrices tests allow additional indicators to be considered for scoring. In two studies, we focused on the added value of a partial solution procedure employing log file analyses from a computer-based figural matrices test. In the first study (n = 198), we explored the internal validity of this procedure by applying both an exploratory bottom-up approach (using sequence analyses) and a complementary top-down approach (using rule jumps, an indicator taken from relevant studies). Both approaches confirmed that higher scores in the partial solution procedure were associated with higher structuredness in participants' response behavior. In the second study (n = 169), we examined the external validity by correlating the partial solution procedure in addition to the total solution procedure with a Grade Point Average (GPA) criterion. The partial solution procedure showed an advantage over the total solution procedure in predicting GPA, especially at lower ability levels. The implications of the results and their applicability to other tests are discussed.
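The contrast between the two scoring procedures can be made concrete with a small sketch. This is a hedged illustration, not the authors' implementation: it assumes each item's key is a set of construction rules and that the log files record which rules a respondent applied correctly, with the partial solution procedure granting proportional credit.

```python
# Hypothetical sketch: total vs. partial solution scoring for figural matrices.
# Rule names and data layout are illustrative assumptions, not the authors' code.

def total_score(responses, keys):
    """Total solution procedure: 1 point only for a fully correct item."""
    return sum(set(r) == set(k) for r, k in zip(responses, keys))

def partial_score(responses, keys):
    """Partial solution procedure: proportional credit per correctly applied rule."""
    return sum(len(set(r) & set(k)) / len(k) for r, k in zip(responses, keys))

keys = [["rotation", "addition"], ["completeness", "size"]]
responses = [["rotation", "addition"], ["completeness"]]  # second item half solved

print(total_score(responses, keys))    # 1
print(partial_score(responses, keys))  # 1.5
```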

2.
Behav Sci (Basel) ; 12(8)2022 Aug 18.
Article in English | MEDLINE | ID: mdl-36004864

ABSTRACT

Computer-based testing is an emerging method to evaluate students' mathematics learning outcomes. However, algebra problems impose a high cognitive load because they require multiple calculation steps, which might reduce students' performance in computer-based testing. To understand students' cognitive load when answering algebra questions in a computer-based testing environment, this study investigated three perspectives: element interactivity, practice effects, and individual differences. Seven levels of algebra exam questions were created using unary and simultaneous linear equations, and inverse efficiency scores were employed as the measure of cognitive load. Forty undergraduate and graduate students were tested. There were four findings: (1) As the element interactivity of the test materials increased, cognitive load increased rapidly. (2) The high-efficiency group had a lower cognitive load than the low-efficiency group, suggesting that the high-efficiency group had an advantage in a computer-based testing environment. (3) Practice had a considerable effect on reducing cognitive load, particularly for level 6 and 7 test items. (4) The low-efficiency group narrowed, but did not eliminate, the gap with the high-efficiency group; they may require additional experience in a computer-based testing environment to further reduce their cognitive load.
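The inverse efficiency score used above is conventionally computed as the mean response time on correct trials divided by the proportion correct, so that higher values indicate slower and/or less accurate performance. A minimal sketch with invented numbers:

```python
# Inverse efficiency score (IES): mean RT on correct trials / proportion correct.
# Higher IES = slower and/or less accurate, read here as higher cognitive load.
# The data below are invented for illustration.

def inverse_efficiency(rts_ms, correct):
    correct_rts = [rt for rt, c in zip(rts_ms, correct) if c]
    mean_rt = sum(correct_rts) / len(correct_rts)
    accuracy = sum(correct) / len(correct)
    return mean_rt / accuracy

rts = [5200, 4800, 6100, 5000]       # per-item response times in ms
acc = [1, 1, 0, 1]                   # 1 = correct, 0 = incorrect
print(inverse_efficiency(rts, acc))  # 5000 / 0.75 = 6666.7
```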

3.
J Intell ; 10(3)2022 Jul 13.
Article in English | MEDLINE | ID: mdl-35893272

ABSTRACT

Figural matrices tasks are one of the most prominent item formats used in intelligence tests, and their relevance for the assessment of cognitive abilities is unquestionable. However, despite the endeavors of the open science movement to make scientific research accessible at all levels, there is a lack of royalty-free figural matrices tests. The Open Matrices Item Bank (OMIB) closes this gap by providing free and unlimited access (GPLv3 license) to a large set of empirically validated figural matrices items. We developed a set of 220 figural matrices based on well-established construction principles commonly used in matrices tests and administered them to a sample of N = 2572 applicants to medical schools. The results of item response models and reliability analyses demonstrate the excellent psychometric properties of the items. In the discussion, we elucidate how researchers can use the OMIB to gain access to high-quality matrices tests for their studies. Furthermore, we provide perspectives on features that could further improve the utility of the OMIB.

4.
Adv Health Sci Educ Theory Pract ; 27(2): 405-425, 2022 05.
Article in English | MEDLINE | ID: mdl-35230589

ABSTRACT

BACKGROUND: Current demand for multiple-choice questions (MCQs) in medical assessment is greater than the supply; consequently, new item development methods are urgently needed. Automatic Item Generation (AIG) promises to overcome this burden by generating calibrated items through computer algorithms. Despite this promising scenario, there is still no evidence to encourage a general application of AIG in medical assessment. It is therefore important to evaluate AIG regarding its feasibility, validity and item quality. OBJECTIVE: To provide a narrative review of the feasibility, validity and item quality of AIG in medical assessment. METHODS: Electronic databases were searched for peer-reviewed, English-language articles published between 2000 and 2021 using the terms 'Automatic Item Generation', 'Automated Item Generation', 'AIG', 'medical assessment' and 'medical education'. Reviewers screened 119 records, and 13 full texts were checked against the inclusion criteria. A validity framework was applied to the included studies to draw conclusions regarding the validity of AIG. RESULTS: A total of 10 articles were included in the review. The synthesized data suggest that AIG is a valid and feasible method capable of generating high-quality items. CONCLUSIONS: AIG can solve current problems related to item development. It is an auspicious next-generation technique for the future of medical assessment, promising numerous quality items produced both quickly and economically.


Subject(s)
Research Design , Feasibility Studies , Humans
5.
Educ Inf Technol (Dordr) ; 27(2): 1771-1810, 2022.
Article in English | MEDLINE | ID: mdl-34366694

ABSTRACT

Score interchangeability of Computerized Fixed-Length Linear Testing (henceforth CFLT) and Paper-and-Pencil-Based Testing (henceforth PPBT) has become a controversial issue over the last decade, as technology has meaningfully restructured methods of educational assessment. Given this controversy, the various testing guidelines published on computerized testing may be used to investigate the interchangeability of CFLT and PPBT mean scores, to determine whether test takers' performance is influenced by testing administration mode and, specifically, whether the validity and reliability of the two versions of the same test are affected. This research was conducted not only to probe score interchangeability across testing modes but also to explore the role of age, gender, item review, ICT literacy, and attitudes towards computer use as moderator variables in test takers' reading achievement in CFLT. Fifty-eight EFL learners, homogeneous in both general English proficiency and reading skill and assigned to a single testing group, participated in this study. Three different versions of the TOEFL reading comprehension test, the Computer Attitude Scale (CAS), and the ICT Literacy Scale of TOEFL Examinees were used to collect data in this crossover, quasi-controlled empirical study with a common-person, pretest-posttest design. The findings demonstrated that although test takers' reading scores were interchangeable across the CFLT and PPBT administration modes, they differed with respect to item review. Furthermore, no significant interaction was found between age, gender, or ICT literacy and CFLT performance. However, attitudes towards computer use had a significant effect on CFLT achievement.

6.
Curr Pharm Teach Learn ; 13(8): 935-944, 2021 08.
Article in English | MEDLINE | ID: mdl-34294257

ABSTRACT

INTRODUCTION: In fall 2017, West Coast University School of Pharmacy implemented ExamSoft for testing. Three courses in each didactic year employed ExamSoft; prior to this, courses used Scantron-based exams. We surveyed students to assess their perception of ExamSoft, hypothesizing that students' inherent bias towards technology affected their perception of it. METHODS: To assess this hypothesis, we conducted a survey of all students. The survey contained questions about comfort with technology and nine questions on students' perceptions of ExamSoft and its usefulness. RESULTS: Survey responses were stratified according to respondents' preference for technology and its use in exams, yielding three groups: tech-embracers, tech-skeptics, and neutral. Respondents classified as tech-skeptics tended to have a more negative view of ExamSoft and its perceived impact on their grades than respondents classified as tech-embracers or neutral. CONCLUSIONS: Our study suggests that students' inherent bias towards technology plays an important role in their perception of computer-based testing. Assessing incoming students' comfort with technology and offering orientation activities to acquaint them with new technology could improve their acceptance of educational technology used for testing.


Subject(s)
Computers , Students , Educational Technology , Humans , Perception , Surveys and Questionnaires
7.
Am J Pharm Educ ; 84(12): ajpe8034, 2020 12.
Article in English | MEDLINE | ID: mdl-34283787

ABSTRACT

Objective. To determine whether elimination of backward navigation during an examination resulted in changes in examination score or time to complete the examination. Methods. Student performance on six examinations in which backward navigation was eliminated was compared to performance on examinations administered to pharmacy students the previous year, when backward navigation was allowed. The primary comparison of interest was the change in student performance on a subset of identical questions included on both examinations. Secondary outcomes included change in total examination score and completion time. Results. No significant reduction in examination scores was observed as a result of eliminating backward navigation. The average time that students spent on a question was significantly reduced on two of the six examinations. Conclusion. Restricting pharmacy students' ability to revisit previously answered questions (elimination of backward navigation) on an examination had no adverse effect on scores or testing time when assessed across three years of the didactic pharmacy curriculum.


Subject(s)
Education, Pharmacy , Pharmacy , Students, Pharmacy , Curriculum , Educational Measurement , Humans
8.
Front Psychol ; 9: 2177, 2018.
Article in English | MEDLINE | ID: mdl-30542303

ABSTRACT

The analysis of response time has received increasing attention during the last decades, since evidence from several studies supports a direct relationship between item response time and test performance. The aim of this study was to investigate whether item response latency affects a person's ability parameters, that is, whether investing time represents an adaptive or maladaptive practice. To examine this research question, data from 8,475 individuals completing the computerized version of the Postgraduate General Aptitude Test (PAGAT) were analyzed. To determine the extent to which response latency affects a person's ability, we used a Multiple Indicators Multiple Causes (MIMIC) model, in which every item in a scale was linked to its corresponding covariate (i.e., item response latency). We ran the MIMIC model within the Item Response Theory (IRT) framework (2-PL model). The results supported the hypothesis that item response latency can provide valuable information for obtaining more accurate estimates of persons' ability levels. For individuals who invest more time on easy items, the likelihood of success does not improve, most likely because slow and fast responders have significantly different levels of ability (fast responders are of higher ability than slow responders); consequently, investing more time does not prove adaptive for low-ability individuals. The opposite was found for difficult items: individuals spending more time on difficult items increase their likelihood of success, most likely because they are high achievers (on difficult items, individuals who spent more time were of significantly higher ability than fast responders). Thus, there appears to be an interaction between item difficulty and person ability that explains the effects of response time on the likelihood of success. We conclude that accommodating item response latency in a computerized assessment model can inform test quality and test takers' behavior and, in that way, enhance score measurement accuracy.
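As a sketch of the model class described above (a hedged illustration, not the authors' exact parameterization), a 2-PL MIMIC formulation adds each item's latency covariate t_ij to the two-parameter logistic response function:

$$P(X_{ij}=1 \mid \theta_j, t_{ij}) = \frac{1}{1+\exp\!\big[-\big(a_i(\theta_j-b_i) + \beta_i\, t_{ij}\big)\big]}$$

where theta_j is the ability of person j, a_i and b_i are the item discrimination and difficulty, and beta_i is the direct effect of the response-latency covariate on item i. With beta_i = 0 this reduces to the ordinary 2-PL model; a nonzero beta_i captures the item-specific effect of latency on the likelihood of success.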

9.
Adv Health Sci Educ Theory Pract ; 23(5): 995-1003, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30043313

ABSTRACT

This study compared the effects of two types of delayed feedback (correct response only, or correct response plus rationale) provided to students by a computer-based testing system following an exam. The preclinical medical curriculum at the University of Kansas Medical Center uses a two-exam system for summative assessments in which students test, revisit material, and then re-test (same content, different questions), with the higher score used to determine the student's grade. Using a quasi-experimental design and data collected during the normal course of instruction, test and re-test scores from midterm multiple-choice examinations were compared between academic year (AY) 2015-2016, when delayed feedback provided the correct answer only, and AY 2016-2017, when delayed feedback consisted of the correct answer plus a rationale. The average increase in score on the re-test was 2.29 ± 6.83% (n = 192) with the correct answer only and 3.92 ± 7.12% (n = 197) with rationales (p < 0.05). The effect of the rationales did not differ among students of differing academic abilities based on entering composite MCAT scores or Year 1 GPA. Thus, delayed feedback with exam question rationales resulted in a greater increase in exam score between test and re-test than feedback with the correct response only. This finding suggests that delayed elaborative feedback on a summative exam produced a small but significant improvement in learning in medical students.


Subject(s)
Education, Medical, Undergraduate/methods , Education, Medical, Undergraduate/statistics & numerical data , Educational Measurement/methods , Educational Measurement/statistics & numerical data , Formative Feedback , Humans , Learning
10.
J Phys Ther Sci ; 30(6): 790-793, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29950765

ABSTRACT

[Purpose] This study examines the relationship between the results of computer-based testing (CBT) administered to freshman and sophomore undergraduate physical therapy students and their levels of satisfaction with learning, school life, and graduation research, as well as their national examination results. [Subjects and Methods] The subjects of this survey were 56 male and 42 female physical therapy students who graduated from the International University of Health and Welfare, Ohtawara, in March 2017. The students were ranked into four quartile groups based on the results of CBT conducted at the end of the freshman and sophomore years. A visual analog scale was used to assess satisfaction levels at the end of the sophomore, junior, and senior years. The results of the national examination were scored independently. [Results] Across the freshman-year CBT groups, we found a significant difference in learning satisfaction during the senior year and in the national examination. In addition, across the sophomore-year CBT groups, there were significant differences in learning satisfaction for the sophomore, junior, and senior years, as well as in the national examination. [Conclusion] We found a link between the CBT results from the freshman and sophomore years and the national examination results. The results suggest that CBT has an educational effect.

11.
BMC Med Educ ; 18(1): 143, 2018 Jun 19.
Article in English | MEDLINE | ID: mdl-29914444

ABSTRACT

BACKGROUND: Because computers are used in many aspects of today's life, it seems necessary to include them in teaching and assessment processes. METHOD: The aims of this cross-sectional study were to construct a valid multidimensional scale, to identify the factors that influenced the nature of student motivation on Computer-Based Testing (CBT), to recognize how students self-regulated their activities around CBT, and to describe the efficiency of autonomous versus controlled situations on motivation. The study was carried out among 246 Iranian paramedical students of Tabriz University of Medical Sciences, Tabriz, Iran, in 2013-2014. The researchers prepared a questionnaire based on Self-Determination Theory (SDT), containing 26 items rated on a five-point Likert scale. It was adapted from a previously validated questionnaire and refined by sharing opinions with students and five professors. Exploratory factor analysis was conducted to examine the scale's structure. RESULTS: The Kaiser-Meyer-Olkin (KMO) measure indicated that the variables were correlated highly enough to provide a reasonable basis for factor analysis. The four selected factors explained 60.28% of the variance: autonomy 26.37%, stimulation 14.11%, relatedness 10.71%, and competency 9.10%. CONCLUSION: A questionnaire based on SDT variables was prepared and validated. The results indicated that autonomous extrinsic motivation correlated positively with intrinsic motivation and CBT. There was a general positive attitude towards computer-based testing among students. Because students became intrinsically motivated through the promotion of autonomous regulation, CBT was recommended as a suitable test mode.


Subject(s)
Allied Health Personnel/psychology , Attitude to Computers , Educational Measurement/methods , Motivation , Personal Autonomy , Students, Health Occupations/psychology , Surveys and Questionnaires , Adolescent , Adult , Allied Health Personnel/education , Cross-Sectional Studies , Factor Analysis, Statistical , Humans , Iran , Reproducibility of Results , Young Adult
12.
Curr Pharm Teach Learn ; 10(2): 235-242, 2018 02.
Article in English | MEDLINE | ID: mdl-29706282

ABSTRACT

BACKGROUND AND PURPOSE: The purpose of this study was to evaluate student and faculty perceptions of the transition to a required computer-based testing format and to identify any impact of this transition on student exam performance. EDUCATIONAL ACTIVITY AND SETTING: Separate questionnaires sent to students and faculty asked about perceptions of and problems with computer-based testing. Exam results from program-required courses for two years prior to and two years following the adoption of computer-based testing were compared to determine whether the testing format impacted student performance. FINDINGS: Responses to Likert-type questions about perceived ease of use showed no difference between students with one and with three semesters of experience with computer-based testing. Of 223 student-reported problems, 23% related to faculty training with the testing software. Students most commonly reported improved feedback (46% of responses) and ease of exam-taking (17% of responses) as benefits of computer-based testing. Faculty-reported difficulties most commonly related to problems with student computers during an exam (38% of responses), while the most commonly identified benefit was collecting assessment data (32% of responses). Neither faculty nor students perceived an impact on exam performance due to computer-based testing. An analysis of exam grades confirmed there was no consistent performance difference between the paper and computer-based formats. DISCUSSION AND SUMMARY: Both faculty and students rapidly adapted to computer-based testing. There was no evidence that switching to computer-based testing had any impact on student exam performance.


Subject(s)
Attitude , Computers , Education, Pharmacy , Educational Measurement/methods , Faculty, Pharmacy , Students, Pharmacy , Adult , Feedback , Female , Humans , Male , Perception , Surveys and Questionnaires , Young Adult
13.
Hum Factors ; 60(3): 340-350, 2018 05.
Article in English | MEDLINE | ID: mdl-29244530

ABSTRACT

Objective: The purpose of the present research is to establish measurement equivalence and test differences in reliability between computerized and paper-and-pencil-based tests of spatial cognition. Background: Researchers have increasingly adopted computerized test formats, but few attempt to establish equivalence between computer-based and paper-based tests. The mixed results in the literature on the test mode effect, which occurs when performance differs as a function of test medium, highlight the need to test for, rather than assume, measurement equivalence. One domain that has been increasingly computerized, and is thus in need of tests of measurement equivalence across test mode, is spatial cognition. Method: In the present study, 244 undergraduate students completed two measures of spatial ability (spatial visualization and cross-sectioning) in either computer-based or paper-and-pencil-based format. Results: Measurement equivalence was not supported across computer-based and paper-based formats for either spatial test. The results also indicated that administration type affected the types of errors made on the spatial visualization task, further highlighting the conceptual differences between test mediums. Paper-based tests also demonstrated higher reliability than the computerized versions. Conclusion: The results of the measurement equivalence tests caution against treating computer- and paper-based versions of spatial measures as equivalent. We encourage subsequent work to demonstrate test mode equivalence prior to using spatial measures, because current evidence suggests they may not reliably capture the same construct. Application: The assessment of test type differences may influence the medium in which spatial cognition tests are administered.
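One reliability comparison of the kind reported above is internal consistency (Cronbach's alpha) computed separately for each administration mode. A minimal sketch with invented data; this is not the authors' analysis code, and the simulated loadings are arbitrary assumptions:

```python
# Cronbach's alpha per test mode:
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
# Item responses below are simulated; real analyses would use the study data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_persons, n_items) matrix of item scores."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(0)
ability = rng.normal(size=(120, 1))                          # shared person factor
paper = (rng.normal(size=(120, 20)) + ability > 0).astype(int)
computer = (rng.normal(size=(120, 20)) + 0.5 * ability > 0).astype(int)
print(cronbach_alpha(paper), cronbach_alpha(computer))       # paper loads stronger
```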


Subject(s)
Neuropsychological Tests/standards , Space Perception/physiology , Spatial Navigation/physiology , Adult , Computers , Humans , Paper , Reproducibility of Results , Young Adult
14.
Drug Alcohol Depend ; 178: 94-100, 2017 09 01.
Article in English | MEDLINE | ID: mdl-28645065

ABSTRACT

BACKGROUND: The Screener and Opioid Assessment for Patients with Pain-Revised (SOAPP-R) is a 24-item assessment designed to assist in the prediction of aberrant drug-related behavior (ADB) among patients with chronic pain. Recent work has created shorter versions of the SOAPP-R, including a static 12-item short form and two computer-based methods (curtailment and stochastic curtailment) that monitor assessments in progress. The purpose of this study was to cross-validate these shorter versions in two new populations. METHODS: This retrospective study used data from patients recruited from a hospital-based pain center (n=84) and pain patients followed and treated at primary care centers (n=110). Subjects had been administered the SOAPP-R and assessed for ADB. In real-data simulation, the sensitivity, specificity, and area under the curve (AUC) of each form were calculated, as was the mean test length using curtailment and stochastic curtailment. RESULTS: Curtailment reduced the number of items administered by 30% to 34% while maintaining sensitivity and specificity identical to those of the full-length SOAPP-R. Stochastic curtailment reduced the number of items administered by 45% to 63% while maintaining sensitivity and specificity within 0.03 of those of the full-length SOAPP-R. The AUC of the 12-item form was equal to that of the 24-item form in both populations. CONCLUSIONS: Curtailment, stochastic curtailment, and the 12-item short form have potential to enhance the efficiency of the SOAPP-R.
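Deterministic curtailment, as used here, stops a fixed-length summed screener as soon as the classification is already decided. A hedged sketch follows; the cut-off and item maximum are illustrative stand-ins, not the published SOAPP-R scoring values:

```python
# Deterministic curtailment for a summed screener: stop once the running total
# already meets the positive cut-off, or can no longer reach it even if every
# remaining item scored the maximum. Cut-off and item range are illustrative.

def curtailed_test(item_scores, cutoff, item_max):
    total, n_items = 0, len(item_scores)
    for i, score in enumerate(item_scores):
        total += score
        remaining_max = (n_items - i - 1) * item_max
        if total >= cutoff:                 # positive classification is settled
            return "positive", i + 1
        if total + remaining_max < cutoff:  # cut-off is now unreachable
            return "negative", i + 1
    return "negative", n_items

scores = [4, 4, 4, 4, 3] + [0] * 19         # hypothetical 24-item response pattern
print(curtailed_test(scores, cutoff=18, item_max=4))  # ('positive', 5)
```

Because the stopping rule only fires when the final classification is already mathematically determined, sensitivity and specificity stay identical to the full-length form, which is exactly the property reported for curtailment above; stochastic curtailment relaxes "determined" to "highly probable" and saves more items at a small cost.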


Subject(s)
Analgesics, Opioid/therapeutic use , Chronic Pain/drug therapy , Pain Measurement/methods , Analgesics, Opioid/administration & dosage , Humans , Pain Clinics , Research Design , Retrospective Studies , Sensitivity and Specificity , Substance Abuse Detection
15.
Adv Med Educ Pract ; 8: 33-36, 2017.
Article in English | MEDLINE | ID: mdl-28096708

ABSTRACT

INTRODUCTION: Historically, testing medical students' skills with a handheld ophthalmoscope has been difficult to do objectively. Many programs train students using plastic models of the eye, which are very limited-fidelity simulators of a real human eye. This makes it difficult to be sure that actual proficiency is attained, given the differences between the various models and actual patients. The purpose of this article is to introduce a method of testing in which a medical student must match a patient with his/her fundus photo, ensuring objective evaluation while developing skills on real patients that are more likely to transfer directly into clinical practice. PRESENTATION OF CASE: Fundus photos from standardized patients (SPs) were obtained using a retinal camera and placed into a grid using proprietary software. Medical students were then asked to examine an SP and attempt to match the patient to his/her fundus photo in the grid. RESULTS: Of the 33 medical students tested, only 10 were able to match the SP's eye to the correct photo in the grid. The average time to a correct selection was 175 seconds, and successful students rated their confidence at 27.5% on average. Incorrect selections took less time (118 seconds on average) yet were accompanied by higher student-reported confidence (34.8% on average). The only noteworthy predictor of success (p<0.05) was the student's age (p=0.02). CONCLUSION: These results suggest an apparent gap in the ophthalmoscopy training of the students tested. It is also concerning that students who selected the incorrect photo were more confident in their selections than students who chose the correct photo. More training may be necessary to close this gap, and future studies should attempt to establish continuing protocols in multiple centers.

16.
Educ Psychol Meas ; 77(4): 570-586, 2017 Aug.
Article in English | MEDLINE | ID: mdl-30034020

ABSTRACT

The current study proposes a novel method to predict multistage testing (MST) performance without conducting simulations. This method, called MST test information, is based on analytic derivation of the standard errors of ability estimates across theta levels. We compared the analytically derived standard errors to simulation results to demonstrate the validity of the proposed method in terms of both measurement precision and classification accuracy. The results indicate that MST test information effectively predicted MST performance. In addition, the results highlight the relationship among test construction, MST design factors, and MST performance.
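Analytic predictions of this kind conventionally rest on Fisher information (a standard result; the study's exact derivation may differ in parameterization). For the items on a given MST route under a 2-PL model,

$$I(\theta) = \sum_i a_i^2\, P_i(\theta)\,\big(1-P_i(\theta)\big), \qquad SE(\hat{\theta}) = \frac{1}{\sqrt{I(\theta)}},$$

where P_i(theta) is the item response function and a_i the item discrimination. Evaluating SE across theta levels yields the precision profile of each route without running a single simulated examinee.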

17.
Korean J Med Educ ; 27(1): 3-10, 2015 Mar.
Article in Korean | MEDLINE | ID: mdl-25800256

ABSTRACT

PURPOSE: The purpose of this study was to evaluate the suitability (convenience, objectiveness, and satisfaction) of ubiquitous-based testing (UBT) as a medical education evaluation tool. METHODS: UBT was administered using a smart pad in our medical school in May 2012. A questionnaire was given twice. The pre-UBT questionnaire examined possession of a tablet computer, skillfulness with smart devices, the convenience of UBT, and its usefulness as a medical educational assessment tool. The post-UBT questionnaire evaluated the satisfaction, convenience, and preference of UBT and its usefulness as a medical educational assessment tool, as in the pre-UBT test. Survey items were rated on a 4-point scale (1 = "strongly disagree," 4 = "strongly agree"). RESULTS: One hundred three students (55.3% male) participated in the UBT. The mean age was 29.2±2.4 years. In the pre-UBT questionnaire, students responded affirmatively to the items about skillfulness with smart devices, clinical skill assessment, and achievement of educational objectives. The post-UBT responses to the items on the convenience of and satisfaction with the UBT were positive. In the post-UBT questionnaire, knowledge assessment (p=0.041) and achievement of educational objectives (p=0.015) differed significantly by gender, and satisfaction with the UBT (p=0.002) differed significantly by possession of a tablet computer. The relationship between students' ranks on this UBT and their average ranks over the three previous semesters was statistically significant (p<0.001). CONCLUSION: Convenience, objectiveness, knowledge assessment, and composition and completion were useful items in the UBT.


Subject(s)
Clinical Competence , Computers , Education, Medical , Educational Measurement/methods , Smartphone , Achievement , Adult , Consumer Behavior , Female , Goals , Humans , Male , Ownership , Sex Factors , Surveys and Questionnaires
19.
Arch Clin Neuropsychol ; 28(7): 700-10, 2013 Nov.
Article in English | MEDLINE | ID: mdl-23887185

ABSTRACT

The measurement of effort and performance validity is essential for computerized testing, where less direct supervision is needed. The clinical validation of an Automated Neuropsychological Metrics-Performance Validity Index (ANAM-PVI) was examined by converting ANAM test scores into a common metric based on their relative infrequency in an outpatient clinic sample with presumed good effort. Optimal ANAM-PVI cut-points were determined using receiver operating characteristic (ROC) curve analyses and an a priori specificity of 90%. Sensitivity and specificity were examined in the available validation samples (controls, simulators, and neurorehabilitation patients). ANAM-PVI scores differed between groups, with simulators scoring the highest. ROC curve analysis indicated excellent discriminability of ANAM-PVI scores ≥5 for detecting simulators versus controls (area under the curve = 0.858; odds ratio for detecting suboptimal performance = 15.6), but this cut-point resulted in a 27% false-positive rate in the clinical sample. When specificity in the clinical sample was set at 90%, sensitivity decreased (68%) but was consistent with other embedded effort measures. The results support the ANAM-PVI as an embedded effort measure and demonstrate the value of sample-specific cut-points in groups with cognitive impairment. Examination of different cut-points indicates that clinicians should choose sample-specific cut-points based on the sensitivity and specificity rates most appropriate for their patient population, with higher cut-points for those expected to have severe cognitive impairment (e.g., dementia or severe acquired brain injury).
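The cut-point procedure described above (fix specificity a priori, then read off sensitivity) can be sketched as follows. The scores and group labels are invented, not ANAM-PVI data; only scikit-learn's standard roc_curve and roc_auc_score calls are used:

```python
# Choosing a validity-index cut-point at an a-priori specificity of 90%:
# keep thresholds with false-positive rate <= 10%, then take the most
# sensitive one. Scores below are simulated for illustration.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
controls = rng.poisson(2, 200)      # hypothetical scores, presumed good effort
simulators = rng.poisson(6, 60)     # hypothetical scores, feigned impairment
scores = np.concatenate([controls, simulators])
labels = np.concatenate([np.zeros(200), np.ones(60)])

fpr, tpr, thresholds = roc_curve(labels, scores)
within_spec = fpr <= 0.10                    # specificity >= 90%
best = np.argmax(tpr[within_spec])           # most sensitive admissible cut-point
print("AUC:", roc_auc_score(labels, scores))
print("cut-point:", thresholds[within_spec][best],
      "sensitivity:", tpr[within_spec][best])
```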


Subject(s)
Cognition Disorders/diagnosis , Neuropsychological Tests , Adolescent , Adult , Aged , Cognition Disorders/psychology , Female , Humans , Male , Middle Aged , ROC Curve , Sensitivity and Specificity
20.
Front Hum Neurosci ; 6: 195, 2012.
Article in English | MEDLINE | ID: mdl-22822394

ABSTRACT

This review illustrates how, after unilateral brain damage, the presence and severity of spatial awareness deficits for the contralesional hemispace depend greatly on the quantity of attentional resources available for performance. After a brief description of neglect and extinction, different frameworks accounting for spatial and non-spatial attentional processes are outlined. The central part of the review describes how the performance of brain-damaged patients is negatively affected by increased task demands, which can result in the emergence of severe awareness deficits for contralesional space even in patients who perform normally on paper-and-pencil tests. Throughout the review, neglect is described as a spatial syndrome whose presence and severity can be exacerbated by both spatial and non-spatial task demands. The take-home message is that the presence and degree of contralesional neglect and extinction can be dramatically underestimated based on standard clinical (paper-and-pencil) testing, where patients can easily compensate for their deficits. Only tasks in which compensation is made impossible provide an appropriate means of detecting these disabling contralesional deficits of awareness when they become subtle in post-acute stroke phases.
