Results 1 - 12 of 12
1.
Psychol Serv ; 10(2): 257-63, 2013 May.
Article in English | MEDLINE | ID: mdl-23003117

ABSTRACT

The Beck Depression Inventory II (BDI-II) has been suspected of overestimating the level of depression in individuals who endure chronic pain. Using a sample (N = 345) of male military veterans with chronic pain enrolled in an outpatient treatment program, a factor analysis of the BDI-II revealed a "Somatic Complaints" factor along with 2 other factors we labeled "Negative Rumination" and "Mood." Standardized scores were provided for each BDI-II factor score, the Total score, and the Total minus Somatic score. The internal consistency reliabilities (Gilmer-Feldt and alpha coefficients) for all scores were found to be clinically acceptable. Item-total score correlations showed that all of the BDI-II items were good discriminators (r > .30). We conclude that the normative data provided in this study should help control for somatic responding by male chronic pain veterans on the BDI-II. We highly recommend that clinicians and researchers use the norm-referenced method when interpreting BDI-II scores from individuals suffering from chronic pain.
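
The abstract reports coefficient alpha and item-total correlations without showing the computation. As a rough illustration only, the Python sketch below computes coefficient alpha on a made-up item-response matrix; the data are random placeholders (21 items to mirror the BDI-II, 345 simulated respondents), not the study's data, so the printed value will be near zero.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for an (n_respondents x n_items) score matrix."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 0-3 responses for 345 respondents on 21 items (random, so
# alpha will be close to zero -- a real scale would show correlated items).
rng = np.random.default_rng(0)
scores = rng.integers(0, 4, size=(345, 21)).astype(float)
print(round(cronbach_alpha(scores), 3))
```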


Subject(s)
Chronic Pain/psychology , Depressive Disorder/psychology , Psychiatric Status Rating Scales/standards , Psychometrics/methods , Veterans/psychology , Adult , Aged , Aged, 80 and over , Depressive Disorder/diagnosis , Humans , Male , Middle Aged , Severity of Illness Index , Young Adult
2.
Appl Neuropsychol ; 14(4): 284-90, 2007.
Article in English | MEDLINE | ID: mdl-18067425

ABSTRACT

Criterion-referenced (Livingston r) and norm-referenced (Gilmer-Feldt r and Coefficient Alpha) techniques were used to calculate the internal consistency reliability of the Bender-Gestalt Test (BGT) Total Score using the 12-item Lacks system of scoring. Livingston's r was found to be .825 for the Lacks BGT cutoff score of 5. The Gilmer-Feldt and alpha coefficients for the Lacks Total Score were found to be .644 and .626, respectively. An item analysis showed that most of the BGT items (9 out of 12) were within established criteria for item difficulty; however, 7 items were found to be poor discriminators. The interscorer reliabilities based on three scorers, two scorers, and a single scorer were found to be .895, .852, and .740, respectively. Due to the low reliabilities and several inherent flaws identified with the Lacks scoring system, the authors recommend that users of the BGT consider alternative objective scoring systems.
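
The abstract does not spell out how a criterion-referenced coefficient differs from alpha. A minimal sketch follows, assuming the commonly cited Livingston (1972) formula, in which reliability at a cutoff grows as the group mean moves away from that cutoff; the mean and SD used here are placeholders, not values from this study.

```python
def livingston_r(reliability: float, mean: float, sd: float, cutoff: float) -> float:
    """Criterion-referenced reliability at a given cutoff score, assuming
    Livingston's formula: [r*S^2 + (M - C)^2] / [S^2 + (M - C)^2]."""
    dev_sq = (mean - cutoff) ** 2
    return (reliability * sd ** 2 + dev_sq) / (sd ** 2 + dev_sq)

# Placeholder inputs: alpha = .626 (reported above), cutoff = 5,
# and an assumed sample mean of 3.2 and SD of 2.5.
print(round(livingston_r(0.626, mean=3.2, sd=2.5, cutoff=5), 3))
```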


Subject(s)
Bender-Gestalt Test/statistics & numerical data , Cognition Disorders/diagnosis , Psychometrics , Adult , Cognition Disorders/psychology , Confidence Intervals , Data Interpretation, Statistical , Female , Humans , Male , Middle Aged , Observer Variation , Reproducibility of Results
3.
Clin Neuropsychol ; 20(4): 678-94, 2006 Dec.
Article in English | MEDLINE | ID: mdl-16980254

ABSTRACT

The Category Test is a well-known neuropsychological instrument used to assess concept formation and higher executive abilities. The present study investigated the utility of additional scores for the Category Test. We used principles developed in cognitive psychology to create several new measures for subtests 5 and 6 of this test. These scores were primarily designed to be sensitive to interference effects from learning the decision rules of subtests 2, 3, and 4. The new scores, as well as the total error scores from subtests 5 and 6, were used to discriminate subjects with documented brain injury from subjects who were neurologically normal based on neuroimaging and neurologic evaluation. The Category Test was given following Reitan's (1979) instructions, with the exception that, to reduce the "executive" guidance of the examiner, no additional prompting was given to participants who struggled early with the test. Because any "interference" from earlier subtests on performance of subtests 5 and 6 should be related to mastery of those earlier subtests, the normal group was matched to the brain-impaired group on which subtest(s) they learned. This resulted in four learning groups: (a) learned subtests 3 and 4; (b) learned subtest 4 but not 3; (c) learned subtest 3 but not 4; and (d) failed to learn either subtest. Analyses of variance revealed that the three measures of interference were significantly greater in the brain-damaged group than in the normal controls. Also, specific interference measures were related to specific prior subtest mastery, thus providing support for a proactive interference effect. In addition, we have evidence that our new measures may be selectively sensitive to frontal system dysfunction.
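
The abstract names analyses of variance without further detail; the fragment below is only a generic illustration of that kind of group comparison, run on hypothetical interference scores for two simulated groups (the group sizes, means, and SDs are made up).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical interference scores (higher = more proactive interference)
brain_injured = rng.normal(loc=8.0, scale=3.0, size=40)
controls = rng.normal(loc=5.0, scale=3.0, size=40)

f_stat, p_value = stats.f_oneway(brain_injured, controls)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```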


Subject(s)
Concept Formation/physiology , Nervous System Diseases/physiopathology , Neuropsychological Tests , Problem Solving/physiology , Weights and Measures , Adult , Aged , Analysis of Variance , Female , Humans , Male , Middle Aged , Predictive Value of Tests , Reproducibility of Results
4.
Percept Mot Skills ; 100(3 Pt 1): 695-702, 2005 Jun.
Article in English | MEDLINE | ID: mdl-16060429

ABSTRACT

The present study investigated the types of inaccurate responses, i.e., Don't Know, Semantic, Visual (nonlinguistic), Phonological, Circumlocutory, and Perseverative, made on the Hooper Visual Organization Test by a heterogeneous sample of 68 brain-damaged and 63 substance abuse patients. The mean ages of the brain-damaged and substance abuse groups were 46.0 (SD = 13.5) and 43.7 (SD = 12.9) years, respectively. Analysis showed that the brain-damaged group made significantly more Visual and Perseverative responses than the substance abuse group. There was significantly more variance in the Visual responses than in the Semantic responses for the brain-damaged group. The authors conclude that visuospatial ability is the primary factor for successful performance on this test.


Subject(s)
Brain Damage, Chronic/diagnosis , Cognition Disorders/diagnosis , Neuropsychological Tests/statistics & numerical data , Substance-Related Disorders/diagnosis , Visual Perception , Adult , Agnosia/diagnosis , Agnosia/psychology , Brain Damage, Chronic/psychology , Cognition Disorders/psychology , Humans , Middle Aged , Perceptual Closure , Psychometrics , Reproducibility of Results , Semantics , Space Perception , Substance-Related Disorders/psychology , Verbal Behavior
5.
Assessment ; 12(2): 137-44, 2005 Jun.
Article in English | MEDLINE | ID: mdl-15914716

ABSTRACT

Criterion-referenced (Livingston) and norm-referenced (Gilmer-Feldt) techniques were used to measure the internal consistency reliability of Folstein's Mini-Mental State Examination (MMSE) on a large sample (N = 418) of elderly medical patients. Two administration and scoring variants of the MMSE Attention and Calculation section (Serial 7s only and WORLD only) were investigated. Livingston reliability coefficients (rs) were calculated for a wide range of cutoff scores. As necessary for the calculation of the Gilmer-Feldt r, a factor analysis showed that the MMSE measures three cognitive domains. Livingston's r for the most widely used MMSE cutoff score of 24 was .803 for Serial 7s and .795 for WORLD. The Gilmer-Feldt internal consistency reliability coefficient was .764 for Serial 7s and .747 for WORLD. Item analysis showed that nearly all of the MMSE items were good discriminators, but 12 were too easy. True score confidence intervals should be applied when interpreting MMSE test scores.
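
The closing recommendation about true score confidence intervals can be made concrete with the usual standard error of measurement approach; the sketch below assumes a band of observed score plus or minus z * SEM, with SEM = SD * sqrt(1 - r). The SD used is an assumed placeholder, not a value reported in the study.

```python
import math

def observed_score_ci(score: float, sd: float, reliability: float, z: float = 1.96):
    """Confidence band around an observed score via the standard error of
    measurement, SEM = SD * sqrt(1 - reliability)."""
    sem = sd * math.sqrt(1 - reliability)
    return score - z * sem, score + z * sem

# Placeholder values: observed MMSE = 24, Gilmer-Feldt r = .764 (Serial 7s),
# and an assumed sample SD of 4.0.
low, high = observed_score_ci(24, sd=4.0, reliability=0.764)
print(f"95% band: {low:.1f} to {high:.1f}")
```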


Subject(s)
Mental Status Schedule , Aged , Factor Analysis, Statistical , Female , Geriatric Assessment , Humans , Male , Psychometrics , Reproducibility of Results
6.
Appl Neuropsychol ; 12(1): 19-23, 2005.
Article in English | MEDLINE | ID: mdl-15788219

ABSTRACT

Coefficient alpha and an item analysis were calculated for the 16-item Benton Visual Form Discrimination Test (VFDT) using a heterogeneous sample (N = 293) of mostly elderly medical patients who were suspected of having cognitive impairment. The total score reliability was .74. An item analysis found that 15 of the items were within established criteria for item difficulty; however, 5 items were found to be poor discriminators. Through the use of confidence intervals around observed scores, it was shown that the current classification criterion for the VFDT demands a higher reliability coefficient than what was found. Also, evidence for the test's insufficient level of difficulty is presented. It is difficult to recommend this test for clinical use.
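
For readers unfamiliar with the two item statistics mentioned, the sketch below computes the classical indices on a made-up pass/fail matrix: difficulty as the proportion of patients passing each item, and discrimination as the corrected item-total correlation. The 0/1 scoring and the data are assumptions for illustration, not the VFDT's actual scoring rules.

```python
import numpy as np

def item_analysis(responses: np.ndarray):
    """Classical item statistics for a 0/1 (fail/pass) response matrix:
    difficulty = proportion passing each item;
    discrimination = correlation of each item with the rest of the test."""
    difficulty = responses.mean(axis=0)
    discrimination = np.empty(responses.shape[1])
    for i in range(responses.shape[1]):
        rest = np.delete(responses, i, axis=1).sum(axis=1)  # total without item i
        discrimination[i] = np.corrcoef(responses[:, i], rest)[0, 1]
    return difficulty, discrimination

# Hypothetical pass/fail data for 293 patients on 16 items (random, so the
# discrimination values will hover near zero).
rng = np.random.default_rng(2)
p, d = item_analysis((rng.random((293, 16)) > 0.3).astype(float))
print(p.round(2), d.round(2))
```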


Subject(s)
Discrimination, Psychological/physiology , Neuropsychological Tests , Psychometrics , Visual Perception/physiology , Adult , Aged , Aged, 80 and over , Female , Humans , Male , Middle Aged , Reproducibility of Results
7.
J Clin Psychol ; 59(9): 985-90, 2003 Sep.
Article in English | MEDLINE | ID: mdl-12945063

ABSTRACT

Several studies have investigated random responding on the F, F Back, and VRIN scales. Only one study attempted to provide practical cutoff scores for these scales, but it was unable to reach definitive cutoffs. This study uses the normal approximation to the binomial distribution and provides confidence interval bounds for random responding at the 95, 90, and 85% levels for the F, F Back, and VRIN scales. The study also investigated whether humans asked to respond randomly produce F, F Back, and VRIN scores different from computer-generated random scores. The results show nonsignificant differences between human and computer responses for the F and F Back scales and mixed results for the VRIN scale.
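
The normal approximation referred to above can be written in a few lines. The sketch assumes a scale on which a random responder scores each item with probability .5; the 60-item length is a placeholder rather than the actual length of any of the three scales, and the resulting bounds are not the study's cutoffs.

```python
import math

def random_responding_bounds(n_items: int, z: float = 1.96, p: float = 0.5):
    """Normal approximation to Binomial(n_items, p): the raw-score range
    expected when every item is answered at random."""
    mean = n_items * p
    sd = math.sqrt(n_items * p * (1 - p))
    return mean - z * sd, mean + z * sd

# 95% bounds for a hypothetical 60-item scale
print(random_responding_bounds(60))
```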


Subject(s)
Computer Simulation , Personality Inventory/statistics & numerical data , Humans , Personality Inventory/standards , Psychometrics , Random Allocation , Reference Values , Sensitivity and Specificity
8.
Psychol Rep ; 92(2): 468-72, 2003 Apr.
Article in English | MEDLINE | ID: mdl-12785627

ABSTRACT

The poorly written administration and scoring instructions for the Boston Naming Test allow too wide a range of interpretations. Three different, seemingly correct interpretations of the scoring methods were compared. The results show that these methods can produce large differences in the total score.


Subject(s)
Language Disorders/diagnosis , Neuropsychological Tests , Research Design/statistics & numerical data , Female , Humans , Male , Middle Aged , Neuropsychological Tests/standards , Neuropsychological Tests/statistics & numerical data , Neuropsychology/instrumentation , Research Design/standards
9.
Assessment ; 10(1): 66-70, 2003 Mar.
Article in English | MEDLINE | ID: mdl-12675385

ABSTRACT

The psychometric properties of the Hooper Visual Organization Test (VOT) have not been well investigated. Here the authors present internal consistency and interrater reliability coefficients, and an item analysis, using data from a sample (N = 281) of "cognitively impaired" and "cognitively intact" patients, and patients with undetermined cognitive status. Coefficient alpha for the VOT total sample was .882. An item analysis found that 26 of the 30 items were good at discriminating among patients. Also, the interrater reliabilities for three raters (.992), two raters (.988), and one rater (.977) were excellent. Therefore, the judgmental scoring of the VOT does not interfere significantly with its clinical utility. The authors conclude that the VOT is a psychometrically sound test.
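
The multi-rater reliabilities reported above line up with what the Spearman-Brown prophecy formula predicts from the single-rater value, so a brief sketch of that formula may be useful; it is offered as an illustration, not as a claim about how the authors derived their coefficients.

```python
def spearman_brown(single_rater_r: float, n_raters: int) -> float:
    """Projected reliability when n_raters' scores are averaged
    (Spearman-Brown prophecy formula)."""
    return (n_raters * single_rater_r) / (1 + (n_raters - 1) * single_rater_r)

for k in (1, 2, 3):
    print(k, round(spearman_brown(0.977, k), 3))
# 1 0.977, 2 0.988, 3 0.992 -- consistent with the coefficients reported above
```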


Subject(s)
Neuropsychological Tests , Psychometrics , Visual Perception/physiology , Cognition Disorders/psychology , Humans , Observer Variation , Space Perception/physiology
10.
Psychol Rep ; 93(3 Pt 2): 1080-2, 2003 Dec.
Article in English | MEDLINE | ID: mdl-14765574

ABSTRACT

Knight's 2003 analysis of the effect of the WAIS-III instructions on the Matrix Reasoning subtest was based on multiple t tests, which is a violation of conventional statistical procedures. Using this procedure, significant differences were found between the group that knew the subtest was untimed and the group that did not know whether the subtest was timed or untimed. Reanalysis of the data used three statistical alternatives: (a) Bonferroni correction for all possible t tests, (b) one-way analysis of variance, and (c) selected t tests with the Bonferroni correction. All three analyses yielded nonsignificant differences between means, thereby changing the conclusions of Knight's study.
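
The Bonferroni correction mentioned in alternatives (a) and (c) simply divides the nominal alpha by the number of comparisons. The sketch below runs all pairwise t tests among three hypothetical instruction groups at the adjusted alpha; the group structure and data are placeholders, not Knight's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical Matrix Reasoning scores for three instruction conditions
groups = [rng.normal(10, 3, 30) for _ in range(3)]

pairs = [(0, 1), (0, 2), (1, 2)]
alpha_adj = 0.05 / len(pairs)          # Bonferroni-adjusted alpha
for i, j in pairs:
    t, p = stats.ttest_ind(groups[i], groups[j])
    print(f"groups {i} vs {j}: p = {p:.3f}, significant = {p < alpha_adj}")
```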


Subject(s)
Decision Making , Wechsler Scales/statistics & numerical data , Humans , Reproducibility of Results
11.
J Clin Psychol ; 58(12): 1615-7, 2002 Dec.
Article in English | MEDLINE | ID: mdl-12455026

ABSTRACT

The effectiveness of the MCMI-III Validity scale, Scale X, and the Clinical Personality Pattern scales in detecting random responding was put to the test. The binomial expansion and Monte Carlo techniques were used. If the examiner is willing to interpret tests of questionable validity, then 50% of the random responders will not be detected. Scale X and the Clinical Personality Pattern scales were useless in detecting random responders.
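
A Monte Carlo check of this kind can be sketched as follows: simulate many all-random answer sheets, score a scale, and count how many protocols a cutoff flags. The item count matches the 175-item MCMI-III, but the scale composition and cutoff below are invented for illustration and do not reflect the real Scale X scoring.

```python
import numpy as np

rng = np.random.default_rng(4)

N_ITEMS, N_PROTOCOLS = 175, 10_000
scale_items = rng.choice(N_ITEMS, size=20, replace=False)  # made-up 20-item scale
CUTOFF = 15                                                # made-up raw-score cutoff

protocols = rng.integers(0, 2, size=(N_PROTOCOLS, N_ITEMS))  # random true/false
raw_scores = protocols[:, scale_items].sum(axis=1)
flagged = (raw_scores >= CUTOFF).mean()
print(f"proportion of random protocols flagged: {flagged:.2%}")
```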


Subject(s)
Personality Inventory/standards , Adult , Aged , Female , Humans , Male , Middle Aged , Monte Carlo Method , Personality Inventory/statistics & numerical data , Psychometrics , Sensitivity and Specificity , Software
12.
Percept Mot Skills ; 95(3 Pt 2): 1096, 2002 Dec.
Article in English | MEDLINE | ID: mdl-12578248

ABSTRACT

A reanalysis of the retest reliabilities for the Colored Progressive Matrices indicates that Kazlauskaite and Lynn's (2002) conclusions were not accurate.


Subject(s)
Cognition Disorders/diagnosis , Intelligence Tests , Visual Perception , Humans , Reproducibility of Results