Results 1 - 20 of 37
1.
J Clin Exp Neuropsychol ; 46(1): 67-79, 2024 02.
Article in English | MEDLINE | ID: mdl-38362939

ABSTRACT

OBJECTIVE: To adjust the decision criterion for the Word Memory Test (WMT, Green, 2003) to minimize the frequency of false positives. METHOD: Archival data were combined into a database (n = 3,210) to examine the best cut score for the WMT. We compared results based on the original scoring rules and those based on adjusted scoring rules, using a criterion based on 16 performance validity tests (PVTs) exclusive of the WMT. Cutoffs based on peer-reviewed publications and test manuals were used. The resulting PVT composite was considered the best estimate of validity status. We focused on a specificity of .90, with a false-positive rate of less than .10, across multiple samples. RESULTS: Each examinee was administered the WMT as well as, on average, 5.5 (SD = 2.5) other PVTs. Based on the original scoring rules of the WMT, 31.8% of examinees failed. Using a single failure on the criterion PVT (C-PVT), the base rate of failure was 45.9%. When requiring two or more failures on the C-PVT, the failure rate dropped to 22.8%. Applying a contingency analysis (i.e., χ²) to the two-failures model on the C-PVT measure and using the original rules for the WMT resulted in only 65.3% agreement. However, using our adjusted rules for the WMT, which relied on only the IR and DR WMT subtest scores with a cutoff of 77.5%, agreement between the adjusted rules and the C-PVT criterion equaled 80.8%, an improvement of 12.1%. The adjustment resulted in a 49.2% reduction in false positives while preserving a sensitivity of 53.6%. The specificity for the new rules was 88.8%, for a false-positive rate of 11.2%. CONCLUSIONS: Results supported lowering the cut score for correct responding from 82.5% to 77.5% correct. We also recommend discontinuing the use of the Consistency subtest score in the determination of WMT failure.
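As an illustration of the classification statistics discussed in this abstract (agreement, sensitivity, and specificity against a criterion of two or more failures on independent PVTs), here is a minimal sketch in Python. The simulated scores, the Poisson model of criterion failures, and the rule that either subtest below the cutoff counts as a WMT failure are assumptions for illustration, not the study's dataset or scoring algorithm.

```python
# Minimal sketch (simulated data, not the study's archival database): score an
# adjusted WMT rule (IR/DR percent correct, cutoff 77.5%) against a criterion
# defined as failing two or more independent PVTs, then compute agreement,
# sensitivity, and specificity.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
criterion_failures = rng.poisson(1.0, n)            # hypothetical count of C-PVT failures
criterion_invalid = criterion_failures >= 2         # two-failures criterion

# Hypothetical IR and DR percent-correct scores, lower for criterion-invalid cases
ir = np.where(criterion_invalid, rng.normal(74, 12, n), rng.normal(94, 6, n)).clip(0, 100)
dr = np.where(criterion_invalid, rng.normal(72, 12, n), rng.normal(93, 6, n)).clip(0, 100)

wmt_fail = np.minimum(ir, dr) < 77.5                # assumed rule: either subtest below cutoff

tp = np.sum(wmt_fail & criterion_invalid)
fp = np.sum(wmt_fail & ~criterion_invalid)
tn = np.sum(~wmt_fail & ~criterion_invalid)
fn = np.sum(~wmt_fail & criterion_invalid)

print(f"agreement   = {(tp + tn) / n:.3f}")
print(f"sensitivity = {tp / (tp + fn):.3f}")
print(f"specificity = {tn / (tn + fp):.3f}")
```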


Subjects
Neuropsychological Tests, Humans, Female, Male, Adult, False Positive Reactions, Middle Aged, Neuropsychological Tests/standards, Young Adult, Aged, Malingering/diagnosis, Adolescent, Memory and Learning Tests/standards, Reproducibility of Results, Sensitivity and Specificity
2.
Clin Neuropsychol ; : 1-17, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-38041021

ABSTRACT

Objective: To determine if similar levels of performance on the Overall Test Battery Mean (OTBM) occur at different forced-choice test (FCT) p-value failure thresholds. Second, to determine the OTBM levels that are associated with failures at above-chance cutoffs on various performance validity tests (PVTs). Method: OTBMs were computed from archival data obtained from four practices. We calculated each examinee's Estimated Premorbid Global Ability (EPGA) and OTBM. The sample size was 5,103 examinees, with 282 (5.5%) of these scoring below chance at p ≤ .20 on at least one FCT. Results: The OTBM associated with a failure at p ≤ .20 was equivalent to the OTBM associated with failing 6 or more PVTs at above-chance cutoffs. The mean OTBMs relative to increasingly strict FCT p cutoffs were similar (T scores in the 30s). As expected, there was an inverse relationship between the number of PVTs failed and examinees' OTBMs. Conclusions: The data support the use of p ≤ .20 as the probability level for testing the significance of below-chance performance on FCTs. The OTBM can be used to index the influence of invalid performance on outcomes, especially when an examinee scores below chance.
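For readers who want to see what a p ≤ .20 below-chance test looks like in practice, here is a minimal sketch using a one-sided exact binomial test on a two-alternative forced-choice measure. The item count and raw score are hypothetical.

```python
# Minimal sketch: one-sided exact binomial test for below-chance responding on a
# two-alternative forced-choice PVT, flagged at the p <= .20 level discussed above.
from scipy.stats import binomtest

n_items, n_correct = 50, 21                       # hypothetical forced-choice results
result = binomtest(n_correct, n_items, p=0.5, alternative="less")
flag_below_chance = result.pvalue <= 0.20
print(f"p = {result.pvalue:.3f}; below-chance flag: {flag_below_chance}")
```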

3.
Clin Neuropsychol ; 36(1): 1-23, 2022 01.
Article in English | MEDLINE | ID: mdl-32603209

ABSTRACT

OBJECTIVE: The current study utilizes five decades of data to demonstrate cohort differences in gender representation in governance, speaking at conferences, serving on editorial boards, and in scholarly productivity in clinical neuropsychology. Broadly examining gender disparities across domains of professional attainment helps illuminate the areas in which inequity in clinical neuropsychology is most pronounced and in need of ameliorative resources. METHODS: Data from 1967 to 2017 were coded from publicly available information from the four major professional associations for clinical neuropsychology in the U.S. (i.e., INS, AACN, NAN, & SCN). Gender differences were examined in (1) speaking at a national conference, (2) holding an office in a professional organization, (3) serving on the editorial team for a journal affiliated with a professional organization, and (4) scholarly activity as coded from Google Scholar. RESULTS: The percentage of men in the field significantly declined across time, whereas the percentage of women significantly increased; the number of women exceeded the number of men in approximately 1992. Gender differences in conference speakers, editorial board members, and research citations were greater in the earlier than in the more recent cohorts of clinical neuropsychologists, but gender inequity in conference speaking and editorial activities is evident in the most recent cohorts. DISCUSSION: Gender differences in conference speakers, editorial board members, and in earning research citations have diminished over time, but early career women still face disadvantages in speaking at conferences and serving on editorial boards. We provide strategies to increase and sustain women's participation in leadership in neuropsychology.


Subjects
Leadership, Neuropsychology, Female, Humans, Income, Male, Neuropsychological Tests, Societies
4.
Clin Neuropsychol ; 35(6): 1053-1106, 2021 08.
Article in English | MEDLINE | ID: mdl-33823750

ABSTRACT

Objective: Citation and download data pertaining to the 2009 AACN consensus statement on validity assessment indicated that the topic maintained high interest in subsequent years, during which key terminology evolved and relevant empirical research proliferated. With a general goal of providing current guidance to the clinical neuropsychology community regarding this important topic, the specific update goals were to: identify current key definitions of terms relevant to validity assessment; learn what experts believe should be reaffirmed from the original consensus paper, as well as new consensus points; and incorporate the latest recommendations regarding the use of validity testing, as well as current application of the term 'malingering.' Methods: In the spring of 2019, four of the original 2009 work group chairs and additional experts for each work group were impaneled. A total of 20 individuals shared ideas and writing drafts until reaching consensus on January 21, 2021. Results: Consensus was reached regarding affirmation of prior salient points that continue to garner clinical and scientific support, as well as creation of new points. The resulting consensus statement addresses definitions and differential diagnosis, performance and symptom validity assessment, and research design and statistical issues. Conclusions/Importance: In order to provide bases for diagnoses and interpretations, the current consensus is that all clinical and forensic evaluations must proactively address the degree to which results of neuropsychological and psychological testing are valid. There is a strong and continually growing evidence-based literature on which practitioners can confidently base their judgments regarding the selection and interpretation of validity measures.


Subjects
Malingering, Neuropsychology, Academies and Institutes, Humans, Motivation, Neuropsychological Tests, United States
5.
Dev Neuropsychol ; 45(7-8): 431-434, 2020 12 18.
Article in English | MEDLINE | ID: mdl-33140668
6.
Appl Neuropsychol Adult ; 27(4): 364-375, 2020.
Article in English | MEDLINE | ID: mdl-30773042

ABSTRACT

One of the basic tasks performed by a neuropsychologist is to identify the difference between current performance and premorbid expected performance. Baseline expected performance was developed for Intellectually Impaired (n = 21), Developmentally Delayed (n = 40), Attention Deficit Disorder (n = 98), Learning Disability (n = 42), and "Normal" (n = 75) groups, along with a demographically corrected prediction of premorbid functioning and a word-reading-based prediction of premorbid functioning. We utilized a subset of this data pool for development (n = 107) and validation (n = 108) of premorbid functioning estimates. Findings show that a combination of the three methods (baseline, demographic, and reading) was superior to any individual method. The effect size (Cohen's d) calculations show that differences in the prediction of domain-level performances were small and likely not clinically meaningful, indicating that the premorbid estimates would be usable as predictions of expected performance at the domain level. However, the motor domains were not well predicted.
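A minimal sketch of the kind of comparison described here: combining three premorbid estimates by simple averaging and expressing prediction error as Cohen's d. The simulated scores, the equal-weight combination, and the normal error model are assumptions for illustration only, not the study's estimation equations.

```python
# Minimal sketch (simulated standard scores): average three premorbid estimates
# (baseline, demographic, reading-based) and compare predicted vs. observed
# domain scores with Cohen's d using a pooled standard deviation.
import numpy as np

rng = np.random.default_rng(1)
n = 108
true_premorbid = rng.normal(100, 15, n)
baseline    = true_premorbid + rng.normal(0, 7, n)     # hypothetical baseline-expected estimate
demographic = true_premorbid + rng.normal(0, 9, n)     # hypothetical demographic estimate
reading     = true_premorbid + rng.normal(0, 8, n)     # hypothetical word-reading estimate
combined = (baseline + demographic + reading) / 3.0

observed = true_premorbid + rng.normal(-1, 10, n)      # hypothetical observed domain scores

def cohens_d(a, b):
    """Cohen's d between two score arrays, pooled-SD denominator."""
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2.0)
    return (a.mean() - b.mean()) / pooled_sd

for name, est in [("baseline", baseline), ("demographic", demographic),
                  ("reading", reading), ("combined", combined)]:
    print(f"{name:11s} d vs. observed = {cohens_d(est, observed):+.2f}")
```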


Subjects
Cognitive Dysfunction/physiopathology, Neurodevelopmental Disorders/physiopathology, Psychometrics/standards, Adult, Attention Deficit Disorder with Hyperactivity/complications, Attention Deficit Disorder with Hyperactivity/physiopathology, Cognitive Dysfunction/etiology, Developmental Disabilities/complications, Developmental Disabilities/physiopathology, Female, Humans, Intellectual Disability/complications, Intellectual Disability/physiopathology, Learning, Learning Disabilities/complications, Learning Disabilities/physiopathology, Male, Neurodevelopmental Disorders/complications, Neuropsychological Tests, Psychometrics/methods, Reading, Reproducibility of Results
7.
Clin Neuropsychol ; 33(8): 1354-1372, 2019 11.
Article in English | MEDLINE | ID: mdl-31111775

ABSTRACT

Objective: Discrimination of patients passing vs. failing the Word Memory Test (WMT) by performance on 11 performance and symptom validity tests (PVTs, SVTs) from the Meyers Neuropsychological Battery (MNB) at per-test false positive cutoffs ranging from 0 to 15%. PVT and SVT intercorrelation in subgroups passing and failing the WMT, as well as the degree of skew of the individual PVTs and SVT in the pass/fail subgroups, were also analyzed. Method: In 255 clinical and forensic cases, 100 failed and 155 passed the WMT, for a base rate of invalid performance of 39.2%. Performance was contrasted on 10 PVTs and 1 SVT from the MNB, using per-test false positive rates of 0.0%, 3.3%, 5.0%, 10.0%, and 15.0% in discriminating WMT pass and WMT fail groups. These two WMT groups were also contrasted using the 10 PVTs and 1 SVT as continuous variables in a logistic regression. Results: The per-PVT false positive rate of 10% yielded the highest WMT pass/fail classification accuracy, and more closely approximated the classification obtained by logistic regression than other cut scores. PVT and SVT correlations were higher in cases failing the WMT, and data were more highly skewed in those passing the WMT. Conclusions: The optimal per-PVT and SVT cutoff is at a false positive rate of 10%, with failure of ≥3 PVTs/SVTs out of 11 yielding sensitivity of 61.0% and specificity of 90.3%. PVTs with the best classification had the greatest degree of skew in the WMT pass subgroup.
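A minimal sketch of the aggregation rule discussed above: set each indicator's cutoff at a 10% per-test false-positive rate in a credible group, then flag examinees who fail three or more of the eleven indicators. The simulated T-like scores and group separations are illustrative assumptions, not MNB data.

```python
# Minimal sketch (simulated scores): per-test cutoffs fixed at a 10% false-positive
# rate in the credible group, with invalidity flagged at >= 3 of 11 failures.
import numpy as np

rng = np.random.default_rng(2)
n_pass, n_fail, n_tests = 155, 100, 11
credible    = rng.normal(50, 10, (n_pass, n_tests))    # hypothetical WMT-pass group
noncredible = rng.normal(41, 10, (n_fail, n_tests))    # hypothetical WMT-fail group

cutoffs = np.percentile(credible, 10, axis=0)          # 10% per-test false-positive rate

def failures(scores):
    return (scores < cutoffs).sum(axis=1)

flag_credible    = failures(credible) >= 3
flag_noncredible = failures(noncredible) >= 3
print(f"specificity = {1 - flag_credible.mean():.3f}")
print(f"sensitivity = {flag_noncredible.mean():.3f}")
```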


Subjects
Neuropsychological Tests/standards, Research Design, Adult, Female, Humans, Male, Reproducibility of Results
8.
Clin Neuropsychol ; 31(8): 1401-1405, 2017 11.
Article in English | MEDLINE | ID: mdl-28994350

ABSTRACT

We reply to Nichols' (2017) critique of our commentary on the MMPI-2/MMPI-2-RF Symptom Validity Scale (FBS/FBS-r) as a measure of symptom exaggeration versus a measure of litigation response syndrome (LRS). Nichols claims that we misrepresented the thrust of the original paper he co-authored with Gass; namely, that they did not represent the FBS/FBS-r as measures of LRS but rather intended to convey that the FBS/FBS-r were indeterminate as to whether the scales measured LRS or measured symptom exaggeration. Our original commentary offered statistical support from the published literature that (1) FBS/FBS-r were associated with performance validity test (PVT) failure, establishing the scales as measures of symptom exaggeration, and (2) persons in litigation who passed PVTs did not produce clinically significant elevations on the scales, contradicting the claim that FBS/FBS-r were measures of LRS. In the present commentary, we draw a distinction between the psychometric data we present supporting the validity of FBS/FBS-r and the conceptual, non-statistical arguments presented by Nichols, who does not refute our original empirically based conclusions.


Subjects
MMPI, Malingering, Humans, Male, Neuropsychological Tests, Psychometrics, Reproducibility of Results
9.
Clin Neuropsychol ; 31(8): 1387-1395, 2017 11.
Article in English | MEDLINE | ID: mdl-28829224

ABSTRACT

OBJECTIVES: To address (1) whether there is empirical evidence for the contention of Nichols and Gass that the MMPI-2/MMPI-2-RF FBS/FBS-r Symptom Validity Scale is a measure of Litigation Response Syndrome (LRS), representing a credible set of responses and reactions of claimants to the experience of being in litigation, rather than a measure of non-credible symptom report, as the scale is typically used; and (2) to address their stated concerns about the validity of FBS/FBS-r meta-analytic results and the risk of false-positive elevations in persons with bona fide medical conditions. METHOD: Review of the published literature on the FBS/FBS-r, focusing in particular on associations between scores on this symptom validity test and scores on performance validity tests (PVTs), and on FBS/FBS-r score elevations in patients with genuine neurologic, psychiatric, and medical problems. RESULTS: (1) Several investigations show significant associations between FBS/FBS-r scores and PVTs measuring non-credible performance; (2) litigants who pass PVTs do not produce significant elevations on FBS/FBS-r; (3) non-litigating medical patients (bariatric surgery candidates, persons with sleep disorders, and patients with severe traumatic brain injury) who have multiple physical, emotional, and cognitive symptoms do not produce significant elevations on FBS/FBS-r. Two meta-analytic studies show large effect sizes for FBS/FBS-r of similar magnitude. CONCLUSIONS: FBS/FBS-r measures non-credible symptom report rather than the legitimate experience of litigation stress. Importantly, the absence of significant FBS/FBS-r elevations in litigants who pass PVTs, demonstrating credible performance, directly contradicts the contention of Nichols and Gass that the scale measures LRS. These data, the meta-analytic publications, and recent test use surveys support the admissibility of FBS/FBS-r under both Daubert and the older Frye criteria.


Subjects
Malingering, Sleep Wake Disorders, Humans, MMPI, Neuropsychological Tests, Reproducibility of Results
11.
Arch Clin Neuropsychol ; 30(7): 611-33, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26152291

ABSTRACT

Researchers who have been responsible for developing test batteries have argued that competent practice requires the use of a "fixed battery" that is co-normed. We tested this assumption with three normative systems: co-normed norms, meta-regressed norms, and a hybrid of the two methods. We analyzed two samples: 330 referred patients and 99 undergraduate volunteers. The T scores generated for referred patients using the three systems were highly associated with one another and quite similar in magnitude, with Overall Test Battery Means (OTBMs) based on the co-normed, hybrid, and meta-regressed scores of 43.8, 45.0, and 43.9, respectively. For volunteers, the OTBMs equaled 47.4, 47.5, and 47.1, respectively. The correlations among these OTBMs across systems were all above .90. Differences among OTBMs across normative systems were small and not clinically meaningful. We conclude that co-norming is not necessary for competent clinical practice.
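A minimal sketch of the comparison described here: compute each examinee's OTBM under two normative systems and check how closely the resulting means and rank orderings agree. The T scores and their correlation structure are simulated assumptions, not the study's samples.

```python
# Minimal sketch (simulated T scores): OTBM = mean T score across a battery,
# computed under two different normative systems, then compared on mean level
# and correlation across examinees.
import numpy as np

rng = np.random.default_rng(3)
n_examinees, n_tests = 330, 20
ability = rng.normal(44, 8, (n_examinees, 1))                    # hypothetical latent level
conormed       = ability + rng.normal(0.0, 4, (n_examinees, n_tests))
meta_regressed = ability + rng.normal(0.3, 4, (n_examinees, n_tests))

otbm_conormed = conormed.mean(axis=1)
otbm_meta     = meta_regressed.mean(axis=1)
print(f"mean OTBM (co-normed)      = {otbm_conormed.mean():.1f}")
print(f"mean OTBM (meta-regressed) = {otbm_meta.mean():.1f}")
print(f"r between systems          = {np.corrcoef(otbm_conormed, otbm_meta)[0, 1]:.2f}")
```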


Subjects
Cognition Disorders/diagnosis, Neuropsychological Tests/standards, Adolescent, Adult, Aged, Analysis of Variance, Databases, Factual/statistics & numerical data, Female, Humans, Male, Middle Aged, Reference Values, Statistics as Topic, Young Adult
12.
Appl Neuropsychol Adult ; 22(3): 233-40, 2015.
Article in English | MEDLINE | ID: mdl-25371976

ABSTRACT

This study utilized logistic regression to determine whether performance patterns on Concussion Vital Signs (CVS) could differentiate known groups with either genuine or feigned performance. For the embedded-measure development group (n = 174), clinical patients and undergraduate students categorized as feigning obtained significantly lower scores on the overall test battery mean for the CVS, the Shipley-2 composite score, and California Verbal Learning Test-Second Edition subtests than did genuinely performing individuals. The final full model of 3 predictor variables (Verbal Memory immediate hits, Verbal Memory immediate correct passes, and Stroop Test complex reaction time correct) was significant and correctly classified individuals into their known group 83% of the time (sensitivity = .65; specificity = .97) in a mixed sample of young-adult clinical cases and simulators. The CVS logistic regression function was applied to a separate undergraduate college group (n = 378) that was asked to perform genuinely and identified 5% as having possibly feigned performance, indicating a low false-positive rate. The failure rate was 11% and 16% at baseline cognitive testing in samples of high school and college athletes, respectively. These findings have particular relevance given the increasing use of computerized test batteries for baseline cognitive testing and return-to-play decisions after concussion.
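The known-groups logistic regression approach summarized above can be sketched as follows; the three simulated predictors, group sizes, and score distributions are illustrative assumptions rather than CVS data.

```python
# Minimal sketch (simulated features): fit a 3-predictor logistic regression to
# separate genuine from feigned performance and report classification accuracy,
# sensitivity, and specificity for the known groups.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n_genuine, n_feigned = 120, 54
X_genuine = rng.normal([40.0, 35.0, 30.0], [5.0, 5.0, 6.0], (n_genuine, 3))
X_feigned = rng.normal([31.0, 26.0, 22.0], [6.0, 6.0, 7.0], (n_feigned, 3))
X = np.vstack([X_genuine, X_feigned])
y = np.r_[np.zeros(n_genuine), np.ones(n_feigned)]      # 1 = feigned

model = LogisticRegression().fit(X, y)
pred = model.predict(X)
print(f"accuracy    = {(pred == y).mean():.2f}")
print(f"sensitivity = {pred[y == 1].mean():.2f}")
print(f"specificity = {1 - pred[y == 0].mean():.2f}")
```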


Subjects
Brain Concussion/diagnosis, Brain Concussion/psychology, Logistic Models, Malingering/diagnosis, Vital Signs/physiology, Adolescent, Adult, Brain Concussion/physiopathology, Female, Humans, Male, Memory, Short-Term/physiology, Neuropsychological Tests, Students/psychology, Universities, Verbal Learning/physiology, Young Adult
13.
Behav Sci Law ; 31(6): 686-701, 2013.
Article in English | MEDLINE | ID: mdl-24105915

ABSTRACT

The diagnosis and evaluation of mild traumatic brain injury (mTBI) is reviewed from the perspective of meta-analyses of neuropsychological outcome, showing full recovery from a single, uncomplicated mTBI by 90 days post-trauma. Persons with a history of complicated mTBI, characterized by day-of-injury computed tomography or magnetic resonance imaging abnormalities, and those who have suffered prior mTBIs may or may not show evidence of complete recovery similar to that experienced by persons suffering a single, uncomplicated mTBI. Persistent post-concussion syndrome (PCS) is considered a somatoform presentation, influenced by the non-specificity of PCS symptoms, which commonly occur in non-TBI samples and co-vary as a function of general life stress and psychological factors including symptom expectation, depression, and anxiety. A model is presented for forensic evaluation of the individual mTBI case, which involves an open-ended interview, followed by a structured interview, record review, and detailed neuropsychological testing. Differential diagnosis includes consideration of other neurologic and psychiatric disorders, symptom expectation, diagnosis threat, developmental disorders, and malingering.


Subjects
Brain Injuries/diagnosis, Neuropsychology, Brain Injuries/physiopathology, Brain Injuries/rehabilitation, Diagnosis, Differential, Forensic Medicine, Humans, Meta-Analysis as Topic, Treatment Outcome
14.
Arch Clin Neuropsychol ; 28(7): 640-8, 2013 Nov.
Article in English | MEDLINE | ID: mdl-23832096

ABSTRACT

This study examined intra-individual variability in a large sample (n = 629) of individuals with a history of mild traumatic brain injury (mTBI) or TBI referred for neuropsychological evaluation. Variability was assessed using the overall test battery mean standard deviation (OTBM SD). We found a negative linear relation between the OTBM and the OTBM SD (r = -.672) in this sample with a history of neurologic pathology, indicating that variability is inversely related to cognitive performance, contrary to what is observed in most normative data. Analyses revealed main effects for OTBM and OTBM SD across three TBI severity groups: loss of consciousness (LOC) <1 h, LOC 1 h-6 days, and LOC >6 days. These effects were found for both a valid performance group (no failed embedded validity measures; n = 504) and an invalid performance group (failed one or more embedded validity measures; n = 125). These findings support the conclusions that cognitive intra-individual variability is increased uniquely by both neuropathology and suboptimal effort, that there is a dose-response relationship between neuropathology and cognitive variability, and that intra-individual variability may have utility as a clinical index of both.
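A minimal sketch of the two indices named above: the OTBM as the mean of an examinee's subtest scores and the OTBM SD as the standard deviation of that same profile, with a check on the sign of their correlation. The simulated dependence of scatter on ability level is an assumption for illustration.

```python
# Minimal sketch (simulated T scores): compute each examinee's OTBM (profile
# mean) and OTBM SD (profile standard deviation), then correlate the two.
import numpy as np

rng = np.random.default_rng(5)
n_examinees, n_tests = 629, 15
ability = rng.normal(42, 9, n_examinees)
scatter = np.clip(14 - 0.15 * ability, 4, None)        # assumed: lower levels scatter more
scores = ability[:, None] + rng.normal(0, 1, (n_examinees, n_tests)) * scatter[:, None]

otbm = scores.mean(axis=1)
otbm_sd = scores.std(axis=1, ddof=1)
print(f"r(OTBM, OTBM SD) = {np.corrcoef(otbm, otbm_sd)[0, 1]:.2f}")   # expected to be negative
```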


Subjects
Brain Injuries/psychology, Cognition/physiology, Disability Evaluation, Individuality, Adult, Female, Humans, Male, Middle Aged, Neuropsychological Tests, Severity of Illness Index
15.
Clin Neuropsychol ; 27(2): 215-37, 2013.
Article in English | MEDLINE | ID: mdl-23414416

ABSTRACT

Bigler et al. (2013, The Clinical Neuropsychologist) contend that weak methodology and the poor quality of the studies comprising our recent meta-analysis led us to miss detecting a subgroup of mild traumatic brain injury (mTBI) characterized by persisting symptomatic complaints and positive biomarkers for neurological damage. Our computation of non-significant Q, tau², and I² statistics contradicts the existence of a subgroup of mTBI with poor outcome, or variation in effect size as a function of the quality of research design. Consistent with this conclusion, the largest single contributor to our meta-analysis, Dikmen, Machamer, Winn, and Temkin (1995, Neuropsychology, 9, 80), yielded an effect size of -0.02, smaller than our overall effect size of -0.07, despite using the most liberal definition of mTBI: loss of consciousness of less than 1 hour, with no exclusion of subjects who had positive CT scans. The evidence is weak for biomarkers of mTBI, such as diffusion tensor imaging, and for demonstrable neuropathology in uncomplicated mTBI. Postconcussive symptoms and reduced neuropsychological test scores are not specific to mTBI but can result from pre-existing psychosocial and psychiatric problems, expectancy effects, and diagnosis threat. Moreover, neuropsychological impairment is seen in a variety of primary psychiatric disorders, which themselves are predictive of persistent complaints following mTBI. We urge the use of prospective studies with orthopedic trauma controls in future investigations of mTBI to control for these confounding factors.
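The heterogeneity statistics cited above (Q, tau², and I²) can be computed from study-level effect sizes and sampling variances as in the minimal sketch below; the five effect sizes and variances are made-up values for illustration, not the studies in the meta-analysis.

```python
# Minimal sketch: Cochran's Q, the DerSimonian-Laird tau^2, and I^2 for a small
# set of hypothetical study effect sizes and sampling variances.
import numpy as np

d = np.array([-0.02, -0.10, 0.05, -0.12, -0.07])     # hypothetical study effect sizes
v = np.array([0.010, 0.020, 0.015, 0.030, 0.012])    # hypothetical sampling variances

w = 1.0 / v                                          # inverse-variance weights
d_fixed = np.sum(w * d) / np.sum(w)                  # fixed-effect pooled estimate
Q = np.sum(w * (d - d_fixed) ** 2)                   # Cochran's Q
df = len(d) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / c)                        # DerSimonian-Laird between-study variance
I2 = max(0.0, (Q - df) / Q) * 100                    # % of variability due to heterogeneity

print(f"pooled d = {d_fixed:.3f}, Q = {Q:.2f} (df = {df}), tau^2 = {tau2:.4f}, I^2 = {I2:.1f}%")
```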


Subjects
Brain Injuries/complications, Cognition Disorders/diagnosis, Cognition Disorders/etiology, Memory Disorders/diagnosis, Neuropsychological Tests, Female, Humans, Male
16.
Clin Neuropsychol ; 26(2): 197-213, 2012.
Article in English | MEDLINE | ID: mdl-22256957

ABSTRACT

Ruff et al. (1994; Ruff, Camenzuli, & Mueller, 1996) hypothesized that some mild traumatic brain injury (MTBI) patients will suffer chronic symptomatic complaints and impairments, identifying this subgroup as the "miserable minority." However, several meta-analyses of the effects of MTBI have been published (e.g., Rohling et al., 2011), showing no significant cognitive impairments following recovery. Recently Pertab, James, and Bigler (2009) suggested that meta-analysis might be obscuring impairments in some MTBI patients, presenting a hypothetical score distribution to illustrate their claim. Our statistical analyses of their hypothetical figure and of several other potential distributions containing an impaired subgroup that varied as a function of effect size and base rate of occurrence did not support the existence of a miserable minority that is obscured in meta-analyses by the larger group of MTBI patients experiencing full recovery. Indeed, given our recently published MTBI effect size of -0.07 (Rohling et al., 2011), for an impaired subgroup to exist, the level of impairment would have to be just under a tenth of a standard deviation, equivalent to a WMS-IV Index score value of 1 point. At effect sizes this small, any cut score chosen on a test to diagnose patients would result in more false positives than true positives. This greatly increases the risk of misdiagnosis in persons who are susceptible to misattribution, expectancy effects, and "diagnosis threat," thereby increasing the risk of iatrogenic illness.
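The arithmetic behind the "one index point" statement above is simply the effect size scaled by the index-score standard deviation; a short sketch (the SD of 15 is the standard index metric, assumed here):

```python
# Minimal sketch: convert the reported effect size to index-score points on a
# standard-score metric (mean 100, SD 15).
effect_size_d = -0.07
index_sd = 15
print(f"expected shift = {effect_size_d * index_sd:.2f} index points")   # roughly one index point
```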


Subjects
Brain Injuries/complications, Cognition Disorders/etiology, Brain Injuries/psychology, Cognition Disorders/psychology, Humans, Meta-Analysis as Topic, Neuropsychological Tests
17.
Psychol Bull ; 137(4): 708-12; authors reply 713-5, 2011 Jul.
Article in English | MEDLINE | ID: mdl-21707131

ABSTRACT

In the May 2010 issue of Psychological Bulletin, R. E. McGrath, M. Mitchell, B. H. Kim, and L. Hough published an article entitled "Evidence for Response Bias as a Source of Error Variance in Applied Assessment" (pp. 450-470). They argued that response bias indicators used in a variety of settings typically have insufficient data to support such use in everyday clinical practice. Furthermore, they claimed that despite 100 years of research into the use of response bias indicators, "a sufficient justification for [their] use… in applied settings remains elusive" (p. 450). We disagree with McGrath et al.'s conclusions. In fact, we assert that the relevant and voluminous literature that has addressed the issues of response bias substantiates the validity of these indicators. In addition, we believe that response bias measures should be used in clinical and research settings on a regular basis. Finally, the empirical evidence for the use of response bias measures is strongest in clinical neuropsychology. We argue that McGrath et al.'s erroneous perspective on response bias measures is a result of 3 errors in their research methodology: (a) inclusion criteria for relevant studies that were too narrow; (b) errors in interpreting results of the empirical research they did include; and (c) evidence of a confirmatory bias in selectively citing the literature, as evidence of moderation appears to have been overlooked. Finally, consulting experts in the field, who might have highlighted these errors prior to publication, might have prevented these critiques during the review process.


Subjects
Psychological Tests/statistics & numerical data, Psychology/statistics & numerical data, Psychometrics/statistics & numerical data, Humans
18.
Clin Neuropsychol ; 25(4): 608-23, 2011 May.
Article in English | MEDLINE | ID: mdl-21512956

ABSTRACT

The meta-analytic findings of Binder et al. (1997) and Frencham et al. (2005) showed that the neuropsychological effect of mild traumatic brain injury (mTBI) was negligible in adults by 3 months post injury. Pertab et al. (2009) reported that verbal paired associates, coding tasks, and digit span yielded significant differences between mTBI and control groups. We re-analyzed data from the 25 studies used in the prior meta-analyses, correcting statistical and methodological limitations of previous efforts, and analyzed the chronicity data by discrete epochs. At 3 months post injury, the effect size of -0.07 was not statistically different from zero and was similar to effect sizes found in several other meta-analyses (Belanger et al., 2005; Schretlen & Shapiro, 2003). The effect size at 7 days post injury was -0.39. The effect of mTBI immediately post injury was largest on the Verbal and Visual Memory domains. However, by 3 months post injury all domains had improved to show non-significant effect sizes. These findings indicate that mTBI has an initial small effect on neuropsychological functioning that dissipates quickly. The evidence of recovery in the present meta-analysis is consistent with the previous conclusions of both Binder et al. and Frencham et al. Our findings may not apply to people with a history of multiple concussions or complicated mTBIs.
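A minimal sketch of the epoch-wise pooling described above, using inverse-variance weights within each post-injury epoch; the study-level effect sizes and variances are hypothetical placeholders, not the 25 studies analyzed.

```python
# Minimal sketch (made-up study-level values): pool effect sizes within discrete
# post-injury epochs using inverse-variance (fixed-effect) weights.
import numpy as np

# (epoch, effect size d, sampling variance) -- hypothetical entries
studies = [("<= 7 days", -0.45, 0.040), ("<= 7 days", -0.33, 0.030),
           (">= 90 days", -0.05, 0.020), (">= 90 days", -0.09, 0.025)]

for epoch in ("<= 7 days", ">= 90 days"):
    d = np.array([x[1] for x in studies if x[0] == epoch])
    v = np.array([x[2] for x in studies if x[0] == epoch])
    w = 1.0 / v
    pooled = np.sum(w * d) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    print(f"{epoch}: pooled d = {pooled:+.2f} (SE = {se:.2f})")
```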


Subjects
Brain Injuries/complications, Cognition Disorders/diagnosis, Cognition Disorders/etiology, Memory Disorders/diagnosis, Neuropsychological Tests, Adult, Female, Humans, Male, Memory Disorders/etiology, Regression Analysis, Time Factors, Treatment Outcome, Young Adult
19.
Clin Neuropsychol ; 24(1): 119-36, 2010 Jan.
Article in English | MEDLINE | ID: mdl-20029718

ABSTRACT

Several studies have reported that traumatic brain injury (TBI) has a smaller effect on neuropsychological test scores, in contrast to the large effect of poor effort on test performance. Consequently, many authors have concluded that effort needs to be measured routinely and that it is necessary to control for poor effort when measuring the effects of brain disease or injury on performance. Recently, however, Bowden, Shores, and Mathias (2006) have challenged these notions. They argued that the Immediate Recognition subtest of the Word Memory Test (Green & Flaro, 2003), an effort measure, is another verbal memory test rather than a measure of cognitive effort. In this study we re-examine the data from Bowden et al. (2006) and Green, Rohling, Lees-Haley, and Allen (2001) to identify differences between the two studies that might account for their contradictory conclusions. In both sets of data, reanalysis showed that effort explains approximately five times more of the variance in composite neuropsychological test scores than TBI severity does. Importantly, scores on the Word Memory Test-Immediate Recognition (WMT-IR) were not correlated with measures of TBI severity and did not correlate with major variables known to measure ability (e.g., years of education). These findings challenge the conclusions offered by Bowden and colleagues (2006).
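The variance-explained comparison described above can be sketched as a pair of simple R² calculations; the simulated severity and effort variables and their weights on the composite are assumptions chosen only to illustrate the comparison, not the reanalyzed datasets.

```python
# Minimal sketch (simulated data): compare the proportion of variance in a
# composite neuropsychological score explained by TBI severity versus effort.
import numpy as np

rng = np.random.default_rng(6)
n = 300
severity = rng.normal(0, 1, n)                   # hypothetical TBI-severity index
effort   = rng.normal(0, 1, n)                   # hypothetical effort score
composite = 50 - 1.0 * severity + 5.0 * effort + rng.normal(0, 8, n)

def r_squared(x, y):
    return np.corrcoef(x, y)[0, 1] ** 2

r2_sev, r2_eff = r_squared(severity, composite), r_squared(effort, composite)
print(f"R^2 severity = {r2_sev:.3f}, R^2 effort = {r2_eff:.3f}, ratio = {r2_eff / r2_sev:.1f}x")
```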


Subjects
Brain Injuries, Neuropsychological Tests/statistics & numerical data, Trauma Severity Indices, Adult, Age Factors, Brain Injuries/diagnosis, Brain Injuries/physiopathology, Brain Injuries/psychology, Cognition/physiology, Disability Evaluation, Educational Status, Female, Humans, Male, Memory/physiology, Middle Aged, Reproducibility of Results
20.
Neuropsychology ; 23(1): 20-39, 2009 Jan.
Article in English | MEDLINE | ID: mdl-19210030

ABSTRACT

The present study provides a meta-analysis of cognitive rehabilitation literature (K = 115, N = 2,014) that was originally reviewed by K. D. Cicerone et al. (2000, 2005) for the purpose of providing evidence-based practice guidelines for persons with acquired brain injury. The analysis yielded a small treatment effect size (ES = .30, d+ statistic) directly attributable to cognitive rehabilitation. A larger treatment effect (ES = .71) was found for single-group pretest to posttest outcomes; however, modest improvement was observed for nontreatment control groups as well (ES = .41). Correction for this effect, which was not attributable to cognitive treatments, resulted in the small, but significant, overall estimate. Treatment effects were moderated by cognitive domain treated, time postinjury, type of brain injury, and age. The meta-analysis revealed sufficient evidence for the effectiveness of attention training after traumatic brain injury and of language and visuospatial training for aphasia and neglect syndromes after stroke. Results provide important quantitative documentation of effective treatments, complementing recent systematic reviews. Findings also highlight gaps in the scientific evidence supporting cognitive rehabilitation, thereby indicating future research directions.
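The control-group correction described above amounts to removing the untreated groups' pre-post change from the treated groups' pre-post change; a minimal sketch using the effect sizes quoted in the abstract (the simple subtraction is an illustrative simplification of the meta-analytic adjustment):

```python
# Minimal sketch: approximate the treatment effect attributable to cognitive
# rehabilitation by subtracting control-group pre-post change from treated-group
# pre-post change (values from the abstract; subtraction is a simplification).
es_treated_prepost = 0.71   # single-group pretest-to-posttest effect, treated groups
es_control_prepost = 0.41   # pretest-to-posttest change, nontreatment control groups
es_attributable = es_treated_prepost - es_control_prepost
print(f"effect attributable to treatment ~= {es_attributable:.2f}")   # about .30, as reported
```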


Subjects
Brain Injuries/complications, Cognition Disorders/etiology, Cognition Disorders/rehabilitation, Outcome Assessment, Health Care, Humans, Neuropsychological Tests, PubMed/statistics & numerical data