1.
J Pers Assess ; 101(4): 374-392, 2019.
Article in English | MEDLINE | ID: mdl-29723065

ABSTRACT

This article documents and discusses the importance of using a formal systematic approach to validating psychological tests. To illustrate, results are presented from a systematic review of the validity findings cited in the Rorschach Comprehensive System (CS; Exner, 2003) test manual, originally conducted during the manuscript review process for Mihura, Meyer, Dumitrascu, and Bombel's (2013) CS meta-analyses. Our review documents (a) the degree to which the CS test manual reports validity findings for each test variable, (b) whether these findings are publicly accessible or unpublished studies coordinated by the test developer, and (c) the presence and nature of data discrepancies between the CS test manual and the cited source. Implications are discussed for the CS in particular, the Rorschach more generally, and psychological tests more broadly. Notably, a history of intensive scrutiny of the Rorschach has resulted in more stringent standards applied to it, even though its scales have more published and supportive construct validity meta-analyses than any other psychological test. Calls are made for (a) a mechanism to correct data errors in the scientific literature, (b) guidelines for test developers' key unpublished studies, and (c) systematic reviews and meta-analyses to become standard practice for all psychological tests.


Subjects
Mental Disorders/diagnosis, Psychiatric Status Rating Scales/standards, Rorschach Test/standards, Humans, Psychological Tests, Psychometrics, Reproducibility of Results
2.
J Pers Assess ; 100(3): 233-249, 2018.
Article in English | MEDLINE | ID: mdl-28448159

ABSTRACT

Recently, psychologists have emphasized the response process-that is, the psychological operations and behaviors that lead to test scores-when designing psychological tests, interpreting their results, and refining their validity. To illustrate the centrality of the response process in construct validity and test interpretation, we provide a historical, conceptual, and empirical review of the main uses of the background white space of the Rorschach cards, called space reversal (SR) and space integration (SI) in the Rorschach Performance Assessment System. We show how SR and SI's unique response processes result in different interpretations, and that reviewing their literatures with these distinct interpretations in mind produces the expected patterns of convergent and discriminant validity. That is, SR was uniquely related to measures of oppositionality; SI was uniquely related to measures of cognitive complexity; and both SR and SI were related to measures of creativity. Our review further suggests that the Comprehensive System use of a single space code for all uses of white space likely led to its lack of meta-analytic support as a measure of oppositionality (Mihura, Meyer, Dumitrascu, & Bombel, 2013). We close by discussing the use of the response process to improve test interpretation, develop better measures, and advance the design of research.


Subjects
Mental Disorders/diagnosis, Rorschach Test/standards, Female, Humans, Male, Psychological Tests, Psychometrics, Reproducibility of Results, Research Design
3.
J Pers Assess ; 98(4): 343-50, 2016.
Article in English | MEDLINE | ID: mdl-27153466

ABSTRACT

We respond to Tibon Czopp and Zeligman's (2016) critique of our systematic reviews and meta-analyses of 65 Rorschach Comprehensive System (CS) variables published in Psychological Bulletin (2013). The authors endorsed our supportive findings but critiqued the same methodology when used for the 13 unsupported variables. Unfortunately, their commentary was based on significant misunderstandings of our meta-analytic method and results, such as thinking we used introspectively assessed criteria in classifying levels of support and reporting only a subset of our externally assessed criteria. We systematically address their arguments that our construct label and criterion variable choices were inaccurate and, therefore, meta-analytic validity for these 13 CS variables was artificially low. For example, the authors created new construct labels for these variables that they called "the customary CS interpretation," but did not describe their methodology nor provide evidence that their labels would result in better validity than ours. They cite studies they believe we should have included; we explain how these studies did not fit our inclusion criteria and that including them would have actually reduced the relevant CS variables' meta-analytic validity. Ultimately, criticisms alone cannot change meta-analytic support from negative to positive; Tibon Czopp and Zeligman would need to conduct their own construct validity meta-analyses.


Subjects
Meta-Analysis as Topic, Data Interpretation, Statistical
4.
Psychol Bull ; 141(1): 250-260, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25581288

ABSTRACT

Wood, Garb, Nezworski, Lilienfeld, and Duke (2015) found our systematic review and meta-analyses of 65 Rorschach variables to be accurate and unbiased, and hence removed their previous recommendation for a moratorium on the applied use of the Rorschach. However, Wood et al. (2015) hypothesized that publication bias would exist for 4 Rorschach variables. To test this hypothesis, they replicated our meta-analyses for these 4 variables and added unpublished dissertations to the pool of articles. In the process, they used procedures that contradicted their standards and recommendations for sound Rorschach research, which consistently led to significantly lower effect sizes. In reviewing their meta-analyses, we found numerous methodological errors, data errors, and omitted studies. In contrast to their strict requirements for interrater reliability in the Rorschach meta-analyses of other researchers, they did not report interrater reliability for any of their coding and classification decisions. In addition, many of their conclusions were based on a narrative review of individual studies and post hoc analyses rather than their meta-analytic findings. Finally, we challenge their sole use of dissertations to test publication bias because (a) they failed to reconcile their conclusion that publication bias was present with the analyses we conducted showing its absence, and (b) we found numerous problems with dissertation study quality. In short, one cannot rely on the findings or the conclusions reported in Wood et al.


Subjects
Mental Disorders/diagnosis, Rorschach Test, Humans
5.
Psychol Bull ; 139(3): 548-605, 2013 May.
Article in English | MEDLINE | ID: mdl-22925137

ABSTRACT

We systematically evaluated the peer-reviewed Rorschach validity literature for the 65 main variables in the popular Comprehensive System (CS). Across 53 meta-analyses examining variables against externally assessed criteria (e.g., observer ratings, psychiatric diagnosis), the mean validity was r = .27 (k = 770) as compared to r = .08 (k = 386) across 42 meta-analyses examining variables against introspectively assessed criteria (e.g., self-report). Using Hemphill's (2003) data-driven guidelines for interpreting the magnitude of assessment effect sizes with only externally assessed criteria, we found 13 variables had excellent support (r ≥ .33, p < .001, FSN > 50), 17 had good support (r ≥ .21, p < .05, FSN ≥ 10), 10 had modest support (p < .05 and either r ≥ .21, FSN < 10, or r = .15-.20, FSN ≥ 10), 13 had little (p < .05 and either r < .15 or FSN < 10) or no support (p > .05), and 12 had no construct-relevant validity studies. The variables with the strongest support were largely those that assess cognitive and perceptual processes (e.g., Perceptual-Thinking Index, Synthesized Response); those with the least support tended to be very rare (e.g., Color Projection) or some of the more recently developed scales (e.g., Egocentricity Index, Isolation Index). Our findings are less positive, more nuanced, and more inclusive than those reported in the CS test manual. We discuss study limitations and the implications for research and clinical practice, including the importance of using different methods in order to improve our understanding of people.
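The support-level thresholds quoted in this abstract are concrete enough to express as a small rule-based classifier. The sketch below is purely illustrative, not the authors' code: the function name, argument order, and the handling of values falling in gaps between the quoted thresholds (e.g., r between .20 and .21) are all assumptions.

```python
def classify_support(r, p, fsn):
    """Classify meta-analytic support for a variable from its mean
    validity coefficient r, p-value, and fail-safe N (FSN), using the
    thresholds quoted in the abstract above. Hypothetical sketch; the
    fallback for boundary gaps is an assumption."""
    if p >= .05:
        # "no support (p > .05)"
        return "no support"
    if r >= .33 and p < .001 and fsn > 50:
        # "excellent support (r >= .33, p < .001, FSN > 50)"
        return "excellent"
    if r >= .21 and fsn >= 10:
        # "good support (r >= .21, p < .05, FSN >= 10)"
        return "good"
    if (r >= .21 and fsn < 10) or (.15 <= r <= .20 and fsn >= 10):
        # "modest support" covers both quoted sub-conditions
        return "modest"
    # remaining significant results: "little" support
    return "little"
```

For example, a variable with r = .25, p = .01, and FSN = 20 would land in the "good" band, while the same r with FSN = 5 would drop to "modest".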


Subjects
Mental Disorders/diagnosis, Rorschach Test, Humans, Meta-Analysis as Topic, Psychometrics/instrumentation, Reproducibility of Results
6.
J Pers Assess ; 89 Suppl 1: S142-8, 2007.
Article in English | MEDLINE | ID: mdl-18039153

ABSTRACT

In this study, Rorschach data from a Romanian sample of 111 respondents were collected and analyzed in a first attempt to establish national norms. The protocols were collected by the author over a 5-year period (2002-2006). Interrater reliability statistics are presented for a sample of 20 cases, along with scores for the Rorschach Comprehensive System (CS; Exner, 1993). These results can be used for cross-cultural comparisons of the CS.


Subjects
Mental Health, Personality Assessment/statistics & numerical data, Personality, Research Design/standards, Rorschach Test/statistics & numerical data, Adult, Aged, Cultural Characteristics, Female, Humans, Male, Mental Disorders/diagnosis, Middle Aged, Psychometrics/statistics & numerical data, Reference Values, Reproducibility of Results, Retrospective Studies, Romania/epidemiology, Rural Population/statistics & numerical data, Socioeconomic Factors, Surveys and Questionnaires, Urban Population/statistics & numerical data