1. Article in English | MEDLINE | ID: mdl-38085328

ABSTRACT

The use of Structured Diagnostic Assessments (SDAs) is a solution to unreliability in psychiatry and the gold standard for diagnosis. However, except for studies conducted between the 1950s and 1970s, reliability without the use of SDAs, that is, with Non-Structured Diagnostic Assessments (NSDAs), is seldom tested, especially in non-WEIRD (Western, Educated, Industrialized, Rich, and Democratic) countries. We aimed to measure inter-examiner reliability of NSDAs for psychiatric disorders. We compared diagnostic agreement after a change of clinician in an academic outpatient setting. We used inter-rater Kappa across eight diagnostic groups: Depression (DD: F32, F33), Anxiety-Related Disorders (ARD: F40-F49, F50-F59), Personality Disorders (PD: F60-F69), Bipolar Disorder (BD: F30, F31, F34.0, F38.1), Organic Mental Disorders (Org: F00-F09), Neurodevelopmental Disorders (ND: F70-F99), and Schizophrenia Spectrum Disorders (SSD: F20-F29). Cohen's Kappa measured agreement between evaluations, and Bhapkar's test assessed whether any diagnostic group had a higher tendency to change after a new diagnostic assessment. We analyzed 739 reevaluation pairs from 99 subjects who attended IPUB's outpatient clinic. Overall inter-rater Kappa was moderate, and none of the groups showed a different tendency to change. NSDA evaluation was moderately reliable, but the absence of some prevalent diagnostic hypotheses within the reevaluation pairs raised concerns about NSDA sensitivity to certain diagnoses. Diagnostic momentum bias (that is, a tendency to keep the last diagnosis observed) may have inflated the observed agreement. This research was approved by IPUB's ethics committee and registered under CAAE 33603220.1.0000.5263 and UTN U1111-1260-1212.
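
As a rough illustration of the statistics named above, the sketch below (invented data, not the study's own code or results) computes Cohen's Kappa for agreement between a first and a second diagnostic evaluation and applies Bhapkar's test of marginal homogeneity to ask whether any diagnostic group changes more often than the others. It assumes Python with pandas, scikit-learn, and statsmodels; the group labels and paired diagnoses are hypothetical.

    import pandas as pd
    from sklearn.metrics import cohen_kappa_score
    from statsmodels.stats.contingency_tables import SquareTable

    # Hypothetical paired diagnoses (first vs. second evaluation), coded by group.
    groups = ["DD", "ARD", "PD", "BD", "SSD"]
    first  = ["DD", "ARD", "DD", "BD", "SSD", "ARD", "DD", "PD", "BD", "ARD"]
    second = ["DD", "DD",  "DD", "BD", "SSD", "ARD", "ARD", "PD", "BD", "ARD"]

    # Chance-corrected agreement between the two evaluations.
    print("Cohen's kappa:", round(cohen_kappa_score(first, second, labels=groups), 2))

    # Square contingency table (first x second evaluation), padded with zeros so
    # rows and columns share the same diagnostic categories.
    table = (pd.crosstab(pd.Series(first, name="first"), pd.Series(second, name="second"))
               .reindex(index=groups, columns=groups, fill_value=0))

    # Bhapkar's test of marginal homogeneity: a low p-value would suggest that at
    # least one diagnostic group has a different tendency to change.
    print(SquareTable(table, shift_zeros=True).homogeneity(method="bhapkar"))

In a sketch like this, a moderate Kappa together with a non-significant Bhapkar result would mirror the pattern reported in the abstract: raters agree beyond chance, and no single group drifts more than the others.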
