1.
J Affect Disord; 256: 143-147, 2019 Sep 1.
Article in English | MEDLINE | ID: mdl-31176186

ABSTRACT

The International Society for CNS Clinical Trials and Methodology convened an expert Working Group that assembled consistency/inconsistency flags for the Montgomery-Asberg Depression Rating Scale (MADRS). Twenty-two flags were identified, seven of which are considered strong flags suggesting that a thorough review of the rating is warranted. The flags were applied to assessments drawn from the NEWMEDS data repository: almost 65% of ratings raised at least one inconsistency flag, and 22% raised two or more. Applying the flags to clinical ratings may improve the reliability of ratings and the validity of trials.


Subjects
Depression/diagnosis, Psychiatric Status Rating Scales/standards, Adult, Female, Humans, Male, Middle Aged, Psychometrics, Reproducibility of Results
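
The abstract above describes flag-based quality checks, but the record does not reproduce the 22 MADRS flags themselves. As a minimal sketch of how such consistency checks might be applied to an individual rating, assuming hypothetical flag rules (the rules and thresholds below are illustrative stand-ins, not the Working Group's actual flags):

```python
# Illustrative MADRS consistency-flag checker. The two rules below are
# hypothetical examples; the actual 22 flags are defined in the paper.

def raise_flags(scores: dict[int, int]) -> list[str]:
    """Return inconsistency flags raised by one MADRS rating.

    `scores` maps MADRS item number (1-10) to its 0-6 score:
    1 apparent sadness, 2 reported sadness, 3 inner tension,
    4 reduced sleep, 5 reduced appetite, 6 concentration difficulties,
    7 lassitude, 8 inability to feel, 9 pessimistic thoughts,
    10 suicidal thoughts.
    """
    flags = []
    # Hypothetical rule: observed and reported sadness rarely diverge widely.
    if abs(scores[1] - scores[2]) >= 4:
        flags.append("apparent vs. reported sadness differ by >= 4")
    # Hypothetical rule: high suicidality with zero pessimism is implausible.
    if scores[10] >= 4 and scores[9] == 0:
        flags.append("high suicidal thoughts but no pessimistic thoughts")
    return flags

rating = {1: 5, 2: 1, 3: 2, 4: 3, 5: 1, 6: 2, 7: 3, 8: 2, 9: 0, 10: 4}
print(raise_flags(rating))  # both hypothetical flags fire for this rating
```

A real implementation would encode all 22 published flags and mark the seven strong ones as triggering mandatory review of the rating.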
2.
J Clin Psychopharmacol; 29(1): 82-85, 2009 Feb.
Article in English | MEDLINE | ID: mdl-19142114

ABSTRACT

BACKGROUND: Good interrater reliability is essential to minimize error variance and improve study power. Raters may differ in scoring the same patient because of information variance (different information is obtained because different questions are asked), observation variance (the same information is obtained, but raters differ in what they notice and remember), interpretation variance (differences in the significance attached to what is observed), criterion variance (different criteria are used to score items), and subject variance (true differences in the subject). We videotaped and transcribed 30 pairs of interviews to examine the most common sources of rater unreliability.

METHOD: Thirty patients with depression were independently interviewed by 2 different raters on the same day. Raters provided rationales for their scoring, and independent assessors reviewed the rationales, the interview transcripts, and the videotapes to code the main reason for each discrepancy. One third of the interviews were conducted by raters who had not administered the Hamilton Depression Rating Scale before; one third by raters who were experienced but not calibrated; and one third by experienced and calibrated raters.

RESULTS: Experienced and calibrated raters had the highest interrater reliability (intraclass correlation [ICC], r = 0.93), followed by inexperienced raters (r = 0.77) and experienced but uncalibrated raters (r = 0.55). The most common reason for disagreement was interpretation variance (39%), followed by information variance (30%), criterion variance (27%), and observation variance (4%). Experienced and calibrated raters had significantly less criterion variance than the other cohorts (P = 0.001).

CONCLUSIONS: Reasons for disagreement varied by level of experience and calibration. Experienced but uncalibrated raters should focus on establishing common conventions, whereas experienced and calibrated raters should focus on fine-tuning judgment calls at different symptom thresholds. Calibration training appears to improve reliability beyond experience alone; notably, experienced raters without cohort calibration had lower reliability than inexperienced raters.


Subjects
Depressive Disorder/diagnosis, Depressive Disorder/epidemiology, Psychiatric Status Rating Scales, Surveys and Questionnaires, Clinical Competence, Depressive Disorder/psychology, Humans, Interviews as Topic, Observer Variation, Professional Competence, Professional-Patient Relations, Psychometrics, Reproducibility of Results, Videotape Recording
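
The RESULTS above summarize agreement between rater cohorts with intraclass correlations. As a brief sketch of how an ICC can be computed from paired ratings (the Shrout and Fleiss ICC(2,1) formulation is an assumption here, since the record does not state which variant the study used, and the example scores below are made up):

```python
import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `ratings` is an (n_subjects, n_raters) array. This Shrout & Fleiss
    formulation is an assumption; the paper may have used another variant.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()
    ss_total = ((ratings - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)             # between-subjects mean square
    ms_c = ss_cols / (k - 1)             # between-raters mean square
    ms_e = ss_err / ((n - 1) * (k - 1))  # residual mean square
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Hypothetical paired HAM-D totals from two raters scoring the same patients.
pairs = np.array([[22, 21], [14, 16], [30, 28], [9, 11], [18, 18], [25, 22]])
print(f"ICC(2,1) = {icc2_1(pairs):.2f}")
```

Statistical packages provide equivalent ICC routines with confidence intervals; the point of the sketch is that the coefficient falls directly out of a two-way ANOVA decomposition of the subjects-by-raters score matrix.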