Measurement of Inter-Rater Reliability in Systematic Review
Hanyang Medical Reviews; : 44-49, 2015.
Article in Korean | WPRIM | ID: wpr-42474
ABSTRACT
Inter-rater reliability refers to the degree of agreement when a measurement is repeated under identical conditions by different raters. In systematic reviews, it can be used to evaluate agreement between authors during data extraction. Although a variety of methods exist for measuring inter-rater reliability, percent agreement and Cohen's kappa are the most common for categorical data. Percent agreement is the proportion of actually observed agreement. While simple to calculate, it has the limitation of not accounting for agreement that raters achieve by chance. Cohen's kappa is more robust than percent agreement because it adjusts the observed agreement for the agreement expected by chance. However, the interpretation of kappa can be misleading, because it is sensitive to the distribution of the data. It is therefore desirable to report both percent agreement and kappa in the review. If the value of kappa is very low despite high observed agreement, alternative statistics can be pursued.
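The two statistics described in the abstract can be sketched as follows. This is a minimal illustration, not code from the article; the function names, the inclusion/exclusion example data, and the two-author scenario are all hypothetical. Kappa is computed as (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and p_e is the agreement expected by chance from each rater's marginal label frequencies.

```python
# Minimal sketch (not from the article): percent agreement and Cohen's kappa
# for two raters' categorical labels. All names and data are illustrative.
from collections import Counter

def percent_agreement(rater_a, rater_b):
    # Proportion of items on which the two raters assigned the same label.
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    # Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e),
    # where p_e is derived from each rater's marginal label frequencies.
    n = len(rater_a)
    p_o = percent_agreement(rater_a, rater_b)
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two review authors judging study inclusion.
a = ["include", "include", "exclude", "include", "exclude", "include"]
b = ["include", "exclude", "exclude", "include", "exclude", "include"]
print(percent_agreement(a, b))  # 5/6 ≈ 0.833
print(cohens_kappa(a, b))       # (5/6 - 1/2) / (1 - 1/2) ≈ 0.667
```

Note how kappa (≈ 0.67) is lower than the raw agreement (≈ 0.83): the chance correction discounts the agreement that would be expected even if the two authors labeled studies independently at their observed rates.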
Full text:
Available
Index:
WPRIM (Western Pacific)
Type of study:
Systematic reviews
Language:
Korean
Journal:
Hanyang Medical Reviews
Year:
2015
Type:
Article