Percent agreement, Pearson's correlation, and kappa as measures of inter-examiner reliability
Authors:R J Hunt
Abstract:Percent agreement and Pearson's correlation coefficient are frequently used to represent inter-examiner reliability, but these measures can be misleading. The use of percent agreement to measure inter-examiner agreement should be discouraged, because it does not take into account the agreement due solely to chance. Caution must be used in the interpretation of Pearson's correlation, because it is unaffected by the presence of any systematic biases. Analyses of data from a reliability study show that even though percent agreement and kappa were consistently high among three examiners, the reliability measured by Pearson's correlation was inconsistent. This study shows that correlation and kappa can be used together to uncover non-random examiner error.
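The contrast the abstract draws can be illustrated with a small sketch (hypothetical data, not from the study): two examiners whose scores are perfectly correlated yet never agree exactly, because one examiner systematically scores higher. Percent agreement and kappa expose the disagreement; Pearson's correlation does not.

```python
import numpy as np

def percent_agreement(a, b):
    """Proportion of cases where the two examiners give identical scores."""
    a, b = np.asarray(a), np.asarray(b)
    return float(np.mean(a == b))

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    a, b = np.asarray(a), np.asarray(b)
    categories = np.union1d(a, b)
    p_observed = np.mean(a == b)
    # Chance agreement: product of each examiner's marginal rate per category.
    p_chance = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return float((p_observed - p_chance) / (1 - p_chance))

# Hypothetical ratings: examiner B systematically scores one unit above A.
examiner_a = np.array([0, 1, 2, 3, 0, 1, 2, 3, 0, 1])
examiner_b = examiner_a + 1

r = np.corrcoef(examiner_a, examiner_b)[0, 1]   # 1.0 — blind to the bias
pa = percent_agreement(examiner_a, examiner_b)  # 0.0 — no exact agreement
```

Here Pearson's r is a perfect 1.0 despite zero exact agreement, which is why the abstract cautions that correlation alone cannot detect systematic examiner error; pairing it with kappa does.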