Determination of Interrater Reliability of a Universal Evaluator Rubric to Assess Student Pharmacist Communication Skills
Authors: Susanne G. Barnett, Sheila M. Allen, Karen M.S. Bastianelli, Jennifer S. Chen, Colleen A. Clark Dula, Marlowe Djuric Kachlic, Kristen L. Goliak, Laura E. Knockel, David E. Matthews, Lucio R. Volino, Michael R. Lasarev, Jeffrey C. Reist
Affiliations: a) University of Wisconsin-Madison, School of Pharmacy, Madison, Wisconsin; b) University of Illinois at Chicago, College of Pharmacy, Chicago, Illinois; c) University of Minnesota, College of Pharmacy, Minneapolis, Minnesota; d) The Ohio State University, College of Pharmacy, Columbus, Ohio; e) University of Iowa, College of Pharmacy, Iowa City, Iowa; f) Rutgers University, Ernest Mario School of Pharmacy, Piscataway, New Jersey
Abstract:
Objective. To evaluate the interrater reliability of a universal evaluator rubric used to assess student pharmacist communication skills during patient education sessions.

Methods. Six US schools and colleges of pharmacy each submitted 10 student videos of a simulated community pharmacy patient education session and recruited two raters in each of five rater groups (faculty, standardized patients, postgraduate year one residents, student pharmacists, and pharmacy preceptors). Raters used a rubric containing 20 items and a global assessment to evaluate student communication in 12 videos. Agreement was computed for individual items and for the overall rubric score within each rater group, and for each item across all rater groups. Average overall rubric agreement scores were compared between rater groups. Agreement coefficient scores were categorized as no to minimal, weak, moderate, strong, or almost perfect agreement.

Results. Fifty-five raters representing five rater groups and six pharmacy schools evaluated student communication. Item agreement analysis across all raters revealed five items with no to minimal or weak agreement, 10 items with moderate agreement, one item with strong agreement, and five items with almost perfect agreement. The overall average agreement across all rater groups was 0.73 (95% CI, 0.66-0.81). The preceptor rater group exhibited the lowest agreement score, 0.68 (95% CI, 0.58-0.78), which deviated significantly from the overall average.

Conclusion. Although strong or almost perfect agreement was not observed for every rubric item, the overall average interrater reliability results support the use of this rubric by a variety of raters to assess student pharmacist communication skills during patient education sessions.
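The abstract does not name the specific agreement coefficient the authors computed. As a minimal illustrative sketch only, the following Python code computes Fleiss' kappa (one common chance-corrected multi-rater agreement statistic) with statsmodels and maps the result onto verbal benchmark bands in the spirit of McHugh (2012); the simulated rating data, the 0-3 item scale, and the exact cut points are all assumptions, not details taken from the study.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical data: 12 videos (subjects) rated by 11 raters on one rubric
# item, each scored 0-3. Real study data would replace this array.
rng = np.random.default_rng(42)
ratings = rng.integers(low=0, high=4, size=(12, 11))

# Convert the subjects-by-raters score matrix into a subjects-by-categories
# count table, the input format fleiss_kappa expects.
table, categories = aggregate_raters(ratings)

kappa = fleiss_kappa(table, method="fleiss")

def interpret(k):
    # Benchmark bands modeled on McHugh (2012); the paper's own cut points
    # are not reported in the abstract, so these are illustrative.
    if k < 0.40:
        return "no to minimal"
    if k < 0.60:
        return "weak"
    if k < 0.80:
        return "moderate"
    if k < 0.90:
        return "strong"
    return "almost perfect"

# Uniformly random ratings carry almost no shared signal, so kappa here
# will land near zero ("no to minimal" agreement).
print(f"Fleiss' kappa = {kappa:.2f} ({interpret(kappa)} agreement)")
```

In the study's design, a coefficient like this would be computed per item within each rater group and averaged to give the overall scores (for example, the reported 0.73 across all rater groups) that are then compared between groups.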
Keywords: communication; medication education; patient education; assessment tool; interrater reliability