Crowdsourcing for assessment items to support adaptive learning
Authors:Sean Tackett  Mark Raymond  Rishi Desai  Steven A. Haist  Amy Morales  Shiv Gaglani
Affiliation:1. Department of Medicine, Johns Hopkins Bayview Medical Center, Baltimore, Maryland, USA; 2. Osmosis, Baltimore, MD, USA; 3. National Board of Medical Examiners, Philadelphia, PA, USA; 4. Stanford University School of Medicine, Palo Alto, CA, USA; 5. Johns Hopkins University School of Medicine, Baltimore, MD, USA. Correspondence: stacket1@jhmi.edu
Abstract:

Purpose: Adaptive learning requires frequent and valid assessments for learners to track progress against their goals. This study determined if multiple-choice questions (MCQs) “crowdsourced” from medical learners could meet the standards of many large-scale testing programs.

Methods: Users of a medical education app (Osmosis.org, Baltimore, MD) volunteered to submit case-based MCQs. Eleven volunteers were selected to submit MCQs targeted at second-year medical students. Two hundred MCQs were subjected to duplicate review by a panel of internal medicine faculty who rated each item for relevance, content accuracy, and quality of response option explanations. A sample of 121 items was pretested on clinical subject exams completed by a national sample of U.S. medical students.

Results: Seventy-eight percent of the 200 MCQs met faculty reviewer standards based on relevance, accuracy, and quality of explanations. Of the 121 pretested MCQs, 50% met acceptable statistical criteria. The most common reasons for exclusion were that the item was too easy or had a low discrimination index.
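The statistical screening described above rests on standard classical test theory item statistics. As a minimal sketch (not taken from the study), the snippet below shows how item difficulty and an upper-lower discrimination index are commonly computed; the 0.30-0.90 difficulty range, 0.15 discrimination cutoff, and 27% grouping fraction are illustrative assumptions, not the thresholds the authors report.

```python
# Illustrative classical item-analysis sketch; cutoffs are assumptions, not the study's criteria.
from typing import List


def item_difficulty(item_scores: List[int]) -> float:
    """Proportion of examinees answering the item correctly (1 = correct, 0 = incorrect)."""
    return sum(item_scores) / len(item_scores)


def discrimination_index(item_scores: List[int], total_scores: List[float],
                         group_frac: float = 0.27) -> float:
    """Upper-lower discrimination: difficulty in the top-scoring group minus the bottom group."""
    n = len(item_scores)
    k = max(1, int(n * group_frac))
    # Rank examinees by total test score, highest first.
    order = sorted(range(n), key=lambda i: total_scores[i], reverse=True)
    upper = [item_scores[i] for i in order[:k]]
    lower = [item_scores[i] for i in order[-k:]]
    return item_difficulty(upper) - item_difficulty(lower)


def keep_item(item_scores: List[int], total_scores: List[float]) -> bool:
    """Flag an item for retention under the illustrative cutoffs used in this sketch."""
    p = item_difficulty(item_scores)
    d = discrimination_index(item_scores, total_scores)
    return 0.30 <= p <= 0.90 and d >= 0.15
```

Under this kind of screen, an item answered correctly by nearly every examinee ("too easy") or one answered about as often by low scorers as by high scorers (low discrimination) would be excluded, which matches the exclusion reasons reported in the results.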

Conclusions: Crowdsourcing can efficiently yield high-quality assessment items that meet rigorous judgmental and statistical criteria. Similar models may be adopted by students and educators to augment item pools that support adaptive learning.