Item response theory facilitated cocalibrating cognitive tests and reduced bias in estimated rates of decline
Authors: Paul K. Crane, Kaavya Narasimhalu, Laura E. Gibbons, Dan M. Mungas, Sebastien Haneuse, Eric B. Larson, Lewis Kuller, Kathleen Hall, Gerald van Belle
Institution: Department of Medicine, University of Washington.
Abstract:
OBJECTIVE: To cocalibrate the Mini-Mental State Examination, the Modified Mini-Mental State, the Cognitive Abilities Screening Instrument, and the Community Screening Instrument for Dementia using item response theory (IRT); to compare the screening cut points used to identify cases of dementia in different studies; to compare the measurement properties of the tests; and to explore the implications of these measurement properties for longitudinal studies of cognitive functioning over time.
STUDY DESIGN AND SETTING: We used cross-sectional data from three large (n > 1,000) community-based studies of cognitive functioning in the elderly. We used IRT to cocalibrate the scales and performed simulations of longitudinal studies.
RESULTS: Screening cut points varied widely across studies. The four tests have curvilinear scaling and varying levels of measurement precision, with more measurement error at higher levels of cognitive functioning. In longitudinal simulations, IRT scores always performed better than standard scores, whereas a strategy to account for varying measurement precision had mixed results.
CONCLUSION: Cocalibration allows direct comparison of cognitive functioning across studies that use any of these four tests. Standard scoring appears to be a poor choice for analyzing longitudinal cognitive testing data. More research is needed on the implications of varying levels of measurement precision.
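A minimal illustrative sketch (not the authors' implementation) of the kind of IRT machinery the abstract describes: under a two-parameter logistic (2PL) model, each item has a discrimination (a) and difficulty (b) parameter, an examinee's latent cognitive ability (theta) is estimated from the observed response pattern by maximum likelihood, and the test information function shows how measurement precision varies across the theta range. The item parameters, test composition, and response pattern below are hypothetical; in a real cocalibration the parameters would be estimated jointly across studies, with items shared between tests anchoring all scales to one common theta metric.

import numpy as np
from scipy.optimize import minimize_scalar

def p_correct(theta, a, b):
    # 2PL probability of a correct item response given ability theta
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def neg_log_likelihood(theta, responses, a, b):
    # Negative log-likelihood of one examinee's dichotomous response pattern
    p = np.clip(p_correct(theta, a, b), 1e-9, 1 - 1e-9)
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

def estimate_theta(responses, a, b):
    # Maximum-likelihood estimate of theta for one examinee
    result = minimize_scalar(neg_log_likelihood, bounds=(-4, 4),
                             method="bounded", args=(responses, a, b))
    return result.x

def standard_error(theta, a, b):
    # SE of theta from the test information function: low information
    # (e.g., few difficult items) means more measurement error at high theta
    p = p_correct(theta, a, b)
    information = np.sum(a**2 * p * (1 - p))
    return 1.0 / np.sqrt(information)

# Hypothetical item parameters for one short screening test
a_items = np.array([1.2, 0.9, 1.5, 1.0, 0.8])
b_items = np.array([-1.5, -1.0, -0.5, 0.0, 0.5])

responses = np.array([1, 1, 1, 0, 0])   # one examinee's item responses
theta_hat = estimate_theta(responses, a_items, b_items)
print(f"theta = {theta_hat:.2f}, "
      f"SE = {standard_error(theta_hat, a_items, b_items):.2f}")

Because the hypothetical items above sit mostly at low difficulty, the information function peaks at low theta, so standard_error evaluated at high theta is large; this reproduces, in miniature, the abstract's finding of greater measurement error at higher levels of cognitive functioning.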
This article is indexed in PubMed and other databases.