Reproducibility of physiological track-and-trigger warning systems for identifying at-risk patients on the ward
Authors: Christian P Subbe, Haiyan Gao, David A Harrison
Institution: Department of Medicine, Wrexham Maelor Hospital, Wrexham LL13 4TX, UK.
Abstract:
OBJECTIVE: Physiological track-and-trigger warning systems are used to identify patients on acute wards at risk of deterioration as early as possible. The objective of this study was to assess the inter-rater and intra-rater reliability of the physiological measurements, aggregate scores and triggering events of three such systems.
DESIGN: Prospective cohort study.
SETTING: General medical and surgical wards in one non-university acute hospital.
PATIENTS AND PARTICIPANTS: Unselected ward patients: 114 patients in the inter-rater study and 45 patients in the intra-rater study were examined by four raters.
MEASUREMENTS AND RESULTS: Physiological observations obtained at the bedside were evaluated using three systems: the medical emergency team call-out criteria (MET); the modified early warning score (MEWS); and the assessment score of sick-patient identification and step-up in treatment (ASSIST). Inter-rater and intra-rater reliability were assessed by intra-class correlation coefficients, kappa statistics and percentage agreement. There was fair to moderate agreement on most physiological parameters, and fair agreement on the scores, but better levels of agreement on triggers. Reliability was partially a function of simplicity: MET achieved a higher percentage of agreement than ASSIST, and ASSIST higher than MEWS. Intra-rater reliability was better than inter-rater reliability. Using corrected calculations improved the level of inter-rater agreement but not intra-rater agreement.
CONCLUSION: There was significant variation in the reproducibility of different track-and-trigger warning systems. The systems examined showed better levels of agreement on triggers than on aggregate scores. Simpler systems had better reliability. Inter-rater agreement might be improved by using electronic calculation of scores.
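The kappa statistic and percentage agreement mentioned above can be illustrated with a short sketch. This is a minimal, hypothetical example of Cohen's kappa for two raters' binary trigger decisions; the rating data are invented for illustration and do not come from the study.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters (Cohen's kappa)."""
    assert len(rater_a) == len(rater_b) and len(rater_a) > 0
    n = len(rater_a)
    # Observed agreement: proportion of patients on which the raters concur
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal category frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical trigger decisions (1 = trigger, 0 = no trigger) for 8 patients
a = [1, 0, 0, 1, 0, 1, 0, 0]
b = [1, 0, 1, 1, 0, 1, 0, 0]
percent_agreement = sum(x == y for x, y in zip(a, b)) / len(a)  # 0.875
kappa = cohens_kappa(a, b)  # 0.75
```

Note that kappa is lower than raw percentage agreement because it discounts the agreement expected by chance alone, which is why the study reports both measures.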
Keywords: Observer variation, Reproducibility of results, Critical illness, Scoring systems
This article is indexed in PubMed, SpringerLink, and other databases.