Geographic and temporal validity of prediction models: different approaches were useful to examine model performance |
| |
Affiliation: | 1. Department of Medical Statistics, Informatics, and Health Economics, Medical University Innsbruck, Austria; 2. University Clinic of Internal Medicine III - Cardiology and Angiology, Medical University Innsbruck, Austria; 3. Department of Public Health, Erasmus University Medical Centre, Rotterdam, The Netherlands; 4. Department of Biomedical Data Sciences, Leiden University Medical Centre, The Netherlands; 5. Department of Cardiology and Karl Landsteiner Institute for Interdisciplinary Science, Rehabilitation Centre Münster in Tyrol, Austria; 6. Department of Internal Medicine and Cardiology, Klinikum Klagenfurt, Austria; 7. Department of Development and Regeneration, KU Leuven, Belgium |
| |
Abstract: | Objective: Validation of clinical prediction models traditionally refers to the assessment of model performance in new patients. We studied different approaches to geographic and temporal validation in the setting of multicenter data from two time periods. Study Design and Setting: We illustrated different analytic methods for validation using a sample of 14,857 patients hospitalized with heart failure at 90 hospitals in two distinct time periods. Bootstrap resampling was used to assess internal validity. Meta-analytic methods were used to assess geographic transportability: each hospital was used once as a validation sample, with the remaining hospitals used for model derivation. Hospital-specific estimates of discrimination (c-statistic) and calibration (calibration intercepts and slopes) were pooled using random-effects meta-analysis methods. I² statistics and prediction interval widths quantified geographic transportability. Temporal transportability was assessed using patients from the earlier period for model derivation and patients from the later period for model validation. Results: Estimates of reproducibility, pooled hospital-specific performance, and temporal transportability were on average very similar, with c-statistics of 0.75. Between-hospital variation was moderate according to I² statistics and prediction intervals for c-statistics. Conclusion: This study illustrates how the performance of prediction models can be assessed in settings with multicenter data from different time periods. |
| |
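The abstract describes leave-one-hospital-out (internal-external) cross-validation with random-effects pooling of hospital-specific performance. The sketch below is not the authors' code; it illustrates the general approach under stated assumptions: a logistic regression model as the prediction model, hypothetical column names (`hospital`, `died`, a list of predictors), the Hanley-McNeil approximation for the variance of the c-statistic, and DerSimonian-Laird estimation of between-hospital heterogeneity with an I² statistic and a 95% prediction interval.

```python
# Minimal sketch of internal-external (leave-one-hospital-out) validation with
# random-effects pooling. Data frame layout and variable names are illustrative
# assumptions, not taken from the paper.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def hospital_performance(df, predictors, outcome="died", hospital="hospital"):
    """Each hospital serves once as validation set; the model is derived on the
    remaining hospitals. Returns per-hospital c-statistics and calibration
    intercepts/slopes with approximate sampling variances."""
    rows = []
    for hosp, val in df.groupby(hospital):
        dev = df[df[hospital] != hosp]
        model = LogisticRegression(max_iter=1000).fit(dev[predictors], dev[outcome])
        lp = model.decision_function(val[predictors])   # linear predictor (log-odds)
        y = val[outcome].to_numpy()
        if y.min() == y.max():                          # skip hospitals without outcome variation
            continue
        c = roc_auc_score(y, lp)
        # Calibration slope: logistic regression of the outcome on the linear predictor
        slope_fit = sm.Logit(y, sm.add_constant(lp)).fit(disp=0)
        # Calibration intercept (calibration-in-the-large): intercept-only model with lp as offset
        int_fit = sm.GLM(y, np.ones((len(y), 1)), family=sm.families.Binomial(),
                         offset=lp).fit()
        # Hanley & McNeil approximation for the variance of the c-statistic
        n1, n0 = y.sum(), len(y) - y.sum()
        q1, q2 = c / (2 - c), 2 * c**2 / (1 + c)
        var_c = (c * (1 - c) + (n1 - 1) * (q1 - c**2) + (n0 - 1) * (q2 - c**2)) / (n1 * n0)
        rows.append({"hospital": hosp, "c": c, "var_c": var_c,
                     "slope": slope_fit.params[1], "var_slope": slope_fit.bse[1] ** 2,
                     "intercept": int_fit.params[0], "var_intercept": int_fit.bse[0] ** 2})
    return pd.DataFrame(rows)

def random_effects_pool(est, var):
    """DerSimonian-Laird random-effects pooling with I² and a 95% prediction
    interval for performance in a new hospital (t-based, k-2 df, needs k >= 3)."""
    est, var = np.asarray(est, float), np.asarray(var, float)
    w = 1 / var
    fixed = np.sum(w * est) / np.sum(w)
    q = np.sum(w * (est - fixed) ** 2)
    k = len(est)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (var + tau2)
    pooled = np.sum(w_re * est) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    t = stats.t.ppf(0.975, k - 2)
    half = t * np.sqrt(tau2 + se**2)
    return {"pooled": pooled, "ci": (pooled - 1.96 * se, pooled + 1.96 * se),
            "I2": i2, "prediction_interval": (pooled - half, pooled + half)}
```

Under the same assumptions, temporal transportability would reuse the same performance measures after deriving the model on earlier-period patients and validating on later-period patients, while bootstrap resampling of the derivation data would give the optimism-corrected internal (reproducibility) estimates.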
Keywords: | Clinical prediction model; Validation; Risk prediction; Calibration; Discrimination; c-statistic; Receiver operating characteristic curve |