Similar Literature
3 similar records found.
1.
When designing a study to develop a new prediction model with a binary or time-to-event outcome, researchers should ensure their sample size is adequate in terms of the number of participants (n) and outcome events (E) relative to the number of predictor parameters (p) considered for inclusion. We propose that the minimum values of n and E (and subsequently the minimum number of events per predictor parameter, EPP) should be calculated to meet the following three criteria: (i) small optimism in predictor effect estimates, as defined by a global shrinkage factor of ≥ 0.9; (ii) a small absolute difference of ≤ 0.05 between the model's apparent and adjusted Nagelkerke's R²; and (iii) precise estimation of the overall risk in the population. Criteria (i) and (ii) aim to reduce overfitting conditional on a chosen p, and require prespecification of the model's anticipated Cox-Snell R², which we show can be obtained from previous studies. The values of n and E that meet all three criteria provide the minimum sample size required for model development. Upon application of our approach, a new diagnostic model for Chagas disease requires an EPP of at least 4.8, and a new prognostic model for recurrent venous thromboembolism requires an EPP of at least 23. This reinforces why rules of thumb (e.g., 10 EPP) should be avoided. Researchers might additionally ensure the sample size gives precise estimates of key predictor effects; this is especially important when key categorical predictors have few events in some categories, as this may substantially increase the numbers required.
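The three criteria above can be sketched in code. This is a minimal illustration under assumed formulas (a shrinkage-based sample size n = p / ((S − 1) ln(1 − R²_CS / S)), the maximum Cox-Snell R² for a binary outcome, and a 95% confidence interval of half-width 0.05 for the overall risk); the function name and example values are hypothetical, not taken from the abstract.

```python
import math

def min_sample_size_binary(p, R2_cs, prevalence, shrinkage=0.9, delta=0.05):
    """Sketch of the three sample-size criteria for a binary-outcome model.

    p          : number of candidate predictor parameters
    R2_cs      : anticipated Cox-Snell R-squared of the model
    prevalence : anticipated overall outcome proportion
    """
    # Criterion (i): expected global shrinkage factor >= 0.9
    n1 = p / ((shrinkage - 1) * math.log(1 - R2_cs / shrinkage))

    # Criterion (ii): apparent-vs-adjusted R2 difference <= delta,
    # using the maximum possible Cox-Snell R2 for this prevalence
    max_R2 = 1 - (prevalence**prevalence * (1 - prevalence)**(1 - prevalence))**2
    S2 = R2_cs / (R2_cs + delta * max_R2)
    n2 = p / ((S2 - 1) * math.log(1 - R2_cs / S2))

    # Criterion (iii): 95% CI of half-width delta for the overall risk
    n3 = (1.96 / delta)**2 * prevalence * (1 - prevalence)

    n = max(math.ceil(n1), math.ceil(n2), math.ceil(n3))
    E = n * prevalence            # expected number of events
    return n, E, E / p            # minimum n, events, and implied EPP
```

Note that the implied EPP is an output of the calculation rather than an input, which is why it varies so much between the two case studies in the abstract.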

2.
Clinical prediction models provide individualized outcome predictions to inform patient counseling and clinical decision making. External validation is the process of examining a prediction model's performance in data independent of that used for model development. Current external validation studies often suffer from small sample sizes, and subsequently imprecise estimates of a model's predictive performance. To address this, we propose how to determine the minimum sample size needed for external validation of a clinical prediction model with a continuous outcome. Four criteria are proposed that target precise estimates of (i) R² (the proportion of variance explained), (ii) calibration-in-the-large (agreement between predicted and observed outcome values on average), (iii) the calibration slope (agreement between predicted and observed values across the range of predicted values), and (iv) the variance of observed outcome values. Closed-form sample size solutions are derived for each criterion, which require the user to specify anticipated values of the model's performance (in particular R²) and the outcome variance in the external validation dataset. A sensible starting point is to base values on those for the model development study, as obtained from the publication or the study authors. The largest sample size required to meet all four criteria is the recommended minimum sample size needed in the external validation dataset. The calculations can also be applied to estimate expected precision when an existing dataset with a fixed sample size is available, to help gauge whether it is adequate. We illustrate the proposed methods on a case study predicting fat-free mass in children.
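As a hedged illustration of criterion (ii) only: if the residual variance of the model in the validation sample is roughly σ²(1 − R²), then the sample size giving a 95% confidence interval of half-width δ for calibration-in-the-large follows from the usual formula for the standard error of a mean. The published closed-form solutions for all four criteria are more involved; the function name and values below are assumptions for illustration, not the authors' derivations.

```python
import math

def n_for_calibration_in_the_large(sigma2, R2, delta, z=1.96):
    """Sketch: n so that the 95% CI for calibration-in-the-large
    (mean of observed minus predicted outcomes) has half-width <= delta,
    assuming residual variance ~= sigma2 * (1 - R2).

    sigma2 : anticipated variance of the outcome in the validation data
    R2     : anticipated proportion of variance explained by the model
    delta  : target CI half-width, in outcome units
    """
    resid_var = sigma2 * (1 - R2)
    return math.ceil((z / delta) ** 2 * resid_var)
```

In line with the abstract, the analogous calculations for the other three criteria would each yield their own n, and the largest of the four would be taken as the recommended minimum.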

3.
We propose a simple method to compute sample size for an arbitrary test hypothesis in population pharmacokinetics (PK) studies analysed with non-linear mixed effects models. Sample size procedures exist for linear mixed effects models, and have recently been extended by Rochon using the generalized estimating equations of Liang and Zeger; thus, fully model-based inference in sample size computation has been possible. The method we propose extends this approach using a first-order linearization of the non-linear mixed effects model and the Wald χ² test statistic. The proposed method is general: it allows an arbitrary non-linear model as well as an arbitrary distribution of the random effects characterizing both inter- and intra-individual variability of the mixed effects model. To illustrate possible uses of the method we present tables of minimum sample sizes, including an illustration of the effect of sampling design on sample size. We demonstrate how (D-)optimal or frequent sampling requires fewer subjects than a sparse sampling design. We also present results from Monte Carlo simulations showing that the computed sample size can produce the desired power. The proposed method greatly reduces computing times compared with simulation-based methods of estimating sample sizes for population PK studies.
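The Wald-based calculation can be sketched as follows: given the per-subject covariance of the parameter estimates implied by the linearized model, find the smallest n whose noncentral χ² power reaches the target. This is a generic sketch of the idea, not the authors' implementation; the function name and example values are hypothetical, and obtaining the per-subject covariance from a specific PK model and sampling design is the hard part omitted here.

```python
import numpy as np
from scipy.stats import chi2, ncx2

def wald_sample_size(delta, var_per_subject, alpha=0.05, power=0.90):
    """Smallest n giving the desired power for a Wald chi-square test.

    delta           : hypothesized parameter effect(s) under the alternative
    var_per_subject : covariance of the parameter estimates contributed by
                      a single subject (from the linearized mixed model)
    """
    delta = np.atleast_1d(np.asarray(delta, dtype=float))
    V = np.atleast_2d(np.asarray(var_per_subject, dtype=float))
    df = delta.size
    crit = chi2.ppf(1 - alpha, df)            # Wald test critical value
    lam1 = delta @ np.linalg.solve(V, delta)  # noncentrality per subject
    n = 1
    # Power with n subjects is P(X > crit), X ~ noncentral chi2(df, n*lam1)
    while ncx2.sf(crit, df, n * lam1) < power:
        n += 1
    return n
```

Because the noncentrality parameter scales linearly with n, a sparse design that inflates `var_per_subject` shrinks `lam1` and so demands more subjects, which is the sampling-design effect the abstract's tables illustrate.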


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号