Similar Articles
10 similar articles retrieved.
1.
In the context of chronic diseases, a patient's health evolution is often evaluated through the study of longitudinal markers and major clinical events such as relapse or death. Dynamic predictions of such events may be useful for improving patient management throughout follow-up. Dynamic predictions are predictions based on information collected repeatedly over time, such as measurements of a biomarker, that can be updated as soon as new information becomes available. Several techniques for deriving dynamic predictions have already been suggested, and their computation is becoming increasingly popular. In this work, we focus on assessing the predictive accuracy of dynamic predictions and suggest that an R²-curve may help: it facilitates evaluation of the gain in predictive accuracy obtained as information on a patient's health profile accumulates over time. A nonparametric inverse-probability-of-censoring-weighted (IPCW) estimator is suggested to deal with censoring. Large-sample results are provided, and methods to compute confidence intervals and bands are derived. A simulation study assesses the finite-sample behavior of the inference procedures and illustrates the shapes of R²-curves that can be expected in common settings. A detailed application to kidney transplant data is also presented.
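A minimal sketch, not the authors' implementation, of one point on such an R²-curve: a Brier-score-based R² at a single prediction horizon, with inverse-probability-of-censoring weights from a Kaplan-Meier estimate of the censoring distribution. All names are illustrative, and the landmark updating, large-sample theory, and confidence bands described in the abstract are omitted.

```python
import numpy as np

def censoring_survival(time, event, t):
    """Kaplan-Meier estimate of G(t) = P(C > t), the survival function
    of the censoring distribution (censoring indicators are 1 - event)."""
    order = np.argsort(time)
    time, cens = time[order], 1 - np.asarray(event)[order]
    g, n = 1.0, len(time)
    for i in range(n):
        if time[i] > t:
            break
        if cens[i] == 1:
            g *= 1.0 - 1.0 / (n - i)  # at-risk set shrinks as time passes
    return g

def ipcw_r2(time, event, risk, horizon):
    """IPCW Brier-score-based R² at one horizon:
    R² = 1 - BS(model) / BS(null), where `risk` is the predicted
    P(event by horizon) for each subject. Assumes G(t) stays positive
    over the range used (no weight blows up)."""
    time, event, risk = map(np.asarray, (time, event, risk))
    n = len(time)
    y = np.zeros(n)  # event status at the horizon
    w = np.zeros(n)  # IPCW weights; 0 for subjects censored early
    for i in range(n):
        if time[i] <= horizon and event[i] == 1:
            y[i] = 1.0
            w[i] = 1.0 / censoring_survival(time, event, time[i])
        elif time[i] > horizon:
            w[i] = 1.0 / censoring_survival(time, event, horizon)
    bs_model = np.sum(w * (y - risk) ** 2) / n
    bs_null = np.sum(w * (y - np.average(y, weights=w)) ** 2) / n
    return 1.0 - bs_model / bs_null
```

An R²-curve in the abstract's sense would evaluate a quantity like this over a grid of landmark times, with the predicted risks refreshed at each landmark.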

2.
Predictive modeling in healthcare has gained increasing interest and use in recent years, and the tools for it have become more sophisticated and more accurate. We present a case study of how artificial intelligence (AI) can be used for a high-quality predictive modeling process, and how this process is used to improve the quality and efficiency of healthcare. In this case study, MEDai, Inc. provides the analytical tools for the predictive modeling, and Sentara Healthcare uses these predictions to determine which members can be helped the most by actively looking for ways to prevent future severe outcomes. Most predictive methodologies implement rule-based systems or regression techniques. These techniques have many pitfalls when applied to medical data, where many variables and many interacting variable combinations exist, necessitating modeling with AI. When the R² statistic (the commonly accepted measure of how accurate a predictive model is) of traditional techniques is compared with that of AI techniques, the resulting accuracy more than doubles: the cited publications show raw R² values ranging from 0.10 to 0.15, whereas the R² value obtained from the AI techniques implemented at Sentara is 0.34. Once the predictions are generated, the data are displayed and analytical programs are used for data mining and analysis. With this tool, it is possible to examine subgroups of the data, or to data-mine down to the member level. Risk factors can be determined, and individual members or member groups can be analyzed to help decide what changes can be made to improve the level of medical care that people receive.
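For reference, the R² statistic quoted above is the coefficient of determination computed on held-out predictions. A minimal sketch follows, with invented numbers rather than data from the case study:

```python
import numpy as np

def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SS_residual / SS_total."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((actual - predicted) ** 2)
    ss_tot = np.sum((actual - np.mean(actual)) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical next-year costs (thousands of dollars), for illustration only:
actual = [2.1, 0.4, 7.8, 1.2, 3.3]
predicted = [1.8, 0.9, 6.5, 1.5, 2.9]
print(r_squared(actual, predicted))  # 1.0 is a perfect fit; 0 matches the mean
```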

3.
Decision curve analysis: a novel method for evaluating prediction models.
BACKGROUND: Diagnostic and prognostic models are typically evaluated with measures of accuracy that do not address clinical consequences. Decision-analytic techniques allow assessment of clinical outcomes but often require collection of additional information and may be cumbersome to apply to models that yield a continuous result. The authors sought a method for evaluating and comparing prediction models that incorporates clinical consequences, requires only the data set on which the models are tested, and can be applied to models that have either continuous or dichotomous results. METHOD: The authors describe decision curve analysis, a simple, novel method of evaluating predictive models. They start by assuming that the threshold probability of a disease or event at which a patient would opt for treatment is informative of how the patient weighs the relative harms of a false-positive and a false-negative prediction. This theoretical relationship is then used to derive the net benefit of the model across different threshold probabilities. Plotting net benefit against threshold probability yields the "decision curve." The authors apply the method to models for the prediction of seminal vesicle invasion in prostate cancer patients. Decision curve analysis identified the range of threshold probabilities in which a model was of value, the magnitude of benefit, and which of several models was optimal. CONCLUSION: Decision curve analysis is a suitable method for evaluating alternative diagnostic and prognostic strategies that has advantages over other commonly used measures and techniques.
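The net benefit underlying the decision curve weights false positives by the odds of the threshold probability, pt/(1 − pt). A minimal sketch of that calculation (illustrative names, not the authors' code):

```python
import numpy as np

def net_benefit(y_true, y_prob, pt):
    """Net benefit at threshold probability pt (0 < pt < 1):
    TP/n - (FP/n) * pt / (1 - pt), where patients with predicted
    probability >= pt are treated."""
    y_true = np.asarray(y_true)
    treat = np.asarray(y_prob) >= pt
    n = len(y_true)
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    return tp / n - (fp / n) * pt / (1.0 - pt)

def decision_curve(y_true, y_prob, thresholds):
    """Model net benefit across thresholds, plus the 'treat all'
    reference strategy; 'treat none' is zero at every threshold."""
    model = [net_benefit(y_true, y_prob, t) for t in thresholds]
    treat_all = [net_benefit(y_true, np.ones(len(y_true)), t) for t in thresholds]
    return model, treat_all
```

Plotting the model curve against the treat-all and treat-none references over a clinically sensible threshold range reproduces the "decision curve" the abstract describes.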

4.
As treatment costs are increasingly determined from individual-level cost data, the number of proposed multivariable methods for this analysis has multiplied. These methods involve estimation of multivariable cost functions that yield predictions at the individual level, conditional on interventions, patient characteristics, and other factors. What are these methods, how are they used properly, and under what circumstances is one method preferred over the others? The purpose of this workshop is to develop skills in conducting multivariable analysis of cost data from randomized trials. We will instruct participants in the use of ordinary least squares regression techniques and survival analysis techniques. We will discuss how non-normal cost data and censored cost data are properly and improperly handled in these methods. Participants will learn when it is appropriate to use a log transformation of costs in their analysis and how to estimate unbiased treatment costs using smearing techniques. Participants will also learn how to apply the Cox proportional hazards model to the analysis of costs. How does one determine which model is best in a given setting? We will develop the concepts important for evaluating the superior model: predictive validity and adherence to the assumptions required for unbiased estimators. We will present results from a simulation designed to evaluate how well the various methods perform under different circumstances. Those who want to learn the techniques of multivariable cost analysis and develop criteria for choosing the best technique will benefit from this workshop, including analysts of cost data and those who want to deepen their understanding of the economic evaluation literature.
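One of the retransformation techniques mentioned, the smearing estimator, corrects the bias that arises when predictions from a regression on log costs are naively exponentiated (by Jensen's inequality, exp of the fitted log cost underestimates the mean cost). A minimal sketch, assuming Duan's smearing factor, the mean of the exponentiated residuals, is the variant intended:

```python
import numpy as np

def smeared_cost_predictions(costs, X, X_new):
    """OLS on log costs, then Duan's smearing retransformation:
    predictions on the cost scale are exp(fitted value) rescaled by
    the mean of the exponentiated residuals."""
    log_costs = np.log(costs)
    beta, *_ = np.linalg.lstsq(X, log_costs, rcond=None)
    residuals = log_costs - X @ beta
    smearing_factor = np.mean(np.exp(residuals))
    return np.exp(X_new @ beta) * smearing_factor

# Hypothetical use: intercept column plus a treatment indicator.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.integers(0, 2, 200)])
costs = np.exp(1.0 + 0.5 * X[:, 1] + rng.normal(0.0, 0.8, 200))
print(smeared_cost_predictions(costs, X, X[:2]))
```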

5.
Individuals from the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) Environmental Model Validation Task Force (FEMVTF) Statistics Committee met periodically to discuss the mechanism for conducting an uncertainty analysis of Version 3.12 of the Pesticide Root Zone Model (PRZM 3.12) and to identify the model input parameters that contribute most to model prediction error. This activity was part of a larger project evaluating PRZM 3.12. The goal of the uncertainty analysis was to compare site-specific model predictions and field measurements, using the variability in each as the basis of comparison. Monte Carlo analysis was used as an integral tool for judging the model's ability to predict accurately: the model was judged on how well it predicts measured values, taking into account the uncertainty in the model predictions, with Monte Carlo analysis providing the tool for inferring that prediction uncertainty. We argue that this is a fairer test of the model than a simple one-to-one comparison between predictions and measurements. Because models are known to be imperfect predictors before they are even run, the inaccuracy of model predictions should be taken into account when models are judged for their predictive ability; otherwise, complex models can easily fail a validation test, and few complex models such as PRZM 3.12 would pass a typical model validation exercise. This paper describes the approaches to the validation of PRZM 3.12 used by the committee and discusses issues in selecting sampling distributions and appropriate statistics for interpreting the model validation results.
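The comparison logic described here, judging a measurement against a Monte Carlo prediction interval rather than a point prediction, can be sketched generically. The surrogate model and input distributions below are invented stand-ins, not PRZM 3.12 or its actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_prediction_interval(model, samplers, n_draws=5000, level=0.90):
    """Propagate input uncertainty through `model` by Monte Carlo and
    return a central prediction interval at the requested level.
    `samplers` maps parameter names to zero-argument sampling functions."""
    draws = np.array([model({name: draw() for name, draw in samplers.items()})
                      for _ in range(n_draws)])
    alpha = (1.0 - level) / 2.0
    return np.quantile(draws, [alpha, 1.0 - alpha])

# Toy surrogate standing in for the real model (illustrative only):
surrogate = lambda p: p["sorption_koc"] * p["half_life_days"] / 100.0
samplers = {"sorption_koc": lambda: rng.lognormal(mean=3.0, sigma=0.4),
            "half_life_days": lambda: rng.normal(loc=30.0, scale=5.0)}

low, high = mc_prediction_interval(surrogate, samplers, n_draws=2000)
# A field measurement falling inside [low, high] is consistent with the
# model once its prediction uncertainty is taken into account.
```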

6.
BACKGROUND: An essential characteristic of health impact assessment (HIA) is that it seeks to predict the future health consequences of possible decisions. These predictions have to be valid, but it is as yet unclear how validity should be defined in HIA. AIMS: To examine the philosophical basis of predictions and the relevance of different forms of validity to HIA. CONCLUSIONS: An HIA is valid if its formal validity, plausibility, and predictive validity are in order. Formal validity and plausibility can usually be established, but establishing predictive validity implies outcome evaluation of the HIA. This is seldom feasible owing to long time lags, migration, measurement problems, a lack of data and of sensitive indicators, and the fact that predictions may influence subsequent events. Predictive validity is therefore most often not attainable in HIA, and we have to make do with formal validity and plausibility. In political science, however, this is by no means exceptional.

7.
OBJECTIVES: Early warning systems are an integral part of many health technology assessment programs, yet to date there have been no quantitative evaluations of the accuracy of the predictions these systems make. We report a study evaluating the accuracy of predictions made by the main United Kingdom early warning system. METHODS: As prediction of impact is analogous to diagnosis, we used a method normally applied to determine the accuracy of diagnostic tests. The sensitivity, specificity, and predictive values of the National Horizon Scanning Centre's predictions were estimated against an (imperfect) gold standard: expert opinion of impact 3 to 5 years after the prediction. RESULTS: The sensitivity of predictions was 71 percent (95 percent confidence interval [CI], 0.36-0.92), and the specificity was 73 percent (95 percent CI, 0.64-0.8). The negative predictive value was 98 percent (95 percent CI, 0.92-0.99), and the positive predictive value was 14 percent (95 percent CI, 0.06-0.3). CONCLUSIONS: Forecasting is difficult, but the results suggest that this early warning system's predictions have an acceptable level of accuracy. There are caveats, however. First, early warning systems may themselves reduce the impact of a technology, as helping to control adoption and diffusion is their main purpose. Second, the use of an imperfect gold standard may bias the results. As early warning systems are viewed as an increasingly important component of health technology assessment and decision making, their outcomes must be evaluated. The method used here should be investigated further, and the accuracy of other early warning systems explored.
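The measures reported above come from a 2×2 table of predictions against the gold standard. A minimal sketch of their computation follows; the abstract does not say how its confidence intervals were obtained, so Wilson score intervals are assumed here as one common choice:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% by default).
    Assumes n > 0."""
    p = successes / n
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half_width = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half_width, centre + half_width

def screening_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 table of
    predictions vs. the (imperfect) gold standard, each with a Wilson CI.
    Assumes every denominator is positive."""
    return {
        "sensitivity": (tp / (tp + fn), wilson_ci(tp, tp + fn)),
        "specificity": (tn / (tn + fp), wilson_ci(tn, tn + fp)),
        "ppv":         (tp / (tp + fp), wilson_ci(tp, tp + fp)),
        "npv":         (tn / (tn + fn), wilson_ci(tn, tn + fn)),
    }
```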

8.
Unlike test sensitivity and specificity, the false positive and false negative predictive values (the probabilities of mislabeling an individual being tested) depend heavily on the prevalence of human immunodeficiency virus (HIV) infection as well as on the quality of the test kit. A consequence of this dependence is that the false positive predictive value can reach a magnitude as high as 0.9; that is, 90% of positive tests are false. This raises many important issues for the current practice of HIV screening, such as how to control these misclassification errors, how to interpret test results, and how to estimate prevalence from test results. These issues are examined in detail here by considering the factors that dictate the quality of a screening program. Some real data examples are used to illustrate the importance of this consideration in designing programs to achieve the desired goals. The rationale behind the common two-step sequential protocol in HIV screening is examined to point out its limitations in practical situations. Finally, the use of entropy in evaluating the informativeness of a screening program is discussed.
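The prevalence dependence described here follows directly from Bayes' rule. A minimal sketch with invented (not the article's) numbers shows how even an excellent kit produces mostly false positives at low prevalence:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive values by Bayes' rule; the
    false positive predictive value is 1 - PPV."""
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    npv = (specificity * (1 - prevalence)) / (
        specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
    return ppv, npv

# Illustrative numbers: a kit with 99% sensitivity and 99% specificity
# at a prevalence of 0.1% gives PPV of about 0.09, i.e. roughly 90% of
# positive tests are false positives, matching the magnitude cited above.
ppv, npv = predictive_values(0.99, 0.99, 0.001)
print(f"PPV = {ppv:.3f}, false positive predictive value = {1 - ppv:.3f}")
```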

9.
The CHARMS (critical appraisal and data extraction for systematic reviews of prediction modelling studies) checklist was created to provide methodological appraisal of predictive models, based on the best available scientific evidence and on systematic reviews. Our purpose is to give a general presentation of how to carry out a CHARMS analysis of prognostic multivariate models, making clear what the steps are and how they are applied individually to the studies included in a systematic review; this tutorial is intended to provide such a resource. We then apply the method to a real case: predictive models of atrial fibrillation in the community. The same methodology can be applied to other predictive models by following the steps given in our review, so as to obtain complete information on each included model and determine whether it can be implemented in daily clinical practice.

10.
Innovative approaches to analysing clinical databases can be considered from the perspective of innovations that improve the analytical approach, or from a more global perspective in which clinical databases themselves are evaluated as a technology. The analytic approach to using a database to estimate risk can be viewed as a matrix of three methodological concerns: the predictive method; the assessment of the quality of the predictions; and the assessment of the validity, or generalizability, of the predictions. Considering databases as a technology puts the merit of clinical databases in perspective and defines their potential value to the health care system. An awareness of both the clinical and the analytic problems encourages innovation and can lead to creative solutions to the many problems present in the analysis of clinical databases.
