Similar Articles
20 similar articles found.
1.
We describe a novel process for transforming the efficiency of partial expected value of sample information (EVSI) computation in decision models. Traditional EVSI computation begins with Monte Carlo sampling to produce new simulated data-sets with a specified sample size. Each data-set is synthesised with prior information to give posterior distributions for model parameters, either via analytic formulae or a further Markov chain Monte Carlo (MCMC) simulation. A further 'inner level' of Monte Carlo sampling then quantifies the effect of the simulated data on the decision. This paper describes a novel form of Bayesian Laplace approximation, which can replace both the Bayesian updating and the inner Monte Carlo sampling to compute the posterior expectation of a function. We compare the accuracy of EVSI estimates in two case study cost-effectiveness models using first- and second-order versions of our approximation formula, the approximation of Tierney and Kadane, and traditional Monte Carlo. Computational efficiency gains depend on the complexity of the net benefit functions, the number of inner-level Monte Carlo samples used, and whether or not MCMC methods are required to produce the posterior distributions. This methodology provides a new and valuable approach for EVSI computation in health economic decision models, with potential wider benefits in many fields requiring Bayesian approximation.
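The paper's Laplace formulas are not reproduced here, but the traditional two-level computation the abstract starts from can be sketched in a few lines of Python/NumPy with invented numbers: a conjugate normal-normal update stands in for the Bayesian step (so no MCMC is required), and because net benefit is linear in the parameter, the inner Monte Carlo loop collapses to a plug-in evaluation at the posterior mean, the kind of shortcut the Laplace approximation generalizes to nonlinear net benefit functions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-option model with net benefit linear in a single effect
# parameter theta; the normal-normal conjugate update stands in for MCMC.
prior_mu, prior_sd = 0.5, 1.0   # prior on incremental effect theta
sigma = 2.0                     # known sampling SD of one future observation
n_new = 50                      # proposed sample size for the new study
n_outer = 5000                  # number of simulated future data sets
n_prior = 100000                # draws for the current-information baseline

def enb(theta_mean):
    """Net benefit of each option evaluated at theta; linear in theta, so the
    posterior expectation is the net benefit at the posterior mean."""
    return np.stack([np.zeros_like(theta_mean),            # standard care
                     20000 * theta_mean - 5000], axis=-1)  # new treatment

# Value of deciding now, with current (prior) information only
theta0 = rng.normal(prior_mu, prior_sd, n_prior)
value_now = enb(theta0).mean(axis=0).max()

# Outer loop: draw theta, simulate the future study's sample mean, update
theta = rng.normal(prior_mu, prior_sd, n_outer)
xbar = rng.normal(theta, sigma / np.sqrt(n_new))
post_var = 1 / (1 / prior_sd**2 + n_new / sigma**2)
post_mu = post_var * (prior_mu / prior_sd**2 + n_new * xbar / sigma**2)

# Linearity lets the inner Monte Carlo loop collapse to a plug-in evaluation
value_after = enb(post_mu).max(axis=-1).mean()
print(f"EVSI ~ {value_after - value_now:.0f}")
```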

2.
Expected value of sample information (EVSI) involves simulating data collection, Bayesian updating, and re-examining decisions. Bayesian updating in Weibull models typically requires Markov chain Monte Carlo (MCMC). We examine five methods for calculating posterior expected net benefits: two heuristic methods (data lumping and pseudo-normal); two Bayesian approximation methods (Tierney & Kadane, Brennan & Kharroubi); and the gold-standard MCMC. A case study computes EVSI for 25 study options. We compare accuracy, computation time, and trade-offs of EVSI versus study costs. Brennan & Kharroubi (B&K) approximates expected net benefits to within ±1% of MCMC. The other methods, data lumping (+54%), pseudo-normal (-5%), and Tierney & Kadane (+11%), are less accurate. B&K also produces the most accurate EVSI approximation. Pseudo-normal is also reasonably accurate, whilst Tierney & Kadane consistently underestimates and data lumping exhibits large variance. B&K computation is 12 times faster than MCMC in our case study. Though not always faster, B&K offers the greatest computational efficiency when net benefits require appreciable computation time and when many MCMC samples are needed. These methods enable EVSI computation for economic models with Weibull survival parameters, and the approach can generalize to complex multi-state models and to survival analyses using other smooth parametric distributions.

3.
Value in Health, 2020, 23(3): 277-286
The allocation of healthcare resources among competing priorities requires an assessment of the expected costs and health effects of investing resources in the activities and of the opportunity cost of the expenditure. To date, much effort has been devoted to assessing the expected costs and health effects, but there remains an important need to also reflect the consequences of uncertainty in resource allocation decisions and the value of further research to reduce uncertainty. Decisions made under uncertainty may turn out to be suboptimal, resulting in health loss. Consequently, there may be value in reducing uncertainty, through the collection of new evidence, to better inform resource decisions. This value can be quantified using value of information (VOI) analysis. This report from the ISPOR VOI Task Force describes methods for computing 4 VOI measures: the expected value of perfect information, expected value of partial perfect information (EVPPI), expected value of sample information (EVSI), and expected net benefit of sampling (ENBS). Several methods exist for computing EVPPI and EVSI, and this report provides guidance on selecting the most appropriate method based on the features of the decision problem. The report provides a number of recommendations for good practice when planning, undertaking, or reviewing VOI analyses. The software needed to compute VOI is discussed, and areas for future research are highlighted.
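Of the four measures, EVPI is the cheapest to estimate once probabilistic sensitivity analysis (PSA) output exists. The sketch below assumes a generic (n_samples, n_options) net benefit array, invented for illustration rather than taken from the Task Force report, and shows the two expectations involved:

```python
import numpy as np

def evpi(nb):
    """EVPI from probabilistic sensitivity analysis output.
    nb: array of shape (n_samples, n_options), net benefit per PSA draw."""
    with_perfect_info = nb.max(axis=1).mean()   # pick the best option per draw
    with_current_info = nb.mean(axis=0).max()   # pick one option on expectations
    return with_perfect_info - with_current_info

# Toy usage with invented PSA draws for two options
rng = np.random.default_rng(0)
nb = np.column_stack([rng.normal(1000, 400, 10000),
                      rng.normal(1100, 900, 10000)])
print(f"EVPI ~ {evpi(nb):.0f}")
```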

4.
Probabilistic analysis of decision trees using Monte Carlo simulation
The authors describe methods for modeling uncertainty in the specification of decision tree probabilities and utilities using Monte Carlo simulation techniques. Exact confidence levels based upon the underlying probabilistic structure are provided. Probabilistic measures of sensitivity are derived in terms of classical information theory; these measures identify which variables are probabilistically important components of the decision. The techniques are illustrated using the clinical problem of anticoagulation versus observation for deep vein thrombosis during the first trimester of pregnancy. These methods provide the decision analyst with powerful yet simple tools that give quantitative insight into the structure and inherent limitations of decision models arising from specification uncertainty, and they may be applied to complex decision models.
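A minimal Monte Carlo version of the idea, with entirely hypothetical branch probabilities and utilities (not the paper's clinical numbers), might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000

# Hypothetical two-arm tree: treat -> risk of bleeding; observe -> risk of
# embolus; otherwise the patient stays well.
p_bleed = rng.beta(3, 97, n)          # second-order uncertainty on
p_embolus = rng.beta(10, 90, n)       # the branch probabilities
u_bleed = rng.normal(0.70, 0.02, n)   # uncertain utilities
u_embolus = rng.normal(0.50, 0.02, n)
u_well = 1.0

eu_treat = p_bleed * u_bleed + (1 - p_bleed) * u_well
eu_observe = p_embolus * u_embolus + (1 - p_embolus) * u_well

# The fraction of simulations favoring treatment is a Monte Carlo analogue
# of the confidence level for the preferred decision.
print("P(treat preferred) =", (eu_treat > eu_observe).mean())
```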

5.
Modern statistical models and computational methods can now incorporate uncertainty of the parameters used in Quantitative Microbial Risk Assessments (QMRA). Many QMRAs use Monte Carlo methods, but work from fixed estimates for means, variances and other parameters. We illustrate the ease of estimating all parameters contemporaneously with the risk assessment, incorporating all the parameter uncertainty arising from the experiments from which these parameters are estimated. A Bayesian approach is adopted, using Markov chain Monte Carlo Gibbs sampling (MCMC) via the freely available software, WinBUGS. The method and its ease of implementation are illustrated by a case study that involves incorporating three disparate datasets into an MCMC framework. The probabilities of infection when the uncertainty associated with parameter estimation is incorporated into a QMRA are shown to be considerably more variable over various dose ranges than the analogous probabilities obtained when constants from the literature are simply 'plugged' in, as is done in most QMRAs. Neglecting these sources of uncertainty may lead to erroneous decisions for public health and risk management.
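WinBUGS itself is not shown here; as a stand-in, the sketch below propagates posterior uncertainty in a single dose-response parameter through a QMRA-style risk calculation, assuming an exponential dose-response model and an invented feeding-trial dataset, and contrasts the result with a plug-in point estimate:

```python
import numpy as np

rng = np.random.default_rng(7)

# Exponential dose-response: P(infection | dose) = 1 - exp(-r * dose).
# Invented feeding-trial data: k infected of n subjects at a known dose.
dose_trial, n_trial, k_trial = 100.0, 50, 12

# Crude posterior for r via grid approximation (a stand-in for WinBUGS/MCMC)
r_grid = np.linspace(1e-5, 0.05, 4000)
p = 1 - np.exp(-r_grid * dose_trial)
log_lik = k_trial * np.log(p) + (n_trial - k_trial) * np.log(1 - p)
w = np.exp(log_lik - log_lik.max())
w /= w.sum()                                   # flat prior on r
r_post = rng.choice(r_grid, size=10000, p=w)   # posterior draws of r

doses = np.array([1.0, 10.0, 100.0])
risk_draws = 1 - np.exp(-np.outer(r_post, doses))     # uncertainty retained
risk_plugin = 1 - np.exp(-r_grid[w.argmax()] * doses)  # 'plugged-in' constant
print("plug-in:", risk_plugin.round(4))
print("posterior 2.5/97.5%:",
      np.percentile(risk_draws, [2.5, 97.5], axis=0).round(4))
```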

6.
Partial expected value of perfect information (EVPI) calculations can quantify the value of learning about particular subsets of uncertain parameters in decision models. Published case studies have used different computational approaches. This article examines the computation of partial EVPI estimates via Monte Carlo sampling algorithms. The mathematical definition involves two nested expectations, which must be evaluated separately because a maximization sits between them. A generalized Monte Carlo sampling algorithm uses nested simulation, with an outer loop to sample the parameters of interest and, conditional upon these, an inner loop to sample the remaining uncertain parameters. Alternative computation methods and shortcut algorithms are discussed, and the mathematical conditions for their use are considered. Maxima of Monte Carlo estimates of expectations are biased upward, and the authors show that the use of small samples results in biased EVPI estimates. Three case studies illustrate (1) the bias due to maximization, together with the inaccuracy of shortcut algorithms (2) when correlated variables are present and (3) when there is nonlinearity in net benefit functions; even if relatively small correlation or nonlinearity is present, the shortcut algorithm can be substantially inaccurate. Empirical investigation of the numbers of Monte Carlo samples suggests that fewer samples on the outer level and more on the inner level can be efficient, and that relatively small numbers of samples can sometimes suffice. Several remaining areas for methodological development are set out. Wider application of partial EVPI is recommended, both for greater understanding of decision uncertainty and for analyzing research priorities.
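The generalized nested algorithm can be sketched directly. In the hypothetical model below, the interaction between the parameter of interest phi and the remaining parameter psi makes the net benefit nonlinear, which is precisely the situation in which the abstract warns that shortcut algorithms become inaccurate; all distributions and numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(42)

def nb(phi, psi):
    """Hypothetical net benefit for two options; the phi*psi interaction makes
    the problem nonlinear, so a non-nested shortcut would be biased."""
    return np.stack([np.zeros_like(phi * psi),
                     1000.0 * phi * psi - 300.0], axis=-1)

# Baseline: expected net benefits under current information
phi0 = rng.normal(1.0, 0.3, 200000)
psi0 = rng.normal(0.5, 0.2, 200000)
baseline = nb(phi0, psi0).mean(axis=0).max()

# Outer loop over the parameter of interest phi; inner loop over psi | phi.
# Too small an n_inner inflates the estimate: maxima of Monte Carlo means
# are biased upward, as the abstract notes.
n_outer, n_inner = 1000, 2000
inner_best = np.empty(n_outer)
for i in range(n_outer):
    phi = rng.normal(1.0, 0.3)              # outer draw
    psi = rng.normal(0.5, 0.2, n_inner)     # inner draws (independent here)
    inner_best[i] = nb(np.full(n_inner, phi), psi).mean(axis=0).max()

print(f"partial EVPI ~ {inner_best.mean() - baseline:.1f}")
```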

7.
Value in Health, 2020, 23(6): 734-742
Value of information (VOI) analyses can help policy makers make informed decisions about whether to conduct, and how to design, future studies. Historically, a computationally expensive method for computing the expected value of sample information (EVSI) restricted the use of VOI to simple decision models and study designs. Recently, four EVSI approximation methods have made such analyses more feasible and accessible. Members of the Collaborative Network for Value of Information (ConVOI) compared the inputs, the analyst's expertise and skills, and the software required for the four recently developed EVSI approximation methods. Our report provides practical guidance and recommendations to help inform the choice between the four efficient EVSI estimation methods. More specifically, this report provides: (1) a step-by-step guide to the methods' use, (2) the expertise and skills required to implement the methods, and (3) method recommendations based on the features of decision-analytic problems.

8.
Processes of health technology assessment (HTA) inform decisions under uncertainty about whether to invest in new technologies, based on evidence of incremental effects, incremental costs, and incremental net monetary benefit (INMB). An option value to delaying such decisions to wait for further evidence arises in the usual case of interest, in which the prior distribution of INMB is positive but uncertain. Methods for estimating the option value of delaying investment decisions have previously been developed for settings where investments are irreversible with an uncertain payoff over time and information is assumed fixed. In HTA, however, decision uncertainty relates to information (evidence) on the distribution of INMB. This article demonstrates that the option value of delaying decisions to allow collection of further evidence can be estimated as the expected value of sample information (EVSI). For irreversible decisions, delay and trial (DT) is shown to be preferred to adopt and no trial (AN) when the EVSI exceeds the expected costs of information, including the expected opportunity costs of not treating patients with the new therapy. For reversible decisions, adopt and trial (AT) becomes a potentially optimal strategy, but costs of reversal reduce the EVSI of this strategy through both a lower probability that reversal is optimal and lower payoffs when reversal is optimal. Hence, decision makers generally face joint research and reimbursement decisions (AN, DT, and AT), with the optimal choice dependent on the costs of reversal as well as the opportunity costs of delay and the prior distribution of INMB.
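The irreversible-case rule quoted above reduces to a one-line comparison; a toy encoding, with hypothetical argument names and figures:

```python
def delay_and_trial_preferred(evsi, cost_info, opportunity_cost_delay):
    """Rule stated above for irreversible decisions: delay and trial (DT) is
    preferred to adopt and no trial (AN) when the EVSI exceeds the expected
    costs of information, including the expected opportunity cost of not
    treating patients with the new therapy while the trial runs."""
    return evsi > cost_info + opportunity_cost_delay

print(delay_and_trial_preferred(evsi=2.5e6, cost_info=1.2e6,
                                opportunity_cost_delay=0.8e6))  # True
```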

9.
In this workshop we focus on Monte Carlo disease simulations and how they can be used to perform economic evaluations of health care interventions. Monte Carlo disease simulation is a modeling technique that operates at the patient level, explicitly estimating the effect of variability among patients in both underlying disease progression patterns and individual responsiveness to treatments. Typical outputs from these simulations are patient functional status, life years, quality-adjusted life years, and associated costs, all of which can be appropriately discounted. The output is presented as distributions, which can be used to estimate mean or median values and confidence intervals for the outcomes of interest. These results can be used to compute cost-effectiveness ratios and other drug value measures. Monte Carlo disease simulation also allows decision makers to address the risk associated with smaller populations that may not tend to the "average" results generated by Markov models or simulations of large populations. In this workshop, we describe how to create a Monte Carlo simulation model and how different types of uncertainty can be incorporated into it. We briefly compare and contrast Monte Carlo and Markov simulation techniques. Discussion topics are illustrated and motivated by an HIV/AIDS model of the effect of combination antiretroviral therapy on viral load and CD4 progression. This workshop should benefit outcomes researchers and health care decision makers who need to incorporate uncertainty about the natural history of a disease, and the impact of alternative disease management strategies for individual patients, into their drug value analyses.
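A compressed, entirely hypothetical patient-level simulation in the spirit described, using an invented CD4-driven mortality rule rather than any published HIV natural-history model, might look like this:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_cohort(n_patients, on_therapy):
    """Toy patient-level simulation; all rates and costs are invented."""
    qalys = np.zeros(n_patients)
    cost = np.zeros(n_patients)
    cd4 = rng.normal(350, 100, n_patients)    # heterogeneous baselines
    alive = np.ones(n_patients, bool)
    for year in range(20):
        drift = 40.0 if on_therapy else -60.0
        cd4[alive] += rng.normal(drift, 30, alive.sum())  # individual response
        p_death = 1 / (1 + np.exp((cd4 - 100) / 50))      # risk rises as CD4 falls
        alive &= rng.random(n_patients) > p_death
        disc = 1.03 ** -year                              # discounting
        qalys[alive] += 0.8 * disc
        cost[alive] += (12000 if on_therapy else 2000) * disc
    return qalys, cost

q1, c1 = simulate_cohort(5000, True)
q0, c0 = simulate_cohort(5000, False)
icer = (c1.mean() - c0.mean()) / (q1.mean() - q0.mean())
print(f"ICER ~ {icer:.0f} per QALY")
```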

10.
Probabilistic analysis of decision trees using symbolic algebra
Uncertainty in medical decision making arises in the specification of both decision tree probabilities and utilities. Using a computer-based algebraic approach, methods for modeling this uncertainty have been formulated. This analytic procedure allows an exact calculation of the statistical variance at the final decision node using automated symbolic manipulation. Confidence and conditional confidence levels for the preferred decision are derived from Gaussian theory, and a mutual information index that identifies probabilistically important tree variables is provided. The computer-based algebraic method is illustrated on a problem previously analyzed by Monte Carlo simulation. This methodology provides the decision analyst with a procedure to evaluate the consequences of specification uncertainty, in many decision problems, without resorting to Monte Carlo analysis.
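The paper's exact symbolic propagation is not reproduced here, but SymPy makes even a first-order (delta-method) symbolic version easy to sketch; the node structure and all numbers below are illustrative only:

```python
import sympy as sp

p, u1, u2 = sp.symbols('p u1 u2')
EU = p * u1 + (1 - p) * u2    # expected utility at a simple chance node

# First-order propagation of specification uncertainty through the node,
# assuming independent inputs; a sketch, not the paper's exact calculation.
means = {p: 0.3, u1: 0.6, u2: 1.0}
variances = {p: 0.01, u1: 0.0025, u2: 0.0025}
var_EU = sum(sp.diff(EU, x).subs(means)**2 * v for x, v in variances.items())
print(var_EU)   # ~ 0.00305
```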

11.
Propensity score methods are increasingly being used to estimate the effects of treatments on health outcomes using observational data. There are four methods for using the propensity score to estimate treatment effects: covariate adjustment using the propensity score, stratification on the propensity score, propensity-score matching, and inverse probability of treatment weighting (IPTW) using the propensity score. When outcomes are binary, the effect of treatment on the outcome can be described using odds ratios, relative risks, risk differences, or the number needed to treat. Several clinical commentators have suggested that risk differences and numbers needed to treat are more meaningful for clinical decision making than odds ratios or relative risks. However, there is a paucity of information about the relative performance of the different propensity-score methods for estimating risk differences. We conducted a series of Monte Carlo simulations to examine this issue, assessing bias, variance estimation, coverage of confidence intervals, mean squared error (MSE), and type I error rates. A doubly robust version of IPTW had superior performance compared with the other propensity-score methods: it resulted in unbiased estimation of risk differences, treatment effects with the lowest standard errors, confidence intervals with the correct coverage rates, and correct type I error rates. Stratification, matching on the propensity score, and covariate adjustment using the propensity score resulted in minor to modest bias in estimating risk differences. Estimators based on IPTW had lower MSE than the other propensity-score methods. Differences between IPTW and propensity-score matching may reflect that these two methods estimate the average treatment effect and the average treatment effect for the treated, respectively.
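As a sketch of the plain IPTW estimator of a risk difference, here is a simulation with one confounder in which the true propensity score is known by construction (in practice it would be fitted, e.g., by logistic regression); the doubly robust refinement the paper favors is not shown:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20000

# Simulated observational data with one confounder x (all values invented)
x = rng.normal(size=n)
p_treat = 1 / (1 + np.exp(-0.5 * x))
z = rng.random(n) < p_treat                           # treatment assignment
p_out = 1 / (1 + np.exp(-(-1.0 + 0.7 * z + 0.8 * x)))
y = (rng.random(n) < p_out).astype(float)             # binary outcome

ps = p_treat                                 # known here; fitted in practice
w = np.where(z, 1 / ps, 1 / (1 - ps))        # ATE weights

risk1 = np.average(y[z], weights=w[z])       # weighted risk if treated
risk0 = np.average(y[~z], weights=w[~z])     # weighted risk if untreated
print(f"IPTW risk difference ~ {risk1 - risk0:.3f}")
```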

12.
Over the last decade or so, there have been many developments in methods to handle uncertainty in cost-effectiveness studies. In decision modelling, it is widely accepted that there needs to be an assessment of how sensitive the decision is to uncertainty in parameter values. The rationale for probabilistic sensitivity analysis (PSA) is primarily based on the needs of decision makers in assessing the consequences of decision uncertainty. In this paper, we highlight some further compelling reasons for adopting probabilistic methods for decision modelling and sensitivity analysis, and specifically for simulating from a Bayesian posterior distribution. Our reasoning is as follows. Firstly, cost-effectiveness analyses need to be based on all the available evidence, not a selected subset, and the uncertainties in the data need to be propagated through the model in order to provide a correct analysis of the uncertainties in the decision. In many, perhaps most, cases the evidence structure requires a statistical analysis that inevitably induces correlations between parameters. Deterministic sensitivity analysis requires that models are run with parameters fixed at 'extreme' values, but where parameter correlation exists it is not possible to identify sets of parameter values that can be considered 'extreme' in a meaningful sense. A correct probabilistic analysis, however, can be readily achieved by Monte Carlo sampling from the joint posterior distribution of the parameters. We review some evidence structures commonly occurring in decision models, in which analyses that correctly reflect the uncertainty in the data induce correlations between parameters; frequently, this is because the evidence base includes information on functions of several parameters. It follows that, if health technology assessments are to be based on a correct analysis of all available data, then probabilistic methods must be used both for sensitivity analysis and for estimation of expected costs and benefits.
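The practical point about correlation can be made in a few lines: sample the parameters jointly rather than independently or at 'extreme' values. The joint distribution and the toy net benefit model below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(11)

# Approximate joint posterior for two parameters estimated from the same
# evidence source, hence correlated; all numbers are illustrative.
mean = np.array([0.12, 0.30])        # e.g. baseline risk, risk reduction
cov = np.array([[0.0004, 0.00035],
                [0.00035, 0.0016]])  # positive correlation (~0.44)
theta = rng.multivariate_normal(mean, cov, 10000)

# Toy incremental net benefit: a function of BOTH parameters, so fixing each
# at its own 'extreme' separately would misrepresent the joint uncertainty.
inb = 25000 * theta[:, 0] * theta[:, 1] - 400
print("P(cost-effective) =", (inb > 0).mean())
print("sampled correlation:", np.corrcoef(theta.T)[0, 1].round(2))
```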

13.
The financial analysis of a proposed capital investment typically involves estimating the project's expected cash flows and profitability and then perhaps looking at one or two alternative scenarios. However, this procedure provides incomplete information about a project's potential risk/return characteristics because it focuses on only a few possibilities, whereas real-world investments can have an almost unlimited number of financial outcomes. Monte Carlo simulation can solve this incomplete-information problem. In a Monte Carlo simulation, relatively certain input variables are specified by single values, while relatively uncertain variables are specified by probability distributions. The end result is a probability distribution that describes the project's full range of potential profitability. With a complete picture of a project's risk/return characteristics, decision makers can better judge the financial impact of the investment and hence make better capital investment decisions.
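A minimal sketch of the approach, with invented project figures: certain inputs enter as constants, uncertain ones as distributions, and the output is the full NPV distribution rather than a single estimate:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100000

# Hypothetical 10-year project: certain inputs as constants,
# uncertain inputs as probability distributions.
outlay = 800_000                                # known up-front cost
rev0 = 600_000                                  # known first-year revenue base
rate = 0.08                                     # known discount rate
growth = rng.normal(0.03, 0.02, n)              # uncertain revenue growth
margin = rng.triangular(0.10, 0.18, 0.25, n)    # uncertain operating margin

years = np.arange(1, 11)
cash = margin[:, None] * rev0 * (1 + growth[:, None]) ** years
npv = (cash / (1 + rate) ** years).sum(axis=1) - outlay
print(f"mean NPV {npv.mean():,.0f}; P(loss) = {(npv < 0).mean():.2f}")
```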

14.
In probabilistic economic analysis, the uncertainty concerning input parameters is quantified and determines the level of uncertainty over the optimal decision. Researchers from a wide range of disciplines employ mathematical models to simulate complex processes. Common to many such disciplines is the conduct of importance analysis to determine the input parameters that contribute most to the uncertainty over the optimal decision. In this study, we compare a range of potential importance measures to see how they relate to methods used in economic analysis. Techniques were classified as variance/correlation-, information-, probability-, entropy-, or elasticity-based measures. A selection of the most commonly used measures was applied to an economic model of treatment for patients with Parkinson's disease, and the techniques were evaluated in terms of their ranking of variables, complexity, and interpretation.
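As one example from the variance/correlation family of measures compared, a rank-correlation importance screen takes only a few lines; the toy model below is constructed so that a non-monotone input is underrated, exactly the kind of disagreement between measure families the paper examines:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
n = 5000

# Toy model output y from three uncertain inputs; the second acts through a
# squared (non-monotone) term and the third has no effect at all.
X = rng.normal(size=(n, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=n)

for j, name in enumerate(["p_progression", "cost_per_cycle", "utility_decrement"]):
    rho = stats.spearmanr(X[:, j], y)[0]
    print(f"{name}: |rank correlation| = {abs(rho):.2f}")
# The squared term is nearly invisible to rank correlation, which is why
# comparing several families of importance measures matters.
```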

15.
Quantifying and reporting uncertainty from systematic errors
Optimal use of epidemiologic findings in decision making requires more information than standard analyses provide. It requires calculating and reporting the total uncertainty in the results, which in turn requires methods for quantifying the uncertainty introduced by systematic error. Quantified uncertainty can improve policy and clinical decisions, better direct further research, and aid public understanding, and thus enhance the contributions of epidemiology. The error quantification approach proposed here is based on estimating a probability distribution for a bias-corrected effect measure using externally derived distributions of bias levels. Using Monte Carlo simulation, corrections for multiple biases are combined by identifying the steps through which true causal effects become data and then, in reverse order, correcting for the errors introduced by each step. The bias-correction calculations are the same as those used in sensitivity analysis, but the resulting distribution of possible true values is more than a sensitivity analysis; it is a more complete report of the actual study results. The approach is illustrated with an application to a recent study that resulted in the drug phenylpropanolamine being removed from the market.
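A heavily simplified sketch of the reverse-order correction idea, treating each systematic error as a multiplicative bias factor on the risk-ratio scale with an externally specified distribution (the record-level corrections of a full multiple-bias analysis are not reproduced, and all distributions are invented):

```python
import numpy as np

rng = np.random.default_rng(13)
n = 50000

rr_obs, se_log = 1.8, 0.15   # observed risk ratio and conventional SE (invented)

# Externally derived bias distributions: multiplicative factors on the
# risk-ratio scale, one per systematic error, both hypothetical.
bias_confounding = rng.triangular(1.00, 1.10, 1.30, n)  # inflates observed RR
bias_misclass = rng.triangular(0.70, 0.85, 1.00, n)     # attenuates observed RR

# Work backwards from data toward truth: random error first, then divide out
# the bias factors in reverse order of how the errors arose.
rr_with_random_error = np.exp(rng.normal(np.log(rr_obs), se_log, n))
rr_corrected = rr_with_random_error / (bias_confounding * bias_misclass)

print("corrected RR 2.5/50/97.5%:",
      np.percentile(rr_corrected, [2.5, 50, 97.5]).round(2))
```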

16.
Value in Health, 2013, 16(2): 438-448
Background. The expected value of partial perfect information (EVPPI) is a theoretically justifiable and informative measure of uncertainty in decision-analytic cost-effectiveness models, but its calculation is computationally intensive because it generally requires two-level Monte Carlo simulation. We introduce an efficient, one-level simulation method for the calculation of single-parameter EVPPI.
Objective. We show that under mild regularity assumptions, the expectation-maximization-expectation sequence in EVPPI calculation can be transformed into an expectation-maximization-maximization sequence. By doing so, calculations can be performed in a single-step expectation by using data generated for probabilistic sensitivity analysis. We prove that the proposed estimator of EVPPI converges in probability to the true EVPPI.
Methods and Results. The performance of the new method was empirically demonstrated by using three exemplary decision models. Our proposed method seems to achieve remarkably higher accuracy than the two-level method with a fraction of its computation costs, though the achievement in accuracy was not uniform and varied across the parameters of the models. Software is provided to calculate single-parameter EVPPI based on the probabilistic sensitivity analysis data.
Conclusions. The new method, though applicable only to single-parameter EVPPI, is fast, accurate, and easy to implement. Further research is needed to evaluate the performance of this method in more complex scenarios and to extend such a concept to similar measures of decision uncertainty.
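The paper's estimator is not reimplemented here; as a crude stand-in that likewise needs only one level of simulation, single-parameter EVPPI can be approximated from existing PSA output by binning on the parameter of interest:

```python
import numpy as np

def evppi_single(theta, nb, n_bins=50):
    """Crude one-level single-parameter EVPPI from existing PSA output: bin
    the draws on theta so E[NB | theta] can be approximated within each bin.
    A simple stand-in for, not a reimplementation of, the paper's
    expectation-maximization-maximization estimator.
    theta: (n,) PSA draws of the parameter of interest
    nb:    (n, n_options) net benefit draws from the same PSA run"""
    order = np.argsort(theta)
    nb_sorted = nb[order]
    bins = np.array_split(np.arange(len(theta)), n_bins)
    perfect = np.mean([nb_sorted[b].mean(axis=0).max() for b in bins])
    return perfect - nb.mean(axis=0).max()

# Toy check: net benefit depends on theta for option 1 only
rng = np.random.default_rng(8)
theta = rng.normal(size=20000)
nb = np.column_stack([np.zeros(20000),
                      500 * theta + rng.normal(0, 200, 20000)])
print(f"EVPPI ~ {evppi_single(theta, nb):.0f}")   # analytic value is ~200
```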

17.
Interest is growing in the application of standard statistical inferential techniques to the calculation of cost-effectiveness ratios (CERs), but individual-level data will not be available in many cases because it is very difficult to undertake prospective controlled trials of many public health interventions. We propose the application of probabilistic uncertainty analysis using Monte Carlo simulations, in combination with nonparametric bootstrapping techniques where appropriate. This paper also discusses how decision makers should interpret the CERs of interventions whose uncertainty intervals overlap. We show how incorporating uncertainty around the costs and effects of interventions into a stochastic league table provides additional information to decision makers for priority setting. Stochastic league tables inform decision makers about the probability that a specific intervention would be included in the optimal mix of interventions at different resource levels, given the uncertainty surrounding the interventions.
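A toy stochastic league table, using a greedy cost-effectiveness ordering as a stand-in for the full optimization and invented cost/effect distributions for three independent interventions:

```python
import numpy as np

rng = np.random.default_rng(21)
n = 2000

# Uncertain costs and health effects for three independent interventions
cost = rng.lognormal(np.log([100, 250, 400]), 0.2, (n, 3))
effect = rng.lognormal(np.log([10, 20, 25]), 0.3, (n, 3))

for budget in (300, 600, 900):
    included = np.zeros(3)
    for i in range(n):
        order = np.argsort(-(effect[i] / cost[i]))  # greedy by cost-effectiveness
        spent = 0.0
        for j in order:
            if spent + cost[i, j] <= budget:        # fund while budget allows
                included[j] += 1
                spent += cost[i, j]
    # Probability each intervention enters the optimal mix at this budget
    print(budget, (included / n).round(2))
```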

18.
Objective. To develop and validate a general method (called regression risk analysis) to estimate adjusted risk measures from logistic and other nonlinear multiple regression models. We show how to estimate standard errors for these estimates. These measures could supplant various approximations (e.g., adjusted odds ratio [AOR]) that may diverge, especially when outcomes are common.
Study Design. Regression risk analysis estimates were compared with internal standards as well as with Mantel–Haenszel estimates, Poisson and log-binomial regressions, and a widely used (but flawed) equation to calculate adjusted risk ratios (ARR) from AOR.
Data Collection. Data sets produced using Monte Carlo simulations.
Principal Findings. Regression risk analysis accurately estimates ARR and differences directly from multiple regression models, even when confounders are continuous, distributions are skewed, outcomes are common, and effect size is large. It is statistically sound and intuitive, and has properties favoring it over other methods in many cases.
Conclusions. Regression risk analysis should be the new standard for presenting findings from multiple regression analysis of dichotomous outcomes for cross-sectional, cohort, and population-based case–control studies, particularly when outcomes are common or effect size is large.
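Regression risk analysis itself is not reproduced here, but marginal standardization, one established way to obtain adjusted risk ratios and differences from a logistic model, conveys the core move of the Principal Findings; the standard errors the paper also provides are omitted, and all simulated quantities are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(17)
n = 10000

# Simulated cohort: binary exposure z, continuous confounder x, common outcome
x = rng.normal(size=n)
z = (rng.random(n) < 1 / (1 + np.exp(-0.6 * x))).astype(float)
y = (rng.random(n) < 1 / (1 + np.exp(-(-0.3 + 0.5 * z + 0.7 * x)))).astype(int)

m = LogisticRegression().fit(np.column_stack([z, x]), y)

# Marginal standardization: predict everyone as exposed, then as unexposed,
# and average the predicted risks over the observed confounder distribution.
p1 = m.predict_proba(np.column_stack([np.ones(n), x]))[:, 1].mean()
p0 = m.predict_proba(np.column_stack([np.zeros(n), x]))[:, 1].mean()
print(f"adjusted RR ~ {p1 / p0:.2f}, adjusted risk difference ~ {p1 - p0:.3f}")
```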

19.
It is well known that the presence or absence of effect-measure modification depends upon the chosen measure. What is perhaps more disconcerting is that a positive change in one measure may be accompanied by a negative change in another. Therefore, research demonstrating that an effect is 'stronger' in one population when compared with another, but based on only one measure, for example, the odds ratio, may be difficult to interpret for researchers interested in another measure. The present article investigates relationships among changes in the relative risk, odds ratio, and risk difference from one stratum to another. Monte Carlo integration shows that the three measures change in the same direction for 78 or 89 per cent of the volume of the geometric space defined by the four underlying proportions, depending on whether the strata are presumed to share the same direction of effect or not. Analytic results are presented concerning necessary and sufficient conditions for the measures to change in opposite directions. In general, the conditions are seen to be quite complicated, though they do give way to some interesting results. For example, when exposure increases risk but all risks are less than 0.5, it is impossible for the relative risk and risk difference to change in the same direction but opposite to that of the odds ratio. Both data-analytic and hypothetical examples are presented to demonstrate circumstances under which the measures change in opposite directions.
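A worked numeric example (invented strata, not the paper's) makes the phenomenon concrete: moving from stratum A to B, the relative risk and odds ratio both fall while the risk difference rises, a combination that is permitted even though exposure increases risk and all risks are below 0.5:

```python
def measures(r1, r0):
    """Relative risk, odds ratio, and risk difference for risks r1 (exposed)
    and r0 (unexposed)."""
    rr = r1 / r0
    odds_ratio = (r1 / (1 - r1)) / (r0 / (1 - r0))
    rd = r1 - r0
    return rr, odds_ratio, rd

# Stratum A: RR=5.00, OR~5.44, RD=0.08; stratum B: RR=3.00, OR~4.64, RD=0.30
for name, (r1, r0) in {"A": (0.10, 0.02), "B": (0.45, 0.15)}.items():
    rr, orr, rd = measures(r1, r0)
    print(f"stratum {name}: RR={rr:.2f} OR={orr:.2f} RD={rd:.2f}")
```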

20.
Decisions in health care must be made despite uncertainty about benefits, risks, and costs. Value of information analysis is a theoretically sound method to estimate the expected value of future quantitative research pertaining to an uncertain decision. If the expected value of future research does not exceed the cost of that research, additional research is not justified, and decisions should be based on current evidence despite the uncertainty. To assess the importance of individual parameters relevant to a decision, different value of information methods have been suggested. The generally recommended method estimates the expected value of perfect knowledge concerning a parameter as the reduction in expected opportunity loss; this method, however, results in biased expected values and an incorrect importance ranking of parameters. The objective of this paper is to set out the correct methods to estimate the partial expected value of perfect information and to demonstrate why the generally recommended method is incorrect, both conceptually and mathematically.
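For reference, the correct two-level quantity the paper argues for is the standard partial EVPI, sketched here with phi the parameters of interest, psi the remaining parameters, and d the decision:

```latex
\mathrm{EVPPI}(\phi)
  = \mathbb{E}_{\phi}\!\left[ \max_{d}\; \mathbb{E}_{\psi \mid \phi}\,
      \mathrm{NB}(d, \phi, \psi) \right]
  - \max_{d}\; \mathbb{E}_{\phi, \psi}\, \mathrm{NB}(d, \phi, \psi)
```

The criticized shortcut instead estimates the reduction in expected opportunity loss, which does not equal this expression in general.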
