Similar Articles
20 similar articles found (search time: 15 ms)
1.
Probabilistic analysis of decision trees using Monte Carlo simulation
The authors describe methods for modeling uncertainty in the specification of decision tree probabilities and utilities using Monte Carlo simulation techniques. Exact confidence levels based upon the underlying probabilistic structure are provided. Probabilistic measures of sensitivity are derived in terms of classical information theory. These measures identify which variables are probabilistically important components of the decision. These techniques are illustrated in terms of the clinical problem of anticoagulation versus observation in the setting of deep vein thrombosis during the first trimester of pregnancy. These methods provide the decision analyst with powerful yet simple tools which give quantitative insight into the structure and inherent limitations of decision models arising from specification uncertainty. The techniques may be applied to complex decision models.
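The core of such an analysis can be sketched in a few lines. Below is a minimal illustration in Python (standard library only); the "treat" vs. "observe" framing, the distributions, and all utility values are hypothetical placeholders, not figures from the paper. Drawing the tree's probabilities and utilities repeatedly and recording how often one strategy wins yields a Monte Carlo confidence level for the decision.

```python
import random

random.seed(1)

def expected_utility(p_event, u_event, u_no_event):
    # Utility of a two-branch chance node, weighted by event probability.
    return p_event * u_event + (1 - p_event) * u_no_event

# Hypothetical "treat" vs. "observe" comparison with uncertain inputs;
# the distributions below are illustrative, not taken from the article.
n_sims = 10_000
treat_wins = 0
for _ in range(n_sims):
    p_complication = random.betavariate(2, 18)   # harm from treatment, ~0.10
    p_event = random.betavariate(5, 15)          # event risk if untreated, ~0.25
    u_complication = random.uniform(0.7, 0.9)
    u_event = random.uniform(0.3, 0.6)
    eu_treat = expected_utility(p_complication, u_complication, 1.0)
    eu_observe = expected_utility(p_event, u_event, 1.0)
    if eu_treat > eu_observe:
        treat_wins += 1

# Monte Carlo "confidence level" that treatment is the optimal strategy.
p_treat_optimal = treat_wins / n_sims
print(f"P(treat is optimal) = {p_treat_optimal:.3f}")
```

The fraction of draws in which a strategy has the higher expected utility is exactly the kind of exact confidence statement the abstract refers to.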

2.
This article demonstrates the use of the Monte Carlo simulation method in physician practice valuation. The Monte Carlo method allows the valuator to incorporate probability ranges into the discounted cash flow model and obtain an output indicating the probability for specified ranges of practice valuation. Given the high level of uncertainty in the cash flows projected for physician practices, this kind of information is clearly superior to any single point estimate generated by a traditional discounted cash flow model. It is postulated that virtually all hospitals support an information system that can easily accommodate a Monte Carlo simulation.
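A sketch of the idea, under entirely hypothetical assumptions (base cash flow, growth and discount-rate distributions, and five-year horizon are illustrative): instead of a single discounted-cash-flow point estimate, sampling the uncertain inputs produces a distribution of practice values.

```python
import random
import statistics

random.seed(7)

def dcf_value(cash_flows, discount_rate):
    # Present value of a stream of projected cash flows.
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# Hypothetical practice: base-year cash flow with uncertain growth and
# discount rate (all numbers illustrative, not from the article).
values = []
for _ in range(5_000):
    growth = random.gauss(0.03, 0.02)
    rate = random.uniform(0.10, 0.14)
    cfs = [250_000 * (1 + growth) ** t for t in range(1, 6)]
    values.append(dcf_value(cfs, rate))

values.sort()
median = statistics.median(values)
p5, p95 = values[len(values) // 20], values[-(len(values) // 20)]
print(f"median value ~ {median:,.0f}, 90% interval ~ ({p5:,.0f}, {p95:,.0f})")
```

The output interval is what a traditional single-scenario DCF cannot provide.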

3.
Interest is growing in the application of standard statistical inferential techniques to the calculation of cost-effectiveness ratios (CER), but individual level data will not be available in many cases because it is very difficult to undertake prospective controlled trials of many public health interventions. We propose the application of probabilistic uncertainty analysis using Monte Carlo simulations, in combination with nonparametric bootstrapping techniques where appropriate. This paper also discusses how decision makers should interpret the CER of interventions where uncertainty intervals overlap. We show how the incorporation of uncertainty around costs and effects of interventions into a stochastic league table provides additional information to decision makers for priority setting. Stochastic league tables inform decision makers about the probability that a specific intervention would be included in the optimal mix of interventions for different resource levels, given the uncertainty surrounding the interventions.
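The stochastic league table idea can be illustrated with a toy two-intervention example (costs, effects, the budget, and the greedy selection rule are all simplifying assumptions for illustration): each Monte Carlo draw produces an "optimal mix" for the budget, and tallying inclusions gives the probability that each intervention belongs in the mix.

```python
import random

random.seed(3)

BUDGET = 1_000  # hypothetical resource level

# Hypothetical interventions: (mean_cost, cost_sd, mean_effect, effect_sd)
interventions = {
    "A": (400, 80, 10, 2),
    "B": (700, 120, 14, 3),
}

included = {name: 0 for name in interventions}
n_sims = 10_000
for _ in range(n_sims):
    draws = {}
    for name, (mc, sc, me, se) in interventions.items():
        cost = max(1.0, random.gauss(mc, sc))
        effect = max(0.1, random.gauss(me, se))
        draws[name] = (cost, effect)
    # Simplified "optimal mix": fund interventions in order of cost per
    # unit of effect until the budget is exhausted.
    remaining = BUDGET
    for name in sorted(draws, key=lambda n: draws[n][0] / draws[n][1]):
        cost, _ = draws[name]
        if cost <= remaining:
            included[name] += 1
            remaining -= cost

for name in interventions:
    print(name, "P(included) =", included[name] / n_sims)
```

Repeating the tally across a grid of budget levels yields the full league table described in the abstract.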

4.
There has been increasing interest in using expected value of information (EVI) theory in medical decision making, both to identify the need for further research to reduce uncertainty in decisions and as a tool for sensitivity analysis. Expected value of sample information (EVSI) has been proposed for determining optimum sample size and allocation rates in randomized clinical trials. This article derives simple Monte Carlo, or nested Monte Carlo, methods that extend the use of EVSI calculations to medical decision applications with multiple sources of uncertainty, with particular attention to the form in which epidemiological data and research findings are structured. In particular, information on key decision parameters such as treatment efficacy is invariably available as measures of relative efficacy such as risk differences or odds ratios, but not on the model parameters themselves. In addition, estimates of model parameters and of relative effect measures in the literature may be heterogeneous, reflecting additional sources of variation besides statistical sampling error. The authors describe Monte Carlo procedures for calculating EVSI for probability, rate, or continuous-variable parameters in multiparameter decision models, and approximate methods for relative measures such as risk differences, odds ratios, risk ratios, and hazard ratios. Where prior evidence is based on a random effects meta-analysis, the authors describe different EVSI calculations, one relevant for decisions concerning a specific patient group and the other for decisions concerning the entire population of patient groups. They also consider EVSI methods for new studies intended to update information on both baseline treatment efficacy and the relative efficacy of 2 treatments. Although there are restrictions regarding models with prior correlation between parameters, these methods can be applied to the majority of probabilistic decision models.
Illustrative worked examples of EVSI calculations are given in an appendix.
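A minimal EVSI calculation in the same spirit, for a single beta-distributed response probability (prior, standard-care rate, and proposed sample size are hypothetical): the outer loop simulates datasets the new study might produce; the inner expectation is analytic here because the beta-binomial model is conjugate, which is exactly the kind of shortcut that nested Monte Carlo approximates in less tractable models.

```python
import random

random.seed(11)

# Hypothetical two-arm choice: a new treatment with uncertain response
# rate p ~ Beta(6, 6) vs. standard care with known response rate 0.5.
A, B = 6, 6
P_STD = 0.5
N_NEW = 40   # sample size of the proposed study

# Expected value of the best decision with current information only.
prior_mean = A / (A + B)
ev_current = max(prior_mean, P_STD)

# Outer loop: simulate datasets the study might produce.  The inner
# expectation is analytic (conjugate beta-binomial posterior mean).
outer = 5_000
total = 0.0
for _ in range(outer):
    p_true = random.betavariate(A, B)
    successes = sum(random.random() < p_true for _ in range(N_NEW))
    post_mean = (A + successes) / (A + B + N_NEW)
    total += max(post_mean, P_STD)   # optimal choice after seeing the data
ev_with_sample = total / outer

evsi = ev_with_sample - ev_current
print(f"EVSI ~ {evsi:.4f}")
```

EVSI is always non-negative in expectation: new data can only improve the expected quality of the decision.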

5.
In clinical decision making, it is common to ask whether, and how much, a diagnostic procedure is contributing to subsequent treatment decisions. Statistically, quantification of the value of the information provided by a diagnostic procedure can be carried out using decision trees with multiple decision points, representing both the diagnostic test and the subsequent treatments that may depend on the test's results. This article investigates probabilistic sensitivity analysis approaches for exploring and communicating parameter uncertainty in such decision trees. Complexities arise because uncertainty about a model's inputs determines uncertainty about optimal decisions at all decision nodes of a tree. We present the expected utility solution strategy for multistage decision problems in the presence of uncertainty on input parameters, propose a set of graphical displays and summarization tools for probabilistic sensitivity analysis in multistage decision trees, and provide an application to axillary lymph node dissection in breast cancer.
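The expected-utility solution of a multistage tree is backward induction: solve the downstream treatment choice at each test result, then fold back to the test node. A toy sketch (disease prevalence, test characteristics, and the utility table are hypothetical), wrapped in a probabilistic sensitivity loop over the test's sensitivity and specificity:

```python
import random

random.seed(5)

def solve_two_stage(p_disease, sens, spec, u):
    # Backward induction: at each test result, pick the better treatment,
    # then average over the test-result chance node.
    p_pos = p_disease * sens + (1 - p_disease) * (1 - spec)
    p_d_given_pos = p_disease * sens / p_pos
    p_d_given_neg = p_disease * (1 - sens) / (1 - p_pos)

    def best_treatment(p_d):
        eu_treat = p_d * u["treat_sick"] + (1 - p_d) * u["treat_well"]
        eu_wait = p_d * u["wait_sick"] + (1 - p_d) * u["wait_well"]
        return max(eu_treat, eu_wait)

    return (p_pos * best_treatment(p_d_given_pos)
            + (1 - p_pos) * best_treatment(p_d_given_neg))

# Hypothetical utilities for the four treatment/disease combinations.
u = {"treat_sick": 0.9, "treat_well": 0.95, "wait_sick": 0.4, "wait_well": 1.0}

# Probabilistic sensitivity analysis over the test characteristics.
eus = [solve_two_stage(0.2, random.betavariate(18, 2),
                       random.betavariate(18, 2), u)
       for _ in range(2_000)]
print(f"expected utility range: ({min(eus):.3f}, {max(eus):.3f})")
```

Because the inner `max` is re-evaluated on every draw, the optimal downstream treatment can flip between draws, which is the complexity the abstract highlights.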

6.
The cost‐effectiveness acceptability curve (CEAC) shows the probability that an option ranks first for net benefit. Where more than two options are under consideration, the CEAC offers only a partial picture of the decision uncertainty. This paper discusses the appropriateness of showing the full set of rank probabilities for reporting the results of economic evaluation in multiple technology appraisal (MTA). A case study is used to illustrate the calculation of rank probabilities and associated metrics, based on Monte Carlo simulations from a decision model. Rank probabilities are often used to show uncertainty in the results of network meta‐analysis, but until now have not been used for economic evaluation. They may be useful decision‐making tools to complement the CEAC in specific MTA contexts.
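Rank probabilities fall straight out of the same simulations that produce a CEAC. A sketch with three hypothetical options (net-benefit distributions are illustrative): for each draw, sort the options by net benefit and count how often each lands in each rank.

```python
import random

random.seed(2)

# Hypothetical net-benefit distributions for three options.
options = ["A", "B", "C"]
n_sims = 10_000
rank_counts = {opt: [0, 0, 0] for opt in options}  # counts of rank 1, 2, 3

for _ in range(n_sims):
    nb = {"A": random.gauss(100, 30),
          "B": random.gauss(110, 40),
          "C": random.gauss(90, 20)}
    ordered = sorted(options, key=lambda o: nb[o], reverse=True)
    for rank, opt in enumerate(ordered):
        rank_counts[opt][rank] += 1

rank_probs = {opt: [c / n_sims for c in counts]
              for opt, counts in rank_counts.items()}
# rank_probs[opt][0] is the CEAC value: P(opt ranks first for net benefit);
# the remaining entries are the extra information the CEAC discards.
for opt in options:
    print(opt, [round(p, 3) for p in rank_probs[opt]])
```

Each option's rank probabilities sum to one, and the first-rank probabilities across options sum to one, which is a useful consistency check on any implementation.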

7.
In this workshop we will focus on Monte Carlo disease simulations and how they can be used to perform economic evaluations of health care interventions. Monte Carlo disease simulation is a modeling technique that operates on a patient level basis, explicitly estimating the effect of variability among patients in both underlying disease progression patterns and in individual responsiveness to treatments. Typical outputs from these simulations are patient functional status, life years, quality-adjusted life years, and associated costs, all of which can be appropriately discounted. The output information is presented in the form of distributions which can be used to estimate mean or median values and confidence intervals for the outcomes of interest. These results can be used to compute cost-effectiveness ratios and other drug value measures. Monte Carlo disease simulation also allows decision makers to address the question of risk associated with smaller populations that may not tend to the "average" results generated by Markov models or simulations of large populations. In this workshop, we describe how to create a Monte Carlo simulation model and how different types of uncertainty can be incorporated into the model. We will briefly compare and contrast Monte Carlo and Markov simulation techniques. Discussion topics will be illustrated and motivated by an HIV/AIDS model of the effect of combination antiretroviral therapy on viral load and CD4 progression. This workshop should be beneficial to outcomes researchers and health care decision makers who need to incorporate uncertainty about the natural history of a disease and the impact of alternative disease management strategies for individual patients into their drug value analyses.
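A patient-level (microsimulation) model of this kind can be sketched very simply. All rates, utilities, and costs below are invented for illustration; the point is the structure: each patient is walked through yearly cycles of progression and death, accruing discounted QALYs and costs, and the per-patient distribution is the output.

```python
import random

random.seed(9)

def simulate_patient(on_therapy):
    # Toy disease progression: each yearly cycle the patient may progress
    # or die; therapy lowers the progression hazard.  Rates illustrative.
    qaly, alive, progressed = 0.0, True, False
    p_progress = 0.08 if on_therapy else 0.15
    for year in range(30):
        if not alive:
            break
        # Quality weight depends on health state, discounted at 3%/year.
        qaly += (0.6 if progressed else 0.9) / (1.03 ** year)
        if not progressed and random.random() < p_progress:
            progressed = True
        if random.random() < (0.10 if progressed else 0.02):
            alive = False
    return qaly

qalys = sorted(simulate_patient(True) for _ in range(2_000))
mean_qaly = sum(qalys) / len(qalys)
ci = (qalys[50], qalys[-50])  # ~95% range of the patient-level distribution
print(f"mean QALYs ~ {mean_qaly:.2f}, 95% range ~ ({ci[0]:.2f}, {ci[1]:.2f})")
```

Unlike a cohort Markov model, the patient-level spread here directly answers the "risk in small populations" question raised in the abstract.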

8.
Partial expected value of perfect information (EVPI) calculations can quantify the value of learning about particular subsets of uncertain parameters in decision models. Published case studies have used different computational approaches. This article examines the computation of partial EVPI estimates via Monte Carlo sampling algorithms. The mathematical definition shows 2 nested expectations, which must be evaluated separately because of the need to compute a maximum between them. A generalized Monte Carlo sampling algorithm uses nested simulation with an outer loop to sample parameters of interest and, conditional upon these, an inner loop to sample remaining uncertain parameters. Alternative computation methods and shortcut algorithms are discussed and mathematical conditions for their use considered. Maxima of Monte Carlo estimates of expectations are biased upward, and the authors show that the use of small samples results in biased EVPI estimates. Three case studies illustrate 1) the bias due to maximization and the inaccuracy of shortcut algorithms, 2) the case in which correlated variables are present, and 3) the case in which there is nonlinearity in net benefit functions. If even relatively small correlation or nonlinearity is present, the shortcut algorithm can be substantially inaccurate. Empirical investigation of the numbers of Monte Carlo samples suggests that fewer samples on the outer level and more on the inner level could be efficient and that relatively small numbers of samples can sometimes be used. Several remaining areas for methodological development are set out. A wider application of partial EVPI is recommended both for greater understanding of decision uncertainty and for analyzing research priorities.
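The generalized two-level algorithm can be written directly from its definition. The net-benefit model below is a toy with invented coefficients (and is linear in the nuisance parameter, a case where the abstract notes shortcuts are valid; nonlinear models need larger inner samples): the outer loop samples the parameter of interest, the inner loop averages over everything else, and the maximum is taken between the two expectations.

```python
import random

random.seed(4)

def net_benefit(treat, p_effect, cost_scale):
    # Toy linear net-benefit model with two uncertain parameters
    # (coefficients are illustrative, not from any published model).
    if treat:
        return 10_000 + 6_000 * p_effect - 2_000 * cost_scale
    return 12_000 - 1_000 * cost_scale

def sample_p():
    return random.betavariate(5, 5)

def sample_cost_scale():
    return random.gauss(1.0, 0.3)

# Expected net benefit of deciding now, under full uncertainty.
n0 = 20_000
sums = [0.0, 0.0]
for _ in range(n0):
    p, cs = sample_p(), sample_cost_scale()
    sums[0] += net_benefit(False, p, cs)
    sums[1] += net_benefit(True, p, cs)
env_current = max(sums) / n0

# Partial EVPI for p_effect: outer loop over the parameter of interest,
# inner loop averaging over the remaining parameter, then maximize.
outer, inner = 500, 500
total = 0.0
for _ in range(outer):
    p = sample_p()
    inner_sums = [0.0, 0.0]
    for _ in range(inner):
        cs = sample_cost_scale()
        inner_sums[0] += net_benefit(False, p, cs)
        inner_sums[1] += net_benefit(True, p, cs)
    total += max(inner_sums) / inner
partial_evpi = total / outer - env_current

print(f"partial EVPI for p_effect ~ {partial_evpi:,.0f}")
```

Shrinking `inner` to a handful of samples makes the upward bias of the maximization step visible, which is the central warning of the article.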

9.
We describe a novel process for transforming the efficiency of partial expected value of sample information (EVSI) computation in decision models. Traditional EVSI computation begins with Monte Carlo sampling to produce new simulated data-sets with a specified sample size. Each data-set is synthesised with prior information to give posterior distributions for model parameters, either via analytic formulae or a further Markov Chain Monte Carlo (MCMC) simulation. A further 'inner level' Monte Carlo sampling then quantifies the effect of the simulated data on the decision. This paper describes a novel form of Bayesian Laplace approximation, which can replace both the Bayesian updating and the inner Monte Carlo sampling to compute the posterior expectation of a function. We compare the accuracy of EVSI estimates in two case study cost-effectiveness models using 1st and 2nd order versions of our approximation formula, the approximation of Tierney and Kadane, and traditional Monte Carlo. Computational efficiency gains depend on the complexity of the net benefit functions, the number of inner level Monte Carlo samples used, and the requirement or otherwise for MCMC methods to produce the posterior distributions. This methodology provides a new and valuable approach for EVSI computation in health economic decision models and potential wider benefits in many fields requiring Bayesian approximation.
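To see why a mode-based approximation can stand in for inner-level sampling, consider the simplest possible case (a conjugate beta posterior, with hypothetical data; this illustrates the general idea of approximating a posterior expectation at the posterior mode, not the authors' specific formula): the first-order approximation evaluates the quantity of interest at the posterior mode, and for well-behaved posteriors this is close to the exact expectation at negligible cost.

```python
import random

random.seed(6)

# Conjugate example: Beta(3, 7) prior, then 18 successes in 40 trials.
a, b, s, n = 3, 7, 18, 40
post_a, post_b = a + s, b + n - s

# First-order mode-based approximation of E[p | data]: evaluate at the
# posterior mode (a cheap stand-in for inner-level Monte Carlo sampling).
laplace_est = (post_a - 1) / (post_a + post_b - 2)

# Reference values: exact conjugate posterior mean, and Monte Carlo.
exact = post_a / (post_a + post_b)
mc = sum(random.betavariate(post_a, post_b) for _ in range(50_000)) / 50_000

print(f"mode-based ~ {laplace_est:.4f}, exact = {exact:.4f}, MC ~ {mc:.4f}")
```

In non-conjugate models the mode must be found numerically and a second-order curvature correction (as in Tierney and Kadane's method) tightens the approximation, but the trade-off is the same: one optimization replaces thousands of inner samples.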

10.
Four screening strategies (no testing, HC Abbott, HC Pasteur, and a combined test) for the detection of hepatitis C virus (HCV) antibody in donated blood were considered in a formal decision tree. Decision criteria included residual risk of infection and overall monetary cost. Tree parameters were determined using data from the Central African Republic. The prevalences observed among blood donors for HIV infection, hepatitis B, syphilis, and hepatitis C varied between 6% and 15%. The current residual risk of transfusion-transmitted infections is very high (8.4%). Screening for HCV would bring that risk down to about 3% with either the HC Pasteur, the HC Abbott, or the combined test. Even though baseline analysis gives preference to the HC Abbott test (the combined test coming out last), Monte Carlo sensitivity and uncertainty analyses showed that Abbott's and Pasteur's tests are interchangeable, on the basis of either risk or cost considerations.
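The structure of such an analysis (not the study's actual parameters: sensitivities, costs, prevalence distribution, and the risk-cost trade-off weight below are all invented) is to sample the uncertain inputs and count how often each strategy comes out best, which is how "interchangeable" strategies reveal themselves:

```python
import random

random.seed(8)

# Hypothetical screening strategies: (sensitivity, per-donation cost).
strategies = {"no_test": (0.0, 0.0), "test_A": (0.97, 4.0), "test_B": (0.95, 2.5)}
WTP_PER_INFECTION_AVOIDED = 200  # hypothetical trade-off weight

wins = {name: 0 for name in strategies}
n_sims = 5_000
for _ in range(n_sims):
    prev = random.betavariate(10, 90)  # uncertain prevalence, ~10%
    best_name, best_score = None, None
    for name, (sens_mean, cost) in strategies.items():
        sens = min(1.0, random.gauss(sens_mean, 0.01)) if sens_mean > 0 else 0.0
        residual_risk = prev * (1 - sens)   # infected donations missed
        score = -(WTP_PER_INFECTION_AVOIDED * residual_risk + cost)
        if best_score is None or score > best_score:
            best_name, best_score = name, score
    wins[best_name] += 1

for name in strategies:
    print(name, "P(best) =", wins[name] / n_sims)
```

Two tests whose win probabilities are close under this kind of uncertainty analysis are interchangeable in the sense the abstract uses.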

11.
OBJECTIVE: This report updates previous clinical decision analysis for patients with unruptured intracranial aneurysm (UN-AN) based on newly published data and discusses the role of reanalysis in individual decision making. METHODS: The authors employed probabilities for the natural history of UN-AN and results of preventive surgery based on the report by the International Study of Unruptured Intracranial Aneurysms. Probabilistic sensitivity analysis with Monte Carlo simulation and traditional n-way sensitivity analyses were used to assess the uncertainty of clinical decisions. RESULTS: The baseline decision in favor of preventive surgery is reversed by new data from the international study. Probabilistic sensitivity analyses revealed several populations showing heterogeneity in terms of strategy selection. One- and two-way sensitivity analyses detected two important factors for decision making: annual rupture rate and utility for knowingly living with UN-AN. CONCLUSIONS: Annual UN-AN rupture rate and the utility for knowingly living with UN-AN are key factors when deciding on a therapeutic strategy. Also, updating published decision analyses can improve clinical decision making by integrating clinical judgment and newly available clinical data.

12.
First-order analytical sensitivity and uncertainty analysis for environmental chemical fate models is described and applied to a regional contaminant fate model and a food web bioaccumulation model. By assuming linear relationships between inputs and outputs, independence, and log-normal distributions of input variables, a relationship between uncertainty in input parameters and uncertainty in output parameters can be derived, yielding results that are consistent with a Monte Carlo analysis with similar input assumptions. A graphical technique is devised for interpreting and communicating uncertainty propagation as a function of variance in input parameters and model sensitivity. The suggested approach is less computationally intensive than Monte Carlo analysis and is appropriate for preliminary assessment of uncertainty when models are applied to generic environments or to large geographic areas or when detailed parameterization of input uncertainties is unwarranted or impossible. This approach is particularly useful as a starting point for identification of sensitive model inputs at the early stages of applying a generic contaminant fate model to a specific environmental scenario, as a tool to support refinements of the model and the uncertainty analysis for site-specific scenarios, or for examining defined end points. The analysis identifies those input parameters that contribute significantly to uncertainty in outputs, enabling attention to be focused on defining median values and more appropriate distributions to describe these variables.
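For a purely multiplicative model with independent log-normal inputs, the analytical propagation is exact: log-variances add. The sketch below (a hypothetical output = emission × factor / loss_rate, with invented medians and geometric standard deviations) compares the analytic output spread with a Monte Carlo check under the same assumptions:

```python
import math
import random

random.seed(12)

# Multiplicative fate-model sketch with independent log-normal inputs:
# name -> (median, geometric standard deviation); values illustrative.
inputs = {
    "emission": (100.0, 1.5),
    "factor": (0.3, 1.8),
    "loss_rate": (0.05, 1.4),
}

# First-order (analytical) propagation: for a product/quotient of
# log-normal variables, the variances of the logs simply add.
var_ln = sum(math.log(gsd) ** 2 for _, gsd in inputs.values())
gsd_out_analytic = math.exp(math.sqrt(var_ln))

# Monte Carlo check with the same distributional assumptions.
samples = []
for _ in range(20_000):
    e = random.lognormvariate(math.log(100.0), math.log(1.5))
    f = random.lognormvariate(math.log(0.3), math.log(1.8))
    k = random.lognormvariate(math.log(0.05), math.log(1.4))
    samples.append(math.log(e * f / k))
mean_ln = sum(samples) / len(samples)
sd_ln = math.sqrt(sum((x - mean_ln) ** 2 for x in samples)
                  / (len(samples) - 1))
gsd_out_mc = math.exp(sd_ln)

print(f"analytic GSD ~ {gsd_out_analytic:.3f}, Monte Carlo GSD ~ {gsd_out_mc:.3f}")
```

The per-input terms `log(gsd)**2 / var_ln` also give each input's fractional contribution to output variance, which is the sensitivity ranking the abstract describes.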

13.
Advances in marker technology have made a dense marker map a reality. If each marker is considered separately, and separate tests for association with a disease gene are performed, then multiple testing becomes an issue. A common solution uses a Bonferroni correction to account for multiple tests performed. However, with dense marker maps, neighboring markers are tightly linked and may have associated alleles; thus tests at nearby marker loci may not be independent. When alleles at different marker loci are associated, the Bonferroni correction may lead to a conservative test, and hence a power loss. As an alternative, for tests of association that use family data, we propose a Monte Carlo procedure that provides a global assessment of significance. We examine the case of tightly linked markers with varying amounts of association between them. Using computer simulations, we study a family-based test for association (the transmission/disequilibrium test), and compare its power when either the Bonferroni or Monte Carlo procedure is used to determine significance. Our results show that when the alleles at different marker loci are not associated, using either procedure results in tests with similar power. However, when alleles at linked markers are associated, the test using the Monte Carlo procedure is more powerful than the test using the Bonferroni procedure. This proposed Monte Carlo procedure can be applied whenever it is suspected that markers examined have high amounts of association, or as a general approach to ensure appropriate significance levels and optimal power.  
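The gain over Bonferroni is easy to demonstrate with a generic max-statistic Monte Carlo test (this is a simplified equicorrelated-normal stand-in for correlated marker tests, not the family-based TDT procedure itself; the correlation, marker count, and observed statistic are invented): simulate the null distribution of the maximum statistic across the correlated tests and read off a global p-value.

```python
import math
import random

random.seed(13)

M = 10      # number of tightly linked markers
RHO = 0.9   # correlation between test statistics (illustrative)

def correlated_z():
    # Equicorrelated normal test statistics via a shared component.
    shared = random.gauss(0, 1)
    lam = math.sqrt(RHO)
    return [lam * shared + math.sqrt(1 - RHO) * random.gauss(0, 1)
            for _ in range(M)]

# Monte Carlo null distribution of the maximum |z| across markers.
n_null = 20_000
max_null = [max(abs(z) for z in correlated_z()) for _ in range(n_null)]

observed_max = 2.8  # hypothetical largest |z| in the observed data
rank = sum(1 for m in max_null if m >= observed_max)
p_monte_carlo = (rank + 1) / (n_null + 1)

# Bonferroni treats the M tests as independent and is conservative here.
p_single = 2 * (1 - 0.5 * (1 + math.erf(observed_max / math.sqrt(2))))
p_bonferroni = min(1.0, M * p_single)

print(f"Monte Carlo p ~ {p_monte_carlo:.4f}, Bonferroni p ~ {p_bonferroni:.4f}")
```

With strongly correlated statistics, the Monte Carlo global p-value is markedly smaller than the Bonferroni bound, which is the power advantage the abstract reports; with independent statistics the two agree.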

14.
Decision analysis is an appealing methodology with which to provide decision support to the practicing physician. However, its use in the clinical setting is impeded because computer-based explanations of decision-theoretic advice are difficult to generate without resorting to mathematical arguments. Nevertheless, human decision analysts generate useful and intuitive explanations based on decision trees. To facilitate the use of decision theory in a computer-based decision support system, the authors developed a computer program that uses symbolic reasoning techniques to generate nonquantitative explanations of the results of decision analyses. A combined approach has been implemented to explain the differences in expected utility among branches of a decision tree. First, the mathematical relationships inherent in the structure of the tree are used to find any asymmetries in tree structure or inequalities among analogous decision variables that are responsible for a difference in expected utility. Next, an explanation technique is selected and applied to the most significant variables, creating a symbolic expression that justifies the decision. Finally, the symbolic expression is converted to English-language text, thereby generating an explanation that justifies the desirability of the choice with the greater expected utility. The explanation does not refer to mathematical formulas, nor does it include probability or utility values.  The results suggest that explanations produced by a combination of decision analysis and symbolic processing techniques may be more persuasive and acceptable to clinicians than those produced by either technique alone.
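A toy re-implementation of the idea, not the authors' program (the option names, variable labels, and template phrases are invented): compare analogous decision variables across two branches, keep the inequalities that favor the higher-utility branch, and render them as number-free English.

```python
def explain_choice(name_a, vars_a, name_b, vars_b):
    # Justify the branch with higher expected utility by citing the
    # analogous variables on which it dominates, quoting no numbers.
    eu = lambda v: (v["p_success"] * v["u_success"]
                    + (1 - v["p_success"]) * v["u_failure"])
    better, worse = (name_a, vars_a), (name_b, vars_b)
    if eu(vars_b) > eu(vars_a):
        better, worse = worse, better
    labels = {"p_success": "a higher chance of success",
              "u_success": "a better outcome when it succeeds",
              "u_failure": "a less severe outcome when it fails"}
    reasons = [phrase for key, phrase in labels.items()
               if better[1][key] > worse[1][key]]
    return (f"{better[0]} is preferred because it offers "
            + " and ".join(reasons) + ".")

msg = explain_choice(
    "Surgery", {"p_success": 0.9, "u_success": 0.95, "u_failure": 0.2},
    "Medication", {"p_success": 0.7, "u_success": 0.9, "u_failure": 0.3})
print(msg)
```

Only the qualitative inequalities survive into the output text, mirroring the abstract's requirement that the explanation contain no probability or utility values.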

15.
To prioritize 100 animal diseases and zoonoses in Europe, we used a multicriteria decision-making procedure based on opinions of experts and evidence-based data. Forty international experts performed intracategory and intercategory weighting of 57 prioritization criteria. Two methods (deterministic with mean of each weight and probabilistic with distribution functions of weights by using Monte Carlo simulation) were used to calculate a score for each disease. Consecutive ranking was established. Few differences were observed between each method. Compared with previous prioritization methods, our procedure is evidence based, includes a range of fields and criteria while considering uncertainty, and will be useful for analyzing diseases that affect public health.
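The deterministic-versus-probabilistic comparison can be sketched on a toy problem (three diseases, three criteria, with invented scores and weight ranges): the deterministic method scores with mean weights, while the Monte Carlo method draws weights from distributions and reports how often each disease ranks first.

```python
import random

random.seed(15)

# Toy prioritization: 3 diseases scored on 3 criteria (0-100 scale).
# Scores and weight distributions are illustrative, not the study's data.
scores = {
    "disease_X": [80, 60, 40],
    "disease_Y": [70, 75, 55],
    "disease_Z": [50, 50, 90],
}

def normalised(weights):
    total = sum(weights)
    return [w / total for w in weights]

# Deterministic method: mean of each expert weight.
mean_w = normalised([0.5, 0.3, 0.2])
det_rank = sorted(scores,
                  key=lambda d: -sum(w * s for w, s in zip(mean_w, scores[d])))

# Probabilistic method: Monte Carlo draws of the weights.
first_counts = {d: 0 for d in scores}
for _ in range(10_000):
    w = normalised([random.uniform(0.3, 0.7),
                    random.uniform(0.1, 0.5),
                    random.uniform(0.05, 0.35)])
    top = max(scores, key=lambda d: sum(wi * s for wi, s in zip(w, scores[d])))
    first_counts[top] += 1

print("deterministic ranking:", det_rank)
print("P(ranked first):", {d: c / 10_000 for d, c in first_counts.items()})
```

When the rank-first probabilities are sharply concentrated on the deterministic winner, the two methods agree, which matches the "few differences" the abstract reports.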

16.
Group sequential hypothesis testing is now widely used to analyze prospective data. If Monte Carlo simulation is used to construct the signaling threshold, the challenge is how to manage the type I error probability for each one of the multiple tests without losing control of the overall significance level. This paper introduces a valid method for proper management of the alpha spending at each one of a sequence of Monte Carlo tests. The method also enables the use of a sequential simulation strategy for each Monte Carlo test, which is useful for saving computational execution time. Thus, the proposed procedure allows for sequential Monte Carlo tests in sequential analysis, and for this reason it is called a ‘composite sequential’ test. An upper bound for the potential power losses from the proposed method is deduced. The composite sequential design is illustrated through an application to post‐market vaccine safety surveillance data. Copyright © 2015 John Wiley & Sons, Ltd.
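The computational saving from sequential simulation within a single Monte Carlo test can be illustrated with a generic Besag-Clifford-style early-stopping p-value (this is a standard sequential Monte Carlo test, not the paper's composite sequential design; the null model and observed statistic are invented): stop drawing null statistics as soon as a fixed number of them reach the observed value.

```python
import random

random.seed(16)

def sequential_mc_pvalue(observed, null_draw, h=10, n_max=1000):
    # Sequential Monte Carlo test: stop as soon as h null statistics
    # reach the observed value, saving simulation time for non-extreme
    # observations (generic early-stopping scheme).
    exceed = 0
    for i in range(1, n_max + 1):
        if null_draw() >= observed:
            exceed += 1
            if exceed == h:
                return h / i, i        # early stop: p-estimate, draws used
    return (exceed + 1) / (n_max + 1), n_max

# Null: standard normal statistic; observed value clearly non-extreme,
# so the test should stop long before n_max draws.
p, used = sequential_mc_pvalue(0.5, lambda: random.gauss(0, 1))
print(f"p ~ {p:.3f} after {used} null draws")
```

Non-extreme observations terminate after a few dozen draws instead of the full simulation budget; only borderline cases consume the maximum, which is the execution-time saving the abstract refers to.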

17.
Over the last decade or so, there have been many developments in methods to handle uncertainty in cost-effectiveness studies. In decision modelling, it is widely accepted that there needs to be an assessment of how sensitive the decision is to uncertainty in parameter values. The rationale for probabilistic sensitivity analysis (PSA) is primarily based on a consideration of the needs of decision makers in assessing the consequences of decision uncertainty. In this paper, we highlight some further compelling reasons for adopting probabilistic methods for decision modelling and sensitivity analysis, and specifically for adopting simulation from a Bayesian posterior distribution. Our reasoning is as follows. Firstly, cost-effectiveness analyses need to be based on all the available evidence, not a selected subset, and the uncertainties in the data need to be propagated through the model in order to provide a correct analysis of the uncertainties in the decision. In many--perhaps most--cases the evidence structure requires a statistical analysis that inevitably induces correlations between parameters. Deterministic sensitivity analysis requires that models are run with parameters fixed at 'extreme' values, but where parameter correlation exists it is not possible to identify sets of parameter values that can be considered 'extreme' in a meaningful sense. However, a correct probabilistic analysis can be readily achieved by Monte Carlo sampling from the joint posterior distribution of parameters. In this paper, we review some evidence structures commonly occurring in decision models, where analyses that correctly reflect the uncertainty in the data induce correlations between parameters. Frequently, this is because the evidence base includes information on functions of several parameters. 
It follows that, if health technology assessments are to be based on a correct analysis of all available data, then probabilistic methods must be used both for sensitivity analysis and for estimation of expected costs and benefits.
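The core point about correlated parameters can be shown in a few lines (the two parameters, their means, SDs, correlation, and the toy net-benefit expression are all hypothetical): draws from the joint distribution, here generated with a Cholesky factor of a 2x2 correlation matrix, propagate the correlation through the model, whereas fixing one parameter at an "extreme" value while varying the other would misstate the uncertainty.

```python
import math
import random

random.seed(17)

# Two model parameters whose joint uncertainty is correlated, e.g. a
# baseline risk and a relative risk estimated from the same evidence.
# Means, SDs, and correlation are illustrative.
mu = [0.3, 0.8]
sd = [0.05, 0.10]
rho = -0.6

def draw_correlated():
    # Cholesky factor of the 2x2 correlation matrix: [[1, 0], [rho, sqrt(1-rho^2)]].
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    x1 = mu[0] + sd[0] * z1
    x2 = mu[1] + sd[1] * (rho * z1 + math.sqrt(1 - rho ** 2) * z2)
    return x1, x2

# Propagate joint draws through a toy net-benefit model.
nb = []
for _ in range(20_000):
    baseline, rel_risk = draw_correlated()
    treated_risk = baseline * rel_risk
    nb.append(20_000 * (baseline - treated_risk) - 1_000)
mean_nb = sum(nb) / len(nb)
print(f"mean net benefit ~ {mean_nb:,.0f}")
```

Because the model is nonlinear in the correlated pair, the expected net benefit depends on the covariance term, not just the parameter means, which is why correct estimation (not only sensitivity analysis) requires probabilistic methods.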

18.
Many recent studies have sought to quantify the degree to which viral phenotypic characters (such as epidemiological risk group, geographic location, cell tropism, drug resistance state, etc.) are correlated with shared ancestry, as represented by a viral phylogenetic tree. Here, we present a new Bayesian Markov-Chain Monte Carlo approach to the investigation of such phylogeny-trait correlations. This method accounts for uncertainty arising from phylogenetic error and provides a statistical significance test of the null hypothesis that traits are associated randomly with phylogeny tips. We perform extensive simulations to explore and compare the behaviour of three statistics of phylogeny-trait correlation. Finally, we re-analyse two existing published data sets as case studies. Our framework aims to provide an improvement over existing methods for this problem.

19.
We conduct Monte Carlo analysis to compare specification tests in choosing between the sample selection and two-part models for corner solutions when errors are correlated but there are no identifying instruments.

20.

Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号