Similar Articles (20 results)
1.
A Monte Carlo uncertainty analysis with correlations between parameters is applied to a Markov-chain model used to support the choice of a replacement heart valve. The objective is to quantify the effects of uncertainty in, and of correlations between, the probabilities of valve-related events on the life expectancies of four valve types. The uncertainty in the logit- and log-transformed parameters (mostly representing probabilities and durations) is modeled as a multivariate normal distribution. The univariate distributions are obtained from values for the median and the 0.975 quantile of each parameter. Correlations between parameters are difficult to quantify, so a sensitivity analysis is suggested to study their influence on the uncertainty in valve preference prior to further elicitation efforts. The results of the uncertainty analysis strengthen the conclusions of a preceding study, which did not include uncertainty in the model parameters and in which the homograft turned out to be the best choice. It is concluded that the influence of correlations is limited in most cases. Preference statements become more certain as the correlation between valve types increases.
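The setup described in this abstract can be sketched as follows. The medians, 0.975 quantiles, and the correlation value below are illustrative assumptions, not the study's elicited numbers: each parameter gets a normal distribution on its transformed scale, with the standard deviation chosen so that the elicited 0.975 quantile is reproduced, and correlation is imposed on the transformed scale.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative elicited values (assumed, not from the study): median and
# 0.975 quantile for one event probability and one duration.
p_median, p_q975 = 0.02, 0.05   # probability of a valve-related event
d_median, d_q975 = 10.0, 14.0   # duration in years

z975 = 1.959963984540054        # 0.975 quantile of the standard normal

def logit(x):
    return np.log(x / (1 - x))

# Normal on the transformed scale: mean = transformed median; sd chosen so
# the transformed 0.975 quantile sits z975 standard deviations above it.
mu = np.array([logit(p_median), np.log(d_median)])
sd = np.array([(logit(p_q975) - logit(p_median)) / z975,
               (np.log(d_q975) - np.log(d_median)) / z975])

rho = 0.5                       # assumed correlation (a sensitivity-analysis input)
cov = np.array([[sd[0]**2,            rho * sd[0] * sd[1]],
                [rho * sd[0] * sd[1], sd[1]**2]])

draws = rng.multivariate_normal(mu, cov, size=100_000)
p_draws = 1 / (1 + np.exp(-draws[:, 0]))  # back-transform logit -> probability
d_draws = np.exp(draws[:, 1])             # back-transform log -> duration
```

Because the transforms are monotone, the medians of the back-transformed draws recover the elicited medians, which is what makes this parameterisation convenient for elicitation.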

2.
Results are presented from a simulation study comparing two methods of analysis of data simulated under the mixed model on a 232-member pedigree. Two programs were used: Pedigree Analysis Package (PAP), which approximates the likelihoods needed in a complex segregation analysis, and MIXD, which estimates likelihoods using Markov chain Monte Carlo (MCMC). PAP obtained unbiased estimates of the major-locus genotype means and the gene frequency, but biased estimates of the environmental variance component, and thus of the heritability. A substantial fraction of the runs did not converge to an internal set of parameter estimates when analyzed with PAP. MIXD, which uses the Gibbs sampler to perform the MCMC sampling, produced unbiased estimates of all parameters with considerably more accuracy than PAP, and did not suffer from convergence of estimates to the boundary of the parameter space. The difference in behavior and accuracy of parameter estimates between PAP and MIXD was most apparent for models with either high or low residual additive genetic variance. Thus, in situations where accuracy of the model is important, MCMC methods may be useful; where less accuracy is needed, approximation methods may be adequate. Practical issues in using MCMC as implemented in MIXD to fit the mixed model are also discussed. Results of the simulations indicate that, unlike with PAP, the starting configurations of most parameter estimates do not substantially influence the final parameter estimates in analysis with MIXD. © 1996 Wiley-Liss, Inc.

3.
A Monte Carlo model, based on the Quantitative Microbial Risk Analysis approach (QMRA), has been developed to assess the relative risks of infection associated with the presence of Cryptosporidium and Giardia in drinking water. The impact of various approaches for modelling the initial parameters of the model on the final risk assessments is evaluated. The Monte Carlo simulations that we performed showed that the occurrence of parasites in raw water was best described by a mixed distribution: log-Normal for concentrations > detection limit (DL), and a uniform distribution for concentrations < DL. The selection of process performance distributions for modelling the performance of treatment (filtration and ozonation) influences the estimated risks significantly. The mean annual risks for conventional treatment are: 1.97E-03 (removal credit adjusted by log parasite = log spores), 1.58E-05 (log parasite = 1.7 × log spores) or 9.33E-03 (regulatory credits based on the turbidity measurement in filtered water). Using full scale validated SCADA data, the simplified calculation of CT performed at the plant was shown to largely underestimate the risk relative to a more detailed CT calculation, which takes into consideration the downtime and system failure events identified at the plant (1.46E-03 vs. 3.93E-02 for the mean risk).
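The mixed raw-water distribution and stochastic treatment step described above can be sketched in a minimal QMRA-style simulation. All numerical inputs below (detection limit, distribution parameters, removal credits, consumption, dose-response parameter) are invented for illustration, not the paper's fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Assumed inputs for illustration only.
DL = 0.1                 # detection limit, (oo)cysts per litre
frac_below_DL = 0.6      # fraction of raw-water samples below the DL
mu_log, sd_log = np.log(0.5), 0.8   # log-normal for concentrations above the DL

# Mixed raw-water concentration: uniform on (0, DL) below the detection
# limit, log-normal above it (truncation at the DL ignored for simplicity).
below = rng.random(n) < frac_below_DL
conc = np.where(below,
                rng.uniform(0.0, DL, n),
                np.exp(rng.normal(mu_log, sd_log, n)))

# Stochastic treatment performance: log10 removal credit drawn per iteration.
log_removal = rng.normal(3.0, 0.5, n)
dose = conc * 10.0 ** (-log_removal) * 1.5   # 1.5 L/day consumption (assumed)

# Exponential dose-response model, then daily risk compounded to annual risk.
r = 0.004
p_daily = 1 - np.exp(-r * dose)
p_annual = 1 - (1 - p_daily) ** 365
mean_annual_risk = p_annual.mean()
```

Swapping the treatment-performance distribution (e.g. its mean or spread) and re-running is exactly the kind of comparison the abstract reports as changing the estimated risk significantly.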

4.
This paper demonstrates the usefulness of combining simulation with Bayesian estimation methods in analysis of cost-effectiveness data collected alongside a clinical trial. Specifically, we use Markov Chain Monte Carlo (MCMC) to estimate a system of generalized linear models relating costs and outcomes to a disease process affected by treatment under alternative therapies. The MCMC draws are used as parameters in simulations which yield inference about the relative cost-effectiveness of the novel therapy under a variety of scenarios. Total parametric uncertainty is assessed directly by examining the joint distribution of simulated average incremental cost and effectiveness. The approach allows flexibility in assessing treatment in various counterfactual premises and quantifies the global effect of parametric uncertainty on a decision-maker's confidence in adopting one therapy over the other.

5.
Quantitative microbial risk assessment (QMRA) is increasingly applied to estimate drinking water safety. In QMRA, the risk of infection is calculated from pathogen concentrations in drinking water, water consumption and dose-response relations. Pathogen concentrations in drinking water are generally low, and monitoring provides little information for QMRA. Therefore, pathogen concentrations are monitored in the raw water, and the reduction of pathogens by treatment is modelled stochastically with Monte Carlo simulations. The method was tested in a case study with Campylobacter monitoring data for rapid sand filtration and ozonation processes. This study showed that the currently applied method did not predict the monitoring data used for validation; consequently, the risk of infection was overestimated by one order of magnitude. An improved method for model validation was developed. It combines non-parametric bootstrapping with statistical extrapolation to rare events. Evaluation of the treatment model was improved by presenting monitoring data and modelling results in CCDF graphs, which focus on the occurrence of rare events. Apart from calculating the yearly average risk of infection, the model results were presented in FN curves. This allowed for evaluation of both the distribution of risk and the uncertainty associated with the assessment.
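The non-parametric bootstrap and CCDF presentation mentioned here can be illustrated with a toy example; the monitoring values below are invented, and the statistic bootstrapped (the mean log10 removal of a filtration step) is a simplification of the paper's validation procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical monitoring data: log10 removal credits observed for a
# rapid sand filtration process (assumed values for illustration).
observed = np.array([2.1, 2.4, 1.8, 2.9, 2.2, 2.6, 1.9, 2.3, 2.7, 2.0])

# Non-parametric bootstrap: resample the monitoring data with replacement
# and recompute the statistic of interest for each resample.
boot = rng.choice(observed, size=(10_000, observed.size), replace=True)
boot_means = boot.mean(axis=1)

def ccdf(samples, x):
    """Complementary CDF P(X > x), which emphasises rare high values."""
    return (np.asarray(samples) > x).mean()

# CCDF of the bootstrapped mean removal at a few thresholds.
probs = {x: ccdf(boot_means, x) for x in (2.0, 2.3, 2.6)}
```

Plotting `ccdf` over a grid of thresholds gives the CCDF graph the abstract describes, in which the tail behaviour (rare poor-removal events) is visible rather than hidden in an average.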

6.
Four estimators of annual infection probability were compared, as pertinent to Quantitative Microbial Risk Analysis (QMRA). A stochastic model, the Gold Standard, was used as the benchmark: it is a product of independent daily infection probabilities, which in turn are based on daily doses. An alternative and commonly used estimator, here referred to as the Naïve, assumes a single daily infection probability derived from a single value of daily dose. The typical use of this estimator in stochastic QMRA involves the generation of a distribution of annual infection probabilities, but since each of these is based on a single realisation of the dose distribution, the resultant annual infection probability distribution simply represents a set of inaccurate estimates. While the medians of both distributions were within an order of magnitude for our test scenario, the 95th percentiles, which are sometimes used in QMRA as conservative estimates of risk, differed by around one order of magnitude. The other two estimators examined, the Geometric and the Arithmetic, were closely related to the Naïve and used the same equation, and both proved to be poor estimators. Lastly, this paper proposes a simple adjustment to the Gold Standard equation to accommodate periodic infection probabilities when the daily infection probabilities are unknown.
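The contrast between the Gold Standard and the Naïve estimator can be made concrete in a few lines. The dose distribution and the exponential dose-response parameter below are assumptions chosen for illustration, not the paper's test scenario:

```python
import numpy as np

rng = np.random.default_rng(7)
n_sims, n_days = 10_000, 365
r = 0.005   # exponential dose-response parameter (assumed)

def daily_dose(size):
    # Illustrative variable daily dose: log-normal.
    return rng.lognormal(mean=-3.0, sigma=1.5, size=size)

# Gold Standard: one annual probability per simulation, built as
# 1 - prod(1 - p_day) over 365 independently drawn daily doses.
doses = daily_dose((n_sims, n_days))
p_daily = 1 - np.exp(-r * doses)
p_annual_gold = 1 - np.prod(1 - p_daily, axis=1)

# Naive: a single daily dose draw is reused for all 365 days,
# giving 1 - (1 - p)^365 per simulation.
single_dose = daily_dose(n_sims)
p_single = 1 - np.exp(-r * single_dose)
p_annual_naive = 1 - (1 - p_single) ** n_days
```

Because each Naïve annual probability rests on a single dose realisation, its distribution is much more dispersed than the Gold Standard's, which is exactly why upper percentiles of the two distributions can disagree by an order of magnitude even when the medians are comparable.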

7.
Bayesian methods for cluster randomized trials with continuous responses
Bayesian methods for cluster randomized trials extend the random-effects formulation by allowing both the use of external evidence on parameters and straightforward relaxation of the standard normality and constant variance assumptions. Care is required in specifying prior distributions on variance components, and a number of different options are explored with implied prior distributions for other parameters given in closed form. Markov chain Monte Carlo (MCMC) methods permit the fitting of very general models and the introduction of parameter uncertainty into power calculations. We illustrate these ideas using a published example in which general practices were randomized to intervention or control, and show that different choices of supposedly 'non-informative' prior distributions can have substantial influence on conclusions. We also illustrate the use of forward simulation methods in power calculations with uncertainty on multiple inputs. Bayesian methods have the potential to be very useful but guidance is required as to appropriate strategies for robust analysis. Our current experience leads us to recommend a standard 'non-informative' prior distribution for the within-cluster sampling variance, and an independent prior on the intraclass correlation coefficient (ICC). The latter may exploit background evidence or, as a reference analysis, be a uniform ICC or a 'uniform shrinkage' prior.

8.
We present methodology for calculating Bayes factors between models as well as posterior probabilities of the models when the indicator variables of the models are integrated out of the posterior before Markov chain Monte Carlo (MCMC) computations. Standard methodology would include the indicator functions as part of the MCMC computations. We demonstrate that our methodology can give substantially greater accuracy than the traditional approach. We illustrate the methodology using the model selection prior of George and McCulloch applied to logistic regression and to a mixture model for observations in a hierarchical random effects model.

9.
Population projection for many developing countries can be quite a challenging task for demographers, mostly owing to the lack of sufficient reliable data. The objective of this paper is to present an overview of the existing methods for population forecasting and to propose an alternative based on Bayesian statistics, which combines the formality of inference with the use of expert judgement. The analysis has been made using the Markov chain Monte Carlo (MCMC) technique for Bayesian methodology available with the software WinBUGS. Convergence diagnostic techniques available with WinBUGS have been applied to ensure the convergence of the chains necessary for the implementation of MCMC. The Bayesian approach allows for the use of observed data and expert judgements by means of appropriate priors, and more realistic population forecasts, along with associated uncertainty, have been possible.

Key words: cohort component method; Monte Carlo error; Gompertz model; highest posterior density; logistic model; Markov chain Monte Carlo; non-linear regression model; population projection; WinBUGS

10.
There has been increasing interest in using expected value of information (EVI) theory in medical decision making, both to identify the need for further research to reduce decision uncertainty and as a tool for sensitivity analysis. Expected value of sample information (EVSI) has been proposed for determining optimum sample sizes and allocation rates in randomized clinical trials. This article derives simple Monte Carlo, or nested Monte Carlo, methods that extend the use of EVSI calculations to medical decision applications with multiple sources of uncertainty, with particular attention to the form in which epidemiological data and research findings are structured. In particular, information on key decision parameters such as treatment efficacy is invariably available on measures of relative efficacy such as risk differences or odds ratios, but not on model parameters themselves. In addition, estimates of model parameters and of relative effect measures in the literature may be heterogeneous, reflecting additional sources of variation besides statistical sampling error. The authors describe Monte Carlo procedures for calculating EVSI for probability, rate, or continuous-variable parameters in multiparameter decision models, and approximate methods for relative measures such as risk differences, odds ratios, risk ratios, and hazard ratios. Where prior evidence is based on a random-effects meta-analysis, the authors describe two different EVSI calculations, one relevant for decisions concerning a specific patient group and the other for decisions concerning the entire population of patient groups. They also consider EVSI methods for new studies intended to update information on both baseline treatment efficacy and the relative efficacy of two treatments. Although there are restrictions regarding models with prior correlation between parameters, these methods can be applied to the majority of probabilistic decision models. Illustrative worked examples of EVSI calculations are given in an appendix.
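A minimal nested Monte Carlo EVSI calculation can be sketched for a toy two-treatment decision. All priors, the net-benefit coefficients, and the study size below are invented for illustration; the Beta posterior is available here in closed form thanks to conjugacy, but it is sampled to mirror the generic inner loop the abstract describes:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy model (all numbers are assumptions): the incremental net benefit of
# the new treatment is INB(p) = lambda_ * p - delta_cost, where p is its
# response probability, with a Beta(a, b) prior.
a, b = 4.0, 6.0
lambda_, delta_cost = 20_000.0, 9_000.0

def inb(p):
    return lambda_ * p - delta_cost

n_outer, n_inner, n_study = 5_000, 2_000, 50

# Expected net benefit of the decision under current (prior) information.
prior_mean_inb = inb(a / (a + b))
enb_current = max(0.0, prior_mean_inb)

# Outer loop: simulate the data a future study of n_study patients would
# yield. Inner step: sample the posterior and take the post-study decision.
post_enb = np.empty(n_outer)
for i in range(n_outer):
    p_true = rng.beta(a, b)
    x = rng.binomial(n_study, p_true)
    post_draws = rng.beta(a + x, b + n_study - x, size=n_inner)
    post_enb[i] = max(0.0, inb(post_draws.mean()))

# EVSI: expected gain from deciding after, rather than before, the study.
evsi = post_enb.mean() - enb_current
```

The same two-level structure carries over when the posterior is not conjugate; the inner step then becomes an MCMC run or one of the approximations the article develops for relative effect measures.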

11.
In probabilistic sensitivity analyses, analysts assign probability distributions to uncertain model parameters and use Monte Carlo simulation to estimate the sensitivity of model results to parameter uncertainty. The authors present Bayesian methods for constructing large-sample approximate posterior distributions for probabilities, rates, and relative effect parameters, for both controlled and uncontrolled studies, and discuss how to use these posterior distributions in a probabilistic sensitivity analysis. These results draw on and extend procedures from the literature on large-sample Bayesian posterior distributions and Bayesian random effects meta-analysis. They improve on standard approaches to probabilistic sensitivity analysis by allowing a proper accounting for heterogeneity across studies as well as dependence between control and treatment parameters, while still being simple enough to be carried out on a spreadsheet. The authors apply these methods to conduct a probabilistic sensitivity analysis for a recently published analysis of zidovudine prophylaxis following rapid HIV testing in labor to prevent vertical HIV transmission in pregnant women.
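For two common cases, the large-sample posteriors take a simple form: a Beta posterior for a probability from a single study under a vague prior, and an approximately normal posterior for a log odds ratio with the usual (Woolf) standard error. The counts below are hypothetical, chosen only to show the shape of a probabilistic sensitivity analysis input:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 50_000

# Probability parameter: events/trials from an uncontrolled study with a
# vague Beta(1, 1) prior gives a Beta(1 + events, 1 + non-events) posterior.
events, trials = 12, 200
p_draws = rng.beta(1 + events, 1 + trials - events, n)

# Relative effect from a controlled study: the log odds ratio is
# approximately normal with Woolf's standard error.
a_, b_, c_, d_ = 15, 85, 30, 70   # hypothetical 2x2 table cell counts
log_or_hat = np.log((a_ * d_) / (b_ * c_))
se_log_or = np.sqrt(1/a_ + 1/b_ + 1/c_ + 1/d_)
or_draws = np.exp(rng.normal(log_or_hat, se_log_or, n))
```

Each probabilistic sensitivity analysis iteration then pairs one element of `p_draws` with one element of `or_draws` and pushes them through the decision model; extensions to heterogeneous evidence replace the single-study posteriors with random-effects meta-analysis posteriors.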

12.
For bivariate meta-analysis of diagnostic studies, likelihood approaches are very popular. However, they often run into numerical problems with possible non-convergence. In addition, the construction of confidence intervals is controversial. Bayesian methods based on Markov chain Monte Carlo (MCMC) sampling could be used, but are often difficult to implement, and require long running times and diagnostic convergence checks. Recently, a new Bayesian deterministic inference approach for latent Gaussian models using integrated nested Laplace approximations (INLA) has been proposed. With this approach MCMC sampling becomes redundant as the posterior marginal distributions are directly and accurately approximated. By means of a real data set we investigate the influence of the prior information provided and compare the results obtained by INLA, MCMC, and the maximum likelihood procedure SAS PROC NLMIXED. Using a simulation study we further extend the comparison of INLA and SAS PROC NLMIXED by assessing their performance in terms of bias, mean-squared error, coverage probability, and convergence rate. The results indicate that INLA is more stable and gives generally better coverage probabilities for the pooled estimates and less biased estimates of variance parameters. The user-friendliness of INLA is demonstrated by documented R code. Copyright © 2010 John Wiley & Sons, Ltd.

13.
Markov transition models are frequently used to model disease progression. The authors show how the solution to Kolmogorov's forward equations can be exploited to map between transition rates and transition probabilities in multistate models. They provide a uniform, Bayesian treatment of estimation and propagation of uncertainty of transition rates and probabilities when 1) observations are available on all transitions and on the exact time at risk in each state (fully observed data) and 2) observations are on the initial state and the final state after a fixed interval of time, but not on the sequence of transitions (partially observed data). The authors show how underlying transition rates can be recovered from partially observed data using Markov chain Monte Carlo methods in WinBUGS, and they suggest diagnostics to investigate inconsistencies between evidence from different starting states. An illustrative example for a 3-state model is given, which shows how the methods extend to more complex Markov models using the software WBDiff to compute solutions. Finally, the authors illustrate how to statistically combine data from multiple sources, including partially observed data at several follow-up times, and how to calibrate a Markov model to be consistent with data from one specific study.
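The rate-to-probability map at the heart of this approach is the matrix exponential: Kolmogorov's forward equations dP/dt = P Q have the solution P(t) = exp(Qt), and the reverse map is a matrix logarithm. A sketch for an illustrative 3-state model (the rate values are assumptions; `scipy.linalg.expm`/`logm` are the robust general-purpose tools, while the eigendecomposition below keeps the example NumPy-only):

```python
import numpy as np

# Illustrative 3-state progression model (states: well, ill, dead).
# Q is the transition-rate (generator) matrix: off-diagonal rates >= 0,
# each row sums to zero. Values are assumed for illustration.
Q = np.array([[-0.15,  0.10,  0.05],
              [ 0.00, -0.20,  0.20],
              [ 0.00,  0.00,  0.00]])
t = 1.0   # cycle length, e.g. one year

# Forward map P(t) = expm(Q t), computed via eigendecomposition
# (valid here because Q has distinct eigenvalues).
evals, V = np.linalg.eig(Q)
P = (V @ np.diag(np.exp(evals * t)) @ np.linalg.inv(V)).real

# Inverse map: recover the generator from a one-cycle transition matrix.
evals_P, W = np.linalg.eig(P)
Q_recovered = (W @ np.diag(np.log(evals_P)) @ np.linalg.inv(W)).real / t
```

Because exp(Qt) of a generator is automatically a proper stochastic matrix, estimating Q (rather than P directly) guarantees internally consistent transition probabilities at any cycle length, which is what the Bayesian treatment in the abstract exploits.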

14.
Molitor J. American Journal of Epidemiology. 2012;175(5):376-378; discussion 379-380.
Bayesian methods have seen an increase in popularity in a wide variety of scientific fields, including epidemiology. One of the main reasons for their widespread application is the power of the Markov chain Monte Carlo (MCMC) techniques generally used to fit these models. As a result, researchers often implicitly associate Bayesian models with MCMC estimation procedures. However, Bayesian models do not always require Markov-chain-based methods for parameter estimation. This is important, as MCMC estimation methods, while generally quite powerful, are complex and computationally expensive and suffer from convergence problems related to the manner in which they generate correlated samples used to estimate probability distributions for parameters of interest. In this issue of the Journal, Cole et al. (Am J Epidemiol. 2012;175(5):368-375) present an interesting paper that discusses non-Markov-chain-based approaches to fitting Bayesian models. These methods, though limited, can overcome some of the problems associated with MCMC techniques and promise to provide simpler approaches to fitting Bayesian models. Applied researchers will find these estimation approaches intuitively appealing and will gain a deeper understanding of Bayesian models through their use. However, readers should be aware that other non-Markov-chain-based methods are currently in active development and have been widely published in other fields.

15.
Although extended pedigrees are often sampled through probands with extreme levels of a quantitative trait, Markov chain Monte Carlo (MCMC) methods for segregation and linkage analysis have not been able to perform ascertainment corrections. Further, the extent to which ascertainment of pedigrees leads to biases in the estimation of segregation and linkage parameters has not previously been studied for MCMC procedures. In this paper, we studied these issues with a Bayesian MCMC approach for joint segregation and linkage analysis, as implemented in the package Loki. We first simulated pedigrees ascertained through individuals with extreme values of a quantitative trait, in the spirit of the sequential sampling theory of Cannings and Thompson [1977, Clin. Genet. 12:208-212]. Using our simulated data, we detected no bias in estimates of the trait locus location. However, in addition to allele frequencies, when the ascertainment threshold was higher than or close to the true value of the highest genotypic mean, bias was also found in the estimation of this parameter. When there were multiple trait loci, this bias destroyed the additivity of the effects of the trait loci and caused biases in the estimation of all genotypic means when a purely additive model was used for analyzing the data. To account for pedigree ascertainment with sequential sampling, we developed a Bayesian ascertainment approach and implemented Metropolis-Hastings updates in the MCMC samplers used in Loki. Ascertainment correction greatly reduced biases in parameter estimates. Our method is designed for multiple, but a fixed number of, trait loci.

16.
Expected value of sample information (EVSI) involves simulating data collection, Bayesian updating, and re-examining decisions. Bayesian updating in Weibull models typically requires Markov chain Monte Carlo (MCMC). We examine five methods for calculating posterior expected net benefits: two heuristic methods (data lumping and pseudo-normal), two Bayesian approximation methods (Tierney & Kadane, Brennan & Kharroubi), and the gold-standard MCMC. A case study computes EVSI for 25 study options. We compare accuracy, computation time and the trade-off of EVSI versus study costs. Brennan & Kharroubi (B&K) approximates expected net benefits to within ±1% of MCMC. The other methods, data lumping (+54%), pseudo-normal (-5%) and Tierney & Kadane (+11%), are less accurate. B&K also produces the most accurate EVSI approximation. Pseudo-normal is also reasonably accurate, whilst Tierney & Kadane consistently underestimates and data lumping exhibits large variance. B&K computation is 12 times faster than MCMC in our case study. Though not always faster, B&K provides the most computational efficiency when net benefits require appreciable computation time and when many MCMC samples are needed. The methods enable EVSI computation for economic models with Weibull survival parameters. The approach can generalize to complex multi-state models and to survival analyses using other smooth parametric distributions.

17.
In statistical modelling, it is often important to know how much parameter estimates are influenced by particular observations. An attractive approach is to re-estimate the parameters with each observation deleted in turn, but this is computationally demanding when fitting models by using Markov chain Monte Carlo (MCMC), as obtaining complete sample estimates is often in itself a very time-consuming task. Here we propose two efficient ways to approximate the case-deleted estimates by using output from MCMC estimation. Our first proposal, which directly approximates the usual influence statistics in maximum likelihood analyses of generalised linear models (GLMs), is easy to implement and avoids any further evaluation of the likelihood. Hence, unlike the existing alternatives, it does not become more computationally intensive as the model complexity increases. Our second proposal, which utilises model perturbations, also has this advantage and does not require the form of the GLM to be specified. We show how our two proposed methods are related and evaluate them against the existing method of importance sampling and case deletion in a logistic regression analysis with missing covariates. We also provide practical advice for those implementing our procedures, so that they may be used in many situations where MCMC is used to fit statistical models.

18.
In genetic counseling for cancer risk, the probability of carrying a mutation of a cancer-causing gene plays an important role, and family history of various cancers is central to calculating this probability. BRCAPRO is a widely used software package for calculating the probability of carrying mutations in the BRCA1 and BRCA2 genes given the family history of breast and ovarian cancer in first- and second-degree relatives; it uses an analytical (exact) calculation procedure. Using Markov chain Monte Carlo (MCMC) methods, we extend BRCAPRO to handle, in principle, any type of cancer, any family history, and any number of genes and of alleles per gene. When the information used in this MCMC approach is the same as for BRCAPRO (two genes: BRCA1 and BRCA2; two cancers: breast and ovarian; first- and second-degree relatives only), the two approaches give essentially the same answer. Extending the model to include (1) prostate cancer, (2) two mutated alleles of BRCA2, namely mutations in the Ovarian Cancer Cluster Region (OCCR) and in the non-OCCR region, and (3) relatives of degree greater than second leads to different carrier probabilities. The MCMC approach is a useful tool for building a comprehensive model that gives accurate estimates of carrier probabilities. Such an approach will become even more important as additional information about the genetics of various cancers becomes available.

19.
We describe a novel process for transforming the efficiency of partial expected value of sample information (EVSI) computation in decision models. Traditional EVSI computation begins with Monte Carlo sampling to produce new simulated data-sets with a specified sample size. Each data-set is synthesised with prior information to give posterior distributions for model parameters, either via analytic formulae or a further Markov chain Monte Carlo (MCMC) simulation. A further 'inner level' of Monte Carlo sampling then quantifies the effect of the simulated data on the decision. This paper describes a novel form of Bayesian Laplace approximation, which can replace both the Bayesian updating and the inner Monte Carlo sampling when computing the posterior expectation of a function. We compare the accuracy of EVSI estimates in two case-study cost-effectiveness models using first- and second-order versions of our approximation formula, the approximation of Tierney and Kadane, and traditional Monte Carlo. Computational efficiency gains depend on the complexity of the net benefit functions, the number of inner-level Monte Carlo samples used, and whether or not MCMC methods are required to produce the posterior distributions. This methodology provides a new and valuable approach to EVSI computation in health economic decision models, with potential wider benefits in many fields requiring Bayesian approximation.

20.
Probabilistic analysis of decision trees using symbolic algebra
Uncertainty in medical decision making occurs in the specification of both decision tree probabilities and utilities. Using a computer-based algebraic approach, methods for modeling this uncertainty have been formulated. This analytic procedure allows an exact calculation of the statistical variance at the final decision node using automated symbolic manipulation. Confidence and conditional confidence levels for the preferred decision are derived from Gaussian theory, and a mutual information index that identifies probabilistically important tree variables is provided. The computer-based algebraic method is illustrated on a problem previously analyzed by Monte Carlo simulation. This methodology provides the decision analyst with a procedure to evaluate the outcome of specification uncertainty, in many decision problems, without resorting to Monte Carlo analysis.
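The symbolic variance propagation can be sketched with a computer algebra system. Shown here is a first-order (delta-method) variance of the expected utility at a single two-branch chance node, assuming independent inputs; this is a simplification of the paper's full procedure, and the substituted numbers are invented:

```python
import sympy as sp

# Expected utility of a two-branch chance node: EU = p*u1 + (1 - p)*u2.
p, u1, u2 = sp.symbols('p u1 u2')
var_p, var_u1, var_u2 = sp.symbols('var_p var_u1 var_u2')
EU = p * u1 + (1 - p) * u2

# First-order variance propagation, carried out symbolically:
# Var(EU) ~= sum_i (dEU/dx_i)**2 * Var(x_i) for independent inputs x_i.
var_EU = sp.expand(sp.diff(EU, p)**2 * var_p
                   + sp.diff(EU, u1)**2 * var_u1
                   + sp.diff(EU, u2)**2 * var_u2)

# Exact numbers drop out by substitution of elicited means and variances.
value = var_EU.subs({p: 0.3, u1: 1.0, u2: 0.2,
                     var_p: 0.01, var_u1: 0.04, var_u2: 0.04})
```

Because the variance expression stays symbolic until the final substitution, the partial derivatives also reveal which inputs dominate the uncertainty at the decision node, in the spirit of the mutual information index mentioned in the abstract.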
