Similar Articles
A total of 20 similar articles were retrieved (search time: 15 ms).
1.
We report a Markov chain Monte Carlo analysis of the five simulated quantitative traits in Genetic Analysis Workshop 12 using the Loki software. Our objectives were to determine the efficacy of the Markov chain Monte Carlo method and to test a new scoring technique. Our initial blind analysis, on replicate 42 (the “best replicate”), successfully detected four of the five disease loci and found no false positives. A power analysis shows that the software could usually detect 4 of the 10 trait/gene combinations at an empirical point-wise p-value of 1.5 × 10⁻⁴. © 2001 Wiley-Liss, Inc.

2.
Objective:  Markov models are increasingly used in economic evaluations of treatments for osteoporosis. Most of the existing evaluations are cohort-based Markov models, which lack comprehensive memory management and versatility. In this article, we describe and validate an original Markov microsimulation model to accurately assess the cost-effectiveness of prevention and treatment of osteoporosis.
Methods:  We developed a Markov microsimulation model with a lifetime horizon and a direct health-care cost perspective. The patient history was recorded and was used in calculations of transition probabilities, utilities, and costs. To test the internal consistency of the model, we carried out an example calculation for alendronate therapy. Then, external consistency was investigated by comparing absolute lifetime risk of fracture estimates with epidemiologic data.
Results:  For women at age 70 years, with a twofold increase in the fracture risk of the average population, the costs per quality-adjusted life-year gained for alendronate therapy versus no treatment were estimated at €9105 and €15,325, respectively, under full and realistic adherence assumptions. All the sensitivity analyses in terms of model parameters and modeling assumptions were coherent with expected conclusions and absolute lifetime risk of fracture estimates were within the range of previous estimates, which confirmed both internal and external consistency of the model.
Conclusion:  Microsimulation models present some major advantages over cohort-based models, increasing the reliability of the results and being largely compatible with the existing state-of-the-art, evidence-based literature. The developed model appears to be a valid model for use in economic evaluations in osteoporosis.

3.
Following its introduction over 30 years ago, the Markov cohort state-transition model has been used extensively to model population trajectories over time in health decision modeling and cost-effectiveness analysis studies. We recently showed that a cohort model represents the average of a continuous-time stochastic process on a multidimensional integer lattice governed by a master equation, which represents the time-evolution of the probability function of an integer-valued random vector. By leveraging this theoretical connection, this study introduces an alternative modeling method using a stochastic differential equation (SDE) approach, which captures not only the mean behavior but also the variance of the population process. We show the derivation of an SDE model from first principles, describe an algorithm to construct an SDE and solve it via simulation for use in practice, and demonstrate two applications of the SDE approach in detail. The first example demonstrates that the population trajectories, and their mean and variance, from the SDE and other commonly used methods in decision modeling match. The second example shows that users can readily apply the SDE method in their existing work without the need for additional inputs beyond those required for constructing a conventional cohort model. In addition, the second example demonstrates that the SDE model is superior to a microsimulation model in terms of computational speed. In summary, an SDE model provides an alternative modeling framework that includes information on variance, can accommodate time-varying parameters, and is computationally less expensive than a microsimulation for a typical cohort modeling problem.
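The abstract does not reproduce the authors' equations, so the following is only a minimal, hypothetical sketch of the general idea: an Euler-Maruyama simulation of an SDE for a two-state (alive/dead) cohort in which the drift matches the cohort-model mean and the diffusion term adds the variance of the counting process. The rate, cohort size, and time grid are invented for illustration.

```python
import numpy as np

# Minimal sketch (not the authors' model): a two-state cohort (Alive -> Dead)
# whose alive count X(t) follows the SDE  dX = -mu*X dt + sqrt(mu*X) dW,
# i.e. drift = mean cohort behaviour, diffusion = demographic noise.
rng = np.random.default_rng(0)
mu, dt, T, n0, n_paths = 0.05, 0.1, 20.0, 1000, 2000
steps = int(T / dt)

x = np.full(n_paths, float(n0))
for _ in range(steps):
    drift = -mu * x * dt
    diffusion = np.sqrt(np.maximum(mu * x, 0.0) * dt) * rng.standard_normal(n_paths)
    x = np.maximum(x + drift + diffusion, 0.0)   # keep counts non-negative

print("SDE mean alive at T:", x.mean())          # ~ n0 * exp(-mu*T)
print("SDE variance at T:  ", x.var(ddof=1))
print("Cohort-model mean:  ", n0 * np.exp(-mu * T))
```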

4.
Power calculations for survival analyses via Monte Carlo estimation
BACKGROUND: Power calculations can be a useful step in the design of epidemiologic studies. For occupational and environmental cohort studies, however, the calculation of statistical power has been difficult because researchers are often interested in situations where exposure assignment is time-dependent, and in research questions that pertain to cumulative exposure-mortality trends evaluated with statistical methods for survival analysis. These conditions are not easily accommodated by available software or published formulas for power calculation. METHODS: Monte Carlo methods can be used to estimate statistical power for survival analyses. Simple computer programs are presented to illustrate this approach. RESULTS: We show that, for the simple case of a randomized clinical trial involving a dichotomous exposure, the results of power calculations derived via this Monte Carlo approach conform to values derived using a previously published formula. We then illustrate how the Monte Carlo approach may be extended to obtain estimates of statistical power for analyses of cumulative exposure-mortality trends under conditions more typical of occupational cohort studies. CONCLUSIONS: The Monte Carlo approach provides a way to perform power calculations for a wide range of study conditions. The approach illustrated in this study should simplify the task of calculating power for survival analyses, particularly in epidemiologic research on occupational cohorts.
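The original programs are not reproduced here; as an illustration of the same idea, the sketch below (Python/numpy, with invented design values) estimates power for the simple randomized-trial case the abstract mentions: a dichotomous exposure, exponential event times, administrative censoring, and a Wald test on the log rate ratio.

```python
import numpy as np

# Illustrative analogue of the Monte Carlo power approach (not the authors' code):
# randomized trial, dichotomous exposure, exponential event times, administrative
# censoring at t_end; test H0: rate ratio = 1 with a Wald test on the log rate ratio.
rng = np.random.default_rng(1)
n_per_arm, lam0, rr, t_end, n_sim = 200, 0.10, 1.5, 5.0, 2000
z_crit = 1.959964  # two-sided 5% level

rejections = 0
for _ in range(n_sim):
    t0 = rng.exponential(1 / lam0, n_per_arm)
    t1 = rng.exponential(1 / (lam0 * rr), n_per_arm)
    d0, d1 = np.sum(t0 <= t_end), np.sum(t1 <= t_end)                    # observed events
    pt0, pt1 = np.minimum(t0, t_end).sum(), np.minimum(t1, t_end).sum()  # person-time
    if d0 == 0 or d1 == 0:
        continue
    log_rr_hat = np.log((d1 / pt1) / (d0 / pt0))
    se = np.sqrt(1 / d0 + 1 / d1)               # SE of log rate ratio (Poisson/exponential)
    rejections += abs(log_rr_hat / se) > z_crit

print("Estimated power:", rejections / n_sim)
```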

5.
Bias from unmeasured confounding is a persistent concern in observational studies, and sensitivity analysis has been proposed as a solution. In recent years, probabilistic sensitivity analysis using either Monte Carlo sensitivity analysis (MCSA) or Bayesian sensitivity analysis (BSA) has emerged as a practical analytic strategy when there are multiple bias parameter inputs. BSA uses Bayes' theorem to formally combine evidence from the prior distribution and the data. In contrast, MCSA samples bias parameters directly from the prior distribution. Intuitively, one would think that BSA and MCSA ought to give similar results. Both methods use similar models and the same (prior) probability distributions for the bias parameters. In this paper, we illustrate the surprising finding that BSA and MCSA can give very different results. Specifically, we demonstrate that MCSA can give inaccurate uncertainty assessments (e.g. 95% intervals) that do not reflect the data's influence on uncertainty about unmeasured confounding. Using a data example from epidemiology and simulation studies, we show that certain combinations of data and prior distributions can result in dramatic prior-to-posterior changes in uncertainty about the bias parameters. This occurs because the application of Bayes' theorem in a non-identifiable model can sometimes rule out certain patterns of unmeasured confounding that are not compatible with the data. Consequently, the MCSA approach may give 95% intervals that are either too wide or too narrow and that do not have 95% frequentist coverage probability. Based on our findings, we recommend that analysts use BSA for probabilistic sensitivity analysis. Copyright © 2017 John Wiley & Sons, Ltd.
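For readers unfamiliar with MCSA, a minimal sketch of what the method does (not the paper's data or models) is shown below: bias parameters for a single unmeasured binary confounder are drawn from assumed priors, a simple external-adjustment bias factor is applied, and random error is added on the log scale. The observed estimate, standard error, and all priors are invented.

```python
import numpy as np

# Minimal Monte Carlo sensitivity analysis (MCSA) sketch with made-up numbers,
# using the simple external-adjustment formula for one unmeasured binary confounder U:
#   bias factor B = (p1*(RR_UD - 1) + 1) / (p0*(RR_UD - 1) + 1),  RR_adj = RR_obs / B
rng = np.random.default_rng(2)
n_draws = 50_000
rr_obs, se_log_rr = 1.8, 0.15                      # hypothetical observed estimate

# Priors on the bias parameters (all assumed for illustration)
rr_ud = rng.lognormal(np.log(2.0), 0.3, n_draws)   # U -> outcome risk ratio
p1 = rng.beta(4, 6, n_draws)                       # P(U=1 | exposed)
p0 = rng.beta(2, 8, n_draws)                       # P(U=1 | unexposed)

bias = (p1 * (rr_ud - 1) + 1) / (p0 * (rr_ud - 1) + 1)
# Combine the systematic-error draw with a random-error draw on the log scale
log_rr_adj = np.log(rr_obs) - np.log(bias) + rng.normal(0, se_log_rr, n_draws)

print("MCSA median RR:    ", np.exp(np.median(log_rr_adj)))
print("MCSA 95% interval: ", np.exp(np.percentile(log_rr_adj, [2.5, 97.5])))
```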

6.
Objective: To evaluate the estimators of exposure effects obtained with the propensity score method and their statistical properties, and to explore the method's practical utility. Methods: Computer simulation was used to examine the bias and precision of the propensity score method with and without model misspecification, and the results were compared with those of model-based methods. Results: When model misspecification was present, the propensity score method was more robust than the model-based approach. Conclusion: For large data sets with complex relationships, the propensity score method offers considerable flexibility.
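As a generic illustration of the propensity score approach evaluated in this study (not the authors' simulation design), the sketch below fits a logistic propensity model to simulated data with statsmodels and estimates the exposure effect by inverse-probability weighting.

```python
import numpy as np
import statsmodels.api as sm

# Illustrative propensity-score sketch with simulated data (not the paper's design):
# fit a logistic propensity model, then estimate the exposure effect with
# inverse-probability weighting and compare it with the unadjusted difference.
rng = np.random.default_rng(3)
n = 5000
x1, x2 = rng.normal(size=n), rng.binomial(1, 0.4, n)           # confounders
p_treat = 1 / (1 + np.exp(-(-0.5 + 0.8 * x1 + 0.6 * x2)))
z = rng.binomial(1, p_treat)                                   # exposure
y = 1.0 * z + 1.5 * x1 - 1.0 * x2 + rng.normal(size=n)         # true effect = 1.0

X = sm.add_constant(np.column_stack([x1, x2]))
ps = sm.Logit(z, X).fit(disp=0).predict(X)                     # propensity scores
w = z / ps + (1 - z) / (1 - ps)                                # IPW weights

ipw_effect = (np.average(y[z == 1], weights=w[z == 1])
              - np.average(y[z == 0], weights=w[z == 0]))
print("Unadjusted difference:", y[z == 1].mean() - y[z == 0].mean())
print("IPW-adjusted effect:  ", ipw_effect)   # should be close to 1.0
```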

7.
We present a reversible jump Bayesian piecewise log-linear hazard model that extends the Bayesian piecewise exponential hazard to a continuous function of piecewise linear log hazards. A simulation study encompassing several different hazard shapes, accrual rates, censoring proportions, and sample sizes showed that the Bayesian piecewise linear log-hazard model estimated the true mean survival time and survival distributions better than the piecewise exponential hazard. Survival data from Wake Forest Baptist Medical Center are analyzed by both methods and the posterior results are compared.

8.
We propose a two-step method to convert patient computed tomography (CT) images into human tissue materials, which is required in dose reconstructions for a retrospective study of carbon-ion radiotherapy (CIRT) using Monte Carlo (MC) simulation. The first step was to assign the standard tissues of the International Commission on Radiological Protection reference phantoms according to the CT-number. The second step was to determine the mass density of each material based on the relationship between CT-number and stopping power ratio (Hounsfield unit [HU]-SPR) registered in the treatment planning system (TPS). Direct implementation of the well-calibrated HU-SPR curve allows the reproduction of previous clinical treatments recorded in the TPS without uncertainty due to a mismatch of the CT scanner or scanning conditions, whereas MC simulation with realistic human tissue materials can supply the out-of-field dose, which is missing from the record. To validate our proposed method, depth-dose distributions in homogeneous and heterogeneous phantoms irradiated by a 400 MeV/u carbon beam with an 8 cm spread-out Bragg peak (SOBP) were computed by MC simulation in combination with the proposed method and compared with those of the TPS. Good agreement of the depth-dose distributions between the TPS and MC simulation (within a 1% discrepancy in range) was obtained for different materials. In contrast, fluence distributions of secondary particles revealed the necessity of MC simulation using realistic human tissue materials. The proposed material assignment method will be used for a retrospective study using previous clinical data of CIRT at the National Institute of Radiological Sciences (NIRS).

9.

Objectives

To provide a practical approach for calculating uncertainty intervals and variance components associated with initial-condition and dynamic-equation parameters in computationally expensive population-based disease microsimulation models.

Methods

In the proposed uncertainty analysis approach, we calculated the required computational time and the number of runs given a user-defined error bound on the variance of the grand mean. The equations for optimal sample sizes were derived by minimizing the variance of the grand mean using initial estimates for variance components. Finally, analysis of variance estimators were used to calculate unbiased variance estimates.

Results

To illustrate the proposed approach, we performed an uncertainty analysis to estimate the uncertainty associated with the total direct cost of osteoarthritis in Canada from 2010 to 2031, using a previously published population health microsimulation model of osteoarthritis. We first calculated crude estimates of the initial-population sampling and dynamic-equation parameter uncertainty by performing a small number of runs. We then calculated the optimal sample sizes and finally derived 95% uncertainty intervals for the total cost and unbiased estimates for the variance components. According to our results, the contribution of dynamic-equation parameter uncertainty to the overall variance was higher than that of initial-population sampling uncertainty throughout the study period.

Conclusions

The proposed analysis of variance approach provides uncertainty intervals for the mean outcome in addition to unbiased estimates for each source of uncertainty. The contributions of the individual sources of uncertainty can then be compared with each other for validation purposes and to improve model accuracy.
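A minimal sketch of the analysis-of-variance decomposition described above, in an illustrative formulation rather than the authors' exact equations: microsimulation outputs are generated for M parameter draws with N runs each, the within-run and between-parameter variance components are estimated from the ANOVA mean squares, and the variance of the grand mean follows. All values are made up.

```python
import numpy as np

# One-way random-effects ANOVA sketch: outcome[i, j] = run j under parameter draw i.
rng = np.random.default_rng(4)
M, N = 20, 10                              # parameter draws x runs per draw
sigma_b, sigma_w = 2.0, 5.0                # "true" between / within SDs (made up)
theta = rng.normal(100.0, sigma_b, M)      # parameter-level means
y = theta[:, None] + rng.normal(0.0, sigma_w, (M, N))

row_means = y.mean(axis=1)
msb = N * np.sum((row_means - y.mean()) ** 2) / (M - 1)       # between mean square
msw = np.sum((y - row_means[:, None]) ** 2) / (M * (N - 1))   # within mean square

var_within_hat = msw                                          # E[MSW] = sigma_w^2
var_between_hat = max((msb - msw) / N, 0.0)                   # E[MSB] = sigma_w^2 + N*sigma_b^2
var_grand_mean = var_between_hat / M + var_within_hat / (M * N)

print("Estimated within-run variance:       ", var_within_hat)
print("Estimated between-parameter variance:", var_between_hat)
print("Variance of the grand mean:          ", var_grand_mean)
print("95% uncertainty interval:",
      y.mean() - 1.96 * np.sqrt(var_grand_mean),
      y.mean() + 1.96 * np.sqrt(var_grand_mean))
```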

10.
Different distributions of confounding variables in populations complicate any comparison of the relative frequency of an event. To resolve this, methods for fitting statistical models to tables of rates have recently been developed. One such model is the multiplicative model. We performed a Monte Carlo study of the multiplicative model for a 4 × 3 table of rates. For small samples the likelihood ratio test statistic was conservative for small expected cell counts, liberal for moderate expected counts, and performed well for large expected counts. The weighted least squares test statistic was generally more conservative and less powerful than both the likelihood ratio statistic and the Pearson statistic.

11.
We present a mixed treatment meta-analysis of antivirals for the treatment of influenza, where some trials report summary measures on at least one of the two outcomes: time to alleviation of fever and time to alleviation of symptoms. The synthesis is further complicated by the variety of summary measures reported: mean time, median time and proportion symptom free at the end of follow-up. We compare several models using the deviance information criterion and the contribution of different evidence sources to the residual deviance to aid model selection. A Weibull model with exchangeable treatment effects that are independent for each outcome but have a common random effect mean for the two outcomes gives the best fit according to these criteria. This model allows us to summarize the treatment effect on the two outcomes in a single summary measure and draw conclusions as to the most effective treatment. Amantadine and Oseltamivir were the most effective treatments, with probabilities of being the most effective treatment of 0.56 and 0.37, respectively. Amantadine reduces the duration of symptoms by an estimated 2.8 days, and Oseltamivir by 2.6 days, compared with placebo. The models provide flexible methods for the synthesis of evidence on multiple treatments in the absence of head-to-head trial data, when different summary measures are used and either different clinical outcomes are reported or the same outcomes are reported at different or multiple time points.

12.
This article focuses on the modelling and prediction of costs due to disease accrued over time, to inform the planning of future services and budgets. It is well documented that the modelling of cost data is often problematic due to the distribution of such data; for example, strongly right skewed with a significant percentage of zero-cost observations. An additional problem associated with modelling costs over time is that cost observations measured on the same individual at different time points will usually be correlated. In this study we compare the performance of four different multilevel/hierarchical models (which allow for both the within-subject and between-subject variability) for analysing healthcare costs in a cohort of individuals with early inflammatory polyarthritis (IP) who were followed up annually over a 5-year period from 1990/1991. The hierarchical models fitted included linear regression models and two-part models with log-transformed costs, and a two-part model with gamma regression and a log link. The cohort was split into a learning sample, used to fit the different models, and a test sample, used to assess the predictive ability of these models. To obtain predicted costs on the original cost scale (rather than the log-cost scale), two different retransformation factors were applied. All analyses were carried out using Bayesian Markov chain Monte Carlo (MCMC) simulation methods.
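As an illustration of the two-part structure only (the paper's models are hierarchical and fitted by Bayesian MCMC), the sketch below fits a non-hierarchical two-part model to simulated cost data with statsmodels: a logistic model for the probability of incurring any cost and a gamma GLM with a log link for the positive costs. All data and coefficients are invented.

```python
import numpy as np
import statsmodels.api as sm

# Illustrative (non-hierarchical) two-part cost model with simulated data.
rng = np.random.default_rng(5)
n = 2000
severity = rng.normal(size=n)
p_any = 1 / (1 + np.exp(-(-0.3 + 1.0 * severity)))
any_cost = rng.binomial(1, p_any)
mu_pos = np.exp(6.0 + 0.5 * severity)                      # mean positive cost
cost = any_cost * rng.gamma(shape=2.0, scale=mu_pos / 2.0)

X = sm.add_constant(severity)
part1 = sm.Logit(any_cost, X).fit(disp=0)                  # P(cost > 0)
pos = cost > 0
part2 = sm.GLM(cost[pos], X[pos],                          # E[cost | cost > 0]
               family=sm.families.Gamma(link=sm.families.links.Log())).fit()

# Predicted mean cost = P(cost > 0) * E[cost | cost > 0]
pred_mean = part1.predict(X) * part2.predict(X)
print("Observed mean cost: ", cost.mean())
print("Predicted mean cost:", pred_mean.mean())
```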

13.
Value in Health, 2015, 18(5): 597-604
Background: Repetitive transcranial magnetic stimulation (rTMS) therapy is a clinically safe, noninvasive, nonsystemic treatment for major depressive disorder. Objective: We evaluated the cost-effectiveness of rTMS versus pharmacotherapy for the treatment of patients with major depressive disorder who have failed at least two adequate courses of antidepressant medications. Methods: A 3-year Markov microsimulation model with 2-monthly cycles was used to compare the costs and quality-adjusted life-years (QALYs) of rTMS and a mix of antidepressant medications (including selective serotonin reuptake inhibitors, serotonin and norepinephrine reuptake inhibitors, tricyclics, noradrenergic and specific serotonergic antidepressants, and monoamine oxidase inhibitors). The model synthesized data sourced from published literature, national cost reports, and expert opinions. Incremental cost-utility ratios were calculated, and uncertainty of the results was assessed using univariate and multivariate probabilistic sensitivity analyses. Results: Compared with pharmacotherapy, rTMS is a dominant/cost-effective alternative for patients with treatment-resistant depressive disorder. The model predicted that QALYs gained with rTMS were higher than those gained with antidepressant medications (1.25 vs. 1.18 QALYs) while costs were slightly less (AU $31,003 vs. AU $31,190). In the Australian context, at the willingness-to-pay threshold of AU $50,000 per QALY gain, the probability that rTMS was cost-effective was 73%. Sensitivity analyses confirmed the superiority of rTMS in terms of value for money compared with antidepressant medications. Conclusions: Although both pharmacotherapy and rTMS are clinically effective treatments for major depressive disorder, rTMS is shown to outperform antidepressants in terms of cost-effectiveness for patients who have failed at least two adequate courses of antidepressant medications.
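A quick arithmetic check of the incremental comparison, using only the figures quoted in the abstract:

```python
# Worked check of the incremental comparison reported above (figures from the abstract):
cost_rtms, cost_drugs = 31_003.0, 31_190.0     # AU$
qaly_rtms, qaly_drugs = 1.25, 1.18

d_cost = cost_rtms - cost_drugs                # -187: rTMS costs slightly less
d_qaly = qaly_rtms - qaly_drugs                # +0.07 QALYs
print("Incremental cost:", d_cost, "Incremental QALYs:", round(d_qaly, 2))

# Negative incremental cost with positive incremental QALYs means rTMS dominates,
# so no ICER is reported; at the AU$50,000/QALY threshold the incremental net
# monetary benefit is:
print("Incremental NMB:", 50_000 * d_qaly - d_cost)
```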

14.
Objective:  To give guidance in defining probability distributions for model inputs in probabilistic sensitivity analysis (PSA) from a full Bayesian perspective.
Methods:  A common approach to defining probability distributions for model inputs in PSA on the basis of input-related data is to use the likelihood of the data on an appropriate scale as the foundation for the distribution around the inputs. We will look at this approach from a Bayesian perspective, derive the implicit prior distributions in two examples (proportions and relative risks), and compare these to alternative prior distributions.
Results:  In cases where data are sparse (in which case sensitivity analysis is crucial), commonly used approaches can lead to unexpected results. We show that this is because of the prior distributions that are implicitly assumed, namely that these are not as "uninformative" or "vague" as believed. We propose priors that we believe are more sensible for two examples and which are just as easy to apply.
Conclusions:  Input probability distributions should not be based on the likelihood of the data, but on the Bayesian posterior distribution calculated from this likelihood and an explicitly stated prior distribution.
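A small numerical illustration of the point (the exact parameterizations discussed in the paper may differ): with sparse data, say r = 1 event in n = 10 patients, the distribution assigned to a proportion-type PSA input depends heavily on the implicit or explicit prior.

```python
import numpy as np

# Illustration only: sampling a PSA input that is a proportion, with sparse data.
rng = np.random.default_rng(6)
r, n, n_draws = 1, 10, 100_000

# A common "likelihood-based" choice: Beta(r, n - r), which implicitly assumes
# the improper Beta(0, 0) prior rather than a genuinely vague one.
likelihood_based = rng.beta(r, n - r, n_draws)

# Explicit-prior alternatives: posterior = Beta(r + a, n - r + b)
uniform_prior = rng.beta(r + 1.0, n - r + 1.0, n_draws)    # Beta(1, 1) prior
jeffreys_prior = rng.beta(r + 0.5, n - r + 0.5, n_draws)   # Beta(0.5, 0.5) prior

for name, draws in [("Beta(r, n-r)", likelihood_based),
                    ("uniform prior", uniform_prior),
                    ("Jeffreys prior", jeffreys_prior)]:
    lo, hi = np.percentile(draws, [2.5, 97.5])
    print(f"{name:14s} mean={draws.mean():.3f} 95% interval=({lo:.3f}, {hi:.3f})")
```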

15.
BACKGROUND: We sought to develop and validate a decision-analytic model for the natural history of cervical cancer for the German health care context and to apply it to cervical cancer screening. METHODS: We developed a Markov model for the natural history of cervical cancer and cervical cancer screening in the German health care context. The model reflects current German practice standards for screening, diagnostic follow-up and treatment regarding cervical cancer and its precursors. Data for disease progression and cervical cancer survival were obtained from the literature and German cancer registries. Accuracy of Papanicolaou (Pap) testing was based on meta-analyses. We performed internal and external model validation using observed epidemiological data for unscreened women from different German cancer registries. The model predicts life expectancy, incidence of detected cervical cancer cases, lifetime cervical cancer risks and mortality. RESULTS: The model predicted a lifetime cervical cancer risk of 3.0% and a lifetime cervical cancer mortality of 1.0%, with a peak cancer incidence of 84/100,000 at age 51 years. These results were similar to observed data from German cancer registries, German literature data and results from other international models. Based on our model, annual Pap screening could prevent 98.7% of diagnosed cancer cases and 99.6% of deaths due to cervical cancer in women completely adherent to screening and compliant to treatment. Extending the screening interval from 1 year to 2, 3 or 5 years resulted in reduced screening effectiveness. CONCLUSIONS: This model provides a tool for evaluating the long-term effectiveness of different cervical cancer screening tests and strategies.
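For readers new to this model type, a generic Markov cohort trace is sketched below with hypothetical states and transition probabilities; these are not the calibrated values of the German model described above.

```python
import numpy as np

# Generic Markov cohort trace (hypothetical states and annual transition
# probabilities, invented for illustration).
states = ["Well", "Precancer", "Cancer", "Dead"]
P = np.array([                     # rows must sum to 1
    [0.979, 0.020, 0.000, 0.001],
    [0.100, 0.880, 0.018, 0.002],
    [0.000, 0.000, 0.850, 0.150],
    [0.000, 0.000, 0.000, 1.000],
])
assert np.allclose(P.sum(axis=1), 1.0)

cohort = np.array([1.0, 0.0, 0.0, 0.0])      # everyone starts in "Well"
cycles = 50
trace = [cohort]
for _ in range(cycles):
    cohort = cohort @ P                      # one Markov cycle
    trace.append(cohort)

trace = np.array(trace)
print("State occupancy after 50 cycles:", dict(zip(states, trace[-1].round(3))))
print("Cumulative cancer incidence over 50 cycles:",
      round(float(np.sum(trace[:-1, 1] * P[1, 2])), 3))
```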

16.
Value in Health, 2023, 26(4): 579-588
Objectives: This study aimed to understand the importance of criteria describing methods (eg, duration, costs, validity, and outcomes) according to decision makers for each decision point in the medical product lifecycle (MPLC) and to determine the suitability of a discrete choice experiment, swing weighting, probabilistic threshold technique, and best-worst scaling cases 1 and 2 at each decision point in the MPLC. Methods: Applying multicriteria decision analysis, an online survey was sent to MPLC decision makers (ie, industry, regulatory, and health technology assessment representatives). They ranked and weighted 19 methods criteria from an existing performance matrix about their respective decisions across the MPLC. All criteria were given a relative weight based on the ranking and rating in the survey, after which an overall suitability score was calculated for each preference elicitation method per decision point. Sensitivity analyses were conducted to reflect uncertainty in the performance matrix. Results: Fifty-nine industry, 29 regulatory, and 5 health technology assessment representatives completed the surveys. Overall, "estimating trade-offs between treatment characteristics" and "estimating weights for treatment characteristics" were highly important criteria throughout all MPLC decision points, whereas other criteria were most important only for specific MPLC stages. Swing weighting and the probabilistic threshold technique received significantly higher suitability scores across decision points than the other methods. Sensitivity analyses showed a substantial impact of uncertainty in the performance matrix. Conclusion: Although the discrete choice experiment is the most applied preference elicitation method, other methods should also be considered to address the needs of decision makers. Development of evidence-based guidance documents for designing, conducting, and analyzing such methods could enhance their use.

17.
Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially.
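The paper's formulae are not reproduced here; the sketch below illustrates the underlying idea in an illustrative formulation of my own: the variance of the PSA run means mixes parameter uncertainty with patient-level Monte Carlo noise, and the noise component can be estimated from the within-run variance and subtracted. All scales are made up.

```python
import numpy as np

# K PSA parameter draws, n simulated patients per run.
rng = np.random.default_rng(7)
K, n = 200, 500
true_param_sd, patient_sd = 300.0, 4000.0        # made-up scales (e.g. net benefit)

theta = rng.normal(20_000.0, true_param_sd, K)   # model mean per parameter draw
patients = theta[:, None] + rng.normal(0.0, patient_sd, (K, n))

run_means = patients.mean(axis=1)                # one result per PSA run
within_var = patients.var(axis=1, ddof=1).mean() # patient-level Monte Carlo variance
raw_var = run_means.var(ddof=1)                  # mixes both sources of uncertainty
param_var_hat = max(raw_var - within_var / n, 0.0)

print("Naive SD of run means:      ", np.sqrt(raw_var))
print("Corrected parameter-only SD:", np.sqrt(param_var_hat),
      "(true value:", true_param_sd, ")")
```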

18.

Background

Deciding on the most appropriate oral anticoagulation therapy for stroke prevention in patients with nonvalvular atrial fibrillation is difficult because multiple treatment options are available, and these vary in their clinical effects and relevant nonclinical characteristics.

Objectives

To use a multicriteria decision analysis (MCDA) to compare the oral anticoagulants apixaban, dabigatran, edoxaban, rivaroxaban, and vitamin K antagonists (VKAs; specifically warfarin) in patients with nonvalvular atrial fibrillation.

Methods

We identified the evaluation criteria through a targeted literature review and clinical judgment. The final evaluation model included nine clinical events and four other criteria. We ranked possibly fatal clinical event criteria on the basis of the differences in risks of fatal events and the corresponding window of therapeutic opportunity, as observed in clinical trials. Clinical judgment was used to rank other criteria. Full criteria ranking was used to calculate centroid weights, which were combined with individual treatment performances to estimate the overall value score for each treatment.

Results

Using such an MCDA, dabigatran yielded the highest overall value, approximately 6% higher than that of the second-best treatment, apixaban. Dabigatran also had the highest first-rank probability (0.72) in the probabilistic sensitivity analysis. Rivaroxaban performed worse than the other non-VKA oral anticoagulants, but better than VKAs (with both having 0.00 first-rank probability). The results were insensitive to changes in model structure.

Conclusions

When all key oral anticoagulant value criteria and their relative importance are investigated in an MCDA, dabigatran appears to rank the highest and warfarin the lowest.
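The paper's performance data are not reproduced here; the sketch below only illustrates how centroid weights are typically computed from a full criteria ranking (rank-order-centroid weights) and combined with performance scores into an overall value score. The number of criteria and all scores are invented.

```python
import numpy as np

def roc_weights(n: int) -> np.ndarray:
    """Rank-order-centroid weights: w_i = (1/n) * sum_{k=i}^{n} 1/k, best rank first."""
    inv = 1.0 / np.arange(1, n + 1)
    return np.cumsum(inv[::-1])[::-1] / n

n_criteria = 5                                   # e.g. 5 fully ranked criteria
w = roc_weights(n_criteria)
print("ROC weights:", w.round(3), "sum =", w.sum())

# Hypothetical 0-1 performance scores; rows = treatments, columns = ranked criteria.
scores = np.array([[0.9, 0.7, 0.8, 0.6, 0.5],    # treatment A
                   [0.8, 0.9, 0.6, 0.7, 0.9]])   # treatment B
print("Overall value scores:", (scores @ w).round(3))
```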

19.
A permutation test is proposed for assessing affection status models. The test uses marker data from regions with prior evidence of linkage to susceptibility genes, and three different test statistics are examined. We applied the test to the GAW10 data and found no evidence on chromosome 18 to reject the affection status model that groups individuals diagnosed with either bipolar I, bipolar II or unipolar. The chromosome 5 data gave similar results, and further suggested that individuals diagnosed with unipolar-single episode not be included as affected. A preliminary power study suggested that one of the proposed statistics, S, is to be preferred in certain circumstances. © 1997 Wiley-Liss, Inc.

20.
Decision analytical models are widely used in the economic evaluation of health care interventions with the objective of generating valuable information to assist health policy decision-makers to allocate scarce health care resources efficiently. The whole decision modelling process can be summarised in four stages: (i) a systematic review of the relevant data (including meta-analyses), (ii) estimation of all inputs into the model (including effectiveness, transition probabilities and costs), (iii) sensitivity analysis for data and model specifications, and (iv) evaluation of the model. The aim of this paper is to demonstrate how the individual components of decision modelling, outlined above, may be addressed simultaneously in one coherent Bayesian model (sometimes known as a comprehensive decision analytical model) and evaluated using Markov chain Monte Carlo simulation implemented in the specialist software WinBUGS. To illustrate the method described, it is applied to two illustrative examples: (1) the prophylactic use of neuraminidase inhibitors for the prevention of influenza, and (2) the use of taxanes for the second-line treatment of advanced breast cancer. The advantages of integrating the four stages outlined above into one comprehensive decision analytical model, compared to the conventional 'two-stage' approach, are discussed.
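A comprehensive model of this kind requires MCMC (the paper uses WinBUGS); the sketch below illustrates only the final evaluation stage, with simulated draws standing in for posterior output: incremental net monetary benefit is computed across willingness-to-pay values to give the probability that the new treatment is cost-effective.

```python
import numpy as np

# Minimal sketch of the evaluation stage: given draws of incremental cost and effect
# (here simulated, standing in for WinBUGS/MCMC posterior output), compute the
# probability that the new treatment is cost-effective at several thresholds.
rng = np.random.default_rng(8)
n_draws = 10_000
d_effect = rng.normal(0.05, 0.02, n_draws)       # incremental QALYs (illustrative)
d_cost = rng.normal(800.0, 300.0, n_draws)       # incremental cost (illustrative)

for wtp in (10_000, 20_000, 30_000, 50_000):
    nmb = wtp * d_effect - d_cost                # incremental net monetary benefit
    print(f"WTP {wtp:>6}: P(cost-effective) = {np.mean(nmb > 0):.2f}")
```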

