Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
The classification of ethnic status using name information.
Methodology is developed to classify ethnic status by name using a simple probabilistic model. The method considers four rules that may be used to classify individuals using three name components (first, middle, and last names). To do this, conditional probabilities of ethnic status are estimated from a sample in which ethnic status is known. Using a split-sample technique, the sensitivity and specificity of the methodology were examined in a data set of death registrations. Each classification rule performed well on the data from which it was constructed but was less efficient when applied to another population. Nevertheless, a linear model, in which the sum of the conditional probabilities of the three name components is used, achieved a sensitivity and specificity of 97% and 100%, respectively, in males and 89% and 100% in females.
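A minimal sketch of the summed-probability ("linear") rule the abstract describes. The names, counts, and the 1.5 cutoff are illustrative assumptions, not values from the paper:

```python
from collections import defaultdict

def estimate_tables(training):
    """Estimate P(target ethnic group | name component) for each of the
    three components from a sample with known ethnic status."""
    counts = [defaultdict(lambda: [0, 0]) for _ in range(3)]  # first/middle/last
    for names, is_target in training:
        for i, name in enumerate(names):
            counts[i][name][1] += 1          # total occurrences of this name
            counts[i][name][0] += int(is_target)  # occurrences in target group
    return [{n: k / t for n, (k, t) in tab.items()} for tab in counts]

def linear_score(names, tables, default=0.0):
    """Sum of component-wise conditional probabilities (the 'linear' rule);
    unseen names contribute a default probability."""
    return sum(tab.get(n, default) for n, tab in zip(names, tables))

# Hypothetical training sample: ((first, middle, last), in target group?)
training = [
    (("WEI", "MING", "CHEN"), True),
    (("JOHN", "PAUL", "SMITH"), False),
    (("LI", "NA", "WANG"), True),
    (("MARY", "ANN", "JONES"), False),
]
tables = estimate_tables(training)
score = linear_score(("WEI", "NA", "SMITH"), tables)
print(score, "-> target" if score > 1.5 else "-> not target")  # 1.5 cutoff is illustrative
```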

2.
In probabilistic sensitivity analyses, analysts assign probability distributions to uncertain model parameters and use Monte Carlo simulation to estimate the sensitivity of model results to parameter uncertainty. The authors present Bayesian methods for constructing large-sample approximate posterior distributions for probabilities, rates, and relative effect parameters, for both controlled and uncontrolled studies, and discuss how to use these posterior distributions in a probabilistic sensitivity analysis. These results draw on and extend procedures from the literature on large-sample Bayesian posterior distributions and Bayesian random effects meta-analysis. They improve on standard approaches to probabilistic sensitivity analysis by allowing a proper accounting for heterogeneity across studies as well as dependence between control and treatment parameters, while still being simple enough to be carried out on a spreadsheet. The authors apply these methods to conduct a probabilistic sensitivity analysis for a recently published analysis of zidovudine prophylaxis following rapid HIV testing in labor to prevent vertical HIV transmission in pregnant women.
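A sketch of the general approach under stated assumptions: a conjugate Beta posterior for a control-arm probability, a normal posterior on the log relative risk, and Monte Carlo propagation through a toy model. The counts and estimates are placeholders, not the paper's inputs; note that building the treatment-arm probability from the control draws preserves the dependence between control and treatment parameters mentioned above:

```python
import numpy as np

rng = np.random.default_rng(1)
n_draws = 10_000

# Hypothetical summary data, for illustration only: 40 transmissions in 400
# control births; log relative risk under prophylaxis -0.9 with SE 0.25.
events_c, n_c = 40, 400
log_rr_hat, se_log_rr = -0.9, 0.25

# Large-sample / conjugate approximate posteriors:
p_control = rng.beta(0.5 + events_c, 0.5 + n_c - events_c, n_draws)  # Jeffreys prior
rr = np.exp(rng.normal(log_rr_hat, se_log_rr, n_draws))              # normal on log scale
p_treat = p_control * rr          # treatment draws depend on control draws

# Propagate through a toy model: transmissions avoided per 1000 births.
avoided = 1000 * (p_control - p_treat)
print(f"mean avoided: {avoided.mean():.1f}, "
      f"95% interval: ({np.percentile(avoided, 2.5):.1f}, "
      f"{np.percentile(avoided, 97.5):.1f})")
```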

3.
Over the last decade or so, there have been many developments in methods to handle uncertainty in cost-effectiveness studies. In decision modelling, it is widely accepted that there needs to be an assessment of how sensitive the decision is to uncertainty in parameter values. The rationale for probabilistic sensitivity analysis (PSA) is primarily based on a consideration of the needs of decision makers in assessing the consequences of decision uncertainty. In this paper, we highlight some further compelling reasons for adopting probabilistic methods for decision modelling and sensitivity analysis, and specifically for adopting simulation from a Bayesian posterior distribution. Our reasoning is as follows. Firstly, cost-effectiveness analyses need to be based on all the available evidence, not a selected subset, and the uncertainties in the data need to be propagated through the model in order to provide a correct analysis of the uncertainties in the decision. In many, perhaps most, cases the evidence structure requires a statistical analysis that inevitably induces correlations between parameters. Deterministic sensitivity analysis requires that models are run with parameters fixed at 'extreme' values, but where parameter correlation exists it is not possible to identify sets of parameter values that can be considered 'extreme' in a meaningful sense. However, a correct probabilistic analysis can be readily achieved by Monte Carlo sampling from the joint posterior distribution of parameters. In this paper, we review some evidence structures commonly occurring in decision models, where analyses that correctly reflect the uncertainty in the data induce correlations between parameters. Frequently, this is because the evidence base includes information on functions of several parameters. It follows that, if health technology assessments are to be based on a correct analysis of all available data, then probabilistic methods must be used both for sensitivity analysis and for estimation of expected costs and benefits.
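A small illustration, under assumed values, of why parameter correlation matters: Monte Carlo sampling from a correlated joint (approximate) posterior gives a coherent interval, whereas one-at-a-time "extreme" values pair parameter combinations that are jointly implausible. All numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(7)

# Two logit-scale parameters estimated jointly, hence correlated (illustrative).
mean = np.array([-1.0, -1.4])        # logits of baseline and treatment event risk
cov = np.array([[0.04, 0.03],
                [0.03, 0.04]])       # strong positive correlation (rho = 0.75)

draws = rng.multivariate_normal(mean, cov, 20_000)
p0, p1 = 1 / (1 + np.exp(-draws.T))  # inverse logit of each parameter
risk_diff = p0 - p1

lo, hi = np.percentile(risk_diff, [2.5, 97.5])
print(f"probabilistic 95% interval for risk difference: ({lo:.3f}, {hi:.3f})")

# A deterministic one-at-a-time analysis at marginal 'extremes' ignores the
# correlation and evaluates jointly implausible parameter pairs:
z, sd = 1.96, 0.2
for a, b in [(mean[0] - z * sd, mean[1] + z * sd),
             (mean[0] + z * sd, mean[1] - z * sd)]:
    print("one-at-a-time extreme:", 1 / (1 + np.exp(-a)) - 1 / (1 + np.exp(-b)))
```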

4.
Confidence interval (CI) construction with respect to the proportion/rate difference for paired binary data has become a standard procedure in many clinical trials and medical studies. When the sample size is small and incomplete data are present, asymptotic CIs may be dubious and exact CIs are not yet available. In this article, we propose exact and approximate unconditional test-based methods for constructing CIs for the proportion/rate difference in the presence of incomplete paired binary data. Approaches based on one- and two-sided Wald's tests will be considered. Unlike asymptotic CI estimators, exact unconditional CI estimators always guarantee their coverage probabilities at or above the pre-specified confidence level. Our empirical studies further show that (i) approximate unconditional CI estimators usually yield a shorter expected confidence width (ECW) with their coverage probabilities being well controlled around the pre-specified confidence level; and (ii) the ECWs of the unconditional two-sided-test-based CI estimators are generally narrower than those of the unconditional one-sided-test-based CI estimators. Moreover, ECWs of asymptotic CIs may not necessarily be narrower than those of two-sided-test-based exact unconditional CIs. Two real examples are used to illustrate our methodologies.

5.
Increasingly complex models are being used to evaluate the cost-effectiveness of medical interventions. We describe the multiple sources of uncertainty that are relevant to such models, and their relation to either probabilistic or deterministic sensitivity analysis. A Bayesian approach appears natural in this context. We explore how sensitivity to patient heterogeneity and parameter uncertainty can be simultaneously investigated, and illustrate the necessary computation when expected costs and benefits can be calculated in closed form, such as in discrete-time discrete-state Markov models. Information about parameters can either be expressed as a prior distribution, or derived as a posterior distribution given a generalized synthesis of available data in which multiple sources of evidence can be differentially weighted according to their assumed quality. The resulting joint posterior distributions on costs and benefits can then provide inferences on incremental cost-effectiveness, best presented as posterior distributions over net-benefit and cost-effectiveness acceptability curves. These ideas are illustrated with a detailed running example concerning the cost-effectiveness of hip prostheses in different age-sex subgroups. All computations are carried out using freely available software for conducting Markov chain Monte Carlo analysis.
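A minimal sketch of the closed-form computation in a discrete-time, discrete-state Markov cohort model. The three states, transition probabilities, costs, and utilities below are illustrative stand-ins, not values from the hip-prosthesis example:

```python
import numpy as np

# Hypothetical three-state model: well, revision, dead.
P = np.array([[0.96, 0.02, 0.02],    # annual transition probabilities
              [0.00, 0.60, 0.40],
              [0.00, 0.00, 1.00]])
cost = np.array([100.0, 5000.0, 0.0])  # annual cost per state
qaly = np.array([0.85, 0.30, 0.0])     # annual utility per state
disc = 1 / 1.035                       # 3.5% annual discount factor

state = np.array([1.0, 0.0, 0.0])      # cohort starts in 'well'
total_cost = total_qaly = 0.0
for t in range(40):                    # 40 annual cycles
    total_cost += disc**t * state @ cost
    total_qaly += disc**t * state @ qaly
    state = state @ P                  # advance the cohort one cycle

print(f"expected cost {total_cost:.0f}, expected QALYs {total_qaly:.2f}")
```

Wrapping this loop inside draws from a joint prior or posterior on the inputs yields the simultaneous probabilistic analysis the abstract describes.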

6.
A computer program written in BASIC and implemented on a 48K RAM Apple II computer was developed to assist physicians in using decision analysis to solve clinical problems. Clinicians familiar with decision analysis can easily enter, modify, store, and retrieve decision trees. Probabilities and utilities can be calculated and a sensitivity analysis can be performed and printed. An entire decision tree can be listed, and a graphic display of any node with its branches, branch probabilities, and node utilities can be viewed and printed. The program is easy to use and can be learned in a few hours. Its flexibility and power will facilitate the application of decision analysis to a wide variety of clinical problems.

7.
Bias from unmeasured confounding is a persistent concern in observational studies, and sensitivity analysis has been proposed as a solution. In recent years, probabilistic sensitivity analysis using either Monte Carlo sensitivity analysis (MCSA) or Bayesian sensitivity analysis (BSA) has emerged as a practical analytic strategy when there are multiple bias parameter inputs. BSA uses Bayes' theorem to formally combine evidence from the prior distribution and the data. In contrast, MCSA samples bias parameters directly from the prior distribution. Intuitively, one would think that BSA and MCSA ought to give similar results: both methods use similar models and the same (prior) probability distributions for the bias parameters. In this paper, we illustrate the surprising finding that BSA and MCSA can give very different results. Specifically, we demonstrate that MCSA can give inaccurate uncertainty assessments (e.g. 95% intervals) that do not reflect the data's influence on uncertainty about unmeasured confounding. Using a data example from epidemiology and simulation studies, we show that certain combinations of data and prior distributions can result in dramatic prior-to-posterior changes in uncertainty about the bias parameters. This occurs because the application of Bayes' theorem in a non-identifiable model can sometimes rule out certain patterns of unmeasured confounding that are not compatible with the data. Consequently, the MCSA approach may give 95% intervals that are either too wide or too narrow and that do not have 95% frequentist coverage probability. Based on our findings, we recommend that analysts use BSA for probabilistic sensitivity analysis.
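A toy non-identifiable model can reproduce the MCSA/BSA divergence the authors report; the sketch below is an invented misclassification example, not the paper's confounding model. An outcome is detected with unknown sensitivity s, so the data inform only the product p*s: BSA's posterior rules out sensitivities the data cannot support, while MCSA keeps sampling them from the prior:

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented data: y apparent events in n subjects; observed rate estimates p * s.
y, n = 160, 200                          # observed rate 0.80
s_grid = np.linspace(0.60, 1.00, 401)    # Uniform(0.6, 1) prior on sensitivity s
p_grid = np.linspace(0.001, 0.999, 999)  # flat prior on the true rate p

# MCSA: sample s from its prior and correct the naive estimate (y/n)/s.
s_draws = rng.uniform(0.60, 1.00, 50_000)
p_mcsa = (y / n) / s_draws               # exceeds 1 whenever s < 0.8

# BSA: joint posterior over (p, s) via Bayes' theorem on a grid; the
# likelihood depends only on p*s, yet it still downweights s < 0.8.
S, Pm = np.meshgrid(s_grid, p_grid)
loglik = y * np.log(Pm * S) + (n - y) * np.log(1 - Pm * S)
post = np.exp(loglik - loglik.max())
post /= post.sum()
s_marginal = post.sum(axis=0)            # posterior for s shifts above 0.8

print("MCSA share of impossible draws (p > 1):", np.mean(p_mcsa > 1).round(3))
print("BSA posterior P(s < 0.8):", s_marginal[s_grid < 0.8].sum().round(3))
```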

8.
Programmatic cost analyses of preventive interventions commonly face a number of methodological difficulties. To determine the mean total costs and properly characterize variability, one often has to deal with small sample sizes, skewed distributions, and especially missing data. Standard approaches for dealing with missing data, such as multiple imputation, may suffer from a small sample size, a lack of appropriate covariates, or too few details around the method used to handle the missing data. In this study, we estimate total programmatic costs for a prevention trial evaluating the Strong African American Families-Teen program, an intervention focused on the prevention of substance abuse and risky sexual behavior. To account for missing data in the assessment of programmatic costs, we compare multiple imputation to probabilistic sensitivity analysis; the latter approach uses the collected cost data to create a distribution around each input parameter. We found that with the multiple imputation approach, the mean (95% confidence interval) incremental difference was $2,149 ($397, $3,901). With the probabilistic sensitivity analysis approach, the incremental difference was $2,583 ($778, $4,346). Although the true cost of the program is unknown, probabilistic sensitivity analysis may be a more viable alternative for capturing variability in estimates of programmatic costs when dealing with missing data, particularly with small sample sizes and a lack of strong predictor variables. Further, the larger standard errors produced by the probabilistic sensitivity analysis method may signal its ability to capture more of the variability in the data, thus better informing policymakers about the true cost of the intervention.

9.
BACKGROUND: Individuals sometimes express preferences that do not follow expected utility theory. Cumulative prospect theory adjusts for some phenomena by using decision weights rather than probabilities when analyzing a decision tree. METHODS: The authors examined how probability transformations from cumulative prospect theory might alter a decision analysis of a prophylactic therapy in AIDS, eliciting utilities from patients with HIV infection (n = 75) and calculating expected outcomes using an established Markov model. They next focused on transformations of three sets of probabilities: 1) the probabilities used in calculating standard-gamble utility scores; 2) the probabilities of being in discrete Markov states; 3) the probabilities of transitioning between Markov states. RESULTS: The same prophylaxis strategy yielded the highest quality-adjusted survival under all transformations. For the average patient, prophylaxis appeared relatively less advantageous when standard-gamble utilities were transformed. Prophylaxis appeared relatively more advantageous when state probabilities were transformed and relatively less advantageous when transition probabilities were transformed. Transforming standard-gamble and transition probabilities simultaneously decreased the gain from prophylaxis by almost half. Sensitivity analysis indicated that even near-linear probability weighting transformations could substantially alter quality-adjusted survival estimates. CONCLUSION: The magnitude of benefit estimated in a decision-analytic model can change significantly after using cumulative prospect theory. Incorporating cumulative prospect theory into decision analysis can provide a form of sensitivity analysis and may help describe when people deviate from expected utility theory.
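One common concrete choice for the probability transformation is the Tversky-Kahneman (1992) weighting function; the sketch below uses it to show how a small probability is overweighted relative to linear (expected-utility) weighting. The gamble and the gamma value are illustrative, not taken from the paper's model:

```python
def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting function;
    gamma = 0.61 is their published estimate for gains."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# Expected vs. prospect-theory-weighted value of a simple two-outcome gamble:
p_good, u_good, u_bad = 0.10, 1.0, 0.0
eu = p_good * u_good + (1 - p_good) * u_bad
cpt = tk_weight(p_good) * u_good + (1 - tk_weight(p_good)) * u_bad
print(f"linear weight: {eu:.3f}, transformed weight: {cpt:.3f}")
# The small probability 0.10 receives a decision weight of about 0.19,
# which is the kind of distortion the sensitivity analysis above probes.
```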

10.
Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
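A sketch of the bootstrap-within-PSA idea under invented patient-level data: resampling the observed outcomes, rather than assuming a theoretical distribution for the parameters, generates the draws for each Monte Carlo iteration. The outcome rates and per-patient costs are placeholders, only loosely inspired by the eradication example:

```python
import numpy as np

rng = np.random.default_rng(11)

# Simulated patient-level cure outcomes for two strategies (placeholders):
cure_erad = rng.binomial(1, 0.85, 120)       # eradication-style strategy
cure_standard = rng.binomial(1, 0.60, 120)   # comparator
cost_erad, cost_standard = 250.0, 80.0       # fixed per-patient costs

B = 5000
icers = np.empty(B)
for b in range(B):
    # Nonparametric bootstrap: resample each arm's outcomes with replacement.
    pa = rng.choice(cure_erad, cure_erad.size, replace=True).mean()
    pb = rng.choice(cure_standard, cure_standard.size, replace=True).mean()
    icers[b] = (cost_erad - cost_standard) / (pa - pb)   # cost per extra cure

lo, hi = np.percentile(icers, [2.5, 97.5])
print(f"bootstrap 95% interval for cost per additional cure: ({lo:.0f}, {hi:.0f})")
```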

11.
In this paper we propose a quasi-exact alternative to the exact unconditional method of Chan and Zhang (1999) for estimating confidence intervals for the difference of two independent binomial proportions in small-sample cases. The quasi-exact method is an approximation to a modified version of Chan and Zhang's method, where the two-sided p-value of an observation is defined by adding to the one-sided p-value the sum of all probabilities of more "extreme" events in the unobserved tail. We show that distinctly less conservative interval estimates can be derived following the modified definition of the two-sided p-value. The approximations applied in the quasi-exact method greatly simplify the computations, while the resulting infringements of the nominal level are small. Compared with other approximate methods, including the mid-p quasi-exact methods and the Miettinen and Nurminen (M&N) asymptotic method, our quasi-exact method demonstrates much better reliability in small-sample cases.

12.
Various expressions have appeared for sample size calculation based on the power function of McNemar's test for paired or matched proportions, especially with reference to a matched case-control study. These differ principally with respect to the expression for the variance of the statistic under the alternative hypothesis. In addition to the conditional power function, I identify and compare four distinct unconditional expressions. I show that the unconditional calculation of Schlesselman for the matched case-control study can be expressed as a first-order unconditional calculation as described by Miettinen. Corrections to Schlesselman's unconditional expression presented by Fleiss and Levin and by Dupont, which use different models to describe exposure association among matched cases and controls, are also equivalent to a first-order unconditional calculation. I present a simplification of these corrections that directly provides the underlying table of cell probabilities, from which one can perform any of the alternative sample size calculations. Also, I compare the four unconditional sample size expressions relative to the exact power function. The conclusion is that Miettinen's first-order expression tends to underestimate sample size, while his second-order expression is usually fairly accurate, though possibly slightly anti-conservative. A multinomial-based expression presented by Connor, among others, is also fairly accurate and is usually slightly conservative. Finally, a local unconditional expression of Mitra, among others, tends to be excessively conservative.
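As a worked illustration of one of the unconditional expressions, the sketch below implements the multinomial-based formula the text attributes to Connor, in which only the discordant-pair probabilities enter; treat the formula as an assumed form recalled from the literature rather than a transcription from this paper:

```python
from math import ceil, sqrt
from scipy.stats import norm

def mcnemar_n(p10, p01, alpha=0.05, power=0.80):
    """Pairs needed for McNemar's test under a multinomial-based
    unconditional expression (assumed Connor-style form):
    n = [z_{1-a/2} sqrt(pd) + z_{1-b} sqrt(pd - d^2)]^2 / d^2,
    where pd = p10 + p01 and d = p10 - p01."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    pd, d = p10 + p01, p10 - p01
    return ceil((za * sqrt(pd) + zb * sqrt(pd - d**2)) ** 2 / d**2)

# Example: discordant probabilities 0.25 vs 0.15.
print(mcnemar_n(0.25, 0.15))   # ~312 pairs under these assumptions
```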

13.
Cost-effectiveness acceptability curves have become a common way of presenting the results of probabilistic sensitivity analysis. However, these curves do not provide information on the loss of welfare, or net benefit (NB), in cases where a given intervention is not the optimal one. We describe an alternative approach to presenting the results of probabilistic sensitivity analysis, called the incremental benefit curve, that presents the entire distribution of the incremental NB of each intervention for a given willingness-to-pay (WTP) value. The incremental benefit curve provides the decision maker with information on the potential welfare loss from a given intervention in scenarios where it is not the optimal intervention, and thus is a useful complement to the acceptability curve.
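A sketch of the distinction under placeholder PSA output: at a fixed WTP, an acceptability-curve point reports only the probability that an intervention is optimal, whereas the incremental benefit curve retains the full distribution of incremental NB, including the size of the welfare loss when the intervention is not optimal. All distributions below are invented:

```python
import numpy as np

rng = np.random.default_rng(5)
wtp = 50_000                      # willingness-to-pay per QALY (illustrative)

# Placeholder PSA draws for two interventions:
cost_a = rng.normal(12_000, 1_500, 10_000); qaly_a = rng.normal(5.0, 0.4, 10_000)
cost_b = rng.normal(15_000, 2_000, 10_000); qaly_b = rng.normal(5.2, 0.4, 10_000)

nb_a, nb_b = wtp * qaly_a - cost_a, wtp * qaly_b - cost_b
inb = nb_b - nb_a                 # incremental NB of B vs A at this WTP

# The acceptability curve reports only this probability...
print("P(B optimal):", np.mean(inb > 0).round(3))
# ...while the incremental benefit curve shows the whole distribution of inb,
# including how much welfare is lost when B is not optimal:
losses = inb[inb < 0]
print("expected NB loss when B is not optimal:", losses.mean().round(0))
```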

14.
A computer program has been developed to aid in diagnostic and therapeutic decisions concerning a patient with chest pain. It provides an analysis tailored to the individual patient, in that the data used in the analysis depend on specific patient characteristics. The user can elect to examine all stored data values (probabilities, quality-adjusted life expectancies, and monetary costs), and to alter any of them. Decisions at three stages in the patient workup are considered: prior to any diagnostic test, following an exercise tolerance test, and following coronary angiography. The results of the analysis can be displayed in several tabular and graphical formats. In addition, the program can carry out a Monte Carlo simulation (or probabilistic sensitivity analysis) to determine the effect of uncertainty in the data on the stability of the choice of optimal strategy.

15.
BACKGROUND: Endoscopic retrograde cholangiopancreatography (ERCP) is considered the gold standard for imaging of the biliary tract but is associated with complications. Less invasive imaging techniques, such as magnetic resonance cholangiopancreatography (MRCP), have a much lower complication rate. The accuracy of MRCP is comparable to that of ERCP, and MRCP may be more effective and cost-effective, particularly in cases for which the suspected prevalence of disease is low and further intervention can be avoided. A model was constructed to compare the effectiveness and cost-effectiveness of MRCP and ERCP in patients with a previous history of cholecystectomy, presenting with abdominal pain and/or abnormal liver function tests. METHODS: Diagnostic accuracy estimates came from a systematic review of MRCP. A decision analytic model was constructed to represent the diagnostic and treatment pathway of this patient group. The model compared the following two diagnostic strategies: (i) MRCP followed by ERCP if positive, with management then based on ERCP; and (ii) ERCP only. Deterministic and probabilistic analyses were used to assess the likelihood of MRCP being cost-effective. Sensitivity analyses examined the impact of prior probabilities of common bile duct stones (CBDS) and test performance characteristics. The outcomes considered were costs, quality-adjusted life years (QALYs), and cost per additional QALY. RESULTS: The deterministic analysis indicated that MRCP was dominant over ERCP. At prior probabilities of CBDS below 60 percent, MRCP was the less costly initial diagnostic test; above this threshold, ERCP was less costly. Similarly, at probabilities of CBDS below 68 percent, MRCP was also the more effective strategy (generated more QALYs); above this threshold, ERCP became the more effective strategy. Probabilistic sensitivity analyses indicated that, in this patient group, for which there is a low to moderate probability of CBDS, there was a 59 percent likelihood that MRCP was cost-saving, an 83 percent chance that MRCP was more effective with higher quality-adjusted survival, and an 83 percent chance that MRCP had a cost-effectiveness ratio more favorable than $50,000 per QALY gained. CONCLUSIONS: Costs and cost-effectiveness depend on the prior probability of CBDS. However, probabilistic analysis indicated, with a high degree of certainty, that MRCP was the more effective and cost-effective initial test in postcholecystectomy patients with a low to moderate probability of CBDS.

16.
Objective: To evaluate the lifetime costs and effectiveness of fibula free-flap reconstruction by combining postoperative survival, costs, and health utilities, providing patients and surgeons with a clearer basis for decision making and giving policymakers a theoretical basis for allocating healthcare resources. Methods: The cost-effectiveness analysis was based on a Markov model in which postoperative patients occupy one of four Markov states: disease-free, salvaged, distant metastasis, and dead. Survival curves were fitted through the transition probabilities between Markov states, and each state was assigned corresponding costs and health utilities. Free fibula reconstruction of mandibular continuity was compared with no reconstruction using the incremental cost-effectiveness ratio (ICER). Model stability was tested with probabilistic sensitivity analysis based on Monte Carlo simulation. Results: Cohort analysis showed that, over the whole life cycle, fibula reconstruction of mandibular continuity gained an additional 0.33 quality-adjusted life years at an extra cost of 32,659 CNY. Probabilistic sensitivity analysis showed that willingness to pay affected the choice of surgical plan: a willingness to pay above 99,075 CNY/QALY is the basis for choosing reconstruction. Conclusion: Free fibula reconstruction of mandibular continuity achieves a greater improvement in quality of survival at higher cost, so patients' willingness to pay drives the medical decision; reconstruction is recommended when willingness to pay exceeds 99,075 CNY/QALY.
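The reported decision rule can be checked directly from the figures in the abstract (the small gap between the computed ICER and the reported 99,075 CNY/QALY threshold presumably reflects rounding of the published inputs):

```python
# Incremental cost and effectiveness as reported in the abstract.
delta_cost, delta_qaly = 32_659, 0.33
icer = delta_cost / delta_qaly
print(f"ICER = {icer:,.0f} CNY/QALY")   # ~99,000 CNY/QALY

def prefer_reconstruction(wtp, icer=icer):
    """Reconstruction is cost-effective when WTP per QALY exceeds the ICER."""
    return wtp > icer

print(prefer_reconstruction(120_000), prefer_reconstruction(80_000))  # True False
```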

17.
Sensitivity and specificity are two customary performance measures associated with medical diagnostic tests. Typically, they are modeled independently as a function of risk factors using logistic regression, which provides estimated functions for these probabilities. Change in these probabilities across levels of risk factors is of primary interest and the indirect relationship is often displayed using a receiver operating characteristic curve. We refer to this as analysis of 'first-order' behavior. Here, we consider what we refer to as 'second-order' behavior where we examine the stochastic dependence between the (random) estimates of sensitivity and specificity. To do so, we argue that a model for the four cell probabilities that determine the joint distribution of screening test result and outcome result is needed. Such a modeling induces sensitivity and specificity as functions of these cell probabilities. In turn, this raises the issue of a coherent specification for these cell probabilities, given risk factors, i.e. a specification that ensures that all probabilities calculated under it fall between 0 and 1. This leads to the question of how to provide models that are coherent and mechanistically appropriate as well as computationally feasible to fit, particularly with large data sets. The goal of this article is to illuminate these issues both algebraically and through analysis of a real data set.
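A small sketch of the 'second-order' point: when sensitivity and specificity are induced by a coherent joint model with a shared risk-factor coefficient, their estimates become stochastically dependent. The logit-scale posterior draws below are invented stand-ins for a fitted joint model, not output from the paper's data; the inverse-logit link is what keeps every derived probability in (0, 1):

```python
import numpy as np

rng = np.random.default_rng(9)

# Stand-in posterior draws for (a, c, b): two intercepts and one shared
# risk-factor coefficient on the logit scale (all values illustrative).
a = rng.normal(2.0, 0.3, 20_000)    # sensitivity intercept
c = rng.normal(2.5, 0.3, 20_000)    # specificity intercept
b = rng.normal(0.8, 0.2, 20_000)    # shared risk-factor effect

x = 1.0                              # evaluate at one covariate level
sens = 1 / (1 + np.exp(-(a + b * x)))   # induced sensitivity, always in (0, 1)
spec = 1 / (1 + np.exp(-(c - b * x)))   # induced specificity, always in (0, 1)

# 'Second-order' behaviour: the shared coefficient makes the two estimates
# negatively correlated across draws, even though each is modeled coherently.
print("corr(sens, spec):", np.corrcoef(sens, spec)[0, 1].round(3))
```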

18.
The missingness mechanism is, in theory, unverifiable based only on observed data. If missingness not at random is suspected, researchers often perform a sensitivity analysis to evaluate the impact of various missingness mechanisms. In general, sensitivity analysis approaches require a full specification of the relationship between missing values and missingness probabilities. Such a relationship can be specified based on a selection model, a pattern-mixture model, or a shared parameter model. Under the selection modeling framework, we propose a sensitivity analysis approach using a nonparametric multiple imputation strategy. The proposed approach only requires specifying the correlation coefficient between missing values and selection (response) probabilities under a selection model. The correlation coefficient is a standardized measure and can be used as a natural sensitivity analysis parameter. The sensitivity analysis involves multiple imputations of missing values, yet the sensitivity parameter is only used to select imputing/donor sets; hence, the proposed approach may be more robust against misspecification of the sensitivity parameter. For illustration, the proposed approach is applied to incomplete measurements of preoperative hemoglobin A1c levels in patients who had high-grade carotid artery stenosis and were scheduled for surgery. A simulation study is conducted to evaluate the performance of the proposed approach.

19.
In clinical decision making, it is common to ask whether, and how much, a diagnostic procedure is contributing to subsequent treatment decisions. Statistically, quantification of the value of the information provided by a diagnostic procedure can be carried out using decision trees with multiple decision points, representing both the diagnostic test and the subsequent treatments that may depend on the test's results. This article investigates probabilistic sensitivity analysis approaches for exploring and communicating parameter uncertainty in such decision trees. Complexities arise because uncertainty about a model's inputs determines uncertainty about optimal decisions at all decision nodes of a tree. We present the expected utility solution strategy for multistage decision problems in the presence of uncertainty on input parameters, propose a set of graphical displays and summarization tools for probabilistic sensitivity analysis in multistage decision trees, and provide an application to axillary lymph node dissection in breast cancer.

20.
We consider the problem of assessing new and existing technologies for their cost-effectiveness in the case where data on both costs and effects are available from a clinical trial, and we address it by means of the cost-effectiveness acceptability curve. The main difficulty in these analyses is that cost data usually exhibit highly skewed and heavy-tailed distributions, so it can be extremely difficult to produce realistic probabilistic models for the underlying population distribution, and in particular to model accurately the tail of the distribution, which is highly influential in estimating the population mean. Here, in order to integrate uncertainty about the model into the analysis of cost data and into cost-effectiveness analyses, we consider an approach based on Bayesian model averaging: instead of choosing a single parametric model, we specify a set of plausible models for costs and estimate the mean cost with a weighted mean of its posterior expectations under each model, with weights given by the posterior model probabilities. The results are compared with those obtained with a semi-parametric approach that does not require any assumption about the distribution of costs.
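A sketch of the model-averaging idea using BIC weights as a common large-sample stand-in for posterior model probabilities (the paper's exact weighting scheme may differ); the cost data, the two candidate families, and the priors are all assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)
costs = rng.lognormal(mean=7.0, sigma=1.2, size=300)   # skewed placeholder data

# Candidate parametric models for costs:
models = {"lognormal": stats.lognorm, "gamma": stats.gamma}
bic, mean_hat = {}, {}
n = costs.size
for name, dist in models.items():
    params = dist.fit(costs, floc=0)        # ML fit with location fixed at 0
    ll = dist.logpdf(costs, *params).sum()
    k = len(params) - 1                     # free parameters (loc is fixed)
    bic[name] = k * np.log(n) - 2 * ll
    mean_hat[name] = dist.mean(*params)     # model-based estimate of mean cost

# BIC weights approximate posterior model probabilities under equal priors.
b = np.array(list(bic.values()))
w = np.exp(-0.5 * (b - b.min()))
w /= w.sum()
bma_mean = sum(wi * mean_hat[name] for wi, name in zip(w, mean_hat))
print(dict(zip(bic, w.round(3))), f"BMA mean cost: {bma_mean:.0f}")
```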
