Similar Documents
20 similar documents found (search time: 15 ms)
1.
Clinical journals increasingly illustrate uncertainty about the costs and effects of health care interventions using cost-effectiveness acceptability curves (CEACs). CEACs present the probability that each competing alternative is optimal over a range of values of the cost-effectiveness threshold. The objective of this article is to demonstrate the limitations of CEACs for presenting uncertainty in cost-effectiveness analyses. These limitations arise because the CEAC cannot distinguish dramatically different joint distributions of incremental cost and effect. A CEAC is not sensitive to any change of the incremental joint distribution in the upper-left and lower-right quadrants of the cost-effectiveness plane, nor is it sensitive to a radial shift of the incremental joint distribution in the upper-right and lower-left quadrants. As a result, CEACs are ambiguous to risk-averse policy makers, inhibit integration with risk attitude, hamper synthesis with other evidence or opinions, and are unhelpful for assessing the need for more research. Moreover, CEACs may mislead policy makers and can incorrectly suggest medical importance. Both for guiding immediate decisions and for prioritizing future research, these considerable drawbacks of CEACs should make us rethink their use in communicating uncertainty. Unlike CEACs, confidence and credible intervals do not conflate the magnitude and precision of the net benefit of health care interventions. They therefore allow (in)formal synthesis of study results with risk attitude and other evidence or opinions. Presenting the value of information in addition to these intervals allows policy makers to evaluate the need for more empirical research.
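To make the radial-shift insensitivity concrete, here is a minimal Python/NumPy sketch with assumed (hypothetical) distributions for incremental cost and effect. Because the acceptance event lambda*dE - dC > 0 is invariant to scaling both increments by the same positive constant, the CEAC is identical for two very different joint distributions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
dE = rng.normal(0.5, 0.4, n)          # incremental effect (hypothetical QALYs)
dC = rng.normal(10_000, 8_000, n)     # incremental cost (hypothetical currency)

def ceac(dE, dC, lambdas):
    # P(net benefit > 0) at each threshold: the CEAC ordinate
    return np.array([(lam * dE - dC > 0).mean() for lam in lambdas])

lambdas = np.linspace(0, 100_000, 11)
print(ceac(dE, dC, lambdas))
print(ceac(3 * dE, 3 * dC, lambdas))  # radially scaled joint distribution, identical CEAC
```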

2.
Sequential analysis is a statistical approach to analysing cumulative data. Its goal is to reach a decision as soon as sufficient evidence has accumulated for one hypothesis or another. In this article, three different statistical approaches, the frequentist, the Bayesian and the likelihood approach, are discussed in relation to sequential analysis. In particular, the less well-known likelihood approach is elucidated.
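As a hedged illustration of the likelihood approach to sequential analysis (an assumed normal example, not the article's own), the sketch below accumulates the log-likelihood ratio after each observation and stops once the evidence for either hypothesis passes a strength benchmark k:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
k = 8.0                                   # benchmark for "fairly strong" evidence
log_lr, n = 0.0, 0
while abs(log_lr) < np.log(k):            # keep sampling until LR >= k or <= 1/k
    x = rng.normal(1.0, 1.0)              # data generated under H1 here
    log_lr += norm.logpdf(x, 1.0, 1.0) - norm.logpdf(x, 0.0, 1.0)
    n += 1
side = "H1" if log_lr > 0 else "H0"
print(f"stopped after n={n} observations, LR={np.exp(log_lr):.1f} favouring {side}")
```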

3.
Cost-effectiveness acceptability curves (CEACs) have been widely adopted as a method to quantify and graphically represent uncertainty in economic evaluation studies of health-care technologies. However, there remain some common fallacies regarding the nature and shape of CEACs that largely result from the 'textbook' illustration of the CEAC. This 'textbook' CEAC shows a smooth curve starting at probability 0, with an asymptote to 1 for higher money values of the health outcome (lambda). But this familiar 'ogive' shape, which makes the 'textbook' CEAC look like a cumulative distribution function, is just one special case of the CEAC. The reality is that the CEAC can take many shapes and turns, because it is a graphic transformation from the cost-effectiveness plane, where the joint density of incremental costs and effects may 'straddle' quadrants, with attendant discontinuities and asymptotes. In fact, CEACs: (i) do not have to cut the y-axis at 0; (ii) do not have to asymptote to 1; (iii) are not always monotonically increasing in lambda; and (iv) do not represent cumulative distribution functions (cdfs). Within this paper we present a 'gallery' of CEACs in order to identify the fallacies and illustrate the facts surrounding the CEAC. The aim of the paper is to serve as a reference tool to accompany the increased use of CEACs within major medical journals.
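Two of these facts are easy to check numerically. In the assumed simulation below (hypothetical numbers, not from the paper's gallery), the CEAC cuts the y-axis at P(dC < 0) rather than 0 and tends to P(dE > 0) rather than 1 as lambda grows:

```python
import numpy as np

rng = np.random.default_rng(2)
dE = rng.normal(0.3, 0.5, 200_000)        # incremental effect may be negative
dC = rng.normal(-500, 2_000, 200_000)     # intervention may even save money

for lam in [0, 1_000, 10_000, 1e8]:
    print(f"lambda={lam:>12,.0f}  CEAC={np.mean(lam * dE - dC > 0):.3f}")
print(f"y-intercept P(dC<0)={np.mean(dC < 0):.3f}, asymptote P(dE>0)={np.mean(dE > 0):.3f}")
```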

4.

Background  

The cost-effectiveness acceptability curve (CEAC) is a method for summarizing the uncertainty in estimates of cost-effectiveness. The CEAC, derived from the joint distribution of costs and effects, illustrates the (Bayesian) probability that the data are consistent with a true cost-effectiveness ratio falling below a specified ceiling ratio. The objective of the paper is to illustrate how to construct and interpret a CEAC.
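One common construction (a sketch with hypothetical patient-level data, not the paper's worked example) bootstraps costs and effects by arm and, at each ceiling ratio, reports the proportion of bootstrap replicates with positive incremental net benefit:

```python
import numpy as np

rng = np.random.default_rng(3)
n_pat = 150
cost_t = rng.gamma(2.0, 3_000.0, n_pat)   # treatment-arm costs (hypothetical)
eff_t = rng.normal(0.8, 0.3, n_pat)       # treatment-arm QALYs (hypothetical)
cost_c = rng.gamma(2.0, 2_500.0, n_pat)
eff_c = rng.normal(0.6, 0.3, n_pat)

B = 2_000
dE, dC = np.empty(B), np.empty(B)
for b in range(B):                        # nonparametric bootstrap of arm means
    it = rng.integers(0, n_pat, n_pat)
    ic = rng.integers(0, n_pat, n_pat)
    dE[b] = eff_t[it].mean() - eff_c[ic].mean()
    dC[b] = cost_t[it].mean() - cost_c[ic].mean()

for lam in np.linspace(0, 50_000, 6):
    print(f"ceiling={lam:>8,.0f}  P(cost-effective)={np.mean(lam * dE - dC > 0):.2f}")
```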

5.
A Bayesian approach to stochastic cost-effectiveness analysis.
The aim of this paper is to briefly outline a Bayesian approach to cost-effectiveness analysis (CEA). Historically, frequentists have been cautious of Bayesian methodology, which is often held to be synonymous with a subjective approach to statistical analysis. In this paper, the potential overlap between Bayesian and frequentist approaches to CEA is explored--the focus being on the empirical and uninformative prior-based approaches to Bayesian methods rather than the use of subjective beliefs. This approach emphasizes the advantage of a Bayesian interpretation for decision-making while retaining the robustness of the frequentist approach. In particular, the use of cost-effectiveness acceptability curves is examined. A traditional frequentist approach is equivalent to a Bayesian approach assuming no prior information, while where there is pre-existing information available from which to construct a prior distribution, an empirical Bayes approach is equivalent to a frequentist approach based on pooling the available data. Cost-effectiveness acceptability curves directly address the decision-making problem in CEA. Although it is argued that their interpretation as the probability that an intervention is cost-effective given the data requires a Bayesian interpretation, this should generate no misgivings for the frequentist.
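The claimed correspondence is easy to verify numerically in the simplest normal case. A tiny sketch (assumed estimate and standard error, chosen for illustration): under a flat prior, the posterior probability that the incremental net benefit is positive equals one minus the one-sided frequentist p-value.

```python
from scipy.stats import norm

inb_hat, se = 1_200.0, 800.0                  # assumed INB estimate and standard error
p_one_sided = norm.sf(inb_hat / se)           # frequentist one-sided p-value for INB <= 0
post_prob = norm.cdf(inb_hat / se)            # P(INB > 0 | data) under a flat prior
print(post_prob, 1.0 - p_one_sided)           # identical by construction
```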

6.
The aim of this paper is to discuss the use of Bayesian methods in cost-effectiveness analysis (CEA) and the common ground between Bayesian and traditional frequentist approaches. A further aim is to explore the use of the net benefit statistic and its advantages over the incremental cost-effectiveness ratio (ICER) statistic. In particular, the use of cost-effectiveness acceptability curves is examined as a device for presenting the implications of uncertainty in a CEA to decision makers. Although it is argued that the interpretation of such curves as the probability that an intervention is cost-effective given the data requires a Bayesian approach, this should generate no misgivings for the frequentist. Furthermore, cost-effectiveness acceptability curves estimated using the net benefit statistic are exactly equivalent to those estimated from an appropriate analysis of ICERs on the cost-effectiveness plane. The principles examined in this paper are illustrated by application to the cost-effectiveness of blood pressure control in the U.K. Prospective Diabetes Study (UKPDS 40). Due to a lack of good-quality prior information on the cost and effectiveness of blood pressure control in diabetes, a Bayesian analysis assuming an uninformative prior is argued to be most appropriate. This generates exactly the same cost-effectiveness results as a standard frequentist analysis.
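The exact equivalence between the net-benefit and ICER-based constructions of the CEAC can be checked directly: classify each simulated point on the cost-effectiveness plane quadrant by quadrant and compare with the net benefit criterion. A sketch with assumed distributions (not the UKPDS data):

```python
import numpy as np

rng = np.random.default_rng(4)
dE = rng.normal(0.2, 0.3, 100_000)            # assumed incremental effects
dC = rng.normal(1_000, 1_500, 100_000)        # assumed incremental costs
lam = 20_000

p_nb = np.mean(lam * dE - dC > 0)             # net-benefit route
# ICER route, quadrant by quadrant on the CE plane:
ne = (dE > 0) & (dC > 0) & (dC / dE < lam)    # costlier, more effective, ICER < lam
se_ = (dE > 0) & (dC <= 0)                    # dominant: cheaper and more effective
sw = (dE < 0) & (dC < 0) & (dC / dE > lam)    # cheaper, less effective, ICER > lam
p_icer = np.mean(ne | se_ | sw)
print(p_nb, p_icer)                           # equal up to measure-zero boundaries
```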

7.
The Bayesian approach to statistics has been growing rapidly in popularity as an alternative to the frequentist approach in the appraisal of healthcare technologies in clinical trials. Bayesian methods have significant advantages over classical frequentist statistical methods in the presentation of evidence to decision makers. A fundamental feature of a Bayesian analysis is the use of prior information as well as the clinical trial data in the final analysis. However, the incorporation of prior information remains a controversial subject that presents a potential barrier to the acceptance of practical uses of Bayesian methods. The purpose of this paper is to stimulate a debate on the use of prior information in evidence submitted to decision makers. We discuss the advantages of incorporating genuine prior information in cost-effectiveness analyses of clinical trial data and explore mechanisms to safeguard scientific rigor in the use of such prior information.

8.
Many meta-analyses combine results from only a small number of studies, a situation in which the between-study variance is imprecisely estimated when standard methods are applied. Bayesian meta-analysis allows incorporation of external evidence on heterogeneity, providing the potential for more robust inference on the effect size of interest. We present a method for performing Bayesian meta-analysis using data augmentation, in which we represent an informative conjugate prior for between-study variance by pseudo data and use meta-regression for estimation. To assist in this, we derive predictive inverse-gamma distributions for the between-study variance expected in future meta-analyses. These may serve as priors for heterogeneity in new meta-analyses. In a simulation study, we compare approximate Bayesian methods using meta-regression and pseudo data against fully Bayesian approaches based on importance sampling techniques and Markov chain Monte Carlo (MCMC). We compare the frequentist properties of these Bayesian methods with those of the commonly used frequentist DerSimonian and Laird procedure. The method is implemented in standard statistical software and provides a less complex alternative to standard MCMC approaches. An importance sampling approach produces almost identical results to standard MCMC approaches, and results obtained through meta-regression and pseudo data are very similar. On average, data augmentation provides results closer to MCMC if implemented using restricted maximum likelihood estimation rather than DerSimonian and Laird or maximum likelihood estimation. The methods are applied to real datasets, and an extension to network meta-analysis is described. The proposed method facilitates Bayesian meta-analysis in a way that is accessible to applied researchers. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
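The sketch below is not the authors' data-augmentation implementation; it is a simplified grid approximation (assumed data and prior) showing the core idea of placing an informative inverse-gamma prior on the between-study variance in a normal random-effects model:

```python
import numpy as np
from scipy.stats import invgamma, norm

y = np.array([0.30, 0.10, 0.45, 0.25])   # hypothetical study effect estimates
v = np.array([0.04, 0.06, 0.09, 0.05])   # hypothetical within-study variances
a, b = 2.0, 0.1                          # assumed informative IG(a, b) prior on tau^2

tau2_grid = np.linspace(1e-4, 1.0, 400)
logpost = []
for t2 in tau2_grid:
    w = 1.0 / (v + t2)
    mu_hat = (w * y).sum() / w.sum()     # conditional estimate of the mean effect
    # marginal likelihood of y given tau^2, flat prior on mu (constants dropped)
    loglik = norm.logpdf(y, mu_hat, np.sqrt(v + t2)).sum() - 0.5 * np.log(w.sum())
    logpost.append(invgamma.logpdf(t2, a, scale=b) + loglik)
logpost = np.array(logpost)
post = np.exp(logpost - logpost.max())
post /= post.sum()
print("posterior mean of tau^2:", (tau2_grid * post).sum())
```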

9.
It is conventionally thought that a small p-value confers high credibility on the observed alternative hypothesis, and that a repetition of the same experiment will have a high probability of resulting again in statistical significance. It is shown that if the observed difference is the true one, the probability of repeating a statistically significant result, the 'replication probability', is substantially lower than expected. The reason for this is a mistake that generates other seeming paradoxes: the interpretation of the post-trial p-value in the same way as the pre-trial alpha error. The replication probability can be used as a frequentist counterpart of Bayesian and likelihood methods to show that p-values overstate the evidence against the null hypothesis.
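A quick numerical check of the headline claim (a standard two-sided normal test at alpha = 0.05; my own worked example, not the paper's): if the observed effect equals the true effect, a study that just achieved p = 0.05 replicates significance only about half the time.

```python
from scipy.stats import norm

alpha = 0.05
z_crit = norm.ppf(1 - alpha / 2)              # 1.96 for a two-sided test
for p in [0.05, 0.01, 0.001]:
    z_obs = norm.ppf(1 - p / 2)               # z-value implied by the observed p
    rep = norm.sf(z_crit - z_obs)             # P(significant again | true effect = observed),
                                              # ignoring the tiny wrong-direction tail
    print(f"observed p={p:<6}  replication probability={rep:.2f}")
```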

10.
The incremental cost-effectiveness ratio has long been the standard parameter of interest in the assessment of the cost-effectiveness of a new treatment. However, due to concerns with interpretability and statistical inference, authors have suggested using the willingness-to-pay for a unit of health benefit to define the incremental net benefit as an alternative. The incremental net benefit has a more consistent interpretation and is amenable to routine statistical procedures. These procedures rely on the fact that the willingness-to-accept compensation for the loss of a unit of health benefit (at some cost saving) is the same as the willingness-to-pay for it. Theoretical and empirical evidence suggests, however, that in health care the willingness-to-accept is about twice the willingness-to-pay. We use Bayesian methods to provide a statistical procedure for the cost-effectiveness comparison of two arms of a randomized clinical trial that allows the willingness-to-pay and the willingness-to-accept to take different values. An example is provided.
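A minimal sketch of the idea (assumed posterior draws and thresholds; the paper's actual Bayesian procedure is not reproduced here): value gains in health at the willingness-to-pay and losses at a willingness-to-accept set to twice that, then compute the probability that the new arm is preferred.

```python
import numpy as np

rng = np.random.default_rng(5)
dE = rng.normal(0.1, 0.2, 100_000)            # posterior draws of incremental effect
dC = rng.normal(500, 900, 100_000)            # posterior draws of incremental cost
wtp, wta = 20_000, 40_000                     # WTA assumed twice WTP

value = np.where(dE >= 0, wtp * dE, wta * dE) # kinked monetary valuation of health
print("P(new arm preferred):", np.mean(value - dC > 0))
```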

11.

Background  

Cost-effectiveness acceptability curves (CEACs) describe the probability that a new treatment or intervention is cost-effective. The net benefit regression framework (NBRF) allows cost-effectiveness analysis to be done in a simple regression framework. The objective of the paper is to illustrate how net benefit regression can be used to construct a CEAC.
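A compact sketch of the approach on hypothetical trial data (names and numbers below are assumptions): compute each patient's net benefit at a given threshold, regress it on a treatment indicator, and convert the resulting t-statistic into one point on the CEAC.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import t as t_dist

rng = np.random.default_rng(6)
n = 200
treat = np.repeat([0, 1], n // 2)                      # hypothetical 1:1 trial
cost = 3_000 + 800 * treat + rng.normal(0, 1_200, n)
eff = 0.6 + 0.08 * treat + rng.normal(0, 0.25, n)

lam = 30_000
nb = lam * eff - cost                                  # patient-level net benefit
fit = sm.OLS(nb, sm.add_constant(treat)).fit()
inb, se = fit.params[1], fit.bse[1]                    # coefficient = incremental net benefit
ceac_pt = t_dist.cdf(inb / se, df=fit.df_resid)        # P(INB > 0): one CEAC ordinate
print(f"INB={inb:.0f}, CEAC({lam:,})={ceac_pt:.2f}")
```

Repeating the regression over a grid of thresholds traces out the full curve.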

12.
Jalpa A. Doshi, PhD; Henry A. Glick, PhD; Daniel Polsky, PhD. Value in Health 2006, 9(5): 334-340
OBJECTIVE: The adoption and diffusion of new medical treatments depend increasingly on evidence of costs and cost-effectiveness. This evidence is increasingly being generated from economic data collected in randomized clinical trials. The objective of this article is to evaluate the statistical methods used for the analysis of cost data in economic evaluations conducted alongside randomized controlled trials. METHODS: Systematic review of economic evaluations published in 2003 that were based on patient-level cost or resource-use data collected in randomized trials. One hundred fifteen articles were identified from the MEDLINE database. The use of statistical methods for 1) joint comparison of costs and effects and assessment of stochastic uncertainty, 2) incremental cost estimation, and 3) handling of incomplete or censored cost data was evaluated. RESULTS: Only 42 (37%) of the 115 economic evaluations presented a cost-effectiveness ratio or estimated net benefits, and 24 (57%) of these reported the uncertainty of this statistic. A comparison of costs alone was more common, with 92 (80%) of the 115 studies statistically comparing costs between treatment groups. Of these, about two-thirds (62; 68%) used at least one statistical test appropriate for drawing inferences about arithmetic means. Incomplete cost data were reported in 67 (58%) studies, with only two using a published statistical approach for handling censored cost data. CONCLUSION: The quality of statistical methods used in economic evaluations conducted alongside randomized controlled trials was poor in the majority of studies published in 2003. Adoption of appropriate statistical methods is required before the results from such studies can consistently provide valid information to decision-makers.

13.
Likelihood methods for measuring statistical evidence
Blume JD. Statistics in Medicine 2002, 21(17): 2563-2599
Focused on interpreting data as statistical evidence, the evidential paradigm uses likelihood ratios to measure the strength of statistical evidence. Under this paradigm, re-examination of accumulating evidence is encouraged because (i) the likelihood ratio, unlike a p-value, is unaffected by the number of examinations and (ii) the probability of observing strong misleading evidence is naturally low, even for study designs that re-examine the data with each new observation. Further, the controllable probabilities of observing misleading and weak evidence provide assurance that the study design is reliable without affecting the strength of statistical evidence in the data. This paper illustrates the ideas and methods associated with using likelihood ratios to measure statistical evidence. It contains a comprehensive introduction to the evidential paradigm, including an overview of how to quantify the probability of observing misleading evidence for various study designs. The University Group Diabetes Program (UGDP), a classic and still controversial multi-centred clinical trial, is used as an illustrative example. Some of the original UGDP results, and subsequent re-analyses, are presented for comparison purposes.
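To illustrate the paradigm's central guarantee (an assumed normal example, not a UGDP reanalysis): under the null, the probability of observing a likelihood ratio of at least k in favour of the alternative is universally bounded by 1/k, and is usually far smaller.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
k, n, reps = 8.0, 10, 50_000
xbar = rng.normal(0.0, 1.0 / np.sqrt(n), reps)   # sample means generated under H0: mu=0
# likelihood ratio for H1: mu=0.5 versus H0: mu=0, based on the sample mean
log_lr = (norm.logpdf(xbar, 0.5, 1 / np.sqrt(n))
          - norm.logpdf(xbar, 0.0, 1 / np.sqrt(n)))
print("P(misleading evidence LR >= k | H0):", np.mean(log_lr >= np.log(k)),
      "  universal bound 1/k:", 1 / k)
```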

14.
A new drug is approved for use if its efficacy and safety have been demonstrated. However, healthcare decision makers may also require data on the cost-effectiveness of new drugs if they are to make informed decisions about their place in therapy. Cost-effectiveness evidence may lag behind the effectiveness data in terms of its availability. We explored the timeliness of delivering cost-effectiveness information about new drugs with established effectiveness and significant financial impact. Drugs were identified based on guidance documents and reports published by the UK National Institute for Clinical Excellence (NICE), and the following data were collected: dates of publication of the first effectiveness and cost-effectiveness evidence, methodology of the cost-effectiveness analysis, and quality scores of the clinical studies. Eighteen guidance documents on the use of new drugs/drug groups published by NICE by October 2001 covered 30 health technologies, which were included in the analysis. The analysis of the evidence showed that their effectiveness had been demonstrated in the last 12 years, with only two exceptions. However, cost-effectiveness evidence had been published for 21 (70%) of the technologies. The cost-effectiveness was estimated in 52.4% of cases using models. The good-quality effectiveness evidence lagged behind the first effectiveness evidence by 1.40 years (95% CI 0.57–2.23), while the mean lag between the first effectiveness evidence and the first cost-effectiveness publications was estimated as 3.20 years (95% CI 1.76–4.65). Cost-effectiveness evidence thus often lags behind effectiveness evidence. As a result, healthcare decision makers are sometimes in the position of having to take decisions without adequate cost-effectiveness data at their disposal.

15.
This paper discusses the application of an adaptive design for treatment arm selection in an oncology trial, with survival as the primary endpoint and disease progression as a key secondary endpoint. We carried out treatment arm selection at an interim analysis by using Bayesian predictive power combining evidence from the two endpoints. At the final analysis, we carried out a frequentist statistical test of efficacy on the survival endpoint. We investigated several approaches (Bonferroni approach, 'Dunnett-like' approach, a conditional error function approach and a combination p-value approach) with respect to their power and the precise conditions under which type I error control is attained.

16.
The cost-effectiveness acceptability curve (CEAC) shows the probability that an option ranks first for net benefit. Where more than two options are under consideration, the CEAC offers only a partial picture of the decision uncertainty. This paper discusses the appropriateness of showing the full set of rank probabilities for reporting the results of economic evaluation in multiple technology appraisal (MTA). A case study is used to illustrate the calculation of rank probabilities and associated metrics, based on Monte Carlo simulations from a decision model. Rank probabilities are often used to show uncertainty in the results of network meta-analysis, but until now have not been used for economic evaluation. They may be useful decision-making tools to complement the CEAC in specific MTA contexts.
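A minimal sketch of the proposed display (simulated net benefit draws for three hypothetical options, not the case study's numbers): rank the options within each Monte Carlo simulation and tabulate the probability of each rank; the rank-1 probabilities reproduce the CEAC ordinates.

```python
import numpy as np

rng = np.random.default_rng(8)
sims = 50_000
nb = np.column_stack([                     # Monte Carlo net benefit draws per option
    rng.normal(10_000, 3_000, sims),       # option A (assumed)
    rng.normal(11_000, 5_000, sims),       # option B (assumed)
    rng.normal(9_500, 1_000, sims),        # option C (assumed)
])
ranks = np.argsort(np.argsort(-nb, axis=1), axis=1)   # rank 0 = highest net benefit
for j, name in enumerate("ABC"):
    probs = [np.mean(ranks[:, j] == r) for r in range(3)]
    print(name, " ".join(f"P(rank {r + 1})={p:.2f}" for r, p in enumerate(probs)))
```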

17.
It has been suggested that scepticism among decision-makers about using cost-effectiveness analysis (CEA) is caused in part by the low level of the cost-effectiveness "thresholds" in the economic evaluation literature. This has led Ubel and colleagues to call for higher threshold values of US$200,000 or more per quality-adjusted life-year. We show that these arguments fail to identify the objective of CEA and hence do not consider whether or how the threshold relates to this objective. We show that incremental cost-effectiveness ratios (ICERs) cannot be used to identify an efficient use of resources--the "biggest bang for the bucks"--allocated to health care. On the contrary, the practical consequence of using the ICER approach is shown to be an increase in health care expenditures, or "bigger bucks for making a bang", without any evidence of the bang being bigger (i.e. that this leads to an increase in benefits to the population). We present an alternative approach that provides an unambiguous method of determining whether a new intervention leads to an increase in health gains from whatever resources are to be made available to health care decision-makers.

18.
Health status and outcomes are frequently measured on an ordinal scale. For high-throughput genomic datasets, the common approach to analyzing ordinal response data has been to break the problem into one or more dichotomous response analyses. This dichotomous response approach does not make use of all available data and therefore leads to loss of power and increases the number of type I errors. Herein we describe an innovative frequentist approach that combines two statistical techniques, L(1) penalization and continuation ratio models, for modeling an ordinal response using gene expression microarray data. We conducted a simulation study to assess the performance of two computational approaches and two model selection criteria for fitting frequentist L(1) penalized continuation ratio models. Moreover, we empirically compared the approaches using three application datasets, each of which seeks to classify an ordinal class using microarray gene expression data as the predictor variables. We conclude that the L(1) penalized constrained continuation ratio model is a useful approach for modeling an ordinal response for datasets where the number of covariates (p) exceeds the sample size (n) and the decision of whether to use Akaike Information Criterion (AIC) or Bayesian Information Criterion (BIC) for selecting the final model should depend upon the similarities between the pathologies underlying the disease states to be classified.
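A simplified sketch of a constrained continuation ratio model with an L1 penalty (synthetic data standing in for microarray expression, so p exceeds n; this is not the authors' code): the ordinal response is expanded into conditional binary problems P(Y = j | Y >= j) that share a single penalized coefficient vector, with level-specific intercepts.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n, p, K = 60, 500, 3
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 1.0                                      # five informative "genes"
lin = X @ beta + rng.normal(size=n)
y = np.digitize(lin, np.quantile(lin, [1/3, 2/3]))  # ordinal classes 0 < 1 < 2

# expand to conditional binary problems with shared slopes
rows, resp, lev = [], [], []
for j in range(K - 1):
    keep = y >= j
    rows.append(X[keep])
    resp.append((y[keep] == j).astype(int))
    lev.append(np.full(keep.sum(), j))
lev = np.concatenate(lev)
# dummy columns for levels j >= 1 act as level-specific intercepts
Xe = np.hstack([np.vstack(rows), (lev[:, None] == np.arange(1, K - 1)).astype(float)])
ye = np.concatenate(resp)

fit = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(Xe, ye)
print("genes selected:", int((fit.coef_[0, :p] != 0).sum()), "of", p)
```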

19.
A survey of the likelihood approach to bioequivalence trials
Choi L, Caffo B, Rohde C. Statistics in Medicine 2008, 27(24): 4874-4894
Bioequivalence (BE) trials are abbreviated clinical trials in which a generic drug or new formulation is evaluated to determine whether it is 'equivalent' to a corresponding previously approved brand-name drug or formulation. In this paper, we survey the process of testing BE and advocate the likelihood paradigm for representing the resulting data as evidence. We emphasize the unique conflicts between hypothesis testing and confidence intervals in this area--which we believe are indicative of systemic defects in the frequentist approach--that the likelihood paradigm avoids. We suggest the direct use of profile likelihoods for evaluating BE. We discuss how the likelihood approach is useful for presenting the evidence for both average and population BE within a unified framework. We also examine the main properties of profile likelihoods and estimated likelihoods under simulation. This simulation study shows that profile likelihoods offer a viable alternative to the (unknown) true likelihood for a range of parameters commensurate with BE research.
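The sketch below shows the flavour of the approach on simulated log(AUC) data (a parallel-group normal model with assumed numbers; the paper's designs are richer): profile the likelihood over the formulation difference and compare a 1/8 likelihood interval with the usual equivalence limits log(0.8) and log(1.25).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(10)
nT = nR = 24
test = rng.normal(0.05, 0.2, nT)          # log(AUC), test formulation (assumed)
ref = rng.normal(0.00, 0.2, nR)           # log(AUC), reference formulation (assumed)

def profile_loglik(delta):
    # maximize over the nuisance parameters mu_R and sigma for fixed delta
    mu_r = (np.sum(test - delta) + ref.sum()) / (nT + nR)
    resid = np.concatenate([test - mu_r - delta, ref - mu_r])
    sigma = np.sqrt(np.mean(resid ** 2))
    return norm.logpdf(resid, 0.0, sigma).sum()

deltas = np.linspace(-0.4, 0.4, 801)
ll = np.array([profile_loglik(d) for d in deltas])
rel = np.exp(ll - ll.max())               # standardized profile likelihood
inside = deltas[rel >= 1 / 8]             # 1/8 likelihood interval
print(f"1/8 interval: [{inside.min():.3f}, {inside.max():.3f}]")
print("BE supported:", np.log(0.8) < inside.min() and inside.max() < np.log(1.25))
```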

20.
The relative excess odds or risk due to interaction (i.e., RERIOR and RERI) play an important role in epidemiologic data analysis and interpretation. Previous authors have advocated frequentist approaches based on the nonparametric bootstrap, the method of variance estimates recovery, and the profile likelihood for estimating confidence intervals. As an alternative, we propose a Bayesian approach that accounts for parameter constraints and estimates the RERIOR in a case-control study from a linear additive odds-ratio model, or the RERI in a cohort study from a linear additive risk-ratio model. We show that Bayesian credible intervals can often be obtained more easily than frequentist confidence intervals. Furthermore, the Bayesian approach can easily be extended to adjust for confounders. Because posterior computation with inequality constraints can be accomplished easily using free software, the proposed Bayesian approaches may be useful in practice.
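As a rough sketch of the Bayesian flavour (normal approximations on the log odds-ratio scale with assumed estimates, rather than the paper's constrained linear additive odds model): draw the three odds ratios from their approximate posteriors and summarize RERI = OR11 - OR10 - OR01 + 1 with a credible interval.

```python
import numpy as np

rng = np.random.default_rng(11)
draws = 100_000
log_or10 = rng.normal(np.log(1.5), 0.20, draws)   # exposure A alone (assumed)
log_or01 = rng.normal(np.log(1.8), 0.25, draws)   # exposure B alone (assumed)
log_or11 = rng.normal(np.log(3.6), 0.30, draws)   # joint exposure (assumed)

reri = np.exp(log_or11) - np.exp(log_or10) - np.exp(log_or01) + 1
lo, hi = np.percentile(reri, [2.5, 97.5])
print(f"median RERI={np.median(reri):.2f}, 95% CrI ({lo:.2f}, {hi:.2f})")
```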
