Similar Articles
20 similar articles found.
1.
One misconception (of many) about Bayesian analyses is that prior distributions introduce assumptions that are more questionable than assumptions made by frequentist methods; yet the assumptions in priors can be more reasonable than the assumptions implicit in standard frequentist models. Another misconception is that Bayesian methods are computationally difficult and require special software. But perfectly adequate Bayesian analyses can be carried out with common software for frequentist analysis. Under a wide range of priors, the accuracy of these approximations is just as good as the frequentist accuracy of the software, and more than adequate for the inaccurate observational studies found in the health and social sciences. An easy way to do Bayesian analyses is via inverse-variance (information) weighted averaging of the prior with the frequentist estimate. A more general method expresses the prior distributions in the form of prior data or 'data equivalents', which are then entered in the analysis as a new data stratum. That form reveals the strength of the prior judgements being introduced and may lead to tempering of those judgements. It is argued that a criterion for scientific acceptability of a prior distribution is that it be expressible as prior data, so that the strength of prior assumptions can be gauged by how much data they represent.
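A minimal Python sketch of the inverse-variance (information) weighted averaging described in this abstract; the log relative risk, its variance, and the prior mean and variance are illustrative values, not taken from the article.

```python
import math

# Illustrative inputs (hypothetical values): a frequentist estimate of a log
# relative risk with its variance, and a prior expressed on the same scale.
beta_hat, var_hat = math.log(2.0), 0.25      # frequentist log-RR and its variance
beta_prior, var_prior = 0.0, 0.5             # prior mean 0 (RR = 1) and prior variance

# Information (inverse-variance) weights.
w_hat, w_prior = 1.0 / var_hat, 1.0 / var_prior

# Approximate posterior: information-weighted average of prior and estimate.
beta_post = (w_hat * beta_hat + w_prior * beta_prior) / (w_hat + w_prior)
var_post = 1.0 / (w_hat + w_prior)

print(f"posterior RR ~ {math.exp(beta_post):.2f}, "
      f"95% limits {math.exp(beta_post - 1.96 * math.sqrt(var_post)):.2f} "
      f"to {math.exp(beta_post + 1.96 * math.sqrt(var_post)):.2f}")
```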

2.
This article describes extensions of the basic Bayesian methods using data priors to regression modelling, including hierarchical (multilevel) models. These methods provide an alternative to the parsimony-oriented approach of frequentist regression analysis. In particular, they replace arbitrary variable-selection criteria by prior distributions, and by doing so facilitate realistic use of imprecise but important prior information. They also allow Bayesian analyses to be conducted using standard regression packages; one need only be able to add variables and records to the data set. The methods thus facilitate the use of Bayesian solutions to problems of sparse data, multiple comparisons, subgroup analyses and study bias. Because these solutions have a frequentist interpretation as "shrinkage" (penalized) estimators, the methods can also be viewed as a means of implementing shrinkage approaches to multiparameter problems.
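The idea of entering a prior as extra data records can be illustrated, under simplifying assumptions, for a linear model: a Normal(0, v) prior on a slope is equivalent to one pseudo-record fitted by weighted least squares. The sketch below is not the article's construction (which covers generalized linear and hierarchical models); the simulated data and prior variance are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(scale=1.0, size=n)   # residual SD assumed known and equal to 1

# Prior on the slope: beta ~ Normal(0, v). For a linear model with residual
# variance sigma2, this is equivalent to one pseudo-record with outcome 0,
# covariate value 1, and weight sigma2 / v in weighted least squares.
sigma2, v = 1.0, 0.25
X = np.column_stack([np.ones(n), x])
X_aug = np.vstack([X, [0.0, 1.0]])            # pseudo-record hits only the slope
y_aug = np.append(y, 0.0)                     # prior mean serves as its "observed" outcome
w = np.append(np.ones(n), sigma2 / v)         # prior weight = information ratio

# Weighted least squares on the augmented data = posterior-mode (shrinkage) fit.
W = np.diag(w)
beta = np.linalg.solve(X_aug.T @ W @ X_aug, X_aug.T @ W @ y_aug)
print("shrunken slope:", beta[1])
```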

3.
The most common Bayesian methods for sample size determination (SSD) are reviewed in the non-sequential context of a confirmatory phase III trial in drug development. After recalling the regulatory viewpoint on SSD, we discuss the relevance of the various priors applied to the planning of clinical trials. We then investigate whether these Bayesian methods could compete with the usual frequentist approach to SSD and be considered as acceptable from a regulatory viewpoint.

4.
Phase II clinical trials are typically designed as two-stage studies, in order to ensure early termination of the trial if the interim results show that the treatment is ineffective. Most two-stage designs, developed under both frequentist and Bayesian frameworks, select the second stage sample size before observing the first stage data. This may cause paradoxical situations during the practical conduct of the trial. To avoid these potential problems, we suggest a Bayesian predictive strategy to derive an adaptive two-stage design, where the second stage sample size is not selected in advance, but depends on the first stage result. The criterion we propose is based on a modification of a Bayesian predictive design recently presented in the literature (Statist. Med. 2008; 27:1199-1224). The distinction between analysis and design priors is essential for the practical implementation of the procedure: some guidelines for choosing these prior distributions are discussed and their impact on the required sample size is examined.
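A hedged sketch of the adaptive idea: after seeing the stage-1 data, choose the smallest second-stage size whose predictive probability of final success reaches a target, keeping the analysis and design priors separate. All numerical settings below are hypothetical, and the criterion is a simplification rather than the authors' exact modification.

```python
from scipy.stats import beta, betabinom

p0, eta, gamma = 0.20, 0.95, 0.80        # null rate, posterior cut-off, predictive target
a_an, b_an = 1.0, 1.0                    # analysis prior (vague Beta(1,1))
a_de, b_de = 6.0, 4.0                    # design prior (optimistic, hypothetical)
n1, s1 = 15, 5                           # observed stage-1 data (illustrative)

def success(total_s, total_n):
    """Final claim: posterior (analysis prior) P(p > p0) >= eta."""
    return beta.sf(p0, a_an + total_s, b_an + total_n - total_s) >= eta

def predictive_prob_success(n2):
    """Predictive probability of final success over stage-2 outcomes,
    predicting with the design prior updated by the stage-1 data."""
    pred = betabinom(n2, a_de + s1, b_de + n1 - s1)
    return sum(pred.pmf(s2) for s2 in range(n2 + 1) if success(s1 + s2, n1 + n2))

# Smallest second-stage size whose predictive probability of success reaches gamma.
n2 = next((n for n in range(0, 201) if predictive_prob_success(n) >= gamma), None)
print("adaptive second-stage sample size:", n2)
```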

5.
The original definitions of false discovery rate (FDR) and false non-discovery rate (FNR) can be understood as the frequentist risks of false rejections and false non-rejections, respectively, conditional on the unknown parameter, while the Bayesian posterior FDR and posterior FNR are conditioned on the data. From a Bayesian point of view, it seems natural to take into account the uncertainties in both the parameter and the data. In this spirit, we propose averaging out the frequentist risks of false rejections and false non-rejections with respect to some prior distribution of the parameters to obtain the average FDR (AFDR) and average FNR (AFNR), respectively. A linear combination of the AFDR and AFNR, called the average Bayes error rate (ABER), is considered as an overall risk. Some useful formulas for the AFDR, AFNR and ABER are developed for normal samples with hierarchical mixture priors. The idea of finding threshold values by minimizing the ABER or controlling the AFDR is illustrated using a gene expression data set. Simulation studies show that the proposed approaches are more powerful and robust than the widely used FDR method.
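A rough Monte Carlo illustration of averaging the frequentist FDR and FNR over a mixture prior to obtain the AFDR, AFNR and an equally weighted ABER; the mixture settings, rejection threshold, and combination weights are arbitrary choices for illustration, not the closed-form results derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
m, pi1, mu, tau = 1000, 0.1, 2.0, 0.5      # hypothetical mixture-prior settings
t, reps = 2.0, 500                         # rejection threshold, Monte Carlo size

fdr, fnr = [], []
for _ in range(reps):
    nonnull = rng.random(m) < pi1                       # which nulls are false, drawn from the prior
    theta = np.where(nonnull, rng.normal(mu, tau, m), 0.0)
    x = rng.normal(theta, 1.0)                          # observed test statistics
    reject = np.abs(x) > t
    R, V = reject.sum(), (reject & ~nonnull).sum()      # rejections, false rejections
    T = (~reject & nonnull).sum()                       # false non-rejections
    fdr.append(V / max(R, 1))
    fnr.append(T / max(m - R, 1))

afdr, afnr = np.mean(fdr), np.mean(fnr)
aber = 0.5 * afdr + 0.5 * afnr            # equal weights chosen arbitrarily here
print(f"AFDR={afdr:.3f}  AFNR={afnr:.3f}  ABER={aber:.3f}")
```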

6.
In applying capture-recapture methods for closed populations to epidemiology, one needs to estimate the total number of people with a certain disease in a certain research area by using several lists with information on patients. Problems of list errors often arise due to mistyping or misinformation. Adopting the concept of tag-loss methodology in animal populations, Seber et al. (Biometrics 2000; 56:1227-1232) proposed solutions to a two-list problem. This article reports a simulation study in which Bayesian point estimates based on the improper constant and Jeffreys priors for the unknown population size N can have smaller frequentist standard errors and MSEs than the estimates proposed in Seber et al. (2000). The Bayesian credible intervals based on the same priors also have good frequentist coverage probabilities, while some of the frequentist confidence interval procedures have drastically poor coverage. Seber's real data set on gestational diabetes is analysed with the proposed new methods.

7.
Noninferiority trials have recently gained importance for clinical trials of drugs and medical devices. In these trials, most statistical methods have been used from a frequentist perspective, and historical data have been used only for the specification of the noninferiority margin Δ>0. In contrast, Bayesian methods, which have been studied recently, are advantageous in that they can use historical data to specify prior distributions and are expected to enable more efficient decision making than frequentist methods by borrowing information from historical trials. In the case of noninferiority trials for response probabilities π1, π2, Bayesian methods evaluate the posterior probability that H1: π1 > π2 − Δ is true. To numerically calculate this posterior probability, the complicated Appell hypergeometric function or approximation methods are used. Further, the theoretical relationship between Bayesian and frequentist methods is unclear. In this work, we give the exact expression of the posterior probability of noninferiority under some mild conditions and propose a Bayesian noninferiority test framework which can flexibly incorporate historical data by using the conditional power prior. Further, we show the relationship between the Bayesian posterior probability and the P value of the Fisher exact test. From this relationship, our method can be interpreted as the Bayesian noninferiority extension of the Fisher exact test, and we can treat superiority and noninferiority in the same framework. Our method is illustrated through Monte Carlo simulations to evaluate the operating characteristics, an application to real HIV clinical trial data, and a sample size calculation using historical data.
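A simple sketch of the posterior probability of noninferiority, P(π1 > π2 − Δ | data), with independent Beta priors, approximated by simulation; the paper derives an exact expression and a conditional power prior for historical data, neither of which is reproduced here, and the counts are invented.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(2)
delta = 0.10                              # noninferiority margin
x1, n1 = 78, 100                          # new treatment: responders / patients (illustrative)
x2, n2 = 80, 100                          # control arm

# Independent Beta(1,1) priors; historical data could be folded in here instead.
post1 = beta(1 + x1, 1 + n1 - x1)
post2 = beta(1 + x2, 1 + n2 - x2)

# Monte Carlo approximation of P(pi1 > pi2 - delta | data).
draws = 200_000
p_noninf = np.mean(post1.rvs(draws, random_state=rng) >
                   post2.rvs(draws, random_state=rng) - delta)
print(f"posterior probability of noninferiority ~ {p_noninf:.3f}")
```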

8.
Robust Bayesian sample size determination in clinical trials
This article deals with the determination of a sample size that guarantees the success of a trial. We follow a Bayesian approach, and we say an experiment is successful if it yields a large posterior probability that an unknown parameter of interest (an unknown treatment effect or an effects-difference) is greater than a chosen threshold. In this context, a straightforward sample size criterion is to select the minimal number of observations so that the predictive probability of a successful trial is sufficiently large. In this paper we address the most typical criticism of Bayesian methods, their sensitivity to prior assumptions, by proposing a robust version of this sample size criterion. Specifically, instead of a single distribution, we consider a class of plausible priors for the parameter of interest. Robust sample sizes are then selected by looking at the predictive distribution of the lower bound of the posterior probability that the unknown parameter is greater than a chosen threshold. For their flexibility and mathematical tractability, we consider classes of epsilon-contamination priors. As a specific application we consider sample size determination for a Phase III trial.
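A toy version of the robust criterion: replace the epsilon-contamination class by a small finite set of Beta priors, take the lower bound of the posterior probability over that set, and choose the smallest sample size whose predictive probability (under a base prior) of clearing the threshold is large enough. The priors and thresholds are hypothetical, and the finite set is only a crude stand-in for the classes used in the article.

```python
from scipy.stats import beta, betabinom

p0, eta, target = 0.5, 0.90, 0.80           # effect threshold, posterior level, predictive target
prior_class = [(6, 2), (1, 1), (3, 3)]      # small set of plausible Beta priors (hypothetical)
a0, b0 = prior_class[0]                     # base prior used to predict the data

def lower_post_prob(s, n):
    """Lower bound, over the prior class, of the posterior P(p > p0 | s, n)."""
    return min(beta.sf(p0, a + s, b + n - s) for a, b in prior_class)

def pred_prob_robust_success(n):
    """Predictive probability (under the base prior) that even the most
    pessimistic posterior in the class clears the threshold eta."""
    pred = betabinom(n, a0, b0)
    return sum(pred.pmf(s) for s in range(n + 1) if lower_post_prob(s, n) >= eta)

n_robust = next((n for n in range(1, 301) if pred_prob_robust_success(n) >= target), None)
print("robust sample size:", n_robust)
```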

9.
Data augmentation priors facilitate contextual evaluation of prior distributions and the generation of Bayesian outputs from frequentist software. Previous papers have presented approximate Bayesian methods using 2x2 tables of 'prior data' to represent lognormal relative-risk priors in stratified and regression analyses. The present paper describes extensions that use the tables to represent generalized-F prior distributions for relative risks, which subsume lognormal priors as a limiting case. The method provides a means to increase tail-weight or skew the prior distribution for the log relative risk away from normality, while retaining the simple 2x2 table form of the prior data. When prior normality is preferred, it also provides a more accurate lognormal relative-risk prior in the 2x2 table format. For more compact representation in regression analyses, the prior data can be compressed into a single data record. The method is illustrated with historical data from a study of electronic foetal monitoring and neonatal death.
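For the simpler lognormal case, a 2x2 'prior data' stratum can be built so that the table's point estimate and variance match a target prior. The sketch below illustrates this for a log odds ratio, whose 2x2-table variance has the simple form 1/a + 1/b + 1/c + 1/d, using one convenient parameterization (three equal cells); it is not the article's relative-risk or generalized-F construction, and the 95% prior limits are invented.

```python
import math

# Target lognormal prior for an odds ratio: 95% prior limits 0.5 to 4 (illustrative).
m = math.log(math.sqrt(0.5 * 4.0))            # prior mean of ln(OR)
v = (math.log(4.0 / 0.5) / (2 * 1.96)) ** 2   # prior variance of ln(OR)

# One convenient prior-data 2x2 stratum (fractional counts are fine as prior data):
# put c in three cells and c*exp(m) in the exposed-case cell, so that the table's
# log odds ratio equals m and its variance 1/a + 1/b + 1/c + 1/d equals v.
c = (3.0 + math.exp(-m)) / v
a1, b1, a0, b0 = c * math.exp(m), c, c, c     # exposed cases, exposed noncases, unexposed cases, unexposed noncases

check_lnOR = math.log((a1 * b0) / (b1 * a0))
check_var = 1 / a1 + 1 / b1 + 1 / a0 + 1 / b0
print(f"prior stratum counts: {a1:.1f}, {b1:.1f}, {a0:.1f}, {b0:.1f}")
print(f"implied ln(OR) = {check_lnOR:.3f} (target {m:.3f}), "
      f"variance = {check_var:.3f} (target {v:.3f})")
```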

10.
We analyze the general (multiallelic) Hardy-Weinberg equilibrium problem from an objective Bayesian testing standpoint. We argue that for small or moderate sample sizes the answer is rather sensitive to the prior chosen, and this suggests carrying out a sensitivity analysis with respect to the prior. This goal is achieved through the identification of a class of priors specifically designed for this testing problem. In this paper, we consider the class of intrinsic priors under the full model, indexed by a tuning quantity, the training sample size. These priors are objective, satisfy Savage's continuity condition and have proved to behave extremely well for many statistical testing problems. We compute the posterior probability of the Hardy-Weinberg equilibrium model for the class of intrinsic priors, assess robustness over the range of plausible answers, and examine the stability of the decision in favor of either hypothesis.

11.
This paper considers the design and interpretation of clinical trials comparing treatments for conditions so rare that worldwide recruitment efforts are likely to yield total sample sizes of 50 or fewer, even when patients are recruited over several years. For such studies, the sample size needed to meet a conventional frequentist power requirement is clearly infeasible. Rather, the expectation of any such trial has to be limited to the generation of an improved understanding of treatment options. We propose a Bayesian approach for the conduct of rare-disease trials comparing an experimental treatment with a control where patient responses are classified as a success or failure. A systematic elicitation from clinicians of their beliefs concerning treatment efficacy is used to establish Bayesian priors for unknown model parameters. The process of determining the prior is described, including the possibility of formally considering results from related trials. As sample sizes are small, it is possible to compute all possible posterior distributions of the two success rates. A number of allocation ratios between the two treatment groups can be considered with a view to maximising the prior probability that the trial concludes recommending the new treatment when in fact it is non-inferior to control. Consideration of the extent to which opinion can be changed, even by data from the best feasible design, can help to determine whether such a trial is worthwhile.
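Because the total sample size is so small, every possible data set can be examined at the design stage. The sketch below, for a hypothetical 48-patient trial with invented elicited priors, computes the posterior probability of noninferiority for each outcome and then the prior predictive probability that the trial ends by recommending the new treatment; it is a simplified illustration, not the authors' elicitation-based design.

```python
import numpy as np
from scipy.stats import beta, betabinom

rng = np.random.default_rng(3)
nE, nC = 32, 16                 # unequal allocation for a 48-patient trial (illustrative)
aE, bE = 3.0, 3.0               # elicited prior for the experimental arm (hypothetical)
aC, bC = 6.0, 4.0               # elicited prior for the control arm (hypothetical)
margin, cut = 0.10, 0.80        # noninferiority margin and decision threshold

def post_prob_noninf(sE, sC, draws=20_000):
    """Posterior P(pE > pC - margin) for one of the (nE+1)*(nC+1) possible data sets."""
    pE = beta.rvs(aE + sE, bE + nE - sE, size=draws, random_state=rng)
    pC = beta.rvs(aC + sC, bC + nC - sC, size=draws, random_state=rng)
    return np.mean(pE > pC - margin)

# Because the trial is tiny, every possible outcome can be evaluated in advance.
recommend = np.array([[post_prob_noninf(sE, sC) >= cut
                       for sC in range(nC + 1)] for sE in range(nE + 1)])

# Prior (design) probability that the trial ends recommending the new treatment,
# obtained by weighting outcomes with their prior predictive (beta-binomial) masses.
wE = betabinom.pmf(np.arange(nE + 1), nE, aE, bE)
wC = betabinom.pmf(np.arange(nC + 1), nC, aC, bC)
print("prior probability of recommending the new treatment:",
      float(wE @ recommend @ wC))
```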

12.
In this paper, a Bayesian approach is developed for simultaneously comparing multiple experimental treatments with a common control treatment in an exploratory clinical trial. The sample size is set to ensure that, at the end of the study, there will be at least one treatment for which the investigators have a strong belief that it is better than control, or else they have a strong belief that none of the experimental treatments are substantially better than control. This criterion bears a direct relationship with conventional frequentist power requirements, while allowing prior opinion to feature in the analysis with a consequent reduction in sample size. If it is concluded that at least one of the experimental treatments shows promise, then it is envisaged that one or more of these promising treatments will be developed further in a definitive phase III trial. The approach is developed in the context of normally distributed responses sharing a common standard deviation regardless of treatment. To begin with, the standard deviation will be assumed known when the sample size is calculated. The final analysis will not rely upon this assumption, although the intended properties of the design may not be achieved if the anticipated standard deviation turns out to be inappropriate. Methods that formally allow for uncertainty about the standard deviation, expressed in the form of a Bayesian prior, are then explored. Illustrations of the sample sizes computed from the new method are presented, and comparisons are made with frequentist methods devised for the same situation.

13.
In a bivariate meta-analysis, the number of diagnostic studies involved is often very low, so that frequentist methods may run into problems. Using Bayesian inference is particularly attractive, as informative priors that add a small amount of information can stabilise the analysis without overwhelming the data. However, Bayesian analysis is often computationally demanding, and the selection of the prior for the covariance matrix of the bivariate structure is crucial with little data. The integrated nested Laplace approximations method provides an efficient solution to the computational issues by avoiding any sampling, but the important question of priors remains. We explore the penalised complexity (PC) prior framework for specifying informative priors for the variance parameters and the correlation parameter. PC priors facilitate model interpretation and hyperparameter specification as expert knowledge can be incorporated intuitively. We conduct a simulation study to compare the properties and behaviour of differently defined PC priors to currently used priors in the field. The simulation study shows that the PC prior seems beneficial for the variance parameters. The use of PC priors for the correlation parameter results in more precise estimates when specified in a sensible neighbourhood around the truth. To investigate the usage of PC priors in practice, we reanalyse a meta-analysis using the telomerase marker for the diagnosis of bladder cancer and compare the results with those obtained by other commonly used modelling approaches.

14.
Standard methods for analysing survival data with covariates rely on asymptotic inferences. Bayesian methods can be performed using simple computations and are applicable for any sample size. We propose a practical method for making prior specifications and discuss a complete Bayesian analysis for parametric accelerated failure time regression models. We emphasize inferences for the survival curve rather than regression coefficients. A key feature of the Bayesian framework is that model comparisons for various choices of baseline distribution are easily handled by the calculation of Bayes factors. Such comparisons between non-nested models are difficult in the frequentist setting. We illustrate diagnostic tools and examine the sensitivity of the Bayesian methods.

15.
It is well known that both frequentist and Bayesian clinical trial designs have their own advantages and disadvantages. To obtain better properties inherited from these two types of designs, we developed a Bayesian-frequentist two-stage single-arm phase II clinical trial design. This design allows both early acceptance and early rejection of the null hypothesis (H0). Measures of the design's properties (for example, the probability of early trial termination and the expected sample size) are derived under both frequentist and Bayesian settings. Moreover, under the Bayesian setting, the upper and lower boundaries are determined using the predictive probability of a successful trial outcome. Given a beta prior and a sample size for stage I, based on the marginal distribution of the responses at stage I, we derived Bayesian Type I and Type II error rates. By controlling both frequentist and Bayesian error rates, the Bayesian-frequentist two-stage design has special features compared with other two-stage designs.

16.
A number of researchers have discussed phase II clinical trials from a Bayesian perspective. A recent article by Mayo and Gajewski focuses on sample size calculations, which they determine by specifying an informative prior distribution and then calculating a posterior probability that the true response will exceed a prespecified target. In this article, we extend these sample size calculations to include a mixture of informative prior distributions. The mixture comes from several sources of information. For example, consider information from two (or more) clinicians: the first clinician is pessimistic about the drug and the second clinician is optimistic. We tabulate the results for sample size design using the fact that a simple mixture of Betas is a conjugate family for the Beta-Binomial model. We discuss the theoretical framework for these types of Bayesian designs and show that the Bayesian designs in this paper approximate this theoretical framework.
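A short sketch of the conjugacy used here: with a two-component Beta mixture prior (for example, a pessimistic and an optimistic clinician), the posterior is again a Beta mixture whose weights are updated by the components' beta-binomial marginal likelihoods. The prior parameters, data, and 30% target below are hypothetical.

```python
from scipy.stats import betabinom, beta

# Two clinicians' priors on the response probability (illustrative numbers):
# a pessimist centred near 0.2 and an optimist centred near 0.5, equally weighted.
components = [(0.5, 2.0, 8.0), (0.5, 5.0, 5.0)]     # (weight, a, b)
n, s = 25, 11                                       # observed phase II data (hypothetical)

# Conjugacy: each component updates to Beta(a+s, b+n-s), and the mixture weights
# are re-weighted by each component's marginal (beta-binomial) likelihood of the data.
marg = [w * betabinom.pmf(s, n, a, b) for w, a, b in components]
post_w = [mk / sum(marg) for mk in marg]
post = [(w, a + s, b + n - s) for w, (_, a, b) in zip(post_w, components)]

for w, a, b in post:
    print(f"weight {w:.2f}: Beta({a:.0f}, {b:.0f}), mean {a / (a + b):.2f}")

# Posterior probability that the true response rate exceeds a 30% target.
print("P(response rate > 0.30 | data) =",
      sum(w * beta.sf(0.30, a, b) for w, a, b in post))
```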

17.
Penalization is a very general method of stabilizing or regularizing estimates, which has both frequentist and Bayesian rationales. We consider some questions that arise when considering alternative penalties for logistic regression and related models. The most widely programmed penalty appears to be the Firth small-sample bias-reduction method (albeit with small differences among implementations and the results they provide), which corresponds to using the log density of the Jeffreys invariant prior distribution as a penalty function. The latter representation raises some serious contextual objections to the Firth reduction, which also apply to alternative penalties based on t-distributions (including Cauchy priors). Taking simplicity of implementation and interpretation as our chief criteria, we propose that the log-F(1,1) prior provides a better default penalty than other proposals. Penalization based on more general log-F priors is trivial to implement and facilitates mean-squared-error reduction and sensitivity analyses of penalty strength by varying the number of prior degrees of freedom. We caution, however, against penalization of intercepts, which are unduly sensitive to covariate coding and design idiosyncrasies.
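A minimal sketch of log-F(m, m) penalization for a logistic regression coefficient, implemented here by directly maximizing the penalized log-likelihood rather than by the data-augmentation route the abstract alludes to; the simulated data are hypothetical, and, following the abstract's caution, the intercept is left unpenalized.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(4)
n = 40
X = np.column_stack([np.ones(n), rng.binomial(1, 0.3, n)])   # intercept + one binary covariate
y = rng.binomial(1, expit(-0.5 + 1.0 * X[:, 1]))

m = 1.0   # prior degrees of freedom: log-F(1,1); larger m gives a stronger penalty

def neg_penalized_loglik(beta):
    eta = X @ beta
    loglik = np.sum(y * eta - np.logaddexp(0, eta))          # Bernoulli log-likelihood
    # log-F(m, m) log-density for the covariate coefficient only (intercept unpenalized):
    # (m/2)*b - m*log(1 + exp(b)), up to a constant.
    b = beta[1]
    penalty = (m / 2.0) * b - m * np.logaddexp(0, b)
    return -(loglik + penalty)

fit = minimize(neg_penalized_loglik, x0=np.zeros(2), method="BFGS")
print("penalized estimates (intercept, log odds ratio):", fit.x.round(3))
```

Re-running the fit with different values of m is one way to carry out the sensitivity analysis of penalty strength that the abstract mentions.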

18.
Objective: Randomized trials generally use "frequentist" statistics based on P-values and 95% confidence intervals. Frequentist methods have limitations that might be overcome, in part, by Bayesian inference. To illustrate these advantages, we re-analyzed randomized trials published in four general medical journals during 2004. Study Design and Setting: We used Medline to identify randomized superiority trials with two parallel arms, individual-level randomization and dichotomous or time-to-event primary outcomes. Studies with P < 0.05 in favor of the intervention were deemed "positive"; otherwise, they were "negative." We used several prior distributions and exact conjugate analyses to calculate Bayesian posterior probabilities for clinically relevant effects. Results: Of 88 included studies, 39 were positive using a frequentist analysis. Although the Bayesian posterior probabilities of any benefit (relative risk or hazard ratio < 1) were high in positive studies, these probabilities were lower and variable for larger benefits. The positive studies had only moderate probabilities for exceeding the effects that were assumed for calculating the sample size. By comparison, there were moderate probabilities of any benefit in negative studies. Conclusion: Bayesian and frequentist analyses complement each other when interpreting the results of randomized trials. Future reports of randomized trials should include both.

19.
Many meta-analyses combine results from only a small number of studies, a situation in which the between-study variance is imprecisely estimated when standard methods are applied. Bayesian meta-analysis allows incorporation of external evidence on heterogeneity, providing the potential for more robust inference on the effect size of interest. We present a method for performing Bayesian meta-analysis using data augmentation, in which we represent an informative conjugate prior for between-study variance by pseudo data and use meta-regression for estimation. To assist in this, we derive predictive inverse-gamma distributions for the between-study variance expected in future meta-analyses. These may serve as priors for heterogeneity in new meta-analyses. In a simulation study, we compare approximate Bayesian methods using meta-regression and pseudo data against fully Bayesian approaches based on importance sampling techniques and Markov chain Monte Carlo (MCMC). We compare the frequentist properties of these Bayesian methods with those of the commonly used frequentist DerSimonian and Laird procedure. The method is implemented in standard statistical software and provides a less complex alternative to standard MCMC approaches. An importance sampling approach produces almost identical results to standard MCMC approaches, and results obtained through meta-regression and pseudo data are very similar. On average, data augmentation provides closer results to MCMC, if implemented using restricted maximum likelihood estimation rather than DerSimonian and Laird or maximum likelihood estimation. The methods are applied to real datasets, and an extension to network meta-analysis is described. The proposed method facilitates Bayesian meta-analysis in a way that is accessible to applied researchers.

20.
Chen J, Zhong J, Nie L. Statistics in Medicine 2008; 27(13): 2361-2380.
Stability data are commonly analyzed using linear fixed or random effects models. The linear fixed effects model does not take into account batch-to-batch variation, whereas the random effects model may suffer from unreliable shelf-life estimates due to small sample sizes. Moreover, neither method utilizes prior information that might be available. In this article, we propose a Bayesian hierarchical approach to modeling drug stability data. Under this hierarchical structure, we first use a Bayes factor to test the poolability of batches. Given the decision on poolability of batches, we then estimate the shelf-life that applies to all batches. The approach is illustrated with two example data sets, and its performance is compared in simulation studies with that of commonly used frequentist methods.
