Similar Articles
20 similar articles found.
1.
Numerous meta‐analyses in healthcare research combine results from only a small number of studies, for which the variance representing between‐study heterogeneity is estimated imprecisely. A Bayesian approach to estimation allows external evidence on the expected magnitude of heterogeneity to be incorporated. The aim of this paper is to provide tools that improve the accessibility of Bayesian meta‐analysis. We present two methods for implementing Bayesian meta‐analysis, using numerical integration and importance sampling techniques. Based on 14,886 binary‐outcome meta‐analyses in the Cochrane Database of Systematic Reviews, we derive a novel set of predictive distributions for the degree of heterogeneity expected in 80 settings, depending on the outcomes assessed and comparisons made. These can be used as prior distributions for heterogeneity in future meta‐analyses. The two methods are implemented in R, for which code is provided. Both methods produce equivalent results to standard but more complex Markov chain Monte Carlo approaches. The priors are derived as log‐normal distributions for the between‐study variance, applicable to meta‐analyses of binary outcomes on the log odds‐ratio scale. The methods are applied to two example meta‐analyses, incorporating the relevant predictive distributions as prior distributions for between‐study heterogeneity. We have provided resources to facilitate Bayesian meta‐analysis, in a form accessible to applied researchers, which allow relevant prior information on the degree of heterogeneity to be incorporated. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
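A minimal sketch of the importance-sampling route this abstract describes: draw the between-study variance tau^2 from a log-normal prior, use the closed-form conditional posterior of the pooled log odds ratio given tau^2, and weight the draws by the marginal likelihood. The toy data and the log-normal hyperparameters are illustrative assumptions, not the paper's derived predictive priors.

```r
set.seed(1)
y <- c(-0.51, -0.22, -0.40)    # study log odds ratios (toy data)
v <- c(0.08, 0.05, 0.12)       # within-study variances
M <- 50000
tau2 <- rlnorm(M, meanlog = -2.5, sdlog = 1.7)   # illustrative heterogeneity prior
post <- sapply(tau2, function(t2) {
  w  <- 1 / (v + t2)
  mu <- sum(w * y) / sum(w)    # conditional posterior mean of the pooled effect
  V  <- 1 / sum(w)             # conditional posterior variance
  # log marginal likelihood of the data given tau^2 (flat prior on mu)
  c(mu, V, sum(dnorm(y, mu, sqrt(v + t2), log = TRUE)) + 0.5 * log(2 * pi * V))
})
wts <- exp(post[3, ] - max(post[3, ]))           # importance weights
idx <- sample(M, M, replace = TRUE, prob = wts)  # weighted resampling
mu_draws <- rnorm(M, post[1, idx], sqrt(post[2, idx]))
c(mean = mean(mu_draws), quantile(mu_draws, c(0.025, 0.975)))
```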

2.
As evidence accumulates within a meta‐analysis, it is desirable to determine when the results could be considered conclusive to guide systematic review updates and future trial designs. Adapting sequential testing methodology from clinical trials for application to pooled meta‐analytic effect size estimates appears well suited for this objective. In this paper, we describe a Bayesian sequential meta‐analysis method, in which an informative heterogeneity prior is employed and stopping rule criteria are applied directly to the posterior distribution for the treatment effect parameter. Using simulation studies, we examine how well this approach performs under different parameter combinations by monitoring the proportion of sequential meta‐analyses that reach incorrect conclusions (to yield error rates), the number of studies required to reach conclusion, and the resulting parameter estimates. By adjusting the stopping rule thresholds, the overall error rates can be controlled within the target levels and are no higher than those of alternative frequentist and semi‐Bayes methods for the majority of the simulation scenarios. To illustrate the potential application of this method, we consider two contrasting meta‐analyses using data from the Cochrane Library and compare the results of employing different sequential methods while examining the effect of the heterogeneity prior in the proposed Bayesian approach. Copyright © 2016 John Wiley & Sons, Ltd.
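A sketch of the sequential idea under a simple conjugate normal approximation: after each new study, the posterior for the pooled effect mu is updated and a stopping rule is applied to its tail probability. The heterogeneity value (standing in for an informative prior), the thresholds, and the data are illustrative assumptions.

```r
set.seed(2)
y <- c(-0.35, -0.10, -0.45, -0.30, -0.25)  # accumulating log odds ratios (toy)
v <- c(0.06, 0.09, 0.05, 0.07, 0.04)       # within-study variances
tau2 <- 0.04                # heterogeneity fixed at an assumed prior mean
m0 <- 0; V0 <- 100          # vague normal prior for the pooled effect mu
for (k in seq_along(y)) {
  prec  <- 1 / V0 + sum(1 / (v[1:k] + tau2))
  postm <- (m0 / V0 + sum(y[1:k] / (v[1:k] + tau2))) / prec
  p_ben <- pnorm(0, postm, sqrt(1 / prec))      # P(mu < 0 | data so far)
  cat(sprintf("after study %d: P(mu < 0) = %.3f\n", k, p_ben))
  if (p_ben > 0.99 || p_ben < 0.01) { cat("stopping rule met\n"); break }
}
```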

3.
Systematic reviews often provide recommendations for further research. When meta‐analyses are inconclusive, such recommendations typically argue for further studies to be conducted. However, the nature and amount of future research should depend on the nature and amount of the existing research. We propose a method based on conditional power to make these recommendations more specific. Assuming a random‐effects meta‐analysis model, we evaluate the influence of the number of additional studies, of their information sizes and of the heterogeneity anticipated among them on the ability of an updated meta‐analysis to detect a prespecified effect size. The conditional powers of possible design alternatives can be summarized in a simple graph which can also be the basis for decision making. We use three examples from the Cochrane Database of Systematic Reviews to demonstrate our strategy. We demonstrate that if heterogeneity is anticipated, it might not be possible for a single study to reach the desired power, no matter how large it is. Copyright © 2012 John Wiley & Sons, Ltd.
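A simulation sketch of conditional power in this spirit: given the existing studies, how often would an updated random-effects meta-analysis detect a prespecified effect if k further studies of a given size were added? The existing data, the assumed effect, and the anticipated heterogeneity are illustrative; rma() is from the metafor package.

```r
library(metafor)
set.seed(3)
y_old <- c(-0.20, -0.35, -0.15); v_old <- c(0.10, 0.08, 0.12)  # existing studies
delta <- -0.30   # prespecified effect size to detect
tau2  <- 0.05    # heterogeneity anticipated among future studies
cond_power <- function(k, v_new, nsim = 1000) {
  mean(replicate(nsim, {
    y_new <- rnorm(k, delta, sqrt(tau2 + v_new))   # simulate k new studies
    fit <- rma(yi = c(y_old, y_new), vi = c(v_old, rep(v_new, k)), method = "REML")
    fit$pval < 0.05                     # does the updated meta-analysis detect it?
  }))
}
sapply(1:4, cond_power, v_new = 0.05)   # conditional power as studies are added
```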

4.
The Copas parametric model explores the potential impact of publication bias via sensitivity analysis, by modeling the probability that an individual study is published as a function of the standard error of its effect size. Reviewers often have prior assumptions about the extent of selection in the set of studies included in a meta‐analysis. However, a Bayesian implementation of the Copas model has not yet been studied. We aim to present a Bayesian selection model for publication bias and to extend it to the case of network meta‐analysis where each treatment is compared either with placebo or with a reference treatment, creating a star‐shaped network. We take advantage of the greater flexibility offered in the Bayesian context to incorporate in the model prior information on the extent and strength of selection. To derive prior distributions, we use both external data and an elicitation process of expert opinion. Copyright © 2012 John Wiley & Sons, Ltd.
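The Copas selection function models the probability that a study is published as increasing in its precision, commonly via a probit in the inverse standard error. The sketch below, with illustrative parameter values, shows the selection probabilities such a model assigns; a Bayesian implementation would place elicited priors on (g0, g1) and propagate them through the selection-adjusted likelihood.

```r
s <- c(0.05, 0.10, 0.20, 0.40)              # study standard errors
p_pub <- function(g0, g1, s) pnorm(g0 + g1 / s)
round(p_pub(g0 = -1, g1 = 0.15, s), 2)      # small studies less likely published
# Elicitation could fix, say, the publication probabilities of the largest and
# smallest studies, implying a prior region for (g0, g1).
```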

5.
There is now a large literature on objective Bayesian model selection in the linear model based on the g‐prior. The methodology has been recently extended to generalized linear models using test‐based Bayes factors. In this paper, we show that test‐based Bayes factors can also be applied to the Cox proportional hazards model. If the goal is to select a single model, then both the maximum a posteriori and the median probability model can be calculated. For clinical prediction of survival, we shrink the model‐specific log hazard ratio estimates with subsequent calculation of the Breslow estimate of the cumulative baseline hazard function. A Bayesian model average can also be employed. We illustrate the proposed methodology with the analysis of survival data on primary biliary cirrhosis patients and the development of a clinical prediction model for future cardiovascular events based on data from the Second Manifestations of ARTerial disease (SMART) cohort study. Cross‐validation is applied to compare the predictive performance with alternative model selection approaches based on Harrell's c‐Index, the calibration slope and the integrated Brier score. Finally, a novel application of Bayesian variable selection to optimal conditional prediction via landmarking is described. Copyright © 2016 John Wiley & Sons, Ltd.
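A sketch of a test-based Bayes factor, under the usual TBF approximation that the deviance statistic z of a model with d extra parameters is chi-squared with d degrees of freedom under the null and (1+g) times that under the alternative. The unit-information choice g = n and the toy statistics are assumptions for illustration.

```r
tbf10 <- function(z, d, g) (1 + g)^(-d / 2) * exp(z * g / (2 * (1 + g)))
z <- c(modelA = 12.3, modelB = 15.8)   # likelihood-ratio statistics vs null (toy)
d <- c(2, 4)                           # numbers of extra parameters
bf <- tbf10(z, d, g = 200)             # g = n, unit-information style
bf / sum(bf)   # relative posterior probabilities of A vs B under equal prior odds
```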

6.
Bivariate random‐effects meta‐analysis (BVMA) is a method of data synthesis that accounts for treatment effects measured on two outcomes. BVMA gives more precise estimates of the population mean and predicted values than two univariate random‐effects meta‐analyses (UVMAs). BVMA also addresses bias from incomplete reporting of outcomes. A few tutorials have covered technical details of BVMA of categorical or continuous outcomes. Limited guidance is available on how to analyze datasets that include trials with mixed continuous‐binary outcomes where treatment effects on one outcome or the other are not reported. Given the advantages of Bayesian BVMA for handling missing outcomes, we present a tutorial for Bayesian BVMA of incompletely reported treatment effects on mixed bivariate outcomes. This step‐by‐step approach can serve as a model for our intended audience, the methodologist familiar with Bayesian meta‐analysis, looking for practical advice on fitting bivariate models. To facilitate application of the proposed methods, we include our WinBUGS code. As an example, we use aggregate‐level data from published trials to demonstrate the estimation of the effects of vitamin K and bisphosphonates on two correlated bone outcomes, fracture and bone mineral density. We present datasets where reporting of the pairs of treatment effects on both outcomes was ‘partially’ complete (i.e., pairs completely reported in some trials), and we outline steps for modeling the incompletely reported data. To assess what is gained from the additional work required by BVMA, we compare the resulting estimates to those from separate UVMAs. We discuss methodological findings and make four recommendations. Copyright © 2015 John Wiley & Sons, Ltd.
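An illustrative BUGS-style bivariate random-effects model, not the authors' WinBUGS code: study-specific true effects on the two outcomes are bivariate normal around the pooled effects, and (in WinBUGS) unreported components of y can be left as NA and treated as missing data. The priors and structure are a minimal sketch.

```r
bvma_model <- "
model {
  for (i in 1:N) {
    y[i, 1:2]     ~ dmnorm(theta[i, 1:2], prec.within[i, 1:2, 1:2])
    theta[i, 1:2] ~ dmnorm(mu[1:2], prec.between[1:2, 1:2])
  }
  mu[1] ~ dnorm(0, 0.001)          # pooled effects on the two outcomes
  mu[2] ~ dnorm(0, 0.001)
  prec.between[1:2, 1:2] ~ dwish(R[1:2, 1:2], 2)   # between-study precision
}"
# Pass to WinBUGS/JAGS (e.g. via R2WinBUGS or rjags) with y holding the pairs of
# treatment effects, NA where an outcome was not reported, and prec.within the
# study-level precision matrices.
```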

7.
Meta‐analysis of genome‐wide association studies (GWAS) has achieved great success in detecting loci underlying human diseases. Incorporating GWAS results from diverse ethnic populations for meta‐analysis, however, remains challenging because of the possible heterogeneity across studies. Conventional fixed‐effects (FE) or random‐effects (RE) methods may not be most suitable to aggregate multiethnic GWAS results because of violation of the homogeneous effect assumption across studies (FE) or low power to detect signals (RE). Three recently proposed methods, the modified RE (RE‐HE) model, the binary‐effects (BE) model and a Bayesian approach (Meta‐analysis of Transethnic Association [MANTRA]), show increased power over FE and RE methods while incorporating heterogeneity of effects when meta‐analyzing trans‐ethnic GWAS results. We propose a two‐stage approach to account for heterogeneity in trans‐ethnic meta‐analysis, in which we cluster studies by cohort‐specific ancestry information prior to meta‐analysis. In an extensive simulation study, we evaluate the type I error and power of the two‐stage approach and of a no‐prior‐clustering (crude) approach, to investigate whether prior clustering offers any improvement. We find that both the two‐stage and the crude approach provide well‐controlled type I error for all five methods (FE, RE, RE‐HE, BE, MANTRA). However, the two‐stage approach shows increased power for BE and RE‐HE, and similar power for MANTRA and FE, compared with the corresponding crude approaches, especially when there is heterogeneity across the multiethnic GWAS results. These results suggest that prior clustering in the two‐stage approach can be an effective and efficient intermediate step in meta‐analysis to account for multiethnic heterogeneity.
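A toy sketch of the two-stage strategy: cluster cohorts on ancestry information first, then meta-analyze within clusters before combining across them. The ancestry summaries and effect estimates below are invented for illustration.

```r
set.seed(7)
beta <- c(0.10, 0.12, 0.30, 0.28, 0.11)   # per-study effect estimates (toy)
se   <- c(0.05, 0.06, 0.05, 0.07, 0.05)
anc  <- c(0.90, 0.88, 0.15, 0.12, 0.91)   # cohort-specific ancestry proportion
cl <- kmeans(anc, centers = 2)$cluster    # stage 1: cluster cohorts by ancestry
fe <- function(b, s) {                    # inverse-variance fixed-effects pool
  w <- 1 / s^2
  c(est = sum(w * b) / sum(w), se = sqrt(1 / sum(w)))
}
stage1 <- t(sapply(split(seq_along(beta), cl), function(i) fe(beta[i], se[i])))
stage1  # stage 2 would combine these cluster estimates, e.g. by random effects
```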

8.
A biologic is a product made from living organisms. A biosimilar is a new version of an already approved branded biologic. Regulatory guidelines recommend a totality‐of‐the‐evidence approach with stepwise development for a new biosimilar. Initial steps for biosimilar development are (a) analytical comparisons to establish similarity in structure and function, followed by (b) potential animal studies and a human pharmacokinetics/pharmacodynamics equivalence study. The last step is a phase III clinical trial to confirm similar efficacy, safety, and immunogenicity between the biosimilar and the biologic. A high degree of analytical and pharmacokinetics/pharmacodynamics similarity could provide justification for an eased statistical threshold in the phase III trial, which could then further facilitate an overall abbreviated approval process for biosimilars. Bayesian methods can help in the analysis of clinical trials by adding proper prior information into the analysis, thereby potentially decreasing the required sample size. We develop proper prior information for the analysis of a phase III trial for showing that a proposed biosimilar is similar to a reference biologic. For the reference product, we use a meta‐analysis of published results to set a prior for the probability of efficacy, and we propose priors for the proposed biosimilar informed by the strength of the evidence generated in the earlier steps of the approval process. A simulation study shows that, with few exceptions, the Bayesian relative risk analysis provides greater power, shorter 90% credible intervals with more than 90% frequentist coverage, and better root mean squared error.
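A conjugate sketch of the kind of Bayesian relative-risk analysis described: the reference arm receives an informative beta prior built from a meta-analysis of published results, the biosimilar arm a weaker prior reflecting the earlier analytical/PK evidence. All prior weights, data, and the similarity margin below are illustrative.

```r
set.seed(8)
a_ref <- 120; b_ref <- 80    # meta-analysis-based prior for reference response
a_bio <- 30;  b_bio <- 20    # weaker prior for the proposed biosimilar
y_ref <- 150; n_ref <- 250   # phase III data (toy)
y_bio <- 145; n_bio <- 250
p_ref <- rbeta(1e5, a_ref + y_ref, b_ref + n_ref - y_ref)   # posterior draws
p_bio <- rbeta(1e5, a_bio + y_bio, b_bio + n_bio - y_bio)
rr <- p_bio / p_ref
quantile(rr, c(0.05, 0.95))    # 90% credible interval for the relative risk
mean(rr > 0.80 & rr < 1.25)    # posterior probability of similarity (margin assumed)
```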

9.
Bayesian meta‐analysis is an increasingly important component of clinical research, with multivariate meta‐analysis a promising tool for studies with multiple endpoints. Model assumptions, including the choice of priors, are crucial aspects of multivariate Bayesian meta‐analysis (MBMA) models. In a given model, two different prior distributions can lead to different inferences about a particular parameter. A simulation study was performed to analyze the impact of families of prior distributions for the covariance matrix of a multivariate normal random‐effects MBMA model. Inferences about effect sizes were not particularly sensitive to prior choice, but the related covariance estimates were. A few families of prior distributions with small relative biases, tight mean squared errors, and close to nominal coverage for the effect size estimates were identified. Our results demonstrate the need for sensitivity analysis and suggest some guidelines for choosing prior distributions in this class of problems. The MBMA models proposed here are illustrated with a small meta‐analysis example from the periodontal field and a medium meta‐analysis from the study of stroke. Copyright © 2015 John Wiley & Sons, Ltd.
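A small sketch of the kind of prior sensitivity the abstract examines: two candidate prior families for the between-study covariance imply quite different prior distributions for the between-study correlation. The inverse-Wishart and separation-strategy settings are illustrative choices.

```r
set.seed(9)
iw_cor <- apply(rWishart(10000, df = 3, Sigma = diag(2)), 3, function(W) {
  S <- solve(W)                        # inverse-Wishart draw for the covariance
  S[1, 2] / sqrt(S[1, 1] * S[2, 2])    # implied between-study correlation
})
sep_cor <- runif(10000, -1, 1)         # separation strategy: uniform correlation
rbind(inv_wishart = quantile(iw_cor, c(0.025, 0.5, 0.975)),
      separation  = quantile(sep_cor, c(0.025, 0.5, 0.975)))
```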

10.
Prioritization is the process whereby a set of possible candidate genes or SNPs is ranked so that the most promising can be taken forward into further studies. In a genome‐wide association study, prioritization is usually based on the P‐values alone, but researchers sometimes take account of external annotation information about the SNPs such as whether the SNP lies close to a good candidate gene. Using external information in this way is inherently subjective and is often not formalized, making the analysis difficult to reproduce. Building on previous work that has identified 14 important types of external information, we present an approximate Bayesian analysis that produces an estimate of the probability of association. The calculation combines four sources of information: the genome‐wide data, SNP information derived from bioinformatics databases, empirical SNP weights, and the researchers’ subjective prior opinions. The calculation is fast enough that it can be applied to millions of SNPs and, although it does rely on subjective judgments, those judgments are made explicit so that the final SNP selection can be reproduced. We show that the resulting probability of association is intuitively more appealing than the P‐value because it is easier to interpret and it makes allowance for the power of the study. We illustrate the use of the probability of association for SNP prioritization by applying it to a meta‐analysis of kidney function genome‐wide association studies and demonstrate that SNP selection performs better using the probability of association compared with P‐values alone.
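One standard route to such a probability of association, sketched with toy numbers: combine a Wakefield-style approximate Bayes factor with prior odds that could encode annotation-based weights. The prior effect variance W and the prior probability p1 are illustrative assumptions.

```r
abf10 <- function(z, V, W) sqrt(V / (V + W)) * exp(z^2 * W / (2 * (V + W)))
z <- 4.2        # GWAS z-statistic for the SNP
V <- 0.05^2     # sampling variance of the log odds ratio estimate
W <- 0.20^2     # assumed prior variance of the effect under association
p1 <- 1e-4      # prior probability of association (annotation could raise this)
po <- abf10(z, V, W) * p1 / (1 - p1)    # posterior odds of association
po / (1 + po)                           # probability of association
# Unlike the p-value, this accounts for power: the same z with tiny V (a very
# large study) yields a smaller Bayes factor in favour of association.
```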

11.
Many meta‐analyses combine results from only a small number of studies, a situation in which the between‐study variance is imprecisely estimated when standard methods are applied. Bayesian meta‐analysis allows incorporation of external evidence on heterogeneity, providing the potential for more robust inference on the effect size of interest. We present a method for performing Bayesian meta‐analysis using data augmentation, in which we represent an informative conjugate prior for between‐study variance by pseudo data and use meta‐regression for estimation. To assist in this, we derive predictive inverse‐gamma distributions for the between‐study variance expected in future meta‐analyses. These may serve as priors for heterogeneity in new meta‐analyses. In a simulation study, we compare approximate Bayesian methods using meta‐regression and pseudo data against fully Bayesian approaches based on importance sampling techniques and Markov chain Monte Carlo (MCMC). We compare the frequentist properties of these Bayesian methods with those of the commonly used frequentist DerSimonian and Laird procedure. The method is implemented in standard statistical software and provides a less complex alternative to standard MCMC approaches. An importance sampling approach produces almost identical results to standard MCMC approaches, and results obtained through meta‐regression and pseudo data are very similar. On average, data augmentation provides results closer to MCMC if implemented using restricted maximum likelihood estimation rather than DerSimonian and Laird or maximum likelihood estimation. The methods are applied to real datasets, and an extension to network meta‐analysis is described. The proposed method facilitates Bayesian meta‐analysis in a way that is accessible to applied researchers. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
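The conjugacy behind the pseudo-data idea, in a minimal sketch: an inverse-gamma IG(a, b) prior on tau^2 carries the same information as 2a pseudo residuals e_j ~ N(0, tau^2) with sum of squares 2b, because IG(a, b) updated by k such residuals becomes IG(a + k/2, b + sum(e^2)/2). The hyperparameters below are illustrative, not the paper's predictive distributions.

```r
a <- 2; b <- 0.1                      # illustrative IG(a, b) prior for tau^2
k  <- 2 * a                           # number of equivalent pseudo residuals
ss <- 2 * b                           # their required sum of squares
e  <- sqrt(ss / k) * c(1, -1, 1, -1)  # one such set of pseudo residuals
c(post_a = a + k / 2, post_b = b + sum(e^2) / 2)  # update doubles the information
# In the data-augmentation approach, pseudo studies encoding the prior are
# appended to the real data and the augmented set is fitted by REML
# meta-regression in standard software.
```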

12.
Meta‐analysis is now an essential tool for genetic association studies, allowing them to combine large studies and greatly accelerating the pace of genetic discovery. Although the standard meta‐analysis methods perform equivalently to the more cumbersome joint analysis under ideal settings, they result in substantial power loss under unbalanced settings with various case–control ratios. Here, we investigate the power loss incurred by the standard meta‐analysis methods for unbalanced studies, and further propose novel meta‐analysis methods performing equivalently to the joint analysis under both balanced and unbalanced settings. We derive improved meta‐score‐statistics that can accurately approximate the joint‐score‐statistics with combined individual‐level data, for both linear and logistic regression models, with and without covariates. In addition, we propose a novel approach to adjust for population stratification by correcting for known population structures through minor allele frequencies. In the simulated gene‐level association studies under unbalanced settings, our method recovered up to 85% of the power loss caused by the standard methods. We further showed the power gain of our methods in gene‐level tests with 26 unbalanced studies of age‐related macular degeneration. In addition, we took the meta‐analysis of three unbalanced studies of type 2 diabetes as an example to discuss the challenges of meta‐analyzing multi‐ethnic samples. In summary, our improved meta‐score‐statistics with corrections for population stratification can be used to construct both single‐variant and gene‐level association studies, providing a useful framework for ensuring well‐powered, convenient, cross‐study analyses.
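A sketch of the score-statistic mechanics involved: per-study scores U_j and informations V_j for the genotype effect are combined, and the meta-score statistic (sum U)^2 / (sum V) approximates the joint score test. The simulated studies below, with more controls than cases, are illustrative.

```r
score_parts <- function(ycase, g) {   # logistic score under H0: no genotype effect
  p0 <- fitted(glm(ycase ~ 1, family = binomial))
  w  <- p0 * (1 - p0)
  U  <- sum((ycase - p0) * g)                   # score for the genotype
  V  <- sum(w * (g - weighted.mean(g, w))^2)    # information, intercept profiled out
  c(U = U, V = V)
}
set.seed(12)
studies <- replicate(3, {
  g <- rbinom(400, 2, 0.3)                      # genotypes
  y <- rbinom(400, 1, plogis(-1 + 0.2 * g))     # unbalanced case-control data
  score_parts(y, g)
})
Tmeta <- sum(studies["U", ])^2 / sum(studies["V", ])
pchisq(Tmeta, df = 1, lower.tail = FALSE)       # meta-score p-value
```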

13.
In genome‐wide association studies of binary traits, investigators typically use logistic regression to test common variants for disease association within studies, and combine association results across studies using meta‐analysis. For common variants, logistic regression tests are well calibrated, and meta‐analysis of study‐specific association results is only slightly less powerful than joint analysis of the combined individual‐level data. In recent sequencing and dense‐chip‐based association studies, investigators increasingly test low‐frequency variants for disease association. In this paper, we seek to (1) identify the association test with maximal power among tests with well controlled type I error rate and (2) compare the relative power of joint and meta‐analysis tests. We use analytic calculation and simulation to compare the empirical type I error rate and power of four logistic regression based tests: Wald, score, likelihood ratio, and Firth bias‐corrected. We demonstrate for low‐count variants (roughly minor allele count [MAC] < 400) that: (1) for joint analysis, the Firth test has the best combination of type I error and power; (2) for meta‐analysis of balanced studies (equal numbers of cases and controls), the score test is best, but is less powerful than Firth test based joint analysis; and (3) for meta‐analysis of sufficiently unbalanced studies, all four tests can be anti‐conservative, particularly the score test. We also establish MAC as the key parameter determining test calibration for joint and meta‐analysis.
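A toy comparison in the spirit of the paper's findings: a Firth bias-corrected fit (via the logistf package, assumed installed) next to the standard Wald test for a variant with low minor allele count.

```r
library(logistf)
set.seed(13)
n <- 2000
g <- rbinom(n, 2, 0.005)                  # rare variant: expected MAC about 20
y <- rbinom(n, 1, plogis(-2 + 0.5 * g))   # binary trait
firth <- logistf(y ~ g)                   # Firth penalized-likelihood fit
wald_p <- summary(glm(y ~ g, family = binomial))$coefficients["g", 4]
c(firth = firth$prob["g"], wald = wald_p) # penalized-LR vs Wald p-value
```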

14.
Non‐inferiority trials are becoming increasingly popular for comparative effectiveness research. However, inclusion of a placebo arm, whenever possible, gives rise to a three‐arm trial, which requires less burdensome assumptions than a standard two‐arm non‐inferiority trial. Most past developments for three‐arm trials define non‐inferiority through a pre‐specified fraction of the unknown effect size of the reference drug, that is, without directly specifying a fixed non‐inferiority margin. In some recent developments, however, a more direct approach with a pre‐specified fixed margin has been considered, albeit in the frequentist setup. The Bayesian paradigm provides a natural path to integrate historical and current trials' information via sequential learning. In this paper, we propose a Bayesian approach for simultaneous testing of non‐inferiority and assay sensitivity in a three‐arm trial with normal responses. For the experimental arm, in the absence of historical information, non‐informative priors are assumed under two situations, namely when (i) the variance is known and (ii) the variance is unknown. A Bayesian decision criterion is derived and compared with the frequentist method using simulation studies. Finally, several published clinical trial examples are reanalyzed to demonstrate the benefit of the proposed procedure. Copyright © 2015 John Wiley & Sons, Ltd.
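A Monte Carlo sketch of the simultaneous test with normal responses: under vague priors the arm means have approximately normal posteriors, and non-inferiority plus assay sensitivity become joint posterior probabilities. The margin, data, and any success threshold (say 0.975) are illustrative.

```r
set.seed(14)
draw <- function(xbar, sd, n, M = 1e5) rnorm(M, xbar, sd / sqrt(n))
muE <- draw(9.4, 3, 120)   # experimental arm posterior (vague prior)
muR <- draw(9.8, 3, 120)   # reference arm
muP <- draw(7.9, 3, 60)    # placebo arm
delta <- 1.0               # pre-specified fixed non-inferiority margin
c(p_noninf = mean(muE - muR > -delta),          # E not worse than R by delta
  p_assay  = mean(muR - muP > 0),               # assay sensitivity: R beats P
  p_joint  = mean(muE - muR > -delta & muR - muP > 0))
```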

15.
Developing sophisticated statistical methods for go/no‐go decisions is crucial for clinical trials, as planning phase III or phase IV trials is costly and time consuming. In this paper, we develop a novel Bayesian methodology for determining the probability of success of a treatment regimen on the basis of the current data of a given trial. We introduce a new criterion for calculating the probability of success that allows for the inclusion of covariates as well as historical data on the treatment regimen and patient characteristics. A new class of prior distributions and covariate distributions is developed to achieve this goal. The methodology is quite general and can be used with univariate or multivariate continuous or discrete data, and it generalizes Chuang‐Stein's work. This methodology will be invaluable for informing the scientist about the likelihood of success of the compound, while incorporating covariate information on the trial population's patient characteristics when planning future pre‐market or post‐market trials. Copyright © 2014 John Wiley & Sons, Ltd.
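An assurance-style sketch of a probability-of-success calculation: draw the treatment effect from its posterior given the current data, then average the planned trial's power over those draws. This omits the paper's covariate and historical-data extensions; all numbers are illustrative.

```r
set.seed(15)
est <- 0.25; se_est <- 0.12             # current-trial effect estimate and SE (toy)
theta <- rnorm(1e5, est, se_est)        # approximate posterior under a flat prior
n_new <- 300; sd_y <- 1                 # planned per-arm size and outcome SD
se_new <- sd_y * sqrt(2 / n_new)        # SE of the future trial's estimate
mean(pnorm(theta / se_new - qnorm(0.975)))  # probability of a significant result
```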

16.
Mendelian randomization (MR) requires strong assumptions about the genetic instruments, of which the most difficult to justify relate to pleiotropy. In a two‐sample MR, different methods of analysis are available according to what we are able to assume. Under M1 there is no pleiotropy (fixed‐effects meta‐analysis); under M2 there may be pleiotropy, but the average pleiotropic effect is zero (random‐effects meta‐analysis); and under M3 the average pleiotropic effect is nonzero (MR‐Egger). In the latter two cases, we also require that the size of the pleiotropy is independent of the size of the effect on the exposure. Selecting one of these models without good reason would run the risk of misrepresenting the evidence for causality. The most conservative strategy would be to use M3 in all analyses, as it makes the weakest assumptions, but such an analysis gives much less precise estimates and so should be avoided whenever stronger assumptions are credible. We consider the situation of a two‐sample design when we are unsure which of the three pleiotropy models is appropriate. The analysis is placed within a Bayesian framework and Bayesian model averaging is used. We demonstrate that even large samples of the scale used in genome‐wide meta‐analysis may be insufficient to distinguish the pleiotropy models on the basis of the data alone. Our simulations show that Bayesian model averaging provides a reasonable trade‐off between bias and precision. Bayesian model averaging is recommended whenever there is uncertainty about the nature of the pleiotropy.
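A rough sketch of averaging across the pleiotropy models: fit the IVW model (M1; M2 shares its point estimate but inflates uncertainty) and MR-Egger (M3), then weight the causal estimates. BIC-based weights stand in here for the paper's fully Bayesian model probabilities; the summary statistics are toy values.

```r
bx <- c(0.11, 0.20, 0.15, 0.09, 0.18)   # SNP-exposure estimates (toy)
by <- c(0.06, 0.10, 0.09, 0.04, 0.11)   # SNP-outcome estimates (toy)
w  <- 1 / rep(0.03, 5)^2                # inverse-variance weights (outcome SEs)
m1 <- lm(by ~ 0 + bx, weights = w)      # M1: IVW, no pleiotropy
m3 <- lm(by ~ bx,     weights = w)      # M3: Egger intercept = average pleiotropy
est <- c(M1 = unname(coef(m1)["bx"]), M3 = unname(coef(m3)["bx"]))
bic <- c(BIC(m1), BIC(m3))
wts <- exp(-0.5 * (bic - min(bic))); wts <- wts / sum(wts)
c(weights = wts, averaged = sum(wts * est))   # model-averaged causal estimate
```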

17.
Adaptive clinical trial design has been proposed as a promising new approach that may improve the drug discovery process. Proponents of adaptive sample size re‐estimation promote its ability to avoid ‘up‐front’ commitment of resources, better address the complicated decisions faced by data monitoring committees, and minimize accrual to studies having delayed ascertainment of outcomes. We investigate aspects of adaptation rules, such as timing of the adaptation analysis and magnitude of sample size adjustment, that lead to greater or lesser statistical efficiency. Owing in part to the recent Food and Drug Administration guidance that promotes the use of pre‐specified sampling plans, we evaluate alternative approaches in the context of well‐defined, pre‐specified adaptation. We quantify the relative costs and benefits of fixed sample, group sequential, and pre‐specified adaptive designs with respect to standard operating characteristics such as type I error, maximal sample size, power, and expected sample size under a range of alternatives. Our results build on others’ prior research by demonstrating in realistic settings that simple and easily implemented pre‐specified adaptive designs provide only very small efficiency gains over group sequential designs with the same number of analyses. In addition, we describe optimal rules for modifying the sample size, providing efficient adaptation boundaries on a variety of scales for the interim test statistic for adaptation analyses occurring at several different stages of the trial. We thus provide insight into good and bad choices of adaptive sampling plans when the added flexibility of adaptive designs is desired. Copyright © 2012 John Wiley & Sons, Ltd.
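A simulation sketch of evaluating a pre-specified adaptive design: a two-stage trial with a promising-zone-style sample-size rule, analyzed with an inverse-normal combination test, which keeps type I error controlled for any pre-specified rule. The rule, boundary, and effect sizes are illustrative, not the paper's optimal ones.

```r
set.seed(17)
sim_trial <- function(theta, n1 = 100) {
  x1 <- rnorm(n1, theta); z1 <- sqrt(n1) * mean(x1)   # stage 1
  n2 <- if (z1 > 1) n1 else 2 * n1                    # pre-specified size rule
  x2 <- rnorm(n2, theta); z2 <- sqrt(n2) * mean(x2)   # stage 2
  zc <- (z1 + z2) / sqrt(2)      # inverse-normal combination, fixed weights
  c(reject = zc > qnorm(0.975), n = n1 + n2)
}
null <- replicate(1e4, sim_trial(0.0))
alt  <- replicate(1e4, sim_trial(0.2))
c(type1 = mean(null["reject", ]), power = mean(alt["reject", ]),
  E_n   = mean(alt["n", ]))      # compare against fixed-sample and GSD benchmarks
```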

18.
The Food and Drug Administration in the United States issued a much‐awaited draft guidance on ‘Multiple Endpoints in Clinical Trials’ in January 2017. The draft guidance is well written and contains a consistent message on the technical implementation of the principles laid out in the guidance. In this commentary, we raise a question on applying the principles to studies designed from a safety perspective. We then direct our attention to issues related to multiple co‐primary endpoints. In a paper published in the Drug Information Journal in 2007, Offen et al. give examples of disorders where multiple co‐primary endpoints are required by regulators. The standard test for multiple co‐primary endpoints is the min test, which tests each endpoint individually at the one‐sided 2.5% level for a confirmatory trial. This approach leads to a substantial loss of power when the number of co‐primary endpoints exceeds 2, a fact acknowledged in the draft guidance. We review approaches that have been proposed to tackle the problem of power loss and propose a new one. Using recommendations by Chen et al. for the assessment of drugs for vulvar and vaginal atrophy published in the Drug Information Journal in 2010, we argue for the need for more changes and urge a path forward that uses different levels of claims to reflect the effectiveness of a product on multiple endpoints that are equally important and mostly unrelated. Copyright © 2017 John Wiley & Sons, Ltd.
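A sketch of the min test's power loss: the trial succeeds only if every co-primary endpoint is individually significant at one-sided 2.5%, so with independent endpoints and 90% marginal power the overall power decays as 0.9^K; correlation attenuates but does not remove the loss. The correlation value is illustrative; mvrnorm is from MASS.

```r
marg <- 0.90
round(marg^(1:5), 3)   # 0.900 0.810 0.729 0.656 0.590: power loss as K grows
library(MASS)
set.seed(18)
K <- 3; rho <- 0.5
Sigma <- matrix(rho, K, K); diag(Sigma) <- 1
Z <- mvrnorm(1e5, mu = rep(qnorm(0.975) + qnorm(marg), K), Sigma)
mean(apply(Z > qnorm(0.975), 1, all))   # min-test power, correlated endpoints
```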

19.
The power prior has been widely used in many applications covering a large number of disciplines. The power prior is intended to be an informative prior constructed from historical data. It has been used in clinical trials, genetics, health care, psychology, environmental health, engineering, economics, and business. It has also been applied for a wide variety of models and settings, both in the experimental design and analysis contexts. In this review article, we give an A‐to‐Z exposition of the power prior and its applications to date. We review its theoretical properties, variations in its formulation, statistical contexts for which it has been used, applications, and its advantages over other informative priors. We review models for which it has been used, including generalized linear models, survival models, and random effects models. Statistical areas where the power prior has been used include model selection, experimental design, hierarchical modeling, and conjugate priors. Frequentist properties of power priors in posterior inference are established, and a simulation study is conducted to further examine the empirical performance of the posterior estimates with power priors. Real data analyses are given illustrating the power prior as well as the use of the power prior in the Bayesian design of clinical trials. Copyright © 2015 John Wiley & Sons, Ltd.
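A minimal conjugate sketch of the power prior: the historical likelihood enters the posterior raised to a discounting weight a0 in [0, 1]. For binomial data with a Beta(c, d) initial prior everything stays in closed form; the weight and counts below are illustrative.

```r
a0 <- 0.5             # discounting weight for the historical data (assumed)
y0 <- 40; n0 <- 100   # historical successes / trials
y  <- 55; n  <- 110   # current data
c0 <- 1;  d0 <- 1     # vague initial Beta prior
shape1 <- c0 + a0 * y0 + y              # power-prior posterior: Beta(shape1, shape2)
shape2 <- d0 + a0 * (n0 - y0) + (n - y)
qbeta(c(0.025, 0.5, 0.975), shape1, shape2)   # posterior summary for the rate
# a0 = 0 ignores the historical data; a0 = 1 pools it fully with the current trial.
```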

20.
Meta‐analysis of clinical trials is a methodology to summarize information from a collection of trials about an intervention, in order to make informed inferences about that intervention. Random effects allow the target population outcomes to vary among trials. Since meta‐analysis is often an important element in helping shape public health policy, society depends on biostatisticians to help ensure that the methodology is sound. Yet when meta‐analysis involves randomized binomial trials with low event rates, the overwhelming majority of publications use methods currently not intended for such data. This statistical practice issue must be addressed. Proper methods exist, but they are rarely applied. This tutorial is devoted to estimating a well‐defined overall relative risk, via a patient‐weighted random‐effects method. We show what goes wrong with methods based on ‘inverse‐variance’ weights, which are almost universally used. To illustrate similarities and differences, we contrast our methods, inverse‐variance methods, and the published results (usually inverse‐variance) for 18 meta‐analyses from 13 Journal of the American Medical Association articles. We also consider the 2007 case of rosiglitazone (Avandia), where important public health issues were at stake, involving patient cardiovascular risk. The most widely used method would have reached a different conclusion. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
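A sketch contrasting the two weighting schemes on sparse-event toy data: the usual inverse-variance pooling of log relative risks needs continuity corrections when arms have zero events, whereas a simple patient-weighted estimate of the relative risk does not. The patient-weighted estimator here is a simplified stand-in, not the paper's exact method.

```r
ev_t <- c(2, 0, 1, 3); n_t <- c(500, 480, 510, 495)   # treatment events / size
ev_c <- c(0, 1, 0, 1); n_c <- c(505, 470, 500, 490)   # control events / size
cc <- 0.5                                             # continuity correction
lrr  <- log((ev_t + cc) / (n_t + cc)) - log((ev_c + cc) / (n_c + cc))
vlrr <- 1 / (ev_t + cc) - 1 / (n_t + cc) + 1 / (ev_c + cc) - 1 / (n_c + cc)
exp(sum(lrr / vlrr) / sum(1 / vlrr))                  # inverse-variance pooled RR
w <- n_t + n_c                                        # patient weights
(sum(w * ev_t / n_t) / sum(w)) / (sum(w * ev_c / n_c) / sum(w))  # patient-weighted RR
```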
