Similar articles
20 similar articles found
1.
Measures that quantify the impact of heterogeneity in univariate meta-analysis, including the very popular I² statistic, are now well established. Multivariate meta-analysis, where studies provide multiple outcomes that are pooled in a single analysis, is also becoming more commonly used. The question of how to quantify heterogeneity in the multivariate setting is therefore raised. It is the univariate R² statistic, the ratio of the variance of the estimated treatment effect under the random and fixed effects models, that generalises most naturally, so this statistic provides our basis. This statistic is then used to derive a multivariate analogue of I². We also provide a multivariate H² statistic, the ratio of a generalisation of Cochran's heterogeneity statistic and its associated degrees of freedom, with an accompanying generalisation of the usual I² statistic. Our proposed heterogeneity statistics can be used alongside all the usual estimates and inferential procedures used in multivariate meta-analysis. We apply our methods to some real datasets and show how our statistics are equally appropriate in the context of multivariate meta-regression, where study-level covariate effects are included in the model. Our heterogeneity statistics may be used when applying any procedure for fitting the multivariate random effects model. Copyright © 2012 John Wiley & Sons, Ltd.
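The univariate basis of this proposal can be sketched in a few lines: R² is the ratio of the pooled-estimate variance under the random-effects model to that under the fixed-effect model, and an I²-type index can be tied to it as (R² − 1)/R², by analogy with the Higgins-Thompson construction. The data values and the assumption that τ² has already been estimated are illustrative, not from the paper:

```python
import numpy as np

def r2_i2(v, tau2):
    """R^2: ratio of the pooled-estimate variance under the random-effects
    model to that under the fixed-effect model; an I^2-type index derived
    from it as (R^2 - 1) / R^2 (a sketch of the univariate basis)."""
    var_fixed = 1.0 / np.sum(1.0 / v)            # Var(pooled), fixed effect
    var_random = 1.0 / np.sum(1.0 / (v + tau2))  # Var(pooled), random effects
    R2 = var_random / var_fixed
    I2 = (R2 - 1.0) / R2
    return R2, I2

v = np.array([0.04, 0.09, 0.05])   # illustrative within-study variances
R2, I2 = r2_i2(v, tau2=0.05)       # tau2 assumed estimated beforehand
```

R² = 1 signals no inflation of the pooled estimate's variance by heterogeneity; the paper's contribution is the multivariate generalisation of this ratio, which the sketch does not attempt.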

2.
Numerous meta-analyses in healthcare research combine results from only a small number of studies, for which the variance representing between-study heterogeneity is estimated imprecisely. A Bayesian approach to estimation allows external evidence on the expected magnitude of heterogeneity to be incorporated. The aim of this paper is to provide tools that improve the accessibility of Bayesian meta-analysis. We present two methods for implementing Bayesian meta-analysis, using numerical integration and importance sampling techniques. Based on 14 886 binary outcome meta-analyses in the Cochrane Database of Systematic Reviews, we derive a novel set of predictive distributions for the degree of heterogeneity expected in 80 settings depending on the outcomes assessed and comparisons made. These can be used as prior distributions for heterogeneity in future meta-analyses. The two methods are implemented in R, for which code is provided. Both methods produce equivalent results to standard but more complex Markov chain Monte Carlo approaches. The priors are derived as log-normal distributions for the between-study variance, applicable to meta-analyses of binary outcomes on the log odds-ratio scale. The methods are applied to two example meta-analyses, incorporating the relevant predictive distributions as prior distributions for between-study heterogeneity. We have provided resources to facilitate Bayesian meta-analysis, in a form accessible to applied researchers, which allow relevant prior information on the degree of heterogeneity to be incorporated. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

3.
Fixed-effects meta-analysis has been criticized because the assumption of homogeneity is often unrealistic and can result in underestimation of parameter uncertainty. Random-effects meta-analysis and meta-regression are therefore typically used to accommodate explained and unexplained between-study variability. However, it is not unusual to obtain a boundary estimate of zero for the (residual) between-study standard deviation, resulting in fixed-effects estimates of the other parameters and their standard errors. To avoid such boundary estimates, we suggest using Bayes modal (BM) estimation with a gamma prior on the between-study standard deviation. When no prior information is available regarding the magnitude of the between-study standard deviation, a weakly informative default prior can be used (with shape parameter 2 and rate parameter close to 0) that produces positive estimates but does not overrule the data, leading to only a small decrease in the log likelihood from its maximum. We review the most commonly used estimation methods for meta-analysis and meta-regression including classical and Bayesian methods and apply these methods, as well as our BM estimator, to real datasets. We then perform simulations to compare BM estimation with the other methods and find that BM estimation performs well by (i) avoiding boundary estimates; (ii) having smaller root mean squared error for the between-study standard deviation; and (iii) better coverage for the overall effects than the other methods when the true model has at least a small or moderate amount of unexplained heterogeneity. Copyright © 2013 John Wiley & Sons, Ltd.
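The Bayes modal idea can be sketched as a penalised profile likelihood: a Gamma(2, rate ≈ 0) prior on the between-study SD τ adds roughly log τ to the log-likelihood, which diverges to −∞ as τ → 0 and so pushes the mode off the boundary. A grid search stands in for a proper optimiser, and the data values are illustrative assumptions, not the paper's:

```python
import numpy as np

def bm_tau(y, v, shape=2.0, rate=1e-4):
    """Bayes modal estimate of the between-study SD tau: maximise the
    profile log-likelihood plus the log Gamma(shape, rate) prior density.
    A grid search stands in for a proper optimiser (sketch)."""
    best_tau, best_val = None, -np.inf
    for tau in np.linspace(1e-4, 2.0, 4000):
        w = 1.0 / (v + tau ** 2)
        mu = np.sum(w * y) / np.sum(w)            # profiled-out mean
        loglik = 0.5 * np.sum(np.log(w)) - 0.5 * np.sum(w * (y - mu) ** 2)
        logprior = (shape - 1.0) * np.log(tau) - rate * tau
        if loglik + logprior > best_val:
            best_val, best_tau = loglik + logprior, tau
    return best_tau

y = np.array([0.10, 0.12, 0.11, 0.09])   # nearly homogeneous effects,
v = np.array([0.02, 0.03, 0.02, 0.04])   # so ML would hit the zero boundary
tau_hat = bm_tau(y, v)                   # positive, away from the boundary
```

With these near-homogeneous effects an ML or REML fit would return τ = 0; the log τ penalty yields a small positive mode instead, which is the paper's motivating behaviour.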

4.
Many meta-analyses combine results from only a small number of studies, a situation in which the between-study variance is imprecisely estimated when standard methods are applied. Bayesian meta-analysis allows incorporation of external evidence on heterogeneity, providing the potential for more robust inference on the effect size of interest. We present a method for performing Bayesian meta-analysis using data augmentation, in which we represent an informative conjugate prior for between-study variance by pseudo data and use meta-regression for estimation. To assist in this, we derive predictive inverse-gamma distributions for the between-study variance expected in future meta-analyses. These may serve as priors for heterogeneity in new meta-analyses. In a simulation study, we compare approximate Bayesian methods using meta-regression and pseudo data against fully Bayesian approaches based on importance sampling techniques and Markov chain Monte Carlo (MCMC). We compare the frequentist properties of these Bayesian methods with those of the commonly used frequentist DerSimonian and Laird procedure. The method is implemented in standard statistical software and provides a less complex alternative to standard MCMC approaches. An importance sampling approach produces almost identical results to standard MCMC approaches, and results obtained through meta-regression and pseudo data are very similar. On average, data augmentation gives results closer to those of MCMC when implemented using restricted maximum likelihood estimation rather than the DerSimonian and Laird or maximum likelihood methods. The methods are applied to real datasets, and an extension to network meta-analysis is described. The proposed method facilitates Bayesian meta-analysis in a way that is accessible to applied researchers. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

5.
Recently, multiple imputation has been proposed as a tool for individual patient data meta-analysis with sporadically missing observations, and it has been suggested that within-study imputation is usually preferable. However, within-study imputation cannot handle variables that are completely missing within studies. Further, if some of the contributing studies are relatively small, it may be appropriate to share information across studies when imputing. In this paper, we develop and evaluate a joint modelling approach to multiple imputation of individual patient data in meta-analysis, with an across-study probability distribution for the study-specific covariance matrices. This retains the flexibility to allow for between-study heterogeneity when imputing while allowing (i) sharing of information on the covariance matrix across studies when this is appropriate, and (ii) imputation of variables that are wholly missing from studies. Simulation results show both equivalent performance to the within-study imputation approach where this is valid, and good results in more general, practically relevant scenarios with studies of very different sizes, non-negligible between-study heterogeneity and wholly missing variables. We illustrate our approach using data from an individual patient data meta-analysis of hypertension trials. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

6.
Making inferences about the average treatment effect using the random effects model for meta-analysis is problematic in the common situation where there is a small number of studies. This is because estimates of the between-study variance are not precise enough to accurately apply the conventional methods for testing and deriving a confidence interval for the average effect. We have found that a refined method for univariate meta-analysis, which applies a scaling factor to the estimated effects' standard error, provides more accurate inference. We explain how to extend this method to the multivariate scenario and show that our proposal for refined multivariate meta-analysis and meta-regression can provide more accurate inferences than the more conventional approach. We explain how our proposed approach can be implemented using standard output from multivariate meta-analysis software packages and apply our methodology to two real examples. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.
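In the univariate case, the scaling-factor idea can be sketched as a Hartung-Knapp style adjustment: the squared standard error of the pooled estimate is multiplied by the weighted dispersion of the effects about the mean, and inference uses a t distribution with k − 1 degrees of freedom. The sketch assumes τ² is already estimated, and the hard-coded 4.303 is the 0.975 t quantile with 2 df for the three-study example:

```python
import math

def hksj(y, v, tau2):
    """Random-effects pooled estimate with a Hartung-Knapp style scaling
    factor applied to the standard error (sketch; tau2 assumed already
    estimated). Inference then uses a t distribution with k - 1 df."""
    k = len(y)
    w = [1.0 / (vi + tau2) for vi in v]
    mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, y)) / (k - 1)
    se_scaled = math.sqrt(q / sum(w))        # scaled standard error
    return mu, se_scaled

y = [0.1, 0.5, 0.9]                          # illustrative effect estimates
v = [0.04, 0.04, 0.04]                       # within-study variances
mu, se = hksj(y, v, tau2=0.05)
t_crit = 4.303                               # t(0.975, 2 df) for k = 3
ci = (mu - t_crit * se, mu + t_crit * se)
```

When the effects are more dispersed than the weights predict, q > 1 and the interval widens relative to the conventional one; the paper's contribution is the multivariate extension of this refinement, which the sketch does not cover.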

7.
Systematic reviews often provide recommendations for further research. When meta-analyses are inconclusive, such recommendations typically argue for further studies to be conducted. However, the nature and amount of future research should depend on the nature and amount of the existing research. We propose a method based on conditional power to make these recommendations more specific. Assuming a random-effects meta-analysis model, we evaluate the influence of the number of additional studies, of their information sizes and of the heterogeneity anticipated among them on the ability of an updated meta-analysis to detect a prespecified effect size. The conditional powers of possible design alternatives can be summarized in a simple graph which can also be the basis for decision making. We use three examples from the Cochrane Database of Systematic Reviews to demonstrate our strategy. We demonstrate that if heterogeneity is anticipated, it might not be possible for a single study to reach the desirable power no matter how large it is. Copyright © 2012 John Wiley & Sons, Ltd.
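A rough sketch of such a conditional-power calculation, under the simplifying assumptions (mine, not the paper's) that all m new studies share one within-study variance and that the current pooled estimate's standard error summarises the existing evidence:

```python
from statistics import NormalDist
import math

def updated_power(se_current, m, v_new, tau2, delta, alpha=0.05):
    """Approximate power of an updated random-effects meta-analysis to
    detect effect delta after adding m new studies, each with
    within-study variance v_new, under anticipated heterogeneity tau2."""
    nd = NormalDist()
    precision = 1.0 / se_current ** 2 + m / (v_new + tau2)
    se_updated = math.sqrt(1.0 / precision)
    z = nd.inv_cdf(1 - alpha / 2)
    return nd.cdf(delta / se_updated - z)

# with tau2 > 0, even one arbitrarily large new study cannot drive the
# power to 1: its precision contribution is capped at 1 / tau2
p_one_huge = updated_power(0.2, 1, 0.0, 0.04, 0.3)   # v_new -> 0
p_two = updated_power(0.2, 2, 0.01, 0.04, 0.3)
```

The capped first call illustrates the abstract's closing point: under anticipated heterogeneity, a single study of unbounded size still leaves the updated power well below 1, so several smaller studies can outperform one enormous trial.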

8.
The multivariate random effects model is a generalization of the standard univariate model. Multivariate meta-analysis is becoming more commonly used and the techniques and related computer software, although continually under development, are now in place. In order to raise awareness of the multivariate methods, and discuss their advantages and disadvantages, we organized a one-day 'Multivariate meta-analysis' event at the Royal Statistical Society. In addition to disseminating the most recent developments, we also received an abundance of comments, concerns, insights, critiques and encouragement. This article provides a balanced account of the day's discourse. By giving others the opportunity to respond to our assessment, we hope to ensure that the various viewpoints and opinions are aired before multivariate meta-analysis simply becomes another widely used de facto method without any proper consideration of it by the medical statistics community. We describe the areas of application that multivariate meta-analysis has found, the methods available, the difficulties typically encountered and the arguments for and against the multivariate methods, using four representative but contrasting examples. We conclude that the multivariate methods can be useful, and in particular can provide estimates with better statistical properties, but also that these benefits come at the price of making more assumptions which do not result in better inference in every case. Although there is evidence that multivariate meta-analysis has considerable potential, it must be even more carefully applied than its univariate counterpart in practice. Copyright © 2011 John Wiley & Sons, Ltd.

9.
Many meta-analyses report using 'Cochran's Q test' to assess heterogeneity of effect-size estimates from the individual studies. Some authors cite work by W. G. Cochran, without realizing that Cochran deliberately did not use Q itself to test for heterogeneity. Further, when heterogeneity is absent, the actual null distribution of Q is not the chi-squared distribution assumed for 'Cochran's Q test'. This paper reviews work by Cochran related to Q. It then discusses derivations of the asymptotic approximation for the null distribution of Q, as well as work that has derived finite-sample moments and corresponding approximations for the cases of specific measures of effect size. Those results complicate implementation and interpretation of the popular heterogeneity index I². Also, it turns out that the test-based confidence intervals used with I² are based on a fallacious approach. Software that outputs Q and I² should use the appropriate reference value of Q for the particular measure of effect size and the current meta-analysis. Q is a key element of the popular DerSimonian-Laird procedure for random-effects meta-analysis, but the assumptions of that procedure and related procedures do not reflect the actual behavior of Q and may introduce bias. The DerSimonian-Laird procedure should be regarded as unreliable. Copyright © 2015 John Wiley & Sons, Ltd.
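For reference, the quantities under discussion — Q with fixed-effect weights and the Higgins-Thompson H² and I² built from it — can be computed as follows. This is a sketch of the conventional construction the paper criticises: its point is precisely that the chi-squared reference distribution behind these formulas can be a poor approximation.

```python
import numpy as np

def q_h2_i2(y, v):
    """Cochran-style Q with fixed-effect weights, and the Higgins-Thompson
    H^2 = Q/df and I^2 = (H^2 - 1)/H^2 indices built from it (sketch)."""
    w = 1.0 / v
    mu = np.sum(w * y) / np.sum(w)       # fixed-effect pooled estimate
    Q = float(np.sum(w * (y - mu) ** 2))
    df = len(y) - 1
    H2 = Q / df
    I2 = max(0.0, (H2 - 1.0) / H2)       # truncated below at zero
    return Q, H2, I2

y = np.array([0.1, 0.3, 0.5])            # illustrative effect estimates
v = np.array([0.01, 0.01, 0.01])         # equal within-study variances
Q, H2, I2 = q_h2_i2(y, v)
```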

10.
The quantile approximation method has recently been proposed as a simple method for deriving confidence intervals for the treatment effect in a random effects meta-analysis. Although easily implemented, the quantiles used to construct intervals are derived from a single simulation study. Here it is shown that altering the study parameters, and in particular introducing changes to the distribution of the within-study variances, can have a dramatic impact on the resulting quantiles. This is further illustrated analytically by examining the scenario where all trials are assumed to be the same size. A more cautious approach is therefore suggested, where the conventional standard normal quantile is used in the primary analysis, but where the use of alternative quantiles is also considered in a sensitivity analysis. Copyright © 2008 John Wiley & Sons, Ltd.

11.
This study challenges two core conventional meta-analysis methods: fixed effect and random effects. We show how and explain why an unrestricted weighted least squares estimator is superior to conventional random-effects meta-analysis when there is publication (or small-sample) bias and better than a fixed-effect weighted average if there is heterogeneity. Statistical theory and simulations of effect sizes, log odds ratios and regression coefficients demonstrate that this unrestricted weighted least squares estimator provides satisfactory estimates and confidence intervals that are comparable to random effects when there is no publication (or small-sample) bias and identical to fixed-effect meta-analysis when there is no heterogeneity. When there is publication selection bias, the unrestricted weighted least squares approach dominates random effects; when there is excess heterogeneity, it is clearly superior to fixed-effect meta-analysis. In practical applications, an unrestricted weighted least squares weighted average will often provide superior estimates to both conventional fixed and random effects. Copyright © 2015 John Wiley & Sons, Ltd.
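The unrestricted WLS estimator can be sketched as an ordinary no-intercept regression of the standardised effects on the inverse standard errors: the slope coincides with the fixed-effect (inverse-variance) average, while the regression's residual variance supplies a multiplicative, rather than additive, dispersion term in the standard error. The data values are illustrative assumptions:

```python
import numpy as np

def uwls(y, v):
    """Unrestricted WLS: no-intercept OLS of the standardised effects on
    the inverse standard errors. The slope equals the fixed-effect
    average; the residual variance acts as a multiplicative dispersion
    term in the standard error (sketch)."""
    t = y / np.sqrt(v)                        # standardised effect sizes
    x = 1.0 / np.sqrt(v)                      # inverse standard errors
    beta = np.sum(x * t) / np.sum(x ** 2)     # = inverse-variance average
    s2 = np.sum((t - beta * x) ** 2) / (len(y) - 1)  # residual variance
    se = np.sqrt(s2 / np.sum(x ** 2))         # UWLS standard error
    return beta, se

y = np.array([0.1, 0.3, 0.5])
v = np.array([0.01, 0.01, 0.01])
beta, se = uwls(y, v)
```

With homogeneous data s² ≈ 1 and the estimator reduces to fixed-effect meta-analysis; with excess dispersion, as here, s² > 1 widens the interval without re-weighting the studies.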

12.
In epidemiologic studies and clinical trials with time-dependent outcomes (for instance, death or disease progression), survival curves are used to describe the risk of the event over time. In meta-analyses of studies reporting a survival curve, the most informative finding is a summary survival curve. In this paper, we propose a method to obtain a distribution-free summary survival curve by expanding the product-limit estimator of survival for aggregated survival data. The extension of DerSimonian and Laird's methodology for multiple outcomes is applied to account for the between-study heterogeneity. The I² and H² statistics are used to quantify the impact of the heterogeneity in the published survival curves. A statistical test for between-strata comparison is proposed, with the aim of exploring study-level factors potentially associated with survival. The performance of the proposed approach is evaluated in a simulation study. Our approach is also applied to synthesize the survival of untreated patients with hepatocellular carcinoma from aggregate data of 27 studies, and the graft survival of kidney transplant recipients from individual data from six hospitals. Copyright © 2014 John Wiley & Sons, Ltd.

13.
This article brings into serious question the validity of empirically based weighting in random effects meta-analysis. These methods treat sample sizes as non-random, whereas they need to be part of the random effects analysis. It will be demonstrated that empirical weighting risks substantial bias. Two alternative methods are proposed. The first estimates the arithmetic mean of the population of study effect sizes per the classical model for random effects meta-analysis. We show that anything other than an unweighted mean of study effect sizes will risk serious bias for this targeted parameter. The second method estimates a patient-level effect size, something quite different from the first. To prevent inconsistent estimation of this population parameter, the study effect sizes must be weighted in proportion to the total sample sizes of the trials. The two approaches are illustrated with a meta-analysis of a nasal decongestant, for which the DerSimonian-Laird approach, the most popular empirically based weighted method, produces counter-intuitive results. It is concluded that all past publications based on empirically weighted random effects meta-analysis should be revisited to see if the qualitative conclusions hold up under the methods proposed herein. It is also recommended that empirically based weighted random effects meta-analysis not be used in the future, unless strong cautions about the assumptions underlying these analyses are stated and, at a minimum, some form of secondary analysis based on the principles set forth in this article is provided to supplement the primary analysis. Copyright © 2009 John Wiley & Sons, Ltd.

14.
Effect size estimates to be combined in a systematic review are often found to be more variable than one would expect based on sampling differences alone. This is usually interpreted as evidence that the effect sizes are heterogeneous. A random-effects model is then often used to account for the heterogeneity in the effect sizes. A novel method for constructing confidence intervals for the amount of heterogeneity in the effect sizes is proposed that guarantees nominal coverage probabilities even in small samples when model assumptions are satisfied. A variety of existing approaches for constructing such confidence intervals are summarized and the various methods are applied to an example to illustrate their use. A simulation study reveals that the newly proposed method yields the most accurate coverage probabilities under conditions more analogous to practice, where assumptions about normally distributed effect size estimates and known sampling variances only hold asymptotically.
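One widely used interval of this kind, the Q-profile method, can be sketched as follows: find the values of τ² at which the generalised Q statistic equals the chi-squared quantiles with k − 1 degrees of freedom. This is an illustration of the existing-methods family the abstract surveys, not necessarily the paper's novel proposal; k = 3 is chosen so that the chi-squared quantile with 2 df has the closed form −2 log(1 − p), and bisection stands in for a proper root-finder:

```python
import math

def q_gen(tau2, y, v):
    """Generalised Q statistic evaluated at a trial value of tau^2."""
    w = [1.0 / (vi + tau2) for vi in v]
    mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    return sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, y))

def q_profile_ci(y, v, alpha=0.05):
    """Q-profile CI for tau^2: invert Q against chi-square quantiles with
    k - 1 df. Here k = 3, so df = 2 and the quantile is -2*log(1 - p);
    Q decreases in tau^2, so bisection suffices (sketch)."""
    hi_q = -2.0 * math.log(alpha / 2)        # upper chi2(2) quantile
    lo_q = -2.0 * math.log(1 - alpha / 2)    # lower chi2(2) quantile

    def solve(target):
        a, b = 0.0, 100.0                    # bracket for tau^2
        for _ in range(200):
            m = (a + b) / 2.0
            if q_gen(m, y, v) > target:      # Q too large: tau^2 lies right
                a = m
            else:
                b = m
        return (a + b) / 2.0

    lower = solve(hi_q) if q_gen(0.0, y, v) > hi_q else 0.0
    upper = solve(lo_q)
    return lower, upper

y = [0.1, 0.5, 0.9]          # illustrative effect estimates
v = [0.01, 0.01, 0.01]       # known within-study variances
lower, upper = q_profile_ci(y, v)
```

The resulting interval is typically very wide with so few studies, which is exactly the small-sample regime the paper's coverage comparison targets.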

15.
Genome-wide association studies have recently identified many new loci associated with human complex diseases. These newly discovered variants typically have weak effects requiring studies with large numbers of individuals to achieve the statistical power necessary to identify them. Likely, there exist even more associated variants, which remain to be found if even larger association studies can be assembled. Meta-analysis provides a straightforward means of increasing study sample sizes without collecting new samples by combining existing data sets. One obstacle to combining studies is that they are often performed on platforms with different marker sets. Current studies overcome this issue by imputing genotypes missing from each of the studies and then performing standard meta-analysis techniques. We show that this approach may result in a loss of power since errors in imputation are not accounted for. We present a new method for performing meta-analysis over imputed single nucleotide polymorphisms, show that it is optimal with respect to power, and discuss practical implementation issues. Through simulation experiments, we show that our imputation-aware meta-analysis approach outperforms or matches standard meta-analysis approaches. Genet. Epidemiol. 34: 537-542, 2010. © 2010 Wiley-Liss, Inc.

16.
For bivariate meta-analysis of diagnostic studies, likelihood approaches are very popular. However, they often run into numerical problems with possible non-convergence. In addition, the construction of confidence intervals is controversial. Bayesian methods based on Markov chain Monte Carlo (MCMC) sampling could be used, but are often difficult to implement, and require long running times and diagnostic convergence checks. Recently, a new Bayesian deterministic inference approach for latent Gaussian models using integrated nested Laplace approximations (INLA) has been proposed. With this approach MCMC sampling becomes redundant as the posterior marginal distributions are directly and accurately approximated. By means of a real data set we investigate the influence of the prior information provided and compare the results obtained by INLA, MCMC, and the maximum likelihood procedure SAS PROC NLMIXED. Using a simulation study we further extend the comparison of INLA and SAS PROC NLMIXED by assessing their performance in terms of bias, mean-squared error, coverage probability, and convergence rate. The results indicate that INLA is more stable and gives generally better coverage probabilities for the pooled estimates and less biased estimates of variance parameters. The user-friendliness of INLA is demonstrated by documented R code. Copyright © 2010 John Wiley & Sons, Ltd.

17.
Bivariate random-effects meta-analysis (BVMA) is a method of data synthesis that accounts for treatment effects measured on two outcomes. BVMA gives more precise estimates of the population mean and predicted values than two univariate random-effects meta-analyses (UVMAs). BVMA also addresses bias from incomplete reporting of outcomes. A few tutorials have covered technical details of BVMA of categorical or continuous outcomes. Limited guidance is available on how to analyze datasets that include trials with mixed continuous-binary outcomes where treatment effects on one outcome or the other are not reported. Given the advantages of Bayesian BVMA for handling missing outcomes, we present a tutorial for Bayesian BVMA of incompletely reported treatment effects on mixed bivariate outcomes. This step-by-step approach can serve as a model for our intended audience, the methodologist familiar with Bayesian meta-analysis, looking for practical advice on fitting bivariate models. To facilitate application of the proposed methods, we include our WinBUGS code. As an example, we use aggregate-level data from published trials to demonstrate the estimation of the effects of vitamin K and bisphosphonates on two correlated bone outcomes, fracture and bone mineral density. We present datasets where reporting of the pairs of treatment effects on both outcomes was 'partially' complete (i.e., pairs completely reported in some trials), and we outline steps for modeling the incompletely reported data. To assess what is gained from the additional work required by BVMA, we compare the resulting estimates to those from separate UVMAs. We discuss methodological findings and make four recommendations. Copyright © 2015 John Wiley & Sons, Ltd.

18.
Multivariate meta-analysis is increasingly used in medical statistics. In the univariate setting, the non-iterative method proposed by DerSimonian and Laird is a simple and now standard way of performing random effects meta-analyses. We propose a natural and easily implemented multivariate extension of this procedure which is accessible to applied researchers and provides a much less computationally intensive alternative to existing methods. In a simulation study, the proposed procedure performs similarly in almost all ways to the more established iterative restricted maximum likelihood approach. The method is applied to some real data sets and an extension to multivariate meta-regression is described. Copyright © 2009 John Wiley & Sons, Ltd.

19.
Meta-analyses of genetic association studies are usually performed using a single polymorphism at a time, even though in many cases the individual studies report results from partially overlapping sets of polymorphisms. We present here a multipoint (or multilocus) method for multivariate meta-analysis of published population-based case-control association studies. The method is derived by extending the general method for multivariate meta-analysis and allows for multivariate modelling of log odds ratios (OR) derived from several polymorphisms that are in linkage disequilibrium (LD). The method is presented in a genetic model-free approach, although it can also be used by assuming a genetic model of inheritance beforehand. Furthermore, the method is presented in a unified framework and is easily applied both to discrete outcomes (using the OR) and to meta-analyses of a continuous outcome (using the mean difference). The main innovation of the method is the analytical calculation of the within-study covariances between estimates derived from linked polymorphisms. The only requirement is an external estimate of the degree of pairwise LD between the polymorphisms under study, which can be obtained from the same published studies, from the literature or from HapMap. Thus, the method is quite simple and fast, it can be extended to an arbitrary set of polymorphisms and can be fitted in nearly all statistical packages (Stata, R/Splus and SAS). Applications in two already published meta-analyses provide encouraging results concerning the robustness and usefulness of the method, and we expect that it will be widely used in the future. Genet. Epidemiol. 34: 702-715, 2010. © 2010 Wiley-Liss, Inc.

20.
Heterogeneity in diagnostic meta-analyses is common because of the observational nature of diagnostic studies and the lack of standardization in the positivity criterion (cut-off value) for some tests. So far, the unexplained heterogeneity across studies has been quantified either by using the I² statistic for a single parameter (i.e. either the sensitivity or the specificity) or by visually examining the data in receiver-operating characteristic space. In this paper, we derive improved I² statistics measuring heterogeneity for dichotomous outcomes, with a focus on diagnostic tests. We show that the currently used estimate of the 'typical' within-study variance proposed by Higgins and Thompson is not able to properly account for the variability of the within-study variance across studies for dichotomous variables. Therefore, when the between-study variance is large, the 'typical' within-study variance underestimates the expected within-study variance, and the corresponding I² is overestimated. We propose to use the expected value of the within-study variation in the construction of I² in cases of univariate and bivariate diagnostic meta-analyses. For bivariate diagnostic meta-analyses, we derive a bivariate version of I² that is able to account for the correlation between sensitivity and specificity. We illustrate the performance of these new estimators using simulated data as well as two real data sets. Copyright © 2014 John Wiley & Sons, Ltd.

