Related Articles
20 related articles found (search time: 47 ms)
1.
Meta-analysis of clinical trials is a methodology to summarize information from a collection of trials about an intervention, in order to make informed inferences about that intervention. Random effects allow the target population outcomes to vary among trials. Since meta-analysis is often an important element in helping shape public health policy, society depends on biostatisticians to help ensure that the methodology is sound. Yet when meta-analysis involves randomized binomial trials with low event rates, the overwhelming majority of publications use methods currently not intended for such data. This statistical practice issue must be addressed. Proper methods exist, but they are rarely applied. This tutorial is devoted to estimating a well-defined overall relative risk, via a patient-weighted random-effects method. We show what goes wrong with methods based on 'inverse-variance' weights, which are almost universally used. To illustrate similarities and differences, we contrast our methods, inverse-variance methods, and the published results (usually inverse-variance) for 18 meta-analyses from 13 Journal of the American Medical Association articles. We also consider the 2007 case of rosiglitazone (Avandia), where important public health issues were at stake, involving patient cardiovascular risk. The most widely used method would have reached a different conclusion. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
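For context, the fragment below is a minimal sketch of the conventional fixed-effect inverse-variance pooling of log relative risks that the abstract argues against for low-event-rate trials, including the usual 0.5 continuity correction for zero cells. The counts are invented, and the authors' patient-weighted random-effects estimator is not reproduced here.

```python
import numpy as np
from scipy import stats

# events / totals in the treatment and control arms of each trial (toy numbers)
rt = np.array([2, 0, 1, 3]);  nt = np.array([200, 150, 180, 220])
rc = np.array([1, 1, 0, 1]);  nc = np.array([195, 160, 175, 210])

# 0.5 continuity correction for trials with a zero cell (standard practice,
# and one source of the problems discussed in the abstract)
cc = ((rt == 0) | (rc == 0)) * 0.5
a, b = rt + cc, rc + cc             # corrected event counts
n1, n2 = nt + 2 * cc, nc + 2 * cc   # corrected totals

log_rr = np.log((a / n1) / (b / n2))
var_lrr = 1 / a - 1 / n1 + 1 / b - 1 / n2        # delta-method variance of log RR

w = 1 / var_lrr                                   # inverse-variance weights
pooled = np.sum(w * log_rr) / np.sum(w)           # fixed-effect pooled log RR
se = np.sqrt(1 / np.sum(w))
ci = np.exp(pooled + np.array([-1.0, 1.0]) * stats.norm.ppf(0.975) * se)
print(f"pooled RR = {np.exp(pooled):.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```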

2.
Multivariate meta-analysis, which involves jointly analyzing multiple and correlated outcomes from separate studies, has received a great deal of attention. One reason to prefer the multivariate approach is its ability to account for the dependence between multiple estimates from the same study. However, nearly all the existing methods for analyzing multivariate meta-analytic data require knowledge of the within-study correlations, which are usually unavailable in practice. We propose a simple non-iterative method for the analysis of multivariate meta-analysis datasets that has no convergence problems and does not require the use of within-study correlations. Our approach uses standard univariate methods for the marginal effects but also provides valid joint inference for multiple parameters. The proposed method can directly handle missing outcomes under the missing-completely-at-random assumption. Simulation studies show that the proposed method provides unbiased estimates, well-estimated standard errors, and confidence intervals with good coverage probability. Furthermore, the proposed method is found to maintain high relative efficiency compared with conventional multivariate meta-analyses where the within-study correlations are known. We illustrate the proposed method through two real meta-analyses where functions of the estimated effects are of interest. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
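The estimator itself is not described in this abstract, so the sketch below is only a generic stand-in for the same goal, using invented data: a univariate inverse-variance pool per outcome combined with a nonparametric bootstrap over studies to approximate the covariance between the pooled estimates, so that a function of them can be tested without any within-study correlations. It is not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(3)
# effect estimates and variances for outcomes A and B in 8 studies (NaN = not reported)
yA = np.array([0.40, 0.10, 0.50, np.nan, 0.30, 0.20, 0.60, 0.25])
vA = np.array([0.02, 0.05, 0.03, np.nan, 0.04, 0.02, 0.06, 0.03])
yB = np.array([0.20, np.nan, 0.30, 0.15, 0.10, 0.05, 0.35, 0.20])
vB = np.array([0.03, np.nan, 0.02, 0.05, 0.03, 0.04, 0.05, 0.02])

def pooled(y, v):
    """Fixed-effect inverse-variance pool over the non-missing studies
    (a univariate random-effects fit could be substituted)."""
    ok = ~np.isnan(y)
    w = 1 / v[ok]
    return np.sum(w * y[ok]) / np.sum(w)

est = np.array([pooled(yA, vA), pooled(yB, vB)])

B, k = 2000, len(yA)
boot = np.empty((B, 2))
for b in range(B):
    idx = rng.integers(0, k, size=k)             # resample whole studies
    boot[b] = [pooled(yA[idx], vA[idx]), pooled(yB[idx], vB[idx])]

cov = np.cov(boot, rowvar=False)                 # joint covariance of the two pools
diff = est[0] - est[1]                           # inference on a function of the pools
se_diff = np.sqrt(cov[0, 0] + cov[1, 1] - 2 * cov[0, 1])
print(f"difference A-B = {diff:.3f} (SE {se_diff:.3f})")
```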

3.
Many meta-analyses combine results from only a small number of studies, a situation in which the between-study variance is imprecisely estimated when standard methods are applied. Bayesian meta-analysis allows incorporation of external evidence on heterogeneity, providing the potential for more robust inference on the effect size of interest. We present a method for performing Bayesian meta-analysis using data augmentation, in which we represent an informative conjugate prior for between-study variance by pseudo data and use meta-regression for estimation. To assist in this, we derive predictive inverse-gamma distributions for the between-study variance expected in future meta-analyses. These may serve as priors for heterogeneity in new meta-analyses. In a simulation study, we compare approximate Bayesian methods using meta-regression and pseudo data against fully Bayesian approaches based on importance sampling techniques and Markov chain Monte Carlo (MCMC). We compare the frequentist properties of these Bayesian methods with those of the commonly used frequentist DerSimonian and Laird procedure. The method is implemented in standard statistical software and provides a less complex alternative to standard MCMC approaches. An importance sampling approach produces almost identical results to standard MCMC approaches, and results obtained through meta-regression and pseudo data are very similar. On average, data augmentation provides results closer to MCMC if implemented using restricted maximum likelihood estimation rather than DerSimonian and Laird or maximum likelihood estimation. The methods are applied to real datasets, and an extension to network meta-analysis is described. The proposed method facilitates Bayesian meta-analysis in a way that is accessible to applied researchers. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

4.
Making inferences about the average treatment effect using the random effects model for meta-analysis is problematic in the common situation where there is a small number of studies. This is because estimates of the between-study variance are not precise enough to accurately apply the conventional methods for testing and deriving a confidence interval for the average effect. We have found that a refined method for univariate meta-analysis, which applies a scaling factor to the estimated effects' standard error, provides more accurate inference. We explain how to extend this method to the multivariate scenario and show that our proposal for refined multivariate meta-analysis and meta-regression can provide more accurate inferences than the more conventional approach. We explain how our proposed approach can be implemented using standard output from multivariate meta-analysis software packages and apply our methodology to two real examples. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.

5.
An important question for clinicians appraising a meta-analysis is: are the findings likely to be valid in their own practice—does the reported effect accurately represent the effect that would occur in their own clinical population? To this end we advance the concept of statistical validity—where the parameter being estimated equals the corresponding parameter for a new independent study. Using a simple ('leave-one-out') cross-validation technique, we demonstrate how we may test meta-analysis estimates for statistical validity using a new validation statistic, Vn, and derive its distribution. We compare this with the usual approach of investigating heterogeneity in meta-analyses and demonstrate the link between statistical validity and homogeneity. Using a simulation study, the properties of Vn and the Q statistic are compared for univariate random effects meta-analysis and a tailored meta-regression model, where information from the setting (included as model covariates) is used to calibrate the summary estimate to the setting of application. Their properties are found to be similar when there are 50 studies or more, but for fewer studies Vn has greater power but a higher type 1 error rate than Q. The power and type 1 error rate of Vn are also shown to depend on the within-study variance, between-study variance, study sample size, and the number of studies in the meta-analysis. Finally, we apply Vn to two published meta-analyses and conclude that it usefully augments standard methods when deciding upon the likely validity of summary meta-analysis estimates in clinical practice. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
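The abstract does not give the form of Vn, so the sketch below only illustrates the leave-one-out idea in the plainest way: each study's estimate is compared with the prediction interval implied by a DerSimonian–Laird random-effects meta-analysis of the remaining studies. It is a hypothetical stand-in for intuition, not the authors' validation statistic or its distribution; the data are invented.

```python
import numpy as np
from scipy import stats

y = np.array([0.30, 0.12, 0.45, 0.05, 0.27, 0.51, 0.18])   # study effect estimates
v = np.array([0.02, 0.03, 0.05, 0.04, 0.02, 0.06, 0.03])   # within-study variances

def dl_meta(y, v):
    """DerSimonian–Laird random-effects fit: pooled mean, its variance, tau^2."""
    w = 1 / v
    mu_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fe) ** 2)
    tau2 = max(0.0, (Q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (v + tau2)
    return np.sum(w_re * y) / np.sum(w_re), 1 / np.sum(w_re), tau2

for i in range(len(y)):
    keep = np.arange(len(y)) != i
    mu, var_mu, tau2 = dl_meta(y[keep], v[keep])
    # predictive SD for a left-out study: estimation error + heterogeneity
    # + that study's own sampling error
    pred_sd = np.sqrt(var_mu + tau2 + v[i])
    z = (y[i] - mu) / pred_sd
    print(f"study {i}: z = {z:+.2f}, two-sided p = {2 * stats.norm.sf(abs(z)):.2f}")
```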

6.
In network meta-analyses that synthesize direct and indirect comparison evidence concerning multiple treatments, multivariate random effects models have been routinely used for addressing between-studies heterogeneities. Although their standard inference methods depend on large sample approximations (e.g., restricted maximum likelihood estimation) for the number of trials synthesized, the numbers of trials are often moderate or small. In these situations, standard estimators cannot be expected to behave in accordance with asymptotic theory; in particular, confidence intervals cannot be assumed to exhibit their nominal coverage probabilities, nor can the type I error probabilities of the corresponding tests be maintained. This invalidity issue may seriously influence the overall conclusions of network meta-analyses. In this article, we develop several improved inference methods for network meta-analyses to resolve these problems. We first introduce 2 efficient likelihood-based inference methods, the likelihood ratio test-based and efficient score test-based methods, in a general framework of network meta-analysis. Then, to improve the small-sample inferences, we develop improved higher-order asymptotic methods using Bartlett-type corrections and bootstrap adjustment methods. The proposed methods adopt Monte Carlo approaches using parametric bootstraps to effectively circumvent complicated analytical calculations of case-by-case analyses and to permit flexible application to various statistical models for network meta-analyses. These methods can also be straightforwardly applied to multivariate meta-regression analyses and to tests for the evaluation of inconsistency. In numerical evaluations via simulations, the proposed methods generally performed well compared with the ordinary restricted maximum likelihood-based inference method. Applications to 2 network meta-analysis datasets are provided.
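The Bartlett-type corrections and the network-meta-analysis framework are beyond a short example, but the parametric-bootstrap idea the abstract relies on can be shown in the simplest univariate case, with invented data: resample whole studies from the fitted random-effects model and re-estimate, then read off percentile limits for the pooled effect. This is a generic illustration, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
y = np.array([0.21, -0.10, 0.35, 0.02, 0.48])   # study effect estimates
v = np.array([0.05, 0.08, 0.04, 0.07, 0.06])    # within-study variances
k = len(y)

def dl_fit(y, v):
    """DerSimonian–Laird tau^2 and pooled random-effects estimate."""
    w = 1 / v
    mu_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fe) ** 2)
    tau2 = max(0.0, (Q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (v + tau2)
    return np.sum(w_re * y) / np.sum(w_re), tau2

mu_hat, tau2_hat = dl_fit(y, v)

# Parametric bootstrap: simulate whole studies from the fitted model,
# re-estimate each time, and use the percentile interval.
B = 5000
boot = np.empty(B)
for b in range(B):
    y_star = rng.normal(mu_hat, np.sqrt(v + tau2_hat), size=k)
    boot[b], _ = dl_fit(y_star, v)

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"estimate {mu_hat:.3f}, parametric bootstrap 95% CI [{lo:.3f}, {hi:.3f}]")
```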

7.
Standard meta-analytic theory assumes that study outcomes are normally distributed with known variances. However, methods derived from this theory are often applied to effect sizes having skewed distributions with estimated variances. Both shortcomings can be largely overcome by first applying a variance stabilizing transformation. Here we concentrate on study outcomes with Student t-distributions and show that we can better estimate parameters of fixed or random effects models with confidence intervals using stable weights or with profile approximate likelihood intervals following stabilization. We achieve even better coverage with a finite sample bias correction. Further, a simple t-interval provides very good coverage of an overall effect size without estimation of the inter-study variance. We illustrate the methodology on two meta-analytic studies from the medical literature, the effect of salt reduction on systolic blood pressure and the effect of opioids for the relief of breathlessness. Substantial simulation studies compare traditional methods with those newly proposed. We can apply the theoretical results to other study outcomes for which an effective variance stabilizer is available. Copyright © 2012 John Wiley & Sons, Ltd.

8.
Recent advances in sequencing technologies have made it possible to explore the influence of rare variants on complex diseases and traits. Meta-analysis is essential to this exploration because large sample sizes are required to detect rare variants. Several methods are available to conduct meta-analysis for rare variants under fixed-effects models, which assume that the genetic effects are the same across all studies. In practice, genetic associations are likely to be heterogeneous among studies because of differences in population composition, environmental factors, phenotype and genotype measurements, or analysis method. We propose random-effects models which allow the genetic effects to vary among studies and develop the corresponding meta-analysis methods for gene-level association tests. Our methods take score statistics, rather than individual participant data, as input and thus can accommodate any study designs and any phenotypes. We produce the random-effects versions of all commonly used gene-level association tests, including burden, variable threshold, and variance-component tests. We demonstrate through extensive simulation studies that our random-effects tests are substantially more powerful than the fixed-effects tests in the presence of moderate and high between-study heterogeneity and achieve similar power to the latter when the heterogeneity is low. The usefulness of the proposed methods is further illustrated with data from the National Heart, Lung, and Blood Institute Exome Sequencing Project (NHLBI ESP). The relevant software is freely available.

9.
We consider random effects meta-analysis where the outcome variable is the occurrence of some event of interest. The data structures handled are those in which each study has one or more groups, and for each group either the number of subjects with and without the event, or the number of events and the total duration of follow-up, is available. Traditionally, the meta-analysis follows the summary measures approach based on the estimates of the outcome measure(s) and the corresponding standard error(s). This approach assumes an approximate normal within-study likelihood and treats the standard errors as known. This approach has several potential disadvantages, such as not accounting for the standard errors being estimated, not accounting for correlation between the estimate and the standard error, the use of an (arbitrary) continuity correction in the case of zero events, and the normal approximation being poor in studies with few events. We show that these problems can be overcome in most cases occurring in practice by replacing the approximate normal within-study likelihood by the appropriate exact likelihood. This leads to a generalized linear mixed model that can be fitted in standard statistical software. For instance, in the case of odds ratio meta-analysis, one can use the non-central hypergeometric distribution likelihood, leading to mixed-effects conditional logistic regression. For incidence rate ratio meta-analysis, it leads to random effects logistic regression with an offset variable. We also present bivariate and multivariate extensions. We present a number of examples, especially with rare events, including an example of network meta-analysis. Copyright © 2010 John Wiley & Sons, Ltd.
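As a minimal illustration of the abstract's central point, namely replacing the approximate normal within-study likelihood with the exact one, the sketch below fits a random-effects model to a single proportion with rare events, using the exact binomial likelihood and Gauss–Hermite quadrature over the random effect. The data are invented, and the hypergeometric (odds ratio) and rate-ratio models named in the abstract are not reproduced; this shows the same principle in its simplest form.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

# r_i ~ Binomial(n_i, p_i), logit(p_i) = mu + b_i, b_i ~ N(0, tau^2)
r = np.array([0, 1, 2, 0, 3])           # events (zeros need no continuity correction)
n = np.array([120, 95, 210, 80, 150])   # sample sizes

nodes, weights = np.polynomial.hermite.hermgauss(30)   # Gauss–Hermite quadrature

def neg_marginal_loglik(theta):
    mu, log_tau = theta
    tau = np.exp(log_tau)
    b = np.sqrt(2.0) * tau * nodes                      # change of variables
    p = expit(mu + b[:, None])                          # quadrature nodes x studies
    log_binom = (gammaln(n + 1) - gammaln(r + 1) - gammaln(n - r + 1)
                 + r * np.log(p) + (n - r) * np.log1p(-p))
    # integrate the exact binomial likelihood over the random effect
    lik_i = (weights[:, None] * np.exp(log_binom)).sum(axis=0) / np.sqrt(np.pi)
    return -np.sum(np.log(lik_i))

fit = minimize(neg_marginal_loglik, x0=[-4.0, np.log(0.5)], method="Nelder-Mead")
mu_hat, tau_hat = fit.x[0], np.exp(fit.x[1])
print(f"pooled proportion {expit(mu_hat):.4f}, between-study SD {tau_hat:.3f}")
```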

10.
Bland–Altman method comparison studies are common in the medical sciences and are used to compare a new measure to a gold-standard (often costlier or more invasive) measure. The distribution of these differences is summarized by two statistics, the 'bias' and standard deviation, and these measures are combined to provide estimates of the limits of agreement (LoA). When these LoA are within the bounds of clinically insignificant differences, the new non-invasive measure is preferred. Very often, multiple Bland–Altman studies have been conducted comparing the same two measures, and random-effects meta-analysis provides a means to pool these estimates. We provide a framework for the meta-analysis of Bland–Altman studies, including methods for estimating the LoA and measures of uncertainty (i.e., confidence intervals). Importantly, these LoA are likely to be wider than those typically reported in Bland–Altman meta-analyses. Frequently, Bland–Altman studies report results based on repeated measures designs but do not properly adjust for this design in the analysis. Meta-analyses of Bland–Altman studies frequently exclude these studies for this reason. We provide a meta-analytic approach that allows inclusion of estimates from these studies. This includes adjustments to the estimate of the standard deviation and a method for pooling the estimates based upon robust variance estimation. An example is included based on a previously published meta-analysis. Copyright © 2017 John Wiley & Sons, Ltd.

11.
In meta-analyses, where a continuous outcome is measured with different scales or standards, the summary statistic is the mean difference standardised to a common metric with a common variance. Where trial treatment is delivered by a person, nesting of patients within care providers leads to clustering that may interact with, or be limited to, one or more of the arms. Assuming a common standardising variance is less tenable and options for scaling the mean difference become numerous. Metrics suggested for cluster-randomised trials are within, between and total variances and, for unequal variances, the control arm or pooled variances. We consider summary measures and individual-patient-data methods for meta-analysing standardised mean differences from trials with two-level nested clustering, relaxing independence and common variance assumptions, allowing sample sizes to differ across arms. A general metric is proposed with comparable interpretation across designs. The relationship between the method of standardisation and choice of model is explored, allowing for bias in the estimator and imprecision in the standardising metric. A meta-analysis of trials of counselling in primary care motivated this work. Assuming equal clustering effects across trials, the proposed random-effects meta-analysis model gave a pooled standardised mean difference of −0.27 (95% CI −0.45 to −0.08) using summary measures and −0.26 (95% CI −0.45 to −0.09) with the individual-patient-data. While treatment-related clustering has rarely been taken into account in trials, it is now recommended that it is considered in trials and meta-analyses. This paper contributes to the uptake of this guidance. Copyright © 2016 John Wiley & Sons, Ltd.

12.
A widely used method in classic random-effects meta-analysis is the DerSimonian–Laird method. An alternative meta-analytical approach is the Hartung–Knapp method. This article reports results of an empirical comparison and a simulation study of these two methods and presents corresponding analytical results. For the empirical evaluation, we took 157 meta-analyses with binary outcomes, analysed each one using both methods and performed a comparison of the results based on treatment estimates, standard errors and associated P-values. In several simulation scenarios, we systematically evaluated coverage probabilities and confidence interval lengths. Generally, results are more conservative with the Hartung–Knapp method, giving wider confidence intervals and larger P-values for the overall treatment effect. However, in some meta-analyses with very homogeneous individual treatment results, the Hartung–Knapp method yields narrower confidence intervals and smaller P-values than the classic random-effects method, which, in this situation, actually reduces to a fixed-effect meta-analysis. Therefore, it is recommended to conduct a sensitivity analysis based on the fixed-effect model instead of solely relying on the result of the Hartung–Knapp random-effects meta-analysis. Copyright © 2016 John Wiley & Sons, Ltd.
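Both procedures named here are standard, so a side-by-side numerical sketch may help. The code below (Python rather than the software used in the article, with invented log-odds-ratio data) computes the classic DerSimonian–Laird interval and the Hartung–Knapp interval for the same random-effects estimate, showing where the extra width usually comes from: the rescaled variance and the t-quantile with k − 1 degrees of freedom.

```python
import numpy as np
from scipy import stats

y = np.array([0.12, -0.31, 0.05, -0.18, -0.42, 0.08])   # study log odds ratios (toy)
v = np.array([0.04, 0.09, 0.02, 0.06, 0.11, 0.03])      # within-study variances
k = len(y)

# Fixed-effect (inverse-variance) quantities and Cochran's Q
w = 1.0 / v
mu_fe = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - mu_fe) ** 2)

# DerSimonian–Laird moment estimate of the between-study variance tau^2
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects pooled estimate
w_re = 1.0 / (v + tau2)
mu_re = np.sum(w_re * y) / np.sum(w_re)

# Classic (DL) Wald interval: normal quantile, SE = sqrt(1 / sum of weights)
se_dl = np.sqrt(1.0 / np.sum(w_re))
ci_dl = mu_re + np.array([-1.0, 1.0]) * stats.norm.ppf(0.975) * se_dl

# Hartung–Knapp: rescaled variance and t-quantile with k-1 degrees of freedom
q_hk = np.sum(w_re * (y - mu_re) ** 2) / ((k - 1) * np.sum(w_re))
se_hk = np.sqrt(q_hk)
ci_hk = mu_re + np.array([-1.0, 1.0]) * stats.t.ppf(0.975, df=k - 1) * se_hk

print(f"DL : {mu_re:.3f}  95% CI [{ci_dl[0]:.3f}, {ci_dl[1]:.3f}]")
print(f"HK : {mu_re:.3f}  95% CI [{ci_hk[0]:.3f}, {ci_hk[1]:.3f}]")
```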

13.
A prognostic factor is any measure that is associated with the risk of future health outcomes in those with existing disease. Often, the prognostic ability of a factor is evaluated in multiple studies. However, meta-analysis is difficult because primary studies often use different methods of measurement and/or different cut-points to dichotomise continuous factors into 'high' and 'low' groups; selective reporting is also common. We illustrate how multivariate random effects meta-analysis models can accommodate multiple prognostic effect estimates from the same study, relating to multiple cut-points and/or methods of measurement. The models account for within-study and between-study correlations, which utilises more information and reduces the impact of unreported cut-points and/or measurement methods in some studies. The applicability of the approach is improved with individual participant data and by assuming a functional relationship between prognostic effect and cut-point to reduce the number of unknown parameters. The models provide important inferential results for each cut-point and method of measurement, including the summary prognostic effect, the between-study variance and a 95% prediction interval for the prognostic effect in new populations. Two applications are presented. The first reveals that, in a multivariate meta-analysis using published results, the Apgar score is prognostic of neonatal mortality but effect sizes are smaller at most cut-points than previously thought. In the second, a multivariate meta-analysis of two methods of measurement provides weak evidence that microvessel density is prognostic of mortality in lung cancer, even when individual participant data are available so that a continuous prognostic trend is examined (rather than cut-points). © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

14.
Statistical inference for analyzing the results from several independent studies on the same quantity of interest has been investigated frequently in recent decades. Typically, any meta-analytic inference requires that the quantity of interest is available from each study together with an estimate of its variability. The current work is motivated by a meta-analysis on comparing two treatments (thoracoscopic and open) of congenital lung malformations in young children. Quantities of interest include continuous end-points such as length of operation or number of chest tube days. As studies only report mean values (and no standard errors or confidence intervals), the question arises of how meta-analytic inference can be developed. We suggest two methods to estimate study-specific variances in such a meta-analysis, where only sample means and sample sizes are available in the treatment arms. A general likelihood ratio test is derived for testing equality of variances in two groups. By means of simulation studies, the bias and estimated standard error of the overall mean difference from both methodologies are evaluated and compared with two existing approaches: complete study analysis only and partial variance information. The performance of the test is evaluated in terms of type I error. Additionally, we illustrate these methods in the meta-analysis on comparing thoracoscopic and open surgery for congenital lung malformations and in a meta-analysis on the change in renal function after kidney donation. Copyright © 2017 John Wiley & Sons, Ltd.

15.
We consider a study-level meta-analysis with a normally distributed outcome variable and possibly unequal study-level variances, where the object of inference is the difference in means between a treatment and control group. A common complication in such an analysis is missing sample variances for some studies. A frequently used approach is to impute the weighted (by sample size) mean of the observed variances (mean imputation). Another approach is to include only those studies with variances reported (complete case analysis). Both mean imputation and complete case analysis are only valid under the missing-completely-at-random assumption, and even then the inverse variance weights produced are not necessarily optimal. We propose a multiple imputation method employing gamma meta-regression to impute the missing sample variances. Our method takes advantage of study-level covariates that may be used to provide information about the missing data. Through simulation studies, we show that multiple imputation, when the imputation model is correctly specified, is superior to competing methods in terms of confidence interval coverage probability and type I error probability when testing a specified group difference. Finally, we describe a similar approach to handling missing variances in cross-over studies. Copyright © 2016 John Wiley & Sons, Ltd.

16.
Gene-by-environment (G × E) interactions are important in explaining the missing heritability and understanding the causation of complex diseases, but a single, moderately sized study often has limited statistical power to detect such interactions. With the increasing need for integrating data and reporting results from multiple collaborative studies or sites, the debate over the choice between mega- and meta-analysis continues. In principle, data from different sites can be integrated at the individual level into a "mega" data set, which can be fit by a joint "mega-analysis." Alternatively, analyses can be done at each site, and results across sites can be combined through a "meta-analysis" procedure without integrating individual level data across sites. Although mega-analysis has been advocated in several recent initiatives, meta-analysis has the advantages of simplicity and feasibility, and has recently led to several important findings in identifying main genetic effects. In this paper, we conducted empirical and simulation studies, using data from a G × E study of lung cancer, to compare mega- and meta-analyses in four commonly used G × E analyses under the scenario that the number of studies is small and sample sizes of individual studies are relatively large. We compared the two data integration approaches in the context of fixed effect models and random effects models separately. Our investigations provide valuable insights into the differences between mega- and meta-analyses in the practice of combining a small number of studies to identify G × E interactions.

17.
Numerous meta-analyses in healthcare research combine results from only a small number of studies, for which the variance representing between-study heterogeneity is estimated imprecisely. A Bayesian approach to estimation allows external evidence on the expected magnitude of heterogeneity to be incorporated. The aim of this paper is to provide tools that improve the accessibility of Bayesian meta-analysis. We present two methods for implementing Bayesian meta-analysis, using numerical integration and importance sampling techniques. Based on 14 886 binary outcome meta-analyses in the Cochrane Database of Systematic Reviews, we derive a novel set of predictive distributions for the degree of heterogeneity expected in 80 settings depending on the outcomes assessed and comparisons made. These can be used as prior distributions for heterogeneity in future meta-analyses. The two methods are implemented in R, for which code is provided. Both methods produce equivalent results to standard but more complex Markov chain Monte Carlo approaches. The priors are derived as log-normal distributions for the between-study variance, applicable to meta-analyses of binary outcomes on the log odds-ratio scale. The methods are applied to two example meta-analyses, incorporating the relevant predictive distributions as prior distributions for between-study heterogeneity. We have provided resources to facilitate Bayesian meta-analysis, in a form accessible to applied researchers, which allow relevant prior information on the degree of heterogeneity to be incorporated. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
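As a rough sketch of the importance-sampling route described here (not the authors' R code, and with placeholder prior parameters rather than the predictive distributions derived from the Cochrane data), the fragment below pools invented log odds ratios under a log-normal prior on the between-study variance and a flat prior on the pooled effect.

```python
import numpy as np

rng = np.random.default_rng(7)
y = np.array([-0.35, -0.10, -0.62, 0.05, -0.28])   # study log odds ratios (toy)
v = np.array([0.06, 0.12, 0.09, 0.15, 0.08])       # within-study variances

# Draw tau^2 from its log-normal prior and use the prior as the proposal, so the
# importance weights are proportional to p(y | tau^2) with mu integrated out
# analytically under the flat prior.
M = 20000
tau2 = np.exp(rng.normal(-2.0, 1.5, size=M))       # hypothetical prior: logN(-2, 1.5^2)

w_iv = 1.0 / (v + tau2[:, None])                   # M x k inverse-variance weights
sw = w_iv.sum(axis=1)
mu_cond = (w_iv * y).sum(axis=1) / sw              # conditional posterior mean of mu
log_w = (0.5 * np.log(w_iv).sum(axis=1) - 0.5 * np.log(sw)
         - 0.5 * (w_iv * (y - mu_cond[:, None]) ** 2).sum(axis=1))
w_is = np.exp(log_w - log_w.max())
w_is /= w_is.sum()

# Posterior draws: resample tau^2 by weight, then draw mu | tau^2, y (normal).
idx = rng.choice(M, size=M, p=w_is)
mu_draws = rng.normal(mu_cond[idx], 1.0 / np.sqrt(sw[idx]))
print("posterior mean of mu:", mu_draws.mean().round(3),
      " 95% credible interval:", np.percentile(mu_draws, [2.5, 97.5]).round(3))
```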

18.
Recently, multivariate random-effects meta-analysis models have received a great deal of attention, despite their greater complexity compared with univariate meta-analyses. One of their advantages is the ability to account for the within-study and between-study correlations. However, the standard inference procedures, such as the maximum likelihood or maximum restricted likelihood inference, require the within-study correlations, which are usually unavailable. In addition, the standard inference procedures suffer from the problem of a singular estimated covariance matrix. In this paper, we propose a pseudolikelihood method to overcome the aforementioned problems. The pseudolikelihood method does not require within-study correlations and is not prone to the singular covariance matrix problem. In addition, it can properly estimate the covariance between pooled estimates for different outcomes, which enables valid inference on functions of pooled estimates, and can be applied to meta-analysis where some studies have outcomes missing completely at random. Simulation studies show that the pseudolikelihood method provides unbiased estimates for functions of pooled estimates, well-estimated standard errors, and confidence intervals with good coverage probability. Furthermore, the pseudolikelihood method is found to maintain high relative efficiency compared with the standard inference with known within-study correlations. We illustrate the proposed method through three meta-analyses for comparison of prostate cancer treatment, for the association between paraoxonase 1 activities and coronary heart disease, and for the association between homocysteine level and coronary heart disease. © 2014 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

19.
Outcome reporting bias (ORB) is recognized as a threat to the validity of both pairwise and network meta-analysis (NMA). In recent years, multivariate meta-analytic methods have been proposed to reduce the impact of ORB in the pairwise setting. These methods have shown that multivariate meta-analysis can reduce bias and increase efficiency of pooled effect sizes. However, it is unknown whether multivariate NMA (MNMA) can similarly reduce the impact of ORB. Additionally, it is quite challenging to implement MNMA because the correlation between treatments and outcomes must be modeled; thus, the dimension of the covariance matrix and the number of components to estimate grow quickly with the number of treatments and the number of outcomes. To determine whether MNMA can reduce the effects of ORB on pooled treatment effect sizes, we present an extensive simulation study of Bayesian MNMA. In these simulations, we show that MNMA reduces the bias of pooled effect sizes under a variety of outcome missingness scenarios, including missing at random and missing not at random. Further, MNMA improves the precision of estimates, producing narrower credible intervals. We demonstrate the applicability of the approach via application of MNMA to a multi-treatment systematic review of randomized controlled trials of anti-depressants for the treatment of depression in older adults.

20.

Background

Motivated by the setting of clinical trials in low back pain, this work investigated statistical methods to identify patient subgroups for which there is a large treatment effect (treatment by subgroup interaction). Statistical tests for interaction are often underpowered. Individual patient data (IPD) meta‐analyses provide a framework with improved statistical power to investigate subgroups. However, conventional approaches to subgroup analyses applied in both a single trial setting and an IPD setting have a number of issues, one of them being that factors used to define subgroups are investigated one at a time. As individuals have multiple characteristics that may be related to response to treatment, alternative exploratory statistical methods are required.

Methods

Tree-based methods are a promising alternative that systematically search the covariate space to identify subgroups defined by multiple characteristics. One such method, SIDES, is described and extended for application in an IPD meta-analysis setting by incorporating fixed-effects and random-effects models to account for between-trial variation. The performance of the proposed extension was assessed using simulation studies. The proposed method was then applied to an IPD low back pain dataset.

Results

The simulation studies found that the extended IPD-SIDES method performed well in detecting subgroups, especially in the presence of large between-trial variation. The IPD-SIDES method identified subgroups with enhanced treatment effect when applied to the low back pain data.

Conclusions

This work proposes an exploratory statistical approach for subgroup analyses applicable in any research discipline where subgroup analyses in an IPD meta-analysis setting are of interest.
