Similar Articles

20 similar articles found.
1.
There are still challenges when meta-analyzing data from studies on diagnostic accuracy. This is mainly due to the bivariate nature of the response, where information on sensitivity and specificity must be summarized while accounting for their correlation within a single trial. In this paper, we propose a new statistical model for the meta-analysis of diagnostic accuracy studies. This model uses beta-binomial distributions for the marginal numbers of true positives and true negatives and links these margins by a bivariate copula distribution. The new model comes with all the features of the current standard model, a bivariate logistic regression model with random effects, but has the additional advantages of a closed-form likelihood function and greater flexibility for the correlation structure of sensitivity and specificity. In a simulation study comparing three copula models and two implementations of the standard model, the Plackett and the Gauss copulas rarely perform worse and frequently perform better than the standard model. For illustration, we use an example from a meta-analysis judging the diagnostic accuracy of telomerase (a urinary tumor marker) for the diagnosis of primary bladder cancer. Copyright © 2013 John Wiley & Sons, Ltd.
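
As a rough illustration of this model class (not the authors' implementation), the sketch below evaluates the log-likelihood contribution of a single study under a Gaussian copula linking two beta-binomial margins; the rectangle probability handles the discreteness of the margins, and a Plackett copula would slot in the same way. All parameter values and the single-study counts are illustrative assumptions.

```python
# A minimal sketch (not the paper's code): Gaussian-copula likelihood for one
# diagnostic study with beta-binomial margins for true positives/negatives.
import numpy as np
from scipy.stats import betabinom, multivariate_normal, norm

def copula_cdf(u, v, rho):
    """Bivariate Gaussian copula C(u, v; rho)."""
    # Clip to avoid infinite normal quantiles at 0 or 1.
    u, v = np.clip([u, v], 1e-12, 1 - 1e-12)
    z = [norm.ppf(u), norm.ppf(v)]
    return multivariate_normal(mean=[0, 0],
                               cov=[[1, rho], [rho, 1]]).cdf(z)

def study_loglik(tp, n_dis, tn, n_nondis, a_se, b_se, a_sp, b_sp, rho):
    """Log-probability of (TP, TN) counts: the copula rectangle probability
    over the two discrete beta-binomial marginal CDFs."""
    F = betabinom(n_dis, a_se, b_se).cdf       # margin for true positives
    G = betabinom(n_nondis, a_sp, b_sp).cdf    # margin for true negatives
    p = (copula_cdf(F(tp), G(tn), rho)
         - copula_cdf(F(tp - 1), G(tn), rho)
         - copula_cdf(F(tp), G(tn - 1), rho)
         + copula_cdf(F(tp - 1), G(tn - 1), rho))
    return np.log(max(p, 1e-300))

# Example: one study with 40 diseased (32 TP) and 60 non-diseased (51 TN).
print(study_loglik(32, 40, 51, 60, a_se=8, b_se=2, a_sp=9, b_sp=2, rho=-0.3))
```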

2.
Meta-analyses of clinical trials often treat the number of patients experiencing a medical event as binomially distributed when individual patient data for fitting standard time-to-event models are unavailable. Assuming identical drop-out time distributions across arms, random censorship, and low proportions of patients with an event, a binomial approach results in a valid test of the null hypothesis of no treatment effect with minimal loss in efficiency compared with time-to-event methods. To deal with differences in follow-up, at the cost of assuming specific distributions for event and drop-out times, we propose a hierarchical multivariate meta-analysis model using the aggregate data likelihood based on the number of cases, fatal cases, and discontinuations in each group, as well as the planned trial duration and group sizes. Such a model also enables exchangeability assumptions about the parameters of the survival distributions, for which these assumptions are more appropriate than for the expected proportion of patients with an event across trials of substantially different lengths. Borrowing information from other trials within a meta-analysis or from historical data is particularly useful for rare-events data. Prior information or exchangeability assumptions also avoid the parameter identifiability problems that arise when using more flexible event and drop-out time distributions than the exponential one. We discuss the derivation of robust historical priors and illustrate the discussed methods using an example. We also compare the proposed approach against other aggregate data meta-analysis methods in a simulation study. Copyright © 2016 John Wiley & Sons, Ltd.
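
A simplified sketch of the kind of aggregate-data likelihood described above, under the stated exponential assumption for event and drop-out times and ignoring the fatal/non-fatal split and the hierarchy across trials; the counts and starting values are illustrative. The cell probabilities follow from competing exponential risks over the planned duration T.

```python
# Hedged sketch: one arm's aggregate-data likelihood under competing
# exponential risks (event rate lam, drop-out rate mu) over duration T.
import numpy as np
from scipy.optimize import minimize

def cell_probs(lam, mu, T):
    """P(event first), P(drop-out first), P(still at risk at T)."""
    tot = lam + mu
    p_any = 1.0 - np.exp(-tot * T)
    return lam / tot * p_any, mu / tot * p_any, np.exp(-tot * T)

def neg_loglik(log_rates, n_event, n_drop, n_total, T):
    lam, mu = np.exp(log_rates)            # log scale keeps rates positive
    p_e, p_d, p_c = cell_probs(lam, mu, T)
    n_comp = n_total - n_event - n_drop    # completers, censored at T
    return -(n_event * np.log(p_e) + n_drop * np.log(p_d)
             + n_comp * np.log(p_c))

# One arm: 200 patients, 1-year planned duration, 12 events, 30 drop-outs.
fit = minimize(neg_loglik, x0=[np.log(0.1), np.log(0.2)],
               args=(12, 30, 200, 1.0), method="Nelder-Mead")
print("event rate, drop-out rate:", np.exp(fit.x))
```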

3.
Statistical inference for analyzing the results from several independent studies on the same quantity of interest has been investigated frequently in recent decades. Typically, any meta-analytic inference requires that the quantity of interest be available from each study together with an estimate of its variability. The current work is motivated by a meta-analysis comparing two treatments (thoracoscopic and open) of congenital lung malformations in young children. Quantities of interest include continuous endpoints such as length of operation or number of chest tube days. As studies only report mean values (and no standard errors or confidence intervals), the question arises of how meta-analytic inference can be developed. We suggest two methods to estimate study-specific variances in such a meta-analysis, where only sample means and sample sizes are available in the treatment arms. A general likelihood ratio test is derived for testing equality of variances in two groups. By means of simulation studies, the bias and estimated standard error of the overall mean difference from both methodologies are evaluated and compared with two existing approaches: complete-study analysis only and partial variance information. The performance of the test is evaluated in terms of type I error. Additionally, we illustrate these methods in the meta-analysis comparing thoracoscopic and open surgery for congenital lung malformations and in a meta-analysis on the change in renal function after kidney donation. Copyright © 2017 John Wiley & Sons, Ltd.
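
For orientation only: the classical normal-theory likelihood ratio test of equal variances in two groups, which assumes group-level variance estimates are available. This is not the authors' more general test for the means-and-sample-sizes-only setting; the inputs are illustrative.

```python
# Classical LRT sketch for H0: sigma1^2 == sigma2^2 (normal samples).
import numpy as np
from scipy.stats import chi2

def lrt_equal_variances(n1, s1_sq, n2, s2_sq):
    """s1_sq, s2_sq: usual unbiased sample variances of the two groups."""
    # ML variance estimates (denominator n rather than n-1)
    v1, v2 = (n1 - 1) / n1 * s1_sq, (n2 - 1) / n2 * s2_sq
    v0 = (n1 * v1 + n2 * v2) / (n1 + n2)          # pooled MLE under H0
    lrt = (n1 + n2) * np.log(v0) - n1 * np.log(v1) - n2 * np.log(v2)
    return lrt, chi2.sf(lrt, df=1)                # asymptotic chi^2(1)

print(lrt_equal_variances(n1=35, s1_sq=4.1, n2=42, s2_sq=9.8))
```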

4.
Simulation studies to evaluate the performance of statistical methods require a well-specified data-generating model. Details of these models are essential for interpreting the results and arriving at proper conclusions. A case in point is random-effects meta-analysis of dichotomous outcomes. We reviewed a number of simulation studies that evaluated approximate normal models for meta-analysis of dichotomous outcomes, and we assessed the data-generating models that were used to generate events for a series of (heterogeneous) trials. We demonstrate that the performance of the statistical methods, as assessed by simulation, differs between the three alternative data-generating models identified, with larger differences apparent in the small-population setting. Our findings are relevant to multilevel binomial models in general.
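
The abstract does not enumerate the review's three data-generating models, so the sketch below is an assumption: it shows three generators commonly contrasted in this literature (fixed baseline risk, random baseline risk, and an effect split symmetrically across both arms), to make concrete how such choices differ.

```python
# Hedged sketch: three plausible generators for heterogeneous binary trials
# (illustrative; the review's exact models may differ).
import numpy as np

rng = np.random.default_rng(1)
expit = lambda x: 1 / (1 + np.exp(-x))

def one_trial(model, theta=0.5, tau=0.3, n=50):
    """Return (events_ctrl, events_trt) for one simulated trial."""
    theta_i = rng.normal(theta, tau)              # trial-specific log-OR
    if model == "fixed_baseline":                 # same control risk everywhere
        logit_c = np.log(0.3 / 0.7)
    elif model == "random_baseline":              # control risk varies by trial
        logit_c = rng.normal(np.log(0.3 / 0.7), 0.5)
    elif model == "symmetric":                    # effect split across both arms
        mu_i = rng.normal(np.log(0.3 / 0.7), 0.5)
        return (rng.binomial(n, expit(mu_i - theta_i / 2)),
                rng.binomial(n, expit(mu_i + theta_i / 2)))
    return (rng.binomial(n, expit(logit_c)),
            rng.binomial(n, expit(logit_c + theta_i)))

for m in ("fixed_baseline", "random_baseline", "symmetric"):
    print(m, [one_trial(m) for _ in range(3)])
```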

5.
Recently, multivariate random-effects meta-analysis models have received a great deal of attention, despite their greater complexity compared with univariate meta-analyses. One of their advantages is the ability to account for within-study and between-study correlations. However, the standard inference procedures, such as maximum likelihood or restricted maximum likelihood inference, require the within-study correlations, which are usually unavailable. In addition, the standard inference procedures suffer from the problem of a singular estimated covariance matrix. In this paper, we propose a pseudolikelihood method to overcome these problems. The pseudolikelihood method does not require within-study correlations and is not prone to the singular covariance matrix problem. In addition, it can properly estimate the covariance between pooled estimates for different outcomes, which enables valid inference on functions of pooled estimates, and can be applied to meta-analyses where some studies have outcomes missing completely at random. Simulation studies show that the pseudolikelihood method provides unbiased estimates for functions of pooled estimates, well-estimated standard errors, and confidence intervals with good coverage probability. Furthermore, the pseudolikelihood method maintains high relative efficiency compared with the standard inference with known within-study correlations. We illustrate the proposed method through three meta-analyses: comparing prostate cancer treatments, on the association between paraoxonase 1 activities and coronary heart disease, and on the association between homocysteine level and coronary heart disease. © 2014 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
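
A heavily simplified sketch of the working-independence idea behind such pseudolikelihood methods: fit each outcome ignoring within-study correlation, then recover the covariance between pooled estimates with a sandwich estimator. Here the between-study variances tau2 are plugged in rather than jointly estimated, an assumption made for brevity; the data are invented.

```python
# Working-independence pooling with a sandwich covariance (simplified).
import numpy as np

def pooled_with_sandwich(y, s2, tau2):
    """y, s2: (studies x outcomes) effect estimates and within-study
    variances; tau2: per-outcome between-study variances (fixed here)."""
    w = 1.0 / (s2 + tau2)                      # independence working weights
    mu = (w * y).sum(axis=0) / w.sum(axis=0)   # per-outcome pooled estimates
    scores = w * (y - mu)                      # per-study score contributions
    A = np.diag(w.sum(axis=0))                 # "bread"
    B = scores.T @ scores                      # "meat": captures correlation
    cov = np.linalg.inv(A) @ B @ np.linalg.inv(A)
    return mu, cov

y = np.array([[0.30, 0.25], [0.10, 0.02], [0.42, 0.33], [0.18, 0.15]])
s2 = np.full_like(y, 0.01)
mu, cov = pooled_with_sandwich(y, s2, tau2=np.array([0.02, 0.02]))
print("pooled:", mu)
print("cov of pooled estimates:\n", cov)
```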

6.
In real life, and somewhat contrary to biostatistical textbook knowledge, the sensitivity and specificity (and not only the predictive values) of diagnostic tests can vary with the underlying prevalence of disease. In meta-analysis of diagnostic studies, accounting for this fact naturally leads to a trivariate expansion of the traditional bivariate logistic regression model with random study effects. In this paper, a new model is proposed using trivariate copulas and beta-binomial marginal distributions for sensitivity, specificity, and prevalence as an expansion of the bivariate model. Two different copulas are used: the trivariate Gaussian copula and a trivariate vine copula based on the bivariate Plackett copula. This model has a closed-form likelihood, so standard software (e.g., SAS PROC NLMIXED) can be used. The results of a simulation study show that the copula models perform at least as well as, and frequently better than, the standard model. The methods are illustrated by two examples. Copyright © 2015 John Wiley & Sons, Ltd.

7.
Many meta-analyses combine results from only a small number of studies, a situation in which the between-study variance is imprecisely estimated when standard methods are applied. Bayesian meta-analysis allows incorporation of external evidence on heterogeneity, providing the potential for more robust inference on the effect size of interest. We present a method for performing Bayesian meta-analysis using data augmentation, in which we represent an informative conjugate prior for the between-study variance by pseudo data and use meta-regression for estimation. To assist in this, we derive predictive inverse-gamma distributions for the between-study variance expected in future meta-analyses. These may serve as priors for heterogeneity in new meta-analyses. In a simulation study, we compare approximate Bayesian methods using meta-regression and pseudo data against fully Bayesian approaches based on importance sampling techniques and Markov chain Monte Carlo (MCMC). We compare the frequentist properties of these Bayesian methods with those of the commonly used frequentist DerSimonian and Laird procedure. The method is implemented in standard statistical software and provides a less complex alternative to standard MCMC approaches. An importance sampling approach produces almost identical results to standard MCMC approaches, and results obtained through meta-regression and pseudo data are very similar. On average, data augmentation provides results closer to MCMC if implemented using restricted maximum likelihood estimation rather than DerSimonian and Laird or maximum likelihood estimation. The methods are applied to real datasets, and an extension to network meta-analysis is described. The proposed method facilitates Bayesian meta-analysis in a way that is accessible to applied researchers. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
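
This is not the paper's data-augmentation trick, but a direct illustration of what an informative inverse-gamma prior on the between-study variance buys: a grid evaluation of the posterior for the normal-normal random-effects model, with the overall effect integrated out under a flat prior. The data and the prior's shape/scale values are illustrative assumptions.

```python
# Grid-based posterior for tau^2 with an inverse-gamma prior (sketch).
import numpy as np
from scipy.stats import invgamma

y  = np.array([0.45, 0.21, 0.30, 0.68, 0.12])   # study effect estimates
s2 = np.array([0.04, 0.03, 0.05, 0.09, 0.02])   # within-study variances

tau2_grid = np.linspace(1e-4, 1.0, 400)
prior = invgamma.pdf(tau2_grid, a=2.0, scale=0.2)   # illustrative prior

# Likelihood of tau^2 with mu integrated out analytically (flat prior on mu).
post = np.empty_like(tau2_grid)
for k, t2 in enumerate(tau2_grid):
    w = 1.0 / (s2 + t2)
    mu_hat = (w * y).sum() / w.sum()
    loglik = 0.5 * (np.log(w).sum() - np.log(w.sum())
                    - (w * (y - mu_hat) ** 2).sum())
    post[k] = np.exp(loglik) * prior[k]

dx = tau2_grid[1] - tau2_grid[0]
post /= post.sum() * dx                              # normalize density
print("posterior mean of tau^2:", (tau2_grid * post).sum() * dx)
```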

8.
This paper investigates a likelihood-based approach to meta-analysis of clinical trials involving the baseline risk as an explanatory variable. The approach takes account of the errors affecting the measurement of either the treatment effect or the baseline risk, while guarding against potential misspecification of the baseline risk distribution. To this aim, we suggest modelling the baseline risk through a flexible family of distributions represented by the skew-normal. We describe how to carry out inference within this framework and evaluate the performance of the approach through simulation. The method is compared with the routine likelihood approach based on the restrictive normality assumption for the baseline risk distribution and with weighted least-squares regression. We apply the competing approaches to the analysis of two published datasets. Copyright © 2012 John Wiley & Sons, Ltd.

9.
Fixed-effects meta-analysis has been criticized because the assumption of homogeneity is often unrealistic and can result in underestimation of parameter uncertainty. Random-effects meta-analysis and meta-regression are therefore typically used to accommodate explained and unexplained between-study variability. However, it is not unusual to obtain a boundary estimate of zero for the (residual) between-study standard deviation, resulting in fixed-effects estimates of the other parameters and their standard errors. To avoid such boundary estimates, we suggest using Bayes modal (BM) estimation with a gamma prior on the between-study standard deviation. When no prior information is available regarding the magnitude of the between-study standard deviation, a weakly informative default prior can be used (with shape parameter 2 and rate parameter close to 0) that produces positive estimates but does not overrule the data, leading to only a small decrease in the log-likelihood from its maximum. We review the most commonly used estimation methods for meta-analysis and meta-regression, including classical and Bayesian methods, and apply these methods, as well as our BM estimator, to real datasets. We then perform simulations to compare BM estimation with the other methods and find that BM estimation performs well by (i) avoiding boundary estimates; (ii) having smaller root mean squared error for the between-study standard deviation; and (iii) giving better coverage for the overall effects than the other methods when the true model has at least a small or moderate amount of unexplained heterogeneity. Copyright © 2013 John Wiley & Sons, Ltd.
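
A minimal sketch of the idea: maximize the profile log-likelihood for the between-study standard deviation plus the log of the Gamma(2, rate ≈ 0) prior density; the penalty behaves like log(tau) near zero, pushing the mode off the boundary. Profile ML is used for brevity (the paper also considers other base estimators), and the data are invented.

```python
# Bayes modal (penalized likelihood) sketch with a Gamma(2, rate) prior.
import numpy as np
from scipy.optimize import minimize_scalar

y  = np.array([0.12, 0.15, 0.10, 0.13, 0.11])    # very homogeneous effects
s2 = np.array([0.02, 0.03, 0.02, 0.04, 0.03])

def profile_loglik(tau):
    w = 1.0 / (s2 + tau ** 2)
    mu_hat = (w * y).sum() / w.sum()
    return 0.5 * (np.log(w).sum() - (w * (y - mu_hat) ** 2).sum())

def neg_log_posterior(tau, shape=2.0, rate=1e-4):
    log_prior = (shape - 1) * np.log(tau) - rate * tau   # Gamma kernel
    return -(profile_loglik(tau) + log_prior)

ml = minimize_scalar(lambda t: -profile_loglik(t), bounds=(0, 2),
                     method="bounded")
bm = minimize_scalar(neg_log_posterior, bounds=(1e-8, 2), method="bounded")
print("ML tau:", ml.x, " Bayes modal tau:", bm.x)   # ML often hits ~0 here
```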

10.
Treatment effects for multiple outcomes can be meta-analyzed separately or jointly, but no systematic empirical comparison of the two approaches exists. From the Cochrane Library of Systematic Reviews, we identified 45 reviews, including 1473 trials and 258,675 patients, that contained two or three univariate meta-analyses of categorical outcomes for the same interventions that could also be analyzed jointly. Eligible were meta-analyses with at least seven trials reporting all outcomes for which the cross-classification tables were exactly recoverable (e.g., outcomes were mutually exclusive, or one was a subset of the other). This ensured known correlation structures. Outcomes in 40 reviews had an is-subset-of relationship, and those in 5 were mutually exclusive. We analyzed these data with univariate and multivariate models based on discrete and approximate likelihoods. Discrete models were fit in the Bayesian framework using slightly informative priors. The summary effects for each outcome were similar with univariate and multivariate meta-analyses (using both the approximate and discrete likelihoods); however, the multivariate model with the discrete likelihood gave smaller between-study variance estimates and narrower predictive intervals for new studies. When differences in the summary treatment effects were examined, the multivariate models gave similar summary estimates but considerably longer (shorter) uncertainty intervals because of positive (negative) correlation between outcome treatment effects. It is unclear whether any of the examined reviews would change their overall conclusions based on multivariate versus univariate meta-analyses, because extra-analytical and context-specific considerations contribute to conclusions and, secondarily, because the numerical differences were often modest. Copyright © 2013 John Wiley & Sons, Ltd.

11.
Making inferences about the average treatment effect using the random-effects model for meta-analysis is problematic in the common situation where there is a small number of studies. This is because estimates of the between-study variance are not precise enough to accurately apply the conventional methods for testing and deriving a confidence interval for the average effect. We have found that a refined method for univariate meta-analysis, which applies a scaling factor to the estimated effects' standard error, provides more accurate inference. We explain how to extend this method to the multivariate scenario and show that our proposal for refined multivariate meta-analysis and meta-regression can provide more accurate inferences than the more conventional approach. We explain how our proposed approach can be implemented using standard output from multivariate meta-analysis software packages and apply our methodology to two real examples. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.

12.
Multivariate meta-analysis is increasingly used in medical statistics. In the univariate setting, the non-iterative method proposed by DerSimonian and Laird is a simple and now standard way of performing random-effects meta-analyses. We propose a natural and easily implemented multivariate extension of this procedure which is accessible to applied researchers and provides a much less computationally intensive alternative to existing methods. In a simulation study, the proposed procedure performs similarly, in almost all respects, to the more established iterative restricted maximum likelihood approach. The method is applied to some real datasets, and an extension to multivariate meta-regression is described. Copyright © 2009 John Wiley & Sons, Ltd.
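
For reference, the univariate building block that the paper extends: the standard DerSimonian-Laird moment estimator in its textbook form (the multivariate extension itself is not reproduced here). The example data are invented.

```python
# Standard univariate DerSimonian-Laird random-effects meta-analysis.
import numpy as np

def dersimonian_laird(y, s2):
    """y: study effect estimates; s2: their within-study variances."""
    w = 1.0 / s2                                   # fixed-effect weights
    mu_fe = (w * y).sum() / w.sum()
    Q = (w * (y - mu_fe) ** 2).sum()               # Cochran's Q
    df = len(y) - 1
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (Q - df) / c)                  # truncated moment estimate
    w_re = 1.0 / (s2 + tau2)                       # random-effects weights
    mu = (w_re * y).sum() / w_re.sum()
    return mu, np.sqrt(1.0 / w_re.sum()), tau2

y  = np.array([0.36, 0.12, 0.45, 0.26, 0.08, 0.51])
s2 = np.array([0.04, 0.02, 0.06, 0.03, 0.02, 0.07])
print(dersimonian_laird(y, s2))                    # (mu, se(mu), tau^2)
```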

13.
Standard methods for fixed-effects meta-analysis assume that the standard errors of the study-specific estimates are known, not estimated. While the impact of this simplifying assumption has been shown in a few special cases, its general impact is not well understood, nor are general-purpose tools available for inference under more realistic assumptions. In this paper, we aim to elucidate the impact of using estimated standard errors in fixed-effects meta-analysis, showing why it does not go away in large samples and quantifying how badly miscalibrated standard inference will be if it is ignored. We also show the important role of a particular measure of heterogeneity in this miscalibration. These developments lead to confidence intervals for fixed-effects meta-analysis with improved performance for both location and scale parameters.
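
A toy demonstration of the phenomenon under an assumed setup (not the paper's examples): fixed-effect meta-analysis of mean differences where each study's variance is estimated from small per-arm samples. Nominal 95% intervals under-cover even with many studies, because the inverse-variance weights are noisy.

```python
# Coverage check: estimated vs known standard errors in fixed-effect MA.
import numpy as np

rng = np.random.default_rng(7)
K, n, sigma, reps = 40, 10, 1.0, 4000       # many studies, tiny samples
cover = 0
for _ in range(reps):
    x = rng.normal(0.0, sigma, (K, n))      # control arms
    z = rng.normal(0.2, sigma, (K, n))      # treatment arms, true effect 0.2
    d = z.mean(axis=1) - x.mean(axis=1)
    v_hat = (x.var(axis=1, ddof=1) + z.var(axis=1, ddof=1)) / n  # estimated!
    w = 1.0 / v_hat
    mu = (w * d).sum() / w.sum()
    se = np.sqrt(1.0 / w.sum())             # treats v_hat as known
    cover += abs(mu - 0.2) < 1.96 * se
print("empirical coverage of nominal 95% CI:", cover / reps)  # below 0.95
```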

14.
In network meta-analyses that synthesize direct and indirect comparison evidence concerning multiple treatments, multivariate random-effects models have been routinely used for addressing between-study heterogeneity. Although their standard inference methods depend on large-sample approximations (e.g., restricted maximum likelihood estimation) in the number of trials synthesized, the numbers of trials are often moderate or small. In these situations, standard estimators cannot be expected to behave in accordance with asymptotic theory; in particular, confidence intervals cannot be assumed to exhibit their nominal coverage probabilities (likewise, the type I error probabilities of the corresponding tests are not retained). This invalidity issue may seriously influence the overall conclusions of network meta-analyses. In this article, we develop several improved inference methods for network meta-analyses to resolve these problems. We first introduce two efficient likelihood-based inference methods, the likelihood ratio test-based and efficient score test-based methods, in a general framework of network meta-analysis. Then, to improve the small-sample inferences, we develop improved higher-order asymptotic methods using Bartlett-type corrections and bootstrap adjustment methods. The proposed methods adopt Monte Carlo approaches using parametric bootstraps to effectively circumvent complicated analytical calculations of case-by-case analyses and to permit flexible application to various statistical models for network meta-analyses. These methods can also be straightforwardly applied to multivariate meta-regression analyses and to tests for the evaluation of inconsistency. In numerical evaluations via simulations, the proposed methods generally performed well compared with the ordinary restricted maximum likelihood-based inference method. Applications to two network meta-analysis datasets are provided.

15.
Control rate regression is a widely used approach to account for heterogeneity among studies in meta-analysis by including information about the outcome risk of patients in the control condition. Correcting for the presence of measurement error affecting risk information in the treated and in the control group has been recognized as a necessary step to derive reliable inferential conclusions. Within this framework, the paper considers the problem of small sample size as an additional source of misleading inference about the slope of the control rate regression. Likelihood procedures relying on first-order approximations are shown to be substantially inaccurate, especially when dealing with increasing heterogeneity and correlated measurement errors. We suggest addressing the problem by relying on higher-order asymptotics. In particular, we derive Skovgaard's statistic as an instrument to improve the accuracy of the approximation of the signed profile log-likelihood ratio statistic to the standard normal distribution. The proposal is shown to provide much more accurate results than standard likelihood solutions, with no appreciable computational effort. The advantages of Skovgaard's statistic in control rate regression are shown in a series of simulation experiments and illustrated in a real data example. R code for applying the first- and second-order statistics for inference on the slope of the control rate regression is provided.

16.
Multilevel mixed-effects survival models are used in the analysis of clustered survival data, such as repeated events, multicenter clinical trials, and individual participant data (IPD) meta-analyses, to investigate heterogeneity in baseline risk and covariate effects. In this paper, we extend parametric frailty models, including the exponential, Weibull, and Gompertz proportional hazards (PH) models and the log-logistic, log-normal, and generalized gamma accelerated failure time models, to allow any number of normally distributed random effects. Furthermore, we extend the flexible parametric survival model of Royston and Parmar, modeled on the log-cumulative hazard scale using restricted cubic splines, to include random effects while also allowing for non-PH (time-dependent) effects. Maximum likelihood is used to estimate the models, utilizing adaptive or nonadaptive Gauss–Hermite quadrature. The methods are evaluated through simulation studies representing clinically plausible scenarios of a multicenter trial and an IPD meta-analysis, showing good performance of the estimation method. The flexible parametric mixed-effects model is illustrated using a dataset of patients with kidney disease and repeated times to infection and an IPD meta-analysis of prognostic factor studies in patients with breast cancer. User-friendly Stata software is provided to implement the methods. Copyright © 2014 John Wiley & Sons, Ltd.
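
A minimal sketch of the estimation machinery for the simplest member of this family: an exponential PH model with one normally distributed random intercept per cluster, with the marginal likelihood evaluated by nonadaptive Gauss-Hermite quadrature. The simulated data, node count, and starting values are illustrative; the paper's models are far more general.

```python
# Exponential PH model with a normal random intercept, fitted by
# maximizing a Gauss-Hermite-quadrature marginal likelihood.
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n_clusters, n_per = 30, 20
b_true = rng.normal(0, 0.5, n_clusters)                 # cluster frailties
clus = np.repeat(np.arange(n_clusters), n_per)
t = rng.exponential(1.0 / np.exp(-0.7 + b_true[clus]))  # true log-hazard -0.7
cens = rng.exponential(2.0, t.shape)                    # random censoring
time, delta = np.minimum(t, cens), (t <= cens).astype(float)

nodes, weights = hermgauss(15)                          # 15 quadrature nodes

def neg_marginal_loglik(par):
    beta0, log_sigma = par
    sigma = np.exp(log_sigma)
    ll = 0.0
    for c in range(n_clusters):
        tc, dc = time[clus == c], delta[clus == c]
        b = np.sqrt(2.0) * sigma * nodes                # transformed nodes
        lp = beta0 + b[:, None]                         # log-hazard per node
        # exponential log-likelihood at each node, summed over subjects
        cluster_ll = (dc * lp - tc * np.exp(lp)).sum(axis=1)
        ll += np.log((weights * np.exp(cluster_ll)).sum()
                     / np.sqrt(np.pi) + 1e-300)
    return -ll

fit = minimize(neg_marginal_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
print("beta0, sigma:", fit.x[0], np.exp(fit.x[1]))      # near -0.7 and 0.5
```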

17.
Outcome reporting bias (ORB) is recognized as a threat to the validity of both pairwise and network meta-analysis (NMA). In recent years, multivariate meta-analytic methods have been proposed to reduce the impact of ORB in the pairwise setting. These methods have shown that multivariate meta-analysis can reduce bias and increase the efficiency of pooled effect sizes. However, it is unknown whether multivariate NMA (MNMA) can similarly reduce the impact of ORB. Additionally, MNMA is quite challenging to implement because correlation between treatments and outcomes must be modeled; thus, the dimension of the covariance matrix and the number of components to estimate grow quickly with the number of treatments and the number of outcomes. To determine whether MNMA can reduce the effects of ORB on pooled treatment effect sizes, we present an extensive simulation study of Bayesian MNMA. Via simulation studies, we show that MNMA reduces the bias of pooled effect sizes under a variety of outcome missingness scenarios, including missing at random and missing not at random. Further, MNMA improves the precision of estimates, producing narrower credible intervals. We demonstrate the applicability of the approach via application of MNMA to a multi-treatment systematic review of randomized controlled trials of antidepressants for the treatment of depression in older adults.

18.
Meta-analysis is now an essential tool for genetic association studies, allowing them to combine large studies and greatly accelerating the pace of genetic discovery. Although the standard meta-analysis methods perform equivalently to the more cumbersome joint analysis under ideal settings, they result in substantial power loss under unbalanced settings with various case–control ratios. Here, we investigate the power loss problem of the standard meta-analysis methods for unbalanced studies, and further propose novel meta-analysis methods performing equivalently to the joint analysis under both balanced and unbalanced settings. We derive improved meta-score-statistics that can accurately approximate the joint-score-statistics with combined individual-level data, for both linear and logistic regression models, with and without covariates. In addition, we propose a novel approach to adjust for population stratification by correcting for known population structures through minor allele frequencies. In simulated gene-level association studies under unbalanced settings, our method recovered up to 85% of the power loss caused by the standard methods. We further show the power gain of our methods in gene-level tests with 26 unbalanced studies of age-related macular degeneration. In addition, we take the meta-analysis of three unbalanced studies of type 2 diabetes as an example to discuss the challenges of meta-analyzing multi-ethnic samples. In summary, our improved meta-score-statistics with corrections for population stratification can be used to construct both single-variant and gene-level association studies, providing a useful framework for ensuring well-powered, convenient, cross-study analyses.

19.
Quantitative evidence synthesis through meta-analysis is central to evidence-based medicine. For well-documented reasons, the meta-analysis of individual patient data is held in higher regard than that of aggregate data. With access to individual patient data, the analysis is not restricted to a "two-stage" approach (combining estimates and standard errors) but can estimate parameters of interest by fitting a single model to all of the data, a so-called "one-stage" analysis. There has been debate about the merits of one- and two-stage analysis. Arguments for one-stage analysis have typically noted that a wider range of models can be fitted and overall estimates may be more precise. The two-stage side has emphasised that the models that can be fitted in two stages are sufficient to answer the relevant questions, with less scope for mistakes because there are fewer modelling choices to be made in the two-stage approach. For Gaussian data, we consider the statistical arguments for flexibility and precision in small-sample settings. Regarding flexibility, several of the models that can be fitted only in one stage may not be of serious interest to most meta-analysis practitioners. Regarding precision, we consider fixed- and random-effects meta-analysis and see that, for a model making certain assumptions, the number of stages used to fit this model is irrelevant; the precision will be approximately equal. Meta-analysts should choose modelling assumptions carefully. Sometimes relevant models can only be fitted in one stage. Otherwise, meta-analysts are free to use whichever procedure is most convenient to fit the identified model.
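
A toy illustration of the approximate-equivalence point for Gaussian data, under assumed simulated IPD with a common treatment effect: a two-stage fit (per-study OLS, then inverse-variance pooling) and a one-stage fit (single regression with study-specific intercepts) give nearly identical answers. All names and data are invented for the sketch.

```python
# Two-stage vs one-stage fixed-effect IPD meta-analysis (Gaussian outcome).
import numpy as np

rng = np.random.default_rng(11)
n_stud = 6
est, var, X_rows, y_all = [], [], [], []
for i in range(n_stud):
    n = rng.integers(20, 60)
    trt = rng.integers(0, 2, n).astype(float)
    y = 0.5 + 0.3 * trt + rng.normal(0, 1, n) + 0.2 * i  # study intercepts
    # stage 1: per-study OLS estimate of the treatment effect
    beta = np.cov(trt, y, ddof=1)[0, 1] / trt.var(ddof=1)
    resid = y - y.mean() - beta * (trt - trt.mean())
    se2 = resid.var(ddof=2) / (n * trt.var(ddof=0))      # var of the slope
    est.append(beta); var.append(se2)
    # rows for the one-stage design matrix: study dummies + treatment
    Xi = np.zeros((n, n_stud + 1)); Xi[:, i] = 1.0; Xi[:, -1] = trt
    X_rows.append(Xi); y_all.append(y)

# stage 2: inverse-variance pooling of the per-study estimates
w = 1.0 / np.array(var)
print("two-stage:", (w * np.array(est)).sum() / w.sum())

# one stage: a single OLS fit to all of the data
X, Y = np.vstack(X_rows), np.concatenate(y_all)
print("one-stage:", np.linalg.lstsq(X, Y, rcond=None)[0][-1])
```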

20.
A widely used method in classic random-effects meta-analysis is the DerSimonian–Laird method. An alternative meta-analytical approach is the Hartung–Knapp method. This article reports the results of an empirical comparison and a simulation study of these two methods and presents corresponding analytical results. For the empirical evaluation, we took 157 meta-analyses with binary outcomes, analysed each one using both methods, and compared the results based on treatment estimates, standard errors, and associated P-values. In several simulation scenarios, we systematically evaluated coverage probabilities and confidence interval lengths. Generally, results are more conservative with the Hartung–Knapp method, giving wider confidence intervals and larger P-values for the overall treatment effect. However, in some meta-analyses with very homogeneous individual treatment results, the Hartung–Knapp method yields narrower confidence intervals and smaller P-values than the classic random-effects method, which in this situation actually reduces to a fixed-effect meta-analysis. Therefore, it is recommended to conduct a sensitivity analysis based on the fixed-effect model instead of solely relying on the result of the Hartung–Knapp random-effects meta-analysis. Copyright © 2016 John Wiley & Sons, Ltd.
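
A compact sketch of the two intervals being compared, in one common variant: both use the DerSimonian–Laird tau² estimate, but Hartung–Knapp rescales the variance from the residual weighted sum of squares and uses a t quantile with k − 1 degrees of freedom. The example data are invented and chosen to be very homogeneous, reproducing the situation flagged in the abstract where the Hartung–Knapp interval can be the narrower one.

```python
# DerSimonian-Laird vs Hartung-Knapp confidence intervals (sketch).
import numpy as np
from scipy.stats import norm, t

def dl_and_hk(y, s2, level=0.95):
    k = len(y)
    w = 1.0 / s2
    mu_fe = (w * y).sum() / w.sum()
    Q = (w * (y - mu_fe) ** 2).sum()
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (Q - (k - 1)) / c)             # DL moment estimate
    w = 1.0 / (s2 + tau2)                          # random-effects weights
    mu = (w * y).sum() / w.sum()
    # classic DL: normal quantile, se from the weights
    se_dl = np.sqrt(1.0 / w.sum())
    z = norm.ppf(0.5 + level / 2)
    # Hartung-Knapp: rescaled variance, t quantile with k-1 df
    se_hk = np.sqrt((w * (y - mu) ** 2).sum() / ((k - 1) * w.sum()))
    q = t.ppf(0.5 + level / 2, df=k - 1)
    return (mu - z * se_dl, mu + z * se_dl), (mu - q * se_hk, mu + q * se_hk)

y  = np.array([0.10, 0.11, 0.09, 0.10, 0.12])      # very homogeneous trials
s2 = np.array([0.05, 0.04, 0.06, 0.05, 0.04])
print(*dl_and_hk(y, s2), sep="\n")                 # HK can be narrower here
```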
