Similar articles
20 similar articles found.
1.
Many meta-analyses combine results from only a small number of studies, a situation in which the between-study variance is imprecisely estimated when standard methods are applied. Bayesian meta-analysis allows incorporation of external evidence on heterogeneity, providing the potential for more robust inference on the effect size of interest. We present a method for performing Bayesian meta-analysis using data augmentation, in which we represent an informative conjugate prior for between-study variance by pseudo data and use meta-regression for estimation. To assist in this, we derive predictive inverse-gamma distributions for the between-study variance expected in future meta-analyses. These may serve as priors for heterogeneity in new meta-analyses. In a simulation study, we compare approximate Bayesian methods using meta-regression and pseudo data against fully Bayesian approaches based on importance sampling techniques and Markov chain Monte Carlo (MCMC). We compare the frequentist properties of these Bayesian methods with those of the commonly used frequentist DerSimonian and Laird procedure. The method is implemented in standard statistical software and provides a less complex alternative to standard MCMC approaches. An importance sampling approach produces almost identical results to standard MCMC approaches, and results obtained through meta-regression and pseudo data are very similar. On average, data augmentation gives results closer to MCMC when implemented using restricted maximum likelihood estimation rather than the DerSimonian and Laird or maximum likelihood estimators. The methods are applied to real datasets, and an extension to network meta-analysis is described. The proposed method facilitates Bayesian meta-analysis in a way that is accessible to applied researchers. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
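For reference, the frequentist comparator mentioned in this abstract is the DerSimonian and Laird procedure. The following is a minimal Python sketch of that moment estimator and the resulting random-effects pooled estimate, with made-up data; it is not the authors' data-augmentation method.

```python
import numpy as np

def dersimonian_laird(y, v):
    """DerSimonian-Laird random-effects meta-analysis.

    y : study effect estimates (e.g. log odds ratios)
    v : within-study variances
    Returns the pooled effect, its standard error, and tau^2.
    """
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                   # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)             # fixed-effect pooled mean
    Q = np.sum(w * (y - mu_fe) ** 2)              # Cochran's Q
    k = len(y)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)            # moment estimate of between-study variance
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    mu_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    return mu_re, se_re, tau2

# Hypothetical log odds ratios and variances from five small trials
mu, se, tau2 = dersimonian_laird([0.2, -0.1, 0.35, 0.05, 0.5],
                                 [0.04, 0.09, 0.05, 0.12, 0.06])
```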

2.
The variable life-adjusted display (VLAD) is the first risk-adjusted graphical procedure proposed in the literature for monitoring the performance of a surgeon. It displays the cumulative sum of expected minus observed deaths. It has since become highly popular because the statistic plotted is easy to understand. But it is also easy to misinterpret a surgeon's performance by using the VLAD, potentially leading to grave consequences. The problem of misinterpretation is essentially caused by the variance of the VLAD's statistic, which increases with sample size. For the VLAD to be truly useful, a simple signaling rule is desperately needed. Various forms of signaling rules have been developed, but they are usually quite complicated. Without signaling rules, making inferences using the VLAD alone is difficult if not misleading. In this paper, we establish an equivalence between a VLAD with V-mask and a risk-adjusted cumulative sum (RA-CUSUM) chart based on the difference between the estimated probability of death and the surgical outcome. Average run length analysis based on simulation shows that this particular RA-CUSUM chart performs similarly to the established RA-CUSUM chart based on the log-likelihood ratio statistic obtained by testing the odds ratio of death. We provide a simple design procedure for determining the V-mask parameters based on a resampling approach. Resampling from a real data set ensures that these parameters can be estimated appropriately. Finally, we illustrate the monitoring of a real surgeon's performance using the VLAD with V-mask.
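Since the VLAD is defined simply as the running total of expected minus observed deaths, a minimal Python sketch with made-up risk-adjusted predictions and outcomes is:

```python
import numpy as np

def vlad(p_death, died):
    """Variable life-adjusted display: cumulative expected minus observed deaths.

    p_death : risk-adjusted predicted probability of death for each operation
    died    : observed outcome (1 = death, 0 = survival), in operation order
    """
    p_death, died = np.asarray(p_death, float), np.asarray(died, float)
    return np.cumsum(p_death - died)   # rises with better-than-expected outcomes

# Hypothetical sequence of six operations
curve = vlad([0.10, 0.05, 0.20, 0.08, 0.15, 0.30],
             [0,    0,    1,    0,    0,    1])
```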

3.
Economic evaluation is often seen as a branch of health economics divorced from mainstream econometric techniques. Instead, it is perceived as relying on statistical methods for clinical trials. Furthermore, the statistic of interest in cost-effectiveness analysis, the incremental cost-effectiveness ratio, is not amenable to regression-based methods, hence the traditional reliance on comparing aggregate measures across the arms of a clinical trial. In this paper, we explore the potential for health economists undertaking cost-effectiveness analysis to exploit the plethora of established econometric techniques through the use of the net-benefit framework, a recently suggested reformulation of the cost-effectiveness problem that avoids the reliance on cost-effectiveness ratios and their associated statistical problems. This allows the cost-effectiveness problem to be formulated within a standard regression-type framework. We provide an example with empirical data to illustrate how a regression-type framework can enhance the net-benefit method. We go on to suggest that the practical advantages of the net-benefit regression approach include being able to use established econometric techniques, adjust for imperfect randomisation, and identify important subgroups in order to estimate the marginal cost-effectiveness of an intervention.
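A minimal numpy sketch of the basic idea follows, with hypothetical per-patient costs and effects and an assumed willingness-to-pay threshold; in practice covariates would be appended to the design matrix, as the abstract suggests, to adjust for imperfect randomisation.

```python
import numpy as np

def net_benefit_regression(cost, effect, treat, wtp):
    """Per-patient net benefit NB_i = wtp * effect_i - cost_i, regressed on treatment.

    The coefficient on `treat` estimates the incremental net benefit at the
    willingness-to-pay threshold `wtp`.
    """
    nb = wtp * np.asarray(effect, float) - np.asarray(cost, float)
    X = np.column_stack([np.ones(len(nb)), np.asarray(treat, float)])
    beta, *_ = np.linalg.lstsq(X, nb, rcond=None)   # ordinary least squares
    return beta[1]                                   # incremental net benefit

# Hypothetical trial data: costs, QALYs, treatment arm, threshold of 20,000 per QALY
inb = net_benefit_regression(cost=[1200, 900, 1500, 1100],
                             effect=[0.60, 0.55, 0.72, 0.64],
                             treat=[0, 0, 1, 1],
                             wtp=20000)
```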

4.
Measurement error is common in epidemiological and biomedical studies. When biomarkers are measured in batches or groups, measurement error is potentially correlated within each batch or group. In regression analysis, most existing methods are not applicable in the presence of batch-specific measurement error in predictors. We propose a robust conditional likelihood approach to account for batch-specific error in predictors when the batch effect is additive and the predominant source of error; the approach requires no assumptions on the distribution of the measurement error. Although a regression model with batch as a categorical covariate yields the same parameter estimates as the proposed conditional likelihood approach for linear regression, this result does not hold in general for all generalized linear models, in particular logistic regression. Our simulation studies show that the conditional likelihood approach achieves better finite sample performance than the regression calibration approach or a naive approach without adjustment for measurement error. In the case of logistic regression, our proposed approach is shown to also outperform the regression approach with batch as a categorical covariate. In addition, we examine a 'hybrid' approach combining the conditional likelihood method and the regression calibration method, which is shown in simulations to achieve good performance in the presence of both batch-specific and measurement-specific errors. We illustrate our method by using data from a colorectal adenoma study. Copyright © 2012 John Wiley & Sons, Ltd.
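For the linear-regression case, the abstract notes that conditioning out an additive batch effect is equivalent to including batch as a categorical covariate. A minimal Python illustration of that equivalence via within-batch centering, with made-up data, is shown below; it is not the authors' general conditional-likelihood method for other generalized linear models.

```python
import numpy as np
import pandas as pd

def within_batch_slope(df, y="y", x="x_measured", batch="batch"):
    """Regress y on x after centering both within batch.

    With an additive batch-specific error in x, this slope is numerically
    identical to the slope from OLS of y on x plus batch dummy variables,
    so the batch effect is conditioned out of the estimate.
    """
    yc = df[y] - df.groupby(batch)[y].transform("mean")
    xc = df[x] - df.groupby(batch)[x].transform("mean")
    return np.sum(xc * yc) / np.sum(xc ** 2)   # slope from centered data

# Hypothetical biomarker data measured in two batches
df = pd.DataFrame({"y": [1.0, 1.4, 2.1, 2.0, 2.8, 3.1],
                   "x_measured": [0.9, 1.2, 1.8, 2.4, 3.0, 3.5],
                   "batch": [1, 1, 1, 2, 2, 2]})
slope = within_batch_slope(df)
```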

5.
Measures that quantify the impact of heterogeneity in univariate meta-analysis, including the very popular I2 statistic, are now well established. Multivariate meta-analysis, where studies provide multiple outcomes that are pooled in a single analysis, is also becoming more commonly used. The question of how to quantify heterogeneity in the multivariate setting is therefore raised. It is the univariate R2 statistic, the ratio of the variance of the estimated treatment effect under the random- and fixed-effects models, that generalises most naturally, so this statistic provides our basis. It is then used to derive a multivariate analogue of I2. We also provide a multivariate H2 statistic, the ratio of a generalisation of Cochran's heterogeneity statistic and its associated degrees of freedom, with an accompanying generalisation of the usual I2 statistic. Our proposed heterogeneity statistics can be used alongside all the usual estimates and inferential procedures used in multivariate meta-analysis. We apply our methods to some real datasets and show how our statistics are equally appropriate in the context of multivariate meta-regression, where study-level covariate effects are included in the model. Our heterogeneity statistics may be used when applying any procedure for fitting the multivariate random effects model. Copyright © 2012 John Wiley & Sons, Ltd.
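For orientation, the standard univariate quantities that these multivariate statistics generalise are (standard Higgins–Thompson notation, not taken from the paper itself; Q is Cochran's heterogeneity statistic and k is the number of studies):

```latex
H^2 = \frac{Q}{k-1}, \qquad
I^2 = \frac{Q-(k-1)}{Q} = 1 - \frac{1}{H^2} \quad (\text{truncated at } 0), \qquad
R^2 = \frac{\operatorname{Var}(\hat{\mu}_{\mathrm{RE}})}{\operatorname{Var}(\hat{\mu}_{\mathrm{FE}})},
```

where the R2 ratio compares the variance of the pooled estimate under the random-effects and fixed-effects models.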

6.
A. Guolo, Statistics in Medicine, 2014, 33(12): 2062-2076
This paper investigates the use of SIMEX, a simulation-based measurement error correction technique, for meta-analysis of studies involving the baseline risk of subjects in the control group as explanatory variable. The approach accounts for the measurement error affecting the information about either the outcome in the treatment group or the baseline risk available from each study, while requiring no assumption about the distribution of the true unobserved baseline risk. This robustness property, together with the feasibility of computation, makes SIMEX very attractive. The approach is suggested as an alternative to the usual likelihood analysis, which can provide misleading inferential results when the commonly assumed normal distribution for the baseline risk is violated. The performance of SIMEX is compared to the likelihood method and to the moment-based correction through an extensive simulation study and the analysis of two datasets from the medical literature. Copyright © 2013 John Wiley & Sons, Ltd.
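As a generic illustration of the SIMEX idea only (not the meta-analysis-specific implementation studied in the paper), the Python sketch below corrects a simple-regression slope for classical measurement error of known variance by adding extra simulated error at increasing levels and extrapolating the naive estimates back to zero error.

```python
import numpy as np

def simex_slope(x_obs, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), B=200, seed=0):
    """Generic SIMEX correction for a simple-regression slope when x is
    observed with classical measurement error of known std deviation sigma_u.

    Extra error with variance lambda * sigma_u**2 is added B times, the naive
    slope is averaged at each lambda, a quadratic in lambda is fitted, and the
    fit is extrapolated back to lambda = -1 (no measurement error).
    """
    rng = np.random.default_rng(seed)
    x_obs, y = np.asarray(x_obs, float), np.asarray(y, float)

    def naive_slope(x):
        xc = x - x.mean()
        return np.sum(xc * (y - y.mean())) / np.sum(xc ** 2)

    lams, means = [0.0] + list(lambdas), []
    for lam in lams:
        if lam == 0.0:
            means.append(naive_slope(x_obs))
        else:
            sims = [naive_slope(x_obs + rng.normal(0.0, np.sqrt(lam) * sigma_u, len(x_obs)))
                    for _ in range(B)]
            means.append(np.mean(sims))
    coef = np.polyfit(lams, means, deg=2)          # quadratic extrapolant
    return np.polyval(coef, -1.0)                  # SIMEX estimate at lambda = -1
```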

7.
Many epidemiological studies use a nested case-control (NCC) design to reduce cost while maintaining study power. Because NCC sampling is conditional on the primary outcome, routine application of logistic regression to analyze a secondary outcome will generally be biased. Recently, several methods have been proposed to obtain unbiased estimates of risk for a secondary outcome from NCC data. Two features common to all current methods are that the times of onset of the secondary outcome must be known for cohort members not selected into the NCC study and that the hazards of the two outcomes must be conditionally independent given the available covariates. This last assumption will not be plausible when the individual frailty of study subjects is not captured by the measured covariates. We provide a maximum-likelihood method that explicitly models the individual frailties and also avoids the need to have access to the full cohort data. We derive the likelihood contribution by respecting the original sampling procedure with respect to the primary outcome. We use proportional hazards models for the individual hazards, and Clayton's copula is used to model additional dependence between the primary and secondary outcomes beyond that explained by the measured risk factors. We show that the proposed method is more efficient than weighted likelihood and is unbiased in the presence of shared frailty for the primary and secondary outcome. We illustrate the method with an application to a study of risk factors for diabetes in a Swedish cohort. Copyright © 2014 John Wiley & Sons, Ltd.

8.
This paper investigates the use of likelihood methods for meta-analysis within the random-effects model framework. We show that likelihood inference relying on first-order approximations, while improving common meta-analysis techniques, can be prone to misleading results. This drawback is very evident in the case of small sample sizes, which are typical in meta-analysis. We alleviate the problem by exploiting the theory of higher-order asymptotics. In particular, we focus on a second-order adjustment to the log-likelihood ratio statistic. Simulation studies in meta-analysis and meta-regression show that higher-order likelihood inference provides much more accurate results than its first-order counterpart, while being of a computationally feasible form. We illustrate the application of the proposed approach on a real example.

9.
We studied the problem of testing a hypothesized distribution in survival regression models when the data are right censored and survival times are influenced by covariates. A modified chi-squared type test, known as the Nikulin-Rao-Robson statistic, is applied for the comparison of accelerated failure time models. This statistic is used to test the goodness-of-fit of the hypertabastic survival model and four other unimodal hazard rate functions. The results of a simulation study showed that the hypertabastic distribution can be used as an alternative to the log-logistic and log-normal distributions. In statistical modeling, because of the flexible shape of its hazard functions, this distribution can also be used as a competitor of the Birnbaum-Saunders and inverse Gaussian distributions. Results for a real data application are also shown. Copyright © 2017 John Wiley & Sons, Ltd.

10.
Fixed-effects meta-analysis has been criticized because the assumption of homogeneity is often unrealistic and can result in underestimation of parameter uncertainty. Random-effects meta-analysis and meta-regression are therefore typically used to accommodate explained and unexplained between-study variability. However, it is not unusual to obtain a boundary estimate of zero for the (residual) between-study standard deviation, resulting in fixed-effects estimates of the other parameters and their standard errors. To avoid such boundary estimates, we suggest using Bayes modal (BM) estimation with a gamma prior on the between-study standard deviation. When no prior information is available regarding the magnitude of the between-study standard deviation, a weakly informative default prior can be used (with shape parameter 2 and rate parameter close to 0) that produces positive estimates but does not overrule the data, leading to only a small decrease in the log likelihood from its maximum. We review the most commonly used estimation methods for meta-analysis and meta-regression including classical and Bayesian methods and apply these methods, as well as our BM estimator, to real datasets. We then perform simulations to compare BM estimation with the other methods and find that BM estimation performs well by (i) avoiding boundary estimates; (ii) having smaller root mean squared error for the between-study standard deviation; and (iii) better coverage for the overall effects than the other methods when the true model has at least a small or moderate amount of unexplained heterogeneity. Copyright © 2013 John Wiley & Sons, Ltd.
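A minimal Python sketch of the underlying idea, adding the log of a Gamma(shape 2, rate close to 0) prior on the between-study standard deviation to a normal random-effects log-likelihood and taking the posterior mode, is given below. It illustrates the penalisation only and is not the exact estimator studied in the paper (which applies the penalty to the likelihood actually used for estimation, e.g. the restricted likelihood).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma

def bayes_modal_re(y, v, shape=2.0, rate=1e-4):
    """Posterior-mode random-effects meta-analysis with a Gamma(shape, rate)
    prior on the between-study standard deviation tau.

    y : study effect estimates; v : within-study variances.
    With shape = 2 and a small rate the prior is weakly informative and keeps
    the mode of tau away from the zero boundary.
    """
    y, v = np.asarray(y, float), np.asarray(v, float)

    def neg_log_post(par):
        mu, tau = par
        s2 = v + tau ** 2
        loglik = -0.5 * np.sum(np.log(2 * np.pi * s2) + (y - mu) ** 2 / s2)
        logprior = gamma.logpdf(tau, a=shape, scale=1.0 / rate)
        return -(loglik + logprior)

    res = minimize(neg_log_post, x0=[np.mean(y), np.std(y) + 0.01],
                   method="L-BFGS-B", bounds=[(None, None), (1e-8, None)])
    mu_hat, tau_hat = res.x
    return mu_hat, tau_hat

# Hypothetical effect estimates and within-study variances
mu_hat, tau_hat = bayes_modal_re([0.2, -0.1, 0.35, 0.05, 0.5],
                                 [0.04, 0.09, 0.05, 0.12, 0.06])
```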

11.
An important question for clinicians appraising a meta-analysis is: are the findings likely to be valid in their own practice? Does the reported effect accurately represent the effect that would occur in their own clinical population? To this end we advance the concept of statistical validity, where the parameter being estimated equals the corresponding parameter for a new independent study. Using a simple ('leave-one-out') cross-validation technique, we demonstrate how we may test meta-analysis estimates for statistical validity using a new validation statistic, Vn, and derive its distribution. We compare this with the usual approach of investigating heterogeneity in meta-analyses and demonstrate the link between statistical validity and homogeneity. Using a simulation study, the properties of Vn and the Q statistic are compared for univariate random effects meta-analysis and a tailored meta-regression model, where information from the setting (included as model covariates) is used to calibrate the summary estimate to the setting of application. Their properties are found to be similar when there are 50 studies or more, but for fewer studies Vn has greater power but a higher type 1 error rate than Q. The power and type 1 error rate of Vn are also shown to depend on the within-study variance, between-study variance, study sample size, and the number of studies in the meta-analysis. Finally, we apply Vn to two published meta-analyses and conclude that it usefully augments standard methods when deciding upon the likely validity of summary meta-analysis estimates in clinical practice. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

12.
In the estimation of Cox regression models, maximum partial likelihood estimates might be infinite in a monotone likelihood setting, where the partial likelihood converges to a finite value while the parameter estimates diverge to infinity. To address monotone likelihood, previous studies have applied Firth's bias correction method to Cox regression models. However, model selection criteria for Firth's penalized partial likelihood approach have not yet been studied, although a heuristic AIC-type information criterion is available in a statistical package. Application of the heuristic information criterion to data obtained from a prospective observational study of patients with multiple brain metastases indicated that the heuristic criterion selects models with many parameters and ignores the adequacy of the model. Moreover, we showed that the heuristic criterion tends to select models with many regression parameters as the sample size increases. In the present study, we therefore propose an alternative AIC-type information criterion based on the risk function. A BIC-type information criterion was also evaluated. The presented simulation results confirm that the proposed criteria perform well in a monotone likelihood setting. The proposed AIC-type criterion was applied to the prospective observational study data. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

13.
Baseline risk is a proxy for unmeasured but important patient-level characteristics, which may be modifiers of the treatment effect, and is a potential source of heterogeneity in meta-analysis. Models adjusting for baseline risk have been developed for pairwise meta-analysis using the observed event rate in the placebo arm and taking into account the measurement error in the covariate to ensure that an unbiased estimate of the relationship is obtained. Our objective is to extend these methods to network meta-analysis, where it is of interest to adjust for baseline imbalances in the non-intervention group event rate to reduce both heterogeneity and possibly inconsistency. This objective is complicated in network meta-analysis because the covariate is sometimes missing, since not all studies in a network include a non-active intervention arm. A random-effects meta-regression model allowing for the inclusion of multi-arm trials and trials without a 'non-intervention' arm is developed. Analyses are conducted within a Bayesian framework using the WinBUGS software. The method is illustrated using two examples: (i) interventions to promote functional smoke alarm ownership by households with children and (ii) analgesics to reduce post-operative morphine consumption following major surgery. The results show no evidence of a baseline effect in the smoke alarm example, but in the analgesics example the adjustment greatly reduces heterogeneity and improves overall model fit. Copyright © 2012 John Wiley & Sons, Ltd.

14.
Concordance measures are frequently used for assessing the discriminative ability of risk prediction models. The interpretation of estimated concordance at external validation is difficult if the case-mix differs from the model development setting. We aimed to develop a concordance measure that provides insight into the influence of case-mix heterogeneity and is robust to censoring of time-to-event data. We first derived a model-based concordance (mbc) measure that allows for quantification of the influence of case-mix heterogeneity on discriminative ability of proportional hazards and logistic regression models. This mbc can also be calculated including a regression slope that calibrates the predictions at external validation (c-mbc), hence assessing the influence of overall regression coefficient validity on discriminative ability. We derived variance formulas for both mbc and c-mbc. We compared the mbc and the c-mbc with commonly used concordance measures in a simulation study and in two external validation settings. The mbc was asymptotically equivalent to a previously proposed resampling-based case-mix corrected c-index. The c-mbc remained stable at the true value with increasing proportions of censoring, while Harrell's c-index and to a lesser extent Uno's concordance measure increased unfavorably. Variance estimates of mbc and c-mbc were well in agreement with the simulated empirical variances. We conclude that the mbc is an attractive closed-form measure that allows for a straightforward quantification of the expected change in a model's discriminative ability due to case-mix heterogeneity. The c-mbc also reflects regression coefficient validity and is a censoring-robust alternative for the c-index when the proportional hazards assumption holds. Copyright © 2016 John Wiley & Sons, Ltd.

15.
Control risk regression is a widely used approach in meta-analysis of treatment effectiveness, relating the measure of risk with which the outcome occurs in the treated group to that in the control group. The severity of illness is a source of between-study heterogeneity that can be difficult to measure; it can be approximated by the rate of events in the control group. Since this estimate is a surrogate for the underlying risk, it is prone to measurement error, and correction methods are necessary to provide reliable inference. This article illustrates the extent of measurement error effects under different scenarios, including departures from the classical normality assumption for the control risk distribution. The performance of different measurement error corrections is examined. Attention is paid to likelihood-based structural methods assuming a distribution for the control risk measure and to functional methods avoiding that assumption, namely a simulation-based method and two score function methods. Advantages and limits of the approaches are evaluated through simulation. In the case of large heterogeneity, structural approaches are preferable to score methods, while score methods perform better for small heterogeneity and small sample sizes. The simulation-based approach behaves satisfactorily in all the examined scenarios, with no convergence issues. The methods are applied to a meta-analysis of the association between diabetes and risk of Parkinson disease. The study intends to make researchers aware of the measurement error problem occurring in control risk regression and to lead them to the use of appropriate correction techniques to prevent fallacious conclusions.
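For orientation only, a minimal Python sketch of the naive (uncorrected) control risk regression is shown below, fitting the treated-group risk measure on the observed control-group risk measure by weighted least squares with made-up study-level data. Because the control risk is itself estimated with error, the slope from this naive fit is biased, which is precisely why the correction methods discussed above are needed.

```python
import numpy as np

def naive_control_risk_fit(y_treat, x_control, var_y):
    """Naive weighted least-squares control risk regression (one point per study).

    y_treat   : risk measure in the treated group (e.g. log odds)
    x_control : observed risk measure in the control group (error-prone surrogate)
    var_y     : within-study variance of y_treat, used for the weights
    Returns the intercept and slope; no measurement error correction is applied.
    """
    y = np.asarray(y_treat, float)
    x = np.asarray(x_control, float)
    w = 1.0 / np.asarray(var_y, float)
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta

# Hypothetical study-level log odds in treated and control groups
intercept, slope = naive_control_risk_fit([-1.1, -0.8, -1.5, -0.9],
                                          [-0.7, -0.4, -1.2, -0.6],
                                          [0.05, 0.08, 0.04, 0.09])
```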

16.
We consider random-effects meta-analysis where the outcome variable is the occurrence of some event of interest. The data structures handled are those in which each study has one or more groups, and for each group either the number of subjects with and without the event, or the number of events and the total duration of follow-up, is available. Traditionally, the meta-analysis follows the summary measures approach based on the estimates of the outcome measure(s) and the corresponding standard error(s). This approach assumes an approximately normal within-study likelihood and treats the standard errors as known. It has several potential disadvantages: it does not account for the standard errors being estimated, or for correlation between the estimate and the standard error; it requires an (arbitrary) continuity correction in the case of zero events; and the normal approximation can be poor in studies with few events. We show that these problems can be overcome in most cases occurring in practice by replacing the approximate normal within-study likelihood with the appropriate exact likelihood. This leads to a generalized linear mixed model that can be fitted in standard statistical software. For instance, in the case of odds ratio meta-analysis, one can use the non-central hypergeometric likelihood, leading to mixed-effects conditional logistic regression. For incidence rate ratio meta-analysis, it leads to random-effects logistic regression with an offset variable. We also present bivariate and multivariate extensions. We present a number of examples, especially with rare events, including an example of network meta-analysis. Copyright © 2010 John Wiley & Sons, Ltd.

17.
MCP-MOD is a testing and model selection approach for clinical dose-finding studies. During testing, contrasts of dose group means are derived from candidate dose-response models. A multiple-comparison procedure is applied that controls the alpha level for the family of null hypotheses associated with the contrasts. Provided at least one contrast is significant, a corresponding set of "good" candidate models is identified. The model generating the most significant contrast is typically selected. There have been numerous publications on the method, and it was endorsed by the European Medicines Agency. The MCP-MOD procedure can alternatively be represented as a method based on simple linear regression, where "simple" refers to the inclusion of an intercept and a single predictor variable, which is a transformation of dose. It is shown that the contrasts are equal to least squares linear regression slope estimates after a rescaling of the predictor variables. The test for each contrast is the usual t statistic for a null slope parameter, except that a variance estimate with fewer degrees of freedom is used in the standard error. Selecting the model corresponding to the most significant contrast P value is equivalent to selecting the predictor variable yielding the smallest residual sum of squares. This criterion orders the models like a common goodness-of-fit test, but it does not assure a good fit. Common inferential methods applied to the selected model are subject to distortions that are often present following data-based model selection.
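The regression representation described in this abstract can be sketched in a few lines of Python: for each candidate shape f(dose) (the shapes and their parameters below are hypothetical), fit y = a + b*f(dose) by least squares, compute the slope t statistic, and note that the shape with the smallest residual sum of squares corresponds to the most significant contrast. This is only an illustration of the equivalence; the actual MCP-MOD procedure uses optimal contrasts, a pooled variance estimate with different degrees of freedom, and multiplicity-adjusted critical values.

```python
import numpy as np

# Candidate standardised dose-response shapes (hypothetical choices)
candidates = {
    "linear":  lambda d: d,
    "emax":    lambda d: d / (d + 0.2),           # assumed ED50 = 0.2
    "logdose": lambda d: np.log(1.0 + 10.0 * d),  # assumed scaling
}

def fit_candidates(dose, resp):
    """Fit y = a + b * f(dose) for each candidate shape f and report the slope
    t statistic and residual sum of squares; the shape with the smallest RSS
    (equivalently, the largest |t|) would be selected."""
    dose, resp = np.asarray(dose, float), np.asarray(resp, float)
    out = {}
    for name, f in candidates.items():
        x = f(dose)
        X = np.column_stack([np.ones_like(x), x])
        beta, resid, *_ = np.linalg.lstsq(X, resp, rcond=None)
        rss = float(resid[0]) if len(resid) else float(np.sum((resp - X @ beta) ** 2))
        sigma2 = rss / (len(resp) - 2)             # residual variance
        se_b = np.sqrt(sigma2 / np.sum((x - x.mean()) ** 2))
        out[name] = {"slope_t": beta[1] / se_b, "rss": rss}
    return out

# Hypothetical responses at doses 0, 0.5, 1, and 2
results = fit_candidates(dose=[0, 0, 0.5, 0.5, 1, 1, 2, 2],
                         resp=[0.1, 0.2, 0.5, 0.4, 0.7, 0.8, 0.9, 1.1])
```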

18.
A common paradigm in dealing with heterogeneity across tumors in cancer analysis is to cluster the tumors into subtypes using marker data on the tumor, and then to analyze each of the clusters separately. A more specific target is to investigate the association between risk factors and specific subtypes and to use the results for personalized preventive treatment. This task is usually carried out in two steps: clustering and risk factor assessment. However, two sources of measurement error arise in these problems. The first is the measurement error in the biomarker values. The second is the misclassification error when assigning observations to clusters. We consider the case with a specified set of relevant markers and propose a unified single-likelihood approach for normally distributed biomarkers. As an alternative, we consider a two-step procedure with the tumor type misclassification error taken into account in the second-step risk factor analysis. We describe our method for binary data and also for survival analysis data using a modified version of the Cox model. We present asymptotic theory for the proposed estimators. Simulation results indicate that our methods significantly lower the bias with a small price being paid in terms of variance. We present an analysis of breast cancer data from the Nurses' Health Study to demonstrate the utility of our method. Copyright © 2016 John Wiley & Sons, Ltd.

19.
In clinical trials, patients with different biomarker features may respond differently to new treatments or drugs. In personalized medicine, it is important to study the interaction between treatment and biomarkers in order to clearly identify patients who benefit from the treatment. With the local partial-likelihood estimation (LPLE) method proposed by Fan, Lin, and Zhou (Local partial-likelihood estimation for lifetime data. The Annals of Statistics 2006; 34(1): 290-325), the treatment effect can be modeled as a flexible function of the biomarker. In this paper, we propose a bootstrap test method for survival outcome data based on the LPLE for assessing whether the treatment effect is constant among all patients or varies as a function of the biomarker. The test method is called the local partial-likelihood bootstrap (LPLB) and is developed by bootstrapping the martingale residuals. The test statistic measures the amount of change in treatment effects across the entire range of the biomarker and is derived based on asymptotic theory for martingales. The LPLB method is nonparametric and is shown in simulations and data analysis examples to be flexible enough to identify treatment effects in a biomarker-defined subset and more powerful in detecting treatment-biomarker interactions of complex forms than a Cox regression model with a simple interaction. We use data from a breast cancer and a prostate cancer clinical trial to illustrate the proposed LPLB test. Copyright © 2015 John Wiley & Sons, Ltd.

20.
This study investigates appropriate estimation of estimator variability in the context of causal mediation analysis that employs propensity score-based weighting. Such an analysis decomposes the total effect of a treatment on the outcome into an indirect effect transmitted through a focal mediator and a direct effect bypassing the mediator. Ratio-of-mediator-probability weighting estimates these causal effects by adjusting for the confounding impact of a large number of pretreatment covariates through propensity score-based weighting. In step 1, a propensity score model is estimated. In step 2, the causal effects of interest are estimated using weights derived from the prior step's regression coefficient estimates. Statistical inferences obtained from this 2-step estimation procedure are potentially problematic if the estimated standard errors of the causal effect estimates do not reflect the sampling uncertainty in the estimation of the weights. This study extends to ratio-of-mediator-probability weighting analysis a solution to the 2-step estimation problem by stacking the score functions from both steps. We derive the asymptotic variance-covariance matrix for the indirect effect and direct effect 2-step estimators, provide simulation results, and illustrate with an application study. Our simulation results indicate that the sampling uncertainty in the estimated weights should not be ignored. The standard error estimation using the stacking procedure offers a viable alternative to bootstrap standard error estimation. We discuss broad implications of this approach for causal analysis involving propensity score-based weighting.
