Similar Articles
20 similar articles retrieved.
1.
We consider estimation of treatment effects in two‐stage adaptive multi‐arm trials with a common control. The best treatment is selected at interim, and the primary endpoint is modeled via a Cox proportional hazards model. The maximum partial‐likelihood estimator of the log hazard ratio of the selected treatment will overestimate the true treatment effect in this case. Several methods for reducing the selection bias have been proposed for normal endpoints, including an iterative method based on the estimated conditional selection biases and a shrinkage approach based on empirical Bayes theory. We adapt these methods to time‐to‐event data and compare the bias and mean squared error of all methods in an extensive simulation study and apply the proposed methods to reconstructed data from the FOCUS trial. We find that all methods tend to overcorrect the bias, and only the shrinkage methods can reduce the mean squared error. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
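
As a rough illustration of the selection bias described above (not the authors' correction methods), the following sketch simulates hypothetical arm-level log hazard ratio estimates and shows that the estimate for the arm selected as "best" at interim overstates the true benefit; all numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 3 experimental arms with identical true log hazard
# ratios vs. control; each arm's estimate is roughly normal around the truth.
true_loghr = np.array([-0.2, -0.2, -0.2])   # assumed true effects
se = 0.15                                    # assumed standard error per arm
n_sim = 100_000

est = rng.normal(true_loghr, se, size=(n_sim, 3))
best = est.min(axis=1)          # "best" arm = most negative estimated log HR

print("true effect:                 ", true_loghr[0])
print("mean estimate, fixed arm:    ", round(est[:, 0].mean(), 3))
print("mean estimate, selected arm: ", round(best.mean(), 3))  # overstates benefit
```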

2.
In most randomized clinical trials (RCTs) with a right-censored time-to-event outcome, the hazard ratio is taken as an appropriate measure of the effectiveness of a new treatment compared with a standard-of-care or control treatment. However, it has long been known that the hazard ratio is valid only under the proportional hazards (PH) assumption. This assumption is formally checked only rarely. Some recent trials, particularly the IPASS trial in lung cancer and the ICON7 trial in ovarian cancer, have alerted researchers to the possibility of gross non-PH, raising the critical question of how such data should be analyzed. Here, we propose the use of the restricted mean survival time at a prespecified, fixed time point as a useful general measure to report the difference between two survival curves. We describe different methods of estimating it and we illustrate its application to three RCTs in cancer. The examples are graded from a trial in kidney cancer in which there is no evidence of non-PH, to IPASS, where the opposite is clearly the case. We propose a simple, general scheme for the analysis of data from such RCTs. Key elements of our approach are Andersen's method of 'pseudo-observations,' which is based on the Kaplan-Meier estimate of the survival function, and Royston and Parmar's class of flexible parametric survival models, which may be used for analyzing data in the presence or in the absence of PH of the treatment effect.  相似文献   
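
The restricted mean survival time is simply the area under the survival curve up to a chosen horizon. A minimal Python sketch on toy data, using a plain Kaplan-Meier step function rather than the pseudo-observation or flexible parametric approaches discussed in the paper:

```python
import numpy as np

def km_curve(time, event):
    """Kaplan-Meier estimate: distinct event times and S(t) just after each."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    uniq = np.unique(time[event == 1])
    surv, s = [], 1.0
    for t in uniq:
        at_risk = np.sum(time >= t)
        deaths = np.sum((time == t) & (event == 1))
        s *= 1.0 - deaths / at_risk
        surv.append(s)
    return uniq, np.array(surv)

def rmst(time, event, tau):
    """Restricted mean survival time up to tau: area under the KM step function."""
    t, s = km_curve(time, event)
    grid = np.concatenate(([0.0], t[t < tau], [tau]))   # step-change points up to tau
    vals = np.concatenate(([1.0], s[t < tau]))           # S(t) on each interval
    return np.sum(vals * np.diff(grid))

# toy data: (time, event indicator) for two hypothetical arms
t1, e1 = [2, 3, 5, 8, 10, 12], [1, 0, 1, 1, 0, 1]
t0, e0 = [1, 2, 4, 5, 7, 9], [1, 1, 1, 0, 1, 1]
print("RMST difference at tau=10:", rmst(t1, e1, 10) - rmst(t0, e0, 10))
```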

3.
The difference in restricted mean survival times between two groups is a clinically relevant summary measure. With observational data, there may be imbalances in confounding variables between the two groups. One approach to account for such imbalances is estimating a covariate‐adjusted restricted mean difference by modeling the covariate‐adjusted survival distribution and then marginalizing over the covariate distribution. Because the estimator for the restricted mean difference is defined by the estimator for the covariate‐adjusted survival distribution, it is natural to expect that a better estimator of the covariate‐adjusted survival distribution is associated with a better estimator of the restricted mean difference. We therefore propose estimating restricted mean differences with stacked survival models. Stacked survival models estimate a weighted average of several survival models by minimizing prediction error. By including a range of parametric, semi‐parametric, and non‐parametric models, stacked survival models can robustly estimate a covariate‐adjusted survival distribution and, therefore, the restricted mean treatment effect in a wide range of scenarios. We demonstrate through a simulation study that better performance of the covariate‐adjusted survival distribution often leads to better mean squared error of the restricted mean difference, although there are notable exceptions. In addition, we demonstrate that the proposed estimator can perform nearly as well as Cox regression when the proportional hazards assumption is satisfied and significantly better when proportional hazards is violated. Finally, the proposed estimator is illustrated with data from the United Network for Organ Sharing to evaluate post‐lung transplant survival between large‐volume and small‐volume centers. Copyright © 2016 John Wiley & Sons, Ltd.

4.
The log‐rank test is the most powerful non‐parametric test for detecting a proportional hazards alternative and thus is the most commonly used testing procedure for comparing time‐to‐event distributions between different treatments in clinical trials. When the log‐rank test is used for the primary data analysis, the sample size calculation should also be based on the test to ensure the desired power for the study. In some clinical trials, the treatment effect may not manifest itself right after patients receive the treatment. Therefore, the proportional hazards assumption may not hold. Furthermore, patients may discontinue the study treatment prematurely and thus may have a diluted treatment effect after treatment discontinuation. If a patient's treatment termination time is independent of his/her time‐to‐event of interest, the termination time can be treated as a censoring time in the final data analysis. Alternatively, we may keep collecting time‐to‐event data until study termination from those patients who discontinued the treatment and conduct an intent‐to‐treat analysis by including them in the original treatment groups. We derive formulas necessary to calculate the asymptotic power of the log‐rank test under this non‐proportional hazards alternative for the two data analysis strategies. Simulation studies indicate that the formulas provide accurate power for a variety of trial settings. A clinical trial example is used to illustrate the application of the proposed methods. Copyright © 2009 John Wiley & Sons, Ltd.
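
For reference, the statistic whose power is being approximated is the ordinary two-sample log-rank statistic. A self-contained sketch on hypothetical data (the power formulas themselves are not reproduced here):

```python
import numpy as np
from scipy.stats import chi2

def logrank_test(time, event, group):
    """Two-sample (unweighted) log-rank test; group is coded 0/1."""
    time, event, group = map(np.asarray, (time, event, group))
    obs_minus_exp, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n = at_risk.sum()                        # total at risk at t
        n1 = (at_risk & (group == 1)).sum()      # at risk in group 1
        d = ((time == t) & (event == 1)).sum()   # events at t
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        obs_minus_exp += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    stat = obs_minus_exp ** 2 / var
    return stat, chi2.sf(stat, df=1)

# toy data: survival times, event indicators, and treatment group (0/1)
time = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
event = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1]
group = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
print(logrank_test(time, event, group))
```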

5.
Fixed‐effects meta‐analysis has been criticized because the assumption of homogeneity is often unrealistic and can result in underestimation of parameter uncertainty. Random‐effects meta‐analysis and meta‐regression are therefore typically used to accommodate explained and unexplained between‐study variability. However, it is not unusual to obtain a boundary estimate of zero for the (residual) between‐study standard deviation, resulting in fixed‐effects estimates of the other parameters and their standard errors. To avoid such boundary estimates, we suggest using Bayes modal (BM) estimation with a gamma prior on the between‐study standard deviation. When no prior information is available regarding the magnitude of the between‐study standard deviation, a weakly informative default prior can be used (with shape parameter 2 and rate parameter close to 0) that produces positive estimates but does not overrule the data, leading to only a small decrease in the log likelihood from its maximum. We review the most commonly used estimation methods for meta‐analysis and meta‐regression including classical and Bayesian methods and apply these methods, as well as our BM estimator, to real datasets. We then perform simulations to compare BM estimation with the other methods and find that BM estimation performs well by (i) avoiding boundary estimates; (ii) having smaller root mean squared error for the between‐study standard deviation; and (iii) better coverage for the overall effects than the other methods when the true model has at least a small or moderate amount of unexplained heterogeneity. Copyright © 2013 John Wiley & Sons, Ltd.
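
A minimal sketch of the idea: add the log of a Gamma(shape 2, rate close to 0) prior on the between-study SD to the random-effects log likelihood and maximize, which keeps the SD estimate away from the zero boundary. The data values are hypothetical, and this joint-mode version is only an approximation of the estimator studied in the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, gamma

# hypothetical meta-analysis data: study estimates and within-study variances
y = np.array([0.10, 0.30, 0.25, 0.05, 0.40])
v = np.array([0.04, 0.03, 0.05, 0.02, 0.06])

def neg_log_posterior(params, shape=2.0, rate=1e-4):
    mu, tau = params
    loglik = norm.logpdf(y, loc=mu, scale=np.sqrt(v + tau**2)).sum()
    logprior = gamma.logpdf(tau, a=shape, scale=1.0 / rate)  # weakly informative
    return -(loglik + logprior)

fit = minimize(neg_log_posterior, x0=[y.mean(), 0.1],
               method="L-BFGS-B", bounds=[(None, None), (1e-8, None)])
mu_hat, tau_hat = fit.x
print(f"pooled effect: {mu_hat:.3f}, between-study SD (penalized mode): {tau_hat:.3f}")
```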

6.
Meta‐analyses of clinical trials often treat the number of patients experiencing a medical event as binomially distributed when individual patient data for fitting standard time‐to‐event models are unavailable. Assuming identical drop‐out time distributions across arms, random censorship, and low proportions of patients with an event, a binomial approach results in a valid test of the null hypothesis of no treatment effect with minimal loss in efficiency compared with time‐to‐event methods. To deal with differences in follow‐up—at the cost of assuming specific distributions for event and drop‐out times—we propose a hierarchical multivariate meta‐analysis model using the aggregate data likelihood based on the number of cases, fatal cases, and discontinuations in each group, as well as the planned trial duration and group sizes. Such a model also enables exchangeability assumptions about the parameters of the survival distributions, for which such assumptions are more appropriate than for the expected proportion of patients with an event across trials of substantially different length. Borrowing information from other trials within a meta‐analysis or from historical data is particularly useful for rare events data. Prior information or exchangeability assumptions also avoid the parameter identifiability problems that arise when using more flexible event and drop‐out time distributions than the exponential one. We discuss the derivation of robust historical priors and illustrate the discussed methods using an example. We also compare the proposed approach against other aggregate data meta‐analysis methods in a simulation study. Copyright © 2016 John Wiley & Sons, Ltd.

7.
Multilevel mixed effects survival models are used in the analysis of clustered survival data, such as repeated events, multicenter clinical trials, and individual participant data (IPD) meta‐analyses, to investigate heterogeneity in baseline risk and covariate effects. In this paper, we extend parametric frailty models including the exponential, Weibull and Gompertz proportional hazards (PH) models and the log logistic, log normal, and generalized gamma accelerated failure time models to allow any number of normally distributed random effects. Furthermore, we extend the flexible parametric survival model of Royston and Parmar, modeled on the log‐cumulative hazard scale using restricted cubic splines, to include random effects while also allowing for non‐PH (time‐dependent effects). Maximum likelihood is used to estimate the models utilizing adaptive or nonadaptive Gauss–Hermite quadrature. The methods are evaluated through simulation studies representing clinically plausible scenarios of a multicenter trial and IPD meta‐analysis, showing good performance of the estimation method. The flexible parametric mixed effects model is illustrated using a dataset of patients with kidney disease and repeated times to infection and an IPD meta‐analysis of prognostic factor studies in patients with breast cancer. User‐friendly Stata software is provided to implement the methods. Copyright © 2014 John Wiley & Sons, Ltd.

8.
Randomized clinical trials with long-term survival data comparing two treatments often show Kaplan-Meier plots with crossing survival curves. Such behaviour implies a violation of the proportional hazards assumption for treatment. The Cox proportional hazards regression model with treatment as a fixed effect can therefore not be used to assess the influence of treatment on survival. In this paper we analyse long-term follow-up data from the Dutch Gastric Cancer Trial, a randomized study comparing limited (D1) lymph node dissection with extended (D2) lymph node dissection. We illustrate a number of ways of dealing with survival data that do not obey the proportional hazards assumption, each of which can be easily implemented in standard statistical packages.

9.
This paper concerns using modified weighted Schoenfeld residuals to test the proportionality of subdistribution hazards for the Fine–Gray model, similar to the tests proposed by Grambsch and Therneau for independently censored data. We develop a score test for the time‐varying coefficients based on the modified Schoenfeld residuals derived assuming a certain form of non‐proportionality. The methods perform well in simulations and a real data analysis of breast cancer data, where the treatment effect exhibits non‐proportional hazards. Copyright © 2013 John Wiley & Sons, Ltd.

10.
In network meta‐analyses that synthesize direct and indirect comparison evidence concerning multiple treatments, multivariate random effects models have been routinely used for addressing between‐studies heterogeneities. Although their standard inference methods depend on large sample approximations (eg, restricted maximum likelihood estimation) for the number of trials synthesized, the numbers of trials are often moderate or small. In these situations, standard estimators cannot be expected to behave in accordance with asymptotic theory; in particular, confidence intervals cannot be assumed to exhibit their nominal coverage probabilities (likewise, the type I error probabilities of the corresponding tests are not controlled at their nominal levels). The invalidity issue may seriously influence the overall conclusions of network meta‐analyses. In this article, we develop several improved inference methods for network meta‐analyses to resolve these problems. We first introduce 2 efficient likelihood‐based inference methods, the likelihood ratio test–based and efficient score test–based methods, in a general framework of network meta‐analysis. Then, to improve the small‐sample inferences, we develop improved higher‐order asymptotic methods using Bartlett‐type corrections and bootstrap adjustment methods. The proposed methods adopt Monte Carlo approaches using parametric bootstraps to effectively circumvent complicated analytical calculations of case‐by‐case analyses and to permit flexible application to various statistical models for network meta‐analyses. These methods can also be straightforwardly applied to multivariate meta‐regression analyses and to tests for the evaluation of inconsistency. In numerical evaluations via simulations, the proposed methods generally performed well compared with the ordinary restricted maximum likelihood–based inference method. Applications to 2 network meta‐analysis datasets are provided.

11.
In clinical trials with time‐to‐event outcomes, it is common to estimate the marginal hazard ratio from the proportional hazards model, even when the proportional hazards assumption is not valid. This is unavoidable from the perspective that the estimator must be specified a priori if probability statements about treatment effect estimates are desired. Marginal hazard ratio estimates under non‐proportional hazards are still useful, as they can be considered to be average treatment effect estimates over the support of the data. However, as many have shown, under non‐proportional hazards, the 'usual' unweighted marginal hazard ratio estimate is a function of the censoring distribution, which is not normally considered to be scientifically relevant when describing the treatment effect. In addition, in many practical settings, the censoring distribution is only conditionally independent (e.g., differing across treatment arms), which further complicates the interpretation. In this paper, we investigate an estimator of the hazard ratio that removes the influence of censoring and propose a consistent robust variance estimator. We compare the coverage probability of the estimator to both the usual Cox model estimator and an estimator proposed by Xu and O'Quigley (2000) when censoring is independent of the covariate. The new estimator should be used for inference that does not depend on the censoring distribution. It is particularly relevant to adaptive clinical trials where, by design, censoring distributions differ across treatment arms. Copyright © 2012 John Wiley & Sons, Ltd.

12.
To assess the calibration of a predictive model in a survival analysis setting, several authors have extended the Hosmer–Lemeshow goodness‐of‐fit test to survival data. Grønnesby and Borgan developed a test under the proportional hazards assumption, and Nam and D'Agostino developed a nonparametric test that is applicable in a more general survival setting for data with limited censoring. We analyze the performance of the two tests and show that the Grønnesby–Borgan test attains appropriate size in a variety of settings, whereas the Nam‐D'Agostino method has a higher than nominal Type 1 error when there is more than trivial censoring. Both tests are sensitive to small cell sizes. We develop a modification of the Nam‐D'Agostino test to allow for higher censoring rates. We show that this modified Nam‐D'Agostino test has appropriate control of Type 1 error and comparable power to the Grønnesby–Borgan test and is applicable to settings other than proportional hazards. We also discuss the application to small cell sizes. Copyright © 2015 John Wiley & Sons, Ltd.

13.
Multivariate meta‐analysis allows the joint synthesis of effect estimates based on multiple outcomes from multiple studies, accounting for the potential correlations among them. However, standard methods for multivariate meta‐analysis for multiple outcomes are restricted to problems where the within‐study correlation is known or where individual participant data are available. This paper proposes an approach to approximating the within‐study covariances based on information about likely correlations between underlying outcomes. We developed methods for both continuous and dichotomous data and for combinations of the two types. An application to a meta‐analysis of treatments for stroke illustrates the use of the approximated covariance in multivariate meta‐analysis with correlated outcomes. Copyright © 2012 John Wiley & Sons, Ltd.
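
The basic idea for continuous outcomes is simple: given the reported standard errors and a plausible correlation between the underlying outcomes, fill in the off-diagonal within-study covariance as correlation times the product of the standard errors. A toy sketch for one hypothetical study with two outcomes (the paper's constructions for dichotomous and mixed outcomes are more involved):

```python
import numpy as np

# Hypothetical study: standard errors for two outcome effect estimates,
# within-study correlation between the estimates not reported.
se = np.array([0.12, 0.30])   # reported standard errors
r_assumed = 0.6               # assumed correlation between the underlying outcomes

# Approximate within-study covariance matrix: cov_jk = r * se_j * se_k
S_within = np.diag(se**2)
S_within[0, 1] = S_within[1, 0] = r_assumed * se[0] * se[1]
print(S_within)
```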

14.

Background

Joint modelling of longitudinal and time‐to‐event data is often preferred over separate longitudinal or time‐to‐event analyses as it can account for study dropout, error in longitudinally measured covariates, and correlation between longitudinal and time‐to‐event outcomes. The joint modelling literature focuses mainly on the analysis of single studies with no methods currently available for the meta‐analysis of joint model estimates from multiple studies.

Methods

We propose a 2‐stage method for meta‐analysis of joint model estimates. The method is applied to the INDANA dataset to combine joint model estimates of systolic blood pressure with time to death, time to myocardial infarction, and time to stroke. Results are compared to meta‐analyses of separate longitudinal or time‐to‐event models. A simulation study is conducted to contrast separate versus joint analyses over a range of scenarios.

Results

For the real dataset, similar results were obtained from the separate and joint analyses. However, the simulation study indicated a benefit of using joint rather than separate methods in a meta‐analytic setting where association exists between the longitudinal and time‐to‐event outcomes.

Conclusions

Where evidence of association between the longitudinal and time‐to‐event outcomes exists, estimates from joint models, rather than from standalone analyses, should be pooled in 2‐stage meta‐analyses.
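
Stage 2 of such a 2-stage approach is ordinary pooling of the per-study estimates. A sketch using a DerSimonian-Laird random-effects pool on hypothetical stage-1 output (the paper's exact pooling choices may differ):

```python
import numpy as np

def dersimonian_laird(est, se):
    """Random-effects pooling of per-study estimates (stage 2 of a 2-stage meta-analysis)."""
    est, se = np.asarray(est, float), np.asarray(se, float)
    w = 1.0 / se**2
    fixed = np.sum(w * est) / np.sum(w)
    q = np.sum(w * (est - fixed) ** 2)              # Cochran's Q
    df = len(est) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (se**2 + tau2)
    pooled = np.sum(w_re * est) / np.sum(w_re)
    return pooled, np.sqrt(1.0 / np.sum(w_re)), tau2

# hypothetical per-trial estimates of the association parameter from
# joint models fitted separately to each trial in stage 1
print(dersimonian_laird([0.8, 1.1, 0.6, 0.9], [0.20, 0.25, 0.30, 0.15]))
```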

15.
We consider a study‐level meta‐analysis with a normally distributed outcome variable and possibly unequal study‐level variances, where the object of inference is the difference in means between a treatment and control group. A common complication in such an analysis is missing sample variances for some studies. A frequently used approach is to impute the weighted (by sample size) mean of the observed variances (mean imputation). Another approach is to include only those studies with variances reported (complete case analysis). Both mean imputation and complete case analysis are only valid under the missing‐completely‐at‐random assumption, and even then the inverse variance weights produced are not necessarily optimal. We propose a multiple imputation method employing gamma meta‐regression to impute the missing sample variances. Our method takes advantage of study‐level covariates that may be used to provide information about the missing data. Through simulation studies, we show that multiple imputation, when the imputation model is correctly specified, is superior to competing methods in terms of confidence interval coverage probability and type I error probability when testing a specified group difference. Finally, we describe a similar approach to handling missing variances in cross‐over studies. Copyright © 2016 John Wiley & Sons, Ltd.
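
The mean-imputation approach criticized here is easy to state: replace each missing variance by the sample-size-weighted mean of the observed ones and proceed with inverse-variance-style weighting. A toy sketch with illustrative weights only (not the proposed gamma meta-regression imputation); all data are hypothetical.

```python
import numpy as np

# hypothetical study-level data: mean differences, total sample sizes, and
# sample variances with some variances missing (np.nan)
diff = np.array([0.5, 0.2, 0.8, 0.4])
n = np.array([40, 60, 30, 50])
var = np.array([1.2, np.nan, 0.9, np.nan])

# mean imputation: fill missing variances with the sample-size-weighted
# mean of the observed variances (only defensible if missing completely at random)
obs = ~np.isnan(var)
var_imp = np.where(obs, var, np.average(var[obs], weights=n[obs]))

# illustrative inverse-variance-style weights (within-study variance of the
# difference taken as proportional to var/n when arm sizes are balanced)
w = n / var_imp
print("pooled difference:", np.sum(w * diff) / np.sum(w))
```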

16.
Competing risks arise with time‐to‐event data when individuals are at risk of more than one type of event and the occurrence of one event precludes the occurrence of all other events. A useful measure with competing risks is the cause‐specific cumulative incidence function (CIF), which gives the probability of experiencing a particular event as a function of follow‐up time, accounting for the fact that some individuals may have a competing event. When modelling the cause‐specific CIF, the most common model is a semi‐parametric proportional subhazards model. In this paper, we propose the use of flexible parametric survival models to directly model the cause‐specific CIF where the effect of follow‐up time is modelled using restricted cubic splines. The models provide smooth estimates of the cause‐specific CIF with the important advantage that the approach is easily extended to model time‐dependent effects. The models can be fitted using standard survival analysis tools by a combination of data expansion and introducing time‐dependent weights. Various link functions are available that allow modelling on different scales and have proportional subhazards, proportional odds and relative absolute risks as particular cases. We conduct a simulation study to evaluate how well the spline functions approximate subhazard functions with complex shapes. The methods are illustrated using data from the European Blood and Marrow Transplantation Registry showing excellent agreement between parametric estimates of the cause‐specific CIF and those obtained from a semi‐parametric model. We also fit models relaxing the proportional subhazards assumption using alternative link functions and/or including time‐dependent effects. Copyright © 2016 John Wiley & Sons, Ltd.
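
For orientation, the cause-specific CIF that these parametric models target can also be estimated nonparametrically in the Aalen-Johansen form, CIF_k(t) = sum over event times t_j <= t of S(t_j-) * d_kj / n_j. A short sketch on toy competing-risks data (not the flexible parametric approach proposed in the paper):

```python
import numpy as np

def cuminc(time, cause):
    """Nonparametric cause-specific cumulative incidence (Aalen-Johansen form).
    cause: 0 = censored; 1, 2, ... = competing event types."""
    time, cause = np.asarray(time, float), np.asarray(cause, int)
    times = np.unique(time[cause > 0])
    cif = {k: [] for k in np.unique(cause[cause > 0])}
    totals = {k: 0.0 for k in cif}
    s_prev = 1.0                                   # overall event-free survival S(t-)
    for t in times:
        at_risk = np.sum(time >= t)
        for k in cif:
            d_k = np.sum((time == t) & (cause == k))
            totals[k] += s_prev * d_k / at_risk    # increment CIF_k at time t
            cif[k].append(totals[k])
        d_all = np.sum((time == t) & (cause > 0))
        s_prev *= 1.0 - d_all / at_risk            # update S(t) after time t
    return times, cif

t = [2, 3, 3, 5, 6, 8, 9, 12]
c = [1, 2, 0, 1, 1, 2, 0, 1]
times, cif = cuminc(t, c)
print(times, {k: np.round(v, 3) for k, v in cif.items()})
```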

17.
We introduce a nonparametric survival prediction method for right‐censored data. The method generates a survival curve prediction by constructing a (weighted) Kaplan–Meier estimator using the outcomes of the K most similar training observations. Each observation has an associated set of covariates, and a metric on the covariate space is used to measure similarity between observations. We apply our method to a kidney transplantation data set to generate patient‐specific distributions of graft survival and to a simulated data set in which the proportional hazards assumption is explicitly violated. We compare the performance of our method with the standard Cox model and the random survival forests method. Copyright © 2012 John Wiley & Sons, Ltd.
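
A minimal sketch of the nearest-neighbour idea on hypothetical data: standardize covariates, find the K most similar training observations under Euclidean distance, and build an unweighted Kaplan-Meier curve from their outcomes (the paper also considers weighted versions and other choices of metric):

```python
import numpy as np

def km(time, event):
    """Kaplan-Meier estimate; returns a function S(t)."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    ts = np.unique(time[event == 1])
    surv, s = [], 1.0
    for t in ts:
        s *= 1.0 - np.sum((time == t) & (event == 1)) / np.sum(time >= t)
        surv.append(s)
    surv = np.concatenate(([1.0], surv))           # S(t) = 1 before the first event
    return lambda x: surv[np.searchsorted(ts, x, side="right")]

def knn_survival(x_new, X, time, event, k=3):
    """Survival curve for x_new from the k most similar training observations."""
    mean, std = X.mean(axis=0), X.std(axis=0)
    dist = np.linalg.norm((X - mean) / std - (x_new - mean) / std, axis=1)
    idx = np.argsort(dist)[:k]
    return km(np.asarray(time)[idx], np.asarray(event)[idx])

# hypothetical training data: two covariates, survival time, event indicator
X = np.array([[55, 1.2], [62, 0.8], [47, 2.0], [70, 0.5], [58, 1.0]])
time, event = [6, 3, 10, 2, 7], [1, 1, 0, 1, 1]
curve = knn_survival(np.array([60, 0.9]), X, time, event, k=3)
print(curve(np.array([1, 4, 8])))   # predicted S(t) at t = 1, 4, 8
```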

18.
Causal inference for non‐censored response variables, such as binary or quantitative outcomes, is often based on either (1) direct standardization ('G‐formula') or (2) inverse probability of treatment assignment weights ('propensity score'). To do causal inference in survival analysis, one needs to address right‐censoring, and often, special techniques are required for that purpose. We will show how censoring can be dealt with 'once and for all' by means of so‐called pseudo‐observations when doing causal inference in survival analysis. The pseudo‐observations can be used as a replacement of the outcomes without censoring when applying 'standard' causal inference methods, such as (1) or (2) above. We study this idea for estimating the average causal effect of a binary treatment on the survival probability, the restricted mean lifetime, and the cumulative incidence in a competing risks situation. The methods will be illustrated in a small simulation study and via a study of patients with acute myeloid leukemia who received either myeloablative or non‐myeloablative conditioning before allogeneic hematopoietic cell transplantation. We will estimate the average causal effect of the conditioning regime on outcomes such as the 3‐year overall survival probability and the 3‐year risk of chronic graft‐versus‐host disease. Copyright © 2017 John Wiley & Sons, Ltd.
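
Pseudo-observations replace each subject's (possibly censored) survival indicator at a horizon tau by the jackknife quantity n*S_hat(tau) - (n-1)*S_hat_(-i)(tau), after which uncensored-data methods can be applied. A toy sketch of the unadjusted case on hypothetical data (the paper combines these with the G-formula or propensity-score weighting rather than a simple mean difference):

```python
import numpy as np

def km_at(time, event, tau):
    """Kaplan-Meier survival probability at a fixed time tau."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    s = 1.0
    for t in np.unique(time[(event == 1) & (time <= tau)]):
        s *= 1.0 - np.sum((time == t) & (event == 1)) / np.sum(time >= t)
    return s

def pseudo_obs(time, event, tau):
    """Jackknife pseudo-observations for S(tau): n*S_hat - (n-1)*S_hat(-i)."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    n = len(time)
    full = km_at(time, event, tau)
    loo = np.array([km_at(np.delete(time, i), np.delete(event, i), tau)
                    for i in range(n)])
    return n * full - (n - 1) * loo

# hypothetical data: treatment indicator, follow-up time, event status
treat = np.array([1, 1, 1, 1, 0, 0, 0, 0])
time = np.array([5, 8, 3, 10, 2, 4, 6, 1])
event = np.array([1, 0, 1, 0, 1, 1, 0, 1])

po = pseudo_obs(time, event, tau=6)
# with censoring handled via pseudo-observations, "standard" methods apply;
# here, simply a difference in mean pseudo-observations between arms
print("estimated difference in S(6):", po[treat == 1].mean() - po[treat == 0].mean())
```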

19.
Prognostic studies often estimate survival curves for patients with different covariate vectors, but the validity of their results depends largely on the accuracy of the estimated covariate effects. To avoid conventional proportional hazards and linearity assumptions, flexible extensions of Cox's proportional hazards model incorporate non‐linear (NL) and/or time‐dependent (TD) covariate effects. However, their impact on survival curves estimation is unclear. Our primary goal is to develop and validate a flexible method for estimating individual patients' survival curves, conditional on multiple predictors with possibly NL and/or TD effects. We first obtain maximum partial likelihood estimates of NL and TD effects and use backward elimination to select statistically significant effects into a final multivariable model. We then plug the selected NL and TD estimates in the full likelihood function and estimate the baseline hazard function and the resulting survival curves, conditional on individual covariate vectors. The TD and NL functions and the log hazard are modeled with unpenalized regression B‐splines. In simulations, our flexible survival curve estimates were unbiased and had much lower mean square errors than the conventional estimates. In real‐life analyses of mortality after a septic shock, our model improved significantly the deviance (likelihood ratio test = 84.8, df = 20, p < 0.0001) and changed substantially the predicted survival for several subjects. Copyright © 2015 John Wiley & Sons, Ltd.

20.
Meta‐analysis of genome‐wide association studies (GWAS) has achieved great success in detecting loci underlying human diseases. Incorporating GWAS results from diverse ethnic populations for meta‐analysis, however, remains challenging because of the possible heterogeneity across studies. Conventional fixed‐effects (FE) or random‐effects (RE) methods may not be the most suitable for aggregating multiethnic GWAS results because of violation of the homogeneous effect assumption across studies (FE) or low power to detect signals (RE). Three recently proposed methods, the modified RE (RE‐HE) model, the binary‐effects (BE) model and a Bayesian approach (Meta‐analysis of Transethnic Association [MANTRA]), show increased power over FE and RE methods while incorporating heterogeneity of effects when meta‐analyzing trans‐ethnic GWAS results. We propose a two‐stage approach to account for heterogeneity in trans‐ethnic meta‐analysis in which we cluster studies with cohort‐specific ancestry information prior to meta‐analysis. We compare this to a no‐prior‐clustering (crude) approach, evaluating type I error and power of these two strategies in an extensive simulation study to investigate whether the two‐stage approach offers any improvements over the crude approach. We find that the two‐stage approach and the crude approach for all five methods (FE, RE, RE‐HE, BE, MANTRA) provide well‐controlled type I error. However, the two‐stage approach shows increased power for BE and RE‐HE, and similar power for MANTRA and FE compared with their corresponding crude approaches, especially when there is heterogeneity across the multiethnic GWAS results. These results suggest that prior clustering in the two‐stage approach can be an effective and efficient intermediate step in meta‐analysis to account for the multiethnic heterogeneity.
