Similar articles (20 results)
1.
In most randomized clinical trials (RCTs) with a right-censored time-to-event outcome, the hazard ratio is taken as an appropriate measure of the effectiveness of a new treatment compared with a standard-of-care or control treatment. However, it has long been known that the hazard ratio is valid only under the proportional hazards (PH) assumption. This assumption is formally checked only rarely. Some recent trials, particularly the IPASS trial in lung cancer and the ICON7 trial in ovarian cancer, have alerted researchers to the possibility of gross non-PH, raising the critical question of how such data should be analyzed. Here, we propose the use of the restricted mean survival time at a prespecified, fixed time point as a useful general measure to report the difference between two survival curves. We describe different methods of estimating it and we illustrate its application to three RCTs in cancer. The examples are graded from a trial in kidney cancer in which there is no evidence of non-PH, to IPASS, where the opposite is clearly the case. We propose a simple, general scheme for the analysis of data from such RCTs. Key elements of our approach are Andersen's method of 'pseudo-observations,' which is based on the Kaplan-Meier estimate of the survival function, and Royston and Parmar's class of flexible parametric survival models, which may be used for analyzing data in the presence or in the absence of PH of the treatment effect.
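The restricted mean survival time (RMST) at a horizon tau is simply the area under the Kaplan-Meier curve up to tau. A minimal numpy sketch of that idea (function names are illustrative, not from the paper's software):

```python
import numpy as np

def km_curve(times, events):
    """Kaplan-Meier estimate S(t) at each distinct event time (event=1, censored=0)."""
    times, events = np.asarray(times, float), np.asarray(events, int)
    event_times = np.unique(times[events == 1])
    s, surv = 1.0, []
    for t in event_times:
        at_risk = np.sum(times >= t)
        deaths = np.sum((times == t) & (events == 1))
        s *= 1.0 - deaths / at_risk
        surv.append(s)
    return event_times, np.array(surv)

def rmst(times, events, tau):
    """Restricted mean survival time: area under the KM step function on [0, tau]."""
    t, s = km_curve(times, events)
    grid = np.concatenate(([0.0], t[t < tau], [tau]))   # interval endpoints
    step = np.concatenate(([1.0], s[t < tau]))          # S on each interval
    return float(np.sum(np.diff(grid) * step))
```

The between-arm difference in `rmst` at a common tau is the measure the paper advocates; its standard error can come from pseudo-observations or a flexible parametric fit, neither of which is reproduced here.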

2.
In clinical trials with time-to-event endpoints, it is not uncommon to see a significant proportion of patients being cured (or long-term survivors), such as trials for non-Hodgkin's lymphoma. The popularly used sample size formula derived under the proportional hazards (PH) model may not be proper to design a survival trial with a cure fraction, because the PH model assumption may be violated. To account for a cure fraction, the PH cure model is widely used in practice, where a PH model is used for survival times of uncured patients and a logistic distribution is used for the probability of patients being cured. In this paper, we develop a sample size formula on the basis of the PH cure model by investigating the asymptotic distributions of the standard weighted log-rank statistics under the null and local alternative hypotheses. The derived sample size formula under the PH cure model is more flexible because it can be used to test differences in the short-term survival and/or the cure fraction. Furthermore, we also investigate as numerical examples the impacts of accrual methods and durations of accrual and follow-up periods on sample size calculation. The results show that ignoring the cure rate in sample size calculation can lead to either underpowered or overpowered studies. We evaluate the performance of the proposed formula by simulation studies and provide an example to illustrate its application with the use of data from a melanoma trial. Copyright © 2012 John Wiley & Sons, Ltd.

3.
Sample size calculations for clinical trials dealing with survivorship are often based on an exponential model. This model is inappropriate when a non-zero proportion of the population is expected to have indefinite survival. In such cases the Gompertz model offers a reasonable alternative. A method for calculating the required accrual time for a clinical trial in which the treatment arms have Gompertz survival distributions satisfying the proportional hazards assumption is developed. A computer program to perform this method is given, as well as an iterative method that can be used when a computer is not available.
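The Gompertz model's suitability here comes from its shape parameter: with hazard lam*exp(gamma*t) and gamma < 0, the survival curve plateaus at exp(lam/gamma), i.e. a fraction of the population never fails. An illustrative sketch (parameter values below are made up, not from the paper):

```python
import math

def gompertz_surv(t, lam, gamma):
    """Gompertz survival S(t) = exp(-(lam/gamma) * (exp(gamma*t) - 1));
    the corresponding hazard is lam * exp(gamma*t)."""
    return math.exp(-(lam / gamma) * (math.exp(gamma * t) - 1.0))

def plateau_fraction(lam, gamma):
    """With gamma < 0 the curve levels off at exp(lam/gamma): the fraction
    with indefinite survival that an exponential model cannot represent."""
    assert gamma < 0, "a plateau exists only for a negative shape parameter"
    return math.exp(lam / gamma)
```

Two Gompertz arms sharing the same gamma but different lam have a constant hazard ratio lam2/lam1, which is the PH structure the accrual-time calculation assumes.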

4.
Time-to-event data are very common in observational studies. Unlike randomized experiments, observational studies suffer from both observed and unobserved confounding biases. To adjust for observed confounding in survival analysis, the commonly used methods are the Cox proportional hazards (PH) model, the weighted logrank test, and the inverse probability of treatment weighted Cox PH model. These methods do not rely on fully parametric models, but their practical performances are highly influenced by the validity of the PH assumption. Also, there are few methods addressing the hidden bias in causal survival analysis. We propose a strategy to test for survival function differences based on the matching design and explore sensitivity of the P-values to assumptions about unmeasured confounding. Specifically, we apply the paired Prentice-Wilcoxon (PPW) test or the modified PPW test to the propensity score matched data. Simulation studies show that the PPW-type test has higher power in situations when the PH assumption fails. For potential hidden bias, we develop a sensitivity analysis based on the matched pairs to assess the robustness of our finding, following Rosenbaum's idea for nonsurvival data. For a real data illustration, we apply our method to an observational cohort of chronic liver disease patients from a Mayo Clinic study. The PPW test based on observed data initially shows evidence of a significant treatment effect. But this finding is not robust, as the sensitivity analysis reveals that the P-value becomes nonsignificant if there exists an unmeasured confounder with a small impact.

5.
A comparison of sample size methods for the logrank statistic.
Several methods are available for sample size calculation for clinical trials when survival curves are to be compared using the logrank statistic. We discuss advantages and disadvantages of some of these methods, and present simulation results under exponential, proportional hazards, and non-proportional hazards situations.
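One of the standard formulas compared in this literature is Schoenfeld's approximation for the number of events the log-rank test needs under proportional hazards; converting events to patients then divides by the anticipated overall event probability. A stdlib-only sketch (not this paper's own code):

```python
import math
from statistics import NormalDist

def schoenfeld_events(hr, alpha=0.05, power=0.80, alloc=0.5):
    """Schoenfeld's approximation: total number of events required for a
    two-sided log-rank test to detect hazard ratio `hr` under PH, with a
    fraction `alloc` of patients allocated to one arm."""
    z = NormalDist().inv_cdf
    numerator = (z(1 - alpha / 2) + z(power)) ** 2
    return numerator / (alloc * (1 - alloc) * math.log(hr) ** 2)
```

For a hazard ratio of 0.5 with 1:1 allocation, 5% two-sided alpha and 80% power this gives about 66 events; the formula is symmetric in hr and 1/hr. Under non-proportional hazards this approximation no longer applies, which is exactly why simulation comparisons like those in the abstract matter.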

6.
Meta-analysis of time-to-event outcomes using the hazard ratio as a treatment effect measure has an underlying assumption that hazards are proportional. The between-arm difference in the restricted mean survival time is a measure that avoids this assumption and allows the treatment effect to vary with time. We describe and evaluate meta-analysis based on the restricted mean survival time for dealing with non-proportional hazards and present a diagnostic method for the overall proportional hazards assumption. The methods are illustrated with the application to two individual participant meta-analyses in cancer. The examples were chosen because they differ in disease severity and the patterns of follow-up, in order to understand the potential impacts on the hazards and the overall effect estimates. We further investigate the estimation methods for restricted mean survival time by a simulation study. Copyright © 2015 John Wiley & Sons, Ltd.

7.
We consider interim analyses in clinical trials or observational studies with a time-to-event outcome variable where the survival curves are compared using the hazard ratio resulting from a proportional hazards (PH) model or tested with the logrank test or another two-sample test. We show and illustrate with an example that if the PH assumption is violated, the results of interim analyses can be heavily biased. This is due to the fact that the censoring pattern in interim analyses can be completely different from the final analysis. We argue that, when the PH assumption is violated, interim analyses are only sensible if a fixed time horizon for the final analysis is specified, and at the time of the interim analysis sufficient information is available over the whole time interval up to the horizon. We show how the bias can then be remedied by introducing in the estimation and testing procedures an appropriate weighting that reflects the weights to be expected in the final analysis. The consequences for design and analysis are discussed and some practical recommendations are given.

8.
For testing the efficacy of a treatment in a clinical trial with survival data, the Cox proportional hazards (PH) model is the well-accepted, conventional tool. When using this model, one typically proceeds by confirming that the required PH assumption holds true. If the PH assumption fails to hold, there are many options available, proposed as alternatives to the Cox PH model. An important question which arises is whether the potential bias introduced by this sequential model fitting procedure merits concern and, if so, what are effective mechanisms for correction. We investigate by means of a simulation study and draw attention to the considerable drawbacks, with regard to power, of a simple resampling technique, the permutation adjustment, a natural recourse for addressing such challenges. We also consider a recently proposed two-stage testing strategy (2008) for ameliorating these effects. Copyright © 2013 John Wiley & Sons, Ltd.

9.
Interval censoring arises when a subject misses prescheduled visits at which the failure is to be assessed. Most existing approaches for analysing interval-censored failure time data assume that the censoring mechanism is independent of the true failure time. However, there are situations where this assumption may not hold. In this paper, we consider such a situation in which the dependence structure between the censoring variables and the failure time can be modelled through some latent variables and a method for regression analysis of failure time data is proposed. The method makes use of the proportional hazards frailty model and an EM algorithm is presented for estimation. Finite sample properties of the proposed estimators of regression parameters are examined through simulation studies and we illustrate the method with data from an AIDS study.

10.
Two classes of econometric estimators are popular for modeling outcomes with idiosyncratic characteristics such as those present in medical costs data: (1) estimators based on exponential conditional mean models, where the mean function of the outcome is equal to the exponential of the linear predictor, and (2) estimators based on the proportional hazards assumption, where the hazard function of the outcome is equal to the exponential of the linear predictor. Recent work has provided guidance both on choosing between the two classes of estimators and also on choosing among alternative estimators within the exponential conditional mean framework. The present work extends this literature by proposing a test for identifying the proportional hazards assumption within the class of exponential conditional mean models, thereby eliminating the need to run both classes of models in order to make informative choices. We implement this test using the generalized gamma regression model, thereby allowing the analyst to select between both parametric alternatives and also the semi-parametric Cox model from one cohesive framework. Our simulation results indicate that the proposed test performs as well as the traditional test of the proportional hazards assumption following a Cox regression, in terms of power and Type I error, under a variety of data generating mechanisms. We illustrate its use in an analysis of physician visits.

11.
In survival studies, information lost through censoring can be partially recaptured through repeated measures data which are predictive of survival. In addition, such data may be useful in removing bias in survival estimates, due to censoring which depends upon the repeated measures. Here we investigate joint models for survival T and repeated measurements Y, given a vector of covariates Z. Mixture models factored as f(T|Z) f(Y|T,Z) are well suited for assessing covariate effects on survival time. Our objective is efficiency gains, using non-parametric models for Y in order to avoid introducing bias by misspecification of the distribution for Y. We model (T|Z) as a piecewise exponential distribution with a proportional hazards covariate effect. The component (Y|T,Z) has a multinomial model. The joint likelihood for survival and longitudinal data is maximized using the EM algorithm. The estimate of covariate effect is compared to the estimate based on the standard proportional hazards model and an alternative joint model based estimate. We demonstrate modest gains in efficiency when using the piecewise exponential joint model. In a simulation, the estimated efficiency gain over the standard proportional hazards model is 6.4 per cent. In clinical trial data, the estimated efficiency gain over the standard proportional hazards model is 10.2 per cent.
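The survival component f(T|Z) above is piecewise exponential: the hazard is constant on each interval, so the survival function is the exponential of minus the accumulated hazard. A sketch of just that survival function (the joint EM fit itself is not reproduced; a PH covariate effect would multiply every interval hazard by exp(beta'z)):

```python
import math

def pwe_survival(t, cuts, hazards):
    """Piecewise-exponential survival: constant hazard hazards[k] on the
    interval [cuts[k], cuts[k+1]); cuts[0] must be 0 and the last piece
    extends to infinity."""
    cum = 0.0  # cumulative hazard accumulated up to time t
    for k, lam in enumerate(hazards):
        lo = cuts[k]
        hi = cuts[k + 1] if k + 1 < len(cuts) else math.inf
        if t <= lo:
            break
        cum += lam * (min(t, hi) - lo)
    return math.exp(-cum)
```

With enough intervals the piecewise-constant hazard approximates an arbitrary baseline hazard, which is why this model pairs naturally with a non-parametric component for Y.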

12.
A major assumption of the Cox proportional hazards model is that the effect of a given covariate does not change over time. If this assumption is violated, the simple Cox model is invalid, and more sophisticated analyses are required. This paper describes eight graphical methods for detecting violations of the proportional hazards assumption and demonstrates each on three published datasets with a single binary covariate. I discuss the relative merits of these methods. Smoothed plots of the scaled Schoenfeld residuals are recommended for assessing PH violations because they provide precise usable information about the time dependence of the covariate effects.
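The (unscaled) Schoenfeld residual at each event time is the covariate value of the subject who failed minus its risk-weighted average over the risk set; under PH, a smoother through the scaled residuals plotted against time should be roughly flat about zero. A numpy sketch for a single covariate, assuming the Cox coefficient beta has already been estimated elsewhere:

```python
import numpy as np

def schoenfeld_residuals(times, events, z, beta):
    """Unscaled Schoenfeld residual at each event time for one covariate z:
    z of the failing subject minus the exp(beta*z)-weighted mean of z over
    the risk set at that time (no ties handling; illustrative only)."""
    times, events, z = (np.asarray(a, float) for a in (times, events, z))
    res_times, residuals = [], []
    for i in np.flatnonzero(events == 1):
        at_risk = times >= times[i]
        w = np.exp(beta * z[at_risk])
        res_times.append(times[i])
        residuals.append(z[i] - np.sum(w * z[at_risk]) / np.sum(w))
    return np.array(res_times), np.array(residuals)
```

Scaling (dividing by the risk-set variance of z) and smoothing, as the abstract recommends, are omitted here for brevity.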

13.
Keene ON. Statistics in Medicine 2002; 21(23): 3687-3700
Estimates of the efficacy of new medicines are key to the investigation of their clinical effectiveness. The most widely recommended approach to summarizing time-to-event data from clinical trials is to use a hazard ratio. When the proportional hazards assumption is questionable, a hazard ratio depends on the length of patient follow-up. Hazard ratios do not directly translate into differences in times to events and therefore can present difficulties in interpretation. This paper describes an area where summary by hazard ratio would seem unsuitable and explores alternative estimates of efficacy. In particular, the difference in median time to event between treatments can provide a useful and consistent measure of efficacy. Methods of calculating confidence intervals for differences in medians for censored time-to-event data are described. Accelerated failure time models provide a useful alternative approach to proportional hazards modelling. Estimates of the ratio of the median time to event between treatments are directly available from these models. One of the reasons given for summarizing time-to-event studies by a hazard ratio is to facilitate meta-analyses. The bootstrap estimate of standard error for the difference in medians in each trial can provide a method for combining results based on summary statistics.
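The bootstrap idea can be sketched as resampling each arm with replacement, recomputing the Kaplan-Meier median, and taking percentiles of the resampled differences (illustrative only; the paper's exact CI constructions are not reproduced here):

```python
import numpy as np

def km_median(times, events):
    """Median survival: first time at which the KM estimate drops to 0.5 or below."""
    times, events = np.asarray(times, float), np.asarray(events, int)
    s = 1.0
    for t in np.unique(times[events == 1]):
        s *= 1.0 - np.sum((times == t) & (events == 1)) / np.sum(times >= t)
        if s <= 0.5:
            return float(t)
    return float('nan')  # median not reached

def median_diff_ci(t1, e1, t2, e2, n_boot=2000, seed=1):
    """Percentile-bootstrap CI for the difference in KM medians (arm 2 minus arm 1)."""
    rng = np.random.default_rng(seed)
    t1, e1, t2, e2 = map(np.asarray, (t1, e1, t2, e2))
    diffs = []
    for _ in range(n_boot):
        i = rng.integers(0, len(t1), len(t1))  # resample arm 1 with replacement
        j = rng.integers(0, len(t2), len(t2))  # resample arm 2 with replacement
        d = km_median(t2[j], e2[j]) - km_median(t1[i], e1[i])
        if not np.isnan(d):                    # skip resamples where a median is unreached
            diffs.append(d)
    return float(np.percentile(diffs, 2.5)), float(np.percentile(diffs, 97.5))
```

The bootstrap standard deviation of the same resampled differences is what the abstract proposes feeding into a meta-analysis.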

14.
In the analysis of survival data using the Cox proportional hazard (PH) model, it is important to verify that the explanatory variables analysed satisfy the proportional hazard assumption of the model. This paper presents results of a simulation study that compares five test statistics to check the proportional hazard assumption of Cox's model. The test statistics were evaluated under proportional hazards and the following types of departures from the proportional hazard assumption: increasing relative hazards; decreasing relative hazards; crossing hazards; diverging hazards, and non-monotonic hazards. The test statistics compared include those based on partitioning of failure time and those that do not require partitioning of failure time. The simulation results demonstrate that the time-dependent covariate test, the weighted residuals score test and the linear correlation test have equally good power for detection of non-proportionality in the varieties of non-proportional hazards studied. Using illustrative data from the literature, these test statistics performed similarly. © 1997 by John Wiley & Sons, Ltd.

15.
Multilevel mixed effects survival models are used in the analysis of clustered survival data, such as repeated events, multicenter clinical trials, and individual participant data (IPD) meta-analyses, to investigate heterogeneity in baseline risk and covariate effects. In this paper, we extend parametric frailty models including the exponential, Weibull and Gompertz proportional hazards (PH) models and the log logistic, log normal, and generalized gamma accelerated failure time models to allow any number of normally distributed random effects. Furthermore, we extend the flexible parametric survival model of Royston and Parmar, modeled on the log-cumulative hazard scale using restricted cubic splines, to include random effects while also allowing for non-PH (time-dependent effects). Maximum likelihood is used to estimate the models utilizing adaptive or nonadaptive Gauss-Hermite quadrature. The methods are evaluated through simulation studies representing clinically plausible scenarios of a multicenter trial and IPD meta-analysis, showing good performance of the estimation method. The flexible parametric mixed effects model is illustrated using a dataset of patients with kidney disease and repeated times to infection and an IPD meta-analysis of prognostic factor studies in patients with breast cancer. User-friendly Stata software is provided to implement the methods. Copyright © 2014 John Wiley & Sons, Ltd.

16.
Kaplan-Meier survival curve estimation is a commonly used non-parametric method to evaluate survival distributions for groups of patients in the clinical trial setting. However, this method does not permit covariate adjustment which may reduce bias and increase precision. The Cox proportional hazards model is a commonly used semi-parametric method for conducting adjusted inferences and may be used to estimate covariate-adjusted survival curves. However, this model relies on the proportional hazards assumption that is often difficult to validate. Research work has been carried out to introduce a non-parametric covariate-adjusted method to estimate survival rates for certain given time intervals. We extend the non-parametric covariate-adjusted method to develop a new model to estimate the survival rates for treatment groups at any time point when an event occurs. Simulation studies are conducted to investigate the model's performance. This model is illustrated with an oncology clinical trial example.

17.
This paper addresses the problem of combining information from independent clinical trials which compare survival distributions of two treatment groups. Current meta-analytic methods which take censoring into account are often not feasible for meta-analyses which synthesize summarized results in published (or unpublished) references, as these methods require information usually not reported. The paper presents methodology which uses the log(-log) survival function difference, i.e. log(-log S2(t)) - log(-log S1(t)), as the contrast index to represent the multiplicative treatment effect on survival in independent trials. This article shows by the second mean value theorem for integrals that this contrast index, denoted as theta, is interpretable as a weighted average on a natural logarithmic scale of hazard ratios within the interval [0,t] in a trial. When the within-trial proportional hazards assumption is true, theta is the logarithm of the proportionality constant for the common hazard ratio for the interval considered within the trial. In this situation, an important advantage of using theta as a contrast index in the proposed methodology is that the estimation of theta is not affected by length of follow-up time. Other commonly used indices such as the odds ratio, risk ratio and risk differences do not have this invariance property under the proportional hazards model, since their estimation may be affected by length of follow-up time as a technical artefact. Thus, the proposed methodology obviates problems which often occur in survival meta-analysis because trials do not report survival at the same length of follow-up time. Even when the within-trial proportional hazards assumption is not realistic, the proposed methodology has the capability of testing a global null hypothesis of no multiplicative treatment effect on the survival distributions of two groups for all studies.
A discussion of weighting schemes for meta-analysis is provided; in particular, a weighting scheme based on effective sample sizes is suggested for the meta-analysis of time-to-event data which involves censoring. A medical example illustrating the methodology is given. A simulation investigation suggested that the methodology performs well in the presence of moderate censoring.
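The contrast index itself is trivial to compute from the two arms' survival probabilities at any common time point, and under within-trial PH it recovers the log hazard ratio exactly at every follow-up time, which is the invariance property the abstract describes:

```python
import math

def cloglog_contrast(s1, s2):
    """theta = log(-log S2(t)) - log(-log S1(t)), computed from the two arms'
    survival probabilities at a common time t. Under within-trial proportional
    hazards with S2 = S1**hr, theta equals log(hr) for every t."""
    return math.log(-math.log(s2)) - math.log(-math.log(s1))
```

For example, with a hazard ratio of 2 we have S2(t) = S1(t)**2, and theta evaluates to log 2 whether the curves are read early (S1 high) or late (S1 low) in follow-up.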

18.
Currently many dose-finding clinical trial designs, including the continual reassessment method (CRM) and the standard '3+3' design, dichotomize toxicity outcomes based on the pre-specified dose-limiting toxicity (DLT) criteria. This loss of information is particularly inefficient due to the small sample sizes in phase I trials. Common Toxicity Criteria (CTCAEv3.0) classify adverse events into grades 1-5, which range from 1 as a mild adverse event to 5 as death related to an adverse event. In this paper, we extend the CRM to include ordinal toxicity outcomes as specified by CTCAEv3.0 using the proportional odds model (POM) and compare results with the dichotomous CRM. A sensitivity analysis of the new design compares various target DLT rates, sample sizes, and cohort sizes. This design is also assessed under various dose-toxicity relationship models including POMs as well as those that violate the proportional odds assumption. A simulation study shows that the proportional odds CRM performs as well as the dichotomous CRM on all criteria compared (including safety criteria such as percentage of patients treated at highly toxic or suboptimal dose levels) and with improved estimation of the maximum tolerated dose when the PO assumption is not violated. These findings suggest that it is beneficial to incorporate ordinal toxicity endpoints into phase I trial designs.
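Under the proportional odds model a single dose effect shifts every cumulative logit by the same amount, and per-grade probabilities follow by differencing the cumulative probabilities. An illustrative sketch (the cutpoints and effect size below are made-up numbers, not CTCAE quantities or the paper's calibration):

```python
import math

def po_grade_probs(dose_effect, cutpoints):
    """Cumulative-logit (proportional odds) model for ordinal toxicity:
    P(grade >= k) = logistic(cutpoints[k-1] + dose_effect), with cutpoints
    strictly decreasing. Returns probabilities for len(cutpoints)+1 grades."""
    def logistic(x):
        return 1.0 / (1.0 + math.exp(-x))
    # cumulative P(grade >= k), bracketed by 1 (>= lowest grade) and 0
    cum = [1.0] + [logistic(a + dose_effect) for a in cutpoints] + [0.0]
    return [cum[k] - cum[k + 1] for k in range(len(cum) - 1)]
```

Increasing `dose_effect` shifts probability mass toward higher grades by the same odds ratio at every cut, which is precisely the PO assumption the paper's sensitivity analysis relaxes.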

19.
The log-rank test is the most powerful non-parametric test for detecting a proportional hazards alternative and thus is the most commonly used testing procedure for comparing time-to-event distributions between different treatments in clinical trials. When the log-rank test is used for the primary data analysis, the sample size calculation should also be based on the test to ensure the desired power for the study. In some clinical trials, the treatment effect may not manifest itself right after patients receive the treatment. Therefore, the proportional hazards assumption may not hold. Furthermore, patients may discontinue the study treatment prematurely and thus may have diluted treatment effect after treatment discontinuation. If a patient's treatment termination time is independent of his/her time-to-event of interest, the termination time can be treated as a censoring time in the final data analysis. Alternatively, we may keep collecting time-to-event data until study termination from those patients who discontinued the treatment and conduct an intent-to-treat analysis by including them in the original treatment groups. We derive formulas necessary to calculate the asymptotic power of the log-rank test under this non-proportional hazards alternative for the two data analysis strategies. Simulation studies indicate that the formulas provide accurate power for a variety of trial settings. A clinical trial example is used to illustrate the application of the proposed methods. Copyright © 2009 John Wiley & Sons, Ltd.

20.
In clinical trials with time-to-event outcomes, it is common to estimate the marginal hazard ratio from the proportional hazards model, even when the proportional hazards assumption is not valid. This is unavoidable from the perspective that the estimator must be specified a priori if probability statements about treatment effect estimates are desired. Marginal hazard ratio estimates under non-proportional hazards are still useful, as they can be considered to be average treatment effect estimates over the support of the data. However, as many have shown, under non-proportional hazards, the 'usual' unweighted marginal hazard ratio estimate is a function of the censoring distribution, which is not normally considered to be scientifically relevant when describing the treatment effect. In addition, in many practical settings, the censoring distribution is only conditionally independent (e.g., differing across treatment arms), which further complicates the interpretation. In this paper, we investigate an estimator of the hazard ratio that removes the influence of censoring and propose a consistent robust variance estimator. We compare the coverage probability of the estimator to both the usual Cox model estimator and an estimator proposed by Xu and O'Quigley (2000) when censoring is independent of the covariate. The new estimator should be used for inference that does not depend on the censoring distribution. It is particularly relevant to adaptive clinical trials where, by design, censoring distributions differ across treatment arms. Copyright © 2012 John Wiley & Sons, Ltd.
