20 similar references found.
1.
Parametric models for accelerated and long-term survival: a comment on proportional hazards
The Cox proportional hazards model (CPH) is routinely used in clinical trials, but it may encounter serious difficulties with departures from the proportional hazards assumption, even when the departures are not readily detected by commonly used diagnostics. We consider the Gamel-Boag (GB) model, a log-normal model for accelerated failure in which a proportion of subjects are long-term survivors. When the CPH model is fit to simulated data generated from this model, the results can range from gross overstatement of the effect size to a situation where increasing follow-up may cause a decline in power. We implement a fitting algorithm for the GB model that permits separate covariate effects on the rapidity of early failure and the fraction of long-term survivors. When effects are detected by both the CPH and GB methods, the attribution of the effect to long-term or short-term survival may change the interpretation of the data. We believe these examples motivate more frequent use of parametric survival models in conjunction with the semi-parametric Cox proportional hazards model.
2.
Chi Y. Statistics in Medicine 2005, 24(1): 23-35
In clinical trials or drug development studies, researchers are often interested in identifying which treatments or dosages are more effective than the standard one. Recently, several multiple testing procedures based on weighted logrank tests have been proposed to compare several treatments with a control in a one-way layout where survival data are subject to random right-censorship. However, weighted logrank tests are based on ranks, and such tests might not be sensitive to the magnitude of the difference in survival times against a specific alternative. It is therefore desirable to develop a more robust and powerful multiple testing procedure. This paper proposes multiple testing procedures based on two-sample weighted Kaplan-Meier statistics, each comparing an individual treatment with the control, to determine which treatments are more effective than the control. Comparative results from a simulation study are presented, and the methods are applied to a prostate cancer clinical trial and a renal carcinoma tumour study.
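As a rough illustration of the building block behind such weighted Kaplan-Meier statistics, the sketch below (not taken from the paper; the function name and the unit weight are illustrative) computes the integrated difference between two Kaplan-Meier curves up to a horizon tau using lifelines; the paper's weight functions, standardization, and variance estimators are omitted.

```python
import numpy as np
from lifelines import KaplanMeierFitter

def integrated_km_difference(t1, e1, t2, e2, tau, n_grid=2000):
    """Approximate the integral of S1(t) - S2(t) over [0, tau] (weight = 1)."""
    km1 = KaplanMeierFitter().fit(t1, event_observed=e1)
    km2 = KaplanMeierFitter().fit(t2, event_observed=e2)
    grid = np.linspace(0.0, tau, n_grid)
    diff = (km1.survival_function_at_times(grid).values
            - km2.survival_function_at_times(grid).values)
    return float(np.sum(diff) * (grid[1] - grid[0]))   # simple Riemann sum
```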
3.
A common problem encountered in many medical applications is the comparison of survival curves. Often, rather than comparison of the entire survival curves, interest is focused on the comparison at a fixed point in time. In most cases, the naive test based on a difference in the estimates of survival is used for this comparison. In this note, we examine the performance of alternatives to the naive test. These include tests based on a number of transformations of the survival function and a test based on a generalized linear model for pseudo-observations. The type I errors and power of these tests for a variety of sample sizes are compared by a Monte Carlo study. We also discuss how these tests may be extended to situations where the data are stratified. The pseudo-value approach is also applicable in more detailed regression analysis of the survival probability at a fixed point in time. The methods are illustrated on a study comparing survival for autologous and allogeneic bone marrow transplants.
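For concreteness, a minimal sketch of one such transformed test follows: it compares two Kaplan-Meier estimates at a fixed time point on the complementary log-log scale, using Greenwood's formula for the variances. The helper functions are illustrative rather than the authors' code, and the sketch assumes 0 < S(t0) < 1 in both groups.

```python
import numpy as np
from scipy.stats import norm

def km_at(t0, time, event):
    """Kaplan-Meier estimate and Greenwood variance at time t0."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    s, gw = 1.0, 0.0
    for t in np.unique(time[(event == 1) & (time <= t0)]):
        n_risk = np.sum(time >= t)                 # at risk just before t
        d = np.sum((time == t) & (event == 1))     # events at t
        s *= 1.0 - d / n_risk
        gw += d / (n_risk * (n_risk - d))
    return s, s**2 * gw                            # Greenwood's formula

def cloglog_test(t0, time1, event1, time2, event2):
    """Compare S1(t0) and S2(t0) on the log(-log S) scale (delta method)."""
    s1, v1 = km_at(t0, time1, event1)
    s2, v2 = km_at(t0, time2, event2)
    g1, g2 = np.log(-np.log(s1)), np.log(-np.log(s2))
    vg1 = v1 / (s1 * np.log(s1)) ** 2
    vg2 = v2 / (s2 * np.log(s2)) ** 2
    z = (g1 - g2) / np.sqrt(vg1 + vg2)
    return z, 2.0 * norm.sf(abs(z))
```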
4.
We discuss the use of local likelihood methods to fit proportional hazards regression models to right and interval censored data. The assumed model allows for an arbitrary, smoothed baseline hazard on which a vector of covariates operates in a proportional manner, and thus produces an interpretable baseline hazard function along with estimates of global covariate effects. For estimation, we extend the modified EM algorithm suggested by Betensky, Lindsey, Ryan and Wand. We illustrate the method with data on times to deterioration of breast cosmesis and HIV-1 infection rates among haemophiliacs.
5.
In survival studies, information lost through censoring can be partially recaptured through repeated measures data which are predictive of survival. In addition, such data may be useful in removing bias in survival estimates due to censoring which depends upon the repeated measures. Here we investigate joint models for survival T and repeated measurements Y, given a vector of covariates Z. Mixture models indexed as f(T|Z)f(Y|T,Z) are well suited for assessing covariate effects on survival time. Our objective is efficiency gains, using non-parametric models for Y in order to avoid introducing bias by misspecification of the distribution for Y. We model (T|Z) as a piecewise exponential distribution with a proportional hazards covariate effect. The component (Y|T,Z) has a multinomial model. The joint likelihood for survival and longitudinal data is maximized using the EM algorithm. The estimate of covariate effect is compared to the estimate based on the standard proportional hazards model and an alternative joint-model-based estimate. We demonstrate modest gains in efficiency when using the piecewise exponential joint model. In a simulation, the estimated efficiency gain over the standard proportional hazards model is 6.4 per cent. In clinical trial data, the estimated efficiency gain over the standard proportional hazards model is 10.2 per cent.
6.
Interval-censored failure time data occur in many areas, especially in medical follow-up studies such as clinical trials, and in consequence, many methods have been developed for the problem. However, most of the existing approaches cannot deal with situations where the hazard functions may cross each other. To address this, we develop a sieve maximum likelihood estimation procedure based on the short-term and long-term hazard ratio model. In the method, I-splines are used to approximate the underlying unknown function. An extensive simulation study was conducted to assess the finite sample properties of the presented procedure and suggests that the method works well in practical situations. The analysis of a motivating example is also provided.
7.
Fei Wan. Statistics in Medicine 2017, 36(5): 838-854
The proportional hazards model is one of the most important statistical models used in medical research involving time-to-event data. Simulation studies are routinely used to evaluate the performance and properties of the model and of alternative statistical models for time-to-event outcomes under a variety of situations. Complex simulations that examine multiple situations with different censoring rates demand approaches that can accommodate this variety. In this paper, we propose a general framework for simulating right-censored survival data for proportional hazards models by simultaneously incorporating a baseline hazard function from a known survival distribution, a known censoring time distribution, and a set of baseline covariates. Specifically, we present scenarios in which time to event is generated from exponential or Weibull distributions and censoring time has a uniform or Weibull distribution. The proposed framework incorporates any combination of covariate distributions. We describe the steps involved in nested numerical integration and in using a root-finding algorithm to choose the censoring parameter that achieves predefined censoring rates in simulated survival data. We conducted simulation studies to assess the performance of the proposed framework and demonstrated its application in a comprehensively designed simulation study. We investigated the effect of the censoring rate on potential bias in estimating the conditional treatment effect using the proportional hazards model in the presence of unmeasured confounding variables.
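The calibration step can be mimicked along the following lines. This sketch is illustrative rather than the paper's algorithm: it replaces the nested numerical integration with a large Monte Carlo draw of event times and uses a root-finder to pick the upper limit of a Uniform(0, theta) censoring distribution so that the expected censoring proportion P(C < T) hits a target; all parameter names and values are made up.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(2024)

def draw_event_times(n, beta, lam=0.1, shape=1.5):
    """Weibull PH model: h(t | x) = lam * shape * t**(shape - 1) * exp(beta * x)."""
    x = rng.binomial(1, 0.5, n)                   # one binary covariate
    u = rng.uniform(size=n)
    t = (-np.log(u) / (lam * np.exp(beta * x))) ** (1.0 / shape)
    return t, x

def expected_censoring(theta, t):
    """P(C < T) for C ~ Uniform(0, theta), averaged over a large draw of T."""
    return np.mean(np.minimum(t, theta)) / theta

# calibrate theta so that roughly 30% of observations are censored
t_big, _ = draw_event_times(200_000, beta=np.log(2.0))
theta = brentq(lambda th: expected_censoring(th, t_big) - 0.30, 1e-3, 1e3)

# simulate one right-censored data set at the calibrated censoring rate
t, x = draw_event_times(500, beta=np.log(2.0))
c = rng.uniform(0.0, theta, size=500)
obs_time, event = np.minimum(t, c), (t <= c).astype(int)
print(f"theta = {theta:.2f}, observed censoring = {1 - event.mean():.1%}")
```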
8.
Missing covariate data are common in observational studies of time to an event, especially when covariates are repeatedly measured over time. Failure to account for the missing data can lead to bias or loss of efficiency, especially when the data are non-ignorably missing. Previous work has focused on the case of fixed covariates rather than those that are repeatedly measured over the follow-up period; hence, here we present a selection model that allows for proportional hazards regression with time-varying covariates when some covariates may be non-ignorably missing. We develop a fully Bayesian model and obtain posterior estimates of the parameters via the Gibbs sampler in WinBUGS. We illustrate our model with an analysis of post-diagnosis weight change and survival after breast cancer diagnosis in the Long Island Breast Cancer Study Project follow-up study. Our results indicate that post-diagnosis weight gain is associated with lower all-cause and breast cancer-specific survival among women diagnosed with new primary breast cancer. Our sensitivity analysis showed only slight differences between models with different assumptions on the missing data mechanism, yet the complete-case analysis yielded markedly different results.
9.
Simulation studies present an important statistical tool to investigate the performance, properties and adequacy of statistical models in pre-specified situations. One of the most important statistical models in medical research is the proportional hazards model of Cox. In this paper, techniques to generate survival times for simulation studies regarding Cox proportional hazards models are presented. A general formula describing the relation between the hazard and the corresponding survival time of the Cox model is derived, which is useful in simulation studies. It is shown how the exponential, the Weibull and the Gompertz distribution can be applied to generate appropriate survival times for simulation studies. Additionally, the general relation between hazard and survival time can be used to develop one's own distributions for special situations and to handle flexibly parameterized proportional hazards models. The use of distributions other than the exponential distribution is indispensable for investigating the characteristics of the Cox proportional hazards model, especially in non-standard situations where the partial likelihood depends on the baseline hazard. A simulation study investigating the effect of measurement errors in the German Uranium Miners Cohort Study is considered to illustrate the proposed simulation techniques and to emphasize the importance of careful modelling of the baseline hazard in Cox models.
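The inversion at the heart of this approach can be stated compactly: with U ~ Uniform(0,1) and cumulative baseline hazard H0, T = H0^(-1)( -log(U) * exp(-beta'x) ) follows the Cox model. A minimal sketch for exponential, Weibull, and Gompertz baselines is given below; the parameter names and values are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def cox_survival_times(x, beta, baseline="weibull", lam=0.01, nu=1.5, alpha=0.1):
    """Generate T from a Cox model by inverting the cumulative baseline hazard."""
    lin_pred = x @ beta
    z = -np.log(rng.uniform(size=x.shape[0])) * np.exp(-lin_pred)
    if baseline == "exponential":          # h0(t) = lam
        return z / lam
    if baseline == "weibull":              # h0(t) = lam * nu * t**(nu - 1)
        return (z / lam) ** (1.0 / nu)
    if baseline == "gompertz":             # h0(t) = lam * exp(alpha * t)
        return np.log(1.0 + alpha * z / lam) / alpha
    raise ValueError(baseline)

x = rng.normal(size=(1000, 2))
t = cox_survival_times(x, beta=np.array([0.5, -0.3]), baseline="gompertz")
```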
10.
In most randomized clinical trials (RCTs) with a right-censored time-to-event outcome, the hazard ratio is taken as an appropriate measure of the effectiveness of a new treatment compared with a standard-of-care or control treatment. However, it has long been known that the hazard ratio is valid only under the proportional hazards (PH) assumption. This assumption is formally checked only rarely. Some recent trials, particularly the IPASS trial in lung cancer and the ICON7 trial in ovarian cancer, have alerted researchers to the possibility of gross non-PH, raising the critical question of how such data should be analyzed. Here, we propose the use of the restricted mean survival time at a prespecified, fixed time point as a useful general measure to report the difference between two survival curves. We describe different methods of estimating it and we illustrate its application to three RCTs in cancer. The examples are graded from a trial in kidney cancer in which there is no evidence of non-PH, to IPASS, where the opposite is clearly the case. We propose a simple, general scheme for the analysis of data from such RCTs. Key elements of our approach are Andersen's method of 'pseudo-observations,' which is based on the Kaplan-Meier estimate of the survival function, and Royston and Parmar's class of flexible parametric survival models, which may be used for analyzing data in the presence or in the absence of PH of the treatment effect.
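A bare-bones sketch of the restricted mean survival time (the area under the Kaplan-Meier curve up to a prespecified t*) and of Andersen-type leave-one-out pseudo-observations for it is given below; it illustrates the general idea, not the authors' implementation.

```python
import numpy as np

def rmst(time, event, t_star):
    """Area under the Kaplan-Meier step function on [0, t_star]."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    steps, surv, s = [0.0], [1.0], 1.0
    for t in np.unique(time[(event == 1) & (time <= t_star)]):
        n_risk = np.sum(time >= t)
        d = np.sum((time == t) & (event == 1))
        s *= 1.0 - d / n_risk
        steps.append(t)
        surv.append(s)
    steps.append(t_star)
    return float(np.sum(np.array(surv) * np.diff(steps)))

def rmst_pseudo_observations(time, event, t_star):
    """Leave-one-out pseudo-observations: n*theta_all - (n-1)*theta_(-i)."""
    n = len(time)
    theta_all = rmst(time, event, t_star)
    return np.array([n * theta_all
                     - (n - 1) * rmst(np.delete(time, i), np.delete(event, i), t_star)
                     for i in range(n)])
```

The difference in RMST between two arms can then be reported directly, or the pseudo-observations can be regressed on treatment and covariates with a generalized linear model.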
11.
Analysis of recurrent gap time data using the weighted risk-set method and the modified within-cluster resampling method
The gap times between recurrent events are often of primary interest in medical and epidemiological studies. The observed gap times cannot be naively treated as clustered survival data in analysis because of the sequential structure of recurrent events. This paper introduces two important building blocks, the averaged counting process and the averaged at-risk process, for the development of the weighted risk-set (WRS) estimation methods. We demonstrate that with the use of these two empirical processes, existing risk-set based methods for univariate survival time data can be easily extended to analyze recurrent gap times. Additionally, we propose a modified within-cluster resampling (MWCR) method that can be easily implemented in standard software. We show that the MWCR estimators are asymptotically equivalent to the WRS estimators. An analysis of hospitalization data from the Danish Psychiatric Central Register is presented to illustrate the proposed methods.
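To make the resampling idea concrete, the generic within-cluster resampling recipe (not the paper's modified procedure, and with illustrative column names) looks roughly like this: repeatedly sample one gap time per subject, fit an ordinary Cox model to each resampled data set, and average the resulting coefficients.

```python
import numpy as np
from lifelines import CoxPHFitter

def wcr_cox(df, n_resamples=200, seed=0):
    """df columns assumed: 'id', 'gap_time', 'event', 'x' (one covariate)."""
    rng = np.random.default_rng(seed)
    betas = []
    for _ in range(n_resamples):
        one_per_id = df.groupby("id").sample(n=1, random_state=int(rng.integers(1 << 31)))
        cph = CoxPHFitter().fit(one_per_id[["gap_time", "event", "x"]],
                                duration_col="gap_time", event_col="event")
        betas.append(float(cph.params_["x"]))
    return float(np.mean(betas))   # resampling-based variance estimation omitted
```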
12.
Magaret AS. Statistics in Medicine 2008, 27(26): 5456-5470
Standard proportional hazards methods are inappropriate for mismeasured outcomes. Previous work has shown that outcome mismeasurement can bias estimation of hazard ratios for covariates. We previously developed an adjusted proportional hazards method that can produce accurate hazard ratio estimates when outcome measurement is either non-sensitive or non-specific. That method requires that the mismeasurement rates (the sensitivity and specificity of the diagnostic test) are known. Here, we develop an approach to handle unknown mismeasurement rates. We consider the case where the true failure status is known for a subset of subjects (the validation set) until the time of observed failure or censoring. Five methods of handling these mismeasured outcomes are described and compared. The first method uses only subjects on whom complete data are available (the validation subset), whereas the second method uses only mismeasured outcomes (the naive method). The three other methods include available data from both validated and non-validated subjects. Through simulation, we show that inclusion of the non-validated subjects can improve efficiency relative to use of the complete-case data only and that inclusion of some true outcomes (the validation subset) can reduce bias relative to use of mismeasured outcomes only. We also compare the performance of the proposed validation methods using an example data set.
13.
Motivated by a clinical trial of zinc nasal spray for the treatment of the common cold, we consider the problem of comparing two crossing hazard rates. A comprehensive review of the existing methods for dealing with the crossing hazard rates problem is provided. A new method, based on modelling the crossing hazard rates, is proposed and implemented under the Cox proportional hazards framework. The main advantage of the proposed method is its use of the Box-Cox transformation, which covers a wide range of hazard crossing patterns. Simulation studies are conducted to compare the performance of the existing methods and the proposed one, and show that the proposed method outperforms some of its peers in certain cases. Applications to kidney dialysis patient data and the zinc nasal spray clinical trial data are discussed.
14.
Long-term survival with non-proportional hazards: results from the Dutch Gastric Cancer Trial
Putter H, Sasako M, Hartgrink HH, van de Velde CJ, van Houwelingen JC. Statistics in Medicine 2005, 24(18): 2807-2821
Randomized clinical trials with long-term survival data comparing two treatments often show Kaplan-Meier plots with crossing survival curves. Such behaviour implies a violation of the proportional hazards assumption for treatment. The Cox proportional hazards regression model with treatment as a fixed effect can therefore not be used to assess the influence of treatment on survival. In this paper we analyse long-term follow-up data from the Dutch Gastric Cancer Trial, a randomized study comparing limited (D1) lymph node dissection with extended (D2) lymph node dissection. We illustrate a number of ways of dealing with survival data that do not obey the proportional hazards assumption, each of which can be easily implemented in standard statistical packages.
15.
The analysis of gap times in recurrent events requires an adjustment to standard marginal models. One can perform this adjustment with a modified within-cluster resampling technique; however, this method is computationally intensive. In this paper, we describe a simple adjustment to the standard Cox proportional hazards model analysis that mimics the intent of within-cluster resampling and results in similar parameter estimates. This method essentially weights the partial likelihood contributions by the inverse of the number of gap times observed within the individual while assuming a working independence correlation matrix. We provide an example involving recurrent mammary tumours in female rats to illustrate the methods considered in this paper.
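In software, the weighting described above can be mimicked with standard Cox routines that accept case weights and clustered (robust) standard errors. The sketch below uses lifelines with made-up data and column names; it illustrates the weighting scheme, not the authors' code.

```python
import pandas as pd
from lifelines import CoxPHFitter

# toy recurrent gap-time data: one row per gap time within subject 'id'
df = pd.DataFrame({
    "id":       [1, 1, 1, 2, 2, 3, 3, 4],
    "gap_time": [2.0, 3.5, 1.2, 4.1, 0.8, 5.0, 2.2, 3.3],
    "event":    [1, 1, 0, 1, 0, 1, 1, 0],
    "x":        [0.3, 0.3, 0.3, -1.1, -1.1, 0.7, 0.7, 1.4],
})
# weight each gap time by 1 / (number of gap times observed for that subject)
df["w"] = 1.0 / df.groupby("id")["id"].transform("size")

cph = CoxPHFitter()
cph.fit(df, duration_col="gap_time", event_col="event",
        weights_col="w", cluster_col="id")   # clustered, working-independence variance
cph.print_summary()
```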
16.
For survival data regression, the Cox proportional hazards model is the most popular model, but in certain situations the Cox model is inappropriate. Various authors have proposed the proportional odds model as an alternative. Yang and Prentice recently presented a number of easily implemented estimators for the proportional odds model. Here we show how to extend the methods of Yang and Prentice to a family of survival models that includes the proportional hazards model and proportional odds model as special cases. The model is defined in terms of a Box-Cox transformation of the survival function, indexed by a transformation parameter rho. This model has been discussed by other authors, and is related to the Harrington-Fleming G(rho) family of tests and to frailty models. We discuss inference for the case where rho is known and the case where rho must be estimated. We present a simulation study of a pseudo-likelihood estimator and a martingale residual estimator. We find that the methods perform reasonably. We apply our model to a real data set.
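One common way to write such a family (the paper's exact indexing may differ) is via the transformation G_rho(s) = (s^(-rho) - 1)/rho for rho > 0, with the limiting case G_0(s) = -log(s), and the model G_rho(S(t|Z)) = exp(beta'Z) * G_rho(S_0(t)). Setting rho = 0 gives -log S, the cumulative hazard, so the model reduces to proportional hazards; setting rho = 1 gives (1 - S)/S, the odds of failure by time t, so it reduces to the proportional odds model.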
17.
Peter C. Austin. Statistics in Medicine 2012, 31(29): 3946-3958
Simulations and Monte Carlo methods serve an important role in modern statistical research. They allow for an examination of the performance of statistical procedures in settings in which analytic and mathematical derivations may not be feasible. A key element in any statistical simulation is the existence of an appropriate data-generating process: one must be able to simulate data from a specified statistical model. We describe data-generating processes for the Cox proportional hazards model with time-varying covariates when event times follow an exponential, Weibull, or Gompertz distribution. We consider three types of time-varying covariates: first, a dichotomous time-varying covariate that can change at most once from untreated to treated (e.g., organ transplant); second, a continuous time-varying covariate such as cumulative exposure at a constant dose to radiation or to a pharmaceutical agent used for a chronic condition; third, a dichotomous time-varying covariate with a subject being able to move repeatedly between treatment states (e.g., current compliance or use of a medication). In each setting, we derive closed-form expressions that allow one to simulate survival times so that survival times are related to a vector of fixed or time-invariant covariates and to a single time-varying covariate. We illustrate the utility of our closed-form expressions by using Monte Carlo simulations to estimate the statistical power to detect as statistically significant the effect of different types of binary time-varying covariates, and we compare this with the power to detect a binary time-invariant covariate.
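As an illustration of the first of the three settings (a binary covariate that switches once from 0 to 1, e.g. at transplant), the following sketch inverts the cumulative hazard of an exponential baseline in closed form; the parameter values are made up and the other two settings are not shown.

```python
import numpy as np

rng = np.random.default_rng(7)

def sim_switch_once(n, lam, beta, t0):
    """Binary covariate switching from 0 to 1 at (possibly subject-specific) time t0,
    exponential baseline hazard lam, log hazard ratio beta."""
    z = -np.log(rng.uniform(size=n))       # target cumulative hazard
    # H(t) = lam * t                               for t <  t0
    # H(t) = lam * t0 + lam * exp(beta) * (t - t0) for t >= t0  -> invert piecewise
    before = z < lam * t0
    return np.where(before, z / lam, t0 + (z - lam * t0) / (lam * np.exp(beta)))

t0 = rng.uniform(0.5, 3.0, size=1000)      # e.g. waiting time to treatment
event_times = sim_switch_once(1000, lam=0.2, beta=-0.7, t0=t0)
```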
18.
We compare parameter estimates from the proportional hazards model, the cumulative logistic model and a new modified logistic model (referred to as the person-time logistic model), with the use of simulated data sets and with the following quantities varied: disease incidence, risk factor strength, length of follow-up, the proportion censored, non-proportional hazards, and sample size. Parameter estimates from the person-time logistic regression model closely approximated those from the Cox model when the survival time distribution was close to exponential, but could differ substantially in other situations. We found that parameter estimates from the cumulative logistic model were similar to those from the Cox and person-time logistic models when the disease was rare, the risk factor was moderate, and censoring rates were similar across the covariates. We also compare the models in an analysis of a real data set that involves the relationship of age, race, sex, blood pressure, and smoking to subsequent mortality. In this example, the length of follow-up among survivors varied from 5 to 14 years, and the Cox and person-time logistic approaches gave nearly identical results. The cumulative logistic results had somewhat larger p-values but were substantively similar for all but one coefficient (the age-race interaction). The latter difference reflects differential censoring rates by age, race and sex.
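A schematic of the general person-time logistic idea (yearly intervals, a single covariate, and column names are assumptions for illustration; the paper's exact specification may differ): each subject contributes one record per year at risk, with a binary indicator of failure in that year, and a logistic model with year indicators plus covariates is fit to the expanded records.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)

def person_time_expand(df):
    """Expand rows (time in years, event 0/1, covariate 'age') into person-year records."""
    rows = []
    for _, r in df.iterrows():
        n_years = int(np.ceil(r["time"]))
        for year in range(1, n_years + 1):
            rows.append({"year": year,
                         "fail": int(r["event"] == 1 and year == n_years),
                         "age": r["age"]})
    return pd.DataFrame(rows)

# toy cohort: follow-up time, failure indicator, one covariate
n = 300
df = pd.DataFrame({"time": rng.exponential(4.0, n).clip(0.2, 10.0),
                   "event": rng.binomial(1, 0.6, n),
                   "age": rng.normal(60.0, 8.0, n)})

pt = person_time_expand(df)
X = pd.get_dummies(pt[["year", "age"]], columns=["year"], drop_first=True).astype(float)
fit = sm.GLM(pt["fail"], sm.add_constant(X), family=sm.families.Binomial()).fit()
print(fit.summary())
```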
19.
Giorgi R, Abrahamowicz M, Quantin C, Bolard P, Esteve J, Gouvernet J, Faivre J. Statistics in Medicine 2003, 22(17): 2767-2784
Relative survival, a method for assessing prognostic factors for disease-specific mortality in unselected populations, is frequently used in population-based studies. However, most relative survival models assume that the effects of covariates on disease-specific mortality conform with the proportional hazards hypothesis, which may not hold in some long-term studies. To accommodate variation over time of a predictor's effect on disease-specific mortality, we developed a new relative survival regression model that uses B-splines to model the hazard ratio as a flexible function of time, without having to specify a particular functional form. Our method also allows testing of the hypotheses of proportional hazards and of no association with the disease-specific hazard. Accuracy of estimation and inference were evaluated in simulations. The method is illustrated by an analysis of a population-based study of colon cancer.
20.
Studies comparing transplantation with standard therapy present a number of statistical challenges: they are usually not randomized, often retrospective (based on registry data), and the treatment assignment is time dependent (waiting time to transplant). Matching on known prognostic factors and on waiting time to transplant can be used to select appropriate samples for the analysis. When a variable number of patients treated with conventional therapy matches each transplanted patient, the standard estimating and testing procedures need to be modified in order to account for the fact that matched data are highly stratified, with strata containing a few, possibly censored, observations. A weighted version of the Kaplan-Meier estimator, which accounts for a variable proportion of matching, is proposed and its statistical properties are studied. The problem of comparing the survival experience in the two treatment groups is also considered. Two tests, based on the distance between the survival estimates calculated at a prespecified time point, are examined and their behaviour is evaluated through simulations. The procedures proposed here are applied to data collected from an Italian study whose aim was the evaluation of bone marrow transplantation, as compared to intensive chemotherapy, in curing paediatric acute lymphoblastic leukaemia.