Similar Articles (20 results)
1.
In medical studies, we commonly encounter multiple events data such as recurrent infection or attack times in patients suffering from a given disease. A number of statistical procedures for the analysis of such data use the Cox proportional hazards model, modified to include a random effect term called frailty, which summarizes the dependence of recurrent times within a subject. These unobserved random frailty effects capture subject effects that are not explained by the known covariates. They are typically modelled as constant over time and are assumed to be independently and identically distributed across subjects. However, in some situations, the subject-specific random frailty may change over time in the same manner as time-dependent covariate effects. This paper presents a time-dependent frailty model for recurrent failure time data in the Bayesian context and estimates it using a Markov chain Monte Carlo method. Our approach is illustrated by a data set relating to patients with chronic granulomatous disease and is compared to the constant frailty model using the deviance information criterion.
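To make the conditional frailty formulation concrete, the sketch below simulates recurrent event times under the simpler constant-frailty comparator the paper benchmarks against: each subject's gamma frailty (mean 1, variance `frailty_var`) multiplies a constant baseline intensity, and gap times are exponential. All parameter values here are illustrative assumptions, not taken from the paper.

```python
import math
import random

def simulate_recurrent_times(n_subjects, followup, base_rate, beta, x,
                             frailty_var, rng):
    """Simulate recurrent event times under a constant gamma-frailty model.

    Subject i draws a frailty z_i ~ Gamma(1/theta, scale=theta), so E[z]=1 and
    Var[z]=theta=frailty_var. Conditional on z_i, events follow a Poisson
    process with intensity z_i * base_rate * exp(beta * x_i).
    """
    shape = 1.0 / frailty_var
    data = []
    for i in range(n_subjects):
        z = rng.gammavariate(shape, frailty_var)        # shared within-subject frailty
        rate = z * base_rate * math.exp(beta * x[i])
        t, times = 0.0, []
        while True:
            t += rng.expovariate(rate)                  # exponential gap time
            if t > followup:
                break
            times.append(t)
        data.append(times)
    return data

rng = random.Random(1)
x = [rng.randint(0, 1) for _ in range(2000)]            # hypothetical binary covariate
events = simulate_recurrent_times(2000, 5.0, 0.5, 0.7, x, 0.8, rng)
mean_count = sum(len(e) for e in events) / len(events)
```

A time-dependent frailty model, as the paper proposes, would replace the single draw `z` with a subject-specific process evolving over follow-up.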

2.
Often the effect of at least one of the prognostic factors in a Cox regression model changes over time, which violates the proportional hazards assumption of this model. As a consequence, the average hazard ratio for such a prognostic factor is under- or overestimated. While there are several methods to appropriately cope with non-proportional hazards, in particular by including parameters for time-dependent effects, weighted estimation in Cox regression is a parsimonious alternative without additional parameters. The methodology, which extends the weighted k-sample logrank tests of the Tarone-Ware scheme to models with multiple, binary and continuous covariates, was introduced in the 1990s and is further developed and re-evaluated in this contribution. The notion of an average hazard ratio is defined and its connection to the effect size measure P(X&lt;Y) is emphasized. The suggested approach accomplishes estimation of intuitively interpretable average hazard ratios and provides tools for inference. A Monte Carlo study confirms the satisfactory performance. Advantages of the approach are exemplified by comparing standard and weighted analyses of an international lung cancer study. SAS and R programs facilitate application. Copyright © 2009 John Wiley &amp; Sons, Ltd.
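The effect-size measure P(X&lt;Y) mentioned above has a simple closed form when both groups have exponential (i.e., proportional constant-hazard) survival: P(X&lt;Y) = λx/(λx+λy). A minimal Monte Carlo check of that identity, with hypothetical rates:

```python
import random

def p_x_before_y(rate_x, rate_y, n, rng):
    """Monte Carlo estimate of P(X < Y) for independent exponential
    survival times X ~ Exp(rate_x), Y ~ Exp(rate_y)."""
    hits = sum(rng.expovariate(rate_x) < rng.expovariate(rate_y)
               for _ in range(n))
    return hits / n

rng = random.Random(0)
rate_x, rate_y = 2.0, 1.0                    # hazard ratio 2 (assumed values)
closed_form = rate_x / (rate_x + rate_y)     # = 2/3
estimate = p_x_before_y(rate_x, rate_y, 100_000, rng)
```

Under non-proportional hazards the connection is less direct, which is precisely what motivates the paper's average hazard ratio.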

3.
Multivariate survival data are frequently encountered in biomedical applications in the form of clustered failures (or recurrent events data). A popular way of analyzing such data is by using shared frailty models, which assume that the proportional hazards assumption holds conditional on an unobserved cluster-specific random effect. Such models are often incorporated in more complicated joint models in survival analysis. If the random effect distribution has finite expectation, then the conditional proportional hazards assumption does not carry over to the marginal models. It has been shown that, for univariate data, this makes it impossible to distinguish between the presence of unobserved heterogeneity (e.g., due to missing covariates) and marginal nonproportional hazards. We show that time-dependent covariate effects may falsely appear as evidence in favor of a frailty model also in the case of clustered failures or recurrent events data, when the cluster size or number of recurrent events is small. When true unobserved heterogeneity is present, the presence of nonproportional hazards leads to overestimating the frailty effect. We show that this phenomenon is somewhat mitigated as the cluster size grows. We carry out a simulation study to assess the behavior of test statistics and estimators for frailty models in such contexts. The gamma, inverse Gaussian, and positive stable shared frailty models are contrasted using a novel software implementation for estimating semiparametric shared frailty models. Two main questions are addressed in the contexts of clustered failures and recurrent events: whether covariates with a time-dependent effect may appear as indication of unobserved heterogeneity and whether the additional presence of unobserved heterogeneity can be detected in this case. Finally, the practical implications are illustrated in a real-world data analysis example.
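The key fact that conditional proportional hazards does not carry over to the marginal model can be computed directly for the gamma frailty: with conditional hazard λ·exp(βx) and frailty variance θ, the marginal hazard is λ·exp(βx)/(1+θΛ(t)exp(βx)), so the marginal hazard ratio decays from exp(β) toward 1. A sketch with assumed parameter values:

```python
import math

def marginal_hazard_ratio(t, base_rate, beta, theta):
    """Population-averaged hazard ratio at time t when the conditional model
    has constant hazard base_rate*exp(beta*x), x in {0,1}, with a gamma
    frailty of mean 1 and variance theta.

    Marginal hazard: h_m(t|x) = base_rate*e^{beta*x} / (1 + theta*Lambda(t)*e^{beta*x}).
    """
    cum = base_rate * t                       # Lambda(t) for a constant hazard
    h1 = base_rate * math.exp(beta) / (1 + theta * cum * math.exp(beta))
    h0 = base_rate / (1 + theta * cum)
    return h1 / h0

# marginal HR starts near exp(0.7) and shrinks toward 1 over time
hrs = [marginal_hazard_ratio(t, 0.5, 0.7, 1.0) for t in (0.01, 1, 5, 20)]
```

This decay is exactly the shape a genuinely time-dependent covariate effect can mimic, which is why the two are hard to distinguish with small clusters.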

4.
Relative survival provides a measure of the proportion of patients dying from the disease under study without requiring knowledge of the cause of death. We propose an overall strategy based on regression models to estimate the relative survival and model the effects of potential prognostic factors. The baseline hazard was modelled up to 10 years of follow-up using parametric continuous functions. Six models including cubic regression splines were considered, and the Akaike Information Criterion was used to select the final model. This approach yielded smooth and reliable estimates of the mortality hazard and allowed us to deal with sparse data, taking into account all the available information. Splines were also used to model simultaneously non-linear effects of continuous covariates and time-dependent hazard ratios. This led to a graphical representation of the hazard ratio that can be useful for clinical interpretation. Estimates of these models were obtained by likelihood maximization. We showed that these estimates could also be obtained using standard algorithms for Poisson regression.

5.
Liu Y, Craig BA. Statistics in Medicine 2006; 25(10): 1729-1740.
In survival analysis, use of the Cox proportional hazards model requires knowledge of all covariates under consideration at every failure time. Since failure times rarely coincide with observation times, time-dependent covariates (covariates that vary over time) need to be inferred from the observed values. In this paper, we introduce the last value auto-regressed (LVAR) estimation method and compare it to several other established estimation approaches via a simulation study. The comparison shows that under several time-dependent covariate processes this method results in a smaller mean square error when considering the time-dependent covariate effect.
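The simplest of the established comparators in this setting is last observation carried forward (LOCF), which fills in the covariate at each failure time with the most recent measurement. The sketch below shows only that baseline; the LVAR method itself adds an autoregressive adjustment whose exact form is not given in the abstract, so it is not reproduced here.

```python
import bisect

def locf(obs_times, obs_values, t):
    """Last observation carried forward: return the covariate value from the
    most recent measurement taken at or before time t."""
    i = bisect.bisect_right(obs_times, t) - 1
    if i < 0:
        raise ValueError("no observation at or before t")
    return obs_values[i]

# hypothetical measurement schedule for one subject
times = [0.0, 1.0, 2.5, 4.0]
values = [10.0, 12.0, 9.0, 11.0]
v = locf(times, values, 3.2)   # most recent measurement before t=3.2 is at t=2.5
```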

6.
In randomised controlled trials, the effect of treatment on those who comply with allocation to active treatment can be estimated by comparing their outcome to those in the comparison group who would have complied with active treatment had they been allocated to it. We compare three estimators of the causal effect of treatment on compliers when this is a parameter in a proportional hazards model and quantify the bias due to omitting baseline prognostic factors. Causal estimates are found in three ways: directly, by maximising a novel partial likelihood; based on a structural proportional hazards model; and based on a 'corrected dataset' derived after fitting a rank-preserving structural failure time model. Where necessary, we extend these methods to incorporate baseline covariates. Comparisons use simulated data and a real data example. Analysing the simulated data, we found that all three methods are accurate when an important covariate was included in the proportional hazards model (maximum bias 5.4%). However, failure to adjust for this prognostic factor meant that causal treatment effects were underestimated (maximum bias 11.4%), because estimators were based on a misspecified marginal proportional hazards model. Analysing the real data example, we found that adjusting causal estimators is important to correct for residual imbalances in prognostic factors present between trial arms after randomisation. Our results show that methods of estimating causal treatment effects for time-to-event outcomes should be extended to incorporate covariates, thus providing an informative complement to the corresponding intention-to-treat analysis. Copyright © 2012 John Wiley &amp; Sons, Ltd.

7.
Relative survival, a method for assessing prognostic factors for disease-specific mortality in unselected populations, is frequently used in population-based studies. However, most relative survival models assume that the effects of covariates on disease-specific mortality conform with the proportional hazards hypothesis, which may not hold in some long-term studies. To accommodate variation over time of a predictor's effect on disease-specific mortality, we developed a new relative survival regression model using B-splines to model the hazard ratio as a flexible function of time, without having to specify a particular functional form. Our method also allows for testing the hypotheses of hazards proportionality and no association on disease-specific hazard. Accuracy of estimation and inference were evaluated in simulations. The method is illustrated by an analysis of a population-based study of colon cancer.
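The building block of such a model is the B-spline basis itself: the time-varying log hazard ratio is written as a linear combination of basis functions, β(t) = Σⱼ γⱼBⱼ(t). The Cox-de Boor recursion below evaluates that basis; the knot vector and evaluation point are illustrative assumptions, not the paper's choices.

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: value at t of the i-th B-spline of order k
    (degree k-1) over the non-decreasing knot vector `knots`."""
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    denom = knots[i + k - 1] - knots[i]
    if denom > 0:
        left = (t - knots[i]) / denom * bspline_basis(i, k - 1, t, knots)
    right = 0.0
    denom = knots[i + k] - knots[i + 1]
    if denom > 0:
        right = (knots[i + k] - t) / denom * bspline_basis(i + 1, k - 1, t, knots)
    return left + right

# cubic (order-4) basis on [0, 10] with clamped end knots
knots = [0, 0, 0, 0, 2.5, 5, 7.5, 10, 10, 10, 10]
n_basis = len(knots) - 4
vals = [bspline_basis(i, 4, 6.0, knots) for i in range(n_basis)]
total = sum(vals)    # the basis is a partition of unity on [0, 10)
```

Because the basis functions are non-negative and sum to one, the fitted β(t) inherits the smoothness of the basis without a pre-specified functional form.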

8.
9.
The Cox proportional hazards model (CPH) is routinely used in clinical trials, but it may encounter serious difficulties with departures from the proportional hazards assumption, even when the departures are not readily detected by commonly used diagnostics. We consider the Gamel-Boag (GB) model, a log-normal model for accelerated failure in which a proportion of subjects are long-term survivors. When the CPH model is fit to simulated data generated from this model, the results can range from gross overstatement of the effect size, to a situation where increasing follow-up may cause a decline in power. We implement a fitting algorithm for the GB model that permits separate covariate effects on the rapidity of early failure and the fraction of long-term survivors. When effects are detected by both the CPH and GB methods, the attribution of the effect to long-term or short-term survival may change the interpretation of the data. We believe these examples motivate more frequent use of parametric survival models in conjunction with the semi-parametric Cox proportional hazards model.
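The mixture-cure structure underlying such a model is easy to state: survival is S(t) = π + (1-π)·S₀(t), where π is the cure fraction and S₀ is a log-normal survival function, so S(t) plateaus at π instead of dropping to zero. A minimal sketch with fixed parameters (the GB model additionally lets covariates act on both π and the log-normal location, which is omitted here):

```python
import math

def surv_lognormal_cure(t, cure_frac, mu, sigma):
    """Survival function of a log-normal mixture cure model:
    S(t) = pi + (1 - pi) * (1 - Phi((ln t - mu) / sigma)),
    where Phi is the standard normal CDF."""
    if t <= 0:
        return 1.0
    z = (math.log(t) - mu) / sigma
    phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF
    return cure_frac + (1.0 - cure_frac) * (1.0 - phi)

s_early = surv_lognormal_cure(0.1, 0.3, 0.0, 1.0)      # near 1 at early times
s_late = surv_lognormal_cure(1000.0, 0.3, 0.0, 1.0)    # plateaus at the cure fraction
```

The plateau is what breaks proportional hazards: the hazard of the mixture eventually falls to zero, so a fitted constant hazard ratio averages over early and late behaviour.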

10.
We consider Cox proportional hazards regression when the covariate vector includes error-prone discrete covariates along with error-free covariates, which may be discrete or continuous. The misclassification in the discrete error-prone covariates is allowed to be of any specified form. Building on the work of Nakamura and his colleagues, we present a corrected score method for this setting. The method can handle all three major study designs (internal validation design, external validation design, and replicate measures design), both functional and structural error models, and time-dependent covariates satisfying a certain 'localized error' condition. We derive the asymptotic properties of the method and indicate how to adjust the covariance matrix of the regression coefficient estimates to account for estimation of the misclassification matrix. We present the results of a finite-sample simulation study under Weibull survival with a single binary covariate having known misclassification rates. The performance of the method described here was similar to that of related methods we have examined in previous work. Specifically, our new estimator performed as well as or, in a few cases, better than the full Weibull maximum likelihood estimator. We also present simulation results for our method for the case where the misclassification probabilities are estimated from an external replicate measures study. Our method generally performed well in these simulations. The new estimator has a broader range of applicability than many other estimators proposed in the literature, including those described in our own earlier work, in that it can handle time-dependent covariates with an arbitrary misclassification structure. We illustrate the method on data from a study of the relationship between dietary calcium intake and distal colon cancer.
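The corrected score operates on the partial-likelihood score itself, but the core idea of inverting a known misclassification matrix can be shown in its simplest one-dimensional form: recovering the true prevalence of a binary covariate from its observed, misclassified version. This is only an illustration of the matrix-inversion idea, not the paper's estimator.

```python
def correct_prevalence(p_obs, sensitivity, specificity):
    """Misclassification-matrix correction for a binary covariate:
    p_obs = p*sens + (1-p)*(1-spec), so inverting gives
    p = (p_obs - (1-spec)) / (sens + spec - 1)."""
    return (p_obs - (1.0 - specificity)) / (sensitivity + specificity - 1.0)

# assumed misclassification rates and true prevalence
sens, spec, p_true = 0.9, 0.8, 0.4
p_obs = p_true * sens + (1 - p_true) * (1 - spec)   # forward misclassification
p_back = correct_prevalence(p_obs, sens, spec)      # recovers p_true exactly
```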

11.
The Cox proportional hazards model with time-dependent covariates (TDC) is now a part of the standard statistical analysis toolbox in medical research. As new methods involving more complex modeling of time-dependent variables are developed, simulations could often be used to systematically assess the performance of these models. Yet, generating event times conditional on TDC requires well-designed and efficient algorithms. We compare two classes of such algorithms: permutational algorithms (PAs) and algorithms based on a binomial model. We also propose a modification of the PA to incorporate a rejection sampler. We performed a simulation study to assess the accuracy, stability, and speed of these algorithms in several scenarios. Both classes of algorithms generated data sets that, once analyzed, provided virtually unbiased estimates with comparable variances. In terms of computational efficiency, the PA with the rejection sampler reduced the time necessary to generate data by more than 50 per cent relative to alternative methods. The PAs also allowed more flexibility in the specification of the marginal distributions of event times and required less calibration.
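For the simplest time-dependent covariate, a one-off switch at a known time, event times can be generated exactly by inverting the piecewise-linear cumulative hazard; this is a hedged sketch of the inversion idea such generators build on, not the permutational algorithm itself, and all parameter values are assumptions.

```python
import math
import random

def event_time_td(base_rate, beta, switch_time, u):
    """Invert the cumulative hazard when a binary covariate switches from 0
    to 1 at switch_time: hazard is base_rate before the switch and
    base_rate*exp(beta) afterwards."""
    target = -math.log(1.0 - u)            # unit-exponential draw from uniform u
    cum_at_switch = base_rate * switch_time
    if target <= cum_at_switch:
        return target / base_rate          # event occurs before the switch
    rate_after = base_rate * math.exp(beta)
    return switch_time + (target - cum_at_switch) / rate_after

rng = random.Random(7)
times = [event_time_td(0.2, 1.0, 3.0, rng.random()) for _ in range(50_000)]
frac_before = sum(t <= 3.0 for t in times) / len(times)
```

Before the switch the generated times behave exactly like draws from an exponential with the baseline rate, which gives a direct check on correctness.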

12.
Joint models are frequently used in survival analysis to assess the relationship between time-to-event data and time-dependent covariates, which are measured longitudinally but often with errors. Routinely, a linear mixed-effects model is used to describe the longitudinal data process, while the survival times are assumed to follow the proportional hazards model. However, in some practical situations, individual covariate profiles may contain changepoints. In this article, we assume a two-phase polynomial random-effects model with a subject-specific changepoint for the longitudinal data process and the proportional hazards model for the survival times. Our main interest is in the estimation of the parameter in the hazards model. We incorporate a smooth transition function into the changepoint model for the longitudinal data and develop the corrected score and conditional score estimators, which do not require any assumption regarding the underlying distribution of the random effects or that of the changepoints. The estimators are shown to be asymptotically equivalent and their finite-sample performance is examined via simulations. The methods are applied to AIDS clinical trial data.
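One common way to smooth a two-phase (broken-stick) trajectory is to replace the hinge term (t-τ)₊ with (t-τ)·sigmoid((t-τ)/ε), which tends to the sharp changepoint as ε → 0. The abstract does not specify the paper's transition function, so this sigmoid form and all coefficients are illustrative assumptions.

```python
import math

def smooth_broken_stick(t, b0, b1, b2, tau, eps):
    """Two-phase linear trajectory with a smooth transition at changepoint
    tau: the hinge (t - tau)_+ is replaced by (t - tau)*sigmoid((t - tau)/eps)."""
    d = t - tau
    sig = 1.0 / (1.0 + math.exp(-d / eps))
    return b0 + b1 * t + b2 * d * sig

# far from the changepoint the smooth and sharp versions agree
sharp = lambda t: 1.0 + 0.5 * t + (-0.8) * max(0.0, t - 4.0)
vals = [(smooth_broken_stick(t, 1.0, 0.5, -0.8, 4.0, 0.05), sharp(t))
        for t in (1.0, 3.0, 5.0, 8.0)]
```

Unlike the sharp hinge, the smoothed version is differentiable in τ, which is what makes score-based estimation tractable.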

13.
For testing the efficacy of a treatment in a clinical trial with survival data, the Cox proportional hazards (PH) model is the well-accepted, conventional tool. When using this model, one typically proceeds by confirming that the required PH assumption holds true. If the PH assumption fails to hold, there are many options available, proposed as alternatives to the Cox PH model. An important question which arises is whether the potential bias introduced by this sequential model fitting procedure merits concern and, if so, what are effective mechanisms for correction. We investigate by means of a simulation study and draw attention to the considerable drawbacks, with regard to power, of a simple resampling technique, the permutation adjustment, a natural recourse for addressing such challenges. We also consider a recently proposed two-stage testing strategy (2008) for ameliorating these effects. Copyright © 2013 John Wiley &amp; Sons, Ltd.

14.
The shared frailty model is an extension of the Cox model to correlated failure times and, essentially, a random effects model for failure time outcomes. In this model, the latent frailty shared by individual members in a cluster acts multiplicatively as a factor on the hazard function and is typically modelled parametrically. One commonly used distribution is the gamma, where the shape and scale parameters are set equal to allow unique identification of the baseline hazard function. It is popular because it is a conjugate prior, so the posterior distribution is again gamma. In addition, the parameter can be interpreted as a time-independent cross-ratio function, a natural extension of the odds ratio to failure time outcomes. In this paper, we study the effect of frailty distribution mis-specification on the marginal regression estimates and hazard functions under an assumed gamma distribution, with an application to family studies. The simulation results show that the biases are generally 10% or lower, even when the true frailty distribution deviates substantially from the assumed gamma distribution. This suggests that the gamma frailty model can be a practical choice in real data analyses if the regression parameters and marginal hazard function are of primary interest and individual cluster members are exchangeable with respect to their dependencies.

15.
Multivariate interval-censored failure time data arise commonly in many studies of epidemiology and biomedicine. Analysis of this type of data is more challenging than that of right-censored data. We propose a simple multiple imputation strategy to recover the order of occurrences based on the interval-censored event times, using a conditional predictive distribution function derived from a parametric gamma random effects model. By imputing the interval-censored failure times, the estimation of the regression and dependence parameters in the context of a gamma frailty proportional hazards model using the well-developed EM algorithm is made possible. A robust estimator for the covariance matrix is suggested to adjust for possible misspecification of the parametric baseline hazard function. The finite sample properties of the proposed method are investigated via simulation. The performance of the proposed method is highly satisfactory, whereas the computation burden is minimal. The proposed method is also applied to the diabetic retinopathy study (DRS) data for illustration purposes, and the estimates are compared with those based on other existing methods for bivariate grouped survival data. Copyright © 2010 John Wiley &amp; Sons, Ltd.

16.
Meta-analysis of time-to-event outcomes using the hazard ratio as a treatment effect measure has an underlying assumption that hazards are proportional. The between-arm difference in the restricted mean survival time is a measure that avoids this assumption and allows the treatment effect to vary with time. We describe and evaluate meta-analysis based on the restricted mean survival time for dealing with non-proportional hazards and present a diagnostic method for the overall proportional hazards assumption. The methods are illustrated with the application to two individual participant meta-analyses in cancer. The examples were chosen because they differ in disease severity and the patterns of follow-up, in order to understand the potential impacts on the hazards and the overall effect estimates. We further investigate the estimation methods for restricted mean survival time by a simulation study. Copyright © 2015 John Wiley &amp; Sons, Ltd.
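The restricted mean survival time up to a horizon τ is simply the area under the survival curve on [0, τ], and the between-arm treatment effect is the difference of those areas. A minimal sketch under assumed exponential survival in each arm, checking a numerical integral against the closed form:

```python
import math

def rmst_exponential(rate, tau):
    """Closed-form restricted mean survival time up to tau for an
    exponential survival curve S(t) = exp(-rate*t)."""
    return (1.0 - math.exp(-rate * tau)) / rate

def rmst_numeric(surv, tau, n=100_000):
    """Trapezoidal integration of a survival curve on [0, tau]."""
    h = tau / n
    total = 0.5 * (surv(0.0) + surv(tau))
    total += sum(surv(i * h) for i in range(1, n))
    return total * h

tau = 5.0   # assumed restriction horizon
diff_rmst = rmst_numeric(lambda t: math.exp(-0.3 * t), tau) - \
            rmst_numeric(lambda t: math.exp(-0.5 * t), tau)
```

Unlike a hazard ratio, `diff_rmst` is interpretable as survival time gained by the lower-hazard arm over the first τ time units, whether or not hazards are proportional.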

17.
Time-to-event regression is a frequent tool in biomedical research. In clinical trials this time is usually measured from the beginning of the study. The same approach is often adopted in the analysis of longitudinal observational studies. However, in recent years literature has appeared making a case for using the date of birth as the starting point, thus taking age as the time-to-event. In this paper, we explore different types of age-scale models and compare them with time-on-study models in terms of the estimated regression coefficients they produce. We consider six proportional hazards regression models that differ in the choice of time scale and in the method of adjusting for the years before the study. By considering the estimating equations of these models as well as numerical simulations, we conclude that correct adjustment for the age at entry is crucial in reducing bias of the estimated coefficients. The unadjusted age-scale model is inferior to any of the five other models considered, regardless of their choice of time scale. Additionally, if adjustment for age at entry is made, our analyses show very little to suggest that there exists any practically meaningful difference in the estimated regression coefficients depending on the choice of time scale. These findings are supported by four practical examples from the Framingham Heart Study.

18.
Recurrent event data frequently occur in longitudinal studies when subjects experience more than one event during the observation period. Often, the occurrence of subsequent events is associated with the experience of previous events. Such dependence is commonly ignored in the application of standard recurrent event methodology. In this paper, we utilize a Cox-type regression model with a time-varying triggering effect depending on the number and timing of previous events to enhance both model fit and prediction. Parameter estimation and statistical inference are achieved via the partial likelihood. A statistical test procedure is provided to assess the existence of the triggering effects. We demonstrate our approach via comprehensive simulation studies and a real data analysis on chronic pseudomonas infections in young cystic fibrosis patients. Our model provides significantly better predictions than standard recurrent event models.

19.
We consider a competing risks setting, when evaluating the prognostic influence of an exposure on a specific cause of failure. Two main regression models are used in such analyses, the Cox cause-specific proportional hazards model and the subdistribution proportional hazards model. They are exemplified in a real data example focusing on relapse-free interval in acute leukaemia patients. We examine the properties of the estimator based on the latter model when the true model is the former. An explicit relationship between the subdistribution hazard ratio and the cause-specific hazard ratio is derived, assuming a flexible parametric distribution for latent failure times.
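The two hazards differ even in the simplest setting. With constant cause-specific hazards h1 and h2, the cumulative incidence of cause 1 is F1(t) = h1/(h1+h2)·(1-exp(-(h1+h2)t)), and the subdistribution hazard -d/dt log(1-F1(t)) starts at h1 but declines toward zero, even though the cause-specific hazard stays constant. A sketch of this constant-hazard special case (the paper derives the relationship under a more flexible parametric family):

```python
import math

def cif_constant_hazards(t, h1, h2):
    """Cumulative incidence of cause 1 under constant cause-specific
    hazards: F1(t) = h1/(h1+h2) * (1 - exp(-(h1+h2)*t))."""
    total = h1 + h2
    return h1 / total * (1.0 - math.exp(-total * t))

def subdist_hazard(t, h1, h2, dt=1e-6):
    """Numerical subdistribution hazard: -d/dt log(1 - F1(t))."""
    a = math.log(1.0 - cif_constant_hazards(t, h1, h2))
    b = math.log(1.0 - cif_constant_hazards(t + dt, h1, h2))
    return (a - b) / dt

early = subdist_hazard(0.001, 0.3, 0.2)   # close to the cause-specific hazard h1
late = subdist_hazard(5.0, 0.3, 0.2)      # already well below h1
```

This divergence over time is why a proportional subdistribution hazards model is generally misspecified when cause-specific hazards are proportional, the situation the paper quantifies.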

20.
In many biomedical and epidemiologic studies, estimating the hazard function is of interest. Breslow's estimator is commonly used for estimating the integrated baseline hazard, but this estimator requires the functional form of covariate effects to be correctly specified. It is generally difficult to identify the true functional form of covariate effects in the presence of time-dependent covariates. To provide a complementary method to the traditional proportional hazards model, we propose a tree-type method which enables simultaneously estimating both the baseline hazard function and the effects of time-dependent covariates. Our interest is focused on exploring the potential data structures rather than formal hypothesis testing. The proposed method approximates the baseline hazard and covariate effects with step functions. The jump points in time and in covariate space are searched via an algorithm based on the improvement of the full log-likelihood function. In contrast to most other estimating methods, the proposed method estimates the hazard function rather than the integrated hazard. The method is applied to model the risk of withdrawal in a clinical trial that evaluates an antidepressant treatment in preventing the development of clinical depression. Finally, the performance of the method is evaluated by several simulation studies.
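For fixed jump points, the maximum-likelihood step-function hazard is the classical occurrence/exposure rate: within each interval, the number of events divided by the person-time at risk. The sketch below shows that building block with assumed cut points and toy data; the paper's method additionally searches for the jump points (in time and covariate space) by log-likelihood improvement.

```python
def piecewise_hazard(times, events, cuts):
    """Occurrence/exposure estimate of a piecewise-constant hazard.

    times: follow-up time per subject; events: 1 if the subject failed at
    that time, 0 if censored; cuts: interior interval boundaries.
    Within each interval, hazard = (#events) / (person-time at risk)."""
    edges = [0.0] + list(cuts) + [float("inf")]
    rates = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        d = sum(1 for t, e in zip(times, events) if e and lo <= t < hi)
        exposure = sum(min(t, hi) - lo for t in times if t > lo)
        rates.append(d / exposure if exposure > 0 else float("nan"))
    return rates

# hypothetical follow-up data: (time, event indicator)
times = [0.5, 1.2, 2.0, 3.5, 4.0, 0.8]
events = [1, 1, 0, 1, 0, 1]
rates = piecewise_hazard(times, events, [1.0, 3.0])
```

Note that this estimates the hazard function itself, not its integral, matching the paper's contrast with Breslow-type estimators.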
