Similar articles
1.
We explore several approaches for imputing partially observed covariates when the outcome of interest is a censored event time and when there is an underlying subset of the population that will never experience the event of interest. We call these subjects ‘cured’, and we consider the case where the data are modeled using a Cox proportional hazards (CPH) mixture cure model. We study covariate imputation approaches using fully conditional specification. We derive the exact conditional distribution and suggest a sampling scheme for imputing partially observed covariates in the CPH cure model setting. We also propose several approximations to the exact distribution that are simpler and more convenient to use for imputation. A simulation study demonstrates that the proposed imputation approaches outperform existing imputation approaches for survival data without a cure fraction in terms of bias in estimating CPH cure model parameters. We apply our multiple imputation techniques to a study of patients with head and neck cancer. Copyright © 2016 John Wiley & Sons, Ltd.
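For orientation, the CPH mixture cure model referred to above is commonly written as follows (a sketch in standard notation, not necessarily the authors' exact parameterization), with \pi(z) the logistic-modelled probability of being uncured (susceptible) and S_u the proportional hazards survival function among the uncured:

S(t \mid x, z) = \pi(z)\, S_u(t \mid x) + 1 - \pi(z), \qquad \pi(z) = \frac{\exp(z^{\top}\gamma)}{1 + \exp(z^{\top}\gamma)}, \qquad S_u(t \mid x) = S_0(t)^{\exp(x^{\top}\beta)}.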

2.
Medical studies frequently collect biological markers in which many subjects have values below the detectable limits of the assay, resulting in heavily censored data. We develop a modification of the Rigobon and Stoker index method for application to a Cox regression model with censored covariates. The index approach is compared with a complete case method and various fill-in methods. Our simulation results demonstrate that the index approach is an improvement over the other methods. We illustrate the usefulness of this approach with an example from the GenIMS study examining the relationship between two inflammatory markers and survival.

3.
Instrumental variable (IV) analysis has been widely used in economics, epidemiology, and other fields to estimate the causal effects of covariates on outcomes, in the presence of unobserved confounders and/or measurement errors in covariates. However, IV methods for time-to-event outcome with censored data remain underdeveloped. This paper proposes a Bayesian approach for IV analysis with censored time-to-event outcome by using a two-stage linear model. A Markov chain Monte Carlo sampling method is developed for parameter estimation for both normal and non-normal linear models with elliptically contoured error distributions. The performance of our method is examined by simulation studies. Our method largely reduces bias and greatly improves coverage probability of the estimated causal effect, compared with the method that ignores the unobserved confounders and measurement errors. We illustrate our method on the Women's Health Initiative Observational Study and the Atherosclerosis Risk in Communities Study. Copyright © 2014 John Wiley & Sons, Ltd.
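As a rough sketch of the two-stage linear structure described here (generic notation; the symbols are illustrative, not the authors' exact model), with Z an instrument, X the confounded or mismeasured covariate, and T a possibly right-censored event time:

X_i = \alpha_0 + \alpha_1 Z_i + e_{1i}, \qquad \log T_i = \beta_0 + \beta_1 X_i + e_{2i},

where (e_{1i}, e_{2i}) follow a joint normal or, more generally, elliptically contoured distribution whose correlation absorbs the unobserved confounding.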

4.
In survival analysis, a competing risk is an event whose occurrence precludes the occurrence of the primary event of interest. Outcomes in medical research are frequently subject to competing risks. In survival analysis, there are 2 key questions that can be addressed using competing risk regression models: first, which covariates affect the rate at which events occur, and second, which covariates affect the probability of an event occurring over time. The cause-specific hazard model estimates the effect of covariates on the rate at which events occur in subjects who are currently event-free. Subdistribution hazard ratios obtained from the Fine-Gray model describe the relative effect of covariates on the subdistribution hazard function. Hence, the covariates in this model can also be interpreted as having an effect on the cumulative incidence function or on the probability of events occurring over time. We conducted a review of the use and interpretation of the Fine-Gray subdistribution hazard model in articles published in the medical literature in 2015. We found that many authors provided an unclear or incorrect interpretation of the regression coefficients associated with this model. An incorrect and inconsistent interpretation of regression coefficients may lead to confusion when comparing results across different studies. Furthermore, an incorrect interpretation of estimated regression coefficients can result in an incorrect understanding about the magnitude of the association between exposure and the incidence of the outcome. The objective of this article is to clarify how these regression coefficients should be reported and to propose suggestions for interpreting these coefficients.
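For reference, the relationship between the subdistribution hazard and the cumulative incidence function (CIF) that underlies this interpretation is, in standard notation,

\lambda_k(t \mid x) = \lambda_{k0}(t) \exp(x^{\top}\beta), \qquad \mathrm{CIF}_k(t \mid x) = 1 - \exp\!\left(-\int_0^t \lambda_k(s \mid x)\, ds\right),

so \exp(\beta_j) is a subdistribution hazard ratio: it indicates the direction of the covariate's effect on the cumulative incidence of cause k, but it is not a ratio of cause-specific hazards and should not be read as one.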

5.
Several studies of the clinical validity of circulating tumor cells (CTCs) in metastatic breast cancer have shown that they are a prognostic biomarker of overall survival. In this work, we consider an individual patient data meta-analysis for nonmetastatic breast cancer to assess the discrimination of CTCs regarding the risk of death. Data are collected in several centers and present correlated failure times for subjects of the same center. However, although the covariate-specific time-dependent receiver operating characteristic (ROC) curve has been widely used for assessing the performance of a biomarker, there is no methodology yet that can handle this specific setting with clustered censored failure times. We propose an estimator for the covariate-specific time-dependent ROC curves and area under the ROC curve when clustered failure times are detected. We discuss the assumptions under which the estimators are consistent and their interpretations. We assume a shared frailty model for the effect of the covariates and the biomarker on the outcome in order to account for the cluster effect. A simulation study shows negligible bias for the proposed estimator and for a nonparametric estimator based on inverse probability of censoring weighting, whereas a semiparametric estimator that ignores the clustering is markedly biased. Finally, in our application to the breast cancer data, estimation of the covariate-specific area under the curve illustrates that CTCs discriminate the risk of death better among patients with inflammatory tumors than among patients with noninflammatory tumors.
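One common cumulative/dynamic formulation of the covariate-specific time-dependent ROC quantities discussed here (a sketch; the paper's precise definitions may differ) is, for a biomarker M, failure time T, covariate value X = x, and threshold c,

\mathrm{Se}(c, t \mid x) = P(M > c \mid T \le t, X = x), \qquad \mathrm{Sp}(c, t \mid x) = P(M \le c \mid T > t, X = x),

\mathrm{AUC}(t \mid x) = P(M_i > M_j \mid T_i \le t,\; T_j > t,\; X_i = X_j = x).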

6.
In cancer trials, a significant fraction of patients can be cured, that is, the disease is completely eliminated, so that it never recurs. In general, treatments are developed both to increase the patients' chances of being cured and to prolong the survival time among non-cured patients. A cure rate model represents a combination of a cure fraction and a survival model, and can be applied to many clinical studies over several types of cancer. In this article, the cure rate model is considered for interval-censored data, in which the event time of interest is only known to lie between two observation time points. Interval-censored data commonly occur in studies of diseases that often progress without symptoms, requiring clinical evaluation for detection (Encyclopedia of Biostatistics. Wiley: New York, 1998; 2090-2095). In our study, an approximate likelihood approach suggested by Goetghebeur and Ryan (Biometrics 2000; 56:1139-1144) is used to derive the likelihood for interval-censored data. In addition, a frailty model is introduced to characterize the association between the cure fraction and the survival model. In particular, the positive association between the cure fraction and the survival time is incorporated by imposing a common normal frailty effect. The EM algorithm is used to estimate parameters, and multiple imputation based on the profile likelihood is adopted for variance estimation. The approach is applied to a smoking cessation study in which the event of interest is a smoking relapse and several covariates, including an intensive care treatment, are evaluated for their effects on both the occurrence of relapse and the non-smoking duration.

7.
We estimate a Cox proportional hazards model where one of the covariates measures the level of a subject's cognitive functioning by grading the total score obtained by the subject on the items of a questionnaire. A case study is presented where the sample includes partial respondents, who did not answer some questionnaire items. The total score hence takes the form of an interval-censored variable and, as a result, the level of cognitive functioning is missing for some subjects. We handle the partial respondents by taking a likelihood-based approach where survival time is jointly modelled with the censored total score and the size of the censoring interval. Estimates are obtained by an EM-type algorithm that reduces to the iterative maximization of three complete log-likelihood functions derived from two augmented data sets with case weights, alternated with weight updating. This methodology is exploited to assess the Mini-Mental State Examination index as a prognostic factor of survival in a sample of Chinese older adults. Copyright © 2009 John Wiley & Sons, Ltd.

8.
In survival analysis, time-varying covariates are covariates whose value can change during follow-up. Outcomes in medical research are frequently subject to competing risks (events precluding the occurrence of the primary outcome). We review the types of time-varying covariates and highlight the effect of their inclusion in the subdistribution hazard model. External time-dependent covariates are external to the subject; they can affect the failure process but are not otherwise involved in the failure mechanism. Internal time-varying covariates are measured on the subject; they can affect the failure process directly and may also be impacted by the failure mechanism. In the absence of competing risks, a consequence of including internal time-dependent covariates in the Cox model is that one cannot estimate the survival function or the effect of covariates on the survival function. In the presence of competing risks, the inclusion of internal time-varying covariates in a subdistribution hazard model results in the loss of the ability to estimate the cumulative incidence function (CIF) or the effect of covariates on the CIF. Furthermore, the definition of the risk set for the subdistribution hazard function can make defining internal time-varying covariates difficult or impossible. We conducted a review of the use of time-varying covariates in subdistribution hazard models in articles published in the medical literature in 2015 and in the first 5 months of 2019. Seven percent of the published articles included a time-varying covariate. Several inappropriately described a time-varying covariate as having an association with the risk of the outcome.

9.
Survival analysis has conventionally been performed on a continuous time scale. In practice, the survival time is often recorded or handled on a discrete scale; when this is the case, discrete-time survival analysis provides results more relevant to the actual data scale. Besides, data on time-dependent covariates in survival analysis are usually collected through intermittent follow-ups, resulting in missing and mismeasured covariate data. In this work, we propose the sufficient discrete hazard (SDH) approach to discrete-time survival analysis with longitudinal covariates that are subject to missingness and mismeasurement. The SDH method employs the conditional score idea available for dealing with mismeasured covariates, and penalized least squares with a regression spline basis for estimating the missing covariate values. The SDH method is developed for single event analysis with the logistic discrete hazard model, and for competing risks analysis with the multinomial logit model. Simulation results reveal good finite-sample performance of the proposed estimator and support the associated asymptotic theory. For illustration, the proposed SDH method is applied to the scleroderma lung study data, where the time to medication withdrawal and the time to death were recorded discretely in months.
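For context, the logistic discrete hazard model named above is typically parameterized as follows (standard notation for discrete time points t_1 < t_2 < \dots; the competing-risks version replaces the logit with a multinomial logit over causes):

h(t_j \mid x) = P(T = t_j \mid T \ge t_j, x), \qquad \operatorname{logit} h(t_j \mid x) = \alpha_j + x^{\top}\beta.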

10.
In this article, the second of a series on the analysis of time to event data, we address the case in which multiple predictors (covariates) that may influence the time to an event are taken into account. The hazard function is introduced, and is given in a form useful for assessing the impact of multiple covariates on time to an event. Methods for the assessment of model fitting are also discussed and an example with cancer survival as outcome with the presence or absence of multiple genes as covariates is presented.
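The form in question is presumably the familiar proportional hazards specification for p covariates,

h(t \mid x_1, \dots, x_p) = h_0(t) \exp(\beta_1 x_1 + \cdots + \beta_p x_p),

where h_0(t) is the baseline hazard and \exp(\beta_j) is the hazard ratio associated with a one-unit increase in x_j.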

11.
In this article, we show how Tobit models can address the problem of identifying characteristics of subjects with left-censored outcomes in the context of developing a method for jointly analyzing time-to-event and longitudinal data. Methods exist for handling these types of data separately, but they may not be appropriate when the time to event depends on the longitudinal outcome and a substantial portion of values is reported to be below the limits of detection. An alternative approach is to develop a joint model for the time-to-event outcome and a two-part longitudinal outcome, linking them through random effects. This proposed approach is implemented to assess the association between the risk of decline of the CD4/CD8 ratio and rates of change in viral load, and to discriminate patients who are potential progressors to AIDS from those who are not. We develop a fully Bayesian approach for fitting joint two-part Tobit models and illustrate the proposed methods on simulated and real data from an AIDS clinical study.
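A minimal sketch of the Tobit measurement idea invoked here (generic notation; d denotes an assumed lower limit of detection, not a value from the study): the observed longitudinal response y_{ij} is a left-censored version of a latent value y_{ij}^{*},

y_{ij}^{*} = x_{ij}^{\top}\beta + b_i + \varepsilon_{ij}, \qquad y_{ij} = y_{ij}^{*} \text{ if } y_{ij}^{*} > d, \quad \text{left-censored at } d \text{ otherwise},

with the subject-level random effects b_i shared with, or correlated with, those of the time-to-event submodel so that the two outcomes are linked.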

13.
We develop an approach, based on multiple imputation, that estimates the marginal survival distribution in survival analysis using auxiliary variables to recover information for censored observations. To conduct the imputation, we use two working survival models to define a nearest neighbour imputing risk set: one model for the event times and the other for the censoring times. Based on the imputing risk set, two non-parametric multiple imputation methods are considered: risk set imputation and Kaplan-Meier imputation. For both methods a future event or censoring time is imputed for each censored observation. With a categorical auxiliary variable, we show that with a large number of imputations the estimates from the Kaplan-Meier imputation method correspond to the weighted Kaplan-Meier estimator. We also show that the Kaplan-Meier imputation method is robust to mis-specification of either one of the two working models. In a simulation study with time-independent and time-dependent auxiliary variables, we compare the multiple imputation approaches with an inverse probability of censoring weighted method. We show that all approaches can reduce bias due to dependent censoring and improve efficiency. We apply the approaches to AIDS clinical trial data comparing ZDV and placebo, in which CD4 count is the time-dependent auxiliary variable.
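To make the Kaplan-Meier imputation idea concrete, below is a deliberately simplified Python sketch in which a single categorical auxiliary variable defines the imputing risk set; the paper's version, which builds the risk set from two working survival models and nearest-neighbour matching, is not reproduced here, and all names are illustrative.

```python
import numpy as np

def km_curve(times, events):
    # Kaplan-Meier estimate: unique event times and S(t) just after each of them.
    uniq = np.unique(times[events == 1])
    surv, s = [], 1.0
    for u in uniq:
        at_risk = np.sum(times >= u)
        d = np.sum((times == u) & (events == 1))
        s *= 1.0 - d / at_risk
        surv.append(s)
    return uniq, np.array(surv)

def km_impute_once(times, events, group, rng):
    # One imputed data set: each censored subject receives a draw from the
    # conditional KM distribution of its own auxiliary-variable group.
    imp_t, imp_d = times.astype(float), events.copy()
    for g in np.unique(group):
        idx = group == g
        ut, s = km_curve(times[idx], events[idx])
        for i in np.where(idx & (events == 0))[0]:
            c = times[i]
            later = ut > c                        # candidate event times beyond c
            if not later.any():
                continue                          # nothing to impute from; stays censored
            s_c = s[ut <= c][-1] if (ut <= c).any() else 1.0
            cond = s[later] / s_c                 # conditional survival S(t)/S(c)
            pmf = -np.diff(np.concatenate(([1.0], cond)))   # mass at each event time
            tail = cond[-1]                       # mass beyond the last event time
            probs = np.concatenate((pmf, [tail]))
            k = rng.choice(probs.size, p=probs / probs.sum())
            if k < pmf.size:                      # imputed as an event
                imp_t[i], imp_d[i] = ut[later][k], 1
            else:                                 # remains censored, at the last event time
                imp_t[i], imp_d[i] = ut[later][-1], 0
    return imp_t, imp_d

# Repeat km_impute_once M times, analyse each completed data set, and combine
# the M estimates with Rubin's rules for multiple-imputation inference.
```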

14.
The proliferation of longitudinal studies has increased the importance of statistical methods for time-to-event data that can incorporate time-dependent covariates. The Cox proportional hazards model is one such method that is widely used. As more extensions of the Cox model with time-dependent covariates are developed, simulation studies will grow in importance as well. An essential starting point for simulation studies of time-to-event models is the ability to produce simulated survival times from a known data-generating process. This paper develops a method for the generation of survival times that follow a Cox proportional hazards model with time-dependent covariates. The method presented relies on a simple transformation of random variables generated according to a truncated piecewise exponential distribution and allows practitioners great flexibility and control over both the number of time-dependent covariates and the number of time periods in the duration of follow-up measurement. Within this framework, an additional argument is suggested that allows researchers to generate time-to-event data in which covariates change at integer-valued steps of the time scale. The purpose of this approach is to produce data for simulation experiments that mimic the types of data structures that applied researchers encounter when using longitudinal biomedical data. Validity is assessed in a set of simulation experiments, and results indicate that the proposed procedure performs well in producing data that conform to the assumptions of the Cox proportional hazards model. Copyright © 2013 John Wiley & Sons, Ltd.
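A minimal Python sketch of the piecewise-exponential inversion idea described in this abstract, assuming a constant baseline hazard within each period and covariates that change at integer-valued steps; function names and the toy setup are illustrative, not the authors' implementation.

```python
import numpy as np

def simulate_cox_td(z_paths, beta, baseline_hazard, period_length=1.0, rng=None):
    """Simulate event times from a Cox model with piecewise-constant
    time-dependent covariates that change at integer-valued steps.

    z_paths : array (n_subjects, n_periods, n_covariates), covariate values
              held constant within each period of length `period_length`.
    beta    : array (n_covariates,) of log hazard ratios.
    Returns (times, events); subjects still event-free at the end of
    follow-up are administratively censored there.
    """
    rng = rng or np.random.default_rng()
    n, n_periods, _ = z_paths.shape
    times, events = np.empty(n), np.zeros(n, dtype=int)
    for i in range(n):
        # hazard is constant within each period: h_j = h0 * exp(beta' z_ij)
        h = baseline_hazard * np.exp(z_paths[i] @ beta)          # (n_periods,)
        cum_h = np.concatenate(([0.0], np.cumsum(h * period_length)))
        target = rng.exponential(1.0)                            # -log(U), U ~ Uniform(0, 1)
        j = np.searchsorted(cum_h, target, side="right") - 1     # period where target is reached
        if j >= n_periods:                                       # survives follow-up: censored
            times[i] = n_periods * period_length
        else:                                                    # invert H(t) within period j
            times[i] = j * period_length + (target - cum_h[j]) / h[j]
            events[i] = 1
    return times, events

# toy usage: one binary covariate that switches on at period 3 for half the sample
rng = np.random.default_rng(7)
n, periods = 500, 10
z = np.zeros((n, periods, 1))
z[: n // 2, 3:, 0] = 1.0
times, events = simulate_cox_td(z, beta=np.array([0.7]), baseline_hazard=0.1, rng=rng)
```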

15.
Flexible survival models, which avoid the assumptions of proportional hazards (PH) or of linear effects of continuous covariates, bring the issues of model selection to a new level of complexity. Each 'candidate covariate' requires inter-dependent decisions regarding (i) its inclusion in the model, and representation of its effect on the log hazard as (ii) either constant over time or time-dependent (TD) and, for continuous covariates, (iii) either loglinear or non-loglinear (NL). Moreover, 'optimal' decisions for one covariate depend on the decisions regarding others. Thus, an efficient model-building strategy is necessary. We carried out an empirical study of the impact of the model selection strategy on the estimates obtained in flexible multivariable survival analyses of prognostic factors for mortality in 273 gastric cancer patients. We used 10 different strategies to select alternative multivariable parametric as well as spline-based models, allowing flexible modeling of non-parametric (TD and/or NL) effects. We employed 5-fold cross-validation to compare the predictive ability of alternative models. All flexible models indicated significant non-linearity and changes over time in the effect of age at diagnosis. Conventional 'parametric' models suggested the lack of a period effect, whereas more flexible strategies indicated a significant NL effect. Cross-validation confirmed that flexible models predicted mortality better. The resulting differences in the 'final model' selected by the various strategies also had an impact on risk prediction for individual subjects. Overall, our analyses underline (a) the importance of accounting for significant non-parametric effects of covariates and (b) the need to develop accurate model selection strategies for flexible survival analyses.
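As a sketch of how the flexible effects referred to here are typically represented (one common spline-based choice; not necessarily the authors' exact basis), the effect of a continuous covariate x on the log hazard may be allowed to be non-loglinear, g(x) = \sum_m \delta_m B_m(x), and/or time-dependent, \beta(t) = \sum_k \gamma_k B_k(t), where B_m(\cdot) and B_k(\cdot) are low-dimensional spline bases whose coefficients are estimated together with the other regression parameters.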

16.
Interval-censored failure-time data arise when subjects are examined or observed periodically, such that the failure time of interest is not observed exactly but is only known to be bracketed between two adjacent observation times. The commonly used approaches assume that the examination times and the failure time are independent or conditionally independent given covariates. In many practical applications, patients who are already in poor health or have a weak immune system before treatment tend to visit physicians more often after treatment than those in better health or with a stronger immune system. In this situation, the visiting rate is positively correlated with the risk of failure through health status, which results in dependent interval-censored data. While some measurable factors affecting health status, such as age, gender, and physical symptoms, can be included in the covariates, some health-related latent variables cannot be observed or measured. To deal with dependent interval censoring involving an unobserved latent variable, we characterize the visiting/examination process as a recurrent event process and propose a joint frailty model to account for the association between the failure time and the visiting process. A shared gamma frailty is incorporated multiplicatively into the Cox model for the failure time and the proportional intensity model for the visiting process. We propose a semiparametric maximum likelihood approach for estimating the model parameters and show the asymptotic properties, including consistency and weak convergence. Extensive simulation studies are conducted and a data set of bladder cancer is analyzed for illustrative purposes. Copyright © 2017 John Wiley & Sons, Ltd.
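A minimal sketch of the shared-frailty structure described above (generic notation, not the paper's exact parameterization): given a gamma frailty u_i with mean 1 and variance \theta, the failure hazard and the visiting-process intensity are modelled as

\lambda_T(t \mid Z_i, u_i) = u_i\, \lambda_0(t) \exp(Z_i^{\top}\beta), \qquad \lambda_V(t \mid Z_i, u_i) = u_i\, r_0(t) \exp(Z_i^{\top}\alpha),

so a large u_i simultaneously raises the risk of failure and the rate of clinic visits, which is what induces the dependent interval censoring.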

17.
For modern evidence-based medicine, a well thought-out risk scoring system for predicting the occurrence of a clinical event plays an important role in selecting prevention and treatment strategies. Such an index system is often established based on the subject's 'baseline' genetic or clinical markers via a working parametric or semi-parametric model. To evaluate the adequacy of such a system, C-statistics are routinely used in the medical literature to quantify the capacity of the estimated risk score in discriminating among subjects with different event times. The C-statistic provides a global assessment of a fitted survival model for the continuous event time rather than focusing on the prediction of t-year survival for a fixed time. When the event time is possibly censored, however, the population parameters corresponding to the commonly used C-statistics may depend on the study-specific censoring distribution. In this article, we present a simple C-statistic without this shortcoming. The new procedure consistently estimates a conventional concordance measure that is free of censoring. We provide a large-sample approximation to the distribution of this estimator for making inferences about the concordance measure. Results from numerical studies suggest that the new procedure performs well in finite samples.
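For reference, the censoring-free concordance measure that such a procedure targets can be written, for a risk score M and a prespecified follow-up limit \tau (a sketch in standard notation), as

C_{\tau} = P(M_i > M_j \mid T_i < T_j,\; T_i < \tau),

i.e., the probability that, of two randomly chosen subjects, the one who fails earlier (within \tau) has the higher risk score; its population value does not involve the study-specific censoring distribution.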

18.
In many observational studies, the objective is to estimate the effect of treatment or state-change on the recurrent event rate. If treatment is assigned after the start of follow-up, traditional methods (eg, adjustment for baseline-only covariates or fully conditional adjustment for time-dependent covariates) may give biased results. We propose a two-stage modeling approach using the method of sequential stratification to accurately estimate the effect of a time-dependent treatment on the recurrent event rate. At the first stage, we estimate the pretreatment recurrent event trajectory using a proportional rates model censored at the time of treatment. Prognostic scores are estimated from the linear predictor of this model and used to match treated patients to as-yet-untreated controls based on the prognostic score at the time of treatment for the index patient. The final model is stratified on matched sets and compares the posttreatment recurrent event rate to the recurrent event rate of the matched controls. We demonstrate through simulation that bias due to dependent censoring is negligible, provided the treatment frequency is low, and we investigate a threshold at which correction for dependent censoring is needed. The method is applied to liver transplant (LT), where we estimate the effect of development of post-LT End Stage Renal Disease (ESRD) on the rate of days hospitalized.

19.
When modeling longitudinal data, the true values of time-varying covariates may be unknown because of detection-limit censoring or measurement error. A common approach in the literature is to empirically model the covariate process based on observed data and then predict the censored values or mismeasured values based on this empirical model. Such an empirical model can be misleading, especially for censored values since the (unobserved) censored values may behave very differently than observed values due to the underlying data-generation mechanisms or disease status. In this paper, we propose a mechanistic nonlinear covariate model based on the underlying data-generation mechanisms to address censored values and mismeasured values. Such a mechanistic model is based on solid scientific or biological arguments, so the predicted censored or mismeasured values are more reasonable. We use a Monte Carlo EM algorithm for likelihood inference and apply the methods to an AIDS dataset, where viral load is censored by a lower detection limit. Simulation results confirm that the proposed models and methods offer substantial advantages over existing empirical covariate models for censored and mismeasured covariates.
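For context, a commonly used mechanistic nonlinear model for viral load decay after treatment initiation (given here only as an illustration of what "mechanistic" means in this setting, not necessarily the model used in the paper) is the bi-exponential form

V(t) = \exp(p_1 - \lambda_1 t) + \exp(p_2 - \lambda_2 t),

whose parameters have direct biological interpretations as initial levels and decay rates of the two infected-cell compartments driving the first- and second-phase decline.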

20.
L. J. Wei. Statistics in Medicine 1992; 11(14-15): 1871-1879.
For the past two decades the Cox proportional hazards model has been used extensively to examine the covariate effects on the hazard function for the failure time variable. On the other hand, the accelerated failure time model, which simply regresses the logarithm of the survival time over the covariates, has seldom been utilized in the analysis of censored survival data. In this article, we review some newly developed linear regression methods for analysing failure time observations. These procedures have sound theoretical justification and can be implemented with an efficient numerical method. The accelerated failure time model has an intuitive physical interpretation and would be a useful alternative to the Cox model in survival analysis.
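The accelerated failure time model reviewed here regresses the logarithm of the survival time directly on the covariates; in its usual linear form,

\log T_i = \beta^{\top} x_i + \varepsilon_i,

so \exp(\beta_j) acts multiplicatively on the survival time itself (stretching or shrinking it) rather than on the hazard.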
