Similar Documents
20 similar documents found (search time: 15 ms)
1.
We develop an approach, based on multiple imputation, that estimates the marginal survival distribution in survival analysis using auxiliary variables to recover information for censored observations. To conduct the imputation, we use two working survival models to define a nearest neighbour imputing risk set: one model for the event times and the other for the censoring times. Based on the imputing risk set, two non-parametric multiple imputation methods are considered: risk set imputation and Kaplan-Meier imputation. For both methods, a future event or censoring time is imputed for each censored observation. With a categorical auxiliary variable, we show that with a large number of imputations the estimates from the Kaplan-Meier imputation method correspond to the weighted Kaplan-Meier estimator. We also show that the Kaplan-Meier imputation method is robust to mis-specification of either one of the two working models. In a simulation study with time-independent and time-dependent auxiliary variables, we compare the multiple imputation approaches with an inverse probability of censoring weighted method. We show that all approaches can reduce bias due to dependent censoring and improve efficiency. We apply the approaches to AIDS clinical trial data comparing ZDV and placebo, in which CD4 count is the time-dependent auxiliary variable.
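The Kaplan-Meier imputation step can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: the imputing risk set is simplified to all subjects still under observation at the censoring time (the paper builds it from two working survival models), and a future event time is drawn by inverting the Kaplan-Meier curve fitted within that set.

```python
import random

def kaplan_meier(times, events):
    """Kaplan-Meier curve as a list of (event time, S(t)) pairs."""
    pts = sorted(set(t for t, e in zip(times, events) if e == 1))
    surv, curve = 1.0, []
    for t in pts:
        at_risk = sum(1 for tt in times if tt >= t)          # risk set size at t
        d = sum(1 for tt, e in zip(times, events) if tt == t and e == 1)
        surv *= 1.0 - d / at_risk
        curve.append((t, surv))
    return curve

def km_impute(c_time, times, events, rng):
    """Draw an event time for a subject censored at c_time from the KM curve
    of its imputing risk set (simplified here to subjects with time >= c_time)."""
    rs = [(t, e) for t, e in zip(times, events) if t >= c_time]
    curve = kaplan_meier([t for t, _ in rs], [e for _, e in rs])
    if not curve:                        # nobody left to borrow from
        return c_time, 0
    u = rng.random()                     # invert the KM curve at a uniform draw
    for t, s in curve:
        if s <= u:
            return t, 1
    return curve[-1][0], 1               # tail mass at the largest event time

times = [1.0, 2.0, 3.0, 4.0, 5.0]
events = [1, 0, 1, 1, 0]
curve = kaplan_meier(times, events)
t_imp, e_imp = km_impute(2.0, times, events, random.Random(0))
```

Repeating the draw M times yields the multiple imputations; the risk-set-imputation variant would instead resample a subject from `rs` directly.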

2.
We consider the situation of estimating the marginal survival distribution from censored data subject to dependent censoring using auxiliary variables. We had previously developed a nonparametric multiple imputation approach. The method used two working proportional hazards (PH) models, one for the event times and the other for the censoring times, to define a nearest neighbor imputing risk set. This risk set was then used to impute failure times for censored observations. Here, we adapt the method to the situation where the event and censoring times follow accelerated failure time models and propose to use Buckley-James estimators for the two working models. Besides studying the performance of the proposed method, we also compare it with two popular methods that handle dependent censoring through auxiliary variables, the inverse probability of censoring weighted method and parametric multiple imputation, to shed light on their use. In a simulation study with time-independent auxiliary variables, we show that all approaches can reduce bias due to dependent censoring. The proposed method is robust to misspecification of either one of the two working models and of their link function. This suggests that working proportional hazards models are preferable, since accelerated failure time models are more cumbersome to fit. In contrast, the inverse probability of censoring weighted method is not robust to misspecification of the link function of the censoring time model, and the parametric imputation methods rely on correct specification of the event time model. The approaches are applied to a prostate cancer dataset. Copyright © 2015 John Wiley & Sons, Ltd.

3.
Most multiple imputation (MI) methods for censored survival data either ignore patient characteristics when imputing a likely event time or place quite restrictive modeling assumptions on the survival distributions used for imputation. In this research, we propose a robust MI approach that directly imputes restricted lifetimes over the study period based on a model of the mean restricted life as a linear function of covariates. This method retains patient characteristics when making imputation choices, through the restricted mean parameters, and makes no assumptions on the shapes of hazards or survival functions. Simulation results show that our method outperforms its closest competitor for modeling restricted mean lifetimes in terms of bias and efficiency, in both independent censoring and dependent censoring scenarios. Estimates of restricted lifetime model parameters and marginal survival estimates regain much of the precision lost due to censoring. The proposed method is also much less subject to dependent censoring bias captured by covariates in the restricted mean model. This feature is illustrated in a full statistical analysis of the International Breast Cancer Study Group Ludwig Trial V using the proposed methodology.

4.
Incorporating time-dependent covariates into tree-structured survival analysis (TSSA) may result in more accurate prognostic models than if only baseline values are used. Available time-dependent TSSA methods exhaustively test every binary split on every covariate; however, this approach may result in selection bias toward covariates with more observed values. We present a method that uses unbiased significance levels from newly proposed permutation tests to select the time-dependent or baseline covariate with the strongest relationship with the survival outcome. The specific splitting value is identified using only the selected covariate. Simulation results show that the proposed time-dependent TSSA method produces tree models of equal or greater accuracy as compared to baseline TSSA models, even with high censoring rates and large within-subject variability in the time-dependent covariate. To illustrate, the proposed method is applied to data from a cohort of bipolar youths to identify subgroups at risk for self-injurious behavior. Copyright © 2014 John Wiley & Sons, Ltd.

5.
When the event time of interest depends on the censoring time, conventional two-sample test methods, such as the log-rank and Wilcoxon tests, can produce an invalid test result. We extend our previous work on estimation using auxiliary variables to adjust for dependent censoring via multiple imputation, to the comparison of two survival distributions. To conduct the imputation, we use two working models to define a set of similar observations called the imputing risk set. One model is for the event times and the other for the censoring times. Based on the imputing risk set, a nonparametric multiple imputation method, Kaplan-Meier imputation, is used to impute a future event or censoring time for each censored observation. After imputation, the conventional nonparametric two-sample tests can be easily implemented on the augmented data sets. Simulation studies show that the sizes of the log-rank and Wilcoxon tests constructed on the imputed data sets are comparable to the nominal level and the powers are much higher compared with the tests based on the unimputed data in the presence of dependent censoring if either one of the two working models is correctly specified. The method is illustrated using AIDS clinical trial data comparing ZDV and placebo, in which CD4 count is the time-dependent auxiliary variable.
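After imputation, the standard two-sample tests run unchanged on each augmented data set. As a reference point, here is a plain two-sample log-rank statistic in pure Python (the textbook formula, not code from the paper); in the imputed-data workflow it would be computed on each of the M augmented data sets and the results combined.

```python
def logrank_stat(times, events, groups):
    """Two-sample log-rank chi-square statistic (1 df).
    groups[i] is 0 or 1; events[i] is 1 for an observed event."""
    obs = exp = var = 0.0
    for t in sorted(set(t for t, e in zip(times, events) if e == 1)):
        n = sum(1 for tt in times if tt >= t)                 # at risk, both arms
        n1 = sum(1 for tt, g in zip(times, groups) if tt >= t and g == 1)
        d = sum(1 for tt, e in zip(times, events) if tt == t and e == 1)
        d1 = sum(1 for tt, e, g in zip(times, events, groups)
                 if tt == t and e == 1 and g == 1)
        obs += d1                                             # observed events, arm 1
        exp += d * n1 / n                                     # expected under H0
        if n > 1:                                             # hypergeometric variance
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return (obs - exp) ** 2 / var

stat = logrank_stat([1, 2, 3, 4], [1, 1, 1, 1], [1, 1, 0, 0])
```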

6.
In many observational studies, the objective is to estimate the effect of treatment or state-change on the recurrent event rate. If treatment is assigned after the start of follow-up, traditional methods (eg, adjustment for baseline-only covariates or fully conditional adjustment for time-dependent covariates) may give biased results. We propose a two-stage modeling approach using the method of sequential stratification to accurately estimate the effect of a time-dependent treatment on the recurrent event rate. At the first stage, we estimate the pretreatment recurrent event trajectory using a proportional rates model censored at the time of treatment. Prognostic scores are estimated from the linear predictor of this model and used to match treated patients to as yet untreated controls based on prognostic score at the time of treatment for the index patient. The final model is stratified on matched sets and compares the posttreatment recurrent event rate to the recurrent event rate of the matched controls. We demonstrate through simulation that bias due to dependent censoring is negligible, provided the treatment frequency is low, and we investigate a threshold at which correction for dependent censoring is needed. The method is applied to liver transplant (LT), where we estimate the effect of development of post-LT End Stage Renal Disease (ESRD) on rate of days hospitalized.

7.
Several approaches exist for handling missing covariates in the Cox proportional hazards model. Multiple imputation (MI) is relatively easy to implement, with various software available, and yields consistent estimates if the imputation model is correct. On the other hand, the fully augmented weighted estimators (FAWEs) recover a substantial proportion of the efficiency and have the doubly robust property. In this paper, we compare the FAWEs and MI through a comprehensive simulation study. For MI, we consider multiple imputation by chained equations and focus on two imputation methods: Bayesian linear regression imputation and predictive mean matching. Simulation results show that the imputation methods can be rather sensitive to model misspecification and may have large bias when the censoring time depends on the missing covariates. In contrast, the FAWEs allow the censoring time to depend on the missing covariates and, owing to the doubly robust property, are remarkably robust as long as either the conditional expectations or the selection probability is specified correctly. The comparison suggests that the FAWEs have the potential to be a competitive and attractive tool for the analysis of survival data with missing covariates. Copyright © 2010 John Wiley & Sons, Ltd.

8.
In this paper, we compare the robustness properties of a matching estimator with those of a doubly robust estimator. We describe the robustness properties of matching and subclassification estimators by showing how misspecification of the propensity score model can still result in consistent estimation of an average causal effect. Propensity scores are covariate scores, a class of functions that remove bias due to all observed covariates. When matching on a parametric model (e.g., a propensity or a prognostic score), the matching estimator is robust to model misspecification as long as the misspecified model belongs to the class of covariate scores. The implication is that the matching estimator offers multiple routes to consistency, in contrast to the doubly robust estimator, which gives the researcher two chances to make reliable inference. In simulations, we compare the finite sample properties of the matching estimator with those of a simple inverse probability weighting estimator and a doubly robust estimator. For the misspecifications in our study, the mean square error of the matching estimator is smaller than that of both the simple inverse probability weighting estimator and the doubly robust estimator.
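For intuition, the two simplest estimators in this comparison can be written in a few lines. This is a toy sketch under assumed inputs: the propensity scores are taken as given (in practice they would be fitted, e.g., by logistic regression), matching is 1:1 nearest neighbour with replacement, and the target is the average treatment effect on the treated (ATT). The data values are made up for illustration.

```python
def matching_att(y, t, score):
    """ATT from 1:1 nearest-neighbour matching (with replacement) on a score."""
    controls = [i for i, ti in enumerate(t) if ti == 0]
    diffs = []
    for i, ti in enumerate(t):
        if ti == 1:
            j = min(controls, key=lambda k: abs(score[k] - score[i]))
            diffs.append(y[i] - y[j])          # treated outcome minus matched control
    return sum(diffs) / len(diffs)

def ipw_att(y, t, score):
    """ATT from inverse probability weighting with odds weights e(x)/(1-e(x))."""
    treated_mean = sum(yi for yi, ti in zip(y, t) if ti == 1) / sum(t)
    w = [s / (1 - s) for s in score]
    num = sum(wi * yi for yi, ti, wi in zip(y, t, w) if ti == 0)
    den = sum(wi for ti, wi in zip(t, w) if ti == 0)
    return treated_mean - num / den

y = [3.0, 5.0, 2.0, 4.0]           # outcomes (hypothetical)
t = [1, 1, 0, 0]                   # treatment indicator
ps = [0.8, 0.6, 0.7, 0.5]          # assumed fitted propensity scores
```

The paper's point is that `matching_att` stays consistent whenever the (possibly misspecified) score is still a covariate score, whereas a doubly robust estimator augments `ipw_att` with an outcome model.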

9.
We provide non-parametric estimates of the marginal cumulative distribution of stage occupation times (waiting times) and non-parametric estimates of marginal cumulative incidence function (proportion of persons who leave stage j for stage j' within time t of entering stage j) using right-censored data from a multi-stage model. We allow for stage and path dependent censoring where the censoring hazard for an individual may depend on his or her natural covariate history such as the collection of stages visited before the current stage and their occupation times. Additional external time dependent covariates that may induce dependent censoring can also be incorporated into our estimates, if available. Our approach requires modelling the censoring hazard so that an estimate of the integrated censoring hazard can be used in constructing the estimates of the waiting times distributions. For this purpose, we propose the use of an additive hazard model which results in very flexible (robust) estimates. Examples based on data from burn patients and simulated data with tracking are also provided to demonstrate the performance of our estimators.

10.
Several studies of the clinical validity of circulating tumor cells (CTCs) in metastatic breast cancer have been conducted, showing that it is a prognostic biomarker of overall survival. In this work, we consider an individual patient data meta-analysis for nonmetastatic breast cancer to assess the discrimination of CTCs regarding the risk of death. Data are collected in several centers and present correlated failure times for subjects of the same center. However, although the covariate-specific time-dependent receiver operating characteristic (ROC) curve has been widely used for assessing the performance of a biomarker, there is no methodology yet that can handle this specific setting with clustered censored failure times. We propose an estimator for the covariate-specific time-dependent ROC curves and area under the ROC curve when clustered failure times are detected. We discuss the assumptions under which the estimators are consistent and their interpretations. We assume a shared frailty model for modeling the effect of the covariates and the biomarker on the outcome in order to account for the cluster effect. A simulation study was conducted and shows negligible bias for the proposed estimator and for a nonparametric one based on inverse probability censoring weighting, while a semiparametric estimator that ignores the clustering is markedly biased. Finally, in our application to breast cancer data, the estimation of the covariate-specific areas under the curves illustrates that CTCs discriminate the risk of death better in patients with inflammatory tumors than in patients with noninflammatory tumors.

11.
Hong Zhu, Statistics in Medicine 2014, 33(14): 2467-2479
Regression methods for survival data with right censoring have been extensively studied under semiparametric transformation models such as the Cox regression model and the proportional odds model. However, their practical application could be limited because of possible violation of model assumption or lack of ready interpretation for the regression coefficients in some cases. As an alternative, in this paper, the proportional likelihood ratio model introduced by Luo and Tsai is extended to flexibly model the relationship between survival outcome and covariates. This model has a natural connection with many important semiparametric models such as generalized linear model and density ratio model and is closely related to biased sampling problems. Compared with the semiparametric transformation model, the proportional likelihood ratio model is appealing and practical in many ways because of its model flexibility and quite direct clinical interpretation. We present two likelihood approaches for the estimation and inference on the target regression parameters under independent and dependent censoring assumptions. Based on a conditional likelihood approach using uncensored failure times, a numerically simple estimation procedure is developed by maximizing a pairwise pseudo-likelihood. We also develop a full likelihood approach, and the most efficient maximum likelihood estimator is obtained by a profile likelihood. Simulation studies are conducted to assess the finite-sample properties of the proposed estimators and compare the efficiency of the two likelihood approaches. An application to survival data for bone marrow transplantation patients of acute leukemia is provided to illustrate the proposed method and other approaches for handling non-proportionality. The relative merits of these methods are discussed in concluding remarks. Copyright © 2014 John Wiley & Sons, Ltd.

12.
Combining multiple markers can improve classification accuracy compared with using a single marker. In practice, covariates associated with markers or disease outcome can affect the performance of a biomarker or biomarker combination in the population. The covariate-adjusted receiver operating characteristic (ROC) curve has been proposed as a tool to tease out the covariate effect in the evaluation of a single marker; this curve characterizes the classification accuracy solely because of the marker of interest. However, research on the effect of covariates on the performance of marker combinations and on how to adjust for the covariate effect when combining markers is still lacking. In this article, we examine the effect of covariates on classification performance of linear marker combinations and propose to adjust for covariates in combining markers by maximizing the nonparametric estimate of the area under the covariate-adjusted ROC curve. The proposed method provides a way to estimate the best linear biomarker combination that is robust to risk model assumptions underlying alternative regression-model-based methods. The proposed estimator is shown to be consistent and asymptotically normally distributed. We conduct simulations to evaluate the performance of our estimator in cohort and case/control designs and compare several different weighting strategies during estimation with respect to efficiency. Our estimator is also compared with alternative regression-model-based estimators or estimators that maximize the empirical area under the ROC curve, with respect to bias and efficiency. We apply the proposed method to a biomarker study from a human immunodeficiency virus vaccine trial. Copyright © 2017 John Wiley & Sons, Ltd.
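To make the combination step concrete: the snippet below computes the empirical AUC of a score and grid-searches the direction of a two-marker linear combination that maximizes it. This is a simplified, unadjusted stand-in for the paper's covariate-adjusted criterion, with made-up data; the covariate adjustment itself is omitted.

```python
import math

def auc(scores, labels):
    """Empirical AUC: P(case score > control score), ties counted as 1/2."""
    cases = [s for s, y in zip(scores, labels) if y == 1]
    ctrls = [s for s, y in zip(scores, labels) if y == 0]
    total = sum(1.0 if a > b else 0.5 if a == b else 0.0
                for a in cases for b in ctrls)
    return total / (len(cases) * len(ctrls))

def best_linear_combo(x1, x2, labels, grid=201):
    """Grid-search the angle of cos(t)*x1 + sin(t)*x2 maximizing empirical AUC."""
    best = (-1.0, 0.0)
    for k in range(grid):
        t = math.pi * (k / (grid - 1) - 0.5)       # angles in (-pi/2, pi/2]
        s = [math.cos(t) * a + math.sin(t) * b for a, b in zip(x1, x2)]
        best = max(best, (auc(s, labels), t))
    return best                                     # (best AUC, angle)

labels = [0, 0, 1, 1]
best_auc, angle = best_linear_combo([0.0, 0.1, 1.0, 1.1],
                                    [0.3, 0.9, 0.2, 0.8], labels)
```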

13.
Epidemiologic research often aims to estimate the association between a binary exposure and a binary outcome, while adjusting for a set of covariates (eg, confounders). When data are clustered, as in, for instance, matched case-control studies and co-twin-control studies, it is common to use conditional logistic regression. In this model, all cluster-constant covariates are absorbed into a cluster-specific intercept, whereas cluster-varying covariates are adjusted for by explicitly adding these as explanatory variables to the model. In this paper, we propose a doubly robust estimator of the exposure-outcome odds ratio in conditional logistic regression models. This estimator protects against bias in the odds ratio estimator due to misspecification of the part of the model that contains the cluster-varying covariates. The doubly robust estimator uses two conditional logistic regression models for the odds ratio, one prospective and one retrospective, and is consistent for the exposure-outcome odds ratio if at least one of these models is correctly specified, not necessarily both. We demonstrate the properties of the proposed method by simulations and by re-analyzing a publicly available dataset from a matched case-control study on induced abortion and infertility.

14.
The proportional hazard model is one of the most important statistical models used in medical research involving time-to-event data. Simulation studies are routinely used to evaluate the performance and properties of the model and other alternative statistical models for time-to-event outcomes under a variety of situations. Complex simulations that examine multiple situations with different censoring rates demand approaches that can accommodate this variety. In this paper, we propose a general framework for simulating right-censored survival data for proportional hazards models by simultaneously incorporating a baseline hazard function from a known survival distribution, a known censoring time distribution, and a set of baseline covariates. Specifically, we present scenarios in which time to event is generated from exponential or Weibull distributions and censoring time has a uniform or Weibull distribution. The proposed framework incorporates any combination of covariate distributions. We describe the steps involved in nested numerical integration and using a root-finding algorithm to choose the censoring parameter that achieves predefined censoring rates in simulated survival data. We conducted simulation studies to assess the performance of the proposed framework. We demonstrated the application of the new framework in a comprehensively designed simulation study. We investigated the effect of censoring rate on potential bias in estimating the conditional treatment effect using the proportional hazard model in the presence of unmeasured confounding variables. Copyright © 2016 John Wiley & Sons, Ltd.
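The censoring-parameter calibration can be sketched as follows. This is an illustrative reimplementation, not the authors' code: event times are Exponential(lambda) via inverse-transform sampling, censoring is Uniform(0, theta), and empirical bisection (in place of the paper's root-finding over nested numerical integrals) picks theta to hit a target censoring rate; a fixed seed keeps the empirical rate monotone in theta.

```python
import math
import random

def simulate(n, lam, theta, rng):
    """n right-censored observations: T ~ Exponential(lam), C ~ Uniform(0, theta)."""
    out = []
    for _ in range(n):
        t = -math.log(1.0 - rng.random()) / lam      # inverse-transform sampling
        c = rng.random() * theta
        out.append((min(t, c), 1 if t <= c else 0))  # (observed time, event flag)
    return out

def censoring_rate(data):
    return 1.0 - sum(e for _, e in data) / len(data)

def calibrate_theta(target, n, lam, lo=0.01, hi=100.0, iters=60, seed=1):
    """Bisection for theta: larger theta means later censoring, hence a lower rate."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        rate = censoring_rate(simulate(n, lam, mid, random.Random(seed)))
        if rate > target:
            lo = mid        # too much censoring: allow later censoring times
        else:
            hi = mid
    return (lo + hi) / 2.0

theta = calibrate_theta(target=0.30, n=2000, lam=1.0)
rate = censoring_rate(simulate(2000, 1.0, theta, random.Random(1)))
```

For Exponential(1) events, the analytic censoring rate is (1 - exp(-theta)) / theta, so a 30% target corresponds to theta around 3.2; the empirical calibration should land nearby.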

15.
In survival analyses, inverse-probability-of-treatment (IPT) and inverse-probability-of-censoring (IPC) weighted estimators of parameters in marginal structural Cox models are often used to estimate treatment effects in the presence of time-dependent confounding and censoring. In most applications, a robust variance estimator of the IPT- and IPC-weighted estimator is calculated, leading to conservative confidence intervals. This estimator assumes that the weights are known rather than estimated from the data. Although a consistent estimator of the asymptotic variance of the IPT- and IPC-weighted estimator is generally available, applications, and thus information on the performance of the consistent estimator, are lacking. One reason may be its cumbersome implementation in statistical software, further complicated by missing details in the variance formula. In this paper, we therefore provide a detailed derivation of the variance of the asymptotic distribution of the IPT- and IPC-weighted estimator and explicitly state the terms necessary to calculate a consistent estimator of this variance. We compare the performance of the robust and consistent variance estimators in an application based on routine health care data and in a simulation study. The simulation reveals no substantial differences between the two estimators in medium and large data sets with no unmeasured confounding, but the consistent variance estimator performs poorly in small samples or under unmeasured confounding if the number of confounders is large. We thus conclude that the robust estimator is more appropriate for all practical purposes.

16.
Missing outcome data are a crucial threat to the validity of treatment effect estimates from randomized trials. The outcome distributions of participants with missing and observed data are often different, which increases bias. Causal inference methods may aid in reducing the bias and improving efficiency by incorporating baseline variables into the analysis. In particular, doubly robust estimators incorporate two nuisance parameters: the outcome regression and the missingness mechanism (ie, the probability of missingness conditional on treatment assignment and baseline variables), to adjust for differences between the observed and unobserved groups that can be explained by observed covariates. To consistently estimate the treatment effect, one of these nuisance parameters must be consistently estimated. Traditionally, nuisance parameters are estimated using parametric models, which often precludes consistency, particularly in moderate to high dimensions. Recent research on missing data has focused on data-adaptive estimation to help achieve consistency, but the large sample properties of such methods are poorly understood. In this article, we discuss a doubly robust estimator that is consistent and asymptotically normal under data-adaptive estimation of the nuisance parameters. We provide a formula for an asymptotically exact confidence interval under minimal assumptions. We show that our proposed estimator has smaller finite-sample bias compared to standard doubly robust estimators. We present a simulation study demonstrating the enhanced performance of our estimators in terms of bias, efficiency, and coverage of the confidence intervals. We present the results of an illustrative example: a randomized, double-blind phase 2/3 trial of antiretroviral therapy in HIV-infected persons.
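The doubly robust construction can be written down directly. Below is a generic augmented IPW (AIPW) estimator of a mean outcome under missingness at random, a standard textbook form rather than the article's exact estimator; the nuisance predictions `m_hat` and `pi_hat` are passed in as given (in the paper they would come from outcome-regression and missingness models, possibly data-adaptive), and the data values are hypothetical.

```python
def aipw_mean(y, r, m_hat, pi_hat):
    """AIPW estimate of E[Y]: mean of m_hat[i] + r[i]*(y[i]-m_hat[i])/pi_hat[i].
    y[i] is only read when r[i] == 1 (outcome observed)."""
    total = 0.0
    for i in range(len(r)):
        aug = m_hat[i]                              # outcome-regression prediction
        if r[i]:
            aug += (y[i] - m_hat[i]) / pi_hat[i]    # IPW correction, observed units
        total += aug
    return total / len(r)

y = [1.0, None, 3.0, 4.0]          # None marks a missing outcome
r = [1, 0, 1, 1]                   # observation indicator
m_hat = [1.5, 1.5, 3.0, 3.5]       # assumed outcome-regression predictions
pi_hat = [0.5, 0.5, 1.0, 1.0]      # assumed observation probabilities
est = aipw_mean(y, r, m_hat, pi_hat)
```

The estimator is consistent if either `m_hat` or `pi_hat` comes from a correctly specified model, which is the doubly robust property the abstract relies on.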

17.
In two-stage randomization designs, patients are randomized to one of the initial treatments, and at the end of the first stage, they are randomized to one of the second stage treatments depending on the outcome of the initial treatment. Statistical inference for survival data from these trials uses methods such as marginal mean models and weighted risk set estimates. In this article, we propose two forms of weighted Kaplan-Meier (WKM) estimators based on inverse-probability weighting: one with fixed weights and the other with time-dependent weights. We compare their properties with those of the standard Kaplan-Meier (SKM) estimator, marginal mean model-based (MM) estimator and weighted risk set (WRS) estimator. Simulation study reveals that both forms of weighted Kaplan-Meier estimators are asymptotically unbiased, and provide coverage rates similar to those of MM and WRS estimators. The SKM estimator, however, is biased when the second randomization rates are not the same for the responders and non-responders to initial treatment. The methods described are demonstrated by applying to a leukemia data set. Copyright © 2010 John Wiley & Sons, Ltd.
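A fixed-weight version of the weighted Kaplan-Meier estimator is short enough to sketch. The snippet treats the weights as known inverse-probability weights supplied by the analyst (e.g., inverse second-stage randomization probabilities); it illustrates the estimator's form and is not the authors' implementation.

```python
def weighted_km(times, events, weights):
    """Kaplan-Meier with per-subject weights; returns [(event time, S(t)), ...]."""
    pts = sorted(set(t for t, e in zip(times, events) if e == 1))
    surv, curve = 1.0, []
    for t in pts:
        at_risk = sum(w for tt, w in zip(times, weights) if tt >= t)   # weighted risk set
        d = sum(w for tt, e, w in zip(times, events, weights)
                if tt == t and e == 1)                                 # weighted deaths
        surv *= 1.0 - d / at_risk
        curve.append((t, surv))
    return curve

unweighted = weighted_km([1, 2, 3], [1, 0, 1], [1.0, 1.0, 1.0])
weighted = weighted_km([1, 2, 3], [1, 0, 1], [2.0, 1.0, 1.0])
```

With all weights equal to one the function reduces to the standard Kaplan-Meier estimator, which is a useful sanity check when implementing the weighted version.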

18.
We develop a weighted cumulative sum (WCUSUM) to evaluate and monitor pre-transplant waitlist mortality of facilities in the context where transplantation is considered to be dependent censoring. Waitlist patients are evaluated multiple times in order to update their current medical condition as reflected in a time-dependent variable called the Model for End-Stage Liver Disease (MELD) score. Higher MELD scores are indicative of higher pre-transplant death risk. Moreover, under the current liver allocation system, patients with higher MELD scores receive higher priority for liver transplantation. To evaluate the waitlist mortality of transplant centers, it is important to take this dependent censoring into consideration. We assume a 'standard' transplant practice through a transplant model and utilize inverse probability censoring weights to construct a WCUSUM. We evaluate the properties of a weighted zero-mean process as the basis of the proposed WCUSUM. We then discuss a resampling technique to obtain control limits. The proposed WCUSUM is illustrated through the analysis of national transplant registry data. Copyright © 2014 John Wiley & Sons, Ltd.

19.
In cluster-randomized trials, intervention effects are often formulated by specifying marginal models, fitting them under a working independence assumption, and using robust variance estimates to address the association in the responses within clusters. We develop sample size criteria within this framework, with analyses based on semiparametric Cox regression models fitted with event times subject to right censoring. At the design stage, copula models are specified to enable derivation of the asymptotic variance of estimators from a marginal Cox regression model and to compute the number of clusters necessary to satisfy power requirements. Simulation studies demonstrate the validity of the sample size formula in finite samples for a range of cluster sizes, censoring rates, and degrees of within-cluster association among event times. The power and relative efficiency implications of copula misspecification are studied, as well as the effect of within-cluster dependence in the censoring times. Sample size criteria and other design issues are also addressed for the setting where the event status is only ascertained at periodic assessments and times are interval censored. Copyright © 2014 John Wiley & Sons, Ltd.

20.
Propensity scores have been used widely as a bias reduction method to estimate the treatment effect in nonrandomized studies. Since many covariates are generally included in the model for estimating the propensity scores, the proportion of subjects with at least one missing covariate could be large. While many methods have been proposed for propensity score-based estimation in the presence of missing covariates, little has been published comparing the performance of these methods. In this article we propose a novel method called multiple imputation missingness pattern (MIMP) and compare it with the naive estimator (ignoring propensity score) and three commonly used methods of handling missing covariates in propensity score-based estimation (separate estimation of propensity scores within each pattern of missing data, multiple imputation and discarding missing data) under different mechanisms of missing data and degree of correlation among covariates. Simulation shows that all adjusted estimators are much less biased than the naive estimator. Under certain conditions MIMP provides benefits (smaller bias and mean-squared error) compared with existing alternatives. Copyright © 2009 John Wiley & Sons, Ltd.
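Whatever the imputation scheme (MIMP or standard multiple imputation), the per-dataset estimates are ultimately combined with Rubin's rules, which are worth stating in code. This is a minimal pooling function, standard multiple-imputation material rather than anything specific to this article.

```python
def rubin_pool(estimates, variances):
    """Rubin's rules: pool M completed-data estimates and their variances."""
    m = len(estimates)
    qbar = sum(estimates) / m                                  # pooled point estimate
    ubar = sum(variances) / m                                  # within-imputation variance
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)      # between-imputation variance
    total_var = ubar + (1.0 + 1.0 / m) * b                     # Rubin's total variance
    return qbar, total_var

qbar, total_var = rubin_pool([1.0, 2.0, 3.0], [0.5, 0.5, 0.5])
```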

