20 similar documents found; search took 15 ms.
1.
Methods for dealing with tied event times in the Cox proportional hazards model are well developed. Also, the partial likelihood provides a natural way to handle covariates that change over time. However, ties between event times and the times that discrete time-varying covariates change have not been systematically studied in the literature. In this article, we discuss the default behavior of current software and propose some simple methods for dealing with such ties. A simulation study shows that the default behavior of current software can lead to biased estimates of the coefficient of a binary time-varying covariate and that two proposed methods (Random Jitter and Equally Weighted) reduce estimation bias. The proposed methods can be easily implemented with existing software. The methods are illustrated on the well-known Stanford heart transplant data and data from a study on intimate partner violence and smoking. Copyright © 2012 John Wiley & Sons, Ltd.
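The Random Jitter idea described in this abstract can be sketched in a few lines: break exact ties by perturbing each tied covariate-change time with a tiny random offset before fitting. This is a minimal illustration only, not the authors' implementation; the function name, `eps`, and the uniform offset are assumptions.

```python
import random

def jitter_covariate_changes(event_times, change_times, eps=1e-6, seed=0):
    """Break exact ties between event times and covariate-change times by
    shifting each tied change time by a tiny uniform offset, so the fitting
    software no longer has to pick an arbitrary ordering at the tie."""
    rng = random.Random(seed)
    event_set = set(event_times)
    jittered = []
    for t in change_times:
        if t in event_set:
            t += rng.uniform(-eps, eps)  # tiny random perturbation
        jittered.append(t)
    return jittered
```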
2.
Time-to-event data are very common in observational studies. Unlike randomized experiments, observational studies suffer from both observed and unobserved confounding biases. To adjust for observed confounding in survival analysis, the commonly used methods are the Cox proportional hazards (PH) model, the weighted logrank test, and the inverse probability of treatment weighted Cox PH model. These methods do not rely on fully parametric models, but their practical performance is highly influenced by the validity of the PH assumption. Also, there are few methods addressing the hidden bias in causal survival analysis. We propose a strategy to test for survival function differences based on the matching design and explore the sensitivity of the P-values to assumptions about unmeasured confounding. Specifically, we apply the paired Prentice-Wilcoxon (PPW) test or the modified PPW test to the propensity score matched data. Simulation studies show that the PPW-type test has higher power in situations when the PH assumption fails. For potential hidden bias, we develop a sensitivity analysis based on the matched pairs to assess the robustness of our finding, following Rosenbaum's idea for nonsurvival data. For a real data illustration, we apply our method to an observational cohort of chronic liver disease patients from a Mayo Clinic study. The PPW test based on observed data initially shows evidence of a significant treatment effect. But this finding is not robust, as the sensitivity analysis reveals that the P-value becomes nonsignificant if there exists an unmeasured confounder with a small impact.
3.
We consider Bayesian sensitivity analysis for unmeasured confounding in observational studies where the association between a binary exposure, binary response, measured confounders and a single binary unmeasured confounder can be formulated using logistic regression models. A model for unmeasured confounding is presented along with a family of prior distributions that model beliefs about a possible unknown unmeasured confounder. Simulation from the posterior distribution is accomplished using Markov chain Monte Carlo. Because the model for unmeasured confounding is not identifiable, standard large-sample theory for Bayesian analysis is not applicable. Consequently, the impact of different choices of prior distributions on the coverage probability of credible intervals is unknown. Using simulations, we investigate the coverage probability when averaged with respect to various distributions over the parameter space. The results indicate that credible intervals will have approximately nominal coverage probability, on average, when the prior distribution used for sensitivity analysis approximates the sampling distribution of model parameters in a hypothetical sequence of observational studies. We motivate the method in a study of the effectiveness of beta blocker therapy for treatment of heart failure.
4.
In survival analysis, use of the Cox proportional hazards model requires knowledge of all covariates under consideration at every failure time. Since failure times rarely coincide with observation times, time-dependent covariates (covariates that vary over time) need to be inferred from the observed values. In this paper, we introduce the last value auto-regressed (LVAR) estimation method and compare it to several other established estimation approaches via a simulation study. The comparison shows that under several time-dependent covariate processes this method results in a smaller mean square error when considering the time-dependent covariate effect.
5.
In many time-to-event studies, particularly in epidemiology, the time of the first observation or study entry is arbitrary in the sense that this is not a time of risk modification. We present a formal argument that, in these situations, it is not advisable to take the first observation as the time origin, either in accelerated failure time or proportional hazards models. Instead, we advocate using birth as the time origin. We use a two-stage process to account for the fact that baseline observations may be made at different ages in different subjects. First, we marginally regress any potentially age-varying covariates against age, retaining the residuals. These residuals are then used as covariates in fitting an accelerated failure time or proportional hazards model; we call the procedures residual accelerated failure time regression and residual proportional hazards regression, respectively. We compare residual accelerated failure time regression with the standard approach, demonstrating superior predictive ability of the residual method in realistic examples and potentially higher power of the residual method. This highlights flaws in current approaches to communicating risks from epidemiological evidence to support clinical and health policy decisions. Copyright © 2012 John Wiley & Sons, Ltd.
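The first stage of the two-stage process described above can be sketched as follows: regress an age-varying covariate on age and keep the residuals, which would then replace the raw covariate when fitting the accelerated failure time or proportional hazards model. A minimal sketch using simple least squares; the function name and the single-covariate setup are illustrative assumptions.

```python
def age_residuals(ages, covariate):
    """Stage one of the residual approach: regress the covariate on age by
    simple least squares and return the residuals, which would then enter
    the survival model in place of the raw covariate."""
    n = len(ages)
    mean_a = sum(ages) / n
    mean_x = sum(covariate) / n
    sxx = sum((a - mean_a) ** 2 for a in ages)
    sxy = sum((a - mean_a) * (x - mean_x) for a, x in zip(ages, covariate))
    slope = sxy / sxx
    intercept = mean_x - slope * mean_a
    return [x - (intercept + slope * a) for a, x in zip(ages, covariate)]
```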
6.
Fei Wan, Statistics in Medicine, 2017, 36(5):838-854
The proportional hazard model is one of the most important statistical models used in medical research involving time-to-event data. Simulation studies are routinely used to evaluate the performance and properties of the model and other alternative statistical models for time-to-event outcomes under a variety of situations. Complex simulations that examine multiple situations with different censoring rates demand approaches that can accommodate this variety. In this paper, we propose a general framework for simulating right-censored survival data for proportional hazards models by simultaneously incorporating a baseline hazard function from a known survival distribution, a known censoring time distribution, and a set of baseline covariates. Specifically, we present scenarios in which time to event is generated from exponential or Weibull distributions and censoring time has a uniform or Weibull distribution. The proposed framework incorporates any combination of covariate distributions. We describe the steps involved in nested numerical integration and using a root-finding algorithm to choose the censoring parameter that achieves predefined censoring rates in simulated survival data. We conducted simulation studies to assess the performance of the proposed framework. We demonstrated the application of the new framework in a comprehensively designed simulation study. We investigated the effect of censoring rate on potential bias in estimating the conditional treatment effect using the proportional hazard model in the presence of unmeasured confounding variables. Copyright © 2016 John Wiley & Sons, Ltd.
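As a rough illustration of the framework's two ingredients (generating covariate-dependent event times and tuning the censoring distribution to hit a target censoring rate), the sketch below uses an exponential baseline hazard, one binary covariate, Uniform(0, theta) censoring, and bisection in place of a general root-finding algorithm. All names and parameter choices are illustrative assumptions, not the paper's implementation.

```python
import math
import random

def simulate_cox_data(n, beta, lam0, theta, seed=0):
    """Right-censored data under an exponential baseline hazard lam0,
    one binary covariate with log-hazard ratio beta, and Uniform(0, theta)
    censoring. Returns (observed time, event indicator, covariate) triples."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = rng.randint(0, 1)
        t = rng.expovariate(lam0 * math.exp(beta * x))  # latent event time
        c = rng.uniform(0.0, theta)                     # censoring time
        data.append((min(t, c), int(t <= c), x))
    return data

def find_theta(target_rate, n, beta, lam0, lo=1e-3, hi=100.0, iters=60):
    """Bisection on theta so the simulated censoring proportion matches
    target_rate: larger theta means later censoring, hence less censoring."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        data = simulate_cox_data(n, beta, lam0, mid, seed=1)
        rate = 1.0 - sum(d for _, d, _ in data) / n
        if rate > target_rate:
            lo = mid  # too much censoring: extend follow-up
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With the seed held fixed inside the bisection, the censoring rate is monotone in theta, so the search converges to a parameter achieving the requested rate on that simulated data set.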
7.
Multiple imputation is commonly used to impute missing data, and is typically more efficient than complete cases analysis in regression analysis when covariates have missing values. Imputation may be performed using a regression model for the incomplete covariates on other covariates and, importantly, on the outcome. With a survival outcome, it is a common practice to use the event indicator D and the log of the observed event or censoring time T in the imputation model, but the rationale is not clear. We assume that the survival outcome follows a proportional hazards model given covariates X and Z. We show that a suitable model for imputing binary or Normal X is a logistic or linear regression on the event indicator D, the cumulative baseline hazard H0(T), and the other covariates Z. This result is exact in the case of a single binary covariate; in other cases, it is approximately valid for small covariate effects and/or small cumulative incidence. If we do not know H0(T), we approximate it by the Nelson-Aalen estimator of H(T) or estimate it by Cox regression. We compare the methods using simulation studies. We find that using log T biases covariate-outcome associations towards the null, while the new methods have lower bias. Overall, we recommend including the event indicator and the Nelson-Aalen estimator of H(T) in the imputation model. Copyright © 2009 John Wiley & Sons, Ltd.
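The quantity this abstract recommends adding to the imputation model, the Nelson-Aalen estimate of the cumulative hazard H(T) evaluated at each subject's observed time, can be computed with a single pass over the sorted data. A minimal sketch (ties are handled naively, and the function name is an assumption):

```python
def nelson_aalen(times, events):
    """Nelson-Aalen estimate of the cumulative hazard H(t), evaluated at
    each subject's own observed time; events[i] is 1 for an observed event
    and 0 for censoring. Tied times are handled naively."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    n_at_risk = len(times)
    h = 0.0
    H = [0.0] * len(times)
    for i in order:
        if events[i]:
            h += 1.0 / n_at_risk  # hazard increment at an event time
        H[i] = h
        n_at_risk -= 1
    return H
```

These values, together with the event indicator, would then be included as predictors in the logistic or linear imputation regression.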
8.
As medical expenses continue to rise, methods to properly analyze cost outcomes are becoming of increasing relevance when seeking to compare average costs across treatments. Inverse probability weighted regression models have been developed to address the challenge of cost censoring in order to identify intent-to-treat effects (i.e., to compare mean costs between groups on the basis of their initial treatment assignment, irrespective of any subsequent changes to their treatment status). In this paper, we describe a nested g-computation procedure that can be used to compare mean costs between two or more time-varying treatment regimes. We highlight the relative advantages and limitations of this approach when compared with existing regression-based models. We illustrate the utility of this approach as a means to inform public policy by applying it to a simulated data example motivated by costs associated with cancer treatments. Simulations confirm that inference regarding intent-to-treat effects versus the joint causal effects estimated by the nested g-formula can lead to markedly different conclusions regarding differential costs. Therefore, it is essential to prespecify the desired target of inference when choosing between these two frameworks. The nested g-formula should be considered as a useful, complementary tool to existing methods when analyzing cost outcomes.
9.
David J. Hendry, Statistics in Medicine, 2014, 33(3):436-454
The proliferation of longitudinal studies has increased the importance of statistical methods for time-to-event data that can incorporate time-dependent covariates. The Cox proportional hazards model is one such method that is widely used. As more extensions of the Cox model with time-dependent covariates are developed, simulation studies will grow in importance as well. An essential starting point for simulation studies of time-to-event models is the ability to produce simulated survival times from a known data generating process. This paper develops a method for the generation of survival times that follow a Cox proportional hazards model with time-dependent covariates. The method presented relies on a simple transformation of random variables generated according to a truncated piecewise exponential distribution and allows practitioners great flexibility and control over both the number of time-dependent covariates and the number of time periods in the duration of follow-up measurement. Within this framework, an additional argument is suggested that allows researchers to generate time-to-event data in which covariates change at integer-valued steps of the time scale. The purpose of this approach is to produce data for simulation experiments that mimic the types of data structures that applied researchers encounter when using longitudinal biomedical data. Validity is assessed in a set of simulation experiments, and results indicate that the proposed procedure performs well in producing data that conform to the assumptions of the Cox proportional hazards model. Copyright © 2013 John Wiley & Sons, Ltd.
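The core trick, inverting a piecewise-constant cumulative hazard against a unit-exponential draw, can be sketched as below for a hazard of the form lam0 * exp(beta * x(t)) with x(t) piecewise constant. The names and single-covariate setup are illustrative assumptions rather than the paper's exact procedure.

```python
import math
import random

def draw_survival_time(lam0, beta, change_times, x_values, rng):
    """Draw one survival time from the piecewise-constant hazard
    lam0 * exp(beta * x(t)), where x(t) equals x_values[k] on the
    interval beginning at change_times[k] (change_times[0] must be 0).
    The cumulative hazard is inverted against -log(U), U ~ Uniform(0,1)."""
    target = -math.log(rng.random())  # unit-exponential cumulative hazard
    cum = 0.0
    for k, start in enumerate(change_times):
        rate = lam0 * math.exp(beta * x_values[k])
        end = change_times[k + 1] if k + 1 < len(change_times) else math.inf
        segment = rate * (end - start)
        if cum + segment >= target:
            # the target cumulative hazard is reached inside this interval
            return start + (target - cum) / rate
        cum += segment
    return math.inf  # unreachable: the last interval is unbounded
```

With a constant covariate this reduces to an ordinary exponential draw, which gives a simple sanity check on the inversion.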
10.
Observational studies provide a rich source of information for assessing effectiveness of treatment interventions in many situations where it is not ethical or practical to perform randomized controlled trials. However, such studies are prone to bias from hidden (unmeasured) confounding. A promising approach to identifying and reducing the impact of unmeasured confounding is prior event rate ratio (PERR) adjustment, a quasi-experimental analytic method proposed in the context of electronic medical record database studies. In this paper, we present a statistical framework for using a pairwise approach to PERR adjustment that removes bias inherent in the original PERR method. A flexible pairwise Cox likelihood function is derived and used to demonstrate the consistency of the simple and convenient alternative PERR (PERR-ALT) estimator. We show how to estimate standard errors and confidence intervals for treatment effect estimates based on the observed information and provide R code to illustrate how to implement the method. Assumptions required for the pairwise approach (as well as PERR) are clarified, and the consequences of model misspecification are explored. Our results confirm the need for researchers to consider carefully the suitability of the method in the context of each problem. Extensions of the pairwise likelihood to more complex designs involving time-varying covariates or more than two periods are considered. We illustrate the application of the method using data from a longitudinal cohort study of enzyme replacement therapy for lysosomal storage disorders. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.
11.
Molin Wang, Xiaomei Liao, Francine Laden, Donna Spiegelman, Statistics in Medicine, 2016, 35(13):2283-2295
Identification of the latency period and age-related susceptibility, if any, is an important aspect of assessing risks of environmental, nutritional, and occupational exposures. We consider estimation and inference for latency and age-related susceptibility in relative risk and excess risk models. We focus on likelihood-based methods for point and interval estimation of the latency period and age-related windows of susceptibility coupled with several commonly considered exposure metrics. The method is illustrated in a study of the timing of the effects of constituents of air pollution on mortality in the Nurses' Health Study. Copyright © 2016 John Wiley & Sons, Ltd.
12.
Countermatching designs can provide more efficient estimates than simple matching or case-cohort designs in certain situations, such as when good surrogate variables for an exposure of interest are available. We extend pseudolikelihood estimation for the Cox model under countermatching designs to models where time-varying covariates are considered. We also implement pseudolikelihood with calibrated weights to improve efficiency in nested case-control designs in the presence of time-varying variables. A simulation study is carried out which considers four scenarios: a binary time-dependent variable, a continuous time-dependent variable, and each of these with interactions. Simulation results show that pseudolikelihood with calibrated weights under countermatching offers large gains in efficiency if compared to case-cohort. Pseudolikelihood with calibrated weights yielded more efficient estimators than plain pseudolikelihood estimators. Additionally, estimators were more efficient under countermatching than under case-cohort for the situations considered. The methods are illustrated using the Colorado Plateau uranium miners cohort. Furthermore, we present a general method to generate survival times with time-varying covariates. Copyright © 2016 John Wiley & Sons, Ltd.
13.
Cox's proportional hazards model can be extended to accommodate time-dependent effects of prognostic factors. We briefly review these extensions along with their varying degrees of freedom. Spending more degrees of freedom with conventional procedures (a priori defined interactions with simple functions of time, restricted natural splines, piecewise estimation for partitions of the time axis) allows the fitting of almost any shape of time dependence but at an increased risk of over-fit. This results in increased width of confidence intervals of time-dependent hazard ratios and in reduced power to confirm any time-dependent effect or even any effect of a prognostic factor. By means of comparative empirical studies the consequences of over-fitting time-dependent effects have been explored. We conclude that fractional polynomials, and similarly penalized likelihood approaches, today are the methods of choice, avoiding over-fit by parsimonious use of degrees of freedom but also permitting flexible modelling if time dependence of a usually a priori unknown shape is present in a data set. The paradigm of a parsimonious analysis of time-dependent effects is exemplified by means of a gastric cancer study.
14.
In order to yield more flexible models, the Cox regression model, λ(t;x) = λ0(t)exp(βx), has been generalized using different non-parametric model estimation techniques. One generalization is the relaxation of log-linearity in x, λ(t;x) = λ0(t)exp[r(x)]. Another is the relaxation of the proportional hazards assumption, λ(t;x) = λ0(t)exp[β(t)x]. These generalizations are typically considered independently of each other. We propose the product model, λ(t;x) = λ0(t)exp[β(t)r(x)], which allows for joint estimation of both effects, and investigate its properties. The functions describing the time-dependent β(t) and non-linear r(x) effects are modelled simultaneously using regression splines and estimated by maximum partial likelihood. Likelihood ratio tests are proposed to compare alternative models. Simulations indicate that both the recovery of the shapes of the two functions and the size of the tests are reasonably accurate provided they are based on the correct model. By contrast, type I error rates may be highly inflated, and the estimates considerably biased, if the model is misspecified. Applications in cancer epidemiology illustrate how the product model may yield new insights about the role of prognostic factors.
15.
Daniel Almirall, Beth Ann Griffin, Daniel F. McCaffrey, Rajeev Ramchand, Robert A. Yuen, Susan A. Murphy, Statistics in Medicine, 2014, 33(20):3466-3487
This article considers the problem of examining time-varying causal effect moderation using observational, longitudinal data in which treatment, candidate moderators, and possible confounders are time varying. The structural nested mean model (SNMM) is used to specify the moderated time-varying causal effects of interest in a conditional mean model for a continuous response given time-varying treatments and moderators. We present an easy-to-use estimator of the SNMM that combines an existing regression-with-residuals (RR) approach with an inverse-probability-of-treatment weighting (IPTW) strategy. The RR approach has been shown to identify the moderated time-varying causal effects if the time-varying moderators are also the sole time-varying confounders. The proposed IPTW+RR approach provides estimators of the moderated time-varying causal effects in the SNMM in the presence of an additional, auxiliary set of known and measured time-varying confounders. We use a small simulation experiment to compare IPTW+RR versus the traditional regression approach and to compare small and large sample properties of asymptotic versus bootstrap estimators of the standard errors for the IPTW+RR approach. This article clarifies the distinction between time-varying moderators and time-varying confounders. We illustrate the methodology in a case study to assess if time-varying substance use moderates treatment effects on future substance use. Copyright © 2013 John Wiley & Sons, Ltd.
16.
In evaluating the risk of mortality or development of opportunistic infections in HIV-infected patients, the number of CD4 lymphocyte cells per cubic millimetre of blood is widely recognized as one of the best available predictors of such future events. However, its usefulness is limited by the incompleteness and variability of such CD4 measurements during follow-up. Because of these limitations, analysis of such data requires the missing measurements to be 'filled in' or the patients without them to be excluded. We consider multiple imputation of CD4 values based partly on information from other health status measures such as haemoglobin, as well as on the event status of interest. These alternative health status measures are also considered as possible independent predictors of survival endpoints. Our work is motivated by a cohort of 1530 patients enrolled in two AIDS clinical trials. We compare our approach to other strategies such as basing evaluation of risk on baseline CD4, the last measured CD4 before an event, or a time-dependent covariate based on carrying the last CD4 value forward; we conclude with a strong recommendation for multiple imputation.
17.
One feature of cohort studies is that exposure status can change over time. How to make full use of changes in exposures and their covariates, and of the relationships among them, so as to obtain a more faithful picture of the exposure-outcome relationship is a current research focus. Taking the Kailuan cohort as an example, this study examines how to use Cox proportional hazards regression and its extensions, including time-dependent Cox regression and marginal structural models, to investigate the association between FPG and liver cancer based on baseline exposure status, time-varying exposure information, and with simultaneous control of time-dependent confounders. The basic principles, conditions of application, estimation results, and interpretation of these extended models are summarized and compared.
18.
Analyzing Data in Which the Outcome is Time to an Event Part II: The Presence of Multiple Covariates
In this article, the second of a series on the analysis of time to event data, we address the case in which multiple predictors (covariates) that may influence the time to an event are taken into account. The hazard function is introduced, and is given in a form useful for assessing the impact of multiple covariates on time to an event. Methods for the assessment of model fitting are also discussed, and an example with cancer survival as the outcome and the presence or absence of multiple genes as covariates is presented.
19.
In this work, we propose a single nucleotide polymorphism (SNP) set association test for censored phenotypes in the presence of a family-based design. The proposed test is valid for both common and rare variants. A proportional hazards Cox model is specified for the marginal distribution of the trait and the familial dependence is modeled via a Gaussian copula. Censored values are treated as partially missing data and a multiple imputation procedure is proposed in order to compute the test statistics. The P-value is then deduced analytically. The finite-sample empirical properties of the proposed method are evaluated and compared to existing competitors by simulations and its use is illustrated using a breast cancer data set from the Consortium of Investigators of Modifiers of BRCA1 and BRCA2.
20.
Many epidemiological studies assess the effects of time-dependent exposures, where both the exposure status and its intensity vary over time. One example that attracts public attention concerns pharmacoepidemiological studies of the adverse effects of medications. The analysis of such studies poses challenges for modeling the impact of complex time-dependent drug exposure, especially given the uncertainty about the way effects cumulate over time and about the etiological relevance of doses taken in different time periods. We present a flexible method for modeling cumulative effects of time-varying exposures, weighted by recency, represented by time-dependent covariates in the Cox proportional hazards model. The function that assigns weights to doses taken in the past is estimated using cubic regression splines. We validated the method in simulations and applied it to re-assess the association between exposure to a psychotropic drug and fall-related injuries in the elderly. Copyright © 2009 John Wiley & Sons, Ltd.
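For a given weight function, the weighted cumulative exposure this abstract describes reduces to a recency-weighted sum over past doses. A minimal sketch (in the paper the weight function is estimated with cubic regression splines; here it is supplied by the caller, and all names are illustrative):

```python
def weighted_cumulative_exposure(dose_history, t, weight):
    """Recency-weighted cumulative exposure at time t: each past dose d
    taken at time u <= t contributes weight(t - u) * d. `dose_history`
    is a list of (time, dose) pairs; `weight` is any weight function."""
    return sum(weight(t - u) * d for u, d in dose_history if u <= t)
```

This time-varying summary would then enter the Cox model as a time-dependent covariate.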