Similar Documents
1.
For time‐to‐event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact of, or addressing, errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression‐free survival or time to AIDS progression) can be difficult to assess or reliant on self‐report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log‐linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic.
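To make the SIMEX idea concrete, the sketch below adds extra simulated error to the observed event times at several variance-inflation levels, refits a one-covariate Cox partial likelihood at each level, and extrapolates the averaged estimates back to the no-error setting. The error model (multiplicative log-normal), the quadratic extrapolant, and all parameter values are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal SIMEX sketch for classical error in the event time, assuming a
# multiplicative log-normal error model and a quadratic extrapolation function.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def cox_log_hr(time, event, x):
    """Log hazard ratio for one covariate via the Cox partial likelihood (no ties assumed)."""
    order = np.argsort(time)
    d, z = event[order], x[order]
    def neg_pl(beta):
        eta = beta[0] * z
        # log of the risk-set sum: subjects with time >= current time (suffix sums)
        log_risk = np.log(np.cumsum(np.exp(eta)[::-1])[::-1])
        return -np.sum(d * (eta - log_risk))
    return minimize(neg_pl, np.zeros(1), method="BFGS").x[0]

def simex_log_hr(obs_time, event, x, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), B=20):
    grid, means = [0.0], [cox_log_hr(obs_time, event, x)]   # naive estimate at lambda = 0
    for lam in lambdas:
        reps = [cox_log_hr(obs_time * np.exp(np.sqrt(lam) * sigma_u *
                                             rng.standard_normal(len(obs_time))),
                           event, x)
                for _ in range(B)]
        grid.append(lam)
        means.append(np.mean(reps))
    # extrapolate the estimate-vs-lambda curve back to lambda = -1 (no measurement error)
    return np.polyval(np.polyfit(grid, means, deg=2), -1.0)

# toy data: true log HR = 0.7, log-normal measurement error on the event time
n = 3000
x = rng.binomial(1, 0.5, n).astype(float)
t_true = rng.exponential(1.0 / np.exp(0.7 * x))
event = np.ones(n)
t_obs = t_true * np.exp(0.5 * rng.standard_normal(n))        # sigma_u = 0.5

print("naive log HR:", round(cox_log_hr(t_obs, event, x), 3))
print("SIMEX log HR:", round(simex_log_hr(t_obs, event, x, sigma_u=0.5), 3))
```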

2.
Unmeasured confounding is a common concern when researchers attempt to estimate a treatment effect using observational data or randomized studies with nonperfect compliance. To address this concern, instrumental variable methods, such as 2‐stage predictor substitution (2SPS) and 2‐stage residual inclusion (2SRI), have been widely adopted. In many clinical studies of binary and survival outcomes, 2SRI has been accepted as the method of choice over 2SPS, but a compelling theoretical rationale has not been postulated. We evaluate the bias and consistency in estimating the conditional treatment effect for both 2SPS and 2SRI when the outcome is binary, count, or time to event. We demonstrate analytically that the bias in 2SPS and 2SRI estimators can be reframed to mirror the problem of omitted variables in nonlinear models and that there is a direct relationship with the collapsibility of effect measures. In contrast to conclusions made by previous studies (Terza et al, 2008), we demonstrate that the consistency of 2SRI estimators holds only under the following conditions: (1) the null hypothesis is true; (2) the outcome model is collapsible; or (3) when estimating the nonnull causal effect from Cox or logistic regression models, the strong and unrealistic assumption holds that the effect of the unmeasured covariates on the treatment is proportional to their effect on the outcome. We propose a novel dissimilarity metric to provide an intuitive explanation of the bias of 2SRI estimators in noncollapsible models and demonstrate that with increasing dissimilarity between the effects of the unmeasured covariates on the treatment versus outcome, the bias of 2SRI increases in magnitude.
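The contrast between the two estimators is easy to see in a small simulation. The sketch below fits the first-stage linear model of treatment on an instrument, then obtains second-stage logistic estimates using either the predicted treatment (2SPS) or the observed treatment plus the first-stage residual (2SRI). The data-generating values, including the true conditional log odds ratio of 1.0 and the unmeasured confounder, are assumptions made for illustration only.

```python
# Numerical sketch of 2SPS vs 2SRI for a binary outcome with an unmeasured confounder.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 20_000

def logit_mle(y, X):
    """Logistic regression coefficients by maximum likelihood."""
    def nll(b):
        eta = X @ b
        return np.sum(np.log1p(np.exp(eta)) - y * eta)
    return minimize(nll, np.zeros(X.shape[1]), method="BFGS").x

z = rng.binomial(1, 0.5, n)                                   # instrument
u = rng.standard_normal(n)                                    # unmeasured confounder
a = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * z + 0.8 * u))))   # treatment
y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 1.0 * a + 0.8 * u))))  # true conditional log OR = 1

# stage 1: linear model of treatment on the instrument
X1 = np.column_stack([np.ones(n), z])
b1, *_ = np.linalg.lstsq(X1, a, rcond=None)
a_hat = X1 @ b1
resid = a - a_hat

# 2SPS: substitute the predicted treatment; 2SRI: keep the treatment, add the residual
b_2sps = logit_mle(y, np.column_stack([np.ones(n), a_hat]))
b_2sri = logit_mle(y, np.column_stack([np.ones(n), a, resid]))

print("2SPS log OR:", round(b_2sps[1], 3))
print("2SRI log OR:", round(b_2sri[1], 3))   # neither is guaranteed to recover 1.0 in this noncollapsible model
```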

3.
This paper provides guidance for researchers with some mathematical background on the conduct of time‐to‐event analysis in observational studies based on intensity (hazard) models. Discussions of basic concepts like time axis, event definition and censoring are given. Hazard models are introduced, with special emphasis on the Cox proportional hazards regression model. We provide checklists that may be useful both when fitting the model and assessing its goodness of fit and when interpreting the results. Special attention is paid to how to avoid problems with immortal time bias by introducing time‐dependent covariates. We discuss prediction based on hazard models and difficulties when attempting to draw proper causal conclusions from such models. Finally, we present a series of examples where the methods and checklists are exemplified. Computational details and implementation using the freely available R software are documented in Supplementary Material. The paper was prepared as part of the STRATOS initiative.
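One of the checklist items above, handling a treatment initiated during follow-up as a time-dependent covariate, is commonly implemented by splitting each subject's follow-up into counting-process (start, stop] intervals, so the pre-treatment person-time is not misclassified as treated. The sketch below shows that layout for a single subject; the column order and variable names are assumptions chosen for illustration.

```python
# Splitting follow-up into (start, stop] rows so that treatment received during
# follow-up enters the Cox model as a time-dependent covariate; counting the
# pre-treatment person-time as "treated" is the source of immortal time bias.
def split_time_dependent(subject_id, follow_up, event, treat_time=None):
    """Return (id, start, stop, status, treated) rows for one subject."""
    rows = []
    if treat_time is not None and treat_time < follow_up:
        rows.append((subject_id, 0.0, treat_time, 0, 0))            # untreated person-time
        rows.append((subject_id, treat_time, follow_up, event, 1))  # treated person-time
    else:
        rows.append((subject_id, 0.0, follow_up, event, 0))
    return rows

# subject 7: treated at t = 2.5, event at t = 6.0
for row in split_time_dependent(7, follow_up=6.0, event=1, treat_time=2.5):
    print(row)
```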

4.
Proportional hazards models are among the most popular regression models in survival analysis. Multi‐state models generalize them by jointly considering different types of events and their interrelations, whereas frailty models incorporate random effects to account for unobserved risk factors, possibly shared by clusters of subjects. The integration of multi‐state and frailty methodology is an interesting way to control for unobserved heterogeneity in the presence of complex event history structures and is particularly appealing for multicenter clinical trials. We propose the incorporation of correlated frailties in the transition‐specific hazard function, thanks to a nested hierarchy. We studied a semiparametric estimation approach based on maximum integrated partial likelihood. We show in a simulation study that the nested frailty multi‐state model improves the estimation of the effect of covariates, as well as the coverage probability of their confidence intervals. We present a case study concerning a prostate cancer multicenter clinical trial. The multi‐state nature of the model allows us to demonstrate the effect of treatment on death while taking intermediate events into account. Copyright © 2015 John Wiley & Sons, Ltd.

5.
Shared parameter joint models provide a framework under which a longitudinal response and a time to event can be modelled simultaneously. A common assumption in shared parameter joint models has been that the longitudinal response is normally distributed. In this paper, we instead propose a joint model that incorporates a two‐part ‘hurdle’ model for the longitudinal response, motivated in part by longitudinal response data that are subject to a detection limit. The first part of the hurdle model estimates the probability that the longitudinal response is observed above the detection limit, whilst the second part of the hurdle model estimates the mean of the response conditional on having exceeded the detection limit. The time‐to‐event outcome is modelled using a parametric proportional hazards model, assuming a Weibull baseline hazard. We propose a novel association structure whereby the current hazard of the event is assumed to be associated with the current combined (expected) outcome from the two parts of the hurdle model. We estimate our joint model under a Bayesian framework and provide code for fitting the model using the Bayesian software Stan. We use our model to estimate the association between HIV RNA viral load, which is subject to a lower detection limit, and the hazard of stopping or modifying treatment in patients with HIV initiating antiretroviral therapy. Copyright © 2016 John Wiley & Sons, Ltd.

6.
The accelerated failure time (AFT) model has been suggested as an alternative to the Cox proportional hazards model. However, a parametric AFT model requires the specification of an appropriate distribution for the event time, which is often difficult to identify in real‐life studies and may limit applications. A semiparametric AFT model was developed by Komárek et al based on a smoothed error distribution that does not require such specification. In this article, we develop a spline‐based AFT model that also does not require specification of a parametric family for the event time distribution. The baseline hazard function is modeled by regression B‐splines, allowing for the estimation of a variety of smooth and flexible shapes. In comprehensive simulations, we validate the performance of our approach and compare it with the results from parametric AFT models and the approach of Komárek. Both the proposed spline‐based AFT model and the approach of Komárek provided unbiased estimates of covariate effects and survival curves for a variety of scenarios in which the event time followed different distributions, including both simple and complex cases. Spline‐based estimates of the baseline hazard also showed satisfactory numerical stability. As expected, the baseline hazard and survival probabilities estimated by the misspecified parametric AFT models deviated from the truth. We illustrated the application of the proposed model in a study of colon cancer.

7.
Survival models incorporating random effects to account for unmeasured heterogeneity are being increasingly used in biostatistical and applied research. Specifically, unmeasured covariates whose lack of inclusion in the model would lead to biased, inefficient results are commonly modeled by including a subject-specific (or cluster-specific) frailty term that follows a given distribution (eg, gamma or lognormal). Nevertheless, in the context of parametric frailty models, little is known about the impact of misspecifying the baseline hazard or the frailty distribution or both. Therefore, our aim is to quantify the impact of such misspecification in a wide variety of clinically plausible scenarios via Monte Carlo simulation, using open-source software readily available to applied researchers. We generate clustered survival data assuming various baseline hazard functions, including mixture distributions with turning points, and assess the impact of sample size, variance of the frailty, baseline hazard function, and frailty distribution. Models compared include standard parametric distributions and more flexible spline-based approaches; we also included semiparametric Cox models. The resulting bias can be clinically relevant. In conclusion, we highlight the importance of fitting models that are flexible enough and the importance of assessing model fit. We illustrate our conclusions with two applications using data on diabetic retinopathy and bladder cancer. Our results show the importance of assessing model fit with respect to the baseline hazard function and the distribution of the frailty: misspecifying the former leads to biased relative and absolute risk estimates, whereas misspecifying the latter affects absolute risk estimates and measures of heterogeneity.
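As a companion to the simulation design described above, the sketch below generates clustered survival data with a shared gamma frailty and a Weibull baseline cumulative hazard by inverting the conditional survival function. The parameter values and the uniform administrative censoring are illustrative assumptions, not the settings used in the paper.

```python
# Simulating clustered survival times with a shared gamma frailty and a Weibull
# baseline hazard H0(t) = lam * t**shape, by inverting
# S(t | w, x) = exp(-w * exp(beta * x) * H0(t)).
import numpy as np

rng = np.random.default_rng(2)

def simulate_clustered(n_clusters=50, cluster_size=20, beta=0.5,
                       lam=0.1, shape=1.5, frailty_var=0.4, cens_max=15.0):
    theta = frailty_var
    rows = []
    for c in range(n_clusters):
        w = rng.gamma(shape=1 / theta, scale=theta)       # frailty with mean 1, variance theta
        x = rng.binomial(1, 0.5, cluster_size)
        u = rng.uniform(size=cluster_size)
        t = (-np.log(u) / (w * lam * np.exp(beta * x))) ** (1 / shape)
        cens = rng.uniform(0, cens_max, cluster_size)
        time = np.minimum(t, cens)
        event = (t <= cens).astype(int)
        rows.extend(zip([c] * cluster_size, x, time, event))
    return rows

data = simulate_clustered()
print(data[:3], "... total rows:", len(data))
```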

8.
Cox models are commonly used in the analysis of time to event data. One advantage of Cox models is the ability to include time‐varying covariates, often a binary covariate that codes for the occurrence of an event that affects an individual subject. A common assumption in this case is that the effect of the event on the outcome of interest is constant and permanent for each subject. In this paper, we propose a modification to the Cox model to allow the influence of an event to exponentially decay over time. Methods for generating data using the inverse cumulative density function for the proposed model are developed. Likelihood ratio tests and AIC are investigated as methods for comparing the proposed model to the commonly used permanent exposure model. A simulation study is performed, and 3 different data sets are presented as examples.
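The data-generation step can be sketched generically by numerically inverting the cumulative hazard: with a unit-exponential draw E, the event time solves H(T) = E, where the hazard jumps at the exposure time and then decays exponentially. The paper derives its own inverse-CDF generator; the numerical version below, with assumed parameter values, only illustrates the same idea.

```python
# Generating an event time whose hazard jumps at exposure time t_e and then decays,
# h(t) = h0 * exp(beta * exp(-rho * (t - t_e))) for t >= t_e, by numerical inversion
# of the cumulative hazard.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def hazard(t, h0, beta, rho, t_e):
    if t < t_e:
        return h0
    return h0 * np.exp(beta * np.exp(-rho * (t - t_e)))

def cum_hazard(t, *args):
    return quad(hazard, 0.0, t, args=args)[0]

def draw_event_time(h0=0.05, beta=1.0, rho=0.3, t_e=2.0, rng=np.random.default_rng(3)):
    e = rng.exponential()
    upper = 1.0
    while cum_hazard(upper, h0, beta, rho, t_e) < e:   # bracket the root of H(T) = E
        upper *= 2.0
    return brentq(lambda t: cum_hazard(t, h0, beta, rho, t_e) - e, 1e-8, upper)

print([round(draw_event_time(), 2) for _ in range(5)])
```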

9.
The proliferation of longitudinal studies has increased the importance of statistical methods for time‐to‐event data that can incorporate time‐dependent covariates. The Cox proportional hazards model is one such method that is widely used. As more extensions of the Cox model with time‐dependent covariates are developed, simulation studies will grow in importance as well. An essential starting point for simulation studies of time‐to‐event models is the ability to produce simulated survival times from a known data generating process. This paper develops a method for the generation of survival times that follow a Cox proportional hazards model with time‐dependent covariates. The method presented relies on a simple transformation of random variables generated according to a truncated piecewise exponential distribution and allows practitioners great flexibility and control over both the number of time‐dependent covariates and the number of time periods in the duration of follow‐up measurement. Within this framework, an additional argument is suggested that allows researchers to generate time‐to‐event data in which covariates change at integer‐valued steps of the time scale. The purpose of this approach is to produce data for simulation experiments that mimic the types of data structures that applied researchers encounter when using longitudinal biomedical data. Validity is assessed in a set of simulation experiments, and results indicate that the proposed procedure performs well in producing data that conform to the assumptions of the Cox proportional hazards model. Copyright © 2013 John Wiley & Sons, Ltd.
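The transformation is compact enough to show directly: a unit-exponential draw is spent interval by interval at the piecewise-constant hazard implied by the current covariate value, and the event lands inside the interval whose hazard exhausts the draw. The baseline hazard, coefficient, and covariate path below are illustrative assumptions.

```python
# Piecewise-exponential generation of a survival time under a Cox model with a
# covariate that changes value at integer-valued follow-up times.
import numpy as np

def draw_time_piecewise(x_path, baseline=0.1, beta=0.7, rng=np.random.default_rng(4)):
    """x_path[k] is the covariate value on the interval [k, k+1); returns (time, event)."""
    e = rng.exponential()
    for k, xk in enumerate(x_path):
        rate = baseline * np.exp(beta * xk)
        if e < rate * 1.0:                 # event occurs within this unit-length interval
            return k + e / rate, 1
        e -= rate * 1.0
    return float(len(x_path)), 0           # administratively censored at end of follow-up

# covariate switches from 0 to 1 at t = 3
print(draw_time_piecewise([0, 0, 0, 1, 1, 1, 1, 1, 1, 1]))
```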

10.
It is routinely argued that, unlike standard regression‐based estimates, inverse probability weighted (IPW) estimates of the parameters of a correctly specified Cox marginal structural model (MSM) may remain unbiased in the presence of a time‐varying confounder affected by prior treatment. Previously proposed methods for simulating from a known Cox MSM lack knowledge of the law of the observed outcome conditional on the measured past. Although unbiased IPW estimation does not require this knowledge, standard regression‐based estimates rely on correct specification of this law. Thus, in typical high‐dimensional settings, such simulation methods cannot isolate bias due to complex time‐varying confounding as it may be conflated with bias due to misspecification of the outcome regression model. In this paper, we describe an approach to Cox MSM data generation that allows for a comparison of the bias of IPW estimates versus that of standard regression‐based estimates in the complete absence of model misspecification. This approach involves simulating data from a standard parametrization of the likelihood and solving for the underlying Cox MSM. We prove that solutions exist and computations are tractable under many data‐generating mechanisms. We show analytically and confirm in simulations that, in the absence of model misspecification, the bias of standard regression‐based estimates for the parameters of a Cox MSM is indeed a function of the coefficients in observed data models quantifying the presence of a time‐varying confounder affected by prior treatment. We discuss limitations of this approach including that implied by the ‘g‐null paradox’. Copyright © 2013 John Wiley & Sons, Ltd.

11.
In survival analysis, a competing risk is an event whose occurrence precludes the occurrence of the primary event of interest. Outcomes in medical research are frequently subject to competing risks. Two key questions can be addressed using competing risk regression models: first, which covariates affect the rate at which events occur, and second, which covariates affect the probability of an event occurring over time. The cause‐specific hazard model estimates the effect of covariates on the rate at which events occur in subjects who are currently event‐free. Subdistribution hazard ratios obtained from the Fine‐Gray model describe the relative effect of covariates on the subdistribution hazard function. Hence, the covariates in this model can also be interpreted as having an effect on the cumulative incidence function or on the probability of events occurring over time. We conducted a review of the use and interpretation of the Fine‐Gray subdistribution hazard model in articles published in the medical literature in 2015. We found that many authors provided an unclear or incorrect interpretation of the regression coefficients associated with this model. An incorrect and inconsistent interpretation of regression coefficients may lead to confusion when comparing results across different studies. Furthermore, an incorrect interpretation of estimated regression coefficients can result in an incorrect understanding about the magnitude of the association between exposure and the incidence of the outcome. The objective of this article is to clarify how these regression coefficients should be reported and to propose suggestions for interpreting these coefficients.
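A small numerical illustration of the intended interpretation uses the Fine-Gray link between the subdistribution hazard and the cumulative incidence function, CIF(t | x) = 1 − exp{−H0(t) exp(βx)}: the subdistribution hazard ratio exp(β) scales the cumulative subdistribution hazard and therefore shifts the cumulative incidence monotonically, which is why its coefficients speak to the probability of events over time. The baseline cumulative subdistribution hazard below is an assumed toy form.

```python
# How a subdistribution hazard ratio translates into cumulative incidence under the
# Fine-Gray model, with an assumed linear baseline cumulative subdistribution hazard.
import numpy as np

def cif_fine_gray(t, x, beta=0.5, h0_scale=0.02):
    baseline_cum_subhaz = h0_scale * t          # assumed toy baseline
    return 1.0 - np.exp(-baseline_cum_subhaz * np.exp(beta * x))

for t in (1, 5, 10):
    print(f"t={t}: CIF(x=0)={cif_fine_gray(t, 0):.3f}  CIF(x=1)={cif_fine_gray(t, 1):.3f}")
```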

12.
Marginal structural Cox models are used for quantifying marginal treatment effects on the outcome event hazard function. Such models are estimated using inverse probability of treatment and censoring (IPTC) weighting, which properly accounts for the impact of time‐dependent confounders, avoiding conditioning on factors on the causal pathway. To estimate the IPTC weights, the treatment assignment mechanism is conventionally modeled in discrete time. While this is natural in situations where treatment information is recorded at scheduled follow‐up visits, in other contexts, the events specifying the treatment history can be modeled in continuous time using the tools of event history analysis. This is particularly the case for treatment procedures, such as surgeries. In this paper, we propose a novel approach for flexible parametric estimation of continuous‐time IPTC weights and illustrate it in assessing the relationship between metastasectomy and mortality in metastatic renal cell carcinoma patients. Copyright © 2016 John Wiley & Sons, Ltd.

13.
We consider estimation of treatment effects in two‐stage adaptive multi‐arm trials with a common control. The best treatment is selected at interim, and the primary endpoint is modeled via a Cox proportional hazards model. The maximum partial‐likelihood estimator of the log hazard ratio of the selected treatment will overestimate the true treatment effect in this case. Several methods for reducing the selection bias have been proposed for normal endpoints, including an iterative method based on the estimated conditional selection biases and a shrinkage approach based on empirical Bayes theory. We adapt these methods to time‐to‐event data and compare the bias and mean squared error of all methods in an extensive simulation study and apply the proposed methods to reconstructed data from the FOCUS trial. We find that all methods tend to overcorrect the bias, and only the shrinkage methods can reduce the mean squared error. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

14.
Many epidemiological studies use a nested case‐control (NCC) design to reduce cost while maintaining study power. Because NCC sampling is conditional on the primary outcome, routine application of logistic regression to analyze a secondary outcome will generally be biased. Several methods have recently been proposed to obtain unbiased estimates of risk for a secondary outcome from NCC data. All current methods share two requirements: the times of onset of the secondary outcome must be known for cohort members not selected into the NCC study, and the hazards of the two outcomes must be conditionally independent given the available covariates. This last assumption will not be plausible when the individual frailty of study subjects is not captured by the measured covariates. We provide a maximum‐likelihood method that explicitly models the individual frailties and also avoids the need to have access to the full cohort data. We derive the likelihood contribution by respecting the original sampling procedure with respect to the primary outcome. We use proportional hazard models for the individual hazards, and Clayton's copula is used to model additional dependence between primary and secondary outcomes beyond that explained by the measured risk factors. We show that the proposed method is more efficient than weighted likelihood and is unbiased in the presence of shared frailty for the primary and secondary outcome. We illustrate the method with an application to a study of risk factors for diabetes in a Swedish cohort. Copyright © 2014 John Wiley & Sons, Ltd.

15.
Objective: To define confounding bias in difference‐in‐difference studies and compare regression‐ and matching‐based estimators designed to correct bias due to observed confounders. Data sources: We simulated data from linear models that incorporated different confounding relationships: time‐invariant covariates with a time‐varying effect on the outcome, time‐varying covariates with a constant effect on the outcome, and time‐varying covariates with a time‐varying effect on the outcome. We considered a simple setting that is common in the applied literature: treatment is introduced at a single time point and there is no unobserved treatment effect heterogeneity. Study design: We compared the bias and root mean squared error of treatment effect estimates from six model specifications, including simple linear regression models and matching techniques. Data collection: Simulation code is provided for replication. Principal findings: Confounders in difference‐in‐differences are covariates that change differently over time in the treated and comparison group or have a time‐varying effect on the outcome. When such a confounding variable is measured, appropriately adjusting for this confounder (ie, including the confounder in a regression model that is consistent with the causal model) can provide unbiased estimates with optimal SE. However, when a time‐varying confounder is affected by treatment, recovering an unbiased causal effect using difference‐in‐differences is difficult. Conclusions: Confounding in difference‐in‐differences is more complicated than in cross‐sectional settings, from which techniques and intuition to address observed confounding cannot be imported wholesale. Instead, analysts should begin by postulating a causal model that relates covariates, both time‐varying and those with time‐varying effects on the outcome, to treatment. This causal model will then guide the specification of an appropriate analytical model (eg, using regression or matching) that can produce unbiased treatment effect estimates. We emphasize the importance of thoughtful incorporation of covariates to address confounding bias in difference‐in‐difference studies.
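A compact simulation in the spirit of the first confounding structure above (a time-invariant covariate with a time-varying effect on the outcome) is sketched below: the naive two-period difference-in-differences contrast is biased, whereas a regression that includes the covariate-by-period interaction recovers the treatment effect. All coefficients, sample sizes, and the specific adjustment are illustrative assumptions, not the paper's simulation settings.

```python
# Difference-in-differences with a time-invariant covariate whose effect on the
# outcome changes between periods, confounding the naive DiD contrast.
import numpy as np

rng = np.random.default_rng(5)
n = 20_000
x = rng.standard_normal(n)                                  # time-invariant covariate
treated = (x + rng.standard_normal(n) > 0).astype(float)    # covariate predicts treatment group
true_att = 1.0

def outcome(period):
    gamma = 1.0 if period == 0 else 1.6                     # covariate effect grows over time
    return 0.5 * period + gamma * x + true_att * treated * period + rng.standard_normal(n)

y0, y1 = outcome(0), outcome(1)

# naive two-period DiD: difference of group-mean changes
naive = (y1 - y0)[treated == 1].mean() - (y1 - y0)[treated == 0].mean()

# regression DiD on stacked data, adjusting for the covariate-by-period interaction
y = np.concatenate([y0, y1])
period = np.concatenate([np.zeros(n), np.ones(n)])
g = np.concatenate([treated, treated])
xx = np.concatenate([x, x])
X = np.column_stack([np.ones(2 * n), g, period, g * period, xx, xx * period])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print("naive DiD:", round(naive, 3), " adjusted DiD:", round(beta[3], 3), " truth:", true_att)
```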

16.
The proportional hazard model is one of the most important statistical models used in medical research involving time‐to‐event data. Simulation studies are routinely used to evaluate the performance and properties of the model and other alternative statistical models for time‐to‐event outcomes under a variety of situations. Complex simulations that examine multiple situations with different censoring rates demand approaches that can accommodate this variety. In this paper, we propose a general framework for simulating right‐censored survival data for proportional hazards models by simultaneously incorporating a baseline hazard function from a known survival distribution, a known censoring time distribution, and a set of baseline covariates. Specifically, we present scenarios in which time to event is generated from exponential or Weibull distributions and censoring time has a uniform or Weibull distribution. The proposed framework incorporates any combination of covariate distributions. We describe the steps involved in nested numerical integration and in using a root‐finding algorithm to choose the censoring parameter that achieves predefined censoring rates in simulated survival data. We conducted simulation studies to assess the performance of the proposed framework. We demonstrated the application of the new framework in a comprehensively designed simulation study. We investigated the effect of censoring rate on potential bias in estimating the conditional treatment effect using the proportional hazard model in the presence of unmeasured confounding variables. Copyright © 2016 John Wiley & Sons, Ltd.
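For the special case of exponential event times and exponential censoring, the calibration step reduces to a one-dimensional root-finding problem, since P(censored | x_i) = θ / (θ + λ_i) with λ_i = exp(β′x_i). The sketch below uses that closed form with a root-finding algorithm to hit a target censoring rate; the covariate distributions and coefficients are assumptions, and the general framework in the paper instead uses nested numerical integration to cover arbitrary event and censoring distributions.

```python
# Calibrating an exponential censoring rate theta so that the expected proportion
# censored matches a target, then generating the right-censored data.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(6)
n = 5_000
X = np.column_stack([rng.binomial(1, 0.5, n), rng.standard_normal(n)])
beta = np.array([0.5, -0.3])
lam = np.exp(X @ beta)                        # subject-specific exponential event rates

def censoring_rate(theta):
    return np.mean(theta / (theta + lam))     # closed form for exponential event + censoring

target = 0.30
theta_star = brentq(lambda th: censoring_rate(th) - target, 1e-6, 1e3)

t = rng.exponential(1.0 / lam)
c = rng.exponential(1.0 / theta_star, n)
time, event = np.minimum(t, c), (t <= c).astype(int)
print("calibrated theta:", round(theta_star, 4), " empirical censoring:", round(1 - event.mean(), 3))
```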

17.
Instrumental variable (IV) analysis has been widely used in economics, epidemiology, and other fields to estimate the causal effects of covariates on outcomes, in the presence of unobserved confounders and/or measurement errors in covariates. However, IV methods for time‐to‐event outcome with censored data remain underdeveloped. This paper proposes a Bayesian approach for IV analysis with censored time‐to‐event outcome by using a two‐stage linear model. A Markov chain Monte Carlo sampling method is developed for parameter estimation for both normal and non‐normal linear models with elliptically contoured error distributions. The performance of our method is examined by simulation studies. Our method largely reduces bias and greatly improves coverage probability of the estimated causal effect, compared with the method that ignores the unobserved confounders and measurement errors. We illustrate our method on the Women's Health Initiative Observational Study and the Atherosclerosis Risk in Communities Study. Copyright © 2014 John Wiley & Sons, Ltd.

18.
Time‐to‐event data are very common in observational studies. Unlike randomized experiments, observational studies suffer from both observed and unobserved confounding biases. To adjust for observed confounding in survival analysis, the commonly used methods are the Cox proportional hazards (PH) model, the weighted logrank test, and the inverse probability of treatment weighted Cox PH model. These methods do not rely on fully parametric models, but their practical performances are highly influenced by the validity of the PH assumption. Also, there are few methods addressing the hidden bias in causal survival analysis. We propose a strategy to test for survival function differences based on the matching design and explore sensitivity of the P‐values to assumptions about unmeasured confounding. Specifically, we apply the paired Prentice‐Wilcoxon (PPW) test or the modified PPW test to the propensity score matched data. Simulation studies show that the PPW‐type test has higher power in situations when the PH assumption fails. For potential hidden bias, we develop a sensitivity analysis based on the matched pairs to assess the robustness of our finding, following Rosenbaum's idea for nonsurvival data. For a real data illustration, we apply our method to an observational cohort of chronic liver disease patients from a Mayo Clinic study. The PPW test based on observed data initially shows evidence of a significant treatment effect. But this finding is not robust, as the sensitivity analysis reveals that the P‐value becomes nonsignificant if there exists an unmeasured confounder with a small impact.

19.
Time-to-event outcomes are common for oncology clinical trials. Conventional methods of analysis for these endpoints include logrank or Wilcoxon tests for treatment group comparisons, Kaplan-Meier survival estimates, and Cox proportional hazards models to estimate the treatment group hazard ratio (both unadjusted and adjusted for relevant covariates). Adjusting for covariates reduces bias and may increase precision and power (Statist. Med. 2002; 21:2899-2908). However, the appropriateness of the Cox proportional hazards model depends on parametric assumptions. One way to address these issues is to use non-parametric analysis of covariance (J. Biopharm. Statist. 1999; 9:307-338). Here, we carry out simulations to investigate the type I error and power of the unadjusted and covariate-adjusted non-parametric logrank test and Wilcoxon test, and the Cox proportional hazards model. A comparison between the covariate-adjusted and unadjusted methods is also illustrated with an oncology clinical trial example.

20.
We propose a prediction model for the cumulative incidence functions of competing risks, based on a logit link. Because of a concern about censoring potentially depending on time‐varying covariates in our motivating human immunodeficiency virus (HIV) application, we describe an approach for estimating the parameters in the prediction models using inverse probability of censoring weighting under a missingness at random assumption. We then illustrate the application of this methodology to identify predictors of the competing outcomes of virologic failure, an efficacy outcome, and treatment-limiting adverse event, a safety outcome, among HIV-infected patients first starting antiretroviral treatment. Copyright © 2016 John Wiley & Sons, Ltd.
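The weighting mechanics can be sketched in a few lines: fit a Kaplan-Meier curve to the censoring distribution (treating censoring as the event) and weight each uncensored observation by the inverse of that curve just before its event time. This simplified version assumes completely random censoring and ignores the dependence on time-varying covariates that motivates the paper; variable names are illustrative.

```python
# Inverse probability of censoring weights from a Kaplan-Meier fit to the
# censoring distribution (random-censoring simplification).
import numpy as np

def km_survival(times, events):
    """Kaplan-Meier estimator; returns a step-function evaluator S_hat(t)."""
    order = np.argsort(times)
    t, d = times[order], events[order]
    steps, s = [], 1.0
    for u in np.unique(t[d == 1]):
        at_risk = np.sum(t >= u)
        fails = np.sum((t == u) & (d == 1))
        s *= 1.0 - fails / at_risk
        steps.append((u, s))
    def S(x):
        val = 1.0
        for u, sv in steps:
            if u <= x:
                val = sv
            else:
                break
        return val
    return S

rng = np.random.default_rng(7)
n = 500
t_event = rng.exponential(5.0, n)
t_cens = rng.exponential(8.0, n)
time = np.minimum(t_event, t_cens)
delta = (t_event <= t_cens).astype(int)

G = km_survival(time, 1 - delta)               # censoring "survival" curve
g_at_t = np.maximum([G(ti - 1e-9) for ti in time], 1e-8)
weights = np.where(delta == 1, 1.0 / g_at_t, 0.0)
print("mean IPCW weight among uncensored:", round(weights[delta == 1].mean(), 3))
```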
