Similar Articles
20 similar articles found.
1.
The proliferation of longitudinal studies has increased the importance of statistical methods for time‐to‐event data that can incorporate time‐dependent covariates. The Cox proportional hazards model is one such method that is widely used. As more extensions of the Cox model with time‐dependent covariates are developed, simulation studies will grow in importance as well. An essential starting point for simulation studies of time‐to‐event models is the ability to produce simulated survival times from a known data generating process. This paper develops a method for the generation of survival times that follow a Cox proportional hazards model with time‐dependent covariates. The method presented relies on a simple transformation of random variables generated according to a truncated piecewise exponential distribution and allows practitioners great flexibility and control over both the number of time‐dependent covariates and the number of time periods in the duration of follow‐up measurement. Within this framework, an additional argument is suggested that allows researchers to generate time‐to‐event data in which covariates change at integer‐valued steps of the time scale. The purpose of this approach is to produce data for simulation experiments that mimic the types of data structures that applied researchers encounter when using longitudinal biomedical data. Validity is assessed in a set of simulation experiments, and results indicate that the proposed procedure performs well in producing data that conform to the assumptions of the Cox proportional hazards model. Copyright © 2013 John Wiley & Sons, Ltd.
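A minimal sketch of this kind of generator, assuming a single binary covariate that changes value at integer-valued steps of the time scale and a constant baseline hazard within each unit period; the inversion of the piecewise cumulative hazard is the standard probability-integral-transform construction, not necessarily the authors' exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_cox_td(n, beta, lam0, n_periods):
    """Draw survival times from a Cox model whose binary covariate x_k is
    redrawn at each integer step; the hazard on [k, k+1) is
    lam0 * exp(beta * x_k), i.e. a piecewise exponential distribution."""
    times, events, paths = [], [], []
    for _ in range(n):
        x = rng.integers(0, 2, size=n_periods)  # covariate path over periods
        u = rng.exponential(1.0)                # target cumulative hazard H(T)
        cum, t, event = 0.0, float(n_periods), 0
        for k in range(n_periods):
            h = lam0 * np.exp(beta * x[k])      # hazard on [k, k+1)
            if cum + h > u:                     # H(T) is reached in this period
                t, event = k + (u - cum) / h, 1
                break
            cum += h
        times.append(t)     # subjects without an event are censored at n_periods
        events.append(event)
        paths.append(x)
    return np.array(times), np.array(events), np.array(paths)

T, E, X = simulate_cox_td(n=500, beta=0.7, lam0=0.1, n_periods=10)
```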

2.
Two‐period two‐treatment (2×2) crossover designs are commonly used in clinical trials. For continuous endpoints, it has been shown that baseline (pretreatment) measurements collected before the start of each treatment period can be useful in improving the power of the analysis. Methods to achieve a corresponding gain for censored time‐to‐event endpoints have not been adequately studied. We propose a method in which censored values are treated as missing data and multiply imputed using prespecified parametric event time models. The event times in each imputed data set are then log‐transformed and analyzed using a linear model suitable for a 2×2 crossover design with continuous endpoints, with the difference in period‐specific baselines included as a covariate. Results obtained from the imputed data sets are synthesized for point and confidence interval estimation of the treatment ratio of geometric mean event times using model averaging in conjunction with Rubin's combination rule. We use simulations to illustrate the favorable operating characteristics of our method relative to two other methods for crossover trials with censored time‐to‐event data, ie, a hierarchical rank test that ignores the baselines and a stratified Cox model that uses each study subject as a stratum and includes period‐specific baselines as a covariate. Application to a real data example is provided.
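The synthesis step rests on Rubin's combination rule. A generic sketch of that rule alone (the paper couples it with model averaging over candidate event-time models), pooling a log-scale treatment effect across m imputed data sets with hypothetical numbers:

```python
import numpy as np

def rubin_combine(estimates, variances):
    """Pool per-imputation estimates via Rubin's rule: the point estimate is
    the mean, and the total variance adds the between-imputation spread,
    inflated by (1 + 1/m), to the average within-imputation variance."""
    q = np.asarray(estimates, dtype=float)
    w = np.asarray(variances, dtype=float)
    m = len(q)
    q_bar = q.mean()
    total_var = w.mean() + (1 + 1 / m) * q.var(ddof=1)
    return q_bar, total_var

# hypothetical log treatment ratios and variances from m = 5 imputations
log_est, var = rubin_combine([0.21, 0.18, 0.25, 0.20, 0.23],
                             [0.010, 0.011, 0.009, 0.010, 0.012])
ratio = float(np.exp(log_est))   # treatment ratio of geometric mean event times
```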

3.
The Cox proportional hazards regression model (Cox model) is a widely used multivariable method for the analysis of time‐to‐event data. A key issue when fitting a Cox model is the choice of an appropriate time scale for the occurrence of the outcome event. Cohort studies conducted in China have so far paid little attention to the choice of time scale when analyzing their data. This study briefly introduces and compares several time‐scale strategies commonly reported in the literature, and uses data from the Shanghai Women's Health Study, taking the association between central obesity and liver cancer risk as an example, to show how Cox models with different time scales can affect the results of an analysis. On this basis, several recommendations on the choice of time scale for the Cox model are offered as a reference for the analysis of cohort study data.
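To make the choice concrete, here is a sketch in Python with the lifelines package (the study itself would more likely use R or SAS) of the two most common options on simulated data: follow-up time as the time scale with baseline age as a covariate, versus attained age as the time scale with delayed entry (left truncation) at the age of enrollment. All column names and parameter values are illustrative.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 300
age_entry = rng.uniform(40, 60, n)
exposed = rng.integers(0, 2, n)              # e.g. central obesity indicator
followup = rng.exponential(8.0 / np.exp(0.4 * exposed))
event = (followup < 10.0).astype(int)        # administrative censoring at 10y
followup = np.minimum(followup, 10.0)

df = pd.DataFrame({"age_entry": age_entry, "exposed": exposed,
                   "followup_years": followup, "event": event,
                   "age_exit": age_entry + followup})

# Time scale 1: time on study, with baseline age adjusted as a covariate
cph_fu = CoxPHFitter()
cph_fu.fit(df[["followup_years", "event", "exposed", "age_entry"]],
           duration_col="followup_years", event_col="event")

# Time scale 2: attained age, with delayed entry at the enrollment age
cph_age = CoxPHFitter()
cph_age.fit(df[["age_entry", "age_exit", "event", "exposed"]],
            duration_col="age_exit", event_col="event", entry_col="age_entry")
```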

4.
Recurrent event data are commonly observed in biomedical longitudinal studies. In many instances, there exists a terminal event, which precludes the occurrence of additional repeated events, and usually there is also a nonignorable correlation between the terminal event and recurrent events. In this article, we propose a partly Aalen's additive model with a multiplicative frailty for the rate function of the recurrent event process and assume a Cox frailty model for the terminal event time. A shared gamma frailty is used to describe the correlation between the two types of events. Consequently, this joint model can provide information on the temporal influence of absolute covariate effects on the rate of the recurrent event process, which is usually helpful in the decision‐making process for physicians. An estimating equation approach is developed to estimate marginal and association parameters in the joint model. The consistency of the proposed estimator is established. Simulation studies demonstrate that the proposed approach is appropriate for practical use. We apply the proposed method to a peritonitis cohort data set for illustration. Copyright © 2015 John Wiley & Sons, Ltd.

5.
In the analysis of time‐to‐event data, the problem of competing risks occurs when an individual may experience one, and only one, of m different types of events. The presence of competing risks complicates the analysis of time‐to‐event data, and standard survival analysis techniques such as Kaplan–Meier estimation, the log‐rank test and Cox modeling are not always appropriate and should be applied with caution. Fine and Gray developed a method for regression analysis that models the hazard that corresponds to the cumulative incidence function. This model is becoming widely used by clinical researchers and is now available in all the major software environments. Although model selection methods for Cox proportional hazards models have been developed, few methods exist for competing risks data. We have developed stepwise regression procedures, both forward and backward, based on AIC, BIC, and BICcr (a newly proposed criterion that is a modified BIC for competing risks data subject to right censoring) as selection criteria for the Fine and Gray model. We evaluated the performance of these model selection procedures in a large simulation study and found them to perform well. We also applied our procedures to assess the importance of bone mineral density in predicting the absolute risk of hip fracture in the Women's Health Initiative–Observational Study, where mortality was the competing risk. We have implemented our method as a freely available R package called crrstep. Copyright © 2013 John Wiley & Sons, Ltd.
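crrstep itself is an R package built on the Fine and Gray model; as a language-neutral illustration of the forward search it performs, here is a sketch in Python in which fit_fine_gray is a hypothetical stand-in for the model-fitting routine, assumed to return an object exposing a log_likelihood_ attribute:

```python
def forward_stepwise_aic(df, candidates, fit_fine_gray):
    """Greedy forward selection by AIC: at each pass, add the candidate
    covariate that lowers AIC the most; stop when no addition helps.
    (BIC or BICcr would only change the penalty term.)"""
    selected, best_aic = [], float("inf")
    candidates = list(candidates)
    while candidates:
        trial = {}
        for var in candidates:
            model = fit_fine_gray(df, selected + [var])   # hypothetical fitter
            k = len(selected) + 1          # number of covariates in trial fit
            trial[var] = -2.0 * model.log_likelihood_ + 2.0 * k
        best_var = min(trial, key=trial.get)
        if trial[best_var] >= best_aic:    # no improvement: stop
            break
        best_aic = trial[best_var]
        selected.append(best_var)
        candidates.remove(best_var)
    return selected, best_aic
```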

6.
For time‐to‐event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact of, or addressing, errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression‐free survival or time to AIDS progression) can be difficult to assess or reliant on self‐report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log‐linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic.
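A sketch of the SIMEX idea in this setting, assuming classical additive normal error in the event time with known variance sigma2, a data frame with hypothetical columns time, event, and a single covariate x, and a quadratic extrapolation back to the no-error point lambda = -1; the authors' estimator may differ in its error model and extrapolant:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def simex_cox_hr(df, sigma2, lambdas=(0.0, 0.5, 1.0, 1.5, 2.0), B=50, seed=0):
    """SIMEX for the log hazard ratio when the recorded time is the true
    time plus N(0, sigma2) noise: add *extra* noise of variance
    lambda * sigma2, refit the naive Cox model, average over B replicates,
    then extrapolate the trend in lambda back to lambda = -1 (no error)."""
    rng = np.random.default_rng(seed)
    beta_at_lambda = []
    for lam in lambdas:
        reps = []
        for _ in range(B):
            noisy = df.copy()
            extra = rng.normal(0.0, np.sqrt(lam * sigma2), len(df))
            noisy["time"] = np.maximum(noisy["time"] + extra, 1e-6)  # keep > 0
            cph = CoxPHFitter().fit(noisy, duration_col="time",
                                    event_col="event")
            reps.append(cph.params_["x"])
        beta_at_lambda.append(np.mean(reps))
    quad = np.polyfit(lambdas, beta_at_lambda, deg=2)  # quadratic extrapolant
    return float(np.polyval(quad, -1.0))               # SIMEX-corrected beta
```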

7.
Predicting an individual's risk of experiencing a future clinical outcome is a statistical task with important consequences for both practicing clinicians and public health experts. Modern observational databases such as electronic health records provide an alternative to the longitudinal cohort studies traditionally used to construct risk models, bringing with them both opportunities and challenges. Large sample sizes and detailed covariate histories enable the use of sophisticated machine learning techniques to uncover complex associations and interactions, but observational databases are often 'messy', with high levels of missing data and incomplete patient follow‐up. In this paper, we propose an adaptation of the well‐known Naive Bayes machine learning approach to time‐to‐event outcomes subject to censoring. We compare the predictive performance of our method with the Cox proportional hazards model which is commonly used for risk prediction in healthcare populations, and illustrate its application to prediction of cardiovascular risk using an electronic health record dataset from a large Midwest integrated healthcare system. Copyright © 2015 John Wiley & Sons, Ltd.

8.
Many extensions of survival models based on the Cox proportional hazards approach have been proposed to handle clustered or multiple event data. Of particular note are five Cox-based models for recurrent event data: Andersen and Gill (AG); Wei, Lin and Weissfeld (WLW); Prentice, Williams and Peterson, total time (PWP-CP) and gap time (PWP-GT); and Lee, Wei and Amato (LWA). Some authors have compared these models by observing differences that arise from fitting the models to real and simulated data. However, no attempt has been made to systematically identify the components of the models that are appropriate for recurrent event data. We propose a systematic way of characterizing such Cox-based models using four key components: risk intervals, baseline hazard, risk set, and correlation adjustment. From the definitions of risk interval and risk set there are conceptually seven such Cox-based models that are permissible, five of which are those previously identified. The two new variant models are termed the 'total time - restricted' (TT-R) and 'gap time - unrestricted' (GT-UR) models. The aim of the paper is to determine which models are appropriate for recurrent event data using the key components. The models are fitted to simulated data sets and to a data set of childhood recurrent infectious diseases. The LWA model is not appropriate for recurrent event data because it allows a subject to be at risk several times for the same event. The WLW model overestimates treatment effect and is not recommended. We conclude that PWP-GT and TT-R are useful models for analysing recurrent event data, providing answers to slightly different research questions. Further, applying a robust variance to any of these models does not adequately account for within-subject correlation.
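The risk-interval component is easiest to see in the data layout. Below is a sketch (column names illustrative) of how one subject with events at times 3 and 7 and follow-up ending at 10 is expanded into counting-process rows on the total-time scale (as in AG or PWP-CP) versus the gap-time scale (as in PWP-GT), where the clock restarts after each event; the episode number supplies the stratum for the PWP models:

```python
import pandas as pd

def expand_recurrent(subject_id, event_times, end, gap_time=False):
    """One row per at-risk interval. Total time: intervals on the original
    scale, (0,3], (3,7], (7,10]. Gap time: the clock resets to zero after
    each event, giving (0,3], (0,4], (0,3]."""
    rows, prev = [], 0.0
    for k, t in enumerate(list(event_times) + [end]):
        is_event = k < len(event_times)          # last interval is censored
        start, stop = (0.0, t - prev) if gap_time else (prev, t)
        rows.append({"id": subject_id, "episode": k + 1,
                     "start": start, "stop": stop, "event": int(is_event)})
        prev = t
    return pd.DataFrame(rows)

total_time = expand_recurrent(1, [3.0, 7.0], end=10.0)                 # AG / PWP-CP
gap_time   = expand_recurrent(1, [3.0, 7.0], end=10.0, gap_time=True)  # PWP-GT
```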

9.
Interval‐censored data, in which the event time is only known to lie in some time interval, arise commonly in practice, for example, in a medical study in which patients visit clinics or hospitals at prescheduled times and the events of interest occur between visits. Such data are appropriately analyzed using methods that account for this uncertainty in event time measurement. In this paper, we propose a survival tree method for interval‐censored data based on the conditional inference framework. Using Monte Carlo simulations, we find that the tree is effective in uncovering underlying tree structure, performs similarly to an interval‐censored Cox proportional hazards model fit when the true relationship is linear, and performs at least as well as (and in the presence of right‐censoring outperforms) the Cox model when the true relationship is not linear. Further, the interval‐censored tree outperforms survival trees based on imputing the event time as an endpoint or the midpoint of the censoring interval. We illustrate the application of the method on tooth emergence data.

10.
Objective: Cox proportional hazards regression models are frequently used to determine the association between exposure and time-to-event outcomes in both randomized controlled trials and in observational cohort studies. The resultant hazard ratio is a relative measure of effect that provides limited clinical information. Study Design and Setting: A method is described for deriving absolute reductions in the risk of an event occurring within a given duration of follow-up time from a Cox regression model. The associated number needed to treat can be derived from this quantity. The method involves determining the probability of the outcome occurring within the specified duration of follow-up if each subject in the cohort was treated and if each subject was untreated, based on the covariates in the regression model. These probabilities are then averaged across the study population to determine the average probability of the occurrence of an event within a specific duration of follow-up in the population if all subjects were treated and if all subjects were untreated. Results: Risk differences and numbers needed to treat. Conclusions: Absolute measures of treatment effect can be derived in prospective studies when Cox regression is used to adjust for possible imbalance in prognostically important baseline covariates.
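A sketch of the averaging step using Python and lifelines, assuming a Cox model already fitted on a cohort with a binary treated column and a fixed risk horizon; column names and the horizon are illustrative:

```python
from lifelines import CoxPHFitter

def risk_difference_and_nnt(cph: CoxPHFitter, covariates, treatment_col,
                            horizon):
    """Set everyone to treated, then to untreated, average the model-based
    risk of an event by `horizon` over the cohort, and difference the two
    averages; the reciprocal of that risk difference is the NNT."""
    treated = covariates.copy()
    treated[treatment_col] = 1
    control = covariates.copy()
    control[treatment_col] = 0
    # predict_survival_function: one column per subject, one row per time
    risk_treated = 1 - cph.predict_survival_function(
        treated, times=[horizon]).iloc[0]
    risk_control = 1 - cph.predict_survival_function(
        control, times=[horizon]).iloc[0]
    arr = risk_control.mean() - risk_treated.mean()  # absolute risk reduction
    return arr, 1.0 / arr                            # (ARR, NNT)
```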

11.
Two common statistical problems in pooling survival data from several studies are addressed. The first problem is that the data are doubly censored in that the origin is interval censored and the endpoint event may be right censored. Two approaches to incorporate the uncertainty of interval-censored origins are developed, and then compared with more usual analyses using imputation of a single fixed value for each origin. The second problem is that the data are collected from multiple studies and it is likely that heterogeneity exists among the study populations. A random-effects hierarchical Cox proportional hazards model is therefore used. The scientific problem motivating this work is a pooled survival analysis of data sets from three studies to examine the effect of GB virus type C (GBV-C) coinfection on survival of HIV-infected individuals. The time of HIV infection is the origin and for each subject this time is unknown, but is known to lie later than the last time at which the subject was known to be HIV negative, and earlier than the first time the subject was known to be HIV positive. The use of an approximate Bayesian approach using the partial likelihood as the likelihood is recommended because it more appropriately incorporates the uncertainty of interval-censored HIV infection times.

12.
Two‐phase designs are commonly used to subsample subjects from a cohort in order to study covariates that are too expensive to ascertain for everyone in the cohort. This is particularly true for the study of immune response biomarkers in vaccine immunology, where new, elaborate assays are constantly being developed to improve our understanding of the human immune responses to vaccines and how the immune response may protect humans from virus infection. It has long been recognized that if there exist variables that are correlated with expensive variables and can be measured for every subject in the cohort, they can be leveraged to improve the estimation efficiency for the effects of the expensive variables. In this research article, we developed an improved inverse probability weighted estimation approach for semiparametric transformation models with a two‐phase study design. Semiparametric transformation models are a class of models that include the Cox PH and proportional odds models. They provide an attractive way to model the effects of immune response biomarkers as human immune responses generally wane over time. Our approach is based on weights calibration, which has its origin in survey statistics and was used by Breslow et al. [1, 2] to improve inverse probability weighted estimation of the Cox regression model. We develop asymptotic theory for our estimator and examine its performance through simulation studies. We illustrate the proposed method with application to two HIV‐1 vaccine efficacy trials. Copyright © 2015 John Wiley & Sons, Ltd.
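Weights calibration itself is beyond a short sketch, but the inverse probability weighted Cox fit it refines is easy to show. A sketch in Python with lifelines, simulating a two-phase (case-cohort style) subsample with known sampling probabilities; all names and values are illustrative:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(11)
n = 1000
biomarker = rng.normal(size=n)                 # "expensive" phase-two variable
time = rng.exponential(5.0 / np.exp(0.5 * biomarker))
event = (time < 4.0).astype(int)               # administrative censoring at 4
time = np.minimum(time, 4.0)

# phase two: keep every case plus a 20% random subsample of non-cases
prob = np.where(event == 1, 1.0, 0.2)          # known sampling probabilities
sampled = rng.random(n) < prob

df = pd.DataFrame({"time": time, "event": event,
                   "biomarker": biomarker, "w": 1.0 / prob})[sampled]

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event",
        weights_col="w", robust=True)          # sandwich variance with weights
```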

13.
Family‐based designs enriched with affected subjects and disease associated variants can increase statistical power for identifying functional rare variants. However, few rare variant analysis approaches are available for time‐to‐event traits in family designs and none of them is applicable to the X chromosome. We developed novel pedigree‐based burden and kernel association tests for time‐to‐event outcomes with right censoring for pedigree data, referred to as FamRATS (family‐based rare variant association tests for survival traits). Cox proportional hazards models were employed to relate a time‐to‐event trait to rare variants, with flexibility to encompass all ranges and collapsings of multiple variants. In addition, robustness to violations of the proportional hazards assumption was investigated for the proposed tests and four existing tests, including the conventional population‐based Cox proportional hazards model and the burden, kernel, and sum of squares statistic (SSQ) tests for family data. The proposed tests can be applied to large‐scale whole‐genome sequencing data. They are appropriate for practical use under a wide range of misspecified Cox models, as well as for population‐based, pedigree‐based, or hybrid designs. In our extensive simulation study and data example, we showed that the proposed kernel test is the most powerful and robust choice among the proposed burden test and the four existing rare variant survival association tests. When applied to the Diabetes Heart Study, the proposed tests found that exome variants of the JAK1 gene on chromosome 1 showed the most significant association with age at onset of type 2 diabetes in the exome‐wide analysis.

14.
Randomized clinical trials are the standard for evaluating new drugs, devices and procedures. Traditional clinical trials entail not only considerable expense but also considerable time to complete. The use of surrogate endpoints constitutes an effort to control cost and completion time for clinical trials. We propose a method to quantify the proportion of treatment effect explained by a surrogate endpoint based on a general model setting which includes the commonly used linear, logistic and Cox regression models. The interpretation of this quantitative measure is facilitated by graphical displays. To reduce the variability associated with the estimate, a meta-analytic approach is proposed based on random effects models. An example using real clinical trial data is given to illustrate the proposed procedures.
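For the Cox case, the crude version of such a measure compares the treatment coefficient with and without the surrogate in the model; a sketch assuming hypothetical columns time, event, treatment, and surrogate (the paper's measure generalizes this and stabilizes it with a random-effects meta-analysis):

```python
from lifelines import CoxPHFitter

def proportion_explained(df):
    """Naive proportion of treatment effect explained by a surrogate:
    1 - beta_adjusted / beta_unadjusted from two Cox fits."""
    unadj = CoxPHFitter().fit(df[["time", "event", "treatment"]],
                              duration_col="time", event_col="event")
    adj = CoxPHFitter().fit(df[["time", "event", "treatment", "surrogate"]],
                            duration_col="time", event_col="event")
    return 1.0 - adj.params_["treatment"] / unadj.params_["treatment"]
```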

15.
In an observational study of the effect of a treatment on a time‐to‐event outcome, a major problem is accounting for confounding because of unknown or unmeasured factors. We propose including covariates in a Cox model that can partially account for an unknown time‐independent frailty that is related to starting or stopping treatment as well as the outcome of interest. These covariates capture the times at which treatment is started or stopped and so are called treatment choice (TC) covariates. Three such models are developed: first, an interval TC model that assumes a very general form for the respective hazard functions of starting treatment, stopping treatment, and the outcome of interest; second, a parametric TC model that assumes that the log hazard functions for starting treatment, stopping treatment, and the outcome event include frailty as an additive term; and finally, a hybrid TC model that combines attributes from the parametric and interval TC models. As compared with an ordinary Cox model, the TC models are shown to substantially reduce the bias of the estimated hazard ratio for treatment when data are simulated from a realistic Cox model with residual confounding due to the unobserved frailty. The simulations also indicate that the bias decreases or levels off as the sample size increases. A TC model is illustrated by analyzing the Women's Health Initiative Observational Study of hormone replacement for post‐menopausal women. Published 2017. This article has been contributed to by US Government employees and their work is in the public domain in the USA.

16.
A New Perspective on Clinical Survival Data: Competing Risks Models
Clinical survival data often involve multiple possible outcomes that compete with one another. When competing risks are ignored, the traditional univariable Kaplan-Meier method overestimates cumulative mortality, and the traditional multivariable Cox model may yield misleading estimates of the HR. At present, competing risks are rarely mentioned in the domestic clinical literature, the methodological literature provides no concrete implementation programs, and the application conditions and parameters of the mainstream models have not been explained. This article therefore describes the concept and core models of competing risks, uses a worked example to illustrate the correct application of the cumulative incidence function, the cause-specific hazards model, and the subdistribution hazards model, and provides the corresponding SAS 9.4 programs as a reference for clinical researchers building competing risks models.

17.
BACKGROUND: In epidemiology, we are often interested in the association between the evolution of a quantitative variable and the onset of an event. The aim of this paper is to present a joint model for the analysis of Gaussian repeated data and survival time. Such models make it possible, for example, to perform survival analysis when a time-dependent explanatory variable is measured intermittently, or to study the evolution of a quantitative marker conditional on an event. METHODS: They are constructed by combining a mixed model for repeated Gaussian variables and a survival model which can be parametric or semi-parametric (Cox model). RESULTS: We discuss the hypotheses underlying the different joint models proposed in the literature and the necessary assumptions for maximum likelihood estimation. The interest of these methods is illustrated with a study of the natural history of dementia in a cohort of elderly persons.

18.
We consider the joint modelling of longitudinal and event time data. The longitudinal data are irregularly collected and the event times are subject to right censoring. Most methods described in the literature are quite complex and do not belong to the standard statistical toolkit. We propose a more practical approach using Cox regression with time-dependent covariates. Since the longitudinal data are observed irregularly, we have to account for differences in observation frequency between individual patients. Therefore, the time elapsed since the last observation (TEL) is added to the model. TEL and its interaction with the time-dependent covariate show a strong effect on the hazard. The latter indicates that older recordings have less impact than recent recordings. Pros and cons of this methodology are discussed and a simulation study is performed to study the effect of TEL on the hazard. The fitted Cox model serves as a starting point for the prediction of a patient's future events. Our method is applied to a study on chronic myeloid leukaemia (CML) with longitudinal white blood cell counts (WBC) as time-dependent covariate and the patient's death as event.
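A sketch of the approach with lifelines' CoxTimeVaryingFitter on simulated long-format data, where each row is an at-risk interval carrying the last observed marker value, the time elapsed since that observation (TEL), and their interaction; all names and numbers are illustrative, and the interaction uses a centered marker to limit collinearity:

```python
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(3)
rows = []
for pid in range(200):
    t, wbc, last_obs = 0.0, rng.normal(8.0, 2.0), 0.0
    while t < 10.0:
        stop = min(t + rng.uniform(0.5, 2.0), 10.0)
        tel = t - last_obs                    # time since last WBC recording
        # hazard rises with the marker; stale values carry less information
        event = int(rng.random() < 0.05 * np.exp(0.1 * wbc - 0.05 * tel))
        rows.append({"id": pid, "start": t, "stop": stop, "wbc": wbc,
                     "tel": tel, "wbc_c_x_tel": (wbc - 8.0) * tel,
                     "event": event})
        if event:
            break
        if rng.random() < 0.5:                # marker re-measured at this visit
            wbc, last_obs = rng.normal(8.0, 2.0), stop
        t = stop

long_df = pd.DataFrame(rows)
ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", event_col="event",
        start_col="start", stop_col="stop")
ctv.print_summary()
```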

19.
We consider the situation of estimating the marginal survival distribution from censored data subject to dependent censoring using auxiliary variables. We had previously developed a nonparametric multiple imputation approach. The method used two working proportional hazards (PH) models, one for the event times and the other for the censoring times, to define a nearest neighbor imputing risk set. This risk set was then used to impute failure times for censored observations. Here, we adapt the method to the situation where the event and censoring times follow accelerated failure time models and propose to use the Buckley–James estimator for the two working models. Besides studying the performance of the proposed method, we also compare it with two popular methods for handling dependent censoring through the use of auxiliary variables, the inverse probability of censoring weighted and parametric multiple imputation methods, to shed light on their use. In a simulation study with time‐independent auxiliary variables, we show that all approaches can reduce bias due to dependent censoring. The proposed method is robust to misspecification of either one of the two working models and their link function. This indicates that a working proportional hazards model is preferred, because an accelerated failure time model is more cumbersome to fit. In contrast, the inverse probability of censoring weighted method is not robust to misspecification of the link function of the censoring time model. The parametric imputation methods rely on the specification of the event time model. The approaches are applied to a prostate cancer dataset. Copyright © 2015 John Wiley & Sons, Ltd.

20.
Event history studies are commonly conducted in many fields, and a great deal of literature has been established for the analysis of the two types of data commonly arising from these studies: recurrent event data and panel count data. The former arises if all study subjects are followed continuously, while the latter means that each study subject is observed only at discrete time points. In reality, a third type of data, a mixture of the two types described earlier, may occur, and furthermore, as with the first two types of data, there may exist a dependent terminal event, which may preclude the occurrences of the recurrent events of interest. This paper discusses regression analysis of mixed recurrent event and panel count data in the presence of a terminal event, and an estimating equation‐based approach is proposed for estimation of the regression parameters of interest. In addition, the asymptotic properties of the proposed estimator are established, and a simulation study conducted to assess the finite‐sample performance of the proposed method suggests that it works well in practical situations. Finally, the methodology is applied to the childhood cancer study that motivated this work. Copyright © 2017 John Wiley & Sons, Ltd.
