Similar Articles
20 similar articles found (search time: 15 ms)
1.
    
We extend the shared frailty model of recurrent events and a dependent terminal event to allow for a nonparametric covariate function. We include a Gaussian random effect (frailty) in the intensity functions of both the recurrent and terminal events to capture the correlation between the two processes. We employ the penalized cubic spline method to describe the nonparametric covariate function in the recurrent events model. Because the marginal penalized partial likelihood has no closed form, we evaluate it with a Laplace approximation. We also propose variance estimates for the regression coefficients. Numerical results show that the proposed estimates perform well for both the nonparametric and parametric components. We apply this method to analyze the hospitalization rate of patients with heart failure in the presence of death.
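The data structure this model targets can be illustrated with a small simulation: a shared Gaussian random effect enters both the recurrent-event intensity and the terminal-event hazard, inducing correlation between the two processes. This is a minimal sketch with hypothetical parameter values (`beta`, `gamma`, `alpha`, `sigma`, `r0`, `h0`, `tau` are all illustrative), not the authors' estimation procedure.

```python
import math
import random

random.seed(1)

def simulate_subject(x, beta=0.5, gamma=0.3, alpha=1.0, sigma=0.5,
                     r0=0.8, h0=0.1, tau=10.0):
    """Simulate one subject under a shared-frailty joint model:
    the Gaussian random effect v appears in both intensities."""
    v = random.gauss(0.0, sigma)
    # terminal event: exponential time with hazard h0 * exp(gamma*x + alpha*v)
    death = random.expovariate(h0 * math.exp(gamma * x + alpha * v))
    follow_up = min(death, tau)          # administrative censoring at tau
    # recurrent events: homogeneous Poisson process with rate r0 * exp(beta*x + v)
    rate = r0 * math.exp(beta * x + v)
    t, events = 0.0, []
    while True:
        t += random.expovariate(rate)    # next inter-event gap
        if t >= follow_up:
            break
        events.append(t)
    return events, follow_up, death <= tau
```

A large positive `v` simultaneously raises the recurrence rate and shortens survival, which is exactly the dependence the shared frailty is meant to capture.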

2.
    
It has been increasingly common to analyze repeated measures and time-to-failure data simultaneously. In this paper we propose a joint model for the case where the repeated measures are semi-continuous, characterized by a large portion of zero values as well as right skewness of the non-zero (positive) values. Examples include monthly medical costs, annual car insurance claims, and annual numbers of hospitalization days. A random-effects two-part model describes, respectively, the odds of a positive value and the level of positive values. The random effects from the two-part model are then incorporated into the hazard of the failure time to form the joint model. Estimation can be carried out by Gaussian quadrature techniques conveniently implemented in SAS Proc NLMIXED. Our model is applied to longitudinal (monthly) medical costs of 1455 chronic heart-failure patients from the clinical data repository at the University of Virginia. Copyright © 2008 John Wiley & Sons, Ltd.
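The two-part structure for semi-continuous outcomes can be sketched as follows: a logistic part with random intercept `b1` governs whether a month's cost is positive at all, and a log-normal part with random intercept `b2` governs its level when positive. All parameter values (`a0`, `m0`, `s`) are hypothetical; this illustrates the data-generating idea, not the paper's SAS implementation.

```python
import math
import random

random.seed(2)

def simulate_costs(n_months, b1, b2, a0=-0.5, m0=6.0, s=0.8):
    """Two-part model for semi-continuous monthly costs:
    logistic part for the odds of any cost, log-normal part
    for the (right-skewed) level of positive costs."""
    costs = []
    for _ in range(n_months):
        p_pos = 1.0 / (1.0 + math.exp(-(a0 + b1)))  # P(positive cost this month)
        if random.random() < p_pos:
            costs.append(math.exp(random.gauss(m0 + b2, s)))  # positive, skewed
        else:
            costs.append(0.0)                                 # the point mass at zero
    return costs
```

In the joint model, the same `b1` and `b2` would also enter the failure-time hazard, linking cost behavior to survival.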

3.
    
In this paper, we propose a model for medical costs recorded at regular time intervals, e.g. every month, as repeated measures in the presence of a terminating event, such as death. Prior models have related monthly medical costs to time since entry, with extra costs at the final observations at the time of death. Our joint model for monthly medical costs and survival time incorporates two important new features. First, medical cost and survival may be correlated because more 'frail' patients tend to accumulate medical costs faster and die earlier. A joint random effects model is proposed to account for the correlation between medical costs and survival by a shared random effect. Second, monthly medical costs usually increase during the time period prior to death because of the intensive care for dying patients. We present a method for estimating the pattern of cost prior to death, which is applicable if the pattern can be characterized as an additive effect that is limited to a fixed time interval, say b units of time before death. This 'turn back time' method for censored observations censors cost data b units of time before the actual censoring time, while keeping the actual censoring time for the survival data. Time-dependent covariates can be included. Maximum likelihood estimation and inference are carried out through a Monte Carlo EM algorithm with a Metropolis-Hastings sampler in the E-step. An analysis of monthly outpatient EPO medical cost data for dialysis patients is presented to illustrate the proposed methods.
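The 'turn back time' device itself is a simple data transformation, sketched below under the assumption that cost records are timestamped (function and argument names are ours, not the paper's): for a censored subject, cost records within b units of the censoring time are dropped, while the survival datum keeps the original censoring time.

```python
def turn_back_time(cost_times, censor_time, event_observed, b):
    """Censor cost data b units before the actual censoring time for
    censored subjects; keep all records for subjects who died, and keep
    the original censoring time for the survival part of the data."""
    if event_observed:
        kept = [t for t in cost_times if t <= censor_time]
    else:
        kept = [t for t in cost_times if t <= censor_time - b]
    return kept, censor_time
```

This avoids contaminating the pre-death cost pattern with records from subjects whose (unobserved) death may fall just after censoring.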

4.
    
Interval censoring arises when a subject misses prescheduled visits at which the failure is to be assessed. Most existing approaches for analysing interval-censored failure time data assume that the censoring mechanism is independent of the true failure time. However, there are situations where this assumption may not hold. In this paper, we consider such a situation in which the dependence structure between the censoring variables and the failure time can be modelled through some latent variables, and a method for regression analysis of failure time data is proposed. The method makes use of the proportional hazards frailty model and an EM algorithm is presented for estimation. Finite sample properties of the proposed estimators of regression parameters are examined through simulation studies and we illustrate the method with data from an AIDS study.

5.
    
Liu L, Huang X. Statistics in Medicine 2008; 27(14): 2665-2683.
In this paper, we propose a novel Gaussian quadrature estimation method for various frailty proportional hazards models. We approximate the unspecified baseline hazard by a piecewise constant one, resulting in a parametric model that can be fitted conveniently by Gaussian quadrature tools in standard software such as SAS Proc NLMIXED. We first apply our method to simple frailty models for correlated survival data (e.g. recurrent or clustered failure times), then to joint frailty models for correlated failure times with informative dropout or a dependent terminal event such as death. Simulation studies show that our method compares favorably with the well-received penalized partial likelihood method and the Monte Carlo EM (MCEM) method, for both normal and Gamma frailty models. We apply our method to three real data examples: (1) the time to blindness of both eyes in a diabetic retinopathy study, (2) the joint analysis of recurrent opportunistic diseases in the presence of death for HIV-infected patients, and (3) the joint modeling of local and distant tumor recurrences and patient survival in a soft tissue sarcoma study. The proposed method greatly simplifies the implementation of (joint) frailty models and makes them much more accessible to general statistical practitioners.
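The core numerical device here, integrating a normal frailty out of the likelihood by Gauss-Hermite quadrature, can be demonstrated in a few lines. This is a generic sketch of the quadrature rule, not the paper's model: for v ~ N(0, σ²), E[g(v)] ≈ (1/√π) Σₖ wₖ g(√2 σ xₖ), which we can check against the known value E[exp(v)] = exp(σ²/2).

```python
import math
import numpy as np

def gauss_hermite_expectation(g, sigma, n=20):
    """Approximate E[g(v)] for v ~ N(0, sigma^2) by Gauss-Hermite
    quadrature -- the same device used to marginalize a normal
    frailty out of a likelihood.  g must accept numpy arrays."""
    x, w = np.polynomial.hermite.hermgauss(n)   # nodes/weights for weight exp(-x^2)
    return float(np.sum(w * g(math.sqrt(2.0) * sigma * x)) / math.sqrt(math.pi))
```

With 20 nodes the approximation of E[exp(v)] for σ = 0.5 agrees with exp(0.125) to high precision; in the frailty setting, g(v) would be a subject's conditional likelihood contribution.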

6.
It is often assumed that randomisation will prevent bias in estimation of treatment effects from clinical trials, but this is not true of the semiparametric Proportional Hazards model for survival data when there is underlying risk heterogeneity. Here, a new formula is proposed for estimation of this bias, improving on a previous formula in ease of use and in clarity regarding the role of the mid-study cumulative hazard rate, shown to be an important factor for the bias magnitude. Informative censoring (IC) is recognised as a source of bias. Here, work on selection effects among survivors due to risk heterogeneity is extended to include IC. A new formula shows that bias in causal effect estimation under IC has two sources: one consequent on heterogeneity and one from the additional impact of IC. The formula provides insights not previously available: there may be less bias under IC than when there is no IC and even, in principle, zero bias. When tested against simulated data, the new formulae are shown to be very accurate for prediction of bias in Proportional Hazards and accelerated failure time analyses which ignore heterogeneity. These data are also used to investigate the performance of accelerated failure time models which explicitly model risk heterogeneity ('frailty models') and G estimation. These methods remove bias when there is simple censoring but not with informative censoring, where they may lead to overestimation of treatment effects. The new formulae may be used to help researchers judge the possible extent of bias in past studies. Copyright © 2017 John Wiley & Sons, Ltd.

7.
    
Huang X, Wolfe RA, Hu C. Statistics in Medicine 2004; 23(13): 2089-2107.
Frailty models are frequently used to analyse clustered survival data. The assumption of non-informative censoring is commonly used by these models, even though it may not be true in many situations. This article proposes a test for this assumption. It uses the estimated correlation between two types of martingale residuals, one from a model for failure and the other from a model for censoring. It distinguishes two types of censoring, namely withdrawal and the end of the study. Simulation studies show that the proposed test works well under various scenarios. For illustration, the test is applied to a data set for kidney disease patients from multiple dialysis centres.
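The ingredients of such a test can be sketched without the frailty machinery: a martingale residual is the observed event indicator minus the estimated cumulative hazard at the subject's time, and the test statistic is built from the correlation between the failure-model residuals and the censoring-model residuals (with censoring treated as the "event" in the second model). The sketch below uses a simple Nelson-Aalen estimator, assumes distinct event times, and is not the authors' implementation.

```python
def nelson_aalen(times, events):
    """Nelson-Aalen cumulative hazard evaluated at each subject's own time
    (assumes no tied times)."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, cum = len(times), 0.0
    H = [0.0] * len(times)
    for i in order:
        if events[i]:
            cum += 1.0 / at_risk     # hazard increment at an event time
        H[i] = cum
        at_risk -= 1
    return H

def martingale_residuals(times, events):
    """Residual = event indicator minus estimated cumulative hazard."""
    return [e - h for e, h in zip(events, nelson_aalen(times, events))]

def pearson(a, b):
    """Plain Pearson correlation (no external dependencies)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den
```

A markedly nonzero correlation between the two residual sets would suggest that censoring carries information about failure.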

8.
Cited by 1 (self-citations: 0; citations by others: 1)
Zhang Z, Sun J, Sun L. Statistics in Medicine 2005; 24(9): 1399-1407.
Current status data arise when each study subject is observed only once and the survival time of interest is known only to be either less or greater than the observation time. Such data often occur in, for example, cross-sectional studies, demographical investigations and tumorigenicity experiments, and several semi-parametric and non-parametric methods for their analysis have been proposed. However, most of these methods deal only with the situation where the observation time is independent of the underlying survival time, either completely or given covariates. This paper discusses regression analysis of current status data when the observation time may be related to the underlying survival time, and inference procedures are presented for estimating regression parameters under the additive hazards regression model. The procedures can be easily implemented and are applied to two motivating examples.

9.
Net survival, the survival that would be observed if cancer were the only cause of death, is the most appropriate indicator to compare cancer mortality between areas or countries. Several parametric and non-parametric methods have been developed to estimate net survival, particularly when the cause of death is unknown. These methods are based either on the relative survival ratio or on the additive excess hazard model, the latter using the general population mortality hazard to estimate the excess mortality hazard (the hazard related to net survival). The present work used simulations to compare estimator abilities to estimate net survival in different settings, such as the presence/absence of an age effect on the excess mortality hazard or on the potential time of follow-up, knowing that this covariate also affects the general population mortality hazard. It showed that when age affected the excess mortality hazard, most estimators, including specific survival, were biased. Only two estimators were appropriate to estimate net survival. The first is based on a multivariable excess hazard model that includes age as a covariate. The second is non-parametric and is based on inverse probability weighting. These estimators account differently for the informative censoring induced by the expected mortality process. The former offers great flexibility whereas the latter requires neither the assumption of a specific distribution nor a model-building strategy. Because of its simplicity and availability in commonly used software, the non-parametric estimator should be considered by cancer registries for population-based studies.
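The excess-hazard construction can be made concrete with a discrete-time toy calculation (our own illustration, not either of the estimators compared in the paper): per interval, the excess hazard increment is the observed increment minus the expected (population) increment, and net survival is built from the excess hazard alone.

```python
import math

def net_survival(times, obs_increments, pop_increments):
    """Discrete-time sketch of the excess-hazard idea: each entry is a
    cumulative-hazard increment over one interval; net survival at the
    end of each interval is exp(-cumulative excess hazard)."""
    cum_excess, out = 0.0, []
    for _, ho, hp in zip(times, obs_increments, pop_increments):
        cum_excess += ho - hp          # excess = observed - expected
        out.append(math.exp(-cum_excess))
    return out
```

When observed mortality equals the population's, the excess hazard is zero and net survival stays at 1, i.e. cancer contributes no mortality.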

10.
    
The analysis of high-dimensional survival data is challenging, primarily owing to overfitting, which occurs when spurious relationships inferred from training data fail to hold in test data. Here, we propose a novel method of extracting a low-dimensional representation of covariates in survival data by combining the popular Gaussian process latent variable model with a Weibull proportional hazards model. The combined model offers a flexible non-linear probabilistic method of detecting and extracting any intrinsic low-dimensional structure from high-dimensional data. By reducing the covariate dimension, we aim to diminish the risk of overfitting and increase the robustness and accuracy with which we infer relationships between covariates and survival outcomes. In addition, we can simultaneously combine information from multiple data sources by expressing multiple datasets in terms of the same low-dimensional space. We present results from several simulation studies that illustrate a reduction in overfitting and an increase in predictive performance, as well as successful detection of intrinsic dimensionality. We provide evidence that it is advantageous to combine dimensionality reduction with survival outcomes rather than performing unsupervised dimensionality reduction on its own. Finally, we use our model to analyse experimental gene expression data and detect and extract a low-dimensional representation that allows us to distinguish high-risk and low-risk groups with superior accuracy compared with doing regression on the original high-dimensional data. Copyright © 2015 John Wiley & Sons, Ltd.

11.
12.
13.
    
Many epidemiological studies assess the effects of time-dependent exposures, where both the exposure status and its intensity vary over time. One example that attracts public attention concerns pharmacoepidemiological studies of the adverse effects of medications. The analysis of such studies poses challenges for modeling the impact of complex time-dependent drug exposure, especially given the uncertainty about the way effects cumulate over time and about the etiological relevance of doses taken in different time periods. We present a flexible method for modeling cumulative effects of time-varying exposures, weighted by recency, represented by time-dependent covariates in the Cox proportional hazards model. The function that assigns weights to doses taken in the past is estimated using cubic regression splines. We validated the method in simulations and applied it to re-assess the association between exposure to a psychotropic drug and fall-related injuries in the elderly. Copyright © 2009 John Wiley & Sons, Ltd.
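The weighted cumulative exposure itself is a simple sum: each past dose is multiplied by a recency weight evaluated at the time elapsed since that dose. In the paper the weight function is estimated by cubic regression splines; the sketch below takes any user-supplied weight function instead, so the function name and signature are ours.

```python
def weighted_cumulative_exposure(dose_times, doses, t, weight):
    """WCE(t) = sum over past doses d(u) of weight(t - u) * d(u).
    `weight` maps time-since-dose to etiological relevance."""
    return sum(d * weight(t - u) for u, d in zip(dose_times, doses) if u <= t)
```

For example, with an indicator weight that counts only doses taken within the last 2 time units, unit doses at times 0, 1, 2, 3 yield WCE(3) = 3 (the dose at time 0 has aged out).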

14.
    
The Cox proportional hazards model with time-dependent covariates (TDC) is now a part of the standard statistical analysis toolbox in medical research. As new methods involving more complex modeling of time-dependent variables are developed, simulations could often be used to systematically assess the performance of these models. Yet, generating event times conditional on TDC requires well-designed and efficient algorithms. We compare two classes of such algorithms: permutational algorithms (PAs) and algorithms based on a binomial model. We also propose a modification of the PA to incorporate a rejection sampler. We performed a simulation study to assess the accuracy, stability, and speed of these algorithms in several scenarios. Both classes of algorithms generated data sets that, once analyzed, provided virtually unbiased estimates with comparable variances. In terms of computational efficiency, the PA with the rejection sampler reduced the time necessary to generate data by more than 50 per cent relative to alternative methods. The PAs also allowed more flexibility in the specification of the marginal distributions of event times and required less calibration.

15.
    
Interval-censored, or more generally, coarsened event-time data arise when study participants are observed at irregular time periods and experience the event of interest in between study observations. Such data are often analysed assuming non-informative censoring, which can produce biased results if the assumption is wrong. This paper extends the standard approach for estimating survivor functions to allow informatively interval-censored data by incorporating various assumptions about the censoring mechanism into the model. We include a Bayesian extension in which final estimates are produced by mixing over a distribution of assumed censoring mechanisms. We illustrate these methods with a natural history study of HIV-infected individuals using assumptions elicited from an AIDS expert.

16.
    
Analyses to compare non-randomized groups are increasingly common, both in post hoc analyses of randomized clinical trial data and in analyses of long-term observational data. In such cases, it is quite likely that there are unknown or uncollected sources of heterogeneity in event rates. Research has shown that an underlying source of heterogeneity in event rates which is not included in proportional hazards regression models leads to biased estimates of included covariate effects and lower power to test them, whether the source of heterogeneity is assumed to be fixed or random. We demonstrate here, using several post hoc analyses of clinical trial data, that a potentially common problem may be that the non-randomized groups to be compared have differential variability in their event rates. We then show through simulation that such underlying heterogeneity which varies across the groups, when ignored in the modelling, can lead to an attenuated regression effect estimate for comparing the two groups, lower rejection rates for the effect, and Wald-based confidence intervals with potentially much lower coverage than nominal. When the groups are not significantly different but heterogeneity differs between them, an analysis ignoring the heterogeneity can even result in a significantly negative comparison.

17.
    
Analysis of clustered data focusing on inference of the marginal distribution may be problematic when the risk of the outcome is related to the cluster size, termed informative cluster size. In the absence of censoring, Hoffman et al. proposed a within-cluster resampling method, which is asymptotically equivalent to a weighted generalized estimating equations score equation. We investigate the estimation of the marginal distribution for multivariate survival data with informative cluster size using cluster-weighted Weibull and Cox proportional hazards models. The cluster-weighted Cox model can be implemented using standard software. Simulation results demonstrate that the proposed methods produce unbiased parameter estimation in the presence of informative cluster size. To illustrate the proposed approach, we analyze survival data from a lymphatic filariasis study in Recife, Brazil.
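The cluster-weighting idea is easy to show in isolation: each subject receives weight 1/(its cluster's size), so every cluster contributes equally to the marginal estimate no matter how many members it has. The sketch below applies the weights to a plain event rate rather than a Cox model, purely to make the effect of the weighting visible; the function names are ours.

```python
from collections import Counter

def cluster_weights(cluster_ids):
    """Weight 1/n_i for each member of a cluster of size n_i;
    the weights sum to the number of clusters."""
    size = Counter(cluster_ids)
    return [1.0 / size[c] for c in cluster_ids]

def weighted_event_rate(events, weights):
    """Weighted marginal event rate (a stand-in for the weighted
    score equation used in the cluster-weighted Cox model)."""
    return sum(e * w for e, w in zip(events, weights)) / sum(weights)
```

With clusters [1, 1, 1, 2] and events [1, 1, 1, 0], the unweighted rate is 3/4 but the cluster-weighted rate is 1/2: the large high-risk cluster no longer dominates, which is the correction informative cluster size requires.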

18.
    
Incorporating spatial variation could potentially enhance information coming from survival data. In addition, simultaneous (joint) modeling of time-to-event data from different diseases, such as cancers, from the same patient could provide useful insights as to how these diseases behave together. This paper proposes Bayesian hierarchical survival models for capturing spatial correlations within the proportional hazards (PH) and proportional odds (PO) frameworks. Parametric models (Weibull for the PH and log-logistic for the PO) were used for the baseline distribution, while spatial correlation is introduced in the form of county-cancer-level frailties. We illustrate with data from the Surveillance Epidemiology and End Results database of the National Cancer Institute on patients in Iowa diagnosed with multiple gastrointestinal cancers. Model checking and comparison among competing models were performed and some implementation issues were presented. We recommend the use of the spatial PH model for this data set.

19.
    
Survival regression is commonly applied in biomedical studies or clinical trials, and evaluating predictive performance plays an essential role in model diagnosis and selection. The presence of censored data, particularly if informative, poses further challenges for the assessment of predictive accuracy. Existing literature mainly focuses on predicting survival probabilities, with limited work on predicting survival times. In this work, we focus on accuracy measures of predicted survival times adjusted for a potentially informative censoring mechanism (i.e., coarsening at random (CAR) or non-CAR) by adopting the technique of inverse probability of censoring weighting. Our proposed predictive metric can adapt to various survival regression frameworks, including but not limited to accelerated failure time models and proportional hazards models. Moreover, we provide the asymptotic properties of the inverse probability of censoring weighting estimators under CAR. We consider the settings of high-dimensional data under CAR or non-CAR as extensions. The performance of the proposed method is evaluated through extensive simulation studies and analysis of real data from the Critical Assessment of Microarray Data Analysis.

20.
    
The proliferation of longitudinal studies has increased the importance of statistical methods for time-to-event data that can incorporate time-dependent covariates. The Cox proportional hazards model is one such method that is widely used. As more extensions of the Cox model with time-dependent covariates are developed, simulation studies will grow in importance as well. An essential starting point for simulation studies of time-to-event models is the ability to produce simulated survival times from a known data-generating process. This paper develops a method for the generation of survival times that follow a Cox proportional hazards model with time-dependent covariates. The method presented relies on a simple transformation of random variables generated according to a truncated piecewise exponential distribution and allows practitioners great flexibility and control over both the number of time-dependent covariates and the number of time periods in the duration of follow-up measurement. Within this framework, an additional argument is suggested that allows researchers to generate time-to-event data in which covariates change at integer-valued steps of the time scale. The purpose of this approach is to produce data for simulation experiments that mimic the types of data structures that applied researchers encounter when using longitudinal biomedical data. Validity is assessed in a set of simulation experiments, and results indicate that the proposed procedure performs well in producing data that conform to the assumptions of the Cox proportional hazards model. Copyright © 2013 John Wiley & Sons, Ltd.
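The inversion step at the heart of such generators can be sketched directly: under a piecewise-constant hazard (constant within each interval because the time-dependent covariate only changes at the cut points), a uniform draw u is transformed into an event time by solving H(t) = -log(u) interval by interval. This is a generic sketch of piecewise-exponential inversion, not the paper's specific algorithm, and the function name is ours.

```python
import math

def invert_piecewise_hazard(u, hazards, cuts):
    """Transform u ~ Uniform(0,1) into an event time whose hazard is
    piecewise constant: hazards[j] applies on [cuts[j], cuts[j+1]).
    A covariate changing value at the cuts makes the hazard differ
    across intervals."""
    target = -math.log(u)              # Exp(1) variate via inversion
    cum = 0.0
    for j, h in enumerate(hazards):
        width = cuts[j + 1] - cuts[j]
        if cum + h * width >= target:  # event falls inside this interval
            return cuts[j] + (target - cum) / h
        cum += h * width
    return cuts[-1]                    # survives past the last cut (censor here)
```

With a single constant hazard this reduces to the familiar -log(u)/h exponential draw; with several intervals it places the event where the accumulated hazard first reaches the target.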
