Similar Articles (20 results)
1.
Multiple imputation is commonly used to impute missing data, and is typically more efficient than complete cases analysis in regression analysis when covariates have missing values. Imputation may be performed using a regression model for the incomplete covariates on other covariates and, importantly, on the outcome. With a survival outcome, it is a common practice to use the event indicator D and the log of the observed event or censoring time T in the imputation model, but the rationale is not clear. We assume that the survival outcome follows a proportional hazards model given covariates X and Z. We show that a suitable model for imputing binary or Normal X is a logistic or linear regression on the event indicator D, the cumulative baseline hazard H0(T), and the other covariates Z. This result is exact in the case of a single binary covariate; in other cases, it is approximately valid for small covariate effects and/or small cumulative incidence. If we do not know H0(T), we approximate it by the Nelson–Aalen estimator of H(T) or estimate it by Cox regression. We compare the methods using simulation studies. We find that using logT biases covariate‐outcome associations towards the null, while the new methods have lower bias. Overall, we recommend including the event indicator and the Nelson–Aalen estimator of H(T) in the imputation model. Copyright © 2009 John Wiley & Sons, Ltd.  相似文献   
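The role of the Nelson–Aalen estimator in this recommendation is easy to demonstrate concretely. Below is a minimal Python sketch, not taken from the paper: the Nelson–Aalen estimate of H(T) is computed by hand (ties handled naively), and a binary covariate X is imputed from a logistic regression on the event indicator D, H(T), and a fully observed covariate Z. The variable names and toy data are illustrative assumptions, and only a single stochastic imputation draw is shown rather than a full proper multiple-imputation procedure.

```python
import numpy as np
import statsmodels.api as sm

def nelson_aalen(time, event):
    """Nelson-Aalen estimate of the cumulative hazard H(t), evaluated at each
    subject's own observed time; ties are handled naively (one subject per row)."""
    order = np.argsort(time)
    d_sorted = event[order]
    n_at_risk = len(time) - np.arange(len(time))      # number at risk just before each sorted time
    H_sorted = np.cumsum(np.where(d_sorted == 1, 1.0 / n_at_risk, 0.0))
    H = np.empty_like(H_sorted, dtype=float)
    H[order] = H_sorted                               # map back to the original subject order
    return H

# toy data: T = observed time, D = event indicator, Z = fully observed covariate,
# X = binary covariate with values missing at random (np.nan where missing)
rng = np.random.default_rng(1)
n = 500
Z = rng.normal(size=n)
X = rng.binomial(1, 0.4, size=n).astype(float)
T = rng.exponential(scale=np.exp(-(0.5 * X + 0.3 * Z)))
D = (T < 2.0).astype(int)
T = np.minimum(T, 2.0)                                # administrative censoring at t = 2
X[rng.random(n) < 0.3] = np.nan

H = nelson_aalen(T, D)
obs = ~np.isnan(X)
design = sm.add_constant(np.column_stack([D, H, Z]))  # predictors: D, Nelson-Aalen H(T), Z
imputation_fit = sm.Logit(X[obs], design[obs]).fit(disp=0)
p = imputation_fit.predict(design[~obs])
X_imp = X.copy()
X_imp[~obs] = rng.binomial(1, p)                      # one stochastic imputation draw
```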

2.
In behavioral, biomedical, and social‐psychological sciences, it is common to encounter latent variables and heterogeneous data. Mixture structural equation models (SEMs) are very useful methods to analyze these kinds of data. Moreover, the presence of missing data, including both missing responses and missing covariates, is an important issue in practical research. However, limited work has been done on the analysis of mixture SEMs with non‐ignorable missing responses and covariates. The main objective of this paper is to develop a Bayesian approach for analyzing mixture SEMs with an unknown number of components, in which a multinomial logit model is introduced to assess the influence of some covariates on the component probability. Results of our simulation study show that the Bayesian estimates obtained by the proposed method are accurate, and the model selection procedure via a modified DIC is useful in identifying the correct number of components and in selecting an appropriate missing mechanism in the proposed mixture SEMs. A real data set related to a longitudinal study of polydrug use is employed to illustrate the methodology. Copyright © 2010 John Wiley & Sons, Ltd.  相似文献   

3.
We assess stratum (e.g. treatment) interactions with covariates and with the baseline hazard function in the proportional hazards (PH) regression model for lifetime data. We consider models incorporating stratum interactions both with and without stratification of the risk sets in the likelihood function, and describe likelihood ratio statistics for tests of the presence of these interactions. We also present step-down methods for building reduced models which include stratum-specific parameters corresponding to covariates which interact with treatment. We apply PH models with such interactions to a clinical trial of DES in the treatment of prostate cancer to determine optimal treatment conditional on each patient's covariates.  相似文献   

4.
The proliferation of longitudinal studies has increased the importance of statistical methods for time‐to‐event data that can incorporate time‐dependent covariates. The Cox proportional hazards model is one such method that is widely used. As more extensions of the Cox model with time‐dependent covariates are developed, simulation studies will grow in importance as well. An essential starting point for simulation studies of time‐to‐event models is the ability to produce simulated survival times from a known data generating process. This paper develops a method for the generation of survival times that follow a Cox proportional hazards model with time‐dependent covariates. The method presented relies on a simple transformation of random variables generated according to a truncated piecewise exponential distribution and allows practitioners great flexibility and control over both the number of time‐dependent covariates and the number of time periods in the duration of follow‐up measurement. Within this framework, an additional argument is suggested that allows researchers to generate time‐to‐event data in which covariates change at integer‐valued steps of the time scale. The purpose of this approach is to produce data for simulation experiments that mimic the types of data structures that applied researchers encounter when using longitudinal biomedical data. Validity is assessed in a set of simulation experiments, and results indicate that the proposed procedure performs well in producing data that conform to the assumptions of the Cox proportional hazards model. Copyright © 2013 John Wiley & Sons, Ltd.
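As a concrete illustration of the piecewise exponential idea, the following Python sketch (ours, not the authors' code) draws a single survival time when a time-dependent covariate changes value at integer-valued steps of the time scale, so that the hazard is piecewise constant; the event time is obtained by inverting the cumulative hazard at a unit-rate exponential draw. The function and parameter names are hypothetical.

```python
import numpy as np

def draw_piecewise_exponential_time(baseline_hazard, beta, covariate_path, rng):
    """covariate_path[k] is the covariate value on the interval [k, k+1);
    the hazard on that interval is baseline_hazard * exp(beta * covariate_path[k]).
    The event time is found by inverting H(t) at a unit-rate exponential draw."""
    target = rng.exponential()                 # -log(U), U ~ Uniform(0, 1)
    cumulative = 0.0
    for k, z in enumerate(covariate_path):
        rate = baseline_hazard * np.exp(beta * z)
        if cumulative + rate > target:         # event occurs inside interval [k, k+1)
            return k + (target - cumulative) / rate
        cumulative += rate
    return float(len(covariate_path))          # administratively censored at end of follow-up

rng = np.random.default_rng(0)
z_path = rng.binomial(1, 0.5, size=10)         # covariate updated at integer-valued steps
t = draw_piecewise_exponential_time(baseline_hazard=0.1, beta=0.7,
                                    covariate_path=z_path, rng=rng)
```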

5.
For testing the efficacy of a treatment in a clinical trial with survival data, the Cox proportional hazards (PH) model is the well‐accepted, conventional tool. When using this model, one typically proceeds by confirming that the required PH assumption holds true. If the PH assumption fails to hold, there are many options available, proposed as alternatives to the Cox PH model. An important question which arises is whether the potential bias introduced by this sequential model fitting procedure merits concern and, if so, what are effective mechanisms for correction. We investigate by means of a simulation study and draw attention to the considerable drawbacks, with regard to power, of a simple resampling technique, the permutation adjustment, a natural recourse for addressing such challenges. We also consider a recently proposed two‐stage testing strategy (2008) for ameliorating these effects. Copyright © 2013 John Wiley & Sons, Ltd.

6.
We discuss the use of local likelihood methods to fit proportional hazards regression models to right and interval censored data. The assumed model allows for an arbitrary, smoothed baseline hazard on which a vector of covariates operates in a proportional manner, and thus produces an interpretable baseline hazard function along with estimates of global covariate effects. For estimation, we extend the modified EM algorithm suggested by Betensky, Lindsey, Ryan and Wand. We illustrate the method with data on times to deterioration of breast cosmeses and HIV-1 infection rates among haemophiliacs.  相似文献   

7.
We consider Cox proportional hazards regression when the covariate vector includes error-prone discrete covariates along with error-free covariates, which may be discrete or continuous. The misclassification in the discrete error-prone covariates is allowed to be of any specified form. Building on the work of Nakamura and his colleagues, we present a corrected score method for this setting. The method can handle all three major study designs (internal validation design, external validation design, and replicate measures design), both functional and structural error models, and time-dependent covariates satisfying a certain 'localized error' condition. We derive the asymptotic properties of the method and indicate how to adjust the covariance matrix of the regression coefficient estimates to account for estimation of the misclassification matrix. We present the results of a finite-sample simulation study under Weibull survival with a single binary covariate having known misclassification rates. The performance of the method described here was similar to that of related methods we have examined in previous works. Specifically, our new estimator performed as well as or, in a few cases, better than the full Weibull maximum likelihood estimator. We also present simulation results for our method for the case where the misclassification probabilities are estimated from an external replicate measures study. Our method generally performed well in these simulations. The new estimator has a broader range of applicability than many other estimators proposed in the literature, including those described in our own earlier work, in that it can handle time-dependent covariates with an arbitrary misclassification structure. We illustrate the method on data from a study of the relationship between dietary calcium intake and distal colon cancer.  相似文献   

8.
Several approaches exist for handling missing covariates in the Cox proportional hazards model. Multiple imputation (MI) is relatively easy to implement, with various software available, and results in consistent estimates if the imputation model is correct. On the other hand, the fully augmented weighted estimators (FAWEs) recover a substantial proportion of the efficiency and have the doubly robust property. In this paper, we compare the FAWEs and the MI through a comprehensive simulation study. For the MI, we consider the multiple imputation by chained equation and focus on two imputation methods: Bayesian linear regression imputation and predictive mean matching. Simulation results show that the imputation methods can be rather sensitive to model misspecification and may have large bias when the censoring time depends on the missing covariates. In contrast, the FAWEs allow the censoring time to depend on the missing covariates and, owing to the doubly robust property, are remarkably robust as long as either the conditional expectations or the selection probability is specified correctly. The comparison suggests that the FAWEs show the potential for being a competitive and attractive tool for tackling the analysis of survival data with missing covariates. Copyright © 2010 John Wiley & Sons, Ltd.

9.
Missing covariate values are prevalent in regression applications. While an array of methods have been developed for estimating parameters in regression models with missing covariate data for a variety of response types, minimal focus has been given to validation of the response model and influence diagnostics. Previous research has mainly focused on estimating residuals for observations with missing covariates using expected values, after which specialized techniques are needed to conduct proper inference. We suggest a multiple imputation strategy that allows for the use of standard methods for residual analyses on the imputed data sets or a stacked data set. We demonstrate the suggested multiple imputation method by analyzing the Sleep in Mammals data in the context of a linear regression model and the New York Social Indicators Status data with a logistic regression model.  相似文献   

10.
With competing risks failure time data, one often needs to assess the covariate effects on the cumulative incidence probabilities. Fine and Gray proposed a proportional hazards regression model to directly model the subdistribution of a competing risk. They developed the estimating procedure for right-censored competing risks data, based on the inverse probability of censoring weighting. Right-censored and left-truncated competing risks data sometimes occur in biomedical research. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with right-censored and left-truncated data. We adopt a new weighting technique to estimate the parameters in this model. We have derived the large sample properties of the proposed estimators. To illustrate the application of the new method, we analyze the failure time data for children with acute leukemia. In this example, the failure times for children who had bone marrow transplants were left truncated.

11.
Objective: To help control ever-growing medical costs, Cox regression analysis was used to identify the main factors influencing single-disease costs and to provide a basis for formulating targeted cost-control policies. Methods: Cox regression analysis was performed on the factors influencing costs for 3768 surgical single-disease patients. Results: High costs for surgical single diseases were closely associated with patient sex, age, surgical approach, anesthesia method, anesthesia monitoring time, proportion of drug costs, and method of payment. Conclusion: The Cox regression analysis identified the main factors influencing surgical single-disease costs; based on these findings, it is proposed to screen disease categories strictly and to carry out cost control by combining reasonable clinical pathways with reasonable spending.

12.
The Cox proportional hazards regression model (Cox model) is a widely used multivariable method for analyzing time-to-event data, and a key issue when fitting a Cox model is how to choose an appropriate time scale related to the occurrence of the outcome event. Cohort studies currently conducted in China have paid relatively little attention to the choice of time scale when analyzing their data. This study briefly introduces and compares several time-scale selection strategies commonly reported in the literature; using data from the Shanghai Women's Health Cohort, with the association between central obesity and liver cancer risk as an example, it illustrates how Cox models with different time scales affect the analysis results; on this basis, several recommendations on time-scale selection for the Cox model are proposed, to provide a reference for the analysis of cohort study data.
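The practical consequence of the time-scale choice can be made concrete with a small sketch. In the Python snippet below (illustrative only, with hypothetical variable names), the same four subjects produce different Cox partial-likelihood risk sets depending on whether time on study or attained age is the time scale; under the age scale each subject enters the risk set only at their enrolment age (delayed entry, i.e. left truncation).

```python
import numpy as np

entry_age = np.array([52., 45., 60., 48.])   # age at enrolment
follow_up = np.array([ 6.,  9.,  3.,  7.])   # years on study until event or censoring
event     = np.array([  1,   0,   1,   1])
exit_age  = entry_age + follow_up

def risk_set_time_on_study(i):
    # at risk: everyone still under observation at the event's study time
    return np.where(follow_up >= follow_up[i])[0]

def risk_set_attained_age(i):
    # at risk: enrolled before, and not yet exited at, the event's attained age
    a = exit_age[i]
    return np.where((entry_age < a) & (exit_age >= a))[0]

for i in np.where(event == 1)[0]:
    print(i, risk_set_time_on_study(i), risk_set_attained_age(i))
```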

13.
Simulations and Monte Carlo methods serve an important role in modern statistical research. They allow for an examination of the performance of statistical procedures in settings in which analytic and mathematical derivations may not be feasible. A key element in any statistical simulation is the existence of an appropriate data‐generating process: one must be able to simulate data from a specified statistical model. We describe data‐generating processes for the Cox proportional hazards model with time‐varying covariates when event times follow an exponential, Weibull, or Gompertz distribution. We consider three types of time‐varying covariates: first, a dichotomous time‐varying covariate that can change at most once from untreated to treated (e.g., organ transplant); second, a continuous time‐varying covariate such as cumulative exposure at a constant dose to radiation or to a pharmaceutical agent used for a chronic condition; third, a dichotomous time‐varying covariate with a subject being able to move repeatedly between treatment states (e.g., current compliance or use of a medication). In each setting, we derive closed‐form expressions that allow one to simulate survival times so that survival times are related to a vector of fixed or time‐invariant covariates and to a single time‐varying covariate. We illustrate the utility of our closed‐form expressions for simulating event times by using Monte Carlo simulations to estimate the statistical power to detect as statistically significant the effect of different types of binary time‐varying covariates. This is compared with the statistical power to detect as statistically significant a binary time‐invariant covariate. Copyright © 2012 John Wiley & Sons, Ltd.  相似文献   
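For the first of these settings, a dichotomous covariate that switches once from untreated to treated, the inversion is simple enough to write down directly. The Python sketch below is our illustration (not the paper's code) under an assumed exponential baseline hazard lam, switch time t0, and log hazard ratio beta: the cumulative hazard is H(t) = lam*t for t <= t0 and H(t) = lam*t0 + lam*exp(beta)*(t - t0) for t > t0, and the survival time is T = H^{-1}(-log U) for U ~ Uniform(0, 1).

```python
import numpy as np

def simulate_single_switch_time(lam, beta, t0, rng):
    """Survival time under an exponential baseline hazard lam and a binary
    time-varying covariate that switches from 0 to 1 at time t0 (hedged sketch)."""
    v = -np.log(rng.random())                 # unit-rate exponential draw, -log(U)
    if v <= lam * t0:                         # event occurs before the covariate switches
        return v / lam
    return t0 + (v - lam * t0) / (lam * np.exp(beta))

rng = np.random.default_rng(2023)
times = np.array([simulate_single_switch_time(lam=0.2, beta=0.5, t0=1.5, rng=rng)
                  for _ in range(10_000)])
```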

14.
Relative survival, a method for assessing prognostic factors for disease-specific mortality in unselected populations, is frequently used in population-based studies. However, most relative survival models assume that the effects of covariates on disease-specific mortality conform with the proportional hazards hypothesis, which may not hold in some long-term studies. To accommodate variation over time of a predictor's effect on disease-specific mortality, we developed a new relative survival regression model using B-splines to model the hazard ratio as a flexible function of time, without having to specify a particular functional form. Our method also allows for testing the hypotheses of hazards proportionality and no association on disease-specific hazard. Accuracy of estimation and inference were evaluated in simulations. The method is illustrated by an analysis of a population-based study of colon cancer.  相似文献   

15.
Analysis of health care cost data is often complicated by a high level of skewness, heteroscedastic variances and the presence of missing data. Most of the existing literature on cost data analysis has focused on modeling the conditional mean. In this paper, we study a weighted quantile regression approach for estimating the conditional quantiles of health care cost data with missing covariates. The weighted quantile regression estimator is consistent, unlike the naive estimator, and asymptotically normal. Furthermore, we propose a modified BIC for variable selection in quantile regression when the covariates are missing at random. The quantile regression framework allows us to obtain a more complete picture of the effects of the covariates on the health care cost and is naturally adapted to the skewness and heterogeneity of the cost data. The method is semiparametric in the sense that it does not require specifying the likelihood function for the random error or the covariates. We investigate the weighted quantile regression procedure and the modified BIC via extensive simulations. We illustrate the application by analyzing a real data set from a health care cost study. Copyright © 2013 John Wiley & Sons, Ltd.

16.
Liu Y, Craig BA. Statistics in Medicine 2006; 25(10):1729–1740.
In survival analysis, use of the Cox proportional hazards model requires knowledge of all covariates under consideration at every failure time. Since failure times rarely coincide with observation times, time-dependent covariates (covariates that vary over time) need to be inferred from the observed values. In this paper, we introduce the last value auto-regressed (LVAR) estimation method and compare it to several other established estimation approaches via a simulation study. The comparison shows that under several time-dependent covariate processes this method results in a smaller mean square error when considering the time-dependent covariate effect.  相似文献   

17.
One can fruitfully approach survival problems without covariates in an actuarial way. In narrow time bins, the number of people at risk is counted together with the number of events. The relationship between time and probability of an event can then be estimated with a parametric or semi-parametric model. The number of events observed in each bin is described using a Poisson distribution with the log mean specified using a flexible penalized B-splines model with a large number of equidistant knots. Regression on pertinent covariates can easily be performed using the same log-linear model, leading to the classical proportional hazard model. We propose to extend that model by allowing the regression coefficients to vary in a smooth way with time. Penalized B-splines models will be proposed for each of these coefficients. We show how the regression parameters and the penalty weights can be estimated efficiently using Bayesian inference tools based on the Metropolis-adjusted Langevin algorithm.  相似文献   
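The actuarial starting point of this approach, counting events and person-time in narrow time bins and fitting a log-linear Poisson model, is easy to sketch. The Python code below illustrates only that set-up: bin-specific intercepts stand in for the baseline hazard, and the penalized B-spline smoothing, time-varying coefficients, and Bayesian MALA estimation of the paper are not reproduced. All names and the toy data are our assumptions.

```python
import numpy as np
import statsmodels.api as sm

# toy survival data with one binary covariate and administrative censoring at t = 5
rng = np.random.default_rng(7)
n, beta_true = 2000, 0.6
z = rng.binomial(1, 0.5, size=n)
t = rng.exponential(scale=1.0 / (0.3 * np.exp(beta_true * z)))
d = (t < 5.0).astype(int)
t = np.minimum(t, 5.0)

bins = np.linspace(0.0, 5.0, 21)                         # 20 narrow time bins
rows = []
for j in range(len(bins) - 1):
    lo, hi = bins[j], bins[j + 1]
    for zval in (0, 1):
        grp = z == zval
        exposure = np.clip(t[grp], None, hi) - lo        # person-time spent in this bin
        exposure = exposure[exposure > 0].sum()
        events = np.sum((d[grp] == 1) & (t[grp] > lo) & (t[grp] <= hi))
        if exposure > 0:
            rows.append((j, zval, events, exposure))
rows = np.array(rows, dtype=float)

bin_dummies = (rows[:, [0]] == np.arange(len(bins) - 1)).astype(float)  # one intercept per bin
X = np.column_stack([bin_dummies, rows[:, 1]])
fit = sm.GLM(rows[:, 2], X, family=sm.families.Poisson(),
             offset=np.log(rows[:, 3])).fit()            # log person-time as offset
print(fit.params[-1])                                     # log hazard ratio, close to beta_true
```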

18.
A normal copula-based selection model is proposed for continuous longitudinal data with a non-ignorable non-monotone missing-data process. The normal copula is used to combine the distribution of the outcome of interest and that of the missing-data indicators given the covariates. Parameters in the model are estimated by a pseudo-likelihood method. We first use the GEE with a logistic link to estimate the parameters associated with the marginal distribution of the missing-data indicator given the covariates, assuming that covariates are always observed. Then we estimate other parameters by inserting the estimates from the first step into the full likelihood function. A simulation study is conducted to assess the robustness of the assumed model under different missing-data processes. The proposed method is then applied to one example from a community cohort study to demonstrate its capability to reduce bias.  相似文献   

19.
In survival studies, information lost through censoring can be partially recaptured through repeated measures data which are predictive of survival. In addition, such data may be useful in removing bias in survival estimates, due to censoring which depends upon the repeated measures. Here we investigate joint models for survival T and repeated measurements Y, given a vector of covariates Z. Mixture models indexed as f(T|Z) f(Y|T,Z) are well suited for assessing covariate effects on survival time. Our objective is efficiency gains, using non-parametric models for Y in order to avoid introducing bias by misspecification of the distribution for Y. We model (T|Z) as a piecewise exponential distribution with proportional hazards covariate effect. The component (Y|T,Z) has a multinomial model. The joint likelihood for survival and longitudinal data is maximized, using the EM algorithm. The estimate of covariate effect is compared to the estimate based on the standard proportional hazards model and an alternative joint model based estimate. We demonstrate modest gains in efficiency when using the joint piecewise exponential model. In a simulation, the estimated efficiency gain over the standard proportional hazards model is 6.4 per cent. In clinical trial data, the estimated efficiency gain over the standard proportional hazards model is 10.2 per cent.

20.
We consider a general semiparametric hazards regression model that encompasses the Cox proportional hazards model and the accelerated failure time model for survival analysis. To overcome the nonexistence of the maximum likelihood, we derive a kernel‐smoothed profile likelihood function and prove that the resulting estimates of the regression parameters are consistent and achieve semiparametric efficiency. In addition, we develop penalized structure selection techniques to determine which covariates constitute the accelerated failure time model and which covariates constitute the proportional hazards model. The proposed method is able to estimate the model structure consistently and model parameters efficiently. Furthermore, variance estimation is straightforward. The proposed estimation performs well in simulation studies and is applied to the analysis of a real data set. Copyright © 2013 John Wiley & Sons, Ltd.  相似文献   
