1.
Motivated by a clinical trial of zinc nasal spray for the treatment of the common cold, we consider the problem of comparing two crossing hazard rates. A comprehensive review of the existing methods for dealing with the crossing hazard rates problem is provided. A new method, based on modelling the crossing hazard rates, is proposed and implemented under the Cox proportional hazards framework. The main advantage of the proposed method is its use of the Box-Cox transformation, which covers a wide range of hazard crossing patterns. Simulation studies comparing the performance of the existing methods and the proposed one show that the proposed method outperforms some of its peers in certain cases. Applications to kidney dialysis patient data and the zinc nasal spray clinical trial data are discussed.
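The abstract does not give the model form; one plausible sketch, assuming the Box-Cox transformation is applied to time so that the log hazard ratio can change sign (the authors' exact parameterization may differ), is
\[
h_1(t) = h_0(t)\,\exp\!\left\{\beta + \theta\,\frac{t^{\gamma}-1}{\gamma}\right\},
\]
so the hazard ratio crosses 1 at the finite time where \((t^{\gamma}-1)/\gamma = -\beta/\theta\), with the transformation parameter \(\gamma\) governing the crossing pattern.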
2.
A coefficient of explained randomness, analogous to explained variation but for non-linear models, was presented by Kent. The construct hinges upon the notion of Kullback-Leibler information gain. Kent and O'Quigley developed these ideas, obtaining simple, multiple and partial coefficients for the situation of proportional hazards regression. Their approach was based upon the idea of transforming a general proportional hazards model to a specific one of Weibull form. Xu and O'Quigley developed a more direct approach, more in harmony with the semi-parametric nature of the proportional hazards model, thereby simplifying inference and allowing, for instance, the use of time-dependent covariates. A potential drawback to the coefficient of Xu and O'Quigley is its interpretation as explained randomness in the covariate given time. An investigator might feel that the interpretation of the Kent and O'Quigley coefficient, as a proportion of explained randomness of time given the covariate, is preferable. One purpose of this note is to indicate that, under an independent censoring assumption, the two population coefficients coincide. Thus the simpler inferential setting for Xu and O'Quigley can also be applied to the coefficient of Kent and O'Quigley. Our second purpose is to point out that a sample-based coefficient in common use in the SAS statistical package can be interpreted as an estimate of explained randomness when there is no censoring. When there is censoring the SAS coefficient would not seem satisfactory in that its population counterpart depends on an independent censoring mechanism. However there is a quick fix and we argue in favour of its use.
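For reference, the information-gain construction referred to above is usually written as follows (a sketch with assumed notation, where \(\Gamma\) denotes the Kullback-Leibler information gain from adding the covariate):
\[
\rho^{2} \;=\; 1 - \exp(-2\,\Gamma),
\]
which reduces to the usual squared correlation in the bivariate normal case.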
3.
G R Howe, International Journal of Epidemiology, 1986, 15(2): 257-262
Computer simulation has been used to study the behaviour of four estimators of the common rate ratio estimated from cohort data, in terms of bias and root mean square error. It is concluded that the maximum likelihood estimator is the preferable approach, though an excellent approximation to it is yielded by the analogue of the Mantel-Haenszel estimator. When the number of observed cases in the referent group is large, it may be preferable to present the results as the standardized mortality (morbidity) ratio, though under these circumstances all four estimators tend to the same value. The O/E estimator, proposed for use with clinical trial data in some situations, is noticeably biased and inefficient and should therefore be avoided.
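For reference, the Mantel-Haenszel analogue mentioned above takes the following standard form for stratified person-time data (a sketch with assumed notation: \(a_i\) and \(b_i\) are the exposed and unexposed cases in stratum \(i\), \(N_{1i}\) and \(N_{0i}\) the corresponding person-time, and \(T_i = N_{1i} + N_{0i}\)):
\[
\widehat{RR}_{MH} \;=\; \frac{\sum_i a_i\,N_{0i}/T_i}{\sum_i b_i\,N_{1i}/T_i}.
\]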
4.
Efficiency of the logistic regression and Cox proportional hazards models in longitudinal studies
Both logistic regression and Cox proportional hazards models are used widely in longitudinal epidemiologic studies for analysing the relationship between several risk factors and a time-related dichotomous event. The two models yield similar estimates of regression coefficients in studies with short follow-up and low incidence of event occurrence. Further, with just one dichotomous covariate and identical censoring times for all subjects, the asymptotic relative efficiency of the two models is very close to 1 unless the duration of follow-up is extended. We generalize this result to several qualitative or quantitative covariates. This was motivated by the analysis of mortality data from a study where all subjects are followed up during the same fixed period without loss except by death. Logistic and Cox models were applied to these data. Similar results were obtained for the two models in shorter periods of follow-up of five years or less, but not in longer periods of ten years or more, where the survival rate was lower.
5.
In survival studies, information lost through censoring can be partially recaptured through repeated measures data that are predictive of survival. In addition, such data may be useful in removing bias in survival estimates due to censoring that depends upon the repeated measures. Here we investigate joint models for survival T and repeated measurements Y, given a vector of covariates Z. Mixture models indexed as f(T | Z) f(Y | T, Z) are well suited for assessing covariate effects on survival time. Our objective is efficiency gains, using non-parametric models for Y in order to avoid introducing bias by misspecification of the distribution for Y. We model T | Z as a piecewise exponential distribution with a proportional hazards covariate effect. The component Y | T, Z has a multinomial model. The joint likelihood for survival and longitudinal data is maximized using the EM algorithm. The estimate of covariate effect is compared to the estimate based on the standard proportional hazards model and to an alternative joint-model-based estimate. We demonstrate modest gains in efficiency when using the piecewise exponential joint model. In a simulation, the estimated efficiency gain over the standard proportional hazards model is 6.4 per cent. In clinical trial data, the estimated efficiency gain over the standard proportional hazards model is 10.2 per cent.
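A sketch of the joint likelihood implied by this factorization (assumed notation: \(t_i\) the observed time, \(\delta_i\) the failure indicator, \(y_i\) the repeated measures), where the integral for censored subjects is what motivates the EM algorithm:
\[
L(\theta)=\prod_{i=1}^{n}
\Big[f(t_i\mid z_i)\,f(y_i\mid t_i,z_i)\Big]^{\delta_i}
\Big[\int_{t_i}^{\infty} f(t\mid z_i)\,f(y_i\mid t,z_i)\,dt\Big]^{1-\delta_i}.
\]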
6.
This paper presents a mixture model which combines features of the usual Cox proportional hazards model with those of a class of models known as mixtures-of-experts. The resulting model is more flexible than the usual Cox model in the sense that the log hazard ratio is allowed to vary non-linearly as a function of the covariates. Thus it provides a flexible approach to both modelling survival data and model checking. The method is illustrated with simulated data, as well as with multiple myeloma data.
7.
PURPOSE: We describe a method for testing and estimating a two-way additive interaction between two categorical variables, each of which has two or more levels. METHODS: We test additive and multiplicative interactions in the same proportional hazards model and measure additivity by the relative excess risk due to interaction (RERI), the proportion of disease attributable to interaction (AP), and the synergy index (S). A simulation study was used to compare the performance of these measures of additivity. Data from the Atherosclerosis Risk in Communities cohort study, with a total of 15,792 subjects, were used to exemplify the methods. RESULTS: The test and measures of departure from additivity depend neither on follow-up time nor on the covariates. The simulation study indicates that RERI is the best choice among the measures of additivity under a proportional hazards model. The examples indicated that an interaction between two variables can be statistically significant on the additive scale (RERI=1.14, p=0.04) but not on the multiplicative scale (beta3=0.59, p=0.12), and that additive and multiplicative interactions can be in opposite directions (RERI=0.08, beta3=-0.08). CONCLUSIONS: The method applies more broadly to any regression model with a rate as the dependent variable. When additive and multiplicative interactions are both statistically significant but in opposite directions, interpretation requires caution.
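A minimal sketch of how the three additivity measures are computed from fitted hazard ratios (standard definitions; the function name and hazard-ratio values below are illustrative, not taken from the study):

```python
# Hypothetical sketch: additive-interaction measures from hazard ratios for two
# binary factors, with HR00 = 1 for the doubly-unexposed reference group.
def additive_interaction(hr10, hr01, hr11):
    reri = hr11 - hr10 - hr01 + 1                 # relative excess risk due to interaction
    ap = reri / hr11                              # proportion attributable to interaction
    s = (hr11 - 1) / ((hr10 - 1) + (hr01 - 1))    # synergy index
    return reri, ap, s

# Illustrative values only:
print(additive_interaction(hr10=1.5, hr01=2.0, hr11=3.6))
```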
8.
The Cox proportional hazards regression model (Cox model) is a widely used multivariable method for analysing time-to-event data. A key issue when fitting a Cox model is choosing an appropriate time scale related to the occurrence of the outcome event. Cohort studies currently conducted in China have paid relatively little attention to the choice of time scale for the Cox model during data analysis. This study briefly introduces and compares several time-scale strategies commonly reported in the literature; using data from the Shanghai Women's Health Study, with the association between central obesity and liver cancer risk as an example, it illustrates how Cox models with different time scales affect the results of data analysis; on this basis, several recommendations on choosing the time scale for Cox models are offered, to serve as a reference for the analysis of cohort study data.
9.
For survival data regression, the Cox proportional hazards model is the most popular model, but in certain situations the Cox model is inappropriate. Various authors have proposed the proportional odds model as an alternative. Yang and Prentice recently presented a number of easily implemented estimators for the proportional odds model. Here we show how to extend the methods of Yang and Prentice to a family of survival models that includes the proportional hazards model and the proportional odds model as special cases. The model is defined in terms of a Box-Cox transformation of the survival function, indexed by a transformation parameter rho. This model has been discussed by other authors, and is related to the Harrington-Fleming G(rho) family of tests and to frailty models. We discuss inference for the case where rho is known and the case where rho must be estimated. We present a simulation study of a pseudo-likelihood estimator and a martingale residual estimator. We find that the methods perform reasonably well. We apply our model to a real data set.
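One common parameterization of such a Box-Cox transformed survival family (a sketch; the paper's exact indexing by rho may differ) is
\[
S(t \mid x) =
\begin{cases}
\left\{ 1 + \rho\, \Lambda_0(t)\, e^{\beta' x} \right\}^{-1/\rho}, & \rho > 0,\\[4pt]
\exp\!\left\{ -\Lambda_0(t)\, e^{\beta' x} \right\}, & \rho = 0,
\end{cases}
\]
so that \(\rho \to 0\) recovers the proportional hazards model and \(\rho = 1\) the proportional odds model.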
10.
P Mock, Statistics in Medicine, 1990, 9(4): 463-464
11.
We compare parameter estimates from the proportional hazards model, the cumulative logistic model and a new modified logistic model (referred to as the person-time logistic model), with the use of simulated data sets and with the following quantities varied: disease incidence, risk factor strength, length of follow-up, the proportion censored, non-proportional hazards, and sample size. Parameter estimates from the person-time logistic regression model closely approximated those from the Cox model when the survival time distribution was close to exponential, but could differ substantially in other situations. We found parameter estimates from the cumulative logistic model similar to those from the Cox and person-time logistic models when the disease was rare, the risk factor moderate, and censoring rates similar across the covariates. We also compare the models with analysis of a real data set that involves the relationship of age, race, sex, blood pressure, and smoking to subsequent mortality. In this example, the length of follow-up among survivors varied from 5 to 14 years and the Cox and person-time logistic approaches gave nearly identical results. The cumulative logistic results had somewhat larger p-values but were substantively similar for all but one coefficient (the age-race interaction). The latter difference reflects differential censoring rates by age, race and sex.
12.
Simulation studies are an important statistical tool for investigating the performance, properties and adequacy of statistical models in pre-specified situations. One of the most important statistical models in medical research is the Cox proportional hazards model. In this paper, techniques to generate survival times for simulation studies of Cox proportional hazards models are presented. A general formula describing the relation between the hazard and the corresponding survival time of the Cox model is derived, which is useful in simulation studies. It is shown how the exponential, the Weibull and the Gompertz distribution can be applied to generate appropriate survival times for simulation studies. Additionally, the general relation between hazard and survival time can be used to derive distributions for special situations and to handle flexibly parameterized proportional hazards models. The use of distributions other than the exponential distribution is indispensable for investigating the characteristics of the Cox proportional hazards model, especially in non-standard situations where the partial likelihood depends on the baseline hazard. A simulation study investigating the effect of measurement errors in the German Uranium Miners Cohort Study is considered to illustrate the proposed simulation techniques and to emphasize the importance of careful modelling of the baseline hazard in Cox models.
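The underlying relation is the inverse-transform identity T = H0^{-1}(-log(U) / exp(x'beta)) with U uniform on (0, 1) and H0 the cumulative baseline hazard. A minimal sketch, not the authors' code, with illustrative parameter names (lam, nu, alpha):

```python
# Sketch: generate Cox-model survival times T = H0^{-1}( -log(U) / exp(x'beta) )
# for an exponential, Weibull or Gompertz baseline hazard (parameter names assumed).
import numpy as np

rng = np.random.default_rng(1)

def sim_cox_times(X, beta, baseline="weibull", lam=0.1, nu=1.5, alpha=0.05):
    u = rng.uniform(size=X.shape[0])
    z = -np.log(u) / np.exp(X @ beta)        # the value H0(T) must take
    if baseline == "exponential":            # H0(t) = lam * t
        return z / lam
    if baseline == "weibull":                # H0(t) = lam * t**nu
        return (z / lam) ** (1.0 / nu)
    if baseline == "gompertz":               # H0(t) = (lam/alpha) * (exp(alpha*t) - 1)
        return np.log(1.0 + alpha * z / lam) / alpha
    raise ValueError("unknown baseline")

# Illustrative use: one binary covariate with log hazard ratio 0.7
X = rng.binomial(1, 0.5, size=(1000, 1))
times = sim_cox_times(X, beta=np.array([0.7]))
```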
13.
Reduced-rank proportional hazards regression and simulation-based prediction for multi-state models
In this paper we address two issues arising in multi-state models with covariates. The first issue deals with how to obtain parsimony in the modeling of the effect of covariates. The standard way of incorporating covariates in multi-state models is by considering the transitions as separate building blocks, and modeling the effect of covariates for each transition separately, usually through a proportional hazards model for the transition hazard. This typically leads to a large number of regression coefficients to be estimated, and there is a real danger of over-fitting, especially when transitions with few events are present. We extend the reduced-rank ideas, proposed earlier in the context of competing risks, to multi-state models, in order to deal with this issue. The second issue addressed in this paper was motivated by the wish to obtain standard errors of the regression coefficients of the reduced-rank model. We propose a model-based resampling technique, based on repeatedly sampling trajectories through the multi-state model. The same ideas are also used for the estimation of prediction probabilities in general multi-state models and associated standard errors. We use data from the European Group for Blood and Marrow Transplantation to illustrate our techniques.
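For context, the reduced-rank idea can be sketched as follows (assumed notation): with p covariates and K transitions, the transition-specific proportional hazards models
\[
\lambda_k(t \mid Z) = \lambda_{k,0}(t)\,\exp\{Z'\beta_k\},\qquad
B = (\beta_1,\dots,\beta_K) = A\,\Gamma^{\top},\quad
A \in \mathbb{R}^{p\times r},\;\Gamma \in \mathbb{R}^{K\times r},
\]
are fitted with the p x K coefficient matrix B constrained to rank r < min(p, K), reducing the number of free regression parameters from pK to r(p + K - r).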
14.
We present examples of the use of regression trees for censored responses via two real-world datasets, one a rheumatoid arthritis survival study and the other a hip replacement study, and draw comparisons with the results of Cox proportional hazards modelling. The two methods pursue different goals. The motivation for the tree techniques is the desire to extract meaningful prognostic groups, while the proportional hazards model enables assessment of the impact of risk factors. The methods are thus complementary. For the arthritis study the two techniques corroborate one another, although the flavour of the conclusions derived differs. For the hip replacement study, however, the regression tree approach reveals structure that would not emerge from a routine proportional hazards analysis. We also discuss data-analytic issues such as the handling of missing values and influence in the presence of non-uniform censoring.
15.
We provide a simple and practical, yet flexible, penalized estimation method for the Cox proportional hazards model with current status data. We approximate the baseline cumulative hazard function by monotone B-splines and use a hybrid approach based on the Fisher-scoring algorithm and isotonic regression to compute the penalized estimates. We show that the penalized estimator of the nonparametric component achieves the optimal rate of convergence under some smoothness conditions and that the estimators of the regression parameters are asymptotically normal and efficient. Moreover, a simple variance estimation method is considered for inference on the regression parameters. We perform two extensive Monte Carlo studies to evaluate the finite-sample performance of the penalized approach and compare it with three competing R packages: C1.coxph, intcox, and ICsurv. A goodness-of-fit test and model diagnostics are also discussed. The methodology is illustrated with two real applications.
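For context, with current status data each subject is examined once at a monitoring time \(c_i\) and only the indicator \(\delta_i = 1\{T_i \le c_i\}\) is observed; under the Cox model the likelihood being penalized takes the form (a sketch with assumed notation)
\[
L(\beta,\Lambda)=\prod_{i=1}^{n}
\left[1-\exp\!\left\{-\Lambda(c_i)\,e^{x_i'\beta}\right\}\right]^{\delta_i}
\left[\exp\!\left\{-\Lambda(c_i)\,e^{x_i'\beta}\right\}\right]^{1-\delta_i},
\]
with the cumulative baseline hazard \(\Lambda\) represented by a monotone B-spline expansion.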
16.
We present a local influence analysis of assigned model quantities in the context of a dose-response analysis of cancer mortality in relation to estimated absorbed dose of dioxin. The risk estimation is performed using dioxin dose as a time-dependent explanatory variable in a proportional hazards model. The dioxin dose is computed using a toxicokinetic model, which depends on factors such as assigned constants and estimated parameters. We present a local influence analysis to assess the effects on the final results of minor perturbations of the toxicokinetic model factors. In the present context, there is no evidence of local influence on the risk estimates. It is, however, possible to identify which factors are more influential.
17.
Kenneth R. Hess, Statistics in Medicine, 1994, 13(10): 1045-1062
Proportional hazards (or Cox) regression is a popular method for modelling the effects of prognostic factors on survival. Using cubic spline functions to model time-by-covariate interactions in Cox regression allows investigation of the shape of a possible covariate-time dependence without having to specify a particular functional form. Cubic spline functions allow one to graph such time-by-covariate interactions, to test formally for the proportional hazards assumption, and also to test for non-linearity of the time-by-covariate interaction. The functions can be fitted with existing software using relatively few parameters; the regression coefficients are estimated using standard maximum likelihood methods.
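A sketch of the time-varying effect this describes (assumed notation), with the covariate's log hazard ratio expanded in a cubic spline basis \(B_1(t),\dots,B_m(t)\):
\[
h(t \mid x) = h_0(t)\,\exp\{\beta(t)\,x\},\qquad
\beta(t) = \gamma_0 + \sum_{j=1}^{m}\gamma_j B_j(t),
\]
so that proportional hazards corresponds to \(\gamma_1 = \dots = \gamma_m = 0\), which can be tested with a standard likelihood-based test.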
18.
In this paper, we propose a novel Gaussian quadrature estimation method for various frailty proportional hazards models. We approximate the unspecified baseline hazard by a piecewise constant one, resulting in a parametric model that can be fitted conveniently by Gaussian quadrature tools in standard software such as SAS Proc NLMIXED. We first apply our method to simple frailty models for correlated survival data (e.g. recurrent or clustered failure times), then to joint frailty models for correlated failure times with informative dropout or a dependent terminal event such as death. Simulation studies show that our method compares favorably with the well-received penalized partial likelihood method and the Monte Carlo EM (MCEM) method, for both normal and gamma frailty models. We apply our method to three real data examples: (1) the time to blindness of both eyes in a diabetic retinopathy study, (2) the joint analysis of recurrent opportunistic diseases in the presence of death for HIV-infected patients, and (3) the joint modeling of local and distant tumor recurrences and patient survival in a soft tissue sarcoma study. The proposed method greatly simplifies the implementation of (joint) frailty models and makes them much more accessible to general statistical practitioners.
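For a normal frailty \(b_i \sim N(0,\sigma^2)\), the cluster-level marginal likelihood being approximated is of the form (a sketch with assumed notation, using Gauss-Hermite nodes \(x_k\) and weights \(w_k\)):
\[
L_i(\theta)=\int L_i(\theta \mid b)\,\phi(b;0,\sigma^2)\,db
\;\approx\; \frac{1}{\sqrt{\pi}}\sum_{k=1}^{Q} w_k\, L_i\!\left(\theta \mid \sqrt{2}\,\sigma\, x_k\right),
\]
where the conditional likelihood \(L_i(\theta \mid b)\) is fully parametric once the baseline hazard is taken to be piecewise constant.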
19.
S Greenland, Statistics in Medicine, 1989, 8(7): 825-829
In twin studies (and other matched-pair studies) of the effect of a K-level risk factor on disease risk, one must estimate the proportion of pairs in each of K² possible pair categories, of which K(K-1) categories represent discordant pairs. In particular, for a binary factor, one must estimate proportions within two discordant-pair categories and the variances of functions of these estimates. This paper shows how to do so when misclassification is present and stable estimates of the classification rates are available. Unlike methods that estimate only the discordance ratio, one can use the methods presented here to improve estimates of epidemiologic effects.
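A minimal sketch of the kind of matrix correction this suggests (an illustration under assumed notation, not necessarily the paper's exact estimator): if M[j, k] is the probability that a pair truly in category k is classified into category j, the observed category proportions satisfy p_obs = M p_true, and the true proportions can be recovered by inverting M.

```python
# Hypothetical sketch: recover true pair-category proportions from observed
# ones, given a (stably estimated) classification-probability matrix M.
import numpy as np

def corrected_pair_proportions(p_obs, M):
    # Solves M @ p_true = p_obs for p_true.
    return np.linalg.solve(M, p_obs)
```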
20.
Magaret AS, Statistics in Medicine, 2008, 27(26): 5456-5470
Standard proportional hazards methods are inappropriate for mismeasured outcomes. Previous work has shown that outcome mismeasurement can bias estimation of hazard ratios for covariates. We previously developed an adjusted proportional hazards method that can produce accurate hazard ratio estimates when outcome measurement is either non-sensitive or non-specific. That method requires that the mismeasurement rates (the sensitivity and specificity of the diagnostic test) are known. Here, we develop an approach to handle unknown mismeasurement rates. We consider the case where the true failure status is known for a subset of subjects (the validation set) until the time of observed failure or censoring. Five methods of handling these mismeasured outcomes are described and compared. The first method uses only subjects on whom complete data are available (the validation subset), whereas the second uses only the mismeasured outcomes (the naive method). The three other methods include available data from both validated and non-validated subjects. Through simulation, we show that inclusion of the non-validated subjects can improve efficiency relative to use of the complete-case data only, and that inclusion of some true outcomes (the validation subset) can reduce bias relative to use of mismeasured outcomes only. We also compare the performance of the proposed validation methods using an example data set.