Similar Articles
20 similar articles retrieved.
1.
We discuss some of the fundamental concepts underlying the development of frailty and random effects models in survival. One of these fundamental concepts was the idea of a frailty model in which each subject has his or her own disposition to failure, the so-called frailty, additional to any effects we wish to quantify via regression. Although the concept of individual frailty can be of value when thinking about how data arise or when interpreting parameter estimates in the context of a fitted model, we argue that the concept is of limited practical value. Individual random effects (frailties), whenever detected, can be made to disappear by elementary model transformation. In consequence, unless we are to take some model form as unassailable, beyond challenge and carved in stone, and if we are to understand the term 'frailty' as referring to individual random effects, then frailty models have no value. Random effects models, on the other hand, in which groups of individuals share some common effect, can be used to advantage. Even in this case, however, if we are prepared to sacrifice some efficiency, we can avoid complex modelling by using the considerable power already provided by the stratified proportional hazards model. Stratified models and random effects models can both be seen as particular cases of partially proportional hazards models, a view that gives further insight. The added structure of a random effects model, viewed as a stratified proportional hazards model with some added distributional constraints, will, for group sizes of five or more, provide no more than modest efficiency gains, even when the additional assumptions are exactly true. On the other hand, for moderate to large numbers of very small groups, of sizes two or three, the study of twins being a well-known example, the efficiency gains of the random effects model can be far from negligible. For such applications, the case for using random effects models rather than the stratified model is strong. This is especially so in view of the good robustness properties of random effects models. Nonetheless, the simpler analysis, based upon the stratified model, remains valid, albeit making less efficient use of resources.
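
To make the stratified alternative discussed above concrete, here is a minimal Python sketch (our own illustration, not the authors' code, and assuming the lifelines package is available): survival times for many small groups are simulated under a shared gamma frailty, and a Cox model stratified on the group identifier is then fitted so that each group's frailty is absorbed into its own baseline hazard. All column names and parameter values are hypothetical.

```python
# Minimal sketch: many small groups with a shared gamma frailty, analysed with a
# stratified Cox model (the frailty is absorbed into stratum-specific baseline hazards).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(42)
n_groups, group_size = 200, 3                                # twins/triplets style clustering
frailty = rng.gamma(shape=2.0, scale=0.5, size=n_groups)     # shared frailty per group

rows = []
for g in range(n_groups):
    for _ in range(group_size):
        x = rng.binomial(1, 0.5)                  # hypothetical binary covariate
        rate = frailty[g] * np.exp(0.7 * x)       # group frailty multiplies the hazard
        t = rng.exponential(1.0 / rate)           # latent failure time
        c = rng.exponential(2.0)                  # independent censoring time
        rows.append({"group": g, "x": x, "time": min(t, c), "event": int(t <= c)})
df = pd.DataFrame(rows)

# Stratified proportional hazards fit: each group gets its own baseline hazard,
# so no distributional assumption about the frailty is needed.
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event", strata=["group"])
cph.print_summary()
```

With groups this small the stratified fit is noticeably less efficient than a correctly specified random effects model would be, which is exactly the trade-off the abstract describes.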

2.
A wholly parametric non-proportional hazards survival model is introduced. The model retains Cox's constant of proportionality as the leading term in the relative risk but permits additional flexibility by modelling the relative risk as a function of time. Covariate effects are modelled on the log-odds scale rather than on the logarithmic scale used in the proportional hazards model, a choice more in keeping with the spirit of the multiple logistic function. Some basic properties of the model are described. A special feature of the model is that, when the proportional hazards model applies, Cox's regression coefficients are easily recovered and the computation of other time-dependent quantities of interest is routine. A semi-Markov version of the model is derived to analyse recurrent sequential state processes, and this is applied to a study of valvotomies conducted in the Regional Medical Cardiology Centre in Belfast, Northern Ireland. The results obtained are compared with those from the classical proportional hazards analysis.

3.
In this paper, we develop a Bayesian approach to estimate a Cox proportional hazards model that allows a threshold in the regression coefficient, when some fraction of subjects are not susceptible to the event of interest. A data augmentation scheme with latent binary cure indicators is adopted to simplify the Markov chain Monte Carlo implementation. Given the binary cure indicators, the Cox cure model reduces to a standard Cox model and a logistic regression model. Furthermore, the threshold detection problem reverts to a threshold problem in a regular Cox model. The baseline cumulative hazard for the Cox model is formulated non-parametrically using counting processes with a gamma process prior. Simulation studies demonstrate that the method provides accurate point and interval estimates. Application to a data set of oropharynx cancer patients suggests a significant threshold in age at diagnosis such that the effect of gender on disease-specific survival changes after the threshold.
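
One schematic way to write the structure described above, in our own notation (the paper's exact parameterisation may differ), combines a logistic model for the cure probability with a Cox model in which the gender effect changes at an unknown age threshold:

```latex
% Schematic cure-plus-threshold structure (our notation, not necessarily the authors'):
% pi(x) is the cure probability, S_u the survival function of susceptible patients,
% z the gender indicator, a the age at diagnosis and tau the unknown threshold.
\begin{align*}
  S_{\mathrm{pop}}(t \mid \mathbf{x}, z, a)
     &= \pi(\mathbf{x}) + \{1 - \pi(\mathbf{x})\}\, S_u(t \mid z, a),
     \qquad \operatorname{logit} \pi(\mathbf{x}) = \boldsymbol{\gamma}^{\top}\mathbf{x}, \\
  h_u(t \mid z, a)
     &= h_0(t)\exp\bigl\{\beta_1 z + \beta_2\, z\, \mathbf{1}(a > \tau)\bigr\},
\end{align*}
% with a gamma process prior on the baseline cumulative hazard H_0(t).
```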

4.
Proportional hazards models for average data for groups
In ecological studies it is sometimes tempting to apply statistical models derived from longitudinal studies of individuals to cross-sectional data which may be available only in aggregate form for groups of individuals. This paper examines the assumptions and approximations that are made when average data for groups are used with predictive equations derived for individuals, in particular for the proportional hazards model. It is shown that this method underestimates age-specific hazard functions, but that if ratios of hazard functions are used to compare groups then the approach is valid provided certain plausible conditions hold. A numerical example concerning trends in heart disease mortality in Australia is given. Group data are available on risk factor levels and mortality. A proportional hazards model derived from the Framingham study is used to estimate the effects on mortality that may be attributed to risk factor changes and the effects attributable to other factors such as improved medical treatment. The interpretation of the results is discussed.
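
A toy numerical check of the approximation at issue (our own example with made-up numbers, not the Framingham or Australian data): plugging a group's average risk-factor level into an individual-level relative-risk equation understates the group-average risk, but the ratio between two groups is far less affected when the within-group spreads are similar.

```python
# Toy check: group-mean plug-in versus the true group-average relative risk.
import numpy as np

rng = np.random.default_rng(0)
beta = 0.05                                   # hypothetical log-hazard ratio per unit of risk factor
x_a = rng.normal(140, 20, 100_000)            # e.g. systolic blood pressure, group A
x_b = rng.normal(130, 20, 100_000)            # group B: lower mean, same spread

true_a, true_b = np.exp(beta * x_a).mean(), np.exp(beta * x_b).mean()
approx_a, approx_b = np.exp(beta * x_a.mean()), np.exp(beta * x_b.mean())

print(f"group A: true {true_a:.1f} vs plug-in {approx_a:.1f}")   # plug-in is too small
print(f"group B: true {true_b:.1f} vs plug-in {approx_b:.1f}")
print(f"ratio A/B: true {true_a / true_b:.3f} vs plug-in {approx_a / approx_b:.3f}")
```

In this Gaussian toy case the two ratios agree up to Monte Carlo error even though each plug-in level is biased downward, which is the kind of "plausible conditions" situation the abstract refers to.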

5.
6.
The use of random-effects models for the analysis of longitudinal data with missing responses has been discussed by several authors. In this paper, we extend the non-linear random-effects model for a single response to the case of multiple responses, allowing for arbitrary patterns of observed and missing data. Parameters for this model are estimated via the EM algorithm and by the first-order approximation available in SAS Proc NLMIXED. The set of equations for this estimation procedure is derived, and these are appropriately modified to deal with missing data. The methodology is illustrated with an example using data from a study involving 161 pregnant women presenting to a private obstetrics clinic in Santiago, Chile.

7.
Lu Mao, Statistics in Medicine, 2019, 38(19): 3628-3641
Rodent survival-sacrifice experiments are routinely conducted to assess the tumor-inducing potential of a certain exposure or drug. Because most tumors under study are impalpable, animals are examined at death for evidence of tumor formation. In some studies, the cause of death is ascertained by a pathologist to account for possible correlation between tumor development and death. Existing methods for survival-sacrifice data with cause-of-death information have been restricted to multi-group testing or one-sample estimation of the tumor onset distribution and thus do not provide a natural way to quantify a treatment effect or dose-response relationship. In this paper, we propose semiparametric regression methods under the popular proportional hazards model for both tumor onset and tumor-caused death. For inference, we develop a maximum pseudo-likelihood estimation procedure using a modified iterative convex minorant algorithm, which is guaranteed to converge to the unique maximizer of the objective function. Simulation studies under different tumor rates show that the new methods provide valid inference on the covariate-outcome relationship and outperform alternative approaches. A real study investigating the effects of benzidine dihydrochloride on liver tumors in mice is analyzed as an illustration.

8.
The Cox proportional hazards regression model is a popular tool to analyze the relationship between a censored lifetime variable and other relevant factors. The semiparametric Cox model is widely used to study different types of data arising from applied disciplines such as medical science, biology, and reliability studies. A fully parametric version of the Cox regression model, if properly specified, can yield more efficient parameter estimates and hence better insight. However, the existing maximum likelihood approach to inference under the fully parametric proportional hazards model is highly nonrobust against data contamination (often manifested through outliers), which restricts its practical usage. In this paper, we develop a robust estimation procedure for the parametric proportional hazards model based on the minimum density power divergence approach. The proposed minimum density power divergence estimator is seen to produce highly robust estimates under data contamination with only a slight loss in efficiency under pure data. Further, it is always seen to generate more precise inference than the likelihood-based estimates under the semiparametric Cox model or its existing robust versions. We also justify this robustness theoretically through an influence function analysis. The practical applicability and usefulness of the proposal are illustrated through simulations and real data examples.
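
For readers unfamiliar with the approach, the generic minimum density power divergence estimator of Basu et al. minimises the criterion below, written in our notation for uncensored i.i.d. data; the paper adapts this idea to the parametric proportional hazards model with censoring, so its objective function differs in detail.

```latex
% Generic density power divergence objective with tuning parameter alpha > 0;
% alpha -> 0 recovers maximum likelihood, while larger alpha buys robustness.
\begin{align*}
  H_n(\theta)
    = \int f_{\theta}^{\,1+\alpha}(y)\, dy
      \;-\; \Bigl(1 + \tfrac{1}{\alpha}\Bigr)\frac{1}{n}\sum_{i=1}^{n} f_{\theta}^{\,\alpha}(Y_i),
  \qquad
  \hat{\theta}_{\alpha} = \arg\min_{\theta} H_n(\theta).
\end{align*}
```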

9.
We consider a model for mortality rates that includes both the long- and short-term effects of switching from an initial to a second state, for example, when patients receive an initial treatment and then switch to a second treatment. We include transient effects associated with the switch in the model through the use of time-dependent covariates. One can choose the form of the time-dependent covariate to correspond with a variety of possible transition patterns. We use an exponential decay model to compare the survival experience of transplant versus dialysis treatment of end-stage renal disease (ESRD) patients from the Michigan Kidney Registry (MKR). This model involves a hazard function with an initial effect on mortality at the time of transplant, expected to be higher, followed by a smooth exponential decay to a long-term effect, expected to be lower than the risk for those remaining on dialysis. Cox and Oakes used this model to analyse the Stanford Heart Transplant data. The model implicitly suggests there is a time at which the hazard curves (and survival curves) for the treatment groups cross. These crossing times are useful in advising patients who have the option of receiving a transplant. We describe methods for obtaining estimates of the crossing times and their associated variances, and then apply them in analysing the MKR data.
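
A small worked illustration (our own sketch with hypothetical parameter values, not estimates from the Michigan Kidney Registry): if the log hazard ratio of transplant versus dialysis at time u after transplant is modelled as theta_long + (theta_init - theta_long) * exp(-rho * u), the hazard curves cross where this expression equals zero, which gives a closed-form crossing time.

```python
# Hypothetical exponential-decay treatment effect and its crossing time (illustration only).
import numpy as np

theta_init = 0.8    # elevated risk immediately after transplant (log HR > 0)
theta_long = -0.5   # long-term benefit of transplant (log HR < 0)
rho = 2.0           # decay rate per year

def log_hr(u):
    """Log hazard ratio of transplant vs dialysis, u years after transplant."""
    return theta_long + (theta_init - theta_long) * np.exp(-rho * u)

# Crossing time: solve theta_long + (theta_init - theta_long) * exp(-rho * u) = 0.
u_cross = np.log((theta_init - theta_long) / (-theta_long)) / rho
print(f"hazards cross about {u_cross:.2f} years after transplant")
print(f"log HR at the crossing time: {log_hr(u_cross):.3f}")   # ~0 by construction
```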

10.
Beath KJ, Statistics in Medicine, 2007, 26(12): 2547-2564
Models for infant growth have usually been based on parametric forms, commonly an exponential or similar model, which have been shown to fit poorly, especially during the first year of life. An alternative approach is to use a non-parametric model based on a shape invariant model (SIM), where a single function is transformed by shifting and scaling to fit each subject. In the model, a regression spline is used as the common function, with log transformation of the data and a simplification of the SIM obtained from the relationship with the exponential model. All subjects are fitted as a nonlinear mixed effects model, allowing the variation in the parameters between subjects to be determined. Methods for including covariates in growth models based on the SIM are developed. Parameters for time-independent covariates are included by varying either the shape, the size parameter or the growth parameter, while time-dependent covariates are included by transforming the time axis to either increase or decrease the growth rate depending on the covariate, similar to methods used for accelerated failure-time models. The model is used to fit weight data for 602 infants, measured from 0 to 2 years as part of the Childhood Asthma Prevention Study (CAPS) trial, and to determine the effect of breastfeeding on infant weight.

11.
Xiang L, Ma X, Yau KK, Statistics in Medicine, 2011, 30(9): 995-1006
The mixture cure model is an effective tool for the analysis of survival data with a cure fraction. This approach integrates a logistic regression model for the proportion of cured subjects and a survival model (either the Cox proportional hazards or the accelerated failure time model) for the uncured subjects. Methods based on the mixture cure model have been extensively investigated in the literature for data with exact failure/censoring times. In this paper, we propose a mixture cure modeling procedure for analyzing clustered and interval-censored survival time data by incorporating random effects in both the logistic regression and the proportional hazards regression components. Under the generalized linear mixed model framework, we develop REML estimation for the parameters, as well as an iterative algorithm for estimation of the survival function for interval-censored data. The estimation procedure is implemented via an EM algorithm. A simulation study is conducted to evaluate the performance of the proposed method in various practical situations. To demonstrate its usefulness, we apply the proposed method to analyze interval-censored relapse time data from a smoking cessation study whose subjects were recruited from 51 zip code regions in the southeastern corner of Minnesota.

12.
We present a multilevel frailty model for handling serial dependence and simultaneous heterogeneity in survival data with a multilevel structure attributed to clustering of subjects and the presence of multiple failure outcomes. One commonly observes such data, for example, in multi-institutional, randomized placebo-controlled trials in which patients suffer repeated episodes (e.g., recurrent migraines) of the disease outcome being measured. The model extends the proportional hazards model by incorporating a random covariate and an unobservable random institution effect to account, respectively, for treatment-by-institution interaction and institutional variation in the baseline risk. Moreover, a random effect term with a correlation structure driven by a first-order autoregressive process is attached to the model to facilitate estimation of between-patient heterogeneity and serial dependence. By means of the generalized linear mixed model methodology, the random effects distribution is assumed normal, and the residual maximum likelihood and maximum likelihood methods are extended for estimation of the model parameters. Simulation studies are carried out to evaluate the performance of the residual maximum likelihood and maximum likelihood estimators and to assess the impact of misspecifying the random effects distribution on the proposed inference. We demonstrate the practical feasibility of the modeling methodology by analyzing real data from a double-blind randomized multi-institutional clinical trial, designed to examine the effect of rhDNase on the occurrence of respiratory exacerbations among patients with cystic fibrosis.
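
One schematic reading of the model structure, in our own notation (the paper's parameterisation may differ): for episode j of patient i in institution k, the hazard combines fixed effects, a random institution effect on the baseline risk, a random treatment-by-institution coefficient, and a patient-level term whose correlation across episodes decays as a first-order autoregressive process.

```latex
% Schematic multilevel frailty structure (our notation): episode j of patient i
% in institution k, with treatment indicator z_{kij}.
\begin{align*}
  \lambda_{kij}(t)
    &= \lambda_0(t)\exp\bigl\{\mathbf{x}_{kij}^{\top}\boldsymbol{\beta}
       + u_k + v_k z_{kij} + w_{ij}\bigr\}, \\
  u_k &\sim N(0, \sigma_u^2), \qquad
  v_k \sim N(0, \sigma_v^2), \qquad
  \operatorname{corr}(w_{ij}, w_{ij'}) = \rho^{\,|j - j'|},
\end{align*}
% where u_k captures institutional variation in baseline risk, v_k the
% treatment-by-institution interaction, and the AR(1) term w_{ij} the
% between-patient heterogeneity and serial dependence across episodes.
```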

13.
Assessing regional differences in the survival of cancer patients is important but difficult when separate regions are small or sparsely populated. In this paper, we apply a mixture cure fraction model with random effects to cause-specific survival data of female breast cancer patients collected by the population-based Finnish Cancer Registry. Two sets of random effects were used to capture the regional variation in the cure fraction and in the survival of the non-cured patients, respectively. This hierarchical model was implemented in a Bayesian framework using a Metropolis-within-Gibbs algorithm. To avoid poor mixing of the Markov chain when the variance of either set of random effects was close to zero, posterior simulations were based on a parameter-expanded model with tailor-made proposal distributions in the Metropolis steps. The random effects allowed the fitting of the cure fraction model to the sparse regional data and the estimation of the regional variation in 10-year cause-specific breast cancer survival with a parsimonious number of parameters. Before 1986, the capital of Finland clearly stood out from the rest, but since then all 21 hospital districts have achieved approximately the same level of survival.

14.
The generalised linear mixed model (GLMM) is widely used for modelling environmental data. However, such data are prone to influential observations, which can distort the estimated exposure-response curve, particularly in regions of high exposure. Deletion diagnostics for iterative estimation schemes commonly derive the deleted estimates from a single iteration of the full system, holding certain pivotal quantities, such as the information matrix, constant. In this paper, we present an approximate formula for the deleted estimates and Cook's distance for the GLMM which does not assume that the estimates of the variance parameters are unaffected by deletion. The procedure allows the user to calculate standardised DFBETAs for mean as well as variance parameters. In certain cases, such as when the GLMM is used as a device for smoothing, such residuals for the variance parameters are interesting in their own right. In general, the procedure leads to deleted estimates of the mean parameters that are corrected for the effect of deletion on the variance components, as estimation of the two sets of parameters is interdependent. The probabilistic behaviour of these residuals is investigated and a simulation-based procedure suggested for their standardisation. The method is used to identify influential individuals in an occupational cohort exposed to silica. The results show that failure to conduct post-model-fitting diagnostics for variance components can lead to erroneous conclusions about the fitted curve and unstable confidence intervals.
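
To show what the proposed approximation avoids, here is a brute-force Python sketch (ours, not the paper's formula) that refits a toy random-intercept linear mixed model with each observation deleted in turn and records how both a fixed-effect coefficient and the random-effect variance move. All variable names and parameter values are made up, and statsmodels is assumed to be available.

```python
# Brute-force deleted estimates for a toy random-intercept model (illustration only).
import warnings
import numpy as np
import pandas as pd
import statsmodels.api as sm

warnings.simplefilter("ignore")                     # silence convergence chatter in the refits

rng = np.random.default_rng(1)
n_groups, n_per = 20, 5
g = np.repeat(np.arange(n_groups), n_per)
x = rng.normal(size=g.size)
u = rng.normal(scale=0.8, size=n_groups)            # true random intercepts
y = 1.0 + 0.5 * x + u[g] + rng.normal(size=g.size)
df = pd.DataFrame({"y": y, "x": x, "g": g})

def fit(data):
    return sm.MixedLM.from_formula("y ~ x", data=data, groups=data["g"]).fit()

full = fit(df)
beta_full = full.fe_params["x"]                     # fixed-effect slope
var_full = float(full.cov_re.iloc[0, 0])            # random-intercept variance

dfbeta, dfvar = [], []
for i in df.index:                                  # delete one observation at a time and refit
    reduced = fit(df.drop(index=i))
    dfbeta.append(beta_full - reduced.fe_params["x"])
    dfvar.append(var_full - float(reduced.cov_re.iloc[0, 0]))

print("largest |change| in slope after deletion:", np.max(np.abs(dfbeta)))
print("largest |change| in random-intercept variance:", np.max(np.abs(dfvar)))
```

The paper's contribution is an approximate one-step formula that delivers this kind of information, for mean and variance parameters jointly, without refitting the model once per observation.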

15.
Liu L, Ma JZ, Johnson BA, Statistics in Medicine, 2008, 27(18): 3528-3539
Two-part random effects models (J. Am. Statist. Assoc. 2001; 96:730-745; Statist. Methods Med. Res. 2002; 11:341-355) have been applied in longitudinal studies of semi-continuous outcomes, characterized by a large proportion of zero values and continuous non-zero (positive) values. Examples include repeated measures of daily drinking records, monthly medical costs, and annual car insurance claims. However, the question of how to apply such models in multi-level data settings remains. In this paper, we propose a novel multi-level two-part random effects model. Distinct random effects are used to characterize heterogeneity at different levels. Maximum likelihood estimation and inference are carried out through a Gaussian quadrature technique, which can be implemented conveniently in the freely available software aML. The model is applied to the analysis of repeated measures of daily drinking records in a randomized controlled trial of topiramate for alcohol-dependence treatment.
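
In outline, and in our own notation (details such as the distribution assumed for the positive part may differ from the paper), a two-part random effects model for a semi-continuous response couples a logistic model for whether a non-zero value occurs with a model for its magnitude, each part carrying its own random effect:

```latex
% Schematic two-part random effects model for a semi-continuous response Y_{it}
% (our notation; the positive part is shown as lognormal for concreteness):
\begin{align*}
  \Pr(Y_{it} > 0 \mid b_{1i})
     &= \operatorname{logit}^{-1}\!\bigl(\mathbf{x}_{it}^{\top}\boldsymbol{\alpha} + b_{1i}\bigr), \\
  \log Y_{it} \mid Y_{it} > 0,\, b_{2i}
     &\sim N\bigl(\mathbf{x}_{it}^{\top}\boldsymbol{\beta} + b_{2i},\ \sigma^2\bigr), \\
  (b_{1i}, b_{2i})^{\top}
     &\sim N(\mathbf{0}, \boldsymbol{\Sigma}),
\end{align*}
% with further random effects added at higher levels of clustering in the
% multi-level extension.
```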

16.
The nested case-control design is frequently used to evaluate exposures and health outcomes within the confines of a cohort study. When incidence-density sampling is used to identify controls, the resulting data can be analyzed using conditional logistic regression (equivalent to stratified Cox proportional hazards regression). In these studies, exposure lagging is often used to account for disease latency. In light of recent criticism of incidence-density sampling, we used simulated occupational cohorts to evaluate age-based incidence-density sampling for lagged exposures in the presence of birth-cohort effects and associations among time-related variables. Effect estimates were unbiased when adjusted for birth cohort; however, unadjusted effect estimates were biased, particularly when age at hire and year of hire were correlated. When the analysis included an adjustment for birth cohort, the inclusion of lagged-out cases and controls (assigned a lagged exposure of zero) did not introduce bias.

17.
This paper presents methods of analysis for ordinal repeated measures. We use a generalization of the continuation ratio model with a random effect. We handle frailty by using a normal distribution and also a non-parametric distribution. The methodology is readily implemented in existing software and is flexible enough to handle large data sets with irregular measurement times as well as different numbers of repeated measures for each individual. The models are used to investigate how a group of explanatory variables influences the overall condition of patients treated for breast cancer.

18.
Deb P, Health Economics, 2001, 10(5): 371-383
I have developed a random effects probit model in which the distribution of the random intercept is approximated by a discrete density. Monte Carlo results show that only three to four points of support are required for the discrete density to closely mimic normal and chi-squared densities and provide unbiased estimates of the structural parameters and the variance of the random intercept. The empirical application shows that both observed family characteristics and unobserved family-level heterogeneity are important determinants of the demand for preventive care.
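
In outline (our notation, not necessarily the author's), the likelihood contribution of family i under a random-intercept probit with a K-point discrete heterogeneity distribution replaces the usual Gaussian integral over the random intercept with a finite sum over mass points a_k with probabilities pi_k:

```latex
% Random-intercept probit likelihood with a K-point discrete heterogeneity
% distribution (our notation): mass points a_k with probabilities pi_k.
\begin{align*}
  L_i(\boldsymbol{\beta}, \{a_k, \pi_k\})
    = \sum_{k=1}^{K} \pi_k \prod_{t=1}^{T_i}
      \Phi\!\bigl[(2 y_{it} - 1)\bigl(\mathbf{x}_{it}^{\top}\boldsymbol{\beta} + a_k\bigr)\bigr],
  \qquad \sum_{k=1}^{K} \pi_k = 1,
\end{align*}
% where the Monte Carlo results reported above suggest K = 3 or 4 already mimics
% normal or chi-squared heterogeneity closely.
```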

19.
Birth-weight- and gestational-age-specific perinatal mortality curves intersect when compared by race and maternal smoking. The authors propose a new measure to replace fetal and infant mortality and an analytic strategy to assess the effects of risk factors on this outcome. They used 1998 data for US Blacks and Whites. Age-specific post-last menstrual period (LMP) mortality rate was defined as the proportion of deaths (stillbirth, perinatal death, or infant death) at a given age post-LMP. The authors used extended Cox regression with time-varying covariates and hazard ratios to model the effects of race and smoking on post-LMP mortality. Perinatal mortality rates (conventional calculation) for Blacks and Whites showed the expected crossover. However, analyses of post-LMP mortality showed no crossover. For the Black-White comparison, a hazard ratio of 1.72 (95% confidence interval: 1.67, 1.77) was obtained. The hazard was higher for smokers than for nonsmokers, but the hazard ratio increased from 1.09 (95% confidence interval: 0.98, 1.22) at 22 weeks to 1.82 (95% confidence interval: 1.72, 1.92) at 40 weeks. The hazard ratio associated with birth was also time dependent: higher than 1 for preterm gestation and lower than 1 for term gestation. The increasing adverse effect of smoking with gestational age suggests an accumulating effect of smoking on mortality. Modeling post-LMP mortality eliminates the crossover paradox for race and maternal smoking in a single statistical model.

20.
Cure models for clustered survival data have the potential for broad applicability. In this paper, we consider the mixture cure model with random effects and propose several estimation methods, based on Gaussian quadrature, rejection sampling, and importance sampling, to obtain the maximum likelihood estimates of the model for clustered survival data with a cure fraction. The methods are flexible enough to accommodate various correlation structures. A simulation study demonstrates that the maximum likelihood estimates of the parameters tend to have smaller biases and variances than estimates obtained from existing methods. We apply the model to a study of tonsil cancer patients clustered by treatment centers to investigate the effect of covariates on the cure rate and on the failure time distribution of the uncured patients. The maximum likelihood estimates of the parameters demonstrate strong correlation among the failure times of the uncured patients and weak correlation among cure statuses within the same center.
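
As a small illustration of the Gaussian quadrature ingredient (our own sketch, not the authors' code): Gauss-Hermite nodes and weights can integrate a function of a normal random effect out of a likelihood; here the rule is checked against the known value E[exp(b)] = exp(sigma^2/2) for b ~ N(0, sigma^2).

```python
# Gauss-Hermite quadrature for integrating out a normal random effect (illustration only).
import numpy as np

def normal_expectation(func, sigma, n_nodes=20):
    """Approximate E[func(b)] for b ~ N(0, sigma^2) with Gauss-Hermite quadrature."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)   # physicists' rule, weight exp(-x^2)
    b = np.sqrt(2.0) * sigma * nodes                            # change of variables to N(0, sigma^2)
    return np.sum(weights * func(b)) / np.sqrt(np.pi)

sigma = 0.7
approx = normal_expectation(np.exp, sigma)
exact = np.exp(sigma ** 2 / 2)                                  # lognormal mean, known in closed form
print(f"quadrature: {approx:.6f}   exact: {exact:.6f}")
```

In a cure model with random effects, the same rule is applied to the conditional likelihood of each cluster rather than to exp(b), with the node count trading accuracy against computing time.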

