Similar Articles
20 similar articles found.
1.
Jones RH. Statistics in Medicine 2011;30(25):3050-3056
When a number of models are fit to the same data set, one method of choosing the 'best' model is to select the model for which Akaike's information criterion (AIC) is lowest. AIC applies when maximum likelihood is used to estimate the unknown parameters in the model. The value of -2 log likelihood for each model fit is penalized by adding twice the number of estimated parameters. The number of estimated parameters includes both the linear parameters and the parameters in the covariance structure. Another criterion for model selection is the Bayesian information criterion (BIC). BIC penalizes -2 log likelihood by adding the number of estimated parameters multiplied by the log of the sample size. For large sample sizes, BIC penalizes -2 log likelihood much more than AIC, making it harder to enter new parameters into the model. An assumption in BIC is that the observations are independent; in mixed models, they are not. This paper develops a method for calculating the 'effective sample size' for mixed models based on Fisher's information. The effective sample size replaces the sample size in BIC and can vary from the number of subjects to the number of observations. A number of error models are considered based on a general mixed model, including unstructured and compound symmetry.
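The two penalties described in the abstract can be written down directly. A minimal Python sketch (the "effective sample size" of the paper is whatever Fisher-information-based value replaces n in the BIC call; the numbers below are illustrative only):

```python
import math

def aic(minus2_loglik, n_params):
    # AIC: -2 log-likelihood penalized by twice the number of parameters
    return minus2_loglik + 2 * n_params

def bic(minus2_loglik, n_params, n):
    # BIC: penalty grows with log(sample size); for a mixed model,
    # n may be replaced by an "effective sample size" between the
    # number of subjects and the number of observations
    return minus2_loglik + n_params * math.log(n)

m2ll, k = 1000.0, 5
print(aic(m2ll, k))        # 1010.0
print(bic(m2ll, k, 100))   # 1000 + 5*log(100) ≈ 1023.03
```

For n > e² ≈ 7.4 the BIC penalty per parameter exceeds AIC's, which is why BIC enters new parameters more reluctantly as the sample grows.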

2.
In the estimation of Cox regression models, maximum partial likelihood estimates may be infinite in a monotone likelihood setting, where the partial likelihood converges to a finite value while the parameter estimates diverge to infinity. To address monotone likelihood, previous studies have applied Firth's bias correction method to Cox regression models. However, model selection criteria for Firth's penalized partial likelihood approach have not yet been studied, although a heuristic AIC-type information criterion is available in a statistical package. Application of the heuristic criterion to data from a prospective observational study of patients with multiple brain metastases showed that it selects models with many parameters and ignores the adequacy of the model. Moreover, we show that the heuristic criterion tends to select models with many regression parameters as the sample size increases. In the present study, we therefore propose an alternative AIC-type information criterion based on the risk function. A BIC-type criterion was also evaluated. Simulation results confirm that the proposed criteria perform well in a monotone likelihood setting. The proposed AIC-type criterion was applied to the prospective observational study data. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
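Firth's correction adds half the log-determinant of the Fisher information to the log-likelihood, which keeps the penalized estimate finite under monotone likelihood. A one-parameter logistic sketch (not the Cox setting of the paper, and not the authors' implementation — just the simplest model where perfect separation makes the ordinary MLE diverge; toy data, grid search):

```python
import math

def loglik(beta, xs, ys):
    # ordinary logistic log-likelihood for a single slope, no intercept
    ll = 0.0
    for x, y in zip(xs, ys):
        p = 1.0 / (1.0 + math.exp(-beta * x))
        ll += math.log(p) if y == 1 else math.log(1.0 - p)
    return ll

def firth_loglik(beta, xs, ys):
    # Firth penalty: + 0.5 * log |I(beta)|, with I(beta) = sum x^2 p (1-p)
    info = sum(x * x * p * (1 - p)
               for x, p in ((x, 1.0 / (1.0 + math.exp(-beta * x))) for x in xs))
    return loglik(beta, xs, ys) + 0.5 * math.log(info)

# perfectly separated data: the ordinary likelihood is monotone in beta
xs, ys = [-1, -1, 1, 1], [0, 0, 1, 1]
grid = [i / 100.0 for i in range(-1000, 1001)]
b_mle = max(grid, key=lambda b: loglik(b, xs, ys))
b_firth = max(grid, key=lambda b: firth_loglik(b, xs, ys))
print(b_mle)    # hits the grid boundary (10.0): the MLE diverges
print(b_firth)  # finite interior maximum, ≈ log 9 ≈ 2.20
```

For these data the penalized score is 4.5·σ(-β) - 0.5·σ(β), so the penalized maximum sits at β = log 9, while the unpenalized likelihood increases without bound.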

3.
The frailty model is a random effect survival model which allows for unobserved heterogeneity or for statistical dependence between observed survival data. The nested frailty model accounts for the hierarchical clustering of the data by including two nested random effects. Nested frailty models are particularly appropriate when data are clustered at several hierarchical levels, naturally or by design. In such cases it is important to estimate the parameters of interest as accurately as possible by taking into account the hierarchical structure of the data. We present maximum penalized likelihood estimation (MPnLE) to estimate non-parametrically a continuous hazard function in a nested gamma-frailty model with right-censored and left-truncated data. The estimators for the regression coefficients and the variance components of the random effects are obtained simultaneously. A simulation study demonstrates that this semi-parametric approach yields satisfactory results in this complex setting. To illustrate the MPnLE method and the nested frailty model, we present two applications. One models the effect of particulate air pollution on mortality in different areas with two levels of geographical regrouping. The other is based on recurrent infection times of patients from different hospitals. We illustrate that using a shared frailty model instead of a nested frailty model with two levels of regrouping leads to inaccurate estimates, with an overestimation of the variance of the random effects. We show that even when the frailty effects are fairly small in magnitude, they are important since they alter the results in a systematic pattern.

4.
The frailty model, an extension of the proportional hazards model, is often used to model clustered survival data. However, some extension of the ordinary frailty model is required when there exist competing risks within a cluster. Under competing risks, the underlying processes affecting the events of interest and competing events could be different but correlated. In this paper, the hierarchical likelihood method is proposed to infer the cause-specific hazard frailty model for clustered competing risks data. The hierarchical likelihood incorporates fixed effects as well as random effects into an extended likelihood function, so that the method does not require intensive numerical methods to find the marginal distribution. Simulation studies are performed to assess the behavior of the estimators for the regression coefficients and the correlation structure among the bivariate frailty distribution for competing events. The proposed method is illustrated with a breast cancer dataset. Copyright © 2015 John Wiley & Sons, Ltd.

5.
The proportional subdistribution hazards model (i.e. the Fine-Gray model) has been widely used for analyzing univariate competing risks data. Recently, this model has been extended to clustered competing risks data via frailty. To the best of our knowledge, however, there has been no literature on variable selection methods for such competing risks frailty models. In this paper, we propose a simple but unified procedure via a penalized h-likelihood (HL) for variable selection of fixed effects in a general class of subdistribution hazard frailty models, in which random effects may be shared or correlated. We consider three penalty functions, the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD) and HL, in our variable selection procedure. We show that the proposed method can be easily implemented using a slight modification to existing h-likelihood estimation approaches. Numerical studies demonstrate that the proposed procedure using the HL penalty performs well, providing a higher probability of choosing the true model than the LASSO and SCAD methods without losing prediction accuracy. The usefulness of the new method is illustrated using two actual datasets from multi-center clinical trials. Copyright © 2014 John Wiley & Sons, Ltd.
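For reference, the SCAD penalty named above (Fan and Li's form, with the conventional a = 3.7) is a quadratic spline: L1-like near zero, flat beyond a·λ so that large coefficients are not shrunk. A direct transcription:

```python
def scad(theta, lam, a=3.7):
    # SCAD penalty of Fan & Li: linear near 0, quadratic in the middle,
    # constant beyond a*lam, so large coefficients escape shrinkage
    t = abs(theta)
    if t <= lam:
        return lam * t
    if t <= a * lam:
        return (2 * a * lam * t - t * t - lam * lam) / (2 * (a - 1))
    return lam * lam * (a + 1) / 2

lam = 1.0
print(scad(0.0, lam))    # 0.0
print(scad(1.0, lam))    # lam^2 = 1.0
print(scad(10.0, lam))   # flat region: lam^2*(a+1)/2 = 2.35
```

Unlike LASSO's penalty λ|θ|, which grows without bound, SCAD's caps at λ²(a+1)/2, which is what reduces the bias on large true coefficients.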

6.
The process by which patients experience a series of recurrent events, such as hospitalizations, may be subject to death. In cohort studies, one strategy for analyzing such data is to fit a joint frailty model for the intensities of the recurrent event and death, which estimates covariate effects on the two event types while accounting for their dependence. When certain covariates are difficult to obtain, however, researchers may only have the resources to subsample patients on whom to collect complete data: one way is using the nested case-control (NCC) design, in which risk set sampling is performed based on a single outcome. We develop a general framework for the design of NCC studies in the presence of recurrent and terminal events and propose estimation and inference for a joint frailty model for recurrence and death using data arising from such studies. We propose a maximum weighted penalized likelihood approach using flexible spline models for the baseline intensity functions. Two standard error estimators are proposed: a sandwich estimator and a perturbation resampling procedure. We investigate operating characteristics of our estimators as well as design considerations via a simulation study and illustrate our methods using two studies: one on recurrent cardiac hospitalizations in patients with heart failure and the other on local recurrence and metastasis in patients with breast cancer.

7.
In clinical data analysis, the restricted maximum likelihood (REML) method has been commonly used for estimating variance components in the linear mixed effects model. Under REML estimation, however, it is not straightforward to compare several linear mixed effects models with different mean and covariance structures. In particular, few approaches have been proposed for comparing linear mixed effects models with different mean structures under REML estimation. We propose an approach using the extended information criterion (EIC), a bootstrap-based extension of AIC, for comparing linear mixed effects models with different mean and covariance structures under REML estimation. We present simulation studies and applications to two actual clinical data sets.
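EIC replaces AIC's fixed penalty 2k with a bootstrap estimate of the optimism of the maximized log-likelihood. A generic sketch for an i.i.d. normal model (illustration of the bootstrap bias term only; the paper's setting is REML for mixed models, which this does not reproduce):

```python
import math, random

def norm_loglik(data, mu, sigma):
    # Gaussian log-likelihood
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in data)

def mle(data):
    n = len(data)
    mu = sum(data) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / n)
    return mu, sigma

def eic(data, n_boot=200, seed=1):
    # bias term b* = E*[ logL(theta*_hat; y*) - logL(theta*_hat; y) ],
    # estimated by nonparametric bootstrap; EIC = -2 logL(theta_hat; y) + 2 b*
    rng = random.Random(seed)
    mu, sigma = mle(data)
    bias = 0.0
    for _ in range(n_boot):
        boot = [rng.choice(data) for _ in data]
        bm, bs = mle(boot)
        bias += norm_loglik(boot, bm, bs) - norm_loglik(data, bm, bs)
    bias /= n_boot
    return -2 * norm_loglik(data, mu, sigma) + 2 * bias, bias

rng = random.Random(0)
data = [rng.gauss(0, 1) for _ in range(200)]
crit, bias = eic(data)
print(bias)  # for this regular 2-parameter model, typically near k = 2
```

In regular problems the bootstrap bias estimate approaches the parameter count, recovering AIC; the point of EIC is that it remains usable where the 2k penalty is not justified.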

8.
Frailty models are widely used to model clustered survival data arising in multicenter clinical studies. In the literature, most existing frailty models are based on proportional hazards, additive hazards, or accelerated failure time models. In this paper, we propose a frailty model framework based on mean residual life regression to accommodate intracluster correlation and, at the same time, provide an easily understood and straightforward interpretation of the effects of prognostic factors on the expectation of the remaining lifetime. To overcome estimation challenges, a novel hierarchical quasi-likelihood approach is developed by making use of the idea of hierarchical likelihood in the construction of the quasi-likelihood function, leading to hierarchical estimating equations. Simulation results show favorable performance of the method regardless of the frailty distribution. The utility of the proposed methodology is illustrated by its application to data from a multi-institutional study of breast cancer.

9.
Our aim is to develop a rich and coherent framework for modeling correlated time-to-event data, including (1) survival regression models with different links and (2) flexible modeling of time-dependent and nonlinear effects with rich postestimation. We extend the class of generalized survival models, which expresses a transformed survival function in terms of a linear predictor, by incorporating a shared frailty or random effects for correlated survival data. The proposed approach can include parametric or penalized smooth functions for time, time-dependent effects, nonlinear effects, and their interactions. The maximum (penalized) marginal likelihood method is used to estimate the regression coefficients and the variance of the frailty or random effects. The optimal smoothing parameters for the penalized marginal likelihood estimation can be selected automatically by a likelihood-based cross-validation criterion. For models with normal random effects, Gauss-Hermite quadrature can be used to obtain the cluster-level marginal likelihoods. The Akaike information criterion can be used to compare models and select the link function. We have implemented these methods in the R package rstpm2. Simulations for both small and large clusters show that this approach performs well. Through two applications, we demonstrate (1) a comparison of proportional hazards and proportional odds models with random effects for clustered survival data and (2) the estimation of time-varying effects on the log-time scale, age-varying effects for a specific treatment, and two-dimensional splines for time and age.
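With normal random effects, the cluster-level marginal likelihood is an expectation E[g(Z)] over Z ~ N(0,1), which Gauss-Hermite quadrature approximates by a weighted sum. A hand-rolled three-point rule (the n = 3 nodes and weights have a closed form; real implementations such as rstpm2 use more nodes):

```python
import math

# 3-point Gauss-Hermite rule for the integral of e^{-x^2} g(x):
# nodes 0, +-sqrt(3/2); weights 2*sqrt(pi)/3 and sqrt(pi)/6
NODES = [0.0, math.sqrt(1.5), -math.sqrt(1.5)]
WEIGHTS = [2 * math.sqrt(math.pi) / 3,
           math.sqrt(math.pi) / 6,
           math.sqrt(math.pi) / 6]

def expect_std_normal(g):
    # E[g(Z)] for Z ~ N(0,1): substitute z = sqrt(2)*x, divide by sqrt(pi)
    return sum(w * g(math.sqrt(2) * x)
               for x, w in zip(NODES, WEIGHTS)) / math.sqrt(math.pi)

print(expect_std_normal(lambda z: 1.0))    # 1.0
print(expect_std_normal(lambda z: z * z))  # 1.0 (exact up to degree 5)
```

An n-point rule integrates polynomials up to degree 2n - 1 exactly, so even three points reproduce the first few normal moments without error.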

10.
The analysis of multivariate time-to-event (TTE) data can become complicated due to the presence of clustering, leading to dependence between multiple event times. For a long time, (conditional) frailty models and (marginal) copula models have been used to analyze clustered TTE data. In this article, we propose a general frailty model employing a copula function between the frailty terms to construct flexible (bivariate) frailty distributions, with an application to current status data. The model has the advantage of imposing a less restrictive correlation structure among latent frailty variables than traditional frailty models. Specifically, our model uses a copula function to join the marginal distributions of the frailty vector. We consider different copula functions and rely on marginal gamma distributions for their mathematical convenience. In a simulation study, our novel model outperformed the commonly used additive correlated gamma frailty model, especially in the case of a negative association between the frailties. Finally, the new methodology is illustrated on real-life data applications involving bivariate serological survey data.
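The comparator mentioned above, the additive correlated gamma frailty model, builds dependence by sharing a common gamma component, which is exactly why it cannot produce the negative associations the copula construction allows. A simulation sketch of the additive construction (hypothetical shape and scale values):

```python
import random

def correlated_gamma_pair(k0, k1, k2, scale, rng):
    # additive construction: u1 = y0 + y1, u2 = y0 + y2 share y0, so
    # corr(u1, u2) = k0 / sqrt((k0 + k1) * (k0 + k2)) is always >= 0
    y0 = rng.gammavariate(k0, scale)
    return y0 + rng.gammavariate(k1, scale), y0 + rng.gammavariate(k2, scale)

rng = random.Random(42)
pairs = [correlated_gamma_pair(1.0, 1.0, 1.0, 0.5, rng) for _ in range(20000)]
m1 = sum(u for u, _ in pairs) / len(pairs)
m2 = sum(v for _, v in pairs) / len(pairs)
cov = sum((u - m1) * (v - m2) for u, v in pairs) / len(pairs)
print(cov)  # positive by construction; here cov = k0*scale^2 = 0.25
```

Joining gamma marginals with a copula instead leaves the marginals intact while freeing the sign of the dependence, which is the advantage the abstract claims.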

11.
The Akaike information criterion, AIC, is one of the most frequently used methods to select one or a few good regression models from a set of candidate models. When the sample is incomplete, naive use of this criterion on the so-called complete cases can lead to the selection of poor or inappropriate models. A similar problem occurs when a sample based on a design with unequal selection probabilities is treated as a simple random sample. In this paper, we consider a modification of AIC based on reweighing the sample in analogy with the weighted Horvitz-Thompson estimates. It is shown that this weighted AIC criterion provides better model choices for both incomplete and design-based samples. The use of the weighted AIC criterion is illustrated on data from the Belgian Health Interview Survey, which motivated this research. Simulations show its performance in a variety of settings.
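The modification inflates each unit's log-likelihood contribution by a Horvitz-Thompson-style weight, w_i = 1 / (selection probability). A toy sketch for a Bernoulli model (data and selection probabilities are hypothetical; the paper's exact weighting scheme may differ in detail):

```python
import math

def weighted_aic(ys, sel_probs, p_hat, n_params=1):
    # weighted -2 log-likelihood: each unit's contribution is inflated
    # by w_i = 1/pi_i, in analogy with Horvitz-Thompson estimation
    m2ll = 0.0
    for y, pi in zip(ys, sel_probs):
        ll = math.log(p_hat) if y == 1 else math.log(1 - p_hat)
        m2ll += -2 * ll / pi
    return m2ll + 2 * n_params

ys = [1, 1, 0, 1, 0]
pi = [0.5, 0.5, 1.0, 1.0, 1.0]   # first two units were undersampled
w = [1 / p for p in pi]
p_hat = sum(wi * y for wi, y in zip(w, ys)) / sum(w)  # weighted MLE
print(p_hat)                  # 5/7 ≈ 0.714
print(weighted_aic(ys, pi, p_hat))
```

With all π_i = 1 this reduces to the ordinary AIC, so the weighting only changes model choices where the design or the missingness is informative.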

12.
In this article, we present a frailty model using the generalized gamma distribution as the frailty distribution. It is a power generalization of the popular gamma frailty model and includes other frailty models, such as the lognormal and Weibull frailty models, as special cases. The flexibility of this frailty distribution makes it possible to detect a complex frailty distribution structure that may otherwise be missed. Due to the intractable integrals in the likelihood function and its derivatives, we propose to approximate the integrals either by Monte Carlo simulation or by a quadrature method and then determine the maximum likelihood estimates of the parameters in the model. We explore the properties of the proposed frailty model and the computation method through a simulation study. The study shows that the proposed model can potentially reduce errors in the estimation and that it provides a viable alternative for correlated data. The merits of the proposed model are demonstrated in analysing the effects of sublingual nitroglycerin and oral isosorbide dinitrate on angina pectoris of coronary heart disease patients, based on the data set in Danahy et al. (sustained hemodynamic and antianginal effect of high dose oral isosorbide dinitrate. Circulation 1977; 55:381-387).
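The Monte Carlo route for an intractable frailty integral is: draw frailties, average the conditional quantity. For the plain gamma frailty case (mean 1, variance theta) with an exponential conditional hazard there is a closed form to check against, S(t) = (1 + theta·lambda·t)^(-1/theta); a sketch (the paper's generalized gamma frailty has no such closed form, which is the point of the method):

```python
import math, random

def marginal_survival_mc(t, lam, theta, n=20000, seed=0):
    # S(t) = E_u[exp(-u * lam * t)], u ~ Gamma(shape=1/theta, scale=theta)
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = rng.gammavariate(1.0 / theta, theta)
        total += math.exp(-u * lam * t)
    return total / n

theta, lam, t = 0.5, 1.0, 1.0
exact = (1 + theta * lam * t) ** (-1 / theta)  # Laplace transform of the gamma
print(exact)                                   # 1.5^-2 ≈ 0.444
print(marginal_survival_mc(t, lam, theta))     # Monte Carlo, close to exact
```

The same averaging works unchanged for any frailty distribution one can sample from, at the cost of Monte Carlo noise; quadrature trades that noise for a deterministic approximation error.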

13.
Logistic regression analysis may well be used to develop a prognostic model for a dichotomous outcome. Especially when limited data are available, it is difficult to determine an appropriate selection of covariables for inclusion in such models. Also, predictions may be improved by applying some sort of shrinkage in the estimation of regression coefficients. In this study we compare the performance of several selection and shrinkage methods in small data sets of patients with acute myocardial infarction, where we aim to predict 30-day mortality. Selection methods included backward stepwise selection with significance levels alpha of 0.01, 0.05, 0.157 (the AIC criterion) or 0.50, and the use of qualitative external information on the sign of regression coefficients in the model. Estimation methods included standard maximum likelihood, the use of a linear shrinkage factor, penalized maximum likelihood, the lasso, and quantitative external information on univariable regression coefficients. We found that stepwise selection with a low alpha (for example, 0.05) led to relatively poor model performance when evaluated on independent data. Substantially better performance was obtained with full models with a limited number of important predictors, where regression coefficients were reduced with any of the shrinkage methods. Incorporation of external information for selection and estimation improved the stability and quality of the prognostic models. We therefore recommend shrinkage methods in full models including prespecified predictors, with incorporation of external information, when prognostic models are constructed in small data sets.
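The "linear shrinkage factor" compared in such studies is typically the heuristic attributed to van Houwelingen and Le Cessie: multiply every regression coefficient by s = (model chi-square - df) / model chi-square. A minimal sketch (all numbers hypothetical):

```python
def linear_shrinkage(coefs, model_chi2, df):
    # heuristic uniform shrinkage factor s = (chi2 - df) / chi2,
    # applied to all coefficients; s < 1 pulls predictions toward the mean
    s = (model_chi2 - df) / model_chi2
    return s, [s * b for b in coefs]

# hypothetical fit: likelihood-ratio chi-square 30 on 5 predictors
s, shrunk = linear_shrinkage([0.8, -0.4, 1.2, 0.1, -0.6], 30.0, 5)
print(s)       # 25/30 ≈ 0.833
print(shrunk)
```

The factor shrinks more when the model chi-square barely exceeds its degrees of freedom, i.e. exactly when overfitting is most likely; penalized maximum likelihood and the lasso achieve a similar effect per-coefficient rather than uniformly.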

14.
Many epidemiological studies use a nested case-control (NCC) design to reduce cost while maintaining study power. Because NCC sampling is conditional on the primary outcome, routine application of logistic regression to analyze a secondary outcome will generally be biased. Recently, several methods have been proposed to obtain unbiased estimates of risk for a secondary outcome from NCC data. All current methods share two requirements: the times of onset of the secondary outcome must be known for cohort members not selected into the NCC study, and the hazards of the two outcomes must be conditionally independent given the available covariates. The latter assumption is not plausible when the individual frailty of study subjects is not captured by the measured covariates. We provide a maximum-likelihood method that explicitly models the individual frailties and also avoids the need for access to the full cohort data. We derive the likelihood contribution by respecting the original sampling procedure with respect to the primary outcome. We use proportional hazards models for the individual hazards, and Clayton's copula is used to model additional dependence between the primary and secondary outcomes beyond that explained by the measured risk factors. We show that the proposed method is more efficient than weighted likelihood and is unbiased in the presence of shared frailty for the primary and secondary outcomes. We illustrate the method with an application to a study of risk factors for diabetes in a Swedish cohort. Copyright © 2014 John Wiley & Sons, Ltd.
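Clayton's copula, used here for the residual dependence between the two outcomes, has the closed form C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta) for theta > 0. A direct transcription with two sanity checks:

```python
def clayton(u, v, theta):
    # Clayton copula; theta > 0 gives positive dependence,
    # theta -> 0 recovers independence C(u, v) = u * v
    return (u ** -theta + v ** -theta - 1) ** (-1 / theta)

print(clayton(0.3, 1.0, 2.0))   # boundary condition: C(u, 1) = u -> 0.3
print(clayton(0.3, 0.7, 2.0))   # lies between max(u+v-1, 0) and min(u, v)
print(clayton(0.3, 0.7, 1e-8))  # near independence: ≈ 0.3 * 0.7 = 0.21
```

Because the copula separates the dependence from the marginals, the proportional hazards models for each outcome can be specified first and the single parameter theta then captures everything the measured covariates miss.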

15.
The penalized likelihood methodology has consistently been demonstrated to be an attractive shrinkage and selection method. It not only automatically and consistently selects the important variables but also produces estimators that are as efficient as the oracle estimator. In this paper, we apply this approach to a general likelihood function for data organized in clusters, corresponding to a class of frailty models that includes the Cox model and the gamma frailty model as special cases. Our aim is to provide practitioners in the medical or reliability field with options other than the gamma frailty model, which has been extensively studied because of its mathematical convenience. We illustrate the penalized likelihood methodology for frailty models through simulations and real data. Copyright © 2012 John Wiley & Sons, Ltd.

16.
Despite the use of standardized protocols in multi-centre, randomized clinical trials, outcome may vary between centres. Such heterogeneity may alter the interpretation and reporting of the treatment effect. We propose a general frailty modelling approach for investigating, inter alia, putative treatment-by-centre interactions in time-to-event data in multi-centre clinical trials. A correlated random effects model is used to model the baseline risk and the treatment effect across centres; it may be based on shared, individual or correlated random effects. For inference we develop the hierarchical-likelihood (or h-likelihood) approach, which facilitates computation of prediction intervals for the random effects with proper precision. We illustrate our methods using disease-free time-to-event data on bladder cancer patients participating in a European Organization for Research and Treatment of Cancer trial, and a simulation study. We also demonstrate model selection using h-likelihood criteria.

17.
Yin G. Statistics in Medicine 2008;27(28):5929-5940
We propose a class of transformation cure frailty models to accommodate a survival fraction in multivariate failure time data. Established through a general power transformation, this family of cure frailty models includes the proportional hazards and the proportional odds modeling structures as two special cases. Within the Bayesian paradigm, we obtain the joint posterior distribution and the corresponding full conditional distributions of the model parameters for the implementation of Gibbs sampling. Model selection is based on the conditional predictive ordinate statistic and deviance information criterion. As an illustration, we apply the proposed method to a real data set from dentistry.

18.
Liu L, Huang X. Statistics in Medicine 2008;27(14):2665-2683
In this paper, we propose a novel Gaussian quadrature estimation method for various frailty proportional hazards models. We approximate the unspecified baseline hazard by a piecewise constant one, resulting in a parametric model that can be fitted conveniently by Gaussian quadrature tools in standard software such as SAS Proc NLMIXED. We first apply our method to simple frailty models for correlated survival data (e.g. recurrent or clustered failure times), then to joint frailty models for correlated failure times with informative dropout or a dependent terminal event such as death. Simulation studies show that our method compares favorably with the well-received penalized partial likelihood method and the Monte Carlo EM (MCEM) method, for both normal and gamma frailty models. We apply our method to three real data examples: (1) the time to blindness of both eyes in a diabetic retinopathy study, (2) the joint analysis of recurrent opportunistic diseases in the presence of death for HIV-infected patients, and (3) the joint modeling of local and distant tumor recurrences and patient survival in a soft tissue sarcoma study. The proposed method greatly simplifies the implementation of (joint) frailty models and makes them much more accessible to general statistical practitioners.
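The key device above is replacing the unspecified baseline hazard by a piecewise constant one, so the cumulative hazard becomes a sum of rectangle areas and the model is fully parametric. A minimal sketch (illustrative cut points and rates, not from the paper):

```python
import math

def cum_hazard(t, cuts, hazards):
    # piecewise-constant baseline hazard: hazards[j] applies on
    # [cuts[j], cuts[j+1]); cuts starts at 0, last piece extends to infinity
    total = 0.0
    for j, h in enumerate(hazards):
        lo = cuts[j]
        hi = cuts[j + 1] if j + 1 < len(cuts) else float("inf")
        total += h * max(0.0, min(t, hi) - lo)
    return total

cuts = [0.0, 1.0, 2.0]        # two knots -> three hazard pieces
hazards = [0.5, 1.0, 0.25]
print(cum_hazard(1.5, cuts, hazards))             # 0.5*1 + 1.0*0.5 = 1.0
print(math.exp(-cum_hazard(1.5, cuts, hazards)))  # survival = e^-1 ≈ 0.368
```

Once the baseline is parametric like this, the frailty only enters through an integral over a finite-dimensional likelihood, which is exactly what general-purpose quadrature routines (e.g. in NLMIXED) can handle.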

19.
We address the problem of meta-analysis of pairs of survival curves under heterogeneity. The starting point for the meta-analysis is a set of studies, each comparing the same two treatments and containing information about multiple survival outcomes. Under heterogeneity, we model the number of events using an extension of the Poisson correlated gamma-frailty model with serial within-arm and positive between-arm correlations. The parameters of the models are estimated following a two-stage estimation procedure. In the first stage the underlying hazards and between-study variance are estimated using the marginals, while a second stage is used to estimate both within-arm and between-arm correlations. The methodology is illustrated with an observational study on breast cancer. Copyright © 2009 John Wiley & Sons, Ltd.

20.
We develop flexible multiparameter regression (MPR) survival models for interval-censored survival data arising in longitudinal prospective studies and longitudinal randomised controlled clinical trials. A multiparameter Weibull regression survival model, which is wholly parametric and has nonproportional hazards, is the main focus of the article. We describe the basic model, develop the interval-censored likelihood, and extend the model to include gamma frailty and a dispersion model. We evaluate the models by means of a simulation study and a detailed reanalysis of data from the Signal Tandmobiel study. The results demonstrate that the MPR model with frailty is computationally efficient and provides an excellent fit to the data.
