Similar literature
18 similar documents retrieved.
1.
The penalized likelihood methodology has consistently been demonstrated to be an attractive shrinkage and selection method. It not only selects the important variables automatically and consistently but also produces estimators that are as efficient as the oracle estimator. In this paper, we apply this approach to a general likelihood function for data organized in clusters, corresponding to a class of frailty models that includes the Cox model and the Gamma frailty model as special cases. Our aim was to provide practitioners in the medical or reliability field with options other than the Gamma frailty model, which has been studied extensively because of its mathematical convenience. We illustrate the penalized likelihood methodology for frailty models through simulations and real data. Copyright © 2012 John Wiley & Sons, Ltd.
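As a rough illustration (a generic sketch, not taken from the paper), variable selection by penalized likelihood typically maximizes the log-likelihood minus a penalty applied to each regression coefficient,

\ell_{pen}(\beta, \theta) = \ell(\beta, \theta) - n \sum_{j=1}^{p} p_{\lambda}(|\beta_j|),

where \ell is the (marginal) log-likelihood of the frailty model, p_{\lambda} is a shrinkage penalty such as SCAD, and \lambda is a tuning constant; coefficients whose penalized estimates are shrunk exactly to zero are dropped from the model.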

2.
Cure models have been developed to analyze failure time data with a cured fraction. For such data, standard survival models are usually not appropriate because they do not account for the possibility of cure. Mixture cure models assume that the studied population is a mixture of susceptible individuals, who may experience the event of interest, and non-susceptible individuals, who will never experience it. Important issues in mixture cure models are estimation of the baseline survival function for susceptibles and estimation of the variance of the regression parameters. The aim of this paper is to propose a penalized likelihood approach that allows flexible modeling of the hazard function for susceptible individuals using M-splines. This approach also permits direct computation of the variance of parameters using the inverse of the Hessian matrix. Properties and limitations of the proposed method are discussed, and an illustration from a cancer study is presented. Copyright © 2008 John Wiley & Sons, Ltd.
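For orientation, a standard mixture cure model (a generic sketch, not necessarily the exact parametrization used in the paper) writes the population survival function as

S_{pop}(t \mid x, z) = \pi(z)\, S_u(t \mid x) + 1 - \pi(z), \qquad \pi(z) = \frac{\exp(z^{T}\gamma)}{1 + \exp(z^{T}\gamma)},

where \pi(z) is the probability of being susceptible and S_u(t \mid x) is the survival function of susceptibles, whose hazard is the component modeled flexibly (here with M-splines).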

3.
In the estimation of Cox regression models, maximum partial likelihood estimates can be infinite in a monotone likelihood setting, where the partial likelihood converges to a finite value while the parameter estimates diverge to infinity. To address monotone likelihood, previous studies have applied Firth's bias correction method to Cox regression models. However, model selection criteria for Firth's penalized partial likelihood approach have not yet been studied, although a heuristic AIC-type information criterion is available in a statistical package. Application of the heuristic information criterion to data from a prospective observational study of patients with multiple brain metastases indicated that it selects models with many parameters and ignores the adequacy of the model. Moreover, we show that the heuristic information criterion tends to select models with many regression parameters as the sample size increases. Therefore, in the present study, we propose an alternative AIC-type information criterion based on the risk function. A BIC-type criterion is also evaluated. Simulation results confirm that the proposed criteria perform well in a monotone likelihood setting. The proposed AIC-type criterion is applied to the prospective observational study data. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd
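For context, Firth's correction replaces the partial log-likelihood \ell(\beta) by a penalized version (a generic sketch of the standard approach, not of this paper's proposed criteria):

\ell^{*}(\beta) = \ell(\beta) + \tfrac{1}{2} \log |I(\beta)|,

where I(\beta) is the observed information matrix of the partial likelihood; the penalty keeps the maximizer finite even under monotone likelihood. Information criteria then trade off a fit term against model complexity in the spirit of AIC = -2\hat{\ell} + 2k, and the choice of fit term is exactly what the paper examines.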

4.
This paper considers estimation of Cox proportional hazards models under informative right-censored data using maximum penalized likelihood, where the dependence between censoring and event times is modelled by a copula function and a roughness penalty is used to constrain the baseline hazard to be a smooth function. Since the baseline hazard is nonnegative, we propose a special algorithm in which each iteration updates the regression coefficients by the Newton algorithm and the baseline hazard by a multiplicative iterative algorithm. Asymptotic properties are developed for both the regression coefficient and baseline hazard estimates. A simulation study investigates the performance of our method and compares it with an existing maximum likelihood method. We apply the proposed method to a dataset of dementia patients.
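A minimal sketch of the copula device (assuming an Archimedean copula with a prespecified association parameter, which may differ in detail from the paper's setup): the joint survivor function of the event time T and the censoring time C given covariates x is written as

P(T > t, C > c \mid x) = K_{\alpha}\big( S_T(t \mid x),\, S_C(c \mid x) \big),

where K_{\alpha} is the copula, S_T follows the Cox model with a penalized smooth baseline hazard, and the likelihood contributions of observed and censored times are obtained from partial derivatives of this joint survivor function.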

5.
Noh M, Ha ID, Lee Y. Statistics in Medicine 2006, 25(8):1341–1354
In medical research, recurrent event times can be analysed using a frailty model in which the frailties for different individuals are independent and identically distributed. However, such a homogeneity assumption about the frailties can sometimes be suspect. For modelling heterogeneity in frailties, we describe dispersion frailty models arising from a new class of models, namely hierarchical generalized linear models. Using the kidney infection data, we illustrate how to detect and model heterogeneity among frailties. Stratification of frailty models is also investigated.
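For reference, the hierarchical likelihood underlying hierarchical generalized linear models (the generic definition following Lee and Nelder, rather than this paper's specific dispersion model) is

h = \log f(y \mid v) + \log f(v),

the joint log-density of the data given the log-frailties v plus that of v itself; fixed effects are estimated from h and dispersion (frailty variance) parameters from an adjusted profile of h, which is what allows the frailty variance itself to be modelled as a function of covariates when heterogeneity is suspected.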

6.
The proportional subdistribution hazards model (i.e. the Fine–Gray model) has been widely used for analyzing univariate competing risks data. Recently, this model has been extended to clustered competing risks data via frailty. To the best of our knowledge, however, there is no literature on variable selection methods for such competing risks frailty models. In this paper, we propose a simple but unified procedure via a penalized h-likelihood (HL) for variable selection of fixed effects in a general class of subdistribution hazard frailty models, in which random effects may be shared or correlated. We consider three penalty functions, the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD) and HL, in our variable selection procedure. We show that the proposed method can be easily implemented using a slight modification of existing h-likelihood estimation approaches. Numerical studies demonstrate that the proposed procedure with the HL penalty performs well, providing a higher probability of choosing the true model than the LASSO and SCAD methods without losing prediction accuracy. The usefulness of the new method is illustrated using two actual datasets from multi-center clinical trials. Copyright © 2014 John Wiley & Sons, Ltd.
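As a concrete example of two of the penalties considered (standard definitions, not specific to this paper), the LASSO penalty is p_{\lambda}(|\beta|) = \lambda |\beta|, while the SCAD penalty is defined through its derivative

p'_{\lambda}(\theta) = \lambda \left\{ I(\theta \le \lambda) + \frac{(a\lambda - \theta)_{+}}{(a-1)\lambda}\, I(\theta > \lambda) \right\}, \qquad \theta > 0, \; a = 3.7,

so that small coefficients are shrunk to zero while large coefficients are left nearly unpenalized; in the penalized h-likelihood approach the chosen penalty is subtracted from the h-likelihood before maximization.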

7.
Survival models incorporating random effects to account for unmeasured heterogeneity are increasingly used in biostatistical and applied research. Specifically, unmeasured covariates whose omission from the model would lead to biased, inefficient results are commonly modeled by including a subject-specific (or cluster-specific) frailty term that follows a given distribution (eg, gamma or lognormal). Despite this, in the context of parametric frailty models, little is known about the impact of misspecifying the baseline hazard, the frailty distribution, or both. Therefore, our aim is to quantify the impact of such misspecification in a wide variety of clinically plausible scenarios via Monte Carlo simulation, using open-source software readily available to applied researchers. We generate clustered survival data assuming various baseline hazard functions, including mixture distributions with turning points, and assess the impact of sample size, variance of the frailty, baseline hazard function, and frailty distribution. The models compared include standard parametric distributions and more flexible spline-based approaches; we also include semiparametric Cox models. The resulting bias can be clinically relevant. In conclusion, we highlight the importance of fitting models that are flexible enough and of assessing model fit. We illustrate our conclusions with two applications using data on diabetic retinopathy and bladder cancer. Our results show the importance of assessing model fit with respect to the baseline hazard function and the distribution of the frailty: misspecifying the former leads to biased relative and absolute risk estimates, whereas misspecifying the latter affects absolute risk estimates and measures of heterogeneity.
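The data-generating models studied can be pictured as standard shared frailty models (a generic form; the simulations vary the pieces shown here):

\lambda_{ij}(t \mid \alpha_i) = \alpha_i\, \lambda_0(t)\, \exp(x_{ij}^{T} \beta),

where subjects j in cluster i share a frailty \alpha_i with mean 1 and variance \theta (e.g. gamma or lognormal), \lambda_0 is the baseline hazard, and \theta measures heterogeneity; the misspecification examined concerns the assumed forms of \lambda_0 and of the distribution of \alpha_i.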

8.
Liu L, Huang X. Statistics in Medicine 2008, 27(14):2665–2683
In this paper, we propose a novel Gaussian quadrature estimation method for various frailty proportional hazards models. We approximate the unspecified baseline hazard by a piecewise constant one, resulting in a parametric model that can be fitted conveniently by Gaussian quadrature tools in standard software such as SAS Proc NLMIXED. We first apply our method to simple frailty models for correlated survival data (e.g. recurrent or clustered failure times), then to joint frailty models for correlated failure times with informative dropout or a dependent terminal event such as death. Simulation studies show that our method compares favorably with the well-received penalized partial likelihood method and the Monte Carlo EM (MCEM) method, for both normal and Gamma frailty models. We apply our method to three real data examples: (1) the time to blindness of both eyes in a diabetic retinopathy study, (2) the joint analysis of recurrent opportunistic diseases in the presence of death for HIV-infected patients, and (3) the joint modeling of local and distant tumor recurrences and patient survival in a soft tissue sarcoma study. The proposed method greatly simplifies the implementation of (joint) frailty models and makes them much more accessible to general statistical practitioners.
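A minimal sketch of the quadrature idea (assuming a normal frailty on the log scale; details such as the number of nodes are the analyst's choice): the cluster-level marginal likelihood integrates the conditional likelihood over the frailty, and Gauss–Hermite quadrature replaces the integral by a weighted sum,

\int_{-\infty}^{\infty} L_i(b)\, \phi(b; 0, \sigma^2)\, db \;\approx\; \frac{1}{\sqrt{\pi}} \sum_{k=1}^{K} w_k\, L_i(\sqrt{2}\,\sigma\, x_k),

where (x_k, w_k) are the Gauss–Hermite nodes and weights; with a piecewise-constant baseline hazard the conditional likelihood L_i(b) has closed form, which is what lets the model be fitted directly in general-purpose nonlinear mixed-model software.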

9.
Various frailty models have been developed and are now widely used for analysing multivariate survival data. It is therefore important to develop an information criterion for model selection. However, in frailty models there are several alternative ways of forming a criterion, and the particular criterion chosen may not be uniformly best. In this paper, we study the Akaike information criterion (AIC) for selecting a frailty structure from a set of (possibly) non-nested frailty models. We propose two new AIC criteria, based on a conditional likelihood and an extended restricted likelihood (ERL) given by Lee and Nelder (J. R. Statist. Soc. B 1996; 58:619-678). We compare their performance using well-known practical examples and demonstrate that the two criteria may yield rather different results. A simulation study shows that the AIC based on the ERL is recommended when attention is focussed on selecting the frailty structure rather than the fixed effects.
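For orientation (generic form only; the paper's point is precisely that the likelihood plugged in matters), each criterion takes the familiar shape

AIC = -2\,\hat{\ell} + 2k,

with \hat{\ell} either a conditional likelihood or the extended restricted likelihood and k the corresponding number of estimated parameters, so different choices of \hat{\ell} can rank the same set of frailty structures differently.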

10.
Repeated-event processes are ubiquitous across a great range of important health, medical, and public policy applications, but models for these processes have serious limitations. Alternative estimators often produce different inferences concerning treatment effects because of bias and inefficiency. We recommend a robust strategy for estimating the effects of medical treatments, social conditions, individual behaviours, and public policy programs in repeated events survival models under three common conditions: heterogeneity across individuals, dependence across the number of events, and both heterogeneity and event dependence. We compare several models for analysing recurrent event data that exhibit both heterogeneity and event dependence. The conditional frailty model best accounts for the various conditions of heterogeneity and event dependence by using a frailty term, stratification, and a gap-time formulation of the risk set. We examine the performance of recurrent event models that are commonly used in applied work using Monte Carlo simulations, and apply the findings to data on chronic granulomatous disease and cystic fibrosis.
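A sketch of the conditional frailty model described (generic notation; the exact specification may differ in detail): the hazard for individual i's k-th event is

\lambda_{ik}(t) = \omega_i\, \lambda_{0k}(t - t_{i,k-1})\, \exp(x_i^{T} \beta),

where the frailty \omega_i captures heterogeneity across individuals, stratifying the baseline hazard \lambda_{0k} by event number k captures event dependence, and measuring time from the previous event time t_{i,k-1} gives the gap-time formulation of the risk set.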

11.
The process by which patients experience a series of recurrent events, such as hospitalizations, may be subject to death. In cohort studies, one strategy for analyzing such data is to fit a joint frailty model for the intensities of the recurrent event and death, which estimates covariate effects on the two event types while accounting for their dependence. When certain covariates are difficult to obtain, however, researchers may only have the resources to subsample patients on whom to collect complete data; one way is the nested case–control (NCC) design, in which risk-set sampling is performed based on a single outcome. We develop a general framework for the design of NCC studies in the presence of recurrent and terminal events and propose estimation and inference for a joint frailty model for recurrence and death using data arising from such studies. We propose a maximum weighted penalized likelihood approach using flexible spline models for the baseline intensity functions. Two standard error estimators are proposed: a sandwich estimator and a perturbation resampling procedure. We investigate the operating characteristics of our estimators as well as design considerations via a simulation study and illustrate our methods using two studies: one on recurrent cardiac hospitalizations in patients with heart failure and the other on local recurrence and metastasis in patients with breast cancer.
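One common joint frailty formulation (shown here only as a sketch; the paper's spline baselines and NCC weighting add detail on top of it) specifies the recurrent-event and death intensities for subject i as

r_i(t) = \omega_i\, r_0(t)\, \exp(x_i^{T} \beta_R), \qquad \lambda_i(t) = \omega_i^{\alpha}\, \lambda_0(t)\, \exp(x_i^{T} \beta_D),

where the shared frailty \omega_i induces dependence between recurrences and death and \alpha controls its strength; under NCC subsampling, each sampled subject's contribution to the penalized log-likelihood is typically weighted by the inverse of its inclusion probability.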

12.
In a meta-analysis combining survival data from different clinical trials, an important issue is possible heterogeneity between trials. Such intertrial variation can be explained not only by heterogeneity of treatment effects across trials but also by heterogeneity of their baseline risk. In addition, one might examine the relationship between the magnitude of the treatment effect and the underlying risk of the patients in the different trials. Such a scenario can be accounted for by using additive random effects in the Cox model, with a random trial effect and a random treatment-by-trial interaction. We propose to use this kind of model with a general correlation structure for the random effects and to estimate the parameters and hazard function by a semi-parametric penalized marginal likelihood method (maximum penalized likelihood estimators). This approach gives smoothed estimates of the hazard function, which represents incidence in epidemiology. The idea for the approach in this paper comes from the study of heterogeneity in a large meta-analysis of randomized trials in patients with head and neck cancers (meta-analysis of chemotherapy in head and neck cancers) and the effect of adding chemotherapy to locoregional treatment. The simulation study and the application demonstrate that the proposed approach yields satisfactory results, and they illustrate the need to use a flexible variance-covariance structure for the random effects.
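The additive random-effects Cox model referred to can be sketched as follows (generic notation; the paper allows a general covariance between the two random effects):

\lambda_{ij}(t) = \lambda_0(t)\, \exp\big( \beta X_{ij} + u_i + v_i X_{ij} \big), \qquad (u_i, v_i)^{T} \sim N(0, \Sigma),

where X_{ij} is the treatment indicator for patient j in trial i, u_i is the random trial effect (baseline risk), v_i the random treatment-by-trial interaction, and the off-diagonal element of \Sigma captures the relationship between underlying risk and treatment effect.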

13.
In this paper, we model multivariate time-to-event data by a composite likelihood of pairwise frailty likelihoods and marginal hazards using natural cubic splines. Both right- and interval-censored data are considered. The suggested approach is applied to two types of family studies using the gamma and stable frailty distributions: the first study is on adoption data, where the association between survival in families of adopted children and their adoptive and biological parents is studied. The second study is a cross-sectional study of the occurrence of back and neck pain in twins, illustrating the methodology in the context of genetic epidemiology. Copyright © 2010 John Wiley & Sons, Ltd.
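As a sketch of the composite-likelihood construction (generic form, under the stated frailty assumptions), the full family likelihood is replaced by a product of bivariate frailty likelihoods over all pairs of family members,

\log CL(\theta) = \sum_{i} \sum_{j < k} \log L^{(i)}_{jk}(\theta),

where L^{(i)}_{jk} is the likelihood of the pair (j, k) in family i under a gamma or stable frailty model with marginal hazards modelled by natural cubic splines; each pairwise term only requires bivariate (possibly interval-censored) survival probabilities, which keeps the computation tractable for large families.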

14.
This paper investigates the surgical volume–outcome relationship for patients undergoing hip fracture surgery in Quebec between 1991 and 1993. Using a duration model with multiple destinations that accounts for observed and unobserved (by the researcher) patient characteristics, our initial estimates show that higher surgical volume is associated with a higher conditional probability of live discharge from the hospital. However, these results reflect differences between hospitals rather than differences within hospitals over time: when we also control for differences between hospitals that are fixed over time, hospitals performing more surgeries in period t + 1 than in period t experience no significant improvement in outcomes of the kind the 'practice makes perfect' hypothesis would predict. The volume–outcome relationship for hip fracture patients thus appears to reflect quality differences between high- and low-volume hospitals. © 1997 John Wiley & Sons, Ltd.
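A stylized version of the identification argument (illustrative only; the paper's duration model has multiple destinations and additional controls): the hazard of live discharge for patient i treated in hospital j in period t can be written as

h_{ijt}(s) = h_0(s)\, \exp\big( \beta\, Volume_{jt} + x_i^{T}\gamma + \alpha_j \big),

so that without the hospital effects \alpha_j, \beta mixes between-hospital quality differences with any within-hospital learning, whereas with \alpha_j included \beta is identified from changes in a hospital's own volume over time, the comparison that fails to support the 'practice makes perfect' hypothesis here.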

15.
In clinical data analysis, the restricted maximum likelihood (REML) method has been commonly used for estimating variance components in the linear mixed effects model. Under REML estimation, however, it is not straightforward to compare several linear mixed effects models with different mean and covariance structures. In particular, few approaches have been proposed for comparing linear mixed effects models with different mean structures under REML estimation. We propose an approach using the extended information criterion (EIC), a bootstrap-based extension of AIC, for comparing linear mixed effects models with different mean and covariance structures under REML estimation. We present simulation studies and applications to two actual clinical data sets.
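For reference (generic definitions; the paper's contribution lies in applying the bootstrap correction under REML): AIC penalizes the maximized log-likelihood by the number of parameters, whereas the extended information criterion replaces that fixed penalty with a bootstrap estimate of the optimism,

EIC = -2\,\ell(\hat\theta; y) + 2\,\hat{b}_B,

where \hat{b}_B is the bootstrap estimate of the bias of \ell(\hat\theta; y) as an estimator of the expected log-likelihood; because \hat{b}_B is obtained by resampling rather than by counting parameters asymptotically, a criterion of this form can be formed even when models with different mean structures are fitted by REML.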

16.
Most PM2.5-associated mortality studies are not conducted in rural areas, where mortality rates may differ because population characteristics, health care access, and PM2.5 composition differ. PM2.5-associated mortality was investigated in the elderly residing in rural and urban zip codes. Exposure (2000–2006) was estimated using different models, and Poisson regression was performed using 2006 mortality data. The PM2.5 models estimated comparable exposures, although subtle differences were observed in rate ratios (RR) within areas by health outcome. Cardiovascular disease (CVD), ischemic heart disease (IHD), and cardiopulmonary disease (CPD) mortality were significantly associated with rural, urban, and statewide chronic PM2.5 exposures. We observed larger RRs for CVD, CPD, and all-cause (AC) mortality, and similar RRs for IHD mortality, in rural areas compared with urban areas. PM2.5 was significantly associated with AC mortality in rural areas and statewide; in urban areas, however, only the most restrictive exposure model showed an association. Given these results, future mortality studies should consider adjusting for rural–urban differences.
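As a sketch of the rate-ratio computation (generic log-linear form; the actual covariate set follows the study), the area-level death counts Y are modelled as

\log E[Y] = \log(\text{population at risk}) + \beta\, PM_{2.5} + z^{T}\gamma,

so that the rate ratio reported for a given exposure contrast \Delta (for example an interquartile-range increase) is RR = \exp(\beta \Delta).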

17.
To evaluate the probabilities of a disease state, ideally all subjects in a study should be diagnosed by a definitive diagnostic or gold standard test. However, since definitive diagnostic tests are often invasive and expensive, it is generally unethical to apply them to subjects whose screening tests are negative. In this article, we consider latent class models for screening studies with two imperfect binary diagnostic tests and a definitive categorical disease status measured only for those with at least one positive screening test. Specifically, we discuss a conditional-independence model and three homogeneous conditional-dependence latent class models, and we assess the impact of misspecification of the dependence structure on the estimation of disease category probabilities using frequentist and Bayesian approaches. Interestingly, the three homogeneous-dependence models can provide identical goodness-of-fit but substantively different estimates for a given study. However, the parametric form of the assumed dependence structure is itself not 'testable' from the data, and thus the dependence-structure modeling considered here can only be viewed as a sensitivity analysis concerning a more complicated non-identifiable model potentially involving a heterogeneous dependence structure. Furthermore, we discuss Bayesian model averaging, together with its limitations, as an alternative way to partially address this particularly challenging problem. The methods are applied to two cancer screening studies, and simulations are conducted to evaluate the performance of these methods. In summary, further research is needed to reduce the impact of model misspecification on the estimation of disease prevalence in such settings. Copyright © 2010 John Wiley & Sons, Ltd.
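A minimal sketch of the conditional-independence latent class model (for two binary tests T_1, T_2 and disease category D; the dependence models add association terms to this):

P(T_1 = t_1, T_2 = t_2) = \sum_{d} P(D = d)\, P(T_1 = t_1 \mid D = d)\, P(T_2 = t_2 \mid D = d),

with D observed only when T_1 = 1 or T_2 = 1; the conditional-dependence variants relax the product over tests, and because several such variants can fit the observed margins equally well, the choice among them acts as a sensitivity analysis rather than a testable assumption.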

18.

Background

Several studies suggest that airborne particulate matter (PM) is associated with infant mortality; however, most focused on short-term exposure to larger particles.

Objectives

We evaluated associations between long-term exposure to particles of different sizes [total suspended particles (TSP), particulate matter with aerodynamic diameter ≤ 10 μm (PM10), 2.5–10 μm (PM10–2.5), and ≤ 2.5 μm (PM2.5)] and infant mortality in a cohort in Seoul, Korea, 2004–2007.

Methods

The study includes 359,459 births with 225 deaths. We applied extended Cox proportional hazards modeling with time-dependent covariates to three mortality categories: all causes, respiratory causes, and sudden infant death syndrome (SIDS). We calculated exposures from birth to death (or to the end of eligibility for the outcome at 1 year of age) and during pregnancy (whole gestation and each trimester), and treated each pollutant exposure as a time-dependent variable. We adjusted for sex, gestational length, season of birth, maternal age and educational level, and heat index. Each cause of death and exposure time frame was analyzed separately.
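A sketch of the extended Cox model used (generic form; Z collects the adjustment covariates listed above):

\lambda(t \mid X(t), Z) = \lambda_0(t)\, \exp\big( \beta X(t) + Z^{T}\gamma \big),

where X(t) is the infant's time-dependent exposure to the pollutant under study, and the hazard ratio per interquartile-range increase is \exp(\beta \times IQR), which is how the risks in the Results section are expressed.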

Results

We found a relationship between gestational exposures to PM and infant mortality from all causes or respiratory causes for normal-birth-weight infants. For total mortality (all causes), risks were 1.44 (95% confidence interval, 1.06–1.97), 1.65 (1.18–2.31), 1.53 (1.22–1.90), and 1.19 (0.83–1.70) per interquartile range increase in TSP, PM10, PM2.5, and PM10–2.5, respectively; for respiratory mortality, risks were 3.78 (1.18–12.13), 6.20 (1.50–25.66), 3.15 (1.26–7.85), and 2.86 (0.76–10.85). For SIDS, risks were 0.92 (0.33–2.58), 1.15 (0.38–3.48), 1.42 (0.71–2.87), and 0.57 (0.16–1.96), respectively.

Conclusions

Our findings provide supportive evidence of an association between long-term exposure to PM air pollution and infant mortality.

