Similar Articles
Found 20 similar articles (search time: 46 ms)
1.
We consider a general semiparametric hazards regression model that encompasses both the Cox proportional hazards model and the accelerated failure time model for survival analysis. To overcome the nonexistence of the maximum likelihood estimator, we derive a kernel-smoothed profile likelihood function and prove that the resulting estimates of the regression parameters are consistent and semiparametrically efficient. In addition, we develop penalized structure-selection techniques to determine which covariates enter the accelerated failure time component and which enter the proportional hazards component. The proposed method estimates the model structure consistently and the model parameters efficiently, and variance estimation is straightforward. The proposed estimator performs well in simulation studies and is applied to the analysis of a real data set. Copyright © 2013 John Wiley & Sons, Ltd.
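For orientation on the Cox component of the model above, the partial likelihood can be maximized directly by Newton-Raphson when a finite maximizer exists. A minimal sketch on hypothetical toy data (the data, the single binary covariate, and the numerical-derivative Newton scheme are all illustrative, not from the paper):

```python
import numpy as np

# Hypothetical toy data: right-censored times, event indicators, one binary covariate.
time  = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
event = np.array([1, 1, 0, 1, 1, 0])
x     = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])

def neg_log_partial_lik(beta):
    """Negative Cox log partial likelihood (Breslow form, no tied events)."""
    ll = 0.0
    for i in range(len(time)):
        if event[i] == 1:
            at_risk = time >= time[i]          # risk set at the i-th event time
            ll += beta * x[i] - np.log(np.sum(np.exp(beta * x[at_risk])))
    return -ll

# One-dimensional Newton-Raphson with numerical first and second derivatives.
beta, h = 0.0, 1e-4
for _ in range(30):
    g = (neg_log_partial_lik(beta + h) - neg_log_partial_lik(beta - h)) / (2 * h)
    H = (neg_log_partial_lik(beta + h) - 2 * neg_log_partial_lik(beta)
         + neg_log_partial_lik(beta - h)) / h ** 2
    beta -= g / H
beta_hat = beta
```

The paper's setting is harder precisely because this kind of direct maximization breaks down for the combined Cox/AFT model, motivating the kernel-smoothed profile likelihood.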

2.
Modern medical treatments have substantially improved survival rates for many chronic diseases and have generated considerable interest in developing cure fraction models for survival data with a non-ignorable cured proportion. Statistical analysis of such data may be further complicated by competing risks that involve multiple types of endpoints. Regression analysis of competing risks is typically undertaken via a proportional hazards model specified on the cause-specific hazard or the subdistribution hazard. In this article, we propose an alternative approach that treats competing events as distinct outcomes in a mixture. We consider semiparametric accelerated failure time models for the cause-conditional survival functions, combined through a multinomial logistic model within the cure-mixture modeling framework. The cure-mixture approach to competing risks provides a means to determine the overall effect of a treatment and insight into how the treatment modifies the components of the mixture in the presence of a cure fraction. The regression and nonparametric parameters are estimated by a nonparametric kernel-based maximum likelihood method. Variance estimation is achieved through resampling of the kernel-smoothed likelihood function. Simulation studies show that the procedures work well in practical settings. An application to a sarcoma study demonstrates the use of the proposed method for competing risks data with a cure fraction.
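The multinomial-logistic mixing of a cure class with cause-specific classes can be sketched as follows. The coefficient matrix and the exponential stand-ins for the cause-conditional survival functions are hypothetical (the paper uses semiparametric AFT components); only the mixture structure itself is taken from the abstract:

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)                 # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

# Hypothetical (intercept, slope) rows for the classes (cured, cause 1, cause 2).
GAMMA = np.array([[0.5, 0.2], [0.0, -0.3], [-0.2, 0.4]])

def mixture_probs(x):
    """Multinomial-logistic mixture weights for one scalar covariate x."""
    return softmax(GAMMA[:, 0] + GAMMA[:, 1] * x)

def population_survival(t, x):
    """Cure-mixture survival: cured subjects never fail; uncured subjects follow
    cause-conditional survival functions (exponential stand-ins here)."""
    p = mixture_probs(x)
    s1, s2 = np.exp(-0.5 * t), np.exp(-1.5 * t)
    return p[0] + p[1] * s1 + p[2] * s2
```

The population survival curve plateaus at the covariate-specific cure probability, which is the defining feature of cure-mixture models.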

3.
Composite endpoints are frequently used in clinical trials, but simple approaches, such as the time to first event, do not reflect any ordering among the endpoints, even though some endpoints, such as mortality, are worse than others. A variety of procedures have been proposed to reflect the severity of the individual endpoints, such as pairwise ranking approaches, the win ratio, and the desirability of outcome ranking. When patients have different lengths of follow-up, however, ranking can be difficult, and the proposed methods do not lead naturally to regression approaches and require specialized software. This paper defines an ordering score O to operationalize the patient ranking implied by hierarchical endpoints. We show how differential right censoring of follow-up corresponds to multiple interval censoring of the ordering score, allowing standard software for survival models to be used to calculate the nonparametric maximum likelihood estimators (NPMLEs) of different measures. Additionally, if one assumes that the ordering score is transformable to an exponential random variable, a semiparametric regression is obtained that is equivalent to the proportional hazards model subject to multiple interval censoring; standard software can again be used for estimation. We show that the NPMLE can be poorly behaved compared with simple estimators in staggered-entry trials. We also show that the semiparametric estimator can be more efficient than simple estimators, and we explore how standard Cox regression maneuvers can be used to assess model fit, allow flexible generalizations, and assess interactions of covariates with treatment. We analyze a trial of short- versus long-term antiplatelet therapy using our methods.
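The hierarchical pairwise comparison underlying win-ratio-type statistics can be illustrated on toy data. The outcomes below are hypothetical and assume equal follow-up for every patient; this sketch deliberately ignores the differential-censoring complications that the ordering-score approach is designed to handle:

```python
from itertools import product

# Hypothetical per-patient outcomes: (death_time or None, hospitalization_time or None),
# with None meaning the event never occurred during the common follow-up period.
treated = [(None, 5.0), (None, None), (8.0, 2.0)]
control = [(4.0, 1.0), (None, 3.0), (6.0, 2.5)]

def compare(a, b):
    """+1 if patient a has the better outcome, -1 if worse, 0 if tied.
    Death is compared first (no death, or a later death, is better);
    hospitalization breaks ties by the same rule."""
    for ta, tb in zip(a, b):
        if ta is None and tb is not None:
            return +1
        if ta is not None and tb is None:
            return -1
        if ta is not None and tb is not None and ta != tb:
            return +1 if ta > tb else -1
    return 0

wins = losses = 0
for a, b in product(treated, control):
    c = compare(a, b)
    wins += (c == +1)
    losses += (c == -1)
win_ratio = wins / losses
```

Every treated-control pair is compared on the most severe endpoint first, which is exactly the ordering the time-to-first-event analysis fails to respect.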

4.
Many biomedical and clinical studies with time-to-event outcomes involve competing risks data. These data are frequently subject to interval censoring. This means that the failure time is not precisely observed but is only known to lie between two observation times such as clinical visits in a cohort study. Not taking into account the interval censoring may result in biased estimation of the cause-specific cumulative incidence function, an important quantity in the competing risks framework, used for evaluating interventions in populations, for studying the prognosis of various diseases, and for prediction and implementation science purposes. In this work, we consider the class of semiparametric generalized odds rate transformation models in the context of sieve maximum likelihood estimation based on B-splines. This large class of models includes both the proportional odds and the proportional subdistribution hazard models (i.e., the Fine–Gray model) as special cases. The estimator for the regression parameter is shown to be consistent, asymptotically normal and semiparametrically efficient. Simulation studies suggest that the method performs well even with small sample sizes. As an illustration, we use the proposed method to analyze data from HIV-infected individuals obtained from a large cohort study in sub-Saharan Africa. We also provide the R function ciregic that implements the proposed method and present an illustrative example. Copyright © 2017 John Wiley & Sons, Ltd.
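For orientation, the cause-specific cumulative incidence function (CIF) reduces to a simple empirical proportion when failure times and causes are fully observed. A toy sketch with hypothetical data; the paper's contribution is precisely the harder interval-censored case, handled there by B-spline sieve maximum likelihood:

```python
import numpy as np

# Hypothetical competing-risks data with exact (uncensored) failure times and causes.
times  = np.array([1.0, 2.0, 2.0, 3.0, 4.0, 5.0])
causes = np.array([1,   2,   1,   1,   2,   1])

def cif(t, k):
    """Empirical cause-specific cumulative incidence: P(T <= t, cause = k)."""
    return np.mean((times <= t) & (causes == k))
```

With no censoring the cause-specific CIFs sum, at the largest observed time, to the overall failure probability; interval censoring destroys this direct empirical estimate, which is why a sieve likelihood is needed.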

5.
Clustered survival data in the presence of a cure fraction have received increasing attention. In this paper, we consider a semiparametric mixture cure model that incorporates a logistic regression model for the cure fraction and a semiparametric regression model for the failure time. We utilize Archimedean copula (AC) models to assess the strength of association, for both susceptibility and failure time, between susceptible individuals in the same cluster. Instead of using the full likelihood approach, we consider a composite likelihood function and a two-stage estimation procedure for both marginal and association parameters. A jackknife procedure that leaves out one cluster at a time is proposed for variance estimation. The Akaike information criterion is applied to select the best model among the ACs. Simulation studies are performed to validate our estimating procedures, and two real data sets are analyzed to demonstrate the practical use of the proposed method.
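A minimal sketch of one Archimedean copula family used in this kind of association modeling: the Clayton copula, whose Kendall's tau has a closed form. The parameter values below are illustrative, not taken from the paper:

```python
def clayton_cdf(u, v, theta):
    """Clayton Archimedean copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta),
    valid for theta > 0 (positive dependence)."""
    return (u ** (-theta) + v ** (-theta) - 1.0) ** (-1.0 / theta)

def clayton_kendall_tau(theta):
    """Closed-form Kendall's tau for the Clayton family: theta / (theta + 2)."""
    return theta / (theta + 2.0)
```

The uniform-margin boundary condition C(u, 1) = u and the independence limit as theta tends to 0 are easy sanity checks; in the two-stage procedure of the abstract, the association parameter theta is estimated after the marginal models.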

6.
Among semiparametric models, the Cox proportional hazards model is widely used to assess the association between covariates and the time to event when the observed time to event is interval-censored. Often, covariates are measured with error. Flexible approaches have been proposed to handle this covariate uncertainty in the Cox proportional hazards model with interval-censored data. To fill a gap and broaden the scope of statistical applications for analyzing time-to-event data, this paper proposes a general approach for fitting the semiparametric linear transformation model to interval-censored data when a covariate is measured with error. The semiparametric linear transformation model is a broad class of models that includes the proportional hazards model and the proportional odds model as special cases. The proposed method relies on a set of estimating equations to estimate the regression parameters and the infinite-dimensional parameter. A flexible imputation technique is used to handle interval censoring and covariate measurement error. Finite-sample performance of the proposed method is judged via simulation studies. Finally, the suggested method is applied to a real data set from an AIDS clinical trial.

7.
The linear mixed effects model based on a full likelihood is one of the few methods available to model longitudinal data subject to left censoring. However, a full likelihood approach is algebraically complicated because of the large dimension of the numeric computations, and maximum likelihood estimation can be computationally prohibitive when the data are heavily censored. Moreover, for mixed models, the complexity of the computation increases with the dimension of the random effects. We propose a method based on pseudo-likelihood that simplifies the computation, allows a wide class of multivariate models, and can be used for many different data structures, including settings where the level of censoring is high. The motivation for this work comes from the need for a joint model to assess the effect of pro-inflammatory and anti-inflammatory biomarker data on 30-day mortality while simultaneously accounting for longitudinal left censoring and correlation between markers, in the analysis of the Genetic and Inflammatory Markers of Sepsis study conducted at the University of Pittsburgh. Two markers, interleukin-6 and interleukin-10, which are naturally correlated because of shared biological pathways and are left-censored because of the limited sensitivity of the assays, are considered to determine whether higher levels of these markers are associated with an increased risk of death after accounting for the left censoring and their assumed correlation. Copyright © 2016 John Wiley & Sons, Ltd.
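The building block of likelihoods for left-censored assay data is the censored-normal contribution: an observed value contributes the density, while a value below the detection limit contributes the CDF at the limit. A univariate sketch with hypothetical data (the paper's setting is a multivariate mixed model, handled there by pseudo-likelihood):

```python
import math

def norm_logpdf(x, mu, sigma):
    """Log density of N(mu, sigma^2)."""
    z = (x - mu) / sigma
    return -0.5 * z * z - math.log(sigma * math.sqrt(2.0 * math.pi))

def norm_logcdf(x, mu, sigma):
    """Log CDF of N(mu, sigma^2) via the error function (stdlib only)."""
    z = (x - mu) / sigma
    return math.log(0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

def left_censored_loglik(mu, sigma, obs, limit):
    """Log-likelihood for normal data left-censored at a detection limit.
    obs is a list of (value, censored): censored entries have value None."""
    ll = 0.0
    for y, censored in obs:
        ll += norm_logcdf(limit, mu, sigma) if censored else norm_logpdf(y, mu, sigma)
    return ll
```

In a Tobit-style analysis this function would be maximized over (mu, sigma); the pseudo-likelihood of the abstract replaces the joint multivariate version with computationally cheaper lower-dimensional pieces.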

8.
In clinical trials with time-to-event outcomes, it is common to estimate the marginal hazard ratio from the proportional hazards model even when the proportional hazards assumption is not valid. This is unavoidable from the perspective that the estimator must be specified a priori if probability statements about treatment effect estimates are desired. Marginal hazard ratio estimates under non-proportional hazards are still useful, as they can be considered average treatment effect estimates over the support of the data. However, as many have shown, under non-proportional hazards the 'usual' unweighted marginal hazard ratio estimate is a function of the censoring distribution, which is not normally considered scientifically relevant when describing the treatment effect. In addition, in many practical settings the censoring distribution is only conditionally independent (e.g., differing across treatment arms), which further complicates the interpretation. In this paper, we investigate an estimator of the hazard ratio that removes the influence of censoring and propose a consistent robust variance estimator. We compare the coverage probability of the estimator with both the usual Cox model estimator and an estimator proposed by Xu and O'Quigley (2000) when censoring is independent of the covariate. The new estimator should be used for inference that does not depend on the censoring distribution; it is particularly relevant to adaptive clinical trials where, by design, censoring distributions differ across treatment arms. Copyright © 2012 John Wiley & Sons, Ltd.

9.
The generalized odds-rate model is a class of semiparametric regression models that includes the proportional hazards and proportional odds models as special cases. There are few works on estimation of the generalized odds-rate model with interval-censored data because of the challenges in maximizing the complex likelihood function. In this paper, we propose a gamma-Poisson data augmentation approach to develop an expectation-maximization algorithm that can be used to fit the generalized odds-rate model to interval-censored data. The proposed expectation-maximization algorithm is easy to implement and computationally efficient. The performance of the proposed method is evaluated by comprehensive simulation studies and illustrated through applications to datasets from breast cancer and hemophilia studies. To make the proposed method easy to use in practice, an R package 'ICGOR' was developed. Copyright © 2016 John Wiley & Sons, Ltd.
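The gamma-frailty identity that underlies this kind of augmentation can be checked numerically: the generalized odds-rate survival function (1 + r·Λ)^(-1/r) equals the expectation of exp(-φ·Λ) over a gamma frailty φ with shape 1/r and scale r. The values of r, the cumulative hazard, and the Monte Carlo setup below are illustrative:

```python
import numpy as np

r, lam = 0.5, 1.3    # hypothetical odds-rate parameter and cumulative-hazard value

# Gamma-frailty representation: (1 + r*lam)^(-1/r) = E[exp(-phi*lam)]
# with phi ~ Gamma(shape = 1/r, scale = r).
rng = np.random.default_rng(0)
phi = rng.gamma(shape=1.0 / r, scale=r, size=200_000)
mc_survival = np.exp(-phi * lam).mean()
exact_survival = (1.0 + r * lam) ** (-1.0 / r)
```

Conditioning on the latent frailty turns the awkward transformation model into a proportional hazards model, which is what makes the EM updates of the paper tractable.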

10.
Interval-censored data occur naturally in many fields; their main feature is that the failure time of interest is not observed exactly but is known to fall within some interval. In this paper, we propose a semiparametric probit model for analyzing case 2 interval-censored data as an alternative to the existing semiparametric models in the literature. Specifically, we propose to approximate the unknown nonparametric nondecreasing function in the probit model with a linear combination of monotone splines, leading to only a finite number of parameters to estimate. Both maximum likelihood and Bayesian estimation methods are proposed. For each method, the regression parameters and the baseline survival function are estimated jointly. The proposed methods make no assumptions about the observation process and are applicable to any interval-censored data, with easy implementation. The methods are evaluated by simulation studies and illustrated by two real-life interval-censored data applications. Copyright © 2010 John Wiley & Sons, Ltd.

11.
Various semiparametric regression models have recently been proposed for the analysis of gap times between consecutive recurrent events. Among them, the semiparametric accelerated failure time (AFT) model is especially appealing owing to its direct interpretation of covariate effects on the gap times. In general, estimation of the semiparametric AFT model is challenging because the rank-based estimating function is a nonsmooth step function. As a result, solutions to the estimating equations do not necessarily exist. Moreover, the popular resampling-based variance estimation for the AFT model requires solving rank-based estimating equations repeatedly and hence can be computationally cumbersome and unstable. In this paper, we extend the induced smoothing approach to the AFT model for recurrent gap time data. Our proposed smooth estimating function permits the application of standard numerical methods for both the regression coefficients estimation and the standard error estimation. Large-sample properties and an asymptotic variance estimator are provided for the proposed method. Simulation studies show that the proposed method outperforms the existing nonsmooth rank-based estimating function methods in both point estimation and variance estimation. The proposed method is applied to the data analysis of repeated hospitalizations for patients in the Danish Psychiatric Center Register.
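The core induced-smoothing device is to replace the indicator step inside the rank-based estimating function with a normal CDF whose bandwidth shrinks with the sample size, making the function differentiable. A scalar sketch (the bandwidths here are arbitrary illustrative values, not the paper's data-driven choice):

```python
import math

def indicator(x):
    """The nonsmooth step 1{x >= 0} appearing in rank-based estimating functions."""
    return 1.0 if x >= 0 else 0.0

def smoothed_indicator(x, h):
    """Induced-smoothing replacement of 1{x >= 0} by Phi(x / h),
    the standard normal CDF evaluated at x / h."""
    return 0.5 * (1.0 + math.erf(x / (h * math.sqrt(2.0))))
```

Because the smoothed version has derivatives, standard Newton-type solvers and sandwich variance formulas apply, which is exactly the computational advantage the abstract claims over the nonsmooth estimating equations.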

12.
This paper discusses regression analysis of multivariate current status failure time data (The Statistical Analysis of Interval-censored Failure Time Data. Springer: New York, 2006), which occur quite often in, for example, tumorigenicity experiments and epidemiologic investigations of the natural history of a disease. For this problem, several marginal approaches have been proposed that model each failure time of interest individually (Biometrics 2000; 56:940-943; Statist. Med. 2002; 21:3715-3726). In this paper, we present a full likelihood approach based on the proportional hazards frailty model. For estimation, an expectation-maximization (EM) algorithm is developed, and simulation studies suggest that the presented approach performs well in practical situations. The approach is applied to a set of bivariate current status data arising from a tumorigenicity experiment. Copyright © 2009 John Wiley & Sons, Ltd.

13.
Prognostic studies often estimate survival curves for patients with different covariate vectors, but the validity of their results depends largely on the accuracy of the estimated covariate effects. To avoid conventional proportional hazards and linearity assumptions, flexible extensions of Cox's proportional hazards model incorporate non-linear (NL) and/or time-dependent (TD) covariate effects. However, their impact on survival curve estimation is unclear. Our primary goal is to develop and validate a flexible method for estimating individual patients' survival curves, conditional on multiple predictors with possibly NL and/or TD effects. We first obtain maximum partial likelihood estimates of NL and TD effects and use backward elimination to select statistically significant effects into a final multivariable model. We then plug the selected NL and TD estimates into the full likelihood function and estimate the baseline hazard function and the resulting survival curves, conditional on individual covariate vectors. The TD and NL functions and the log hazard are modeled with unpenalized regression B-splines. In simulations, our flexible survival curve estimates were unbiased and had much lower mean square errors than the conventional estimates. In real-life analyses of mortality after septic shock, our model significantly improved the deviance (likelihood ratio test = 84.8, df = 20, p < 0.0001) and substantially changed the predicted survival for several subjects. Copyright © 2015 John Wiley & Sons, Ltd.

14.
The Cox proportional hazards regression model is a popular tool to analyze the relationship between a censored lifetime variable and other relevant factors. The semiparametric Cox model is widely used to study different types of data arising from applied disciplines such as medical science, biology, and reliability studies. A fully parametric version of the Cox regression model, if properly specified, can yield more efficient parameter estimates, leading to better insight. However, the existing maximum likelihood approach to inference under the fully parametric proportional hazards model is highly nonrobust against data contamination (often manifested through outliers), which restricts its practical use. In this paper, we develop a robust estimation procedure for the parametric proportional hazards model based on the minimum density power divergence approach. The proposed minimum density power divergence estimator produces highly robust estimates under data contamination with only a slight loss in efficiency under pure data. Further, it generates more precise inference than likelihood-based estimates under the semiparametric Cox model or its existing robust versions. We also justify the robustness theoretically through an influence function analysis. The practical applicability and usefulness of the proposal are illustrated through simulations and real data examples.
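A toy version of the minimum density power divergence (DPD) idea, simplified from the paper's parametric proportional hazards setting to a plain exponential fit; the data, the tuning constant alpha = 0.5, and the grid search are all illustrative. For an Exp(lam) density, the term ∫f^(1+alpha) in the empirical DPD objective has the closed form lam^alpha / (1 + alpha):

```python
import numpy as np

# Hypothetical exponential lifetimes contaminated by one gross outlier.
data = np.array([0.8, 1.1, 0.9, 1.3, 0.7, 1.0, 1.2, 25.0])

def dpd_objective(lam, alpha=0.5):
    """Empirical density power divergence objective for an Exp(lam) model:
    lam^alpha / (1 + alpha) - (1 + 1/alpha) * mean(f(X_i)^alpha)."""
    f = lam * np.exp(-lam * data)
    return lam ** alpha / (1 + alpha) - (1 + 1 / alpha) * np.mean(f ** alpha)

grid = np.linspace(0.05, 3.0, 1000)
lam_dpd = grid[np.argmin([dpd_objective(l) for l in grid])]   # robust DPD estimate
lam_mle = 1.0 / data.mean()                                   # nonrobust MLE
```

Raising the density to the power alpha downweights points where the fitted density is tiny, so the outlier barely moves the DPD estimate, while the MLE is dragged far below the rate suggested by the clean observations.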

15.
Continuous-time multistate survival models can be used to describe health-related processes over time. In the presence of interval-censored times for transitions between the living states, the likelihood is constructed using transition probabilities. Models can be specified using parametric or semiparametric shapes for the hazards. Semiparametric hazards can be fitted using P-splines and penalised maximum likelihood estimation. This paper presents a method to estimate flexible multistate models that allow for parametric and semiparametric hazard specifications. The estimation is based on a scoring algorithm. The method is illustrated with data from the English Longitudinal Study of Ageing.

16.
In this paper, we introduce a flexible family of cure rate models, mainly motivated by the biological derivation of the classical promotion time cure rate model and assuming that a metastasis-competent tumor cell produces a detectable tumor mass only when a specific number of distinct biological factors affect the cell. Special cases of the new model include, among others, the promotion time (proportional hazards), the geometric (proportional odds), and the negative binomial cure rate models. In addition, our model generalizes specific families of transformation cure rate models and some well-studied destructive cure rate models. Exact likelihood inference is carried out with the aid of the expectation-maximization algorithm; a profile likelihood approach is exploited for estimating the parameters of the model, while the model discrimination problem is analyzed by the likelihood ratio test. A simulation study demonstrates the accuracy of the proposed inferential method. Finally, as an illustration, we fit the proposed model to a cutaneous melanoma dataset. Copyright © 2017 John Wiley & Sons, Ltd.
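The classical promotion time model that motivates this family has population survival S_pop(t) = exp(-theta·F(t)), where F is the CDF of the latent promotion times, so the cure fraction is the plateau exp(-theta). A small sketch with a hypothetical latent CDF and an illustrative theta:

```python
import math

def promotion_time_survival(t, theta, F):
    """Classical promotion time cure model: S_pop(t) = exp(-theta * F(t));
    the cure fraction is the plateau S_pop(infinity) = exp(-theta)."""
    return math.exp(-theta * F(t))

def F_exp(t):
    """Hypothetical latent promotion-time CDF (unit exponential)."""
    return 1.0 - math.exp(-t)

cure_fraction = promotion_time_survival(float('inf'), 2.0, F_exp)
```

Replacing the Poisson number of latent factors behind this derivation with geometric or negative binomial counts yields the proportional odds and negative binomial special cases named in the abstract.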

17.
The statistical analysis of panel count data has recently attracted a great deal of attention, and a number of approaches have been developed. However, most of these approaches are for situations where the observation and follow-up processes are independent of the underlying recurrent event process, unconditionally or conditionally on covariates. In this paper, we discuss a more general situation where both the observation and the follow-up processes may be related to the recurrent event process of interest. For regression analysis, we present a class of semiparametric transformation models and develop estimating equations for the regression parameters. Numerical studies conducted under different settings to assess the proposed methodology suggest that it works well in practical situations. The approach is applied to the skin cancer study that motivated this work. Copyright © 2013 John Wiley & Sons, Ltd.

18.
This paper considers estimation of Cox proportional hazards models under informative right censoring using maximum penalized likelihood, where dependence between censoring and event times is modelled by a copula function and a roughness penalty is used to constrain the baseline hazard to be a smooth function. Since the baseline hazard is nonnegative, we propose a special algorithm in which each iteration updates the regression coefficients by the Newton algorithm and the baseline hazard by the multiplicative iterative algorithm. Asymptotic properties for both the regression coefficient and baseline hazard estimates are developed. A simulation study investigates the performance of our method and compares it with an existing maximum likelihood method. We apply the proposed method to a dataset of dementia patients.

19.
Analysis of long-term follow-up survival studies requires more sophisticated approaches than the proportional hazards model. To account for the dynamic behaviour of fixed covariates, penalized Cox models can be employed with interactions of the covariates and known time functions. In this work, I discuss some of the suggested methods and emphasize the use of a ridge penalty in survival models. I review different strategies for choosing an optimal penalty weight and argue for the computationally efficient restricted maximum likelihood (REML)-type method. A ridge penalty term can be subtracted from the likelihood when modelling time-varying effects in order to control the behaviour of the time functions. I suggest using flexible time functions such as B-splines and constraining their behaviour by adding proper penalties. I present the basic methods and illustrate different penalty weights in two different datasets. Copyright © 2013 John Wiley & Sons, Ltd.

20.
Cure models have been applied to analyze clinical trials with cures and age-at-onset studies with nonsusceptibility. Lu and Ying (On semiparametric transformation cure model. Biometrika 2004; 91:331-343. DOI: 10.1093/biomet/91.2.331) developed a general class of semiparametric transformation cure models, which assumes that the failure times of uncured subjects, after an unknown monotone transformation, follow a regression model with homoscedastic residuals. However, it cannot deal with the frequently encountered heteroscedasticity that may result from dispersed ranges of failure times among uncured subjects' strata. To tackle this phenomenon, this article presents semiparametric heteroscedastic transformation cure models. The cure status and the failure time of an uncured subject are fitted by a logistic regression model and a heteroscedastic transformation model, respectively. Unlike the approach of Lu and Ying, we derive score equations from the full likelihood for estimating the regression parameters in the proposed model. A martingale difference function similar to their proposal is used to estimate the infinite-dimensional transformation function. Our estimating approach is intuitively applicable and can be conveniently extended to other complicated models when maximization of the likelihood may be too tedious to implement. We conduct simulation studies to validate the large-sample properties of the proposed estimators and to compare with the approach of Lu and Ying via relative efficiency. The estimating method and two relevant goodness-of-fit graphical procedures are illustrated using breast cancer data and melanoma data. Copyright © 2016 John Wiley & Sons, Ltd.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号