Similar Documents
20 similar documents found (search time: 15 ms)
1.
A mixed effect model is proposed to jointly analyze multivariate longitudinal data with continuous, proportion, count, and binary responses. The association of the variables is modeled through the correlation of random effects. We use a quasi-likelihood type approximation for the nonlinear variables and transform the proposed model into a multivariate linear mixed model framework for estimation and inference. Via an extension of the EM approach, an efficient algorithm is developed to fit the model. The method is applied to physical activity data collected with a wearable accelerometer device that measures daily movement and energy expenditure. Our approach is also evaluated by a simulation study.

2.
The proportional subdistribution hazards model (i.e., the Fine-Gray model) has been widely used for analyzing univariate competing risks data. Recently, this model has been extended to clustered competing risks data via frailty. To the best of our knowledge, however, there is no literature on variable selection methods for such competing risks frailty models. In this paper, we propose a simple but unified procedure via a penalized h-likelihood (HL) for variable selection of fixed effects in a general class of subdistribution hazard frailty models, in which random effects may be shared or correlated. We consider three penalty functions, the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD), and HL, in our variable selection procedure. We show that the proposed method can be easily implemented using a slight modification of existing h-likelihood estimation approaches. Numerical studies demonstrate that the proposed procedure using the HL penalty performs well, providing a higher probability of choosing the true model than the LASSO and SCAD methods without losing prediction accuracy. The usefulness of the new method is illustrated using two actual datasets from multi-center clinical trials. Copyright © 2014 John Wiley & Sons, Ltd.
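The SCAD penalty used above alongside LASSO is a standard construction (Fan and Li); a minimal sketch of the penalty function, with the tuning constants `lam` and `a` (the conventional default a = 3.7) chosen here only for illustration:

```python
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty evaluated elementwise: LASSO-like (linear) near zero,
    a quadratic transition region, then constant, so large coefficients
    are not over-shrunk the way LASSO shrinks them."""
    t = np.abs(np.asarray(theta, dtype=float))
    return np.select(
        [t <= lam, t <= a * lam],
        [lam * t,
         (2 * a * lam * t - t ** 2 - lam ** 2) / (2 * (a - 1))],
        default=lam ** 2 * (a + 1) / 2,
    )
```

The flat tail beyond `a * lam` is what gives SCAD its near-unbiasedness for large effects, in contrast to the constant shrinkage applied by LASSO.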

3.
In this paper, we develop a Bayesian method for joint analysis of longitudinal measurements and competing risks failure time data. The model allows one to analyze the longitudinal outcome with nonignorable missing data induced by multiple types of events, to analyze survival data with dependent censoring for the key event, and to draw inferences on multiple endpoints simultaneously. Compared with the likelihood approach, the Bayesian method has several advantages. It is computationally more tractable for high-dimensional random effects, and it is convenient for drawing inferences. Moreover, it provides a means to incorporate prior information that may help to improve estimation accuracy. An illustration is given using data from a clinical trial of scleroderma lung disease. The performance of our method is evaluated by simulation studies. Copyright © 2009 John Wiley & Sons, Ltd.

4.
Generalized linear models with random effects are often used to explain the serial dependence of longitudinal categorical data. Marginalized random effects models (MREMs) permit likelihood-based estimations of marginal mean parameters and also explain the serial dependence of longitudinal data. In this paper, we extend the MREM to accommodate multivariate longitudinal binary data using a new covariance matrix with a Kronecker decomposition, which easily explains both the serial dependence and time-specific response correlation. A maximum marginal likelihood estimation is proposed utilizing a quasi-Newton algorithm with quasi-Monte Carlo integration of the random effects. Our approach is applied to analyze metabolic syndrome data from the Korean Genomic Epidemiology Study for Korean adults. Copyright © 2009 John Wiley & Sons, Ltd.
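The Kronecker decomposition described above can be illustrated directly. The specific factors below (AR(1) serial correlation over time, exchangeable correlation across responses) are illustrative assumptions, not necessarily the paper's exact choices:

```python
import numpy as np

# AR(1) serial correlation over T = 3 time points (illustrative)
rho_t, T = 0.6, 3
R_time = rho_t ** np.abs(np.subtract.outer(np.arange(T), np.arange(T)))

# Exchangeable correlation between K = 2 responses (illustrative)
rho_r, K = 0.4, 2
R_resp = np.full((K, K), rho_r) + (1 - rho_r) * np.eye(K)

# Kronecker product: one (T*K) x (T*K) correlation matrix whose blocks
# separate serial dependence from cross-response dependence
Sigma = np.kron(R_time, R_resp)
```

The appeal of the decomposition is parsimony: the full T*K correlation structure is parameterized by the two small factors, and each factor retains a clean interpretation.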

5.
The normality assumption for measurement error is widely used in joint models of longitudinal and survival data, but it may lead to unreasonable or even misleading results when the longitudinal data exhibit skewness. This paper proposes a new joint model for multivariate longitudinal and multivariate survival data by incorporating a nonparametric function into the trajectory function and the hazard function and by assuming that measurement errors in the longitudinal measurement models follow a skew-normal distribution. A Monte Carlo expectation-maximization (EM) algorithm, together with the penalized-splines technique and the Metropolis-Hastings algorithm within the Gibbs sampler, is developed to estimate the parameters and nonparametric functions in the considered joint models. Case-deletion diagnostic measures are proposed to identify potentially influential observations, and an extended local influence method is presented to assess the local influence of minor perturbations. Simulation studies and a real example from a clinical trial are presented to illustrate the proposed methodologies. Copyright © 2017 John Wiley & Sons, Ltd.

6.
Censored failure time data with a cured subgroup are frequently encountered in many scientific areas, including cancer screening research, tumorigenicity studies, and sociological surveys. Meanwhile, one may also encounter an extraordinarily large number of risk factors in practice, such as patients' demographic characteristics, clinical measurements, and medical history, which makes variable selection an emerging need in data analysis. Motivated by a medical study on prostate cancer screening, we develop a variable selection method for the semiparametric nonmixture (or promotion time) cure model when interval-censored data with a cured subgroup are present. Specifically, we propose a penalized likelihood approach with the least absolute shrinkage and selection operator, adaptive least absolute shrinkage and selection operator, or smoothly clipped absolute deviation penalty, which can be easily accomplished via a novel penalized expectation-maximization algorithm. We assess the finite-sample performance of the proposed methodology through extensive simulations and analyze the prostate cancer screening data for illustration.

7.
Longitudinal studies often gather joint information on time to some event (survival analysis, time to dropout) and serial outcome measures (repeated measures, growth curves). Depending on the purpose of the study, one may wish to estimate and compare serial trends over time while accounting for possibly non-ignorable dropout, or one may wish to investigate any associations that may exist between the event time of interest and various longitudinal trends. In this paper, we consider a class of random-effects models known as shared parameter models that are particularly useful for jointly analysing such data, namely repeated measurements and event time data. Specific attention is given to the longitudinal setting where the primary goal is to estimate and compare serial trends over time while adjusting for possible informative censoring due to patient dropout. Parametric and semi-parametric survival models for event times, together with generalized linear or non-linear mixed-effects models for repeated measurements, are proposed for jointly modelling serial outcome measures and event times. Methods of estimation are based on a generalized non-linear mixed-effects model that may be easily implemented using existing software. This approach allows for flexible modelling of both the distribution of event times and the relationship of the longitudinal response variable to the event time of interest. The model and methods are illustrated using data from a multi-centre study of the effects of diet and blood pressure control on the progression of renal disease, the Modification of Diet in Renal Disease study.

8.
Recurrent event data occur in many clinical and observational studies, and in these situations there may exist a terminal event, such as death, that is related to the recurrent event of interest. In addition, sometimes more than one type of recurrent event may occur; that is, one may encounter multivariate recurrent event data with a dependent terminal event. For the analysis of such data, one must take into account the dependence among the different types of recurrent events and that between the recurrent events and the terminal event. In this paper, we extend a method for univariate recurrent and terminal events, propose a joint modeling approach for regression analysis of the data, and establish the finite-sample and asymptotic properties of the resulting estimates of the unknown parameters. The method is applied to a set of bivariate recurrent event data arising from a long-term follow-up study of childhood cancer survivors.

9.
The proportional subdistribution hazard regression model has been widely used by clinical researchers for analyzing competing risks data. It is well known that quantile regression provides a more comprehensive alternative, modeling how covariates influence not only the location but also the entire conditional distribution. In this paper, we develop variable selection procedures based on penalized estimating equations for competing risks quantile regression. Asymptotic properties of the proposed estimators, including consistency and oracle properties, are established. Monte Carlo simulation studies are conducted, confirming that the proposed methods are efficient. A bone marrow transplant data set is analyzed to demonstrate our methodologies.
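Quantile regression, as used above, replaces squared error with the check (pinball) loss. A minimal illustration (not the paper's penalized estimating equations) showing that minimizing the check loss recovers a sample quantile, here the median, robustly to an outlier:

```python
import numpy as np

def check_loss(u, tau):
    """Koenker-Bassett check loss: tau*u for u >= 0, (tau - 1)*u for u < 0."""
    return np.where(u >= 0, tau * u, (tau - 1) * u)

y = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
tau = 0.5  # median regression

# Among the observed values, the minimizer of total check loss is the
# tau-th sample quantile; the outlier 100 does not drag it (unlike the mean).
losses = {float(q): float(check_loss(y - q, tau).sum()) for q in y}
best = min(losses, key=losses.get)
```

Setting tau to 0.1 or 0.9 targets the lower or upper tail of the conditional distribution, which is what lets quantile regression describe covariate effects beyond the center.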

10.
Owing to the rapid development of biomarkers in clinical trials, joint modeling of longitudinal and survival data has gained popularity in recent years because it reduces bias and improves efficiency in the assessment of treatment effects and other prognostic factors. Although much effort has been put into inferential methods in joint modeling, such as estimation and hypothesis testing, design aspects have not been formally considered. Statistical design, such as sample size and power calculation, is a crucial first step in clinical trials. In this paper, we derive a closed-form sample size formula for estimating the effect of the longitudinal process in joint modeling, and we extend Schoenfeld's sample size formula to the joint modeling setting for estimating the overall treatment effect. The sample size formula we develop is quite general, allowing for p-degree polynomial trajectories. The robustness of our model is demonstrated in simulation studies with linear and quadratic trajectories. We discuss the impact of the within-subject variability on power and data collection strategies, such as the spacing and frequency of repeated measurements, in order to maximize power. When the within-subject variability is large, different data collection strategies can influence the power of the study in a significant way. The optimal frequency of repeated measurements also depends on the nature of the trajectory, with higher-degree polynomial trajectories and larger measurement error requiring more frequent measurements.
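The classical Schoenfeld formula that the paper extends gives the required number of events for a two-sided test of a Cox treatment effect; a minimal sketch using only the standard normal quantile from the standard library:

```python
from math import ceil, log
from statistics import NormalDist

def schoenfeld_events(hazard_ratio, alpha=0.05, power=0.80, p_treat=0.5):
    """Required number of events for a two-sided test of a Cox model
    treatment effect (Schoenfeld's formula); p_treat is the allocation
    fraction assigned to the treatment arm."""
    z = NormalDist().inv_cdf
    numerator = (z(1 - alpha / 2) + z(power)) ** 2
    denominator = p_treat * (1 - p_treat) * log(hazard_ratio) ** 2
    return ceil(numerator / denominator)
```

For example, detecting a hazard ratio of 0.5 with 80% power at a two-sided 5% level and 1:1 allocation gives `schoenfeld_events(0.5)` = 66 events, the well-known textbook figure.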

11.
Elashoff RM, Li G, Li N. Statistics in Medicine 2007; 26(14): 2813-2835.
Joint analysis of longitudinal measurements and survival data has received much attention in recent years. However, previous work has primarily focused on a single failure type for the event time. In this paper, we consider joint modelling of repeated measurements and competing risks failure time data to allow for more than one distinct failure type in the survival endpoint, which occurs frequently in clinical trials. Our model uses latent random variables and common covariates to link together the sub-models for the longitudinal measurements and the competing risks failure time data. An EM-based algorithm is derived to obtain the parameter estimates, and a profile likelihood method is proposed to estimate their standard errors. Our method enables one to make joint inference on multiple outcomes, which is often necessary in analyses of clinical trials. Furthermore, joint analysis has several advantages over separate analysis of either the longitudinal data or the competing risks survival data. By modelling the event time, the analysis of the longitudinal measurements is adjusted to allow for non-ignorable missing data due to informative dropout, which cannot be appropriately handled by standard linear mixed effects models alone. In addition, the joint model utilizes information from both outcomes and can be substantially more efficient than separate analysis of the competing risks survival data, as shown in our simulation study. The performance of our method is evaluated and compared with separate analyses using both simulated data and a clinical trial of scleroderma lung disease.

12.
Accelerated failure time (AFT) models allowing for random effects are linear mixed models under a log-transformation of the survival time with censoring, and they describe dependence in correlated survival data. It is well known that AFT models are useful alternatives to frailty models. To the best of our knowledge, however, there is no literature on variable selection methods for such AFT models. In this paper, we propose a simple but unified variable selection procedure for fixed effects in AFT random-effect models using a penalized h-likelihood (HL). We consider four penalty functions (i.e., the least absolute shrinkage and selection operator (LASSO), adaptive LASSO, smoothly clipped absolute deviation (SCAD), and HL). We show that the proposed method can be easily implemented via a slight modification of existing h-likelihood estimation procedures, and we demonstrate that it can also be easily extended to AFT models with multilevel (or nested) structures. Simulation studies show that the procedure using the adaptive LASSO, SCAD, or HL penalty performs well. In particular, we find via the simulation results that the variable selection method with the HL penalty provides a higher probability of choosing the true model than the other three methods. The usefulness of the new method is illustrated using two actual datasets from multicenter clinical trials.

13.
Comparison of two hazard rate functions is important for evaluating treatment effects in studies concerning times to some important events. In practice, it may happen that the two hazard rate functions cross each other at one or more unknown time points, representing temporal changes of the treatment effect. Also, besides survival data, there may be longitudinal data available on some time-dependent covariates. When jointly modeling the survival and longitudinal data in such cases, model selection and model diagnostics are especially important for providing reliable statistical analysis of the data, yet they are lacking in the literature. In this paper, we discuss several criteria for assessing model fit that have been used for model selection and apply them to the joint modeling of survival and longitudinal data for comparing two crossing hazard rate functions. We also propose hypothesis testing and graphical methods for model diagnostics of the proposed joint modeling approach. Our proposed methods are illustrated by a simulation study and by a real-data example concerning two early breast cancer treatments. Copyright © 2014 John Wiley & Sons, Ltd.
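Crossing hazards can be made concrete with two Weibull hazard functions. The shapes below are purely illustrative (a decreasing hazard versus a constant one), and the crossing time is located by simple bisection:

```python
def weibull_hazard(t, shape, scale=1.0):
    """Weibull hazard h(t) = (k/lam) * (t/lam)**(k - 1)."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def crossing_time(h1, h2, lo, hi, tol=1e-10):
    """Bisection on h1(t) - h2(t); assumes one sign change in [lo, hi]."""
    f = lambda t: h1(t) - h2(t)
    assert f(lo) * f(hi) < 0, "no sign change bracketed"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Decreasing hazard (shape 0.5) vs constant hazard (shape 1):
# 0.5 * t**-0.5 = 1 has the solution t = 0.25, the crossing point
h_dec = lambda t: weibull_hazard(t, shape=0.5)
h_const = lambda t: weibull_hazard(t, shape=1.0)
t_cross = crossing_time(h_dec, h_const, 0.01, 4.0)
```

Before the crossing the first treatment looks worse, and after it looks better, which is exactly why a single proportional-hazards summary is inadequate in this setting.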

14.
The penalized likelihood methodology has consistently been demonstrated to be an attractive shrinkage and selection method. It not only automatically and consistently selects the important variables but also produces estimators that are as efficient as the oracle estimator. In this paper, we apply this approach to a general likelihood function for data organized in clusters, corresponding to a class of frailty models that includes the Cox model and the gamma frailty model as special cases. Our aim is to provide practitioners in the medical or reliability fields with options other than the gamma frailty model, which has been extensively studied because of its mathematical convenience. We illustrate the penalized likelihood methodology for frailty models through simulations and real data. Copyright © 2012 John Wiley & Sons, Ltd.

15.
In clinical and epidemiological studies, there is a growing interest in studying the heterogeneity among patients based on longitudinal characteristics to identify subtypes of the study population. Compared to clustering a single longitudinal marker, simultaneously clustering multiple longitudinal markers allows additional information to be incorporated into the clustering process, which reveals co-existing longitudinal patterns and generates deeper biological insight. In the current study, we propose a Bayesian consensus clustering (BCC) model for multivariate longitudinal data. Instead of arriving at a single overall clustering, the proposed model allows each marker to follow a marker-specific local clustering, and these local clusterings are aggregated to find a global (consensus) clustering. To estimate the posterior distribution of the model parameters, a Gibbs sampling algorithm is proposed. We apply our proposed model to the primary biliary cirrhosis study to identify patient subtypes that may be associated with prognosis. We also perform simulation studies to compare the clustering performance between the proposed model and existing models under several scenarios. The results demonstrate that the proposed BCC model serves as a useful tool for clustering multivariate longitudinal data.

16.
This article explores Bayesian joint models for a quantile of a longitudinal response, a mismeasured covariate, and an event time outcome, with an attempt to (i) characterize the entire conditional distribution of the response variable based on quantile regression, which may be more robust to outliers and misspecification of the error distribution; (ii) account for measurement error, evaluate non-ignorable missing observations, and adjust for departures from normality in the covariate; and (iii) overcome the difficulty of confidently specifying a time-to-event model. When statistical inference is carried out for a longitudinal data set with non-central location, non-linearity, non-normality, measurement error, and missing values, as well as interval-censored event times, it is important to account for these data features simultaneously in order to obtain more reliable and robust inferential results. Toward this end, we develop a Bayesian joint modeling approach to simultaneously estimate all parameters in three models: a quantile regression-based nonlinear mixed-effects model for the response using the asymmetric Laplace distribution, a linear mixed-effects model with a skew-t distribution for the mismeasured covariate in the presence of informative missingness, and an accelerated failure time model with an unspecified nonparametric distribution for the event time. We apply the proposed modeling approach to an AIDS clinical data set and conduct simulation studies to assess the performance of the proposed joint models and method. Copyright © 2016 John Wiley & Sons, Ltd.

17.
Due to its flexibility, the random-effects approach for the joint modelling of multivariate longitudinal profiles received a lot of attention in recent publications. In this approach different mixed models are joined by specifying a common distribution for their random-effects. Parameter estimates of this common distribution can then be used to evaluate the relation between the different responses. Using bivariate longitudinal measurements on pure-tone hearing thresholds, it will be shown that such a random-effects approach can yield misleading results for evaluating this relationship.

18.
Recurrent event data are quite common in biomedical and epidemiological studies. A significant portion of these data also contain additional longitudinal information on surrogate markers. Previous studies have shown that popular methods using a Cox model with longitudinal outcomes as time-dependent covariates may lead to biased results, especially when longitudinal outcomes are measured with error. Hence, it is important to incorporate longitudinal information into the analysis properly. To achieve this, we model the correlation between longitudinal and recurrent event processes using latent random effect terms. We then propose a two-stage conditional estimating equation approach to model the rate function of recurrent event process conditioned on the observed longitudinal information. The performance of our proposed approach is evaluated through simulation. We also apply the approach to analyze cocaine addiction data collected by the University of Connecticut Health Center. The data include recurrent event information on cocaine relapse and longitudinal cocaine craving scores. Copyright © 2016 John Wiley & Sons, Ltd.

19.
Lu W, Zhang HH. Statistics in Medicine 2007; 26(20): 3771-3781.
In this paper we study the problem of variable selection for the proportional odds model, which is a useful alternative to the proportional hazards model and may be appropriate when the proportional hazards assumption is not satisfied. We propose to fit the proportional odds model by maximizing the marginal likelihood subject to a shrinkage-type penalty, which encourages sparse solutions and hence facilitates the process of variable selection. Two types of shrinkage penalties are considered: the LASSO and the adaptive LASSO (ALASSO) penalty. In the ALASSO penalty, different weights are imposed on different coefficients so that important variables are more likely to be retained in the final model while unimportant ones are more likely to be shrunk to zero. We further provide an efficient computational algorithm to implement the proposed methods and demonstrate their performance through simulation studies and an application to real data. Numerical results indicate that both methods can produce accurate and interpretable models, and the ALASSO tends to work better than the usual LASSO.
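Under an orthonormal design, both penalties reduce to (weighted) soft-thresholding, which makes the weighting idea above easy to see. The initial estimates below are purely illustrative, not taken from the paper:

```python
import numpy as np

def soft_threshold(z, t):
    """S(z, t) = sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# Illustrative initial estimates: one strong signal, one noise coefficient
beta_init = np.array([4.0, 0.2])
lam = 0.5

# Plain LASSO: the same threshold shrinks every coefficient equally
beta_lasso = soft_threshold(beta_init, lam)

# Adaptive LASSO: weights 1/|beta_init| shrink small (likely unimportant)
# coefficients harder while barely biasing the large one
weights = 1.0 / np.abs(beta_init)
beta_alasso = soft_threshold(beta_init, lam * weights)
```

Both rules zero out the noise coefficient, but the adaptive weights leave the strong coefficient at 3.875 instead of LASSO's 3.5, illustrating why ALASSO retains important variables "more protectively."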

20.
Variable selection is of increasing importance for addressing the difficulties of high dimensionality in many scientific areas. In this paper, we demonstrate a property of distance covariance, which is incorporated into a novel feature screening procedure together with distance correlation. The approach makes no distributional assumptions about the variables and does not require the specification of a regression model, and it is hence especially attractive for variable selection given an enormous number of candidate attributes and little information about the true model for the response. The method is applied to two genetic risk problems, where issues including the uncertainty of variable selection via cross-validation, a subgroup of hard-to-classify cases, and the application of a reject option are discussed. Copyright © 2015 John Wiley & Sons, Ltd.
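Distance correlation, the screening statistic named above, can be computed by double-centering pairwise distance matrices; a minimal sketch of the (biased, V-statistic) sample version for univariate samples:

```python
import numpy as np

def _centered_dist(x):
    """Doubly centered pairwise-distance matrix of a 1-d sample."""
    d = np.abs(np.subtract.outer(x, x))
    return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()

def distance_correlation(x, y):
    """Sample distance correlation (Szekely-Rizzo V-statistic version).
    Population value is zero iff X and Y are independent, which is what
    makes it model-free for feature screening."""
    a = _centered_dist(np.asarray(x, dtype=float))
    b = _centered_dist(np.asarray(y, dtype=float))
    dcov2 = (a * b).mean()
    dvar_x, dvar_y = (a * a).mean(), (b * b).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y))
```

Unlike Pearson correlation, it also picks up nonlinear dependence, so ranking candidate attributes by distance correlation needs no assumed regression form.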


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号