Similar Literature (20 results)
1.
Huang Y, Dagne G, Wu L. Statistics in Medicine 2011; 30(24): 2930-2946.
Normality (symmetry) of the model random errors is a routine assumption for mixed-effects models in many longitudinal studies, but it may be unrealistic and may obscure important features of subject variations. Covariates are usually introduced in the models to partially explain inter-subject variations, but some covariates, such as CD4 cell count, are often measured with substantial error. This paper formulates a general class of models in which the model errors have skew-normal distributions, for the joint behavior of longitudinal dynamic processes and a time-to-event process of interest. For estimating model parameters, we propose a Bayesian approach to jointly model three components (response, covariate, and time-to-event processes) linked through the random effects that characterize the underlying individual-specific longitudinal processes. We discuss in detail special cases of the model class, which jointly model HIV dynamic response in the presence of a CD4 covariate process with measurement errors and the time to decrease in the CD4/CD8 ratio, providing a tool to assess antiretroviral treatment and to monitor disease progression. We illustrate the proposed methods using data from a clinical trial of HIV treatment. The findings from this research suggest that joint models with a skew-normal distribution may provide more reliable and robust results if the data exhibit skewness; in particular, the results may be important for HIV/AIDS studies in providing quantitative guidance to better understand virologic responses to antiretroviral treatment.
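As a minimal sketch of the distributional point above (hypothetical simulated residuals and scipy.stats.skewnorm; the abstract's full Bayesian joint model is far richer), one can contrast normal and skew-normal error fits on skewed data:

```python
import numpy as np
from scipy.stats import norm, skewnorm

rng = np.random.default_rng(0)

# Hypothetical longitudinal residuals after removing fixed and random effects.
residuals = skewnorm.rvs(a=4.0, loc=0.0, scale=1.0, size=200, random_state=rng)

# Contrast the usual normality assumption with a skew-normal error model by
# comparing maximized log-likelihoods; in the paper these densities would sit
# inside an MCMC sampler rather than be fit by maximum likelihood.
ll_normal = norm.logpdf(residuals, loc=residuals.mean(),
                        scale=residuals.std()).sum()
a_hat, loc_hat, scale_hat = skewnorm.fit(residuals)
ll_skew = skewnorm.logpdf(residuals, a_hat, loc_hat, scale_hat).sum()

print(f"normal log-likelihood:      {ll_normal:.1f}")
print(f"skew-normal log-likelihood: {ll_skew:.1f}")  # higher when data are skewed
```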

2.
Generalized linear models with random effects are often used to explain the serial dependence of longitudinal categorical data. Marginalized random effects models (MREMs) permit likelihood-based estimation of marginal mean parameters and also explain the serial dependence of longitudinal data. In this paper, we extend the MREM to accommodate multivariate longitudinal binary data using a new covariance matrix with a Kronecker decomposition, which easily explains both the serial dependence and the time-specific response correlation. A maximum marginal likelihood estimation is proposed utilizing a quasi-Newton algorithm with quasi-Monte Carlo integration of the random effects. Our approach is applied to analyze metabolic syndrome data from the Korean Genomic Epidemiology Study for Korean adults. Copyright © 2009 John Wiley & Sons, Ltd.
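The Kronecker decomposition is easy to picture with numpy. The sketch below assumes hypothetical dimensions (4 visits, 3 binary responses per visit) and an AR(1) serial component; it shows only how the two small correlation matrices combine, not the full MREM likelihood:

```python
import numpy as np

def ar1(n, rho):
    """AR(1) correlation matrix for serial dependence across n time points."""
    idx = np.arange(n)
    return rho ** np.abs(np.subtract.outer(idx, idx))

serial = ar1(4, rho=0.6)                     # time-by-time correlation
response = np.array([[1.0, 0.3, 0.2],
                     [0.3, 1.0, 0.4],
                     [0.2, 0.4, 1.0]])       # response-by-response correlation

# Kronecker decomposition: the full 12x12 correlation matrix is the product
# of the serial and time-specific response components.
omega = np.kron(serial, response)
print(omega.shape)  # (12, 12)
```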

3.
Background: Joint modeling of longitudinal and time-to-event data is often advantageous over separate longitudinal or time-to-event analyses, as it can account for study dropout, error in longitudinally measured covariates, and correlation between longitudinal and time-to-event outcomes. The current literature on joint modeling focuses mainly on the analysis of single studies, with a lack of methods available for the meta-analysis of joint data from multiple studies. Methods: We investigate a variety of one-stage methods for the meta-analysis of joint longitudinal and time-to-event outcome data. These methods are applied to the INDANA dataset to investigate longitudinally measured systolic blood pressure together with each of time to death, time to myocardial infarction, and time to stroke. Results are compared to separate longitudinal or time-to-event meta-analyses. A simulation study is conducted to contrast separate versus joint analyses over a range of scenarios. Results: The performance of the examined one-stage joint meta-analytic models varied. Models that accounted for between-study heterogeneity performed better than models that ignored it. Of the examined methods to account for between-study heterogeneity, under the examined association structure, fixed-effect approaches appeared preferable, whereas methods involving a baseline hazard stratified by study were least time intensive. Conclusions: One-stage joint meta-analytic models that accounted for between-study heterogeneity using a mix of fixed effects or a stratified baseline hazard were reliable; however, models that included study-level random effects in the association structure were less reliable.
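To make the "baseline hazard stratified by study" idea concrete, here is a hedged sketch of the survival submodel alone, on simulated data, using the lifelines package (our choice for illustration; the paper does not prescribe software, and the full one-stage joint model also involves the longitudinal submodel):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n = 300
df = pd.DataFrame({
    "sbp": rng.normal(150, 15, n),               # systolic blood pressure
    "study": rng.choice(["A", "B", "C"], n),     # study membership in the IPD
})
# Exponential event times whose rate rises with SBP; censor at t = 5.
rate = 0.1 * np.exp(0.02 * (df["sbp"] - 150))
t = rng.exponential(1 / rate)
df["time"] = np.minimum(t, 5.0)
df["event"] = (t <= 5.0).astype(int)

# Stratifying by study gives each study its own baseline hazard while the
# covariate effect is pooled across studies.
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event", strata=["study"])
print(cph.summary[["coef", "se(coef)"]])         # coef approx 0.02
```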

4.
Observational cohort studies often feature longitudinal data subject to irregular observation. Moreover, the timings of observations may be associated with the underlying disease process and must thus be accounted for when analysing the data. This paper suggests that multiple outputation, which consists of repeatedly discarding excess observations, may be a helpful way of approaching the problem. Multiple outputation was designed for clustered data where observations within a cluster are exchangeable; an adaptation for longitudinal data subject to irregular observation is proposed. We show how multiple outputation can be used to expand the range of models that can be fitted to irregular longitudinal data. Copyright © 2015 John Wiley & Sons, Ltd.
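The mechanics of basic multiple outputation can be sketched in a few lines. The toy below uses hypothetical data and a trivial "analysis" (a mean); the paper's adaptation for irregular observation times and the between-outputation variance correction are not shown:

```python
import numpy as np
import pandas as pd

def multiple_outputation(df, n_outputations, fit, rng):
    """Repeatedly keep one randomly chosen observation per subject, apply the
    analysis to each thinned data set, and average the resulting estimates."""
    draws = []
    for _ in range(n_outputations):
        thinned = (df.groupby("id", group_keys=False)
                     .apply(lambda g: g.sample(1, random_state=rng)))
        draws.append(fit(thinned))
    draws = np.asarray(draws)
    return draws.mean(axis=0), draws

# Hypothetical irregular data: 1-5 visits per subject; a real analysis would
# fit a regression model in place of the mean.
rng = np.random.default_rng(1)
counts = rng.integers(1, 6, size=50)
df = pd.DataFrame({"id": np.repeat(np.arange(50), counts)})
df["y"] = rng.normal(size=len(df))

estimate, draws = multiple_outputation(df, 20, fit=lambda d: d["y"].mean(), rng=rng)
print(round(float(estimate), 3))
```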

5.
Immunotherapy (treatments that target a patient's immune system) has attracted considerable attention in cancer research. Its recent success has led to the generation of novel immunotherapeutic agents that need to be evaluated in clinical trials. Two unique features of immunotherapy are the immune response and the fact that some patients may achieve long-term durable response. In this article, we propose a two-arm Bayesian adaptive randomized phase II clinical trial design for immunotherapy that jointly models the longitudinal immune response and time-to-event efficacy (BILITE), with a fraction of patients assumed to be cured by the treatment. The longitudinal immune response is modeled using hierarchical nonlinear mixed-effects models with possibly different trajectory patterns for the cured and susceptible groups. Conditional on the immune response trajectory, the time-to-event efficacy data for patients in the susceptible group are modeled via a time-dependent Cox-type regression model. We quantify the desirability of the treatment using a utility function and propose a two-stage design to adaptively randomize patients to treatments and make treatment recommendations at the end of the trial. Simulation studies show that compared with a conventional design that ignores the immune response, BILITE yields superior operating characteristics in terms of the ability to identify promising agents and terminate the trial early for futility.
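The cure-fraction component can be illustrated with a simple mixture survival function (a generic sketch with hypothetical parameters, not BILITE's full hierarchical model):

```python
import numpy as np

def cure_mixture_survival(t, cure_prob, rate):
    """Population survival under a mixture cure model: a fraction cure_prob is
    never at risk; the susceptible fraction has exponential survival."""
    return cure_prob + (1 - cure_prob) * np.exp(-rate * t)

t = np.linspace(0, 60, 7)
print(cure_mixture_survival(t, cure_prob=0.3, rate=0.05).round(3))
# The survival curve plateaus at the cure fraction (0.3) as t grows.
```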

6.
Existing joint models for longitudinal and survival data are not applicable for longitudinal ordinal outcomes with possible non-ignorable missing values caused by multiple reasons. We propose a joint model for longitudinal ordinal measurements and competing risks failure time data, in which a partial proportional odds model for the longitudinal ordinal outcome is linked to the event times by latent random variables. At the survival endpoint, our model adopts the competing risks framework to model multiple failure types at the same time. The partial proportional odds model, as an extension of the popular proportional odds model for ordinal outcomes, is more flexible and at the same time provides a tool to test the proportional odds assumption. We use a likelihood approach and derive an EM algorithm to obtain the maximum likelihood estimates of the parameters. We further show that all the parameters at the survival endpoint are identifiable from the data. Our joint model enables one to make inference for both the longitudinal ordinal outcome and the failure times simultaneously. In addition, the inference at the longitudinal endpoint is adjusted for possible non-ignorable missing data caused by the failure times. We apply the method to the NINDS rt-PA stroke trial. Our study considers the modified Rankin Scale only. Other ordinal outcomes in the trial, such as the Barthel and Glasgow scales, can be treated in the same way. Copyright © 2009 John Wiley & Sons, Ltd.
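As an illustration of the partial proportional odds structure (generic notation and hypothetical parameter values, not the paper's fitted model): covariates in x carry a common effect across cutpoints, while covariates in z get cutpoint-specific effects, which is exactly the extra flexibility a test of the proportional odds assumption exploits.

```python
import numpy as np

def partial_po_probs(alpha, beta, gamma, x, z):
    """Category probabilities under a partial proportional odds model:
    logit P(Y <= j) = alpha[j] + x @ beta + z @ gamma[j].
    Covariates in x have a common (proportional) effect across cutpoints;
    covariates in z have cutpoint-specific effects gamma[j]."""
    eta = np.array([a + x @ beta + z @ g for a, g in zip(alpha, gamma)])
    cum = 1.0 / (1.0 + np.exp(-eta))                 # P(Y <= j) at each cutpoint
    return np.diff(np.concatenate([[0.0], cum, [1.0]]))

# Hypothetical 4-category outcome: 3 cutpoints, one covariate of each type.
alpha = np.array([-1.0, 0.0, 1.2])                   # increasing cutpoints
beta = np.array([0.5])                               # proportional effect
gamma = np.array([[0.2], [0.4], [0.9]])              # cutpoint-specific effects
probs = partial_po_probs(alpha, beta, gamma, x=np.array([1.0]), z=np.array([1.0]))
print(probs.round(3), probs.sum())                   # valid category probabilities
```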

7.
Causal inference with observational longitudinal data and time-varying exposures is complicated due to the potential for time-dependent confounding and unmeasured confounding. Most causal inference methods that handle time-dependent confounding rely on either the assumption of no unmeasured confounders or the availability of an unconfounded variable that is associated with the exposure (e.g., an instrumental variable). Furthermore, when data are incomplete, the validity of many methods often depends on the assumption of missing at random. We propose an approach that combines a parametric joint mixed-effects model for the study outcome and the exposure with g-computation to identify and estimate causal effects in the presence of time-dependent confounding and unmeasured confounding. G-computation can estimate participant-specific or population-average causal effects using parameters of the joint model. The joint model is a type of shared parameter model where the outcome and exposure-selection models share common random effect(s). We also extend the joint model to handle missing data and truncation by death when missingness is possibly not at random. We evaluate the performance of the proposed method using simulation studies and compare the method to both linear mixed- and fixed-effects models combined with g-computation as well as to targeted maximum likelihood estimation. We apply the method to an epidemiologic study of vitamin D and depressive symptoms in older adults and include code using SAS PROC NLMIXED software to enhance the accessibility of the method to applied researchers.
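The g-computation step can be sketched generically: given an assumed already-fitted shared-parameter model, draw the shared random effects and average the model-implied outcomes under a fixed exposure regime. All parameters below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical parameters of a fitted shared-parameter joint model: the
# outcome at the final visit is b0 + b_u * u_i + b_a * (cumulative exposure),
# where u_i is the random effect shared with the exposure-selection model.
b0, b_u, b_a = 1.0, 0.8, -0.5
sigma_u = 0.7

def g_formula_mean(regime, n_mc=200_000):
    """Monte Carlo g-computation: draw random effects, plug in the fixed
    exposure regime, and average the model-implied outcomes."""
    u = rng.normal(0.0, sigma_u, size=n_mc)
    y = b0 + b_u * u + b_a * sum(regime)
    return y.mean()

# Population-average causal effect of "always exposed" vs. "never exposed"
# over three visits; equals 3 * b_a = -1.5 under this simple outcome model.
print(round(g_formula_mean([1, 1, 1]) - g_formula_mean([0, 0, 0]), 2))
```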

8.
We propose a new nonparametric test to compare treatment effects using a composite endpoint comprising a longitudinal continuous measurement and multiple time-to-event outcomes. The composite endpoint incorporates the severity of each outcome as measured against the whole cohort over the entire study. This new proposed test is conceptually simple and computationally easy to implement. Unlike many currently available methods, our strategy is very flexible and can be applied to many types of clinical settings, including the situation where we have multiple time-to-event outcomes. We apply our method to reanalyze two clinical trials for scleroderma patients and also use simulation studies to show that the performance of our method is comparable with that of a popular method proposed by Finkelstein and Schoenfeld (Statist. Med. 1999; 18:1341-1354).
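For intuition, here is a sketch of the Finkelstein-Schoenfeld-style pairwise hierarchy cited at the end (hypothetical patients; the authors' new test differs in how severity is scored against the whole cohort):

```python
import numpy as np

def pairwise_score(p1, p2):
    """Compare two patients hierarchically: first on survival (longer is
    better, decidable only if the shorter follow-up ended in an observed
    event), then on the longitudinal measurement. Returns +1 if p1 fares
    better, -1 if worse, 0 if tied."""
    t1, d1, y1 = p1
    t2, d2, y2 = p2
    if t1 != t2:
        if t1 < t2 and d1:
            return -1                       # p1 had the event earlier
        if t2 < t1 and d2:
            return +1                       # p2 had the event earlier
    # Survival comparison indeterminate: fall back on the longitudinal outcome.
    return int(np.sign(y1 - y2))

def test_statistic(group_a, group_b):
    """Sum of pairwise scores of group A patients against group B patients."""
    return sum(pairwise_score(a, b) for a in group_a for b in group_b)

# Hypothetical patients: (follow-up time, event indicator, longitudinal score).
group_a = [(24.0, 0, 1.2), (10.0, 1, -0.3), (30.0, 0, 0.8)]
group_b = [(12.0, 1, 0.1), (24.0, 0, -0.5)]
print(test_statistic(group_a, group_b))     # permuting labels gives the null
```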

9.
Longitudinal cohort studies often collect both repeated measurements of longitudinal outcomes and times to clinical events whose occurrence precludes further longitudinal measurements. Although joint modeling of the clinical events and the longitudinal data can be used to provide valid statistical inference for target estimands in certain contexts, the application of joint models in medical literature is currently rather restricted because of the complexity of the joint models and the intensive computation involved. We propose a multiple imputation approach to jointly impute missing data of both the longitudinal and clinical event outcomes. With complete imputed datasets, analysts are then able to use simple and transparent statistical methods and standard statistical software to perform various analyses without dealing with the complications of missing data and joint modeling. We show that the proposed multiple imputation approach is flexible and easy to implement in practice. Numerical results are also provided to demonstrate its performance. Copyright © 2015 John Wiley & Sons, Ltd.
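Once the joint imputation has produced the completed data sets, the downstream analysis is standard. A minimal sketch of Rubin's combining rules, with hypothetical per-imputation estimates and variances:

```python
import numpy as np

def rubins_rules(estimates, variances):
    """Combine per-imputation point estimates and variances (Rubin, 1987)."""
    estimates = np.asarray(estimates)
    variances = np.asarray(variances)
    m = len(estimates)
    qbar = estimates.mean()                      # pooled point estimate
    ubar = variances.mean()                      # within-imputation variance
    b = estimates.var(ddof=1)                    # between-imputation variance
    total = ubar + (1 + 1 / m) * b               # Rubin's total variance
    return qbar, np.sqrt(total)

# Hypothetical results from m = 5 completed data sets after jointly imputing
# the longitudinal and clinical event outcomes.
est = [0.42, 0.45, 0.40, 0.47, 0.43]
var = [0.010, 0.011, 0.009, 0.012, 0.010]
print(rubins_rules(est, var))
```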

10.
This paper is motivated by combining serial neurocognitive assessments and other clinical variables for monitoring the progression of Alzheimer's disease (AD). We propose a novel framework for the use of multiple longitudinal neurocognitive markers to predict the progression of AD. The conventional approach of jointly modeling longitudinal and survival data is not applicable when there is a large number of longitudinal outcomes. We introduce various approaches based on functional principal components for dimension reduction and feature extraction from multiple longitudinal outcomes. We use these features to extrapolate the health outcome trajectories and use scores on these features as predictors in a Cox proportional hazards model to conduct predictions over time. We propose a personalized dynamic prediction framework that can be updated as new observations are collected, to reflect the patient's latest prognosis so that intervention can be initiated in a timely manner. Simulation studies and application to the Alzheimer's Disease Neuroimaging Initiative dataset demonstrate the robustness of the method for the prediction of future health outcomes and risks of target events under various scenarios.
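A bare-bones version of the functional-principal-component step, assuming densely and regularly observed trajectories (real neurocognitive data are sparse and need smoothing-based FPCA), extracts scores that would then enter the Cox model as predictors:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical trajectories: 100 subjects observed at the same 10 visit times.
curves = rng.normal(size=(100, 10)).cumsum(axis=1)

centered = curves - curves.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
order = np.argsort(eigvals)[::-1]                    # largest variance first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Keep enough components to explain 95% of the variance; the per-subject
# scores would then serve as predictors in a Cox proportional hazards fit.
k = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), 0.95)) + 1
scores = centered @ eigvecs[:, :k]
print(k, scores.shape)
```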

11.
In longitudinal studies of patients with the human immunodeficiency virus (HIV), objectives of interest often include modeling of individual-level trajectories of HIV ribonucleic acid (RNA) as a function of time. Such models can be used to predict the effects of different treatment regimens or to classify subjects into subgroups with similar trajectories. Empirical evidence, however, suggests that individual trajectories often possess multiple points of rapid change, which may vary from subject to subject. Additionally, some individuals may drop out of the study, and the tendency to drop out may be related to the level of the biomarker. Modeling of individual viral RNA profiles is challenging in the presence of these changes, and currently available methods do not address all of the issues (multiple changes, informative dropout, clustering, and so on) in a single model. In this article, we propose a new joint model in which a multiple-changepoint model is used for the longitudinal viral RNA response and a proportional hazards model for the dropout process. Dirichlet process (DP) priors are used to model the distribution of the individual random effects and the error distribution. In addition to robustifying the model against possible misspecification, the DP leads to a natural clustering of subjects with similar trajectories, which can be of importance in itself. Sharing of information among subjects with similar trajectories also results in improved parameter estimation. A fully Bayesian approach for model fitting and prediction is implemented using MCMC procedures on the ACTG 398 clinical trial data. The proposed model is seen to give rise to improved estimates of individual trajectories when compared with a parametric approach.

12.
Due to its flexibility, the random-effects approach for the joint modelling of multivariate longitudinal profiles has received a lot of attention in recent publications. In this approach, different mixed models are joined by specifying a common distribution for their random effects. Parameter estimates of this common distribution can then be used to evaluate the relation between the different responses. Using bivariate longitudinal measurements on pure-tone hearing thresholds, it will be shown that such a random-effects approach can yield misleading results for evaluating this relationship.

13.
Mixed effects models have become very popular, especially for the analysis of longitudinal data. One challenge is how to build a good enough mixed effects model. In this paper, we suggest a systematic strategy for addressing this challenge and introduce easily implemented practical advice for building mixed effects models. A general discussion of the scientific strategies motivates the recommended five-step procedure for model fitting. The need to model both the mean structure (the fixed effects) and the covariance structure (the random effects and residual error) creates the fundamental flexibility and complexity, and some very practical recommendations help to conquer that complexity. Centering, scaling, and full-rank coding of all the predictor variables radically improve the chances of convergence, computing speed, and numerical accuracy. Applying computational and assumption diagnostics from univariate linear models to mixed model data greatly helps to detect and solve the related computational problems. The approach helps to fit more general covariance models, a crucial step in selecting a credible covariance model needed for defensible inference. A detailed demonstration of the recommended strategy is based on data from a published study of a randomized trial of a multicomponent intervention to prevent young adolescents' alcohol use. The discussion highlights a need for additional covariance and inference tools for mixed models, as well as the need to improve how scientists and statisticians teach and review the process of finding a good enough mixed model. Copyright © 2009 John Wiley & Sons, Ltd.
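The centering, scaling, and full-rank coding recommendation is easy to apply in practice. A minimal pandas sketch with hypothetical predictors (the subsequent mixed model fit is not shown):

```python
import pandas as pd

# Hypothetical predictors destined for a mixed model fit.
df = pd.DataFrame({
    "age": [11, 12, 13, 12, 14],
    "baseline_score": [35.0, 42.0, 28.0, 51.0, 44.0],
    "arm": ["control", "treat", "treat", "control", "treat"],
})

# Center and scale continuous predictors to improve convergence and
# numerical accuracy, per the paper's recommendation.
continuous = ["age", "baseline_score"]
df[continuous] = (df[continuous] - df[continuous].mean()) / df[continuous].std()

# Full-rank (reference-cell) coding: k factor levels -> k-1 indicator columns.
df = pd.get_dummies(df, columns=["arm"], drop_first=True)
print(df)
```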

14.
Several methods for the estimation and comparison of rates of change in longitudinal studies with staggered entry and informative drop-outs have recently been proposed. For multivariate normal linear models, REML estimation is used. There are various approaches to maximizing the corresponding log-likelihood; in this paper we use a restricted iterative generalized least squares method (RIGLS) combined with a nested EM algorithm. An important statistical problem in such approaches is the estimation of the standard errors adjusted for the missing data (the observed-data information matrix). Louis has provided a general technique for computing the observed-data information in terms of complete-data quantities within the EM framework. The multiple imputation (MI) method for obtaining variances can be regarded as an alternative to this. The aim of this paper is to develop, apply and compare the Louis method and a modified MI method in the setting of longitudinal studies where the source of missing data is either death or disease progression (informative) or the end of the study (assumed non-informative). The longitudinal data are modelled simultaneously with the missingness process. The methods are illustrated by modelling CD4 count data from an HIV-1 clinical trial and evaluated through simulation studies. Both methods, Louis and MI, are used with Monte Carlo simulations of the missing data from the appropriate conditional distributions, the former with 100 simulations, the latter with 5 and 10. It is seen that naive SEs based on the completed-data likelihood can be seriously biased. This bias was largely corrected by the Louis and modified MI methods, which gave broadly similar estimates. Given the relative simplicity of the modified MI method, it may be preferable.
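For reference, the identity underlying the Louis technique can be stated compactly (a standard textbook form in our notation, not a quotation from the paper):

```latex
% Louis (1982): at the MLE \hat{\theta}, the observed-data information equals
% the conditional expectation of the complete-data information minus the
% conditional variance of the complete-data score, both given the observed
% data Y, where X denotes the complete data.
\[
  I_Y(\hat{\theta})
  = \mathbb{E}\!\left[ I_X(\hat{\theta}) \mid Y \right]
  - \operatorname{Var}\!\left[ S_X(\hat{\theta}) \mid Y \right]
\]
```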

15.
Joint modelling of longitudinal and survival data has received much attention in recent years, but most work has concentrated on a single longitudinal variable. This paper considers joint modelling in the presence of multiple longitudinal variables. We explore direct association of the time-to-event and the multiple longitudinal processes through a frailty model and use a mixed effects model for each of the longitudinal variables. Correlations among the longitudinal variables are induced through correlated random effects. We allow effects of categorical and continuous covariates on both longitudinal and time-to-event responses and explore interactions between the longitudinal variables and other covariates on the time-to-event. Estimates of the parameters are obtained by maximizing the joint likelihood for the longitudinal variable processes and the event process. We use a one-step-late EM algorithm to handle the direct dependence of the event process on the modelled longitudinal variables along with the presence of other fixed covariates in both processes. We argue that such a joint analysis with multiple longitudinal variables is advantageous to one with only a single longitudinal variable in revealing the interplay among multiple longitudinal variables and the time-to-event.

16.
In survival studies, information lost through censoring can be partially recaptured through repeated measures data which are predictive of survival. In addition, such data may be useful in removing bias in survival estimates due to censoring which depends upon the repeated measures. Here we investigate joint models for survival T and repeated measurements Y, given a vector of covariates Z. Mixture models factorized as f(T|Z)f(Y|T,Z) are well suited for assessing covariate effects on survival time. Our objective is efficiency gain; we use non-parametric models for Y in order to avoid introducing bias through misspecification of the distribution of Y. We model f(T|Z) as a piecewise exponential distribution with a proportional hazards covariate effect, and give f(Y|T,Z) a multinomial model. The joint likelihood for the survival and longitudinal data is maximized using the EM algorithm. The estimate of the covariate effect is compared to the estimate based on the standard proportional hazards model and to an alternative joint-model-based estimate. We demonstrate modest gains in efficiency when using the piecewise exponential joint model. In a simulation, the estimated efficiency gain over the standard proportional hazards model is 6.4 per cent; in clinical trial data, it is 10.2 per cent.
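The piecewise exponential component has a particularly transparent log-likelihood. A minimal numpy sketch with hypothetical interval counts (the paper's full model adds the proportional hazards covariate effect and the multinomial component for Y):

```python
import numpy as np

def piecewise_exp_loglik(lam, events, exposure):
    """Log-likelihood of a piecewise exponential model: within interval j the
    hazard is constant at lam[j]; events[j] counts observed events in the
    interval and exposure[j] is the total person-time at risk there."""
    lam = np.asarray(lam)
    return np.sum(events * np.log(lam) - lam * exposure)

# Hypothetical three-interval follow-up. The closed-form MLE per interval is
# events / exposure, which maximizes the log-likelihood above.
events = np.array([5.0, 3.0, 2.0])
exposure = np.array([120.0, 80.0, 40.0])
lam_hat = events / exposure
print(lam_hat, piecewise_exp_loglik(lam_hat, events, exposure))
```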

17.
Correlation is inherent in longitudinal studies due to the repeated measurements on subjects, as well as due to time-dependent covariates in the study. In the National Longitudinal Study of Adolescent to Adult Health (Add Health), data were repeatedly collected on children in grades 7-12 across four waves. Thus, observations obtained on the same adolescent were correlated, while predictors were correlated with current and future outcomes such as obesity status, among other health issues. Previous methods, such as the generalized method of moments (GMM) approach, have been proposed to estimate regression coefficients for time-dependent covariates. However, these approaches combined all valid moment conditions to produce an averaged parameter estimate for each covariate and thus assumed that the effect of each covariate on the response was constant across time. This assumption is not necessarily optimal in applications such as Add Health or other health-related data. Thus, we depart from this assumption and instead use the Partitioned GMM approach to estimate multiple coefficients for the data based on different time periods. These extra regression coefficients are obtained by partitioning the moment conditions pertaining to each respective relationship. This approach offers deeper insight into the effect of each covariate on the response. We conduct simulation studies, as well as analyses of obesity in Add Health, rehospitalization in Medicare data, and depression scores in a clinical study. The Partitioned GMM methods exhibit benefits over previously proposed models, with improved insight into the nonconstant relationships realized when analyzing longitudinal data.
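The partitioning idea can be caricatured on an identity-link toy problem: instead of pooling all valid moment conditions into one averaged coefficient, conditions are grouped by lag so that current and lagged effects are estimated separately. The sketch below is our illustration under those simplifications, not the authors' estimator:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Hypothetical panel: y_t depends on the current covariate x_t (coefficient
# 0.8) and on the lagged covariate x_{t-1} (coefficient -0.3).
n, T = 500, 4
x = rng.normal(size=(n, T))
xlag = np.concatenate([np.zeros((n, 1)), x[:, :-1]], axis=1)
y = 1.0 + 0.8 * x - 0.3 * xlag + rng.normal(scale=0.5, size=(n, T))

def moments(theta):
    """Lag-partitioned moment conditions with an identity link: one pooled
    condition per lag, so current and lagged effects get separate
    coefficients (the Partitioned-GMM idea, in toy form)."""
    c, b0, b1 = theta
    resid = y - (c + b0 * x + b1 * xlag)
    return np.array([
        resid.mean(),                        # intercept condition
        (x * resid).mean(),                  # lag-0 condition
        (x[:, :-1] * resid[:, 1:]).mean(),   # lag-1 condition
    ])

# Minimize the GMM quadratic form g'Wg with an identity weight matrix.
fit = minimize(lambda th: moments(th) @ moments(th), x0=np.zeros(3),
               method="Nelder-Mead", options={"xatol": 1e-8, "fatol": 1e-12})
print(np.round(fit.x, 2))   # approx [1.0, 0.8, -0.3]
```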

18.
Yang Y, Kang J, Mao K, Zhang J. Statistics in Medicine 2007; 26(20): 3782-3800.
In this article we develop regression models that are flexible in two respects: they evaluate the influence of covariates on mixed Poisson and continuous responses, and they evaluate how the correlation between the Poisson and continuous responses changes over time. We propose a scheme for fitting such models when heterogeneous variances and correlations that change over time are present. Our general approach is first to build a joint marginal model and then to check, via a likelihood ratio test, whether the variance and correlation change over time. If they do, we apply a suitable data transformation so that the influence of the covariates on the mixed responses can be evaluated properly. The proposed methods are applied to the Interstitial Cystitis Data Base (ICDB) cohort study, and we find that the positive correlations change significantly over time, which suggests that heterogeneous variances should not be ignored in modelling and inference.
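The likelihood ratio check for time-varying correlation is standard. A sketch with hypothetical maximized log-likelihoods from the constant-correlation (null) and time-varying-correlation fits:

```python
from scipy.stats import chi2

# Hypothetical nested fits: ll0 from a model with constant Poisson-continuous
# correlation, ll1 from a model letting the correlation vary over the waves,
# with 3 extra correlation parameters in the larger model.
ll0, ll1, extra_params = -1523.4, -1510.9, 3

lr = 2 * (ll1 - ll0)                        # likelihood ratio statistic
p_value = chi2.sf(lr, df=extra_params)      # chi-squared reference distribution
print(round(lr, 1), round(p_value, 5))
```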

19.
In many longitudinal studies, the outcomes recorded on each subject include both a sequence of repeated measurements at pre-specified times and the time at which an event of particular interest occurs: for example, death, recurrence of symptoms or drop out from the study. The event time for each subject may be recorded exactly, interval censored or right censored. The term joint modelling refers to the statistical analysis of the resulting data while taking account of any association between the repeated measurement and time-to-event outcomes. In this paper, we first discuss different approaches to joint modelling and argue that the analysis strategy should depend on the scientific focus of the study. We then describe in detail a particularly simple, fully parametric approach. Finally, we use this approach to re-analyse data from a clinical trial of drug therapies for schizophrenic patients, in which the event time is an interval-censored or right-censored time to withdrawal from the study due to adverse side effects.

20.
We extend the marginalized transition model of Heagerty to accommodate non-ignorable monotone drop-out. Using a selection model, weakly identified drop-out parameters are held constant and their effects are evaluated through sensitivity analysis. For data missing at random (MAR), the efficiency of inverse probability of censoring weighted generalized estimating equations (IPCW-GEE) is as low as 40 per cent compared to a likelihood-based marginalized transition model (MTM) with comparable modelling burden. MTM and IPCW-GEE regression parameters both display misspecification bias under MAR and non-ignorable missing data, and both reduce bias noticeably when model fit is improved.
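For intuition about IPCW under monotone drop-out: the weights are cumulative inverse products of each subject's fitted probabilities of remaining under observation. A sketch with hypothetical fitted probabilities (a real analysis would obtain them from a drop-out model such as logistic regression):

```python
import pandas as pd

# Hypothetical fitted probabilities of remaining in the study at each wave,
# given the past, for three subjects under monotone drop-out.
p_remain = pd.DataFrame(
    [[0.95, 0.90, 0.85],
     [0.80, 0.75, 0.70],
     [0.99, 0.98, 0.97]],
    columns=["wave1", "wave2", "wave3"],
)

# IPCW weights: inverse of the cumulative probability of still being observed.
weights = 1.0 / p_remain.cumprod(axis=1)
print(weights.round(2))   # weights grow as cumulative drop-out risk accumulates
```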

