Similar Literature
20 similar records found (search time: 15 ms)
1.
It is common practice to analyze complex longitudinal data using nonlinear mixed-effects (NLME) models under a normality assumption. NLME models with normal distributions provide the most popular framework for modeling continuous longitudinal outcomes, assuming that individuals come from a homogeneous population and relying on random effects to accommodate inter-individual variation. However, two issues may stand out: (i) the normality assumption for model errors may cause a lack of robustness and subsequently lead to invalid inference and unreasonable estimates, particularly if the data exhibit skewness, and (ii) a homogeneous population assumption may be unrealistic, obscuring important features of between-subject and within-subject variations, which may result in unreliable modeling results. There have been relatively few studies concerning longitudinal data with both heterogeneity and skewness. In the last two decades, skew distributions have proven beneficial for dealing with asymmetric data in various applications. In this article, our objective is to address the simultaneous impact of both features arising in longitudinal data by developing a flexible finite mixture of NLME models with skew distributions under a Bayesian framework that allows estimation of both model parameters and class membership probabilities. Simulation studies are conducted to assess the performance of the proposed models and methods, and a real example from an AIDS clinical trial illustrates the methodology by modeling viral dynamics to compare candidate models with different distribution specifications; the analysis results are reported. Copyright © 2014 John Wiley & Sons, Ltd.
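The robustness issue in (i) can be illustrated with a small sketch: fitting both a normal and a skew-normal error distribution to skewed residuals and comparing log-likelihoods. All names and parameter values below are illustrative, and this marginal fit is a toy stand-in for the article's full Bayesian mixture-of-NLME model.

```python
# Sketch: why a skew-normal error model matters when residuals are skewed.
# The data and the value a=5.0 are made up for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
resid = stats.skewnorm.rvs(a=5.0, size=2000, random_state=rng)  # right-skewed "residuals"

# Fit both candidate error distributions by maximum likelihood.
sn_params = stats.skewnorm.fit(resid)
n_params = stats.norm.fit(resid)

ll_skew = np.sum(stats.skewnorm.logpdf(resid, *sn_params))
ll_norm = np.sum(stats.norm.logpdf(resid, *n_params))
# On skewed data, the skew-normal attains the higher log-likelihood; on
# symmetric data the two fits coincide, since normality is the a=0 special case.
```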

2.
This article explores Bayesian joint models for a quantile of a longitudinal response, a mismeasured covariate, and an event time outcome, with an attempt to (i) characterize the entire conditional distribution of the response variable through quantile regression, which may be more robust to outliers and misspecification of the error distribution; (ii) adjust for measurement error, evaluate non-ignorable missing observations, and accommodate departures from normality in the covariate; and (iii) overcome the difficulty of confidently specifying a time-to-event model. When statistical inference is carried out for a longitudinal data set with non-central location, non-linearity, non-normality, measurement error, and missing values, together with an interval-censored event time, it is important to account for these data features simultaneously in order to obtain more reliable and robust inferential results. Toward this end, we develop a Bayesian joint modeling approach to simultaneously estimate all parameters in three models: a quantile-regression-based nonlinear mixed-effects model for the response using the asymmetric Laplace distribution, a linear mixed-effects model with a skew-t distribution for the mismeasured covariate in the presence of informative missingness, and an accelerated failure time model with an unspecified nonparametric distribution for the event time. We apply the proposed modeling approach to an AIDS clinical data set and conduct simulation studies to assess the performance of the proposed joint models and method. Copyright © 2016 John Wiley & Sons, Ltd.
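The asymmetric Laplace distribution enters quantile regression through a standard equivalence: maximizing an asymmetric Laplace likelihood at level tau is the same as minimizing the check (pinball) loss. A minimal sketch of that connection, with made-up data and no mixed effects:

```python
# Sketch: estimating a tau-quantile by minimizing the check loss, the
# optimization problem implied by an asymmetric Laplace likelihood.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
y = rng.exponential(scale=2.0, size=5000)  # skewed outcome, illustrative
tau = 0.25

def check_loss(q):
    u = y - q
    return np.sum(u * (tau - (u < 0)))  # rho_tau(u) = u * (tau - I(u < 0))

fit = minimize_scalar(check_loss, bounds=(0.0, 10.0), method="bounded")
q_hat = fit.x
q_emp = np.quantile(y, tau)
# q_hat recovers the empirical tau-quantile up to optimizer tolerance.
```

In the article's setting the scalar q is replaced by a nonlinear mixed-effects predictor, but the same loss/likelihood correspondence drives the inference.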

3.
Functional data are increasingly collected in public health and medical studies to better understand many complex diseases. Besides the functional data, other clinical measures are often collected repeatedly. Investigating the association between these longitudinal data and time to a survival event is of great interest to these studies. In this article, we develop a functional joint model (FJM) to account for functional predictors in both the longitudinal and survival submodels of the joint modeling framework. The parameters of the FJM are estimated in a maximum likelihood framework via an expectation-maximization algorithm. The proposed FJM provides a flexible framework to incorporate many features of both joint modeling of longitudinal and survival data and functional data analysis. The FJM is evaluated by a simulation study and is applied to the Alzheimer's Disease Neuroimaging Initiative study, a motivating clinical study testing whether serial brain imaging, clinical, and neuropsychological assessments can be combined to measure the progression of Alzheimer's disease. Copyright © 2017 John Wiley & Sons, Ltd.

4.
Tao Lu, Statistics in Medicine, 2017, 36(16): 2614-2629
In AIDS studies, heterogeneous between-subject and within-subject variations are often observed in longitudinal endpoints. To accommodate heteroscedasticity in longitudinal data, statistical methods have been developed to model the mean and variance jointly. Most of these methods assume (conditional) normal distributions for the random errors, which is not realistic in practice. In this article, we propose a Bayesian mixed-effects location scale model with a skew-t distribution and mismeasured covariates for heterogeneous longitudinal data with skewness. The proposed model captures between-subject and within-subject (WS) heterogeneity by modeling the between-subject and WS variations with covariates, as well as a subject-level random effect in the WS variance. Further, the proposed model takes covariate measurement errors into account, and the commonly assumed normal distribution for model errors is replaced by a skew-t distribution to account for skewness. Parameter estimation is carried out in a Bayesian framework. The proposed method is illustrated with a Multicenter AIDS Cohort Study. Simulation studies are performed to assess the performance of the proposed method. Copyright © 2017 John Wiley & Sons, Ltd.

5.
In this article, we show how Tobit models can address the problem of identifying characteristics of subjects with left-censored outcomes in the context of developing a method for jointly analyzing time-to-event and longitudinal data. Methods exist for handling these types of data separately, but they may not be appropriate when time to event depends on the longitudinal outcome and a substantial portion of values are reported to be below the limits of detection. An alternative approach is to develop a joint model for the time-to-event outcome and a two-part longitudinal outcome, linking them through random effects. This proposed approach is implemented to assess the association between the risk of decline in the CD4/CD8 ratio and rates of change in viral load, and to discriminate patients who are likely to progress to AIDS from those who are not. We develop a fully Bayesian approach for fitting joint two-part Tobit models and illustrate the proposed methods on simulated and real data from an AIDS clinical study.
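The Tobit idea for values below a limit of detection can be sketched in a few lines: observed values contribute a density term to the likelihood, censored values contribute the probability of falling below the limit. The simulated data, parameter values, and maximum-likelihood (rather than Bayesian) fit below are purely illustrative.

```python
# Sketch: a Tobit log-likelihood for outcomes left-censored at a detection limit.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(1)
mu_true, sigma_true, lod = 1.0, 1.5, 0.0   # hypothetical truth and limit
y_latent = rng.normal(mu_true, sigma_true, size=3000)
censored = y_latent < lod                   # flagged as "below detection"
y_obs = np.where(censored, lod, y_latent)

def neg_loglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    ll_obs = stats.norm.logpdf(y_obs[~censored], mu, sigma)   # density part
    ll_cens = stats.norm.logcdf((lod - mu) / sigma)           # P(Y < lod)
    return -(ll_obs.sum() + censored.sum() * ll_cens)

fit = minimize(neg_loglik, x0=[0.0, 0.0])
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
# mu_hat, sigma_hat recover the latent mean and SD despite ~25% censoring.
```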

6.

Background

Joint modelling of longitudinal and time‐to‐event data is often preferred over separate longitudinal or time‐to‐event analyses as it can account for study dropout, error in longitudinally measured covariates, and correlation between longitudinal and time‐to‐event outcomes. The joint modelling literature focuses mainly on the analysis of single studies with no methods currently available for the meta‐analysis of joint model estimates from multiple studies.

Methods

We propose a 2-stage method for meta-analysis of joint model estimates. This method is applied to the INDANA dataset to combine joint model estimates of systolic blood pressure with time to death, time to myocardial infarction, and time to stroke. Results are compared to meta-analyses of separate longitudinal or time-to-event models. A simulation study is conducted to contrast separate versus joint analyses over a range of scenarios.

Results

Using the real dataset, similar results were obtained from the separate and joint analyses. However, the simulation study indicated a benefit of using joint rather than separate methods in a meta-analytic setting where association exists between the longitudinal and time-to-event outcomes.

Conclusions

Where evidence of association between longitudinal and time-to-event outcomes exists, results from joint models, rather than standalone analyses, should be pooled in 2-stage meta-analyses.
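The second stage of a 2-stage approach like this typically pools study-level estimates by inverse-variance weighting. A minimal fixed-effect sketch, with entirely made-up numbers standing in for per-study joint-model estimates (e.g., log hazard ratios) and their standard errors:

```python
# Sketch: fixed-effect inverse-variance pooling of study-level estimates
# (stage 2 of a 2-stage meta-analysis). All numbers are hypothetical.
import numpy as np

beta = np.array([0.42, 0.35, 0.51, 0.40])   # per-study estimates (stage 1)
se = np.array([0.10, 0.08, 0.15, 0.12])     # their standard errors

w = 1.0 / se**2                              # inverse-variance weights
beta_pooled = np.sum(w * beta) / np.sum(w)   # pooled estimate
se_pooled = np.sqrt(1.0 / np.sum(w))         # its standard error
ci = (beta_pooled - 1.96 * se_pooled, beta_pooled + 1.96 * se_pooled)
```

A random-effects variant would add a between-study variance component to each weight; the fixed-effect version shown is the simplest case.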

7.
Huang Y, Dagne G, Wu L, Statistics in Medicine, 2011, 30(24): 2930-2946
Normality (symmetry) of the model random errors is a routine assumption for mixed-effects models in many longitudinal studies, but it may be unrealistic, obscuring important features of subject variation. Covariates are usually introduced into the models to partially explain inter-subject variation, but some covariates, such as CD4 cell count, are often measured with substantial error. This paper formulates a general class of models with skew-normal distributions for the model errors to capture the joint behavior of longitudinal dynamic processes and a time-to-event process of interest. For estimating the model parameters, we propose a Bayesian approach that jointly models three components (response, covariate, and time-to-event processes) linked through random effects that characterize the underlying individual-specific longitudinal processes. We discuss in detail special cases of the model class, which are offered to jointly model HIV dynamic response in the presence of a CD4 covariate process with measurement errors and time to decrease in the CD4/CD8 ratio, providing a tool to assess antiretroviral treatment and monitor disease progression. We illustrate the proposed methods using data from a clinical trial study of HIV treatment. The findings from this research suggest that joint models with a skew-normal distribution may provide more reliable and robust results if the data exhibit skewness, and the results may be particularly important for HIV/AIDS studies in providing quantitative guidance to better understand virologic responses to antiretroviral treatment.

8.
This paper presents a new Bayesian methodology for identifying a transition period for the development of resistance to an antiretroviral drug or therapy in HIV/AIDS studies and other related fields. Estimation of such a transition period requires longitudinal data in which growth trajectories of a response variable tend to exhibit a gradual change from a declining trend to an increasing trend, rather than an abrupt change. We assess this clinically important feature of longitudinal HIV/AIDS data using the bent-cable framework within a growth mixture Tobit model. To account for heterogeneity of drug resistance among subjects, the parameters of the bent-cable growth mixture Tobit model are also allowed to differ by subgroups (subpopulations) of patients classified into latent classes on the basis of trajectories of observed viral load data with skewness and left-censoring. The proposed methods are illustrated using real data from an AIDS clinical study. Copyright © 2016 John Wiley & Sons, Ltd.
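One common parameterization of the bent-cable mean function joins two linear phases with a quadratic bend of half-width gamma around the transition time tau, which is what produces the gradual (rather than abrupt) change in trend. A sketch with illustrative parameter values, not the article's fitted ones:

```python
# Sketch: the bent-cable mean function. Slope is b1 before the bend and
# b1 + b2 after it, with a smooth quadratic transition on [tau-gamma, tau+gamma].
import numpy as np

def bent_cable(t, b0, b1, b2, tau, gamma):
    # q(t) accumulates the slope change gradually around t = tau
    q = np.where(
        t < tau - gamma, 0.0,
        np.where(t > tau + gamma,
                 t - tau,
                 (t - tau + gamma) ** 2 / (4.0 * gamma)))
    return b0 + b1 * t + b2 * q

t = np.linspace(0, 10, 201)
# Decline (slope -0.8) transitioning to rebound (slope 0.5) around tau = 4.
y = bent_cable(t, b0=5.0, b1=-0.8, b2=1.3, tau=4.0, gamma=1.0)
```

The transition period reported by such an analysis corresponds to the interval [tau - gamma, tau + gamma].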

9.
Instrumental variable (IV) analysis has been widely used in economics, epidemiology, and other fields to estimate the causal effects of covariates on outcomes, in the presence of unobserved confounders and/or measurement errors in covariates. However, IV methods for time-to-event outcome with censored data remain underdeveloped. This paper proposes a Bayesian approach for IV analysis with censored time-to-event outcome by using a two-stage linear model. A Markov chain Monte Carlo sampling method is developed for parameter estimation for both normal and non-normal linear models with elliptically contoured error distributions. The performance of our method is examined by simulation studies. Our method largely reduces bias and greatly improves coverage probability of the estimated causal effect, compared with the method that ignores the unobserved confounders and measurement errors. We illustrate our method on the Women's Health Initiative Observational Study and the Atherosclerosis Risk in Communities Study. Copyright © 2014 John Wiley & Sons, Ltd.
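The two-stage linear idea underlying this approach can be seen in its classical frequentist form, two-stage least squares: stage 1 regresses the exposure on the instrument; stage 2 regresses the outcome on the fitted exposure. The data, coefficients, and the OLS comparison below are simulated for illustration only; the article's version is Bayesian and handles censored outcomes.

```python
# Sketch: two-stage least squares removing confounding bias.
import numpy as np

rng = np.random.default_rng(7)
n = 20000
u = rng.normal(size=n)                 # unobserved confounder
z = rng.normal(size=n)                 # instrument: affects x, not y directly
x = 0.8 * z + u + rng.normal(size=n)   # exposure, confounded by u
y = 1.5 * x + u + rng.normal(size=n)   # outcome; true causal effect = 1.5

# Naive OLS is biased upward because u drives both x and y.
beta_ols = np.cov(x, y)[0, 1] / np.var(x)

# Stage 1: project x on (1, z); Stage 2: regress y on the projection.
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
beta_iv = np.cov(x_hat, y)[0, 1] / np.var(x_hat)
# beta_iv is close to 1.5, while beta_ols overstates the effect.
```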

10.
Causal inference with observational longitudinal data and time-varying exposures is complicated due to the potential for time-dependent confounding and unmeasured confounding. Most causal inference methods that handle time-dependent confounding rely on either the assumption of no unmeasured confounders or the availability of an unconfounded variable that is associated with the exposure (eg, an instrumental variable). Furthermore, when data are incomplete, validity of many methods often depends on the assumption of missing at random. We propose an approach that combines a parametric joint mixed-effects model for the study outcome and the exposure with g-computation to identify and estimate causal effects in the presence of time-dependent confounding and unmeasured confounding. G-computation can estimate participant-specific or population-average causal effects using parameters of the joint model. The joint model is a type of shared parameter model where the outcome and exposure-selection models share common random effect(s). We also extend the joint model to handle missing data and truncation by death when missingness is possibly not at random. We evaluate the performance of the proposed method using simulation studies and compare the method to both linear mixed- and fixed-effects models combined with g-computation as well as to targeted maximum likelihood estimation. We apply the method to an epidemiologic study of vitamin D and depressive symptoms in older adults and include code using SAS PROC NLMIXED software to enhance the accessibility of the method to applied researchers.
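The g-computation step itself has a simple core: fit an outcome model, then average its predictions over the observed confounder distribution with the exposure set to each level. A single-time-point toy sketch (the article's setting is longitudinal with a shared-parameter joint model; everything below is simulated and illustrative):

```python
# Sketch: g-computation (standardization) at a single time point.
import numpy as np

rng = np.random.default_rng(3)
n = 50000
c = rng.normal(size=n)                                     # confounder
a = (rng.random(n) < 1 / (1 + np.exp(-c))).astype(float)   # exposure depends on c
y = 2.0 * a + 1.0 * c + rng.normal(size=n)                 # true causal effect = 2.0

# Outcome model: OLS of y on (1, a, c).
X = np.column_stack([np.ones(n), a, c])
coef = np.linalg.lstsq(X, y, rcond=None)[0]

# G-computation: average predictions with exposure set to 1 and to 0.
X1 = np.column_stack([np.ones(n), np.ones(n), c])   # everyone exposed
X0 = np.column_stack([np.ones(n), np.zeros(n), c])  # no one exposed
ate = np.mean(X1 @ coef) - np.mean(X0 @ coef)       # population-average effect

naive = y[a == 1].mean() - y[a == 0].mean()  # confounded contrast, biased upward
```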

11.
Joint models for longitudinal and time-to-event data are particularly relevant to many clinical studies where longitudinal biomarkers could be highly associated with a time-to-event outcome. A cutting-edge research direction in this area is dynamic predictions of patient prognosis (e.g., survival probabilities) given all available biomarker information, recently boosted by the stratified/personalized medicine initiative. As these dynamic predictions are individualized, flexible models are desirable in order to appropriately characterize each individual longitudinal trajectory. In this paper, we propose a new joint model using individual-level penalized splines (P-splines) to flexibly characterize the coevolution of the longitudinal and time-to-event processes. An important feature of our approach is that dynamic predictions of the survival probabilities are straightforward as the posterior distribution of the random P-spline coefficients given the observed data is a multivariate skew-normal distribution. The proposed methods are illustrated with data from the HIV Epidemiology Research Study. Our simulation results demonstrate that our model has better dynamic prediction performance than other existing approaches. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
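The two ingredients of a P-spline are a B-spline basis over equally spaced knots and a discrete difference penalty on the coefficients. A minimal sketch of both (illustrative sizes only; `BSpline.design_matrix` assumes SciPy >= 1.8):

```python
# Sketch: a cubic B-spline basis with a second-difference P-spline penalty.
import numpy as np
from scipy.interpolate import BSpline

t = np.linspace(0, 1, 100)
n_basis, degree = 10, 3
# Clamped knot vector: boundary knots repeated degree extra times.
inner = np.linspace(0, 1, n_basis - degree + 1)
knots = np.concatenate([np.repeat(0.0, degree), inner, np.repeat(1.0, degree)])
B = BSpline.design_matrix(t, knots, degree).toarray()   # (100, 10) basis matrix

D = np.diff(np.eye(n_basis), n=2, axis=0)   # second-difference operator (8, 10)
P = D.T @ D                                  # penalty matrix on the coefficients
# Penalized fits shrink toward functions with zero second differences, i.e.
# linear trends are left unpenalized.
```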

12.
Joint analysis of longitudinal and survival data has received increasing attention in recent years, especially for analyzing cancer and AIDS data. When both repeated measurements (longitudinal) and time-to-event (survival) outcomes are observed for an individual, a joint model is more appropriate because it accounts for the dependence between the two types of responses, which are often analyzed separately. We propose a Bayesian hierarchical model for jointly modeling longitudinal and survival data with functional time effects and spatial frailty effects, respectively. That is, the proposed model deals with non-linear longitudinal effects and spatial survival effects, accounting for the unobserved heterogeneity among individuals living in the same region. This joint approach is applied to a cohort study of patients with HIV/AIDS in Brazil during the years 2002-2006. Our Bayesian joint model presents considerable improvements in the estimation of survival times of the Brazilian HIV/AIDS patients compared with those obtained through a separate survival model, and shows that the spatial risk of death is the same across the different Brazilian states. Copyright © 2016 John Wiley & Sons, Ltd.

13.
The number needed to treat is a tool often used in clinical settings to illustrate the effect of a treatment. It has been widely adopted in the communication of risks to both clinicians and non-clinicians, such as patients, who are better able to understand this measure than absolute risk or rate reductions. The concept was introduced by Laupacis, Sackett, and Roberts in 1988 for binary data, and extended to time-to-event data by Altman and Andersen in 1999. However, up to the present, there is no definition of the number needed to treat for time-to-event data with competing risks. This paper introduces such a definition using the cumulative incidence function and suggests non-parametric and semi-parametric inferential methods for right-censored time-to-event data in the presence of competing risks. The procedures are illustrated using the data from a breast cancer clinical trial. Copyright © 2013 John Wiley & Sons, Ltd.
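In the cumulative-incidence formulation, a number needed to treat at a time horizon t is the reciprocal of the difference in cumulative incidence of the event of interest between arms. A sketch with hypothetical CIF values, not trial results:

```python
# Sketch: NNT at horizon t from cumulative incidence functions (competing-risks
# setting). The CIF values are made up for illustration.
import numpy as np

# Cumulative incidence of the primary event at t = 5 years:
cif_control = 0.30   # control arm
cif_treated = 0.22   # treated arm

risk_diff = cif_control - cif_treated
nnt = 1.0 / risk_diff            # patients treated per event prevented by t
nnt_rounded = int(np.ceil(nnt))  # conventionally rounded up to a whole patient
```

In practice the CIF values would come from nonparametric (Aalen-Johansen type) or semiparametric estimates, which is where the article's inferential methods enter.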

14.
Studies of HIV dynamics in AIDS research are very important for understanding the pathogenesis of HIV-1 infection and for assessing the effectiveness of antiviral therapies. Nonlinear mixed-effects (NLME) models have been used to model between-subject and within-subject variations in viral load measurements. Normality of both the within-subject random errors and the random effects is a routine assumption for NLME models, but it may be unrealistic, obscuring important features of between-subject and within-subject variations, particularly if the data exhibit skewness. In this paper, we develop a Bayesian approach to NLME models and relax the normality assumption by allowing both the model random errors and the random effects to follow a multivariate skew-normal distribution. The proposed model provides flexibility in capturing a broad range of non-normal behavior and includes normality as a special case. We use a real data set from an AIDS study to illustrate the proposed approach by comparing various candidate models. We find that the model with skew-normality provides a better fit to the observed data, and the corresponding parameter estimates differ significantly from those based on the model with normality when skewness is present in the data. These findings suggest that it is important to adopt a model with a skew-normal distribution in order to achieve robust and reliable results, in particular when the data exhibit skewness. Copyright © 2010 John Wiley & Sons, Ltd.

15.
Problems common to many longitudinal HIV/AIDS, cancer, vaccine, and environmental exposure studies are the presence of a lower limit of quantification of an outcome with skewness and time-varying covariates with measurement errors. Relatively little published work deals with these features of longitudinal data simultaneously. In particular, left-censored data falling below a limit of detection may sometimes have a proportion larger than expected under the usually assumed log-normal distribution. In such cases, alternative models that can account for a high proportion of censored data should be considered. In this article, we present an extension of the Tobit model that incorporates a mixture of true undetectable observations and values from a skew-normal distribution for an outcome with possible left censoring and skewness, together with covariates subject to substantial measurement error. To quantify the covariate process, we offer a flexible nonparametric mixed-effects model within the Tobit framework. A Bayesian modeling approach is used to assess the simultaneous impact of left censoring, skewness, and measurement error in covariates on inference. The proposed methods are illustrated using real data from an AIDS clinical study. Copyright © 2013 John Wiley & Sons, Ltd.

16.
Longitudinal growth patterns are routinely seen in medical studies where individual growth and population growth are followed up over a period of time. Many current methods for modeling growth presuppose a parametric relationship between the outcome and time (e.g., linear and quadratic); however, these relationships may not accurately capture growth over time. Functional mixed-effects (FME) models provide flexibility in handling longitudinal data with nonparametric temporal trends. Although FME methods are well developed for continuous, normally distributed outcome measures, nonparametric methods for handling categorical outcomes are limited. We consider the situation with binomially distributed longitudinal outcomes. Although percent correct data can be modeled assuming normality, estimates outside the parameter space are possible, and thus, estimated curves can be unrealistic. We propose a binomial FME model using Bayesian methodology to account for growth curves with binomial (percentage) outcomes. The usefulness of our methods is demonstrated using a longitudinal study of speech perception outcomes from cochlear implant users where we successfully model both the population and individual growth trajectories. Simulation studies also advocate the usefulness of the binomial model particularly when outcomes occur near the boundary of the probability parameter space and in situations with a small number of trials. Copyright © 2014 John Wiley & Sons, Ltd.

17.
The collection of repeated measurements over time on an experimental unit, to study changes over time in a certain characteristic, is common in biological and clinical studies. Data of this type are also often referred to as growth curve data or repeated measures data. Situations arise in which one is interested in an estimate of the time to an event, based on a characteristic that indicates progression towards the event. The assessment of the progression of labor during childbirth based on cervical dilation is one such example: increasing dilation of the cervix indicates progression towards delivery. Based on how long a patient has been in labor and an estimate of the time to complete dilation, one might make crucial decisions such as whether to administer a drug or perform a C-section. Here a repeated measures approach is developed to model the time to the event. The parameters of the model are estimated by maximum likelihood. A general model is developed for a class of data structures, and a nonlinear model is developed specific to the labor progression data. Simulations are performed to assess the methodology, and conditions are suggested for predicting the time to an event. Copyright © 2010 John Wiley & Sons, Ltd.

18.
Our aim is to develop a rich and coherent framework for modeling correlated time-to-event data, including (1) survival regression models with different links and (2) flexible modeling for time-dependent and nonlinear effects with rich postestimation. We extend the class of generalized survival models, which expresses a transformed survival in terms of a linear predictor, by incorporating a shared frailty or random effects for correlated survival data. The proposed approach can include parametric or penalized smooth functions for time, time-dependent effects, nonlinear effects, and their interactions. The maximum (penalized) marginal likelihood method is used to estimate the regression coefficients and the variance for the frailty or random effects. The optimal smoothing parameters for the penalized marginal likelihood estimation can be automatically selected by a likelihood-based cross-validation criterion. For models with normal random effects, Gauss-Hermite quadrature can be used to obtain the cluster-level marginal likelihoods. The Akaike Information Criterion can be used to compare models and select the link function. We have implemented these methods in the R package rstpm2. Simulating for both small and larger clusters, we find that this approach performs well. Through 2 applications, we demonstrate (1) a comparison of proportional hazards and proportional odds models with random effects for clustered survival data and (2) the estimation of time-varying effects on the log-time scale, age-varying effects for a specific treatment, and two-dimensional splines for time and age.

19.
Dynamic prediction uses longitudinal biomarkers for real-time prediction of an individual patient's prognosis. This is critical for patients with an incurable disease such as cancer. Biomarker trajectories are usually not linear, nor even monotone, and vary greatly across individuals. Therefore, it is difficult to fit them with parametric models. With this consideration, we propose an approach for dynamic prediction that does not need to model the biomarker trajectories. Instead, as a trade-off, we assume that the biomarker effects on the risk of disease recurrence are smooth functions over time. This approach turns out to be computationally easier. Simulation studies show that the proposed approach achieves stable estimation of biomarker effects over time, has good predictive performance, and is robust against model misspecification. It is a good compromise between two major approaches, namely, (i) joint modeling of longitudinal and survival data and (ii) landmark analysis. The proposed method is applied to patients with chronic myeloid leukemia. At any time following their treatment with tyrosine kinase inhibitors, longitudinally measured BCR-ABL gene expression levels are used to predict the risk of disease progression. Copyright © 2016 John Wiley & Sons, Ltd.

20.
Conventional phase II trials using binary endpoints as early indicators of a time-to-event outcome are not always feasible. Uveal melanoma has no reliable intermediate marker of efficacy. In pancreatic cancer and viral clearance, the time to the event of interest is short, making an early indicator unnecessary. In the latter application, Weibull models have been used to analyse the corresponding time-to-event data. Bayesian sample size calculations are presented for single-arm and randomised phase II trials assuming proportional hazards models for time-to-event endpoints. Special consideration is given to the case where survival times follow the Weibull distribution. The proposed methods are demonstrated through an illustrative trial based on uveal melanoma patient data. A procedure for prior specification based on knowledge or predictions of survival patterns is described. This enables investigation into the choice of allocation ratio in the randomised setting, to assess whether a control arm is indeed required. The Bayesian framework enables sample sizes consistent with those used in practice to be obtained. When a confirmatory phase III trial will follow if suitable evidence of efficacy is identified, Bayesian approaches are less controversial than for definitive trials. In the randomised setting, a compromise for obtaining feasible sample sizes is a loss of certainty in the specified hypotheses: the Bayesian counterpart of power. However, this approach may still be preferable to running a single-arm trial in which no data are collected on the control treatment. This dilemma is present in most phase II trials, where resources are not sufficient to conduct a definitive trial. Copyright © 2015 John Wiley & Sons, Ltd.
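Design calculations like these rest on a basic proportional-hazards identity: with a Weibull baseline, the treated-arm survival is the control-arm survival raised to the power of the hazard ratio. A sketch with hypothetical shape, scale, and hazard-ratio values:

```python
# Sketch: under proportional hazards, S_1(t) = S_0(t) ** HR.
# Baseline is Weibull with illustrative shape/scale (time in months).
import numpy as np

shape, scale = 1.2, 24.0   # hypothetical Weibull baseline parameters
hr = 0.6                   # targeted hazard ratio for the new treatment
t = 12.0                   # horizon of interest

s0 = np.exp(-((t / scale) ** shape))   # control-arm survival at 12 months
s1 = s0 ** hr                          # treated-arm survival under PH
```

In the Bayesian sample size framework described above, a prior would be placed on such survival or hazard-ratio quantities, and the design would be evaluated against the resulting prior predictive distribution.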
