Similar Documents (10 results)
1.
Longitudinal studies often gather joint information on time to some event (survival analysis, time to dropout) and serial outcome measures (repeated measures, growth curves). Depending on the purpose of the study, one may wish to estimate and compare serial trends over time while accounting for possibly non-ignorable dropout, or one may wish to investigate any associations that may exist between the event time of interest and various longitudinal trends. In this paper, we consider a class of random-effects models known as shared parameter models that are particularly useful for jointly analysing such data, namely repeated measurements and event time data. Specific attention will be given to the longitudinal setting where the primary goal is to estimate and compare serial trends over time while adjusting for possible informative censoring due to patient dropout. Parametric and semi-parametric survival models for event times together with generalized linear or non-linear mixed-effects models for repeated measurements are proposed for jointly modelling serial outcome measures and event times. Methods of estimation are based on a generalized non-linear mixed-effects model that may be easily implemented using existing software. This approach allows for flexible modelling of both the distribution of event times and of the relationship of the longitudinal response variable to the event time of interest. The model and methods are illustrated using data from a multi-centre study of the effects of diet and blood pressure control on progression of renal disease, the Modification of Diet in Renal Disease Study.
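A minimal illustration of the shared-parameter idea, not the authors' generalized non-linear mixed-effects implementation: a linear random-intercept model for the repeated measures and an exponential event-time model share the same random intercept, which is integrated out by Gauss-Hermite quadrature. The data layout and all parameter values are assumptions made for the sketch.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss  # probabilists' Hermite nodes/weights
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(theta, y, t, T, d, n_quad=15):
    """Shared-parameter joint log-likelihood with a random intercept b_i:
       longitudinal: y_ij = beta0 + beta1*t_ij + b_i + e_ij,  e_ij ~ N(0, sigma_e^2)
       event time  : hazard = exp(alpha0 + gamma*b_i)         (exponential model)
       b_i ~ N(0, sigma_b^2) is integrated out by Gauss-Hermite quadrature."""
    beta0, beta1, log_se, log_sb, alpha0, gamma = theta
    se, sb = np.exp(log_se), np.exp(log_sb)
    nodes, weights = hermegauss(n_quad)
    weights = weights / np.sqrt(2 * np.pi)        # weights for E[.] under N(0, 1)
    ll = 0.0
    for yi, ti, Ti, di in zip(y, t, T, d):
        contrib = 0.0
        for z, w in zip(nodes, weights):
            b = sb * z                             # random intercept at this node
            f_long = np.prod(norm.pdf(yi, beta0 + beta1 * ti + b, se))
            lam = np.exp(alpha0 + gamma * b)       # subject-specific hazard
            f_surv = lam**di * np.exp(-lam * Ti)   # density if event, survival if censored
            contrib += w * f_long * f_surv
        ll += np.log(contrib + 1e-300)
    return -ll

# Example usage with simulated data (illustrative values only).
rng = np.random.default_rng(0)
n = 50
t = [np.arange(0, 4.0) for _ in range(n)]
b = rng.normal(0, 1.0, n)
y = [1.0 + 0.5 * ti + bi + rng.normal(0, 0.5, ti.size) for ti, bi in zip(t, b)]
T = rng.exponential(1 / np.exp(-1.5 + 0.8 * b))
d = (T < 5.0).astype(float)
T = np.minimum(T, 5.0)                             # administrative censoring at 5
start = np.array([0.0, 0.0, 0.0, 0.0, -1.0, 0.0])
fit = minimize(neg_loglik, start, args=(y, t, T, d), method="Nelder-Mead",
               options={"maxiter": 2000})
print(fit.x)
```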

2.
The measurement of cervical dilation of a pregnant woman is used to monitor the progression of labor until 10 cm, when pushing begins. There is anecdotal evidence that labor tracks across repeated pregnancies; however, no statistical methodology has been developed to address this important issue, which could help obstetricians make more informed clinical decisions about an individual woman's progression. Motivated by the NICHD Consecutive Pregnancies Study (CPS), we propose new methodology for analyzing labor curves across consecutive pregnancies. Our focus is both on studying the correlation between repeated labor curves on the same woman and on using the cervical dilation data from prior pregnancies to predict subsequent labor curves. We propose a hierarchical random effects model with a random change point that characterizes repeated labor curves within and between women to address these issues. We employ Bayesian methodology for parameter estimation and prediction. Model diagnostics to examine the appropriateness of the hierarchical random effects structure for characterizing the dependence structure across consecutive pregnancies are also proposed. The methodology was used in analyzing the CPS data and in developing a predictor for labor progression that can be used in clinical practice.
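A small simulation sketch of the kind of hierarchical random change-point structure the abstract describes, not the authors' Bayesian estimation code: each woman has a broken-stick dilation curve whose change point and slopes are correlated across her consecutive pregnancies. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_labor_curves(n_women=100, n_pregnancies=2, rho=0.6):
    """Simulate cervical-dilation trajectories: a slow latent phase before a
    random change point and a faster active phase after it.  Woman-level
    effects are shared across pregnancies, so curves track within a woman."""
    curves = []
    for _ in range(n_women):
        u_cp, u_slope = rng.normal(0, 1.0), rng.normal(0, 0.2)   # woman-level effects
        woman = []
        for _ in range(n_pregnancies):
            # pregnancy-level deviations; rho controls how strongly curves track
            cp = 6.0 + rho * u_cp + np.sqrt(1 - rho**2) * rng.normal(0, 1.0)
            s1 = 0.3 + 0.5 * (rho * u_slope
                              + np.sqrt(1 - rho**2) * rng.normal(0, 0.2))
            s2 = s1 + 1.2                      # steeper active-phase slope
            t = np.arange(0, 12, 0.5)          # hours in labor
            mean = 2.0 + s1 * t + (s2 - s1) * np.maximum(t - cp, 0.0)
            dil = np.clip(mean + rng.normal(0, 0.3, t.size), 0, 10)
            woman.append((t, dil))
        curves.append(woman)
    return curves

curves = simulate_labor_curves()
t, dil = curves[0][0]
print(dil[:5])
```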

3.
In longitudinal studies, it is sometimes of interest to estimate the distribution of the time a longitudinal process takes to traverse from one threshold to another. For example, the distribution of the time it takes a woman's cervical dilation to progress from 3 to 4 cm can aid the decision-making of obstetricians as to whether a stalled labor should be allowed to proceed or stopped in favor of other options. Often researchers treat this type of data structure as interval censored and employ traditional survival analysis methods. However, the traditional interval censoring approaches are inefficient in that they do not use all of the available data. In this paper, we propose utilizing a longitudinal threshold model to estimate the distribution of the elapsed time between two thresholds of the longitudinal process from repeated measurements. We extend this modeling framework to be used with multiple thresholds. A Wiener process under the first hitting time framework is used to represent the survival distribution. We demonstrate our model through simulation studies and an analysis of data from the Consortium on Safe Labor study. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.
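The sketch below does not reproduce the fitted model; it only illustrates the standard first-hitting-time fact the approach builds on: for a Wiener process with positive drift, the time to traverse a fixed distance between two thresholds follows an inverse Gaussian distribution. The drift and volatility values are made up, and scipy's invgauss parameterization is mapped from the usual mean/shape form.

```python
import numpy as np
from scipy.stats import invgauss

def traversal_time_dist(c_from, c_to, drift, sigma):
    """Distribution of the time a Wiener process with drift `drift` (> 0) and
    volatility `sigma` takes to first travel from threshold c_from to c_to.
    The first hitting time of a barrier at distance d is inverse Gaussian with
    mean d/drift and shape (d/sigma)^2; scipy's invgauss uses
    mu = mean/shape and scale = shape."""
    d = c_to - c_from
    mean, shape = d / drift, (d / sigma) ** 2
    return invgauss(mu=mean / shape, scale=shape)

# Example: time for dilation to progress from 3 cm to 4 cm, assuming a drift
# of 0.5 cm/hour and volatility of 0.8 (purely illustrative numbers).
dist = traversal_time_dist(3.0, 4.0, drift=0.5, sigma=0.8)
print("median hours:", dist.median())
print("P(traversal takes more than 6 hours):", dist.sf(6.0))
```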

4.
In longitudinal studies, it is of interest to investigate how repeatedly measured markers are associated with the time to an event of interest; at the same time, the repeated measurements often exhibit the features of a heterogeneous population, non-normality, and covariates measured with error owing to their longitudinal nature. Statistical analysis becomes considerably more complicated when longitudinal-survival data with all of these features are analyzed together. Recently, mixtures of skewed distributions have received increasing attention in the treatment of heterogeneous data involving asymmetric behaviors across subclasses, but relatively few studies accommodate heterogeneity, non-normality, and covariate measurement error simultaneously in the longitudinal-survival data setting. Under the umbrella of Bayesian inference, this article explores a finite mixture of semiparametric mixed-effects joint models with skewed distributions for the longitudinal measures, in an attempt to accommodate population heterogeneity, adjust for departures from normality, and correct for covariate measurement error, as well as to overcome a lack of confidence in specifying a time-to-event model. The Bayesian mixture joint modeling offers an appropriate avenue to estimate not only all parameters of the mixture joint models but also the probabilities of class membership. Simulation studies are conducted to assess the performance of the proposed method, and a real example is analyzed to demonstrate the methodology. The results are reported by comparing potential models under various scenarios. Copyright © 2015 John Wiley & Sons, Ltd.
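The full Bayesian mixture of joint models is too large for a short example; the snippet below illustrates one ingredient only, computing class-membership probabilities (responsibilities) for a two-component mixture of skew-normal densities. The component parameters and the simulated data are purely illustrative.

```python
import numpy as np
from scipy.stats import skewnorm

def responsibilities(y, pis, locs, scales, shapes):
    """Posterior class-membership probabilities for a finite mixture of
    skew-normal components: r_ik proportional to pi_k * f_SN(y_i | xi_k, omega_k, alpha_k)."""
    dens = np.column_stack([
        pi_k * skewnorm.pdf(y, a=a_k, loc=xi_k, scale=om_k)
        for pi_k, xi_k, om_k, a_k in zip(pis, locs, scales, shapes)
    ])
    return dens / dens.sum(axis=1, keepdims=True)

# Illustrative example: a "slow-progressor" and a "fast-progressor" subclass,
# each with right-skewed longitudinal residuals.
rng = np.random.default_rng(2)
y = np.concatenate([
    skewnorm.rvs(a=4, loc=0.0, scale=1.0, size=200, random_state=rng),
    skewnorm.rvs(a=4, loc=3.0, scale=1.5, size=100, random_state=rng),
])
r = responsibilities(y, pis=[2 / 3, 1 / 3], locs=[0.0, 3.0],
                     scales=[1.0, 1.5], shapes=[4.0, 4.0])
print("average posterior class probabilities:", r.mean(axis=0))
```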

5.
For time-to-event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact of, or addressing, errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression-free survival or time to AIDS progression) can be difficult to assess or reliant on self-report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log-linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic.
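A compact SIMEX sketch in the spirit of the abstract, with two simplifications: a simple exponential (log-linear) survival regression stands in for the Cox model, and there is no censoring. The error standard deviation, the quadratic extrapolant, and all simulated values are assumptions made for the sketch.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

def fit_exponential(logT, x):
    """MLE of (b0, b1) in hazard lambda_i = exp(b0 + b1*x_i), all events observed."""
    T = np.exp(logT)
    def nll(b):
        lam = np.exp(b[0] + b[1] * x)
        return -(np.log(lam) - lam * T).sum()
    return minimize(nll, np.zeros(2), method="Nelder-Mead").x

# Simulate true event times and an error-prone observed log event time W = log T + U.
n, beta = 2000, 0.7
x = rng.normal(size=n)
T = rng.exponential(1 / np.exp(-0.5 + beta * x))
sigma_u = 0.4                                   # assumed known error SD on the log scale
W = np.log(T) + rng.normal(0, sigma_u, n)

# SIMEX: add extra error at levels zeta, average refitted slopes, extrapolate to zeta = -1.
zetas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
B = 20
b1_means = []
for z in zetas:
    b1s = [fit_exponential(W + rng.normal(0, np.sqrt(z) * sigma_u, n), x)[1]
           for _ in range(B)]
    b1_means.append(np.mean(b1s))

coef = np.polyfit(zetas, b1_means, deg=2)       # quadratic extrapolant in zeta
b1_simex = np.polyval(coef, -1.0)               # extrapolate to zero measurement error
print("naive:", b1_means[0], "SIMEX-corrected:", b1_simex, "truth:", beta)
```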

6.
Propensity score methods are increasingly being used to estimate causal treatment effects in observational studies. In medical and epidemiological studies, outcomes are frequently time-to-event in nature. Propensity-score methods are often applied incorrectly when estimating the effect of treatment on time-to-event outcomes. This article describes how two different propensity score methods (matching and inverse probability of treatment weighting) can be used to estimate the measures of effect that are frequently reported in randomized controlled trials: (i) marginal survival curves, which describe survival in the population if all subjects were treated or if all subjects were untreated; and (ii) marginal hazard ratios. The use of these propensity score methods allows one to replicate the measures of effect that are commonly reported in randomized controlled trials with time-to-event outcomes: both absolute and relative reductions in the probability of an event occurring can be determined. We also provide guidance on variable selection for the propensity score model, highlight methods for assessing the balance of baseline covariates between treated and untreated subjects, and describe the implementation of a sensitivity analysis to assess the effect of unmeasured confounding variables on the estimated treatment effect when outcomes are time-to-event in nature. The methods in the paper are illustrated by estimating the effect of discharge statin prescribing on the risk of death in a sample of patients hospitalized with acute myocardial infarction. In this tutorial article, we describe and illustrate all the steps necessary to conduct a comprehensive analysis of the effect of treatment on time-to-event outcomes. © 2013 The authors. Statistics in Medicine published by John Wiley & Sons, Ltd.
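A minimal sketch of the inverse-probability-of-treatment-weighting half of the tutorial: a logistic propensity model, stabilized weights, and a hand-rolled weighted Kaplan-Meier estimator of the marginal survival curves under treatment and control. The simulated data and the one-at-a-time tie handling are illustrative shortcuts, not the authors' procedure.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(4)

def fit_propensity(X, z):
    """P(Z = 1 | X) via a logistic regression fitted by maximum likelihood."""
    Xd = np.column_stack([np.ones(len(z)), X])
    def nll(b):
        p = expit(Xd @ b)
        return -(z * np.log(p + 1e-12) + (1 - z) * np.log(1 - p + 1e-12)).sum()
    b = minimize(nll, np.zeros(Xd.shape[1]), method="BFGS").x
    return expit(Xd @ b)

def weighted_km(time, event, w, grid):
    """Weighted Kaplan-Meier survival estimate evaluated on `grid`."""
    order = np.argsort(time)
    time, event, w = time[order], event[order], w[order]
    at_risk, s, surv, j = w.sum(), 1.0, [], 0
    for g in grid:
        while j < len(time) and time[j] <= g:
            if event[j]:
                s *= 1 - w[j] / at_risk
            at_risk -= w[j]
            j += 1
        surv.append(s)
    return np.array(surv)

# Simulated observational data: confounder x affects both treatment and survival.
n = 4000
x = rng.normal(size=n)
z = rng.binomial(1, expit(0.8 * x))                        # treatment assignment
T = rng.exponential(1 / np.exp(-1.0 + 0.6 * x - 0.5 * z))  # protective treatment
C = rng.exponential(4.0, n)
time, event = np.minimum(T, C), (T <= C).astype(int)

ps = fit_propensity(x.reshape(-1, 1), z)
w = np.where(z == 1, z.mean() / ps, (1 - z.mean()) / (1 - ps))  # stabilized IPT weights

grid = np.linspace(0, 5, 50)
surv_treated = weighted_km(time[z == 1], event[z == 1], w[z == 1], grid)
surv_control = weighted_km(time[z == 0], event[z == 0], w[z == 0], grid)
print("marginal survival difference at t = 5:", surv_treated[-1] - surv_control[-1])
```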

7.
Mixed treatment comparison (MTC) meta-analyses estimate relative treatment effects from networks of evidence while preserving randomisation. We extend the MTC framework to allow for repeated measurements of a continuous endpoint that varies over time. We used, as a case study, a systematic review and meta-analysis of intraocular pressure (IOP) measurements from randomised controlled trials evaluating topical ocular hypotensives in primary open-angle glaucoma or ocular hypertension, because IOP varies over the day and over the treatment course, and repeated measurements are frequently reported. We adopted models for conducting MTC in WinBUGS (The BUGS Project, Cambridge, UK) to allow for repeated IOP measurements and to impute missing standard deviations of the raw data using the predictive distribution from observations with standard deviations. A flexible model with an unconstrained baseline for IOP variations over time and time-invariant random treatment effects fitted the data well. We also adopted repeated measures models to allow for class effects; assuming treatment effects to be exchangeable within classes slightly improved model fit but could bias estimated treatment effects if exchangeability assumptions were not valid. We enabled all timepoints to be included in the analysis, allowing for repeated measures to increase precision around treatment effects and avoid bias associated with selecting timepoints for meta-analysis. The methods we developed for modelling repeated measures and allowing for missing data may be adapted for use in other MTC meta-analyses. Copyright © 2011 John Wiley & Sons, Ltd.
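The authors' repeated-measures MTC models are specified in WinBUGS; the snippet below is not that code. It only illustrates, in numpy, the consistency relation that any network meta-analysis rests on: an indirect C-versus-B contrast formed from A-versus-B and A-versus-C evidence, pooled with direct evidence by inverse-variance weighting. The effect sizes and variances are invented.

```python
import numpy as np

# Hypothetical mean differences in IOP (mmHg) and their variances.
d_AB, var_AB = -2.0, 0.10           # treatment B vs reference A (direct)
d_AC, var_AC = -3.1, 0.15           # treatment C vs reference A (direct)
d_BC_dir, var_BC_dir = -1.4, 0.30   # C vs B (direct, few trials)

# Consistency equation: the indirect C-vs-B contrast is the difference of the
# two contrasts against the common reference, with variances adding.
d_BC_ind = d_AC - d_AB
var_BC_ind = var_AC + var_AB

# Inverse-variance (fixed-effect) combination of direct and indirect evidence.
w_dir, w_ind = 1 / var_BC_dir, 1 / var_BC_ind
d_BC_mixed = (w_dir * d_BC_dir + w_ind * d_BC_ind) / (w_dir + w_ind)
var_BC_mixed = 1 / (w_dir + w_ind)

print(f"indirect C vs B: {d_BC_ind:.2f} (var {var_BC_ind:.2f})")
print(f"mixed    C vs B: {d_BC_mixed:.2f} +/- {1.96 * np.sqrt(var_BC_mixed):.2f}")
```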

8.
High-dimensional longitudinal data involving latent variables such as depression and anxiety that cannot be quantified directly are often encountered in biomedical and social sciences. Multiple responses are used to characterize these latent quantities, and repeated measures are collected to capture their trends over time. Furthermore, substantive research questions may concern issues such as interrelated trends among latent variables that can only be addressed by modeling them jointly. Although statistical analysis of univariate longitudinal data has been well developed, methods for modeling multivariate high-dimensional longitudinal data are still under development. In this paper, we propose a latent factor linear mixed model (LFLMM) for analyzing this type of data. This model combines the factor analysis model with the multivariate linear mixed model. Under this modeling framework, we reduce the high-dimensional responses to low-dimensional latent factors with the factor analysis model and then use the multivariate linear mixed model to study the longitudinal trends of these latent factors. We develop an expectation-maximization (EM) algorithm to estimate the model. We use simulation studies to investigate the computational properties of the EM algorithm and to compare the LFLMM with other approaches for high-dimensional longitudinal data analysis. We use a real data example to illustrate the practical usefulness of the model. Copyright © 2013 John Wiley & Sons, Ltd.
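The sketch below is not the authors' joint EM algorithm; it is a crude two-stage stand-in that conveys the structure of the LFLMM: factor analysis reduces the multivariate responses to a few latent factor scores, and a linear mixed model then tracks each factor's trend over time. It assumes scikit-learn and statsmodels are available, and the data are simulated.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(5)

# Simulated long-format data: n subjects, 5 visits, 12 observed items driven
# by 2 latent factors whose means drift linearly over time.
n, visits, p, k = 200, 5, 12, 2
loadings = rng.normal(0, 1, (p, k))
rows = []
for i in range(n):
    b = rng.normal(0, 0.5, k)                       # subject-level random intercepts
    for t in range(visits):
        f = np.array([0.3 * t, -0.2 * t]) + b + rng.normal(0, 0.3, k)
        y = loadings @ f + rng.normal(0, 0.5, p)
        rows.append([i, t, *y])
df = pd.DataFrame(rows, columns=["id", "time", *[f"y{j}" for j in range(p)]])

# Stage 1: reduce the 12 items to k latent factor scores.
scores = FactorAnalysis(n_components=k).fit_transform(df[[f"y{j}" for j in range(p)]])

# Stage 2: linear mixed model (random intercept per subject) for each factor's trend.
for j in range(k):
    exog = sm.add_constant(df["time"])
    res = sm.MixedLM(scores[:, j], exog, groups=df["id"]).fit()
    print(f"factor {j} fixed effects:")
    print(res.fe_params)
```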

9.
Simultaneous inference in longitudinal, repeated-measures, and multi-endpoint designs can be onerous, especially when trying to find a reasonable joint model from which the interesting effects and covariances are estimated. A novel statistical approach known as multiple marginal models greatly simplifies the modelling process: the core idea is to “marginalise” the problem and fit multiple small models to different portions of the data, and then estimate the overall covariance matrix in a subsequent, separate step. Using these estimates guarantees strong control of the family-wise error rate, although only asymptotically. In this paper, we show how to make the approach also applicable to small-sample data problems. Specifically, we discuss the computation of adjusted P values and simultaneous confidence bounds for comparisons of randomised treatment groups as well as for levels of a nonrandomised factor such as multiple endpoints, repeated measures, or a series of points in time or space. We illustrate the practical use of the method with a data example.
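The sketch below is not the multiple-marginal-models machinery itself (in R that is provided by the multcomp package), and it uses the asymptotic normal reference rather than the small-sample multivariate-t adjustment this paper develops. It only shows the final single-step max-|z| adjustment by Monte Carlo, given per-comparison z statistics and an assumed correlation matrix of the estimates; all inputs are illustrative.

```python
import numpy as np
from scipy.stats import norm

def single_step_adjusted_p(z, corr, n_sim=100_000, seed=0):
    """Single-step (max-|z|) multiplicity-adjusted p-values: for each comparison,
    the probability that the maximum absolute component of a multivariate normal
    with the given correlation matrix exceeds |z_k|."""
    rng = np.random.default_rng(seed)
    draws = rng.multivariate_normal(np.zeros(len(z)), corr, size=n_sim)
    max_abs = np.abs(draws).max(axis=1)
    return np.array([(max_abs >= abs(zk)).mean() for zk in z])

# Illustrative example: three endpoints compared between two treatment groups,
# with estimates assumed to be moderately correlated.
z = np.array([2.4, 1.1, 2.9])
corr = np.array([[1.0, 0.5, 0.3],
                 [0.5, 1.0, 0.4],
                 [0.3, 0.4, 1.0]])
print("unadjusted:", 2 * norm.sf(np.abs(z)))
print("adjusted  :", single_step_adjusted_p(z, corr))
```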

10.
Cox models are commonly used in the analysis of time to event data. One advantage of Cox models is the ability to include time-varying covariates, often a binary covariate that codes for the occurrence of an event that affects an individual subject. A common assumption in this case is that the effect of the event on the outcome of interest is constant and permanent for each subject. In this paper, we propose a modification to the Cox model to allow the influence of an event to exponentially decay over time. Methods for generating data using the inverse cumulative distribution function for the proposed model are developed. Likelihood ratio tests and AIC are investigated as methods for comparing the proposed model to the commonly used permanent exposure model. A simulation study is performed, and 3 different data sets are presented as examples.
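A sketch of the data-generation idea mentioned in the abstract: with a constant baseline hazard and an exponentially decaying effect of an intermediate event, event times are drawn by inverting the cumulative hazard at a unit-exponential draw. The hazard form, parameter values, and helper names are illustrative assumptions, not the authors' specification.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

rng = np.random.default_rng(6)

def cumulative_hazard(t, t_event, h0, beta, theta):
    """H(t) for hazard h0*exp(beta*z(t)), where the exposure effect decays:
    z(t) = 0 before the intermediate event at t_event, exp(-theta*(t - t_event)) after."""
    if t <= t_event:
        return h0 * t
    tail, _ = quad(lambda s: np.exp(beta * np.exp(-theta * (s - t_event))), t_event, t)
    return h0 * t_event + h0 * tail

def draw_event_time(t_event, h0=0.1, beta=1.0, theta=0.5, t_max=200.0):
    """Inverse-CDF draw: solve H(T) = E with E ~ Exp(1)."""
    e = rng.exponential()
    if cumulative_hazard(t_max, t_event, h0, beta, theta) < e:
        return t_max                      # effectively censored at t_max
    return brentq(lambda t: cumulative_hazard(t, t_event, h0, beta, theta) - e,
                  1e-8, t_max)

# Example: subjects experience the intermediate event at random times; its effect
# on the hazard decays at rate theta rather than persisting permanently.
t_events = rng.uniform(0, 10, size=500)
times = np.array([draw_event_time(te) for te in t_events])
print("median simulated event time:", np.median(times))
```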
