Similar Documents
20 similar documents found (search time: 177 ms)
1.
In this article, we show how Tobit models can address the problem of identifying characteristics of subjects with left‐censored outcomes, in the context of developing a method for jointly analyzing time‐to‐event and longitudinal data. Methods exist for handling these types of data separately, but they may not be appropriate when the time to event depends on the longitudinal outcome and a substantial portion of values are reported to be below the limits of detection. An alternative approach is to develop a joint model for the time‐to‐event outcome and a two‐part longitudinal outcome, linking them through random effects. This proposed approach is implemented to assess the association between the risk of decline of the CD4/CD8 ratio and rates of change in viral load, and to discriminate patients who are potential progressors to AIDS from those who are not. We develop a fully Bayesian approach for fitting joint two‐part Tobit models and illustrate the proposed methods on simulated and real data from an AIDS clinical study.
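The left-censoring mechanism at the heart of a Tobit model is easy to make concrete. The following is a minimal frequentist sketch — not the authors' fully Bayesian joint model — assuming a single known detection limit, normal errors, and simulated data; all names and values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_negloglik(params, X, y, lod):
    """Negative log-likelihood of a left-censored (Tobit) linear model.
    Values at the limit of detection contribute P(y* <= lod); the rest
    contribute the usual normal density."""
    beta, sigma = params[:-1], np.exp(params[-1])  # log-scale keeps sigma > 0
    mu = X @ beta
    ll = np.where(y <= lod,
                  norm.logcdf((lod - mu) / sigma),   # below detection limit
                  norm.logpdf(y, loc=mu, scale=sigma))
    return -ll.sum()

# Toy data: ~30% of latent values fall below the detection limit.
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y_latent = X @ np.array([1.0, 0.8]) + rng.normal(size=n)
lod = np.quantile(y_latent, 0.3)
y = np.maximum(y_latent, lod)                       # left-censoring

fit = minimize(tobit_negloglik, np.zeros(3), args=(X, y, lod), method="BFGS")
print("beta_hat:", fit.x[:2], "sigma_hat:", np.exp(fit.x[-1]))
```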

2.
This article explores Bayesian joint models for a quantile of a longitudinal response, a mismeasured covariate, and an event time outcome, with an attempt to (i) characterize the entire conditional distribution of the response variable through quantile regression, which may be more robust to outliers and to misspecification of the error distribution; (ii) account for measurement error, evaluate non‐ignorable missing observations, and adjust for departures from normality in the covariate; and (iii) avoid the need to fully specify a parametric time‐to‐event model. When statistical inference is carried out for a longitudinal data set with non‐central location, non‐linearity, non‐normality, measurement error, and missing values, as well as interval‐censored event times, it is important to account for these data features simultaneously in order to obtain more reliable and robust inferential results. Toward this end, we develop a Bayesian joint modeling approach that simultaneously estimates all parameters in the three models: a quantile regression‐based nonlinear mixed‐effects model for the response using the asymmetric Laplace distribution, a linear mixed‐effects model with a skew‐t distribution for the mismeasured covariate in the presence of informative missingness, and an accelerated failure time model with an unspecified nonparametric distribution for the event time. We apply the proposed modeling approach to an AIDS clinical data set and conduct simulation studies to assess the performance of the proposed joint models and method. Copyright © 2016 John Wiley & Sons, Ltd.
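The asymmetric Laplace device rests on a standard equivalence: maximizing an asymmetric Laplace likelihood at quantile tau is the same as minimizing the quantile-regression "check" loss. The sketch below shows only that core step on simulated heavy-tailed data, not the full Bayesian joint model; names and values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(beta, X, y, tau):
    """Quantile-regression objective; minimizing it is equivalent to
    maximizing an asymmetric Laplace likelihood at quantile tau."""
    r = y - X @ beta
    return np.sum(r * (tau - (r < 0)))

rng = np.random.default_rng(1)
n = 400
X = np.column_stack([np.ones(n), rng.uniform(size=n)])
# Heavy-tailed t(2) errors: quantile fits stay stable where least squares may not.
y = X @ np.array([2.0, 1.5]) + rng.standard_t(df=2, size=n)

for tau in (0.25, 0.50, 0.75):
    fit = minimize(check_loss, np.zeros(2), args=(X, y, tau), method="Nelder-Mead")
    print(f"tau={tau}: beta_hat={fit.x}")
```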

3.
Causal inference with observational longitudinal data and time‐varying exposures is complicated due to the potential for time‐dependent confounding and unmeasured confounding. Most causal inference methods that handle time‐dependent confounding rely on either the assumption of no unmeasured confounders or the availability of an unconfounded variable that is associated with the exposure (eg, an instrumental variable). Furthermore, when data are incomplete, the validity of many methods often depends on the assumption of missing at random. We propose an approach that combines a parametric joint mixed‐effects model for the study outcome and the exposure with g‐computation to identify and estimate causal effects in the presence of time‐dependent confounding and unmeasured confounding. G‐computation can estimate participant‐specific or population‐average causal effects using parameters of the joint model. The joint model is a type of shared parameter model where the outcome and exposure‐selection models share common random effect(s). We also extend the joint model to handle missing data and truncation by death when missingness is possibly not at random. We evaluate the performance of the proposed method using simulation studies and compare the method to both linear mixed‐ and fixed‐effects models combined with g‐computation as well as to targeted maximum likelihood estimation. We apply the method to an epidemiologic study of vitamin D and depressive symptoms in older adults and include code using SAS PROC NLMIXED software to enhance the accessibility of the method to applied researchers.
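The g-computation step itself is simple to illustrate. Below is a minimal single-time-point sketch with a linear outcome model on simulated data — far simpler than the paper's shared-parameter joint model with time-varying exposures — showing the standardization idea: fit an outcome model, predict everyone's outcome under each fixed exposure level, and average.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
L = rng.normal(size=n)                       # confounder
A = rng.binomial(1, 1 / (1 + np.exp(-L)))    # exposure depends on L
Y = 1.0 + 0.5 * A + 0.8 * L + rng.normal(size=n)

# Step 1: fit a parametric model for E[Y | A, L].
X = np.column_stack([np.ones(n), A, L])
outcome_model = sm.OLS(Y, X).fit()

# Step 2 (g-formula): predict each subject's outcome under A=1 and
# under A=0 with L held at its observed value, then average.
X1 = np.column_stack([np.ones(n), np.ones(n), L])
X0 = np.column_stack([np.ones(n), np.zeros(n), L])
ate = outcome_model.predict(X1).mean() - outcome_model.predict(X0).mean()
print("g-computation effect estimate:", ate)   # ~0.5 by construction
```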

4.
Instrumental variable (IV) analysis has been widely used in economics, epidemiology, and other fields to estimate the causal effects of covariates on outcomes in the presence of unobserved confounders and/or measurement errors in covariates. However, IV methods for time‐to‐event outcomes with censored data remain underdeveloped. This paper proposes a Bayesian approach for IV analysis with a censored time‐to‐event outcome by using a two‐stage linear model. A Markov chain Monte Carlo sampling method is developed for parameter estimation for both normal and non‐normal linear models with elliptically contoured error distributions. The performance of our method is examined by simulation studies. Our method largely reduces bias and greatly improves the coverage probability of the estimated causal effect, compared with the method that ignores the unobserved confounders and measurement errors. We illustrate our method on the Women's Health Initiative Observational Study and the Atherosclerosis Risk in Communities Study. Copyright © 2014 John Wiley & Sons, Ltd.
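The two-stage linear idea is easy to demonstrate without censoring. In the sketch below (simulated data, illustrative values), a naive regression of the outcome on the exposure is biased by an unobserved confounder, while regressing the exposure on the instrument and then the outcome on the fitted exposure recovers the causal slope; the paper's Bayesian, censored-outcome machinery builds on this skeleton.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
U = rng.normal(size=n)                   # unobserved confounder
Z = rng.normal(size=n)                   # instrument: affects X, not Y directly
X = 0.9 * Z + U + rng.normal(size=n)     # endogenous exposure
Y = 1.0 + 0.5 * X + U + rng.normal(size=n)

# Naive OLS slope is biased because X is correlated with U.
naive = np.polyfit(X, Y, 1)[0]

# Stage 1: regress X on Z. Stage 2: regress Y on the stage-1 fitted values.
x_hat = np.polyval(np.polyfit(Z, X, 1), Z)
two_stage = np.polyfit(x_hat, Y, 1)[0]
print(f"naive OLS slope: {naive:.3f}, two-stage slope: {two_stage:.3f}")  # ~0.5
```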

5.
Model‐based standardization enables adjustment for confounding of a population‐averaged exposure effect on an outcome. It requires either a model for the probability of the exposure conditional on the confounders (an exposure model) or a model for the expectation of the outcome conditional on the exposure and the confounders (an outcome model). The methodology can also be applied to estimate averaged exposure effects within categories of an effect modifier and to test whether these effects differ. Recently, we extended that methodology for use with complex survey data, to estimate the effects of disability status on cost barriers to health care within three age categories and to test for differences. We applied the methodology to data from the 2007 Florida Behavioral Risk Factor Surveillance System Survey (BRFSS). The exposure modeling and outcome modeling approaches yielded two contrasting sets of results. In the present paper, we develop and apply to the BRFSS example two doubly robust approaches to testing and estimating effect modification with complex survey data; these approaches require that only one of these two models be correctly specified. Furthermore, assuming that at least one of the models is correctly specified, we can use the doubly robust approaches to develop and apply goodness‐of‐fit tests for the exposure and outcome models. We compare the exposure modeling, outcome modeling, and doubly robust approaches in terms of a simulation study and the BRFSS example. Copyright © 2012 John Wiley & Sons, Ltd.
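Double robustness in its simplest form can be sketched with a point exposure and the augmented inverse-probability-weighted (AIPW) estimator, which is consistent if either the exposure model or the outcome model is correctly specified. The code below ignores the paper's survey weights and effect-modification layer; data and names are illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 3000
L = rng.normal(size=n)                               # confounder
A = rng.binomial(1, 1.0 / (1.0 + np.exp(-0.8 * L)))  # exposure
Y = 2.0 + 1.0 * A + 1.5 * L + rng.normal(size=n)     # outcome

# Exposure model: P(A=1 | L) by logistic regression.
ps = sm.Logit(A, sm.add_constant(L)).fit(disp=0).predict(sm.add_constant(L))

# Outcome model: E[Y | A, L] by linear regression.
om = sm.OLS(Y, np.column_stack([np.ones(n), A, L])).fit()
m1 = om.predict(np.column_stack([np.ones(n), np.ones(n), L]))
m0 = om.predict(np.column_stack([np.ones(n), np.zeros(n), L]))

# AIPW: consistent if either the exposure or the outcome model is correct.
mu1 = np.mean(A * (Y - m1) / ps + m1)
mu0 = np.mean((1 - A) * (Y - m0) / (1 - ps) + m0)
print("doubly robust effect estimate:", mu1 - mu0)   # ~1.0 by construction
```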

6.
The linear mixed effects model based on a full likelihood is one of the few methods available to model longitudinal data subject to left censoring. However, a full likelihood approach is algebraically complicated because of the high dimension of the numerical computations, and maximum likelihood estimation can be computationally prohibitive when the data are heavily censored. Moreover, for mixed models, the complexity of the computation increases with the dimension of the random effects in the model. We propose a method based on pseudo likelihood that simplifies the computational complexities, allows a wide class of multivariate models, and can be used for many different data structures, including settings where the level of censoring is high. The motivation for this work comes from the need for a joint model to assess the joint effect of pro‐inflammatory and anti‐inflammatory biomarker data on 30‐day mortality status, while simultaneously accounting for longitudinal left censoring and correlation between markers, in the analysis of the Genetic and Inflammatory Markers of Sepsis study conducted at the University of Pittsburgh. Two markers, interleukin‐6 and interleukin‐10, which are naturally correlated because of shared biological pathways and are left‐censored because of the limited sensitivity of the assays, are considered to determine whether higher levels of these markers are associated with an increased risk of death after accounting for the left censoring and their assumed correlation. Copyright © 2016 John Wiley & Sons, Ltd.

7.
Outcome variables that are semicontinuous with clumping at zero are commonly seen in biomedical research. In addition, the outcome measurement is sometimes subject to interval censoring and a lower detection limit (LDL). This gives rise to interval‐censored observations with clumping below the LDL. The level of antibody against influenza virus measured by the hemagglutination inhibition assay is an example. The interval censoring is due to the assay's technical procedure. The clumping below the LDL is likely a result of the lack of prior exposure in some individuals, such that they either have no antibodies or do not have a detectable level of antibodies. Given a pair of such measurements from the same subject at two time points, a binary 'fold‐increase' endpoint can be defined according to the ratio of these two measurements, as is often done in vaccine clinical trials. The intervention effect or vaccine immunogenicity can be assessed by comparing the binary endpoint between groups of subjects given different vaccines or placebos. We introduce a two‐part random effects model for modeling the paired interval‐censored data with clumping below the LDL. Based on the estimated model parameters, we propose to use Monte Carlo approximation to estimate the 'fold‐increase' endpoint and the intervention effect. Bootstrapping is used for variance estimation. The performance of the proposed method is demonstrated by simulation. We analyze antibody data from an influenza vaccine trial for illustration. Copyright © 2014 John Wiley & Sons, Ltd.
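The Monte Carlo step can be sketched as follows: given parameters of a fitted two-part model, simulate paired titers (a Bernoulli 'no prior exposure' part plus log-normal positive values sharing a subject-level random effect) and approximate the probability of at least a four-fold rise. Everything below — the parameter values, the LOD/2 convention for readings below the detection limit, the four-fold threshold — is an illustrative assumption, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(5)

def mc_fold_increase(n_sim=100_000, lod=10.0, fold=4.0):
    """Monte Carlo approximation of P(post/pre >= fold) under an assumed
    fitted two-part model: a Bernoulli 'no prior exposure' part plus
    log-normal titers sharing a subject-level random effect."""
    b = rng.normal(0.0, 0.5, size=n_sim)             # shared random effect
    pre = np.exp(2.5 + b + rng.normal(0.0, 0.6, n_sim))
    post = np.exp(3.9 + b + rng.normal(0.0, 0.6, n_sim))
    pre[rng.random(n_sim) < 0.15] = 0.0              # truly unexposed subjects
    # Illustrative convention: readings below the LOD are set to LOD/2.
    pre_obs = np.where(pre < lod, lod / 2.0, pre)
    post_obs = np.where(post < lod, lod / 2.0, post)
    return np.mean(post_obs / pre_obs >= fold)

print("estimated P(at least a 4-fold rise):", mc_fold_increase())
```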

8.
Methods for analyzing interval‐censored data are well established. Unfortunately, these methods are inappropriate for studies with correlated data. In this paper, we focus on developing a method for analyzing clustered interval‐censored data. Our method is based on Cox's proportional hazards model with a piecewise‐constant baseline hazard function. The correlation structure of the data can be modeled by using Clayton's copula or an independence model with a proper adjustment in the covariance estimation. We establish estimating equations for the regression parameters and baseline hazards (and a parameter in the copula) simultaneously. Simulation results confirm that the point estimators follow a multivariate normal distribution and that our proposed variance estimators are reliable. In particular, we found that the approach with the independence model worked well even when the true correlation model was derived from Clayton's copula. We applied our method to a family‐based cohort study of pandemic H1N1 influenza in Taiwan during 2009–2010. Using the proposed method, we investigate the impact of vaccination and family contacts on the incidence of pH1N1 influenza. Copyright © 2012 John Wiley & Sons, Ltd.
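For readers unfamiliar with it, Clayton's copula ties the members of a cluster together through a single dependence parameter theta, with Kendall's tau equal to theta/(theta+2). A minimal sketch of the joint survival probability it induces for a pair of family members:

```python
import numpy as np

def clayton_joint_survival(s1, s2, theta):
    """Joint survival of a pair under Clayton's copula,
    C(s1, s2) = (s1^-theta + s2^-theta - 1)^(-1/theta), theta > 0;
    Kendall's tau equals theta / (theta + 2)."""
    return (s1 ** -theta + s2 ** -theta - 1.0) ** (-1.0 / theta)

s1, s2 = 0.7, 0.8          # marginal survival probabilities at some time t
for theta in (0.1, 1.0, 5.0):
    print(f"theta={theta}: joint survival = {clayton_joint_survival(s1, s2, theta):.3f}")
# As theta -> 0 this approaches independence: s1 * s2 = 0.56.
```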

9.
The random effect Tobit model is a regression model that accommodates left‐ and/or right‐censoring and within‐cluster dependence of the outcome variable. Regression coefficients of random effect Tobit models have conditional interpretations on a constructed latent dependent variable and do not provide inference about overall exposure effects on the original outcome scale. The marginalized random effects model (MREM) permits likelihood‐based estimation of marginal mean parameters for clustered data. For random effect Tobit models, we extend the MREM to marginalize over both the random effects and the normal space and boundary components of the censored response, in order to estimate overall exposure effects at the population level. We also extend the 'Average Predicted Value' method to estimate the model‐predicted marginal means for each person under different exposure statuses in a designated reference group, by integrating over the random effects, and then use the calculated difference to assess the overall exposure effect. Maximum likelihood estimation is carried out with a quasi‐Newton optimization algorithm, using Gauss–Hermite quadrature to approximate the integration over the random effects. We use these methods to carefully analyze two real datasets. Copyright © 2016 John Wiley & Sons, Ltd.
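The Gauss–Hermite step can be sketched directly: integrate a normal random intercept out of one cluster's left-censored likelihood via a change of variables to the quadrature nodes. The sketch below shows only this quadrature step with illustrative parameter values, not the full marginalized-model estimation.

```python
import numpy as np
from scipy.stats import norm

def cluster_loglik_gh(y, x, beta, sigma_b, sigma_e, lod, n_nodes=20):
    """Marginal log-likelihood of one cluster's left-censored responses,
    integrating out a random intercept b ~ N(0, sigma_b^2) by
    Gauss-Hermite quadrature (the quadrature step only)."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    b = np.sqrt(2.0) * sigma_b * nodes          # change of variables
    total = 0.0
    for bk, wk in zip(b, weights):
        mu = x * beta + bk
        ll = np.where(y <= lod,
                      norm.logcdf((lod - mu) / sigma_e),  # censored
                      norm.logpdf(y, mu, sigma_e)).sum()  # observed
        total += wk / np.sqrt(np.pi) * np.exp(ll)
    return np.log(total)

# One cluster: five repeated measurements, the first two at the detection limit.
y = np.array([0.5, 0.5, 1.2, 2.0, 1.7])
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
print(cluster_loglik_gh(y, x, beta=0.4, sigma_b=0.8, sigma_e=0.5, lod=0.5))
```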

10.
Varying‐coefficient models have attracted an increasing share of statistical research and are now applied to censored data analysis in medical studies. We incorporate such flexible semiparametric regression tools for interval‐censored data with a cured proportion. We adopt a two‐part model to describe the overall survival experience for such complicated data. To fit the unknown functional components in the model, we take the local polynomial approach with bandwidth chosen by cross‐validation. We establish consistency and the asymptotic distribution of the estimators and propose to use the bootstrap for inference. We construct a BIC‐type model selection method to recommend an appropriate specification of the parametric and nonparametric components in the model. We conduct extensive simulations to assess the performance of our methods. An application to decompression sickness data illustrates our methods. Copyright © 2013 John Wiley & Sons, Ltd.
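As a reminder of the building blocks, here is a minimal sketch of a local linear smoother (local polynomial of degree one) with its bandwidth chosen by leave-one-out cross-validation, on simulated data; the paper applies such smoothers inside a two-part cure model, which this sketch does not attempt.

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear estimate of E[y | x = x0] with a Gaussian kernel, bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    WX = X * w[:, None]
    coef = np.linalg.solve(X.T @ WX, WX.T @ y)
    return coef[0]                                   # intercept = fit at x0

def cv_bandwidth(x, y, grid):
    """Pick the bandwidth minimizing leave-one-out cross-validated error."""
    def loo_error(h):
        preds = [local_linear(x[i], np.delete(x, i), np.delete(y, i), h)
                 for i in range(len(x))]
        return np.mean((y - np.array(preds)) ** 2)
    return min(grid, key=loo_error)

rng = np.random.default_rng(6)
x = np.sort(rng.uniform(0.0, 1.0, 150))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, 150)
h = cv_bandwidth(x, y, grid=np.linspace(0.02, 0.3, 15))
print("CV-selected bandwidth:", h)
```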

11.
Interval‐censored data, in which the event time is only known to lie in some time interval, arise commonly in practice, for example, in a medical study in which patients visit clinics or hospitals at prescheduled times and the events of interest occur between visits. Such data are appropriately analyzed using methods that account for this uncertainty in event time measurement. In this paper, we propose a survival tree method for interval‐censored data based on the conditional inference framework. Using Monte Carlo simulations, we find that the tree is effective in uncovering underlying tree structure, performs similarly to an interval‐censored Cox proportional hazards model fit when the true relationship is linear, and performs at least as well as (and in the presence of right‐censoring outperforms) the Cox model when the true relationship is not linear. Further, the interval‐censored tree outperforms survival trees based on imputing the event time as an endpoint or the midpoint of the censoring interval. We illustrate the application of the method to tooth emergence data.

12.
Uveitis is characterised by recurrent inflammation of the eye, and ongoing inflammation can have a severe impact on the patient's visual acuity. The Rotterdam Eye Hospital has been collecting data on every uveitis patient visiting the hospital since 2000. We propose a joint model for the inflammation and visual acuity with the purpose of making dynamic predictions. Dynamic prediction models allow predictions to be updated during the follow-up of the patient based on the patient's disease history. The joint model consists of a submodel for the inflammation, the event history outcome, and one for the visual acuity, the longitudinal outcome. The inflammation process is described by a two-state reversible multistate model, where the transition times are interval censored. Correlated log-normal frailties are included in the multistate model to account for the within-eye and within-patient correlation. A linear mixed model is used for the visual acuity. The joint model is fitted in a two-stage procedure, and we illustrate how the model can be used to make dynamic predictions. The performance of the method was investigated in a simulation study. The novelty of the proposed model includes the extension to a multistate outcome, whereas previously the standard has been to consider survival or competing risk outcomes. Furthermore, it is usually the case that the longitudinal outcome affects the event history outcome, but in this model the relation is reversed.

13.
We derive a nonparametric maximum likelihood estimate of the overall survival distribution in an illness–death model from interval‐censored observations with unknown status of the nonfatal event. This expanded model is applied to the re‐analysis of data from a randomized trial in which infants born to women infected with HIV‐1, who were randomly assigned to breastfeeding or to counseling for formula feeding, were followed for 24 months for HIV‐1 positivity, HIV‐1‐free survival, and overall survival. HIV‐1 positivity, assessed by postpartum venous blood tests, is the interval‐censored nonfatal event, and HIV‐1 positivity status is unknown for a subset of infants because of the periodic assessment. The analysis demonstrates that estimating the overall and the pre‐ and post‐nonfatal‐event survival distributions with the proposed methods provides novel insights into how overall survival is influenced by the occurrence of the nonfatal event. More generally, it suggests the usefulness of this expanded illness–death model when evaluating composite endpoints as potential surrogates for overall survival in a given disease setting. Copyright © 2010 John Wiley & Sons, Ltd.

14.
A model is developed for chronic diseases with an indolent phase that is followed by a phase with more active disease resulting in progression and damage. The time scales for the intensity functions for the active phase are more naturally based on the time since the start of the active phase, corresponding to a semi‐Markov formulation. This two‐phase model enables one to fit a separate regression model for the duration of the indolent phase and intensity‐based models for the more active second phase. In cohort studies for which the disease status is only known at a series of clinical assessment times, transition times are interval‐censored, which means the time origin for phase II is interval‐censored. Weakly parametric models with piecewise constant baseline hazard and rate functions are specified, and an expectation‐maximization algorithm is described for model fitting. Simulation studies show good performance of the proposed model under maximum likelihood and two‐stage estimation. An application to data from the motivating study of disease progression in psoriatic arthritis illustrates the procedure and identifies new human leukocyte antigens associated with the duration of the indolent phase. Copyright © 2017 John Wiley & Sons, Ltd.
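The weakly parametric building block is easy to write down: a piecewise-constant hazard gives a closed-form survival function, and a transition observed to lie in (L, R] contributes S(L) − S(R) to the likelihood. Below is a minimal sketch with illustrative cut points and rates; the paper embeds such pieces in a two-phase semi-Markov model fitted by EM, which is not attempted here.

```python
import numpy as np

def pwc_cumhaz(t, cuts, rates):
    """Cumulative hazard at time t under a piecewise-constant hazard with
    change points `cuts` and one rate per interval (len(rates) = len(cuts) + 1)."""
    edges = np.concatenate([[0.0], cuts, [np.inf]])
    exposure = np.clip(t - edges[:-1], 0.0, np.diff(edges))  # time spent per interval
    return float(np.sum(np.asarray(rates) * exposure))

def interval_loglik(left, right, cuts, rates):
    """Log-likelihood of a transition known only to lie in (left, right]:
    log{ S(left) - S(right) }."""
    S = lambda u: np.exp(-pwc_cumhaz(u, cuts, rates))
    return np.log(S(left) - S(right))

cuts = np.array([1.0, 3.0])
rates = np.array([0.2, 0.5, 0.8])   # hazard on [0,1), [1,3), [3,inf)
print(interval_loglik(0.5, 2.0, cuts, rates))
```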

15.
Interval‐censored data occur naturally in many fields and the main feature is that the failure time of interest is not observed exactly, but is known to fall within some interval. In this paper, we propose a semiparametric probit model for analyzing case 2 interval‐censored data as an alternative to the existing semiparametric models in the literature. Specifically, we propose to approximate the unknown nonparametric nondecreasing function in the probit model with a linear combination of monotone splines, leading to only a finite number of parameters to estimate. Both maximum likelihood and Bayesian estimation methods are proposed. For each method, regression parameters and the baseline survival function are estimated jointly. The proposed methods make no assumptions about the observation process, are applicable to any interval‐censored data, and are easy to implement. The methods are evaluated by simulation studies and are illustrated by two real‐life interval‐censored data applications. Copyright © 2010 John Wiley & Sons, Ltd.

16.
Joint latent class modeling is an appealing approach for evaluating the association between a longitudinal biomarker and clinical outcome when the study population is heterogeneous. The link between the biomarker trajectory and the risk of event is reflected by the latent classes, which accommodate the underlying population heterogeneity. The estimation of joint latent class models may be complicated by the censored data in the biomarker measurements due to detection limits. We propose a modified likelihood function under the parametric assumption of biomarker distribution and develop a Monte Carlo expectation‐maximization algorithm for joint analysis of a biomarker and a binary outcome. We conduct simulation studies to demonstrate the satisfactory performance of our Monte Carlo expectation‐maximization algorithm and the superiority of our method to the naive imputation method for handling censored biomarker data. In addition, we apply our method to the Genetic and Inflammatory Markers of Sepsis study to investigate the role of inflammatory biomarker profile in predicting 90‐day mortality for patients hospitalized with community‐acquired pneumonia.

17.
Event history studies based on disease clinic data often face several complications. Specifically, patients may visit the clinic irregularly, and the intermittent observation times could depend on disease‐related variables; this can cause a failure time outcome to be dependently interval‐censored. We propose a weighted estimating function approach so that dependently interval‐censored failure times can be analysed consistently. A so‐called inverse‐intensity‐of‐visit weight is employed to adjust for the informative inspection times. Left truncation of failure times can also be easily handled. Additionally, in observational studies, treatment assignments are typically non‐randomized and may depend on disease‐related variables. An inverse‐probability‐of‐treatment weight is applied to estimating functions to further adjust for measured confounders. Simulation studies are conducted to examine the finite sample performances of the proposed estimators. Finally, the Toronto Psoriatic Arthritis Cohort Study is used for illustration. Copyright © 2017 John Wiley & Sons, Ltd.
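The weighting logic can be sketched with logistic working models on simulated data: one model for treatment given disease-related covariates (inverse-probability-of-treatment weights) and one stand-in model for being observed at a visit. The paper uses a visit-intensity model rather than the simple logistic model below; all names and values are illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 1000
L = rng.normal(size=n)                                     # disease-related covariate
A = rng.binomial(1, 1.0 / (1.0 + np.exp(-L)))              # non-randomized treatment
visited = rng.binomial(1, 1.0 / (1.0 + np.exp(-0.5 * L)))  # informative inspection

# Inverse-probability-of-treatment weights from a logistic working model.
ps = sm.Logit(A, sm.add_constant(L)).fit(disp=0).predict(sm.add_constant(L))
w_trt = np.where(A == 1, 1.0 / ps, 1.0 / (1.0 - ps))

# Inverse-probability-of-visit weights (a logistic stand-in for the
# inverse-intensity-of-visit weights used in the paper).
pv = sm.Logit(visited, sm.add_constant(L)).fit(disp=0).predict(sm.add_constant(L))
w_visit = np.where(visited == 1, 1.0 / pv, 0.0)            # only observed visits count

# Combined weight attached to each observed record in the estimating function.
w = w_trt * w_visit
print("mean combined weight among observed visits:", w[visited == 1].mean())
```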

18.
In conventional survival analysis there is an underlying assumption that all study subjects are susceptible to the event. In general, this assumption does not adequately hold when investigating the time to an event other than death. Owing to genetic and/or environmental etiology, study subjects may not be susceptible to the disease. Analyzing nonsusceptibility has become an important topic in biomedical, epidemiological, and sociological research, with recent statistical studies proposing several mixture models for right‐censored data in regression analysis. In longitudinal studies, we often encounter left, interval, and right‐censored data because of incomplete observations of the time endpoint, as well as possibly left‐truncated data arising from the dissimilar entry ages of recruited healthy subjects. To analyze these kinds of incomplete data while accounting for nonsusceptibility and possible crossing hazards in the framework of mixture regression models, we utilize a logistic regression model to specify the probability of susceptibility, and a generalized gamma distribution, or a log‐logistic distribution, in the accelerated failure time location‐scale regression model to formulate the time to the event. Relative times of the conditional event time distribution for susceptible subjects are extended in the accelerated failure time location‐scale submodel. We also construct graphical goodness‐of‐fit procedures on the basis of the Turnbull–Frydman estimator and newly proposed residuals. Simulation studies were conducted to demonstrate the validity of the proposed estimation procedure. The mixture regression models are illustrated with alcohol abuse data from the Taiwan Aboriginal Study Project and hypertriglyceridemia data from the Cardiovascular Disease Risk Factor Two‐township Study in Taiwan. Copyright © 2013 John Wiley & Sons, Ltd.
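A minimal version of such a mixture cure model can be written down directly: a logistic model for the probability of susceptibility and a log-logistic accelerated failure time model for event times among the susceptible. The sketch below handles right censoring only — not the left truncation, interval censoring, or generalized gamma option of the paper — and all values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def cure_negloglik(params, t, delta, z):
    """Mixture cure likelihood: logit P(susceptible) = a0 + a1*z;
    for susceptibles, log(T) = b0 + b1*z + s*W with W standard logistic
    (a log-logistic AFT). delta = 1 for observed events, 0 for censored."""
    a0, a1, b0, b1, log_s = params
    s = np.exp(log_s)
    pi = 1.0 / (1.0 + np.exp(-(a0 + a1 * z)))           # P(susceptible)
    u = (np.log(t) - (b0 + b1 * z)) / s
    log1pexp = np.logaddexp(0.0, u)                     # log(1 + e^u), stable
    log_f = u - np.log(s) - np.log(t) - 2.0 * log1pexp  # log-logistic log-density
    S = np.exp(-log1pexp)                               # log-logistic survival
    ll = np.where(delta == 1, np.log(pi) + log_f, np.log((1 - pi) + pi * S))
    return -ll.sum()

rng = np.random.default_rng(8)
n = 800
z = rng.binomial(1, 0.5, n)
susceptible = rng.random(n) < 1.0 / (1.0 + np.exp(-(0.5 + 0.8 * z)))
t_event = np.exp(1.0 - 0.5 * z + 0.4 * rng.logistic(size=n))
t_cens = rng.exponential(8.0, n)
t = np.where(susceptible, np.minimum(t_event, t_cens), t_cens)
delta = (susceptible & (t_event <= t_cens)).astype(int)

fit = minimize(cure_negloglik, np.zeros(5), args=(t, delta, z),
               method="Nelder-Mead", options={"maxiter": 5000})
print("(a0, a1, b0, b1, log s):", fit.x)
```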

19.
We develop a multivariate cure survival model to estimate lifetime patterns of colorectal cancer screening. Screening data cover long periods of time, with sparse observations for each person. Some events may occur before the study begins or after the study ends, so the data are both left‐censored and right‐censored, and some individuals are never screened (the 'cured' population). We propose a multivariate parametric cure model that can be used with left‐censored and right‐censored data. Our model allows for the estimation of the time to screening as well as the average number of times individuals will be screened. We calculate likelihood functions based on the observations for each subject using a distribution that accounts for within‐subject correlation and estimate parameters using Markov chain Monte Carlo methods. We apply our methods to the estimation of lifetime colorectal cancer screening behavior in the SEER‐Medicare data set. Copyright © 2016 John Wiley & Sons, Ltd.

20.
Problems common to many longitudinal HIV/AIDS, cancer, vaccine, and environmental exposure studies are the presence of a lower limit of quantification of a skewed outcome and time‐varying covariates measured with error. Relatively little published work deals with these features of longitudinal data simultaneously. In particular, left‐censored data falling below a limit of detection may sometimes have a larger proportion than expected under the usually assumed log‐normal distribution. In such cases, alternative models that can account for a high proportion of censored data should be considered. In this article, we present an extension of the Tobit model that incorporates a mixture of true undetectable observations and values from a skew‐normal distribution for an outcome with possible left censoring and skewness, together with covariates subject to substantial measurement error. To quantify the covariate process, we offer a flexible nonparametric mixed‐effects model within the Tobit framework. A Bayesian modeling approach is used to assess the simultaneous impact of left censoring, skewness, and measurement error in covariates on inference. The proposed methods are illustrated using real data from an AIDS clinical study. Copyright © 2013 John Wiley & Sons, Ltd.
