Similar Documents
20 similar documents found (search time: 15 ms)
1.
This article explores Bayesian joint models for a quantile of a longitudinal response, a mismeasured covariate, and an event time outcome, with an attempt to (i) characterize the entire conditional distribution of the response variable through quantile regression, which may be more robust to outliers and misspecification of the error distribution; (ii) account for measurement error, evaluate non-ignorable missing observations, and adjust for departures from normality in the covariate; and (iii) overcome the lack of confidence in specifying a parametric time-to-event model. When statistical inference is carried out for a longitudinal data set with non-central location, non-linearity, non-normality, measurement error, and missing values, as well as an interval-censored event time, it is important to account for these data features simultaneously in order to obtain more reliable and robust inferential results. Toward this end, we develop a Bayesian joint modeling approach to simultaneously estimate all parameters in three models: a quantile regression-based nonlinear mixed-effects model for the response using the asymmetric Laplace distribution, a linear mixed-effects model with a skew-t distribution for the mismeasured covariate in the presence of informative missingness, and an accelerated failure time model with an unspecified nonparametric distribution for the event time. We apply the proposed approach to an AIDS clinical data set and conduct simulation studies to assess the performance of the proposed joint models and method. Copyright © 2016 John Wiley & Sons, Ltd.
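As a concrete illustration of the quantile-regression building block (not code from the paper): maximizing an asymmetric Laplace likelihood is equivalent to minimizing the check loss. The minimal sketch below uses simulated heavy-tailed data; all names and values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(beta, X, y, tau):
    """Check (asymmetric Laplace) loss: minimizing it over beta yields the
    tau-th conditional quantile, the core of quantile regression."""
    r = y - X @ beta
    return np.sum(r * (tau - (r < 0)))

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=3, size=n)  # heavy-tailed errors

tau = 0.5  # median regression: robust to the outlying t(3) errors
fit = minimize(check_loss, x0=np.zeros(2), args=(X, y, tau), method="Nelder-Mead")
print("quantile regression estimate:", fit.x)  # close to [1.0, 2.0]
```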

2.
We studied the problem of testing a hypothesized distribution in survival regression models when the data are right censored and survival times are influenced by covariates. A modified chi-squared type test, known as the Nikulin-Rao-Robson statistic, is applied for the comparison of accelerated failure time models. This statistic is used to test the goodness-of-fit of the hypertabastic survival model and four other unimodal hazard rate functions. The results of a simulation study showed that the hypertabastic distribution can be used as an alternative to the log-logistic and log-normal distributions. Because of its flexible hazard shapes, this distribution can also serve in statistical modeling as a competitor to the Birnbaum-Saunders and inverse Gaussian distributions. Results for a real data application are also shown. Copyright © 2017 John Wiley & Sons, Ltd.
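The full Nikulin-Rao-Robson statistic adjusts for censoring and estimated parameters; the sketch below shows only the classical Pearson chi-squared backbone of such tests, on an uncensored sample with the hypothesized parameters treated as known. It is an illustration of the general idea, not the paper's statistic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.lognormal(mean=0.5, sigma=0.8, size=400)   # uncensored survival times

# Pearson backbone: bin the data into k cells with equal hypothesized
# probability and compare observed counts with expected counts.
k = 10
edges = stats.lognorm.ppf(np.linspace(0, 1, k + 1), s=0.8, scale=np.exp(0.5))
observed, _ = np.histogram(x, bins=edges)
expected = np.full(k, len(x) / k)
chi2 = np.sum((observed - expected) ** 2 / expected)
print("chi-squared:", chi2, " 95% cutoff:", stats.chi2.ppf(0.95, df=k - 1))
```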

3.
In this article, we show how Tobit models can address the problem of identifying characteristics of subjects with left-censored outcomes, in the context of developing a method for jointly analyzing time-to-event and longitudinal data. Methods exist for handling these types of data separately, but they may not be appropriate when the time to event depends on the longitudinal outcome and a substantial portion of values are reported below the limits of detection. An alternative approach is to develop a joint model for the time-to-event outcome and a two-part longitudinal outcome, linking them through random effects. This proposed approach is implemented to assess the association between the risk of decline of the CD4/CD8 ratio and rates of change in viral load, and to discriminate patients who are potential progressors to AIDS from those who are not. We develop a fully Bayesian approach for fitting joint two-part Tobit models and illustrate the proposed methods on simulated and real data from an AIDS clinical study.
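For intuition, here is a minimal sketch of the Tobit ingredient alone: a likelihood that uses the normal density for observed values and the normal CDF mass for values reported below a detection limit. The data and names are illustrative, and the full two-part joint model in the paper is much richer.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_negloglik(theta, X, y, lod):
    """Negative log-likelihood of a Tobit model with left censoring at a
    detection limit `lod`: density for observed values, CDF mass below."""
    beta, sigma = theta[:-1], np.exp(theta[-1])
    mu = X @ beta
    cens = y <= lod
    ll_obs = norm.logpdf(y[~cens], loc=mu[~cens], scale=sigma)
    ll_cens = norm.logcdf(lod, loc=mu[cens], scale=sigma)
    return -(ll_obs.sum() + ll_cens.sum())

rng = np.random.default_rng(1)
n, lod = 300, -1.0
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y_latent = X @ np.array([0.0, 1.5]) + rng.normal(size=n)
y = np.maximum(y_latent, lod)  # values below the limit are reported at lod

fit = minimize(tobit_negloglik, x0=np.zeros(3), args=(X, y, lod))
print("beta:", fit.x[:-1], "sigma:", np.exp(fit.x[-1]))
```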

4.
A semi-parametric accelerated failure time cure model
Li CS, Taylor JM. Statistics in Medicine 2002; 21(21): 3235-3247.
A cure model is a useful approach for analysing failure time data in which some subjects could eventually experience, and others never experience, the event of interest. A cure model has two components: incidence, which indicates whether the event could eventually occur, and latency, which denotes when the event will occur given that the subject is susceptible to it. In this paper, we propose a semi-parametric cure model in which covariates can affect both the incidence and the latency. A logistic regression model is proposed for the incidence, and the latency is determined by an accelerated failure time regression model with unspecified error distribution. An EM algorithm is developed to fit the model. The procedure is applied to a data set of tonsil cancer patients treated with radiation therapy.
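A minimal sketch of the two-component likelihood follows. For illustration the latency error is taken normal (log-normal AFT); the paper instead leaves the error distribution unspecified and estimates it within an EM algorithm. All names are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def cure_model_loglik(t, delta, z, b, beta, sigma):
    """Log-likelihood of a mixture cure model:
    incidence  P(susceptible | z) = expit(z @ b)   (logistic part)
    latency    log T = z @ beta + sigma * eps      (AFT part, normal eps)."""
    p = 1.0 / (1.0 + np.exp(-(z @ b)))          # P(susceptible)
    u = (np.log(t) - z @ beta) / sigma
    f = norm.pdf(u) / (sigma * t)               # event-time density if susceptible
    S = norm.sf(u)                              # survival if susceptible
    # events come only from susceptibles; censored subjects are either
    # cured or susceptible-but-not-yet-failed
    return np.sum(delta * np.log(p * f) + (1 - delta) * np.log(1 - p + p * S))

t = np.array([1.2, 3.4, 0.7]); delta = np.array([1, 0, 1])
z = np.column_stack([np.ones(3), [0.1, -0.5, 0.3]])
print(cure_model_loglik(t, delta, z, b=np.zeros(2), beta=np.zeros(2), sigma=1.0))
```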

5.
Instrumental variable (IV) analysis has been widely used in economics, epidemiology, and other fields to estimate the causal effects of covariates on outcomes in the presence of unobserved confounders and/or measurement errors in covariates. However, IV methods for time-to-event outcomes with censored data remain underdeveloped. This paper proposes a Bayesian approach to IV analysis with a censored time-to-event outcome, using a two-stage linear model. A Markov chain Monte Carlo sampling method is developed for parameter estimation in both normal and non-normal linear models with elliptically contoured error distributions. The performance of our method is examined by simulation studies. Our method largely reduces bias and greatly improves the coverage probability of the estimated causal effect, compared with a method that ignores the unobserved confounders and measurement errors. We illustrate our method on the Women's Health Initiative Observational Study and the Atherosclerosis Risk in Communities Study. Copyright © 2014 John Wiley & Sons, Ltd.
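The two-stage idea underlying the model can be shown in its simplest frequentist form, two-stage least squares on an uncensored outcome; the paper's Bayesian, censoring-aware version builds on the same structure. The simulation below is illustrative only.

```python
import numpy as np

def two_stage_ls(y, x, w):
    """Classical two-stage least squares with instrument w for the
    confounded covariate x (intercepts included).
    Stage 1: regress x on w; Stage 2: regress y on the fitted x."""
    W = np.column_stack([np.ones_like(w), w])
    x_hat = W @ np.linalg.lstsq(W, x, rcond=None)[0]   # stage 1 fit
    X_hat = np.column_stack([np.ones_like(x_hat), x_hat])
    return np.linalg.lstsq(X_hat, y, rcond=None)[0]    # stage 2 estimate

rng = np.random.default_rng(2)
n = 5000
u = rng.normal(size=n)            # unobserved confounder
w = rng.normal(size=n)            # instrument: affects x, not y directly
x = w + u + rng.normal(size=n)
y = 2.0 * x + 3.0 * u + rng.normal(size=n)

print("naive OLS slope:", np.polyfit(x, y, 1)[0])    # biased by u
print("2SLS slope     :", two_stage_ls(y, x, w)[1])  # close to 2.0
```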

6.
Multivariate interval-censored failure time data arise commonly in many studies in epidemiology and biomedicine. Analysis of these types of data is more challenging than for right-censored data. We propose a simple multiple imputation strategy to recover the order of occurrences of the interval-censored event times, using a conditional predictive distribution function derived from a parametric gamma random effects model. By imputing the interval-censored failure times, the regression and dependence parameters of a gamma frailty proportional hazards model can be estimated with the well-developed EM algorithm. A robust estimator for the covariance matrix is suggested to adjust for possible misspecification of the parametric baseline hazard function. The finite sample properties of the proposed method are investigated via simulation. The performance of the proposed method is highly satisfactory, whereas the computational burden is minimal. The proposed method is also applied to the diabetic retinopathy study (DRS) data for illustration purposes, and the estimates are compared with those based on other existing methods for bivariate grouped survival data. Copyright © 2010 John Wiley & Sons, Ltd.
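The core imputation step can be sketched with a simple stand-in: drawing an event time from a parametric model conditional on the censoring interval, via the inverse-CDF trick. An exponential distribution is used here purely for illustration; the paper derives the conditional predictive distribution from a gamma random effects model.

```python
import numpy as np

def impute_interval_censored(left, right, rate, rng):
    """Draw an event time from an exponential(rate) distribution
    conditional on the event lying in (left, right], via inverse CDF."""
    Fl = 1.0 - np.exp(-rate * left)
    Fr = 1.0 - np.exp(-rate * right)
    u = rng.uniform(Fl, Fr)          # uniform over the interval's CDF mass
    return -np.log1p(-u) / rate      # invert F(t) = 1 - exp(-rate * t)

rng = np.random.default_rng(3)
draws = [impute_interval_censored(1.0, 2.5, rate=0.8, rng=rng) for _ in range(5)]
print(draws)  # every draw falls inside (1.0, 2.5]
```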

7.
A limiting feature of previous work on growth mixture modeling is the assumption of normally distributed variables within each latent class. With strongly non-normal outcomes, this means that several latent classes are required to capture the observed variable distributions. Relaxing the assumption of within-class normality has the advantage that a non-normal observed distribution does not necessitate using more than one class to fit it; it is valuable to add parameters representing the skewness and the thickness of the tails. A new growth mixture model of this kind is proposed, drawing on recent work in a series of papers using the skew-t distribution. The new method is illustrated using the longitudinal development of body mass index in two data sets. The first is from the National Longitudinal Survey of Youth, covering ages 12–23 years; here, the development is related to an antecedent measuring socioeconomic background. The second is from the Framingham Heart Study, covering ages 25–65 years; here, the development is related to the concurrent event of treatment for hypertension using a joint growth mixture-survival model. Copyright © 2014 John Wiley & Sons, Ltd.
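The skew-t density that supplies the extra skewness and tail-thickness parameters can be written down directly. Below is a minimal sketch of the standard Azzalini-Capitanio form (the within-class versions in the paper add location and scale parameters); alpha controls skewness and nu the tails, with alpha = 0 recovering the symmetric t.

```python
import numpy as np
from scipy.stats import t as student_t

def skew_t_pdf(x, alpha, nu):
    """Azzalini-Capitanio skew-t density: a Student-t kernel reweighted
    by a t CDF term, adding skewness to the heavy-tailed t."""
    w = alpha * x * np.sqrt((nu + 1.0) / (nu + x**2))
    return 2.0 * student_t.pdf(x, df=nu) * student_t.cdf(w, df=nu + 1.0)

x = np.linspace(-4, 4, 9)
print(skew_t_pdf(x, alpha=3.0, nu=5.0))  # right-skewed, heavy-tailed density
```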

8.
Logistic or other constraints often preclude the possibility of conducting incident cohort studies. A feasible alternative in such cases is a cross-sectional prevalent cohort study, for which we recruit prevalent cases, that is, subjects who have already experienced the initiating event, say the onset of a disease. When the interest lies in estimating the lifespan between the initiating event and a terminating event, say death, such subjects may be followed prospectively until the terminating event or loss to follow-up, whichever happens first. It is well known that prevalent cases have, on average, longer lifespans. As such, they do not constitute a representative random sample from the target population; they comprise a biased sample. If the initiating events are generated by a stationary Poisson process, the so-called stationarity assumption, this bias is called length bias. The current literature on length-biased sampling lacks a simple method for estimating the margin of error of commonly used summary statistics. We fill this gap by adapting empirical likelihood-based confidence intervals to right-censored length-biased survival data. Both large and small sample behaviors of these confidence intervals are studied. We illustrate our method using a set of data on survival with dementia, collected as part of the Canadian Study of Health and Aging. Copyright © 2012 John Wiley & Sons, Ltd.
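Length bias itself is easy to demonstrate numerically. In the sketch below (uncensored, for illustration only; the paper's empirical likelihood method additionally handles right censoring), sampling proportional to lifespan inflates the naive mean, while the harmonic mean corrects it because the length-biased density is g(t) = t f(t)/mu, so E[1/T] = 1/mu under g.

```python
import numpy as np

rng = np.random.default_rng(4)
mu_true = 2.0
pop = rng.exponential(mu_true, size=200_000)            # population lifespans
biased = rng.choice(pop, size=5000, p=pop / pop.sum())  # length-biased sample

print("naive mean    :", biased.mean())                 # ~ 2 * mu_true, biased
print("harmonic mean :", len(biased) / np.sum(1.0 / biased))  # ~ mu_true
```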

9.
We study Bayesian linear regression models with skew-symmetric scale mixtures of normal error distributions. These kinds of models can be used to capture departures from the usual assumption of normality of the errors in terms of heavy tails and asymmetry. We propose a general noninformative prior structure for these regression models and show that the corresponding posterior distribution is proper under mild conditions. We extend these propriety results to cases where the response variables are censored. The latter scenario is of interest in the context of accelerated failure time models, which are relevant in survival analysis. We present a simulation study that demonstrates good frequentist properties of the posterior credible intervals associated with the proposed priors. This study also sheds some light on the trade-off between increased model flexibility and the risk of over-fitting. We illustrate the performance of the proposed models with real data. Although we focus on models with univariate response variables, we also present some extensions to the multivariate case in the Supporting Information. Copyright © 2016 John Wiley & Sons, Ltd.
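One member of this error family can be simulated from its stochastic representation: a skew-normal variate rescaled by an inverse-gamma mixing variable gives a skew-t, combining asymmetry (alpha) with heavy tails (nu). This is an illustrative draw from one such distribution, not the paper's general construction.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n, alpha, nu = 100_000, 4.0, 5.0

# Skew-symmetric scale mixture of normals: skew-normal base variate times
# the square root of an InvGamma(nu/2, nu/2) mixing variable -> skew-t.
z = stats.skewnorm.rvs(alpha, size=n, random_state=rng)
v = nu / rng.chisquare(nu, size=n)
x = np.sqrt(v) * z

print("skewness:", stats.skew(x), " excess kurtosis:", stats.kurtosis(x))
```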

10.
Cai Wu, Liang Li. Statistics in Medicine 2018; 37(21): 3106-3124.
This paper focuses on quantifying and estimating the predictive accuracy of prognostic models for time-to-event outcomes with competing events. We consider the time-dependent discrimination and calibration metrics, including the receiver operating characteristics curve and the Brier score, in the context of competing risks. To address censoring, we propose a unified nonparametric estimation framework for both discrimination and calibration measures, by weighting the censored subjects with the conditional probability of the event of interest given the observed data. The proposed method can be extended to time-dependent predictive accuracy metrics constructed from a general class of loss functions. We apply the methodology to a data set from the African American Study of Kidney Disease and Hypertension to evaluate the predictive accuracy of a prognostic risk score in predicting end-stage renal disease, accounting for the competing risk of pre-end-stage renal disease death, and evaluate its numerical performance in extensive simulation studies.
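A common single-event precursor of such calibration metrics is the inverse-probability-of-censoring-weighted (IPCW) Brier score. The sketch below implements that simpler Graf-style version, with the censoring distribution estimated by Kaplan-Meier; the paper's estimator uses a different, conditional weighting scheme and handles competing risks. It assumes the censoring-survival estimate stays positive at the times used.

```python
import numpy as np

def km_survival(times, events):
    """Kaplan-Meier estimator; returns a step-function evaluator S(x)."""
    event_times = np.unique(times[events == 1])
    s_vals, s = [], 1.0
    for ti in event_times:
        d = np.sum((times == ti) & (events == 1))
        s *= 1.0 - d / np.sum(times >= ti)
        s_vals.append(s)
    steps = np.concatenate([[1.0], s_vals])
    return lambda x: steps[np.searchsorted(event_times, x, side="right")]

def ipcw_brier(times, events, pred_risk, tau):
    """Brier score at horizon tau with IPCW: subjects censored before tau
    get weight zero; the rest are reweighted by the censoring survival G."""
    G = km_survival(times, 1 - events)        # KM fit to the censoring times
    had_event = (times <= tau) & (events == 1)
    at_risk = times > tau
    w = np.zeros(len(times))
    w[had_event] = 1.0 / G(times[had_event])  # assumes G > 0 here
    w[at_risk] = 1.0 / G(tau)
    return np.mean(w * (had_event.astype(float) - pred_risk) ** 2)

rng = np.random.default_rng(10)
n = 400
t_event = rng.exponential(2.0, n)
t_cens = rng.exponential(3.0, n)
times, events = np.minimum(t_event, t_cens), (t_event <= t_cens).astype(int)
pred_risk = np.full(n, 1.0 - np.exp(-1.5 / 2.0))  # true risk at tau = 1.5
print("IPCW Brier at t=1.5:", ipcw_brier(times, events, pred_risk, tau=1.5))
```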

11.
In many medical problems that collect multiple observations per subject, the time to an event is often of interest. Sometimes, the occurrence of the event can be recorded at regular intervals, leading to interval-censored data. It is further desirable to obtain the most parsimonious model in order to increase predictive power and ease of interpretation. Variable selection, and often random effects selection in the case of clustered data, becomes crucial in such applications. We propose a Bayesian method for random effects selection in mixed effects accelerated failure time (AFT) models. The proposed method relies on the Cholesky decomposition of the random effects covariance matrix and the parameter-expansion method for the selection of random effects. The Dirichlet prior is used to model the uncertainty in the random effects. The error distribution for the accelerated failure time model is specified using a Gaussian mixture, allowing a flexible error density and prediction of the survival and hazard functions. We demonstrate the model using extensive simulations and the Signal Tandmobiel Study®. Copyright © 2013 John Wiley & Sons, Ltd.
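The Cholesky device can be illustrated in isolation: parameterizing the random-effects covariance as Sigma = L L' keeps Sigma positive semi-definite, and zeroing a row of L drops the corresponding random effect. This is only the algebraic idea, not the paper's prior or sampler.

```python
import numpy as np

# Cholesky parameterization of a random-effects covariance: Sigma = L L'.
L = np.array([[1.0, 0.0, 0.0],
              [0.4, 0.8, 0.0],
              [0.2, 0.1, 0.6]])
print("full Sigma:\n", L @ L.T)

L_sel = L.copy()
L_sel[2, :] = 0.0   # "select out" the third random effect
print("after selection:\n", L_sel @ L_sel.T)  # third row/column become zero
```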

12.
Methods for analyzing interval-censored data are well established. Unfortunately, these methods are inappropriate for studies with correlated data. In this paper, we focus on developing a method for analyzing clustered interval-censored data. Our method is based on Cox's proportional hazards model with a piecewise-constant baseline hazard function. The correlation structure of the data can be modeled using Clayton's copula or an independence model with proper adjustment in the covariance estimation. We establish estimating equations for the regression parameters and baseline hazards (and the copula parameter) simultaneously. Simulation results confirm that the point estimators follow a multivariate normal distribution and that our proposed variance estimators are reliable. In particular, we found that the independence-model approach worked well even when the true correlation model was derived from Clayton's copula. We applied our method to a family-based cohort study of pandemic H1N1 influenza in Taiwan during 2009–2010. Using the proposed method, we investigate the impact of vaccination and family contacts on the incidence of pH1N1 influenza. Copyright © 2012 John Wiley & Sons, Ltd.
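The Clayton copula that models within-cluster dependence has a simple closed form when applied to two marginal survival probabilities, sketched below with illustrative values.

```python
import numpy as np

def clayton_joint_survival(u, v, theta):
    """Clayton copula on two marginal survival probabilities:
    S(t1, t2) = (u^-theta + v^-theta - 1)^(-1/theta), theta > 0.
    Larger theta means stronger positive dependence (Kendall's tau is
    theta / (theta + 2)); theta -> 0 recovers independence u * v."""
    return (u ** (-theta) + v ** (-theta) - 1.0) ** (-1.0 / theta)

u, v = 0.8, 0.6   # marginal survival probabilities of two cluster members
print(clayton_joint_survival(u, v, theta=2.0))  # > u * v: positive dependence
print(u * v)                                    # independence benchmark
```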

13.
We consider the monitoring of surgical outcomes, where each patient has a different risk of post-operative mortality due to risk factors that exist prior to the surgery. We propose a risk-adjusted (RA) survival time CUSUM chart (RAST CUSUM) for monitoring a continuous, time-to-event variable that may be right-censored. Risk adjustment is accomplished using accelerated failure time regression models. We compare the average run length performance of the RAST CUSUM chart with the RA Bernoulli CUSUM chart using data from cardiac surgeries to motivate the details of the comparison. The comparisons show that the RAST CUSUM chart is more efficient at detecting a sudden increase in the odds of mortality than the RA Bernoulli CUSUM chart, especially when the fraction of censored observations is relatively low or when a small increase in the odds of mortality occurs. We also discuss the impact of the amount of training data used to estimate chart parameters as well as the implementation of the RAST CUSUM chart during prospective monitoring. Copyright © 2009 John Wiley & Sons, Ltd.
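The CUSUM machinery shared by both charts is the recursion S_t = max(0, S_{t-1} + w_t), signalling when the path crosses a threshold h. In the paper the scores w_t are risk-adjusted likelihood-ratio increments from the AFT model; the sketch below uses generic simulated scores to show the mechanics only.

```python
import numpy as np

def cusum_path(scores, threshold):
    """Upper CUSUM recursion: accumulate per-patient scores (positive when
    the outcome is worse than the risk-adjusted expectation) and record
    the first time the path crosses the threshold."""
    s, path, signal_at = 0.0, [], None
    for i, w in enumerate(scores):
        s = max(0.0, s + w)
        path.append(s)
        if signal_at is None and s >= threshold:
            signal_at = i
    return np.array(path), signal_at

rng = np.random.default_rng(5)
w = rng.normal(-0.1, 1.0, size=100)  # in-control: slight negative drift
w[60:] += 0.6                        # outcomes worsen from patient 60 on
path, alarm = cusum_path(w, threshold=4.0)
print("alarm raised at patient:", alarm)
```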

14.
Nonresponse and missing data are common in observational studies. Ignoring or inadequately handling missing data may lead to biased parameter estimation, incorrect standard errors and, as a consequence, incorrect statistical inference and conclusions. We present a strategy for modelling non-ignorable missingness in which the probability of nonresponse depends on the outcome. Using a simple case of logistic regression, we quantify the bias in regression estimates and show that the observed likelihood is non-identifiable under a non-ignorable missing data mechanism. We then adopt a selection model factorisation of the joint distribution as the basis for a sensitivity analysis to study changes in estimated parameters and the robustness of study conclusions under different assumptions. A Bayesian framework for model estimation is used, as it provides a flexible approach for incorporating different missing data assumptions and conducting sensitivity analysis. Using simulated data, we explore the performance of the Bayesian selection model in correcting for bias in a logistic regression. We then implement our strategy using survey data from the 45 and Up Study to investigate factors associated with worsening health from the baseline to the follow-up survey. Our findings have practical implications for the use of the 45 and Up Study data to answer important research questions relating to health and quality of life. Copyright © 2017 John Wiley & Sons, Ltd.
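The bias from outcome-dependent nonresponse is easy to reproduce in a toy simulation (illustrative only, not the paper's study): when the chance of observing y depends on y itself, a complete-case analysis gets even the marginal prevalence wrong.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 200_000
x = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * x)))  # true logistic outcome model
y = rng.binomial(1, p)

# Non-ignorable missingness: response probability depends on the outcome.
p_obs = np.where(y == 1, 0.5, 0.9)
observed = rng.binomial(1, p_obs).astype(bool)

print("full-data P(y=1)     :", y.mean())
print("complete-case P(y=1) :", y[observed].mean())  # biased downward
```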

15.
Two-period two-treatment (2×2) crossover designs are commonly used in clinical trials. For continuous endpoints, it has been shown that baseline (pretreatment) measurements collected before the start of each treatment period can be useful in improving the power of the analysis. Methods to achieve a corresponding gain for censored time-to-event endpoints have not been adequately studied. We propose a method in which censored values are treated as missing data and multiply imputed using prespecified parametric event time models. The event times in each imputed data set are then log-transformed and analyzed using a linear model suitable for a 2×2 crossover design with continuous endpoints, with the difference in period-specific baselines included as a covariate. Results obtained from the imputed data sets are synthesized for point and confidence interval estimation of the treatment ratio of geometric mean event times, using model averaging in conjunction with Rubin's combination rule. We use simulations to illustrate the favorable operating characteristics of our method relative to two other methods for crossover trials with censored time-to-event data: a hierarchical rank test that ignores the baselines and a stratified Cox model that uses each study subject as a stratum and includes period-specific baselines as a covariate. Application to a real data example is provided.
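Rubin's combination rule, used here to synthesize results across the imputed data sets, pools the point estimates and inflates the variance by the between-imputation spread. A minimal sketch with hypothetical numbers:

```python
import numpy as np

def rubin_combine(estimates, variances):
    """Rubin's rules for M multiply imputed data sets."""
    est = np.asarray(estimates, dtype=float)
    var = np.asarray(variances, dtype=float)
    m = len(est)
    q_bar = est.mean()                       # pooled point estimate
    u_bar = var.mean()                       # within-imputation variance
    b = est.var(ddof=1)                      # between-imputation variance
    return q_bar, u_bar + (1.0 + 1.0 / m) * b

q, v = rubin_combine([0.42, 0.47, 0.39, 0.45, 0.44],
                     [0.010, 0.011, 0.009, 0.010, 0.012])
print(f"pooled estimate {q:.3f}, total variance {v:.4f}")
```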

16.
Biomarkers are often measured over time in epidemiological studies and clinical trials for a better understanding of disease mechanisms. In large cohort studies, case-cohort sampling provides a cost-effective way to collect expensive biomarker data for revealing the relationship between biomarker trajectories and time to event. However, biomarker measurements are often limited by the sensitivity and precision of a given assay, resulting in data that are censored at detection limits and prone to measurement error. Additionally, the occurrence of an event of interest may preclude biomarkers from being further evaluated. Inappropriate handling of these features can lead to biased conclusions. Under a classical case-cohort design, we propose a modified likelihood-based approach to accommodate these special features of longitudinal biomarker measurements in accelerated failure time models. The maximum likelihood estimators based on the full likelihood function are obtained by the Gaussian quadrature method. We evaluate the performance of our case-cohort estimator and compare its relative efficiency to the full cohort estimator through simulation studies. The proposed method is further illustrated using data from a biomarker study of sepsis among patients with community-acquired pneumonia. Copyright © 2015 John Wiley & Sons, Ltd.
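Gauss-Hermite quadrature, the numerical device used here to integrate random effects out of the full likelihood, is simple to sketch: after a change of variables, a normal expectation becomes a weighted sum over fixed nodes. The example validates against a closed form.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def marginal_over_random_effect(f, sigma, n_nodes=20):
    """Approximate E[f(b)] for b ~ N(0, sigma^2) with Gauss-Hermite
    quadrature (nodes/weights target integrals against exp(-x^2))."""
    x, w = hermgauss(n_nodes)
    return np.sum(w * f(np.sqrt(2.0) * sigma * x)) / np.sqrt(np.pi)

# sanity check: E[exp(b)] = exp(sigma^2 / 2) for b ~ N(0, sigma^2)
sigma = 0.7
print(marginal_over_random_effect(np.exp, sigma))  # quadrature
print(np.exp(sigma**2 / 2))                        # closed form
```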

17.
Random forest is a supervised learning method that combines many classification or regression trees for prediction. Here we describe an extension of the random forest method for building event risk prediction models in survival analysis with competing risks. With right-censored data, the event status at the prediction horizon is unknown for some subjects. We propose to replace the censored event status by a jackknife pseudo-value and then to apply an implementation of random forests for uncensored data. Because the pseudo-responses take values on a continuous scale, the node variance is chosen as the split criterion for growing regression trees. In a simulation study, the pseudo split criterion is compared with the Gini split criterion when the latter is applied to the uncensored event status. To investigate the resulting pseudo random forest method for building risk prediction models, we analyze it in a simulation study of predictive performance, comparing it to Cox regression and random survival forests. The method is further illustrated on two real data sets. Copyright © 2013 John Wiley & Sons, Ltd.
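Jackknife pseudo-values can be sketched for the simpler single-event case: theta_i = n * theta_hat - (n - 1) * theta_hat^(-i), where theta_hat is the Kaplan-Meier event risk at the horizon (the competing-risks version in the paper would use an Aalen-Johansen estimator instead). The pseudo-values behave like an uncensored continuous response for regression trees.

```python
import numpy as np

def km_surv_at(times, events, horizon):
    """Kaplan-Meier survival probability at a fixed horizon."""
    s = 1.0
    for ti in np.unique(times[events == 1]):
        if ti > horizon:
            break
        s *= 1.0 - np.sum((times == ti) & (events == 1)) / np.sum(times >= ti)
    return s

def jackknife_pseudo_values(times, events, horizon):
    """Leave-one-out jackknife pseudo-values for the event risk at horizon."""
    n = len(times)
    theta = 1.0 - km_surv_at(times, events, horizon)
    keep = np.ones(n, dtype=bool)
    pseudo = np.empty(n)
    for i in range(n):
        keep[i] = False
        theta_i = 1.0 - km_surv_at(times[keep], events[keep], horizon)
        pseudo[i] = n * theta - (n - 1) * theta_i
        keep[i] = True
    return pseudo

rng = np.random.default_rng(6)
t, c = rng.exponential(2.0, 50), rng.exponential(3.0, 50)
times, events = np.minimum(t, c), (t <= c).astype(int)
print(jackknife_pseudo_values(times, events, horizon=1.5)[:5])
```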

18.
Sequentially administered, laboratory-based diagnostic tests or self-reported questionnaires are often used to determine the occurrence of a silent event. In this paper, we consider issues relevant to the design of studies aimed at estimating the association of one or more covariates with a non-recurring, time-to-event outcome that is observed using a repeatedly administered, error-prone diagnostic procedure. The problem is motivated by the Women's Health Initiative, in which diabetes incidence among approximately 160,000 women is obtained from annually collected self-reported data. For settings of imperfect diagnostic tests or self-reports with known sensitivity and specificity, we evaluate the effects of various factors on the resulting power and sample size calculations and compare the relative efficiency of different study designs. The methods illustrated in this paper are readily implemented using our freely available R software package icensmis, which is available on the Comprehensive R Archive Network website. In the important special case of perfect diagnostic procedures, the outcomes are interval-censored time-to-event data, so the proposed methods also apply to the design of studies with an interval-censored time-to-event outcome. Copyright © 2016 John Wiley & Sons, Ltd.
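To fix ideas, one ingredient of such designs can be sketched: for a subject whose silent event occurs at a known time, the probability that the first positive result lands at each scheduled visit, given test sensitivity and specificity. This toy calculation is my own illustration of the setting, not the icensmis likelihood.

```python
import numpy as np

def prob_first_positive(visit_times, event_time, sens, spec):
    """Probability the first positive test occurs at each visit, for a
    subject with true event at event_time and an imperfect test:
    P(positive) is sens after the event and (1 - spec) before it."""
    probs, p_no_pos_yet = [], 1.0
    for v in visit_times:
        p_pos = sens if v >= event_time else 1.0 - spec
        probs.append(p_no_pos_yet * p_pos)
        p_no_pos_yet *= 1.0 - p_pos
    return np.array(probs)

print(prob_first_positive([1, 2, 3, 4], event_time=2.5, sens=0.8, spec=0.95))
```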

19.
The accelerated failure time (AFT) model has been suggested as an alternative to the Cox proportional hazards model. However, a parametric AFT model requires the specification of an appropriate distribution for the event time, which is often difficult to identify in real-life studies and may limit applications. A semiparametric AFT model based on a smoothed error distribution, which does not require such a specification, was developed by Komárek et al. In this article, we develop a spline-based AFT model that likewise does not require specification of a parametric family for the event time distribution. The baseline hazard function is modeled by regression B-splines, allowing the estimation of a variety of smooth and flexible shapes. In comprehensive simulations, we validate the performance of our approach and compare it with the results from parametric AFT models and the approach of Komárek. Both the proposed spline-based AFT model and the approach of Komárek provided unbiased estimates of covariate effects and survival curves for a variety of scenarios in which the event time followed different distributions, including both simple and complex cases. Spline-based estimates of the baseline hazard also showed satisfactory numerical stability. As expected, the baseline hazard and survival probabilities estimated by the misspecified parametric AFT models deviated from the truth. We illustrate the application of the proposed model in a study of colon cancer.
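The B-spline representation of a baseline hazard can be sketched directly: evaluate the basis functions on a time grid and exponentiate a linear combination so the hazard stays positive and smooth. The knots and coefficients below are hypothetical, chosen only to show the mechanics.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(x, knots, degree=3):
    """Evaluate all B-spline basis functions at x; exp(B(t) @ gamma) then
    gives a smooth, positive baseline hazard."""
    n_basis = len(knots) - degree - 1
    basis = np.empty((len(x), n_basis))
    for k in range(n_basis):
        coef = np.zeros(n_basis)
        coef[k] = 1.0
        basis[:, k] = BSpline(knots, coef, degree)(x)
    return basis

t_grid = np.linspace(0.0, 5.0, 6)
knots = np.concatenate([[0.0] * 3, np.linspace(0.0, 5.0, 5), [5.0] * 3])
B = bspline_basis(t_grid, knots)                       # 7 cubic basis functions
gamma = np.array([-1.0, -0.5, 0.2, 0.4, 0.1, -0.3, -0.8])  # hypothetical coefs
print("baseline hazard on grid:", np.exp(B @ gamma))
```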

20.
For cost-effectiveness and efficiency, many large-scale general-purpose cohort studies are being assembled within large health-care providers that use electronic health records. Two key features of such data are that incident disease is interval-censored between irregular visits and that there can be pre-existing (prevalent) disease. Because prevalent disease is not always immediately diagnosed, some disease diagnosed at later visits is actually undiagnosed prevalent disease. We treat prevalent disease as a point mass at time zero for clinical applications in which there is no interest in the time of prevalent disease onset. We demonstrate that the naive Kaplan–Meier cumulative risk estimator underestimates risks at early time points and overestimates later risks. We propose a general family of mixture models for undiagnosed prevalent disease and interval-censored incident disease that we call prevalence–incidence models. Parameters of parametric prevalence–incidence models, such as the logistic regression and Weibull survival (logistic–Weibull) model, are estimated by direct likelihood maximization or by the EM algorithm. Non-parametric methods are proposed to calculate cumulative risks for cases without covariates. We compare naive Kaplan–Meier, logistic–Weibull, and non-parametric estimates of cumulative risk in the cervical cancer screening program at Kaiser Permanente Northern California. Kaplan–Meier provided poor estimates, while the logistic–Weibull model was a close fit to the non-parametric estimates. Our findings support the use of logistic–Weibull models to develop the risk estimates that underlie current US risk-based cervical cancer screening guidelines. Published 2017. This article has been contributed to by US Government employees and their work is in the public domain in the USA.
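The cumulative risk implied by a logistic-Weibull prevalence-incidence mixture has a simple form, R(t) = p + (1 - p) * F_Weibull(t): a point mass p of undiagnosed prevalent disease at time zero plus an incident-disease component. The parameter values below are hypothetical; in the paper, p comes from a logistic regression on covariates.

```python
import numpy as np

def prevalence_incidence_risk(t, p_prev, shape, scale):
    """Cumulative risk under a prevalence-incidence mixture:
    R(t) = p_prev + (1 - p_prev) * F_Weibull(t; shape, scale)."""
    f_weibull = 1.0 - np.exp(-(np.asarray(t) / scale) ** shape)
    return p_prev + (1.0 - p_prev) * f_weibull

t = np.array([0.0, 1.0, 3.0, 5.0])
print(prevalence_incidence_risk(t, p_prev=0.05, shape=1.3, scale=8.0))
# risk starts at 0.05 (prevalent point mass), unlike a naive KM estimate
```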
