Similar Documents
20 similar documents retrieved.
1.
Huang Y, Dagne G, Wu L. Statistics in Medicine 2011;30(24):2930-2946.
Normality (symmetry) of the model random errors is a routine assumption for mixed-effects models in many longitudinal studies, but it may be unrealistic and may obscure important features of between-subject variation. Covariates are usually introduced into the models to partially explain inter-subject variation, but some covariates, such as CD4 cell count, may often be measured with substantial error. This paper formulates a general class of models that assumes skew-normal distributions for the model errors in a joint model of longitudinal dynamic processes and a time-to-event process of interest. For estimating the model parameters, we propose a Bayesian approach that jointly models three components (response, covariate, and time-to-event processes) linked through the random effects that characterize the underlying individual-specific longitudinal processes. We discuss in detail special cases of the model class, which jointly model the HIV dynamic response, a CD4 covariate process measured with error, and the time to a decrease in the CD4/CD8 ratio, providing a tool to assess antiretroviral treatment and monitor disease progression. We illustrate the proposed methods using data from a clinical trial of HIV treatment. The findings suggest that joint models with a skew-normal distribution may provide more reliable and robust results when the data exhibit skewness; in particular, the results may be important for HIV/AIDS studies in providing quantitative guidance to better understand virologic responses to antiretroviral treatment.
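For reference, the skew-normal error distribution invoked here has, in its univariate form, the density below (a standard parameterization; the paper works with multivariate and hierarchical versions, so this is only the basic building block):

f(y \mid \mu, \sigma^2, \lambda) = \frac{2}{\sigma}\, \phi\!\left(\frac{y-\mu}{\sigma}\right) \Phi\!\left(\lambda \frac{y-\mu}{\sigma}\right),

where \phi and \Phi are the standard normal density and distribution function and \lambda controls skewness; \lambda = 0 recovers the usual normal error model.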

2.
A two-stage model for evaluating both trial-level and patient-level surrogacy of correlated time-to-event endpoints has been introduced, using patient-level data when multiple clinical trials are available. However, the associated maximum likelihood approach often suffers from numerical problems when different baseline hazards among trials and imperfect estimation of treatment effects are assumed. To address this issue, we propose performing the second-stage, trial-level evaluation of potential surrogates within a Bayesian framework, where we may naturally borrow information across trials while maintaining these realistic assumptions. Posterior distributions on surrogacy measures of interest may then be used to compare measures or make decisions regarding the candidacy of a specific endpoint. We perform a simulation study to investigate differences in estimation performance between traditional maximum likelihood and new Bayesian representations of common meta-analytic surrogacy measures, while assessing sensitivity to data characteristics such as the number of trials, trial size, and amount of censoring. Furthermore, we present both frequentist and Bayesian trial-level surrogacy evaluations of time to recurrence as a surrogate for overall survival in two meta-analyses of adjuvant therapy trials in colon cancer. With these results, we recommend Bayesian evaluation as an attractive and numerically stable alternative in the multitrial assessment of potential surrogate endpoints.
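In the meta-analytic framework this work builds on, trial-level surrogacy is commonly summarized by the squared correlation between trial-specific treatment effects on the candidate surrogate and on the true endpoint; a sketch of that measure, with notation assumed here rather than taken from the paper, is

R^2_{\mathrm{trial}} = \operatorname{corr}(\alpha_i, \beta_i)^2,

where \alpha_i and \beta_i denote the treatment effects on the surrogate and on the true time-to-event endpoint in trial i. The Bayesian second stage places a prior on the joint distribution of the trial-level effects and reports the posterior of R^2_{\mathrm{trial}}, which is what stabilizes estimation when individual trials are small or effects are estimated imprecisely.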

3.
Sparse high-dimensional massive sample size (sHDMSS) time-to-event data present multiple challenges to quantitative researchers, as most current sparse survival regression methods and software grind to a halt and become practically inoperable. This paper develops a scalable ℓ0-based sparse Cox regression tool for right-censored time-to-event data that takes advantage of existing high-performance implementations of ℓ2-penalized regression for sHDMSS time-to-event data. Specifically, we extend the ℓ0-based broken adaptive ridge (BAR) methodology to the Cox model, which involves repeatedly performing reweighted ℓ2-penalized regression. We rigorously show that the resulting estimator for the Cox model is selection consistent, oracle for parameter estimation, and has a grouping property for highly correlated covariates. Furthermore, we implement our BAR method in an R package for sHDMSS time-to-event data by leveraging existing efficient algorithms for massive ℓ2-penalized Cox regression. We evaluate the BAR Cox regression method through extensive simulations and illustrate its application on sHDMSS time-to-event data from the National Trauma Data Bank with hundreds of thousands of observations and tens of thousands of sparsely represented covariates.
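The core BAR iteration (ridge fits reweighted by the previous iterate, approximating an ℓ0 penalty) can be sketched briefly. The sketch below is an illustration of the idea using the lifelines Cox fitter and a column-rescaling trick to express coefficient-specific ridge weights; it is not the authors' scalable R implementation, and the function name and tuning defaults are invented here.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def bar_cox(df, duration_col, event_col, lam=1.0, n_iter=20, tol=1e-6, eps=1e-6):
    """Broken adaptive ridge for the Cox model: iteratively reweighted ridge fits."""
    covs = [c for c in df.columns if c not in (duration_col, event_col)]
    beta = np.ones(len(covs))                        # crude initial estimate
    for _ in range(n_iter):
        # A ridge penalty with weights 1/beta_j^2 is equivalent to a plain ridge
        # penalty after rescaling each covariate by |beta_j| from the last iterate.
        w = np.abs(beta) + eps
        scaled = df.copy()
        scaled[covs] = scaled[covs] * w
        cph = CoxPHFitter(penalizer=lam, l1_ratio=0.0)   # pure L2 penalty
        cph.fit(scaled, duration_col=duration_col, event_col=event_col)
        beta_new = cph.params_[covs].to_numpy() * w      # map back to the original scale
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    beta[np.abs(beta) < 1e-4] = 0.0   # collapse numerically dead coefficients to zero
    return pd.Series(beta, index=covs)
# In a real implementation, covariates whose weights collapse toward zero would be
# screened out of later iterations rather than kept with a tiny scale factor.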

4.
Variable selection is a crucial issue in model building, and it has received considerable attention in the survival analysis literature. However, available approaches in this direction have mainly focused on time-to-event data with right censoring. Moreover, a majority of existing variable selection procedures for survival models are developed in a frequentist framework. In this article, we consider the additive hazards model in the presence of current status data. We propose a Bayesian adaptive least absolute shrinkage and selection operator (LASSO) procedure to conduct simultaneous variable selection and parameter estimation. Efficient Markov chain Monte Carlo methods are developed to implement posterior sampling and inference. The empirical performance of the proposed method is demonstrated by simulation studies. An application to a study on risk factors for heart failure in patients with type 2 diabetes is presented.
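For context, a Bayesian adaptive LASSO typically assigns each regression coefficient its own Laplace (double-exponential) prior, represented as a scale mixture of normals to make Gibbs-type updates tractable; a common formulation (not necessarily the exact prior used in this paper) is

\beta_j \mid \lambda_j \sim \tfrac{\lambda_j}{2}\, e^{-\lambda_j |\beta_j|}, \qquad\text{equivalently}\qquad \beta_j \mid \tau_j^2 \sim N(0, \tau_j^2), \quad \tau_j^2 \mid \lambda_j \sim \mathrm{Exponential}\!\left(\tfrac{\lambda_j^2}{2}\right),

with hyperpriors on the coefficient-specific shrinkage parameters \lambda_j, so that different coefficients can be shrunk at different rates.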

5.
The association between visit-to-visit systolic blood pressure variability and cardiovascular events has recently received a lot of attention in the cardiovascular literature. However, blood pressure variability is usually estimated on a person-by-person basis and is therefore subject to considerable measurement error. We demonstrate that hazard ratios estimated using this approach are subject to bias due to regression dilution, and we propose alternative methods to reduce this bias: a two-stage method and a joint model. For the two-stage method, in stage one, repeated measurements are modelled using a mixed-effects model with a random component on the residual standard deviation (SD). The mixed-effects model is used to estimate the blood pressure SD for each individual, which, in stage two, is used as a covariate in a time-to-event model. For the joint model, the mixed-effects submodel and the time-to-event submodel are fitted simultaneously using shared random effects. We illustrate the methods using data from the Atherosclerosis Risk in Communities study.
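The two-stage structure can be illustrated compactly. In the sketch below, the stage-one mixed model with a random residual SD is replaced by a naive per-subject SD (so it deliberately exhibits the measurement-error problem the paper addresses), and the column names id, sbp, time and event are assumptions, not names from the study:

import pandas as pd
from lifelines import CoxPHFitter

# long: one row per visit with columns id, sbp
# surv: one row per subject with columns id, time, event (plus baseline covariates)
def two_stage_variability(long, surv):
    # Stage 1: summarize each subject's visit-to-visit SBP level and variability.
    stage1 = long.groupby("id")["sbp"].agg(sbp_mean="mean", sbp_sd="std").reset_index()
    # Stage 2: use the estimated SD as a covariate in a Cox model for the event time.
    stage2 = surv.merge(stage1, on="id")
    cph = CoxPHFitter()
    cph.fit(stage2.drop(columns="id"), duration_col="time", event_col="event")
    return cph

The hazard ratio for sbp_sd from such a fit is the kind of estimate the paper shows to be diluted; replacing stage one with the mixed model, or fitting the joint model, is the proposed remedy.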

6.
Among semiparametric models, the Cox proportional hazards model is widely used to assess the association between covariates and the time-to-event when the observed time-to-event is interval-censored. Often, covariates are measured with error. Flexible approaches have been proposed to handle this covariate uncertainty in the Cox proportional hazards model with interval-censored data. To fill a gap and broaden the scope of statistical applications for analyzing time-to-event data under different models, this paper proposes a general approach for fitting the semiparametric linear transformation model to interval-censored data when a covariate is measured with error. The semiparametric linear transformation model is a broad class of models that includes the proportional hazards model and the proportional odds model as special cases. The proposed method relies on a set of estimating equations to estimate the regression parameters and the infinite-dimensional parameter. A flexible imputation technique is used to handle interval censoring and covariate measurement error. Finite-sample performance of the proposed method is assessed via simulation studies. Finally, the suggested method is applied to analyze a real data set from an AIDS clinical trial.
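For readers unfamiliar with the class, the semiparametric linear transformation model can be written as

H(T) = -\beta^{\top} Z + \varepsilon,

where H(\cdot) is an unspecified, strictly increasing transformation of the event time T, Z is the covariate vector, and \varepsilon has a fully specified distribution: an extreme-value error yields the proportional hazards model, and a standard logistic error yields the proportional odds model.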

7.
In this paper, we consider two-stage designs with failure-time endpoints in single-arm phase II trials. We propose designs in which stopping rules are constructed by comparing the Bayes risk of stopping at stage I with the expected Bayes risk of continuing to stage II, using both the observed data in stage I and the predicted survival data in stage II. Terminal decision rules are constructed by comparing the posterior expected loss of a rejection decision versus an acceptance decision. Simple threshold loss functions are applied to time-to-event data modeled either parametrically or nonparametrically, and the cost parameters in the loss structure are calibrated to obtain the desired type I error and power. We ran simulation studies to evaluate design properties, including type I and II errors, probability of early stopping, expected sample size, and expected trial duration, and compared them with Simon's two-stage designs and an existing extension of Simon's designs to time-to-event endpoints. An example based on a recently conducted phase II sarcoma trial illustrates the method.

8.
Joint modelling of longitudinal and survival data has received much attention in recent years. Most work has concentrated on a single longitudinal variable. This paper considers joint modelling in the presence of multiple longitudinal variables. We explore direct association between the time-to-event and multiple longitudinal processes through a frailty model and use a mixed-effects model for each of the longitudinal variables. Correlations among the longitudinal variables are induced through correlated random effects. We allow effects of categorical and continuous covariates on both the longitudinal and time-to-event responses and explore interactions between the longitudinal variables and other covariates on the time-to-event. Estimates of the parameters are obtained by maximizing the joint likelihood of the longitudinal processes and the event process. We use a one-step-late EM algorithm to handle the direct dependence of the event process on the modelled longitudinal variables along with the presence of other fixed covariates in both processes. We argue that such a joint analysis with multiple longitudinal variables is advantageous over one with only a single longitudinal variable in revealing the interplay among multiple longitudinal variables and the time-to-event.
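A generic shared-random-effects formulation of this kind of model (notation assumed here, not taken from the paper) links K longitudinal variables to the hazard as

Y_{ik}(t) = x_{ik}(t)^{\top}\beta_k + z_{ik}(t)^{\top} b_{ik} + \varepsilon_{ik}(t), \qquad k = 1,\dots,K,
\lambda_i(t) = \lambda_0(t) \exp\!\Big( w_i^{\top}\gamma + \sum_{k=1}^{K} \alpha_k \, m_{ik}(t) \Big),

where the random effects (b_{i1},\dots,b_{iK}) are jointly normal (inducing correlation among the longitudinal variables), m_{ik}(t) is the error-free modelled value of variable k at time t, and the \alpha_k quantify the association of each longitudinal process with the event hazard.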

9.
Given the long follow-up periods that are often required for treatment or intervention studies, the potential to use surrogate markers to decrease the required follow-up time is a very attractive goal. However, previous studies have shown that using inadequate markers or making inappropriate assumptions about the relationship between the primary outcome and surrogate marker can lead to inaccurate conclusions regarding the treatment effect. Currently available methods for identifying and validating surrogate markers tend to rely on restrictive model assumptions and/or focus on uncensored outcomes. The ability to use such methods in practice when the primary outcome of interest is a time-to-event outcome is difficult because of censoring and missing surrogate information among those who experience the primary outcome before surrogate marker measurement. In this paper, we propose a novel definition of the proportion of treatment effect explained by surrogate information collected up to a specified time in the setting of a time-to-event primary outcome. Our proposed approach accommodates a setting where individuals may experience the primary outcome before the surrogate marker is measured. We propose a robust non-parametric procedure to estimate the defined quantity using censored data and use a perturbation-resampling procedure for variance estimation. Simulation studies demonstrate that the proposed procedures perform well in finite samples. We illustrate the proposed procedures by investigating two potential surrogate markers for diabetes using data from the Diabetes Prevention Program. Copyright © 2017 John Wiley & Sons, Ltd.

10.
Background: Joint modeling of longitudinal and time-to-event data is often advantageous over separate longitudinal or time-to-event analyses, as it can account for study dropout, error in longitudinally measured covariates, and correlation between longitudinal and time-to-event outcomes. The current literature on joint modeling focuses mainly on the analysis of single studies, with a lack of methods available for the meta-analysis of joint data from multiple studies. Methods: We investigate a variety of one-stage methods for the meta-analysis of joint longitudinal and time-to-event outcome data. These methods are applied to the INDANA dataset to investigate longitudinally measured systolic blood pressure, with each of time to death, time to myocardial infarction, and time to stroke. Results are compared to separate longitudinal or time-to-event meta-analyses. A simulation study is conducted to contrast separate versus joint analyses over a range of scenarios. Results: The performance of the examined one-stage joint meta-analytic models varied. Models that accounted for between-study heterogeneity performed better than models that ignored it. Of the examined methods to account for between-study heterogeneity, under the examined association structure, fixed-effect approaches appeared preferable, whereas methods involving a baseline hazard stratified by study were least time intensive. Conclusions: One-stage joint meta-analytic models that accounted for between-study heterogeneity using a mix of fixed effects or a stratified baseline hazard were reliable; however, the models examined that included study-level random effects in the association structure were less reliable.

11.
Despite the use of standardized protocols in multi-centre, randomized clinical trials, outcome may vary between centres. Such heterogeneity may alter the interpretation and reporting of the treatment effect. Here, we propose a general frailty modelling approach for investigating, inter alia, putative treatment-by-centre interactions in time-to-event data in multi-centre clinical trials. A correlated random-effects model is used to model the baseline risk and the treatment effect across centres. It may be based on shared, individual or correlated random effects. For inference we develop the hierarchical-likelihood (or h-likelihood) approach, which facilitates computation of prediction intervals for the random effects with proper precision. We illustrate our methods using disease-free time-to-event data on bladder cancer patients participating in a European Organization for Research and Treatment of Cancer trial, and a simulation study. We also demonstrate model selection using h-likelihood criteria.

12.
In randomized treatment studies where the primary outcome requires long follow-up of patients and/or expensive or invasive obtainment procedures, the availability of a surrogate marker that could be used to estimate the treatment effect and could potentially be observed earlier than the primary outcome would allow researchers to make conclusions regarding the treatment effect with less required follow-up time and resources. The Prentice criterion for a valid surrogate marker requires that a test for treatment effect on the surrogate marker also be a valid test for treatment effect on the primary outcome of interest. Based on this criterion, methods have been developed to define and estimate the proportion of treatment effect on the primary outcome that is explained by the treatment effect on the surrogate marker. These methods aim to identify useful statistical surrogates that capture a large proportion of the treatment effect. However, current methods to estimate this proportion usually require restrictive model assumptions that may not hold in practice and thus may lead to biased estimates of this quantity. In this paper, we propose a nonparametric procedure to estimate the proportion of treatment effect on the primary outcome that is explained by the treatment effect on a potential surrogate marker and extend this procedure to a setting with multiple surrogate markers. We compare our approach with previously proposed model-based approaches and propose a variance estimation procedure based on a perturbation-resampling method. Simulation studies demonstrate that the procedure performs well in finite samples and outperforms model-based procedures when the specified models are not correct. We illustrate our proposed procedure using a data set from a randomized study investigating a group-mediated cognitive behavioral intervention for peripheral artery disease participants. Copyright © 2015 John Wiley & Sons, Ltd.
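The quantity being estimated can be stated compactly: writing \Delta for the overall treatment effect on the primary outcome and \Delta_S for the residual treatment effect that remains after accounting for the treatment effect on the surrogate marker, the proportion of treatment effect explained is

R_S = \frac{\Delta - \Delta_S}{\Delta} = 1 - \frac{\Delta_S}{\Delta}.

The contribution of this paper is estimating \Delta and \Delta_S nonparametrically (and extending the idea to several surrogate markers at once), so that R_S does not inherit bias from a misspecified working model.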

13.
Many models for clinical prediction (prognosis or diagnosis) are published in the medical literature every year, but few such models find their way into clinical practice. The reason may be that, since in most cases models have not been validated in independent data, they lack generality and/or credibility. In this paper we consider the situation in which several compatible, independent data sets relating to a given disease with a time-to-event endpoint are available for analysis. The aim is to construct and evaluate a single prognostic model. Building a multivariable model from the available prognostic factors is accomplished within the Cox proportional hazards framework, stratifying by study. Non-linear relationships with continuous predictors are modelled by using fractional polynomials. To assess the discrimination or separation of a survival model, we use the D statistic of Royston and Sauerbrei. D may be interpreted as the separation (log hazard ratio) between the survival distributions for two independent prognostic groups. To evaluate the generality of a prognostic model across the data sets, we propose 'internal-external cross-validation' on D: each study is omitted in turn, the model parameters are estimated from the remaining studies, and D is evaluated in the omitted study. Because the linear predictor of a survival model tells only part of the story, we also suggest a method for investigating heterogeneity in the baseline distribution function across studies, which involves fitting completely specified, flexible parametric survival models (Royston and Parmar). Our final models combine the prognostic index (obtained with stratification by study) with the pooled baseline survival distribution (estimated parametrically). By applying this methodology, we construct two prognostic scores in superficial bladder cancer. The simpler of the two scores is more suited to clinical application. We show that a three-group prognostic classification scheme based on either score produces well-separated survival curves for each of the data sets, despite identifiable heterogeneity among the baseline distribution functions and, to a lesser extent, among the prognostic indexes for the individual studies.
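The internal-external cross-validation loop is simple to sketch. The version below fits a Cox model stratified by study on all-but-one study and evaluates the resulting prognostic index in the omitted study; discrimination is summarized with the c-index as a stand-in for the Royston-Sauerbrei D statistic used in the paper, and the column names are assumptions:

import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

def internal_external_cv(df, covariates, duration_col="time", event_col="event",
                         study_col="study"):
    results = {}
    for study in df[study_col].unique():
        train = df[df[study_col] != study]
        test = df[df[study_col] == study]
        cph = CoxPHFitter()
        cph.fit(train[covariates + [duration_col, event_col, study_col]],
                duration_col=duration_col, event_col=event_col, strata=[study_col])
        # Prognostic index for the omitted study from the coefficients alone
        # (the stratified baseline hazards play no role in discrimination).
        beta = cph.params_.reindex(covariates).to_numpy()
        pi = test[covariates].to_numpy() @ beta
        results[study] = concordance_index(test[duration_col], -pi, test[event_col])
    return pd.Series(results, name="c_index")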

14.
Repeated low-dose challenge designs in nonhuman primate studies have recently received attention in the literature as a means of evaluating vaccines for HIV prevention and identifying immune surrogates for their protective effects. Existing methods for surrogate identification in this type of study design rely on the assumption of homogeneity across subjects (namely, independent infection risks after each challenge within each subject, conditional on covariates). In practice, random variation across subjects is likely to occur because of unmeasured biologic factors. Failure to account for this heterogeneity or within-subject correlation can result in biased inference regarding the surrogate value of immune biomarkers and underpowered study designs for detecting surrogate endpoints. In this paper, we adopt a discrete-time survival model with random effects to account for between-subject heterogeneity, and we develop estimators and testing procedures for evaluating principal surrogacy of immune biomarkers. Simulation studies reveal that the heterogeneous model achieves substantial bias reduction compared to the homogeneous model, with little cost in efficiency. We recommend the use of this heterogeneous model as a complementary tool to existing methods when designing and analyzing repeated low-dose challenge studies for evaluating surrogate endpoints.
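A generic form of the discrete-time random-effects model described here, with notation assumed rather than taken from the paper, specifies the per-challenge infection probability as

P(T_i = j \mid T_i \ge j, Z_i, b_i) = \mathrm{expit}\big(\alpha_j + \beta^{\top} Z_i + b_i\big), \qquad b_i \sim N(0, \sigma_b^2),

where T_i is the challenge number at which animal i becomes infected, Z_i collects covariates and candidate immune biomarkers, and the random intercept b_i captures unmeasured between-animal heterogeneity; setting \sigma_b^2 = 0 recovers the homogeneous model whose bias the simulations document.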

15.
Both delayed study entry (left-truncation) and competing risks are common phenomena in observational time-to-event studies. For example, in studies conducted by Teratology Information Services (TIS) on adverse drug reactions during pregnancy, the natural time scale is gestational age, but women enter the study after time origin and upon contact with the service. Competing risks are present, because an elective termination may be precluded by a spontaneous abortion. If left-truncation is entirely random, the Aalen-Johansen estimator is the canonical estimator of the cumulative incidence functions of the competing events. If the assumption of random left-truncation is in doubt, we propose a new semiparametric estimator of the cumulative incidence function. The dependence between entry time and time-to-event is modeled using a cause-specific Cox proportional hazards model and the marginal (unconditional) estimates are derived via inverse probability weighting arguments. We apply the new estimator to data about coumarin usage during pregnancy. Here, the concern is that the cause-specific hazard of experiencing an induced abortion may depend on the time when seeking advice by a TIS, which also is the time of left-truncation or study entry. While the aims of counseling by a TIS are to reduce the rate of elective terminations based on irrational overestimation of drug risks and to lead to better and safer medical treatment of maternal disease, it is conceivable that women considering an induced abortion are more likely to seek counseling. The new estimator is also evaluated in extensive simulation studies and found preferable compared to the Aalen-Johansen estimator in non-misspecified scenarios and to at least provide for a sensitivity analysis otherwise.

16.
Recurrent event data are quite common in biomedical and epidemiological studies. A significant portion of these data also contain additional longitudinal information on surrogate markers. Previous studies have shown that popular methods using a Cox model with longitudinal outcomes as time-dependent covariates may lead to biased results, especially when longitudinal outcomes are measured with error. Hence, it is important to incorporate longitudinal information into the analysis properly. To achieve this, we model the correlation between longitudinal and recurrent event processes using latent random effect terms. We then propose a two-stage conditional estimating equation approach to model the rate function of the recurrent event process conditioned on the observed longitudinal information. The performance of our proposed approach is evaluated through simulation. We also apply the approach to analyze cocaine addiction data collected by the University of Connecticut Health Center. The data include recurrent event information on cocaine relapse and longitudinal cocaine craving scores. Copyright © 2016 John Wiley & Sons, Ltd.

17.
Informative and accurate survival prediction with individualized dynamic risk profiles over time is critical for personalized disease prevention and clinical management. Massive genetic data, such as SNPs from genome-wide association studies (GWAS), together with well-characterized time-to-event phenotypes, provide unprecedented opportunities for developing effective survival prediction models. Recent advances in deep learning have led to extraordinary achievements in establishing powerful prediction models in the biomedical field. However, applications of deep learning approaches to survival prediction are limited, especially in utilizing the wealth of GWAS data. Motivated by developing powerful prediction models for the progression of an eye disease, age-related macular degeneration (AMD), we develop and implement a multilayer deep neural network (DNN) survival model to effectively extract features and make accurate and interpretable predictions. Various simulation studies are performed to compare the prediction performance of the DNN survival model with several other machine learning-based survival models. Finally, using the GWAS data from two large-scale randomized clinical trials in AMD with over 7800 observations, we show that the DNN survival model not only outperforms several existing survival prediction models in terms of prediction accuracy (e.g., c-index = 0.76), but also successfully detects clinically meaningful risk subgroups by effectively learning the complex structures among genetic variants. Moreover, we obtain a subject-specific importance measure for each predictor from the DNN survival model, which provides valuable insights into personalized early prevention and clinical management for this disease.
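DNN survival models of this kind are typically trained by minimizing the negative Cox partial log-likelihood of the network's risk scores. The numpy sketch below shows that loss in its simplest form (ties handled only approximately); it illustrates the training objective, not the paper's architecture, and in practice it would be written in an autodiff framework so gradients can flow back into the network.

import numpy as np

def neg_cox_partial_loglik(risk_scores, times, events):
    """Negative Cox partial log-likelihood of network risk scores eta_i."""
    order = np.argsort(-np.asarray(times))   # descending time: risk sets become cumulative
    eta = np.asarray(risk_scores, dtype=float)[order]
    d = np.asarray(events, dtype=float)[order]
    # log of the running sum of exp(eta) over each subject's risk set
    log_risk_set = np.logaddexp.accumulate(eta)
    return -np.sum(d * (eta - log_risk_set))

# Example: three subjects, the one with the highest risk score fails earliest.
loss = neg_cox_partial_loglik([2.0, 0.5, -1.0], times=[1.0, 3.0, 5.0], events=[1, 1, 0])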

18.
BACKGROUND: Job titles or work areas are often used as surrogate indicators of exposure in occupational epidemiological studies. In this article, we assess the validity and comparability of commonly used surrogate indicators. METHODS: We analyzed lung cancer mortality among a hypothetical and an actual cohort of rubber workers. Surrogate indicators of exposure were defined according to jobs in which workers were "only," "ever," "longest" or "last" employed, or in which they were employed at the "census" of the study. Occupational risks were estimated using standardized mortality ratios. Validity of surrogate indicators was assessed in the simulated data by comparison between estimated effects and the known underlying associations. Comparisons of surrogate indicators were conducted in both simulated and empirical data. RESULTS: Use of the definition "only" as the surrogate indicator gave valid but imprecise results. For all other definitions, we observed a moderate overestimation of risks in no-risk or low-risk jobs and attenuation of underlying dose-response relationships, without substantial differences among the applied definitions. CONCLUSIONS: Our results demonstrate a limitation of using surrogate indicators of exposure in occupational epidemiological studies. However, they suggest that the inconsistencies of published study findings in the rubber industry are unlikely to be attributable to the use of different surrogate indicators.

19.
Joint models are frequently used in survival analysis to assess the relationship between time-to-event data and time-dependent covariates, which are measured longitudinally but often with error. Routinely, a linear mixed-effects model is used to describe the longitudinal data process, while the survival times are assumed to follow the proportional hazards model. However, in some practical situations, individual covariate profiles may contain changepoints. In this article, we assume a two-phase polynomial random-effects model with subject-specific changepoints for the longitudinal data process and the proportional hazards model for the survival times. Our main interest is in the estimation of the parameter in the hazards model. We incorporate a smooth transition function into the changepoint model for the longitudinal data and develop the corrected score and conditional score estimators, which do not require any assumption regarding the underlying distribution of the random effects or that of the changepoints. The estimators are shown to be asymptotically equivalent, and their finite-sample performance is examined via simulations. The methods are applied to AIDS clinical trial data.
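In its simplest (linear two-phase) instance, a subject-specific changepoint trajectory of the kind described can be written, with notation assumed here, as

X_i(t) = \beta_{0i} + \beta_{1i} t + \beta_{2i} (t - \tau_i)_+ + \varepsilon_i(t),

where \tau_i is subject i's changepoint and (u)_+ = \max(u, 0); the smooth-transition device replaces the kink (u)_+ by a differentiable approximation such as u\,G(u/\gamma) for a smooth cumulative-distribution-type function G and a small bandwidth \gamma, which is what makes the corrected-score and conditional-score constructions workable.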

20.
Studies with longitudinal measurements are common in clinical research. Particular interest lies in studies where the repeated measurements are used to predict a time-to-event outcome, such as mortality, in a dynamic manner. If event rates in a study are low, however, and most information is expected to come from the patients experiencing the study endpoint, it may be more cost-efficient to use only a subset of the data. One way of achieving this is by applying a case-cohort design, which selects all cases and only a random sample of the noncases. In the standard way of analyzing data from a case-cohort design, the noncases who were not selected are completely excluded from the analysis; however, the overrepresentation of the cases will lead to bias. We propose to include survival information on all patients from the cohort in the analysis. We treat the fact that we do not have longitudinal information for a subset of the patients as a missing data problem and argue that the missingness mechanism is missing at random. Hence, results obtained from an appropriate model, such as a joint model, should remain valid. Simulations indicate that our method performs similarly to fitting the model on the full cohort, both in terms of parameter estimates and predictions of survival probabilities. Estimating the model on the classical version of the case-cohort design shows clear bias and worse predictive performance. The procedure is further illustrated with data from BIOMArCS, a biomarker study in patients with acute coronary syndrome.
