20 similar references retrieved.
1.
Time‐to‐event data with time‐varying biomarkers measured only at study entry, with applications to Alzheimer's disease
Catherine Lee, Rebecca A. Betensky, for the Alzheimer's Disease Neuroimaging Initiative. Statistics in Medicine 2018;37(6):914-932
Relating time‐varying biomarkers of Alzheimer's disease to time‐to‐event using a Cox model is complicated by the fact that Alzheimer's disease biomarkers are sparsely collected, typically only at study entry; this is problematic since Cox regression with time‐varying covariates requires observation of the covariate process at all failure times. The analysis might be simplified by using study entry as the time origin and treating the time‐varying covariate measured at study entry as a fixed baseline covariate. In this paper, we first derive conditions under which using an incorrect time origin of study entry results in consistent estimation of regression parameters when the time‐varying covariate is continuous and fully observed. We then derive conditions under which treating the time‐varying covariate as fixed at study entry results in consistent estimation. We provide methods for estimating the regression parameter when a functional form can be assumed for the time‐varying biomarker, which is measured only at study entry. We demonstrate our analytical results in a simulation study and apply our methods to data from the Rush Religious Orders Study and Memory and Aging Project and data from the Alzheimer's Disease Neuroimaging Initiative.
2.
Methods for dealing with tied event times in the Cox proportional hazards model are well developed. Also, the partial likelihood provides a natural way to handle covariates that change over time. However, ties between event times and the times that discrete time‐varying covariates change have not been systematically studied in the literature. In this article, we discuss the default behavior of current software and propose some simple methods for dealing with such ties. A simulation study shows that the default behavior of current software can lead to biased estimates of the coefficient of a binary time‐varying covariate and that two proposed methods (Random Jitter and Equally Weighted) reduce estimation bias. The proposed methods can be easily implemented with existing software. The methods are illustrated on the well‐known Stanford heart transplant data and data from a study on intimate partner violence and smoking. Copyright © 2012 John Wiley & Sons, Ltd.
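A minimal sketch of one plausible reading of the "Random Jitter" idea described above: when a covariate change time coincides exactly with an observed event time, the change time is shifted by a tiny random amount before the (start, stop] counting-process rows are rebuilt. Whether the jitter is applied to the change time or the event time, the jitter scale `eps`, and all variable names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

def jitter_change_times(change_times, event_times, eps=1e-6):
    """Break ties between covariate change times and observed event times
    by shifting each tied change time by a small uniform random amount."""
    event_set = set(np.round(np.asarray(event_times, float), 10))
    out = []
    for t in change_times:
        if round(float(t), 10) in event_set:
            t = t + rng.uniform(-eps, eps)   # tiny random shift breaks the tie
        out.append(t)
    return np.array(out)

def to_counting_process(follow_up, event, change_time):
    """Split one subject's follow-up into (start, stop] rows with a binary
    time-varying covariate that switches from 0 to 1 at change_time."""
    if change_time is None or change_time >= follow_up:
        rows = [(0.0, follow_up, event, 0)]
    else:
        rows = [(0.0, change_time, 0, 0), (change_time, follow_up, event, 1)]
    return pd.DataFrame(rows, columns=["start", "stop", "event", "x"])

# Usage: jitter the change times against the observed event times, then build rows
event_times = np.array([2.0, 3.5, 5.0])
changes = jitter_change_times([3.5, 4.0], event_times)
```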
3.
Per Kragh Andersen, Maja Pohar Perme, Hans C. van Houwelingen, Richard J. Cook, Pierre Joly, Torben Martinussen, Jeremy M. G. Taylor, Michal Abrahamowicz, Terry M. Therneau. Statistics in Medicine 2021;40(1):185-211
This paper provides guidance for researchers with some mathematical background on the conduct of time‐to‐event analysis in observational studies based on intensity (hazard) models. Discussions of basic concepts like time axis, event definition and censoring are given. Hazard models are introduced, with special emphasis on the Cox proportional hazards regression model. We provide check lists that may be useful both when fitting the model and assessing its goodness of fit and when interpreting the results. Special attention is paid to how to avoid problems with immortal time bias by introducing time‐dependent covariates. We discuss prediction based on hazard models and difficulties when attempting to draw proper causal conclusions from such models. Finally, we present a series of examples where the methods and check lists are exemplified. Computational details and implementation using the freely available R software are documented in Supplementary Material. The paper was prepared as part of the STRATOS initiative.
4.
The clinical trial design including a test treatment, an active control and a placebo is called the gold standard design. In this paper, we develop a statistical method for planning and evaluating non‐inferiority trials with gold standard design for right‐censored time‐to‐event data. We consider both lost to follow‐up and administrative censoring. We present a semiparametric approach that only assumes the proportionality of the hazard functions. In particular, we develop an algorithm for calculating the minimal total sample size and its optimal allocation to treatment groups such that a desired power can be attained for a specific parameter constellation under the alternative. For the purpose of sample size calculation, we assume the endpoints to be Weibull distributed. By means of simulations, we investigate the actual type I error rate, power and the accuracy of the calculated sample sizes. Finally, we compare our procedure with a previously proposed procedure assuming exponentially distributed event times. To illustrate our method, we consider a double‐blinded, randomized, active and placebo controlled trial in major depression. Copyright © 2013 John Wiley & Sons, Ltd.
5.
David J. Hendry. Statistics in Medicine 2014;33(3):436-454
The proliferation of longitudinal studies has increased the importance of statistical methods for time‐to‐event data that can incorporate time‐dependent covariates. The Cox proportional hazards model is one such method that is widely used. As more extensions of the Cox model with time‐dependent covariates are developed, simulation studies will grow in importance as well. An essential starting point for simulation studies of time‐to‐event models is the ability to produce simulated survival times from a known data generating process. This paper develops a method for the generation of survival times that follow a Cox proportional hazards model with time‐dependent covariates. The method presented relies on a simple transformation of random variables generated according to a truncated piecewise exponential distribution and allows practitioners great flexibility and control over both the number of time‐dependent covariates and the number of time periods in the duration of follow‐up measurement. Within this framework, an additional argument is suggested that allows researchers to generate time‐to‐event data in which covariates change at integer‐valued steps of the time scale. The purpose of this approach is to produce data for simulation experiments that mimic the types of data structures that applied researchers encounter when using longitudinal biomedical data. Validity is assessed in a set of simulation experiments, and results indicate that the proposed procedure performs well in producing data that conform to the assumptions of the Cox proportional hazards model. Copyright © 2013 John Wiley & Sons, Ltd.
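A minimal sketch, in Python rather than the authors' own implementation, of the kind of data-generating process described above: a binary covariate is allowed to change at integer steps of the time scale, the hazard is piecewise constant within each unit interval, and survival times are drawn by inverting the cumulative hazard. All names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2024)

def draw_survival_time(x_path, lam0, beta):
    """Invert the cumulative hazard of a piecewise-constant hazard model.

    x_path : covariate value on each unit interval [j, j+1), j = 0, 1, ...
    lam0   : constant baseline hazard
    beta   : log hazard ratio for the time-varying covariate
    """
    target = rng.exponential()            # -log(U), U ~ Uniform(0, 1)
    cum = 0.0
    for j, x in enumerate(x_path):
        haz = lam0 * np.exp(beta * x)     # hazard on [j, j+1), interval length 1
        if cum + haz > target:            # event occurs within this interval
            return j + (target - cum) / haz
        cum += haz
    return float(len(x_path))             # administratively censored at end of path

# Example: covariate switches on at t = 3 over 10 periods of follow-up
x_path = [0, 0, 0, 1, 1, 1, 1, 1, 1, 1]
times = np.array([draw_survival_time(x_path, lam0=0.1, beta=0.7) for _ in range(5000)])
events = times < len(x_path)
```

In a full simulation study each subject would receive its own covariate path, and the resulting (start, stop] records would be fitted with a Cox model to check that beta is recovered.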
6.
Bayesian methods for setting sample sizes and choosing allocation ratios in phase II clinical trials with time‐to‐event endpoints
Conventional phase II trials using binary endpoints as early indicators of a time‐to‐event outcome are not always feasible. Uveal melanoma has no reliable intermediate marker of efficacy. In pancreatic cancer and viral clearance, the time to the event of interest is short, making an early indicator unnecessary. In the latter application, Weibull models have been used to analyse corresponding time‐to‐event data. Bayesian sample size calculations are presented for single‐arm and randomised phase II trials assuming proportional hazards models for time‐to‐event endpoints. Special consideration is given to the case where survival times follow the Weibull distribution. The proposed methods are demonstrated through an illustrative trial based on uveal melanoma patient data. A procedure for prior specification based on knowledge or predictions of survival patterns is described. This enables investigation into the choice of allocation ratio in the randomised setting to assess whether a control arm is indeed required. The Bayesian framework enables sample sizes consistent with those used in practice to be obtained. When a confirmatory phase III trial will follow if suitable evidence of efficacy is identified, Bayesian approaches are less controversial than for definitive trials. In the randomised setting, a compromise for obtaining feasible sample sizes is a loss in certainty in the specified hypotheses: the Bayesian counterpart of power. However, this approach may still be preferable to running a single‐arm trial where no data is collected on the control treatment. This dilemma is present in most phase II trials, where resources are not sufficient to conduct a definitive trial. Copyright © 2015 John Wiley & Sons, Ltd.
7.
As medical expenses continue to rise, methods to properly analyze cost outcomes are becoming of increasing relevance when seeking to compare average costs across treatments. Inverse probability weighted regression models have been developed to address the challenge of cost censoring in order to identify intent‐to‐treat effects (i.e., to compare mean costs between groups on the basis of their initial treatment assignment, irrespective of any subsequent changes to their treatment status). In this paper, we describe a nested g‐computation procedure that can be used to compare mean costs between two or more time‐varying treatment regimes. We highlight the relative advantages and limitations of this approach when compared with existing regression‐based models. We illustrate the utility of this approach as a means to inform public policy by applying it to a simulated data example motivated by costs associated with cancer treatments. Simulations confirm that inference regarding intent‐to‐treat effects versus the joint causal effects estimated by the nested g‐formula can lead to markedly different conclusions regarding differential costs. Therefore, it is essential to prespecify the desired target of inference when choosing between these two frameworks. The nested g‐formula should be considered as a useful, complementary tool to existing methods when analyzing cost outcomes.
8.
Lynn E. Eberly, James S. Hodges, Kay Savik, Olga Gurvich, Donna Z. Bliss, Christine Mueller. Statistics in Medicine 2013;32(23):4006-4020
The Peters–Belson (PB) method was developed for quantifying and testing disparities between groups in an outcome by using linear regression to compute group‐specific observed and expected outcomes. It has since been extended to generalized linear models for binary and other outcomes and to analyses with probability‐based sample weighting. In this work, we extend the PB approach to right‐censored survival analysis, including stratification if needed. The extension uses the theory and methods of expected survival on the basis of Cox regression in a reference population. Within the PB framework, among the groups to be compared, one group is chosen as the reference group, and outcomes in that group are modeled as a function of available predictors. By using this fitted model's estimated parameters, and the predictor values for a comparator group, the comparator group's expected outcomes are then calculated and compared, formally with testing and informally with graphics, with their observed outcomes. We derive the extension, show how we applied it in a study of incontinence in nursing home elderly, and discuss issues in implementing it. We used the ‘survival’ package in the R system to do computations. Copyright © 2013 John Wiley & Sons, Ltd.
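The paper's computations used the 'survival' package in R; the following is a rough Python analogue (using the lifelines package) of the core Peters–Belson step described above: fit a Cox model in the reference group only, use it to compute each comparator-group subject's expected survival curve, and compare the averaged expected curve with the comparator group's observed Kaplan–Meier curve. Column names and the data layout are assumptions for illustration, and this is a sketch of the idea, not the authors' stratified procedure.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

def peters_belson_survival(ref, comp, covariates, duration_col="time", event_col="event"):
    """ref, comp: DataFrames for the reference and comparator groups."""
    # 1. Model the outcome in the reference group only.
    cph = CoxPHFitter()
    cph.fit(ref[covariates + [duration_col, event_col]],
            duration_col=duration_col, event_col=event_col)

    # 2. Expected survival for the comparator group: its own covariate values,
    #    but the reference group's fitted model; average over subjects.
    expected = cph.predict_survival_function(comp[covariates]).mean(axis=1)

    # 3. Observed survival in the comparator group (Kaplan-Meier).
    km = KaplanMeierFitter()
    km.fit(comp[duration_col], event_observed=comp[event_col])

    return expected, km.survival_function_
```

Informal comparison of the two returned curves (plotted together) mirrors the graphical contrast of observed versus expected outcomes described in the abstract.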
9.
In survival analysis, use of the Cox proportional hazards model requires knowledge of all covariates under consideration at every failure time. Since failure times rarely coincide with observation times, time-dependent covariates (covariates that vary over time) need to be inferred from the observed values. In this paper, we introduce the last value auto-regressed (LVAR) estimation method and compare it to several other established estimation approaches via a simulation study. The comparison shows that under several time-dependent covariate processes this method results in a smaller mean square error when considering the time-dependent covariate effect.
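The LVAR estimator itself is not spelled out in the abstract; the sketch below only illustrates the underlying problem, filling in a sparsely measured covariate at an arbitrary failure time with the common last-observation-carried-forward convention, one of the established approaches such a method would be compared against. Everything here is an assumption for illustration, not the authors' method.

```python
import numpy as np

def covariate_at(t, obs_times, obs_values):
    """Last observed value of the covariate at or before time t
    (last observation carried forward); NaN if nothing has been observed yet."""
    obs_times = np.asarray(obs_times, float)
    idx = np.searchsorted(obs_times, t, side="right") - 1
    return np.nan if idx < 0 else obs_values[idx]

# A subject measured at t = 0, 2, 5; covariate value needed at a failure time t = 3.7
print(covariate_at(3.7, [0, 2, 5], [1.2, 0.8, 1.5]))   # -> 0.8
```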
10.
Meta‐analysis of time‐to‐event outcomes from randomized trials using restricted mean survival time: application to individual participant data
Yinghui Wei, Patrick Royston, Jayne F. Tierney, Mahesh K. B. Parmar. Statistics in Medicine 2015;34(21):2881-2898
Meta‐analysis of time‐to‐event outcomes using the hazard ratio as a treatment effect measure has an underlying assumption that hazards are proportional. The between‐arm difference in the restricted mean survival time is a measure that avoids this assumption and allows the treatment effect to vary with time. We describe and evaluate meta‐analysis based on the restricted mean survival time for dealing with non‐proportional hazards and present a diagnostic method for the overall proportional hazards assumption. The methods are illustrated with the application to two individual participant meta‐analyses in cancer. The examples were chosen because they differ in disease severity and the patterns of follow‐up, in order to understand the potential impacts on the hazards and the overall effect estimates. We further investigate the estimation methods for restricted mean survival time by a simulation study. Copyright © 2015 John Wiley & Sons, Ltd.
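As a rough illustration of the quantity being pooled, the sketch below computes the restricted mean survival time up to a horizon tau as the area under a Kaplan–Meier curve, and combines per-trial between-arm differences with a simple inverse-variance (fixed-effect) rule. The variance inputs are assumed to be supplied separately (for example, by bootstrap); this is not the estimation procedure evaluated in the paper.

```python
import numpy as np

def km_curve(time, event):
    """Kaplan-Meier estimate evaluated at the distinct event times."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    t_sorted = np.sort(np.unique(time[event == 1]))
    surv, s = [], 1.0
    for t in t_sorted:
        at_risk = np.sum(time >= t)
        deaths = np.sum((time == t) & (event == 1))
        s *= 1.0 - deaths / at_risk
        surv.append(s)
    return t_sorted, np.array(surv)

def rmst(time, event, tau):
    """Restricted mean survival time: area under the KM curve from 0 to tau."""
    t, s = km_curve(time, event)
    grid = np.concatenate(([0.0], t[t < tau], [tau]))
    steps = np.concatenate(([1.0], s[t < tau]))      # survival level on each segment
    return float(np.sum(steps * np.diff(grid)))

def pooled_difference(diffs, variances):
    """Inverse-variance (fixed-effect) pooling of per-trial RMST differences."""
    w = 1.0 / np.asarray(variances, float)
    est = np.sum(w * np.asarray(diffs, float)) / np.sum(w)
    return est, np.sqrt(1.0 / np.sum(w))
```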
11.
Considerations for analysis of time‐to‐event outcomes measured with error: Bias and correction with SIMEX
Eric J. Oh, Bryan E. Shepherd, Thomas Lumley, Pamela A. Shaw. Statistics in Medicine 2018;37(8):1276-1289
For time‐to‐event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact or addressing errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression‐free survival or time to AIDS progression) can be difficult to assess or reliant on self‐report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log‐linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic.
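To make the simulation-extrapolation (SIMEX) mechanics concrete, the sketch below applies them to a deliberately simplified setting: no censoring, classical normal error added on the log event-time scale, and a crude hazard-ratio estimator based on an exponential model rather than the Cox model. The quadratic extrapolation back to zeta = -1 is the standard SIMEX step; everything else (estimator, grid, names) is an assumption for illustration, not the correction developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def naive_loghr(time, x):
    """Crude log hazard ratio under an exponential model: log of the
    event-rate ratio between the x = 1 and x = 0 groups (no censoring)."""
    time, x = np.asarray(time, float), np.asarray(x, int)
    rate1 = np.sum(x == 1) / np.sum(time[x == 1])
    rate0 = np.sum(x == 0) / np.sum(time[x == 0])
    return np.log(rate1 / rate0)

def simex_loghr(obs_log_time, x, sigma, zetas=(0.5, 1.0, 1.5, 2.0), B=50):
    """SIMEX: add extra error with variance zeta * sigma^2 to the observed log
    times, track the estimate as a function of zeta, then extrapolate a
    quadratic fit back to zeta = -1 (the no-measurement-error point)."""
    obs_log_time = np.asarray(obs_log_time, float)
    zetas = np.asarray(zetas, float)
    means = []
    for z in zetas:
        est = [naive_loghr(np.exp(obs_log_time +
                                  rng.normal(0, np.sqrt(z) * sigma, size=len(obs_log_time))), x)
               for _ in range(B)]
        means.append(np.mean(est))
    grid = np.concatenate(([0.0], zetas))
    vals = np.concatenate(([naive_loghr(np.exp(obs_log_time), x)], means))
    coef = np.polyfit(grid, vals, deg=2)
    return np.polyval(coef, -1.0)          # extrapolated, error-corrected estimate
```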
12.
Quantifying risk over the life course – latency, age‐related susceptibility, and other time‐varying exposure metrics
Molin Wang, Xiaomei Liao, Francine Laden, Donna Spiegelman. Statistics in Medicine 2016;35(13):2283-2295
Identification of the latency period and age‐related susceptibility, if any, is an important aspect of assessing risks of environmental, nutritional, and occupational exposures. We consider estimation and inference for latency and age‐related susceptibility in relative risk and excess risk models. We focus on likelihood‐based methods for point and interval estimation of the latency period and age‐related windows of susceptibility coupled with several commonly considered exposure metrics. The method is illustrated in a study of the timing of the effects of constituents of air pollution on mortality in the Nurses' Health Study. Copyright © 2016 John Wiley & Sons, Ltd.
13.
Incorporating baseline measurements into the analysis of crossover trials with time‐to‐event endpoints
Two‐period two‐treatment (2×2) crossover designs are commonly used in clinical trials. For continuous endpoints, it has been shown that baseline (pretreatment) measurements collected before the start of each treatment period can be useful in improving the power of the analysis. Methods to achieve a corresponding gain for censored time‐to‐event endpoints have not been adequately studied. We propose a method in which censored values are treated as missing data and multiply imputed using prespecified parametric event time models. The event times in each imputed data set are then log‐transformed and analyzed using a linear model suitable for a 2×2 crossover design with continuous endpoints, with the difference in period‐specific baselines included as a covariate. Results obtained from the imputed data sets are synthesized for point and confidence interval estimation of the treatment ratio of geometric mean event times using model averaging in conjunction with Rubin's combination rule. We use simulations to illustrate the favorable operating characteristics of our method relative to two other methods for crossover trials with censored time‐to‐event data, ie, a hierarchical rank test that ignores the baselines and a stratified Cox model that uses each study subject as a stratum and includes period‐specific baselines as a covariate. Application to a real data example is provided.
14.
John M. Lachin. Statistics in Medicine 2013;32(2):220-229
Power for time‐to‐event analyses is usually assessed under continuous‐time models. Often, however, times are discrete or grouped, as when the event is only observed when a procedure is performed. Wallenstein and Wittes (Biometrics, 1993) describe the power of the Mantel–Haenszel test for discrete lifetables under their chained binomial model for specified vectors of event probabilities over intervals of time. Herein, the expressions for these probabilities are derived under a piecewise exponential model allowing for staggered entry and losses to follow‐up. Radhakrishna (Biometrics, 1965) showed that the Mantel–Haenszel test is maximally efficient under the alternative of a constant odds ratio and derived the optimal weighted test under other alternatives. Lachin (Biostatistical Methods: The Assessment of Relative Risks, 2011) described the power function of this family of weighted Mantel–Haenszel tests. Prentice and Gloeckler (Biometrics, 1978) described a generalization of the proportional hazards model for grouped time data and the corresponding maximally efficient score test. Their test is also shown to be a weighted Mantel–Haenszel test, and its power function is likewise obtained. There is trivial loss in power under the discrete chained binomial model relative to the continuous‐time case provided that there is a modest number of periodic evaluations. Relative to the case of homogeneity of odds ratios, there can be substantial loss in power when there is substantial heterogeneity of odds ratios, especially when heterogeneity occurs early in a study when most subjects are at risk, but little loss in power when there is heterogeneity late in a study. Copyright © 2012 John Wiley & Sons, Ltd.
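The sketch below shows the first step described above in its simplest possible form: converting piecewise exponential hazards into the interval-specific event probabilities of a discrete lifetable and the corresponding interval odds ratios, whose constancy (or not) is what drives the efficiency of the Mantel–Haenszel test. Staggered entry and losses to follow-up are ignored here, and the hazard values are invented for illustration; this is not the power calculation developed in the paper.

```python
import numpy as np

def interval_probabilities(hazards, width=1.0):
    """Conditional probability of an event in each interval, given survival
    to its start, under a piecewise exponential model."""
    return 1.0 - np.exp(-np.asarray(hazards, float) * width)

# Invented hazards for control and treated groups over five intervals
p_c = interval_probabilities([0.20, 0.20, 0.20, 0.20, 0.20])
p_t = interval_probabilities([0.10, 0.12, 0.14, 0.16, 0.18])

odds_ratios = (p_t / (1 - p_t)) / (p_c / (1 - p_c))
print(np.round(odds_ratios, 3))   # heterogeneous odds ratios across intervals
```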
15.
Observational studies provide a rich source of information for assessing effectiveness of treatment interventions in many situations where it is not ethical or practical to perform randomized controlled trials. However, such studies are prone to bias from hidden (unmeasured) confounding. A promising approach to identifying and reducing the impact of unmeasured confounding is prior event rate ratio (PERR) adjustment, a quasi‐experimental analytic method proposed in the context of electronic medical record database studies. In this paper, we present a statistical framework for using a pairwise approach to PERR adjustment that removes bias inherent in the original PERR method. A flexible pairwise Cox likelihood function is derived and used to demonstrate the consistency of the simple and convenient alternative PERR (PERR‐ALT) estimator. We show how to estimate standard errors and confidence intervals for treatment effect estimates based on the observed information and provide R code to illustrate how to implement the method. Assumptions required for the pairwise approach (as well as PERR) are clarified, and the consequences of model misspecification are explored. Our results confirm the need for researchers to consider carefully the suitability of the method in the context of each problem. Extensions of the pairwise likelihood to more complex designs involving time‐varying covariates or more than two periods are considered. We illustrate the application of the method using data from a longitudinal cohort study of enzyme replacement therapy for lysosomal storage disorders. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
16.
The Fibrinogen Studies Collaboration. Statistics in Medicine 2009;28(8):1218-1237
One difficulty in performing meta‐analyses of observational cohort studies is that the availability of confounders may vary between cohorts, so that some cohorts provide fully adjusted analyses while others only provide partially adjusted analyses. Commonly, analyses of the association between an exposure and disease either are restricted to cohorts with full confounder information, or use all cohorts but do not fully adjust for confounding. We propose using a bivariate random‐effects meta‐analysis model to use information from all available cohorts while still adjusting for all the potential confounders. Our method uses both the fully adjusted and the partially adjusted estimated effects in the cohorts with full confounder information, together with an estimate of their within‐cohort correlation. The method is applied to estimate the association between fibrinogen level and coronary heart disease incidence using data from 154 012 participants in 31 cohorts. One hundred and ninety‐nine participants from the original 154 211 withdrew their consent and have been removed from this analysis. Copyright © 2009 John Wiley & Sons, Ltd.
17.
Charles Donald George Keown‐Stoneman, Julie Horrocks, Gerarda Darlington. Statistics in Medicine 2018;37(5):776-788
Cox models are commonly used in the analysis of time to event data. One advantage of Cox models is the ability to include time‐varying covariates, often a binary covariate that codes for the occurrence of an event that affects an individual subject. A common assumption in this case is that the effect of the event on the outcome of interest is constant and permanent for each subject. In this paper, we propose a modification to the Cox model to allow the influence of an event to exponentially decay over time. Methods for generating data using the inverse cumulative density function for the proposed model are developed. Likelihood ratio tests and AIC are investigated as methods for comparing the proposed model to the commonly used permanent exposure model. A simulation study is performed, and 3 different data sets are presented as examples.
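One way to picture the proposed covariate is sketched below: once the intermediate event occurs, its contribution to the linear predictor decays as exp(-theta * (t - t_event)) instead of staying fixed at 1 (theta = 0 recovers the permanent exposure model). The sketch only evaluates that covariate and splits a subject's follow-up into short (start, stop] rows on which it is held constant; the decay rate, step size, and data layout are assumptions for illustration, not the authors' code.

```python
import numpy as np
import pandas as pd

def decaying_exposure(t, t_event, theta):
    """Time-varying covariate: 0 before the event, then exponentially
    decaying from 1 towards 0 after it."""
    t = np.asarray(t, float)
    return np.where(t >= t_event, np.exp(-theta * (t - t_event)), 0.0)

def split_follow_up(follow_up, event, t_event, theta, step=0.5):
    """Approximate counting-process rows with the decaying covariate held
    constant at its start-of-interval value within each (start, stop] row."""
    edges = np.arange(0.0, follow_up, step).tolist() + [follow_up]
    rows = []
    for a, b in zip(edges[:-1], edges[1:]):
        rows.append((a, b, int(event and b == follow_up),
                     float(decaying_exposure(a, t_event, theta))))
    return pd.DataFrame(rows, columns=["start", "stop", "event", "z"])

print(split_follow_up(follow_up=3.0, event=1, t_event=1.0, theta=0.8))
```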
18.
Peter C. Austin. Statistics in Medicine 2012;31(29):3946-3958
Simulations and Monte Carlo methods serve an important role in modern statistical research. They allow for an examination of the performance of statistical procedures in settings in which analytic and mathematical derivations may not be feasible. A key element in any statistical simulation is the existence of an appropriate data‐generating process: one must be able to simulate data from a specified statistical model. We describe data‐generating processes for the Cox proportional hazards model with time‐varying covariates when event times follow an exponential, Weibull, or Gompertz distribution. We consider three types of time‐varying covariates: first, a dichotomous time‐varying covariate that can change at most once from untreated to treated (e.g., organ transplant); second, a continuous time‐varying covariate such as cumulative exposure at a constant dose to radiation or to a pharmaceutical agent used for a chronic condition; third, a dichotomous time‐varying covariate with a subject being able to move repeatedly between treatment states (e.g., current compliance or use of a medication). In each setting, we derive closed‐form expressions that allow one to simulate survival times so that survival times are related to a vector of fixed or time‐invariant covariates and to a single time‐varying covariate. We illustrate the utility of our closed‐form expressions for simulating event times by using Monte Carlo simulations to estimate the statistical power to detect as statistically significant the effect of different types of binary time‐varying covariates. This is compared with the statistical power to detect as statistically significant a binary time‐invariant covariate. Copyright © 2012 John Wiley & Sons, Ltd.
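For the first covariate type described above (a dichotomous covariate that switches at most once, e.g., organ transplant) with an exponential baseline hazard, inverting the cumulative hazard gives a particularly simple closed form, sketched below. The Weibull and Gompertz baselines and the other covariate types in the paper need their own expressions; the parameter values and names here are placeholders, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def sim_single_switch(lam, beta, t_switch, n):
    """Survival times under an exponential baseline hazard lam, where a binary
    covariate switches from 0 to 1 at t_switch and multiplies the hazard by
    exp(beta) from then on. The cumulative hazard is inverted analytically."""
    e = rng.exponential(size=n)                  # -log(U), U ~ Uniform(0, 1)
    before = e / lam                             # candidate time if event precedes the switch
    after = t_switch + (e - lam * t_switch) / (lam * np.exp(beta))
    return np.where(before < t_switch, before, after)

times = sim_single_switch(lam=0.05, beta=0.7, t_switch=4.0, n=10_000)
```

Repeating such draws under different values of beta, and fitting the corresponding Cox models, is how power for a binary time-varying covariate can be estimated by Monte Carlo, as the abstract describes.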
19.
Mediation analysis for a survival outcome with time‐varying exposures, mediators, and confounders
Sheng‐Hsuan Lin, Jessica G. Young, Roger Logan, Tyler J. VanderWeele. Statistics in Medicine 2017;36(26):4153-4166
We propose an approach to conduct mediation analysis for survival data with time‐varying exposures, mediators, and confounders. We identify certain interventional direct and indirect effects through a survival mediational g‐formula and describe the required assumptions. We also provide a feasible parametric approach along with an algorithm and software to estimate these effects. We apply this method to analyze the Framingham Heart Study data to investigate the causal mechanism of smoking on mortality through coronary artery disease. The estimated overall 10‐year all‐cause mortality risk difference comparing “always smoke 30 cigarettes per day” versus “never smoke” was 4.3 (95% CI = 1.37, 6.30). Of the overall effect, we estimated 7.91% (95% CI = 1.36%, 19.32%) was mediated by the incidence and timing of coronary artery disease. The survival mediational g‐formula constitutes a powerful tool for conducting mediation analysis with longitudinal data.
20.
A Bayesian approach for instrumental variable analysis with censored time‐to‐event outcome
Instrumental variable (IV) analysis has been widely used in economics, epidemiology, and other fields to estimate the causal effects of covariates on outcomes, in the presence of unobserved confounders and/or measurement errors in covariates. However, IV methods for time‐to‐event outcome with censored data remain underdeveloped. This paper proposes a Bayesian approach for IV analysis with censored time‐to‐event outcome by using a two‐stage linear model. A Markov chain Monte Carlo sampling method is developed for parameter estimation for both normal and non‐normal linear models with elliptically contoured error distributions. The performance of our method is examined by simulation studies. Our method largely reduces bias and greatly improves coverage probability of the estimated causal effect, compared with the method that ignores the unobserved confounders and measurement errors. We illustrate our method on the Women's Health Initiative Observational Study and the Atherosclerosis Risk in Communities Study. Copyright © 2014 John Wiley & Sons, Ltd.