Similar Articles
20 similar articles found (search time: 15 ms)
1.
Clinical trials with multiple primary time‐to‐event outcomes are common. The use of multiple endpoints creates challenges in evaluating power and calculating sample size during trial design, particularly for time‐to‐event outcomes. We present methods for calculating the power and sample size of randomized superiority clinical trials with two correlated time‐to‐event outcomes, under both independent and dependent censoring and for three censoring scenarios: (i) both events are non‐fatal; (ii) one event is fatal (semi‐competing risk); and (iii) both events are fatal (competing risk). We derive the bivariate log‐rank test in all three censoring scenarios and investigate the behavior of power and the required sample sizes. Separate evaluations are conducted for two inferential goals: whether the test intervention is superior to the control on (1) all of the endpoints (multiple co‐primary) or (2) at least one endpoint (multiple primary). Copyright © 2017 John Wiley & Sons, Ltd.
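A minimal simulation sketch of the power evaluation described above, assuming Gaussian-copula-correlated exponential event times and administrative censoring (a simplification of the paper's bivariate log‐rank derivation; all parameter values are illustrative):

```r
## Power for two correlated time-to-event endpoints under two inferential
## goals: superiority on both endpoints (co-primary) or on at least one
## (multiple primary, Bonferroni-adjusted). Not the paper's analytic method.
library(MASS)      # mvrnorm
library(survival)  # survdiff
set.seed(1)

power_sim <- function(n_per_arm = 200, hr = c(0.70, 0.75), rho = 0.4,
                      base_rate = 0.10, cens_time = 36, nsim = 500,
                      alpha = 0.025) {
  sim_times <- function(n, rates) {
    z <- mvrnorm(n, mu = c(0, 0), Sigma = matrix(c(1, rho, rho, 1), 2))
    u <- pnorm(z)                                 # correlated uniforms
    cbind(qexp(u[, 1], rates[1]), qexp(u[, 2], rates[2]))
  }
  hits <- replicate(nsim, {
    t0  <- sim_times(n_per_arm, rep(base_rate, 2))    # control arm
    t1  <- sim_times(n_per_arm, base_rate * hr)       # treatment arm
    arm <- rep(0:1, each = n_per_arm)
    p <- sapply(1:2, function(k) {
      t  <- c(t0[, k], t1[, k])
      tt <- pmin(t, cens_time)                        # administrative censoring
      1 - pchisq(survdiff(Surv(tt, t <= cens_time) ~ arm)$chisq, 1)
    })
    c(coprimary = all(p < alpha),        # intersection-union: no adjustment
      multiple  = any(p < alpha / 2))    # Bonferroni for "at least one"
  })
  rowMeans(hits)
}
power_sim()
```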

2.
Composite endpoints combine several events of interest within a single variable. These are often time‐to‐first‐event data, which are analyzed via survival analysis techniques. To demonstrate a significant overall clinical benefit, it is sufficient to assess the test problem formulated for the composite. However, the effect observed for the composite does not necessarily reflect the effects for its components. It is therefore desirable that the sample size for a clinical trial using a composite endpoint provide enough power not only to detect a clinically relevant superiority for the composite but also to address the components adequately. The single components of a composite endpoint, assessed as time to first event, define competing risks. We consider multiple test problems based on the cause‐specific hazards of competing events to address the problem of analyzing both a composite endpoint and its components, using sequentially rejective test procedures to reduce the power loss to a minimum. We show how to calculate the sample size for the given multiple test problem using a readily applicable simulation tool in SAS. Our ideas are illustrated by two clinical study examples. Copyright © 2013 John Wiley & Sons, Ltd.
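The paper's simulation tool is in SAS; as a hedged illustration only, here is an analogous simulation in R of a simple fixed-sequence (sequentially rejective) rule: the composite is tested first at full level, and the cause-specific components are tested only afterwards, with a Bonferroni split. All rates and effect sizes are invented:

```r
## Empirical power for a composite endpoint and its components under
## hierarchical testing with cause-specific log-rank tests.
library(survival)
set.seed(2)

one_trial <- function(n = 300, h_ctrl = c(0.06, 0.04), hr = c(0.6, 0.8),
                      cens = 24, alpha = 0.05) {
  arm <- rep(0:1, each = n)
  t1  <- rexp(2 * n, h_ctrl[1] * hr[1]^arm)       # cause-specific latent times
  t2  <- rexp(2 * n, h_ctrl[2] * hr[2]^arm)
  tfirst <- pmin(t1, t2, cens)
  cause  <- ifelse(tfirst == cens, 0, ifelse(t1 <= t2, 1, 2))
  p_lr <- function(ev) 1 - pchisq(survdiff(Surv(tfirst, ev) ~ arm)$chisq, 1)
  p_comp <- p_lr(cause > 0)                       # composite: time to first event
  p_c1   <- p_lr(cause == 1)                      # cause-specific log-rank
  p_c2   <- p_lr(cause == 2)
  ## components are tested only if the composite null is rejected first
  c(composite  = p_comp < alpha,
    component1 = p_comp < alpha && p_c1 < alpha / 2,
    component2 = p_comp < alpha && p_c2 < alpha / 2)
}
rowMeans(replicate(2000, one_trial()))            # power per hypothesis
```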

3.
The number needed to treat is a tool often used in clinical settings to illustrate the effect of a treatment. It has been widely adopted in the communication of risks to both clinicians and non‐clinicians, such as patients, who understand this measure more readily than absolute risk or rate reductions. The concept was introduced by Laupacis, Sackett, and Roberts in 1988 for binary data and extended to time‐to‐event data by Altman and Andersen in 1999. To date, however, there is no definition of the number needed to treat for time‐to‐event data with competing risks. This paper introduces such a definition using the cumulative incidence function and suggests non‐parametric and semi‐parametric inferential methods for right‐censored time‐to‐event data in the presence of competing risks. The procedures are illustrated using data from a breast cancer clinical trial. Copyright © 2013 John Wiley & Sons, Ltd.
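A minimal sketch of the definition, assuming the cmprsk package: NNT(t0) = 1 / (CIF_control(t0) - CIF_treated(t0)), with the cumulative incidence functions estimated nonparametrically. This gives the point estimate only, not the paper's non‐parametric/semi‐parametric inference; the data are simulated for illustration:

```r
## NNT at t0 from Aalen-Johansen CIF estimates (point estimate only).
library(cmprsk)
set.seed(3)
n   <- 400
arm <- rep(c("control", "treated"), each = n / 2)
t1  <- rexp(n, ifelse(arm == "treated", 0.05, 0.09))  # event of interest
t2  <- rexp(n, 0.03)                                  # competing event
cns <- runif(n, 0, 30)                                # random censoring
ft  <- pmin(t1, t2, cns)
fs  <- ifelse(ft == cns, 0, ifelse(t1 <= t2, 1, 2))   # 0 = censored

cif <- cuminc(ftime = ft, fstatus = fs, group = arm)
est <- timepoints(cif[names(cif) != "Tests"], times = 12)$est
nnt <- 1 / (est["control 1", 1] - est["treated 1", 1]) # cause-1 NNT at t = 12
nnt
```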

4.
Cai Wu, Liang Li. Statistics in Medicine. 2018;37(21):3106-3124.
This paper focuses on quantifying and estimating the predictive accuracy of prognostic models for time‐to‐event outcomes with competing events. We consider time‐dependent discrimination and calibration metrics, including the receiver operating characteristic curve and the Brier score, in the context of competing risks. To address censoring, we propose a unified nonparametric estimation framework for both discrimination and calibration measures, by weighting the censored subjects with the conditional probability of the event of interest given the observed data. The proposed method can be extended to time‐dependent predictive accuracy metrics constructed from a general class of loss functions. We apply the methodology to a data set from the African American Study of Kidney Disease and Hypertension to evaluate the predictive accuracy of a prognostic risk score in predicting end‐stage renal disease, accounting for the competing risk of pre–end‐stage renal disease death, and we evaluate the method's numerical performance in extensive simulation studies.
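A hedged sketch of one censoring-adjusted calibration metric: a Brier score at horizon t0 for the cause-1 cumulative incidence. Note that the paper weights censored subjects by the conditional event probability given the observed data; the simpler inverse-probability-of-censoring weighting shown here is a stand-in, not the authors' estimator:

```r
## IPCW Brier score at horizon t0 for the cause-1 cumulative incidence.
library(survival)
ipcw_brier <- function(time, status, pred, t0) {
  # status: 0 = censored, 1 = event of interest, 2 = competing event
  # pred:   model-based P(T <= t0, cause = 1) for each subject
  Gfit <- survfit(Surv(time, status == 0) ~ 1)     # KM of the censoring time
  G <- stepfun(Gfit$time, c(1, Gfit$surv))         # right-continuous G(t)
  w <- ifelse(time <= t0 & status > 0, 1 / G(time),
              ifelse(time > t0, 1 / G(t0), 0))     # censored early: weight 0
  y <- as.numeric(time <= t0 & status == 1)        # cause-1 event by t0?
  mean(w * (y - pred)^2)
}
```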

5.
Principal surrogate endpoints are useful as targets for phase I and II trials. In many recent trials, multiple post‐randomization biomarkers are measured. However, few statistical methods exist for comparison of or combination of biomarkers as principal surrogates, and none of these methods to our knowledge utilize time‐to‐event clinical endpoint information. We propose a Weibull model extension of the semi‐parametric estimated maximum likelihood method that allows for the inclusion of multiple biomarkers in the same risk model as multivariate candidate principal surrogates. We propose several methods for comparing candidate principal surrogates and evaluating multivariate principal surrogates. These include the time‐dependent and surrogate‐dependent true and false positive fraction, the time‐dependent and the integrated standardized total gain, and the cumulative distribution function of the risk difference. We illustrate the operating characteristics of our proposed methods in simulations and outline how these statistics can be used to evaluate and compare candidate principal surrogates. We use these methods to investigate candidate surrogates in the Diabetes Control and Complications Trial. Copyright © 2014 John Wiley & Sons, Ltd.

6.
We consider the use of the assurance method in clinical trial planning. The assurance method, an alternative to a power calculation, computes the probability of a clinical trial resulting in a successful outcome by eliciting a prior probability distribution for the relevant treatment effect. This is typically a hybrid Bayesian‐frequentist procedure: it is usually assumed that the trial data will be analysed using a frequentist hypothesis test, so the prior distribution is used only to calculate the probability of observing the desired outcome in that test. We argue that assessing the probability of a successful clinical trial is a useful part of the trial planning process. We develop assurance methods to accommodate survival outcome measures, assuming both parametric and nonparametric models, and we develop prior elicitation procedures for each survival model so that the assurance calculations can be performed more easily and reliably. We have made free software available for implementing our methods. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.
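A minimal assurance sketch under an exponential survival model and a normal prior on the log hazard ratio (prior parameters are illustrative placeholders, not elicited values; the authors' software covers richer parametric and nonparametric cases):

```r
## Assurance = prior-averaged probability that the log-rank test rejects.
library(survival)
set.seed(4)
assurance <- function(n = 150, lambda0 = 0.08, cens = 24,
                      prior_mean = log(0.75), prior_sd = 0.25,
                      nsim = 2000, alpha = 0.05) {
  mean(replicate(nsim, {
    loghr <- rnorm(1, prior_mean, prior_sd)   # one draw from the elicited prior
    arm <- rep(0:1, each = n)
    t   <- rexp(2 * n, lambda0 * exp(loghr * arm))
    tt  <- pmin(t, cens)                      # administrative censoring
    pchisq(survdiff(Surv(tt, t <= cens) ~ arm)$chisq, 1,
           lower.tail = FALSE) < alpha
  }))
}
assurance()  # typically lower than the power computed at the prior mean alone
```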

7.
Sequentially administered, laboratory‐based diagnostic tests or self‐reported questionnaires are often used to determine the occurrence of a silent event. In this paper, we consider issues relevant to the design of studies aimed at estimating the association of one or more covariates with a non‐recurring, time‐to‐event outcome that is observed using a repeatedly administered, error‐prone diagnostic procedure. The problem is motivated by the Women's Health Initiative, in which diabetes incidence among approximately 160,000 women is obtained from annually collected self‐reported data. For settings of imperfect diagnostic tests or self‐reports with known sensitivity and specificity, we evaluate the effects of various factors on the resulting power and sample size calculations and compare the relative efficiency of different study designs. The methods illustrated in this paper are readily implemented using our freely available R package icensmis, which is available on the Comprehensive R Archive Network. An important special case arises when the diagnostic procedures are perfect, which yields interval‐censored time‐to‐event outcomes; the proposed methods are thus also applicable to the design of studies in which a time‐to‐event outcome is interval censored. Copyright © 2016 John Wiley & Sons, Ltd.

8.
Dynamic prediction uses longitudinal biomarkers for real‐time prediction of an individual patient's prognosis, which is critical for patients with an incurable disease such as cancer. Biomarker trajectories are usually neither linear nor monotone and vary greatly across individuals, so they are difficult to fit with parametric models. With this in mind, we propose an approach for dynamic prediction that does not need to model the biomarker trajectories; instead, as a trade‐off, we assume that the biomarker effects on the risk of disease recurrence are smooth functions over time. This approach turns out to be computationally easier. Simulation studies show that the proposed approach achieves stable estimation of biomarker effects over time, has good predictive performance, and is robust against model misspecification. It is a good compromise between the two major existing approaches, namely, (i) joint modeling of longitudinal and survival data and (ii) landmark analysis. The proposed method is applied to patients with chronic myeloid leukemia: at any time following treatment with tyrosine kinase inhibitors, longitudinally measured BCR‐ABL gene expression levels are used to predict the risk of disease progression. Copyright © 2016 John Wiley & Sons, Ltd.
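For orientation, here is a hedged sketch of the landmark comparator mentioned above (not the proposed method): at landmark time s, carry forward each at-risk patient's last biomarker value and fit a Cox model on the residual time scale. The column names (id, obstime, biomarker, time, status) are hypothetical, and `long` is assumed sorted by observation time within id:

```r
## Landmarking at time s with last-value-carried-forward biomarker.
library(survival)
landmark_pred <- function(long, surv, s, horizon) {
  lb <- subset(long, obstime <= s)                     # history up to s
  last <- aggregate(biomarker ~ id, data = lb,
                    FUN = function(x) tail(x, 1))      # last value per patient
  d <- merge(subset(surv, time > s), last, by = "id")  # risk set at s
  d$rtime <- d$time - s                                # residual time scale
  fit <- coxph(Surv(rtime, status) ~ biomarker, data = d)
  sf  <- survfit(fit, newdata = d)                     # per-patient curves
  summary(sf, times = horizon)$surv  # P(T > s + horizon | at risk at s)
}
```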

9.
Cancer studies frequently yield multiple event times that correspond to landmarks in disease progression, including non‐terminal events (i.e., cancer recurrence) and an informative terminal event (i.e., cancer‐related death). Hence, we often observe semi‐competing risks data. Work on such data has focused on scenarios in which the cause of the terminal event is known. However, in some circumstances, the information on cause for patients who experience the terminal event is missing; consequently, we are not able to differentiate an informative terminal event from a non‐informative terminal event. In this article, we propose a method to handle missing data regarding the cause of an informative terminal event when analyzing the semi‐competing risks data. We first consider the nonparametric estimation of the survival function for the terminal event time given missing cause‐of‐failure data via the expectation–maximization algorithm. We then develop an estimation method for semi‐competing risks data with missing cause of the terminal event, under a pre‐specified semiparametric copula model. We conduct simulation studies to investigate the performance of the proposed method. We illustrate our methodology using data from a study of early‐stage breast cancer. Copyright © 2016 John Wiley & Sons, Ltd.

10.
Our aim is to develop a rich and coherent framework for modeling correlated time‐to‐event data, including (1) survival regression models with different links and (2) flexible modeling of time‐dependent and nonlinear effects with rich post‐estimation. We extend the class of generalized survival models, which expresses a transformed survival function in terms of a linear predictor, by incorporating a shared frailty or random effects for correlated survival data. The proposed approach can include parametric or penalized smooth functions for time, time‐dependent effects, nonlinear effects, and their interactions. The maximum (penalized) marginal likelihood method is used to estimate the regression coefficients and the variance of the frailty or random effects. The optimal smoothing parameters for penalized marginal likelihood estimation can be selected automatically by a likelihood‐based cross‐validation criterion. For models with normal random effects, Gauss‐Hermite quadrature can be used to obtain the cluster‐level marginal likelihoods, and the Akaike Information Criterion can be used to compare models and select the link function. We have implemented these methods in the R package rstpm2. In simulations with both small and large clusters, this approach performs well. Through two applications, we demonstrate (1) a comparison of proportional hazards and proportional odds models with random effects for clustered survival data and (2) the estimation of time‐varying effects on the log‐time scale, age‐varying effects for a specific treatment, and two‐dimensional splines for time and age.
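A minimal usage sketch of the authors' rstpm2 package, fitting a spline-based generalized survival model to the brcancer data shipped with the package (the frailty/random-effects extensions described above take additional arguments; consult the package documentation):

```r
## Spline-based generalized survival model (proportional hazards link by
## default); df controls the baseline spline. AIC(fit) can compare links.
library(rstpm2)
data("brcancer")
fit <- stpm2(Surv(rectime, censrec == 1) ~ hormon, data = brcancer, df = 4)
summary(fit)
plot(fit, newdata = data.frame(hormon = 1), type = "hazard")  # smooth hazard
```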

11.
Instrumental variable (IV) analysis has been widely used in economics, epidemiology, and other fields to estimate the causal effects of covariates on outcomes in the presence of unobserved confounders and/or measurement errors in covariates. However, IV methods for time‐to‐event outcomes with censored data remain underdeveloped. This paper proposes a Bayesian approach to IV analysis with a censored time‐to‐event outcome, using a two‐stage linear model. A Markov chain Monte Carlo sampling method is developed for parameter estimation in both normal and non‐normal linear models with elliptically contoured error distributions. The performance of our method is examined in simulation studies: compared with a method that ignores the unobserved confounders and measurement errors, it largely reduces bias and greatly improves the coverage probability of the estimated causal effect. We illustrate our method on the Women's Health Initiative Observational Study and the Atherosclerosis Risk in Communities Study. Copyright © 2014 John Wiley & Sons, Ltd.

12.
This paper provides guidance, for researchers with some mathematical background, on the conduct of time‐to‐event analysis in observational studies based on intensity (hazard) models. We discuss basic concepts such as the time axis, event definition, and censoring, and introduce hazard models, with special emphasis on the Cox proportional hazards regression model. We provide checklists that may be useful both when fitting the model and assessing its goodness of fit and when interpreting the results. Special attention is paid to avoiding immortal time bias by introducing time‐dependent covariates. We discuss prediction based on hazard models and the difficulties of drawing proper causal conclusions from such models. Finally, we present a series of examples in which the methods and checklists are applied. Computational details and implementation using the freely available R software are documented in the Supplementary Material. The paper was prepared as part of the STRATOS initiative.
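One of the checklist points, avoiding immortal time bias, can be illustrated with a short hedged sketch: code an exposure that begins during follow-up as a time-dependent covariate in (start, stop] counting-process form rather than as a fixed baseline covariate. The toy data are invented:

```r
## Time-dependent exposure via survival::tmerge to avoid immortal time bias.
library(survival)
# One row per subject; trt_start is the time treatment began (NA if never).
base <- data.frame(id = 1:4, futime = c(10, 8, 12, 6),
                   status = c(1, 0, 1, 1), trt_start = c(3, NA, 5, NA))
cp <- tmerge(base, base, id = id, death = event(futime, status))
cp <- tmerge(cp, subset(base, !is.na(trt_start)), id = id,
             treated = tdc(trt_start))              # 0 before, 1 after start
coxph(Surv(tstart, tstop, death) ~ treated, data = cp)  # no immortal time
```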

13.
A two-stage model for evaluating both trial-level and patient-level surrogacy of correlated time-to-event endpoints has been introduced, using patient-level data when multiple clinical trials are available. However, the associated maximum likelihood approach often suffers from numerical problems when different baseline hazards among trials and imperfect estimation of treatment effects are assumed. To address this issue, we propose performing the second-stage, trial-level evaluation of potential surrogates within a Bayesian framework, where we may naturally borrow information across trials while maintaining these realistic assumptions. Posterior distributions of the surrogacy measures of interest may then be used to compare measures or to make decisions regarding the candidacy of a specific endpoint. We perform a simulation study to investigate differences in estimation performance between traditional maximum likelihood and the new Bayesian representations of common meta-analytic surrogacy measures, assessing sensitivity to data characteristics such as the number of trials, trial size, and amount of censoring. Furthermore, we present both frequentist and Bayesian trial-level evaluations of time to recurrence as a surrogate for overall survival in two meta-analyses of adjuvant therapy trials in colon cancer. Based on these results, we recommend the Bayesian evaluation as an attractive and numerically stable alternative for the multitrial assessment of potential surrogate endpoints.

14.
Joint models for longitudinal and time‐to‐event data are particularly relevant to many clinical studies where longitudinal biomarkers could be highly associated with a time‐to‐event outcome. A cutting‐edge research direction in this area is dynamic predictions of patient prognosis (e.g., survival probabilities) given all available biomarker information, recently boosted by the stratified/personalized medicine initiative. As these dynamic predictions are individualized, flexible models are desirable in order to appropriately characterize each individual longitudinal trajectory. In this paper, we propose a new joint model using individual‐level penalized splines (P‐splines) to flexibly characterize the coevolution of the longitudinal and time‐to‐event processes. An important feature of our approach is that dynamic predictions of the survival probabilities are straightforward as the posterior distribution of the random P‐spline coefficients given the observed data is a multivariate skew‐normal distribution. The proposed methods are illustrated with data from the HIV Epidemiology Research Study. Our simulation results demonstrate that our model has better dynamic prediction performance than other existing approaches. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

15.
Meta‐analysis of time‐to‐event outcomes using the hazard ratio as a treatment effect measure has an underlying assumption that hazards are proportional. The between‐arm difference in the restricted mean survival time is a measure that avoids this assumption and allows the treatment effect to vary with time. We describe and evaluate meta‐analysis based on the restricted mean survival time for dealing with non‐proportional hazards and present a diagnostic method for the overall proportional hazards assumption. The methods are illustrated with the application to two individual participant meta‐analyses in cancer. The examples were chosen because they differ in disease severity and the patterns of follow‐up, in order to understand the potential impacts on the hazards and the overall effect estimates. We further investigate the estimation methods for restricted mean survival time by a simulation study. Copyright © 2015 John Wiley & Sons, Ltd.
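A hedged sketch of the basic computation (the data layout and trial list are hypothetical): estimate the restricted mean survival time to a common horizon tau per arm from the Kaplan-Meier curve, difference the arms within each trial, and pool with fixed-effect inverse-variance weights:

```r
## Per-trial RMST difference and fixed-effect pooling (no random effects).
library(survival)
rmst_diff <- function(time, status, arm, tau) {
  tab <- summary(survfit(Surv(time, status) ~ arm), rmean = tau)$table
  rm_col <- grep("rmean$", colnames(tab))[1]          # restricted mean
  se_col <- grep("se\\(rmean\\)", colnames(tab))[1]   # its standard error
  c(diff = tab[2, rm_col] - tab[1, rm_col],
    se   = sqrt(tab[1, se_col]^2 + tab[2, se_col]^2))
}
pool_fixed <- function(est) {   # est: 2 x K matrix with rows "diff" and "se"
  w <- 1 / est["se", ]^2
  c(pooled = sum(w * est["diff", ]) / sum(w), se = sqrt(1 / sum(w)))
}
## e.g. est <- sapply(trials, function(d) rmst_diff(d$time, d$status, d$arm, 24))
##      pool_fixed(est)   # 'trials' is a hypothetical list of trial data frames
```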

16.
When conducting a meta‐analysis of studies with bivariate binary outcomes, challenges arise when the within‐study correlation and between‐study heterogeneity should be taken into account. In this paper, we propose a marginal beta‐binomial model for the meta‐analysis of studies with binary outcomes. This model is based on the composite likelihood approach and has several attractive features compared with existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta‐binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities on the original scale, requiring no transformation of probabilities or link function, having a closed‐form likelihood, and placing no constraints on the correlation parameter. More importantly, because the marginal beta‐binomial model is based only on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of the bivariate study‐specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta‐binomial model with the bivariate generalized linear mixed model and the Sarmanov beta‐binomial model in simulation studies. Interestingly, the results show that the marginal beta‐binomial model performs better than the Sarmanov beta‐binomial model, whether or not the true model is Sarmanov beta‐binomial, and the marginal beta‐binomial model is more robust than the bivariate generalized linear mixed model under model misspecification. Two meta‐analyses of diagnostic accuracy studies and a meta‐analysis of case–control studies are conducted for illustration. Copyright © 2015 John Wiley & Sons, Ltd.
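A minimal sketch of the marginal beta-binomial idea under a working independence composite likelihood (mean/overdispersion parameterization; the between-outcome correlation component of the full model is omitted, and the count names are hypothetical):

```r
## Marginal beta-binomial pieces combined in a working composite likelihood.
dbetabinom <- function(y, n, mu, rho) {           # log beta-binomial pmf
  a <- mu * (1 - rho) / rho
  b <- (1 - mu) * (1 - rho) / rho
  lchoose(n, y) + lbeta(y + a, n - y + b) - lbeta(a, b)
}
negcl <- function(par, y1, n1, y2, n2) {          # e.g. y1/n1 = TP/diseased,
  mu  <- plogis(par[1:2])                         #      y2/n2 = TN/non-diseased
  rho <- plogis(par[3:4])                         # overdispersion in (0, 1)
  -sum(dbetabinom(y1, n1, mu[1], rho[1])) -
    sum(dbetabinom(y2, n2, mu[2], rho[2]))
}
## e.g. fit <- optim(c(0, 0, -2, -2), negcl, y1 = tp, n1 = tp + fn,
##                   y2 = tn, n2 = tn + fp)   # tp/fn/tn/fp: per-study counts
```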

17.
For time‐to‐event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and on methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact of, or addressing, errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression‐free survival or time to AIDS progression) can be difficult to assess or reliant on self‐report and is therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into the estimated parameters. We compare the performance of two common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options for addressing measurement error in the response. A formula is presented for the bias induced in the hazard ratio by classical measurement error in the event time under a log‐linear survival model. Detailed numerical studies examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic.
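A hedged sketch of the SIMEX idea adapted to outcome error, assuming classical additive error on the log event time with known standard deviation sigma_u (a simplification, not the paper's exact estimator): add extra noise of increasing variance, refit the Cox model, and extrapolate back to zero total error:

```r
## SIMEX for outcome error: beta(lambda) from Cox fits on noisier event times,
## extrapolated quadratically back to lambda = -1 (i.e., no measurement error).
library(survival)
set.seed(5)
simex_cox <- function(time, status, x, sigma_u,
                      lambdas = seq(0.5, 2, by = 0.5), B = 50) {
  naive <- coef(coxph(Surv(time, status) ~ x))        # lambda = 0 fit
  bhat <- sapply(lambdas, function(l) {
    mean(replicate(B, {
      tstar <- exp(log(time) + rnorm(length(time), 0, sqrt(l) * sigma_u))
      coef(coxph(Surv(tstar, status) ~ x))            # refit with added noise
    }))
  })
  ext <- lm(b ~ l + I(l^2),
            data = data.frame(b = c(naive, bhat), l = c(0, lambdas)))
  unname(predict(ext, newdata = data.frame(l = -1)))  # bias-corrected log HR
}
```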

18.
X‐chromosome expression undergoes one of three possible biological processes: X‐chromosome inactivation (XCI), escape from X‐chromosome inactivation (XCI‐E), and skewed X‐chromosome inactivation (XCI‐S). Although these expressions are captured by various predesigned genetic variation chip platforms, the X‐chromosome has generally been excluded from the majority of genome‐wide association study analyses, most likely owing to the lack of a standardized method for handling X‐chromosomal genotype data. To analyze X‐linked genetic associations for time‐to‐event outcomes when the actual process is unknown, we propose a unified approach that maximizes the partial likelihood over all of the potential biological processes. The proposed method can be used to infer the true biological process and to derive unbiased estimates of the genetic association parameters. A partial likelihood ratio test statistic, proved to be asymptotically chi‐square distributed, can be used to assess the X‐chromosome genetic association. Furthermore, if the X‐chromosome expression pertains to the XCI‐S process, we can infer the correct skewed direction and magnitude of inactivation, which can elucidate significant findings regarding the genetic mechanism. A population‐level model and a more general subject‐level model are developed to model the XCI‐S process. The finite sample performance of this novel method is examined via extensive simulation studies, and an application is illustrated on a cancer genetic study with a survival outcome.
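A minimal sketch of the unified idea, with hypothetical variable names: recode heterozygous females' genotype as gamma in [0, 2] (gamma = 1 corresponding to random XCI, values near 0 or 2 to skewed XCI) and profile the Cox partial likelihood over gamma:

```r
## Profile the partial likelihood over the heterozygote coding gamma.
library(survival)
profile_xci <- function(time, status, geno, female,
                        gammas = seq(0, 2, by = 0.05)) {
  # geno: 0/1/2 copies of the risk allele; female: logical indicator
  ll <- sapply(gammas, function(g) {
    x <- ifelse(female & geno == 1, g, geno)   # recode heterozygous females
    coxph(Surv(time, status) ~ x)$loglik[2]    # maximized partial log-lik
  })
  list(gamma = gammas[which.max(ll)], loglik = max(ll))
}
```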

19.
This paper presents a novel approach to estimation of the cumulative incidence function in the presence of competing risks. The underlying statistical model is specified via a mixture factorization of the joint distribution of the event type and the time to the event. The time to event distributions conditional on the event type are modeled using smooth semi‐nonparametric densities. One strength of this approach is that it can handle arbitrary censoring and truncation while relying on mild parametric assumptions. A stepwise forward algorithm for model estimation and adaptive selection of smooth semi‐nonparametric polynomial degrees is presented, implemented in the statistical software R, evaluated in a sequence of simulation studies, and applied to data from a clinical trial in cryptococcal meningitis. The simulations demonstrate that the proposed method frequently outperforms both parametric and nonparametric alternatives. They also support the use of ‘ad hoc’ asymptotic inference to derive confidence intervals. An extension to regression modeling is also presented, and its potential and challenges are discussed. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
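A toy parametric version of the mixture factorization, substituting Weibull components for the paper's smooth semi-nonparametric densities: P(event of cause k by t) = p_k * F_k(t), fitted by maximum likelihood with right censoring:

```r
## negll: mixture likelihood with p_k = P(cause k), Weibull F_k, censoring.
negll <- function(par, time, cause) {   # cause: 0 = censored, 1 or 2 = event
  p1 <- plogis(par[1])                  # mixture weight for cause 1
  sh <- exp(par[2:3]); sc <- exp(par[4:5])
  f <- cbind(dweibull(time, sh[1], sc[1]), dweibull(time, sh[2], sc[2]))
  S <- cbind(pweibull(time, sh[1], sc[1], lower.tail = FALSE),
             pweibull(time, sh[2], sc[2], lower.tail = FALSE))
  li <- ifelse(cause == 1, p1 * f[, 1],
        ifelse(cause == 2, (1 - p1) * f[, 2],
               p1 * S[, 1] + (1 - p1) * S[, 2]))   # censored: mixture survival
  -sum(log(li))
}
## fit <- optim(rep(0, 5), negll, time = ft, cause = fs)  # ft = follow-up
## times, fs = 0/1/2 status (hypothetical data). CIF_1(t) at the MLE:
## plogis(fit$par[1]) * pweibull(t, exp(fit$par[2]), exp(fit$par[4]))
```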

20.
We consider regulatory clinical trials that require a prespecified method for comparing two treatments for chronic diseases (e.g., chronic obstructive pulmonary disease) in which patients suffer deterioration in a longitudinal process until death occurs. We define a composite endpoint structure that encompasses both the longitudinal data for deterioration and the time‐to‐event data for death, and we use multivariate time‐to‐event methods to assess treatment differences on both data structures simultaneously, without any need for parametric assumptions or modeling. Our method is straightforward to implement, and simulations show that it has robust power in situations in which incomplete data could lead to lower than expected power for either the longitudinal or the survival data. We illustrate the method on data from a study of chronic lung disease. Copyright © 2009 John Wiley & Sons, Ltd.
