Similar articles
14 similar articles found
1.
With censored event time observations, the logrank test is the most popular tool for testing the equality of two underlying survival distributions. Although this test is asymptotically distribution free, it may not be powerful when the proportional hazards assumption is violated. Various other novel testing procedures have been proposed, which generally are derived by assuming a class of specific alternative hypotheses with respect to the hazard functions. The test considered by Pepe and Fleming (1989) is based on a linear combination of weighted differences of the two Kaplan–Meier curves over time and is a natural tool to assess the difference of two survival functions directly. In this article, we take a similar approach but choose weights that are proportional to the observed standardized difference of the estimated survival curves at each time point. The new proposal automatically makes weighting adjustments empirically. The new test statistic is aimed at a one-sided general alternative hypothesis and is distributed with a short right tail under the null hypothesis but with a heavy tail under the alternative. The results from extensive numerical studies demonstrate that the new procedure performs well under various general alternatives, with the caveat of a minor inflation of the type I error rate when the sample size or the number of observed events is small. The survival data from a recent cancer comparative study are utilized for illustrating the implementation of the process. Copyright © 2015 John Wiley & Sons, Ltd.
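A minimal numpy sketch of one plausible reading of this construction (not the authors' exact statistic): estimate both Kaplan–Meier curves with Greenwood variances on a common grid of event times, form the standardized difference at each grid point, and use its positive part as the empirical weight in a weighted sum of survival differences, calibrating the one-sided test by permutation. All data below are simulated placeholders.

```python
import numpy as np

def km_curve(time, event, grid):
    """Kaplan-Meier estimate and Greenwood variance evaluated on `grid`."""
    order = np.argsort(time)
    time, event = np.asarray(time)[order], np.asarray(event)[order]
    s, g = 1.0, 0.0                              # survival and Greenwood sum
    step_t, step_s, step_v = [0.0], [1.0], [0.0]
    for t in np.unique(time[event == 1]):
        n_risk = np.sum(time >= t)               # number at risk just before t
        d = np.sum((time == t) & (event == 1))   # events at t
        s *= 1.0 - d / n_risk
        if n_risk > d:
            g += d / (n_risk * (n_risk - d))
        step_t.append(t); step_s.append(s); step_v.append(s ** 2 * g)
    idx = np.searchsorted(step_t, grid, side="right") - 1
    return np.array(step_s)[idx], np.array(step_v)[idx]

def weighted_km_statistic(t1, e1, t2, e2):
    """Survival differences weighted by their standardized differences; only time
    points where group 1 appears superior contribute (one-sided alternative)."""
    grid = np.unique(np.concatenate([t1[e1 == 1], t2[e2 == 1]]))
    s1, v1 = km_curve(t1, e1, grid)
    s2, v2 = km_curve(t2, e2, grid)
    z = (s1 - s2) / np.sqrt(v1 + v2 + 1e-12)     # standardized differences
    return np.sum(np.maximum(z, 0.0) * (s1 - s2))

# Simulated example; the one-sided null distribution is approximated by permutation.
rng = np.random.default_rng(1)
n = 80
t1, t2 = rng.exponential(1.0, n), rng.exponential(1.4, n)
c1, c2 = rng.exponential(2.0, n), rng.exponential(2.0, n)
x1, d1 = np.minimum(t1, c1), (t1 <= c1).astype(int)
x2, d2 = np.minimum(t2, c2), (t2 <= c2).astype(int)
obs = weighted_km_statistic(x2, d2, x1, d1)      # arm 2 has the longer survival
x, d = np.concatenate([x1, x2]), np.concatenate([d1, d2])
perm = []
for _ in range(500):
    i = rng.permutation(2 * n)
    perm.append(weighted_km_statistic(x[i[:n]], d[i[:n]], x[i[n:]], d[i[n:]]))
print("statistic:", round(obs, 3), " one-sided p:", np.mean(np.array(perm) >= obs))
```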

2.
Meta-analysis of time-to-event outcomes using the hazard ratio as a treatment effect measure has an underlying assumption that hazards are proportional. The between-arm difference in the restricted mean survival time is a measure that avoids this assumption and allows the treatment effect to vary with time. We describe and evaluate meta-analysis based on the restricted mean survival time for dealing with non-proportional hazards and present a diagnostic method for the overall proportional hazards assumption. The methods are illustrated with the application to two individual participant meta-analyses in cancer. The examples were chosen because they differ in disease severity and the patterns of follow-up, in order to understand the potential impacts on the hazards and the overall effect estimates. We further investigate the estimation methods for restricted mean survival time by a simulation study. Copyright © 2015 John Wiley & Sons, Ltd.
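For the pooling step, a short numpy sketch under a fixed-effect inverse-variance model; the per-trial restricted mean survival time is simply the area under each trial's Kaplan–Meier curve up to a common horizon tau, as in the helper below. All trial-level numbers are hypothetical.

```python
import numpy as np

def rmst_from_km(step_times, step_surv, tau):
    """Area under a right-continuous Kaplan-Meier step function up to horizon tau."""
    t = np.concatenate([[0.0], np.asarray(step_times, float)])
    s = np.concatenate([[1.0], np.asarray(step_surv, float)])
    keep = t < tau
    t, s = np.append(t[keep], tau), s[keep]
    return float(np.sum(np.diff(t) * s))

# e.g. a curve dropping to 0.90, 0.70 and 0.55 at 2, 5 and 9 months, horizon 12 months
print(rmst_from_km([2, 5, 9], [0.90, 0.70, 0.55], tau=12))   # 9.15 months

# Hypothetical trial-level RMST differences (months) and standard errors,
# pooled with a fixed-effect inverse-variance model.
diff = np.array([2.1, 3.4, 1.2, 2.8])
se = np.array([0.9, 1.1, 0.7, 1.3])
w = 1.0 / se ** 2
pooled = np.sum(w * diff) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
print(f"pooled difference {pooled:.2f}, 95% CI "
      f"({pooled - 1.96 * pooled_se:.2f}, {pooled + 1.96 * pooled_se:.2f})")
```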

3.
The Peters–Belson (PB) method was developed for quantifying and testing disparities between groups in an outcome by using linear regression to compute group-specific observed and expected outcomes. It has since been extended to generalized linear models for binary and other outcomes and to analyses with probability-based sample weighting. In this work, we extend the PB approach to right-censored survival analysis, including stratification if needed. The extension uses the theory and methods of expected survival on the basis of Cox regression in a reference population. Within the PB framework, among the groups to be compared, one group is chosen as the reference group, and outcomes in that group are modeled as a function of available predictors. By using this fitted model's estimated parameters, and the predictor values for a comparator group, the comparator group's expected outcomes are then calculated and compared, formally with testing and informally with graphics, with their observed outcomes. We derive the extension, show how we applied it in a study of incontinence in nursing home elderly, and discuss issues in implementing it. We used the 'survival' package in the R system to do computations. Copyright © 2013 John Wiley & Sons, Ltd.
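A schematic lifelines sketch of the PB idea for right-censored data (the authors used the R 'survival' package; the file and column names here are placeholders): fit a Cox model in the reference group only, compute each comparator subject's expected survival curve from that model, average the curves to obtain the comparator group's expected survival, and compare it with the group's observed Kaplan–Meier curve.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

# One row per subject; 'group', 'time', 'event', 'age', 'stage' are placeholder names.
df = pd.read_csv("cohort.csv")
covariates = ["age", "stage"]
ref = df[df["group"] == "reference"]
comp = df[df["group"] == "comparator"]

# 1. Model the outcome in the reference group only.
cph = CoxPHFitter()
cph.fit(ref[["time", "event"] + covariates], duration_col="time", event_col="event")

# 2. Expected survival of the comparator group under the reference-group model:
#    predict each comparator subject's curve and average over subjects.
expected = cph.predict_survival_function(comp[covariates]).mean(axis=1)

# 3. Observed survival of the comparator group.
kmf = KaplanMeierFitter().fit(comp["time"], comp["event"])

# Informal comparison of the two curves; the PB framework adds a formal test
# contrasting observed and expected outcomes.
print(expected.head())
print(kmf.survival_function_.head())
```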

4.
The clinical trial design including a test treatment, an active control and a placebo is called the gold standard design. In this paper, we develop a statistical method for planning and evaluating non-inferiority trials with gold standard design for right-censored time-to-event data. We consider both lost to follow-up and administrative censoring. We present a semiparametric approach that only assumes the proportionality of the hazard functions. In particular, we develop an algorithm for calculating the minimal total sample size and its optimal allocation to treatment groups such that a desired power can be attained for a specific parameter constellation under the alternative. For the purpose of sample size calculation, we assume the endpoints to be Weibull distributed. By means of simulations, we investigate the actual type I error rate, power and the accuracy of the calculated sample sizes. Finally, we compare our procedure with a previously proposed procedure assuming exponentially distributed event times. To illustrate our method, we consider a double-blinded, randomized, active and placebo controlled trial in major depression. Copyright © 2013 John Wiley & Sons, Ltd.
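Not the authors' algorithm, but a brute-force Monte Carlo sketch of the same planning task under simplifying assumptions: Weibull event times, a single administrative censoring time, and, for brevity, only the test versus active-control non-inferiority comparison (the gold standard design adds a placebo arm for assay sensitivity). The margin, allocation, and Weibull parameters are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy.stats import norm
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)

def power_noninferiority(n_test, n_ctrl, shape, scale_test, scale_ctrl,
                         cens_time, margin, alpha=0.025, n_sim=500):
    """Empirical power of a Cox-based non-inferiority comparison (test vs control):
    reject inferiority if the upper (1 - alpha) bound of the log hazard ratio
    lies below log(margin)."""
    z = norm.ppf(1 - alpha)
    rejections = 0
    for _ in range(n_sim):
        t = np.concatenate([rng.weibull(shape, n_test) * scale_test,
                            rng.weibull(shape, n_ctrl) * scale_ctrl])
        arm = np.concatenate([np.ones(n_test), np.zeros(n_ctrl)])
        obs = np.minimum(t, cens_time)                 # administrative censoring
        event = (t <= cens_time).astype(int)
        data = pd.DataFrame({"time": obs, "event": event, "test": arm})
        fit = CoxPHFitter().fit(data, duration_col="time", event_col="event")
        upper = fit.params_["test"] + z * fit.standard_errors_["test"]
        rejections += upper < np.log(margin)
    return rejections / n_sim

# Hypothetical scenario: equal hazards under the alternative, margin HR = 1.3.
print(power_noninferiority(n_test=200, n_ctrl=200, shape=1.2,
                           scale_test=24.0, scale_ctrl=24.0,
                           cens_time=36.0, margin=1.3, n_sim=200))
```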

5.
Phase II clinical trials are often conducted to determine whether a new treatment is sufficiently promising to warrant a major controlled clinical evaluation against a standard therapy. We consider single-arm phase II clinical trials with right censored survival time responses where the ordinary one-sample logrank test is commonly used for testing the treatment efficacy. For planning such clinical trials, this paper presents two-stage designs that are optimal in the sense that the expected sample size is minimized if the new regimen has low efficacy subject to constraints of the type I and type II errors. Two-stage designs, which minimize the maximal sample size, are also determined. Optimal and minimax designs for a range of design parameters are tabulated along with examples. Copyright © 2013 John Wiley & Sons, Ltd.

6.
In randomised controlled trials, the effect of treatment on those who comply with allocation to active treatment can be estimated by comparing their outcome to those in the comparison group who would have complied with active treatment had they been allocated to it. We compare three estimators of the causal effect of treatment on compliers when this is a parameter in a proportional hazards model and quantify the bias due to omitting baseline prognostic factors. Causal estimates are found directly by maximising a novel partial likelihood; based on a structural proportional hazards model; and based on a 'corrected dataset' derived after fitting a rank-preserving structural failure time model. Where necessary, we extend these methods to incorporate baseline covariates. Comparisons use simulated data and a real data example. Analysing the simulated data, we found that all three methods are accurate when an important covariate was included in the proportional hazards model (maximum bias 5.4%). However, failure to adjust for this prognostic factor meant that causal treatment effects were underestimated (maximum bias 11.4%), because estimators were based on a misspecified marginal proportional hazards model. Analysing the real data example, we found that adjusting causal estimators is important to correct for residual imbalances in prognostic factors present between trial arms after randomisation. Our results show that methods of estimating causal treatment effects for time-to-event outcomes should be extended to incorporate covariates, thus providing an informative complement to the corresponding intention-to-treat analysis. Copyright © 2012 John Wiley & Sons, Ltd.

7.
Conventional phase II trials using binary endpoints as early indicators of a time-to-event outcome are not always feasible. Uveal melanoma has no reliable intermediate marker of efficacy. In pancreatic cancer and viral clearance, the time to the event of interest is short, making an early indicator unnecessary. In the latter application, Weibull models have been used to analyse corresponding time-to-event data. Bayesian sample size calculations are presented for single-arm and randomised phase II trials assuming proportional hazards models for time-to-event endpoints. Special consideration is given to the case where survival times follow the Weibull distribution. The proposed methods are demonstrated through an illustrative trial based on uveal melanoma patient data. A procedure for prior specification based on knowledge or predictions of survival patterns is described. This enables investigation into the choice of allocation ratio in the randomised setting to assess whether a control arm is indeed required. The Bayesian framework enables sample sizes consistent with those used in practice to be obtained. When a confirmatory phase III trial will follow if suitable evidence of efficacy is identified, Bayesian approaches are less controversial than for definitive trials. In the randomised setting, a compromise for obtaining feasible sample sizes is a loss in certainty in the specified hypotheses: the Bayesian counterpart of power. However, this approach may still be preferable to running a single-arm trial where no data are collected on the control treatment. This dilemma is present in most phase II trials, where resources are not sufficient to conduct a definitive trial. Copyright © 2015 John Wiley & Sons, Ltd.
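A self-contained sketch of the Bayesian sample-size logic in the exponential special case (a Weibull with shape one), where the posterior for the hazard rate is conjugate and no sampling is needed; the Weibull case discussed above would replace the Gamma update with a posterior evaluated numerically or by MCMC. The prior, rates, censoring time, and decision threshold are all hypothetical.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(11)

def prob_trial_succeeds(n, true_rate, cens_time, target_rate,
                        prior_a=1.0, prior_b=10.0, decision_threshold=0.9,
                        n_sim=2000):
    """Chance that a single-arm trial of size n declares efficacy, i.e. that the
    posterior probability Pr(hazard rate < target_rate | data) exceeds the
    decision threshold. Exponential likelihood with a Gamma(a, b) prior on the rate."""
    successes = 0
    for _ in range(n_sim):
        t = rng.exponential(1.0 / true_rate, n)
        obs = np.minimum(t, cens_time)                # administrative censoring
        events = np.sum(t <= cens_time)
        # Conjugate update: Gamma(a + events, b + total follow-up time)
        post = gamma(a=prior_a + events, scale=1.0 / (prior_b + obs.sum()))
        if post.cdf(target_rate) > decision_threshold:
            successes += 1
    return successes / n_sim

# Hypothetical single-arm calculation: smallest n giving roughly 80% probability
# of success when the true hazard rate is promising.
for n in (20, 30, 40, 50):
    print(n, prob_trial_succeeds(n, true_rate=0.04, cens_time=24.0,
                                 target_rate=0.06))
```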

8.
Power for time-to-event analyses is usually assessed under continuous-time models. Often, however, times are discrete or grouped, as when the event is only observed when a procedure is performed. Wallenstein and Wittes (Biometrics, 1993) describe the power of the Mantel–Haenszel test for discrete lifetables under their chained binomial model for specified vectors of event probabilities over intervals of time. Herein, the expressions for these probabilities are derived under a piecewise exponential model allowing for staggered entry and losses to follow-up. Radhakrishna (Biometrics, 1965) showed that the Mantel–Haenszel test is maximally efficient under the alternative of a constant odds ratio and derived the optimal weighted test under other alternatives. Lachin (Biostatistical Methods: The Assessment of Relative Risks, 2011) described the power function of this family of weighted Mantel–Haenszel tests. Prentice and Gloeckler (Biometrics, 1978) described a generalization of the proportional hazards model for grouped time data and the corresponding maximally efficient score test. Their test is also shown to be a weighted Mantel–Haenszel test, and its power function is likewise obtained. There is trivial loss in power under the discrete chained binomial model relative to the continuous-time case provided that there is a modest number of periodic evaluations. Relative to the case of homogeneity of odds ratios, there can be substantial loss in power when there is substantial heterogeneity of odds ratios, especially when heterogeneity occurs early in a study when most subjects are at risk, but little loss in power when there is heterogeneity late in a study. Copyright © 2012 John Wiley & Sons, Ltd.
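A rough numpy/scipy sketch of the kind of calculation involved, with hypothetical rates and no staggered entry: the per-interval probabilities of an event, a loss to follow-up, or remaining at risk follow from competing exponential hazards, and the approximate power of the unweighted Mantel–Haenszel (logrank-type) test is obtained from the expected lifetable counts. This is an illustration in the spirit of the abstract, not the paper's exact expressions.

```python
import numpy as np
from scipy.stats import norm

def interval_probs(event_rate, loss_rate, width):
    """Probabilities of an event, a loss to follow-up, or remaining at risk within
    one interval, treating the event and the loss as competing exponential risks."""
    total = event_rate + loss_rate
    any_exit = 1.0 - np.exp(-total * width)
    return (event_rate / total * any_exit,
            loss_rate / total * any_exit,
            np.exp(-total * width))

def mh_power(n1, n2, rates1, rates2, loss_rate, width, alpha=0.05):
    """Approximate power of the unweighted Mantel-Haenszel test for grouped
    lifetables under piecewise exponential hazards, ignoring staggered entry."""
    num, var = 0.0, 0.0
    r1, r2 = float(n1), float(n2)                       # expected numbers at risk
    for lam1, lam2 in zip(rates1, rates2):
        p1, _, s1 = interval_probs(lam1, loss_rate, width)
        p2, _, s2 = interval_probs(lam2, loss_rate, width)
        d1, d2 = r1 * p1, r2 * p2                       # expected events
        n, d = r1 + r2, d1 + d2
        num += d1 - d * r1 / n                          # observed minus null expectation
        var += r1 * r2 * d * (n - d) / (n ** 2 * (n - 1))
        r1, r2 = r1 * s1, r2 * s2                       # carry the risk sets forward
    return norm.cdf(abs(num) / np.sqrt(var) - norm.ppf(1 - alpha / 2))

# Hypothetical design: six quarterly intervals, a treatment effect confined to the
# first three intervals (heterogeneous odds ratios), loss hazard 0.01 per interval.
control = [0.10] * 6
treated = [0.06, 0.06, 0.06, 0.10, 0.10, 0.10]
print(mh_power(300, 300, treated, control, loss_rate=0.01, width=1.0))
```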

9.
Cluster randomized trials (CRTs) involve the random assignment of intact social units rather than independent subjects to intervention groups. Time-to-event outcomes often are endpoints in CRTs. Analyses of such data need to account for the correlation among cluster members. The intracluster correlation coefficient (ICC) is used to assess the similarity among binary and continuous outcomes that belong to the same cluster. However, estimating the ICC in CRTs with time-to-event outcomes is a challenge because of the presence of censored observations. The literature suggests that the ICC may be estimated using either censoring indicators or observed event times. A simulation study explores the effect of administrative censoring on estimating the ICC. Results show that ICC estimators derived from censoring indicators or observed event times are negatively biased. Analytic work further supports these results. Observed event times are preferred for estimating the ICC when administrative censoring is minimal. To our knowledge, the existing literature provides no practical guidance on the estimation of the ICC when a substantial amount of administrative censoring is present. The results from this study corroborate the need for further methodological research on estimating the ICC for correlated time-to-event outcomes. Copyright © 2016 John Wiley & Sons, Ltd.
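A small simulation sketch in the spirit of the study: generate clustered exponential event times with a shared gamma frailty, impose administrative censoring, and compare one-way ANOVA ICC estimates computed from the fully observed times, the censored observed times, and the censoring indicators. All parameter values are hypothetical; the point is only to exhibit the attenuation described above.

```python
import numpy as np

rng = np.random.default_rng(3)

def anova_icc(values):
    """One-way ANOVA estimator of the ICC for a clusters-by-members matrix."""
    k, m = values.shape
    grand = values.mean()
    msb = m * np.sum((values.mean(axis=1) - grand) ** 2) / (k - 1)
    msw = np.sum((values - values.mean(axis=1, keepdims=True)) ** 2) / (k * (m - 1))
    return (msb - msw) / (msb + (m - 1) * msw)

k, m = 100, 20                                   # clusters and members per cluster
frailty = rng.gamma(shape=2.0, scale=0.5, size=k)          # shared within cluster
times = rng.exponential(1.0 / (0.2 * frailty)[:, None], size=(k, m))
cens_time = 4.0                                  # administrative cutoff
observed = np.minimum(times, cens_time)
indicator = (times <= cens_time).astype(float)

print("ICC, uncensored times :", round(anova_icc(times), 3))
print("ICC, censored times   :", round(anova_icc(observed), 3))
print("ICC, event indicators :", round(anova_icc(indicator), 3))
```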

10.
We consider regulatory clinical trials that require a prespecified method for the comparison of two treatments for chronic diseases (e.g. Chronic Obstructive Pulmonary Disease) in which patients suffer deterioration in a longitudinal process until death occurs. We define a composite endpoint structure that encompasses both the longitudinal data for deterioration and the time-to-event data for death, and use multivariate time-to-event methods to assess treatment differences on both data structures simultaneously, without a need for parametric assumptions or modeling. Our method is straightforward to implement, and simulations show that the method has robust power in situations in which incomplete data could lead to lower than expected power for either the longitudinal or survival data. We illustrate the method on data from a study of chronic lung disease. Copyright © 2009 John Wiley & Sons, Ltd.

11.
Unmeasured confounding remains an important problem in observational studies, including pharmacoepidemiological studies of large administrative databases. Several recently developed methods utilize smaller validation samples, with information on additional confounders, to control for confounders unmeasured in the main, larger database. However, up-to-date applications of these methods to survival analyses seem to be limited to propensity score calibration, which relies on a strong surrogacy assumption. We propose a new method, specifically designed for time-to-event analyses, which uses martingale residuals, in addition to measured covariates, to enhance imputation of the unmeasured confounders in the main database. The method is applicable for analyses with both time-invariant data and time-varying exposure/confounders. In simulations, our method consistently eliminated bias because of unmeasured confounding, regardless of surrogacy violation and other relevant design parameters, and almost always yielded lower mean squared errors than other methods applicable for survival analyses, outperforming propensity score calibration in several scenarios. We apply the method to a real-life pharmacoepidemiological database study of the association between glucocorticoid therapy and risk of type II diabetes mellitus in patients with rheumatoid arthritis, with additional potential confounders available in an external validation sample. Compared with conventional analyses, which adjust only for confounders measured in the main database, our estimates suggest a considerably weaker association. Copyright © 2016 John Wiley & Sons, Ltd.
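A schematic lifelines/numpy sketch of the core idea, with hypothetical file and column names and numerically coded covariates: in the validation sample, fit a working Cox model on the measured covariates, compute each subject's martingale residual, regress the unmeasured confounder on the measured covariates plus that residual, then use the fitted regression to impute the confounder in the main database before refitting the exposure model. Refinements such as time-varying exposures and propagation of imputation uncertainty are omitted.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

measured = ["exposure", "age", "sex"]          # available in both data sources
unmeasured = "bmi"                             # observed only in the validation sample
validation = pd.read_csv("validation.csv")     # columns: time, event, measured vars, bmi
main = pd.read_csv("main.csv")                 # columns: time, event, measured vars

def martingale_residuals(cph, df):
    """delta_i - H0(t_i) * exp(x_i' beta), interpolating the baseline cumulative hazard."""
    h0 = cph.baseline_cumulative_hazard_.iloc[:, 0]
    h0_at_t = np.interp(df["time"], h0.index.values, h0.values)
    risk = cph.predict_partial_hazard(df).to_numpy().ravel()
    return df["event"].to_numpy() - h0_at_t * risk

# 1. Working Cox model on the measured covariates, fitted in the validation sample.
cph_val = CoxPHFitter().fit(validation[["time", "event"] + measured],
                            duration_col="time", event_col="event")
validation["mres"] = martingale_residuals(cph_val, validation)

# 2. Imputation model: unmeasured confounder ~ measured covariates + residual.
X_val = np.column_stack([np.ones(len(validation)),
                         validation[measured], validation["mres"]])
coef, *_ = np.linalg.lstsq(X_val, validation[unmeasured], rcond=None)

# 3. Impute in the main database (using its own working-model residuals) and refit.
cph_main = CoxPHFitter().fit(main[["time", "event"] + measured],
                             duration_col="time", event_col="event")
main["mres"] = martingale_residuals(cph_main, main)
X_main = np.column_stack([np.ones(len(main)), main[measured], main["mres"]])
main[unmeasured] = X_main @ coef
final = CoxPHFitter().fit(main[["time", "event", unmeasured] + measured],
                          duration_col="time", event_col="event")
final.print_summary()
```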

12.
If past treatment assignments are unmasked, selection bias may arise even in randomized controlled trials. The impact of such bias can be measured by considering the type I error probability. In case of a normally distributed outcome, there already exists a model accounting for selection bias that permits calculating the corresponding type I error probabilities. To model selection bias for trials with a time-to-event outcome, we introduce a new biasing policy for exponentially distributed data. Using this biasing policy, we derive an exact formula to compute type I error probabilities whenever an F-test is performed and no observations are censored. Two exemplary settings, with and without random censoring, are considered in order to illustrate how our results can be applied to compare distinct randomization procedures with respect to their performance in the presence of selection bias. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
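The paper's exact biasing policy is not reproduced here; the sketch below simulates a purely hypothetical convergence-strategy variant (the recruiter guesses the arm with fewer allocations so far and enrolls a healthier patient whenever the favoured arm is guessed) to show how the empirical size of the exponential-ratio F-test can be estimated for a given randomization procedure in the uncensored case.

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(9)

def permuted_block_allocation(n, block=4):
    """0/1 assignments generated from permuted blocks of the given size."""
    seq = []
    while len(seq) < n:
        seq.extend(rng.permutation([0] * (block // 2) + [1] * (block // 2)).tolist())
    return np.array(seq[:n])

def type_one_error(n=50, eta=0.5, n_sim=2000, alpha=0.05):
    """Empirical size of the exponential-ratio F-test under a hypothetical
    convergence-strategy biasing policy (no censoring, equal true hazards)."""
    rejections = 0
    for _ in range(n_sim):
        assign = permuted_block_allocation(n)
        times = np.empty(n)
        n0 = n1 = 0
        for i, a in enumerate(assign):
            # Guess the arm with fewer patients so far; if the favoured arm (1)
            # is guessed, a healthier patient (longer expected survival) enters.
            guess = 1 if n1 < n0 else (0 if n0 < n1 else rng.integers(2))
            times[i] = rng.exponential(1.0 + eta if guess == 1 else 1.0 / (1.0 + eta))
            n0 += a == 0
            n1 += a == 1
        t1, t0 = times[assign == 1], times[assign == 0]
        stat = t1.mean() / t0.mean()              # ~ F(2*n1, 2*n0) under H0
        df1, df2 = 2 * len(t1), 2 * len(t0)
        p = 2 * min(f_dist.cdf(stat, df1, df2), f_dist.sf(stat, df1, df2))
        rejections += p < alpha
    return rejections / n_sim

print(type_one_error())      # exceeds the nominal 5% level when eta > 0
```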

13.
In 1996–1997, the AIDS Clinical Trial Group 320 study randomized 1156 HIV-infected U.S. patients to combination antiretroviral therapy (ART) or highly active ART with equal probability. Ninety-six patients incurred AIDS or died, 51 (4 per cent) dropped out, and 290 (=51 + 239, 25 per cent) dropped out or stopped their assigned therapy for reasons other than toxicity during a 52-week follow-up. Such noncompliance likely results in null-biased estimates of intent-to-treat hazard ratios (HR) of AIDS or death comparing highly active ART with combination ART, which were 0.75 (95 per cent confidence limits [CL]: 0.43, 1.31), 0.30 (95 per cent CL: 0.15, 0.60), and 0.51 (95 per cent CL: 0.33, 0.77) for follow-up within 15 weeks, beyond 15 weeks, and overall, respectively. Noncompliance correction using Robins and Finkelstein's (RF) inverse probability-of-censoring weights (where participants are censored at dropout or when first noncompliant) yielded estimated HR of 0.46 (95 per cent CL: 0.25, 0.85), 0.43 (95 per cent CL: 0.19, 0.96), and 0.45 (95 per cent CL: 0.27, 0.74) for follow-up within 15 weeks, beyond 15 weeks, and overall, respectively. Weights were estimated conditional on measured age, sex, race, ethnicity, prior Zidovudine use, randomized arm, baseline and time-varying CD4 cell count, and time-varying HIV-related symptoms. Noncompliance corrected results were 63 and 13 per cent farther from the null value of one than intent-to-treat results within 15 weeks and overall, respectively, and resolve the apparent non-proportionality in intent-to-treat results. Inverse probability-of-censoring weighted methods could help to resolve discrepancies between compliant and noncompliant randomized evidence, as well as between randomized and observational evidence, in a variety of biomedical fields. Copyright © 2009 John Wiley & Sons, Ltd.
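A schematic sketch of the Robins–Finkelstein correction with an assumed person-week data layout (all file and column names hypothetical): follow-up is artificially censored at dropout or first noncompliance, the probability of remaining uncensored is modeled with pooled logistic regression, stabilized inverse-probability-of-censoring weights are formed, and a weighted Cox model of time to AIDS or death on randomized arm is fitted. For brevity, the weight at the end of each subject's follow-up is used with a robust variance; a time-varying Cox fit would carry the weight at every risk set.

```python
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

# pp: hypothetical person-week data, one row per subject-week up to dropout or first
# noncompliance, with columns id, week, event (AIDS/death that week), uncensored
# (0 only in a final week of noncompliance/dropout), arm, age, sex, cd4, symptoms.
pp = pd.read_csv("person_weeks.csv").sort_values(["id", "week"])

base = ["arm", "age", "sex"]
timevar = ["cd4", "symptoms"]

# Pooled logistic models for remaining uncensored: denominator uses baseline and
# time-varying covariates, numerator baseline covariates only (stabilization).
den = LogisticRegression(max_iter=1000).fit(pp[base + timevar + ["week"]], pp["uncensored"])
num = LogisticRegression(max_iter=1000).fit(pp[base + ["week"]], pp["uncensored"])
pp["sw"] = (num.predict_proba(pp[base + ["week"]])[:, 1]
            / den.predict_proba(pp[base + timevar + ["week"]])[:, 1])
pp["sw"] = pp.groupby("id")["sw"].cumprod()          # stabilized cumulative weight

# One record per subject at the end of the artificially censored follow-up, then a
# weighted Cox model of time to AIDS or death on the randomized arm.
last = pp.groupby("id").tail(1).copy()
last["event"] = last["event"] * last["uncensored"]   # no event counted at censoring
fit = CoxPHFitter().fit(last[["week", "event", "arm", "sw"]],
                        duration_col="week", event_col="event",
                        weights_col="sw", robust=True)
fit.print_summary()
```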

14.
Despite an enormous and growing statistical literature, formal procedures for dose-finding are only slowly being implemented in phase I clinical trials. Even in oncology and other life-threatening conditions in which a balance between efficacy and toxicity has to be struck, model-based approaches, such as the Continual Reassessment Method, have not been universally adopted. Two related concerns have limited the adoption of the new methods. One relates to doubts about the appropriateness of models assumed to link the risk of toxicity to dose, and the other is the difficulty of communicating the nature of the process to clinical investigators responsible for early phase studies. In this paper, we adopt a new Bayesian approach involving a simple model assuming only monotonicity in the dose-toxicity relationship. The parameters that define the model have immediate and simple interpretation. The approach can be applied automatically, and we present a simulation investigation of its properties when it is. More importantly, it can be used in a transparent fashion as one element in the expert consideration of what dose to administer to the next patient or group of patients. The procedure serves to summarize the opinions and the data concerning risks of a binary characterization of toxicity, which can then be considered, together with additional and less tidy trial information, by the clinicians responsible for making decisions on the allocation of doses. Graphical displays of these opinions can be used to ease communication with investigators. Copyright © 2010 John Wiley & Sons, Ltd.
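Not the authors' model, but a tiny sketch of a closely related idea: give each dose an independent Beta posterior for its toxicity risk, impose monotonicity by keeping only jointly non-decreasing posterior draws, and report the posterior probability that each dose's risk lies in an acceptable band. Doses, data, priors, and the target band are hypothetical, and the summaries would be one input to the clinicians' dose decision rather than an automatic rule.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical trial so far: patients treated and toxicities seen at each dose.
n_treated = np.array([6, 6, 3, 0])
n_tox = np.array([0, 1, 1, 0])
prior_a, prior_b = 0.5, 1.5                 # weak prior favouring low toxicity

# Draw from independent Beta posteriors, then enforce monotonicity in dose by
# discarding joint draws that are not non-decreasing across doses.
draws = rng.beta(prior_a + n_tox, prior_b + n_treated - n_tox,
                 size=(200_000, len(n_treated)))
monotone = draws[np.all(np.diff(draws, axis=1) >= 0, axis=1)]

target_low, target_high = 0.15, 0.33        # acceptable toxicity band
p_in_band = np.mean((monotone > target_low) & (monotone < target_high), axis=0)
p_too_toxic = np.mean(monotone > target_high, axis=0)

for dose, (a, b) in enumerate(zip(p_in_band, p_too_toxic), start=1):
    print(f"dose {dose}: Pr(risk in band) = {a:.2f}, Pr(too toxic) = {b:.2f}")
```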
