20 related references found.
1.
Correcting for non-compliance in randomized non-inferiority trials with active and placebo control using structural models
The three-arm clinical trial design, which includes a test treatment, an active reference, and a placebo control, is the gold standard for the assessment of non-inferiority. In the presence of non-compliance, one common concern is that an intent-to-treat (ITT) analysis, the standard approach in non-inferiority trials, tends to increase the chance of erroneously concluding non-inferiority, suggesting that the per-protocol (PP) analysis may be preferable for non-inferiority trials despite its inherent bias. The objective of this paper was to develop statistical methodology for dealing with non-compliance in three-arm non-inferiority trials with censored, time-to-event data. Changes in treatment are considered the only form of non-compliance. An approach using a three-arm rank-preserving structural failure time model and G-estimation analysis is presented. Using simulations, the impact of non-compliance on non-inferiority trials was investigated in detail under ITT and PP analyses and the proposed method. Results indicate that the proposed method shows good characteristics, and that neither ITT nor PP analysis can always guarantee the validity of the non-inferiority conclusion. A SAS (Statistical Analysis System) program implementing the proposed test procedure is available from the authors upon request. Copyright © 2014 John Wiley & Sons, Ltd.
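To make the g-estimation idea behind the rank-preserving structural failure time (RPSFT) approach concrete, here is a minimal two-arm Python sketch; the paper's method handles three arms and censoring with recensoring and is implemented in SAS, none of which is reproduced here, and all variable names are illustrative. The counterfactual treatment-free time is U(psi) = T_off + exp(psi)*T_on, and the g-estimate is the psi at which a randomisation-based test finds no between-arm difference in U(psi).

```python
# Minimal sketch of RPSFT-type g-estimation (two-arm case, illustrative only).
# Counterfactual "treatment-free" time: U(psi) = T_off + exp(psi) * T_on.
# The g-estimate of psi is the value at which a randomisation-based test
# (here a log-rank test) finds no difference between arms in U(psi).
# NOTE: recensoring, the three-arm extension, and the paper's SAS implementation
# are omitted; all variable names below are hypothetical.
import numpy as np
from lifelines.statistics import logrank_test

def counterfactual_times(t_on, t_off, psi):
    """Time that would have been observed with no treatment exposure."""
    return t_off + np.exp(psi) * t_on

def g_estimate_psi(t_on, t_off, event, arm, grid=np.linspace(-1.0, 1.0, 201)):
    """Return psi minimising the log-rank evidence of a between-arm difference."""
    best_psi, best_stat = None, np.inf
    for psi in grid:
        u = counterfactual_times(t_on, t_off, psi)
        res = logrank_test(u[arm == 1], u[arm == 0],
                           event_observed_A=event[arm == 1],
                           event_observed_B=event[arm == 0])
        if res.test_statistic < best_stat:
            best_stat, best_psi = res.test_statistic, psi
    return best_psi

# toy data generated under the model with true psi = -0.5 (treatment prolongs survival)
rng = np.random.default_rng(1)
n, true_psi = 400, -0.5
arm = rng.integers(0, 2, n)                  # 1 = randomised to active treatment
u = rng.exponential(1.0, n)                  # latent treatment-free times
frac_on = np.where(arm == 1, 1.0, rng.binomial(1, 0.2, n) * 0.5)  # some controls switch
t_on = frac_on * u * np.exp(-true_psi)       # observed time spent on treatment
t_off = (1 - frac_on) * u                    # observed time spent off treatment
event = np.ones(n, dtype=int)                # no censoring in this toy example
print("g-estimate of psi:", g_estimate_psi(t_on, t_off, event, arm))
```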
2.
In the three-arm ‘gold standard’ non-inferiority design, an experimental treatment, an active reference, and a placebo are compared. This design is becoming increasingly popular, and it is, whenever feasible, recommended for use by regulatory guidelines. We provide a general method to calculate the required sample size for clinical trials performed in this design. As special cases, the situations of continuous, binary, and Poisson distributed outcomes are explored. Taking into account the correlation structure of the involved test statistics, the proposed approach leads to considerable savings in sample size as compared with application of ad hoc methods for all three scale levels. Furthermore, optimal sample size allocation ratios are determined that result in markedly smaller total sample sizes as compared with equal assignment. As optimal allocation makes the active treatment groups larger than the placebo group, implementation of the proposed approach is also desirable from an ethical viewpoint. Copyright © 2012 John Wiley & Sons, Ltd.
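As a rough illustration of the retention-of-effect sample size calculation for the continuous-outcome case, the following Python sketch uses the standard normal approximation for the Wald-type test statistic; it does not reproduce the paper's general framework (binary and Poisson endpoints, the exact correlation structure, or the derivation of optimal allocations), and all inputs in the example are hypothetical.

```python
# Sketch: total sample size for the retention-of-effect test in a three-arm
# (experimental / reference / placebo) 'gold standard' design with a normally
# distributed endpoint, using a standard normal approximation. Allocation is
# given as fractions (cE, cR, cP); theta is the fraction of the reference
# effect to be preserved. This is a simplified illustration, not the paper's
# general method.
import math
from scipy.stats import norm

def three_arm_sample_size(mu_e, mu_r, mu_p, sigma, theta,
                          alloc=(1/3, 1/3, 1/3), alpha=0.025, power=0.8):
    c_e, c_r, c_p = alloc
    effect = mu_e - theta * mu_r - (1 - theta) * mu_p   # >0 under the alternative
    if effect <= 0:
        raise ValueError("alternative must satisfy mu_e > theta*mu_r + (1-theta)*mu_p")
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
    var_factor = 1 / c_e + theta**2 / c_r + (1 - theta)**2 / c_p
    n_total = (sigma * (z_a + z_b))**2 * var_factor / effect**2
    return math.ceil(n_total)

# example: preserve 50% of the reference effect, 2:2:1 allocation (hypothetical values)
print(three_arm_sample_size(mu_e=1.8, mu_r=2.0, mu_p=0.0, sigma=2.5,
                            theta=0.5, alloc=(0.4, 0.4, 0.2)))
```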
3.
Bayesian methods for setting sample sizes and choosing allocation ratios in phase II clinical trials with time-to-event endpoints
Conventional phase II trials using binary endpoints as early indicators of a time-to-event outcome are not always feasible. Uveal melanoma has no reliable intermediate marker of efficacy. In pancreatic cancer and viral clearance, the time to the event of interest is short, making an early indicator unnecessary. In the latter application, Weibull models have been used to analyse the corresponding time-to-event data. Bayesian sample size calculations are presented for single-arm and randomised phase II trials assuming proportional hazards models for time-to-event endpoints. Special consideration is given to the case where survival times follow the Weibull distribution. The proposed methods are demonstrated through an illustrative trial based on uveal melanoma patient data. A procedure for prior specification based on knowledge or predictions of survival patterns is described. This enables investigation into the choice of allocation ratio in the randomised setting to assess whether a control arm is indeed required. The Bayesian framework enables sample sizes consistent with those used in practice to be obtained. When a confirmatory phase III trial will follow if suitable evidence of efficacy is identified, Bayesian approaches are less controversial than for definitive trials. In the randomised setting, a compromise for obtaining feasible sample sizes is a loss in certainty in the specified hypotheses: the Bayesian counterpart of power. However, this approach may still be preferable to running a single-arm trial where no data are collected on the control treatment. This dilemma is present in most phase II trials, where resources are not sufficient to conduct a definitive trial. Copyright © 2015 John Wiley & Sons, Ltd.
4.
John M. Lachin. Statistics in Medicine, 2013, 32(2): 220-229.
Power for time-to-event analyses is usually assessed under continuous-time models. Often, however, times are discrete or grouped, as when the event is only observed when a procedure is performed. Wallenstein and Wittes (Biometrics, 1993) describe the power of the Mantel–Haenszel test for discrete lifetables under their chained binomial model for specified vectors of event probabilities over intervals of time. Herein, the expressions for these probabilities are derived under a piecewise exponential model allowing for staggered entry and losses to follow-up. Radhakrishna (Biometrics, 1965) showed that the Mantel–Haenszel test is maximally efficient under the alternative of a constant odds ratio and derived the optimal weighted test under other alternatives. Lachin (Biostatistical Methods: The Assessment of Relative Risks, 2011) described the power function of this family of weighted Mantel–Haenszel tests. Prentice and Gloeckler (Biometrics, 1978) described a generalization of the proportional hazards model for grouped time data and the corresponding maximally efficient score test. Their test is also shown to be a weighted Mantel–Haenszel test, and its power function is likewise obtained. There is only a trivial loss in power under the discrete chained binomial model relative to the continuous-time case, provided that there is a modest number of periodic evaluations. Relative to the case of homogeneity of odds ratios, there can be a substantial loss in power when there is substantial heterogeneity of odds ratios, especially when the heterogeneity occurs early in a study when most subjects are at risk, but little loss in power when the heterogeneity occurs late in a study. Copyright © 2012 John Wiley & Sons, Ltd.
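A minimal sketch of the central quantity derived in the paper, the per-interval event probabilities under a piecewise exponential model with losses to follow-up, is given below; staggered entry and the full chained-binomial power calculation for the weighted Mantel–Haenszel tests are omitted, and the within-interval competing-risks formula used here is a standard simplification.

```python
# Sketch: conditional event probabilities per interval for a discrete-time
# (grouped) analysis under a piecewise exponential model with losses to
# follow-up. Events and losses are treated as competing within each interval;
# staggered entry and the chained-binomial power calculation are omitted.
import numpy as np

def interval_event_probs(event_hazards, loss_hazards, widths):
    """Per-interval conditional event probabilities and at-risk fractions (hazards > 0)."""
    lam = np.asarray(event_hazards, float)
    eta = np.asarray(loss_hazards, float)
    w = np.asarray(widths, float)
    total = lam + eta
    # P(event in interval | at risk at its start)
    p_event = lam / total * (1 - np.exp(-total * w))
    # fraction of the original cohort still at risk at the start of each interval
    at_risk = np.concatenate(([1.0], np.cumprod(np.exp(-total * w))[:-1]))
    return p_event, at_risk

# four yearly evaluation intervals, decreasing control hazard, 5%/year loss rate,
# hazard ratio 0.7 for the treated arm (all values hypothetical)
widths = np.ones(4)
lam_ctrl = np.array([0.35, 0.30, 0.25, 0.20])
loss = np.full(4, 0.05)
p_ctrl, r_ctrl = interval_event_probs(lam_ctrl, loss, widths)
p_trt, r_trt = interval_event_probs(0.7 * lam_ctrl, loss, widths)
print(np.round(p_ctrl, 3), np.round(p_trt, 3))
```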
5.
Unmeasured confounding remains an important problem in observational studies, including pharmacoepidemiological studies of large administrative databases. Several recently developed methods utilize smaller validation samples, with information on additional confounders, to control for confounders unmeasured in the main, larger database. However, up-to-date applications of these methods to survival analyses seem to be limited to propensity score calibration, which relies on a strong surrogacy assumption. We propose a new method, specifically designed for time-to-event analyses, which uses martingale residuals, in addition to measured covariates, to enhance imputation of the unmeasured confounders in the main database. The method is applicable to analyses with both time-invariant data and time-varying exposure/confounders. In simulations, our method consistently eliminated bias due to unmeasured confounding, regardless of surrogacy violation and other relevant design parameters, and almost always yielded lower mean squared errors than other methods applicable to survival analyses, outperforming propensity score calibration in several scenarios. We apply the method to a real-life pharmacoepidemiological database study of the association between glucocorticoid therapy and the risk of type II diabetes mellitus in patients with rheumatoid arthritis, with additional potential confounders available in an external validation sample. Compared with conventional analyses, which adjust only for confounders measured in the main database, our estimates suggest a considerably weaker association. Copyright © 2016 John Wiley & Sons, Ltd.
6.
Lynn E. Eberly, James S. Hodges, Kay Savik, Olga Gurvich, Donna Z. Bliss, Christine Mueller. Statistics in Medicine, 2013, 32(23): 4006-4020.
The Peters–Belson (PB) method was developed for quantifying and testing disparities between groups in an outcome by using linear regression to compute group-specific observed and expected outcomes. It has since been extended to generalized linear models for binary and other outcomes and to analyses with probability-based sample weighting. In this work, we extend the PB approach to right-censored survival analysis, including stratification if needed. The extension uses the theory and methods of expected survival on the basis of Cox regression in a reference population. Within the PB framework, among the groups to be compared, one group is chosen as the reference group, and outcomes in that group are modeled as a function of available predictors. Using this fitted model's estimated parameters and the predictor values for a comparator group, the comparator group's expected outcomes are then calculated and compared with their observed outcomes, formally with testing and informally with graphics. We derive the extension, show how we applied it in a study of incontinence among nursing home elderly, and discuss issues in implementing it. We used the ‘survival’ package in the R system to do the computations. Copyright © 2013 John Wiley & Sons, Ltd.
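A minimal sketch of the PB idea for right-censored data is shown below using the Python lifelines package (the paper itself uses the R 'survival' package): a Cox model is fitted in the reference group, expected survival curves are computed for comparator subjects from that fit and averaged, and the result is compared with the comparator group's observed Kaplan–Meier curve. The formal test and stratification are omitted, and the column names are hypothetical.

```python
# Sketch of the Peters-Belson idea for right-censored outcomes: model the
# outcome in the reference group, predict expected survival for the comparator
# group from that model, and compare with the comparator group's observed
# Kaplan-Meier curve. Column names ('time', 'event', 'group', covariates) are
# hypothetical; the paper's formal test and stratification are not reproduced.
from lifelines import CoxPHFitter, KaplanMeierFitter

def peters_belson_curves(df, covariates, ref_label, cmp_label):
    ref = df[df["group"] == ref_label]
    cmp_ = df[df["group"] == cmp_label]

    # 1. fit a Cox model in the reference group only
    cph = CoxPHFitter()
    cph.fit(ref[covariates + ["time", "event"]], duration_col="time", event_col="event")

    # 2. expected survival of comparator subjects under the reference-group model,
    #    averaged over subjects (one column per subject in the prediction)
    expected = cph.predict_survival_function(cmp_[covariates]).mean(axis=1)

    # 3. observed survival in the comparator group
    kmf = KaplanMeierFitter()
    kmf.fit(cmp_["time"], event_observed=cmp_["event"], label="observed (comparator)")
    return expected, kmf

# usage (df assumed to hold numeric columns time, event, group, age, sex, comorbidity):
# expected, kmf = peters_belson_curves(df, ["age", "sex", "comorbidity"], "reference", "comparator")
# ax = kmf.plot_survival_function(); expected.plot(ax=ax, label="expected under reference model")
```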
7.
Three-arm trials including an experimental treatment, an active control and a placebo group are frequently preferred for the assessment of non-inferiority. In contrast to two-arm non-inferiority studies, these designs allow a direct proof of efficacy of a new treatment by comparison with placebo. As a further advantage, the test problem for establishing non-inferiority can be formulated in such a way that rejection of the null hypothesis assures that a pre-defined portion of the (unknown) effect the reference shows versus placebo is preserved by the treatment under investigation. We present statistical methods for this study design and the situation of a binary outcome variable. Asymptotic test procedures are given and their actual type I error rates are calculated. Approximate sample size formulae are derived and their accuracy is discussed. Furthermore, the question of optimal allocation of the total sample size is considered. Power properties of the testing strategy including a pre-test for assay sensitivity are presented. The derived methods are illustrated by application to a clinical trial in depression.
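An asymptotic Wald-type version of the retention-of-effect test for a binary endpoint can be sketched as follows; this uses the unrestricted (sample-proportion) variance and omits the paper's restricted estimators, sample size formulae, and assay-sensitivity pre-test, and the example counts are invented.

```python
# Sketch: asymptotic Wald-type retention-of-effect test for a binary endpoint in
# a three-arm (experimental/reference/placebo) non-inferiority trial.
# H0: p_E - p_P <= theta * (p_R - p_P)  vs  H1: p_E - p_P > theta * (p_R - p_P),
# with larger response probabilities being better. Uses the unrestricted
# (sample-proportion) variance, not the paper's full methodology.
import math
from scipy.stats import norm

def retention_of_effect_test(x_e, n_e, x_r, n_r, x_p, n_p, theta=0.5):
    p_e, p_r, p_p = x_e / n_e, x_r / n_r, x_p / n_p
    estimate = p_e - theta * p_r - (1 - theta) * p_p
    var = (p_e * (1 - p_e) / n_e
           + theta**2 * p_r * (1 - p_r) / n_r
           + (1 - theta)**2 * p_p * (1 - p_p) / n_p)
    z = estimate / math.sqrt(var)
    return z, 1 - norm.cdf(z)          # one-sided p-value

# example: responders / sample size per arm (invented numbers)
z, p = retention_of_effect_test(x_e=86, n_e=120, x_r=90, n_r=120, x_p=42, n_p=60, theta=0.5)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")
```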
8.
Meta-analysis of time-to-event outcomes from randomized trials using restricted mean survival time: application to individual participant data
Yinghui Wei, Patrick Royston, Jayne F. Tierney, Mahesh K. B. Parmar. Statistics in Medicine, 2015, 34(21): 2881-2898.
Meta-analysis of time-to-event outcomes using the hazard ratio as a treatment effect measure has an underlying assumption that hazards are proportional. The between-arm difference in the restricted mean survival time is a measure that avoids this assumption and allows the treatment effect to vary with time. We describe and evaluate meta-analysis based on the restricted mean survival time for dealing with non-proportional hazards and present a diagnostic method for the overall proportional hazards assumption. The methods are illustrated by application to two individual participant data meta-analyses in cancer. The examples were chosen because they differ in disease severity and in the patterns of follow-up, in order to understand the potential impacts on the hazards and the overall effect estimates. We further investigate the estimation methods for the restricted mean survival time in a simulation study. Copyright © 2015 John Wiley & Sons, Ltd.
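The core computation, an RMST difference per trial pooled across trials, can be sketched in a few lines of self-contained Python; the bootstrap standard error and fixed-effect pooling below are simplifications and do not reproduce the paper's variance estimators, diagnostics, or handling of individual participant data.

```python
# Sketch: between-arm RMST differences per trial, pooled across trials with
# fixed-effect inverse-variance weights. Bootstrap SEs stand in for analytic
# variance estimators; tau should not exceed the largest observed time.
import numpy as np

def km_rmst(time, event, tau):
    """Restricted mean survival time up to tau from a Kaplan-Meier estimate."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    order = np.argsort(time)
    time, event = time[order], event[order]
    surv, rmst, prev_t = 1.0, 0.0, 0.0
    for i, t in enumerate(time):
        if t > tau:
            break
        rmst += surv * (t - prev_t)            # area of the current step
        prev_t = t
        if event[i] == 1:                      # KM drop at an event time
            surv *= 1 - 1.0 / (len(time) - i)
    return rmst + surv * (tau - prev_t)        # final piece up to tau

def rmst_difference(t1, e1, t0, e0, tau, n_boot=500, seed=0):
    """Between-arm RMST difference with a bootstrap standard error."""
    rng = np.random.default_rng(seed)
    t1, e1, t0, e0 = map(np.asarray, (t1, e1, t0, e0))
    diff = km_rmst(t1, e1, tau) - km_rmst(t0, e0, tau)
    boots = []
    for _ in range(n_boot):
        i1 = rng.integers(0, len(t1), len(t1))
        i0 = rng.integers(0, len(t0), len(t0))
        boots.append(km_rmst(t1[i1], e1[i1], tau) - km_rmst(t0[i0], e0[i0], tau))
    return diff, float(np.std(boots, ddof=1))

def pool_fixed_effect(diffs, ses):
    """Fixed-effect inverse-variance pooling of per-trial RMST differences."""
    w = 1.0 / np.asarray(ses, float) ** 2
    pooled = float(np.sum(w * np.asarray(diffs, float)) / np.sum(w))
    return pooled, float(np.sqrt(1.0 / np.sum(w)))

# per trial: diff_i, se_i = rmst_difference(times_trt, events_trt, times_ctl, events_ctl, tau=24)
# overall:   pool_fixed_effect([diff_1, diff_2, ...], [se_1, se_2, ...])
```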
9.
Yutaka Matsuyama. Statistics in Medicine, 2010, 29(20): 2107-2116.
While intent-to-treat (ITT) analysis is widely accepted for superiority trials, there remains debate about its role in non-inferiority trials. It has often been said that ITT analysis tends to be anti-conservative in demonstrating non-inferiority, suggesting that per-protocol (PP) analysis may be preferable for non-inferiority trials, despite the inherent bias of such analyses. We propose using randomization-based g-estimation analyses that more effectively preserve the integrity of randomization than do the more widely used PP analyses. Simulation studies were conducted to investigate the impacts of different types of treatment changes on the conservatism or anti-conservatism of analyses using the ITT, PP, and g-estimation methods for a time-to-event outcome. The ITT results were anti-conservative for all simulations. Anti-conservatism increased with the percentage of treatment changes and was more pronounced for outcome-dependent treatment changes. PP analysis, in which treatment-switching cases were censored at the time of treatment change, maintained the type I error near the nominal level for independent treatment changes, whereas for outcome-dependent cases, PP analysis was either conservative or anti-conservative depending on the mechanism and extent of treatment changes. G-estimation analysis maintained the type I error near the nominal level even for outcome-dependent treatment changes, although information on unmeasured covariates is not used in the analysis. Thus, randomization-based g-estimation analyses should be used to supplement the more conventional ITT and PP analyses, especially for non-inferiority trials. Copyright © 2010 John Wiley & Sons, Ltd.
10.
A three-arm clinical trial design with an experimental treatment, an active control, and a placebo control, commonly referred to as the gold standard design, enables testing of non-inferiority or superiority of the experimental treatment compared with the active control. In this paper, we propose methods for designing and analyzing three-arm trials with negative binomially distributed endpoints. In particular, we develop a Wald-type test with a restricted maximum-likelihood variance estimator for testing non-inferiority or superiority. For this test, sample size and power formulas as well as optimal sample size allocations will be derived. The performance of the proposed test will be assessed in an extensive simulation study with regard to type I error rate, power, sample size, and sample size allocation. For the purpose of comparison, Wald-type statistics with a sample variance estimator and an unrestricted maximum-likelihood estimator are included in the simulation study. We found that the proposed Wald-type test with a restricted variance estimator performed well across the considered scenarios and is therefore recommended for application in clinical trials. The methods proposed are motivated and illustrated by a recent clinical trial in multiple sclerosis. The R package ThreeArmedTrials, which implements the methods discussed in this paper, is available on CRAN. Copyright © 2015 John Wiley & Sons, Ltd.
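For orientation, the following Python sketch implements a Wald-type retention-of-effect test for count data using sample means and sample variances, which corresponds to the sample-variance comparator examined in the paper rather than the recommended restricted maximum-likelihood variance estimator (for the actual methods, see the ThreeArmedTrials R package); all example values are invented.

```python
# Sketch: Wald-type retention-of-effect test for count endpoints in a three-arm
# trial, using sample means and sample variances. Lower rates are better (e.g.,
# relapse or lesion counts), so non-inferiority corresponds to a sufficiently
# negative z.  H0: lam_E - theta*lam_R - (1-theta)*lam_P >= 0.
import numpy as np
from scipy.stats import norm

def nb_retention_test(y_e, y_r, y_p, theta=0.5):
    y_e, y_r, y_p = map(np.asarray, (y_e, y_r, y_p))
    estimate = y_e.mean() - theta * y_r.mean() - (1 - theta) * y_p.mean()
    var = (y_e.var(ddof=1) / len(y_e)
           + theta**2 * y_r.var(ddof=1) / len(y_r)
           + (1 - theta)**2 * y_p.var(ddof=1) / len(y_p))
    z = estimate / np.sqrt(var)
    return z, norm.cdf(z)              # one-sided p-value (reject for small z)

# example with overdispersed counts (negative binomial draws, invented parameters)
rng = np.random.default_rng(7)
y_e = rng.negative_binomial(n=2, p=2 / (2 + 0.9), size=150)   # mean ~0.9
y_r = rng.negative_binomial(n=2, p=2 / (2 + 0.8), size=150)   # mean ~0.8
y_p = rng.negative_binomial(n=2, p=2 / (2 + 1.6), size=75)    # mean ~1.6
print(nb_retention_test(y_e, y_r, y_p, theta=0.5))
```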
11.
We consider the use of the assurance method in clinical trial planning. In the assurance method, which is an alternative to a power calculation, we calculate the probability of a clinical trial resulting in a successful outcome, via eliciting a prior probability distribution about the relevant treatment effect. This is typically a hybrid Bayesian-frequentist procedure, in that it is usually assumed that the trial data will be analysed using a frequentist hypothesis test, so that the prior distribution is only used to calculate the probability of observing the desired outcome in the frequentist test. We argue that assessing the probability of a successful clinical trial is a useful part of the trial planning process. We develop assurance methods to accommodate survival outcome measures, assuming both parametric and nonparametric models. We also develop prior elicitation procedures for each survival model so that the assurance calculations can be performed more easily and reliably. We have made free software available for implementing our methods. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.
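A Monte Carlo sketch of assurance for a survival endpoint analysed with a one-sided log-rank test is given below: the log hazard ratio is drawn from an elicited normal prior and the frequentist power for each draw is computed with Schoenfeld's approximation. The normal prior, its parameters, and the event count are illustrative assumptions; the paper's parametric and nonparametric survival models and elicitation procedures are not reproduced.

```python
# Monte Carlo sketch of 'assurance' (unconditional probability of a successful
# trial) for a time-to-event endpoint analysed with a one-sided log-rank test.
# The log hazard ratio is drawn from an elicited normal prior; power for each
# draw uses Schoenfeld's approximation (1:1 allocation, fixed number of events).
import numpy as np
from scipy.stats import norm

def assurance_logrank(prior_mean_loghr, prior_sd_loghr, n_events,
                      alpha=0.025, n_draws=100_000, seed=0):
    rng = np.random.default_rng(seed)
    log_hr = rng.normal(prior_mean_loghr, prior_sd_loghr, n_draws)
    # Z ~ N(log_hr * sqrt(d/4), 1); benefit corresponds to HR < 1, i.e. Z < -z_{1-alpha}
    power_given_effect = norm.cdf(-norm.ppf(1 - alpha) - log_hr * np.sqrt(n_events / 4))
    return power_given_effect.mean()

# prior centred on HR = 0.75 with substantial uncertainty; 300 events planned (hypothetical)
print(assurance_logrank(prior_mean_loghr=np.log(0.75), prior_sd_loghr=0.25, n_events=300))
```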
12.
A flexible and coherent test/estimation procedure based on restricted mean survival times for censored time-to-event data in randomized clinical trials
Miki Horiguchi, Angel M. Cronin, Masahiro Takeuchi, Hajime Uno. Statistics in Medicine, 2018, 37(15): 2307-2320.
In randomized clinical trials where time-to-event is the primary outcome, almost routinely, the logrank test is prespecified as the primary test and the hazard ratio is used to quantify the treatment effect. If the ratio of the two hazard functions is not constant, the logrank test is not optimal and the interpretation of the hazard ratio is not obvious. When such a nonproportional hazards case is expected at the design stage, the conventional practice is to prespecify another member of the weighted logrank tests, e.g., the Peto–Prentice–Wilcoxon test. Alternatively, one may specify a robust test as the primary test, which can capture various patterns of difference between the two event time distributions. However, most of those tests do not have companion procedures to quantify the treatment difference, and investigators have fallen back on reporting treatment effect estimates not associated with the primary test. Such incoherence in the “test/estimation” procedure may potentially mislead clinicians and patients who have to balance risk and benefit in treatment decisions. To address this, we propose a flexible and coherent test/estimation procedure based on the restricted mean survival time, where the truncation time τ is selected in a data-dependent manner. The proposed procedure comprises a prespecified test and an estimator of the corresponding robust and interpretable quantitative treatment effect. The utility of the new procedure is demonstrated in numerical studies based on two randomized cancer clinical trials; for the patterns of difference seen in these trials, the test is dramatically more powerful than the logrank and Wilcoxon tests and than the restricted mean survival time-based test with a fixed τ.
13.
A Bayesian approach for instrumental variable analysis with censored time-to-event outcome
Instrumental variable (IV) analysis has been widely used in economics, epidemiology, and other fields to estimate the causal effects of covariates on outcomes, in the presence of unobserved confounders and/or measurement errors in covariates. However, IV methods for time-to-event outcome with censored data remain underdeveloped. This paper proposes a Bayesian approach for IV analysis with censored time-to-event outcome by using a two-stage linear model. A Markov chain Monte Carlo sampling method is developed for parameter estimation for both normal and non-normal linear models with elliptically contoured error distributions. The performance of our method is examined by simulation studies. Our method largely reduces bias and greatly improves coverage probability of the estimated causal effect, compared with the method that ignores the unobserved confounders and measurement errors. We illustrate our method on the Women's Health Initiative Observational Study and the Atherosclerosis Risk in Communities Study. Copyright © 2014 John Wiley & Sons, Ltd.
14.
Estimation in multi-arm two-stage trials with treatment selection and time-to-event endpoint
We consider estimation of treatment effects in two-stage adaptive multi-arm trials with a common control. The best treatment is selected at an interim analysis, and the primary endpoint is modeled via a Cox proportional hazards model. The maximum partial-likelihood estimator of the log hazard ratio of the selected treatment will overestimate the true treatment effect in this case. Several methods for reducing the selection bias have been proposed for normal endpoints, including an iterative method based on the estimated conditional selection biases and a shrinkage approach based on empirical Bayes theory. We adapt these methods to time-to-event data, compare the bias and mean squared error of all methods in an extensive simulation study, and apply the proposed methods to reconstructed data from the FOCUS trial. We find that all methods tend to overcorrect the bias, and only the shrinkage methods can reduce the mean squared error. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
15.
Getachew A. Dagne. Statistics in Medicine, 2017, 36(26): 4214-4229.
In this article, we show how Tobit models can address the problem of identifying characteristics of subjects with left-censored outcomes in the context of developing a method for jointly analyzing time-to-event and longitudinal data. There are methods for handling these types of data separately, but they may not be appropriate when the time to event depends on the longitudinal outcome and a substantial portion of values are reported to be below the limits of detection. An alternative approach is to develop a joint model for the time-to-event outcome and a two-part longitudinal outcome, linking them through random effects. The proposed approach is implemented to assess the association between the risk of decline in the CD4/CD8 ratio and rates of change in viral load, along with discriminating patients who potentially progress to AIDS from those who do not. We develop a fully Bayesian approach for fitting joint two-part Tobit models and illustrate the proposed methods on simulated and real data from an AIDS clinical study.
16.
In active controlled trials without a placebo arm, there are usually two study objectives: to test a superiority hypothesis that the experimental treatment is more effective than the active control therapy, and to test a non-inferiority hypothesis that the experimental treatment is therapeutically no worse than the active control within a defined margin. With a two-stage adaptive design, it is not necessary to fix the sample size at the planning stage of the study, when information on the treatment effect is often insufficient. Instead, the design specifications can be decided and estimated more reliably after the first stage, when interim results are available. We propose using a conditional power approach to determine the sample size and critical values for testing the superiority and non-inferiority hypotheses at the second stage based on the observed result of the first stage. The proposed adaptive procedure preserves the overall type I error rate for both superiority and non-inferiority, and has the flexibility of terminating the study early (for futility or efficacy) or extending it by an appropriate sample size.
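The conditional power calculation at the heart of such a procedure can be sketched as follows for a single one-sided hypothesis, using the standard Brownian-motion (score-statistic) formulation; the paper's joint treatment of the superiority and non-inferiority hypotheses and its control of the overall type I error are not reproduced, and the numbers in the example are hypothetical.

```python
# Sketch: conditional power for a one-sided test given the interim result of a
# two-stage design, and the total information needed to reach a target
# conditional power. theta is the drift per unit information (e.g., treatment
# difference beyond the non-inferiority margin on the natural scale); the score
# statistic satisfies S(I) ~ N(theta * I, I).
import numpy as np
from scipy.stats import norm

def conditional_power(z1, info1, info_max, theta, alpha=0.025):
    """P(final Z exceeds z_{1-alpha} | interim z-statistic z1 at information info1)."""
    s1 = z1 * np.sqrt(info1)                      # interim score statistic
    remaining = info_max - info1
    crit = norm.ppf(1 - alpha) * np.sqrt(info_max)
    return 1 - norm.cdf((crit - s1 - theta * remaining) / np.sqrt(remaining))

def required_information(z1, info1, theta, target_cp=0.9, alpha=0.025):
    """Smallest total information giving at least the target conditional power."""
    for info_max in np.linspace(info1 * 1.01, info1 * 20, 2000):
        if conditional_power(z1, info1, info_max, theta, alpha) >= target_cp:
            return info_max
    return np.inf

# disappointing interim z = 1.1 at half of an originally planned information of 100
print(conditional_power(z1=1.1, info1=50, info_max=100, theta=0.30))
print(required_information(z1=1.1, info1=50, theta=0.30))
```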
17.
In the presence of non-compliance, conventional analysis by intention-to-treat provides an unbiased comparison of treatment policies but typically under-estimates treatment efficacy. With all-or-nothing compliance, efficacy may be specified as the complier-average causal effect (CACE), where compliers are those who receive the intervention if and only if randomised to it. We extend the CACE approach to model longitudinal data with time-dependent non-compliance, focusing on the situation in which those randomised to control may receive treatment and allowing treatment effects to vary arbitrarily over time. Defining compliance type to be the time of surgical intervention if randomised to control, so that compliers are patients who would not have received treatment at all if they had been randomised to control, we construct a causal model for the multivariate outcome conditional on compliance type and randomised arm. This model is applied to the trial of alternative regimens for glue ear treatment evaluating surgical interventions in childhood ear disease, where outcomes are measured over five time points and receipt of surgical intervention in the control arm may occur at any time. We fit the models using Markov chain Monte Carlo methods to obtain estimates of the CACE at successive times after receiving the intervention. In this trial, over half of those randomised to control eventually receive the intervention. We find that surgery is more beneficial than control at 6 months, with a small but non-significant beneficial effect at 12 months. © 2015 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
18.
Composite endpoints combine several events of interest within a single variable. These are often time-to-first-event data, which are analyzed via survival analysis techniques. To demonstrate the significance of an overall clinical benefit, it is sufficient to assess the test problem formulated for the composite. However, the effect observed for the composite does not necessarily reflect the effects for its components. Therefore, it would be desirable for the sample size of clinical trials using composite endpoints to provide enough power not only to detect a clinically relevant superiority for the composite but also to address the components in an adequate way. The single components of a composite endpoint assessed as time-to-first-event define competing risks. We consider multiple test problems based on the cause-specific hazards of competing events to address the problem of analyzing both a composite endpoint and its components. Thereby, we use sequentially rejective test procedures to reduce the power loss to a minimum. We show how to calculate the sample size for the given multiple test problem by using an easily applicable simulation tool in SAS. Our ideas are illustrated by two clinical study examples. Copyright © 2013 John Wiley & Sons, Ltd.
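A Python analogue of such a simulation-based power assessment is sketched below (the paper provides a SAS tool): cause-specific exponential hazards are simulated for two components, the composite and each component are analysed with log-rank tests, and the components are tested only if the composite test rejects, a simple sequentially rejective scheme that stands in for the paper's procedures. All design parameters in the example are invented.

```python
# Sketch: simulated power of a hierarchical test of a composite time-to-first-event
# endpoint and its two components under cause-specific exponential hazards. The
# component tests are carried out only if the composite test rejects, with the
# component-level alpha split by Bonferroni; this is a simplified stand-in for
# the paper's sequentially rejective procedures.
import numpy as np
from lifelines.statistics import logrank_test

def simulate_arm(n, haz1, haz2, censor_time, rng):
    """First-event time and event type (0 = censored, 1/2 = component) per subject."""
    t1 = rng.exponential(1 / haz1, n)
    t2 = rng.exponential(1 / haz2, n)
    t = np.minimum(np.minimum(t1, t2), censor_time)
    cause = np.where(t >= censor_time, 0, np.where(t1 <= t2, 1, 2))
    return t, cause

def hierarchical_power(n_per_arm=300, hr=(0.7, 0.8), ctrl_haz=(0.10, 0.06),
                       censor_time=5.0, alpha=0.05, n_sim=500, seed=42):
    rng = np.random.default_rng(seed)
    rejections = np.zeros(3)            # composite, component 1, component 2
    for _ in range(n_sim):
        t0, c0 = simulate_arm(n_per_arm, *ctrl_haz, censor_time, rng)
        t1, c1 = simulate_arm(n_per_arm, ctrl_haz[0] * hr[0], ctrl_haz[1] * hr[1],
                              censor_time, rng)
        # composite endpoint: any event counts
        if logrank_test(t1, t0, event_observed_A=(c1 > 0),
                        event_observed_B=(c0 > 0)).p_value < alpha:
            rejections[0] += 1
            # components: the competing event is treated as censoring (cause-specific)
            for k in (1, 2):
                p = logrank_test(t1, t0, event_observed_A=(c1 == k),
                                 event_observed_B=(c0 == k)).p_value
                rejections[k] += p < alpha / 2
    return rejections / n_sim

print(hierarchical_power())
```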
19.
In recent years there have been numerous publications on the design and analysis of three-arm ‘gold standard’ noninferiority trials. Whenever feasible, regulatory authorities recommend the use of such three-arm designs including a test treatment, an active control, and a placebo. Nevertheless, it is desirable in many respects, for example for ethical reasons, to keep the placebo group size as small as possible. We first give a short overview of the fixed sample size design of a three-arm noninferiority trial with normally distributed outcomes and a fixed noninferiority margin. An optimal single-stage design is derived that should serve as a benchmark for the group sequential designs proposed in the main part of this work. It turns out that the number of patients allocated to placebo is substantially reduced for the optimal design. Subsequently, approaches for group sequential designs aiming to further reduce the expected sample sizes are presented. By choosing different rejection boundaries for the respective null hypotheses, we obtain designs with quite different operating characteristics. We illustrate the approaches via numerical calculations and a comparison with the optimal single-stage design. Furthermore, we derive approximately optimal boundaries for different goals, for example, reducing the overall average sample size. The results show that implementing a group sequential design further improves on the optimal single-stage design. Besides cost and time savings, the possible early termination of the placebo arm is a key advantage that could help to overcome ethical concerns. Copyright © 2013 John Wiley & Sons, Ltd.
20.
In randomised controlled trials, the effect of treatment on those who comply with allocation to active treatment can be estimated by comparing their outcome with that of those in the comparison group who would have complied with active treatment had they been allocated to it. We compare three estimators of the causal effect of treatment on compliers when this is a parameter in a proportional hazards model and quantify the bias due to omitting baseline prognostic factors. Causal estimates are obtained directly by maximising a novel partial likelihood; from a structural proportional hazards model; and from a ‘corrected dataset’ derived after fitting a rank-preserving structural failure time model. Where necessary, we extend these methods to incorporate baseline covariates. Comparisons use simulated data and a real data example. Analysing the simulated data, we found that all three methods are accurate when an important covariate was included in the proportional hazards model (maximum bias 5.4%). However, failure to adjust for this prognostic factor meant that causal treatment effects were underestimated (maximum bias 11.4%), because the estimators were based on a misspecified marginal proportional hazards model. Analysing the real data example, we found that adjusting the causal estimators is important to correct for residual imbalances in prognostic factors between trial arms after randomisation. Our results show that methods for estimating causal treatment effects for time-to-event outcomes should be extended to incorporate covariates, thus providing an informative complement to the corresponding intention-to-treat analysis. Copyright © 2012 John Wiley & Sons, Ltd.