Similar Documents
20 similar documents found (search time: 93 ms)
1.
In the three‐arm ‘gold standard’ non‐inferiority design, an experimental treatment, an active reference, and a placebo are compared. This design is becoming increasingly popular, and it is, whenever feasible, recommended for use by regulatory guidelines. We provide a general method to calculate the required sample size for clinical trials performed in this design. As special cases, the situations of continuous, binary, and Poisson distributed outcomes are explored. Taking into account the correlation structure of the involved test statistics, the proposed approach leads to considerable savings in sample size as compared with application of ad hoc methods for all three scale levels. Furthermore, optimal sample size allocation ratios are determined that result in markedly smaller total sample sizes as compared with equal assignment. As optimal allocation makes the active treatment groups larger than the placebo group, implementation of the proposed approach is also desirable from an ethical viewpoint. Copyright © 2012 John Wiley & Sons, Ltd.
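For context on the scale of such savings, the standard per‐comparison ("ad hoc") sample size formula for a single non‐inferiority comparison with normal outcomes and 1:1 allocation can be sketched as below. The margin δ = 0.2, σ = 1, α = 0.025 and 80% power are assumed example values; this is the benchmark the paper improves on by exploiting the correlation of the test statistics, not the paper's own method.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(delta, sigma, theta=0.0, alpha=0.025, power=0.8):
    """Per-group sample size for a single non-inferiority comparison
    (normal outcomes, 1:1 allocation), testing H0: mu_E - mu_R <= -delta,
    where theta = mu_E - mu_R is the assumed true difference."""
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    return ceil(2 * (z * sigma / (theta + delta)) ** 2)

print(n_per_group(delta=0.2, sigma=1.0))  # 393 per group when mu_E = mu_R
```

A three‐arm design needs two such comparisons (experimental vs. reference, and assay sensitivity vs. placebo), which is why accounting for the joint distribution of the statistics pays off.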

2.
In this article, we study blinded sample size re‐estimation in the ‘gold standard’ design with internal pilot study for normally distributed outcomes. The ‘gold standard’ design is a three‐arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three‐arm trials, in which the non‐inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re‐estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re‐estimation based on the popular one‐sample variance estimator results in overpowered trials. Moreover, sample size re‐estimation based on unbiased variance estimators such as the Xing–Ganju variance estimator results in underpowered trials, as expected, because an overestimation of the variance, and thus of the sample size, is in general required for the re‐estimation procedure to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re‐estimation with the Xing–Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing–Ganju variance estimator, such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and can therefore be calculated prior to a trial. Moreover, we prove that sample size re‐estimation based on the Xing–Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.
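A rough sketch of the blinded estimation idea, assuming randomization in balanced blocks with one patient per arm: every block total then has the same expectation regardless of the (unknown) group means, so the variance of the totals identifies σ² without unblinding. This captures only the gist of the Xing–Ganju estimator; the group means and block structure below are made-up illustration values.

```python
import numpy as np

rng = np.random.default_rng(1)
k, n_blocks, sigma = 3, 200, 1.5
means = np.array([0.0, 0.2, 1.0])  # E, R, P group means, unknown at the blinded stage

# one patient per arm in each block: every block total has the same
# expectation, so Var(block total) = k * sigma^2 whatever the means are
blocks = means + rng.normal(0.0, sigma, size=(n_blocks, k))
sigma2_blinded = blocks.sum(axis=1).var(ddof=1) / k

print(round(sigma2_blinded, 2))  # close to the true sigma^2 = 2.25
```

Because this estimator's distribution does not involve the group means, an inflation factor for the re‐estimated sample size can be fixed before the trial starts, as the abstract notes.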

3.
A three‐arm clinical trial design with an experimental treatment, an active control, and a placebo control, commonly referred to as the gold standard design, enables testing of non‐inferiority or superiority of the experimental treatment compared with the active control. In this paper, we propose methods for designing and analyzing three‐arm trials with negative binomially distributed endpoints. In particular, we develop a Wald‐type test with a restricted maximum‐likelihood variance estimator for testing non‐inferiority or superiority. For this test, sample size and power formulas as well as optimal sample size allocations are derived. The performance of the proposed test is assessed in an extensive simulation study with regard to type I error rate, power, sample size, and sample size allocation. For comparison, Wald‐type statistics with a sample variance estimator and an unrestricted maximum‐likelihood estimator are included in the simulation study. We found that the proposed Wald‐type test with a restricted variance estimator performed well across the considered scenarios and is therefore recommended for application in clinical trials. The methods are motivated and illustrated by a recent clinical trial in multiple sclerosis. The R package ThreeArmedTrials, which implements the methods discussed in this paper, is available on CRAN. Copyright © 2015 John Wiley & Sons, Ltd.
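The flavor of such a Wald‐type test can be sketched as follows. Note this is the simpler sample‐variance comparator mentioned in the abstract, not the recommended restricted‐ML version, and the toy count data are invented for illustration; lower rates (e.g. relapse rates) are taken to be better, so the rate ratio is tested against an upper margin.

```python
import numpy as np

def wald_log_ratio(x_e, x_r, margin):
    """Wald statistic for H0: lambda_E / lambda_R >= margin (lower rates
    better), with a delta-method standard error based on sample variances.
    Reject H0 for large negative values of the statistic."""
    x_e, x_r = np.asarray(x_e, float), np.asarray(x_r, float)
    est = np.log(x_e.mean() / x_r.mean()) - np.log(margin)
    se = np.sqrt(x_e.var(ddof=1) / (len(x_e) * x_e.mean() ** 2)
                 + x_r.var(ddof=1) / (len(x_r) * x_r.mean() ** 2))
    return est / se

t = wald_log_ratio([2, 3, 2, 3], [1, 2, 2, 3], margin=1.3)
print(round(t, 3))  # -0.167
```

The restricted variance estimator replaces the sample variances by ML estimates computed under the boundary of the null hypothesis, which is what stabilizes the type I error rate in the paper's simulations.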

4.
The three‐arm clinical trial design, which includes a test treatment, an active reference, and a placebo control, is the gold standard for the assessment of non‐inferiority. In the presence of non‐compliance, one common concern is that an intent‐to‐treat (ITT) analysis (the standard approach to non‐inferiority trials) tends to increase the chances of erroneously concluding non‐inferiority, suggesting that the per‐protocol (PP) analysis may be preferable for non‐inferiority trials despite its inherent bias. The objective of this paper is to develop statistical methodology for dealing with non‐compliance in three‐arm non‐inferiority trials for censored, time‐to‐event data. Changes in treatment are considered the only form of non‐compliance. We present an approach using a three‐arm rank‐preserving structural failure time model and G‐estimation. Using simulations, we investigate in detail the impact of non‐compliance on non‐inferiority trials under ITT analysis, PP analysis, and the proposed method. Results indicate that the proposed method shows good characteristics and that neither ITT nor PP analyses can always guarantee the validity of the non‐inferiority conclusion. A SAS program implementing the proposed test procedure is available from the authors upon request. Copyright © 2014 John Wiley & Sons, Ltd.

5.
Non‐inferiority trials are becoming increasingly popular for comparative effectiveness research. Inclusion of a placebo arm, whenever possible, gives rise to a three‐arm trial, which rests on less burdensome assumptions than a standard two‐arm non‐inferiority trial. Most past developments for three‐arm trials define the margin as a pre‐specified fraction of the unknown effect size of the reference drug, that is, without directly specifying a fixed non‐inferiority margin. In some recent developments, however, a more direct approach with a pre‐specified fixed margin is considered, albeit in the frequentist setup. The Bayesian paradigm provides a natural path to integrate information from historical and current trials via sequential learning. In this paper, we propose a Bayesian approach for simultaneous testing of non‐inferiority and assay sensitivity in a three‐arm trial with normal responses. For the experimental arm, in the absence of historical information, non‐informative priors are assumed under two situations, namely when (i) the variance is known and (ii) the variance is unknown. A Bayesian decision criterion is derived and compared with the frequentist method using simulation studies. Finally, several published clinical trial examples are reanalyzed to demonstrate the benefit of the proposed procedure. Copyright © 2015 John Wiley & Sons, Ltd.
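For the known‐variance case with flat priors, the non‐inferiority piece of such a criterion has a closed form, since the posterior of the mean difference is normal around the observed difference. The sketch below covers only that piece with a fixed margin (the full procedure also tests assay sensitivity against placebo, and the numbers are invented example values).

```python
from math import sqrt
from scipy.stats import norm

def post_prob_noninferiority(xbar_e, xbar_r, sigma, n_e, n_r, delta):
    """P(mu_E > mu_R - delta | data) under flat priors and known sigma:
    the posterior of mu_E - mu_R is N(xbar_e - xbar_r, sigma^2 (1/n_e + 1/n_r)).
    Declare non-inferiority when this probability exceeds a threshold, e.g. 0.975."""
    sd = sigma * sqrt(1 / n_e + 1 / n_r)
    return norm.cdf((xbar_e - xbar_r + delta) / sd)

p = post_prob_noninferiority(0.9, 1.0, sigma=1.0, n_e=100, n_r=100, delta=0.3)
print(round(p, 4))  # ~0.9214
```

With a flat prior this posterior probability numerically mirrors the frequentist one‐sided p-value, which is why the interesting Bayesian gains in the paper come from informative historical priors and the joint assay-sensitivity requirement.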

6.
The clinical trial design including a test treatment, an active control and a placebo is called the gold standard design. In this paper, we develop a statistical method for planning and evaluating non‐inferiority trials with gold standard design for right‐censored time‐to‐event data. We consider both lost to follow‐up and administrative censoring. We present a semiparametric approach that only assumes the proportionality of the hazard functions. In particular, we develop an algorithm for calculating the minimal total sample size and its optimal allocation to treatment groups such that a desired power can be attained for a specific parameter constellation under the alternative. For the purpose of sample size calculation, we assume the endpoints to be Weibull distributed. By means of simulations, we investigate the actual type I error rate, power and the accuracy of the calculated sample sizes. Finally, we compare our procedure with a previously proposed procedure assuming exponentially distributed event times. To illustrate our method, we consider a double‐blinded, randomized, active and placebo controlled trial in major depression. Copyright © 2013 John Wiley & Sons, Ltd.

7.
In recent years there have been numerous publications on the design and analysis of three‐arm ‘gold standard’ noninferiority trials. Whenever feasible, regulatory authorities recommend the use of such three‐arm designs, including a test treatment, an active control, and a placebo. Nevertheless, it is desirable in many respects, for example for ethical reasons, to keep the placebo group as small as possible. We first give a short overview of the fixed sample size design of a three‐arm noninferiority trial with normally distributed outcomes and a fixed noninferiority margin. An optimal single‐stage design is derived that serves as a benchmark for the group sequential designs proposed in the main part of this work. It turns out that the number of patients allocated to placebo is remarkably small under the optimal design. Subsequently, approaches for group sequential designs aiming to further reduce the expected sample sizes are presented. By choosing different rejection boundaries for the respective null hypotheses, we obtain designs with quite different operating characteristics. We illustrate the approaches via numerical calculations and a comparison with the optimal single‐stage design. Furthermore, we derive approximately optimal boundaries for different goals, for example, to reduce the overall average sample size. The results show that implementing a group sequential design further improves on the optimal single‐stage design. Besides cost and time savings, the possible early termination of the placebo arm is a key advantage that could help to overcome ethical concerns. Copyright © 2013 John Wiley & Sons, Ltd.
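Group sequential rejection boundaries of the kind discussed here are found by solving a joint-normal probability equation. As a minimal sketch (not the paper's multi-hypothesis boundaries): for one hypothesis with two equally spaced looks, the interim and final Z statistics have correlation √(t₁/t₂) = √½, and a Pocock-type rule uses one common critical value c at both looks.

```python
from scipy.optimize import brentq
from scipy.stats import multivariate_normal

# one-sided level alpha with the same boundary c at both looks:
# solve P(Z1 < c, Z2 < c) = 1 - alpha under corr(Z1, Z2) = sqrt(1/2)
alpha, rho = 0.025, 0.5 ** 0.5
joint = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
c = brentq(lambda x: joint.cdf([x, x]) - (1 - alpha), 1.5, 3.0)
print(round(c, 3))  # close to Pocock's tabulated constant 2.178
```

Choosing different boundary shapes per null hypothesis, as the paper does, changes the operating characteristics, in particular how early the placebo arm can be stopped.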

8.
The three‐arm design with test treatment, reference treatment, and placebo offers internal assay sensitivity for a proof of non‐inferiority. In this design, the relative effects known from nonparametric theory are robust tools allowing the assessment of non‐inferiority in a range of situations. An asymptotic nonparametric theory is established for the three‐arm design based on the asymptotic distribution of rank means under the alternative. A rank test for non‐inferiority is derived, and Fieller's formula is used to calculate a corresponding confidence interval. The approach is extended to multicentre studies. Simulation studies demonstrating the accuracy of the methods are conducted, and an example is discussed. Copyright © 2009 John Wiley & Sons, Ltd.
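The relative effect underlying this rank-based approach is simple to estimate from the data; a minimal sketch over all pairs of observations (toy data, and the full method works with the joint rank means of all three arms):

```python
def relative_effect(x, y):
    """Nonparametric relative effect p = P(X < Y) + 0.5 * P(X = Y),
    estimated over all (x_i, y_j) pairs; p > 0.5 means Y tends to take
    larger values than X."""
    n_pairs = len(x) * len(y)
    wins = sum(xi < yj for xi in x for yj in y)
    ties = sum(xi == yj for xi in x for yj in y)
    return (wins + 0.5 * ties) / n_pairs

print(relative_effect([1, 2, 3], [2, 3, 4]))  # 7/9, i.e. about 0.778
```

Non‐inferiority is then formulated on the scale of these relative effects, which is what makes the procedure robust to the outcome distribution.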

9.
This paper addresses statistical issues in non‐inferiority trials where the primary outcome is a fatal event. The investigations are inspired by a recent Food and Drug Administration (FDA) draft guideline on treatments for nosocomial pneumonia. The non‐inferiority margin suggested in this guideline for the endpoint all‐cause mortality is defined on different distance measures (rate difference and odds ratio) and is discontinuous. Furthermore, the margin enables considerable power for the statistical proof of non‐inferiority at alternatives that might be regarded as clinically unacceptable, that is, even if the experimental treatment is harmful as compared with the control. We investigated the appropriateness of the proposed non‐inferiority margin as well as the performance of possible test statistics to be used for the analysis. A continuous variant of the margin proposed in the FDA guideline together with the unconditional exact test according to Barnard showed favorable characteristics with respect to type I error rate control and power. To prevent harmful new treatments from being declared as non‐inferior, we propose to add a ‘second hurdle’. We discuss examples and explore power characteristics when requiring both statistical significance and overcoming the second hurdle. Copyright © 2012 John Wiley & Sons, Ltd.

10.
The best information about the benefits of long‐term treatment is obtained from a long‐term placebo‐controlled trial. However, once efficacy has been demonstrated in relatively brief trials, it may not be possible to conduct long‐term placebo‐controlled trials, for ethical or practical reasons. This paper presents a method for estimating long‐term effects of a treatment from a placebo‐controlled trial in which some participants originally randomized to active treatment volunteer to continue on treatment during an extension study, while follow‐up of participants originally assigned to placebo ends with the trial, or they are crossed over to active treatment during the extension. We propose using data from the trial to project the outcomes of a ‘virtual twin’ for each active‐treatment volunteer under the counterfactual placebo condition, and using bootstrap methods for inference. The proposed method is validated using simulation and applied to data from the Fracture Intervention Trial and its extension, FLEX. Copyright © 2010 John Wiley & Sons, Ltd.

11.
Missing data are a common issue in cost‐effectiveness analysis (CEA) alongside randomised trials and are often addressed assuming the data are ‘missing at random’. However, this assumption is often questionable, and sensitivity analyses are required to assess the implications of departures from missing at random. Reference‐based multiple imputation provides an attractive approach for conducting such sensitivity analyses, because missing data assumptions are framed in an intuitive way by making reference to other trial arms. For example, a plausible missing‐not‐at‐random mechanism in a placebo‐controlled trial would be to assume that participants in the experimental arm who dropped out stopped taking their treatment and have outcomes similar to those in the placebo arm. Drawing on the increasing use of this approach in other areas, this paper aims to extend and illustrate reference‐based multiple imputation in CEA. It introduces the principles of reference‐based imputation and proposes an extension to the CEA context. The method is illustrated in the CEA of the CoBalT trial, which evaluated cognitive behavioural therapy for treatment‐resistant depression. Stata code is provided. We find that reference‐based multiple imputation provides a relevant and accessible framework for assessing the robustness of CEA conclusions to different missing data assumptions.
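The "jump to reference" idea described in the example can be caricatured in a few lines. This is a deliberately crude sketch with made-up data: missing experimental-arm outcomes are drawn from a normal fitted to the placebo (reference) arm, whereas the full method imputes from a joint model of the reference arm, conditioning on each participant's observed visits.

```python
import numpy as np

rng = np.random.default_rng(7)

# observed outcomes (NaN = dropout) in the experimental arm,
# and a fully observed placebo (reference) arm -- invented numbers
exp_arm = np.array([3.1, np.nan, 2.8, np.nan, 3.5, 2.9])
ref_arm = np.array([1.9, 2.2, 1.8, 2.1, 2.0, 1.7, 2.3, 1.6])

def jump_to_reference(arm, reference, rng, n_imputations=50):
    """Fill each missing value with a draw from a normal fitted to the
    reference arm, repeated to produce multiple completed data sets."""
    mu, sd = reference.mean(), reference.std(ddof=1)
    completed = []
    for _ in range(n_imputations):
        filled = arm.copy()
        miss = np.isnan(filled)
        filled[miss] = rng.normal(mu, sd, miss.sum())
        completed.append(filled)
    return np.array(completed)

imps = jump_to_reference(exp_arm, ref_arm, rng)
print(round(imps[:, 1].mean(), 2))  # imputed values centred near the reference mean
```

Each completed data set is then analysed as usual and the results pooled with Rubin's rules; in the CEA extension, costs and effects are imputed jointly.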

12.
Adaptive designs that are based on group‐sequential approaches have the benefit of being efficient, as stopping boundaries can be found that lead to good operating characteristics with test decisions based solely on sufficient statistics. The drawback of these so‐called ‘pre‐planned adaptive’ designs is that unexpected design changes are not possible without impacting the error rates. ‘Flexible adaptive designs’, on the other hand, can cope with a large number of contingencies at the cost of reduced efficiency. In this work, we focus on two different approaches for multi‐arm multi‐stage trials, both based on group‐sequential ideas, and discuss how these ‘pre‐planned adaptive designs’ can be modified to allow for flexibility. We then show how the added flexibility can be used for treatment selection and sample size reassessment, and evaluate the impact on the error rates in a simulation study. The results show that a powerful overall procedure can be obtained by combining a well‐chosen pre‐planned design with an application of the conditional error principle to allow flexible treatment selection. Copyright © 2014 John Wiley & Sons, Ltd.
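One standard realisation of the machinery that makes such flexibility possible is the inverse-normal combination of stage-wise p-values with pre-fixed weights; a minimal sketch (equal information weights assumed for illustration):

```python
from math import sqrt
from scipy.stats import norm

def inverse_normal_combination(p1, p2, w1=sqrt(0.5), w2=sqrt(0.5)):
    """Combine independent stage-wise p-values with pre-fixed weights
    (w1**2 + w2**2 = 1). The combined p-value keeps the type I error rate
    even if the second stage is redesigned based on first-stage data."""
    z = w1 * norm.ppf(1 - p1) + w2 * norm.ppf(1 - p2)
    return 1 - norm.cdf(z)

print(round(inverse_normal_combination(0.05, 0.05), 4))  # 0.01
```

The conditional error principle generalises this: any mid-course modification is admissible as long as the redesigned remainder of the trial spends no more than the conditional type I error of the pre-planned design.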

13.
After a non‐inferiority clinical trial, a new therapy may be accepted as effective, even if its treatment effect is slightly smaller than the current standard. It is therefore possible that, after a series of trials where the new therapy is slightly worse than the preceding drugs, an ineffective or harmful therapy might be incorrectly declared efficacious; this is known as ‘bio‐creep’. Several factors may influence the rate at which bio‐creep occurs, including the distribution of the effects of the new agents being tested and how that changes over time, the choice of active comparator, the method used to account for the variability of the estimate of the effect of the active comparator, and changes in the effect of the active comparator from one trial to the next (violations of the constancy assumption). We performed a simulation study to examine which of these factors might lead to bio‐creep and found that bio‐creep was rare, except when the constancy assumption was violated. Copyright © 2010 John Wiley & Sons, Ltd.

14.
We investigate the use of permutation tests for the analysis of parallel and stepped‐wedge cluster‐randomized trials. Permutation tests for parallel designs with exponential family endpoints have been extensively studied. The optimal permutation tests developed for exponential family alternatives require information on the intraclass correlation, a quantity not yet defined for time‐to‐event endpoints, so it is unclear how efficient permutation tests can be constructed for cluster‐randomized trials with such endpoints. We consider a class of test statistics formed by a weighted average of pair‐specific treatment effect estimates and offer practical guidance on the choice of weights to improve efficiency. We apply the permutation tests to a cluster‐randomized trial evaluating the effect of an intervention to reduce the incidence of hospital‐acquired infection. In some settings, outcomes from different clusters may be correlated, and we evaluate the validity and efficiency of the permutation test in such settings. Lastly, we propose a permutation test for stepped‐wedge designs, compare its performance with mixed‐effects modeling, and illustrate its superiority when sample sizes are small, the underlying distribution is skewed, or there is correlation across clusters. Copyright © 2017 John Wiley & Sons, Ltd.
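The basic permutation test on cluster-level summaries can be made exact for small trials by enumerating all treatment allocations; a minimal sketch with invented cluster means and equal weights (the paper's statistics weight pair-specific estimates more carefully):

```python
from itertools import combinations

def permutation_pvalue(cluster_means, n_treated):
    """One-sided exact permutation test on cluster-level summaries:
    re-allocate the treatment label over all subsets of the given size and
    count mean differences at least as large as the observed one.
    The first n_treated entries are the clusters actually treated."""
    total = sum(cluster_means)
    n_control = len(cluster_means) - n_treated
    def diff(treated):
        t = sum(treated)
        return t / n_treated - (total - t) / n_control
    obs = diff(cluster_means[:n_treated])
    allocs = list(combinations(cluster_means, n_treated))
    return sum(diff(a) >= obs for a in allocs) / len(allocs)

# first three clusters received the intervention
print(permutation_pvalue([5, 6, 7, 1, 2, 3], n_treated=3))  # 0.05
```

With 6 clusters there are only 20 allocations, so 0.05 is the smallest attainable one-sided p-value, illustrating why small cluster-randomized trials need careful design.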

15.
Baseline risk is a proxy for unmeasured but important patient‐level characteristics, which may be modifiers of the treatment effect, and is a potential source of heterogeneity in meta‐analysis. Models adjusting for baseline risk have been developed for pairwise meta‐analysis using the observed event rate in the placebo arm and taking into account the measurement error in the covariate to ensure that an unbiased estimate of the relationship is obtained. Our objective is to extend these methods to network meta‐analysis, where it is of interest to adjust for baseline imbalances in the non‐intervention group event rate to reduce both heterogeneity and possibly inconsistency. This is complicated in network meta‐analysis because the covariate is sometimes missing, as not all studies in a network include a non‐active intervention arm. A random‐effects meta‐regression model allowing for the inclusion of multi‐arm trials and trials without a ‘non‐intervention’ arm is developed. Analyses are conducted within a Bayesian framework using the WinBUGS software. The method is illustrated using two examples: (i) interventions to promote functional smoke alarm ownership by households with children and (ii) analgesics to reduce post‐operative morphine consumption following major surgery. The results show no evidence of a baseline effect in the smoke alarm example, but the analgesics example shows that the adjustment can greatly reduce heterogeneity and improve overall model fit. Copyright © 2012 John Wiley & Sons, Ltd.

16.
In group‐randomized trials, a frequent practical limitation to adopting rigorous research designs is that only a small number of groups may be available, and therefore, simple randomization cannot be relied upon to balance key group‐level prognostic factors across the comparison arms. Constrained randomization is an allocation technique proposed for ensuring balance and can be used together with a permutation test for randomization‐based inference. However, several statistical issues have not been thoroughly studied when constrained randomization is considered. Therefore, we used simulations to evaluate key issues including the following: the impact of the choice of the candidate set size and the balance metric used to guide randomization; the choice of adjusted versus unadjusted analysis; and the use of model‐based versus randomization‐based tests. We conducted a simulation study to compare the type I error and power of the F‐test and the permutation test in the presence of group‐level potential confounders. Our results indicate that the adjusted F‐test and the permutation test perform similarly and slightly better for constrained randomization relative to simple randomization in terms of power, and the candidate set size does not substantially affect their power. Under constrained randomization, however, the unadjusted F‐test is conservative, while the unadjusted permutation test carries the desired type I error rate as long as the candidate set size is not too small; the unadjusted permutation test is consistently more powerful than the unadjusted F‐test and gains power as candidate set size changes. Finally, we caution against the inappropriate specification of permutation distribution under constrained randomization. An ongoing group‐randomized trial is used as an illustrative example for the constrained randomization design. Copyright © 2015 John Wiley & Sons, Ltd.
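Constrained randomization itself is mechanically simple; a minimal sketch for two arms with one group-level covariate and a squared-difference balance metric (the candidate fraction, covariate values, and metric are illustrative choices, not the paper's):

```python
from itertools import combinations
import random

def constrained_randomization(covariate, n_arm, candidate_fraction=0.1, seed=42):
    """Enumerate all allocations of groups to two equal arms, keep the
    candidate set with the best covariate balance (smallest squared
    difference in arm means), then randomize within that set."""
    clusters = list(range(len(covariate)))
    allocs = list(combinations(clusters, n_arm))
    def imbalance(arm):
        rest = [i for i in clusters if i not in arm]
        m1 = sum(covariate[i] for i in arm) / n_arm
        m2 = sum(covariate[i] for i in rest) / len(rest)
        return (m1 - m2) ** 2
    allocs.sort(key=imbalance)
    candidates = allocs[:max(1, int(candidate_fraction * len(allocs)))]
    return random.Random(seed).choice(candidates), candidates

chosen, candidates = constrained_randomization([10, 12, 30, 32, 50, 55], n_arm=3)
print(len(candidates), chosen)
```

A valid permutation test afterwards must re-randomize over the same candidate set, which is the "specification of the permutation distribution" the abstract cautions about.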

17.
Studies for assessing non‐inferiority or superiority of a new diagnostic or screening test relative to a standard test use a complete matched‐pairs design in which results for both tests are obtained for all subjects. We present alternative tests for the situation where results on the standard test are obtained for all subjects but results on the new test are obtained for a subset of those subjects only. This situation is common when results for the standard test are available from a large biobank. We present a stratified random sampling procedure for drawing the subsample of subjects that receive the new diagnostic test with strata defined by the two outcome categories of the standard test. We derive appropriate statistical tests for non‐inferiority and superiority of the new diagnostic test. We will show how the number of subjects that receive the new test can be minimized by non‐proportional stratified random sampling. Copyright © 2012 John Wiley & Sons, Ltd.

18.
Adaptive designs encompass all trials allowing various types of design modifications over the course of the trial. A key requirement for confirmatory adaptive designs to be accepted by regulators is strong control of the family‐wise error rate. This can be achieved by combining the p‐values for each arm and stage to account for adaptations (including but not limited to treatment selection), sample size adaptation, and multiple stages. While the theory for this is well established, in practice these methods can perform poorly, especially for unbalanced designs and for small to moderate sample sizes. The problem is that standard stagewise tests have an inflated type I error rate, especially (but not only) when the baseline success rate is close to the boundary, and this carries over to the adaptive tests, seriously inflating the family‐wise error rate. We propose to fix this problem by feeding the adaptive test with second‐order accurate p‐values, in particular bootstrap p‐values. Secondly, an adjusted version of the Simes procedure for testing intersection hypotheses that reduces the built‐in conservatism is suggested. Numerical work and simulations show that, unlike its standard counterparts, the new approach preserves the overall error rate at or below the nominal level across the board, irrespective of the baseline rate, stagewise sample sizes, or allocation ratio. Copyright © 2017 John Wiley & Sons, Ltd.

19.
Network meta‐analysis is a statistical method for combining information from randomised trials that compare two or more treatments for a given medical condition. Consistent treatment effects are estimated for all possible treatment comparisons. For estimation, weighted least squares regression, which naturally generalises standard pairwise meta‐analysis, can be used. Networks typically include multi‐arm studies, in which observed pairwise comparisons are correlated; this must be accounted for. To this end, two methods have been proposed: a standard regression approach and a new approach coming from graph theory and based on contrast‐based data (Rücker 2012). In the standard approach, the dimension of the design matrix is appropriately reduced until it is invertible (‘reduce dimension’). In the alternative approach, the weights of comparisons coming from multi‐arm studies are appropriately reduced (‘reduce weights’). As it has been unclear how these approaches are related to each other, we give a mathematical proof that both lead to identical estimates. The ‘reduce weights’ approach can be interpreted as the construction of a network of independent two‐arm studies that is basically equivalent to the given network with multi‐arm studies. Thus, a simple random‐effects model is obtained, with one additional parameter for a common heterogeneity variance. This is applied to a systematic review in depression. Copyright © 2014 John Wiley & Sons, Ltd.

20.
The topic of applying two‐stage designs in the field of bioequivalence studies has recently gained attention in the literature and in regulatory guidelines. While there is some methodological research on the application of group sequential designs in bioequivalence studies, work on adaptive approaches has so far focused on superiority and non‐inferiority trials. In particular, no comparison of the features and performance characteristics of these designs has been performed, so the question of which design to employ in this setting remains open. In this paper, we discuss and compare ‘classical’ group sequential designs and three types of adaptive designs that offer the option of mid‐course sample size recalculation. A comprehensive simulation study demonstrates that group sequential designs can be identified that show power characteristics similar to those of the adaptive designs but require a lower average sample size. The methods are illustrated with a real bioequivalence study example. Copyright © 2015 John Wiley & Sons, Ltd.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号