Similar Articles
20 similar articles retrieved.
1.
The clinical trial design including a test treatment, an active control, and a placebo is called the gold standard design. In this paper, we develop a statistical method for planning and evaluating non-inferiority trials with the gold standard design for right-censored time-to-event data. We consider both loss to follow-up and administrative censoring. We present a semiparametric approach that assumes only the proportionality of the hazard functions. In particular, we develop an algorithm for calculating the minimal total sample size and its optimal allocation to the treatment groups such that a desired power can be attained for a specific parameter constellation under the alternative. For the purpose of sample size calculation, we assume the endpoints to be Weibull distributed. By means of simulations, we investigate the actual type I error rate, the power, and the accuracy of the calculated sample sizes. Finally, we compare our procedure with a previously proposed procedure that assumes exponentially distributed event times. To illustrate our method, we consider a double-blinded, randomized, active- and placebo-controlled trial in major depression. Copyright © 2013 John Wiley & Sons, Ltd.
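As a rough illustration of what such a planning calculation involves, the following is a minimal simulation sketch in R, not the authors' algorithm: it estimates the power of a hazard-ratio non-inferiority test (test vs. active control) under Weibull event times with administrative censoring only, using a Cox model so that only proportional hazards is assumed. The 2:2:1 allocation, margin, and Weibull parameters are illustrative assumptions.

library(survival)

sim_power <- function(n, alloc = c(2, 2, 1), shape = 1.5,
                      scale = c(test = 1.0, ref = 1.0, plc = 0.6),
                      margin = 1.3, t_admin = 2, nsim = 500, alpha = 0.025) {
  n_arm <- round(n * alloc / sum(alloc))
  rej <- replicate(nsim, {
    grp <- factor(rep(c("test", "ref", "plc"), n_arm),
                  levels = c("ref", "test", "plc"))
    tt  <- rweibull(sum(n_arm), shape = shape, scale = scale[as.character(grp)])
    ev  <- as.numeric(tt <= t_admin)             # administrative censoring only
    fit <- coxph(Surv(pmin(tt, t_admin), ev) ~ grp)
    est <- coef(fit)["grptest"]                  # log HR, test vs reference
    se  <- sqrt(vcov(fit)["grptest", "grptest"])
    (est + qnorm(1 - alpha) * se) < log(margin)  # upper CI bound below margin
  })
  mean(rej)
}

set.seed(1)
sim_power(n = 300)

A total sample size search would wrap sim_power in a root-finding loop over n; loss to follow-up could be added as an independent random censoring time.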

2.
A three-arm clinical trial design with an experimental treatment, an active control, and a placebo control, commonly referred to as the gold standard design, enables testing of non-inferiority or superiority of the experimental treatment compared with the active control. In this paper, we propose methods for designing and analyzing three-arm trials with negative binomially distributed endpoints. In particular, we develop a Wald-type test with a restricted maximum-likelihood variance estimator for testing non-inferiority or superiority. For this test, sample size and power formulas as well as optimal sample size allocations are derived. The performance of the proposed test is assessed in an extensive simulation study with regard to type I error rate, power, sample size, and sample size allocation. For the purpose of comparison, Wald-type statistics with a sample variance estimator and an unrestricted maximum-likelihood estimator are included in the simulation study. We found that the proposed Wald-type test with a restricted variance estimator performed well across the considered scenarios and is therefore recommended for application in clinical trials. The proposed methods are motivated and illustrated by a recent clinical trial in multiple sclerosis. The R package ThreeArmedTrials, which implements the methods discussed in this paper, is available on CRAN. Copyright © 2015 John Wiley & Sons, Ltd.
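To make the form of such a test concrete, here is a hedged sketch of a Wald-type retention-of-effect statistic for negative binomial counts using the simple sample-variance estimator, one of the comparators in the paper, rather than the recommended restricted maximum-likelihood version (which the CRAN package ThreeArmedTrials implements). Smaller counts are assumed to be better, and Delta is the retained fraction of the reference-vs-placebo effect; all parameter values below are illustrative.

wald_ret_nb <- function(x_exp, x_ref, x_pla, Delta = 0.8, alpha = 0.025) {
  # H0: mu_E - Delta*mu_R - (1 - Delta)*mu_P >= 0 (smaller rates are better)
  m  <- c(mean(x_exp), mean(x_ref), mean(x_pla))
  v  <- c(var(x_exp) / length(x_exp), var(x_ref) / length(x_ref),
          var(x_pla) / length(x_pla))
  cc <- c(1, -Delta, -(1 - Delta))                # retention-of-effect contrast
  z  <- sum(cc * m) / sqrt(sum(cc^2 * v))
  list(z = z, reject = z < qnorm(alpha))
}

# Toy usage with hypothetical lesion-count-like data:
set.seed(42)
wald_ret_nb(x_exp = rnbinom(120, size = 1, mu = 0.4),
            x_ref = rnbinom(120, size = 1, mu = 0.4),
            x_pla = rnbinom(60,  size = 1, mu = 0.8))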

3.
In recent years there have been numerous publications on the design and analysis of three-arm 'gold standard' non-inferiority trials. Whenever feasible, regulatory authorities recommend the use of such three-arm designs, which include a test treatment, an active control, and a placebo. Nevertheless, it is desirable in many respects, for example for ethical reasons, to keep the placebo group as small as possible. We first give a short overview of the fixed sample size design of a three-arm non-inferiority trial with normally distributed outcomes and a fixed non-inferiority margin. An optimal single-stage design is derived that serves as a benchmark for the group sequential designs proposed in the main part of this work. It turns out that the number of patients allocated to placebo is remarkably small for the optimal design. Subsequently, approaches for group sequential designs aiming to further reduce the expected sample sizes are presented. By choosing different rejection boundaries for the respective null hypotheses, we obtain designs with quite different operating characteristics. We illustrate the approaches via numerical calculations and a comparison with the optimal single-stage design. Furthermore, we derive approximately optimal boundaries for different goals, for example reducing the overall average sample size. The results show that implementing a group sequential design further improves on the optimal single-stage design. Besides cost and time savings, the possible early termination of the placebo arm is a key advantage that could help to overcome ethical concerns. Copyright © 2013 John Wiley & Sons, Ltd.

4.
Phase II clinical trials are performed to investigate whether a novel treatment shows sufficient promise of efficacy to justify its evaluation in a subsequent definitive phase III trial; they are often also used to select the dose to take forward. In this paper we discuss different design proposals for a phase II trial in which three active treatment doses and a placebo control are to be compared in terms of a single ordered categorical endpoint. The sample size requirements for one-stage and two-stage designs are derived, based on an approach similar to that of Dunnett. Detailed computations are presented for an illustrative example concerning a study in stroke, and allowance is made for early stopping for futility. Simulations are used to verify that the specified type I error and power requirements are met, despite certain approximations used in the derivation of the sample size. The advantages and disadvantages of the different designs are discussed, and the scope for extending the approach to other forms of endpoint is considered. Copyright © 2008 John Wiley & Sons, Ltd.
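For intuition, the Dunnett-type ingredient of such sample size work is the equicoordinate critical value for comparing several doses with a shared placebo arm. A minimal sketch, assuming equal allocation (so the pairwise correlation of the test statistics is 1/2) and a plain normal approximation rather than the paper's ordered-categorical machinery:

library(mvtnorm)

dunnett_crit <- function(k = 3, alpha = 0.025) {
  R <- matrix(0.5, k, k); diag(R) <- 1     # corr of dose-vs-placebo statistics
  qmvnorm(1 - alpha, tail = "lower.tail", corr = R)$quantile
}

dunnett_crit()  # one-sided critical value for 3 doses, roughly 2.35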

5.
Simon's optimal two-stage design has been widely used in early-phase clinical trials for oncology and AIDS studies with binary endpoints. With this approach, the second-stage sample size is fixed once the trial passes the first stage with sufficient activity. Adaptive designs, such as those of Banerjee and Tsiatis (2006) and Englert and Kieser (2013), are flexible in the sense that the second-stage sample size depends on the response from the first stage, and these designs often reduce the expected sample size under the null hypothesis compared with Simon's approach. An unappealing trait of the existing designs is that the second-stage sample size is not a non-increasing function of the first-stage response count. In this paper, an efficient search procedure, the branch-and-bound algorithm, is used to search extensively for the optimal adaptive design with the smallest expected sample size under the null, while the type I and II error rates are maintained and the aforementioned monotonicity property is respected. The proposed optimal design is observed to have smaller expected sample sizes than Simon's optimal design, and its maximum total sample size is very close to that from Simon's method. The proposed optimal adaptive two-stage design is recommended for use in practice to improve the flexibility and efficiency of early-phase therapeutic development. Copyright © 2015 John Wiley & Sons, Ltd.
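For reference, the quantities such searches optimize are easy to compute for any candidate two-stage design. A minimal sketch of the operating characteristics of a given design (stop after stage 1 if at most r1 responses among n1 patients; reject the null at the end if more than r responses among n), in the style of Simon (1989) rather than the paper's branch-and-bound search:

oc_two_stage <- function(p, n1, r1, n, r) {
  x1  <- (r1 + 1):n1                              # stage-1 counts that continue
  rej <- sum(dbinom(x1, n1, p) * (1 - pbinom(r - x1, n - n1, p)))
  pet <- pbinom(r1, n1, p)                        # P(early termination)
  c(reject = rej, PET = pet, EN = n1 + (1 - pet) * (n - n1))
}

# Simon's optimal design for p0 = 0.1, p1 = 0.3 (alpha = 0.05, power = 0.8)
# is r1/n1 = 1/12 and r/n = 5/35:
oc_two_stage(0.1, 12, 1, 35, 5)  # type I error and EN under the null (~19.8)
oc_two_stage(0.3, 12, 1, 35, 5)  # power under the alternative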

6.
The sequential parallel comparison design (SPCD) has been proposed to increase the likelihood of success of clinical trials in therapeutic areas where a high placebo response is a concern. The trial is run in two stages, and subjects are randomized into three groups: (i) placebo in both stages; (ii) placebo in the first stage and drug in the second stage; and (iii) drug in both stages. We consider the case of binary response data (response/no response). In the SPCD, all first-stage data, together with the second-stage data from placebo subjects who failed to respond in the first stage, are utilized in the efficacy analysis. We develop one- and two-degree-of-freedom score tests for the treatment effect in the SPCD. We give formulae for asymptotic power and sample size computations and evaluate their accuracy via simulation studies. We compute the optimal allocation ratio between drug and placebo in stage 1 to determine, from a theoretical viewpoint, whether a single-stage design, a two-stage design with placebo only in the first stage, or a two-stage design with drug in both stages is best for a given set of response rates. As the response rates are not known before the trial, a two-stage approach with allocation to active drug in both stages is a robust design choice.
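The flavor of the pooled analysis can be conveyed by the common weighted-statistic version of the SPCD test, sketched below under the usual assumption (established in the SPCD literature) that the two stagewise statistics are asymptotically uncorrelated under the null. This is a simplified stand-in, not the paper's score tests, and the weight w is an illustrative choice.

spcd_weighted_z <- function(x1_drug, x1_plc, x2_drug, x2_plc, w = 0.6) {
  # x1_*: 0/1 responses in stage 1; x2_*: 0/1 responses of re-randomized
  # stage-1 placebo non-responders in stage 2
  d <- c(mean(x1_drug) - mean(x1_plc), mean(x2_drug) - mean(x2_plc))
  v <- c(var(x1_drug) / length(x1_drug) + var(x1_plc) / length(x1_plc),
         var(x2_drug) / length(x2_drug) + var(x2_plc) / length(x2_plc))
  z <- sum(c(w, 1 - w) * d) / sqrt(sum(c(w, 1 - w)^2 * v))
  list(z = z, p = 1 - pnorm(z))                   # one-sided p-value
}

set.seed(5)
spcd_weighted_z(rbinom(80, 1, 0.5), rbinom(120, 1, 0.35),
                rbinom(40, 1, 0.4), rbinom(40, 1, 0.25))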

7.
In this article, we study blinded sample size re-estimation in the 'gold standard' design with an internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active control and a placebo in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials, in which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing–Ganju variance estimator results in underpowered trials; this is expected, because an overestimation of the variance, and thus of the sample size, is in general required for a re-estimation procedure to meet the target power. To overcome this problem, we propose an inflation factor for sample size re-estimation with the Xing–Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing–Ganju variance estimator, such as unbiasedness and a distribution that does not depend on the group means, the inflation factor does not depend on nuisance parameters and can therefore be calculated prior to the trial. Moreover, we prove that sample size re-estimation based on the Xing–Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.
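The block-sum idea behind a blinded, mean-free variance estimate can be sketched as follows; this is a rough sketch in the spirit of Xing and Ganju, reconstructed from memory, so its fidelity to their exact proposal should be checked against the original. Within each complete randomization block the arm composition is fixed, so block totals vary only through the residual errors and yield an estimate of sigma^2 that does not involve the unknown group means. The inflation factor value below is hypothetical.

blinded_var_blocks <- function(y, block) {
  # y: blinded outcomes; block: complete randomization blocks of equal size
  sums <- tapply(y, block, sum)
  m    <- length(y) / length(sums)    # common block size (assumed constant)
  as.numeric(var(sums) / m)           # mean-free estimate of sigma^2
}

reestimate_n <- function(sigma2, delta, alpha = 0.025, power = 0.9, kappa = 1.1) {
  # per-group size for a two-sample z-test, inflated by a factor kappa
  ceiling(kappa * 2 * sigma2 * (qnorm(1 - alpha) + qnorm(power))^2 / delta^2)
}

set.seed(7)
y   <- rnorm(60, mean = rep(c(0, 0, 0, 1, 1), 12), sd = 2)  # blocks of 5, fixed mix
blk <- rep(1:12, each = 5)
reestimate_n(blinded_var_blocks(y, blk), delta = 1.5)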

8.
Many non-inferiority trials of a test treatment versus an active control may also, if ethical, incorporate a placebo arm. Inclusion of a placebo arm enables a direct assessment of assay sensitivity. It also allows construction of a non-inferiority test that avoids the problematic specification of an absolute non-inferiority margin, and instead evaluates whether the test treatment preserves a pre-specified proportion of the effect of the active control over placebo. We describe a two-stage procedure for sample size recalculation in such a setting that maintains the desired power more closely than a fixed sample approach when the magnitude of the effect of the active control differs from that anticipated. We derive an allocation rule for randomization under which the procedure preserves the type I error rate, and show that this coincides with that previously presented for optimal allocation of the sample size among the three treatment arms.

9.
The 'gold standard' design for three-arm trials refers to trials with an active control and a placebo control in addition to the experimental treatment group. This design is recommended whenever it is ethically justifiable, as it allows the simultaneous comparison of the experimental treatment, the active control, and placebo. Parametric testing methods have been studied extensively in recent years. However, these methods often tend to be liberal or conservative when the distributional assumptions are not met, particularly with small sample sizes. In this article, we introduce a studentized permutation test for testing non-inferiority and superiority of the experimental treatment compared with the active control in three-arm trials in the 'gold standard' design. The performance of the studentized permutation test for finite sample sizes is assessed in a Monte Carlo simulation study under various parameter constellations. Emphasis is put on whether the studentized permutation test meets the target significance level. For comparison purposes, commonly used Wald-type tests, which do not make any distributional assumptions, are included in the simulation study. The simulation study shows that, for count data, the presented studentized permutation test outperforms its competitors, for instance the test based on a quasi-Poisson model. The methods discussed in this paper are implemented in the R package ThreeArmedTrials, which is available on the Comprehensive R Archive Network (CRAN). Copyright © 2016 John Wiley & Sons, Ltd.
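A generic studentized permutation test is easy to state. The sketch below handles a two-arm non-inferiority comparison of a mean difference with an absolute margin delta (larger outcomes better), shifting the test arm so the boundary null becomes an equality before permuting; it is a simplified continuous-data stand-in for the paper's three-arm count-data procedure.

stud_perm_ni <- function(x_test, x_ref, delta = -1, B = 2000) {
  tstat <- function(a, b)
    (mean(a) - mean(b)) / sqrt(var(a) / length(a) + var(b) / length(b))
  y     <- c(x_test - delta, x_ref)    # shift: H0 becomes equality of means
  g     <- rep(1:2, c(length(x_test), length(x_ref)))
  t_obs <- tstat(y[g == 1], y[g == 2])
  t_ref <- replicate(B, { gp <- sample(g); tstat(y[gp == 1], y[gp == 2]) })
  mean(t_ref >= t_obs)                 # permutation p-value
}

set.seed(11)
stud_perm_ni(rnorm(50, 10, 3), rnorm(50, 10, 3), delta = -2)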

10.
In many randomized controlled trials, treatment groups are of equal size, but this is not necessarily the best choice. This paper provides a methodology for calculating optimal treatment allocations in longitudinal trials in which multiple treatment groups are compared with a placebo group and the comparisons may have unequal importance. The focus is on trials with a survival endpoint measured in discrete time. We assume that the underlying survival process is Weibull and show that the values of the Weibull parameters affect the optimal treatment allocation scheme in an interesting way. Additionally, we incorporate different cost considerations at the subject and measurement levels and determine the optimal number of time periods. We also show that when many events occur at the beginning of the trial, fewer time periods are more efficient. As an application, we revisit a risperidone maintenance treatment trial in schizophrenia, use our proposed methodology to redesign it, and discuss the merits of the resulting optimal design. Copyright © 2015 John Wiley & Sons, Ltd.

11.
Step-up procedures have been shown to be powerful testing methods in clinical trials comparing several treatments with a control. In this paper, the determination of the optimal sample size for a step-up procedure that attains a pre-specified power level is discussed. Various definitions of power, such as all-pairs power, any-pair power, per-pair power, and average power, in one- and two-sided tests are considered. An extensive numerical study confirms that square-root allocation of the sample size among treatments provides a better approximation to the optimal sample size than equal allocation. Based on square-root allocation, tables are constructed from which users can conveniently obtain the approximate required sample size for selected configurations of parameters and power. For clinical studies with difficulties in recruiting patients, or when additional subjects lead to a significant increase in cost, a more precise computation of the required sample size is recommended; in such circumstances, our proposed procedure may be adopted to obtain the optimal sample size. It is also found that, contrary to conventional belief, the optimal allocation may considerably reduce the total sample size requirement in certain cases. The determination of the required sample sizes using both allocation rules is illustrated with two examples from clinical studies. Copyright © 2010 John Wiley & Sons, Ltd.
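The square-root rule itself is simple enough to state in two lines: with k treatments compared against one control, the control arm receives sqrt(k) times the per-treatment sample size. A minimal sketch (ignoring rounding effects):

sqrt_allocation <- function(N, k) {
  w <- c(sqrt(k), rep(1, k))                   # control weight, then treatments
  setNames(round(N * w / sum(w)), c("control", paste0("trt", 1:k)))
}

sqrt_allocation(N = 300, k = 4)  # control gets 100; each of 4 treatments gets 50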

12.
The three-arm clinical trial design, which includes a test treatment, an active reference, and a placebo control, is the gold standard for the assessment of non-inferiority. In the presence of non-compliance, one common concern is that an intent-to-treat (ITT) analysis (the standard approach to non-inferiority trials) tends to increase the chance of erroneously concluding non-inferiority, suggesting that the per-protocol (PP) analysis may be preferable for non-inferiority trials despite its inherent bias. The objective of this paper is to develop statistical methodology for dealing with non-compliance in three-arm non-inferiority trials with censored, time-to-event data. Changes in treatment are considered the only form of non-compliance. We present an approach using a three-arm rank-preserving structural failure time model and G-estimation. Using simulations, the impact of non-compliance on non-inferiority trials is investigated in detail under ITT analysis, PP analysis, and the proposed method. The results indicate that the proposed method shows good characteristics and that neither the ITT nor the PP analysis can always guarantee the validity of the non-inferiority conclusion. A SAS program implementing the proposed test procedure is available from the authors upon request. Copyright © 2014 John Wiley & Sons, Ltd.

13.
When several treatments are available for evaluation in a clinical trial, different design options exist. We compare multi-arm multi-stage designs with factorial designs; in particular, we consider a 2 × 2 factorial design in which groups of patients receive treatment A, treatment B, both, or neither. We investigate the performance and characteristics of both types of design under different scenarios and compare them using both theory and simulations. For the factorial designs, we construct appropriate test statistics for testing the hypothesis of no treatment effect against the control group with overall control of the type I error. We study the effect of the choice of allocation ratios on the critical value and the sample size required for a target power. We also study how a possible interaction between the two treatments A and B affects the type I and type II errors when testing for significance of each treatment effect. We present both simulation results and a case study of an osteoarthritis clinical trial. We find that in a factorial design that is optimal in the sense of minimising the associated critical value, the allocation ratios differ substantially from those of a balanced design. We also find evidence of potentially large losses in power in factorial designs for moderate deviations from the study design assumptions, and of little gain compared with multi-arm multi-stage designs when the assumptions hold. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

14.
Three-arm trials including an experimental treatment, an active control, and a placebo group are frequently preferred for the assessment of non-inferiority. In contrast to two-arm non-inferiority studies, these designs allow a direct proof of efficacy of a new treatment by comparison with placebo. As a further advantage, the test problem for establishing non-inferiority can be formulated in such a way that rejection of the null hypothesis ensures that a pre-defined portion of the (unknown) effect of the reference versus placebo is preserved by the treatment under investigation. We present statistical methods for this study design in the situation of a binary outcome variable. Asymptotic test procedures are given and their actual type I error rates are calculated. Approximate sample size formulae are derived and their accuracy is discussed. Furthermore, the question of the optimal allocation of the total sample size is considered. Power properties of the testing strategy, including a pre-test for assay sensitivity, are presented. The derived methods are illustrated by application to a clinical trial in depression.

15.
To address the objective in a clinical trial of estimating the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semiparametric efficient estimator is applied. This approach is made efficient by specifying the phase-two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it has apparently seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. We perform simulations to identify settings where the optimal approach significantly improves efficiency compared with approaches in current practice. We provide proofs and R code. The optimality results are used to design an HIV vaccine trial whose objective is to compare the mean 'importance-weighted' breadth (Y) of the T-cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere from no efficiency gain to a large one (up to 24% in the examples) compared with the approach using the same efficient estimator but simple random sampling; greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y | W] is important for realizing the efficiency gain, which is aided by an ample phase-two sample and by using a robust fitting method. Copyright © 2013 John Wiley & Sons, Ltd.
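The optimal selection probabilities have a Neyman-allocation flavor: sample phase-two units within strata of W proportionally to the conditional standard deviation of Y divided by the square root of the per-unit cost. A minimal sketch under the simplifying assumptions of discrete W-strata and a target expected phase-two size (the paper's results are more general):

phase2_probs <- function(N_h, sd_h, cost_h, n_target) {
  # N_h: stratum sizes; sd_h: sd(Y | W = h); cost_h: per-unit measurement cost
  w <- sd_h / sqrt(cost_h)
  pmin(1, n_target * w / sum(N_h * w))   # expected phase-two size = n_target
}

# Strata with larger conditional SD of Y are sampled harder:
phase2_probs(N_h = c(200, 200, 100), sd_h = c(1, 2, 4),
             cost_h = c(1, 1, 1), n_target = 150)   # 0.15, 0.30, 0.60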

16.
The sequential parallel comparison design (SPCD) has been proposed to increase the likelihood of success of clinical trials, especially trials with a possibly high placebo response. The SPCD is conducted in two stages. Participants are randomized between active therapy and placebo in stage 1; then, stage-1 placebo non-responders are re-randomized between active therapy and placebo. Data from the two stages are pooled to yield a single p-value. We consider the SPCD with binary and with time-to-event outcomes. For time-to-event outcomes, response is defined as a favorable event prior to the end of follow-up for a given stage of the SPCD. We show that in these cases the usual test statistics from stages 1 and 2 are asymptotically normal and uncorrelated under the null hypothesis, leading to a straightforward combined testing procedure. In addition, we show that the estimators of the treatment effects from the two stages are asymptotically normal and uncorrelated under both the null and the alternative hypothesis, yielding confidence interval procedures with correct coverage. Simulations and a real data analysis demonstrate the utility of the binary and time-to-event SPCD.

17.
The objective of this paper is to develop statistical methodology for non-inferiority hypotheses with censored, exponentially distributed time-to-event endpoints. Motivated by a recent clinical trial in depression, we consider a gold standard design in which a test group is compared with an active reference and with a placebo group. The test problem is formulated in terms of a retention-of-effect hypothesis. Thus, the proposed Wald-type test procedure ensures that the effect of the test group is better than a pre-specified proportion Delta of the treatment effect of the reference group relative to the placebo group. A sample size allocation rule to achieve optimal power is presented, which depends only on the pre-specified Delta and the probabilities of censoring. In addition, a pre-test is presented for either the reference or the test group to ensure assay sensitivity in the complete test procedure. The actual type I error and the sample size formula of the proposed tests are explored asymptotically and by means of a simulation study, showing good small-sample characteristics. To illustrate the procedure, a randomized, double-blind clinical trial in depression is evaluated. An R package implementing the proposed tests and the sample size determination accompanies this paper on the author's web page.
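The structure of such a Wald-type statistic can be sketched on the log-hazard scale, where for censored exponential data the hazard MLE is events divided by total follow-up time and Var(log lambda-hat) is approximately 1/events. The contrast below encodes retention of at least a fraction Delta of the reference-vs-placebo effect; the paper's exact parametrization and its assay-sensitivity pre-test are not reproduced.

ret_exp <- function(d, time, Delta = 0.5, alpha = 0.025) {
  # d, time: events and total follow-up per arm, ordered (test, reference, placebo)
  loglam <- log(d / time)
  cc     <- c(1, -Delta, -(1 - Delta))
  z      <- sum(cc * loglam) / sqrt(sum(cc^2 / d))
  list(z = z, reject = z < qnorm(alpha))  # smaller hazards favor the test drug
}

ret_exp(d = c(40, 38, 45), time = c(400, 390, 180))  # rejects: z about -2.3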

18.
A multiple-objective allocation strategy was recently proposed for constructing response-adaptive repeated measurement designs for continuous responses. We extend this allocation strategy to response-adaptive repeated measurement designs for binary responses. The approach with binary responses is quite different from the continuous case, as the information matrix is a function of the responses and the modeling is nonlinear. To deal with these problems, we first build the design on the basis of the success probabilities. We then illustrate how various models can accommodate carryover effects on the basis of logits of response profiles as well as any correlation structure. Through computer simulations, we find that the allocation strategy developed for continuous responses also works well for binary responses. As expected, design efficiency in terms of mean squared error drops sharply as more emphasis is placed on increasing treatment benefit than on estimation precision. However, we find that the strategy can successfully allocate more patients to better treatment sequences without sacrificing much estimation precision. Copyright © 2013 John Wiley & Sons, Ltd.

19.
Adaptive designs encompass all trials allowing various types of design modification over the course of the trial. A key requirement for confirmatory adaptive designs to be accepted by regulators is strong control of the family-wise error rate. This can be achieved by combining the p-values for each arm and stage to account for adaptations (including but not limited to treatment selection), sample size adaptation, and multiple stages. While the theory for this is well established, in practice these methods can perform poorly, especially for unbalanced designs and for small to moderate sample sizes. The problem is that the standard stagewise tests have an inflated type I error rate, especially (but not only) when the baseline success rate is close to the boundary, and this inflation carries over to the adaptive tests, seriously inflating the family-wise error rate. We propose to fix this problem by feeding the adaptive test with second-order accurate p-values, in particular bootstrap p-values. Secondly, an adjusted version of the Simes procedure for testing intersection hypotheses that reduces the built-in conservatism is suggested. Numerical work and simulations show that, unlike their standard counterparts, the new approaches preserve the overall error rate at or below the nominal level across the board, irrespective of the baseline rate, the stagewise sample sizes, or the allocation ratio. Copyright © 2017 John Wiley & Sons, Ltd.
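Two of the moving parts can be sketched generically: the inverse-normal combination of stagewise p-values that underlies such adaptive tests, and a simple parametric-bootstrap p-value for a stagewise comparison of two binomial arms as a stand-in for the second-order accurate p-values the paper advocates (the paper's own bootstrap construction and adjusted Simes procedure are not reproduced).

inv_normal <- function(p1, p2, w1 = sqrt(0.5)) {
  # pre-specified weights with w1^2 + w2^2 = 1
  1 - pnorm(w1 * qnorm(1 - p1) + sqrt(1 - w1^2) * qnorm(1 - p2))
}

boot_p <- function(x1, n1, x2, n2, B = 4000) {
  # one-sided parametric bootstrap p-value for H0: p1 = p2 (pooled rate)
  p0  <- (x1 + x2) / (n1 + n2)
  obs <- x1 / n1 - x2 / n2
  sim <- rbinom(B, n1, p0) / n1 - rbinom(B, n2, p0) / n2
  (1 + sum(sim >= obs)) / (B + 1)
}

set.seed(3)
inv_normal(boot_p(14, 40, 7, 38), boot_p(20, 50, 10, 48))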

20.
Genome-wide association studies (GWAS) offer an excellent opportunity to identify the genetic variants underlying complex human diseases. Successful utilization of this approach requires a large sample size to identify single nucleotide polymorphisms (SNPs) with subtle effects. Meta-analysis is a cost-efficient means of achieving a large sample size by combining data from multiple independent GWAS; however, results from studies performed on different populations can vary for many reasons, including differing linkage disequilibrium structures as well as gene-gene and gene-environment interactions. Nevertheless, one would expect the effects of a SNP to be more similar between similar populations than between populations with quite different genetic and environmental backgrounds. Prior information on the populations in a GWAS is often not considered in current meta-analysis methods, rendering such analyses suboptimal for detecting association. This article describes a test that improves meta-analysis by incorporating variable heterogeneity among populations. The proposed method is remarkably simple in computation and hence can be performed rapidly in the GWAS setting. Simulation results demonstrate the validity and higher power of the proposed method over conventional methods in the presence of heterogeneity. As a demonstration, we applied the test to real GWAS data to identify SNPs associated with circulating insulin-like growth factor I concentrations.
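For contrast, the conventional baseline the proposed test improves upon is the fixed-effect inverse-variance meta-analysis, sketched below; the paper's heterogeneity-aware statistic, which exploits prior information on population similarity, is not reproduced here.

meta_fixed <- function(beta, se) {
  # beta, se: per-study SNP effect estimates and standard errors
  w <- 1 / se^2
  b <- sum(w * beta) / sum(w)           # pooled effect
  z <- b * sqrt(sum(w))                 # pooled z-statistic
  c(beta = b, z = z, p = 2 * pnorm(-abs(z)))
}

meta_fixed(beta = c(0.12, 0.08, 0.15), se = c(0.04, 0.05, 0.06))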
