Similar Articles
18 similar articles retrieved.
1.
The topic of applying two-stage designs in the field of bioequivalence studies has recently gained attention in the literature and in regulatory guidelines. While there exists some methodological research on the application of group sequential designs in bioequivalence studies, implementation of adaptive approaches has focused up to now on superiority and non-inferiority trials. In particular, no comparison of the features and performance characteristics of these designs has been performed, and therefore the question of which design to employ in this setting remains open. In this paper, we discuss and compare 'classical' group sequential designs and three types of adaptive designs that offer the option of mid-course sample size recalculation. A comprehensive simulation study demonstrates that group sequential designs can be identified which show power characteristics that are similar to those of the adaptive designs but require a lower average sample size. The methods are illustrated with a real bioequivalence study example. Copyright © 2015 John Wiley & Sons, Ltd.
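As context for the designs compared above, the following is a minimal Monte Carlo sketch (not taken from the paper) of the fixed-sample average-bioequivalence TOST decision rule in a balanced 2×2 crossover, the building block that both the group sequential and the adaptive two-stage designs extend. The GMR, CV, sample sizes and function names are illustrative assumptions.

```python
# A minimal Monte Carlo sketch of the fixed-sample average-bioequivalence
# (TOST) decision rule in a balanced 2x2 crossover. All numbers (CV, GMR, N)
# are illustrative assumptions, not taken from the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def tost_power(n_total, gmr=0.95, cv=0.30, alpha=0.05, n_sim=100_000):
    """Empirical probability of concluding bioequivalence in a 2x2 crossover.

    gmr : true geometric mean ratio test/reference
    cv  : within-subject coefficient of variation
    """
    sigma_w = np.sqrt(np.log(1.0 + cv**2))      # within-subject SD on log scale
    se_true = sigma_w * np.sqrt(2.0 / n_total)  # SE of the log-ratio estimate
    df = n_total - 2
    tcrit = stats.t.ppf(1.0 - alpha, df)

    # simulate the point estimate and the estimated SE
    est = rng.normal(np.log(gmr), se_true, n_sim)
    se_hat = se_true * np.sqrt(rng.chisquare(df, n_sim) / df)

    lower = est - tcrit * se_hat
    upper = est + tcrit * se_hat
    be = (lower > np.log(0.80)) & (upper < np.log(1.25))  # standard ABE limits
    return be.mean()

for n in (24, 36, 48):
    print(f"N = {n:2d}: empirical power ~ {tost_power(n):.3f}")
```

A two-stage or group sequential bioequivalence design would apply this decision rule, with suitably adjusted critical values, at an interim look and, if the result is inconclusive, again after recruiting additional subjects.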

2.
In recent years, as more generic drug products become available, it is a concern not only whether generic drug products that have been approved based on the regulation of average bioequivalence will have the same quality, safety and efficacy as that of the brand-name drug product, but also whether the approved generic drug products can be used interchangeably. In its recent draft guidance, the U.S. Food and Drug Administration (FDA) recommends that individual bioequivalence (IBE) be assessed using the method proposed by Hyslop, Hsuan, and Holder to address drug switchability. The FDA suggests that a 2×4 cross-over design be considered for assessment of IBE, while a 2×3 cross-over design may be used as an alternative design to reduce the length and cost of the study. Little or no information regarding the statistical procedures under 2×3 cross-over designs is discussed in the guidance. In this paper, a detailed statistical procedure for assessment of IBE under 2×3 cross-over designs is derived. The main purpose of this paper, however, is to derive an IBE test under an alternative 2×3 design and show that the resulting IBE test is better than that under a 2×3 cross-over design and is comparable to or even better than that under a 2×4 cross-over design. Our conclusions are supported by theoretical considerations and empirical results. Furthermore, a method of determining the sample sizes required for IBE tests to reach a given level of power is proposed.

3.
This work is motivated by trials in rapidly lethal cancers or cancers for which measuring shrinkage of tumours is infeasible. In either case, traditional phase II designs focussing on tumour response are unsuitable. Usually, tumour response is considered as a substitute for the more relevant but longer-term endpoint of death. In rapidly lethal cancers such as pancreatic cancer, there is no need to use a surrogate, as the definitive endpoint is (sadly) available so soon. In uveal cancer, there is no counterpart to tumour response, and so, mortality is the only realistic response available. Cytostatic cancer treatments do not seek to kill tumours, but to mitigate their effects. Trials of such therapy might also be based on survival times to death or progression, rather than on tumour shrinkage. Phase II oncology trials are often conducted with all study patients receiving the experimental therapy, and this approach is considered here. Simple extensions of one-stage and two-stage designs based on binary responses are presented. Outcomes based on survival past a small number of landmark times are considered: here, the case of three such times is explored in examples. This approach allows exact calculations to be made for both design and analysis purposes. Simulations presented here show that calculations based on normal approximations can lead to loss of power when sample sizes are small. Two-stage versions of the procedure are also suggested. Copyright © 2014 John Wiley & Sons, Ltd.
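The exact calculations mentioned in the abstract can be illustrated with a minimal sketch (not the authors' multi-landmark procedure): treating "alive at a single landmark time" as a binary response, the one-stage design is sized directly from binomial tail probabilities. The rates p0 and p1, the error rates and the search limits below are illustrative assumptions.

```python
# A minimal sketch of an exact single-arm, single-landmark design: reject
# H0: p <= p0 when the number of patients surviving past the landmark is >= r,
# with n and r chosen from exact binomial probabilities.
from scipy.stats import binom

def exact_one_stage(p0, p1, alpha=0.05, beta=0.20, n_max=200):
    """Smallest n (and cutoff r) with exact type I error <= alpha and power >= 1 - beta."""
    for n in range(5, n_max + 1):
        # smallest cutoff r whose exact type I error is <= alpha
        r = next(k for k in range(n + 2) if binom.sf(k - 1, n, p0) <= alpha)
        type1 = binom.sf(r - 1, n, p0)   # P(X >= r | p0)
        power = binom.sf(r - 1, n, p1)   # P(X >= r | p1)
        if power >= 1.0 - beta:
            return n, r, type1, power
    return None

n, r, a, pw = exact_one_stage(p0=0.20, p1=0.40)
print(f"n = {n}, reject if >= {r} landmark survivors "
      f"(exact alpha = {a:.4f}, exact power = {pw:.4f})")
```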

4.
For normally distributed data, determination of the appropriate sample size requires a knowledge of the variance. Because of the uncertainty in the planning phase, two-stage procedures are attractive where the variance is reestimated from a subsample and the sample size is adjusted if necessary. From a regulatory viewpoint, preserving blindness and maintaining the ability to calculate or control the type I error rate are essential. Recently, a number of proposals have been made for sample size adjustment procedures in the t-test situation. Unfortunately, none of these methods satisfy both these requirements. We show through analytical computations that the type I error rate of the t-test is not affected if simple blind variance estimators are used for sample size recalculation. Furthermore, the results for the expected power of the procedures demonstrate that the methods are effective in ensuring the desired power even under initial misspecification of the variance. A method is discussed that can be applied in a more general setting and that assumes analysis with a permutation test. This procedure maintains the significance level for any design situation and arbitrary blind sample size recalculation strategy.
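A minimal sketch of the blinded recalculation idea discussed above, assuming the usual normal-approximation sample size formula and a lumped one-sample variance estimate at the interim (the paper's exact estimators and the permutation-test variant are not reproduced here); all planning values are illustrative.

```python
# Blinded sample size recalculation sketch: at the interim the SD is
# re-estimated from the pooled data without using group labels, and the
# per-group sample size is recomputed. Delta, alpha, power and the interim
# data are illustrative assumptions.
import numpy as np
from scipy import stats

def n_per_group(sigma, delta, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided two-sample comparison."""
    za, zb = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
    return int(np.ceil(2 * (sigma * (za + zb) / delta) ** 2))

rng = np.random.default_rng(7)
delta_planned = 5.0        # clinically relevant difference
sigma_planned = 10.0       # planning-stage guess of the SD

n_first_stage = n_per_group(sigma_planned, delta_planned) // 2   # internal pilot
# blinded interim data: group labels are NOT used below
interim = np.concatenate([
    rng.normal(0.0, 14.0, n_first_stage),           # true SD larger than planned
    rng.normal(delta_planned, 14.0, n_first_stage),
])
sigma_blinded = interim.std(ddof=1)   # lumped one-sample estimate
# (it slightly overestimates sigma because it absorbs the treatment difference)

n_new = n_per_group(sigma_blinded, delta_planned)
print(f"planned n/group = {n_per_group(sigma_planned, delta_planned)}, "
      f"blinded interim SD = {sigma_blinded:.1f}, recalculated n/group = {n_new}")
```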

5.
Graf AC, Bauer P. Statistics in Medicine 2011; 30(14): 1637-1647.
We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can arise when the sample size and the allocation rate to the treatment arms can be modified in an interim analysis. It is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than that derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing the sample size to decrease, allowing only an increase in the sample size in the experimental treatment). The application is discussed for a motivating example.
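A simplified numeric sketch of this kind of worst-case calculation is given below. It keeps 1:1 allocation and lets only the second-stage sample size be chosen adversarially (the paper additionally allows the allocation rate to change), applying the naive fixed-sample z-test to the pooled data as if no adaptation had occurred. All design numbers are illustrative assumptions.

```python
# Worst-case type I error sketch: the adversary observes the first-stage
# standardized statistic Z1 and picks the second-stage per-group size n2 to
# maximize the conditional probability that the naive pooled z-test rejects.
import numpy as np
from scipy import stats

alpha = 0.025
z_alpha = stats.norm.ppf(1 - alpha)
n1 = 50                            # first-stage per-group size
n2_grid = np.arange(1, 501)        # allowed second-stage per-group sizes

def conditional_error(z1):
    """max over n2 of P(naive pooled Z > z_alpha | first-stage Z1 = z1)."""
    arg = (z_alpha * np.sqrt(n1 + n2_grid) - np.sqrt(n1) * z1) / np.sqrt(n2_grid)
    return stats.norm.sf(arg).max()

# integrate the worst-case conditional error over the null distribution of Z1
z1 = np.linspace(-8.0, 8.0, 3201)
dz = z1[1] - z1[0]
worst = np.array([conditional_error(v) for v in z1])
max_type1 = np.sum(stats.norm.pdf(z1) * worst) * dz
print(f"nominal alpha = {alpha}, worst-case type I error ~ {max_type1:.4f}")
```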

6.
We address design of two-stage clinical trials comparing experimental and control patients. Our end point is success or failure, however measured, with null hypothesis that the chance of success in both arms is p0 and alternative that it is p0 among controls and p1 > p0 among experimental patients. Standard rules will have the null hypothesis rejected when the number of successes in the (E)xperimental arm, E, sufficiently exceeds that in the (C)ontrol arm, C. Here, we combine one-sample rejection decision rules with two-sample rules of the form E - C > r to achieve two-sample tests with low sample number and low type I error. We find designs with sample numbers not far from the minimum possible using standard two-sample rules, but with type I error of 5% rather than the 15% or 20% associated with them, and of equal power. This level of type I error is achieved locally, near the stated null, and increases to 15% or 20% when the null is significantly higher than specified. We increase the attractiveness of these designs to patients by using 2:1 randomization. Examples of the application of this new design covering both high and low success rates under the null hypothesis are provided. Copyright © 2017 John Wiley & Sons, Ltd.
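A minimal single-stage sketch of the combined rule is given below: the null hypothesis is rejected only if the experimental successes E reach a one-sample threshold and exceed the control successes C by more than r, with the rejection probability computed by exact binomial enumeration under 2:1 randomization. The thresholds and rates are illustrative, not the optimized designs derived in the paper.

```python
# Exact evaluation of the combined rule: reject iff E >= e_min and E - C > r,
# with E ~ Bin(n_e, p_e) and C ~ Bin(n_c, p_c) independent.
import numpy as np
from scipy.stats import binom

def exact_reject_prob(n_e, n_c, p_e, p_c, e_min, r):
    """P(E >= e_min and E - C > r) by summing over the joint binomial pmf."""
    e = np.arange(n_e + 1)
    c = np.arange(n_c + 1)
    pmf_e = binom.pmf(e, n_e, p_e)
    pmf_c = binom.pmf(c, n_c, p_c)
    reject = (e[:, None] >= e_min) & (e[:, None] - c[None, :] > r)
    return float(pmf_e @ reject.astype(float) @ pmf_c)

n_e, n_c = 40, 20          # 2:1 randomization
p0, p1 = 0.20, 0.40        # null and alternative success probabilities
e_min, r = 12, 4           # illustrative thresholds

type1 = exact_reject_prob(n_e, n_c, p0, p0, e_min, r)
power = exact_reject_prob(n_e, n_c, p1, p0, e_min, r)
print(f"exact type I error = {type1:.4f}, power at p1 = {power:.4f}")
```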

7.
Phase II clinical trials are typically designed as two-stage studies, in order to ensure early termination of the trial if the interim results show that the treatment is ineffective. Most two-stage designs, developed under both a frequentist and a Bayesian framework, select the second stage sample size before observing the first stage data. This may cause some paradoxical situations during the practical conduct of the trial. To avoid these potential problems, we suggest a Bayesian predictive strategy to derive an adaptive two-stage design, where the second stage sample size is not selected in advance, but depends on the first stage result. The criterion we propose is based on a modification of a Bayesian predictive design recently presented in the literature (see Statist. Med. 2008; 27:1199-1224). The distinction between analysis and design priors is essential for the practical implementation of the procedure: some guidelines for choosing these prior distributions are discussed and their impact on the required sample size is examined. Copyright © 2010 John Wiley & Sons, Ltd.
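The following is a generic beta-binomial sketch of the predictive idea (not the paper's exact criterion, which distinguishes analysis and design priors): after stage 1, the second-stage sample size is taken as the smallest n2 for which the predictive probability of a successful final Bayesian analysis exceeds a threshold. All prior parameters, thresholds and data are illustrative assumptions.

```python
# Predictive-probability sketch: choose n2 after seeing the stage-1 data.
from scipy.stats import beta, betabinom

p0 = 0.20            # uninteresting response rate
a0, b0 = 0.5, 0.5    # prior Beta(a0, b0)
eta = 0.95           # final success: P(p > p0 | all data) > eta
gamma = 0.80         # required predictive probability of final success

def predictive_prob_success(x1, n1, n2):
    """Predictive probability, given stage-1 data (x1/n1), that the final
    posterior probability of p > p0 exceeds eta after n2 more patients."""
    post_a, post_b = a0 + x1, b0 + n1 - x1
    total = 0.0
    for x2 in range(n2 + 1):
        # posterior predictive distribution of the stage-2 responder count
        w = betabinom.pmf(x2, n2, post_a, post_b)
        # would the final posterior declare success?
        success = beta.sf(p0, post_a + x2, post_b + n2 - x2) > eta
        total += w * success
    return total

x1, n1 = 7, 20       # illustrative stage-1 result
for n2 in range(5, 81, 5):
    pp = predictive_prob_success(x1, n1, n2)
    print(f"n2 = {n2:2d}: predictive probability = {pp:.3f}")
    if pp >= gamma:
        print(f"-> choose second-stage sample size n2 = {n2}")
        break
```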

8.
The most common primary statistical end point of a phase II clinical trial is the categorization of a patient as either a 'responder' or 'nonresponder'. The primary objective of typical randomized phase II anticancer clinical trials is to evaluate experimental treatments that potentially will increase response rate over a historical baseline and select one to consider for further study. We propose single-stage and two-stage designs for randomized phase II clinical trials, precisely defining various type I error rates and powers to achieve this objective. We develop a program to compute these error rates and powers exactly, and we provide many design examples to satisfy pre-fixed requirements on error rates and powers. Finally, we apply our method to a randomized phase II trial in patients with relapsed non-Hodgkin's disease. Copyright © 2013 John Wiley & Sons, Ltd.

9.
Adaptive designs encompass all trials allowing various types of design modifications over the course of the trial. A key requirement for confirmatory adaptive designs to be accepted by regulators is the strong control of the family-wise error rate. This can be achieved by combining the p-values for each arm and stage to account for adaptations (including but not limited to treatment selection), sample size adaptation and multiple stages. While the theory for this is well established, in practice these methods can perform poorly, especially for unbalanced designs and for small to moderate sample sizes. The problem is that standard stagewise tests have an inflated type I error rate, especially but not only when the baseline success rate is close to the boundary, and this is carried over to the adaptive tests, seriously inflating the family-wise error rate. We propose to fix this problem by feeding the adaptive test with second-order accurate p-values, in particular bootstrap p-values. Secondly, an adjusted version of the Simes procedure for testing intersection hypotheses that reduces the built-in conservatism is suggested. Numerical work and simulations show that, unlike their standard counterparts, the new approach preserves the overall error rate at or below the nominal level across the board, irrespective of the baseline rate, stagewise sample sizes or allocation ratio. Copyright © 2017 John Wiley & Sons, Ltd.
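One simple way to obtain a resampling-based stagewise p-value of the kind advocated above is sketched here for a difference of proportions, alongside the standard normal-approximation p-value; the paper's specific second-order accurate construction and the adjusted Simes step are not reproduced, and the data are illustrative assumptions.

```python
# Stagewise p-value for H0: p_t <= p_c, computed two ways: the usual pooled
# normal approximation and a parametric bootstrap under the null.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def stagewise_pvalues(x_t, n_t, x_c, n_c, n_boot=50_000):
    """One-sided p-values: normal approximation vs. null bootstrap."""
    diff_obs = x_t / n_t - x_c / n_c
    pooled = (x_t + x_c) / (n_t + n_c)

    # standard pooled z-test p-value
    se0 = np.sqrt(pooled * (1 - pooled) * (1 / n_t + 1 / n_c))
    p_normal = stats.norm.sf(diff_obs / se0)

    # parametric bootstrap under the null: both arms share the pooled rate
    diff_boot = (rng.binomial(n_t, pooled, n_boot) / n_t
                 - rng.binomial(n_c, pooled, n_boot) / n_c)
    p_boot = (np.count_nonzero(diff_boot >= diff_obs) + 1) / (n_boot + 1)
    return p_normal, p_boot

# a small, unbalanced stage with a low baseline rate -- the situation in which
# the normal approximation is least trustworthy
p_norm, p_bt = stagewise_pvalues(x_t=4, n_t=12, x_c=1, n_c=24)
print(f"normal-approximation p = {p_norm:.4f}, bootstrap p = {p_bt:.4f}")
# either stagewise p-value would then enter the combination test, e.g.
# inverse-normal: Z = (norm.ppf(1 - p1) + norm.ppf(1 - p2)) / sqrt(2)
```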

10.
While traditional clinical trials seek to determine treatment efficacy within a specified population, they often ignore the role of a patient's treatment preference on his or her treatment response. The two-stage (doubly) randomized preference trial design provides one approach for researchers seeking to disentangle preference effects from treatment effects. Currently, this two-stage design is limited to the design and analysis of continuous outcome variables; in this presentation, we extend this current design to include binary variables. We present test statistics for testing preference, selection, and treatment effects in a two-stage randomized design with a binary outcome measure, with and without stratification. We also derive closed-form sample size formulas to indicate the number of patients needed to detect each effect. A series of simulation studies explore the properties and efficiency of both the unstratified and stratified two-stage randomized trial designs. Finally, we demonstrate the applicability of these methods using an example of a trial of Hepatitis C treatment.

11.
Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

12.
In clinical trials with t-distributed test statistics, the required sample size depends on the unknown variance. Taking estimates from previous studies often leads to a misspecification of the true value of the variance. Hence, re-estimation of the variance based on the collected data and re-calculation of the required sample size is attractive. We present a flexible method for extensions of fixed sample or group-sequential trials with t-distributed test statistics. The method can be applied at any time during the course of the trial and does not require a sample size re-calculation rule to be pre-specified. All available information can be used to determine the new sample size. The advantage of our method when compared with other adaptive methods is maintenance of the efficient t-test design when no extensions are actually made. We show that the type I error rate is preserved.

13.
We discuss sample size determination in group-sequential designs with two endpoints as co-primary. We derive the power and sample size within two decision-making frameworks. One is to claim the test intervention's benefit relative to control when superiority is achieved for the two endpoints at the same interim timepoint of the trial. The other is when superiority is achieved for the two endpoints at any interim timepoint, not necessarily simultaneously. We evaluate the behaviors of sample size and power with varying design elements and provide a real example to illustrate the proposed sample size methods. In addition, we discuss sample size recalculation based on observed data and evaluate the impact on the power and Type I error rate. Copyright © 2014 John Wiley & Sons, Ltd.
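A minimal sketch of the fixed-sample (single-analysis) building block behind these calculations: with two co-primary endpoints the trial succeeds only if both standardized statistics clear the critical value, so power is a bivariate normal orthant probability; the group-sequential version adds the interim looks on top of this. The effect sizes and the correlation below are illustrative assumptions.

```python
# Power and sample size for two co-primary endpoints at a single analysis.
import numpy as np
from scipy import stats

def coprimary_power(n_per_group, delta, rho, alpha=0.025):
    """P(Z1 > c and Z2 > c) for standardized effect sizes delta = (d1, d2)."""
    c = stats.norm.ppf(1 - alpha)
    mu = np.sqrt(n_per_group / 2.0) * np.asarray(delta)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    # upper orthant probability via the cdf of the reflected vector
    return stats.multivariate_normal(mean=-mu, cov=cov).cdf([-c, -c])

delta = (0.30, 0.35)   # standardized effects for the two endpoints
rho = 0.5              # correlation between the endpoints
target = 0.80

n = 10
while coprimary_power(n, delta, rho) < target:
    n += 1
print(f"n per group = {n}, power = {coprimary_power(n, delta, rho):.3f}")
```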

14.
Little is known about the relative performance of competing model-based dose-finding methods for combination phase I trials. In this study, we focused on five model-based dose-finding methods that have been recently developed. We compared the recommendation rates for true maximum-tolerated dose combinations (MTDCs) and over-dose combinations among these methods under 16 scenarios for 3 × 3, 4 × 4, 2 × 4, and 3 × 5 dose combination matrices. We found that performance of the model-based dose-finding methods varied depending on (1) whether the dose combination matrix is square or not; (2) whether the true MTDCs exist within the same group along the diagonals of the dose combination matrix; and (3) the number of true MTDCs. We discuss the details of the operating characteristics and the advantages and disadvantages of the five methods compared. Copyright © 2015 John Wiley & Sons, Ltd.

15.
In non-inferiority trials that employ the synthesis method, several types of dependencies among test statistics occur due to sharing of the same information from the historical trial. The conditions under which the dependencies appear may be divided into three categories. The first case is when a new drug is approved with a single non-inferiority trial. The second case is when a new drug is approved if two independent non-inferiority trials show positive results. The third case is when two new different drugs are approved with the same active control. The problem with the dependencies is that they can make the type I error rate deviate from the nominal level. In order to study such deviations, we introduce the unconditional and conditional across-trial type I error rates when the non-inferiority margin is estimated from the historical trial, and investigate how the dependencies affect the type I error rates. We show that the unconditional across-trial type I error rate increases dramatically, as does the correlation between the two non-inferiority tests, when a new drug is approved based on the positive results of two non-inferiority trials. We conclude that the conditional across-trial type I error rate involves the unknown treatment effect in the historical trial. The formulae of the conditional across-trial type I error rates provide us with a way of investigating the conditional across-trial type I error rates for various assumed values of the treatment effect in the historical trial. Copyright © 2010 John Wiley & Sons, Ltd.

16.
This article considers sample size determination for jointly testing a cause-specific hazard and the all-cause hazard for competing risks data. The cause-specific hazard and the all-cause hazard jointly characterize important study end points such as the disease-specific survival and overall survival, which are commonly used as coprimary end points in clinical trials. Specifically, we derive sample size calculation methods for 2-group comparisons based on an asymptotic chi-square joint test and a maximum joint test of the aforementioned quantities, taking into account censoring due to loss to follow-up as well as staggered entry and administrative censoring. We illustrate the application of the proposed methods using the Die Deutsche Diabetes Dialyse Studie clinical trial. An R package "powerCompRisk" has been developed and made available at the CRAN R library.

17.
The loss of signal associated with categorizing a continuous variable is well known, and previous studies have demonstrated that this can lead to an inflation of Type-I error when the categorized variable is a confounder in a regression analysis estimating the effect of an exposure on an outcome. However, it is not known how the Type-I error may vary under different circumstances, including logistic versus linear regression, different distributions of the confounder, and different categorization methods. Here, we analytically quantified the effect of categorization and then performed a series of 9600 Monte Carlo simulations to estimate the Type-I error inflation associated with categorization of a confounder under different regression scenarios. We show that Type-I error is unacceptably high (>10% in most scenarios and often 100%). The only exception was when the variable categorized was a continuous mixture proxy for a genuinely dichotomous latent variable, where both the continuous proxy and the categorized variable are error-ridden proxies for the dichotomous latent variable. As expected, error inflation was also higher with larger sample size, fewer categories, and stronger associations between the confounder and the exposure or outcome. We provide online tools that can help researchers estimate the potential error inflation and understand how serious a problem this is. Copyright © 2014 John Wiley & Sons, Ltd.
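The mechanism can be reproduced with a small Monte Carlo sketch (far smaller than the 9600-simulation study described above): the exposure has no effect on the outcome, confounding runs entirely through a continuous variable, and adjusting for a median-split version of that variable leaves residual confounding that surfaces as spurious "significant" exposure effects. The sample size, effect sizes and replicate counts are illustrative assumptions.

```python
# Type I error inflation from categorizing a continuous confounder:
# linear-regression case, median-split categorization.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

def ols_pvalue(y, X):
    """Two-sided p-value for the coefficient of the last column of X."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    df = len(y) - X.shape[1]
    sigma2 = resid @ resid / df
    cov = sigma2 * np.linalg.inv(X.T @ X)
    t = coef[-1] / np.sqrt(cov[-1, -1])
    return 2 * stats.t.sf(abs(t), df)

def rejection_rate(n=500, n_sim=2000, categorize=True):
    hits = 0
    for _ in range(n_sim):
        z = rng.normal(size=n)                       # continuous confounder
        x = 0.7 * z + rng.normal(size=n)             # exposure, driven by z
        y = 0.7 * z + rng.normal(size=n)             # outcome, no effect of x
        z_adj = (z > np.median(z)).astype(float) if categorize else z
        X = np.column_stack([np.ones(n), z_adj, x])  # adjust for z, test x
        hits += ols_pvalue(y, X) < 0.05
    return hits / n_sim

print(f"adjusting for continuous confounder  : type I error ~ {rejection_rate(categorize=False):.3f}")
print(f"adjusting for median-split confounder: type I error ~ {rejection_rate(categorize=True):.3f}")
```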

18.
We recently proposed a bias correction approach to evaluate accurate estimation of the odds ratio (OR) of genetic variants associated with a secondary phenotype, in which the secondary phenotype is associated with the primary disease, based on the original case-control data collected for the purpose of studying the primary disease. As reported in this communication, we further investigated the type I error probabilities and powers of the proposed approach, and compared the results to those obtained from logistic regression analysis (with or without adjustment for the primary disease status). We performed a simulation study based on a frequency-matching case-control study with respect to the secondary phenotype of interest. We examined the empirical distribution of the natural logarithm of the corrected OR obtained from the bias correction approach and found it to be normally distributed under the null hypothesis. On the basis of the simulation study results, we found that the logistic regression approaches that adjust or do not adjust for the primary disease status had low power for detecting secondary phenotype associated variants and highly inflated type I error probabilities, whereas our approach was more powerful for identifying the SNP-secondary phenotype associations and had better-controlled type I error probabilities. Genet. Epidemiol. 35:739-743, 2011. © 2011 Wiley Periodicals, Inc.
