Similar Documents
11 similar documents found.
1.
Graf AC, Bauer P. Statistics in Medicine 2011;30(14):1637-1647.
We calculate the maximum type 1 error rate of the pre-planned conventional fixed-sample test for comparing the means of independent normal distributions (with common known variance) that can arise when the sample size and the allocation rate to the treatment arms may be modified at an interim analysis. It is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when a standard control treatment is used for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than that derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing the sample size to decrease, or allowing an increase only in the sample size of the experimental arm). The application is discussed for a motivating example.
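The worst-case calculation can be illustrated numerically. The sketch below is a simplified one-sample analogue, not the paper's two-arm setting with modified allocation: under the null, for each interim z-value an adversary picks the second-stage size m that maximizes the conditional rejection probability of the naive pooled z-test, and the maximized conditional errors are integrated over the interim distribution. All constants (n1, alpha, the grid of m) are illustrative.

```python
# Simplified sketch: worst-case type 1 error of a naive one-sample z-test
# when the second-stage size m is chosen adversarially after seeing the
# interim z-statistic. Known variance 1, one-sided alpha = 0.025 (assumed).
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

alpha, n1 = 0.025, 50          # interim after n1 observations (assumed)
c = norm.ppf(1 - alpha)        # fixed-sample critical value
m_grid = np.arange(1, 2001)    # candidate second-stage sizes

def worst_conditional_error(z1):
    """Max over m of P(reject | interim z1) under H0 for the naive pooled z-test."""
    b = (c * np.sqrt(n1 + m_grid) - z1 * np.sqrt(n1)) / np.sqrt(m_grid)
    return norm.sf(b).max()

inflated, _ = quad(lambda z: worst_conditional_error(z) * norm.pdf(z), -8, 8)
print(f"nominal alpha = {alpha:.4f}, worst-case type 1 error = {inflated:.4f}")
```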

2.
We address the design of two-stage clinical trials comparing experimental and control patients. Our endpoint is success or failure, however measured, with the null hypothesis that the chance of success in both arms is p0 and the alternative that it is p0 among controls and p1 > p0 among experimental patients. Standard rules reject the null hypothesis when the number of successes in the (E)xperimental arm, E, sufficiently exceeds C, that among (C)ontrols. Here, we combine one-sample rejection decision rules with two-sample rules of the form E − C > r to achieve two-sample tests with low sample number and low type I error. We find designs with sample numbers not far from the minimum possible under standard two-sample rules, but with a type I error of 5% rather than the 15% or 20% associated with them, and of equal power. This level of type I error is achieved locally, near the stated null, and increases to 15% or 20% when the true null success rate is substantially higher than specified. We increase the attractiveness of these designs to patients by using 2:1 randomization. Examples of the application of this new design covering both high and low success rates under the null hypothesis are provided. Copyright © 2017 John Wiley & Sons, Ltd.
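As a rough illustration of how such combined rejection rules are evaluated, the sketch below computes the exact type I error of a single-stage version of the rule "reject iff E ≥ e and E − C > r" by binomial enumeration under 2:1 randomization. The constants e and r and the sample sizes are invented for illustration; the paper's actual two-stage rules are not reproduced.

```python
# Hypothetical single-stage illustration of combining a one-sample rule
# (E >= e) with a two-sample rule (E - C > r). 2:1 randomization: nE = 2*nC.
from scipy.stats import binom

p0, nC, nE = 0.20, 20, 40    # null success rate and sample sizes (assumed)
e, r = 13, 6                 # illustrative rejection constants

# Reject iff E >= e and C <= E - r - 1; sum the joint binomial probabilities.
alpha = sum(
    binom.pmf(E, nE, p0) * binom.cdf(E - r - 1, nC, p0)
    for E in range(e, nE + 1)
)
print(f"exact type I error at p0 = {p0}: {alpha:.4f}")
```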

3.
A general method is presented that allows the researcher to change statistical design elements such as the residual sample size during the course of an experiment, to include an interim analysis for early stopping when no formal rule for early stopping was foreseen, to increase or reduce the number of planned interim analyses, to change time points and the type I error spending function for the further design of interim analyses, or to change the test statistic, the outcome measure, etc. At the time of a pre-planned interim analysis for early stopping, or at any time of an interim look without spending part of the type I error level, the method offers the option to completely redesign the remaining part of the trial without affecting the type I error level. The method is described in the usual Brownian motion model and extended to the general context of statistical decision functions. It is based on the conditional rejection probability of a decision variable.
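The core quantity is the conditional rejection probability (CRP). A minimal sketch in the Brownian motion model, assuming a one-sided z-test at level alpha: whatever redesign follows the interim look, the remainder of the trial may spend at most the CRP as its conditional level.

```python
# Minimal sketch of the CRP principle in the Brownian motion model.
# Z(t) = B(t)/sqrt(t) is the standardized statistic at information fraction t.
from scipy.stats import norm

alpha = 0.025                      # one-sided level (assumed)
z_crit = norm.ppf(1 - alpha)

def crp(z1, t1):
    """P(Z(1) > z_crit | Z(t1) = z1) under H0, interim information fraction t1."""
    return norm.sf((z_crit - z1 * t1**0.5) / (1 - t1) ** 0.5)

# Example: interim z = 1.5 at half the planned information.
eps = crp(1.5, 0.5)
print(f"conditional level available for the redesigned remainder: {eps:.4f}")
```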

4.
Between-group comparison based on the restricted mean survival time (RMST) is receiving attention as an alternative to the conventional logrank/hazard-ratio approach for time-to-event outcomes in randomized controlled trials (RCTs). The validity of the commonly used nonparametric inference procedure for RMST is well supported by large-sample theory. However, we sometimes encounter cases with a small sample size in practice, where we cannot rely on large-sample properties. Generally, the permutation approach can be useful for handling these situations in RCTs. However, a numerical issue arises when implementing permutation tests for the difference or ratio of RMST between two groups. In this article, we discuss the numerical issue and consider six permutation methods for comparing survival time distributions between two groups using RMST in the RCT setting. We conducted extensive numerical studies and assessed the type I error rates of these methods. Our numerical studies demonstrated that the inflation of the type I error rate of the asymptotic methods is not negligible when the sample size is small, and that all six permutation methods are workable solutions. Although some permutation methods became slightly conservative, no remarkable inflation of the type I error rate was observed. We recommend using permutation tests instead of the asymptotic tests, especially when the sample size is less than 50 per arm.
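A minimal sketch of the basic approach, not of the six specific variants compared in the paper: the RMST is the area under the Kaplan-Meier curve up to a truncation time tau, and group labels are permuted to obtain a reference distribution for the RMST difference. The data, tau, and the handling of censoring beyond the last observation are all illustrative choices.

```python
# Sketch: permutation test for the RMST difference between two arms.
import numpy as np

rng = np.random.default_rng(1)

def rmst(time, event, tau):
    """Kaplan-Meier based RMST: area under the KM curve up to tau."""
    order = np.argsort(time)
    t, d = time[order], event[order]
    surv, at_risk, area, last_t = 1.0, len(t), 0.0, 0.0
    for ti, di in zip(t, d):
        if ti > tau:
            break
        area += surv * (ti - last_t)   # survival is flat on [last_t, ti)
        if di:
            surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1
        last_t = ti
    return area + surv * (tau - last_t)  # carry the last value out to tau

def perm_test(time, event, group, tau, n_perm=2000):
    def diff(g):
        return rmst(time[g == 1], event[g == 1], tau) - \
               rmst(time[g == 0], event[g == 0], tau)
    obs = diff(group)
    count = sum(abs(diff(rng.permutation(group))) >= abs(obs)
                for _ in range(n_perm))
    return obs, count / n_perm

# Illustrative small two-arm trial (n = 15 per arm), exponential data.
n = 15
time = np.concatenate([rng.exponential(10, n), rng.exponential(14, n)])
cens = rng.exponential(25, 2 * n)
event = (time <= cens).astype(int)
time = np.minimum(time, cens)
group = np.repeat([0, 1], n)
d, p = perm_test(time, event, group, tau=12.0)
print(f"RMST difference = {d:.2f}, permutation p = {p:.3f}")
```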

5.
For normally distributed data, determination of the appropriate sample size requires knowledge of the variance. Because of the uncertainty in the planning phase, two-stage procedures are attractive in which the variance is re-estimated from a subsample and the sample size is adjusted if necessary. From a regulatory viewpoint, preserving blindness and maintaining the ability to calculate or control the type I error rate are essential. Recently, a number of proposals have been made for sample size adjustment procedures in the t-test situation. Unfortunately, none of these methods satisfies both requirements. We show through analytical computations that the type I error rate of the t-test is not affected if simple blind variance estimators are used for sample size recalculation. Furthermore, the results for the expected power of the procedures demonstrate that the methods are effective in ensuring the desired power even under initial misspecification of the variance. A method is discussed that can be applied in a more general setting and that assumes analysis with a permutation test. This procedure maintains the significance level for any design situation and arbitrary blind sample size recalculation strategy.
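A minimal sketch of a blind variance estimator in action: the variance is re-estimated from the pooled interim sample without using treatment labels (the "lumped" one-sample variance), and the per-arm size is recalculated from the standard normal-approximation formula. All parameter values are assumed.

```python
# Sketch: blinded sample size re-estimation for the two-sample comparison.
import numpy as np
from scipy.stats import norm

alpha, power, delta = 0.05, 0.80, 5.0   # two-sided level, target power, effect

def n_per_arm(sd):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(2 * (sd * z / delta) ** 2))

rng = np.random.default_rng(7)
pilot = np.concatenate([rng.normal(0, 12, 20), rng.normal(delta, 12, 20)])
# Blinded ("lumped") estimate: treatment labels are never used. Under the
# alternative it overestimates sigma^2 by about delta^2/4, a conservative bias.
sd_blind = pilot.std(ddof=1)
print(f"blinded SD = {sd_blind:.1f}, recalculated n per arm = {n_per_arm(sd_blind)}")
```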

6.
In a 2×2 crossover trial for establishing average bioequivalence (ABE) of a generic agent and a currently marketed drug, the recommended approach to hypothesis testing is the two one-sided tests (TOST) procedure, which depends, among other things, on the estimated within-subject variability. The power of this procedure, and therefore the sample size required to achieve a minimum power, depends on having a good estimate of this variability. When there is uncertainty, it is advisable to plan the design in two stages, with an interim sample size re-estimation after the first stage, using an interim estimate of the within-subject variability. One method and three variations of doing this were proposed by Potvin et al. Using simulation, the operating characteristics, including the empirical type I error rate, of the four variations (called Methods A, B, C, and D) were assessed by Potvin et al, and Methods B and C were recommended. However, none of these four variations formally controls the type I error rate of falsely claiming ABE, even though the amount of inflation produced by Method C was considered acceptable. A major disadvantage of assessing type I error rate inflation using simulation is that unless all possible scenarios for the intended design and analysis are investigated, it is impossible to be sure that the type I error rate is controlled. Here, we propose an alternative, principled method of sample size re-estimation that is guaranteed to control the type I error rate at any given significance level. This method uses a new version of the inverse-normal combination of p-values test, in conjunction with standard group sequential techniques, that is more robust to large deviations in initial assumptions regarding the variability of the pharmacokinetic endpoints. The sample size re-estimation step is based on significance levels and power requirements that are conditional on the first-stage results. This necessitates a discussion and exploitation of the peculiar properties of the power curve of the TOST procedure. We illustrate our approach with an example based on a real ABE study and compare the operating characteristics of our proposed method with those of Method B of Potvin et al.
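A minimal sketch of the two building blocks: the TOST p-values on the log scale against the conventional 0.80-1.25 ABE limits, and an inverse-normal combination of stage-wise p-values. The estimate, standard error, degrees of freedom, weights, and stage p-values are illustrative; the paper's specific robust version is not reproduced.

```python
# Sketch: TOST p-values and inverse-normal combination of stage-wise p-values.
import numpy as np
from scipy.stats import norm, t as tdist

lo, hi = np.log(0.8), np.log(1.25)        # standard ABE limits on log scale

def tost_p(est, se, df):
    """Two one-sided p-values; ABE is claimed when both fall below the level."""
    p_lo = tdist.sf((est - lo) / se, df)   # H0: ratio <= 0.8
    p_hi = tdist.cdf((est - hi) / se, df)  # H0: ratio >= 1.25
    return p_lo, p_hi

p_lo, p_hi = tost_p(np.log(1.05), 0.08, 22)   # illustrative estimate, SE, df
print(f"TOST p-values: {p_lo:.4f}, {p_hi:.4f}; ABE if both < 0.05")

# Inverse-normal combination across two stages, one side shown (weights assumed).
w1, w2 = np.sqrt(0.5), np.sqrt(0.5)
p1, p2 = 0.04, 0.03                           # illustrative stage-wise p-values
p_comb = norm.sf(w1 * norm.ppf(1 - p1) + w2 * norm.ppf(1 - p2))
print(f"inverse-normal combined p: {p_comb:.4f}")
```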

7.
This paper discusses the benefits and limitations of adaptive sample size re-estimation for phase 3 confirmatory clinical trials. Comparisons are made with more traditional fixed sample and group sequential designs. It is seen that the real benefit of the adaptive approach arises through the ability to invest sample size resources into the trial in stages. The trial starts with a small up-front sample size commitment. Additional sample size resources are committed to the trial only if promising results are obtained at an interim analysis. This strategy is shown through examples of actual trials, one in neurology and one in cardiology, to be more advantageous than the fixed sample or group sequential approaches in certain settings. A major factor that has generated controversy and inhibited more widespread use of these methods has been their reliance on non-standard tests and p-values for preserving the type-1 error. If, however, the sample size is only increased when interim results are promising, one can dispense with these non-standard methods of inference. Therefore, in the spirit of making adaptive increases in trial size more widely appealing and readily implementable we here define those promising circumstances in which a conventional final inference can be performed while preserving the overall type-1 error. Methodological, regulatory and operational issues are examined.
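A minimal sketch of the underlying "promising zone" logic: conditional power under the observed interim trend is computed, and the sample size is increased only when it falls in a promising band. The band boundaries and the size-increase rule below are invented for illustration and are not the paper's calibrated values.

```python
# Sketch: conditional power under the interim trend, and a promising-zone rule.
import numpy as np
from scipy.stats import norm

alpha = 0.025
c = norm.ppf(1 - alpha)

def cond_power(z1, n1, N):
    """CP of the conventional final z-test, drift estimated as z1/sqrt(n1)."""
    theta = z1 / np.sqrt(n1)
    num = c * np.sqrt(N) - z1 * np.sqrt(n1) - theta * (N - n1)
    return norm.sf(num / np.sqrt(N - n1))

n1, N = 100, 200                           # interim and planned sizes (assumed)
for z1 in (0.6, 1.1, 1.6):
    cp = cond_power(z1, n1, N)
    promising = 0.36 <= cp < 0.80          # illustrative promising band
    new_N = 300 if promising else N        # invest more only when promising
    print(f"z1 = {z1}: CP = {cp:.2f}, promising = {promising}, final N = {new_N}")
```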

8.
This paper demonstrates an inflation of the type I error rate that occurs when testing the statistical significance of a continuous risk factor after adjusting for a correlated continuous confounding variable that has been categorized. We used Monte Carlo simulation methods to assess this inflation. We found that the inflation of the type I error rate increases with increasing sample size, as the correlation between the risk factor and the confounding variable increases, and as the number of categories into which the confounder is divided decreases. Even when the confounder was divided into a five-level categorical variable, the inflation of the type I error rate remained high when both the sample size and the correlation between the risk factor and the confounder were high.
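A minimal re-creation of this type of simulation, with assumed parameter values: the outcome Y depends only on the confounder C; the risk factor X is correlated with C but has no true effect, so any rejection of X is a type I error. C is adjusted for only through quartile dummies, leaving residual confounding on X.

```python
# Sketch: type I error inflation from categorizing a continuous confounder.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, rho, n_sim, hits = 500, 0.7, 2000, 0

for _ in range(n_sim):
    c = rng.normal(size=n)
    x = rho * c + np.sqrt(1 - rho**2) * rng.normal(size=n)
    y = c + rng.normal(size=n)                      # X truly has no effect
    q = np.quantile(c, [0.25, 0.5, 0.75])
    bins = np.digitize(c, q)                        # quartile categories 0..3
    dummies = (bins[:, None] == np.arange(1, 4)).astype(float)
    X = sm.add_constant(np.column_stack([x, dummies]))
    hits += sm.OLS(y, X).fit().pvalues[1] < 0.05    # p-value for X
print(f"empirical type I error: {hits / n_sim:.3f} (nominal 0.05)")
```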

9.
In clinical trials with t-distributed test statistics, the required sample size depends on the unknown variance. Taking estimates from previous studies often leads to a misspecification of the true value of the variance. Hence, re-estimation of the variance based on the collected data and re-calculation of the required sample size are attractive. We present a flexible method for extensions of fixed-sample or group-sequential trials with t-distributed test statistics. The method can be applied at any time during the course of the trial and does not require a pre-specified sample size re-calculation rule. All available information can be used to determine the new sample size. The advantage of our method compared with other adaptive methods is that the efficient t-test design is maintained when no extension is actually made. We show that the type I error rate is preserved.
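The paper's extension machinery is not reproduced here; the sketch below shows only the re-calculation step for the two-sample t-test, where the required n and the t-distribution degrees of freedom depend on each other and must be solved jointly by iteration. Input values are illustrative.

```python
# Sketch: iterative per-arm sample size for the two-sample t-test after
# re-estimating the standard deviation from interim data.
import math
from scipy.stats import t as tdist

def n_per_arm(sd, delta, alpha=0.05, power=0.80):
    n = 2
    for _ in range(50):             # df depend on n, so iterate to a fixed point
        df = 2 * (n - 1)
        q = tdist.ppf(1 - alpha / 2, df) + tdist.ppf(power, df)
        n_new = max(math.ceil(2 * (sd * q / delta) ** 2), 2)
        if n_new == n:
            break
        n = n_new
    return n

print(n_per_arm(sd=10.0, delta=5.0))   # interim SD estimate of 10 (assumed)
```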

10.
Results from association studies are traditionally corroborated by replicating the findings in an independent data set. Although replication studies may be comparable for the main trait or phenotype of interest, it is unlikely that secondary phenotypes will be comparable across studies, making replication problematic. Alternatively, there may simply be no replication sample available because of the nature or frequency of the phenotype. In these situations, an approach based on complementary pairs stability selection for genome-wide association studies (ComPaSS-GWAS) is proposed as an ad hoc alternative to replication. In this method, the sample is randomly split into two conditionally independent halves multiple times (resamples), and a GWAS is performed on each half of each resample. Similar in spirit to testing for association with independent discovery and replication samples, a marker is corroborated if its p-value is significant in both halves of the resample. Simulation experiments were performed for both nongenetic and genetic models. The type I error rate and power of ComPaSS-GWAS were determined and compared to the statistical properties of a traditional GWAS. Simulation results show that the type I error rate decreased as the number of resamples increased, with only a small reduction in power, and that these results were comparable with those from a traditional GWAS. Blood levels of pyridoxal 5′-phosphate (vitamin B6) from the Trinity Student Study (TSS) were used to validate this approach. The results from the validation study were compared to, and were consistent with, those obtained from previously published independent replication data and functional studies.
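A minimal sketch of the split-half corroboration idea for one marker and a quantitative trait: repeatedly split the sample in half, test the marker in each half, and count the resamples in which it is significant in both. The data, effect size, significance threshold, and per-half test (a simple correlation test rather than a full GWAS model) are all illustrative.

```python
# Sketch: complementary split-half corroboration for a single marker.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n, n_resample, alpha = 600, 200, 0.01

geno = rng.binomial(2, 0.3, n)               # additive genotype dosage
pheno = 0.15 * geno + rng.normal(size=n)     # a weak true association

both_sig = 0
for _ in range(n_resample):
    idx = rng.permutation(n)
    a, b = idx[: n // 2], idx[n // 2 :]
    _, p_a = pearsonr(geno[a], pheno[a])
    _, p_b = pearsonr(geno[b], pheno[b])
    both_sig += (p_a < alpha) and (p_b < alpha)
print(f"corroborated in {both_sig}/{n_resample} resamples")
```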

11.
We consider a study starting with two treatment groups and a control group, with a planned interim analysis. The inferior treatment group is dropped after the interim analysis, and only the winning treatment and the control continue to the end of the study. This 'Two-Stage Winner Design' is based on the concepts of multiple comparison, adaptive design, and winner selection. In a study with such a design there is less multiplicity, but more adaptability, if the interim selection is performed at an early stage. If the interim selection is performed close to the end of the study, the situation reduces to the conventional multiple comparison setting, where Dunnett's method may be applied. The unconditional distribution of the final test statistic from the 'winner' treatment is no longer normal; its exact distribution is provided in this paper, but numerical integration is needed for its calculation. To avoid complex computations, we propose a normal approximation approach to calculate the type I error, the power, the point estimate, and the confidence intervals. Owing to the well-understood and attractive properties of the normal distribution, the 'Winner Design' can be easily planned and adequately executed, as demonstrated by an example. We also provide a detailed discussion of how the proposed design should be implemented in practice by optimizing the timing of the interim look and the probability of winner selection.
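The non-normality of the winner's final statistic can be seen by simulation. The sketch below checks, under the global null, the type I error of naively applying the fixed-sample critical value to the selected arm; the excess over the nominal level is what the paper's exact and normal-approximation methods are designed to correct. Arm sizes, interim timing, and alpha are assumed.

```python
# Sketch: Monte Carlo type I error of the naive final z-test in a two-stage
# winner design (two treatments vs a shared control, winner chosen at interim).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)
n, t, alpha, n_sim = 100, 0.5, 0.025, 200_000   # per-arm size, interim fraction
n1 = int(t * n)
c = norm.ppf(1 - alpha)

# Stage-wise means per arm under H0 (columns: control, trt1, trt2), unit variance.
m1 = rng.normal(0, 1 / np.sqrt(n1), (n_sim, 3))       # interim means
m2 = rng.normal(0, 1 / np.sqrt(n - n1), (n_sim, 3))   # second-stage means
mfull = (n1 * m1 + (n - n1) * m2) / n                 # overall means

z_int = (m1[:, 1:] - m1[:, :1]) / np.sqrt(2 / n1)     # interim z vs control
winner = z_int.argmax(axis=1) + 1                     # keep the better arm
z_fin = (mfull[np.arange(n_sim), winner] - mfull[:, 0]) / np.sqrt(2 / n)
print(f"naive type I error: {(z_fin > c).mean():.4f} vs nominal {alpha}")
```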
