Similar Articles
20 similar articles found
1.
In randomized clinical trials, it is standard to include baseline variables in the primary analysis as covariates, as recommended by international guidelines. For the study design to be consistent with the analysis, these variables should also be taken into account when calculating the sample size to appropriately power the trial. Because the assumptions made in the sample size calculation are always subject to some degree of uncertainty, a blinded sample size reestimation (BSSR) is recommended to adjust the sample size when necessary. In this article, we introduce a BSSR approach for count data outcomes with baseline covariates. Count outcomes are common in clinical trials; examples include the number of exacerbations in asthma and chronic obstructive pulmonary disease, relapses and scan lesions in multiple sclerosis, and seizures in epilepsy. The introduced methods are based on Wald and likelihood ratio test statistics. The approaches are illustrated by a clinical trial in epilepsy. The proposed BSSR procedures are compared in a Monte Carlo simulation study and shown to yield power values close to the target while not inflating the type I error rate.
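The blinded step can be sketched concretely. As a minimal illustration (not the Wald or likelihood ratio procedures of the paper, and ignoring baseline covariates), the code below assumes Poisson-distributed counts, 1:1 allocation, one unit of follow-up time, and a planning-stage rate ratio that is retained at the interim; the pooled event rate is the only quantity estimated from the blinded data:

```python
from math import ceil, log
from statistics import NormalDist

def poisson_sample_size(lam_c, ratio, alpha=0.05, power=0.9, t=1.0):
    """Per-group n for a Wald test of the log rate ratio with Poisson
    counts observed over follow-up time t."""
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    lam_t = ratio * lam_c
    var = 1 / (t * lam_c) + 1 / (t * lam_t)  # n * Var(estimated log rate ratio)
    return ceil((za + zb) ** 2 * var / log(ratio) ** 2)

def blinded_reestimate(pooled_rate, ratio, **kw):
    """Recover the control rate from the blinded pooled event rate under
    the assumed rate ratio (1:1 allocation), then redo the calculation."""
    lam_c = 2 * pooled_rate / (1 + ratio)
    return poisson_sample_size(lam_c, ratio, **kw)

n_planned = poisson_sample_size(lam_c=2.0, ratio=0.75)
# Blinded interim data suggest a lower overall event rate than assumed,
# so more patients are needed to observe enough events:
n_new = blinded_reestimate(pooled_rate=1.4, ratio=0.75)
```

Because only the pooled rate is used, no treatment-group information is revealed, which is what keeps the type I error rate intact.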

2.
In this article, we study blinded sample size re-estimation in the 'gold standard' design with an internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials, in which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing–Ganju variance estimator results in underpowered trials, as expected, because an overestimation of the variance, and thus of the sample size, is in general required for the re-estimation procedure to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing–Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing–Ganju variance estimator, such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and can therefore be calculated prior to a trial. Moreover, we prove that sample size re-estimation based on the Xing–Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.

3.
The benefit of adjusting the sample size in clinical trials on the basis of treatment effects observed in an interim analysis has been the subject of several recent papers. Different conclusions were drawn about the usefulness of this approach for gaining power or saving sample size, because of differences in trial design and setting. We examined the benefit of sample size adjustment in relation to trial design parameters such as the time of the interim analysis and the choice of stopping criteria. We compared the adaptive weighted inverse normal method with classical group sequential methods for the most common and for optimal stopping criteria in early, half-time and late interim analyses. We found that reacting to interim data can significantly reduce the average sample size in some situations, while classical approaches can outperform adaptive designs under other circumstances. We characterized these situations with respect to the time of the interim analysis and the choice of stopping criteria.
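The adaptive weighted inverse normal method referred to here combines stage-wise z-statistics with weights fixed before the trial, which is what preserves the type I error rate when the second-stage sample size is chosen adaptively. A minimal sketch of the combination rule (the function name, and the convention that the weights are square roots of the planned stage sizes, are our assumptions):

```python
from math import sqrt

def inverse_normal_combination(z1, z2, w1, w2):
    """Combine independent stage-wise z-statistics with pre-fixed weights.
    Under H0 the result is standard normal even if the second-stage sample
    size was chosen based on the first-stage data."""
    return (w1 * z1 + w2 * z2) / sqrt(w1 ** 2 + w2 ** 2)

# Weights are conventionally the square roots of the planned stage sizes:
z_final = inverse_normal_combination(z1=1.2, z2=2.1, w1=sqrt(50), w2=sqrt(50))
```

With equal planned stages this reduces to (z1 + z2)/sqrt(2); the key point is that the weights must not be changed after the interim look.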

4.
Counts of events are increasingly common as primary endpoints in randomized clinical trials. With between-patient heterogeneity leading to variances in excess of the mean (overdispersion), statistical models reflecting this heterogeneity by mixtures of Poisson distributions are frequently employed. Sample size calculation in the planning of such trials requires knowledge of the nuisance parameters, that is, the control (or overall) event rate and the overdispersion parameter. Usually there is only little prior knowledge regarding these parameters in the design phase, resulting in considerable uncertainty about the sample size. In this situation, internal pilot studies have been found very useful, and recently several blinded procedures for sample size re-estimation have been proposed for overdispersed count data, one of which is based on an EM algorithm. In this paper, we investigate the EM-algorithm-based procedure with respect to aspects of its implementation by studying the algorithm's dependence on the choice of convergence criterion, and find that the procedure is sensitive to the choice of the stopping criterion in scenarios relevant to clinical practice. We also compare the EM-based procedure to competing procedures regarding operating characteristics such as the sample size distribution and power. Furthermore, the robustness of these procedures to deviations from the model assumptions is explored; we find that some of the procedures are robust to at least moderate deviations. The results are illustrated using data from the US National Heart, Lung and Blood Institute sponsored Asymptomatic Cardiac Ischemia Pilot study. Copyright © 2013 John Wiley & Sons, Ltd.
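To make the sensitivity to the convergence criterion tangible, here is a toy stand-in (our construction, not the paper's procedure, which concerns negative binomial models): an EM algorithm estimating the two component rates of a 50:50 Poisson mixture from pooled, blinded counts. The returned estimates visibly depend on the stopping tolerance `tol`:

```python
from math import exp, factorial

def pois_pmf(k, lam):
    return exp(-lam) * lam ** k / factorial(k)

def blinded_em(counts, tol, max_iter=500):
    """EM estimates of the two rates of a 50:50 Poisson mixture fitted to
    pooled (blinded) counts; tol is the convergence criterion."""
    mean = sum(counts) / len(counts)
    lam = [0.5 * mean, 1.5 * mean]  # crude starting values
    for _ in range(max_iter):
        # E-step: responsibility of the low-rate component for each count
        r = [pois_pmf(k, lam[0]) / (pois_pmf(k, lam[0]) + pois_pmf(k, lam[1]))
             for k in counts]
        # M-step: each rate is a responsibility-weighted mean count
        new = [sum(ri * k for ri, k in zip(r, counts)) / sum(r),
               sum((1 - ri) * k for ri, k in zip(r, counts))
               / sum(1 - ri for ri in r)]
        if max(abs(a - b) for a, b in zip(new, lam)) < tol:
            return sorted(new)
        lam = new
    return sorted(lam)

counts = [0, 1, 1, 2, 0, 1, 4, 5, 3, 6, 4, 5]  # pooled counts, groups unknown
loose = blinded_em(counts, tol=0.5)    # stops after very few iterations
tight = blinded_em(counts, tol=1e-8)   # runs to (near) convergence
```

Running loose versus tight tolerances on the same data yields noticeably different rate estimates, which is exactly the kind of implementation sensitivity the paper investigates.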

5.
Statistical inference based on correlated count measurements is frequently performed in biomedical studies. Most existing sample size calculation methods for count outcomes were developed under the Poisson model. Deviation from the Poisson assumption (equality of mean and variance) has been widely documented in practice, indicating an urgent need for sample size methods with more realistic assumptions to ensure valid experimental design. In this study, we investigate sample size calculation for clinical trials with correlated count measurements based on the negative binomial distribution. This approach is flexible enough to accommodate overdispersion and unequal measurement intervals, as well as arbitrary randomization ratios, missing data patterns, and correlation structures. Importantly, the derived sample size formulas have closed forms both for the comparison of slopes and for the comparison of time-averaged responses, which greatly reduces the burden of implementation in practice. We conducted extensive simulations to demonstrate that the proposed method maintains the nominal levels of power and type I error over a wide range of design configurations. We illustrate the application of this approach using data from a real epilepsy trial.
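For intuition about how overdispersion enters such formulas, here is a sketch of the simplest closed form of this type — a single negative binomial count per patient and a Wald test of the log rate ratio — not the paper's formulas for slopes or time-averaged responses under correlation. Here `k` denotes the dispersion parameter, with k = 0 recovering the Poisson case:

```python
from math import ceil, log
from statistics import NormalDist

def nb_sample_size(lam1, lam2, k, t=1.0, alpha=0.05, power=0.9):
    """Per-group n to detect rate ratio lam2/lam1 with negative binomial
    counts over follow-up time t; k is the common dispersion parameter
    (k = 0 reduces to the Poisson formula)."""
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    var = (1 / (t * lam1) + k) + (1 / (t * lam2) + k)  # n * Var(log rate ratio)
    return ceil((za + zb) ** 2 * var / log(lam2 / lam1) ** 2)
```

The dispersion enters additively in each arm's variance term, so sizing a trial under a Poisson assumption when k > 0 understates the required n.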

6.
For normally distributed data, determination of the appropriate sample size requires knowledge of the variance. Because of the uncertainty in the planning phase, two-stage procedures are attractive, in which the variance is re-estimated from a subsample and the sample size is adjusted if necessary. From a regulatory viewpoint, preserving blindness and maintaining the ability to calculate or control the type I error rate are essential. Recently, a number of proposals have been made for sample size adjustment procedures in the t-test situation; unfortunately, none of these methods satisfy both of these requirements. We show through analytical computations that the type I error rate of the t-test is not affected if simple blind variance estimators are used for sample size recalculation. Furthermore, the results for the expected power of the procedures demonstrate that the methods are effective in ensuring the desired power even under initial misspecification of the variance. A method is discussed that can be applied in a more general setting and that assumes analysis with a permutation test. This procedure maintains the significance level for any design situation and arbitrary blind sample size recalculation strategy.
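The simplest blind variance estimator is the variance of the pooled sample with group labels ignored. A minimal sketch of the resulting recalculation under the normal approximation (function and argument names are ours):

```python
from math import ceil
from statistics import NormalDist

def resize_from_blinded_variance(pooled, delta, alpha=0.05, power=0.9):
    """Recalculate the per-group sample size for a two-sample comparison
    from the blinded one-sample variance (group labels ignored).  When a
    treatment effect exists, this variance also absorbs the between-group
    difference, so it errs on the conservative side."""
    n = len(pooled)
    mean = sum(pooled) / n
    s2 = sum((x - mean) ** 2 for x in pooled) / (n - 1)  # blind variance
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    return ceil(2 * s2 * (za + zb) ** 2 / delta ** 2)
```

Because no unblinding is involved, this recalculation can be done by the trial team itself without a data monitoring committee.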

7.
In some diseases, such as multiple sclerosis, lesion counts obtained from magnetic resonance imaging (MRI) are used as markers of disease progression. This leads to longitudinal, and typically overdispersed, count data outcomes in clinical trials. Models for such data invariably include a number of nuisance parameters, which can be difficult to specify at the planning stage, leading to considerable uncertainty in sample size specification. Consequently, blinded sample size re-estimation procedures are used, allowing for an adjustment of the sample size within an ongoing trial by estimating relevant nuisance parameters at an interim point, without compromising trial integrity. To date, the methods available for re-estimation have required an assumption that the mean count is time-constant within patients. We propose a new modeling approach that maintains the advantages of established procedures but allows for general underlying and treatment-specific time trends in the mean response. A simulation study is conducted to assess the effectiveness of blinded sample size re-estimation methods over fixed designs. Sample sizes attained through blinded sample size re-estimation procedures are shown to maintain the desired study power without inflating the Type I error rate and the procedure is demonstrated on MRI data from a recent study in multiple sclerosis.

8.
Many non-inferiority trials of a test treatment versus an active control may also, if ethical, incorporate a placebo arm. Inclusion of a placebo arm enables a direct assessment of assay sensitivity. It also allows construction of a non-inferiority test that avoids the problematic specification of an absolute non-inferiority margin, and instead evaluates whether the test treatment preserves a pre-specified proportion of the effect of the active control over placebo. We describe a two-stage procedure for sample size recalculation in such a setting that maintains the desired power more closely than a fixed sample approach when the magnitude of the effect of the active control differs from that anticipated. We derive an allocation rule for randomization under which the procedure preserves the type I error rate, and show that this coincides with that previously presented for optimal allocation of the sample size among the three treatment arms.
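The effect-preservation hypothesis described here can be written as H0: mu_T − mu_P ≤ theta · (mu_C − mu_P), where theta is the pre-specified proportion of the active control's effect over placebo that must be retained. A sketch of the corresponding Wald statistic for normal outcomes (names and the known-variance simplification are our assumptions):

```python
from math import sqrt

def retention_z(m_t, m_c, m_p, n_t, n_c, n_p, sigma, theta=0.5):
    """Wald statistic for H0: mu_T - mu_P <= theta * (mu_C - mu_P), i.e.
    the test treatment retains less than a fraction theta of the control
    effect over placebo (larger outcomes better, common known sigma).
    Rejecting H0 at z > z_{1-alpha} establishes non-inferiority without
    specifying an absolute margin."""
    est = m_t - theta * m_c - (1 - theta) * m_p
    se = sigma * sqrt(1 / n_t + theta ** 2 / n_c + (1 - theta) ** 2 / n_p)
    return est / se
```

Note that the variance of the linear combination depends on theta, which is why the optimal allocation of patients across the three arms is itself a function of theta.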

9.
Several adaptive design methods have been proposed to reestimate the sample size using the observed treatment effect after an initial stage of a clinical trial while preserving the overall type I error at the time of the final analysis. One unfortunate property of the algorithms used in some methods is that they can be inverted to reveal the exact treatment effect at the interim analysis. We propose using a step function of the observed treatment difference with an inverted U-shape for sample size reestimation, to reduce the information revealed about the treatment effect. This will be referred to as stepwise two-stage sample size adaptation. The method applies calculation methods used for group sequential designs. We minimize the expected sample size among a class of these designs and compare efficiency with the fully optimized two-stage design, the optimal two-stage group sequential design, and designs based on promising conditional power. The trade-off between efficiency and the improved blinding of the interim treatment effect is discussed. Copyright © 2014 John Wiley & Sons, Ltd.

10.
Recently, Stewart and Ruberg proposed the use of contrast tests for detecting dose-response relationships. They considered in particular bivariate contrasts for healing rates and gave several possibilities for defining adequate sets of coefficients. This paper extends their work in several directions. First, asymptotic power expressions for both single and multiple contrast tests are derived. Secondly, well-known trend tests are rewritten as multiple contrast tests, thus alleviating the inherent problem of choosing adequate contrast coefficients. Thirdly, recent results on the efficient calculation of multivariate normal probabilities supersede the traditional simulation-based methods for the numerical computations. Modifications of the power formulae allow the calculation of sample sizes for given type I and II errors, spontaneous rate, and dose-response shape. Numerical results of a power study for small to moderate sample sizes show that the nominal power is a reasonably good approximation to the actual power. An example from a clinical trial illustrates the practical use of the results.
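As an illustration of the kind of power expression involved (a simplified single-contrast version for a common-variance normal setting, not the paper's formulas for healing rates or for multiple contrasts, which require multivariate normal probabilities):

```python
from math import sqrt
from statistics import NormalDist

def contrast_power(mu, c, n, sigma, alpha=0.05):
    """Asymptotic power of a one-sided single contrast test of
    H0: sum(c_i * mu_i) <= 0, with n subjects per dose group and common
    standard deviation sigma."""
    nd = NormalDist()
    effect = sum(ci * mi for ci, mi in zip(c, mu))
    se = sigma * sqrt(sum(ci ** 2 for ci in c) / n)
    return nd.cdf(effect / se - nd.inv_cdf(1 - alpha))
```

Under H0 the formula returns alpha; inverting it in n for a target power is how sample size calculations of this type proceed.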

11.
Many sample size criteria exist. These include power calculations and methods based on confidence interval widths from a frequentist viewpoint, and Bayesian methods based on credible interval widths or decision theory. Bayesian methods account for the inherent uncertainty of inputs to sample size calculations through the use of prior information rather than the point estimates typically used by frequentist methods. However, the choice of prior density can be problematic, because there will almost always be different appreciations of the past evidence. Such differences can be accommodated a priori by robust methods for Bayesian design, for example, using mixtures or ϵ-contaminated priors, ensuring that the prior class includes divergent opinions. However, one may prefer to report several posterior densities arising from a "community of priors," which cover the range of plausible prior densities, rather than forming a single class of priors. To date, however, there are no corresponding sample size methods that specifically account for a community of prior densities in the sense of ensuring a sample size large enough for the data to sufficiently overwhelm the priors and ensure consensus across widely divergent prior views. In this paper, we develop methods that account for the variability in prior opinions by providing the sample size required to induce posterior agreement to a prespecified degree. Prototypic examples for one- and two-sample binomial outcomes are included. We compare sample sizes from criteria that consider a family of priors to those that would result from previous interval-based Bayesian criteria.
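The idea of a sample size that forces posterior agreement across a community of priors can be sketched deterministically for a binomial outcome: find the smallest n at which the posterior means under all priors lie within a tolerance of each other, assuming the observed proportion equals the anticipated rate (a crude reading of the idea, not the paper's actual criteria):

```python
def posterior_mean(a, b, y, n):
    """Posterior mean of a binomial proportion under a Beta(a, b) prior."""
    return (a + y) / (a + b + n)

def n_for_agreement(priors, p_anticipated, eps):
    """Smallest n at which the posterior means under every prior in the
    'community' lie within eps of each other, assuming the observed
    proportion equals p_anticipated."""
    n = 1
    while True:
        y = p_anticipated * n
        means = [posterior_mean(a, b, y, n) for a, b in priors]
        if max(means) - min(means) < eps:
            return n
        n += 1
```

With the sharply divergent priors Beta(1, 9) and Beta(9, 1) and an anticipated rate of 0.5, agreement to within 0.05 needs n = 151, while agreement to within 0.10 needs only n = 71 — the data must grow until they overwhelm the priors.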

12.
Adaptive sample size designs, including group sequential designs, have been used as alternatives to fixed sample size designs to achieve more robust statistical power and better trial efficiency. This work investigates the efficiency of adaptive sample size designs compared with group sequential designs. We show that, given a group sequential design, a uniformly more efficient adaptive sample size design based on the same maximum sample size and rejection boundary can be constructed. While maintaining stable statistical power at the required level, the expected sample size of the resulting adaptive design is uniformly smaller than that of the group sequential design over a range of the true treatment difference. The finding provides further insight into the efficiency of adaptive sample size designs and challenges the popular belief that group sequential designs are more efficient. Good adaptive performance plus easy implementation and other desirable operational features make adaptive sample size designs attractive and applicable to modern clinical trials.

13.
Blinded sample size re-estimation and information monitoring based on blinded data have been suggested to mitigate risks due to planning uncertainties regarding nuisance parameters. Motivated by a randomized controlled trial in pediatric multiple sclerosis (MS), a continuous monitoring procedure for overdispersed count data was recently proposed. However, this procedure assumed constant event rates, an assumption often not met in practice. Here we extend the procedure to accommodate time trends in the event rates, considering two blinded approaches: (a) the mixture approach, modeling the number of events by a mixture of two negative binomial distributions, and (b) the lumping approach, approximating the marginal distribution of the event counts by a negative binomial distribution. Through simulations, the operating characteristics of the proposed procedures are investigated under decreasing event rates. We find that the type I error rate is not relevantly inflated by either of the monitoring procedures, with the exception of strong time dependencies, where the procedure assuming constant rates exhibits some inflation. Furthermore, the procedure accommodating time trends has generally favorable power properties compared with the procedure based on constant rates, which often stops too late. The proposed method is illustrated by the clinical trial in pediatric MS.

14.
In clinical trials, the study sample size is often chosen to provide specified power at a single value of the treatment difference. When this value is not close to the true treatment difference, the actual power of the trial can deviate from the specified power. To address this issue, we consider obtaining a flexible sample size design that provides sufficient power and stays close to the 'ideal' sample size over possible values of the true treatment difference within an interval. A performance score is proposed to assess the overall performance of these flexible sample size designs. Its application to the determination of the best solution among candidate sample size designs is discussed and illustrated through computer simulations.

15.
16.
Adaptive designs have been proposed for clinical trials in which the nuisance parameters or the alternative of interest are unknown or likely to be misspecified before the trial. Although most previous work on adaptive designs and mid-course sample size re-estimation has focused on two-stage or group-sequential designs in the normal case, we consider here a new approach that involves at most three stages and is developed in the general framework of multiparameter exponential families. This approach not only maintains the prescribed type I error probability but also provides a simple yet asymptotically efficient sequential test. Its finite-sample performance, measured in terms of the expected sample size and power functions, is shown to be comparable to that of the optimal sequential design, determined by dynamic programming, in the simplified normal mean case with known variance and prespecified alternative, and superior to existing two-stage designs and adaptive group-sequential designs when the alternative or nuisance parameters are unknown or misspecified.

17.
In standard clinical trial designs, the required sample size is fixed in the planning stage based on initial parameter assumptions. Intuitively, the correct choice of the sample size is of major importance for the ethical justification of a trial. The required parameter assumptions should be based on previously published results from the literature. In clinical practice, however, historical data often do not exist or show highly variable results. Adaptive group sequential designs allow a sample size recalculation after a planned unblinded interim analysis in order to adjust the sample size during the ongoing trial. So far, there exist no unique standards for assessing the performance of sample size recalculation rules. Commonly reported single performance criteria are the power and the average sample size; the variability of the recalculated sample size and the conditional power distribution are usually ignored. Therefore, the need for an adequate performance score combining these relevant criteria is evident. To judge the performance of an adaptive design, there are two possible perspectives, which might also be combined: either the global performance of the design is addressed, averaging over all possible interim results, or the conditional performance is addressed, focusing on the remaining performance conditional on a specific interim result. In this work, we give a compact overview of sample size recalculation rules and performance measures. Moreover, we propose a new conditional performance score and apply it to various standard recalculation rules by means of Monte Carlo simulations.

18.
Li J, Fine J. Statistics in Medicine 2004;23(16):2537–2550
The design of a study of disease screening tests may be based on hypothesis tests for the sensitivity and specificity of the tests. The case-control study requires knowledge of the disease status of patients at the time of enrollment. This may not be possible in a prospective setting, when the gold standard is obtained subsequent to the initial screening and the number of diseased individuals is random and cannot be fixed by design. Several ad hoc procedures for determining the total sample size are commonly used by practitioners, for example, the prevalence inflation method. The properties of these methods are not well understood. We develop a formal method for sample size and power calculations based on the unconditional power properties of the test statistics. The approach provides novel insights into the behaviour of the commonly used methods. We find that the ad hoc prevalence inflation method may serve as a useful approximation to our rigorous framework for sample size determination in the prospective set-up. The design of a large population-based study of mammography for breast cancer screening illustrates the key issues.
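A plausible reading of the ad hoc prevalence inflation method (our reconstruction, not necessarily the paper's exact definition): divide the required number of diseased subjects by the anticipated prevalence, and check that the non-diseased requirement is also met:

```python
from math import ceil

def prevalence_inflation(n_diseased, n_nondiseased, prevalence):
    """Total prospective-study sample size by the ad hoc prevalence
    inflation rule: inflate the diseased-subject requirement by the
    anticipated prevalence, then check the non-diseased requirement."""
    return max(ceil(n_diseased / prevalence),
               ceil(n_nondiseased / (1 - prevalence)))
```

Because the actual number of diseased subjects is binomial rather than fixed, this rule only hits the target on average, which is why the paper studies unconditional power instead.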

19.
This paper discusses the benefits and limitations of adaptive sample size re-estimation for phase 3 confirmatory clinical trials. Comparisons are made with more traditional fixed sample and group sequential designs. The real benefit of the adaptive approach arises through the ability to invest sample size resources into the trial in stages. The trial starts with a small up-front sample size commitment; additional sample size resources are committed to the trial only if promising results are obtained at an interim analysis. This strategy is shown through examples of actual trials, one in neurology and one in cardiology, to be more advantageous than the fixed sample or group sequential approaches in certain settings. A major factor that has generated controversy and inhibited more widespread use of these methods has been their reliance on non-standard tests and p-values for preserving the type-1 error. If, however, the sample size is only increased when interim results are promising, one can dispense with these non-standard methods of inference. Therefore, in the spirit of making adaptive increases in trial size more widely appealing and readily implementable, we here define those promising circumstances in which a conventional final inference can be performed while preserving the overall type-1 error. Methodological, regulatory and operational issues are examined.
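The staged-investment logic described here — increase the sample size only when interim results are promising — can be sketched via conditional power under the current trend. The lower threshold of 0.30, the one-sided alpha, and all names are illustrative assumptions; real designs also cap the increase and work with per-group sizes:

```python
from math import sqrt
from statistics import NormalDist

def promising_zone_n(z1, n1, n_planned, n_max, alpha=0.025, target=0.9):
    """Keep the planned total n unless the interim result is 'promising':
    conditional power under the current trend at least 0.30 but below the
    target.  Only then is n increased (up to n_max) to try to restore the
    target conditional power."""
    nd = NormalDist()
    za = nd.inv_cdf(1 - alpha)
    theta = z1 / sqrt(n1)  # estimated standardized effect per subject

    def cp(n):  # conditional power at total sample size n (requires n > n1)
        b = (za * sqrt(n) - z1 * sqrt(n1)) / sqrt(n - n1)
        return 1 - nd.cdf(b - theta * sqrt(n - n1))

    if 0.30 <= cp(n_planned) < target:  # the promising zone
        n = n_planned
        while n < n_max and cp(n) < target:
            n += 1
        return n
    return n_planned
```

Outside the promising zone the planned size is kept, and it is precisely this restriction that allows a conventional final test without special combination methods.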

20.
This paper presents a simple Bayesian approach to sample size determination in clinical trials. It is required that the trial should be large enough to ensure that the data collected will provide convincing evidence either that an experimental treatment is better than a control or that it fails to improve upon the control by some clinically relevant difference. The method resembles standard frequentist formulations of the problem, and indeed in certain circumstances involving 'non-informative' prior information it leads to identical answers. In particular, unlike many Bayesian approaches to sample size determination, use is made of an alternative hypothesis that the experimental treatment is better than the control treatment by some specified magnitude. The approach is introduced in the context of testing whether a single stream of binary observations is consistent with a given success rate p0. Next, the case of comparing two independent streams of normally distributed responses is considered, first under the assumption that their common variance is known and then for unknown variance. Finally, the more general situation in which a large sample is to be collected and analysed according to the asymptotic properties of the score statistic is explored.
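The single-stream binary case can be sketched with a conjugate Beta prior and a normal approximation to the posterior (the function names, the approximation, the default flat prior, and the 0.95 evidence threshold are our choices, not the paper's exact formulation): the trial is sized so that the data can convincingly show either superiority over p0 or failure to improve on p0 by the relevant difference delta:

```python
from math import sqrt
from statistics import NormalDist

def prob_better(y, n, p0, a=1, b=1):
    """Approximate posterior probability that the success rate exceeds p0:
    Beta(a, b) prior, normal approximation to the Beta(a+y, b+n-y)
    posterior."""
    pa, pb = a + y, b + n - y
    m = pa / (pa + pb)
    sd = sqrt(pa * pb / ((pa + pb) ** 2 * (pa + pb + 1)))
    return 1 - NormalDist(m, sd).cdf(p0)

def smallest_conclusive_n(p_obs, p0, delta, threshold=0.95):
    """Smallest n at which data with observed rate p_obs are convincing
    either that p > p0 (superiority) or that p < p0 + delta (failure to
    improve by the clinically relevant difference)."""
    n = 10
    while True:
        y = round(p_obs * n)
        if (prob_better(y, n, p0) > threshold
                or 1 - prob_better(y, n, p0 + delta) > threshold):
            return n
        n += 1
```

The two-sided stopping requirement mirrors the abstract: the trial must be able to end with a convincing conclusion in either direction, not merely to reject a null hypothesis.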


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)