Similar articles
20 similar articles found.
1.
Sample size estimation in clinical trials depends critically on nuisance parameters, such as variances or overall event rates, which have to be guessed or estimated from previous studies in the planning phase of a trial. Blinded sample size reestimation estimates these nuisance parameters from blinded data of the ongoing trial and allows the sample size to be adjusted based on the acquired information. In the present paper, this methodology is developed for clinical trials with count data as the primary endpoint. In multiple sclerosis, such endpoints are commonly used in phase 2 trials (lesion counts in magnetic resonance imaging (MRI)) and phase 3 trials (relapse counts). Sample size adjustment formulas are presented both for Poisson-distributed data and for overdispersed Poisson-distributed data. The latter arise from the sometimes considerable between-patient heterogeneity observed in particular in MRI lesion counts. The operating characteristics of the procedure are evaluated by simulations, and recommendations on how to choose the size of the internal pilot study are given. The results suggest that blinded sample size reestimation for count data maintains the required power without inflating the type I error. Copyright © 2010 John Wiley & Sons, Ltd.
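As a rough illustration of the kind of calculation involved (a sketch, not the paper's exact formulas), the following computes a Wald-type per-group sample size for comparing two Poisson rates, and a blinded update that backs out the control rate from the pooled blinded event rate under the planning rate ratio with 1:1 allocation. All function names are illustrative.

```python
from math import log
from statistics import NormalDist

def poisson_ss_per_group(lam0, theta, t=1.0, alpha=0.05, power=0.9):
    """Per-group sample size for a Wald test of the log rate ratio
    theta = lam1/lam0, with follow-up time t per patient (1:1 allocation)."""
    lam1 = theta * lam0
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    # n * Var(log rate-ratio) is approximately 1/(t*lam0) + 1/(t*lam1)
    return z ** 2 * (1 / lam0 + 1 / lam1) / (t * log(theta) ** 2)

def blinded_reestimate(total_events, total_time, theta, alpha=0.05, power=0.9):
    """Blinded update: the pooled (blinded) event rate identifies the
    control rate once the planning rate ratio theta is assumed."""
    lam_bar = total_events / total_time      # overall rate, blind intact
    lam0 = 2 * lam_bar / (1 + theta)         # implied control-group rate
    return poisson_ss_per_group(lam0, theta, alpha=alpha, power=power)
```

Because only the pooled rate enters the update, the treatment code never needs to be broken.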

2.
Blinded sample size re-estimation and information monitoring based on blinded data have been suggested to mitigate risks due to planning uncertainties regarding nuisance parameters. Motivated by a randomized controlled trial in pediatric multiple sclerosis (MS), a continuous monitoring procedure for overdispersed count data was proposed recently. However, this procedure assumed constant event rates, an assumption often not met in practice. Here we extend the procedure to accommodate time trends in the event rates, considering two blinded approaches: (a) the mixture approach, modeling the number of events by a mixture of two negative binomial distributions, and (b) the lumping approach, approximating the marginal distribution of the event counts by a negative binomial distribution. Through simulations, the operating characteristics of the proposed procedures are investigated under decreasing event rates. We find that the type I error rate is not relevantly inflated by either of the monitoring procedures, except under strong time dependencies, where the procedure assuming constant rates exhibits some inflation. Furthermore, the procedure accommodating time trends has generally favorable power properties compared with the procedure based on constant rates, which often stops too late. The proposed method is illustrated by the clinical trial in pediatric MS.

3.
Mehta CR, Patel NR. Statistics in Medicine 2006;25(19):3250-3269 (discussion 3297-3301, 3302-3304, 3313-3314, 3326-3347)
This paper presents two adaptive methods for sample size re-estimation within a unified group sequential framework. The conceptual and practical distinction between these adaptive modifications and more traditional sample size changes due to revised estimates of nuisance parameters is highlighted. The motivation for the adaptive designs is discussed. Having established that adaptive sample size modifications can be made without inflating the type I error, the paper concludes with a novel decision-theoretic approach for determining the magnitude of the sample size modification.

4.
In randomized clinical trials, it is standard to include baseline variables as covariates in the primary analysis, as recommended by international guidelines. For the study design to be consistent with the analysis, these variables should also be taken into account when calculating the sample size needed to appropriately power the trial. Because the assumptions made in the sample size calculation are always subject to some degree of uncertainty, a blinded sample size reestimation (BSSR) is recommended to adjust the sample size when necessary. In this article, we introduce a BSSR approach for count data outcomes with baseline covariates. Count outcomes are common in clinical trials; examples include the number of exacerbations in asthma and chronic obstructive pulmonary disease, relapses and scan lesions in multiple sclerosis, and seizures in epilepsy. The introduced methods are based on Wald and likelihood ratio test statistics. The approaches are illustrated by a clinical trial in epilepsy. The proposed BSSR procedures are compared in a Monte Carlo simulation study and shown to yield power values close to the target while not inflating the type I error rate.

5.
Gould AL. Statistics in Medicine 2001;20(17-18):2625-2643
Interim findings of a clinical trial often will be useful for increasing the sample size if necessary to provide the required power against the null hypothesis when the alternative hypothesis is true. Strategies for carrying out the interim examination that have been described over the past several years include "internal pilot studies", blinded interim sample size adjustment, and conditional power. Simulation studies show that the alternative methods generally control the type I error rate satisfactorily, although the power properties are more variable. The important issues associated with sample size re-estimation are strategic, not numeric. Clearly expressed regulatory preferences suggest that methods not requiring unblinding of the data before completion of the trial would be most appropriate. Extending a trial has its risks. The investigators/patients enrolled later in the course of a trial are not necessarily the same as those recruited/entered early. Re-activating the enrollment process may be sufficiently complicated and expensive to justify enrolling more investigators/patients at the outset. Since sample size re-estimation adjusts the sample size on the basis of variability, while an interim efficacy analysis adjusts it on the basis of the estimated effect size, both principles can be used in the same trial. Sample size re-estimation may not be advisable for trials involving extended follow-up of individual patients or, more generally, when the follow-up time is long relative to the recruitment time. In such cases, it may be better to estimate the sample size conservatively and introduce an interim efficacy evaluation.

6.
We extend a method we had previously described (Statist. Med. 2005) for estimating the within-group variance of a continuous endpoint without breaking the blind in a randomized clinical trial. Specifically, we: (a) explain how the method may be used for a wider set of designs than we had previously indicated; (b) obtain a within-group, covariate-adjusted, blinded variance estimator; (c) illustrate use of the method for sample size re-estimation; and (d) describe a procedure to determine whether the blinded variance estimator works well not just on average but for the data set at hand. The proposed method is simple to use and makes no assumptions beyond those made for an unblinded analysis. Simulations show that for realistic sample sizes there is virtually no inflation of the type I error rate. When weighing the burden imposed by unblinded interim re-estimation against the loss in precision with blinded re-estimation, it may be advantageous for some trials to use the blinded method.
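A standard building block behind such blinded procedures (a minimal sketch, not this paper's covariate-adjusted estimator) is the adjusted one-sample variance: under 1:1 allocation the variance of the pooled data overestimates the within-group variance by roughly delta^2/4, where delta is the assumed treatment effect, so it can be corrected without unblinding. Function names are illustrative.

```python
from statistics import NormalDist, variance

def blinded_within_group_var(pooled_values, assumed_delta):
    """Adjusted one-sample variance: under 1:1 allocation the lumped
    variance overestimates the within-group variance by about delta^2/4,
    so subtracting the assumed effect recovers a blinded estimate."""
    return variance(pooled_values) - assumed_delta ** 2 / 4

def reestimated_n_per_group(sigma2, delta, alpha=0.05, power=0.9):
    """Standard two-sample normal sample size, fed with the blinded variance."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return 2 * sigma2 * z ** 2 / delta ** 2
```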

7.
Adaptive designs have been proposed for clinical trials in which the nuisance parameters or the alternative of interest are unknown or likely to be misspecified before the trial. Although most previous work on adaptive designs and mid-course sample size re-estimation has focused on two-stage or group-sequential designs in the normal case, we consider here a new approach that involves at most three stages and is developed in the general framework of multiparameter exponential families. This approach not only maintains the prescribed type I error probability but also provides a simple and asymptotically efficient sequential test. Its finite-sample performance, measured in terms of the expected sample size and power functions, is shown to be comparable to that of the optimal sequential design, determined by dynamic programming, in the simplified case of a normal mean with known variance and prespecified alternative, and superior to existing two-stage designs and to adaptive group-sequential designs when the alternative or nuisance parameters are unknown or misspecified.

8.
Counts of events are increasingly common as primary endpoints in randomized clinical trials. With between-patient heterogeneity leading to variances in excess of the mean (referred to as overdispersion), statistical models reflecting this heterogeneity by mixtures of Poisson distributions are frequently employed. Sample size calculation in the planning of such trials requires knowledge of the nuisance parameters, that is, the control (or overall) event rate and the overdispersion parameter. Usually there is only little prior knowledge regarding these parameters in the design phase, resulting in considerable uncertainty about the sample size. In this situation internal pilot studies have been found very useful, and recently several blinded procedures for sample size re-estimation have been proposed for overdispersed count data, one of which is based on an EM algorithm. In this paper we investigate the EM-algorithm-based procedure with respect to its implementation by studying the algorithm's dependence on the choice of convergence criterion, and find that the procedure is sensitive to the choice of stopping criterion in scenarios relevant to clinical practice. We also compare the EM-based procedure with competing procedures regarding operating characteristics such as the sample size distribution and power. Furthermore, the robustness of these procedures to deviations from the model assumptions is explored. We find that some of the procedures are robust to at least moderate deviations. The results are illustrated using data from the US National Heart, Lung and Blood Institute sponsored Asymptomatic Cardiac Ischemia Pilot study. Copyright © 2013 John Wiley & Sons, Ltd.
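To illustrate the kind of algorithm involved, here is a simplified EM fit of a 50:50 two-component Poisson mixture to blinded counts. The procedures above use negative binomial components with an overdispersion parameter, which is omitted here for brevity; the dependence on initialization and on the convergence tolerance discussed in the text applies equally to this sketch.

```python
import math

def em_blinded_poisson_mixture(counts, lam0, lam1, n_iter=200, tol=1e-8):
    """EM fit of a 50:50 two-component Poisson mixture to blinded counts.
    lam0/lam1 are starting values; label switching and sensitivity to
    initialization are inherited from the general EM machinery."""
    def pmf(k, lam):
        return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))
    for _ in range(n_iter):
        # E-step: posterior probability that each count came from component 1
        w = [pmf(k, lam1) / (pmf(k, lam0) + pmf(k, lam1)) for k in counts]
        # M-step: posterior-weighted mean counts update the component rates
        new1 = sum(wi * k for wi, k in zip(w, counts)) / sum(w)
        new0 = sum((1 - wi) * k for wi, k in zip(w, counts)) / sum(1 - wi for wi in w)
        converged = abs(new0 - lam0) + abs(new1 - lam1) < tol
        lam0, lam1 = new0, new1
        if converged:
            break
    return lam0, lam1
```

With well-separated components the two rates are recovered from the blinded mixture alone; with overlapping components the convergence criterion matters, which is the sensitivity the paper investigates.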

9.
Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second‐stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst‐case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well‐established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre‐planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

10.
Results from clinical trials are never interpreted in isolation. Previous studies in a similar setting provide valuable information for designing a new trial. For the analysis, however, the use of trial‐external information is challenging and therefore controversial, although it seems attractive from an ethical or efficiency perspective. Here, we consider the formal use of historical control data on lesion counts in a multiple sclerosis trial. The approach to incorporating historical data is Bayesian, in that historical information is captured in a prior that accounts for between‐trial variability and hence leads to discounting of historical data. We extend the meta‐analytic‐predictive approach, a random‐effects meta‐analysis of historical data combined with the prediction of the parameter in the new trial, from normal to overdispersed count data of individual‐patient or aggregate‐trial format. We discuss the prior derivation for the lesion mean count in the control group of the new trial for two populations. For the general population (without baseline enrichment), with 1936 control patients from nine historical trials, between‐trial variability was moderate to substantial, leading to a prior effective sample size of about 45 control patients. For the more homogenous population (with enrichment), with 412 control patients from five historical trials, the prior effective sample size was approximately 63 patients. Although these numbers are small relative to the historical data, they are fairly typical in settings where between‐trial heterogeneity is moderate. For phase II, reducing the number of control patients by 45 or by 63 may be an attractive option in many multiple sclerosis trials. Copyright © 2013 John Wiley & Sons, Ltd.

11.
Denne JS. Statistics in Medicine 2001;20(17-18):2645-2660
The sample size required to achieve a given power at a prespecified absolute difference in mean response may depend on one or more nuisance parameters, which are usually unknown. Proposed methods for using an internal pilot to recalculate the sample size based on estimates of these parameters have been well studied. Most of these methods ignore the fact that data on the parameter of interest from within this internal pilot will contribute towards the value of the final test statistic. We propose a method which involves recalculating the target sample size by computing the number of further observations required to maintain the probability of rejecting the null hypothesis at the end of the study under the prespecified absolute difference in mean response, conditional on the data observed so far. We do this within the framework of a two-group error-spending sequential test, modified so as to prevent inflation of the type I error rate.

12.
Various methods have been described for re-estimating the final sample size in a clinical trial based on an interim assessment of the treatment effect. Many re-weight the observations after re-sizing so as to control the resulting inflation of the type I error probability α. Lan and Trost (Estimation of parameters and sample size re-estimation. Proceedings of the American Statistical Association Biopharmaceutical Section 1997; 48-51) proposed a simple procedure based on conditional power calculated under the current trend in the data (CPT). The study is terminated for futility if CPT ≤ CL, continued unchanged if CPT ≥ CU, or re-sized by a factor m to yield CPT = CU if CL < CPT < CU, where CL and CU are pre-specified probability levels. The overall level α can be preserved because the reduction due to stopping for futility can balance the inflation due to sample size re-estimation, thus permitting any form of final analysis with no re-weighting. Herein the statistical properties of this approach are described, including an evaluation of the probabilities of stopping for futility or re-sizing, the distribution of the re-sizing factor m, and the unconditional type I and II error probabilities α and β. Since futility stopping does not allow a type I error but commits a type II error, as the probability of stopping for futility increases, α decreases and β increases. An iterative procedure is described for choosing the critical test value and the futility stopping boundary so as to ensure that the specified α and β are obtained. However, inflation of β is controlled by reducing the probability of futility stopping, which in turn dramatically increases the possible re-sizing factor m. The procedure is also generalized to limit the maximum sample size inflation factor, for example at m_max = 4. However, doing so allows a non-trivial fraction of studies to be re-sized at this level while still having low conditional power. These properties also apply to other methods for sample size re-estimation with a provision for stopping for futility. Sample size re-estimation procedures should be used with caution, and the impact on the overall type II error probability should be assessed.
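The conditional power under the current trend has a compact closed form in the B-value formulation: with interim statistic z1 at information fraction t, the estimated drift is z1/sqrt(t), giving CPT = Φ((z1/sqrt(t) − z_crit)/sqrt(1 − t)). The sketch below, with illustrative threshold choices, implements that formula and the three-way CL/CU rule described above.

```python
from math import sqrt
from statistics import NormalDist

def conditional_power_current_trend(z1, t, alpha=0.05):
    """Conditional power at information fraction t, assuming the current
    trend continues (drift estimated as z1/sqrt(t)); B-value formulation."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    return nd.cdf((z1 / sqrt(t) - z_crit) / sqrt(1 - t))

def cpt_decision(z1, t, cl=0.10, cu=0.80, alpha=0.05):
    """Three-way rule of the procedure described above; cl and cu are
    illustrative choices of the pre-specified probability levels."""
    cpt = conditional_power_current_trend(z1, t, alpha)
    if cpt <= cl:
        return "futility"
    if cpt >= cu:
        return "continue"
    return "re-size"
```

Determining the re-sizing factor m that restores CPT = CU requires solving for the new information level, which is omitted here.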

13.
Cluster randomization trials in which families are the unit of allocation are commonly adopted for the evaluation of disease prevention interventions. Sample size estimation for cluster randomization trials depends on parameters that quantify the variability within and between clusters and the variability in cluster size. Accurate advance estimates of these nuisance parameters may be difficult to obtain, and misspecification may lead to an underpowered study. Since families are typically recruited over time, we propose using a portion of the data to estimate the nuisance parameters and to re-estimate the sample size based on those estimates. This extends the standard internal pilot study methods to the setting of cluster randomization trials. The effect of this design on power, significance level, and sample size is analysed via simulation, and the design is shown to provide a flexible and practical approach to cluster randomization trials.
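The nuisance parameters here enter through a design effect. A common Eldridge-type form, sketched below (a standard textbook formula, not necessarily the one used in this paper), inflates the individually randomized sample size by 1 + ((cv² + 1)·m̄ − 1)·ICC, where m̄ is the mean cluster size and cv the coefficient of variation of cluster sizes; re-estimating the ICC and cv at the internal pilot stage updates the required number of clusters.

```python
from statistics import NormalDist

def clusters_per_arm(delta, sigma2, m_bar, icc, cv=0.0, alpha=0.05, power=0.9):
    """Clusters per arm for comparing two means: the individually
    randomized sample size inflated by a design effect that allows
    unequal cluster sizes via their coefficient of variation cv."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2) + nd.inv_cdf(power)
    n_individual = 2 * sigma2 * z ** 2 / delta ** 2      # per arm, no clustering
    deff = 1 + ((cv ** 2 + 1) * m_bar - 1) * icc         # design effect
    return n_individual * deff / m_bar
```

With m_bar = 1 and icc = 0 the formula reduces to the usual individually randomized sample size, which is a useful sanity check.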

14.
In fixed sample size designs, precise knowledge of the magnitude of the outcome variable's variance in the planning phase of a clinical trial is mandatory for adequate sample size determination. Wittes and Brittain introduced the internal pilot study design, which allows recalculation of the sample size during an ongoing trial using the estimated variance obtained from an interim analysis. However, this procedure requires unblinding of the treatment code. Since unblinding of an ongoing trial should be avoided whenever possible, there should be some benefit of this design over blinded sample size recalculation procedures to justify unveiling the treatment code. In this paper, we compare several sample size recalculation procedures with and without unblinding. The simulation results indicate that the procedures behave similarly. In particular, breaking the blind is not required for an efficient sample size adjustment. We also compare these pure sample size adaptation procedures with study designs that additionally allow early stopping. Evaluation of the cumulative distribution function of the resulting sample sizes shows that the option for early stopping may lead to a lower expected sample size but generally to higher variability. The procedures are illustrated by an example of a trial in the treatment of depression.

15.
In this article, we study blinded sample size re-estimation in the 'gold standard' design with an internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active control and a placebo in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials, in which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials; this is expected, because an overestimation of the variance, and thus of the sample size, is in general required for the re-estimation procedure to eventually meet the target power. To overcome this problem, we propose an inflation factor for sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator, such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and can therefore be calculated prior to a trial. Moreover, we prove that sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.

16.
When planning a clinical trial, the sample size calculation is commonly based on an a priori estimate of the variance of the outcome variable. Misspecification of the variance can have a substantial impact on the power of the trial. It is therefore attractive to update the planning assumptions during the ongoing trial using an internal estimate of the variance. For this purpose, an EM-algorithm-based procedure for blinded variance estimation was proposed for normally distributed data. Various simulation studies suggest a number of appealing properties of this procedure. In contrast, we show that (i) the estimates provided by this procedure depend on the initialization, (ii) the stopping rule used is inadequate to guarantee that the algorithm converges to the maximum likelihood estimator, and (iii) the procedure corresponds to the special case of simple randomization, which, however, is rarely applied in clinical trials. Further, we show that maximum likelihood estimation does not lead to reasonable results for blinded sample size re-estimation, owing to bias and high variability. The problem is illustrated by a clinical trial in asthma.

17.
Mixed Poisson models are often used for the design of clinical trials involving recurrent events, since they provide measures of treatment effect based on rate and mean functions and accommodate between-individual heterogeneity in event rates. Planning studies based on these models can be challenging when there is little information available on the population event rates or on the extent of heterogeneity characterized by the variance of individual-specific random effects. We consider methods for adaptive two-stage clinical trial design that enable investigators to revise sample size estimates using data collected during the first phase of the study. We describe blinded procedures in which the group membership and treatment received by each individual are not revealed at the interim analysis stage, and a 'partially blinded' procedure in which group membership is revealed but not the treatment received by the groups. An EM algorithm is proposed for the interim analyses in both cases, and its performance is investigated through simulation. The work is motivated by the design of a study involving patients with immune thrombocytopenic purpura, where the aim is to reduce bleeding episodes, and an illustrative application is given using data from a cardiovascular trial. Copyright © 2009 John Wiley & Sons, Ltd.

18.
This paper describes robust procedures for estimating parameters of a mixed effects linear model as applied to longitudinal data. In addition to fixed regression parameters, the model incorporates random subject effects to accommodate between-subjects variability and autocorrelation for within-subject variability. Robust empirical Bayesian estimation of subject effects is briefly discussed. As an illustration, the procedures are applied to data from a multiple sclerosis clinical trial.

19.
Missing outcome data are a crucial threat to the validity of treatment effect estimates from randomized trials. The outcome distributions of participants with missing and observed data are often different, which increases bias. Causal inference methods may help reduce this bias and improve efficiency by incorporating baseline variables into the analysis. In particular, doubly robust estimators incorporate two nuisance parameters, the outcome regression and the missingness mechanism (i.e., the probability of missingness conditional on treatment assignment and baseline variables), to adjust for differences between the observed and unobserved groups that can be explained by observed covariates. To consistently estimate the treatment effect, one of these nuisance parameters must be consistently estimated. Traditionally, nuisance parameters are estimated using parametric models, which often precludes consistency, particularly in moderate to high dimensions. Recent research on missing data has focused on data-adaptive estimation to help achieve consistency, but the large-sample properties of such methods are poorly understood. In this article, we discuss a doubly robust estimator that is consistent and asymptotically normal under data-adaptive estimation of the nuisance parameters. We provide a formula for an asymptotically exact confidence interval under minimal assumptions. We show that our proposed estimator has smaller finite-sample bias compared with standard doubly robust estimators. We present a simulation study demonstrating the enhanced performance of our estimators in terms of bias, efficiency, and coverage of the confidence intervals. We present the results of an illustrative example: a randomized, double-blind phase 2/3 trial of antiretroviral therapy in HIV-infected persons.
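The classical augmented IPW (AIPW) form that such estimators build on can be sketched in a few lines (this illustrates the standard doubly robust construction, not the paper's improved estimator; the nuisance estimates are taken as given inputs).

```python
def aipw_mean(y, observed, pi_hat, m_hat):
    """Augmented IPW estimate of E[Y] with outcomes missing at random:
    consistent if either the missingness probabilities pi_hat or the
    outcome regression predictions m_hat are correct (double robustness).
    Entries of y with observed == 0 may be arbitrary placeholders."""
    n = len(y)
    total = 0.0
    for yi, ri, pi, mi in zip(y, observed, pi_hat, m_hat):
        # IPW term plus augmentation term that recenters by the outcome model
        total += ri * yi / pi - (ri - pi) / pi * mi
    return total / n
```

If the outcome model m_hat is exact, the augmentation term cancels the inverse weighting error regardless of pi_hat, which is the double robustness the abstract refers to.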

20.
Magnetic resonance imaging (MRI) data are routinely collected at multiple time points during phase 2 clinical trials in multiple sclerosis. However, these data are typically summarized into a single response for each patient before analysis. Models based on these summary statistics do not allow the exploration of the trade-off between numbers of patients and numbers of scans per patient or the development of optimal schedules for MRI scanning. To address these limitations, in this paper, we develop a longitudinal model to describe one MRI outcome: the number of lesions observed on an individual MRI scan. We motivate our choice of a mixed hidden Markov model based both on novel graphical diagnostic methods applied to five real data sets and on conceptual considerations. Using this model, we compare the performance of a number of different tests of treatment effect. These include standard parametric and nonparametric tests, as well as tests based on the new model. We conduct an extensive simulation study using data generated from the longitudinal model to investigate the parameters that affect test performance and to assess size and power. We determine that the parameters of the hidden Markov chain do not substantially affect the performance of the tests. Furthermore, we describe conditions under which likelihood ratio tests based on the longitudinal model appreciably outperform the standard tests based on summary statistics. These results establish that the new model is a valuable practical tool for designing and analyzing multiple sclerosis clinical trials.
