Similar Documents
1.
Mixed Poisson models are often used for the design of clinical trials involving recurrent events since they provide measures of treatment effect based on rate and mean functions and accommodate between-individual heterogeneity in event rates. Planning studies based on these models can be challenging when there is little information available on the population event rates or on the extent of heterogeneity characterized by the variance of individual-specific random effects. We consider methods for adaptive two-stage clinical trial design, which enable investigators to revise sample size estimates using data collected during the first phase of the study. We describe blinded procedures in which the group membership and treatment received by each individual are not revealed at the interim analysis stage, and a 'partially blinded' procedure in which group membership is revealed but not the treatment received by the groups. An EM algorithm is proposed for the interim analyses in both cases, and its performance is investigated through simulation. The work is motivated by the design of a study involving patients with immune thrombocytopenic purpura, where the aim is to reduce bleeding episodes, and an illustrative application is given using data from a cardiovascular trial. Copyright © 2009 John Wiley & Sons, Ltd.

2.
An improved method of sample size calculation for the one-sample log-rank test is provided. The one-sample log-rank test may be the method of choice if the survival curve of a single treatment group is to be compared with that of a historic control. Such settings arise, for example, in clinical phase-II trials if the response to a new treatment is measured by a survival endpoint. Present sample size formulas for the one-sample log-rank test are based on the number of events to be observed; that is, in order to achieve approximately the desired power for the allocated significance level and effect, the trial is stopped as soon as a certain critical number of events is reached. We propose a new stopping criterion to be followed. Both approaches are shown to be asymptotically equivalent. For small sample sizes, though, a simulation study indicates that the new criterion might be preferred when planning a corresponding trial. In our simulations, the trial is usually underpowered and the aspired significance level is not fully exploited if the traditional stopping criterion based on the number of events is used, whereas a trial based on the new stopping criterion maintains power with the type-I error rate still controlled. Copyright © 2014 John Wiley & Sons, Ltd.
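The abstract does not reproduce either stopping criterion; as background, the following is a minimal sketch of the one-sample log-rank statistic that both criteria build on, assuming the historic control is specified through its cumulative hazard function (the exponential hazard in the example is illustrative only).

```python
import numpy as np

def one_sample_logrank(times, events, cum_hazard):
    """One-sample log-rank test of an observed survival sample against a
    historic control with cumulative hazard Lambda0(t).

    times      : follow-up time of each subject (event or censoring time)
    events     : 1 if the subject had an event, 0 if censored
    cum_hazard : callable, cumulative hazard of the historic control
    Returns the standardized statistic Z = (O - E) / sqrt(E).
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events)
    observed = events.sum()                 # O: observed events
    expected = cum_hazard(times).sum()      # E: expected events under H0
    return (observed - expected) / np.sqrt(expected)

# Example: historic control with exponential survival, hazard 0.2 per year
z = one_sample_logrank(
    times=[1.2, 0.8, 2.5, 3.0, 0.4, 1.9],
    events=[1, 0, 1, 1, 0, 1],
    cum_hazard=lambda t: 0.2 * t,
)
print(z)  # one-sided: reject H0 at 5% if z < -1.645 (new treatment better)
```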

3.
In this article, we study blinded sample size re-estimation in the 'gold standard' design with internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials, in which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials; this is expected, because an overestimate of the variance, and thus of the sample size, is in general required for the re-estimation procedure to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator, such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and can therefore be calculated prior to a trial. Moreover, we prove that sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.

4.
Important sources of variation in the spread of HIV in communities arise from overlapping sexual networks and heterogeneity in biological and behavioral risk factors in populations. These sources of variation are not routinely accounted for in the design of HIV prevention trials. In this paper, we use agent-based models to account for these sources of variation. We illustrate the approach with an agent-based model for the spread of HIV infection among men who have sex with men in South Africa. We find that traditional sample size approaches that rely on binomial (or Poisson) models are inadequate and can lead to underpowered studies. We develop sample size and power formulas for community randomized trials that incorporate estimates of variation determined from agent-based models. We conclude that agent-based models offer a useful tool in the design of HIV prevention trials. Copyright © 2014 John Wiley & Sons, Ltd.
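The paper's formulas plug variance estimates from agent-based models into the sample size calculation; the abstract does not give them. As a reference point, here is a minimal sketch of the classic Hayes-Bennett formula for community-randomized trials comparing event rates, where the between-cluster coefficient of variation k plays the role that the agent-based estimates would fill. All inputs are illustrative, and this is not the authors' method.

```python
from scipy.stats import norm

def clusters_per_arm(lam0, lam1, py, k, alpha=0.05, power=0.8):
    """Hayes-Bennett-style number of clusters per arm for a community-
    randomized trial comparing event rates lam0 vs lam1, with py person-years
    of follow-up per cluster and between-cluster coefficient of variation k."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 1 + z**2 * ((lam0 + lam1) / py
                       + k**2 * (lam0**2 + lam1**2)) / (lam0 - lam1)**2

# e.g. 0.05 vs 0.03 events/person-year, 500 person-years per community, k = 0.25
print(clusters_per_arm(0.05, 0.03, 500, 0.25))
```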

5.
Sample size estimation in clinical trials depends critically on nuisance parameters, such as variances or overall event rates, which have to be guessed or estimated from previous studies in the planning phase of a trial. Blinded sample size re-estimation estimates these nuisance parameters based on blinded data from the ongoing trial and allows the sample size to be adjusted based on the acquired information. In the present paper, this methodology is developed for clinical trials with count data as the primary endpoint. In multiple sclerosis, such endpoints are commonly used in phase 2 trials (lesion counts in magnetic resonance imaging (MRI)) and phase 3 trials (relapse counts). Sample size adjustment formulas are presented both for Poisson-distributed data and for overdispersed Poisson-distributed data. The latter arise from sometimes considerable between-patient heterogeneity, which can be observed in particular in MRI lesion counts. The operating characteristics of the procedure are evaluated by simulations, and recommendations on how to choose the size of the internal pilot study are given. The results suggest that blinded sample size re-estimation for count data maintains the required power without an increase in the type I error. Copyright © 2010 John Wiley & Sons, Ltd.
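The paper's adjustment formulas are not given in the abstract. The following is a crude sketch of the general idea for overdispersed counts: a normal-approximation sample size for comparing two rates, inflated by a dispersion factor, with the nuisance parameters estimated from pooled (blinded) pilot counts under an assumed rate ratio. The function names are ours, and the Pearson-type dispersion estimate from blinded data ignores the between-arm adjustment that a careful blinded procedure would make.

```python
import numpy as np
from scipy.stats import norm

def n_per_arm(lam1, lam2, t, phi=1.0, alpha=0.05, power=0.8):
    """Normal-approximation sample size per arm for comparing two Poisson
    rates over follow-up time t, inflated by overdispersion factor phi
    (phi = 1 recovers the pure Poisson case)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return phi * z**2 * (lam1 + lam2) / (t * (lam1 - lam2)**2)

def blinded_nuisance(counts, t, theta):
    """Blinded estimates from pooled pilot counts: overall rate and a
    Pearson-type dispersion factor; arm rates recovered from the overall
    rate under the assumed rate ratio theta (1:1 allocation)."""
    counts = np.asarray(counts, dtype=float)
    lam_bar = counts.mean() / t
    phi = np.sum((counts - counts.mean())**2 / counts.mean()) / (len(counts) - 1)
    lam1 = 2 * lam_bar / (1 + theta)   # control-arm rate under the alternative
    return lam1, theta * lam1, phi

pilot = np.random.default_rng(1).poisson(2.0, 60)   # blinded pilot counts
lam1, lam2, phi = blinded_nuisance(pilot, t=1.0, theta=0.7)
print(np.ceil(n_per_arm(lam1, lam2, t=1.0, phi=phi)))
```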

6.
Cluster randomized designs are frequently employed in pragmatic clinical trials, which test interventions in the full spectrum of everyday clinical settings in order to maximize applicability and generalizability. In this study, we propose to directly incorporate pragmatic features into power analysis for cluster randomized trials with count outcomes. The pragmatic features considered include arbitrary randomization ratio, overdispersion, random variability in cluster size, and unequal lengths of follow-up over which the count outcome is measured. The proposed method is developed based on generalized estimating equations (GEE), and it is advantageous in that the sample size formula retains a closed form, facilitating its implementation in pragmatic trials. We theoretically explore the impact of various pragmatic features on sample size requirements. An efficient jackknife algorithm is presented to address the underestimation of the variance by the GEE sandwich estimator when the number of clusters is small. We assess the performance of the proposed sample size method through extensive simulation, and an application to a real clinical trial is presented.

7.
Conventional phase II trials using binary endpoints as early indicators of a time-to-event outcome are not always feasible. Uveal melanoma has no reliable intermediate marker of efficacy. In pancreatic cancer and viral clearance, the time to the event of interest is short, making an early indicator unnecessary. In the latter application, Weibull models have been used to analyse corresponding time-to-event data. Bayesian sample size calculations are presented for single-arm and randomised phase II trials assuming proportional hazards models for time-to-event endpoints. Special consideration is given to the case where survival times follow the Weibull distribution. The proposed methods are demonstrated through an illustrative trial based on uveal melanoma patient data. A procedure for prior specification based on knowledge or predictions of survival patterns is described. This enables investigation into the choice of allocation ratio in the randomised setting, to assess whether a control arm is indeed required. The Bayesian framework enables sample sizes consistent with those used in practice to be obtained. When a confirmatory phase III trial will follow if suitable evidence of efficacy is identified, Bayesian approaches are less controversial than for definitive trials. In the randomised setting, a compromise for obtaining feasible sample sizes is a loss in certainty in the specified hypotheses: the Bayesian counterpart of power. However, this approach may still be preferable to running a single-arm trial where no data are collected on the control treatment. This dilemma is present in most phase II trials, where resources are not sufficient to conduct a definitive trial. Copyright © 2015 John Wiley & Sons, Ltd.
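The authors' Weibull-specific Bayesian calculation is not reproduced here. As a simple illustration of combining a prior on the treatment effect with a survival sample size rule under proportional hazards, the following sketch pairs Schoenfeld's number-of-events formula with a prior-averaged power ("assurance") computation; the prior mean and standard deviation are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import norm

def schoenfeld_events(log_hr, alpha=0.05, power=0.8, ratio=1.0):
    """Required number of events under proportional hazards (Schoenfeld's
    formula); ratio = experimental:control allocation."""
    p = ratio / (1 + ratio)
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z**2 / (p * (1 - p) * log_hr**2)

rng = np.random.default_rng(7)
d = np.ceil(schoenfeld_events(np.log(0.6)))   # events sized at the prior mean
print(d)                                      # about 121 events

# Assurance: draw the log hazard ratio from the prior and average the power
# attained with d events (normal prior parameters are illustrative only).
log_hr = rng.normal(np.log(0.6), 0.2, size=100_000)
p, z_a = 0.5, norm.ppf(0.975)
print(norm.cdf(np.abs(log_hr) * np.sqrt(p * (1 - p) * d) - z_a).mean())
```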

8.
When planning a clinical trial, the sample size calculation is commonly based on an a priori estimate of the variance of the outcome variable. Misspecification of the variance can have a substantial impact on the power of the trial. It is therefore attractive to update the planning assumptions during the ongoing trial using an internal estimate of the variance. For this purpose, an EM-algorithm-based procedure for blinded variance estimation was proposed for normally distributed data. Various simulation studies suggest a number of appealing properties of this procedure. In contrast, we show that (i) the estimates provided by this procedure depend on the initialization, (ii) the stopping rule used is inadequate to guarantee that the algorithm converges to the maximum likelihood estimate, and (iii) the procedure corresponds to the special case of simple randomization, which, however, is rarely applied in clinical trials. Further, we show that maximum likelihood estimation does not lead to reasonable results for blinded sample size re-estimation, owing to bias and high variability. The problem is illustrated by a clinical trial in asthma.
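The abstract does not reproduce the criticized procedure. The following is a minimal sketch of an EM algorithm of the kind at issue: it fits a 50:50 two-component normal mixture with common variance to blinded data (1:1 allocation assumed), treating the unknown treatment labels as missing data. Running it from two starting values illustrates point (i), the dependence on initialization.

```python
import numpy as np

def blinded_em(x, delta_init, max_iter=500, tol=1e-8):
    """EM for a blinded sample from a 50:50 mixture of N(mu, s2) and
    N(mu + delta, s2) with common variance s2. Returns (mu, delta, s2)."""
    x = np.asarray(x, dtype=float)
    mu, delta, s2 = x.mean(), float(delta_init), x.var()
    for _ in range(max_iter):
        # E-step: posterior probability that each observation is from arm 2
        d1, d2 = (x - mu)**2, (x - mu - delta)**2
        w = 1.0 / (1.0 + np.exp((d2 - d1) / (2 * s2)))
        # M-step: update the parameters given the responsibilities w
        mu_new = np.sum((1 - w) * x) / np.sum(1 - w)
        delta_new = np.sum(w * x) / np.sum(w) - mu_new
        s2_new = np.mean((1 - w) * (x - mu_new)**2
                         + w * (x - mu_new - delta_new)**2)
        if abs(s2_new - s2) < tol:
            break
        mu, delta, s2 = mu_new, delta_new, s2_new
    return mu, delta, s2

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0, 1, 50), rng.normal(0.5, 1, 50)])
print(blinded_em(x, delta_init=0.5))
print(blinded_em(x, delta_init=-0.5))  # different start, possibly different fit
```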

9.
Stratified data analysis is an important research topic in many biomedical studies and clinical trials. In this article, we develop five test statistics for testing the homogeneity of proportion ratios for stratified correlated bilateral binary data under an equal-correlation model assumption. Bootstrap procedures based on these test statistics are also considered. To evaluate the performance of these statistics and procedures, we conduct Monte Carlo simulations to study their empirical sizes and powers under various scenarios. Our results suggest that the procedure based on the score statistic performs well in general and is highly recommended. When the sample size is large, procedures based on the commonly used weighted least squares estimate and on the logarithmic transformation with the Mantel-Haenszel estimate are recommended, as they do not involve computation of maximum likelihood estimates requiring iterative algorithms. We also derive approximate sample size formulas based on the recommended test procedures. Finally, we apply the proposed methods to analyze a multi-center randomized clinical trial for scleroderma patients. Copyright © 2014 John Wiley & Sons, Ltd.

10.
Rich meta-epidemiological data sets have been collected to explore associations between intervention effect estimates and study-level characteristics. Welton et al. proposed models for the analysis of meta-epidemiological data, but these models are restrictive because they force heterogeneity among studies with a particular characteristic to be at least as large as that among studies without the characteristic. In this paper we present alternative models that are invariant to the labels defining the two categories of studies. To exemplify the methods, we use a collection of meta-analyses in which the Cochrane Risk of Bias tool has been implemented. We first investigate the influence of small trial sample sizes (less than 100 participants), before investigating the influence of multiple methodological flaws (inadequate or unclear sequence generation, allocation concealment, and blinding). We fit both the Welton et al. model and our proposed label-invariant model and compare the results. Estimates of mean bias associated with the trial characteristics and of between-trial variances are not very sensitive to the choice of model. Results from fitting a univariable model show that heterogeneity variance is, on average, 88% greater among trials with less than 100 participants. On the basis of a multivariable model, heterogeneity variance is, on average, 25% greater among trials with inadequate/unclear sequence generation, 51% greater among trials with inadequate/unclear blinding, and 23% lower among trials with inadequate/unclear allocation concealment, although the 95% intervals for these ratios are very wide. Our proposed label-invariant models for meta-epidemiological data analysis facilitate investigations of between-study heterogeneity attributable to certain study characteristics.

11.
The authors present an analysis of the choice of sample sizes for demonstrating cost-effectiveness of a new treatment or procedure, when data on both cost and efficacy will be collected in a clinical trial. The Bayesian approach to statistics is employed, as well as a novel Bayesian criterion that provides insight into the sample size problem and offers a very flexible formulation.

12.
Step-up procedures have been shown to be powerful testing methods in clinical trials for comparisons of several treatments with a control. In this paper, a determination of the optimal sample size for a step-up procedure that allows a pre-specified power level to be attained is discussed. Various definitions of power, such as all-pairs power, any-pair power, per-pair power and average power, in one- and two-sided tests are considered. An extensive numerical study confirms that square root allocation of sample size among treatments provides a better approximation of the optimal sample size relative to equal allocation. Based on square root allocation, tables are constructed, and users can conveniently obtain the approximate required sample size for the selected configurations of parameters and power. For clinical studies with difficulties in recruiting patients or when additional subjects lead to a significant increase in cost, a more precise computation of the required sample size is recommended. In such circumstances, our proposed procedure may be adopted to obtain the optimal sample size. It is also found that, contrary to conventional belief, the optimal allocation may considerably reduce the total sample size requirement in certain cases. The determination of the required sample sizes using both allocation rules is illustrated with two examples in clinical studies. Copyright © 2010 John Wiley & Sons, Ltd.
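As a simple illustration of the square-root rule (a control-to-treatment allocation ratio of sqrt(k):1 when k treatments are compared with one control), here is a minimal sketch; the function name and rounding convention are ours, not the paper's.

```python
import math

def sqrt_allocation(total_n, k):
    """Split a total sample size between one control arm and k treatment
    arms using the square-root rule n_control = sqrt(k) * n_treatment."""
    n_trt = math.ceil(total_n / (k + math.sqrt(k)))
    n_ctl = math.ceil(math.sqrt(k) * n_trt)
    return n_ctl, n_trt

print(sqrt_allocation(300, 4))  # 4 treatments vs control: (100, 50)
```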

13.
Phase II clinical trials are typically designed as two-stage studies in order to ensure early termination of the trial if the interim results show that the treatment is ineffective. Most two-stage designs, developed under both frequentist and Bayesian frameworks, select the second-stage sample size before observing the first-stage data. This may cause some paradoxical situations during the practical conduct of the trial. To avoid these potential problems, we suggest a Bayesian predictive strategy to derive an adaptive two-stage design, where the second-stage sample size is not selected in advance but depends on the first-stage result. The criterion we propose is based on a modification of a Bayesian predictive design recently presented in the literature (Statist. Med. 2008; 27:1199-1224). The distinction between analysis and design priors is essential for the practical implementation of the procedure: some guidelines for choosing these prior distributions are discussed, and their impact on the required sample size is examined. Copyright © 2010 John Wiley & Sons, Ltd.
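This is not the authors' criterion (in particular, it does not separate analysis and design priors). It is a minimal sketch of a standard Bayesian predictive-probability calculation for a single-arm binary endpoint, from which a first-stage-dependent second-stage sample size could be chosen; a single Beta prior plays both roles, and all numbers are illustrative.

```python
from scipy.stats import beta, betabinom

def predictive_success(x1, n1, n2, p0, a=1.0, b=1.0, threshold=0.95):
    """Predictive probability that a single-arm binary trial succeeds after a
    second stage of size n2, given x1 responses among n1 stage-one patients.
    Success: posterior Pr(p > p0) > threshold under a Beta(a, b) prior; the
    stage-two responses follow a beta-binomial predictive distribution."""
    prob = 0.0
    for x2 in range(n2 + 1):
        post = beta(a + x1 + x2, b + n1 + n2 - x1 - x2)
        if post.sf(p0) > threshold:                       # trial succeeds
            prob += betabinom.pmf(x2, n2, a + x1, b + n1 - x1)
    return prob

# choose the smallest n2 whose predictive probability of success reaches 80%
for n2 in range(10, 61, 5):
    print(n2, round(predictive_success(x1=9, n1=20, n2=n2, p0=0.3), 3))
```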

14.
Many different methods have been proposed for the analysis of cluster randomized trials (CRTs) over the last 30 years. However, the evaluation of methods for overdispersed count data has been based mostly on the comparison of results using empirical data, i.e. when the true model parameters are not known. In this study, we assess via simulation the performance of five methods for the analysis of counts in situations similar to real community-intervention trials. We used the negative binomial distribution to simulate overdispersed counts of CRTs with two study arms, allowing the period of time under observation to vary among individuals. We assessed different sample sizes, degrees of clustering and degrees of cluster-size imbalance. The compared methods are: (i) the two-sample t-test of cluster-level rates, (ii) generalized estimating equations (GEE) with empirical covariance estimators, (iii) GEE with model-based covariance estimators, (iv) generalized linear mixed models (GLMM) and (v) Bayesian hierarchical models (Bayes-HM). Variation in sample size and clustering led to differences between the methods in terms of coverage, significance, power and random-effects estimation. GLMM and Bayes-HM performed better in general, with Bayes-HM producing less dispersed results for random-effects estimates, although these were upwardly biased when clustering was low. GEE showed higher power but anticonservative coverage and elevated type I error rates. Imbalance affected the overall performance of the cluster-level t-test and the GEE's coverage in small samples. Important effects arising from accounting for overdispersion are illustrated through the analysis of a community-intervention trial on Solar Water Disinfection in rural Bolivia. Copyright © 2009 John Wiley & Sons, Ltd.
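The paper's exact simulation settings are not given in the abstract. The following is a hedged sketch of how such overdispersed CRT counts can be generated: individual-level gamma frailty makes the counts marginally negative binomial, a cluster-level random effect induces clustering, and per-subject person-time varies. All parameter values and the allocation scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_crt_counts(n_clusters, mean_size, rate0, rate_ratio, shape, sigma_b):
    """Simulate one two-arm cluster-randomized trial with overdispersed counts.
    shape = gamma-frailty shape (1/dispersion); sigma_b = sd of the log-normal
    cluster random effect; follow-up time varies by individual."""
    data = []
    for c in range(n_clusters):
        arm = c % 2                                  # alternating allocation
        size = rng.poisson(mean_size) + 1            # variable cluster size
        b = rng.normal(0.0, sigma_b)                 # cluster-level effect
        t = rng.uniform(0.5, 2.0, size)              # unequal follow-up times
        mu = np.exp(np.log(rate0) + arm * np.log(rate_ratio) + b) * t
        g = rng.gamma(shape, 1.0 / shape, size)      # gamma frailty, mean 1
        y = rng.poisson(mu * g)                      # marginally neg. binomial
        data.append((arm, y, t))
    return data

trial = simulate_crt_counts(30, 50, rate0=0.4, rate_ratio=0.7,
                            shape=2.0, sigma_b=0.3)
print(sum(y.sum() for _, y, _ in trial))   # total events in the simulated trial
```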

15.
Wang J. Statistics in Medicine 2001; 20(16): 2467-2477.
Non-parametric procedures are often used for the analysis of pharmacokinetic trials. Fewer design procedures are available for non-parametric estimation than for parametric estimation. Linear interpolation is widely used for curve estimation in pharmacokinetic trials, where often only sparse sampling is feasible. Current design procedures for smoothing or local fit are not suitable as they are based on asymptotic properties and the bias of the estimate is ignored. This paper proposes optimal designs that minimize the mean squared error of linear interpolation. Optimal designs for three situations are considered. The first situation is single curve estimation based on an ordinary non-linear model. The second is estimating several curves in a non-linear mixed model setting using an average mean squared error as the design criterion. The third situation is destructive sampling where estimating the average curve is the main purpose. In the first situation, the design results in the best linear interpolation when the variance is constant. For the destructive sampling design, an algorithm based on approximations is proposed. This algorithm can be programmed in a common statistical package. Numerical examples are used to illustrate the design procedure.
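The paper's algorithms (which also cover the mixed-model and destructive-sampling settings) are not reproduced here. The following is a minimal numerical sketch of the single-curve design criterion: the integrated mean squared error of linear interpolation, composed of squared interpolation bias plus propagated noise variance, minimized over the interior sampling times. The concentration curve and noise variance are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def integrated_mse(times, f, sigma2, grid):
    """Integrated MSE of linear interpolation of f from noisy observations
    (i.i.d. noise variance sigma2) at the given sampling times."""
    times = np.sort(np.asarray(times, dtype=float))
    fi = f(times)
    idx = np.clip(np.searchsorted(times, grid) - 1, 0, len(times) - 2)
    w = (grid - times[idx]) / (times[idx + 1] - times[idx])
    bias = (1 - w) * fi[idx] + w * fi[idx + 1] - f(grid)
    var = sigma2 * ((1 - w)**2 + w**2)
    return float(np.mean(bias**2 + var))

# Illustrative concentration-time curve and a dense evaluation grid.
f = lambda t: 10.0 * (np.exp(-0.3 * t) - np.exp(-1.5 * t))
grid = np.linspace(0.0, 12.0, 500)

def objective(x):  # interior sampling times; endpoints 0 and 12 are fixed
    x = np.clip(np.sort(x), 1e-3, 12.0 - 1e-3)
    return integrated_mse(np.concatenate(([0.0], x, [12.0])), f, 0.04, grid)

res = minimize(objective, x0=np.array([1.0, 3.0, 6.0, 9.0]), method="Nelder-Mead")
print(np.round(np.sort(res.x), 2))   # optimized interior sampling times
```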

16.
Many meta-analyses report using "Cochran's Q test" to assess heterogeneity of effect-size estimates from the individual studies. Some authors cite work by W. G. Cochran, without realizing that Cochran deliberately did not use Q itself to test for heterogeneity. Further, when heterogeneity is absent, the actual null distribution of Q is not the chi-squared distribution assumed for "Cochran's Q test". This paper reviews work by Cochran related to Q. It then discusses derivations of the asymptotic approximation for the null distribution of Q, as well as work that has derived finite-sample moments and corresponding approximations for the cases of specific measures of effect size. Those results complicate implementation and interpretation of the popular heterogeneity index I². Also, it turns out that the test-based confidence intervals used with I² are based on a fallacious approach. Software that outputs Q and I² should use the appropriate reference value of Q for the particular measure of effect size and the current meta-analysis. Q is a key element of the popular DerSimonian-Laird procedure for random-effects meta-analysis, but the assumptions of that procedure and related procedures do not reflect the actual behavior of Q and may introduce bias. The DerSimonian-Laird procedure should be regarded as unreliable. Copyright © 2015 John Wiley & Sons, Ltd.
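For reference, these are the standard computations that the paper critiques: Cochran's Q, the I² index derived from it, and the DerSimonian-Laird moment estimate of the between-study variance. This sketch shows the conventional formulas only; per the paper, referring Q to a chi-squared distribution and building test-based intervals for I² can be misleading.

```python
import numpy as np

def q_i2_tau2(y, v):
    """Cochran's Q, I^2 (percent), and the DerSimonian-Laird tau^2 from study
    effect estimates y and their within-study variances v."""
    y, v = np.asarray(y, dtype=float), np.asarray(v, dtype=float)
    w = 1.0 / v
    y_bar = np.sum(w * y) / np.sum(w)        # fixed-effect pooled estimate
    q = np.sum(w * (y - y_bar)**2)
    df = len(y) - 1
    i2 = max(0.0, (q - df) / q) * 100
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    return q, i2, tau2

print(q_i2_tau2(y=[0.12, 0.30, -0.05, 0.22, 0.41],
                v=[0.01, 0.02, 0.015, 0.01, 0.03]))
```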

17.
Cook RJ, Lee KA, Li H. Statistics in Medicine 2007; 26(25): 4563-4577.
We describe methods for designing non-inferiority trials with recurrent event responses arising from mixed-Poisson models. Sample size formulae are derived for trials in which treatment effects are expressed as relative rates and as absolute differences in cumulative mean functions at a particular time. Simulation studies are conducted to provide empirical validation of the frequency properties of the design and testing procedures under the null and alternative hypotheses using both mixed-Poisson models and robust marginal methods. The robustness of the design to mis-specification of the random effect distribution is also studied empirically. Sample size requirements based on the proposed method are contrasted with those from a design based on the time to the first event for a motivating study of patients with bone metastases at risk of skeletal complications. When the between-patient heterogeneity in the event rate is small, there may be a considerable reduction in sample size with recurrent event outcomes.

18.
Objective: To provide the most commonly used formulas for calculating the sample size of parallel-group non-inferiority clinical trials with binary outcomes, together with corresponding SAS programs and PASS procedures, and to offer guidance on setting the relevant parameters. Methods: Sample size estimation formulas were derived from normal-approximation theory for the binomial distribution; SAS programs and PASS procedures were used to examine how the required sample size and power change as the key parameters (the sample proportions and the non-inferiority margin) vary. Results: For non-inferiority trials on proportions, the formula, the SAS program, and the PASS procedure give consistent sample sizes. For a fixed significance level and control-group proportion, the required sample size decreases as the experimental-group proportion increases, as the target power decreases, and as the margin increases. Conclusion: The formula, SAS program, and PASS procedure provided here allow researchers to obtain, quickly and systematically, the sample size for a two-group parallel non-inferiority design with binary data. The experimental-group proportion, the power, and the non-inferiority margin are parameters that must be considered carefully when estimating the sample size of a non-inferiority trial.
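The paper supplies SAS and PASS code, which is not reproduced here. As a point of comparison, the following is a minimal Python sketch of the standard normal-approximation formula the abstract describes, n = (z(1-alpha) + z(1-beta))^2 * [pT(1-pT) + pC(1-pC)] / (pT - pC + margin)^2 per group for a 1:1 design.

```python
import math
from scipy.stats import norm

def ni_sample_size(p_t, p_c, margin, alpha=0.025, power=0.8):
    """Per-group sample size (1:1 allocation) for a non-inferiority trial with
    a binary endpoint, testing H0: p_t - p_c <= -margin against
    H1: p_t - p_c > -margin at one-sided level alpha (normal approximation)."""
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    var = p_t * (1 - p_t) + p_c * (1 - p_c)
    return math.ceil(z**2 * var / (p_t - p_c + margin)**2)

# e.g. both true proportions 0.80, margin 0.10, one-sided alpha 0.025, power 80%
print(ni_sample_size(0.80, 0.80, 0.10))  # about 252 per group
```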

19.
Statistical inference based on correlated count measurements is frequently performed in biomedical studies. Most existing sample size calculation methods for count outcomes are developed under the Poisson model. Deviation from the Poisson assumption (equality of mean and variance) has been widely documented in practice, which indicates an urgent need for sample size methods with more realistic assumptions to ensure valid experimental design. In this study, we investigate sample size calculation for clinical trials with correlated count measurements based on the negative binomial distribution. This approach is flexible enough to accommodate overdispersion and unequal measurement intervals, as well as arbitrary randomization ratios, missing data patterns, and correlation structures. Importantly, the derived sample size formulas have closed forms both for the comparison of slopes and for the comparison of time-averaged responses, which greatly reduces the burden of implementation in practice. We conducted extensive simulation to demonstrate that the proposed method maintains the nominal levels of power and type I error over a wide range of design configurations. We illustrate the application of this approach using a real epilepsy trial.
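The paper's closed forms for correlated measurements are more general than what follows. As a simpler reference point, here is a sketch of a standard normal-approximation sample size for comparing two event rates with negative binomial counts on the log rate-ratio scale (in the spirit of Zhu and Lakkis, 2014); this is not the authors' formula, and the inputs are illustrative.

```python
import math
from scipy.stats import norm

def nb_n_per_arm(lam1, lam2, t, k, alpha=0.05, power=0.8):
    """Per-arm sample size for comparing two event rates with negative
    binomial counts over exposure t, dispersion k (Var = mu + k*mu^2),
    using a normal approximation on the log rate ratio."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var = 1.0 / (t * lam1) + 1.0 / (t * lam2) + 2.0 * k
    return math.ceil(z**2 * var / math.log(lam2 / lam1)**2)

print(nb_n_per_arm(lam1=1.0, lam2=0.7, t=1.0, k=0.5))
```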

20.
Sample size determination for two-group multicenter clinical trials
Objective: To propose a sample size determination method for two-group multicenter clinical trials. Methods: The required total sample size is derived by inverting the Cochran test or the Mantel-Haenszel test, and the total is then allocated according to the stratum and group sample fractions. Results: The resulting sample size method is matched one-to-one with the corresponding test. The method is consistent under homogeneity: when there is only one stratum, it reduces to the simplified normal method under the homogeneity assumption, or to that method plus one. Designs based on the Cochran test may use equal or unequal strata, whereas designs based on the Mantel-Haenszel test require equal strata. Conclusion: The method can be used in designing protocols for two-group multicenter clinical trials. A worked example illustrating the design process is provided.
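The sample size derivation inverts the Cochran or Mantel-Haenszel test; the abstract gives neither derivation. For reference, here is a minimal sketch of the Cochran-Mantel-Haenszel chi-squared statistic computed across strata (the counts are illustrative; the paper's inversion step is not reproduced).

```python
def cmh_statistic(tables):
    """Cochran-Mantel-Haenszel chi-squared statistic (no continuity
    correction) for stratum-specific 2x2 tables ((a, b), (c, d)),
    rows = groups, columns = outcome."""
    num, den = 0.0, 0.0
    for (a, b), (c, d) in tables:
        n = a + b + c + d
        num += a - (a + b) * (a + c) / n                       # a - E(a)
        den += (a + b) * (c + d) * (a + c) * (b + d) / (n**2 * (n - 1))
    return num**2 / den

# three centres (strata), illustrative counts
print(cmh_statistic([((20, 30), (10, 40)),
                     ((15, 25), (12, 28)),
                     ((18, 22), (9, 31))]))
```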
