Similar Literature
20 similar documents found
1.
OBJECTIVE: Randomized clinical trials that compare two treatments on a continuous outcome can be analyzed using analysis of covariance (ANCOVA) or a t-test approach. We present a method for the sample size calculation when ANCOVA is used. STUDY DESIGN AND SETTING: We derived an approximate sample size formula. Simulations were used to verify the accuracy of the formula and to improve the approximation for small trials. The sample size calculations are illustrated in a clinical trial in rheumatoid arthritis. RESULTS: If the correlation between the outcome measured at baseline and at follow-up is ρ, ANCOVA comparing groups of (1-ρ²)n subjects has the same power as a t-test comparing groups of n subjects. When ANCOVA is used instead of a t-test on the same data, the precision of the treatment estimate is increased and the length of the confidence interval is reduced by a factor 1-ρ². CONCLUSION: ANCOVA may considerably reduce the number of patients required for a trial.
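A minimal sketch of how the reported result could be applied when planning a trial, assuming a standard normal-approximation sample size for the two-sample t-test; the effect size, standard deviation, and correlation below are illustrative values, not taken from the rheumatoid arthritis example.

```python
# Illustrative sketch (hedged): apply the (1 - rho^2) reduction from the abstract
# to a standard normal-approximation sample size for a two-sample t-test.
from math import ceil
from scipy.stats import norm

def n_per_group_ttest(delta, sigma, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sample t-test."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sigma / delta) ** 2

def n_per_group_ancova(delta, sigma, rho, alpha=0.05, power=0.80):
    """ANCOVA adjusting for baseline: same power with (1 - rho^2) * n subjects."""
    return (1 - rho ** 2) * n_per_group_ttest(delta, sigma, alpha, power)

# Illustrative inputs: difference of 5 points, SD 10, baseline-follow-up correlation 0.6.
print(ceil(n_per_group_ttest(5, 10)),            # t-test: ~63 per group
      ceil(n_per_group_ancova(5, 10, rho=0.6)))  # ANCOVA: ~41 per group
```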

2.
Hu Z, Follmann D. Statistics in Medicine 2007;26(12):2433-2448
This paper develops methods of analysis for active extension clinical trials. Under this design, patients are randomized to treatment or placebo for a period of time (period 1), and then all patients receive treatment for an additional period of time (period 2). We assume a continuous outcome is measured at baseline and at the end of each of the two consecutive periods. If only period 1 data are available, classic estimators of the treatment effect include the change score, analysis of covariance, and maximum likelihood (ML). We show how to extend these estimators by incorporating period 2 data; we refer to the results as period 2 estimators. Under the assumption that the mean responses for the treatment and placebo arms are the same at the end of period 2, the new estimators are unbiased and more efficient than estimators that ignore period 2 data. If this assumption is not met, the period 2 tests may be more powerful than period 1 tests, but the estimators are biased downward (upward) if the treatment effect during period 2 is larger (smaller) in the treatment arm than in the placebo arm. In general, the proposed period 2 procedure provides an efficient way to supplement, but not supplant, the usual period 1 analysis.

3.
Carter B. Statistics in Medicine 2010;29(29):2984-2993
Cluster randomized controlled trials are increasingly used to evaluate medical interventions. Research has found that cluster size variability leads to a reduction in the overall effective sample size. Although reporting standards for cluster trials have started to evolve, a far greater degree of transparency is needed to ensure that robust evidence is presented. The number of patients recruited should not be used to summarize the recruitment rate; an improved metric that illustrates cumulative power and accounts for cluster variability is preferable. Data from four trials are included to show the link between cluster size variability and imbalance. Furthermore, simulations demonstrate that chance imbalance can be minimized by randomizing with a two-block randomization strategy in which the second block is weighted by cluster size recruitment.

4.
Stratified cluster randomization trials (CRTs) have been frequently employed in clinical and healthcare research. Compared with simple randomized CRTs, stratified CRTs reduce the imbalance of baseline prognostic factors among different intervention groups. Owing to this popularity, there has been growing interest in methodological development on sample size estimation and power analysis for stratified CRTs; however, existing work mostly assumes equal cluster sizes within each stratum and uses multilevel models. Clusters are often naturally formed with random sizes in CRTs. With varying cluster size, commonly used ad hoc approaches ignore the variability in cluster size, which may underestimate (overestimate) the required number of clusters for each group per stratum and lead to underpowered (overpowered) clinical trials. We propose closed-form sample size formulas for estimating the required total number of subjects and the number of clusters for each group per stratum, based on the Cochran-Mantel-Haenszel statistic for stratified cluster randomization designs with binary outcomes, accounting for both clustering and varying cluster size. We investigate the impact of various design parameters on the relative change in the required number of clusters for each group per stratum due to varying cluster size. Simulation studies are conducted to evaluate the finite-sample performance of the proposed sample size method. A real application to a pragmatic stratified CRT addressing the triad of chronic kidney disease, diabetes, and hypertension is presented for illustration.
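The abstract does not reproduce the paper's CMH-based formulas, so the sketch below instead uses a widely cited design-effect correction involving the coefficient of variation of cluster size, simply to illustrate how ignoring cluster-size variability can understate the required number of clusters; all numerical inputs are illustrative assumptions.

```python
# Hedged sketch (not the paper's CMH-based formula): standard design-effect
# adjustment for a two-arm CRT with a binary outcome and varying cluster sizes.
from math import ceil
from scipy.stats import norm

def clusters_per_arm(p1, p2, m_bar, icc, cv=0.0, alpha=0.05, power=0.80):
    """Clusters per arm; cv is the coefficient of variation of cluster size."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_ind = z ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
    deff = 1 + ((cv ** 2 + 1) * m_bar - 1) * icc   # varying-cluster-size design effect
    return ceil(n_ind * deff / m_bar)

# Ignoring cluster-size variability (cv = 0) vs. allowing for it (cv = 0.6).
print(clusters_per_arm(0.20, 0.12, m_bar=40, icc=0.02, cv=0.0),
      clusters_per_arm(0.20, 0.12, m_bar=40, icc=0.02, cv=0.6))
```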

5.
Xie T, Waksman J. Statistics in Medicine 2003;22(18):2835-2846
Many clinical trials collect data on the times to occurrence of multiple events of the same type within sampling units, where the ordering of events is arbitrary and the times are usually correlated. To design a clinical trial with this type of clustered survival time as the primary endpoint, estimating the number of subjects (sampling units) required for a given power to detect a specified treatment difference is an important issue. In this paper we derive a sample size formula for clustered survival data via Lee, Wei and Amato's marginal model. It can easily be used to plan a clinical trial in which clustered survival times are of primary interest. Simulation studies demonstrate that the formula works very well. We also discuss and compare the clustered survival time design and the single survival time design (for example, time to the first event) in different scenarios.

6.
Cluster randomized trials (CRTs) refer to experiments with randomization carried out at the cluster or group level. While numerous statistical methods have been developed for the design and analysis of CRTs, most existing methods have focused on testing the overall treatment effect across the population, with little discussion of differential treatment effects among subpopulations. In addition, the sample size and power requirements for detecting differential treatment effects in CRTs remain unclear, yet they are essential for studies planned with such an objective. In this article, we develop a new sample size formula for detecting treatment effect heterogeneity in two-level CRTs for continuous outcomes, with continuous or binary covariates measured at the cluster or individual level. We also investigate the roles of two intraclass correlation coefficients (ICCs): the adjusted ICC for the outcome of interest and the marginal ICC for the covariate of interest. We further derive a closed-form design effect formula to facilitate the application of the proposed method, and provide extensions to accommodate multiple covariates. Extensive simulations are carried out to validate the proposed formula in finite samples. We find that the empirical power agrees well with the prediction across a range of parameter constellations when data are analyzed by a linear mixed effects model with a treatment-by-covariate interaction. Finally, we use data from the HF-ACTION study to illustrate the proposed sample size procedure for detecting heterogeneous treatment effects.

7.
Giraudeau, Ravaud and Donner in 2008 presented a formula for sample size calculations for cluster randomised crossover trials, when the intracluster correlation coefficient, interperiod correlation coefficient and mean cluster size are specified in advance. However, in many randomised trials the number of clusters is constrained in some way, but the mean cluster size is not. We present a version of the Giraudeau formula for sample size calculations for cluster randomised crossover trials when the number of clusters is fixed. Formulae are given for the minimum number of clusters, the maximum cluster size and the relationship between the correlation coefficients when there are constraints on both the number of clusters and the cluster size. Our version of the formula may aid the efficient planning and design of cluster randomised crossover trials.

8.
In designing a longitudinal cluster randomized clinical trial (cluster-RCT), the interventions are randomly assigned to clusters such as clinics. Subjects within the same clinic receive the same intervention, and each is assessed repeatedly over the course of the study. A mixed-effects linear regression model can be applied in a cluster-RCT with three-level data to test the hypothesis that the intervention groups differ in the course of the outcome over time. Using a test statistic based on maximum likelihood estimates, we derived closed-form formulae for the statistical power to detect the intervention-by-time interaction and for the sample size requirements at each level. Importantly, the sample size does not depend on correlations among second-level data units, and the statistical power function depends on the numbers of second- and third-level data units only through their product. A simulation study confirmed that theoretical power estimates based on the derived formulae are nearly identical to empirical estimates. Copyright © 2009 John Wiley & Sons, Ltd.

9.
Cluster randomized trials (CRTs) are increasingly used to evaluate the effectiveness of health-care interventions. A key feature of CRTs is that observations on individuals within clusters are correlated as a result of between-cluster variability. Sample size formulae exist that account for such correlations, but they make different assumptions regarding the between-cluster variability in the intervention arm of a trial, resulting in different sample size estimates. We explore the relationship, for binary outcome data, between two common measures of between-cluster variability: k, the coefficient of variation, and ρ, the intracluster correlation coefficient. We then assess how the assumptions of constant k or ρ across treatment arms correspond to different assumptions about intervention effects. We assess the implications for sample size estimation and present a simple solution to the problems outlined. Copyright © 2009 John Wiley & Sons, Ltd.
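A small sketch of the usual algebraic link between the two measures for a binary outcome, assuming both are defined from the same between-cluster variance of the true cluster-level proportions (an assumption about parameterization, not necessarily the paper's exact development); it also shows why holding k constant across arms implies a different ρ once the intervention changes the prevalence.

```python
# Hedged sketch: with sigma_b^2 the between-cluster variance of the true
# cluster-level proportions around prevalence pi, k = sigma_b / pi and
# rho = sigma_b^2 / (pi * (1 - pi)), so k^2 = rho * (1 - pi) / pi.
from math import sqrt

def k_from_rho(rho, pi):
    return sqrt(rho * (1 - pi) / pi)

def rho_from_k(k, pi):
    return k ** 2 * pi / (1 - pi)

# Holding k constant across arms implies a different rho once the intervention
# changes the prevalence (illustrative values).
pi_control, pi_intervention = 0.30, 0.20
k = k_from_rho(0.05, pi_control)     # k implied by rho = 0.05 in the control arm
print(round(k, 3), round(rho_from_k(k, pi_intervention), 3))
```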

10.
Despite our best efforts, missing outcomes are common in randomized controlled clinical trials. The National Research Council's Committee on National Statistics panel report, The Prevention and Treatment of Missing Data in Clinical Trials, noted that further research is required to assess the impact of missing data on the power of clinical trials and how to set useful target rates and acceptable rates of missing data in clinical trials. In this article, using binary responses for illustration, we establish that conclusions based on statistical analyses that include only complete cases can be seriously misleading, and that the adverse impact of missing data grows not only with increasing rates of missingness but also with increasing sample size. We illustrate how principled sensitivity analysis can be used to assess the robustness of the conclusions. Finally, we illustrate how sample sizes can be adjusted to account for expected rates of missingness. We find that when sensitivity analyses are considered as part of the primary analysis, the required adjustments to the sample size are dramatically larger than those that are traditionally used. Furthermore, in some cases, especially in large trials with small target effect sizes, it is impossible to achieve the desired power.
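For context, the sketch below shows the conventional sample size inflation for an anticipated missingness rate, which the article argues is often far too small once principled sensitivity analyses are considered; the outcome probabilities and missingness rate are illustrative assumptions.

```python
# Conventional inflation for an anticipated missingness rate r under a
# complete-case analysis: enrol n / (1 - r) so the expected number of
# completers equals n. The article argues this is often insufficient.
from math import ceil
from scipy.stats import norm

def n_per_arm_binary(p1, p2, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2

def inflate_for_missingness(n, r):
    """Naive adjustment, effectively assuming data are missing completely at random."""
    return ceil(n / (1 - r))

n = n_per_arm_binary(0.40, 0.30)      # illustrative event probabilities
print(ceil(n), inflate_for_missingness(n, r=0.10))
```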

11.
A major methodological reason to use cluster randomization is to avoid the contamination that would arise in an individually randomized design. However, when patient recruitment cannot be completed before randomization of clusters, the non-blindedness of recruiters and patients may cause selection bias, and recruitment in the control clusters may slow because of patient or recruiter preferences for the intervention. As a compromise, pseudo cluster randomization has been proposed. Because no insight is available into the relative performance of methods to analyse data obtained from this design, we compared the type I and II error rates of mixed models, generalized estimating equations (GEE) and a paired t-test to those of the estimator originally proposed for this design. The bias in the point estimate and its standard error were also incorporated into this comparison. Furthermore, we evaluated the effect of the weighting scheme and the accuracy of the sample size formula that have been described previously. Power levels of the originally proposed estimator and the unweighted mixed models were in agreement with the sample size formula, but the power of the paired t-test fell short. GEE produced inflated type I error rates unless the number of clusters was large (>30-40 per arm). The use of the weighting scheme generally enhanced power, but at the cost of increasing the type I error in mixed models and GEE. We recommend unweighted mixed models as the best compromise between feasibility and power for analysing data from a pseudo cluster randomized trial.

12.
Individual randomized trials (IRTs) and cluster randomized trials (CRTs) with binary outcomes arise in a variety of settings and are often analyzed by logistic regression (fitted using generalized estimating equations for CRTs). The effect of stratification on the required sample size is less well understood for trials with binary outcomes than for those with continuous outcomes. We propose easy-to-use methods for sample size estimation for stratified IRTs and CRTs and demonstrate their use for a tuberculosis prevention CRT currently being planned. For both IRTs and CRTs, we also identify the ratio of the sample size for a stratified trial versus a comparably powered unstratified trial, allowing investigators to evaluate how stratification will affect the required sample size when planning a trial. For CRTs, these methods can be used when the investigator has estimates of the within-stratum intracluster correlation coefficients (ICCs) or by assuming a common within-stratum ICC. Using these methods, we describe scenarios where stratification may have a practically important impact on the required sample size. We find that in the two-stratum case, for both IRTs and CRTs with very small cluster sizes, there are unlikely to be plausible scenarios in which an important sample size reduction is achieved when the overall probability of a subject experiencing the event of interest is low. When the probability of events is not small, or when cluster sizes are large, however, there are scenarios where practically important reductions in sample size result from stratification.

13.
Cluster randomized and multicentre trials evaluate the effect of a treatment on persons nested within clusters, for instance, patients within clinics or pupils within schools. Optimal sample sizes at the cluster (centre) and person level have been derived under the restrictive assumption of equal sample sizes per cluster. This paper addresses the relative efficiency of unequal versus equal cluster sizes in the case of cluster randomization and person randomization within clusters. Starting from maximum likelihood parameter estimation, the relative efficiency is investigated numerically for a range of cluster size distributions. An approximate formula is presented for computing the relative efficiency as a function of the mean and variance of cluster size and the intraclass correlation, which can be used for adjusting the sample size. The accuracy of this formula is checked against the numerical results and found to be quite good. It is concluded that the loss of efficiency due to variation of cluster sizes rarely exceeds 10 per cent and can be compensated for by sampling 11 per cent more clusters.
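A sketch of the idea, assuming the usual precision weights for cluster means under maximum likelihood with a continuous outcome; the approximate formula included for comparison is of the form reported in this literature and is stated here as an assumption rather than quoted from the paper.

```python
# Sketch: relative efficiency (RE) of unequal vs equal cluster sizes, using the
# standard precision weights w_j = n_j / (1 + (n_j - 1) * rho) for cluster means.
import numpy as np

def relative_efficiency(cluster_sizes, rho):
    sizes = np.asarray(cluster_sizes, dtype=float)
    info_unequal = np.sum(sizes / (1 + (sizes - 1) * rho))
    m_bar = sizes.mean()
    info_equal = len(sizes) * m_bar / (1 + (m_bar - 1) * rho)
    return info_unequal / info_equal

def approx_relative_efficiency(m_bar, cv, rho):
    # Approximation of the form reported in this literature (an assumption here,
    # not copied from the paper): RE ~ 1 - CV^2 * lambda * (1 - lambda).
    lam = m_bar * rho / (1 + (m_bar - 1) * rho)
    return 1 - cv ** 2 * lam * (1 - lam)

rng = np.random.default_rng(1)
sizes = rng.poisson(lam=20, size=30) + 1      # 30 clusters with varying sizes
cv = sizes.std() / sizes.mean()
print(round(relative_efficiency(sizes, rho=0.05), 3),
      round(approx_relative_efficiency(sizes.mean(), cv, rho=0.05), 3))
# The loss here is only a few per cent; sampling ~11% more clusters offsets the
# worst-case loss of about 10% noted in the abstract.
```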

14.
O'Brien (Biometrics 1984; 40:1079-1087) introduced a rank-sum-type global statistical test to summarize a treatment's effect on multiple outcomes and to determine whether a treatment is better than others. This paper presents a sample size computation method for clinical trial designs with multiple primary outcomes in which O'Brien's test or its modified version (Biometrics 2005; 61:532-539) is used for the primary analysis. A new measure, the global treatment effect (GTE), is introduced to summarize the treatment's efficacy across multiple primary outcomes. Computation of the GTE under various settings is provided. Sample size methods are presented based on a prespecified GTE both when pilot data are available and when they are not. The optimal randomization ratio is given for both cases. We compare our sample size method with the Bonferroni adjustment for multiple tests. Since ranks are used in our derivation, the sample size formulas derived here are invariant to any monotone transformation of the data and are robust to outliers and skewed distributions. When all outcomes are binary, we show how the sample size is affected by the success probabilities of the outcomes. Simulation shows that these sample size formulas provide good control of type I error and statistical power. An application to a Parkinson's disease clinical trial design is demonstrated. S-Plus code to compute the sample size and the test statistic is provided.
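A minimal sketch of a rank-sum-type global statistic in the spirit of O'Brien's test: rank each outcome over the pooled sample (with larger values taken as better), sum the ranks within subject, and compare arms by a two-sample t-test on the rank sums. This illustrates the test statistic only, not the authors' GTE-based sample size calculations, and the simulated data are purely illustrative.

```python
# Hedged sketch of a rank-sum-type global test in the spirit of O'Brien (1984).
import numpy as np
from scipy.stats import rankdata, ttest_ind

def obrien_rank_sum_test(y_treat, y_control):
    """y_treat, y_control: (subjects x outcomes) arrays; larger values = better."""
    y = np.vstack([y_treat, y_control])
    ranks = np.apply_along_axis(rankdata, 0, y)   # rank each outcome over the pooled sample
    scores = ranks.sum(axis=1)                    # per-subject rank sum across outcomes
    n_t = y_treat.shape[0]
    return ttest_ind(scores[:n_t], scores[n_t:])

# Purely illustrative simulated data: a small benefit on each of three outcomes.
rng = np.random.default_rng(0)
treat = rng.normal(loc=0.3, scale=1.0, size=(50, 3))
control = rng.normal(loc=0.0, scale=1.0, size=(50, 3))
print(obrien_rank_sum_test(treat, control))
```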

15.
The sample size required for a cluster randomized trial depends on the magnitude of the intracluster correlation coefficient (ICC). The usual sample size calculation makes no allowance for the fact that the ICC is not known precisely in advance. We develop methods that allow for the uncertainty in a previously observed ICC, using a variety of distributional assumptions. Distributions for the power are derived, reflecting this uncertainty. Further, the observed ICC in a future study will not equal its true value, and we consider the impact of this on power. We implement the calculations within a Bayesian simulation approach, and provide one simplification that can be performed using simple simulation within spreadsheet software. In our examples, recognizing the uncertainty in a previous ICC estimate decreases expected power, especially when the power calculated naively from the ICC estimate is high. To protect against the possibility of low power, sample sizes may need to be increased very substantially. Recognizing the variability in the future observed ICC has little effect if prior uncertainty has already been taken into account. We show how our method can be extended to the case in which multiple prior ICC estimates are available. The methods presented in this paper can be used by applied researchers to protect against loss of power, or to choose a design that reduces the impact of uncertainty in the ICC.
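A simple sketch of the underlying idea, propagating uncertainty about the ICC through a standard design-effect power formula by simulation; the beta distribution and design values are illustrative assumptions, not the paper's distributional choices.

```python
# Hedged sketch: propagate uncertainty about the ICC into the power of a planned
# CRT by simulation (the beta distribution and design values are illustrative).
import numpy as np
from scipy.stats import norm

def power_given_icc(icc, k_per_arm, m, delta, sigma, alpha=0.05):
    """Approximate power of a two-arm CRT for a continuous outcome."""
    deff = 1 + (m - 1) * icc
    se = np.sqrt(2 * deff * sigma ** 2 / (k_per_arm * m))
    return norm.cdf(delta / se - norm.ppf(1 - alpha / 2))

rng = np.random.default_rng(42)
icc_draws = rng.beta(a=2, b=38, size=10_000)    # uncertainty centred near ICC = 0.05
powers = power_given_icc(icc_draws, k_per_arm=12, m=25, delta=0.35, sigma=1.0)

print(round(power_given_icc(0.05, 12, 25, 0.35, 1.0), 3))        # naive power at the point estimate
print(round(powers.mean(), 3), round((powers < 0.8).mean(), 3))  # expected power and Pr(power < 0.8)
```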

16.
Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood (PQL) estimation. Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases, sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be at most 1.25. Copyright © 2010 John Wiley & Sons, Ltd.

17.
Analysis of covariance models, which adjust for a baseline covariate, are often used to compare treatment groups in a controlled trial in which individuals are randomized. Such analysis adjusts for any baseline imbalance and usually increases the precision of the treatment effect estimate. We assess the value of such adjustments in the context of a cluster randomized trial with a repeated cross-sectional design and a binary outcome. In such a design, a new sample of individuals is taken from the clusters at each measurement occasion, so baseline adjustment has to be at the cluster level. Logistic regression models are used to analyse the data, with cluster-level random effects to allow for different outcome probabilities in each cluster. We compare the estimated treatment effect and its precision in models that incorporate a covariate measuring the cluster-level probabilities at baseline and those that do not. In two data sets, taken from a cluster randomized trial in the treatment of menorrhagia, the value of baseline adjustment is only evident when the number of subjects per cluster is large. We assess the generalizability of these findings by undertaking a simulation study, and find that increased precision of the treatment effect requires both large cluster sizes and substantial heterogeneity between clusters at baseline, but that baseline imbalance arising by chance in a randomized study can always be effectively adjusted for.

18.
Adaptive designs have been proposed for clinical trials in which the nuisance parameters or the alternative of interest are unknown or likely to be misspecified before the trial. Although most previous work on adaptive designs and mid-course sample size re-estimation has focused on two-stage or group-sequential designs in the normal case, we consider here a new approach that involves at most three stages and is developed in the general framework of multiparameter exponential families. This approach not only maintains the prescribed type I error probability but also provides a simple yet asymptotically efficient sequential test whose finite-sample performance, measured in terms of the expected sample size and power functions, is shown to be comparable to that of the optimal sequential design, determined by dynamic programming, in the simplified case of a normal mean with known variance and a prespecified alternative, and superior to that of existing two-stage designs and adaptive group-sequential designs when the alternative or nuisance parameters are unknown or misspecified.

19.
In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials in which the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget, or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at the individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels, but may lose much efficiency when the variance ratio is misspecified. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one, but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention. Copyright © 2014 John Wiley & Sons, Ltd.

20.
Current practice for sample size computations in clinical trials is largely based on frequentist or classical methods. These methods have the drawback of requiring a point estimate of the variance of the treatment effect and are based on arbitrary settings of type I and II errors. They also do not directly address the question of achieving the best balance between the cost of the trial and the possible benefits from using the new treatment, and fail to consider the important fact that the number of users depends on the evidence for improvement compared with the current treatment. Our approach, Behavioural Bayes (or BeBay for short), assumes that the number of patients switching to the new medical treatment depends on the strength of the evidence provided by clinical trials, and takes a value between zero and the number of potential patients. The better a new treatment, the more patients want to switch to it and the greater the benefit obtained. We define the optimal sample size to be the sample size that maximizes the expected net benefit resulting from a clinical trial. Gittins and Pezeshk (Drug Inf. Control 2000; 34:355-363; The Statistician 2000; 49(2):177-187) used a simple form of benefit function, assumed paired comparisons between two medical treatments, and took the variance of the treatment effect to be known. We generalize this setting by introducing a logistic benefit function, by extending the method to the more usual unpaired case, and by not assuming the variance to be known.
