Similar Literature
20 similar articles retrieved.
1.
Stratified cluster randomization trials (CRTs) have been frequently employed in clinical and healthcare research. Compared with simply randomized CRTs, stratified CRTs reduce the imbalance of baseline prognostic factors among different intervention groups. Given this popularity, there has been growing interest in methodological development on sample size estimation and power analysis for stratified CRTs; however, existing work mostly assumes equal cluster size within each stratum and uses multilevel models. Clusters are often naturally formed with random sizes in CRTs. With varying cluster size, commonly used ad hoc approaches ignore the variability in cluster size, which may underestimate (overestimate) the required number of clusters for each group per stratum and lead to underpowered (overpowered) clinical trials. We propose closed-form sample size formulas for estimating the required total number of subjects and the number of clusters for each group per stratum, based on the Cochran-Mantel-Haenszel statistic for stratified cluster randomization designs with binary outcomes, accounting for both clustering and varying cluster size. We investigate the impact of various design parameters on the relative change in the required number of clusters for each group per stratum due to varying cluster size. Simulation studies are conducted to evaluate the finite-sample performance of the proposed sample size method. A real application example of a pragmatic stratified CRT addressing the triad of chronic kidney disease, diabetes, and hypertension is presented for illustration.
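As a hedged illustration of how clustering and cluster-size variation enter a binary-outcome sample size calculation, the sketch below inflates the individually randomized sample size by the standard design effect 1 + ((CV² + 1)·m̄ − 1)·ρ, where CV is the coefficient of variation of cluster size; this is a generic approximation that ignores stratification and is not the authors' CMH-based formula, and all input values are hypothetical.

```python
from math import ceil
from scipy.stats import norm

def clusters_per_group(p1, p2, m_bar, cv, icc, alpha=0.05, power=0.80):
    """Approximate number of clusters per group for a two-arm CRT with a
    binary outcome, using a design effect that allows for both clustering
    and varying cluster size (generic approximation, not the CMH formula)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    # individually randomized sample size per group for two proportions
    n_ind = z ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
    deff = 1 + ((cv ** 2 + 1) * m_bar - 1) * icc   # variance inflation factor
    return ceil(n_ind * deff / m_bar)              # clusters needed per group

# hypothetical inputs: event rates 30% vs 20%, mean cluster size 20,
# cluster-size CV 0.6, ICC 0.05
print(clusters_per_group(0.30, 0.20, 20, 0.6, 0.05))
```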

2.
BACKGROUND: Cluster randomized controlled trials are increasingly used to evaluate health interventions where patients are nested within larger clusters such as practices, hospitals or communities. Patients within a cluster may be similar to each other relative to patients in other clusters on key variables; therefore, sample size calculations and analyses of results require special statistical methods. OBJECTIVE: The purpose of this study was to illustrate the calculations used for sample size estimation and data analysis and to provide estimates of the intraclass correlation coefficients (ICCs) for several variables using data from the Seniors Medication Assessment Research Trial (SMART), a community-based trial of pharmacists consulting to family physicians to optimize the drug therapy of older patients. METHODS: The study was a paired cluster randomized trial, where the family physician's practice was the cluster. The sample size calculation was based on a hypothesized reduction of 15% in mean daily units of medication in the intervention group compared with the control group, using an alpha of 0.05 (one-tailed) with 80% power, and an ICC from pilot data of 0.08. ICCs were estimated from the data for several variables. The analyses comparing the two groups used a random effects model for a meta-analysis over pairs. RESULTS: The design effect due to clustering was 2.12, resulting in an inflation in sample size from 340 patients required using individual randomization, to 720 patients using randomization of practices, with 15 patients from each of 48 practices. ICCs for medication use, health care utilization and general health were <0.1; however, the ICC for mean systolic blood pressure over the trial period was 0.199. CONCLUSIONS: Compared with individual randomization, cluster randomization may substantially increase the sample size required to maintain adequate statistical power. The differences in ICCs among potential outcome variables reinforce the need for valid estimates to ensure proper study design.
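The inflation quoted above follows from the usual design-effect formula for clustering, 1 + (m − 1)·ICC; the short check below reproduces the SMART figures (15 patients per practice, pilot ICC 0.08, 340 patients required under individual randomization).

```python
m, icc = 15, 0.08            # patients per practice, ICC from pilot data
n_individual = 340           # sample size under individual randomization

deff = 1 + (m - 1) * icc     # design effect due to clustering
n_cluster = n_individual * deff

print(deff)        # 2.12
print(n_cluster)   # 720.8 -> reported as 720 patients, i.e. 15 from each of 48 practices
```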

3.
OBJECTIVES. This methodological review aims to determine the extent to which design and analysis aspects of cluster randomization have been appropriately dealt with in reports of primary prevention trials. METHODS. All reports of primary prevention trials using cluster randomization that were published from 1990 to 1993 in the American Journal of Public Health and Preventive Medicine were identified. Each article was examined to determine whether cluster randomization was taken into account in the design and statistical analysis. RESULTS. Of the 21 articles, only 4 (19%) included sample size calculations or discussions of power that allowed for clustering, while 12 (57%) took clustering into account in the statistical analysis. CONCLUSIONS. Design and analysis issues associated with cluster randomization are not recognized widely enough. Reports of cluster randomized trials should include sample size calculations and statistical analyses that take clustering into account, estimates of design effects to help others planning trials, and a table showing the baseline distribution of important characteristics by intervention group, including the number of clusters and average cluster size for each group.

4.
Carter B. Statistics in Medicine 2010;29(29):2984-2993
Cluster randomized controlled trials are increasingly used to evaluate medical interventions. Research has found that cluster size variability leads to a reduction in the overall effective sample size. Although reporting standards of cluster trials have started to evolve, a far greater degree of transparency is needed to ensure that robust evidence is presented. The use of the number of patients recruited to summarize recruitment rate should be avoided in favour of an improved metric that illustrates cumulative power and accounts for cluster variability. Data from four trials are included to show the link between cluster size variability and imbalance. Furthermore, simulations demonstrate that, by using a two-block randomization strategy and weighting the second block by cluster recruitment size, chance imbalance can be minimized.

5.
The sample size required for a cluster randomised trial is inflated compared with an individually randomised trial because outcomes of participants from the same cluster are correlated. Sample size calculations for longitudinal cluster randomised trials (including stepped wedge trials) need to take account of at least two levels of clustering: the clusters themselves and times within clusters. We derive formulae for sample size for repeated cross-section and closed cohort cluster randomised trials with normally distributed outcome measures, under a multilevel model allowing for variation between clusters and between times within clusters. Our formulae agree with those previously described for special cases such as crossover and analysis of covariance designs, although simulation suggests that the formulae could underestimate required sample size when the number of clusters is small. Whether using a formula or simulation, a sample size calculation requires estimates of nuisance parameters, which in our model include the intracluster correlation, cluster autocorrelation, and individual autocorrelation. A cluster autocorrelation less than 1 reflects a situation where individuals sampled from the same cluster at different times have less correlated outcomes than individuals sampled from the same cluster at the same time. Nuisance parameters could be estimated from time series obtained in similarly clustered settings with the same outcome measure, using analysis of variance to estimate variance components. Copyright © 2016 John Wiley & Sons, Ltd.
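As an illustrative sketch of the nuisance parameters listed above, the helper below returns the correlation implied between two measurements from the same cluster under a multilevel model with cluster, cluster-by-time and (for closed cohorts) individual random effects; the parameterization in terms of the intracluster correlation (icc), cluster autocorrelation (cac) and individual autocorrelation (iac) is an assumption for illustration and is not copied from the paper.

```python
def within_cluster_correlation(icc, cac, iac, same_time, same_individual):
    """Correlation between two outcome measurements from the same cluster,
    illustrating how the three nuisance parameters combine (assumed
    parameterization, for illustration only)."""
    if same_individual and not same_time:
        # the same person measured in two different periods (closed cohort)
        return icc * cac + (1 - icc) * iac
    if same_time:
        # two different people, same cluster and same period
        return icc
    # two different people, same cluster, different periods
    return icc * cac

# hypothetical values: icc 0.05, cluster autocorrelation 0.8, individual autocorrelation 0.4
print(within_cluster_correlation(0.05, 0.8, 0.4, same_time=True,  same_individual=False))  # 0.05
print(within_cluster_correlation(0.05, 0.8, 0.4, same_time=False, same_individual=False))  # 0.04
print(within_cluster_correlation(0.05, 0.8, 0.4, same_time=False, same_individual=True))   # 0.42
```

A cluster autocorrelation below 1 makes the second value smaller than the first, which is the situation described in the abstract.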

6.
Sample size estimation in clinical trials depends critically on nuisance parameters, such as variances or overall event rates, which have to be guessed or estimated from previous studies in the planning phase of a trial. Blinded sample size reestimation estimates these nuisance parameters from blinded data in the ongoing trial and allows the sample size to be adjusted on the basis of the accrued information. In the present paper, this methodology is developed for clinical trials with count data as the primary endpoint. In multiple sclerosis such endpoints are commonly used in phase 2 trials (lesion counts in magnetic resonance imaging (MRI)) and phase 3 trials (relapse counts). Sample size adjustment formulas are presented for both Poisson-distributed data and overdispersed Poisson-distributed data. The latter arise from sometimes considerable between-patient heterogeneity, which can be observed in particular in MRI lesion counts. The operating characteristics of the procedure are evaluated by simulation, and recommendations on how to choose the size of the internal pilot study are given. The results suggest that blinded sample size reestimation for count data maintains the required power without an increase in the type I error. Copyright © 2010 John Wiley & Sons, Ltd.
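To make the re-estimation step concrete, the sketch below combines a generic Wald-type sample size formula for comparing two Poisson rates on the log rate-ratio scale with a blinded update that replaces the guessed rates by the pooled event rate observed in an internal pilot, while keeping the assumed treatment effect fixed; this is an illustration under stated assumptions (no overdispersion, 1:1 allocation), not the paper's exact procedure.

```python
from math import ceil, log
from scipy.stats import norm

def n_per_arm_poisson(rate_ctrl, rate_trt, followup, alpha=0.05, power=0.80):
    """Wald-type sample size per arm for comparing two Poisson rates via the
    log rate ratio (generic textbook formula, no overdispersion)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var_per_subject = (1 / rate_ctrl + 1 / rate_trt) / followup
    return ceil(z ** 2 * var_per_subject / log(rate_trt / rate_ctrl) ** 2)

# planning stage: guessed control rate 1.2 events/year, assumed 40% reduction, 1 year follow-up
print(n_per_arm_poisson(1.2, 0.72, 1.0))

# blinded re-estimation: the pooled (blinded) rate from the internal pilot replaces the
# guessed rates, with the assumed rate ratio of 0.6 retained
pooled_rate = 0.9                        # hypothetical blinded estimate from the pilot
rate_ctrl = 2 * pooled_rate / (1 + 0.6)  # split the pooled rate under the assumed ratio
print(n_per_arm_poisson(rate_ctrl, 0.6 * rate_ctrl, 1.0))
```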

7.
Cluster randomized designs are frequently employed in pragmatic clinical trials which test interventions in the full spectrum of everyday clinical settings in order to maximize applicability and generalizability. In this study, we propose to directly incorporate pragmatic features into power analysis for cluster randomized trials with count outcomes. The pragmatic features considered include arbitrary randomization ratio, overdispersion, random variability in cluster size, and unequal lengths of follow-up over which the count outcome is measured. The proposed method is developed based on generalized estimating equations (GEE), and it is advantageous in that the sample size formula retains a closed form, facilitating its implementation in pragmatic trials. We theoretically explore the impact of various pragmatic features on sample size requirements. An efficient Jackknife algorithm is presented to address the underestimation of variance by the GEE sandwich estimator when the number of clusters is small. We assess the performance of the proposed sample size method through extensive simulation, and an application to a real clinical trial is presented.

8.
Trials in which intact communities are the units of randomization are increasingly being used to evaluate interventions which are more naturally administered at the community level, or when there is a substantial risk of treatment contamination. In this article we focus on the planning of community intervention trials in which k communities (for example, medical practices, worksites, or villages) are to be randomly allocated to each of an intervention and a control group, and fixed cohorts of m individuals are enrolled in each community prior to randomization. Formulas to determine k or m may be obtained by adjusting standard sample size formulas to account for the intracluster correlation coefficient ρ. In the presence of individual-level attrition, however, observed cohort sizes are likely to vary. We show that conventional approaches of accounting for potential attrition, such as dividing standard sample size formulas by the anticipated follow-up rate π or using the average anticipated cohort size mπ, may, respectively, overestimate or underestimate the required sample size when cluster follow-up rates are highly variable and m or ρ are large. We present new sample size estimation formulas for the comparison of two means or two proportions, which appropriately account for variation among cluster follow-up rates. These formulas are derived by specifying a model for the binary missingness indicators under the population-averaged approach, assuming an exchangeable intracluster correlation coefficient, denoted by τ. To aid in the planning of future trials, we recommend that estimates of τ be reported in published community intervention trials.
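To see how the two ad hoc adjustments mentioned above can disagree, the sketch below applies both to a hypothetical comparison of two means: one divides the no-attrition number of clusters by the anticipated follow-up rate π, the other substitutes the shrunken average cohort size mπ into the design effect; neither uses the correlation τ among missingness indicators on which the paper's corrected formulas are based.

```python
from math import ceil
from scipy.stats import norm

def clusters_per_arm(delta, sigma, m, rho, alpha=0.05, power=0.80):
    """Standard clusters-per-arm formula for comparing two means with cohorts
    of m individuals per cluster and intracluster correlation rho."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * z ** 2 * sigma ** 2 * (1 + (m - 1) * rho) / (m * delta ** 2)

# hypothetical planning values
delta, sigma, m, rho, follow_up = 0.25, 1.0, 50, 0.02, 0.8

k_no_attrition = clusters_per_arm(delta, sigma, m, rho)
k_divide_by_pi = k_no_attrition / follow_up                          # divide by follow-up rate
k_shrunken_m   = clusters_per_arm(delta, sigma, m * follow_up, rho)  # use average cohort size

print(ceil(k_no_attrition), ceil(k_divide_by_pi), ceil(k_shrunken_m))
```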

9.
Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

10.
Cluster randomization trials randomize groups (called clusters) of subjects (called subunits) between intervention arms, and observations are collected from each subject. In this case, subunits within each cluster share common frailties, so that the observations from subunits of each cluster tend to be correlated. Oftentimes, the outcome of a cluster randomization trial is a time-to-event endpoint with censoring. In this article, we propose a closed-form sample size formula for weighted rank tests to compare the marginal survival distributions between intervention arms under cluster randomization with possibly variable cluster sizes. Extensive simulation studies are conducted to evaluate the performance of our sample size formula under various design settings. Real study examples are used to demonstrate our method.

11.
Power analysis constitutes an important component of modern clinical trials and research studies. Although a variety of methods and software packages are available, almost all of them are focused on regression models, with little attention paid to correlation analysis. However, the latter is arguably a simpler and more appropriate approach for modelling concurrent events, especially in psychosocial research. In this paper, we discuss power and sample size estimation for correlation analysis arising from clustered study designs. Our approach is based on the asymptotic distribution of correlated Pearson-type estimates. Although this asymptotic distribution is easy to use in data analysis, the presence of a large number of parameters creates a major problem for power analysis due to the lack of real data to estimate them. By introducing a surrogacy-type assumption, we show that all nuisance parameters can be eliminated, making it possible to perform power analysis based only on the parameters of interest. Simulation results suggest that power and sample size estimates obtained under the proposed approach are robust to this assumption.
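For orientation, the familiar Fisher z-transformation gives the sample size needed to detect a single Pearson correlation when observations are independent; the clustered extension developed in the paper modifies this through additional correlation parameters that are not reproduced here, so the sketch below is only the non-clustered baseline with a hypothetical target correlation.

```python
from math import atanh, ceil
from scipy.stats import norm

def n_for_correlation(r, alpha=0.05, power=0.80):
    """Fisher z-based sample size to detect a Pearson correlation r versus
    zero, assuming independent observations (no clustering)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil((z / atanh(r)) ** 2 + 3)

print(n_for_correlation(0.3))   # hypothetical target correlation of 0.3
```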

12.
Adaptive designs have been proposed for clinical trials in which the nuisance parameters or the alternative of interest are unknown or likely to be misspecified before the trial. Although most previous work on adaptive designs and mid-course sample size re-estimation has focused on two-stage or group-sequential designs in the normal case, we consider here a new approach that involves at most three stages and is developed in the general framework of multiparameter exponential families. This approach not only maintains the prescribed type I error probability but also provides a simple yet asymptotically efficient sequential test whose finite-sample performance, measured in terms of expected sample size and power, is comparable to the optimal sequential design determined by dynamic programming in the simplified normal mean case with known variance and a prespecified alternative, and superior to existing two-stage designs and to adaptive group-sequential designs when the alternative or nuisance parameters are unknown or misspecified.

13.
A common objective in health care quality studies involves measuring and comparing the quality of care delivered to cohorts of patients by different health care providers. The data used for inference involve observations on units grouped within clusters, such as patients treated within hospitals. Unlike cluster randomization trials, where clusters are often randomized to interventions in order to learn about individuals, the target of inference in health quality studies is the cluster itself. Furthermore, randomization is often not performed, and the resulting biases may invalidate standard tests. In this paper, we discuss approaches to sample size determination in the design of observational health quality studies when the outcome is binary. Methods for calculating sample size using marginal models are briefly reviewed, but the focus is on hierarchical binomial models. Sample size requirements for unbalanced clusters and stratified designs are characterized. We draw upon experience from a study funded by the Agency for Healthcare Research and Quality involving assessment of the quality of care for patients with cardiovascular disease. If researchers are interested in comparing clusters, hierarchical models are preferred.

14.
Repeated measures are common in clinical trials and epidemiological studies. Designing studies with repeated measures requires reasonably accurate specifications of the variances and correlations to select an appropriate sample size. Underspecifying the variances leads to a sample size that is inadequate to detect a meaningful scientific difference, while overspecifying the variances results in an unnecessarily large sample size. Both waste resources and place study participants at unwarranted risk. An internal pilot design allows sample size recalculation based on estimates of the nuisance parameters in the covariance matrix. We provide theoretical results that account for the stochastic nature of the final sample size in a common class of linear mixed models. The results are useful for designing studies with repeated measures and a balanced design. Simulations examine the impact of misspecification of the covariance matrix and demonstrate the accuracy of the approximations in controlling the type I error rate and achieving the target power. The proposed methods are applied to a longitudinal study assessing early antiretroviral therapy for youth living with HIV.

15.
Cluster randomized trials (CRTs) are increasingly used to evaluate the effectiveness of health-care interventions. A key feature of CRTs is that the observations on individuals within clusters are correlated as a result of between-cluster variability. Sample size formulae exist which account for such correlations, but they make different assumptions regarding the between-cluster variability in the intervention arm of a trial, resulting in different sample size estimates. We explore the relationship for binary outcome data between two common measures of between-cluster variability: k, the coefficient of variation, and ρ, the intracluster correlation coefficient. We then assess how the assumptions of constant k or ρ across treatment arms correspond to different assumptions about intervention effects. We assess implications for sample size estimation and present a simple solution to the problems outlined. Copyright © 2009 John Wiley & Sons, Ltd.
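For a binary outcome with true prevalence π in a given arm, the two measures are linked through the between-cluster variance, which equals both k²π² and ρπ(1 − π), so k² = ρ(1 − π)/π. The helper below converts between them and shows why holding k constant across arms and holding ρ constant across arms imply different assumptions once the intervention changes π; the numerical values are hypothetical.

```python
from math import sqrt

def cv_from_icc(icc, prevalence):
    """Coefficient of variation k of cluster-level proportions implied by an
    intracluster correlation icc at a given true prevalence."""
    return sqrt(icc * (1 - prevalence) / prevalence)

def icc_from_cv(k, prevalence):
    """Intracluster correlation implied by a coefficient of variation k."""
    return k ** 2 * prevalence / (1 - prevalence)

k = cv_from_icc(0.05, 0.30)        # control arm: icc 0.05 at prevalence 0.30
print(k)                           # ~0.34
print(icc_from_cv(k, 0.20))        # same k at intervention prevalence 0.20 implies icc ~0.03
```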

16.
In some diseases, such as multiple sclerosis, lesion counts obtained from magnetic resonance imaging (MRI) are used as markers of disease progression. This leads to longitudinal, and typically overdispersed, count data outcomes in clinical trials. Models for such data invariably include a number of nuisance parameters, which can be difficult to specify at the planning stage, leading to considerable uncertainty in sample size specification. Consequently, blinded sample size re-estimation procedures are used, allowing for an adjustment of the sample size within an ongoing trial by estimating relevant nuisance parameters at an interim point, without compromising trial integrity. To date, the methods available for re-estimation have required an assumption that the mean count is time-constant within patients. We propose a new modeling approach that maintains the advantages of established procedures but allows for general underlying and treatment-specific time trends in the mean response. A simulation study is conducted to assess the effectiveness of blinded sample size re-estimation methods over fixed designs. Sample sizes attained through blinded sample size re-estimation procedures are shown to maintain the desired study power without inflating the Type I error rate and the procedure is demonstrated on MRI data from a recent study in multiple sclerosis.

17.
Tsiatis AA. Statistics in Medicine 2006;25(19):3236-44; discussion 3320-5, 3326-47
When designing a clinical trial to compare the effect of different treatments on response, a key issue facing the statistician is to determine how large a study is necessary to detect a clinically important difference with sufficient power. This is the case whether the study will be analysed only once (single-analysis) or whether it will be monitored periodically with the possibility of early stopping (group-sequential). Standard sample size calculations are based on both the magnitude of difference that is considered clinically important as well as values for the nuisance parameters in the statistical model. For planning purposes, best guesses are made for the value of the nuisance parameters and these are used to determine the sample size. However, if these guesses are incorrect this will affect the subsequent power to detect the clinically important difference. It is argued in this paper that statistical precision is directly related to Statistical Information and that the study should continue until the requisite statistical information is obtained. This is referred to as information-based design and analysis of clinical trials. We also argue that this type of methodology is best suited to group-sequential trials, which monitor the data periodically and allow for estimation of the statistical information as the study progresses.
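A hedged sketch of the information-based idea: the statistical information (the inverse variance of the treatment-effect estimate) required to detect a clinically important difference δ with power 1 − β at two-sided level α is ((z₁₋α/₂ + z₁₋β)/δ)², and the trial continues until the accrued information reaches that target, whatever the nuisance parameters turn out to be; the numbers below are hypothetical.

```python
from scipy.stats import norm

def required_information(delta, alpha=0.05, power=0.80):
    """Statistical information (1 / variance of the effect estimate) needed
    to detect a difference delta at the stated level and power."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return (z / delta) ** 2

target = required_information(delta=0.4)   # hypothetical clinically important difference
print(target)                              # ~49.1

# at each interim look, estimate the accrued information as 1 / SE(effect)^2
current_se = 0.18                          # hypothetical standard error at an interim analysis
print(1 / current_se ** 2 >= target)       # False -> continue recruiting
```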

18.
Missing outcome data are a crucial threat to the validity of treatment effect estimates from randomized trials. The outcome distributions of participants with missing and observed data are often different, which increases bias. Causal inference methods may aid in reducing the bias and improving efficiency by incorporating baseline variables into the analysis. In particular, doubly robust estimators incorporate two nuisance parameters: the outcome regression and the missingness mechanism (i.e., the probability of missingness conditional on treatment assignment and baseline variables), to adjust for differences in the observed and unobserved groups that can be explained by observed covariates. To consistently estimate the treatment effect, one of these nuisance parameters must be consistently estimated. Traditionally, nuisance parameters are estimated using parametric models, which often precludes consistency, particularly in moderate to high dimensions. Recent research on missing data has focused on data-adaptive estimation to help achieve consistency, but the large sample properties of such methods are poorly understood. In this article, we discuss a doubly robust estimator that is consistent and asymptotically normal under data-adaptive estimation of the nuisance parameters. We provide a formula for an asymptotically exact confidence interval under minimal assumptions. We show that our proposed estimator has smaller finite-sample bias compared to standard doubly robust estimators. We present a simulation study demonstrating the enhanced performance of our estimators in terms of bias, efficiency, and coverage of the confidence intervals. We present the results of an illustrative example: a randomized, double-blind phase 2/3 trial of antiretroviral therapy in HIV-infected persons.
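As an illustrative sketch (not the authors' estimator), a doubly robust analysis of a missing outcome within one treatment arm can be written as an augmented inverse-probability-weighted mean: observed outcomes are weighted by the inverse of the estimated probability of being observed, and an outcome-regression prediction supplies the augmentation term. The working models and data below are assumptions; the paper's contribution concerns data-adaptive estimation of these nuisance models, which this parametric sketch does not attempt.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def aipw_arm_mean(y, observed, x):
    """Doubly robust (AIPW) estimate of the mean outcome in one arm when some
    outcomes are missing at random given baseline covariates x.
    Illustrative only: simple parametric working models, no cross-fitting."""
    r = observed.astype(float)                                        # 1 = outcome observed
    p = LogisticRegression().fit(x, observed).predict_proba(x)[:, 1]  # P(observed | x)
    m = LinearRegression().fit(x[observed], y[observed]).predict(x)   # outcome regression
    y_obs = np.where(observed, y, 0.0)
    return np.mean(r * y_obs / p - (r - p) / p * m)

# hypothetical data for a single arm with covariate-dependent missingness
rng = np.random.default_rng(0)
x = rng.normal(size=(500, 3))
y = x @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=500)
observed = rng.uniform(size=500) < 1 / (1 + np.exp(-(0.5 + x[:, 0])))
print(aipw_arm_mean(y, observed, x))   # the treatment effect is the difference of two arm means
```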

19.
Cluster randomized and multicentre trials evaluate the effect of a treatment on persons nested within clusters, for instance, patients within clinics or pupils within schools. Optimal sample sizes at the cluster (centre) and person level have been derived under the restrictive assumption of equal sample sizes per cluster. This paper addresses the relative efficiency of unequal versus equal cluster sizes in the case of cluster randomization and person randomization within clusters. Starting from maximum likelihood parameter estimation, the relative efficiency is investigated numerically for a range of cluster size distributions. An approximate formula is presented for computing the relative efficiency as a function of the mean and variance of cluster size and the intraclass correlation, which can be used for adjusting the sample size. The accuracy of this formula is checked against the numerical results and found to be quite good. It is concluded that the loss of efficiency due to variation of cluster sizes rarely exceeds 10 per cent and can be compensated by sampling 11 per cent more clusters.
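A hedged numerical companion to the approximate formula described above: under a two-level model with intraclass correlation ρ, the generalized-least-squares variance of a treatment-arm mean is inversely proportional to Σⱼ mⱼ/(1 + (mⱼ − 1)ρ), so the efficiency of an unequal-cluster-size design relative to an equal-size design with the same number of clusters and mean size can be computed directly; the cluster-size distribution below is hypothetical, and this is not the paper's closed-form approximation.

```python
import numpy as np

def relative_efficiency(cluster_sizes, icc):
    """Efficiency of a design with the given cluster sizes relative to an
    equal-size design with the same number of clusters and mean size, based
    on the GLS information sum m / (1 + (m - 1) * icc) per cluster."""
    m = np.asarray(cluster_sizes, dtype=float)
    info_unequal = np.sum(m / (1 + (m - 1) * icc))
    m_bar = m.mean()
    info_equal = m.size * m_bar / (1 + (m_bar - 1) * icc)
    return info_unequal / info_equal

rng = np.random.default_rng(1)
sizes = rng.poisson(lam=20, size=30) + 1      # hypothetical varying cluster sizes
print(relative_efficiency(sizes, icc=0.05))   # typically close to 1 (loss well under 10%)
```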

20.
Denne JS. Statistics in Medicine 2001;20(17-18):2645-2660
The sample size required to achieve a given power at a prespecified absolute difference in mean response may depend on one or more nuisance parameters, which are usually unknown. Proposed methods for using an internal pilot to recalculate the sample size using estimates of these parameters have been well studied. Most of these methods ignore the fact that data on the parameter of interest from within this internal pilot will contribute towards the value of the final test statistic. We propose a method which involves recalculating the target sample size by computing the number of further observations required to maintain the probability of rejecting the null hypothesis at the end of the study under the prespecified absolute difference in mean response conditional on the data observed so far. We do this within the framework of a two-group error-spending sequential test, modified so as to prevent inflation of the type I error rate.
