Similar articles
20 similar articles found.
1.
Many different methods have been proposed for the analysis of cluster randomized trials (CRTs) over the last 30 years. However, the evaluation of methods on overdispersed count data has been based mostly on the comparison of results using empirical data, i.e. when the true model parameters are not known. In this study, we assess via simulation the performance of five methods for the analysis of counts in situations similar to real community-intervention trials. We used the negative binomial distribution to simulate overdispersed counts of CRTs with two study arms, allowing the period of time under observation to vary among individuals. We assessed different sample sizes, degrees of clustering and degrees of cluster-size imbalance. The compared methods are: (i) the two-sample t-test of cluster-level rates, (ii) generalized estimating equations (GEE) with empirical covariance estimators, (iii) GEE with model-based covariance estimators, (iv) generalized linear mixed models (GLMM) and (v) Bayesian hierarchical models (Bayes-HM). Variation in sample size and clustering led to differences between the methods in terms of coverage, significance, power and random-effects estimation. GLMM and Bayes-HM performed better in general, with Bayes-HM producing less dispersed random-effects estimates, although these were upwardly biased when clustering was low. GEE showed higher power but anticonservative coverage and elevated type I error rates. Imbalance affected the overall performance of the cluster-level t-test and the GEE's coverage in small samples. Important effects of accounting for overdispersion are illustrated through the analysis of a community-intervention trial on Solar Water Disinfection in rural Bolivia. Copyright © 2009 John Wiley & Sons, Ltd.
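For readers who want to reproduce this kind of setting, the sketch below shows one way overdispersed cluster counts with varying person-time could be simulated. The gamma–Poisson (negative binomial) parameterization with dispersion parameter k, and all parameter values, are illustrative assumptions, not the authors' exact simulation protocol.

```python
import numpy as np

rng = np.random.default_rng(2009)

def simulate_crt_counts(n_clusters, mean_cluster_size, rate, dispersion_k, rate_ratio=1.0):
    """Simulate overdispersed event counts for one arm of a CRT.

    Counts follow a negative binomial with mean = rate * person_time and
    shape (dispersion) parameter k; smaller k means more overdispersion.
    Cluster sizes and individual follow-up times are drawn at random so
    that person-time varies among individuals and clusters.
    """
    counts = []
    for _ in range(n_clusters):
        size = rng.poisson(mean_cluster_size)            # varying cluster size
        person_time = rng.uniform(0.5, 1.5, size=size)   # varying follow-up (years)
        mu = rate * rate_ratio * person_time             # expected individual counts
        p = dispersion_k / (dispersion_k + mu)           # NB success probability
        counts.append(rng.negative_binomial(dispersion_k, p))
    return counts

# Example: control vs intervention arm with an assumed rate ratio of 0.7
control = simulate_crt_counts(n_clusters=15, mean_cluster_size=40, rate=1.2, dispersion_k=2.0)
treated = simulate_crt_counts(n_clusters=15, mean_cluster_size=40, rate=1.2, dispersion_k=2.0,
                              rate_ratio=0.7)
```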

2.
Stratified cluster randomization trials (CRTs) have been frequently employed in clinical and healthcare research. Compared with simply randomized CRTs, stratified CRTs reduce the imbalance of baseline prognostic factors among different intervention groups. Because of this popularity, there has been growing interest in methodological development on sample size estimation and power analysis for stratified CRTs; however, existing work mostly assumes equal cluster size within each stratum and uses multilevel models. Clusters are often naturally formed with random sizes in CRTs. With varying cluster size, commonly used ad hoc approaches ignore the variability in cluster size, which may underestimate (overestimate) the required number of clusters for each group per stratum and lead to underpowered (overpowered) clinical trials. We propose closed-form sample size formulas for estimating the required total number of subjects and the number of clusters for each group per stratum, based on the Cochran–Mantel–Haenszel statistic for stratified cluster randomization designs with binary outcomes, accounting for both clustering and varying cluster size. We investigate the impact of various design parameters on the relative change in the required number of clusters for each group per stratum due to varying cluster size. Simulation studies are conducted to evaluate the finite-sample performance of the proposed sample size method. A real application example of a pragmatic stratified CRT of a triad of chronic kidney disease, diabetes, and hypertension is presented for illustration.

3.
We extend the pattern-mixture approach to handle missing continuous outcome data in longitudinal cluster randomized trials, which randomize groups of individuals to treatment arms rather than the individuals themselves. Individuals who drop out at the same time point are grouped into the same dropout pattern. We approach extrapolation of the pattern-mixture model by applying multilevel multiple imputation, which imputes missing values while appropriately accounting for the hierarchical data structure found in cluster randomized trials. To assess parameters of interest under various missing data assumptions, imputed values are multiplied by a sensitivity parameter, k, which shifts them upward or downward. Using simulated data, we show that estimates of parameters of interest can vary widely under differing missing data assumptions. We conduct a sensitivity analysis using real data from a cluster randomized trial by increasing k until the treatment effect inference changes. By performing such a sensitivity analysis for missing data, researchers can assess whether certain missing data assumptions are reasonable for their cluster randomized trial.
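A minimal sketch of such a tipping-point sensitivity analysis is given below, assuming multilevel imputations are already available as pandas Series aligned with the data. The column names, the use of statsmodels' MixedLM, and the crude pooling of p-values are illustrative assumptions; in practice multilevel multiple imputation and Rubin's rules would be used, as described in the abstract.

```python
import numpy as np
import statsmodels.formula.api as smf

def sensitivity_scan(data, imputations, k_grid):
    """Scan a sensitivity parameter k: imputed outcomes are multiplied by k,
    the mixed model is refit, and the treatment-effect p-value is recorded
    (illustrative tipping-point analysis; column names are hypothetical)."""
    results = []
    missing = data["y"].isna()
    for k in k_grid:
        pvals = []
        for imp in imputations:                      # list of imputed outcome Series
            d = data.copy()
            d.loc[missing, "y"] = k * imp[missing]   # shift imputed values by k
            m = smf.mixedlm("y ~ treatment + time", d, groups=d["cluster"]).fit()
            pvals.append(m.pvalues["treatment"])
        # crude summary across imputations; Rubin's rules would be used in practice
        results.append((k, float(np.median(pvals))))
    return results
```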

4.
The sandwich estimator in the generalized estimating equations (GEE) approach underestimates the true variance in small samples and consequently results in inflated type I error rates in hypothesis testing. This fact limits the application of the GEE approach in cluster-randomized trials (CRTs) with few clusters. Under various CRT scenarios with correlated binary outcomes, we evaluate the small-sample properties of GEE Wald tests using bias-corrected sandwich estimators. Our results suggest that the GEE Wald z-test should be avoided in the analysis of CRTs with few clusters, even when bias-corrected sandwich estimators are used. With a t-distribution approximation, the Kauermann and Carroll (KC) correction can keep the test size at nominal levels even when the number of clusters is as low as 10, and it is robust to moderate variation in cluster sizes. However, in cases with large variation in cluster sizes, the Fay and Graubard (FG) correction should be used instead. Furthermore, we derive a formula to calculate the power and the minimum total number of clusters needed when using the t-test and KC correction for CRTs with binary outcomes. The power levels predicted by the proposed formula agree well with the empirical powers from the simulations. The proposed methods are illustrated using real CRT data. We conclude that, with appropriate control of type I error rates in small samples, the GEE approach is recommended for CRTs with binary outcomes because it requires fewer assumptions and is robust to misspecification of the covariance structure. Copyright © 2014 John Wiley & Sons, Ltd.

5.
Objective: The stepped wedge design is increasingly being used in cluster randomized trials (CRTs). However, there is not much information available about design and analysis strategies for these kinds of trials. Approaches to sample size and power calculations have been provided, but a simple sample size formula is lacking. Therefore, our aim is to provide a sample size formula for cluster randomized stepped wedge designs. Study Design and Setting: We derived a design effect (sample size correction factor) that can be used to estimate the required sample size for stepped wedge designs. Furthermore, we compared the required sample size for the stepped wedge design with that of a parallel group and an analysis of covariance (ANCOVA) design. Results: Our formula corrects for clustering as well as for the design. Apart from the cluster size and intracluster correlation, the design effect depends on the choice of the number of steps, the number of baseline measurements, and the number of measurements between steps. The stepped wedge design requires a substantially smaller sample size than a parallel group or ANCOVA design. Conclusion: For CRTs, the stepped wedge design is far more efficient than the parallel group and ANCOVA designs in terms of sample size.
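For orientation, a commonly cited closed-form stepped wedge design effect of this kind is reproduced below as a sketch; the notation and the exact expression are assumptions that should be checked against the paper itself.

$$ DE_{sw} \;=\; \frac{1+\rho\,(ktn+bn-1)}{1+\rho\left(\tfrac{1}{2}ktn+bn-1\right)} \;\cdot\; \frac{3(1-\rho)}{2t\left(k-\tfrac{1}{k}\right)}, $$

where k is the number of steps, b the number of baseline measurements, t the number of measurements between steps, n the number of subjects per cluster per measurement, and ρ the intracluster correlation; the sample size of an individually randomized trial is multiplied by this factor.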

6.
A requirement for calculating sample sizes for cluster randomized trials (CRTs) conducted over multiple periods of time is the specification of a form for the correlation between outcomes of subjects within the same cluster, encoded via the within-cluster correlation structure. Previously proposed within-cluster correlation structures have made strong assumptions; for example, the usual assumption is that correlations between the outcomes of all pairs of subjects are identical ("uniform correlation"). More recently, structures that allow for a decay in correlation between pairs of outcomes measured in different periods have been suggested. However, these structures are overly simple in settings with continuous recruitment and measurement. We propose a more realistic "continuous-time correlation decay" structure, whereby the correlation between two subjects' outcomes decays as the time between their measurement times increases. We investigate the effect of this structure on trial planning in the context of a primary care diabetes trial, where there is evidence of decaying correlation between pairs of patients' outcomes over time. In particular, for a range of different trial designs, we derive the variance of the treatment effect estimator under continuous-time correlation decay and compare this to the variance obtained under uniform correlation. For stepped wedge and cluster randomized crossover designs, incorrectly assuming uniform correlation will underestimate the required sample size under most trial configurations likely to occur in practice. Planning of CRTs requires consideration of the most appropriate within-cluster correlation structure to obtain a suitable sample size.
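One natural way to write such a decay structure (the notation here is illustrative, not necessarily the paper's) is, for two distinct subjects in the same cluster measured at times s and t,

$$ \operatorname{Corr}\bigl(Y_{ij}(s),\,Y_{ik}(t)\bigr) \;=\; \rho\, r^{\,|s-t|}, $$

where ρ is the within-period intracluster correlation and 0 < r ≤ 1 is a decay rate per unit time; setting r = 1 recovers the uniform-correlation assumption.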

7.
Individually randomized trials (IRTs) and cluster randomized trials (CRTs) with binary outcomes arise in a variety of settings and are often analyzed by logistic regression (fitted using generalized estimating equations for CRTs). The effect of stratification on the required sample size is less well understood for trials with binary outcomes than for continuous outcomes. We propose easy-to-use methods for sample size estimation for stratified IRTs and CRTs and demonstrate their use for a tuberculosis prevention CRT currently being planned. For both IRTs and CRTs, we also identify the ratio of the sample size for a stratified trial versus a comparably powered unstratified trial, allowing investigators to evaluate how stratification will affect the required sample size when planning a trial. For CRTs, these methods can be used when the investigator has estimates of the within-stratum intracluster correlation coefficients (ICCs) or by assuming a common within-stratum ICC. Using these methods, we describe scenarios where stratification may have a practically important impact on the required sample size. We find that in the two-stratum case, for both IRTs and for CRTs with very small cluster sizes, there are unlikely to be plausible scenarios in which an important sample size reduction is achieved when the overall probability of a subject experiencing the event of interest is low. When the probability of events is not small, or when cluster sizes are large, however, there are scenarios where practically important reductions in sample size result from stratification.

8.
Cluster randomized trials (CRTs) involve the random assignment of intact social units, rather than independent subjects, to intervention groups. Time-to-event outcomes are often endpoints in CRTs. Analyses of such data need to account for the correlation among cluster members. The intracluster correlation coefficient (ICC) is used to assess the similarity among binary and continuous outcomes that belong to the same cluster. However, estimating the ICC in CRTs with time-to-event outcomes is a challenge because of the presence of censored observations. The literature suggests that the ICC may be estimated using either censoring indicators or observed event times. A simulation study explores the effect of administrative censoring on estimating the ICC. Results show that ICC estimators derived from censoring indicators or observed event times are negatively biased. Analytic work further supports these results. Observed event times are preferred for estimating the ICC when administrative censoring is minimal. To our knowledge, the existing literature provides no practical guidance on estimating the ICC when a substantial amount of administrative censoring is present. The results from this study corroborate the need for further methodological research on estimating the ICC for correlated time-to-event outcomes. Copyright © 2016 John Wiley & Sons, Ltd.
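To make the estimation problem concrete, the sketch below implements the standard one-way ANOVA estimator of the ICC, which could be applied either to censoring indicators or to observed event times; this is an illustrative choice, and the specific estimators examined in the paper may differ.

```python
import numpy as np

def anova_icc(values_by_cluster):
    """One-way ANOVA estimator of the ICC for a list of per-cluster arrays.

    Illustrative only: in the setting above, `values_by_cluster` would hold
    either censoring indicators (0/1) or observed event times per cluster;
    the estimator itself ignores censoring, which is precisely why it can be
    biased under administrative censoring.
    """
    k = len(values_by_cluster)
    n_i = np.array([len(v) for v in values_by_cluster], dtype=float)
    N = n_i.sum()
    all_values = np.concatenate(values_by_cluster)
    grand_mean = all_values.mean()
    cluster_means = np.array([np.mean(v) for v in values_by_cluster])

    msb = np.sum(n_i * (cluster_means - grand_mean) ** 2) / (k - 1)      # between clusters
    msw = sum(np.sum((np.asarray(v) - m) ** 2)
              for v, m in zip(values_by_cluster, cluster_means)) / (N - k)  # within clusters
    n0 = (N - np.sum(n_i ** 2) / N) / (k - 1)                            # "average" cluster size

    return (msb - msw) / (msb + (n0 - 1) * msw)
```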

9.
Cluster randomized trials (CRTs) refer to experiments in which randomization is carried out at the cluster or group level. While numerous statistical methods have been developed for the design and analysis of CRTs, most existing methods have focused on testing the overall treatment effect averaged across population characteristics, with little discussion of differential treatment effects among subpopulations. In addition, the sample size and power requirements for detecting differential treatment effects in CRTs remain unclear, yet they are helpful for studies planned with such an objective. In this article, we develop a new sample size formula for detecting treatment effect heterogeneity in two-level CRTs for continuous outcomes, with continuous or binary covariates measured at the cluster or individual level. We also investigate the roles of two intraclass correlation coefficients (ICCs): the adjusted ICC for the outcome of interest and the marginal ICC for the covariate of interest. We further derive a closed-form design effect formula to facilitate the application of the proposed method, and provide extensions to accommodate multiple covariates. Extensive simulations are carried out to validate the proposed formula in finite samples. We find that the empirical power agrees well with the prediction across a range of parameter constellations when data are analyzed by a linear mixed effects model with a treatment-by-covariate interaction. Finally, we use data from the HF-ACTION study to illustrate the proposed sample size procedure for detecting heterogeneous treatment effects.

10.
Stratified randomized designs are popular in cluster randomized trials (CRTs) because they increase the chance that the intervention groups are well balanced in terms of identified prognostic factors at baseline and may increase statistical power. The objective of this paper is to assess the gain in power obtained by stratifying randomization by cluster size when cluster size is associated with an important cluster-level factor that is otherwise unaccounted for in the data analysis. A simulation study was carried out using as a template a CRT in which UK general practices were the randomized units. The results show that when cluster size is strongly associated with a cluster-level factor that is predictive of outcome, the stratified randomized design has superior power to the completely randomized design, and that the degree of superiority is related to the number of clusters.

11.
For cluster randomized trials with a continuous outcome, the sample size is often calculated as if the analysis will use only the outcomes at the end of the treatment period (follow-up scores). However, a baseline measurement of the outcome is often available or feasible to obtain. An analysis of covariance (ANCOVA) using both the baseline and follow-up scores of the outcome will then have more power. We calculate the efficiency of an ANCOVA analysis using the baseline scores compared with an analysis of follow-up scores only. The sample size for such an ANCOVA analysis is a factor r² smaller, where r is the correlation of the cluster means between baseline and follow-up. This correlation can be expressed in clinically interpretable parameters: the correlation between baseline and follow-up for subjects (subject autocorrelation) and that for clusters (cluster autocorrelation). Because of this, subject matter knowledge can be used to provide plausible values (or ranges) for these correlations when estimates from previous studies are lacking. Depending on how large the subject and cluster autocorrelations are, analysis of covariance can substantially reduce the number of clusters needed. Copyright © 2012 John Wiley & Sons, Ltd.
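Writing ρ for the ICC, ρ_c for the cluster autocorrelation, ρ_s for the subject autocorrelation, and n for the cluster size (symbols chosen here for illustration), a standard variance-components argument gives the correlation of cluster means between baseline and follow-up as

$$ r \;=\; \frac{n\rho\rho_c + (1-\rho)\rho_s}{1+(n-1)\rho}, $$

so plausible values of the subject and cluster autocorrelations translate directly into the sample size reduction described above.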

12.
In cluster randomized trials (CRTs), the outcome of interest is often a count at the cluster level. This occurs, for example, when evaluating an intervention for which the outcome is the number of infections with a disease such as HIV or dengue, or the number of hospitalizations, in each cluster. Standard practice analyzes these counts through cluster outcome rates using an appropriate denominator (e.g., population size). However, such denominators are sometimes unknown, particularly when the counts depend on a passive community surveillance system. We consider direct comparison of the counts without knowledge of denominators, relying on randomization to balance denominators. We also focus on permutation tests to allow for small numbers of randomized clusters. However, such approaches are subject to bias when there is differential ascertainment of counts across arms, a situation that may occur in CRTs that cannot implement blinded interventions. We suggest the use of negative control counts as a method to remove, or reduce, this bias, and discuss the key properties necessary for an effective negative control. A current example of such a design is the recent extension of test-negative designs to CRTs testing community-level interventions. Via simulation, we compare the performance of new and standard estimators based on CRTs with negative controls to approaches that use only the original counts. When there is no differential ascertainment by intervention arm, the count-only approaches perform comparably to those using debiasing negative controls. However, under even modest differential ascertainment, the count-only estimators are no longer reliable.
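A minimal sketch of how a negative-control-debiased comparison combined with a cluster-level permutation test might look is shown below. The ratio-type statistic and the assumption that ascertainment affects target and negative-control counts equally are illustrative; they are not necessarily the estimators studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def debiased_stat(target, negctl, arm):
    """Log ratio of arm-level target counts, debiased by negative-control counts.

    Inputs are per-cluster numpy arrays; dividing by negative-control totals
    absorbs arm-specific ascertainment, assuming ascertainment acts equally
    on target and negative-control counts.
    """
    t1, t0 = target[arm == 1].sum(), target[arm == 0].sum()
    n1, n0 = negctl[arm == 1].sum(), negctl[arm == 0].sum()
    return np.log((t1 / n1) / (t0 / n0))

def permutation_pvalue(target, negctl, arm, n_perm=10_000):
    """Permutation test that re-randomizes cluster arm labels (illustrative)."""
    observed = debiased_stat(target, negctl, arm)
    perms = np.array([debiased_stat(target, negctl, rng.permutation(arm))
                      for _ in range(n_perm)])
    return float(np.mean(np.abs(perms) >= np.abs(observed)))
```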

13.
Missing outcomes are a common problem in cluster randomised trials and can lead to biased and inefficient inference if ignored or handled inappropriately. Two approaches for analysing such trials are cluster-level analysis and individual-level analysis. In this study, we assessed the performance of unadjusted cluster-level analysis, baseline covariate-adjusted cluster-level analysis, random effects logistic regression and generalised estimating equations when binary outcomes are missing under a baseline covariate-dependent missingness mechanism. Missing outcomes were handled using complete records analysis and multilevel multiple imputation. We show analytically that cluster-level analyses for estimating the risk ratio using complete records are valid if the true data-generating model has a log link and the intervention groups have the same missingness mechanism and the same covariate effect in the outcome model. We performed a simulation study considering four different scenarios, depending on whether the missingness mechanisms are the same or different between the intervention groups and whether there is an interaction between intervention group and baseline covariate in the outcome model. On the basis of the simulation study and analytical results, we give guidance on the conditions under which each approach is valid. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

14.
Studies of individuals sampled in unbalanced clusters have become common in health services and epidemiological research, but available tools for power/sample size estimation and optimal design are currently limited. This paper presents and illustrates power estimation formulas for t-test comparisons of the effect of a cluster-level exposure on continuous outcomes in unbalanced studies with unequal numbers of clusters and/or unequal numbers of subjects per cluster in each exposure arm. Iterative application of these power formulas yields the minimal sample size needed and/or the minimal detectable difference. SAS subroutines implementing these algorithms are given in the Appendices. When feasible, power is optimized by having the same number of clusters in each arm (k_A = k_B) and, irrespective of the numbers of clusters in each arm, the same total number of subjects in each arm (n_A k_A = n_B k_B). Cost-beneficial upper limits for the number of subjects per cluster may be approximately (5/ρ) − 5 or less, where ρ is the intraclass correlation. The methods presented here for simple cluster designs may be extended to some settings involving complex hierarchical weighted cluster samples.
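The sketch below (in Python rather than SAS) computes approximate power for a cluster-level t-test with unequal numbers of clusters and subjects per cluster in each arm, using the usual design-effect inflation. It illustrates the kind of calculation described; it is not the paper's exact formulas.

```python
import numpy as np
from scipy import stats

def cluster_ttest_power(delta, sigma, rho, k_a, n_a, k_b, n_b, alpha=0.05):
    """Approximate power of a cluster-level t-test comparing two arms.

    delta   : true difference in means between arms
    sigma   : total (between + within cluster) standard deviation of the outcome
    rho     : intraclass correlation
    k_a,n_a : clusters and subjects per cluster in arm A (k_b, n_b for arm B)

    Each arm's variance is inflated by the usual design effect 1 + (n - 1)*rho;
    inference uses a t distribution with k_a + k_b - 2 degrees of freedom.
    """
    var_a = sigma**2 * (1 + (n_a - 1) * rho) / (n_a * k_a)
    var_b = sigma**2 * (1 + (n_b - 1) * rho) / (n_b * k_b)
    se = np.sqrt(var_a + var_b)
    df = k_a + k_b - 2
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    ncp = delta / se
    return 1 - stats.nct.cdf(t_crit, df, ncp) + stats.nct.cdf(-t_crit, df, ncp)

# Example: 8 clusters of 30 subjects vs 12 clusters of 20 subjects (hypothetical values)
print(cluster_ttest_power(delta=0.3, sigma=1.0, rho=0.05, k_a=8, n_a=30, k_b=12, n_b=20))
```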

15.
In this paper, we focus on cluster randomized trials, also known as group randomized trials, which randomize clusters, or groups, of subjects to different trial arms, such as intervention or control. Outcomes from subjects within the same cluster tend to exhibit an exchangeable correlation, measured by the intra-cluster correlation coefficient (ICC). Our primary interest is to test whether the intervention has an impact on the marginal mean of an outcome. Using recently developed methods, we propose how to select a working ICC structure with the goal of choosing the structure that results in the smallest standard errors for regression parameter estimates, and thus the greatest power for this test. Specifically, we utilize small-sample corrections for the estimation of the covariance matrix of regression parameter estimates. This matrix is incorporated within correlation selection criteria proposed in the generalized estimating equations literature to choose one of multiple working ICC structures under consideration. We demonstrate the potential power and utility of this approach in cluster randomized trial settings via a simulation study and an application example, and we discuss practical considerations for its use in practice. Copyright © 2016 John Wiley & Sons, Ltd.

16.
BACKGROUND: Cluster randomized trials are increasingly popular. In many of these trials, cluster sizes are unequal. This can affect trial power, but standard sample size formulae for these trials ignore it. Previous studies addressing this issue have mostly focused on continuous outcomes or on methods that are sometimes difficult to use in practice. METHODS: We show how a simple formula can be used to judge the possible effect of unequal cluster sizes for various types of analyses and for both continuous and binary outcomes. We explore the practical estimation of the coefficient of variation of cluster size required in this formula and demonstrate the formula's performance for a hypothetical but typical trial randomizing UK general practices. RESULTS: The simple formula provides a good estimate of sample size requirements for trials analysed using cluster-level analyses weighted by cluster size, and a conservative estimate for other types of analyses. For trials randomizing UK general practices, the coefficient of variation of cluster size depends on variation in practice list size, variation in the incidence or prevalence of the medical condition under examination, and practice and patient recruitment strategies, and for many trials it is expected to be approximately 0.65. Individual-level analyses can be noticeably more efficient than some cluster-level analyses in this context. CONCLUSIONS: When the coefficient of variation is <0.23, the effect of adjusting for variable cluster size on the sample size is negligible. Most trials randomizing UK general practices, and many other cluster randomized trials, should account for variable cluster size in their sample size calculations.
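A design effect of the general form often quoted for variable cluster sizes, with m̄ the mean cluster size, cv the coefficient of variation of cluster size, and ρ the ICC (whether this matches the paper's formula exactly should be checked against the original), is

$$ DE \;\approx\; 1 + \bigl[(cv^{2}+1)\,\bar{m} - 1\bigr]\rho, $$

which reduces to the familiar 1 + (m̄ − 1)ρ when all clusters have the same size (cv = 0).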

17.
Three-level cluster randomized trials (CRTs) are increasingly used in implementation science, where twofold-nested correlated data arise. For example, interventions are randomly assigned to practices, and providers within the same practice who provide care to participants are trained in the assigned intervention. Teerenstra et al. proposed a nested exchangeable correlation structure that accounts for two levels of clustering within the generalized estimating equations (GEE) approach. In this article, we utilize GEE models to test the treatment effect in a two-group comparison for continuous, binary, or count data in three-level CRTs. Given the nested exchangeable correlation structure, we derive the asymptotic variances of the estimator of the treatment effect for the different types of outcomes. When the number of clusters is small, researchers have proposed bias-corrected sandwich estimators to improve performance in two-level CRTs. We extend the variances of two bias-corrected sandwich estimators to three-level CRTs. Equal provider and practice sizes were assumed when calculating the number of practices, for simplicity; however, equal sizes are not guaranteed in practice. Relative efficiency (RE) is defined as the ratio of the variance of the treatment effect estimator under equal versus unequal provider and practice sizes. Expressions for the REs are obtained from both the asymptotic variance estimation and the bias-corrected sandwich estimators. Their performance is evaluated for different scenarios of provider and practice size distributions through simulation studies. Finally, a percentage increase in the number of practices is proposed to compensate for the efficiency loss from unequal provider and/or practice sizes.

18.
BACKGROUND: This paper concerns the issue of cluster randomization in primary care practice intervention trials. We present information on the cluster effect of measuring the performance of various preventive maneuvers between groups of physicians, based on a successful trial. We discuss the role of the intracluster correlation coefficient in determining the required sample size and the implications for designing randomized controlled trials in which groups of subjects (e.g., physicians in a group practice) are allocated at random. METHODS: We performed a cross-sectional study involving data from 46 participating practices with 106 physicians, collected using self-administered questionnaires and a chart audit of 100 randomly selected charts per practice. The population was health service organizations (HSOs) located in Southern Ontario. We analyzed performance data for 13 preventive maneuvers determined by chart review and used analysis of variance to determine the intraclass correlation coefficients. An index of "up-to-datedness" was computed for each physician and practice as the number of recommended preventive measures performed divided by the number of eligible patients. An index called "inappropriateness" was computed in the same manner for the not-recommended measures. The intraclass correlation coefficients for the two key study outcomes (up-to-datedness and inappropriateness) were also calculated and compared. RESULTS: The mean up-to-datedness score for the practices was 53.5% (95% confidence interval [CI], 51.0%-56.0%), and the mean inappropriateness score was 21.5% (95% CI, 18.1%-24.9%). The intraclass correlation for up-to-datedness was 0.0365, compared with 0.1790 for inappropriateness. The intraclass correlations for individual preventive maneuvers ranged from 0.005 for blood pressure measurement to 0.66 for chest radiographs of smokers, and as a consequence the required sample size ranged from 20 to 42 physicians per group. CONCLUSIONS: Randomizing by practice clusters and analyzing at the level of the physician has important implications for sample size requirements. Larger intraclass correlations indicate interdependence among the physicians within a cluster; as a consequence, variability within clusters is reduced and the required sample size is increased. The key finding that potential outcome measures differ substantially in their intracluster correlations reinforces the need for researchers to consider the selection of outcome measures carefully and to adjust sample sizes accordingly when the unit of analysis and the unit of randomization are not the same.
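As a rough illustration of why the ICC drives the required number of physicians, the sketch below inflates an individually randomized sample size by the standard design effect 1 + (m − 1)ρ, using roughly 2.3 physicians per practice (106 physicians in 46 practices). The baseline figure of 20 physicians per group is a hypothetical input, not a number taken from the study.

```python
import math

def physicians_per_group(n_independent, physicians_per_practice, icc):
    """Inflate an individually randomized per-group sample size by the standard
    design effect 1 + (m - 1) * icc (illustrative calculation; the analysis is
    at the physician level with practices as the randomized clusters)."""
    design_effect = 1 + (physicians_per_practice - 1) * icc
    return math.ceil(n_independent * design_effect)

# Hypothetical base requirement of 20 physicians per group:
print(physicians_per_group(20, 2.3, 0.0365))  # low ICC (up-to-datedness): modest inflation
print(physicians_per_group(20, 2.3, 0.179))   # higher ICC (inappropriateness): larger inflation
```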

19.
It is often anticipated in a longitudinal cluster randomized clinical trial (cluster-RCT) that the course of the outcome over time will diverge between intervention arms. In these situations, testing the significance of a local intervention effect at the end of the trial may be more clinically relevant than evaluating overall mean differences between treatment groups. In this paper, we present a closed-form power function for detecting this local intervention effect based on maximum likelihood estimates from a mixed-effects linear regression model for three-level continuous data. Sample size requirements for the number of units at each data level are derived from the power function. The power function and the corresponding sample size requirements are verified by a simulation study. Importantly, it is shown that the sample size requirements computed with the proposed power function are smaller than those required when testing the group mean difference using only data from the end of the trial, ignoring the course of the outcome over the entire study period. Copyright © 2009 John Wiley & Sons, Ltd.

20.
We used theoretical and simulation-based approaches to study Type I error rates for one-stage and two-stage analytic methods for cluster-randomized designs. The one-stage approach uses the observed data as outcomes and accounts for within-cluster correlation using a general linear mixed model. The two-stage approach uses the cluster-specific means as the outcomes in a general linear univariate model. We demonstrate analytically that both one-stage and two-stage models achieve exact Type I error rates when cluster sizes are equal. With unbalanced data, an exact size-α test does not exist, and Type I error inflation may occur. Via simulation, we compare the Type I error rates of four one-stage and six two-stage hypothesis testing approaches for unbalanced data. With unbalanced data, the two-stage model weighted by the inverse of the estimated theoretical variance of the cluster means, with the variance constrained to be positive, provided the best Type I error control for studies having at least six clusters per arm. The one-stage model with Kenward–Roger degrees of freedom and unconstrained variance performed well for studies having at least 14 clusters per arm. The popular analytic method of using a one-stage model with denominator degrees of freedom appropriate for balanced data performed poorly for small sample sizes and low intracluster correlation. Because small sample sizes and low intracluster correlation are common features of cluster-randomized trials, the Kenward–Roger method is the preferred one-stage approach. Copyright © 2015 John Wiley & Sons, Ltd.
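A minimal sketch of the best-performing two-stage approach described above — cluster means weighted by the inverse of their estimated theoretical variance, with the between-cluster variance component constrained to be non-negative — is given below. Variance components are taken as inputs for brevity; in practice they would be estimated (e.g., by ANOVA or REML), and the inference step (t reference distribution and degrees of freedom) is omitted.

```python
import numpy as np

def two_stage_weighted_estimate(cluster_means, cluster_sizes, arm,
                                sigma2_between, sigma2_within):
    """Two-stage analysis: weight each cluster mean by the inverse of its
    estimated theoretical variance, sigma2_between + sigma2_within / n_j,
    with the between-cluster variance component constrained to be >= 0.

    All inputs are illustrative; returns the weighted treatment effect
    estimate and its standard error."""
    sigma2_between = max(sigma2_between, 0.0)               # positivity constraint
    n = np.asarray(cluster_sizes, dtype=float)
    w = 1.0 / (sigma2_between + sigma2_within / n)           # inverse-variance weights
    ybar = np.asarray(cluster_means, dtype=float)
    arm = np.asarray(arm)

    def wmean(mask):
        return np.sum(w[mask] * ybar[mask]) / np.sum(w[mask])

    effect = wmean(arm == 1) - wmean(arm == 0)
    se = np.sqrt(1.0 / np.sum(w[arm == 1]) + 1.0 / np.sum(w[arm == 0]))
    return effect, se
```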

