Related Articles
20 related articles found.
1.
The sample size required for a cluster randomised trial is inflated compared with an individually randomised trial because outcomes of participants from the same cluster are correlated. Sample size calculations for longitudinal cluster randomised trials (including stepped wedge trials) need to take account of at least two levels of clustering: the clusters themselves and times within clusters. We derive formulae for sample size for repeated cross‐section and closed cohort cluster randomised trials with normally distributed outcome measures, under a multilevel model allowing for variation between clusters and between times within clusters. Our formulae agree with those previously described for special cases such as crossover and analysis of covariance designs, although simulation suggests that the formulae could underestimate required sample size when the number of clusters is small. Whether using a formula or simulation, a sample size calculation requires estimates of nuisance parameters, which in our model include the intracluster correlation, cluster autocorrelation, and individual autocorrelation. A cluster autocorrelation less than 1 reflects a situation where individuals sampled from the same cluster at different times have less correlated outcomes than individuals sampled from the same cluster at the same time. Nuisance parameters could be estimated from time series obtained in similarly clustered settings with the same outcome measure, using analysis of variance to estimate variance components. Copyright © 2016 John Wiley & Sons, Ltd.
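In the simplest single-period special case, the inflation described here reduces to the familiar design effect. A minimal sketch of that textbook case (a two-level parallel cluster randomised trial with a normal outcome; this is not the paper's full multilevel formula, which additionally involves the cluster and individual autocorrelations):

```python
from statistics import NormalDist

def crt_sample_size(delta, sigma, m, icc, alpha=0.05, power=0.9):
    """Sample size per arm for a parallel cluster randomised trial:
    the individually randomised requirement inflated by the design
    effect 1 + (m - 1) * icc for clusters of size m."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    n_individual = 2 * (z * sigma / delta) ** 2   # unclustered, per arm
    return n_individual * (1 + (m - 1) * icc)     # apply design effect
```

For a difference of 0.3 standard deviations, clusters of 20, and an ICC of 0.05, the design effect is 1.95, roughly doubling the individually randomised requirement.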

2.
The use and development of mobile interventions are experiencing rapid growth. In “just‐in‐time” mobile interventions, treatments are provided via a mobile device, and they are intended to help an individual make healthy decisions ‘in the moment,’ and thus have a proximal, near future impact. Currently, the development of mobile interventions is proceeding at a much faster pace than that of associated data science methods. A first step toward developing data‐based methods is to provide an experimental design for testing the proximal effects of these just‐in‐time treatments. In this paper, we propose a ‘micro‐randomized’ trial design for this purpose. In a micro‐randomized trial, treatments are sequentially randomized throughout the conduct of the study, with the result that each participant may be randomized at the 100s or 1000s of occasions at which a treatment might be provided. Further, we develop a test statistic for assessing the proximal effect of a treatment as well as an associated sample size calculator. We conduct simulation evaluations of the sample size calculator in various settings. Rules of thumb that might be used in designing a micro‐randomized trial are discussed. This work is motivated by our collaboration on the HeartSteps mobile application designed to increase physical activity. Copyright © 2015 John Wiley & Sons, Ltd.
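The sequential-randomisation idea can be illustrated with a toy simulation. The model below (a constant proximal effect with i.i.d. Gaussian noise) is a deliberately simplified assumption, far plainer than the HeartSteps setting, but it shows how each decision point becomes an independent randomisation:

```python
import random

def simulate_mrt(n_participants=50, n_decisions=200, p_treat=0.5,
                 proximal_effect=0.3, seed=1):
    """Toy micro-randomised trial: every decision point is an independent
    coin flip, and the proximal outcome is shifted by `proximal_effect`
    when treatment is delivered. Returns the difference in means, which
    estimates the (assumed constant) proximal effect."""
    rng = random.Random(seed)
    treated, control = [], []
    for _ in range(n_participants):
        for _ in range(n_decisions):     # sequential randomisation
            a = rng.random() < p_treat
            y = proximal_effect * a + rng.gauss(0, 1)
            (treated if a else control).append(y)
    return sum(treated) / len(treated) - sum(control) / len(control)
```

With 50 participants and 200 decision points each, there are 10,000 randomisations, which is what gives micro-randomised trials their power for proximal effects.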

3.
A requirement for calculating sample sizes for cluster randomized trials (CRTs) conducted over multiple periods of time is the specification of a form for the correlation between outcomes of subjects within the same cluster, encoded via the within-cluster correlation structure. Previously proposed within-cluster correlation structures have made strong assumptions; for example, the usual assumption is that correlations between the outcomes of all pairs of subjects are identical (“uniform correlation”). More recently, structures that allow for a decay in correlation between pairs of outcomes measured in different periods have been suggested. However, these structures are overly simple in settings with continuous recruitment and measurement. We propose a more realistic “continuous-time correlation decay” structure whereby correlations between subjects' outcomes decay as the time between these subjects' measurement times increases. We investigate the impact of this structure on trial planning in the context of a primary care diabetes trial, where there is evidence of decaying correlation between pairs of patients' outcomes over time. In particular, for a range of different trial designs, we derive the variance of the treatment effect estimator under continuous-time correlation decay and compare this to the variance obtained under uniform correlation. For stepped wedge and cluster randomized crossover designs, incorrectly assuming uniform correlation will underestimate the required sample size under most trial configurations likely to occur in practice. Planning of CRTs requires consideration of the most appropriate within-cluster correlation structure to obtain a suitable sample size.
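A continuous-time decay structure can be sketched by building the within-cluster correlation matrix directly from measurement times. Exponential decay is used here as one plausible form; the paper's parameterisation may differ, so treat the functional form as an assumption:

```python
import math

def decay_correlation(times, icc, decay_rate):
    """Within-cluster correlation matrix under continuous-time decay:
    corr(Y_i, Y_j) = icc * exp(-decay_rate * |t_i - t_j|) for i != j.
    `times` holds each subject's measurement time."""
    n = len(times)
    return [[1.0 if i == j
             else icc * math.exp(-decay_rate * abs(times[i] - times[j]))
             for j in range(n)] for i in range(n)]
```

Setting `decay_rate = 0` recovers uniform correlation, which makes clear that uniform correlation is the boundary case the paper argues against.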

4.
The sample size required for a cluster randomized trial depends on the magnitude of the intracluster correlation coefficient (ICC). The usual sample size calculation makes no allowance for the fact that the ICC is not known precisely in advance. We develop methods which allow for the uncertainty in a previously observed ICC, using a variety of distributional assumptions. Distributions for the power are derived, reflecting this uncertainty. Further, the observed ICC in a future study will not equal its true value, and we consider the impact of this on power. We implement calculations within a Bayesian simulation approach, and provide one simplification that can be performed using simple simulation within spreadsheet software. In our examples, recognizing the uncertainty in a previous ICC estimate decreases expected power, especially when the power calculated naively from the ICC estimate is high. To protect against the possibility of low power, sample sizes may need to be very substantially increased. Recognizing the variability in the future observed ICC has little effect if prior uncertainty has already been taken into account. We show how our method can be extended to the case in which multiple prior ICC estimates are available. The methods presented in this paper can be used by applied researchers to protect against loss of power, or to choose a design which reduces the impact of uncertainty in the ICC.
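The flavour of the approach can be sketched by propagating draws from a prior on the ICC through a closed-form power formula. The formula below is a standard design-effect approximation, not the paper's full Bayesian machinery, and the two-function split is an illustrative assumption:

```python
from statistics import NormalDist

def power_given_icc(n_per_arm, m, icc, delta, sigma=1.0, alpha=0.05):
    """Approximate power of a two-arm parallel CRT for a known ICC,
    using the design-effect-inflated variance of the mean difference."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    se = sigma * (2 * (1 + (m - 1) * icc) / n_per_arm) ** 0.5
    return NormalDist().cdf(delta / se - z_alpha)

def power_distribution(n_per_arm, m, delta, icc_draws):
    """Propagate prior draws of the ICC into a distribution of power,
    mimicking the paper's simulation-based handling of uncertainty."""
    return sorted(power_given_icc(n_per_arm, m, icc, delta)
                  for icc in icc_draws)
```

Feeding in draws from, say, a Beta prior on the ICC yields a whole distribution of power rather than the single value obtained by plugging in one naive estimate.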

5.
Despite our best efforts, missing outcomes are common in randomized controlled clinical trials. The National Research Council's Committee on National Statistics panel report titled The Prevention and Treatment of Missing Data in Clinical Trials noted that further research is required to assess the impact of missing data on the power of clinical trials and how to set useful target rates and acceptable rates of missing data in clinical trials. In this article, using binary responses for illustration, we establish that conclusions based on statistical analyses that include only complete cases can be seriously misleading, and that the adverse impact of missing data grows not only with increasing rates of missingness but also with increasing sample size. We illustrate how principled sensitivity analysis can be used to assess the robustness of the conclusions. Finally, we illustrate how sample sizes can be adjusted to account for expected rates of missingness. We find that when sensitivity analyses are considered as part of the primary analysis, the required adjustments to the sample size are dramatically larger than those that are traditionally used. Furthermore, in some cases, especially in large trials with small target effect sizes, it is impossible to achieve the desired power.
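The traditional adjustment the abstract contrasts against is simple inflation of the enrolment target by the anticipated dropout rate. It is sketched here only as that conventional baseline; the paper's point is precisely that this adjustment can be far too small once sensitivity analyses enter the primary analysis:

```python
import math

def inflate_for_missingness(n_complete, dropout_rate):
    """Conventional adjustment: enrol n / (1 - d) participants so the
    expected number of completers equals the target n. Does NOT protect
    power under sensitivity analyses for the missing-data mechanism."""
    return math.ceil(n_complete / (1 - dropout_rate))
```

For example, a target of 200 completers with 20% expected dropout gives an enrolment of 250 under this rule.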

6.
Individual randomized trials (IRTs) and cluster randomized trials (CRTs) with binary outcomes arise in a variety of settings and are often analyzed by logistic regression (fitted using generalized estimating equations for CRTs). The effect of stratification on the required sample size is less well understood for trials with binary outcomes than for continuous outcomes. We propose easy-to-use methods for sample size estimation for stratified IRTs and CRTs and demonstrate the use of these methods for a tuberculosis prevention CRT currently being planned. For both IRTs and CRTs, we also identify the ratio of the sample size for a stratified trial vs a comparably powered unstratified trial, allowing investigators to evaluate how stratification will affect the required sample size when planning a trial. For CRTs, these can be used when the investigator has estimates of the within-stratum intracluster correlation coefficients (ICCs) or by assuming a common within-stratum ICC. Using these methods, we describe scenarios where stratification may have a practically important impact on the required sample size. We find that in the two-stratum case, for both IRTs and for CRTs with very small cluster sizes, there are unlikely to be plausible scenarios in which an important sample size reduction is achieved when the overall probability of a subject experiencing the event of interest is low. When the probability of events is not small, or when cluster sizes are large, however, there are scenarios where practically important reductions in sample size result from stratification.

7.
Cluster randomized trials (CRTs) involve the random assignment of intact social units rather than independent subjects to intervention groups. Time‐to‐event outcomes often are endpoints in CRTs. Analyses of such data need to account for the correlation among cluster members. The intracluster correlation coefficient (ICC) is used to assess the similarity among binary and continuous outcomes that belong to the same cluster. However, estimating the ICC in CRTs with time‐to‐event outcomes is a challenge because of the presence of censored observations. The literature suggests that the ICC may be estimated using either censoring indicators or observed event times. A simulation study explores the effect of administrative censoring on estimating the ICC. Results show that ICC estimators derived from censoring indicators or observed event times are negatively biased. Analytic work further supports these results. Observed event times are preferred for estimating the ICC when the frequency of administrative censoring is minimal. To our knowledge, the existing literature provides no practical guidance on estimating the ICC when a substantial amount of administrative censoring is present. The results from this study corroborate the need for further methodological research on estimating the ICC for correlated time‐to‐event outcomes. Copyright © 2016 John Wiley & Sons, Ltd.

8.
Studies in health research are commonly carried out in clustered settings, where the individual response data are correlated within clusters. Estimation and modelling of the extent of between-cluster variation contributes to understanding of the current study and to design of future studies. It is common to express between-cluster variation as an intracluster correlation coefficient (ICC), since this measure is directly comparable across outcomes. ICCs are generally reported unaccompanied by confidence intervals. In this paper, we describe a Bayesian modelling approach to interval estimation of the ICC. The flexibility of this framework allows useful extensions which are not easily available in existing methods, for example assumptions other than Normality for continuous outcome data, adjustment for individual-level covariates and simultaneous interval estimation of several ICCs. There is also the opportunity to incorporate prior beliefs on likely values of the ICC. The methods are exemplified using data from a cluster randomized trial.
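The conventional point of departure for such interval estimation is the one-way ANOVA estimator of the ICC. A balanced-design sketch (equal cluster sizes assumed for simplicity; the paper replaces this point estimate with a full Bayesian posterior):

```python
def anova_icc(clusters):
    """One-way ANOVA estimator of the ICC from a list of clusters,
    each a list of continuous outcomes. Assumes equal cluster sizes:
    icc = (MSB - MSW) / (MSB + (m - 1) * MSW)."""
    k = len(clusters)
    m = len(clusters[0])                  # common cluster size
    n = k * m
    grand = sum(sum(c) for c in clusters) / n
    msb = m * sum((sum(c) / m - grand) ** 2 for c in clusters) / (k - 1)
    msw = sum((y - sum(c) / m) ** 2 for c in clusters for y in c) / (n - k)
    return (msb - msw) / (msb + (m - 1) * msw)
```

Clusters whose members are identical but whose means differ give an ICC of 1; clusters that are internally variable but share the same mean give a negative estimate, one reason interval estimation is informative.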

9.
In this paper we discuss a design for multi-arm randomized clinical trials (RCTs) in which clinicians and their patients can selectively exclude one of the randomized treatment arms. This approach has the advantage that it should expedite protocol development, and allow easier and faster recruitment of patients into the trial. However, to preserve the randomized nature of treatment comparisons, not all recruited patients can be included in all treatment comparisons. This dictates that treatment arms are compared in a pairwise fashion, and that the numbers of patients included in different treatment comparisons may not be equal. The total trial size of a multi-arm RCT that allowed selective exclusion of arms would be greater than the size of an equivalent standard multi-arm RCT. However, the time taken to recruit to the study would be reduced. The implications for the design, monitoring and analysis of such RCTs are discussed.

10.
Giraudeau, Ravaud and Donner in 2008 presented a formula for sample size calculations for cluster randomised crossover trials, when the intracluster correlation coefficient, interperiod correlation coefficient and mean cluster size are specified in advance. However, in many randomised trials, the number of clusters is constrained in some way, but the mean cluster size is not. We present a version of the Giraudeau formula for sample size calculations for cluster randomised crossover trials when the number of clusters is fixed. Formulae are given for the minimum number of clusters, the maximum cluster size and the relationship between the correlation coefficients when there are constraints on both the number of clusters and the cluster size. Our version of the formula may aid the efficient planning and design of cluster randomised crossover trials.
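The design effect at the heart of these calculations is commonly quoted in the form below. Treat the exact parameterisation as an assumption to be checked against the original 2008 paper; the sketch is only meant to show how the interperiod correlation offsets the usual clustering penalty:

```python
def crxo_design_effect(m, icc, iperiod_corr):
    """Design effect for a cluster randomised cross-over trial in the
    form usually attributed to Giraudeau et al. (2008):
        DE = 1 + (m - 1) * icc - m * iperiod_corr,
    where m is the cluster-period size, icc the within-period
    intracluster correlation, and iperiod_corr the interperiod
    correlation. iperiod_corr = 0 recovers the parallel-CRT design
    effect; iperiod_corr = icc gives DE = 1 - icc."""
    return 1 + (m - 1) * icc - m * iperiod_corr
```

The limiting cases are instructive: with no interperiod correlation the cross-over buys nothing over a parallel CRT, while a non-decaying cluster effect makes the design more efficient than individual randomisation.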

11.
Hollis S. Statistics in Medicine 2002;21(24):3823-3834
Many clinical trials are analysed using an intention-to-treat (ITT) approach. A full application of the ITT approach is only possible when complete outcome data are available for all randomized subjects. In a recent survey of clinical trial reports including an ITT analysis, complete case analysis (excluding all patients with a missing response) was common. This does not comply with the basic principles of ITT since not all randomized subjects are included in the analysis. Analyses of data with missing values are based on untestable assumptions, and so sensitivity analysis presenting a range of estimates under alternative assumptions about the missing-data mechanism is recommended. For a binary outcome, extreme case analysis has been suggested as a simple form of sensitivity analysis, but this is rarely conclusive. A graphical sensitivity analysis is proposed which displays the results of all possible allocations of cases with missing binary outcome. Extension to allow binomial variation in outcome is also considered. The display is based on easily interpretable parameters and allows informal examination of the effects of varying prior beliefs.

12.
In cluster‐randomized trials, intervention effects are often formulated by specifying marginal models, fitting them under a working independence assumption, and using robust variance estimates to address the association in the responses within clusters. We develop sample size criteria within this framework, with analyses based on semiparametric Cox regression models fitted with event times subject to right censoring. At the design stage, copula models are specified to enable derivation of the asymptotic variance of estimators from a marginal Cox regression model and to compute the number of clusters necessary to satisfy power requirements. Simulation studies demonstrate the validity of the sample size formula in finite samples for a range of cluster sizes, censoring rates, and degrees of within‐cluster association among event times. The power and relative efficiency implications of copula misspecification are studied, as well as the effect of within‐cluster dependence in the censoring times. Sample size criteria and other design issues are also addressed for the setting where the event status is only ascertained at periodic assessments and times are interval censored. Copyright © 2014 John Wiley & Sons, Ltd.

13.
The cluster randomized cross-over design has been proposed in particular because it prevents an imbalance that may bring into question the internal validity of parallel group cluster trials. We derived a sample size formula for continuous outcomes that takes into account both the intraclass correlation coefficient (representing the clustering effect) and the interperiod correlation (induced by the cross-over design).

14.
In cluster‐randomized trials, groups of individuals (clusters) are randomized to the treatments or interventions to be compared. In many of those trials, the primary objective is to compare the time for an event to occur between randomized groups, and the shared frailty model well fits clustered time‐to‐event data. Members of the same cluster tend to be more similar than members of different clusters, causing correlations. As correlations affect the power of a trial to detect intervention effects, the clustered design has to be considered in planning the sample size. In this publication, we derive a sample size formula for clustered time‐to‐event data with constant marginal baseline hazards and correlation within clusters induced by a shared frailty term. The sample size formula is easy to apply and can be interpreted as an extension of the widely used Schoenfeld's formula, accounting for the clustered design of the trial. Simulations confirm the validity of the formula and its use also for non‐constant marginal baseline hazards. Findings are illustrated on a cluster‐randomized trial investigating methods of disseminating quality improvement to addiction treatment centers in the USA. Copyright © 2012 John Wiley & Sons, Ltd.
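Schoenfeld's formula, which the paper extends, gives the required number of events from the target hazard ratio. The sketch below inflates it by the standard design effect as a rough stand-in for the paper's frailty-based correction (the frailty formula itself is not reproduced here):

```python
import math
from statistics import NormalDist

def clustered_events(hr, alpha=0.05, power=0.8, alloc=0.5, m=10, icc=0.02):
    """Schoenfeld's required number of events for a log-rank comparison,
    d = (z_{1-a/2} + z_{power})^2 / (p(1-p) * (log hr)^2),
    multiplied by the design effect 1 + (m - 1) * icc as an
    approximate allowance for clustering."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    d = z ** 2 / (alloc * (1 - alloc) * math.log(hr) ** 2)
    return math.ceil(d * (1 + (m - 1) * icc))
```

With `m = 1` (no clustering) and a hazard ratio of 0.7 at 80% power, this recovers the familiar figure of roughly 247 events.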

15.
This work is motivated by trials in rapidly lethal cancers or cancers for which measuring shrinkage of tumours is infeasible. In either case, traditional phase II designs focussing on tumour response are unsuitable. Usually, tumour response is considered as a substitute for the more relevant but longer‐term endpoint of death. In rapidly lethal cancers such as pancreatic cancer, there is no need to use a surrogate, as the definitive endpoint is (sadly) available so soon. In uveal cancer, there is no counterpart to tumour response, and so, mortality is the only realistic response available. Cytostatic cancer treatments do not seek to kill tumours, but to mitigate their effects. Trials of such therapy might also be based on survival times to death or progression, rather than on tumour shrinkage. Phase II oncology trials are often conducted with all study patients receiving the experimental therapy, and this approach is considered here. Simple extensions of one‐stage and two‐stage designs based on binary responses are presented. Outcomes based on survival past a small number of landmark times are considered: here, the case of three such times is explored in examples. This approach allows exact calculations to be made for both design and analysis purposes. Simulations presented here show that calculations based on normal approximations can lead to loss of power when sample sizes are small. Two‐stage versions of the procedure are also suggested. Copyright © 2014 John Wiley & Sons, Ltd.

16.
In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost‐effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra‐cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost‐effectiveness of an intervention. Copyright © 2014 John Wiley & Sons, Ltd.
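The classic single-outcome relative of these optima balances the cost of recruiting a cluster against the cost of recruiting a subject. It is shown here as the textbook baseline; the paper's cost-effectiveness optima additionally involve the cost/effect correlations and the variance ratio:

```python
import math

def optimal_cluster_size(cost_cluster, cost_subject, icc):
    """Cost-efficient cluster size for a two-level design with a
    single continuous outcome:
        m* = sqrt((c_cluster / c_subject) * (1 - icc) / icc).
    Expensive clusters or a small ICC push toward larger clusters."""
    return math.sqrt((cost_cluster / cost_subject) * (1 - icc) / icc)
```

For example, if recruiting a cluster costs 100 times as much as recruiting a subject and the ICC is 0.5, the optimum is exactly 10 subjects per cluster.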

17.
Cluster randomized trials (CRTs) were originally proposed for use when randomization at the subject level is practically infeasible or may lead to a severe estimation bias of the treatment effect. However, recruiting an additional cluster costs more than enrolling an additional subject in an individually randomized trial. Under budget constraints, researchers have proposed the optimal sample sizes in two-level CRTs. CRTs may have a three-level structure, in which two levels of clustering should be considered. In this paper, we propose optimal designs in three-level CRTs with a binary outcome, assuming a nested exchangeable correlation structure in generalized estimating equation models. We provide the variance of estimators of three commonly used measures: risk difference, risk ratio, and odds ratio. For a given sampling budget, we discuss how many clusters and how many subjects per cluster are necessary to minimize the variance of each measure estimator. For known association parameters, the locally optimal design is proposed. When association parameters are unknown but within predetermined ranges, the MaxiMin design is proposed to maximize the minimum of relative efficiency over the possible ranges, that is, to minimize the risk of the worst scenario.

18.
We extend the pattern‐mixture approach to handle missing continuous outcome data in longitudinal cluster randomized trials, which randomize groups of individuals to treatment arms, rather than the individuals themselves. Individuals who drop out at the same time point are grouped into the same dropout pattern. We approach extrapolation of the pattern‐mixture model by applying multilevel multiple imputation, which imputes missing values while appropriately accounting for the hierarchical data structure found in cluster randomized trials. To assess parameters of interest under various missing data assumptions, imputed values are multiplied by a sensitivity parameter, k, which increases or decreases imputed values. Using simulated data, we show that estimates of parameters of interest can vary widely under differing missing data assumptions. We conduct a sensitivity analysis using real data from a cluster randomized trial by increasing k until the treatment effect inference changes. By performing a sensitivity analysis for missing data, researchers can assess whether certain missing data assumptions are reasonable for their cluster randomized trial.
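The role of the sensitivity parameter k can be sketched with a deliberately stripped-down imputation step (a simple resampling draw from the observed data, without the multilevel multiple-imputation machinery the paper actually uses):

```python
import random

def k_adjusted_impute(observed, n_missing, k, seed=0):
    """Draw imputations from the observed-data distribution, then
    multiply each by the sensitivity parameter k, as in the paper's
    pattern-mixture sensitivity analysis. k = 1 reproduces a
    missing-at-random-style imputation; k != 1 shifts imputed values
    up or down for dropouts."""
    rng = random.Random(seed)
    return [k * rng.choice(observed) for _ in range(n_missing)]
```

A sensitivity analysis then repeats the downstream treatment-effect estimate over a grid of k values until the inference changes.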

19.
Cluster randomized trials (CRTs) are increasingly used to evaluate the effectiveness of health‐care interventions. A key feature of CRTs is that the observations on individuals within clusters are correlated as a result of between‐cluster variability. Sample size formulae exist which account for such correlations, but they make different assumptions regarding the between‐cluster variability in the intervention arm of a trial, resulting in different sample size estimates. We explore the relationship for binary outcome data between two common measures of between‐cluster variability: k, the coefficient of variation and ρ, the intracluster correlation coefficient. We then assess how the assumptions of constant k or ρ across treatment arms correspond to different assumptions about intervention effects. We assess implications for sample size estimation and present a simple solution to the problems outlined. Copyright © 2009 John Wiley & Sons, Ltd.
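For binary outcomes the two measures are linked through the between-cluster variance. A sketch of the commonly quoted relationship (assuming a common event probability pi; verify the parameterisation against the paper before relying on it):

```python
def cv_from_icc(icc, pi):
    """Between-cluster coefficient of variation k implied by an ICC rho
    for a binary outcome with common event probability pi:
        k^2 = rho * (1 - pi) / pi,
    since the between-cluster variance is rho * pi * (1 - pi) and
    k = sigma_between / pi."""
    return (icc * (1 - pi) / pi) ** 0.5
```

The dependence on pi is the crux of the abstract's point: holding k constant across arms and holding rho constant across arms imply different things once the intervention changes the event probability.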

20.
Bayesian approaches to inference in cluster randomized trials have been investigated for normally distributed and binary outcome measures. However, relatively little attention has been paid to outcome measures which are counts of events. We discuss an extension of previously published Bayesian hierarchical models to count data, which usually can be assumed to be distributed according to a Poisson distribution. We develop two models, one based on the traditional rate ratio, and one based on the rate difference which may often be more intuitively interpreted for clinical trials, and is needed for economic evaluation of interventions. We examine the relationship between the intracluster correlation coefficient (ICC) and the between‐cluster variance for each of these two models. In practice, this allows one to use the previously published evidence on ICCs to derive an informative prior distribution which can then be used to increase the precision of the posterior distribution of the ICC. We demonstrate our models using a previously published trial assessing the effectiveness of an educational intervention and a prior distribution previously derived. We assess the robustness of the posterior distribution for effectiveness to departures from a normal distribution of the random effects. Copyright © 2009 John Wiley & Sons, Ltd.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号