Similar Articles
20 similar articles found.
1.
BACKGROUND AND OBJECTIVES: When contamination is present, randomization at the patient level dilutes the treatment effect. The usual solution is to randomize at the cluster level, but this comes at the cost of efficiency and, more importantly, may introduce selection bias. Furthermore, it may slow recruitment in the clusters randomized to the "less interesting" treatment. We discuss an alternative randomization procedure that addresses these problems. METHODS: Pseudo cluster randomization is a two-stage randomization procedure that strikes a balance between individual randomization and cluster randomization. For common scenarios, the design factors needed to calculate the appropriate sample size are tabulated. RESULTS: A pseudo cluster randomized design can reduce selection bias and contamination while maintaining good efficiency and possibly improving enrollment. To support a well-informed choice of randomization procedure, we discuss the advantages of each method and provide a decision flow chart. CONCLUSION: When contamination is thought to be substantial in an individually randomized setting and a cluster randomized design would suffer from selection bias and/or slow recruitment, pseudo cluster randomization can be considered.
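The two-stage procedure described in this abstract can be sketched in code. This is a minimal illustration under assumed details: clusters are first randomized to "favour A" or "favour B", and patients within each cluster are then individually randomized with an unbalanced ratio f : (1 − f) toward the favoured arm, so most patients in a cluster share a treatment (limiting contamination) while the overall allocation stays balanced. The function name and the choice f = 0.8 are illustrative, not values from the paper.

```python
import random

def pseudo_cluster_randomize(clusters, n_per_cluster, f=0.8, seed=1):
    """Two-stage pseudo cluster randomization (illustrative sketch).

    Stage 1: each cluster is randomized to favour treatment A or B.
    Stage 2: patients in an A-favouring cluster receive A with
    probability f (and B with 1 - f), and vice versa.
    """
    rng = random.Random(seed)
    favoured = {c: rng.choice("AB") for c in clusters}  # stage 1
    assignments = {}
    for c in clusters:
        p_a = f if favoured[c] == "A" else 1 - f        # stage 2
        assignments[c] = ["A" if rng.random() < p_a else "B"
                          for _ in range(n_per_cluster)]
    return favoured, assignments

favoured, assignments = pseudo_cluster_randomize(range(20), 50)
n_a = sum(a == "A" for arm in assignments.values() for a in arm)
print(n_a, 20 * 50 - n_a)  # overall allocation is roughly balanced
```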

2.
Efron's biased coin design is a restricted randomization procedure that has very favorable balancing properties, yet it is fully randomized, in that subjects are always randomized to one of the two treatments with probability less than 1. The parameter of interest is the bias p of the coin, which can range from 0.5 to 1. In this note, we propose a compound optimization strategy that selects p based on a subjective weighting of the relative importance of the two fundamental criteria of interest for restricted randomization mechanisms, namely balance between the treatment assignments and allocation randomness. We use exact and asymptotic distributional properties of Efron's coin to find the optimal p under compound criteria involving imbalance variability, expected imbalance, selection bias, and accidental bias, for both small/moderate trials and large samples. Copyright © 2015 John Wiley & Sons, Ltd.
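Efron's coin itself is simple to state, and a minimal simulation makes the balancing behaviour concrete. The sketch below assumes the classical two-arm form with the commonly cited default p = 2/3; the optimal p under a compound criterion is what the paper derives and is not reproduced here.

```python
import random

def efron_biased_coin(n, p=2/3, seed=0):
    """Efron's biased coin (sketch): if one arm is behind, assign the
    next subject to it with probability p; toss a fair coin when the
    arms are balanced. p = 0.5 is complete randomization; p -> 1
    forces near-perfect balance at the cost of predictability."""
    rng = random.Random(seed)
    counts = {"A": 0, "B": 0}
    sequence = []
    for _ in range(n):
        d = counts["A"] - counts["B"]          # current imbalance
        if d == 0:
            arm = "A" if rng.random() < 0.5 else "B"
        elif d < 0:                            # A is behind
            arm = "A" if rng.random() < p else "B"
        else:                                  # B is behind
            arm = "B" if rng.random() < p else "A"
        counts[arm] += 1
        sequence.append(arm)
    return sequence, counts

seq, counts = efron_biased_coin(100)
print(counts, abs(counts["A"] - counts["B"]))  # imbalance stays small
```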

3.
Cluster randomized trials (CRTs) are often prone to selection bias despite randomization. Using a simulation study, we investigated the use of propensity score (PS) based methods in estimating treatment effects in CRTs with selection bias when the outcome is quantitative. Of four PS‐based methods (adjustment on PS, inverse weighting, stratification, and optimal full matching method), three successfully corrected the bias, as did an approach using classical multivariable regression. However, they showed poorer statistical efficiency than classical methods, with higher standard error for the treatment effect, and type I error much smaller than the 5% nominal level. Copyright © 2013 John Wiley & Sons, Ltd.

4.
Stratified cluster randomization trials (CRTs) are frequently employed in clinical and healthcare research. Compared with simply randomized CRTs, stratified CRTs reduce imbalance in baseline prognostic factors across intervention groups. Given this popularity, there has been growing interest in methodological development on sample size estimation and power analysis for stratified CRTs; however, existing work mostly assumes equal cluster sizes within each stratum and uses multilevel models. Clusters in CRTs are often naturally formed, with random sizes. With varying cluster size, commonly used ad hoc approaches ignore this variability, which may underestimate (overestimate) the required number of clusters per group per stratum and lead to underpowered (overpowered) clinical trials. We propose closed-form sample size formulas for estimating the required total number of subjects and the number of clusters per group per stratum, based on the Cochran-Mantel-Haenszel statistic for stratified cluster randomization designs with binary outcomes, accounting for both clustering and varying cluster size. We investigate the impact of various design parameters on the relative change in the required number of clusters per group per stratum due to varying cluster size. Simulation studies are conducted to evaluate the finite-sample performance of the proposed sample size method. A real application to a pragmatic stratified CRT of a triad of chronic kidney disease, diabetes, and hypertension is presented for illustration.

5.
In clinical trials with a small sample size, the characteristics (covariates) of patients assigned to different treatment arms may not be well balanced, which may lead to an inflated type I error rate. This problem can be more severe in trials that use response‐adaptive randomization rather than equal randomization, because the former may result in smaller sample sizes for some treatment arms. We have developed a patient allocation scheme for trials with binary outcomes that adjusts for covariate imbalance during response‐adaptive randomization. We used simulation studies to evaluate the performance of the proposed design. The proposed design keeps the important advantage of a standard response‐adaptive design, namely assigning more patients to the better treatment arms, and thus it is ethically appealing. At the same time, it improves over the standard response‐adaptive design by controlling covariate imbalance between treatment arms, maintaining the nominal type I error rate, and offering greater power. Copyright © 2010 John Wiley & Sons, Ltd.

6.
We present a Bayesian design for a multi-centre, randomized clinical trial of two chemotherapy regimens for advanced or metastatic unresectable soft tissue sarcoma. After randomization, each patient receives up to four stages of chemotherapy, with the patient's disease evaluated after each stage and categorized on a trinary scale of severity. Therapy is continued to the next stage if the patient's disease is stable, and is discontinued if either tumour response or treatment failure is observed. We assume a probability model that accounts for baseline covariates and the multi-stage treatment and disease evaluation structure. The design uses covariate-adjusted adaptive randomization based on a score that combines the patient's probabilities of overall treatment success or failure. The adaptive randomization procedure generalizes the method proposed by Thompson (1933) for two binomial distributions with beta priors. A simulation study of the design in the context of the sarcoma trial is presented.
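Thompson's (1933) rule for two binomial arms with beta priors, which this design generalizes, can be sketched directly: the next patient is randomized to arm A with probability P(p_A > p_B | data). The Monte Carlo illustration below assumes uniform Beta(1, 1) priors; it is not the paper's covariate-adjusted multi-stage score.

```python
import random

def thompson_prob(successes_a, n_a, successes_b, n_b, draws=20000, seed=0):
    """Monte Carlo estimate of P(p_A > p_B | data) under independent
    Beta(1 + s, 1 + n - s) posteriors (uniform priors). Under
    Thompson's rule the next patient is randomized to A with this
    probability."""
    rng = random.Random(seed)
    a_post = (1 + successes_a, 1 + n_a - successes_a)
    b_post = (1 + successes_b, 1 + n_b - successes_b)
    wins = sum(rng.betavariate(*a_post) > rng.betavariate(*b_post)
               for _ in range(draws))
    return wins / draws

# 12/20 responses on A versus 6/20 on B: A is strongly favoured
print(thompson_prob(12, 20, 6, 20))
```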

7.
It is essential for the integrity of double‐blind clinical trials that, during the course of the study, the individual treatment allocations of the patients, as well as the treatment effect, remain unknown to all involved persons. Recently, methods have been proposed that are claimed to allow reliable estimation of the treatment effect from blinded data by using information about the block length of the randomization procedure. If this held true, it would be difficult to preserve blindness without taking further measures. The suggested procedures apply to continuous data. We investigate the properties of these methods thoroughly through repeated simulations per scenario. Furthermore, we propose a method for blinded treatment effect estimation for binary data, and develop blinded tests for treatment group differences for both continuous and binary data. We report the results of comprehensive simulation studies investigating the features of these procedures. It is shown that, for sample sizes and treatment effects typical of clinical trials, no reliable inference can be made on the treatment group difference, owing to the bias and imprecision of the blinded estimates. Copyright © 2009 John Wiley & Sons, Ltd.

8.
Increased survival is a common goal of cancer clinical trials. Owing to the long periods of observation and follow‐up to assess patient survival outcome, it is difficult to use outcome‐adaptive randomization in these trials. In practice, often information about a short‐term response is quickly available during or shortly after treatment, and this short‐term response is a good predictor for long‐term survival. For example, complete remission of leukemia can be achieved and measured after a few cycles of treatment. It is a short‐term response that is desirable for prolonging survival. We propose a new design for survival trials when such short‐term response information is available. We use the short‐term information to ‘speed up’ the adaptation of the randomization procedure. We establish a connection between the short‐term response and the long‐term survival through a Bayesian model, first by using prior clinical information, and then by dynamically updating the model according to information accumulated in the ongoing trial. Interim monitoring and final decision making are based upon inference on the primary outcome of survival. The new design uses fewer patients, and can more effectively assign patients to the better treatment arms. We demonstrate these properties through simulation studies. Copyright © 2009 John Wiley & Sons, Ltd.

9.
Randomization designs for multiarm clinical trials are increasingly used in practice, especially in phase II dose‐ranging studies. Many new methods have been proposed in the literature; however, there is a lack of systematic, head‐to‐head comparison of the competing designs. In this paper, we systematically investigate statistical properties of various restricted randomization procedures for multiarm trials with fixed and possibly unequal allocation ratios. The design operating characteristics include measures of allocation balance, randomness of treatment assignments, variations in the allocation ratio, and statistical characteristics such as type I error rate and power. The results should help clinical investigators select an appropriate randomization procedure for their trial. We also provide a web‐based R Shiny application that can be used to reproduce all results in this paper and run simulations under additional user‐defined scenarios.

10.
Studies with unequal allocation to two or more treatment groups often require a large block size for permuted block allocation. This can be a problem in small studies, multi-center studies, or adaptive dose-finding studies. In this paper, an allocation procedure is offered that generalizes the maximal procedure of Berger, Ivanova, and Knoll to K ≥ 2 treatment groups and any allocation ratio. Brick tunnel (BT) randomization requires the allocation path drawn in K-dimensional space to stay close to the allocation ray that corresponds to the targeted allocation ratio. Specifically, it confines the allocation path to the set of K-dimensional unit cubes pierced by the allocation ray (the 'brick tunnel'). An important property of BT randomization is that the transition probabilities at each node within the tunnel are defined so that the unconditional allocation ratio is the same at every allocation step, a property not necessarily met by other allocation procedures that implement unequal allocation.
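The large-block problem that motivates the paper can be made concrete with ordinary permuted block allocation, where the smallest feasible block contains the sum of the allocation-ratio weights. The sketch below illustrates that baseline approach (not the brick tunnel procedure itself); the 7:3:2 ratio is an invented example.

```python
import random

def permuted_blocks(ratio, n_blocks, seed=0):
    """Permuted block randomization for an unequal allocation ratio.
    Each block contains sum(ratio) assignments -- e.g. a 7:3:2
    three-arm ratio forces blocks of 12, which is the large-block-size
    problem the brick tunnel procedure is designed to avoid."""
    rng = random.Random(seed)
    # one block: arm index repeated according to its ratio weight
    block = [arm for arm, w in enumerate(ratio) for _ in range(w)]
    sequence = []
    for _ in range(n_blocks):
        rng.shuffle(block)        # permute within each block
        sequence.extend(block)
    return sequence

seq = permuted_blocks((7, 3, 2), n_blocks=4)
print(len(seq), seq.count(0), seq.count(1), seq.count(2))  # 48 28 12 8
```

Note that after every complete block the realized allocation exactly matches 7:3:2; it is only within a block that imbalance can accumulate.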

11.
Despite randomization, selection bias may occur in cluster randomized trials. Classical multivariable regression usually allows for adjusting treatment effect estimates with unbalanced covariates. However, for binary outcomes with low incidence, such a method may fail because of separation problems. This simulation study focused on the performance of propensity score (PS)‐based methods to estimate relative risks from cluster randomized trials with binary outcomes with low incidence. The results suggested that among the different approaches used (multivariable regression, direct adjustment on PS, inverse weighting on PS, and stratification on PS), only direct adjustment on the PS fully corrected the bias and moreover had the best statistical properties. Copyright © 2014 John Wiley & Sons, Ltd.
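Of the PS-based approaches compared here, inverse probability weighting has a particularly compact form, sketched below with toy data and pre-computed propensity scores; in practice the PS would be estimated elsewhere (for example, by logistic regression on baseline covariates), and the numbers are invented for illustration.

```python
def iptw_relative_risk(data):
    """Inverse probability of treatment weighting (sketch): each
    subject is weighted by 1/PS if treated and 1/(1 - PS) otherwise,
    and the relative risk is the ratio of weighted event rates."""
    def weighted_risk(group):
        num = den = 0.0
        for treated, outcome, ps in data:
            if treated == group:
                w = 1.0 / ps if treated else 1.0 / (1.0 - ps)
                num += w * outcome
                den += w
        return num / den
    return weighted_risk(1) / weighted_risk(0)

# (treated, outcome, propensity score) triples -- toy data
data = [(1, 1, 0.6), (1, 0, 0.5), (1, 1, 0.7), (1, 0, 0.6),
        (0, 1, 0.4), (0, 0, 0.5), (0, 0, 0.3), (0, 0, 0.4)]
print(iptw_relative_risk(data))
```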

12.
In this paper, we propose a model-based approach to detect and adjust for observable selection bias in a randomized clinical trial with two treatments and binary outcomes. The proposed method was evaluated using simulations of a randomized block design in which the investigator favoured the experimental treatment by attempting to enroll stronger patients (with a greater probability of treatment success) when the probability of the next treatment being experimental was high, and weaker patients (with a lower probability of treatment success) when that probability was low. The method allows not only testing for the presence of observable selection bias, but also testing for a difference in treatment effects while adjusting for possible selection bias.

13.
A requirement for calculating sample sizes for cluster randomized trials (CRTs) conducted over multiple periods of time is the specification of a form for the correlation between outcomes of subjects within the same cluster, encoded via the within-cluster correlation structure. Previously proposed within-cluster correlation structures have made strong assumptions; for example, the usual assumption is that correlations between the outcomes of all pairs of subjects are identical (“uniform correlation”). More recently, structures that allow for a decay in correlation between pairs of outcomes measured in different periods have been suggested. However, these structures are overly simple in settings with continuous recruitment and measurement. We propose a more realistic “continuous-time correlation decay” structure whereby correlations between subjects' outcomes decay as the time between these subjects' measurement times increases. We investigate the use of this structure on trial planning in the context of a primary care diabetes trial, where there is evidence of decaying correlation between pairs of patients' outcomes over time. In particular, for a range of different trial designs, we derive the variance of the treatment effect estimator under continuous-time correlation decay and compare this to the variance obtained under uniform correlation. For stepped wedge and cluster randomized crossover designs, incorrectly assuming uniform correlation will underestimate the required sample size under most trial configurations likely to occur in practice. Planning of CRTs requires consideration of the most appropriate within-cluster correlation structure to obtain a suitable sample size.
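The contrast between uniform correlation and continuous-time decay can be sketched directly. The exponential decay form and the rate parameter below are illustrative assumptions, not the paper's fitted values.

```python
import math

def within_cluster_correlation(t1, t2, icc, decay_rate=1.0, uniform=False):
    """Correlation between outcomes of two subjects in the same cluster
    measured at times t1 and t2 (illustrative sketch). Under 'uniform
    correlation' it equals the ICC regardless of timing; under a
    continuous-time decay structure it shrinks as the gap |t1 - t2|
    grows, here exponentially with an assumed rate `decay_rate`."""
    if uniform:
        return icc
    return icc * math.exp(-decay_rate * abs(t1 - t2))

print(within_cluster_correlation(0.0, 0.0, icc=0.05))  # same time: full ICC
print(within_cluster_correlation(0.0, 2.0, icc=0.05))  # decayed toward 0
```

Assuming uniform correlation when the truth decays overstates how much information distant-in-time measurements share, which is why the paper finds that sample sizes can be underestimated for stepped wedge and crossover designs.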

14.
Giraudeau, Ravaud and Donner in 2008 presented a formula for sample size calculations for cluster randomised crossover trials, for use when the intracluster correlation coefficient, interperiod correlation coefficient and mean cluster size are specified in advance. However, in many randomised trials the number of clusters is constrained in some way, while the mean cluster size is not. We present a version of the Giraudeau formula for sample size calculations for cluster randomised crossover trials when the number of clusters is fixed. Formulae are given for the minimum number of clusters, the maximum cluster size, and the relationship between the correlation coefficients when there are constraints on both the number of clusters and the cluster size. Our version of the formula may aid the efficient planning and design of cluster randomised crossover trials.

15.
The sample size required for a cluster randomised trial is inflated compared with an individually randomised trial because outcomes of participants from the same cluster are correlated. Sample size calculations for longitudinal cluster randomised trials (including stepped wedge trials) need to take account of at least two levels of clustering: the clusters themselves and times within clusters. We derive formulae for sample size for repeated cross‐section and closed cohort cluster randomised trials with normally distributed outcome measures, under a multilevel model allowing for variation between clusters and between times within clusters. Our formulae agree with those previously described for special cases such as crossover and analysis of covariance designs, although simulation suggests that the formulae could underestimate required sample size when the number of clusters is small. Whether using a formula or simulation, a sample size calculation requires estimates of nuisance parameters, which in our model include the intracluster correlation, cluster autocorrelation, and individual autocorrelation. A cluster autocorrelation less than 1 reflects a situation where individuals sampled from the same cluster at different times have less correlated outcomes than individuals sampled from the same cluster at the same time. Nuisance parameters could be estimated from time series obtained in similarly clustered settings with the same outcome measure, using analysis of variance to estimate variance components. Copyright © 2016 John Wiley & Sons, Ltd.

16.
When several treatment arms are administered alongside a control arm in a trial, dropping the non‐promising treatments at an early stage saves resources and expedites the trial. In such adaptive designs with treatment selection, a common selection rule is to pick the most promising treatment, for example, the treatment with the numerically highest mean response, at the interim stage. However, with only a single treatment selected for final evaluation, this selection rule is often too inflexible. We modify this interim selection rule by introducing a flexible selection margin that judges the acceptable treatment difference. Another treatment can be selected at the interim stage, in addition to the empirically best one, if the observed treatment‐effect difference between them does not exceed this margin. We consider a study starting with two treatment arms and a control arm. We develop hypothesis testing procedures to assess the selected treatment(s), taking the interim selection process into account. Compared with one‐winner selection designs, the modified selection rule makes the design more flexible and practical. Copyright © 2012 John Wiley & Sons, Ltd.

17.
Stratified permuted block randomization is commonly applied in clinical trials, but other randomization methods that attempt to balance treatment counts marginally for the stratification variables can accommodate more stratification variables. When the analysis stratifies on the cells formed by crossing the stratification variables, these other randomization methods yield treatment effect estimates with larger variance than stratified permuted blocks does. When it is truly necessary to balance the randomization on many stratification variables, we show how this inefficiency can be reduced by using a sequential randomization method in which the first level balances on the crossing of the strata used in the analysis, and further stratification variables fall lower in the sequential hierarchy.

18.
Matched-pair cluster randomization trials are frequently adopted as the design of choice for evaluating an intervention offered at the community level. However, previous research has demonstrated that a strategy of breaking the matches and performing an unmatched analysis may be more efficient than performing a matched analysis on the resulting data, particularly when the total number of communities is small and the matching is judged as relatively ineffective. The research concerning this question has naturally focused on testing the effect of intervention. However, a secondary objective of many community intervention trials is to investigate the effect of individual-level risk factors on one or more outcome variables. Focusing on the case of a continuous outcome variable, we show that the practice of performing an unmatched analysis on data arising from a matched-pair design can lead to bias in the estimated regression coefficient, and a corresponding test of significance which is overly liberal. However, for large-scale community intervention trials, which typically recruit a relatively small number of large clusters, such an analysis will generally be both valid and efficient. We also consider other approaches to testing the effect of an individual-level risk factor in a matched-pair cluster randomization design, including a generalized linear model approach that preserves the matching, a two-stage cluster-level analysis, and an approach based on generalized estimating equations.

19.
OBJECTIVE: To examine the association between tea consumption and incident cancer using Mendelian randomization (MR). METHODS: Among the 100 639 participants of the China Kadoorie Biobank with genome-wide genotyping data, individuals with cancer at baseline were excluded, leaving 100 218 in the analysis. Tea consumption was self-reported at baseline and analyzed as daily tea drinking (yes/no), cups of tea per day, and grams of tea per day. Two-stage least squares regression models were used to estimate associations of the three tea consumption variables with all incident cancers during follow-up and with several cancer types (stomach cancer; liver and intrahepatic bile duct cancer; colorectal cancer; trachea, bronchus and lung cancer; and female breast cancer). To control for the influence of alcohol drinking, analyses additionally used multivariable MR or were restricted to non-drinkers. Sensitivity analyses used inverse-variance weighting, the weighted median method, and MR-Egger regression. RESULTS: Unweighted genetic risk scores constructed from 54, 42, and 28 SNPs served as instrumental variables for the three tea consumption variables, respectively. Participants were followed for (11.4±3.0) years, during which 6 886 incident cancers were identified. After adjusting for age, age², sex, region, genotyping chip type, and 12 genetic principal components, MR analyses showed no statistically significant association between tea consumption and all cancers or any of the cancer types. Compared with non-daily tea drinkers, daily tea drinkers' all cancers and some subtypes (stomach cancer, liver and intrahepatic bile duct cancer, colon…

20.
A group-sequential design for clinical trials that involve treatment selection was proposed by Stallard and Todd (Statist. Med. 2003; 22:689-703). In this design, the best among a number of experimental treatments is selected on the basis of data observed at the first of a series of interim analyses. This experimental treatment then continues together with the control treatment to be assessed in one or more further analyses. The method was extended by Kelly et al. (J. Biopharm. Statist. 2005; 15:641-658) to allow more than one experimental treatment to continue beyond the first interim analysis. This design controls the familywise type I error rate under the global null hypothesis, that is in the weak sense, but may not strongly control the error rate, particularly if the treatments selected are not the best-performing ones. In some cases, for example when additional safety data are available, the restriction that the best-performing treatments continue may be unreasonable. This paper describes an extension of the approach of Stallard and Todd that enables construction of a group-sequential design for comparison of several experimental treatments with a control treatment. The new method controls the type I error rate in the strong sense if the number of treatments included at each stage is specified in advance, and is indicated by simulation studies to be conservative when the number of treatments is chosen based on the observed data in a practically relevant way.
