Similar Articles
20 similar articles found.
1.
When several experimental treatments are available for testing, multi-arm trials provide gains in efficiency over separate trials. Including interim analyses allows the investigator to effectively use the data gathered during the trial. Bayesian adaptive randomization (AR) and multi-arm multi-stage (MAMS) designs are two distinct methods that use patient outcomes to improve the efficiency and ethics of the trial. AR allocates a greater proportion of future patients to treatments that have performed well; MAMS designs use pre-specified stopping boundaries to determine whether experimental treatments should be dropped. There is little consensus on which method is more suitable for clinical trials, and so in this paper, we compare the two under several simulation scenarios and in the context of a real multi-arm phase II breast cancer trial. We compare the methods in terms of their efficiency and ethical properties. We also consider the practical problem of a delay between recruitment of patients and assessment of their treatment response. Both methods are more efficient and ethical than a multi-arm trial without interim analyses. Delay between recruitment and response assessment attenuates this efficiency gain. We also consider futility stopping rules for response-adaptive trials that add efficiency when all treatments are ineffective. Our comparisons show that AR is more efficient than MAMS designs when there is an effective experimental treatment, whereas if none of the experimental treatments is effective, then MAMS designs slightly outperform AR. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.
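
A minimal sketch of the Bayesian adaptive randomization rule compared above, under assumed settings: binary outcomes, flat Beta(1, 1) priors, and a Thall-Wathen-style rule that raises each arm's posterior probability of being best to a tuning power c before normalizing. The counts and c = 0.5 are illustrative, not the breast cancer trial's actual values.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar_allocation_probs(successes, failures, c=0.5, n_draws=10_000):
    """Allocation probabilities proportional to P(arm is best)^c under
    independent Beta(1 + s, 1 + f) posteriors (flat priors assumed)."""
    draws = np.column_stack([
        rng.beta(1 + s, 1 + f, n_draws) for s, f in zip(successes, failures)
    ])
    p_best = np.bincount(draws.argmax(axis=1), minlength=len(successes)) / n_draws
    w = p_best ** c
    return w / w.sum()

# three arms at an interim look: 4/10, 7/10, and 5/10 responders (hypothetical)
print(ar_allocation_probs(successes=[4, 7, 5], failures=[6, 3, 5]))
```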

2.
We propose a class of phase II clinical trial designs with sequential stopping and adaptive treatment allocation to evaluate treatment efficacy. Our work is based on two-arm (control and experimental treatment) designs with binary endpoints. Our overall goal is to construct more efficient and ethical randomized phase II trials by reducing the average sample sizes and increasing the percentage of patients assigned to the better treatment arms of the trials. The designs combine the Bayesian decision-theoretic sequential approach with adaptive randomization procedures in order to achieve the simultaneous goals of improved efficiency and ethics. The design parameters represent the costs of different decisions, for example, the decisions for stopping or continuing the trials. The parameters enable us to incorporate the actual costs of the decisions in practice. The proposed designs allow the clinical trials to stop early for either efficacy or futility. Furthermore, the designs assign more patients to better treatment arms by applying adaptive randomization procedures. We develop an algorithm based on constrained backward induction and forward simulation to implement the designs. The algorithm overcomes the computational difficulty of the backward induction method, thereby making our approach practicable. The designs result in trials with desirable operating characteristics under the simulated settings. Moreover, the designs are robust with respect to the response rate of the control group. Copyright © 2013 John Wiley & Sons, Ltd.

3.
It is well known that competing demands exist between the control of important covariate imbalance and protection of treatment allocation randomness in confirmatory clinical trials. When implementing a response-adaptive randomization algorithm in confirmatory clinical trials designed under a frequentist framework, additional competing demands emerge between the shift of the treatment allocation ratio and the preservation of the power. Based on a large multicenter phase III stroke trial, we present a patient randomization scheme that manages these competing demands by applying a newly developed minimal sufficient balancing design for baseline covariates and a cap on the treatment allocation ratio shift in order to protect the allocation randomness and the power. Statistical properties of this randomization plan are studied by computer simulation. Trial operation characteristics, such as patient enrollment rate and primary outcome response delay, are also incorporated into the randomization plan. Copyright © 2014 John Wiley & Sons, Ltd.
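
The two safeguards can be sketched together in a few lines. Here the paper's minimal sufficient balancing design is stood in for by a simple biased-coin rule on a single baseline covariate, and the response-adaptive allocation probability is clipped at a hard cap; the cap of 0.65, the coin bias of 0.7, and the imbalance threshold of 3 are hypothetical values, not those of the stroke trial.

```python
def next_allocation_prob(p_rar, cov_counts, cap=0.65, coin=0.7, threshold=3):
    """Probability of assigning the next patient to the experimental arm.

    p_rar      : probability suggested by the response-adaptive rule
    cov_counts : (control, experimental) counts of the incoming patient's
                 key covariate level among patients already enrolled
    The allocation ratio is capped at cap : (1 - cap) to protect power and
    randomness; a biased coin overrides it only when the covariate imbalance
    exceeds the threshold (a stand-in for minimal sufficient balancing).
    """
    capped = min(max(p_rar, 1.0 - cap), cap)
    imbalance = cov_counts[1] - cov_counts[0]
    if imbalance >= threshold:       # experimental arm ahead on this covariate
        return min(capped, 1.0 - coin)
    if imbalance <= -threshold:      # control arm ahead
        return max(capped, coin)
    return capped

# RAR suggests 0.72, but the covariate leads 9 to 5 on the experimental arm
print(next_allocation_prob(0.72, cov_counts=(5, 9)))
```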

4.
Delay in the outcome variable is challenging for outcome-adaptive randomization, as it creates a lag between the number of subjects accrued and the information known at the time of the analysis. Motivated by a real-life pediatric ulcerative colitis trial, we consider a case where a short-term predictor is available for the delayed outcome. When a short-term predictor is not considered, studies have shown that the asymptotic properties of many outcome-adaptive randomization designs are little affected unless the lag is unreasonably large relative to the accrual process. These theoretical results assumed independent, identically distributed delays, however, whereas delays in the presence of a short-term predictor may only be conditionally homogeneous. We treat delayed outcomes as missing data and propose mitigating the delay effect by imputing them. We apply this approach to the doubly adaptive biased coin design (DBCD) for the motivating pediatric ulcerative colitis trial. We provide theoretical results showing that if the delays, although non-homogeneous, are reasonably short relative to the accrual process, as in the iid delay case, the lag is asymptotically ignorable in the sense that a standard DBCD that utilizes only observed outcomes attains the target allocation ratios in the limit. Empirical studies, however, indicate that imputation-based DBCDs perform more reliably in finite samples, with smaller root mean square errors. The empirical studies assumed a common clinical setting where a delayed outcome is positively correlated with a short-term predictor, similarly between treatment arm groups. We varied the strength of the correlation and considered fast and slow accrual settings. Copyright © 2014 John Wiley & Sons, Ltd.
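
For reference, the allocation function of the standard DBCD that the imputation feeds into: the Hu-Zhang rule pushes the observed allocation proportion toward a target computed from the current response-rate estimates. In the paper's approach, imputed delayed outcomes would enter these estimates; the RSIHR target sqrt(pA)/(sqrt(pA) + sqrt(pB)) and gamma = 2 below are illustrative choices.

```python
import numpy as np

def dbcd_prob(n_a, n_b, p_hat_a, p_hat_b, gamma=2.0):
    """Hu-Zhang doubly adaptive biased coin: probability the next patient goes
    to arm A, given current sample sizes and response-rate estimates.  gamma
    controls how aggressively the observed proportion x is pushed toward the
    target rho; the RSIHR target used here is one common choice."""
    rho = np.sqrt(p_hat_a) / (np.sqrt(p_hat_a) + np.sqrt(p_hat_b))
    x = n_a / (n_a + n_b)                      # current proportion on arm A
    num = rho * (rho / x) ** gamma
    den = num + (1 - rho) * ((1 - rho) / (1 - x)) ** gamma
    return num / den

# 30 of 60 patients on arm A; interim response estimates 0.55 vs 0.40 (hypothetical)
print(dbcd_prob(30, 30, 0.55, 0.40))
```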

5.
Sequential analysis is frequently employed to address ethical and financial issues in clinical trials. Sequential analysis may be performed using standard group sequential designs, or, more recently, with adaptive designs that use estimates of treatment effect to modify the maximal statistical information to be collected. In the general setting in which statistical information and clinical trial costs are functions of the number of subjects used, it has yet to be established whether there is any major efficiency advantage to adaptive designs over traditional group sequential designs. In survival analysis, however, statistical information (and hence efficiency) is most closely related to the observed number of events, while trial costs still depend on the number of patients accrued. As the number of subjects may dominate the cost of a trial, an adaptive design that specifies a reduced maximal possible sample size when an extreme treatment effect has been observed may allow early termination of accrual and therefore a more cost-efficient trial. We investigate and compare the tradeoffs between efficiency (as measured by average number of observed events required), power, and cost (a function of the number of subjects accrued and length of observation) for standard group sequential methods and an adaptive design that allows for early termination of accrual. We find that when certain trial design parameters are constrained, an adaptive approach to terminating subject accrual may improve upon the cost efficiency of a group sequential clinical trial investigating time-to-event endpoints. However, when the spectrum of group sequential designs considered is broadened, the advantage of the adaptive designs is less clear.

6.
Two-stage randomization designs are broadly accepted and becoming increasingly popular in clinical trials for cancer and other chronic diseases to assess and compare the effects of different treatment policies. In this paper, we propose an inferential method to estimate treatment effects in two-stage randomization designs that uses robust covariate adjustment to improve efficiency and reduce bias in the presence of chance imbalance, without the additional assumptions required by the inverse probability weighting (IPW) method of Lokhnygina and Helterbrand (Biometrics, 63:422-428). The proposed method is evaluated and compared with the IPW method using simulations and an application to data from an oncology clinical trial. Given the predictive power of the baseline covariates collected in these real data, our proposed method obtains 17–38% gains in efficiency compared with the IPW method in terms of the overall survival outcome. Copyright © 2017 John Wiley & Sons, Ltd.
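
A minimal sketch of the IPW comparator, for orientation: in a two-stage design where non-responders to the first-stage treatment are re-randomized, the mean outcome under the policy "start with a; switch non-responders to b" is estimated with weights built from the known randomization probabilities. The weighting form is the standard one for such designs; the toy data are fabricated.

```python
import numpy as np

def ipw_policy_mean(a, b, A1, R, A2, Y, p1=0.5, p2=0.5):
    """IPW estimate of the mean outcome under policy (a, b) in a two-stage
    randomized design.  A1: first-stage arm; R: response indicator; A2:
    second-stage arm (only meaningful when R == 0); Y: final outcome;
    p1, p2: known randomization probabilities."""
    A1, R, A2, Y = map(np.asarray, (A1, R, A2, Y))
    w = (A1 == a) / p1 * (R + (1 - R) * (A2 == b) / p2)
    return np.sum(w * Y) / np.sum(w)

# fabricated illustration; A2 = 9 marks responders who were never re-randomized
A1 = [0, 0, 1, 1, 1, 0]; R = [1, 0, 0, 1, 0, 0]
A2 = [9, 1, 0, 9, 1, 0]; Y = [5.0, 3.0, 4.0, 6.0, 2.0, 3.5]
print(ipw_policy_mean(a=1, b=1, A1=A1, R=R, A2=A2, Y=Y))
```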

7.
Selective recruitment designs preferentially recruit individuals who are estimated to be statistically informative onto a clinical trial. Individuals who are expected to contribute less information have a lower probability of recruitment. Furthermore, in an information-adaptive design, recruits are allocated to treatment arms in a manner that maximises information gain. The informativeness of an individual depends on their covariate (or biomarker) values, and how information is defined is a critical element of information-adaptive designs. In this paper, we define and evaluate four different methods for quantifying statistical information. Using both experimental data and numerical simulations, we show that selective recruitment designs can offer a substantial increase in statistical power compared with randomised designs. In trials without selective recruitment, we find that allocating individuals to treatment arms according to information-adaptive protocols also leads to an increase in statistical power. Consequently, selective recruitment designs can potentially achieve successful trials using fewer recruits, thereby offering economic and ethical advantages. Copyright © 2017 John Wiley & Sons, Ltd.
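
The abstract evaluates four definitions of informativeness without fixing one here, so the sketch below uses a single natural choice as an assumption: under a working logistic outcome model, a candidate's contribution is the Bernoulli Fisher information p(1 - p), so recruitment probability peaks where the predicted outcome is most uncertain. The model, rescaling, and floor are all illustrative.

```python
import numpy as np

def recruit_prob(x, beta, floor=0.1):
    """Selective recruitment sketch: probability of recruiting a candidate with
    covariates x, proportional to the Bernoulli Fisher information p(1 - p) of
    the predicted outcome under a working logistic model with coefficients
    beta.  The factor 4 rescales to [0, 1]; the floor keeps every candidate
    recruitable.  All choices here are assumptions, not the paper's."""
    p = 1.0 / (1.0 + np.exp(-np.dot(x, beta)))
    return max(floor, 4.0 * p * (1.0 - p))

print(recruit_prob(np.array([1.0, 0.3, -1.2]), beta=np.array([0.2, 1.0, 0.5])))
```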

8.
Over the past 25 years, adaptive designs have gradually gained acceptance and are being used with increasing frequency in confirmatory clinical trials. Recent surveys of submissions to the regulatory agencies reveal that the most popular type of adaptation is unblinded sample size re-estimation. Concerns have nevertheless been raised that this type of adaptation is inefficient. We intend to show in our discussion that such concerns are greatly exaggerated in any practical setting and that the advantages of adaptive sample size re-estimation usually outweigh any minor loss of efficiency. Copyright © 2015 John Wiley & Sons, Ltd.

9.
Increased survival is a common goal of cancer clinical trials. Owing to the long periods of observation and follow-up needed to assess patient survival outcomes, it is difficult to use outcome-adaptive randomization in these trials. In practice, information about a short-term response is often quickly available during or shortly after treatment, and this short-term response is a good predictor for long-term survival. For example, complete remission of leukemia can be achieved and measured after a few cycles of treatment; it is a short-term response that is desirable for prolonging survival. We propose a new design for survival trials when such short-term response information is available. We use the short-term information to 'speed up' the adaptation of the randomization procedure. We establish a connection between the short-term response and long-term survival through a Bayesian model, first by using prior clinical information, and then by dynamically updating the model according to information accumulated in the ongoing trial. Interim monitoring and final decision making are based upon inference on the primary outcome of survival. The new design uses fewer patients, and can more effectively assign patients to the better treatment arms. We demonstrate these properties through simulation studies. Copyright © 2009 John Wiley & Sons, Ltd.

10.
Adaptive clinical trial design has been proposed as a promising new approach that may improve the drug discovery process. Proponents of adaptive sample size re-estimation promote its ability to avoid 'up-front' commitment of resources, better address the complicated decisions faced by data monitoring committees, and minimize accrual to studies having delayed ascertainment of outcomes. We investigate aspects of adaptation rules, such as timing of the adaptation analysis and magnitude of sample size adjustment, that lead to greater or lesser statistical efficiency. Owing in part to the recent Food and Drug Administration guidance that promotes the use of pre-specified sampling plans, we evaluate alternative approaches in the context of well-defined, pre-specified adaptation. We quantify the relative costs and benefits of fixed sample, group sequential, and pre-specified adaptive designs with respect to standard operating characteristics such as type I error, maximal sample size, power, and expected sample size under a range of alternatives. Our results build on others' prior research by demonstrating in realistic settings that simple and easily implemented pre-specified adaptive designs provide only very small efficiency gains over group sequential designs with the same number of analyses. In addition, we describe optimal rules for modifying the sample size, providing efficient adaptation boundaries on a variety of scales for the interim test statistic for adaptation analyses occurring at several different stages of the trial. We thus provide insight into what are good and bad choices of adaptive sampling plans when the added flexibility of adaptive designs is desired. Copyright © 2012 John Wiley & Sons, Ltd.

11.
Adaptive designs that are based on group-sequential approaches have the benefit of being efficient, as stopping boundaries can be found that lead to good operating characteristics with test decisions based solely on sufficient statistics. The drawback of these so-called 'pre-planned adaptive' designs is that unexpected design changes are not possible without impacting the error rates. 'Flexible adaptive designs', on the other hand, can cope with a large number of contingencies at the cost of reduced efficiency. In this work, we focus on two different approaches for multi-arm multi-stage trials, which are based on group-sequential ideas, and discuss how these 'pre-planned adaptive designs' can be modified to allow for flexibility. We then show how the added flexibility can be used for treatment selection and sample size reassessment and evaluate the impact on the error rates in a simulation study. The results show that an impressive overall procedure can be found by combining a well-chosen pre-planned design with an application of the conditional error principle to allow flexible treatment selection. Copyright © 2014 John Wiley & Sons, Ltd.
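
The conditional error principle invoked above can be stated compactly for a one-sided z-test: after an interim look, any redesign of the remainder of the trial preserves the overall level as long as the redesigned second stage is tested at the conditional error of the pre-planned test. The sketch assumes a fixed-sample reference test with no interim stopping; real multi-arm group-sequential boundaries make A(z1) more involved.

```python
from math import sqrt
from scipy.stats import norm

def conditional_error(z1, n1, n, alpha=0.025):
    """A(z1): probability under H0 that the pre-planned one-sided z-test on n
    observations would reject, given interim statistic z1 from the first n1.
    Testing the (possibly redesigned) second stage at level A(z1) preserves
    the overall type I error rate alpha."""
    z_alpha = norm.ppf(1 - alpha)
    return 1 - norm.cdf((z_alpha * sqrt(n) - z1 * sqrt(n1)) / sqrt(n - n1))

print(conditional_error(z1=1.2, n1=50, n=100))   # hypothetical interim look
```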

12.
Response-adaptive randomization procedures are appropriate for clinical trials in which two or more treatments are to be compared, patients arrive sequentially, and the response of each patient is recorded before the next patient arrives. However, for those procedures that involve sequential estimation of model parameters, start-up designs are commonly required in order to provide initial estimates of the parameters. In this paper, a suite of such start-up designs for two treatments and binary patient responses are considered and compared in terms of the number of patients required to give meaningful parameter estimates, the number of patients allocated to the better treatment, and the bias in the parameter estimates. It is shown that permuted block designs with blocks of size 4 are to be preferred over a wide range of parameter values. For the case of two treatments, normal responses, and selected start-up procedures, a design incorporating complete randomization followed appropriately by repeats of one of the treatments yields the minimum expected number of patients and is to be preferred. Copyright © 2015 John Wiley & Sons, Ltd.
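
The recommended start-up design is easy to make concrete: permuted blocks of size 4 guarantee that the two arms are within one patient of balance whenever the start-up phase ends, so initial parameter estimates exist for both treatments as early as possible. A minimal sketch:

```python
import random

def permuted_blocks(n_patients, block_size=4, arms=("A", "B"), seed=1):
    """Start-up allocation via permuted blocks: each block of 4 contains two
    A's and two B's in random order, so balance never drifts by more than
    half a block however early the start-up phase is cut off."""
    rng = random.Random(seed)
    seq = []
    while len(seq) < n_patients:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)
        seq.extend(block)
    return seq[:n_patients]

print(permuted_blocks(10))
```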

13.
Several adaptive design methods have been proposed to reestimate sample size using the observed treatment effect after an initial stage of a clinical trial while preserving the overall type I error at the time of the final analysis. One unfortunate property of the algorithms used in some methods is that they can be inverted to reveal the exact treatment effect at the interim analysis. We propose using a step function of the observed treatment difference, with an inverted U-shape, for sample size reestimation in order to limit the information revealed about the treatment effect. We refer to this as stepwise two-stage sample size adaptation. The method applies calculation techniques used for group sequential designs. We minimize expected sample size among a class of these designs and compare efficiency with the fully optimized two-stage design, the optimal two-stage group sequential design, and designs based on promising conditional power. The trade-off between efficiency and the improved blinding of the interim treatment effect is discussed. Copyright © 2014 John Wiley & Sons, Ltd.
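
The privacy argument rests on the rule being piecewise constant: many interim values map to one second-stage size, so inverting the observed sample size recovers only an interval for the treatment effect, not its exact value. A sketch with hypothetical cutpoints and sizes (on the interim z-statistic scale), showing the inverted-U shape:

```python
import bisect

# hypothetical rule: unpromising (z1 < 0.5) and overwhelming (z1 > 2.5) interim
# results get no second stage; the promising middle zone gets the largest one
CUTS = [0.5, 1.0, 2.0, 2.5]           # interim z-statistic cutpoints
N2   = [0,   80,  150, 80,  0]        # second-stage sample size per zone

def second_stage_size(z1):
    """Stepwise sample size reestimation: observing n2 reveals only which zone
    z1 fell in, not z1 itself."""
    return N2[bisect.bisect_right(CUTS, z1)]

for z in (0.3, 0.8, 1.5, 2.2, 3.0):
    print(z, "->", second_stage_size(z))
```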

14.
A comparison of two treatments with survival outcomes in a clinical study may require treatment randomization on clusters of multiple units with correlated responses. For example, for patients with otitis media in both ears, a specific treatment is normally given to a single patient, and hence the two ears constitute a cluster. Statistical procedures are available for the comparison of treatment efficacies. The conventional approach to treatment allocation is the adoption of a balanced design, in which half of the patients are assigned to each treatment arm. However, given the increasing acceptability of response-adaptive designs in recent years because of their desirable features, we have developed a response-adaptive treatment allocation scheme for survival trials with clustered data. The proposed treatment allocation scheme is superior to the balanced design in that it allows more patients to receive the better treatment. At the same time, the test power for comparing treatment efficacies under our treatment allocation scheme remains highly competitive. The advantage of the proposed randomization procedure is supported by a simulation study and the redesign of a clinical study.

15.
In clinical trials with a small sample size, the characteristics (covariates) of patients assigned to different treatment arms may not be well balanced. This may lead to an inflated type I error rate. This problem can be more severe in trials that use response-adaptive randomization rather than equal randomization because the former may result in smaller sample sizes for some treatment arms. We have developed a patient allocation scheme for trials with binary outcomes to adjust the covariate imbalance during response-adaptive randomization. We used simulation studies to evaluate the performance of the proposed design. The proposed design keeps the important advantage of a standard response-adaptive design, that is, to assign more patients to the better treatment arms, and thus it is ethically appealing. On the other hand, the proposed design improves over the standard response-adaptive design by controlling covariate imbalance between treatment arms, maintaining the nominal type I error rate, and offering greater power. Copyright © 2010 John Wiley & Sons, Ltd.

16.
Simon's optimal two-stage design has been widely used in early phase clinical trials for oncology and AIDS studies with binary endpoints. With this approach, the second-stage sample size is fixed when the trial passes the first stage with sufficient activity. Adaptive designs, such as those due to Banerjee and Tsiatis (2006) and Englert and Kieser (2013), are flexible in the sense that the second-stage sample size depends on the response from the first stage, and these designs often reduce the expected sample size under the null hypothesis compared with Simon's approach. An unappealing trait of the existing designs is that their second-stage sample size is not a non-increasing function of the first-stage response rate. In this paper, the branch-and-bound algorithm is used to search efficiently for the optimal adaptive design with the smallest expected sample size under the null, while the type I and II error rates are maintained and the aforementioned monotonicity property is respected. The proposed optimal design is observed to have smaller expected sample sizes than Simon's optimal design, and its maximum total sample size is very close to that from Simon's method. The proposed optimal adaptive two-stage design is recommended for use in practice to improve the flexibility and efficiency of early phase therapeutic development. Copyright © 2015 John Wiley & Sons, Ltd.
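
Whatever the search strategy (full enumeration as in Simon's paper, or the branch-and-bound above), each candidate design (n1, r1, n, r) is scored by the same binomial operating characteristics. A standard computation, shown with Simon's published optimal design for p0 = 0.1 versus p1 = 0.3 at alpha = beta = 0.10:

```python
from scipy.stats import binom

def simon_oc(n1, r1, n, r, p):
    """Two-stage design: stop (fail) after stage 1 if responses <= r1; else
    enroll to n and declare activity if total responses > r.  Returns the
    probability of declaring activity and the expected sample size at p."""
    pet = binom.cdf(r1, n1, p)                      # prob. of early termination
    reject = sum(binom.pmf(x1, n1, p) * binom.sf(r - x1, n - n1, p)
                 for x1 in range(r1 + 1, n1 + 1))
    en = n1 + (1 - pet) * (n - n1)
    return reject, en

print(simon_oc(10, 1, 29, 5, p=0.1))   # type I error and E[N] under the null
print(simon_oc(10, 1, 29, 5, p=0.3))   # power and E[N] under the alternative
```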

17.
Group-randomized trials are randomized studies that allocate intact groups of individuals to different comparison arms. A frequent practical limitation to adopting such research designs is that only a limited number of groups may be available, in which case simple randomization is unable to adequately balance multiple group-level covariates between arms. Covariate-based constrained randomization was therefore proposed as an allocation technique to achieve balance. Constrained randomization involves generating a large number of possible allocation schemes, calculating a balance score that assesses covariate imbalance, limiting the randomization space to a prespecified percentage of candidate allocations, and randomly selecting one scheme to implement. When the outcome is binary, a number of statistical issues arise regarding the potential advantages of such designs in making inference. In particular, properties found for continuous outcomes may not directly apply, and additional variations on statistical tests are available. Motivated by two recent trials, we conduct a series of Monte Carlo simulations to evaluate the statistical properties of model-based and randomization-based tests under both simple and constrained randomization designs, with varying degrees of analysis-based covariate adjustment. Our results indicate that constrained randomization improves the power of the linearization F-test, the KC-corrected GEE t-test (Kauermann and Carroll, 2001, Journal of the American Statistical Association 96, 1387-1396), and two permutation tests when the prognostic group-level variables are controlled for in the analysis and the size of the randomization space is reasonably small. We also demonstrate that constrained randomization reduces the power loss from redundant analysis-based adjustment for non-prognostic covariates. Design considerations, such as the choice of the balance metric and the size of the randomization space, are discussed.
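
The allocation steps listed above translate directly into a short procedure. The balance score below, the sum of squared standardized differences in arm means of the group-level covariates, is one common choice, and the 10% cutoff for the constrained space is illustrative.

```python
import numpy as np

def constrained_randomization(X, n_schemes=10_000, keep_frac=0.10, seed=7):
    """Covariate-constrained randomization: X holds group-level covariates
    (one row per group).  Sample candidate half/half allocations, score each
    by covariate imbalance, restrict to the best keep_frac of schemes, and
    randomly select one scheme to implement."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    base = np.array([0] * (n // 2) + [1] * (n - n // 2))
    schemes = np.array([rng.permutation(base) for _ in range(n_schemes)])
    sd = X.std(axis=0, ddof=1)
    scores = np.array([
        np.sum(((X[s == 1].mean(axis=0) - X[s == 0].mean(axis=0)) / sd) ** 2)
        for s in schemes
    ])
    cutoff = np.quantile(scores, keep_frac)
    candidates = schemes[scores <= cutoff]
    return candidates[rng.integers(len(candidates))]

X = np.random.default_rng(0).normal(size=(12, 3))   # 12 clusters, 3 covariates
print(constrained_randomization(X))
```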

18.
In order to better inform study design decisions when sampling patients within and across health care providers, we develop a simulation-based approach for designing complex multi-stage samples. The approach explores the tradeoff between competing design goals such as precision of estimates, coverage of the target population, and cost. We elicit a number of sensible candidate designs, evaluate these designs with respect to multiple sampling goals, investigate their tradeoffs, and identify the design that is the best compromise among all goals. This approach recognizes that, in the practice of sampling, precision of the estimates is not the only important goal, and that there are tradeoffs with coverage and cost that should be explicitly considered. One can easily add other goals. We construct a sample frame with all phase III clinical cancer treatment trials conducted by cooperative oncology groups of the National Cancer Institute from October 1, 1998 through December 31, 1999. Simulation results for our study suggest sampling a different number of trials and institutions than initially considered. Simulations of different study designs can uncover efficiency gains both in terms of improved precision of the estimates and in terms of improved coverage of the target population. Simulations enable us to explore the tradeoffs between competing sampling goals and to quantify these efficiency gains. This is true even for complex designs where the stages are not strictly nested in one another.

19.
In developing products for rare diseases, statistical challenges arise due to the limited number of patients available for participation in drug trials and other clinical research. Bayesian adaptive clinical trial designs offer the possibility of increased statistical efficiency, reduced development cost, and ethical hazard prevention via their incorporation of evidence from external sources (historical data, expert opinions, and real-world evidence) and flexibility in the specification of interim looks. In this paper, we propose a novel Bayesian adaptive commensurate design that borrows adaptively from historical information and also uses a particular payoff function to optimize the timing of the study's interim analysis. The trial payoff is a function of how many samples can be saved via early stopping and the probability of making correct early decisions for either futility or efficacy. We calibrate our Bayesian algorithm to have acceptable long-run frequentist properties (type I error and power) via simulation at the design stage. We illustrate our approach using a pediatric trial design setting testing the effect of a new drug for a rare genetic disease. The optimIA R package, available at https://github.com/wxwx1993/Bayesian_IA_Timing, provides an easy-to-use implementation of our approach.

20.
In randomized trials, pair-matching is an intuitive design strategy to protect study validity and to potentially increase study power. In a common design, candidate units are identified, and their baseline characteristics are used to create the best n/2 matched pairs. Within the resulting pairs, the intervention is randomized, and the outcomes are measured at the end of follow-up. We consider this design to be adaptive, because the construction of the matched pairs depends on the baseline covariates of all candidate units. As a consequence, the observed data cannot be considered as n/2 independent, identically distributed pairs of units, as common practice assumes. Instead, the observed data consist of n dependent units. This paper explores the consequences of adaptive pair-matching in randomized trials for estimation of the average treatment effect, conditional on the baseline covariates of the n study units. By avoiding estimation of the covariate distribution, estimators of this conditional effect will often be more precise than estimators of the marginal effect. We contrast the unadjusted estimator with targeted minimum loss-based estimation and show substantial efficiency gains from matching and further gains with adjustment. This work is motivated by the Sustainable East Africa Research in Community Health study, an ongoing community randomized trial to evaluate the impact of immediate and streamlined antiretroviral therapy on HIV incidence in rural East Africa. Copyright © 2014 John Wiley & Sons, Ltd.
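
The design step (building the best n/2 pairs from all candidates' baseline covariates, then randomizing within pairs) can be sketched with an off-the-shelf non-bipartite matching; Euclidean distance and networkx's matching routine are illustrative choices, not necessarily those used in the motivating study.

```python
import numpy as np
import networkx as nx

def matched_pair_design(X, seed=0):
    """Adaptive pair-matching: form the n/2 pairs minimizing total within-pair
    Euclidean distance over baseline covariates X (n x p, n even), then
    randomize treatment within each pair.  Because the pairing depends on all
    candidates' covariates, the resulting pairs are not i.i.d. units."""
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    G = nx.Graph()
    for i in range(n):
        for j in range(i + 1, n):
            G.add_edge(i, j, weight=-dist[i, j])    # max weight == min distance
    pairs = nx.max_weight_matching(G, maxcardinality=True)
    rng = np.random.default_rng(seed)
    arm = {}
    for i, j in pairs:
        arm[i] = int(rng.integers(2))               # randomize within the pair
        arm[j] = 1 - arm[i]
    return sorted(tuple(sorted(p)) for p in pairs), arm

X = np.random.default_rng(1).normal(size=(8, 2))    # 8 candidate communities
print(matched_pair_design(X))
```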
