Similar Articles
20 similar articles found
1.
Seamless phase II/III clinical trials in which an experimental treatment is selected at an interim analysis have been the focus of much recent research interest. Many of the methods proposed are based on the group sequential approach. This paper considers designs of this type in which the treatment selection can be based on short‐term endpoint information for more patients than have primary endpoint data available. We show that in such a case, the familywise type I error rate may be inflated if previously proposed group sequential methods are used and the treatment selection rule is not specified in advance. A method is proposed to avoid this inflation by considering the treatment selection that maximises the conditional error given the data available at the interim analysis. A simulation study is reported that illustrates the type I error rate inflation and compares the power of the new approach with two other methods: a combination testing approach and a group sequential method that does not use the short‐term endpoint data, both of which also strongly control the type I error rate. The new method is also illustrated through application to a study in Alzheimer's disease. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
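A minimal sketch of the conditional error quantity such methods build on, for the simple case of a one‐sided fixed‐sample z‐test (the sample sizes and interim statistic below are illustrative; the paper's method additionally maximises this quantity over the possible treatment selections):

```python
from scipy.stats import norm

def conditional_error(z1, n1, n, alpha=0.025):
    """Conditional type I error of a one-sided fixed-sample z-test given an
    interim z-statistic z1 based on the first n1 of n observations. Under H0
    the final statistic is (sqrt(n1)*Z1 + sqrt(n2)*Z2)/sqrt(n), with
    Z2 ~ N(0,1) independent of Z1."""
    n2 = n - n1
    z_crit = norm.isf(alpha)
    return norm.sf((z_crit * n ** 0.5 - z1 * n1 ** 0.5) / n2 ** 0.5)

# A promising interim result raises the conditional error well above alpha.
print(conditional_error(z1=1.5, n1=50, n=100))
```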

2.
Currently, adaptive phase II/III clinical trials are typically carried out with a strict two‐stage design. The first stage is a learning stage called phase II, and the second stage is a confirmatory stage called phase III. Following the phase II analysis, inefficacious or harmful dose arms are dropped, and one or two promising dose arms are selected for the second stage. However, researchers often face a dilemma in making the ‘go or no‐go’ decision and/or selecting the ‘best’ dose arm(s), because data from the first stage may not provide sufficient information for these decisions. In such cases, it is challenging to follow a strict two‐stage plan. We therefore propose a varying‐stage adaptive phase II/III clinical trial design, in which we consider whether an intermediate stage is needed to obtain more data, so that a more informative decision can be made. The number of further investigational stages in our design is thus determined on the basis of the data accumulated up to the interim analysis. With respect to adaptations, we consider dropping dose arm(s), switching to another plausible endpoint as the primary study endpoint, re‐estimating the sample size, and early stopping for futility. We use an adaptive combination test to perform the final analyses. By applying a closed testing procedure, we control the family‐wise type I error rate at the nominal level α in the strong sense. We delineate other essential design considerations, including the threshold parameters and the proportion of alpha allocated in the two‐stage versus the three‐stage setting. Copyright © 2013 John Wiley & Sons, Ltd.

3.
We propose a flexible Bayesian optimal phase II (BOP2) design that is capable of handling simple (e.g., binary) and complicated (e.g., ordinal, nested, and co‐primary) endpoints under a unified framework. We use a Dirichlet‐multinomial model to accommodate different types of endpoints. At each interim, the go/no‐go decision is made by evaluating a set of posterior probabilities of the events of interest, which is optimized to maximize power or minimize the number of patients under the null hypothesis. Unlike other existing Bayesian designs, the BOP2 design explicitly controls the type I error rate, thereby bridging the gap between Bayesian designs and frequentist designs. In addition, the stopping boundary of the BOP2 design can be enumerated prior to the onset of the trial. These features make the BOP2 design accessible to a wide range of users and regulatory agencies and particularly easy to implement in practice. Simulation studies show that the BOP2 design has favorable operating characteristics with higher power and lower risk of incorrectly terminating the trial than some existing Bayesian phase II designs. The software to implement the BOP2 design is freely available at www.trialdesign.org. Copyright © 2017 John Wiley & Sons, Ltd.
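A hedged sketch of the binary‐endpoint special case of this kind of rule: continue at an interim look only if the posterior probability of exceeding the null response rate clears a boundary of the BOP2 form λ(n/N)^γ (the λ, γ, and prior below are placeholders, not values optimized as in the paper):

```python
from scipy.stats import beta

def go_decision(n_resp, n, N, p0=0.2, lam=0.95, gamma=1.0, a=1, b=1):
    """Interim go/no-go for a binary endpoint with a Beta(a, b) prior:
    continue only if Pr(p > p0 | data) >= lam * (n / N) ** gamma, a
    boundary that tightens as the trial accumulates information."""
    post_gt_p0 = beta.sf(p0, a + n_resp, b + n - n_resp)
    return post_gt_p0 >= lam * (n / N) ** gamma

# Interim look: 3 responses in 15 patients, planned maximum of N = 40.
print(go_decision(n_resp=3, n=15, N=40))
```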

4.
Seamless phase II/III clinical trials offer an efficient way to select an experimental treatment and perform confirmatory analysis within a single trial. However, combining the data from both stages in the final analysis can induce bias into the estimates of treatment effects. Methods for bias adjustment developed thus far have made restrictive assumptions about the design and selection rules followed. In order to address these shortcomings, we apply recent methodological advances to derive the uniformly minimum variance conditionally unbiased estimator for two‐stage seamless phase II/III trials. Our framework allows for the precision of the treatment arm estimates to take arbitrary values, can be utilised for all treatments that are taken forward to phase III and is applicable when the decision to select or drop treatment arms is driven by a multiplicity‐adjusted hypothesis testing procedure. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

5.
We consider seamless phase II/III clinical trials that compare K treatments with a common control in phase II then test the most promising treatment against control in phase III. The final hypothesis test for the selected treatment can use data from both phases, subject to controlling the familywise type I error rate. We show that the choice of method for conducting the final hypothesis test has a substantial impact on the power to demonstrate that an effective treatment is superior to control. To understand these differences in power, we derive decision rules maximizing power for particular configurations of treatment effects. A rule with such an optimal frequentist property is found as the solution to a multivariate Bayes decision problem. The optimal rules that we derive depend on the assumed configuration of treatment means. However, we are able to identify two decision rules with robust efficiency: a rule using a weighted average of the phase II and phase III data on the selected treatment and control, and a closed testing procedure using an inverse normal combination rule and a Dunnett test for intersection hypotheses. For the first of these rules, we find the optimal division of a given total sample size between phases II and III. We also assess the value of using phase II data in the final analysis and find that for many plausible scenarios, between 50% and 70% of the phase II numbers on the selected treatment and control would need to be added to the phase III sample size in order to achieve the same increase in power. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
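A minimal sketch of the inverse normal combination rule named above, with weights fixed by the planned stage sample sizes (the Dunnett adjustment for the intersection hypotheses is omitted; all numbers are illustrative):

```python
from math import sqrt
from scipy.stats import norm

def inverse_normal_p(p1, p2, n1, n2):
    """Combine one-sided stage-wise p-values: because w1**2 + w2**2 = 1,
    the weighted sum of z-scores is standard normal under H0."""
    w1, w2 = sqrt(n1 / (n1 + n2)), sqrt(n2 / (n1 + n2))
    z = w1 * norm.isf(p1) + w2 * norm.isf(p2)
    return norm.sf(z)  # combined p-value

# Phase II p-value 0.10 (n1 = 60), phase III p-value 0.02 (n2 = 240).
print(inverse_normal_p(0.10, 0.02, 60, 240))
```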

6.
Composite endpoints combine several events within a single variable, which increases the number of expected events and is thereby meant to increase the power. However, the interpretation of results can be difficult, as the observed effect for the composite does not necessarily reflect the effects for the components, which may be of different magnitude or even point in opposite directions. Moreover, in clinical applications, the event types are often of different clinical relevance, which also complicates the interpretation of the composite effect. The common effect measure for composite endpoints is the all‐cause hazard ratio, which gives equal weight to all events irrespective of their type and clinical relevance. Thereby, the all‐cause hazard within each group is given by the sum of the cause‐specific hazards corresponding to the individual components. A natural extension of the standard all‐cause hazard ratio can be defined by a “weighted all‐cause hazard ratio” in which the individual hazards for each component are multiplied by predefined relevance weighting factors. For the special case of equal weights across the components, the weighted all‐cause hazard ratio corresponds to the standard all‐cause hazard ratio. To identify the cause‐specific hazard of the individual components, any parametric survival model might be applied. The new weighted effect measure can be tested for deviations from the null hypothesis by means of a permutation test. In this work, we systematically compare the new weighted approach to the standard all‐cause hazard ratio through theoretical considerations, Monte‐Carlo simulations, and a real clinical trial example.
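Under an assumed constant (exponential) cause‐specific hazard for each component, the weighted all‐cause hazard ratio reduces to a ratio of weighted sums; a minimal sketch with invented rates and weights:

```python
def weighted_all_cause_hr(hazards_trt, hazards_ctl, weights):
    """Weighted all-cause hazard ratio: each cause-specific hazard is
    multiplied by its relevance weight before summing within each group.
    Equal weights recover the standard all-cause hazard ratio."""
    num = sum(w * h for w, h in zip(weights, hazards_trt))
    den = sum(w * h for w, h in zip(weights, hazards_ctl))
    return num / den

# Two components, e.g. death (weight 1.0) and hospitalisation (weight 0.5),
# with cause-specific hazards per person-year (all numbers invented).
print(weighted_all_cause_hr([0.05, 0.20], [0.06, 0.22], [1.0, 0.5]))
```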

7.
The benefits and challenges of incorporating biomarkers into the development of anticancer agents have been increasingly discussed. In many cases, a sensitive subpopulation of patients is determined based on preclinical data and/or by retrospectively analyzing clinical trial data. Prospective exploration of sensitive subpopulations of patients may enable us to efficiently develop definitively effective treatments, resulting in accelerated drug development and a reduction in development costs. We consider the development of a new molecular‐targeted treatment in cancer patients. Given preliminary but promising efficacy data observed in a phase I study, it may be worth designing a phase II clinical trial that aims to identify a sensitive subpopulation. In order to achieve this goal, we propose a Bayesian randomized phase II clinical trial design incorporating a biomarker that is measured on a graded scale. We compare two Bayesian methods, one based on subgroup analysis and the other on a regression model, to analyze a time‐to‐event endpoint such as progression‐free survival (PFS) time. In essence, both methods estimate Bayesian posterior probabilities of PFS hazard ratios in biomarker subgroups. Extensive simulation studies evaluate these methods’ operating characteristics, including the correct identification probabilities of the desired subpopulation under a wide range of clinical scenarios. We also examine the impact of subgroup population proportions on the methods’ operating characteristics. Although both methods’ performance depends on the distribution of treatment effect and the population proportions across patient subgroups, the regression‐based method shows more favorable operating characteristics. Copyright © 2014 John Wiley & Sons, Ltd.

8.
Phase II clinical trials are performed to investigate whether a novel treatment shows sufficient promise of efficacy to justify its evaluation in a subsequent definitive phase III trial, and they are often also used to select the dose to take forward. In this paper we discuss different design proposals for a phase II trial in which three active treatment doses and a placebo control are to be compared in terms of a single‐ordered categorical endpoint. The sample size requirements for one‐stage and two‐stage designs are derived, based on an approach similar to that of Dunnett. Detailed computations are prepared for an illustrative example concerning a study in stroke. Allowance for early stopping for futility is made. Simulations are used to verify that the specified type I error and power requirements are valid, despite certain approximations used in the derivation of sample size. The advantages and disadvantages of the different designs are discussed, and the scope for extending the approach to different forms of endpoint is considered. Copyright © 2008 John Wiley & Sons, Ltd.

9.
Phase II and phase III trials play a crucial role in drug development programs. They are costly and time‐consuming and, because of the high failure rates in late development stages, also risky investments. Commonly, the sample size calculation for phase III is based on the treatment effect observed in phase II, so the planning of phases II and III can be linked. The performance of the phase II/III program crucially depends on the allocation of resources to phases II and III through an appropriate choice of the sample size and of the rule applied to decide whether to stop the program after phase II or to proceed. We present methods for program‐wise phase II/III planning that aim at determining optimal phase II sample sizes and go/no‐go decisions in a time‐to‐event setting. Optimization is based on a utility function that takes into account the fixed and variable costs of the drug development program and the potential gains after a successful launch. The proposed methods are illustrated by application to a variety of scenarios typically met in oncology drug development. Copyright © 2015 John Wiley & Sons, Ltd.
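A stylized sketch of the utility calculation driving such optimizations; the cost, gain, and probability numbers are invented placeholders, and the paper's time‐to‐event power calculation is replaced by a crude stand‐in that improves with sample size:

```python
def expected_utility(n3, go_prob, success_prob, phase2_cost=20.0,
                     cost_per_patient=0.05, gain=300.0):
    """Expected utility (arbitrary monetary units) of a phase II/III program
    for a candidate phase III sample size n3: phase II costs are always
    incurred; phase III costs and the launch gain accrue only on 'go'."""
    return -phase2_cost + go_prob * (-cost_per_patient * n3
                                     + success_prob * gain)

# Crude grid search over phase III sample sizes; the success-probability
# stand-in mimics power increasing in n3 up to a cap.
grid = [(n3, expected_utility(n3, go_prob=0.5,
                              success_prob=min(0.9, 0.3 + n3 / 2000)))
        for n3 in range(100, 1001, 100)]
print(max(grid, key=lambda t: t[1]))
```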

10.
When several experimental treatments are available for testing, multi‐arm trials provide gains in efficiency over separate trials. Including interim analyses allows the investigator to effectively use the data gathered during the trial. Bayesian adaptive randomization (AR) and multi‐arm multi‐stage (MAMS) designs are two distinct methods that use patient outcomes to improve the efficiency and ethics of the trial. AR allocates a greater proportion of future patients to treatments that have performed well; MAMS designs use pre‐specified stopping boundaries to determine whether experimental treatments should be dropped. There is little consensus on which method is more suitable for clinical trials, and so in this paper, we compare the two under several simulation scenarios and in the context of a real multi‐arm phase II breast cancer trial. We compare the methods in terms of their efficiency and ethical properties. We also consider the practical problem of a delay between recruitment of patients and assessment of their treatment response. Both methods are more efficient and ethical than a multi‐arm trial without interim analyses. Delay between recruitment and response assessment attenuates this efficiency gain. We also consider futility stopping rules for response adaptive trials that add efficiency when all treatments are ineffective. Our comparisons show that AR is more efficient than MAMS designs when there is an effective experimental treatment, whereas if none of the experimental treatments is effective, then MAMS designs slightly outperform AR. © 2014 The Authors Statistics in Medicine Published by John Wiley & Sons, Ltd.
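A hedged sketch of the Bayesian AR allocation step for binary outcomes: allocation probabilities are proportional to a power of the posterior probability that each arm is best, estimated by Monte Carlo from independent Beta(1, 1) posteriors (the tuning power and counts are illustrative):

```python
import numpy as np

def ar_allocation(successes, failures, power=0.5, draws=10_000, seed=0):
    """Allocation probabilities proportional to Pr(arm is best | data)**power,
    a common stabilisation of Bayesian adaptive randomization."""
    rng = np.random.default_rng(seed)
    s = np.asarray(successes)[:, None]
    f = np.asarray(failures)[:, None]
    samples = rng.beta(1 + s, 1 + f, size=(len(successes), draws))
    p_best = np.bincount(samples.argmax(axis=0),
                         minlength=len(successes)) / draws
    w = p_best ** power
    return w / w.sum()

# Three experimental arms with 4/10, 6/10, and 8/10 responses so far.
print(ar_allocation([4, 6, 8], [6, 4, 2]))
```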

11.
Seamless phase I/II dose‐finding trials are attracting increasing attention in early‐phase drug development for oncology. Most existing phase I/II dose‐finding methods use sophisticated yet untestable models to quantify dose‐toxicity and dose‐efficacy relationships, which often renders them difficult to implement in practice. To simplify the practical implementation, we extend the Bayesian optimal interval design from maximum tolerated dose finding to optimal biological dose finding in phase I/II trials. In particular, optimized intervals for toxicity and efficacy are derived by minimizing the probabilities of incorrect classification. If the pair of observed toxicity and efficacy probabilities at the current dose is located inside the promising region, we retain the current dose; if the observed probabilities are outside the promising region, we propose an allocation rule that maximizes the posterior probability that the response rate of the next dose falls inside a prespecified efficacy probability interval while still controlling the level of toxicity. The proposed interval design is model‐free and thus suitable for various dose–response relationships. We conduct extensive simulation studies to demonstrate the small‐ and large‐sample performance of the proposed method under various scenarios. Compared to existing phase I/II dose‐finding designs, not only is our interval design easy to implement in practice, but it also possesses desirable and robust operating characteristics.
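A deliberately simplified sketch of the interval logic (the boundaries below are placeholders; in the actual design the toxicity and efficacy intervals are optimized to minimize misclassification, and the outside‐region move is chosen by the posterior‐probability allocation rule rather than the naive escalate/de‐escalate shown here):

```python
def interval_decision(n_tox, n_eff, n, tox_upper=0.30, eff_lower=0.35):
    """Classify the current dose from the observed toxicity and efficacy
    rates: stay inside the promising region, otherwise move."""
    p_tox, p_eff = n_tox / n, n_eff / n
    if p_tox <= tox_upper and p_eff >= eff_lower:
        return "stay"           # promising region: retain the current dose
    if p_tox > tox_upper:
        return "de-escalate"    # unacceptably toxic
    return "escalate"           # safe but insufficiently efficacious

# Current cohort: 1/12 toxicities and 3/12 responses.
print(interval_decision(n_tox=1, n_eff=3, n=12))
```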

12.
We consider regulatory clinical trials that require a prespecified method for the comparison of two treatments for chronic diseases (e.g. Chronic Obstructive Pulmonary Disease) in which patients suffer deterioration in a longitudinal process until death occurs. We define a composite endpoint structure that encompasses both the longitudinal data for deterioration and the time‐to‐event data for death, and use multivariate time‐to‐event methods to assess treatment differences on both data structures simultaneously, without a need for parametric assumptions or modeling. Our method is straightforward to implement, and simulations show that the method has robust power in situations in which incomplete data could lead to lower than expected power for either the longitudinal or survival data. We illustrate the method on data from a study of chronic lung disease. Copyright © 2009 John Wiley & Sons, Ltd.

13.
Conventional phase II trials using binary endpoints as early indicators of a time‐to‐event outcome are not always feasible. Uveal melanoma has no reliable intermediate marker of efficacy. In pancreatic cancer and viral clearance, the time to the event of interest is short, making an early indicator unnecessary. In the latter application, Weibull models have been used to analyse corresponding time‐to‐event data. Bayesian sample size calculations are presented for single‐arm and randomised phase II trials assuming proportional hazards models for time‐to‐event endpoints. Special consideration is given to the case where survival times follow the Weibull distribution. The proposed methods are demonstrated through an illustrative trial based on uveal melanoma patient data. A procedure for prior specification based on knowledge or predictions of survival patterns is described. This enables investigation into the choice of allocation ratio in the randomised setting to assess whether a control arm is indeed required. The Bayesian framework enables sample sizes consistent with those used in practice to be obtained. When a confirmatory phase III trial will follow if suitable evidence of efficacy is identified, Bayesian approaches are less controversial than for definitive trials. In the randomised setting, a compromise for obtaining feasible sample sizes is a loss in certainty in the specified hypotheses: the Bayesian counterpart of power. However, this approach may still be preferable to running a single‐arm trial where no data is collected on the control treatment. This dilemma is present in most phase II trials, where resources are not sufficient to conduct a definitive trial. Copyright © 2015 John Wiley & Sons, Ltd.
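A minimal sketch, in the spirit of the prior‐specification procedure described, of converting two elicited survival predictions into Weibull parameters via S(t) = exp(−(t/λ)^k); the elicited values are invented:

```python
from math import exp, log

def weibull_from_two_points(t1, s1, t2, s2):
    """Solve S(t) = exp(-(t/lam)**k) for shape k and scale lam given two
    elicited survival probabilities S(t1) = s1 and S(t2) = s2."""
    # log(-log S(t)) = k*log(t) - k*log(lam) is linear in log(t)
    y1, y2 = log(-log(s1)), log(-log(s2))
    k = (y2 - y1) / (log(t2) - log(t1))
    lam = t1 / (-log(s1)) ** (1 / k)
    return k, lam

# Elicited: 70% survival at 12 months and 40% at 36 months (illustrative).
k, lam = weibull_from_two_points(12, 0.70, 36, 0.40)
print(k, lam, exp(-(24 / lam) ** k))  # implied 24-month survival
```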

14.
We propose a robust two‐stage design to identify the optimal biological dose for phase I/II clinical trials evaluating both toxicity and efficacy outcomes. In the first stage of dose finding, we use the Bayesian model averaging continual reassessment method to monitor the toxicity outcomes and adopt an isotonic regression method based on the efficacy outcomes to guide dose escalation. When the first stage ends, we use the Dirichlet‐multinomial distribution to jointly model the toxicity and efficacy outcomes and pick the candidate doses based on a three‐dimensional volume ratio. The selected candidate doses are then seamlessly advanced to the second stage for dose validation. Both toxicity and efficacy outcomes are continuously monitored so that any overly toxic and/or less efficacious dose can be dropped from the study as the trial continues. When the phase I/II trial ends, we select the optimal biological dose as the dose obtaining the minimal value of the volume ratio within the candidate set. An advantage of the proposed design is that it does not impose a monotonically increasing assumption on the shape of the dose–efficacy curve. We conduct extensive simulation studies to examine the operating characteristics of the proposed design. The simulation results show that the proposed design has desirable operating characteristics across different shapes of the underlying true dose–toxicity and dose–efficacy curves. The software to implement the proposed design is available upon request. Copyright © 2016 John Wiley & Sons, Ltd.
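One plausible reading of the isotonic‐regression step used to guide first‐stage escalation: observed efficacy rates are pooled into a monotone curve by PAVA, weighted by cohort size (the data below are invented, and scikit-learn's IsotonicRegression is used as a stand-in implementation):

```python
from sklearn.isotonic import IsotonicRegression

# Raw efficacy estimates by dose level, non-monotone due to small cohorts.
doses = [1, 2, 3, 4, 5]
eff_rates = [0.10, 0.30, 0.25, 0.45, 0.40]
cohort_sizes = [6, 6, 9, 9, 6]

iso = IsotonicRegression(increasing=True)
smoothed = iso.fit_transform(doses, eff_rates, sample_weight=cohort_sizes)
print(smoothed)  # PAVA pools adjacent violators into a non-decreasing curve
```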

15.
Immunotherapy is the most promising new cancer treatment for various pediatric tumors and has resulted in an unprecedented surge in the number of novel immunotherapeutic treatments that need to be evaluated in clinical trials. Most phase I/II trial designs have been developed for evaluating only one candidate treatment at a time, and are thus not optimal for this task. To address these issues, we propose a Bayesian phase I/II platform trial design, which accounts for the unique features of immunotherapy, thereby allowing investigators to continuously screen a large number of immunotherapeutic treatments in an efficient and seamless manner. The elicited numerical utility is adopted to account for the risk‐benefit trade‐off and to quantify the desirability of the dose. During the trial, inefficacious or overly toxic treatments are adaptively dropped from the trial and the promising treatments are graduated from the trial to the next stage of development. Once an experimental treatment is dropped or graduated, the next available new treatment can be immediately added and tested. Extensive simulation studies have demonstrated the desirable operating characteristics of the proposed design.

16.
Traditionally, model‐based dose‐escalation trial designs recommend a dose for escalation based on an assumed dose‐toxicity relationship. Pharmacokinetic data are often available but are currently only utilised by clinical teams in a subjective manner to aid decision making if the dose‐toxicity model recommendation is felt to be too high. Formal incorporation of pharmacokinetic data in dose‐escalation could therefore make the decision process more efficient, increase the precision of the resulting recommended dose, and reduce the subjectivity of its use. Such an approach is investigated in the dual‐agent setting using a Bayesian design, where historical single‐agent data are available to advise the use of pharmacokinetic data in the dual‐agent setting. The dose‐toxicity and dose‐exposure relationships are modelled independently and the outputs combined in the escalation rules. Implementation of stopping rules highlights the practicality of the design. This is demonstrated through an example which is evaluated using simulation. Copyright © 2015 John Wiley & Sons, Ltd.

17.
Phase I/II trials utilize both toxicity and efficacy data to achieve efficient dose finding. However, because the efficacy outcome often takes a long time to evaluate, the duration of phase I/II trials is often longer than that of conventional dose‐finding trials. As a result, phase I/II trials are susceptible to the missing data problem caused by patient dropout, and the missing efficacy outcomes are often nonignorable in the sense that patients who do not experience treatment efficacy are more likely to drop out of the trial. We propose a Bayesian phase I/II trial design to accommodate nonignorable dropouts. We treat toxicity as a binary outcome and efficacy as a time‐to‐event outcome. We model the marginal distribution of toxicity using a logistic regression and jointly model the times to efficacy and dropout using proportional hazard models to adjust for nonignorable dropouts. The correlation between times to efficacy and dropout is modeled using a shared frailty. We propose a two‐stage dose‐finding algorithm to adaptively assign patients to desirable doses. Simulation studies show that the proposed design has desirable operating characteristics. Our design selects the target dose with a high probability and assigns most patients to the target dose. Copyright © 2015 John Wiley & Sons, Ltd.
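A toy simulation of the shared‐frailty mechanism described: a gamma frailty multiplies both the efficacy and dropout hazards, so the two times are positively dependent and dropout is informative about unobserved efficacy (all rates invented):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000
frailty = rng.gamma(shape=2.0, scale=0.5, size=n)   # mean-1 shared frailty
t_eff = rng.exponential(1.0 / (0.10 * frailty))     # time to efficacy
t_drop = rng.exponential(1.0 / (0.05 * frailty))    # time to dropout

# Positive correlation between the two times, and the fraction of patients
# whose efficacy outcome is censored by an earlier (non-ignorable) dropout.
print(np.corrcoef(t_eff, t_drop)[0, 1], np.mean(t_drop < t_eff))
```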

18.
Recently, there has been much work on early phase cancer designs that incorporate both toxicity and efficacy data, called phase I‐II designs because they combine elements of both phases. However, they do not explicitly address the phase II hypothesis test of H0: p ≤ p0, where p is the probability of efficacy at the estimated maximum tolerated dose from phase I and p0 is the baseline efficacy rate. Standard practice for phase II remains to treat p as a fixed, unknown parameter and to use Simon's two‐stage design with all patients dosed at the estimated maximum tolerated dose. We propose a phase I‐II design that addresses the uncertainty in the estimated maximum tolerated dose underlying H0 by using sequential generalized likelihood theory. Combining this with a phase I design that incorporates efficacy data, the phase I‐II design provides a common framework that can be used from the first dose of phase I through the final accept/reject decision about H0 at the end of phase II, utilizing both toxicity and efficacy data throughout. Efficient group sequential testing is used in phase II, allowing early stopping to declare a treatment effect or futility. The proposed phase I‐II design thus removes the artificial barrier between phase I and phase II and fulfills the objectives of searching for the maximum tolerated dose and testing whether the treatment has an acceptable response rate to enter a phase III trial. Copyright © 2014 John Wiley & Sons, Ltd.
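For reference, a small sketch of the operating characteristics of the standard Simon‐type two‐stage design mentioned above, treating p as fixed (the design constants below are illustrative assumptions, not claimed to be optimal):

```python
from scipy.stats import binom

def reject_prob(p, n1, r1, n, r):
    """Probability of rejecting H0: p <= p0 under true response rate p, for a
    two-stage design that stops for futility if stage-1 responses <= r1 and
    otherwise rejects H0 when the total responses exceed r."""
    return sum(binom.pmf(x1, n1, p) * binom.sf(r - x1, n - n1, p)
               for x1 in range(r1 + 1, n1 + 1))

# Illustrative constants for H0: p <= 0.10 vs p = 0.30.
print(reject_prob(0.10, n1=10, r1=1, n=29, r=5))  # type I error
print(reject_prob(0.30, n1=10, r1=1, n=29, r=5))  # power
```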

19.
Phase II clinical trials are typically designed as two‐stage studies, in order to ensure early termination of the trial if the interim results show that the treatment is ineffective. Most two‐stage designs, whether developed under a frequentist or a Bayesian framework, select the second stage sample size before observing the first stage data. This may cause some paradoxical situations during the practical conduct of the trial. To avoid these potential problems, we suggest a Bayesian predictive strategy to derive an adaptive two‐stage design, where the second stage sample size is not selected in advance but depends on the first stage result. The criterion we propose is based on a modification of a Bayesian predictive design recently presented in the literature (Statist. Med. 2008; 27:1199–1224). The distinction between analysis and design priors is essential for the practical implementation of the procedure: some guidelines for choosing these prior distributions are discussed, and their impact on the required sample size is examined. Copyright © 2010 John Wiley & Sons, Ltd.
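A minimal sketch of the predictive quantity on which such an adaptive second stage can be based: for a binary endpoint with a Beta prior, the probability that a candidate second‐stage size m ends with a convincing posterior (the success criterion and all numbers are illustrative, and the paper's design/analysis‐prior distinction is collapsed into a single prior here):

```python
from scipy.stats import beta, betabinom

def predictive_success(x1, n1, m, p0=0.2, theta=0.9, a=1, b=1):
    """Predictive probability that, after m further patients, the posterior
    satisfies Pr(p > p0 | all data) > theta; the future responses y follow a
    beta-binomial predictive distribution given the stage-1 data."""
    prob = 0.0
    for y in range(m + 1):
        if beta.sf(p0, a + x1 + y, b + n1 - x1 + m - y) > theta:
            prob += betabinom.pmf(y, m, a + x1, b + n1 - x1)
    return prob

# Stage 1: 5/15 responses; compare candidate second-stage sizes, e.g. to
# pick the smallest m with an acceptable predictive probability of success.
for m in (10, 20, 30):
    print(m, round(predictive_success(5, 15, m), 3))
```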

20.
The rate of failure in phase III oncology trials is surprisingly high, partly owing to inadequate phase II studies. Recently, the use of randomized designs in phase II has been increasingly recommended, to avoid the limitations of studies that use a historical control. We propose a two‐arm two‐stage design based on a Bayesian predictive approach. The idea is to ensure a large probability, expressed in terms of the prior predictive probability of the data, of obtaining substantial posterior evidence in favour of the experimental treatment, under the assumption that it is actually more effective than the standard agent. This design is a randomized version of the two‐stage design proposed for single‐arm phase II trials by Sambucini. We examine the main features of our novel design as the parameters involved vary and compare our approach with Jung's minimax and optimal designs. An illustrative example is also provided online as supplementary material to this article. Copyright © 2014 John Wiley & Sons, Ltd.
