Similar Articles
20 similar articles retrieved.
1.
Conventional phase II trials using binary endpoints as early indicators of a time‐to‐event outcome are not always feasible. Uveal melanoma has no reliable intermediate marker of efficacy. In pancreatic cancer and viral clearance, the time to the event of interest is short, making an early indicator unnecessary. In the latter application, Weibull models have been used to analyse corresponding time‐to‐event data. Bayesian sample size calculations are presented for single‐arm and randomised phase II trials assuming proportional hazards models for time‐to‐event endpoints. Special consideration is given to the case where survival times follow the Weibull distribution. The proposed methods are demonstrated through an illustrative trial based on uveal melanoma patient data. A procedure for prior specification based on knowledge or predictions of survival patterns is described. This enables investigation into the choice of allocation ratio in the randomised setting to assess whether a control arm is indeed required. The Bayesian framework enables sample sizes consistent with those used in practice to be obtained. When a confirmatory phase III trial will follow if suitable evidence of efficacy is identified, Bayesian approaches are less controversial than for definitive trials. In the randomised setting, a compromise for obtaining feasible sample sizes is a loss in certainty in the specified hypotheses: the Bayesian counterpart of power. However, this approach may still be preferable to running a single‐arm trial where no data is collected on the control treatment. This dilemma is present in most phase II trials, where resources are not sufficient to conduct a definitive trial. Copyright © 2015 John Wiley & Sons, Ltd.  相似文献   
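As a rough illustration of Bayesian sample-size reasoning for a time-to-event endpoint, the sketch below computes an assurance-type success probability for a single-arm trial by Monte Carlo. It is not the paper's method: it fixes the Weibull shape parameter so that a Gamma prior on the rate is conjugate, and the priors, target rate, follow-up time and decision cutoff are all hypothetical.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(1)

k = 1.2                  # assumed known Weibull shape parameter
a0, b0 = 2.0, 4.0        # analysis prior on the Weibull rate lambda: Gamma(a0, b0)
a_d, b_d = 6.0, 3.0      # "design" prior encoding prior optimism about efficacy
lam_target = 0.5         # trial succeeds if Pr(lambda < lam_target | data) is high
follow_up = 24.0         # administrative censoring time (e.g. months)
post_cut = 0.90          # required posterior probability at the final analysis

def prob_success(n, n_sim=4000):
    """Monte Carlo assurance: success probability averaged over the design prior."""
    hits = 0
    for _ in range(n_sim):
        lam = rng.gamma(a_d, 1.0 / b_d)                    # draw a plausible 'true' rate
        t = rng.exponential(1.0 / lam, n) ** (1.0 / k)     # Weibull(k, lam) event times
        event = t <= follow_up
        t_obs = np.minimum(t, follow_up)
        a_post = a0 + event.sum()                          # conjugate Gamma update
        b_post = b0 + np.sum(t_obs ** k)
        if gamma.cdf(lam_target, a_post, scale=1.0 / b_post) >= post_cut:
            hits += 1
    return hits / n_sim

for n in (20, 30, 40, 50):
    print(f"n = {n}: assurance = {prob_success(n):.3f}")
```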

2.
Early randomized Phase II cancer chemoprevention trials which assess short-term biological activity are critical to the decision process to advance to late Phase II/Phase III trials. We have adapted published Bayesian interim analysis methods (Spiegelhalter et al., J. R. Statist. Soc A, 1994; 157: 357-416) which give greater flexibility and simplicity of inference to the monitoring of randomized controlled Phase II trials using intermediate endpoints. The Bayesian stopping rule is designed to stop the trial more quickly when the evidence suggests ineffectiveness rather than when it suggests biological activity, thus allowing resources to be concentrated on those agents that show the most promise in this early stage of testing. We investigate frequentist performance characteristics of the proposed method through simulation of randomized placebo controlled trials with a growth factor intermediate end-point using mean and variance values derived from the literature. Simulation results show expected error rates and trial size similar to other commonly used group sequential methods for this setting. These results suggest that the Bayesian approach to interim analysis is well suited for monitoring small randomized controlled Phase II chemoprevention trials for early detection of either inactive or promising agents.  相似文献   
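A rough sketch of asymmetric Bayesian futility monitoring of this kind (not the authors' implementation): a normally distributed intermediate endpoint with known SD, a conjugate normal prior on the treatment difference, interim looks that stop only for apparent inactivity, and frequentist operating characteristics estimated by simulation. The SD, effect size, prior and cutoffs are invented.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

sigma = 10.0            # known SD of the intermediate (biomarker) endpoint
delta_min = 5.0         # smallest biologically interesting difference
mu0, tau0 = 0.0, 10.0   # prior on the true treatment difference: N(mu0, tau0^2)
n_per_arm = 30          # maximum per-arm sample size
looks = (10, 20, 30)    # interim analyses after this many patients per arm
futility_cut = 0.10     # stop early if Pr(diff > delta_min | data) < 0.10
activity_cut = 0.95     # declare activity if Pr(diff > 0 | data) > 0.95 at the end

def run_trial(true_diff):
    x_t = rng.normal(true_diff, sigma, n_per_arm)   # treatment arm responses
    x_c = rng.normal(0.0, sigma, n_per_arm)         # placebo arm responses
    for n in looks:
        d_hat = x_t[:n].mean() - x_c[:n].mean()
        se2 = 2 * sigma ** 2 / n
        post_var = 1.0 / (1.0 / tau0 ** 2 + 1.0 / se2)          # conjugate normal update
        post_mean = post_var * (mu0 / tau0 ** 2 + d_hat / se2)
        p_active = 1 - norm.cdf(delta_min, post_mean, np.sqrt(post_var))
        if p_active < futility_cut and n < n_per_arm:
            return "stopped for inactivity"
    p_pos = 1 - norm.cdf(0.0, post_mean, np.sqrt(post_var))
    return "activity" if p_pos > activity_cut else "inconclusive"

# crude frequentist operating characteristics under "no effect" and "active agent"
for true_diff in (0.0, delta_min):
    res = [run_trial(true_diff) for _ in range(2000)]
    print(true_diff, {r: round(res.count(r) / 2000, 3) for r in set(res)})
```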

3.
We propose a class of phase II clinical trial designs with sequential stopping and adaptive treatment allocation to evaluate treatment efficacy. Our work is based on two‐arm (control and experimental treatment) designs with binary endpoints. Our overall goal is to construct more efficient and ethical randomized phase II trials by reducing the average sample sizes and increasing the percentage of patients assigned to the better treatment arms of the trials. The designs combine the Bayesian decision‐theoretic sequential approach with adaptive randomization procedures in order to achieve simultaneous goals of improved efficiency and ethics. The design parameters represent the costs of different decisions, for example, the decisions for stopping or continuing the trials. The parameters enable us to incorporate the actual costs of the decisions in practice. The proposed designs allow the clinical trials to stop early for either efficacy or futility. Furthermore, the designs assign more patients to better treatment arms by applying adaptive randomization procedures. We develop an algorithm based on the constrained backward induction and forward simulation to implement the designs. The algorithm overcomes the computational difficulty of the backward induction method, thereby making our approach practicable. The designs result in trials with desirable operating characteristics under the simulated settings. Moreover, the designs are robust with respect to the response rate of the control group. Copyright © 2013 John Wiley & Sons, Ltd.  相似文献   
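The flow of such a trial can be sketched by replacing the paper's decision-theoretic rule (solved by constrained backward induction) with simple posterior-probability stopping cutoffs, combined with response-adaptive randomization that tilts allocation toward the apparently better arm. Priors, cutoffs, block size and true response rates below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(11)

def prob_e_better(s_e, n_e, s_c, n_c, ndraw=5000):
    """Pr(p_E > p_C | data) under independent Beta(1, 1) priors, by Monte Carlo."""
    pe = rng.beta(1 + s_e, 1 + n_e - s_e, ndraw)
    pc = rng.beta(1 + s_c, 1 + n_c - s_c, ndraw)
    return float(np.mean(pe > pc))

def run_trial(p_c=0.2, p_e=0.4, n_max=80, block=10,
              stop_eff=0.975, stop_fut=0.05, tuning=0.5):
    s = {"E": 0, "C": 0}   # responses per arm
    n = {"E": 0, "C": 0}   # patients per arm
    while n["E"] + n["C"] < n_max:
        pr = prob_e_better(s["E"], n["E"], s["C"], n["C"])
        if pr > stop_eff:
            return "stopped early for efficacy", n
        if pr < stop_fut:
            return "stopped early for futility", n
        # response-adaptive randomization: allocate in proportion to a tempered pr
        w = pr ** tuning / (pr ** tuning + (1 - pr) ** tuning)
        for _ in range(block):
            arm = "E" if rng.random() < w else "C"
            n[arm] += 1
            s[arm] += rng.random() < (p_e if arm == "E" else p_c)
    return "reached maximum sample size", n

print(run_trial())
```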

4.
Phase I/II trials utilize both toxicity and efficacy data to achieve efficient dose finding. However, due to the requirement of assessing efficacy outcome, which often takes a long period of time to be evaluated, the duration of phase I/II trials is often longer than that of the conventional dose‐finding trials. As a result, phase I/II trials are susceptible to the missing data problem caused by patient dropout, and the missing efficacy outcomes are often nonignorable in the sense that patients who do not experience treatment efficacy are more likely to drop out of the trial. We propose a Bayesian phase I/II trial design to accommodate nonignorable dropouts. We treat toxicity as a binary outcome and efficacy as a time‐to‐event outcome. We model the marginal distribution of toxicity using a logistic regression and jointly model the times to efficacy and dropout using proportional hazard models to adjust for nonignorable dropouts. The correlation between times to efficacy and dropout is modeled using a shared frailty. We propose a two‐stage dose‐finding algorithm to adaptively assign patients to desirable doses. Simulation studies show that the proposed design has desirable operating characteristics. Our design selects the target dose with a high probability and assigns most patients to the target dose. Copyright © 2015 John Wiley & Sons, Ltd.  相似文献   

5.
Tan SB, Machin D. Statistics in Medicine 2002; 21(14): 1991-2012.
Many different statistical designs have been used in phase II clinical trials. The majority of these are based on frequentist statistical approaches. Bayesian methods provide a good alternative to frequentist approaches as they allow for the incorporation of relevant prior information and the presentation of the trial results in a manner which, some feel, is more intuitive and helpful. In this paper, we propose two new Bayesian designs for phase II clinical trials. These designs have been developed specifically to make them as user friendly and as familiar as possible to those who have had experience working with two-stage frequentist phase II designs. Thus, unlike many of the Bayesian designs already proposed in the literature, our designs do not require a distribution for the response rate of the currently used drug or the explicit specification of utility or loss functions. We study the properties of our designs and compare them with the Simon two-stage optimal and minimax designs. We also apply them to an example of two recently concluded phase II trials conducted at the National Cancer Centre in Singapore. Sample size tables for the designs are given.  相似文献   
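For intuition, here is a generic two-stage Bayesian single-arm design of this flavour, not the authors' specific designs: the stage-1 continuation rule and the final "promising" decision are based on beta-binomial posterior probabilities that the response rate exceeds p0, the rules are translated into Simon-style integer boundaries, and type I error and power follow from exact binomial enumeration. Rates, priors, cutoffs and stage sizes are hypothetical.

```python
from scipy.stats import beta, binom

p0, p1 = 0.20, 0.40          # uninteresting vs. target response rate
a, b = 0.5, 0.5              # Jeffreys-type prior on the response rate
n1, n_total = 15, 35         # stage-1 and total sample size
theta1, theta_final = 0.30, 0.90   # posterior cutoffs: continue / declare promising

def post_prob_exceeds(x, n):
    """Pr(p > p0 | x responses in n patients) under a Beta(a, b) prior."""
    return 1 - beta.cdf(p0, a + x, b + n - x)

# translate the posterior rules into classical integer decision boundaries
r1 = min(x for x in range(n1 + 1) if post_prob_exceeds(x, n1) >= theta1)                  # continue if X1 >= r1
r = min(x for x in range(n_total + 1) if post_prob_exceeds(x, n_total) >= theta_final)    # promising if X >= r

def reject_prob(p):
    """Probability the design declares the drug promising when the true rate is p."""
    total = 0.0
    for x1 in range(r1, n1 + 1):
        need = max(r - x1, 0)
        total += binom.pmf(x1, n1, p) * binom.sf(need - 1, n_total - n1, p)
    return total

print("boundaries (r1, r):", r1, r)
print("type I error :", round(reject_prob(p0), 3))
print("power at p1  :", round(reject_prob(p1), 3))
```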

6.
Information from historical trials is important for the design, interim monitoring, analysis, and interpretation of clinical trials. Meta‐analytic models can be used to synthesize the evidence from historical data, which are often only available in aggregate form. We consider evidence synthesis methods for trials with recurrent event endpoints, which are common in many therapeutic areas. Such endpoints are typically analyzed by negative binomial regression. However, the individual patient data necessary to fit such a model are usually unavailable for historical trials reported in the medical literature. We describe approaches for back‐calculating model parameter estimates and their standard errors from available summary statistics with various techniques, including approximate Bayesian computation. We propose to use a quadratic approximation to the log‐likelihood for each historical trial based on 2 independent terms for the log mean rate and the log of the dispersion parameter. A Bayesian hierarchical meta‐analysis model then provides the posterior predictive distribution for these parameters. Simulations show this approach with back‐calculated parameter estimates results in very similar inference as using parameter estimates from individual patient data as an input. We illustrate how to design and analyze a new randomized placebo‐controlled exacerbation trial in severe eosinophilic asthma using data from 11 historical trials.  相似文献   
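A much-reduced sketch of the evidence-synthesis idea: back-calculate each historical trial's log event rate and its standard error from the published rate and 95% confidence interval, then combine them in a normal-normal hierarchical model evaluated on a grid to obtain a posterior predictive distribution for the placebo rate in a new trial. The published numbers are invented, the negative binomial dispersion parameter is ignored, and the paper's quadratic log-likelihood approximation and approximate Bayesian computation steps are not reproduced.

```python
import numpy as np
from scipy.stats import norm

# hypothetical published placebo-arm annualized rates with 95% CIs: (rate, lower, upper)
published = [(2.4, 1.9, 3.0), (1.8, 1.4, 2.3), (3.1, 2.5, 3.9), (2.0, 1.5, 2.7)]

y = np.array([np.log(r) for r, lo, hi in published])                       # log rates
se = np.array([(np.log(hi) - np.log(lo)) / (2 * 1.96) for _, lo, hi in published])

# normal-normal hierarchical model: y_i ~ N(theta_i, se_i^2), theta_i ~ N(mu, tau^2).
# Integrating out theta_i gives y_i ~ N(mu, se_i^2 + tau^2); evaluate the posterior of
# (mu, tau) on a grid with a flat prior on mu and a half-normal(0.5) prior on tau.
mu_grid = np.linspace(y.mean() - 1.0, y.mean() + 1.0, 201)
tau_grid = np.linspace(0.001, 1.0, 200)
M, T = np.meshgrid(mu_grid, tau_grid)

loglik = np.zeros_like(M)
for yi, si in zip(y, se):
    loglik += norm.logpdf(yi, M, np.sqrt(si ** 2 + T ** 2))
logpost = loglik + norm.logpdf(T, 0.0, 0.5)          # half-normal prior on tau (up to a constant)
post = np.exp(logpost - logpost.max())
post /= post.sum()

# posterior predictive for the log placebo rate in a new trial: theta_new ~ N(mu, tau^2)
rng = np.random.default_rng(3)
idx = rng.choice(post.size, size=20000, p=post.ravel())
theta_new = rng.normal(M.ravel()[idx], T.ravel()[idx])

print("posterior median placebo rate:", round(float(np.exp(np.median(theta_new))), 2))
print("95% predictive interval      :",
      [round(float(np.exp(q)), 2) for q in np.quantile(theta_new, [0.025, 0.975])])
```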

7.
In this paper, we consider two-stage designs with failure-time endpoints in single-arm phase II trials. We propose designs in which stopping rules are constructed by comparing the Bayes risk of stopping at stage I with the expected Bayes risk of continuing to stage II using both the observed data in stage I and the predicted survival data in stage II. Terminal decision rules are constructed by comparing the posterior expected loss of a rejection decision versus an acceptance decision. Simple threshold loss functions are applied to time-to-event data modeled either parametrically or nonparametrically, and the cost parameters in the loss structure are calibrated to obtain desired type I error and power. We ran simulation studies to evaluate design properties including types I and II errors, probability of early stopping, expected sample size, and expected trial duration and compared them with the Simon two-stage designs and a design, which is an extension of the Simon's designs with time-to-event endpoints. An example based on a recently conducted phase II sarcoma trial illustrates the method.  相似文献   

8.
We propose a flexible Bayesian optimal phase II (BOP2) design that is capable of handling simple (e.g., binary) and complicated (e.g., ordinal, nested, and co‐primary) endpoints under a unified framework. We use a Dirichlet‐multinomial model to accommodate different types of endpoints. At each interim, the go/no‐go decision is made by evaluating a set of posterior probabilities of the events of interest, which is optimized to maximize power or minimize the number of patients under the null hypothesis. Unlike other existing Bayesian designs, the BOP2 design explicitly controls the type I error rate, thereby bridging the gap between Bayesian designs and frequentist designs. In addition, the stopping boundary of the BOP2 design can be enumerated prior to the onset of the trial. These features make the BOP2 design accessible to a wide range of users and regulatory agencies and particularly easy to implement in practice. Simulation studies show that the BOP2 design has favorable operating characteristics with higher power and lower risk of incorrectly terminating the trial than some existing Bayesian phase II designs. The software to implement the BOP2 design is freely available at www.trialdesign.org . Copyright © 2017 John Wiley & Sons, Ltd.  相似文献   
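The binary-endpoint special case of a BOP2-style boundary can be enumerated in a few lines: the Dirichlet-multinomial model reduces to beta-binomial, and the no-go rule compares the posterior probability of futility with a sample-size-dependent cutoff of the form C(n) = 1 - lambda * (n/N)^gamma. The lambda and gamma used below are purely illustrative; in the actual design they are optimized to control the type I error rate and maximize power.

```python
from scipy.stats import beta

p0 = 0.20                 # null (uninteresting) response rate
N = 40                    # maximum sample size
interims = (10, 20, 30, 40)
a0, b0 = 0.2, 0.8         # Beta(a0, b0) prior with prior mean equal to p0
lam, gam = 0.85, 1.0      # illustrative tuning parameters (normally optimized by simulation)

def no_go(x, n):
    """True if the posterior probability of futility exceeds the adaptive cutoff."""
    pr_futile = beta.cdf(p0, a0 + x, b0 + n - x)       # Pr(p <= p0 | x responses in n)
    return pr_futile > 1 - lam * (n / N) ** gam

# enumerate the stopping boundary before the trial starts:
# the largest number of responses at which the trial would stop (no-go) at each look
for n in interims:
    stop_x = [x for x in range(n + 1) if no_go(x, n)]
    bound = max(stop_x) if stop_x else None
    print(f"n = {n:2d}: stop (no-go) if responses <= {bound}")
```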

9.
Interim monitoring is routinely conducted in phase II clinical trials to terminate the trial early if the experimental treatment is futile. Interim monitoring requires that patients’ responses be ascertained shortly after the initiation of treatment so that the outcomes are known by the time the interim decision must be made. However, in some cases, response outcomes require a long time to be assessed, which causes difficulties for interim monitoring. To address this issue, we propose a Bayesian trial design to allow for continuously monitoring phase II clinical trials in the presence of delayed responses. We treat the delayed responses as missing data and handle them using a multiple imputation approach. Extensive simulations show that the proposed design yields desirable operating characteristics under various settings and dramatically reduces the trial duration. Copyright © 2014 John Wiley & Sons, Ltd.  相似文献   
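A toy version of the multiple-imputation idea: outcomes still pending at the interim are imputed from the beta-binomial posterior predictive given the completed patients, the futility rule is applied to each completed data set, and the results are averaged. The real design also exploits each pending patient's partial follow-up time, which is omitted here; all numbers are hypothetical.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(5)

p0 = 0.30                       # response rate of the standard treatment
a0, b0 = 1.0, 1.0               # prior on the experimental response rate
futility_cut = 0.05             # stop if the averaged Pr(p > p0 | data) is below this

def interim_decision(x_obs, n_obs, n_pending, n_imp=2000):
    """Average Pr(p > p0) over multiply imputed completions of the pending outcomes."""
    probs = np.empty(n_imp)
    for m in range(n_imp):
        p_draw = rng.beta(a0 + x_obs, b0 + n_obs - x_obs)      # posterior draw from completers
        x_imp = rng.binomial(n_pending, p_draw)                # impute the pending responses
        a_post = a0 + x_obs + x_imp
        b_post = b0 + (n_obs + n_pending) - (x_obs + x_imp)
        probs[m] = 1 - beta.cdf(p0, a_post, b_post)
    p_go = probs.mean()
    return ("continue" if p_go >= futility_cut else "stop for futility"), round(p_go, 3)

# 20 patients enrolled, 12 evaluable so far (3 responses), 8 still awaiting assessment
print(interim_decision(x_obs=3, n_obs=12, n_pending=8))
```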

10.
Adaptive randomization is used in clinical trials to increase statistical efficiency. In addition, some clinicians and researchers believe that using adaptive randomization leads necessarily to more ethical treatment of subjects in a trial. We develop Bayesian, decision‐theoretic, clinical trial designs with response‐adaptive randomization and a primary goal of estimating treatment effect and then contrast these designs with designs that also include in their loss function a cost for poor subject outcome. When the loss function did not incorporate a cost for poor subject outcome, the gains in efficiency from response‐adaptive randomization were accompanied by ethically concerning subject allocations. Conversely, including a cost for poor subject outcome demonstrated a more acceptable balance between the competing needs in the trial. A subsequent, parallel set of trials designed to control explicitly types I and II error rates showed that much of the improvement achieved through modification of the loss function was essentially negated. Therefore, gains in efficiency from the use of a decision‐theoretic, response‐adaptive design using adaptive randomization may only be assumed to apply to those goals that are explicitly included in the loss function. Trial goals, including ethical ones, which do not appear in the loss function, are ignored and may even be compromised; it is thus inappropriate to assume that all adaptive trials are necessarily more ethical. Controlling types I and II error rates largely negates the benefit of including competing needs in favor of the goal of parameter estimation. Copyright © 2013 John Wiley & Sons, Ltd.  相似文献   

11.
Seamless phase I/II dose‐finding trials are attracting increasing attention nowadays in early‐phase drug development for oncology. Most existing phase I/II dose‐finding methods use sophisticated yet untestable models to quantify dose‐toxicity and dose‐efficacy relationships, which always renders them difficult to implement in practice. To simplify the practical implementation, we extend the Bayesian optimal interval design from maximum tolerated dose finding to optimal biological dose finding in phase I/II trials. In particular, optimized intervals for toxicity and efficacy are respectively derived by minimizing probabilities of incorrect classifications. If the pair of observed toxicity and efficacy probabilities at the current dose is located inside the promising region, we retain the current dose; if the observed probabilities are outside of the promising region, we propose an allocation rule by maximizing the posterior probability that the response rate of the next dose falls inside a prespecified efficacy probability interval while still controlling the level of toxicity. The proposed interval design is model‐free, thus is suitable for various dose‐response relationships. We conduct extensive simulation studies to demonstrate the small‐ and large‐sample performance of the proposed method under various scenarios. Compared to existing phase I/II dose‐finding designs, not only is our interval design easy to implement in practice, but it also possesses desirable and robust operating characteristics.  相似文献   
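A sketch of the interval machinery underlying such designs: the standard BOIN toxicity boundaries (as published for the single-agent design) together with a naive check of whether the observed efficacy rate falls in a prespecified promising interval. The published phase I/II design combines toxicity and efficacy through a joint decision and allocation rule that this toy function does not reproduce; the target rates, efficacy interval and data are hypothetical.

```python
import numpy as np

phi = 0.30                          # target toxicity probability
phi1, phi2 = 0.6 * phi, 1.4 * phi   # common default under-/over-dosing reference values

# BOIN escalation / de-escalation boundaries on the observed toxicity rate
lam_e = np.log((1 - phi1) / (1 - phi)) / np.log(phi * (1 - phi1) / (phi1 * (1 - phi)))
lam_d = np.log((1 - phi) / (1 - phi2)) / np.log(phi2 * (1 - phi) / (phi * (1 - phi2)))
print(f"toxicity boundaries: escalate if p_hat <= {lam_e:.3f}, "
      f"de-escalate if p_hat >= {lam_d:.3f}")

eff_interval = (0.25, 0.45)         # prespecified "promising" efficacy probability interval

def decision(n, n_tox, n_eff):
    """Toy dose decision from observed toxicity and efficacy rates at the current dose."""
    p_tox, p_eff = n_tox / n, n_eff / n
    if p_tox >= lam_d:
        return "de-escalate (too toxic)"
    if eff_interval[0] <= p_eff <= eff_interval[1]:
        return "stay (promising region)"
    return "escalate" if p_tox <= lam_e else "stay"

print(decision(n=9, n_tox=1, n_eff=3))
```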

12.
Molecularly targeted agent (MTA) combination therapy is in the early stages of development. When using a fixed dose of one agent in combinations of MTAs, toxicity and efficacy do not necessarily increase with an increasing dose of the other agent. Thus, in dose‐finding trials for combinations of MTAs, interest may lie in identifying the optimal biological dose combinations (OBDCs), defined as the lowest dose combinations (in a certain sense) that are safe and have the highest efficacy level meeting a prespecified target. The limited existing designs for these trials use parametric dose–efficacy and dose–toxicity models. Motivated by a phase I/II clinical trial of a combination of two MTAs in patients with pancreatic, endometrial, or colorectal cancer, we propose Bayesian dose‐finding designs to identify the OBDCs without parametric model assumptions. The proposed approach is based only on partial stochastic ordering assumptions for the effects of the combined MTAs and uses isotonic regression to estimate partially stochastically ordered marginal posterior distributions of the efficacy and toxicity probabilities. We demonstrate that our proposed method appropriately accounts for the partial ordering constraints, including potential plateaus on the dose–response surfaces, and is computationally efficient. We develop a dose‐combination‐finding algorithm to identify the OBDCs. We use simulations to compare the proposed designs with an alternative design based on Bayesian isotonic regression transformation and a design based on parametric change‐point dose–toxicity and dose–efficacy models and demonstrate desirable operating characteristics of the proposed designs. © 2014 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.  相似文献   
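The one-dimensional ingredient of the estimation step can be shown compactly: beta posterior means of the toxicity probability at each dose are smoothed by isotonic regression (pool-adjacent-violators) under the assumption that toxicity does not decrease with dose. The paper itself works with partial orderings over two-agent combinations and with full posterior distributions; the doses, counts and prior below are hypothetical.

```python
import numpy as np

def pava(y, w):
    """Weighted pool-adjacent-violators: non-decreasing fit to y with weights w."""
    # each block: [fitted value, total weight, number of original points pooled]
    blocks = [[float(yi), float(wi), 1] for yi, wi in zip(y, w)]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:          # violation: pool adjacent blocks
            v0, w0, c0 = blocks[i]
            v1, w1, c1 = blocks[i + 1]
            blocks[i:i + 2] = [[(v0 * w0 + v1 * w1) / (w0 + w1), w0 + w1, c0 + c1]]
            i = max(i - 1, 0)                        # re-check against the previous block
        else:
            i += 1
    return np.array([val for val, _, cnt in blocks for _ in range(cnt)])

# hypothetical toxicity data per dose: (patients treated, toxicities observed)
n_tox = [(6, 0), (6, 1), (6, 3), (6, 2), (6, 4)]
a0 = b0 = 0.5
post_mean = [(a0 + t) / (a0 + b0 + n) for n, t in n_tox]
iso = pava(post_mean, [n for n, _ in n_tox])
print("raw posterior means     :", [round(p, 2) for p in post_mean])
print("isotonic (PAVA) estimate:", [round(float(p), 2) for p in iso])
```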

13.
A flexible and simple Bayesian decision‐theoretic design for dose‐finding trials is proposed in this paper. In order to reduce the computational burden, we adopt a working model with conjugate priors, which is flexible to fit all monotonic dose‐toxicity curves and produces analytic posterior distributions. We also discuss how to use a proper utility function to reflect the interest of the trial. Patients are allocated based on not only the utility function but also the chosen dose selection rule. The most popular dose selection rule is the one‐step‐look‐ahead (OSLA), which selects the best‐so‐far dose. A more complicated rule, such as the two‐step‐look‐ahead, is theoretically more efficient than the OSLA only when the required distributional assumptions are met, which is, however, often not the case in practice. We carried out extensive simulation studies to evaluate these two dose selection rules and found that OSLA was often more efficient than two‐step‐look‐ahead under the proposed Bayesian structure. Moreover, our simulation results show that the proposed Bayesian method's performance is superior to several popular Bayesian methods and that the negative impact of prior misspecification can be managed in the design stage. Copyright © 2012 John Wiley & Sons, Ltd.  相似文献   

14.
In early‐phase clinical trials, interim monitoring is commonly conducted based on the estimated intent‐to‐treat effect, which is subject to bias in the presence of noncompliance. To address this issue, we propose a Bayesian sequential monitoring trial design based on the estimation of the causal effect using a principal stratification approach. The proposed design simultaneously considers efficacy and toxicity outcomes and utilizes covariates to predict a patient's potential compliance behavior and identify the causal effects. Based on accumulating data, we continuously update the posterior estimates of the causal treatment effects and adaptively make the go/no‐go decision for the trial. Numerical results show that the proposed method has desirable operating characteristics and addresses the issue of noncompliance. Copyright © 2015 John Wiley & Sons, Ltd.  相似文献   

15.
Ishizuka N, Ohashi Y. Statistics in Medicine 2001; 20(17-18): 2661-2681.
We discuss the continual reassessment method (CRM) and its extension with practical applications in phase I and I/II cancer clinical trials. The CRM has been proposed as an alternative design of a traditional cohort design and its essential features are the sequential (continual) selection of a dose level for the next patients based on the dose-toxicity relationship and the updating of the relationship based on patients' response data using Bayesian calculation. The original CRM has been criticized because it often tends to allocate too toxic doses to many patients and our proposal for overcoming this practical problem is to monitor a posterior density function of the occurrence of the dose limiting toxicity (DLT) at each dose level. A simulation study shows that strategies based on our proposal allocate a smaller number of patients to doses higher than the maximum tolerated dose (MTD) compared with the original method while the mean squared error of the probability of the DLT occurrence at the MTD is not inflated. We present a couple of extensions of the CRM with real prospective applications: (i) monitoring efficacy and toxicity simultaneously in a combination phase I/II trial; (ii) combining the idea of pharmacokinetically guided dose escalation (PKGDE) and utilization of animal toxicity data in determining the prior distribution. A stopping rule based on the idea of separation among the DLT density functions is discussed in the first example and a strategy for determining the model parameter of the dose-toxicity relationship is suggested in the second example.  相似文献   
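For reference, a bare-bones one-parameter CRM (power model, normal prior, posterior computed by numerical integration on a grid); the article's proposal additionally monitors the posterior density of the DLT probability at each dose, which is not shown here. The skeleton, target DLT rate and interim data are hypothetical.

```python
import numpy as np

skeleton = np.array([0.05, 0.10, 0.20, 0.30, 0.45])   # prior guesses of the DLT probability
target = 0.25                                          # target DLT rate
prior_sd = np.sqrt(1.34)                               # common choice: a ~ N(0, 1.34)

# trial data so far: patients treated and DLTs observed at each dose level
n_pat = np.array([3, 3, 3, 0, 0])
n_dlt = np.array([0, 0, 1, 0, 0])

a_grid = np.linspace(-4, 4, 2001)                      # grid for the model parameter a
prior = np.exp(-0.5 * (a_grid / prior_sd) ** 2)        # unnormalized normal prior

# power model: p_i(a) = skeleton_i ** exp(a)
p = skeleton[None, :] ** np.exp(a_grid)[:, None]       # grid x dose matrix
loglik = (n_dlt * np.log(p) + (n_pat - n_dlt) * np.log1p(-p)).sum(axis=1)
post = prior * np.exp(loglik - loglik.max())
post /= np.trapz(post, a_grid)

# posterior mean DLT probability at each dose, and the recommended next dose
p_hat = np.trapz(post[:, None] * p, a_grid, axis=0)
next_dose = int(np.argmin(np.abs(p_hat - target))) + 1
print("posterior DLT estimates    :", np.round(p_hat, 3))
print("recommended next dose level:", next_dose)
```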

16.
Dual‐agent trials are now increasingly common in oncology research, and many proposed dose‐escalation designs are available in the statistical literature. Despite this, the translation from statistical design to practical application is slow, as has been highlighted in single‐agent phase I trials, where a 3 + 3 rule‐based design is often still used. To expedite this process, new dose‐escalation designs need to be not only scientifically beneficial but also easy to understand and implement by clinicians. In this paper, we propose a curve‐free (nonparametric) design for a dual‐agent trial in which the model parameters are the probabilities of toxicity at each of the dose combinations. We show that it is relatively trivial for a clinician's prior beliefs or historical information to be incorporated in the model and updating is fast and computationally simple through the use of conjugate Bayesian inference. Monotonicity is ensured by considering only a set of monotonic contours for the distribution of the maximum tolerated contour, which defines the dose‐escalation decision process. Varied experimentation around the contour is achievable, and multiple dose combinations can be recommended to take forward to phase II. Code for R , Stata and Excel are available for implementation. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.  相似文献   
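A minimal sketch of the curve-free idea: each dose combination carries its own Beta prior on the toxicity probability, so updating is plain conjugate beta-binomial and no parametric dose-toxicity surface is required. The monotonic-contour machinery that the design uses to enforce ordering and guide escalation is not reproduced; the prior means, effective prior sample size and data are hypothetical.

```python
import numpy as np
from scipy.stats import beta

target = 0.30                       # target toxicity level
# elicited prior means for a 3 x 3 grid of dose combinations (rows: agent A, cols: agent B)
prior_mean = np.array([[0.05, 0.10, 0.15],
                       [0.10, 0.20, 0.30],
                       [0.15, 0.30, 0.45]])
prior_ess = 1.0                     # effective prior sample size per combination
a = prior_mean * prior_ess
b = (1 - prior_mean) * prior_ess

# observed (patients, toxicities) at each combination so far
n = np.array([[3, 3, 0], [3, 6, 3], [0, 3, 0]])
tox = np.array([[0, 0, 0], [0, 1, 2], [0, 2, 0]])

a_post, b_post = a + tox, b + (n - tox)                # conjugate update, cell by cell
post_mean = a_post / (a_post + b_post)
pr_over = 1 - beta.cdf(target, a_post, b_post)         # Pr(toxicity probability > target)

print("posterior mean toxicity:\n", np.round(post_mean, 2))
print("Pr(toxicity > target):\n", np.round(pr_over, 2))
```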

17.
We present a Bayesian approach for monitoring multiple outcomes in single-arm clinical trials. Each patient's response may include both adverse events and efficacy outcomes, possibly occurring at different study times. We use a Dirichlet-multinomial model to accommodate general discrete multivariate responses. We present Bayesian decision criteria and monitoring boundaries for early termination of studies with unacceptably high rates of adverse outcomes or with low rates of desirable outcomes. Each stopping rule is constructed either to maintain equivalence or to achieve a specified level of improvement of a particular event rate for the experimental treatment, compared with that of standard therapy. We avoid explicit specification of costs and a loss function. We evaluate the joint behaviour of the multiple decision rules using frequentist criteria. One chooses a design by considering several parameterizations under relevant fixed values of the multiple outcome probability vector. Applications include trials where response is the cross-product of multiple simultaneous binary outcomes, and hierarchical structures that reflect successive stages of treatment response, disease progression and survival. We illustrate the approach with a variety of single-arm cancer trials, including bio-chemotherapy acute leukaemia trials, bone marrow transplantation trials, and an anti-infection trial. The number of elementary patient outcomes in each of these trials varies from three to seven, with as many as four monitoring boundaries running simultaneously. We provide general guidelines for eliciting and parameterizing Dirichlet priors and for specifying design parameters.  相似文献   
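One ingredient of the approach in miniature: outcomes are classified into mutually exclusive categories, a Dirichlet prior gives a Dirichlet posterior, and Monte Carlo draws yield the posterior probabilities that drive simultaneous efficacy and toxicity monitoring boundaries. The historical rates, prior weights, category counts and cutoffs are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(9)

# historical (standard therapy) rates and the tolerated toxicity margin
resp_std, tox_std, delta_tox = 0.40, 0.25, 0.10

# categories: response without toxicity, response with toxicity, toxicity only, neither
prior = np.array([0.30, 0.10, 0.15, 0.45]) * 2.0   # weak Dirichlet prior, total weight 2
counts = np.array([8, 3, 6, 13])                   # observed category counts

draws = rng.dirichlet(prior + counts, size=20000)  # Dirichlet posterior draws
p_resp = draws[:, 0] + draws[:, 1]                 # response = categories 1 and 2
p_tox = draws[:, 1] + draws[:, 2]                  # toxicity = categories 2 and 3

pr_resp_worse = np.mean(p_resp < resp_std)
pr_tox_excess = np.mean(p_tox > tox_std + delta_tox)
print("Pr(response rate below standard)  :", round(float(pr_resp_worse), 3))
print("Pr(toxicity exceeds standard+0.10):", round(float(pr_tox_excess), 3))

# example monitoring rule: stop if either posterior probability is large
stop = pr_resp_worse > 0.90 or pr_tox_excess > 0.90
print("interim decision:", "stop" if stop else "continue")
```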

18.
In the era of targeted therapy and immunotherapy, the objective of dose finding is often to identify the optimal biological dose (OBD), rather than the maximum tolerated dose. We develop a utility-based Bayesian optimal interval (U-BOIN) phase I/II design to find the OBD. We jointly model toxicity and efficacy using a multinomial-Dirichlet model, and employ a utility function to measure dose risk-benefit trade-off. The U-BOIN design consists of two seamless stages. In stage I, the Bayesian optimal interval design is used to quickly explore the dose space and collect preliminary toxicity and efficacy data. In stage II, we continuously update the posterior estimate of the utility for each dose after each cohort, using accumulating efficacy and toxicity from both stages I and II, and then use the posterior estimate to direct the dose assignment and selection. Compared to existing phase I/II designs, one prominent advantage of the U-BOIN design is its simplicity for implementation. Once the trial is designed, it can be easily applied using predetermined decision tables, without complex model fitting and estimation. Our simulation study shows that, despite its simplicity, the U-BOIN design is robust and has high accuracy to identify the OBD. We extend the design to accommodate delayed efficacy by leveraging the short-term endpoint (eg, immune activity or other biological activity of targeted agents), and using it to predict the delayed efficacy outcome to facilitate real-time decision making. A user-friendly software to implement the U-BOIN is freely available at www.trialdesign.org .  相似文献   
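The stage-II utility step of a U-BOIN-type design, in a simplified binary-toxicity/binary-efficacy form: each dose gets a Dirichlet posterior over the four joint outcomes, a utility table encodes the risk-benefit trade-off, and the dose maximizing the posterior mean utility is the current OBD estimate. The stage-I BOIN exploration, admissibility rules and the delayed-efficacy extension are omitted; the utilities, priors and counts are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(17)

# utility of the four joint outcomes, in the order:
# (no toxicity, efficacy), (toxicity, efficacy), (no toxicity, no efficacy), (toxicity, no efficacy)
utility = np.array([100.0, 60.0, 40.0, 0.0])

prior = np.full(4, 0.25)          # weak Dirichlet(0.25, ..., 0.25) prior at every dose

# accumulated outcome counts per dose, in the same category order as `utility`
counts = {
    1: np.array([1, 0, 5, 0]),
    2: np.array([3, 1, 4, 1]),
    3: np.array([6, 2, 2, 2]),
    4: np.array([5, 5, 1, 4]),
}

mean_util = {}
for dose, c in counts.items():
    post = prior + c
    mean_util[dose] = float(utility @ (post / post.sum()))     # E[utility | data], analytic
    draws = rng.dirichlet(post, size=10000) @ utility          # draws for a credible interval
    lo, hi = np.quantile(draws, [0.025, 0.975])
    print(f"dose {dose}: posterior mean utility {mean_util[dose]:5.1f} "
          f"(95% CrI {lo:5.1f} to {hi:5.1f})")

print("current estimate of the optimal biological dose:", max(mean_util, key=mean_util.get))
```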

19.
We propose a robust two‐stage design to identify the optimal biological dose for phase I/II clinical trials evaluating both toxicity and efficacy outcomes. In the first stage of dose finding, we use the Bayesian model averaging continual reassessment method to monitor the toxicity outcomes and adopt an isotonic regression method based on the efficacy outcomes to guide dose escalation. When the first stage ends, we use the Dirichlet‐multinomial distribution to jointly model the toxicity and efficacy outcomes and pick the candidate doses based on a three‐dimensional volume ratio. The selected candidate doses are then seamlessly advanced to the second stage for dose validation. Both toxicity and efficacy outcomes are continuously monitored so that any overly toxic and/or less efficacious dose can be dropped from the study as the trial continues. When the phase I/II trial ends, we select the optimal biological dose as the dose obtaining the minimal value of the volume ratio within the candidate set. An advantage of the proposed design is that it does not impose a monotonically increasing assumption on the shape of the dose–efficacy curve. We conduct extensive simulation studies to examine the operating characteristics of the proposed design. The simulation results show that the proposed design has desirable operating characteristics across different shapes of the underlying true dose–toxicity and dose–efficacy curves. The software to implement the proposed design is available upon request. Copyright © 2016 John Wiley & Sons, Ltd.  相似文献   

20.
Clinical trials usually collect information on a large number of variables or endpoints, including one or more primary endpoints as well as a number of secondary endpoints representing different aspects of treatment effectiveness and safety. In this article, we focus on serial testing procedures that test multiple endpoints in a pre‐specified order, and consider how to optimize the order of endpoints subject to any clinical constraints, with respect to the expected number of successes (i.e., endpoints that reach statistical significance) or the expected gain (if endpoints are associated with numerical utilities). We consider some common approaches to this problem and propose two new approaches: a greedy algorithm based on conditional power and a simulated annealing algorithm that attempts to improve a given sequence in a random and iterative fashion. Simulation results indicate that the proposed algorithms are useful for finding a high‐performing sequence, and that optimized fixed sequence procedures can be competitive against traditional multiple testing procedures such as Holm's. The methods and findings are illustrated with two examples concerning migraine and asthma. Copyright © 2015 John Wiley & Sons, Ltd.  相似文献   
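The simulated-annealing idea can be sketched compactly if, unlike the paper, the endpoints are treated as independent so that the expected gain of a fixed-sequence procedure has a closed form: testing stops at the first non-significant endpoint, and annealing searches over permutations for a high expected gain. The per-endpoint powers and utilities are hypothetical, and correlated test statistics and clinical ordering constraints are not modeled.

```python
import math
import random

power   = {"pain": 0.90, "function": 0.75, "sleep": 0.55, "QoL": 0.40, "biomarker": 0.65}
utility = {"pain": 3.0,  "function": 2.0,  "sleep": 1.0,  "QoL": 1.5,  "biomarker": 0.5}

def expected_gain(order):
    """Sum over endpoints of utility * P(this endpoint and all earlier ones succeed)."""
    gain, prob_reached = 0.0, 1.0
    for e in order:
        prob_reached *= power[e]          # probability testing gets past endpoint e
        gain += utility[e] * prob_reached
    return gain

def simulated_annealing(endpoints, n_iter=20000, t0=1.0, t1=0.01, seed=42):
    rng = random.Random(seed)
    order = endpoints[:]
    best = order[:]
    for it in range(n_iter):
        temp = t0 * (t1 / t0) ** (it / n_iter)            # geometric cooling schedule
        cand = order[:]
        i, j = rng.sample(range(len(cand)), 2)            # propose a random swap
        cand[i], cand[j] = cand[j], cand[i]
        diff = expected_gain(cand) - expected_gain(order)
        if diff >= 0 or rng.random() < math.exp(diff / temp):
            order = cand                                  # accept the move
        if expected_gain(order) > expected_gain(best):
            best = order[:]
    return best

best = simulated_annealing(list(power))
print("best order:", best, "expected gain:", round(expected_gain(best), 3))
```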
