Similar Articles
20 similar articles found (search time: 21 ms)
1.
Hommel (Biometrical Journal; 45:581-589) proposed a flexible testing procedure for seamless phase II/III clinical trials. Schmidli et al. (Statistics in Medicine; 26:4925-4938), Kimani et al. (Statistics in Medicine; 28:917-936) and Brannath et al. (Statistics in Medicine; 28:1445-1463) exploited Hommel's flexible testing procedure to propose adaptations in seamless phase II/III clinical trials that incorporate prior knowledge using Bayesian methods. In this paper, we show that adaptation incorporating prior knowledge may lead to higher power. Other important issues to consider in such adaptive designs are the gain in power (or saving in patients) over traditional testing and how the utility values used to make the adaptation may be used to stop a trial early. In contrast to the aforementioned authors, we discuss these issues in detail and propose a unified approach to address them, making it clearer how to implement the aforementioned designs and how to propose similar ones.
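A minimal sketch of the weighted inverse-normal combination test, a standard building block behind flexible two-stage phase II/III procedures of this kind; the equal weights and the function name are illustrative, not taken from the cited papers:

```python
import math
from statistics import NormalDist

def inverse_normal_combination(p1, p2, w1=0.5, w2=0.5):
    """Combine independent stage-wise one-sided p-values with
    pre-specified weights (w1 + w2 = 1) into one combined p-value."""
    nd = NormalDist()
    # transform each stage-wise p-value to a z-score, combine, back-transform
    z = math.sqrt(w1) * nd.inv_cdf(1.0 - p1) + math.sqrt(w2) * nd.inv_cdf(1.0 - p2)
    return 1.0 - nd.cdf(z)
```

Because the weights are fixed before stage 2, the combined test keeps its Type I error level even when stage-2 design choices (such as treatment selection) depend on stage-1 data.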

2.
In this paper, we propose a predictive Bayesian approach to sample size determination (SSD) and re‐estimation in clinical trials, in the presence of multiple sources of prior information. The method we suggest is based on the use of mixtures of prior distributions for the unknown quantity of interest, typically a treatment effect or an effects difference. Methodologies are developed using normal models with mixtures of conjugate priors. In particular, we extend the SSD analysis of Gajewski and Mayo (Statist. Med. 2006; 25:2554–2566) and the sample size re‐estimation technique of Wang (Biometrical J. 2006; 48(5):1–13). Copyright © 2009 John Wiley & Sons, Ltd.
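The conjugate updating behind a mixture-of-normal-priors analysis can be sketched as follows; the function name and the component values in the test are illustrative, not the authors' notation:

```python
import math

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def update_mixture_prior(components, ybar, s2):
    """Conjugate update of a mixture-of-normals prior for a treatment effect.
    components: list of (weight, prior_mean, prior_var); ybar: observed mean
    with sampling variance s2. Returns the posterior mixture as a list of
    (weight, posterior_mean, posterior_var)."""
    # each component's weight is re-scaled by its marginal likelihood
    marg = [w * normal_pdf(ybar, m, v + s2) for w, m, v in components]
    total = sum(marg)
    post = []
    for (w, m, v), mg in zip(components, marg):
        post_var = 1.0 / (1.0 / v + 1.0 / s2)
        post_mean = post_var * (m / v + ybar / s2)
        post.append((mg / total, post_mean, post_var))
    return post
```

Data consistent with one source of prior information automatically shift posterior weight toward that component, which is what makes mixtures attractive when prior sources disagree.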

3.
Phase II clinical trials are typically designed as two‐stage studies, in order to ensure early termination of the trial if the interim results show that the treatment is ineffective. Most two‐stage designs, developed under both frequentist and Bayesian frameworks, select the second stage sample size before observing the first stage data. This may cause some paradoxical situations during the practical conduct of the trial. To avoid these potential problems, we suggest a Bayesian predictive strategy to derive an adaptive two‐stage design in which the second stage sample size is not selected in advance, but depends on the first stage result. The criterion we propose is based on a modification of a Bayesian predictive design recently presented in the literature (see Statist. Med. 2008; 27:1199–1224). The distinction between analysis and design priors is essential for the practical implementation of the procedure: some guidelines for choosing these prior distributions are discussed and their impact on the required sample size is examined. Copyright © 2010 John Wiley & Sons, Ltd.

4.
Numerous meta‐analyses in healthcare research combine results from only a small number of studies, for which the variance representing between‐study heterogeneity is estimated imprecisely. A Bayesian approach to estimation allows external evidence on the expected magnitude of heterogeneity to be incorporated. The aim of this paper is to provide tools that improve the accessibility of Bayesian meta‐analysis. We present two methods for implementing Bayesian meta‐analysis, using numerical integration and importance sampling techniques. Based on 14 886 binary outcome meta‐analyses in the Cochrane Database of Systematic Reviews, we derive a novel set of predictive distributions for the degree of heterogeneity expected in 80 settings depending on the outcomes assessed and comparisons made. These can be used as prior distributions for heterogeneity in future meta‐analyses. The two methods are implemented in R, for which code is provided. Both methods produce equivalent results to standard but more complex Markov chain Monte Carlo approaches. The priors are derived as log‐normal distributions for the between‐study variance, applicable to meta‐analyses of binary outcomes on the log odds‐ratio scale. The methods are applied to two example meta‐analyses, incorporating the relevant predictive distributions as prior distributions for between‐study heterogeneity. We have provided resources to facilitate Bayesian meta‐analysis, in a form accessible to applied researchers, which allow relevant prior information on the degree of heterogeneity to be incorporated. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
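A sketch of the numerical-integration approach for a normal–normal random-effects model with a log-normal prior on the between-study variance; the prior parameters below are placeholders, not the predictive distributions derived in the paper:

```python
import math

def lognorm_pdf(x, mu, sigma):
    # log-normal density, used here as the prior on tau^2
    return math.exp(-(math.log(x) - mu) ** 2 / (2.0 * sigma ** 2)) / (
        x * sigma * math.sqrt(2.0 * math.pi))

def bayes_meta(y, v, prior_mu=-2.0, prior_sigma=1.5):
    """Posterior mean and sd of the overall effect mu, integrating a
    log-normal prior on tau^2 over a grid, with a flat prior on mu.
    y: study estimates (e.g. log odds-ratios); v: within-study variances."""
    grid = [0.0005 * 1.03 ** j for j in range(450)]  # log-spaced tau^2 grid
    dens, mu_hats, V_conds = [], [], []
    for t2 in grid:
        w = [1.0 / (vi + t2) for vi in v]
        W = sum(w)
        mu_hat = sum(wi * yi for wi, yi in zip(w, y)) / W
        Q = sum(wi * (yi - mu_hat) ** 2 for wi, yi in zip(w, y))
        # marginal likelihood of tau^2 with mu integrated out analytically
        log_m = 0.5 * sum(math.log(wi) for wi in w) - 0.5 * math.log(W) - 0.5 * Q
        dens.append(math.exp(log_m) * lognorm_pdf(t2, prior_mu, prior_sigma))
        mu_hats.append(mu_hat)
        V_conds.append(1.0 / W)
    # trapezoid-style weights for the unevenly spaced grid
    mass = [dens[j] * (grid[j + 1] - grid[j]) for j in range(len(grid) - 1)]
    Z = sum(mass)
    p = [m_ / Z for m_ in mass]
    mean = sum(pj * mj for pj, mj in zip(p, mu_hats))
    second = sum(pj * (Vj + mj ** 2) for pj, Vj, mj in zip(p, V_conds, mu_hats))
    return mean, math.sqrt(second - mean ** 2)
```

Replacing the placeholder prior parameters with a setting-specific predictive distribution is exactly how the derived priors are meant to be plugged in.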

5.
Chen and Chaloner (Statist. Med. 2006; 25:2956–2966. DOI: 10.1002/sim.2429) present a Bayesian stopping rule for a single‐arm clinical trial with a binary endpoint. In some cases, earlier stopping may be possible by basing the stopping rule on the time to a binary event. We investigate the feasibility of computing exact, Bayesian, decision‐theoretic time‐to‐event stopping rules for a single‐arm group sequential non‐inferiority trial relative to an objective performance criterion. For a conjugate prior distribution, exponential failure time distribution, and linear and threshold loss structures, we obtain the optimal Bayes stopping rule by backward induction. We compute frequentist operating characteristics, including Type I error, statistical power, and expected run length. We also briefly address design issues. Copyright © 2009 John Wiley & Sons, Ltd.

6.
A two-stage model for evaluating both trial-level and patient-level surrogacy of correlated time-to-event endpoints has been introduced, using patient-level data when multiple clinical trials are available. However, the associated maximum likelihood approach often suffers from numerical problems when different baseline hazards among trials and imperfect estimation of treatment effects are assumed. To address this issue, we propose performing the second-stage, trial-level evaluation of potential surrogates within a Bayesian framework, where we may naturally borrow information across trials while maintaining these realistic assumptions. Posterior distributions on surrogacy measures of interest may then be used to compare measures or make decisions regarding the candidacy of a specific endpoint. We perform a simulation study to investigate differences in estimation performance between traditional maximum likelihood and new Bayesian representations of common meta-analytic surrogacy measures, while assessing sensitivity to data characteristics such as number of trials, trial size, and amount of censoring. Furthermore, we present both frequentist and Bayesian trial-level surrogacy evaluations of time to recurrence for overall survival in two meta-analyses of adjuvant therapy trials in colon cancer. With these results, we recommend Bayesian evaluation as an attractive and numerically stable alternative in the multitrial assessment of potential surrogate endpoints.

7.
The continual reassessment method (CRM) is an adaptive model-based design used to estimate the maximum tolerated dose in phase I clinical trials. Asymptotically, the method has been shown to select the correct dose given that certain conditions are satisfied. When sample size is small, specifying a reasonable model is important. While an algorithm has been proposed for the calibration of the initial guesses of the probabilities of toxicity, the calibration of the prior distribution of the parameter for the Bayesian CRM has not been addressed. In this paper, we introduce the concept of least informative prior variance for a normal prior distribution. We also propose two systematic approaches to jointly calibrate the prior variance and the initial guesses of the probability of toxicity at each dose. The proposed calibration approaches are compared with existing approaches in the context of two examples via simulations. The new approaches and the previously proposed methods yield very similar results since the latter used appropriate vague priors. However, the new approaches yield a smaller interval of toxicity probabilities in which a neighboring dose may be selected.
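A sketch of the standard one-parameter power-model Bayesian CRM that such calibration targets; the skeleton and the prior standard deviation below are illustrative inputs, not calibrated values from the paper:

```python
import math

def crm_recommend(skeleton, n, tox, target=0.25, prior_sd=1.0):
    """One-parameter power-model CRM: p_i(a) = skeleton_i ** exp(a),
    with a ~ N(0, prior_sd^2); posterior computed by grid integration.
    n, tox: patients treated and toxicities observed at each dose.
    Returns (recommended dose index, posterior mean toxicity per dose)."""
    m = 401
    a_grid = [-4.0 + 8.0 * j / (m - 1) for j in range(m)]
    log_post = []
    for a in a_grid:
        lp = -0.5 * (a / prior_sd) ** 2  # log prior (up to a constant)
        e = math.exp(a)
        for s, ni, ti in zip(skeleton, n, tox):
            p = s ** e
            lp += ti * math.log(p) + (ni - ti) * math.log(1.0 - p)
        log_post.append(lp)
    mx = max(log_post)  # stabilise before exponentiating
    w = [math.exp(l - mx) for l in log_post]
    W = sum(w)
    post_tox = [sum(wi * s ** math.exp(a) for wi, a in zip(w, a_grid)) / W
                for s in skeleton]
    rec = min(range(len(skeleton)), key=lambda i: abs(post_tox[i] - target))
    return rec, post_tox
```

The calibration question the abstract addresses is precisely how to choose `prior_sd` and the skeleton jointly so that small-sample recommendations behave well.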

8.
Although meta-analyses are typically viewed as retrospective activities, they are increasingly being applied prospectively to provide up-to-date evidence on specific research questions. When meta-analyses are updated, account should be taken of the possibility of false-positive findings due to repeated significance tests. We discuss the use of sequential methods for meta-analyses that incorporate random effects to allow for heterogeneity across studies. We propose a method that uses an approximate semi-Bayes procedure to update evidence on the among-study variance, starting with an informative prior distribution that might be based on findings from previous meta-analyses. We compare our method with other approaches, including the traditional method of cumulative meta-analysis, in a simulation study and observe that it has Type I and Type II error rates close to the nominal level. We illustrate the method using an example in the treatment of bleeding peptic ulcers.

9.
The rate of failure in phase III oncology trials is surprisingly high, partly owing to inadequate phase II studies. Recently, the use of randomized designs in phase II has been increasingly recommended, to avoid the limits of studies that use a historical control. We propose a two‐arm two‐stage design based on a Bayesian predictive approach. The idea is to ensure a large probability, expressed in terms of the prior predictive probability of the data, of obtaining substantial posterior evidence in favour of the experimental treatment, under the assumption that it is actually more effective than the standard agent. This design is a randomized version of the two‐stage design that has been proposed for single‐arm phase II trials by Sambucini. We examine the main features of our novel design as all the parameters involved vary, and compare our approach with Jung's minimax and optimal designs. An illustrative example is also provided online as supplementary material to this article. Copyright © 2014 John Wiley & Sons, Ltd.

10.
In meta-analysis combining results from parallel and cross-over trials, there is a risk of bias originating from the carry-over effect in cross-over trials. When pooling treatment effects estimated from parallel trials and two-period two-treatment cross-over trials, meta-analytic estimators of treatment effect can be obtained from the combination of parallel trial results either with cross-over trial results based on data of the first period only or with cross-over trial results analysed with data from both periods. Taking data from the first cross-over period protects against carry-over but gives less efficient treatment estimators and may lead to selection bias. This study evaluates in terms of variance reduction and mean square error the cost of calculating meta-analysis estimates with data from the first period instead of data from the two cross-over periods. If the information on cross-over sequence is available, we recommend performing two combined design meta-analyses, one using the first cross-over period data and one based on data from both cross-over periods. To investigate simultaneously the statistical significance of these two estimators as well as the carry-over at meta-analysis level, a method based on a multivariate analysis of the meta-analytic treatment effect and carry-over estimates is proposed.
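The basic pooling step can be sketched as a fixed-effect inverse-variance combination of the parallel-trial and cross-over-trial estimates (whichever cross-over analysis is chosen); this is a generic meta-analysis utility, not the paper's multivariate carry-over method:

```python
def pool(estimates, variances):
    """Fixed-effect inverse-variance pooled estimate and its variance.
    Each entry may come from a parallel trial or a cross-over trial
    (first-period-only or both-periods analysis)."""
    w = [1.0 / v for v in variances]
    W = sum(w)
    est = sum(wi * e for wi, e in zip(w, estimates)) / W
    return est, 1.0 / W
```

Running this once with first-period cross-over estimates and once with both-period estimates shows the efficiency cost directly: the both-periods variances are smaller, so the pooled variance shrinks, at the price of carry-over exposure.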

11.
In this paper, we propose a Bayesian two-stage design for phase II clinical trials, which represents a predictive version of the single threshold design (STD) recently introduced by Tan and Machin. The STD two-stage sample sizes are determined specifying a minimum threshold for the posterior probability that the true response rate exceeds a pre-specified target value and assuming that the observed response rate is slightly higher than the target. Unlike the STD, we do not refer to a fixed experimental outcome, but take into account the uncertainty about future data. In both stages, the design aims to control the probability of getting a large posterior probability that the true response rate exceeds the target value. Such a probability is expressed in terms of prior predictive distributions of the data. The performance of the design is based on the distinction between analysis and design priors, recently introduced in the literature. The properties of the method are studied when all the design parameters vary.
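The posterior quantity at the heart of such threshold designs, P(true response rate > target | data) under a Beta prior, can be computed as below; the Simpson-rule helper and the prior values in the usage line are illustrative:

```python
import math

def beta_tail(p0, a, b, m=2000):
    """P(p > p0) for p ~ Beta(a, b), assuming a, b >= 1 (bounded density),
    computed by Simpson's rule with m (even) subintervals."""
    log_c = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    def f(x):
        # clamp away from 0/1 so the logs stay finite at the boundaries
        x = min(max(x, 1e-12), 1.0 - 1e-12)
        return math.exp(log_c + (a - 1.0) * math.log(x) + (b - 1.0) * math.log(1.0 - x))
    h = (1.0 - p0) / m
    s = f(p0) + f(1.0)
    for j in range(1, m):
        s += (4 if j % 2 else 2) * f(p0 + j * h)
    return s * h / 3.0

# Usage: Beta(1, 1) prior, 10 responses in 20 patients, target rate 0.3
posterior_prob = beta_tail(0.3, 1 + 10, 1 + 20 - 10)
```

The STD thresholds this posterior probability against a minimum value; the predictive version averages such quantities over the prior predictive distribution of future data rather than fixing one outcome.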

12.
Despite an enormous and growing statistical literature, formal procedures for dose‐finding are only slowly being implemented in phase I clinical trials. Even in oncology and other life‐threatening conditions in which a balance between efficacy and toxicity has to be struck, model‐based approaches, such as the Continual Reassessment Method, have not been universally adopted. Two related concerns have limited the adoption of the new methods. One relates to doubts about the appropriateness of models assumed to link the risk of toxicity to dose, and the other is the difficulty of communicating the nature of the process to clinical investigators responsible for early phase studies. In this paper, we adopt a new Bayesian approach involving a simple model assuming only monotonicity in the dose‐toxicity relationship. The parameters that define the model have immediate and simple interpretation. The approach can be applied automatically, and we present a simulation investigation of its properties when it is. More importantly, it can be used in a transparent fashion as one element in the expert consideration of what dose to administer to the next patient or group of patients. The procedure serves to summarize the opinions and the data concerning risks of a binary characterization of toxicity which can then be considered, together with additional and less tidy trial information, by the clinicians responsible for making decisions on the allocation of doses. Graphical displays of these opinions can be used to ease communication with investigators. Copyright © 2010 John Wiley & Sons, Ltd.

13.
There is now a large literature on objective Bayesian model selection in the linear model based on the g‐prior. The methodology has been recently extended to generalized linear models using test‐based Bayes factors. In this paper, we show that test‐based Bayes factors can also be applied to the Cox proportional hazards model. If the goal is to select a single model, then both the maximum a posteriori and the median probability model can be calculated. For clinical prediction of survival, we shrink the model‐specific log hazard ratio estimates with subsequent calculation of the Breslow estimate of the cumulative baseline hazard function. A Bayesian model average can also be employed. We illustrate the proposed methodology with the analysis of survival data on primary biliary cirrhosis patients and the development of a clinical prediction model for future cardiovascular events based on data from the Second Manifestations of ARTerial disease (SMART) cohort study. Cross‐validation is applied to compare the predictive performance with alternative model selection approaches based on Harrell's c‐Index, the calibration slope and the integrated Brier score. Finally, a novel application of Bayesian variable selection to optimal conditional prediction via landmarking is described. Copyright © 2016 John Wiley & Sons, Ltd.

14.
Point estimation for the selected treatment in a two‐stage drop‐the‐loser trial is not straightforward because a substantial bias can be induced in the standard maximum likelihood estimate (MLE) through the first stage selection process. Research has generally focused on alternative estimation strategies that apply a bias correction to the MLE; however, such estimators can have a large mean squared error. Carreras and Brannath (Stat. Med. 32:1677‐90) have recently proposed using a special form of shrinkage estimation in this context. Given certain assumptions, their estimator is shown to dominate the MLE in terms of mean squared error loss, which provides a very powerful argument for its use in practice. In this paper, we suggest the use of a more general form of shrinkage estimation in drop‐the‐loser trials that has parallels with model fitting in the area of meta‐analysis. Several estimators are identified and are shown to compare favourably with Carreras and Brannath's original estimator and the MLE. However, they necessitate either explicit estimation of an additional parameter measuring the heterogeneity between treatment effects or a quite unnatural prior distribution for the treatment effects that can only be specified after the first stage data have been observed. Shrinkage methods are a powerful tool for accurately quantifying treatment effects in multi‐arm clinical trials, and further research is needed to understand how to maximise their utility. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.
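A generic empirical-Bayes shrinkage sketch in the meta-analysis spirit the abstract alludes to, shrinking stage-1 arm estimates toward their grand mean with a method-of-moments heterogeneity estimate; this is not Carreras and Brannath's estimator, and the inputs are illustrative:

```python
def shrink_estimates(means, se2):
    """Shrink arm-specific stage-1 estimates toward the overall mean.
    means: observed arm effect estimates (k >= 2); se2: their squared
    standard errors. Shrinkage factor tau2 / (tau2 + se2) comes from a
    method-of-moments estimate of between-arm heterogeneity tau2."""
    k = len(means)
    gm = sum(means) / k
    s2_obs = sum((m - gm) ** 2 for m in means) / (k - 1)
    # subtract average sampling variance; truncate at zero
    tau2 = max(s2_obs - sum(se2) / k, 0.0)
    return [gm + tau2 / (tau2 + v) * (m - gm) for m, v in zip(means, se2)]
```

Because the apparently best arm tends to owe part of its lead to noise, pulling the extreme estimates toward the centre is what counteracts the selection bias described above.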

15.
Multi-arm trials meta-analysis is a methodology for combining evidence from all possible similar studies, based on a synthesis of different types of comparisons, in order to draw inferences about the effectiveness of multiple compared treatments. Studies with statistically significant results are potentially more likely to be submitted and selected than studies with non-significant results; this leads to false-positive results. In meta-analysis, uncritically combining only the identified selected studies may lead to an incorrect, usually over-optimistic conclusion. This problem is known as selection bias. In this paper, we first define a random-effects meta-analysis model for multi-arm trials that allows for heterogeneity among studies. This general model is based on a normal approximation for the empirical log-odds ratio. We then address the problem of publication bias by using a sensitivity analysis and by applying a selection model to the available data of a meta-analysis. This method allows for different amounts of selection bias and helps to investigate how sensitive the main parameter of interest is when compared with the estimates of the standard model. Throughout the paper, we use binary data from a meta-analysis of antiplatelet therapy for maintaining vascular patency to illustrate the methods.

16.
Tan SB, Machin D. Statistics in Medicine 2002; 21(14):1991-2012.
Many different statistical designs have been used in phase II clinical trials. The majority of these are based on frequentist statistical approaches. Bayesian methods provide a good alternative to frequentist approaches as they allow for the incorporation of relevant prior information and the presentation of the trial results in a manner which, some feel, is more intuitive and helpful. In this paper, we propose two new Bayesian designs for phase II clinical trials. These designs have been developed specifically to make them as user friendly and as familiar as possible to those who have had experience working with two-stage frequentist phase II designs. Thus, unlike many of the Bayesian designs already proposed in the literature, our designs do not require a distribution for the response rate of the currently used drug or the explicit specification of utility or loss functions. We study the properties of our designs and compare them with the Simon two-stage optimal and minimax designs. We also apply them to an example of two recently concluded phase II trials conducted at the National Cancer Centre in Singapore. Sample size tables for the designs are given.

17.
Many formal statistical procedures for phase I dose-finding studies have been proposed. Most concern a single novel agent available at a number of doses and administered to subjects participating in a single treatment period and returning a single binary indicator of toxicity. Such a structure is common when evaluating cytotoxic drugs for cancer. This paper concerns studies of combinations of two agents, both available at several doses. Subjects participate in one treatment period and provide two binary responses: one an indicator of benefit and the other of harm. The word 'benefit' is used loosely here: the response might be an early indicator of physiological change which, if induced in patients, is of potential therapeutic value. The context need not be oncology, but might be any study intended to meet both the phase I aim of establishing which doses are safe and the phase II goal of exploring potential therapeutic activity. A Bayesian approach is used based on an assumption of monotonicity in the relationship between the strength of the dose-combination and the distribution of the bivariate outcome. Special cases are described, and the procedure is evaluated using simulation. The parameters that define the model have immediate and simple interpretation. Graphical representations of the posterior opinions about model parameters are shown, and these can be used to inform the discussions of the trial safety committee.

18.
Many sample size criteria exist. These include power calculations and methods based on confidence interval widths from a frequentist viewpoint, and Bayesian methods based on credible interval widths or decision theory. Bayesian methods account for the inherent uncertainty of inputs to sample size calculations through the use of prior information rather than the point estimates typically used by frequentist methods. However, the choice of prior density can be problematic because there will almost always be different appreciations of the past evidence. Such differences can be accommodated a priori by robust methods for Bayesian design, for example, using mixtures or ϵ-contaminated priors. This would then ensure that the prior class includes divergent opinions. However, one may prefer to report several posterior densities arising from a “community of priors,” which cover the range of plausible prior densities, rather than forming a single class of priors. To date, however, there are no corresponding sample size methods that specifically account for a community of prior densities in the sense of ensuring a large enough sample size for the data to sufficiently overwhelm the priors and ensure consensus across widely divergent prior views. In this paper, we develop methods that account for the variability in prior opinions by providing the sample size required to induce posterior agreement to a prespecified degree. Prototypic examples for one- and two-sample binomial outcomes are included. We compare sample sizes from criteria that consider a family of priors to those that would result from previous interval-based Bayesian criteria.

19.
Yuan Y, Yin G. Statistics in Medicine 2011; 30(17):2098-2108.
In oncology, dose escalation is often carried out to search for the maximum tolerated dose (MTD) in phase I clinical trials. We propose a Bayesian hybrid dose-finding method that inherits the robustness of model-free methods and the efficiency of model-based methods. In the Bayesian hypothesis testing framework, we compute the Bayes factor and adaptively assign a dose to each cohort of patients based on the adequacy of the dose-toxicity information that has been collected thus far. If the data observed at the current treatment dose are adequately informative about the toxicity probability of this dose (e.g. whether this dose is below or above the MTD), we make the decision of dose assignment (e.g. either to escalate or to de-escalate the dose) directly without assuming a parametric dose-toxicity curve. If the observed data at the current dose are not sufficient to deliver such a definitive decision, we resort to a parametric dose-toxicity curve, such as that of the continual reassessment method (CRM), in order to borrow strength across all the doses under study to guide dose assignment. We examine the properties of the hybrid design through extensive simulation studies, and also compare the new method with the CRM and the '3 + 3' design. The simulation results show that our design is more robust than parametric model-based methods and more efficient than nonparametric model-free methods.
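A simplified two-point-hypothesis Bayes factor illustrates the hybrid idea of acting model-free when the current-dose data are decisive and deferring to a model otherwise; the thresholds and the ±delta hypotheses are illustrative, not the authors' formulation:

```python
import math

def dose_decision(n, tox, target=0.3, delta=0.1, threshold=3.0):
    """Compare H_low: p = target - delta ('dose below MTD') against
    H_high: p = target + delta ('dose above MTD') at the current dose,
    given tox toxicities in n patients. Acts model-free only when the
    Bayes factor is decisive; otherwise defers to a model-based step."""
    p_lo, p_hi = target - delta, target + delta
    log_bf = (tox * math.log(p_lo / p_hi)
              + (n - tox) * math.log((1.0 - p_lo) / (1.0 - p_hi)))
    bf = math.exp(log_bf)  # evidence for H_low over H_high
    if bf > threshold:
        return "escalate"
    if bf < 1.0 / threshold:
        return "de-escalate"
    return "model-based"  # indecisive: fall back to a CRM-type model
```

With few patients at a dose the Bayes factor usually lands in the indecisive middle zone, which is exactly when borrowing strength across doses through a parametric curve pays off.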

20.
Last observation carried forward (LOCF) and analysis using only data from subjects who complete a trial (Completers) are commonly used techniques for analysing data in clinical trials with incomplete data when the endpoint is change from baseline at last scheduled visit. We propose two alternative methods. The semi-parametric method, which cumulates changes observed between consecutive time points, is conceptually similar to the familiar life-table method and corresponding Kaplan-Meier estimation when the primary endpoint is time to event. A non-parametric analogue of LOCF is obtained by carrying forward, not the observed value, but the rank of the change from baseline at the last observation for each subject. We refer to this method as the LRCF method. Both procedures retain the simplicity of LOCF and Completers analyses and, like these methods, do not require data imputation or modelling assumptions. In the absence of any incomplete data they reduce to the usual two-sample tests. In simulations intended to reflect chronic diseases that one might encounter in practice, LOCF was observed to produce markedly biased estimates and markedly inflated type I error rates when censoring was unequal in the two treatment arms. These problems did not arise with the Completers, Cumulative Change, or LRCF methods. Cumulative Change and LRCF were more powerful than Completers, and the Cumulative Change test provided more efficient estimates than the Completers analysis, in all simulations. We conclude that the Cumulative Change and LRCF methods are preferable to LOCF and Completers analyses. Mixed model repeated measures (MMRM) performed similarly to Cumulative Change and LRCF and makes somewhat less restrictive assumptions about missingness mechanisms, so that it is also a reasonable alternative to LOCF and Completers analyses.
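A simplified sketch of the LRCF idea: rank each subject's change from baseline at their last observed visit, pooled across both arms (midranks for ties), and form a Wilcoxon-type rank-sum statistic. The tie-corrected variance and the per-visit details of the actual LRCF procedure are omitted:

```python
def lrcf_z(last_changes_a, last_changes_b):
    """Rank-sum z-statistic for arm A, using each subject's change from
    baseline at the last observed visit (midranks for tied values)."""
    pooled = sorted((v, i) for i, v in enumerate(last_changes_a + last_changes_b))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        mid = (i + j) / 2.0 + 1.0  # midrank for the tied block i..j
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = mid
        i = j + 1
    nA, nB = len(last_changes_a), len(last_changes_b)
    n = nA + nB
    ra = sum(ranks[:nA])  # rank sum for arm A
    mean = nA * (n + 1) / 2.0
    var = nA * nB * (n + 1) / 12.0  # no tie correction, for simplicity
    return (ra - mean) / var ** 0.5
```

With complete data this reduces to an ordinary two-sample rank test, which is the property the abstract highlights for both proposed methods.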
