Similar Literature
20 similar documents retrieved.
1.
In non‐inferiority trials that employ the synthesis method, several types of dependence among test statistics arise because the same information from the historical trial is shared. The settings in which these dependencies appear fall into three categories: first, a new drug is approved on the basis of a single non‐inferiority trial; second, a new drug is approved only if two independent non‐inferiority trials show positive results; third, two different new drugs are approved against the same active control. The problem with these dependencies is that they can make the type I error rate deviate from the nominal level. To study such deviations, we introduce the unconditional and conditional across‐trial type I error rates for the case where the non‐inferiority margin is estimated from the historical trial, and we investigate how the dependencies affect these error rates. We show that the unconditional across‐trial type I error rate increases dramatically with the correlation between the two non‐inferiority tests when a new drug is approved on the basis of two positive non‐inferiority trials. We conclude that the conditional across‐trial type I error rate involves the unknown treatment effect in the historical trial; the formulae for the conditional across‐trial type I error rates thus provide a way of investigating them for various assumed values of that treatment effect. Copyright © 2010 John Wiley & Sons, Ltd.
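A quick way to see the inflation described above is to simulate it. The sketch below is our illustration, not the authors' formulae: it draws one historical estimate of the control-versus-placebo effect, shares it between two synthesis test statistics, and estimates the unconditional across-trial type I error for the two-trial approval rule. All effect sizes, standard errors, and the retention fraction f are assumed values.

```python
# Monte Carlo sketch (illustrative assumptions throughout) of the
# unconditional across-trial type I error under the synthesis method when
# two new non-inferiority trials share one estimated historical margin.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n_rep = 200_000
alpha = 0.025                  # one-sided level per trial
z_a = norm.ppf(1 - alpha)

theta = 2.0                    # true control-vs-placebo effect (historical)
se_h = 0.4                     # standard error of the historical estimate
f = 0.5                        # fraction of theta to be preserved
se_new = 0.4                   # standard error in each new trial
true_diff = -(1 - f) * theta   # boundary of the synthesis null hypothesis

theta_hat = rng.normal(theta, se_h, n_rep)   # shared historical estimate
d1 = rng.normal(true_diff, se_new, n_rep)    # trial 1: test minus control
d2 = rng.normal(true_diff, se_new, n_rep)    # trial 2: test minus control

den = np.sqrt(se_new**2 + (1 - f)**2 * se_h**2)
z1 = (d1 + (1 - f) * theta_hat) / den
z2 = (d2 + (1 - f) * theta_hat) / den

print("per-trial rejection rate:", np.mean(z1 > z_a))   # close to alpha
print("both trials positive:   ", np.mean((z1 > z_a) & (z2 > z_a)))
print("nominal if independent: ", alpha**2)              # much smaller
```

The shared historical estimate induces a positive correlation between z1 and z2, so the joint rejection rate exceeds the alpha-squared value that independent trials would give.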

2.
The three‐arm clinical trial design, which includes a test treatment, an active reference, and a placebo control, is the gold standard for the assessment of non‐inferiority. In the presence of non‐compliance, a common concern is that an intent‐to‐treat (ITT) analysis (the standard approach in non‐inferiority trials) tends to increase the chance of erroneously concluding non‐inferiority, suggesting that the per‐protocol (PP) analysis may be preferable despite its inherent bias. The objective of this paper is to develop statistical methodology for dealing with non‐compliance in three‐arm non‐inferiority trials with censored, time‐to‐event data. Changes in treatment are considered the only form of non‐compliance. We present an approach using a three‐arm rank‐preserving structural failure time model and G‐estimation. Using simulations, we investigate in detail the impact of non‐compliance on non‐inferiority trials under ITT analysis, PP analysis, and the proposed method. The results indicate that the proposed method shows good characteristics and that neither the ITT nor the PP analysis can always guarantee the validity of the non‐inferiority conclusion. A SAS program implementing the proposed test procedure is available from the authors upon request. Copyright © 2014 John Wiley & Sons, Ltd.

3.
In this article, we study blinded sample size re‐estimation in the ‘gold standard’ design with an internal pilot study for normally distributed outcomes. The ‘gold standard’ design is a three‐arm clinical trial design that includes an active control and a placebo in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three‐arm trials, in which the non‐inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re‐estimation procedures in a simulation study of operating characteristics, including power and type I error. We find that re‐estimation based on the popular one‐sample variance estimator results in overpowered trials. Conversely, re‐estimation based on unbiased variance estimators such as the Xing–Ganju variance estimator results in underpowered trials; this is expected, because an overestimate of the variance, and hence of the sample size, is generally required for the re‐estimation procedure to meet the target power. To overcome this problem, we propose an inflation factor for sample size re‐estimation with the Xing–Ganju variance estimator and show that this approach yields adequately powered trials. Because the Xing–Ganju variance estimator is unbiased and has a distribution that does not depend on the group means, the inflation factor does not depend on the nuisance parameter and can therefore be calculated before the trial. Moreover, we prove that sample size re‐estimation based on the Xing–Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.
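To illustrate the contrast drawn above between the one-sample variance estimator and the Xing–Ganju estimator, the following sketch simulates blinded data in complete blocks of three (one patient per arm). The setup and all numbers are illustrative assumptions: the one-sample estimator absorbs the between-arm spread and overestimates the variance, whereas the block-total estimator does not depend on the group means.

```python
# Sketch (assumed setup, not the authors' code) of two blinded variance
# estimators in an internal pilot with blocks of size 3 (one patient per
# arm: experimental, reference, placebo).
import numpy as np

rng = np.random.default_rng(7)
sigma = 1.0
means = np.array([0.0, 0.4, 1.0])    # assumed arm means (unknown under blinding)
n_blocks = 2000

# one complete randomized block per row; blinding hides the column labels
y = rng.normal(means, sigma, size=(n_blocks, 3))

pooled = y.ravel()
one_sample_var = pooled.var(ddof=1)   # inflated by the between-arm spread

block_totals = y.sum(axis=1)
# Var(block total) = 3 * sigma^2, because the arm means add the same
# constant to every total; dividing by 3 recovers sigma^2 unbiasedly
xg_var = block_totals.var(ddof=1) / 3

print("true variance:        ", sigma**2)
print("one-sample estimator: ", round(one_sample_var, 3))  # > sigma^2
print("Xing-Ganju estimator: ", round(xg_var, 3))          # ~ sigma^2
```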

4.
The three‐arm design with a test treatment, a reference treatment, and placebo offers internal assay sensitivity for a proof of non‐inferiority. In this design, the relative effects known from nonparametric theory are robust tools for assessing non‐inferiority in a range of situations. We establish an asymptotic nonparametric theory for the three‐arm design based on the asymptotic distribution of rank means under the alternative, and derive a rank test for non‐inferiority. Fieller's formula is used to calculate a corresponding confidence interval. The approach is extended to multicentre studies. Simulation studies demonstrate the accuracy of the methods, and an example is discussed. Copyright © 2009 John Wiley & Sons, Ltd.

5.
Chen and Chaloner (Statist. Med. 2006; 25:2956–2966. DOI: 10.1002/sim.2429) present a Bayesian stopping rule for a single‐arm clinical trial with a binary endpoint. In some cases, earlier stopping may be possible by basing the stopping rule on the time to a binary event. We investigate the feasibility of computing exact, Bayesian, decision‐theoretic time‐to‐event stopping rules for a single‐arm group sequential non‐inferiority trial relative to an objective performance criterion. For a conjugate prior distribution, an exponential failure time distribution, and linear and threshold loss structures, we obtain the optimal Bayes stopping rule by backward induction. We compute frequentist operating characteristics, including type I error, statistical power, and expected run length, and briefly address design issues. Copyright © 2009 John Wiley & Sons, Ltd.

6.
A three‐arm clinical trial design with an experimental treatment, an active control, and a placebo control, commonly referred to as the gold standard design, enables testing of non‐inferiority or superiority of the experimental treatment compared with the active control. In this paper, we propose methods for designing and analyzing three‐arm trials with negative binomially distributed endpoints. In particular, we develop a Wald‐type test with a restricted maximum‐likelihood variance estimator for testing non‐inferiority or superiority. For this test, we derive sample size and power formulas as well as optimal sample size allocations. The performance of the proposed test is assessed in an extensive simulation study with regard to type I error rate, power, sample size, and sample size allocation. For comparison, Wald‐type statistics with a sample variance estimator and with an unrestricted maximum‐likelihood estimator are included in the simulation study. The proposed Wald‐type test with a restricted variance estimator performed well across the scenarios considered and is therefore recommended for application in clinical trials. The methods are motivated and illustrated by a recent clinical trial in multiple sclerosis. The R package ThreeArmedTrials, which implements the methods discussed in this paper, is available on CRAN. Copyright © 2015 John Wiley & Sons, Ltd.

7.
Conventional phase II trials using binary endpoints as early indicators of a time‐to‐event outcome are not always feasible. Uveal melanoma has no reliable intermediate marker of efficacy, while in pancreatic cancer and viral clearance the time to the event of interest is short, making an early indicator unnecessary; in the latter application, Weibull models have been used to analyse the corresponding time‐to‐event data. We present Bayesian sample size calculations for single‐arm and randomised phase II trials, assuming proportional hazards models for time‐to‐event endpoints, with special consideration given to the case where survival times follow the Weibull distribution. The proposed methods are demonstrated through an illustrative trial based on uveal melanoma patient data. A procedure for prior specification based on knowledge or predictions of survival patterns is described; this enables investigation of the choice of allocation ratio in the randomised setting, to assess whether a control arm is indeed required. The Bayesian framework yields sample sizes consistent with those used in practice. When a confirmatory phase III trial will follow if suitable evidence of efficacy is identified, Bayesian approaches are less controversial than for definitive trials. In the randomised setting, a compromise for obtaining feasible sample sizes is a loss in certainty in the specified hypotheses: the Bayesian counterpart of power. However, this approach may still be preferable to running a single‐arm trial in which no data are collected on the control treatment. This dilemma is present in most phase II trials, where resources are not sufficient to conduct a definitive trial. Copyright © 2015 John Wiley & Sons, Ltd.

8.
There are strong ethical, logistical, and financial arguments for supplementing the evidence from a new clinical trial with data from previous trials that used similar control treatments. There is a consensus that historical information should be down‐weighted or discounted relative to information from the new trial, but determining the appropriate degree of discounting is a major difficulty. The degree of discounting can be represented by a bias parameter with a specified variance, but a comparison between the historical and new data gives only a poor estimate of this variance. Hence, if no strong assumption is made concerning its value (i.e. if ‘dynamic borrowing’ is practised), there may be little or no gain from using the historical data, in either frequentist terms (type I error rate and power) or Bayesian terms (the posterior distribution of the treatment effect). It is therefore best to compare the consequences of a range of assumptions. This paper presents a clear, simple graphical tool for doing so on the basis of the mean square error, and illustrates its use with historical data from clinical trials in amyotrophic lateral sclerosis. This approach makes clear that different assumptions can lead to very different conclusions. External information can sometimes provide strong additional guidance, but different stakeholders may still make very different judgements concerning the appropriate degree of discounting. Copyright © 2016 John Wiley & Sons, Ltd.
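The mean-square-error comparison underlying such a graphical tool can be illustrated in a few lines. The sketch below uses our own illustrative numbers, not the paper's: it computes the MSE of a pooled estimator that gives the historical control data weight w, for several assumed values of the historical bias.

```python
# Illustrative MSE calculation for borrowing historical control data:
# variance shrinks as the borrowing weight w grows, but any unmodelled
# shift delta in the historical mean contributes squared bias.
var_new, var_hist = 1.0, 0.5        # sampling variances (assumed)
for delta in (0.0, 0.5, 1.0):       # assumed bias of the historical data
    for w in (0.0, 0.25, 0.5):      # degree of borrowing
        est_var = (1 - w)**2 * var_new + w**2 * var_hist
        mse = est_var + (w * delta)**2
        print(f"bias={delta:.1f}  weight={w:.2f}  MSE={mse:.3f}")
```

Printing (or plotting) this grid reproduces the core message of the abstract: which weight minimises the MSE depends entirely on the assumed bias.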

9.
The ‘gold standard’ design for three‐arm trials refers to trials with an active control and a placebo control in addition to the experimental treatment group. This design is recommended whenever it is ethically justifiable, as it allows the simultaneous comparison of experimental treatment, active control, and placebo. Parametric testing methods have been studied extensively in recent years; however, these methods often tend to be liberal or conservative when the distributional assumptions are not met, particularly with small sample sizes. In this article, we introduce a studentized permutation test for testing non‐inferiority and superiority of the experimental treatment compared with the active control in three‐arm trials in the ‘gold standard’ design. The performance of the studentized permutation test for finite sample sizes is assessed in a Monte Carlo simulation study under various parameter constellations, with emphasis on whether the test meets the target significance level. For comparison, commonly used Wald‐type tests, which do not make any distributional assumptions, are included in the simulation study. The simulation study shows that, for count data, the presented studentized permutation test outperforms its competitors, for instance the test based on a quasi‐Poisson model. The methods discussed in this paper are implemented in the R package ThreeArmedTrials, which is available on the Comprehensive R Archive Network (CRAN). Copyright © 2016 John Wiley & Sons, Ltd.
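As a flavour of how a studentized permutation test works, here is a deliberately simplified two-sample sketch for count data. It is our generic illustration, not the paper's three-arm procedure; the data and the one-sided direction are assumptions.

```python
# Simplified two-sample studentized permutation test for count data.
import numpy as np

rng = np.random.default_rng(5)
x = rng.poisson(3.0, 30)        # experimental arm (assumed counts)
y = rng.poisson(4.0, 30)        # reference arm (assumed counts)

def t_stat(a, b):
    # Welch-type studentized difference in means
    return (a.mean() - b.mean()) / np.sqrt(
        a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))

obs = t_stat(x, y)
pooled = np.concatenate([x, y])
n_perm = 5000
perm = np.empty(n_perm)
for i in range(n_perm):
    rng.shuffle(pooled)                       # re-randomize group labels
    perm[i] = t_stat(pooled[:30], pooled[30:])

p_value = np.mean(perm <= obs)                # one-sided: lower rate is better
print("studentized permutation p-value:", p_value)
```

Studentizing the statistic before permuting is what makes the test robust when the two samples have unequal variances.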

10.
This paper presents a simple Bayesian approach to sample size determination in clinical trials. The trial is required to be large enough to ensure that the data collected will provide convincing evidence either that an experimental treatment is better than a control or that it fails to improve upon the control by some clinically relevant difference. The method resembles standard frequentist formulations of the problem, and in certain circumstances involving 'non-informative' prior information it leads to identical answers. In particular, unlike many Bayesian approaches to sample size determination, use is made of an alternative hypothesis that the experimental treatment is better than the control by some specified magnitude. The approach is introduced in the context of testing whether a single stream of binary observations is consistent with a given success rate p0. Next, the case of comparing two independent streams of normally distributed responses is considered, first under the assumption that their common variance is known and then for unknown variance. Finally, the more general situation in which a large sample is to be collected and analysed according to the asymptotic properties of the score statistic is explored.
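The single-stream binary case lends itself to a short numerical sketch. The code below is our reading of that style of criterion, with an assumed Beta(1, 1) prior and illustrative values of p0, the relevant difference, and the evidence threshold: it searches for the smallest n such that data resembling either hypothesis would yield convincing posterior evidence in the corresponding direction.

```python
# Sketch of a Bayesian sample size search for one binary stream
# (our illustration of the flavour of the criterion, not the paper's rule).
from scipy.stats import beta

p0, delta, eps = 0.3, 0.15, 0.05    # assumed baseline, relevant difference,
                                     # and evidence threshold

def convincing(n: int) -> bool:
    s_hi = round(n * (p0 + delta))   # data looking like p0 + delta
    s_lo = round(n * p0)             # data looking like p0
    # posterior is Beta(1 + successes, 1 + failures) under a Beta(1, 1) prior
    post_better = 1 - beta.cdf(p0, 1 + s_hi, 1 + n - s_hi)
    post_no_gain = beta.cdf(p0 + delta, 1 + s_lo, 1 + n - s_lo)
    return post_better >= 1 - eps and post_no_gain >= 1 - eps

n = next(n for n in range(10, 2000) if convincing(n))
print("smallest n meeting both evidence criteria:", n)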

11.
Many meta‐analyses combine results from only a small number of studies, a situation in which the between‐study variance is imprecisely estimated when standard methods are applied. Bayesian meta‐analysis allows the incorporation of external evidence on heterogeneity, offering the potential for more robust inference on the effect size of interest. We present a method for performing Bayesian meta‐analysis using data augmentation, in which an informative conjugate prior for the between‐study variance is represented by pseudo data and estimation is carried out by meta‐regression. To assist in this, we derive predictive inverse‐gamma distributions for the between‐study variance expected in future meta‐analyses; these may serve as priors for heterogeneity in new meta‐analyses. In a simulation study, we compare approximate Bayesian methods using meta‐regression and pseudo data against fully Bayesian approaches based on importance sampling techniques and Markov chain Monte Carlo (MCMC), and we compare the frequentist properties of these Bayesian methods with those of the commonly used frequentist DerSimonian and Laird procedure. The method is implemented in standard statistical software and provides a less complex alternative to standard MCMC approaches. An importance sampling approach produces almost identical results to standard MCMC, and results obtained through meta‐regression and pseudo data are very similar. On average, data augmentation gives results closer to MCMC when implemented using restricted maximum likelihood estimation rather than the DerSimonian and Laird or maximum likelihood estimators. The methods are applied to real datasets, and an extension to network meta‐analysis is described. The proposed method facilitates Bayesian meta‐analysis in a way that is accessible to applied researchers. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

12.
When several experimental treatments are available for testing, multi‐arm trials provide gains in efficiency over separate trials. Including interim analyses allows the investigator to make effective use of the data gathered during the trial. Bayesian adaptive randomization (AR) and multi‐arm multi‐stage (MAMS) designs are two distinct methods that use patient outcomes to improve the efficiency and ethics of a trial: AR allocates a greater proportion of future patients to treatments that have performed well, whereas MAMS designs use pre‐specified stopping boundaries to determine whether experimental treatments should be dropped. There is little consensus on which method is more suitable for clinical trials, so in this paper we compare the two under several simulation scenarios and in the context of a real multi‐arm phase II breast cancer trial, in terms of their efficiency and ethical properties. We also consider the practical problem of a delay between the recruitment of patients and the assessment of their treatment response. Both methods are more efficient and ethical than a multi‐arm trial without interim analyses, although delay between recruitment and response assessment attenuates this efficiency gain. We also consider futility stopping rules for response‐adaptive trials, which add efficiency when all treatments are ineffective. Our comparisons show that AR is more efficient than MAMS designs when there is an effective experimental treatment, whereas if none of the experimental treatments is effective, MAMS designs slightly outperform AR. © 2014 The Authors Statistics in Medicine Published by John Wiley & Sons, Ltd.
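A minimal sketch of the Bayesian AR idea for binary endpoints is given below. It uses a generic Thompson-style rule, with allocation weights proportional to the posterior probability that each arm is best under Beta(1, 1) priors; this is our illustration of the general idea, not the specific algorithm compared in the paper.

```python
# Generic outcome-adaptive allocation sketch for three arms with binary
# endpoints (illustrative counts; Thompson-style weighting).
import numpy as np

rng = np.random.default_rng(0)
successes = np.array([8, 12, 5])     # responses observed so far, per arm
failures = np.array([12, 8, 15])     # non-responses observed so far, per arm

# posterior of each arm's response rate is Beta(1 + s, 1 + f)
draws = rng.beta(1 + successes, 1 + failures, size=(50_000, 3))
p_best = np.bincount(draws.argmax(axis=1), minlength=3) / draws.shape[0]

print("posterior P(arm is best):      ", p_best.round(3))
print("next-patient allocation weights:", p_best.round(3))
```

Arms that have performed well accumulate allocation probability; a MAMS design would instead compare each arm's test statistic against pre-specified stopping boundaries at each interim analysis.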

13.
This paper addresses statistical issues in non‐inferiority trials in which the primary outcome is a fatal event. The investigations are inspired by a recent Food and Drug Administration (FDA) draft guideline on treatments for nosocomial pneumonia. The non‐inferiority margin suggested in this guideline for the endpoint of all‐cause mortality is defined on two different distance measures (the rate difference and the odds ratio) and is discontinuous. Furthermore, the margin yields considerable power for the statistical proof of non‐inferiority at alternatives that might be regarded as clinically unacceptable, that is, even when the experimental treatment is harmful compared with the control. We investigated the appropriateness of the proposed non‐inferiority margin as well as the performance of candidate test statistics for the analysis. A continuous variant of the margin proposed in the FDA guideline, combined with the unconditional exact test according to Barnard, showed favorable characteristics with respect to type I error rate control and power. To prevent harmful new treatments from being declared non‐inferior, we propose adding a ‘second hurdle’. We discuss examples and explore power characteristics when both statistical significance and passing the second hurdle are required. Copyright © 2012 John Wiley & Sons, Ltd.
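Barnard's unconditional exact test itself is available in SciPy. The toy call below uses assumed 2x2 mortality counts and a plain two-group comparison; the margin-shifted non-inferiority version discussed in the paper would require a bespoke implementation.

```python
# Barnard's unconditional exact test on an illustrative 2x2 mortality table
# (assumed counts; plain comparison of two proportions, no margin shift).
from scipy.stats import barnard_exact

#        deaths  survivors
table = [[18, 82],    # experimental arm
         [15, 85]]    # control arm
res = barnard_exact(table, alternative="two-sided")
print("Barnard exact p-value:", res.pvalue)
```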

14.
A method is introduced for computing a Bayesian 95 per cent posterior probability region for vaccine efficacy. This method assumes independent vague gamma prior distributions for the incidence rates on each arm of the trial and a Poisson likelihood for the counts of incident cases of infection. The approach is similar in spirit to the Bayesian analysis of the binomial risk ratio described by Aitchison and Bacon-Shone. However, the focus of our interest is not on incorporating prior information into the design of trials for efficacy, but rather on evaluating whether the Bayesian approach with vague prior information produces results comparable to a frequentist approach. A review of methods for constructing exact and large-sample intervals for vaccine efficacy is provided as a framework for comparison. The confidence interval methods are assessed by comparing the size and power of tests of vaccine efficacy in proposed intermediate-sized, randomized, double-blinded, placebo-controlled trials.
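Because the gamma-Poisson model is conjugate, the posterior region described above can be simulated directly. The sketch below uses assumed case counts, person-times, and a vague Gamma(0.001, 0.001) prior; vaccine efficacy is taken as one minus the incidence rate ratio.

```python
# Conjugate gamma-Poisson sketch for vaccine efficacy (illustrative data):
# with a Gamma(a, b) prior and x cases over person-time T, the posterior
# incidence rate is Gamma(a + x, b + T); VE = 1 - lambda_vac / lambda_pla.
import numpy as np

rng = np.random.default_rng(42)
a, b = 0.001, 0.001                  # vague gamma prior (shape, rate)
x_vac, t_vac = 8, 2000.0             # cases, person-years (vaccine arm)
x_pla, t_pla = 40, 2000.0            # cases, person-years (placebo arm)

# numpy's gamma takes (shape, scale); scale = 1 / posterior rate
lam_vac = rng.gamma(a + x_vac, 1 / (b + t_vac), 100_000)
lam_pla = rng.gamma(a + x_pla, 1 / (b + t_pla), 100_000)
ve = 1 - lam_vac / lam_pla

lo, hi = np.percentile(ve, [2.5, 97.5])
print(f"posterior median VE:   {np.median(ve):.2f}")
print(f"95% credible interval: ({lo:.2f}, {hi:.2f})")
```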

15.
The Bayesian approach to statistics has been growing rapidly in popularity as an alternative to the frequentist approach in the appraisal of healthcare technologies in clinical trials. Bayesian methods offer significant advantages over classical frequentist methods in the presentation of evidence to decision makers. A fundamental feature of a Bayesian analysis is the use of prior information as well as the clinical trial data in the final analysis. However, the incorporation of prior information remains controversial and presents a potential barrier to the acceptance of practical uses of Bayesian methods. The purpose of this paper is to stimulate a debate on the use of prior information in evidence submitted to decision makers. We discuss the advantages of incorporating genuine prior information in cost-effectiveness analyses of clinical trial data and explore mechanisms to safeguard scientific rigor in the use of such prior information.

16.
In a bivariate meta‐analysis, the number of diagnostic studies involved is often very low, so that frequentist methods may run into problems. Bayesian inference is particularly attractive here, as informative priors that add a small amount of information can stabilise the analysis without overwhelming the data. However, Bayesian analysis is often computationally demanding, and the selection of the prior for the covariance matrix of the bivariate structure is crucial with little data. The integrated nested Laplace approximation (INLA) method provides an efficient solution to the computational issues by avoiding any sampling, but the important question of priors remains. We explore the penalised complexity (PC) prior framework for specifying informative priors for the variance parameters and the correlation parameter. PC priors facilitate model interpretation and hyperparameter specification, as expert knowledge can be incorporated intuitively. We conduct a simulation study to compare the properties and behaviour of differently defined PC priors with priors currently used in the field. The simulation study shows that the PC prior appears beneficial for the variance parameters, and that using PC priors for the correlation parameter yields more precise estimates when they are specified in a sensible neighbourhood around the truth. To investigate the use of PC priors in practice, we reanalyse a meta‐analysis using the telomerase marker for the diagnosis of bladder cancer and compare the results with those obtained by other commonly used modelling approaches. Copyright © 2017 John Wiley & Sons, Ltd.

17.
We propose three‐sided testing, a framework for the simultaneous testing of inferiority, equivalence, and superiority in clinical trials that controls multiplicity via the partitioning principle. Like the usual two‐sided testing approach, this approach is completely symmetric in the two treatments compared. Still, because the hypotheses of inferiority and superiority are tested with one‐sided tests, the proposed approach has more power than the two‐sided approach to infer non‐inferiority or non‐superiority. Applied to the classical point null hypothesis of equivalence, the three‐sided testing approach shows that it is sometimes possible to make an inference on the sign of the parameter of interest even when the null hypothesis itself cannot be rejected. Relationships with confidence intervals are explored, and the effectiveness of the three‐sided testing approach is demonstrated in a number of recent clinical trials. Copyright © 2010 John Wiley & Sons, Ltd.
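A schematic sketch of the three decisions for a normal mean difference is shown below. The margin, estimate, and standard error are assumptions, and the code is our simplification of the partitioning argument rather than the authors' full procedure: the parameter space is split into inferiority (theta <= -delta), equivalence, and superiority (theta >= delta), and each piece is tested one-sidedly at level alpha.

```python
# Schematic three-sided decisions for a normal mean difference
# (assumed margin delta, estimate, and standard error).
from scipy.stats import norm

alpha, delta = 0.025, 0.5
est, se = 0.72, 0.20          # estimated treatment difference and its SE
z_a = norm.ppf(1 - alpha)

not_inferior = (est + delta) / se > z_a    # reject theta <= -delta
not_superior = (est - delta) / se < -z_a   # reject theta >= delta
sign_positive = est / se > z_a             # reject theta <= 0 (sign claim)

print("non-inferiority shown:", not_inferior)
print("non-superiority shown:", not_superior)
print("positive sign shown:  ", sign_positive)
```

Because the three hypotheses partition the parameter space, each can be tested at the full level alpha without a further multiplicity adjustment.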

18.
We consider the use of the assurance method in clinical trial planning. The assurance method, an alternative to a power calculation, computes the probability of a clinical trial resulting in a successful outcome by eliciting a prior probability distribution for the relevant treatment effect. This is typically a hybrid Bayesian‐frequentist procedure, in that the trial data are usually assumed to be analysed with a frequentist hypothesis test, so the prior distribution is used only to calculate the probability of observing the desired outcome in that test. We argue that assessing the probability of a successful clinical trial is a useful part of the trial planning process. We develop assurance methods to accommodate survival outcome measures, assuming both parametric and nonparametric models, and we develop prior elicitation procedures for each survival model so that the assurance calculations can be performed more easily and reliably. Free software implementing our methods is available. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.
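Assurance itself is easy to sketch for a simple case. The code below computes assurance for a two-arm z-test with a normally distributed endpoint, using an assumed design and elicited prior rather than the paper's survival-outcome methods: the frequentist power function is averaged over the prior on the treatment effect.

```python
# Assurance sketch for a two-arm z-test (illustrative design and prior):
# assurance = E_prior[ power(theta) ].
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n_per_arm, sigma, alpha = 250, 1.0, 0.025
se = sigma * np.sqrt(2 / n_per_arm)       # SE of the mean difference
z_a = norm.ppf(1 - alpha)

theta = rng.normal(0.25, 0.15, 100_000)   # elicited prior on the effect
power_given_theta = 1 - norm.cdf(z_a - theta / se)
assurance = power_given_theta.mean()

print(f"power at the prior mean: {1 - norm.cdf(z_a - 0.25 / se):.3f}")
print(f"assurance:               {assurance:.3f}")
```

In this example the assurance falls well below the power computed at the prior mean, which is exactly why assurance is advocated as a more honest summary of a trial's prospects.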

19.
We consider two problems of increasing importance in clinical dose-finding studies. First, we assess the similarity of two non‐linear regression models for two non‐overlapping subgroups of patients over a restricted covariate space. To this end, we derive a confidence interval for the maximum difference between the two given models; if this confidence interval excludes the pre‐specified equivalence margin, similarity of dose response can be claimed. Second, we address the problem of demonstrating the similarity of two target doses for two non‐overlapping subgroups, again using an approach based on a confidence interval. We illustrate the proposed methods with a real case study and investigate their operating characteristics (coverage probabilities, type I error rates, and power) via simulation.

20.
We aim to establish whether it is ever appropriate to conduct cost‐minimisation analysis (CMA) rather than cost‐effectiveness analysis. We perform a literature review to examine how the use of CMA has changed since Briggs and O'Brien announced its death in 2001. Examples of simulated and trial data are presented, first to illustrate the advantages and disadvantages of CMA in the context of non‐inferiority trials and trials finding no significant difference in efficacy, and second to assess whether CMA gives biased results. We show that CMA is still used and that it biases measures of uncertainty, causing overestimation or underestimation of the value of information and of the probability that treatment is cost‐effective. Although the bias will be negligible for non‐inferiority studies comparing treatments that differ enormously in cost, it is generally necessary to collect and analyse data on costs and efficacy (including utilities) to assess this bias. Cost‐effectiveness analysis (including evaluation of the joint distribution of costs and benefits) is almost always required to avoid biased estimation of uncertainty. The remit of CMA in trial‐based economic evaluation is therefore even narrower than previously thought, suggesting that CMA is not only dead but should also be buried. Copyright © 2011 John Wiley & Sons, Ltd.
