Similar articles (20 results)
1.

Objective

To examine the registration of noninferiority trials, with a focus on the reporting of study design and noninferiority margins.

Study Design and Setting

Cross-sectional study of registry records of noninferiority trials published from 2005 to 2009 and records of noninferiority trials in the International Standard Randomized Controlled Trial Number (ISRCTN) or ClinicalTrials.gov trial registries. The main outcome was the proportion of records that reported the noninferiority design and margin.

Results

We analyzed 87 registry records of published noninferiority trials and 149 registry records describing noninferiority trials. Thirty-five (40%) of the 87 records from published trials described the trial as a noninferiority trial; only two (2%) reported the noninferiority margin. Reporting of the noninferiority design was more frequent in the ISRCTN registry (13 of 18 records, 72%) than in ClinicalTrials.gov (22 of 69 records, 32%; P = 0.002). Among the 149 records identified in the registries, 13 (9%) reported the noninferiority margin. Only one of the industry-sponsored trials reported the margin, compared with 11 of the publicly funded trials (P = 0.001).
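The abstract does not state which test produced the P values above; as a quick sketch, here is how the registry comparison could be re-derived using Fisher's exact test, one plausible choice for counts of this size.

```python
# Re-deriving the between-registry comparison reported above (hedged: the
# abstract does not say which test was used; Fisher's exact test is assumed).
from scipy.stats import fisher_exact

# ISRCTN: 13/18 records reported the NI design; ClinicalTrials.gov: 22/69.
table_design = [[13, 18 - 13], [22, 69 - 22]]
odds_ratio, p = fisher_exact(table_design)
print(f"NI design reporting, ISRCTN vs ClinicalTrials.gov: P = {p:.3f}")
```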

Conclusion

Most registry records of noninferiority trials do not mention the noninferiority design and do not include the noninferiority margin. The registration of noninferiority trials is unsatisfactory and must be improved.

2.
Noninferiority trials have recently gained importance in the clinical testing of drugs and medical devices. In these trials, most statistical methods have been used from a frequentist perspective, and historical data have been used only to specify the noninferiority margin Δ > 0. In contrast, Bayesian methods, which have been studied recently, are advantageous in that they can use historical data to specify prior distributions and are expected to enable more efficient decision making than frequentist methods by borrowing information from historical trials. In noninferiority trials for response probabilities π₁ and π₂, Bayesian methods evaluate the posterior probability that H₁: π₁ > π₂ − Δ is true. Numerical calculation of this posterior probability has required the complicated Appell hypergeometric function or approximation methods. Further, the theoretical relationship between Bayesian and frequentist methods has been unclear. In this work, we give the exact expression of the posterior probability of noninferiority under some mild conditions and propose a Bayesian noninferiority test framework that can flexibly incorporate historical data by using the conditional power prior. Further, we show the relationship between the Bayesian posterior probability and the P value of the Fisher exact test. From this relationship, our method can be interpreted as the Bayesian noninferiority extension of the Fisher exact test, and we can treat superiority and noninferiority in the same framework. Our method is illustrated through Monte Carlo simulations evaluating its operating characteristics, an application to real HIV clinical trial data, and a sample size calculation using historical data.
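A minimal Monte Carlo sketch of the quantity this abstract centers on, the posterior probability of H₁: π₁ > π₂ − Δ, assuming independent Beta posteriors. The paper derives an exact expression and a conditional power prior for historical data; here a flat Beta(1, 1) prior stands in, and all trial counts are invented.

```python
# Posterior probability of noninferiority for two response probabilities,
# assuming flat Beta(1, 1) priors and illustrative counts (not from the paper).
import numpy as np

rng = np.random.default_rng(0)
delta = 0.10                      # noninferiority margin
x1, n1 = 82, 100                  # experimental arm: responders / patients
x2, n2 = 85, 100                  # active control arm

pi1 = rng.beta(1 + x1, 1 + n1 - x1, size=1_000_000)
pi2 = rng.beta(1 + x2, 1 + n2 - x2, size=1_000_000)
post_prob = np.mean(pi1 > pi2 - delta)   # P(H1 | data)
print(f"Posterior probability of noninferiority: {post_prob:.3f}")
# Declare noninferiority if this exceeds a prespecified threshold, e.g. 0.975.
```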

3.
Ng TH. Statistics in Medicine 2008;27(26):5392-5406.
Ng (Drug Inf. J. 1993; 27:705-719; Drug Inf. J. 2001; 35:1517-1527) proposed that the noninferiority (NI) margin should be a small fraction of the therapeutic effect of the active control relative to placebo when testing the NI hypothesis of the mean difference with a continuous outcome. For testing the NI hypothesis of the mean ratio with a continuous outcome, a similar NI margin on the log scale is proposed. This approach may also be applied when testing NI hypotheses for survival data based on hazard ratios. Some pitfalls of testing NI hypotheses with binary endpoints based on the difference or the ratio of proportions are discussed. Testing the NI hypothesis with binary endpoints based on the odds ratio is proposed.
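A short worked sketch of the margin rule summarized above: take the NI margin to be a small fraction ε of the control-versus-placebo effect, working on the log scale for ratio-based endpoints. All numbers are illustrative, not from the cited papers.

```python
# Ng-style noninferiority margins: a fraction of the control effect,
# on the log scale for ratio endpoints. Illustrative numbers only.
import math

epsilon = 0.2                      # fraction of the control effect to give up

# Mean difference: control beats placebo by 8 points, so the margin is 1.6.
effect_diff = 8.0
margin_diff = epsilon * effect_diff

# Hazard ratio: control-vs-placebo HR of 0.60; the margin on the log scale is
# epsilon * |log 0.60|, back-transformed to a hazard-ratio threshold.
hr_control_vs_placebo = 0.60
log_margin = epsilon * abs(math.log(hr_control_vs_placebo))
hr_margin = math.exp(log_margin)   # NI fails if upper CI of HR(exp vs ctrl) exceeds this
print(margin_diff, round(hr_margin, 3))
```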

4.
To maintain the interpretability of the effect of an experimental treatment (EXP) obtained from a noninferiority trial, current statistical approaches often require the constancy assumption. This assumption typically requires that the control treatment effect in the population of the active control trial is the same as its effect in the population of the historical trial. To prevent violation of the constancy assumption, clinical trial sponsors have been advised to ensure that the design of the active control trial is as close as possible to the design of the historical trial. However, these rigorous requirements are rarely fulfilled in practice. The inevitable discrepancies between the historical trial and the active control trial have led to debates on many controversial issues. Without support from a well-developed quantitative method to determine the impact of the discrepancies on the violation of the constancy assumption, a correct judgment is difficult. In this paper, we present a covariate-adjusted generalized linear regression approach to achieve two goals: (1) to quantify the impact of population differences between the historical trial and the active control trial on the degree of constancy assumption violation and (2) to redefine the active control treatment effect in the active control trial population if the quantification suggests an unacceptable violation. Through goal (1), we examine whether a population difference leads to an unacceptable violation. Through goal (2), we redefine the noninferiority margin if the violation is unacceptable. This approach allows us to correctly determine the effect of EXP in the noninferiority trial population when the constancy assumption is violated because of a population difference. We illustrate the covariate-adjustment approach through a case study. Copyright © 2010 John Wiley & Sons, Ltd.
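A hedged sketch of the covariate-adjustment idea: fit a GLM for the control-versus-placebo effect on historical-trial data, then standardize the predictions to the covariate distribution of the new trial population. The data, model form, and variable names below are invented for illustration; the paper's actual model and case study may differ.

```python
# Standardizing a historical control effect to a new trial population
# (g-computation with a logistic GLM; simulated data, assumed model).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
age = rng.normal(60, 10, n)                       # prognostic covariate
ctrl = rng.integers(0, 2, n)                      # 1 = active control, 0 = placebo
logit = -4.0 + 0.05 * age + 0.8 * ctrl            # historical-trial outcome model
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([age, ctrl]))
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()

# New trial population is older on average, so constancy may be violated.
age_new = rng.normal(68, 10, n)

def std_risk(t):
    """Average predicted risk in the new population under treatment t."""
    Xn = sm.add_constant(np.column_stack([age_new, np.full(n, t)]), has_constant="add")
    return fit.predict(Xn).mean()

effect_new = std_risk(1) - std_risk(0)            # redefined control effect
print(f"Standardized control-vs-placebo risk difference: {effect_new:.3f}")
```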

5.
Objectives

A concern that noninferiority (NI) trials pose a risk of degradation of treatment effects is prevalent. We therefore aimed to determine the fraction of positive true effects (the superiority rate) and the average true effect of current NI trials, based on data from registered NI trials.

Study Design and Setting

All NI trials carried out between 2000 and 2007 that analyzed the NI of efficacy as the primary objective and were registered in one of the two major clinical trial registries were studied. Having retrieved results from these trials, random effects modeling of the effect estimates was performed to determine the distribution of true effects.

Results

Effect estimates were available for 79 of the 99 eligible trials identified. For trials with a binary outcome, we estimated a superiority rate of 49% (95% confidence interval = 27-70%) and a mean true log odds ratio of −0.005 (−0.112, 0.102). For trials with a continuous outcome, the superiority rate was 58% (41-74%) and the mean true effect (Cohen's d) was 0.06 (−0.064, 0.192).

Conclusions

The unanticipated finding of a positive average true effect and superiority of the new treatment in most NI trials suggests that the current practice of choosing NI designs in clinical trials makes degradation unlikely on average. However, the distribution of true treatment effects demonstrates that, in some NI trials, the new treatment is distinctly inferior.
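A sketch of the random-effects step described above: given per-trial effect estimates and standard errors, estimate the mean true effect and the fraction of true effects above zero (the superiority rate), assuming normally distributed true effects. DerSimonian-Laird estimation is used here as a simple stand-in for the paper's model, and the inputs are invented.

```python
# Random-effects meta-analysis and superiority rate (DL stand-in; toy inputs).
import numpy as np
from scipy.stats import norm

y = np.array([0.10, -0.05, 0.20, 0.02, -0.12, 0.15])   # effect estimates (log OR)
se = np.array([0.08, 0.10, 0.12, 0.07, 0.09, 0.11])    # their standard errors

w = 1 / se**2
mu_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - mu_fixed) ** 2)
k = len(y)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1 / (se**2 + tau2)
mu_re = np.sum(w_re * y) / np.sum(w_re)              # mean true effect
if tau2 > 0:                                         # share of true effects > 0
    superiority_rate = 1 - norm.cdf(0, loc=mu_re, scale=np.sqrt(tau2))
else:
    superiority_rate = float(mu_re > 0)
print(f"mean true effect {mu_re:.3f}, tau^2 {tau2:.4f}, superiority rate {superiority_rate:.2f}")
```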

6.
For regulatory approval of a new drug, the United States Code of Federal Regulations (CFR) requires 'substantial evidence' from 'adequate and well-controlled investigations'. This requirement is interpreted in Food and Drug Administration guidance as the need for 'at least two adequate and well-controlled studies, each convincing on its own to establish effectiveness'. The guidance also emphasizes the need for 'independent substantiation of experimental results from multiple studies'. However, several authors have noted the loss of independence between two noninferiority trials that use the same set of historical data to make inferences, raising questions about whether the CFR requirement is met in noninferiority trials through current practice. In this article, we first propose a statistical interpretation of the CFR requirement in terms of trial-level and overall type I error rates, which captures the essence of the requirement and can be operationalized for noninferiority trials. We next examine four typical regulatory settings in which the proposed requirement may or may not be fulfilled by existing methods of analysis (fixed margin and synthesis). In situations where the criteria are not met, we then propose adjustments to the existing methods. As illustrated with several examples, our results and findings can be helpful in designing and analyzing noninferiority trials in a way that is both compliant with the regulatory interpretation of the CFR requirement and reasonably powerful. Copyright © 2012 John Wiley & Sons, Ltd.
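A simulation sketch of the loss-of-independence phenomenon this abstract raises, built on a synthesis-type statistic; it is my own illustration, not the paper's proposed adjustments, and all scales, variances, and the retention fraction are invented. Two NI trials reusing the same historical estimate have correlated test statistics, so the chance that both falsely succeed exceeds the naive 0.025².

```python
# Overall vs trial-level type I error for two NI trials sharing historical data.
import numpy as np

rng = np.random.default_rng(2)
n_sim = 400_000
ctrl_effect = 1.0                    # true control-vs-placebo effect
lam = 0.5                            # fraction of the effect to be retained
se_h, se_t = 0.25, 0.25
z_crit = 1.96

h = rng.normal(ctrl_effect, se_h, n_sim)       # shared historical estimate
# Null: each test drug retains exactly lam of the control effect, so the
# true test-minus-control difference is (lam - 1) * ctrl_effect.
d1 = rng.normal((lam - 1) * ctrl_effect, se_t, n_sim)
d2 = rng.normal((lam - 1) * ctrl_effect, se_t, n_sim)
den = np.sqrt(se_t**2 + (1 - lam) ** 2 * se_h**2)
win1 = (d1 + (1 - lam) * h) / den > z_crit     # synthesis statistic, trial 1
win2 = (d2 + (1 - lam) * h) / den > z_crit     # same h reused in trial 2
print(f"per-trial type I error ~ {win1.mean():.4f} (nominal 0.025)")
print(f"P(both succeed) = {(win1 & win2).mean():.5f} vs independent 0.025^2 = {0.025**2:.5f}")
```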

7.
When a new treatment regimen is expected to have efficacy comparable to, or slightly worse than, that of the control regimen but to offer benefits in other domains such as safety and tolerability, a noninferiority (NI) trial may be appropriate; such trials, however, are fraught with the difficulty of justifying an acceptable NI margin based on both clinical and statistical input. To overcome this, we propose utilizing composite risk-benefit outcomes that combine elements from the domains of importance (e.g., efficacy, safety, and tolerability). The composite outcome itself may be analyzed in a superiority framework, or it can be used at the design stage of an NI trial as a tool for selecting an NI margin for efficacy that balances changes in risks and benefits. In the latter case, the choice of NI margin may be based on a novel quantity called the maximum allowable decrease in efficacy (MADE), defined as the marginal difference in efficacy between arms that would yield a null treatment effect for the composite outcome given an assumed distribution for that outcome. We observe that MADE: (1) is larger when the safety improvement in the experimental arm is larger, (2) depends on the association between the efficacy and safety outcomes, and (3) depends on the control arm efficacy rate. We use a numerical example to compare the power of a superiority test for the composite outcome versus a noninferiority test for efficacy using the MADE as the NI margin, and we apply the methods to a TB treatment trial.
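A heavily hedged sketch of one way to operationalize MADE for binary efficacy and safety outcomes, defining composite success as "efficacy success and no safety event" and assuming independence of the two components; the paper models their association explicitly, so this is only a simplified illustration with invented rates.

```python
# MADE under an independence assumption (illustrative only; the paper's
# definition accounts for efficacy-safety association).
p_eff_ctrl = 0.85       # control arm efficacy rate
p_tox_ctrl = 0.30       # control arm safety-event rate
p_tox_exp = 0.15        # experimental arm safety-event rate (an improvement)

# Null composite effect: p_eff_exp * (1 - p_tox_exp) = p_eff_ctrl * (1 - p_tox_ctrl)
p_eff_exp_null = p_eff_ctrl * (1 - p_tox_ctrl) / (1 - p_tox_exp)
made = p_eff_ctrl - p_eff_exp_null   # maximum allowable decrease in efficacy
print(f"MADE = {made:.3f}")          # larger safety gains allow a larger margin
```

Consistent with observation (1) above, setting p_tox_exp closer to p_tox_ctrl in this sketch shrinks MADE toward zero.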

8.
Non-inferiority trials, which aim to demonstrate that a test product is not worse than a competitor by more than a pre-specified small amount, are of great importance to the pharmaceutical community. Methodology for designing and analyzing such trials is therefore required, and developing new methods for their analysis is an important area of statistical research. The three-arm trial consists of a placebo, a reference, and an experimental treatment, and it simultaneously tests the superiority of the reference over the placebo while comparing the reference with the experimental treatment. In this paper, we consider the analysis of non-inferiority trials using Bayesian methods that incorporate both parametric and semi-parametric models. The resulting testing approach is both flexible and robust. The benefit of the proposed Bayesian methods is assessed via simulation, based on a study examining home-based blood pressure interventions.
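A Monte Carlo sketch of a Bayesian analysis for the three-arm design above in its simplest parametric (normal) case: with vague priors, the posteriors for the arm means are approximately normal around the sample means, and we check both that the reference beats placebo and that the experimental treatment retains a fraction θ of the reference effect. The semi-parametric extension the paper develops is not attempted, and all inputs are invented.

```python
# Three-arm non-inferiority via posterior probabilities (normal approximation).
import numpy as np

rng = np.random.default_rng(3)
theta = 0.8                                  # fraction of effect to retain
# (mean, sd, n) per arm: experimental, reference, placebo
arms = {"E": (9.5, 4.0, 120), "R": (10.0, 4.0, 120), "P": (6.0, 4.0, 60)}
post = {a: rng.normal(m, s / np.sqrt(n), 500_000) for a, (m, s, n) in arms.items()}

p_ref_sup = np.mean(post["R"] > post["P"])   # assay sensitivity check
p_noninf = np.mean(post["E"] - post["P"] > theta * (post["R"] - post["P"]))
print(f"P(reference > placebo) = {p_ref_sup:.3f}")
print(f"P(retention of effect > {theta}) = {p_noninf:.3f}")
```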

9.
A more powerful exact test of noninferiority from binary matched-pairs data
Assessing the therapeutic noninferiority of one medical treatment compared with another is often based on the difference in response rates from a matched binary pairs design. This paper develops a new exact unconditional test for noninferiority that is more powerful than available alternatives. There are two new elements presented in this paper. First, we introduce the likelihood ratio statistic as an alternative to the previously proposed score statistic of Nam (Biometrics 1997; 53:1422-1430). Second, we eliminate the nuisance parameter by estimation followed by maximization, as an alternative to the partial maximization of Berger and Boos (J. Am. Stat. Assoc. 1994; 89:1012-1016) or traditional full maximization. Based on an extensive numerical study, we recommend tests based on the score statistic, with the nuisance parameter controlled by estimation followed by maximization.
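For orientation only, here is an asymptotic Wald-type noninferiority test for matched pairs, not the exact unconditional likelihood-ratio test the paper develops. With discordant counts n10 (new succeeds, standard fails) and n01 (the reverse), the difference in response rates is (n10 − n01)/n; all counts are illustrative.

```python
# Asymptotic Wald sketch for paired noninferiority (not the paper's exact test).
import math
from scipy.stats import norm

n = 200
n10, n01 = 18, 24           # discordant pairs
delta = 0.10                # noninferiority margin

d_hat = (n10 - n01) / n
p10, p01 = n10 / n, n01 / n
se = math.sqrt((p10 + p01 - d_hat**2) / n)   # variance of a paired difference
z = (d_hat + delta) / se                     # H0: difference <= -delta
print(f"z = {z:.2f}, one-sided p = {1 - norm.cdf(z):.4f}")
```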

10.
Conventional phase II trials using binary endpoints as early indicators of a time-to-event outcome are not always feasible. Uveal melanoma has no reliable intermediate marker of efficacy. In pancreatic cancer and viral clearance, the time to the event of interest is short, making an early indicator unnecessary. In the latter application, Weibull models have been used to analyse the corresponding time-to-event data. Bayesian sample size calculations are presented for single-arm and randomised phase II trials assuming proportional hazards models for time-to-event endpoints. Special consideration is given to the case where survival times follow the Weibull distribution. The proposed methods are demonstrated through an illustrative trial based on uveal melanoma patient data. A procedure for prior specification based on knowledge or predictions of survival patterns is described. This enables investigation of the choice of allocation ratio in the randomised setting, to assess whether a control arm is indeed required. The Bayesian framework enables sample sizes consistent with those used in practice to be obtained. When a confirmatory phase III trial will follow if suitable evidence of efficacy is identified, Bayesian approaches are less controversial than for definitive trials. In the randomised setting, a compromise for obtaining feasible sample sizes is a loss of certainty in the specified hypotheses: the Bayesian counterpart of power. However, this approach may still be preferable to running a single-arm trial in which no data are collected on the control treatment. This dilemma is present in most phase II trials, where resources are not sufficient to conduct a definitive trial. Copyright © 2015 John Wiley & Sons, Ltd.
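A sketch of the Bayesian sample-size reasoning described above, reduced to an exponential (constant-hazard) special case of the Weibull: draw the true log hazard ratio from a prior built on predicted survival patterns, simulate a randomised phase II trial, and count how often the trial meets its success criterion (assurance, the Bayesian counterpart of power). All inputs and the 1:1 allocation are invented; the paper studies the allocation ratio choice explicitly.

```python
# Assurance by simulation for a randomised phase II survival trial
# (exponential special case, no censoring; illustrative inputs).
import numpy as np

rng = np.random.default_rng(4)
n_per_arm, n_sim = 60, 5000
prior_mean, prior_sd = np.log(0.7), 0.2     # prior for the true log HR
lam_ctrl = 0.05                             # control hazard (events/month)

successes = 0
for _ in range(n_sim):
    log_hr = rng.normal(prior_mean, prior_sd)
    t_ctrl = rng.exponential(1 / lam_ctrl, n_per_arm)
    t_exp = rng.exponential(1 / (lam_ctrl * np.exp(log_hr)), n_per_arm)
    # Exponential MLE of the log HR; variance ~ 1/d per arm with d events.
    est = np.log((n_per_arm / t_exp.sum()) / (n_per_arm / t_ctrl.sum()))
    se = np.sqrt(1 / n_per_arm + 1 / n_per_arm)
    successes += (est + 1.645 * se) < 0      # one-sided 0.05 evidence of benefit
print(f"Assurance = {successes / n_sim:.2f}")
```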

11.
Sequential analysis is a statistical way of analysing cumulative data. Its goal is to reach a decision as soon as enough evidence has accumulated for one or another hypothesis. In this article, three different statistical approaches (the frequentist, the Bayesian, and the likelihood approach) are discussed in relation to sequential analysis. In particular, the less well-known likelihood approach is elucidated.
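A small sketch of the sequential idea described above: Wald's sequential probability ratio test for a Bernoulli response rate accumulates the log likelihood ratio observation by observation and stops as soon as a boundary is crossed. Rates and error targets are illustrative.

```python
# Wald SPRT for a Bernoulli rate: stop as soon as evidence suffices.
import math
import random

p0, p1 = 0.5, 0.7                  # null and alternative response rates
alpha, beta = 0.05, 0.10
upper = math.log((1 - beta) / alpha)      # accept H1 above this
lower = math.log(beta / (1 - alpha))      # accept H0 below this

random.seed(5)
llr, n = 0.0, 0
while lower < llr < upper:
    x = 1 if random.random() < 0.7 else 0   # true rate = p1 in this run
    llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
    n += 1
print("decision:", "H1" if llr >= upper else "H0", f"after {n} observations")
```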

12.
Information from historical trials is important for the design, interim monitoring, analysis, and interpretation of clinical trials. Meta-analytic models can be used to synthesize the evidence from historical data, which are often available only in aggregate form. We consider evidence synthesis methods for trials with recurrent event endpoints, which are common in many therapeutic areas. Such endpoints are typically analyzed by negative binomial regression. However, the individual patient data necessary to fit such a model are usually unavailable for historical trials reported in the medical literature. We describe approaches for back-calculating model parameter estimates and their standard errors from available summary statistics with various techniques, including approximate Bayesian computation. We propose to use a quadratic approximation to the log-likelihood for each historical trial based on two independent terms for the log mean rate and the log of the dispersion parameter. A Bayesian hierarchical meta-analysis model then provides the posterior predictive distribution for these parameters. Simulations show that this approach with back-calculated parameter estimates yields inference very similar to using parameter estimates from individual patient data as input. We illustrate how to design and analyze a new randomized placebo-controlled exacerbation trial in severe eosinophilic asthma using data from 11 historical trials.
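A sketch of the back-calculation step described above: recover a log mean rate and its standard error from the summary statistics a publication typically reports (here an annualised event rate with a 95% CI), then express that trial's evidence as a quadratic (normal) log-likelihood term. The reported numbers are invented, and the dispersion parameter would get the same treatment on its own log scale.

```python
# Back-calculating a log rate and SE from a reported rate and 95% CI.
import math

rate, lo, hi = 0.90, 0.75, 1.08        # reported rate and 95% CI, events/year
log_rate = math.log(rate)
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # CI width on the log scale

def approx_loglik(theta):
    """Quadratic approximation to this trial's log-likelihood in theta = log rate."""
    return -0.5 * ((theta - log_rate) / se) ** 2

# One (log_rate, se) pair per historical trial then feeds the Bayesian
# hierarchical meta-analysis model.
print(f"log rate {log_rate:.3f}, SE {se:.4f}, loglik at 0: {approx_loglik(0.0):.2f}")
```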

13.
This paper addresses statistical issues in non-inferiority trials where the primary outcome is a fatal event. The investigations are inspired by a recent Food and Drug Administration (FDA) draft guideline on treatments for nosocomial pneumonia. The non-inferiority margin suggested in this guideline for the endpoint of all-cause mortality is defined on different distance measures (rate difference and odds ratio) and is discontinuous. Furthermore, the margin permits considerable power for the statistical proof of non-inferiority at alternatives that might be regarded as clinically unacceptable, that is, even if the experimental treatment is harmful compared with the control. We investigated the appropriateness of the proposed non-inferiority margin as well as the performance of possible test statistics to be used for the analysis. A continuous variant of the margin proposed in the FDA guideline, together with the unconditional exact test according to Barnard, showed favorable characteristics with respect to type I error rate control and power. To prevent harmful new treatments from being declared non-inferior, we propose adding a 'second hurdle'. We discuss examples and explore power characteristics when requiring both statistical significance and clearance of the second hurdle. Copyright © 2012 John Wiley & Sons, Ltd.
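A sketch of an unconditional exact non-inferiority test of the Barnard type mentioned above, for a rate difference with margin δ: compute a Wald statistic with the margin built in, then take the p-value as the supremum of its exceedance probability over the null boundary. The small counts are invented, and the FDA-guideline margin itself is not reproduced here.

```python
# Unconditional exact test for H0: p1 - p2 <= -delta (p1 = experimental).
import numpy as np
from scipy.stats import binom

def z_stat(x1, n1, x2, n2, delta):
    p1, p2 = x1 / n1, x2 / n2
    var = p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2
    if var <= 0:
        return np.inf if (p1 - p2 + delta) > 0 else -np.inf
    return (p1 - p2 + delta) / np.sqrt(var)

def exact_unconditional_p(x1, n1, x2, n2, delta, grid=1000):
    z_obs = z_stat(x1, n1, x2, n2, delta)
    xs1, xs2 = np.arange(n1 + 1), np.arange(n2 + 1)
    Z = np.array([[z_stat(a, n1, b, n2, delta) for b in xs2] for a in xs1])
    reject = Z >= z_obs                       # outcomes at least as extreme
    p_max = 0.0
    for p2 in np.linspace(delta, 1.0, grid):  # null boundary: p1 = p2 - delta
        pr = np.outer(binom.pmf(xs1, n1, p2 - delta), binom.pmf(xs2, n2, p2))
        p_max = max(p_max, pr[reject].sum())
    return p_max

print(f"exact p = {exact_unconditional_p(42, 50, 45, 50, 0.10):.4f}")
```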

14.
In noninferiority (NI) trials, an ongoing methodological challenge is how to handle, in the analysis, subjects who are nonadherent to their assigned treatment. Some investigators perform the intent-to-treat (ITT) analysis as the primary analysis and the per-protocol (PP) analysis as a sensitivity analysis, whereas others do the reverse, since ITT results may be anticonservative in the NI setting. But even when there is agreement between the ITT and PP approaches, NI of the experimental therapy to the comparator is not guaranteed. We propose that a tipping point method be used to further assess the impact of nonadherence on the results of an NI trial. In this approach, data from the nonadherers obtained after treatment discontinuation are not used, and their outcomes under the counterfactual situation of complete adherence are considered missing. The tipping point analysis indicates how sensitive the NI trial results are to the values of these missing counterfactual outcomes. The advantages of this approach are that a model or mechanism for the missing outcomes does not have to be assumed, and all subjects who were randomized are included in the analysis. We consider both binary and continuous outcomes and propose extensions to accommodate different types of nonadherence. The methods are illustrated with examples from two NI trials, one evaluating different doses of radiation therapy for painful bone metastases and the other comparing treatments for reducing depression in adolescents.
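A sketch of the tipping-point idea above for a binary NI endpoint: the nonadherers' counterfactual outcomes are treated as missing, every possible split of successes among them is imputed, and the NI test is redone to find where the conclusion flips. Counts, margin, and the simple Wald test are all illustrative.

```python
# Tipping point grid for a binary noninferiority endpoint (toy inputs).
import math

n1, x1_obs, m1 = 150, 120, 10    # exp arm: randomized, observed successes, nonadherers
n2, x2_obs, m2 = 150, 125, 12    # control arm
delta = 0.10
z_crit = 1.96

def ni_shown(x1, x2):
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2) - z_crit * se > -delta   # lower CI bound above -delta

print("rows: imputed exp successes; cols: imputed control successes")
for s1 in range(m1 + 1):
    row = ["NI" if ni_shown(x1_obs + s1, x2_obs + s2) else "--" for s2 in range(m2 + 1)]
    print(f"{s1:2d} " + " ".join(row))
# The boundary between "NI" and "--" cells is the tipping point: if it lies in
# an implausible corner of the grid, the NI conclusion is robust to nonadherence.
```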

15.
Chu H, Nie L, Kensler TW. Statistics in Medicine 2008;27(13):2497-2508.
Often in randomized clinical trials and in observational studies in occupational and environmental health, a non-negative, continuously distributed response variable denoting some metabolite of an environmental toxicant is measured in treatment and control groups. When observations occur in both unexposed and exposed subjects, the biomarker measurement can be bimodally distributed, with an extra spike at zero reflecting those unexposed. In the presence of left censoring due to values falling below biomarker assay detection limits, the unexposed with true zeros are indistinguishable from the exposed with left-censored values. Since interventions usually do not enhance or eliminate exposure, they have no impact on those unexposed. Thus, only the subset of individuals who are exposed should be used in comparisons estimating the effect of interventions. In this article, we present Bayesian approaches using non-standard mixture distributions to account for true zeros. The performance of the proposed Bayesian methods is compared with the maximum likelihood methods presented in Chu et al. (Stat. Med. 2005; 24:2053-2067) through simulation studies and a randomized chemoprevention trial conducted in Qidong, People's Republic of China.
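A sketch of the likelihood that both the Bayesian approach above and the maximum likelihood comparator of Chu et al. build on: a point mass at zero for the unexposed (probability p) mixed with a lognormal for the exposed, with values below the detection limit (DL) left-censored so that true zeros and censored exposed values contribute through the same term. The data are simulated and the lognormal choice is an assumption for illustration.

```python
# MLE for a zero-spike lognormal mixture with left censoring at a detection limit.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(6)
n, p_true, mu_true, sd_true, DL = 500, 0.3, 0.0, 1.0, 0.4
exposed = rng.random(n) > p_true
vals = np.where(exposed, rng.lognormal(mu_true, sd_true, n), 0.0)
detected = vals[vals >= DL]
n_nondetect = int((vals < DL).sum())        # true zeros + censored exposed

def negloglik(par):
    p, mu, log_sd = par
    sd = np.exp(log_sd)
    if not 0 < p < 1:
        return np.inf
    # Nondetects: true zero, or exposed but below DL.
    ll_nd = n_nondetect * np.log(p + (1 - p) * norm.cdf((np.log(DL) - mu) / sd))
    # Detects: exposed, lognormal density.
    z = (np.log(detected) - mu) / sd
    ll_d = np.sum(np.log(1 - p) + norm.logpdf(z) - np.log(sd * detected))
    return -(ll_nd + ll_d)

fit = minimize(negloglik, x0=[0.5, 0.1, 0.0], method="Nelder-Mead")
print("p, mu, sd =", fit.x[0], fit.x[1], np.exp(fit.x[2]))
```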

16.
Conventional practice monitors accumulating information about drug safety in terms of the numbers of adverse events reported from trials in a drug development program. Estimates of between-treatment adverse event risk differences can be obtained readily from unblinded trials, with adjustment for differences among trials using conventional statistical methods. Recent regulatory guidelines require monitoring the cumulative frequency of adverse event reports to identify possible between-treatment adverse event risk differences without unblinding ongoing trials. Conventional statistical methods for assessing between-treatment adverse event risks cannot be applied when the trials are blinded. However, CUSUM charts can be used to monitor the accumulation of adverse event occurrences. CUSUM charts for monitoring adverse event occurrence in a Bayesian paradigm are based on assumptions about the process generating the adverse event counts in a trial, as expressed by informative prior distributions. This article describes the construction of control charts for monitoring adverse event occurrence based on statistical models for these processes, characterizes their statistical properties, and describes how to construct useful prior distributions. Application of the approach to two adverse events of interest in a real trial gave nearly identical results for binomial and Poisson observed event count likelihoods. Copyright © 2016 John Wiley & Sons, Ltd.
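A sketch of a CUSUM chart for blinded adverse event monitoring in its basic frequentist form: accumulate Poisson log-likelihood ratios for an in-control pooled rate λ0 against an elevated rate λ1, and signal when the sum crosses a threshold h. The Bayesian version described above replaces the fixed rates with informative priors; the rates and threshold here are illustrative.

```python
# Poisson CUSUM for pooled (blinded) adverse event counts per interval.
import math
import random

lam0, lam1, h = 2.0, 3.5, 5.0      # in-control rate, alert rate, threshold
random.seed(7)

def poisson(lam):
    # Simple Poisson sampler (Knuth) to keep the sketch dependency-free.
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

s = 0.0
for t in range(1, 101):
    x = poisson(2.8)               # true blinded rate, mildly elevated
    # Log-likelihood ratio increment: x*log(lam1/lam0) - (lam1 - lam0).
    s = max(0.0, s + x * math.log(lam1 / lam0) - (lam1 - lam0))
    if s > h:
        print(f"signal at interval {t}: cumulative evidence of excess AEs")
        break
```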

17.
Background: The existence of effective reference treatments means that the superior therapeutic efficacy of new treatments is less marked and thus more difficult to demonstrate statistically. Moreover, the potential value of a new treatment also rests on other criteria, such as cost, ease of use, noninvasiveness, and immediate or long-term side effects. In this context, the methodological issue becomes one of demonstrating the equivalence or noninferiority of the new treatment in comparison with an existing, high-performance reference treatment.

Methods: In the present work, we reexamine the statistical rationale and methodological features of equivalence and noninferiority trials.

Results: We address the choice of the equivalence margin, the construction of hypotheses, and the different approaches to establishing equivalence (hypothesis testing and confidence intervals). We then discuss key aspects of equivalence trial design and the important methodological quality criteria involved in performing such studies: choice of the reference treatment, subject eligibility criteria, primary endpoint, study population, and the required sample size. Lastly, we consider the possibility of adopting a new analytical strategy (noninferiority/superiority).

Conclusion: A checklist of items to include when reporting the results of randomized controlled trials (the CONSORT recommendations, Consolidated Standards of Reporting Trials) has been adapted for use in noninferiority and equivalence randomized controlled trials.
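A minimal sketch of the confidence-interval approach to noninferiority discussed in this review: conclude noninferiority when the lower bound of the two-sided 95% CI for the difference (new minus reference) stays above the prespecified margin −Δ. Rates and margin are illustrative.

```python
# Confidence-interval approach to a binary noninferiority comparison.
import math

x_new, n_new = 170, 200
x_ref, n_ref = 176, 200
delta = 0.10

p_new, p_ref = x_new / n_new, x_ref / n_ref
se = math.sqrt(p_new * (1 - p_new) / n_new + p_ref * (1 - p_ref) / n_ref)
lower = (p_new - p_ref) - 1.96 * se
print(f"lower 95% bound = {lower:.3f};", "noninferior" if lower > -delta else "not shown")
# The noninferiority/superiority switch the review mentions is allowed when
# this same interval lies entirely above zero.
```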

18.
Two different approaches have been proposed for establishing the efficacy of an experimental therapy through a non-inferiority trial: the fixed-margin approach involves first defining a non-inferiority margin and then demonstrating that the experimental therapy is not worse than the control by more than this amount, while the synthesis approach involves combining the data from the non-inferiority trial with the data from historical trials evaluating the effect of the control. In this paper, we introduce a unified approach that includes both of these approaches as special cases and show how its parameters can be selected to control the unconditional type I error rate in the presence of departures from the assumptions of assay sensitivity and constancy. It is shown that the fixed-margin approach can be extremely inefficient and that it is always possible to achieve equivalent control of the unconditional type I error rate, with higher power, by using an appropriately chosen synthesis method.
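A sketch of the two approaches unified above, on a mean-difference scale with normal approximations. The fixed-margin ("95-95") version discounts the historical effect to its lower confidence bound and keeps a fraction of it as the margin; the synthesis version combines the two estimates into one statistic. All numbers are invented.

```python
# Fixed-margin vs synthesis tests for non-inferiority (illustrative inputs).
import math
from scipy.stats import norm

d_nc, se_nc = -0.30, 0.45        # new minus control, from the NI trial
d_cp, se_cp = 2.00, 0.40         # control minus placebo, from historical trials
lam = 0.5                        # fraction of the control effect to retain

# Fixed margin: keep fraction (1 - lam) of the discounted historical effect.
margin = (1 - lam) * (d_cp - 1.96 * se_cp)
z_fixed = (d_nc + margin) / se_nc

# Synthesis: one statistic for H0: (new - placebo) <= lam * (control - placebo).
z_synth = (d_nc + (1 - lam) * d_cp) / math.sqrt(se_nc**2 + (1 - lam) ** 2 * se_cp**2)

for name, z in [("fixed margin", z_fixed), ("synthesis", z_synth)]:
    print(f"{name}: z = {z:.2f}, one-sided p = {1 - norm.cdf(z):.4f}")
```

With these inputs the synthesis statistic is the larger of the two, consistent with the paper's finding that the fixed-margin approach can be inefficient.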

19.
In epidemiology, one approach to investigating the dependence of disease risk on an explanatory variable in the presence of several confounding variables is to fit a binary regression using a conditional likelihood, thus eliminating the nuisance parameters. When the explanatory variable is measured with error, the estimated regression coefficient is usually biased towards zero. Motivated by the need to correct for this bias in analyses that combine data from a number of case-control studies of lung cancer risk associated with exposure to residential radon, two approaches are investigated. Both employ the conditional distribution of the true explanatory variable given the measured one. The method of regression calibration uses the expected value of the true variable given the measured one as the covariate. The second approach integrates the conditional likelihood numerically by sampling from the distribution of the true explanatory variable given the measured one. The two approaches give very similar point estimates and confidence intervals, not only for the motivating example but also for an artificial data set with known properties. These results, and further simulations demonstrating correct coverage for the confidence intervals, suggest that for studies of residential radon and lung cancer the regression calibration approach will perform very well, so that nothing more sophisticated is needed to correct for measurement error.
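A sketch of the regression calibration step described above, under a classical normal measurement error model: replace the mismeasured exposure W with E[X | W] and fit the outcome model as usual. The error variance is treated as known here (in practice it comes from replicate measurements), everything is simulated, and an ordinary logistic fit stands in for the conditional likelihood used in the pooled radon analyses.

```python
# Regression calibration vs the naive fit under classical measurement error.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n, beta_true = 5000, 0.5
x = rng.normal(0, 1, n)                     # true exposure (e.g., log radon)
w = x + rng.normal(0, 0.8, n)               # measured with classical error
y = rng.binomial(1, 1 / (1 + np.exp(-(-2 + beta_true * x))))

var_u = 0.8**2                              # assumed known error variance
lam = (w.var() - var_u) / w.var()           # reliability (attenuation) ratio
x_cal = w.mean() + lam * (w - w.mean())     # E[X | W]

naive = sm.Logit(y, sm.add_constant(w)).fit(disp=0).params[1]
calib = sm.Logit(y, sm.add_constant(x_cal)).fit(disp=0).params[1]
print(f"naive {naive:.3f}, calibrated {calib:.3f}, truth {beta_true}")
```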

20.
Kim I, Pang H, Zhao H. Statistics in Medicine 2012;31(15):1633-1651.
Many statistical methods for microarray data analysis consider one gene at a time, and they may miss subtle changes at the single-gene level. This limitation may be overcome by considering a set of genes simultaneously, where the gene sets are derived from prior biological knowledge. Limited work has been carried out in the regression setting to study the effects of clinical covariates and the expression levels of genes in a pathway on either a continuous or a binary clinical outcome. Hence, we propose a Bayesian approach for identifying pathways related to both types of outcomes. We compare our Bayesian approaches with a likelihood-based approach that was developed by relating a least squares kernel machine for nonparametric pathway effects to a restricted maximum likelihood for variance components. Unlike the likelihood-based approach, the Bayesian approach allows us to estimate all parameters and pathway effects directly. It can incorporate prior knowledge into the Bayesian hierarchical model formulation and makes inference using the posterior samples without asymptotic theory. We consider several kernels (Gaussian, polynomial, and neural network kernels) to characterize the effects of gene expression in a pathway on clinical outcomes. Our simulation results suggest that the Bayesian approach has more accurate coverage probability than the likelihood-based approach, especially when the sample size is small compared with the number of genes being studied in a pathway. We demonstrate the usefulness of our approaches through their application to a type II diabetes mellitus data set. Our approaches can also be applied to other settings where a large number of strongly correlated predictors are present. Copyright © 2012 John Wiley & Sons, Ltd.
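A sketch of the kernel machinery the approach above builds on: a Gaussian kernel over the expression profiles of genes in one pathway, with the nonparametric pathway effect on a continuous outcome fitted here by kernel ridge regression as a simple frequentist stand-in for the paper's Bayesian (and REML-based) machinery. The data and the bandwidth ρ are invented.

```python
# Gaussian kernel over a pathway's expression profiles + kernel ridge fit.
import numpy as np

rng = np.random.default_rng(9)
n, n_genes, rho, alpha = 100, 20, 10.0, 1.0
Z = rng.normal(size=(n, n_genes))                  # pathway gene expression
y = np.sin(Z[:, 0]) + 0.5 * Z[:, 1] * Z[:, 2] + rng.normal(0, 0.5, n)

# Gaussian kernel: K_ij = exp(-||z_i - z_j||^2 / rho)
sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / rho)

# Kernel ridge estimate of the nonparametric pathway effect h(Z).
h_hat = K @ np.linalg.solve(K + alpha * np.eye(n), y)
print(f"correlation of fitted pathway effect with outcome: {np.corrcoef(h_hat, y)[0, 1]:.2f}")
```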

