Similar Articles
20 similar articles found.
1.
This paper considers the design and interpretation of clinical trials comparing treatments for conditions so rare that worldwide recruitment efforts are likely to yield total sample sizes of 50 or fewer, even when patients are recruited over several years. For such studies, the sample size needed to meet a conventional frequentist power requirement is clearly infeasible. Rather, the expectation of any such trial has to be limited to the generation of an improved understanding of treatment options. We propose a Bayesian approach for the conduct of rare‐disease trials comparing an experimental treatment with a control where patient responses are classified as a success or failure. A systematic elicitation from clinicians of their beliefs concerning treatment efficacy is used to establish Bayesian priors for unknown model parameters. The process of determining the prior is described, including the possibility of formally considering results from related trials. As sample sizes are small, it is possible to compute all possible posterior distributions of the two success rates. A number of allocation ratios between the two treatment groups can be considered with a view to maximising the prior probability that the trial concludes recommending the new treatment when in fact it is non‐inferior to control. Consideration of the extent to which opinion can be changed, even by data from the best feasible design, can help to determine whether such a trial is worthwhile. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.
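A minimal sketch of the design evaluation step, assuming hypothetical elicited beta priors (the paper derives its priors from a formal clinician elicitation): for a candidate allocation of the 50 patients, draw true success rates from the priors, keep only draws where the experimental arm is truly non‐inferior, and estimate the prior probability that the posterior criterion would recommend it. The priors, margin, and decision cutoff below are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical elicited beta priors (not the paper's actual elicitation).
prior_E = (3.0, 4.0)   # experimental success rate
prior_C = (6.0, 8.0)   # control success rate
delta = 0.10           # non-inferiority margin
n_total = 50

def prob_recommend(n_E, n_C, n_sim=2000, n_post=4000, cutoff=0.8):
    """Prior probability that the trial ends by recommending the
    experimental arm, i.e. posterior P(p_E > p_C - delta) > cutoff,
    restricted to prior draws where non-inferiority truly holds."""
    recommend, truly_ni = 0, 0
    for _ in range(n_sim):
        p_E = rng.beta(*prior_E)
        p_C = rng.beta(*prior_C)
        if p_E <= p_C - delta:      # condition on true non-inferiority
            continue
        truly_ni += 1
        x_E = rng.binomial(n_E, p_E)
        x_C = rng.binomial(n_C, p_C)
        post_E = rng.beta(prior_E[0] + x_E, prior_E[1] + n_E - x_E, n_post)
        post_C = rng.beta(prior_C[0] + x_C, prior_C[1] + n_C - x_C, n_post)
        if np.mean(post_E > post_C - delta) > cutoff:
            recommend += 1
    return recommend / truly_ni

# Compare allocation ratios for a fixed total of 50 patients.
for n_E in (25, 30, 35, 40):
    print(n_E, n_total - n_E, round(prob_recommend(n_E, n_total - n_E), 3))
```

Scanning the allocation as in the final loop mimics the paper's search for the ratio that maximises this probability.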

2.
Sample size estimation in clinical trials depends critically on nuisance parameters, such as variances or overall event rates, which have to be guessed or estimated from previous studies in the planning phase of a trial. Blinded sample size reestimation estimates these nuisance parameters based on blinded data from the ongoing trial and allows the sample size to be adjusted based on the acquired information. In the present paper, this methodology is developed for clinical trials with count data as the primary endpoint. In multiple sclerosis, such endpoints are commonly used in phase 2 trials (lesion counts in magnetic resonance imaging (MRI)) and in phase 3 trials (relapse counts). Sample size adjustment formulas are presented both for Poisson‐distributed data and for overdispersed Poisson‐distributed data. The latter arise from the sometimes considerable between‐patient heterogeneity that can be observed, in particular, in MRI lesion counts. The operating characteristics of the procedure are evaluated by simulations, and recommendations on how to choose the size of the internal pilot study are given. The results suggest that blinded sample size reestimation for count data maintains the required power without an increase in the type I error. Copyright © 2010 John Wiley & Sons, Ltd.
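A sketch of one common form of such sample size formulas, based on the asymptotic variance of the log rate ratio for (possibly overdispersed) Poisson counts; the paper's exact formulas and internal‐pilot rules may differ, and all function names and numbers here are illustrative.

```python
import numpy as np
from scipy.stats import norm

def n_per_arm(lam_ctrl, theta, t=1.0, kappa=0.0, alpha=0.05, power=0.9):
    """Approximate per-arm sample size for comparing two (possibly
    overdispersed) Poisson rates via the log rate ratio, 1:1 allocation.
    kappa is the overdispersion (negative-binomial shape) parameter;
    kappa = 0 recovers the pure Poisson case."""
    lam_trt = theta * lam_ctrl
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var = (1.0 / (t * lam_trt) + kappa) + (1.0 / (t * lam_ctrl) + kappa)
    return int(np.ceil(z**2 * var / np.log(theta)**2))

def blinded_reestimate(pooled_counts, t, theta, **kw):
    """Blinded reestimation: the pooled rate is estimated from blinded
    interim counts; the rate ratio assumed at the planning stage then
    splits it into arm-specific rates before reapplying the formula."""
    lam_pool = np.sum(pooled_counts) / (len(pooled_counts) * t)
    lam_ctrl = 2.0 * lam_pool / (1.0 + theta)   # 1:1 allocation
    return n_per_arm(lam_ctrl, theta, t=t, **kw)

# Planning with a guessed control rate, then adjusting at the interim.
print(n_per_arm(lam_ctrl=1.2, theta=0.7, t=2.0, kappa=0.5))
rng = np.random.default_rng(7)
interim = rng.poisson(2.0, size=60)            # blinded counts, 60 patients
print(blinded_reestimate(interim, t=2.0, theta=0.7, kappa=0.5))
```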

3.
In early‐phase clinical trials, interim monitoring is commonly conducted based on the estimated intent‐to‐treat effect, which is subject to bias in the presence of noncompliance. To address this issue, we propose a Bayesian sequential monitoring trial design based on the estimation of the causal effect using a principal stratification approach. The proposed design simultaneously considers efficacy and toxicity outcomes and utilizes covariates to predict a patient's potential compliance behavior and identify the causal effects. Based on accumulating data, we continuously update the posterior estimates of the causal treatment effects and adaptively make the go/no‐go decision for the trial. Numerical results show that the proposed method has desirable operating characteristics and addresses the issue of noncompliance. Copyright © 2015 John Wiley & Sons, Ltd.

4.
In clinical research and development, interim monitoring is critical for better decision‐making and for minimizing the risk of exposing patients to possibly ineffective therapies. For interim futility or efficacy monitoring, predictive probability methods are widely adopted in practice. These methods have been well studied for univariate variables. However, for longitudinal studies, predictive probability methods using univariate information from completers only may not be the most efficient, and data from ongoing subjects can be utilized to improve efficiency. On the other hand, leveraging information from ongoing subjects allows an interim analysis to be conducted once a sufficient number of subjects reach an earlier time point. For longitudinal outcomes, we derive closed‐form formulas for predictive probabilities, including the Bayesian predictive probability, predictive power, and conditional power, and we also give closed‐form solutions for the predictive probability of success in a future trial and the predictive probability of success of the best dose. When predictive probabilities are used for interim monitoring, we study their distributions and discuss analytical cutoff values or stopping boundaries that have the desired operating characteristics. We show that predictive probabilities utilizing all longitudinal information are more efficient for interim monitoring than those using information from completers only. To illustrate their practical application to longitudinal data, we analyze two real data examples from clinical trials.
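The paper's closed forms are for longitudinal outcomes; as a simpler illustration of the quantities involved, here is a univariate sketch of the Bayesian predictive probability and the conditional power for a one‐sample z‐test with known variance and a flat prior. All inputs are hypothetical.

```python
from math import sqrt
from scipy.stats import norm

def bayes_pred_prob(xbar1, n1, N, sigma, alpha=0.025):
    """Bayesian predictive probability that a one-sample z-test of
    H0: mu <= 0 will be significant at the final analysis, given an
    interim mean xbar1 on n1 of N subjects (flat prior, known sigma)."""
    n2 = N - n1
    crit = norm.ppf(1 - alpha) * sigma * sqrt(N)   # threshold on the final sum
    pred_mean = n1 * xbar1 + n2 * xbar1            # posterior-predictive mean of final sum
    pred_sd = sigma * sqrt(n2 * (1 + n2 / n1))     # sampling + parameter uncertainty
    return norm.sf((crit - pred_mean) / pred_sd)

def cond_power(xbar1, n1, N, sigma, mu_assumed, alpha=0.025):
    """Conditional power: the same quantity, but with mu fixed at an
    assumed value rather than integrated over the posterior."""
    n2 = N - n1
    crit = norm.ppf(1 - alpha) * sigma * sqrt(N)
    pred_mean = n1 * xbar1 + n2 * mu_assumed
    return norm.sf((crit - pred_mean) / (sigma * sqrt(n2)))

print(bayes_pred_prob(xbar1=0.35, n1=40, N=100, sigma=1.0))
print(cond_power(xbar1=0.35, n1=40, N=100, sigma=1.0, mu_assumed=0.35))
```

The predictive probability is flatter than conditional power because it propagates the posterior uncertainty about the effect into the prediction.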

5.
Recently, the Center for Drug Evaluation and Research at the Food and Drug Administration released a guidance that makes recommendations about how to demonstrate that a new antidiabetic therapy to treat type 2 diabetes is not associated with an unacceptable increase in cardiovascular risk. One of the recommendations from the guidance is that phase II and III trials should be appropriately designed and conducted so that a meta‐analysis can be performed. In addition, the guidance implies that a sequential meta‐analysis strategy could be adopted. That is, the initial meta‐analysis could aim at demonstrating that the upper bound of a 95% confidence interval (CI) for the estimated hazard ratio is < 1.8, for the purpose of enabling a new drug application or a biologics license application. Subsequently, after marketing authorization, a final meta‐analysis would need to show that the upper bound is < 1.3. In this context, we develop a new Bayesian sequential meta‐analysis approach using survival regression models to assess whether the size of a clinical development program is adequate to evaluate a particular safety endpoint. We propose a Bayesian sample size determination methodology for sequential meta‐analysis clinical trial design, with a focus on controlling the familywise type I error rate and the power. We use the partial borrowing power prior to incorporate the historical survival meta‐data into the Bayesian design. We examine various properties of the proposed methodology, and simulation‐based computational algorithms are developed to generate predictive data at various interim analyses, sample from the posterior distributions, and compute quantities such as the power and the type I error in the Bayesian sequential meta‐analysis trial design. We apply the proposed methodology to the design of a hypothetical antidiabetic drug development program for evaluating cardiovascular risk. Copyright © 2013 John Wiley & Sons, Ltd.
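A rough sketch of the borrowing mechanism, using a normal approximation to the log hazard ratio instead of the paper's full survival regression models: raising the historical likelihood to a power a0 in [0, 1] is, under this approximation, equivalent to inflating the historical variance by 1/a0. All numbers are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def posterior_loghr(loghr_cur, se_cur, loghr_hist, se_hist, a0):
    """Normal-approximation posterior for the log hazard ratio under a
    power prior: the historical likelihood is raised to the power a0,
    which inflates its variance by 1/a0 (vague initial prior)."""
    prec_hist = a0 / se_hist**2
    prec_cur = 1.0 / se_cur**2
    prec = prec_hist + prec_cur
    mean = (prec_hist * loghr_hist + prec_cur * loghr_cur) / prec
    return mean, 1.0 / np.sqrt(prec)

def prob_hr_below(bound, *args):
    """Posterior probability that the hazard ratio is below `bound`,
    e.g. the 1.8 (initial) or 1.3 (final) guidance thresholds."""
    mean, sd = posterior_loghr(*args)
    return norm.cdf((np.log(bound) - mean) / sd)

# Hypothetical numbers: historical meta-analysis HR 1.05 (SE 0.10 on the
# log scale), current program HR 1.10 (SE 0.15), borrowing weight 0.5.
print(prob_hr_below(1.8, np.log(1.10), 0.15, np.log(1.05), 0.10, 0.5))
print(prob_hr_below(1.3, np.log(1.10), 0.15, np.log(1.05), 0.10, 0.5))
```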

6.
As part of the evaluation of phase II trials, it is common practice to perform exploratory subgroup analyses with the aim of identifying patient populations with a beneficial treatment effect. When investigating targeted therapies, these subgroups are typically defined by biomarkers. Promising results may lead to the decision to select the respective subgroup as the target population for a subsequent phase III trial. However, a selection based on a large observed treatment effect may induce an upward bias, leading to over‐optimistic expectations of the success probability of the phase III trial. We describe how Approximate Bayesian Computation techniques can be used to derive a simulation‐based bias adjustment method in this situation. Recommendations for the implementation of the approach are given. Simulation studies show that the proposed method reduces bias substantially compared with the maximum likelihood estimator. The procedure is illustrated with data from an oncology trial. Copyright © 2017 John Wiley & Sons, Ltd.
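A toy version of the idea, assuming two subgroups with normally distributed effect estimates (the paper's setting and choice of summary statistics are richer): an ABC rejection sampler replays both the trial and the selection rule, so the accepted parameter draws automatically reflect the selection effect.

```python
import numpy as np

rng = np.random.default_rng(42)

# Observed phase II results (hypothetical): estimated treatment effects
# and standard errors in two biomarker-defined subgroups.
obs_est = np.array([0.45, 0.20])
se = np.array([0.15, 0.15])
sel = int(np.argmax(obs_est))          # subgroup chosen for phase III

# ABC rejection sampler: propose true effects, replay the trial and the
# selection rule, keep proposals whose simulated result matches.
n_prop, tol = 200_000, 0.03
theta = rng.normal(0.0, 0.5, size=(n_prop, 2))           # broad prior
sim_est = theta + rng.normal(0.0, se, size=(n_prop, 2))  # simulated estimates
sim_sel = np.argmax(sim_est, axis=1)
keep = (sim_sel == sel) & (np.abs(sim_est[np.arange(n_prop), sim_sel]
                                  - obs_est[sel]) < tol)
adjusted = theta[keep, sel]

print(f"naive (ML) estimate: {obs_est[sel]:.3f}")
print(f"ABC bias-adjusted:   {adjusted.mean():.3f}  "
      f"(n accepted = {keep.sum()})")
```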

7.
Slow recruitment in clinical trials leads to increased costs and resource utilization, affecting both clinic staff and patient volunteers. Careful planning and monitoring of the accrual process can prevent the unnecessary loss of these resources. We propose two hierarchical extensions to the existing Bayesian constant accrual model: the accelerated prior and the hedging prior. The proposed priors are able to adaptively combine the researcher's previous experience with the current accrual data to produce an estimate of the trial completion time. The performance of these models, including prediction precision, coverage probability, and correct decision‐making ability, is evaluated using actual studies from our cancer center and simulation. The results showed that a constant accrual model with strongly informative priors is very accurate when accrual is on target or slightly off, producing a smaller mean squared error, a high coverage percentage, and a high number of correct decisions as to whether or not to continue the trial, but it is strongly biased when accrual is off target. Flat or weakly informative priors provide protection against an off-target prior but are less efficient when accrual is on target. The accelerated prior performs similarly to a strong prior. The hedging prior performs much like the weak priors when accrual is extremely off target, but closer to the strong priors when accrual is on target or only slightly off target. We suggest improvements to these models and propose new models for future research. Copyright © 2014 John Wiley & Sons, Ltd.

8.
We consider the use of the assurance method in clinical trial planning. In the assurance method, which is an alternative to a power calculation, we calculate the probability of a clinical trial resulting in a successful outcome, via eliciting a prior probability distribution about the relevant treatment effect. This is typically a hybrid Bayesian‐frequentist procedure, in that it is usually assumed that the trial data will be analysed using a frequentist hypothesis test, so that the prior distribution is only used to calculate the probability of observing the desired outcome in the frequentist test. We argue that assessing the probability of a successful clinical trial is a useful part of the trial planning process. We develop assurance methods to accommodate survival outcome measures, assuming both parametric and nonparametric models. We also develop prior elicitation procedures for each survival model so that the assurance calculations can be performed more easily and reliably. We have made free software available for implementing our methods. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.
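A minimal sketch of the assurance calculation for a normal endpoint (the paper's focus is survival outcomes): average the frequentist power curve over prior draws of the treatment effect. All numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def assurance(n_per_arm, sigma, prior_mean, prior_sd,
              alpha=0.025, n_sim=100_000):
    """Assurance: the probability of a significant frequentist result,
    averaging the power curve over a normal prior on the true effect."""
    delta = rng.normal(prior_mean, prior_sd, n_sim)     # prior draws
    se = sigma * np.sqrt(2.0 / n_per_arm)               # SE of the mean difference
    power_given_delta = norm.sf(norm.ppf(1 - alpha) - delta / se)
    return power_given_delta.mean()

# Conventional power at the prior mean vs assurance.
se = 1.0 * np.sqrt(2.0 / 64)
print("power at prior mean:", norm.sf(norm.ppf(0.975) - 0.5 / se))
print("assurance:          ", assurance(64, 1.0, 0.5, 0.3))
```

Because the prior spreads mass over small and negative effects, assurance is typically lower than the power computed at the prior mean.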

9.
We present a Bayesian design for a multi-centre, randomized clinical trial of two chemotherapy regimens for advanced or metastatic unresectable soft tissue sarcoma. After randomization, each patient receives up to four stages of chemotherapy, with the patient's disease evaluated after each stage and categorized on a trinary scale of severity. Therapy is continued to the next stage if the patient's disease is stable, and is discontinued if either tumour response or treatment failure is observed. We assume a probability model that accounts for baseline covariates and the multi-stage treatment and disease evaluation structure. The design uses covariate-adjusted adaptive randomization based on a score that combines the patient's probabilities of overall treatment success or failure. The adaptive randomization procedure generalizes the method proposed by Thompson (1933) for two binomial distributions with beta priors. A simulation study of the design in the context of the sarcoma trial is presented.
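A minimal sketch of the underlying Thompson (1933) rule for two binomial arms with Beta(1, 1) priors; the paper generalizes this to a covariate-adjusted score built from multi-stage success and failure probabilities. The tempering exponent c, a common way to damp extreme allocation probabilities, is an assumption here rather than the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(3)

def thompson_allocation(sA, nA, sB, nB, c=0.5, n_draws=10_000):
    """Probability of assigning the next patient to arm A under a
    Thompson (1933)-type rule with Beta(1, 1) priors: the posterior
    probability that A has the higher success rate, tempered by an
    exponent c to damp extreme allocation ratios."""
    pA = rng.beta(1 + sA, 1 + nA - sA, n_draws)
    pB = rng.beta(1 + sB, 1 + nB - sB, n_draws)
    prob = np.mean(pA > pB)
    return prob**c / (prob**c + (1 - prob)**c)

# After 10 vs 6 successes on 20 patients per arm:
print(thompson_allocation(sA=10, nA=20, sB=6, nB=20))
```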

10.
Phase I/II trials utilize both toxicity and efficacy data to achieve efficient dose finding. However, because the efficacy outcome often takes a long period of time to evaluate, the duration of phase I/II trials is often longer than that of conventional dose‐finding trials. As a result, phase I/II trials are susceptible to the missing data problem caused by patient dropout, and the missing efficacy outcomes are often nonignorable in the sense that patients who do not experience treatment efficacy are more likely to drop out of the trial. We propose a Bayesian phase I/II trial design that accommodates nonignorable dropouts. We treat toxicity as a binary outcome and efficacy as a time‐to‐event outcome. We model the marginal distribution of toxicity using logistic regression, and we jointly model the times to efficacy and dropout using proportional hazards models to adjust for nonignorable dropouts. The correlation between the times to efficacy and dropout is modeled using a shared frailty. We propose a two‐stage dose‐finding algorithm to adaptively assign patients to desirable doses. Simulation studies show that the proposed design has desirable operating characteristics: it selects the target dose with a high probability and assigns most patients to the target dose. Copyright © 2015 John Wiley & Sons, Ltd.

11.
A biologic is a product made from living organisms, and a biosimilar is a new version of an already approved branded biologic. Regulatory guidelines recommend a totality‐of‐the‐evidence approach with stepwise development for a new biosimilar. The initial steps of biosimilar development are (a) analytical comparisons to establish similarity in structure and function, followed by (b) potential animal studies and a human pharmacokinetics/pharmacodynamics equivalence study. The last step is a phase III clinical trial to confirm similar efficacy, safety, and immunogenicity between the biosimilar and the biologic. A high degree of analytical and pharmacokinetics/pharmacodynamics similarity could provide justification for an eased statistical threshold in the phase III trial, which could in turn facilitate an overall abbreviated approval process for biosimilars. Bayesian methods can help in the analysis of clinical trials by adding proper prior information into the analysis, thereby potentially decreasing the required sample size. We develop proper prior information for the analysis of a phase III trial for showing that a proposed biosimilar is similar to a reference biologic. For the reference product, we use a meta‐analysis of published results to set a prior for the probability of efficacy, and we propose priors for the proposed biosimilar informed by the strength of the evidence generated in the earlier steps of the approval process. A simulation study shows that, with few exceptions, the Bayesian relative risk analysis provides greater power, shorter 90% credible intervals with more than 90% frequentist coverage, and a better root mean squared error.

12.
Investigators need good statistical tools both for the initial planning and for the ongoing monitoring of clinical trials. In particular, they need to carefully consider the accrual rate, that is, how rapidly patients are being recruited into the clinical trial. Slow accrual decreases the likelihood that the research will provide results at the end of the trial with sufficient precision (or power) to make meaningful scientific inferences. In this paper, we present a method for predicting accrual. Using a Bayesian framework, we combine prior information with the information known up to a monitoring point to obtain a prediction, and we provide posterior predictive distributions of the accrual. The approach is attractive because it accounts for both parameter and sampling-distribution uncertainties. We illustrate the approach using actual accrual data and discuss practical points surrounding the accrual problem.
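A sketch of the simplest version of such a prediction, assuming a homogeneous Poisson accrual model with a conjugate gamma prior: the posterior predictive distribution then reflects both parameter and sampling uncertainty, as the abstract notes. The prior and interim numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(11)

def predict_accrual(n_so_far, t_so_far, horizon, a0, b0, n_sim=50_000):
    """Posterior predictive draws of additional accrual over `horizon`
    time units, under a homogeneous Poisson accrual model with a
    Gamma(a0, b0) prior (rate parameterisation) on the accrual rate."""
    rate = rng.gamma(a0 + n_so_far, 1.0 / (b0 + t_so_far), n_sim)
    return rng.poisson(rate * horizon)

# Prior worth 6 months at 10 patients/month; 35 patients seen in 5 months.
draws = predict_accrual(n_so_far=35, t_so_far=5.0, horizon=7.0,
                        a0=60.0, b0=6.0)
total = 35 + draws
print("median projected total:", int(np.median(total)))
print("P(total >= 120):       ", np.mean(total >= 120))
```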

13.
We consider clinical trial strategies for studying diseases in which the technology is developing rapidly. We assume that a limited number of patients is available for screening treatments over a time horizon, and that the availability of new treatments for testing is staggered over time. We assume further that patient response is binary and rapidly observable. We consider the strategy of conducting a sequence of two-armed randomized clinical trials: the treatment with the larger number of observed successes in the current trial is carried over to the next trial for comparison with a new treatment, with this process repeated at each step. For a fixed total number of patients (N), the number of trials one may conduct in sequence (k) is inversely related to the sample size per trial (2n), N = 2nk. We investigate how k and n influence (a) the expected success probability of the treatment selected at the end, and (b) the expected total number of successes for the N patients. The ultimate objective is to select one treatment, the winner at stage k, to test against a standard regimen in a randomized comparative phase III trial.
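A small simulation sketch of the k-versus-n trade-off for fixed N = 2nk, assuming each new treatment's true success probability is drawn from a hypothetical pool of values; the tie-breaking rule (ties keep the incumbent) is an assumption.

```python
import numpy as np

rng = np.random.default_rng(5)

def play_strategy(n, k, p_pool, n_sim=5_000):
    """Simulate k successive two-armed trials of n patients per arm.
    Each new treatment's true success probability is drawn from p_pool;
    the arm with more observed successes is carried to the next trial.
    Returns the mean true success probability of the final winner and
    the mean total number of successes across all N = 2nk patients."""
    final_p, total_succ = 0.0, 0.0
    for _ in range(n_sim):
        p_inc = rng.choice(p_pool)               # incumbent treatment
        successes = 0
        for _ in range(k):
            p_new = rng.choice(p_pool)
            x_inc = rng.binomial(n, p_inc)
            x_new = rng.binomial(n, p_new)
            successes += x_inc + x_new
            if x_new > x_inc:                    # ties keep the incumbent
                p_inc = p_new
        final_p += p_inc
        total_succ += successes
    return final_p / n_sim, total_succ / n_sim

# Fixed N = 2nk = 120: few large trials versus many small ones.
pool = np.array([0.2, 0.3, 0.4, 0.5])
for n, k in [(30, 2), (20, 3), (10, 6), (6, 10)]:
    print(n, k, play_strategy(n, k, pool))
```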

14.
In phase II cancer trials, tumour response is either the primary or an important secondary endpoint. Tumour response is a binary composite endpoint determined, according to the Response Evaluation Criteria in Solid Tumors, by (1) whether the percentage change in tumour size is greater than a prescribed threshold and (2) (binary) criteria such as whether a patient develops new lesions. Further binary criteria, such as death or serious toxicity, may be added to these criteria. The probability of tumour response (i.e. ‘success' on the composite endpoint) would usually be estimated simply as the proportion of successes among patients. This approach uses the tumour size variable only through a discretised form, namely whether or not it is above the threshold. In this article, we propose a method that also estimates the probability of success but that gains precision by using the information on the undiscretised (i.e. continuous) tumour size variable. This approach can also be used to increase the power to detect a difference between the probabilities of success under two different treatments in a comparative trial. We demonstrate these increases in precision and power using simulated data. We also apply the method to real data from a phase II cancer trial and show that it results in a considerably narrower confidence interval for the probability of tumour response. © 2013 The authors. Statistics in Medicine published by John Wiley & Sons, Ltd.
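A toy simulation of the precision gain, assuming the continuous size change is normally distributed and independent of the other binary criterion (the paper's model is more general): the augmented estimator replaces the discretised proportion for the size criterion with a plug-in normal probability.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)

# Hypothetical setup: tumour-size change ~ N(mu, sd^2); response requires
# change below threshold c AND no new lesions (lesion probability q).
mu, sd, c, q, n = -0.15, 0.40, -0.30, 0.10, 50
true_p = norm.cdf((c - mu) / sd) * (1 - q)

naive, augmented = [], []
for _ in range(5_000):
    z = rng.normal(mu, sd, n)
    lesion = rng.random(n) < q
    resp = (z < c) & ~lesion
    naive.append(resp.mean())                          # discretised estimate
    p_size = norm.cdf((c - z.mean()) / z.std(ddof=1))  # uses continuous z
    augmented.append(p_size * (1 - lesion.mean()))

print("true P(response): ", round(true_p, 3))
print("naive     mean/sd:", np.mean(naive).round(3), np.std(naive).round(4))
print("augmented mean/sd:", np.mean(augmented).round(3), np.std(augmented).round(4))
```

The augmented estimator shows a visibly smaller standard deviation across replications, which is the precision gain the abstract describes.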

15.
Noninferiority trials have recently gained importance in the clinical evaluation of drugs and medical devices. In these trials, most statistical methods have been used from a frequentist perspective, with historical data used only for the specification of the noninferiority margin Δ > 0. In contrast, Bayesian methods, which have been studied more recently, are advantageous in that they can use historical data to specify prior distributions and are expected to enable more efficient decision making than frequentist methods by borrowing information from historical trials. In noninferiority trials for response probabilities π₁ and π₂, Bayesian methods evaluate the posterior probability that H₁: π₁ > π₂ − Δ is true. Numerical calculation of this posterior probability has required the complicated Appell hypergeometric function or approximation methods, and the theoretical relationship between the Bayesian and frequentist methods has been unclear. In this work, we give an exact expression for the posterior probability of noninferiority under some mild conditions and propose a Bayesian noninferiority test framework that can flexibly incorporate historical data through the conditional power prior. Further, we show the relationship between the Bayesian posterior probability and the P value of the Fisher exact test. From this relationship, our method can be interpreted as a Bayesian noninferiority extension of the Fisher exact test, and superiority and noninferiority can be treated in the same framework. Our method is illustrated through Monte Carlo simulations evaluating its operating characteristics, an application to real HIV clinical trial data, and a sample size calculation using historical data.
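A Monte Carlo sketch of the posterior probability in question; the paper gives an exact expression, whereas here simulation from independent beta posteriors is used, with historical controls entering through a conditional power prior with weight a0. Data and weights are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def post_prob_noninf(x1, n1, x2, n2, delta,
                     xh=0, nh=0, a0=0.0, n_draws=200_000):
    """Monte Carlo estimate of P(pi_1 > pi_2 - delta | data) with
    independent beta posteriors; historical control data (xh, nh) enter
    through a conditional power prior down-weighted by a0 in [0, 1]."""
    p1 = rng.beta(1 + x1, 1 + n1 - x1, n_draws)
    p2 = rng.beta(1 + x2 + a0 * xh, 1 + (n2 - x2) + a0 * (nh - xh), n_draws)
    return np.mean(p1 > p2 - delta)

# New arm 42/60 vs control 45/60, margin 0.10; then borrowing 100
# historical controls (70 responders) with weight 0.5:
print(post_prob_noninf(42, 60, 45, 60, 0.10))
print(post_prob_noninf(42, 60, 45, 60, 0.10, xh=70, nh=100, a0=0.5))
```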

16.
Manda SO. Statistics in Medicine 2002; 21(20): 3011-3022.
A Bayesian methodology is developed to investigate the homogeneity of the treatment effects in a multi-centre clinical trial with an ordinal response. A hierarchical model is formulated for the ordinal response, and the marginal posterior distributions of the covariate, overall treatment and centre effects are calculated using the Gibbs sampler. The methodology is applied to data arising from a multi-centre clinical trial of therapies for acute myocardial infarction. In this trial, the overall results show that the treatment is effective. However, there appear to be substantial differences in both the baseline risk and the treatment effect across centres. Thus, the observed treatment effects may not generalize to a broader patient population, and exploratory analyses to ascertain reasons for the treatment-by-centre interaction and its possible effect on the study conclusions would be useful.

17.
It is well known that competing demands exist between the control of important covariate imbalance and the protection of treatment allocation randomness in confirmatory clinical trials. When implementing a response‐adaptive randomization algorithm in confirmatory clinical trials designed under a frequentist framework, additional competing demands emerge between the shift of the treatment allocation ratio and the preservation of the power. Based on a large multicenter phase III stroke trial, we present a patient randomization scheme that manages these competing demands by applying a newly developed minimal sufficient balancing design for baseline covariates and a cap on the treatment allocation ratio shift, in order to protect the allocation randomness and the power. Statistical properties of this randomization plan are studied by computer simulation. Trial operation characteristics, such as the patient enrollment rate and the primary outcome response delay, are also incorporated into the randomization plan. Copyright © 2014 John Wiley & Sons, Ltd.
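A toy version of a capped balancing rule for a single covariate, offered only to illustrate the competing demands; the paper's minimal sufficient balancing design handles multiple baseline covariates and differs in detail. A biased coin favours the balance-improving arm unless assigning there would push the arm-size ratio past the cap.

```python
import numpy as np

rng = np.random.default_rng(4)

def next_arm(new_cov, covs0, covs1, p_bias=0.7, cap=1.2):
    """Biased-coin covariate balancing with an allocation-ratio cap:
    the arm that would better balance the covariate means is favoured
    with probability p_bias, unless the assignment would push the
    arm-size ratio above `cap`, in which case the smaller arm is used."""
    n0, n1 = len(covs0), len(covs1)
    # covariate imbalance after each hypothetical assignment
    imb0 = abs(np.mean(covs0 + [new_cov]) - (np.mean(covs1) if n1 else 0))
    imb1 = abs((np.mean(covs0) if n0 else 0) - np.mean(covs1 + [new_cov]))
    favoured = 0 if imb0 < imb1 else 1
    arm = favoured if rng.random() < p_bias else 1 - favoured
    # cap: protect the allocation ratio (and hence the power)
    sizes = [n0 + (arm == 0), n1 + (arm == 1)]
    if max(sizes) / max(min(sizes), 1) > cap:
        arm = 0 if n0 < n1 else 1
    return arm

covs0, covs1 = [], []
for cov in rng.normal(size=200):
    a = next_arm(cov, covs0, covs1)
    (covs0 if a == 0 else covs1).append(cov)
print(len(covs0), len(covs1),
      round(abs(np.mean(covs0) - np.mean(covs1)), 3))
```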

18.
The predictive probability of success of a future clinical trial is a key quantitative tool for decision-making in drug development. It is derived from prior knowledge and available evidence, and the latter typically comes from the accumulated data on the clinical endpoint of interest in previous clinical trials. However, a surrogate endpoint may be used as the primary endpoint in early development, and usually no or limited data are collected on the clinical endpoint of interest. We propose a general, reliable, and broadly applicable methodology to predict the success of a future trial from surrogate endpoints, in a way that makes the best use of all the available evidence. The predictions are based on an informative prior, called the surrogate prior, derived from the results of past trials on one or several surrogate endpoints. If available, in a Bayesian framework, this prior can be combined with data from past trials on the clinical endpoint of interest. Two methods are proposed to address a potential discordance between the surrogate prior and the data on the clinical endpoint. We investigate the patterns of behavior of the predictions in a comprehensive simulation study, and we present an application to the development of a drug in multiple sclerosis. The proposed methodology is expected to support decision-making in many different situations, since the use of predictive markers is important for accelerating drug development and for selecting promising drug candidates better and earlier.

19.
Missing outcome data are a crucial threat to the validity of treatment effect estimates from randomized trials. The outcome distributions of participants with missing and observed data are often different, which increases bias. Causal inference methods may aid in reducing this bias and improving efficiency by incorporating baseline variables into the analysis. In particular, doubly robust estimators incorporate two nuisance parameters, the outcome regression and the missingness mechanism (i.e., the probability of missingness conditional on treatment assignment and baseline variables), to adjust for differences between the observed and unobserved groups that can be explained by observed covariates. To consistently estimate the treatment effect, one of these nuisance parameters must be consistently estimated. Traditionally, nuisance parameters are estimated using parametric models, which often precludes consistency, particularly in moderate to high dimensions. Recent research on missing data has focused on data‐adaptive estimation to help achieve consistency, but the large‐sample properties of such methods are poorly understood. In this article, we discuss a doubly robust estimator that is consistent and asymptotically normal under data‐adaptive estimation of the nuisance parameters. We provide a formula for an asymptotically exact confidence interval under minimal assumptions. We show that our proposed estimator has smaller finite‐sample bias than standard doubly robust estimators. We present a simulation study demonstrating the enhanced performance of our estimators in terms of bias, efficiency, and coverage of the confidence intervals, and we present the results of an illustrative example: a randomized, double‐blind phase 2/3 trial of antiretroviral therapy in HIV‐infected persons.
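A compact sketch of the doubly robust (AIPW) estimator for this missing-outcome setting, using simple parametric nuisance fits via scikit-learn rather than the data-adaptive estimators studied in the paper; the data are simulated so the true treatment effect (1.0) is known.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(8)

# Simulated trial: baseline covariate X, randomized treatment A,
# outcome Y observed (R = 1) with probability depending on X and A.
n = 2_000
X = rng.normal(size=n)
A = rng.integers(0, 2, n)
Y = 1.0 * A + 0.8 * X + rng.normal(size=n)
R = rng.random(n) < 1 / (1 + np.exp(-(0.5 + 0.8 * X - 0.4 * A)))

def dr_mean(a):
    """AIPW estimate of E[Y | A = a]: outcome regression plus an
    inverse-probability-weighted residual correction; consistent if
    either nuisance model is correctly specified."""
    idx = A == a
    out = LinearRegression().fit(X[idx & R, None], Y[idx & R])
    m = out.predict(X[idx, None])                     # outcome regression
    ps = LogisticRegression().fit(X[idx, None], R[idx])
    pi = ps.predict_proba(X[idx, None])[:, 1]         # P(observed | X, A=a)
    # R zeroes the correction term for unobserved outcomes, so the
    # simulated Y values with R = 0 never actually contribute.
    return np.mean(m + R[idx] * (Y[idx] - m) / pi)

print("DR treatment effect:", dr_mean(1) - dr_mean(0))   # truth = 1.0
```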

20.
Most statistical methodology for phase III clinical trials focuses on the comparison of a single experimental treatment with a control. An increasing desire to reduce the time before regulatory approval of a new drug is sought has led to the development of two-stage or sequential designs for trials that combine the definitive analysis associated with phase III with the treatment selection element of a phase II study. In this paper we consider a trial in which the most promising of a number of experimental treatments is selected at the first interim analysis. This considerably reduces the computational load associated with the construction of stopping boundaries compared with the approach proposed by Follmann, Proschan and Geller (Biometrics 1994; 50: 325-336); the computational requirement does not exceed that for the sequential comparison of a single experimental treatment with a control. Existing methods are extended in two ways. First, the use of the efficient score as a test statistic makes the analysis of binary, normal or failure-time data, as well as adjustment for covariates or stratification, straightforward. Second, the question of trial power is also considered, enabling determination of the sample size required to give specified power.
