Similar Literature
20 similar articles retrieved.
1.
Sample size estimation in clinical trials depends critically on nuisance parameters, such as variances or overall event rates, which have to be guessed or estimated from previous studies in the planning phase of a trial. Blinded sample size reestimation estimates these nuisance parameters based on blinded data from the ongoing trial and allows the sample size to be adjusted based on the acquired information. In the present paper, this methodology is developed for clinical trials with count data as the primary endpoint. In multiple sclerosis, such endpoints are commonly used in phase 2 trials (lesion counts in magnetic resonance imaging (MRI)) and phase 3 trials (relapse counts). Sample size adjustment formulas are presented for both Poisson‐distributed data and overdispersed Poisson‐distributed data. The latter arise from sometimes considerable between‐patient heterogeneity, which can be observed in particular in MRI lesion counts. The operating characteristics of the procedure are evaluated by simulation, and recommendations are given on how to choose the size of the internal pilot study. The results suggest that blinded sample size reestimation for count data maintains the required power without an increase in the type I error. Copyright © 2010 John Wiley & Sons, Ltd.
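A minimal sketch of the blinded re-estimation step for Poisson-distributed counts, assuming a two-arm 1:1 trial, equal follow-up time per patient, and a Wald test on the log rate ratio; the function names and the specific sample-size formula are illustrative, not taken from the paper.

```python
import math
from scipy.stats import norm

def poisson_sample_size(lam_control, rate_ratio, t, alpha=0.05, power=0.9):
    """Per-group sample size for a two-sided Wald test of the log rate ratio,
    assuming Poisson counts with follow-up time t per patient."""
    lam_treat = rate_ratio * lam_control
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var_log_rr = (1.0 / lam_control + 1.0 / lam_treat) / t  # per-patient contribution
    return math.ceil(z**2 * var_log_rr / math.log(rate_ratio)**2)

def blinded_reestimation(pooled_counts, t, assumed_rate_ratio, alpha=0.05, power=0.9):
    """Re-estimate the sample size from blinded (pooled) interim counts.
    The overall blinded rate is split into arm-specific rates using the
    rate ratio assumed at the planning stage (1:1 allocation)."""
    lam_blinded = sum(pooled_counts) / (len(pooled_counts) * t)
    lam_control = 2 * lam_blinded / (1 + assumed_rate_ratio)
    return poisson_sample_size(lam_control, assumed_rate_ratio, t, alpha, power)

# Planning assumed a control lesion rate of 1.2 per year and a 30% reduction.
n_planned = poisson_sample_size(1.2, 0.7, t=1.0)
# Blinded interim data suggest a lower overall rate, so the sample size changes.
interim_counts = [0, 2, 1, 0, 3, 1, 0, 0, 2, 1] * 5
n_adjusted = blinded_reestimation(interim_counts, t=1.0, assumed_rate_ratio=0.7)
print(n_planned, n_adjusted)
```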

2.
Meta‐analyses of clinical trials often treat the number of patients experiencing a medical event as binomially distributed when individual patient data for fitting standard time‐to‐event models are unavailable. Assuming identical drop‐out time distributions across arms, random censorship, and low proportions of patients with an event, a binomial approach results in a valid test of the null hypothesis of no treatment effect with minimal loss in efficiency compared with time‐to‐event methods. To deal with differences in follow‐up—at the cost of assuming specific distributions for event and drop‐out times—we propose a hierarchical multivariate meta‐analysis model using the aggregate data likelihood based on the number of cases, fatal cases, and discontinuations in each group, as well as the planned trial duration and group sizes. Such a model also allows exchangeability assumptions to be placed on the parameters of the survival distributions, where they are more appropriate than on the expected proportion of patients with an event across trials of substantially different length. Borrowing information from other trials within a meta‐analysis or from historical data is particularly useful for rare events data. Prior information or exchangeability assumptions also avoid the parameter identifiability problems that arise when using more flexible event and drop‐out time distributions than the exponential one. We discuss the derivation of robust historical priors and illustrate the discussed methods using an example. We also compare the proposed approach against other aggregate data meta‐analysis methods in a simulation study. Copyright © 2016 John Wiley & Sons, Ltd.
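As a hedged illustration of why trial duration matters for the aggregate-data likelihood, the sketch below computes the expected fraction of patients with an event before drop-out and before the end of planned follow-up, assuming independent exponential event and drop-out times; the rates and durations are invented for the example.

```python
import math

def expected_event_fraction(event_rate, dropout_rate, duration):
    """Probability that a patient has the event before both drop-out and the
    end of planned follow-up, assuming independent exponential event and
    drop-out times. Fractions of this kind serve as cell probabilities in an
    aggregate-data (multinomial) likelihood."""
    total = event_rate + dropout_rate
    return event_rate / total * (1.0 - math.exp(-total * duration))

# Yearly event rate 0.04 and drop-out rate 0.10, compared across trial lengths.
for years in (2.0, 4.0):
    print(years, round(expected_event_fraction(0.04, 0.10, years), 4))
# The expected proportion with an event depends strongly on trial duration,
# which is why exchangeability is more plausible for rates than for proportions.
```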

3.
We develop a new modeling approach to enhance a recently proposed method to detect increases of contrast‐enhancing lesions (CELs) on repeated magnetic resonance imaging, which have been used as an indicator for potential adverse events in multiple sclerosis clinical trials. The method signals patients with unusual increases in CEL activity by estimating the probability of observing CEL counts as large as those observed on a patient's recent scans conditional on the patient's CEL counts on previous scans. This conditional probability index (CPI), computed based on a mixed‐effect negative binomial regression model, can vary substantially depending on the choice of distribution for the patient‐specific random effects. Therefore, we relax this parametric assumption to model the random effects with an infinite mixture of beta distributions, using the Dirichlet process, which effectively allows any form of distribution. To our knowledge, no previous literature considers a mixed‐effect regression for longitudinal count variables where the random effect is modeled with a Dirichlet process mixture. As our inference is in the Bayesian framework, we adopt a meta‐analytic approach to develop an informative prior based on previous clinical trials. This is particularly helpful at the early stages of trials when less data are available. Our enhanced method is illustrated with CEL data from 10 previous multiple sclerosis clinical trials. Our simulation study shows that our procedure estimates the CPI more accurately than parametric alternatives when the patient‐specific random effect distribution is misspecified and that an informative prior improves the accuracy of the CPI estimates. Copyright © 2015 John Wiley & Sons, Ltd.
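The paper's enhancement uses a Dirichlet process mixture for the random effects; as a minimal sketch of the underlying conditional probability index idea, the code below computes the CPI under a simple parametric Poisson-gamma (negative binomial) model with unit follow-up per scan. The prior parameters and counts are illustrative assumptions, not values from the paper.

```python
from scipy.stats import nbinom

def conditional_probability_index(past_counts, new_count, shape=0.5, rate=0.5):
    """P(new CEL count >= observed | patient's past counts) under a
    Poisson-gamma model: gamma(shape, rate) random effect on the
    patient-specific rate, one scan per unit of follow-up.
    A small CPI flags an unusually large increase in CEL activity."""
    post_shape = shape + sum(past_counts)
    post_rate = rate + len(past_counts)
    # Posterior predictive for the next count is negative binomial.
    p = post_rate / (post_rate + 1.0)
    return nbinom.sf(new_count - 1, post_shape, p)  # P(Y >= new_count)

# A patient with historically low counts shows 7 lesions on the new scan.
cpi = conditional_probability_index([0, 1, 0, 2], new_count=7)
print(round(cpi, 4))  # small CPI -> signal the patient
```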

4.
Information from historical trials is important for the design, interim monitoring, analysis, and interpretation of clinical trials. Meta‐analytic models can be used to synthesize the evidence from historical data, which are often only available in aggregate form. We consider evidence synthesis methods for trials with recurrent event endpoints, which are common in many therapeutic areas. Such endpoints are typically analyzed by negative binomial regression. However, the individual patient data necessary to fit such a model are usually unavailable for historical trials reported in the medical literature. We describe approaches for back‐calculating model parameter estimates and their standard errors from available summary statistics with various techniques, including approximate Bayesian computation. We propose to use a quadratic approximation to the log‐likelihood for each historical trial based on 2 independent terms for the log mean rate and the log of the dispersion parameter. A Bayesian hierarchical meta‐analysis model then provides the posterior predictive distribution for these parameters. Simulations show this approach with back‐calculated parameter estimates results in very similar inference as using parameter estimates from individual patient data as an input. We illustrate how to design and analyze a new randomized placebo‐controlled exacerbation trial in severe eosinophilic asthma using data from 11 historical trials.

5.
Stratified medicine utilizes individual‐level covariates that are associated with a differential treatment effect, also known as treatment‐covariate interactions. When multiple trials are available, meta‐analysis is used to help detect true treatment‐covariate interactions by combining their data. Meta‐regression of trial‐level information is prone to low power and ecological bias, and therefore, individual participant data (IPD) meta‐analyses are preferable to examine interactions utilizing individual‐level information. However, one‐stage IPD models are often wrongly specified, such that interactions are based on amalgamating within‐ and across‐trial information. We compare, through simulations and an applied example, fixed‐effect and random‐effects models for a one‐stage IPD meta‐analysis of time‐to‐event data where the goal is to estimate a treatment‐covariate interaction. We show that it is crucial to centre patient‐level covariates by their mean value in each trial, in order to separate out within‐trial and across‐trial information. Otherwise, bias and coverage of interaction estimates may be adversely affected, leading to potentially erroneous conclusions driven by ecological bias. We revisit an IPD meta‐analysis of five epilepsy trials and examine age as a treatment effect modifier. The interaction is −0.011 (95% CI: −0.019 to −0.003; p = 0.004), and thus highly significant, when amalgamating within‐trial and across‐trial information. However, when separating within‐trial from across‐trial information, the interaction is −0.007 (95% CI: −0.019 to 0.005; p = 0.22), and thus its magnitude and statistical significance are greatly reduced. We recommend that meta‐analysts should only use within‐trial information to examine individual predictors of treatment effect and that one‐stage IPD models should separate within‐trial from across‐trial information to avoid ecological bias. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
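The paper's key recommendation, centring patient-level covariates by their trial-specific means, amounts to a small data-preparation step before fitting the one-stage model. A minimal sketch with pandas is shown below; the column names and toy data are hypothetical.

```python
import pandas as pd

# Illustrative IPD with one row per patient; column names are hypothetical.
ipd = pd.DataFrame({
    "trial": [1, 1, 1, 2, 2, 2],
    "treat": [0, 1, 1, 0, 1, 0],
    "age":   [25, 40, 31, 55, 62, 48],
})

# Centre the covariate by its trial-specific mean ...
ipd["age_mean"] = ipd.groupby("trial")["age"].transform("mean")
ipd["age_centred"] = ipd["age"] - ipd["age_mean"]

# ... so the one-stage model can include two separate interaction terms:
#   treat * age_centred -> within-trial interaction (the estimand of interest)
#   treat * age_mean    -> across-trial association (prone to ecological bias)
ipd["treat_x_age_within"] = ipd["treat"] * ipd["age_centred"]
ipd["treat_x_age_across"] = ipd["treat"] * ipd["age_mean"]
print(ipd)
```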

6.
Nesting of patients within care providers in trials of physical and talking therapies creates an additional level within the design. The statistical implications of this are analogous to those of cluster randomised trials, except that the clustering effect may interact with treatment and can be restricted to one or more of the arms. The statistical model that is recommended at the trial level includes a random effect for the care provider but allows the provider and patient level variances to differ across arms. Evidence suggests that, while potentially important, such within‐trial clustering effects have rarely been taken into account in trials and do not appear to have been considered in meta‐analyses of these trials. This paper describes summary measures and individual‐patient‐data methods for meta‐analysing absolute mean differences from randomised trials with two‐level nested clustering effects, contrasting fixed and random effects meta‐analysis models. It extends methods for incorporating trials with unequal variances and homogeneous clustering to allow for between‐arm and between‐trial heterogeneity in intra‐class correlation coefficient estimates. The work is motivated by a meta‐analysis of trials of counselling in primary care, where the control is no counselling and the outcome is the Beck Depression Inventory. Assuming equal counsellor intra‐class correlation coefficients across trials, the recommended random‐effects heteroscedastic model gave a pooled absolute mean difference of −2.53 (95% CI −5.33 to 0.27) using summary measures and −2.51 (95% CI −5.35 to 0.33) with the individual‐patient‐data. Pooled estimates were consistently below a minimally important clinical difference of four to five points on the Beck Depression Inventory. Copyright © 2014 John Wiley & Sons, Ltd.
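A hedged sketch of the basic variance calculation behind such analyses: the therapist-level intra-class correlation inflates the variance of the arm mean in the counselling arm only, via the usual design-effect approximation. The numbers and function name are illustrative, not taken from the meta-analysis.

```python
def var_mean_difference(sd, n_treat, n_control, cluster_size, icc):
    """Approximate variance of the absolute mean difference when only the
    counselling arm is clustered within care providers (design-effect form)."""
    design_effect = 1 + (cluster_size - 1) * icc
    var_treat = design_effect * sd**2 / n_treat
    var_control = sd**2 / n_control            # no clustering in the control arm
    return var_treat + var_control

# Example: BDI sd of 10, 100 patients per arm, 10 patients per counsellor, ICC 0.03.
v_clustered = var_mean_difference(10, 100, 100, cluster_size=10, icc=0.03)
v_naive = var_mean_difference(10, 100, 100, cluster_size=10, icc=0.0)
print(round(v_clustered, 2), round(v_naive, 2))  # ignoring the ICC understates the variance
```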

7.
There are strong arguments, ethical, logistical and financial, for supplementing the evidence from a new clinical trial using data from previous trials with similar control treatments. There is a consensus that historical information should be down‐weighted or discounted relative to information from the new trial, but the determination of the appropriate degree of discounting is a major difficulty. The degree of discounting can be represented by a bias parameter with specified variance, but a comparison between the historical and new data gives only a poor estimate of this variance. Hence, if no strong assumption is made concerning its value (i.e. if ‘dynamic borrowing’ is practiced), there may be little or no gain from using the historical data, in either frequentist terms (type I error rate and power) or Bayesian terms (posterior distribution of the treatment effect). It is therefore best to compare the consequences of a range of assumptions. This paper presents a clear, simple graphical tool for doing so on the basis of the mean square error, and illustrates its use with historical data from clinical trials in amyotrophic lateral sclerosis. This approach makes it clear that different assumptions can lead to very different conclusions. External information can sometimes provide strong additional guidance, but different stakeholders may still make very different judgements concerning the appropriate degree of discounting. Copyright © 2016 John Wiley & Sons, Ltd.
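A minimal sketch of the kind of mean-square-error comparison such a graphical tool is built on, for a simple estimator that gives weight w to a historical control mean whose true bias is unknown; the parameterization and numbers are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def mse_of_pooled_control(w, var_new, var_hist, bias):
    """MSE of a control-arm estimate that gives weight w to historical data
    whose mean may be shifted by `bias` relative to the current trial:
    MSE = (1-w)^2 Var_new + w^2 Var_hist + (w * bias)^2."""
    return (1 - w) ** 2 * var_new + w ** 2 * var_hist + (w * bias) ** 2

biases = np.linspace(0, 2, 5)           # assumed drift in the historical control mean
for w in (0.0, 0.3, 0.6):               # degree of borrowing (0 = ignore history)
    mse = mse_of_pooled_control(w, var_new=1.0, var_hist=0.25, bias=biases)
    print(w, np.round(mse, 2))
# Heavier borrowing helps only while the assumed bias stays small.
```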

8.
Missing outcome data are a common problem in randomized controlled trials, occurring when participants leave the study before its end. Missing such important information can bias the study estimates of the relative treatment effect and consequently affect the meta‐analytic results. Therefore, methods for handling data sets with missing participants, which incorporate the missing information in the analysis so as to avoid loss of power and minimize bias, are of interest. We propose a meta‐analytic model that accounts for possible error in the effect sizes estimated in studies with last observation carried forward (LOCF) imputed patients. Assuming a dichotomous outcome, we decompose the probability of a successful unobserved outcome taking into account the sensitivity and specificity of the LOCF imputation process for the missing participants. We fit the proposed model within a Bayesian framework, exploring different prior formulations for sensitivity and specificity. We illustrate our methods by performing a meta‐analysis of five studies comparing the efficacy of amisulpride versus conventional drugs (flupenthixol and haloperidol) on patients diagnosed with schizophrenia. Our meta‐analytic models yield estimates similar to meta‐analysis with LOCF‐imputed patients. Allowing for uncertainty in the imputation process, precision is decreased depending on the priors used for sensitivity and specificity. Results on the significance of amisulpride versus conventional drugs differ between the standard LOCF approach and our model depending on prior beliefs on the imputation process. Our method can be regarded as a useful sensitivity analysis that can be used in the presence of concerns about the LOCF process. Copyright © 2014 John Wiley & Sons, Ltd.
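The sensitivity/specificity decomposition of the success probability can be written as a one-line relation and inverted; the sketch below is a hedged illustration for a dichotomous outcome, with invented numbers.

```python
def true_success_probability(p_locf, sensitivity, specificity):
    """Recover the probability of a genuine (unobserved) success from the
    proportion of LOCF-imputed successes, given assumed sensitivity and
    specificity of the imputation:
        p_locf = Se * p_true + (1 - Sp) * (1 - p_true)
    """
    return (p_locf - (1 - specificity)) / (sensitivity + specificity - 1)

# Example: 45% of imputed dropouts are classed as responders; if LOCF labels
# true responders correctly 85% of the time and true non-responders 90% of
# the time, the implied true response rate differs from the imputed one.
print(round(true_success_probability(0.45, 0.85, 0.90), 3))
```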

9.
Recently, the Center for Drug Evaluation and Research at the Food and Drug Administration released a guidance that makes recommendations about how to demonstrate that a new antidiabetic therapy to treat type 2 diabetes is not associated with an unacceptable increase in cardiovascular risk. One of the recommendations from the guidance is that phases II and III trials should be appropriately designed and conducted so that a meta‐analysis can be performed. In addition, the guidance implies that a sequential meta‐analysis strategy could be adopted. That is, the initial meta‐analysis could aim at demonstrating the upper bound of a 95% confidence interval (CI) for the estimated hazard ratio to be < 1.8 for the purpose of enabling a new drug application or a biologics license application. Subsequently after the marketing authorization, a final meta‐analysis would need to show the upper bound to be < 1.3. In this context, we develop a new Bayesian sequential meta‐analysis approach using survival regression models to assess whether the size of a clinical development program is adequate to evaluate a particular safety endpoint. We propose a Bayesian sample size determination methodology for sequential meta‐analysis clinical trial design with a focus on controlling the familywise type I error rate and power. We use the partial borrowing power prior to incorporate the historical survival meta‐data into the Bayesian design. We examine various properties of the proposed methodology, and simulation‐based computational algorithms are developed to generate predictive data at various interim analyses, sample from the posterior distributions, and compute various quantities such as the power and the type I error in the Bayesian sequential meta‐analysis trial design. We apply the proposed methodology to the design of a hypothetical antidiabetic drug development program for evaluating cardiovascular risk. Copyright © 2013 John Wiley & Sons, Ltd.

10.
In some diseases, such as multiple sclerosis, lesion counts obtained from magnetic resonance imaging (MRI) are used as markers of disease progression. This leads to longitudinal, and typically overdispersed, count data outcomes in clinical trials. Models for such data invariably include a number of nuisance parameters, which can be difficult to specify at the planning stage, leading to considerable uncertainty in sample size specification. Consequently, blinded sample size re-estimation procedures are used, allowing for an adjustment of the sample size within an ongoing trial by estimating relevant nuisance parameters at an interim point, without compromising trial integrity. To date, the methods available for re-estimation have required an assumption that the mean count is time-constant within patients. We propose a new modeling approach that maintains the advantages of established procedures but allows for general underlying and treatment-specific time trends in the mean response. A simulation study is conducted to assess the effectiveness of blinded sample size re-estimation methods over fixed designs. Sample sizes attained through blinded sample size re-estimation procedures are shown to maintain the desired study power without inflating the Type I error rate and the procedure is demonstrated on MRI data from a recent study in multiple sclerosis.

11.
Including historical data may increase the power of the analysis of a current clinical trial and reduce the sample size of the study. Recently, several Bayesian methods for incorporating historical data have been proposed. One of the methods consists of specifying a so-called power prior whereby the historical likelihood is downweighted with a weight parameter. When the weight parameter is also estimated from the data, the modified power prior (MPP) is needed. This method has been used primarily when a single historical trial is available. We have adapted the MPP for incorporating multiple historical control arms into a current clinical trial, each with a separate weight parameter. Three priors for the weights are considered: (1) independent, (2) dependent, and (3) robustified dependent. The latter is developed to account for the possibility of a conflict between the historical data and the current data. We analyze two real-life data sets and perform simulation studies to compare the performance of competing Bayesian methods that allow historical control patients to be incorporated in the analysis of a current trial. The dependent power prior borrows more information from comparable historical studies and thereby can improve the statistical power. Robustifying the dependent power prior seems to protect against prior-data conflict.
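A minimal conjugate sketch of a power prior for a binary control endpoint with several historical control arms, each downweighted by its own fixed power; the modified power prior in the paper additionally estimates those weights, which this sketch does not attempt. Data, weights, and function names are illustrative.

```python
from scipy.stats import beta

def power_prior_posterior(current, historical, weights, a0=1.0, b0=1.0):
    """Beta posterior for a control-arm response rate when each historical
    control arm (x successes out of n) enters the likelihood raised to a
    fixed power 0 <= delta <= 1 (conjugate power prior, weights not estimated)."""
    a = a0 + current[0] + sum(d * x for d, (x, n) in zip(weights, historical))
    b = b0 + (current[1] - current[0]) + sum(d * (n - x) for d, (x, n) in zip(weights, historical))
    return beta(a, b)

# Current control arm: 18/60 responders; two historical control arms.
post = power_prior_posterior(
    current=(18, 60),
    historical=[(25, 80), (30, 75)],
    weights=[0.5, 0.2],   # stronger discounting for the less comparable trial
)
print(post.mean(), post.interval(0.95))
```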

12.
In meta‐analyses, where a continuous outcome is measured with different scales or standards, the summary statistic is the mean difference standardised to a common metric with a common variance. Where trial treatment is delivered by a person, nesting of patients within care providers leads to clustering that may interact with, or be limited to, one or more of the arms. Assuming a common standardising variance is less tenable and options for scaling the mean difference become numerous. Metrics suggested for cluster‐randomised trials are within, between and total variances and for unequal variances, the control arm or pooled variances. We consider summary measures and individual‐patient‐data methods for meta‐analysing standardised mean differences from trials with two‐level nested clustering, relaxing independence and common variance assumptions, allowing sample sizes to differ across arms. A general metric is proposed with comparable interpretation across designs. The relationship between the method of standardisation and choice of model is explored, allowing for bias in the estimator and imprecision in the standardising metric. A meta‐analysis of trials of counselling in primary care motivated this work. Assuming equal clustering effects across trials, the proposed random‐effects meta‐analysis model gave a pooled standardised mean difference of −0.27 (95% CI −0.45 to −0.08) using summary measures and −0.26 (95% CI −0.45 to −0.09) with the individual‐patient‐data. While treatment‐related clustering has rarely been taken into account in trials, it is now recommended that it is considered in trials and meta‐analyses. This paper contributes to the uptake of this guidance. Copyright © 2016 John Wiley & Sons, Ltd.

13.
The application of model‐based meta‐analysis in drug development has gained prominence recently, particularly for characterizing dose‐response relationships and quantifying treatment effect sizes of competitor drugs. The models are typically nonlinear in nature and involve covariates to explain the heterogeneity in summary‐level literature (or aggregate data (AD)). Inferring individual patient‐level relationships from these nonlinear meta‐analysis models leads to aggregation bias. Individual patient‐level data (IPD) are indeed required to characterize patient‐level relationships, but too often this information is limited. Since combined analyses of AD and IPD take advantage of the information the two sources share, the models developed for AD must be derived from IPD models; for linear models the solution has a closed form, while for nonlinear models closed‐form solutions do not exist. Here, we propose a linearization method based on a second order Taylor series approximation for fitting models to AD alone or combined AD and IPD. The application of this method is illustrated by an analysis of a continuous landmark endpoint, i.e., change from baseline in HbA1c at week 12, from 18 clinical trials evaluating the effects of DPP‐4 inhibitors on hyperglycemia in diabetic patients. The performance of this method is demonstrated by a simulation study where the effects of varying the degree of nonlinearity and of heterogeneity in covariates (as assessed by the ratio of between‐trial to within‐trial variability) were studied. A dose‐response relationship using an Emax model with linear and nonlinear effects of covariates on the emax parameter was used to simulate data. The simulation results showed that when an IPD model is simply used for modeling AD, the bias in the emax parameter estimate increased noticeably with an increasing degree of nonlinearity in the model, with respect to covariates. When using an appropriately derived AD model, the linearization method adequately corrected for bias. It was also noted that the bias in the model parameter estimates decreased as the ratio of between‐trial to within‐trial variability in covariate distribution increased. Taken together, the proposed linearization approach allows addressing the issue of aggregation bias in the particular case of nonlinear models of aggregate data. Copyright © 2014 John Wiley & Sons, Ltd.
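A hedged sketch of the second-order Taylor idea: the aggregate-data mean response is approximated by E[f(X)] ≈ f(μ) + ½ f''(μ) Var(X), evaluated over the trial's covariate distribution, here for an Emax model whose maximal effect depends nonlinearly (log-linearly) on a covariate. The model form, parameter values, and function names are illustrative, not the paper's exact specification.

```python
import math

def emax_response(dose, x, beta0, beta1, ed50):
    """Patient-level mean response: Emax model whose maximal effect depends
    log-linearly on a covariate x."""
    emax = math.exp(beta0 + beta1 * x)
    return emax * dose / (ed50 + dose)

def aggregate_mean_response(dose, x_mean, x_var, beta0, beta1, ed50):
    """Trial-level (aggregate-data) mean response via a second-order Taylor
    expansion of the patient-level model around the trial covariate mean:
        E[f(X)] ~ f(mu) + 0.5 * f''(mu) * Var(X)."""
    f_mu = emax_response(dose, x_mean, beta0, beta1, ed50)
    f_second = beta1**2 * f_mu   # second derivative in x of exp(beta0 + beta1*x) * dose/(ed50 + dose)
    return f_mu + 0.5 * f_second * x_var

# Naively plugging the covariate mean into the IPD model ignores the correction term.
x_mean, x_var = 8.0, 4.0
naive = emax_response(50, x_mean, beta0=-1.0, beta1=0.15, ed50=20)
corrected = aggregate_mean_response(50, x_mean, x_var, beta0=-1.0, beta1=0.15, ed50=20)
print(round(naive, 3), round(corrected, 3))
```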

14.
Conventional practice monitors accumulating information about drug safety in terms of the numbers of adverse events reported from trials in a drug development program. Estimates of between‐treatment adverse event risk differences can be obtained readily from unblinded trials with adjustment for differences among trials using conventional statistical methods. Recent regulatory guidelines require monitoring the cumulative frequency of adverse event reports to identify possible between‐treatment adverse event risk differences without unblinding ongoing trials. Conventional statistical methods for assessing between‐treatment adverse event risks cannot be applied when the trials are blinded. However, CUSUM charts can be used to monitor the accumulation of adverse event occurrences. CUSUM charts for monitoring adverse event occurrence in a Bayesian paradigm are based on assumptions about the process generating the adverse event counts in a trial as expressed by informative prior distributions. This article describes the construction of control charts for monitoring adverse event occurrence based on statistical models for the processes, characterizes their statistical properties, and describes how to construct useful prior distributions. Application of the approach to two adverse events of interest in a real trial gave nearly identical results for binomial and Poisson observed event count likelihoods. Copyright © 2016 John Wiley & Sons, Ltd.
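A minimal sketch of a classical one-sided Poisson CUSUM on per-period blinded adverse-event counts, using the standard log-likelihood-ratio increment between an acceptable rate and an elevated rate to be detected. The rates, threshold, and counts are invented for illustration; the article's Bayesian construction instead derives these quantities from informative prior distributions.

```python
import math

def poisson_cusum(counts, lam0, lam1, h):
    """One-sided CUSUM on per-period blinded AE counts.
    lam0: acceptable (in-control) mean count per period;
    lam1: elevated rate to be detected quickly; h: decision threshold.
    Returns the running statistic and the first period that signals (or None)."""
    k = math.log(lam1 / lam0)
    s, path, signal_at = 0.0, [], None
    for t, y in enumerate(counts, start=1):
        s = max(0.0, s + y * k - (lam1 - lam0))   # log-likelihood-ratio increment
        path.append(round(s, 2))
        if signal_at is None and s >= h:
            signal_at = t
    return path, signal_at

# Monthly pooled AE counts across all (blinded) arms of an ongoing trial.
monthly_counts = [2, 3, 2, 4, 3, 6, 7, 5, 8]
path, signal = poisson_cusum(monthly_counts, lam0=3.0, lam1=6.0, h=4.0)
print(path, signal)
```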

15.
Meta‐analytic methods for combining data from multiple intervention trials are commonly used to estimate the effectiveness of an intervention. They can also be extended to study comparative effectiveness, testing which of several alternative interventions is expected to have the strongest effect. This often requires network meta‐analysis (NMA), which combines trials involving direct comparison of two interventions within the same trial and indirect comparisons across trials. In this paper, we extend existing network methods for main effects to examining moderator effects, allowing for tests of whether intervention effects vary for different populations or when employed in different contexts. In addition, we study how the use of individual participant data may increase the sensitivity of NMA for detecting moderator effects, as compared with aggregate data NMA that employs study‐level effect sizes in a meta‐regression framework. A new NMA diagram is proposed. We also develop a generalized multilevel model for NMA that takes into account within‐trial and between‐trial heterogeneity and can include participant‐level covariates. Within this framework, we present definitions of homogeneity and consistency across trials. A simulation study based on this model is used to assess effects on power to detect both main and moderator effects. Results show that power to detect moderation is substantially greater when applied to individual participant data as compared with study‐level effects. We illustrate the use of this method by applying it to data from a classroom‐based randomized study that involved two sub‐trials, each comparing interventions that were contrasted with separate control groups. Copyright © 2016 John Wiley & Sons, Ltd.

16.
As evidence accumulates within a meta‐analysis, it is desirable to determine when the results could be considered conclusive to guide systematic review updates and future trial designs. Adapting sequential testing methodology from clinical trials for application to pooled meta‐analytic effect size estimates appears well suited for this objective. In this paper, we describe a Bayesian sequential meta‐analysis method, in which an informative heterogeneity prior is employed and stopping rule criteria are applied directly to the posterior distribution for the treatment effect parameter. Using simulation studies, we examine how well this approach performs under different parameter combinations by monitoring the proportion of sequential meta‐analyses that reach incorrect conclusions (to yield error rates), the number of studies required to reach conclusion, and the resulting parameter estimates. By adjusting the stopping rule thresholds, the overall error rates can be controlled within the target levels and are no higher than those of alternative frequentist and semi‐Bayes methods for the majority of the simulation scenarios. To illustrate the potential application of this method, we consider two contrasting meta‐analyses using data from the Cochrane Library and compare the results of employing different sequential methods while examining the effect of the heterogeneity prior in the proposed Bayesian approach. Copyright © 2016 John Wiley & Sons, Ltd.
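A hedged sketch of applying a stopping rule directly to the posterior for the pooled effect as trials accumulate, using a simple common-effect normal-normal update; the paper's method uses a random-effects model with an informative heterogeneity prior, which this sketch omits. Effects, variances, and thresholds are invented for illustration.

```python
from scipy.stats import norm

def sequential_posterior(effects, variances, prior_mean=0.0, prior_var=100.0):
    """Update a normal posterior for the pooled effect after each new trial
    (common-effect approximation; heterogeneity is ignored in this sketch)."""
    prec, mean = 1.0 / prior_var, prior_mean
    for y, v in zip(effects, variances):
        post_prec = prec + 1.0 / v
        mean = (prec * mean + y / v) / post_prec
        prec = post_prec
        yield mean, 1.0 / prec

# Stop when the posterior probability of benefit (effect < 0) crosses 0.99
# or drops below 0.01 (futility).
log_or = [-0.35, -0.10, -0.42, -0.28]     # study effects (log odds ratios)
var = [0.09, 0.12, 0.05, 0.04]
for i, (m, v) in enumerate(sequential_posterior(log_or, var), start=1):
    p_benefit = norm.cdf(0.0, loc=m, scale=v ** 0.5)
    print(f"after study {i}: posterior mean {m:.3f}, P(benefit) {p_benefit:.3f}")
    if p_benefit > 0.99 or p_benefit < 0.01:
        print("stopping rule met")
        break
```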

17.
A biologic is a product made from living organisms. A biosimilar is a new version of an already approved branded biologic. Regulatory guidelines recommend a totality‐of‐the‐evidence approach with stepwise development for a new biosimilar. Initial steps for biosimilar development are (a) analytical comparisons to establish similarity in structure and function followed by (b) potential animal studies and a human pharmacokinetics/pharmacodynamics equivalence study. The last step is a phase III clinical trial to confirm similar efficacy, safety, and immunogenicity between the biosimilar and the biologic. A high degree of analytical and pharmacokinetics/pharmacodynamics similarity could provide justification for an eased statistical threshold in the phase III trial, which could then further facilitate an overall abbreviated approval process for biosimilars. Bayesian methods can help in the analysis of clinical trials, by adding proper prior information into the analysis, thereby potentially decreasing required sample size. We develop proper prior information for the analysis of a phase III trial for showing that a proposed biosimilar is similar to a reference biologic. For the reference product, we use a meta‐analysis of published results to set a prior for the probability of efficacy, and we propose priors for the proposed biosimilar informed by the strength of the evidence generated in the earlier steps of the approval process. A simulation study shows that with few exceptions, the Bayesian relative risk analysis provides greater power, shorter 90% credible intervals with more than 90% frequentist coverage, and better root mean squared error.

18.
In non‐inferiority trials that employ the synthesis method, several types of dependencies among test statistics occur due to sharing of the same information from the historical trial. The conditions under which the dependencies appear may be divided into three categories. The first case is when a new drug is approved with a single non‐inferiority trial. The second case is when a new drug is approved if two independent non‐inferiority trials show positive results. The third case is when two new different drugs are approved with the same active control. The problem of the dependencies is that they can make the type I error rate deviate from the nominal level. In order to study such deviations, we introduce the unconditional and conditional across‐trial type I error rates when the non‐inferiority margin is estimated from the historical trial, and investigate how the dependencies affect the type I error rates. We show that the unconditional across‐trial type I error rate increases dramatically with the correlation between the two non‐inferiority tests when a new drug is approved based on the positive results of two non‐inferiority trials. We conclude that the conditional across‐trial type I error rate involves the unknown treatment effect in the historical trial. The formulae of the conditional across‐trial type I error rates provide us with a way of investigating the conditional across‐trial type I error rates for various assumed values of the treatment effect in the historical trial. Copyright © 2010 John Wiley & Sons, Ltd.

19.
Rich meta‐epidemiological data sets have been collected to explore associations between intervention effect estimates and study‐level characteristics. Welton et al proposed models for the analysis of meta‐epidemiological data, but these models are restrictive because they force heterogeneity among studies with a particular characteristic to be at least as large as that among studies without the characteristic. In this paper we present alternative models that are invariant to the labels defining the 2 categories of studies. To exemplify the methods, we use a collection of meta‐analyses in which the Cochrane Risk of Bias tool has been implemented. We first investigate the influence of small trial sample sizes (less than 100 participants), before investigating the influence of multiple methodological flaws (inadequate or unclear sequence generation, allocation concealment, and blinding). We fit both the Welton et al model and our proposed label‐invariant model and compare the results. Estimates of mean bias associated with the trial characteristics and of between‐trial variances are not very sensitive to the choice of model. Results from fitting a univariable model show that heterogeneity variance is, on average, 88% greater among trials with less than 100 participants. On the basis of a multivariable model, heterogeneity variance is, on average, 25% greater among trials with inadequate/unclear sequence generation, 51% greater among trials with inadequate/unclear blinding, and 23% lower among trials with inadequate/unclear allocation concealment, although the 95% intervals for these ratios are very wide. Our proposed label‐invariant models for meta‐epidemiological data analysis facilitate investigations of between‐study heterogeneity attributable to certain study characteristics.

20.
Incorporating historical control data in planning phase II clinical trials
Phase II studies of new medical treatments often use historical data on the standard treatment for comparative evaluation. Incorrectly disregarding inherent variability in the historical data may lead to erroneous conclusions regarding the efficacy of the experimental treatment. We propose an approach to phase II trial design which accounts for both inter-study and intra-study variation. Our results indicate that it is sometimes best to randomize a proportion of the patients to a control arm. We choose this proportion to maximize the precision of the estimated experimental treatment effect. We evaluate operating characteristics of the design numerically, and provide illustrations based on historical data from cancer chemotherapy trials.
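A hedged sketch of the idea of choosing the control-arm proportion to maximize precision: the experimental effect is estimated as the experimental mean minus a pooled control estimate that combines the concurrent control arm with a historical control estimate subject to inter-study variance, and the allocation minimizing the variance of that difference is found by a grid search. The variance expressions, parameter values, and names are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def effect_variance(p_control, n_total, sigma2, hist_var, between_study_var):
    """Variance of (experimental mean - pooled control estimate) when a
    fraction p_control of n_total patients is randomized to control and the
    concurrent control mean is combined (inverse-variance) with a historical
    control estimate that carries inter-study variance."""
    n_e = (1 - p_control) * n_total
    n_c = p_control * n_total
    hist_precision = 1.0 / (hist_var + between_study_var)
    control_var = 1.0 / (n_c / sigma2 + hist_precision)
    return sigma2 / n_e + control_var

p_grid = np.linspace(0.05, 0.95, 19)
variances = [effect_variance(p, n_total=80, sigma2=1.0,
                             hist_var=0.02, between_study_var=0.05) for p in p_grid]
best = p_grid[int(np.argmin(variances))]
print(f"allocate about {best:.0%} of patients to the concurrent control arm")
```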
