Similar Articles
20 similar articles found.
1.
A scenario not uncommon at the end of Phase II clinical development is that, although the choices have been narrowed down to two or three doses, the project team cannot recommend a single dose for the Phase III confirmatory study on the basis of the available data. Several ‘drop‐the‐loser’ designs are considered for monitoring multiple doses of an experimental treatment compared with a control in a pivotal Phase III study. Doses that are ineffective and/or toxic compared with the control may be dropped at the interim analyses as the study continues, and when the accumulated data have demonstrated convincing efficacy and an acceptable safety profile for one dose, that dose or the study may be stopped to make the experimental treatment available to patients. A decision to drop a toxic dose is usually based upon a comprehensive review of all the available safety data together with a risk/benefit assessment. For dropping ineffective doses, a non‐binding futility boundary may be used as guidance. The desired futility boundary can be derived by combining an appropriate risk level (i.e. the error rate for accepting the null hypothesis when the dose is truly efficacious) with a spending strategy (dropping a dose aggressively at early analyses versus late). For establishing convincing evidence of treatment efficacy, three methods for calculating the efficacy boundary are discussed: the Joint Monitoring (JM) approach, the Marginal Monitoring method with Bonferroni correction (MMB), and the Marginal Monitoring method with Adjustment for correlation (MMA). The JM approach requires intensive computation, especially when there are several doses and multiple interim analyses. The marginal monitoring methods are computationally more attractive and also more flexible, since each dose is monitored separately by its own alpha‐spending function. The JM and MMB methods control the false positive rate. The MMA method tends to protect the false positive rate and is more powerful than the Bonferroni‐based MMB method, offering a practical and flexible solution when there are several doses and multiple interim looks. Copyright © 2010 John Wiley & Sons, Ltd.
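As a rough illustration of the Bonferroni-based marginal monitoring idea (MMB), the sketch below splits a one-sided overall alpha equally across doses and then spends each dose's share equally across interim looks. The equal-spend rule and all numbers are assumptions for illustration; the paper's actual alpha-spending functions differ.

```python
from statistics import NormalDist

def mmb_boundaries(alpha=0.025, n_doses=3, n_looks=2):
    """Illustrative Bonferroni-corrected efficacy boundaries (MMB-style).

    Splits the one-sided alpha equally across doses, then spends each
    dose's share equally across looks -- a crude spending rule used
    here only to show the mechanics, not the paper's exact method.
    """
    alpha_per_dose = alpha / n_doses      # Bonferroni split across doses
    alpha_per_look = alpha_per_dose / n_looks
    z = NormalDist().inv_cdf(1 - alpha_per_look)
    return [z] * n_looks                  # flat boundary on the z-scale

bounds = mmb_boundaries()
```

Because each dose spends at most alpha/K, the familywise false positive rate is controlled regardless of the correlation among dose-specific test statistics, which is why MMB is conservative relative to MMA.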

2.
We consider estimation of treatment effects in two‐stage adaptive multi‐arm trials with a common control. The best treatment is selected at interim, and the primary endpoint is modeled via a Cox proportional hazards model. The maximum partial‐likelihood estimator of the log hazard ratio of the selected treatment will overestimate the true treatment effect in this case. Several methods for reducing the selection bias have been proposed for normal endpoints, including an iterative method based on the estimated conditional selection biases and a shrinkage approach based on empirical Bayes theory. We adapt these methods to time‐to‐event data, compare the bias and mean squared error of all methods in an extensive simulation study, and apply the proposed methods to reconstructed data from the FOCUS trial. We find that all methods tend to overcorrect the bias, and only the shrinkage methods can reduce the mean squared error. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
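To show the flavor of empirical-Bayes shrinkage for selection bias, here is a generic James–Stein-type estimator that shrinks arm-specific estimates toward their mean, with the shrinkage growing as sampling variance dominates between-arm spread. The method-of-moments plug-in and the numbers are stand-in assumptions, not the authors' exact estimator for log hazard ratios.

```python
def shrink_estimates(estimates, variances):
    """Generic empirical-Bayes (James-Stein-type) shrinkage sketch.

    Shrinks each arm's estimate toward the overall mean; arms with
    larger sampling variance are shrunk more. Illustrative only --
    not the exact shrinkage estimator used in the paper.
    """
    k = len(estimates)
    mean = sum(estimates) / k
    # Method-of-moments estimate of the between-arm variance.
    s2 = sum((e - mean) ** 2 for e in estimates) / (k - 1)
    avg_var = sum(variances) / k
    tau2 = max(s2 - avg_var, 0.0)
    return [mean + tau2 / (tau2 + v) * (e - mean)
            for e, v in zip(estimates, variances)]

# Hypothetical log hazard ratio estimates for three arms.
shrunk = shrink_estimates([0.10, 0.35, 0.60], [0.04, 0.04, 0.04])
```

The selected ("winning") arm is typically the extreme one, so shrinkage toward the mean directly counteracts the winner's-curse overestimation the abstract describes.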

3.
The rate of failure in phase III oncology trials is surprisingly high, partly owing to inadequate phase II studies. Recently, the use of randomized designs in phase II has been increasingly recommended to avoid the limits of studies that use a historical control. We propose a two‐arm two‐stage design based on a Bayesian predictive approach. The idea is to ensure a large probability, expressed in terms of the prior predictive probability of the data, of obtaining substantial posterior evidence in favour of the experimental treatment, under the assumption that it is actually more effective than the standard agent. This design is a randomized version of the two‐stage design proposed for single‐arm phase II trials by Sambucini. We examine the main features of our novel design as all the parameters involved vary and compare our approach with Jung's minimax and optimal designs. An illustrative example is also provided online as supplementary material to this article. Copyright © 2014 John Wiley & Sons, Ltd.
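One building block of any Bayesian two-arm design with binary responses is the posterior probability that the experimental response rate exceeds the standard's. The sketch below approximates it by Monte Carlo with independent Beta posteriors; the uniform Beta(1, 1) priors and the response counts are hypothetical, and this is only an ingredient, not Sambucini's full predictive criterion.

```python
import random

def posterior_prob_superiority(x_e, n_e, x_s, n_s, n_sim=100_000, seed=1):
    """Monte Carlo estimate of P(p_E > p_S | data) under independent
    Beta(1, 1) priors on the two response rates. Hypothetical counts;
    one ingredient of a Bayesian two-arm design, not the full
    predictive rule from the paper.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sim):
        p_e = rng.betavariate(1 + x_e, 1 + n_e - x_e)  # posterior draw, arm E
        p_s = rng.betavariate(1 + x_s, 1 + n_s - x_s)  # posterior draw, arm S
        hits += p_e > p_s
    return hits / n_sim

prob = posterior_prob_superiority(x_e=18, n_e=30, x_s=12, n_s=30)
```

A predictive design would additionally average such posterior probabilities over future data drawn from the prior predictive distribution, which is where the design/analysis prior distinction enters.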

4.
A biomarker (S) measured after randomization in a clinical trial can often provide information about the true endpoint (T) and hence the effect of treatment (Z). It can usually be measured earlier and more easily than T, and as such may be useful for shortening the trial. A potential use of S is to completely replace T as a surrogate endpoint to evaluate whether the treatment is effective. Another is to serve as an auxiliary variable to help provide information and improve inference on the treatment effect prediction when T is not completely observed. This report focuses on its role as an auxiliary variable and identifies situations in which S can be useful for increasing efficiency in predicting the treatment effect in a new trial in a multiple‐trial setting. Both S and T are continuous. We find that higher efficiency gain is associated with higher trial‐level correlation, but not individual‐level correlation, when only S, but not T, is measured in a new trial; however, the amount of information recovered from S is usually negligible. When T is partially observed in the new trial and the individual‐level correlation is relatively high, there is substantial efficiency gain from using S. For design purposes, our results suggest that it is often important to collect markers that have high adjusted individual‐level correlation with T, together with at least a small amount of data on T. The results are illustrated using simulations and an example from a glaucoma clinical trial. Copyright © 2010 John Wiley & Sons, Ltd.

5.
With varying, but substantial, proportions of heritability remaining unexplained by summaries of single‐SNP genetic variation, there is a demand for methods that extract maximal information from genetic association studies. One source of variation that is difficult to assess is genetic interactions. A major challenge for naive detection methods is the large number of possible combinations, with a requisite need to correct for multiple testing. Assumptions of large marginal effects, to reduce the search space, may be restrictive and miss higher order interactions with modest marginal effects. In this paper, we propose a new procedure for detecting gene‐by‐gene interactions through heterogeneity in estimated low‐order (e.g., marginal) effect sizes by leveraging population structure, or ancestral differences, among studies in which the same phenotypes were measured. We implement this approach in a meta‐analytic framework, which offers numerous advantages, such as robustness and computational efficiency, and is necessary when data‐sharing limitations restrict joint analysis. We effectively apply a dimension reduction procedure that scales to allow searches for higher order interactions. For comparison to our method, which we term phylogenY‐aware Effect‐size Tests for Interactions (YETI), we adapt an existing method that assumes interacting loci will exhibit strong marginal effects to our meta‐analytic framework. As expected, YETI excels when multiple studies are from highly differentiated populations and maintains its superiority in these conditions even when marginal effects are small. When these conditions are less extreme, the advantage of our method wanes. We assess the Type‐I error and power characteristics of complementary approaches to evaluate their strengths and limitations.

8.
Statistical inference for analyzing the results from several independent studies on the same quantity of interest has been investigated frequently in recent decades. Typically, any meta‐analytic inference requires that the quantity of interest is available from each study together with an estimate of its variability. The current work is motivated by a meta‐analysis on comparing two treatments (thoracoscopic and open) of congenital lung malformations in young children. Quantities of interest include continuous end‐points such as length of operation or number of chest tube days. As studies only report mean values (and no standard errors or confidence intervals), the question arises of how meta‐analytic inference can be developed. We suggest two methods to estimate study‐specific variances in such a meta‐analysis, where only sample means and sample sizes are available in the treatment arms. A general likelihood ratio test is derived for testing equality of variances in two groups. By means of simulation studies, the bias and estimated standard error of the overall mean difference from both methodologies are evaluated and compared with two existing approaches: complete study analysis only and partial variance information. The performance of the test is evaluated in terms of type I error. Additionally, we illustrate these methods in the meta‐analysis on comparing thoracoscopic and open surgery for congenital lung malformations and in a meta‐analysis on the change in renal function after kidney donation. Copyright © 2017 John Wiley & Sons, Ltd.

9.
Multivariate random effects meta‐analysis (MRMA) is an appropriate way for synthesizing data from studies reporting multiple correlated outcomes. In a Bayesian framework, it has great potential for integrating evidence from a variety of sources. In this paper, we propose a Bayesian model for MRMA of mixed outcomes, which extends previously developed bivariate models to the trivariate case and also allows for combination of multiple outcomes that are both continuous and binary. We have constructed informative prior distributions for the correlations by using external evidence. Prior distributions for the within‐study correlations were constructed by employing external individual patient data and using a double bootstrap method to obtain the correlations between mixed outcomes. The between‐study model of MRMA was parameterized in the form of a product of a series of univariate conditional normal distributions. This allowed us to place explicit prior distributions on the between‐study correlations, which were constructed using external summary data. Traditionally, independent ‘vague’ prior distributions are placed on all parameters of the model. In contrast to this approach, we constructed prior distributions for the between‐study model parameters in a way that takes into account the inter‐relationship between them. This is a flexible method that can be extended to incorporate mixed outcomes other than continuous and binary and beyond the trivariate case. We have applied this model to a motivating example in rheumatoid arthritis with the aim of incorporating all available evidence in the synthesis and potentially reducing uncertainty around the estimate of interest. © 2013 The Authors. Statistics in Medicine Published by John Wiley & Sons, Ltd.
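The basic resampling step behind such correlation priors can be sketched with an ordinary percentile bootstrap of a within-study correlation from individual patient data. The paper's double bootstrap for mixed continuous/binary outcomes adds an inner resampling loop; this sketch shows only the single-bootstrap idea, on made-up continuous data.

```python
import random

def pearson(xs, ys):
    """Sample Pearson correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def bootstrap_corr_ci(xs, ys, n_boot=2000, seed=7):
    """Percentile-bootstrap 95% interval for a within-study correlation.
    Single bootstrap only -- the paper's double bootstrap for mixed
    outcomes is more involved.
    """
    rng = random.Random(seed)
    n = len(xs)
    draws = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]       # resample patients
        draws.append(pearson([xs[i] for i in idx], [ys[i] for i in idx]))
    draws.sort()
    return draws[int(0.025 * n_boot)], draws[int(0.975 * n_boot)]

# Made-up individual patient data on two correlated outcomes.
rng = random.Random(0)
xs = [float(i) for i in range(20)]
ys = [0.5 * x + rng.gauss(0.0, 2.0) for x in xs]
lo, hi = bootstrap_corr_ci(xs, ys)
```

In the Bayesian model, an interval like this could be converted into an informative prior on the within-study correlation rather than leaving it 'vague'.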

10.
To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two‐phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semiparametric efficient estimator is applied. This approach is made efficient by specifying the phase two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it has apparently seldom been used in clinical trials, and we develop several novel results practicable for clinical trials. We perform simulations to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. We provide proofs and R code. The optimality results are developed to design an HIV vaccine trial, with the objective of comparing the mean ‘importance‐weighted’ breadth (Y) of the T‐cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y and measures Y in the optimal subset. We show that the optimal design‐estimation approach can confer anywhere between absent and large efficiency gain (up to 24% in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost‐standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y | W] is important for realizing the efficiency gain, which is aided by an ample phase two sample and by using a robust fitting method. Copyright © 2013 John Wiley & Sons, Ltd.
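The core allocation idea can be sketched with a Neyman-type rule at the stratum level: sample Y with probability proportional to SD(Y | W = w) divided by the square root of the measurement cost, scaled to hit the budgeted sampling fraction. The stratified setup, `target_fraction`, and all numbers are assumptions for illustration, not the paper's exact optimality result.

```python
def phase2_probs(sd_given_w, costs, target_fraction):
    """Neyman-type phase-two selection probabilities (sketch).

    For strata of W with conditional SDs of Y and per-measurement
    costs, sample Y with probability proportional to sd / sqrt(cost),
    scaled so the expected sampled fraction matches the budget.
    Illustrative stratum-level version of the general idea.
    """
    raw = [s / c ** 0.5 for s, c in zip(sd_given_w, costs)]
    scale = target_fraction * len(raw) / sum(raw)
    return [min(1.0, scale * r) for r in raw]   # cap probabilities at 1

# Three hypothetical W-strata: the noisiest stratum is also costliest.
probs = phase2_probs(sd_given_w=[1.0, 2.0, 4.0],
                     costs=[1.0, 1.0, 4.0],
                     target_fraction=0.5)
```

Strata where Y is more variable (per unit cost) get oversampled, which is exactly the "greater variability in the cost-standardized conditional variance yields greater efficiency gains" mechanism in the abstract.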

11.
The process of undertaking a meta‐analysis involves a sequence of decisions, one of which is deciding which measure of treatment effect to use. In particular, for comparative binary data from randomised controlled trials, a wide variety of measures are available such as the odds ratio and the risk difference. It is often of interest to know whether important conclusions would have been substantively different if an alternative measure had been used. Here we develop a new type of sensitivity analysis that incorporates standard measures of treatment effect. Thus, rather than examining the implications of a variety of measures in an ad hoc manner, we can simultaneously examine an entire family of possibilities, including the odds ratio, the arcsine difference and the risk difference. Copyright © 2012 John Wiley & Sons, Ltd.
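The three named members of that family are easy to compute from a two-arm binary table, which makes clear how differently they scale the same data. The counts below are hypothetical, chosen only to illustrate the definitions.

```python
import math

def effect_measures(e1, n1, e0, n0):
    """Three standard effect measures from a two-arm binary trial:
    odds ratio, risk difference, and arcsine difference.
    e1/n1 = events/size in the treatment arm, e0/n0 in control.
    Hypothetical counts, for illustration of the family spanned by
    the sensitivity analysis.
    """
    p1, p0 = e1 / n1, e0 / n0
    odds_ratio = (p1 / (1 - p1)) / (p0 / (1 - p0))
    risk_diff = p1 - p0
    arcsine_diff = math.asin(math.sqrt(p1)) - math.asin(math.sqrt(p0))
    return odds_ratio, risk_diff, arcsine_diff

or_, rd, ad = effect_measures(30, 100, 20, 100)
```

Because the arcsine transform is variance-stabilizing for proportions, the arcsine difference sits naturally between the additive risk difference and the multiplicative odds ratio in such a family.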

12.
Rich meta‐epidemiological data sets have been collected to explore associations between intervention effect estimates and study‐level characteristics. Welton et al. proposed models for the analysis of meta‐epidemiological data, but these models are restrictive because they force heterogeneity among studies with a particular characteristic to be at least as large as that among studies without the characteristic. In this paper we present alternative models that are invariant to the labels defining the two categories of studies. To exemplify the methods, we use a collection of meta‐analyses in which the Cochrane Risk of Bias tool has been implemented. We first investigate the influence of small trial sample sizes (less than 100 participants), before investigating the influence of multiple methodological flaws (inadequate or unclear sequence generation, allocation concealment, and blinding). We fit both the Welton et al. model and our proposed label‐invariant model and compare the results. Estimates of mean bias associated with the trial characteristics and of between‐trial variances are not very sensitive to the choice of model. Results from fitting a univariable model show that heterogeneity variance is, on average, 88% greater among trials with less than 100 participants. On the basis of a multivariable model, heterogeneity variance is, on average, 25% greater among trials with inadequate/unclear sequence generation, 51% greater among trials with inadequate/unclear blinding, and 23% lower among trials with inadequate/unclear allocation concealment, although the 95% intervals for these ratios are very wide. Our proposed label‐invariant models for meta‐epidemiological data analysis facilitate investigations of between‐study heterogeneity attributable to certain study characteristics.

13.
Baseline risk is a proxy for unmeasured but important patient‐level characteristics, which may be modifiers of treatment effect, and is a potential source of heterogeneity in meta‐analysis. Models adjusting for baseline risk have been developed for pairwise meta‐analysis using the observed event rate in the placebo arm and taking into account the measurement error in the covariate to ensure that an unbiased estimate of the relationship is obtained. Our objective is to extend these methods to network meta‐analysis, where it is of interest to adjust for baseline imbalances in the non‐intervention group event rate to reduce both heterogeneity and possibly inconsistency. This objective is complicated in network meta‐analysis by this covariate sometimes being missing, because not all studies in a network may have a non‐active intervention arm. A random‐effects meta‐regression model allowing for inclusion of multi‐arm trials and trials without a ‘non‐intervention’ arm is developed. Analyses are conducted within a Bayesian framework using the WinBUGS software. The method is illustrated using two examples: (i) interventions to promote functional smoke alarm ownership by households with children and (ii) analgesics to reduce post‐operative morphine consumption following a major surgery. The results showed no evidence of baseline effect in the smoke alarm example, but the analgesics example shows that the adjustment can greatly reduce heterogeneity and improve overall model fit. Copyright © 2012 John Wiley & Sons, Ltd.

14.
Bland–Altman method comparison studies are common in the medical sciences and are used to compare a new measure to a gold‐standard (often costlier or more invasive) measure. The distribution of the differences between the measures is summarized by two statistics, the ‘bias’ and the standard deviation, and these are combined to provide estimates of the limits of agreement (LoA). When these LoA are within the bounds of clinically insignificant differences, the new non‐invasive measure is preferred. Very often, multiple Bland–Altman studies have been conducted comparing the same two measures, and random‐effects meta‐analysis provides a means to pool these estimates. We provide a framework for the meta‐analysis of Bland–Altman studies, including methods for estimating the LoA and measures of uncertainty (i.e., confidence intervals). Importantly, these LoA are likely to be wider than those typically reported in Bland–Altman meta‐analyses. Frequently, Bland–Altman studies report results based on repeated measures designs but do not properly adjust for this design in the analysis, and meta‐analyses of Bland–Altman studies frequently exclude such studies for this reason. We provide a meta‐analytic approach that allows inclusion of estimates from these studies. This includes adjustments to the estimate of the standard deviation and a method for pooling the estimates based upon robust variance estimation. An example is included based on a previously published meta‐analysis. Copyright © 2017 John Wiley & Sons, Ltd.
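The single-study quantities being pooled are simple: the bias is the mean of the paired differences and the 95% limits of agreement are bias ± 1.96 times their standard deviation. The paired measurements below are made up; a meta-analysis would pool these statistics across studies with random effects, which is where the extra width noted in the abstract comes from.

```python
from statistics import mean, stdev

def limits_of_agreement(method_a, method_b):
    """Single-study Bland-Altman summary: bias (mean difference) and
    95% limits of agreement (bias +/- 1.96 * SD of the differences).
    Made-up paired data; the repeated-measures adjustments discussed
    in the paper are not shown.
    """
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    sd = stdev(diffs)          # sample SD of paired differences
    return bias - 1.96 * sd, bias, bias + 1.96 * sd

lo, bias, hi = limits_of_agreement(
    [10.1, 9.8, 10.5, 10.0, 9.7, 10.3],   # new measure
    [10.0, 9.9, 10.2, 9.8, 9.9, 10.1],    # gold standard
)
```

If the interval (lo, hi) lies within the clinically insignificant range, the two measures are judged interchangeable for practical purposes.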

15.
Meta‐analysis of clinical trials is a methodology to summarize information from a collection of trials about an intervention, in order to make informed inferences about that intervention. Random effects allow the target population outcomes to vary among trials. Since meta‐analysis is often an important element in helping shape public health policy, society depends on biostatisticians to help ensure that the methodology is sound. Yet when meta‐analysis involves randomized binomial trials with low event rates, the overwhelming majority of publications use methods currently not intended for such data. This statistical practice issue must be addressed. Proper methods exist, but they are rarely applied. This tutorial is devoted to estimating a well‐defined overall relative risk, via a patient‐weighted random‐effects method. We show what goes wrong with methods based on ‘inverse‐variance’ weights, which are almost universally used. To illustrate similarities and differences, we contrast our methods, inverse‐variance methods, and the published results (usually inverse‐variance) for 18 meta‐analyses from 13 Journal of the American Medical Association articles. We also consider the 2007 case of rosiglitazone (Avandia), where important public health issues were at stake, involving patient cardiovascular risk. The most widely used method would have reached a different conclusion. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
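For context, here is the near-universal fixed-effect inverse-variance pooling of log relative risks that the tutorial critiques: it requires an estimated variance per trial, which breaks down with zero-event arms (forcing ad hoc continuity corrections) and weights trials by their estimated precision rather than by patients. The trial counts are hypothetical.

```python
import math

def pooled_log_rr(trials):
    """Fixed-effect inverse-variance pooling of log relative risks --
    the standard approach the tutorial argues against for low event
    rates. Each trial is (events_trt, n_trt, events_ctl, n_ctl);
    hypothetical counts, and no zero cells (which would need a
    continuity correction).
    Returns (pooled RR, lower 95% CL, upper 95% CL).
    """
    num = den = 0.0
    for e1, n1, e0, n0 in trials:
        log_rr = math.log((e1 / n1) / (e0 / n0))
        var = 1 / e1 - 1 / n1 + 1 / e0 - 1 / n0   # delta-method variance
        w = 1 / var                                # inverse-variance weight
        num += w * log_rr
        den += w
    pooled = num / den
    se = (1 / den) ** 0.5
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se),
            math.exp(pooled + 1.96 * se))

rr, lo, hi = pooled_log_rr([(5, 100, 10, 100), (8, 200, 12, 200)])
```

Note how the weights depend on observed event counts: with rare events these estimated variances are unstable, which is one driver of the problems the tutorial documents.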

16.
Standard methods for fixed effects meta‐analysis assume that standard errors for study‐specific estimates are known, not estimated. While the impact of this simplifying assumption has been shown in a few special cases, its general impact is not well understood, nor are general‐purpose tools available for inference under more realistic assumptions. In this paper, we aim to elucidate the impact of using estimated standard errors in fixed effects meta‐analysis, showing why it does not go away in large samples and quantifying how badly miscalibrated standard inference will be if it is ignored. We also show the important role of a particular measure of heterogeneity in this miscalibration. These developments lead to confidence intervals for fixed effects meta‐analysis with improved performance for both location and scale parameters.

17.
Phase II clinical trials are typically designed as two‐stage studies, in order to ensure early termination of the trial if the interim results show that the treatment is ineffective. Most two‐stage designs, whether developed under a frequentist or a Bayesian framework, select the second stage sample size before observing the first stage data. This may cause some paradoxical situations in the practical conduct of the trial. To avoid these potential problems, we suggest a Bayesian predictive strategy to derive an adaptive two‐stage design, where the second stage sample size is not selected in advance, but depends on the first stage result. The criterion we propose is based on a modification of a Bayesian predictive design recently presented in the literature (Statist. Med. 2008; 27:1199–1224). The distinction between analysis and design priors is essential for the practical implementation of the procedure: some guidelines for choosing these prior distributions are discussed and their impact on the required sample size is examined. Copyright © 2010 John Wiley & Sons, Ltd.

20.
We have developed a method, called Meta‐STEPP (subpopulation treatment effect pattern plot for meta‐analysis), to explore treatment effect heterogeneity across covariate values in the meta‐analysis setting for time‐to‐event data when the covariate of interest is continuous. Meta‐STEPP forms overlapping subpopulations from individual patient data containing similar numbers of events with increasing covariate values, estimates subpopulation treatment effects using standard fixed‐effects meta‐analysis methodology, displays the estimated subpopulation treatment effect as a function of the covariate values, and provides a statistical test to detect possibly complex treatment‐covariate interactions. Simulation studies show that this test adequately controls the type‐I error rate and has good power when reasonable window sizes are chosen. When applied to eight breast cancer trials, Meta‐STEPP suggests that chemotherapy is less effective for tumors with high estrogen receptor expression compared with those with low expression. Copyright © 2016 John Wiley & Sons, Ltd.
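The overlapping-subpopulation construction can be sketched with a sliding window over patients sorted by the covariate: each window holds r2 patients and consecutive windows share r1 of them. The parameter names `r1`/`r2` follow common STEPP usage but the scheme here is a simplified assumption; Meta-STEPP additionally balances the number of *events* per window rather than the number of patients.

```python
def stepp_windows(values, r1, r2):
    """Sliding-window subpopulations in the spirit of STEPP.

    Sort patients by the covariate, then form overlapping windows of
    r2 patients advancing by r2 - r1, so consecutive windows share r1
    patients. Simplified sketch: Meta-STEPP forms windows containing
    similar numbers of events, not patients.
    """
    ordered = sorted(values)
    step = r2 - r1
    windows = []
    start = 0
    while start + r2 <= len(ordered):
        windows.append(ordered[start:start + r2])
        start += step
    return windows

# Ten hypothetical covariate values (e.g., estrogen receptor levels).
wins = stepp_windows(list(range(10)), r1=2, r2=4)
```

A treatment effect estimated within each window, plotted against the window's median covariate value, gives the "pattern plot" that the interaction test then summarizes.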
