Similar Literature
20 similar documents found (search time: 15 ms)
1.
A typical challenge facing the design and analysis of immuno-oncology (IO) trials is the prevalence of nonproportional hazards (NPH) patterns manifested in Kaplan-Meier curves under time-to-event endpoints. The NPH patterns violate the proportional hazards assumption, yet conventional design and analysis strategies often ignore such a violation, resulting in underpowered or even falsely negative IO studies. In this article, we show, both empirically and analytically, that treating nonresponders in IO studies of inadequate size gives rise to a variety of NPH patterns; we then present a novel design and analysis strategy, p%-responder information embedded (PRIME), to properly incorporate the dichotomized response incurred from treating nonresponders. Empirical studies demonstrate that the proposed strategy achieves desirable power, whereas the conventional alternative leads to a severe power loss. The PRIME strategy allows us to quantify the impact of treating nonresponders on study efficiency, thereby enabling a proper design of IO trials with adequate power. More importantly, it pinpoints a way to enhance study efficiency and alleviate the NPH patterns by enrolling more prospective responders. An R package (Immunotherapy.Design) is developed for implementation.
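The claim that treating a mix of responders and nonresponders induces NPH patterns can be illustrated with a small numerical sketch (not the PRIME method itself; the responder fraction and component hazard rates below are illustrative assumptions). The hazard of a two-component exponential mixture starts near the nonresponder rate and drifts toward the responder rate, so the hazard ratio against a constant-hazard control crosses 1 over time:

```python
import math

def mixture_hazard(t, p, lam_r, lam_nr):
    """Hazard of a two-component exponential mixture: a fraction p are
    responders (rate lam_r), the rest nonresponders (rate lam_nr)."""
    s_r, s_nr = math.exp(-lam_r * t), math.exp(-lam_nr * t)
    density = p * lam_r * s_r + (1 - p) * lam_nr * s_nr
    survival = p * s_r + (1 - p) * s_nr
    return density / survival

# Illustrative (assumed) rates: responders benefit strongly, nonresponders do not.
P_RESP, LAM_R, LAM_NR, LAM_CTRL = 0.3, 0.1, 1.0, 0.5

for t in (0.0, 1.0, 3.0, 6.0):
    hr = mixture_hazard(t, P_RESP, LAM_R, LAM_NR) / LAM_CTRL
    print(f"t={t:.0f}  hazard ratio vs control = {hr:.2f}")
```

Printing the ratio at several times shows it falling from above 1 (early excess hazard driven by nonresponders) to well below 1, a crossing-hazards pattern that no single proportional-hazards summary captures.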

2.
Phase I clinical trials are the first step in drug development to test a new drug or drug combination on humans. Typical designs of phase I trials use toxicity as the primary endpoint and aim to find the maximum tolerable dose. However, these designs are poorly suited to the development of cancer therapeutic vaccines, because the expected safety concerns for these vaccines are not as great as for cytotoxic agents. The primary objectives of a cancer therapeutic vaccine phase I trial thus often include determining whether the vaccine shows biologic activity and the minimum dose necessary to achieve a full immune or even clinical response. In this paper, we propose a new Bayesian phase I trial design that allows simultaneous evaluation of safety and immunogenicity outcomes. We demonstrate the proposed clinical trial design with both a numerical study and a therapeutic human papillomavirus vaccine trial.

3.
Cancer immunotherapy trials have two special features: a delayed treatment effect and a cure rate. Both features violate the proportional hazards assumption, and ignoring either one in an immunotherapy trial design will result in a substantial loss of statistical power. To properly design immunotherapy trials, we propose a piecewise proportional hazards cure rate model that incorporates both the delayed treatment effect and the cure rate into the trial design. A sample size formula is derived for a weighted log-rank test under a fixed alternative hypothesis. The accuracy of sample size calculation using the new formula is assessed and compared with existing methods via simulation studies. A real immunotherapy trial is used to illustrate the study design, along with practical considerations of the balance between sample size and follow-up time.

4.
In clinical trials with survival endpoint, it is common to observe an overlap between two Kaplan–Meier curves of treatment and control groups during the early stage of the trials, indicating a potential delayed treatment effect. Formulas have been derived for the asymptotic power of the log‐rank test in the presence of delayed treatment effect and its accompanying sample size calculation. In this paper, we first reformulate the alternative hypothesis with the delayed treatment effect in a rescaled time domain, which can yield a simplified sample size formula for the log‐rank test in this context. We further propose an intersection‐union test to examine the efficacy of treatment with delayed effect and show it to be more powerful than the log‐rank test. Simulation studies are conducted to demonstrate the proposed methods. Copyright © 2016 John Wiley & Sons, Ltd.
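The standard ingredient behind such calculations is Schoenfeld's required-events formula for the log-rank test. The sketch below pairs it with a crude average-log-hazard-ratio dilution to show why a delayed effect inflates the required events; the dilution heuristic is an illustrative assumption, not the paper's rescaled-time formula:

```python
import math
from statistics import NormalDist

def schoenfeld_events(hazard_ratio, alpha=0.05, power=0.9):
    """Required events for a two-sided log-rank test with 1:1 allocation
    (classic Schoenfeld approximation under proportional hazards)."""
    z = NormalDist()
    za, zb = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    return math.ceil(4 * (za + zb) ** 2 / math.log(hazard_ratio) ** 2)

def events_with_delay(hazard_ratio, frac_events_after_delay, alpha=0.05, power=0.9):
    """Crude heuristic (an assumption, not the paper's method): if only a
    fraction q of events occurs after the delay, the average log-HR is
    diluted to q*log(HR), inflating the required events by about 1/q**2."""
    q = frac_events_after_delay
    diluted_hr = math.exp(q * math.log(hazard_ratio))
    return schoenfeld_events(diluted_hr, alpha, power)

print(schoenfeld_events(0.7))        # proportional hazards throughout
print(events_with_delay(0.7, 0.75))  # 25% of events before effect onset
```

With a hazard ratio of 0.7 and a quarter of the events occurring before the effect kicks in, the required event count rises by a factor of roughly 1/0.75² ≈ 1.78.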

5.
This paper presents methodology for designing complex group sequential survival trials when the survival curves will be compared using the logrank statistic. The method can be applied to any treatment and control survival curves as long as each hazard function can be approximated by a piecewise linear function. The approach allows arbitrary accrual patterns and permits adjustment for varying rates of non-compliance, drop-in and loss to follow-up. The calendar-time-information-time transformation is derived under these complex assumptions. This permits the exploration of the operating characteristics of various interim analysis plans, including sample size and power. By using the calendar-time-information-time transformation, information fractions corresponding to desired calendar times can be determined. In this way, the interim analyses can be scheduled in information time, assuring the desired power and realization of the spending function, while the interim analyses will take place according to the desired calendar schedule.

6.
Shao J, Chang M, Chow SC. Statistics in Medicine 2005;24(12):1783-1790
In cancer clinical trials, it is not uncommon for some patients to switch treatments due to lack of efficacy and/or disease progression, under ethical considerations. Such treatment switching makes it difficult to evaluate the efficacy of the treatment under investigation. Existing methods assume random treatment switching and do not take into consideration the prognosis and/or investigator's assessment that leads patients to switch treatment. In this paper, we model patients' treatment switching effect with a latent event times model in a parametric setting, or with a latent hazard rate model under the semi-parametric proportional hazards model. Statistical inference procedures under both models are provided. A simulation study is performed to investigate the performance of the proposed methods.

7.
Giraudeau, Ravaud and Donner in 2008 presented a formula for sample size calculations for cluster randomised crossover trials, when the intracluster correlation coefficient, interperiod correlation coefficient and mean cluster size are specified in advance. However, in many randomised trials, the number of clusters is constrained in some way, but the mean cluster size is not. We present a version of the Giraudeau formula for sample size calculations for cluster randomised crossover trials when the number of clusters is fixed. Formulae are given for the minimum number of clusters, the maximum cluster size and the relationship between the correlation coefficients when there are constraints on both the number of clusters and the cluster size. Our version of the formula may aid the efficient planning and design of cluster randomised crossover trials.
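Under the correlation structure described (intracluster correlation ρ, interperiod correlation η, cluster size m), the variance of a within-cluster difference of period means works out to (2σ²/m)[1 + (m−1)ρ − mη], which yields a simple cluster-count formula. The sketch below follows that derivation as an illustration of the general approach; it is not a verbatim transcription of the published formula, and period effects are ignored:

```python
import math
from statistics import NormalDist

def crossover_clusters(delta, sigma, m, icc, ipc, alpha=0.05, power=0.8):
    """Total clusters for a cluster randomised crossover trial with a
    continuous outcome: each cluster of size m contributes a within-cluster
    difference of period means with variance (2*sigma^2/m)*IF, where
    IF = 1 + (m-1)*icc - m*ipc (icc: intracluster correlation,
    ipc: interperiod correlation). Sketch under these assumptions."""
    z = NormalDist()
    zsum = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)
    inflation = 1 + (m - 1) * icc - m * ipc
    return math.ceil(2 * sigma**2 * zsum**2 * inflation / (m * delta**2))

print(crossover_clusters(delta=0.5, sigma=1.0, m=20, icc=0.05, ipc=0.02))
```

A larger interperiod correlation shrinks the inflation factor, which is exactly the efficiency advantage of the crossover over a parallel cluster design.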

8.
In this paper, a Bayesian approach is developed for simultaneously comparing multiple experimental treatments with a common control treatment in an exploratory clinical trial. The sample size is set to ensure that, at the end of the study, there will be at least one treatment for which the investigators have a strong belief that it is better than control, or else they have a strong belief that none of the experimental treatments are substantially better than control. This criterion bears a direct relationship with conventional frequentist power requirements, while allowing prior opinion to feature in the analysis with a consequent reduction in sample size. If it is concluded that at least one of the experimental treatments shows promise, then it is envisaged that one or more of these promising treatments will be developed further in a definitive phase III trial. The approach is developed in the context of normally distributed responses sharing a common standard deviation regardless of treatment. To begin with, the standard deviation will be assumed known when the sample size is calculated. The final analysis will not rely upon this assumption, although the intended properties of the design may not be achieved if the anticipated standard deviation turns out to be inappropriate. Methods that formally allow for uncertainty about the standard deviation, expressed in the form of a Bayesian prior, are then explored. Illustrations of the sample sizes computed from the new method are presented, and comparisons are made with frequentist methods devised for the same situation. Copyright © 2015 John Wiley & Sons, Ltd.

9.
This work is motivated by trials in rapidly lethal cancers or cancers for which measuring shrinkage of tumours is infeasible. In either case, traditional phase II designs focussing on tumour response are unsuitable. Usually, tumour response is considered as a substitute for the more relevant but longer‐term endpoint of death. In rapidly lethal cancers such as pancreatic cancer, there is no need to use a surrogate, as the definitive endpoint is (sadly) available so soon. In uveal cancer, there is no counterpart to tumour response, and so, mortality is the only realistic response available. Cytostatic cancer treatments do not seek to kill tumours, but to mitigate their effects. Trials of such therapy might also be based on survival times to death or progression, rather than on tumour shrinkage. Phase II oncology trials are often conducted with all study patients receiving the experimental therapy, and this approach is considered here. Simple extensions of one‐stage and two‐stage designs based on binary responses are presented. Outcomes based on survival past a small number of landmark times are considered: here, the case of three such times is explored in examples. This approach allows exact calculations to be made for both design and analysis purposes. Simulations presented here show that calculations based on normal approximations can lead to loss of power when sample sizes are small. Two‐stage versions of the procedure are also suggested. Copyright © 2014 John Wiley & Sons, Ltd.
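For a single landmark time, the exact calculation the abstract advocates reduces to a search over binomial designs: the smallest n and cutoff r whose exact size and power meet the targets. A minimal sketch, where p0 and p1 are illustrative survival-past-landmark probabilities rather than values from the paper:

```python
from math import comb

def binom_tail(n, r, p):
    """P(X >= r) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(r, n + 1))

def exact_single_stage(p0, p1, alpha=0.05, beta=0.2, max_n=200):
    """Smallest n (with cutoff r) such that rejecting when >= r patients
    survive past the landmark has size <= alpha under p0 and power
    >= 1 - beta under p1: an exact one-stage phase II design."""
    for n in range(1, max_n + 1):
        for r in range(n + 1):
            if binom_tail(n, r, p0) <= alpha and binom_tail(n, r, p1) >= 1 - beta:
                return n, r
    raise ValueError("no design found within max_n")

print(exact_single_stage(p0=0.2, p1=0.4))
```

Because the binomial tails are computed exactly, small-sample designs found this way keep their nominal size and power, unlike designs sized via normal approximation.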

10.
In recent years, developing pharmaceutical products via multiregional clinical trials (MRCTs) has become standard. Traditionally, an MRCT would assume that the treatment effect is uniform across regions. However, heterogeneity among regions may have an impact on the evaluation of a medicine's effect. In this study, we consider a random effects model using a discrete distribution (DREM) to account for heterogeneous treatment effects across regions in the design and evaluation of MRCTs. We derive a power function for a treatment that is beneficial under DREM and illustrate determination of the overall sample size in an MRCT. We use the concept of consistency based on Method 2 of the Japanese Ministry of Health, Labour, and Welfare's guidance to evaluate the probability of treatment benefit and consistency under DREM. We further derive an optimal sample size allocation over regions to maximize the power for consistency. Moreover, we provide three algorithms for deriving the sample size at the desired level of power for benefit and consistency. In practice, regional treatment effects are unknown. Thus, we provide some guidelines on the design of MRCTs with consistency when the regional treatment effects are assumed to fall into a specified interval. Numerical examples are given to illustrate applications of the proposed approach. Copyright © 2016 John Wiley & Sons, Ltd.

11.
Conventional phase II trials using binary endpoints as early indicators of a time‐to‐event outcome are not always feasible. Uveal melanoma has no reliable intermediate marker of efficacy. In pancreatic cancer and viral clearance, the time to the event of interest is short, making an early indicator unnecessary. In the latter application, Weibull models have been used to analyse corresponding time‐to‐event data. Bayesian sample size calculations are presented for single‐arm and randomised phase II trials assuming proportional hazards models for time‐to‐event endpoints. Special consideration is given to the case where survival times follow the Weibull distribution. The proposed methods are demonstrated through an illustrative trial based on uveal melanoma patient data. A procedure for prior specification based on knowledge or predictions of survival patterns is described. This enables investigation into the choice of allocation ratio in the randomised setting to assess whether a control arm is indeed required. The Bayesian framework enables sample sizes consistent with those used in practice to be obtained. When a confirmatory phase III trial will follow if suitable evidence of efficacy is identified, Bayesian approaches are less controversial than for definitive trials. In the randomised setting, a compromise for obtaining feasible sample sizes is a loss in certainty in the specified hypotheses: the Bayesian counterpart of power. However, this approach may still be preferable to running a single‐arm trial where no data is collected on the control treatment. This dilemma is present in most phase II trials, where resources are not sufficient to conduct a definitive trial. Copyright © 2015 John Wiley & Sons, Ltd.

12.
Three-arm trials including an experimental treatment, an active control and a placebo group are frequently preferred for the assessment of non-inferiority. In contrast to two-arm non-inferiority studies, these designs allow a direct proof of efficacy of a new treatment by comparison with placebo. As a further advantage, the test problem for establishing non-inferiority can be formulated in such a way that rejection of the null hypothesis assures that a pre-defined portion of the (unknown) effect the reference shows versus placebo is preserved by the treatment under investigation. We present statistical methods for this study design and the situation of a binary outcome variable. Asymptotic test procedures are given and their actual type I error rates are calculated. Approximate sample size formulae are derived and their accuracy is discussed. Furthermore, the question of optimal allocation of the total sample size is considered. Power properties of the testing strategy including a pre-test for assay sensitivity are presented. The derived methods are illustrated by application to a clinical trial in depression.

13.
Vaccine 2017;35(28):3582-3590
CIGB-247 is a cancer therapeutic vaccine based on recombinant modified human vascular endothelial growth factor (VEGF) as antigen, in combination with VSSP, a bacterially derived adjuvant. The vaccine has demonstrated efficacy in several murine malignancy models. These studies supported the rationale for a phase I clinical trial in which the safety, tolerance, and immunogenicity of CIGB-247 were studied in patients with advanced solid tumors at three antigen dose levels. Surviving participants of this clinical trial were eligible to receive off-trial voluntary re-immunizations. The present work focuses on the immunological follow-up of these patients after approximately three years of immunizations, without additional oncological treatments. Long-term vaccination was feasible and safe. Our results indicate that under sustained vaccination most of the patients conserved their seroconversion status. The specific anti-VEGF IgG titer diminished but in all cases remained above pre-vaccination levels. Continued vaccination was also important in producing a gradual shift in the anti-VEGF IgG response from IgG1 to IgG4. Notably, our results indicate that long-term off-trial vaccination may be associated with the maintenance of a reserve of antibodies able to interfere with the VEGF/receptor interaction and with IFNγ secretion by CD8+ cells. The results from this series of patients suggest that long-term therapeutic vaccination is a feasible strategy, and highlight the importance of continuing the clinical development program of this novel cancer therapeutic vaccine candidate. We also highlight future clinical applications of CIGB-247 in cancer and discuss knowledge gaps that future studies may address. Registration number and name of trial registry: RPCEC00000102, Cuban Public Clinical Trial Registry (WHO-accepted Primary Registry). Available from: http://registroclinico.sld.cu/.

14.
Cluster randomized trials (CRTs) refer to experiments with randomization carried out at the cluster or the group level. While numerous statistical methods have been developed for the design and analysis of CRTs, most of the existing methods focused on testing the overall treatment effect across the population characteristics, with few discussions on the differential treatment effect among subpopulations. In addition, the sample size and power requirements for detecting differential treatment effect in CRTs remain unclear, but are helpful for studies planned with such an objective. In this article, we develop a new sample size formula for detecting treatment effect heterogeneity in two-level CRTs for continuous outcomes, continuous or binary covariates measured at cluster or individual level. We also investigate the roles of two intraclass correlation coefficients (ICCs): the adjusted ICC for the outcome of interest and the marginal ICC for the covariate of interest. We further derive a closed-form design effect formula to facilitate the application of the proposed method, and provide extensions to accommodate multiple covariates. Extensive simulations are carried out to validate the proposed formula in finite samples. We find that the empirical power agrees well with the prediction across a range of parameter constellations, when data are analyzed by a linear mixed effects model with a treatment-by-covariate interaction. Finally, we use data from the HF-ACTION study to illustrate the proposed sample size procedure for detecting heterogeneous treatment effects.

15.
The treatment effect sizes that can be detected with sufficient power up to the different interim analyses constitute a clinically meaningful criterion for the selection of a group sequential test for a clinical trial. For any pre-specified sequence of effect sizes, it is possible to construct group sequential boundaries such that the trial has a pre-specified power to reject the null hypothesis at or before the corresponding interim analysis under the respective treatment effect. The principle of constructing group sequential designs on the basis of detectable treatment effects is presented. The application in common situations such as two-armed trials with continuous or binary outcomes or censored survival times is described. We also present an effective algorithm.

16.
In clinical trials with time‐to‐event endpoints, it is not uncommon to see a significant proportion of patients being cured (or long‐term survivors), such as in trials for non‐Hodgkin lymphoma. The popularly used sample size formula derived under the proportional hazards (PH) model may not be appropriate for designing a survival trial with a cure fraction, because the PH assumption may be violated. To account for a cure fraction, the PH cure model is widely used in practice, where a PH model is used for the survival times of uncured patients and a logistic model is used for the probability of patients being cured. In this paper, we develop a sample size formula on the basis of the PH cure model by investigating the asymptotic distributions of the standard weighted log‐rank statistics under the null and local alternative hypotheses. The derived sample size formula under the PH cure model is more flexible because it can be used to test differences in the short‐term survival and/or the cure fraction. Furthermore, we also investigate, as numerical examples, the impacts of accrual methods and durations of accrual and follow‐up periods on the sample size calculation. The results show that ignoring the cure rate in the sample size calculation can lead to either underpowered or overpowered studies. We evaluate the performance of the proposed formula by simulation studies and provide an example to illustrate its application with data from a melanoma trial. Copyright © 2012 John Wiley & Sons, Ltd.
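The direction of the bias from ignoring the cure rate can be sketched with a back-of-the-envelope calculation: required events from the Schoenfeld formula, converted to patients by dividing by the probability of observing an event before administrative censoring. The exponential event times, fixed follow-up, and pooled cure fraction below are simplifying assumptions, not the paper's PH cure model:

```python
import math
from statistics import NormalDist

def patients_with_cure(hr, cure_frac, lam, tau, alpha=0.05, power=0.9):
    """Patients needed when a fraction cure_frac of patients never has the
    event: events from the Schoenfeld formula, divided by the probability
    that a randomly chosen patient has an event by administrative censoring
    at tau (exponential rate lam for uncured patients -- simplifying
    assumptions for illustration)."""
    z = NormalDist()
    events = 4 * (z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) ** 2 \
             / math.log(hr) ** 2
    p_event = (1 - cure_frac) * (1 - math.exp(-lam * tau))
    return math.ceil(events / p_event)

print(patients_with_cure(hr=0.7, cure_frac=0.0, lam=0.3, tau=5))  # no cure
print(patients_with_cure(hr=0.7, cure_frac=0.3, lam=0.3, tau=5))  # 30% cured
```

A 30% cure fraction here inflates the required accrual by roughly 1/(1−0.3), the kind of underpowering a PH-only calculation silently ignores.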

17.
This paper develops a new formula for sample size calculations for comparative clinical trials with Poisson or over-dispersed Poisson process data. The criterion for sample size calculation is developed on the basis of asymptotic approximations for a two-sample non-parametric test comparing the empirical event rate function between treatment groups. This formula can accommodate time heterogeneity, inter-patient heterogeneity in event rates, and time-varying treatment effects. An application of the formula to a trial for chronic granulomatous disease is provided.

18.
An adaptive treatment strategy (ATS) is an outcome‐guided algorithm that allows personalized treatment of complex diseases based on patients' disease status and treatment history. Conditions such as AIDS, depression, and cancer usually require several stages of treatment because of the chronic, multifactorial nature of illness progression and management. Sequential multiple assignment randomized (SMAR) designs permit simultaneous inference about multiple ATSs, where patients are sequentially randomized to treatments at different stages depending upon response status. The purpose of the article is to develop a sample size formula to ensure adequate power for comparing two or more ATSs. Based on a Wald‐type statistic for comparing multiple ATSs with a continuous endpoint, we develop a sample size formula and test it through simulation studies. We show via simulation that the proposed sample size formula maintains the nominal power. The proposed sample size formula is not applicable to designs with time‐to‐event endpoints but the formula will be useful for practitioners while designing SMAR trials to compare adaptive treatment strategies. Copyright © 2015 John Wiley & Sons, Ltd.

19.
The clinical trial design including a test treatment, an active control and a placebo is called the gold standard design. In this paper, we develop a statistical method for planning and evaluating non‐inferiority trials with gold standard design for right‐censored time‐to‐event data. We consider both lost to follow‐up and administrative censoring. We present a semiparametric approach that only assumes the proportionality of the hazard functions. In particular, we develop an algorithm for calculating the minimal total sample size and its optimal allocation to treatment groups such that a desired power can be attained for a specific parameter constellation under the alternative. For the purpose of sample size calculation, we assume the endpoints to be Weibull distributed. By means of simulations, we investigate the actual type I error rate, power and the accuracy of the calculated sample sizes. Finally, we compare our procedure with a previously proposed procedure assuming exponentially distributed event times. To illustrate our method, we consider a double‐blinded, randomized, active and placebo controlled trial in major depression. Copyright © 2013 John Wiley & Sons, Ltd.

20.
In clinical trials, the study sample size is often chosen to provide specific power at a single point of a treatment difference. When this treatment difference is not close to the true one, the actual power of the trial can deviate from the specified power. To address this issue, we consider obtaining a flexible sample size design that provides sufficient power and has close to the 'ideal' sample size over possible values of the true treatment difference within an interval. A performance score is proposed to assess the overall performance of these flexible sample size designs. Its application to the determination of the best solution among considered candidate sample size designs is discussed and illustrated through computer simulations.
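The underlying phenomenon is easy to reproduce for a two-sample z-test with known variance, where power(d) = Φ(d·√(n/2)/σ − z_{1−α/2}) for d > 0 (ignoring the negligible lower rejection tail). The sketch below evaluates a design powered at a single point across an interval of true differences; the design values are illustrative assumptions:

```python
import math
from statistics import NormalDist

def power_two_sample(d, n_per_arm, sigma=1.0, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test at true mean
    difference d (upper-tail term only, adequate for d > 0)."""
    z = NormalDist()
    za = z.inv_cdf(1 - alpha / 2)
    return z.cdf(d / (sigma * math.sqrt(2 / n_per_arm)) - za)

# A design powered at 90% for d = 0.5 can be badly under-powered when the
# true difference is smaller -- the issue the abstract addresses.
n = 85  # approx. per-arm n giving 90% power at d = 0.5, sigma = 1
for d in (0.3, 0.4, 0.5, 0.6):
    print(f"d={d:.1f}  power={power_two_sample(d, n):.3f}")
```

The design holds 90% power at the assumed d = 0.5 but collapses to roughly 50% at d = 0.3, which motivates scoring candidate designs over a whole interval as the abstract proposes.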


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号