20 related records found.
1.
We provide optimal treatment allocation schemes when the outcome variance varies across the treatment groups and our objectives are to estimate treatment effects with equal or unequal interest. Unlike other optimal designs, such as A-optimal designs, the proposed designs can be found without an iterative scheme. We evaluate robustness properties of the optimal designs to mis-specification in the expected variance from each group and identify situations when popular allocation schemes have poor efficiencies. An application to design a randomized rheumatoid arthritis trial is discussed, along with a potential application to design a cancer screening trial when the main outcome is a continuous variable. Copyright (c) 2008 John Wiley & Sons, Ltd.
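To make the closed-form idea concrete: when the goal is a single treatment-versus-control contrast, the classical Neyman allocation (group sizes proportional to outcome standard deviations) is the textbook non-iterative solution that designs of this kind generalize. A minimal Python sketch, not the authors' scheme; the function name and example values are illustrative:

```python
import numpy as np

def neyman_allocation(sigmas, n_total):
    """Classical Neyman allocation: assign subjects to groups in
    proportion to their outcome standard deviations, minimizing the
    variance of the estimated treatment contrast for a fixed total n."""
    sigmas = np.asarray(sigmas, dtype=float)
    weights = sigmas / sigmas.sum()
    n = np.floor(weights * n_total).astype(int)
    # Hand any rounding remainder to the groups with the largest
    # fractional parts so the sizes still sum to n_total.
    frac = weights * n_total - n
    for i in np.argsort(frac)[::-1][: n_total - n.sum()]:
        n[i] += 1
    return n

# Twice the standard deviation in arm 2 -> twice the patients.
print(neyman_allocation([1.0, 2.0], 120))  # [40 80]
```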
2.
Response‐adaptive treatment allocation for survival trials with clustered right‐censored data
A comparison of 2 treatments with survival outcomes in a clinical study may require treatment randomization on clusters of multiple units with correlated responses. For example, for patients with otitis media in both ears, a specific treatment is normally given to a single patient, and hence, the 2 ears constitute a cluster. Statistical procedures are available for comparison of treatment efficacies. The conventional approach for treatment allocation is the adoption of a balanced design, in which half of the patients are assigned to each treatment arm. However, considering the increasing acceptability of response‐adaptive designs in recent years because of their desirable features, we have developed a response‐adaptive treatment allocation scheme for survival trials with clustered data. The proposed treatment allocation scheme is superior to the balanced design in that it allows more patients to receive the better treatment. At the same time, the test power for comparing treatment efficacies using our treatment allocation scheme remains highly competitive. The advantage of the proposed randomization procedure is supported by a simulation study and the redesign of a clinical study.
3.
Guogen Shan, Gregory E. Wilding, Alan D. Hutson, Shawn Gerstenberger. Statistics in Medicine, 2016, 35(8):1257-1266
Simon's optimal two‐stage design has been widely used in early phase clinical trials for oncology and AIDS studies with binary endpoints. With this approach, the second‐stage sample size is fixed when the trial passes the first stage with sufficient activity. Adaptive designs, such as those due to Banerjee and Tsiatis (2006) and Englert and Kieser (2013), are flexible in the sense that the second‐stage sample size depends on the response from the first stage, and these designs are often seen to reduce the expected sample size under the null hypothesis as compared with Simon's approach. An unappealing trait of the existing designs is that the second‐stage sample size is not a non‐increasing function of the number of first‐stage responses. In this paper, an efficient search procedure, the branch‐and‐bound algorithm, is used to search extensively for the optimal adaptive design with the smallest expected sample size under the null, while the type I and II error rates are maintained and the aforementioned monotonicity property is respected. The proposed optimal design is observed to have smaller expected sample sizes compared to Simon's optimal design, and the maximum total sample size of the proposed adaptive design is very close to that from Simon's method. The proposed optimal adaptive two‐stage design is recommended for use in practice to improve the flexibility and efficiency of early phase therapeutic development. Copyright © 2015 John Wiley & Sons, Ltd.
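For orientation (this is not the paper's branch‐and‐bound search itself), every candidate design such a search evaluates reduces to binomial sums for the probability of early termination, the expected sample size, and the rejection probability. A Python sketch of those operating characteristics for a Simon‐type two‐stage design, using the design commonly tabulated for p0 = 0.05 versus p1 = 0.25 at alpha = beta = 0.10:

```python
from scipy.stats import binom

def two_stage_chars(n1, r1, n, r, p):
    """Simon-type two-stage design at true response rate p: stop for
    futility after n1 patients if at most r1 respond; otherwise accrue
    to n patients and reject H0 if more than r respond in total."""
    pet = binom.cdf(r1, n1, p)           # probability of early termination
    en = n1 + (1 - pet) * (n - n1)       # expected sample size
    # P(reject H0) = sum over first-stage counts x > r1 of
    # P(X1 = x) * P(X2 > r - x), with X2 ~ Binomial(n - n1, p).
    reject = sum(binom.pmf(x, n1, p) * binom.sf(r - x, n - n1, p)
                 for x in range(r1 + 1, n1 + 1))
    return pet, en, reject

# Simon's optimal design for p0 = 0.05 vs p1 = 0.25 (alpha = beta = 0.10)
# is commonly tabulated as r1/n1 = 0/9 and r/n = 2/17.
for p in (0.05, 0.25):
    pet, en, rej = two_stage_chars(9, 0, 17, 2, p)
    print(f"p = {p}: PET = {pet:.2f}, EN = {en:.1f}, P(reject) = {rej:.3f}")
```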
4.
Incorporating baseline measurements into the analysis of crossover trials with time‐to‐event endpoints
Two‐period two‐treatment (2×2) crossover designs are commonly used in clinical trials. For continuous endpoints, it has been shown that baseline (pretreatment) measurements collected before the start of each treatment period can be useful in improving the power of the analysis. Methods to achieve a corresponding gain for censored time‐to‐event endpoints have not been adequately studied. We propose a method in which censored values are treated as missing data and multiply imputed using prespecified parametric event time models. The event times in each imputed data set are then log‐transformed and analyzed using a linear model suitable for a 2×2 crossover design with continuous endpoints, with the difference in period‐specific baselines included as a covariate. Results obtained from the imputed data sets are synthesized for point and confidence interval estimation of the treatment ratio of geometric mean event times using model averaging in conjunction with Rubin's combination rule. We use simulations to illustrate the favorable operating characteristics of our method relative to two other methods for crossover trials with censored time‐to‐event data, i.e., a hierarchical rank test that ignores the baselines and a stratified Cox model that uses each study subject as a stratum and includes period‐specific baselines as a covariate. Application to a real data example is provided.
5.
Phase II clinical trials are often conducted to determine whether a new treatment is sufficiently promising to warrant a major controlled clinical evaluation against a standard therapy. We consider single‐arm phase II clinical trials with right censored survival time responses where the ordinary one‐sample logrank test is commonly used for testing the treatment efficacy. For planning such clinical trials, this paper presents two‐stage designs that are optimal in the sense that the expected sample size is minimized if the new regimen has low efficacy subject to constraints of the type I and type II errors. Two‐stage designs, which minimize the maximal sample size, are also determined. Optimal and minimax designs for a range of design parameters are tabulated along with examples. Copyright © 2013 John Wiley & Sons, Ltd.
6.
In the context of observational longitudinal studies, we explored the values of the number of participants and the number of repeated measurements that maximize the power to detect the hypothesized effect, given the total cost of the study. We considered two different models, one that assumes a transient effect of exposure and one that assumes a cumulative effect. Results were derived for a continuous response variable, whose covariance structure was assumed to be damped exponential, and a binary time‐varying exposure. Under certain assumptions, we derived simple formulas for the approximate solution to the problem in the particular case in which the response covariance structure is assumed to be compound symmetry. Results showed the importance of the exposure intraclass correlation in determining the optimal combination of the number of participants and the number of repeated measurements, and therefore the optimized power. Thus, incorrectly assuming a time‐invariant exposure leads to inefficient designs. We also analyzed the sensitivity of the results to dropout, to mis‐specification of the response correlation structure, to a time‐varying exposure prevalence, and to potential confounding. We illustrated some of these results in a real study. In addition, we provide software to perform all the calculations required to explore the combination of the number of participants and the number of repeated measurements. Copyright © 2013 John Wiley & Sons, Ltd.
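A classical special case conveys the trade-off. Under compound symmetry with a time-invariant exposure (the simplification whose misuse the abstract warns about), the variance of the exposure effect is proportional to (1 + (m - 1)ρ)/(Nm) and the budget is C = N(c1 + c2·m), which yields a closed-form optimal number of repeated measurements. A sketch under those assumptions; the cost figures are illustrative:

```python
import math

def optimal_repeats(c1, c2, rho):
    """Compound symmetry, time-invariant exposure: minimizing
    (1 + (m - 1) * rho) / (N * m) subject to the budget constraint
    C = N * (c1 + c2 * m) gives m* = sqrt(c1 * (1 - rho) / (c2 * rho))."""
    return math.sqrt(c1 * (1 - rho) / (c2 * rho))

# Recruiting one participant costs 10x one repeated measurement:
for rho in (0.2, 0.5, 0.8):
    print(f"rho = {rho}: optimal m ~ {optimal_repeats(10.0, 1.0, rho):.1f}")
```

The stronger the within-subject correlation, the fewer repeats are worthwhile, consistent with the abstract's point that the intraclass correlation drives the optimal combination.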
7.
Bayesian methods for setting sample sizes and choosing allocation ratios in phase II clinical trials with time‐to‐event endpoints
Conventional phase II trials using binary endpoints as early indicators of a time‐to‐event outcome are not always feasible. Uveal melanoma has no reliable intermediate marker of efficacy. In pancreatic cancer and viral clearance, the time to the event of interest is short, making an early indicator unnecessary. In the latter application, Weibull models have been used to analyse corresponding time‐to‐event data. Bayesian sample size calculations are presented for single‐arm and randomised phase II trials assuming proportional hazards models for time‐to‐event endpoints. Special consideration is given to the case where survival times follow the Weibull distribution. The proposed methods are demonstrated through an illustrative trial based on uveal melanoma patient data. A procedure for prior specification based on knowledge or predictions of survival patterns is described. This enables investigation into the choice of allocation ratio in the randomised setting to assess whether a control arm is indeed required. The Bayesian framework enables sample sizes consistent with those used in practice to be obtained. When a confirmatory phase III trial will follow if suitable evidence of efficacy is identified, Bayesian approaches are less controversial than for definitive trials. In the randomised setting, a compromise for obtaining feasible sample sizes is a loss in certainty in the specified hypotheses: the Bayesian counterpart of power. However, this approach may still be preferable to running a single‐arm trial where no data are collected on the control treatment. This dilemma is present in most phase II trials, where resources are not sufficient to conduct a definitive trial. Copyright © 2015 John Wiley & Sons, Ltd.
8.
John Whitehead. Statistics in Medicine, 2014, 33(22):3830-3843
This work is motivated by trials in rapidly lethal cancers or cancers for which measuring shrinkage of tumours is infeasible. In either case, traditional phase II designs focussing on tumour response are unsuitable. Usually, tumour response is considered as a substitute for the more relevant but longer‐term endpoint of death. In rapidly lethal cancers such as pancreatic cancer, there is no need to use a surrogate, as the definitive endpoint is (sadly) available so soon. In uveal cancer, there is no counterpart to tumour response, and so, mortality is the only realistic response available. Cytostatic cancer treatments do not seek to kill tumours, but to mitigate their effects. Trials of such therapy might also be based on survival times to death or progression, rather than on tumour shrinkage. Phase II oncology trials are often conducted with all study patients receiving the experimental therapy, and this approach is considered here. Simple extensions of one‐stage and two‐stage designs based on binary responses are presented. Outcomes based on survival past a small number of landmark times are considered: here, the case of three such times is explored in examples. This approach allows exact calculations to be made for both design and analysis purposes. Simulations presented here show that calculations based on normal approximations can lead to loss of power when sample sizes are small. Two‐stage versions of the procedure are also suggested. Copyright © 2014 John Wiley & Sons, Ltd.
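As a sketch of the binary building block (one landmark time only, whereas the paper combines three and adds two-stage versions), 'alive at the landmark' is a binomial response and the exact single-stage test is a binomial tail calculation. The design values below are illustrative:

```python
from scipy.stats import binom

def exact_single_stage(n, p0, p1, alpha):
    """Single-arm design with a binary 'alive at the landmark' response:
    find the smallest cutoff c with P(X >= c | p0) <= alpha for
    X ~ Binomial(n, p0), and report the exact size and the exact power
    at the alternative survival probability p1."""
    for c in range(n + 1):
        size = binom.sf(c - 1, n, p0)   # P(X >= c) under H0
        if size <= alpha:
            power = binom.sf(c - 1, n, p1)
            return c, size, power
    return None

print(exact_single_stage(n=40, p0=0.20, p1=0.40, alpha=0.05))
```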
9.
A multiple‐objective allocation strategy was recently proposed for constructing response‐adaptive repeated measurement designs for continuous responses. We extend the allocation strategy to constructing response‐adaptive repeated measurement designs for binary responses. The approach with binary responses is quite different from the continuous case, as the information matrix is a function of responses, and it involves nonlinear modeling. To deal with these problems, we first build the design on the basis of success probabilities. Then we illustrate how various models can accommodate carryover effects on the basis of logits of response profiles as well as any correlation structure. Through computer simulations, we find that the allocation strategy developed for continuous responses also works well for binary responses. As expected, design efficiency in terms of mean squared error drops sharply, as more emphasis is placed on increasing treatment benefit than estimation precision. However, we find that it can successfully allocate more patients to better treatment sequences without sacrificing much estimation precision. Copyright © 2013 John Wiley & Sons, Ltd.
10.
Simultaneous small‐sample comparisons in longitudinal or multi‐endpoint trials using multiple marginal models
Simultaneous inference in longitudinal, repeated‐measures, and multi‐endpoint designs can be onerous, especially when trying to find a reasonable joint model from which the interesting effects and covariances are estimated. A novel statistical approach known as multiple marginal models greatly simplifies the modelling process: the core idea is to “marginalise” the problem and fit multiple small models to different portions of the data, and then estimate the overall covariance matrix in a subsequent, separate step. Using these estimates guarantees strong control of the family‐wise error rate, albeit only asymptotically. In this paper, we show how to make the approach also applicable to small‐sample data problems. Specifically, we discuss the computation of adjusted P values and simultaneous confidence bounds for comparisons of randomised treatment groups as well as for levels of a nonrandomised factor such as multiple endpoints, repeated measures, or a series of points in time or space. We illustrate the practical use of the method with a data example.
11.
The clinical trial design including a test treatment, an active control and a placebo is called the gold standard design. In this paper, we develop a statistical method for planning and evaluating non‐inferiority trials with gold standard design for right‐censored time‐to‐event data. We consider both lost to follow‐up and administrative censoring. We present a semiparametric approach that only assumes the proportionality of the hazard functions. In particular, we develop an algorithm for calculating the minimal total sample size and its optimal allocation to treatment groups such that a desired power can be attained for a specific parameter constellation under the alternative. For the purpose of sample size calculation, we assume the endpoints to be Weibull distributed. By means of simulations, we investigate the actual type I error rate, power and the accuracy of the calculated sample sizes. Finally, we compare our procedure with a previously proposed procedure assuming exponentially distributed event times. To illustrate our method, we consider a double‐blinded, randomized, active and placebo controlled trial in major depression. Copyright © 2013 John Wiley & Sons, Ltd.
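Not the paper's three-arm algorithm, but the familiar two-arm building block under proportional hazards is Schoenfeld's formula for the required number of events, which shows how the log hazard ratio and the allocation proportion enter the calculation. A sketch with illustrative design values:

```python
import math
from scipy.stats import norm

def schoenfeld_events(hazard_ratio, alloc=0.5, alpha=0.025, power=0.9):
    """Schoenfeld's formula for a two-arm logrank test under proportional
    hazards: required events
    d = (z_{1-alpha} + z_{power})**2 / (alloc * (1 - alloc) * ln(HR)**2),
    where alloc is the proportion allocated to the experimental arm."""
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    return z ** 2 / (alloc * (1 - alloc) * math.log(hazard_ratio) ** 2)

# HR = 0.7, 1:1 allocation, one-sided 2.5%, 90% power -> about 330 events.
print(round(schoenfeld_events(0.7)))
```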
12.
Composite endpoints combine several events of interest within a single variable. These are often time‐to‐first‐event data, which are analyzed via survival analysis techniques. To demonstrate the significance of an overall clinical benefit, it is sufficient to assess the test problem formulated for the composite. However, the effect observed for the composite does not necessarily reflect the effects for the components. Therefore, it would be desirable that the sample size for clinical trials using composite endpoints provides enough power not only to detect a clinically relevant superiority for the composite but also to address the components in an adequate way. The single components of a composite endpoint assessed as time‐to‐first‐event define competing risks. We consider multiple test problems based on the cause‐specific hazards of competing events to address the problem of analyzing both a composite endpoint and its components. Thereby, we use sequentially rejective test procedures to reduce the power loss to a minimum. We show how to calculate the sample size for the given multiple test problem by using a simply applicable simulation tool in SAS. Our ideas are illustrated by two clinical study examples. Copyright © 2013 John Wiley & Sons, Ltd.
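The kind of simulation the abstract describes can be sketched outside SAS as well. With constant cause-specific hazards, the time to first event among two competing components is the minimum of two latent exponential times, and the probability that a given component occurs first is its share of the total hazard. A hedged Python analogue, with illustrative hazards:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def simulate_composite(n, hazard_a, hazard_b):
    """Two competing components with constant cause-specific hazards:
    the composite time-to-first-event is the minimum of the latent
    times, and the cause is whichever component fires first."""
    t_a = rng.exponential(1 / hazard_a, n)
    t_b = rng.exponential(1 / hazard_b, n)
    time = np.minimum(t_a, t_b)
    cause = np.where(t_a <= t_b, "A", "B")
    return time, cause

time, cause = simulate_composite(100_000, hazard_a=0.05, hazard_b=0.10)
print((cause == "A").mean())  # ~ hazard_a / (hazard_a + hazard_b) = 1/3
```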
13.
Cyrus Mehta, Helmut Schäfer, Hanna Daniel, Sebastian Irle. Statistics in Medicine, 2014, 33(26):4515-4531
The development of molecularly targeted therapies for certain types of cancers has led to the consideration of population enrichment designs that explicitly factor in the possibility that the experimental compound might differentially benefit different biomarker subgroups. In such designs, enrollment would initially be open to a broad patient population with the option to restrict future enrollment, following an interim analysis, to only those biomarker subgroups that appeared to be benefiting from the experimental therapy. While this strategy could greatly improve the chances of success for the trial, it poses several statistical and logistical design challenges. Because late‐stage oncology trials are typically event driven, one faces a complex trade‐off between power, sample size, number of events, and study duration. This trade‐off is further compounded by the importance of maintaining statistical independence of the data before and after the interim analysis and of optimizing the timing of the interim analysis. This paper presents statistical methodology that ensures strong control of type I error for such population enrichment designs, based on generalizations of the conditional error rate approach. The special difficulties encountered with time‐to‐event endpoints are addressed by our methods. The crucial role of simulation for guiding the choice of design parameters is emphasized. Although motivated by oncology, the methods are applicable as well to population enrichment designs in other therapeutic areas. Copyright © 2014 John Wiley & Sons, Ltd.
14.
A class of covariate‐adjusted adaptive allocation procedures is developed for a general class of responses with an aim to satisfy relevant clinical requirements. Some exact and asymptotic properties of the proposed procedures and of a reasonable competitor are studied and compared in the presence of treatment‐covariate interaction. Copyright © 2013 John Wiley & Sons, Ltd.
15.
In the presence of non‐compliance, conventional analysis by intention‐to‐treat provides an unbiased comparison of treatment policies but typically under‐estimates treatment efficacy. With all‐or‐nothing compliance, efficacy may be specified as the complier‐average causal effect (CACE), where compliers are those who receive intervention if and only if randomised to it. We extend the CACE approach to model longitudinal data with time‐dependent non‐compliance, focusing on the situation in which those randomised to control may receive treatment and allowing treatment effects to vary arbitrarily over time. Defining compliance type to be the time of surgical intervention if randomised to control, so that compliers are patients who would not have received treatment at all if they had been randomised to control, we construct a causal model for the multivariate outcome conditional on compliance type and randomised arm. This model is applied to the trial of alternative regimens for glue ear treatment evaluating surgical interventions in childhood ear disease, where outcomes are measured over five time points, and receipt of surgical intervention in the control arm may occur at any time. We fit the models using Markov chain Monte Carlo methods to obtain estimates of the CACE at successive times after receiving the intervention. In this trial, over half of those randomised to control eventually receive intervention. We find that surgery is more beneficial than control at 6 months, with a small but non‐significant beneficial effect at 12 months. © 2015 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
16.
17.
James E. Barrett. Statistics in Medicine, 2017, 36(18):2803-2813
Selective recruitment designs preferentially recruit individuals who are estimated to be statistically informative onto a clinical trial. Individuals who are expected to contribute less information have a lower probability of recruitment. Furthermore, in an information‐adaptive design, recruits are allocated to treatment arms in a manner that maximises information gain. The informativeness of an individual depends on their covariate (or biomarker) values, and how information is defined is a critical element of information‐adaptive designs. In this paper, we define and evaluate four different methods for quantifying statistical information. Using both experimental data and numerical simulations, we show that selective recruitment designs can offer a substantial increase in statistical power compared with randomised designs. In trials without selective recruitment, we find that allocating individuals to treatment arms according to information‐adaptive protocols also leads to an increase in statistical power. Consequently, selective recruitment designs can potentially achieve successful trials using fewer recruits thereby offering economic and ethical advantages. Copyright © 2017 John Wiley & Sons, Ltd.
18.
Jing Zhou, Adeniyi Adewale, Yue Shentu, Jiajun Liu, Keaven Anderson. Statistics in Medicine, 2014, 33(22):3801-3814
Group sequential design has become more popular in clinical trials because it allows for trials to stop early for futility or efficacy to save time and resources. However, this approach is less well‐known for longitudinal analysis. We have observed repeated cases of studies with longitudinal data where there is an interest in early stopping for a lack of treatment effect or in adapting sample size to correct for inappropriate variance assumptions. We propose an information‐based group sequential design as a method to deal with both of these issues. Updating the sample size at each interim analysis makes it possible to maintain the target power while controlling the type I error rate. We will illustrate our strategy with examples and simulations and compare the results with those obtained using fixed design and group sequential design without sample size re‐estimation. Copyright © 2014 John Wiley & Sons, Ltd.
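The bookkeeping behind an information-based design (a sketch of the general idea, not the authors' exact procedure) is to fix a target information level for the planned test and rescale the sample size whenever the interim information falls short, assuming information accrues roughly in proportion to the number of subjects:

```python
from scipy.stats import norm

def target_information(delta, alpha=0.025, power=0.9):
    """Information (inverse variance of the effect estimate) a fixed
    design needs so that a one-sided level-alpha Wald test detects the
    effect delta with the given power:
    I_max = ((z_{1-alpha} + z_{power}) / delta) ** 2."""
    return ((norm.ppf(1 - alpha) + norm.ppf(power)) / delta) ** 2

def reestimate_n(n_interim, info_interim, delta, alpha=0.025, power=0.9):
    """If information is roughly proportional to sample size, rescale the
    interim N by the ratio of target information to observed information."""
    return n_interim * target_information(delta, alpha, power) / info_interim

# An interim variance larger than planned means less information per
# subject, so the re-estimated sample size grows to keep the target power.
print(round(reestimate_n(n_interim=100, info_interim=25.0, delta=0.5)))
```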
19.
Maria Laura Rubin, Wenyaw Chan, Jose‐Miguel Yamal, Claudia Sue Robertson. Statistics in Medicine, 2017, 36(28):4570-4582
The use of longitudinal measurements to predict a categorical outcome is an increasingly common goal in research studies. Joint models are commonly used to describe two or more models simultaneously by considering the correlated nature of their outcomes and the random error present in the longitudinal measurements. However, there is limited research on joint models with longitudinal predictors and categorical cross‐sectional outcomes. Perhaps the most challenging task is how to model the longitudinal predictor process such that it represents the true biological mechanism that dictates the association with the categorical response. We propose a joint logistic regression and Markov chain model to describe a binary cross‐sectional response, where the unobserved transition rates of a two‐state continuous‐time Markov chain are included as covariates. We use the method of maximum likelihood to estimate the parameters of our model. In a simulation study, coverage probabilities of about 95%, standard deviations close to standard errors, and low biases for the parameter values show that our estimation method is adequate. We apply the proposed joint model to a dataset of patients with traumatic brain injury to describe and predict a 6‐month outcome based on physiological data collected post‐injury and admission characteristics. Our analysis indicates that the information provided by physiological changes over time may help improve prediction of long‐term functional status of these severely ill subjects. Copyright © 2017 John Wiley & Sons, Ltd.
20.
This paper discusses survival analysis based on updated covariates with focus on proportional hazard regression in situations where some disease states may be vaguely defined. Analyses of a trial in liver cirrhosis are used to motivate the discussion. We use problems caused by inclusion of recordings from unscheduled follow-ups to illustrate the importance of appropriate coding of covariates and describe how such problems may be approached using appropriately 'lagged' covariates. The choice of time origin is discussed with emphasis on situations where disease initiation is difficult to define. Simulations are used to assess the effect of an erroneously specified time origin. It is argued that age or calendar time may frequently be sensible time variables.
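One concrete way to implement 'lagged' covariates (a sketch, not the paper's own code) is to build counting-process (start, stop] rows in which a value recorded at time t enters the hazard only from t + lag onward, so a recording from an unscheduled, symptom-driven visit cannot anticipate the event it precedes:

```python
import pandas as pd

def lagged_intervals(times, values, followup_end, lag):
    """Counting-process rows for one subject: the covariate recorded at
    time t applies on (t + lag, next recording time + lag], which keeps
    recordings from unscheduled follow-ups from looking ahead."""
    starts = [t + lag for t in times]
    rows = []
    for i, (start, value) in enumerate(zip(starts, values)):
        stop = starts[i + 1] if i + 1 < len(starts) else followup_end
        if start < stop:
            rows.append({"start": start, "stop": stop, "covariate": value})
    return pd.DataFrame(rows)

# Recordings at months 0, 3, and 7; follow-up ends at month 12; 1-month lag.
print(lagged_intervals([0, 3, 7], [1.2, 2.5, 1.8], followup_end=12, lag=1))
```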