Similar Articles (20 records found)
1.
This paper considers the sequential monitoring of multi‐armed longitudinal clinical trials. We describe an approach that is relatively simple and accessible. Sequential ranks are used to form partial sum statistics, yielding processes that have independent increments and hence can be approximated by Brownian motions. Three monitoring procedures are proposed. The first two are asymptotic, continuous analogues of the well‐known Pocock and O'Brien–Fleming group sequential procedures, whereas the third procedure is exact. Performance of the procedures is assessed using Monte Carlo simulations. Data from an orthodontic clinical trial are used to illustrate the proposed methods for the comparison of three treatment groups. Copyright © 2010 John Wiley & Sons, Ltd.
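As a rough illustration of the Brownian-motion approximation behind such boundaries, the sketch below calibrates Pocock-type and O'Brien–Fleming-type critical values by Monte Carlo for a process with independent increments monitored at equally spaced looks. It is not the sequential-rank construction of the paper; the number of looks, significance level, and simulation size are illustrative assumptions.

```python
# Monte Carlo calibration of Pocock- and O'Brien-Fleming-type boundaries for a process
# with independent increments monitored at K equally spaced looks (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)
alpha, K, n_sim = 0.05, 5, 200_000
t = np.arange(1, K + 1) / K                      # information fractions at the looks

# Simulate standard Brownian motion at the look times: B(t_k) = cumulative N(0, 1/K) increments.
increments = rng.normal(0.0, np.sqrt(1.0 / K), size=(n_sim, K))
B = np.cumsum(increments, axis=1)
Z = B / np.sqrt(t)                               # standardized statistics at each look

# Pocock: constant critical value on the z-scale -> calibrate via max_k |Z_k|.
c_pocock = np.quantile(np.abs(Z).max(axis=1), 1 - alpha)

# O'Brien-Fleming: boundary c / sqrt(t_k) on the z-scale, i.e. a constant bound on the
# partial-sum (Brownian) scale -> calibrate via max_k |B(t_k)|.
c_obf = np.quantile(np.abs(B).max(axis=1), 1 - alpha)

print(f"Pocock z-boundary: {c_pocock:.3f} at every look")
print("O'Brien-Fleming z-boundaries:", np.round(c_obf / np.sqrt(t), 3))
```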

2.
Most statistical methodology for phase III clinical trials focuses on the comparison of a single experimental treatment with a control. An increasing desire to reduce the time before regulatory approval of a new drug is sought has led to the development of two-stage or sequential designs for trials that combine the definitive analysis associated with phase III with the treatment selection element of a phase II study. In this paper we consider a trial in which the most promising of a number of experimental treatments is selected at the first interim analysis. This considerably reduces the computational load associated with the construction of stopping boundaries compared to the approach proposed by Follmann, Proschan and Geller (Biometrics 1994; 50: 325-336). The computational requirement does not exceed that for the sequential comparison of a single experimental treatment with a control. Existing methods are extended in two ways. First, the use of the efficient score as a test statistic makes the analysis of binary, normal or failure-time data, as well as adjustment for covariates or stratification, straightforward. Second, the question of trial power is also considered, enabling determination of the sample size required to give specified power.

3.
Fleming TR. Statistics in Medicine 2006; 25(19): 3305-3312; discussion 3313-3314, 3326-3347.
In the standard approach to designing definitive clinical trials, the primary endpoint and test statistic to be used for the primary analysis are specified before trial initiation. The false positive error rate for the null hypothesis and statistical power to detect the targeted size of treatment effect are also specified. Standard monitoring procedures, such as the group sequential guidelines, enable interim monitoring while maintaining the integrity of this approach. In contrast, adaptive monitoring procedures seek to provide flexibility to modify these pre-specified design features during the course of the trial. However, these procedures have several undesirable properties, including lesser statistical efficiency, reduced interpretability of primary outcome results, basing design changes on unreliable interim estimates of efficacy, risks to the integrity and credibility of the trial, loss of flexibility to use emerging results from external sources to alter key design features, and overemphasis of the importance of statistical significance relative to clinical significance.

4.
In clinical trials with t-distributed test statistics the required sample size depends on the unknown variance. Taking estimates from previous studies often leads to a misspecification of the true value of the variance. Hence, re-estimation of the variance based on the collected data and re-calculation of the required sample size is attractive. We present a flexible method for extensions of fixed sample or group-sequential trials with t-distributed test statistics. The method can be applied at any time during the course of the trial and does not require pre-specification of a sample size re-calculation rule. All available information can be used to determine the new sample size. The advantage of our method when compared with other adaptive methods is that it maintains the efficient t-test design when no extensions are actually made. We show that the type I error rate is preserved.
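A minimal sketch of the underlying idea, using the normal-approximation sample size formula for a two-sample comparison and re-evaluating it with an interim variance estimate. The effect size, error rates, and interim data below are assumptions, and the small-sample t correction discussed in the paper is ignored.

```python
# Recalculating the per-group sample size from an interim variance estimate (sketch only).
import numpy as np
from scipy.stats import norm

def n_per_group(sigma, delta, alpha=0.05, power=0.90):
    """Approximate per-group sample size for a two-sided two-sample comparison of means."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return int(np.ceil(2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2))

planned_sigma, delta = 1.0, 0.5
print("planned n/group:", n_per_group(planned_sigma, delta))

# Interim data suggest the variance was misspecified; re-estimate and recalculate.
rng = np.random.default_rng(7)
interim = rng.normal(0.0, 1.3, size=40)          # pooled residuals at the interim look (hypothetical)
sigma_hat = interim.std(ddof=1)
print("interim sigma-hat:", round(sigma_hat, 2),
      "-> re-estimated n/group:", n_per_group(sigma_hat, delta))
```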

5.
Lai TL, Shih MC, Zhu G. Statistics in Medicine 2006; 25(7): 1149-1167.
In designing an active controlled clinical trial, one sometimes has to choose between a superiority objective (to demonstrate that a new treatment is more effective than an active control therapy) and a non-inferiority objective (to demonstrate that it is no worse than the active control within some pre-specified non-inferiority margin). It is often difficult to decide which study objective should be undertaken at the planning stage when one does not have actual data on the comparative advantage of the new treatment. By making use of recent advances in the theory of efficient group sequential tests, we show how this difficulty can be resolved by a flexible group sequential design that can adaptively choose between the superiority and non-inferiority objectives during interim analyses. While maintaining the type I error probability at a pre-specified level, the proposed test is shown to have power advantage and/or sample size saving over fixed sample size tests for either only superiority or non-inferiority, and over other group sequential designs in the literature.

6.
In this paper, we develop a sequential procedure to monitor clinical trials against historical controls. When there is a strong ethical concern about randomizing patients to existing treatment because biological and medical evidence suggests that the new treatment is potentially superior to the existing one, or when the enrollment is too limited for randomization of subjects into experimental and control groups, one can monitor the trial sequentially against historical controls if the historical data with required quality and sample size are available to form a valid reference for the trial. This design of trial is sometimes the only alternative to a randomized phase III trial design that is intended but not feasible in situations such as above. Monitoring this type of clinical trial leads to a statistical problem of comparing two population means in a situation in which data from one population are sequentially collected and compared with all data from the other population at each interim look. The proposed sequential procedure is based on the sequential conditional probability ratio test (SCPRT), by which the conclusion of the sequential test would be virtually the same as that arrived at by a non-sequential test based on all data at the planned end of the trial. We develop the sequential procedure by proposing a Brownian motion that emulates the test statistic, and then proposing an SCPRT that is adapted to the special properties of the trial.
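The sketch below monitors an accruing experimental arm against a fixed historical-control mean on the Brownian-motion scale, as a stand-in for the emulation step described above. It computes conditional power under the current trend rather than the SCPRT boundaries themselves; the historical summary, planned sample size, and interim data are assumptions.

```python
# Interim monitoring of an accruing experimental arm against a fixed historical control,
# on the Brownian-motion scale; conditional power under the current trend (sketch only).
import numpy as np
from scipy.stats import norm

alpha = 0.025                                    # one-sided final test
z_final = norm.ppf(1 - alpha)

# Historical control summary (assumed known from the reference data set).
mu_hist, sigma, n_max = 10.0, 4.0, 100           # planned maximum experimental sample size

rng = np.random.default_rng(3)
experimental = rng.normal(11.5, sigma, size=60)  # data accrued so far (hypothetical)

n = len(experimental)
t = n / n_max                                    # information fraction
z_now = (experimental.mean() - mu_hist) / (sigma / np.sqrt(n))
b_now = z_now * np.sqrt(t)                       # value of the Brownian-motion process at t

# Conditional power: probability the final z-statistic exceeds z_final,
# assuming the current drift (b_now / t) continues to the end of the trial.
drift = b_now / t
cp = 1 - norm.cdf((z_final - b_now - drift * (1 - t)) / np.sqrt(1 - t))
print(f"information fraction {t:.2f}, interim z {z_now:.2f}, conditional power {cp:.3f}")
```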

7.
When several experimental treatments are available for testing, multi‐arm trials provide gains in efficiency over separate trials. Including interim analyses allows the investigator to effectively use the data gathered during the trial. Bayesian adaptive randomization (AR) and multi‐arm multi‐stage (MAMS) designs are two distinct methods that use patient outcomes to improve the efficiency and ethics of the trial. AR allocates a greater proportion of future patients to treatments that have performed well; MAMS designs use pre‐specified stopping boundaries to determine whether experimental treatments should be dropped. There is little consensus on which method is more suitable for clinical trials, and so in this paper, we compare the two under several simulation scenarios and in the context of a real multi‐arm phase II breast cancer trial. We compare the methods in terms of their efficiency and ethical properties. We also consider the practical problem of a delay between recruitment of patients and assessment of their treatment response. Both methods are more efficient and ethical than a multi‐arm trial without interim analyses. Delay between recruitment and response assessment attenuates this efficiency gain. We also consider futility stopping rules for response adaptive trials that add efficiency when all treatments are ineffective. Our comparisons show that AR is more efficient than MAMS designs when there is an effective experimental treatment, whereas if none of the experimental treatments is effective, then MAMS designs slightly outperform AR. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.
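A minimal sketch of the Bayesian adaptive randomization step for binary outcomes: independent Beta posteriors per arm, with allocation probabilities proportional to a tempered posterior probability that each arm is best. The counts, prior, and tempering exponent are illustrative assumptions, not the breast cancer trial's specification.

```python
# Bayesian adaptive randomization with Beta posteriors (illustrative sketch).
import numpy as np

rng = np.random.default_rng(11)
successes = np.array([8, 12, 15])                # observed responses per arm (hypothetical)
failures  = np.array([22, 18, 15])
gamma = 0.5                                      # tempering exponent; 0 = equal, 1 = untempered

# Posterior draws under independent Beta(1, 1) priors.
draws = rng.beta(1 + successes[:, None], 1 + failures[:, None], size=(3, 50_000))
p_best = np.bincount(draws.argmax(axis=0), minlength=3) / draws.shape[1]

alloc = p_best ** gamma
alloc /= alloc.sum()
print("P(arm is best):", np.round(p_best, 3))
print("next-patient allocation probabilities:", np.round(alloc, 3))
```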

8.
9.
The treatment effect sizes that can be detected with sufficient power up to the different interim analyses constitute a clinically meaningful criterion for the selection of a group sequential test for a clinical trial. For any pre-specified sequence of effect sizes, it is possible to construct group sequential boundaries such that the trial has a pre-specified power to reject the null hypothesis at or before the corresponding interim analysis under the respective treatment effect. The principle of constructing group sequential designs on the basis of detectable treatment effects is presented. The application in common situations such as two-armed trials with continuous or binary outcome or censored survival times is described. We also present an effective algorithm.
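The following Monte Carlo sketch illustrates the principle for an upper one-sided boundary: given a decreasing sequence of drifts (detectable effects on a standardized scale), the boundary at each look is chosen so that the cumulative rejection probability under the corresponding drift equals the target power. The drifts, look times, and power are assumptions, and the sketch makes no attempt to verify the resulting overall type I error.

```python
# Boundaries chosen so that, under drift theta_k, the trial rejects at or before look k
# with the target power (Monte Carlo sketch of the principle only).
import numpy as np

rng = np.random.default_rng(5)
t = np.array([0.25, 0.50, 0.75, 1.00])           # information fractions at the looks
theta = np.array([10.5, 6.0, 4.5, 3.8])          # detectable drift at each look (illustrative)
power, n_sim = 0.90, 200_000

def z_paths(drift):
    """Simulate z-statistics Z_k = B(t_k)/sqrt(t_k) for Brownian motion with the given drift."""
    inc = rng.normal(0, np.sqrt(np.diff(t, prepend=0.0)), size=(n_sim, len(t)))
    B = np.cumsum(inc, axis=1) + drift * t
    return B / np.sqrt(t)

bounds = []
for k, th in enumerate(theta):
    Z = z_paths(th)
    alive = np.ones(n_sim, dtype=bool)           # paths not yet stopped under drift theta_k
    for j, c in enumerate(bounds):
        alive &= Z[:, j] < c
    # Choose c_k so that the cumulative rejection probability by look k equals `power`.
    needed = power - (1 - alive.mean())          # additional crossing probability required
    c_k = np.quantile(Z[alive, k], 1 - needed / alive.mean())
    bounds.append(c_k)

print("boundaries on the z-scale:", np.round(bounds, 3))
```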

10.
Sequential analysis is frequently employed to address ethical and financial issues in clinical trials. Sequential analysis may be performed using standard group sequential designs, or, more recently, with adaptive designs that use estimates of treatment effect to modify the maximal statistical information to be collected. In the general setting in which statistical information and clinical trial costs are functions of the number of subjects used, it has yet to be established whether there is any major efficiency advantage to adaptive designs over traditional group sequential designs. In survival analysis, however, statistical information (and hence efficiency) is most closely related to the observed number of events, while trial costs still depend on the number of patients accrued. As the number of subjects may dominate the cost of a trial, an adaptive design that specifies a reduced maximal possible sample size when an extreme treatment effect has been observed may allow early termination of accrual and therefore a more cost-efficient trial. We investigate and compare the tradeoffs between efficiency (as measured by average number of observed events required), power, and cost (a function of the number of subjects accrued and length of observation) for standard group sequential methods and an adaptive design that allows for early termination of accrual. We find that when certain trial design parameters are constrained, an adaptive approach to terminating subject accrual may improve upon the cost efficiency of a group sequential clinical trial investigating time-to-event endpoints. However, when the spectrum of group sequential designs considered is broadened, the advantage of the adaptive designs is less clear.

11.
Group sequential designs are widely used in clinical trials to determine whether a trial should be terminated early. In such trials, maximum likelihood estimates are often used to describe the difference in efficacy between the experimental and reference treatments; however, these are well known for displaying conditional and unconditional biases. Established bias‐adjusted estimators include the conditional mean‐adjusted estimator (CMAE), conditional median unbiased estimator, conditional uniformly minimum variance unbiased estimator (CUMVUE), and weighted estimator. However, their performances have been inadequately investigated. In this study, we review the characteristics of these bias‐adjusted estimators and compare their conditional bias, overall bias, and conditional mean‐squared errors in clinical trials with survival endpoints through simulation studies. The coverage probabilities of the confidence intervals for the four estimators are also evaluated. We find that the CMAE reduced conditional bias and showed relatively small conditional mean‐squared errors when the trials terminated at the interim analysis. The conditional coverage probability of the conditional median unbiased estimator was well below the nominal value. In trials that did not terminate early, the CUMVUE showed less bias and a more acceptable conditional coverage probability than the other estimators. In conclusion, when planning an interim analysis, we recommend using the CUMVUE for trials that do not terminate early and the CMAE for those that terminate early. Copyright © 2017 John Wiley & Sons, Ltd.
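To see why such adjustments are needed, the simulation below illustrates the conditional bias of the maximum likelihood estimate when a two-look trial stops at the interim analysis. It uses a normal-outcome surrogate with illustrative design constants and does not reproduce the bias-adjusted estimators compared in the paper.

```python
# Conditional bias of the MLE given early stopping in a two-look design (illustrative sketch).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(21)
theta_true = 0.3                                 # true standardized effect
n_interim, n_final = 100, 200
c_interim = norm.ppf(0.999)                      # O'Brien-Fleming-like early boundary (illustrative)

n_sim = 100_000
x = rng.normal(theta_true, 1.0, size=(n_sim, n_final))
mean_interim = x[:, :n_interim].mean(axis=1)
z_interim = mean_interim * np.sqrt(n_interim)

stopped = z_interim >= c_interim
mle_stopped = mean_interim[stopped]              # MLE reported when the trial stops early
print(f"fraction stopping early: {stopped.mean():.3f}")
print(f"true effect {theta_true}, mean MLE given early stopping {mle_stopped.mean():.3f}")
```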

12.
Fan SK, Wang YG. Statistics in Medicine 2006; 25(10): 1699-1714.
A decision-theoretic framework is proposed for designing sequential dose-finding trials with multiple outcomes. The optimal strategy is solvable theoretically via backward induction. However, for dose-finding studies involving k doses, the computational complexity is the same as that of the bandit problem with k-dependent arms, which is computationally prohibitive. We therefore provide two computationally compromised strategies, which are of practical interest because the computational complexity is greatly reduced: one is closely related to the continual reassessment method (CRM), and the other improves on the CRM and approximates the optimal strategy more closely. In particular, we present the framework for phase I/II trials with multiple outcomes. Applications to a pediatric HIV trial and a cancer chemotherapy trial are given to illustrate the proposed approach. Simulation results for the two trials show that the computationally compromised strategy performs well and appears to be ethical for allocating patients. The proposed framework can provide a better approximation to the optimal strategy if more extensive computing is available.
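For orientation, here is a minimal sketch of the continual reassessment method to which the first compromised strategy is related: a one-parameter power model with a normal prior, a grid approximation to the posterior, and selection of the dose whose posterior mean toxicity is closest to a target. The skeleton, target, prior, and data are assumptions; the paper's decision-theoretic, multiple-outcome designs are not reproduced here.

```python
# Minimal CRM sketch: power model p_i = skeleton_i ** exp(a), grid posterior over a.
import numpy as np

skeleton = np.array([0.05, 0.10, 0.20, 0.35, 0.50])   # prior guesses of toxicity per dose
target = 0.25
grid = np.linspace(-3, 3, 601)                         # grid over the model parameter a
prior = np.exp(-0.5 * grid**2 / 1.34**2)               # N(0, 1.34^2) prior, unnormalized

# Observed data so far: (dose index, toxicity indicator) per treated patient (hypothetical).
data = [(0, 0), (1, 0), (2, 1), (2, 0), (2, 0)]

log_lik = np.zeros_like(grid)
for dose, tox in data:
    p = skeleton[dose] ** np.exp(grid)                 # model toxicity probability at each a
    log_lik += np.where(tox, np.log(p), np.log1p(-p))

posterior = prior * np.exp(log_lik - log_lik.max())
posterior /= posterior.sum()

# Posterior mean toxicity at each dose; recommend the dose closest to the target.
post_tox = np.array([(skeleton[d] ** np.exp(grid) * posterior).sum()
                     for d in range(len(skeleton))])
print("posterior mean toxicity:", np.round(post_tox, 3))
print("recommended next dose index:", int(np.argmin(np.abs(post_tox - target))))
```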

13.
Group sequential design has become more popular in clinical trials because it allows trials to stop early for futility or efficacy, saving time and resources. However, this approach is less well‐known for longitudinal analysis. We have observed repeated cases of studies with longitudinal data where there is an interest in early stopping for a lack of treatment effect or in adapting the sample size to correct for inappropriate variance assumptions. We propose an information‐based group sequential design as a method to deal with both of these issues. Updating the sample size at each interim analysis makes it possible to maintain the target power while controlling the type I error rate. We illustrate our strategy with examples and simulations and compare the results with those obtained using a fixed design and a group sequential design without sample size re‐estimation. Copyright © 2014 John Wiley & Sons, Ltd.
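A sketch of the information-based idea for a simple two-sample mean comparison: the maximum information is fixed by the error rates and the target difference, and the sample size that delivers it is re-estimated when the interim variance differs from the planning assumption. The numbers below are illustrative, and the longitudinal model of the paper is not implemented.

```python
# Information-based monitoring: fixed maximum information, sample size re-estimated from
# the interim variance (illustrative sketch for a two-sample mean comparison).
import numpy as np
from scipy.stats import norm

alpha, power, delta = 0.05, 0.90, 0.4
I_max = (norm.ppf(1 - alpha / 2) + norm.ppf(power)) ** 2 / delta ** 2   # target information

def info(n_per_group, sigma):
    """Statistical information for a difference of two means with equal group sizes."""
    return 1.0 / (sigma ** 2 * (2.0 / n_per_group))

planned_sigma = 1.0
n_planned = 2 * I_max * planned_sigma ** 2      # per-group size solving info = I_max
print("planned n/group:", int(np.ceil(n_planned)))

# At the interim look the pooled variance estimate is larger than assumed.
sigma_hat, n_now = 1.25, 60
print("information fraction now:", round(info(n_now, sigma_hat) / I_max, 3))
print("re-estimated n/group to reach I_max:", int(np.ceil(2 * I_max * sigma_hat ** 2)))
```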

14.
This paper presents methodology for designing complex group sequential survival trials when the survival curves will be compared using the logrank statistic. The method can be applied to any treatment and control survival curves as long as each hazard function can be approximated by a piecewise linear function. The approach allows arbitrary accrual patterns and permits adjustment for varying rates of non-compliance, drop-in and loss to follow-up. The calendar-time-information-time transformation is derived under these complex assumptions. This permits the exploration of the operating characteristics of various interim analysis plans, including sample size and power. By using the calendar-time-information-time transformation, information fractions corresponding to desired calendar times can be determined. In this way, the interim analyses can be scheduled in information time, assuring the desired power and realization of the spending function, while the interim analyses will take place according to the desired calendar schedule.
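The following simplified sketch computes expected event counts over calendar time for a piecewise-constant hazard under uniform accrual; with information roughly proportional to events for the logrank statistic, this gives a calendar-time-to-information-time mapping. It is an assumption-laden simplification: the paper handles piecewise linear hazards, arbitrary accrual patterns, non-compliance, drop-in, and loss to follow-up, none of which are modelled here.

```python
# Expected events by calendar time under a piecewise-constant hazard and uniform accrual,
# and the implied information fractions (simplified sketch).
import numpy as np

accrual_time, total_time, n_patients = 2.0, 5.0, 400     # years of accrual / total study length

# Piecewise-constant hazard: rate per year on [0,1), [1,3), [3, inf) of follow-up.
breaks, rates = np.array([0.0, 1.0, 3.0]), np.array([0.30, 0.20, 0.10])

def cum_hazard(u):
    """Cumulative hazard at follow-up time u for the piecewise-constant rates."""
    widths = np.clip(u[..., None] - breaks, 0.0, np.append(np.diff(breaks), np.inf))
    return (widths * rates).sum(axis=-1)

def expected_events(calendar_t, n_grid=2000):
    """Expected events by calendar time t with uniform accrual over [0, accrual_time]."""
    entry = np.linspace(0.0, min(accrual_time, calendar_t), n_grid)   # accrual times
    follow_up = np.clip(calendar_t - entry, 0.0, None)
    p_event = 1.0 - np.exp(-cum_hazard(follow_up))
    accrued_fraction = min(calendar_t, accrual_time) / accrual_time
    return n_patients * accrued_fraction * p_event.mean()

d_max = expected_events(total_time)
for t_cal in (1.0, 2.0, 3.0, 4.0, 5.0):
    d = expected_events(t_cal)
    print(f"calendar year {t_cal}: expected events {d:6.1f}, information fraction {d / d_max:.3f}")
```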

15.
High placebo response is widely believed to be one major reason why many psychiatric clinical trials fail to demonstrate drug efficacy. To alleviate this problem, researchers have developed several enrichment designs, including the parallel design with a placebo lead‐in phase, the sequential parallel design, and a recently proposed two‐way enriched design. While these designs have been evaluated and discussed individually, their effectiveness relative to one another has not been rigorously compared. The current study examines the various enrichment designs simultaneously. Building on their strengths, we introduce a new, improved design named the 'sequential enriched design' (SED), aimed at removing from the study not only patients with a high placebo response but also patients who do not respond to any treatment. The SED begins with a double‐blind placebo lead‐in phase followed by a traditional parallel design in the first stage. Only patients who respond to the drug in the first stage are re‐randomized to drug or placebo at the second stage. We simulate data for a mixed population composed of four subgroups of patients who are predetermined as to whether they respond to drug or not, as well as to placebo or not. By focusing on the target patients whose responses reflect the drug's efficacy, we evaluate the bias, mean squared error, and power of the different designs. We demonstrate that the SED produces a less biased estimate of the target treatment effect and in general yields reasonably high power compared with the other designs. Copyright © 2014 John Wiley & Sons, Ltd.
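A hedged simulation sketch of the SED mechanics on a mixed population of four latent subgroups: a placebo lead-in removes placebo responders, stage 1 is a parallel comparison, and only stage-1 drug responders are re-randomized in stage 2. The subgroup prevalences and response probabilities are invented for illustration, and the paper's estimators and operating-characteristic comparisons are not reproduced.

```python
# Sequential enriched design (SED) mechanics on a mixed population (illustrative sketch).
import numpy as np

rng = np.random.default_rng(2024)
n = 2000

# Latent subgroups: cross-classification by (drug responder?, placebo responder?).
prev = {(1, 1): 0.20, (1, 0): 0.30, (0, 1): 0.20, (0, 0): 0.30}
labels = list(prev)
idx = rng.choice(len(labels), size=n, p=list(prev.values()))
drug_type, plac_type = np.array(labels)[idx].T

def respond(on_drug, d_type, p_type):
    """Observed binary response; the probabilities below are illustrative assumptions."""
    p = np.where(on_drug, np.where(d_type == 1, 0.7, 0.2),
                          np.where(p_type == 1, 0.6, 0.2))
    return rng.random(p.shape) < p

# Double-blind placebo lead-in: drop patients who respond while on placebo.
lead_in_resp = respond(np.zeros(n, dtype=bool), drug_type, plac_type)
keep = ~lead_in_resp
drug_type, plac_type = drug_type[keep], plac_type[keep]

# Stage 1: randomize 1:1 to drug or placebo.
on_drug1 = rng.integers(0, 2, size=keep.sum()).astype(bool)
resp1 = respond(on_drug1, drug_type, plac_type)

# Stage 2: only stage-1 drug responders are re-randomized to drug or placebo.
carry = on_drug1 & resp1
on_drug2 = rng.integers(0, 2, size=carry.sum()).astype(bool)
resp2 = respond(on_drug2, drug_type[carry], plac_type[carry])
effect = resp2[on_drug2].mean() - resp2[~on_drug2].mean()
print(f"stage-2 n = {carry.sum()}, estimated drug-minus-placebo response difference = {effect:.3f}")
```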

16.
In clinical trials with a long period of time between randomization and the primary assessment of the patient, the same assessments are often undertaken at intermediate times. When an interim analysis is conducted, in addition to the patients who have completed the primary assessment, there will be patients who have so far undergone only intermediate assessments. The efficiency of the interim analysis can be increased by including data from these additional patients. This paper compares four methods, based on model-free estimates of transition probabilities, of increasing information by incorporating intermediate assessments from patients who have not completed the trial. It is assumed that the observations are binary and that there is one intermediate assessment. The methods are the score and Wald approaches, each with the log-odds ratio and probability difference parameterizations. Simulations show that all four approaches have good properties in moderate to large sample sizes.
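A small sketch of the underlying idea: estimate the transition probabilities P(final response | intermediate response) from completers, then augment the interim estimate of the final response rate with the predicted contributions of patients observed only at the intermediate time. The counts are made up, and the score and Wald test constructions compared in the paper are not reproduced.

```python
# Augmenting an interim estimate with intermediate assessments via model-free
# transition probabilities (illustrative sketch for one arm).
import numpy as np

# Completers: counts by (intermediate outcome, final outcome), 1 = response.
completers = np.array([[30, 10],      # intermediate 0: 30 final 0, 10 final 1
                       [ 8, 32]])     # intermediate 1:  8 final 0, 32 final 1
# Patients with only the intermediate assessment so far.
partial = np.array([12, 18])          # 12 with intermediate 0, 18 with intermediate 1

trans = completers[:, 1] / completers.sum(axis=1)     # P(final = 1 | intermediate = j)
n_complete = completers.sum()
p_complete = completers[:, 1].sum() / n_complete

# Augmented estimate: completers contribute observed final outcomes, partial patients
# contribute their predicted probability of a final response.
p_augmented = (completers[:, 1].sum() + (partial * trans).sum()) / (n_complete + partial.sum())
print("transition probabilities P(final resp | interm):", np.round(trans, 3))
print(f"completers-only estimate {p_complete:.3f}, augmented estimate {p_augmented:.3f}")
```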

17.
A scenario not uncommon at the end of a Phase II clinical development is that although choices are narrowed down to two to three doses, the project team cannot make a recommendation of one single dose for the Phase III confirmatory study based upon the available data. Several 'drop‐the‐loser' designs to monitor multiple doses of an experimental treatment compared with a control in a pivotal Phase III study are considered. Ineffective and/or toxic doses compared with the control may be dropped at the interim analyses as the study continues, and when the accumulated data have demonstrated convincing efficacy and an acceptable safety profile for one dose, the corresponding dose or the study may be stopped to make the experimental treatment available to patients. A decision to drop a toxic dose is usually based upon a comprehensive review of all the available safety data and also a risk/benefit assessment. For dropping ineffective doses, a non‐binding futility boundary may be used as guidance. The desired futility boundary can be derived by using an appropriate combination of risk level (i.e. the error rate for accepting the null hypothesis when the dose is truly efficacious) and spending strategy (dropping a dose aggressively in early analyses versus late). For establishing convincing evidence of the treatment efficacy, three methods for calculating the efficacy boundary are discussed: the Joint Monitoring (JM) approach, the Marginal Monitoring method with Bonferroni correction (MMB), and the Marginal Monitoring method with Adjustment for correlation (MMA). The JM approach requires intensive computation, especially when there are several doses and multiple interim analyses. The marginal monitoring methods are computationally more attractive and also more flexible since each dose is monitored separately by its own alpha‐spending function. The JM and MMB methods control the false positive rate. The MMA method tends to protect the false positive rate and is more powerful than the Bonferroni‐based MMB method. The MMA method offers a practical and flexible solution when there are several doses and multiple interim looks. Copyright © 2010 John Wiley & Sons, Ltd.

18.
Emerson SS. Statistics in Medicine 2006; 25(19): 3270-3296; discussion 3302-3304, 3320-3325, 3326-3347.
Sequential sampling plans are often used in the monitoring of clinical trials in order to address the ethical and efficiency issues inherent in human testing of a new treatment or preventive agent for disease. Group sequential stopping rules are perhaps the most commonly used approaches, but in recent years, a number of authors have proposed adaptive methods of choosing a stopping rule. In general, such adaptive approaches come at a price of inefficiency (almost always) and clouding of the scientific question (sometimes). In this paper, I review the degree of adaptation possible within the largely prespecified group sequential stopping rules, and discuss the operating characteristics that can be characterized fully prior to collection of the data. I then discuss the greater flexibility possible when using several of the adaptive approaches receiving the greatest attention in the statistical literature and conclude with a discussion of the scientific and statistical issues raised by their use.

19.
Consider the problem of testing H0: p ≤ p0 versus H1: p > p0, where p could, for example, represent the response rate to a new drug. The group sequential triangular test (TT) is an efficient alternative to a single‐stage test as it can provide a substantial reduction in the expected number of test subjects. Whitehead provides formulas for determining stopping boundaries for this test. Existing research shows that test designs based on these formulas (WTTs) may not meet Type I error and/or power specifications, or may be over‐powered at the expense of requiring more test subjects than are necessary. We present a search algorithm, with program available from the author, which provides an alternative approach to triangular test design. The primary advantage of the algorithm is that it generates test designs that consistently meet error specifications. In tests on nearly 1000 example combinations of n (group size), p0, p1, α, and β, the algorithm‐determined triangular test (ATT) design met specified Type I error and power constraints in every case considered, whereas WTT designs met constraints in only 10 cases. Actual Type I error and power values for the ATTs tend to be close to specified values, leading to test designs with favorable average sample number performance. For cases where the WTT designs did meet Type I error and power constraints, the corresponding ATT designs also had the advantage of providing, on average, a modest reduction in average sample numbers calculated at p0, p1, and (p0 + p1)/2. Copyright © 2010 John Wiley & Sons, Ltd.
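In the spirit of an algorithm-determined design, the sketch below evaluates a triangular-boundary group sequential test for a binomial response rate by simulation and searches for the smallest boundary intercept meeting the type I error and power constraints. The boundary shape on the score/information scale (reject when Z ≥ a + cV, accept H0 when Z ≤ −a + 3cV), the slope taken from a reference log-odds ratio, and all design numbers are assumptions rather than Whitehead's exact formulas or the author's search algorithm.

```python
# Monte Carlo evaluation and calibration of a triangular-boundary group sequential test
# for a binomial response rate (assumption-laden sketch, not Whitehead's formulas).
import numpy as np

rng = np.random.default_rng(17)
p0, p1, group, max_groups = 0.20, 0.35, 20, 10
alpha_target, power_target, n_sim = 0.05, 0.80, 20_000

def operating_characteristics(a, c, p):
    """Monte Carlo P(reject H0) and mean sample size under response probability p."""
    reject = np.zeros(n_sim, dtype=bool)
    n_used = np.full(n_sim, group * max_groups)
    successes = np.zeros(n_sim)
    active = np.ones(n_sim, dtype=bool)
    for k in range(1, max_groups + 1):
        successes[active] += rng.binomial(group, p, size=active.sum())
        n_k = k * group
        Z = successes - n_k * p0                      # efficient score for H0: p = p0
        V = n_k * p0 * (1 - p0)                       # Fisher information under H0
        hit_upper = active & (Z >= a + c * V)         # stop and reject H0
        hit_lower = active & (Z <= -a + 3 * c * V)    # stop and accept H0 (futility)
        reject |= hit_upper
        n_used[hit_upper | hit_lower] = n_k
        active &= ~(hit_upper | hit_lower)
    # Trials still active at the final look are counted as not rejecting (conservative).
    return reject.mean(), n_used.mean()

c = 0.25 * np.log(p1 * (1 - p0) / (p0 * (1 - p1)))    # slope from a reference log-odds ratio (assumed)
for a in np.arange(2.0, 12.1, 0.5):                    # grid search over the boundary intercept
    t1, asn0 = operating_characteristics(a, c, p0)
    pw, asn1 = operating_characteristics(a, c, p1)
    if t1 <= alpha_target and pw >= power_target:
        print(f"a = {a:.1f}: type I error {t1:.3f}, power {pw:.3f}, "
              f"ASN(p0) {asn0:.0f}, ASN(p1) {asn1:.0f}")
        break
else:
    print("no intercept in the grid met both constraints")
```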

20.
We describe a new method of formulating stopping rules for clinical trials, one that incorporates opinion on what difference is clinically important. We compare the method with conventional group sequential designs and illustrate it by application to a study of Pancuronium Bromide for prevention of haemorrhage in pre-term infants.
