Similar articles
1.
Group sequential designs are widely used in clinical trials to determine whether a trial should be terminated early. In such trials, maximum likelihood estimates are often used to describe the difference in efficacy between the experimental and reference treatments; however, these are well known for displaying conditional and unconditional biases. Established bias‐adjusted estimators include the conditional mean‐adjusted estimator (CMAE), conditional median unbiased estimator, conditional uniformly minimum variance unbiased estimator (CUMVUE), and weighted estimator. However, their performances have been inadequately investigated. In this study, we review the characteristics of these bias‐adjusted estimators and compare their conditional bias, overall bias, and conditional mean‐squared errors in clinical trials with survival endpoints through simulation studies. The coverage probabilities of the confidence intervals for the four estimators are also evaluated. We find that the CMAE reduced conditional bias and showed relatively small conditional mean‐squared errors when the trials terminated at the interim analysis. The conditional coverage probability of the conditional median unbiased estimator was well below the nominal value. In trials that did not terminate early, the CUMVUE performed with less bias than the other estimators and showed an acceptable conditional coverage probability. In conclusion, when planning an interim analysis, we recommend using the CUMVUE for trials that do not terminate early and the CMAE for those that terminate early. Copyright © 2017 John Wiley & Sons, Ltd.
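The conditional bias of the maximum likelihood estimate that this abstract addresses is easy to reproduce by simulation. The sketch below uses a normal endpoint rather than the survival setting of the paper, and all design constants (effect size, stage sizes, efficacy boundary) are illustrative assumptions: trials that stop early overestimate the effect on average, while trials that continue underestimate it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-stage group sequential trial with a normal endpoint
# (the paper's setting is survival, but the conditional bias of the MLE
# arises in the same way). All design constants here are assumptions.
delta = 0.3        # true standardized treatment effect
n1, n2 = 50, 50    # per-arm sample sizes at interim and final analysis
boundary = 2.0     # efficacy boundary for the interim z-statistic

n_sim = 20000
stopped_estimates, final_estimates = [], []
for _ in range(n_sim):
    x1 = rng.normal(delta, 1, n1)    # stage-1 treatment-vs-control differences
    z1 = x1.mean() * np.sqrt(n1)
    if z1 > boundary:                # stop early for efficacy
        stopped_estimates.append(x1.mean())
    else:                            # continue to the final analysis
        x2 = rng.normal(delta, 1, n2)
        final_estimates.append(np.concatenate([x1, x2]).mean())

cond_bias_early = np.mean(stopped_estimates) - delta   # positive: overestimation
cond_bias_final = np.mean(final_estimates) - delta     # negative: underestimation
print(cond_bias_early, cond_bias_final)
```

Conditioning on early stopping selects the trials whose interim estimate happened to be large, which is exactly the selection effect the bias-adjusted estimators above try to undo.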

2.
Estimation issues in clinical trials and overviews
There is a general move towards greater emphasis on point and interval estimates of treatment effect in reporting of clinical trials, so that significance testing plays a lesser role. In this article we examine a number of issues which affect the use and interpretation of conventional estimation methods. Should we accept or avoid the stereotypes of 95 per cent confidence? Should the abstract of a trial report include confidence intervals for major endpoints? Are frequentist confidence intervals being interpreted correctly, and should Bayesian probability intervals be more widely used in trial reports? Does the timing of publication, such as early stopping because of a large observed treatment difference, lead to exaggerated point and interval estimates? How can we produce realistic estimates from subgroup analyses? Is publication bias seriously affecting our ability to obtain unbiased estimates? Is the emphasis on estimation methods a powerful tool for encouraging larger sample sizes? Can we resolve the controversy concerning fixed or random effects models for estimation in overviews of related trials? Our arguments are illustrated by results from recent trials in cardiovascular disease.

3.
Quality of life (QoL) has become an accepted and widely used endpoint in clinical trials. The analytical tools used for QoL evaluations in clinical trials differ from those used for the more traditional endpoints, such as response to disease, overall survival or progression-free survival. Since QoL assessments are generally performed on self-administered questionnaires, QoL endpoints are more prone to a placebo effect than traditional clinical endpoints. The placebo effect is a well-documented phenomenon in clinical trials, which has led to dramatic consequences on the clinical development of new therapeutic agents. In order to account for the placebo effect, a multivariate latent variable model is proposed, which allows for misclassification in the QoL item responses. The approach is flexible in the sense that it can be used for the analysis of a wide variety of multi-dimensional QoL instruments. For statistical inference, maximum likelihood estimates and their standard errors are obtained using a Monte Carlo EM algorithm. The approach is illustrated with analysis of data from a cardiovascular phase III clinical trial.

4.
Surrogate endpoints in clinical trials are biological markers or events observable earlier than the clinical endpoints (such as death) that are actually of primary interest. The "proportion of treatment effect captured" by a surrogate endpoint (PTE) is a frequentist measure intended to address the question of whether trials based on a surrogate endpoint reach the same conclusions as would have been reached using the true endpoint. The question of inferential interest is whether PTE for a given marker exceeds some threshold value, say 0.5. Calculating PTE requires fitting two different models to the same data. We develop a Markov chain Monte Carlo based method for estimating the Bayesian posterior distribution of PTE. The new method conditions on the truth of a single model. Obtaining the full posterior distribution enables direct statements such as "the posterior probability that PTE >0.5 is 0.085". Furthermore, credible sets do not depend on asymptotic approximations and can be computed using data sets for which the frequentist methods may be inaccurate or even impossible to apply. We illustrate with Bayesian proportional hazards models for clinical trial data. As a by-product of developing the Bayesian method, we show that the frequentist estimate of PTE also may be computed from quantities in a single model and calculate frequentist confidence intervals for PTE that tend to be narrower than those produced by standard methods but that provide equally good coverage.
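The "two different models" in the frequentist PTE calculation are a model for the true endpoint with and without the surrogate as a covariate: PTE = 1 − β_adj/β, where β is the treatment effect unadjusted for the surrogate and β_adj the effect after adjustment. The sketch below illustrates this with linear models on simulated data (the paper itself works with proportional hazards models; the data-generating values here are assumptions chosen so the population PTE is 0.8).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear-model illustration of the frequentist PTE,
# PTE = 1 - beta_adj / beta. All parameter values are assumptions:
# the surrogate S carries most, but not all, of the treatment effect.
n = 5000
z = rng.integers(0, 2, n).astype(float)        # treatment arm indicator
s = 1.0 * z + rng.normal(0, 1, n)              # surrogate endpoint
t = 1.0 * s + 0.25 * z + rng.normal(0, 1, n)   # true endpoint

def ols(y, cols):
    """Least-squares fit with intercept; returns the coefficient vector."""
    X = np.column_stack([np.ones(len(y))] + cols)
    return np.linalg.lstsq(X, y, rcond=None)[0]

beta = ols(t, [z])[1]          # treatment effect, surrogate ignored
beta_adj = ols(t, [z, s])[1]   # treatment effect, surrogate adjusted for
pte = 1 - beta_adj / beta
print(pte)   # population value here is 1*1 / (1*1 + 0.25) = 0.8
```

This makes concrete why PTE is awkward as a frequentist quantity: it is a ratio of coefficients from two separately fitted models, which is what motivates the single-model Bayesian treatment in the abstract.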

5.
In phase III cancer clinical trials, overall survival is commonly used as the definitive endpoint. In phase II clinical trials, however, more immediate endpoints such as incidence of complete or partial response within 1 or 2 months or progression‐free survival (PFS) are generally used. Because of the limited ability to detect change in overall survival with response, the inherent variability of PFS and the long wait for progression to be observed, more informative and immediate alternatives to overall survival are desirable in exploratory phase II trials. In this paper, we show how comparative trials can be designed and analysed using change in tumour size as the primary endpoint. The test developed is based on the framework of score statistics and will formally incorporate the information of whether patients survive until the time at which change in tumour size is assessed. Using an example in non‐small cell lung cancer, we show that the sample size requirements for a trial based on change in tumour size are favourable compared with alternative randomized trials and demonstrate that these conclusions are robust to our assumptions. Copyright © 2012 John Wiley & Sons, Ltd.

6.
Statistical analysis of quality of life data in cancer clinical trials
In clinical trials endpoints other than total and/or disease-free survival are gaining more and more interest. In particular, quality of life (QOL) or the well-being of patients has emerged as a synonym for variables describing the subjective reactions of patients towards their disease and its treatment. The statistical analysis of such QOL data is complicated firstly by the large number of variables measured and their obvious lack of objectivity. The construction of suitable aggregate measures allowing a reduction of the measurements into a (preferably) unidimensional index are discussed in the context of an analysis at a fixed time point during the course of treatment. A second problem arises from the consideration that a patient's well-being is subject to changes over time. We discuss the modelling of QOL by suitable stochastic processes which are extensions of a multistate disease process. This allows QOL events to be incorporated into methods of survival analysis by either estimating the relevant transition probabilities between states or calculating quality-adjusted survival times. Finally, some brief guidelines for the planning of clinical trials including QOL measurements will be proposed.

7.
Hanna MG, Hoover HC, Vermorken JB, Harris JE, Pinedo HM 《Vaccine》2001,19(17-19):2576-2582
We performed three multi-institutional, prospectively randomized, controlled clinical trials, assessing the therapeutic effect of post-resection adjuvant active specific immunotherapy in patients with stage II and stage III colon cancer. In each study four outcomes were considered: time-to-disease recurrence, overall survival intervals, disease-free survival intervals, and recurrence-free survival intervals, using the Kaplan-Meier method for generating curves and the log-rank test to compare efficacy distributions. In addition, a meta-analysis of the three phase III trials was performed since the trials had proven homogeneity. Two main analyses were performed: (1) the intent-to-treat colon cancer patients from all three studies; and (2) analyzable colon cancer patients in all three studies. The conclusion of these analyses is that adjuvant active specific immunotherapy provided significant clinical benefits in patients with stage II colon cancer and appears to be an important new adjuvant treatment for these patients.

8.
In clinical trials using lifetime as primary outcome variable, it is more the rule than the exception that even for patients who are failing in the course of the study, survival time does not become known exactly since follow‐up takes place according to a restricted schedule with fixed, possibly long intervals between successive visits. In practice, the discreteness of the data obtained under such circumstances is plainly ignored both in data analysis and in sample size planning of survival time studies. As a framework for analyzing the impact of making no difference between continuous and discrete recording of failure times, we use a scenario in which the partially observed times are assigned to the points of the grid of inspection times in the natural way. Evaluating the treatment effect in a two‐arm trial fitting into this framework by means of ordinary methods based on Cox's relative risk model is shown to produce biased estimates and/or confidence bounds whose actual coverage exhibits marked discrepancies from the nominal confidence level. Not surprisingly, the amount of these distorting effects turns out to be the larger the coarser the grid of inspection times has been chosen. As a promising approach to correctly analyzing and planning studies generating discretely recorded failure times, we use large‐sample likelihood theory for parametric models accommodating the key features of the scenario under consideration. The main result is an easily implementable representation of the expected information and hence of the asymptotic covariance matrix of the maximum likelihood estimators of all parameters contained in such a model. In two real examples of large‐scale clinical trials, sample size calculation based on this result is contrasted with the traditional approach, which consists of applying the usual methods for exactly observed failure times. Copyright © 2017 John Wiley & Sons, Ltd.
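The distortion from assigning failure times to an inspection grid can be seen in a few lines. The sketch below uses a deliberately simple exponential model with assumed hazards and no censoring (far simpler than the Cox-model setting of the paper): rounding each failure time up to the next scheduled visit attenuates the estimated log hazard ratio toward zero, and the effect grows with the grid width.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy illustration (assumed rates, no censoring): failure times are only
# seen at scheduled visits, so each time is rounded up to the next visit.
# The exponential-model log hazard ratio is then attenuated toward zero.
lam_c, lam_t = 1.0, 0.5               # control and treatment hazards
true_log_hr = np.log(lam_c / lam_t)   # = log 2

n = 20000
tc = rng.exponential(1 / lam_c, n)    # exact control failure times
tt = rng.exponential(1 / lam_t, n)    # exact treatment failure times

grid = 1.0                            # interval between inspection visits
tc_vis = np.ceil(tc / grid) * grid    # times assigned to the visit grid
tt_vis = np.ceil(tt / grid) * grid

log_hr_exact = np.log(tt.mean() / tc.mean())       # exponential MLE, exact times
log_hr_grid = np.log(tt_vis.mean() / tc_vis.mean())  # same estimator, grid times
print(log_hr_exact, log_hr_grid)
```

Shorter failure times are inflated proportionally more by rounding up, so the arm with the higher hazard loses more of its apparent risk, shrinking the estimated treatment contrast; this is the qualitative bias the abstract's likelihood-based correction addresses.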

9.
A frequent objective in medical research is the investigation of differences in patient survival between several experimental treatments and one standard treatment. In order to assess these differences statistically, we have to apply adjustments for multiple comparisons to prevent an increased number of false-positive findings. The most prominent procedure of this type is the Bonferroni correction, which maintains the error level but leads to conservative results. On the basis of a general statistical framework for simultaneous inference, we propose a new statistical procedure for many-to-one comparisons of treatments with adjustment for covariates for clustered survival data modeled by a frailty Cox model. In contrast to the Bonferroni method, dependencies between estimated effects are taken into account. The resulting simultaneous confidence intervals for the hazard ratios of the experimental treatments compared with a control can be interpreted in terms of both statistical significance and clinical importance. The quality of the new procedure is judged by the coverage probability for the simultaneous confidence intervals. Simulation results show an acceptable performance in balanced and various unbalanced designs. The practical merits are demonstrated by a reanalysis of a chronic myelogenous leukemia clinical trial. The procedure presented here works well for multiple comparisons with a control with adjustment for covariates for survival data from multicenter clinical trials.
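The gain from "taking dependencies between estimated effects into account" can be illustrated with the simplest many-to-one setup: k experimental arms compared with one shared control. The test statistics are correlated because they reuse the control data, and a critical value computed from their joint distribution (here by simulation; a Dunnett-type quantile) is smaller than the Bonferroni one. This is a simplified sketch with assumed equal arm sizes, not the frailty Cox procedure of the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Many-to-one comparisons: k experimental arms vs one control. The test
# statistics share the control-arm data, so they are correlated (rho = 1/2
# with equal arm sizes). A simulated equicoordinate critical value that
# uses this correlation is smaller than the Bonferroni one.
k, alpha, n_sim = 3, 0.05, 200000

x0 = rng.normal(size=n_sim)               # standardized control-arm means
xk = rng.normal(size=(n_sim, k))          # standardized experimental-arm means
z = (xk - x0[:, None]) / np.sqrt(2.0)     # many-to-one z-statistics under H0
crit_sim = np.quantile(np.abs(z).max(axis=1), 1 - alpha)

crit_bonf = norm.ppf(1 - alpha / (2 * k))  # two-sided Bonferroni critical value
print(crit_sim, crit_bonf)
```

The simulated critical value is strictly smaller, so the corresponding simultaneous confidence intervals are narrower at the same family-wise error level; the advantage grows with k and with the strength of the correlation.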

10.
Progression‐related endpoints (such as time to progression or progression‐free survival) and time to death are common endpoints in cancer clinical trials. It is of interest to study the link between progression‐related endpoints and time to death (e.g. to evaluate the degree of surrogacy). However, current methods ignore some aspects of the definitions of progression‐related endpoints. We review those definitions and investigate their impact on modeling the joint distribution. Further, we propose a multi‐state model in which the association between the endpoints is modeled through a frailty term. We also argue that interval‐censoring needs to be taken into account to more closely match the latent disease evolution. The joint distribution and an expression for Kendall's τ are derived. The model is applied to data from a clinical trial in advanced metastatic ovarian cancer. Copyright © 2010 John Wiley & Sons, Ltd.
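A classical special case of the frailty-induced association mentioned here: a shared gamma frailty acting multiplicatively on two hazards induces a Clayton copula between the event times, for which Kendall's τ has the closed form θ/(θ + 2). The sketch below (assumed parameter θ = 2, marginals chosen arbitrarily; not the model fitted in the paper) checks the closed form against a rank-based estimate.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(4)

# A gamma frailty shared by two event times induces a Clayton copula,
# for which Kendall's tau is theta / (theta + 2). Illustrative theta.
theta = 2.0
tau_true = theta / (theta + 2)   # = 0.5

n = 4000
u1 = rng.random(n)
v = rng.random(n)
# conditional-distribution sampler for the Clayton copula
u2 = (u1 ** (-theta) * (v ** (-theta / (theta + 1)) - 1) + 1) ** (-1 / theta)

t_prog = -np.log(u1)    # monotone transforms to an event-time scale
t_death = -np.log(u2)   # (Kendall's tau is rank-invariant)

tau_hat, _ = kendalltau(t_prog, t_death)
print(tau_hat, tau_true)
```

Because τ is invariant under monotone transformations of the margins, it isolates the frailty-driven association from the marginal survival distributions, which is why it is a natural summary in this multi-state setting.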

11.
Practice in the analysis of clinical trials with continuously measured endpoints is to focus on the difference or percentage change in mean or median response. However, treatments may have effects on the distribution of responses other than on the average response. We sought an approach to such generalized treatment effects that: (i) targets a parameter that is easily understood by our clinical colleagues; and (ii) employs confidence intervals as the basis for inference. We consider one such approach based on work in reliability theory, namely setting Pr[Y>X] as the target parameter, and compare this approach to an earlier one due to O'Brien. The two approaches have similar properties when they both seek to reject the null hypothesis of no effect due to different variances but differ when the larger variance corresponds to the larger mean. In that case, our approach views that the larger variance attenuates the effect of the larger mean. Our suggested approach applies easily to positive control (clinical) equivalence trials.
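The target parameter Pr[Y > X] has a direct nonparametric estimate: the proportion of all (control, treatment) pairs in which the treated response is larger, i.e. the Mann–Whitney U statistic divided by the number of pairs. A minimal sketch with assumed normal toy data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Estimate Pr[Y > X] as the fraction of all (x, y) pairs with y > x
# (the Mann-Whitney statistic scaled by n_x * n_y). Toy data: for
# X ~ N(0,1) and Y ~ N(1,1), the population value is Phi(1/sqrt(2)).
x = rng.normal(0.0, 1.0, 1500)   # control responses
y = rng.normal(1.0, 1.0, 1500)   # treatment responses

p_hat = (y[:, None] > x[None, :]).mean()
print(p_hat)   # population value is Phi(1/sqrt(2)) ~= 0.76
```

A value of 0.5 means no treatment effect in this sense, which matches the abstract's point: a larger mean accompanied by a larger variance pulls Pr[Y > X] back toward 0.5, attenuating the apparent benefit.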

12.
A two-stage model for evaluating both trial-level and patient-level surrogacy of correlated time-to-event endpoints has been introduced, using patient-level data when multiple clinical trials are available. However, the associated maximum likelihood approach often suffers from numerical problems when different baseline hazards among trials and imperfect estimation of treatment effects are assumed. To address this issue, we propose performing the second-stage, trial-level evaluation of potential surrogates within a Bayesian framework, where we may naturally borrow information across trials while maintaining these realistic assumptions. Posterior distributions on surrogacy measures of interest may then be used to compare measures or make decisions regarding the candidacy of a specific endpoint. We perform a simulation study to investigate differences in estimation performance between traditional maximum likelihood and new Bayesian representations of common meta-analytic surrogacy measures, while assessing sensitivity to data characteristics such as number of trials, trial size, and amount of censoring. Furthermore, we present both frequentist and Bayesian trial-level surrogacy evaluations of time to recurrence for overall survival in two meta-analyses of adjuvant therapy trials in colon cancer. With these results, we recommend Bayesian evaluation as an attractive and numerically stable alternative in the multitrial assessment of potential surrogate endpoints.

13.
Randomized Phase II or Phase III clinical trials that are powered based on clinical endpoints, such as survival time, may be prohibitively expensive, in terms of both the time required for their completion and the number of patients required. As such, surrogate endpoints, such as objective tumour response or markers including prostate specific antigen or CA-125, have gained widespread popularity in clinical trials. If an improvement in a surrogate endpoint does not itself confer patient benefit, then consideration must be given to the extent to which improvement in a surrogate endpoint implies improvement in the true clinical endpoint of interest. That this is not a trivial issue is demonstrated by the results of an NIH-sponsored trial of anti-arrhythmic drugs, in which the ability to correct an irregular heart beat not only did not correspond to a survival benefit but in fact led to excess mortality. One approach to the validation of surrogate endpoints involves ensuring that a valid between-group analysis of the surrogate endpoint constitutes also a valid analysis of the true clinical endpoint. The Prentice criterion is a set of conditions that essentially specify the conditional independence of the impact of treatment on the true endpoint, given the surrogate endpoint. It is shown that this criterion alone ensures that an observed effect of the treatment on the true endpoint implies a treatment effect also on the surrogate endpoint, but contrary to popular belief, it does not ensure the converse, specifically that the observation of a significant treatment effect on the surrogate endpoint can be used to infer a treatment effect on the true endpoint.

14.
Analysing specific non-fatal events in isolation may lead to spurious conclusions about efficacy unless the events considered are combined with all-cause mortality. The use of combined endpoints has therefore become widespread, at least in cardiovascular disease trials. Combining all-cause mortality with selected non-fatal events is useful because event-free survival, an important criterion in therapy evaluation, is addressed in this manner. In many clinical trials, symptoms, signs or paraclinical measures (for example, blood pressure, exercise duration, quality of life scores) are used as endpoints. If the patient died before the endpoint was measured, or it was otherwise not possible to perform follow-up assessments as planned, the effect of treatment on these endpoints may be distorted if the patients concerned are ignored in the analysis. Examples are given of how distortion can be avoided by including all patients randomized in an analysis that uses a ranked combined endpoint based both on clinical events and on paraclinical measures. A distinction is made between a pseudo intention-to-treat analysis that disregards study medication status at the time of endpoint assessment but is confined to patients with data, and a true intention-to-treat analysis that takes into account all patients randomized based on a ranked combined endpoint.

15.
Composite endpoints are frequently used in clinical trials, but simple approaches, such as the time to first event, do not reflect any ordering among the endpoints. However, some endpoints, such as mortality, are worse than others. A variety of procedures have been proposed to reflect the severity of the individual endpoints such as pairwise ranking approaches, the win ratio, and the desirability of outcome ranking. When patients have different lengths of follow-up, however, ranking can be difficult and proposed methods do not naturally lead to regression approaches and require specialized software. This paper defines an ordering score O to operationalize the patient ranking implied by hierarchical endpoints. We show how differential right censoring of follow-up corresponds to multiple interval censoring of the ordering score allowing standard software for survival models to be used to calculate the nonparametric maximum likelihood estimators (NPMLEs) of different measures. Additionally, if one assumes that the ordering score is transformable to an exponential random variable, a semiparametric regression is obtained, which is equivalent to the proportional hazards model subject to multiple interval censoring. Standard software can be used for estimation. We show that the NPMLE can be poorly behaved compared to the simple estimators in staggered entry trials. We also show that the semiparametric estimator can be more efficient than simple estimators and explore how standard Cox regression maneuvers can be used to assess model fit, allow for flexible generalizations, and assess interactions of covariates with treatment. We analyze a trial of short versus long-term antiplatelet therapy using our methods.
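The hierarchical pairwise ranking that the ordering score operationalizes is easiest to see in the win ratio itself: each treatment patient is compared with each control patient on the most severe endpoint first, and ties fall through to the next endpoint. The sketch below is a minimal, censoring-free version with assumed toy data; handling differential censoring is precisely what the paper's ordering-score construction adds.

```python
import numpy as np

rng = np.random.default_rng(6)

# Minimal sketch of the hierarchical pairwise ranking behind the win
# ratio: compare each treatment/control pair on the worst endpoint first
# (here, time to death), then break ties on a secondary outcome.
# Censoring is ignored in this toy version. All data are assumptions.
n = 400
death_t = rng.exponential(12, n)   # treatment-arm survival times
death_c = rng.exponential(10, n)   # control-arm survival times
sec_t = rng.normal(0.3, 1, n)      # secondary outcome (higher = better)
sec_c = rng.normal(0.0, 1, n)

wins = losses = 0
for i in range(n):
    for j in range(n):
        if death_t[i] != death_c[j]:       # pair decided on mortality
            better = death_t[i] > death_c[j]
        else:                              # tie -> secondary outcome
            better = sec_t[i] > sec_c[j]
        if better:
            wins += 1
        else:
            losses += 1

win_ratio = wins / losses   # > 1 favours the treatment arm
print(win_ratio)
```

With continuous survival times the secondary outcome rarely matters here; its role grows once censoring makes many mortality comparisons indeterminate, which is where the interval-censored ordering score becomes useful.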

16.
Composite endpoints are widely used as primary endpoints of randomized controlled trials across clinical disciplines. A common critique of the conventional analysis of composite endpoints is that all disease events are weighted equally, whereas their clinical relevance may differ substantially. We address this by introducing a framework for the weighted analysis of composite endpoints and interpretable test statistics, which are applicable to both binary and time‐to‐event data. To cope with the difficulty of selecting an exact set of weights, we propose a method for constructing simultaneous confidence intervals and tests that asymptotically preserve the family‐wise type I error in the strong sense across families of weights satisfying flexible inequality or order constraints based on the theory of ‐distributions. We show that the method achieves the nominal simultaneous coverage rate with substantial efficiency gains over Scheffé's procedure in a simulation study and apply it to trials in cardiovascular disease and enteric fever. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

17.
This work was motivated by the need to find surrogate endpoints for survival of patients in oncology studies. The goal of this article is to determine associations between five time-to-event outcomes coming from three clinical trials for non-small cell lung cancer. To this end, we propose to use the multivariate Dale model for time-to-event data introduced by Tibaldi et al. (Stat. Med. 2003). We fit the model to these data, using a pseudo-likelihood approach to estimate the model parameters. We evaluate and compare the performance of different dimensional models and we relate the Dale model association parameter, i.e. the odds ratio, to well-known quantities such as Kendall's tau and Spearman's rho. Finally, the results are discussed with a perspective on surrogate marker validation. Some suggestions are made regarding further studies in this field.

18.
Testing for or against a qualitative interaction is relevant in randomized clinical trials that use a common primary treatment factor and have a secondary factor, such as the centre, region, subgroup, gender or biomarker. Interaction contrasts are formulated for ratios of differences between the levels of the primary treatment factor. Simultaneous confidence intervals allow for interpreting the magnitude and the relevance of the qualitative interaction. The proposed method is demonstrated by means of a multi‐centre clinical trial, using the R package mratios. Copyright © 2013 John Wiley & Sons, Ltd.

19.
In this paper, we are concerned with the estimation of the discrepancy between two treatments when right-censored survival data are accompanied with covariates. Conditional confidence intervals given the available covariates are constructed for the difference between or ratio of two median survival times under the unstratified and stratified Cox proportional hazards models, respectively. The proposed confidence intervals provide the information about the difference in survivorship for patients with common covariates but in different treatments. The results of a simulation study investigation of the coverage probability and expected length of the confidence intervals suggest the one designed for the stratified Cox model when data fit reasonably with the model. When the stratified Cox model is not feasible, however, the one designed for the unstratified Cox model is recommended. The use of the confidence intervals is finally illustrated with an HIV+ data set.

20.
Bruner D.W., Movsas B., Konski A., Roach M., Bondy M., Scarintino C., Scott C., Curran W. 《Quality of life research》2004,13(6):1025-1041
BACKGROUND: The Radiation Therapy Oncology Group (RTOG), a National Cancer Institute sponsored cancer clinical trials research cooperative, has recently formed an Outcomes Committee to assess a comprehensive array of clinical trial endpoints and factors impacting the net effect of therapy. METHODS: To study outcomes in a consistent, comprehensive and coordinated manner, the RTOG Outcomes Committee developed a model to assess clinical, humanistic, and economic outcomes important in clinical trials. RESULTS: This paper reviews how the RTOG incorporates outcomes research into cancer clinical trials, and demonstrates utilization of the RTOG Outcomes Model to test hypotheses related to non-small-cell lung cancer (NSCLC). In this example, the clinical component of the model indicates that the addition of chemotherapy to radiotherapy (RT) improves survival but increases the risk of toxicity. The humanistic component indicates that esophagitis is the symptom with the greatest impact on quality of life and may outweigh the benefits in elderly (> or =70 years) patients. The economic component of the model indicates that, accounting for quality-adjusted survival, concurrent chemoRT for the treatment of NSCLC is within the range of economically acceptable recommendations. CONCLUSION: The RTOG Outcomes Model guides a comprehensive program of research that systematically measures a triad of endpoints considered important to clinical trials research.
