Similar Articles
20 similar articles found.
1.
Compared with placebo-controlled clinical trials, the interpretation of efficacy results from active-control trials requires more caution. Efficacy results from such trials cannot be reliably interpreted without a thorough understanding of the evidence that formed the basis for the approval of the active control, especially when drug efficacy is to be established from a traditional two-arm active-control clinical equivalence study rather than a multi-arm active-control trial: in addition to over-reliance on the quantification of an acceptable margin of clinically irrelevant inferiority from historical data, the interpretation also depends on cross-trial inference for demonstration of the experimental drug's effect. We provide a brief overview of design issues with the traditional two-arm active-control clinical trial and discuss regulators' concern regarding Type I error rate control (with the two most popular methods for quantifying the non-inferiority margin) in cross-trial demonstration of experimental drug effect. Simulation results show that the point estimate method provides adequate control of the Type I error rate with ≥75 per cent retention of the known active-control effect, and that the confidence interval approach is uniformly ultra-conservative. We also report, via a numerical example from real clinical trial data, two potentially less stringent alternative approaches for establishing the non-inferiority of a test drug relative to a control, which have been used in the past to provide additional efficacy evidence in NDA submissions.

2.
We present a general Bayesian framework for cost-effectiveness analysis (CEA) from clinical trial data. This framework allows for very flexible modelling of both cost- and efficacy-related trial data. A common CEA technique is established for this wide class of models through linking mean efficacy and mean cost to the parameters of any given model. Examples are given in which efficacy may be measured as a continuous, binary, ordinal or time-to-event outcome, and in which costs are modelled as distributed normally, log-normally, as a mixture or non-parametrically. A case study is presented, illustrating the methodology and illuminating the role of prior information.
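The framework's common CEA step — linking posterior draws of mean cost and mean efficacy to a decision quantity — can be illustrated with a minimal sketch. The draws, distributional choices, and willingness-to-pay value below are assumptions for illustration, not the paper's fitted models:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-ins for posterior draws of incremental mean efficacy and mean cost;
# in the paper's framework these would come from whichever model (normal,
# log-normal, mixture, non-parametric, ...) was fitted to the trial data.
d_eff = rng.normal(0.15, 0.05, 10_000)     # delta mean efficacy (assumed draws)
d_cost = rng.normal(800.0, 300.0, 10_000)  # delta mean cost (assumed draws)

lam = 20_000.0                             # willingness-to-pay (assumed)
inb = lam * d_eff - d_cost                 # incremental net benefit per draw

print("posterior mean INB:", inb.mean())
print("P(cost-effective at lam):", (inb > 0).mean())
```

Because the decision quantity is computed per posterior draw, the same two lines work no matter which of the paper's cost or efficacy models generated the draws.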

3.
A requirement for calculating sample sizes for cluster randomized trials (CRTs) conducted over multiple periods of time is the specification of a form for the correlation between outcomes of subjects within the same cluster, encoded via the within-cluster correlation structure. Previously proposed within-cluster correlation structures have made strong assumptions; for example, the usual assumption is that correlations between the outcomes of all pairs of subjects are identical (“uniform correlation”). More recently, structures that allow for a decay in correlation between pairs of outcomes measured in different periods have been suggested. However, these structures are overly simple in settings with continuous recruitment and measurement. We propose a more realistic “continuous-time correlation decay” structure whereby correlations between subjects' outcomes decay as the time between these subjects' measurement times increases. We investigate the implications of this structure for trial planning in the context of a primary care diabetes trial, where there is evidence of decaying correlation between pairs of patients' outcomes over time. In particular, for a range of different trial designs, we derive the variance of the treatment effect estimator under continuous-time correlation decay and compare this to the variance obtained under uniform correlation. For stepped wedge and cluster randomized crossover designs, incorrectly assuming uniform correlation will underestimate the required sample size under most trial configurations likely to occur in practice. Planning of CRTs requires consideration of the most appropriate within-cluster correlation structure to obtain a suitable sample size.
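To see why the assumed correlation structure matters, compare the variance of a single cluster's mean outcome under uniform correlation versus a decay structure. This is a minimal numpy sketch with assumed values for the ICC, decay rate, and measurement times; it is not the authors' variance derivation for stepped wedge or crossover designs:

```python
import numpy as np

def cluster_mean_variance(times, icc, decay_rate=None, sigma2=1.0):
    """Variance of a cluster mean under a within-cluster correlation structure:
    uniform (all pairs share icc) or continuous-time decay,
    corr(i, j) = icc * exp(-decay_rate * |t_i - t_j|)  (illustrative form)."""
    times = np.asarray(times, float)
    n = len(times)
    if decay_rate is None:
        R = np.full((n, n), icc)                      # uniform correlation
    else:
        gaps = np.abs(np.subtract.outer(times, times))
        R = icc * np.exp(-decay_rate * gaps)          # decaying correlation
    np.fill_diagonal(R, 1.0)
    return sigma2 * R.sum() / n**2                    # Var(mean) = 1'Sigma 1 / n^2

times = np.sort(np.random.default_rng(1).uniform(0, 12, 30))  # months (assumed)
print(cluster_mean_variance(times, icc=0.05))                  # uniform
print(cluster_mean_variance(times, icc=0.05, decay_rate=0.2))  # decay: smaller
```

Decay shrinks the effective within-cluster correlation, so assuming uniform correlation misstates the design effect — the mechanism behind the underestimated sample sizes the abstract describes.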

4.
The most common data structures in biomedical studies have been matched or unmatched designs. Data structures resulting from a hybrid of the two may create challenges for statistical inference. The question may arise whether to use parametric or nonparametric methods on the hybrid data structure. The Early Treatment for Retinopathy of Prematurity study was a multicenter clinical trial sponsored by the National Eye Institute. The design produced data requiring a statistical method of a hybrid nature. An infant in this multicenter randomized clinical trial had high-risk prethreshold retinopathy of prematurity that was eligible for treatment in one or both eyes at entry into the trial. During follow-up, recognition visual acuity was assessed for both eyes. Data from both eyes (matched) and from only one eye (unmatched) were eligible to be used in the trial. The new hybrid nonparametric method is a meta-analysis based on combining the Hodges–Lehmann estimates of treatment effects from the Wilcoxon signed rank and rank sum tests. To compare the new method, we used the classic meta-analysis with the t-test method to combine estimates of treatment effects from the paired and two-sample t-tests. We used simulations to calculate the empirical size and power of the test statistics, as well as the bias, mean square error and confidence interval width of the corresponding estimators. The proposed method provides an effective tool to evaluate data from clinical trials and similar comparative studies. Copyright © 2013 John Wiley & Sons, Ltd.
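The building blocks — Hodges–Lehmann estimates from the two Wilcoxon tests and an inverse-variance combination — can be sketched as follows. The combination here uses generic fixed-effect weights; the paper's exact variance estimators may differ:

```python
import numpy as np

def hl_paired(diffs):
    """HL estimate for matched pairs: median of Walsh averages of the
    within-pair differences (companion to the Wilcoxon signed rank test)."""
    d = np.asarray(diffs, float)
    i, j = np.triu_indices(len(d))            # includes i == j
    return np.median((d[i] + d[j]) / 2.0)

def hl_two_sample(x, y):
    """HL estimate for two independent samples: median of all pairwise
    differences x_i - y_j (companion to the Wilcoxon rank sum test)."""
    return np.median(np.subtract.outer(np.asarray(x, float), np.asarray(y, float)))

def inverse_variance_combine(estimates, variances):
    """Fixed-effect meta-analytic combination of the matched and unmatched
    estimates; variances would be estimated, e.g., from rank-test CI widths."""
    w = 1.0 / np.asarray(variances, float)
    est = np.sum(w * np.asarray(estimates, float)) / w.sum()
    return est, np.sqrt(1.0 / w.sum())
```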

5.
The sequential parallel comparison design (SPCD) has been proposed to increase the likelihood of success of clinical trials in therapeutic areas where high placebo response is a concern. The trial is run in two stages, and subjects are randomized into three groups: (i) placebo in both stages; (ii) placebo in the first stage and drug in the second stage; and (iii) drug in both stages. We consider the case of binary response data (response/no response). In the SPCD, all first-stage and second-stage data from placebo subjects who failed to respond in the first stage of the trial are utilized in the efficacy analysis. We develop one- and two-degree-of-freedom score tests for treatment effect in the SPCD. We give formulae for asymptotic power and for sample size computations and evaluate their accuracy via simulation studies. We compute the optimal allocation ratio between drug and placebo in stage 1 for the SPCD to determine from a theoretical viewpoint whether a single-stage design, a two-stage design with placebo only in the first stage, or a two-stage design is the best design for a given set of response rates. As response rates are not known before the trial, a two-stage approach with allocation to active drug in both stages is a robust design choice.
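A minimal illustration of how SPCD data enter a single test is a weighted-Z combination of the stage-1 comparison (all randomized subjects) and the stage-2 comparison (stage-1 placebo non-responders). The paper's score tests are more refined; the weight w, the example data, and the independence assumption below are simplifications:

```python
import numpy as np
from scipy import stats

def two_prop_z(p1, n1, p2, n2):
    """Pooled two-proportion z-statistic for drug vs placebo response rates."""
    pbar = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = np.sqrt(pbar * (1 - pbar) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

def spcd_weighted_z(z1, z2, w=0.6):
    """Combine stage-1 and stage-2 z-statistics; if z1 and z2 were exactly
    independent N(0,1) under the null, so is the combination."""
    return w * z1 + np.sqrt(1.0 - w**2) * z2

z1 = two_prop_z(0.45, 120, 0.35, 240)  # stage 1: drug vs placebo (assumed data)
z2 = two_prop_z(0.30, 80, 0.18, 80)    # stage 2: placebo non-responders (assumed)
z = spcd_weighted_z(z1, z2)
print(z, 1 - stats.norm.cdf(z))        # one-sided p-value
```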

6.
Making inferences about the average treatment effect using the random effects model for meta-analysis is problematic in the common situation where there is a small number of studies. This is because estimates of the between-study variance are not precise enough to accurately apply the conventional methods for testing and deriving a confidence interval for the average effect. We have found that a refined method for univariate meta-analysis, which applies a scaling factor to the estimated effects’ standard error, provides more accurate inference. We explain how to extend this method to the multivariate scenario and show that our proposal for refined multivariate meta-analysis and meta-regression can provide more accurate inferences than the more conventional approach. We explain how our proposed approach can be implemented using standard output from multivariate meta-analysis software packages and apply our methodology to two real examples. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.
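In the univariate case, the "scaling factor applied to the estimated effects' standard error" resembles the Hartung–Knapp adjustment. A sketch of that adjustment (our reading, with DerSimonian–Laird τ² as an assumed choice of heterogeneity estimator):

```python
import numpy as np
from scipy import stats

def random_effects_hk(y, v):
    """Univariate random-effects meta-analysis with a Hartung–Knapp-type
    scaled standard error and a t(k-1) reference distribution."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)
    w = 1.0 / v
    mu_f = np.sum(w * y) / w.sum()
    q = np.sum(w * (y - mu_f) ** 2)                        # Cochran's Q
    tau2 = max(0.0, (q - (k - 1)) / (w.sum() - np.sum(w**2) / w.sum()))
    w_re = 1.0 / (v + tau2)
    mu = np.sum(w_re * y) / w_re.sum()
    # scaling: weighted residual variance replaces the naive 1/sum(w_re)
    se = np.sqrt(np.sum(w_re * (y - mu) ** 2) / ((k - 1) * w_re.sum()))
    half = stats.t.ppf(0.975, k - 1) * se
    return mu, se, (mu - half, mu + half)

print(random_effects_hk([0.2, 0.5, 0.1, 0.4], [0.04, 0.05, 0.03, 0.06]))
```

With few studies, the scaled SE and t reference widen the interval exactly when the between-study variance is poorly estimated, which is the problem the abstract identifies.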

7.
The benefits and challenges of incorporating biomarkers into the development of anticancer agents have been increasingly discussed. In many cases, a sensitive subpopulation of patients is determined based on preclinical data and/or by retrospectively analyzing clinical trial data. Prospective exploration of sensitive subpopulations of patients may enable us to efficiently develop definitively effective treatments, resulting in accelerated drug development and a reduction in development costs. We consider the development of a new molecular-targeted treatment in cancer patients. Given preliminary but promising efficacy data observed in a phase I study, it may be worth designing a phase II clinical trial that aims to identify a sensitive subpopulation. In order to achieve this goal, we propose a Bayesian randomized phase II clinical trial design incorporating a biomarker that is measured on a graded scale. We compare two Bayesian methods, one based on subgroup analysis and the other on a regression model, to analyze a time-to-event endpoint such as progression-free survival (PFS) time. The two methods basically estimate Bayesian posterior probabilities of PFS hazard ratios in biomarker subgroups. Extensive simulation studies evaluate these methods’ operating characteristics, including the correct identification probabilities of the desired subpopulation under a wide range of clinical scenarios. We also examine the impact of subgroup population proportions on the methods’ operating characteristics. Although both methods’ performance depends on the distribution of treatment effect and the population proportions across patient subgroups, the regression-based method shows more favorable operating characteristics. Copyright © 2014 John Wiley & Sons, Ltd.

8.
Lee M, Fine JP. Statistics in Medicine 2011; 30(27): 3221–3235.
In survival analysis, a point estimate and confidence interval for median survival time have been frequently used to summarize the survival curve. However, such quantile analyses on competing risks data have not been widely investigated. In this paper, we propose parametric inferences for quantiles from the cumulative incidence function and develop parametric confidence intervals for quantiles. In addition, we study a simplified method of inference for the nonparametric approach. We compare the parametric and nonparametric inferences in empirical studies. Simulation studies show that the procedures perform well, with parametric analyses yielding smaller mean square error when the model is not too badly misspecified. We illustrate the methods with data from a breast cancer clinical trial.

9.
Interim analyses are routinely used to monitor accumulating data in clinical trials. When the objective of the interim analysis is to stop the trial if it is deemed futile, the analysis must ideally be conducted as early as possible. In trials where the clinical endpoint of interest is only observed after a long follow-up, many enrolled patients may therefore have no information on the primary endpoint available at the time of the interim analysis. To facilitate earlier decision-making, one may incorporate early response data that are predictive of the primary endpoint (e.g., an assessment of the primary endpoint at an earlier time) in the interim analysis. Most attention so far has been given to the development of interim test statistics that include such short-term endpoints, but not to decision procedures. Existing tests moreover perform poorly when information is scarce, e.g., due to rare events, when the cohort of patients with observed primary endpoint data is small, or when the short-term endpoint is a strong but imperfect predictor. In view of this, we develop an interim decision procedure based on the conditional power approach that utilizes the short-term and long-term binary endpoints in a framework that is expected to provide reliable inferences, even when the primary endpoint is available for only a few patients, and that has the added advantage of allowing the use of historical information. The operating characteristics of the proposed procedure are evaluated for the phase III clinical trial that motivated this approach, using simulation studies.
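The backbone of any conditional-power futility rule is the Brownian-motion update of the test statistic. The sketch below shows only that backbone, with an assumed drift and cutoff, and omits the paper's machinery for borrowing strength from the short-term endpoint and historical data:

```python
import numpy as np
from scipy import stats

def conditional_power(z_t, t, theta, alpha=0.025):
    """P(final Z >= z_{1-alpha} | interim Z = z_t at information fraction t),
    assuming the score process is Brownian motion with drift theta."""
    b_t = z_t * np.sqrt(t)                  # interim value of B(t)
    z_a = stats.norm.ppf(1 - alpha)
    return 1 - stats.norm.cdf((z_a - b_t - theta * (1 - t)) / np.sqrt(1 - t))

# e.g. a weak interim signal early in the trial (all numbers are assumptions)
cp = conditional_power(z_t=0.8, t=0.3, theta=2.8)
print(cp)                  # stop for futility if cp falls below, say, 0.10
```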

10.
Clinical trials often assess efficacy by comparing treatments on the basis of two or more event-time outcomes. In the case of cancer clinical trials, progression-free survival (PFS), which is the minimum of the time from randomization to progression or to death, summarizes the comparison of treatments on the hazards for disease progression and mortality. However, the analysis of PFS does not utilize all the information we have on patients in the trial. First, if both progression and death times are recorded, then information on death time is ignored in the PFS analysis. Second, disease progression is monitored at regular clinic visits, and progression time is recorded as the first visit at which evidence of progression is detected. However, many patients miss or have irregular visits (resulting in interval-censored data) and sometimes die of the cancer before progression is recorded. In this case, the previous progression-free time could provide additional information on the treatment efficacy. The aim of this paper is to propose a method for comparing treatments that could more fully utilize the data on progression and death. We develop a test for treatment effect based on the joint distribution of progression and survival. The issue of interval censoring is handled using the very simple and intuitive approach of the Conditional Expected Score Test (CEST). We focus on the application of these methods in cancer research. Copyright © 2014 John Wiley & Sons, Ltd.

11.
The inclusion of economic evaluations as part of clinical trials has led to concerns about the adequacy of trial sample size to support such analysis. The analytical tool of cost-effectiveness analysis is the incremental cost-effectiveness ratio (ICER), which is compared with a threshold value (lambda) to determine the efficiency of a health-care intervention. Accordingly, many of the methods suggested for calculating the sample size requirements for the economic component of clinical trials are based on the properties of the ICER. However, use of the ICER and a threshold value as a basis for determining efficiency has been shown to be inconsistent with the economic concept of opportunity cost. As a result, the validity of the ICER-based approaches to sample size calculation can be challenged. Alternative methods for determining improvements in efficiency that do not depend upon ICER values have been presented in the literature. In this paper, we develop an opportunity cost approach to calculating sample size for economic evaluations alongside clinical trials, and illustrate the approach using a numerical example. We compare the sample size requirement of the opportunity cost method with that of the ICER threshold method. In general, either method may yield the larger required sample size. However, the opportunity cost approach, although simple to use, has additional data requirements. We believe that the additional data requirements represent a small price to pay for being able to perform an analysis consistent with both the concept of opportunity cost and the problem faced by decision makers.
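The two decision rules being contrasted are easy to state concretely. A minimal sketch with assumed trial summaries (ΔC, ΔE, λ); the paper's opportunity cost approach requires additional inputs not modelled here:

```python
def icer(d_cost, d_eff):
    """Incremental cost-effectiveness ratio."""
    return d_cost / d_eff

def net_monetary_benefit(d_cost, d_eff, lam):
    """Incremental net monetary benefit; positive values indicate efficiency
    at willingness-to-pay lam, avoiding ratio-based inference."""
    return lam * d_eff - d_cost

d_cost, d_eff, lam = 5000.0, 0.25, 30_000.0          # assumed values
print(icer(d_cost, d_eff) < lam)                      # ICER rule: 20000 < 30000
print(net_monetary_benefit(d_cost, d_eff, lam) > 0)   # NMB rule agrees here
```

The NMB form is linear in ΔE and ΔC, which is what makes sample size (and variance) calculations on it better behaved than calculations built on the ratio.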

12.
Hui Xie. Statistics in Medicine 2009; 28(22): 2725–2747.
The Bayesian approach has been increasingly used for analyzing longitudinal data. When dropout occurs in the study, the analysis often relies on the assumption of ignorable dropout. Because ignorability is a critical and untestable assumption without obtaining additional data or making other unverifiable assumptions, it is important to assess the impact of departures from the ignorability assumption on the key Bayesian inferences. In this paper, we extend the Bayesian index of local sensitivity to non-ignorability (ISNI) method proposed by Zhang and Heitjan to longitudinal data with dropout. We derive formulas for the Bayesian ISNI when the complete longitudinal data follow a linear mixed-effect model. The calculation of the index only requires the posterior draws, or summary statistics of these draws, from the standard analysis of the ignorable model. Thus, our approach avoids fitting any complicated nonignorable model. One can use the method to evaluate which Bayesian parameter estimates, or functions of these estimates, in a linear mixed-effect model are susceptible to nonignorable dropout and which are not. We illustrate the method using a simulation study and two real examples: a rats data set and a rheumatoid arthritis clinical trial data set. Copyright © 2009 John Wiley & Sons, Ltd.

13.
In most randomized clinical trials (RCTs) with a right-censored time-to-event outcome, the hazard ratio is taken as an appropriate measure of the effectiveness of a new treatment compared with a standard-of-care or control treatment. However, it has long been known that the hazard ratio is valid only under the proportional hazards (PH) assumption. This assumption is formally checked only rarely. Some recent trials, particularly the IPASS trial in lung cancer and the ICON7 trial in ovarian cancer, have alerted researchers to the possibility of gross non-PH, raising the critical question of how such data should be analyzed. Here, we propose the use of the restricted mean survival time up to a prespecified, fixed time point as a useful general measure to report the difference between two survival curves. We describe different methods of estimating it and we illustrate its application to three RCTs in cancer. The examples are graded from a trial in kidney cancer in which there is no evidence of non-PH, to IPASS, where the opposite is clearly the case. We propose a simple, general scheme for the analysis of data from such RCTs. Key elements of our approach are Andersen's method of 'pseudo-observations,' which is based on the Kaplan-Meier estimate of the survival function, and Royston and Parmar's class of flexible parametric survival models, which may be used for analyzing data in the presence or in the absence of PH of the treatment effect.
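For a concrete sense of the proposed summary measure: the restricted mean survival time up to τ is the area under the Kaplan–Meier curve on [0, τ]. A minimal sketch with toy data (plain KM integration only; the pseudo-observation and Royston–Parmar approaches in the paper go further):

```python
import numpy as np

def km_rmst(time, event, tau):
    """Area under the Kaplan-Meier curve up to tau. Assumes no censored
    observation is ordered before an event it is tied with."""
    time = np.asarray(time, float)
    event = np.asarray(event, int)
    order = np.argsort(time, kind="stable")
    time, event = time[order], event[order]
    n = len(time)
    rmst, s, t_prev = 0.0, 1.0, 0.0
    for i in range(n):
        rmst += s * (min(time[i], tau) - t_prev)   # S(t) is flat between steps
        t_prev = min(time[i], tau)
        if time[i] > tau:
            return rmst
        if event[i]:
            s *= 1.0 - 1.0 / (n - i)               # KM step; n - i at risk
    return rmst + s * (tau - t_prev)               # all observations before tau

# toy data (assumed): RMST difference between arms is the treatment effect
t_ctrl = [2, 5, 6, 8, 11, 14];  e_ctrl = [1, 1, 0, 1, 1, 0]
t_trt  = [4, 7, 9, 12, 15, 18]; e_trt  = [1, 0, 1, 1, 0, 0]
print(km_rmst(t_trt, e_trt, 12) - km_rmst(t_ctrl, e_ctrl, 12))
```

Unlike the hazard ratio, the RMST difference ("months of life gained by τ") remains interpretable whether or not the hazards are proportional.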

14.
Clustered binary responses, such as disease status in twins, frequently arise in perinatal health and other epidemiologic applications. The scientific objective involves modelling both the marginal mean responses, such as the probability of disease, and the within-cluster association of the multivariate responses. In this regard, bivariate logistic regression is a useful procedure with advantages that include (i) a single maximization of the joint probability distribution of the bivariate binary responses, and (ii) modelling the odds ratio describing the pairwise association between the two binary responses in relation to several covariates. In addition, since the form of the joint distribution of the bivariate binary responses is assumed known, parameters for the regression model can be estimated by the method of maximum likelihood. Hence, statistical inferences may be based on likelihood ratio tests and profile likelihood confidence intervals. We apply bivariate logistic regression to a perinatal database comprising 924 twin foetuses resulting from 462 pregnancies to model obstetric and clinical risk factors for the association of small for gestational age births in twin gestations.
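The joint distribution underlying such a model can be built from the two marginal probabilities and the pairwise odds ratio. The closed form below is the standard Plackett/Dale construction, which we believe matches this setup; the example values are assumptions:

```python
import numpy as np

def joint_prob(p1, p2, psi):
    """P(Y1=1, Y2=1) given marginals p1, p2 and odds ratio psi
    (Plackett/Dale closed form; psi = 1 is independence)."""
    if abs(psi - 1.0) < 1e-12:
        return p1 * p2
    a = 1.0 + (p1 + p2) * (psi - 1.0)
    return (a - np.sqrt(a * a - 4.0 * psi * (psi - 1.0) * p1 * p2)) / (2.0 * (psi - 1.0))

p1, p2, psi = 0.30, 0.25, 2.5             # assumed marginals and odds ratio
p11 = joint_prob(p1, p2, psi)
p10, p01 = p1 - p11, p2 - p11             # remaining 2x2 cell probabilities
p00 = 1.0 - p11 - p10 - p01
print(p11, (p11 * p00) / (p10 * p01))     # second value recovers psi
```

With the four cell probabilities expressed through regression models for p1, p2, and psi, the likelihood of each twin pair follows directly, which is what enables the single maximization the abstract mentions.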

15.
A new, intuitive method has recently been proposed to explore treatment–covariate interactions in survival data arising from two treatment arms of a clinical trial. The method is based on constructing overlapping subpopulations of patients with respect to one (or more) covariates of interest and observing the pattern of the treatment effects estimated across the subpopulations. A plot of these treatment effects is called a subpopulation treatment effect pattern plot (STEPP). Here, we explore the small-sample characteristics of the asymptotic results associated with the method and develop an alternative permutation distribution-based approach to inference that should be preferred for smaller sample sizes. We then describe an extension of the method to the case in which the pattern of estimated quantiles of survivor functions is of interest. Copyright © 2009 John Wiley & Sons, Ltd.

16.
During the course of a clinical trial, subjects may experience treatment failure. For ethical reasons, it is necessary to administer emergency or rescue medications for such subjects. However, the rescue medications may bias the set of response measurements. This bias is of particular concern if a subject has been randomized to the control group, and the rescue medications improve the subject's condition. The standard approach to analysing data from a clinical trial is to perform an intent-to-treat (ITT) analysis, wherein the data are analysed according to treatment randomization. Supplementary analyses may be performed in addition to the ITT analysis to account for the effect of treatment failures and rescue medications. A Bayesian, counterfactual approach, which uses the data augmentation (DA) algorithm, is proposed for supplemental analysis. A simulation study is conducted to compare the operating characteristics of this procedure with a likelihood-based, counterfactual approach based on the EM algorithm. An example from the Asthma Clinical Research Network (ACRN) is used to illustrate the Bayesian procedure.

17.
Suppression of premature ventricular contractions (PVCs) is one of the goals of antiarrhythmic therapy. In a clinical trial, however, it may be difficult to distinguish antiarrhythmic drug effect from spontaneous variation in PVCs. We propose the application of linear regression to PVC histories to ascertain drug effect in individual patients. The model determines which variables are important in explaining a patient's PVCs. One such variable indicates the presence or absence of the drug; the model determines whether the drug has an effect on the patient's PVCs, while compensating for the other explanatory variables. In addition to determining the statistical significance of any drug effect, the model estimates the strength of the effect for each patient. We demonstrate the method with data from a three-day clinical trial which used 24-hour Holter monitoring. The method is flexible and can be modified to apply to any clinical study design. It allows for inferences concerning populations and subpopulations of patients.
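A per-patient version of this regression is straightforward to sketch. The covariates, dosing window, and synthetic counts below are assumptions for illustration, not the trial's actual design:

```python
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(72, dtype=float)                    # 3-day hourly Holter record
drug = ((hours >= 24) & (hours < 48)).astype(float)   # assumed dosing window
circadian = np.cos(2 * np.pi * hours / 24.0)          # one explanatory variable

# synthetic hourly PVC counts with a circadian pattern and a drug effect of -15
y = 50 + 10 * circadian - 15 * drug + rng.normal(0, 5, hours.size)

X = np.column_stack([np.ones_like(hours), circadian, drug])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)          # ordinary least squares
resid = y - X @ beta
sigma2 = resid @ resid / (X.shape[0] - X.shape[1])
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X).diagonal())
print(beta[2], beta[2] / se[2])   # drug-effect estimate and its t-statistic
```

The coefficient on the drug indicator is the per-patient effect estimate, adjusted for the other explanatory variables, exactly as the abstract describes.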

18.
From data on HIV-1 characteristics measured on viruses isolated from vaccinated and unvaccinated persons infected while enrolled in preventive HIV-1 vaccine trials, interpretable inferences into strain variations of vaccine efficacy can be made with recently developed sieve analysis models. Four assumptions are needed for the parameters in these models to have meaningful interpretations in terms of vaccine-induced reductions in strain-specific per-contact transmission probabilities: (A1) vaccination impacts each strain-specific transmission probability homogeneously in vaccinated persons (leaky vaccine effect); (A2) for each strain, biological susceptibility to infection given exposure is homogeneous among vaccinated trial participants and among unvaccinated trial participants; (A3) the distribution of exposure is equal in vaccinated and unvaccinated trial participants; (A4) the relative prevalence of circulating HIV-1 strains during the trial follow-up period is constant. Through theoretical considerations and simulations of an ongoing phase III HIV-1 vaccine efficacy trial in Bangkok, we evaluate the importance and necessity of these assumptions. We show that the models still provide estimates of biologically interpretable parameters when A1 is violated, but with bias reflecting the extent to which vaccine protection is heterogeneous. We also show that the models are highly robust to departures from A4, with the implication that the time-independent models are adequate for applications. In addition, we suggest extensions of the sieve analysis models which incorporate random effects that account for unmeasured heterogeneity in infection risk. With these mixed models, usefully interpretable strain-specific vaccine efficacy parameters can be estimated without requiring A2. The conclusion is that A3, which is justified by randomization and blinding, is the essential assumption for the sieve models to provide reliable interpretable inferences into strain variations in vaccine efficacy.

19.
Palliative medicine is an interdisciplinary specialty focusing on improving quality of life (QOL) for patients with serious illness and their families. Palliative care programs are available or under development at over 80% of large US hospitals (300+ beds). Palliative care clinical trials present unique analytic challenges in evaluating treatment efficacy, where the goal is to improve patients' diminishing QOL as disease progresses towards end of life (EOL). A unique feature of palliative care clinical trials is that patients will experience decreasing QOL during the trial despite potentially beneficial treatment. Longitudinal QOL and survival data are often highly correlated, which, in the face of censoring, makes it challenging to properly analyze and interpret the terminal QOL trend. To address these issues, we propose a novel semiparametric statistical approach to jointly model the terminal trend of QOL and survival data. There are two sub-models in our approach: a semiparametric mixed effects model for longitudinal QOL and a Cox model for survival. We use regression splines to estimate the nonparametric curves and AIC to select the knots. We assess the model performance through simulation to establish a novel modeling approach that could be used in future palliative care research trials. An application of our approach to a recently completed palliative care clinical trial is also presented.

20.
In oncology clinical trials, overall survival, time to progression, and progression-free survival are three commonly used endpoints. Empirical correlations among them have been published for different cancers, but statistical models describing the dependence structures are limited. Recently, Fleischer et al. proposed a statistical model that is mathematically tractable and shows some flexibility to describe the dependencies in a realistic way, based on the assumption of exponential distributions. This paper aims to extend their model to the more flexible Weibull distribution. We derived theoretical correlations among different survival outcomes, as well as the distribution of overall survival induced by the model. Model parameters were estimated by the maximum likelihood method and the goodness of fit was assessed by plotting estimated versus observed survival curves for overall survival. We applied the method to three cancer clinical trials. In the non-small-cell lung cancer trial, both the exponential and the Weibull models provided an adequate fit to the data, and the estimated correlations were very similar under both models. In the prostate cancer trial and the laryngeal cancer trial, the Weibull model exhibited advantages over the exponential model and yielded larger estimated correlations. Simulations suggested that the proposed Weibull model is robust for data generated from a range of distributions. Copyright © 2015 John Wiley & Sons, Ltd.
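The kind of dependence such a model induces can be mimicked with a quick simulation: draw time to progression and pre-progression death from Weibulls, let PFS be their minimum, and let OS add a post-progression survival time when progression comes first. This is an illustrative Fleischer-type construction with assumed shapes and scales, not the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

ttp = 24.0 * rng.weibull(1.5, n)    # time to progression (assumed Weibull)
pre = 36.0 * rng.weibull(1.2, n)    # death before progression (assumed)
post = 12.0 * rng.weibull(1.0, n)   # survival after progression (assumed)

pfs = np.minimum(ttp, pre)                        # PFS = first of the two events
os_time = np.where(ttp < pre, ttp + post, pre)    # OS adds post-progression life

print(np.corrcoef(pfs, os_time)[0, 1])   # induced PFS-OS correlation
print(np.corrcoef(ttp, os_time)[0, 1])   # induced TTP-OS correlation
```

Varying the three shape parameters away from 1 is exactly the flexibility the Weibull extension buys over the exponential special case.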
