Similar articles (20 results)
1.
Nonadherence to assigned treatment jeopardizes the power and interpretability of intent‐to‐treat comparisons from clinical trial data and continues to be an issue for effectiveness studies, despite their pragmatic emphasis. We posit that new approaches to design need to complement developments in methods for causal inference to address nonadherence, in both experimental and practice settings. This paper considers the conventional study design for psychiatric research and other medical contexts, in which subjects are randomized to treatments that are fixed throughout the trial and presents an alternative that converts the fixed treatments into an adaptive intervention that reflects best practice. The key element is the introduction of an adaptive decision point midway into the study to address a patient's reluctance to remain on treatment before completing a full‐length trial of medication. The clinical uncertainty about the appropriate adaptation prompts a second randomization at the new decision point to evaluate relevant options. Additionally, the standard ‘all‐or‐none’ principal stratification (PS) framework is applied to the first stage of the design to address treatment discontinuation that occurs too early for a midtrial adaptation. Drawing upon the adaptive intervention features, we develop assumptions to identify the PS causal estimand and to introduce restrictions on outcome distributions to simplify expectation–maximization calculations. We evaluate the performance of the PS setup, with particular attention to the role played by a binary covariate. The results emphasize the importance of collecting covariate data for use in design and analysis. We consider the generality of our approach beyond the setting of psychiatric research. Copyright © 2015 John Wiley & Sons, Ltd.
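The paper's full EM machinery is beyond a snippet, but the core identification step behind 'all-or-none' principal stratification can be sketched. Under one-sided noncompliance, randomization, and the exclusion restriction, the treated arm reveals the complier proportion and the never-takers' outcome mean, which lets one back out the compliers' control-arm mean from the observed control-arm average. A minimal sketch with hypothetical summary numbers:

```python
def complier_control_mean(ctrl_mean, pi_complier, never_taker_mean):
    """Back out the compliers' mean outcome under control from the
    observed control-arm mean, which is a mixture over strata:
        ctrl_mean = pi * mu_c0 + (1 - pi) * mu_n
    pi (complier share) and mu_n (never-taker mean) are identified
    from the treated arm under one-sided noncompliance and the
    exclusion restriction."""
    if not 0 < pi_complier <= 1:
        raise ValueError("complier proportion must be in (0, 1]")
    return (ctrl_mean - (1 - pi_complier) * never_taker_mean) / pi_complier

# hypothetical summaries: 75% compliers, never-takers average 1.0,
# observed control-arm average 0.55
mu_c0 = complier_control_mean(0.55, 0.75, 1.0)  # ≈ 0.4
```

The paper's EM calculations generalize this moment decomposition to settings where the strata cannot be read off directly and a binary covariate sharpens the mixture.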

2.
The log‐rank test is the most powerful non‐parametric test for detecting a proportional hazards alternative and thus is the most commonly used testing procedure for comparing time‐to‐event distributions between different treatments in clinical trials. When the log‐rank test is used for the primary data analysis, the sample size calculation should also be based on the test to ensure the desired power for the study. In some clinical trials, the treatment effect may not manifest itself right after patients receive the treatment. Therefore, the proportional hazards assumption may not hold. Furthermore, patients may discontinue the study treatment prematurely and thus may have diluted treatment effect after treatment discontinuation. If a patient's treatment termination time is independent of his/her time‐to‐event of interest, the termination time can be treated as a censoring time in the final data analysis. Alternatively, we may keep collecting time‐to‐event data until study termination from those patients who discontinued the treatment and conduct an intent‐to‐treat analysis by including them in the original treatment groups. We derive formulas necessary to calculate the asymptotic power of the log‐rank test under this non‐proportional hazards alternative for the two data analysis strategies. Simulation studies indicate that the formulas provide accurate power for a variety of trial settings. A clinical trial example is used to illustrate the application of the proposed methods. Copyright © 2009 John Wiley & Sons, Ltd.
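The paper's formulas target a non-proportional-hazards alternative with treatment discontinuation; as the reference point they extend, the standard proportional-hazards power approximation for the log-rank test (Schoenfeld's formula) can be sketched. The function name and defaults are illustrative:

```python
from math import log, sqrt
from statistics import NormalDist

def logrank_power(n_events, hazard_ratio, alloc=0.5, alpha=0.05):
    """Asymptotic power of the two-sided log-rank test under
    proportional hazards (Schoenfeld's approximation):
        power = Phi( sqrt(d * p * (1 - p)) * |log HR| - z_{1 - alpha/2} )
    where d is the expected number of events and p the allocation
    fraction."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2)
    drift = sqrt(n_events * alloc * (1 - alloc)) * abs(log(hazard_ratio))
    return nd.cdf(drift - z)

# roughly 247 events give ~80% power to detect HR = 0.7 at two-sided 5%
power = logrank_power(247, 0.7)
```

Under a delayed treatment effect or post-discontinuation dilution, the drift term shrinks, which is what the paper's modified formulas quantify for each analysis strategy.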

3.
In cluster randomized trials (CRTs), individuals belonging to the same cluster are very likely to resemble one another, not only in terms of outcomes but also in terms of treatment compliance behavior. Although the impact of resemblance in outcomes is well acknowledged, little attention has been given to the possible impact of resemblance in compliance behavior. This study defines compliance intraclass correlation as the level of resemblance in compliance behavior among individuals within clusters. On the basis of Monte Carlo simulations, it is demonstrated how compliance intraclass correlation affects power to detect intention-to-treat (ITT) effect in the CRT setting. As a way of improving power to detect ITT effect in CRTs accompanied by noncompliance, this study employs an estimation method, where ITT effect estimates are obtained based on compliance-type-specific treatment effect estimates. A multilevel mixture analysis using an ML-EM estimation method is used for this estimation.
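A toy Monte Carlo along the abstract's lines can show the mechanism: drawing each cluster's compliance probability from a beta distribution with intraclass correlation rho (for a beta-binomial model, rho = 1/(a + b + 1)) inflates the variance of the ITT effect estimate as rho grows, which is what erodes power. All settings below are illustrative, not taken from the study:

```python
import numpy as np

def itt_estimates(icc, n_reps=2000, k=10, m=20, p_comply=0.7,
                  effect=0.5, sigma=0.3, seed=0):
    """Simulate ITT effect estimates from a two-arm cluster trial in
    which only compliers in the treatment arm receive the effect.
    Cluster-level compliance probabilities follow a beta distribution
    whose intraclass correlation is icc."""
    rng = np.random.default_rng(seed)
    # beta parameters with mean p_comply and ICC rho = 1/(a + b + 1)
    total = (1 - icc) / icc
    a, b = p_comply * total, (1 - p_comply) * total
    est = np.empty(n_reps)
    for r in range(n_reps):
        # treatment arm: cluster mean = effect * compliance rate + noise
        p_clus = rng.beta(a, b, size=k)
        compliers = rng.binomial(m, p_clus) / m
        trt_means = effect * compliers + rng.normal(0, sigma / np.sqrt(m), k)
        ctl_means = rng.normal(0, sigma / np.sqrt(m), k)
        est[r] = trt_means.mean() - ctl_means.mean()
    return est

low = itt_estimates(icc=0.05)
high = itt_estimates(icc=0.5)
# both are unbiased for effect * p_comply = 0.35, but the estimate is
# noisier when compliance behavior is highly clustered
```

The study's ML-EM mixture approach goes further by recovering compliance-type-specific effects rather than only the diluted ITT contrast.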

4.
We develop the randomized analysis for repeated binary outcomes with non-compliance. A randomization-based semi-parametric estimation procedure for both the causal risk difference and the causal risk ratio is proposed for repeated binary data. Although we assume the simple structural models for potential outcomes, we choose to avoid making any assumptions about comparability beyond those implied by randomization at time zero. The proposed methods can incorporate non-compliance information, while preserving the validity of the test of the null hypothesis, and even in the presence of non-random non-compliance can give the estimate of the causal effect that treatment would have if all individuals complied with their assigned treatment. The methods are applied to data from a randomized clinical trial for reduction of febrile neutropenia events among acute myeloid leukaemia patients, in which a prophylactic use of macrophage colony-stimulating factor (M-CSF) was compared to placebo during the courses of intensive chemotherapies.

5.
Motivated by a recent National Research Council study, we discuss three aspects of the analysis of clinical trials when participants prematurely discontinue treatments. First, we distinguish treatment discontinuation from missing outcome data. Data collection is often stopped after treatment discontinuation, but outcome data could be recorded on individuals after they discontinue treatment, as the National Research Council study recommends. Conversely, outcome data may be missing for individuals who do not discontinue treatment, as when there is loss to follow up or missed clinic visits. Missing outcome data is a standard missing data problem, but treatment discontinuation is better viewed as a form of noncompliance and treated using ideas from the causal literature on noncompliance. Second, the standard intention to treat estimand, the average effect of randomization to treatment, is compared with three alternative estimands for the intention to treat population: the average effect when individuals continue on the assigned treatment after discontinuation, the average effect when individuals take a control treatment after treatment discontinuation, and a summary measure of the effect of treatment prior to discontinuation. We argue that the latter choice of estimand has advantages and should receive more consideration. Third, we consider when follow‐up measures after discontinuation are needed for valid measures of treatment effects. The answer depends on the choice of primary estimand and the plausibility of assumptions needed to address the missing data. Ideas are motivated and illustrated by a reanalysis of a past study of inhaled insulin treatments for diabetes, sponsored by Eli Lilly. Copyright © 2014 John Wiley & Sons, Ltd.

6.
While the intent-to-treat (ITT) analysis is widely accepted for superiority trials, there remains debate about its role in non-inferiority trials. It is often said that the ITT tends to be anti-conservative in the demonstration of non-inferiority. This concern has led to some reliance on per-protocol (PP) analyses that exclude patients on the basis of post-baseline events, despite the inherent bias of such analyses. We compare ITT and PP results from antibiotic trials presented to the public at the FDA's Anti-infective Drug Advisory Committee from 1999 to 2003. While the number of available trials is too small to produce clear conclusions, these data did not support the assumption that the ITT would lead to smaller treatment difference than the PP, in the setting of antibiotic trials. Possible explanations are discussed.
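The textbook concern can be made concrete. Under all-or-none noncompliance in which test-arm noncompliers respond at the control cure rate, the expected ITT difference is the per-protocol difference diluted by the compliance rate; the abstract's empirical point is that the antibiotic trials examined did not reliably show this pattern. A sketch with hypothetical cure rates:

```python
def expected_differences(p_test, p_ctrl, compliance):
    """Expected treatment differences when noncompliers in the test arm
    respond at the control cure rate (all-or-none noncompliance).

    Per protocol compares compliers on test against control;
    intent-to-treat mixes in the noncompliers, diluting the contrast."""
    pp_diff = p_test - p_ctrl
    itt_diff = compliance * p_test + (1 - compliance) * p_ctrl - p_ctrl
    return itt_diff, pp_diff

itt, pp = expected_differences(p_test=0.90, p_ctrl=0.80, compliance=0.75)
# under this idealized mechanism the ITT difference (0.075) is the
# PP difference (0.10) scaled by the compliance rate
```

Real trials can depart from this mechanism (informative noncompliance, PP exclusion bias), which is one candidate explanation for the FDA data not matching the dilution story.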

7.
In randomized trials with nonrandom noncompliance, the causal effects of a treatment among the entire population cannot be estimated in an unbiased manner. Therefore, several authors have considered the bounds on the causal effects. Here, we propose bounds by applying an idea of VanderWeele (Biometrics 2008; 64:702–706), who showed that the sign of the unmeasured confounding bias can be determined under monotonicity assumptions about covariates in the framework of observational studies. In randomized trials with noncompliance by switching the treatment, we show that the lower or upper bound on the expectation of the potential outcome becomes the expectation from the per‐protocol analysis under monotonicity assumptions similar to those of VanderWeele. In particular, the monotonicity assumptions can yield both the lower and the upper bounds on causal effects when the monotonic relationship between the covariates and the treatment actually received depends on the treatment assigned. The results are extended to cases of noncompliance by subjects not receiving any treatment. Although the monotonicity assumptions are not themselves identifiable, they are nonetheless reasonable in some situations. Copyright © 2009 John Wiley & Sons, Ltd.
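For a binary outcome, the no-assumption (worst-case) bounds that the paper's monotonicity conditions tighten can be sketched: among subjects assigned to treatment, the unobserved potential outcome of those who switched is replaced by its logical extremes, 0 and 1. The paper's contribution is that, under monotonicity, one of these crude extremes can be replaced by the per-protocol mean; the sketch below shows only the assumption-free starting point, with hypothetical numbers:

```python
def no_assumption_bounds(mean_compliers, p_comply):
    """Worst-case bounds on E[Y(1)] for a binary outcome when a
    fraction (1 - p_comply) of the treatment arm switched away:
    their unobserved potential outcome is bounded by 0 and 1."""
    lower = mean_compliers * p_comply   # switchers all have Y(1) = 0
    upper = lower + (1 - p_comply)      # switchers all have Y(1) = 1
    return lower, upper

lo, hi = no_assumption_bounds(mean_compliers=0.6, p_comply=0.8)
# bounds on E[Y(1)]: [0.48, 0.68]; the width equals the switching rate
```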

8.
The three‐arm clinical trial design, which includes a test treatment, an active reference, and placebo control, is the gold standard for the assessment of non‐inferiority. In the presence of non‐compliance, one common concern is that an intent‐to‐treat (ITT) analysis (which is the standard approach to non‐inferiority trials), tends to increase the chances of erroneously concluding non‐inferiority, suggesting that the per‐protocol (PP) analysis may be preferable for non‐inferiority trials despite its inherent bias. The objective of this paper was to develop statistical methodology for dealing with non‐compliance in three‐arm non‐inferiority trials for censored, time‐to‐event data. Changes in treatment were here considered the only form of non‐compliance. An approach using a three‐arm rank preserving structural failure time model and G‐estimation analysis is here presented. Using simulations, the impact of non‐compliance on non‐inferiority trials was investigated in detail using ITT, PP analyses, and the present proposed method. Results indicate that the proposed method shows good characteristics, and that neither ITT nor PP analyses can always guarantee the validity of the non‐inferiority conclusion. A Statistical Analysis System program for the implementation of the proposed test procedure is available from the authors upon request. Copyright © 2014 John Wiley & Sons, Ltd.

9.
The standard approach for analysing a randomized clinical trial is based on intent-to-treat (ITT) where subjects are analysed according to their assigned treatment group regardless of actual adherence to the treatment protocol. For therapeutic equivalence trials, it is a common concern that an ITT analysis increases the chance of erroneously concluding equivalence. In this paper, we formally investigate the impact of non-compliance on an ITT analysis of equivalence trials with a binary outcome. We assume 'all-or-none' compliance and independence between compliance and the outcome. Our results indicate that non-compliance does not always make it easier to demonstrate equivalence. The direction and magnitude of changes in the type I error rate and power of the study depend on the patterns of non-compliance, event probabilities, the margin of equivalence and other factors.
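The direction-of-bias point can be illustrated deterministically. With all-or-none noncompliance independent of outcome, each arm's ITT event probability is a mixture of the on-treatment probability and the event probability under whatever noncompliers actually receive; depending on the pattern, the ITT difference can be smaller or larger than the on-treatment difference. All numbers below are hypothetical:

```python
def itt_event_probs(p_trt, p_ctl, comply_trt, comply_ctl, p_off):
    """ITT event probabilities when noncompliers in either arm
    experience event probability p_off (e.g. receiving no treatment),
    with compliance independent of outcome."""
    q_trt = comply_trt * p_trt + (1 - comply_trt) * p_off
    q_ctl = comply_ctl * p_ctl + (1 - comply_ctl) * p_off
    return q_trt, q_ctl

# equal on-treatment event rates (truly equivalent arms) ...
q1, q0 = itt_event_probs(0.30, 0.30, comply_trt=0.9, comply_ctl=0.6,
                         p_off=0.5)
# ... yet differential noncompliance separates the ITT rates
# (q1 = 0.32, q0 = 0.38), working AGAINST a conclusion of equivalence
```

This is the abstract's point in miniature: noncompliance shifts both type I error and power, and not always in the anti-conservative direction.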

10.
Group Randomized Trials (GRTs) randomize groups of people to treatment or control arms instead of individually randomizing subjects. When each subject has a binary outcome, over-dispersed binomial data may result, quantified as an intra-cluster correlation (ICC). Typically, GRTs have a small number, n, of independent clusters, each of which can be quite large. Treating the ICC as a nuisance parameter, inference for a treatment effect can be done using quasi-likelihood with a logistic link. A Wald statistic, which, under standard regularity conditions, has an asymptotic standard normal distribution, can be used to test for a marginal treatment effect. However, we have found in our setting that the Wald statistic may have a variance less than 1, resulting in a test size smaller than its nominal value. This problem is most apparent when marginal probabilities are close to 0 or 1, particularly when n is small and the ICC is not negligible. When the ICC is known, we develop a method for adjusting the estimated standard error appropriately such that the Wald statistic will approximately have a standard normal distribution. We also propose ways to handle non-nominal test sizes when the ICC is estimated. We demonstrate the utility of our methods through simulation results covering a variety of realistic settings for GRTs.
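The paper's adjustment targets the small-sample behavior of the quasi-likelihood Wald variance; the familiar first-order ingredient it builds on is the design effect, which inflates a naive binomial standard error according to cluster size and ICC. A sketch of that baseline correction (function names and numbers are illustrative, not the paper's refined known-ICC adjustment):

```python
from math import sqrt

def design_effect(cluster_size, icc):
    """Variance inflation factor for cluster-correlated binary data:
    1 + (m - 1) * ICC for clusters of size m."""
    return 1 + (cluster_size - 1) * icc

def adjusted_se(p_hat, n_subjects, cluster_size, icc):
    """Naive binomial SE of a proportion, inflated by the design
    effect to account for intra-cluster correlation."""
    naive = sqrt(p_hat * (1 - p_hat) / n_subjects)
    return naive * sqrt(design_effect(cluster_size, icc))

# 10 clusters of 20 subjects, ICC 0.05: variance inflated ~1.95x
se = adjusted_se(p_hat=0.3, n_subjects=200, cluster_size=20, icc=0.05)
```

An under-inflated SE is exactly what makes the Wald statistic's variance fall below 1 in the settings the paper studies.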

11.
Hollis S. Statistics in Medicine 2002; 21(24):3823-3834.
Many clinical trials are analysed using an intention-to-treat (ITT) approach. A full application of the ITT approach is only possible when complete outcome data are available for all randomized subjects. In a recent survey of clinical trial reports including an ITT analysis, complete case analysis (excluding all patients with a missing response) was common. This does not comply with the basic principles of ITT since not all randomized subjects are included in the analysis. Analyses of data with missing values are based on untestable assumptions, and so sensitivity analysis presenting a range of estimates under alternative assumptions about the missing-data mechanism is recommended. For binary outcome, extreme case analysis has been suggested as a simple form of sensitivity analysis, but this is rarely conclusive. A graphical sensitivity analysis is proposed which displays the results of all possible allocations of cases with missing binary outcome. Extension to allow binomial variation in outcome is also considered. The display is based on easily interpretable parameters and allows informal examination of the effects of varying prior beliefs.
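The proposed display rests on a simple enumeration: allocate every missing binary outcome to success or failure in each arm, compute the resulting risk difference, and plot the grid; the extreme-case analysis reads off the corners. A minimal version of that enumeration, with hypothetical counts:

```python
def sensitivity_grid(succ_t, obs_t, miss_t, succ_c, obs_c, miss_c):
    """Risk difference (treatment - control) for every possible
    allocation of missing binary outcomes to success, in both arms.
    succ/obs/miss are observed successes, observed n, and missing n."""
    grid = {}
    for s_t in range(miss_t + 1):
        for s_c in range(miss_c + 1):
            p_t = (succ_t + s_t) / (obs_t + miss_t)
            p_c = (succ_c + s_c) / (obs_c + miss_c)
            grid[(s_t, s_c)] = p_t - p_c
    return grid

grid = sensitivity_grid(succ_t=30, obs_t=45, miss_t=5,
                        succ_c=25, obs_c=46, miss_c=4)
extremes = (min(grid.values()), max(grid.values()))
# the extreme-case corners: all missing failures in one arm and
# successes in the other, and vice versa
```

The paper's graphical version plots this grid so a reader can see which allocations (i.e., which prior beliefs about the missing-data mechanism) overturn the conclusion.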

12.
In randomized clinical trials, it is common that patients may stop taking their assigned treatments and then switch to a standard treatment (standard of care available to the patient) but not the treatments under investigation. Although limited, the retrieved data on patients who switch to standard treatment, called off‐protocol data, can be highly valuable in assessing the treatment effect associated with the experimental therapy; however, they lead to a complex data structure requiring the development of models that link the information in the per‐protocol data with the off‐protocol data. In this paper, we develop a novel Bayesian method to jointly model longitudinal treatment measurements under various dropout scenarios. Specifically, we propose a multivariate normal mixed‐effects model for repeated measurements from the assigned treatments and the standard treatment, a multivariate logistic regression model for those stopping the assigned treatments, logistic regression models for those starting a standard treatment off protocol, and a conditional multivariate logistic regression model for completely withdrawing from the study. We assume that withdrawing from the study is non‐ignorable, but intermittent missingness is assumed to be at random. We examine various properties of the proposed model. We develop an efficient Markov chain Monte Carlo sampling algorithm. We analyze in detail via the proposed method a real dataset from a clinical trial. Copyright © 2013 John Wiley & Sons, Ltd.

13.
Cai Z, Kuroki M, Sato T. Statistics in Medicine 2007; 26(16):3188-3204.
Consider a clinical trial where subjects are randomized to two treatment arms but compliance to the assignment is not perfect. Concerning this problem, this paper derives non-parametric bounds on treatment effects by making use of observed covariate information. The new bounds are narrower and more informative than the existing ones. In addition, a new non-parametric point estimation approach is proposed based on stratified analysis. Furthermore, to examine the accuracy of estimating the proposed bounds, we provide variance estimators for the proposed approach. The results of this paper can yield credible information on treatment effects, which will be useful for medical research and public health policy analysis.

14.
Hong S, Wang Y. Statistics in Medicine 2007; 26(19):3525-3534.
Randomized designs have been increasingly called for use in phase II oncology clinical trials to protect against potential patient selection bias. However, formal statistical comparison is rarely conducted due to the sample size restriction, despite its appeal. In this paper, we offer an approach to sample size reduction by extending the three-outcome design of Sargent et al. (Control Clin. Trials 2001; 22:117-125) for single-arm trials to randomized comparative trials. In addition to the usual two outcomes of a hypothesis testing (rejecting the null hypothesis or rejecting the alternative hypothesis), the three-outcome comparative design allows a third outcome of rejecting neither hypothesis when the testing result is in some 'grey area' and leaves the decision to the clinical judgment based on the overall evaluation of trial outcomes and other relevant factors. By allowing a reasonable region of uncertainty, the three-outcome design enables formal statistical comparison with considerably smaller sample size, compared to the standard two-outcome comparative design. Statistical formulation of the three-outcome comparative design is discussed for both the single-stage and two-stage trials. Sample sizes are tabulated for some common clinical scenarios.
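The operating characteristics of a three-outcome rule can be sketched with a normal approximation: a standardized test statistic is compared to two critical values, and everything in between is the grey zone. The critical values and effect sizes below are hypothetical, not the paper's tabulated designs:

```python
from statistics import NormalDist

def three_outcome_probs(effect, se, c_low, c_high):
    """Probabilities of the three decisions for a normally distributed
    test statistic Z = estimate / se with two critical values:
      Z >= c_high : reject H0 (treatment promising)
      Z <= c_low  : reject H1 (treatment not promising)
      otherwise   : grey area, deferred to clinical judgment."""
    z = NormalDist(mu=effect / se, sigma=1.0)
    p_reject_null = 1 - z.cdf(c_high)
    p_reject_alt = z.cdf(c_low)
    return p_reject_null, 1 - p_reject_null - p_reject_alt, p_reject_alt

# under the null (no effect), the grey zone absorbs inconclusive results
p_h0, p_grey, p_h1 = three_outcome_probs(effect=0.0, se=1.0,
                                         c_low=0.25, c_high=1.645)
```

Because the grey zone tolerates inconclusive results, each of the two rejection regions can be sized with fewer patients than a two-outcome test demanding a forced decision, which is the sample-size saving the paper quantifies.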

15.
The most common primary statistical end point of a phase II clinical trial is the categorization of a patient as either a ‘responder’ or ‘nonresponder’. The primary objective of typical randomized phase II anticancer clinical trials is to evaluate experimental treatments that potentially will increase response rate over a historical baseline and select one to consider for further study. We propose single‐stage and two‐stage designs for randomized phase II clinical trials, precisely defining various type I error rates and powers to achieve this objective. We develop a program to compute these error rates and powers exactly, and we provide many design examples to satisfy pre‐fixed requirements on error rates and powers. Finally, we apply our method to a randomized phase II trial in patients with relapsed non‐Hodgkin's disease. Copyright © 2013 John Wiley & Sons, Ltd.
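Exact error rates for such designs come from enumerating the joint binomial distribution of responder counts in the two arms. A sketch of that computation for a single-stage selection rule (arm sizes, response rates, and the margin are hypothetical; the paper's designs use its own precisely defined error rates):

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial probability mass function."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def prob_select(n_per_arm, p_exp, p_ctl, margin):
    """Exact probability that the experimental arm yields at least
    `margin` more responders than the control arm, computed by
    enumerating the joint binomial distribution."""
    return sum(binom_pmf(xe, n_per_arm, p_exp) * binom_pmf(xc, n_per_arm, p_ctl)
               for xe in range(n_per_arm + 1)
               for xc in range(n_per_arm + 1)
               if xe - xc >= margin)

type_i = prob_select(30, 0.20, 0.20, margin=5)  # false-selection rate
power = prob_select(30, 0.40, 0.20, margin=5)   # correct-selection rate
```

Tabulating such exact probabilities over candidate (n, margin) pairs is how design examples satisfying pre-fixed error rates and powers are produced.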

16.
Biostatisticians have frequently uncritically accepted the measurements provided by their medical colleagues engaged in clinical research. Such measures often involve considerable loss of information. Particularly unfortunate is the widespread use of the so‐called ‘responder analysis’, which may involve not only a loss of information through dichotomization, but also extravagant and unjustified causal inference regarding individual treatment effects at the patient level, and, increasingly, the use of the so‐called number needed to treat scale of measurement. Other problems involve inefficient use of baseline measurements, the use of covariates measured after the start of treatment, the interpretation of titrations and composite response measures. Many of these bad practices are becoming enshrined in the regulatory guidance to the pharmaceutical industry. We consider the losses involved in inappropriate measures and suggest that statisticians should pay more attention to this aspect of their work. Copyright © 2009 John Wiley & Sons, Ltd.

17.
Shao J, Chang M, Chow SC. Statistics in Medicine 2005; 24(12):1783-1790.
In cancer clinical trials, it is not uncommon for some patients to switch treatments, out of ethical considerations, due to lack of efficacy and/or disease progression. This treatment switching makes it difficult to evaluate the efficacy of the treatment under investigation. Existing methods assume random treatment switching and do not take into consideration the prognosis and/or investigator's assessment that leads to patients' treatment switch. In this paper, we model patients' treatment switching effect in a latent event times model under a parametric setting or a latent hazard rate model under the semi-parametric proportional hazards model. Statistical inference procedures under both models are provided. A simulation study is performed to investigate the performance of the proposed methods.

18.
Zhou XH, Li SM. Statistics in Medicine 2006; 25(16):2737-2761.
In this paper, we considered a missing outcome problem in causal inferences for a randomized encouragement design study. We proposed both moment and maximum likelihood estimators for the marginal distributions of potential outcomes and the local complier average causal effect (CACE) parameter. We illustrated our methods in a randomized encouragement design study on the effectiveness of flu shots.
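The complete-data version of the moment estimator the paper extends is the familiar instrumental-variable ratio: the ITT effect on the outcome scaled by the ITT effect on treatment receipt. A sketch with hypothetical flu-shot-style summaries (the paper's contribution, handling missing outcomes, is not attempted here):

```python
def cace(y_enc_mean, y_noenc_mean, trt_enc_rate, trt_noenc_rate):
    """Moment (instrumental-variable) estimator of the local complier
    average causal effect in an encouragement design: the ITT effect
    on the outcome divided by the ITT effect on treatment receipt."""
    uptake = trt_enc_rate - trt_noenc_rate
    if uptake <= 0:
        raise ValueError("encouragement must increase treatment uptake")
    return (y_enc_mean - y_noenc_mean) / uptake

# hypothetical summaries: encouragement raises vaccination from 10%
# to 40% and lowers the adverse-outcome rate from 12% to 9%
effect = cace(0.09, 0.12, 0.40, 0.10)  # ≈ -0.10 among compliers
```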

19.
Bayesian approaches to inference in cluster randomized trials have been investigated for normally distributed and binary outcome measures. However, relatively little attention has been paid to outcome measures which are counts of events. We discuss an extension of previously published Bayesian hierarchical models to count data, which usually can be assumed to be distributed according to a Poisson distribution. We develop two models, one based on the traditional rate ratio, and one based on the rate difference which may often be more intuitively interpreted for clinical trials, and is needed for economic evaluation of interventions. We examine the relationship between the intracluster correlation coefficient (ICC) and the between‐cluster variance for each of these two models. In practice, this allows one to use the previously published evidence on ICCs to derive an informative prior distribution which can then be used to increase the precision of the posterior distribution of the ICC. We demonstrate our models using a previously published trial assessing the effectiveness of an educational intervention and a prior distribution previously derived. We assess the robustness of the posterior distribution for effectiveness to departures from a normal distribution of the random effects. Copyright © 2009 John Wiley & Sons, Ltd.

20.
Missing outcome data is a crucial threat to the validity of treatment effect estimates from randomized trials. The outcome distributions of participants with missing and observed data are often different, which increases bias. Causal inference methods may aid in reducing the bias and improving efficiency by incorporating baseline variables into the analysis. In particular, doubly robust estimators incorporate two nuisance parameters: the outcome regression and the missingness mechanism (ie, the probability of missingness conditional on treatment assignment and baseline variables), to adjust for differences in the observed and unobserved groups that can be explained by observed covariates. To consistently estimate the treatment effect, one of these nuisance parameters must be consistently estimated. Traditionally, nuisance parameters are estimated using parametric models, which often precludes consistency, particularly in moderate to high dimensions. Recent research on missing data has focused on data‐adaptive estimation to help achieve consistency, but the large sample properties of such methods are poorly understood. In this article, we discuss a doubly robust estimator that is consistent and asymptotically normal under data‐adaptive estimation of the nuisance parameters. We provide a formula for an asymptotically exact confidence interval under minimal assumptions. We show that our proposed estimator has smaller finite‐sample bias compared to standard doubly robust estimators. We present a simulation study demonstrating the enhanced performance of our estimators in terms of bias, efficiency, and coverage of the confidence intervals. We present the results of an illustrative example: a randomized, double‐blind phase 2/3 trial of antiretroviral therapy in HIV‐infected persons.
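The standard augmented inverse-probability-weighted (AIPW) form that such estimators build on can be sketched, along with a simulation showing the double-robustness property: either a correct missingness model or a correct outcome regression alone recovers the truth. The article's actual contribution, data-adaptive nuisance estimation with valid inference, is not attempted in this fixed-model sketch:

```python
import numpy as np

def aipw_mean(y, r, pi_hat, m_hat):
    """Doubly robust (AIPW) estimate of E[Y] when Y is observed only
    where r == 1, pi_hat is the estimated probability of being
    observed given covariates, and m_hat is the outcome-regression
    prediction for each subject."""
    y_obs = np.where(r == 1, y, 0.0)  # unobserved entries never used
    return np.mean(r * y_obs / pi_hat - (r - pi_hat) / pi_hat * m_hat)

# simulation: Y = X + noise, missingness depends on X, true E[Y] = 0
rng = np.random.default_rng(42)
n = 200_000
x = rng.normal(size=n)
y = x + rng.normal(size=n)
pi = 1.0 / (1.0 + np.exp(-x))       # true observation probability
r = rng.binomial(1, pi)

est_pi_right = aipw_mean(y, r, pi_hat=pi, m_hat=np.zeros(n))    # wrong m
est_m_right = aipw_mean(y, r, pi_hat=np.full(n, 0.5), m_hat=x)  # wrong pi
# either correct nuisance model alone keeps the estimate near E[Y] = 0
```

A complete-case mean here would be biased upward, since participants with large X are both more likely to be observed and have larger Y; the AIPW correction removes exactly that covariate-explainable difference.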

