Similar Articles
20 similar articles were retrieved.
1.
In the presence of non-compliance, conventional analysis by intention-to-treat provides an unbiased comparison of treatment policies but typically underestimates treatment efficacy. With all-or-nothing compliance, efficacy may be specified as the complier-average causal effect (CACE), where compliers are those who receive intervention if and only if randomised to it. We extend the CACE approach to model longitudinal data with time-dependent non-compliance, focusing on the situation in which those randomised to control may receive treatment, and allowing treatment effects to vary arbitrarily over time. Defining compliance type to be the time of surgical intervention if randomised to control, so that compliers are patients who would not have received treatment at all had they been randomised to control, we construct a causal model for the multivariate outcome conditional on compliance type and randomised arm. This model is applied to the trial of alternative regimens for glue ear treatment, which evaluated surgical interventions in childhood ear disease; outcomes are measured over five time points, and receipt of surgical intervention in the control arm may occur at any time. We fit the models using Markov chain Monte Carlo methods to obtain estimates of the CACE at successive times after receiving the intervention. In this trial, over half of those randomised to control eventually received the intervention. We find that surgery is more beneficial than control at 6 months, with a small but non-significant beneficial effect at 12 months. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

2.
A three-arm clinical trial design with an experimental treatment, an active control, and a placebo control, commonly referred to as the gold standard design, enables testing of non-inferiority or superiority of the experimental treatment compared with the active control. In this paper, we propose methods for designing and analyzing three-arm trials with negative binomially distributed endpoints. In particular, we develop a Wald-type test with a restricted maximum-likelihood variance estimator for testing non-inferiority or superiority. For this test, sample size and power formulas as well as optimal sample size allocations are derived. The performance of the proposed test is assessed in an extensive simulation study with regard to type I error rate, power, sample size, and sample size allocation. For the purpose of comparison, Wald-type statistics with a sample variance estimator and an unrestricted maximum-likelihood estimator are included in the simulation study. We find that the proposed Wald-type test with a restricted variance estimator performs well across the considered scenarios and is therefore recommended for application in clinical trials. The methods proposed are motivated and illustrated by a recent clinical trial in multiple sclerosis. The R package ThreeArmedTrials, which implements the methods discussed in this paper, is available on CRAN. Copyright © 2015 John Wiley & Sons, Ltd.
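As a rough illustration of the kind of test involved, here is a minimal R sketch of a Wald-type non-inferiority test for negative binomially distributed endpoints on simulated data. It uses the unrestricted maximum-likelihood variance from MASS::glm.nb rather than the restricted variance estimator recommended above, and the group sizes, event rates, and margin are illustrative assumptions, not values from the multiple sclerosis trial.

library(MASS)

set.seed(1)
n <- 200
group <- factor(rep(c("R", "E"), each = n), levels = c("R", "E"))  # active reference, experimental
y <- c(rnbinom(n, mu = 1.0, size = 1),      # simulated event counts, reference arm
       rnbinom(n, mu = 0.9, size = 1))      # simulated event counts, experimental arm

fit <- glm.nb(y ~ group)                    # unrestricted maximum-likelihood fit
est <- coef(fit)["groupE"]                  # log rate ratio, experimental vs reference
se  <- sqrt(vcov(fit)["groupE", "groupE"])

margin <- log(1.2)                          # assumed non-inferiority margin on the rate-ratio scale (fewer events is better)
z <- (est - margin) / se                    # H0: rate ratio >= 1.2; reject for small z
c(log_rate_ratio = unname(est), z = unname(z), one_sided_p = pnorm(unname(z)))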

3.
A critical issue in the analysis of clinical trials is patients' noncompliance with assigned treatments. In the context of a binary treatment with all-or-nothing compliance, the intent-to-treat analysis is a straightforward approach to estimating the effectiveness of the trial. In contrast, there exist three commonly used estimators, with varying statistical properties, for the efficacy of the trial, formally known as the complier-average causal effect. The instrumental variable estimator may be unbiased but can be extremely variable in many settings. The as-treated and per-protocol estimators are usually more efficient than the instrumental variable estimator, but they may suffer from selection bias. We propose a synthetic approach that incorporates all three estimators in a data-driven manner. The synthetic estimator is a linear convex combination of the instrumental variable, per-protocol, and as-treated estimators, resembling the popular model-averaging approach in the statistical literature. However, our synthetic approach is nonparametric; thus, it is applicable to a variety of outcome types without specific distributional assumptions. We also discuss the construction of the synthetic estimator using an analytic form derived from a simple normal mixture distribution. We apply the synthetic approach to a clinical trial for post-traumatic stress disorder.
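To make the three estimators concrete, the following R sketch computes the instrumental variable, per-protocol, and as-treated estimators of a risk difference on simulated data with one-sided noncompliance, and then forms a convex combination. The equal weights are placeholders only, and the one-sided-noncompliance data-generating model is an illustrative assumption; the synthetic estimator described above chooses the convex weights in a data-driven manner.

set.seed(2)
n <- 1000
z <- rbinom(n, 1, 0.5)                          # randomised arm
complier <- rbinom(n, 1, 0.7)                   # latent compliance type (known here only because data are simulated)
d <- z * complier                               # treatment actually received (one-sided noncompliance assumed)
y <- rbinom(n, 1, plogis(-0.5 + 0.8 * d + 0.3 * complier))

itt <- mean(y[z == 1]) - mean(y[z == 0])
iv  <- itt / (mean(d[z == 1]) - mean(d[z == 0]))             # instrumental variable estimator
pp  <- mean(y[z == 1 & d == 1]) - mean(y[z == 0 & d == 0])   # per-protocol estimator
at  <- mean(y[d == 1]) - mean(y[d == 0])                     # as-treated estimator

w <- c(1, 1, 1) / 3                                          # placeholder convex weights
synthetic <- sum(w * c(iv, pp, at))
round(c(IV = iv, PP = pp, AT = at, synthetic = synthetic), 3)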

4.
The three-arm clinical trial design, which includes a test treatment, an active reference, and a placebo control, is the gold standard for the assessment of non-inferiority. In the presence of non-compliance, one common concern is that an intent-to-treat (ITT) analysis (the standard approach to non-inferiority trials) tends to increase the chance of erroneously concluding non-inferiority, suggesting that the per-protocol (PP) analysis may be preferable for non-inferiority trials despite its inherent bias. The objective of this paper was to develop statistical methodology for dealing with non-compliance in three-arm non-inferiority trials with censored, time-to-event data. Changes in treatment were considered the only form of non-compliance. An approach using a three-arm rank preserving structural failure time model and G-estimation analysis is presented. Using simulations, the impact of non-compliance on non-inferiority trials was investigated in detail under ITT analysis, PP analysis, and the proposed method. Results indicate that the proposed method shows good characteristics and that neither ITT nor PP analyses can always guarantee the validity of the non-inferiority conclusion. A SAS (Statistical Analysis System) program for the implementation of the proposed test procedure is available from the authors upon request. Copyright © 2014 John Wiley & Sons, Ltd.
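For readers unfamiliar with rank preserving structural failure time models, the R sketch below shows the G-estimation idea in a deliberately simplified two-arm setting without censoring: observed time is split into time on and off treatment, and the causal parameter psi is the value at which the back-transformed untreated times look identical across randomised arms. The paper's three-arm method and its handling of censoring are considerably more involved; all names and parameter values here are illustrative.

library(survival)

set.seed(3)
n <- 400
z <- rbinom(n, 1, 0.5)                      # randomised arm
true_psi <- -0.3                            # treatment multiplies survival time by exp(-psi) > 1
t_untreated <- rexp(n, rate = 0.1)          # latent untreated failure time
treated <- rbinom(n, 1, ifelse(z == 1, 1, 0.4))   # control patients may switch to treatment
t_obs <- ifelse(treated == 1, t_untreated * exp(-true_psi), t_untreated)
t_on  <- ifelse(treated == 1, t_obs, 0)     # time spent on treatment (all or nothing here)
t_off <- t_obs - t_on

logrank_z <- function(psi) {
  u  <- t_off + exp(psi) * t_on             # counterfactual untreated time under psi
  lr <- survdiff(Surv(u, rep(1, n)) ~ z)    # log-rank comparison of the two randomised arms
  sign(lr$obs[2] - lr$exp[2]) * sqrt(lr$chisq)
}
psi_hat <- uniroot(logrank_z, interval = c(-2, 2))$root   # psi at which the arms are balanced
psi_hat                                     # should be close to true_psi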

5.
This article discusses joint modeling of compliance and outcome for longitudinal studies when noncompliance is present. We focus on two-arm randomized longitudinal studies in which subjects are randomized at baseline, treatment is applied repeatedly over time, and compliance behaviors and clinical outcomes are measured and recorded repeatedly over time. In the proposed Markov compliance and outcome model, we use the potential outcome framework to define pre-randomization principal strata from the joint distribution of compliance under the treatment and control arms, and estimate the effect of treatment within each principal stratum. Besides the causal effect of the treatment, the proposed model can estimate the impact of the causal effect of the treatment at a given time on future compliance. Bayesian methods are used to estimate the parameters. The results are illustrated using a study assessing the effect of cognitive behavior therapy on depression. A simulation study is used to assess the repeated sampling properties of the proposed model. Copyright © 2013 John Wiley & Sons, Ltd.

6.
This article considers the problem of examining time-varying causal effect moderation using observational, longitudinal data in which treatment, candidate moderators, and possible confounders are time varying. The structural nested mean model (SNMM) is used to specify the moderated time-varying causal effects of interest in a conditional mean model for a continuous response given time-varying treatments and moderators. We present an easy-to-use estimator of the SNMM that combines an existing regression-with-residuals (RR) approach with an inverse-probability-of-treatment weighting (IPTW) strategy. The RR approach has been shown to identify the moderated time-varying causal effects when the time-varying moderators are also the sole time-varying confounders. The proposed IPTW+RR approach provides estimators of the moderated time-varying causal effects in the SNMM in the presence of an additional, auxiliary set of known and measured time-varying confounders. We use a small simulation experiment to compare IPTW+RR with the traditional regression approach and to compare the small- and large-sample properties of asymptotic versus bootstrap estimators of the standard errors for the IPTW+RR approach. This article clarifies the distinction between time-varying moderators and time-varying confounders. We illustrate the methodology in a case study assessing whether time-varying substance use moderates treatment effects on future substance use. Copyright © 2013 John Wiley & Sons, Ltd.

7.
Motivated by a study of prompt coronary angiography in myocardial infarction, we propose a method to estimate the causal effect of a treatment in two-arm experimental studies with possible noncompliance in both the treatment and control arms. We base the method on a causal model for repeated binary outcomes (before and after the treatment), which includes individual covariates and latent variables for the unobserved heterogeneity between subjects. Moreover, given the type of noncompliance, the model assumes the existence of three subpopulations of subjects: compliers, never-takers, and always-takers. We estimate the model using a two-step estimator: at the first step, we estimate the probability that a subject belongs to one of the three subpopulations on the basis of the available covariates; at the second step, we estimate the causal effects through a conditional logistic method, the implementation of which depends on the results from the first step. The estimator is approximately consistent and, under certain circumstances, exactly consistent. We provide evidence that the bias is negligible in relevant situations. We compute standard errors on the basis of a sandwich formula. The application shows that prompt coronary angiography in patients with myocardial infarction may significantly decrease the risk of other events within the next 2 years, with a log-odds of about −2. Because noncompliance is substantial among patients given the treatment owing to high-risk conditions, classical estimators fail to detect, or at least underestimate, this effect. Copyright © 2013 John Wiley & Sons, Ltd.

8.
Mendelian randomization studies estimate causal effects using genetic variants as instruments. Instrumental variable methods are straightforward for linear models, but epidemiologists often use odds ratios to quantify effects, and odds ratios are often the quantities reported in meta-analyses. Many applications of Mendelian randomization dichotomize genotype and estimate the population causal log odds ratio for a unit increase in exposure by dividing the genotype-disease log odds ratio by the difference in mean exposure between genotypes. This 'Wald-type' estimator is biased even in large samples, but whether the magnitude of the bias is of practical importance has been unclear. We study the large-sample bias of this estimator in a simple model with a continuous, normally distributed exposure, a single unobserved confounder that is not an effect modifier, and interpretable parameters. We focus on parameter values that reflect scenarios in which we apply Mendelian randomization, including realistic values for the degree of confounding and the strength of the causal effect. We evaluate this estimator and the causal odds ratio using numerical integration and obtain approximate analytic expressions to check results and gain insight. A small simulation study examines finite-sample bias and mild violations of the normality assumption. For our simple data-generating model, we find that the Wald estimator is asymptotically biased, with a bias of around 10% in fairly typical Mendelian randomization scenarios that can be larger in more extreme situations. Recently developed methods such as structural mean models require fewer untestable assumptions, and we recommend their use when the individual-level data they require are available. The Wald-type estimator may retain a role as an approximate method for meta-analysis based on summary data. Copyright © 2012 John Wiley & Sons, Ltd.
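The Wald-type estimator described above has a very simple form, shown in the following R sketch on simulated data; the instrument strength, degree of confounding, and effect size are illustrative assumptions in the spirit of the scenarios studied in the paper.

set.seed(4)
n <- 50000
g <- rbinom(n, 1, 0.3)                              # dichotomised genotype (instrument)
u <- rnorm(n)                                       # unobserved confounder
x <- 0.25 * g + 0.5 * u + rnorm(n)                  # continuous exposure
y <- rbinom(n, 1, plogis(-2 + 0.4 * x + 0.5 * u))   # disease; conditional log OR per unit exposure is 0.4

log_or_gd <- coef(glm(y ~ g, family = binomial))["g"]   # genotype-disease log odds ratio
delta_x   <- mean(x[g == 1]) - mean(x[g == 0])          # difference in mean exposure between genotypes
wald_log_or <- log_or_gd / delta_x                      # Wald-type estimate of the causal log odds ratio
unname(wald_log_or)   # how far this sits from the population causal log odds ratio is the bias studied above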

9.
We consider estimation of treatment effects in two-stage adaptive multi-arm trials with a common control. The best treatment is selected at an interim analysis, and the primary endpoint is modeled via a Cox proportional hazards model. In this setting, the maximum partial-likelihood estimator of the log hazard ratio of the selected treatment overestimates the true treatment effect. Several methods for reducing the selection bias have been proposed for normal endpoints, including an iterative method based on the estimated conditional selection biases and a shrinkage approach based on empirical Bayes theory. We adapt these methods to time-to-event data, compare the bias and mean squared error of all methods in an extensive simulation study, and apply the proposed methods to reconstructed data from the FOCUS trial. We find that all methods tend to overcorrect the bias, and only the shrinkage methods can reduce the mean squared error. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

10.
The case–control study is a simple and useful method for characterizing the effect of a gene, the effect of an exposure, and the interaction between the two. The control-free case-only study is an even simpler design if interest centres on the gene–environment interaction only. It requires the sometimes plausible assumption that the gene under study is independent of exposures among the non-diseased in the study population. The Hardy–Weinberg equilibrium is also sometimes reasonable to assume. This paper presents an easy-to-implement approach for analyzing case–control and case-only studies under the above dual assumptions. The proposed approach, 'conditional logistic regression with counterfactuals', offers the flexibility for complex modeling yet remains well within the reach of practicing epidemiologists. When the dual assumptions are met, conditional logistic regression with counterfactuals is unbiased and has the correct type I error rate. It also yields smaller variances and achieves higher power than the conventional analysis (unconditional logistic regression). Copyright © 2010 John Wiley & Sons, Ltd.

11.
In this article, we study blinded sample size re-estimation in the 'gold standard' design with an internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active control and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials, in which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing–Ganju variance estimator results in underpowered trials; this is expected, because an overestimation of the variance, and thus of the sample size, is in general required for the re-estimation procedure to eventually meet the target power. To overcome this problem, we propose an inflation factor for sample size re-estimation with the Xing–Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing–Ganju variance estimator, such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and can therefore be calculated prior to a trial. Moreover, we prove that sample size re-estimation based on the Xing–Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.

12.
Clinical trials in phase II of drug development are frequently conducted as single-arm, two-stage studies with a binary endpoint. Recently, adaptive designs have been proposed for this setting that enable a midcourse modification of the sample size. While these designs are elaborated with respect to hypothesis testing by assuring control of the type I error rate, the topic of point estimation has not yet been addressed. For adaptive designs with a prespecified sample size recalculation rule, we propose a new point estimator that both assures compatibility of the estimate with the test decision and minimizes average mean squared error. This estimator can be interpreted as a constrained posterior mean estimate based on the non-informative Jeffreys prior. A comparative investigation of the operating characteristics demonstrates the favorable properties of the proposed approach. Copyright © 2016 John Wiley & Sons, Ltd.
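For orientation, the unconstrained building block of such an estimator is simply the posterior mean of the response probability under the Jeffreys prior, as in the minimal R sketch below; the estimator in the paper additionally constrains this quantity so that the estimate and the test decision of the adaptive two-stage design remain compatible. The numbers are illustrative.

# Posterior under a Beta(1/2, 1/2) (Jeffreys) prior for x responses in n patients
# is Beta(x + 1/2, n - x + 1/2), so the posterior mean is (x + 1/2) / (n + 1).
jeffreys_mean <- function(x, n) (x + 0.5) / (n + 1)
jeffreys_mean(x = 12, n = 34)   # e.g. 12 responses among 34 patients over both stages (illustrative)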

13.
We propose a method for estimating the marginal causal log-odds ratio for binary outcomes under treatment non-compliance in placebo-randomized trials. This estimation method is a marginal alternative to the causal logistic approach by Nagelkerke et al. (2000) that conditions on partially unknown compliance (that is, adherence to treatment) status, and also differs from previous approaches that estimate risk differences or ratios in subgroups defined by compliance status. The marginal causal method proposed in this paper is based on an extension of Robins' G-estimation approach for fitting linear or log-linear structural nested models to a logistic model. Comparing the marginal and conditional causal log-odds ratio estimates provides a way of assessing the magnitude of unmeasured confounding of the treatment effect due to treatment non-adherence. More specifically, we show through simulations that under weak confounding, the conditional and marginal procedures yield similar estimates, whereas under stronger confounding, they behave differently in terms of bias and confidence interval coverage. The parametric structures that represent such confounding are not identifiable. Hence, the proof of consistency of causal estimators and corresponding simulations are based on two different models that fully identify the causal effects being estimated. These models differ in the way that compliance is related to potential outcomes, and thus differ in the way that the causal effect is identified. The simulations also show that the proposed marginal causal estimation approach performs well in terms of bias under the different levels of confounding due to non-adherence and under different causal logistic models. We also provide results from the analyses of two data sets further showing how a comparison of the marginal and conditional estimators can help evaluate the magnitude of confounding due to non-adherence.

14.
Matched case-control designs are commonly used to control for potential confounding factors in genetic epidemiology studies, especially epigenetic studies of DNA methylation. Compared with unmatched case-control studies with high-dimensional genomic or epigenetic data, there have been few variable selection methods for matched sets. In an earlier paper, we proposed a penalized logistic regression model for the analysis of unmatched DNA methylation data using a network-based penalty. However, for the matched designs popular in epigenetic studies, which compare DNA methylation between tumor and adjacent non-tumor tissues or between pre-treatment and post-treatment conditions, applying ordinary logistic regression while ignoring matching is known to introduce serious bias in estimation. In this paper, we developed a penalized conditional logistic model using the network-based penalty that encourages a grouping effect of (1) linked Cytosine-phosphate-Guanine (CpG) sites within a gene or (2) linked genes within a genetic pathway for the analysis of matched DNA methylation data. In our simulation studies, we demonstrated the superiority of the conditional logistic model over the unconditional logistic model in high-dimensional variable selection problems for matched case-control data. We further investigated the benefits of utilizing biological group or graph information for matched case-control data. We applied the proposed method to a genome-wide DNA methylation study of hepatocellular carcinoma (HCC), in which we investigated the DNA methylation levels of tumor and adjacent non-tumor tissues from HCC patients using the Illumina Infinium HumanMethylation27 BeadChip. Several new CpG sites and genes known to be related to HCC were identified but had been missed by the standard method in the original paper. Copyright © 2012 John Wiley & Sons, Ltd.

15.
In the presence of time-dependent confounding, there are several methods available to estimate treatment effects. With correctly specified models and appropriate structural assumptions, any of these methods could provide consistent effect estimates, but with real-world data all models will be misspecified and it is difficult to know whether assumptions are violated. In this paper, we investigate five methods: inverse probability weighting of marginal structural models, history-adjusted marginal structural models, sequential conditional mean models, the g-computation formula, and g-estimation of structural nested models. This work is motivated by an investigation of the effects of treatments in cystic fibrosis using the UK Cystic Fibrosis Registry data, focussing on two outcomes: lung function (continuous outcome) and annual number of days receiving intravenous antibiotics (count outcome). We identified five features of these data that may affect the performance of the methods: misspecification of the causal null, long-term treatment effects, effect modification by time-varying covariates, misspecification of the direction of causal pathways, and censoring. In simulation studies, under ideal settings, all five methods provide consistent estimates of the treatment effect with little difference between methods. However, all methods performed poorly under some settings, highlighting the importance of choosing appropriate methods based on the data available. Furthermore, with the count outcome, the issue of non-collapsibility makes comparison between methods delivering marginal and conditional effects difficult. In many situations, we would recommend using more than one of the available methods for analysis, since if the effect estimates are very different, this would indicate potential issues with the analyses.
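As a small concrete example of the first of the five methods, the R sketch below fits a marginal structural model by inverse probability weighting with two treatment times and one time-varying confounder; the data-generating model, variable names, and effect sizes are illustrative and far simpler than the cystic fibrosis registry setting.

set.seed(5)
n <- 5000
l1 <- rnorm(n)                                   # baseline covariate
a1 <- rbinom(n, 1, plogis(0.5 * l1))             # treatment at time 1
l2 <- rnorm(n, mean = 0.5 * l1 - 0.4 * a1)       # time-varying confounder affected by a1
a2 <- rbinom(n, 1, plogis(0.5 * l2 + 0.4 * a1))  # treatment at time 2
y  <- rnorm(n, mean = 1 - 0.3 * a1 - 0.3 * a2 + 0.5 * l1 + 0.5 * l2)

# Stabilised weights: numerators condition on treatment history only, denominators add confounders.
p1_den <- predict(glm(a1 ~ l1, family = binomial), type = "response")
p1_num <- predict(glm(a1 ~ 1, family = binomial), type = "response")
p2_den <- predict(glm(a2 ~ a1 + l1 + l2, family = binomial), type = "response")
p2_num <- predict(glm(a2 ~ a1, family = binomial), type = "response")
dens <- function(a, p) ifelse(a == 1, p, 1 - p)
sw <- (dens(a1, p1_num) * dens(a2, p2_num)) / (dens(a1, p1_den) * dens(a2, p2_den))

msm <- lm(y ~ a1 + a2, weights = sw)             # weighted marginal structural mean model
coef(msm)                                        # coefficients of a1 and a2 estimate the causal effects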

16.
Jo B. Statistics in Medicine 2002; 21(21): 3161–3181.
Randomized trials often face complications in assessing the effect of treatment because of study participants' non-compliance. If compliance type is observed in both the treatment and control conditions, the causal effect of treatment can be estimated for a targeted subpopulation of interest based on compliance type. However, in practice, compliance type is not observed completely. Given this missing compliance information, the complier average causal effect (CACE) estimation approach provides a way to estimate differential effects of treatments by imposing the exclusion restriction for non-compliers. Under the exclusion restriction, the CACE approach estimates the effect of treatment assignment for compliers, but disallows the effect of treatment assignment for non-compliers. The exclusion restriction plays a key role in separating outcome distributions based on compliance type. However, the CACE estimate can be substantially biased if the assumption is violated. This study examines the bias mechanism in the estimation of CACE when the assumption of the exclusion restriction is violated. How covariate information affects the sensitivity of the CACE estimate to violation of the exclusion restriction assumption is also examined.
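The bias mechanism discussed above can be seen directly in the usual moment estimator of the CACE, the intention-to-treat effect divided by the compliance proportion. The R sketch below computes this estimator on simulated data with one-sided noncompliance, once with the exclusion restriction holding and once with an assumed direct effect of assignment on never-takers; all parameter values are illustrative.

set.seed(6)
cace_hat <- function(direct_effect_nt) {
  n <- 100000
  z <- rbinom(n, 1, 0.5)                          # randomised assignment
  complier <- rbinom(n, 1, 0.6)                   # latent compliance type
  d <- z * complier                               # one-sided noncompliance
  y <- 0.5 * d + direct_effect_nt * z * (1 - complier) + rnorm(n)   # true CACE = 0.5
  (mean(y[z == 1]) - mean(y[z == 0])) / mean(complier[z == 1])
}
c(exclusion_holds = cace_hat(0), exclusion_violated = cace_hat(0.3))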

17.
We examine the properties of principal score methods for estimating the causal marginal odds ratio of an intervention for compliers in the context of a randomized controlled trial with non-compliers. The two-stage estimation approach was proposed for a linear model by Jo and Stuart (Statistics in Medicine 2009; 28: 2857–2875) under a principal ignorability (PI) assumption. Using a Monte Carlo simulation study, we compared the performance of several strategies for building and using principal score models, and the robustness of the method to violations of underlying assumptions, in particular PI. Results showed that the principal score approach yielded unbiased estimates of the causal marginal log odds ratio under PI, but that the method was sensitive to violations of PI, which occur in particular when confounders are omitted from the analysis. For principal score analysis, probability weighting performed slightly better than full matching or 1:1 matching. Concerning the variables to be included in principal score models, the lowest mean squared error was generally obtained when using the true confounders; using variables associated with the outcome but not with compliance, however, yielded very similar performance. Copyright © 2015 John Wiley & Sons, Ltd.
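A minimal R sketch of the two-stage principal score idea under one-sided noncompliance and principal ignorability is given below: a compliance model is fitted where compliance is observed (the intervention arm), and its predicted probabilities are used as weights in the control arm. The covariates, effect sizes, and data-generating model are illustrative assumptions, and the sketch shows only the probability-weighting strategy, not the matching strategies compared in the paper.

set.seed(7)
n <- 5000
dat <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
dat$z <- rbinom(n, 1, 0.5)                                         # randomised arm
dat$complier <- rbinom(n, 1, plogis(0.5 * dat$x1 - 0.5 * dat$x2))  # latent type; used only where observed
dat$d <- dat$z * dat$complier                                      # one-sided noncompliance
dat$y <- rbinom(n, 1, plogis(-0.5 + 0.8 * dat$d + 0.4 * dat$x1))

# Stage 1: principal score model, fitted in the arm where compliance is observed.
ps_fit <- glm(complier ~ x1 + x2, family = binomial, data = subset(dat, z == 1))
dat$ps <- predict(ps_fit, newdata = dat, type = "response")

# Stage 2: compliers' outcome, observed directly under intervention and
# estimated by principal-score weighting in the control arm.
mu1 <- mean(dat$y[dat$z == 1 & dat$d == 1])
mu0 <- weighted.mean(dat$y[dat$z == 0], w = dat$ps[dat$z == 0])
(mu1 / (1 - mu1)) / (mu0 / (1 - mu0))            # estimated causal marginal odds ratio for compliers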

18.
The 'gold standard' design for three-arm trials refers to trials with an active control and a placebo control in addition to the experimental treatment group. This trial design is recommended when it is ethically justifiable, and it allows the simultaneous comparison of experimental treatment, active control, and placebo. Parametric testing methods have been studied extensively in recent years. However, these methods often tend to be liberal or conservative when distributional assumptions are not met, particularly with small sample sizes. In this article, we introduce a studentized permutation test for testing non-inferiority and superiority of the experimental treatment compared with the active control in three-arm trials in the 'gold standard' design. The performance of the studentized permutation test for finite sample sizes is assessed in a Monte Carlo simulation study under various parameter constellations. Emphasis is put on whether the studentized permutation test meets the target significance level. For comparison purposes, commonly used Wald-type tests, which do not make any distributional assumptions, are included in the simulation study. The simulation study shows that the presented studentized permutation test for assessing non-inferiority in three-arm trials in the 'gold standard' design outperforms its competitors, for instance the test based on a quasi-Poisson model, for count data. The methods discussed in this paper are implemented in the R package ThreeArmedTrials, which is available on the Comprehensive R Archive Network (CRAN). Copyright © 2016 John Wiley & Sons, Ltd.
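The studentized permutation principle is easy to state for a single pairwise comparison, as in the R sketch below: the observed studentized statistic is compared with its permutation distribution after shifting the experimental arm by the non-inferiority margin. This is a deliberately simplified two-group version with an additive margin and illustrative data; the test in the paper is formulated for the full three-arm setting.

set.seed(8)
y_e <- rpois(60, 1.1)                          # experimental arm counts (illustrative)
y_r <- rpois(60, 1.0)                          # active reference arm counts (illustrative)
margin <- 0.3                                  # assumed additive non-inferiority margin (higher counts = worse)

t_stat <- function(a, b) (mean(a) - mean(b)) / sqrt(var(a) / length(a) + var(b) / length(b))

y_e_shifted <- y_e - margin                    # shift so the null boundary becomes equality of means
t_obs <- t_stat(y_e_shifted, y_r)
pooled <- c(y_e_shifted, y_r)
n_e <- length(y_e)
t_perm <- replicate(2000, {
  idx <- sample(length(pooled))                # random relabelling of the pooled observations
  t_stat(pooled[idx[1:n_e]], pooled[idx[-(1:n_e)]])
})
mean(t_perm <= t_obs)                          # one-sided permutation p-value; reject the inferiority null if small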

19.
Although the P value from a Wilcoxon-Mann-Whitney test is often reported for randomized experiments, it is rarely accompanied by a causal effect estimate and its confidence interval. The natural parameter for the Wilcoxon-Mann-Whitney test is the Mann-Whitney parameter, φ, which measures the probability that a randomly selected individual in the treatment arm will have a larger response than a randomly selected individual in the control arm (plus an adjustment for ties). We show that the Mann-Whitney parameter may be framed as a causal parameter and show that it is not equal to a closely related and nonidentifiable causal effect, ψ, the probability that a randomly selected individual will have a larger response under treatment than under control (plus an adjustment for ties). We review the paradox, first expressed by Hand, that the ψ parameter may imply that the treatment is worse (or better) than control while the Mann-Whitney parameter shows the opposite. Unlike the Mann-Whitney parameter, ψ is nonidentifiable from a randomized experiment. We review some nonparametric assumptions that rule out Hand's paradox through bounds on ψ and use bootstrap methods to make inferences about those bounds. We explore the relationship of the proportional odds parameter to Hand's paradox, showing that the paradox may occur for proportional odds parameters between 1/9 and 9. Thus, large effects are needed to ensure that if treatment appears better by the Mann-Whitney parameter, then treatment improves responses in most individuals. We demonstrate these issues using a vaccine trial.
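The Mann-Whitney parameter has a simple plug-in estimate, illustrated in the R sketch below with simulated responses and a bootstrap percentile interval; note that the interval here is for the identifiable parameter φ, whereas the bounds discussed above concern the nonidentifiable ψ, and the data are illustrative rather than from the vaccine trial.

set.seed(9)
y_t <- rnorm(50, mean = 0.3)                   # treatment arm responses (illustrative)
y_c <- rnorm(50)                               # control arm responses (illustrative)

# phi = P(Y_T > Y_C) + 0.5 * P(Y_T = Y_C), averaged over all treatment-control pairs
phi_hat <- function(a, b) mean(outer(a, b, ">") + 0.5 * outer(a, b, "=="))
est <- phi_hat(y_t, y_c)

boot <- replicate(2000, phi_hat(sample(y_t, replace = TRUE), sample(y_c, replace = TRUE)))
c(phi = est, quantile(boot, c(0.025, 0.975)))  # plug-in estimate with bootstrap percentile interval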

20.
This paper describes a new approach to estimation in a logistic regression model with two crossed random effects, where special interest lies in estimating the variance of one of the effects while not making distributional assumptions about the other effect. A composite likelihood is studied. For each term in the composite likelihood, a conditional likelihood is used that eliminates the influence of the random effects, resulting in a composite conditional likelihood consisting of only one-dimensional integrals that may be solved numerically. Good properties of the resulting estimator are described in a small simulation study. Copyright © 2015 John Wiley & Sons, Ltd.
