Similar documents
20 similar documents found (search time: 31 ms)
1.
Much attention has been paid to estimating the causal effect of adherence to a randomized protocol using instrumental variables to adjust for unmeasured confounding. Researchers tend to use the instrumental variable within one of the three main frameworks: regression with an endogenous variable, principal stratification, or structural‐nested modeling. We found in our literature review that even in simple settings, causal interpretations of analyses with endogenous regressors can be ambiguous or rely on a strong assumption that can be difficult to interpret. Principal stratification and structural‐nested modeling are alternative frameworks that render unambiguous causal interpretations based on assumptions that are, arguably, easier to interpret. Our interest stems from a wish to estimate the effect of cluster‐level adherence on individual‐level binary outcomes with a three‐armed cluster‐randomized trial and polytomous adherence. Principal stratification approaches to this problem are quite challenging because of the sheer number of principal strata involved. Therefore, we developed a structural‐nested modeling approach and, in the process, extended the methodology to accommodate cluster‐randomized trials with unequal probability of selecting individuals. Furthermore, we developed a method to implement the approach with relatively simple programming. The approach works quite well, but when the structural‐nested model does not fit the data, there is no solution to the estimating equation. We investigate the performance of the approach using simulated data, and we also use the approach to estimate the effect on pupil absence of school‐level adherence to a randomized water, sanitation, and hygiene intervention in western Kenya. Copyright © 2013 John Wiley & Sons, Ltd.

2.
Nonadherence to assigned treatment jeopardizes the power and interpretability of intent‐to‐treat comparisons from clinical trial data and continues to be an issue for effectiveness studies, despite their pragmatic emphasis. We posit that new approaches to design need to complement developments in methods for causal inference to address nonadherence, in both experimental and practice settings. This paper considers the conventional study design for psychiatric research and other medical contexts, in which subjects are randomized to treatments that are fixed throughout the trial and presents an alternative that converts the fixed treatments into an adaptive intervention that reflects best practice. The key element is the introduction of an adaptive decision point midway into the study to address a patient's reluctance to remain on treatment before completing a full‐length trial of medication. The clinical uncertainty about the appropriate adaptation prompts a second randomization at the new decision point to evaluate relevant options. Additionally, the standard ‘all‐or‐none’ principal stratification (PS) framework is applied to the first stage of the design to address treatment discontinuation that occurs too early for a midtrial adaptation. Drawing upon the adaptive intervention features, we develop assumptions to identify the PS causal estimand and to introduce restrictions on outcome distributions to simplify expectation–maximization calculations. We evaluate the performance of the PS setup, with particular attention to the role played by a binary covariate. The results emphasize the importance of collecting covariate data for use in design and analysis. We consider the generality of our approach beyond the setting of psychiatric research. Copyright © 2015 John Wiley & Sons, Ltd.

3.
Causal inference with observational longitudinal data and time‐varying exposures is complicated due to the potential for time‐dependent confounding and unmeasured confounding. Most causal inference methods that handle time‐dependent confounding rely on either the assumption of no unmeasured confounders or the availability of an unconfounded variable that is associated with the exposure (eg, an instrumental variable). Furthermore, when data are incomplete, validity of many methods often depends on the assumption of missing at random. We propose an approach that combines a parametric joint mixed‐effects model for the study outcome and the exposure with g‐computation to identify and estimate causal effects in the presence of time‐dependent confounding and unmeasured confounding. G‐computation can estimate participant‐specific or population‐average causal effects using parameters of the joint model. The joint model is a type of shared parameter model where the outcome and exposure‐selection models share common random effect(s). We also extend the joint model to handle missing data and truncation by death when missingness is possibly not at random. We evaluate the performance of the proposed method using simulation studies and compare the method to both linear mixed‐ and fixed‐effects models combined with g‐computation as well as to targeted maximum likelihood estimation. We apply the method to an epidemiologic study of vitamin D and depressive symptoms in older adults and include code using SAS PROC NLMIXED software to enhance the accessibility of the method to applied researchers.
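The abstract above builds on the g-formula. As a hedged illustration only, here is the simplest single-time-point analogue of g-computation (standardization over a binary confounder L), not the longitudinal joint-model version the paper develops; the toy data are invented:

```python
# Hypothetical single-time-point g-computation (standardization) sketch:
# E[Y | do(A=a)] = sum_l E[Y | A=a, L=l] * P(L=l), averaging over the
# confounder distribution rather than conditioning on it.

def g_formula(records, a):
    """records: list of (L, A, Y) tuples with binary L and A."""
    n = len(records)
    # P(L = l) from the full sample
    p_l = {l: sum(1 for (li, _, _) in records if li == l) / n for l in (0, 1)}
    # E[Y | A = a, L = l] from each stratum, weighted by P(L = l)
    est = 0.0
    for l in (0, 1):
        ys = [y for (li, ai, y) in records if li == l and ai == a]
        est += (sum(ys) / len(ys)) * p_l[l]
    return est

data = [
    (0, 0, 1.0), (0, 0, 1.2), (0, 1, 2.0), (0, 1, 2.2),
    (1, 0, 2.0), (1, 0, 2.4), (1, 1, 3.0), (1, 1, 3.4),
]
effect = g_formula(data, 1) - g_formula(data, 0)  # population-average effect
```

In the paper's longitudinal setting the inner expectations would instead come from the fitted joint mixed-effects model, and the standardization would run over confounder histories.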

4.
In the presence of time‐dependent confounding, there are several methods available to estimate treatment effects. With correctly specified models and appropriate structural assumptions, any of these methods could provide consistent effect estimates, but with real‐world data, all models will be misspecified and it is difficult to know if assumptions are violated. In this paper, we investigate five methods: inverse probability weighting of marginal structural models, history‐adjusted marginal structural models, sequential conditional mean models, g‐computation formula, and g‐estimation of structural nested models. This work is motivated by an investigation of the effects of treatments in cystic fibrosis using the UK Cystic Fibrosis Registry data focussing on two outcomes: lung function (continuous outcome) and annual number of days receiving intravenous antibiotics (count outcome). We identified five features of this data that may affect the performance of the methods: misspecification of the causal null, long‐term treatment effects, effect modification by time‐varying covariates, misspecification of the direction of causal pathways, and censoring. In simulation studies, under ideal settings, all five methods provide consistent estimates of the treatment effect with little difference between methods. However, all methods performed poorly under some settings, highlighting the importance of using appropriate methods based on the data available. Furthermore, with the count outcome, the issue of non‐collapsibility makes comparison between methods delivering marginal and conditional effects difficult. In many situations, we would recommend using more than one of the available methods for analysis: markedly different effect estimates would indicate potential issues with the analyses.
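Of the five methods compared above, inverse probability weighting is the easiest to sketch. Below is a hedged, single-time-point toy version (binary confounder L, nonparametric propensity by stratum); the longitudinal version in the paper multiplies such weights over time:

```python
# Hypothetical single-time-point inverse probability weighting (IPW) sketch:
# each subject is weighted by 1 / P(observed treatment | L), creating a
# pseudo-population in which the confounder no longer predicts treatment.

def ipw_ate(records):
    """records: list of (L, A, Y); L and A binary."""
    # Propensity P(A=1 | L=l) estimated nonparametrically within strata
    ps = {}
    for l in (0, 1):
        stratum = [a for (li, a, _) in records if li == l]
        ps[l] = sum(stratum) / len(stratum)
    # Weighted (Hajek) means under treatment and control
    num1 = den1 = num0 = den0 = 0.0
    for (l, a, y) in records:
        if a == 1:
            w = 1.0 / ps[l]
            num1 += w * y; den1 += w
        else:
            w = 1.0 / (1.0 - ps[l])
            num0 += w * y; den0 += w
    return num1 / den1 - num0 / den0

# Confounded toy data: L raises both treatment probability and outcome;
# the true treatment effect is 2.
data = ([(0, 0, 0.0)] * 3 + [(0, 1, 2.0)] +
        [(1, 0, 1.0)] + [(1, 1, 3.0)] * 3)
ate = ipw_ate(data)
naive = (sum(y for (_, a, y) in data if a == 1) / 4 -
         sum(y for (_, a, y) in data if a == 0) / 4)  # confounded contrast
```

Here the weighted contrast recovers the true effect of 2, while the naive unweighted contrast is inflated to 2.5 by confounding.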

5.
This article considers the problem of examining time‐varying causal effect moderation using observational, longitudinal data in which treatment, candidate moderators, and possible confounders are time varying. The structural nested mean model (SNMM) is used to specify the moderated time‐varying causal effects of interest in a conditional mean model for a continuous response given time‐varying treatments and moderators. We present an easy‐to‐use estimator of the SNMM that combines an existing regression‐with‐residuals (RR) approach with an inverse‐probability‐of‐treatment weighting (IPTW) strategy. The RR approach has been shown to identify the moderated time‐varying causal effects if the time‐varying moderators are also the sole time‐varying confounders. The proposed IPTW+RR approach provides estimators of the moderated time‐varying causal effects in the SNMM in the presence of an additional, auxiliary set of known and measured time‐varying confounders. We use a small simulation experiment to compare IPTW+RR versus the traditional regression approach and to compare small and large sample properties of asymptotic versus bootstrap estimators of the standard errors for the IPTW+RR approach. This article clarifies the distinction between time‐varying moderators and time‐varying confounders. We illustrate the methodology in a case study to assess if time‐varying substance use moderates treatment effects on future substance use. Copyright © 2013 John Wiley & Sons, Ltd.

6.
Mean‐based semi‐parametric regression models such as the popular generalized estimating equations are widely used to improve robustness of inference over parametric models. Unfortunately, such models are quite sensitive to outlying observations. The Wilcoxon‐score‐based rank regression (RR) provides more robust estimates over generalized estimating equations against outliers. However, the RR and its extensions do not sufficiently address missing data arising in longitudinal studies. In this paper, we propose a new approach to address outliers under a different framework based on the functional response models. This functional‐response‐model‐based alternative not only addresses limitations of the RR and its extensions for longitudinal data, but, with its rank‐preserving property, even provides more robust estimates than these alternatives. The proposed approach is illustrated with both real and simulated data. Copyright © 2016 John Wiley & Sons, Ltd.

7.
In the presence of non‐compliance, conventional analysis by intention‐to‐treat provides an unbiased comparison of treatment policies but typically under‐estimates treatment efficacy. With all‐or‐nothing compliance, efficacy may be specified as the complier‐average causal effect (CACE), where compliers are those who receive intervention if and only if randomised to it. We extend the CACE approach to model longitudinal data with time‐dependent non‐compliance, focusing on the situation in which those randomised to control may receive treatment and allowing treatment effects to vary arbitrarily over time. Defining compliance type to be the time of surgical intervention if randomised to control, so that compliers are patients who would not have received treatment at all if they had been randomised to control, we construct a causal model for the multivariate outcome conditional on compliance type and randomised arm. This model is applied to the trial of alternative regimens for glue ear treatment evaluating surgical interventions in childhood ear disease, where outcomes are measured over five time points, and receipt of surgical intervention in the control arm may occur at any time. We fit the models using Markov chain Monte Carlo methods to obtain estimates of the CACE at successive times after receiving the intervention. In this trial, over half of those randomised to control eventually receive intervention. We find that surgery is more beneficial than control at 6 months, with a small but non‐significant beneficial effect at 12 months. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
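For intuition about the CACE in the simpler all-or-nothing case the abstract starts from, here is a hedged sketch of the classical Wald-type (instrumental variable) estimator, with invented toy numbers rather than the trial's data or its MCMC model:

```python
# Hypothetical Wald-type estimator of the complier-average causal effect
# (CACE) with all-or-nothing compliance: the intention-to-treat outcome
# difference scaled by the difference in treatment-receipt rates.

def cace(arm):
    """arm: dict mapping z (randomised arm, 0/1) to a list of (d, y)
    pairs, where d = 1 if the intervention was actually received."""
    def means(z):
        ds = [d for (d, _) in arm[z]]
        ys = [y for (_, y) in arm[z]]
        return sum(ds) / len(ds), sum(ys) / len(ys)
    d1, y1 = means(1)
    d0, y0 = means(0)
    return (y1 - y0) / (d1 - d0)   # ITT effect / compliance difference

# Toy trial with heavy crossover: 60% of controls eventually receive
# the intervention, echoing the situation described in the abstract.
trial = {
    1: [(1, 5.0)] * 9 + [(0, 2.0)],        # 90% receipt in treatment arm
    0: [(1, 4.0)] * 6 + [(0, 2.0)] * 4,    # 60% receipt in control arm
}
est = cace(trial)
```

The ITT difference here is 1.5 but the receipt rates differ by only 0.3, so the complier effect is 5.0; the paper's contribution is extending this logic to time-dependent receipt and multivariate outcomes.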

8.
Cost‐effectiveness analysis is an important tool that can be applied to the evaluation of a health treatment or policy. When the observed costs and outcomes result from a nonrandomized treatment, making causal inference about the effects of the treatment requires special care. The challenges are compounded when the observation period is truncated for some of the study subjects. This paper presents a method of unbiased estimation of cost‐effectiveness using observational study data that is not fully observed. The method—twice‐weighted multiple interval estimation of a marginal structural model—was developed in order to analyze the cost‐effectiveness of treatment protocols for residents with advanced dementia living in nursing homes when they become acutely ill. A key feature of this estimation approach is that it facilitates a sensitivity analysis that identifies the potential effects of unmeasured confounding on the conclusions concerning cost‐effectiveness. Copyright © 2013 John Wiley & Sons, Ltd.

9.
In randomised controlled trials, the effect of treatment on those who comply with allocation to active treatment can be estimated by comparing their outcome to those in the comparison group who would have complied with active treatment had they been allocated to it. We compare three estimators of the causal effect of treatment on compliers when this is a parameter in a proportional hazards model and quantify the bias due to omitting baseline prognostic factors. Causal estimates are found directly by maximising a novel partial likelihood; based on a structural proportional hazards model; and based on a ‘corrected dataset’ derived after fitting a rank‐preserving structural failure time model. Where necessary, we extend these methods to incorporate baseline covariates. Comparisons use simulated data and a real data example. Analysing the simulated data, we found that all three methods are accurate when an important covariate was included in the proportional hazards model (maximum bias 5.4%). However, failure to adjust for this prognostic factor meant that causal treatment effects were underestimated (maximum bias 11.4%), because estimators were based on a misspecified marginal proportional hazards model. Analysing the real data example, we found that adjusting causal estimators is important to correct for residual imbalances in prognostic factors present between trial arms after randomisation. Our results show that methods of estimating causal treatment effects for time‐to‐event outcomes should be extended to incorporate covariates, thus providing an informative complement to the corresponding intention‐to‐treat analysis. Copyright © 2012 John Wiley & Sons, Ltd.

10.
The application of causal mediation analysis (CMA) considering the mediation effect of a third variable is increasing in epidemiological studies; however, it requires strong assumptions about confounding bias. To address this limitation, we propose an extension of CMA combining it with Mendelian randomization (MRinCMA). We applied the new approach to analyse the causal effect of obesity and diabetes on pancreatic cancer, considering each factor as a potential mediator. To check the performance of MRinCMA under several conditions/scenarios, we used it in different simulated data sets and compared it with structural equation models. For continuous variables, MRinCMA and structural equation models performed similarly, suggesting that both approaches are valid to obtain unbiased estimates. When noncontinuous variables were considered, MRinCMA presented, overall, lower bias than structural equation models. By applying MRinCMA, we did not find any evidence of causality of obesity or diabetes on pancreatic cancer. With this new methodology, researchers would be able to address CMA hypotheses by appropriately accounting for the confounding bias assumption regardless of the conditions used in their studies in different settings.

11.
This study investigates appropriate estimation of estimator variability in the context of causal mediation analysis that employs propensity score‐based weighting. Such an analysis decomposes the total effect of a treatment on the outcome into an indirect effect transmitted through a focal mediator and a direct effect bypassing the mediator. Ratio‐of‐mediator‐probability weighting estimates these causal effects by adjusting for the confounding impact of a large number of pretreatment covariates through propensity score‐based weighting. In step 1, a propensity score model is estimated. In step 2, the causal effects of interest are estimated using weights derived from the prior step's regression coefficient estimates. Statistical inferences obtained from this 2‐step estimation procedure are potentially problematic if the estimated standard errors of the causal effect estimates do not reflect the sampling uncertainty in the estimation of the weights. This study extends to ratio‐of‐mediator‐probability weighting analysis a solution to the 2‐step estimation problem by stacking the score functions from both steps. We derive the asymptotic variance‐covariance matrix for the indirect effect and direct effect 2‐step estimators, provide simulation results, and illustrate with an application study. Our simulation results indicate that the sampling uncertainty in the estimated weights should not be ignored. The standard error estimation using the stacking procedure offers a viable alternative to bootstrap standard error estimation. We discuss broad implications of this approach for causal analysis involving propensity score‐based weighting.

12.
The matched case‐control designs are commonly used to control for potential confounding factors in genetic epidemiology studies especially epigenetic studies with DNA methylation. Compared with unmatched case‐control studies with high‐dimensional genomic or epigenetic data, there have been few variable selection methods for matched sets. In an earlier paper, we proposed the penalized logistic regression model for the analysis of unmatched DNA methylation data using a network‐based penalty. However, for popularly applied matched designs in epigenetic studies that compare DNA methylation between tumor and adjacent non‐tumor tissues or between pre‐treatment and post‐treatment conditions, applying ordinary logistic regression ignoring matching is known to bring serious bias in estimation. In this paper, we developed a penalized conditional logistic model using the network‐based penalty that encourages a grouping effect of (1) linked Cytosine‐phosphate‐Guanine (CpG) sites within a gene or (2) linked genes within a genetic pathway for analysis of matched DNA methylation data. In our simulation studies, we demonstrated the superiority of using conditional logistic model over unconditional logistic model in high‐dimensional variable selection problems for matched case‐control data. We further investigated the benefits of utilizing biological group or graph information for matched case‐control data. We applied the proposed method to a genome‐wide DNA methylation study on hepatocellular carcinoma (HCC) where we investigated the DNA methylation levels of tumor and adjacent non‐tumor tissues from HCC patients by using the Illumina Infinium HumanMethylation27 BeadChip. Several new CpG sites and genes known to be related to HCC were identified but were missed by the standard method in the original paper. Copyright © 2012 John Wiley & Sons, Ltd.
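To see why conditioning on the matched set removes the bias the abstract warns about, here is a hedged sketch of the unpenalized conditional likelihood for 1:1 matched pairs with a single covariate; the pair values are invented, and the paper's network penalty (a graph-Laplacian term subtracted from this objective) is omitted:

```python
import math

# Hypothetical sketch of the conditional likelihood behind conditional
# logistic regression for 1:1 matched case-control data: the matched-set
# intercept cancels, so anything shared within a pair cannot bias beta.

def cond_loglik(pairs, beta):
    """pairs: list of (x_case, x_control) covariate values, e.g. DNA
    methylation of one CpG site in tumor vs adjacent non-tumor tissue."""
    ll = 0.0
    for x_case, x_control in pairs:
        denom = math.exp(beta * x_case) + math.exp(beta * x_control)
        ll += beta * x_case - math.log(denom)
    return ll

# Toy pairs in which tumor tissue mostly has higher methylation.
pairs = [(1.2, 0.4), (0.9, 0.1), (0.2, 0.8), (1.1, 0.3)]

# Crude grid search for the conditional MLE (the paper instead maximizes
# a penalized version of this objective in high dimensions).
grid = [i / 100.0 for i in range(-300, 301)]
beta_hat = max(grid, key=lambda b: cond_loglik(pairs, b))
```

At beta = 0 every pair contributes -log 2, a useful check; the maximizing beta is positive here because cases usually have the larger covariate value in each pair.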

13.
This work arises from consideration of sarcoma patients in which fluorodeoxyglucose positron emission tomography (FDG‐PET) imaging pre‐therapy and post‐chemotherapy is used to assess treatment response. Our focus is on methods for evaluation of the statistical uncertainty in the measured response for an individual patient. The gamma distribution is often used to describe data with constant coefficient of variation, but it can be adapted to describe the pseudo‐Poisson character of PET measurements. We propose co‐registering the pre‐therapy and post‐therapy images and modeling the approximately paired voxel‐level data using the gamma statistics. Expressions for the estimation of the treatment effect and its variability are provided. Simulation studies explore the performance in the context of testing for a treatment effect. The impact of misregistration errors and how test power is affected by estimation of variability using simplified sampling assumptions, as might be produced by direct bootstrapping, is also clarified. The results illustrate a marked benefit in using a properly constructed paired approach. Remarkably, the power of the paired analysis is maintained even if the pre‐image and post‐image data are poorly registered. A theoretical explanation for this is indicated. The methodology is further illustrated in the context of a series of fluorodeoxyglucose‐PET sarcoma patient studies. These data demonstrate the additional prognostic value of the proposed treatment effect test statistic. Copyright © 2016 John Wiley & Sons, Ltd.

14.
We develop a Bayesian approach to estimate the average treatment effect on the treated in the presence of confounding. The approach builds on developments proposed by Saarela et al in the context of marginal structural models, using importance sampling weights to adjust for confounding and estimate a causal effect. The Bayesian bootstrap is adopted to approximate posterior distributions of interest and avoid the issue of feedback that arises in Bayesian causal estimation relying on a joint likelihood. We present results from simulation studies to estimate the average treatment effect on the treated, evaluating the impact of sample size and the strength of confounding on estimation. We illustrate our approach using the classic Right Heart Catheterization data set and find a negative causal effect of the exposure on 30-day survival, in accordance with previous analyses of these data. We also apply our approach to the data set of the National Center for Health Statistics Birth Data and obtain a negative effect of maternal smoking during pregnancy on birth weight.  
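The Bayesian bootstrap mentioned above draws Dirichlet(1,...,1) weights over the observed units instead of resampling them. Here is a hedged, heavily simplified sketch for an ATT-style contrast, assuming (unlike the paper, which uses importance-sampling weights for confounding) that a confounding-adjusted comparator outcome is already available for each treated subject; all data are invented:

```python
import random

# Hypothetical Bayesian bootstrap sketch for the average treatment effect
# on the treated (ATT): each posterior draw re-weights the treated sample
# with Dirichlet(1,...,1) weights (normalized exponentials) and recomputes
# a weighted mean contrast; the spread of draws reflects uncertainty.

def bayesian_bootstrap_att(y_treated, y_comparator, n_draws, rng):
    draws = []
    n = len(y_treated)
    for _ in range(n_draws):
        g = [rng.expovariate(1.0) for _ in range(n)]
        s = sum(g)
        w = [gi / s for gi in g]                       # Dirichlet(1,...,1)
        att = sum(wi * (yt - yc)
                  for wi, yt, yc in zip(w, y_treated, y_comparator))
        draws.append(att)
    return draws

rng = random.Random(7)
y_t = [3.0, 2.5, 4.0, 3.5]            # treated outcomes (toy)
y_c = [2.0, 2.2, 3.1, 2.7]            # assumed confounding-adjusted comparators
draws = bayesian_bootstrap_att(y_t, y_c, 2000, rng)
posterior_mean = sum(draws) / len(draws)
```

Because each draw is a convex combination of the unit-level contrasts, every draw lies between the smallest and largest observed difference, and the posterior mean sits near their simple average; no joint likelihood is ever written down, which is the feedback-avoidance point made in the abstract.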

15.
It is often the case that interest lies in the effect of an exposure on each of several distinct event types. For example, we are motivated to investigate the impact of recent injection drug use on deaths due to each of cancer, end‐stage liver disease, and overdose in the Canadian Co‐infection Cohort (CCC). We develop a marginal structural model that permits estimation of cause‐specific hazards in situations where more than one cause of death is of interest. Marginal structural models allow for the causal effect of treatment on outcome to be estimated using inverse‐probability weighting under the assumption of no unmeasured confounding; these models are particularly useful in the presence of time‐varying confounding variables, which may also mediate the effect of exposures. An asymptotic variance estimator is derived, and a cumulative incidence function estimator is given. We compare the performance of the proposed marginal structural model for multiple‐outcome data to that of conventional competing risks models in simulated data and demonstrate the use of the proposed approach in the CCC. Copyright © 2013 John Wiley & Sons, Ltd.
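The cumulative incidence function mentioned above accumulates, at each event time, the cause-specific hazard times the probability of still being event-free. A hedged, minimal sketch (no censoring, no weighting, invented event times) of that Aalen-Johansen-type calculation:

```python
# Hypothetical sketch of a cumulative incidence function (CIF) for one
# cause in the presence of competing causes: at each event time, the
# cause-specific hazard is multiplied by the probability of being
# event-free just before that time. Censoring (handled via risk sets in
# practice) is omitted here for simplicity.

def cumulative_incidence(times, causes, cause):
    """times: distinct event times; causes: cause label per event.
    Returns [(time, CIF)] for the requested cause."""
    n = len(times)
    order = sorted(range(n), key=lambda i: times[i])
    at_risk, surv, cif = n, 1.0, 0.0
    out = []
    for i in order:
        h = 1.0 / at_risk                  # overall hazard at this time
        if causes[i] == cause:
            cif += surv * h                # cause-specific increment
        surv *= (1.0 - h)
        at_risk -= 1
        out.append((times[i], cif))
    return out

times  = [1, 2, 3, 4, 5]
causes = ['liver', 'overdose', 'cancer', 'overdose', 'liver']
cif_overdose = cumulative_incidence(times, causes, 'overdose')
```

With no censoring the final CIF for a cause is simply its share of events (here 2/5), and the CIFs across all causes sum to one; the paper's estimator additionally applies inverse-probability weights to each increment.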

16.
The behavior of the conditional logistic estimator is analyzed under a causal model for two‐arm experimental studies with possible non‐compliance in which the effect of the treatment is measured by a binary response variable. We show that, when non‐compliance may only be observed in the treatment arm, the effect (measured on the logit scale) of the treatment on compliers and that of the control on non‐compliers can be identified and consistently estimated under mild conditions. The same does not happen for the effect of the control on compliers. A simple correction of the conditional logistic estimator is then proposed, which allows us to considerably reduce the bias in estimating this quantity and the causal effect of the treatment over control on compliers. A two‐step estimator results on the basis of which we can also set up a Wald test for the hypothesis of absence of a causal effect of the treatment. The asymptotic properties of the estimator are studied by exploiting the general theory on maximum likelihood estimation of misspecified models. Finite‐sample properties of the estimator and of the related Wald test are studied by simulation. The extension of the approach to the case of missing responses is also outlined. The approach is illustrated by an application to a dataset deriving from a study on the efficacy of a training course on the breast self‐examination practice. Copyright © 2010 John Wiley & Sons, Ltd.

17.
Objective: To define confounding bias in difference‐in‐difference studies and compare regression‐ and matching‐based estimators designed to correct bias due to observed confounders.
Data sources: We simulated data from linear models that incorporated different confounding relationships: time‐invariant covariates with a time‐varying effect on the outcome, time‐varying covariates with a constant effect on the outcome, and time‐varying covariates with a time‐varying effect on the outcome. We considered a simple setting that is common in the applied literature: treatment is introduced at a single time point and there is no unobserved treatment effect heterogeneity.
Study design: We compared the bias and root mean squared error of treatment effect estimates from six model specifications, including simple linear regression models and matching techniques.
Data collection: Simulation code is provided for replication.
Principal findings: Confounders in difference‐in‐differences are covariates that change differently over time in the treated and comparison group or have a time‐varying effect on the outcome. When such a confounding variable is measured, appropriately adjusting for this confounder (ie, including the confounder in a regression model that is consistent with the causal model) can provide unbiased estimates with optimal SE. However, when a time‐varying confounder is affected by treatment, recovering an unbiased causal effect using difference‐in‐differences is difficult.
Conclusions: Confounding in difference‐in‐differences is more complicated than in cross‐sectional settings, from which techniques and intuition to address observed confounding cannot be imported wholesale. Instead, analysts should begin by postulating a causal model that relates covariates, both time‐varying and those with time‐varying effects on the outcome, to treatment. This causal model will then guide the specification of an appropriate analytical model (eg, using regression or matching) that can produce unbiased treatment effect estimates. We emphasize the importance of thoughtful incorporation of covariates to address confounding bias in difference‐in‐difference studies.
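As a baseline for the estimators this abstract compares, here is a hedged sketch of the textbook two-group, two-period difference-in-differences contrast, with invented numbers; the abstract's whole point is that this simple form breaks down once confounders change differently over time across groups:

```python
# Hypothetical two-group, two-period difference-in-differences sketch:
# subtracting the comparison group's change over time from the treated
# group's change removes any time-invariant group difference and any
# time trend common to both groups.

def did(pre_t, post_t, pre_c, post_c):
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(post_t) - mean(pre_t)) - (mean(post_c) - mean(pre_c))

# Treated group starts higher (a fixed group effect) and both groups
# drift up by 1 (a common time trend); the true treatment effect is 2.
pre_t, post_t = [5.0, 6.0, 7.0], [8.0, 9.0, 10.0]
pre_c, post_c = [1.0, 2.0, 3.0], [2.0, 3.0, 4.0]
effect = did(pre_t, post_t, pre_c, post_c)
```

Here the raw post-period gap between groups is 6, but differencing out the baseline gap and the shared trend recovers the effect of 2; none of this protects against the time-varying confounding the abstract analyzes.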

18.
Instrumental variable (IV) methods have potential to consistently estimate the causal effect of an exposure on an outcome in the presence of unmeasured confounding. However, validity of IV methods relies on strong assumptions, some of which cannot be conclusively verified from observational data. One such assumption is that the effect of the proposed instrument on the outcome is completely mediated by the exposure. We consider the situation where this assumption is violated, but the remaining IV assumptions hold; that is, the proposed IV (1) is associated with the exposure and (2) has no unmeasured causes in common with the outcome. We propose a method to estimate multiplicative structural mean models of binary outcomes in this scenario in the presence of unmeasured confounding. We also extend the method to address multiple scenarios, including mediation analysis. The method adapts the asymptotically efficient G‐estimation approach that was previously proposed for additive structural mean models, and it can be carried out using off‐the‐shelf software for generalized method of moments. Monte Carlo simulation studies show that the method has low bias and accurate coverage. We applied the method to a case study of circulating vitamin D and depressive symptoms using season of blood collection as a (potentially invalid) instrumental variable. Potential applications of the proposed method include randomized intervention studies as well as Mendelian randomization studies with genetic variants that affect multiple phenotypes, possibly including the outcome. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.
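For intuition about G-estimation of a multiplicative structural mean model, here is a hedged toy sketch of the standard valid-instrument case (not the invalid-instrument extension the paper develops): the causal parameter psi is the value making the "blipped-down" outcome mean-independent of the instrument. Data and the bisection solver are invented for illustration:

```python
import math

# Hypothetical g-estimation sketch for a multiplicative structural mean
# model Y = Y0 * exp(psi * A): the estimate of psi is the value for which
# H(psi) = Y * exp(-psi * A) has the same mean in both instrument arms Z.

def estimating_eq(records, psi):
    """records: list of (Z, A, Y); returns mean H difference across Z."""
    h1 = [y * math.exp(-psi * a) for (z, a, y) in records if z == 1]
    h0 = [y * math.exp(-psi * a) for (z, a, y) in records if z == 0]
    return sum(h1) / len(h1) - sum(h0) / len(h0)

def g_estimate(records, lo=-5.0, hi=5.0, tol=1e-8):
    # Bisection works here because the estimating function is monotone
    # decreasing in psi for this toy data.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if estimating_eq(records, mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy data generated from the model itself with psi = 0.5 and Y0
# distributed identically in both instrument arms (perfect compliance).
true_psi = 0.5
records = [(0, 0, 1.0), (0, 0, 2.0),
           (1, 1, 1.0 * math.exp(true_psi)), (1, 1, 2.0 * math.exp(true_psi))]
psi_hat = g_estimate(records)
```

In practice, as the abstract notes, the same moment condition can be handed to off-the-shelf generalized-method-of-moments software rather than solved by hand.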

19.
In comparative effectiveness research (CER), often the aim is to contrast survival outcomes between exposure groups defined by time‐varying interventions. With observational data, standard regression analyses (e.g., Cox modeling) cannot account for time‐dependent confounders on causal pathways between exposures and outcome nor for time‐dependent selection bias that may arise from informative right censoring. Inverse probability weighting (IPW) estimation to fit marginal structural models (MSMs) has commonly been applied to properly adjust for these expected sources of bias in real‐world observational studies. We describe the application and performance of an alternate estimation approach in such a study. The approach is based on the recently proposed targeted learning methodology and consists of targeted minimum loss‐based estimation (TMLE) with super learning (SL) within a nonparametric MSM. The evaluation is based on the analysis of electronic health record data with both IPW estimation and TMLE to contrast cumulative risks under four more or less aggressive strategies for treatment intensification in adults with type 2 diabetes already on 2+ oral agents or basal insulin. Results from randomized experiments provide a surrogate gold standard to validate confounding and selection bias adjustment. Bootstrapping is used to validate analytic estimation of standard errors. This application does the following: (1) establishes the feasibility of TMLE in real‐world CER based on large healthcare databases; (2) provides evidence of proper confounding and selection bias adjustment with TMLE and SL; and (3) motivates their application for improving estimation efficiency. Claims are reinforced with a simulation study that also illustrates the double‐robustness property of TMLE. Copyright © 2014 John Wiley & Sons, Ltd.

20.
Propensity scores are widely used to control for confounding when estimating the effect of a binary treatment in observational studies. They have been generalized to ordinal and continuous treatments in the recent literature. Following the definition of propensity function and its parameterizations (called the propensity parameter in this paper) proposed by Imai and van Dyk, we explore sufficient conditions for selecting propensity parameters to control for confounding for continuous treatments in the context of regression‐based adjustment in linear models. Typically, investigators make parametric assumptions about the form of the dose–response function for a continuous treatment. Such assumptions often allow the analyst to use only a subset of the propensity parameters to control confounding. When the treatment is the only predictor in the structural, that is, causal model, it is sufficient to adjust only for the propensity parameters that characterize the expectation of the treatment variable or its functional form. When the structural model includes selected baseline covariates other than the treatment variable, those baseline covariates, in addition to the propensity parameters, must also be adjusted in the model. We demonstrate these points with an example estimating the dose–response relationship for the effect of erythropoietin on hematocrit level in patients with end‐stage renal disease. Copyright © 2014 John Wiley & Sons, Ltd.
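A hedged sketch of the adjustment idea for a linear dose-response: fit E[A | X] (a propensity parameter in the sense above), then recover the treatment slope from the residualized treatment via the Frisch-Waugh-Lovell identity, which is numerically equivalent to including the fitted E[A | X] alongside A in the outcome regression. All data are invented toy values:

```python
# Hypothetical sketch of propensity-parameter adjustment for a continuous
# treatment in a linear model: regress A on X to get E[A | X], then the
# adjusted dose-response slope is the regression of Y on the residualized
# treatment A - E[A | X] (Frisch-Waugh-Lovell).

def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

def adjusted_slope(x, a, y):
    b = cov(x, a) / cov(x, x)              # E[A | X] = a0 + b * X
    a0 = mean(a) - b * mean(x)
    resid = [ai - (a0 + b * xi) for ai, xi in zip(a, x)]
    return cov(y, resid) / cov(resid, resid)

x = [0.0, 1.0, 2.0, 3.0]                   # baseline confounder
a = [1.0, 2.0, 2.0, 4.0]                   # continuous treatment, depends on x
y = [3 * ai + xi for ai, xi in zip(a, x)]  # true dose-response slope = 3
slope = adjusted_slope(x, a, y)
```

Because the fitted E[A | X] spans the same space as X here, adjusting for this single propensity parameter removes the confounding exactly, matching the abstract's point that a subset of propensity parameters can suffice under parametric dose-response assumptions.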
