Similar Literature
20 similar records found (search time: 15 ms)
1.
Marginal structural models were developed as a semiparametric alternative to the G‐computation formula to estimate causal effects of exposures. In practice, these models are often specified using parametric regression models. As such, the usual conventions regarding regression model specification apply. This paper outlines strategies for marginal structural model specification and considerations for the functional form of the exposure metric in the final structural model. We propose a quasi‐likelihood information criterion adapted from use in generalized estimating equations. We evaluate the properties of our proposed information criterion using a limited simulation study. We illustrate our approach using two empirical examples. In the first example, we use data from a randomized breastfeeding promotion trial to estimate the effect of breastfeeding duration on infant weight at 1 year. In the second example, we use data from two prospective cohort studies to estimate the effect of highly active antiretroviral therapy on CD4 count in an observational cohort of HIV‐infected men and women. The marginal structural model specified should reflect the scientific question being addressed but can also assist in exploration of other plausible and closely related questions. In marginal structural models, as in any regression setting, correct inference depends on correct model specification. Our proposed information criterion provides a formal method for comparing model fit for different specifications. Copyright © 2012 John Wiley & Sons, Ltd.

2.
Motivated by a previously published study of HIV treatment, we simulated data subject to time‐varying confounding affected by prior treatment to examine some finite‐sample properties of marginal structural Cox proportional hazards models. We compared (a) unadjusted, (b) regression‐adjusted, (c) unstabilized, and (d) stabilized marginal structural (inverse probability‐of‐treatment [IPT] weighted) model estimators of effect in terms of bias, standard error, root mean squared error (root MSE), and 95% confidence limit coverage over a range of research scenarios, including relatively small sample sizes and 10 study assessments. In the base‐case scenario resembling the motivating example, where the true hazard ratio was 0.5, both IPT‐weighted analyses were unbiased, whereas crude and adjusted analyses showed substantial bias towards and across the null. Stabilized IPT‐weighted analyses remained unbiased across a range of scenarios, including relatively small sample sizes; however, the standard error was generally smaller in crude and adjusted models. In many cases, unstabilized weighted analysis showed a substantial increase in standard error compared with other approaches. Root MSE was smallest in the IPT‐weighted analyses for the base‐case scenario. In situations where time‐varying confounding affected by prior treatment was absent, IPT‐weighted analyses were less precise and therefore had greater root MSE compared with adjusted analyses. The 95% confidence limit coverage was close to nominal for all stabilized IPT‐weighted analyses but poor in crude, adjusted, and unstabilized IPT‐weighted analyses. Under realistic scenarios, marginal structural Cox proportional hazards models performed according to expectations based on large‐sample theory and provided accurate estimates of the hazard ratio. Copyright © 2012 John Wiley & Sons, Ltd.
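The stabilized inverse‐probability‐of‐treatment weights compared above can be sketched concretely. The following is a minimal illustration on simulated data (not the authors' implementation; the confounding structure and effect sizes are assumptions), with the numerator and denominator treatment models fit by logistic regression at each time point:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, T = 500, 3

# Simulated time-varying confounder L and binary treatment A,
# where treatment depends on the current confounder and past treatment.
L = rng.normal(size=(n, T))
A = np.zeros((n, T), dtype=int)
for t in range(T):
    prev = A[:, t - 1] if t > 0 else np.zeros(n)
    p = 1.0 / (1.0 + np.exp(-(0.5 * L[:, t] + prev)))
    A[:, t] = rng.binomial(1, p)

# Stabilized weight: product over t of
#   P(A_t | treatment history) / P(A_t | treatment history, L_t),
# each factor estimated with a logistic regression.
sw = np.ones(n)
idx = np.arange(n)
for t in range(T):
    past = A[:, :t] if t > 0 else np.zeros((n, 1))
    denom_X = np.column_stack([past, L[:, t]])
    num = LogisticRegression().fit(past, A[:, t]).predict_proba(past)
    den = LogisticRegression().fit(denom_X, A[:, t]).predict_proba(denom_X)
    sw *= num[idx, A[:, t]] / den[idx, A[:, t]]

# Stabilized weights should have mean close to 1.
print(round(float(sw.mean()), 2))
```

Because the numerator and denominator models share the treatment history, stabilized weights concentrate near 1, which is what limits their sampling variability relative to the unstabilized 1/P(A | past) weights.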

3.
Optimal timing of initiating antiretroviral therapy has been a controversial topic in HIV research. Two highly publicized studies applied different analytical approaches, a dynamic marginal structural model and a multiple imputation method, to different observational databases and came up with different conclusions. Discrepancies between the two studies' results could be due to differences between patient populations, fundamental differences between statistical methods, or differences between implementation details. For example, the two studies adjusted for different covariates, compared different thresholds, and had different criteria for qualifying measurements. If both analytical approaches were applied to the same cohort holding technical details constant, would their results be similar? In this study, we applied both statistical approaches using observational data from 12,708 HIV‐infected persons throughout the USA. We held technical details constant between the two methods and then repeated analyses varying technical details to understand what impact they had on findings. We also present results applying both approaches to simulated data. Results were similar, although not identical, when technical details were held constant between the two statistical methods. Confidence intervals for the dynamic marginal structural model tended to be wider than those from the imputation approach, although this may have been due in part to additional external data used in the imputation analysis. We also consider differences in the estimands, required data, and assumptions of the two statistical methods. Our study provides insights into assessing optimal dynamic treatment regimes in the context of starting antiretroviral therapy and in more general settings. Copyright © 2016 John Wiley & Sons, Ltd.

4.
Inverse probability of treatment weighted (IPTW) estimation for marginal structural models (MSMs) requires the specification of a nuisance model describing the conditional relationship between treatment allocation and confounders. However, there is still limited information on the best strategy for building these treatment models in practice. We developed a series of simulations to systematically determine the effect of including different types of candidate variables in such models. We explored the performance of IPTW estimators across several scenarios of increasing complexity, including one designed to mimic the complexity typically seen in large pharmacoepidemiologic studies. Our results show that including pure predictors of treatment (i.e. not confounders) in treatment models can lead to estimators that are biased and highly variable, particularly in the context of small samples. The bias and mean-squared error of the MSM-based IPTW estimator increase as the complexity of the problem increases. The performance of the estimator is improved by either increasing the sample size or using only variables related to the outcome to develop the treatment model. Estimates of treatment effect based on the true model for the probability of treatment are asymptotically unbiased. We recommend including only pure risk factors and confounders in the treatment model when developing an IPTW-based MSM.
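The warning above about pure predictors of treatment can be reproduced in a few lines. In this hedged sketch (simulated data with assumed effect sizes), adding an instrument‐like variable that predicts treatment but not outcome inflates the variability of the IPT weights relative to a confounder‐only treatment model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 2000
conf = rng.normal(size=n)   # true confounder
inst = rng.normal(size=n)   # pure predictor of treatment only
p = 1.0 / (1.0 + np.exp(-(0.8 * conf + 1.5 * inst)))
A = rng.binomial(1, p)

def ipt_weights(X):
    """Unstabilized IPT weights from a logistic treatment model."""
    ps = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]
    return np.where(A == 1, 1.0 / ps, 1.0 / (1.0 - ps))

w_conf = ipt_weights(conf.reshape(-1, 1))            # confounder only
w_full = ipt_weights(np.column_stack([conf, inst]))  # adds the instrument
# Conditioning on the pure treatment predictor makes the propensity
# scores more extreme and the weights more variable.
assert w_full.var() > w_conf.var()
```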

5.
Patient noncompliance complicates the analysis of many randomized trials seeking to evaluate the effect of surgical intervention as compared with a nonsurgical treatment. If selection for treatment depends on intermediate patient characteristics or outcomes, then 'as-treated' analyses may be biased for the estimation of causal effects. Therefore, the selection mechanism for treatment and/or compliance should be carefully considered when conducting analysis of surgical trials. We compare the performance of alternative methods when endogenous processes lead to patient crossover. We adopt an underlying longitudinal structural mixed model that is a natural example of a structural nested model. Likelihood-based methods are not typically used in this context; however, we show that standard linear mixed models will be valid under selection mechanisms that depend only on past covariate and outcome history. If there are underlying patient characteristics that influence selection, then likelihood methods can be extended via maximization of the joint likelihood of exposure and outcomes. Semi-parametric causal estimation methods such as marginal structural models, g-estimation, and instrumental variable approaches can also be valid, and we both review and evaluate their implementation in this setting. The assumptions required for valid estimation vary across approaches; thus, the choice of methods for analysis should be driven by which outcome and selection assumptions are plausible.

6.
Evaluating the impacts of clinical or policy interventions on health care utilization requires addressing methodological challenges for causal inference while also analyzing highly skewed data. We examine the impact of registering with a Family Medicine Group, an integrated primary care model in Quebec, on hospitalization and emergency department visits using propensity scores to adjust for baseline characteristics and marginal structural models to account for time‐varying exposures. We also evaluate the performance of different marginal structural generalized linear models in the presence of highly skewed data and conduct a simulation study to determine the robustness of alternative generalized linear models to distributional model mis‐specification. Although the simulations found that the zero‐inflated Poisson likelihood performed the best overall, the negative binomial likelihood gave the best fit for both outcomes in the real dataset. Our results suggest that registration to a Family Medicine Group for all 3 years caused a small reduction in the number of emergency room visits and no significant change in the number of hospitalizations in the final year. Copyright © 2013 John Wiley & Sons, Ltd.
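The distributional comparison described above can be illustrated crudely without fitting full marginal structural models: simulate zero‐inflated counts and compare moment‐matched Poisson and negative binomial log‐likelihoods. This is an illustrative sketch on assumed data, not the paper's analysis:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Zero-inflated counts mimicking skewed utilization data
# (illustrative simulation, not the Quebec dataset).
y = rng.poisson(3.0, size=1000) * rng.binomial(1, 0.6, size=1000)

# Moment-matched Poisson fit.
mu, var = y.mean(), y.var()
ll_pois = stats.poisson.logpmf(y, mu).sum()

# Moment-matched negative binomial fit (size r, probability p),
# chosen so its mean and variance equal the sample moments.
r = mu ** 2 / (var - mu)
p = r / (r + mu)
ll_nb = stats.nbinom.logpmf(y, r, p).sum()

# Overdispersion from the excess zeros favours the negative binomial.
assert ll_nb > ll_pois
```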

7.
Marginal structural models (MSMs) are commonly used to estimate the causal effect of a time‐varying treatment in the presence of time‐dependent confounding. When fitting an MSM to data, the analyst must specify both the structural model for the outcome and the treatment models for the inverse‐probability‐of‐treatment weights. The use of stabilized weights is recommended because they are generally less variable than the standard weights. In this paper, we are concerned with the use of the common stabilized weights when the structural model is specified to consider only partial treatment history, such as the current or most recent treatments. We present various examples of settings where these stabilized weights yield biased inferences while the standard weights do not. These issues are first investigated on the basis of simulated data and subsequently exemplified using data from the Honolulu Heart Program. Unlike common stabilized weights, we find that basic stabilized weights offer some protection against bias in structural models designed to estimate current or most recent treatment effects. Copyright © 2014 John Wiley & Sons, Ltd.

8.
In patients with chronic kidney disease (CKD), clinical interest often centers on determining treatments and exposures that are causally related to renal progression. Analyses of longitudinal clinical data in this population are often complicated by clinical competing events, such as end‐stage renal disease (ESRD) and death, and time‐dependent confounding, where patient factors that are predictive of later exposures and outcomes are affected by past exposures. We developed multistate marginal structural models (MS‐MSMs) to assess the effect of time‐varying systolic blood pressure on disease progression in subjects with CKD. The multistate nature of the model allows us to jointly model disease progression characterized by changes in the estimated glomerular filtration rate (eGFR), the onset of ESRD, and death, and thereby avoid unnatural assumptions of death and ESRD as noninformative censoring events for subsequent changes in eGFR. We model the causal effect of systolic blood pressure on the probability of transitioning into 1 of 6 disease states given the current state. We use inverse probability weights with stabilization to account for potential time‐varying confounders, including past eGFR, total protein, serum creatinine, and hemoglobin. We apply the model to data from the Chronic Renal Insufficiency Cohort Study, a multisite observational study of patients with CKD.

9.
Cost‐effectiveness analysis is an important tool that can be applied to the evaluation of a health treatment or policy. When the observed costs and outcomes result from a nonrandomized treatment, making causal inference about the effects of the treatment requires special care. The challenges are compounded when the observation period is truncated for some of the study subjects. This paper presents a method of unbiased estimation of cost‐effectiveness using observational study data that is not fully observed. The method—twice‐weighted multiple interval estimation of a marginal structural model—was developed in order to analyze the cost‐effectiveness of treatment protocols for advanced dementia residents living in nursing homes when they become acutely ill. A key feature of this estimation approach is that it facilitates a sensitivity analysis that identifies the potential effects of unmeasured confounding on the conclusions concerning cost‐effectiveness. Copyright © 2013 John Wiley & Sons, Ltd.

10.
Missing data are common in longitudinal studies and can occur in the exposure of interest. There has been little work assessing the impact of missing data in marginal structural models (MSMs), which are used to estimate the effect of an exposure history on an outcome when time‐dependent confounding is present. We design a series of simulations based on the Framingham Heart Study data set to investigate the impact of missing data in the primary exposure of interest in a complex, realistic setting. We use a standard application of MSMs to estimate the causal odds ratio of a specific activity history on outcome. We report and discuss the results of four missing data methods, under seven possible missing data structures, including scenarios in which an unmeasured variable predicts missing information. In all missing data structures, we found that a complete case analysis, where all subjects with missing exposure data are removed from the analysis, provided the least bias. An analysis that censored individuals at the first occasion of missing exposure and included both a censorship model and a propensity model when creating the inverse probability weights also performed well. The presence of an unmeasured predictor of missing data only slightly increased bias, except in the situation where the exposure had a large impact on missing data and the unmeasured variable had a large impact on both missing data and outcome. A discussion of the results is provided using causal diagrams, showing the usefulness of drawing such diagrams before conducting an analysis. Copyright © 2009 John Wiley & Sons, Ltd.

11.
It is routinely argued that, unlike standard regression‐based estimates, inverse probability weighted (IPW) estimates of the parameters of a correctly specified Cox marginal structural model (MSM) may remain unbiased in the presence of a time‐varying confounder affected by prior treatment. Previously proposed methods for simulating from a known Cox MSM lack knowledge of the law of the observed outcome conditional on the measured past. Although unbiased IPW estimation does not require this knowledge, standard regression‐based estimates rely on correct specification of this law. Thus, in typical high‐dimensional settings, such simulation methods cannot isolate bias due to complex time‐varying confounding as it may be conflated with bias due to misspecification of the outcome regression model. In this paper, we describe an approach to Cox MSM data generation that allows for a comparison of the bias of IPW estimates versus that of standard regression‐based estimates in the complete absence of model misspecification. This approach involves simulating data from a standard parametrization of the likelihood and solving for the underlying Cox MSM. We prove that solutions exist and computations are tractable under many data‐generating mechanisms. We show analytically and confirm in simulations that, in the absence of model misspecification, the bias of standard regression‐based estimates for the parameters of a Cox MSM is indeed a function of the coefficients in observed data models quantifying the presence of a time‐varying confounder affected by prior treatment. We discuss limitations of this approach including that implied by the ‘g‐null paradox’. Copyright © 2013 John Wiley & Sons, Ltd.

12.
Marginal structural Cox models have been used to estimate the causal effect of a time-varying treatment on a survival outcome in the presence of time-dependent confounders. These methods rely on the positivity assumption, which states that the propensity scores are bounded away from zero and one. Practical violations of this assumption are common in longitudinal studies, resulting in extreme weights that may yield erroneous inferences. Truncation, which consists of replacing outlying weights with less extreme ones, is the most common approach to control for extreme weights to date. While truncation reduces the variability in the weights and the consequent sampling variability of the estimator, it can also introduce bias. Instead of truncated weights, we propose using optimal probability weights, defined as those that have a specified variance and the smallest Euclidean distance from the original, untruncated weights. The set of optimal weights is obtained by solving a constrained quadratic optimization problem. The proposed weights are evaluated in a simulation study and applied to the assessment of the effect of treatment on time to death among people in Sweden who live with human immunodeficiency virus and inject drugs.
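For reference, the percentile truncation that the proposed optimal weights are compared against is straightforward to implement. A minimal sketch on hypothetical weights (the 1st/99th percentile cutoffs are an assumed, commonly used choice):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical heavy-tailed untruncated IPT weights.
w = np.exp(rng.normal(0.0, 1.5, size=1000))

def truncate(weights, lower=1.0, upper=99.0):
    """Replace weights outside the given percentiles with the
    percentile values themselves (the standard truncation approach)."""
    lo, hi = np.percentile(weights, [lower, upper])
    return np.clip(weights, lo, hi)

w_trunc = truncate(w)
# Truncation reduces variability at the cost of potential bias.
assert w_trunc.var() < w.var()
assert w_trunc.max() <= w.max() and w_trunc.min() >= w.min()
```

The optimal probability weights proposed in the abstract instead solve a constrained quadratic program: minimize the Euclidean distance to the untruncated weights subject to a target variance, rather than clipping at fixed percentiles.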

14.
Methodology for causal inference based on propensity scores has been developed and popularized in the last two decades. However, the majority of the methodology has concentrated on binary treatments. Only recently have these methods been extended to settings with multi-valued treatments. We propose a number of discrete choice models for estimating the propensity scores. The models differ in terms of flexibility with respect to potential correlation between treatments and, in turn, the accuracy of the estimated propensity scores. Through a Monte Carlo study, we examine the effect of the choice of discrete choice model on the performance of the causal estimators. We also illustrate the use of discrete choice models to estimate the effect of antipsychotic drug use on the risk of diabetes in a cohort of adults with schizophrenia.
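The simplest discrete choice model for a multi-valued treatment is the multinomial logit, which yields one generalized propensity score per treatment level. A hedged sketch using scikit-learn on simulated data (the covariates and coefficient values are illustrative assumptions, not the study's model):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 600
X = rng.normal(size=(n, 2))  # baseline covariates

# Simulate a three-level treatment from a multinomial logit.
logits = np.column_stack([np.zeros(n), X @ [1.0, -0.5], X @ [-0.5, 1.0]])
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
A = np.array([rng.choice(3, p=row) for row in probs])

# Generalized propensity scores: one estimated probability
# per treatment level for every subject.
model = LogisticRegression(max_iter=1000).fit(X, A)
scores = model.predict_proba(X)
assert scores.shape == (n, 3)
assert np.allclose(scores.sum(axis=1), 1.0)
```

Richer discrete choice models (e.g. nested or multinomial probit specifications) relax the independence-of-irrelevant-alternatives restriction implicit in the multinomial logit, which is the flexibility trade-off the abstract refers to.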

15.
Mediation analysis via potential outcomes models
This paper develops a causal or manipulation model framework for mediation analysis based on the concept of potential outcome. Using this framework, we provide new definitions and measures of mediation. Effects of manipulations are modeled via the linear structural model. Corresponding structural equation models (SEMs), in conjunction with two-stage least-squares estimation and the delta method, are used to perform inference. The methods are applied to data from a study of nursing interventions for postoperative pain. We address the cases of more than two treatment groups, and an interaction among mediators. For the latter, a sensitivity analysis approach to handle unidentified parameters is described. Interpretative advantages of the potential outcomes framework for mediation are emphasized.
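Under a linear structural model like the one described above, the indirect effect is the product of the treatment→mediator slope and the mediator→outcome slope. A minimal simulated example (the coefficients a = 0.8, b = 0.5 and direct effect 0.3 are assumptions for illustration, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000
T = rng.binomial(1, 0.5, size=n).astype(float)   # treatment
M = 0.8 * T + rng.normal(size=n)                 # mediator
Y = 0.5 * M + 0.3 * T + rng.normal(size=n)       # outcome

# Stage 1: regress the mediator on treatment (slope a).
a = np.polyfit(T, M, 1)[0]

# Stage 2: regress the outcome on treatment and mediator
# (direct effect c' and slope b).
X = np.column_stack([np.ones(n), T, M])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
direct, b = beta[1], beta[2]

indirect = a * b            # product-method indirect effect
assert abs(indirect - 0.4) < 0.1   # true value is 0.8 * 0.5
```

In the paper's SEM formulation the same product appears, with the delta method supplying a standard error for the estimated indirect effect.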

16.
Correct specification of the inverse probability weighting (IPW) model is necessary for consistent inference from a marginal structural Cox model (MSCM). In practical applications, researchers are typically unaware of the true specification of the weight model. Nonetheless, IPWs are commonly estimated using parametric models, such as the main‐effects logistic regression model. In practice, assumptions underlying such models may not hold, and data‐adaptive statistical learning methods may provide an alternative. Many candidate statistical learning approaches are available in the literature. However, the optimal approach for a given dataset is impossible to predict. Super learner (SL) has been proposed as a tool for selecting an optimal learner from a set of candidates using cross‐validation. In this study, we evaluate the usefulness of an SL in estimating IPWs in four different MSCM simulation scenarios, in which we varied the true weight model specification (linear and/or additive). Our simulations show that, in the presence of weight model misspecification, with a rich and diverse set of candidate algorithms, SL can generally offer a better alternative to the commonly used statistical learning approaches in terms of MSE as well as the coverage probabilities of the estimated effect in an MSCM. The findings from the simulation studies guided the application of the MSCM in a multiple sclerosis cohort from British Columbia, Canada (1995–2008), to estimate the impact of beta‐interferon treatment in delaying disability progression. Copyright © 2017 John Wiley & Sons, Ltd.
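A discrete super learner — selecting the single candidate with the best cross‐validated risk, a simplified form of the ensemble SL evaluated in the paper — can be sketched as follows (simulated treatment model with an assumed nonlinear truth; the candidate library is illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 400
L = rng.normal(size=(n, 3))
# Treatment depends nonlinearly on the confounders, so a
# main-effects logistic model is misspecified by construction.
p = 1.0 / (1.0 + np.exp(-(L[:, 0] * L[:, 1] + L[:, 2] ** 2 - 1.0)))
A = rng.binomial(1, p)

# Discrete super learner: choose the candidate minimizing
# 5-fold cross-validated negative log-likelihood.
candidates = {
    "main-effects logit": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
cv_risk = {
    name: -cross_val_score(model, L, A, cv=5, scoring="neg_log_loss").mean()
    for name, model in candidates.items()
}
best = min(cv_risk, key=cv_risk.get)
print(best)
```

The full super learner instead fits a cross-validation-weighted convex combination of all candidates' predicted probabilities, which is then plugged into the IPW denominator.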

17.
Two popular approaches for relating correlated measurements of a non‐Gaussian response variable to a set of predictors are to fit a marginal model using generalized estimating equations and to fit a generalized linear mixed model (GLMM) by introducing latent random variables. The first approach is effective for parameter estimation, but leaves one without a formal model for the data with which to assess quality of fit or make individual‐level predictions for future observations. The second approach overcomes these deficiencies, but leads to parameter estimates that must be interpreted conditional on the latent variables. To obtain marginal summaries, one needs to evaluate an analytically intractable integral or use attenuation factors as an approximation. Further, we note an unpalatable implication of the standard GLMM. To resolve these issues, we turn to a class of marginally interpretable GLMMs that lead to parameter estimates with a marginal interpretation while maintaining the desirable statistical properties of a conditionally specified model and avoiding problematic implications. We establish the form of these models under the most commonly used link functions and address computational issues. For logistic mixed effects models, we introduce an accurate and efficient method for evaluating the logistic‐normal integral.
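The analytically intractable integral mentioned above is, for the logistic link, the logistic‐normal integral E[expit(a + σZ)] with Z standard normal. Gauss–Hermite quadrature is a standard way to evaluate it (a generic numerical sketch, not the paper's proposed method); the example also shows the attenuation of the marginal probability toward 1/2:

```python
import numpy as np

def logistic_normal(a, sigma, nodes=40):
    """E[expit(a + sigma * Z)], Z ~ N(0, 1), by Gauss-Hermite
    quadrature with probabilists' weights."""
    x, w = np.polynomial.hermite_e.hermegauss(nodes)
    vals = 1.0 / (1.0 + np.exp(-(a + sigma * x)))
    return float((w * vals).sum() / np.sqrt(2.0 * np.pi))

# Symmetry: with a = 0 the marginal probability is exactly 1/2.
assert abs(logistic_normal(0.0, 1.0) - 0.5) < 1e-10

# Attenuation: the marginal probability is pulled toward 1/2
# relative to the conditional probability at the mean random effect.
conditional = 1.0 / (1.0 + np.exp(-1.0))
assert logistic_normal(1.0, 1.0) < conditional
```

This attenuation is exactly why conditionally specified GLMM coefficients cannot be read as marginal effects, motivating the marginally interpretable class the abstract describes.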

18.
Instrumental variable (IV) methods have potential to consistently estimate the causal effect of an exposure on an outcome in the presence of unmeasured confounding. However, validity of IV methods relies on strong assumptions, some of which cannot be conclusively verified from observational data. One such assumption is that the effect of the proposed instrument on the outcome is completely mediated by the exposure. We consider the situation where this assumption is violated, but the remaining IV assumptions hold; that is, the proposed IV (1) is associated with the exposure and (2) has no unmeasured causes in common with the outcome. We propose a method to estimate multiplicative structural mean models of binary outcomes in this scenario in the presence of unmeasured confounding. We also extend the method to address multiple scenarios, including mediation analysis. The method adapts the asymptotically efficient G‐estimation approach that was previously proposed for additive structural mean models, and it can be carried out using off‐the‐shelf software for generalized method of moments. Monte Carlo simulation studies show that the method has low bias and accurate coverage. We applied the method to a case study of circulating vitamin D and depressive symptoms using season of blood collection as a (potentially invalid) instrumental variable. Potential applications of the proposed method include randomized intervention studies as well as Mendelian randomization studies with genetic variants that affect multiple phenotypes, possibly including the outcome. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.

19.
Childhood acute lymphoblastic leukaemia is treated with long-term intensive chemotherapy. During the latter part of the treatment, the maintenance therapy, the patients receive oral doses of two cytostatics. The doses are tailored to blood counts measured on a weekly basis, and the treatment is therefore highly dynamic. In 1992-1996, the Nordic Society of Paediatric Haematology and Oncology (NOPHO) conducted a randomised study (NOPHO-ALL-92) to investigate the effect of a new and more sophisticated dynamic treatment strategy. Unexpectedly, the new strategy worsened the outcome for the girls, whereas there were no treatment differences for the boys. There are as yet no general guidelines for optimising the treatment. On the basis of the data from this study, our goal is to formulate an alternative dosing strategy. We use recently developed methods proposed by van der Laan et al. to obtain statistical models that may be used in the guidance of how the physicians should assign the doses to the patients to obtain the target of the treatment. We present a possible strategy and discuss the reliability of this strategy. The implementation is complicated, and we touch upon the limitations of the methods in relation to the formulation of alternative dosing strategies for the maintenance therapy.

20.
Parsimony is important for the interpretation of causal effect estimates of longitudinal treatments on subsequent outcomes. One method for parsimonious estimates fits marginal structural models by using inverse propensity scores as weights. This method leads to generally large variability that is uncommon in more likelihood‐based approaches. A more recent method fits these models by using simulations from a fitted g‐computation, but requires the modeling of high‐dimensional longitudinal relations that are highly susceptible to misspecification. We propose a new method that, first, uses longitudinal propensity scores as regressors to reduce the dimension of the problem and then uses the approximate likelihood for the first estimates to fit parsimonious models. We demonstrate the methods by estimating the effect of anticoagulant therapy on survival for cancer and non‐cancer patients who have inferior vena cava filters. Copyright © 2013 John Wiley & Sons, Ltd.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) | 京ICP备09084417号