Found 20 related articles (search time: 15 ms)
1.
We present closed-form expressions of asymptotic bias for the causal odds ratio from two estimation approaches of instrumental variable (IV) logistic regression: (i) the two-stage predictor substitution (2SPS) method and (ii) the two-stage residual inclusion (2SRI) approach. Under the 2SPS approach, the first-stage model yields the predicted value of treatment as a function of an instrument and covariates, and in the second-stage model for the outcome, this predicted value replaces the observed value of treatment as a covariate. Under the 2SRI approach, the first stage is the same, but the residual term of the first-stage regression is included in the second-stage regression, retaining the observed treatment as a covariate. Our bias assessment is for a different context from that of Terza (J. Health Econ. 2008; 27(3): 531-543), who focused on the causal odds ratio conditional on the unmeasured confounder, whereas we focus on the causal odds ratio among compliers under the principal stratification framework. Our closed-form bias results show that 2SPS logistic regression generates asymptotically biased estimates of this causal odds ratio even when there is no unmeasured confounding, and that this bias increases with increasing unmeasured confounding. The 2SRI logistic regression is asymptotically unbiased when there is no unmeasured confounding, but when there is unmeasured confounding, it is biased and the bias increases with increasing unmeasured confounding. The closed-form bias results provide guidance for using these IV logistic regression methods. Our simulation results are consistent with our closed-form analytic results under different combinations of parameter settings.
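As a rough, self-contained illustration of the two procedures described above, the sketch below fits both 2SPS and 2SRI with ordinary logistic regression on simulated data. Everything here is an assumption for illustration: the variable names (z for the instrument, d for the treatment, y for the binary outcome, x for a measured covariate, u for the unmeasured confounder), the effect sizes, and the use of the Python statsmodels package rather than the authors' own code.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                    # measured covariate
u = rng.normal(size=n)                    # unmeasured confounder
z = rng.binomial(1, 0.5, size=n)          # binary instrument
d = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * z + 0.5 * x + u))))   # treatment
y = rng.binomial(1, 1 / (1 + np.exp(-(1.0 * d + 0.5 * x + u))))   # binary outcome

# First stage: logistic regression of treatment on instrument and covariates.
first = sm.Logit(d, sm.add_constant(np.column_stack([z, x]))).fit(disp=0)
d_hat = first.predict()          # predicted treatment probability
resid = d - d_hat                # first-stage residual

# 2SPS: replace the observed treatment with its first-stage prediction.
sps = sm.Logit(y, sm.add_constant(np.column_stack([d_hat, x]))).fit(disp=0)

# 2SRI: keep the observed treatment and add the first-stage residual as a covariate.
sri = sm.Logit(y, sm.add_constant(np.column_stack([d, resid, x]))).fit(disp=0)

print("2SPS log-odds ratio:", sps.params[1])
print("2SRI log-odds ratio:", sri.params[1])
```

Increasing the coefficient on u in both the treatment and outcome models mimics stronger unmeasured confounding, under which, per the results summarized above, both estimates are expected to drift away from the complier causal log-odds ratio.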
2.
Unmeasured confounding is a common concern when researchers attempt to estimate a treatment effect using observational data or randomized studies with imperfect compliance. To address this concern, instrumental variable methods, such as 2-stage predictor substitution (2SPS) and 2-stage residual inclusion (2SRI), have been widely adopted. In many clinical studies of binary and survival outcomes, 2SRI has been accepted as the method of choice over 2SPS, but a compelling theoretical rationale has not been postulated. We evaluate the bias and consistency in estimating the conditional treatment effect for both 2SPS and 2SRI when the outcome is binary, count, or time to event. We demonstrate analytically that the bias in 2SPS and 2SRI estimators can be reframed to mirror the problem of omitted variables in nonlinear models and that there is a direct relationship with the collapsibility of effect measures. In contrast to conclusions made by previous studies (Terza et al., 2008), we demonstrate that the consistency of 2SRI estimators only holds under the following conditions: (1) the null hypothesis is true; (2) the outcome model is collapsible; or (3) when estimating the nonnull causal effect from Cox or logistic regression models, the strong and unrealistic assumption that the effect of the unmeasured covariates on the treatment is proportional to their effect on the outcome holds. We propose a novel dissimilarity metric to provide an intuitive explanation of the bias of 2SRI estimators in noncollapsible models and demonstrate that, with increasing dissimilarity between the effects of the unmeasured covariates on the treatment versus the outcome, the bias of 2SRI increases in magnitude.
3.
Instrumental variable regression is one way to overcome unmeasured confounding and estimate causal effects in observational studies. Considerable work built on structural mean models has recently been developed for consistent estimation of the causal relative risk and the causal odds ratio. Such models can sometimes suffer from identification issues for weak instruments, which hampers the applicability of Mendelian randomization analysis in genetic epidemiology. When multiple genetic variants are available as instrumental variables, and the causal effect is defined in a generalized linear model in the presence of unmeasured confounders, we propose to test concordance between the instrumental variable effects on the intermediate exposure and the instrumental variable effects on the disease outcome, as a means of testing the causal effect. We show that a class of generalized least squares estimators provides valid and consistent tests of causality. For the causal effect of a continuous exposure on a dichotomous outcome in logistic models, the proposed estimators are shown to be asymptotically conservative. When the disease outcome is rare, such estimators are consistent because of the log-linear approximation of the logistic function. Optimality of such estimators relative to the well-known two-stage least squares estimator and the double-logistic structural mean model is further discussed.
4.
Rolf H. H. Groenwold MD MSc, Eelko Hak MSc PhD, Olaf H. Klungel PharmD PhD, Arno W. Hoes MD PhD. Value in Health, 2010, 13(1): 132-137
Objectives: Unobserved confounding has been suggested as an explanation for the effect of influenza vaccination on mortality reported in several observational studies. An instrumental variable (IV) is strongly related to the exposure under study, but not directly or indirectly (through other variables) to the outcome. Theoretically, analyses using IVs to control for both observed and unobserved confounding may provide unbiased estimates of influenza vaccine effects. We assessed the usefulness of IV analysis in influenza vaccination studies.
Methods: Information on patients aged 65 years and older from the computerized Utrecht General Practitioner (GP) research database over seven influenza epidemic periods was pooled to estimate the association between influenza vaccination and all-cause mortality among community-dwelling elderly. Potential IVs included in the analysis were a history of gout, a history of orthopaedic morbidity, a history of antacid medication use, and GP-specific vaccination rates.
Results: Using linear regression analyses, all possible IVs were associated with vaccination status: risk difference (RD) 7.8% (95% confidence interval [CI] 3.6%; 12.0%), RD 2.8% (95% CI 1.7%; 3.9%), RD 8.1% (95% CI 6.1%; 10.1%), and RD 100.0% (95% CI 89.0%; 111.0%) for gout, orthopaedic morbidity, antacid medication use, and GP-specific vaccination rates, respectively. Each potential IV, however, also appeared to be related to mortality through other observed confounding variables (notably age, sex, and comorbidity).
Conclusions: The potential IVs studied did not meet the necessary criteria, because they were (indirectly) associated with the outcome. These variables may, therefore, not be suitable for assessing unconfounded influenza vaccine effects through IV analysis.
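A minimal sketch of the two checks reported in this abstract, on fabricated data: a linear model of vaccination on a candidate instrument to obtain the risk difference, and a regression of the candidate instrument on measured confounders to flag associations that would violate the IV assumptions. The data set, variable names, and effect sizes are hypothetical and are not taken from the Utrecht GP database.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "age": rng.normal(75, 6, n),
    "female": rng.binomial(1, 0.55, n),
    "comorbidity": rng.binomial(1, 0.3, n),
})
# Candidate IV (history of gout) made to depend on age and comorbidity on purpose.
df["gout"] = rng.binomial(1, 1 / (1 + np.exp(-(-3 + 0.02 * df["age"] + 0.5 * df["comorbidity"]))))
df["vaccinated"] = rng.binomial(1, 0.5 + 0.08 * df["gout"])
df["died"] = rng.binomial(1, 0.05 + 0.02 * df["comorbidity"])

# Check 1: is the candidate IV associated with the exposure?
# A linear model of vaccination on the IV gives the risk difference directly.
rd = smf.ols("vaccinated ~ gout", data=df).fit()
print("risk difference:", rd.params["gout"], rd.conf_int().loc["gout"].values)

# Check 2: is the candidate IV associated with measured confounders
# (and hence, plausibly, with the outcome through paths other than the exposure)?
bal = smf.ols("gout ~ age + female + comorbidity", data=df).fit()
print(bal.params)
print(bal.pvalues)
```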
5.
Instrumental variable (IV) analysis can be used to address bias due to unobserved confounding when estimating the causal effect of a treatment on an outcome of interest. However, if a proposed IV is correlated with unmeasured confounders and/or weakly correlated with the treatment, the standard IV estimator may be more biased than an ordinary least squares (OLS) estimator. Several methods have been proposed that compare the bias of the IV and OLS estimators, relying on the belief that measured covariates can be used as proxies for the unmeasured confounder. Despite these developments, there is a lack of discussion of approaches that can be used to formally test whether the IV estimator may be less biased than the OLS estimator. Thus, we have developed a testing framework to compare the bias, and a criterion to select informative measured covariates for bias comparison and regression adjustment. We have also developed a bias-correction method, which allows one to use an invalid IV to correct the bias of the OLS or IV estimator. Numerical studies demonstrate that the proposed methods perform well with realistic sample sizes.
6.
We consider Bayesian sensitivity analysis for unmeasured confounding in observational studies where the association between a binary exposure, binary response, measured confounders and a single binary unmeasured confounder can be formulated using logistic regression models. A model for unmeasured confounding is presented along with a family of prior distributions that model beliefs about a possible unknown unmeasured confounder. Simulation from the posterior distribution is accomplished using Markov chain Monte Carlo. Because the model for unmeasured confounding is not identifiable, standard large-sample theory for Bayesian analysis is not applicable. Consequently, the impact of different choices of prior distributions on the coverage probability of credible intervals is unknown. Using simulations, we investigate the coverage probability when averaged with respect to various distributions over the parameter space. The results indicate that credible intervals will have approximately nominal coverage probability, on average, when the prior distribution used for sensitivity analysis approximates the sampling distribution of model parameters in a hypothetical sequence of observational studies. We motivate the method in a study of the effectiveness of beta blocker therapy for treatment of heart failure.
7.
Vidar Hjellvik, Marie L. De Bruin, Sven O. Samuelsen, Øystein Karlstad, Morten Andersen, Jari Haukka, Peter Vestergaard, Frank de Vries, Kari Furu. Statistics in Medicine, 2019, 38(15): 2719-2734
In epidemiology, one typically wants to estimate the risk of an outcome associated with an exposure after adjusting for confounders. Sometimes, the outcome, the exposure, and perhaps some confounders are available in a large data set, whereas some important confounders are only available in a validation data set that is typically a subset of the main data set. A generally applicable method in this situation is the two-stage calibration (TSC) method. We present a simplified, easy-to-implement version of TSC for the case where the validation data are a subset of the main data. We compared the simplified version to the standard TSC version for incidence rate ratios, odds ratios, relative risks, and hazard ratios using simulated data, and the simplified version performed better than our implementation of the standard version. The simplified version was also tested on real data and performed well.
8.
When studies in a meta-analysis include different sets of confounders, simple analyses can cause bias (omitting confounders that are missing in certain studies) or a loss of precision (omitting studies with incomplete confounders, i.e. a complete-case meta-analysis). To overcome these issues, a previous study proposed modelling the high correlation between partially and fully adjusted regression coefficient estimates in a bivariate meta-analysis. When multiple differently adjusted regression coefficient estimates are available, we propose exploiting such correlations in a graphical model. Compared with the previously suggested bivariate meta-analysis method, such a graphical model approach is likely to reduce the number of parameters in complex missing-data settings by omitting the direct relationships between some of the estimates. We propose a structure-learning rule whose justification relies on the missingness pattern being monotone. This rule was tested using epidemiological data from a multi-centre survey. In the analysis of risk factors for early retirement, the method showed a smaller difference from the complete-data odds ratio and greater precision than a commonly used complete-case meta-analysis. Three real-world applications with monotone missingness patterns are provided, namely, the association between (1) fibrinogen level and coronary heart disease, (2) intima media thickness and vascular risk and (3) allergic asthma and depressive episodes. The proposed method allows for the inclusion of published summary data, which makes it particularly suitable for applications involving both microdata and summary data.
9.
Unmeasured confounding remains an important problem in observational studies, including pharmacoepidemiological studies of large administrative databases. Several recently developed methods utilize smaller validation samples, with information on additional confounders, to control for confounders unmeasured in the main, larger database. However, up-to-date applications of these methods to survival analyses seem to be limited to propensity score calibration, which relies on a strong surrogacy assumption. We propose a new method, specifically designed for time-to-event analyses, which uses martingale residuals, in addition to measured covariates, to enhance imputation of the unmeasured confounders in the main database. The method is applicable for analyses with both time-invariant data and time-varying exposures/confounders. In simulations, our method consistently eliminated bias due to unmeasured confounding, regardless of surrogacy violation and other relevant design parameters, and almost always yielded lower mean squared errors than other methods applicable for survival analyses, outperforming propensity score calibration in several scenarios. We apply the method to a real-life pharmacoepidemiological database study of the association between glucocorticoid therapy and risk of type II diabetes mellitus in patients with rheumatoid arthritis, with additional potential confounders available in an external validation sample. Compared with conventional analyses, which adjust only for confounders measured in the main database, our estimates suggest a considerably weaker association.
10.
A major, often unstated, concern of researchers carrying out epidemiological studies of medical therapy is the potential impact on validity if estimates of treatment effects are biased due to unmeasured confounders. One technique for obtaining consistent estimates of treatment effects in the presence of unmeasured confounders is instrumental variables analysis (IVA). This technique has been well developed in the econometrics literature and is being increasingly used in epidemiological studies. However, the approach to IVA that is most commonly used in such studies is based on linear models, while many epidemiological applications make use of non-linear models, specifically generalized linear models (GLMs) such as logistic or Poisson regression. Here we present a simple method for applying IVA within the class of GLMs using the generalized method of moments approach. We explore some of the theoretical properties of the method and illustrate its use within both a simulation example and an epidemiological study where unmeasured confounding is suspected to be present. We estimate the effect of beta-blocker therapy on one-year all-cause mortality after an incident hospitalization for heart failure, in the absence of data describing disease severity, which is believed to be a confounder.
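The abstract does not give the estimating equations, but a generalized-method-of-moments IV fit for a log-link GLM can be sketched as below: the sample moment conditions set the instruments orthogonal to the residual on the mean scale, and in the just-identified case they can be solved directly with a root finder. The variable names, the simulated data, and the choice of a multiplicative (log-link) outcome model are illustrative assumptions, not the specification used in the study.

```python
import numpy as np
from scipy.optimize import root

# Hypothetical data: z is the instrument, d the endogenous exposure,
# x a measured covariate, y a count outcome with a log-link mean.
rng = np.random.default_rng(2)
n = 5000
x = rng.normal(size=n)
u = rng.normal(size=n)                     # unmeasured confounder
z = rng.normal(size=n)
d = 0.7 * z + 0.5 * u + rng.normal(size=n)
y = rng.poisson(np.exp(0.3 + 0.5 * d + 0.4 * x + 0.5 * u))

W = np.column_stack([np.ones(n), z, x])    # instruments (incl. exogenous covariate)
X = np.column_stack([np.ones(n), d, x])    # regressors in the outcome mean

def moments(beta):
    # Sample analogue of E[ w_i * (y_i - exp(x_i' beta)) ] = 0
    resid = y - np.exp(X @ beta)
    return W.T @ resid / n

sol = root(moments, x0=np.zeros(3))
print("GMM-IV estimate of the exposure effect:", sol.x[1])
```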
11.
Babette A. Brumback, Zhulin He, Mansi Prasad, Matthew C. Freeman, Richard Rheingans. Statistics in Medicine, 2014, 33(9): 1490-1502
Much attention has been paid to estimating the causal effect of adherence to a randomized protocol using instrumental variables to adjust for unmeasured confounding. Researchers tend to use the instrumental variable within one of three main frameworks: regression with an endogenous variable, principal stratification, or structural-nested modeling. We found in our literature review that, even in simple settings, causal interpretations of analyses with endogenous regressors can be ambiguous or rely on a strong assumption that can be difficult to interpret. Principal stratification and structural-nested modeling are alternative frameworks that render unambiguous causal interpretations based on assumptions that are, arguably, easier to interpret. Our interest stems from a wish to estimate the effect of cluster-level adherence on individual-level binary outcomes with a three-armed cluster-randomized trial and polytomous adherence. Principal stratification approaches to this problem are quite challenging because of the sheer number of principal strata involved. Therefore, we developed a structural-nested modeling approach and, in the process, extended the methodology to accommodate cluster-randomized trials with unequal probability of selecting individuals. Furthermore, we developed a method to implement the approach with relatively simple programming. The approach works quite well, but when the structural-nested model does not fit the data, there is no solution to the estimating equation. We investigate the performance of the approach using simulated data, and we also use the approach to estimate the effect on pupil absence of school-level adherence to a randomized water, sanitation, and hygiene intervention in western Kenya.
12.
Yun Li, Yoonseok Lee, Robert A. Wolfe, Hal Morgenstern, Jinyao Zhang, Friedrich K. Port, Bruce M. Robinson. Statistics in Medicine, 2015, 34(7): 1150-1168
Treatment preferences of groups (e.g., clinical centers) have often been proposed as instruments to control for unmeasured confounding-by-indication in instrumental variable (IV) analyses. However, formal evaluations of these group-preference-based instruments are lacking. Unique challenges include the following: (i) correlations between outcomes within groups; (ii) the multi-value nature of the instruments; (iii) unmeasured confounding occurring both between and within groups. We introduce the framework of between-group and within-group confounding to assess the assumptions required for group-preference-based IV analyses. Our work illustrates that, when unmeasured confounding effects exist only within groups but not between groups, preference-based IVs can satisfy the assumptions required for valid instruments. We then derive a closed-form expression for the asymptotic bias of the two-stage generalized ordinary least squares estimator when the IVs are valid. Simulations demonstrate that the asymptotic bias formula approximates bias in finite samples quite well, particularly when the number of groups is moderate to large. The bias formula shows that when the cluster size is finite, the IV estimator is asymptotically biased; only when both the number of groups and the cluster size go to infinity does the bias disappear. However, the IV estimator remains advantageous in reducing bias from confounding-by-indication. The bias assessment provides practical guidance for preference-based IV analyses. To increase their performance, one should adjust for as many measured confounders as possible, consider groups that have the most random variation in treatment assignment, and increase cluster size. To minimize the likelihood that these IVs are invalid, one should minimize unmeasured between-group confounding.
13.
Causal inference with observational longitudinal data and time-varying exposures is complicated due to the potential for time-dependent confounding and unmeasured confounding. Most causal inference methods that handle time-dependent confounding rely on either the assumption of no unmeasured confounders or the availability of an unconfounded variable that is associated with the exposure (eg, an instrumental variable). Furthermore, when data are incomplete, validity of many methods often depends on the assumption of missing at random. We propose an approach that combines a parametric joint mixed-effects model for the study outcome and the exposure with g-computation to identify and estimate causal effects in the presence of time-dependent confounding and unmeasured confounding. G-computation can estimate participant-specific or population-average causal effects using parameters of the joint model. The joint model is a type of shared parameter model where the outcome and exposure-selection models share common random effect(s). We also extend the joint model to handle missing data and truncation by death when missingness is possibly not at random. We evaluate the performance of the proposed method using simulation studies and compare the method to both linear mixed- and fixed-effects models combined with g-computation as well as to targeted maximum likelihood estimation. We apply the method to an epidemiologic study of vitamin D and depressive symptoms in older adults and include code using SAS PROC NLMIXED software to enhance the accessibility of the method to applied researchers.
14.
For testing the efficacy of a treatment in a clinical trial with survival data, the Cox proportional hazards (PH) model is the well-accepted, conventional tool. When using this model, one typically proceeds by confirming that the required PH assumption holds true. If the PH assumption fails to hold, there are many options available that have been proposed as alternatives to the Cox PH model. An important question that arises is whether the potential bias introduced by this sequential model-fitting procedure merits concern and, if so, what are effective mechanisms for correction. We investigate by means of a simulation study and draw attention to the considerable drawbacks, with regard to power, of a simple resampling technique, the permutation adjustment, a natural recourse for addressing such challenges. We also consider a recently proposed two-stage testing strategy (2008) for ameliorating these effects.
15.
Association studies of risk factors and complex diseases require careful assessment of potential confounding factors. Two-stage regression analysis, sometimes referred to as residual- or adjusted-outcome analysis, has been increasingly used in association studies of single nucleotide polymorphisms (SNPs) and quantitative traits. In this analysis, first, a residual outcome is calculated from a regression of the outcome variable on the covariates, and then the relationship between the adjusted outcome and the SNP is evaluated by a simple linear regression of the adjusted outcome on the SNP. In this article, we examine the performance of this two-stage analysis as compared with multiple linear regression (MLR) analysis. Our findings show that when a SNP and a covariate are correlated, the two-stage approach results in a biased genotypic effect and loss of power. Bias is always toward the null and increases with the squared correlation between the SNP and the covariate (r²). For example, for r² = 0, 0.1, and 0.5, two-stage analysis results in, respectively, 0, 10, and 50% attenuation in the SNP effect. As expected, MLR was always unbiased. Since individual SNPs often show little or no correlation with covariates, a two-stage analysis is expected to perform as well as MLR in many genetic studies; however, it produces considerably different results from MLR and may lead to incorrect conclusions when independent variables are highly correlated. While a useful alternative to MLR when r² is negligible, the two-stage approach has serious limitations. Its use as a simple substitute for MLR should be avoided. Genet. Epidemiol. 2011; 35: 592-596.
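The attenuation described above is easy to reproduce in a toy simulation; the sketch below compares the residual-outcome (two-stage) estimate with the MLR estimate when the SNP and the covariate are correlated. The genotype coding, effect sizes, and sample size are illustrative assumptions, not values from the article.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 20000
snp = rng.binomial(2, 0.3, size=n).astype(float)       # additive genotype 0/1/2
covariate = 0.5 * snp + rng.normal(size=n)             # correlated with the SNP
y = 0.3 * snp + 0.8 * covariate + rng.normal(size=n)   # quantitative trait

# Two-stage (residual-outcome) analysis: adjust y for the covariate first,
# then regress the residual on the SNP.
stage1 = sm.OLS(y, sm.add_constant(covariate)).fit()
adjusted_y = stage1.resid
two_stage = sm.OLS(adjusted_y, sm.add_constant(snp)).fit()

# Multiple linear regression: fit SNP and covariate jointly.
mlr = sm.OLS(y, sm.add_constant(np.column_stack([snp, covariate]))).fit()

print("two-stage SNP effect:", two_stage.params[1])   # attenuated toward 0
print("MLR SNP effect:      ", mlr.params[1])         # close to the true 0.3
```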
16.
Budtz-Jørgensen E, Keiding N, Grandjean P, Weihe P, White RF. Statistics in Medicine, 2003, 22(19): 3089-3100
Non-differential measurement error in the exposure variable is known to attenuate the dose-response relationship. The amount of attenuation introduced in a given situation is not only a function of the precision of the exposure measurement but also depends on the conditional variance of the true exposure given the other independent variables. In addition, confounder effects may also be affected by exposure measurement error. These difficulties in statistical model development are illustrated by examples from an epidemiological study performed in the Faroe Islands to investigate the adverse health effects of prenatal mercury exposure.
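For the classical, non-differential error model, the dependence on the conditional variance mentioned above has a standard closed form. The expression below is that textbook attenuation factor, stated as background rather than as a formula quoted from the cited study; X is the true exposure, X* = X + U is the measured exposure, and the error U is assumed independent of X and of the other covariates Z.

```latex
% Attenuation (reliability) factor under classical non-differential error:
% the estimated exposure coefficient converges to \lambda \beta_X, where
\[
  \lambda \;=\; \frac{\operatorname{Var}(X \mid Z)}{\operatorname{Var}(X \mid Z) + \sigma_U^{2}},
  \qquad 0 < \lambda \le 1,
\]
% so attenuation worsens as the conditional variance of the true exposure
% given the other covariates Z shrinks relative to the error variance.
```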
17.
We consider estimation of various probabilities after termination of a group sequential phase II trial. A motivating example is a phase II oncologic trial whose stopping rule is determined solely by response to a drug treatment, and at the end of the trial estimating the rates of toxicity and response is desirable. The conventional maximum likelihood estimator (the sample proportion) of a probability is shown to be biased, and two alternative estimators are proposed to correct for bias: a bias-reduced estimator obtained by using Whitehead's bias-adjusted approach, and an unbiased estimator from the Rao-Blackwell method of conditioning. All three estimation procedures are shown to have a certain invariance property in bias. Moreover, estimators of a probability and their bias and precision can be evaluated through the observed response rate and the stage at which the trial stops, thus avoiding extensive computation.
18.
In the outcomes research and comparative effectiveness research literature, there are strong cautionary tales on the use of instrumental variables (IVs) that may influence the newly initiated to shun this premier tool for causal inference without properly weighing its advantages. It has been recommended that IV methods should be avoided if the instrument is not econometrically perfect. The fact that IVs can produce better results than naïve regression, even in nonideal circumstances, remains underappreciated. In this paper, we propose a diagnostic criterion and related software that can be used by an applied researcher to determine the plausible superiority of an IV over an ordinary least squares (OLS) estimator, which does not address the endogeneity of the covariate in question. Given a reasonable lower bound for the bias arising from an OLS estimator, the researcher can use our proposed diagnostic tool to confirm whether the IV at hand can produce a better estimate (i.e., one with lower mean squared error) of the true effect parameter than OLS, without knowing the true level of contamination in the IV.
19.
Baiming Zou, Fei Zou, Jonathan J. Shuster, Patrick J. Tighe, Gary G. Koch, Haibo Zhou. Statistics in Medicine, 2016, 35(20): 3537-3548
Propensity score (PS) methods have been used extensively to adjust for confounding factors in the statistical analysis of observational data in comparative effectiveness research. There are four major PS-based adjustment approaches: PS matching, PS stratification, covariate adjustment by PS, and PS-based inverse probability weighting. Though covariate adjustment by PS is one of the most frequently used PS-based methods in clinical research, the conventional variance estimation for the treatment effect estimate under covariate adjustment by PS is biased. As Stampf et al. have shown, this bias in variance estimation is likely to lead to invalid statistical inference and could result in erroneous public health conclusions (e.g., in food and drug safety and adverse event surveillance). To address this issue, we propose a two-stage analytic procedure to develop a valid variance estimator for the covariate-adjustment-by-PS analysis strategy. We also carry out a simple empirical bootstrap resampling scheme. Both proposed procedures are implemented in an R function for public use. Extensive simulation results demonstrate the bias in the conventional variance estimator and show that both proposed variance estimators offer valid estimates of the true variance and are robust to complex confounding structures. The proposed methods are illustrated with a post-surgery pain study.
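A minimal sketch of covariate adjustment by the propensity score with a bootstrap variance that refits both stages in every resample, which is the intuition behind resampling-based correction of the conventional variance estimate. It is written in Python rather than the R function mentioned in the abstract, and the data, variable names, and effect sizes are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 3000
x1, x2 = rng.normal(size=n), rng.normal(size=n)
treat = rng.binomial(1, 1 / (1 + np.exp(-(0.6 * x1 - 0.4 * x2))))
y = 1.5 * treat + 1.0 * x1 + 0.8 * x2 + rng.normal(size=n)

def ps_adjusted_effect(x1, x2, treat, y):
    # Stage 1: estimate the propensity score by logistic regression.
    ps_model = sm.Logit(treat, sm.add_constant(np.column_stack([x1, x2]))).fit(disp=0)
    ps = ps_model.predict()
    # Stage 2: outcome model with treatment and the estimated PS as covariates.
    out = sm.OLS(y, sm.add_constant(np.column_stack([treat, ps]))).fit()
    return out.params[1]                     # coefficient on treatment

est = ps_adjusted_effect(x1, x2, treat, y)

# Empirical bootstrap: resample subjects and repeat *both* stages, so the
# uncertainty from estimating the PS is carried into the variance.
boot = []
for _ in range(200):
    idx = rng.integers(0, n, size=n)
    boot.append(ps_adjusted_effect(x1[idx], x2[idx], treat[idx], y[idx]))

print("PS-adjusted treatment effect:", est)
print("bootstrap SE:", np.std(boot, ddof=1))
```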
20.
Roseanne McNamee. Statistics in Medicine, 2009, 28(21): 2639-2652
We consider the behaviour of three approaches to efficacy estimation, namely the so-called 'as treated' (AT), 'per protocol' (PP) and 'instrumental variable' (IV) analyses, and of the intention-to-treat (ITT) estimator, in a two-arm randomized treatment trial with a Normally distributed outcome when there is treatment effect heterogeneity and non-random compliance with assigned treatment. Formulae are derived for the bias of the estimators when used either to estimate the average treatment effect (ACE) or the complier average treatment effect (CACE) under several models for the relationship between compliance and potential outcomes. These enable the expected values of the AT, PP and IV estimators to be ranked in relation to the ACE, and show that the AT and PP estimators are generally biased for both the ACE and the CACE even under homogeneity. However, we show that the difference between any pair of (AT, PP, IV) estimates can be used to estimate the correlation between the latent variable determining compliance behaviour and one potential outcome. In the absence of measures that predict compliance, bounds for the ACE can only be set under strong assumptions. Regarding the ITT estimator, while this is 'biased towards the null' if viewed as a measure of the CACE, we show that it is not always so in relation to the ACE. Finally, we discuss the behaviour of the estimators under weak and strong null hypotheses.
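The contrast between the four estimators is easy to see in a toy simulation with all-or-nothing noncompliance driven by a latent variable that also affects the outcome, in the spirit of the models discussed above. All numbers and variable names below are illustrative assumptions, and the homogeneous treatment effect is a deliberate simplification; the article's formulae cover more general compliance-outcome relationships.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10000
assign = rng.binomial(1, 0.5, size=n)            # randomized assignment
latent = rng.normal(size=n)                      # drives both compliance and outcome
comply = (latent > -0.5).astype(int)             # compliance behaviour
treated = assign * comply                        # treatment actually received
y = 2.0 * treated + 1.0 * latent + rng.normal(size=n)

# Intention to treat: compare by assignment.
itt = y[assign == 1].mean() - y[assign == 0].mean()

# As treated: compare by treatment received, ignoring assignment.
at = y[treated == 1].mean() - y[treated == 0].mean()

# Per protocol: drop assigned-to-treatment subjects who did not comply.
pp_keep = (assign == 0) | (treated == 1)
pp = y[pp_keep & (assign == 1)].mean() - y[pp_keep & (assign == 0)].mean()

# IV (Wald) estimator: ITT effect scaled by the compliance difference,
# which targets the complier average causal effect (CACE).
iv = itt / (treated[assign == 1].mean() - treated[assign == 0].mean())

print(dict(ITT=itt, AT=at, PP=pp, IV=iv))
```

With these settings the AT and PP contrasts pick up the latent variable's effect and overstate the treatment effect, while the IV estimate recovers the CACE (here 2.0) and the ITT estimate is diluted toward the null, matching the qualitative behaviour summarized in the abstract.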