Similar Documents
19 similar documents found
1.
Objective: To introduce sensitivity analysis methods for unmeasured confounding and to compare different approaches. Methods: Simulation experiments and a real-data example were used to compare how accurately the confounding-function method and the bounding-factor method correct for unmeasured confounders in observational studies. Results: Both the simulations and the real example showed that, when unmeasured confounding exists between the exposure (X) and the outcome (Y), the two methods give similar answers to the question of how strong an unmeasured confounder's effect must be to completely change the magnitude and direction of the observed effect. However, the confounding strength the confounding-function method requires to fully explain away the observed effect is smaller than that required by the bounding factor. The bounding-factor analysis involves two parameters, whereas the confounding function has only one, so the confounding-function method is simpler and more sensitive in computation. Conclusion: For real-world observational data, sensitivity analysis is indispensable when estimating the causal effect of an exposure (X) on an outcome (Y); in terms of both computation and interpretation, the confounding-function method is worth recommending.
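The bounding factor discussed above is closely tied to the E-value of VanderWeele and Ding. As a minimal sketch (our own illustration, not the authors' code; the confounding-function correction shown is one simple ratio-scale form, not necessarily the exact function used in the study):

```python
import math

def e_value(rr: float) -> float:
    """E-value for a risk ratio: the minimum strength of association
    (risk-ratio scale) an unmeasured confounder would need with both
    exposure and outcome to fully explain away the observed RR."""
    rr = max(rr, 1 / rr)  # use RR or its inverse, whichever exceeds 1
    return rr + math.sqrt(rr * (rr - 1))

def confounding_function_adjusted_rr(observed_rr: float, c: float) -> float:
    """A simple one-parameter confounding-function correction on the
    ratio scale: divide the observed RR by an assumed bias factor c
    (c = 1 means no unmeasured confounding). Having a single parameter
    is what makes this approach simpler than the two-parameter bounding
    factor."""
    return observed_rr / c

print(e_value(2.0))                              # ~3.41
print(confounding_function_adjusted_rr(2.0, 1.5))  # ~1.33
```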

2.
As a complement to the evidence from randomized controlled trials, observational comparative effectiveness research is attracting growing attention. Statistical methods for unknown or unmeasured confounders are a major challenge in such research; this article reviews them. The methods include instrumental variables, prior event rate ratio (PERR) adjustment, and difference-in-differences models and their extensions. The instrumental-variable model is ingeniously constructed, but instruments satisfying its conditions are hard to find in practice; PERR adjustment and difference-in-differences both require pre-intervention data, which many real studies cannot provide. Unmeasured confounding thus poses new demands and challenges for statistical methodology that await further development by statisticians.
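A hedged sketch of two of these ideas (illustrative numbers and names, not from the paper): both difference-in-differences and the PERR correct the post-period contrast using pre-period information, so that time-fixed unmeasured confounding cancels.

```python
import numpy as np

# Hypothetical mean outcomes: rows = (control, treated), cols = (pre, post).
means = np.array([[10.0, 12.0],   # control: pre, post
                  [11.0, 16.0]])  # treated: pre, post

# Difference-in-differences:
# (treated post - treated pre) - (control post - control pre)
did = (means[1, 1] - means[1, 0]) - (means[0, 1] - means[0, 0])
print(did)  # 3.0

# Prior event rate ratio (PERR): divide the post-period rate ratio by the
# pre-period rate ratio; a time-fixed confounder affects both and cancels.
rate = {"treated_pre": 0.04, "treated_post": 0.06,
        "control_pre": 0.02, "control_post": 0.021}
perr_adjusted = (rate["treated_post"] / rate["control_post"]) / \
                (rate["treated_pre"] / rate["control_pre"])
print(perr_adjusted)  # ~1.43
```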

3.
Controlling unmeasured confounders in non-randomized studies is highly challenging. Negative-control theory builds on the idea that a negative control must test negative: suitable negative controls are added to a population study so that the principle of specificity of association can be used to detect and control unmeasured confounding. This article explains, from a statistical perspective, how negative controls control unmeasured confounding; introduces the family of methods derived from them, namely standardized-mortality calibration, calibrated P-values, the generalized difference-in-differences model, and double negative controls; and reviews their proper use with representative case studies. Negative controls are an important statistical design idea for detecting, calibrating, and controlling unmeasured confounding, and an important method for comparative effectiveness research based on real-world data.
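A minimal sketch of the calibration idea shared by several of these methods (our own illustration, assuming a ratio scale and a hypothetical negative-control outcome): any nonnull exposure association with the negative control is attributed to shared unmeasured confounding and divided out.

```python
import math

# Hypothetical estimates from the same study population.
rr_outcome = 1.8           # observed exposure -> outcome rate ratio
rr_negative_control = 1.3  # exposure -> negative-control outcome; should be 1.0

# The exposure cannot cause the negative control, so its observed association
# estimates the confounding bias; calibrate on the log scale.
log_calibrated = math.log(rr_outcome) - math.log(rr_negative_control)
print(math.exp(log_calibrated))  # ~1.38, the confounding-calibrated rate ratio
```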

4.
As a complement to randomized controlled trials, observational comparative effectiveness research is attracting growing attention, and confounding is one of its major sources of bias. This article reviews statistical methods for controlling measured confounders in such research. Measured confounders can be handled with traditional stratified analysis, matched analysis, analysis of covariance, or multivariable analysis, or by matching, stratification, and adjustment based on propensity scores or disease risk scores. Good design should control confounding at the source; post hoc statistical analysis requires understanding the premises of each method and strictly respecting its conditions of applicability.
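A hedged sketch of the propensity-score workflow mentioned above (synthetic data and illustrative names, not from the article): estimate each subject's probability of treatment from measured confounders, then stratify on it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 2))                       # measured confounders
p_treat = 1 / (1 + np.exp(-(x[:, 0] + x[:, 1])))  # confounders drive treatment
treated = rng.binomial(1, p_treat)
y = 2.0 * x[:, 0] + rng.normal(size=n)            # outcome; true treatment effect = 0

ps = LogisticRegression().fit(x, treated).predict_proba(x)[:, 1]

# Stratify on propensity-score quintiles and average within-stratum contrasts.
strata = np.digitize(ps, np.quantile(ps, [0.2, 0.4, 0.6, 0.8]))
effects = [y[(strata == s) & (treated == 1)].mean() -
           y[(strata == s) & (treated == 0)].mean() for s in range(5)]
print(np.mean(effects))                            # near 0 after stratification
print(y[treated == 1].mean() - y[treated == 0].mean())  # crude difference: biased
```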

5.
Unknown or unmeasured confounders are common in observational studies and pose a major challenge for causal inference in epidemiology. This article introduces probe variables, a tool for detecting unknown or unmeasured confounders and assessing their effects in observational studies. Probes take three main forms: exposure probes, outcome probes, and mediator probes. The first two can not only detect unknown or unmeasured confounders but also estimate their effect size, thereby revealing the true association between exposure and outcome; mediator probes instead control for mediators to determine whether unmeasured confounding exists between exposure and outcome. The greatest practical difficulty lies in choosing and specifying the probe variable: an ill-chosen probe can introduce new confounding and make the detection of unmeasured confounders inaccurate. Probe variables can be recommended as a sensitivity analysis in reports of observational studies; they help readers understand the true exposure-outcome association and strengthen the evidence of observational epidemiologic research.
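A minimal sketch of how an outcome probe might flag unmeasured confounding (synthetic data and illustrative names, not from the article): the probe cannot be affected by the exposure, so any exposure-probe association suggests a shared unmeasured cause.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
u = rng.normal(size=n)                        # unmeasured confounder
exposure = rng.binomial(1, 1 / (1 + np.exp(-u)))
probe = u + rng.normal(size=n)                # outcome probe: driven by u, not by exposure

# Regress the probe on exposure; a clearly nonzero coefficient flags
# unmeasured confounding shared by the exposure and the probe.
fit = sm.OLS(probe, sm.add_constant(exposure)).fit()
print(fit.params[1], fit.pvalues[1])          # coefficient well above 0
```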

6.
Application of directed acyclic graphs to confounder control in causal inference
Observational studies are among the most common designs in epidemiologic etiologic research, but in causal inference confounders often distort the true causal association between exposure and outcome. Choosing the variables to adjust for is the key to removing confounding. Directed acyclic graphs (DAGs) visualize complex causal relationships and offer an intuitive way to identify confounding, turning the task into identifying a minimal sufficient adjustment set. On the one hand, DAGs allow fewer variables to be adjusted for, increasing statistical efficiency; on the other, the minimal sufficient adjustment set identified from a DAG can avoid variables that are unmeasured or have missing values. In short, DAGs help to fully reveal true causal relationships.
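A hedged sketch of the back-door idea behind minimal sufficient adjustment sets (a simplification: it enumerates back-door paths but does not implement full d-separation with colliders; tools such as dagitty automate the complete check). The DAG here is hypothetical.

```python
import networkx as nx

# Hypothetical DAG: C -> X, C -> Y, X -> M -> Y  (C confounds X and Y).
dag = nx.DiGraph([("C", "X"), ("C", "Y"), ("X", "M"), ("M", "Y")])

def backdoor_paths(dag, exposure, outcome):
    """List undirected paths from exposure to outcome whose first edge
    points INTO the exposure (back-door paths)."""
    skeleton = dag.to_undirected()
    paths = []
    for path in nx.all_simple_paths(skeleton, exposure, outcome):
        if dag.has_edge(path[1], exposure):  # edge path[1] -> exposure
            paths.append(path)
    return paths

# The only back-door path runs through C, so {C} is the (minimal
# sufficient) adjustment set; M, a mediator, must not be adjusted for.
print(backdoor_paths(dag, "X", "Y"))  # [['X', 'C', 'Y']]
```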

7.
A structural classification of confounding-control strategies at the design stage
Confounding distorts causal relationships in populations. According to whether a confounder is known, measurable, and actually measured, four situations can be distinguished. Based on directed acyclic graphs, control strategies fall into two classes: (1) breaking the confounding path, subdivided into single-path and double-path breaking, corresponding respectively to complete intervention on the exposure, restriction, and stratification; and (2) preserving the confounding path, corresponding to incomplete intervention on the exposure (instrumental-variable designs or imperfect randomized controlled trials), intermediate-variable methods, and matching. Randomized controlled trials, instrumental-variable or Mendelian randomization designs, and intermediate-variable analysis can handle all four situations, whereas restriction, stratification, and matching apply only to confounders that are known, measurable, and measured. Recognizing the control mechanisms for different types of confounding helps investigators plan countermeasures at the design stage and is a prerequisite for obtaining correct causal effect estimates.

8.
Confounding bias in studies of risk factors for nosocomial infection
In studies of risk factors for nosocomial infection, some investigators overlook confounding bias. This article analyzes the confounding problems in two published papers on the topic and lists the main methods for controlling confounding bias, reminding researchers to guard against and control it.

9.
Identifying and controlling confounders is very important in medical underwriting. Confounding can be understood as a logical phenomenon encountered in the course of analysis and as an important factor influencing its results. Changes in an individual's health are rarely attributable to any single isolated factor; such factors influence and constrain one another, forming a complex network, and ignoring confounders inevitably introduces confounding bias into underwriting analysis. How to identify confounders accurately and control them effectively will be a key research focus in underwriting medicine.

10.
Confounders
(1) What is a confounder? When studying the relationship between a disease and an exposure, bias often arises from the distorting influence of an extraneous variable; such a variable is called a confounder. A confounder must satisfy two conditions: it is a cause of the disease under study, or a variable closely related to such a cause (for example, age or sex), and it is also associated with the exposure under study.

11.
We consider Bayesian sensitivity analysis for unmeasured confounding in observational studies where the association between a binary exposure, binary response, measured confounders and a single binary unmeasured confounder can be formulated using logistic regression models. A model for unmeasured confounding is presented along with a family of prior distributions that model beliefs about a possible unknown unmeasured confounder. Simulation from the posterior distribution is accomplished using Markov chain Monte Carlo. Because the model for unmeasured confounding is not identifiable, standard large-sample theory for Bayesian analysis is not applicable. Consequently, the impact of different choices of prior distributions on the coverage probability of credible intervals is unknown. Using simulations, we investigate the coverage probability when averaged with respect to various distributions over the parameter space. The results indicate that credible intervals will have approximately nominal coverage probability, on average, when the prior distribution used for sensitivity analysis approximates the sampling distribution of model parameters in a hypothetical sequence of observational studies. We motivate the method in a study of the effectiveness of beta blocker therapy for treatment of heart failure.
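A simplified sketch in the spirit of this approach (our own illustration, not the paper's model or code): rather than full MCMC, draw the unidentifiable bias parameters for a binary unmeasured confounder from priors and propagate them through a classical bias-corrected odds ratio, a Monte Carlo style probabilistic bias analysis.

```python
import numpy as np

rng = np.random.default_rng(42)
or_observed = 0.72   # hypothetical exposure-outcome odds ratio
n_draws = 100_000

# Priors over the unidentifiable bias parameters for a binary unmeasured
# confounder U: its prevalence in each exposure group and its outcome effect.
p_u_exposed = rng.beta(2, 4, n_draws)          # P(U=1 | exposed)
p_u_unexposed = rng.beta(2, 4, n_draws)        # P(U=1 | unexposed)
or_uy = np.exp(rng.normal(0.5, 0.3, n_draws))  # U-outcome odds ratio

# Classical bias factor for a binary unmeasured confounder (risk-ratio
# approximation); divide the observed estimate by the bias in each draw.
bias = (p_u_exposed * (or_uy - 1) + 1) / (p_u_unexposed * (or_uy - 1) + 1)
or_corrected = or_observed / bias
print(np.percentile(or_corrected, [2.5, 50, 97.5]))  # simulation interval
```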

12.
Unmeasured confounding remains an important problem in observational studies, including pharmacoepidemiological studies of large administrative databases. Several recently developed methods utilize smaller validation samples, with information on additional confounders, to control for confounders unmeasured in the main, larger database. However, up-to-date applications of these methods to survival analyses seem to be limited to propensity score calibration, which relies on a strong surrogacy assumption. We propose a new method, specifically designed for time-to-event analyses, which uses martingale residuals, in addition to measured covariates, to enhance imputation of the unmeasured confounders in the main database. The method is applicable for analyses with both time-invariant data and time-varying exposure/confounders. In simulations, our method consistently eliminated bias because of unmeasured confounding, regardless of surrogacy violation and other relevant design parameters, and almost always yielded lower mean squared errors than other methods applicable for survival analyses, outperforming propensity score calibration in several scenarios. We apply the method to a real-life pharmacoepidemiological database study of the association between glucocorticoid therapy and risk of type II diabetes mellitus in patients with rheumatoid arthritis, with additional potential confounders available in an external validation sample. Compared with conventional analyses, which adjust only for confounders measured in the main database, our estimates suggest a considerably weaker association. Copyright © 2016 John Wiley & Sons, Ltd.
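A minimal sketch of the martingale residuals at the core of the method (synthetic data and illustrative column names; the full method would then use these residuals, alongside measured covariates, in an imputation model for the unmeasured confounder fitted in the validation sample).

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "x1": rng.normal(size=n),            # measured confounder
    "treated": rng.binomial(1, 0.5, n),  # exposure
})
df["time"] = rng.exponential(1 / np.exp(0.5 * df["x1"]))
df["event"] = rng.binomial(1, 0.7, n)

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
# Martingale residual m_i = event_i - estimated cumulative hazard at t_i;
# it carries outcome information not explained by the measured covariates.
mg = cph.compute_residuals(df, kind="martingale")
print(mg.head())
```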

13.
OBJECTIVE: In the analysis of observational data, the argument is sometimes made that if adjustment for measured confounders induces little change in the treatment-outcome association, then there is less concern about the extent to which the association is driven by unmeasured confounding. We quantify this reasoning using Bayesian sensitivity analysis (BSA) for unmeasured confounding. Using hierarchical models, the confounding effect of a binary unmeasured variable is modeled as arising from the same distribution as that of measured confounders. Our objective is to investigate the performance of the method compared to sensitivity analysis, which assumes that there is no relationship between measured and unmeasured confounders. STUDY DESIGN AND SETTING: We apply the method in an observational study of the effectiveness of beta-blocker therapy in heart failure patients. RESULTS: BSA for unmeasured confounding using hierarchical prior distributions yields an odds ratio (OR) of 0.72, 95% credible interval (CrI): 0.56, 0.93 for the association between beta-blockers and mortality, whereas using independent priors yields OR=0.72, 95% CrI: 0.45, 1.15. CONCLUSION: If the confounding effect of a binary unmeasured variable is similar to that of measured confounders, then conventional sensitivity analysis may give results that overstate the uncertainty about bias.

14.
Measurement error in explanatory variables and unmeasured confounders can cause considerable problems in epidemiologic studies. It is well recognized that under certain conditions, nondifferential measurement error in the exposure variable produces bias towards the null. Measurement error in confounders will lead to residual confounding, but this is not a straightforward issue, and it is not clear in which direction the bias will point. Unmeasured confounders further complicate matters. There has been discussion about the amount of bias in exposure effect estimates that can plausibly occur due to residual or unmeasured confounding. In this paper, the authors use simulation studies and logistic regression analyses to investigate the size of the apparent exposure-outcome association that can occur when in truth the exposure has no causal effect on the outcome. The authors consider two cases with a normally distributed exposure and either two or four normally distributed confounders. When the confounders are uncorrelated, bias in the exposure effect estimate increases as the amount of residual and unmeasured confounding increases. Patterns are more complex for correlated confounders. With plausible assumptions, effect sizes of the magnitude frequently reported in observational epidemiologic studies can be generated by residual and/or unmeasured confounding alone.
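A hedged re-creation of the paper's basic setup (our own minimal version, not the authors' code): the exposure has no causal effect, yet omitting one confounder from a logistic regression produces an apparent association.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 20_000
c1, c2 = rng.normal(size=n), rng.normal(size=n)  # two confounders
exposure = c1 + c2 + rng.normal(size=n)          # exposure driven by both
p_y = 1 / (1 + np.exp(-(0.7 * c1 + 0.7 * c2)))   # outcome: no exposure effect
y = rng.binomial(1, p_y)

# Adjust only for c1; c2 is "unmeasured". The exposure coefficient is
# biased away from its true value of zero.
X = sm.add_constant(np.column_stack([exposure, c1]))
print(np.exp(sm.Logit(y, X).fit(disp=0).params[1]))  # apparent OR > 1
```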

15.
Robins introduced marginal structural models (MSMs) and inverse probability of treatment weighted (IPTW) estimators for the causal effect of a time-varying treatment on the mean of repeated measures. We investigate the sensitivity of IPTW estimators to unmeasured confounding. We examine a new framework for sensitivity analyses based on a nonidentifiable model that quantifies unmeasured confounding in terms of a sensitivity parameter and a user-specified function. We present augmented IPTW estimators of MSM parameters and prove their consistency for the causal effect of an MSM, assuming a correct confounding bias function for unmeasured confounding. We apply the methods to assess sensitivity of the analysis of Hernán et al., who used an MSM to estimate the causal effect of zidovudine therapy on repeated CD4 counts among HIV-infected men in the Multicenter AIDS Cohort Study. Under the assumption of no unmeasured confounders, a 95 per cent confidence interval for the treatment effect includes zero. We show that under the assumption of a moderate amount of unmeasured confounding, a 95 per cent confidence interval for the treatment effect no longer includes zero. Thus, the analysis of Hernán et al. is somewhat sensitive to unmeasured confounding. We hope that our research will encourage and facilitate analyses of sensitivity to unmeasured confounding in other applications.
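A minimal point-treatment sketch of the stabilized IPTW weights underlying an MSM (the paper's setting is a time-varying treatment, which repeats this weighting at every visit; data and names here are illustrative).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 5000
L = rng.normal(size=(n, 1))                        # measured confounder
A = rng.binomial(1, 1 / (1 + np.exp(-L[:, 0])))    # treatment depends on L
Y = 1.0 * A + 2.0 * L[:, 0] + rng.normal(size=n)   # true treatment effect = 1

p_a = LogisticRegression().fit(L, A).predict_proba(L)[:, 1]
p_marginal = A.mean()

# Stabilized weights: sw_i = P(A_i) / P(A_i | L_i).
sw = np.where(A == 1, p_marginal / p_a, (1 - p_marginal) / (1 - p_a))

# The weighted outcome contrast estimates the MSM parameter.
mu1 = np.average(Y[A == 1], weights=sw[A == 1])
mu0 = np.average(Y[A == 0], weights=sw[A == 0])
print(mu1 - mu0)  # close to 1, the causal effect
```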

16.
Often, data on important confounders are not available in cohort studies. Sensitivity analyses based on the relation of single, but not multiple, unmeasured confounders with an exposure of interest in a separate validation study have been proposed. In this paper, the authors controlled for measured confounding in the main cohort using propensity scores (PS's) and addressed unmeasured confounding by estimating two additional PS's in a validation study. The "error-prone" PS exclusively used information available in the main cohort. The "gold standard" PS additionally included data on covariates available only in the validation study. Based on these two PS's in the validation study, regression calibration was applied to adjust regression coefficients. This propensity score calibration (PSC) adjusts for unmeasured confounding in cohort studies with validation data under certain, usually untestable, assumptions. The authors used PSC to assess the relation between nonsteroidal antiinflammatory drugs (NSAIDs) and 1-year mortality in a large cohort of elderly persons. "Traditional" adjustment resulted in a hazard ratio for NSAID users of 0.80 (95% confidence interval (CI): 0.77, 0.83) as compared with an unadjusted hazard ratio of 0.68 (95% CI: 0.66, 0.71). Application of PSC resulted in a more plausible hazard ratio of 1.06 (95% CI: 1.00, 1.12). Until the validity and limitations of PSC have been assessed in different settings, the method should be seen as a sensitivity analysis.
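A hedged sketch of the regression-calibration step at the heart of PSC, in its simplest one-covariate form (the actual method uses the multivariable analogue and works with hazard ratios; data and numbers here are illustrative): regress the gold-standard PS on the error-prone PS in the validation sample, then correct the main-cohort coefficient with the calibration slope.

```python
import numpy as np

rng = np.random.default_rng(5)
n_val = 500
ps_error_prone = rng.uniform(0.1, 0.9, n_val)  # PS from main-cohort covariates only
ps_gold = 0.6 * ps_error_prone + 0.15 + 0.05 * rng.normal(size=n_val)

# Regression calibration: slope of the gold-standard PS on the error-prone PS.
slope, intercept = np.polyfit(ps_error_prone, ps_gold, 1)

beta_main = -0.22                    # hypothetical log hazard ratio from the main cohort
beta_calibrated = beta_main / slope  # attenuation-corrected coefficient
print(slope, beta_calibrated)
```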

17.
Confounding is an important and common issue in observational (non-randomized) research on the effects of pharmaceuticals or exposure to etiologic factors (determinants). Confounding is present when a third factor, related to both the determinant and the outcome, distorts the causal relation between these two. There are different methods to control for confounding. The most commonly used are restriction, stratification, multivariable regression models, and propensity score methods. With these methods it is only possible to control for variables for which data is known: measured confounders. Research in the area of confounding is currently directed at the incorporation of external knowledge on unmeasured confounders, the evaluation of instrumental variables, and the impact of time-dependent confounding.

18.
No unmeasured confounding is often assumed in estimating treatment effects in observational data, whether using classical regression models or approaches such as propensity scores and inverse probability weighting. However, in many such studies collection of confounders cannot possibly be exhaustive in practice, and it is crucial to examine the extent to which the resulting estimate is sensitive to the unmeasured confounders. We consider this problem for survival and competing risks data. Due to the complexity of models for such data, we adapt the simulated potential confounder approach of Carnegie et al (2016), which provides a general tool for sensitivity analysis due to unmeasured confounding. More specifically, we specify one sensitivity parameter to quantify the association between an unmeasured confounder and the exposure or treatment received, and another set of parameters to quantify the association between the confounder and the time-to-event outcomes. By varying the magnitudes of the sensitivity parameters, we estimate the treatment effect of interest using the stochastic expectation-maximization (EM) and the EM algorithms. We demonstrate the performance of our methods on simulated data, and apply them to a comparative effectiveness study in inflammatory bowel disease. An R package "survSens" is available on CRAN that implements the proposed methodology.
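One generic parameterization of the two sets of sensitivity parameters described above (our notation, not necessarily that of the paper or the survSens package): one parameter ties the unmeasured confounder U to treatment, the others tie U to the cause-specific hazards.

```latex
% Treatment model: association between U and treatment (sensitivity parameter \theta)
\operatorname{logit} P(Z = 1 \mid X, U) = \alpha^{\top} X + \theta U
% Outcome model: cause-specific hazards with sensitivity parameters \nu_k
\lambda_k(t \mid Z, X, U) = \lambda_{0k}(t)\,
  \exp\!\left(\tau_k Z + \beta_k^{\top} X + \nu_k U\right), \quad k = 1, 2
```

Fixing (θ, ν₁, ν₂) over a grid and re-estimating the treatment effects τₖ at each point traces out how sensitive the conclusions are to unmeasured confounding.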

19.
BACKGROUND AND OBJECTIVE: To review methods that seek to adjust for confounding in observational studies when assessing intended drug effects. METHODS: We reviewed the statistical, economic and medical literature on the development, comparison and use of methods adjusting for confounding. RESULTS: In addition to standard statistical techniques of (logistic) regression and Cox proportional hazards regression, alternative methods have been proposed to adjust for confounding in observational studies. A first group of methods focus on the main problem of nonrandomization by balancing treatment groups on observed covariates: selection, matching, stratification, multivariate confounder score, and propensity score methods, of which the latter can be combined with stratification or various matching methods. Another group of methods look for variables to be used like randomization in order to adjust also for unobserved covariates: instrumental variable methods, two-stage least squares, and the grouped-treatment approach. Identifying these variables is difficult, however, and assumptions are strong. Sensitivity analyses are useful tools in assessing the robustness and plausibility of the estimated treatment effects to variations in assumptions about unmeasured confounders. CONCLUSION: In most studies regression-like techniques are routinely used for adjustment for confounding, although alternative methods are available. More complete empirical evaluations comparing these methods in different situations are needed.
