Similar Articles
16 similar articles found.
1.
Controlling for unmeasured confounders in non-randomized controlled studies is highly challenging. Negative control theory rests on the premise that "a negative control must test negative": by adding suitable negative controls to a population study, it brings the idea of specificity of association into the study to identify and control unmeasured confounding. This paper explains, from a statistical perspective, the basic principles by which negative controls control unmeasured confounding, and introduces in detail a series of derived methods: standardized-mortality calibration, the calibrated P-value method, the generalized difference-in-differences model, and the bidirectional negative control method, reviewing their proper application through representative case studies. Negative controls are an important statistical design idea for identifying, calibrating, and controlling unmeasured confounders, and an important method for comparative effectiveness research based on real-world data.
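The calibration idea behind negative controls can be sketched in a few lines. This is a hypothetical illustration, not a method taken verbatim from the article: the helper name `calibrate_log_rr` and all numbers are invented. An association observed for a negative-control outcome, which should truly be null, estimates the shared unmeasured-confounding bias and can be subtracted on the log scale.

```python
import math

def calibrate_log_rr(log_rr_obs: float, log_rr_negctrl: float) -> float:
    """Crude negative-control calibration: the log rate ratio seen for
    a negative-control outcome (truly null, so any signal is bias)
    estimates the shared confounding bias; subtract it from the
    observed log rate ratio for the outcome of interest."""
    return log_rr_obs - log_rr_negctrl

# Observed RR 1.8 for the outcome, RR 1.2 for the negative control:
calibrated = math.exp(calibrate_log_rr(math.log(1.8), math.log(1.2)))
print(round(calibrated, 2))  # 1.5
```

This sketch assumes the negative control shares the same confounding structure as the outcome of interest, which is exactly the assumption the methods above formalize.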

2.
Unknown or unmeasured confounders are common in observational studies and pose a major challenge to causal inference in epidemiology. This paper introduces a tool for identifying and evaluating the effects of unknown or unmeasured confounders in observational studies: the "probe variable". Probe variables take three main forms: exposure probes, outcome probes, and mediator probes. The first two can not only detect unknown/unmeasured confounders but also estimate their effect size, thereby revealing the true association between exposure and outcome; the mediator probe instead controls for mediating factors in order to determine whether unmeasured confounding exists between exposure and outcome. The greatest practical difficulty is selecting an appropriate probe variable: an ill-chosen probe may introduce new confounding and lead to inaccurate identification of unmeasured confounders. Probe variables can be recommended as a sensitivity analysis in reports of observational studies, helping readers understand the true exposure-outcome association and strengthening the evidence from observational epidemiologic research.

3.
Confounding is unavoidable in observational comparative effectiveness research. After statistical methods have been used to adjust for measured or unmeasured confounders, whether their influence has actually been removed remains unknown, so sensitivity analysis is required. This paper introduces sensitivity analysis methods for handling confounding. The appropriate approach differs by study: for measured confounders, traditional sensitivity analysis can be used; for unmeasured confounders, the relatively well-developed methods are the confounding function, the bounding factor, and propensity score calibration; Monte Carlo sensitivity analysis and Bayesian sensitivity analysis have also drawn much attention in recent years. When the sensitivity analyses agree with the primary analysis, the reliability of the study conclusions is clearly strengthened.

4.
As a complement to randomized controlled trials, observational comparative effectiveness research is attracting growing attention, and confounding bias is one of its major sources of bias. This paper introduces statistical methods for controlling measured confounders in such studies. Measured confounders can be handled by traditional stratified analysis, matched analysis, analysis of covariance, or multivariable analysis, and also by matching, stratification, or adjustment based on propensity scores or disease risk scores. Good design should control confounding at the source; post hoc statistical analysis should be applied only with a clear understanding of each method's assumptions and conditions of use.

5.
Application of directed acyclic graphs to the control of confounding in causal inference
Observational studies are among the most common designs in epidemiologic etiologic research, but in causal inference confounders often distort the true causal association between exposure and outcome. Choosing which variables to adjust for is the key to removing confounding. Directed acyclic graphs (DAGs) visualize complex causal relationships and provide an intuitive way to identify confounding, turning the problem into one of identifying a minimal sufficient adjustment set. On the one hand, DAGs allow fewer variables to be adjusted, increasing the statistical efficiency of the analysis; on the other, the minimal sufficient adjustment sets they identify can avoid unmeasured variables or variables with missing values. In short, directed acyclic graphs help reveal true causal relationships.
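As a toy illustration of how a DAG reduces confounder identification to path-finding: a sufficient adjustment set must block every "backdoor" path, i.e., every path from exposure to outcome that starts with an arrow into the exposure. The sketch below is a simplified, hypothetical helper: it only enumerates backdoor paths on a tiny DAG and deliberately ignores collider re-opening, so it is not a full d-separation test.

```python
def backdoor_paths(edges, x, y):
    """Enumerate undirected paths from x to y whose first edge points
    INTO x (the backdoor paths an adjustment set must block).
    edges: set of (parent, child) pairs forming a small DAG."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add((b, "out"))  # a -> b
        adj.setdefault(b, set()).add((a, "in"))   # b <- a
    paths = []

    def walk(node, path):
        for nxt, direction in adj.get(node, ()):
            if nxt in path:
                continue
            if len(path) == 1 and direction != "in":
                continue  # first step must enter x from a parent
            if nxt == y:
                paths.append(path + [nxt])
            else:
                walk(nxt, path + [nxt])

    walk(x, [x])
    return paths

# X <- L -> Y with a direct X -> Y edge: the only backdoor path runs
# through L, so {L} is the minimal sufficient adjustment set.
edges = {("L", "X"), ("L", "Y"), ("X", "Y")}
print(backdoor_paths(edges, "X", "Y"))  # [['X', 'L', 'Y']]
```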

6.
Confounding bias is an important class of bias in observational studies. It arises when a confounding factor is associated with both the exposure and the outcome, distorting the true relationship between them [1]. How to control confounding in observational studies has therefore long been a central concern for researchers. A common approach at the statistical analysis stage is to include the confounder in a regression model for adjustment. In practice, one often encounters the situation in which the confounding variable is a continuous measure whose relationship with the outcome variable…

7.
Structural classification of confounding-control strategies in study design
Confounding distorts causal relationships in populations. Depending on whether a confounder is known, measurable, and measured, four situations can be distinguished. Based on directed acyclic graphs, control strategies fall into two classes: (1) confounding-path interruption, subdivided into single-path and double-path interruption, corresponding to complete intervention on exposure, restriction, and stratification; (2) confounding-path preservation, corresponding to incomplete intervention on exposure (instrumental variable designs or imperfect randomized controlled trials), intermediate-variable methods, and matching. Randomized controlled trials, instrumental variable or Mendelian randomization designs, and intermediate-variable analysis can handle all four situations, whereas restriction, stratification, and matching apply only to confounders that are known, measurable, and measured. Recognizing the control mechanism for each type of confounding helps researchers plan countermeasures at the design stage and is a prerequisite for obtaining correct causal effect estimates.

8.
Objective: To introduce sensitivity analysis methods and to examine and compare them. Methods: Simulation experiments and a real example were used to compare the accuracy of the confounding-function and bounding-factor sensitivity analysis methods in correcting for unmeasured confounding in observational studies. Results: In both the simulations and the real example, when unmeasured confounding exists between exposure (X) and outcome (Y), the two methods give similar answers to the question of how strong an unmeasured confounder's effect must be to completely reverse the magnitude and direction of the observed effect. However, the confounding strength that the confounding function requires to fully explain away the observed effect is smaller than that required by the bounding factor. The bounding-factor analysis involves two parameters whereas the confounding function has only one, so the confounding function is simpler and more sensitive in computation. Conclusion: For real-world observational data, sensitivity analysis is indispensable when estimating the causal effect of an exposure (X) on an outcome (Y); in terms of both computation and interpretation, confounding-function sensitivity analysis is a method worth recommending.
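A widely used quantity that comes out of the bounding-factor line of work is the E-value: the minimum strength of association, on the risk-ratio scale, that an unmeasured confounder would need with both exposure and outcome to fully explain an observed risk ratio. A minimal sketch (the helper name is ours; the formula is E = RR + sqrt(RR(RR-1))):

```python
import math

def e_value(rr: float) -> float:
    """Minimum risk ratio an unmeasured confounder must have with both
    exposure and outcome to fully explain an observed risk ratio rr."""
    if rr < 1.0:                 # protective effects: invert first
        rr = 1.0 / rr
    return rr + math.sqrt(rr * (rr - 1.0))

# An observed RR of 2.0 could only be explained away by a confounder
# associated with exposure and outcome at RR >= 3.41 each:
print(round(e_value(2.0), 2))  # 3.41
```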

9.
Objective: To introduce the principle and structure of the difference-in-differences (DID) model and its application in quasi-experimental community intervention studies. Methods: Using household survey data from the Rural Primary Health Care Project (2001-2005 cycle) as an example, DID models with and without covariates were fitted in Stata 9.2 and their DID estimates compared. Results: The two models gave similar DID estimates; the model with covariates accounts for the influence of control variables and yields a more accurate estimate. Conclusion: Difference-in-differences is a suitable method for evaluating the effects of quasi-experimental community intervention designs.
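The DID estimator itself is one line of arithmetic. The sketch below (function name and numbers are hypothetical) shows the unadjusted form; the covariate-adjusted version described above would instead fit a regression with a group-by-period interaction term plus control variables.

```python
def did_estimate(pre_treat: float, post_treat: float,
                 pre_ctrl: float, post_ctrl: float) -> float:
    """Difference-in-differences: the change in the intervention group
    minus the change in the control group, which cancels time-invariant
    group-level confounding and common time trends."""
    return (post_treat - pre_treat) - (post_ctrl - pre_ctrl)

# Hypothetical mean outcomes before/after a community intervention:
print(did_estimate(pre_treat=42.0, post_treat=55.0,
                   pre_ctrl=40.0, post_ctrl=47.0))  # 6.0
```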

10.
Objective: To study the estimation of treatment effects on binary outcome variables in the presence of unobservable confounders. Methods: An estimation approach developed for dynamic discrete models was adopted, with both a simulation study and a real-data example. Results: The simulation results and the empirical application show that the method estimates well. Conclusion: The estimator has good statistical properties and is useful for estimating treatment effects in clinical observational studies.

11.
Observational studies provide a rich source of information for assessing effectiveness of treatment interventions in many situations where it is not ethical or practical to perform randomized controlled trials. However, such studies are prone to bias from hidden (unmeasured) confounding. A promising approach to identifying and reducing the impact of unmeasured confounding is prior event rate ratio (PERR) adjustment, a quasi‐experimental analytic method proposed in the context of electronic medical record database studies. In this paper, we present a statistical framework for using a pairwise approach to PERR adjustment that removes bias inherent in the original PERR method. A flexible pairwise Cox likelihood function is derived and used to demonstrate the consistency of the simple and convenient alternative PERR (PERR‐ALT) estimator. We show how to estimate standard errors and confidence intervals for treatment effect estimates based on the observed information and provide R code to illustrate how to implement the method. Assumptions required for the pairwise approach (as well as PERR) are clarified, and the consequences of model misspecification are explored. Our results confirm the need for researchers to consider carefully the suitability of the method in the context of each problem. Extensions of the pairwise likelihood to more complex designs involving time‐varying covariates or more than two periods are considered. We illustrate the application of the method using data from a longitudinal cohort study of enzyme replacement therapy for lysosomal storage disorders. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
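The core PERR idea, before the pairwise refinement this paper develops, can be stated as a ratio of ratios. A minimal sketch with invented numbers (the helper name is ours):

```python
def perr_adjusted(ratio_post: float, ratio_prior: float) -> float:
    """Prior event rate ratio adjustment: divide the treatment-period
    rate (or hazard) ratio by the pre-treatment ratio, on the premise
    that the prior-period contrast reflects only the unmeasured
    confounding shared by both periods."""
    return ratio_post / ratio_prior

# Treated patients show HR 1.2 on treatment but already had HR 1.5
# before treatment started, so the adjusted effect is protective:
print(round(perr_adjusted(1.2, 1.5), 2))  # 0.8
```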

12.
Background: Unmeasured confounders are commonplace in observational studies conducted using real-world data. Prior event rate ratio (PERR) adjustment is a technique shown to perform well in addressing such confounding. However, it has been demonstrated that, in some circumstances, the PERR method actually increases rather than decreases bias. In this work, we seek to better understand the robustness of PERR adjustment. Methods: We begin with a Bayesian network representation of a generalized observational study, which is subject to unmeasured confounding. Previous work evaluating PERR performance used Monte Carlo simulation to calculate joint probabilities of interest within the study population. Here, we instead use a Bayesian networks framework. Results: Using this streamlined analytic approach, we are able to conduct probabilistic bias analysis (PBA) using large numbers of combinations of parameters and thus obtain a comprehensive picture of PERR performance. We apply our methodology to a recent study that used the PERR in evaluating elderly-specific high-dose (HD) influenza vaccine in the US Veterans Affairs population. That study obtained an HD relative effectiveness of 25% (95% CI: 2%-43%) against influenza- and pneumonia-associated hospitalization, relative to standard-dose influenza vaccine. In this instance, we find that the PERR-adjusted result is more likely to underestimate than to overestimate the relative effectiveness of the intervention. Conclusions: Although the PERR is a powerful tool for mitigating the effects of unmeasured confounders, it is not infallible. Here, we develop some general guidance for when a PERR approach is appropriate and when PBA is a safer option.

13.
OBJECTIVE: In the analysis of observational data, the argument is sometimes made that if adjustment for measured confounders induces little change in the treatment-outcome association, then there is less concern about the extent to which the association is driven by unmeasured confounding. We quantify this reasoning using Bayesian sensitivity analysis (BSA) for unmeasured confounding. Using hierarchical models, the confounding effect of a binary unmeasured variable is modeled as arising from the same distribution as that of measured confounders. Our objective is to investigate the performance of the method compared to sensitivity analysis, which assumes that there is no relationship between measured and unmeasured confounders. STUDY DESIGN AND SETTING: We apply the method in an observational study of the effectiveness of beta-blocker therapy in heart failure patients. RESULTS: BSA for unmeasured confounding using hierarchical prior distributions yields an odds ratio (OR) of 0.72, 95% credible interval (CrI): 0.56, 0.93 for the association between beta-blockers and mortality, whereas using independent priors yields OR=0.72, 95% CrI: 0.45, 1.15. CONCLUSION: If the confounding effect of a binary unmeasured variable is similar to that of measured confounders, then conventional sensitivity analysis may give results that overstate the uncertainty about bias.

14.
BACKGROUND AND OBJECTIVE: To review methods that seek to adjust for confounding in observational studies when assessing intended drug effects. METHODS: We reviewed the statistical, economical and medical literature on the development, comparison and use of methods adjusting for confounding. RESULTS: In addition to standard statistical techniques of (logistic) regression and Cox proportional hazards regression, alternative methods have been proposed to adjust for confounding in observational studies. A first group of methods focus on the main problem of nonrandomization by balancing treatment groups on observed covariates: selection, matching, stratification, multivariate confounder score, and propensity score methods, of which the latter can be combined with stratification or various matching methods. Another group of methods look for variables to be used like randomization in order to adjust also for unobserved covariates: instrumental variable methods, two-stage least squares, and grouped-treatment approach. Identifying these variables is difficult, however, and assumptions are strong. Sensitivity analyses are useful tools in assessing the robustness and plausibility of the estimated treatment effects to variations in assumptions about unmeasured confounders. CONCLUSION: In most studies regression-like techniques are routinely used for adjustment for confounding, although alternative methods are available. More complete empirical evaluations comparing these methods in different situations are needed.
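Of the balancing methods listed, propensity score matching is among the most common. A minimal greedy sketch (the helper name, caliper, and scores are hypothetical; the scores are assumed to have been estimated beforehand, e.g. by logistic regression of treatment on covariates):

```python
def greedy_ps_match(treated_ps, control_ps, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching on pre-estimated
    propensity scores within a caliper; returns (treated, control)
    index pairs, consuming each control at most once."""
    pairs = []
    unused = dict(enumerate(control_ps))
    for i, ps in enumerate(treated_ps):
        if not unused:
            break
        j, cps = min(unused.items(), key=lambda kv: abs(kv[1] - ps))
        if abs(cps - ps) <= caliper:
            pairs.append((i, j))
            del unused[j]
    return pairs

print(greedy_ps_match([0.30, 0.70], [0.28, 0.90, 0.72]))  # [(0, 0), (1, 2)]
```

The caliper discards treated subjects with no sufficiently close control, trading sample size for better covariate balance.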

15.
A goal of many health studies is to determine the causal effect of a treatment or intervention on health outcomes. Often, it is not ethically or practically possible to conduct a perfectly randomized experiment, and instead, an observational study must be used. A major challenge to the validity of observational studies is the possibility of unmeasured confounding (i.e., unmeasured ways in which the treatment and control groups differ before treatment administration, which also affect the outcome). Instrumental variables analysis is a method for controlling for unmeasured confounding. This type of analysis requires the measurement of a valid instrumental variable, which is a variable that (i) is independent of the unmeasured confounding; (ii) affects the treatment; and (iii) affects the outcome only indirectly through its effect on the treatment. This tutorial discusses the types of causal effects that can be estimated by instrumental variables analysis; the assumptions needed for instrumental variables analysis to provide valid estimates of causal effects and sensitivity analysis for those assumptions; methods of estimation of causal effects using instrumental variables; and sources of instrumental variables in health studies. Copyright © 2014 John Wiley & Sons, Ltd.
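With a single instrument, the simplest IV estimator is the Wald ratio. A self-contained sketch with toy data (all names and numbers invented; the estimate is valid only under assumptions (i)-(iii) above):

```python
def iv_wald(z, x, y):
    """Wald estimator with one instrument z: the effect of x on y is
    cov(z, y) / cov(z, x). Confounding between x and y cancels because
    z is assumed independent of it and affects y only through x."""
    def cov(a, b):
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    return cov(z, y) / cov(z, x)

z = [0, 0, 1, 1]         # instrument (e.g. a random encouragement)
x = [0, 1, 1, 2]         # treatment actually received
y = [0, 2, 2, 4]         # outcome; true effect of x on y is 2
print(iv_wald(z, x, y))  # 2.0
```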

16.
We consider Bayesian sensitivity analysis for unmeasured confounding in observational studies where the association between a binary exposure, binary response, measured confounders and a single binary unmeasured confounder can be formulated using logistic regression models. A model for unmeasured confounding is presented along with a family of prior distributions that model beliefs about a possible unknown unmeasured confounder. Simulation from the posterior distribution is accomplished using Markov chain Monte Carlo. Because the model for unmeasured confounding is not identifiable, standard large-sample theory for Bayesian analysis is not applicable. Consequently, the impact of different choices of prior distributions on the coverage probability of credible intervals is unknown. Using simulations, we investigate the coverage probability when averaged with respect to various distributions over the parameter space. The results indicate that credible intervals will have approximately nominal coverage probability, on average, when the prior distribution used for sensitivity analysis approximates the sampling distribution of model parameters in a hypothetical sequence of observational studies. We motivate the method in a study of the effectiveness of beta blocker therapy for treatment of heart failure.
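The Monte Carlo core of such a sensitivity analysis can be sketched without MCMC. This is a simplified probabilistic-bias-analysis illustration, not the paper's posterior simulation: the priors, prevalence ranges, and observed OR below are invented, and the bias factor is the standard external-adjustment formula for one binary confounder.

```python
import random

def simulate_adjusted_or(or_obs, n_draws=10000, seed=1):
    """Probabilistic bias analysis for one binary unmeasured
    confounder: draw its prevalence in exposed/unexposed and its
    outcome risk ratio from hypothetical priors, divide the observed
    OR by the implied bias factor, and return the median result."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_draws):
        p1 = rng.uniform(0.4, 0.6)     # prevalence among exposed
        p0 = rng.uniform(0.1, 0.3)     # prevalence among unexposed
        rr_ud = rng.uniform(1.5, 3.0)  # confounder-outcome risk ratio
        bias = (rr_ud * p1 + 1 - p1) / (rr_ud * p0 + 1 - p0)
        draws.append(or_obs / bias)
    draws.sort()
    return draws[len(draws) // 2]      # median bias-adjusted OR

# With these priors the bias factor always exceeds 1, so the adjusted
# OR is pulled below the observed 0.72:
print(round(simulate_adjusted_or(0.72), 2))
```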


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号