Similar Literature
Found 19 similar records (search time: 187 ms)
1.
As a complement to evidence from randomized controlled trials, observational comparative effectiveness research is attracting growing attention. The statistical analysis of unmeasured confounders is a major challenge in such research. This article reviews statistical methods for controlling unknown or unmeasured confounders in observational comparative effectiveness research. These methods include the instrumental variable approach, the prior event rate ratio (PERR) adjustment, and the difference-in-differences model and its derivatives. The instrumental variable model is ingeniously constructed, but instruments satisfying its conditions are hard to find in practice; both the PERR adjustment and the difference-in-differences model require pre-intervention data, which many studies cannot provide. Unmeasured confounding poses new demands and challenges for statistical methodology and awaits further development by statisticians at home and abroad.
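The prior event rate ratio (PERR) adjustment mentioned in this abstract can be shown with a minimal numeric sketch. All rates below are hypothetical; the idea is that the post-treatment rate ratio is divided by the pre-treatment rate ratio, so that fixed unmeasured differences between the groups cancel out.

```python
# Hypothetical event rates per person-year in treated vs. control groups.
pre_treated, pre_control = 0.10, 0.05    # before treatment starts
post_treated, post_control = 0.12, 0.10  # after treatment starts

rr_pre = pre_treated / pre_control       # 2.0: baseline imbalance (confounding)
rr_post = post_treated / post_control    # ~1.2: naive post-treatment rate ratio
rr_perr = rr_post / rr_pre               # ~0.6: confounding-corrected rate ratio

print(rr_pre, rr_post, rr_perr)
```

The correction assumes the unmeasured confounding acts the same way before and after treatment begins, which is exactly the pre-intervention-data requirement the abstract notes.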

2.
Application of directed acyclic graphs to controlling confounding in causal inference
Observational studies are among the most widely used methods in epidemiologic etiologic research, but in causal inference confounders often distort the true causal association between exposure and outcome. Selecting the variables to adjust for is the key to removing confounding. Directed acyclic graphs (DAGs) visualize complex causal relationships and provide an intuitive way to identify confounding, turning the problem into one of finding a minimal sufficient adjustment set. On the one hand, a DAG may justify adjusting for fewer variables, increasing statistical efficiency; on the other hand, the minimal sufficient adjustment set identified by a DAG can avoid variables that are unmeasured or have missing values. In short, DAGs help to fully reveal true causal relationships.
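The backdoor logic this abstract describes can be sketched in a few lines of pure Python. The five-edge DAG below is a hypothetical toy example (the names L, X, M, Y are illustrative, not from the abstract): confounding is identified by enumerating the backdoor paths from exposure X to outcome Y and checking whether a candidate adjustment set blocks all of them.

```python
# Toy DAG: L confounds X and Y; M mediates part of X's effect on Y.
EDGES = {("L", "X"), ("L", "Y"), ("X", "M"), ("M", "Y"), ("X", "Y")}

def neighbors(node):
    return {v for u, v in EDGES if u == node} | {u for u, v in EDGES if v == node}

def descendants(node):
    seen, stack = set(), [node]
    while stack:
        n = stack.pop()
        for u, v in EDGES:
            if u == n and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def undirected_paths(src, dst, path=None):
    path = path or [src]
    if src == dst:
        yield path
        return
    for nxt in neighbors(src):
        if nxt not in path:
            yield from undirected_paths(nxt, dst, path + [nxt])

def is_blocked(path, adj):
    # A path is blocked if some non-collider on it is adjusted for, or some
    # collider on it (with all its descendants) is NOT adjusted for.
    for i in range(1, len(path) - 1):
        a, n, b = path[i - 1], path[i], path[i + 1]
        collider = (a, n) in EDGES and (b, n) in EDGES
        if collider:
            if n not in adj and not (descendants(n) & adj):
                return True
        elif n in adj:
            return True
    return False

def backdoor_paths(x, y):
    # Backdoor paths start with an edge pointing INTO the exposure.
    return [p for p in undirected_paths(x, y) if (p[1], x) in EDGES]

# {L} blocks the only backdoor path X <- L -> Y; the empty set does not.
print(all(is_blocked(p, {"L"}) for p in backdoor_paths("X", "Y")))
print(all(is_blocked(p, set()) for p in backdoor_paths("X", "Y")))
```

Here {L} is the minimal sufficient adjustment set, and the mediator M is correctly left out of it, matching the abstract's point that DAGs let one adjust for fewer variables.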

3.
Matching is a common method for selecting subjects in observational studies; it can control confounding and improve statistical efficiency, but its ability to control confounding differs across study designs. In cohort studies matching can remove confounding bias from the matched variables, whereas in case-control studies matching by itself does not remove confounding bias. When choosing matching variables for a matched case-control study, the researcher may not be able to judge accurately whether a variable is a confounder; if the true situation is mistakenly...

4.
Objective: Using simulated data, to study whether an encouraging environment provided by parents affects children's cognitive development, to explore whether parental encouragement strengthens children's learning motivation, and to introduce the principles of causal mediation analysis and its implementation in SAS. Methods: Both without and with control for confounders, causal mediation analysis was used to decompose the causal pathway between encouragement and cognitive score and to quantify the role of the mediator, motivation. Results: Learning motivation mediated the relationship between parental encouragement and children's cognitive development; the mediated proportion of the total effect was 47% (without controlling for confounders) and 37% (with control). Conclusion: Learning motivation is a mediator; parental encouragement can improve children's cognitive development by strengthening their learning motivation. When the relevant premises and assumptions hold, the CAUSALMED procedure can perform causal mediation analysis and probe the mechanisms underlying a causal relationship.
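The decomposition this abstract describes can be sketched without SAS. Below is a minimal simulation (all coefficients hypothetical, ordinary least squares via the normal equations): the total effect of encouragement X on cognition Y is compared with the direct effect after conditioning on motivation M, and the difference gives the proportion mediated.

```python
import random

random.seed(1)
n = 5000
# Hypothetical generative model: encouragement X raises motivation M,
# and cognition Y depends on X directly (0.3) and through M (0.5 * 0.6).
X = [random.gauss(0, 1) for _ in range(n)]
M = [0.5 * x + random.gauss(0, 1) for x in X]
Y = [0.3 * x + 0.6 * m + random.gauss(0, 1) for x, m in zip(X, M)]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((p - ma) * (q - mb) for p, q in zip(a, b)) / len(a)

# Total effect: simple regression Y ~ X.
c_total = cov(X, Y) / cov(X, X)
# Direct effect: Y ~ X + M, solved from the 2x2 normal equations.
sxx, sxm, smm = cov(X, X), cov(X, M), cov(M, M)
sxy, smy = cov(X, Y), cov(M, Y)
det = sxx * smm - sxm * sxm
c_direct = (smm * sxy - sxm * smy) / det
prop_mediated = (c_total - c_direct) / c_total

print(round(c_total, 2), round(c_direct, 2), round(prop_mediated, 2))
```

With these hypothetical coefficients the total effect is about 0.6, the direct effect about 0.3, so roughly half of the effect runs through motivation, the same kind of "proportion mediated" figure the abstract reports.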

5.
In observational studies, exposures or treatments often change over time, and analyses of their effect on an outcome are often affected by time-dependent confounding. A time-dependent confounder is a factor that satisfies all three of the following conditions: (1) it varies over time; (2) it affects the outcome; (3) it affects subsequent exposure/treatment while also being affected by prior exposure/treatment [1-2]. A time-dependent confounder can thus be viewed both as a confounder of the exposure/treatment-outcome relationship and as an intermediate variable between them. In estimating the exposure...

6.
Confounding bias is an important class of bias in observational studies; it arises when a confounder, associated with both the exposure and the outcome, distorts the true relationship between them [1]. How to control confounding in observational studies has therefore long been a central concern for researchers. At the statistical analysis stage, a common approach is to include confounders in a regression model for adjustment. In practice one often encounters the situation where the confounding variable is continuous, and the relationship between this variable and the outcome variable...

7.
Confounding is unavoidable in observational comparative effectiveness research. After statistical methods are used to control measured or unmeasured confounders, whether the influence of confounding has actually been removed remains unknown, and sensitivity analysis is then required. This article introduces sensitivity analysis methods for handling confounding. The approach differs from study to study: traditional sensitivity analysis methods can be used for measured confounders, while for unmeasured confounders the methods with relatively systematic theory are the confounding function approach, the bounding factor approach, and propensity score calibration; Monte Carlo sensitivity analysis and Bayesian sensitivity analysis have also drawn much attention in recent years. When the sensitivity analysis results agree with the primary analysis, the credibility of the study conclusions is clearly strengthened.
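The bounding factor approach mentioned here has a well-known closed form, the E-value of VanderWeele and Ding: the minimum strength of association, on the risk-ratio scale, that an unmeasured confounder would need with both exposure and outcome to fully explain away an observed risk ratio. A minimal sketch (example numbers hypothetical):

```python
import math

def e_value(rr):
    """E-value for a point estimate rr: rr + sqrt(rr * (rr - 1))."""
    rr = max(rr, 1 / rr)  # use the direction away from the null for rr < 1
    return rr + math.sqrt(rr * (rr - 1))

# An observed RR of 2.0 could only be fully explained away by an unmeasured
# confounder associated with exposure and outcome at RR >= ~3.41 each.
print(round(e_value(2.0), 2))
```

A large E-value means the finding is robust: only a very strong unmeasured confounder could reduce the observed association to the null.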

8.
Controlling unmeasured confounders in non-randomized studies is extremely challenging. Negative control theory builds on the idea that "a negative control must test negative": by adding suitable negative controls to a population study, the specificity-of-association principle is brought into the study design to identify and control unmeasured confounders. This article explains, from a statistical standpoint, the basic principle by which negative controls control unmeasured confounding, and introduces in detail a series of derived methods: mortality standardization correction, the corrected P-value method, the generalized difference-in-differences model, and the bidirectional negative control method, reviewing their proper application with representative examples. Negative controls are an important statistical design idea for identifying, calibrating and controlling unmeasured confounders, and an important method for comparative effectiveness research based on real-world data.
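The calibration logic behind these negative-control methods can be shown with a minimal numeric sketch (all numbers hypothetical). Any nonnull association observed for an outcome the treatment cannot plausibly affect is attributed to shared unmeasured confounding and divided out, in the spirit of the mortality-standardization and difference-in-differences variants named above.

```python
# Hypothetical observed risk ratios from the same study population.
rr_outcome = 1.8           # outcome of interest
rr_negative_control = 1.5  # outcome the treatment cannot plausibly affect

# If the negative control shares the same unmeasured confounders, its
# "effect" estimates the confounding bias, which is divided out.
rr_calibrated = rr_outcome / rr_negative_control
print(round(rr_calibrated, 2))  # about 1.2
```

The key assumption, as in the abstract, is that the negative control is subject to the same unmeasured confounding as the outcome of interest but has no causal link to the treatment.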

9.
Objective: To introduce sensitivity analysis methods and to explore and compare different approaches. Methods: Simulation experiments and a real example were used to compare the accuracy of the confounding function approach and the bounding factor approach for correcting unmeasured confounding in observational studies. Results: Both the simulations and the real example showed that, when there is unmeasured confounding between exposure (X) and outcome (Y), the two approaches give similar answers to the question of how strong an unmeasured confounder's effect must be to completely reverse the magnitude and direction of the observed effect. However, the confounding strength the confounding function requires to fully explain the observed effect is smaller than that required by the bounding factor for the same conclusion. The bounding factor involves two parameters while the confounding function involves only one, so the confounding function approach is simpler and more sensitive in computation. Conclusion: For real-world observational data, sensitivity analysis is indispensable when estimating the causal effect of an exposure (X) on an outcome (Y); in terms of both computation and interpretation, the confounding function approach is worth recommending.

10.
Observational designs are commonly used in epidemiologic etiologic research, but causal inference from them is often distorted by unidentified and unadjusted confounders, which misrepresent the true causal relationship between exposure and outcome. Traditional criteria for judging confounders are not intuitive in practice, have limitations, and can sometimes even misclassify confounders. Directed acyclic graphs (DAGs) can intuitively identify confounding in observational studies, visualize complex causal relationships, determine the minimal sufficient adjustment set, and avoid the limitations of the traditional criteria; DAGs can also guide the choice of adjustment method. They are of great value for causal inference in observational studies and will see wider application in future epidemiologic research.

11.
Measurement error in explanatory variables and unmeasured confounders can cause considerable problems in epidemiologic studies. It is well recognized that under certain conditions, nondifferential measurement error in the exposure variable produces bias towards the null. Measurement error in confounders will lead to residual confounding, but this is not a straightforward issue, and it is not clear in which direction the bias will point. Unmeasured confounders further complicate matters. There has been discussion about the amount of bias in exposure effect estimates that can plausibly occur due to residual or unmeasured confounding. In this paper, the authors use simulation studies and logistic regression analyses to investigate the size of the apparent exposure-outcome association that can occur when in truth the exposure has no causal effect on the outcome. The authors consider two cases with a normally distributed exposure and either two or four normally distributed confounders. When the confounders are uncorrelated, bias in the exposure effect estimate increases as the amount of residual and unmeasured confounding increases. Patterns are more complex for correlated confounders. With plausible assumptions, effect sizes of the magnitude frequently reported in observational epidemiologic studies can be generated by residual and/or unmeasured confounding alone.
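A simulation in the spirit of this abstract takes only a few lines (simplified here to one normal unmeasured confounder and linear rather than logistic models, so the names and coefficients are illustrative, not the paper's): the crude regression of Y on X shows a clear "effect" even though X has no causal effect at all, and the effect vanishes once the confounder is adjusted for.

```python
import random

random.seed(2)
n = 10000
U = [random.gauss(0, 1) for _ in range(n)]  # unmeasured confounder
X = [u + random.gauss(0, 1) for u in U]     # exposure: driven partly by U
Y = [u + random.gauss(0, 1) for u in U]     # outcome: driven ONLY by U

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((p - ma) * (q - mb) for p, q in zip(a, b)) / len(a)

crude = cov(X, Y) / cov(X, X)               # ~0.5, purely from confounding
# Adjust for U via the 2x2 normal equations for Y ~ X + U.
sxx, sxu, suu = cov(X, X), cov(X, U), cov(U, U)
sxy, suy = cov(X, Y), cov(U, Y)
det = sxx * suu - sxu * sxu
adjusted = (suu * sxy - sxu * suy) / det    # ~0: the true causal effect

print(round(crude, 2), round(adjusted, 2))
```

This reproduces the paper's qualitative point: unmeasured confounding alone can generate an apparent association of the magnitude routinely reported in observational studies.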

12.
Instrumental variable regression is one way to overcome unmeasured confounding and estimate causal effects in observational studies. Built on structural mean models, considerable work has recently been developed for consistent estimation of the causal relative risk and causal odds ratio. Such models can sometimes suffer from identification issues for weak instruments, which has hampered the applicability of Mendelian randomization analysis in genetic epidemiology. When there are multiple genetic variants available as instrumental variables, and the causal effect is defined in a generalized linear model in the presence of unmeasured confounders, we propose to test concordance between instrumental variable effects on the intermediate exposure and instrumental variable effects on the disease outcome, as a means to test the causal effect. We show that a class of generalized least squares estimators provide valid and consistent tests of causality. For the causal effect of a continuous exposure on a dichotomous outcome in logistic models, the proposed estimators are shown to be asymptotically conservative. When the disease outcome is rare, such estimators are consistent because of the log‐linear approximation of the logistic function. Optimality of such estimators relative to the well‐known two‐stage least squares estimator and the double‐logistic structural mean model is further discussed. Copyright © 2014 John Wiley & Sons, Ltd.

13.
The instrumental variable method has been employed within economics to infer causality in the presence of unmeasured confounding. Emphasising the parallels to randomisation may increase understanding of the underlying assumptions within epidemiology. An instrument is a variable that predicts exposure, but conditional on exposure shows no independent association with the outcome. The random assignment in trials is an example of what would be expected to be an ideal instrument, but instruments can also be found in observational settings with a naturally varying phenomenon e.g. geographical variation, physical distance to facility or physician’s preference. The fourth identifying assumption has received less attention, but is essential for the generalisability of estimated effects. The instrument identifies the group of compliers in which exposure is pseudo-randomly assigned leading to exchangeability with regard to unmeasured confounders. Underlying assumptions can only partially be tested empirically and require subject-matter knowledge. Future studies employing instruments should carefully seek to validate all four assumptions, possibly drawing on parallels to randomisation.
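The instrumental-variable logic described in the two abstracts above can be shown with a minimal simulation (all names and coefficients hypothetical): Z is "pseudo-randomly" assigned, affects Y only through X, and is independent of the unmeasured confounder U. Ordinary least squares is biased by U; the simple Wald/IV ratio recovers the true effect.

```python
import random

random.seed(3)
n = 10000
Z = [random.gauss(0, 1) for _ in range(n)]            # instrument
U = [random.gauss(0, 1) for _ in range(n)]            # unmeasured confounder
X = [z + u + random.gauss(0, 1) for z, u in zip(Z, U)]
Y = [0.5 * x + u + random.gauss(0, 1) for x, u in zip(X, U)]  # true effect 0.5

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((p - ma) * (q - mb) for p, q in zip(a, b)) / len(a)

ols = cov(X, Y) / cov(X, X)  # biased upward by U (~0.83 here)
iv = cov(Z, Y) / cov(Z, X)   # Wald/IV estimator, consistent (~0.5)

print(round(ols, 2), round(iv, 2))
```

The sketch makes the assumptions concrete: the estimator is only valid because Z was generated independently of U and enters Y only through X, which is exactly what can fail, and must be argued from subject-matter knowledge, in a real application.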

14.

Purpose

With observational epidemiologic studies, there is often concern that an unmeasured variable might confound an observed association. Investigators can assess the impact of such unmeasured variables on an observed relative risk (RR) by utilizing externally sourced information and applying an indirect adjustment procedure, for example, the “Axelson adjustment.” Although simple and easy to use, this approach applies only to exposure and confounder variables that are binary. Other approaches eschew specific values and provide only bounds on the potential bias.

Methods

For both multiplicative and additive RR models, we present formulae for indirect adjustment of observed RRs for unmeasured potential confounding variables when there are multiple categories. In addition, we suggest an alternative strategy to identify the characteristics that the confounder must have to explain fully the observed association.

Results and Conclusions

We provide examples involving studies of pediatric computed tomography scanning and leukemia, and of nuclear radiation workers and smoking, to demonstrate that with externally sourced information, an investigator can assess whether confounding from unmeasured factors is likely to occur.
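The binary-confounder ("Axelson") indirect adjustment that this paper generalizes to multiple categories fits in one function. All the numbers below are hypothetical external information about a smoking-like confounder, in the style of the paper's worked examples.

```python
def indirect_adjust(rr_obs, rr_conf, p_exposed, p_unexposed):
    """Divide the observed RR by the bias factor implied by a binary
    unmeasured confounder with confounder-outcome risk ratio rr_conf and
    prevalences p_exposed / p_unexposed in the two exposure groups."""
    bias = (1 + p_exposed * (rr_conf - 1)) / (1 + p_unexposed * (rr_conf - 1))
    return rr_obs / bias

# Hypothetical: observed RR 1.5; the confounder triples risk (RR 3.0) and
# has prevalence 40% among the exposed vs. 30% among the unexposed.
print(round(indirect_adjust(1.5, 3.0, 0.4, 0.3), 3))  # 1.333
```

Here the externally sourced prevalences imply a bias factor of 1.125, so the confounder explains only part of the observed RR of 1.5, the kind of judgment the abstract says the procedure supports.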

15.
We consider Bayesian sensitivity analysis for unmeasured confounding in observational studies where the association between a binary exposure, binary response, measured confounders and a single binary unmeasured confounder can be formulated using logistic regression models. A model for unmeasured confounding is presented along with a family of prior distributions that model beliefs about a possible unknown unmeasured confounder. Simulation from the posterior distribution is accomplished using Markov chain Monte Carlo. Because the model for unmeasured confounding is not identifiable, standard large-sample theory for Bayesian analysis is not applicable. Consequently, the impact of different choices of prior distributions on the coverage probability of credible intervals is unknown. Using simulations, we investigate the coverage probability when averaged with respect to various distributions over the parameter space. The results indicate that credible intervals will have approximately nominal coverage probability, on average, when the prior distribution used for sensitivity analysis approximates the sampling distribution of model parameters in a hypothetical sequence of observational studies. We motivate the method in a study of the effectiveness of beta blocker therapy for treatment of heart failure.
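The core idea, priors on unidentifiable confounder parameters propagated into an interval for the corrected estimate, can be sketched with simple Monte Carlo prior sampling rather than MCMC, and on the risk-ratio rather than the logistic scale. Everything below (priors, observed estimate of 0.72, bias formula) is a hypothetical simplification, not the paper's model.

```python
import math
import random

random.seed(4)
rr_obs = 0.72  # hypothetical observed estimate to be bias-corrected
adjusted = []
for _ in range(20000):
    # Draw the unknown confounder parameters from (hypothetical) priors.
    rr_conf = math.exp(random.gauss(math.log(1.5), 0.3))  # confounder-outcome RR
    p1 = random.uniform(0.2, 0.6)  # prevalence among exposed
    p0 = random.uniform(0.2, 0.6)  # prevalence among unexposed
    bias = (1 + p1 * (rr_conf - 1)) / (1 + p0 * (rr_conf - 1))
    adjusted.append(rr_obs / bias)

adjusted.sort()
lo, mid, hi = adjusted[500], adjusted[10000], adjusted[19500]  # 2.5/50/97.5%
print(round(lo, 2), round(mid, 2), round(hi, 2))
```

The spread of the resulting interval reflects exactly the point the abstract makes: with a non-identifiable bias model, the answer is driven by the prior, so the prior must be chosen and reported with care.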

16.
Mediation analysis helps researchers assess whether part or all of an exposure's effect on an outcome is due to an intermediate variable. The indirect effect can help in designing interventions on the mediator as opposed to the exposure and better understanding the outcome's mechanisms. Mediation analysis has seen increased use in genome‐wide epidemiological studies to test for an exposure of interest being mediated through a genomic measure such as gene expression or DNA methylation (DNAm). Testing for the indirect effect is challenged by the fact that the null hypothesis is composite. We examined the performance of commonly used mediation testing methods for the indirect effect in genome‐wide mediation studies. When there is no association between the exposure and the mediator and no association between the mediator and the outcome, we show that these common tests are overly conservative. This is a case that will arise frequently in genome‐wide mediation studies. Caution is hence needed when applying the commonly used mediation tests in genome‐wide mediation studies. We evaluated the performance of these methods using simulation studies, and performed an epigenome‐wide mediation association study in the Normative Aging Study, analyzing DNAm as a mediator of the effect of pack‐years on FEV1.

17.
BACKGROUND: Neurobehavioral tests are commonly used in studies of children exposed to low-level environmental concentrations of compounds known to be neurotoxic at higher levels. However, uncontrolled or incomplete control for confounding makes interpretation of results problematic because effects of confounders are often stronger than the effects of primary interest. We examined a priori the potential impact of confounding in a hypothetical study evaluating the association of a potentially neurotoxic environmental exposure with neurobehavioral function in children. METHODS: We used 2 outcome measures: the Bayley Scales of Infant Development Mental Development Index and the Stanford-Binet Intelligence Scale Composite Score. We selected 3 potential confounders: maternal intelligence, home environment, and socioeconomic status as measured by years of parental education. We conducted 3 sets of analyses measuring the effect of each of the 3 confounding factors alone, 2 confounders acting simultaneously, and all 3 confounders acting simultaneously. RESULTS: Relatively small differences (0.5 standard deviations) in confounding variables between "exposed" and "unexposed" groups, if unmeasured and unaccounted for in the analysis, could produce spurious differences in cognitive test scores. The magnitude of this difference (3-10 points) has been suggested to have a meaningful impact in populations. The method of measuring confounders (eg, maternal intelligence) could also substantially affect the results. CONCLUSIONS: It is important to carefully consider the impact of potential confounders during the planning stages of an observational study. Study-to-study differences in neurobehavioral outcomes with similar environmental exposures could be partially explained by differences in the adjustment for confounding variables.

18.
OBJECTIVE: In the analysis of observational data, the argument is sometimes made that if adjustment for measured confounders induces little change in the treatment-outcome association, then there is less concern about the extent to which the association is driven by unmeasured confounding. We quantify this reasoning using Bayesian sensitivity analysis (BSA) for unmeasured confounding. Using hierarchical models, the confounding effect of a binary unmeasured variable is modeled as arising from the same distribution as that of measured confounders. Our objective is to investigate the performance of the method compared to sensitivity analysis, which assumes that there is no relationship between measured and unmeasured confounders. STUDY DESIGN AND SETTING: We apply the method in an observational study of the effectiveness of beta-blocker therapy in heart failure patients. RESULTS: BSA for unmeasured confounding using hierarchical prior distributions yields an odds ratio (OR) of 0.72, 95% credible interval (CrI): 0.56, 0.93 for the association between beta-blockers and mortality, whereas using independent priors yields OR=0.72, 95% CrI: 0.45, 1.15. CONCLUSION: If the confounding effect of a binary unmeasured variable is similar to that of measured confounders, then conventional sensitivity analysis may give results that overstate the uncertainty about bias.

19.
Confounding is an important and common issue in observational (non-randomized) research on the effects of pharmaceuticals or exposure to etiologic factors (determinants). Confounding is present when a third factor, related to both the determinant and the outcome, distorts the causal relation between these two. There are different methods to control for confounding. The most commonly used are restriction, stratification, multivariable regression models, and propensity score methods. With these methods it is only possible to control for variables for which data are available: measured confounders. Research in the area of confounding is currently directed at the incorporation of external knowledge on unmeasured confounders, the evaluation of instrumental variables, and the impact of time-dependent confounding.
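Stratification, one of the standard methods this overview lists, can be sketched with a Mantel-Haenszel pooled odds ratio across strata of a measured confounder. The 2x2 counts below are hypothetical; each stratum is given as (exposed cases a, exposed non-cases b, unexposed cases c, unexposed non-cases d).

```python
# Hypothetical 2x2 tables within two strata of a measured confounder.
strata = [
    (10, 90, 5, 95),   # stratum 1: within-stratum OR ~2.11
    (40, 60, 25, 75),  # stratum 2: within-stratum OR = 2.0
]

# Mantel-Haenszel pooled OR: sum(a*d/n) / sum(b*c/n) over strata.
num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
or_mh = num / den

print(round(or_mh, 2))  # ~2.03, a confounder-adjusted summary OR
```

Because the odds ratio is computed within strata and then pooled, the measured confounder cannot distort it, which is precisely the limitation the abstract flags: the trick only works for confounders that were actually measured.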
