Similar articles
20 similar records found (search time: 15 ms)
1.
Misconceptions about the impact of case–control matching remain common. We discuss several subtle problems associated with matched case–control studies that do not arise or are minor in matched cohort studies: (1) matching, even for non-confounders, can create selection bias; (2) matching distorts dose–response relations between matching variables and the outcome; (3) unbiased estimation requires accounting for the actual matching protocol as well as for any residual confounding effects; (4) for efficiency, identically matched groups should be collapsed; (5) matching may harm precision and power; (6) matched analyses may suffer from sparse-data bias, even when using basic sparse-data methods. These problems support advice to limit case–control matching to a few strong well-measured confounders, which would devolve to no matching if no such confounders are measured. On the positive side, odds ratio modification by matched variables can be assessed in matched case–control studies without further data, and when one knows either the distribution of the matching factors or their relation to the outcome in the source population, one can estimate and study patterns in absolute rates. Throughout, we emphasize distinctions from the more intuitive impacts of cohort matching.
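The selection-bias point in (1) can be made concrete with a small numerical sketch. All numbers below are hypothetical, not from the paper: a variable M affects exposure E but has no effect on disease D given E (a non-confounder). Matching controls to cases on M still biases the crude odds ratio toward the null, while analysis within M-strata recovers the true OR.

```python
P_M1 = 0.5                        # P(M = 1) in the source population (hypothetical)
P_E_given_M = {1: 0.7, 0: 0.3}    # M predicts exposure
P_D_given_E = {1: 0.02, 0: 0.01}  # rare disease; depends on E only

def odds(p):
    return p / (1.0 - p)

true_or = odds(P_D_given_E[1]) / odds(P_D_given_E[0])  # about 2.02

# Joint population probabilities P(M = m, E = e)
joint = {}
for m in (0, 1):
    pm = P_M1 if m == 1 else 1 - P_M1
    for e in (0, 1):
        pe = P_E_given_M[m] if e == 1 else 1 - P_E_given_M[m]
        joint[(m, e)] = pm * pe

# Distribution of M and E among cases (D = 1)
p_case = {cell: p * P_D_given_E[cell[1]] for cell, p in joint.items()}
p_d = sum(p_case.values())
p_m_case = {m: sum(p for (mm, e), p in p_case.items() if mm == m) / p_d
            for m in (0, 1)}
p_e_case = sum(p for (m, e), p in p_case.items() if e == 1) / p_d

# Exposure prevalence among disease-free subjects within each M-stratum
def p_e_ctrl(m):
    num = joint[(m, 1)] * (1 - P_D_given_E[1])
    den = num + joint[(m, 0)] * (1 - P_D_given_E[0])
    return num / den

# Controls matched on M inherit the *case* distribution of M
p_e_matched_ctrl = sum(p_m_case[m] * p_e_ctrl(m) for m in (0, 1))
crude_matched_or = odds(p_e_case) / odds(p_e_matched_ctrl)  # biased toward the null

# Stratum-specific (matching-protocol-respecting) odds ratios
def p_e_case_m(m):
    return p_case[(m, 1)] / (p_case[(m, 1)] + p_case[(m, 0)])

stratum_or = {m: odds(p_e_case_m(m)) / odds(p_e_ctrl(m)) for m in (0, 1)}
```

Here the crude matched OR comes out near 1.8 although the true OR is about 2.02; the stratified ORs equal the true value, illustrating point (3) that the matching protocol must be accounted for in the analysis.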

2.
In observational studies of the effect of an exposure on an outcome, the exposure–outcome association is usually confounded by other causes of the outcome (potential confounders). One common method to increase efficiency is to match the study on potential confounders. Matched case‐control studies are relatively common and well covered by the literature. Matched cohort studies are less common but do sometimes occur. It is often argued that it is valid to ignore the matching variables in the analysis of matched cohort data. In this paper, we provide analyses delineating the scope and limits of this argument. We discuss why the argument does not carry over to effect estimation in matched case‐control studies, although it does carry over to null‐hypothesis testing. We also show how the argument does not extend to matched cohort studies when one adjusts for additional confounders in the analysis. Ignoring the matching variables can sometimes reduce variance, even though this is not guaranteed. We investigate the trade‐off between bias and variance in deciding whether adjustment for matching factors is advisable. Copyright © 2013 John Wiley & Sons, Ltd.

3.
Matching is a common method for selecting study subjects in observational research; it can control confounding and improve statistical efficiency, but its ability to control confounding differs across observational designs. In cohort studies, matching removes confounding bias from the matched variables, whereas in case-control studies matching by itself does not remove confounding bias. When choosing matching variables for a matched case-control study, the investigator may not be able to judge accurately whether a candidate variable is in fact a confounder. Matching on a variable that is truly a non-confounder constitutes overmatching, which reduces statistical efficiency, may introduce bias that is difficult to remove, and increases workload; conversely, failing to match on a true confounder leaves confounding bias in place. Directed acyclic graphs (DAGs) are an intuitive visual language for depicting different epidemiologic study designs and the complex causal relationships among variables. From a DAG perspective, this article analyzes the role of matching in different observational study designs and the criteria for selecting matching variables in matched case-control studies, offering reference suggestions for future epidemiologic study design.

4.
Confounding and misclassification
The authors examine some recently proposed criteria for determining when to adjust for covariates related to misclassification, and show these criteria to be incorrect. In particular, they show that when misclassification is present, covariate control can sometimes increase net bias, even when the covariate would have been a confounder under perfect classification, and even if the covariate is a determinant of classification. Thus, bias due to misclassification cannot be adequately dealt with by the methods used for control of confounding. The examples presented also show that the "change-in-estimate" criterion for deciding whether to control a covariate can be systematically misleading when misclassification is present. These results demonstrate that it is necessary to consider the degree of misclassification when deciding whether to control a covariate.

5.
Loss to follow-up is problematic in most cohort studies and often leads to bias. Although guidelines suggest acceptable follow-up rates, the authors are unaware of studies that test the validity of these recommendations. The objective of this study was to determine whether the recommended follow-up thresholds of 60-80% are associated with biased effects in cohort studies. A simulation study was conducted using 1000 computer replications of a cohort of 500 observations. The logistic regression model included a binary exposure and three confounders. Varied correlation structures of the data represented various levels of confounding. Differing levels of loss to follow-up were generated through three mechanisms: missing completely at random (MCAR), missing at random (MAR) and missing not at random (MNAR). The authors found no important bias with levels of loss that varied from 5 to 60% when loss to follow-up was related to MCAR or MAR mechanisms. However, when observations were lost to follow-up based on a MNAR mechanism, the authors found seriously biased estimates of the odds ratios with low levels of loss to follow-up. Loss to follow-up in cohort studies rarely occurs randomly. Therefore, when planning a cohort study, one should assume that loss to follow-up is MNAR and attempt to achieve the maximum follow-up rate possible.
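The core contrast can be re-created in a few lines of simulation. This is a simplified sketch, not the authors' code: a single binary exposure, no confounders, and crude odds ratios from 2x2 counts. MCAR loss leaves the OR essentially unbiased even at 40% attrition, while loss that depends on both exposure and outcome (MNAR) distorts it badly.

```python
import random

random.seed(1)
N = 200_000
P_EXPOSED = 0.5
RISK = {1: 0.20, 0: 0.10}  # true OR = (0.2/0.8)/(0.1/0.9) = 2.25

def crude_or(records):
    a = sum(1 for e, d in records if e == 1 and d == 1)
    b = sum(1 for e, d in records if e == 1 and d == 0)
    c = sum(1 for e, d in records if e == 0 and d == 1)
    d_ = sum(1 for e, d in records if e == 0 and d == 0)
    return (a * d_) / (b * c)

# Generate the full cohort: (exposure, outcome) pairs
cohort = []
for _ in range(N):
    e = 1 if random.random() < P_EXPOSED else 0
    d = 1 if random.random() < RISK[e] else 0
    cohort.append((e, d))

# MCAR: 40% lost, independent of exposure and outcome
mcar = [r for r in cohort if random.random() >= 0.40]

# MNAR: exposed cases are lost much more often than everyone else
def p_lost(e, d):
    return 0.60 if (e == 1 and d == 1) else 0.20

mnar = [r for r in cohort if random.random() >= p_lost(*r)]

or_full = crude_or(cohort)
or_mcar = crude_or(mcar)
or_mnar = crude_or(mnar)
```

Analytically, the MNAR retention probabilities multiply the OR by (0.40 * 0.80) / (0.80 * 0.80) = 0.5, so the MNAR estimate sits near 1.13 instead of 2.25, despite a comparable overall follow-up rate.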

6.
Lack of power is a pertinent problem in many case-control studies of gene-environment interactions. The authors recently introduced the concept of flexible matching strategies with varying proportions of a matching factor among selected controls (degree of matching) to increase the power and efficiency of case-control studies. In this study, they extended the concept of flexible matching strategies to the field of gene-environment interactions. They assessed the power and efficiency of such studies to detect and estimate gene-environment interactions under a variety of assumptions regarding the prevalence and effects of the environmental exposure and the genetic susceptibility as well as their association in the population. For each set of parameters, 10,000 case-control studies were simulated using varying degrees of matching. Traditional frequency matching increased the power and precision in most scenarios, but even greater gains were often obtained by increasing the prevalence of the environmental exposure in controls above that in cases. The authors concluded that flexible matching strategies can increase the power and efficiency of case-control studies to detect and estimate gene-environment interactions compared with traditional frequency matching and therefore might help to alleviate the notorious lack of power of these studies in specific situations.

7.
The paper presents a case-control study involving a disease, exposures and several continuous confounders. The relative efficiency and validity of a fully matched design is compared with random sampling of controls. We test a viable option of a partially matched design when inability to match all study subjects on all confounders occurs. The degree of bias in the odds ratios introduced by the different designs and by the different analytic models is assessed in comparison with the estimates obtained from a total cohort, from which both cases and controls were selected. Matched designs and analytic strategies are also evaluated in terms of the variances of the odds ratios. The results indicate that matching on continuous variables may lead to a more precise estimate of odds ratio than statistical control of confounding in unmatched designs. Partial selection of controls by matching may be a useful strategy when complete matching cannot be achieved; in practice, partial matching achieves most of the benefits of full matching.

8.
Retrospective studies have always taken for granted that matching should be done on factors which affect the incidence of the disease. Worcester stated that "when a disease group is being compared with another group, matching is usually done on variables known to be related to the disease rather than on variables related to the outcome." Miettinen et al., however, disagree. They believe that factors on which matching should be done must be related to the outcome variable; otherwise, they do not affect the measure that is the basis for a decision about the association between the putative etiologic agent and the disease. Thus in a retrospective study of the association between blood group and cervical cancer, male controls would be quite acceptable if they were healthy and were matched to the patients on race. The erroneous view that one should match on factors which affect the incidence of disease may have resulted from confusion with follow-up studies, where it is proper to match on factors that are correlated with the disease, since the occurrence of disease is the outcome variable in follow-up studies. Miettinen further states that one should match only on those factors that are correlated with both the outcome variable and disease incidence. The authors state that whenever a factor is strongly correlated with the outcome, one should match on this factor or consider it in the analysis, even if, a priori, it is thought not to affect the incidence of the disease. In cases where there is uncertainty whether a variable is or is not correlated with the outcome variable, the decision to match or not may be influenced by whether the variable is known to affect the disease incidence.

9.
BACKGROUND: Studies of the effect of exposure to a risk factor measured in an entire cohort may be augmented by nested case-control subsets to investigate confounding or effect modification by additional factors not practically assessed on all cohort members. We compared three control-selection strategies (matching on exposure, counter matching on exposure, and random sampling) to determine which was most efficient in a situation where exposure is a known, continuous variable and high doses are rare. METHODS: We estimated the power to detect interaction using four control-to-case ratios (1:1, 2:1, 4:1, and 8:1) in a planned case-control study of the joint effect of atomic bomb radiation exposure and serum oestradiol levels on breast cancer. Radiation dose is measured in the entire cohort, but because neither serum oestradiol level nor the true degree of interaction was known, we simulated values of oestradiol and hypothetical levels of oestradiol-radiation interaction. RESULTS: Compared with random sampling, power to detect interaction was similarly higher with either matching or counter matching with two or more controls. CONCLUSIONS: Because counter matching is generally at least as efficient as random sampling, whereas matching on exposure can result in loss of efficiency and precludes estimation of exposure risk, we recommend counter matching for selecting controls in nested case-control studies of the joint effects of multiple risk factors when one is previously measured in the full cohort.

10.
In many large prospective cohorts, expensive exposure measurements cannot be obtained for all individuals. Exposure–disease association studies are therefore often based on nested case–control or case–cohort studies in which complete information is obtained only for sampled individuals. However, in the full cohort, there may be a large amount of information on cheaply available covariates and possibly a surrogate of the main exposure(s), which typically goes unused. We view the nested case–control or case–cohort study plus the remainder of the cohort as a full‐cohort study with missing data. Hence, we propose using multiple imputation (MI) to utilise information in the full cohort when data from the sub‐studies are analysed. We use the fully observed data to fit the imputation models. We consider using approximate imputation models and also using rejection sampling to draw imputed values from the true distribution of the missing values given the observed data. Simulation studies show that using MI to utilise full‐cohort information in the analysis of nested case–control and case–cohort studies can result in important gains in efficiency, particularly when a surrogate of the main exposure is available in the full cohort. In simulations, this method outperforms counter‐matching in nested case–control studies and a weighted analysis for case–cohort studies, both of which use some full‐cohort information. Approximate imputation models perform well except when there are interactions or non‐linear terms in the outcome model, where imputation using rejection sampling works well. Copyright © 2013 John Wiley & Sons, Ltd.

11.
Ambidirectional studies are useful when information about disease status is available on a cohort but a risk factor has still to be recorded. An example is the study of the influence of HLA phenotypes on the progression of HIV carriers towards AIDS. An ambidirectional design is proposed in which the cases and controls are defined by the survival duration of the subjects; it includes as special cases some other ambidirectional designs. Its efficiency is compared with that of a random selection cohort design both analytically and by computer simulation. It is shown that when the size of the cohort is large, appreciable gains in power can be achieved by this type of design even when there is no censoring.

12.
Value in Health, 2023, 26(3): 344-350
Objectives: Guidance on the conduct of health technology assessments rarely recommends accounting for anticipated future price declines that can follow loss of marketing exclusivity. This article explores when it is appropriate to account for generic pricing and whether it can influence cost-effectiveness estimates. Methods: This article presents 4 case studies. Case study 1 considers a hypothetical drug used by a first patient cohort at branded prices and by subsequent, "downstream" cohorts at generic prices. Case study 2 explores whether statin assessments should account for generic prices for downstream cohorts that gain access after the initial cohort. Case study 3 uses a simplified spreadsheet model to assess the impact of accounting for generic pricing for inclisiran, used when statins insufficiently reduce cholesterol. Case study 4 amends this model for a hypothetical, advanced, follow-on treatment displacing inclisiran. Results: Assessments should include generic pricing even if the first cohort using a drug pays branded prices and only downstream cohorts pay generic prices (case study 1). Because eventual generic pricing for statins did not depend on decisions for downstream cohorts, assessing reimbursement for those cohorts could safely omit generic pricing (case study 2). For inclisiran (case study 3), including generic pricing notably improved estimated cost-effectiveness. Displacing inclisiran with an advanced therapy (case study 4) modestly affected estimated cost-effectiveness. Conclusions: Although this analysis relies on simplified and hypothetical models, it demonstrates that accounting for generic pricing might substantially reduce estimated cost-effectiveness ratios. Doing so when warranted is crucial to improving health technology assessment validity.

13.
The impact of competing risks on tests of association between disease and haplotypes has been largely ignored. We consider situations in which linkage phase is ambiguous and show that tests for disease-haplotype association can lead to rejection of the null hypothesis, even when true, with more than the nominal 5 per cent frequency. This problem tends to occur if a haplotype is associated with overall mortality, even if the haplotype is not associated with disease risk. A small simulation study illustrates the magnitude of bias (high type I error rate) in the context of a cohort study in which a modest number of disease cases (about 350) occur over time. The bias remains even if the score test is based on a logistic model that includes age as a covariate. For cohort studies, we propose a new test based on a modification of the proportional hazards model and for case-control studies, a test based on a conditional likelihood that have the correct size under the null even in the presence of competing risks, and that can be used when haplotype is ambiguous.

14.
There are disagreements in the literature about the criteria to be used to ascertain whether or not a measure of association is confounded. The authors postulate the general principle that a crude unconfounded measure of association is structured as a weighted average of the stratum-specific values of the measure. They examine the relationships between stratum-specific measures of association, crude overall measures, and weighted averages of stratum-specific measures, and indicate how these relationships may be used to define criteria for the assessment of confounding in cohort studies in which the exposure, disease, and stratification variables are classified dichotomously. The criteria presented differ for the risk ratio and for the disease-odds ratio. In other words, one can reach different conclusions about the confounding effect of a given extraneous variable, depending on which measure of association is chosen. This view differs from that of Miettinen and Cook (Confounding: essence and detection. Am J Epidemiol 1981;114:593-603) who postulated one set of criteria for the assessment of confounding, which was applicable to both measures of association. These different approaches may lead to different conclusions about the presence or absence of confounding.
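The claim that the risk-ratio and odds-ratio criteria can disagree is easy to verify numerically. In the hypothetical table below, the stratification variable Z is independent of exposure and the stratum-specific odds ratios are identical, yet the crude OR differs from the common stratum OR (non-collapsibility), while the crude risk ratio equals the ratio of the population-averaged risks, so a "crude vs. weighted average" criterion gives different answers for the two measures.

```python
# Hypothetical risks, chosen so both strata share the same odds ratio
P_Z1 = 0.5
risk = {  # risk[(z, e)] = P(D = 1 | Z = z, E = e)
    (1, 1): 0.8, (1, 0): 0.5,   # stratum Z=1: OR = 4.0, RR = 1.6
    (0, 1): 0.5, (0, 0): 0.2,   # stratum Z=0: OR = 4.0, RR = 2.5
}

def odds(p):
    return p / (1 - p)

stratum_or = {z: odds(risk[(z, 1)]) / odds(risk[(z, 0)]) for z in (0, 1)}

# Z is independent of exposure, so the crude risks are population averages
crude_risk = {e: P_Z1 * risk[(1, e)] + (1 - P_Z1) * risk[(0, e)]
              for e in (0, 1)}
crude_rr = crude_risk[1] / crude_risk[0]                 # 0.65 / 0.35
crude_or = odds(crude_risk[1]) / odds(crude_risk[0])     # less than 4.0
```

Both stratum ORs equal 4.0, yet the crude OR is about 3.45; the crude RR, by contrast, is exactly the weighted-average (standardized) value, so Z looks unconfounding for the RR but confounding for the OR under the same weighted-average criterion.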

15.
For some diseases, there has been controversy about whether key risk factors are related linearly to the occurrence of disease events. This issue has important implications for strategies to modify risk factors, since nonlinear threshold or J-curve associations imply that risk factor modification is not beneficial beyond a certain level. This paper considers whether nonlinear risk factor associations can arise spuriously from selection mechanisms common in prospective cohort studies. Using theory, simulation, and cohort data, the authors show that selecting individuals based on their prior disease status leads to the primary risk factor being negatively confounded with other residual risk factors. If this confounding combines with effect modification between the primary and residual risk factors, as exists in cardiovascular disease, then the aggregate effect is nonlinear distortion of the risk factor relation. Such distortion can produce an apparent threshold or J-curve relation, even if the true underlying relation is linear. The authors conclude that nonlinear risk factor associations observed in primary or secondary prevention cohorts should be interpreted with caution because they may be consistent with an underlying linear lower-is-better relation. Randomized studies provide an important complement to prospective cohort studies when choosing between intensive and moderate risk factor modification strategies in high-risk populations.

16.
Summary: The epidemiological literature on passive smoking and lung cancer is reviewed and the well-known criteria for establishing a causal relationship are applied in order to determine what level of causal evidence currently exists. Three cohort studies and 12 case-control studies are analysed. Of the prospective cohort studies, one contributes very little to our knowledge, one shows no risk increase, and one results in a moderate risk increase of 1.74 for women married to heavy smokers. The last is the only study which has to be taken seriously, even though other considerations show that its results might be caused by chance, bias or confounding. None of the six case-control studies yielding a positive relationship was conducted according to the state of the art of epidemiological research, giving reasonable and sound evidence which cannot be explained by chance, bias, confounding or misclassification. Two studies contribute nothing to the evidence. None of the four case-control studies yielding no risk change or a risk decrease can exclude the possibility that a causal relation exists. The epidemiological and toxicological evidence is discussed in the light of recent findings. The volume of accumulated data is conflicting and inconclusive. The observations on nonsmokers that have been made so far are compatible with either an increased risk from passive smoking or an absence of risk. Applying the criteria proposed by IARC, there is a state of inadequate evidence. The available studies, while showing some evidence of association, do not exclude chance, bias or confounding. They provide, however, a serious hypothesis. Further studies are needed if one wants to come to an adequate and scientifically sound conclusion concerning the question as to whether passive smoking causes lung cancer in man.

17.
Nonrandomized studies of treatments from electronic healthcare databases are critical for producing the evidence necessary for making informed treatment decisions, but often rely on comparing rates of events observed in a small number of patients. In addition, studies constructed from electronic healthcare databases, for example, administrative claims data, often adjust for many, possibly hundreds, of potential confounders. Despite the importance of maximizing efficiency when there are many confounders and few observed outcome events, there has been relatively little research on the relative performance of different propensity score methods in this context. In this paper, we compare a wide variety of propensity‐based estimators of the marginal relative risk. In contrast to prior research that has focused on specific statistical methods in isolation of other analytic choices, we instead consider a method to be defined by the complete multistep process from propensity score modeling to final treatment effect estimation. Propensity score model estimation methods considered include ordinary logistic regression, Bayesian logistic regression, lasso, and boosted regression trees. Methods for utilizing the propensity score include pair matching, full matching, decile strata, fine strata, regression adjustment using one or two nonlinear splines, inverse propensity weighting, and matching weights. We evaluate methods via a 'plasmode' simulation study, which creates simulated datasets on the basis of a real cohort study of two treatments constructed from administrative claims data. Our results suggest that regression adjustment and matching weights, regardless of the propensity score model estimation method, provide lower bias and mean squared error in the context of rare binary outcomes. Copyright © 2017 John Wiley & Sons, Ltd.
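As a minimal illustration of one of the compared utilization methods, the sketch below works through inverse propensity weighting analytically, with the true propensity score assumed known (skipping the estimation step the paper varies) and a single binary confounder. The crude risk difference is confounded; the IPW estimate recovers the true marginal effect.

```python
# All numbers are hypothetical, for illustration only
P_C1 = 0.5
PS = {1: 0.8, 0: 0.2}   # propensity score P(T = 1 | C)
RISK = {  # RISK[(t, c)] = P(Y = 1 | T = t, C = c)
    (1, 1): 0.6, (1, 0): 0.3,
    (0, 1): 0.4, (0, 0): 0.1,
}

def p_c(c):
    return P_C1 if c == 1 else 1 - P_C1

# True marginal risks: average RISK over the *population* distribution of C
true_risk = {t: sum(p_c(c) * RISK[(t, c)] for c in (0, 1)) for t in (0, 1)}
true_rd = true_risk[1] - true_risk[0]          # 0.45 - 0.25 = 0.20

# Crude risks condition on the treated/untreated distribution of C
def p_c_given_t(c, t):
    pt_c = PS[c] if t == 1 else 1 - PS[c]
    pt = sum(p_c(cc) * (PS[cc] if t == 1 else 1 - PS[cc]) for cc in (0, 1))
    return p_c(c) * pt_c / pt

crude_risk = {t: sum(p_c_given_t(c, t) * RISK[(t, c)] for c in (0, 1))
              for t in (0, 1)}
crude_rd = crude_risk[1] - crude_risk[0]       # 0.54 - 0.16 = 0.38, confounded

# IPW: weight each (T, C) cell by 1 / P(T = t | C); expectations are exact here
def ipw_risk(t):
    num = den = 0.0
    for c in (0, 1):
        p_treat = PS[c] if t == 1 else 1 - PS[c]
        p_cell = p_c(c) * p_treat          # P(C = c, T = t)
        w = 1.0 / p_treat                  # inverse propensity weight
        num += w * p_cell * RISK[(t, c)]
        den += w * p_cell
    return num / den

ipw_rd = ipw_risk(1) - ipw_risk(0)             # recovers the true 0.20
```

The weights exactly undo the confounder imbalance between arms; in a finite sample the same weighting is applied to estimated propensity scores, which is where the modeling choices the paper compares come into play.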

18.
Because of the lack of power of case-control study designs to detect gene-environment interactions, flexible matching has recently been proposed as a method of improving efficiency. In this paper, the authors consider a large-sample approximation method that allows estimation of the most efficient matching strategy when genotype and exposure are either independent or associated. The authors provide tables of the sample sizes required to detect gene-environment interactions if this flexible matching strategy is followed, and they make brief comparisons with other study designs.

19.
Countermatching designs can provide more efficient estimates than simple matching or case–cohort designs in certain situations such as when good surrogate variables for an exposure of interest are available. We extend pseudolikelihood estimation for the Cox model under countermatching designs to models where time‐varying covariates are considered. We also implement pseudolikelihood with calibrated weights to improve efficiency in nested case–control designs in the presence of time‐varying variables. A simulation study is carried out, which considers four different scenarios including a binary time‐dependent variable, a continuous time‐dependent variable, and the case including interactions in each. Simulation results show that pseudolikelihood with calibrated weights under countermatching offers large gains in efficiency if compared to case–cohort. Pseudolikelihood with calibrated weights yielded more efficient estimators than pseudolikelihood estimators. Additionally, estimators were more efficient under countermatching than under case–cohort for the situations considered. The methods are illustrated using the Colorado Plateau uranium miners cohort. Furthermore, we present a general method to generate survival times with time‐varying covariates. Copyright © 2016 John Wiley & Sons, Ltd.

20.
Cox's regression model is widely used for assessing associations between potential risk factors and disease occurrence in epidemiologic cohort studies. Although age is often a strong determinant of disease risk, authors have frequently used time-on-study instead of age as the time-scale, as for clinical trials. Unless the baseline hazard is an exponential function of age, this approach can yield different estimates of relative hazards than using age as the time-scale, even when age is adjusted for. We performed a simulation study in order to investigate the existence and magnitude of bias for different degrees of association between age and the covariate of interest. Age to disease onset was generated from exponential, Weibull or piecewise Weibull distributions, and both fixed and time-dependent dichotomous covariates were considered. We observed no bias upon using age as the time-scale. Upon using time-on-study, we verified the absence of bias for exponentially distributed age to disease onset. For non-exponential distributions, we found that bias could occur even when the covariate of interest was independent from age. It could be severe in case of substantial association with age, especially with time-dependent covariates. These findings were illustrated on data from a cohort of 84,329 French women followed prospectively for breast cancer occurrence. In view of our results, we strongly recommend not using time-on-study as the time-scale for analysing epidemiologic cohort data.

