Similar documents
20 similar documents found (search time: 46 ms)
1.
Assessing whether the effect of exposure on an outcome is completely mediated by a third variable is often done by conditioning on the intermediate variable. However, when an association remains, it is not always clear how this should be interpreted. It may be explained by a causal direct effect of the exposure on the disease, or the adjustment may have been distorted due to various reasons, such as error in the measured mediator or unknown confounding of the association between the mediator and the outcome. In this paper, we study various situations where the conditional relationship between the exposure and the outcome is biased due to different types of measurement error in the mediator. For each of these situations, we quantify the effect on the association parameter. Such formulas can be used as tools for sensitivity analysis or to correct the association parameter for the bias due to measurement error. The performance of the bias formulas is studied by simulation and by applying them to data from a case-control study (Leiden Thrombophilia Study) on risk factors for venous thrombosis. In this study, the question was the extent to which the relationship between blood group and venous thrombosis might be mediated through coagulation factor VIII. We found that measurement error could have strongly biased the estimated direct effect of blood group on thrombosis. The formulas we propose can be a guide for researchers who find a residual association after adjusting for an intermediate variable and who wish to explore other possible explanations before concluding that there is a direct causal effect.
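The phenomenon this abstract describes can be illustrated with a small simulation. The sketch below is not taken from the paper: the model, coefficients, and error variances are assumptions chosen for illustration. It generates a fully mediated exposure–outcome relationship and shows that classical measurement error in the mediator leaves a spurious residual "direct effect" after adjustment:

```python
import random

random.seed(42)

def coef_x_given_m(x, m, y):
    """OLS coefficient of x in a regression of y on (intercept, x, m)."""
    n = len(y)
    xb, mb, yb = sum(x) / n, sum(m) / n, sum(y) / n
    sxx = sum((a - xb) ** 2 for a in x)
    smm = sum((b - mb) ** 2 for b in m)
    sxm = sum((a - xb) * (b - mb) for a, b in zip(x, m))
    sxy = sum((a - xb) * (c - yb) for a, c in zip(x, y))
    smy = sum((b - mb) * (c - yb) for b, c in zip(m, y))
    return (smm * sxy - sxm * smy) / (sxx * smm - sxm ** 2)

n = 20000
x = [random.randint(0, 1) for _ in range(n)]        # binary exposure
m = [0.8 * xi + random.gauss(0, 1) for xi in x]     # true mediator
y = [1.0 * mi + random.gauss(0, 1) for mi in m]     # fully mediated: true direct effect is zero
w = [mi + random.gauss(0, 1) for mi in m]           # error-prone measurement of the mediator

direct_true = coef_x_given_m(x, m, y)    # adjusting for the true mediator: ~0
direct_noisy = coef_x_given_m(x, w, y)   # adjusting for the noisy mediator: spuriously positive
print(direct_true, direct_noisy)
```

With these assumed parameters the residual association is roughly the mediator–outcome effect times the exposure–mediator effect times one minus the within-exposure reliability of the measurement, here about 0.4 even though the true direct effect is zero.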

2.
In a clinical trial where some subjects receive one or more non-randomized interventions during follow-up, primary interest is in the effect of the overall treatment strategies as implemented, but it may also be of interest to adjust treatment comparisons for non-randomized interventions. We consider non-randomized interventions, especially surgical procedures, which only occur when the outcome would otherwise have been poor. Focusing on an outcome measured repeatedly over time, we describe the variety of questions that may be addressed by an adjusted analysis. The adjusted analyses involve new outcome variables defined in terms of the observed outcomes and the history of non-randomized intervention. We also show how to check the assumption that the outcome would otherwise have been poor, and how to do a sensitivity analysis. We apply these methods to a clinical trial comparing initial angioplasty with medical management in patients with angina. We find that the initial benefit of a single angioplasty in reducing angina tends to disappear with time, but a policy of additional interventions as required yields a benefit that is maintained over 4 years. Such methods may be of interest to many pragmatic randomized trials in which the effects of the initial randomized treatments and the effects of the overall treatment strategies as implemented are both of interest.

3.
In most nonrandomized observational studies, differences between treatment groups may arise not only due to the treatment but also because of the effect of confounders. Therefore, causal inference regarding the treatment effect is not as straightforward as in a randomized trial. To adjust for confounding due to measured covariates, the average treatment effect is often estimated by using propensity scores. Typically, propensity scores are estimated by logistic regression. More recent suggestions have been to employ nonparametric classification algorithms from machine learning. In this article, we propose a weighted estimator combining parametric and nonparametric models. Some theoretical results regarding consistency of the procedure are given. Simulation studies are used to assess the performance of the newly proposed methods relative to existing methods, and a data analysis example from the Surveillance, Epidemiology and End Results database is presented.
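The parametric baseline this abstract mentions, logistic-regression propensity scores with inverse-probability weighting, can be sketched in a few lines. Everything below (one confounder, the coefficient values, the Newton–Raphson fit) is an illustrative assumption, not the paper's weighted estimator:

```python
import math
import random

random.seed(1)

def expit(u):
    return 1.0 / (1.0 + math.exp(-u))

def fit_logistic(z, t, iters=25):
    """Newton-Raphson fit of P(t=1|z) = expit(b0 + b1*z)."""
    b0 = b1 = 0.0
    for _ in range(iters):
        p = [expit(b0 + b1 * zi) for zi in z]
        g0 = sum(ti - pi for ti, pi in zip(t, p))
        g1 = sum((ti - pi) * zi for ti, pi, zi in zip(t, p, z))
        w = [pi * (1.0 - pi) for pi in p]
        h00 = sum(w)
        h01 = sum(wi * zi for wi, zi in zip(w, z))
        h11 = sum(wi * zi * zi for wi, zi in zip(w, z))
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

n = 10000
z = [random.gauss(0, 1) for _ in range(n)]                        # confounder
t = [1 if random.random() < expit(zi) else 0 for zi in z]         # confounded treatment
y = [2.0 * ti + zi + random.gauss(0, 1) for ti, zi in zip(t, z)]  # true ATE = 2

b0, b1 = fit_logistic(z, t)
e = [expit(b0 + b1 * zi) for zi in z]  # estimated propensity scores

# Inverse-probability-weighted (Horvitz-Thompson) estimate of the ATE
ate_ipw = (sum(ti * yi / ei for ti, yi, ei in zip(t, y, e)) / n
           - sum((1 - ti) * yi / (1 - ei) for ti, yi, ei in zip(t, y, e)) / n)
naive = (sum(yi for ti, yi in zip(t, y) if ti) / sum(t)
         - sum(yi for ti, yi in zip(t, y) if not ti) / (n - sum(t)))
print(ate_ipw, naive)
```

The naive difference in means is biased upward by confounding, while the weighted estimate recovers the true effect; a nonparametric or combined propensity model would slot in by replacing `fit_logistic`.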

4.
Hypotheses about the mechanisms of action by which a treatment affects a clinical outcome may prompt consideration of an alternative outcome as a potential surrogate. In many cases, due to costs or other factors, the candidate for surrogacy will only be measured for patients randomized to a substudy within the main trial. In this situation, the substudy patients provide information about links between the true and surrogate outcomes and the treatment, and these links can be exploited, using methods for handling missing covariates, to allow available information for patients not in the substudy to be incorporated into the analysis. The increased precision with which the treatment effect can be estimated using these methods, compared with using substudy data alone, in turn allows more precise estimates of measures of surrogacy. This paper reviews and compares some methods for handling missing covariate data and applies the methodology to a large heart attack trial, in order to investigate the properties of four measures for assessing the extent to which early response to thrombolytic therapy, as measured by an improvement in coronary blood flow, can be regarded as a surrogate for 30-day survival following heart attack. Design issues for substudies intended to assess treatment mechanisms are also considered. In particular, we consider how the precision of surrogate measures varies with the size of the substudy relative to the main trial. The results suggest that for reasonable surrogates, substudies substantially smaller than the main study can extract most of the available information regarding surrogacy.

5.
It is often of interest to assess how much of the effect of an exposure on a response is mediated through an intermediate variable. However, systematic approaches are lacking, other than assessment of a surrogate marker for the endpoint of a clinical trial. We review a measure of "proportion explained" in the context of observational epidemiologic studies. The measure has been much debated; we show how several of the drawbacks are alleviated when exposures, mediators, and responses are continuous and are embedded in a structural equation framework. These conditions also allow for consideration of several intermediate variables. Binary or categorical variables can be included directly through threshold models. We call this measure the mediation proportion, that is, the part of an exposure effect on outcome explained by a third, intermediate variable. Two examples illustrate the approach. The first example is a randomized clinical trial of the effects of interferon-alpha on visual acuity in patients with age-related macular degeneration. In this example, the exposure, mediator and response are all binary. The second example is a common problem in social epidemiology: to find the proportion of a social class effect on a health outcome that is mediated by psychologic variables. Both the mediator and the response are composed of several ordered categorical variables, with confounders present. Finally, we extend the example to more than one mediator.
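In the all-continuous linear case that the abstract says behaves best, the mediation proportion reduces to one minus the ratio of the adjusted to the unadjusted exposure coefficient. The sketch below is a minimal illustration under assumed coefficients, not the paper's structural-equation machinery:

```python
import random

random.seed(7)

def slope(u, v):
    """OLS slope of v on u (with intercept)."""
    n = len(u)
    ub, vb = sum(u) / n, sum(v) / n
    return (sum((a - ub) * (b - vb) for a, b in zip(u, v))
            / sum((a - ub) ** 2 for a in u))

def coef_x_adj_m(x, m, y):
    """OLS coefficient of x in a regression of y on (intercept, x, m)."""
    n = len(y)
    xb, mb, yb = sum(x) / n, sum(m) / n, sum(y) / n
    sxx = sum((a - xb) ** 2 for a in x)
    smm = sum((b - mb) ** 2 for b in m)
    sxm = sum((a - xb) * (b - mb) for a, b in zip(x, m))
    sxy = sum((a - xb) * (c - yb) for a, c in zip(x, y))
    smy = sum((b - mb) * (c - yb) for b, c in zip(m, y))
    return (smm * sxy - sxm * smy) / (sxx * smm - sxm ** 2)

n = 20000
x = [random.gauss(0, 1) for _ in range(n)]                              # exposure
m = [0.5 * xi + random.gauss(0, 1) for xi in x]                         # mediator
y = [0.7 * xi + 1.0 * mi + random.gauss(0, 1) for xi, mi in zip(x, m)]  # response

total = slope(x, y)              # ~1.2 = 0.7 (direct) + 0.5 * 1.0 (mediated)
direct = coef_x_adj_m(x, m, y)   # ~0.7
mediation_proportion = 1.0 - direct / total   # ~0.42
print(mediation_proportion)
```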

6.
This article discusses joint modeling of compliance and outcome for longitudinal studies when noncompliance is present. We focus on two‐arm randomized longitudinal studies in which subjects are randomized at baseline, treatment is applied repeatedly over time, and compliance behaviors and clinical outcomes are measured and recorded repeatedly over time. In the proposed Markov compliance and outcome model, we use the potential outcome framework to define pre‐randomization principal strata from the joint distribution of compliance under treatment and control arms, and estimate the effect of treatment within each principal stratum. Besides the causal effect of the treatment, our proposed model can estimate the impact of the causal effect of the treatment at a given time on future compliance. Bayesian methods are used to estimate the parameters. The results are illustrated using a study assessing the effect of cognitive behavior therapy on depression. A simulation study is used to assess the repeated sampling properties of the proposed model. Copyright © 2013 John Wiley & Sons, Ltd.

7.
While causal mediation analysis has seen considerable recent development for a single measured mediator (M) and final outcome (Y), less attention has been given to repeatedly measured M and Y. Previous methods have typically involved discrete-time models that limit inference to the particular measurement times used and do not recognize the continuous nature of the mediation process over time. To overcome such limitations, we present a new continuous-time approach to causal mediation analysis that uses a differential equations model in a potential outcomes framework to describe the causal relationships among model variables over time. A connection between the differential equation models and standard repeated measures models is made to provide convenient model formulation and fitting. A continuous-time extension of the sequential ignorability assumption allows for identifiable natural direct and indirect effects as functions of time, with estimation based on a two-step approach to model fitting in conjunction with a continuous-time mediation formula. Novel features include a measure of an overall mediation effect based on the "area between the curves," and an approach for predicting the effects of new interventions. Simulation studies show good properties of estimators and the new methodology is applied to data from a cohort study to investigate sugary drink consumption as a mediator of the effect of socioeconomic status on dental caries in children.
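A toy version of the continuous-time idea can be written down directly. The linear dynamics, coefficients, and Euler integration below are assumptions for illustration only; they show a natural indirect effect as a function of time and an overall "area between the curves" summary:

```python
# Toy continuous-time mediation model (all coefficients are illustrative):
#   dM/dt = a*X - k1*M,   dY/dt = c*X + b*M - k2*Y
a, b, c, k1, k2 = 1.0, 1.0, 0.5, 1.0, 1.0
dt, T = 0.01, 10.0
steps = int(T / dt)

def trajectory(x_for_y, x_for_m):
    """Euler-integrate Y(t) with the direct path driven by x_for_y
    and the mediator path driven by x_for_m."""
    m = y = 0.0
    ys = []
    for _ in range(steps):
        m += dt * (a * x_for_m - k1 * m)
        y += dt * (c * x_for_y + b * m - k2 * y)
        ys.append(y)
    return ys

y_11 = trajectory(1, 1)  # treated, mediator as under treatment
y_10 = trajectory(1, 0)  # treated, mediator as under control
nie = [u - v for u, v in zip(y_11, y_10)]  # natural indirect effect over time

# Overall mediation measure: area between the two outcome curves
area = sum(nie) * dt
print(nie[-1], area)
```

In this toy system the indirect effect rises toward a steady-state value of 1, and the area over the follow-up window summarizes mediation across all times rather than at a few discrete measurement occasions.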

8.
The population risk, for example the control group mortality rate, is an aggregate measurement of many important attributes of a clinical trial, such as the general health of the patients treated and the experience of the staff performing the trial. Plotting measurements of the population risk against the treatment effect estimates for a group of clinical trials may reveal an apparent association, suggesting that differences in the population risk might explain heterogeneity in the results of clinical trials. In this paper we consider using estimates of population risk to explain treatment effect heterogeneity, and show that using these estimates as fixed covariates will result in bias. This bias depends on the treatment effect and population risk definitions chosen, and the magnitude of measurement errors. To account for the effect of measurement error, we represent clinical trials in a bivariate two-level hierarchical model, and show how to estimate the parameters of the model by both maximum likelihood and Bayes procedures. We use two examples to demonstrate the method.

9.
For the estimation of controlled direct effects (i.e., direct effects controlling intermediates that are set at a fixed level for all members of the population) without bias, two fundamental assumptions must hold: the absence of unmeasured confounding factors for treatment and outcome and for intermediate variables and outcome. Even if these assumptions hold, one would nonetheless fail to estimate direct effects using standard methods, for example, stratification or regression modeling, when the treatment influences confounding factors. For such situations, the sequential g‐estimation method for structural nested mean models has been developed for estimating controlled direct effects in point‐treatment situations. In this study, we demonstrate that this method can be applied to longitudinal data with time‐varying treatments and repeatedly measured intermediate variables. We sequentially estimate the parameters in two structural nested mean models: one for a repeatedly measured intermediate and the other one for direct effects of a time‐varying treatment. The method was applied to data from a large primary prevention trial for coronary events, in which pravastatin was used to lower the cholesterol levels in patients with moderate hypercholesterolemia. Copyright © 2014 John Wiley & Sons, Ltd.

10.
Three issues concerning the design and analysis of randomized behavioral intervention studies are illustrated and discussed within the framework of a tobacco and alcohol prevention trial among migrant Latino adolescents. The first issue arises when subjects are randomized in clusters rather than individually. Because subject observations cannot be assumed to be independent, information pertaining to the degree of clustering must be reported, and analyses must take the clustering into account. The second issue concerns the impact of compliance to the intervention and the importance of measuring compliance in the experimental and attention-control groups. A compliance analysis should control for participant contact with study personnel. Investigators must consider ways of constructing a compliance measure that is common to both conditions. Third, because outcomes are measured repeatedly over time, we illustrate the importance of assessing the impact of missing-data patterns on outcomes and the extent to which the patterns may modify the treatment effect.
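The cost of cluster randomization mentioned in the first issue is usually summarized by the design effect. The cluster size and intraclass correlation below are made-up values for illustration:

```python
# Design effect for cluster randomization: variance inflation relative to
# individual randomization (cluster size and ICC are assumed values).
def design_effect(mean_cluster_size, icc):
    return 1.0 + (mean_cluster_size - 1.0) * icc

n_individuals = 1200      # total adolescents enrolled (hypothetical)
cluster_size = 30         # e.g. students per school (hypothetical)
icc = 0.02                # intraclass correlation for the outcome (hypothetical)

deff = design_effect(cluster_size, icc)   # 1.58
n_effective = n_individuals / deff        # ~759 effectively independent observations
print(deff, n_effective)
```

Even a small ICC of 0.02 shrinks 1200 individuals to roughly 759 effective observations here, which is why the degree of clustering must be reported and accounted for.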

11.
To maintain the interpretability of the effect of experimental treatment (EXP) obtained from a noninferiority trial, current statistical approaches often require the constancy assumption. This assumption typically requires that the control treatment effect in the population of the active control trial is the same as its effect presented in the population of the historical trial. To prevent constancy assumption violation, clinical trial sponsors were recommended to make sure that the design of the active control trial is as close to the design of the historical trial as possible. However, these rigorous requirements are rarely fulfilled in practice. The inevitable discrepancies between the historical trial and the active control trial have led to debates on many controversial issues. Without support from a well‐developed quantitative method to determine the impact of the discrepancies on the constancy assumption violation, a correct judgment seems difficult. In this paper, we present a covariate‐adjustment generalized linear regression model approach to achieve two goals: (1) to quantify the impact of population difference between the historical trial and the active control trial on the degree of constancy assumption violation and (2) to redefine the active control treatment effect in the active control trial population if the quantification suggests an unacceptable violation. Through achieving goal (1), we examine whether or not a population difference leads to an unacceptable violation. Through achieving goal (2), we redefine the noninferiority margin if the violation is unacceptable. This approach allows us to correctly determine the effect of EXP in the noninferiority trial population when constancy assumption is violated due to the population difference. We illustrate the covariate‐adjustment approach through a case study. Copyright © 2010 John Wiley & Sons, Ltd.

12.
Evaluation of the treatment effect on cytogenetic ordered categorical response is considered in patients treated for chronic myelogenous leukaemia (CML) in a clinical trial initiated by the East German Group for Hematology and Oncology. A simulation model for the cytogenetic response (per cent of Philadelphia chromosome positive metaphases) serially measured in CML patients was constructed to describe roughly the sparse information available in medical literature. The model was used to construct a summary measure of response and to formulate the treatment effect as a regression with U-shape distributed ordered categorical data. Two simple models (vertical shift model and pooled conditional response model) were specifically designed to model the treatment effect ‘observed’ in a simulated ‘pilot’ data set. The powers were contrasted with the traditional proportional odds and binary models. The comparison was based both on repeated sampling from the simulated model and on bootstrap of the ‘given’ pilot data set. We show that the specific models that address the treatment effect directly (as anticipated from pilot data) can gain in power as compared to the traditional proportional odds model when evaluated by bootstrap. However, the proportional odds model appears to be better with repeated sampling from the simulation model. To explain this discrepancy we generated ‘pilot data sets’ repeatedly from the simulation model and showed that the ordering of the bootstrap power estimates is unstable with reasonably complex models dependent on the random fall of the pilot data sets. This phenomenon clearly limits the usefulness of subtle modelling of the form of the treatment difference observed in a small pilot data set. © 1998 John Wiley & Sons, Ltd.

13.
Mediation analysis helps researchers assess whether part or all of an exposure's effect on an outcome is due to an intermediate variable. The indirect effect can help in designing interventions on the mediator as opposed to the exposure and better understanding the outcome's mechanisms. Mediation analysis has seen increased use in genome‐wide epidemiological studies to test whether the effect of an exposure of interest is mediated through a genomic measure such as gene expression or DNA methylation (DNAm). Testing for the indirect effect is challenged by the fact that the null hypothesis is composite. We examined the performance of commonly used mediation testing methods for the indirect effect in genome‐wide mediation studies. When there is no association between the exposure and the mediator and no association between the mediator and the outcome, we show that these common tests are overly conservative. This is a case that will arise frequently in genome‐wide mediation studies. Caution is hence needed when applying the commonly used mediation tests in genome‐wide mediation studies. We evaluated the performance of these methods using simulation studies, and performed an epigenome‐wide mediation association study in the Normative Aging Study, analyzing DNAm as a mediator of the effect of pack‐years on FEV1.
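The conservativeness under the "double null" is easy to reproduce with the classical Sobel test. The simulation below is a simplified illustration (unadjusted regressions, arbitrary sample sizes), not the paper's study design:

```python
import math
import random

random.seed(3)

def slope_se(u, v):
    """OLS slope of v on u and its standard error."""
    n = len(u)
    ub, vb = sum(u) / n, sum(v) / n
    suu = sum((a - ub) ** 2 for a in u)
    suv = sum((a - ub) * (b - vb) for a, b in zip(u, v))
    beta = suv / suu
    sse = sum((b - vb - beta * (a - ub)) ** 2 for a, b in zip(u, v))
    return beta, math.sqrt(sse / (n - 2) / suu)

# Simulate under the double null: the exposure-mediator and the
# mediator-outcome effects are both exactly zero.
reps, n, rejections = 2000, 50, 0
for _ in range(reps):
    x = [random.gauss(0, 1) for _ in range(n)]
    m = [random.gauss(0, 1) for _ in range(n)]   # independent of x
    y = [random.gauss(0, 1) for _ in range(n)]   # independent of m
    a, sa = slope_se(x, m)
    b, sb = slope_se(m, y)
    z = a * b / math.sqrt(a * a * sb * sb + b * b * sa * sa)  # Sobel statistic
    if abs(z) > 1.96:
        rejections += 1

type1 = rejections / reps   # far below the nominal 0.05 level
print(type1)
```

Because the product of two near-zero estimates concentrates tightly around zero, the rejection rate is orders of magnitude below the nominal level, which is exactly the conservativeness the abstract warns about for genome-wide settings where most exposure–mediator–outcome triplets are null.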

14.
When treatment effect modifiers influence the decision to participate in a randomized trial, the average treatment effect in the population represented by the randomized individuals will differ from the effect in other populations. In this tutorial, we consider methods for extending causal inferences about time-fixed treatments from a trial to a new target population of nonparticipants, using data from a completed randomized trial and baseline covariate data from a sample from the target population. We examine methods based on modeling the expectation of the outcome, the probability of participation, or both (doubly robust). We compare the methods in a simulation study and show how they can be implemented in software. We apply the methods to a randomized trial nested within a cohort of trial-eligible patients to compare coronary artery surgery plus medical therapy versus medical therapy alone for patients with chronic coronary artery disease. We conclude by discussing issues that arise when using the methods in applied analyses.
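One of the approaches the tutorial compares, weighting trial participants by the inverse odds of participation estimated from a participation model, can be sketched as follows. The single effect modifier, the participation model, and all coefficients are assumptions made for this illustration:

```python
import math
import random

random.seed(5)

def expit(u):
    return 1.0 / (1.0 + math.exp(-u))

def fit_logistic(z, s, iters=25):
    """Newton-Raphson fit of P(s=1|z) = expit(b0 + b1*z)."""
    b0 = b1 = 0.0
    for _ in range(iters):
        p = [expit(b0 + b1 * zi) for zi in z]
        g0 = sum(si - pi for si, pi in zip(s, p))
        g1 = sum((si - pi) * zi for si, pi, zi in zip(s, p, z))
        w = [pi * (1.0 - pi) for pi in p]
        h00, h01 = sum(w), sum(wi * zi for wi, zi in zip(w, z))
        h11 = sum(wi * zi * zi for wi, zi in zip(w, z))
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

n = 20000
z = [random.gauss(0, 1) for _ in range(n)]                  # effect modifier
s = [1 if random.random() < expit(zi) else 0 for zi in z]   # trial participation
t = [random.randint(0, 1) if si else 0 for si in s]         # randomized within trial
# Treatment effect is 1 + z, so participants (high z) overstate the target effect
y = [(1 + zi) * ti + zi + random.gauss(0, 1) for zi, si, ti in zip(z, s, t)]

b0, b1 = fit_logistic(z, s)
# Inverse odds of participation as transport weights
w = [(1 - expit(b0 + b1 * zi)) / expit(b0 + b1 * zi) for zi in z]

num1 = sum(wi * yi for wi, si, ti, yi in zip(w, s, t, y) if si and ti)
den1 = sum(wi for wi, si, ti in zip(w, s, t) if si and ti)
num0 = sum(wi * yi for wi, si, ti, yi in zip(w, s, t, y) if si and not ti)
den0 = sum(wi for wi, si, ti in zip(w, s, t) if si and not ti)
ate_transported = num1 / den1 - num0 / den0

# True average effect among nonparticipants, from the data-generating model
ate_target = sum(1 + zi for zi, si in zip(z, s) if not si) / (n - sum(s))
print(ate_transported, ate_target)
```

The weighted contrast recovers the nonparticipant-population effect; an outcome model or a doubly robust combination of the two would be alternative routes to the same target.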

15.
The causal inference literature has provided definitions of direct and indirect effects based on counterfactuals that generalize the approach found in the social science literature. However, these definitions presuppose well-defined hypothetical interventions on the mediator. In many settings, there may be multiple ways to fix the mediator to a particular value, and these various hypothetical interventions may have very different implications for the outcome of interest. In this paper, we consider mediation analysis when multiple versions of the mediator are present. Specifically, we consider the problem of attempting to decompose a total effect of an exposure on an outcome into the portion through the intermediate and the portion through other pathways. We consider the setting in which there are multiple versions of the mediator but the investigator has access only to data on the particular measurement, not information on which version of the mediator may have brought that value about. We show that the quantity that is estimated as a natural indirect effect using only the available data does indeed have an interpretation as a particular type of mediated effect; however, the quantity estimated as a natural direct effect, in fact, captures both a true direct effect and an effect of the exposure on the outcome mediated through the effect of the version of the mediator that is not captured by the mediator measurement. The results are illustrated using 2 examples from the literature, one in which the versions of the mediator are unknown and another in which the mediator itself has been dichotomized.

16.
Mediation analysis is a popular approach to examine the extent to which the effect of an exposure on an outcome is through an intermediate variable (mediator) and the extent to which the effect is direct. When the mediator is mis‐measured, the validity of mediation analysis can be severely undermined. In this paper, we first study the bias of classical, non‐differential measurement error on a continuous mediator in the estimation of direct and indirect causal effects in generalized linear models when the outcome is either continuous or discrete and exposure–mediator interaction may be present. Our theoretical results as well as a numerical study demonstrate that in the presence of non‐linearities, the bias of naive estimators for direct and indirect effects that ignore measurement error can take unintuitive directions. We then develop methods to correct for measurement error. Three correction approaches using method of moments, regression calibration, and SIMEX are compared. We apply the proposed method to the Massachusetts General Hospital lung cancer study to evaluate the effect of genetic variants mediated through smoking on lung cancer risk. Copyright © 2014 John Wiley & Sons, Ltd.
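The flavor of a regression-calibration-style correction can be shown in the simplest univariate linear case. The replicate-measurement design, variances, and reliability formula below are illustrative assumptions, not the paper's GLM methodology:

```python
import random

random.seed(11)

def slope(u, v):
    """OLS slope of v on u (with intercept)."""
    n = len(u)
    ub, vb = sum(u) / n, sum(v) / n
    return (sum((a - ub) * (b - vb) for a, b in zip(u, v))
            / sum((a - ub) ** 2 for a in u))

def var(u):
    n = len(u)
    ub = sum(u) / n
    return sum((a - ub) ** 2 for a in u) / (n - 1)

n = 20000
m = [random.gauss(0, 1) for _ in range(n)]        # true mediator
y = [1.0 * mi + random.gauss(0, 1) for mi in m]   # outcome
w1 = [mi + random.gauss(0, 1) for mi in m]        # replicate measurement 1
w2 = [mi + random.gauss(0, 1) for mi in m]        # replicate measurement 2
wbar = [(a + b) / 2 for a, b in zip(w1, w2)]

# Error variance estimated from replicate disagreement
var_u = var([a - b for a, b in zip(w1, w2)]) / 2
reliability = (var(wbar) - var_u / 2) / var(wbar)   # ~2/3 here

naive = slope(wbar, y)               # attenuated toward zero: ~0.67
corrected = naive / reliability      # calibration-style correction: ~1.0
print(naive, corrected)
```

In the nonlinear GLM settings the paper actually studies, the bias need not be simple attenuation, which is why method-of-moments and SIMEX corrections are compared alongside regression calibration.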

17.
Natural direct and indirect effects decompose the effect of a treatment into the part that is mediated by a covariate (the mediator) and the part that is not. Their definitions rely on the concept of outcomes under treatment with the mediator ‘set’ to its value without treatment. Typically, the mechanism through which the mediator is set to this value is left unspecified, and in many applications, it may be challenging to fix the mediator to particular values for each unit or patient. Moreover, how one sets the mediator may affect the distribution of the outcome. This article introduces ‘organic’ direct and indirect effects, which can be defined and estimated without relying on setting the mediator to specific values. Organic direct and indirect effects can be applied, for example, to estimate how much of the effect of some treatments for HIV/AIDS on mother‐to‐child transmission of HIV infection is mediated by the effect of the treatment on the HIV viral load in the blood of the mother. Copyright © 2016 John Wiley & Sons, Ltd.

18.
Increasing the sample size based on an unblinded interim result may inflate the type I error rate, and appropriate statistical adjustments may be needed to control the type I error rate at the nominal level. We briefly review the existing approaches, which allow early stopping due to futility, or change the test statistic by using different weights, or adjust the critical value for the final test, or enforce rules for sample size recalculation. The implication of early stopping due to futility and a simple modification to the weighted Z-statistic approach are discussed. In this paper, we show that increasing the sample size when the unblinded interim result is promising will not inflate the type I error rate and therefore no statistical adjustment is necessary. The unblinded interim result is considered promising if the conditional power is greater than 50 per cent or, equivalently, the sample size increment needed to achieve a desired power does not exceed an upper bound. The actual sample size increment may be determined by important factors such as budget, size of the eligible patient population and competition in the market. The 50 per cent-conditional-power approach is extended to a group sequential trial with one interim analysis where a decision may be made at the interim analysis to stop the trial early due to a convincing treatment benefit, or to increase the sample size if the interim result is not as good as expected. The type I error rate will not be inflated if the sample size may be increased only when the conditional power is greater than 50 per cent. If there are two or more interim analyses in a group sequential trial, our simulation study shows that the type I error rate is also well controlled.
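The conditional-power screen can be computed in closed form under the current trend. The sketch below uses the standard Brownian-motion formulation for a one-sided Z-test; the interim Z-values and information fraction are made-up examples:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def conditional_power(z_interim, info_fraction, alpha_z=1.96):
    """Conditional power under the current trend for a one-sided test,
    treating the interim Z-statistic as a Brownian motion observed at
    information fraction tau."""
    tau = info_fraction
    return 1.0 - norm_cdf((alpha_z - z_interim / math.sqrt(tau))
                          / math.sqrt(1.0 - tau))

# At the halfway interim look (illustrative interim Z-values):
cp_weak = conditional_power(1.2, 0.5)        # ~0.36: below 50%, not "promising"
cp_promising = conditional_power(1.5, 0.5)   # ~0.59: above 50%, "promising"
print(cp_weak, cp_promising)
```

Under the paper's rule, only the second interim result would qualify for a sample size increase without a statistical penalty, since its conditional power exceeds 50 per cent.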

19.
The assessment of the dose-response relationship is important but not straightforward when the therapeutic agent is administered repeatedly with dose-modification in each patient and a continuous response is measured repeatedly. We recently proposed an autoregressive linear mixed effects model for such data in which the current response is regressed on the previous response, fixed effects, and random effects. The model represents profiles approaching each patient's asymptote, takes into account the past dose history, and provides a dose-response relationship of the asymptote as a summary measure. In an autoregressive model, intermittent missing data imply missing values in the previous responses used as covariates. We previously provided the marginal (unconditional on the previous response) form of the proposed model to deal with intermittent missing data. Irregular timings of dose-modification or measurement can also be treated as equally spaced data with intermittent missing values by selecting an adequately small unit of time. The likelihood is, however, expressed by matrices whose sizes depend on the number of observations for a patient, and the computational burden is large. In this study, we propose a state space form of the autoregressive linear mixed effects model to calculate the marginal likelihood without using large matrices. The regression coefficients of the fixed effects can be concentrated out of the likelihood in the same way as in a linear mixed effects model. As an illustration of the approach, we analyzed immunologic data from a clinical trial for multiple sclerosis patients and estimated the dose-response curves for each patient and the population mean.
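The "profiles approaching an asymptote" behavior of the autoregressive model is easy to visualize with a noise-free skeleton. The autoregressive coefficient and the linear dose-response of the asymptote below are illustrative assumptions:

```python
# Deterministic skeleton of the autoregressive idea: each response moves a
# fixed fraction of the way toward a dose-specific asymptote.
rho = 0.6                                   # assumed autoregressive coefficient

def asymptote(dose):
    return 2.0 + 1.5 * dose                 # assumed dose-response of the asymptote

def trajectory(dose, n_obs, y0=0.0):
    y, path = y0, []
    for _ in range(n_obs):
        y = rho * y + (1.0 - rho) * asymptote(dose)
        path.append(y)
    return path

path = trajectory(dose=2.0, n_obs=30)
print(path[0], path[-1])                    # converges toward asymptote(2.0) = 5.0
```

In the full model, random effects shift each patient's asymptote, past doses enter through the regression, and the state space form lets the likelihood be evaluated recursively instead of through patient-sized matrices.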

20.
We review and discuss the practical problems encountered when analysing the effect on survival of covariates which are measured repeatedly over time. Specific issues arise over and above those met with the standard proportional hazards model and concern all stages of data preparation, data analysis and interpretation of the results. Data from a randomized clinical trial of patients with primary biliary cirrhosis, on whom several measurements were taken at regular intervals after entry, are presented as an illustration.
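A recurring data-preparation problem with repeatedly measured covariates is restructuring them into (start, stop, value) episodes for a time-dependent survival analysis. The helper and the measurement values below are illustrative assumptions:

```python
# Converting repeated covariate measurements to (start, stop, value) episodes,
# the counting-process layout used for time-dependent Cox models.
def to_episodes(measurements, end_of_followup):
    """measurements: list of (time, value) pairs, sorted by time, first at t=0."""
    episodes = []
    for (t0, v), nxt in zip(measurements, measurements[1:] + [None]):
        t1 = nxt[0] if nxt else end_of_followup
        episodes.append((t0, t1, v))
    return episodes

# Hypothetical serum bilirubin values for one patient: (day, value)
serum_bilirubin = [(0, 1.2), (90, 1.5), (200, 2.1)]
episodes = to_episodes(serum_bilirubin, end_of_followup=250)
print(episodes)  # [(0, 90, 1.2), (90, 200, 1.5), (200, 250, 2.1)]
```

Each episode carries the covariate value assumed to hold until the next measurement, a last-value-carried-forward convention whose interpretation is one of the practical issues such analyses must confront.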


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号