Similar articles
A total of 20 similar articles were retrieved (search time: 15 ms).
1.
Instrumental variable (IV) analysis has been widely used in economics, epidemiology, and other fields to estimate the causal effects of covariates on outcomes, in the presence of unobserved confounders and/or measurement errors in covariates. However, IV methods for time‐to‐event outcome with censored data remain underdeveloped. This paper proposes a Bayesian approach for IV analysis with censored time‐to‐event outcome by using a two‐stage linear model. A Markov chain Monte Carlo sampling method is developed for parameter estimation for both normal and non‐normal linear models with elliptically contoured error distributions. The performance of our method is examined by simulation studies. Our method largely reduces bias and greatly improves coverage probability of the estimated causal effect, compared with the method that ignores the unobserved confounders and measurement errors. We illustrate our method on the Women's Health Initiative Observational Study and the Atherosclerosis Risk in Communities Study. Copyright © 2014 John Wiley & Sons, Ltd.
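The core two‐stage idea can be illustrated outside the paper's Bayesian and censoring machinery. Below is a minimal Python sketch assuming a log-linear (AFT-type) outcome with no censoring and a single instrument; the variable names and the frequentist plug-in fit are illustrative assumptions, not the authors' MCMC procedure.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                     # instrument
u = rng.normal(size=n)                     # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)       # confounded exposure
log_t = 0.5 * x - u + rng.normal(size=n)   # true causal effect of x is 0.5

naive = sm.OLS(log_t, sm.add_constant(x)).fit()   # biased: ignores u

stage1 = sm.OLS(x, sm.add_constant(z)).fit()      # stage 1: exposure on instrument
stage2 = sm.OLS(log_t, sm.add_constant(stage1.fittedvalues)).fit()  # stage 2

print("naive estimate :", naive.params[1])   # distorted by confounding
print("two-stage IV   :", stage2.params[1])  # close to 0.5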

2.
In an observational study of the effect of a treatment on a time‐to‐event outcome, a major problem is accounting for confounding because of unknown or unmeasured factors. We propose including covariates in a Cox model that can partially account for an unknown time‐independent frailty that is related to starting or stopping treatment as well as the outcome of interest. These covariates capture the times at which treatment is started or stopped and so are called treatment choice (TC) covariates. Three such models are developed: first, an interval TC model that assumes a very general form for the respective hazard functions of starting treatment, stopping treatment, and the outcome of interest; second, a parametric TC model that assumes that the log hazard functions for starting treatment, stopping treatment, and the outcome event include frailty as an additive term; and finally, a hybrid TC model that combines attributes from the parametric and interval TC models. As compared with an ordinary Cox model, the TC models are shown to substantially reduce the bias of the estimated hazard ratio for treatment when data are simulated from a realistic Cox model with residual confounding due to the unobserved frailty. The simulations also indicate that the bias decreases or levels off as the sample size increases. A TC model is illustrated by analyzing the Women's Health Initiative Observational Study of hormone replacement for post‐menopausal women. Published 2017. This article has been contributed to by US Government employees and their work is in the public domain in the USA.

3.
It is routinely argued that, unlike standard regression‐based estimates, inverse probability weighted (IPW) estimates of the parameters of a correctly specified Cox marginal structural model (MSM) may remain unbiased in the presence of a time‐varying confounder affected by prior treatment. Previously proposed methods for simulating from a known Cox MSM lack knowledge of the law of the observed outcome conditional on the measured past. Although unbiased IPW estimation does not require this knowledge, standard regression‐based estimates rely on correct specification of this law. Thus, in typical high‐dimensional settings, such simulation methods cannot isolate bias due to complex time‐varying confounding as it may be conflated with bias due to misspecification of the outcome regression model. In this paper, we describe an approach to Cox MSM data generation that allows for a comparison of the bias of IPW estimates versus that of standard regression‐based estimates in the complete absence of model misspecification. This approach involves simulating data from a standard parametrization of the likelihood and solving for the underlying Cox MSM. We prove that solutions exist and computations are tractable under many data‐generating mechanisms. We show analytically and confirm in simulations that, in the absence of model misspecification, the bias of standard regression‐based estimates for the parameters of a Cox MSM is indeed a function of the coefficients in observed data models quantifying the presence of a time‐varying confounder affected by prior treatment. We discuss limitations of this approach including that implied by the 'g‐null paradox'. Copyright © 2013 John Wiley & Sons, Ltd.

4.
The accelerated failure time (AFT) model has been suggested as an alternative to the Cox proportional hazards model. However, a parametric AFT model requires the specification of an appropriate distribution for the event time, which is often difficult to identify in real‐life studies and may limit applications. A semiparametric AFT model was developed by Komárek et al based on a smoothed error distribution that does not require such specification. In this article, we develop a spline‐based AFT model that also does not require specification of the parametric family of event time distribution. The baseline hazard function is modeled by regression B‐splines, allowing for the estimation of a variety of smooth and flexible shapes. In comprehensive simulations, we validate the performance of our approach and compare it with the results from parametric AFT models and the approach of Komárek. Both the proposed spline‐based AFT model and the approach of Komárek provided unbiased estimates of covariate effects and survival curves for a variety of scenarios in which the event time followed different distributions, including both simple and complex cases. Spline‐based estimates of the baseline hazard also showed satisfactory numerical stability. As expected, the baseline hazard and survival probabilities estimated by the misspecified parametric AFT models deviated from the truth. We illustrated the application of the proposed model in a study of colon cancer.

5.
Measurement error arises through a variety of mechanisms. A rich literature exists on the bias introduced by covariate measurement error and on methods of analysis to address this bias. By comparison, less attention has been given to errors in outcome assessment and nonclassical covariate measurement error. We consider an extension of the regression calibration method to settings with errors in a continuous outcome, where the errors may be correlated with prognostic covariates or with covariate measurement error. This method adjusts for the measurement error in the data and can be applied with either a validation subset, on which the true data are also observed (eg, a study audit), or a reliability subset, where a second observation of the error‐prone measurements is available. For each case, we provide conditions under which the proposed method is identifiable and leads to consistent estimates of the regression parameter. When the second measurement on the reliability subset has no error or classical unbiased measurement error, the proposed method is consistent even when the primary outcome and exposures of interest are subject to both systematic and random error. We examine the performance of the method with simulations for a variety of measurement error scenarios and sizes of the reliability subset. We illustrate the method's application using data from the Women's Health Initiative Dietary Modification Trial.

6.
This paper provides guidance for researchers with some mathematical background on the conduct of time‐to‐event analysis in observational studies based on intensity (hazard) models. Discussions of basic concepts like time axis, event definition and censoring are given. Hazard models are introduced, with special emphasis on the Cox proportional hazards regression model. We provide checklists that may be useful both when fitting the model and assessing its goodness of fit and when interpreting the results. Special attention is paid to how to avoid problems with immortal time bias by introducing time‐dependent covariates. We discuss prediction based on hazard models and difficulties when attempting to draw proper causal conclusions from such models. Finally, we present a series of examples where the methods and checklists are exemplified. Computational details and implementation using the freely available R software are documented in Supplementary Material. The paper was prepared as part of the STRATOS initiative.
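One checklist item above, introducing treatment as a time‐dependent covariate to avoid immortal time bias, comes down to splitting each subject's follow-up at the time treatment starts. A minimal pandas sketch of that restructuring is shown below; the values are illustrative, and the subsequent Cox fit is left to whichever survival routine is used.

import pandas as pd

# One record per subject: total follow-up, event indicator, and the time at
# which treatment was started (None if never treated). Values are illustrative.
subjects = pd.DataFrame({
    "id":          [1,    2,    3,    4],
    "followup":    [10.0, 7.0,  12.0, 5.0],
    "event":       [1,    0,    1,    1],
    "treat_start": [4.0,  None, None, 2.0],
})

rows = []
for r in subjects.itertuples(index=False):
    if pd.notna(r.treat_start) and r.treat_start < r.followup:
        rows.append((r.id, 0.0, r.treat_start, 0, 0))               # untreated interval
        rows.append((r.id, r.treat_start, r.followup, 1, r.event))  # treated interval
    else:
        rows.append((r.id, 0.0, r.followup, 0, r.event))

long_format = pd.DataFrame(rows, columns=["id", "start", "stop", "treated", "event"])
print(long_format)
# This (start, stop] table can be passed to any Cox routine that accepts
# counting-process input; treatment then contributes person-time to the
# untreated group until it actually starts, which is what prevents
# immortal time bias.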

7.
This article explores Bayesian joint models for a quantile of a longitudinal response, a mismeasured covariate, and an event time outcome, in an attempt to (i) characterize the entire conditional distribution of the response variable based on quantile regression, which may be more robust to outliers and misspecification of the error distribution; (ii) account for inaccuracy from measurement error, evaluate non‐ignorable missing observations, and adjust for departures from normality in the covariate; and (iii) overcome the lack of confidence in specifying a time‐to‐event model. When statistical inference is carried out for a longitudinal data set with non‐central location, non‐linearity, non‐normality, measurement error, and missing values, as well as an interval‐censored event time, it is important to handle these data features simultaneously in order to obtain more reliable and robust inferential results. Toward this end, we develop a Bayesian joint modeling approach to simultaneously estimate all parameters in the three models: a quantile regression‐based nonlinear mixed‐effects model for the response using the asymmetric Laplace distribution, a linear mixed‐effects model with a skew‐t distribution for the mismeasured covariate in the presence of informative missingness, and an accelerated failure time model with an unspecified nonparametric distribution for the event time. We apply the proposed modeling approach to analyzing an AIDS clinical data set and conduct simulation studies to assess the performance of the proposed joint models and method. Copyright © 2016 John Wiley & Sons, Ltd.
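A minimal illustration of the quantile-regression building block (without the Bayesian joint model, mixed effects, measurement error, or censoring) is sketched below; it assumes statsmodels' QuantReg and simulated heteroscedastic data to show why different quantiles can have different covariate effects.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 2000
x = rng.uniform(0, 2, n)
# heteroscedastic, heavy-tailed errors: mean and quantile regressions differ
y = 1.0 + 0.5 * x + (0.5 + 0.5 * x) * rng.standard_t(df=3, size=n)

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()
q25 = sm.QuantReg(y, X).fit(q=0.25)
q75 = sm.QuantReg(y, X).fit(q=0.75)

print("OLS slope      :", ols.params[1])
print("25th pct slope :", q25.params[1])   # smaller: spread grows with x
print("75th pct slope :", q75.params[1])   # larger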

8.
It is well known that measurement error in the covariates of regression models generally causes bias in parameter estimates. Correction for such biases requires information concerning the measurement error, which is often in the form of internal validation or replication data. Regression calibration (RC) is a popular approach to correct for covariate measurement error, which involves predicting the true covariate using error‐prone measurements. Likelihood methods have previously been proposed as an alternative approach to estimate the parameters in models affected by measurement error, but have been relatively infrequently employed in medical statistics and epidemiology, partly because of computational complexity and concerns regarding robustness to distributional assumptions. We show how a standard random‐intercepts model can be used to obtain maximum likelihood (ML) estimates when the outcome model is linear or logistic regression under certain normality assumptions, when internal error‐prone replicate measurements are available. Through simulations we show that for linear regression, ML gives more efficient estimates than RC, although the gain is typically small. Furthermore, we show that RC and ML estimates remain consistent even when the normality assumptions are violated. For logistic regression, our implementation of ML is consistent if the true covariate is conditionally normal given the outcome, in contrast to RC. In simulations, this ML estimator showed less bias in situations where RC gives non‐negligible biases. Our proposal makes the ML approach to dealing with covariate measurement error more accessible to researchers, which we hope will improve its viability as a useful alternative to methods such as RC. Copyright © 2009 John Wiley & Sons, Ltd.
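For reference, the RC comparator discussed above can be sketched with a simple moment-based calibration using two error-prone replicates per subject; the snippet below is illustrative only (plain numpy, simulated linear-regression data) and is not the paper's random-intercepts ML estimator.

import numpy as np

rng = np.random.default_rng(2)
n, beta = 20000, 1.0
x = rng.normal(0.0, 1.0, n)                      # true covariate
w1 = x + rng.normal(0.0, 0.7, n)                 # replicate measurements
w2 = x + rng.normal(0.0, 0.7, n)
y = beta * x + rng.normal(0.0, 1.0, n)

w_bar = (w1 + w2) / 2
sigma2_u = np.mean((w1 - w2) ** 2) / 2           # measurement-error variance
sigma2_x = np.var(w_bar, ddof=1) - sigma2_u / 2  # true-covariate variance
lam = sigma2_x / (sigma2_x + sigma2_u / 2)       # reliability of the mean of 2 replicates
x_hat = w_bar.mean() + lam * (w_bar - w_bar.mean())  # calibrated covariate

def slope(pred, resp):
    return np.cov(pred, resp)[0, 1] / np.var(pred, ddof=1)

print("naive slope on w_bar:", slope(w_bar, y))   # attenuated toward 0
print("RC-corrected slope  :", slope(x_hat, y))   # close to beta = 1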

9.
Multistate Markov regression models used for quantifying the effect size of state‐specific covariates pertaining to the dynamics of multistate outcomes have gained popularity. However, measurements of the multistate outcome are prone to classification errors, particularly when a population‐based survey relies on proxy measurements of the outcome for cost reasons. Such misclassification may distort the effect sizes of relevant covariates, such as the odds ratios used in epidemiology. We proposed a Bayesian measurement‐error‐driven hidden Markov regression model for calibrating these biased estimates with and without a 2‐stage validation design. A simulation algorithm was developed to assess various scenarios of underestimation and overestimation given nondifferential misclassification (independent of covariates) and differential misclassification (dependent on covariates). We applied our proposed method to a community‐based survey of androgenetic alopecia and found that the effect sizes of the majority of covariates were inflated after calibration, regardless of the type of misclassification. Our proposed Bayesian measurement‐error‐driven hidden Markov regression model is practicable and effective in calibrating the effects of covariates on the multistate outcome, but a prior distribution on the measurement errors derived from a 2‐stage validation design is strongly recommended.

10.
We consider estimation of treatment effects in two‐stage adaptive multi‐arm trials with a common control. The best treatment is selected at interim, and the primary endpoint is modeled via a Cox proportional hazards model. The maximum partial‐likelihood estimator of the log hazard ratio of the selected treatment will overestimate the true treatment effect in this case. Several methods for reducing the selection bias have been proposed for normal endpoints, including an iterative method based on the estimated conditional selection biases and a shrinkage approach based on empirical Bayes theory. We adapt these methods to time‐to‐event data and compare the bias and mean squared error of all methods in an extensive simulation study and apply the proposed methods to reconstructed data from the FOCUS trial. We find that all methods tend to overcorrect the bias, and only the shrinkage methods can reduce the mean squared error. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

11.
Mediation analysis is a popular approach to examine the extent to which the effect of an exposure on an outcome is through an intermediate variable (mediator) and the extent to which the effect is direct. When the mediator is mis‐measured, the validity of mediation analysis can be severely undermined. In this paper, we first study the bias of classical, non‐differential measurement error on a continuous mediator in the estimation of direct and indirect causal effects in generalized linear models when the outcome is either continuous or discrete and exposure–mediator interaction may be present. Our theoretical results as well as a numerical study demonstrate that in the presence of non‐linearities, the bias of naive estimators for direct and indirect effects that ignore measurement error can take unintuitive directions. We then develop methods to correct for measurement error. Three correction approaches using method of moments, regression calibration, and SIMEX are compared. We apply the proposed method to the Massachusetts General Hospital lung cancer study to evaluate the effect of genetic variants mediated through smoking on lung cancer risk. Copyright © 2014 John Wiley & Sons, Ltd.
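The SIMEX correction mentioned above can be sketched for a single error-prone covariate with known error variance; the snippet below is a simplified illustration (simulated data, quadratic extrapolation) rather than the mediation-specific implementation of the paper.

import numpy as np

rng = np.random.default_rng(3)
n, beta, sigma_u = 20000, 1.0, 0.6
x = rng.normal(size=n)
w = x + rng.normal(0.0, sigma_u, n)        # observed, error-prone covariate
y = beta * x + rng.normal(size=n)

def slope(pred, resp):
    return np.cov(pred, resp)[0, 1] / np.var(pred, ddof=1)

lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
B = 50
sim_slopes = []
for lam in lambdas:
    # add extra noise with variance lam * sigma_u^2, refit, average over B draws
    fits = [slope(w + rng.normal(0.0, np.sqrt(lam) * sigma_u, n), y)
            for _ in range(B)]
    sim_slopes.append(np.mean(fits))

# quadratic extrapolation of slope(lambda) back to lambda = -1 (no error)
coef = np.polyfit(lambdas, sim_slopes, deg=2)
simex_estimate = np.polyval(coef, -1.0)

print("naive slope :", sim_slopes[0])   # attenuated toward 0
print("SIMEX slope :", simex_estimate)  # much closer to beta = 1; quadratic
                                        # extrapolation removes most, not all, of the bias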

12.
Genetic studies of survival outcomes have been proposed and conducted recently, but statistical methods for identifying genetic variants that affect disease progression are rarely developed. Motivated by our ongoing real studies, here we develop Cox proportional hazard models using functional regression (FR) to perform gene‐based association analysis of survival traits while adjusting for covariates. The proposed Cox models are fixed effect models where the genetic effects of multiple genetic variants are assumed to be fixed. We introduce likelihood ratio test (LRT) statistics to test for associations between the survival traits and multiple genetic variants in a genetic region. Extensive simulation studies demonstrate that the proposed Cox FR LRT statistics have well‐controlled type I error rates. To evaluate power, we compare the Cox FR LRT with the previously developed burden test (BT) in a Cox model and sequence kernel association test (SKAT), which is based on mixed effect Cox models. The Cox FR LRT statistics have higher power than or similar power to Cox SKAT LRT except when 50%/50% of causal variants have negative/positive effects and all causal variants are rare. In addition, the Cox FR LRT statistics have higher power than Cox BT LRT. The models and related test statistics can be useful in whole‐genome and whole‐exome association studies. An age‐related macular degeneration dataset was analyzed as an example.

13.
A. Guolo. Statistics in Medicine, 2014, 33(12): 2062-2076.
This paper investigates the use of SIMEX, a simulation‐based measurement error correction technique, for meta‐analysis of studies involving the baseline risk of subjects in the control group as explanatory variable. The approach accounts for the measurement error affecting the information about either the outcome in the treatment group or the baseline risk available from each study, while requiring no assumption about the distribution of the true unobserved baseline risk. This robustness property, together with the feasibility of computation, makes SIMEX very attractive. The approach is suggested as an alternative to the usual likelihood analysis, which can provide misleading inferential results when the commonly assumed normal distribution for the baseline risk is violated. The performance of SIMEX is compared to the likelihood method and to the moment‐based correction through an extensive simulation study and the analysis of two datasets from the medical literature. Copyright © 2013 John Wiley & Sons, Ltd.

14.
In survival analysis, a competing risk is an event whose occurrence precludes the occurrence of the primary event of interest. Outcomes in medical research are frequently subject to competing risks. In survival analysis, there are 2 key questions that can be addressed using competing risk regression models: first, which covariates affect the rate at which events occur, and second, which covariates affect the probability of an event occurring over time. The cause‐specific hazard model estimates the effect of covariates on the rate at which events occur in subjects who are currently event‐free. Subdistribution hazard ratios obtained from the Fine‐Gray model describe the relative effect of covariates on the subdistribution hazard function. Hence, the covariates in this model can also be interpreted as having an effect on the cumulative incidence function or on the probability of events occurring over time. We conducted a review of the use and interpretation of the Fine‐Gray subdistribution hazard model in articles published in the medical literature in 2015. We found that many authors provided an unclear or incorrect interpretation of the regression coefficients associated with this model. An incorrect and inconsistent interpretation of regression coefficients may lead to confusion when comparing results across different studies. Furthermore, an incorrect interpretation of estimated regression coefficients can result in an incorrect understanding about the magnitude of the association between exposure and the incidence of the outcome. The objective of this article is to clarify how these regression coefficients should be reported and to propose suggestions for interpreting these coefficients.
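The distinction rests on the cumulative incidence function (CIF), the quantity the Fine-Gray subdistribution hazard acts on. A plain-numpy nonparametric CIF estimator is sketched below on simulated competing-risks data; ties are not handled specially and the latent-failure-time data-generating mechanism is purely illustrative.

import numpy as np

def cumulative_incidence(time, cause, event_of_interest=1):
    """cause: 0 = censored, 1, 2, ... = competing event types."""
    order = np.argsort(time)
    time, cause = time[order], cause[order]
    at_risk = len(time)
    surv = 1.0                     # overall event-free survival just before t
    cif = 0.0
    cif_times, cif_vals = [0.0], [0.0]
    for t, c in zip(time, cause):
        if c == event_of_interest:
            cif += surv * (1.0 / at_risk)      # S(t-) * d1(t) / Y(t)
        if c != 0:
            surv *= 1.0 - 1.0 / at_risk        # any event ends event-free status
        at_risk -= 1
        cif_times.append(t)
        cif_vals.append(cif)
    return np.array(cif_times), np.array(cif_vals)

rng = np.random.default_rng(4)
t1 = rng.exponential(5.0, 500)     # latent time to the event of interest
t2 = rng.exponential(8.0, 500)     # latent time to the competing event
cens = rng.uniform(0.0, 15.0, 500)
time = np.minimum.reduce([t1, t2, cens])
cause = np.select([cens <= np.minimum(t1, t2), t1 <= t2], [0, 1], default=2)

times, cif = cumulative_incidence(time, cause, event_of_interest=1)
print("estimated P(event 1 by t = 5):", cif[times <= 5.0][-1])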

15.
In cluster‐randomized trials, intervention effects are often formulated by specifying marginal models, fitting them under a working independence assumption, and using robust variance estimates to address the association in the responses within clusters. We develop sample size criteria within this framework, with analyses based on semiparametric Cox regression models fitted with event times subject to right censoring. At the design stage, copula models are specified to enable derivation of the asymptotic variance of estimators from a marginal Cox regression model and to compute the number of clusters necessary to satisfy power requirements. Simulation studies demonstrate the validity of the sample size formula in finite samples for a range of cluster sizes, censoring rates, and degrees of within‐cluster association among event times. The power and relative efficiency implications of copula misspecification are studied, as well as the effect of within‐cluster dependence in the censoring times. Sample size criteria and other design issues are also addressed for the setting where the event status is only ascertained at periodic assessments and times are interval censored. Copyright © 2014 John Wiley & Sons, Ltd.
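At the design stage, one convenient way to induce the within-cluster association implied by a Clayton copula is a shared gamma frailty. The sketch below (simulated pairs, scipy's kendalltau) checks the implied Kendall's tau; parameter values are illustrative and this is not the paper's sample-size formula.

import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(8)
n_clusters, theta, base_rate = 4000, 1.0, 0.1
# gamma frailty with mean 1 and variance theta induces a Clayton copula
frailty = rng.gamma(shape=1.0 / theta, scale=theta, size=n_clusters)
# two members per cluster, conditionally exponential given the shared frailty
t1 = rng.exponential(1.0 / (base_rate * frailty))
t2 = rng.exponential(1.0 / (base_rate * frailty))

tau_hat, _ = kendalltau(t1, t2)
print("empirical Kendall's tau  :", tau_hat)
print("Clayton-implied tau      :", theta / (theta + 2))   # = 1/3 here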

16.
If past treatment assignments are unmasked, selection bias may arise even in randomized controlled trials. The impact of such bias can be measured by considering the type I error probability. In the case of a normally distributed outcome, there already exists a model accounting for selection bias that permits calculating the corresponding type I error probabilities. To model selection bias for trials with a time‐to‐event outcome, we introduce a new biasing policy for exponentially distributed data. Using this biasing policy, we derive an exact formula to compute type I error probabilities whenever an F‐test is performed and no observations are censored. Two exemplary settings, with and without random censoring, are considered in order to illustrate how our results can be applied to compare distinct randomization procedures with respect to their performance in the presence of selection bias. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

17.
Measurement error is common in epidemiological and biomedical studies. When biomarkers are measured in batches or groups, measurement error is potentially correlated within each batch or group. In regression analysis, most existing methods are not applicable in the presence of batch‐specific measurement error in predictors. We propose a robust conditional likelihood approach to account for batch‐specific error in predictors when the batch effect is additive and the predominant source of error, which requires no assumptions on the distribution of measurement error. Although a regression model with batch as a categorical covariate yields the same parameter estimates as the proposed conditional likelihood approach for linear regression, this result does not hold in general for all generalized linear models, in particular, logistic regression. Our simulation studies show that the conditional likelihood approach achieves better finite sample performance than the regression calibration approach or a naive approach without adjustment for measurement error. In the case of logistic regression, our proposed approach is shown to also outperform the regression approach with batch as a categorical covariate. In addition, we also examine a 'hybrid' approach combining the conditional likelihood method and the regression calibration method, which is shown in simulations to achieve good performance in the presence of both batch‐specific and measurement‐specific errors. We illustrate our method by using data from a colorectal adenoma study. Copyright © 2012 John Wiley & Sons, Ltd.
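For the linear-regression case noted above, the equivalence with a batch fixed-effects regression can be checked directly. The sketch below assumes a purely additive batch error (no measurement-specific error) and uses statsmodels formulas; all names and values are illustrative.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_batches, per_batch, beta = 40, 25, 1.0
batch = np.repeat(np.arange(n_batches), per_batch)
x = rng.normal(size=batch.size)                      # true biomarker
batch_err = rng.normal(0.0, 1.0, n_batches)[batch]   # shared within each batch
w = x + batch_err                                    # measured biomarker
y = beta * x + rng.normal(size=batch.size)

df = pd.DataFrame({"y": y, "w": w, "batch": batch})
naive = smf.ols("y ~ w", data=df).fit()
fixed = smf.ols("y ~ w + C(batch)", data=df).fit()   # batch as categorical covariate

print("naive slope             :", naive.params["w"])   # attenuated toward 0
print("batch fixed-effects slope:", fixed.params["w"])  # close to beta = 1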

18.
BACKGROUND AND OBJECTIVE: We consider the number needed to treat (NNT) when the event of interest is defined by dichotomizing a continuous response at a threshold level. If the response is measured with error, the resulting NNT is biased. We consider methods to reduce this bias. METHODS: Bias adjustment was studied using simulations in which we varied the distributions of the underlying response and measurement error, including both normal and nonnormal distributions. We studied a maximum likelihood estimate (MLE) based on normality assumptions, and also considered a simulation-extrapolation estimate (SIMEX) without such assumptions. The treatment effect across all potential thresholds was summarized using an NNT threshold curve. RESULTS: Crude NNT estimation was substantially biased due to measurement error. The MLE performed well under normality, and it continued to perform well with nonnormal measurement error, but when the underlying response was nonnormal the MLE was unacceptably biased and was outperformed by the SIMEX estimate. The simulation results were also reflected in empirical data from a randomized study of cholesterol-lowering therapy. CONCLUSION: Ignoring measurement error can lead to substantial bias in NNT, which can have an important practical effect on the interpretation of analyses. Analysis methods that adjust for measurement error bias can be used to assess the sensitivity of NNT estimates to this effect.
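Under the normality assumptions of the MLE, the correction essentially amounts to replacing the observed standard deviation with that of the true response when computing the exceedance probabilities behind the NNT. A small scipy sketch with illustrative numbers:

import numpy as np
from scipy.stats import norm

mu_control, mu_treated = 0.0, -0.5   # treatment lowers the mean response
sigma_x = 1.0                        # SD of the true response
sigma_u = 0.8                        # measurement-error SD
threshold = 2.0                      # event: response above this value

def nnt(sd):
    p_control = norm.sf(threshold, loc=mu_control, scale=sd)
    p_treated = norm.sf(threshold, loc=mu_treated, scale=sd)
    return 1.0 / (p_control - p_treated)

sigma_obs = np.sqrt(sigma_x**2 + sigma_u**2)
print("naive NNT (observed SD):", nnt(sigma_obs))
print("corrected NNT (true SD):", nnt(sigma_x))
# With these values the crude NNT is roughly half the corrected one; the
# direction and size of the bias depend on the threshold and error variance.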

19.
Unmeasured confounding is a common concern when researchers attempt to estimate a treatment effect using observational data or randomized studies with nonperfect compliance. To address this concern, instrumental variable methods, such as 2‐stage predictor substitution (2SPS) and 2‐stage residual inclusion (2SRI), have been widely adopted. In many clinical studies of binary and survival outcomes, 2SRI has been accepted as the method of choice over 2SPS, but a compelling theoretical rationale has not been postulated. We evaluate the bias and consistency in estimating the conditional treatment effect for both 2SPS and 2SRI when the outcome is binary, count, or time to event. We demonstrate analytically that the bias in 2SPS and 2SRI estimators can be reframed to mirror the problem of omitted variables in nonlinear models and that there is a direct relationship with the collapsibility of effect measures. In contrast to conclusions made by previous studies (Terza et al, 2008), we demonstrate that the consistency of 2SRI estimators only holds under the following conditions: (1) when the null hypothesis is true; (2) when the outcome model is collapsible; or (3) when estimating the nonnull causal effect from Cox or logistic regression models, the strong and unrealistic assumption that the effect of the unmeasured covariates on the treatment is proportional to their effect on the outcome needs to hold. We propose a novel dissimilarity metric to provide an intuitive explanation of the bias of 2SRI estimators in noncollapsible models and demonstrate that with increasing dissimilarity between the effects of the unmeasured covariates on the treatment versus outcome, the bias of 2SRI increases in magnitude.
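The two estimators can be written down in a few lines for a binary outcome. The sketch below uses a linear first stage and statsmodels' Logit on simulated data with a known conditional log-odds ratio of 1; it is only meant to show how 2SPS and 2SRI differ mechanically, and, per the abstract, neither is generally consistent for the nonnull conditional effect in this noncollapsible model.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 20000
z = rng.normal(size=n)                                       # instrument
u = rng.normal(size=n)                                       # unmeasured confounder
a = (0.8 * z + u + rng.normal(size=n) > 0).astype(float)     # treatment
logit = -0.5 + 1.0 * a + 1.0 * u                             # conditional log-odds ratio = 1
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

stage1 = sm.OLS(a, sm.add_constant(z)).fit()                 # stage 1: treatment on instrument
a_hat = stage1.fittedvalues
resid = a - a_hat

fit_naive = sm.Logit(y, sm.add_constant(a)).fit(disp=0)
fit_2sps = sm.Logit(y, sm.add_constant(a_hat)).fit(disp=0)                      # predicted treatment substituted
fit_2sri = sm.Logit(y, sm.add_constant(np.column_stack([a, resid]))).fit(disp=0)  # residual included

print("naive coef:", fit_naive.params[1])
print("2SPS coef :", fit_2sps.params[1])
print("2SRI coef :", fit_2sri.params[1])
# compare with the true conditional log-odds ratio of 1.0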

20.
We consider random effects meta‐analysis where the outcome variable is the occurrence of some event of interest. The data structures handled are where one has one or more groups in each study, and in each group either the number of subjects with and without the event, or the number of events and the total duration of follow‐up is available. Traditionally, the meta‐analysis follows the summary measures approach based on the estimates of the outcome measure(s) and the corresponding standard error(s). This approach assumes an approximate normal within‐study likelihood and treats the standard errors as known. This approach has several potential disadvantages, such as not accounting for the standard errors being estimated, not accounting for correlation between the estimate and the standard error, the use of an (arbitrary) continuity correction in case of zero events, and the normal approximation being bad in studies with few events. We show that these problems can be overcome in most cases occurring in practice by replacing the approximate normal within‐study likelihood by the appropriate exact likelihood. This leads to a generalized linear mixed model that can be fitted in standard statistical software. For instance, in the case of odds ratio meta‐analysis, one can use the non‐central hypergeometric distribution likelihood leading to mixed‐effects conditional logistic regression. For incidence rate ratio meta‐analysis, it leads to random effects logistic regression with an offset variable. We also present bivariate and multivariate extensions. We present a number of examples, especially with rare events, among which an example of network meta‐analysis. Copyright © 2010 John Wiley & Sons, Ltd.
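For the incidence-rate-ratio case, the exact within-study likelihood reduces to a binomial logit with an offset: conditional on the total events in a study, the treated-arm events are binomial with logit equal to log(RR) plus log(T1/T0). The sketch below fits only the common-effect version with statsmodels (the paper's random-effects model would add a study-level random intercept, i.e., a GLMM); the study data are illustrative.

import numpy as np
import statsmodels.api as sm

# per study: events and person-time in the treated (1) and control (0) arms
events1 = np.array([4, 1, 12, 7, 0])
time1 = np.array([120.0, 80.0, 300.0, 150.0, 60.0])
events0 = np.array([9, 3, 20, 15, 2])
time0 = np.array([110.0, 90.0, 310.0, 140.0, 70.0])

endog = np.column_stack([events1, events0])   # successes, failures given total events
exog = np.ones((len(events1), 1))             # intercept = pooled log rate ratio
offset = np.log(time1 / time0)                # person-time offset

fit = sm.GLM(endog, exog, family=sm.families.Binomial(), offset=offset).fit()
print("pooled log rate ratio:", fit.params[0])
print("pooled rate ratio    :", np.exp(fit.params[0]))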
