Similar Articles
20 similar articles retrieved (search time: 31 ms)
1.
Simulations and Monte Carlo methods serve an important role in modern statistical research. They allow for an examination of the performance of statistical procedures in settings in which analytic and mathematical derivations may not be feasible. A key element in any statistical simulation is the existence of an appropriate data‐generating process: one must be able to simulate data from a specified statistical model. We describe data‐generating processes for the Cox proportional hazards model with time‐varying covariates when event times follow an exponential, Weibull, or Gompertz distribution. We consider three types of time‐varying covariates: first, a dichotomous time‐varying covariate that can change at most once from untreated to treated (e.g., organ transplant); second, a continuous time‐varying covariate such as cumulative exposure at a constant dose to radiation or to a pharmaceutical agent used for a chronic condition; third, a dichotomous time‐varying covariate with a subject being able to move repeatedly between treatment states (e.g., current compliance or use of a medication). In each setting, we derive closed‐form expressions that allow one to simulate survival times so that survival times are related to a vector of fixed or time‐invariant covariates and to a single time‐varying covariate. We illustrate the utility of our closed‐form expressions for simulating event times by using Monte Carlo simulations to estimate the statistical power to detect as statistically significant the effect of different types of binary time‐varying covariates. This is compared with the statistical power to detect as statistically significant a binary time‐invariant covariate. Copyright © 2012 John Wiley & Sons, Ltd.
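As a companion to the abstract above, here is a minimal Python sketch of the closed-form inversion for the simplest setting: an exponential baseline hazard and a dichotomous covariate that switches from untreated to treated at a known time. It is an illustration under those assumptions, not the authors' code, and all names (lam0, beta_tv, t_switch, etc.) are hypothetical.

```python
import numpy as np

def sim_exponential_tv(n, lam0, x, beta_fixed, beta_tv, t_switch, rng=None):
    """Simulate event times from an exponential Cox model with one binary
    time-varying covariate z(t) that switches from 0 to 1 at t_switch.

    Hazard: h(t) = lam0 * exp(x @ beta_fixed) * exp(beta_tv * z(t)).
    """
    rng = np.random.default_rng(rng)
    rate = lam0 * np.exp(x @ beta_fixed)      # subject-specific pre-switch rate
    e = -np.log(rng.uniform(size=n))          # unit-exponential draws
    pre_switch_time = e / rate                # event time if it occurs before the switch
    return np.where(
        pre_switch_time < t_switch,
        pre_switch_time,
        t_switch + (e - rate * t_switch) / (rate * np.exp(beta_tv)),
    )

# Example: 1000 subjects, two fixed covariates, hazard ratio 2 after a switch at t = 1.5
# t = sim_exponential_tv(1000, 0.1, np.random.normal(size=(1000, 2)),
#                        np.array([0.5, -0.3]), np.log(2.0), 1.5)
```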

2.
Clinical trial outcomes for Alzheimer's disease are typically analyzed by using the mixed model for repeated measures (MMRM) or similar models that compare an efficacy scale change from baseline between treatment arms with or without participants' disease stage as a covariate. The MMRM focuses on a single‐point fixed follow‐up duration regardless of the exposure for each participant. In contrast to these typical models, we have developed a novel semiparametric cognitive disease progression model (DPM) for autosomal dominant Alzheimer's disease based on the Dominantly Inherited Alzheimer Network (DIAN) observational study. This model includes three novel features: the DPM (1) aligns and compares participants by disease stage, (2) uses a proportional treatment effect similar in concept to the Cox proportional hazard ratio, and (3) incorporates extended follow‐up data from participants with different follow‐up durations, using all data until the last participant visit. We present the DPM model developed by using the DIAN observational study data and demonstrate through simulation that the cognitive DPM used in hypothetical intervention clinical trials produces substantial gains in power compared with the MMRM.

3.
Two‐period two‐treatment (2×2) crossover designs are commonly used in clinical trials. For continuous endpoints, it has been shown that baseline (pretreatment) measurements collected before the start of each treatment period can be useful in improving the power of the analysis. Methods to achieve a corresponding gain for censored time‐to‐event endpoints have not been adequately studied. We propose a method in which censored values are treated as missing data and multiply imputed using prespecified parametric event time models. The event times in each imputed data set are then log‐transformed and analyzed using a linear model suitable for a 2×2 crossover design with continuous endpoints, with the difference in period‐specific baselines included as a covariate. Results obtained from the imputed data sets are synthesized for point and confidence interval estimation of the treatment ratio of geometric mean event times using model averaging in conjunction with Rubin's combination rule. We use simulations to illustrate the favorable operating characteristics of our method relative to two other methods for crossover trials with censored time‐to‐event data, i.e., a hierarchical rank test that ignores the baselines and a stratified Cox model that uses each study subject as a stratum and includes period‐specific baselines as a covariate. Application to a real data example is provided.
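The synthesis step in this approach rests on Rubin's combination rule. The Python sketch below, with illustrative names, shows one way the per-imputation estimates of the log treatment ratio and their variances might be pooled; it is not the authors' implementation and omits the model-averaging layer described above.

```python
import numpy as np
from scipy import stats

def rubin_combine(estimates, variances, alpha=0.05):
    """Pool m imputed-data estimates (e.g., log ratios of geometric mean event
    times) and their within-imputation variances via Rubin's rules."""
    q = np.asarray(estimates, dtype=float)
    u = np.asarray(variances, dtype=float)
    m = q.size
    qbar = q.mean()                            # pooled point estimate
    ubar = u.mean()                            # average within-imputation variance
    b = q.var(ddof=1)                          # between-imputation variance
    total_var = ubar + (1.0 + 1.0 / m) * b
    df = (m - 1) * (1.0 + ubar / ((1.0 + 1.0 / m) * b)) ** 2   # Rubin's degrees of freedom
    half_width = stats.t.ppf(1.0 - alpha / 2.0, df) * np.sqrt(total_var)
    return qbar, (qbar - half_width, qbar + half_width)

# Back-transform the pooled log ratio and its interval with np.exp to report a
# ratio of geometric mean event times.
```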

4.
5.
The self‐controlled case series method is a statistical approach to investigating associations between acute outcomes and transient exposures. The method uses cases only and compares time at risk after the transient exposure with time at risk outside the exposure period within an individual, using conditional Poisson regression. The risk of outcome and exposure often varies over time, for example, with age, and it is important to allow for such time dependence within the analysis. The standard approach for modelling time‐varying covariates is to split observation periods into blocks according to categories of the covariate and then to model the relationship using indicators for each category. However, this can be inefficient and can lead to problems with collinearity if the exposure occurs at approximately the same time in all individuals. As an alternative, we propose using fractional polynomials to model the relationship between the time‐varying covariate and incidence of the outcome. We present the results from an analysis exploring the association between rotavirus vaccination and intussusception risk as well as a simulation study. We conclude that fractional polynomials provide a useful approach to adjusting for time‐varying covariates but that it is important to explore the sensitivity of the results to the number of categories and the method of adjustment. Copyright © 2013 John Wiley & Sons, Ltd.
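Fractional polynomials draw powers from a conventional small set and interpret a power of zero as the logarithm. The Python sketch below shows how age terms could be built before entering a conditional Poisson regression; the power set and rescaling are standard conventions, not details taken from this paper.

```python
import numpy as np

FP_POWERS = (-2, -1, -0.5, 0, 0.5, 1, 2, 3)    # conventional fractional-polynomial powers

def fp_terms(age, powers):
    """Build fractional-polynomial columns for a positive covariate such as age.
    A power of 0 means log(age); a repeated power gains an extra log(age) factor
    (the usual FP2 convention)."""
    age = np.asarray(age, dtype=float)
    cols, used = [], set()
    for p in powers:
        term = np.log(age) if p == 0 else age ** p
        if p in used:
            term = term * np.log(age)           # repeated power -> multiply by log(age)
        used.add(p)
        cols.append(term)
    return np.column_stack(cols)

# Example FP2 adjustment for age (rescaling age is common for numerical stability):
# X_age = fp_terms(age_in_years / 10.0, powers=(0, 0.5))
```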

6.
For time‐to‐event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact or addressing errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression‐free survival or time to AIDS progression) can be difficult to assess or reliant on self‐report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log‐linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic.
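SIMEX itself follows a simple simulate-then-extrapolate recipe. The generic Python sketch below assumes additive error in the recorded event times with known standard deviation and a user-supplied fitting function (for instance, one that wraps a Cox fit and returns the log hazard ratio); the noise model, the quadratic extrapolant, and all names are illustrative simplifications rather than the paper's specific extension.

```python
import numpy as np

def simex_log_hr(times, events, x, fit_log_hr, sigma_u,
                 lambdas=(0.5, 1.0, 1.5, 2.0), n_rep=50, rng=None):
    """Generic SIMEX correction for a log hazard ratio when the observed event
    times carry additive measurement error with standard deviation sigma_u.

    For each lambda, extra noise with variance lambda * sigma_u**2 is added to
    the event times, the model is refit n_rep times, and the mean estimate is
    recorded; a quadratic in lambda is then extrapolated back to lambda = -1.
    """
    rng = np.random.default_rng(rng)
    lam_grid = [0.0] + list(lambdas)
    mean_estimates = []
    for lam in lam_grid:
        if lam == 0.0:
            mean_estimates.append(fit_log_hr(times, events, x))
            continue
        reps = []
        for _ in range(n_rep):
            noisy = times + rng.normal(scale=np.sqrt(lam) * sigma_u, size=times.shape)
            noisy = np.maximum(noisy, 1e-8)          # keep event times positive
            reps.append(fit_log_hr(noisy, events, x))
        mean_estimates.append(np.mean(reps))
    coef = np.polyfit(lam_grid, mean_estimates, deg=2)
    return np.polyval(coef, -1.0)                    # SIMEX-corrected log hazard ratio
```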

7.
Prognostic studies often estimate survival curves for patients with different covariate vectors, but the validity of their results depends largely on the accuracy of the estimated covariate effects. To avoid conventional proportional hazards and linearity assumptions, flexible extensions of Cox's proportional hazards model incorporate non‐linear (NL) and/or time‐dependent (TD) covariate effects. However, their impact on survival curves estimation is unclear. Our primary goal is to develop and validate a flexible method for estimating individual patients' survival curves, conditional on multiple predictors with possibly NL and/or TD effects. We first obtain maximum partial likelihood estimates of NL and TD effects and use backward elimination to select statistically significant effects into a final multivariable model. We then plug the selected NL and TD estimates in the full likelihood function and estimate the baseline hazard function and the resulting survival curves, conditional on individual covariate vectors. The TD and NL functions and the log hazard are modeled with unpenalized regression B‐splines. In simulations, our flexible survival curve estimates were unbiased and had much lower mean square errors than the conventional estimates. In real‐life analyses of mortality after septic shock, our model significantly improved the deviance (likelihood ratio test = 84.8, df = 20, p < 0.0001) and substantially changed the predicted survival for several subjects. Copyright © 2015 John Wiley & Sons, Ltd.

8.
For risk and benefit assessment in clinical trials and observational studies with time‐to‐event data, the Cox model has usually been the model of choice. When the hazards are possibly non‐proportional, a piece‐wise Cox model over a partition of the time axis may be considered. Here, we propose to analyze clinical trials or observational studies with time‐to‐event data using a certain semiparametric model. The model allows for a time‐dependent treatment effect. It includes the important proportional hazards model as a sub‐model and can accommodate various patterns of time‐dependence of the hazard ratio. After estimation of the model parameters using a pseudo‐likelihood approach, simultaneous confidence intervals for the hazard ratio function are established using a Monte Carlo method to assess the time‐varying pattern of the treatment effect. To assess the overall treatment effect, estimated average hazard ratio and its confidence intervals are also obtained. The proposed methods are applied to data from the Women's Health Initiative. To compare the Women's Health Initiative clinical trial and observational study, we use the propensity score in building the regression model. Compared with the piece‐wise Cox model, the proposed model yields a better model fit and does not require partitioning of the time axis. Copyright © 2015 John Wiley & Sons, Ltd.
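One standard way to obtain simultaneous confidence intervals for a hazard ratio function is to simulate from the approximate normal distribution of the estimated coefficients and calibrate a critical value for the maximal standardized deviation over the time grid. The Python sketch below illustrates that generic device under the assumption that the log hazard ratio at each time point is a linear combination of basis functions; it is not this paper's exact procedure, and all names are hypothetical.

```python
import numpy as np

def mc_simultaneous_band(basis, theta_hat, cov_hat, alpha=0.05, n_draws=10000, rng=None):
    """Monte Carlo simultaneous confidence band for a log hazard ratio curve
    eta(t) = basis(t) @ theta_hat, over a grid of times encoded in `basis`
    (rows = time points, columns = basis functions)."""
    rng = np.random.default_rng(rng)
    theta_hat = np.asarray(theta_hat, dtype=float)
    eta = basis @ theta_hat                              # pointwise estimates
    se = np.sqrt(np.einsum("ij,jk,ik->i", basis, cov_hat, basis))
    draws = rng.multivariate_normal(np.zeros_like(theta_hat), cov_hat, size=n_draws)
    dev = np.abs(draws @ basis.T) / se                   # standardized deviations
    c_alpha = np.quantile(dev.max(axis=1), 1 - alpha)    # simultaneous critical value
    return eta - c_alpha * se, eta + c_alpha * se
```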

9.
We propose novel estimation approaches for generalized varying coefficient models that are tailored for unsynchronized, irregular and infrequent longitudinal designs/data. Unsynchronized longitudinal data refer to the time‐dependent response and covariate measurements for each individual measured at distinct time points. Data from the Comprehensive Dialysis Study motivate the proposed methods. We model the potential age‐varying association between infection‐related hospitalization status and the inflammatory marker, C‐reactive protein, within the first 2 years from initiation of dialysis. We cannot directly apply traditional longitudinal modeling to unsynchronized data, and, to date, no method exists to estimate time‐varying or age‐varying effects for generalized outcomes (e.g., binary or count data). In addition, through the analysis of the Comprehensive Dialysis Study data and simulation studies, we show that preprocessing steps, such as binning, needed to synchronize data to apply traditional modeling can lead to significant loss of information in this context. In contrast, the proposed approaches discard no observation; they exploit the fact that although there is little information in a single subject trajectory because of irregularity and infrequency, the moments of the underlying processes can be accurately and efficiently recovered by pooling information from all subjects using functional data analysis. We derive subject‐specific mean response trajectory predictions and study finite sample properties of the estimators. Copyright © 2013 John Wiley & Sons, Ltd.

10.
In many time‐to‐event studies, particularly in epidemiology, the time of the first observation or study entry is arbitrary in the sense that this is not a time of risk modification. We present a formal argument that, in these situations, it is not advisable to take the first observation as the time origin, either in accelerated failure time or proportional hazards models. Instead, we advocate using birth as the time origin. We use a two‐stage process to account for the fact that baseline observations may be made at different ages in different subjects. First, we marginally regress any potentially age‐varying covariates against age, retaining the residuals. These residuals are then used as covariates in fitting an accelerated failure time or proportional hazards model; we call the procedures residual accelerated failure time regression and residual proportional hazards regression, respectively. We compare residual accelerated failure time regression with the standard approach, demonstrating superior predictive ability and potentially higher power of the residual method in realistic examples. This highlights flaws in current approaches to communicating risks from epidemiological evidence to support clinical and health policy decisions. Copyright © 2012 John Wiley & Sons, Ltd.
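The two-stage "residual" idea is straightforward to sketch: residualize each potentially age-varying covariate against age, then fit the survival model on the age (time-since-birth) scale using the residuals. The Python sketch below uses a simple linear stage-1 fit and the lifelines WeibullAFTFitter for stage 2 as an illustrative choice; column names are hypothetical, and left truncation at the age of study entry is not handled here.

```python
import numpy as np
from lifelines import WeibullAFTFitter   # illustrative choice of AFT implementation

def residualize(covariate, age):
    """Stage 1: marginally regress an age-varying covariate on age (simple
    linear fit here) and return the residuals."""
    X = np.column_stack([np.ones_like(age), age])
    beta, *_ = np.linalg.lstsq(X, covariate, rcond=None)
    return covariate - X @ beta

def residual_aft(df):
    """Stage 2: fit an AFT model on the age scale (birth as time origin) with
    the stage-1 residuals as covariates. df is assumed to contain 'age_exit'
    (age at event or censoring), 'event', 'age_entry', 'sbp', and 'sex'."""
    df = df.copy()
    df["sbp_resid"] = residualize(df["sbp"].to_numpy(), df["age_entry"].to_numpy())
    aft = WeibullAFTFitter()
    aft.fit(df[["age_exit", "event", "sbp_resid", "sex"]],
            duration_col="age_exit", event_col="event")
    return aft
```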

11.
Two‐phase designs are commonly used to subsample subjects from a cohort in order to study covariates that are too expensive to ascertain for everyone in the cohort. This is particularly true for the study of immune response biomarkers in vaccine immunology, where new, elaborate assays are constantly being developed to improve our understanding of the human immune responses to vaccines and how the immune response may protect humans from virus infection. It has long been recognized that if there exist variables that are correlated with the expensive variables and can be measured for every subject in the cohort, they can be leveraged to improve the estimation efficiency for the effects of the expensive variables. In this research article, we develop an improved inverse probability weighted estimation approach for semiparametric transformation models with a two‐phase study design. Semiparametric transformation models are a class of models that includes the Cox proportional hazards and proportional odds models. They provide an attractive way to model the effects of immune response biomarkers, as human immune responses generally wane over time. Our approach is based on weights calibration, which has its origin in survey statistics and was used by Breslow et al. [1, 2] to improve inverse probability weighted estimation of the Cox regression model. We develop asymptotic theory for our estimator and examine its performance through simulation studies. We illustrate the proposed method with application to two HIV‐1 vaccine efficacy trials. Copyright © 2015 John Wiley & Sons, Ltd.
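Weights calibration adjusts the phase-2 design weights so that weighted totals of inexpensive auxiliary variables reproduce their known full-cohort totals. The Python sketch below uses exponential (raking) calibration as one common variant; it is a generic illustration, not the estimator developed in the paper, and the names are hypothetical.

```python
import numpy as np
from scipy.optimize import root

def calibrate_weights(design_weights, aux_phase2, aux_cohort_totals):
    """Calibrate inverse-probability design weights for phase-2 subjects so the
    weighted totals of auxiliary variables match their full-cohort totals,
    using exponential (raking) calibration: w_i = d_i * exp(aux_i @ eta)."""
    aux_phase2 = np.asarray(aux_phase2, dtype=float)
    d = np.asarray(design_weights, dtype=float)

    def calibration_gap(eta):
        w = d * np.exp(aux_phase2 @ eta)
        return w @ aux_phase2 - aux_cohort_totals

    sol = root(calibration_gap, np.zeros(aux_phase2.shape[1]))
    return d * np.exp(aux_phase2 @ sol.x)

# The calibrated weights would then replace the raw inverse-probability weights
# in the weighted estimating equations of the transformation model.
```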

12.
This article considers the problem of examining time‐varying causal effect moderation using observational, longitudinal data in which treatment, candidate moderators, and possible confounders are time varying. The structural nested mean model (SNMM) is used to specify the moderated time‐varying causal effects of interest in a conditional mean model for a continuous response given time‐varying treatments and moderators. We present an easy‐to‐use estimator of the SNMM that combines an existing regression‐with‐residuals (RR) approach with an inverse‐probability‐of‐treatment weighting (IPTW) strategy. The RR approach has been shown to identify the moderated time‐varying causal effects if the time‐varying moderators are also the sole time‐varying confounders. The proposed IPTW+RR approach provides estimators of the moderated time‐varying causal effects in the SNMM in the presence of an additional, auxiliary set of known and measured time‐varying confounders. We use a small simulation experiment to compare IPTW+RR versus the traditional regression approach and to compare small and large sample properties of asymptotic versus bootstrap estimators of the standard errors for the IPTW+RR approach. This article clarifies the distinction between time‐varying moderators and time‐varying confounders. We illustrate the methodology in a case study to assess if time‐varying substance use moderates treatment effects on future substance use. Copyright © 2013 John Wiley & Sons, Ltd.

13.
The use of propensity score methods to adjust for selection bias in observational studies has become increasingly popular in public health and medical research. A substantial portion of studies using propensity score adjustment treat the propensity score as a conventional regression predictor. Through a Monte Carlo simulation study, Austin and colleagues investigated the bias associated with treatment effect estimation when the propensity score is used as a covariate in nonlinear regression models, such as logistic regression and Cox proportional hazards models. We show that the bias exists even in a linear regression model when the estimated propensity score is used and derive the explicit form of the bias. We also conduct an extensive simulation study to compare the performance of such covariate adjustment with propensity score stratification, propensity score matching, inverse probability of treatment weighted method, and nonparametric functional estimation using splines. The simulation scenarios are designed to reflect real data analysis practice. Instead of specifying a known parametric propensity score model, we generate the data by considering various degrees of overlap of the covariate distributions between treated and control groups. Propensity score matching excels when the treated group is contained within a larger control pool, while the model‐based adjustment may have an edge when treated and control groups do not have too much overlap. Overall, adjusting for the propensity score through stratification or matching followed by regression, or by using splines, appears to be a good practical strategy. Copyright © 2013 John Wiley & Sons, Ltd.
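A compact simulation in the spirit of this comparison can be set up with a logistic propensity model and a linear outcome; the Python sketch below contrasts entering the estimated propensity score as a covariate with inverse probability of treatment weighting. The data-generating values and names are invented for illustration and do not reproduce the paper's scenarios.

```python
import numpy as np
import statsmodels.api as sm

def one_run(n=2000, true_effect=1.0, rng=None):
    """Compare treatment-effect estimates from (a) the estimated propensity
    score entered as a regression covariate and (b) inverse probability of
    treatment weighting, in a simple linear-outcome setting."""
    rng = np.random.default_rng(rng)
    x = rng.normal(size=(n, 2))
    ps_true = 1.0 / (1.0 + np.exp(-(0.5 * x[:, 0] - 0.8 * x[:, 1])))
    a = rng.binomial(1, ps_true)
    y = true_effect * a + 1.0 * x[:, 0] + 0.5 * x[:, 1] + rng.normal(size=n)

    # Estimate the propensity score by logistic regression
    ps_fit = sm.Logit(a, sm.add_constant(x)).fit(disp=0)
    ps_hat = ps_fit.predict(sm.add_constant(x))

    # (a) estimated propensity score as a regression covariate
    cov_model = sm.OLS(y, sm.add_constant(np.column_stack([a, ps_hat]))).fit()
    # (b) inverse probability of treatment weighting (ATE weights)
    w = a / ps_hat + (1 - a) / (1 - ps_hat)
    iptw_model = sm.WLS(y, sm.add_constant(a), weights=w).fit()
    return cov_model.params[1], iptw_model.params[1]
```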

14.
Causal inference with observational longitudinal data and time‐varying exposures is complicated due to the potential for time‐dependent confounding and unmeasured confounding. Most causal inference methods that handle time‐dependent confounding rely on either the assumption of no unmeasured confounders or the availability of an unconfounded variable that is associated with the exposure (eg, an instrumental variable). Furthermore, when data are incomplete, validity of many methods often depends on the assumption of missing at random. We propose an approach that combines a parametric joint mixed‐effects model for the study outcome and the exposure with g‐computation to identify and estimate causal effects in the presence of time‐dependent confounding and unmeasured confounding. G‐computation can estimate participant‐specific or population‐average causal effects using parameters of the joint model. The joint model is a type of shared parameter model where the outcome and exposure‐selection models share common random effect(s). We also extend the joint model to handle missing data and truncation by death when missingness is possibly not at random. We evaluate the performance of the proposed method using simulation studies and compare the method to both linear mixed‐ and fixed‐effects models combined with g‐computation as well as to targeted maximum likelihood estimation. We apply the method to an epidemiologic study of vitamin D and depressive symptoms in older adults and include code using SAS PROC NLMIXED software to enhance the accessibility of the method to applied researchers.

15.
Cox models are commonly used in the analysis of time to event data. One advantage of Cox models is the ability to include time‐varying covariates, often a binary covariate that codes for the occurrence of an event that affects an individual subject. A common assumption in this case is that the effect of the event on the outcome of interest is constant and permanent for each subject. In this paper, we propose a modification to the Cox model to allow the influence of an event to exponentially decay over time. Methods for generating data using the inverse cumulative density function for the proposed model are developed. Likelihood ratio tests and AIC are investigated as methods for comparing the proposed model to the commonly used permanent exposure model. A simulation study is performed, and 3 different data sets are presented as examples.
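Because the decaying effect removes the closed form of the cumulative hazard after the exposure, event times can be generated by numerically inverting the cumulative hazard. The Python sketch below assumes an exponential baseline and a single exposure occurring at a known time; the parametrization beta * exp(-gamma * (t - t_exp)) and all names are illustrative and not necessarily the paper's exact formulation.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def cum_hazard(t, lam, beta, gamma, t_exp):
    """Cumulative hazard for an exponential baseline (rate lam) with a binary
    exposure at t_exp whose log hazard ratio decays as
    beta * exp(-gamma * (s - t_exp)) after the exposure."""
    if t <= t_exp:
        return lam * t
    post, _ = quad(lambda s: np.exp(beta * np.exp(-gamma * (s - t_exp))), t_exp, t)
    return lam * t_exp + lam * post

def sim_decaying_effect(lam, beta, gamma, t_exp, rng=None, t_max=1e4):
    """Simulate one event time by numerically inverting the cumulative hazard
    at a unit-exponential target (inverse-CDF method)."""
    rng = np.random.default_rng(rng)
    target = -np.log(rng.uniform())
    if cum_hazard(t_max, lam, beta, gamma, t_exp) < target:
        return t_max          # administrative cap; treat as censored in practice
    return brentq(lambda t: cum_hazard(t, lam, beta, gamma, t_exp) - target, 0.0, t_max)
```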

16.
For the estimation of controlled direct effects (i.e., direct effects when the intermediates are set to a fixed level for all members of the population) without bias, two fundamental assumptions must hold: there must be no unmeasured confounders of the treatment‐outcome relationship or of the intermediate‐outcome relationship. Even if these assumptions hold, one would nonetheless fail to estimate direct effects using standard methods, for example, stratification or regression modeling, when the treatment influences confounding factors. For such situations, the sequential g‐estimation method for structural nested mean models has been developed for estimating controlled direct effects in point‐treatment situations. In this study, we demonstrate that this method can be applied to longitudinal data with time‐varying treatments and repeatedly measured intermediate variables. We sequentially estimate the parameters in two structural nested mean models: one for a repeatedly measured intermediate and the other one for direct effects of a time‐varying treatment. The method was applied to data from a large primary prevention trial for coronary events, in which pravastatin was used to lower cholesterol levels in patients with moderate hypercholesterolemia. Copyright © 2014 John Wiley & Sons, Ltd.
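In the simplest point-treatment case, the sequential g-estimation recipe reduces to two ordinary regressions: estimate and remove ("blip down") the intermediate's effect, then regress the adjusted outcome on treatment and baseline covariates only. The Python sketch below illustrates that two-step logic with statsmodels; the longitudinal version with time-varying treatment repeats the blip-down step across visits, and the naive standard error from step 2 ignores step-1 uncertainty. All names are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

def sequential_g_point(y, a, m, c_base, c_post):
    """Two-step sequential g-estimation of the controlled direct effect of a
    treatment `a` on outcome `y` with the intermediate `m` set to 0.
    c_base: baseline confounders; c_post: post-treatment confounders of m
    (possibly affected by treatment, which is why they enter step 1 only)."""
    # Step 1: outcome model including the intermediate and all confounders
    X1 = sm.add_constant(np.column_stack([a, m, c_base, c_post]))
    fit1 = sm.OLS(y, X1).fit()
    gamma_m = fit1.params[2]                 # estimated effect of the intermediate
    # Step 2: blip down the outcome, then regress on treatment + baseline covariates
    y_blipped = y - gamma_m * m
    X2 = sm.add_constant(np.column_stack([a, c_base]))
    fit2 = sm.OLS(y_blipped, X2).fit()
    return fit2.params[1]                    # controlled direct effect of treatment
```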

17.
The accelerated failure time (AFT) model has been suggested as an alternative to the Cox proportional hazards model. However, a parametric AFT model requires the specification of an appropriate distribution for the event time, which is often difficult to identify in real‐life studies and may limit applications. A semiparametric AFT model was developed by Komárek et al based on a smoothed error distribution that does not require such specification. In this article, we develop a spline‐based AFT model that also does not require specification of the parametric family of event time distribution. The baseline hazard function is modeled by regression B‐splines, allowing for the estimation of a variety of smooth and flexible shapes. In comprehensive simulations, we validate the performance of our approach and compare with the results from parametric AFT models and the approach of Komárek. Both the proposed spline‐based AFT model and the approach of Komárek provided unbiased estimates of covariate effects and survival curves for a variety of scenarios in which the event time followed different distributions, including both simple and complex cases. Spline‐based estimates of the baseline hazard also showed satisfactory numerical stability. As expected, the baseline hazard and survival probabilities estimated by the misspecified parametric AFT models deviated from the truth. We illustrated the application of the proposed model in a study of colon cancer.
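Modeling the baseline hazard with regression B-splines amounts to building a spline design matrix over follow-up time and estimating its coefficients inside the likelihood. The Python sketch below constructs such a basis with scipy; the degree and knot placement are illustrative choices rather than those used in the paper.

```python
import numpy as np
from scipy.interpolate import splev

def bspline_basis(t, interior_knots, degree=3, boundary=(None, None)):
    """Evaluate a B-spline basis at times t; the log baseline hazard can then be
    modeled as a linear combination of these columns."""
    t = np.asarray(t, dtype=float)
    lo = boundary[0] if boundary[0] is not None else t.min()
    hi = boundary[1] if boundary[1] is not None else t.max()
    knots = np.concatenate([[lo] * (degree + 1),
                            np.asarray(interior_knots, dtype=float),
                            [hi] * (degree + 1)])
    n_basis = len(knots) - degree - 1
    basis = np.empty((t.size, n_basis))
    for j in range(n_basis):
        coefs = np.zeros(n_basis)
        coefs[j] = 1.0                      # pick out the j-th basis function
        basis[:, j] = splev(t, (knots, coefs, degree))
    return basis

# Example: log h0(t) approximated by bspline_basis(t, [q25, q50, q75]) @ gamma,
# with q25/q50/q75 the quartiles of the observed event times (illustrative knots).
```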

18.
Prediction of cumulative incidences is often a primary goal in clinical studies with several endpoints. We compare predictions among competing risks models with time‐dependent covariates. For a series of landmark time points, we study the predictive accuracy of a multi‐state regression model, where the time‐dependent covariate represents an intermediate state, and two alternative landmark approaches. At each landmark time point, the prediction performance is measured as the t‐year expected Brier score where pseudovalues are constructed in order to deal with right‐censored event times. We apply the methods to data from a bone marrow transplant study where graft versus host disease is considered a time‐dependent covariate for predicting relapse and death in remission. Copyright © 2013 John Wiley & Sons, Ltd.
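Pseudo-values replace the (possibly censored) event status at the prediction horizon by jackknife pseudo-observations from a nonparametric estimator, which can then be plugged into the Brier loss because the loss is linear in the binary status. The Python sketch below does this for a single event type using Kaplan-Meier; the competing-risks setting of the paper would use the Aalen-Johansen estimator instead, and all names are illustrative.

```python
import numpy as np

def km_cuminc(time, event, horizon):
    """Kaplan-Meier-based cumulative incidence 1 - S(horizon) for a single
    event type (no competing risks in this simplified sketch)."""
    order = np.argsort(time)
    t, d = np.asarray(time)[order], np.asarray(event)[order]
    surv, at_risk = 1.0, len(t)
    for ti, di in zip(t, d):
        if ti > horizon:
            break
        if di:
            surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1
    return 1.0 - surv

def pseudo_values(time, event, horizon):
    """Jackknife pseudo-observations for the event indicator I(T <= horizon)."""
    n = len(time)
    full = km_cuminc(time, event, horizon)
    loo = np.array([km_cuminc(np.delete(time, i), np.delete(event, i), horizon)
                    for i in range(n)])
    return n * full - (n - 1) * loo

def pseudo_brier(time, event, predicted_cuminc, horizon):
    """t-year expected Brier score with pseudo-values standing in for the
    unobservable event status. Uses (Y - p)^2 = Y(1 - 2p) + p^2 for binary Y,
    which is linear in Y and therefore compatible with pseudo-values."""
    y = pseudo_values(time, event, horizon)
    p = np.asarray(predicted_cuminc, dtype=float)
    return float(np.mean(y * (1.0 - 2.0 * p) + p ** 2))
```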

19.
This paper presents a Bayesian adaptive group least absolute shrinkage and selection operator method to conduct simultaneous model selection and estimation under semiparametric hidden Markov models. We specify the conditional regression model and the transition probability model in the hidden Markov model as additive nonparametric functions of covariates. A basis expansion is adopted to approximate the nonparametric functions. We introduce multivariate conditional Laplace priors to impose adaptive penalties on regression coefficients and different groups of basis expansions under the Bayesian framework. An efficient Markov chain Monte Carlo algorithm is then proposed to identify the nonexistent, constant, linear, and nonlinear forms of covariate effects in both conditional and transition models. The empirical performance of the proposed methodology is evaluated via simulation studies. We apply the proposed model to analyze a real data set that was collected from the Alzheimer's Disease Neuroimaging Initiative study. The analysis identifies important risk factors for cognitive decline and for the transition from cognitively normal to Alzheimer's disease.

20.
This study investigates appropriate estimation of estimator variability in the context of causal mediation analysis that employs propensity score‐based weighting. Such an analysis decomposes the total effect of a treatment on the outcome into an indirect effect transmitted through a focal mediator and a direct effect bypassing the mediator. Ratio‐of‐mediator‐probability weighting estimates these causal effects by adjusting for the confounding impact of a large number of pretreatment covariates through propensity score‐based weighting. In step 1, a propensity score model is estimated. In step 2, the causal effects of interest are estimated using weights derived from the prior step's regression coefficient estimates. Statistical inferences obtained from this 2‐step estimation procedure are potentially problematic if the estimated standard errors of the causal effect estimates do not reflect the sampling uncertainty in the estimation of the weights. This study extends to ratio‐of‐mediator‐probability weighting analysis a solution to the 2‐step estimation problem by stacking the score functions from both steps. We derive the asymptotic variance‐covariance matrix for the indirect effect and direct effect 2‐step estimators, provide simulation results, and illustrate with an application study. Our simulation results indicate that the sampling uncertainty in the estimated weights should not be ignored. The standard error estimation using the stacking procedure offers a viable alternative to bootstrap standard error estimation. We discuss broad implications of this approach for causal analysis involving propensity score‐based weighting.
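Stacking the score functions of the weight model (step 1) and the effect model (step 2) turns the two-step procedure into a single M-estimator whose sandwich variance accounts for the uncertainty in the estimated weights. The generic Python sketch below computes that variance with finite-difference Jacobians; the stacked score function itself is left as a user-supplied callable, since its exact form depends on the ratio-of-mediator-probability weighting models, and all names are illustrative.

```python
import numpy as np

def stacked_sandwich_cov(psi, theta_hat, data, eps=1e-5):
    """Sandwich covariance for a two-step estimator from stacked estimating
    equations. psi(theta, data) must return an (n, p) array whose columns stack
    the step-1 (weight model) and step-2 (effect model) scores per observation,
    evaluated at the combined parameter vector theta."""
    theta_hat = np.asarray(theta_hat, dtype=float)
    psi0 = psi(theta_hat, data)
    n, p = psi0.shape
    bread = np.zeros((p, p))                  # A = -E[d psi / d theta]
    for j in range(p):
        shift = np.zeros(p)
        shift[j] = eps
        d_psi = (psi(theta_hat + shift, data) - psi(theta_hat - shift, data)) / (2 * eps)
        bread[:, j] = -d_psi.mean(axis=0)
    meat = psi0.T @ psi0 / n                  # B = E[psi psi']
    bread_inv = np.linalg.inv(bread)
    return bread_inv @ meat @ bread_inv.T / n   # asymptotic covariance of theta_hat
```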
