1.
M.-A. Bind, T. J. VanderWeele, J. D. Schwartz, B. A. Coull. Statistics in Medicine, 2017, 36(26):4182-4195
Mediation analysis has mostly been conducted with mean regression models. Because this approach models means, formulae for direct and indirect effects are based on changes in means, which may not capture effects occurring in units at the tails of the mediator and outcome distributions. Individuals with extreme values of medical endpoints are often more susceptible to disease and can be missed if one investigates mean changes only. We derive the controlled direct and indirect effects of an exposure along percentiles of the mediator and outcome using quantile regression models and a causal framework. The quantile regression models can accommodate an exposure-mediator interaction and random intercepts to allow for a longitudinal mediator and outcome. Because DNA methylation acts as a complex "switch" to control gene expression and fibrinogen is a cardiovascular risk factor, individuals with extreme levels of these markers may be more susceptible to air pollution. We therefore apply this methodology to environmental data to estimate the effect of air pollution, as measured by particle number, on fibrinogen levels through a change in interferon-gamma (IFN-γ) methylation. We estimate the controlled direct effect of air pollution on the qth percentile of fibrinogen and its indirect effect through a change in the pth percentile of IFN-γ methylation. We found evidence of a direct effect of particle number on the upper tail of the fibrinogen distribution. We observed a suggestive indirect effect of particle number on the upper tail of the fibrinogen distribution through a change in the lower percentiles of the IFN-γ methylation distribution.
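A minimal sketch of the quantile-regression ingredients described above, not the authors' code: simulated data with illustrative variable names (particles, methyl, fibrinogen), an outcome quantile regression with an exposure-mediator interaction, and the controlled direct effect read off at a chosen percentile.

```python
# Sketch: controlled direct effect at the q-th outcome percentile via quantile
# regression with an exposure-mediator interaction (simulated data; names are
# illustrative stand-ins for particle number, IFN-gamma methylation, fibrinogen).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
dat = pd.DataFrame({"particles": rng.normal(size=n)})
dat["methyl"] = 0.3 * dat["particles"] + rng.normal(size=n)           # mediator
dat["fibrinogen"] = (0.5 * dat["particles"] + 0.4 * dat["methyl"]
                     + 0.2 * dat["particles"] * dat["methyl"]
                     + rng.standard_t(df=5, size=n))                   # outcome

# Outcome model at the 90th percentile, with exposure-mediator interaction
out_fit = smf.quantreg("fibrinogen ~ particles * methyl", dat).fit(q=0.9)

# Controlled direct effect of a one-unit exposure change with the mediator
# held fixed at level m: theta_exposure + theta_interaction * m
m = dat["methyl"].median()
cde_q90 = out_fit.params["particles"] + out_fit.params["particles:methyl"] * m
print(f"Controlled direct effect at the 90th percentile (mediator fixed): {cde_q90:.3f}")

# Mediator model at a lower percentile (used when tracing indirect pathways)
med_fit = smf.quantreg("methyl ~ particles", dat).fit(q=0.1)
print(f"Exposure effect on the 10th percentile of the mediator: "
      f"{med_fit.params['particles']:.3f}")
```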
2.
3.
Longitudinal data analysis is one of the most discussed and applied areas in statistics, and a great deal of literature has been developed for it. However, most of the existing literature focuses on situations where observation times are fixed or can be treated as fixed constants. This paper considers the situation where these observation times may be random variables and, more importantly, may be related to the underlying longitudinal variable or process of interest. Furthermore, covariate effects may be time-varying. For the analysis, a joint modeling approach is proposed and, in particular, an estimating equation-based procedure is developed for estimation of the time-varying regression parameters. Both asymptotic and finite sample properties of the proposed estimates are established. The methodology is applied to the acute myeloid leukemia trial that motivated this study.
4.
Causal inference with observational longitudinal data and time-varying exposures is complicated due to the potential for time-dependent confounding and unmeasured confounding. Most causal inference methods that handle time-dependent confounding rely on either the assumption of no unmeasured confounders or the availability of an unconfounded variable that is associated with the exposure (eg, an instrumental variable). Furthermore, when data are incomplete, validity of many methods often depends on the assumption of missing at random. We propose an approach that combines a parametric joint mixed-effects model for the study outcome and the exposure with g-computation to identify and estimate causal effects in the presence of time-dependent confounding and unmeasured confounding. G-computation can estimate participant-specific or population-average causal effects using parameters of the joint model. The joint model is a type of shared parameter model where the outcome and exposure-selection models share common random effect(s). We also extend the joint model to handle missing data and truncation by death when missingness is possibly not at random. We evaluate the performance of the proposed method using simulation studies and compare the method to both linear mixed- and fixed-effects models combined with g-computation as well as to targeted maximum likelihood estimation. We apply the method to an epidemiologic study of vitamin D and depressive symptoms in older adults and include code using SAS PROC NLMIXED software to enhance the accessibility of the method to applied researchers.
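The paper embeds g-computation in a joint shared-parameter mixed model fitted with SAS PROC NLMIXED; the sketch below illustrates only the g-computation step itself, with an ordinary regression standing in for the joint model and simulated data with hypothetical variable names.

```python
# Simplified sketch of the g-computation step: predict every participant's
# outcome under each fixed exposure value and average the difference.
# (An ordinary regression stands in for the paper's joint mixed-effects model.)
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
dat = pd.DataFrame({"confounder": rng.normal(size=n)})
dat["exposure"] = rng.binomial(1, 1 / (1 + np.exp(-dat["confounder"])))
dat["outcome"] = 2.0 * dat["exposure"] + 1.5 * dat["confounder"] + rng.normal(size=n)

fit = smf.ols("outcome ~ exposure + confounder", dat).fit()

# Counterfactual predictions under exposure set to 1 and to 0 for everyone
d1 = dat.assign(exposure=1)
d0 = dat.assign(exposure=0)
ate = (fit.predict(d1) - fit.predict(d0)).mean()
print(f"g-computation estimate of the population-average effect: {ate:.3f} (truth: 2.0)")
```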
5.
Catherine A. Welch, Irene Petersen, Jonathan W. Bartlett, Ian R. White, Louise Marston, Richard W. Morris, Irwin Nazareth, Kate Walters, James Carpenter. Statistics in Medicine, 2014, 33(21):3725-3737
Most implementations of multiple imputation (MI) of missing data are designed for simple rectangular data structures and ignore the temporal ordering of data. Therefore, when applying MI to longitudinal data with intermittent patterns of missing data, alternative strategies must be considered. One approach is to divide the data into time blocks and implement MI independently within each block. An alternative is to include all time blocks in the same MI model. With increasing numbers of time blocks, this approach is likely to break down because of co-linearity and over-fitting. The new two-fold fully conditional specification (FCS) MI algorithm addresses these issues by conditioning only on measurements that are local in time. We describe and report the results of a novel simulation study to critically evaluate the two-fold FCS algorithm and its suitability for imputation of longitudinal electronic health records. After generating a full data set, approximately 70% of selected continuous and categorical variables were made missing completely at random in each of ten time blocks. Subsequently, we applied a simple time-to-event model. We compared the efficiency of estimated coefficients from a complete records analysis, MI of data in the baseline time block, and the two-fold FCS algorithm. The results show that the two-fold FCS algorithm maximises the use of available data, with the gain relative to baseline MI depending on the strength of correlations within and between variables. Using this approach also increases the plausibility of the missing at random assumption by using repeated measures over time of variables whose baseline values may be missing. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.
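A rough sketch of the key idea, conditioning each time block only on its temporal neighbours rather than on every block at once; it is not the two-fold FCS algorithm itself, and scikit-learn's IterativeImputer stands in for an FCS imputation engine on simulated single-variable data.

```python
# Sketch: impute each time block using only the adjacent blocks as predictors,
# the "local in time" idea behind the two-fold FCS algorithm (simplified).
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(2)
n, n_blocks = 500, 10
# One column per time block of a single continuous variable, correlated over time
values = np.cumsum(rng.normal(size=(n, n_blocks)), axis=1)
data = pd.DataFrame(values, columns=[f"t{b}" for b in range(n_blocks)])
data = data.mask(rng.random(data.shape) < 0.3)   # ~30% missing completely at random

imputed = data.copy()
for b in range(n_blocks):
    window = [f"t{j}" for j in range(max(0, b - 1), min(n_blocks, b + 2))]
    block = IterativeImputer(random_state=0).fit_transform(data[window])
    imputed[f"t{b}"] = block[:, window.index(f"t{b}")]

print(imputed.isna().sum().sum(), "missing values remain after local imputation")
```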
6.
Dylan S. Small, Marshall M. Joffe, Kevin G. Lynch, Jason A. Roy, A. Russell Localio. Statistics in Medicine, 2014, 33(20):3421-3433
Tom Ten Have made many contributions to causal inference and biostatistics before his untimely death. This paper reviews Tom's contributions and discusses potential related future research directions. We focus on Tom's contributions to longitudinal/repeated measures categorical data analysis and particularly his contributions to causal inference. Tom's work on causal inference was primarily in the areas of estimating the effect of receiving treatment in randomized trials with nonadherence and mediation analysis. A related area that he was working on at the time of his death was posttreatment effect modification, with applications to designing adaptive treatment strategies. Copyright © 2012 John Wiley & Sons, Ltd.
7.
In this paper, we present a formal treatment of non-homogeneous Markov chains by introducing a hierarchical Bayesian framework. Our work is motivated by the analysis of correlated categorical data which arise in the assessment of psychiatric treatment programs. In our development, we introduce a Markovian structure to describe the non-homogeneity of transition patterns. In doing so, we introduce a logistic regression set-up for Markov chains and incorporate covariates in our model. We present a Bayesian model using Markov chain Monte Carlo methods and develop inference procedures to address issues encountered in the analyses of data from psychiatric treatment programs. Our model and inference procedures are applied to real data from a psychiatric treatment study.
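A minimal frequentist sketch of the logistic-regression set-up for a two-state non-homogeneous Markov chain, where the transition probability depends on the previous state, a covariate, and time; the paper fits the analogous model in a hierarchical Bayesian framework via MCMC, which is not reproduced here.

```python
# Sketch: logistic regression for the transition probabilities of a two-state
# chain, with a time term capturing non-homogeneity (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_subj, n_times = 200, 6
rows = []
for i in range(n_subj):
    x = rng.normal()
    state = rng.binomial(1, 0.5)
    for t in range(1, n_times):
        logit = -0.5 + 1.2 * state + 0.8 * x + 0.3 * t   # non-homogeneous in t
        new_state = rng.binomial(1, 1 / (1 + np.exp(-logit)))
        rows.append({"prev_state": state, "x": x, "time": t, "state": new_state})
        state = new_state
dat = pd.DataFrame(rows)

# Transition model: P(state_t = 1 | state_{t-1}, covariate, time)
fit = smf.logit("state ~ prev_state + x + time", dat).fit(disp=0)
print(fit.params)
```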
8.
Derivative estimation for longitudinal data analysis: Examining features of blood pressure measured repeatedly during pregnancy
Andrew J. Simpkin, Maria Durban, Debbie A. Lawlor, Corrie MacDonald-Wallis, Margaret T. May, Chris Metcalfe, Kate Tilling. Statistics in Medicine, 2018, 37(19):2836-2854
Estimating velocity and acceleration trajectories allows novel inferences in longitudinal data analysis, such as estimating change regions rather than change points and testing group effects on nonlinear change in an outcome (i.e., a nonlinear interaction). In this article, we develop derivative estimation for two standard approaches, polynomial mixed models and spline mixed models, and compare their performance with an established method, principal component analysis through conditional expectation, in a simulation study. We then apply the methods to repeated blood pressure (BP) measurements in a UK cohort of pregnant women, where the goals of the analysis are to (i) identify and estimate regions of BP change for each individual and (ii) investigate the association between parity and BP change at the population level. The penalized spline mixed model had the lowest bias in our simulation study, and we identified evidence for BP change regions in over 75% of pregnant women. Using the mean velocity difference revealed differences in BP change between women in their first pregnancy and those who had at least one previous pregnancy. We recommend the use of penalized spline mixed models for derivative estimation in longitudinal data analysis.
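A sketch of derivative estimation for a single individual's trajectory on simulated blood pressure data; scipy's smoothing spline stands in for the penalized spline mixed model recommended in the paper, and the velocity threshold for flagging a "change region" is arbitrary.

```python
# Sketch: fit a smoothing spline to one woman's repeated BP measurements and
# read off velocity (1st derivative) and acceleration (2nd derivative).
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(4)
weeks = np.linspace(8, 40, 15)                      # gestational age at visits
bp = 70 + 0.002 * (weeks - 25) ** 3 + rng.normal(scale=1.0, size=weeks.size)

spline = UnivariateSpline(weeks, bp, k=4, s=len(weeks))   # s controls smoothing
velocity = spline.derivative(n=1)
acceleration = spline.derivative(n=2)

grid = np.linspace(weeks.min(), weeks.max(), 200)
print(f"BP velocity at 30 weeks: {velocity(30.0):.2f} mmHg/week")
print(f"BP acceleration at 30 weeks: {acceleration(30.0):.3f} mmHg/week^2")
# Crude "change region" flag: where the estimated velocity is clearly non-zero
print(f"Share of gestation where |velocity| > 0.5: "
      f"{np.mean(np.abs(velocity(grid)) > 0.5):.0%}")
```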
9.
Michele Santacatterina, Celia García-Pareja, Rino Bellocco, Anders Sönnerborg, Anna Mia Ekström, Matteo Bottai. Statistics in Medicine, 2019, 38(10):1891-1902
Marginal structural Cox models have been used to estimate the causal effect of a time-varying treatment on a survival outcome in the presence of time-dependent confounders. These methods rely on the positivity assumption, which states that the propensity scores are bounded away from zero and one. Practical violations of this assumption are common in longitudinal studies, resulting in extreme weights that may yield erroneous inferences. Truncation, which consists of replacing outlying weights with less extreme ones, is the most common approach to control for extreme weights to date. While truncation reduces the variability in the weights and the consequent sampling variability of the estimator, it can also introduce bias. Instead of truncated weights, we propose using optimal probability weights, defined as those that have a specified variance and the smallest Euclidean distance from the original, untruncated weights. The set of optimal weights is obtained by solving a constrained quadratic optimization problem. The proposed weights are evaluated in a simulation study and applied to the assessment of the effect of treatment on time to death among people in Sweden who live with human immunodeficiency virus and inject drugs.
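A sketch of the optimization described in the abstract: among weight vectors with a specified variance, find the one closest in Euclidean distance to the original, untruncated inverse-probability weights. A general-purpose solver is used here; the exact constraint set and solution algorithm of the paper may differ, and the target variance is an arbitrary choice for illustration.

```python
# Sketch: constrained quadratic program for "optimal probability weights"
# (minimize distance to the original weights subject to a fixed variance).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
ps = rng.beta(2, 5, size=200)          # propensity scores, some near zero
w0 = 1.0 / ps                          # original weights; a few are extreme
target_var = 4.0                       # chosen target variance (illustrative)

def distance(w):
    return np.sum((w - w0) ** 2)

# Start from a simple shrinkage of w0 that already satisfies the variance constraint
shrink = np.sqrt(target_var / np.var(w0))
x0 = np.mean(w0) + shrink * (w0 - np.mean(w0))

res = minimize(distance, x0=x0, method="SLSQP",
               constraints=[{"type": "eq", "fun": lambda w: np.var(w) - target_var}],
               bounds=[(1e-6, None)] * len(w0))

print(f"variance of original weights: {np.var(w0):.1f}")
print(f"variance of optimal weights:  {np.var(res.x):.1f} (converged: {res.success})")
```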
10.
We propose to perform a sensitivity analysis to evaluate the extent to which results from a longitudinal study can be affected by informative drop-outs. The method is based on a selection model in which the parameter relating the drop-out probability to the current observation is not estimated but fixed to a set of values. This allows evaluation of several hypotheses about the degree of informativeness of the drop-out process. The expectation and variance of the missing data, conditional on the drop-out time, are computed, and a stochastic EM algorithm is used to obtain maximum likelihood estimates. Simulations show that when the drop-out parameter is correctly specified, unbiased estimates of the other parameters are obtained, and coverage percentages of their confidence intervals are close to their theoretical values. More interestingly, misspecification of the drop-out parameter does not considerably alter these results. The method was applied to a randomized clinical trial designed to demonstrate non-inferiority of an inhaled corticosteroid, in terms of bone density, compared with a reference treatment. The sensitivity analysis showed that the conclusion of non-inferiority was robust against different hypotheses for the drop-out process.
11.
Thomas Permutt. Statistics in Medicine, 2016, 35(17):2865-2875
The National Research Council Panel on Handling Missing Data in Clinical Trials recommended that protocols for clinical trials 'explicitly define... causal estimands of primary interest'. In discussions with sponsors of clinical trials since the publication of the National Research Council report, the expression causal estimands has been a source of confusion. It may not be entirely clear what the National Research Council panel meant, and in any case, it has not been clear how this recommendation might be put into practice. This paper's purpose is to say how the working group understands it and how we think it should be put into practice. We classify possible choices of estimand according to their usefulness for regulatory purposes in various clinical settings. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
12.
Longitudinal data sets from certain fields of biomedical research often consist of several variables repeatedly measured on each subject, yielding a large number of observations. This characteristic complicates the use of traditional longitudinal modelling strategies, which were primarily developed for studies with a relatively small number of repeated measures per subject. An innovative way to model such 'wide' data is to apply functional regression analysis, an emerging statistical approach in which observations of the same subject are viewed as a sample from a functional space. Shen and Faraway introduced an F test for linear models with functional responses. This paper illustrates how to apply this F test and functional regression analysis to the setting of longitudinal data. A smoking cessation study for methadone-maintained tobacco smokers is analysed for demonstration. In estimating the treatment effects, the functional regression analysis provides meaningful clinical interpretations, and the functional F test provides consistent results supported by a mixed-effects linear regression model. A simulation study is also conducted under the conditions of the smoking data to investigate the statistical power of the F test, Wilks' likelihood ratio test, and the linear mixed-effects model using AIC.
13.
It is common practice to analyze complex longitudinal data using nonlinear mixed-effects (NLME) models with a normality assumption. NLME models with normal distributions provide the most popular framework for modeling continuous longitudinal outcomes, assuming individuals come from a homogeneous population and relying on random effects to accommodate inter-individual variation. However, two issues may stand out: (i) the normality assumption for model errors may cause a lack of robustness and subsequently lead to invalid inference and unreasonable estimates, particularly if the data exhibit skewness, and (ii) a homogeneous population assumption may be unrealistic, obscuring important features of between-subject and within-subject variation, which may result in unreliable modeling results. There have been relatively few studies concerning longitudinal data with both heterogeneity and skewness. In the last two decades, skew distributions have proven beneficial in dealing with asymmetric data in various applications. In this article, our objective is to address the simultaneous impact of both features arising in longitudinal data by developing a flexible finite mixture of NLME models with skew distributions under a Bayesian framework that allows estimation of both model parameters and class membership probabilities. Simulation studies are conducted to assess the performance of the proposed models and methods, and a real example from an AIDS clinical trial illustrates the methodology by modeling viral dynamics to compare potential models with different distribution specifications; the analysis results are reported. Copyright © 2014 John Wiley & Sons, Ltd.
14.
Sample size calculations with multiplicity adjustment for longitudinal clinical trials with missing data
K. Lu. Statistics in Medicine, 2012, 31(1):19-28
Missing data are ubiquitous in longitudinal clinical trials, and the impact on power has been extensively assessed in the literature. Multiple doses of the investigational product and multiple efficacy endpoints are often studied in randomized clinical trials and multiplicity adjustment needs to be considered in the sample size calculations. In this paper, I show how to perform sample size calculations with multiplicity adjustment for longitudinal clinical trials with missing data by converting longitudinal data with missing data to cross-sectional data without missing data. The proposed approach can drastically simplify the simulation work and facilitate the evaluation of power for various scenarios.
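A back-of-the-envelope sketch of the general logic: treat the design as an effective cross-sectional comparison and apply a Bonferroni-adjusted two-sample formula, inflating for incomplete data. The paper's conversion of longitudinal data with missingness to cross-sectional data is more refined than this; all numerical inputs below are illustrative assumptions.

```python
# Sketch: Bonferroni-adjusted sample size for a converted cross-sectional
# contrast, inflated for an assumed retention rate (illustrative values only).
import math
from scipy.stats import norm

delta = 0.4        # assumed treatment effect on the converted endpoint
sigma = 1.0        # assumed SD of the converted cross-sectional outcome
alpha = 0.05
power = 0.90
n_tests = 3        # e.g. multiple doses and/or multiple endpoints
retention = 0.80   # assumed fraction of subjects contributing to the endpoint

alpha_adj = alpha / n_tests                                 # Bonferroni adjustment
z_a = norm.ppf(1 - alpha_adj / 2)
z_b = norm.ppf(power)
n_complete = 2 * (z_a + z_b) ** 2 * sigma**2 / delta**2     # per arm, complete data
n_per_arm = math.ceil(n_complete / retention)               # inflate for missingness

print(f"adjusted alpha per test: {alpha_adj:.4f}")
print(f"required sample size per arm: {n_per_arm}")
```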
15.
Functional data are increasingly collected in public health and medical studies to better understand many complex diseases. Besides the functional data, other clinical measures are often collected repeatedly. Investigating the association between these longitudinal data and time to a survival event is of great interest to these studies. In this article, we develop a functional joint model (FJM) to account for functional predictors in both the longitudinal and survival submodels in the joint modeling framework. The parameters of the FJM are estimated in a maximum likelihood framework via an expectation-maximization algorithm. The proposed FJM provides a flexible framework to incorporate many features both in joint modeling of longitudinal and survival data and in functional data analysis. The FJM is evaluated by a simulation study and is applied to the Alzheimer's Disease Neuroimaging Initiative study, a motivating clinical study testing whether serial brain imaging, clinical, and neuropsychological assessments can be combined to measure the progression of Alzheimer's disease. Copyright © 2017 John Wiley & Sons, Ltd.
16.
For unbiased estimation of controlled direct effects (i.e., direct effects with the intermediates set to a fixed level for all members of the population), two fundamental assumptions must hold: the absence of unmeasured confounding of the treatment-outcome relationship and of the intermediate-outcome relationship. Even if these assumptions hold, standard methods such as stratification or regression modeling will nonetheless fail to estimate direct effects when the treatment influences the confounding factors. For such situations, the sequential g-estimation method for structural nested mean models has been developed for estimating controlled direct effects in point-treatment settings. In this study, we demonstrate that this method can be applied to longitudinal data with time-varying treatments and repeatedly measured intermediate variables. We sequentially estimate the parameters in two structural nested mean models: one for a repeatedly measured intermediate and one for the direct effects of a time-varying treatment. The method is applied to data from a large primary prevention trial for coronary events, in which pravastatin was used to lower cholesterol levels in patients with moderate hypercholesterolemia. Copyright © 2014 John Wiley & Sons, Ltd.
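A point-treatment sketch of the sequential g-estimation logic on simulated data: first estimate the intermediate's effect on the outcome (adjusting for the treatment-induced confounder), "blip down" the outcome by removing that effect, then regress the adjusted outcome on treatment. The paper's extension to time-varying treatments and repeatedly measured intermediates is not reproduced here.

```python
# Sketch: two-step sequential g-estimation for a controlled direct effect when
# the treatment influences a confounder of the mediator-outcome relationship.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 5000
dat = pd.DataFrame({"treat": rng.binomial(1, 0.5, size=n)})
dat["confounder"] = 0.5 * dat["treat"] + rng.normal(size=n)    # affected by treatment
dat["mediator"] = 0.8 * dat["treat"] + 0.6 * dat["confounder"] + rng.normal(size=n)
dat["outcome"] = (1.0 * dat["treat"] + 0.7 * dat["mediator"]
                  + 0.5 * dat["confounder"] + rng.normal(size=n))

# Step 1: estimate the mediator's effect, adjusting for treatment and the
# treatment-induced confounder.
step1 = smf.ols("outcome ~ treat + mediator + confounder", dat).fit()
gamma = step1.params["mediator"]

# Step 2: remove the mediator effect from the outcome, then regress on treatment.
dat["outcome_adj"] = dat["outcome"] - gamma * dat["mediator"]
step2 = smf.ols("outcome_adj ~ treat", dat).fit()
# True controlled direct effect = 1.0 + 0.5 * 0.5 = 1.25 (includes the path
# through the treatment-induced confounder, which is not via the mediator).
print(f"controlled direct effect estimate: {step2.params['treat']:.3f} (truth: 1.25)")
```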
17.
Sources of data for a longitudinal birth cohort
This paper outlines the variety of data sources which can be utilised in a longitudinal study. Although a longitudinal study could be carried out using just one type of data, greater depth and accuracy can be achieved by including a variety of different sources of information.
18.
Patient noncompliance complicates the analysis of many randomized trials seeking to evaluate the effect of surgical intervention as compared with a nonsurgical treatment. If selection for treatment depends on intermediate patient characteristics or outcomes, then 'as-treated' analyses may be biased for the estimation of causal effects. Therefore, the selection mechanism for treatment and/or compliance should be carefully considered when conducting analysis of surgical trials. We compare the performance of alternative methods when endogenous processes lead to patient crossover. We adopt an underlying longitudinal structural mixed model that is a natural example of a structural nested model. Likelihood-based methods are not typically used in this context; however, we show that standard linear mixed models will be valid under selection mechanisms that depend only on past covariate and outcome history. If there are underlying patient characteristics that influence selection, then likelihood methods can be extended via maximization of the joint likelihood of exposure and outcomes. Semi-parametric causal estimation methods such as marginal structural models, g-estimation, and instrumental variable approaches can also be valid, and we both review and evaluate their implementation in this setting. The assumptions required for valid estimation vary across approaches; thus, the choice of methods for analysis should be driven by which outcome and selection assumptions are plausible.
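A minimal sketch of one of the reviewed approaches, an instrumental-variable analysis that uses randomization as the instrument for treatment actually received, done as two-stage least squares by hand on simulated data. The naive second-stage standard errors are not valid; a 2SLS-aware routine or the delta method would be needed for inference.

```python
# Sketch: randomization as an instrument for surgery received when crossover
# depends on unmeasured frailty (simulated trial with noncompliance).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 4000
dat = pd.DataFrame({"randomized": rng.binomial(1, 0.5, size=n)})
frailty = rng.normal(size=n)                              # unmeasured severity
crossover_prob = 1 / (1 + np.exp(-(frailty - 1 + 2 * dat["randomized"])))
dat["surgery"] = rng.binomial(1, crossover_prob)          # treatment received
dat["outcome"] = 1.5 * dat["surgery"] - 1.0 * frailty + rng.normal(size=n)

# Stage 1: predict treatment received from randomization
stage1 = smf.ols("surgery ~ randomized", dat).fit()
dat["surgery_hat"] = stage1.fittedvalues

# Stage 2: regress the outcome on the predicted treatment
stage2 = smf.ols("outcome ~ surgery_hat", dat).fit()
print(f"IV estimate of the effect of surgery: {stage2.params['surgery_hat']:.2f} (truth: 1.5)")

# The as-treated comparison is confounded by frailty
naive = smf.ols("outcome ~ surgery", dat).fit()
print(f"Naive as-treated estimate: {naive.params['surgery']:.2f}")
```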
19.
Leila R. Zelnick, Jonathan S. Schildcrout, Patrick J. Heagerty. Statistics in Medicine, 2018, 37(13):2120-2133
The use of outcome-dependent sampling with longitudinal data analysis has previously been shown to improve efficiency in the estimation of regression parameters. The motivating scenario is when outcome data exist for all cohort members but key exposure variables will be gathered only on a subset. Inference with outcome-dependent sampling designs that also incorporates incomplete information from those individuals who did not have their exposure ascertained has been investigated for univariate but not longitudinal outcomes. Therefore, with a continuous longitudinal outcome, we explore the relative contributions of various sources of information toward the estimation of key regression parameters using a likelihood framework. We evaluate the efficiency gains that alternative estimators might offer over random sampling, and we offer insight into their relative merits in select practical scenarios. Finally, we illustrate the potential impact of design and analysis choices using data from the Cystic Fibrosis Foundation Patient Registry.
20.
Xu Shi, Robert Wellman, Patrick J. Heagerty, Jennifer C. Nelson, Andrea J. Cook. Statistics in Medicine, 2020, 39(4):369-386
We consider the critical problem of pharmacosurveillance for adverse events once a drug or medical product is incorporated into routine clinical care. When making inferences about comparative safety using large-scale electronic health records, we often encounter an extremely rare binary adverse outcome with a large number of potential confounders. In this context, it is challenging to offer flexible methods to adjust for high-dimensional confounders, whereas use of the propensity score (PS) can help address this challenge by providing both confounding control and dimension reduction. Among PS methods, regression adjustment using the PS as a covariate in an outcome model has been incompletely studied and potentially misused. Previous studies have suggested that simple linear adjustment may not provide sufficient control of confounding. Moreover, no formal representation of the statistical procedure and associated inference has been detailed. In this paper, we characterize a three-step procedure that performs flexible regression adjustment for the estimated PS followed by standardization to estimate the causal effect in a select population. We also propose a simple variance estimation method for performing inference. Through a realistic simulation mimicking data from the Food and Drug Administration's Sentinel Initiative comparing the effect of angiotensin-converting enzyme inhibitors and beta blockers on the incidence of angioedema, we show that flexible regression on the PS resulted in less bias without loss of efficiency and can outperform other methods when the PS model is correctly specified. In addition, the direct variance estimation method is a computationally fast and reliable approach for inference.
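A sketch of the three-step procedure as described in the abstract, on simulated data with a rare binary outcome: (1) estimate the propensity score, (2) fit a flexible outcome regression on treatment and a spline of the estimated PS, (3) standardize the predictions over the target population. The paper's proposed variance estimator is not reproduced; a bootstrap over all three steps would be a simple alternative for inference.

```python
# Sketch: flexible regression adjustment on the estimated PS followed by
# standardization, for a rare binary adverse event (simulated data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 20000
dat = pd.DataFrame(rng.normal(size=(n, 3)), columns=["x1", "x2", "x3"])
lin = 0.8 * dat["x1"] - 0.5 * dat["x2"]
dat["treat"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))
risk = 1 / (1 + np.exp(-(-5 + 0.7 * dat["treat"] + lin + 0.4 * dat["x3"])))
dat["event"] = rng.binomial(1, risk)                 # rare adverse outcome

# Step 1: propensity score model
ps_fit = smf.logit("treat ~ x1 + x2 + x3", dat).fit(disp=0)
dat["ps"] = ps_fit.predict(dat)

# Step 2: flexible (spline) regression adjustment on the estimated PS
out_fit = smf.glm("event ~ treat + bs(ps, df=4)", dat,
                  family=sm.families.Binomial()).fit()

# Step 3: standardize over the target population (here, the whole cohort)
risk1 = out_fit.predict(dat.assign(treat=1)).mean()
risk0 = out_fit.predict(dat.assign(treat=0)).mean()
print(f"standardized risk difference: {risk1 - risk0:.5f}")
print(f"standardized risk ratio:      {risk1 / risk0:.2f}")
```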