Similar Documents (20 results)
1.
In longitudinal studies with potentially nonignorable drop-out, one can assess the likely effect of the nonignorability in a sensitivity analysis. Troxel et al. proposed a general index of sensitivity to nonignorability, or ISNI, to measure the sensitivity of key inferences in a neighbourhood of the ignorable, missing at random (MAR) model. They derived detailed formulas for ISNI in the special case of the generalized linear model with a potentially missing univariate outcome. In this paper, we extend the method to longitudinal modelling. We use a multivariate normal model for the outcomes and a regression model for the drop-out process, allowing missingness probabilities to depend on an unobserved response. The computation is straightforward and merely involves estimating a mixed-effects model and a selection model for the drop-out, together with some simple arithmetic calculations. We illustrate the method with three examples.
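As a reference for what the "simple arithmetic" amounts to, the usual local-sensitivity approximation behind ISNI can be written compactly. This is a generic sketch of the standard formulation, not the paper's exact notation: let l(θ, γ0, γ1) be the joint log-likelihood of the outcome model (parameters θ) and the drop-out model, where γ1 links drop-out to the unobserved response and γ1 = 0 corresponds to MAR. Then
\[
\mathrm{ISNI}
  \;=\; \left.\frac{\partial \hat{\theta}(\gamma_1)}{\partial \gamma_1}\right|_{\gamma_1=0}
  \;\approx\; -\Bigl[\nabla^2_{\theta\theta}\, l(\hat{\theta}_0,\hat{\gamma}_0,0)\Bigr]^{-1}
              \nabla^2_{\theta\gamma_1}\, l(\hat{\theta}_0,\hat{\gamma}_0,0),
\]
where (θ̂0, γ̂0) are the estimates under the ignorable (MAR) fit. An ISNI that is large relative to the standard error of θ̂0 flags inferences that are sensitive to small departures from ignorability.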

2.
Xie H. Statistics in Medicine 2008;27(16):3155-3177.
Longitudinal non-Gaussian data subject to potentially non-ignorable dropout is a challenging problem. Frequently an analysis has to rely on some strong but unverifiable assumptions, among which ignorability is a key one. Sensitivity analysis has been advocated to assess the likely effect of alternative assumptions about the dropout mechanism on such an analysis. Previously, Ma et al. applied a general index of local sensitivity to non-ignorability (ISNI) to measure the sensitivity of missing at random (MAR) estimates to small departures from ignorability for multivariate normal outcomes. In this paper, we extend the ISNI methodology to handle longitudinal non-Gaussian data subject to non-ignorable dropout. Specifically, we propose to quantify the sensitivity of inferences in the neighborhood of an MAR generalized linear mixed model for longitudinal data. Through a simulation study, we evaluate the performance of the proposed methodology. We then illustrate the methodology with a real example: smoking-cessation data.

3.
In longitudinal studies, subjects may be lost to follow-up and thus present incomplete response sequences. When the mechanism underlying the dropout is nonignorable, we need to account for dependence between the longitudinal and the dropout processes. We propose to model such dependence through discrete latent effects, which are outcome-specific and account for heterogeneity in the univariate profiles. Dependence between profiles is introduced by using a probability matrix to describe the corresponding joint distribution. In this way, we separately model dependence within each outcome and dependence between outcomes. The major feature of this proposal, when compared with standard finite mixture models, is that it allows the nonignorable dropout model to properly nest its ignorable counterpart. We also discuss the use of an index of (local) sensitivity to nonignorability to investigate the effects that assumptions about the dropout process may have on model parameter estimates. The proposal is illustrated via the analysis of data from a longitudinal study on the dynamics of cognitive functioning in the elderly.

4.
The use of random-effects models for the analysis of longitudinal data with missing responses has been discussed by several authors. In this paper, we extend the non-linear random-effects model for a single response to the case of multiple responses, allowing for arbitrary patterns of observed and missing data. Parameters for this model are estimated via the EM algorithm and by the first-order approximation available in SAS Proc NLMIXED. The set of equations for this estimation procedure is derived and appropriately modified to deal with missing data. The methodology is illustrated with an example using data from a study of 161 pregnant women presenting to a private obstetrics clinic in Santiago, Chile.

5.
We propose to perform a sensitivity analysis to evaluate the extent to which results from a longitudinal study can be affected by informative drop-outs. The method is based on a selection model, in which the parameter relating the drop-out probability to the current observation is not estimated but fixed to a set of values. This allows one to evaluate several hypotheses about the degree of informativeness of the drop-out process. The expectation and variance of the missing data, conditional on the drop-out time, are computed, and a stochastic EM algorithm is used to obtain maximum likelihood estimates. Simulations show that when the drop-out parameter is correctly specified, unbiased estimates of the other parameters are obtained, and coverage percentages of their confidence intervals are close to the nominal level. More interestingly, misspecification of the drop-out parameter does not considerably alter these results. The method was applied to a randomized clinical trial designed to demonstrate non-inferiority of an inhaled corticosteroid, in terms of bone density, compared with a reference treatment. The sensitivity analysis showed that the conclusion of non-inferiority was robust against different hypotheses for the drop-out process.

6.
We propose a semiparametric marginal modeling approach for longitudinal analysis of cohorts with data missing due to death and non-response, to estimate regression parameters that are interpreted as conditional on being alive. Our proposed method accommodates outcomes and time-dependent covariates that are missing not at random, with non-monotone missingness patterns, via inverse-probability weighting. Missing covariates are replaced by consistent estimates derived from a simultaneously solved inverse-probability-weighted estimating equation. Thus, we utilize data points with observed outcomes and missing covariates beyond the estimated weights, while avoiding numerical methods to integrate over missing covariates. The approach is applied to a cohort of elderly female hip fracture patients to estimate the prevalence of walking disability over time as a function of body composition, inflammation, and age. Copyright © 2010 John Wiley & Sons, Ltd.

7.
Substance abuse treatment research is complicated by the pervasive problem of non-ignorable missing data, that is, missingness whose occurrence is related to the unobserved outcomes. Missing data frequently arise due to early client departure from treatment. Pattern-mixture models (PMMs) are often employed in such situations to jointly model the outcome and the missing data mechanism. PMMs require non-testable assumptions to identify model parameters. Several approaches to parameter identification have therefore been explored for longitudinal modeling of continuous outcomes, and informative priors have been developed in other contexts. In this paper, we describe an expert interview conducted with five substance abuse treatment clinical experts who are familiar with the therapeutic community modality of substance abuse treatment and with treatment process scores collected using the Dimensions of Change Instrument. The goal of the interviews was to obtain expert opinion about the rate of change in continuous client-level treatment process scores for clients who leave before completing two assessments and whose rate of change (slope) in treatment process scores is unidentified by the data. We find that the experts' opinions differed dramatically from widely utilized assumptions used to identify parameters in the PMM. Further, subjective prior assessment allows one to properly address the uncertainty inherent in the subjective decisions required to identify parameters in the PMM and to measure their effect on conclusions drawn from the analysis. Copyright © 2008 John Wiley & Sons, Ltd.

8.
The use of outcome-dependent sampling in longitudinal data analysis has previously been shown to improve efficiency in the estimation of regression parameters. The motivating scenario is when outcome data exist for all cohort members but key exposure variables will be gathered only on a subset. Inference under outcome-dependent sampling designs that also incorporates incomplete information from individuals whose exposure was not ascertained has been investigated for univariate, but not longitudinal, outcomes. Therefore, with a continuous longitudinal outcome, we explore the relative contributions of various sources of information toward the estimation of key regression parameters using a likelihood framework. We evaluate the efficiency gains that alternative estimators might offer over random sampling, and we offer insight into their relative merits in select practical scenarios. Finally, we illustrate the potential impact of design and analysis choices using data from the Cystic Fibrosis Foundation Patient Registry.

9.
Missing data are common in longitudinal studies due to drop-out, loss to follow-up, and death. Likelihood-based mixed effects models for longitudinal data give valid estimates when the data are missing at random (MAR). This assumption, however, is not testable without further information. In some studies, additional information is available in the form of an auxiliary variable known to be correlated with the missing outcome of interest. Availability of such auxiliary information provides an opportunity to test the MAR assumption. If the MAR assumption is violated, such information can be utilized to reduce or eliminate bias when the missing data process depends on the unobserved outcome through the auxiliary information. We compare two methods of utilizing the auxiliary information: joint modeling of the outcome of interest and the auxiliary variable, and multiple imputation (MI). Simulation studies are performed to examine the two methods. The likelihood-based joint modeling approach is consistent and most efficient when correctly specified. However, mis-specification of the joint distribution can lead to biased results. MI is slightly less efficient than a correct joint modeling approach and can also be biased when the imputation model is mis-specified, though it is more robust to mis-specification of the imputation distribution when all the variables affecting the missing data mechanism and the missing outcome are included in the imputation model. An example is presented from a dementia screening study. Copyright © 2009 John Wiley & Sons, Ltd.
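To make the MI route concrete, here is a minimal sketch of imputing with an auxiliary variable and pooling with Rubin's rules. It is an illustration only, not this paper's implementation: the simulated data, the column names (y for the incomplete outcome, aux for the auxiliary variable, x for the analysis covariate), and the use of scikit-learn's IterativeImputer are all assumptions.

```python
# Sketch: MI with an auxiliary variable (assumed toy data, not the paper's code).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)

# Hypothetical data: the outcome y is partly missing; aux is correlated with y.
n = 500
x = rng.normal(size=n)
aux = rng.normal(size=n)
y = 1.0 + 0.5 * x + 0.8 * aux + rng.normal(size=n)
y_obs = y.copy()
y_obs[rng.random(n) < 0.3] = np.nan          # roughly 30% missing outcomes
df = pd.DataFrame({"y": y_obs, "x": x, "aux": aux})

M = 20
betas, variances = [], []
for m in range(M):
    # The auxiliary variable enters the imputation model; sample_posterior=True
    # makes the M completed data sets differ, as proper MI requires.
    imp = IterativeImputer(sample_posterior=True, random_state=m)
    completed = pd.DataFrame(imp.fit_transform(df), columns=df.columns)

    # Analysis model: y on x only; aux is used for imputation, not analysis.
    fit = sm.OLS(completed["y"], sm.add_constant(completed["x"])).fit()
    betas.append(fit.params["x"])
    variances.append(fit.bse["x"] ** 2)

# Rubin's rules: total variance = within + (1 + 1/M) * between.
qbar = float(np.mean(betas))
W, B = float(np.mean(variances)), float(np.var(betas, ddof=1))
T = W + (1 + 1 / M) * B
print(f"pooled slope {qbar:.3f}, pooled SE {np.sqrt(T):.3f}")
```

The point the sketch mirrors is that the auxiliary variable appears in the imputation model but not in the analysis model.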

10.
Specific age-related hypotheses are tested in population-based longitudinal studies. At specific time intervals, both the outcomes of interest and the time-varying covariates are measured. When participants are approached for follow-up, some do not provide data. Investigations may show that many have died before the time of follow-up, whereas others refused to participate. Some of these non-participants do not provide data at later follow-ups. Few statistical methods for missing data distinguish between 'non-participation' and 'death' among study participants. Augmented inverse probability-weighted estimators are most commonly used in marginal structural models when data are missing at random. Treating non-participation and death as the same, however, may lead to biased estimates and invalid inferences. To overcome this limitation, a multiple inverse probability-weighted approach is presented to account for the two types of missing data, non-participation and death, when using a marginal mean model. Under certain conditions, the multiple weighted estimators are consistent and asymptotically normal. Simulation studies are used to examine the finite-sample efficiency of the multiple weighted estimators. The proposed method is applied to study risk factors associated with cognitive decline among aging adults, using data from the Chicago Health and Aging Project (CHAP). Copyright © 2010 John Wiley & Sons, Ltd.
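A bare-bones numerical sketch of the "multiple weight" idea follows: one model for surviving, a second for participating given alive, and their product inverted as the weight for responders. The single-time-point setting, the toy data, and the variable names (age, alive, participates, y) are assumptions for illustration; this is not the CHAP analysis or the paper's augmented estimator, and it glosses over the estimand subtleties around death.

```python
# Sketch: two inverse-probability weights (death, non-participation) multiplied
# into one weight for observed responses; assumed toy data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
age = rng.normal(75, 6, n)
y_full = 50 - 0.4 * (age - 75) + rng.normal(0, 5, n)   # score if it were measured

# Two distinct missing-data processes: death, then non-participation among the alive.
p_alive = 1 / (1 + np.exp(-(3.0 - 0.05 * (age - 75))))
alive = rng.random(n) < p_alive
p_part = 1 / (1 + np.exp(-(1.0 - 0.03 * (age - 75))))
participates = alive & (rng.random(n) < p_part)

df = pd.DataFrame({"age": age,
                   "alive": alive.astype(int),
                   "participates": participates.astype(int),
                   "y": y_full})
df.loc[df["participates"] == 0, "y"] = np.nan

# Model 1: probability of being alive; Model 2: participation among the alive.
X = sm.add_constant(df["age"])
pi_alive = sm.Logit(df["alive"], X).fit(disp=0).predict(X)
alive_df = df[df["alive"] == 1]
Xa = sm.add_constant(alive_df["age"])
pi_part = sm.Logit(alive_df["participates"], Xa).fit(disp=0).predict(Xa)

# Multiple weight = 1 / (P(alive) * P(participate | alive)), applied to responders.
obs = alive_df[alive_df["participates"] == 1]
w = 1.0 / (pi_alive.loc[obs.index] * pi_part.loc[obs.index])
ipw_mean = np.average(obs["y"], weights=w)
print(f"complete-case mean {obs['y'].mean():.2f} vs multiply weighted mean {ipw_mean:.2f}")
```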

11.
The missing data problem is common in longitudinal or hierarchically structured studies. In this paper, we propose a correlated random-effects model to fit normal longitudinal or cluster data when the missingness mechanism is nonignorable. Computational challenges arise in the model fitting due to intractable numerical integrations. We obtain the estimates of the parameters based on an accurate approximation of the log likelihood, which has higher-order accuracy but less computational burden than the existing approximation. We apply the proposed method to a real data set arising from an autism study. Copyright © 2009 John Wiley & Sons, Ltd.

12.
High-dimensional longitudinal data involving latent variables, such as depression and anxiety, that cannot be quantified directly are often encountered in biomedical and social sciences. Multiple responses are used to characterize these latent quantities, and repeated measures are collected to capture their trends over time. Furthermore, substantive research questions may concern issues, such as interrelated trends among latent variables, that can only be addressed by modeling them jointly. Although statistical analysis of univariate longitudinal data is well developed, methods for modeling multivariate high-dimensional longitudinal data are still under development. In this paper, we propose a latent factor linear mixed model (LFLMM) for analyzing this type of data. The model combines factor analysis and multivariate linear mixed models: the high-dimensional responses are reduced to low-dimensional latent factors by the factor analysis model, and the multivariate linear mixed model is then used to study the longitudinal trends of these latent factors. We develop an expectation-maximization algorithm to estimate the model. Simulation studies are used to investigate the computational properties of the expectation-maximization algorithm and to compare the LFLMM with other approaches for high-dimensional longitudinal data analysis. A real data example illustrates the practical usefulness of the model. Copyright © 2013 John Wiley & Sons, Ltd.
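The flavor of the model can be conveyed with a deliberately simplified two-stage stand-in: extract low-dimensional factor scores from the multivariate responses, then fit a linear mixed model to the scores over time. This is not the authors' joint EM estimation of the LFLMM (two-stage estimation ignores uncertainty in the factor scores), and the data layout and column names are invented for illustration.

```python
# Two-stage stand-in for the LFLMM idea (not the paper's joint EM estimator):
# 1) factor-analyze the high-dimensional responses, 2) mixed model on the scores.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n_subj, n_visits, n_items = 100, 4, 12

rows = []
for i in range(n_subj):
    b_i = rng.normal(0, 0.5)                                 # subject random intercept
    for t in range(n_visits):
        latent = b_i + 0.3 * t + rng.normal(0, 0.3)          # one latent trait over time
        items = 0.8 * latent + rng.normal(0, 1.0, n_items)   # 12 observed indicators
        rows.append([i, t, *items])
df = pd.DataFrame(rows, columns=["id", "time", *[f"item{j}" for j in range(n_items)]])

# Stage 1: reduce the 12 items to a single factor score per subject-visit.
fa = FactorAnalysis(n_components=1, random_state=0)
df["factor"] = fa.fit_transform(df[[f"item{j}" for j in range(n_items)]])[:, 0]

# Stage 2: linear mixed model for the factor score with a random intercept per subject.
lmm = smf.mixedlm("factor ~ time", df, groups=df["id"]).fit()
print(lmm.summary())
```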

13.
Functional data are increasingly collected in public health and medical studies to better understand many complex diseases. Besides the functional data, other clinical measures are often collected repeatedly. Investigating the association between these longitudinal data and time to a survival event is of great interest to these studies. In this article, we develop a functional joint model (FJM) to account for functional predictors in both the longitudinal and survival submodels of the joint modeling framework. The parameters of the FJM are estimated in a maximum likelihood framework via the expectation-maximization algorithm. The proposed FJM provides a flexible framework to incorporate many features both in joint modeling of longitudinal and survival data and in functional data analysis. The FJM is evaluated by a simulation study and is applied to the Alzheimer's Disease Neuroimaging Initiative study, a motivating clinical study testing whether serial brain imaging, clinical, and neuropsychological assessments can be combined to measure the progression of Alzheimer's disease. Copyright © 2017 John Wiley & Sons, Ltd.

14.
The missingness mechanism is, in theory, unverifiable from observed data alone. If missingness not at random is suspected, researchers often perform a sensitivity analysis to evaluate the impact of various missingness mechanisms. In general, sensitivity analysis approaches require a full specification of the relationship between missing values and missingness probabilities. Such a relationship can be specified based on a selection model, a pattern-mixture model or a shared parameter model. Under the selection modeling framework, we propose a sensitivity analysis approach using a nonparametric multiple imputation strategy. The proposed approach only requires specifying the correlation coefficient between missing values and selection (response) probabilities under a selection model. The correlation coefficient is a standardized measure and can be used as a natural sensitivity analysis parameter. The sensitivity analysis involves multiple imputations of missing values, yet the sensitivity parameter is only used to select imputing/donor sets. Hence, the proposed approach may be more robust against misspecification of the sensitivity parameter. For illustration, the proposed approach is applied to incomplete measurements of preoperative hemoglobin A1c levels for patients who had high-grade carotid artery stenosis and were scheduled for surgery. A simulation study is conducted to evaluate the performance of the proposed approach.

15.
Zhu L, Sun J, Tong X, Pounds S. Statistics in Medicine 2011;30(12):1429-1440.
Longitudinal data analysis is one of the most discussed and applied areas in statistics, and a great deal of literature has been developed for it. However, most of the existing literature focuses on the situation where observation times are fixed or can be treated as fixed constants. This paper considers the situation where these observation times may be random variables and, more importantly, may be related to the underlying longitudinal variable or process of interest. Furthermore, covariate effects may be time-varying. For the analysis, a joint modeling approach is proposed and, in particular, an estimating equation-based procedure is developed for estimation of the time-varying regression parameters. Both asymptotic and finite sample properties of the proposed estimates are established. The methodology is applied to an acute myeloid leukemia trial that motivated this study.

16.
Causal inference with observational longitudinal data and time-varying exposures is complicated by the potential for time-dependent confounding and unmeasured confounding. Most causal inference methods that handle time-dependent confounding rely on either the assumption of no unmeasured confounders or the availability of an unconfounded variable that is associated with the exposure (e.g., an instrumental variable). Furthermore, when data are incomplete, the validity of many methods often depends on the assumption of missing at random. We propose an approach that combines a parametric joint mixed-effects model for the study outcome and the exposure with g-computation to identify and estimate causal effects in the presence of time-dependent confounding and unmeasured confounding. G-computation can estimate participant-specific or population-average causal effects using parameters of the joint model. The joint model is a type of shared parameter model where the outcome and exposure-selection models share common random effect(s). We also extend the joint model to handle missing data and truncation by death when missingness is possibly not at random. We evaluate the performance of the proposed method using simulation studies and compare the method to both linear mixed- and fixed-effects models combined with g-computation, as well as to targeted maximum likelihood estimation. We apply the method to an epidemiologic study of vitamin D and depressive symptoms in older adults and include code using SAS PROC NLMIXED software to enhance the accessibility of the method to applied researchers.
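To illustrate the g-computation step in isolation, the sketch below applies the parametric g-formula to a single time point: fit an outcome model, then average predictions with the exposure set to fixed values for everyone. It deliberately ignores the time-varying and joint-modeling aspects of the proposed method (and its SAS PROC NLMIXED implementation); the variable names and data are hypothetical.

```python
# Single-time-point parametric g-formula sketch (the paper's method extends this
# to time-varying exposures via a joint mixed-effects model; data here are invented).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 3000
confounder = rng.normal(size=n)                                          # e.g., baseline health
exposure = (rng.random(n) < 1 / (1 + np.exp(-confounder))).astype(int)   # confounded exposure
outcome = 2.0 - 1.0 * exposure + 1.5 * confounder + rng.normal(size=n)   # depressive score
df = pd.DataFrame({"y": outcome, "a": exposure, "l": confounder})

# Step 1: fit the outcome model including the confounder.
fit = smf.ols("y ~ a + l", data=df).fit()

# Step 2: standardize -- predict for everyone under a=1 and a=0, then average.
mean_treated = fit.predict(df.assign(a=1)).mean()
mean_untreated = fit.predict(df.assign(a=0)).mean()
print(f"g-formula average causal effect: {mean_treated - mean_untreated:.3f}")
```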

17.
The objective of this study was to develop a robust non-linear mixed model for prostate-specific antigen (PSA) measurements after a high-intensity focused ultrasound (HIFU) treatment for prostate cancer. The characteristics of these data are the presence of outlying values and non-normal random effects. A numerical study showed that parameter estimates can be biased if these characteristics are not taken into account. The intra-patient variability was described by a Student-t distribution, and Dirichlet process priors were assumed for the non-normal random effects; this limited the bias and provided more efficient parameter estimates than a classical mixed model with normal residuals and random effects. The model was applied to the determination of the best dynamic PSA criterion for the diagnosis of prostate cancer recurrence, but it could also be used in studies that rely on PSA data to improve prognosis or compare treatment efficacy, and with other longitudinal biomarkers that, like PSA, present outlying values and non-normal random effects. Copyright © 2010 John Wiley & Sons, Ltd.

18.
Missing data are a common issue in cost-effectiveness analysis (CEA) alongside randomised trials and are often addressed by assuming the data are 'missing at random'. However, this assumption is often questionable, and sensitivity analyses are required to assess the implications of departures from missing at random. Reference-based multiple imputation provides an attractive approach for conducting such sensitivity analyses, because missing data assumptions are framed in an intuitive way by making reference to other trial arms. For example, a plausible missing-not-at-random mechanism in a placebo-controlled trial would be to assume that participants in the experimental arm who dropped out stopped taking their treatment and have outcomes similar to those in the placebo arm. Drawing on the increasing use of this approach in other areas, this paper aims to extend and illustrate the reference-based multiple imputation approach in CEA. It introduces the principles of reference-based imputation and proposes an extension to the CEA context. The method is illustrated in the CEA of the CoBalT trial, which evaluated cognitive behavioural therapy for treatment-resistant depression. Stata code is provided. We find that reference-based multiple imputation provides a relevant and accessible framework for assessing the robustness of CEA conclusions to different missing data assumptions.
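A stripped-down version of the reference-based idea, reduced to a single follow-up measurement, is sketched below: the imputation model for dropouts in the active arm is built from the reference (control) arm, so the assumption "dropouts behave like controls" is encoded directly. This is not the trial's Stata implementation or a full CEA; the arm labels, column names, and single-visit structure are assumptions.

```python
# Reference-based ("jump to reference"-style) imputation, reduced to a single
# follow-up outcome for illustration; toy data, not the CoBalT analysis.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n_per_arm = 300
base = rng.normal(20, 5, 2 * n_per_arm)                       # baseline severity
arm = np.repeat(["control", "active"], n_per_arm)
follow = base - 2.0 - 4.0 * (arm == "active") + rng.normal(0, 4, 2 * n_per_arm)
df = pd.DataFrame({"arm": arm, "base": base, "follow": follow})
df.loc[(df["arm"] == "active") & (rng.random(2 * n_per_arm) < 0.25), "follow"] = np.nan

# Imputation model fitted on the REFERENCE (control) arm only.
ref = df[df["arm"] == "control"]
ref_fit = sm.OLS(ref["follow"], sm.add_constant(ref["base"])).fit()
sigma = np.sqrt(ref_fit.scale)

M, effects = 20, []
missing = df["follow"].isna()
for m in range(M):
    completed = df.copy()
    Xmis = sm.add_constant(completed.loc[missing, "base"], has_constant="add")
    # Impute active-arm dropouts as if they followed the control-arm model.
    completed.loc[missing, "follow"] = ref_fit.predict(Xmis) + rng.normal(0, sigma, missing.sum())
    treated = completed.loc[completed["arm"] == "active", "follow"].mean()
    control = completed.loc[completed["arm"] == "control", "follow"].mean()
    effects.append(treated - control)
print(f"reference-based imputed treatment effect: {np.mean(effects):.2f}")
```

A full implementation would also draw the imputation-model parameters from their posterior (proper MI) and combine results with Rubin's rules; the sketch omits that to stay short.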

19.
An outcome-dependent sampling (ODS) scheme is a cost-effective design in which one observes the exposure with a probability that depends on the outcome. Well-known examples are the case-control design for a binary response, the case-cohort design for failure-time data, and the general ODS design for a continuous response. While substantial work has been carried out for the univariate response case, statistical inference and design for ODS with multivariate responses remain under-developed. Motivated by the need in biological studies to take advantage of the available responses for subjects in a cluster, we propose a multivariate outcome-dependent sampling (multivariate-ODS) design that is based on a general selection of the continuous responses within a cluster. The proposed inference procedure for the multivariate-ODS design is semiparametric: all the underlying distributions of covariates are modeled nonparametrically using empirical likelihood methods. We show that the proposed estimator is consistent and establish its asymptotic normality. Simulation studies show that the proposed estimator is more efficient than the estimator obtained using only the simple-random-sample portion of the multivariate-ODS or the estimator from a simple random sample with the same sample size. The multivariate-ODS design, together with the proposed estimator, provides an approach to further improve study efficiency for a given fixed study budget. We illustrate the proposed design and estimator with an analysis of the association between polychlorinated biphenyl exposure and hearing loss in children from the Collaborative Perinatal Study. Copyright © 2016 John Wiley & Sons, Ltd.

20.
Pattern-mixture models provide a general and flexible framework for sensitivity analyses of nonignorable missing data. The placebo-based pattern-mixture model (Little and Yau, Biometrics 1996; 52:1324-1333) treats missing data in a transparent and clinically interpretable manner and has been used as a sensitivity analysis for monotone missing data in longitudinal studies. The standard multiple imputation approach (Rubin, Multiple Imputation for Nonresponse in Surveys, 1987) is often used to implement the placebo-based pattern-mixture model. We show that Rubin's variance estimate for the multiple imputation estimator of the treatment effect can be overly conservative in this setting. As an alternative to multiple imputation, we derive an analytic expression of the treatment effect for the placebo-based pattern-mixture model and propose a posterior simulation or delta method for inference about the treatment effect. Simulation studies demonstrate that the proposed methods provide consistent variance estimates and outperform the imputation methods in terms of power for the placebo-based pattern-mixture model. We illustrate the methods using data from a clinical study of major depressive disorders. Copyright © 2013 John Wiley & Sons, Ltd.
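For context, the variance estimate being critiqued is Rubin's standard combining rule. In standard MI notation (not the paper's), with M imputations producing point estimates Q̂_m and within-imputation variances U_m:
\[
\bar{Q} = \frac{1}{M}\sum_{m=1}^{M}\hat{Q}_m,\qquad
\bar{U} = \frac{1}{M}\sum_{m=1}^{M}U_m,\qquad
B = \frac{1}{M-1}\sum_{m=1}^{M}\bigl(\hat{Q}_m-\bar{Q}\bigr)^2,\qquad
T = \bar{U} + \Bigl(1+\frac{1}{M}\Bigr)B .
\]
The paper's argument is that, for the placebo-based pattern-mixture model, T can overstate the sampling variance of the treatment-effect estimator, which motivates the analytic expression and the posterior-simulation or delta-method inference proposed instead.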
