Similar Literature
20 similar documents found (search time: 15 ms)
1.
Background and Objectives: With the development of sophisticated techniques such as multiple imputation, interest in handling missing data in longitudinal studies has grown enormously in recent years. Within the field of longitudinal data analysis, there is an ongoing debate on whether it is necessary to use multiple imputation before performing a mixed-model analysis of longitudinal data. The current study evaluates this necessity. Study Design and Setting: The results of mixed-model analyses with and without multiple imputation were compared. Four data sets with missing values were created: one with values missing completely at random, two with values missing at random, and one with values missing not at random. In all data sets, the relationship between a continuous outcome variable and two covariates was analyzed: a time-independent dichotomous covariate and a time-dependent continuous covariate. Results: Although the results of the mixed-model analysis with and without multiple imputation differed slightly for all types of missing data, the differences did not favor either approach. In addition, repeating the multiple imputation 100 times showed that the results of the mixed-model analysis with multiple imputation were quite unstable. Conclusion: It is not necessary to handle missing data by multiple imputation before performing a mixed-model analysis on longitudinal data.
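The three missingness mechanisms the study contrasts (MCAR, MAR, MNAR) differ only in what the probability of being missing is allowed to depend on. As a minimal stdlib-Python sketch (not the authors' simulation code; the probabilities 0.3/0.5/0.1 are arbitrary illustration values), one can generate all three mechanisms for an outcome `y` with a covariate `x`:

```python
import random

random.seed(42)

def simulate_missingness(y, x, mechanism):
    """Return a copy of y with entries set to None under a given mechanism.

    MCAR: missingness is independent of everything.
    MAR:  missingness depends only on the observed covariate x.
    MNAR: missingness depends on the (possibly unseen) value of y itself.
    """
    out = []
    for yi, xi in zip(y, x):
        if mechanism == "MCAR":
            p = 0.3
        elif mechanism == "MAR":
            p = 0.5 if xi > 0 else 0.1
        elif mechanism == "MNAR":
            p = 0.5 if yi > 0 else 0.1
        else:
            raise ValueError(mechanism)
        out.append(None if random.random() < p else yi)
    return out

# Outcome correlated with the covariate, then punched full of holes:
x = [random.gauss(0, 1) for _ in range(1000)]
y = [xi + random.gauss(0, 1) for xi in x]
for mech in ("MCAR", "MAR", "MNAR"):
    ym = simulate_missingness(y, x, mech)
    print(mech, sum(v is None for v in ym))
```

Under MNAR, no observed-data quantity identifies the missingness model, which is why the abstract treats it as a separate scenario.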

2.
Review: a gentle introduction to imputation of missing values
In most situations, simple techniques for handling missing data (such as complete case analysis, overall mean imputation, and the missing-indicator method) produce biased results, whereas imputation techniques yield valid results without complicating the analysis once the imputations are carried out. Imputation techniques are based on the idea that any subject in a study sample can be replaced by a new, randomly chosen subject from the same source population. Imputing missing data on a variable means replacing the missing value with a value drawn from an estimate of the distribution of that variable. In single imputation, only one estimate is used; in multiple imputation, several estimates are used, reflecting the uncertainty in the estimation of this distribution. Under the general conditions of so-called missing at random and missing completely at random, both single and multiple imputation result in unbiased estimates of study associations. However, single imputation yields estimated standard errors that are too small, whereas multiple imputation yields correctly estimated standard errors and confidence intervals. In this article we explain why this is the case and use a simple simulation study to demonstrate our explanations. We also explain and illustrate why two frequently used methods of handling missing data, overall mean imputation and the missing-indicator method, almost always result in biased estimates.
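The standard-error contrast the review draws comes down to Rubin's rules for pooling multiple-imputation analyses: the total variance adds a between-imputation component that single imputation ignores. A minimal stdlib sketch (the numbers are made-up illustration values, not the review's data):

```python
import statistics

def pool_rubin(estimates, variances):
    """Combine m per-imputation estimates via Rubin's rules.

    Returns the pooled point estimate and its total variance
    T = W + (1 + 1/m) * B, where W is the mean within-imputation
    variance and B is the between-imputation variance.
    """
    m = len(estimates)
    qbar = statistics.fmean(estimates)          # pooled point estimate
    w = statistics.fmean(variances)             # within-imputation variance
    b = statistics.variance(estimates)          # between-imputation variance
    t = w + (1 + 1 / m) * b
    return qbar, t

# Five analyses of the same parameter, one per imputed data set:
est = [1.02, 0.97, 1.05, 0.99, 1.01]
var = [0.04, 0.05, 0.04, 0.05, 0.04]
qbar, t = pool_rubin(est, var)
```

Because `t` exceeds the average within-imputation variance `w`, the pooled standard error is larger than any single completed-data analysis would report, which is exactly why single imputation understates uncertainty.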

3.

Objective

The Mini-Mental State Examination (MMSE) is used to estimate current cognitive status and as a screen for possible dementia. Missing item-level data are commonly reported, so careful handling of missing data is particularly important. However, there are concerns that common procedures for dealing with missing data, such as listwise deletion and mean item substitution, are inadequate.

Study Design and Setting

We used multiple imputation (MI) to estimate missing MMSE data in 17,303 participants who were drawn from the Dynamic Analyses to Optimize Aging project, a harmonization project of nine Australian longitudinal studies of aging.

Results

Our results indicated differences in mean MMSE scores between participants with and without missing data, a pattern consistent across age and gender levels. MI inflated MMSE scores, but differences between the imputed participants and those without missing data remained. A simulation model supported the efficacy of MI for estimating missing item-level data, although serious decrements in estimation occurred when 50% or more of the item-level data were missing, particularly for the oldest participants.

Conclusions

Our adaptation of MI to obtain a probable estimate for missing MMSE item-level data provides a suitable method when the proportion of missing item-level data is not excessive.

4.
Long Q, Zhang X, Hsu CH. Statistics in Medicine 2011;30(26):3149–3161
The receiver operating characteristic (ROC) curve is a widely used tool for evaluating the discriminative and diagnostic power of a biomarker. When the biomarker value is missing for some observations, ROC analysis based solely on complete cases loses efficiency because of the reduced sample size and, more importantly, is subject to potential bias. In this paper, we investigate nonparametric multiple imputation methods for ROC analysis when some biomarker values are missing at random and there are auxiliary variables that are fully observed and predictive of the biomarker values and/or their missingness. Although a direct application of standard nonparametric imputation is robust to model misspecification, its finite-sample performance suffers from the curse of dimensionality as the number of auxiliary variables increases. To address this problem, we propose new nonparametric imputation methods that achieve dimension reduction through the use of one or two working models, namely, models for prediction and propensity scores. The proposed imputation methods provide a platform for a full range of ROC analyses and hence are more flexible than existing methods that primarily focus on estimating the area under the ROC curve. We conduct simulation studies to evaluate the finite-sample performance of the proposed methods and find that they are robust to various types of model misspecification and outperform the standard nonparametric approach even when the number of auxiliary variables is moderate. We further illustrate the proposed methods using an observational study of maternal depression during pregnancy.
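The dimension-reduction idea here is that many auxiliary variables can be collapsed into two working scores, after which donors for a missing biomarker are chosen by proximity in that two-dimensional space. A hedged stdlib sketch of a nearest-neighbor hot deck driven by a prediction score and a missingness-propensity score (the function name and score inputs are illustrative, not the paper's actual estimator):

```python
import random

random.seed(3)

def hot_deck_by_scores(values, pred_score, prop_score, k=5):
    """Impute None entries by donating from the k observed cases whose
    (prediction score, propensity score) pair is closest in Euclidean
    distance — two working models standing in for many auxiliaries."""
    donors = [(p, q, v) for p, q, v in zip(pred_score, prop_score, values)
              if v is not None]
    out = []
    for p, q, v in zip(pred_score, prop_score, values):
        if v is not None:
            out.append(v)
            continue
        nearest = sorted(donors,
                         key=lambda d: (d[0] - p) ** 2 + (d[1] - q) ** 2)[:k]
        out.append(random.choice(nearest)[2])   # random draw => MI-compatible
    return out

vals = [1.0, 2.0, 3.0, None]
pred = [0.1, 0.5, 0.9, 0.9]
prop = [0.1, 0.5, 0.9, 0.9]
filled = hot_deck_by_scores(vals, pred, prop, k=1)
```

Drawing the donor at random (rather than averaging) keeps the imputation proper, so the procedure can be repeated to produce multiple imputations.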

5.
When missing data occur in one or more covariates in a regression model, multiple imputation (MI) is widely advocated as an improvement over complete‐case analysis (CC). We use theoretical arguments and simulation studies to compare these methods with MI implemented under a missing at random assumption. When data are missing completely at random, both methods have negligible bias, and MI is more efficient than CC across a wide range of scenarios. For other missing data mechanisms, bias arises in one or both methods. In our simulation setting, CC is biased towards the null when data are missing at random. However, when missingness is independent of the outcome given the covariates, CC has negligible bias and MI is biased away from the null. With more general missing data mechanisms, bias tends to be smaller for MI than for CC. Since MI is not always better than CC for missing covariate problems, the choice of method should take into account what is known about the missing data mechanism in a particular substantive application. Importantly, the choice of method should not be based on comparison of standard errors. We propose new ways to understand empirical differences between MI and CC, which may provide insights into the appropriateness of the assumptions underlying each method, and we propose a new index for assessing the likely gain in precision from MI: the fraction of incomplete cases among the observed values of a covariate (FICO). Copyright © 2010 John Wiley & Sons, Ltd.
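The FICO index is defined directly in the abstract: among the cases where a given covariate is observed, the fraction that are nevertheless incomplete (missing on some other analysis variable). A minimal sketch of that computation, assuming `None` marks missing values (the helper name is ours, not the paper's):

```python
def fico(covariate, complete_case_mask):
    """Fraction of incomplete cases among the observed values of a covariate.

    covariate          : list with None marking missing entries
    complete_case_mask : True for rows complete on all analysis variables
    """
    observed = [complete for v, complete in zip(covariate, complete_case_mask)
                if v is not None]
    return sum(1 for c in observed if not c) / len(observed)

x = [1.0, 2.0, None, 4.0, 5.0]      # covariate whose FICO we want
z = [0.5, None, 0.7, 0.8, None]     # another covariate in the model
complete = [a is not None and b is not None for a, b in zip(x, z)]
print(fico(x, complete))
```

A high FICO suggests MI can recover substantial information from cases that CC would discard; a FICO near zero suggests little precision is to be gained.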

6.
Objective: To assess the added value of multiple imputation (MI) of missing repeated outcome measures in longitudinal data sets analyzed with linear mixed-effects (LME) models. Study Design and Setting: Data were used from a trial on the effects of rosuvastatin on the rate of change in carotid intima-media thickness (CIMT). The reference treatment effect was derived from a complete data set. Scenarios and proportions of missing values in the CIMT measurements were applied, and LME analyses were used before and after MI. The added value of MI, in terms of bias and precision, was assessed using the mean squared error (MSE) of the treatment effects and coverage of the 95% confidence interval. Results: The reference treatment effect was −0.0177 mm/y. The MSEs for LME analysis without and with MI were similar in scenarios with up to 40% missing values. Coverage was large in all scenarios and was similar for LME with and without MI. Conclusion: Our study empirically shows that MI of missing end-point data before LME analysis does not increase the precision of the estimated rate of change in the end point. Hence, MI had no added value in this setting, and standard LME modeling remains the method of choice.

7.
The problem of missing data is frequently encountered in observational studies. We compared approaches to dealing with missing data. Three multiple imputation methods were compared with a method of enhancing a clinical database through merging with administrative data. The clinical database used for comparison contained information collected from 6,065 cardiac care patients in 1995 in the province of Alberta, Canada. The effectiveness of the different strategies was evaluated using measures of discrimination and goodness of fit for the 1995 data. The strategies were further evaluated by examining how well the models predicted outcomes in data collected from patients in 1996. In general, the different methods produced similar results, with one of the multiple imputation methods demonstrating a slight advantage. It is concluded that the choice of missing data strategy should be guided by statistical expertise and data resources.

8.
Propensity scores have been used widely as a bias reduction method to estimate the treatment effect in nonrandomized studies. Since many covariates are generally included in the model for estimating the propensity scores, the proportion of subjects with at least one missing covariate could be large. While many methods have been proposed for propensity score‐based estimation in the presence of missing covariates, little has been published comparing the performance of these methods. In this article we propose a novel method called multiple imputation missingness pattern (MIMP) and compare it with the naive estimator (ignoring propensity score) and three commonly used methods of handling missing covariates in propensity score‐based estimation (separate estimation of propensity scores within each pattern of missing data, multiple imputation and discarding missing data) under different mechanisms of missing data and degree of correlation among covariates. Simulation shows that all adjusted estimators are much less biased than the naive estimator. Under certain conditions MIMP provides benefits (smaller bias and mean‐squared error) compared with existing alternatives. Copyright © 2009 John Wiley & Sons, Ltd.

9.
Missing data due to loss to follow-up or intercurrent events are unintended, but unfortunately inevitable in clinical trials. Since the true values of missing data are never known, it is necessary to assess the impact of untestable and unavoidable assumptions about any unobserved data in sensitivity analysis. This tutorial provides an overview of controlled multiple imputation (MI) techniques and a practical guide to their use for sensitivity analysis of trials with missing continuous outcome data. These include δ-based and reference-based MI procedures. In δ-based imputation, an offset term, δ, is typically added to the expected value of the missing data to assess the impact of unobserved participants having a worse or better response than those observed. Reference-based imputation draws imputed values with some reference to observed data in other groups of the trial, typically in other treatment arms. We illustrate the accessibility of these methods using data from a pediatric eczema trial and a chronic headache trial and provide Stata code to facilitate adoption. We discuss issues surrounding the choice of δ in δ-based sensitivity analysis. We also review the debate on variance estimation within reference-based analysis and justify the use of Rubin's variance estimator in this setting, since, as we elaborate further in the tutorial, it provides information-anchored inference.
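The δ-based idea is mechanically simple: impute under MAR, then shift the imputed values by an offset δ and watch how the treatment-arm summary moves as δ is varied. A hedged stdlib sketch, assuming a higher-is-better continuous outcome and normal draws (the tutorial's own implementation is in Stata; this is only the shape of the procedure):

```python
import random

random.seed(1)

def delta_adjusted_imputations(observed, n_missing, delta, m=20):
    """Draw m imputation sets for n_missing dropouts from the observed
    mean/SD, shifting every draw by delta (delta=0 reproduces MAR;
    delta<0 assumes dropouts fare worse than completers)."""
    mu = sum(observed) / len(observed)
    sd = (sum((v - mu) ** 2 for v in observed) / (len(observed) - 1)) ** 0.5
    return [[random.gauss(mu, sd) + delta for _ in range(n_missing)]
            for _ in range(m)]

obs = [random.gauss(10, 2) for _ in range(200)]   # completers
completed_means = []
for delta in (0.0, -1.0, -2.0):
    imps = delta_adjusted_imputations(obs, 50, delta)
    means = [(sum(obs) + sum(imp)) / (len(obs) + len(imp)) for imp in imps]
    completed_means.append(sum(means) / len(means))
```

Plotting the pooled estimate against δ shows the "tipping point" at which the trial conclusion would change, which is the usual way such a sensitivity analysis is reported.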

10.
We explore several approaches for imputing partially observed covariates when the outcome of interest is a censored event time and when there is an underlying subset of the population that will never experience the event of interest. We call these subjects ‘cured’, and we consider the case where the data are modeled using a Cox proportional hazards (CPH) mixture cure model. We study covariate imputation approaches using fully conditional specification. We derive the exact conditional distribution and suggest a sampling scheme for imputing partially observed covariates in the CPH cure model setting. We also propose several approximations to the exact distribution that are simpler and more convenient to use for imputation. A simulation study demonstrates that the proposed imputation approaches outperform existing imputation approaches for survival data without a cure fraction in terms of bias in estimating CPH cure model parameters. We apply our multiple imputation techniques to a study of patients with head and neck cancer. Copyright © 2016 John Wiley & Sons, Ltd.

11.

The availability of data in the healthcare domain provides great opportunities for the discovery of new or hidden patterns in medical data, which can eventually lead to improved clinical decision making. Predictive models play a crucial role in extracting this unknown information from data. However, medical data often contain missing values that can degrade the performance of predictive models. Autoencoder models have been widely used as non-linear functions for the imputation of missing data in fields such as computer vision, transportation, and finance. In this study, we assess the shortcomings of autoencoder models for data imputation and propose modified models to improve imputation performance. To evaluate them, we compare the performance of the proposed model with five well-known imputation techniques on six medical datasets and five classification methods. Through extensive experiments, we demonstrate that the proposed non-linear imputation model outperforms the other models at all missing-data ratios and leads to the highest disease classification accuracy for all datasets.


12.
BACKGROUND AND OBJECTIVE: The International Germ Cell Consensus (IGCC) classification defines good, intermediate, and poor prognosis groups among patients with nonseminomatous germ cell cancer. In the database used to develop the IGCC classification (n = 5,202), >40% of patients were excluded because of missing values (n = 2,154). We looked for effects of this exclusion on survival estimates in the three IGCC prognosis groups. STUDY DESIGN AND SETTING: We imputed missing values using a multiple imputation procedure. The IGCC classification was applied to patients with complete data (n = 3,048) and with imputed data (n = 2,154), and 5-year survival was calculated for each prognosis group. RESULTS: Patients with missing values had a lower 5-year survival than those without missing values: 76% vs. 82%. Five-year survival in the complete and imputed data samples was 92% and 87% for the good prognosis groups and 80% and 70% for the intermediate prognosis groups, whereas 5-year survival for the poor prognosis groups in both samples was similar (50% and 47%, respectively). This difference in survival was largely explained by a higher proportion of missing values among patients treated before 1985, who had a worse survival than patients treated after 1985. CONCLUSION: Multiple imputation of the missing values led to lower survival estimates across the IGCC prognosis groups, compared with estimates based on the complete data. Although imputation of missing values gives statistically better survival estimates, adjustments for year of treatment are necessary to make the estimates applicable to currently diagnosed patients with testicular cancer.

13.
14.
Multiple imputation is commonly used to impute missing data, and is typically more efficient than complete cases analysis in regression analysis when covariates have missing values. Imputation may be performed using a regression model for the incomplete covariates on other covariates and, importantly, on the outcome. With a survival outcome, it is a common practice to use the event indicator D and the log of the observed event or censoring time T in the imputation model, but the rationale is not clear. We assume that the survival outcome follows a proportional hazards model given covariates X and Z. We show that a suitable model for imputing binary or Normal X is a logistic or linear regression on the event indicator D, the cumulative baseline hazard H0(T), and the other covariates Z. This result is exact in the case of a single binary covariate; in other cases, it is approximately valid for small covariate effects and/or small cumulative incidence. If we do not know H0(T), we approximate it by the Nelson–Aalen estimator of H(T) or estimate it by Cox regression. We compare the methods using simulation studies. We find that using log T biases covariate‐outcome associations towards the null, while the new methods have lower bias. Overall, we recommend including the event indicator and the Nelson–Aalen estimator of H(T) in the imputation model. Copyright © 2009 John Wiley & Sons, Ltd.
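The quantity this abstract recommends as an imputation-model covariate, the Nelson–Aalen estimate of the cumulative hazard H(T), sums the ratio of events to the number at risk at each distinct event time. A minimal stdlib sketch that returns the estimate evaluated at each subject's own time, ready to enter an imputation regression (a generic implementation of the standard estimator, not the paper's code):

```python
def nelson_aalen(times, events):
    """Nelson-Aalen cumulative-hazard estimate H(t) at each subject's
    own observed time. events[i] is 1 for an event, 0 for censoring."""
    n = len(times)
    order = sorted(range(n), key=lambda i: times[i])
    h = {}                      # H(t) at each distinct time
    cum, at_risk = 0.0, n
    i = 0
    while i < n:
        t = times[order[i]]
        d, j = 0, i
        while j < n and times[order[j]] == t:   # handle tied times
            d += events[order[j]]
            j += 1
        cum += d / at_risk      # increment: events / number still at risk
        h[t] = cum
        at_risk -= (j - i)
        i = j
    return [h[t] for t in times]

na = nelson_aalen([1.0, 2.0, 3.0, 4.0], [1, 1, 0, 1])
```

Each subject's `na` value (together with D and Z) then serves as a predictor in the logistic or linear imputation model for the incomplete covariate X.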

15.
16.
In studies with repeated measures of blood pressure (BP), particularly in trials of hypertension prevention, BP measurements often become censored once a participant commences antihypertensive medication. When prescribed by non-study physicians under uncontrolled conditions, the missing data mechanism is non-ignorable and may bias the BP effects of interest. I propose a method that models the distribution of BPs measured by non-study physicians and their relation to study BPs using random effects models. If treated for hypertension, I assume that BP measured outside the study is greater than a clinical cutpoint, such as diastolic BP⩾90 mmHg. I then compute estimates for the missing study BPs conditional on previously observed study BPs and treatment for hypertension. Multiple imputation is used to model the variability of the BP values and adjust the standard error estimates of the parameters. Examples are given using simulated data and data from the weight loss intervention of phase I of the Trials of Hypertension Prevention. © 1997 John Wiley & Sons, Ltd.
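The constraint that a treated participant's outside-study BP exceeds a clinical cutpoint amounts to drawing imputations from a truncated distribution. As a hedged stdlib illustration (rejection sampling from a normal; the mean, SD, and cutpoint are illustration values, and the paper's actual method conditions on random-effects model predictions rather than fixed parameters):

```python
import random

random.seed(7)

def draw_bp_above_cutpoint(mu, sigma, cut=90.0):
    """Rejection-sample a diastolic BP from N(mu, sigma) conditional on
    BP >= cut, mimicking the constraint that treated participants'
    outside-study BP exceeded the clinical cutpoint."""
    while True:
        v = random.gauss(mu, sigma)
        if v >= cut:
            return v

# One imputation set of 500 censored measurements:
draws = [draw_bp_above_cutpoint(88.0, 8.0) for _ in range(500)]
```

Repeating such draws across imputations propagates the uncertainty about the censored values into the standard errors, as the abstract describes.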

17.
Multiple imputation (MI) is one of the most popular methods of dealing with missing data, and its use has been increasing rapidly in medical studies. Although MI is appealing in practice, since ordinary statistical methods can be applied to a complete data set once the missing values are fully imputed, the method of imputation remains problematic. If the missing values are imputed from a parametric model, the validity of the imputation is not necessarily ensured, and the final estimate for a parameter of interest can be biased unless the parametric model is correctly specified. Nonparametric methods have also been proposed for MI, but it is not straightforward to produce imputation values from nonparametrically estimated distributions. In this paper, we propose a new method for MI that yields a consistent (or asymptotically unbiased) final estimate even if the imputation model is misspecified. The key idea is to use an imputation model from which imputation values are easily produced and to make a proper correction in the likelihood function after imputation, using the density ratio between the imputation model and the true conditional density function for the missing variable as a weight. Although the conditional density must be estimated nonparametrically, it is not used for the imputation itself. The performance of our method is evaluated by both theory and simulation studies. A real data analysis is also conducted to illustrate our method, using the Duke Cardiac Catheterization Coronary Artery Disease Diagnostic Dataset.

18.
Multiple imputation of missing blood pressure covariates in survival analysis
This paper studies a non-response problem in survival analysis where the occurrence of missing data in the risk factor is related to mortality. In a study to determine the influence of blood pressure on survival in the very old (85+ years), blood pressure measurements are missing in about 12.5 per cent of the sample. The available data suggest that the process that created the missing data depends jointly on survival and the unknown blood pressure, thereby distorting the relation of interest. Multiple imputation is used to impute missing blood pressure and then analyse the data under a variety of non-response models. One special modelling problem is treated in detail: the construction of a predictive model for drawing imputations when the number of variables is large. Risk estimates for these data appear robust to even large departures from the simplest non-response model, and are similar to those derived under deletion of the incomplete records.

19.
20.
Objective: Missing data are an almost unavoidable problem in cohort studies. Through a simulation study, this paper compares the imputation performance of eight common missing-data methods on longitudinal missing data, to provide a useful reference for handling such data. Methods: The simulation study was implemented in R. Longitudinal missing data were generated by the Monte Carlo method, and the methods were evaluated by comparing the mean absolute bias and mean relative bias of each imputation method and the type I error of the subsequent regression analysis, assessing both imputation quality and the impact on downstream multivariable analysis. Results: Mean imputation, k-nearest-neighbor (KNN) imputation, regression imputation, and random forest performed similarly well and stably; multiple imputation and hot-deck imputation were inferior to these; K-means clustering and the EM algorithm performed worst and least stably. Mean imputation, the EM algorithm, random forest, KNN, and regression imputation controlled the type I error well, whereas multiple imputation, hot-deck imputation, and K-means clustering did not. Conclusion: For longitudinal missing data under a missing-at-random mechanism, mean imputation, KNN, regression imputation, and random forest are all good imputation choices; when the proportion of missing data is not too large, multiple imputation and hot-deck imputation also perform well; K-means clustering and the EM algorithm are not recommended.
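Of the methods this simulation compares, KNN imputation is easy to state concretely: each missing entry is filled with the average of that column over the k complete rows nearest on the jointly observed columns. A minimal stdlib sketch (mean-of-neighbors KNN on Euclidean distance; the study's R implementation may differ in distance and weighting):

```python
def knn_impute(rows, k=3):
    """Fill None entries in each row from the k nearest complete rows,
    comparing rows by mean squared difference on the observed columns."""
    complete = [r for r in rows if all(v is not None for v in r)]
    out = []
    for r in rows:
        if all(v is not None for v in r):
            out.append(list(r))
            continue
        def dist(c):
            d = [(v - w) ** 2 for v, w in zip(r, c) if v is not None]
            return sum(d) / len(d)
        nearest = sorted(complete, key=dist)[:k]
        out.append([v if v is not None else
                    sum(c[j] for c in nearest) / len(nearest)
                    for j, v in enumerate(r)])
    return out

rows = [[1.0, 2.0], [1.1, 2.1], [0.9, 1.9], [1.0, None]]
filled = knn_impute(rows, k=3)
```

Averaging the neighbors makes this a single-imputation method, which is consistent with the study's finding that it controls type I error but, unlike multiple imputation, it carries no between-imputation uncertainty.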
