Similar Documents
20 similar documents found
1.
In clinical research, it is often of interest to estimate the response rate (i.e. the proportion of subjects who achieve a clinically meaningful threshold) for a particular variable. The standard estimator of the response rate is generally biased in the presence of measurement error. Estimation accounting for the measurement error using fully nonparametric (NP) methods is complicated and may not be efficient. Therefore, we propose a model-based approach assuming a parametric model for the true value and only the first few moments for the measurement error. The estimator for the true response rate and the variance for the estimator are derived. An innovative method using bootstrap simulation is proposed to check the model assumption. Simulations show that the proposed estimator outperforms a fully NP estimator if the assumed parametric model for the true value holds. This method is applied to address a commonly occurring question in osteoporosis regarding response to treatment in terms of longitudinal changes in bone mineral density (BMD). Bootstrap simulations showed that the model utilized is appropriate. The proposed method can also be applied in other fields of clinical research.
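The bias the abstract describes is easy to reproduce: classical additive error overdisperses the observed values relative to the true ones, pulling a naive threshold-crossing proportion toward 0.5. Below is a minimal sketch (not the paper's estimator) assuming a normal true value and a known error variance; all parameter values are illustrative.

```python
import math
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
mu_x, sigma_x, sigma_u, cut = 1.0, 1.0, 0.8, 2.0

x = rng.normal(mu_x, sigma_x, n)        # true change from baseline
w = x + rng.normal(0.0, sigma_u, n)     # measured change (classical error)

# Naive response rate: proportion of *measured* values over the threshold
naive = np.mean(w > cut)

# Model-based rate: assume X is normal and recover its variance by
# subtracting the (assumed known) error variance from the observed one
s_x = math.sqrt(np.var(w) - sigma_u**2)
z = (cut - w.mean()) / s_x
corrected = 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

true_rate = 0.5 * (1.0 - math.erf((cut - mu_x) / (sigma_x * math.sqrt(2.0))))
print(round(naive, 3), round(corrected, 3), round(true_rate, 3))
```

With these settings the naive rate overshoots the true rate of about 0.16 by several percentage points, while the variance-corrected estimate recovers it.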

2.
We consider structural measurement error models for group testing data. Likelihood inference based on structural measurement error models requires one to specify a model for the latent true predictors. Inappropriate specification of this model can lead to erroneous inference. We propose a new method tailored to detect latent‐variable model misspecification in structural measurement error models for group testing data. Compared with the existing diagnostic methods developed for the same purpose, our method shows vast improvement in the power to detect latent‐variable model misspecification in group testing design. We illustrate the implementation and performance of the proposed method via simulation and application to a real data example. Copyright © 2009 John Wiley & Sons, Ltd.

3.
Identification of the latency period for the effect of a time-varying exposure is key when assessing many environmental, nutritional, and behavioral risk factors. A pre-specified exposure metric involving an unknown latency parameter is often used in the statistical model for the exposure-disease relationship. Likelihood-based methods have been developed to estimate this latency parameter for generalized linear models but do not exist for scenarios where the exposure is measured with error, as is usually the case. Here, we explore the performance of naive estimators for both the latency parameter and the regression coefficients, which ignore exposure measurement error, assuming a linear measurement error model. We prove that, in many scenarios under this general measurement error setting, the least squares estimator for the latency parameter remains consistent, while the regression coefficient estimates are inconsistent as has previously been found in standard measurement error models where the primary disease model does not involve a latency parameter. Conditions under which this result holds are generalized to a wide class of covariance structures and mean functions. The findings are illustrated in a study of body mass index in relation to physical activity in the Health Professionals Follow-Up Study.

4.
Estimating and testing interactions in a linear regression model when normally distributed explanatory variables are subject to classical measurement error is complex, since the interaction term is a product of two variables and involves errors of more complex structure. Our aim is to develop simple methods, based on the method of moments (MM) and regression calibration (RC), that yield consistent estimators of the regression coefficients and their standard errors when the model includes one or more interactions. In contrast to previous work using the structural equations model framework, our methods allow errors that are correlated with each other and can deal with measurements of relatively low reliability. Using simulations, we show that, under the normality assumptions, the RC method yields estimators with negligible bias and is superior to MM in both bias and variance. We also show that the RC method yields the correct type I error rate of the test of the interaction. However, when the true covariates are not normally distributed, we recommend using MM. We provide an example relating homocysteine to serum folate and B12 levels.
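For intuition about regression calibration, here is the simplest no-interaction case with a single covariate and a known error variance; the MM/RC machinery above extends this idea to products of error-prone covariates. Parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
beta0, beta1 = 1.0, 2.0
sigma_x, sigma_u = 1.0, 0.8

x = rng.normal(0.0, sigma_x, n)          # true covariate
w = x + rng.normal(0.0, sigma_u, n)      # error-prone measurement W = X + U
y = beta0 + beta1 * x + rng.normal(0.0, 0.5, n)

# Naive OLS of y on w: slope attenuated by sx^2 / (sx^2 + su^2)
naive = np.polyfit(w, y, 1)[0]

# Regression calibration: replace w by E[X | W] via moment estimates,
# treating the error variance su^2 as known (e.g. from replicates)
lam = (np.var(w) - sigma_u**2) / np.var(w)
x_hat = w.mean() + lam * (w - w.mean())
rc = np.polyfit(x_hat, y, 1)[0]
print(round(naive, 2), round(rc, 2))
```

The naive slope lands near the attenuated value 2/(1 + 0.64) ≈ 1.22, while the calibrated slope recovers the true coefficient of 2.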

5.
Guo Y, Little RJ. Statistics in Medicine 2011; 30(18): 2278-2294.
We consider the estimation of the regression of an outcome Y on a covariate X, where X is unobserved, but a variable W that measures X with error is observed. A calibration sample that measures pairs of values of X and W is also available; we consider calibration samples where Y is measured (internal calibration) and not measured (external calibration). One common approach for measurement error correction is regression calibration (RC), which substitutes the unknown values of X by predictions from the regression of X on W estimated from the calibration sample. An alternative approach is to multiply impute the missing values of X given Y and W based on an imputation model, and then use multiple imputation (MI) combining rules for inferences. Most current work assumes that the measurement error of W has a constant variance, whereas in many situations, the variance varies as a function of X. We consider extensions of the RC and MI methods that allow for heteroscedastic measurement error, and compare them by simulation. The MI method is shown to provide better inferences in this setting. We also illustrate the proposed methods using a data set from the BioCycle study.

6.
In clinical chemistry and medical research, there is often a need to calibrate the values obtained from an old or discontinued laboratory procedure to the values obtained from a new or currently used laboratory method. The objective of the calibration study is to identify a transformation that can be used to convert the test values of one laboratory measurement procedure into the values that would be obtained using another measurement procedure. However, in the presence of heteroscedastic measurement error, there is no good statistical method available for estimating the transformation. In this paper, we propose a set of statistical methods for a calibration study when the magnitude of the measurement error is proportional to the underlying true level. The corresponding sample size estimation method for conducting a calibration study is discussed as well. The proposed new method is theoretically justified and evaluated for its finite sample properties via an extensive numerical study. Two examples based on real data are used to illustrate the procedure. Copyright © 2014 John Wiley & Sons, Ltd.

7.
Wearable device technology allows continuous monitoring of biological markers and thereby enables study of time-dependent relationships. For example, in this paper, we are interested in the impact of daily energy expenditure over a period of time on subsequent progression toward obesity among children. Data from these devices appear as either sparsely or densely observed functional data and methods of functional regression are often used for their statistical analyses. We study the scalar-on-function regression model with imprecisely measured values of the predictor function. In this setting, we have a scalar-valued response and a function-valued covariate that are both collected at a single time period. We propose a generalized method of moments-based approach for estimation, while an instrumental variable belonging to the same time space as the imprecisely measured covariate is used for model identification. Additionally, no distributional assumptions regarding the measurement errors are assumed, while complex covariance structures are allowed for the measurement errors in the implementation of our proposed methods. We demonstrate that our proposed estimator is L2 consistent and enjoys the optimal rate of convergence for univariate nonparametric functions. In a simulation study, we illustrate that ignoring measurement error leads to biased estimates of the functional coefficient. The simulation studies also confirm our ability to consistently estimate the function-valued coefficient when compared to approaches that ignore potential measurement errors. Our proposed methods are applied to our motivating example to assess the impact of baseline levels of energy expenditure on body mass index among elementary school–aged children.

8.
Outliers, measurement error, and missing data are commonly seen in longitudinal data because of its data collection process. However, few existing methods address all three of these issues simultaneously. This paper focuses on the robust estimation of partially linear models for longitudinal data with dropouts and measurement error. A new robust estimating equation, simultaneously tackling outliers, measurement error, and missingness, is proposed. The asymptotic properties of the proposed estimator are established under some regularity conditions. The proposed method is easy to implement in practice by utilizing existing standard generalized estimating equations algorithms. Comprehensive simulation studies show the strength of the proposed method in dealing with longitudinal data with all three features. Finally, the proposed method is applied to data from the Lifestyle Education for Activity and Nutrition study and confirms the effectiveness of the intervention in producing weight loss at month 9. Copyright © 2016 John Wiley & Sons, Ltd.

9.
He W, Yi GY, Xiong J. Statistics in Medicine 2007; 26(26): 4817-4832.
It is well known that ignoring measurement error may result in substantially biased estimates in many contexts, including linear and nonlinear regressions. For survival data with measurement error in covariates, there has been extensive discussion in the literature, with the focus being on the Cox proportional hazards models. However, the impact of measurement error on accelerated failure time (AFT) models has received little attention, though AFT models are very useful in survival data analysis. In this paper, we discuss AFT models with error-prone covariates and study the bias induced by the naive approach of ignoring measurement error in covariates. To adjust for such a bias, we describe a simulation and extrapolation method. This method is appealing because it is simple to implement and it does not require modelling the true but error-prone covariate process that is often not observable. Asymptotic normality for the resulting estimators is established. Simulation studies are carried out to evaluate the performance of the proposed method as well as the impact of ignoring measurement error in covariates. The proposed method is applied to analyse a data set arising from the Busselton Health study (Australian J. Public Health 1994; 18:129-135).
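The simulation-extrapolation (SIMEX) idea is generic and can be illustrated outside the AFT setting: add extra measurement error at increasing multiples, track how the naive estimate degrades, and extrapolate the trend back to the no-error point. A sketch in a plain linear model with a known error variance and an assumed quadratic extrapolant (the paper applies the same scheme to AFT estimation):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
beta1, sigma_u = 1.5, 0.7

x = rng.normal(size=n)
w = x + rng.normal(0.0, sigma_u, n)
y = 2.0 + beta1 * x + rng.normal(0.0, 0.3, n)

# SIMEX step 1: inflate the error variance by factor (1 + zeta) and
# record the average naive slope at each inflation level
zetas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for z in zetas:
    est = np.mean([np.polyfit(w + rng.normal(0.0, np.sqrt(z) * sigma_u, n),
                              y, 1)[0] for _ in range(20)])
    slopes.append(est)

# SIMEX step 2: fit a quadratic in zeta and extrapolate to zeta = -1,
# the hypothetical error-free measurement
coef = np.polyfit(zetas, slopes, 2)
simex = np.polyval(coef, -1.0)
print(round(simex, 2))
```

The naive slope at zeta = 0 is near 1.0 (attenuated from 1.5); the quadratic extrapolant recovers most, though not all, of the bias, which is SIMEX's known trade-off of approximate extrapolation for generality.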

10.
There are many settings in which the distribution of error in a mismeasured covariate varies with the value of another covariate. Take, for example, the case of HIV phylogenetic cluster size, large values of which are an indication of rapid HIV transmission. Researchers wish to find behavioral correlates of HIV phylogenetic cluster size; however, the distribution of its measurement error depends on the correctly measured variable, HIV status, and does not have a mean of zero. Further, it is not feasible to obtain validation data or repeated measurements. We propose an extension of simulation–extrapolation, an estimation technique for bias reduction in the presence of measurement error that does not require validation data and can accommodate errors whose distribution depends on other, error‐free covariates. The proposed extension performs well in simulation, typically exhibiting less bias and variability than either regression calibration or multiple imputation for measurement error. We apply the proposed method to data from the province of Quebec in Canada to examine the association between HIV phylogenetic cluster size and the number of reported sex partners. Copyright © 2017 John Wiley & Sons, Ltd.

11.
Multiple indicators, multiple causes (MIMIC) models are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times, however, when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this paper are as follows: (i) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model; (ii) to develop likelihood‐based estimation methods for the MIMIC ME model; and (iii) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. As a by‐product of our work, we also obtain a data‐driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure. Copyright © 2014 John Wiley & Sons, Ltd.

12.
This study investigates appropriate estimation of estimator variability in the context of causal mediation analysis that employs propensity score‐based weighting. Such an analysis decomposes the total effect of a treatment on the outcome into an indirect effect transmitted through a focal mediator and a direct effect bypassing the mediator. Ratio‐of‐mediator‐probability weighting estimates these causal effects by adjusting for the confounding impact of a large number of pretreatment covariates through propensity score‐based weighting. In step 1, a propensity score model is estimated. In step 2, the causal effects of interest are estimated using weights derived from the prior step's regression coefficient estimates. Statistical inferences obtained from this 2‐step estimation procedure are potentially problematic if the estimated standard errors of the causal effect estimates do not reflect the sampling uncertainty in the estimation of the weights. This study extends to ratio‐of‐mediator‐probability weighting analysis a solution to the 2‐step estimation problem by stacking the score functions from both steps. We derive the asymptotic variance‐covariance matrix for the indirect effect and direct effect 2‐step estimators, provide simulation results, and illustrate with an application study. Our simulation results indicate that the sampling uncertainty in the estimated weights should not be ignored. The standard error estimation using the stacking procedure offers a viable alternative to bootstrap standard error estimation. We discuss broad implications of this approach for causal analysis involving propensity score‐based weighting.

13.
Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean–variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi‐likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi‐likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health‐related quality‐of‐life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.
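As background for the modified tests above, the classical score test for Poisson overdispersion (Dean's statistic) is a one-liner; it ignores the covariate measurement error that is the paper's focus, and its modified versions adjust statistics of this type. A sketch with an assumed known mean function (in practice the mean comes from a fitted Poisson regression):

```python
import math
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x = rng.normal(size=n)
mu = np.exp(0.2 + 0.3 * x)     # Poisson-regression mean (treated as known)

# Equidispersed counts vs. negative-binomial counts with Var = mu + mu^2/2
y_pois = rng.poisson(mu)
y_nb = rng.negative_binomial(2.0, 2.0 / (2.0 + mu))

def dean_score(y, mu):
    """Score statistic for H0: Var(Y) = mu vs. Var(Y) = mu*(1 + a*mu);
    approximately N(0, 1) under the Poisson null."""
    return np.sum((y - mu) ** 2 - y) / math.sqrt(2.0 * np.sum(mu ** 2))

print(round(dean_score(y_pois, mu), 1), round(dean_score(y_nb, mu), 1))
```

The statistic stays within normal-range values for the Poisson sample and is far in the right tail for the overdispersed sample.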

14.
Nutritional epidemiology relies largely on self‐reported measures of dietary intake, errors in which give biased estimated diet–disease associations. Self‐reported measurements come from questionnaires and food records. Unbiased biomarkers are scarce; however, surrogate biomarkers, which are correlated with intake but not unbiased, can also be useful. It is important to quantify and correct for the effects of measurement error on diet–disease associations. Challenges arise because there is no gold standard, and errors in self‐reported measurements are correlated with true intake and each other. We describe an extended model for error in questionnaire, food record, and surrogate biomarker measurements. The focus is on estimating the degree of bias in estimated diet–disease associations due to measurement error. In particular, we propose using sensitivity analyses to assess the impact of changes in values of model parameters which are usually assumed fixed. The methods are motivated by and applied to measures of fruit and vegetable intake from questionnaires, 7‐day diet diaries, and a surrogate biomarker (plasma vitamin C) from over 25,000 participants in the Norfolk cohort of the European Prospective Investigation into Cancer and Nutrition. Our results show that the estimated effects of error in self‐reported measurements are highly sensitive to model assumptions, resulting in anything from a large attenuation to a small amplification in the diet–disease association. Commonly made assumptions could result in a large overcorrection for the effects of measurement error. Increased understanding of relationships between potential surrogate biomarkers and true dietary intake is essential for obtaining good estimates of the effects of measurement error in self‐reported measurements on observed diet–disease associations. Copyright © 2013 John Wiley & Sons, Ltd.

15.
It is well known that measurement error in the covariates of regression models generally causes bias in parameter estimates. Correction for such biases requires information concerning the measurement error, which is often in the form of internal validation or replication data. Regression calibration (RC) is a popular approach to correct for covariate measurement error, which involves predicting the true covariate using error‐prone measurements. Likelihood methods have previously been proposed as an alternative approach to estimate the parameters in models affected by measurement error, but have been relatively infrequently employed in medical statistics and epidemiology, partly because of computational complexity and concerns regarding robustness to distributional assumptions. We show how a standard random‐intercepts model can be used to obtain maximum likelihood (ML) estimates when the outcome model is linear or logistic regression under certain normality assumptions, when internal error‐prone replicate measurements are available. Through simulations we show that for linear regression, ML gives more efficient estimates than RC, although the gain is typically small. Furthermore, we show that RC and ML estimates remain consistent even when the normality assumptions are violated. For logistic regression, our implementation of ML is consistent if the true covariate is conditionally normal given the outcome, in contrast to RC. In simulations, this ML estimator showed less bias in situations where RC gives non‐negligible biases. Our proposal makes the ML approach to dealing with covariate measurement error more accessible to researchers, which we hope will improve its viability as a useful alternative to methods such as RC. Copyright © 2009 John Wiley & Sons, Ltd.
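When internal replicate measurements are available, the error variance needed by RC is estimable from the replicate differences, since Var(W1 − W2) = 2σu². A sketch of the RC side of the comparison, with illustrative parameter values (the ML competitor requires fitting a random-intercepts model and is not shown):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
beta1, sigma_u = 1.0, 0.6

x = rng.normal(size=n)
w1 = x + rng.normal(0.0, sigma_u, n)   # internal replicate 1
w2 = x + rng.normal(0.0, sigma_u, n)   # internal replicate 2
y = beta1 * x + rng.normal(0.0, 0.4, n)

# Var(W1 - W2) = 2 * su^2, so the error variance is estimable from data
su_sq_hat = 0.5 * np.var(w1 - w2)

# Regression calibration using the replicate mean, whose error variance
# is su^2 / 2
wbar = 0.5 * (w1 + w2)
lam = (np.var(wbar) - 0.5 * su_sq_hat) / np.var(wbar)
x_hat = wbar.mean() + lam * (wbar - wbar.mean())
rc = np.polyfit(x_hat, y, 1)[0]
print(round(su_sq_hat, 2), round(rc, 2))
```

The estimated error variance lands near the true 0.36 and the calibrated slope near the true coefficient of 1.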

16.
The area (A) under the receiver operating characteristic curve is commonly used to quantify the ability of a biomarker to correctly classify individuals into two populations. However, many markers are subject to measurement error, which must be accounted for to prevent understating their effectiveness. In this paper, we develop a new confidence interval procedure for A which is adjusted for measurement error using either external or internal replicated measurements. Based on the observation that A is a function of normal means and variances, we develop the procedure by recovering variance estimates needed from confidence limits for normal means and variances. Simulation results show that the procedure performs better than the previous ones based on the delta‐method in terms of coverage percentage, balance of tail errors and interval width. Two examples are presented. Copyright © 2010 John Wiley & Sons, Ltd.
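Under the binormal model the dependence of A on normal means and variances is explicit, which is what makes variance recovery possible. A toy calculation with an assumed known error variance (the paper estimates it from replicates and builds confidence limits around it):

```python
import math

# Binormal AUC: A = Phi(delta / sqrt(s0^2 + s1^2)). Classical measurement
# error adds su^2 to both group variances, pulling A toward 0.5.
def auc(mu0, mu1, s0sq, s1sq):
    d = (mu1 - mu0) / math.sqrt(s0sq + s1sq)
    return 0.5 * (1.0 + math.erf(d / math.sqrt(2.0)))

mu0, mu1, s0sq, s1sq, su_sq = 0.0, 1.0, 1.0, 1.0, 0.5

naive = auc(mu0, mu1, s0sq + su_sq, s1sq + su_sq)   # observed variances
adjusted = auc(mu0, mu1, s0sq, s1sq)                # error variance removed
print(round(naive, 3), round(adjusted, 3))
```

The naive AUC of about 0.718 understates the error-free discriminating ability of about 0.760, exactly the understatement the abstract warns about.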

17.
It is widely acknowledged that the predictive performance of clinical prediction models should be studied in patients that were not part of the data in which the model was derived. Out-of-sample performance can be hampered when predictors are measured differently at derivation and external validation. This may occur, for instance, when predictors are measured using different measurement protocols or when tests are produced by different manufacturers. Although such heterogeneity in predictor measurement between derivation and validation data is common, the impact on the out-of-sample performance is not well studied. Using analytical and simulation approaches, we examined out-of-sample performance of prediction models under various scenarios of heterogeneous predictor measurement. These scenarios were defined and clarified using an established taxonomy of measurement error models. The results of our simulations indicate that predictor measurement heterogeneity can induce miscalibration of predictions and affect discrimination and overall predictive accuracy, to the extent that the prediction model may no longer be considered clinically useful. The measurement error taxonomy was found to be helpful in identifying and predicting effects of heterogeneous predictor measurements between settings of prediction model derivation and validation. Our work indicates that homogeneity of measurement strategies across settings is of paramount importance in prediction research.
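The miscalibration mechanism can be seen in a few lines: a model derived on an error-prone predictor and validated where the same predictor is measured without error has a calibration slope away from the ideal value of 1, even though the underlying population is unchanged. A sketch under assumed error SDs:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000
x = rng.normal(size=n)
y = 1.0 * x + rng.normal(0.0, 0.5, n)

# Derivation setting: the predictor is measured with error (sd 0.5)
w_dev = x + rng.normal(0.0, 0.5, n)
coef = np.polyfit(w_dev, y, 1)      # prediction model fitted to noisy W

# Validation setting: the same predictor measured without error
pred = np.polyval(coef, x)
calib_slope = np.polyfit(pred, y, 1)[0]   # ideal calibration slope is 1
print(round(calib_slope, 2))
```

Here the validation calibration slope is about 1.25: the derived model, attenuated by its noisy predictor, systematically under-predicts at validation, which is one of the heterogeneous-measurement scenarios the taxonomy classifies.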

18.
OBJECTIVE: To assess the extent of measurement error bias due to methods used to allocate nursing staff to the acute care inpatient setting and to recommend estimation methods designed to overcome this bias. DATA SOURCES/STUDY SETTING: Secondary data obtained from the California Office of Statewide Health Planning and Development (OSHPD) and the Centers for Medicare and Medicaid Services' Healthcare Cost Report Information System for 279 general acute care hospitals from 1996 to 2001. STUDY DESIGN: California OSHPD provides detailed nurse staffing data for acute care inpatients. We estimate the measurement error and the resulting bias from applying different staffing allocation methods. Estimates of the measurement errors also allow insights into the best choices for alternate estimation strategies. PRINCIPAL FINDINGS: The bias induced by the adjusted patient days method (and its modification) is smaller than for other methods, but the bias is still substantial: in the benchmark simple regression model, the estimated coefficient for staffing level on quality of care is expected to be one-third smaller than its true value (and the bias is larger in a multiple regression model). Instrumental variable estimation, using one staffing allocation measure as an instrument for another, addresses this bias, but only particular choices of staffing allocation measures and instruments are suitable. CONCLUSIONS: Staffing allocation methods induce substantial attenuation bias, but there are easily implemented estimation methods that overcome this bias.
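The instrumental-variable fix works because two error-prone measures of the same quantity share only the true signal, provided their errors are independent (the abstract's caveat about suitable instrument choices). A sketch with hypothetical measures and assumed error SDs:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
beta1 = 1.0

x = rng.normal(size=n)                    # true staffing level (unobserved)
w = x + rng.normal(0.0, 0.8, n)           # allocation measure 1 (error-prone)
z = x + rng.normal(0.0, 0.8, n)           # allocation measure 2, the instrument
y = beta1 * x + rng.normal(0.0, 0.5, n)   # quality-of-care outcome

naive = np.polyfit(w, y, 1)[0]                 # attenuated toward zero
iv = np.cov(z, y)[0, 1] / np.cov(z, w)[0, 1]   # IV slope estimate
print(round(naive, 2), round(iv, 2))
```

The naive slope is attenuated to roughly 0.61 of the true effect, while the IV ratio recovers the true coefficient because the instrument's error is uncorrelated with the error in w.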

19.
Mediation analysis is a popular approach to examine the extent to which the effect of an exposure on an outcome is through an intermediate variable (mediator) and the extent to which the effect is direct. When the mediator is mis‐measured, the validity of mediation analysis can be severely undermined. In this paper, we first study the bias of classical, non‐differential measurement error on a continuous mediator in the estimation of direct and indirect causal effects in generalized linear models when the outcome is either continuous or discrete and exposure–mediator interaction may be present. Our theoretical results as well as a numerical study demonstrate that in the presence of non‐linearities, the bias of naive estimators for direct and indirect effects that ignore measurement error can take unintuitive directions. We then develop methods to correct for measurement error. Three correction approaches using method of moments, regression calibration, and SIMEX are compared. We apply the proposed method to the Massachusetts General Hospital lung cancer study to evaluate the effect of genetic variants mediated through smoking on lung cancer risk. Copyright © 2014 John Wiley & Sons, Ltd.

20.
Within‐person variability in measured values of multiple risk factors can bias their associations with disease. The multivariate regression calibration (RC) approach can correct for such measurement error and has been applied to studies in which true values or independent repeat measurements of the risk factors are observed on a subsample. We extend the multivariate RC techniques to a meta‐analysis framework where multiple studies provide independent repeat measurements and information on disease outcome. We consider the cases where some or all studies have repeat measurements, and compare study‐specific, averaged and empirical Bayes estimates of RC parameters. Additionally, we allow for binary covariates (e.g. smoking status) and for uncertainty and time trends in the measurement error corrections. Our methods are illustrated using a subset of individual participant data from prospective long‐term studies in the Fibrinogen Studies Collaboration to assess the relationship between usual levels of plasma fibrinogen and the risk of coronary heart disease, allowing for measurement error in plasma fibrinogen and several confounders. Copyright © 2009 John Wiley & Sons, Ltd.
