Similar Articles
20 similar articles found.
1.
Kim I, Cheong HK, Kim H. Statistics in Medicine 2011; 30(15): 1837-1851.
In matched case-crossover studies, it is generally accepted that covariates on which a case and its associated controls are matched cannot exert a confounding effect on independent predictors included in the conditional logistic regression model, because any stratum effect is removed by conditioning on the fixed set of a case and its controls within each stratum. Hence, the conditional logistic regression model cannot detect any effects associated with the matching covariates by stratum. In addition, the matching covariates may act as effect modifiers, and methods for assessing and characterizing effect modification by matching covariates are quite limited. In this article, we propose a unified approach able to detect both parametric and nonparametric relationships between the predictor and the relative risk of disease or binary outcome, as well as potential effect modification by matching covariates. Two methods are developed using two semiparametric models: (1) the regression spline varying coefficients model and (2) the regression spline interaction model. Simulation results show that the two approaches are comparable. These methods can be used in any matched case-control study and extend to multilevel effect modification studies. We demonstrate the advantage of our approach using an epidemiological example: a 1:4 bi-directional case-crossover study of childhood aseptic meningitis associated with drinking water turbidity.
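For 1:1 matched data, the conditional likelihood underlying such models reduces to a logistic model on within-pair covariate differences. The following is a minimal illustrative sketch (hypothetical simulated data, a single exposure, and a hand-rolled Newton iteration standing in for a conditional-logistic routine), not the authors' spline-based method:

```python
import numpy as np

rng = np.random.default_rng(0)
beta_true = 1.0
n_pairs = 4000

# Within-pair exposure difference (member 1 minus member 2).
d = rng.normal(size=n_pairs)
# Probability that member 1 is the case, given exactly one case per pair.
p_case1 = 1.0 / (1.0 + np.exp(-beta_true * d))
case1 = rng.random(n_pairs) < p_case1
# Observed difference: case exposure minus control exposure.
d_obs = np.where(case1, d, -d)

# Conditional MLE: maximise sum log sigmoid(beta * d_obs) by Newton's method.
b = 0.0
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-b * d_obs))
    grad = np.sum(d_obs * (1.0 - p))
    hess = np.sum(d_obs**2 * p * (1.0 - p))
    b += grad / hess

print(round(b, 2))  # close to beta_true
```

Note how the matched stratum effect never appears in the likelihood: only within-pair differences enter, which is exactly why matching covariates themselves are inestimable.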

2.
Cai B, Small DS, Ten Have TR. Statistics in Medicine 2011; 30(15): 1809-1824.
We present closed-form expressions for the asymptotic bias of the causal odds ratio from two estimation approaches to instrumental variable (IV) logistic regression: (i) the two-stage predictor substitution (2SPS) method and (ii) the two-stage residual inclusion (2SRI) approach. Under the 2SPS approach, the first-stage model yields the predicted value of treatment as a function of an instrument and covariates, and in the second-stage model for the outcome, this predicted value replaces the observed value of treatment as a covariate. Under the 2SRI approach, the first stage is the same, but the residual term of the first-stage regression is included in the second-stage regression, retaining the observed treatment as a covariate. Our bias assessment is for a different context from that of Terza (J. Health Econ. 2008; 27(3): 531-543), who focused on the causal odds ratio conditional on the unmeasured confounder, whereas we focus on the causal odds ratio among compliers under the principal stratification framework. Our closed-form bias results show that 2SPS logistic regression generates asymptotically biased estimates of this causal odds ratio even when there is no unmeasured confounding, and that this bias increases with increasing unmeasured confounding. The 2SRI logistic regression is asymptotically unbiased when there is no unmeasured confounding, but when there is unmeasured confounding, there is bias, and it increases with increasing unmeasured confounding. The closed-form bias results provide guidance for using these IV logistic regression methods. Our simulation results are consistent with our closed-form analytic results under different combinations of parameter settings.
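The mechanics of the two estimators are easy to demonstrate. Below is a hedged numpy sketch (entirely hypothetical simulation, not the paper's parameter settings): the first stage is an OLS of treatment on a binary instrument, 2SPS plugs the fitted values into a second-stage logistic model, and 2SRI adds the first-stage residual alongside the observed treatment.

```python
import numpy as np

def fit_logit(X, y, iters=50):
    """Logistic regression MLE by Newton-Raphson (no regularisation)."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        W = p * (1.0 - p)
        b += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - p))
    return b

rng = np.random.default_rng(1)
n = 20000
z = rng.binomial(1, 0.5, n)           # instrument
u = rng.normal(size=n)                # unmeasured confounder
x = 0.5 * z + u + rng.normal(size=n)  # treatment (first-stage model)
p_y = 1.0 / (1.0 + np.exp(-(-0.5 + 0.5 * x + u)))
y = (rng.random(n) < p_y).astype(float)

ones = np.ones(n)
# First stage: OLS of treatment on the instrument.
Z1 = np.column_stack([ones, z])
a = np.linalg.lstsq(Z1, x, rcond=None)[0]
xhat = Z1 @ a
resid = x - xhat

naive = fit_logit(np.column_stack([ones, x]), y)          # ignores confounding
b_2sps = fit_logit(np.column_stack([ones, xhat]), y)      # predictor substitution
b_2sri = fit_logit(np.column_stack([ones, x, resid]), y)  # residual inclusion
print(naive[1], b_2sps[1], b_2sri[1])
```

With positive confounding, the naive slope is inflated relative to the conditional log odds ratio of 0.5, while the two IV fits target the causal quantities discussed in the abstract.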

3.
Clinical prediction models (CPMs) can predict clinically relevant outcomes or events. Typically, prognostic CPMs are derived to predict the risk of a single future outcome, but there are many medical applications where two or more outcomes are of interest, and CPMs should reflect this so that they can accurately estimate the joint risk of multiple outcomes simultaneously. A naive approach to multi-outcome risk prediction is to derive a CPM for each outcome separately and then multiply the predicted risks. This approach is only valid if the outcomes are conditionally independent given the covariates, and it fails to exploit potential relationships between the outcomes. This paper outlines several approaches for developing CPMs for multiple binary outcomes. We consider four methods, ranging in complexity and conditional independence assumptions: probabilistic classifier chains, multinomial logistic regression, multivariate logistic regression, and a Bayesian probit model. These are compared with methods that rely on conditional independence: separate univariate CPMs and stacked regression. Using a simulation study and a real-world example, we illustrate that CPMs for joint risk prediction of multiple outcomes should only be derived using methods that model the residual correlation between outcomes. In that situation, our results suggest that probabilistic classifier chains, multinomial logistic regression, and the Bayesian probit model are all appropriate choices. We call into question the development of CPMs for each outcome in isolation when multiple correlated or structurally related outcomes are of interest, and we recommend more multivariate approaches to risk prediction.
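The failure of the naive product-of-marginals approach under residual correlation is easy to see in a small simulation (entirely illustrative: two correlated binary outcomes generated by thresholding a bivariate normal):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200000
rho = 0.7  # residual correlation between the two latent traits

# Bivariate normal latents with correlation rho, thresholded at 0.
z1 = rng.normal(size=n)
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.normal(size=n)
y1, y2 = z1 > 0, z2 > 0

p_joint = np.mean(y1 & y2)             # joint risk of both outcomes
p_product = np.mean(y1) * np.mean(y2)  # naive product of marginal risks
print(round(p_joint, 3), round(p_product, 3))  # joint exceeds the product
```

With positive residual correlation the true joint risk (here about 0.37) is well above the product of the marginals (about 0.25), so any method that assumes conditional independence systematically understates it.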

4.
The self-controlled case series method, commonly used to investigate potential associations between vaccines and adverse events, requires information on cases only and automatically controls for all age-independent multiplicative confounders, while allowing for an age-dependent baseline incidence. In the parametric version of the method, the age-specific relative incidence is modelled by a piecewise constant function, whereas the semiparametric version leaves it unspecified. However, mis-specification of age groups in the parametric version can lead to biased estimates of the exposure effect, and the semiparametric approach runs into computational problems when the number of cases in the study is moderately large. We thus propose a penalized likelihood approach in which the age effect is modelled using splines. We use a linear combination of cubic M-splines to approximate the age-specific relative incidence and integrated splines for the cumulative relative incidence. We conducted a simulation study to evaluate the performance of the new approach and its efficiency relative to the parametric and semiparametric approaches. Results show that the new approach performs equivalently to the existing methods when the sample size is small and works well for large data sets. We applied the new spline-based approach to data on febrile convulsions and paediatric vaccines. Copyright © 2013 John Wiley & Sons, Ltd.

5.
We present a model for meta-regression in the presence of missing information on some of the study-level covariates, obtaining inferences using Bayesian methods. In practice, when confronted with missing covariate data in a meta-regression, it is common to carry out a complete-case or available-case analysis. We propose instead to use the full observed data, modelling the joint density as a factorization of a meta-regression model and a conditional factorization of the density for the covariates. With the inclusion of several covariates, inter-relations between these covariates are modelled. Under this joint likelihood-based approach, it is shown that only the weaker assumption that covariates are Missing At Random is imposed, instead of the stronger and more usual Missing Completely At Random (MCAR) assumption. The model is easily programmable in WinBUGS, and through the analysis of two real data sets we examine the sensitivity and robustness of the results to the MCAR assumption. Copyright © 2010 John Wiley & Sons, Ltd.

6.
Multistate Markov regression models are widely used to quantify the effect sizes of state-specific covariates on the dynamics of multistate outcomes. However, measurements of the multistate outcome are prone to classification error, particularly when a population-based survey relies on proxy measurements of the outcome for cost reasons. Such misclassification may distort the effect sizes of relevant covariates, such as the odds ratios used in epidemiology. We propose a Bayesian measurement-error-driven hidden Markov regression model for calibrating these biased estimates, with and without a 2-stage validation design. A simulation algorithm was developed to assess various scenarios of underestimation and overestimation under nondifferential misclassification (independent of covariates) and differential misclassification (dependent on covariates). We applied the proposed method to a community-based survey of androgenetic alopecia and found that the effect sizes of most covariates were inflated after calibration, regardless of the type of misclassification. The proposed Bayesian measurement-error-driven hidden Markov regression model is practicable and effective for calibrating the effects of covariates on multistate outcomes, but a prior distribution on the measurement errors derived from a 2-stage validation design is strongly recommended.

7.
Individuals may vary in their responses to treatment, and identification of subgroups differentially affected by a treatment is an important issue in medical research. The risk of misleading subgroup analyses has become well known, and exploratory analyses can help clarify how covariates potentially interact with the treatment. Motivated by a real data study of pediatric kidney transplantation, we consider a semiparametric Bayesian latent model and examine its utility for exploratory subgroup effect analysis using secondary data. The proposed method addresses a clinical setting where the number of subgroups is much smaller than the number of potential predictors and subgroups are only latently associated with observed covariates. The semiparametric model is flexible in capturing latent structure driven by the data rather than dictated by parametric modeling assumptions. Since it is difficult to correctly specify the conditional relationship between the response and a large number of confounders, we use propensity score matching to improve model robustness by balancing the covariate distribution. Simulation studies show that the proposed analysis can recover the latent subgrouping structure and, with propensity score matching adjustment, yields robust estimates even when the outcome model is misspecified. In the real data analysis, the proposed approach reports significant subgroup effects of steroid avoidance in kidney transplant patients, whereas standard proportional hazards regression analysis does not.

8.
In most epidemiological investigations, the study units are people, the outcome variable (or response) is a health-related event, and the explanatory variables are usually environmental and/or socio-demographic factors. The fundamental task in such investigations is to quantify the association between the explanatory variables (covariates/exposures) and the outcome variable through a suitable regression model. The accuracy of such quantification depends on how precisely the relevant covariates are measured. In many instances, we cannot measure some of the covariates accurately and can only measure noisy (mismeasured) versions of them. In statistical terminology, mismeasurement in continuous covariates is known as measurement error or errors-in-variables. Regression analyses based on mismeasured covariates lead to biased inference about the true underlying response-covariate associations. In this paper, we suggest a flexible parametric approach for avoiding this bias when estimating the response-covariate relationship through a logistic regression model. More specifically, we consider the flexible generalized skew-normal and the flexible generalized skew-t distributions for modeling the unobserved true exposure. For inference and computational purposes, we use Bayesian Markov chain Monte Carlo techniques. We investigate the performance of the proposed flexible parametric approach in comparison with a common flexible parametric approach through extensive simulation studies, and we also compare the two methods on a real-life data set. Although emphasis is placed on the logistic regression model, the proposed method is unified and applies to other generalized linear models, as well as to other types of non-linear regression models. Copyright © 2009 John Wiley & Sons, Ltd.

9.
Logistic regression is one of the most widely used regression models in practice, but alternatives to conventional maximum likelihood estimation may be more appropriate for small or sparse samples. Modifying the logistic regression score function to remove first-order bias is equivalent to penalizing the likelihood by the Jeffreys prior, and yields penalized maximum likelihood estimates (PLEs) that always exist, even in samples in which maximum likelihood estimates (MLEs) are infinite. PLEs are an attractive alternative in small-to-moderate-sized samples, and are preferred to exact conditional MLEs when there are continuous covariates. We present methods to construct confidence intervals (CIs) in the penalized multinomial logistic regression model, and compare CI coverage and length for the PLE-based methods with those of conventional MLE-based methods in trinomial logistic regressions with both binary and continuous covariates. Based on simulation studies in sparse data sets, we recommend profile CIs over asymptotic Wald-type intervals for the PLEs in all cases. Furthermore, when finite-sample bias and data separation are likely to occur, we prefer PLE profile CIs over MLE methods.
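The score modification described here (Firth's method) can be sketched in a few lines of numpy. Below it is applied to a completely separated toy data set, where the ordinary MLE is infinite but the penalized estimate stays finite; this is illustrative code under assumed data, not the authors' multinomial implementation:

```python
import numpy as np

def fit_firth(X, y, iters=100):
    """Firth-penalized logistic regression via the modified score
    U*(b) = X'(y - p + h (1/2 - p)), where h are the leverages of sqrt(W) X."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        W = p * (1.0 - p)
        XtWX = X.T @ (X * W[:, None])
        # Leverages h_i of the weighted design matrix.
        h = np.sum((X * W[:, None]) * (X @ np.linalg.inv(XtWX)), axis=1)
        b += np.linalg.solve(XtWX, X.T @ (y - p + h * (0.5 - p)))
    return b

# Completely separated data: y = 1 exactly when x > 0, so the MLE diverges.
x = np.linspace(-2, 2, 40)
y = (x > 0).astype(float)
X = np.column_stack([np.ones_like(x), x])

b_firth = fit_firth(X, y)
print(b_firth)  # finite slope despite separation
```

Production analyses would use an established implementation (for example, the `logistf` package in R), which adds safeguards such as step-halving.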

10.
Matched case-control designs are commonly used to control for potential confounding factors in genetic epidemiology, especially in epigenetic studies of DNA methylation. Compared with unmatched case-control studies with high-dimensional genomic or epigenetic data, few variable selection methods are available for matched sets. In an earlier paper, we proposed a penalized logistic regression model with a network-based penalty for the analysis of unmatched DNA methylation data. However, for the matched designs popular in epigenetic studies, which compare DNA methylation between tumor and adjacent non-tumor tissues or between pre-treatment and post-treatment conditions, applying ordinary logistic regression while ignoring the matching is known to introduce serious estimation bias. In this paper, we developed a penalized conditional logistic model using the network-based penalty, which encourages a grouping effect among (1) linked Cytosine-phosphate-Guanine (CpG) sites within a gene or (2) linked genes within a genetic pathway, for the analysis of matched DNA methylation data. In simulation studies, we demonstrated the superiority of the conditional logistic model over the unconditional logistic model in high-dimensional variable selection for matched case-control data, and we further investigated the benefits of utilizing biological group or graph information. We applied the proposed method to a genome-wide DNA methylation study of hepatocellular carcinoma (HCC), in which the DNA methylation levels of tumor and adjacent non-tumor tissues from HCC patients were measured using the Illumina Infinium HumanMethylation27 BeadChip. Several CpG sites and genes known to be related to HCC were identified that had been missed by the standard method in the original paper. Copyright © 2012 John Wiley & Sons, Ltd.

11.
An adjustment for an uncorrelated covariate in a logistic regression changes the true value of the odds ratio for a unit increase in a risk factor. Even when there is no variation due to covariates, the odds ratio for a unit increase in a risk factor also depends on the distribution of the risk factor. An instrumental variable can be used to consistently estimate a causal effect in the presence of arbitrary confounding. With a logistic outcome model, we show that the simple ratio or two-stage instrumental variable estimate is consistent for the odds ratio of an increase in the population distribution of the risk factor equal to the change due to a unit increase in the instrument divided by the average change in the risk factor due to that increase in the instrument. This odds ratio is conditional within the strata of the instrumental variable, but marginal across all other covariates, and is averaged across the population distribution of the risk factor. Where the proportion of variance in the risk factor explained by the instrument is small, this is similar to the odds ratio from an RCT without adjustment for any covariates, where the intervention corresponds to the effect of a change in the population distribution of the risk factor. This implies that the ratio or two-stage instrumental variable method is not biased, as has been suggested, but estimates a different quantity from the conditional odds ratio of an adjusted multiple regression: a quantity that arguably has more relevance to an epidemiologist or a policy maker, especially in the context of Mendelian randomization. Copyright © 2013 John Wiley & Sons, Ltd.
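The ratio (Wald-type) estimator analysed here divides the instrument-outcome log odds ratio by the instrument-exposure association. A numpy sketch with a binary instrument follows (a hypothetical Mendelian-randomization-style simulation, not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100000
z = rng.binomial(1, 0.3, n)           # instrument (e.g. a genotype indicator)
u = rng.normal(size=n)                # unmeasured confounder
x = 0.4 * z + u + rng.normal(size=n)  # risk factor
p = 1.0 / (1.0 + np.exp(-(-1.0 + 0.5 * x + u)))
y = rng.random(n) < p

# Instrument-outcome association: log odds ratio of y comparing z=1 vs z=0.
odds = lambda m: m / (1.0 - m)
log_or_zy = np.log(odds(y[z == 1].mean()) / odds(y[z == 0].mean()))
# Instrument-exposure association: difference in mean risk factor.
delta_x = x[z == 1].mean() - x[z == 0].mean()

beta_ratio = log_or_zy / delta_x
print(round(beta_ratio, 2))  # population-averaged log odds ratio per unit of x
```

Consistent with the abstract's point, this estimate targets a marginal, population-averaged odds ratio, so it sits below the conditional log odds ratio of 0.5 used to generate the data rather than being "biased".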

12.
Analysis of health care cost data is often complicated by high skewness, heteroscedastic variances, and the presence of missing data. Most of the existing literature on cost data analysis has focused on modeling the conditional mean. In this paper, we study a weighted quantile regression approach for estimating the conditional quantiles of health care cost data with missing covariates. The weighted quantile regression estimator is consistent, unlike the naive estimator, and asymptotically normal. Furthermore, we propose a modified BIC for variable selection in quantile regression when the covariates are missing at random. The quantile regression framework gives a more complete picture of the effects of the covariates on health care costs and is naturally adapted to the skewness and heterogeneity of the cost data. The method is semiparametric in the sense that it does not require specifying the likelihood of the random error or of the covariates. We investigate the weighted quantile regression procedure and the modified BIC via extensive simulations, and we illustrate the application by analyzing a real data set from a health care cost study. Copyright © 2013 John Wiley & Sons, Ltd.

13.
Hong Zhu. Statistics in Medicine 2014; 33(14): 2467-2479.
Regression methods for survival data with right censoring have been extensively studied under semiparametric transformation models such as the Cox regression model and the proportional odds model. However, their practical application can be limited by violation of model assumptions or, in some cases, by a lack of ready interpretation for the regression coefficients. As an alternative, in this paper, the proportional likelihood ratio model introduced by Luo and Tsai is extended to flexibly model the relationship between survival outcome and covariates. This model has natural connections with many important semiparametric models, such as the generalized linear model and the density ratio model, and is closely related to biased sampling problems. Compared with the semiparametric transformation model, the proportional likelihood ratio model is appealing and practical in many ways because of its flexibility and direct clinical interpretation. We present two likelihood approaches for estimation and inference on the target regression parameters under independent and dependent censoring assumptions. Based on a conditional likelihood approach using uncensored failure times, a numerically simple estimation procedure is developed by maximizing a pairwise pseudo-likelihood. We also develop a full likelihood approach, in which the most efficient maximum likelihood estimator is obtained by a profile likelihood. Simulation studies are conducted to assess the finite-sample properties of the proposed estimators and to compare the efficiency of the two likelihood approaches. An application to survival data for bone marrow transplantation patients with acute leukemia illustrates the proposed method and other approaches for handling non-proportionality. The relative merits of these methods are discussed in concluding remarks. Copyright © 2014 John Wiley & Sons, Ltd.

14.
The statistical analysis of panel count data has recently attracted a great deal of attention, and a number of approaches have been developed. However, most of these approaches assume that the observation and follow-up processes are independent of the underlying recurrent event process, either unconditionally or conditionally on covariates. In this paper, we discuss a more general situation in which both the observation and follow-up processes may be related to the recurrent event process of interest. For regression analysis, we present a class of semiparametric transformation models and develop estimating equations for the regression parameters. Numerical studies conducted under different settings suggest that the proposed methodology works well in practical situations, and the approach is applied to the skin cancer study that motivated this work. Copyright © 2013 John Wiley & Sons, Ltd.

15.
We present the most comprehensive comparison to date of the predictive benefit of genetics in addition to currently used clinical variables, using genotype data for 33 single-nucleotide polymorphisms (SNPs) in 1,547 Caucasian men from the placebo arm of the REduction by DUtasteride of prostate Cancer Events (REDUCE®) trial. Moreover, we conducted a detailed comparison of three techniques for incorporating genetics into clinical risk prediction. The first was a standard logistic regression model with separate terms for the clinical covariates and for each genetic marker; this approach ignores a substantial amount of external information on effect sizes for these Genome-Wide Association Study (GWAS)-replicated SNPs. The second and third methods incorporated meta-analysed external SNP effect estimates in two ways: one via a weighted prostate cancer (PCa) 'risk' score based solely on the meta-analysis estimates, and the other combining the current and prior data via informative priors in a Bayesian logistic regression model. All methods demonstrated a slight improvement in predictive performance upon incorporation of genetics. The two methods that incorporated external information showed the greatest increase in receiver-operating-characteristic AUC, from 0.61 to 0.64. The value of our comparison likely lies in the observed similarities, rather than differences, in performance between three approaches with very different resource requirements: the two methods that included external information performed best, but only marginally so despite substantial differences in complexity.
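A weighted risk score of the kind described, built from externally estimated per-allele log odds ratios, and its rank-based (Mann-Whitney) AUC can be sketched as follows. All numbers here are hypothetical placeholders (5 SNPs, made-up weights and allele frequencies), not the REDUCE data:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
maf = np.array([0.10, 0.20, 0.30, 0.25, 0.15])   # assumed allele frequencies
# Hypothetical external (meta-analysis) per-allele log odds ratios.
w_ext = np.array([0.20, 0.15, 0.10, 0.25, 0.18])

G = rng.binomial(2, maf, size=(n, len(maf)))  # genotype dosages 0/1/2
score = G @ w_ext                             # weighted genetic risk score
p = 1.0 / (1.0 + np.exp(-(-2.0 + 2.0 * score)))
y = rng.random(n) < p

# Rank-based (Mann-Whitney) AUC of the risk score.
ranks = np.argsort(np.argsort(score)) + 1.0
auc = (ranks[y].sum() - y.sum() * (y.sum() + 1) / 2) / (y.sum() * (~y).sum())
print(round(auc, 3))  # above 0.5: the score discriminates
```

The score costs nothing beyond a matrix product once external weights are fixed, which is one reason the abstract notes the similar performance of methods with very different resource requirements.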

16.
In a recent paper (Weller EA, Milton DK, Eisen EA, Spiegelman D. Regression calibration for logistic regression with multiple surrogates for one exposure. Journal of Statistical Planning and Inference 2007; 137: 449-461), the authors discussed fitting logistic regression models when a scalar main explanatory variable is measured with error by several surrogates, that is, a situation with more surrogates than variables measured with error. They compared two methods of adjusting for measurement error that treat a regression calibration approximate model as if it were exact. One is the standard regression calibration approach, which substitutes an estimated conditional expectation of the true covariate given the observed data into the logistic regression. The other is a novel two-stage approach in which the logistic regression is fitted to the multiple surrogates and a linear combination of the estimated slopes is then formed as the estimate of interest. Applying estimated asymptotic variances for both methods in a single data set, with some sensitivity analysis, the authors asserted the superiority of their two-stage approach. We investigate this claim in some detail. A troubling aspect of the proposed two-stage method is that, unlike standard regression calibration and a natural form of maximum likelihood, the resulting estimates are not invariant to reparameterization of nuisance parameters in the model. We show, however, that under the regression calibration approximation, the two-stage method is asymptotically equivalent to a maximum likelihood formulation, and is therefore in theory superior to standard regression calibration. However, our extensive finite-sample simulations, in the practically important parameter space where the regression calibration model provides a good approximation, failed to uncover such superiority of the two-stage method. We also discuss extensions to different data structures. Copyright © 2012 John Wiley & Sons, Ltd.

17.
We consider a general semiparametric hazards regression model that encompasses both the Cox proportional hazards model and the accelerated failure time model for survival analysis. To overcome the nonexistence of the maximum likelihood estimator, we derive a kernel-smoothed profile likelihood function and prove that the resulting estimates of the regression parameters are consistent and achieve semiparametric efficiency. In addition, we develop penalized structure selection techniques to determine which covariates constitute the accelerated failure time component and which constitute the proportional hazards component. The proposed method estimates the model structure consistently and the model parameters efficiently, and variance estimation is straightforward. The approach performs well in simulation studies and is applied to the analysis of a real data set. Copyright © 2013 John Wiley & Sons, Ltd.

18.
Logistic regression is the standard method for assessing predictors of disease. In logistic regression analyses, a stepwise strategy is often adopted to choose a subset of variables, and inference about the predictors is then based on the chosen model, constructed only of the variables retained. This practice ignores both the variables not selected by the procedure and the uncertainty due to the variable selection procedure itself. The limitation may be addressed by adopting a Bayesian model averaging approach, which weights a set of candidate models by their posterior probabilities and uses these probabilities to perform all inferences and predictions. This study compares the Bayesian model averaging approach with stepwise procedures for selecting predictor variables in logistic regression, using simulated data sets and the Framingham Heart Study data. The results show that in most cases Bayesian model averaging selects the correct model and outperforms stepwise approaches at predicting an event of interest.
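A common approximation to Bayesian model averaging weights each candidate model by exp(-BIC/2). The following self-contained numpy sketch, on simulated data with one truly active predictor, is illustrative only and is not the paper's implementation:

```python
import numpy as np

def logit_loglik(X, y, iters=50):
    """Fit logistic regression by Newton-Raphson; return the maximized log-likelihood."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        W = p * (1.0 - p)
        b += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-X @ b))
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(5)
n = 2000
x1, x2 = rng.normal(size=n), rng.normal(size=n)  # x1 active, x2 pure noise
y = (rng.random(n) < 1 / (1 + np.exp(-(0.2 + 1.0 * x1)))).astype(float)

ones = np.ones(n)
candidates = {(): [ones], (1,): [ones, x1], (2,): [ones, x2], (1, 2): [ones, x1, x2]}
bics = {}
for name, cols in candidates.items():
    X = np.column_stack(cols)
    bics[name] = X.shape[1] * np.log(n) - 2 * logit_loglik(X, y)

# Approximate posterior model probabilities: exp(-BIC/2), normalised.
bmin = min(bics.values())
w = {m: np.exp(-(b - bmin) / 2) for m, b in bics.items()}
tot = sum(w.values())
w = {m: v / tot for m, v in w.items()}
print(w)  # nearly all weight on models containing x1
```

Predictions are then averaged across models with these weights, rather than taken from a single stepwise-selected model.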

19.
Measurement error is common in epidemiological and biomedical studies. When biomarkers are measured in batches or groups, measurement error is potentially correlated within each batch or group. In regression analysis, most existing methods are not applicable in the presence of batch-specific measurement error in predictors. We propose a robust conditional likelihood approach to account for batch-specific error in predictors when the batch effect is additive and the predominant source of error; the approach requires no assumptions on the distribution of the measurement error. Although a regression model with batch as a categorical covariate yields the same parameter estimates as the proposed conditional likelihood approach for linear regression, this result does not hold in general for all generalized linear models, in particular for logistic regression. Our simulation studies show that the conditional likelihood approach achieves better finite-sample performance than the regression calibration approach or a naive approach without adjustment for measurement error. In the case of logistic regression, the proposed approach also outperforms the regression approach with batch as a categorical covariate. In addition, we examine a 'hybrid' approach combining the conditional likelihood method and the regression calibration method, which is shown in simulations to achieve good performance in the presence of both batch-specific and measurement-specific errors. We illustrate our method using data from a colorectal adenoma study. Copyright © 2012 John Wiley & Sons, Ltd.
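For linear regression, the claimed equivalence between adding batch as a categorical covariate and conditioning out the batch effect can be checked directly: regressing within-batch-centered y on within-batch-centered x yields exactly the same slope as including batch indicator dummies. A small numpy illustration under assumed additive batch errors:

```python
import numpy as np

rng = np.random.default_rng(6)
n_batches, per_batch = 20, 15
batch = np.repeat(np.arange(n_batches), per_batch)
batch_err = rng.normal(size=n_batches)[batch]  # additive batch-specific error
x_true = rng.normal(size=batch.size)
x_obs = x_true + batch_err                     # observed, error-prone predictor
y = 2.0 + 1.5 * x_true + rng.normal(size=batch.size)

# (a) OLS of y on x_obs plus batch dummy indicators.
D = (batch[:, None] == np.arange(n_batches)).astype(float)
Xa = np.column_stack([x_obs, D])
slope_dummies = np.linalg.lstsq(Xa, y, rcond=None)[0][0]

# (b) "Conditional" estimate: center x_obs and y within each batch.
def center(v):
    means = np.bincount(batch, v) / np.bincount(batch)
    return v - means[batch]

xc, yc = center(x_obs), center(y)
slope_centered = (xc @ yc) / (xc @ xc)
print(slope_dummies, slope_centered)  # identical, by the Frisch-Waugh theorem
```

Centering within batch wipes out the additive batch error entirely, which is why both estimates recover the true slope here; as the abstract notes, no such exact equivalence holds for logistic regression.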

20.
One can fruitfully approach survival problems without covariates in an actuarial way: in narrow time bins, the number of people at risk is counted together with the number of events, and the relationship between time and the probability of an event can then be estimated with a parametric or semi-parametric model. The number of events observed in each bin is described by a Poisson distribution, with the log mean specified by a flexible penalized B-spline model with a large number of equidistant knots. Regression on pertinent covariates can easily be performed using the same log-linear model, leading to the classical proportional hazards model. We propose to extend that model by allowing the regression coefficients to vary smoothly with time, with a penalized B-spline model for each of these coefficients. We show how the regression parameters and the penalty weights can be estimated efficiently using Bayesian inference tools based on the Metropolis-adjusted Langevin algorithm.
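The actuarial starting point, counting events and person-time in narrow bins, already yields the classical piecewise-constant hazard estimate (events divided by exposure in each bin), which is also the Poisson MLE with a log link and an exposure offset. A numpy sketch on simulated exponential survival data (illustrative, without the smoothing penalty the abstract adds):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50000
true_rate = 0.5
t = rng.exponential(1 / true_rate, n)  # event times
c = rng.uniform(0, 4, n)               # censoring times
obs = np.minimum(t, c)
event = t <= c

bins = np.arange(0, 3.5, 0.5)          # narrow time bins
deaths = np.zeros(len(bins) - 1)
exposure = np.zeros(len(bins) - 1)
for j in range(len(bins) - 1):
    lo, hi = bins[j], bins[j + 1]
    # Person-time each subject contributes to this bin.
    exposure[j] = np.clip(obs - lo, 0, hi - lo).sum()
    deaths[j] = np.sum(event & (obs >= lo) & (obs < hi))

hazard = deaths / exposure  # per-bin Poisson MLE of the rate
print(np.round(hazard, 2))  # roughly constant, near true_rate
```

The penalized B-spline model in the abstract replaces these raw per-bin rates with a smooth log-hazard curve fitted to the same binned counts and exposures.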

