Similar Articles
20 similar articles found (search time: 15 ms)
1.
2.
3.
We reconsider the relationship between income and health from a distributional perspective rather than one centered on the conditional expectation. Using structured additive distributional regression, we find that the association between income and health is larger than generally estimated, because aspects of the conditional health distribution that go beyond the expectation imply worse outcomes for those with lower incomes. Looking at German data from the Socio-Economic Panel, we find that the risk of bad health is roughly halved when the net equivalent income is doubled from €15,000 to €30,000. This is more than ten times the magnitude of the change found when considering expected health measures. A distributional perspective thus highlights another dimension of the income–health relation: the poor in particular face greater health risk at the lower end of the health distribution. We therefore argue that, when studying health outcomes, a distributional approach that considers stochastic variation among observationally equivalent individuals is warranted.
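As a rough illustration of the distributional idea (a hedged sketch with simulated data and made-up parameter values, not the article's model, the SOEP data, or its estimates): when both the mean and the spread of a health score are allowed to depend on income, the probability of falling below a "bad health" threshold can change far more sharply with income than the mean alone suggests.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(42)
    n = 5000
    log_inc = rng.normal(np.log(25000), 0.4, size=n)               # simulated log net equivalent income
    z = (log_inc - np.log(25000)) / 0.4                            # standardized log income
    health = 60 + 3 * z + rng.normal(scale=np.exp(2.3 - 0.3 * z))  # lower income: wider spread

    def negloglik(par):
        """Heteroscedastic Gaussian: mean and log-sd both linear in income."""
        b0, b1, g0, g1 = par
        mu = b0 + b1 * z
        sd = np.exp(g0 + g1 * z)
        return -np.sum(norm.logpdf(health, loc=mu, scale=sd))

    fit = minimize(negloglik, x0=[60.0, 0.0, 2.0, 0.0], method="Nelder-Mead")
    b0, b1, g0, g1 = fit.x

    def risk_bad_health(income, threshold=40.0):
        """P(health score below the threshold) at a given income level."""
        zz = (np.log(income) - np.log(25000)) / 0.4
        return norm.cdf(threshold, loc=b0 + b1 * zz, scale=np.exp(g0 + g1 * zz))

    # The fitted risk of a bad outcome falls sharply as income doubles,
    # even though the fitted mean shifts only modestly.
    print(risk_bad_health(15000), risk_bad_health(30000))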

4.
Gene-gene (G×G) interactions have been shown to be critical to the fundamental mechanisms and development of complex diseases, beyond main genetic effects. The commonly adopted marginal analysis is limited by considering only a small number of G factors at a time. Under the "main effects, interactions" hierarchical constraint, many of the existing joint analysis methods suffer from prohibitively high computational cost. In this study, we propose a new method for identifying important G×G interactions under joint modeling. The proposed method adopts tensor regression to accommodate high data dimensionality and the penalization technique for selection. It naturally accommodates the strong hierarchical structure without imposing additional constraints, making optimization much simpler and faster than in existing studies. It outperforms multiple alternatives in simulation. The analysis of The Cancer Genome Atlas (TCGA) data on lung cancer and melanoma demonstrates that it can identify markers with important implications and better prediction performance.

5.
The Annual Percentage Change (APC) summarizes trends in age-adjusted cancer rates over short time intervals. This measure implicitly assumes that the log-rates are linear over the intervals in question, which may not be valid, especially for relatively long time intervals. An alternative is the Average Annual Percentage Change (AAPC), which computes a weighted average of APC values over intervals on which the log-rates are piecewise linear. In this article, we propose a Bayesian approach to calculating APC and AAPC values from age-adjusted cancer rate data. The procedure involves modeling the corresponding counts using age-specific Poisson regression models with a log-link function that contains unknown joinpoints. The slope changes at the joinpoints are assumed to follow a mixture distribution with point mass at zero, and the joinpoints are assumed to be uniformly distributed subject to order restrictions. Additionally, the age-specific intercept parameters are modeled nonparametrically using a Dirichlet process prior. The proposed method can be used to construct Bayesian credible intervals for the AAPC from age-adjusted mortality rates. This provides a significant improvement over the currently available frequentist method, in which variance calculations are done conditional on the joinpoint locations. Simulation studies are used to demonstrate the success of the method in capturing trend changes. Finally, the proposed method is illustrated using data on prostate cancer incidence. Copyright © 2010 John Wiley & Sons, Ltd.
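For context, the quantities being estimated have simple closed forms given the segment slopes of the piecewise log-linear model. The minimal sketch below uses the standard definitions (not the Bayesian procedure itself) with made-up slope values to show how APC and AAPC follow from fitted log-linear slopes and segment lengths.

    import numpy as np

    def apc(beta):
        """Annual percentage change implied by a log-linear slope."""
        return 100.0 * (np.exp(beta) - 1.0)

    def aapc(betas, segment_lengths):
        """Average annual percentage change over piecewise-linear segments,
        weighting each segment's slope by its length."""
        betas = np.asarray(betas, dtype=float)
        w = np.asarray(segment_lengths, dtype=float)
        w = w / w.sum()
        return 100.0 * (np.exp(np.sum(w * betas)) - 1.0)

    print(apc(0.02))                      # a slope of 0.02 corresponds to roughly +2.0% per year
    print(aapc([0.02, -0.01], [6, 4]))    # averaged over a 6-year and a 4-year segment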

6.
7.
Many meta-analyses combine results from only a small number of studies, a situation in which the between-study variance is imprecisely estimated when standard methods are applied. Bayesian meta-analysis allows incorporation of external evidence on heterogeneity, providing the potential for more robust inference on the effect size of interest. We present a method for performing Bayesian meta-analysis using data augmentation, in which we represent an informative conjugate prior for the between-study variance by pseudo data and use meta-regression for estimation. To assist in this, we derive predictive inverse-gamma distributions for the between-study variance expected in future meta-analyses; these may serve as priors for heterogeneity in new meta-analyses. In a simulation study, we compare approximate Bayesian methods using meta-regression and pseudo data against fully Bayesian approaches based on importance sampling techniques and Markov chain Monte Carlo (MCMC). We compare the frequentist properties of these Bayesian methods with those of the commonly used frequentist DerSimonian and Laird procedure. The method is implemented in standard statistical software and provides a less complex alternative to standard MCMC approaches. An importance sampling approach produces almost identical results to standard MCMC approaches, and results obtained through meta-regression and pseudo data are very similar. On average, data augmentation gives results closer to MCMC when implemented using restricted maximum likelihood estimation rather than the DerSimonian and Laird or maximum likelihood estimators. The methods are applied to real datasets, and an extension to network meta-analysis is described. The proposed method facilitates Bayesian meta-analysis in a way that is accessible to applied researchers. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
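For reference, the frequentist comparator mentioned above has a short closed form. The sketch below is our own minimal implementation with toy numbers (not code from the article); it computes the DerSimonian and Laird method-of-moments estimate of the between-study variance that the Bayesian data-augmentation approach aims to improve upon when there are few studies.

    import numpy as np

    def dersimonian_laird_tau2(effects, variances):
        """Method-of-moments estimate of the between-study variance tau^2."""
        y = np.asarray(effects, dtype=float)
        v = np.asarray(variances, dtype=float)
        w = 1.0 / v                            # inverse-variance (fixed-effect) weights
        mu_fe = np.sum(w * y) / np.sum(w)      # fixed-effect pooled estimate
        Q = np.sum(w * (y - mu_fe) ** 2)       # Cochran's Q heterogeneity statistic
        k = len(y)
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        return max(0.0, (Q - (k - 1)) / c)     # truncated at zero

    # Toy example: five small studies (hypothetical log odds ratios and variances)
    effects = [0.10, 0.35, -0.05, 0.25, 0.40]
    variances = [0.04, 0.06, 0.05, 0.03, 0.07]
    print(dersimonian_laird_tau2(effects, variances))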

8.
This article considers the problem of examining time-varying causal effect moderation using observational, longitudinal data in which treatment, candidate moderators, and possible confounders are time-varying. The structural nested mean model (SNMM) is used to specify the moderated time-varying causal effects of interest in a conditional mean model for a continuous response given time-varying treatments and moderators. We present an easy-to-use estimator of the SNMM that combines an existing regression-with-residuals (RR) approach with an inverse-probability-of-treatment weighting (IPTW) strategy. The RR approach has been shown to identify the moderated time-varying causal effects if the time-varying moderators are also the sole time-varying confounders. The proposed IPTW+RR approach provides estimators of the moderated time-varying causal effects in the SNMM in the presence of an additional, auxiliary set of known and measured time-varying confounders. We use a small simulation experiment to compare IPTW+RR with the traditional regression approach and to compare the small- and large-sample properties of asymptotic versus bootstrap estimators of the standard errors for the IPTW+RR approach. This article clarifies the distinction between time-varying moderators and time-varying confounders. We illustrate the methodology in a case study assessing whether time-varying substance use moderates treatment effects on future substance use. Copyright © 2013 John Wiley & Sons, Ltd.
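To make the IPTW ingredient concrete, the sketch below is a simplified single-time-point illustration with simulated data and hypothetical variable names (not the authors' implementation): it computes stabilized inverse-probability-of-treatment weights from a fitted treatment model, the weights that the article combines with the regression-with-residuals estimator across multiple time points.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 500
    confounder = rng.normal(size=n)                            # measured time-varying confounder
    treat = rng.binomial(1, 1 / (1 + np.exp(-confounder)))     # treatment depends on the confounder

    # Denominator model: P(A = 1 | confounder); numerator: marginal P(A = 1)
    ps_model = LogisticRegression().fit(confounder.reshape(-1, 1), treat)
    p_denom = ps_model.predict_proba(confounder.reshape(-1, 1))[:, 1]
    p_num = treat.mean()

    weights = np.where(treat == 1, p_num / p_denom, (1 - p_num) / (1 - p_denom))
    print(weights.mean())   # stabilized weights should average close to 1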

9.
10.
Multi-state models of chronic disease are becoming increasingly important in medical research for describing the progression of complicated diseases. However, studies seldom observe health outcomes over long time periods, so current clinical research focuses on secondary analysis of the published literature to estimate a single transition probability within the entire model. Unfortunately, using secondary data presents many difficulties, especially since the states and transitions of published studies may not be consistent with the proposed multi-state model. Early approaches to reconciling published studies with the theoretical framework of a multi-state model have been limited to data available as cumulative counts of progression. This paper presents an approach that allows the use of published regression data in a multi-state model when the published study may have ignored intermediary states in the multi-state model. Colloquially, we call this the Lemonade Method: when study data give you lemons, make lemonade. The approach uses maximum likelihood estimation. An example is provided for the progression of heart disease in people with diabetes. Copyright © 2009 John Wiley & Sons, Ltd.
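The reconciliation idea can be illustrated in a stripped-down form (our own construction with hypothetical numbers, not the article's likelihood or its diabetes example): if a published study reports only a cumulative probability of reaching the final state and ignores an intermediate state, one can solve for the per-cycle transition probability that reproduces the published figure.

    import numpy as np
    from scipy.optimize import brentq

    p_known_sick_to_dead = 0.20   # per-cycle probability assumed known from another source
    p_published_10yr = 0.35       # published 10-year healthy -> dead probability (intermediate state ignored)
    cycles = 10

    def ten_year_death_prob(p_healthy_to_sick):
        """Cumulative death probability in a healthy -> sick -> dead chain."""
        P = np.array([
            [1 - p_healthy_to_sick, p_healthy_to_sick,        0.0],
            [0.0,                   1 - p_known_sick_to_dead, p_known_sick_to_dead],
            [0.0,                   0.0,                      1.0],
        ])
        return np.linalg.matrix_power(P, cycles)[0, 2]

    # Solve for the per-cycle healthy -> sick probability that reproduces the published figure.
    p_hat = brentq(lambda p: ten_year_death_prob(p) - p_published_10yr, 1e-6, 1 - 1e-6)
    print(p_hat)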

11.
We consider the problem of assessing the joint effect of a set of genetic markers on multiple, possibly correlated phenotypes of interest. We develop a kernel-machine-based multivariate regression framework in which the joint effect of the marker set on each of the phenotypes is modeled using prespecified kernel functions with unknown variance components. Unlike most existing methods, which mainly focus on the global association between the marker set and the phenotype set, we develop estimation and testing procedures to study phenotype-specific associations. Specifically, we develop an estimation method based on a penalized likelihood approach to estimate phenotype-specific effects and their corresponding standard errors while accounting for possible correlation among the phenotypes. We develop testing procedures for the association of the marker set with any subset of phenotypes using a score-based variance-components test. We assess the performance of the proposed methodology via a simulation study and demonstrate its utility using the Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE) data.
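As a rough sketch of the score-based variance-component ingredient (simulated data, a single phenotype, and no p-value calibration; not the authors' multivariate procedure): residuals from a null, covariates-only model are combined with a kernel built from the marker set to form a score-type statistic.

    import numpy as np

    rng = np.random.default_rng(1)
    n, m = 300, 20
    G = rng.binomial(2, 0.3, size=(n, m)).astype(float)      # genotype matrix (0/1/2 per marker)
    X = np.column_stack([np.ones(n), rng.normal(size=n)])    # intercept plus one covariate
    y = X @ np.array([1.0, 0.5]) + 0.2 * G[:, 0] + rng.normal(size=n)

    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)         # null model: covariates only
    resid = y - X @ beta_hat

    K = G @ G.T                                              # linear kernel on the marker set
    Q = resid @ K @ resid                                    # score-type variance-component statistic
    print(Q)   # large values suggest association; the null distribution is a chi-square mixture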

12.
13.
14.
15.
Delay in the outcome variable is challenging for outcome-adaptive randomization, as it creates a lag between the number of subjects accrued and the information known at the time of the analysis. Motivated by a real-life pediatric ulcerative colitis trial, we consider a case where a short-term predictor is available for the delayed outcome. When a short-term predictor is not considered, studies have shown that the asymptotic properties of many outcome-adaptive randomization designs are little affected unless the lag is unreasonably large relative to the accrual process. These theoretical results assumed independent, identically distributed delays, however, whereas delays in the presence of a short-term predictor may only be conditionally homogeneous. We treat delayed outcomes as missing and propose mitigating the delay effect by imputing them. We apply this approach to the doubly adaptive biased coin design (DBCD) for the motivating pediatric ulcerative colitis trial. We provide theoretical results showing that if the delays, although non-homogeneous, are reasonably short relative to the accrual process, as in the i.i.d. delay case, then the lag is asymptotically ignorable in the sense that a standard DBCD that uses only observed outcomes attains the target allocation ratios in the limit. Empirical studies, however, indicate that imputation-based DBCDs perform more reliably in finite samples, with smaller root mean square errors. The empirical studies assumed a common clinical setting in which a delayed outcome is positively correlated with a short-term predictor, similarly across treatment arms. We varied the strength of the correlation and considered fast and slow accrual settings. Copyright © 2014 John Wiley & Sons, Ltd.
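For readers unfamiliar with the design, the sketch below shows one widely used DBCD allocation function (the Hu and Zhang form, stated here as an assumption for illustration rather than as the exact variant or the imputation-based modification studied in the article): the next assignment probability pulls the observed allocation proportion toward the current estimated target.

    def dbcd_allocation_prob(x, y, gamma=2.0):
        """Probability of assigning the next subject to arm 1.

        x     : current observed proportion of subjects on arm 1
        y     : current estimated target allocation proportion for arm 1
        gamma : tuning parameter controlling how aggressively the design
                pushes the observed proportion toward the target
        """
        if x <= 0.0:
            return 1.0
        if x >= 1.0:
            return 0.0
        num = y * (y / x) ** gamma
        den = num + (1 - y) * ((1 - y) / (1 - x)) ** gamma
        return num / den

    # Example: 60% of subjects are already on arm 1 but the estimated target is 50%,
    # so the next subject is assigned to arm 1 with probability below 0.5.
    print(dbcd_allocation_prob(x=0.6, y=0.5))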

16.
Multiple-treatments meta-analyses are increasingly used to evaluate the relative effectiveness of several competing regimens. In fields that evolve through the continuous introduction of new agents over time, trials comparing older with newer regimens may exaggerate the effectiveness of the latter. Optimism bias, conflicts of interest, and other forces may be responsible for this exaggeration, but its magnitude and impact, if any, need to be formally assessed in each case. Whereas such novelty bias is not identifiable in a pair-wise meta-analysis, it is possible to explore it in a network of trials involving several treatments. To evaluate the hypothesis of novel agent effects and adjust for them, we developed a multiple-treatments meta-regression model fitted within a Bayesian framework. When there are several multiple-treatments meta-analyses for diverse conditions within the same field or specialty and similar agents are involved, one may consider either different novel agent effects in each meta-analysis or effects that are exchangeable across the different conditions and outcomes. As an application, we evaluate the impact of modelling and adjusting for novel agent effects for chemotherapy and other non-hormonal systemic treatments for three malignancies. We present the results and the impact of different model assumptions on the relative ranking of the various regimens in each network. We established that multiple-treatments meta-regression is a good method for examining whether novel agent effects are present; estimation of their magnitude in the three worked examples suggests an exaggeration of the hazard ratio by 6 per cent (2–11 per cent). Copyright © 2010 John Wiley & Sons, Ltd.

17.
In epidemiology, cohort studies utilised to monitor and assess disease status and progression often result in short-term and sparse follow-up data. Thus, gaining an understanding of the full-term disease pathogenesis can be difficult, requiring shorter-term data from many individuals to be collated. We investigate and evaluate methods to construct and quantify the underlying long-term longitudinal trajectories of disease markers using short-term follow-up data, specifically applied to Alzheimer's disease. We generate individuals' follow-up data to investigate approaches to this problem, adopting a four-step modelling approach that (i) determines individual slopes and anchor points for their short-term trajectories, (ii) fits polynomials to these slopes and anchor points, (iii) integrates the reciprocals of the fitted polynomials, and (iv) inverts the resulting curve, providing an estimate of the underlying longitudinal trajectory. To alleviate the potential problem of roots of the polynomials falling into the region over which we integrate, we propose the use of non-negative polynomials in Step 2. We demonstrate that our approach can construct underlying sigmoidal trajectories from individuals' sparse, short-term follow-up data. Furthermore, to determine an optimal methodology, we consider variations to our modelling approach, including contrasting linear mixed effects regression with linear regression in Step 1 and investigating different orders of polynomials in Step 2. Cubic polynomials provided more accurate results, and there were negligible differences between the regression methodologies. We use bootstrap confidence intervals to quantify the variability in our estimates of the underlying longitudinal trajectory and apply these methods to data from the Alzheimer's Disease Neuroimaging Initiative to demonstrate their practical use. Copyright © 2017 John Wiley & Sons, Ltd.
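The four steps can be sketched end to end on synthetic data (a simplification of ours, not the article's code: slopes are taken as given rather than estimated from follow-up data, a quadratic rather than cubic polynomial is used, and a logistic-shaped truth is assumed).

    import numpy as np

    # Step 1 (given here rather than estimated): anchor marker values and the
    # short-term slopes observed at those values; the assumed true process is
    # logistic, dy/dt = y (1 - y), so slopes peak mid-trajectory.
    anchors = np.linspace(0.05, 0.95, 15)
    slopes = anchors * (1 - anchors)

    # Step 2: fit a polynomial to slope as a function of marker value.
    slope_poly = np.poly1d(np.polyfit(anchors, slopes, deg=2))

    # Step 3: integrate the reciprocal polynomial, t(y) = integral of dy / p(y),
    # on a fine grid (cumulative trapezoid rule for simplicity).
    y_grid = np.linspace(0.05, 0.95, 2000)
    dt_dy = 1.0 / slope_poly(y_grid)
    t_of_y = np.concatenate([[0.0], np.cumsum(np.diff(y_grid) * 0.5 * (dt_dy[:-1] + dt_dy[1:]))])

    # Step 4: invert t(y) to obtain the long-term trajectory y(t).
    t_grid = np.linspace(t_of_y[0], t_of_y[-1], 200)
    y_of_t = np.interp(t_grid, t_of_y, y_grid)
    print(y_of_t[:5], y_of_t[-5:])   # rises sigmoidally from ~0.05 to ~0.95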

18.
19.
Nonresponse and missing data are common in observational studies. Ignoring or inadequately handling missing data may lead to biased parameter estimates, incorrect standard errors and, as a consequence, incorrect statistical inference and conclusions. We present a strategy for modelling non-ignorable missingness in which the probability of nonresponse depends on the outcome. Using a simple case of logistic regression, we quantify the bias in regression estimates and show that the observed likelihood is non-identifiable under a non-ignorable missing data mechanism. We then adopt a selection-model factorisation of the joint distribution as the basis for a sensitivity analysis to study changes in estimated parameters and the robustness of study conclusions under different assumptions. A Bayesian framework is used for model estimation, as it provides a flexible approach for incorporating different missing data assumptions and conducting sensitivity analysis. Using simulated data, we explore the performance of the Bayesian selection model in correcting for bias in a logistic regression. We then implement our strategy using survey data from the 45 and Up Study to investigate factors associated with worsening health from the baseline to the follow-up survey. Our findings have practical implications for the use of the 45 and Up Study data to answer important research questions relating to health and quality of life. Copyright © 2017 John Wiley & Sons, Ltd.
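A tiny simulation (ours, with made-up parameters; not the 45 and Up analysis) shows the starting point of the problem: when nonresponse depends on a binary outcome, a complete-case logistic regression recovers a distorted intercept, and with it distorted predicted probabilities, even in this simple setting where the slope happens to be largely preserved.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(7)
    n = 20000
    x = rng.normal(size=n)
    y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 1.0 * x))))   # true intercept -1, true slope 1

    # Non-ignorable missingness: subjects with y = 1 are far more likely to be missing.
    p_missing = np.where(y == 1, 0.6, 0.1)
    observed = rng.random(n) > p_missing

    fit = LogisticRegression().fit(x[observed].reshape(-1, 1), y[observed])
    # The intercept is pulled well below the true value of -1 (roughly by log(0.4/0.9)),
    # so naive predicted probabilities understate the outcome prevalence.
    print("complete-case intercept:", fit.intercept_[0])
    print("complete-case slope:", fit.coef_[0, 0])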

20.
Measurement error is common in epidemiological and biomedical studies. When biomarkers are measured in batches or groups, measurement error is potentially correlated within each batch or group. In regression analysis, most existing methods are not applicable in the presence of batch-specific measurement error in predictors. We propose a robust conditional likelihood approach to account for batch-specific error in predictors when the batch effect is additive and is the predominant source of error; the approach requires no assumptions on the distribution of the measurement error. Although a regression model with batch as a categorical covariate yields the same parameter estimates as the proposed conditional likelihood approach for linear regression, this result does not hold in general for all generalized linear models, in particular logistic regression. Our simulation studies show that the conditional likelihood approach achieves better finite-sample performance than the regression calibration approach or a naive approach without adjustment for measurement error. In the case of logistic regression, the proposed approach is also shown to outperform the regression approach with batch as a categorical covariate. In addition, we examine a 'hybrid' approach combining the conditional likelihood method and the regression calibration method, which is shown in simulations to achieve good performance in the presence of both batch-specific and measurement-specific errors. We illustrate our method using data from a colorectal adenoma study. Copyright © 2012 John Wiley & Sons, Ltd.
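The linear-regression special case noted above can be demonstrated directly (a simulated sketch of ours, not the article's colorectal adenoma analysis): with an additive batch shift in the predictor, a naive fit is badly attenuated, while centering the predictor and outcome within batch, which is numerically what including batch as a categorical covariate does, recovers the true slope.

    import numpy as np

    rng = np.random.default_rng(3)
    n_batches, per_batch = 30, 10
    batch = np.repeat(np.arange(n_batches), per_batch)
    x_true = rng.normal(size=n_batches * per_batch)
    batch_shift = rng.normal(scale=2.0, size=n_batches)[batch]   # additive batch-specific error
    x_obs = x_true + batch_shift                                 # what the lab actually reports
    y = 1.5 * x_true + rng.normal(scale=0.5, size=n_batches * per_batch)

    # Naive regression on the error-contaminated predictor is attenuated toward zero.
    naive_slope = np.polyfit(x_obs, y, 1)[0]

    # Within-batch centering removes the additive batch component exactly.
    x_c = x_obs - np.array([x_obs[batch == b].mean() for b in range(n_batches)])[batch]
    y_c = y - np.array([y[batch == b].mean() for b in range(n_batches)])[batch]
    centered_slope = (x_c @ y_c) / (x_c @ x_c)

    print(naive_slope, centered_slope)   # strongly attenuated vs. close to the true 1.5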
