Similar Articles
20 similar articles found
1.
In this paper, we investigate the effects of poverty and inequality on the number of HIV-related deaths in 62 New York counties via Bayesian zero-inflated Poisson models that exhibit spatial dependence. We quantify inequality via the Theil index and poverty via the ratios of two Census 2000 variables, the number of people under the poverty line and the number of people for whom poverty status is determined, in each Zip Code Tabulation Area. The purpose of this study was to investigate the effects of inequality and poverty in addition to spatial dependence between neighboring regions on HIV mortality rate, which can lead to improved health resource allocation decisions. In modeling county-specific HIV counts, we propose Bayesian zero-inflated Poisson models whose rates are functions of both covariate and spatial/random effects. To show how the proposed models work, we used three different publicly available data sets: TIGER Shapefiles, Census 2000, and mortality index files. In addition, we introduce parameter estimation issues of Bayesian zero-inflated Poisson models and discuss MCMC method implications. Copyright © 2012 John Wiley & Sons, Ltd.
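The zero-inflated Poisson density at the core of this abstract is easy to sketch. The snippet below is a minimal illustration of the ZIP probability mass function with a log-linear rate, not the authors' spatial Bayesian model; the covariate name `poverty` and the coefficient values are hypothetical stand-ins.

```python
import numpy as np
from scipy.stats import poisson

def zip_pmf(y, lam, pi):
    """Zero-inflated Poisson pmf: P(Y=0) = pi + (1-pi)*exp(-lam),
    P(Y=y) = (1-pi) * Poisson(y; lam) for y > 0."""
    y = np.asarray(y)
    base = (1 - pi) * poisson.pmf(y, lam)
    return np.where(y == 0, pi + base, base)

# A county-level rate driven by a covariate via a log link
# (hypothetical coefficients, for illustration only):
beta0, beta1, poverty = -0.5, 2.0, 0.3
lam = np.exp(beta0 + beta1 * poverty)   # Poisson rate for this county
probs = zip_pmf(np.arange(5), lam, pi=0.4)
```

The excess-zero probability `pi` would itself be modeled (and spatial random effects added to the linear predictor) in the paper's full hierarchical formulation.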

2.
A widely used method in classic random-effects meta-analysis is the DerSimonian–Laird method. An alternative meta-analytical approach is the Hartung–Knapp method. This article reports results of an empirical comparison and a simulation study of these two methods and presents corresponding analytical results. For the empirical evaluation, we took 157 meta-analyses with binary outcomes, analysed each one using both methods and performed a comparison of the results based on treatment estimates, standard errors and associated P-values. In several simulation scenarios, we systematically evaluated coverage probabilities and confidence interval lengths. Generally, results are more conservative with the Hartung–Knapp method, giving wider confidence intervals and larger P-values for the overall treatment effect. However, in some meta-analyses with very homogeneous individual treatment results, the Hartung–Knapp method yields narrower confidence intervals and smaller P-values than the classic random-effects method, which, in this situation, actually reduces to a fixed-effect meta-analysis. Therefore, it is recommended to conduct a sensitivity analysis based on the fixed-effect model instead of solely relying on the result of the Hartung–Knapp random-effects meta-analysis. Copyright © 2016 John Wiley & Sons, Ltd.
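Both estimators compared above have standard closed forms. The sketch below (a minimal NumPy version of the textbook formulas, not the paper's empirical study) shows why Hartung–Knapp can yield narrower intervals for very homogeneous studies: its variance estimate is driven entirely by observed deviations about the pooled mean.

```python
import numpy as np

def dersimonian_laird(yi, vi):
    """Classic random-effects meta-analysis: method-of-moments tau^2
    plus the Wald-type standard error of the pooled estimate."""
    wi = 1.0 / vi
    yw = np.sum(wi * yi) / np.sum(wi)               # fixed-effect estimate
    Q = np.sum(wi * (yi - yw) ** 2)                 # Cochran's Q
    c = np.sum(wi) - np.sum(wi ** 2) / np.sum(wi)
    tau2 = max(0.0, (Q - (len(yi) - 1)) / c)        # truncated at zero
    wstar = 1.0 / (vi + tau2)                       # random-effects weights
    mu = np.sum(wstar * yi) / np.sum(wstar)
    se_dl = np.sqrt(1.0 / np.sum(wstar))
    return mu, se_dl, tau2, wstar

def hartung_knapp_se(yi, mu, wstar):
    """Hartung-Knapp variance: weighted squared deviations about mu;
    the corresponding interval uses t-quantiles with k-1 df."""
    k = len(yi)
    q = np.sum(wstar * (yi - mu) ** 2) / ((k - 1) * np.sum(wstar))
    return np.sqrt(q)
```

With perfectly homogeneous effect estimates, `hartung_knapp_se` returns 0 while the DerSimonian–Laird standard error stays positive, which is exactly the anomaly motivating the abstract's recommendation of a fixed-effect sensitivity analysis.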

3.
Our aim is to develop a rich and coherent framework for modeling correlated time-to-event data, including (1) survival regression models with different links and (2) flexible modeling for time-dependent and nonlinear effects with rich postestimation. We extend the class of generalized survival models, which expresses a transformed survival in terms of a linear predictor, by incorporating a shared frailty or random effects for correlated survival data. The proposed approach can include parametric or penalized smooth functions for time, time-dependent effects, nonlinear effects, and their interactions. The maximum (penalized) marginal likelihood method is used to estimate the regression coefficients and the variance for the frailty or random effects. The optimal smoothing parameters for the penalized marginal likelihood estimation can be automatically selected by a likelihood-based cross-validation criterion. For models with normal random effects, Gauss-Hermite quadrature can be used to obtain the cluster-level marginal likelihoods. The Akaike Information Criterion can be used to compare models and select the link function. We have implemented these methods in the R package rstpm2. In simulations with both small and larger clusters, we find that this approach performs well. Through two applications, we demonstrate (1) a comparison of proportional hazards and proportional odds models with random effects for clustered survival data and (2) the estimation of time-varying effects on the log-time scale, age-varying effects for a specific treatment, and two-dimensional splines for time and age.

4.
Studies of HIV dynamics in AIDS research are very important in understanding the pathogenesis of HIV-1 infection and also in assessing the effectiveness of antiviral therapies. Nonlinear mixed-effects (NLME) models have been used for modeling between-subject and within-subject variations in viral load measurements. Normality of both the within-subject random errors and the random effects is a routine assumption for NLME models, but it may be unrealistic, obscuring important features of between-subject and within-subject variations, particularly if the data exhibit skewness. In this paper, we develop a Bayesian approach to NLME models and relax the normality assumption by considering both model random errors and random effects to have a multivariate skew-normal distribution. The proposed model provides flexibility in capturing a broad range of non-normal behavior and includes normality as a special case. We use a real data set from an AIDS study to illustrate the proposed approach by comparing various candidate models. We find that the model with skew-normality provides a better fit to the observed data, and the corresponding estimates of parameters are significantly different from those based on the model with normality when skewness is present in the data. These findings suggest that it is very important to use a model with a skew-normal distribution in order to achieve robust and reliable results, in particular when the data exhibit skewness. Copyright © 2010 John Wiley & Sons, Ltd.
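As a toy illustration of why the skew-normal family helps (this is not the authors' Bayesian NLME model), one can fit both a normal and a skew-normal distribution to skewed residuals and compare log-likelihoods; the skew-normal contains the normal as the special case where the shape parameter is zero.

```python
import numpy as np
from scipy.stats import norm, skewnorm

rng = np.random.default_rng(42)
# Simulated right-skewed "residuals" (shape parameter a=4 is arbitrary):
resid = skewnorm.rvs(a=4, size=500, random_state=rng)

# Fit both candidate error distributions by maximum likelihood:
a_hat, loc_s, scale_s = skewnorm.fit(resid)
loc_n, scale_n = norm.fit(resid)

ll_skew = np.sum(skewnorm.logpdf(resid, a_hat, loc_s, scale_s))
ll_norm = np.sum(norm.logpdf(resid, loc_n, scale_n))
```

When the data really are skewed, the skew-normal fit attains a visibly higher log-likelihood, mirroring the abstract's finding that the skew-normal model fits better whenever skewness is present.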

5.
Tao Lu, Statistics in Medicine, 2017, 36(16): 2614-2629
In AIDS studies, heterogeneous between-subject and within-subject variations are often observed on longitudinal endpoints. To accommodate heteroscedasticity in the longitudinal data, statistical methods have been developed to model the mean and variance jointly. Most of these methods assume (conditional) normal distributions for random errors, which is not realistic in practice. In this article, we propose a Bayesian mixed-effects location scale model with a skew-t distribution and mismeasured covariates for heterogeneous longitudinal data with skewness. The proposed model captures the between-subject and within-subject (WS) heterogeneity by modeling the between-subject and WS variations with covariates as well as a random effect at the subject level in the WS variance. Further, the proposed model also takes into account the covariate measurement errors, and the commonly assumed normal distributions for model errors are replaced by a skew-t distribution to account for skewness. Parameter estimation is carried out in a Bayesian framework. The proposed method is illustrated with a Multicenter AIDS Cohort Study. Simulation studies are performed to assess the performance of the proposed method. Copyright © 2017 John Wiley & Sons, Ltd.

6.
Multivariate interval-censored failure time data arise commonly in many studies of epidemiology and biomedicine. Analysis of this type of data is more challenging than that of right-censored data. We propose a simple multiple imputation strategy to recover the order of occurrences based on the interval-censored event times using a conditional predictive distribution function derived from a parametric gamma random effects model. By imputing the interval-censored failure times, the estimation of the regression and dependence parameters in the context of a gamma frailty proportional hazards model using the well-developed EM algorithm is made possible. A robust estimator for the covariance matrix is suggested to adjust for the possible misspecification of the parametric baseline hazard function. The finite sample properties of the proposed method are investigated via simulation. The performance of the proposed method is highly satisfactory, whereas the computation burden is minimal. The proposed method is also applied to the diabetic retinopathy study (DRS) data for illustration purposes, and the estimates are compared with those based on other existing methods for bivariate grouped survival data. Copyright © 2010 John Wiley & Sons, Ltd.
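The imputation step described above can be sketched in a much simplified form: draw an event time inside its censoring interval by inverse-CDF sampling from a truncated distribution. The paper imputes from a conditional predictive distribution under a gamma frailty model; the plain exponential used here is only an illustrative stand-in.

```python
import numpy as np

def impute_interval(L, R, lam, rng):
    """Draw event times from Exponential(lam) truncated to [L, R)
    via the inverse-CDF method: map U(0,1) onto the CDF slice [F(L), F(R))."""
    L, R = np.asarray(L, float), np.asarray(R, float)
    u = rng.uniform(size=L.shape)
    FL = 1.0 - np.exp(-lam * L)
    FR = 1.0 - np.exp(-lam * R)
    return -np.log(1.0 - (FL + u * (FR - FL))) / lam

rng = np.random.default_rng(0)
L = np.array([1.0, 2.0])   # left endpoints of censoring intervals
R = np.array([3.0, 5.0])   # right endpoints
T = impute_interval(L, R, lam=0.4, rng=rng)  # imputed event times
```

Repeating this draw across subjects and imputations yields completed datasets on which a standard frailty proportional hazards fit (the paper's EM step) can run.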

7.
The objective of this study was to develop a robust non-linear mixed model for prostate-specific antigen (PSA) measurements after a high-intensity focused ultrasound (HIFU) treatment for prostate cancer. The characteristics of these data are the presence of outlying values and non-normal random effects. A numerical study showed that parameter estimates can be biased if these characteristics are not taken into account. The intra-patient variability was described by a Student-t distribution, and Dirichlet process priors were assumed for the non-normal random effects; this limited the bias and provided more efficient parameter estimates than a classical mixed model with normal residuals and random effects. The model was applied to the determination of the best dynamic PSA criterion for the diagnosis of prostate cancer recurrence, but could be used in studies that rely on PSA data to improve prognosis or compare treatment efficacy, and also with other longitudinal biomarkers that, like PSA, present outlying values and non-normal random effects. Copyright © 2010 John Wiley & Sons, Ltd.

8.
The quantile approximation method has recently been proposed as a simple method for deriving confidence intervals for the treatment effect in a random effects meta-analysis. Although the method is easily implemented, the quantiles used to construct intervals are derived from a single simulation study. Here it is shown that altering the study parameters, and in particular introducing changes to the distribution of the within-study variances, can have a dramatic impact on the resulting quantiles. This is further illustrated analytically by examining the scenario where all trials are assumed to be the same size. A more cautious approach is therefore suggested, where the conventional standard normal quantile is used in the primary analysis, but where the use of alternative quantiles is also considered in a sensitivity analysis. Copyright © 2008 John Wiley & Sons, Ltd.

9.
In this paper, we consider a full likelihood method to analyze continuous longitudinal responses with non-ignorable non-monotone missing data. We consider a transition probability model for the missingness mechanism. A first-order Markov dependence structure is assumed for both the missingness mechanism and observed data. This process fits the natural data structure in the longitudinal framework. Our main interest is in estimating the parameters of the marginal model and evaluating the missing-at-random assumption in the Effects of Public Information Study, a cancer-related study recently conducted at the University of Pennsylvania. We also present a simulation study to assess the performance of the model. Copyright © 2012 John Wiley & Sons, Ltd.

10.
Mixed models incorporating spatially correlated random effects are often used for the analysis of areal data. In this setting, spatial smoothing is introduced at the second stage of a hierarchical framework, and this smoothing is often based on a latent Gaussian Markov random field. The Markov random field provides a computationally convenient framework for modeling spatial dependence; however, the Gaussian assumption underlying commonly used models can be overly restrictive in some applications. This can be a problem in the presence of outliers or discontinuities in the underlying spatial surface, and in such settings, models based on non-Gaussian spatial random effects are useful. Motivated by a study examining geographic variation in the treatment of acute coronary syndrome, we develop a robust model for smoothing small-area health service utilization rates. The model incorporates non-Gaussian spatial random effects, and we develop a formulation for skew-elliptical areal spatial models. We generalize the Gaussian conditional autoregressive model to the non-Gaussian case, allowing for asymmetric skew-elliptical marginal distributions having flexible tail behavior. The resulting new models are flexible, computationally manageable, and can be implemented in the standard Bayesian software WinBUGS. We demonstrate performance of the proposed methods and comparisons with other commonly used Gaussian and non-Gaussian spatial prior formulations through simulation and analysis in our motivating application, mapping rates of revascularization for patients diagnosed with acute coronary syndrome in Quebec, Canada. Copyright © 2012 John Wiley & Sons, Ltd.

11.
Autoregressive and cross-lagged models have been widely used to understand the relationship between bivariate commensurate outcomes in the social and behavioral sciences, but not much work has been carried out on modeling bivariate non-commensurate (e.g., mixed binary and continuous) outcomes simultaneously. We develop a likelihood-based methodology combining ordinary autoregressive and cross-lagged models with a shared subject-specific random effect in the mixed-model framework to model two correlated longitudinal non-commensurate outcomes. The estimates of the cross-lagged and the autoregressive effects from our model are shown to be consistent, with smaller mean-squared error than the estimates from the univariate generalized linear models. Inclusion of the subject-specific random effects in the proposed model accounts for between-subject variability arising from the omitted and/or unobservable, but possibly explanatory, subject-level predictors. Our model is not restricted to the case with an equal number of events per subject, and it can be extended to different types of bivariate outcomes. We apply our model to an ecological momentary assessment study with complex dependence and sampling data structures. Specifically, we study the dependence between condom use and sexual satisfaction based on the data reported in a longitudinal study of sexually transmitted infections. We find a negative cross-lagged effect between these two outcomes and a positive autoregressive effect within each outcome. Copyright © 2017 John Wiley & Sons, Ltd.

12.
Generalized linear mixed models have played an important role in the analysis of longitudinal data; however, traditional approaches have limited flexibility in accommodating skewness and complex correlation structures. In addition, the existing estimation approaches generally rely heavily on the specification of the random effects distribution; therefore, the corresponding inferences are sometimes sensitive to the choice of random effects distribution under certain circumstances. In this paper, we incorporate serially dependent distribution-free random effects into Tweedie generalized linear models to accommodate a wide range of skewness and covariance structures for discrete and continuous longitudinal data. An optimal estimation of our model has been developed using the orthodox best linear unbiased predictors of random effects. Our approach unifies population-averaged and subject-specific inferences. Our method is illustrated through the analyses of patient-controlled analgesia data and Framingham cholesterol data.

13.
Incomplete multi-level data arise commonly in many clinical trials and observational studies. Because of multi-level variations in this type of data, appropriate data analysis should take these variations into account. A random effects model can allow for the multi-level variations by assuming random effects at each level, but the computation is intensive because high-dimensional integrations are often involved in fitting models. Marginal methods such as the inverse probability weighted generalized estimating equations can involve simple estimation computation, but it is hard to specify the working correlation matrix for multi-level data. In this paper, we introduce a latent variable method to deal with incomplete multi-level data when the missing mechanism is missing at random, which fills the gap between the random effects model and marginal models. Latent variable models are built for both the response and missing data processes to incorporate the variations that arise at each level. Simulation studies demonstrate that this method performs well in various situations. We apply the proposed method to an Alzheimer's disease study. Copyright © 2012 John Wiley & Sons, Ltd.

14.
In many medical problems that collect multiple observations per subject, the time to an event is often of interest. Sometimes, the occurrence of the event can be recorded at regular intervals, leading to interval-censored data. It is further desirable to obtain the most parsimonious model in order to increase predictive power and ease of interpretation. Variable selection, and often random effects selection in the case of clustered data, becomes crucial in such applications. We propose a Bayesian method for random effects selection in mixed effects accelerated failure time (AFT) models. The proposed method relies on the Cholesky decomposition of the random effects covariance matrix and the parameter-expansion method for the selection of random effects. The Dirichlet prior is used to model the uncertainty in the random effects. The error distribution for the accelerated failure time model has been specified using a Gaussian mixture to allow a flexible error density and prediction of the survival and hazard functions. We demonstrate the model using extensive simulations and the Signal Tandmobiel Study®. Copyright © 2013 John Wiley & Sons, Ltd.

15.
Bivariate random-effects meta-analysis (BVMA) is a method of data synthesis that accounts for treatment effects measured on two outcomes. BVMA gives more precise estimates of the population mean and predicted values than two univariate random-effects meta-analyses (UVMAs). BVMA also addresses bias from incomplete reporting of outcomes. A few tutorials have covered technical details of BVMA of categorical or continuous outcomes. Limited guidance is available on how to analyze datasets that include trials with mixed continuous-binary outcomes where treatment effects on one outcome or the other are not reported. Given the advantages of Bayesian BVMA for handling missing outcomes, we present a tutorial for Bayesian BVMA of incompletely reported treatment effects on mixed bivariate outcomes. This step-by-step approach can serve as a model for our intended audience, the methodologist familiar with Bayesian meta-analysis, looking for practical advice on fitting bivariate models. To facilitate application of the proposed methods, we include our WinBUGS code. As an example, we use aggregate-level data from published trials to demonstrate the estimation of the effects of vitamin K and bisphosphonates on two correlated bone outcomes, fracture and bone mineral density. We present datasets where reporting of the pairs of treatment effects on both outcomes was 'partially' complete (i.e., pairs completely reported in some trials), and we outline steps for modeling the incompletely reported data. To assess what is gained from the additional work required by BVMA, we compare the resulting estimates to those from separate UVMAs. We discuss methodological findings and make four recommendations. Copyright © 2015 John Wiley & Sons, Ltd.

16.
The multivariate nonlinear mixed-effects model (MNLMM) has emerged as an effective tool for modeling multi-outcome longitudinal data following nonlinear growth patterns. In the framework of the MNLMM, the random effects and within-subject errors are assumed to be normally distributed for mathematical tractability and computational simplicity. However, a serious departure from normality may cause a lack of robustness and subsequently invalidate inference. This paper presents a robust extension of the MNLMM by considering a joint multivariate t distribution for the random effects and within-subject errors, called the multivariate t nonlinear mixed-effects model. Moreover, a damped exponential correlation structure is employed to capture the extra serial correlation among irregularly observed multiple repeated measures. An efficient expectation conditional maximization algorithm coupled with the first-order Taylor approximation is developed for maximizing the complete pseudo-data likelihood function. The techniques for the estimation of random effects, imputation of missing responses and identification of potential outliers are also investigated. The methodology is motivated by a real data example on 161 pregnant women from a study in a private fertilization obstetrics clinic in Santiago, Chile, and is used to analyze these data. Copyright © 2014 John Wiley & Sons, Ltd.

17.
In this research article, we propose a class of models for positive and zero responses by means of a zero-augmented mixed regression model. Under this class, we are particularly interested in studying positive responses whose distribution accommodates skewness. At the same time, responses can be zero, and therefore we justify the use of a zero-augmented mixture model. We model the mean of the positive response on a logarithmic scale and the mixture probability on a logit scale, both as functions of fixed and random effects. Moreover, the random effects link the two random components through their joint distribution and incorporate within-subject correlation because of the repeated measurements as well as between-subject heterogeneity. A Markov chain Monte Carlo algorithm is tailored to obtain Bayesian posterior distributions of the unknown quantities of interest, and Bayesian case-deletion influence diagnostics based on the q-divergence measure are performed. We apply the proposed method to a dataset from a 24-hour dietary recall study conducted in the city of São Paulo and present a simulation study to evaluate the performance of the proposed methods. Copyright © 2015 John Wiley & Sons, Ltd.
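The zero-augmented likelihood described above can be sketched in a stripped-down form: a point mass at zero mixed with a positive continuous distribution. The log-normal used for the positive part here is an assumed example; the paper's skewed positive component, random effects, and MCMC machinery are not reproduced.

```python
import numpy as np
from scipy.stats import lognorm

def zero_augmented_loglik(y, p_zero, mu, sigma):
    """Log-likelihood of a zero-augmented log-normal: with probability
    p_zero the response is exactly 0, otherwise it is LogNormal(mu, sigma)."""
    y = np.asarray(y, dtype=float)
    ll_zero = np.log(p_zero) * np.sum(y == 0)            # zero part
    pos = y[y > 0]                                        # positive part
    ll_pos = np.sum(np.log1p(-p_zero)
                    + lognorm.logpdf(pos, s=sigma, scale=np.exp(mu)))
    return ll_zero + ll_pos
```

In the full model, `p_zero` sits on a logit scale and `mu` on a log scale, each a function of fixed and random effects, so the two parts share information through the joint random-effects distribution.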

18.
Parent-of-origin effects have been pointed out as one plausible source of the heritability left unexplained by genome-wide association studies. Here, we consider a case-control mother-child pair design for studying parent-of-origin effects of offspring genes on neonatal/early-life disorders or pregnancy-related conditions. In contrast to the standard case-control design, the case-control mother-child pair design contains valuable parental information and therefore permits powerful assessment of parent-of-origin effects. Suppose the region under study is in Hardy-Weinberg equilibrium, inheritance is Mendelian at the diallelic locus under study, there is random mating in the source population, and the SNP under study is not related to risk for the phenotype under study because of linkage disequilibrium (LD) with other SNPs. Using a maximum likelihood method that simultaneously assesses likely parental sources and estimates effect sizes of the two offspring genotypes, we investigate the extent of power increase for testing parent-of-origin effects through the incorporation of genotype data for adjacent markers that are in LD with the test locus. Our method does not need to assume the outcome is rare because it exploits supplementary information on phenotype prevalence. Analysis with simulated SNP data indicates that incorporating genotype data for adjacent markers greatly helps recover the parent-of-origin information. This recovery can sometimes substantially improve statistical power for detecting parent-of-origin effects. We demonstrate our method by examining parent-of-origin effects of the gene PPARGC1A on low birth weight using data from 636 mother-child pairs in the Jerusalem Perinatal Study.

19.
A multiple-objective allocation strategy was recently proposed for constructing response-adaptive repeated measurement designs for continuous responses. We extend the allocation strategy to constructing response-adaptive repeated measurement designs for binary responses. The approach with binary responses is quite different from the continuous case, as the information matrix is a function of responses, and it involves nonlinear modeling. To deal with these problems, we first build the design on the basis of success probabilities. Then we illustrate how various models can accommodate carryover effects on the basis of logits of response profiles as well as any correlation structure. Through computer simulations, we find that the allocation strategy developed for continuous responses also works well for binary responses. As expected, design efficiency in terms of mean squared error drops sharply as more emphasis is placed on increasing treatment benefit than on estimation precision. However, we find that the strategy can successfully allocate more patients to better treatment sequences without sacrificing much estimation precision. Copyright © 2013 John Wiley & Sons, Ltd.

20.
Identifying unusual growth-related measurements or longitudinal patterns in growth is often the focus in fetal and pediatric medicine. For example, the goal of the ongoing National Fetal Growth Study is to develop both cross-sectional and longitudinal reference curves for ultrasound fetal growth measurements that can be used for this purpose. Current methodology for estimating cross-sectional and longitudinal reference curves relies mainly on the linear mixed model. The focus of this paper is on examining the robustness of percentile estimation to the Gaussian random-effects assumption implicitly made in the standard linear mixed model. We also examine a random-effects distribution based on mixtures of normals and compare the two approaches under both correct and misspecified random-effects distributions. In general, we find that the standard linear mixed model is relatively robust for cross-sectional percentile estimation but less robust for longitudinal or 'personalized' reference curves based on the conditional distribution given prior ultrasound measurements. The methodology is illustrated with data from a longitudinal fetal growth study. Published 2012. This article is a US Government work and is in the public domain in the USA.
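The 'personalized' reference-curve idea rests on the conditional distribution of the current measurement given prior ones. A minimal sketch under a bivariate normal assumption (a deliberate simplification of the paper's linear mixed model, with hypothetical mean and covariance inputs) is:

```python
import numpy as np
from scipy.stats import norm

def conditional_percentile(y_prior, mu, Sigma, q=0.5):
    """q-th percentile of the current measurement given the prior one,
    assuming (prior, current) are jointly bivariate normal with mean
    vector mu and 2x2 covariance Sigma."""
    mu1, mu2 = mu
    s11, s12, s22 = Sigma[0, 0], Sigma[0, 1], Sigma[1, 1]
    cond_mean = mu2 + (s12 / s11) * (y_prior - mu1)   # regression toward mu2
    cond_var = s22 - s12 ** 2 / s11                   # residual variance
    return cond_mean + np.sqrt(cond_var) * norm.ppf(q)
```

Misspecifying the random-effects distribution distorts exactly this conditional step, which is why the paper finds personalized curves less robust than cross-sectional ones.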
