Similar Articles
20 similar articles found.
1.
Using original data that we have collected on referral relations between 110 hospitals serving a large regional community, we show how recently derived Bayesian exponential random graph models may be adopted to illuminate core empirical issues in research on relational coordination among healthcare organisations. We show how a rigorous Bayesian computation approach supports a fully probabilistic analytical framework that alleviates well‐known problems in the estimation of model parameters of exponential random graph models. We also show how the main structural features of interhospital patient referral networks that prior studies have described can be reproduced with accuracy by specifying the system of local dependencies that produce – but at the same time are induced by – decentralised collaborative arrangements between hospitals. Copyright © 2017 John Wiley & Sons, Ltd.
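For orientation, an exponential random graph model of the kind used above assigns probabilities of the standard form (notation ours, not reproduced from the article):

\[ P_\theta(Y = y) = \frac{\exp\{\theta^\top s(y)\}}{\kappa(\theta)}, \]

where y is the observed referral network, s(y) collects network statistics encoding the local dependencies, and the intractable normalising constant \kappa(\theta) is what makes estimation difficult and motivates the Bayesian computation discussed in the abstract.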

2.
Epidemiologic and clinical studies routinely collect longitudinal measures of multiple outcomes, including biomarker measures, cognitive functions, and clinical symptoms. These longitudinal outcomes can be used to establish the temporal order of relevant biological processes and their association with the onset of clinical symptoms. Univariate change point models have been used to model various clinical endpoints, such as CD4 count in studying the progression of HIV infection and cognitive function in the elderly. We propose to use bivariate change point models for two longitudinal outcomes with a focus on the correlation between the two change points. We consider three types of change point models in the bivariate model setting: the broken‐stick model, the Bacon–Watts model, and the smooth polynomial model. We adopt a Bayesian approach using a Markov chain Monte Carlo sampling method for parameter estimation and inference. We assess the proposed methods in simulation studies and demonstrate the methodology using data from a longitudinal study of dementia. Copyright © 2012 John Wiley & Sons, Ltd.
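As a point of reference, the broken‐stick model mentioned above is commonly written as (a standard parameterisation, not necessarily the authors' exact one)

\[ E[Y_{ij}] = \beta_0 + \beta_1 t_{ij} + \beta_2 (t_{ij} - \tau)_+ , \]

where \tau is the change point and (u)_+ = \max(u, 0). In the bivariate setting, two such trajectories are linked through the joint distribution of their change points (\tau_1, \tau_2), whose correlation is the quantity of primary interest.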

3.
Multi‐state models generalize survival or duration time analysis to the estimation of transition‐specific hazard rate functions for multiple transitions. When each of the transition‐specific risk functions is parametrized with several distinct covariate effect coefficients, this leads to a model of potentially high dimension. To decrease the dimensionality of the parameter space and to obtain a clearer picture of the underlying multi‐state model structure, one can either aim at setting some coefficients to zero or at making coefficients for the same covariate but two different transitions equal. The first goal can be approached by penalizing the absolute values of the covariate coefficients, as in lasso regularization. If, instead, absolute differences between coefficients of the same covariate on different transitions are penalized, this leads to sparse competing risk relations within a multi‐state model, that is, equality of covariate effect coefficients. In this paper, a new estimation approach providing sparse multi‐state modelling by the aforementioned principles is established, based on the estimation of multi‐state models and a simultaneous structured penalization of the L1‐norm of covariate coefficients and their differences. The new multi‐state modelling approach is illustrated on peritoneal dialysis study data and implemented in the R package penMSM. Copyright © 2016 John Wiley & Sons, Ltd.
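Schematically, with \beta_q^{(j)} denoting the coefficient of covariate q on transition j, the structured penalty described above takes a fused-lasso-type form (our sketch of the principle, not the paper's exact notation):

\[ \lambda_1 \sum_{j} \sum_{q} \bigl|\beta_q^{(j)}\bigr| + \lambda_2 \sum_{q} \sum_{j < k} \bigl|\beta_q^{(j)} - \beta_q^{(k)}\bigr| , \]

where the first term shrinks individual coefficients to zero and the second fuses coefficients of the same covariate across transitions.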

4.
When conducting a meta‐analysis of studies with bivariate binary outcomes, challenges arise when the within‐study correlation and between‐study heterogeneity should be taken into account. In this paper, we propose a marginal beta‐binomial model for the meta‐analysis of studies with binary outcomes. This model is based on the composite likelihood approach and has several attractive features compared with existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta‐binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities on the original scale, not requiring any transformation of probabilities or any link function, having a closed‐form likelihood function, and imposing no constraints on the correlation parameter. More importantly, because the marginal beta‐binomial model is based only on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study‐specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta‐binomial model with the bivariate generalized linear mixed model and the Sarmanov beta‐binomial model in simulation studies. Interestingly, the results show that the marginal beta‐binomial model performs better than the Sarmanov beta‐binomial model whether or not the true model is Sarmanov beta‐binomial, and the marginal beta‐binomial model is more robust than the bivariate generalized linear mixed model under model misspecification. Two meta‐analyses of diagnostic accuracy studies and a meta‐analysis of case–control studies are conducted for illustration. Copyright © 2015 John Wiley & Sons, Ltd.
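For concreteness, each margin is a beta‐binomial: if a study‐specific probability follows a Beta(\alpha, \beta) distribution, the number of events y out of n has the standard probability mass function (notation ours)

\[ P(Y = y) = \binom{n}{y} \frac{B(y + \alpha,\; n - y + \beta)}{B(\alpha, \beta)} , \]

and the composite likelihood multiplies the two marginal beta‐binomial likelihoods rather than specifying a joint distribution for the pair of study‐specific probabilities.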

5.
The random effect Tobit model is a regression model that accommodates left‐ and/or right‐censoring and within‐cluster dependence of the outcome variable. Regression coefficients of random effect Tobit models have conditional interpretations on a constructed latent dependent variable and do not provide inference about overall exposure effects on the original outcome scale. The marginalized random effects model (MREM) permits likelihood‐based estimation of marginal mean parameters for clustered data. For random effect Tobit models, we extend the MREM to marginalize over both the random effects and the normal space and boundary components of the censored response, in order to estimate overall exposure effects at the population level. We also extend the ‘Average Predicted Value’ method to estimate the model‐predicted marginal means for each person under different exposure status in a designated reference group, by integrating over the random effects, and then use the calculated difference to assess the overall exposure effect. Maximum likelihood estimation is carried out with a quasi‐Newton optimization algorithm, using Gauss–Hermite quadrature to approximate the integration over the random effects. We use these methods to carefully analyze two real datasets. Copyright © 2016 John Wiley & Sons, Ltd.
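The Gauss–Hermite step can be illustrated with a minimal, self-contained sketch (ours, not the authors' code); here `f` is a placeholder for a cluster's conditional likelihood contribution given the random effect:

```python
# Minimal sketch: Gauss-Hermite quadrature for E[f(b)], b ~ N(0, sigma^2),
# the kind of integral over a normal random effect that arises when fitting
# random effect Tobit models. `f` is a hypothetical placeholder function.
import numpy as np

def gh_expectation(f, sigma, n_nodes=20):
    """Approximate E[f(b)] for b ~ N(0, sigma^2) by Gauss-Hermite quadrature."""
    x, w = np.polynomial.hermite.hermgauss(n_nodes)  # nodes/weights for weight exp(-x^2)
    b = np.sqrt(2.0) * sigma * x                     # change of variables to N(0, sigma^2)
    return np.sum(w * f(b)) / np.sqrt(np.pi)

# Sanity check: E[exp(b)] = exp(sigma^2 / 2) for b ~ N(0, sigma^2).
sigma = 0.8
print(gh_expectation(np.exp, sigma), np.exp(sigma**2 / 2))  # should agree closely
```

In a full implementation, this quadrature would be evaluated inside each iteration of the quasi-Newton optimizer, once per cluster.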

6.
Birthweight and gestational age are closely related and represent important indicators of a healthy pregnancy. Customary modeling for birthweight is conditional on gestational age. However, joint modeling directly addresses the relationship between gestational age and birthweight, provides increased flexibility and interpretability, and offers a strategy to avoid using gestational age as an intermediate variable. Previous proposals have utilized finite mixtures of bivariate regression models to incorporate well‐established risk factors into the analysis (e.g. sex and birth order of the baby, maternal age, race, and tobacco use) while examining the non‐Gaussian shape of the joint birthweight and gestational age distribution. We build on this approach by demonstrating the inferential (prognostic) benefits of joint modeling (e.g. investigation of ‘age inappropriate’ outcomes like small for gestational age) and hence re‐emphasize the importance of capturing the non‐Gaussian distributional shapes. We additionally extend current models through a latent specification which admits interval‐censored gestational age. We work within a Bayesian framework, which enables inference beyond customary parameter estimation and prediction as well as exact uncertainty assessment. The model is applied to a portion of the 2003–2006 North Carolina Detailed Birth Record data (n = 336,129) available through the Children's Environmental Health Initiative and is fitted using Markov chain Monte Carlo methods. Copyright © 2010 John Wiley & Sons, Ltd.
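Schematically, such joint models take y_i = (birthweight, gestational age) to follow a finite mixture of bivariate normals (a generic form consistent with the description, not the paper's exact specification):

\[ f(y_i \mid x_i) = \sum_{k=1}^{K} \pi_k \, N_2\!\bigl(y_i;\, \mu_k(x_i), \Sigma_k\bigr), \]

which captures the skewed, multi-modal joint shape that a single bivariate normal cannot; interval-censored gestational age is then handled by integrating the relevant component densities over the reporting interval.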

7.
Proportional hazards models are among the most popular regression models in survival analysis. Multi‐state models generalize them by jointly considering different types of events and their interrelations, whereas frailty models incorporate random effects to account for unobserved risk factors, possibly shared by clusters of subjects. The integration of multi‐state and frailty methodology is an interesting way to control for unobserved heterogeneity in the presence of complex event history structures and is particularly appealing for multicenter clinical trials. We propose the incorporation of correlated frailties in the transition‐specific hazard function through a nested hierarchy. We study a semiparametric estimation approach based on maximum integrated partial likelihood. We show in a simulation study that the nested frailty multi‐state model improves the estimation of covariate effects, as well as the coverage probability of their confidence intervals. We present a case study concerning a prostate cancer multicenter clinical trial, where the multi‐state nature of the model allows us to demonstrate the effect of treatment on death while taking intermediate events into account. Copyright © 2015 John Wiley & Sons, Ltd.
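One generic way to write a nested-frailty transition hazard of this kind (a schematic, not necessarily the authors' exact parameterisation) is

\[ \lambda_{hk}(t \mid u_i, v_{ihk}) = u_i \, v_{ihk} \, \lambda_{0,hk}(t) \exp\{\beta_{hk}^\top x\}, \]

with a centre-level frailty u_i shared across all transitions and a nested, transition-specific frailty v_{ihk} within each centre, the two levels together inducing the correlation structure described above.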

8.
Incomplete multi‐level data arise commonly in many clinical trials and observational studies. Because of multi‐level variations in this type of data, appropriate data analysis should take these variations into account. A random effects model can allow for the multi‐level variations by assuming random effects at each level, but the computation is intensive because high‐dimensional integrations are often involved in fitting models. Marginal methods such as the inverse probability weighted generalized estimating equations can involve simple estimation computation, but it is hard to specify the working correlation matrix for multi‐level data. In this paper, we introduce a latent variable method to deal with incomplete multi‐level data when the missing mechanism is missing at random, which fills the gap between the random effects model and marginal models. Latent variable models are built for both the response and missing data processes to incorporate the variations that arise at each level. Simulation studies demonstrate that this method performs well in various situations. We apply the proposed method to an Alzheimer's disease study. Copyright © 2012 John Wiley & Sons, Ltd.
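A generic shared-latent-variable formulation of this kind (a schematic only; the paper's exact specification may differ) links the two processes through common level-specific latent variables a_i and b_{ij}:

\[ g\!\bigl(E[Y_{ijk}]\bigr) = x_{ijk}^\top \beta + a_i + b_{ij}, \qquad \operatorname{logit} P(R_{ijk} = 1) = z_{ijk}^\top \alpha + \gamma_1 a_i + \gamma_2 b_{ij}, \]

so the dependence between the response and the missingness indicator R_{ijk} at each level is carried by the latent variables rather than by a working correlation matrix.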

9.
Recent studies found that infection‐related hospitalization was associated with increased risk of cardiovascular (CV) events, such as myocardial infarction and stroke, in the dialysis population. In this work, we develop time‐varying effects modeling tools to examine CV outcome risk trajectories during the time periods before and after an initial infection‐related hospitalization. For this, we propose partly conditional and fully conditional partially linear generalized varying coefficient models (PL‐GVCMs) for modeling time‐varying effects in longitudinal data with substantial follow‐up truncation by death. Unconditional models implicitly target an immortal population and are not a relevant target of inference in applications involving a population with high mortality, like the dialysis population. A partly conditional model characterizes the outcome trajectory for the dynamic cohort of survivors, where each point in the longitudinal trajectory represents a snapshot of the population relationships among subjects who are alive at that time point. In contrast, a fully conditional approach models the time‐varying effects of the population stratified by the actual time of death, where the mean response characterizes individual trends in each cohort stratum. We compare and contrast partly and fully conditional PL‐GVCMs in our aforementioned application using hospitalization data from the United States Renal Data System. For inference, we develop generalized likelihood ratio tests. Simulation studies examine the efficacy of the estimation and inference procedures. Copyright © 2015 John Wiley & Sons, Ltd.
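In schematic form, a partly conditional PL-GVCM models the mean among survivors at each time point (notation ours):

\[ g\!\bigl(E[Y_i(t) \mid D_i > t]\bigr) = z_i^\top \beta + x_i^\top \gamma(t), \]

with parametric effects \beta and smooth time-varying effects \gamma(t); the fully conditional version instead conditions on the realised death time, modelling E[Y_i(t) \mid D_i = d] within each stratum defined by d.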

10.
Multi‐state transition models are widely applied tools for analyzing individual event histories in the medical or social sciences. In this paper, we propose the use of (discrete‐time) competing‐risks duration models to analyze multi‐transition data. Unlike conventional Markov transition models, these models allow the estimated transition probabilities to depend on the time spent in the current state. Moreover, the models can be readily extended to allow for correlated transition probabilities. A further virtue of these models is that they can be estimated using conventional regression tools for discrete‐response data, such as the multinomial logit model, which is implemented in many statistical software packages and can be readily applied by empirical researchers. Model estimation is feasible even for very large data sets, while simultaneously allowing for a flexible form of duration dependence and correlation between transition probabilities. We derive the likelihood function for a model with three competing target states and discuss a feasible and readily applicable estimation method. We also present results from a simulation study, which indicate adequate performance of the proposed approach. In an empirical application, we analyze dementia patients' transition probabilities from the domestic setting, taking into account several covariates, some of them duration‐dependent. Copyright © 2014 John Wiley & Sons, Ltd.
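Concretely, with three competing target states, the discrete-time transition probabilities take the familiar multinomial logit form (standard notation, not copied from the paper):

\[ P(\text{move to } j \mid \text{at risk at duration } d,\, x) = \frac{\exp\{\alpha_j(d) + x^\top \beta_j\}}{1 + \sum_{j'=1}^{3} \exp\{\alpha_{j'}(d) + x^\top \beta_{j'}\}}, \qquad j = 1, 2, 3, \]

where the duration terms \alpha_j(d) deliver the flexible duration dependence that a conventional Markov transition model lacks.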

11.
Zero‐inflated Poisson (ZIP) and zero‐inflated negative binomial (ZINB) models are widely used to model zero‐inflated count responses. These models extend the Poisson and negative binomial (NB) to address excessive zeros in the count response. By adding a degenerate distribution centered at 0 and interpreting it as describing a non‐risk group in the population, the ZIP (ZINB) models a two‐component population mixture. As in applications of the Poisson and NB, the key difference between ZIP and ZINB is that the ZINB allows for overdispersion in its NB component when modeling the count response for the at‐risk group. Overdispersion arising in practice all too often does not follow the NB, and applying the ZINB to such data yields invalid inference. If the sources of overdispersion are known, other parametric models may be used to model the overdispersion directly; such models, too, rest on distributional assumptions, and this approach is inapplicable when information about the sources of overdispersion is unavailable. In this paper, we propose a distribution‐free alternative and compare its performance with these popular parametric models as well as a moment‐based approach proposed by Yu et al. [Statistics in Medicine 2013; 32:2390–2405]. Like generalized estimating equations, the proposed approach requires no elaborate distributional assumptions. Compared with the approach of Yu et al., it is more robust to overdispersed zero‐inflated responses. We illustrate our approach with both simulated and real study data. Copyright © 2015 John Wiley & Sons, Ltd.
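For reference, the ZIP probability mass function underlying this discussion is (standard form)

\[ P(Y = 0) = \pi + (1 - \pi)\, e^{-\lambda}, \qquad P(Y = k) = (1 - \pi)\, \frac{e^{-\lambda} \lambda^k}{k!}, \quad k \ge 1, \]

with \pi the probability of belonging to the non-risk group; the ZINB replaces the Poisson component with an NB to absorb overdispersion in the at-risk group.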

12.
Two‐phase designs are commonly used to subsample subjects from a cohort in order to study covariates that are too expensive to ascertain for everyone in the cohort. This is particularly true for the study of immune response biomarkers in vaccine immunology, where new, elaborate assays are constantly being developed to improve our understanding of the human immune responses to vaccines and how the immune response may protect humans from virus infection. It has long been recognized that if there exist variables that are correlated with the expensive variables and can be measured for every subject in the cohort, they can be leveraged to improve the estimation efficiency for the effects of the expensive variables. In this article, we develop an improved inverse probability weighted estimation approach for semiparametric transformation models with a two‐phase study design. Semiparametric transformation models are a class of models that include the Cox proportional hazards and proportional odds models; they provide an attractive way to model the effects of immune response biomarkers, as human immune responses generally wane over time. Our approach is based on weights calibration, which has its origin in survey statistics and was used by Breslow et al. to improve inverse probability weighted estimation of the Cox regression model. We develop asymptotic theory for our estimator and examine its performance through simulation studies. We illustrate the proposed method with application to two HIV‐1 vaccine efficacy trials. Copyright © 2015 John Wiley & Sons, Ltd.
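In outline, the phase-two estimator solves an inverse-probability-weighted score equation (standard two-phase notation, not the paper's exact display):

\[ \sum_{i=1}^{n} \frac{R_i}{\pi_i} \, U_i(\beta) = 0, \]

where R_i indicates phase-two selection and \pi_i is the known sampling probability. Weight calibration replaces 1/\pi_i by adjusted weights chosen so that weighted phase-two totals of the cheap, fully observed variables match their known cohort totals, which is the source of the efficiency gain.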

13.
Simultaneous inference in longitudinal, repeated‐measures, and multi‐endpoint designs can be onerous, especially when trying to find a reasonable joint model from which the interesting effects and covariances are estimated. A novel statistical approach known as multiple marginal models greatly simplifies the modelling process: the core idea is to “marginalise” the problem and fit multiple small models to different portions of the data, and then estimate the overall covariance matrix in a subsequent, separate step. Using these estimates guarantees strong control of the family‐wise error rate, albeit only asymptotically. In this paper, we show how to make the approach applicable to small‐sample data problems as well. Specifically, we discuss the computation of adjusted P values and simultaneous confidence bounds for comparisons of randomised treatment groups as well as for levels of a nonrandomised factor, such as multiple endpoints, repeated measures, or a series of points in time or space. We illustrate the practical use of the method with a data example.
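In outline, the simultaneous inference rests on the stacked vector of estimates \hat\theta = (\hat\theta_1, \dots, \hat\theta_m) from the m marginal models together with its estimated joint covariance; adjusted P values come from the distribution of a maximum statistic (standard max-t form, notation ours):

\[ T_{\max} = \max_{j = 1, \dots, m} \; \bigl|\hat\theta_j\bigr| \big/ \widehat{\operatorname{se}}(\hat\theta_j), \]

referred to a multivariate normal reference distribution, or, in the small-sample refinement discussed here, a multivariate t.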

14.
Varying‐coefficient models have attracted increasing attention in statistical research and are now applied to censored data analysis in medical studies. We incorporate such flexible semiparametric regression tools for interval‐censored data with a cured proportion. We adopt a two‐part model to describe the overall survival experience for such complicated data. To fit the unknown functional components in the model, we take the local polynomial approach with bandwidth chosen by cross‐validation. We establish consistency and the asymptotic distribution of the estimators and propose the bootstrap for inference. We construct a BIC‐type model selection method to recommend an appropriate specification of the parametric and nonparametric components in the model. We conduct extensive simulations to assess the performance of our methods. An application to decompression sickness data illustrates our methods. Copyright © 2013 John Wiley & Sons, Ltd.
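Schematically, the two-part (mixture cure) structure sets the overall survival function to (a standard form consistent with the description, not necessarily the authors' exact model)

\[ S_{\text{pop}}(t \mid x) = \pi(x) + \{1 - \pi(x)\}\, S_u(t \mid x), \]

where \pi(x) is the cured proportion and S_u the survival function of the uncured; the varying-coefficient idea lets components of these regression functions change smoothly with a covariate, estimated by local polynomials.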

15.
The stereotype regression model for categorical outcomes, proposed by Anderson (J. Roy. Statist. Soc. B 1984; 46:1–30), is nested between the baseline‐category logit model and the adjacent‐category logit model with proportional odds structure. The stereotype model is more parsimonious than the ordinary baseline‐category (or multinomial logistic) model due to a product representation of the log‐odds‐ratios in terms of a common parameter corresponding to each predictor and category‐specific scores. The model can be used for both ordered and unordered outcomes. For ordered outcomes, the stereotype model allows more flexibility than the popular proportional odds model in capturing highly subjective ordinal scaling that does not result from categorization of a single latent variable but is inherently multi‐dimensional in nature. As pointed out by Greenland (Statist. Med. 1994; 13:1665–1677), an additional advantage of the stereotype model is that it provides unbiased and valid inference under outcome‐stratified sampling, as in case–control studies. In addition, for matched case–control studies, the stereotype model is amenable to the classical conditional likelihood principle, whereas there is no reduction due to sufficiency under the proportional odds model. In spite of these attractive features, the model has seen limited application because of issues with maximum likelihood estimation and likelihood‐based testing due to non‐linearity and lack of identifiability of the parameters. We present a comprehensive Bayesian inference and model comparison procedure for this class of models as an alternative to the classical frequentist approach. We illustrate our methodology by analyzing data from The Flint Men's Health Study, a case–control study of prostate cancer in African‐American men aged 40–79 years. We use clinical staging of prostate cancer in terms of Tumors, Nodes and Metastasis as the categorical response of interest. Copyright © 2009 John Wiley & Sons, Ltd.
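For reference, Anderson's stereotype model specifies (standard notation)

\[ \log \frac{P(Y = k \mid x)}{P(Y = K \mid x)} = \alpha_k + \phi_k \, \beta^\top x, \qquad k = 1, \dots, K - 1, \]

so each log-odds-ratio factorises into a category score \phi_k times a common predictor effect \beta. Identifiability constraints (e.g. \phi_1 = 1, \phi_K = 0) are required, and it is precisely the product \phi_k \beta that makes the likelihood non-linear and complicates maximum likelihood estimation.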

16.
Competing compartment models of different complexities have been used for the quantitative analysis of dynamic contrast‐enhanced magnetic resonance imaging data. We present a spatial elastic net approach that allows the number of compartments to be estimated for each voxel, so that model complexity is not fixed a priori. A multi‐compartment approach is considered and translated into a restricted least squares model selection problem, using a set of basis functions for a given set of candidate rate constants. The form of the basis functions is derived from a kinetic model and thus describes the contribution of a specific compartment. Using a spatial elastic net estimator, we choose a sparse set of basis functions per voxel, and hence the rate constants of the compartments. The spatial penalty takes into account the voxel structure of an image and performs better than a penalty treating voxels independently. The proposed estimation method is evaluated on simulated images and applied to an in vivo dataset. Copyright © 2013 John Wiley & Sons, Ltd.
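A schematic objective conveying the idea (ours, not the paper's exact penalty) is, per voxel v with neighbourhood N(v),

\[ \min_{\theta} \; \sum_{v} \Bigl( \|y_v - B\theta_v\|_2^2 + \lambda_1 \|\theta_v\|_1 \Bigr) + \lambda_2 \sum_{v} \sum_{v' \in N(v)} \|\theta_v - \theta_{v'}\|_2^2 , \]

where the columns of B are the kinetic basis functions for the candidate rate constants, the \ell_1 term selects a sparse set of compartments per voxel, and the spatial quadratic term borrows strength across neighbouring voxels.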

17.
We study the problem of estimation and inference on the average treatment effect in a smoking cessation trial where an outcome and some auxiliary information were measured longitudinally, and both were subject to missing values. Dynamic generalized linear mixed effects models linking the outcome, the auxiliary information, and the covariates are proposed. The maximum likelihood approach is applied to the estimation and inference on the model parameters. The average treatment effect is estimated by the G‐computation approach, and the sensitivity of the treatment effect estimate to the nonignorable missing data mechanisms is investigated through the local sensitivity analysis approach. The proposed approach can handle missing data that form arbitrary missing patterns over time. We applied the proposed method to the analysis of the smoking cessation trial. Copyright © 2009 John Wiley & Sons, Ltd.
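For reference, the G-computation estimate of the average treatment effect averages model-based predictions over the observed covariate distribution (standard form, notation ours):

\[ \widehat{\text{ATE}} = \frac{1}{n} \sum_{i=1}^{n} \bigl\{ \hat{m}(1, X_i) - \hat{m}(0, X_i) \bigr\}, \qquad \hat{m}(a, x) = \hat{E}[Y \mid A = a, X = x], \]

with the predictions here coming from the fitted dynamic mixed effects models.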

18.
This article explores Bayesian joint models for a quantile of a longitudinal response, a mismeasured covariate, and an event time outcome, with an attempt to (i) characterize the entire conditional distribution of the response variable based on quantile regression, which may be more robust to outliers and misspecification of the error distribution; (ii) account for measurement error, evaluate non‐ignorable missing observations, and adjust for departures from normality in the covariate; and (iii) overcome the difficulty of confidently specifying a time‐to‐event model. When statistical inference is carried out for a longitudinal data set with non‐central location, non‐linearity, non‐normality, measurement error, and missing values, as well as an interval‐censored event time, it is important to account for these data features simultaneously in order to obtain reliable and robust inferential results. Toward this end, we develop a Bayesian joint modeling approach that simultaneously estimates all parameters in three models: a quantile‐regression‐based nonlinear mixed‐effects model for the response using the asymmetric Laplace distribution, a linear mixed‐effects model with a skew‐t distribution for the mismeasured covariate in the presence of informative missingness, and an accelerated failure time model with an unspecified nonparametric distribution for the event time. We apply the proposed modeling approach to an AIDS clinical data set and conduct simulation studies to assess the performance of the proposed joint models and method. Copyright © 2016 John Wiley & Sons, Ltd.
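The quantile regression component rests on the standard equivalence (not specific to this paper) between maximising an asymmetric Laplace likelihood and minimising the check loss

\[ \rho_\tau(u) = u\,\{\tau - I(u < 0)\}, \]

so that, for quantile level \tau, the working asymmetric Laplace likelihood reproduces the usual quantile regression estimator while fitting naturally into the Bayesian joint model.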

19.
Multi‐state models of chronic disease are becoming increasingly important in medical research for describing the progression of complicated diseases. However, studies seldom observe health outcomes over long time periods, so current clinical research focuses on secondary analysis of the published literature to estimate single transition probabilities within the entire model. Unfortunately, there are many difficulties in using secondary data, especially since the states and transitions of published studies may not be consistent with the proposed multi‐state model. Early approaches to reconciling published studies with the theoretical framework of a multi‐state model have been limited to data available as cumulative counts of progression. This paper presents an approach that allows the use of published regression data in a multi‐state model even when the published study ignored intermediary states of that model. Colloquially, we call this the Lemonade Method: when study data give you lemons, make lemonade. The approach uses maximum likelihood estimation. An example is provided for the progression of heart disease in people with diabetes. Copyright © 2009 John Wiley & Sons, Ltd.
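One way to see the reconciliation problem (a schematic illustration, not the paper's notation): for a time-homogeneous Markov multi-state model with transition intensity matrix Q, a published study that ignores intermediary states effectively reports entries of the interval transition probability matrix

\[ P(t) = e^{Qt}, \]

whose (A, C) entry aggregates all paths from A to C, including those passing through the ignored intermediate states; a likelihood built on such path-aggregated quantities is what lets the published regression data inform the full model.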

20.
Often the effect of at least one of the prognostic factors in a Cox regression model changes over time, which violates the proportional hazards assumption of this model. As a consequence, the average hazard ratio for such a prognostic factor is under‐ or overestimated. While there are several methods to appropriately cope with non‐proportional hazards, in particular by including parameters for time‐dependent effects, weighted estimation in Cox regression is a parsimonious alternative without additional parameters. The methodology, which extends the weighted k‐sample logrank tests of the Tarone–Ware scheme to models with multiple binary and continuous covariates, was introduced in the 1990s and is further developed and re‐evaluated in this contribution. The notion of an average hazard ratio is defined and its connection to the effect size measure P(X<Y) is emphasized. The suggested approach yields intuitively interpretable average hazard ratios and provides tools for inference. A Monte Carlo study confirms its satisfactory performance. Advantages of the approach are exemplified by comparing standard and weighted analyses of an international lung cancer study. SAS and R programs facilitate application. Copyright © 2009 John Wiley & Sons, Ltd.
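The cited connection can be made explicit: for independent continuous survival times with proportional hazards \lambda_X(t) = \theta\, \lambda_Y(t),

\[ P(X < Y) = \frac{\theta}{1 + \theta}, \]

so a hazard ratio of, say, \theta = 2 corresponds to P(X < Y) = 2/3. An average hazard ratio carries the same probabilistic interpretation approximately when proportionality holds only on average.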
