Similar Articles
20 similar articles found.
1.
Model‐based standardization uses a statistical model to estimate a standardized, or unconfounded, population‐averaged effect. With it, one can compare groups as if the distribution of confounders in both groups had been identical to that of the standard population. We develop two methods for model‐based standardization with complex survey data that accommodate a categorical confounder that clusters the individual observations into a very large number of subgroups. The first method combines a random‐intercept generalized linear mixed model with a conditional pseudo‐likelihood estimator of the fixed effects. The second method combines a between–within generalized linear mixed model with census data on the cluster‐level means of the individual‐level covariates. We conduct simulation studies to compare the two approaches. We apply the two methods to the 2008 Florida Behavioral Risk Factor Surveillance System survey data to estimate standardized proportions of people who drink alcohol, within age groups, adjusting for measured individual‐level and unmeasured cluster‐level confounders. Copyright © 2015 John Wiley & Sons, Ltd.
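The core idea of model-based standardization can be illustrated with a short g-computation sketch: fit an outcome model that includes the confounder, predict for every subject under each exposure level, and average. This is a minimal sketch on simulated data with assumed names (group, age, drink); the paper's survey weights and cluster-level random effects are beyond it.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
age = rng.normal(50, 10, n)                               # measured confounder
group = rng.binomial(1, 1 / (1 + np.exp(-(age - 50) / 10)))  # exposure depends on age
p = 1 / (1 + np.exp(-(-2.0 + 0.5 * group + 0.03 * (age - 50))))
drink = rng.binomial(1, p)                                # binary outcome

X = np.column_stack([np.ones(n), group, age])             # outcome model design
fit = sm.GLM(drink, X, family=sm.families.Binomial()).fit()

# Standardize: predict for everyone as if exposed / unexposed, then average
# over the full sample so both groups share one confounder distribution.
X1 = np.column_stack([np.ones(n), np.ones(n), age])
X0 = np.column_stack([np.ones(n), np.zeros(n), age])
print(fit.predict(X1).mean(), fit.predict(X0).mean())     # standardized proportions
```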

2.
Various frailty models have been developed and are now widely used for analysing multivariate survival data. It is therefore important to develop an information criterion for model selection. However, in frailty models there are several alternative ways of forming a criterion, and the particular criterion chosen may not be uniformly best. In this paper, we study Akaike information criteria (AIC) for selecting a frailty structure from a set of (possibly) non-nested frailty models. We propose two new AIC criteria, based on a conditional likelihood and an extended restricted likelihood (ERL) given by Lee and Nelder (J. R. Statist. Soc. B 1996; 58:619-678). We compare their performance using well-known practical examples and demonstrate that the two criteria may yield rather different results. A simulation study shows that the AIC based on the ERL is recommended when attention is focussed on selecting the frailty structure rather than the fixed effects.

3.
Lee Y. Statistics in Medicine 2002; 21(16): 2325–2330.
A preference trial is a special form of cross-over trial where clinical conditions determine when patients change treatment, in a prescribed order. This leads to binary responses with variable lengths. In cross-over trials with normal responses, patient effect may be treated as either fixed or random. However, with binary responses, random- and fixed-effect assumptions may lead to very different conclusions, so that one is no longer an alternative to the other.

4.
A generalized linear mixed model is an increasingly popular choice for the modelling of correlated, non-normal responses in a regression setting. A number of methods are currently available for fitting a generalized linear mixed model, including Markov chain Monte Carlo maximum likelihood algorithms, approximate maximum likelihood (PQL), iterative bias correction, and others. Of interest in this paper is a comparison of the parameter estimates obtained by the various methods in the modelling of a count data set, the incidence of polio in the USA over the period 1970-1983, using a log-linear generalized linear mixed model with an autoregressive correlation structure. Despite the fact that all of these methods are considered valid modelling techniques, we find that parameter estimates and standard errors differ substantially between analyses, particularly in the estimation of the parameters describing the random effects distribution. A small simulation study is helpful in understanding some of these differences. The methods lead to reasonably similar predictions for future observations, with small differences observed in some monthly counts.
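The abstract's central point, that different fitting approximations can give noticeably different estimates for the same GLMM, can be seen directly in statsmodels, which offers Laplace (posterior-mode) and variational Bayes approximations for Bayesian mixed GLMs. The simulated data, names, and the independent month-level random intercept below are assumptions; statsmodels has no AR(1) random-effect structure, so this is a sketch of the comparison, not the paper's analysis.

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import PoissonBayesMixedGLM

rng = np.random.default_rng(1)
months = np.arange(120)                         # 10 years of monthly counts
u = rng.normal(0, 0.5, len(months))             # month-level random effects
trend = -0.005 * (months - months.mean())
df = pd.DataFrame({
    "y": rng.poisson(np.exp(0.3 + trend + u)),  # simulated polio-like counts
    "t": months - months.mean(),
    "month": months,
})

vc = {"month": "0 + C(month)"}                  # independent random intercepts
model = PoissonBayesMixedGLM.from_formula("y ~ t", vc, df)
print(model.fit_map().fe_mean)                  # Laplace / posterior-mode estimates
print(model.fit_vb().fe_mean)                   # variational Bayes estimates
```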

5.
Despite the use of standardized protocols in multi-centre randomized clinical trials, outcome may vary between centres. Such heterogeneity may alter the interpretation and reporting of the treatment effect. Below, we propose a general frailty modelling approach for investigating, inter alia, putative treatment-by-centre interactions in time-to-event data in multi-centre clinical trials. A correlated random effects model is used to model the baseline risk and the treatment effect across centres. It may be based on shared, individual or correlated random effects. For inference we develop the hierarchical-likelihood (or h-likelihood) approach, which facilitates computation of prediction intervals for the random effects with proper precision. We illustrate our methods using disease-free time-to-event data on bladder cancer patients participating in a European Organization for Research and Treatment of Cancer trial, and a simulation study. We also demonstrate model selection using h-likelihood criteria.

6.
Yun S, Lee Y. Statistics in Medicine 2006; 25(22): 3877–3892.
We introduce a model to account for abrupt changes among repeated measures with non-monotone missingness. Developing likelihood inferences for such models is hard because it involves intractable integration to obtain the marginal likelihood. We use hierarchical likelihood to overcome this difficulty. Abrupt changes among repeated measures can be well described by introducing random effects in the dispersion. A simulation study shows that the resulting estimator is efficient and robust against misspecification of the heaviness of the distribution's tails. For illustration we use schizophrenic behaviour data presented by Rubin and Wu.

7.
Liu L, Ma JZ, Johnson BA. Statistics in Medicine 2008; 27(18): 3528–3539.
Two-part random effects models (J. Am. Statist. Assoc. 2001; 96:730-745; Statist. Methods Med. Res. 2002; 11:341-355) have been applied to longitudinal studies of semi-continuous outcomes, characterized by a large portion of zero values and continuous non-zero (positive) values. Examples include repeated measures of daily drinking records, monthly medical costs, and annual car insurance claims. However, the question of how to apply such models in multi-level data settings remains open. In this paper, we propose a novel multi-level two-part random effects model. Distinct random effects are used to characterize heterogeneity at different levels. Maximum likelihood estimation and inference are carried out by Gaussian quadrature, which can be implemented conveniently in the freely available software aML. The model is applied to the analysis of repeated measures of daily drinking records in a randomized controlled trial of topiramate for alcohol-dependence treatment.
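A minimal single-level sketch of the two-part idea, assuming simulated data and names: a logistic model for whether the outcome is zero, a log-scale linear model for the positive amounts, and the two parts combined into an overall mean. The paper's multi-level random effects and aML-based quadrature are not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=n)
drink_any = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 0.8 * x))))  # part 1: any use
amount = np.where(drink_any,
                  np.exp(1.0 + 0.4 * x + rng.normal(0, 0.5, n)),  # part 2: amount
                  0.0)

X = sm.add_constant(x)
part1 = sm.Logit(drink_any, X).fit(disp=0)           # P(outcome > 0)
pos = amount > 0
part2 = sm.OLS(np.log(amount[pos]), X[pos]).fit()    # log amount given > 0

# Overall mean combines the two parts (log-normal mean correction applied).
mu = part1.predict(X) * np.exp(part2.predict(X) + part2.scale / 2)
print(part1.params, part2.params, mu.mean())
```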

8.
We propose statistical definitions of the individual benefit of a medical or behavioral treatment and of the severity of a chronic illness. These definitions are used to develop a graphical method that can be used by statisticians and clinicians in the data analysis of clinical trials from the perspective of personalized medicine. The method focuses on assessing and comparing individual effects of treatments rather than average effects and can be used with continuous and discrete responses, including dichotomous and count responses. The method is based on new developments in generalized linear mixed‐effects models, which are introduced in this article. To illustrate, analyses of data from the Sequenced Treatment Alternatives to Relieve Depression clinical trial of sequences of treatments for depression and data from a clinical trial of respiratory treatments are presented. The estimation of individual benefits is also explained. Copyright © 2016 John Wiley & Sons, Ltd.

9.
Longitudinal binomial data are frequently generated from multiple questionnaires and assessments in various scientific settings, and such binomial data are often overdispersed. The standard generalized linear mixed effects model may severely underestimate the standard errors of estimated regression parameters in such cases and hence potentially bias the statistical inference. In this paper, we propose a longitudinal beta‐binomial model for overdispersed binomial data and estimate the regression parameters under a probit model using the generalized estimating equation method. A hybrid algorithm combining Fisher scoring and the method of moments is implemented for the computation. Extensive simulation studies are conducted to justify the validity of the proposed method. Finally, the proposed method is applied to analyze functional impairment in subjects who are at risk of Huntington disease, using data from a multisite observational study of prodromal Huntington disease. Copyright © 2016 John Wiley & Sons, Ltd.
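The overdispersion that motivates the beta-binomial model is easy to exhibit numerically. The sketch below simulates beta-binomial counts, checks the variance-inflation formula Var(y) = mp(1-p)[1 + (m-1)ρ] against the binomial benchmark, and recovers ρ by the method of moments; all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
m, p, rho = 20, 0.3, 0.15                         # trials, mean, overdispersion
a, b = p * (1 - rho) / rho, (1 - p) * (1 - rho) / rho
y = rng.binomial(m, rng.beta(a, b, size=10000))   # beta-binomial draws

binom_var = m * p * (1 - p)                       # binomial benchmark
bb_var = binom_var * (1 + (m - 1) * rho)          # theoretical inflation
print(y.var(), binom_var, bb_var)                 # empirical variance ~ bb_var

# Method-of-moments estimate of rho from the observed counts.
phat = y.mean() / m
rho_hat = (y.var() / (m * phat * (1 - phat)) - 1) / (m - 1)
print(rho_hat)
```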

10.
Wang CY, Huang Y. Statistics in Medicine 2003; 22(16): 2577–2590.
We consider regression analysis of a disease outcome in relation to longitudinal data that are observations from a random effects model. The covariates of interest are the values of the underlying trajectory at some time points, which may be fixed or subject-specific. Because the underlying random coefficients are unknown, the covariates in the primary model are generally unobserved. In addition, measurements are often not taken at the time points of interest. A motivating example for our model is the effect of age at adiposity rebound, and the associated body mass index, on the risk of adult obesity. The adiposity rebound is the time point at which the trajectory of a child's body fatness declines to a minimum. This general error-in-timing problem also arises in analyses where time-dependent marker variables follow a polynomial model and the effect of a local maximum or minimum point is of interest. Directly using estimated covariates, possibly obtained from estimated time points, may lead to bias. Estimation procedures based on expected estimating equations, regression calibration and simulation extrapolation are applied to this problem.

11.
For major genes known to influence the risk of cancer, an important task is to determine the risks conferred by individual variants, so that one can appropriately counsel carriers of these mutations. This is a challenging task, since new mutations are continually being identified, and there is typically relatively little empirical evidence available about each individual mutation. Hierarchical modeling offers a natural strategy to leverage the collective evidence from these rare variants with sparse data. This can be accomplished when there are available higher-level covariates that characterize the variants in terms of attributes that could distinguish their association with disease. In this article, we explore the use of hierarchical modeling for this purpose using data from a large population-based study of the risks of melanoma conferred by variants in the CDKN2A gene. We employ both a pseudo-likelihood approach and a Bayesian approach using Gibbs sampling. The results indicate that relative risk estimates tend to be primarily influenced by the individual case-control frequencies when several cases and/or controls are observed with the variant under study, but that relative risk estimates for variants with very sparse data are more influenced by the higher-level covariate values, as one would expect. The analysis offers encouragement that we can draw strength from the aggregating power of hierarchical models to provide guidance to medical geneticists when they offer counseling to patients with rare or even hitherto unobserved variants. However, further research is needed to validate the application of asymptotic methods to such sparse data.
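The borrowing of strength described here can be sketched with a simple normal-normal empirical-Bayes calculation: per-variant estimates are shrunk toward a second-stage regression on a higher-level covariate, with sparse (high-variance) variants shrunk the most. The simulated data, the fixed τ², and all names are illustrative assumptions, not the CDKN2A analysis itself.

```python
import numpy as np

rng = np.random.default_rng(4)
k = 30
z = rng.binomial(1, 0.5, k)                # higher-level covariate (e.g. functional domain)
beta_true = 0.2 + 0.8 * z + rng.normal(0, 0.3, k)
se = rng.uniform(0.2, 2.0, k)              # sparse variants -> large standard errors
beta_hat = beta_true + rng.normal(0, se)   # crude per-variant log relative risks

# Second-stage regression of the estimates on the higher-level covariate.
X = np.column_stack([np.ones(k), z])
coef, *_ = np.linalg.lstsq(X, beta_hat, rcond=None)
prior_mean = X @ coef
tau2 = 0.3 ** 2                            # fixed here; estimated in a full analysis

# Normal-normal shrinkage: sparse variants borrow more from the prior.
w = tau2 / (tau2 + se ** 2)
beta_shrunk = w * beta_hat + (1 - w) * prior_mean
print(np.round(np.column_stack([se, beta_hat, beta_shrunk])[:5], 2))
```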

12.
Standard meta‐analytic theory assumes that study outcomes are normally distributed with known variances. However, methods derived from this theory are often applied to effect sizes having skewed distributions with estimated variances. Both shortcomings can be largely overcome by first applying a variance stabilizing transformation. Here we concentrate on study outcomes with Student t‐distributions and show that we can better estimate parameters of fixed or random effects models with confidence intervals using stable weights or with profile approximate likelihood intervals following stabilization. We achieve even better coverage with a finite sample bias correction. Further, a simple t‐interval provides very good coverage of an overall effect size without estimation of the inter‐study variance. We illustrate the methodology on two meta‐analytic studies from the medical literature, the effect of salt reduction on systolic blood pressure and the effect of opioids for the relief of breathlessness. Substantial simulation studies compare traditional methods with those newly proposed. We can apply the theoretical results to other study outcomes for which an effective variance stabilizer is available. Copyright © 2012 John Wiley & Sons, Ltd.
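For orientation, the traditional machinery this paper improves on is the inverse-variance approach: fixed-effect pooling plus a DerSimonian-Laird estimate of the between-study variance. A plain-numpy sketch with assumed effect sizes and variances:

```python
import numpy as np

y = np.array([0.30, 0.10, 0.45, 0.22, 0.05])    # study effect sizes (assumed)
v = np.array([0.02, 0.05, 0.03, 0.01, 0.04])    # their estimated variances (assumed)

w = 1 / v                                       # fixed-effect weights
mu_fe = np.sum(w * y) / np.sum(w)

# DerSimonian-Laird estimate of the between-study variance tau^2.
Q = np.sum(w * (y - mu_fe) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(y) - 1)) / c)

w_re = 1 / (v + tau2)                           # random-effects weights
mu_re = np.sum(w_re * y) / np.sum(w_re)
print(mu_fe, tau2, mu_re, 1 / np.sqrt(np.sum(w_re)))  # pooled estimates + RE standard error
```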

13.
For analyses of longitudinal repeated‐measures data, statistical methods include the random effects model, fixed effects model and the method of generalized estimating equations. We examine the assumptions that underlie these approaches to assessing covariate effects on the mean of a continuous, dichotomous or count outcome. Access to statistical software to implement these models has led to widespread application in numerous disciplines. However, careful consideration should be paid to their critical assumptions to ascertain which model might be appropriate in a given setting. To illustrate similarities and differences that might exist in empirical results, we use a study that assessed depressive symptoms in low‐income pregnant women using a structured instrument with up to five assessments that spanned the pre‐natal and post‐natal periods. Understanding the conceptual differences between the methods is important in their proper application even though empirically they might not differ substantively. The choice of model in specific applications would depend on the relevant questions being addressed, which in turn informs the type of design and data collection that would be relevant. Copyright © 2008 John Wiley & Sons, Ltd.
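A minimal sketch of fitting two of these approaches to the same simulated repeated-measures data, assuming column names y, time, and subj: with a continuous outcome and an exchangeable working correlation, the fixed-effect estimates typically agree closely, which mirrors the article's point that the conceptual differences matter more than the numbers.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_subj, n_visits = 200, 5
subj = np.repeat(np.arange(n_subj), n_visits)
time = np.tile(np.arange(n_visits), n_subj)
u = np.repeat(rng.normal(0, 1, n_subj), n_visits)       # subject-level effects
y = 10 - 0.5 * time + u + rng.normal(0, 1, len(subj))
df = pd.DataFrame({"y": y, "time": time, "subj": subj})

re = smf.mixedlm("y ~ time", df, groups=df["subj"]).fit()         # random intercept
gee = smf.gee("y ~ time", "subj", df,
              cov_struct=sm.cov_struct.Exchangeable()).fit()      # marginal model
print(re.params["time"], gee.params["time"])            # near-identical here
```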

14.
We present a simple semiparametric model for fitting subject-specific curves for longitudinal data. Individual curves are modelled as penalized splines with random coefficients. This model has a mixed model representation, and it is easily implemented in standard statistical software. We conduct an analysis of the long-term effect of radiation therapy on the height of children suffering from acute lymphoblastic leukaemia using penalized splines in the framework of semiparametric mixed effects models. The analysis revealed significant differences between therapies and showed that the growth rate of girls in the study cannot be fully explained by the group-average curve and that individual curves are necessary to reflect the individual response to treatment. We also show how to implement these models in S-PLUS and R in the appendix.
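For a fixed smoothing parameter, the mixed-model representation of a penalized spline reduces to ridge regression on a truncated-line basis, as in the numpy sketch below. The fixed λ is an assumption for illustration; in the mixed-model form it would be estimated, e.g., by REML as in the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(6)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, len(x))

knots = np.linspace(0, 1, 20)[1:-1]             # interior knots
X = np.column_stack([np.ones_like(x), x])       # fixed part: intercept, slope
Z = np.maximum(x[:, None] - knots[None, :], 0)  # random part: (x - k)_+ basis

C = np.hstack([X, Z])
lam = 1.0                                       # smoothing parameter (assumed)
D = np.diag([0.0, 0.0] + [lam] * Z.shape[1])    # penalize only the knot terms
coef = np.linalg.solve(C.T @ C + D, C.T @ y)    # ridge / BLUP solution
plt.plot(x, y, ".", x, C @ coef, "-")
plt.show()
```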

15.
The U.S. Food and Drug Administration (FDA) has proposed new regulations that address the 'prescribability' and 'switchability' of new formulations of already-approved drugs. These new criteria are known, respectively, as population and individual bioequivalence. Two methods have been proposed in the bioequivalence literature for assessing population and individual bioequivalence that calculate an upper 95 per cent confidence bound for the bioequivalence criterion in question and then test bioequivalence by comparing this bound to the limit established by the FDA. In this paper we propose applying the generalized test function (GTF) methodology of Tsui and Weerahandi (Journal of the American Statistical Association 1989; 602-607) to this problem to produce tests based on a generalized p-value (GPV). This methodology allows us to construct hypothesis tests in the presence of nuisance parameters. Using simulation we show that these tests perform well in comparison to the confidence interval methods and have superior power for assessing population bioequivalence.

16.
This article presents the rationale for using multilevel analysis to address broad environmental contexts in patient satisfaction research. The study utilized patient satisfaction data and the American Hospital Association Hospital Guide Book (2004). It found significant contributions of individual patient attribute reactions (nursing care, physician care, etc.), and also clearly demonstrated hospital-level effects and cross-level interactions on patient satisfaction. Thus, patient satisfaction is not explained solely by patients' attribute reactions and their demographic variables, but also by hospital-level factors. This approach offers additional understanding in patient satisfaction research.

17.
In clinical trials and biomedical studies, treatments are compared to determine which one is effective against illness; however, individuals can react to the same treatment very differently. We propose a complete process for longitudinal data that identifies subgroups of the population that would benefit from a specific treatment. A random effects linear model is used to evaluate individual treatment effects longitudinally, where the random effects identify a positive or negative reaction to the treatment over time. With the individual treatment effects and characteristics of the patients, various classification algorithms are applied to build prediction models for subgrouping. While many subgrouping approaches have been developed recently, most of them do not check their validity. In this paper, we further propose a simple validation approach which not only determines whether the subgroups used are appropriate and beneficial but also compares methods for predicting individual treatment effects. The entire procedure is readily implemented with existing packages in statistical software. The effectiveness of the proposed method is confirmed with simulation studies and an analysis of data from the Women Entering Care study on depression.
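The first stage of such a pipeline can be sketched as a random-slopes model whose predicted random effects quantify each subject's reaction over time; those predictions would then feed a classifier. The simulated data, column names, and the index label "time" used to pull the slopes are assumptions for illustration, not the paper's procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_subj, n_visits = 150, 4
subj = np.repeat(np.arange(n_subj), n_visits)
treat = np.repeat(rng.binomial(1, 0.5, n_subj), n_visits)
time = np.tile(np.arange(n_visits), n_subj)
b_i = np.repeat(rng.normal(-0.5, 0.7, n_subj), n_visits)   # subject-specific response
y = 20 - 1.0 * time + b_i * treat * time + rng.normal(0, 1, len(subj))
df = pd.DataFrame({"y": y, "time": time, "treat": treat, "subj": subj})

m = smf.mixedlm("y ~ time * treat", df, groups=df["subj"],
                re_formula="~time").fit()
# Predicted random slopes: treated subjects with strongly negative slopes
# are candidates for the 'benefits from treatment' subgroup.
slopes = pd.Series({k: v["time"] for k, v in m.random_effects.items()})
print(slopes.describe())
```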

18.
Two main classes of methodology have been developed for addressing the analytical intractability of generalized linear mixed models: likelihood‐based methods and Bayesian methods. Likelihood‐based methods such as the penalized quasi‐likelihood approach have been shown to produce biased estimates, especially for binary clustered data with small cluster sizes. More recent methods using adaptive Gaussian quadrature perform well but can be overwhelmed by problems with large numbers of random effects, and efficient algorithms to better handle these situations have not yet been integrated in standard statistical packages. Bayesian methods, although they have good frequentist properties when the model is correct, are known to be computationally intensive and also require specialized code, limiting their use in practice. In this article, we introduce a modification of the hybrid approach of Capanu and Begg (2011, Biometrics 67, 371–380) as a bridge between the likelihood‐based and Bayesian approaches, employing Bayesian estimation for the variance components followed by Laplacian estimation for the regression coefficients. We investigate its performance as well as that of several likelihood‐based methods in the setting of generalized linear mixed models with binary outcomes. We apply the methods to three datasets and conduct simulations to illustrate their properties. Simulation results indicate that for moderate to large numbers of observations per random effect, adaptive Gaussian quadrature and the Laplacian approximation are very accurate, with adaptive Gaussian quadrature preferable as the number of observations per random effect increases. The hybrid approach is overall similar to the Laplace method, and it can be superior for data with very sparse random effects. Copyright © 2013 John Wiley & Sons, Ltd.

19.
In this paper, we present a unified modeling framework to combine aggregated data from randomized controlled trials (RCTs) with individual participant data (IPD) from observational studies. Rather than simply pooling the available evidence into an overall treatment effect, adjusted for potential confounding, the intention of this work is to explore treatment effects in specific patient populations reflected by the IPD. In this way, by collecting IPD, we can potentially gain new insights from RCTs' results, which cannot be seen using only a meta‐analysis of RCTs. We present a new Bayesian hierarchical meta‐regression model, which combines submodels, representing different types of data into a coherent analysis. Predictors of baseline risk are estimated from the individual data. Simultaneously, a bivariate random effects distribution of baseline risk and treatment effects is estimated from the combined individual and aggregate data. Therefore, given a subgroup of interest, the estimated treatment effect can be calculated through its correlation with baseline risk. We highlight different types of model parameters: those that are the focus of inference (e.g., treatment effect in a subgroup of patients) and those that are used to adjust for biases introduced by data collection processes (e.g., internal or external validity). The model is applied to a case study where RCTs' results, investigating efficacy in the treatment of diabetic foot problems, are extrapolated to groups of patients treated in medical routine and who were enrolled in a prospective cohort study. Copyright © 2015 John Wiley & Sons, Ltd.

20.
As medical applications for cluster randomization designs become more common, investigators look for guidance on optimal methods for estimating the effect of group-based interventions over time. This study examines two distinct cluster randomization designs: (1) the repeated cross-sectional design, in which centres are followed over time but patients change, and (2) the longitudinal design, in which individual patients are followed over time within treatment clusters. Simulations of each study design stipulated a multiplicative treatment effect (on the log odds scale), between 5 and 15 clusters in each of two treatment arms, followed over two time periods. Estimation options included linear mixed effects models using restricted maximum likelihood (REML), generalized estimating equations (GEE), mixed effects logistic regression using both penalized quasi-likelihood (PQL) and numerical integration, and Bayesian Monte Carlo analysis. For the repeated cross-sectional designs, most methods performed well in terms of bias and coverage when clusters were numerous (30) and variability across clusters of baseline risk and treatment effect was modest. With few clusters (two groups of five) and higher variability, only the Bayesian methods maintained coverage. In the longitudinal designs, the common methods of REML, GEE, or PQL performed poorly when compared to numerical integration, while Bayesian methods demonstrated less bias and better coverage for estimates of both log odds ratios and risk differences. The performance of common statistical tools for the analysis of cluster randomization designs depends heavily on the precise design, the number of clusters, and the variability of baseline outcomes and treatment effects across centres.
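One replicate of the repeated cross-sectional design can be simulated and analysed in a few lines; the GEE fit below is one of the estimation options compared in the paper, with cluster counts, variance components, and the multiplicative (log-odds) treatment effect chosen as illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n_clusters, n_per = 30, 50
arm = np.repeat([0, 1], n_clusters // 2)                 # treatment arm per cluster
rows = []
for c in range(n_clusters):
    u = rng.normal(0, 0.4)                               # cluster baseline risk
    for period in (0, 1):
        eta = -1.0 + u + 0.5 * arm[c] * period           # treatment acts in period 1
        y = rng.binomial(1, 1 / (1 + np.exp(-eta)), n_per)
        rows += [{"y": yi, "arm": arm[c], "period": period, "cluster": c}
                 for yi in y]
df = pd.DataFrame(rows)

gee = smf.gee("y ~ arm * period", "cluster", df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(gee.summary())                                     # log-odds treatment effect
```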
