Similar Literature
Found 4 similar documents (search time: 3 ms)
1.
Analysis of count data from clinical trials using mixed-effects analysis has recently become widespread. However, the algorithms available for parameter estimation, including LAPLACE and Gaussian quadrature (GQ), have certain limitations, including bias in parameter estimates and long analysis runtimes. The stochastic approximation expectation maximization (SAEM) algorithm has proven to be a very efficient and powerful tool in the analysis of continuous data. The aim of this study was to implement and investigate the performance of a new SAEM algorithm for application to count data. A new SAEM algorithm was implemented in MATLAB for estimation of both the parameters and the Fisher information matrix. Monte Carlo simulations followed by re-estimation were performed according to scenarios used in previous studies (part I) to investigate properties of alternative algorithms (Plan et al., 2008, Abstr 1372). A single scenario was used to explore six probability distribution models. For parameter estimation, the relative bias was less than 0.92% for fixed effects and 4.13% for random effects across all models studied, including those accounting for over- or under-dispersion. Empirical and estimated relative standard errors were similar, with the distance between them being <1.7% for all explored scenarios. The longest CPU time was 95 s for parameter estimation and 56 s for SE estimation. The SAEM algorithm was extended to the analysis of count data. It provides accurate estimates of both parameters and standard errors, and estimation is significantly faster than with LAPLACE and GQ. The algorithm is implemented in Monolix 3.1 (beta version available in July 2009).
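The abstract above does not give the algorithm itself, but the core SAEM loop for a count model can be sketched for the simplest case: observations y_ij ~ Poisson(exp(mu + b_i)) with a subject random effect b_i ~ N(0, omega^2). The sketch below alternates a Metropolis-Hastings simulation step for the random effects, a stochastic approximation of the complete-data sufficient statistics, and a closed-form maximization step. All numerical settings (true parameter values, step-size schedule, proposal scale) are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Simulate data: y_ij ~ Poisson(exp(mu + b_i)), b_i ~ N(0, omega^2) ---
true_mu, true_omega = 1.0, 0.3           # illustrative "true" values
N, n_obs = 100, 10                       # subjects, observations per subject
b_true = rng.normal(0.0, true_omega, N)
y = rng.poisson(np.exp(true_mu + b_true)[:, None], (N, n_obs))
y_i = y.sum(axis=1)                      # per-subject total counts

def log_cond(b, mu, omega2):
    """log p(b_i | y_i, theta) up to a constant, from the Poisson complete-data likelihood."""
    return y_i * b - n_obs * np.exp(mu + b) - b ** 2 / (2 * omega2)

mu, omega2 = 0.0, 1.0                    # crude starting values
b = np.zeros(N)                          # current random-effect draws
S_e, S_b2 = float(n_obs * N), float(N)   # smoothed sufficient statistics
K_burn, K_total = 150, 400
for k in range(K_total):
    # Simulation step: one Metropolis-Hastings move per subject
    prop = b + rng.normal(0.0, 0.3, N)
    accept = np.log(rng.uniform(size=N)) < log_cond(prop, mu, omega2) - log_cond(b, mu, omega2)
    b = np.where(accept, prop, b)
    # Stochastic approximation: step size 1 during burn-in, then decreasing
    gamma = 1.0 if k < K_burn else 1.0 / (k - K_burn + 1)
    S_e = S_e + gamma * (n_obs * np.exp(b).sum() - S_e)
    S_b2 = S_b2 + gamma * ((b ** 2).sum() - S_b2)
    # Maximization step: closed-form updates given the smoothed statistics
    mu = np.log(y.sum() / S_e)
    omega2 = S_b2 / N

print(f"mu_hat={mu:.3f} (true {true_mu}), omega_hat={np.sqrt(omega2):.3f} (true {true_omega})")
```

In the Poisson-with-log-link case both M-step updates are available in closed form, which is what makes each SAEM iteration cheap compared to the repeated numerical integration in GQ; the Monolix implementation the abstract refers to covers richer count distributions than this minimal case.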

2.
The application of proportional odds models to ordered categorical data using the mixed-effects modeling approach has become more frequently reported within the pharmacokinetic/pharmacodynamic area during the last decade. The aim of this paper was to investigate the bias in parameter estimates when models for ordered categorical data were estimated using methods employing different approximations of the likelihood integral: the Laplacian approximation in NONMEM (without and with the centering option) and in NLMIXED, and the Gaussian quadrature approximations in NLMIXED. In particular, we focused on situations with uneven distributions of the response categories and on the impact of interpatient variability. This is a Monte Carlo simulation study in which original data sets were derived from a known model and fixed study design. The simulated response was a four-category variable on the ordinal scale with categories 0, 1, 2 and 3. The model used for simulation was fitted to each data set for assessment of bias. Also, simulations of new data based on estimated population parameters were performed to evaluate the usefulness of the estimated model. For the conditions tested, Gaussian quadrature performed without appreciable bias in parameter estimates. However, markedly biased parameter estimates were obtained using the Laplacian estimation method without the centering option, in particular when distributions of observations between response categories were skewed and when the interpatient variability was moderate to large. Simulations under the model could not mimic the original data when bias was present, but resulted in overestimation of rare events. The bias was considerably reduced when the centering option in NONMEM was used. The cause of the biased estimates appears to be related to the conditioning on uninformative and uncertain empirical Bayes estimates of the interindividual random effects during estimation, in conjunction with the normality assumption.
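The problematic setting the abstract describes — skewed category frequencies combined with large interpatient variability — is easy to reproduce by simulating from a proportional odds model with a subject random effect, logit P(Y_ij >= k) = a_k + eta_i. The thresholds a_k and the variability omega below are illustrative choices (not the paper's design), picked so the response distribution is heavily skewed toward category 0:

```python
import numpy as np

rng = np.random.default_rng(7)

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

# Proportional odds (cumulative logit) model with a subject random effect:
#   logit P(Y_ij >= k) = a_k + eta_i,   eta_i ~ N(0, omega^2)
a = np.array([-1.0, -2.5, -4.0])      # thresholds for P(Y>=1), P(Y>=2), P(Y>=3)
omega = 2.0                            # large interpatient variability
N, n_obs = 200, 20                     # subjects, observations per subject

eta = rng.normal(0.0, omega, N)
cum = expit(a[None, :] + eta[:, None])            # (N, 3): P(Y>=1..3) per subject
# Category probabilities are adjacent differences of the cumulative curve
p = np.concatenate([1 - cum[:, :1], -np.diff(cum, axis=1), cum[:, -1:]], axis=1)
counts = np.array([rng.multinomial(n_obs, pi) for pi in p]).sum(axis=0)
freq = counts / counts.sum()
print("category frequencies (0..3):", np.round(freq, 3))
```

Fitting this kind of data set and comparing the recovered a_k and omega against the simulation values is exactly the bias assessment the paper performs across the Laplacian and Gaussian quadrature methods.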

3.
We propose an efficient algorithm for screening covariates in population model building using Wald's approximation to the likelihood ratio test (LRT) statistic in conjunction with Schwarz's Bayesian criterion. The algorithm can be applied to a full-model fit of k covariate parameters to calculate the approximate LRT for all 2^k - 1 possible restricted models. The algorithm's efficiency also permits internal validation of the model selection process via bootstrap methods. We illustrate the use of this algorithm for both model selection and validation with data from a Daypro® pediatric study. The algorithm is easily implemented using standard statistical software such as SAS/IML and S-Plus; a SAS/IML macro to perform the algorithm is provided.
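The idea behind the screening step can be sketched in a few lines: given only the full-model covariate estimates and their covariance matrix, the Wald statistic beta_A' inv(C_AA) beta_A approximates the LRT for setting any subset A of coefficients to zero, so one fit yields statistics for all 2^k - 1 restricted models. The sketch below uses made-up estimates and covariances (the paper's own implementation is a SAS/IML macro, not this Python code):

```python
import numpy as np
from itertools import combinations

# Illustrative full-model results: k = 3 covariate effects and their covariance
beta_hat = np.array([0.8, 0.05, -0.6])
C = np.diag([0.04, 0.0025, 0.09])
n = 200                                  # observations, for the Schwarz penalty

k = len(beta_hat)
results = []
for r in range(1, k + 1):
    for A in combinations(range(k), r):
        idx = np.array(A)
        bA = beta_hat[idx]
        CAA = C[np.ix_(idx, idx)]
        # Wald approximation to the LRT for restricting the subset A to zero
        W = float(bA @ np.linalg.solve(CAA, bA))
        # Schwarz's Bayesian criterion: dropping |A| parameters saves |A|*log(n),
        # so the restricted model is preferred when W < |A|*log(n)
        results.append((A, W, W < len(A) * np.log(n)))

for A, W, prefer_restricted in sorted(results, key=lambda t: t[1]):
    print(f"drop {A}: Wald ~ {W:.2f}, restricted model preferred: {prefer_restricted}")
```

Because each restricted model costs only a small linear solve rather than a refit, wrapping this enumeration in a bootstrap loop for internal validation is cheap, which is the efficiency argument the abstract makes.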

4.
Count data with skewness and many zeros are common in substance abuse and addiction research. Zero-adjusting models, especially zero-inflated models, have become increasingly popular in analyzing this type of data. This paper reviews and compares five mixed-effects Poisson family models commonly used to analyze count data with a high proportion of zeros by analyzing a longitudinal outcome: number of smoking quit attempts from the New Hampshire Dual Disorders Study. The findings of our study indicated that count data with many zeros do not necessarily require zero-inflated or other zero-adjusting models. For rare event counts or count data with small means, a simpler model such as the negative binomial model may provide a better fit.
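The point that "many zeros" alone does not imply zero inflation can be shown with a small simulation: a negative binomial with a small mean (generated here via its Poisson-gamma mixture representation) already produces more zeros than a Poisson with the same mean. The mean and dispersion values are illustrative, not taken from the New Hampshire study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Negative binomial via Poisson-gamma mixture: lam_i ~ Gamma, y_i ~ Poisson(lam_i)
shape = 0.5                                   # gamma shape = NB dispersion parameter
mean = 0.5                                    # small mean: rare-event counts
lam = rng.gamma(shape, mean / shape, 20000)   # subject-level Poisson rates
y = rng.poisson(lam)                          # marginally negative binomial counts

zero_frac = (y == 0).mean()                   # observed proportion of zeros
poisson_p0 = np.exp(-y.mean())                # P(Y=0) under a Poisson with the same mean
print(f"observed zeros: {zero_frac:.3f}, Poisson with same mean predicts: {poisson_p0:.3f}")
```

The overdispersion of the negative binomial accounts for the excess zeros by itself, with no structural zero component, which is the paper's argument for preferring the simpler model when counts have small means.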


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号