1.
Estimation methods for nonlinear mixed-effects modelling have improved considerably over the last decades. Nowadays, several algorithms implemented in different software are used. The present study aimed at comparing their performance for dose-response models. Eight scenarios were considered using a sigmoid Emax model, with varying sigmoidicity and residual error models. One hundred simulated datasets were generated for each scenario. One hundred individuals with observations at four doses constituted the rich design; observations at two doses constituted the sparse design. Nine parametric approaches for maximum likelihood estimation were studied: first-order conditional estimation (FOCE) in NONMEM and R, LAPLACE in NONMEM and SAS, adaptive Gaussian quadrature (AGQ) in SAS, and stochastic approximation expectation maximization (SAEM) in NONMEM and MONOLIX (both SAEM approaches with default and modified settings). All approaches started first from initial estimates set to the true values and second from altered values. Results were examined through the relative root mean squared error (RRMSE) of the estimates. With true initial conditions, a full completion rate was obtained with all approaches except FOCE in R. Runtimes were shortest with FOCE and LAPLACE and longest with AGQ. Under the rich design, all approaches performed well except FOCE in R. When starting from altered initial conditions, AGQ, followed by FOCE in NONMEM, LAPLACE in SAS, and SAEM in NONMEM and MONOLIX with tuned settings, consistently displayed lower RRMSE than the other approaches. For standard dose-response models analyzed through mixed-effects models, differences were identified in the performance of estimation methods available in current software, giving modellers material to identify suitable approaches based on an accuracy-versus-runtime trade-off.
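The RRMSE criterion used in the study above can be illustrated with a short sketch (not the authors' code; the values are made up):

```python
import math

def rrmse(estimates, true_value):
    """Relative root mean squared error of a parameter's estimates
    across replicate simulated datasets, relative to the true value."""
    rel_err = [(e - true_value) / true_value for e in estimates]
    return math.sqrt(sum(r * r for r in rel_err) / len(rel_err))

# Example: four replicate estimates of a parameter whose true value is 1.0
print(rrmse([1.10, 0.90, 1.05, 0.95], 1.0))  # ≈ 0.079
```

A lower RRMSE across replicates indicates both smaller bias and smaller imprecision of the estimation method.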

2.
It is not uncommon that the outcome measurements, symptoms or side effects, of a clinical trial belong to the family of event-type data, e.g., bleeding episodes or emesis events. Event data are often low in information content, and the mixed-effects modeling software NONMEM has previously been shown to perform poorly with low-information ordered categorical data. The aim of this investigation was to assess the performance of the Laplace method, the stochastic approximation expectation-maximization (SAEM) method, and the importance sampling method when modeling repeated time-to-event data. The Laplace method already existed, whereas the latter two methods have recently become available in NONMEM 7. A stochastic simulation and estimation study was performed to assess the performance of the three estimation methods when applied to a repeated time-to-event model with a constant hazard associated with an exponential interindividual variability. Various conditions were investigated, ranging from rare to frequent events and from low to high interindividual variability. Method performance was assessed by parameter bias and precision. Due to the lack of information content under conditions where very few events were observed, all three methods exhibit parameter bias and imprecision, most pronounced for the Laplace method. The performance of SAEM and importance sampling was generally higher than that of Laplace when the frequency of individuals with events was less than 43%, while at frequencies above that all methods were equal in performance.
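The simulation setting described above (a constant hazard with exponential interindividual variability) can be sketched as follows; all parameter values and names are illustrative assumptions, not those of the study:

```python
import math
import random

def simulate_rtte(n_subjects=100, base_hazard=0.5, omega=0.3,
                  t_end=1.0, seed=1):
    """Simulate repeated time-to-event data: each subject i has a
    constant hazard lambda_i = base_hazard * exp(eta_i) with
    eta_i ~ N(0, omega^2), so inter-event times are exponential."""
    rng = random.Random(seed)
    events_per_subject = []
    for _ in range(n_subjects):
        lam = base_hazard * math.exp(rng.gauss(0.0, omega))
        t, events = 0.0, []
        while True:
            t += rng.expovariate(lam)  # draw the next inter-event time
            if t > t_end:
                break                  # censor at the end of observation
            events.append(t)
        events_per_subject.append(events)
    return events_per_subject
```

Lowering `base_hazard` or `t_end` produces the "rare event" conditions under which the abstract reports the largest bias.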

3.
Using simulated viral load data for a given maraviroc monotherapy study design, the feasibility of different algorithms to perform parameter estimation for a pharmacokinetic-pharmacodynamic-viral dynamics (PKPD-VD) model was assessed. The assessed algorithms are the first-order conditional estimation method with interaction (FOCEI) implemented in NONMEM VI and the SAEM algorithm implemented in MONOLIX version 2.4. Simulated data were also used to test if an effect compartment and/or a lag time could be distinguished to describe an observed delay in onset of viral inhibition using SAEM. The preferred model was then used to describe the observed maraviroc monotherapy plasma concentration and viral load data using SAEM. In this last step, three modelling approaches were compared; (i) sequential PKPD-VD with fixed individual Empirical Bayesian Estimates (EBE) for PK, (ii) sequential PKPD-VD with fixed population PK parameters and including concentrations, and (iii) simultaneous PKPD-VD. Using FOCEI, many convergence problems (56%) were experienced with fitting the sequential PKPD-VD model to the simulated data. For the sequential modelling approach, SAEM (with default settings) took less time to generate population and individual estimates including diagnostics than with FOCEI without diagnostics. For the given maraviroc monotherapy sampling design, it was difficult to separate the viral dynamics system delay from a pharmacokinetic distributional delay or delay due to receptor binding and subsequent cellular signalling. The preferred model included a viral load lag time without inter-individual variability. Parameter estimates from the SAEM analysis of observed data were comparable among the three modelling approaches. 
For the sequential methods, computation time is approximately 25% less when fixing individual EBEs of PK parameters and omitting the concentration data, compared with fixing population PK parameters and retaining the concentration data in the PD-VD estimation step. Computation times were similar for the sequential method with fixed population PK parameters and the simultaneous PKPD-VD modelling approach. The current analysis demonstrated that the SAEM algorithm in MONOLIX is useful for fitting complex mechanistic models requiring multiple differential equations. The SAEM algorithm allowed simultaneous estimation of PKPD and viral dynamics parameters, as well as investigation of different model sub-components during the model building process. This was not possible with the FOCEI method (NONMEM version VI or below). SAEM provides a more feasible alternative to FOCEI when facing lengthy computation times and convergence problems with complex models.

4.
The paper compares the performance of NONMEM estimation methods (first-order conditional estimation with interaction (FOCEI), iterative two-stage (ITS), Monte Carlo importance sampling (IMP), importance sampling assisted by mode a posteriori (IMPMAP), stochastic approximation expectation-maximization (SAEM), and Markov chain Monte Carlo Bayesian (BAYES)) on simulated examples of a monoclonal antibody with target-mediated drug disposition (TMDD), demonstrates how optimization of the estimation options improves performance, and compares standard errors of NONMEM parameter estimates with those predicted by the PFIM 3.2 optimal design software. In the examples of the one- and two-target quasi-steady-state TMDD models with rich sampling, the parameter estimates and standard errors of the new NONMEM 7.2.0 ITS, IMP, IMPMAP, SAEM and BAYES estimation methods were similar to those of the FOCEI method, although a larger deviation from the true parameter values (those used to simulate the data) was observed using the BAYES method for poorly identifiable parameters. Standard errors of the parameter estimates were in general agreement with the PFIM 3.2 predictions. The ITS, IMP, and IMPMAP methods with the convergence tester were the fastest methods, reducing the computation time by about ten times relative to the FOCEI method. Use of lower computational precision requirements for the FOCEI method reduced the estimation time by 3-5 times without compromising the quality of the parameter estimates, and equaled or exceeded the speed of the SAEM and BAYES methods. Use of parallel computations with 4-12 processors running on the same computer improved the speed proportionally to the number of processors, with an efficiency (for the 12-processor run) in the range of 85-95% for all methods except BAYES, which had a parallelization efficiency of about 70%.
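The parallelization efficiency quoted above is simply speedup divided by processor count; as a sketch (illustrative runtimes):

```python
def parallel_efficiency(t_serial, t_parallel, n_proc):
    """Parallelization efficiency: speedup (t_serial / t_parallel)
    divided by the number of processors; 1.0 is ideal scaling."""
    return (t_serial / t_parallel) / n_proc

# A 12-processor run that is 9.6x faster than serial is 80% efficient
print(parallel_efficiency(12.0, 1.25, 12))  # → 0.8
```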

5.
Analysis of longitudinal ordered categorical efficacy or safety data in clinical trials using mixed models is increasingly performed. However, algorithms available for maximum likelihood estimation using an approximation of the likelihood integral, including the LAPLACE approach, may give rise to biased parameter estimates. The SAEM algorithm is an efficient and powerful tool in the analysis of continuous and count mixed models. The aim of this study was to implement and investigate the performance of the SAEM algorithm for longitudinal categorical data. The SAEM algorithm is extended for parameter estimation in ordered categorical mixed models together with an estimation of the Fisher information matrix and the likelihood. We performed Monte Carlo simulations using previously published scenarios evaluated with NONMEM. Accuracy and precision in parameter estimation and standard error estimates were assessed in terms of relative bias and root mean square error. The algorithm was illustrated on the simultaneous analysis of pharmacokinetic and discretized efficacy data obtained after a single dose of warfarin in healthy volunteers. The new SAEM algorithm is implemented in MONOLIX 3.1 for discrete mixed models. The analyses show that for parameter estimation, the relative bias is low for both fixed effects and variance components in all models studied. Estimated and empirical standard errors are similar. The warfarin example illustrates how simple and rapid it is to analyze continuous and discrete data simultaneously with MONOLIX 3.1. The SAEM algorithm is extended for the analysis of longitudinal categorical data. It provides accurate estimates of parameters and standard errors. The estimation is fast and stable.
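The likelihood building block of such an ordered categorical mixed model can be sketched as cumulative-logit (proportional odds) category probabilities; the cutpoints and linear predictor here are illustrative, not the study's model:

```python
import math

def ordinal_probs(cutpoints, eta):
    """Category probabilities under a cumulative-logit model:
    P(Y <= k) = inv_logit(cutpoint_k + eta), with cutpoints increasing
    and eta the subject-level linear predictor (including the random
    effect). Returns len(cutpoints) + 1 probabilities summing to 1."""
    def inv_logit(x):
        return 1.0 / (1.0 + math.exp(-x))
    cum = [inv_logit(c + eta) for c in cutpoints] + [1.0]
    probs, prev = [], 0.0
    for c in cum:
        probs.append(c - prev)  # successive differences of cumulatives
        prev = c
    return probs
```

Integrating the product of these probabilities over the random-effect distribution is the intractable step that LAPLACE approximates and that SAEM handles by stochastic simulation.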

6.
Multiple imputation (MI) is an approach widely used in statistical analysis of incomplete data. However, its application to missing data problems in nonlinear mixed-effects modelling is limited. The objective was to implement a four-step MI method for handling missing covariate data in NONMEM and to evaluate the method's sensitivity to η-shrinkage. Four steps were needed: (1) estimation of empirical Bayes estimates (EBEs) using a base model without the partly missing covariate, (2) a regression model for the covariate values given the EBEs from subjects with covariate information, (3) imputation of covariates using the regression model and (4) estimation of the population model. Steps (3) and (4) were repeated several times. The procedure was automated in PsN and is now available as the mimp functionality (http://psn.sourceforge.net/). The method's sensitivity to shrinkage in EBEs was evaluated in a simulation study where the covariate was missing according to a missing at random type of missing data mechanism. The η-shrinkage was increased in steps from 4.5 to 54%. Two hundred datasets were simulated and analysed for each scenario. When shrinkage was low the MI method gave unbiased and precise estimates of all population parameters. With increased shrinkage the estimates became less precise but remained unbiased.
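Steps (2) and (3) of the four-step procedure can be sketched under a simple linear-regression assumption (all names are illustrative; the actual mimp implementation in PsN differs in detail):

```python
import random

def impute_covariate(ebes, covariates, seed=0):
    """One imputation draw: fit covariate ~ EBE by least squares on
    the observed pairs (step 2), then fill missing values (None) with
    the fitted prediction plus Gaussian residual noise (step 3)."""
    rng = random.Random(seed)
    obs = [(e, c) for e, c in zip(ebes, covariates) if c is not None]
    n = len(obs)
    mean_e = sum(e for e, _ in obs) / n
    mean_c = sum(c for _, c in obs) / n
    sxx = sum((e - mean_e) ** 2 for e, _ in obs)
    sxy = sum((e - mean_e) * (c - mean_c) for e, c in obs)
    slope = sxy / sxx
    intercept = mean_c - slope * mean_e
    resid_sd = (sum((c - (intercept + slope * e)) ** 2
                    for e, c in obs) / (n - 2)) ** 0.5
    return [c if c is not None
            else intercept + slope * e + rng.gauss(0.0, resid_sd)
            for e, c in zip(ebes, covariates)]
```

Repeating this draw and re-estimating the population model (step 4) for each completed dataset yields the multiple imputations; shrinkage in the EBEs weakens the regression in step (2), which is the sensitivity the study evaluates.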

7.
Analysis of count data from clinical trials using mixed-effects analysis has recently become widely used. However, algorithms available for parameter estimation, including LAPLACE and Gaussian quadrature (GQ), are associated with certain limitations, including bias in parameter estimates and long analysis runtimes. The stochastic approximation expectation maximization (SAEM) algorithm has proven to be a very efficient and powerful tool in the analysis of continuous data. The aim of this study was to implement and investigate the performance of a new SAEM algorithm for application to count data. A new SAEM algorithm was implemented in MATLAB for estimation of both the parameters and the Fisher information matrix. Stochastic Monte Carlo simulations followed by re-estimation were performed according to scenarios used in previous studies (part I) to investigate properties of alternative algorithms (Plan et al., 2008, Abstr 1372). A single scenario was used to explore six probability distribution models. For parameter estimation, the relative bias was less than 0.92% for fixed effects and 4.13% for random effects, for all models studied, including ones accounting for over- or under-dispersion. Empirical and estimated relative standard errors were similar, with the distance between them being <1.7% for all explored scenarios. The longest CPU time was 95 s for parameter estimation and 56 s for SE estimation. The SAEM algorithm was extended for the analysis of count data. It provides accurate estimates of both parameters and standard errors. The estimation is significantly faster compared to LAPLACE and GQ. The algorithm is implemented in MONOLIX 3.1 (beta version available in July 2009).
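The relative bias metric reported above can be sketched as follows (illustrative values):

```python
def relative_bias(estimates, true_value):
    """Relative bias (%) of the mean estimate across replicate
    datasets versus the true value used in the simulation."""
    mean_est = sum(estimates) / len(estimates)
    return 100.0 * (mean_est - true_value) / true_value

# Three replicate estimates of a parameter whose true value is 1.0
print(relative_bias([1.02, 0.98, 1.04], 1.0))  # ≈ 1.33 (%)
```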

8.
For the purpose of population pharmacometric modeling, a variety of mathematical algorithms are implemented in major modeling software packages to facilitate maximum likelihood modeling, such as FO, FOCE, Laplace, ITS and EM. These methods are all designed to estimate the set of parameters that maximize the joint likelihood of observations in a given problem. While FOCE is still currently the most widely used method in population modeling, EM methods are becoming more popular as the current-generation methods of choice because of their robustness with more complex models and sparse data structures. Several versions of the EM method implementation are available in public modeling software packages. Although there have been several studies and reviews comparing the performance of different methods in handling relatively simple models, there has not been a dedicated study comparing different versions of EM algorithms in solving complex PBPK models. This study took everolimus as a model drug and simulated PK data based on published results. The three most popular EM methods (SAEM, IMP and QRPEM) and FOCE (as a benchmark reference) were evaluated for their estimation accuracy and convergence speed when solving models of increasing complexity. Both sparse and rich sampling data structures were tested. We concluded that FOCE was superior to the EM methods for simple structured models. For more complex models and/or sparse data, EM methods are much more robust. While estimation accuracy was very close across EM methods, the general ranking of speed (fastest to slowest) was QRPEM, IMP and SAEM. IMP gave the most realistic estimation of parameter standard errors, while under- and over-estimation of standard errors were observed in the SAEM and QRPEM methods.

9.
In nonlinear mixed-effects models, estimation methods based on a linearization of the likelihood are widely used although they have several methodological drawbacks. Kuhn and Lavielle (Comput. Statist. Data Anal. 49:1020–1038 (2005)) developed an estimation method which combines the SAEM (Stochastic Approximation EM) algorithm with a MCMC (Markov Chain Monte Carlo) procedure for maximum likelihood estimation in nonlinear mixed-effects models without linearization. This method is implemented in the Matlab software MONOLIX which is available at http://www.math.u-psud.fr/~lavielle/monolix/logiciels. In this paper we apply MONOLIX to the analysis of the pharmacokinetics of saquinavir, a protease inhibitor, from concentrations measured after single dose administration in 100 HIV patients, some with advanced disease. We also illustrate how to use MONOLIX to build the covariate model using the Bayesian Information Criterion. Saquinavir oral clearance (CL/F) was estimated to be 1.26 L/h and to increase with body mass index, the inter-patient variability for CL/F being 120%. Several methodological developments are ongoing to extend SAEM, which is a very promising estimation method for population pharmacokinetic/pharmacodynamic analyses.
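The Bayesian Information Criterion used above for covariate model building can be sketched as follows (illustrative -2LL values, not the saquinavir results):

```python
import math

def bic(minus_two_loglik, n_params, n_obs):
    """Bayesian Information Criterion: -2*log-likelihood plus a
    penalty of log(n) per estimated parameter (lower is better)."""
    return minus_two_loglik + n_params * math.log(n_obs)

# A covariate adding one parameter is retained only if it improves
# -2LL by more than log(n_obs)
base = bic(1000.0, 5, 100)
with_cov = bic(995.0, 6, 100)
print(with_cov < base)  # → True, since the 5.0 drop exceeds log(100) ≈ 4.6
```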

10.
Pharmacogenetics is now widely investigated and health institutions acknowledge its place in clinical pharmacokinetics. Our objective is to assess, through a simulation study, the impact of design on the statistical performance of three different tests used for analysis of pharmacogenetic information with nonlinear mixed effects models: (i) an ANOVA to test the relationship between the empirical Bayes estimates of the model parameter of interest and the genetic covariate, (ii) a global Wald test to assess whether estimates for the gene effect are significant, and (iii) a likelihood ratio test (LRT) between the model with and without the genetic covariate. We use the stochastic EM algorithm (SAEM) implemented in the MONOLIX 2.1 software. The simulation setting is inspired from a real pharmacokinetic study. We investigate four designs with N the number of subjects and n the number of samples per subject: (i) N = 40/n = 4, similar to the original study, (ii) N = 80/n = 2 sorted in 4 groups, a design optimized using the PFIM software, (iii) a combined design, N = 20/n = 4 plus N = 80 with only a trough concentration, and (iv) N = 200/n = 4, to approach asymptotic conditions. We find that the ANOVA has a correct type I error estimate regardless of design, even for the optimized sparse design. The type I error of the Wald test and the LRT is moderately inflated in the designs far from the asymptotic (<10%). For each design, the corrected power is analogous for the three tests. Among the three designs with a total of 160 observations, the design N = 80/n = 2 optimized with PFIM provides both the lowest standard error on the effect coefficients and the best power for the Wald test and the LRT, while a high shrinkage decreases the power of the ANOVA. In conclusion, a correction method should be used for model-based tests in pharmacogenetic studies with reduced sample size and/or sparse sampling and, for the same number of samples, some designs have better power than others.
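The LRT between the models with and without the genetic covariate can be sketched, for a single added parameter, as follows (illustrative -2LL values):

```python
import math

def lrt_pvalue_1df(m2ll_reduced, m2ll_full):
    """p-value of the likelihood ratio test for one added parameter:
    the drop in -2LL is referred to a chi-square(1) distribution,
    whose survival function is erfc(sqrt(x/2))."""
    stat = m2ll_reduced - m2ll_full
    return math.erfc(math.sqrt(stat / 2.0))

# The classic 3.84 drop in -2LL corresponds to p ≈ 0.05
print(lrt_pvalue_1df(1003.841, 1000.0))  # ≈ 0.05
```

For gene effects coded with more than one coefficient the reference distribution has more degrees of freedom; this 1-df version is only the simplest case.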

11.
To characterise the pharmacokinetics of dofetilide in patients and to identify clinically relevant parameter–covariate relationships. To investigate three different modelling strategies in covariate model building using dofetilide as an example: (1) using statistical criteria only or in combination with clinical irrelevance criteria for covariate selection, (2) applying covariate effects on total clearance or separately on non-renal and renal clearances and (3) using separate data sets for covariate selection and parameter estimation. Pooled concentration-time data (1,445 patients, 10,133 observations) from phase III clinical trials was used. A population pharmacokinetic model was developed using NONMEM. Stepwise covariate model building was applied to identify important covariates using the strategies described above. Inclusion and exclusion of covariates using clinical irrelevance was based on reduction in interindividual variability and changes in parameters at the extremes of the covariate distribution. Parametric separation of the elimination pathways was accomplished using creatinine clearance as an indicator of renal function. The pooled data was split in three parts which were used for covariate selection, parameter estimation and evaluation of predictive performance. Parameter estimations were done using the first-order (FO) and the first-order conditional estimation (FOCE) methods. A one-compartment model with first order absorption adequately described the data. Using clinical irrelevance criteria resulted in models containing less parameter–covariate relationships with a minor loss in predictive power. A larger number of covariates were found significant when the elimination was divided into a renal part and a non-renal part, but no gain in predictive power could be seen with this data set. The FO and FOCE estimation methods gave almost identical final covariate model structures with similar predictive performance. 
Clinical irrelevance criteria may be valuable for practical reasons, since stricter inclusion/exclusion criteria shorten the run times of the covariate model building procedure and because only the covariates important for the predictive performance are included in the model. K. Tunblad and L. Lindbom contributed equally to this work.

12.
13.
The purpose of this study was to evaluate whether mixed effects modeling (MEM) performs better than either noncompartmental or compartmental naïve pooled data (NPD) analysis for the interpretation of single sample per subject pharmacokinetic (PK) data. Using PK parameters determined during a toxicokinetic study in rats, we simulated data sets that might emerge from similar experiments. Data sets were simulated with varying numbers of animals at each sampling time (4–48) and the number of samples taken (1–3) from each individual. Each data set was replicated 50 times and analyzed using several variations of MEM that differed in the assumptions made regarding intraindividual error, NPD, and a graphical noncompartmental method. These analyses attempted to retrieve the underlying parameter and covariate effect values. We compared these analysis methods with respect to how well the underlying values were retrieved. All analysis methods performed poorly with single sample per subject data but MEM gave less biased estimates under the simulated conditions used here. MEM performance increased when covariate effects were sought in the analysis compared with analyses seeking only PK parameters. Decreasing the number of animals used per sampling time from 48 to 16 did not influence the quality of parameter estimates but further reductions (<16 animals per sampling time) resulted in a reduced proportion of acceptable estimates. Parameter estimate quality improved and worsened with MEM and NPD, respectively, when additional samples were obtained from each individual. Assumptions made regarding the magnitude of intraindividual error were unimportant with single sample per subject data but influenced parameter estimates if more samples were obtained from each individual. MEM is preferable to both NPD and noncompartmental approaches for the analysis of single sample per subject data but even with MEM estimates of clearance are often biased.

14.
The aim of this study was to compare 2 stepwise covariate model-building strategies, frequently used in the analysis of pharmacokinetic-pharmacodynamic (PK-PD) data using nonlinear mixed-effects models, with respect to included covariates and predictive performance. In addition, the effects of stepwise regression on the estimated covariate coefficients were assessed. Using simulated and real PK data, covariate models were built applying (1) stepwise generalized additive models (GAM) for identifying potential covariates, followed by backward elimination in the computer program NONMEM, and (2) stepwise forward inclusion and backward elimination in NONMEM. Different versions of these procedures were tried (eg, treating different study occasions as separate individuals in the GAM, or fixing a part of the parameters when the NONMEM procedure was used). The final covariate models were compared, including their ability to predict a separate data set or their performance in cross-validation. The bias in the estimated coefficients (selection bias) was assessed. The model-building procedures performed similarly in the data sets explored. No major differences in the resulting covariate models were seen, and the predictive performances overlapped. Therefore, the choice of model-building procedure in these examples could be based on other aspects such as analyst- and computer-time efficiency. There was a tendency to selection bias in the estimates, although this was small relative to the overall variability in the estimates. The predictive performances of the stepwise models were also reasonably good. Thus, selection bias seems to be a minor problem in this typical PK covariate analysis.
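One forward-inclusion step of such a stepwise procedure can be sketched as follows; the objective function and covariate names are toy assumptions, not the study's models:

```python
def forward_step(current, candidates, ofv, min_drop=3.84):
    """One forward-inclusion step of stepwise covariate model building:
    try each remaining candidate covariate and return the one giving
    the largest drop in objective function value (-2LL), provided the
    drop exceeds the chi-square(1) criterion (3.84 for p < 0.05).
    Returns None when no candidate qualifies."""
    best, best_drop = None, min_drop
    for cov in candidates:
        drop = ofv(current) - ofv(current | {cov})
        if drop > best_drop:
            best, best_drop = cov, drop
    return best

# Toy objective: weight lowers -2LL by 10 points, age by only 2
toy_ofv = lambda s: 1000.0 - 10.0 * ("WT" in s) - 2.0 * ("AGE" in s)
print(forward_step(set(), {"WT", "AGE"}, toy_ofv))  # → WT
```

Backward elimination runs the same comparison in reverse, usually with a stricter significance criterion for retention.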

15.
Many clinical trials have time-to-event variables as principal response criteria. When adjustment for covariates is of some importance, the relative role of methods for such analysis may be of some concern. For the Wilcoxon and logrank tests, there is an issue of how covariance adjustment can be nonparametric in the sense of not involving any further assumptions beyond those of the logrank and Wilcoxon test. Also of particular interest in a clinical trial is the estimation of the difference between survival probabilities for the treatment groups at several points in time. As with the Wilcoxon and logrank tests, there is no well known nonparametric way to incorporate covariate adjustment into such estimation of treatment effects for survival rates. We propose a method that enables covariate adjustment for hypothesis testing with logrank or Wilcoxon scores. Related extensions for applying covariate adjustment to estimation of treatment effects are provided for differences in survival-rate counterparts to Kaplan-Meier survival rates. The results represent differences in population average survival rates with adjustment for random imbalance of covariates between treatment groups. The methods are illustrated with a clinical trial example.
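For context, the unadjusted Kaplan-Meier survival estimate that such covariate-adjusted methods build on can be sketched as follows (toy data; this is not the proposed adjustment method):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimates. events[i] is 1 if an event
    occurred at times[i], 0 if the subject was censored there.
    Returns (time, S(t)) pairs at each distinct event time."""
    s, curve = 1.0, []
    for t in sorted({t for t, e in zip(times, events) if e == 1}):
        at_risk = sum(1 for ti in times if ti >= t)
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        s *= 1.0 - d / at_risk  # multiply by conditional survival
        curve.append((t, s))
    return curve

# Events at t=1, 3, 4; one subject censored at t=2
print(kaplan_meier([1, 2, 3, 4], [1, 0, 1, 1]))
# → [(1, 0.75), (3, 0.375), (4, 0.0)]
```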

16.
The uncertainty associated with parameter estimations is essential for population model building, evaluation, and simulation. Summarized by the standard error (SE), its estimation is sometimes questionable. Herein, we evaluate the SEs provided by different nonlinear mixed-effect estimation methods together with their estimation performances. Methods based on maximum likelihood (FO and FOCE in NONMEM, nlme in S-PLUS, and SAEM in MONOLIX) and on Bayesian theory (WinBUGS) were evaluated on datasets obtained by simulations of a one-compartment PK model using 9 different designs. Bootstrap techniques were applied to FO, FOCE, and nlme. We compared SE estimations, parameter estimations, convergence, and computation time. Regarding SE estimations, the methods provided concordant results for fixed effects. On random effects, SAEM and WinBUGS tended to under- and over-estimate them, respectively. With sparse data, FO provided biased estimations of SE and discordant results between bootstrapped and original datasets. Regarding parameter estimations, FO showed a systematic bias on fixed and random effects. WinBUGS provided biased estimations, but only with sparse data. SAEM and WinBUGS converged systematically, while FOCE failed in half of the cases. Applying bootstrap with FOCE yielded CPU times too large for routine application, and bootstrap with nlme resulted in frequent crashes. In conclusion, FO provided bias on parameter estimations and on SE estimations of random effects. Methods like FOCE provided unbiased results, but convergence was the biggest issue. Bootstrap did not improve SEs for the FOCE method, except when a confidence interval of random effects is needed. WinBUGS gave consistent results but required long computation times. SAEM was in between, showing a few under-estimated SEs but unbiased parameter estimations.
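The nonparametric bootstrap applied above to FO, FOCE, and nlme can be sketched as follows (illustrative data and estimator; a real application would refit the mixed-effects model on each resample):

```python
import random

def bootstrap_se(data, estimator, n_boot=500, seed=0):
    """Nonparametric bootstrap standard error: resample the data with
    replacement, re-apply the estimator, and take the sample SD of the
    replicate estimates."""
    rng = random.Random(seed)
    n = len(data)
    ests = [estimator([data[rng.randrange(n)] for _ in range(n)])
            for _ in range(n_boot)]
    mean = sum(ests) / n_boot
    return (sum((e - mean) ** 2 for e in ests) / (n_boot - 1)) ** 0.5

mean_est = lambda xs: sum(xs) / len(xs)
print(bootstrap_se(list(range(10)), mean_est))
```

With 500 model refits per dataset, the CPU-time problem the abstract reports for bootstrapped FOCE follows directly from this resampling loop.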

17.
We evaluate by simulation three model-based methods to test the influence of a single nucleotide polymorphism on a pharmacokinetic parameter of a drug: analysis of variance (ANOVA) on the empirical Bayes estimates of the individual parameters, likelihood ratio test between models with and without genetic covariate, and Wald tests on the parameters of the model with covariate. Analyses are performed using the FO and FOCE method implemented in the NONMEM software. We compare several approaches for model selection based on tests and global criteria. We illustrate the results with pharmacokinetic data on indinavir from HIV-positive patients included in COPHAR 2-ANRS 111 to study the gene effect prospectively. Only the tests based on the EBE obtain an empirical type I error close to the expected 5%. The approximation made with the FO algorithm results in a significant inflation of the type I error of the LRT and Wald tests.

18.

19.
20.
Purpose. To introduce partially linear mixed effects models (PLMEMs), to illustrate their use, and to compare the power and type I error rate in detecting a covariate effect with nonlinear mixed effects modeling using NONMEM. Methods. Sparse concentration-time data from males and females (1:1) were simulated under a 1-compartment oral model where clearance was sex-dependent. All possible combinations of number of subjects (50, 75, 100, 150, 250), samples per subject (2, 4, 6), and clearance multipliers (1 to 1.25) were generated. Data were analyzed with and without sex as a covariate using PLMEM (maximum likelihood estimation) and NONMEM (first-order conditional estimation). Four covariate screening methods were examined: NONMEM using the likelihood ratio test (LRT), PLMEM using the LRT, PLMEM using Wald's test, and analysis of variance (ANOVA) of the empirical Bayes estimates (EBEs) for CL treating sex as a categorical variable. The percent of simulations rejecting the null hypothesis of no covariate effect at the 0.05 level was determined. 300 simulations were done to calculate power curves and 1000 simulations were done (with no covariate effect) to calculate the type I error rate. Actual implementation of PLMEMs is illustrated using previously published teicoplanin data. Results. Type I error rates were similar between PLMEM and NONMEM using the LRT, but were inflated (as high as 36%) based on PLMEM using Wald's test. The type I error rate tended to increase as the number of observations per subject increased for the LRT methods. Power curves were similar between the PLMEM and NONMEM LRT methods and were slightly above the power curve using ANOVA on the EBEs of CL. 80% power was achieved with 4 samples per subject and 50 subjects total when the effect size was approximately 1.07, 1.07, 1.08, and 1.05 for the LRT using PLMEMs, the LRT using NONMEM, ANOVA on the EBEs, and Wald's test using PLMEMs, respectively. Conclusions. PLMEM and NONMEM covariate screening using the LRT had similar type I error rates and power under the data-generating model. PLMEMs offer a viable alternative to NONMEM-based covariate screening.
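The empirical power and type I error rates reported above come from counting rejections over repeated simulations; as a sketch (the p-value generator here is a toy stand-in for a full PLMEM/NONMEM fit):

```python
import random

def empirical_rejection_rate(p_value_sim, n_sim=1000, alpha=0.05, seed=0):
    """Fraction of simulated tests rejecting at level alpha: this is
    the type I error rate when simulating under the null hypothesis,
    and the power when simulating under an alternative."""
    rng = random.Random(seed)
    hits = sum(p_value_sim(rng) < alpha for _ in range(n_sim))
    return hits / n_sim

# Under the null, p-values are uniform, so the rate should be near alpha
null_p = lambda rng: rng.random()
print(empirical_rejection_rate(null_p))
```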
