Similar Literature
20 similar articles found (search time: 514 ms)
1.
Using simulated viral load data for a given maraviroc monotherapy study design, the feasibility of different algorithms for parameter estimation in a pharmacokinetic-pharmacodynamic-viral dynamics (PKPD-VD) model was assessed. The algorithms assessed were the first-order conditional estimation method with interaction (FOCEI) implemented in NONMEM VI and the SAEM algorithm implemented in MONOLIX version 2.4. Simulated data were also used to test whether an effect compartment and/or a lag time could be distinguished to describe an observed delay in onset of viral inhibition using SAEM. The preferred model was then used to describe the observed maraviroc monotherapy plasma concentration and viral load data using SAEM. In this last step, three modelling approaches were compared: (i) sequential PKPD-VD with fixed individual Empirical Bayesian Estimates (EBE) for PK; (ii) sequential PKPD-VD with fixed population PK parameters, retaining the concentration data; and (iii) simultaneous PKPD-VD. Using FOCEI, many convergence problems (56%) were experienced when fitting the sequential PKPD-VD model to the simulated data. For the sequential modelling approach, SAEM (with default settings) took less time to generate population and individual estimates, including diagnostics, than FOCEI took without diagnostics. For the given maraviroc monotherapy sampling design, it was difficult to separate the viral dynamics system delay from a pharmacokinetic distributional delay or a delay due to receptor binding and subsequent cellular signalling. The preferred model included a viral load lag time without inter-individual variability. Parameter estimates from the SAEM analysis of observed data were comparable among the three modelling approaches. For the sequential methods, computation time was approximately 25% less when fixing individual EBEs of the PK parameters and omitting the concentration data than when fixing population PK parameters and retaining the concentration data in the PD-VD estimation step. Computation times were similar for the sequential method with fixed population PK parameters and the simultaneous PKPD-VD modelling approach. The current analysis demonstrated that the SAEM algorithm in MONOLIX is useful for fitting complex mechanistic models requiring multiple differential equations. The SAEM algorithm allowed simultaneous estimation of PKPD and viral dynamics parameters, as well as investigation of different model sub-components during model building. This was not possible with the FOCEI method (NONMEM version VI or below). SAEM provides a more feasible alternative to FOCEI when facing lengthy computation times and convergence problems with complex models.
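To make the structure of such a model concrete, the sketch below integrates a basic target-cell-limited viral dynamics system with an Emax-type drug inhibition and a lag time on the onset of inhibition. All parameter values, the mono-exponential PK function, and the lag-time handling are hypothetical illustrations, not the published maraviroc model.

```python
# Minimal sketch of a PKPD-viral-dynamics system (hypothetical parameters
# and PK function, not the published maraviroc model).
import numpy as np
from scipy.integrate import solve_ivp

def viral_dynamics(t, y, lam, d, beta, delta, p, c, ic50, conc_fn, t_lag):
    T, I, V = y                       # target cells, infected cells, virus
    C = conc_fn(max(t - t_lag, 0.0))  # lag time delays onset of inhibition
    eff = C / (C + ic50)              # Emax-type inhibition, 0..1
    dT = lam - d * T - (1 - eff) * beta * T * V
    dI = (1 - eff) * beta * T * V - delta * I
    dV = p * I - c * V
    return [dT, dI, dV]

pk = lambda t: 100.0 * np.exp(-0.1 * t)   # placeholder mono-exponential PK
sol = solve_ivp(viral_dynamics, (0.0, 240.0), [1e6, 1e2, 1e5],
                args=(1e4, 0.01, 1e-7, 0.5, 100.0, 3.0, 10.0, pk, 2.0))
print("log10 viral load at 240 h:", np.log10(sol.y[2, -1]))
```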

2.
For the purpose of population pharmacometric modeling, a variety of mathematical algorithms are implemented in major modeling software packages to facilitate maximum likelihood estimation, such as FO, FOCE, Laplace, ITS, and EM. These methods are all designed to estimate the set of parameters that maximizes the joint likelihood of the observations in a given problem. While FOCE is still the most widely used method in population modeling, EM methods are becoming the current-generation methods of choice because of their robustness with more complex models and sparse data structures. Several implementations of the EM method are available in public modeling software packages. Although several studies and reviews have compared the performance of different methods on relatively simple models, there has been no dedicated study comparing different versions of EM algorithms in solving complex PBPK models. This study took everolimus as a model drug and simulated PK data based on published results. The three most popular EM methods (SAEM, IMP, and QRPEM) and FOCE (as a benchmark reference) were evaluated for estimation accuracy and convergence speed when solving models of increasing complexity. Both sparse and rich sampling data structures were tested. We concluded that FOCE was superior to EM methods for simple structured models. For more complex models and/or sparse data, EM methods are much more robust. While estimation accuracy was very close across EM methods, the general ranking of speed (fastest to slowest) was QRPEM, IMP, and SAEM. IMP gave the most realistic estimates of parameter standard errors, while under- and over-estimation of standard errors were observed with the SAEM and QRPEM methods, respectively.

3.
Analysis of longitudinal ordered categorical efficacy or safety data in clinical trials using mixed models is increasingly performed. However, algorithms available for maximum likelihood estimation based on an approximation of the likelihood integral, including the LAPLACE approach, may give rise to biased parameter estimates. The SAEM algorithm is an efficient and powerful tool for the analysis of continuous and count mixed models. The aim of this study was to implement and investigate the performance of the SAEM algorithm for longitudinal categorical data. The SAEM algorithm is extended to parameter estimation in ordered categorical mixed models, together with estimation of the Fisher information matrix and the likelihood. We performed Monte Carlo simulations based on previously published scenarios evaluated with NONMEM. Accuracy and precision of parameter and standard error estimates were assessed in terms of relative bias and root mean square error. The algorithm was illustrated on the simultaneous analysis of pharmacokinetic and discretized efficacy data obtained after a single dose of warfarin in healthy volunteers. The new SAEM algorithm is implemented in MONOLIX 3.1 for discrete mixed models. The analyses show that, for parameter estimation, the relative bias is low for both fixed effects and variance components in all models studied. Estimated and empirical standard errors are similar. The warfarin example illustrates how simple and rapid it is to analyze continuous and discrete data simultaneously with MONOLIX 3.1. The SAEM algorithm, extended to the analysis of longitudinal categorical data, provides accurate estimates of parameters and standard errors, and the estimation is fast and stable.
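As a minimal illustration of the model class, the sketch below simulates ordered categorical observations from a proportional-odds mixed model with a subject-level random effect; the cutpoints and variability are hypothetical, not the warfarin analysis.

```python
# Sketch of a proportional-odds mixed model for ordered categorical data
# (hypothetical cutpoints and variability, not the warfarin analysis).
import numpy as np

rng = np.random.default_rng(1)

def category_probs(alpha, eta):
    """P(Y = k) from cumulative logits: P(Y <= k) = expit(alpha_k + eta)."""
    cum = 1.0 / (1.0 + np.exp(-(np.asarray(alpha) + eta)))
    cum = np.append(cum, 1.0)               # last category closes the scale
    return np.diff(np.concatenate(([0.0], cum)))

alpha = [-1.0, 0.5, 2.0]        # increasing cutpoints -> 4 ordered categories
omega = 0.8                     # SD of the subject-level random effect
for subj in range(3):
    eta = rng.normal(0.0, omega)            # subject random effect
    y = rng.choice(4, size=6, p=category_probs(alpha, eta))
    print(f"subject {subj}: eta = {eta:+.2f}, scores = {y}")
```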

4.
Analysis of count data from clinical trials using mixed-effects models has recently become widespread. However, the algorithms available for parameter estimation, including LAPLACE and Gaussian quadrature (GQ), are associated with certain limitations, including bias in parameter estimates and long analysis runtimes. The stochastic approximation expectation maximization (SAEM) algorithm has proven to be a very efficient and powerful tool in the analysis of continuous data. The aim of this study was to implement and investigate the performance of a new SAEM algorithm for count data. The new SAEM algorithm was implemented in MATLAB for estimation of both the parameters and the Fisher information matrix. Stochastic Monte Carlo simulations followed by re-estimation were performed according to scenarios used in previous studies (part I) investigating the properties of alternative algorithms (Plan et al., 2008, Abstr 1372). A single scenario was used to explore six probability distribution models. For parameter estimation, the relative bias was less than 0.92% for fixed effects and 4.13% for random effects across all models studied, including those accounting for over- or under-dispersion. Empirical and estimated relative standard errors were similar, with the distance between them <1.7% for all explored scenarios. The longest CPU time was 95 s for parameter estimation and 56 s for SE estimation. The SAEM algorithm, extended to the analysis of count data, provides accurate estimates of both parameters and standard errors, and estimation is significantly faster than with LAPLACE and GQ. The algorithm is implemented in Monolix 3.1 (beta version available in July 2009).
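The integral that LAPLACE, GQ, and SAEM all target can be written down directly for a simple case. The sketch below evaluates one subject's marginal likelihood for a Poisson mixed model by Gauss-Hermite quadrature; the counts and parameter values are hypothetical.

```python
# One subject's marginal likelihood for a Poisson mixed model, evaluated by
# Gauss-Hermite quadrature -- the integral that LAPLACE/GQ/SAEM approximate.
# Counts and parameter values are hypothetical.
import numpy as np
from scipy.stats import poisson

nodes, weights = np.polynomial.hermite.hermgauss(30)

def marginal_loglik(y, log_lam_pop, omega):
    """log of integral of prod_j Poisson(y_j | exp(log_lam_pop + eta)) times
    N(eta; 0, omega^2), with the change of variable eta = sqrt(2)*omega*x."""
    lams = np.exp(log_lam_pop + np.sqrt(2.0) * omega * nodes)
    lik = np.prod(poisson.pmf(np.asarray(y)[:, None], lams), axis=0)
    return np.log(np.sum(weights * lik) / np.sqrt(np.pi))

print(marginal_loglik([3, 1, 4, 2], log_lam_pop=1.0, omega=0.5))
```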

5.
Analysis of repeated time-to-event data is increasingly performed in pharmacometrics using parametric frailty models. The aims of this simulation study were (1) to assess the estimation performance of the Stochastic Approximation Expectation Maximization (SAEM) algorithm in MONOLIX and of the Adaptive Gaussian Quadrature (AGQ) and Laplace algorithms in PROC NLMIXED of SAS, and (2) to evaluate the properties of tests of a dichotomous covariate on the occurrence of events. The simulation setting is inspired by an analysis of the occurrence of bone events after initiation of imiglucerase treatment in patients with Gaucher disease (GD). We simulated repeated events with an exponential model and various dropout rates: none, low, or high. Several values of the baseline hazard, variability, number of subjects, and covariate effect were studied. For each scenario, 100 datasets were simulated to assess estimation performance and 500 to assess test performance. We evaluated estimation performance through relative bias and relative root mean square error (RRMSE), and studied the properties of the Wald and likelihood ratio tests (LRT). We used these methods to analyze the occurrence of bone events in patients with GD after starting enzyme replacement therapy. SAEM with three chains and the AGQ algorithm provided good parameter estimates, much better than SAEM with one chain and Laplace, which often provided poor estimates. Despite a small number of repeated events, SAEM with three chains and AGQ gave small biases and RRMSE. Type I errors were close to 5%, and power varied as expected for SAEM with three chains and AGQ. The probability of having at least one event under treatment was 19.1%.
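For intuition, the sketch below simulates repeated events over a follow-up window from an exponential (constant-hazard) frailty model with a dichotomous treatment covariate, a minimal version of the simulation setting described above; all values are hypothetical.

```python
# Simulating repeated events from an exponential (constant-hazard) frailty
# model with a dichotomous treatment covariate; all values are hypothetical.
import numpy as np

rng = np.random.default_rng(7)
h0, beta_trt, omega, horizon = 0.05, -0.7, 0.5, 24.0  # baseline hazard /month

def simulate_subject(treated):
    eta = rng.normal(0.0, omega)                  # log-normal frailty
    h = h0 * np.exp(beta_trt * treated + eta)     # subject-specific hazard
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / h)             # constant hazard -> exp gaps
        if t > horizon:
            return times
        times.append(t)

counts = np.array([len(simulate_subject(treated=1)) for _ in range(1000)])
print("P(at least one event under treatment) ~", (counts > 0).mean())
```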

6.
Statistical evaluation of nonisothermal prediction of drug stability
Nonisothermal prediction of drug stability based on direct nonlinear estimation of the shelf-life was compared with the isothermal approach. The reliability of the statistics for the estimates of the shelf-life (the time required for a drug to degrade to 90% of its initial content at 25°C) and the activation energy obtained by the two methods was evaluated by Monte Carlo computer simulation. The accuracy and precision of the estimates obtained by the nonisothermal method depended largely on the experimental conditions, such as the experimental period, sampling times, and temperature-rise program. The uncertainty of the estimates was determined mainly by the extent of drug degradation and of temperature change achieved during the experiment. The nonisothermal method required suitable experimental designs and precise assays of drug content to provide reliable parameter estimates.
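A worked sketch of the nonisothermal principle: under first-order Arrhenius kinetics, the fraction remaining after a temperature-rise program is the exponential of minus the integrated rate, and the shelf-life at 25 °C follows directly from the estimated rate constant. All parameter values here are hypothetical.

```python
# First-order Arrhenius degradation integrated along a temperature ramp,
# plus the isothermal shelf-life (t90) at 25 C. Parameters are hypothetical.
import numpy as np
from scipy.integrate import trapezoid

R = 8.314          # gas constant, J/(mol*K)
Ea = 100e3         # activation energy, J/mol (hypothetical)
A = 4e13           # pre-exponential factor, 1/day (hypothetical)

def k(T_kelvin):
    return A * np.exp(-Ea / (R * T_kelvin))

# ramp 25 -> 70 C over 30 days; first-order: C/C0 = exp(-integral of k(T(t)))
t = np.linspace(0.0, 30.0, 3001)
T = 298.15 + (70.0 - 25.0) * t / 30.0
print(f"fraction remaining after ramp: {np.exp(-trapezoid(k(T), t)):.3f}")

# shelf-life at 25 C: C/C0 = 0.9  =>  t90 = ln(1/0.9) / k(298 K)
print(f"t90 at 25 C: {np.log(1 / 0.9) / k(298.15):.0f} days")
```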

7.
Optimal sampling times for pharmacokinetic experiments
A sequential estimation procedure is presented which uses optimal sampling times to estimate the parameters of a model from data obtained from a group of subjects. This optimal sampling sequential estimation procedure utilizes parameter estimates from previous subjects in the group to determine the optimal sampling times for the next subject. Parameter estimates obtained from the optimal sampling procedure are compared to those obtained from a conventional sampling scheme by using Monte Carlo simulations which include noise terms for both assay error and intersubject variability. The results of these numerical experiments, for the two examples considered here, show that the parameter estimates obtained from data collected at optimal sampling times have significantly less variability than those generated using the conventional sampling procedure. We conclude that optimal sampling and preexperiment simulation may be useful tools for designing informative pharmacokinetic experiments.

Presented at the First Annual Conference of the American College of Clinical Pharmacy, Boston, July 1980.
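A minimal sketch of the underlying computation: for given prior parameter guesses, D-optimal times maximize the determinant of the Fisher information matrix, here approximated as J'J with finite-difference sensitivities of a one-compartment oral model. The model, grid, and values are hypothetical.

```python
# D-optimal design sketch: pick the 3 sampling times that maximize det(J'J)
# for a one-compartment oral model, with J the finite-difference sensitivity
# matrix at prior parameter guesses. Model and values are hypothetical.
import numpy as np
from itertools import combinations

theta0, dose = np.array([1.5, 0.2, 10.0]), 100.0   # ka, ke, V (prior guesses)

def conc(t, ka, ke, V):
    return dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def det_fim(times):
    t, eps = np.asarray(times), 1e-5
    J = np.empty((t.size, 3))
    for j in range(3):                    # finite-difference sensitivities
        up, dn = theta0.copy(), theta0.copy()
        up[j] += eps; dn[j] -= eps
        J[:, j] = (conc(t, *up) - conc(t, *dn)) / (2 * eps)
    return np.linalg.det(J.T @ J)         # unit residual variance assumed

grid = np.arange(0.5, 24.5, 0.5)
best = max(combinations(grid, 3), key=det_fim)     # exhaustive grid search
print("D-optimal 3-point design (h):", best)
```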

8.
NONMEM is the most widely used software for population pharmacokinetic (PK)-pharmacodynamic (PD) analyses. The latest version, NONMEM 7 (NM7), includes several sampling-based estimation methods in addition to the classical methods. In this study, the performance of the estimation methods available in NM7 was investigated with respect to bias, precision, robustness, and runtime for a diverse set of PD models. Simulations of 500 data sets from each PD model were reanalyzed with the available estimation methods to investigate bias and precision. Simulations of 100 data sets were used to investigate robustness by comparing final estimates obtained after estimations starting from the true parameter values with those from initial estimates randomly generated using the CHAIN feature in NM7. Average estimation time for each algorithm and each model was calculated from the runtimes reported by NM7. The method giving the lowest bias and highest precision across models was importance sampling, closely followed by FOCE/LAPLACE and stochastic approximation expectation-maximization. The methods' relative robustness differed between models, and no method showed clearly superior performance. FOCE/LAPLACE was the method with the shortest runtime for all models, followed by iterative two-stage. The Bayesian Markov Chain Monte Carlo method, used in this study for point estimation, performed worst on all tested metrics.

9.
Currently available software for nonlinear regression does not account for errors in both the independent and the dependent variables. In pharmacodynamics, measurement errors affect the drug concentrations as well as the effects. Instead of minimizing the sum of squared vertical errors (OLS), a Fortran program was written to find the closest distance from a measured data point to the tangent line of an estimated nonlinear curve and to minimize the sum of squared perpendicular distances (PLS). A Monte Carlo simulation was conducted with the sigmoidal Emax model to compare the OLS and PLS methods. The area between the true pharmacodynamic relationship and the fitted curve was compared as a measure of goodness of fit. PLS improved on OLS by 20.8%, with small differences in the parameter estimates, when the random noise had a standard deviation of five for both concentration and effect. Considering errors in both concentrations and effects with PLS could lead to more rational estimation of pharmacodynamic parameters. © 1997 John Wiley & Sons, Ltd.
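Orthogonal distance regression, as implemented in scipy.odr, minimizes weighted perpendicular distances and is a readily available analogue of the PLS idea described above. The sketch below contrasts it with an ordinary (vertical) least-squares fit of a sigmoidal Emax model; the noise SD of five on both axes echoes the abstract, but the simulated data and all other values are hypothetical.

```python
# OLS (vertical distances) vs orthogonal distance regression (perpendicular
# distances, errors on both axes) for a sigmoidal Emax model.
import numpy as np
from scipy import odr
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

def emax_model(beta, c):
    e_max, ec50, gamma = beta
    return e_max * c**gamma / (ec50**gamma + c**gamma)

c_true = np.linspace(5.0, 100.0, 15)
e_true = emax_model([100.0, 20.0, 2.0], c_true)
c_obs = np.clip(c_true + rng.normal(0, 5, 15), 0.1, None)  # keep c > 0
e_obs = e_true + rng.normal(0, 5, 15)

ols, _ = curve_fit(lambda c, *b: emax_model(b, c), c_obs, e_obs,
                   p0=[100.0, 20.0, 2.0])
out = odr.ODR(odr.RealData(c_obs, e_obs, sx=5.0, sy=5.0),
              odr.Model(emax_model), beta0=[100.0, 20.0, 2.0]).run()
print("OLS :", np.round(ols, 2))
print("ODR :", np.round(out.beta, 2))
```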

10.
In nonlinear mixed-effects models, estimation methods based on a linearization of the likelihood are widely used although they have several methodological drawbacks. Kuhn and Lavielle (Comput. Statist. Data Anal. 49:1020–1038 (2005)) developed an estimation method which combines the SAEM (Stochastic Approximation EM) algorithm with an MCMC (Markov Chain Monte Carlo) procedure for maximum likelihood estimation in nonlinear mixed-effects models without linearization. This method is implemented in the Matlab software MONOLIX, which is available at http://www.math.u-psud.fr/~lavielle/monolix/logiciels. In this paper we apply MONOLIX to the analysis of the pharmacokinetics of saquinavir, a protease inhibitor, from concentrations measured after single-dose administration in 100 HIV patients, some with advanced disease. We also illustrate how to use MONOLIX to build the covariate model using the Bayesian Information Criterion. Saquinavir oral clearance (CL/F) was estimated to be 1.26 L/h and to increase with body mass index, the inter-patient variability for CL/F being 120%. Several methodological developments are ongoing to extend SAEM, which is a very promising estimation method for population pharmacokinetic/pharmacodynamic analyses.

11.
Nonlinear regression is widely used in pharmacokinetic and pharmacodynamic modeling by applying nonlinear ordinary least squares. Although the assumption of independent errors is frequently not fulfilled, this has received scant attention in the pharmacokinetic literature. As in linear regression, ignoring correlation of the errors leads to an underestimation of the standard deviations of the parameter estimates. On the other hand, the use of models that accommodate correlated errors requires more care and more computation. This paper describes a method to fit log-normal functions to individual response curves containing correlated errors by means of statistical software for time series. A sample computer program is given in which the SAS/ETS procedure MODEL is used. In particular, the problem of finding appropriate starting values for nonlinear iterative algorithms is considered, and a linear weighted least squares approach for initial parameter estimation is developed. The adequacy of the method is investigated by means of Monte Carlo simulations. Furthermore, the statistical properties of nonlinear least squares with and without accommodation of correlated errors are compared. Time-action profiles of a long-acting insulin preparation injected subcutaneously in humans are analyzed to illustrate the usefulness of the proposed method.
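Since the SAS/ETS program itself is not reproduced here, the sketch below illustrates the same idea in a generic way: fit the log-normal function by OLS, estimate the lag-1 autocorrelation from the residuals, and refit after an AR(1) whitening transform (a Cochrane-Orcutt-style two-step). The curve, noise process, and starting values are hypothetical.

```python
# Two-step fit of a log-normal response curve with AR(1) residuals:
# OLS fit, estimate lag-1 autocorrelation, refit on whitened residuals.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)

def lognormal_curve(theta, t):
    a, mu, sigma = theta
    return a * np.exp(-(np.log(t) - mu) ** 2 / (2 * sigma ** 2))

t = np.linspace(0.5, 24.0, 48)
e = np.zeros(t.size)
for i in range(1, t.size):                     # AR(1) noise, rho = 0.7
    e[i] = 0.7 * e[i - 1] + rng.normal(0, 0.3)
y = lognormal_curve([10.0, 1.5, 0.6], t) + e

ols = least_squares(lambda th: lognormal_curve(th, t) - y, x0=[8.0, 1.0, 1.0])
r = lognormal_curve(ols.x, t) - y
rho = np.sum(r[1:] * r[:-1]) / np.sum(r[:-1] ** 2)   # lag-1 autocorrelation

def whitened(th):                              # AR(1) whitening transform
    res = lognormal_curve(th, t) - y
    return np.concatenate(([res[0] * np.sqrt(1 - rho ** 2)],
                           res[1:] - rho * res[:-1]))

gls = least_squares(whitened, x0=ols.x)
print(f"rho_hat = {rho:.2f}")
print("OLS:", np.round(ols.x, 2), " GLS:", np.round(gls.x, 2))
```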

12.
Optimization of the sampling schedule can be used in pharmacokinetic (PK) experiments to increase the accuracy and precision of parameter estimation or to reduce the number of samples required. Several optimization criteria that formally incorporate prior parameter uncertainty have been proposed. These criteria consist of finding the sampling schedule that maximizes the expectation (over a given parameter distribution) of det F (ED-optimality) or log(det F) (API-optimality), or minimizes the expectation of 1/det F (EID-optimality), where F is the Fisher information matrix. The precision and accuracy of parameter estimation after fitting a PK model to a small number of optimal data points (determined according to the D, ED, EID, and API criteria) or to a naive sampling schedule were compared in a Monte Carlo simulation study. A one-compartment model with first-order absorption (3 parameters) and a two-compartment model with zero-order infusion (4 parameters) were considered. Data were simulated for 300 subjects with both structural models, combined with several residual error models (homoscedastic, and heteroscedastic with constant or variable coefficient of variation). Interindividual variabilities in the PK parameters ranged from 25% to 66%. ED-, EID-, and API-optimal sampling times were calculated using the software OSP-Fit. Three or five samples were allowed for parameter estimation by extended least squares. The performance of each design criterion was evaluated in terms of mean prediction error, root mean squared error, and number of acceptable estimates (i.e., with an SE less than 30%). Compared to the D-optimal design, the EID and API designs reduced the bias and imprecision of the estimates of the parameters having large interindividual variability. Moreover, the API design in some cases yielded a higher number of acceptable estimates.
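The three expectation-based criteria can be approximated directly by Monte Carlo over the prior parameter distribution, as in the sketch below for a one-compartment oral model (finite-difference Fisher information, hypothetical log-normal prior and design).

```python
# Monte Carlo evaluation of ED, API, and EID criteria over a hypothetical
# log-normal prior on (ka, ke, V) for a one-compartment oral model.
import numpy as np

rng = np.random.default_rng(11)
dose = 100.0

def conc(t, ka, ke, V):
    return dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def det_F(times, theta):
    eps, J = 1e-5, np.empty((times.size, 3))
    for j in range(3):                        # finite-difference sensitivities
        up, dn = theta.copy(), theta.copy()
        up[j] += eps; dn[j] -= eps
        J[:, j] = (conc(times, *up) - conc(times, *dn)) / (2 * eps)
    return np.linalg.det(J.T @ J)

def criteria(times, n_mc=2000):
    thetas = np.exp(rng.normal(np.log([1.5, 0.2, 10.0]), 0.3, (n_mc, 3)))
    d = np.array([det_F(times, th) for th in thetas])
    return {"ED": d.mean(),                   # maximize E[det F]
            "API": np.log(d).mean(),          # maximize E[log det F]
            "EID": (1.0 / d).mean()}          # minimize E[1/det F]

print(criteria(np.array([0.5, 2.0, 12.0])))
```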

13.
The uncertainty associated with parameter estimates is essential for population model building, evaluation, and simulation. Summarized by the standard error (SE), its estimation is sometimes questionable. Here, we evaluate the SEs provided by different nonlinear mixed-effects estimation methods together with their estimation performance. Methods based on maximum likelihood (FO and FOCE in NONMEM, nlme in S-Plus, and SAEM in MONOLIX) and on Bayesian theory (WinBUGS) were evaluated on datasets obtained by simulation of a one-compartment PK model using 9 different designs. Bootstrap techniques were applied to FO, FOCE, and nlme. We compared SE estimates, parameter estimates, convergence, and computation time. Regarding SE estimation, the methods provided concordant results for the fixed effects. For the random effects, SAEM and WinBUGS tended to under- and over-estimate the SEs, respectively. With sparse data, FO provided biased SE estimates and discordant results between bootstrapped and original datasets. Regarding parameter estimation, FO showed a systematic bias on both fixed and random effects. WinBUGS provided biased estimates, but only with sparse data. SAEM and WinBUGS converged systematically, while FOCE failed in half of the cases. Applying the bootstrap with FOCE yielded CPU times too long for routine application, and the bootstrap with nlme resulted in frequent crashes. In conclusion, FO gave biased parameter estimates and biased SE estimates for the random effects. Methods like FOCE provided unbiased results, but convergence was the biggest issue. The bootstrap did not improve the SEs for the FOCE method, except when confidence intervals of the random effects are needed. WinBUGS gave consistent results but required long computation times. SAEM was in between, showing slightly under-estimated SEs but unbiased parameter estimates.

14.
Objective: To compare the bias and precision of penalized quasi-likelihood (PQL) and Markov chain Monte Carlo (MCMC)-based Bayesian methods for parameter estimation in generalized linear mixed models. Methods: For hierarchical data with unequal cluster sizes, parameters were estimated by PQL using the SAS GLIMMIX procedure and by the Bayesian method using WinBUGS. Results: The two methods gave essentially identical estimates of the fixed-effect parameters, but for the random-effect variance the MCMC-based Bayesian method was far less biased than PQL. Conclusion: For binary hierarchical data, Bayesian estimation of generalized linear mixed models is more precise and less biased.

15.
It is not uncommon that the outcome measurements, symptoms, or side effects of a clinical trial belong to the family of event-type data, e.g., bleeding episodes or emesis events. Event data are often low in information content, and the mixed-effects modeling software NONMEM has previously been shown to perform poorly with low-information ordered categorical data. The aim of this investigation was to assess the performance of the Laplace method, the stochastic approximation expectation-maximization (SAEM) method, and the importance sampling method when modeling repeated time-to-event data. The Laplace method already existed, whereas the two latter methods have recently become available in NONMEM 7. A stochastic simulation and estimation study was performed to assess the performance of the three estimation methods when applied to a repeated time-to-event model with a constant hazard associated with exponential interindividual variability. Various conditions were investigated, ranging from rare to frequent events and from low to high interindividual variability. Method performance was assessed by parameter bias and precision. Owing to the lack of information content under conditions where very few events were observed, all three methods exhibited parameter bias and imprecision, most pronounced for the Laplace method. The performance of SAEM and importance sampling was generally better than that of Laplace when the frequency of individuals with events was less than 43%, while at frequencies above that all methods performed equally well.

16.
17.
The Monte Carlo Parametric Expectation Maximization (MC-PEM) algorithm can approximate the true log-likelihood as precisely as needed and is efficiently parallelizable. Our objectives were to evaluate an importance sampling version of the MC-PEM algorithm for mechanistic models and to qualify the default estimation settings in SADAPT-TRAN. We assessed the bias, imprecision, and robustness of this algorithm in S-ADAPT for mechanistic models with up to 45 simultaneously estimated structural parameters, 14 differential equations, and 10 dependent variables (one drug concentration and nine pharmacodynamic effects). Simpler models comprising 15 parameters were estimated using three of the ten dependent variables. We set initial estimates to 0.1 or 10 times the true value and evaluated 30 bootstrap replicates with frequent or sparse sampling. Datasets comprised three dose levels with 16 subjects each. For simultaneous estimation of the full model, the ratio of estimated to true values for the structural model parameters (median [5th–95th percentile] over 45 parameters) was 1.01 [0.94–1.13] for means and 0.99 [0.68–1.39] for between-subject variances with frequent sampling, and 1.02 [0.81–1.47] for means and 1.02 [0.47–2.56] for variances with sparse sampling. Imprecision was ≤25% for 43 of the 45 means with frequent sampling. Bias and imprecision were comparable between the full and simpler models. Parallelized estimation was 23-fold (6.9-fold) faster using 48 threads (eight threads) relative to one thread. The MC-PEM algorithm was robust and provided unbiased and adequately precise means and variances during simultaneous estimation of complex mechanistic models in a 45-dimensional parameter space with rich or sparse data, even from poor initial estimates.
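The importance-sampling step at the heart of this MC-PEM variant can be illustrated on a one-random-effect toy model: draw from a normal proposal, and weight by prior over proposal to approximate the subject's marginal likelihood. All values below are hypothetical.

```python
# Importance-sampling approximation of one subject's marginal likelihood,
# the building block of an MC-PEM E-step. One random effect, toy model.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(13)
t = np.array([1.0, 2.0, 3.0])
y = np.array([2.1, 1.4, 0.9])                 # observed concentrations

def loglik_given_eta(eta):
    """y ~ N(3*exp(-ke*t), 0.2^2) with ke = 0.3*exp(eta)."""
    pred = 3.0 * np.exp(-0.3 * np.exp(eta) * t)
    return norm.logpdf(y, pred, 0.2).sum()

def marginal_lik(omega=0.4, mu_prop=0.0, sd_prop=0.5, n=20000):
    etas = rng.normal(mu_prop, sd_prop, n)     # proposal draws
    logw = (np.array([loglik_given_eta(e) for e in etas])
            + norm.logpdf(etas, 0.0, omega)    # prior on the random effect
            - norm.logpdf(etas, mu_prop, sd_prop))  # proposal correction
    m = logw.max()                             # stabilized weight average
    return np.exp(m) * np.exp(logw - m).mean()

print("marginal likelihood ~", marginal_lik())
```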

18.
Pharmacokinetic studies are commonly analyzed using a two-stage approach, where the first stage involves estimating pharmacokinetic parameters for each subject separately and the second stage uses the individual parameter estimates for statistical inference. This two-stage approach is not applicable in sparse sampling situations where only one sample is available per subject. Nonlinear models are often applied to analyze pharmacokinetic data assessed in such serial sampling designs. Modeling approaches are suitable provided that the form of the true model is known, which is rarely the case in early stages of drug development. This paper presents an alternative approach to estimating pharmacokinetic parameters, based on non-compartmental and asymptotic theory, for the case of serial sampling when a drug is given as an intravenous bolus. The statistical properties of the estimators of the pharmacokinetic parameters are investigated and evaluated using Monte Carlo simulations.
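A minimal sketch of the serial-sampling, non-compartmental idea for an IV bolus: each subject contributes a single sample, the mean concentrations per time point define the curve, and the trapezoidal AUC yields clearance. The design and parameters are hypothetical, and the area from time zero to the first sample is neglected here, so CL comes out slightly high.

```python
# Serial-sampling NCA for an IV bolus: one sample per subject, mean
# concentration per time point, AUC by trapezoid, CL = dose/AUC.
import numpy as np
from scipy.integrate import trapezoid

rng = np.random.default_rng(17)
dose, ke, V = 100.0, 0.25, 20.0               # true CL = ke*V = 5 L/h
times = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0])

# 8 subjects per time point, each contributing a single sample
mean_conc = np.array([(dose / V * np.exp(-ke * t)
                       * np.exp(rng.normal(0, 0.2, 8))).mean() for t in times])

auc_last = trapezoid(mean_conc, times)
lz = -np.polyfit(times[-3:], np.log(mean_conc[-3:]), 1)[0]  # terminal slope
auc_inf = auc_last + mean_conc[-1] / lz       # extrapolate to infinity
print(f"CL ~ {dose / auc_inf:.2f} L/h (true {ke * V:.2f} L/h)")
```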

19.
Estimating the power of a nonlinear mixed-effects model-based analysis is challenging due to the lack of a closed-form analytic expression. Often, computationally intensive Monte Carlo studies must be employed to evaluate the power of a planned experiment. This is especially time-consuming if full power-versus-sample-size curves are to be obtained. A novel parametric power estimation (PPE) algorithm utilizing the theoretical distribution of the test statistic under the alternative hypothesis is presented in this work. The PPE algorithm estimates the unknown non-centrality parameter of that distribution from a limited number of Monte Carlo simulations and estimations. The estimated parameter scales linearly with study size, allowing quick generation of the full power-versus-study-size curve. A comparison of PPE with the classical, purely Monte Carlo-based power estimation (MCPE) algorithm for five diverse pharmacometric models showed excellent agreement between the two algorithms, with a bias of less than 1.2% and higher precision for PPE. The power extrapolated from a specific study size was in very good agreement with the power curves obtained with the MCPE algorithm. PPE represents a promising approach to accelerating power calculations for nonlinear mixed-effects models.
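A sketch of the PPE mechanics: treat the likelihood ratio test statistic under the alternative as noncentral chi-square, estimate the noncentrality from a few Monte Carlo replicates (its mean is df + lambda), scale lambda linearly with study size, and evaluate power analytically. The LRT values below are hypothetical stand-ins for statistics from simulated-and-refitted pharmacometric models.

```python
# PPE sketch: estimate the LRT noncentrality from a few Monte Carlo fits
# (E[noncentral chi2] = df + lambda), scale lambda with study size, and read
# power off the noncentral chi-square distribution.
import numpy as np
from scipy.stats import chi2, ncx2

df, alpha, n0 = 1, 0.05, 50                  # 1-df covariate test, N0 = 50
lrt = np.array([6.2, 4.8, 7.9, 5.5, 6.7])    # hypothetical MC LRT statistics

lam0 = max(lrt.mean() - df, 0.0)             # noncentrality at study size n0
crit = chi2.ppf(1 - alpha, df)               # LRT significance threshold

for n in (25, 50, 100, 200):                 # full power-vs-size curve
    print(f"N = {n:3d}: power ~ {ncx2.sf(crit, df, lam0 * n / n0):.2f}")
```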

20.
Optimal sampling design with nonparametric population modeling offers the opportunity to determine pharmacokinetic parameters in patients in whom blood sampling is restricted. This approach was compared to a standard individualized modeling method for meropenem pharmacokinetics in febrile neutropenic patients. The population modeling program NPEM (nonparametric expectation maximization), applied to a full data set, was compared with analysis of a sparse data set selected by D-optimal sampling design. The authors demonstrated that the D-optimal sampling strategy, when applied to this clinical population, provided good pharmacokinetic parameter estimates along with their variability. Four individualized, optimally selected sampling time points provided the same parameter estimates as more intensive sampling regimens using traditional and population modeling techniques. The different modeling methods were largely consistent, except for the estimation of CL(d) with sparse sampling. The findings suggest that D-optimal sparse sampling is a reasonable approach to population pharmacokinetic/pharmacodynamic studies during drug development when limited sampling is necessary.

