Similar Articles
1.
Analysis of repeated time-to-event data is increasingly performed in pharmacometrics using parametric frailty models. The aims of this simulation study were (1) to assess the estimation performance of the Stochastic Approximation Expectation Maximization (SAEM) algorithm in MONOLIX and of the Adaptive Gaussian Quadrature (AGQ) and Laplace algorithms in PROC NLMIXED of SAS, and (2) to evaluate the properties of tests of a dichotomous covariate effect on the occurrence of events. The simulation setting is inspired by an analysis of the occurrence of bone events after initiation of treatment with imiglucerase in patients with Gaucher Disease (GD). We simulated repeated events with an exponential model and various dropout rates: none, low, or high. Several values of the baseline hazard, variability, number of subjects, and covariate effect were studied. For each scenario, 100 datasets were simulated for estimation performance and 500 for test performance. We evaluated estimation performance through relative bias and relative root mean square error (RRMSE). We studied the properties of the Wald and likelihood ratio tests (LRT). We used these methods to analyze the occurrence of bone events in patients with GD after starting enzyme replacement therapy. SAEM with three chains and AGQ provided good parameter estimates, markedly better than SAEM with one chain and Laplace, which often provided poor estimates. Despite a small number of repeated events, SAEM with three chains and AGQ gave small biases and RRMSE. Type I errors were close to 5%, and power varied as expected for SAEM with three chains and AGQ. The probability of having at least one event under treatment was 19.1%.
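As a rough illustration of this simulation setting (a sketch, not the authors' MONOLIX or SAS code), the following Python snippet simulates repeated events from a constant-hazard exponential model with a lognormal frailty and computes the relative bias and RRMSE criteria used in the study; the hazard, frailty SD, and follow-up values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_rtte(n_subj=100, lam=0.5, omega=0.3, t_end=2.0):
    """Repeated time-to-event data: constant (exponential) baseline hazard
    with a lognormal frailty, observed over (0, t_end]."""
    events = []
    for _ in range(n_subj):
        lam_i = lam * np.exp(rng.normal(0.0, omega))   # subject-specific hazard
        t, times = 0.0, []
        while True:
            t += rng.exponential(1.0 / lam_i)          # exponential gap times
            if t > t_end:
                break
            times.append(t)
        events.append(times)
    return events

def rbias_rrmse(estimates, true_value):
    """Relative bias and relative RMSE over replicate estimates."""
    est = np.asarray(estimates, dtype=float)
    rbias = np.mean(est - true_value) / true_value
    rrmse = np.sqrt(np.mean((est - true_value) ** 2)) / true_value
    return rbias, rrmse
```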

2.
A significant bias in parameters estimated with the proportional odds model using the software NONMEM has been reported. Typically, this bias occurs with ordered categorical data when most of the observations are found at one extreme of the possible outcomes. The aim of this study was to assess, through simulations, the performance of the Back-Step Method (BSM), a novel approach for obtaining unbiased estimates when the standard approach provides biased ones. BSM is an iterative method involving sequential simulation-estimation steps. BSM was compared with the standard approach in the analysis of a 4-category ordered variable using the Laplacian method in NONMEM. The bias in parameter estimates and the accuracy of model predictions were determined for the 2 methods under 3 conditions: (1) a nonskewed distribution of the response with low interindividual variability (IIV), (2) a skewed distribution with low IIV, and (3) a skewed distribution with high IIV. An increase in bias with increasing skewness and IIV was shown in parameters estimated using the standard approach in NONMEM. BSM performed without appreciable bias in the estimates under all 3 conditions, and the model predictions were in good agreement with the original data. Each BSM estimation represents a random sample of the population; hence, repeating the BSM estimation reduces the imprecision of the parameter estimates. The BSM is an accurate estimation method when the standard modeling approach in NONMEM gives biased estimates.
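The data-generating half of a BSM-style simulation-estimation step can be sketched as follows, assuming a proportional odds model with a normal subject-level random effect; the thresholds and variability below are illustrative choices, not the study's values, and are set to give the skewed category distribution that triggers the bias.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_ordinal(n_subj=100, n_obs=6, thetas=(-2.0, -3.0, -4.0), omega=1.0):
    """Simulate a 4-category ordered variable from a proportional odds model
    with a subject-level random effect eta_i ~ N(0, omega^2):
    logit P(Y >= k) = theta_1 + ... + theta_k + eta_i, k = 1..3."""
    cuts = np.cumsum(thetas)              # logits of P(Y>=1), P(Y>=2), P(Y>=3)
    y = np.empty((n_subj, n_obs), dtype=int)
    for i in range(n_subj):
        eta = rng.normal(0.0, omega)
        p_ge = 1.0 / (1.0 + np.exp(-(cuts + eta)))
        # category probabilities P(Y=0..3) from the cumulative ones
        p = -np.diff(np.concatenate(([1.0], p_ge, [0.0])))
        y[i] = rng.choice(4, size=n_obs, p=p)
    return y

y = simulate_ordinal()
print(np.bincount(y.ravel(), minlength=4))   # heavily skewed toward category 0
```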

4.
For population pharmacometric modeling, a variety of mathematical algorithms are implemented in major modeling software packages to facilitate maximum likelihood estimation, such as FO, FOCE, Laplace, ITS, and EM. These methods are all designed to estimate the set of parameters that maximizes the joint likelihood of observations in a given problem. While FOCE is still the most widely used method in population modeling, EM methods are becoming the current-generation methods of choice because of their robustness with more complex models and sparse data structures. Several implementations of the EM method are available in public modeling software packages. Although several studies and reviews have compared the performance of different methods in handling relatively simple models, there has not been a dedicated study comparing different versions of EM algorithms in solving complex PBPK models. This study took everolimus as a model drug and simulated PK data based on published results. The three most popular EM methods (SAEM, IMP, and QRPEM) and FOCE (as a benchmark reference) were evaluated for their estimation accuracy and convergence speed when solving models of increasing complexity. Both sparse and rich sampling data structures were tested. We concluded that FOCE was superior to EM methods for simple structured models. For more complex models and/or sparse data, EM methods are much more robust. While estimation accuracy was very close across EM methods, the general ranking of speed (fastest to slowest) was QRPEM, IMP, and SAEM. IMP gave the most realistic estimates of parameter standard errors, while under- and over-estimation of standard errors were observed with the SAEM and QRPEM methods.
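To illustrate the expectation-maximization alternation that SAEM, IMP, and QRPEM approximate stochastically, here is a minimal sketch on a toy one-way random-effects model where both steps have closed forms; this is deliberately far simpler than a PBPK model and is not any package's implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: y_ij = mu + b_i + e_ij, b_i ~ N(0, sb2), e_ij ~ N(0, s2)
n_subj, n_obs = 50, 4
b = rng.normal(0.0, np.sqrt(4.0), n_subj)
y = 10.0 + b[:, None] + rng.normal(0.0, 1.0, (n_subj, n_obs))

mu, sb2, s2 = 0.0, 1.0, 1.0          # deliberately poor starting values
for _ in range(200):
    # E-step: posterior mean and variance of each b_i at current parameters
    shrink = sb2 / (sb2 + s2 / n_obs)
    b_hat = shrink * (y.mean(axis=1) - mu)
    v = 1.0 / (n_obs / s2 + 1.0 / sb2)
    # M-step: closed-form updates of mu, sb2, s2
    mu = np.mean(y - b_hat[:, None])
    sb2 = np.mean(b_hat**2) + v
    s2 = np.mean((y - mu - b_hat[:, None]) ** 2) + v
print(mu, sb2, s2)   # should approach the true values 10, 4, 1
```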

5.
Analysis of longitudinal ordered categorical efficacy or safety data in clinical trials using mixed models is increasingly performed. However, algorithms available for maximum likelihood estimation using an approximation of the likelihood integral, including the Laplace approach, may give rise to biased parameter estimates. The SAEM algorithm is an efficient and powerful tool for the analysis of continuous and count data in mixed models. The aim of this study was to implement and investigate the performance of the SAEM algorithm for longitudinal categorical data. The SAEM algorithm was extended for parameter estimation in ordered categorical mixed models, together with estimation of the Fisher information matrix and the likelihood. We performed Monte Carlo simulations using previously published scenarios evaluated with NONMEM. Accuracy and precision in parameter estimation and standard error estimates were assessed in terms of relative bias and root mean square error. The algorithm was illustrated on the simultaneous analysis of pharmacokinetic and discretized efficacy data obtained after a single dose of warfarin in healthy volunteers. The new SAEM algorithm is implemented in MONOLIX 3.1 for discrete mixed models. The analyses show that, for parameter estimation, the relative bias is low for both fixed effects and variance components in all models studied. Estimated and empirical standard errors are similar. The warfarin example illustrates how simple and rapid it is to analyze continuous and discrete data simultaneously with MONOLIX 3.1. The SAEM algorithm, extended to the analysis of longitudinal categorical data, provides accurate estimates of parameters and standard errors, and the estimation is fast and stable.
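A minimal sketch of the SAEM mechanics on a toy latent-variable model where the simulation step can be drawn exactly; in categorical mixed models the conditional draw requires MCMC, but the step-size schedule and the stochastic approximation of the sufficient statistics follow the same pattern. All values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy model: y_i | eta_i ~ N(eta_i, s2), eta_i ~ N(mu, w2), s2 known.
# SAEM alternates: (S) draw eta from its conditional, (SA) smooth the
# sufficient statistics, (M) update (mu, w2) from the smoothed statistics.
n, s2 = 200, 1.0
y = rng.normal(2.0, np.sqrt(0.5 + s2), n)   # true mu = 2.0, w2 = 0.5

mu, w2 = 0.0, 1.0
S1, S2 = 0.0, 0.0
K_burn, K_total = 100, 300
for k in range(1, K_total + 1):
    gamma = 1.0 if k <= K_burn else 1.0 / (k - K_burn)  # SAEM step sizes
    v = 1.0 / (1.0 / s2 + 1.0 / w2)                     # conditional variance
    m = v * (y / s2 + mu / w2)                          # conditional mean
    eta = rng.normal(m, np.sqrt(v))                     # simulation step
    S1 = S1 + gamma * (eta.sum() - S1)                  # stochastic approximation
    S2 = S2 + gamma * ((eta**2).sum() - S2)
    mu = S1 / n                                         # M-step updates
    w2 = max(S2 / n - mu**2, 1e-8)
print(mu, w2)
```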

6.
Using simulated viral load data for a given maraviroc monotherapy study design, the feasibility of different algorithms to perform parameter estimation for a pharmacokinetic-pharmacodynamic-viral dynamics (PKPD-VD) model was assessed. The assessed algorithms were the first-order conditional estimation method with interaction (FOCEI) implemented in NONMEM VI and the SAEM algorithm implemented in MONOLIX version 2.4. Simulated data were also used to test whether an effect compartment and/or a lag time could be distinguished to describe an observed delay in onset of viral inhibition using SAEM. The preferred model was then used to describe the observed maraviroc monotherapy plasma concentration and viral load data using SAEM. In this last step, three modelling approaches were compared: (i) sequential PKPD-VD with fixed individual empirical Bayesian estimates (EBE) for PK, (ii) sequential PKPD-VD with fixed population PK parameters and inclusion of the concentration data, and (iii) simultaneous PKPD-VD. Using FOCEI, many convergence problems (56%) were experienced when fitting the sequential PKPD-VD model to the simulated data. For the sequential modelling approach, SAEM (with default settings) took less time to generate population and individual estimates, including diagnostics, than FOCEI took without diagnostics. For the given maraviroc monotherapy sampling design, it was difficult to separate the viral dynamics system delay from a pharmacokinetic distributional delay or a delay due to receptor binding and subsequent cellular signalling. The preferred model included a viral load lag time without inter-individual variability. Parameter estimates from the SAEM analysis of observed data were comparable among the three modelling approaches. For the sequential methods, computation time was approximately 25% less when fixing individual EBEs of PK parameters and omitting the concentration data, compared with fixing population PK parameters and retaining the concentration data in the PD-VD estimation step. Computation times were similar for the sequential method with fixed population PK parameters and the simultaneous PKPD-VD modelling approach. The current analysis demonstrates that the SAEM algorithm in MONOLIX is useful for fitting complex mechanistic models requiring multiple differential equations. The SAEM algorithm allowed simultaneous estimation of PKPD and viral dynamics parameters, as well as investigation of different model sub-components during the model building process. This was not possible with the FOCEI method (NONMEM version VI or below). SAEM provides a more feasible alternative to FOCEI when facing lengthy computation times and convergence problems with complex models.
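A generic target-cell-limited viral dynamics model with an Emax drug effect and an onset lag can be written as below; this is a sketch under assumed placeholder parameters and a simple mono-exponential concentration profile, not the published maraviroc PKPD-VD model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def viral_dynamics(t, y, lam, d, beta, delta, p, c, ic50, conc, tlag):
    """Target-cell-limited viral dynamics; the drug inhibits infection via an
    Emax effect on beta, and the effect only starts after a lag time tlag."""
    T, I, V = y
    C = conc(t - tlag) if t > tlag else 0.0
    eps = C / (C + ic50)                           # fractional inhibition
    dT = lam - d * T - (1.0 - eps) * beta * T * V
    dI = (1.0 - eps) * beta * T * V - delta * I
    dV = p * I - c * V
    return [dT, dI, dV]

# Placeholder parameters and a mono-exponential concentration profile
conc = lambda t: 100.0 * np.exp(-0.2 * t)
pars = (1e4, 0.01, 1e-7, 0.5, 100.0, 3.0, 5.0, conc, 0.5)
y0 = [1e6, 1e3, 1e5]                               # T, I, V at dosing time
sol = solve_ivp(viral_dynamics, (0.0, 14.0), y0, args=pars,
                rtol=1e-8, atol=1e-6, dense_output=True)
log10_vl = np.log10(sol.sol(np.linspace(0.0, 14.0, 57))[2])
```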

7.
Estimation methods for nonlinear mixed-effects modelling have improved considerably over the last decades. Nowadays, several algorithms implemented in different software packages are used. The present study aimed at comparing their performance for dose-response models. Eight scenarios were considered using a sigmoid Emax model, with varying sigmoidicity and residual error models. One hundred simulated datasets were generated for each scenario. One hundred individuals with observations at four doses constituted the rich design, and at two doses, the sparse design. Nine parametric approaches for maximum likelihood estimation were studied: first-order conditional estimation (FOCE) in NONMEM and R, LAPLACE in NONMEM and SAS, adaptive Gaussian quadrature (AGQ) in SAS, and stochastic approximation expectation maximization (SAEM) in NONMEM and MONOLIX (both SAEM approaches with default and modified settings). Each approach was started first from initial estimates set to the true values and second from altered values. Results were examined through the relative root mean squared error (RRMSE) of the estimates. With true initial conditions, a full completion rate was obtained with all approaches except FOCE in R. Runtimes were shortest with FOCE and LAPLACE and longest with AGQ. Under the rich design, all approaches performed well except FOCE in R. When starting from altered initial conditions, AGQ, and then FOCE in NONMEM, LAPLACE in SAS, and SAEM in NONMEM and MONOLIX with tuned settings, consistently displayed lower RRMSE than the other approaches. For standard dose-response models analyzed through mixed-effects models, differences were identified in the performance of the estimation methods available in current software, giving modellers material to identify suitable approaches based on an accuracy-versus-runtime trade-off.
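A sketch of one simulated dataset under such a design, assuming a sigmoid Emax model with lognormal IIV on ED50 and proportional residual error; the parameter values and dose levels are illustrative, not those of the eight scenarios.

```python
import numpy as np

rng = np.random.default_rng(7)

def sigmoid_emax(dose, e0, emax, ed50, gamma):
    """Sigmoid Emax model; gamma is the sigmoidicity (Hill) factor."""
    return e0 + emax * dose**gamma / (ed50**gamma + dose**gamma)

def simulate_dataset(doses, n_subj=100, omega_ed50=0.3, sigma=0.1):
    """One dataset: lognormal IIV on ED50, proportional residual error."""
    e0, emax, ed50, gamma = 5.0, 30.0, 20.0, 2.0       # illustrative values
    ed50_i = ed50 * np.exp(rng.normal(0.0, omega_ed50, n_subj))
    d = np.tile(doses, n_subj)
    pred = sigmoid_emax(d, e0, emax, np.repeat(ed50_i, len(doses)), gamma)
    return d, pred * (1.0 + rng.normal(0.0, sigma, pred.size))

d_rich, y_rich = simulate_dataset(np.array([0.0, 10.0, 30.0, 100.0]))  # rich
d_sparse, y_sparse = simulate_dataset(np.array([10.0, 100.0]))         # sparse
```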

8.
Therapeutic drug monitoring of factor VIII is well established in the treatment of patients with hemophilia, owing to important interindividual variability. The individual initial factor VIII dosage is usually calculated from individual pharmacokinetic parameters obtained after a test dose administered before surgery, using at least five concentration measurements. The authors propose a limited sampling strategy to estimate individual pharmacokinetic parameters from one or two concentration measurements in patients with hemophilia A before surgery. The mean population pharmacokinetic parameters and the interindividual variability (CV) were obtained from a group of 33 patients using a two-compartment model in NONMEM. Eighteen additional patients were used to estimate the predictive performance of the population parameters and to evaluate the limited sampling strategies. Population parameters were clearance 2.6 mL/h per kilogram (CV 45.4%) and initial volume of distribution 2.8 L (CV 21.1%). From two sampling times (0.5 and 6 hours, or 0.5 and 8 hours, after the end of infusion), the estimation of pharmacokinetic parameters was precise and unbiased. Until now, in the hemophilia center of Lyon, the factor VIII dosage before surgery was based on the determination of clearance, estimated from five to nine concentration measurements, and on the target concentration (infusion rate = clearance x target). Ruffo et al. proposed a limited sampling strategy (two-stage method) to estimate pharmacokinetic parameters from two concentration measurements drawn 3 and 9 hours after the dose, but no information was given on the bias and precision of the estimation. This paper reports a one-stage method for a population pharmacokinetic study of factor VIII. Bayesian estimation of individual pharmacokinetic parameters based on only two sampling times (0.5 and 6 hours, or 0.5 and 8 hours, after the end of infusion) is useful to define the best factor VIII dosage in hemophilic patients before surgery.
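The Bayesian (MAP) estimation step behind such a limited sampling strategy can be sketched as follows, simplified to a one-compartment bolus model (the paper used a two-compartment model), with the reported population CVs recast as approximate lognormal SDs; the dose, units, residual error, and observed values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Population priors (paper: CL 2.6 mL/h/kg, CV 45.4%; V1 2.8 L, CV 21.1%)
CL_POP = 0.18   # L/h, approx. 2.6 mL/h/kg scaled to a 70 kg patient
V_POP = 2.8     # L
OMEGA = np.array([0.454, 0.211])   # CVs recast as approximate lognormal SDs
SIGMA = 0.10                       # assumed proportional residual error

def conc(t, dose, cl, v):
    """One-compartment bolus model (a simplification for this sketch)."""
    return dose / v * np.exp(-cl / v * t)

def neg_log_posterior(eta, t, y, dose):
    cl, v = CL_POP * np.exp(eta[0]), V_POP * np.exp(eta[1])
    pred = conc(t, dose, cl, v)
    res = np.sum(((y - pred) / (SIGMA * pred)) ** 2)  # data (residual) term
    pen = np.sum((eta / OMEGA) ** 2)                  # prior (shrinkage) term
    return 0.5 * (res + pen)

# Two samples at 0.5 and 6 h post-infusion, as in the proposed strategy;
# dose and observed activities are hypothetical, in matching units
t_obs, y_obs, dose = np.array([0.5, 6.0]), np.array([33.0, 22.0]), 100.0
fit = minimize(neg_log_posterior, x0=[0.0, 0.0], args=(t_obs, y_obs, dose))
cl_i, v_i = CL_POP * np.exp(fit.x[0]), V_POP * np.exp(fit.x[1])
print(cl_i, v_i)
```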

9.
The application of proportional odds models to ordered categorical data using the mixed-effects modeling approach has become more frequently reported within the pharmacokinetic/pharmacodynamic area during the last decade. The aim of this paper was to investigate the bias in parameter estimates when models for ordered categorical data were estimated using methods employing different approximations of the likelihood integral: the Laplacian approximation in NONMEM (without and with the centering option) and NLMIXED, and the Gaussian quadrature approximations in NLMIXED. In particular, we focused on situations with uneven distributions of the response categories and on the impact of interpatient variability. This is a Monte Carlo simulation study in which original data sets were derived from a known model and a fixed study design. The simulated response was a four-category variable on the ordinal scale with categories 0, 1, 2, and 3. The model used for simulation was fitted to each data set for assessment of bias. Also, simulations of new data based on estimated population parameters were performed to evaluate the usefulness of the estimated model. For the conditions tested, Gaussian quadrature performed without appreciable bias in parameter estimates. However, markedly biased parameter estimates were obtained using the Laplacian estimation method without the centering option, in particular when the distribution of observations between response categories was skewed and when the interpatient variability was moderate to large. Simulations under the model could not mimic the original data when bias was present, but resulted in overestimation of rare events. The bias was considerably reduced when the centering option in NONMEM was used. The cause of the biased estimates appears to be related to conditioning on uninformative and uncertain empirical Bayes estimates of the interindividual random effects during the estimation, in conjunction with the normality assumption.
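The difference between the quadrature and Laplace approximations of the likelihood integral can be made concrete for a single subject with a random intercept, using a binary outcome for brevity (the ordered categorical case is analogous); parameter values are illustrative.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.optimize import minimize_scalar

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def loglik_cond(b, y, theta):
    """log f(y | b) for repeated binary outcomes, P(y=1 | b) = expit(theta + b)."""
    p = expit(theta + b)
    return np.sum(y * np.log(p) + (1 - y) * np.log1p(-p))

def marginal_gq(y, theta, omega, n_nodes=20):
    """Gauss-Hermite quadrature for integral f(y|b) N(b; 0, omega^2) db,
    using the substitution b = sqrt(2) * omega * x."""
    x, w = hermgauss(n_nodes)
    b = np.sqrt(2.0) * omega * x
    f = np.exp([loglik_cond(bi, y, theta) for bi in b])
    return np.sum(w * f) / np.sqrt(np.pi)

def marginal_laplace(y, theta, omega):
    """Laplace approximation: expand the log joint density around its mode."""
    nlj = lambda b: -(loglik_cond(b, y, theta)
                      - 0.5 * b**2 / omega**2
                      - 0.5 * np.log(2.0 * np.pi * omega**2))
    b_hat = minimize_scalar(nlj).x
    h = 1e-4   # numerical second derivative at the mode
    hess = (nlj(b_hat + h) - 2.0 * nlj(b_hat) + nlj(b_hat - h)) / h**2
    return np.exp(-nlj(b_hat)) * np.sqrt(2.0 * np.pi / hess)

y = np.array([0, 0, 0, 0, 1])   # skewed outcomes, where Laplace degrades
for f in (marginal_gq, marginal_laplace):
    print(f.__name__, f(y, theta=-1.0, omega=2.0))
```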

10.
NONMEM is the most widely used software for population pharmacokinetic (PK)-pharmacodynamic (PD) analyses. The latest version, NONMEM 7 (NM7), includes several sampling-based estimation methods in addition to the classical methods. In this study, the performance of the estimation methods available in NM7 was investigated with respect to bias, precision, robustness, and runtime for a diverse set of PD models. Simulations of 500 data sets from each PD model were reanalyzed with the available estimation methods to investigate bias and precision. Simulations of 100 data sets were used to investigate robustness by comparing final estimates obtained after estimations starting from the true parameter values with those starting from initial estimates randomly generated using the CHAIN feature in NM7. The average estimation time for each algorithm and each model was calculated from the runtimes reported by NM7. The method giving the lowest bias and highest precision across models was importance sampling, closely followed by FOCE/LAPLACE and stochastic approximation expectation-maximization. The methods' relative robustness differed between models, and no method showed clearly superior performance. FOCE/LAPLACE was the method with the shortest runtime for all models, followed by iterative two-stage. The Bayesian Markov Chain Monte Carlo method, used in this study for point estimation, performed worst on all tested metrics.

11.
This paper compares the performance of NONMEM estimation methods (first-order conditional estimation with interaction (FOCEI), iterative two-stage (ITS), Monte Carlo importance sampling (IMP), importance sampling assisted by mode a posteriori (IMPMAP), stochastic approximation expectation-maximization (SAEM), and Markov chain Monte Carlo Bayesian analysis (BAYES)) on simulated examples of a monoclonal antibody with target-mediated drug disposition (TMDD), demonstrates how optimization of the estimation options improves performance, and compares the standard errors of NONMEM parameter estimates with those predicted by the PFIM 3.2 optimal design software. In the examples of the one- and two-target quasi-steady-state TMDD models with rich sampling, the parameter estimates and standard errors of the new NONMEM 7.2.0 ITS, IMP, IMPMAP, SAEM, and BAYES estimation methods were similar to those of the FOCEI method, although a larger deviation from the true parameter values (those used to simulate the data) was observed using the BAYES method for poorly identifiable parameters. Standard errors of the parameter estimates were in general agreement with the PFIM 3.2 predictions. The ITS, IMP, and IMPMAP methods with the convergence tester were the fastest methods, reducing the computation time about ten-fold relative to the FOCEI method. Use of lower computational precision requirements for the FOCEI method reduced the estimation time 3-5 fold without compromising the quality of the parameter estimates, and equaled or exceeded the speed of the SAEM and BAYES methods. Use of parallel computations with 4-12 processors running on the same computer improved the speed proportionally to the number of processors, with an efficiency (for a 12-processor run) in the range of 85-95% for all methods except BAYES, which had a parallelization efficiency of about 70%.
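For reference, the quasi-steady-state (QSS) approximation reduces the TMDD binding step to a quadratic whose positive root gives the free drug concentration, as commonly written in the TMDD literature; a small helper with illustrative numbers:

```python
import numpy as np

def free_conc_qss(c_tot, r_tot, kss):
    """Free drug concentration under the QSS approximation of TMDD:
    positive root of C^2 + (Rtot + Kss - Ctot)*C - Kss*Ctot = 0."""
    b = c_tot - r_tot - kss
    return 0.5 * (b + np.sqrt(b * b + 4.0 * kss * c_tot))

# Illustrative values: total drug 10, total target 5, Kss 0.1 (same units)
print(free_conc_qss(10.0, 5.0, 0.1))   # approx. 5.1: target nearly saturated
```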

12.
Analysis of count data from clinical trials using mixed-effects models has recently become widely used. However, algorithms available for parameter estimation, including LAPLACE and Gaussian quadrature (GQ), are associated with certain limitations, including bias in parameter estimates and long analysis runtimes. The stochastic approximation expectation maximization (SAEM) algorithm has proven to be a very efficient and powerful tool in the analysis of continuous data. The aim of this study was to implement and investigate the performance of a new SAEM algorithm for application to count data. A new SAEM algorithm was implemented in MATLAB for estimation of both the parameters and the Fisher information matrix. Stochastic Monte Carlo simulations followed by re-estimation were performed according to scenarios used in previous studies (part I) to investigate properties of alternative algorithms (Plan et al., 2008, Abstr 1372). A single scenario was used to explore six probability distribution models. For parameter estimation, the relative bias was less than 0.92% for fixed effects and 4.13% for random effects, for all models studied, including ones accounting for over- or under-dispersion. Empirical and estimated relative standard errors were similar, with the distance between them being less than 1.7% for all explored scenarios. The longest CPU time was 95 s for parameter estimation and 56 s for SE estimation. The SAEM algorithm was extended for the analysis of count data. It provides accurate estimates of both parameters and standard errors, and the estimation is significantly faster than LAPLACE and GQ. The algorithm is implemented in Monolix 3.1 (beta version available in July 2009).
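A sketch of the count-data simulation side, assuming IIV on the Poisson mean and, optionally, a gamma-Poisson (negative binomial) conditional distribution for overdispersion; parameter values are illustrative, and under-dispersed distributions such as the generalized Poisson would need a different generator.

```python
import numpy as np

rng = np.random.default_rng(12)

def simulate_counts(n_subj=100, n_obs=10, lam=2.0, omega=0.5, disp=None):
    """Repeated counts with lognormal IIV on the mean. disp=None gives a
    conditional Poisson; disp > 0 gives a negative binomial via a
    gamma-Poisson mixture, i.e. conditional overdispersion."""
    lam_i = lam * np.exp(rng.normal(0.0, omega, n_subj))[:, None]
    if disp is None:
        return rng.poisson(lam_i, (n_subj, n_obs))
    shape = 1.0 / disp
    mix = rng.gamma(shape, lam_i / shape, (n_subj, n_obs))  # mean lam_i
    return rng.poisson(mix)

y = simulate_counts(disp=0.3)
print(y.mean(), y.var())   # variance well above the mean: overdispersion
```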

13.
14.
Assessment of type I error rates for the statistical sub-model in NONMEM
The aim of this study was to assess the type I error rate when applying the likelihood ratio (LR) test for components of the statistical sub-model in NONMEM. Data were simulated from a one-compartment intravenous bolus pharmacokinetic model. Two models were fitted to the data, the simulation model and a model containing one additional parameter, and the difference in objective function values between the models was calculated. The additional parameter was either (i) a covariate effect on the interindividual variability in CL or V, (ii) a covariate effect on the residual error variability, (iii) a covariance term between CL and V, or (iv) interindividual variability in V. Factors in the simulation conditions (number of individuals and samples per individual, interindividual and residual error magnitude, residual error model) were varied systematically to assess their potential influence on the type I error rate. Different estimation methods within NONMEM were tried. When the first-order conditional estimation method with interaction (FOCE INTER) was used, the estimated type I error rates for inclusion of a covariate effect (i) on the interindividual variability or (ii) on the residual error variability were in agreement with the type I error rate expected under the assumption that the model approximations made by the estimation method are negligible. When the residual error variability was increased, the type I error rates for (iii) inclusion of covariance between CL and V were inflated if the underlying residual distribution was lognormal, or if a normal distribution was combined with too little information in the data (too few samples per subject or sampling at uninformative time-points). For inclusion of (iv) interindividual variability in V, the type I error rates were affected by the underlying residual error distribution; with a normal distribution the estimated type I error rates were close to the expected value, while with a non-normal distribution the type I error rates increased with increasing residual variability. When the first-order (FO) estimation method was used, the estimated type I error rates were higher than expected in most situations. For the FOCE INTER method, but not the FO method, the LR test is appropriate when the underlying assumptions of normality of residuals and of enough information in the data hold true. Deviations from these assumptions may lead to inflated type I error rates.
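The logic of an empirical type I error assessment can be sketched with a deliberately simple stand-in, ordinary linear regression fitted by maximum likelihood, in place of the mixed-effects models used in the study: simulate under the reduced model, fit both nested models, and count rejections at the chi-square critical value.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(14)

def lrt_type1(n_rep=2000, n=50, alpha=0.05):
    """Empirical type I error of the LRT for one extra parameter: simulate
    under the reduced model, fit both nested models by ML, and count how
    often 2*(ll_full - ll_reduced) exceeds the chi2(1) critical value."""
    crit = chi2.ppf(1.0 - alpha, df=1)
    rejections = 0
    for _ in range(n_rep):
        x = rng.normal(size=n)
        y = 1.0 + rng.normal(size=n)          # null: covariate has no effect
        rss0 = np.sum((y - y.mean()) ** 2)    # reduced model: intercept only
        X = np.column_stack([np.ones(n), x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss1 = np.sum((y - X @ beta) ** 2)    # full model: intercept + slope
        if n * np.log(rss0 / rss1) > crit:    # Gaussian ML likelihood ratio
            rejections += 1
    return rejections / n_rep

print(lrt_type1())   # should be close to the nominal 0.05
```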

15.
Bauer RJ, Guzy S, Ng C. The AAPS Journal. 2007;9(1):E60-E83.
An overview is provided of present population analysis methods, with an assessment of which software packages are most appropriate for various PK/PD modeling problems. Four PK/PD example problems were solved using the programs NONMEM VI beta version, PDx-MCPEM, S-ADAPT, MONOLIX, and WinBUGS, which were informally assessed for reasonable accuracy and stability in analyzing these problems. For each program we also describe its general interface, ease of use, and capabilities. We conclude by discussing which algorithms and software are most suitable for which types of PK/PD problems. The NONMEM FO method is accurate and fast with 2-compartment models if intra-individual and interindividual variances are small. The NONMEM FOCE method is slower than FO but gives accurate population values regardless of the size of intra- and interindividual errors. However, if data are very sparse, the NONMEM FOCE method can lead to inaccurate values, while the Laplace method can provide more accurate results. The exact EM methods (performed using S-ADAPT, PDx-MCPEM, and MONOLIX) have greater stability in analyzing complex PK/PD models and can provide accurate results with sparse or rich data. MCPEM methods perform more slowly than NONMEM FOCE for simple models, but more quickly and stably than NONMEM FOCE for complex models. WinBUGS provides accurate assessments of the population parameters, standard errors, and 95% confidence intervals for all examples. Like the MCPEM methods, WinBUGS's efficiency increases relative to NONMEM when solving complex PK/PD models.

16.
Cyclosporine A (CsA) is an immunosuppressive drug widely used in pediatric renal graft recipients. Its large interindividual pharmacokinetic variability and narrow therapeutic index render therapeutic drug monitoring necessary. However, information about CsA pharmacokinetics in this population is scarce, and no population pharmacokinetic (popPK) studies have been reported so far. The objectives of this study were (1) to develop a popPK model and identify the individual factors influencing the variability of CsA pharmacokinetics in pediatric kidney recipients, and (2) to build a Bayesian estimator allowing estimation of the main PK parameters and exposure indices of CsA on the basis of a limited sampling strategy (LSS). The popPK analysis was performed using the NONMEM program. A total of 256 PK profiles of CsA, collected in 98 pediatric renal transplant patients (mean age 9.7 +/- 4.5 years) within the first year posttransplantation, were studied. A 2-compartment model with first-order elimination and an Erlang distribution describing the absorption phase fitted the data adequately. For Bayesian estimation, the best LSS was determined based on its performance in estimating the area under the concentration-time curve (AUC0-12h) and validated in an independent group of 20 patients. The popPK analysis identified body weight and posttransplant delay as individual factors influencing the apparent central volume of distribution and the apparent clearance, respectively. Bayesian estimation allowed accurate prediction of AUC0-12h using predose, C1h, and C3h blood samples, with a mean bias between observed and estimated AUC of 0.5% +/- 11% and good precision (root mean square error = 10.9%). This article reports the first popPK study of CsA in pediatric renal transplant patients. It confirms the reliability and feasibility of CsA AUC estimation in this population. Body weight and posttransplantation delay were identified as influencing the interindividual PK variability of CsA and were included in the Bayesian estimator developed, which could be helpful in further clinical trials.
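An Erlang absorption phase is equivalent to a chain of identical transit compartments; a sketch with a one-compartment disposition model (the paper used two compartments) and illustrative rate constants:

```python
import numpy as np
from scipy.integrate import solve_ivp

def erlang_abs_model(t, y, ktr, ke, n_transit):
    """n_transit transit compartments in series (Erlang absorption)
    feeding a central compartment with first-order elimination."""
    d = np.empty_like(y)
    d[0] = -ktr * y[0]
    for i in range(1, n_transit):
        d[i] = ktr * (y[i - 1] - y[i])
    d[-1] = ktr * y[n_transit - 1] - ke * y[-1]
    return d

# Illustrative values; amounts are in arbitrary units
ktr, ke, n_transit, dose = 4.0, 0.3, 3, 100.0
y0 = np.zeros(n_transit + 1)
y0[0] = dose
sol = solve_ivp(erlang_abs_model, (0.0, 12.0), y0,
                args=(ktr, ke, n_transit), dense_output=True, rtol=1e-8)
central = sol.sol(np.linspace(0.0, 12.0, 25))[-1]   # central-compartment amount
```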

17.
Informative dropout can lead to bias in statistical analyses if not handled appropriately. The objective of this simulation study was to investigate the performance of nonlinear mixed-effects models with regard to bias and precision, with and without handling of informative dropout. An efficacy variable and dropout depending on that efficacy variable were simulated, and model parameters were reestimated with or without including a dropout model. The Laplace and FOCE-I estimation methods in NONMEM 7, and the stochastic simulations and estimations (SSE) functionality in PsN, were used in the analysis. For the base scenario, bias was low, less than 5% for all fixed-effects parameters, when a dropout model was used in the estimation. When a dropout model was not included, bias increased up to 8% for the Laplace method and up to 21% with the FOCE-I estimation method. The bias increased with a decreasing number of observations per subject, an increasing placebo effect, and an increasing dropout rate, but was relatively unaffected by the number of subjects in the study. This study illustrates that ignoring informative dropout can lead to biased parameters in nonlinear mixed-effects modeling, but even in cases with few observations or a high dropout rate, the bias is relatively low and only translates into small effects on predictions of the underlying effect variable. A dropout model is, however, crucial in the presence of informative dropout in order to make realistic simulations of trial outcomes. Key words: bias, informative dropout, nonlinear mixed effects, NONMEM
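A sketch of how such informative dropout can be simulated: the probability of dropping out after each visit depends on the subject's current efficacy value, here through an assumed logistic link with illustrative coefficients.

```python
import numpy as np

rng = np.random.default_rng(17)

def simulate_with_dropout(n_subj=200, n_visits=10, base=10.0,
                          slope=-1.0, omega=0.3, sigma=0.5):
    """Simulate a declining efficacy variable with informative dropout: the
    probability of leaving after each visit rises as the observed efficacy
    falls (an assumed logistic link, illustrative coefficients)."""
    rows = []
    for i in range(n_subj):
        sl = slope * np.exp(rng.normal(0.0, omega))   # IIV on the slope
        for t in range(n_visits):
            eff = base + sl * t + rng.normal(0.0, sigma)
            rows.append((i, t, eff))
            p_drop = 1.0 / (1.0 + np.exp(-(1.0 - 0.4 * eff)))
            if rng.random() < p_drop:                 # informative dropout
                break
    return np.array(rows)   # columns: subject id, time, efficacy

data = simulate_with_dropout()
```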

18.
The impact of assay variability on pharmacokinetic modeling was investigated. Simulated replications (150) of three "individuals" resulted in 450 data sets. A one-compartment model with first-order absorption was simulated. Random assay errors of 10, 20, or 30% were introduced, and the ratio of the absorption rate constant (Ka) to the elimination rate constant (Ke) was 2, 10, or 20. The analyst was blinded to the rate constants chosen for the simulations. Parameter estimates from the sequential method (Ke estimated by log-linear regression followed by estimation of Ka) and from nonlinear regression with various weighting schemes were compared. NONMEM was run on the 9 data sets (one per simulation condition) as well. Assay error caused a sizable number of curves to show apparently multicompartmental distribution or complex absorption kinetics. Routinely tabulated parameters (maximum concentration, area under the curve, and, to a lesser extent, mean residence time) were consistently overestimated as assay error increased. When Ka/Ke = 2, all methods except NONMEM underestimated Ke, overestimated Ka, and overestimated the apparent volume of distribution. These significant biases increased with the magnitude of assay error. With improper weighting, nonlinear regression significantly overestimated Ke when Ka/Ke = 20. In general, however, the sequential approach was the most biased and least precise. Although no interindividual variability was included in the simulations, estimation error caused large standard deviations to be associated with derived parameters, which would be interpreted as interindividual variability in a nonsimulation environment. NONMEM, however, acceptably estimated all parameters and variabilities. Routinely applied pharmacokinetic estimation methods do not consistently provide unbiased answers. In the specific case of extended-release drug formulations, there is clearly a possibility that certain estimation methods yield Ka and relative bioavailability estimates that are imprecise and biased.
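The core of the simulation can be sketched with the Bateman equation and a proportional assay error, fitted by weighted nonlinear regression; the sampling times, error magnitude, and weighting are illustrative choices.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(18)

def oral_1cmt(t, ka, ke, v, dose=100.0):
    """One-compartment model with first-order absorption (Bateman equation)."""
    return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

# True parameters with Ka/Ke = 10, and a 20% proportional "assay" error
ka_true, ke_true, v_true = 1.0, 0.1, 10.0
t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 12.0, 24.0])
y = oral_1cmt(t, ka_true, ke_true, v_true) * (1.0 + 0.20 * rng.normal(size=t.size))

# Weighted nonlinear regression: sigma=y approximates proportional-error
# weighting (1/y^2 on the squared residuals)
popt, pcov = curve_fit(oral_1cmt, t, y, p0=[2.0, 0.2, 5.0],
                       sigma=y, maxfev=10000)
print(dict(zip(["Ka", "Ke", "V"], popt)))
```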

19.
Population pharmacokinetics of mitoxantrone performed by a NONMEM method
To date, the pharmacokinetics of mitoxantrone (1,4-dihydroxy-5,8-bis[[2-[(2-hydroxyethyl)amino]ethyl]amino]anthraquinone) has been described by either an open two- or three-compartment model, showing high interindividual variability. In order to evaluate this variability, the residual intraindividual variability, and the measurement error, we carried out a population study. A sensitive HPLC method allowed analysis of blood samples drawn from 21 patients with breast cancer or acute nonlymphocytic leukemia. Individual data treatment (22 kinetic profiles) using weighted nonlinear least squares regression confirmed the huge interindividual variability whatever the administration protocol of mitoxantrone: bi- or tri-exponential models fitted the data. The NONMEM population method used herein describes all concentration-time curves by a single three-compartment model, treating biphasic kinetics as fragmentary data. Residual intraindividual variability was 21.4%. Population mean values (+/- interindividual SD) of clearance, terminal half-life, and total volume of distribution were, respectively, 23.40 (+/- 10.76) L/h, 46.87 (+/- 12.18) h, and 385.49 (+/- 196.60) L. These results are of particular interest in clinical routine for calculating dosage regimens by Bayesian estimation methods.

20.
In recent years, interest in the application of experimental design theory to population pharmacokinetic (PK) and pharmacodynamic (PD) experiments has increased. The aim is to improve the efficiency and the precision with which parameters are estimated during data analysis and, sometimes, to increase the power and reduce the sample size required for hypothesis testing. The population Fisher information matrix (PFIM) has been described for uniresponse and multiresponse population PK experiments for design evaluation and optimisation. Despite these developments and the availability of tools for optimal design of population PK and PD experiments, much of the effort has been focused on repeated continuous-variable measurements, with less work on repeated discrete-type measurements. Discrete data arise mainly in PD, e.g. ordinal, nominal, dichotomous, or count measurements. This paper implements expressions for the PFIM for repeated ordinal, dichotomous, and count measurements based on analysis by a mixed-effects modelling technique. Three simulation studies were used to investigate the performance of the expressions: Example 1 is based on repeated dichotomous measurements, Example 2 on repeated count measurements, and Example 3 on repeated ordinal measurements. Data simulated in MATLAB were analysed using NONMEM (Laplace method) and the glmmML package in R (Laplace and adaptive Gauss-Hermite quadrature methods). The results obtained for Examples 1 and 2 showed good agreement between the relative standard errors obtained using the PFIM and the simulations. The results obtained for Example 3 showed the importance of sampling at the most informative time points. Implementation of these expressions will provide the opportunity for efficient design of population PD experiments that involve discrete-type data, through design evaluation and optimisation.
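For the dichotomous case, the building block of such expressions is the Fisher information of a logistic model over the sampling design; the sketch below omits the random effects that the PFIM marginalizes over, and all values are illustrative.

```python
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def fim_dichotomous(times, theta):
    """Expected Fisher information for repeated dichotomous outcomes with
    logit P(y=1) = theta0 + theta1 * t, omitting random effects for brevity
    (the PFIM marginalizes over them). FIM = sum_j p_j (1-p_j) x_j x_j'."""
    X = np.column_stack([np.ones(len(times)), times])
    p = expit(X @ theta)
    return (X * (p * (1.0 - p))[:, None]).T @ X

theta = np.array([-1.0, 0.5])             # illustrative fixed effects
times = np.array([0.0, 1.0, 2.0, 4.0])    # design: sampling times per subject
fim = 100 * fim_dichotomous(times, theta)        # 100 subjects, same design
rse = np.sqrt(np.diag(np.linalg.inv(fim))) / np.abs(theta) * 100
print(rse)    # design-predicted relative standard errors (%)
```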
