Similar Documents
20 similar documents retrieved
1.
Analysis of longitudinal ordered categorical efficacy or safety data in clinical trials using mixed models is increasingly performed. However, algorithms available for maximum likelihood estimation using an approximation of the likelihood integral, including the LAPLACE approach, may give rise to biased parameter estimates. The SAEM algorithm is an efficient and powerful tool for the analysis of continuous/count mixed models. The aim of this study was to implement and investigate the performance of the SAEM algorithm for longitudinal categorical data. The SAEM algorithm is extended for parameter estimation in ordered categorical mixed models, together with an estimation of the Fisher information matrix and the likelihood. We performed Monte Carlo simulations using previously published scenarios evaluated with NONMEM. Accuracy and precision in parameter estimation and standard error estimates were assessed in terms of relative bias and root mean square error. The algorithm was illustrated on the simultaneous analysis of pharmacokinetic and discretized efficacy data obtained after a single dose of warfarin in healthy volunteers. The new SAEM algorithm is implemented in MONOLIX 3.1 for discrete mixed models. The analyses show that, for parameter estimation, the relative bias is low for both fixed effects and variance components in all models studied. Estimated and empirical standard errors are similar. The warfarin example illustrates how simple and rapid it is to analyze continuous and discrete data simultaneously with MONOLIX 3.1. The SAEM algorithm is thus extended to the analysis of longitudinal categorical data. It provides accurate estimates of parameters and standard errors, and the estimation is fast and stable.
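As an illustration of the kind of data these models address (not taken from the study itself), a proportional-odds data-generating process with a subject-level random effect can be sketched in plain Python; the thresholds and variance below are hypothetical values chosen for the example:

```python
import math
import random

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

def simulate_ordinal(n_subjects=100, n_obs=4, thresholds=(-1.0, 0.5, 2.0),
                     omega=1.0, seed=1):
    """Simulate ordered categorical scores 1..K under a proportional-odds
    model with a normally distributed subject-level random effect eta.
    P(Y >= k) = inv_logit(eta - threshold[k-2]) for k = 2..K."""
    rng = random.Random(seed)
    data = []
    for i in range(n_subjects):
        eta = rng.gauss(0.0, omega)          # subject-level random effect
        for _ in range(n_obs):
            u = rng.random()
            # thresholds are increasing, so P(Y >= k) decreases with k;
            # the final score is the largest k with u < P(Y >= k)
            score = 1
            for k, th in enumerate(thresholds, start=2):
                if u < inv_logit(eta - th):
                    score = k
            data.append((i, score))
    return data

data = simulate_ordinal()
```

With a large `omega`, most simulated subjects cluster at the extreme categories, which is exactly the situation where approximate-likelihood methods are reported to struggle.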

2.
There has been little evaluation of maximum likelihood approximation methods for non-linear mixed effects modelling of count data. The aim of this study was to explore the estimation accuracy of population parameters from six count models, using two different methods and programs. Simulations of 100 data sets were performed in NONMEM for each probability distribution, with parameter values derived from a real case study on 551 epileptic patients. The models investigated were: Poisson (PS), Poisson with Markov elements (PMAK), Poisson with a mixture distribution for individual observations (PMIX), Zero-Inflated Poisson (ZIP), Generalized Poisson (GP) and Negative Binomial (NB). Estimations of the simulated datasets were completed with the Laplacian approximation (LAPLACE) in NONMEM and LAPLACE/Gaussian Quadrature (GQ) in SAS. With LAPLACE, the average absolute value of the bias (AVB) in all models was 1.02% for fixed effects, and ranged from 0.32 to 8.24% for the estimation of the random effect of the mean count (λ). The random effect of the overdispersion parameter present in ZIP, GP and NB was underestimated (−25.87, −15.73 and −21.93% relative bias, respectively). Analysis with GQ using 9 quadrature points resulted in an improvement in these parameters (3.80% average AVB). The methods implemented in SAS had a lower fraction of successful minimizations, and GQ with 9 points was considerably slower than with 1 point. The simulations showed that parameter estimates, even when biased, resulted in data that were only marginally different from data simulated from the true model. Thus, all methods investigated appear to provide useful results for the investigated count data models.
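The relative-bias metrics used above (relative bias per parameter, and the average absolute value of the bias across parameters) take only a few lines of Python; the replicate values in the usage are made up, not those of the study:

```python
def relative_bias(estimates, true_value):
    """Mean relative bias (%) of a parameter across simulated replicates."""
    n = len(estimates)
    return 100.0 * sum((e - true_value) / true_value for e in estimates) / n

def average_absolute_bias(est_by_param, true_by_param):
    """Average of |relative bias| (%) over a set of parameters (AVB)."""
    biases = [abs(relative_bias(est_by_param[p], true_by_param[p]))
              for p in true_by_param]
    return sum(biases) / len(biases)

# Hypothetical replicate estimates for two parameters:
est = {"lam": [1.05, 0.95, 1.02], "p0": [0.28, 0.31]}
true = {"lam": 1.0, "p0": 0.3}
avb = average_absolute_bias(est, true)
```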

3.
Estimation methods for nonlinear mixed-effects modelling have improved considerably over the last decades. Nowadays, several algorithms implemented in different software packages are used. The present study aimed at comparing their performance for dose-response models. Eight scenarios were considered using a sigmoid Emax model, with varying sigmoidicity and residual error models. One hundred simulated datasets were generated for each scenario. One hundred individuals with observations at four doses constituted the rich design; observations at two doses constituted the sparse design. Nine parametric approaches for maximum likelihood estimation were studied: first-order conditional estimation (FOCE) in NONMEM and R, LAPLACE in NONMEM and SAS, adaptive Gaussian quadrature (AGQ) in SAS, and stochastic approximation expectation maximization (SAEM) in NONMEM and MONOLIX (both SAEM approaches with default and modified settings). All approaches were started first from initial estimates set to the true values and second from altered values. Results were examined through the relative root mean squared error (RRMSE) of the estimates. With true initial conditions, a full completion rate was obtained with all approaches except FOCE in R. Runtimes were shortest with FOCE and LAPLACE and longest with AGQ. Under the rich design, all approaches performed well except FOCE in R. When starting from altered initial conditions, AGQ, and then FOCE in NONMEM, LAPLACE in SAS, and SAEM in NONMEM and MONOLIX with tuned settings, consistently displayed lower RRMSE than the other approaches. For standard dose-response models analyzed through mixed-effects models, differences were identified in the performance of the estimation methods available in current software, giving modellers material to identify suitable approaches based on an accuracy-versus-runtime trade-off.
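The RRMSE criterion used in this comparison is straightforward to compute from replicate estimates; a minimal Python sketch (the replicate values in the test are illustrative):

```python
import math

def rrmse(estimates, true_value):
    """Relative root mean squared error (%) of replicate estimates of one
    parameter: combines bias and imprecision in a single metric."""
    mse = sum((e - true_value) ** 2 for e in estimates) / len(estimates)
    return 100.0 * math.sqrt(mse) / abs(true_value)
```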

4.
Using simulated viral load data for a given maraviroc monotherapy study design, the feasibility of different algorithms to perform parameter estimation for a pharmacokinetic-pharmacodynamic-viral dynamics (PKPD-VD) model was assessed. The assessed algorithms were the first-order conditional estimation method with interaction (FOCEI) implemented in NONMEM VI and the SAEM algorithm implemented in MONOLIX version 2.4. Simulated data were also used to test whether an effect compartment and/or a lag time could be distinguished to describe an observed delay in onset of viral inhibition using SAEM. The preferred model was then used to describe the observed maraviroc monotherapy plasma concentration and viral load data using SAEM. In this last step, three modelling approaches were compared: (i) sequential PKPD-VD with fixed individual Empirical Bayesian Estimates (EBE) for PK, (ii) sequential PKPD-VD with fixed population PK parameters and including concentrations, and (iii) simultaneous PKPD-VD. Using FOCEI, many convergence problems (56%) were experienced in fitting the sequential PKPD-VD model to the simulated data. For the sequential modelling approach, SAEM (with default settings) took less time to generate population and individual estimates including diagnostics than FOCEI did without diagnostics. For the given maraviroc monotherapy sampling design, it was difficult to separate the viral dynamics system delay from a pharmacokinetic distributional delay or a delay due to receptor binding and subsequent cellular signalling. The preferred model included a viral load lag time without inter-individual variability. Parameter estimates from the SAEM analysis of observed data were comparable among the three modelling approaches.
For the sequential methods, computation time is approximately 25% less when fixing individual EBEs of PK parameters and omitting the concentration data, compared with fixing population PK parameters and retaining concentration data in the PD-VD estimation step. Computation times were similar for the sequential method with fixed population PK parameters and the simultaneous PKPD-VD modelling approach. The current analysis demonstrated that the SAEM algorithm in MONOLIX is useful for fitting complex mechanistic models requiring multiple differential equations. The SAEM algorithm allowed simultaneous estimation of PKPD and viral dynamics parameters, as well as investigation of different model sub-components during the model building process. This was not possible with the FOCEI method (NONMEM version VI or below). SAEM provides a more feasible alternative to FOCEI when facing lengthy computation times and convergence problems with complex models.

5.
Analysis of repeated time-to-event data is increasingly performed in pharmacometrics using parametric frailty models. The aims of this simulation study were (1) to assess the estimation performance of the Stochastic Approximation Expectation Maximization (SAEM) algorithm in MONOLIX and of the Adaptive Gaussian Quadrature (AGQ) and Laplace algorithms in PROC NLMIXED of SAS, and (2) to evaluate the properties of tests of a dichotomous covariate on the occurrence of events. The simulation setting is inspired by an analysis of the occurrence of bone events after the initiation of treatment with imiglucerase in patients with Gaucher Disease (GD). We simulated repeated events with an exponential model and various dropout rates: none, low, or high. Several values of the baseline hazard, variability, number of subjects, and effect of the covariate were studied. For each scenario, 100 datasets were simulated to assess estimation performance and 500 to assess test performance. We evaluated estimation performance through relative bias and relative root mean square error (RRMSE). We studied the properties of the Wald and likelihood ratio tests (LRT). We used these methods to analyze the occurrence of bone events in patients with GD after starting an enzyme replacement therapy. SAEM with three chains and AGQ provided good parameter estimates, performing much better than SAEM with one chain and Laplace, which often provided poor estimates. Despite a small number of repeated events, SAEM with three chains and AGQ gave small biases and RRMSE. Type I errors were close to 5%, and power varied as expected for SAEM with three chains and AGQ. The probability of having at least one event under treatment was 19.1%.
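A minimal sketch of a data-generating process of the kind described (constant hazard per subject with a log-normally distributed frailty; inter-event gaps under a constant hazard are exponential). All parameter values are hypothetical, not those of the Gaucher Disease study:

```python
import math
import random

def simulate_rtte(n_subjects=100, base_hazard=0.5, omega=0.3,
                  follow_up=2.0, seed=7):
    """Simulate repeated time-to-event data: each subject has hazard
    h_i = base_hazard * exp(eta_i), with eta_i ~ N(0, omega^2).
    Returns one list of event times per subject, censored at follow_up."""
    rng = random.Random(seed)
    events = []
    for _ in range(n_subjects):
        h = base_hazard * math.exp(rng.gauss(0.0, omega))
        t, subject_events = 0.0, []
        while True:
            t += rng.expovariate(h)      # next exponential inter-event gap
            if t > follow_up:
                break                     # censored at end of follow-up
            subject_events.append(t)
        events.append(subject_events)
    return events

ev = simulate_rtte()
```

Dropout could be layered on top by drawing a censoring time per subject and truncating the event list there.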

6.
NONMEM is the most widely used software for population pharmacokinetic (PK)-pharmacodynamic (PD) analyses. The latest version, NONMEM 7 (NM7), includes several sampling-based estimation methods in addition to the classical methods. In this study, the performance of the estimation methods available in NM7 was investigated with respect to bias, precision, robustness and runtime for a diverse set of PD models. Simulations of 500 data sets from each PD model were reanalyzed with the available estimation methods to investigate bias and precision. Simulations of 100 data sets were used to investigate robustness by comparing final estimates obtained after estimations starting from the true parameter values and from initial estimates randomly generated using the CHAIN feature in NM7. The average estimation time for each algorithm and each model was calculated from the runtimes reported by NM7. The method giving the lowest bias and highest precision across models was importance sampling, closely followed by FOCE/LAPLACE and stochastic approximation expectation-maximization. The methods' relative robustness differed between models, and no method showed clearly superior performance. FOCE/LAPLACE was the method with the shortest runtime for all models, followed by iterative two-stage. The Bayesian Markov Chain Monte Carlo method, used in this study for point estimation, performed worst on all tested metrics.

7.
For the purpose of population pharmacometric modeling, a variety of mathematical algorithms are implemented in major modeling software packages to facilitate maximum likelihood modeling, such as FO, FOCE, Laplace, ITS and EM. These methods are all designed to estimate the set of parameters that maximize the joint likelihood of observations in a given problem. While FOCE is currently still the most widely used method in population modeling, EM methods are becoming more popular as the current-generation methods of choice because of their robustness with more complex models and sparse data structures. Several versions of EM method implementations are available in public modeling software packages. Although there have been several studies and reviews comparing the performance of different methods in handling relatively simple models, there has not been a dedicated study comparing different versions of EM algorithms in solving complex PBPK models. This study took everolimus as a model drug and simulated PK data based on published results. The three most popular EM methods (SAEM, IMP and QRPEM) and FOCE (as a benchmark reference) were evaluated for their estimation accuracy and convergence speed when solving models of increasing complexity. Both sparse and rich sampling data structures were tested. We concluded that FOCE was superior to EM methods for simple structured models. For more complex models and/or sparse data, EM methods are much more robust. While the estimation accuracy was very close across EM methods, the general ranking of speed (fastest to slowest) was: QRPEM, IMP and SAEM. IMP gave the most realistic estimation of parameter standard errors, while under- and over-estimation of standard errors were observed with the SAEM and QRPEM methods.

8.
The paper compares the performance of NONMEM estimation methods (first-order conditional estimation with interaction (FOCEI), iterative two-stage (ITS), Monte Carlo importance sampling (IMP), importance sampling assisted by mode a posteriori (IMPMAP), stochastic approximation expectation-maximization (SAEM), and Markov chain Monte Carlo Bayesian (BAYES)) on simulated examples of a monoclonal antibody with target-mediated drug disposition (TMDD), demonstrates how optimization of the estimation options improves performance, and compares standard errors of NONMEM parameter estimates with those predicted by the PFIM 3.2 optimal design software. In the examples of the one- and two-target quasi-steady-state TMDD models with rich sampling, the parameter estimates and standard errors of the new NONMEM 7.2.0 ITS, IMP, IMPMAP, SAEM and BAYES estimation methods were similar to those of the FOCEI method, although larger deviation from the true parameter values (those used to simulate the data) was observed with the BAYES method for poorly identifiable parameters. Standard errors of the parameter estimates were in general agreement with the PFIM 3.2 predictions. The ITS, IMP, and IMPMAP methods with the convergence tester were the fastest methods, reducing the computation time about ten-fold relative to the FOCEI method. Use of lower computational precision requirements for the FOCEI method reduced the estimation time 3-5-fold without compromising the quality of the parameter estimates, and equaled or exceeded the speed of the SAEM and BAYES methods. Use of parallel computations with 4-12 processors running on the same computer improved the speed proportionally to the number of processors, with an efficiency (for the 12-processor run) in the range of 85-95% for all methods except BAYES, which had a parallelization efficiency of about 70%.

9.
The uncertainty associated with parameter estimation is essential for population model building, evaluation, and simulation. Summarized by the standard error (SE), its estimation is sometimes questionable. Herein, we evaluate the SEs provided by different nonlinear mixed-effect estimation methods together with their estimation performance. Methods based on maximum likelihood (FO and FOCE in NONMEM, nlme in S-Plus, and SAEM in MONOLIX) and on Bayesian theory (WinBUGS) were evaluated on datasets obtained by simulations of a one-compartment PK model using 9 different designs. Bootstrap techniques were applied to FO, FOCE, and nlme. We compared SE estimations, parameter estimations, convergence, and computation time. Regarding SE estimations, the methods provided concordant results for fixed effects. For random effects, SAEM and WinBUGS tended to under- and over-estimate them, respectively. With sparse data, FO provided biased estimations of SE and discordant results between bootstrapped and original datasets. Regarding parameter estimations, FO showed a systematic bias on fixed and random effects. WinBUGS provided biased estimations, but only with sparse data. SAEM and WinBUGS converged systematically, while FOCE failed in half of the cases. Applying the bootstrap with FOCE yielded CPU times too large for routine application, and the bootstrap with nlme resulted in frequent crashes. In conclusion, FO provided biased parameter estimations and biased SE estimations of random effects. Methods like FOCE provided unbiased results, but convergence was the biggest issue. The bootstrap did not improve SEs for FOCE methods, except when confidence intervals of random effects are needed. WinBUGS gave consistent results but required long computation times. SAEM was in between, showing slightly under-estimated SEs but unbiased parameter estimations.

10.
In this paper, the two non-linear mixed-effects programs NONMEM and NLME were compared for their use in population pharmacokinetic/pharmacodynamic (PK/PD) modelling. We describe the first-order conditional estimation (FOCE) method as implemented in NONMEM and the alternating algorithm in NLME proposed by Lindstrom and Bates. The two programs were tested using clinical PK/PD data of a new gonadotropin-releasing hormone (GnRH) antagonist, degarelix, currently being developed for prostate cancer treatment. The pharmacokinetics of intravenously administered degarelix was analysed using a three-compartment model, while the pharmacodynamics was analysed using a turnover model with a pool compartment. The results indicated that the two algorithms produce consistent parameter estimates. The bias and precision of the two algorithms were further investigated using a parametric bootstrap procedure, which showed that NONMEM produced more accurate results than NLME together with the nlmeODE package for this specific study.

11.
Estimating the power for a non-linear mixed-effects model-based analysis is challenging due to the lack of a closed-form analytic expression. Often, computationally intensive Monte Carlo studies need to be employed to evaluate the power of a planned experiment. This is especially time consuming if full power versus sample size curves are to be obtained. A novel parametric power estimation (PPE) algorithm utilizing the theoretical distribution of the alternative hypothesis is presented in this work. The PPE algorithm estimates the unknown non-centrality parameter in the theoretical distribution from a limited number of Monte Carlo simulations and estimations. The estimated parameter scales linearly with study size, allowing a quick generation of the full power versus study size curve. A comparison of the PPE with the classical, purely Monte Carlo-based power estimation (MCPE) algorithm for five diverse pharmacometric models showed excellent agreement between both algorithms, with a low bias of less than 1.2% and higher precision for the PPE. The power extrapolated from a specific study size was in very good agreement with power curves obtained with the MCPE algorithm. PPE represents a promising approach to accelerate the power calculation for non-linear mixed-effects models.
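The core PPE idea, estimating a non-centrality parameter once and then scaling it linearly with study size, can be sketched for a 1-degree-of-freedom test using the normal approximation to the noncentral chi-square. This is a simplification of the published algorithm, and the hard-coded critical value assumes α = 0.05:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_from_ncp(ncp):
    """Power of a two-sided 1-df test with non-centrality parameter ncp,
    using the normal approximation; assumes alpha = 0.05."""
    z = 1.959963984540054                  # z_{0.975}
    d = math.sqrt(ncp)
    return normal_cdf(d - z) + normal_cdf(-d - z)

def ppe_power_curve(ncp_hat, n_ref, sizes):
    """Scale an ncp estimated at study size n_ref linearly with size,
    mapping each candidate size to its approximate power."""
    return {n: power_from_ncp(ncp_hat * n / n_ref) for n in sizes}

curve = ppe_power_curve(ncp_hat=5.0, n_ref=100, sizes=[50, 100, 200, 400])
```

One Monte Carlo batch at a reference size yields `ncp_hat`; the full power-versus-size curve then costs essentially nothing, which is the speed-up the abstract describes.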

12.
It is not uncommon that the outcome measurements, symptoms or side effects, of a clinical trial belong to the family of event-type data, e.g., bleeding episodes or emesis events. Event data are often low in information content, and the mixed-effects modeling software NONMEM has previously been shown to perform poorly with low-information ordered categorical data. The aim of this investigation was to assess the performance of the Laplace method, the stochastic approximation expectation-maximization (SAEM) method, and the importance sampling method when modeling repeated time-to-event data. The Laplace method already existed, whereas the two latter methods have recently become available in NONMEM 7. A stochastic simulation and estimation study was performed to assess the performance of the three estimation methods when applied to a repeated time-to-event model with a constant hazard associated with an exponential interindividual variability. Various conditions were investigated, ranging from rare to frequent events and from low to high interindividual variability. Method performance was assessed by parameter bias and precision. Due to the lack of information content under conditions where very few events were observed, all three methods exhibited parameter bias and imprecision, most pronounced for the Laplace method. The performance of SAEM and importance sampling was generally better than that of Laplace when the frequency of individuals with events was less than 43%, while at frequencies above that all methods were equal in performance.

13.
A note on variance estimation in random effects meta-regression
For random effects meta-regression inference, variance estimation for the parameter estimates is discussed. Because estimated weights are used for meta-regression analysis in practice, the assumed or estimated covariance matrix used in meta-regression is not strictly correct, due to possible errors in estimating the weights. Therefore, this note investigates the use of a robust variance estimation approach for obtaining variances of the parameter estimates in random effects meta-regression inference. This method treats the assumed covariance matrix of the effect measure variables as a working covariance matrix. Using an example of meta-analysis data from clinical trials of a vaccine, the robust variance estimation approach is illustrated in comparison with two other methods of variance estimation. A simulation study is presented, comparing the three methods of variance estimation in terms of bias and coverage probability. We find that, despite the seeming suitability of the robust estimator for random effects meta-regression, the improved variance estimator of Knapp and Hartung (2003) yields the best performance among the three estimators, and thus may provide the best protection against errors in the estimated weights.
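For the intercept-only case (the overall effect in a weighted random-effects meta-analysis), the contrast between the model-based variance and a robust sandwich-type variance that treats the assumed weights as a working covariance can be sketched as follows. This is an illustration of the general idea only, not the Knapp-Hartung estimator discussed in the note:

```python
def weighted_meta(y, w):
    """Weighted mean effect with two variance estimates for an
    intercept-only meta-analysis.
    - var_model assumes the weights w are the true inverse variances.
    - var_robust is a sandwich-type estimate that uses the observed
      residual spread, protecting against errors in the weights."""
    sw = sum(w)
    beta = sum(wi * yi for wi, yi in zip(w, y)) / sw
    var_model = 1.0 / sw
    var_robust = sum(wi ** 2 * (yi - beta) ** 2
                     for wi, yi in zip(w, y)) / sw ** 2
    return beta, var_model, var_robust
```

When the weights are badly estimated, `var_model` can be far from the truth while `var_robust` adapts to the residuals, which is the motivation examined in the note.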

14.
Single-response population (1 sample/animal) simulation studies were carried out (assuming a 1-compartment model) to investigate the influence of inter-animal variability (in clearance (σCl) and volume (σV)) on the estimation of population pharmacokinetic parameters. NONMEM was used for parameter estimation. Individual and joint confidence interval coverage for parameter estimates was computed to reveal the influence of bias and standard error (SE) on interval estimates. The coverage of interval estimates, percent prediction error, and correlation analysis were used to judge the efficiency of parameter estimation. The efficiency of estimation of Cl and V was good, on average, irrespective of the values of σCl and σV. Estimates of σCl and σV were biased and imprecise. Small biases and high precision resulted in good confidence interval coverage for Cl and V. SE was the major determinant of confidence interval coverage for the random effect parameters, σCl and σV, and of the joint confidence interval coverage for all parameter estimates. The usual confidence intervals computed may give an erroneous impression of the precision with which the random effect parameters are estimated because of the large standard errors associated with these parameters. A conservative approach to data interpretation is required when the biases associated with σCl and σV are large.

15.
A significant bias in parameters estimated with the proportional odds model using the software NONMEM has been reported. Typically, this bias occurs with ordered categorical data when most of the observations are found at one extreme of the possible outcomes. The aim of this study was to assess, through simulations, the performance of the Back-Step Method (BSM), a novel approach for obtaining unbiased estimates when the standard approach provides biased estimates. BSM is an iterative method involving sequential simulation-estimation steps. BSM was compared with the standard approach in the analysis of a 4-category ordered variable using the Laplacian method in NONMEM. The bias in parameter estimates and the accuracy of model predictions were determined for the 2 methods under 3 conditions: (1) a nonskewed distribution of the response with low interindividual variability (IIV), (2) a skewed distribution with low IIV, and (3) a skewed distribution with high IIV. An increase in bias with increasing skewness and IIV was shown in parameters estimated using the standard approach in NONMEM. BSM performed without appreciable bias in the estimates under the 3 conditions, and the model predictions were in good agreement with the original data. Each BSM estimation represents a random sample of the population; hence, repeating the BSM estimation reduces the imprecision of the parameter estimates. The BSM is an accurate estimation method when the standard modeling approach in NONMEM gives biased estimates.

17.
In nonlinear mixed-effects models, estimation methods based on a linearization of the likelihood are widely used, although they have several methodological drawbacks. Kuhn and Lavielle (Comput. Statist. Data Anal. 49:1020–1038 (2005)) developed an estimation method which combines the SAEM (Stochastic Approximation EM) algorithm with an MCMC (Markov Chain Monte Carlo) procedure for maximum likelihood estimation in nonlinear mixed-effects models without linearization. This method is implemented in the Matlab software MONOLIX, which is available at http://www.math.u-psud.fr/~lavielle/monolix/logiciels. In this paper we apply MONOLIX to the analysis of the pharmacokinetics of saquinavir, a protease inhibitor, from concentrations measured after single dose administration in 100 HIV patients, some with advanced disease. We also illustrate how to use MONOLIX to build the covariate model using the Bayesian Information Criterion. Saquinavir oral clearance (CL/F) was estimated to be 1.26 L/h and to increase with body mass index, the inter-patient variability for CL/F being 120%. Several methodological developments are ongoing to extend SAEM, which is a very promising estimation method for population pharmacokinetic/pharmacodynamic analyses.

18.
When parameter estimates are used in predictions or decisions, it is important to consider the magnitude of imprecision associated with the estimation. Such imprecision estimates are, however, presently lacking for nonparametric algorithms intended for nonlinear mixed effects models. The objective of this study was to develop resampling-based methods for estimating imprecision in nonparametric distribution (NPD) estimates obtained in NONMEM. A one-compartment PK model was used to simulate datasets for which the random effect of clearance conformed to (i) normal, (ii) bimodal and (iii) heavy-tailed underlying distributional shapes. Re-estimation was conducted assuming normality under FOCE, and NPDs were estimated sequential to this step. Imprecision in the NPD was then estimated by means of two different resampling procedures. The first (full) method relies on bootstrap sampling from the raw data and a re-estimation of both the preceding parametric (FOCE) step and the nonparametric step. The second (simplified) method relies on bootstrap sampling of individual nonparametric probability distributions. Nonparametric 95% confidence intervals (95% CIs) were obtained, and mean errors (MEs) of the 95% CI width were computed. Standard errors (SEs) of nonparametric population estimates were obtained using the simplified method and evaluated through 100 stochastic simulations followed by estimations (SSEs). Both methods were successfully implemented to provide imprecision estimates for NPDs. The imprecision estimates adequately reflected the reference imprecision in all distributional cases, regardless of the number of individuals in the original data. Relative MEs of the 95% CI width of the CL marginal density when the original data contained 200 individuals were equal to: (i) −22 and −12%, (ii) −22 and −9%, (iii) −13 and −5% for the full and simplified (n = 100) methods, respectively. SEs derived from the simplified method were consistent with those obtained from 100 SSEs.
In conclusion, two novel bootstrapping methods intended for nonparametric estimation methods are proposed. In addition to providing information about the precision of nonparametric parameter estimates, they can serve as diagnostic tools for the detection of misspecified parameter distributions.

19.
A simulation study was performed to determine how inestimable standard errors could be obtained when population pharmacokinetic analysis is performed with the NONMEM software on data from small-sample-size phase I studies. Plausible sets of concentration-time data for nineteen subjects were simulated using an incomplete longitudinal population pharmacokinetic study design, and parameters of a drug in development that exhibits two-compartment linear pharmacokinetics with single-dose first-order input. They were analyzed with the NONMEM program. Standard errors for model parameters were computed from the simulated parameter values to serve as true standard errors of estimates. The nonparametric bootstrap approach was used to generate replicate data sets from the simulated data, which were analyzed with NONMEM. Because of the sensitivity of the bootstrap to extreme values, winsorization was applied to the parameter estimates. Winsorized mean parameters and their standard errors were computed and compared with their true values as well as with the non-winsorized estimates. Percent bias was used to judge the performance of the bootstrap approach (with or without winsorization) in estimating inestimable standard errors of population pharmacokinetic parameters. Winsorized standard error estimates were generally more accurate than non-winsorized estimates because the distributions of most parameter estimates were skewed, sometimes with heavy tails. Using the bootstrap approach combined with winsorization, inestimable robust standard errors can be obtained for NONMEM-estimated population pharmacokinetic parameters with ≥150 bootstrap replicates. This approach was also applied to a real data set, and a similar outcome was obtained. This investigation provides a structural framework for estimating inestimable standard errors when NONMEM is used for population pharmacokinetic modeling involving small sample sizes.
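Winsorization of bootstrap replicates, as used here, replaces the most extreme values by the nearest retained value before averaging; a short Python sketch (the 5% default fraction is an assumption for illustration, not the study's setting):

```python
def winsorize(values, fraction=0.05):
    """Clamp the lowest and highest `fraction` of values to the nearest
    retained value, limiting the influence of extreme bootstrap replicates."""
    xs = sorted(values)
    n = len(xs)
    k = int(fraction * n)                 # number clamped on each tail
    lo, hi = xs[k], xs[n - 1 - k]
    return [min(max(v, lo), hi) for v in values]

def winsorized_mean_se(values, fraction=0.05):
    """Winsorized mean and its naive standard error across replicates."""
    w = winsorize(values, fraction)
    n = len(w)
    mean = sum(w) / n
    var = sum((v - mean) ** 2 for v in w) / (n - 1)
    return mean, (var / n) ** 0.5
```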
