Similar Articles
20 similar articles found.
1.
It is not uncommon in pharmacokinetic (PK) studies that some concentrations are censored by the bioanalytical laboratory and reported qualitatively as below the lower limit of quantification (LLOQ). Censoring concentrations below the quantification limit (BQL) has been shown to adversely affect bias and precision of parameter estimates; however, its impact on structural model decision has not been studied. The current simulation study investigated the impact of the percentage of data censored as BQL on the PK structural model decision; evaluated the effect of different coefficient of variation (CV) values to define the LLOQ; and tested the maximum conditional likelihood estimation method in NONMEM VI (YLO). Using a one-compartment intravenous model, data were simulated with 10–50% BQL censoring, while maintaining a 20% CV at LLOQ. In another set of experiments, the LLOQ was chosen to attain CVs of 10, 20, 50 and 100%. Parameters were estimated with both one- and two-compartment models using NONMEM. A type I error was defined as a significantly lower objective function value for the two-compartment model compared to the one-compartment model using the standard likelihood ratio test at α = 0.05 and α = 0.01. The type I error rate substantially increased to as high as 96% as the median of percent censored data increased at both the 5% and 1% alpha levels. Restricting the CV to 10% caused a higher type I error rate compared to the 20% CV, while the error rate was reduced to the nominal value as the CV increased to 100%. The YLO option prevented the type I error rate from being elevated. This simulation study has shown that the practice of assigning a LLOQ during analytical methods development, although well intentioned, can lead to incorrect decisions regarding the structure of the pharmacokinetic model. 
The standard operating procedures in analytical laboratories should be adjusted to provide a quantitative value for all samples assayed in the drug development setting where sophisticated modeling may occur. However, the current level of precision may need to be maintained when laboratory results are to be used for direct patient care in a clinical setting. Finally, the YLO option should be considered when more than 10% of data are censored as BQL.
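The decision rule whose type I error is studied above is the standard likelihood ratio test on NONMEM objective function values. A minimal sketch (function names are illustrative, not from the paper); because the two-compartment model adds two parameters, the chi-square critical value has a closed form:

```python
import math

def lrt_prefers_two_compartment(ofv_1cmt, ofv_2cmt, alpha=0.05):
    """Likelihood ratio test on NONMEM objective function values
    (OFV = -2 log-likelihood).  A two-compartment model adds two
    parameters over a one-compartment model, and for 2 degrees of
    freedom the chi-square critical value reduces to -2*ln(alpha)."""
    critical = -2.0 * math.log(alpha)        # 5.99 at alpha = 0.05
    return (ofv_1cmt - ofv_2cmt) > critical

print(lrt_prefers_two_compartment(1000.0, 990.0))  # OFV drop of 10 -> True
print(lrt_prefers_two_compartment(1000.0, 998.0))  # OFV drop of 2  -> False
```

The simulation's finding is that BQL censoring inflates how often this comparison spuriously returns True.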

2.
Area under the drug-concentration-over-time curve (AUC) is an important endpoint for many phase I/II clinical trials and laboratory assays. Drug concentrations are measured using laboratory assays with a lower limit of quantification (LLOQ). How to calculate AUC when some drug concentration data are below the LLOQ remains a challenge. In this article, we develop a maximum likelihood method to estimate AUC and relative exposure (i.e., the ratio of two AUCs) when data below the LLOQ exist. We also compare the proposed method to several commonly used methods, including imputation with model-predicted values or ad hoc values (i.e., LLOQ, LLOQ/2, or zero), through a simulation study. The proposed method gives unbiased inference. Commonly used methods can provide biased estimation, especially when a large proportion of data is below the LLOQ. An application to a case study is also presented.
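The ad hoc imputation rules the article compares can be made concrete with a small sketch (the proposed maximum likelihood estimator itself is more involved and is not reproduced here; the data values are invented for illustration):

```python
LLOQ = 0.5
times = [0.0, 1.0, 2.0, 4.0, 8.0]
conc = [10.0, 6.0, 3.0, 1.0, None]       # None marks a sample reported as BQL

def impute_bql(conc, fill):
    """Replace BQL reports with an ad hoc value: LLOQ, LLOQ/2, or zero."""
    return [fill if c is None else c for c in conc]

def auc_trapezoid(times, conc):
    """Linear trapezoidal AUC over the sampling interval."""
    return sum((t1 - t0) * (c0 + c1) / 2.0
               for t0, t1, c0, c1 in zip(times, times[1:], conc, conc[1:]))

for label, fill in (("LLOQ", LLOQ), ("LLOQ/2", LLOQ / 2.0), ("zero", 0.0)):
    print(label, auc_trapezoid(times, impute_bql(conc, fill)))
# LLOQ 19.5, LLOQ/2 19.0, zero 18.5
```

Even on a single profile with one BQL sample the three rules disagree; as the abstract notes, the bias grows with the proportion of data below the LLOQ.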

3.
Purpose: To evaluate likelihood-based methods for handling data below the quantification limit (BQL) using new features in NONMEM VI. Methods: A two-compartment pharmacokinetic model with first-order absorption was chosen for investigation. The methods evaluated were: discarding BQL observations (M1); discarding BQL observations but adjusting the likelihood for the remaining data (M2); maximizing the likelihood for the data above the limit of quantification (LOQ) and treating BQL data as censored (M3); and, like M3, but conditioning on the observation being greater than zero (M4). These four methods were compared using data simulated with a proportional error model. M2, M3, and M4 were also compared using data simulated from a positively truncated normal distribution. Successful terminations and the bias and precision of parameter estimates were assessed. Results: For the data simulated with a proportional error model, the overall performance was best for M3, followed by M2 and M1. M3 and M4 resulted in similar estimates in analyses without log transformation. For data simulated with the truncated normal distribution, M4 performed better than M3. Conclusions: Analyses that maximized the likelihood of the data above the LOQ and treated BQL data as censored provided the most accurate and precise parameter estimates.
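The likelihood that M3 maximizes can be sketched directly. A minimal illustration assuming additive normal residual error (the function below is illustrative Python, not NONMEM code):

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def m3_neg_log_likelihood(pred, obs, lloq, sigma):
    """M3: records above the LOQ contribute the normal log-density of the
    residual; BQL records (obs is None) contribute log Phi((LLOQ - pred)/sigma),
    the model-predicted probability of falling below the LOQ."""
    nll = 0.0
    for f, y in zip(pred, obs):
        if y is None:                       # censored (BQL) record
            nll -= math.log(normal_cdf((lloq - f) / sigma))
        else:                               # quantified record
            z = (y - f) / sigma
            nll += 0.5 * z * z + math.log(sigma * math.sqrt(2.0 * math.pi))
    return nll

# A prediction far below the LOQ explains a BQL record better (lower NLL)
# than a prediction sitting above the LOQ.
print(m3_neg_log_likelihood([0.1], [None], lloq=0.5, sigma=0.2))
print(m3_neg_log_likelihood([0.9], [None], lloq=0.5, sigma=0.2))
```

M4 would additionally condition on the observation being positive, dividing the censored term's probability by 1 − Φ(−pred/σ).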

4.
5.
Data below the quantification limit (BQL data) are a common challenge in data analyses using nonlinear mixed-effects models (NLMEM). In the estimation step, these data can be adequately handled by several reliable methods. However, they are usually omitted or imputed at an arbitrary value in most evaluation graphs and/or methods. This can cause spurious trends to appear in diagnostic graphs and therefore confound model selection and evaluation. In this paper, we extend two metrics for evaluating NLMEM, prediction discrepancies (pd) and normalised prediction distribution errors (npde), to handle BQL data. For a BQL observation, the pd is randomly sampled from a uniform distribution over the interval from 0 to the probability of being BQL predicted by the model, estimated using Monte Carlo (MC) simulation. To compute npde in the presence of BQL observations, we propose to impute BQL values in both the validation dataset and the MC samples using their computed pd and the inverse of the distribution function. The imputed dataset and MC samples contain the original data together with imputed values for BQL data. These data are then decorrelated using the mean and variance-covariance matrix to compute npde. We applied these metrics to a model built to describe viral load obtained from 35 patients in the COPHAR 3-ANRS 134 clinical trial testing continued antiretroviral therapy. We also conducted a simulation study inspired by the real model. The proposed metrics show better behaviour than naive approaches that discard BQL data in evaluation, especially when large amounts of BQL data are present.
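The pd construction for a BQL record described above amounts to two steps: estimate P(BQL) by Monte Carlo, then draw uniformly on [0, P(BQL)]. A minimal sketch with illustrative names and an invented lognormal-free toy distribution:

```python
import random

def p_bql_from_mc(simulated, lloq):
    """Monte Carlo estimate of the model-predicted probability that an
    observation at this design point falls below the LOQ."""
    return sum(s < lloq for s in simulated) / len(simulated)

def pd_for_bql(p_bql, rng):
    """Prediction discrepancy for a BQL record: a uniform draw on
    [0, P(BQL)] rather than a degenerate imputed value."""
    return rng.uniform(0.0, p_bql)

rng = random.Random(0)
mc_samples = [rng.gauss(1.0, 0.5) for _ in range(10000)]  # simulated concentrations
p = p_bql_from_mc(mc_samples, lloq=0.5)
print(0.0 <= pd_for_bql(p, rng) <= p)   # True by construction
```

The npde computation then inverts the distribution function at these pd values to impute BQL records consistently in both the validation data and the MC samples.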

6.
Evaluation of population (NONMEM) pharmacokinetic parameter estimates
The application of population pharmacokinetic analysis has received increasing attention in the last few years. The main goal of this report is to make investigators aware of the necessity of independent evaluation of the results obtained from a population analysis based on observational studies. We also describe with the help of a specific example (a new synthetic opiate Alfentanil) how such evaluation can be performed for parameter estimates obtained with the software system NONMEM. The method differs depending on the type of serum concentration data that are used for the evaluation. A general method is described, based on the regression model used in NONMEM, that can test for bias in the estimates of fixed and random effects independent of the number of observations per patient and dosing. Since the procedure for testing for statistically significant bias in the prediction of the average concentration and its variability can be relatively complex, we propose that generally available program packages performing estimation of the pharmacokinetic parameters from observational data should contain the necessary software to evaluate the reliability of the parameter estimates on a second data set.

7.
Evaluation of population (NONMEM) pharmacokinetic parameter estimates
The application of population pharmacokinetic analysis has received increasing attention in the last few years. The main goal of this report is to make investigators aware of the necessity of independent evaluation of the results obtained from a population analysis based on observational studies. We also describe with the help of a specific example (a new synthetic opiate Alfentanil) how such evaluation can be performed for parameter estimates obtained with the software system NONMEM. The method differs depending on the type of serum concentration data that are used for the evaluation. A general method is described, based on the regression model used in NONMEM, that can test for bias in the estimates of fixed and random effects independent of the number of observations per patient and dosing. Since the procedure for testing for statistically significant bias in the prediction of the average concentration and its variability can be relatively complex, we propose that generally available program packages performing estimation of the pharmacokinetic parameters from observational data should contain the necessary software to evaluate the reliability of the parameter estimates on a second data set. Supported by the Professor Max Cloëtta Foundation, Switzerland, and the National Institute on Aging Grant ROI-04594.

8.
This is the second in a series of tutorial articles discussing the analysis of pharmacokinetic data using parametric models. In this article the basic issue is how to estimate the parameters of such models. Primary emphasis is placed on point estimates of the parameters of the structural (pharmacokinetic) model. All the estimation methods discussed are least squares (LS) methods: ordinary least squares, weighted least squares, iteratively reweighted least squares, and extended least squares. The choice of LS method depends on the variance model. Some discussion is also provided of computer methods used to find the LS estimates, identifiability, and robust LS-based estimation methods. Work supported in part by NIH grants GM26676 and GM 26691.
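The LS variants named above differ mainly in how the weights are chosen. A schematic comparison under a monoexponential structural model (extended least squares, which also estimates variance-model parameters, is omitted for brevity; all names and values are illustrative):

```python
import math

def predict(times, A, ke):
    """Monoexponential structural model f(t) = A * exp(-ke * t)."""
    return [A * math.exp(-ke * t) for t in times]

def ols(y, f):
    """Ordinary LS: unit weights (homoscedastic error assumed)."""
    return sum((yi - fi) ** 2 for yi, fi in zip(y, f))

def wls(y, f, w):
    """Weighted LS: fixed, known weights."""
    return sum(wi * (yi - fi) ** 2 for yi, fi, wi in zip(y, f, w))

def irls_weights(f):
    """Iteratively reweighted LS: weights 1/f^2 (constant-CV variance
    model), recomputed from the current fitted values at each iteration."""
    return [1.0 / fi ** 2 for fi in f]

times = [1.0, 2.0, 4.0, 8.0]
y = predict(times, A=10.0, ke=0.2)            # noise-free data
print(ols(y, predict(times, 10.0, 0.2)))      # 0.0 at the true parameters
print(wls(y, predict(times, 10.0, 0.3),
          irls_weights(predict(times, 10.0, 0.3))))  # positive off the truth
```

The point of the variance-model dependence is visible here: with 1/f² weights, misfit at low concentrations is penalized far more heavily than under unit weights.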

9.
This is the second in a series of tutorial articles discussing the analysis of pharmacokinetic data using parametric models. In this article the basic issue is how to estimate the parameters of such models. Primary emphasis is placed on point estimates of the parameters of the structural (pharmacokinetic) model. All the estimation methods discussed are least squares (LS) methods: ordinary least squares, weighted least squares, iteratively reweighted least squares, and extended least squares. The choice of LS method depends on the variance model. Some discussion is also provided of computer methods used to find the LS estimates, identifiability, and robust LS-based estimation methods.

10.
This is the first in a series of tutorial articles discussing the analysis of pharmacokinetic data using parametric models. In this article, the purposes of modelling are discussed; regression models for individuals and populations are defined; and structural and variance models are discussed as the two required submodels of the overall regression model. Topics of future articles are: point estimates of parameters; interval estimates of parameters; model criticism and choosing among contending models; population kinetic models and estimation; and elements of optimal design. Work supported in part by NIH Grants GM 26676 and GM 26691.

11.
12.
We consider the development of the concentration of a drug in the blood after single oral or intravenous administration. We introduce a new nonlinear model capable of describing concentration-time curves following intravenous administration. A similar model is proposed for oral data. Both models have four parameters, of which two regulate the shape of the curve and two determine the scale of the time and concentration axes. All the parameters are closely related to geometric properties of the curve. The scale parameters determine a point in the curve, and the shape parameters can be calculated by using numerical integration. The models can be used when the object of the analysis is to quantify the shape of a concentration-time curve. We discuss the usefulness of the models in bioequivalency trials, in clinical safety and efficacy trials, and in population pharmacokinetics. The models are applied to two previously presented data sets. To reduce the number of parameters, the shape parameters are assumed common for all individuals. Encouraging results are obtained. We also present a new four-parameter Michaelis-Menten model.

13.
A pharmacokinetic screen has been advocated for the characterization of the population pharmacokinetics of drugs during Phase 3 clinical trials. A common perception encountered in the collection of such data is that the accuracy of sampling times relative to dose is inadequate. A prospective simulation study was carried out to evaluate the effect of error in the recording of sampling times on the accuracy and precision of population parameter estimates from repeated-measures pharmacokinetic data. A two-compartment model with intravenous bolus input(s) (single and multiple doses) was assumed. Random and systematic errors in sampling times ranging from 5–50% were introduced using a profile (block) randomized design. Sampling times were simulated in EXCEL, while concentration data simulation and analysis were done in NONMEM. The effect of error in sampling times was studied at levels of variability ranging from 15–45% for a drug assumed to be dosed at its elimination half-life. One hundred replicate data sets of 100 subjects each were simulated for each case. Although estimates of clearance (CL) and variability in clearance were robust for most of the sampling time errors, there was an increase in bias and imprecision in overall parameter estimation as intersubject variability was increased. If there is interest in parameters other than CL, then the design of prospective population studies should include procedures for minimizing the error in the recording of sample times relative to dosing history. The views expressed in this article are personal opinions of the authors and not those of the US Food and Drug Administration.
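The central mismatch in this design can be sketched: concentrations arise at the actual (perturbed) sampling times but are recorded against the nominal times. A minimal sketch using a one-compartment IV bolus model for brevity (the study used a two-compartment model; all names and parameter values are illustrative):

```python
import math
import random

def simulate_recorded_data(nominal_times, dose, V, CL, time_cv, rng):
    """Simulate concentrations at randomly perturbed sampling times while
    recording the nominal times -- the recording error under study."""
    ke = CL / V                              # elimination rate constant
    rows = []
    for t_nom in nominal_times:
        t_act = max(0.0, t_nom * (1.0 + rng.gauss(0.0, time_cv)))
        conc = (dose / V) * math.exp(-ke * t_act)
        rows.append((t_nom, conc))           # recorded time, actual concentration
    return rows

rng = random.Random(1)
for t, c in simulate_recorded_data([1.0, 2.0, 4.0, 8.0],
                                   dose=100.0, V=10.0, CL=2.0,
                                   time_cv=0.10, rng=rng):
    print(f"recorded t = {t}, concentration = {c:.3f}")
```

Fitting a model to the (nominal time, concentration) pairs then attributes the time error to concentration noise, which is how bias and imprecision enter the population estimates.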

14.
15.
This is the first in a series of tutorial articles discussing the analysis of pharmacokinetic data using parametric models. In this article, the purposes of modelling are discussed; regression models for individuals and populations are defined; and structural and variance models are discussed as the two required submodels of the overall regression model. Topics of future articles are: point estimates of parameters; interval estimates of parameters; model criticism and choosing among contending models; population kinetic models and estimation; and elements of optimal design.

16.
Nonlinear mixed-effects models are frequently used for pharmacokinetic data analysis, and they account for inter-subject variability in pharmacokinetic parameters by incorporating subject-specific random effects into the model. The random effects are often assumed to follow a (multivariate) normal distribution. However, many articles have shown that misspecifying the random-effects distribution can introduce bias in the estimates of parameters and affect inferences about the random effects themselves, such as estimation of the inter-subject variability. Because random effects are unobservable latent variables, it is difficult to assess their distribution. In a recent paper we developed a diagnostic tool based on the so-called gradient function to assess the random-effects distribution in mixed models. There we evaluated the gradient function for generalized linear mixed models and in the presence of a single random effect. However, assessing the random-effects distribution in nonlinear mixed-effects models is more challenging, especially when multiple random effects are present, and therefore the results from linear and generalized linear mixed models may not be valid for such nonlinear models. In this paper, we further investigate the gradient function and evaluate its performance for such nonlinear mixed-effects models which are common in pharmacokinetics and pharmacodynamics. We use simulations as well as real data from an intensive pharmacokinetic study to illustrate the proposed diagnostic tool.

17.
The impact of assay variability on pharmacokinetic modeling was investigated. Simulated replications (150) of three "individuals" resulted in 450 data sets. A one-compartment model with first-order absorption was simulated. Random assay errors of 10, 20, or 30% were introduced and the ratio of absorption rate (Ka) to elimination rate (Ke) constants was 2, 10, or 20. The analyst was blinded as to the rate constants chosen for the simulations. Parameter estimates from the sequential method (Ke estimated with log-linear regression followed by estimation of Ka) and nonlinear regression with various weighting schemes were compared. NONMEM was run on the 9 data sets as well. Assay error caused a sizable number of curves to have apparent multicompartmental distribution or complex absorption kinetic characteristics. Routinely tabulated parameters (maximum concentration, area under the curve, and, to a lesser extent, mean residence time) were consistently overestimated as assay error increased. When Ka/Ke = 2, all methods except NONMEM underestimated Ke, overestimated Ka, and overestimated apparent volume of distribution. These significant biases increased with the magnitude of assay error. With improper weighting, nonlinear regression significantly overestimated Ke when Ka/Ke = 20. In general, however, the sequential approach was most biased and least precise. Although no interindividual variability was included in the simulations, estimation error caused large standard deviations to be associated with derived parameters, which would be interpreted as interindividual error in a nonsimulation environment. NONMEM, however, acceptably estimated all parameters and variabilities. Routinely applied pharmacokinetic estimation methods do not consistently provide unbiased answers. 
In the specific case of extended-release drug formulations, there is clearly a possibility that certain estimation methods yield Ka and relative bioavailability estimates that would be imprecise and biased.
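The first step of the sequential method compared above is a log-linear regression over terminal-phase samples. A minimal sketch (Ka would then be recovered by the method of residuals, omitted here; the data values are invented):

```python
import math

def fit_ke_loglinear(times, conc):
    """Estimate Ke as the negative OLS slope of log(concentration)
    versus time over terminal-phase samples."""
    n = len(times)
    y = [math.log(c) for c in conc]
    tbar = sum(times) / n
    ybar = sum(y) / n
    slope = (sum((t - tbar) * (yi - ybar) for t, yi in zip(times, y))
             / sum((t - tbar) ** 2 for t in times))
    return -slope

# Noise-free terminal data generated with Ke = 0.1 are recovered exactly.
times = [8.0, 10.0, 12.0, 14.0]
conc = [5.0 * math.exp(-0.1 * t) for t in times]
print(round(fit_ke_loglinear(times, conc), 6))   # 0.1
```

With assay noise added, errors in this first step propagate into the subsequent Ka estimate, which is consistent with the sequential approach being the most biased and least precise in the simulations.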

18.
The impact of assay variability on pharmacokinetic modeling was investigated. Simulated replications (150) of three individuals resulted in 450 data sets. A one-compartment model with first-order absorption was simulated. Random assay errors of 10, 20, or 30% were introduced and the ratio of absorption rate (Ka) to elimination rate (Ke) constants was 2, 10, or 20. The analyst was blinded as to the rate constants chosen for the simulations. Parameter estimates from the sequential method (Ke estimated with log-linear regression followed by estimation of Ka) and nonlinear regression with various weighting schemes were compared. NONMEM was run on the 9 data sets as well. Assay error caused a sizable number of curves to have apparent multicompartmental distribution or complex absorption kinetic characteristics. Routinely tabulated parameters (maximum concentration, area under the curve, and, to a lesser extent, mean residence time) were consistently overestimated as assay error increased. When Ka/Ke = 2, all methods except NONMEM underestimated Ke, overestimated Ka, and overestimated apparent volume of distribution. These significant biases increased with the magnitude of assay error. With improper weighting, nonlinear regression significantly overestimated Ke when Ka/Ke = 20. In general, however, the sequential approach was most biased and least precise. Although no interindividual variability was included in the simulations, estimation error caused large standard deviations to be associated with derived parameters, which would be interpreted as interindividual error in a nonsimulation environment. NONMEM, however, acceptably estimated all parameters and variabilities. Routinely applied pharmacokinetic estimation methods do not consistently provide unbiased answers.
In the specific case of extended-release drug formulations, there is clearly a possibility that certain estimation methods yield Ka and relative bioavailability estimates that would be imprecise and biased.

19.
A fully validated, simple, sensitive and selective square-wave stripping voltammetry procedure was described for the trace quantification of cefoperazone in bulk form, formulations and human serum/plasma. The procedure was based on reduction of the adsorbed drug onto a hanging mercury drop electrode. The procedural conditions were optimized as: frequency = 60 Hz, scan increment = 8 mV, pulse amplitude = 25 mV, preconcentration potential = −0.3 V (versus Ag/AgCl/KCl(s)), preconcentration duration = 60–150 s, and an acetate buffer of pH 4.2 as the supporting electrolyte. A limit of detection of 4.5 × 10⁻¹⁰ M and a limit of quantitation of 1.5 × 10⁻⁹ M bulk cefoperazone were achieved following preconcentration of the drug onto the hanging mercury drop electrode for 150 s. The proposed square-wave adsorptive cathodic stripping voltammetric procedure was successfully applied for trace quantification of cefoperazone in human serum and plasma. The achieved limits of detection and quantitation of the drug in human serum were 6 × 10⁻¹⁰ M (0.375 ng mL⁻¹) and 2 × 10⁻⁹ M (1.250 ng mL⁻¹), respectively. The pharmacokinetic parameters of cefoperazone in plasma of hospitalized volunteers were successfully estimated.
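The molar limits can be translated into mass units given a molar mass. A quick sketch, assuming a molar mass of about 645.7 g/mol for cefoperazone (an assumed value, not stated in the abstract):

```python
MW_CEFOPERAZONE = 645.7   # g/mol -- assumed value, not taken from the abstract

def molar_to_ng_per_ml(molar, mw=MW_CEFOPERAZONE):
    """Convert mol/L to ng/mL: mol/L * g/mol = g/L, and 1 g/L = 1 mg/mL
    = 1e6 ng/mL."""
    return molar * mw * 1.0e6

print(round(molar_to_ng_per_ml(6e-10), 3))   # serum LOD -> 0.387
print(round(molar_to_ng_per_ml(2e-9), 3))    # serum LOQ -> 1.291
```

These land near the reported 0.375 and 1.250 ng mL⁻¹; the small discrepancy suggests the authors converted with a slightly different molar mass.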

20.
When the two-compartment model with absorption is fitted to data by nonlinear least squares, in general six different outcomes can be obtained, arising from permutation of the three exponential rate constants. The existence of multiple solutions in this sense is analogous to the flip-flop phenomenon in the one-compartment model. It is possible for parameter estimates to be inconsistent with the underlying physical model. Methods for recognizing such illegal estimates are described. Other common difficulties are that estimated values for two of the rate constants are almost identical with very large standard deviations, or that the parameter estimation algorithm converges poorly. Such unwanted outcomes usually signal a local (false) minimum of the sum of squares. They can be recognized from the ratio of largest to smallest singular value of the Jacobian matrix, and are, in principle, avoidable by starting the estimation algorithm with different initial values. There also exists a class of data sets for which all outcomes of fitting the usual equations are anomalous. A better fit to these data sets (smaller sum of squares) is obtained if two of the relevant rate constants are allowed to take complex conjugate values. Such data sets have usually been described as having “equal rate constants.” A special form of the model equation is available for parameter estimation in this case. Precautions relating to its use are discussed.
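The singular-value diagnostic described above can be sketched for a two-parameter case; the biexponential and its Jacobian columns here are illustrative stand-ins for the full two-compartment model, with the singular values obtained from the 2×2 Gram matrix:

```python
import math

def singular_value_ratio(col1, col2):
    """Largest/smallest singular value of the two-column Jacobian
    [col1 col2], via the eigenvalues of the 2x2 Gram matrix.  A very
    large ratio flags near-degenerate fits (e.g. two nearly equal
    rate constants)."""
    g11 = sum(x * x for x in col1)
    g22 = sum(x * x for x in col2)
    g12 = sum(x * y for x, y in zip(col1, col2))
    det = g11 * g22 - g12 * g12
    disc = math.sqrt((g11 - g22) ** 2 + 4.0 * g12 * g12)
    lam_max = (g11 + g22 + disc) / 2.0
    lam_min = det / lam_max          # product of eigenvalues equals det
    return math.sqrt(lam_max / lam_min)

def biexp_column(times, k):
    """One column of the Jacobian of c(t) = exp(-k1 t) + exp(-k2 t):
    dc/dk = -t * exp(-k t)."""
    return [-t * math.exp(-k * t) for t in times]

t = [0.5, 1.0, 2.0, 4.0, 8.0]
print(singular_value_ratio(biexp_column(t, 1.0), biexp_column(t, 0.1)))   # modest
print(singular_value_ratio(biexp_column(t, 0.1), biexp_column(t, 0.1001)))  # huge
```

When the two rate constants nearly coincide, the Jacobian columns become almost parallel and the ratio explodes, which is exactly the warning sign the article recommends checking before trusting a converged fit.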


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号