Similar Documents
20 similar documents found.
1.
In vitro and in vivo techniques have been utilized to estimate mass transfer coefficients for physiological pharmacokinetic models. No single method has been adopted for estimating this parameter, in part due to the different model structures with which it may be associated. A specific method has been derived to calculate mass transfer coefficients for non-eliminating, membrane-limited tissue compartments. The present method is based on observed concentration-time data and requires calculation of the areas under the zero and first moment curves for plasma, and the first moment curve for the tissue. A Monte Carlo simulation technique was used to determine the percentage biases of the method based on a published model for streptozotocin and adriamycin. For the latter model, the method was compared to a non-linear regression parameter estimation technique.

2.
Efficient power calculation methods have previously been suggested for Wald test-based inference in mixed-effects models, but the only available alternative for likelihood ratio test-based hypothesis testing has been to perform computer-intensive multiple simulations and re-estimations. The proposed Monte Carlo Mapped Power (MCMP) method is based on the difference in individual objective function values (ΔiOFV) derived from a large dataset simulated from a full model and subsequently re-estimated with the full and reduced models. The ΔiOFV values are sampled and summed (∑ΔiOFV) for each study at each sample size of interest, and the percentage of ∑ΔiOFV values greater than the significance criterion is taken as the power. The power versus sample size relationship established via the MCMP method was compared to the traditional assessment of model-based power for six different pharmacokinetic and pharmacodynamic models and designs. In each case, 1,000 simulated datasets were analysed with the full and reduced models. Power from the traditional and MCMP methods was concordant: for 90% power, the difference in required sample size was less than 10% in most investigated cases. The MCMP method provided relevant power information for a representative pharmacometric model at less than 1% of the run-time of an SSE. The suggested MCMP method thus provides a fast and accurate prediction of the power and sample size relationship.
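A minimal sketch of the MCMP resampling step, assuming the individual ΔiOFV values have already been obtained by estimating the full and reduced models on one large simulated dataset; the function name and the synthetic ΔiOFV values below are illustrative only.

```python
import numpy as np
from scipy.stats import chi2

def mcmp_power(delta_iofv, sample_sizes, df=1, alpha=0.05, n_resamples=10_000, seed=1):
    """Monte Carlo Mapped Power: resample individual dOFV contributions and count
    how often their sum exceeds the likelihood-ratio-test significance criterion."""
    rng = np.random.default_rng(seed)
    crit = chi2.ppf(1 - alpha, df)          # LRT criterion for `df` extra parameters
    power = {}
    for n in sample_sizes:
        sums = rng.choice(delta_iofv, size=(n_resamples, n), replace=True).sum(axis=1)
        power[n] = np.mean(sums > crit)
    return power

# Illustrative use with made-up dOFV values (in practice these come from re-estimating
# one large simulated dataset with the full and reduced models).
delta_iofv = np.random.default_rng(0).gamma(shape=0.5, scale=1.0, size=2000)
print(mcmp_power(delta_iofv, sample_sizes=[20, 50, 100]))
```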

3.
The problem of power and sample size determination for distribution-free multiple comparison tests of K treatments versus a control group is addressed. We define the power as the probability of correctly rejecting one specified hypothesis or all K hypotheses, corresponding to the per-pair and all-pairs power, respectively. The power formulas are derived for both the joint ranking and pairwise ranking mechanisms for general multiple comparison problems, followed by explicit forms of these formulas when single-step, step-down, or step-up adjustments are applied. The proposed power and sample size calculation methods apply both when the underlying distributions are known and when they are unknown but a pilot study is available. Numerical methods via quasi-Monte Carlo integration and Monte Carlo integration are assessed. Our simulation studies show the accuracy of the power and sample size calculation formulas. We recommend Monte Carlo integration as the calculation algorithm. An example from a mouse peritoneal cavity study is used to demonstrate the application of the methods.
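For orientation, the same per-pair and all-pairs power definitions can also be approximated by brute-force simulation rather than the integration formulas derived in the paper; the sketch below uses Mann-Whitney tests of K treatments against a control with a single-step (Bonferroni) adjustment, and all distributions and effect sizes are invented for illustration.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def mc_power(n_per_group, shifts, alpha=0.05, n_sim=2000, seed=0):
    """Estimate per-pair and all-pairs power for K rank-based comparisons
    against a control, with a single-step (Bonferroni) adjustment."""
    rng = np.random.default_rng(seed)
    k = len(shifts)
    adj_alpha = alpha / k                      # single-step adjustment
    per_pair = np.zeros(k)
    all_pairs = 0
    for _ in range(n_sim):
        control = rng.normal(0.0, 1.0, n_per_group)
        rejected = []
        for shift in shifts:
            treat = rng.normal(shift, 1.0, n_per_group)
            p = mannwhitneyu(treat, control, alternative="greater").pvalue
            rejected.append(p < adj_alpha)
        per_pair += np.array(rejected)
        all_pairs += all(rejected)
    return per_pair / n_sim, all_pairs / n_sim

per_pair, all_pairs = mc_power(n_per_group=30, shifts=[0.8, 1.0])
print("per-pair power:", per_pair, "all-pairs power:", all_pairs)
```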

4.
Analysis of count data from clinical trials using mixed-effect analysis has recently become widely used. However, the algorithms available for parameter estimation, including LAPLACE and Gaussian quadrature (GQ), are associated with certain limitations, including bias in parameter estimates and long analysis runtimes. The stochastic approximation expectation maximization (SAEM) algorithm has proven to be a very efficient and powerful tool in the analysis of continuous data. The aim of this study was to implement and investigate the performance of a new SAEM algorithm for application to count data. A new SAEM algorithm was implemented in MATLAB for estimation of both the parameters and the Fisher information matrix. Stochastic Monte Carlo simulations followed by re-estimation were performed according to scenarios used in previous studies (part I) to investigate properties of alternative algorithms (Plan et al., 2008, Abstr 1372). A single scenario was used to explore six probability distribution models. For parameter estimation, the relative bias was less than 0.92% for fixed effects and 4.13% for random effects in all models studied, including ones accounting for over- or under-dispersion. Empirical and estimated relative standard errors were similar, with the distance between them being <1.7% for all explored scenarios. The longest CPU time was 95 s for parameter estimation and 56 s for SE estimation. The SAEM algorithm was extended for the analysis of count data. It provides accurate estimates of both parameters and standard errors. The estimation is significantly faster compared to LAPLACE and GQ. The algorithm is implemented in Monolix 3.1 (beta version available in July 2009).
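To make the SAEM mechanics concrete, here is a toy sketch for a Poisson count model with a single random intercept; this is not the Monolix implementation, and the one-step Metropolis kernel, step-size schedule and starting values are all simplified illustrations.

```python
import numpy as np

rng = np.random.default_rng(42)

# simulate count data: y_ij ~ Poisson(exp(mu + b_i)), b_i ~ N(0, omega2)
N, n_obs = 100, 6
mu_true, omega2_true = 1.0, 0.5
b_true = rng.normal(0.0, np.sqrt(omega2_true), N)
y = rng.poisson(np.exp(mu_true + b_true)[:, None], size=(N, n_obs))

def log_cond(b, y_i, mu, omega2):
    """log p(y_i | b) + log p(b), up to a constant, for one subject."""
    return np.sum(y_i * (mu + b) - np.exp(mu + b)) - b ** 2 / (2.0 * omega2)

# SAEM: Metropolis E-step, stochastic approximation of the sufficient statistics,
# closed-form M-step (this toy model is exponential-family in mu and omega2)
mu, omega2 = 0.0, 1.0                         # deliberately poor starting values
b = np.log(y.mean(axis=1) + 0.5) - mu         # rough starting random effects
S2, S3 = n_obs * np.sum(np.exp(b)), np.sum(b ** 2)
n_iter, n_burn = 400, 200

for k in range(n_iter):
    prop = b + rng.normal(0.0, 0.3, N)        # random-walk proposal, one step per subject
    for i in range(N):
        if np.log(rng.uniform()) < log_cond(prop[i], y[i], mu, omega2) - log_cond(b[i], y[i], mu, omega2):
            b[i] = prop[i]
    gamma = 1.0 if k < n_burn else 1.0 / (k - n_burn + 1)    # SAEM step-size schedule
    S2 += gamma * (n_obs * np.sum(np.exp(b)) - S2)           # approximates E[sum_i n_i exp(b_i)]
    S3 += gamma * (np.sum(b ** 2) - S3)                      # approximates E[sum_i b_i^2]
    mu = np.log(y.sum() / S2)                                # M-step updates
    omega2 = S3 / N

print(f"mu ~ {mu:.3f} (true {mu_true}), omega2 ~ {omega2:.3f} (true {omega2_true})")
```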

5.
We propose a new adequacy test and a graphical evaluation tool for nonlinear dynamic models. The proposed techniques can be applied in any set-up where a parametric conditional distribution of the data is specified, and in particular to models involving conditional volatility, conditional higher moments, conditional quantiles, asymmetry, Value at Risk models, duration models, diffusion models, etc. Compared to other tests, the new test properly controls for the nonlinear dynamic behaviour in the conditional distribution and does not rely on smoothing techniques that require a choice of several tuning parameters. The test is based on a new kind of multivariate empirical process of contemporaneous and lagged probability integral transforms. We establish weak convergence of the process under parameter uncertainty and local alternatives. We justify a parametric bootstrap approximation that accounts for parameter estimation effects often ignored in practice. Monte Carlo experiments show that the test has good finite-sample size and power properties. Using the new test and graphical tools, we check the adequacy of various popular heteroscedastic models for stock exchange index data.

6.
Analysis of longitudinal ordered categorical efficacy or safety data in clinical trials using mixed models is increasingly performed. However, the algorithms available for maximum likelihood estimation using an approximation of the likelihood integral, including the LAPLACE approach, may give rise to biased parameter estimates. The SAEM algorithm is an efficient and powerful tool for the analysis of continuous and count mixed models. The aim of this study was to implement and investigate the performance of the SAEM algorithm for longitudinal categorical data. The SAEM algorithm is extended for parameter estimation in ordered categorical mixed models, together with estimation of the Fisher information matrix and the likelihood. We used Monte Carlo simulations with previously published scenarios evaluated with NONMEM. Accuracy and precision in parameter estimation and standard error estimates were assessed in terms of relative bias and root mean square error. The algorithm was illustrated on the simultaneous analysis of pharmacokinetic and discretized efficacy data obtained after a single dose of warfarin in healthy volunteers. The new SAEM algorithm is implemented in MONOLIX 3.1 for discrete mixed models. The analyses show that for parameter estimation, the relative bias is low for both fixed effects and variance components in all models studied. Estimated and empirical standard errors are similar. The warfarin example illustrates how simple and rapid it is to simultaneously analyze continuous and discrete data with MONOLIX 3.1. The SAEM algorithm is thus extended to the analysis of longitudinal categorical data. It provides accurate estimates of parameters and standard errors, and the estimation is fast and stable.

7.
The Monte Carlo Parametric Expectation Maximization (MC-PEM) algorithm can approximate the true log-likelihood as precisely as needed and is efficiently parallelizable. Our objectives were to evaluate an importance sampling version of the MC-PEM algorithm for mechanistic models and to qualify the default estimation settings in SADAPT-TRAN. We assessed bias, imprecision and robustness of this algorithm in S-ADAPT for mechanistic models with up to 45 simultaneously estimated structural parameters, 14 differential equations, and 10 dependent variables (one drug concentration and nine pharmacodynamic effects). Simpler models comprising 15 parameters were estimated using three of the ten dependent variables. We set initial estimates to 0.1 or 10 times the true value and evaluated 30 bootstrap replicates with frequent or sparse sampling. Datasets comprised three dose levels with 16 subjects each. For simultaneous estimation of the full model, the ratio of estimated to true values for structural model parameters (median [5-95% percentile] over 45 parameters) was 1.01 [0.94-1.13] for means and 0.99 [0.68-1.39] for between-subject variances with frequent sampling, and 1.02 [0.81-1.47] for means and 1.02 [0.47-2.56] for variances with sparse sampling. Imprecision was ≤25% for 43 of 45 means with frequent sampling. Bias and imprecision were comparable for the full and simpler models. Parallelized estimation was 23-fold (6.9-fold) faster using 48 threads (eight threads) relative to one thread. The MC-PEM algorithm was robust and provided unbiased and adequately precise means and variances during simultaneous estimation of complex, mechanistic models in a 45-dimensional parameter space with rich or sparse data using poor initial estimates.
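The core of an MC-PEM iteration is an importance-sampling E-step that computes each subject's conditional parameter moments, followed by an M-step that averages them. The toy sketch below does this for a mono-exponential model with one log-normal parameter, using the current population density as the proposal rather than the MAP-assisted proposals evaluated in the paper; every model detail here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# toy model: y_ij = dose * exp(-k_i * t_j) + eps, log(k_i) ~ N(mu, omega2), eps ~ N(0, sigma^2)
dose, sigma = 10.0, 0.3
times = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
N = 40
mu_true, omega2_true = np.log(0.3), 0.2
k_true = np.exp(rng.normal(mu_true, np.sqrt(omega2_true), N))
y = dose * np.exp(-np.outer(k_true, times)) + rng.normal(0.0, sigma, (N, times.size))

def loglik(eta, y_i):
    """log p(y_i | eta) for one subject, evaluated for a vector of eta samples."""
    pred = dose * np.exp(-np.exp(eta)[:, None] * times)
    return -0.5 * np.sum((y_i - pred) ** 2, axis=1) / sigma ** 2

mu, omega2 = np.log(0.1), 1.0                     # poor starting values
for iteration in range(50):
    cond_mean, cond_var = np.empty(N), np.empty(N)
    for i in range(N):
        # E-step: importance sampling with the current population density as proposal,
        # so the (self-normalized) weights are proportional to the individual likelihood
        eta = rng.normal(mu, np.sqrt(omega2), 2000)
        ll = loglik(eta, y[i])
        w = np.exp(ll - ll.max())
        w /= w.sum()
        cond_mean[i] = np.sum(w * eta)
        cond_var[i] = np.sum(w * (eta - cond_mean[i]) ** 2)
    # M-step: population mean and variance from the subjects' conditional moments
    mu = cond_mean.mean()
    omega2 = np.mean(cond_var + (cond_mean - mu) ** 2)

print(f"mu ~ {mu:.3f} (true {mu_true:.3f}), omega2 ~ {omega2:.3f} (true {omega2_true})")
```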

8.
OBJECTIVE: To compare the bias and precision of penalized quasi-likelihood (PQL) and Markov chain Monte Carlo (MCMC)-based Bayesian estimation of generalized linear mixed model parameters. METHODS: For hierarchical data with unequal cluster sizes, parameters were estimated by PQL using the SAS GLIMMIX procedure and by the Bayesian approach using WinBUGS. RESULTS: The two methods gave essentially identical fixed-effect estimates, but for the random-effect variance the bias of the MCMC-based Bayesian estimates was far smaller than that of PQL. CONCLUSION: For binary hierarchical data, Bayesian estimation of the generalized linear mixed model is more precise and less biased.

9.
A new class of multivariate threshold GARCH models is proposed for the analysis and modelling of volatility asymmetries in financial time series. The approach is based on the idea of a binary tree in which every terminal node parametrizes a (local) multivariate GARCH model for a specific partition of the data. A Bayesian stochastic method is developed for the analysis of the proposed model, consisting of parameter estimation, model selection and volatility prediction. A computationally feasible algorithm that explores the posterior distribution of the tree structure is designed using Markov chain Monte Carlo stochastic search methods. Simulation experiments are conducted to assess the performance of the proposed method, and an empirical application of the proposed model is illustrated using real financial time series.

10.
The paper compares the performance of Nonmem estimation methods (first-order conditional estimation with interaction [FOCEI], iterative two-stage [ITS], Monte Carlo importance sampling [IMP], importance sampling assisted by mode a posteriori [IMPMAP], stochastic approximation expectation-maximization [SAEM], and Markov chain Monte Carlo Bayesian [BAYES]) on simulated examples of a monoclonal antibody with target-mediated drug disposition (TMDD), demonstrates how optimization of the estimation options improves performance, and compares standard errors of Nonmem parameter estimates with those predicted by the PFIM 3.2 optimal design software. In the examples of the one- and two-target quasi-steady-state TMDD models with rich sampling, the parameter estimates and standard errors of the new Nonmem 7.2.0 ITS, IMP, IMPMAP, SAEM and BAYES estimation methods were similar to those of the FOCEI method, although larger deviations from the true parameter values (those used to simulate the data) were observed with the BAYES method for poorly identifiable parameters. Standard errors of the parameter estimates were in general agreement with the PFIM 3.2 predictions. The ITS, IMP, and IMPMAP methods with the convergence tester were the fastest methods, reducing the computation time by about ten times relative to the FOCEI method. Use of lower computational precision requirements for the FOCEI method reduced the estimation time by 3-5 times without compromising the quality of the parameter estimates, and equaled or exceeded the speed of the SAEM and BAYES methods. Use of parallel computations with 4-12 processors running on the same computer improved the speed proportionally to the number of processors, with efficiency (for the 12-processor run) in the range of 85-95% for all methods except BAYES, which had a parallelization efficiency of about 70%.

11.
Surveys in developing countries are often taken at unequally spaced intervals. This paper provides methods for estimating dynamic pseudo-panel models with such data. Non-linear least squares, minimum distance, and one-step estimators are used to impose the non-linear parameter restrictions which arise in dynamic models over unequally spaced periods. Consistency and asymptotic normality of the estimators are established. A small-scale Monte Carlo simulation study corroborates the results. The paper also shows how these methods can be applied to allow estimation of dynamic models with irregularly spaced genuine panel data.

12.
BACKGROUND AND OBJECTIVES: This study examined parametric and nonparametric population modelling methods in three different analyses. The first analysis was of a real, although small, clinical dataset from 17 patients receiving intramuscular amikacin. The second analysis was of a Monte Carlo simulation study in which the populations ranged from 25 to 800 subjects, the model parameter distributions were Gaussian and all the simulated parameter values of the subjects were exactly known prior to the analysis. The third analysis was again of a Monte Carlo study in which the exactly known population sample consisted of a unimodal Gaussian distribution for the apparent volume of distribution (V(d)), but a bimodal distribution for the elimination rate constant (k(e)), simulating rapid and slow eliminators of a drug.
METHODS: For the clinical dataset, the parametric iterative two-stage Bayesian (IT2B) approach, with the first-order conditional estimation (FOCE) approximation for calculation of the conditional likelihoods, was used together with the nonparametric expectation-maximisation (NPEM) and nonparametric adaptive grid (NPAG) approaches, both of which use exact computations of the likelihood. These programs were also used for the first Monte Carlo simulation study. A one-compartment model with unimodal Gaussian parameters V(d) and k(e) was employed, with a simulated intravenous bolus dose and two simulated serum concentrations per subject. In addition, a newer parametric expectation-maximisation (PEM) program with a Faure low-discrepancy computation of the conditional likelihoods, as well as nonlinear mixed-effects modelling software (NONMEM), in both the first-order (FO) and FOCE versions, were used. For the second Monte Carlo study, a one-compartment model with an intravenous bolus dose was again used, with five simulated serum samples obtained from early to late after dosing. A unimodal distribution for V(d) and a bimodal distribution for k(e) were chosen to simulate two subpopulations of 'fast' and 'slow' metabolisers of a drug. NPEM results were compared with those of a unimodal parametric joint density having the true population parameter means and covariance.
RESULTS: For the clinical dataset, the interindividual parameter percent coefficients of variation (CV%) were smallest with IT2B, suggesting less diversity in the population parameter distributions. However, the exact likelihood of the results was also smaller with IT2B, and was 14 logs greater with NPEM and NPAG, both of which found a greater and more likely diversity in the population studied. For the first Monte Carlo dataset, NPAG and PEM, both using accurate likelihood computations, showed statistical consistency. Consistency means that the more subjects studied, the closer the estimated parameter values approach the true values. NONMEM FOCE and NONMEM FO, as well as the IT2B FOCE methods, do not have this guarantee. Results obtained by IT2B FOCE, for example, often strayed visibly away from the true values as more subjects were studied. Furthermore, with respect to statistical efficiency (precision of parameter estimates), NPAG and PEM had good efficiency and precise parameter estimates, while precision suffered with NONMEM FOCE and IT2B FOCE, and severely so with NONMEM FO. For the second Monte Carlo dataset, NPEM closely approximated the true bimodal population joint density, while an exact parametric representation of an assumed joint unimodal density having the true population means, standard deviations and correlation gave a totally different picture.
CONCLUSIONS: The smaller population interindividual CV% estimates with IT2B on the clinical dataset are probably the result of assuming Gaussian parameter distributions and/or of using the FOCE approximation. NPEM and NPAG, which place no constraints on the shape of the population parameter distributions, compute the likelihood exactly and estimate parameter values with greater precision, and so detected the greater and more likely diversity in the parameter values in the population studied. In the first Monte Carlo study, NPAG and PEM had more precise parameter estimates than either IT2B FOCE or NONMEM FOCE, as well as much more precise estimates than NONMEM FO. In the second Monte Carlo study, NPEM easily detected the bimodal parameter distribution at this initial step without requiring any further information. Population modelling methods using exact or accurate computations have more precise parameter estimation, better stochastic convergence properties and are, very importantly, statistically consistent. Nonparametric methods are better than parametric methods at analysing populations having unanticipated non-Gaussian or multimodal parameter distributions.

13.
NONMEM is the most widely used software for population pharmacokinetic (PK)-pharmacodynamic (PD) analyses. The latest version, NONMEM 7 (NM7), includes several sampling-based estimation methods in addition to the classical methods. In this study, the performance of the estimation methods available in NM7 was investigated with respect to bias, precision, robustness and runtime for a diverse set of PD models. For each PD model, 500 simulated data sets were re-estimated with the available estimation methods to investigate bias and precision. A further 100 simulated data sets were used to investigate robustness, by comparing final estimates obtained after estimations starting from the true parameter values with those starting from initial estimates randomly generated using the CHAIN feature in NM7. The average estimation time for each algorithm and each model was calculated from the runtimes reported by NM7. The method giving the lowest bias and highest precision across models was importance sampling, closely followed by FOCE/LAPLACE and stochastic approximation expectation-maximization. The methods' relative robustness differed between models and no method showed clearly superior performance. FOCE/LAPLACE was the method with the shortest runtime for all models, followed by iterative two-stage. The Bayesian Markov Chain Monte Carlo method, used in this study for point estimation, performed worst on all tested metrics.

14.
Reliable estimation methods for non-linear mixed-effects models are now available and, although these models are increasingly used, only a limited number of statistical developments for their evaluation have been reported. We develop a criterion and a test to evaluate nonlinear mixed-effects models based on the whole predictive distribution. For each observation, we define the prediction discrepancy (pd) as the percentile of the observation in the whole marginal predictive distribution under H0. We propose to compute prediction discrepancies using Monte Carlo integration, which does not require model approximation. If the model is valid, these pd should be uniformly distributed over [0, 1], which can be tested by a Kolmogorov–Smirnov test. In a simulation study based on a standard population pharmacokinetic model, we compare this criterion with the one most frequently used to evaluate nonlinear mixed-effects models, standardized prediction errors (spe), which are computed using a first-order approximation of the model, and demonstrate its advantages. Trends in pd can also be evaluated via several plots to check for specific departures from the model.
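A minimal sketch of the Monte Carlo computation of pd, assuming complete replicate datasets can be simulated under the null model; the toy pharmacokinetic model, sample sizes and the plain Kolmogorov–Smirnov uniformity check below are purely illustrative and ignore any correlation between pd from the same subject.

```python
import numpy as np
from scipy.stats import kstest

def prediction_discrepancies(y_obs, simulate, n_sim=1000, seed=0):
    """pd_ij = fraction of simulated replicates of y_ij that fall below the observation."""
    rng = np.random.default_rng(seed)
    sims = np.stack([simulate(rng) for _ in range(n_sim)])   # shape (n_sim, n_subj, n_times)
    return (sims < y_obs).mean(axis=0)

# toy population PK model under H0: y_ij = 10 * exp(-k_i * t_j) + eps, k_i log-normal
times = np.array([0.5, 1.0, 2.0, 4.0])

def simulate(rng, n_subj=50):
    k = np.exp(rng.normal(np.log(0.3), 0.4, n_subj))
    return 10.0 * np.exp(-np.outer(k, times)) + rng.normal(0.0, 0.5, (n_subj, times.size))

y_obs = simulate(np.random.default_rng(123))   # stand-in for the observed dataset
pd = prediction_discrepancies(y_obs, simulate)
print(kstest(pd.ravel(), "uniform"))           # under a valid model, pd ~ U(0, 1)
```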

15.
This paper examines the efficacy of the general-to-specific modeling approach associated with the LSE school of econometrics using a simulation framework. A mechanical algorithm is developed which mimics some aspects of the search procedures used by LSE practitioners. The algorithm is tested using 1000 replications of each of nine regression models and a data set patterned after Lovell’s (1983) study of data mining. The algorithm is assessed for its ability to recover the data-generating process. Monte Carlo estimates of the size and power of exclusion tests based on t-statistics for individual variables in the specification are also provided. The roles of alternative sizes for specification tests in the algorithm, the consequences of different signal-to-noise ratios, and strategies for reducing overparameterization are also investigated. The results are largely favorable to the general-to-specific approach. In particular, the size of exclusion tests remains close to the nominal size used in the algorithm despite extensive search.
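As a rough illustration of what such a mechanical search does, the sketch below runs a naive general-to-specific backward elimination driven by t-test p-values on a simulated regression; it is far simpler than the search algorithm studied in the paper, and the data-generating process and significance level are arbitrary.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def general_to_specific(y, X, alpha=0.05):
    """Naive general-to-specific search: start from the general model and repeatedly
    drop the regressor with the largest p-value until all remaining ones are significant."""
    keep = list(X.columns)
    while True:
        fit = sm.OLS(y, sm.add_constant(X[keep])).fit()
        pvals = fit.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] < alpha or len(keep) == 1:
            return fit, keep
        keep.remove(worst)

# simulated data-generating process: y depends on x1 and x2; x3-x5 are irrelevant candidates
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(200, 5)), columns=[f"x{i}" for i in range(1, 6)])
y = 1.0 + 2.0 * X["x1"] - 1.5 * X["x2"] + rng.normal(scale=1.0, size=200)
fit, selected = general_to_specific(y, X)
print("selected regressors:", selected)
```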

16.
Many modern serial dilution assays are based on fluorescence intensity (FI) readouts. We study the optimal transformation model choice for fitting five-parameter logistic curves (5PL) to FI-based serial dilution assay data. We first develop a generalized least squares-pseudolikelihood type algorithm for fitting heteroscedastic logistic models. Next, we show that the 5PL and log 5PL functions can approximate each other well. We then compare four 5PL models with different choices of log transformation and variance modeling through a Monte Carlo study and real data. Our findings are that the optimal choice depends on the intended use of the fitted curves. Supplementary materials for this article are available online.
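For reference, the 5PL function and a weighted least-squares fit can be set up in a few lines; the parameterization, the synthetic dilution series and the proportional-error weighting below are illustrative stand-ins for the transformation and variance models compared in the paper, not the authors' algorithm.

```python
import numpy as np
from scipy.optimize import curve_fit

def fivepl(x, a, b, c, d, g):
    """Five-parameter logistic: a and d are the asymptotes, c the mid-point,
    b the slope and g the asymmetry parameter."""
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

# synthetic serial-dilution FI readouts (three-fold dilutions, proportional noise)
rng = np.random.default_rng(1)
conc = 10000.0 / 3.0 ** np.arange(10)
fi = fivepl(conc, 30000.0, -1.2, 300.0, 100.0, 0.8) * (1.0 + rng.normal(0.0, 0.05, conc.size))

# weighted fit with a proportional-error assumption, a crude stand-in for the
# heteroscedastic variance models compared in the paper
p0 = [fi.max(), -1.0, np.median(conc), fi.min(), 1.0]
lb, ub = [0.0, -10.0, 1e-6, 0.0, 0.05], [1e6, 10.0, 1e6, 1e6, 20.0]
params, _ = curve_fit(fivepl, conc, fi, p0=p0, sigma=fi, bounds=(lb, ub))
print("fitted 5PL parameters:", np.round(params, 3))
```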

17.
18.
Stochastic volatility (SV) models provide more realistic and flexible alternatives to ARCH-type models for describing the time-varying volatility exhibited in many financial time series. They belong to the wide class of nonlinear state-space models. As classical parameter estimation for SV models is difficult due to the intractable form of the likelihood, Bayesian approaches using Markov chain Monte Carlo (MCMC) techniques for posterior computations have been suggested. In this paper, an efficient MCMC algorithm for posterior computation in SV models is presented. It is related to the integration sampler of [26] but does not need an offset mixture of normals approximation to the likelihood. Instead, the extended Kalman filter is combined with the Laplace approximation to compute the likelihood function by integrating out all unknown system states. We make use of automatic differentiation in computing the posterior mode and in designing an efficient Metropolis–Hastings algorithm. We compare the new algorithm to the single-update Gibbs sampler and the integration sampler using a well-known time series of pound/dollar exchange rates.

19.
In the literature on tests of normality, much concern has been expressed over the problems associated with residual-based procedures. Indeed, the specialized tables of critical points needed to perform the tests have been derived for the location-scale model; hence, reliance on the available significance points in the context of regression models may cause size distortions. We propose a general solution to the problem of controlling the size of normality tests for the disturbances of standard linear regressions, based on the technique of Monte Carlo tests. We study procedures based on 11 well-known test statistics, including the Kolmogorov–Smirnov, Anderson–Darling, Cramér–von Mises, Shapiro–Wilk, Jarque–Bera and D’Agostino criteria. Evidence from a simulation study is reported showing that the usual critical values lead to severe size problems (over-rejections or under-rejections). In contrast, we show that Monte Carlo tests achieve perfect size control for any design matrix and have good power.
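The Monte Carlo test idea is easy to sketch: because residual-based normality statistics are invariant to the unknown regression coefficients and error scale, their null distribution for a given design matrix can be simulated exactly, and the p-value is simply the rank of the observed statistic among the simulated ones. The example below uses the Jarque–Bera statistic and an invented design; it illustrates the principle, not the authors' implementation.

```python
import numpy as np
from scipy.stats import jarque_bera

def mc_normality_pvalue(y, X, n_mc=999, seed=0):
    """Monte Carlo test of normality for OLS residuals: for a given design matrix X, the
    null distribution of a residual-based statistic does not depend on the unknown
    coefficients or error variance, so it can be simulated exactly."""
    rng = np.random.default_rng(seed)
    M = np.eye(len(y)) - X @ np.linalg.solve(X.T @ X, X.T)     # residual-maker matrix
    stat_obs = jarque_bera(M @ y).statistic
    stats_mc = np.array([jarque_bera(M @ rng.normal(size=len(y))).statistic
                         for _ in range(n_mc)])
    return (1 + np.sum(stats_mc >= stat_obs)) / (n_mc + 1)     # Monte Carlo p-value

rng = np.random.default_rng(1)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 0.5, -0.3]) + rng.standard_t(df=3, size=n)   # non-normal errors
print("Monte Carlo p-value:", mc_normality_pvalue(y, X))
```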

20.