Similar Literature
20 similar records found.
1.
A significant bias in parameters estimated with the proportional odds model using the software NONMEM has been reported. Typically, this bias occurs with ordered categorical data, when most of the observations are found at one extreme of the possible outcomes. The aim of this study was to assess, through simulations, the performance of the Back-Step Method (BSM), a novel approach for obtaining unbiased estimates when the standard approach provides biased estimates. BSM is an iterative method involving sequential simulation-estimation steps. BSM was compared with the standard approach in the analysis of a 4-category ordered variable using the Laplacian method in NONMEM. The bias in parameter estimates and the accuracy of model predictions were determined for the 2 methods under 3 conditions: (1) a nonskewed distribution of the response with low interindividual variability (IIV), (2) a skewed distribution with low IIV, and (3) a skewed distribution with high IIV. An increase in bias with increasing skewness and IIV was shown in parameters estimated using the standard approach in NONMEM. BSM performed without appreciable bias in the estimates under the 3 conditions, and the model predictions were in good agreement with the original data. Each BSM estimation represents a random sample of the population; hence, repeating the BSM estimation reduces the imprecision of the parameter estimates. The BSM is an accurate estimation method when the standard modeling approach in NONMEM gives biased estimates.

2.
The application of proportional odds models to ordered categorical data using the mixed-effects modeling approach has become more frequently reported within the pharmacokinetic/pharmacodynamic area during the last decade. The aim of this paper was to investigate the bias in parameter estimates when models for ordered categorical data were estimated using methods employing different approximations of the likelihood integral: the Laplacian approximation in NONMEM (without and with the centering option) and NLMIXED, and the Gaussian quadrature approximations in NLMIXED. In particular, we have focused on situations with non-even distributions of the response categories and the impact of interpatient variability. This is a Monte Carlo simulation study where original data sets were derived from a known model and fixed study design. The simulated response was a four-category variable on the ordinal scale with categories 0, 1, 2 and 3. The model used for simulation was fitted to each data set for assessment of bias. Also, simulations of new data based on estimated population parameters were performed to evaluate the usefulness of the estimated model. For the conditions tested, Gaussian quadrature performed without appreciable bias in parameter estimates. However, markedly biased parameter estimates were obtained using the Laplacian estimation method without the centering option, in particular when distributions of observations between response categories were skewed and when the interpatient variability was moderate to large. Simulations under the model could not mimic the original data when bias was present, but resulted in overestimation of rare events. The bias was considerably reduced when the centering option in NONMEM was used. The cause of the biased estimates appears to be related to the conditioning on uninformative and uncertain empirical Bayes estimates of the interindividual random effects during the estimation, in conjunction with the normality assumption.
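For orientation, the proportional odds mixed model studied in these simulations can be written generically as a cumulative-logit model (the linear predictor and symbols below are illustrative, not the exact parameterization of the paper):

    \mathrm{logit}\, P(Y_{ij} \ge k \mid \eta_i) = \alpha_k + f(x_{ij};\theta) + \eta_i,
    \qquad k = 1, 2, 3, \qquad \eta_i \sim N(0, \omega^2),

where \alpha_1 \ge \alpha_2 \ge \alpha_3 are intercepts for the cumulative probabilities of the ordered categories 0-3, f(\cdot) is a fixed-effects predictor (e.g., drug exposure or time), and \eta_i is the interindividual random effect whose empirical Bayes estimates are conditioned on during Laplacian estimation.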

3.
A simulation study was performed to determine how inestimable standard errors could be obtained when population pharmacokinetic analysis is performed with the NONMEM software on data from small-sample-size phase I studies. Plausible sets of concentration-time data for nineteen subjects were simulated using an incomplete longitudinal population pharmacokinetic study design and the parameters of a drug in development that exhibits two-compartment linear pharmacokinetics with single-dose first-order input. They were analyzed with the NONMEM program. Standard errors for model parameters were computed from the simulated parameter values to serve as true standard errors of estimates. The nonparametric bootstrap approach was used to generate replicate data sets from the simulated data, which were analyzed with NONMEM. Because of the sensitivity of the bootstrap to extreme values, winsorization was applied to parameter estimates. Winsorized mean parameters and their standard errors were computed and compared with their true values as well as the non-winsorized estimates. Percent bias was used to judge the performance of the bootstrap approach (with or without winsorization) in estimating inestimable standard errors of population pharmacokinetic parameters. Winsorized standard error estimates were generally more accurate than non-winsorized estimates because the distributions of most parameter estimates were skewed, sometimes with heavy tails. Using the bootstrap approach combined with winsorization, inestimable robust standard errors can be obtained for NONMEM-estimated population pharmacokinetic parameters with ≥150 bootstrap replicates. This approach was also applied to a real data set and a similar outcome was obtained. This investigation provides a structural framework for estimating inestimable standard errors when NONMEM is used for population pharmacokinetic modeling involving small sample sizes.
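As a minimal illustration of the summary step described above (not the authors' code), the bootstrap estimates of a parameter can be winsorized before computing their mean and standard error; the 5% winsorization limits and the NumPy/SciPy implementation are assumptions:

    import numpy as np
    from scipy.stats.mstats import winsorize

    def winsorized_bootstrap_summary(boot_estimates, limits=(0.05, 0.05)):
        """Winsorize bootstrap replicates of one parameter, then summarize.

        boot_estimates: 1-D array holding the parameter estimate from each
        bootstrap replicate (e.g., >= 150 NONMEM re-estimations).
        Returns the winsorized mean and the winsorized standard error
        (SD of the winsorized bootstrap distribution).
        """
        w = winsorize(np.asarray(boot_estimates, dtype=float), limits=limits)
        return float(np.mean(w)), float(np.std(w, ddof=1))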

4.
Analysis of longitudinal ordered categorical efficacy or safety data in clinical trials using mixed models is increasingly performed. However, algorithms available for maximum likelihood estimation using an approximation of the likelihood integral, including the LAPLACE approach, may give rise to biased parameter estimates. The SAEM algorithm is an efficient and powerful tool for the analysis of continuous/count mixed models. The aim of this study was to implement and investigate the performance of the SAEM algorithm for longitudinal categorical data. The SAEM algorithm is extended for parameter estimation in ordered categorical mixed models, together with an estimation of the Fisher information matrix and the likelihood. We performed Monte Carlo simulations using previously published scenarios evaluated with NONMEM. Accuracy and precision in parameter estimation and standard error estimates were assessed in terms of relative bias and root mean square error. The algorithm was illustrated on the simultaneous analysis of pharmacokinetic and discretized efficacy data obtained after a single dose of warfarin in healthy volunteers. The new SAEM algorithm is implemented in MONOLIX 3.1 for discrete mixed models. The analyses show that, for parameter estimation, the relative bias is low for both fixed effects and variance components in all models studied. Estimated and empirical standard errors are similar. The warfarin example illustrates how simple and rapid it is to analyze continuous and discrete data simultaneously with MONOLIX 3.1. The SAEM algorithm is thus extended to the analysis of longitudinal categorical data. It provides accurate estimates of parameters and standard errors. The estimation is fast and stable.
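The core of SAEM referred to above alternates simulation of the individual random effects with a stochastic approximation of the complete-data sufficient statistics; in generic notation (not MONOLIX's exact implementation):

    s_k = s_{k-1} + \gamma_k \left[ S\!\left(y, \psi^{(k)}\right) - s_{k-1} \right],

where \psi^{(k)} are individual parameters simulated (e.g., by MCMC) from their conditional distribution given the data y and the current population estimates, S(\cdot) are the complete-data sufficient statistics, and the step sizes \gamma_k satisfy \sum_k \gamma_k = \infty and \sum_k \gamma_k^2 < \infty; the population parameters are then updated by maximizing the resulting approximation of the complete-data likelihood.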

5.
There has recently been concern about confidence intervals calculated using the standard error of parameter estimates from NONMEM, a computer program that uses a non-linear mixed-effects model to calculate relative bioavailability (F), because of possible downward bias of these estimates. In this study an alternative approach, the log-likelihood procedure, was used to calculate the confidence intervals for F from NONMEM. These were then compared with those calculated using the standard error of the parameter estimates (the traditional NONMEM approach) and the standard model-independent method, to determine whether bias exists. Using data from a single-dose, open crossover study of ibuprofen in 14 healthy male volunteers, NONMEM was shown to give results consistent with those obtained using the standard model-independent method of analysis and could be a useful tool in the determination of F where conditions for using the standard method of analysis are not optimum. The width of the confidence interval for F using the log-likelihood procedure was narrower and non-symmetrical when compared with that obtained using the traditional NONMEM approach. The width of the confidence interval obtained using the traditional NONMEM method was similar to that from the standard approach; however, the parameter estimate for F was higher than that obtained from the standard method. This could have been because of an outlier in the data set, to which the standard approach is more sensitive. No downward bias was found in the confidence intervals from NONMEM. The bioavailability data set was of relatively low variability, and more research with highly variable data is necessary before it can be concluded that the confidence intervals calculated from NONMEM can be used for hypothesis testing.
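For reference, the log-likelihood (profile likelihood) interval used here is based on the change in the NONMEM objective function rather than on an asymptotic standard error; schematically, the 95% interval for F is

    \left\{ F : \mathrm{OFV}(F) - \mathrm{OFV}(\hat{F}) \le \chi^2_{1,\,0.95} = 3.84 \right\},

i.e., F is fixed at a series of values, the remaining parameters are re-estimated each time, and the limits are the values of F at which the objective function rises 3.84 units above its minimum; the resulting interval need not be symmetric about \hat{F}.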

6.
The impact of assay variability on pharmacokinetic modeling was investigated. Simulated replications (150) of three "individuals" resulted in 450 data sets. A one-compartment model with first-order absorption was simulated. Random assay errors of 10, 20, or 30% were introduced, and the ratio of the absorption rate constant (Ka) to the elimination rate constant (Ke) was 2, 10, or 20. The analyst was blinded to the rate constants chosen for the simulations. Parameter estimates from the sequential method (Ke estimated with log-linear regression followed by estimation of Ka) and nonlinear regression with various weighting schemes were compared. NONMEM was run on the 9 data sets as well. Assay error caused a sizable number of curves to have apparent multicompartmental distribution or complex absorption kinetic characteristics. Routinely tabulated parameters (maximum concentration, area under the curve, and, to a lesser extent, mean residence time) were consistently overestimated as assay error increased. When Ka/Ke = 2, all methods except NONMEM underestimated Ke, overestimated Ka, and overestimated the apparent volume of distribution. These significant biases increased with the magnitude of assay error. With improper weighting, nonlinear regression significantly overestimated Ke when Ka/Ke = 20. In general, however, the sequential approach was the most biased and least precise. Although no interindividual variability was included in the simulations, estimation error caused large standard deviations to be associated with derived parameters, which would be interpreted as interindividual error in a nonsimulation environment. NONMEM, however, acceptably estimated all parameters and variabilities. Routinely applied pharmacokinetic estimation methods do not consistently provide unbiased answers. In the specific case of extended-release drug formulations, there is clearly a possibility that certain estimation methods yield Ka and relative bioavailability estimates that are imprecise and biased.
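The simulated model is the standard one-compartment model with first-order absorption; in one common parameterization (the actual constants used in the study are not given in the abstract):

    C(t) = \frac{F\,D\,K_a}{V\,(K_a - K_e)} \left( e^{-K_e t} - e^{-K_a t} \right),

where the sequential method estimates Ke from the terminal log-linear slope and then Ka (e.g., by the method of residuals), whereas nonlinear regression and NONMEM fit all parameters simultaneously.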

7.
Single-response population (1 sample/animal) simulation studies were carried out (assuming a 1-compartment model) to investigate the influence of inter-animal variability in clearance (σCl) and volume (σV) on the estimation of population pharmacokinetic parameters. NONMEM was used for parameter estimation. Individual and joint confidence interval coverage for parameter estimates was computed to reveal the influence of bias and standard error (SE) on interval estimates. The coverage of interval estimates, percent prediction error, and correlation analysis were used to judge the efficiency of parameter estimation. The efficiency of estimation of Cl and V was good, on average, irrespective of the values of σCl and σV. Estimates of σCl and σV were biased and imprecise. Small biases and high precision resulted in good confidence interval coverage for Cl and V. SE was the major determinant of confidence interval coverage for the random effect parameters, σCl and σV, and of the joint confidence interval coverage for all parameter estimates. The usual confidence intervals computed may give an erroneous impression of the precision with which the random effect parameters are estimated because of the large standard errors associated with these parameters. A conservative approach to data interpretation is required when the biases associated with σCl and σV are large.

8.
The development of non-linear mixed pharmacokinetic/pharmacodynamic models for continuous variables is usually guided by graphical assessment of goodness of fit and statistical significance criteria. The latter is usually the likelihood ratio (LR) test. When the variable to be modeled is categorical, on the other hand, the available graphical methods are less informative and/or more complicated to use, and the modeler needs to rely more heavily on statistical significance assessment during model development. The aim of this study was to evaluate the type I error rates, obtained from using the LR test, for inclusion of a false parameter in a non-linear mixed effects model for ordered categorical data when modeling with NONMEM. Data with four ordinal categories were simulated from a logistic model. Two nested multinomial models were fitted to the data: the model used for simulation and a model containing one additional parameter. The difference in fit (objective function value) between models was calculated. Three types of models were explored: (i) a model without interindividual variability (IIV), where the addition of a parameter describing IIV was assessed; (ii) a model with IIV, where the addition of a drug effect parameter (either a categorical or a continuous drug exposure measure) was evaluated; and (iii) a model including IIV and drug effect, where the inclusion of a random effects parameter on the drug effect was assessed. Alterations were made to the simulation conditions, for example varying the number of individuals and the size and distribution of the IIV, to explore potential influences on the type I error rate. The estimated type I error rates for inclusion of a false random effect parameter in models (i) and (iii) were, as expected, lower than the nominal level. When the additional parameter was a fixed effects parameter describing drug effect (model (ii)), the estimated type I error rates were in agreement with the nominal level. None of the different simulation conditions tried changed this pattern. Thus, the LR test seems appropriate for judging the statistical significance of fixed effects parameters when modeling categorical data with NONMEM.
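As a reminder of the criterion under evaluation, the LR test compares the drop in the NONMEM objective function (approximately -2 log likelihood) between nested models with a \chi^2 distribution; for one additional parameter at \alpha = 0.05 the extra parameter is declared significant when

    \Delta\mathrm{OFV} = \mathrm{OFV}_{\text{reduced}} - \mathrm{OFV}_{\text{full}} > \chi^2_{1,\,0.95} = 3.84,

so the type I error rate is estimated as the fraction of simulated data sets in which the superfluous parameter lowers the objective function by more than 3.84.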

9.
We fit a mixed effects logistic regression model to longitudinal adverse event (AE) severity data (a four-point ordered categorical response) to describe the dose-AE severity response for an investigational drug. The distribution of the predicted interindividual random effects (Bayes predictions) was extremely bimodal. This extreme bimodality indicated that biased parameter estimates and poor predictive performance were likely. The distribution's primary mode was composed of patients who did not experience an AE. Moreover, the Bayes predictions of these non-AE patients were nearly degenerate; i.e., the predictions were nearly identical. To resolve this extreme bimodality we propose using a two-part mixture modeling approach. The first part models the incidence of AEs, and the second part models the severity grade given that the patient had an AE. Unconditional probability predictions are calculated by mixing the incidence and severity model probability predictions. We also report results of simulation studies, which assess the predictive and statistical (bias and precision) performance of our approach.
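The mixing of the two parts can be written generically as (notation illustrative, not the paper's exact model):

    P(Y_{ij} = 0) = 1 - p_{ij}, \qquad
    P(Y_{ij} = k) = p_{ij}\, P(S_{ij} = k \mid \mathrm{AE}), \quad k = 1, 2, 3,

where p_{ij} is the incidence-model probability that patient i experiences an AE at occasion j (e.g., from a mixed-effects logistic model of dose) and P(S_{ij} = k | AE) comes from the severity model defined over the non-zero grades only.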

10.
In a simulation study of the estimation of population pharmacokinetic parameters, including fixed and random effects, the estimates and confidence intervals produced by NONMEM were evaluated. Data were simulated according to a monoexponential model with a wide range of design and statistical parameters, under both steady-state (SS) and non-SS conditions. Within the range of values for population parameters commonly encountered in research and clinical settings, NONMEM produced parameter estimates for CL, V, σCL, and σε which exhibited relatively small biases. As the range of variability increased, these biases became larger and more variable. An important exception was the bias in the estimate for σV, which was large even when the underlying variability was small. NONMEM standard error estimates are appropriate as estimates of standard deviation when the underlying variability is small. Except in the case of CL, standard error estimates tend to deteriorate as the underlying variability increases. An examination of confidence interval coverage indicates that caution should be exercised when the usual 95% confidence intervals are used for hypothesis testing. Finally, simulation-based corrections of point and interval estimates are possible, but corrections must be performed on a case-by-case basis.
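For concreteness, a monoexponential population model of the type simulated, with interindividual variability on CL and V and a residual error term, can be sketched as follows (the exact parameterization is an assumption):

    C_{ij} = \frac{D_i}{V_i}\, e^{-(CL_i/V_i)\, t_{ij}} \left(1 + \varepsilon_{ij}\right), \qquad
    CL_i = CL\, e^{\eta_{CL,i}}, \quad V_i = V\, e^{\eta_{V,i}},

with \eta_{CL,i}, \eta_{V,i} and \varepsilon_{ij} normally distributed with variances corresponding to the σCL, σV and σε discussed above.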

11.
Individual pharmacokinetic parameters quantify the pharmacokinetics of an individual, while population pharmacokinetic parameters quantify population mean kinetics, interindividual variability, and residual intraindividual variability plus measurement error. Individual pharmacokinetics are estimated by fitting individual data to a pharmacokinetic model. Population pharmacokinetic parameters are estimated either by fitting all individuals' data together as though there were no individual kinetic differences (the naive pooled data approach), or by fitting each individual's data separately and then combining the individual parameter estimates (the two-stage approach). A third approach, NONMEM, takes a middle course between these and avoids the shortcomings of each of them. A data set consisting of 124 steady-state phenytoin concentration-dosage pairs from 49 patients, obtained in the routine course of their therapy, was analyzed by each method. The resulting population parameter estimates differ considerably (population mean Km, for example, is estimated as 1.57, 5.36, and 4.44 µg/ml by the naive pooled data, two-stage, and NONMEM approaches, respectively). Simulations of the data were analyzed to investigate these differences. The simulations indicate that the pooled data approach fails to estimate variabilities and produces imprecise estimates of mean kinetics. The two-stage approach produces good estimates of mean kinetics, but biased and imprecise estimates of interindividual variability. NONMEM produces accurate and precise estimates of all parameters, and also reasonable confidence intervals for them. This performance is exactly what is expected from theoretical considerations and provides empirical support for the use of NONMEM when estimating population pharmacokinetics from routine-type patient data. (Work supported in part by NIH Grants GM 26676 and GM 26691.)
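The phenytoin concentration-dosage pairs are linked through Michaelis-Menten elimination; at steady state the dosing rate R and concentration C_ss satisfy (standard form, not reproduced from the paper):

    R = \frac{V_{\max}\, C_{ss}}{K_m + C_{ss}}
    \quad \Longleftrightarrow \quad
    C_{ss} = \frac{K_m\, R}{V_{\max} - R},

which is why the population analysis estimates a mean Km (in µg/ml) and Vmax together with their variabilities.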

12.
A simulation study was conducted to compare the performance of alternative approaches for analyzing distorted pharmacodynamic data. The pharmacodynamic data were assumed to be obtained from a natriuretic peptide-type drug, where the diuretic effect arises from the hyperbolic (Emax) dose–response model and is biased by a dose-dependent hypotensive effect. The nonlinear mixed effect model (NONMEM) method enabled assessment of the effects of hemodynamics on the diuretic effects and also quantification of intrinsic diuretic activities, but the standard two-stage (STS) and naive pooled data (NPD) methods did not give accurate estimates. Both the STS and the NONMEM methods performed well for unbiased data arising from a one-compartment model with saturable (Michaelis–Menten) elimination, whereas the NPD method resulted in inaccurate estimates. The findings suggest that nonlinearity and/or bias problems result in poor estimation by NPD and STS analyses and that the NONMEM method is useful for analyzing such nonlinear and distorted pharmacodynamic data.
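The hyperbolic (Emax) dose–response model underlying the simulated diuretic effect has the usual form (symbols generic):

    E(D) = \frac{E_{\max}\, D}{ED_{50} + D},

with, in this study, an additional dose-dependent hypotensive effect distorting the observed response away from this curve.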

13.
Population pharmacokinetic analysis is increasingly applied to individual data collected in different studies and pooled in a single database. However, individual pharmacokinetic parameters may change randomly from one study to another. In this article, we show by simulation that neglecting inter-study variability (ISV) does not introduce any bias for the fixed parameters or for the residual variability, but may result in an overestimation of inter-individual variability (IIV), depending on the magnitude of the ISV. Two random study-effect (RSE) estimation methods were investigated: (i) estimation, in a single step, of the three nested random effects (inter-study, inter-individual and residual variability); (ii) estimation of the residual variability and a mixture of ISV and IIV in a first step, then separation of ISV from IIV in a second step. The one-stage RSE model performed well for population parameter assessment, whereas the two-stage model yielded good estimates of IIV only with a rich sampling design. Finally, irrespective of the method used, ISV estimates were valid only when a large number of studies was pooled. The analysis of one real data set illustrated the use of an ISV model. It showed that the fixed parameter estimates were not modified whether an RSE model was used or not, probably because of the homogeneity of the experimental designs of the studies, and suggests no study effect in this example.
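One common way to write the three nested random effects of the one-stage RSE model is (exponential parameterization assumed, not necessarily the authors'):

    \theta_{ik} = \theta_{\mathrm{pop}}\, e^{\kappa_k + \eta_{i(k)}}, \qquad
    \kappa_k \sim N(0, \gamma^2), \quad \eta_{i(k)} \sim N(0, \omega^2),

where \kappa_k is the random effect of study k (ISV), \eta_{i(k)} is the random effect of individual i within study k (IIV), and the residual error enters the observation model as usual; ignoring \kappa_k pushes the study-level variance \gamma^2 into the apparent IIV, which is the overestimation described above.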

14.
There has been little evaluation of maximum likelihood approximation methods for non-linear mixed effects modelling of count data. The aim of this study was to explore the estimation accuracy of population parameters from six count models, using two different methods and programs. Simulations of 100 data sets were performed in NONMEM for each probability distribution, with parameter values derived from a real case study of 551 epileptic patients. The models investigated were: Poisson (PS), Poisson with Markov elements (PMAK), Poisson with a mixture distribution for individual observations (PMIX), zero-inflated Poisson (ZIP), generalized Poisson (GP) and negative binomial (NB). Estimations of the simulated datasets were completed with the Laplacian approximation (LAPLACE) in NONMEM and with LAPLACE/Gaussian quadrature (GQ) in SAS. With LAPLACE, the average absolute value of the bias (AVB) in all models was 1.02% for fixed effects, and ranged from 0.32 to 8.24% for the estimation of the random effect of the mean count (λ). The random effect of the overdispersion parameter present in ZIP, GP and NB was underestimated (−25.87, −15.73 and −21.93% relative bias, respectively). Analysis with 9-point GQ resulted in an improvement in these parameters (3.80% average AVB). The methods implemented in SAS had a lower fraction of successful minimizations, and 9-point GQ was considerably slower than 1-point. Simulations showed that parameter estimates, even when biased, resulted in data that were only marginally different from data simulated from the true model. Thus all methods investigated appear to provide useful results for the investigated count data models.
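As an example of the count models compared, the zero-inflated Poisson (ZIP) mixes a structural-zero probability p_0 with a Poisson count (generic notation):

    P(Y = 0) = p_0 + (1 - p_0)\, e^{-\lambda}, \qquad
    P(Y = k) = (1 - p_0)\, \frac{\lambda^{k} e^{-\lambda}}{k!}, \quad k = 1, 2, \ldots,

with interindividual random effects typically placed on \log \lambda (the mean count) and on the zero-inflation/overdispersion parameter, the two variance components whose estimation accuracy is compared above.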

15.
Tumor growth profiles were simulated for 2 years using the Wang and Claret models under a phase 3 clinical trial design. Profiles were censored when tumor size increased >20% from nadir, similar to clinical practice. The percentage of patients censored varied from 0% (perfect case) to 100% (real-life case). The model used to generate the data was then fit to the censored data using FOCE in NONMEM. The percent bias in the model parameters estimated from censored data was determined relative to the true values. A total of 100 simulation replicates was used. For the Wang model, under clinical conditions (100% censoring), the parameter related to tumor reduction, SR, was underpredicted by 30% and the parameter related to tumor growth, PR, was underpredicted by ~45%. Most of the variance components in the model were within ±20% of the true values. However, biased parameter estimates in the Wang model did not translate to biased tumor size predictions, as the mean percent prediction error between true and model-predicted tumor size never exceeded 10%. For the Claret model, at 100% censoring, the tumor growth parameter KL was unaffected by censoring. Both tumor shrinkage parameters, KD and λ, were overestimated by ~20% in both cases. Future research needs to be directed toward developing less empirically based models and toward using simulation as a way to improve clinical oncology trial designs.
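For orientation, the two tumor-size models are usually written as follows (forms taken from the published literature and simplified; parameter names match the abstract, but details such as exposure-driven drug effect terms are omitted):

    \text{Wang:}\quad y(t) = y_0\, e^{-SR\, t} + PR\, t, \qquad
    \text{Claret:}\quad \frac{dy}{dt} = K_L\, y(t) - K_D\, e^{-\lambda t}\, y(t), \quad y(0) = y_0,

so SR, KD and λ describe tumor shrinkage, while PR and KL describe growth or regrowth.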

16.
Neural Network Predicted Peak and Trough Gentamicin Concentrations
Predictions of steady-state peak and trough serum gentamicin concentrations were compared between a traditional population kinetic method using the computer program NONMEM and an empirical approach using neural networks. Predictions were made in 111 patients with peak concentrations between 2.5 and 6.0 µg/ml using the patient factors age, height, weight, dose, dose interval, body surface area, serum creatinine, and creatinine clearance. Predictions were also made on 33 observations that were outside the 2.5-6.0 µg/ml range. Neural networks made peak serum concentration predictions within the 2.5-6.0 µg/ml range with statistically less bias and comparable precision relative to paired NONMEM predictions. Trough serum concentration predictions were similar using both neural networks and NONMEM. The prediction error for peak serum concentrations averaged 16.5% for the neural networks and 18.6% for NONMEM. Average prediction errors for serum trough concentrations were 48.3% for neural networks and 59.0% for NONMEM. NONMEM provided numerically more precise and less biased predictions when extrapolating outside the 2.5-6.0 µg/ml range. The observed peak serum concentration distribution was multimodal, and the neural network reproduced this distribution with less difference between the actual and predicted distributions than NONMEM. It is concluded that neural networks can predict serum drug concentrations of gentamicin. Neural networks may be useful in predicting the clinical pharmacokinetics of drugs.
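Purely as an illustration of the empirical approach (not the network used in the study), a small feed-forward network mapping the listed patient factors to a predicted peak concentration could be set up as follows; the architecture, preprocessing and placeholder data are assumptions:

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Covariate columns: age, height, weight, dose, dose interval,
    # body surface area, serum creatinine, creatinine clearance.
    rng = np.random.default_rng(0)
    X_train = rng.random((111, 8))                 # placeholder covariates
    y_train = 2.5 + 3.5 * rng.random(111)          # placeholder peaks (ug/ml)

    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
    )
    model.fit(X_train, y_train)
    predicted_peaks = model.predict(X_train[:5])   # predictions for 5 patients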

17.
Routine clinical pharmacokinetic (PK) data collected from patients receiving inulin were analyzed to estimate population PK parameters; 560 plasma concentration determinations of inulin were obtained from 90 patients. The data were analyzed using NONMEM. The population PK parameters were estimated using a constrained longitudinal splines (CLS) semiparametric approach and a first-order conditional estimation (FOCE) method. The mean posterior individual clearance value was 7.73 L/hr using both the parametric and semiparametric methods. This estimate was compared with clearances estimated using a standard nonlinear weighted least-squares approach (reference value, 7.64 L/hr). The bias was not statistically different from zero, and the precision of the estimates was 0.415 L/hr using the parametric method and 0.984 L/hr using the semiparametric method. To evaluate the predictive performance of the population parameters, 17 new subjects were used. First, the individual inulin clearance values were estimated from the drug concentration-time curves using a nonlinear weighted least-squares method; they were then estimated using the NONMEM POSTHOC method with the population parameters obtained from the parametric and CLS approaches, as well as with an alternative method based on a Monte Carlo simulation approach. The population parameters combined with two individual inulin plasma concentrations (0.25 and 2 hr) led to an estimation of individual clearances without bias and with good precision. This paper not only evaluates the relative performance of the parametric and CLS methods for sparse data but also introduces a new method for individual estimation.

18.
The purpose of this study was to evaluate the effects of population size, number of samples per individual, and level of interindividual variability (IIV) on the accuracy and precision of pharmacodynamic (PD) parameter estimates. Response data were simulated from concentration input data for an inhibitory sigmoid drug efficacy (Emax) model using Nonlinear Mixed Effects Modeling, version 5 (NONMEM). Seven designs were investigated, using different concentration sampling windows ranging from 0 to 3 EC50 units (EC50 is the drug concentration at 50% of Emax). The response data were used to estimate the PD and variability parameters in NONMEM. The accuracy and precision of parameter estimates after 100 replications were assessed using the mean and SD of the percent prediction error, respectively. Four samples per individual were sufficient to provide accurate and precise estimates of almost all of the PD and variability parameters, with 100 individuals and an IIV of 30%. Reduction of the sample size resulted in imprecise estimates of the variability parameters; however, the PD parameter estimates were still precise. At 45% IIV, designs with 5 samples per individual behaved better than those with 4 samples per individual. For a moderately variable drug with a high Hill coefficient, sampling from the 0.1 to 1, 1 to 2, 2 to 2.5, and 2.5 to 3 EC50 windows is sufficient to estimate the parameters reliably in a PD study.
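The response model from which the data were simulated is the inhibitory sigmoid Emax model; in one usual parameterization (baseline and symbols generic):

    E(C) = E_0 \left( 1 - \frac{E_{\max}\, C^{\gamma}}{EC_{50}^{\gamma} + C^{\gamma}} \right),

where \gamma is the Hill coefficient referred to above; the concentration sampling windows of the seven designs are expressed in multiples of EC50 on this curve.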

19.
A nonparametric population method with support points taken from the empirical Bayes estimates (EBEs) has recently been introduced (the default method). However, with sparse and small datasets, the EBE distribution may not provide a suitable range of support points. This study aims to develop a method, based on a prior parametric analysis, capable of providing a nonparametric grid with an adequate range of support points. The new method extends the nonparametric grid with additional support points generated by simulation from the parametric distribution, hence the name extended-grid method. The joint probability density function is estimated on the extended grid. The performance of the new method was evaluated and compared to the default method via Monte Carlo simulations using a simple IV bolus model and sparse (200 subjects, two samples per subject) or small (30 subjects, three samples per subject) datasets, and two scenarios based on real case studies. Parameter distributions estimated by the default and the extended-grid methods were compared to the true distributions; bias and precision were assessed at different percentiles. With small datasets, the bias was similar between methods (<10%); however, precision was markedly improved with the new method (by 43%). With sparse datasets, both bias (from 5.9% to 3%) and precision (by 60%) were improved. For simulated scenarios based on real study designs, extended-grid predictions were in good agreement with the true values. A new approach to obtaining support points for the nonparametric method has been developed, and it displayed good estimation properties. The extended-grid method is automated, using the program PsN, for implementation into NONMEM.
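A schematic of the extended-grid idea (hypothetical code, not PsN's implementation): the nonparametric support points are the empirical Bayes estimates augmented with samples simulated from the previously estimated parametric (here assumed log-normal) parameter distribution:

    import numpy as np

    def extended_grid(ebes, theta_pop, omega_sd, n_extra=100, seed=0):
        """Combine EBE support points with simulated parametric support points.

        ebes      : empirical Bayes estimates of one parameter (one per subject)
        theta_pop : population typical value from the prior parametric fit
        omega_sd  : SD of the (assumed log-normal) interindividual random effect
        """
        rng = np.random.default_rng(seed)
        simulated = theta_pop * np.exp(rng.normal(0.0, omega_sd, size=n_extra))
        return np.concatenate([np.asarray(ebes, dtype=float), simulated])

The joint density would then be estimated over this enlarged grid, as described in the abstract.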

20.
Nonlinear mixed effects model parameters are commonly estimated using maximum likelihood. The properties of these estimators depend on the assumption that residual errors are independent and normally distributed with mean zero and correctly defined variance. Violations of this assumption can cause bias in parameter estimates, invalidate the likelihood ratio test and preclude simulation of realistic data. The choice of error model is mostly made on a case-by-case basis from a limited set of commonly used models. In this work, two strategies are proposed to extend and unify residual error modeling: a dynamic transform-both-sides approach combined with a power error model (dTBS), capable of handling skewed and/or heteroscedastic residuals, and a t-distributed residual error model allowing for symmetric heavy tails. Ten published pharmacokinetic and pharmacodynamic models, as well as stochastic simulation and estimation, were used to evaluate the two approaches. dTBS always led to significant improvements in objective function value, with most examples displaying some degree of right-skewness and variances proportional to predictions raised to powers between 0 and 1. The t-distribution led to significant improvement for 5 out of 10 models, with degrees of freedom between 3 and 9. Six models were most improved by the t-distribution, while four models benefited more from dTBS. Changes in other model parameter estimates were observed. In conclusion, the use of dTBS and/or t-distribution models provides a flexible and easy-to-use framework capable of characterizing all commonly encountered residual error distributions.
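Schematically, the dTBS approach applies the same estimated transformation to both the observation y and the prediction f and lets the error magnitude scale with the prediction; one common way to write it (notation assumed, not necessarily the paper's exact formulation):

    \frac{y^{\lambda} - 1}{\lambda} = \frac{f^{\lambda} - 1}{\lambda} + f^{\,\zeta} \varepsilon, \qquad \varepsilon \sim N(0, \sigma^2),

where the Box-Cox parameter \lambda controls the skewness of the residual distribution and the power \zeta interpolates between additive (\zeta = 0) and proportional (\zeta = 1) error; the alternative strategy instead replaces the normal \varepsilon with a t distribution whose degrees of freedom are estimated, accommodating symmetric heavy tails.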
