Similar Articles
19 similar articles found (search time: 125 ms)
1.
Schenzle et al. proposed a modified simple catalytic model with a time-varying force of infection, but its parameter estimation is rather complicated. That model assumes, without justification, that the force of infection varies as a logistic function, which may lead to an inappropriate choice of the force-of-infection function. The authors propose a method of fitting the simple catalytic model and its modified form directly: first fit the data by ordinary curve regression to obtain the fitted equation u(t), and then derive the force-of-infection function directly from u(t) as r(t) = -u′(t)/u(t). This formula determines the form of the force-of-infection function directly, giving its choice a theoretical basis.
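The fit-then-differentiate procedure described above can be sketched numerically (a hypothetical illustration; the data and the assumed exponential form of u(t) are invented, not taken from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

# Age (years) and observed proportion still susceptible (invented data).
age = np.array([1, 3, 5, 10, 15, 20, 30], dtype=float)
u_obs = np.array([0.95, 0.85, 0.75, 0.55, 0.40, 0.30, 0.15])

# Fit a smooth curve u(t) by ordinary curve regression; a two-parameter
# exponential form is assumed here purely for illustration.
def u_model(t, k, p):
    return np.exp(-k * t**p)

(k, p), _ = curve_fit(u_model, age, u_obs, p0=[0.1, 1.0])

# Force of infection from the fitted curve: r(t) = -u'(t) / u(t).
def force_of_infection(t):
    return k * p * t**(p - 1)   # analytic -u'/u for the assumed form

print(force_of_infection(np.array([5.0, 10.0, 20.0])))
```

If the fitted exponent p is close to 1, r(t) reduces to the constant k, recovering the classical simple catalytic model.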

2.
The determination and identification of the force-of-infection function in the simple catalytic model are key to whether the model is applied appropriately. This paper further analyses the classification, identification, and determination of the force-of-infection function, providing a theoretical basis for the rational application of the model, together with algorithms for determining the various parameters.

3.
This paper fits catalytic models by the differential fitting method. Example analyses of five data sets show that the method is computationally simple, accurate, and reliable, and has some value for wider application.

4.
A simple and practical peripheral analgesia model (cited 4 times: 0 self-citations, 4 by others)
Pharmacological research on analgesics badly needs peripheral analgesia models. Although peripheral nerve discharge can be used as an index for evaluating topical analgesics, it requires certain technical equipment and does not necessarily represent pain. We established a simple, practical peripheral analgesia model: drugs are given locally into the rabbit ear artery, both marginal ear veins are blocked and drained externally, and the peripheral analgesic effect of a drug is evaluated by the change, before and after dosing, in the threshold voltage of the pain response evoked by promoting subcutaneous K⁺ penetration.

5.
Selection and calculation of the force of infection in catalytic model fitting (cited 1 time: 1 self-citation, 0 by others)
For the selection and judgement of the force of infection in the simple catalytic model, a selection method is proposed by which one can directly decide whether the force of infection is constant or time-varying, providing a theoretical basis for model choice.
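A minimal version of such a constancy check (a sketch consistent with, but not necessarily identical to, the proposed method): under a constant force of infection r, the simple catalytic model gives u(t) = e^(-rt), so -ln u(t) should be linear in age.

```python
import numpy as np

# Invented age-specific proportions still susceptible.
age = np.array([1, 5, 10, 15, 20, 25, 30], dtype=float)
u = np.array([0.90, 0.60, 0.36, 0.22, 0.13, 0.08, 0.05])

# Under a constant force of infection r, u(t) = exp(-r t), so
# y = -ln u(t) should lie on a straight line of slope r.
y = -np.log(u)
slope, intercept = np.polyfit(age, y, 1)
resid = y - (slope * age + intercept)
r2 = 1 - resid.var() / y.var()

print(f"estimated r = {slope:.3f}, linearity R^2 = {r2:.4f}")
# High R^2 supports a constant r; systematic curvature suggests a
# time-varying force of infection instead.
```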

6.
李永, 李庆生. 《云南医药》 1998, 19(3): 185-187
Application of the catalytic model in tuberculosis epidemiology. 李永, 李庆生, 罗强, 李廷华, 攸福. In 1959 Muench proposed the catalytic model design for epidemiology. Later, a number of scholars applied catalytic models to the epidemiological analysis of trachoma, hepatitis B, schistosomiasis, diphtheria, and other diseases, showing that the model can quantitatively measure the force of infection of a given disease in a given area and can reflect the disease in the population...

7.
A discussion of the reversible and two-stage compound catalytic model (cited 1 time: 0 self-citations, 1 by others)
To study the characteristic age distribution of certain parasitic infection rates, which first rises and then falls, Yangxi Zhang first suggested combining, under suitable conditions, the reversible catalytic model with the two-stage catalytic model into a reversible and two-stage compound catalytic model, and used it to analyse age-specific hookworm infection rates in Changshou County [1]. The epidemic pattern described by the compound model is A ⇌ B → C (with rates a, b, c), where A denotes susceptibles, B the infected, and C those who have lost the signs of infection and can no longer be infected. A converts to B at rate a, B reverts to A at rate b and converts to C at rate c, so the epidemic combines a reversible process and a two-stage process. Let the susceptibles initially be 1, and let x denote the population at any time (year...
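The flow A ⇌ B → C described above corresponds to a linear ODE system that is easy to integrate numerically (the rates a, b, c below are invented, not the values fitted to the hookworm data):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Reversible and two-stage compound catalytic model:
#   A (susceptible) -> B (infected) at rate a; B -> A at rate b;
#   B -> C (no longer infectible) at rate c; x + y + z = 1.
a, b, c = 0.20, 0.05, 0.03   # illustrative rates only

def rhs(t, s):
    x, y, z = s
    return [-a * x + b * y,
            a * x - (b + c) * y,
            c * y]

sol = solve_ivp(rhs, (0, 60), [1.0, 0.0, 0.0],
                t_eval=np.linspace(0, 60, 13))
x, y, z = sol.y
print(np.round(y, 3))  # infection prevalence y(t) rises and then falls
```

The rise-then-fall shape of y(t) is exactly the age pattern the compound model was built to describe.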

8.
Application of the reversible catalytic model to fitting tuberculosis infection-rate curves. 陈焕光 (福安市卫协会), 薛成福 (福安市卫生防疫站). This paper applies the reversible catalytic model [1,2] to fit tuberculosis infection rates, in order to reflect the population's age-distribution characteristics, force of infection, and so on. 1. Basic principle of the fit: on the one hand, the population converts to the infected state at force of infection a, the sign of infection being a positive tuberculin test; on the other...

9.
Objective: To present the compartment-model theory and methods of theoretical epidemiology. Methods: A first-order compartment-model theory of theoretical epidemiology was established, with its methods and language. Results: Using this theory, Muench's catalytic model was rebuilt and improved, overcoming its limitation to a single-type epidemiological state with a constant force of infection in a closed system. Conclusion: With the compartment-model theory presented here, the improved catalytic model may be called a generalized catalytic model, and a theoretical analysis of it is given. The model can also be applied to the transfer, treatment, and analysis of radioactive materials in nature.
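As a minimal illustration of dropping the constant-force-of-infection restriction (a sketch, not the paper's generalized catalytic model), the catalytic equation can be integrated with any time-varying r(t):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Catalytic model dy/dt = r(t) * (1 - y), where y is the infected
# proportion. A constant r gives y(t) = 1 - exp(-r t); here a
# logistic-ramp r(t), purely illustrative, is used instead.
def r(t):
    return 0.3 / (1.0 + np.exp(-(t - 10.0) / 2.0))

def rhs(t, y):
    return [r(t) * (1.0 - y[0])]

sol = solve_ivp(rhs, (0, 40), [0.0], t_eval=np.linspace(0, 40, 9))
y = sol.y[0]
print(np.round(y, 3))  # infected proportion rises monotonically toward 1
```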

10.
This paper applies the reversible and two-stage compound catalytic model to analyse data on age-group microfilaria infection rates in part of the population of Yangxin County. A χ² test shows the fit to be satisfactory. From the different values of a, b, and c in the model, one can compare the disease's mean transmission speed, explore the factors influencing its epidemic distribution, and evaluate the effectiveness of control measures.

11.
Summary For model identification and parameter estimation in the framework of linear pharmacokinetics it is most often assumed that the disposition function is a finite sum of exponential functions with time constants τi and associated coefficients Ci. Least-squares fitting procedures are used to estimate the coefficients Ci and the corresponding discrete locations τi on the τ-axis. This work presents an alternative approach. It does not assume that the non-zero coefficients are located at sharply defined values of τ, but that they are represented by a continuous function h(τ), the spectrum of the disposition function. This turns the non-linear least-squares problem into a linear one, which is known to be ill-posed. Regularisation methods have been developed in recent years as suitable tools for the treatment of such ill-posed problems. Application of Tikhonov regularisation to the bolus kinetics of propofol in 8 volunteers is demonstrated. In 7 of the 8 cases a spectrum with 4 to 5 peaks was found; in one volunteer there were only 2 peaks. All spectra with more than 2 peaks showed negative values of h(τ). The method used is described and the results are compared with those of conventional compartment analysis.
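The regularised inversion described above can be sketched as follows (synthetic data; the τ-grid and regularisation parameter are assumptions, and real applications choose λ more carefully):

```python
import numpy as np

# Disposition function modelled as C(t) = ∫ h(τ) exp(-t/τ) dτ.
# Discretising the τ-axis turns this into a linear system C ≈ A h,
# which is ill-posed; Tikhonov regularisation stabilises the inversion.
t = np.linspace(0.1, 20, 60)            # sampling times (assumed)
tau = np.logspace(-1, 1.5, 40)          # τ-grid (assumed)
A = np.exp(-np.outer(t, 1.0 / tau))     # discretised kernel matrix

# Synthetic biexponential "data": C(t) = 2 e^{-t/0.5} + e^{-t/5}
c = 2.0 * np.exp(-t / 0.5) + np.exp(-t / 5.0)

lam = 1e-3                              # regularisation parameter (assumed)
h = np.linalg.solve(A.T @ A + lam * np.eye(tau.size), A.T @ c)

# Inspect h(τ) for peaks near the true time constants; as in the study
# above, the regularised spectrum may also take negative values.
print(np.round(h[:5], 3))
```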

12.
When the two-compartment model with absorption is fitted to data by nonlinear least squares, in general six different outcomes can be obtained, arising from permutation of the three exponential rate constants. The existence of multiple solutions in this sense is analogous to the flip-flop phenomenon in the one-compartment model. It is possible for parameter estimates to be inconsistent with the underlying physical model. Methods for recognizing such illegal estimates are described. Other common difficulties are that estimated values for two of the rate constants are almost identical with very large standard deviations, or that the parameter estimation algorithm converges poorly. Such unwanted outcomes usually signal a local (false) minimum of the sum of squares. They can be recognized from the ratio of largest to smallest singular value of the Jacobian matrix, and are, in principle, avoidable by starting the estimation algorithm with different initial values. There also exists a class of data sets for which all outcomes of fitting the usual equations are anomalous. A better fit to these data sets (smaller sum of squares) is obtained if two of the relevant rate constants are allowed to take complex conjugate values. Such data sets have usually been described as having “equal rate constants.” A special form of the model equation is available for parameter estimation in this case. Precautions relating to its use are discussed.
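The singular-value diagnostic mentioned above can be sketched as follows (the sampling times, amplitudes, and rate constants are invented for illustration):

```python
import numpy as np

# Triexponential model behind the two-compartment model with absorption:
#   C(t) = A1 e^{-l1 t} + A2 e^{-l2 t} + A3 e^{-l3 t}
# A large ratio of largest to smallest singular value of the Jacobian
# (w.r.t. the parameters) signals a nearly unidentifiable fit, as when
# two rate constants almost coincide.
def jacobian(t, amps, lams):
    cols = []
    for Ai, li in zip(amps, lams):
        cols.append(np.exp(-li * t))            # dC/dAi
        cols.append(-Ai * t * np.exp(-li * t))  # dC/dli
    return np.column_stack(cols)

t = np.linspace(0.25, 24, 30)   # invented sampling schedule

def sv_ratio(lams):
    J = jacobian(t, amps=[1.0, -0.5, -0.5], lams=lams)
    s = np.linalg.svd(J, compute_uv=False)
    return s[0] / s[-1]

well = sv_ratio([0.1, 0.5, 2.0])    # well-separated rate constants
bad = sv_ratio([0.1, 0.50, 0.51])   # two nearly identical constants
print(f"well-separated: {well:.1e}, nearly equal: {bad:.1e}")
```

The near-equal case produces a far larger ratio, flagging the kind of anomalous outcome the abstract describes.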

13.
14.
Summary In dealing with the problem of endogeneity in a time-varying parameter model, we develop joint and two-step estimation procedures based on the control function approach. We show that a key to the success of the joint estimation procedure is an appropriate state-space representation of the model. On the other hand, a correct treatment of the problem of generated regressors plays an important role in our two-step estimation procedure. Monte Carlo experiments confirm that the estimation procedures proposed in this paper work well in finite samples. Concerning our proposed endogeneity tests, the asymptotic distributions of both the likelihood ratio and Wald tests based on the second-step regression are reasonably well approximated by a χ2 distribution even in finite samples.

15.
T Rösner, R Franke, R Kühne. 《Die Pharmazie》 1978, 33(4): 226-228
A method is proposed which yields an approximate solution of the Free-Wilson model very rapidly without using a computer. Although the resulting group contributions are numerically somewhat different from the exact Free-Wilson solution they correctly reflect the relative order of the substituents with respect to their effect on biological activity within each position as well as the relative importance of different positions. Thus, the approximate results can well be used to select the most promising candidates for further synthesis and testing.

16.
The impact of assay variability on pharmacokinetic modeling was investigated. Simulated replications (150) of three individuals resulted in 450 data sets. A one-compartment model with first-order absorption was simulated. Random assay errors of 10, 20, or 30% were introduced, and the ratio of the absorption rate constant (Ka) to the elimination rate constant (Ke) was 2, 10, or 20. The analyst was blinded as to the rate constants chosen for the simulations. Parameter estimates from the sequential method (Ke estimated with log-linear regression followed by estimation of Ka) and from nonlinear regression with various weighting schemes were compared. NONMEM was run on the 9 data sets as well. Assay error caused a sizable number of curves to have apparent multicompartmental distribution or complex absorption kinetic characteristics. Routinely tabulated parameters (maximum concentration, area under the curve, and, to a lesser extent, mean residence time) were consistently overestimated as assay error increased. When Ka/Ke = 2, all methods except NONMEM underestimated Ke, overestimated Ka, and overestimated apparent volume of distribution. These significant biases increased with the magnitude of assay error. With improper weighting, nonlinear regression significantly overestimated Ke when Ka/Ke = 20. In general, however, the sequential approach was most biased and least precise. Although no interindividual variability was included in the simulations, estimation error caused large standard deviations to be associated with derived parameters, which would be interpreted as interindividual error in a nonsimulation environment. NONMEM, however, acceptably estimated all parameters and variabilities. Routinely applied pharmacokinetic estimation methods do not consistently provide unbiased answers. In the specific case of extended-release drug formulations, there is clearly a possibility that certain estimation methods yield Ka and relative bioavailability estimates that would be imprecise and biased.
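The effect of assay error on a tabulated parameter such as Cmax can be reproduced in a few lines (a sketch using one of the study's settings, Ka/Ke = 2 with 20% proportional error; the sampling schedule is invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# One-compartment model with first-order absorption (dose/volume = 1):
#   C(t) = Ka/(Ka - Ke) * (exp(-Ke*t) - exp(-Ka*t))
def conc(t, ka, ke):
    return ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))

ke = 0.1
ka = 2.0 * ke                    # Ka/Ke = 2, one of the simulated ratios
t = np.linspace(0.5, 48, 12)     # invented sampling schedule
true = conc(t, ka, ke)

# 20% proportional assay error, 150 replicates as in the study design.
noisy = true * (1 + 0.20 * rng.standard_normal((150, t.size)))

# Observed Cmax is taken as the largest measured concentration, so even
# zero-mean assay noise biases it upward on average.
cmax_obs = noisy.max(axis=1)
print(f"true Cmax {true.max():.3f}, mean observed Cmax {cmax_obs.mean():.3f}")
```

This max-of-noisy-points effect is one mechanism behind the consistent Cmax overestimation reported above.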

17.
A significant bias in parameters estimated with the proportional odds model using the software NONMEM has been reported. Typically, this bias occurs with ordered categorical data when most of the observations are found at one extreme of the possible outcomes. The aim of this study was to assess, through simulations, the performance of the Back-Step Method (BSM), a novel approach for obtaining unbiased estimates when the standard approach provides biased estimates. BSM is an iterative method involving sequential simulation-estimation steps. BSM was compared with the standard approach in the analysis of a 4-category ordered variable using the Laplacian method in NONMEM. The bias in parameter estimates and the accuracy of model predictions were determined for the 2 methods under 3 conditions: (1) a nonskewed distribution of the response with low interindividual variability (IIV), (2) a skewed distribution with low IIV, and (3) a skewed distribution with high IIV. An increase in bias with increasing skewness and IIV was shown in parameters estimated using the standard approach in NONMEM. BSM performed without appreciable bias in the estimates under the 3 conditions, and the model predictions were in good agreement with the original data. Each BSM estimation represents a random sample of the population; hence, repeating the BSM estimation reduces the imprecision of the parameter estimates. The BSM is an accurate estimation method when the standard modeling approach in NONMEM gives biased estimates.

18.
A pharmacokinetic screen has been advocated for the characterization of the population pharmacokinetics of drugs during Phase 3 clinical trials. A common perception encountered in the collection of such data is that the accuracy of sampling times relative to dose is inadequate. A prospective simulation study was carried out to evaluate the effect of error in the recording of sampling times on the accuracy and precision of population parameter estimates from repeated-measures pharmacokinetic data. A two-compartment model with intravenous bolus input(s) (single and multiple doses) was assumed. Random and systematic errors in sampling times ranging from 5–50%, using a profile (block) randomized design, were introduced. Sampling times were simulated in EXCEL while concentration data simulation and analysis were done in NONMEM. The effect of error in sampling times was studied at levels of variability ranging from 15–45% for a drug assumed to be dosed at its elimination half-life. One hundred replicate data sets of 100 subjects each were simulated for each case. Although estimates of clearance (CL) and variability in clearance were robust to most of the sampling time errors, there was an increase in bias and imprecision in overall parameter estimation as intersubject variability was increased. If there is interest in parameters other than CL, then the design of prospective population studies should include procedures for minimizing the error in the recording of sample times relative to dosing history. The views expressed in this article are personal opinions of the authors and not those of the US Food and Drug Administration.

19.
Few attempts have been made to examine the statistical problems that the user of compartmental models must face. Some properties of the estimators of parameters for one- and two-compartment models based on nonlinear estimation were studied through simulation. Of particular interest were the effect of the experimental design and the effect of different error structures on the empirical sampling distribution of the estimators. For the one-compartment model it was found that nonlinear estimation yielded essentially unbiased estimators that were normally distributed unless the random error for the model was large. In the two-compartment model simulations, bias appeared in the estimators, to the extent that bimodal sampling distributions of the estimators were observed as the random error for the model was increased. This work was supported in part by NIOSH Grant OH 07091-03 and by the Lipid Research Clinic NHLBI Contract N01-HV-2-2914L from the National Institutes of Health.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号