Similar articles
 20 similar articles found (search time: 15 ms)
1.
Analysis of repeated binary measurements is challenging because of the correlation between measurements within an individual, and a mixed-effects modelling approach has been used for the analysis of such data. Sample size calculation is an important part of clinical trial design and is often based on the intended method of analysis. We present a method for calculating the sample size for repeated binary pharmacodynamic measurements based on analysis by mixed-effects modelling with a logit transformation. The Wald test is used for hypothesis testing. The method can be used to calculate the sample size required for detecting parameter differences between subpopulations. Extensions that account for unequal allocation of subjects across groups and for unbalanced sampling designs between and within groups were also derived. The proposed method was assessed via simulation of a linear model and estimation using NONMEM. The results showed good agreement between the nominal power and the power estimated from the NONMEM simulations. They also showed that the sample size increases with increasing variability, at a rate that depends on the difference in parameter estimates between groups, and that designs based on optimal sampling can help to reduce cost.
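The Wald-test sample size logic described above can be sketched in a few lines. Here `var_unit`, the variance of the logit-scale parameter estimate contributed by a single subject, is an assumed input that would in practice come from the mixed-effects model; all numbers are purely illustrative.

```python
# Sketch of a Wald-test-based sample size calculation for detecting a
# difference `delta` in a logit-scale parameter between two subpopulations.
# `var_unit` (per-subject variance of the estimate) is an assumed input.
from math import ceil
from scipy.stats import norm

def wald_sample_size(delta, var_unit, alpha=0.05, power=0.90):
    """Subjects per group for a two-sided Wald test to detect `delta`."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    # Var(delta_hat) with n subjects in each of two groups = 2 * var_unit / n
    return ceil(2 * var_unit * z ** 2 / delta ** 2)

n_per_group = wald_sample_size(delta=0.5, var_unit=1.0)
```

Increasing the assumed per-subject variance raises the required sample size, matching the abstract's observation that sample size grows with variability.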

2.
Population pharmacodynamic experiments sometimes involve repeated measurements of ordinal random variables at specific time points. Such longitudinal data present a challenge during modelling because of the correlation between measurements within an individual, and a mixed-effects modelling approach is often used for the analysis. It is important that these studies are adequately powered, by including a sufficient number of subjects, to detect a significant treatment effect. This paper describes a method for calculating the sample size for repeated ordinal measurements in population pharmacodynamic experiments based on analysis by a mixed-effects modelling approach. The Wald test is used for testing the significance of treatment effects. The method is fast, simple and efficient, and can be extended to account for differential allocation of subjects to the groups and for unbalanced sampling designs between and within groups. The results of two simulation studies using nonlinear mixed-effects modelling software (NONMEM) showed good agreement between the power obtained from simulation and the nominal power used for the sample size calculations.

3.
Repeated discrete outcome variables such as count measurements often arise in pharmacodynamic experiments. Count measurements can take only nonnegative integer values; this, together with the correlation between repeated measurements from an individual, makes the design and analysis of repeated-count data special. Sample size/power calculation is an important part of clinical trial design to ensure adequate power for detecting a significant effect, and it is often based on the procedure intended for the analysis. This paper describes an approach for calculating sample size/power for population pharmacokinetic/pharmacodynamic experiments involving repeated-count measurements modeled as a Poisson process with a mixed-effects modeling technique. The noncentral version of the Wald χ² test is used for testing parameter/treatment significance. The approach was applied to two examples and the results were compared with results from simulations in NONMEM. The first example involves calculating the power of a design to detect a parameter difference between two groups, placebo and treatment. The second example involves characterizing the dose-efficacy relationship of oxybutynin using a mixed-effects modeling approach: the weekly number of urge urinary incontinence episodes (a discrete count variable) is the primary efficacy variable and is modeled as a Poisson variable. A prospective study based on two different formulations of oxybutynin was designed using a published population pharmacokinetic/pharmacodynamic model. The simulation studies showed good agreement between the proposed method and NONMEM simulations.
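The noncentral Wald χ² power computation the abstract relies on can be sketched as follows; `var_unit` (the per-subject variance of the effect estimate on the log rate scale) is an illustrative assumption, not a value from the paper.

```python
# Sketch: power of the noncentral Wald chi-square test for a treatment
# effect `beta` on the log event rate of a Poisson count model.
# `var_unit` (per-subject variance of the estimate) is an assumed input.
from scipy.stats import chi2, ncx2

def wald_chi2_power(n, beta, var_unit, alpha=0.05):
    lam = n * beta ** 2 / var_unit       # noncentrality grows linearly in n
    crit = chi2.ppf(1 - alpha, df=1)     # central chi-square critical value
    return ncx2.sf(crit, df=1, nc=lam)   # P(reject H0 | true effect beta)

power = wald_chi2_power(n=50, beta=0.3, var_unit=2.0)
```

Sample size is then found by increasing `n` until the returned power reaches the target.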

4.
In recent years, interest in applying experimental design theory to population pharmacokinetic (PK) and pharmacodynamic (PD) experiments has increased. The aim is to improve the efficiency and precision with which parameters are estimated during data analysis and, in some cases, to increase the power and reduce the sample size required for hypothesis testing. The population Fisher information matrix (PFIM) has been described for uniresponse and multiresponse population PK experiments for design evaluation and optimisation. Despite these developments and the availability of tools for optimal design of population PK and PD experiments, much of the effort has focused on repeated continuous measurements, with less work on repeated discrete-type measurements. Discrete data arise mainly in PD, e.g. ordinal, nominal, dichotomous or count measurements. This paper implements expressions for the PFIM for repeated ordinal, dichotomous and count measurements based on analysis by a mixed-effects modelling technique. Three simulation studies were used to investigate the performance of the expressions: Example 1 is based on repeated dichotomous measurements, Example 2 on repeated count measurements and Example 3 on repeated ordinal measurements. Data simulated in MATLAB were analysed using NONMEM (Laplace method) and the glmmML package in R (Laplace and adaptive Gauss-Hermite quadrature methods). The results for Examples 1 and 2 showed good agreement between the relative standard errors obtained using the PFIM and those obtained by simulation. The results for Example 3 showed the importance of sampling at the most informative time points. Implementation of these expressions provides the opportunity for efficient design of population PD experiments involving discrete-type data through design evaluation and optimisation.
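A heavily simplified version of the Fisher-information idea for dichotomous data can be sketched for a fixed-effects logistic model observed at candidate time points; the paper's PFIM additionally integrates over the interindividual random effects, which this sketch omits, and the design and parameter values are assumptions.

```python
# Sketch: Fisher information for a dichotomous (logistic) response measured
# at a set of candidate times, ignoring random effects for simplicity.
import numpy as np

def logistic_fim(times, theta):
    """FIM for p(t) = expit(theta[0] + theta[1] * t), one subject."""
    t = np.asarray(times, dtype=float)
    X = np.column_stack([np.ones_like(t), t])  # design matrix
    p = 1.0 / (1.0 + np.exp(-(X @ theta)))     # event probability per time
    W = np.diag(p * (1 - p))                   # Bernoulli variance weights
    return X.T @ W @ X

fim = logistic_fim([0.0, 1.0, 2.0, 4.0], np.array([-1.0, 0.5]))
se = np.sqrt(np.diag(np.linalg.inv(fim)))      # expected standard errors
```

Comparing `se` across alternative time-point sets is the design-evaluation step: more informative sampling times yield smaller expected standard errors.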

5.
Simultaneous modelling of dense and sparse pharmacokinetic data is possible with a population approach. To determine the number of individuals required to detect the effect of a covariate, simulation-based power calculation methodologies can be employed. The Monte Carlo Mapped Power method (a simulation-based power calculation methodology using the likelihood ratio test) was extended in the current study to perform sample size calculations for mixed pharmacokinetic studies (i.e. both sparse and dense data collection). A workflow guiding an easy and straightforward pharmacokinetic study design, which also considers the cost-effectiveness of alternative study designs, was used in this analysis. Data were simulated first for a hypothetical drug and then for the anti-malarial drug dihydroartemisinin. Two datasets (sampling design A: dense; sampling design B: sparse) were simulated using a pharmacokinetic model that included a binary covariate effect and subsequently re-estimated using (1) the same model and (2) a model without the covariate effect, in NONMEM 7.2. Power calculations were performed for varying numbers of patients with sampling designs A and B. Study designs with statistical power >80% were selected and further evaluated for cost-effectiveness. The simulation studies of the hypothetical drug and of dihydroartemisinin demonstrated that this Monte Carlo Mapped Power-based methodology can be used to evaluate and determine the sample size of mixed (part sparsely and part densely sampled) study designs. The developed method can contribute to the design of robust and efficient pharmacokinetic studies.
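The full-versus-reduced likelihood ratio test at the heart of such power calculations can be illustrated with a deliberately simple stand-in model: a two-group Gaussian comparison replaces the NONMEM pharmacokinetic model, and all numbers are illustrative assumptions.

```python
# Sketch of simulation-based power for a binary covariate effect using the
# likelihood ratio test between a full model (covariate in) and a reduced
# model (covariate out). A two-group Gaussian model stands in for the PK fit.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)

def lrt_power(n, effect, n_sim=400, alpha=0.05):
    crit = chi2.ppf(1 - alpha, df=1)
    hits = 0
    for _ in range(n_sim):
        y0 = rng.normal(1.0, 1.0, n // 2)            # covariate = 0
        y1 = rng.normal(1.0 + effect, 1.0, n // 2)   # covariate = 1
        y = np.concatenate([y0, y1])
        rss_full = ((y0 - y0.mean()) ** 2).sum() + ((y1 - y1.mean()) ** 2).sum()
        rss_reduced = ((y - y.mean()) ** 2).sum()
        # -2 * (logL_reduced - logL_full) for Gaussian maximum likelihood
        hits += len(y) * np.log(rss_reduced / rss_full) > crit
    return hits / n_sim
```

Running `lrt_power` over a grid of `n` values and keeping designs above 80% power mirrors the workflow described above, before the cost-effectiveness comparison.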

6.
We present a method for calculating the sample size of a pharmacokinetic study analyzed using a mixed effects model within a hypothesis-testing framework. A sample size calculation method for repeated-measurement data analyzed using generalized estimating equations has been modified for nonlinear models. The Wald test is used for hypothesis testing of pharmacokinetic parameters. A marginal model for the population pharmacokinetics is obtained by linearizing the structural model around the subject-specific random effects. The proposed method is general in that it allows unequal allocation of subjects to the groups and accounts for situations where different blood sampling schedules are required in different groups of patients. The method was assessed using Monte Carlo simulations under a range of scenarios; NONMEM was used for simulation and data analysis, and the results showed good agreement.
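The linearization step that yields the marginal model can be sketched for a one-compartment bolus model with lognormal interindividual variability on clearance; the derivative is taken at the typical value (eta = 0) in first-order fashion. All parameter values are illustrative assumptions.

```python
# Sketch: first-order (FO) linearization of a one-compartment IV bolus model
# around eta = 0, giving an approximate marginal mean and variance.
import numpy as np

def conc(t, dose, v, cl):
    """One-compartment IV bolus concentration."""
    return dose / v * np.exp(-cl / v * t)

def fo_marginal(t, dose, v, cl, omega2, sigma2):
    """Marginal mean/variance with CL_i = cl * exp(eta), eta ~ N(0, omega2)."""
    c0 = conc(t, dose, v, cl)
    dc_deta = -cl * t / v * c0            # d(conc)/d(eta) evaluated at eta = 0
    var = dc_deta ** 2 * omega2 + sigma2  # FO variance + residual variance
    return c0, var

mean0, var0 = fo_marginal(t=0.0, dose=100.0, v=20.0, cl=5.0,
                          omega2=0.09, sigma2=0.04)
```

The resulting marginal mean and variance are what a Wald-test sample size formula of the GEE type would consume at each design time point.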

7.
It is not uncommon that the outcome measurements, symptoms or side effects, of a clinical trial belong to the family of event-type data, e.g. bleeding episodes or emesis events. Event data are often low in information content, and the mixed-effects modeling software NONMEM has previously been shown to perform poorly with low-information ordered categorical data. The aim of this investigation was to assess the performance of the Laplace, stochastic approximation expectation-maximization (SAEM) and importance sampling methods when modeling repeated time-to-event data. The Laplace method was already available, whereas the latter two have recently become available in NONMEM 7. A stochastic simulation and estimation study was performed to assess the performance of the three estimation methods when applied to a repeated time-to-event model with a constant hazard associated with an exponential interindividual variability. Various conditions were investigated, ranging from rare to frequent events and from low to high interindividual variability. Method performance was assessed by parameter bias and precision. Owing to the low information content under conditions where very few events were observed, all three methods exhibited parameter bias and imprecision, most pronounced for the Laplace method. The SAEM and importance sampling methods generally performed better than Laplace when the frequency of individuals with events was below 43%; at frequencies above that, all methods performed equally well.
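The simulation side of such a study can be sketched directly: with a constant individual hazard, inter-event gaps are exponential, and lognormal interindividual variability scales the hazard per subject. All parameter values here are illustrative assumptions.

```python
# Sketch: simulate repeated time-to-event data with a constant individual
# hazard and exponential (lognormal) interindividual variability.
import numpy as np

rng = np.random.default_rng(0)

def simulate_rtte(n_sub, base_hazard, omega, follow_up):
    """Return one array of event times per subject."""
    data = []
    for _ in range(n_sub):
        lam = base_hazard * np.exp(rng.normal(0.0, omega))  # subject hazard
        t, events = 0.0, []
        while True:
            t += rng.exponential(1.0 / lam)  # constant hazard -> exp. gaps
            if t > follow_up:
                break
            events.append(t)
        data.append(np.array(events))
    return data

data = simulate_rtte(n_sub=100, base_hazard=0.5, omega=1.0, follow_up=10.0)
```

Varying `base_hazard` and `omega` reproduces the rare-to-frequent and low-to-high variability conditions the study sweeps over before re-estimation.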


9.
Estimation of population pharmacokinetics using the Gibbs sampler
Quantification of the average and interindividual variation in pharmacokinetic behavior within the patient population is an important aspect of drug development. Population pharmacokinetic models typically involve large numbers of parameters related nonlinearly to sparse, observational data, which creates difficulties for conventional methods of analysis. The nonlinear mixed-effects method implemented in the computer program NONMEM is a widely used approach to the estimation of population parameters. However, the method relies on somewhat restrictive modeling assumptions to enable efficient parameter estimation. In this paper we describe a Bayesian approach to population pharmacokinetic analysis which uses a technique known as Gibbs sampling to simulate values for each model parameter. We provide details of how to implement the method in the context of population pharmacokinetic analysis and illustrate it via an application to gentamicin population pharmacokinetics in neonates. A grant from the British Heart Foundation supported Nicola G. Best.
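The Gibbs-sampling idea can be shown on a toy normal hierarchical model, alternating between conjugate draws of the subject-level parameters and the population mean; the paper's actual application uses a nonlinear pharmacokinetic model and samples far more quantities. The variances here are assumed known purely for illustration.

```python
# Toy Gibbs sampler for a normal hierarchical model: subject means theta_i
# around a population mean mu, with known variances tau2 (between-subject)
# and sigma2 (residual). Illustrative stand-in for the population PK case.
import numpy as np

rng = np.random.default_rng(42)

true_mu, tau2, sigma2 = 2.0, 1.0, 0.25
theta_true = rng.normal(true_mu, np.sqrt(tau2), 10)        # 10 subjects
y = theta_true[:, None] + rng.normal(0.0, np.sqrt(sigma2), (10, 3))

mu, mu_draws = 0.0, []
for it in range(2000):
    # theta_i | mu, y : conjugate normal update per subject (3 obs each)
    prec = 3 / sigma2 + 1 / tau2
    mean = (y.sum(axis=1) / sigma2 + mu / tau2) / prec
    theta = rng.normal(mean, np.sqrt(1 / prec))
    # mu | theta : normal update under a flat prior
    mu = rng.normal(theta.mean(), np.sqrt(tau2 / 10))
    if it >= 500:                                          # discard burn-in
        mu_draws.append(mu)

mu_hat = float(np.mean(mu_draws))
```

The post-burn-in draws approximate the posterior of the population mean, which is the quantity NONMEM would report as a point estimate with a standard error.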

10.
The purpose of this study was to evaluate the effects of population size, number of samples per individual, and level of interindividual variability (IIV) on the accuracy and precision of pharmacodynamic (PD) parameter estimates. Response data were simulated from concentration input data for an inhibitory sigmoid drug efficacy (Emax) model using Nonlinear Mixed Effect Modeling, version 5 (NONMEM). Seven designs were investigated using different concentration sampling windows ranging from 0 to 3 EC50 units (EC50 being the drug concentration at 50% of Emax). The response data were used to estimate the PD and variability parameters in NONMEM. The accuracy and precision of the parameter estimates after 100 replications were assessed using the mean and SD of the percent prediction error, respectively. Four samples per individual were sufficient to provide accurate and precise estimates of almost all of the PD and variability parameters with 100 individuals and an IIV of 30%. Reducing the sample size resulted in imprecise estimates of the variability parameters; the PD parameter estimates, however, remained precise. At 45% IIV, designs with 5 samples per individual performed better than those with 4 samples per individual. For a moderately variable drug with a high Hill coefficient, sampling from the 0.1 to 1, 1 to 2, 2 to 2.5, and 2.5 to 3 EC50 windows is sufficient to estimate the parameters reliably in a PD study.
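The two ingredients of this evaluation, the inhibitory sigmoid Emax model and the percent prediction error summary, can be written down directly; the parameter values below are illustrative, not those of the study.

```python
# Sketch: inhibitory sigmoid Emax response and the percent prediction error
# metric used to summarise accuracy (mean) and precision (SD) of estimates.
import numpy as np

def sigmoid_emax(conc, e0, emax, ec50, hill):
    """Inhibitory sigmoid Emax model: response falls from e0 with conc."""
    return e0 * (1 - emax * conc ** hill / (ec50 ** hill + conc ** hill))

def percent_prediction_error(estimates, true_value):
    """Return (mean %PE, SD %PE) across replications."""
    pe = 100.0 * (np.asarray(estimates) - true_value) / true_value
    return pe.mean(), pe.std(ddof=1)

resp = sigmoid_emax(conc=np.array([0.0, 1.0, 100.0]),
                    e0=10.0, emax=1.0, ec50=1.0, hill=2.0)
```

At `conc = ec50` the response is halfway between baseline and maximal inhibition, which is why sampling windows are expressed in EC50 units.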

11.
Tango (Biostatistics, 2016) proposed a new repeated measures design, the S:T repeated measures design, combined with generalized linear mixed-effects models and sample size calculations for a test of the average treatment effect that depend not only on the number of subjects but also on the number of repeated measures before and after randomization per subject used for analysis. The main advantages of the proposed design combined with generalized linear mixed-effects models are that (1) it can easily handle missing data by applying likelihood-based ignorable analyses under the missing-at-random assumption, and (2) it may lead to a reduction in sample size compared with the simple pre-post design. In this article, we present formulas for calculating power and sample size for a test of the average treatment effect, allowing for missing data, within the framework of the S:T repeated measures design with a continuous response variable combined with a linear mixed-effects model. Examples illustrate the use of these formulas.

12.
Longitudinal study designs are common in many areas of scientific research, especially in the medical, social, and economic sciences. Longitudinal studies allow researchers to measure changes in each individual's responses over time and often have higher statistical power than cross-sectional studies. Choosing an appropriate sample size is a crucial step in a successful study, yet in longitudinal studies sample size determination is less well studied because of the complexity of the design, which involves choosing both the number of individuals and the number of repeated measurements. This paper uses a simulation-based method to determine the sample size from a Bayesian perspective, employing several Bayesian criteria for sample size determination, the most important of which is the Bayesian power criterion. We determine the sample size of a longitudinal study based on the scientific question of interest, through the choice of an appropriate model. Most sample size determination methods rest on a single hypothesis; in this paper, in addition to using that approach, we also determine the sample size under multiple hypotheses. The proposed Bayesian methods are illustrated and discussed through several examples.

13.
Objective: To describe a simulation-based approach for determining the sample size for population pharmacokinetic experiments. Methods: We address this problem by proposing a method based on the estimation of the model parameters. The power to estimate the confidence interval of a parameter of choice to a particular precision is determined for different sample sizes by making stepwise increases in the sample size until the desired power is achieved. The method is based on simulation using prior information about the model and parameter estimates. Two examples are presented, based on one-compartment and two-compartment first-order absorption models. Results: The sample size depends on, among other things, the parameter of choice, the sampling design and the method used to analyse the collected data. For the one-compartment first-order absorption model, with the rate of absorption as the parameter of choice, the sample sizes required to estimate the 95% confidence interval within a 20% precision level with a power of 0.9 using the FO, FOCE and FOCE/INTERACTION methods in NONMEM and using WinBUGS, for a design with sampling at 0.01, 7.75 and 12 h (the population optimal design), were 20, 30, 30 and 30, respectively. For an extensive design (sampling at 0.5, 2, 4, 8, 12 and 24 h), the sample sizes were 20, 20, 20 and 30, respectively. For the two-compartment first-order absorption model, with the initial volume of distribution as the parameter of choice, the sample sizes required to estimate the 95% confidence interval within a 50% precision level with a power of 0.8 for FO, FOCE/INTERACTION and WinBUGS were 50, 50 and 20, respectively. Conclusion: Determining the sample size via the confidence interval approach appears to be a pragmatic way to establish the minimum number of subjects for a population pharmacokinetic experiment.
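The stepwise search this abstract describes can be sketched with a drastically simplified stand-in: instead of a full NONMEM fit, each simulated "study" yields lognormal estimates of an absorption-rate parameter, and power is the fraction of studies whose 95% CI half-width meets the relative precision target. The distribution, CV and step size are illustrative assumptions.

```python
# Sketch of the stepwise confidence-interval approach: increase n until the
# fraction of simulated studies with a sufficiently narrow 95% CI (relative
# half-width <= precision) reaches the desired power.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def ci_power(n, cv=0.5, precision=0.2, n_sim=400):
    ok = 0
    for _ in range(n_sim):
        ka = rng.lognormal(0.0, cv, n)             # simulated ka "estimates"
        half = stats.t.ppf(0.975, n - 1) * ka.std(ddof=1) / np.sqrt(n)
        ok += (half / ka.mean()) <= precision      # precise enough?
    return ok / n_sim

n = 10
while ci_power(n) < 0.9:   # stepwise increase until 90% power is reached
    n += 5
```

The final `n` is the minimum sample size (on this grid) achieving the precision target with 90% power, mirroring the paper's procedure.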

14.
In this paper, the two non-linear mixed-effects programs NONMEM and NLME were compared for use in population pharmacokinetic/pharmacodynamic (PK/PD) modelling. We describe the first-order conditional estimation (FOCE) method as implemented in NONMEM and the alternating algorithm in NLME proposed by Lindstrom and Bates. The two programs were tested using clinical PK/PD data for degarelix, a new gonadotropin-releasing hormone (GnRH) antagonist currently being developed for prostate cancer treatment. The pharmacokinetics of intravenously administered degarelix was analysed using a three-compartment model, while the pharmacodynamics was analysed using a turnover model with a pool compartment. The results indicated that the two algorithms produce consistent parameter estimates. The bias and precision of the two algorithms were further investigated using a parametric bootstrap procedure, which showed that for this specific study NONMEM produced more accurate results than NLME used together with the nlmeODE package.

15.
There has recently been concern about confidence intervals calculated using the standard error of parameter estimates from NONMEM, a computer program that uses a non-linear mixed-effects model to calculate relative bioavailability (F), because of possible downward bias of these estimates. In this study an alternative approach, the log-likelihood procedure, was used to calculate the confidence intervals for F from NONMEM. These were then compared with intervals calculated using the standard error of the parameter estimates (the traditional NONMEM approach) and with the standard model-independent method, to determine whether bias exists. Using data from a single-dose, open cross-over study of ibuprofen in 14 healthy male volunteers, NONMEM gave results consistent with those of the standard model-independent method of analysis and could be a useful tool for determining F where conditions for the standard method are not optimal. The confidence interval for F from the log-likelihood procedure was narrower and non-symmetrical compared with that from the traditional NONMEM approach. The width of the confidence interval from the traditional NONMEM method was similar to that from the standard approach; however, the parameter estimate for F was higher than that from the standard method, possibly because of an outlier in the data set, to which the standard approach is more sensitive. No downward bias was found in the confidence intervals from NONMEM. The bioavailability data set was of relatively low variability, and more research with highly variable data is needed before it can be concluded that confidence intervals calculated from NONMEM can be used for hypothesis testing.
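The log-likelihood (profile likelihood) interval idea can be shown on a minimal Gaussian example: the 95% CI is the set of parameter values whose profiled −2 log-likelihood lies within χ²(1, 0.95) ≈ 3.84 of its minimum. Gaussian data stand in for the NONMEM bioavailability model here, and the data are simulated for illustration.

```python
# Sketch of a profile-likelihood 95% CI for a mean, with sigma^2 profiled
# out at its MLE. The interval bounds are found by root-finding on the
# profiled -2 log-likelihood (up to an additive constant).
import numpy as np
from scipy.stats import chi2
from scipy.optimize import brentq

rng = np.random.default_rng(11)
x = rng.normal(2.0, 1.0, 25)

def prof_m2ll(mu):
    """Profiled -2 logL: n * log(sigma_hat^2(mu)), constant terms dropped."""
    return len(x) * np.log(np.mean((x - mu) ** 2))

mle = x.mean()
target = prof_m2ll(mle) + chi2.ppf(0.95, df=1)   # cutoff 3.84 above minimum
lo = brentq(lambda m: prof_m2ll(m) - target, mle - 5.0, mle)
hi = brentq(lambda m: prof_m2ll(m) - target, mle, mle + 5.0)
```

Unlike the symmetric Wald (standard-error) interval, the profile interval need not be symmetric around the estimate, which matches the non-symmetry the abstract reports.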

16.
Procedures for determining sample size in multivariate repeated measures experiments are discussed. The focus is on designs with either one group or two independent groups of subjects, the point of departure being multivariate repeated measures. Determination of the minimum sample size required is based on power considerations associated with Hotelling's T² and the F-test. We first consider procedures for determining sample size assuming an arbitrary covariance matrix; the special case in which the transformed data are assumed to have a multivariate spherical covariance matrix is also considered. Tables of the minimum sample sizes required for several hypotheses from multivariate analysis are presented.
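A minimum-sample-size search of the kind tabulated in such work can be sketched for the one-group case via the noncentral F distribution; `mahalanobis_sq`, the squared standardized effect δ′Σ⁻¹δ, is an assumed design input, and the values are illustrative.

```python
# Sketch: minimum n for a one-sample Hotelling's T^2 test. Under the
# alternative, T^2 * (n - p) / ((n - 1) * p) follows a noncentral
# F(p, n - p) distribution with noncentrality n * mahalanobis_sq.
from scipy.stats import f as f_dist, ncf

def hotelling_min_n(p, mahalanobis_sq, alpha=0.05, power=0.80):
    n = p + 2                                  # need n > p for the F test
    while True:
        fcrit = f_dist.ppf(1 - alpha, p, n - p)
        lam = n * mahalanobis_sq               # noncentrality parameter
        if ncf.sf(fcrit, p, n - p, lam) >= power:
            return n
        n += 1

n_min = hotelling_min_n(p=3, mahalanobis_sq=0.5)
```

Repeating this over effect sizes and numbers of variables reproduces the structure of the minimum-sample-size tables the paper presents.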

17.
Flexible sample size designs based on interim efficacy results can ensure adequate power by adjusting the sample size, potentially saving time and resources. However, such adjustments can inflate the Type I error. We use a unified approach to quantify the Type I error rate and to adjust the stopping boundary accordingly so that the overall Type I error is maintained. This unified approach can be applied to normal, survival, and binary endpoints. Several aspects of sample size adjustment are considered based on information time. The Type I error inflation can be well controlled by giving up some unrealistic power. Simulations show that the proposed method works well for survival and binary endpoints.
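The inflation that motivates the boundary adjustment can be demonstrated with a quick simulation under H0: re-sizing the trial from unblinded interim data while keeping the naive final z-test pushes the rejection rate above the nominal 5%. The interim rule and stage sizes here are illustrative assumptions, not the paper's design.

```python
# Sketch: Type I error of the naive final z-test when the second-stage
# sample size depends on the interim result (H0 true throughout).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

def naive_type1(n1=50, n2=50, n_sim=4000):
    crit = norm.ppf(0.975)
    rejections = 0
    for _ in range(n_sim):
        x1 = rng.normal(0.0, 1.0, n1)          # stage 1 data, H0 true
        z1 = x1.mean() * np.sqrt(n1)
        extra = n2 if abs(z1) < 1.0 else 0     # enlarge "promising" trials
        x = np.concatenate([x1, rng.normal(0.0, 1.0, extra)])
        rejections += abs(x.mean() * np.sqrt(len(x))) > crit
    return rejections / n_sim

t1 = naive_type1()   # tends to exceed the nominal 0.05
```

A boundary-adjustment method of the kind the abstract describes would raise the final critical value until such a simulated rate returns to 5%.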

18.
Assuming a linear growth curve model under a suitable link function, we compute the sample size for comparing two treatment groups when the repeated measurements marginally follow exponential family distributions. From the treatment profiles under the chosen link function, we compute the common intercept β0 and the regression slopes β1 and β2 to define δ = β1 − β2, the difference to be detected under a specified alternative hypothesis. The dispersion matrices of the generalized estimating equations estimators are obtained under the null and alternative hypotheses using a suitable working correlation matrix. We compute the sample size assuming that the estimate of δ is asymptotically normal. Details are worked out for repeated measures designs with binary and count data, along with numerical examples.
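A simplified instance of this GEE-style calculation, comparing two proportions measured m times per subject under an exchangeable working correlation, can be sketched with the standard design-effect factor for the mean of correlated binary measurements; the inputs are illustrative assumptions rather than values from the paper.

```python
# Sketch: per-group sample size for comparing two proportions measured m
# times per subject, exchangeable working correlation rho, using the
# design-effect factor (1 + (m - 1) * rho) / m for the subject-level mean.
from math import ceil
from scipy.stats import norm

def gee_binary_n(p1, p2, m, rho, alpha=0.05, power=0.90):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    deff = (1 + (m - 1) * rho) / m       # repeated-measures design effect
    return ceil(z ** 2 * var * deff / (p1 - p2) ** 2)

n_per_group = gee_binary_n(p1=0.3, p2=0.5, m=4, rho=0.3)
```

At `rho = 1` the repeats carry no extra information and the formula reduces to the single-measurement two-proportion sample size; smaller `rho` yields smaller n.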

19.
Efficient power calculation methods have previously been suggested for Wald test-based inference in mixed-effects models, but the only available alternative for likelihood ratio test-based hypothesis testing has been to perform computer-intensive multiple simulations and re-estimations. The proposed Monte Carlo Mapped Power (MCMP) method is based on the difference in individual objective function values (ΔiOFV) derived from a large dataset simulated from a full model and subsequently re-estimated with the full and reduced models. The ΔiOFV values are sampled and summed (∑ΔiOFV) for each study replicate at each sample size of interest, and the percentage of ∑ΔiOFV values greater than the significance criterion is taken as the power. The power versus sample size relationship established via the MCMP method was compared to a traditional assessment of model-based power for six different pharmacokinetic and pharmacodynamic models and designs. In each case, 1,000 simulated datasets were analysed with the full and reduced models. There was concordance in power between the traditional and MCMP methods: for 90% power, the difference in required sample size was in most investigated cases less than 10%. The MCMP method provided relevant power information for a representative pharmacometric model at less than 1% of the run-time of a stochastic simulation and estimation (SSE) study. The suggested MCMP method provides a fast and accurate prediction of the power and sample size relationship.
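The MCMP mapping step described above can be sketched once a ΔiOFV pool exists: resample n individual values per study replicate, sum them, and compare to the LRT criterion. Here the pool is drawn from an assumed normal distribution purely for illustration, rather than from a NONMEM full/reduced re-estimation.

```python
# Sketch of the Monte Carlo Mapped Power mapping step: sample and sum
# per-individual delta-OFV values at each candidate sample size and take
# the fraction of sums exceeding the LRT criterion as the power.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
d_iofv = rng.normal(0.8, 3.0, 5000)   # stand-in delta-iOFV pool (assumed)
crit = chi2.ppf(0.95, df=1)           # ~3.84 for one extra parameter

def mcmp_power(n, n_rep=1000):
    sums = rng.choice(d_iofv, size=(n_rep, n), replace=True).sum(axis=1)
    return float((sums > crit).mean())
```

Because only cheap resampling is repeated per sample size, the full power-versus-n curve costs a tiny fraction of simulating and re-estimating 1,000 datasets at every n, which is the speed advantage the abstract quantifies.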

20.
A rational sampling design is an important foundation for building a reliable population pharmacokinetic model. Population pharmacokinetic analysis using nonlinear mixed-effects models is an effective way to estimate population pharmacokinetic parameters from sparse blood sampling data. In this study, sampling schemes were designed using D-optimal design and a Bayesian approach. Based on a previously reported population pharmacokinetic model of amlodipine, several sampling schemes were drawn up according to the objectives of the clinical study, the dosing regimen and the follow-up schedule, and were optimised using the WinPOPT software. For each optimised sampling scheme, a Monte Carlo approach was used to build a NONMEM dataset of 400 patients, amlodipine concentration data were simulated with NONMEM 7.2, the key pharmacokinetic parameters (CL/F, V/F and Ka) were estimated, and the mean prediction error (MPE) and mean absolute prediction error (MAPE) were calculated. Among the six sampling schemes, schemes 6 and 3 gave the best accuracy and precision for the estimation of CL/F, with MPE values of 0.1% and 0.6%, respectively, and MAPE values of 0.7% for both. For the estimation of V, there was no clear difference between the sampling schemes. Scheme 3, which gave good accuracy and precision for the fitted pharmacokinetic parameters with fewer sampling points, was therefore selected as the optimal clinical study design. This study aims to provide a scientific and effective sampling strategy for a population pharmacokinetic study of amlodipine in hypertensive patients with renal impairment, and a scientific approach to optimising clinical study designs for population PK/PD research.

