Similar Literature
20 similar records retrieved.
1.
Within the pharmaceutical industry it is common practice to transfer analytical methods from one laboratory to other laboratories. An experiment or interlaboratory study is performed to estimate the repeatability, the intermediate precision, and the reproducibility of the analytical method. These measures of precision are quantified by appropriate sums of variance components from an analysis of variance model describing the structure of the data. In the literature, several methods have been described for calculating approximate (closed-form) confidence intervals on sums of variance components, i.e., Welch, Satterthwaite, and modified large-sample (MLS). Comparisons between these methods have been performed for one-way and two-way classification analysis of variance models only. Interlaboratory studies, however, often need higher-order classifications. Therefore, these methods for constructing confidence intervals are compared on the measures of precision from a specific three-way classification analysis of variance model that is frequently used for method transfer studies. Using a simulation study, the coverage probability of these methods is evaluated in situations where variance components may be estimated negatively with the standard moment estimates and where either the standard moment estimates are adjusted to zero or remain unadjusted. The MLS method is superior to the other two methods when the standard moment estimates are used. If the adjusted moment estimates are used, then the method of Satterthwaite performs similarly to the MLS method for many settings of the variance components and sample sizes, but much better for some particular settings. The method of Satterthwaite performs better than the method of Welch for all the selected settings of variance components and sample sizes, irrespective of whether the standard or adjusted moment estimates are used.
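
As a minimal illustration of the Satterthwaite construction compared above, the following Python sketch (assuming NumPy and SciPy) computes an approximate confidence interval for a sum of variance components written as a linear combination of ANOVA mean squares; the mean squares, coefficients, and degrees of freedom are hypothetical placeholders, not values from the study.

```python
# Satterthwaite approximate CI for theta = sum_i c_i * MS_i, a sum of variance
# components expressed as a non-negative linear combination of ANOVA mean
# squares (as arises for reproducibility in a balanced nested design).
# Illustrative sketch only; all numbers below are made-up placeholders.
import numpy as np
from scipy.stats import chi2

def satterthwaite_ci(ms, coef, df, alpha=0.05):
    """Approximate 100*(1 - alpha)% CI for sum(coef * ms), assuming the estimate is positive."""
    ms, coef, df = map(np.asarray, (ms, coef, df))
    theta_hat = float(np.sum(coef * ms))
    # Satterthwaite's effective degrees of freedom.
    nu = theta_hat**2 / np.sum((coef * ms) ** 2 / df)
    lower = nu * theta_hat / chi2.ppf(1 - alpha / 2, nu)
    upper = nu * theta_hat / chi2.ppf(alpha / 2, nu)
    return theta_hat, (lower, upper)

# Hypothetical mean squares for laboratory, run-within-laboratory, and error.
est, ci = satterthwaite_ci(ms=[4.2, 1.5, 0.8], coef=[1 / 6, 1 / 3, 1.0], df=[2, 9, 24])
print(est, ci)
```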

2.
Within the pharmaceutical industry it is common practice to transfer analytical methods from one laboratory to other laboratories. An experiment or interlaboratory study is performed to estimate the repeatability, the intermediate precision, and the reproducibility of the analytical method. These measures of precision are quantified by appropriate sums of variance components from an analysis of variance model describing the structure of the data. In the literature, several methods have been described for calculating approximate (closed-form) confidence intervals on sums of variance components, i.e., Welch, Satterthwaite, and modified large-sample (MLS). Comparisons between these methods have been performed for one-way and two-way classification analysis of variance models only. Interlaboratory studies, however, often need higher-order classifications. Therefore, these methods for constructing confidence intervals are compared on the measures of precision from a specific three-way classification analysis of variance model that is frequently used for method transfer studies. Using a simulation study, the coverage probability of these methods is evaluated in situations where variance components may be estimated negatively with the standard moment estimates and where either the standard moment estimates are adjusted to zero or remain unadjusted. The MLS method is superior to the other two methods when the standard moment estimates are used. If the adjusted moment estimates are used, then the method of Satterthwaite performs similarly to the MLS method for many settings of the variance components and sample sizes, but much better for some particular settings. The method of Satterthwaite performs better than the method of Welch for all the selected settings of variance components and sample sizes, irrespective of whether the standard or adjusted moment estimates are used.

3.
A procedure for constructing two-sided beta-content, gamma-confidence tolerance intervals is proposed for general random effects models, in both balanced and unbalanced data scenarios. The proposed intervals are based on the concept of effective sample size and modified large sample methods for constructing confidence bounds on functions of variance components. The performance of the proposed intervals is evaluated via simulation techniques. The results indicate that the proposed intervals generally maintain the nominal confidence and content levels. Application of the proposed procedure is illustrated with a one-fold nested design used to evaluate the performance of a quantitative bioanalytical method.
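
For orientation, a minimal Python sketch (assuming NumPy and SciPy) of a two-sided beta-content, gamma-confidence tolerance interval for a single normal sample using Howe's approximate tolerance factor; this is a simpler i.i.d. baseline than the random-effects, effective-sample-size procedure proposed in the paper, and the data are simulated placeholders.

```python
# Two-sided beta-content, gamma-confidence tolerance interval for one normal
# sample via Howe's approximate tolerance factor. A simple i.i.d. baseline,
# not the random-effects procedure described in the abstract.
import numpy as np
from scipy.stats import norm, chi2

def normal_tolerance_interval(x, content=0.90, confidence=0.95):
    x = np.asarray(x, dtype=float)
    n, df = x.size, x.size - 1
    z = norm.ppf((1 + content) / 2)
    # Howe's approximation: k grows as the content or confidence level grows.
    k = z * np.sqrt(df * (1 + 1 / n) / chi2.ppf(1 - confidence, df))
    m, s = x.mean(), x.std(ddof=1)
    return m - k * s, m + k * s

rng = np.random.default_rng(0)
print(normal_tolerance_interval(rng.normal(100.0, 5.0, size=30)))
```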

4.
Discussion     
We consider a multicenter clinical trial with treatments as fixed effects and centers and residuals as bivariate and univariate random effects, respectively. There exist situations where it is difficult to justify the conventional normality assumptions for the random components. Following Khatri and Patel (Commun. Stat.—Theory Methods 1992, 21, 21–39), we propose the weighted least-squares (WLS) method and two bootstrap methods, percentile and BCa, that are robust to departures from normality. Through a simulation study, we compare WLS and bootstrap confidence intervals for the treatment difference. While all three methods give confidence intervals with the desired coverage rates, the WLS method gives shorter intervals. We also propose a bootstrap method that is robust to outliers. A numerical example is given to illustrate the methodology.
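
A minimal Python sketch (assuming NumPy) of the percentile bootstrap interval for a treatment difference (difference of two group means); the BCa refinement and the WLS and outlier-robust methods of the paper are not reproduced, and the data are simulated placeholders.

```python
# Percentile bootstrap confidence interval for a treatment difference
# (difference of group means). BCa would additionally adjust for bias and
# acceleration; only the plain percentile interval is sketched here.
import numpy as np

def percentile_bootstrap_ci(x, y, n_boot=5000, alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        diffs[b] = (rng.choice(x, x.size, replace=True).mean()
                    - rng.choice(y, y.size, replace=True).mean())
    return tuple(np.quantile(diffs, [alpha / 2, 1 - alpha / 2]))

rng = np.random.default_rng(0)
treated = rng.normal(10.5, 2.0, size=40)  # placeholder data
control = rng.normal(9.8, 2.0, size=40)
print(percentile_bootstrap_ci(treated, control))
```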

5.
It has previously been shown that the extended least squares (ELS) method for fitting pharmacokinetic models behaves better than other methods when there is possible heteroscedasticity (unequal error variance) in the data. In simulations with several pharmacokinetic and error-variance models, confidence intervals for pharmacokinetic parameters at the target confidence level of 95%, computed using a theoretically reasonable approximation to the asymptotic covariance matrix of the ELS parameter estimator, are found to include the true parameter values considerably less than 95% of the time. Intervals from the ordinary least squares method perform better. Two adjustments to the ELS confidence intervals, taken together, result in better performance: (i) apply a bias correction to the ELS estimate of variance, which results in wider confidence intervals, and (ii) use confidence intervals with a target level of 99% to obtain confidence intervals with an actual level closer to 95%. Kineticists using the ELS method may wish to apply these adjustments.

6.
We address the noninferiority assessment problem defined in terms of the ratio of population means in a parallel group design analysis of variance setting. The sample ratio is considered as a point estimate of the corresponding population ratio. It is shown that the Fieller–Hinkley distribution of the ratio of two correlated normally distributed random variables readily provides a technique for constructing confidence intervals comparable to the bootstrap percentile and Fieller's confidence intervals. A level-α test of an inferiority hypothesis formulated in terms of a fixed margin, based on a finite parameter space, has been derived. We illustrate our approach using the forced vital capacity (FVC) data. We claim that our bootstrap-equivalent confidence intervals used to assess noninferiority are easy to construct and straightforward to interpret. We discuss appropriate methods for calculation of sample sizes.
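
For the ratio-of-means setting above, a minimal Python sketch (assuming NumPy and SciPy) of the classical Fieller interval for the ratio of two independent group means; the Fieller–Hinkley construction used in the paper also accommodates correlated estimates, and the data below are simulated placeholders.

```python
# Fieller confidence interval for the ratio of two means, mu_x / mu_y,
# assuming independent groups (zero covariance between the two sample means).
import numpy as np
from scipy.stats import t

def fieller_ratio_ci(x, y, alpha=0.05):
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(ddof=1) / x.size, y.var(ddof=1) / y.size  # variances of the means
    tq = t.ppf(1 - alpha / 2, x.size + y.size - 2)
    # The interval is {r : (mx - r*my)^2 <= tq^2 * (vx + r^2 * vy)}, i.e. the
    # roots of a*r^2 + b*r + c = 0 with:
    a = my**2 - tq**2 * vy
    b = -2 * mx * my
    c = mx**2 - tq**2 * vx
    disc = b**2 - 4 * a * c
    if a <= 0 or disc < 0:
        return None  # unbounded or empty set: denominator mean too close to zero
    lo = (-b - np.sqrt(disc)) / (2 * a)
    hi = (-b + np.sqrt(disc)) / (2 * a)
    return lo, hi

rng = np.random.default_rng(0)
print(fieller_ratio_ci(rng.normal(5.0, 1.0, size=30), rng.normal(4.0, 1.0, size=30)))
```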

7.
Abstract

Single-response population (one sample per animal) simulation studies were carried out (assuming a one-compartment model) to investigate the influence of inter-animal variability in clearance (σCl) and volume (σV) on the estimation of population pharmacokinetic parameters. NONMEM was used for parameter estimation. Individual and joint confidence interval coverage for the parameter estimates was computed to reveal the influence of bias and standard error (SE) on interval estimates. The coverage of interval estimates, percent prediction error, and correlation analysis were used to judge the efficiency of parameter estimation. The efficiency of estimation of Cl and V was good, on average, irrespective of the values of σCl and σV. Estimates of σCl and σV were biased and imprecise. Small biases and high precision resulted in good confidence interval coverage for Cl and V. The SE was the major determinant of confidence interval coverage for the random-effect parameters σCl and σV, and of the joint confidence interval coverage for all parameter estimates. The usual confidence intervals computed may give an erroneous impression of the precision with which the random-effect parameters are estimated because of the large standard errors associated with these parameters. A conservative approach to data interpretation is required when the biases associated with σCl and σV are large.

8.
We address the noninferiority assessment problem defined in terms of the ratio of population means in a parallel group design analysis of variance setting. The sample ratio is considered as a point estimate of the corresponding population ratio. It is shown that the Fieller–Hinkley distribution of the ratio of two correlated normally distributed random variables readily provides a technique for constructing confidence intervals comparable to the bootstrap percentile and Fieller's confidence intervals. A level-α test of an inferiority hypothesis formulated in terms of a fixed margin, based on a finite parameter space, has been derived. We illustrate our approach using the forced vital capacity (FVC) data. We claim that our bootstrap-equivalent confidence intervals used to assess noninferiority are easy to construct and straightforward to interpret. We discuss appropriate methods for calculation of sample sizes.

9.
Abstract

We discuss using prediction as a flexible and practical approach for monitoring futility in clinical trials with two co-primary endpoints (CPE). This approach is appealing in that it provides a quantitative evaluation of potential effect sizes and the associated precision, and can be combined with flexible error-spending strategies. We extend prediction of effect size estimates and the construction of predicted intervals to the two-CPE case, and illustrate interim futility monitoring of treatment effects using prediction with an example. We also discuss alternative approaches based on the conditional and predictive powers, compare these methods, and provide some guidance on the use of prediction for better decision making in clinical trials with CPE.

10.
Many questions in biomedical research can be addressed effectively with simultaneous confidence intervals for multiple contrasts. While procedures for normal outcome data are readily available, there is still a need for developing practical methods for binary outcomes. In this article, we construct simultaneous confidence intervals for multiple contrasts of binomial proportions using the two-step method of variance estimates recovery (Zou and Donner 2008; Zou 2008; Zou et al. 2009a). First, we obtain confidence limits about single proportions using critical values from the multivariate normal distribution that account for correlations among contrasts. Second, we set confidence limits for these contrasts using variance estimates recovered from the limits. Simulation results show this approach performs well in small to moderate sample sizes when either the Wilson or Jeffreys method is used for constructing confidence limits about a single proportion. We illustrate the procedure with examples.
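
A minimal Python sketch (assuming NumPy and SciPy) of the two-step MOVER construction for a single difference of two binomial proportions with Wilson limits; in the simultaneous setting of the paper the critical value z would be taken from the multivariate normal distribution rather than the ordinary normal quantile used as the default here, and the counts are made-up placeholders.

```python
# MOVER (method of variance estimates recovery) interval for p1 - p2, with
# Wilson limits for each single proportion. For simultaneous contrasts the
# critical value z would come from a multivariate normal distribution.
import numpy as np
from scipy.stats import norm

def wilson_ci(x, n, z):
    p = x / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half

def mover_diff_ci(x1, n1, x2, n2, z=norm.ppf(0.975)):
    p1, p2 = x1 / n1, x2 / n2
    l1, u1 = wilson_ci(x1, n1, z)
    l2, u2 = wilson_ci(x2, n2, z)
    lower = (p1 - p2) - np.sqrt((p1 - l1) ** 2 + (u2 - p2) ** 2)
    upper = (p1 - p2) + np.sqrt((u1 - p1) ** 2 + (p2 - l2) ** 2)
    return lower, upper

print(mover_diff_ci(18, 40, 10, 42))  # made-up counts
```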

11.
Population pharmacokinetic (PPK) analysis usually employs nonlinear mixed effects models using first-order linearization methods. It is well known that linearization methods do not always perform well in actual situations. To avoid linearization, the Monte Carlo integration method has been proposed. Moreover, asymptotic confidence intervals for PPK parameters based on the Fisher information are generally used, although likelihood-based confidence intervals are known to be more accurate than the usual asymptotic confidence intervals. We propose profile likelihood-based confidence intervals using Monte Carlo integration. We have evaluated the performance of the proposed method through a simulation study and have analyzed the erythropoietin concentration data set with the proposed method.
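
The inversion step behind a profile likelihood interval can be sketched on a toy model; the following Python code (assuming NumPy and SciPy) profiles the variance out of a normal likelihood and inverts the likelihood-ratio statistic for the mean, whereas the paper applies the same idea to a nonlinear mixed-effects likelihood evaluated by Monte Carlo integration.

```python
# Profile-likelihood confidence interval by inverting the likelihood-ratio
# statistic, illustrated on a toy model (normal mean, variance profiled out).
import numpy as np
from scipy.stats import chi2
from scipy.optimize import brentq

def profile_ci_normal_mean(x, alpha=0.05):
    x = np.asarray(x, float)
    n = x.size

    def profile_loglik(mu):
        s2 = np.mean((x - mu) ** 2)  # sigma^2 maximised for fixed mu
        return -0.5 * n * (np.log(s2) + 1)

    mu_hat = x.mean()
    cutoff = profile_loglik(mu_hat) - 0.5 * chi2.ppf(1 - alpha, 1)
    g = lambda mu: profile_loglik(mu) - cutoff
    step = 10 * x.std(ddof=1) / np.sqrt(n)  # bracket wide enough to contain the roots
    return brentq(g, mu_hat - step, mu_hat), brentq(g, mu_hat, mu_hat + step)

rng = np.random.default_rng(0)
print(profile_ci_normal_mean(rng.normal(2.0, 1.0, size=25)))
```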

12.
Population pharmacokinetic (PPK) analysis usually employs nonlinear mixed effects models using first-order linearization methods. It is well known that linearization methods do not always perform well in actual situations. To avoid linearization, the Monte Carlo integration method has been proposed. Moreover, asymptotic confidence intervals for PPK parameters based on the Fisher information are generally used, although likelihood-based confidence intervals are known to be more accurate than the usual asymptotic confidence intervals. We propose profile likelihood-based confidence intervals using Monte Carlo integration. We have evaluated the performance of the proposed method through a simulation study and have analyzed the erythropoietin concentration data set with the proposed method.

13.
This paper investigates point estimation and confidence intervals for the treatment efficacy parameter and related secondary parameters in a two-stage adaptive trial. Based on the minimal sufficient statistics, several alternative estimators to the sample averages are proposed to reduce the bias and to improve the precision of estimation. Confidence intervals are constructed using Woodroofe's pivot method. Numerical studies are conducted to evaluate the bias and mean squared error of the estimators and the coverage probability of the confidence intervals.

14.
The classical definitions of the method detection limit (MDL) and limit of quantitation (LOQ) are redefined using calibration curve regression analysis. Traditional ways of determining these parameters provide scant information on the daily precision of the analysis at all concentration levels; the parameters are time-consuming to acquire and hold only for the moment of acquisition. It is illustrated with experimental data, using isotope dilution analyses of THC-COOH, a marijuana metabolite, that not only can the MDL and LOQ be obtained directly from the calibration curve, but confidence intervals for the calibration curve can also be obtained, which define the precision of the analyses at all calibrated levels. If calibration and analyses of unknowns take place simultaneously, then an MDL and confidence intervals that are relevant to the data acquisition are obtained. Confidence intervals at particular analyte concentrations of interest were of greater value than the MDL and LOQ in evaluating the analytical method. The parameter derived from the calibration curve is defined as the calibrated quantitation limit.
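
A minimal Python sketch (assuming NumPy and SciPy) of a calibration-curve fit with pointwise confidence bands, together with the common ICH-style 3.3*s/slope and 10*s/slope conventions; the data are placeholders, and the paper's calibrated quantitation limit is defined from the curve itself rather than by these conventions.

```python
# Ordinary least-squares calibration line with pointwise 95% confidence bands,
# plus the common 3.3*s/slope and 10*s/slope conventions for detection and
# quantitation limits. Placeholder data; not the paper's exact definition of
# the calibrated quantitation limit.
import numpy as np
from scipy.stats import t, linregress

conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])  # calibration levels (placeholder)
resp = np.array([0.9, 2.1, 5.2, 10.3, 19.8, 40.5])      # instrument responses (placeholder)

fit = linregress(conc, resp)
n = conc.size
pred = fit.intercept + fit.slope * conc
s = np.sqrt(np.sum((resp - pred) ** 2) / (n - 2))        # residual standard deviation

# Pointwise confidence band for the mean response at each calibration level.
tq = t.ppf(0.975, n - 2)
se_mean = s * np.sqrt(1 / n + (conc - conc.mean()) ** 2 / np.sum((conc - conc.mean()) ** 2))
band = np.column_stack([pred - tq * se_mean, pred + tq * se_mean])

mdl = 3.3 * s / fit.slope   # ICH-style detection limit (convention only)
loq = 10.0 * s / fit.slope  # ICH-style quantitation limit (convention only)
print(band, mdl, loq)
```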

15.
According to the ICH E9 recommendation, the evaluation of randomized dose-finding trials focuses on the graphical presentation of different kinds of simultaneous confidence intervals: i) superiority of at least one dose vs. placebo, with and without the assumption of order restriction; ii) noninferiority of at least one dose vs. active control; iii) identification of the minimum effective dose; iv) identification of the peak dose; v) identification of the maximum safe dose for a safety endpoint; and vi) estimation of simultaneous confidence intervals for “many-to-one-by-condition interaction contrasts.” Moreover, global tests for a monotone trend or a trend with a possible downturn effect are discussed. The basic approach involves obtaining multiple contrasts for different problem-related contrast definitions. For all approaches, definitions of relevance margins for superiority or noninferiority are needed. Because consensus on margins exists only for selected therapeutic areas and the definition of absolute thresholds may be difficult, simultaneous confidence intervals for the ratio to placebo are also used. All approaches are demonstrated in an example-based manner using the R packages multcomp (for hypotheses based on differences) and mratios (for hypotheses based on ratios).

16.
ABSTRACT

In the analytical similarity assessment of a biosimilar product, key quality attributes of the test and reference products need to be shown to be statistically similar. When there are multiple reference products, similarity among the references is also required. We propose a simultaneous confidence approach based on fiducial inference theory as an alternative to the pairwise comparison method. Three versions, with two types of simultaneous confidence intervals for each version, are proposed based on different assumptions about the population variance. We conducted extensive simulation studies to compare the performance of the proposed method with that of the pairwise method, and provide examples to illustrate the concerns with using the pairwise method.

17.
Simultaneous confidence intervals for differences or ratios to control are described for both the proof of hazard and the proof of safety in the typical toxicology design, which includes several doses and a control. For most endpoints the direction of harmfulness is known a priori; therefore, one-sided confidence intervals for Gaussian-distributed endpoints, proportions, and poly-k-adjusted tumor rates are used. Special packages in the software R are provided to estimate these confidence intervals. Real data examples are given to demonstrate the estimation of the confidence intervals and their interpretation for selected toxicological studies.

18.
Abstract

The article reviews 63 studies and uses statistical procedures to combine, synthesize, and integrate information about pharmacokinetic data derived from the use of vancomycin in 2150 patients during the last 20 years. Many drugs, in particular vancomycin, are excreted primarily by the kidneys, and their pharmacokinetics depend mainly on renal function. Vancomycin pharmacokinetic values from the individual studies were collected and grouped based on renal function and dialysis methods. Estimates of the important population pharmacokinetic parameters of vancomycin (half-life, clearance, and volume of distribution), together with measures of statistical accuracy such as the standard error and confidence interval, were calculated for the adjustment of individual drug dosages. Tests of the statistical homogeneity of the mean values from the individual studies were performed. The random-effects model was given preference over the fixed-effects model in the presence of heterogeneity, to take the between-trial component of variance into account. Maximum likelihood estimation methods were used instead of moment estimators to obtain the between-study variation, the overall estimate of the pharmacokinetic quantity of interest, and the measures of statistical accuracy.
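
A minimal Python sketch (assuming NumPy and SciPy) of random-effects pooling with the between-study variance estimated by maximum likelihood, in the spirit of the meta-analytic approach described above; the study-level values are made-up placeholders, not the vancomycin data.

```python
# Random-effects pooling of a study-level pharmacokinetic parameter, with the
# between-study variance tau^2 estimated by maximum likelihood (profiling out
# the overall mean) rather than by a moment estimator.
import numpy as np
from scipy.optimize import minimize_scalar

y = np.array([5.8, 6.4, 5.1, 7.0, 6.2])       # placeholder study estimates (e.g. half-life, h)
v = np.array([0.30, 0.25, 0.40, 0.50, 0.20])  # placeholder within-study variances

def neg_loglik(tau2):
    w = 1.0 / (v + tau2)
    mu = np.sum(w * y) / np.sum(w)  # ML value of the overall mean for this tau^2
    return 0.5 * np.sum(np.log(v + tau2) + w * (y - mu) ** 2)

tau2_hat = minimize_scalar(neg_loglik, bounds=(0.0, 10.0), method="bounded").x
w = 1.0 / (v + tau2_hat)
mu_hat = np.sum(w * y) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
print(tau2_hat, mu_hat, (mu_hat - 1.96 * se, mu_hat + 1.96 * se))
```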

19.
Abstract

The treatment effect of a therapeutic product on a binary endpoint is often expressed as the difference in proportions of subjects with the outcome of interest between the treated and control groups of a clinical trial. Analysis of the proportional difference and construction of the associated confidence interval (CI) are often complicated because the baseline covariate(s) are associated with the primary endpoint. Analysis adjusting for such baseline covariate(s) generally improves the efficiency of hypothesis testing and the precision of treatment effect estimation, and avoids possible bias caused by baseline covariate imbalances. Most of the existing literature focuses on constructing CIs that are unadjusted or adjusted only for categorical covariates, and provides very limited advice on how different statistical methods perform, and which is optimal, when constructing CIs for the proportional difference adjusted for both categorical and continuous baseline covariates. We review and compare the performance of three commonly used model-based methods, as well as the traditional nonparametric weighted-difference methods, for the construction of covariate-adjusted CIs for the proportional difference via a real-data application and simulations. The coverage of the 95% CI, Type I error control, and power are examined. We also examine, via simulations, the factors leading to model convergence failure in different scenarios.

20.
In an earlier published paper, a confidence interval approach was used to argue that the use of a large amount of internal standard (relative to the analyte) would adversely affect the precision of the analytical method. However, those confidence intervals were not calculated correctly. The authors recalculated the confidence intervals and found that there is no effect on the precision as measured by the confidence intervals.
