Similar Literature
20 similar documents found.
1.
This paper is an overview of recently developed tools for assessing agreement in method comparison and in instrument/assay validation for medical devices. It emphasizes concepts, sample sizes, and examples rather than analytical formulas. We consider a unified approach to evaluating agreement among multiple instruments (k), each with multiple replicates (m), for both continuous and categorical data. We start with the basic scenario of two instruments (k = 2), each with only one measurement (m = 1). In this basic scenario for continuous data, we also consider whether the target values are random (values of a gold-standard instrument) or fixed (known values). In the more general case, fixed target values are not considered. We discuss simplified sample size calculations. When there is disagreement between methods, one needs to know whether it is due to a systematic shift (bias) or to random error. The coefficients of accuracy and precision are discussed to characterize these sources. The distinction is important because a systematic shift can usually be fixed easily through calibration, whereas reducing random error usually requires a more cumbersome variation-reduction exercise.

For categorical variables, we consider scaled agreement statistics. For continuous variables, we use scaled or unscaled agreement statistics. For variables with proportional error, a log transformation can simply be applied to the data. Finally, three examples are given: one for assay validation, one for a lab proficiency assessment, and one for a lab comparison on a categorical assay.
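To make the decomposition concrete, here is a minimal numerical sketch (not the paper's own software; it assumes the standard concordance correlation coefficient, numpy, and hypothetical data) for the basic k = 2, m = 1 continuous case: overall agreement is split into a precision coefficient (Pearson correlation, reflecting random error) and an accuracy coefficient (reflecting systematic shift).

import numpy as np

def agreement_summary(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sx2, sy2 = x.var(), y.var()                    # population variances
    sxy = np.mean((x - mx) * (y - my))
    ccc = 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)   # overall agreement (CCC)
    precision = sxy / np.sqrt(sx2 * sy2)           # Pearson correlation
    accuracy = ccc / precision                     # C_b: closeness to the 45-degree line
    return {"ccc": ccc, "precision": precision, "accuracy": accuracy}

rng = np.random.default_rng(0)
target = rng.normal(100.0, 10.0, 50)                   # hypothetical gold-standard values
new_method = target + 2.0 + rng.normal(0.0, 3.0, 50)   # systematic shift of 2 plus random error
print(agreement_summary(target, new_method))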

2.
3.
We describe a tolerance interval approach for assessing agreement in method comparison data that may be left-censored. We model the data using a mixed model and discuss a Bayesian and a frequentist methodology for inference. A simulation study suggests that the Bayesian approach with noninformative priors provides a good alternative to the frequentist one for moderate sample sizes, as the latter tends to be liberal. Both may be used for sample sizes of 100 or more, with the Bayesian one being slightly conservative. The proposed methods are illustrated with real data involving the comparison of two assays for quantifying viral load in HIV patients.

4.
Objective: To study specific approaches for evaluating precision, accuracy, and range. Methods: Using RP-HPLC, with Magnolia officinalis bark (Houpo) as an example, the contents of magnolol and honokiol in the crude drug were determined; during method validation, the repeatability and recovery experiments were designed with different ranges and different evaluation schemes. Results: The repeatability tests gave RSDs of 2.5%–5.8% for magnolol and 2.5%–5.7% for honokiol; the mean recovery was 95.1%–97.7% for magnolol (RSD 2.2%–4.3%) and 94.8%–97.7% for honokiol (RSD 2.0%–7.5%). Conclusion: The repeatability RSD increased as the evaluated range widened, and the variance of the results from six test solutions at a single concentration was not homogeneous with that from nine test solutions at low, medium, and high concentrations. The recovery RSD likewise increased with the evaluated range. When different amounts of crude drug were taken and the reference standard was spiked at a 1:1 ratio to the analyte content, the effect on the error was relatively small; whereas when the sample amount was fixed and recovery was evaluated by varying the amount of spiked reference standard, the RSD of the nine samples increased markedly.
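For readers unfamiliar with the two quantities these experiments are built on, the following sketch (hypothetical numbers, not the study's data) shows how a repeatability RSD and a spike recovery are computed:

import numpy as np

def rsd(values):
    values = np.asarray(values, float)
    return values.std(ddof=1) / values.mean() * 100.0      # sample RSD, in percent

def recovery(found, background, spiked):
    # amount recovered (total found minus what the sample already contained)
    # divided by the amount of reference standard added, in percent
    return (found - background) / spiked * 100.0

magnolol_six_replicates = [2.05, 2.11, 1.98, 2.03, 2.09, 2.00]   # mg/g, hypothetical
print(f"repeatability RSD: {rsd(magnolol_six_replicates):.1f}%")
print(f"spike recovery: {recovery(found=4.02, background=2.04, spiked=2.00):.1f}%")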

5.
By using a cumulative payment function, the various payment schemes of life annuities and insurance premiums can be treated in a unified way, yielding a single expression for the actuarial present value of an arbitrary payment stream (covering both life annuities and premiums): E(Y) = ∫₀^∞ S_T(t) v^t dB(t). From a theoretical point of view, a highly abstract formula usually gives a clearer view of the essence of the matter; and now that basic computation is no longer an obstacle to the discussion, such a formula also simplifies how the material is presented, which is valuable both in theory and in practice.
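As an illustration (a sketch of standard special cases, not reproduced from the paper), choosing particular cumulative payment functions B(t) recovers familiar annuity values:

% Unified actuarial present value of a payment stream with cumulative payment
% function B(t), survival function S_T(t), and discount factor v = 1/(1+i):
\[
  \operatorname{E}(Y) \;=\; \int_{0}^{+\infty} S_T(t)\, v^{t}\, \mathrm{d}B(t),
  \qquad S_T(t) = \Pr(T > t) = {}_{t}p_{x}.
\]
% Continuous life annuity of 1 per year: B(t) = t, so dB(t) = dt and
\[
  \bar{a}_{x} \;=\; \int_{0}^{+\infty} v^{t}\, {}_{t}p_{x}\, \mathrm{d}t .
\]
% Discrete life annuity-due of 1 per year: B(t) jumps by 1 at t = 0, 1, 2, \dots, so
\[
  \ddot{a}_{x} \;=\; \sum_{k=0}^{\infty} v^{k}\, {}_{k}p_{x} .
\]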

6.
This work presents an in-depth evaluation of data collected continuously during a twin-screw granulation and drying process performed on a continuous manufacturing line. During operation, the continuous line logs 49 univariate process variables, hence generating a large amount of data. Three identical 5-h continuous manufacturing runs were performed. Multivariate data analysis tools, more specifically latent variable modeling tools such as principal component analysis, were used to extract information from the generated data sets, unveiling process trends and drifts. Furthermore, a statistical process monitoring strategy is presented, based on applying multivariate statistical process monitoring to the variables that remain around a steady state.
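A minimal sketch of this kind of latent-variable monitoring (illustrative only, using scikit-learn and simulated data rather than the authors' 49-variable data sets): a PCA model is fitted to standardized steady-state data, and new observations are tracked with Hotelling's T² and the squared prediction error against empirical control limits.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X_ref = rng.normal(size=(300, 49))            # hypothetical: 300 time points, 49 logged variables
X_new = rng.normal(size=(50, 49)) + 0.5       # hypothetical new run containing a small drift

scaler = StandardScaler().fit(X_ref)
pca = PCA(n_components=5).fit(scaler.transform(X_ref))

def t2_and_spe(X):
    Z = scaler.transform(X)
    scores = pca.transform(Z)
    t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)        # Hotelling's T^2 in the model plane
    spe = np.sum((Z - pca.inverse_transform(scores))**2, axis=1)    # residual (squared prediction error)
    return t2, spe

t2_ref, spe_ref = t2_and_spe(X_ref)
t2_new, spe_new = t2_and_spe(X_new)
# Simple empirical control limits taken from the reference (in-control) run.
print("fraction of new points above the T^2 limit:", np.mean(t2_new > np.percentile(t2_ref, 99)))
print("fraction of new points above the SPE limit:", np.mean(spe_new > np.percentile(spe_ref, 99)))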

7.
In studies of quality control of oligonucleotide array data, one objective is to screen out ineligible arrays. Incomparable arrays (one type of ineligible array) arise when the experimental factors are poorly controlled. Because of the high volume of data in gene arrays, examining array comparability requires special treatments that reduce data dimension without distortion. This paper proposes a graphical approach to address these issues. The proposed approach uses percentile methods to group the data and applies a 2D image plot to display the grouped data. Moreover, an invariant band is employed to quantify the degree of array comparability. We use two publicly available oligonucleotide array datasets from the Affymetrix GeneChip System for evaluation. The results demonstrate the utility of our approach for examining data quality and also as an exploratory tool for verifying differentially expressed genes selected by rigorous statistical methods.
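A rough sketch of the idea (simulated data, with a crude robust band standing in for the paper's invariant band): each array is summarized by intensity percentiles, the arrays-by-percentiles matrix is shown as a 2D image, and arrays whose profiles leave the band are flagged.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
n_arrays, n_probes = 20, 10_000
intensities = rng.lognormal(mean=6.0, sigma=1.0, size=(n_arrays, n_probes))
intensities[3] *= 2.5                                          # one hypothetical incomparable array

qs = np.arange(5, 100, 5)                                      # percentile grid used for grouping
profiles = np.percentile(np.log2(intensities), qs, axis=1).T   # arrays x percentile groups

plt.imshow(profiles, aspect="auto", cmap="viridis")
plt.xlabel("percentile group"); plt.ylabel("array"); plt.colorbar(label="log2 intensity")
plt.savefig("array_percentile_profiles.png")

center = np.median(profiles, axis=0)
band = 3.0 * np.median(np.abs(profiles - center), axis=0)      # crude band, not the paper's invariant band
flagged = np.where(np.any(np.abs(profiles - center) > band, axis=1))[0]
print("arrays flagged as potentially incomparable:", flagged)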

8.
Noninferiority clinical trials aim to show an experimental treatment is therapeutically no worse than standard of care, particularly if the new treatment is preferred for reasons such as cost, convenience, safety, and so on. Noninferiority trials are by nature less conservative than superiority studies: protocol violations may increase bias toward the alternative hypothesis of noninferiority. Our objective was to compare multiple imputation, a linear mixed model, and other methods for analyzing a longitudinal trial with missing data in intention-to-treat and per-protocol populations. We simulated trials with missing data and noncompliance due to treatment inefficacy under varying trial conditions (e.g., trajectory of treatment effects, correlation between repeated measures, and missing data mechanism), assessing each approach by estimating bias, Type I error, and power. We found that multiple imputation using auxiliary data on noncompliance in the imputation model performed best. A hybrid intention-to-treat/per-protocol multiple imputation approach with a missing not at random imputation model produced low Type I error, was unbiased and maintained reasonable power to detect noninferiority. We conclude that the anti-conservatism of noninferiority trial estimands conforming with the intention-to-treat principle may be offset by imputation models that include variables on intercurrent events. Supplementary materials for this article are available online.

9.
In this article, we consider a three-arm noninferiority (NI) trial that includes an experimental, a reference, and a placebo arm. While the risk difference (RD) is the most common and best-explored functional form for testing efficacy (or effectiveness) with binary outcomes, recent FDA guidance has suggested other measures, such as the relative risk (RR) and the odds ratio (OR), on the basis of which NI can be claimed. However, developing tests based on these different functions of binary outcomes is challenging, since the construction and interpretation of the NI margin for such functions are not trivial extensions of the RD-based approach. Recently, we proposed a frequentist approach for testing NI for these functionals. In this article, we further develop Bayesian approaches for testing NI based on the effect-retention approach for RR and OR. The Bayesian paradigm provides a natural path to integrate historical trial information and allows patients' and clinicians' opinions to be used as prior information via sequential learning. In addition, we discuss in detail the sample size/power calculation, which can be readily used when designing such trials in practice.

10.
Gene expression profiling has played an important role in cancer risk classification and has shown promising results. Since gene expression profiling often involves determining a set of top-ranked genes for analysis, it is important to evaluate how modeling performance varies with the number of selected top-ranked genes incorporated in the model. We used a colon data set collected at Moffitt Cancer Center as an example and ranked genes based on the univariate Cox proportional hazards model. A set of top-ranked genes was selected for evaluation by choosing the top k ranked genes for k = 1 to 12,500. The analysis indicated considerable variation of classification outcomes as the number of top-ranked genes changed. We developed a predictive risk probability approach to accommodate this variation by considering a range of numbers of top-ranked genes. For each number of top-ranked genes, the procedure classifies each patient as high risk (score = 1) or low risk (score = 0). The classifications are then averaged, giving a risk score between 0 and 1 and thus providing a ranking of the patient's need for further treatment. This approach was applied to the colon data set, and its strength was demonstrated by three criteria. First, a univariate Cox proportional hazards model showed a highly statistically significant effect (log-rank χ² statistic = 110 with p-value < 10⁻¹⁶) for the predictive risk probability classification. Second, a survival tree model used the risk probability to partition patients into five risk groups with well-separated survival curves (log-rank χ² statistic = 215). In addition, use of the risk-group status identified a small set of risk genes that may be practical for biological validation. Third, resampling analysis of the risk probability suggested that the variation pattern of the log-rank χ² in the colon cancer data set was unlikely to be caused by chance.
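The averaging step can be sketched as follows (illustrative only; the classification rule below is a hypothetical stand-in for the Cox-model-based high/low-risk call used in the article):

import numpy as np

def risk_probability(expr, ranked_genes, k_range, classify):
    """expr: patients x genes matrix; ranked_genes: gene indices ordered by univariate
    association; classify: callable returning a 0/1 high/low-risk call per patient for
    a given submatrix (a hypothetical stand-in for the Cox-model-based rule)."""
    calls = [classify(expr[:, ranked_genes[:k]]) for k in k_range]
    return np.mean(calls, axis=0)            # averaged calls: risk score in [0, 1] per patient

def toy_rule(sub):
    # hypothetical rule for illustration: above-median mean expression of the selected genes
    m = sub.mean(axis=1)
    return (m > np.median(m)).astype(int)

rng = np.random.default_rng(3)
expr = rng.normal(size=(100, 500))           # hypothetical expression matrix
ranking = np.arange(500)                     # pretend the genes are already ranked
scores = risk_probability(expr, ranking, k_range=range(10, 200, 10), classify=toy_rule)
print(scores[:5])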

11.
Dichotomizing a continuous biomarker is a common practice in medical research, and various methods exist in the literature for doing so. The most widely adopted, the minimum p-value approach, uses a sequence of test statistics for all possible dichotomizations of a continuous biomarker and chooses the cutpoint associated with the maximum test statistic or, equivalently, the minimum p-value of the test. We herein propose a likelihood- and resampling-based approach to dichotomizing a continuous biomarker. In this approach, the cutpoint is treated as an unknown variable in addition to the unknown outcome variables, and the likelihood function is maximized with respect to the cutpoint variable as well as the outcome variables to obtain the optimal cutpoint for the continuous biomarker. The significance level of the test for whether a cutpoint exists is assessed via a permutation test using the maximum likelihood values calculated on the original as well as the permuted data sets. Numerical comparisons of the proposed approach and the minimum p-value approach showed that the proposed approach was not only more powerful in detecting the cutpoint but also provided markedly more accurate estimates of the cutpoint than the minimum p-value approach in all the simulation scenarios considered.
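For reference, a minimal sketch of the comparator minimum p-value approach (not the authors' likelihood/resampling method; it uses a Welch t-test on a continuous outcome and simulated data):

import numpy as np
from scipy import stats

def min_pvalue_cutpoint(biomarker, outcome, lower_q=0.1, upper_q=0.9):
    candidates = np.unique(np.quantile(biomarker, np.linspace(lower_q, upper_q, 50)))
    best_cut, best_p = None, 1.0
    for c in candidates:
        hi, lo = outcome[biomarker > c], outcome[biomarker <= c]
        if len(hi) < 5 or len(lo) < 5:
            continue
        p = stats.ttest_ind(hi, lo, equal_var=False).pvalue
        if p < best_p:
            best_cut, best_p = c, p
    # note: the minimum p-value is optimistically small because of multiple testing
    return best_cut, best_p

rng = np.random.default_rng(4)
x = rng.uniform(0.0, 10.0, 200)
y = rng.normal(0.0, 1.0, 200) + 0.8 * (x > 6.0)     # hypothetical true cutpoint at 6
print(min_pvalue_cutpoint(x, y))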

12.
13.
This article develops a framework for benchmark dose estimation that allows intrinsically nonlinear dose–response models to be used for continuous data in much the same way as is already possible for quantal data. This means that the same dose–response model equations may be applied to both continuous and quantal data, facilitating benchmark dose estimation in general for a wide range of candidate models commonly used in toxicology. Moreover, the proposed framework provides a convenient means for extending benchmark dose concepts through model averaging and random-effects modeling for hierarchical data structures, reflecting increasingly common types of assay data. We illustrate the usefulness of the methodology by means of a cytotoxicology example in which the sensitivities of two types of assays are evaluated and compared. By means of a simulation study, we show that the proposed framework provides slightly conservative, yet useful, estimates of the benchmark dose lower limit under realistic scenarios.

14.
Purpose. To develop a data supplementation [i.e., a pharmacokinetic/pharmacodynamic (PK/PD) knowledge creation] approach for generating supplemental data to be used in characterizing a targeted, unexplored segment of the response surface. Methods. The procedure for data supplementation can be summarized as follows: 1) statement of the objective of data supplementation for PK/PD knowledge creation, 2) performance of PK knowledge discovery, 3) PK data synthesis for the target dose group(s), 4) covariate data synthesis for virtual subjects in the target dose group(s), 5) discovery of hidden knowledge from the real data set to which the supplemental data will be added, 6) implementation of a data supplementation methodology, and 7) discovery and communication of the created knowledge. A nonparametric approximate Bayesian multiple supplementation and its modification, structure-based multiple supplementation, which is an adaptation of the approximate Bayesian bootstrap, are proposed as methods of data supplementation for PK/PD knowledge creation. The structure-based multiple supplementation methodology was applied to characterize the effect of a target dose of 100 mg that was left unexplored in a previously concluded study investigating the effect of 200- and 600-mg doses on biomarker response. Results. The target dose of 100 mg was found to produce a response comparable to that of the 200-mg dose and better than that obtained with the 600-mg dose. Conclusions. Implementing the PK/PD knowledge creation process through data supplementation yielded knowledge about a targeted region of the response surface (i.e., the effect of a target dose) that had not been examined in the completed study, without expending resources on a new study.

15.
Purpose. The overall aim of the present study was to investigate retrospectively the feasibility and utility of model-based clinical trial simulation as applied to the clinical development of naratriptan, with effect measured on a categorical scale. Methods. A PK-PD model for naratriptan was developed using information gathered from previous naratriptan and sumatriptan preclinical and clinical trials. The phase IIa naratriptan data were used to check the PK-PD model's ability to describe future data. A further PK-PD model was developed using the phase IIa naratriptan data, and a phase IIb trial was designed by simulation with the use of Matlab. The design resulting from clinical trial simulation was compared with that derived using D-optimal design. Results. The PK-PD model showed reasonable agreement with the data observed in the phase IIa naratriptan clinical trial. Clinical trial simulation resulted in a design with four or five arms at 0 mg, 2.5 and/or 5 mg, 10 mg, and 20 mg, with PD measurements taken at 0, 2, and 4 or 6 h and at least 150 patients per arm. A sub-D-optimal design resulted in two dosing arms at 0 and 10 mg with PD measurements taken at 1 and 2 h. Conclusions. Clinical trial simulation is a useful tool for the quantitative assessment of the influence of the controllable factors, and the only tool for the quantitative assessment of the influence of the uncontrollable factors, on the power of a clinical trial.

16.
The cellular fingerprint, a novel in silico screening approach, was developed to identify new biologically active compounds in combination with structural fingerprints. To this end, high-throughput screening (HTS) data from the National Cancer Institute were used. To validate this method, we selected the proapoptotic natural compound betulinic acid (BA). Because of its antiproliferative effect on a variety of cancer cell lines, the identification of novel BA analogs is of great interest. Novel analogs were identified and validated in different apoptosis assays. In addition, the novel approach exhibited a strong correlation between structural similarity and biological activity, so it offers enormous potential for the identification of novel biologically active compounds.

17.
在"数据流分析"这一数据挖掘的应用领域中,常规的算法显得很不适用.主要是因为这些算法的挖掘过程不能适应数据流的动态环境,其挖掘模型、挖掘结果不能满足实际应用中用户的需求.针对这一问题,本文提出了一种基于网格和密度的聚类方法,来有效地完成对数据流的分析任务.该方法打破传统聚类方法的束缚,把整个挖掘过程分为离线和在线两步,最终通过基于网格和密度的聚类方法实现数据流聚类.  相似文献   

18.
Even with two doses of an experimental drug in Phase III studies, the commonly used approach of assessing the treatment effects of individual doses may still make it difficult to determine the final commercial dose. In such a scenario, with plasma concentration data collected in the studies, a modeling approach can be applied to predict treatment effects at different plasma concentration levels. Through an established relationship between plasma concentration and dose, the treatment effects of doses not studied in the Phase III studies can then be predicted, and the results can be used to justify the final dose confirmation or selection. In this article, a Phase III program in multiple sclerosis with count data as the primary endpoint is used to illustrate the application of this technique for dose confirmation. Several models are considered, including the overdispersed Poisson model, the negative binomial model, and recurrent event models. The negative binomial model is preferred because it fits the data better and supports both within-treatment assessment and between-treatment comparison.
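A minimal sketch of a negative binomial concentration-response fit (simulated data and statsmodels; not the program's actual analysis, doses, or endpoint values):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n_patients = 400
conc = rng.lognormal(mean=1.0, sigma=0.5, size=n_patients)   # hypothetical plasma concentrations
mu = np.exp(0.5 - 0.15 * conc)                               # true event rate falls with exposure
y = rng.negative_binomial(n=2, p=2.0 / (2.0 + mu))           # overdispersed counts

X = sm.add_constant(conc)
fit = sm.NegativeBinomial(y, X).fit(disp=False)              # NB2 regression
print(fit.summary())

# Predicted mean count at an exposure level not directly studied (hypothetical value).
print(fit.predict(np.array([[1.0, 1.5]])))                   # columns: intercept, concentration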

19.
20.
The ICH Q2(R1) Guidance on Validation of Analytical Procedures states that a robustness assessment for an analytical method should provide "an indication of its reliability during normal usage." The concept of "design space" as specified in the ICH Q8 Guidance may be used to create a zone of reliable robustness for an analytical method or pharmaceutical process. A Bayesian approach to design space, as outlined by Peterson (2004), accounts for model parameter uncertainty, correlation among the quality responses at each fixed operating condition, and method response multiplicity. Two examples are provided to illustrate the application of a Bayesian design space to assessing reliability and robustness: one assesses the ability of an HPLC analytical method to meet system suitability criteria, and the other deals with a crystallization process for an active pharmaceutical ingredient.
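A rough sketch of the Bayesian design-space calculation (hypothetical posterior draws and acceptance criterion, not Peterson's model or the article's examples): at each candidate operating condition, the posterior predictive probability of meeting the suitability criterion is estimated, and the conditions clearing a reliability target form the design space.

import numpy as np

rng = np.random.default_rng(7)

# Hypothetical posterior draws for a toy model of HPLC resolution:
# resolution = b0 + b1*flow + b2*temperature + noise.
n_draws = 2000
b0 = rng.normal(2.0, 0.10, n_draws)
b1 = rng.normal(-0.8, 0.05, n_draws)                 # resolution falls as flow rate rises
b2 = rng.normal(0.02, 0.01, n_draws)
sigma = np.abs(rng.normal(0.15, 0.02, n_draws))

def prob_meets_criteria(flow, temp, min_resolution=1.5):
    # posterior predictive draws of the response at this operating condition
    pred = b0 + b1 * flow + b2 * temp + rng.normal(0.0, sigma)
    return np.mean(pred >= min_resolution)

flows = np.linspace(0.8, 1.5, 8)
temps = np.linspace(25.0, 40.0, 7)
design_space = [(f, t) for f in flows for t in temps if prob_meets_criteria(f, t) >= 0.95]
print(f"{len(design_space)} of {flows.size * temps.size} conditions meet the 95% reliability target")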
