Similar Documents
13 similar documents found.
1.
The concordance correlation coefficient (CCC) is an index that is commonly used to assess the degree of agreement between observers measuring a continuous characteristic. Here, a CCC for longitudinal repeated measurements is developed through the appropriate specification of the intraclass correlation coefficient from a variance components linear mixed model. A case example and the results of a simulation study are provided.
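
For orientation, Lin's CCC for paired readings with means \mu_1, \mu_2, variances \sigma_1^2, \sigma_2^2, and covariance \sigma_{12} is

    \rho_c = \frac{2\sigma_{12}}{\sigma_1^2 + \sigma_2^2 + (\mu_1 - \mu_2)^2}

and a common variance-components formulation of the same quantity, from a linear mixed model with subject, observer, and subject-by-observer effects (shown here as one plausible specification, not necessarily the paper's exact model), is

    \mathrm{CCC} = \frac{\sigma_{\alpha}^2}{\sigma_{\alpha}^2 + \sigma_{\beta}^2 + \sigma_{\alpha\beta}^2 + \sigma_{e}^2}

where \sigma_{\alpha}^2 is the between-subject variance, \sigma_{\beta}^2 the between-observer variability, \sigma_{\alpha\beta}^2 the interaction variance, and \sigma_{e}^2 the residual variance.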

2.

Some assay validation studies are conducted to assess agreement between repeated, paired continuous data measured on the same subject with different measurement systems. The goal of these studies is to show that there is an acceptable level of agreement between the measurement systems. Equivalence testing is a reasonable approach in assay validation. In this article, we use an equivalence-testing criterion based on a decomposition of the concordance correlation coefficient proposed by Lin (1989, 1992). Using a variance components approach, we develop bounds for conducting statistical tests using the proposed equivalence criterion. We conduct a simulation study to assess the performance of the bounds; the criteria are the ability to maintain the stated test size and the simulated power of the tests using these bounds. Bounds that perform well for small sample sizes are preferred. We present a computational example to demonstrate the methods described in the article.
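
Lin's decomposition referenced above writes the CCC as precision times accuracy, \rho_c = \rho \cdot C_b. A minimal sketch of the sample version (assuming the standard moment definitions; the paper's variance-components equivalence bounds are not reproduced here):

    import numpy as np

    def ccc_decomposition(x, y):
        """Sample CCC with Lin's precision/accuracy decomposition (rho_c = rho * C_b)."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()          # biased (1/n) moments, as in Lin (1989)
        sxy = ((x - mx) * (y - my)).mean()
        rho = sxy / np.sqrt(vx * vy)       # Pearson correlation (precision)
        v = np.sqrt(vx / vy)               # scale shift
        u = (mx - my) / (vx * vy) ** 0.25  # location shift
        c_b = 2.0 / (v + 1.0 / v + u**2)   # bias-correction factor (accuracy)
        return rho * c_b, rho, c_b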

3.
Microarrays are one of the breakthrough technologies of the twenty-first century. Despite its great potential, the transition of microarray technology into clinically useful commercial products has not been as rapid as the technology promised. One of the primary reasons is the lack of agreement and poor reproducibility of the gene expression intensity measurements obtained from microarray experiments. Current practice often assesses the agreement of gene expression levels between technical replicates by testing the hypothesis of a zero Pearson correlation coefficient. However, the Pearson correlation coefficient evaluates the linear association between two variables and fails to take into account changes in accuracy and precision; hence, it is not appropriate for evaluating agreement of gene expression levels between technical replicates. We therefore propose to use the concordance correlation coefficient to assess agreement of gene expression levels between technical replicates. We also apply generalized pivotal quantities to obtain an exact confidence interval for the concordance correlation coefficient. In addition, based on the concept of a noninferiority test, a one-sided (1 − α) lower confidence limit for the concordance correlation coefficient is employed to test the hypothesis that the agreement of expression levels of the same genes between two technical replicates exceeds some minimal requirement of agreement. We conducted a simulation study, under various combinations of mean differences, variability, and sample size, to empirically compare the performance of the different methods for assessment of agreement in terms of coverage probability, expected length, size, and power. Numerical data from published papers illustrate the application of the proposed methods.
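
The noninferiority-style test mentioned above can be written as testing H_0: \rho_c \le \rho_0 against H_1: \rho_c > \rho_0 for a prespecified minimal requirement of agreement \rho_0; with \hat{\rho}_L denoting the one-sided (1 − \alpha) lower confidence limit (here obtained from generalized pivotal quantities), adequate agreement is declared when

    \hat{\rho}_L > \rho_0 .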

4.
Phase II clinical trials are conducted to test whether a drug has a minimum desired effect and to assess whether further development of the drug is warranted. They are often designed as one-arm trials with response rate as the primary endpoint, and a two-stage design is often used to allow early termination of the trial for futility. To control the type I error rate and guarantee the specified power of the study, the planned sample sizes for both stages must be rigidly followed, but a literature review suggests that the actual sample size often differs from that planned. We propose to extend simple two-stage designs to allow more flexible sampling plans in both stages. Our designs are preferable to similar extensions previously proposed for controlling type I and II error probabilities, and our assumptions regarding the distribution of the actual sample size at the end of stage 1 are more lenient. A list of optimal designs for typical error rates and selected null and alternative response rates is presented.
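
As a worked illustration of the operating characteristics such designs control, the sketch below computes the probability of declaring the drug active under a generic two-stage rule (stop for futility after stage 1 if responses x1 <= r1; otherwise continue to n subjects and declare activity if total responses exceed r). The design parameters in the example are illustrative placeholders, not designs from the paper.

    from scipy.stats import binom

    def two_stage_oc(p, n1, r1, n, r):
        """P(declare active) for a simple two-stage design at true response rate p."""
        prob = 0.0
        for x1 in range(r1 + 1, n1 + 1):      # stage 1 passes the futility bar
            # stage 2 must contribute more than r - x1 additional responses
            prob += binom.pmf(x1, n1, p) * binom.sf(r - x1, n - n1, p)
        return prob

    # type I error at p0 = 0.10 and power at p1 = 0.30 for an illustrative design
    alpha = two_stage_oc(0.10, n1=12, r1=1, n=35, r=5)
    power = two_stage_oc(0.30, n1=12, r1=1, n=35, r=5)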

5.
The intraclass correlation coefficient (ICC) with fixed raters or, equivalently, the concordance correlation coefficient (CCC) for continuous outcomes is a widely accepted aggregate index of agreement in settings with a small number of raters. Quantifying the precision of the CCC by constructing its confidence interval (CI) is important in early drug development applications, in particular in the qualification of biomarker platforms. In recent years, several new methods have been proposed for constructing CIs for the CCC, but a comprehensive comparison has not been attempted. The methods comprise the delta method and jackknifing, each with and without Fisher's Z-transformation, and Bayesian methods with vague priors. In this study, we carried out a simulation study, with data simulated from a multivariate normal as well as a heavier-tailed distribution (a t-distribution with 5 degrees of freedom), to compare these state-of-the-art methods for assigning a CI to the CCC. When the data were normally distributed, jackknifing with Fisher's Z-transformation (JZ) tended to provide superior coverage, and the difference between it and its closest competitor, the Bayesian method with the Jeffreys prior, was in general minimal. For the nonnormal data, the jackknife methods, especially the JZ method, provided coverage probabilities closest to the nominal level, in contrast to the others, which yielded overly liberal coverage. Approaches based on the delta method and the Bayesian method with a conjugate prior generally provided slightly narrower intervals and larger lower bounds than the others, though this was offset by their poor coverage. Finally, we illustrate the utility of CIs for the CCC in an example of a wake-after-sleep-onset (WASO) biomarker, which is frequently used in clinical sleep studies of drugs for the treatment of insomnia.
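
A minimal sketch of the JZ method as described (leave-one-out jackknife pseudo-values on Fisher's Z scale, back-transformed at the end), assuming the usual sample CCC:

    import numpy as np
    from scipy.stats import norm

    def ccc(x, y):
        mx, my, vx, vy = x.mean(), y.mean(), x.var(), y.var()
        sxy = ((x - mx) * (y - my)).mean()
        return 2 * sxy / (vx + vy + (mx - my) ** 2)

    def jackknife_z_ci(x, y, level=0.95):
        """Jackknife CI for the CCC on Fisher's Z scale (the 'JZ' method above)."""
        n = len(x)
        z_full = np.arctanh(ccc(x, y))
        # leave-one-out estimates on the Z scale
        z_loo = np.array([np.arctanh(ccc(np.delete(x, i), np.delete(y, i)))
                          for i in range(n)])
        pseudo = n * z_full - (n - 1) * z_loo   # jackknife pseudo-values
        m, se = pseudo.mean(), pseudo.std(ddof=1) / np.sqrt(n)
        q = norm.ppf(0.5 + level / 2)
        return np.tanh(m - q * se), np.tanh(m + q * se)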

6.
The repeated measures concordance correlation coefficient was proposed for measuring agreement between two raters or two methods of measuring a response in the presence of repeated measurements (King et al., 2007). This paper proposes a class of repeated measures concordance correlation coefficients that are appropriate for both continuous and categorical data. We illustrate the methodology with examples comparing (1) 1-hour vs. 2-hour blood draws for measuring cortisol in an asthma clinical trial, (2) two measurements of percentage body fat, from skinfold calipers and dual energy x-ray absorptiometry, and (3) two binary measures of quality of health from an asthma clinical trial.

7.
A need for assessment of agreement arises in many situations, including statistical biomarker qualification and assay or method validation. The concordance correlation coefficient (CCC) is one of the most popular scaled indices reported in evaluations of agreement. Robust methods for CCC estimation currently present an important statistical challenge. Here, we propose a novel Bayesian method for robust estimation of the CCC based on the multivariate Student's t-distribution and compare it with its alternatives. Furthermore, we extend the method to practically relevant settings, enabling the incorporation of confounding covariates and replications. The superiority of the new approach is demonstrated using simulated as well as real datasets from a biomarker application in electroencephalography (EEG). This biomarker is relevant in neuroscience for the development of treatments for insomnia.
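
One way to write the core of such a robust model (a sketch consistent with the description above, not necessarily the paper's exact parameterization): paired readings y_i = (y_{i1}, y_{i2})^\top follow

    y_i \sim t_{\nu}(\mu, \Sigma),

a bivariate Student's t with location \mu, scale matrix \Sigma, and degrees of freedom \nu controlling tail heaviness, and the CCC is a function of these parameters,

    \rho_c = \frac{2\Sigma_{12}}{\Sigma_{11} + \Sigma_{22} + (\mu_1 - \mu_2)^2},

so that a posterior for \rho_c is induced by the joint posterior of (\mu, \Sigma, \nu).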

8.
Older community-dwelling adults often take multiple medications for numerous chronic diseases. Nonadherence to these medications can have a large public health impact, so the measurement and modeling of medication adherence in the setting of polypharmacy is an important area of research. We apply a variety of modeling techniques (standard linear regression; weighted linear regression; adjusted linear regression; naïve logistic regression; beta-binomial (BB) regression; generalized estimating equations (GEE)) to binary medication adherence data from a study in a North Carolina-based population of older adults, in which each medication an individual was taking was classified as adherent or nonadherent. In addition, through simulation we compare these methods with respect to Type I error rate, bias, power, empirical 95% coverage, and goodness of fit. We find that estimation and inference using GEE are robust across a wide variety of scenarios, and we recommend this approach in the setting of polypharmacy when adherence is dichotomously measured for multiple medications per person.
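
For readers implementing the recommended GEE analysis, a minimal sketch using statsmodels; the tiny data frame is hypothetical, standing in for person-by-medication adherence records, and the single covariate is a placeholder:

    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical long-format data: one row per (person, medication), with a
    # binary adherence indicator and a person-level covariate.
    df = pd.DataFrame({
        "person":   [1, 1, 1, 2, 2, 3, 3, 3, 3],
        "adherent": [1, 1, 0, 0, 1, 1, 1, 0, 1],
        "age":      [71, 71, 71, 80, 80, 66, 66, 66, 66],
    })
    X = sm.add_constant(df[["age"]])
    model = sm.GEE(df["adherent"], X, groups=df["person"],
                   family=sm.families.Binomial(),
                   cov_struct=sm.cov_struct.Exchangeable())
    result = model.fit()
    print(result.summary())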

9.
For analyzing incomplete longitudinal data, there has been recent interest in comparing estimates obtained with and without multiple imputation when used together with mixed-effects models or generalized estimating equations. Empirically, the additional use of multiple imputation generally led to overestimated variances and may yield more heavily biased estimates than the use of last observation carried forward. Under either ignorable or nonignorable missing values, a mixed-effects model or generalized estimating equations alone yielded less biased estimates. The different methods were also assessed in a randomized controlled clinical trial.

10.
It is shown how hierarchical biomedical data, such as data arising from longitudinal clinical trials, meta-analyses, or a combination of both, can be used to provide evidence for the quantitative strength of reliability, agreement, generalizability, and related measures that derive from association concepts. When responses are of a continuous, Gaussian type, the linear mixed model is shown to be a versatile framework. At the same time, the framework is embedded in generalized linear mixed models, so that non-Gaussian (e.g., binary) outcomes can be studied as well. Similarities and, above all, important differences are studied. All developments are exemplified using clinical studies in schizophrenia, with focus on the endpoints Clinician's Global Impression (CGI) and Positive and Negative Syndrome Scale (PANSS).
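
The simplest instance of such an association-derived measure: for a one-way linear mixed model y_{ij} = \mu + b_i + \varepsilon_{ij} with b_i \sim N(0, \sigma_b^2) and \varepsilon_{ij} \sim N(0, \sigma_{\varepsilon}^2), the reliability (intraclass correlation) is

    R = \frac{\sigma_b^2}{\sigma_b^2 + \sigma_{\varepsilon}^2},

with richer hierarchies (trial, subject, measurement) adding further variance components to numerator and denominator.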

11.
Objective: To predict drug-sensitivity genes for the NP regimen (vinorelbine + cisplatin), a first-line chemotherapy regimen for non-small cell lung cancer, using the bioinformatic concordance correlation coefficient (CCC) method. Methods: Five statistical methods (Pearson correlation analysis, Spearman correlation analysis, Welch's t-test, ANCOVA, and rank-based ANCOVA) were used to screen drug-sensitivity genes from the NCI-60 database, and the CCC was then used to...

12.
It is important yet challenging to choose an appropriate method for the analysis of repeated binary responses with missing data. The conventional last-observation-carried-forward (LOCF) approach can be biased in both parameter estimates and hypothesis tests. The generalized estimating equations (GEE) method is valid only when missing data are missing completely at random, which may not hold in many clinical trials. Several random-effects models based on likelihood or pseudo-likelihood methods, as well as multiple-imputation-based methods, have been proposed in the literature. In this paper, we evaluate random-effects models with full- or pseudo-likelihood methods, GEE, and several multiple-imputation approaches. Simulations are used to compare the results and performance of these methods under different simulation settings.
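
For concreteness, LOCF on long-format data amounts to a within-subject forward fill; a sketch assuming hypothetical columns subject, visit, and y in a pandas data frame df:

    import pandas as pd

    # LOCF: within each subject, carry the last observed binary response
    # forward over missing visits (rows must be sorted by visit first).
    df = df.sort_values(["subject", "visit"])
    df["y_locf"] = df.groupby("subject")["y"].ffill()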

13.
Evaluation of Mixture Modeling with Count Data Using NONMEM
Mixture modeling within the context of pharmacokinetic (PK)/pharmacodynamic (PD) mixed effects modeling is a useful tool for exploring a population for the presence of two or more subpopulations not explained by the evaluated covariates. At present, statistical tests for the existence of mixed populations have not been developed. Therefore, a simulation study was undertaken to evaluate mixture modeling with NONMEM and explore the following questions. First, what is the probability of concluding that a mixed population exists when there truly is no mixture (the false positive significance level)? Second, what is the probability of concluding that a mixed population (two subpopulations) exists when there truly is a mixed population (power), and how well can the mixture be estimated, both in terms of the population parameters and the individual subject classifications? Seizure count data were simulated using a Poisson distribution such that each subject's count could decrease from its baseline value as a function of dose via an Emax model. The dosing design for the simulation was based on a trial of the investigational anti-epileptic drug pregabalin, in which 447 subjects received pregabalin as add-on therapy for partial seizures, each with a baseline seizure count and up to three subsequent seizure counts. For the mixtures, the two subpopulations were simulated to differ in their Emax values and relative proportions. One subpopulation always had its Emax set to unity (Emax hi), allowing the count to approach zero with increasing dose. The other subpopulation varied in its Emax value (Emax lo = 0.75, 0.5, 0.25, and 0) and in its relative proportion (pr) of the population (pr = 0.05, 0.10, 0.25, and 0.50), giving a total of 4 × 4 = 16 different mixtures. Three hundred data sets were simulated for each scenario, and estimations were performed using NONMEM. The evaluation metrics used information about the parameter estimates; their standard errors (SE); the difference between the minimum objective function (MOF) values for the mixture and non-mixture models (MOFδ); the proportion of subjects classified correctly; and the estimated conditional probabilities of a subject having been simulated with Emax lo (Emax hi) given that it was estimated as having Emax lo (Emax hi), and of being estimated as having Emax lo (Emax hi) given that it was simulated with Emax lo (Emax hi). The false positive significance level was approximately 0.04 (using all 300 runs) or 0.078 (using only those runs with a successful covariance step) when there was no mixture. When mixed data were simulated, and for those characterizations with successful estimation and covariance steps, the median (range) percentages of 95% confidence intervals containing the true values of the parameters defining the mixture were 94% (89–96%), 89.5% (58–96%), and 95% (92–97%) for pr, Emax lo, and Emax hi, respectively. The median values of the estimated parameters pr, Emax lo (excluding the case in which Emax lo was simulated to equal 0), and Emax hi within a scenario were within ±28% of the true values. The median proportion of subjects classified correctly ranged from 0.59 to 0.96. In conclusion, when no mixture was present the false positive probability was less than 0.078, and when mixtures were present they were characterized with varying degrees of success, depending on the nature of the mixture: as the difference between subpopulations grew (Emax lo approaching zero or pr approaching 0.5), the mixtures became easier to characterize.
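
A sketch of the kind of data-generating process described (two subpopulations differing in Emax, Poisson counts reduced from baseline by an Emax dose-response); ed50, base_mean, and the dose levels are illustrative placeholders, not the paper's values:

    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_counts(n_subj=447, pr_lo=0.25, emax_lo=0.5, ed50=100.0,
                        base_mean=10.0, doses=(0.0, 150.0, 300.0, 600.0)):
        """Each subject belongs to the Emax-lo subpopulation with probability
        pr_lo; counts at each dose are Poisson with mean reduced from the
        subject's baseline via an Emax model."""
        is_lo = rng.random(n_subj) < pr_lo
        emax = np.where(is_lo, emax_lo, 1.0)       # Emax-hi subpopulation: Emax = 1
        base = rng.poisson(base_mean, n_subj) + 1  # baseline seizure count
        counts = np.empty((n_subj, len(doses)), dtype=int)
        for j, d in enumerate(doses):
            lam = base * (1.0 - emax * d / (ed50 + d))
            counts[:, j] = rng.poisson(lam)
        return counts, is_lo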
