Similar articles
20 similar articles found (search time: 15 ms)
1.
We propose a sample size calculation approach for the estimation of sensitivity and specificity of diagnostic tests with multiple observations per subject. Many diagnostic tests such as diagnostic imaging or periodontal tests are characterized by the presence of multiple observations for each subject. The number of observations frequently varies among subjects in diagnostic imaging experiments or periodontal studies. Nonparametric statistical methods for the analysis of clustered binary data have recently been developed by various authors. In this paper, we derive a sample size formula for sensitivity and specificity of diagnostic tests using the sign test while accounting for multiple observations per subject. Application of the sample size formula to the design of a diagnostic test is discussed. Since the sample size formula is based on large sample theory, simulation studies are conducted to evaluate the finite sample performance of the proposed method. We compare the performance of the proposed sample size formula with that of the parametric sample size formula that assigns equal weight to each observation. Simulation studies show that the proposed sample size formula generally yields empirical powers closer to the nominal level than the parametric method. Simulation studies also show that the number of subjects required increases as the variability in the number of observations per subject increases and as the intracluster correlation increases.
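The abstract's main qualitative finding, that more subjects are needed as the cluster-size variability and the intracluster correlation grow, can be sketched with a standard design-effect calculation. This is a generic cluster-adjusted precision formula, not the authors' sign-test-based derivation; the function name and interface are illustrative.

```python
import math
from statistics import NormalDist

def n_subjects_sensitivity(p, width, mean_obs, cv_obs=0.0, icc=0.0, conf=0.95):
    """Subjects needed so a (conf)-level CI for sensitivity p has half-width
    `width`, with `mean_obs` observations per subject on average.

    Clustering is handled with a design effect; for variable cluster sizes the
    effective cluster size is inflated by (1 + cv^2)."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    n_obs_indep = z**2 * p * (1 - p) / width**2      # independent-observation count
    m_eff = mean_obs * (1 + cv_obs**2)               # effective cluster size
    deff = 1 + (m_eff - 1) * icc                     # design effect
    return math.ceil(n_obs_indep * deff / mean_obs)  # observations -> subjects
```

With `icc = 0` and `cv_obs = 0` this reduces to the usual independent-data formula divided by the number of observations per subject; both a larger `icc` and a larger `cv_obs` inflate the required number of subjects, matching the simulation findings described above.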

2.
In confidence interval estimation of the difference between two proportions with overdispersion due to positive correlations, the usual asymptotic normality-based method generally has lower coverage rates than desired, especially when the sample size is moderate. Applying the concept of effective sample size to existing methods for independent data, we propose three new asymptotic normality-based methods. An extensive Monte Carlo study demonstrates that the proposed methods generally perform better than the usual method. The proposed methods are illustrated with a motivating data example.
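A minimal sketch of the effective-sample-size idea for one interval: deflate each arm's sample size by a design effect before forming the usual Wald interval. The design-effect form `1 + (m - 1) * icc` and the interface are assumptions for illustration, not the paper's exact proposals.

```python
from statistics import NormalDist

def diff_ci_effective(x1, n1, x2, n2, icc, m, conf=0.95):
    """Wald CI for p1 - p2 with clustered binary data, shrinking each arm's
    sample size by the design effect 1 + (m - 1) * icc (m = cluster size)."""
    deff = 1 + (m - 1) * icc
    ne1, ne2 = n1 / deff, n2 / deff          # effective sample sizes
    p1, p2 = x1 / n1, x2 / n2
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    se = (p1 * (1 - p1) / ne1 + p2 * (1 - p2) / ne2) ** 0.5
    d = p1 - p2
    return d - z * se, d + z * se
```

With `icc = 0` the interval is the ordinary independent-data Wald interval; positive correlation widens it, which is what restores coverage under overdispersion.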

3.
In assessing a noninferiority trial, the investigator intends to show efficacy by demonstrating that a new experimental drug/treatment is not worse than a known active control/reference by a small predefined margin. If it is ethically justifiable, it may be advisable to include an additional placebo group for internal validation purposes. This constitutes the well-known three-arm clinical trial with placebo. In this paper, we study two asymptotic statistical methods for testing noninferiority in three-arm clinical trials with placebo for binary outcomes based on the rate difference: a sample-based estimation method and a restricted maximum likelihood estimation method. We investigate the performance of the proposed test procedures under different sample size allocation settings via a simulation study. Both methods perform satisfactorily under moderate to large sample settings. However, the restricted maximum likelihood estimation method usually possesses slightly smaller actual type I error rates, which are relatively close to the prespecified nominal level, while the sample-based method can be expressed in simple closed form. Real examples from a pharmacological study of patients with functional dyspepsia and a placebo-controlled trial of subjects with acute migraine are used to demonstrate our methodologies.
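The sample-based method's closed form can be illustrated with the usual retention-of-effect formulation of three-arm noninferiority. The retention fraction `theta` and the Wald statistic below are a common textbook version, offered as a sketch rather than the paper's exact statistic.

```python
def three_arm_ni_z(xe, ne, xr, nr, xp, np_, theta=0.8):
    """Sample-based Wald Z for the retention-of-effect hypothesis
    H0: pE - pP <= theta * (pR - pP), rejected for large Z
    (E = experimental, R = reference, P = placebo; higher rates are better).
    A sketch of one common three-arm formulation."""
    pe, pr, pp = xe / ne, xr / nr, xp / np_
    est = pe - theta * pr - (1 - theta) * pp        # >0 under noninferiority
    var = (pe * (1 - pe) / ne
           + theta**2 * pr * (1 - pr) / nr
           + (1 - theta)**2 * pp * (1 - pp) / np_)  # plug-in variance
    return est / var**0.5
```

The placebo arm enters only through the variance and the `(1 - theta)` term, which is why its sample size allocation affects power, one of the settings the simulation study varies.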

4.
For designing single-arm phase II trials with time-to-event endpoints, a sample size formula is derived for the modified one-sample log-rank test under the proportional hazards model. The derived formula enables new methods for designing trials that allow a flexible choice of the underlying survival distribution. Simulation results showed that the proposed formula provides an accurate estimation of sample size. The sample size calculation has been implemented in an R function for the purpose of trial design. Supplementary materials for this article are available online.
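For orientation, a simple proportional-hazards approximation to the required number of events for a one-sample log-rank design looks like the following. This is the textbook approximation against a known reference hazard, not the modified formula derived in the paper.

```python
import math
from statistics import NormalDist

def one_sample_logrank_events(hr, alpha=0.05, power=0.80):
    """Approximate number of events for a one-sample log-rank test of
    H0: HR = 1 vs H1: HR = hr against a known reference hazard
    (one-sided alpha). A standard proportional-hazards approximation."""
    nd = NormalDist()
    za, zb = nd.inv_cdf(1 - alpha), nd.inv_cdf(power)
    return math.ceil((za + zb) ** 2 / math.log(hr) ** 2)
```

The event count is then converted to a number of subjects using the anticipated accrual and follow-up pattern, which is where the flexible choice of survival distribution mentioned in the abstract comes in.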

5.
Microarray is a technology to screen a large number of genes to discover those differentially expressed between clinical subtypes or different conditions of human diseases. Gene discovery using microarray data requires adjustment for the large-scale multiplicity of candidate genes. The family-wise error rate (FWER) has been widely chosen as a global type I error rate adjusting for the multiplicity. Typically in microarray data, the expression levels of different genes are correlated because of coexpressing genes and the common experimental conditions shared by the genes on each array. To accurately control the FWER, the statistical testing procedure should appropriately reflect the dependency among the genes. Permutation methods have been used for accurate control of the FWER in analyzing microarray data. It is important to calculate the required sample size at the design stage of a new (confirmatory) microarray study. Because of the high dimensionality and complexity of the correlation structure in microarray data, however, there have been no sample size calculation methods accurately reflecting the true correlation structure of real microarray data. We propose sample size and power calculation methods that are useful when pilot data are available to design a confirmatory experiment. If no pilot data are available, we recommend a two-stage sample size recalculation based on our proposed method using the first stage data as pilot data. The calculated sample sizes are shown to accurately maintain the power through simulations. A real data example is taken to illustrate the proposed method.
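Permutation-based FWER control of the kind described can be sketched with a single-step maxT (Westfall-Young style) adjustment, which preserves the between-gene correlation by permuting whole arrays. The per-gene statistic here is a plain absolute mean difference, chosen for brevity; it is a sketch, not the paper's method.

```python
import random

def maxT_adjusted_pvalues(groupA, groupB, n_perm=2000, seed=1):
    """Single-step maxT permutation adjustment controlling the FWER.
    groupA/groupB are lists of arrays; each array is a list of G gene
    expression values. Statistic: absolute difference of group means per gene."""
    rng = random.Random(seed)
    G = len(groupA[0])
    data = groupA + groupB
    nA = len(groupA)

    def stats(samples_a, samples_b):
        return [abs(sum(s[g] for s in samples_a) / len(samples_a)
                    - sum(s[g] for s in samples_b) / len(samples_b))
                for g in range(G)]

    observed = stats(groupA, groupB)
    exceed = [0] * G
    for _ in range(n_perm):
        perm = data[:]
        rng.shuffle(perm)                            # permute array labels
        max_t = max(stats(perm[:nA], perm[nA:]))     # max over genes -> FWER
        for g in range(G):
            if max_t >= observed[g]:
                exceed[g] += 1
    return [e / n_perm for e in exceed]
```

Because the maximum is taken over all genes within each permutation, the adjusted p-values automatically reflect whatever correlation structure the arrays carry.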

6.
Microarray is a technology to screen a large number of genes to discover those differentially expressed between clinical subtypes or different conditions of human diseases. Gene discovery using microarray data requires adjustment for the large-scale multiplicity of candidate genes. The family-wise error rate (FWER) has been widely chosen as a global type I error rate adjusting for the multiplicity. Typically in microarray data, the expression levels of different genes are correlated because of coexpressing genes and the common experimental conditions shared by the genes on each array. To accurately control the FWER, the statistical testing procedure should appropriately reflect the dependency among the genes. Permutation methods have been used for accurate control of the FWER in analyzing microarray data. It is important to calculate the required sample size at the design stage of a new (confirmatory) microarray study. Because of the high dimensionality and complexity of the correlation structure in microarray data, however, there have been no sample size calculation methods accurately reflecting the true correlation structure of real microarray data. We propose sample size and power calculation methods that are useful when pilot data are available to design a confirmatory experiment. If no pilot data are available, we recommend a two-stage sample size recalculation based on our proposed method using the first stage data as pilot data. The calculated sample sizes are shown to accurately maintain the power through simulations. A real data example is taken to illustrate the proposed method.

7.
The primary objective of a multiregional clinical trial (MRCT) is to assess the efficacy across all participating regions and to evaluate the probability of applying the overall results to a specific region. Assessing the consistency of the target region with the overall results is the most common way of evaluating the efficacy in a specific region. Recently, Huang et al. (2012) proposed adding a trial in the target region to an MRCT to evaluate the efficacy in the target ethnic (TE) population under the framework of a simultaneous global drug development program (SGDDP). However, the operating characteristics of this statistical framework were not well studied. Therefore, a nested group sequential program for regional efficacy evaluation is proposed in this paper. It extends Huang's SGDDP framework and allows interim analyses after the MRCT and during the local clinical trial (LCT) phase. It controls the family-wise type I error rate at the program level and enhances the flexibility of the program. For LCT sample size estimation, we introduce a virtual trial, transformed from the original program using a discounting factor, and an iterative method is employed to calculate the sample size and the stopping boundaries of the interim analyses. The proposed sample size estimation method is validated in simulations, and the effects of varying the weight, the effect size in the TE population, and the design settings are explored. Examples with normal, binary, and survival endpoints illustrate the application of the proposed nested group sequential program.
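The SGDDP combination that this program extends can be written as a weighted Z statistic pooling the TE evidence from the MRCT with the LCT result. This is a sketch of the Huang et al. (2012) weighted-Z rule as commonly presented, with `w` playing the role of the weight/discounting factor; the nested group-sequential extension itself is not reproduced here.

```python
from statistics import NormalDist

def sgddp_combined_z(z_te_mrct, z_lct, w):
    """Weighted-Z combination for the target-ethnic (TE) population:
    Z = sqrt(w) * Z_TE(MRCT) + sqrt(1 - w) * Z_LCT.
    The square-root weights keep the combined statistic standard normal
    under the null when both components are independent N(0, 1)."""
    return w ** 0.5 * z_te_mrct + (1 - w) ** 0.5 * z_lct

# one-sided critical value at level 2.5%
z_crit = NormalDist().inv_cdf(0.975)
```

With `w = 1` all weight falls on the MRCT's TE subgroup; smaller `w` discounts it in favor of the local trial, which is what drives the LCT sample size calculation.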

8.
Laboratory data with a lower quantification limit (censored data) are sometimes analyzed by replacing non-quantifiable values with a single value equal to or less than the quantification limit, yielding possibly biased point estimates and variance estimates that are too small. Motivated by a three-period, three-treatment crossover study of a candidate vaginal microbicide in human immunodeficiency virus (HIV)-infected women, we consider four analysis methods for censored Gaussian data with a single follow-up measurement: nonparametric methods, mixed models, mixture models, and dichotomous measures of a treatment effect. We apply these methods to the crossover study data and use simulation to evaluate the statistical properties of these methods in analyzing the treatment effect in a two-treatment parallel-arm or crossover study with censored Gaussian data. Our simulated data and our mixed and mixture models consider treated follow-up data with the same variance as the baseline data or with an inflated variance. Mixed models have the correct type I error, the best power, the least biased Gaussian parameter treatment-effect estimates, and appropriate confidence interval coverage for these estimates. A crossover study analysis with a period effect can greatly increase the required study sample size. For both designs and both variance assumptions, published sample-size estimation methods do not yield a good estimate of the sample size to obtain the stated power.
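The single-value substitution bias the abstract warns about is easy to demonstrate by simulation. This is a generic illustration; the limit of quantification (LOQ), fill value, and distribution below are arbitrary choices, not the study's data.

```python
import random

def substitution_mean(values, loq, fill):
    """Mean after replacing values below the limit of quantification (LOQ)
    with a single fill-in value (e.g., LOQ/2) -- the practice the abstract
    warns can bias estimates."""
    return sum(v if v >= loq else fill for v in values) / len(values)

rng = random.Random(42)
true_mean, loq = 1.0, 0.8
sample = [rng.gauss(true_mean, 1.0) for _ in range(20000)]
naive = substitution_mean(sample, loq, loq / 2)   # biased: LOQ/2 > E[X | X < LOQ] here
```

Here about 42% of values fall below the LOQ, and `LOQ/2 = 0.4` sits well above the conditional mean of the censored tail, so the substitution estimate overshoots the true mean; a likelihood-based (mixed-model) analysis avoids this.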

9.
While planning clinical trials, when no simple formula is available to calculate the sample size, statistical simulations are used instead. However, considerable computation time is needed to obtain adequately precise and accurate simulated sample size estimates, especially when there are many planning scenarios and/or the specified statistical method is complicated. In this article, we summarize the theoretical aspects of statistical simulation-based sample size calculation. We then propose a new simulation procedure for sample size calculation that fits a probit model to the simulation result data. Theoretical and simulation-based evaluations suggest that the proposed procedure provides more efficient and accurate sample size estimates than the ordinary algorithm-based simulation procedure, especially when the estimated sample sizes are moderate to large; it would therefore help to dramatically reduce the computational time required to conduct clinical trial simulations.
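The core idea, fitting a probit model to simulated (sample size, empirical power) pairs and inverting it for the target power, can be sketched as follows. The linear-in-n probit specification is an assumption for illustration; the paper's model may differ.

```python
from statistics import NormalDist

def probit_sample_size(ns, powers, target=0.80):
    """Fit probit(power) = a + b * n by least squares to simulated
    (n, power) pairs, then solve for the n reaching the target power."""
    nd = NormalDist()
    zs = [nd.inv_cdf(p) for p in powers]          # probit-transform powers
    k = len(ns)
    mx, my = sum(ns) / k, sum(zs) / k
    b = (sum((x - mx) * (y - my) for x, y in zip(ns, zs))
         / sum((x - mx) ** 2 for x in ns))        # OLS slope
    a = my - b * mx                               # OLS intercept
    return (nd.inv_cdf(target) - a) / b           # invert for target power
```

Because the fit borrows strength across all simulated design points, each point needs fewer simulation replicates than a bisection-style search that estimates power at one n at a time.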

10.
Sample size adjustment at an interim analysis can mitigate the risk of failing to meet the study objective due to a lower-than-expected treatment effect. Without modification to the conventional statistical methods, the type I error rate will be inflated, primarily when the sample size is increased while the observed interim treatment effect is close to null. Modifications to the conventional statistical methods, such as changing critical values or using weighted test statistics, have been proposed to address this scenario, at the cost of flexibility or interpretability. In reality, increasing the sample size when interim results indicate no or a very small treatment effect could unnecessarily waste limited resources on an ineffective drug candidate. Such considerations have led to increased interest in sample size adjustment based on promising interim results. The 50% conditional power principle allows a sample size increase only when the unblinded interim results are promising, that is, when the conditional power is greater than 50%. The conventional unweighted test statistics and critical values can then be used without inflating the type I error rate. In this paper, statistical inference following such a design is assessed. As shown in the numerical study, the bias of the conventional maximum likelihood estimate (MLE) and the coverage error of its conventional confidence interval are generally small following sample size adjustment. We recommend using conventional, MLE-based statistical inference when applying the 50% conditional power principle for sample size adjustment. In this way, consistent statistics are used in both hypothesis testing and statistical inference.
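The 50% conditional power rule can be sketched with the standard Brownian-motion conditional power under the current trend. This is a generic formulation for a one-sided fixed-sample test; the paper's design details are not reproduced.

```python
from statistics import NormalDist

def conditional_power(z_interim, info_frac, alpha=0.025):
    """Conditional power under the current trend: probability the final
    unweighted Z exceeds z_{1-alpha}, given interim Z at information
    fraction t, assuming the interim drift estimate continues."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha)
    t = info_frac
    drift = z_interim / t ** 0.5                    # estimated drift
    num = z_interim * t ** 0.5 - z_a + drift * (1 - t)
    return nd.cdf(num / (1 - t) ** 0.5)

def allow_increase(z_interim, info_frac, alpha=0.025):
    """50% conditional power principle: increase N only if CP > 0.5."""
    return conditional_power(z_interim, info_frac, alpha) > 0.5
```

A promising interim result (say Z = 2.0 at half the information) gives conditional power near 0.9 and permits an increase, while a weak interim signal does not, which is the gatekeeping behavior that protects the type I error rate.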

11.
The cut point of an immunogenicity screening assay is the level of response at or above which a sample is defined to be positive and below which it is defined to be negative. The Food and Drug Administration Guidance for Industry on Assay Development for Immunogenicity Testing of Therapeutic Proteins recommends setting the cut point at the upper 95th percentile of the negative control patients. In this article, we assume that the assay data are a random sample from a normal distribution. The sample normal percentile is a point estimate whose variability decreases as the sample size increases. Therefore, when the sample size is not sufficiently large, the sample percentile does not assure a false-positive rate (FPR) of at least 5% with a high confidence level (e.g., 90%). With this concern, we propose to use a lower confidence limit for a percentile as the cut point instead. We have conducted an extensive literature review on the estimation of the statistical cut point and compare several selected methods for immunogenicity screening assay cut-point determination in terms of bias, coverage probability, and FPR. The selected methods are the sample normal percentile, the exact lower confidence limit of a normal percentile (Chakraborti and Li, 2007), and the approximate lower confidence limit of a normal percentile. It is shown that the actual coverage probability of the lower confidence limit of a normal percentile using the approximate normal method is much larger than the required confidence level when the number of assays conducted in practice is small. We recommend using the exact lower confidence limit of a normal percentile for cut-point determination.
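The approximate lower confidence limit for a normal percentile, one of the compared methods, can be sketched as follows. This large-sample form is an assumption standing in for the exact noncentral-t method the article recommends.

```python
from statistics import NormalDist

def percentile_lcl(mean, sd, n, p=0.95, conf=0.90):
    """Approximate one-sided lower (conf)-level confidence limit for the
    p-th percentile of a normal distribution, using the large-sample
    standard error of the percentile estimate xbar + z_p * s."""
    nd = NormalDist()
    zp, zc = nd.inv_cdf(p), nd.inv_cdf(conf)
    se = sd * (1 / n + zp**2 / (2 * (n - 1))) ** 0.5  # SE of percentile estimate
    return mean + zp * sd - zc * se
```

The limit always sits below the plug-in percentile estimate and converges to it as the number of assay runs grows, which is why the choice of method matters most at the small run counts typical in practice.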

12.
Longitudinal study designs are commonly applied in many areas of scientific research, especially in the medical, social, and economic sciences. Longitudinal studies allow researchers to measure changes in each individual’s responses over time and often have higher statistical power than cross-sectional studies. Choosing an appropriate sample size is a crucial step in a successful study. In longitudinal studies, sample size determination is less well studied because of the complexity of the design, which includes choosing both the number of individuals and the number of repeated measurements. This paper uses a simulation-based method to determine the sample size from a Bayesian perspective. Several Bayesian criteria for sample size determination are used, the most important being the Bayesian power criterion. We determine the sample size of a longitudinal study based on the scientific question of interest, through the choice of an appropriate model. Most sample size determination methods are based on a single hypothesis. In this paper, in addition to this approach, we determine the sample size using multiple hypotheses. The proposed Bayesian methods are illustrated and discussed through several examples.

13.
Some assay validation studies are conducted to assess agreement between repeated, paired continuous data measured on the same subject with different measurement systems. The goal of these studies is to show that there is an acceptable level of agreement between the measurement systems. Equivalence testing is a reasonable approach in assay validation. In this article, we use an equivalence-testing criterion based on a decomposition of a concordance correlation coefficient proposed by Lin (1989, 1992). Using a variance components approach, we develop bounds for conducting statistical tests using the proposed equivalence criterion. We conduct a simulation study to assess the performance of the bounds. The criteria are the ability to maintain the stated test size and the simulated power of the tests using these bounds. Bounds that perform well for small sample size are preferred. We present a computational example to demonstrate the methods described in the article.
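Lin's concordance correlation coefficient, whose decomposition underlies the equivalence criterion, is straightforward to compute. This is the plain sample-moment version, ignoring the variance-components structure used in the article.

```python
def concordance_cc(x, y):
    """Lin's concordance correlation coefficient:
    CCC = 2*s_xy / (s_x^2 + s_y^2 + (xbar - ybar)^2).
    Measures agreement (both precision and accuracy) between two
    measurement systems; equals 1 only for perfect agreement y = x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x) / n
    syy = sum((b - my) ** 2 for b in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (sxx + syy + (mx - my) ** 2)
```

Unlike the Pearson correlation, the CCC is penalized by a location shift through the `(xbar - ybar)^2` term, so a perfectly correlated but offset measurement system scores below 1.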

14.
The van Elteren test, as a type of stratified Wilcoxon-Mann-Whitney test for comparing two treatments accounting for stratum effects, has been used to replace the analysis of variance when the normality assumption was seriously violated. The sample size estimation methods for the van Elteren test have been proposed and evaluated previously. However, in designing an active-comparator trial where a sample of responses from the new treatment is available but the patient response data to the comparator are limited to summary statistics, the existing methods are either inapplicable or poorly behaved. In this paper we develop a new method for active-comparator trials assuming the responses from both treatments are from the same location-scale family. Theories and simulations have shown that the new method performs well when the location-scale assumption holds and works reasonably when the assumption does not hold. Thus, the new method is preferred when computing sample sizes for the van Elteren test in active-comparator trials.
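The van Elteren statistic itself is simple to compute when there are no ties: weight each stratum's centered Wilcoxon rank sum by 1/(N_s + 1). This is a sketch of the test being designed for, not of the new sample size method.

```python
def van_elteren_z(strata):
    """Van Elteren stratified Wilcoxon-Mann-Whitney Z statistic.
    `strata` is a list of (treatment, control) pairs of value lists; each
    stratum's rank sum is weighted by 1/(N_s + 1). No ties assumed."""
    num, var = 0.0, 0.0
    for treat, ctrl in strata:
        m, n = len(treat), len(ctrl)
        N = m + n
        pooled = sorted(treat + ctrl)
        # rank sum of the treatment observations (ranks start at 1)
        w = sum(pooled.index(t) + 1 for t in treat)
        e = m * (N + 1) / 2.0                 # null mean of the rank sum
        v = m * n * (N + 1) / 12.0            # null variance (no ties)
        wt = 1.0 / (N + 1)                    # van Elteren weight
        num += wt * (w - e)
        var += wt**2 * v
    return num / var**0.5
```

The 1/(N_s + 1) weights are what make the test locally most powerful under shift alternatives when stratum sizes differ, the property that motivates its use in stratified trials.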

15.
We suggest and compare different methods for estimating spatial autoregressive models with randomly missing data in the dependent variable. Aside from the traditional expectation-maximization (EM) algorithm, a nonlinear least squares method is suggested and a generalized method of moments estimation is developed for the model. A two-stage least squares estimation with imputation is proposed as well. We analytically compare these estimation methods and find that the generalized nonlinear least squares, best generalized two-stage least squares with imputation, and best method of moments estimators have identical asymptotic variances. These methods are less efficient than maximum likelihood estimation implemented with the EM algorithm. When unknown heteroscedasticity exists, however, EM estimation produces inconsistent estimates, and under this situation these methods outperform EM. We provide finite sample evidence through Monte Carlo experiments.

16.
A theory is developed for estimating a population value of AUC along with its standard deviation in the case when only one concentration-time (C-t) sample is available for each individual. This theory is based on model-independent pharmacokinetics. Integration methods are classified according to their applicability to the presented approach. The main goal of this work is to establish a statistical hypothesis-testing procedure that would make single C-t samples usable for bioequivalence studies. An application of the theory to a number of integration methods currently in use is analyzed in detail. A real data illustration is included.

17.
The van Elteren test, as a type of stratified Wilcoxon-Mann-Whitney test for comparing two treatments accounting for stratum effects, has been used to replace the analysis of variance when the normality assumption was seriously violated. The sample size estimation methods for the van Elteren test have been proposed and evaluated previously. However, in designing an active-comparator trial where a sample of responses from the new treatment is available but the patient response data to the comparator are limited to summary statistics, the existing methods are either inapplicable or poorly behaved. In this paper we develop a new method for active-comparator trials assuming the responses from both treatments are from the same location-scale family. Theories and simulations have shown that the new method performs well when the location-scale assumption holds and works reasonably when the assumption does not hold. Thus, the new method is preferred when computing sample sizes for the van Elteren test in active-comparator trials.

18.
The calculation of classical pharmacokinetic parameters from microdialysis data has been described in a previous paper. In this paper I derive methods for calculating AUMC and AUC from the time-integral type of data generated in microdialysis pharmacokinetics experiments. The method derived to estimate AUC is elementary, but it is given a theoretical basis using principles of mathematical real analysis, with the assumptions clearly stated. The method derived to estimate AUMC is a numerical approximation method based on the linear trapezoidal method. A simulation study was performed to evaluate the precision of the methods and to compare them with corresponding methods for the analysis of blood sample data. The estimates from the derived methods have a small bias and a small variance. In the simulation study I investigated the influence of the model parameters, the number of samples, the size of the statistical error, and the size of the AUC beyond the last sample. Finally, numerical examples from real data illustrate the use of the methods.
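The linear trapezoidal estimates of AUC and AUMC (and the mean residence time MRT = AUMC/AUC) up to the last sampling time can be sketched as follows. No extrapolation beyond the last sample is included; this is a generic implementation, not the paper's microdialysis-specific estimators.

```python
def auc_aumc_trapezoid(times, conc):
    """Linear trapezoidal estimates of AUC = integral of C dt and
    AUMC = integral of t*C dt up to the last sampling time;
    MRT = AUMC / AUC."""
    auc = aumc = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        auc += dt * (conc[i] + conc[i - 1]) / 2
        aumc += dt * (times[i] * conc[i] + times[i - 1] * conc[i - 1]) / 2
    return auc, aumc, aumc / auc
```

In practice a tail term (e.g., C_last/lambda_z for AUC) is added to cover the area beyond the last sample, one of the factors whose influence the simulation study examines.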

19.
For patients' convenience, dose administration of insulin via oral inhalation is often considered as an alternative to subcutaneous administration. An important statistical problem is to estimate dose equivalence, that is, the amount of drug that needs to be delivered by inhalation to generate a pharmacokinetic (PK) response equivalent to that produced by a therapeutic dose of subcutaneous insulin. Because of high intersubject variability, a crossover-design clinical trial is typically used, where data from both routes of administration are obtained from the same subject. A linear mixed effects model is proposed to describe the relationship between the PK response and the insulin dose for the two routes of administration. Estimation of dose equivalence in this setting has not been discussed in the statistical literature. Several competing methods for estimating dose equivalence are proposed and contrasted. A formula for calculating the approximate sample size necessary to estimate dose equivalence with a desired precision for the new route of administration is also provided.

20.
In clinical research, it is not uncommon to modify the trial procedures and/or statistical methods of ongoing clinical trials through protocol amendments. A major modification to the study protocol could result in a shift in the target patient population. In addition, frequent and significant modifications could lead to a totally different study that is unable to address the medical questions the original study intended to answer. In this article, we propose a logistic regression model for statistical inference based on a binary study endpoint for trials with protocol amendments. A sample size adjustment is also derived under the proposed method.
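How an amendment-induced population shift enters a logistic model can be illustrated in the simplest special case, where the amendment indicator is the only covariate and the MLE has closed form. This is a sketch of the modeling idea, not the paper's full model.

```python
import math

def amendment_shift_logodds(x_pre, n_pre, x_post, n_post):
    """For a logistic model logit(p) = b0 + b1 * I(post-amendment), the MLE
    of b1 is the log odds ratio between post- and pre-amendment response
    rates -- a closed-form special case showing how a protocol amendment
    enters the model as a population-shift parameter."""
    p0, p1 = x_pre / n_pre, x_post / n_post
    logit = lambda p: math.log(p / (1 - p))
    b0 = logit(p0)          # baseline (pre-amendment) log odds
    b1 = logit(p1) - b0     # shift attributable to the amendment
    return b0, b1
```

A b1 far from zero signals that the amendment changed the target population's response distribution, which is exactly the situation in which the derived sample size adjustment matters.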
