Similar Articles
1.
Statistical inference based on correlated count measurements is frequently performed in biomedical studies. Most existing sample size calculation methods for count outcomes are developed under the Poisson model. Deviation from the Poisson assumption (equality of mean and variance) has been widely documented in practice, which indicates an urgent need for sample size methods with more realistic assumptions to ensure valid experimental design. In this study, we investigate sample size calculation for clinical trials with correlated count measurements based on the negative binomial distribution. This approach is flexible enough to accommodate overdispersion and unequal measurement intervals, as well as arbitrary randomization ratios, missing data patterns, and correlation structures. Importantly, the derived sample size formulas have closed forms both for the comparison of slopes and for the comparison of time-averaged responses, which greatly reduces the burden of implementation in practice. We conducted extensive simulations to demonstrate that the proposed method maintains the nominal levels of power and type I error over a wide range of design configurations. We illustrate the application of this approach using data from a real epilepsy trial.
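The paper's closed-form expressions are not reproduced in this abstract. As a rough illustration only, a widely cited negative binomial sample size calculation for comparing two event rates (a Zhu and Lakkis-type formula, not necessarily the one derived in the paper; all names and defaults below are illustrative) can be sketched as:

```python
from math import log, ceil
from statistics import NormalDist

def nb_sample_size(mu1, mu2, k, t=1.0, alpha=0.05, power=0.9):
    """Per-group n for testing H0: mu1 == mu2 with negative binomial
    counts (dispersion k, equal follow-up time t), Wald test on the
    log rate ratio. A generic sketch, not the paper's exact formula."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    # variance of the log rate ratio contributed by each arm
    var = (1 / (t * mu1) + k) + (1 / (t * mu2) + k)
    return ceil((z_a + z_b) ** 2 * var / log(mu1 / mu2) ** 2)
```

For instance, detecting a halving of the event rate (mu1 = 1.0 vs mu2 = 0.5) with dispersion k = 0.5 gives 88 subjects per group at 90% power; larger overdispersion raises the requirement.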

2.
Cluster randomized designs are frequently employed in pragmatic clinical trials, which test interventions across the full spectrum of everyday clinical settings in order to maximize applicability and generalizability. In this study, we propose to directly incorporate pragmatic features into power analysis for cluster randomized trials with count outcomes. The pragmatic features considered include an arbitrary randomization ratio, overdispersion, random variability in cluster size, and unequal lengths of follow-up over which the count outcome is measured. The proposed method is based on generalized estimating equations (GEE) and has the advantage that the sample size formula retains a closed form, facilitating its implementation in pragmatic trials. We theoretically explore the impact of various pragmatic features on sample size requirements. An efficient jackknife algorithm is presented to address the variance underestimation of the GEE sandwich estimator when the number of clusters is small. We assess the performance of the proposed sample size method through extensive simulation and present an application to a real clinical trial.

3.
Sample size estimation in clinical trials depends critically on nuisance parameters, such as variances or overall event rates, which have to be guessed or estimated from previous studies in the planning phase of a trial. Blinded sample size reestimation estimates these nuisance parameters from blinded data of the ongoing trial and allows the sample size to be adjusted based on the acquired information. In the present paper, this methodology is developed for clinical trials with count data as the primary endpoint. In multiple sclerosis, such endpoints are commonly used in phase 2 trials (lesion counts in magnetic resonance imaging (MRI)) and phase 3 trials (relapse counts). Sample size adjustment formulas are presented both for Poisson-distributed data and for overdispersed Poisson-distributed data. The latter arise from the sometimes considerable between-patient heterogeneity that can be observed in particular in MRI lesion counts. The operating characteristics of the procedure are evaluated by simulation, and recommendations on how to choose the size of the internal pilot study are given. The results suggest that blinded sample size reestimation for count data maintains the required power without inflating the type I error. Copyright © 2010 John Wiley & Sons, Ltd.
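Blinded reestimation plugs an updated blinded estimate of the overall event rate back into the planning formula and recomputes the sample size. As a hedged sketch of the Poisson case only (the overdispersed adjustment in the paper is more involved; function and parameter names are illustrative):

```python
from math import ceil
from statistics import NormalDist

def poisson_n(lam1, lam2, t=1.0, alpha=0.05, power=0.9):
    """Per-group n for comparing two Poisson rates lam1, lam2 observed
    over follow-up time t (Wald test on the rate difference).
    A textbook approximation, not the paper's exact procedure."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return ceil(z ** 2 * (lam1 + lam2) / (t * (lam1 - lam2) ** 2))
```

In a blinded reestimation step, lam1 and lam2 would be reconstructed from the blinded overall rate together with the assumed rate ratio; longer follow-up t proportionally reduces the required number of patients.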

4.
Objective: To present the most commonly used sample size formula, together with the corresponding SAS program and PASS procedure, for parallel-group non-inferiority clinical trials with binary outcomes, and to provide guidance on setting the relevant parameters. Methods: A sample size estimation formula was derived from the normal approximation to the binomial distribution, and the SAS program and PASS procedure were used to examine how sample size and power change as the key parameters (sample proportions and the non-inferiority margin) vary. Results: For non-inferiority tests of proportions, the formula, the SAS program, and the PASS procedure gave consistent sample size results. With the significance level and the control-group proportion fixed, the required sample size decreases as the test-group proportion increases, the required power decreases, or the non-inferiority margin increases. Conclusion: The formula, SAS program, and PASS procedure provided here allow researchers to obtain sample sizes for two-group parallel non-inferiority designs with binary data quickly and systematically. The test-group proportion, the power, and the non-inferiority margin are parameters that must be considered carefully when estimating the sample size of a non-inferiority trial.
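The normal-approximation calculation this abstract refers to can be sketched as follows (a hedged illustration of the standard two-group non-inferiority formula for proportions with a higher-is-better outcome; it may differ in detail from the exact formula and SAS/PASS settings used in the paper):

```python
from math import ceil
from statistics import NormalDist

def ni_sample_size(p_t, p_c, margin, alpha=0.025, power=0.8):
    """Per-group n for a two-arm non-inferiority test of proportions
    (normal approximation, unpooled variance, margin > 0).
    A sketch of the standard formula; names are illustrative."""
    z_a = NormalDist().inv_cdf(1 - alpha)  # one-sided test
    z_b = NormalDist().inv_cdf(power)
    num = (z_a + z_b) ** 2 * (p_t * (1 - p_t) + p_c * (1 - p_c))
    return ceil(num / (p_t - p_c + margin) ** 2)
```

With a control rate of 0.85, a test rate of 0.85, a margin of 0.10, one-sided alpha 0.025, and 80% power, this gives 201 per group; raising the test-group rate or widening the margin lowers the requirement, matching the trend reported above.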

5.
Counts of events are increasingly common as primary endpoints in randomized clinical trials. With between-patient heterogeneity leading to variances in excess of the mean (referred to as overdispersion), statistical models reflecting this heterogeneity by mixtures of Poisson distributions are frequently employed. Sample size calculation in the planning of such trials requires knowledge of the nuisance parameters, that is, the control (or overall) event rate and the overdispersion parameter. Usually, there is only little prior knowledge regarding these parameters in the design phase, resulting in considerable uncertainty about the sample size. In this situation internal pilot studies have been found very useful, and recently several blinded procedures for sample size re-estimation have been proposed for overdispersed count data, one of which is based on an EM algorithm. In this paper we investigate the implementation of the EM-algorithm-based procedure by studying the algorithm's dependence on the choice of convergence criterion, and find that the procedure is sensitive to the choice of the stopping criterion in scenarios relevant to clinical practice. We also compare the EM-based procedure with competing procedures in terms of operating characteristics such as the sample size distribution and power. Furthermore, the robustness of these procedures to deviations from the model assumptions is explored. We find that some of the procedures are robust to at least moderate deviations. The results are illustrated using data from the Asymptomatic Cardiac Ischemia Pilot study sponsored by the US National Heart, Lung, and Blood Institute. Copyright © 2013 John Wiley & Sons, Ltd.

6.
Owing to the rapid development of biomarkers in clinical trials, joint modeling of longitudinal and survival data has gained popularity in recent years because it reduces bias and improves the efficiency of assessing treatment effects and other prognostic factors. Although much effort has been put into inferential methods in joint modeling, such as estimation and hypothesis testing, design aspects have not been formally considered. Statistical design, such as sample size and power calculation, is a crucial first step in clinical trials. In this paper, we derive a closed-form sample size formula for estimating the effect of the longitudinal process in joint modeling, and extend Schoenfeld's sample size formula to the joint modeling setting for estimating the overall treatment effect. The sample size formula we develop is quite general, allowing for p-degree polynomial trajectories. The robustness of our model is demonstrated in simulation studies with linear and quadratic trajectories. We discuss the impact of the within-subject variability on power and on data collection strategies, such as the spacing and frequency of repeated measurements, in order to maximize power. When the within-subject variability is large, different data collection strategies can influence the power of the study in a significant way. The optimal frequency of repeated measurements also depends on the nature of the trajectory, with higher-degree polynomial trajectories and larger measurement error requiring more frequent measurements.

7.
In clinical trials with time-to-event endpoints, it is not uncommon to see a significant proportion of patients being cured (or long-term survivors), as in trials for non-Hodgkin lymphoma. The popular sample size formula derived under the proportional hazards (PH) model may not be appropriate for designing a survival trial with a cure fraction, because the PH assumption may be violated. To account for a cure fraction, the PH cure model is widely used in practice, in which a PH model describes the survival times of uncured patients and a logistic model describes the probability of being cured. In this paper, we develop a sample size formula based on the PH cure model by investigating the asymptotic distributions of the standard weighted log-rank statistics under the null and local alternative hypotheses. The derived sample size formula is more flexible because it can be used to test differences in short-term survival and/or in the cure fraction. Furthermore, we investigate through numerical examples the impacts of accrual methods and of the durations of the accrual and follow-up periods on sample size calculation. The results show that ignoring the cure rate in sample size calculation can lead to either underpowered or overpowered studies. We evaluate the performance of the proposed formula by simulation studies and provide an example illustrating its application with data from a melanoma trial. Copyright © 2012 John Wiley & Sons, Ltd.

8.
Many different methods have been proposed for the analysis of cluster randomized trials (CRTs) over the last 30 years. However, the evaluation of methods for overdispersed count data has been based mostly on comparisons of results using empirical data, i.e. when the true model parameters are not known. In this study, we assess via simulation the performance of five methods for the analysis of counts in situations similar to real community-intervention trials. We used the negative binomial distribution to simulate overdispersed counts in CRTs with two study arms, allowing the period of time under observation to vary among individuals. We assessed different sample sizes, degrees of clustering, and degrees of cluster-size imbalance. The methods compared are: (i) the two-sample t-test of cluster-level rates, (ii) generalized estimating equations (GEE) with empirical covariance estimators, (iii) GEE with model-based covariance estimators, (iv) generalized linear mixed models (GLMM), and (v) Bayesian hierarchical models (Bayes-HM). Variation in sample size and clustering led to differences between the methods in terms of coverage, significance, power, and random-effects estimation. GLMM and Bayes-HM performed better in general, with Bayes-HM producing less dispersed random-effects estimates, although these were biased upward when clustering was low. GEE showed higher power but anticonservative coverage and elevated type I error rates. Imbalance affected the overall performance of the cluster-level t-test and the coverage of GEE in small samples. The importance of accounting for overdispersion is illustrated through the analysis of a community-intervention trial on Solar Water Disinfection in rural Bolivia. Copyright © 2009 John Wiley & Sons, Ltd.

9.
Jung SH, Ahn CW. Statistics in Medicine 2005, 24(17): 2583-2596.
Controlled clinical trials often randomize subjects to two treatment groups and repeatedly evaluate them at baseline and at intervals across a treatment period of fixed duration. A popular primary objective in these trials is to compare the rates of change in the repeated measurements between treatment groups. Repeated measurements usually involve missing data and serial correlation within each subject. The generalized estimating equation (GEE) method has been widely used to fit the time trend in repeated measurements because of its robustness to random missingness and misspecification of the true correlation structure. In this paper, we propose a closed-form sample size formula for comparing the rates of change of binary repeated measurements between two groups using GEE. The sample size formula is derived incorporating missingness patterns, such as independent and monotone missingness, and correlation structures such as the AR(1) model. We also propose an algorithm to generate correlated binary data with arbitrary marginal means and a Markov dependency, and use it in simulation studies.

10.
Sequential tests are increasingly used to reduce the expected sample size of trials in medical research. The majority of such methods are based on the assumption of normality for test statistics. In clinical trials yielding a single sample of discrete data, that assumption is often poorly satisfied. In this paper we show how a novel application of the spending function approach of Lan and DeMets can be used together with exact calculation methods to design sequential procedures for a single sample of discrete random variables without the assumption of normality. A special case is that of binomial data, and the paper is illustrated by the design of a cytogenetic study which motivated this work.

11.
In cluster-randomized trials, groups of individuals (clusters) are randomized to the treatments or interventions being compared. In many of these trials, the primary objective is to compare the time to an event between randomized groups, and the shared frailty model fits clustered time-to-event data well. Members of the same cluster tend to be more similar than members of different clusters, inducing correlation. As correlation affects the power of a trial to detect intervention effects, the clustered design has to be considered when planning the sample size. In this publication, we derive a sample size formula for clustered time-to-event data with constant marginal baseline hazards and within-cluster correlation induced by a shared frailty term. The sample size formula is easy to apply and can be interpreted as an extension of the widely used Schoenfeld formula, accounting for the clustered design of the trial. Simulations confirm the validity of the formula and its use also for non-constant marginal baseline hazards. The findings are illustrated with a cluster-randomized trial investigating methods of disseminating quality improvement to addiction treatment centers in the USA. Copyright © 2012 John Wiley & Sons, Ltd.
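The paper describes its formula as an extension of Schoenfeld's. A minimal sketch of that idea, using the generic inflation factor 1 + (m - 1) * rho rather than the paper's exact frailty-based correction (all defaults below are illustrative assumptions):

```python
from math import log, ceil
from statistics import NormalDist

def clustered_events(hr, p_alloc=0.5, alpha=0.05, power=0.8,
                     m=10, rho=0.05):
    """Schoenfeld-style required number of events, inflated by the
    usual design effect 1 + (m - 1) * rho for clusters of size m with
    intracluster correlation rho. A heuristic sketch, not the shared
    frailty formula derived in the paper."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    d = z ** 2 / (p_alloc * (1 - p_alloc) * log(hr) ** 2)
    return ceil(d * (1 + (m - 1) * rho))
```

For a hazard ratio of 0.7, clusters of size 10, and rho = 0.05, this yields 358 required events, versus 247 when clustering is ignored (m = 1).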

12.
The two-test two-population model, originally formulated by Hui and Walter for the estimation of test accuracy and prevalence, assumes conditionally independent tests, constant accuracy across populations, and binomial sampling. The binomial assumption is incorrect if all individuals in a population (e.g. a child-care centre, a village in Africa, or a cattle herd) are sampled, or if the sample size is large relative to the population size. In this paper, we develop statistical methods for evaluating diagnostic test accuracy and estimating prevalence from finite-sample data in the absence of a gold standard. Moreover, two tests are often applied simultaneously to obtain a 'joint' testing strategy with either higher overall sensitivity or higher overall specificity than either test considered singly. Sequential versions of such strategies are often applied in order to reduce the cost of testing. We therefore discuss joint (simultaneous and sequential) testing strategies and inference for them. Using the developed methods, we analyse two real data sets and one simulated data set, and compare 'hypergeometric' and 'binomial-based' inferences. Our findings indicate that the posterior standard deviations for prevalence (but not for sensitivity and specificity) based on finite population sampling tend to be smaller than their counterparts for infinite population sampling. Finally, we make recommendations about how small the sample size should be relative to the population size to warrant use of the binomial model for prevalence estimation.

13.
Type I error probability spending functions are commonly used for designing sequential analyses of binomial data in clinical trials, and they are also quickly emerging for near-continuous sequential analysis in post-market drug and vaccine safety surveillance. It is well known that, for clinical trials, it is important to minimize the expected sample size when the null hypothesis is not rejected. In post-market safety surveillance, however, this is not the case: especially when the surveillance aims to identify potential safety signals, the meaningful statistical performance measure to minimize is the expected sample size when the null hypothesis is rejected. The present paper shows that, instead of the convex Type I error spending shape conventionally used in clinical trials, a concave shape is better suited for post-market drug and vaccine safety surveillance. This is shown for both continuous and group sequential analysis.

14.
BACKGROUND: Cluster randomized trials are increasingly popular. In many of these trials, cluster sizes are unequal. This can affect trial power, but standard sample size formulae for these trials ignore this. Previous studies addressing this issue have mostly focused on continuous outcomes or methods that are sometimes difficult to use in practice. METHODS: We show how a simple formula can be used to judge the possible effect of unequal cluster sizes for various types of analyses and both continuous and binary outcomes. We explore the practical estimation of the coefficient of variation of cluster size required in this formula and demonstrate the formula's performance for a hypothetical but typical trial randomizing UK general practices. RESULTS: The simple formula provides a good estimate of sample size requirements for trials analysed using cluster-level analyses weighting by cluster size and a conservative estimate for other types of analyses. For trials randomizing UK general practices the coefficient of variation of cluster size depends on variation in practice list size, variation in incidence or prevalence of the medical condition under examination, and practice and patient recruitment strategies, and for many trials is expected to be approximately 0.65. Individual-level analyses can be noticeably more efficient than some cluster-level analyses in this context. CONCLUSIONS: When the coefficient of variation is <0.23, the effect of adjustment for variable cluster size on sample size is negligible. Most trials randomizing UK general practices and many other cluster randomized trials should account for variable cluster size in their sample size calculations.
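The simple formula referred to is consistent with a commonly cited design-effect approximation for variable cluster sizes; a sketch under that assumption (the paper's exact expression may differ):

```python
def design_effect(m_bar, rho, cv=0.0):
    """Design effect for a cluster randomized trial with mean cluster
    size m_bar, intracluster correlation rho, and coefficient of
    variation of cluster size cv. A commonly cited approximation,
    not necessarily the paper's exact formula."""
    return 1 + ((cv ** 2 + 1) * m_bar - 1) * rho
```

With a mean cluster size of 20 and ICC 0.05, cv = 0.23 inflates the design effect by under 3% relative to equal clusters (consistent with the "negligible" threshold above), whereas cv = 0.65 inflates it by roughly 22%.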

15.
Objective: To examine the differences among sample size calculation methods when designing single-group target-value trials with a rate as the primary endpoint. Methods: The principles and numerical results of different sample size calculation methods were compared, and Monte Carlo simulation was further used to assess the actual power achieved by the sample sizes obtained from each method and to verify the correctness of the results. Results: When the anticipated event rate P differs from the target value P0 by 10%, the normal approximation method and the general exact probability method yield similar sample sizes and power; however, when P is close to 1.0 (P > 0.85), the exact probability method gives slightly smaller sample sizes and markedly lower power than the normal approximation method. The exact probability method for rare events always requires smaller sample sizes than the normal approximation method and the general exact probability method. Monte Carlo simulation showed that, under the same parameter settings, the sample size given by the normal approximation method provides adequate power, whereas the sample size from the rare-event exact probability method provides power that is too low. Conclusion: To test whether the success rate of a medical device is not inferior to a clinically accepted standard, the sample size calculated by the normal approximation method is more likely to provide adequate power, particularly for small clinical trials.
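The normal-approximation method compared in the abstract can be sketched as follows (a standard single-arm target-value formula; names and defaults are illustrative, and the exact variants compared in the paper may differ):

```python
from math import ceil, sqrt
from statistics import NormalDist

def single_arm_n(p, p0, alpha=0.025, power=0.8):
    """Single-arm sample size for testing success rate p against a
    target value p0 (p > p0) by normal approximation, with the null
    and alternative standard errors treated separately.
    A sketch of the standard formula."""
    z_a = NormalDist().inv_cdf(1 - alpha)  # one-sided test
    z_b = NormalDist().inv_cdf(power)
    num = (z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p * (1 - p))) ** 2
    return ceil(num / (p - p0) ** 2)
```

For an anticipated success rate of 0.90 against a target value of 0.80 (the 10% difference discussed in the abstract), this gives n = 108 at one-sided alpha 0.025 and 80% power.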

16.
In this paper, we develop an estimation procedure for the parameters of a zero-inflated overdispersed/underdispersed count model in the presence of missing responses. In particular, we deal with a zero-inflated extended negative binomial model in the presence of missing responses. A weighted expectation-maximization algorithm is used for maximum likelihood estimation of the parameters involved. Simulations are conducted to study the properties of the estimators. The robustness of the procedure is shown when the count data follow other overdispersed models, such as a log-normal mixture of the Poisson distribution, or even a zero-inflated Poisson model. An illustrative example and a discussion leading to some conclusions are given. Copyright © 2016 John Wiley & Sons, Ltd.

17.
In the field of diagnostic medicine, comparative clinical trials are necessary for assessing the utility of one diagnostic test over another. The area under the receiver operating characteristic (ROC) curve, commonly referred to as the AUC, is a general measure of a test's inherent ability to distinguish between patients with and without a condition. The standardized AUC difference is the statistic most frequently used for comparing two diagnostic tests. In therapeutic comparative clinical trials with sequential patient entry, a fixed sample design (FSD) is unjustified on ethical and economic grounds, and group sequential design (GSD) is frequently used. In this paper, we argue that the same reasoning applies to comparative clinical trials in diagnostic medicine, and hence that GSD should be utilized in designing trials in this field. Since computation of the stopping boundaries of a GSD and data analysis after a group sequential test rely heavily on the Brownian motion approximation, we derive the asymptotic distribution of the standardized AUC difference statistic and point out its resemblance to Brownian motion. Boundary determination and sample size calculation are then illustrated through an example from a cancer clinical trial.

18.
Xie T, Waksman J. Statistics in Medicine 2003, 22(18): 2835-2846.
Many clinical trials involve the collection of data on the time to occurrence of multiple events of the same type within sampling units, in which the ordering of events is arbitrary and the times are usually correlated. To design a clinical trial with this type of clustered survival times as the primary endpoint, estimating the number of subjects (sampling units) required for a given power to detect a specified treatment difference is an important issue. In this paper we derive a sample size formula for clustered survival data via the marginal model of Lee, Wei and Amato. It can easily be used to plan a clinical trial in which clustered survival times are of primary interest. Simulation studies demonstrate that the formula works very well. We also discuss and compare the clustered survival time design and the single survival time design (for example, time to the first event) in different scenarios.

19.
Mouse embryo assays are recommended for testing the toxicity of materials used for in vitro fertilization. In such assays, a number of embryos are divided into a control group, which is exposed to a neutral medium, and a test group, which is exposed to a potentially toxic medium. Inferences on toxicity are based on observed differences in successful embryo development between the two groups. However, mouse embryo assays tend to lack power due to small group sizes. This paper focuses on the sample size calculations for one such assay, the Nijmegen mouse embryo assay (NMEA), in order to obtain an efficient and statistically validated design. The NMEA follows a stratified (mouse), randomized (embryo), balanced design (also known as a split-cluster design). We adopted a beta-binomial approach and obtained a closed-form sample size formula based on an estimator of the within-cluster variance. Our approach assumes that the average success rate of the mice and its variance, breed characteristics that can easily be estimated from historical data, are known. To evaluate the performance of the sample size formula, a simulation study was undertaken, which suggested that the predicted sample size is quite accurate. We confirmed that incorporating the a priori knowledge and exploiting the intra-cluster correlations enable a smaller sample size. We also explored some departures from the beta-binomial assumption. First, departures from the compound beta-binomial distribution to an arbitrary compound binomial distribution lead to the same formulas, as long as some general assumptions hold. Second, our sample size formula is comparable to the one derived from a linear mixed model for continuous outcomes when the compound (beta-)binomial estimator is used for the within-cluster variance.

20.
In this paper we propose a sample size calculation method for testing a binomial proportion when binary observations are dependent within clusters. In estimating the binomial proportion from clustered binary data, two weighting systems have been popular: equal weights to clusters and equal weights to units within clusters. When the number of units varies from cluster to cluster, the performance of these two weighting systems depends on the extent of correlation among units within each cluster. In addition to these, we also use an optimal weighting method that minimizes the variance of the estimator. A sample size formula is derived for each of the estimators with its weighting scheme. We apply these methods to the sample size calculation for the sensitivity of a periodontal diagnostic test. Simulation studies are conducted to evaluate the finite-sample performance of the three estimators. We also assess the influence of misspecified input parameter values on the calculated sample size. The optimal estimator requires equal or smaller sample sizes and is more robust to the misspecification of an input parameter than the estimators assigning equal weights to units or to clusters.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号