Similar Literature
20 similar documents found.
1.
The negative binomial model has been increasingly used to model count data in recent clinical trials. It is frequently chosen over the Poisson model for the overdispersed count data commonly seen in clinical trials. One of the challenges of applying the negative binomial model in clinical trial design is sample size estimation. In practice, simulation methods have frequently been used for this purpose. In this paper, an explicit formula is developed to calculate the sample size based on the negative binomial model. Depending on the approach used to estimate the variance under the null hypothesis, three variations of the sample size formula are proposed and discussed. Important characteristics of the formula include its accuracy and its ability to explicitly incorporate the dispersion parameter and exposure time. The performance of each variation of the formula is assessed using simulations. Copyright © 2013 John Wiley & Sons, Ltd.
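For orientation, here is a minimal Python sketch of a sample size formula of this type for comparing two negative binomial event rates on the log rate-ratio scale; the three null-variance options shown are illustrative and may not match the paper's exact variants.

```python
from math import log, sqrt, ceil
from scipy.stats import norm

def nb_sample_size(r0, rr, kappa, t, alpha=0.05, power=0.9, null_var="true"):
    """Per-group sample size for comparing two negative binomial event rates
    on the log rate-ratio scale, 1:1 allocation.  r0: control rate, rr: rate
    ratio under the alternative, kappa: dispersion, t: exposure time.
    The null-variance options are illustrative, not the paper's exact ones."""
    r1 = r0 * rr
    # variance of the log rate-ratio estimate (per subject) under the alternative
    v1 = 1.0 / (t * r0) + 1.0 / (t * r1) + 2.0 * kappa
    if null_var == "control":        # both arms at the control rate
        v0 = 2.0 / (t * r0) + 2.0 * kappa
    elif null_var == "average":      # both arms at the average rate
        v0 = 2.0 / (t * (r0 + r1) / 2.0) + 2.0 * kappa
    else:                            # "true": same variance as the alternative
        v0 = v1
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return ceil((z_a * sqrt(v0) + z_b * sqrt(v1)) ** 2 / log(rr) ** 2)

# Example: control rate 1 event/year, rate ratio 0.75, dispersion 0.8, 1 year exposure
print(nb_sample_size(r0=1.0, rr=0.75, kappa=0.8, t=1.0))
```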

2.
Counts of events are increasingly common as primary endpoints in randomized clinical trials. With between-patient heterogeneity leading to variances in excess of the mean (referred to as overdispersion), statistical models reflecting this heterogeneity by mixtures of Poisson distributions are frequently employed. Sample size calculation in the planning of such trials requires knowledge of the nuisance parameters, that is, the control (or overall) event rate and the overdispersion parameter. Usually, there is only little prior knowledge regarding these parameters in the design phase, resulting in considerable uncertainty regarding the sample size. In this situation internal pilot studies have been found very useful, and very recently several blinded procedures for sample size re-estimation have been proposed for overdispersed count data, one of which is based on an EM algorithm. In this paper we investigate the EM-algorithm-based procedure with respect to aspects of its implementation by studying the algorithm's dependence on the choice of convergence criterion, and we find that the procedure is sensitive to the choice of the stopping criterion in scenarios relevant to clinical practice. We also compare the EM-based procedure to other competing procedures regarding their operating characteristics, such as sample size distribution and power. Furthermore, the robustness of these procedures to deviations from the model assumptions is explored. We find that some of the procedures are robust to at least moderate deviations. The results are illustrated using data from the US National Heart, Lung and Blood Institute sponsored Asymptomatic Cardiac Ischemia Pilot study. Copyright © 2013 John Wiley & Sons, Ltd.

3.
Sumi M, Tango T. Statistics in Medicine 2010, 29(30): 3186-3193
As statistical methods for testing the null hypothesis of no difference between two groups under the matched pairs design, the paired t-test, the Wilcoxon signed rank test and McNemar's test are well known. However, there is no simple test for the comparison of incidence rates of recurrent events. This paper proposes a simple statistical method and a sample size formula for the comparison of counts of recurrent events over a specified period of observation under the matched pairs design, where the subject-specific incidence of recurrent events is assumed to follow a time-homogeneous Poisson process. As a special case, the proposed method is found to be virtually equivalent in form to the Mantel-Haenszel method for a common rate ratio among a set of stratified tables based on person-time data. The proposed methods are illustrated with the within-arm comparison of data from a clinical trial of 59 patients with epilepsy with baseline count data.
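A hedged sketch of the conditional-binomial idea underlying such matched-pairs comparisons (condition on each pair's total count, so the subject-specific Poisson rates cancel under the null); the paper's actual statistic and sample size formula may differ in detail.

```python
import numpy as np
from scipy.stats import norm

def matched_pair_rate_test(x, y, t_ratio=1.0):
    """Conditional test of equal event rates for matched pairs of recurrent-
    event counts.  x[i], y[i] are the counts for the two members of pair i;
    t_ratio is the ratio of observation times t_x / t_y (1.0 for equal
    follow-up).  A sketch of the conditional-binomial idea only."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    m = x + y                        # pair totals (condition on these)
    pi0 = t_ratio / (1.0 + t_ratio)  # null probability that an event is from x
    s = x.sum()
    mean0 = (m * pi0).sum()
    var0 = (m * pi0 * (1 - pi0)).sum()
    z = (s - mean0) / np.sqrt(var0)
    p = 2 * (1 - norm.cdf(abs(z)))
    return z, p

# Example with 5 pairs of counts
z, p = matched_pair_rate_test([3, 0, 4, 2, 6], [1, 1, 2, 0, 3])
print(round(z, 3), round(p, 3))
```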

4.
Jung SH, Ahn C. Statistics in Medicine 2003, 22(8): 1305-1315
Sample size calculation is an important component at the design stage of clinical trials. Controlled clinical trials often use a repeated measurement design in which individuals are randomly assigned to treatment groups and followed up for measurements at intervals across a treatment period of fixed duration. In studies with repeated measurements, one common primary interest is the comparison of the rates of change in a response variable between groups. Statistical models for calculating sample sizes for repeated measurement designs often fail to take the impact of missing data correctly into account. In this paper we propose to use the generalized estimating equation (GEE) method to compare the rates of change in repeated measurements and introduce closed-form formulae for sample size and power that can be calculated using a scientific calculator. Since the sample size formula is based on asymptotic theory, we investigate the performance of the estimated sample size in practical settings through simulations.
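As background, a simpler textbook-style calculation for comparing rates of change under a compound-symmetry correlation structure with complete follow-up is sketched below; the GEE-based formula in the paper additionally accounts for missing-data patterns, so treat this only as an illustration of the ingredients involved.

```python
from math import ceil
import numpy as np
from scipy.stats import norm

def slope_comparison_sample_size(times, sigma2, rho, delta, alpha=0.05, power=0.8):
    """Per-group sample size for comparing rates of change (slopes) between
    two groups measured at the given times, assuming a common residual
    variance sigma2, exchangeable (compound-symmetry) correlation rho,
    complete follow-up and 1:1 allocation.  Simplified textbook formula,
    not the paper's GEE-based formula with missing data."""
    t = np.asarray(times, float)
    sst = ((t - t.mean()) ** 2).sum()          # spread of measurement times
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n = 2 * z ** 2 * sigma2 * (1 - rho) / (delta ** 2 * sst)
    return ceil(n)

# Example: visits at 0, 0.5, 1, 1.5, 2 years, detect a slope difference of 0.4/year
print(slope_comparison_sample_size([0, 0.5, 1, 1.5, 2], sigma2=4.0, rho=0.5, delta=0.4))
```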

5.
In randomized clinical trials, it is standard to include baseline variables in the primary analysis as covariates, as recommended by international guidelines. For the study design to be consistent with the analysis, these variables should also be taken into account when calculating the sample size to appropriately power the trial. Because assumptions made in the sample size calculation are always subject to some degree of uncertainty, a blinded sample size reestimation (BSSR) is recommended to adjust the sample size when necessary. In this article, we introduce a BSSR approach for count data outcomes with baseline covariates. Count outcomes are common in clinical trials; examples include the number of exacerbations in asthma and chronic obstructive pulmonary disease, relapses and scan lesions in multiple sclerosis, and seizures in epilepsy. The introduced methods are based on Wald and likelihood ratio test statistics. The approaches are illustrated by a clinical trial in epilepsy. The BSSR procedures proposed are compared in a Monte Carlo simulation study and shown to yield power values close to the target while not inflating the type I error rate.

6.
In the three-arm 'gold standard' non-inferiority design, an experimental treatment, an active reference, and a placebo are compared. This design is becoming increasingly popular, and it is, whenever feasible, recommended for use by regulatory guidelines. We provide a general method to calculate the required sample size for clinical trials performed in this design. As special cases, the situations of continuous, binary, and Poisson distributed outcomes are explored. Taking into account the correlation structure of the involved test statistics, the proposed approach leads to considerable savings in sample size as compared with application of ad hoc methods for all three scale levels. Furthermore, optimal sample size allocation ratios are determined that result in markedly smaller total sample sizes as compared with equal assignment. As optimal allocation makes the active treatment groups larger than the placebo group, implementation of the proposed approach is also desirable from an ethical viewpoint. Copyright © 2012 John Wiley & Sons, Ltd.
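To illustrate how the retention-of-effect hypothesis drives the calculation, here is a hedged sketch for the continuous-endpoint case with a user-chosen allocation ratio; the paper's method also covers binary and Poisson outcomes and derives optimal allocations, which this sketch does not.

```python
from math import ceil
from scipy.stats import norm

def three_arm_ni_sample_size(mu_e, mu_r, mu_p, sigma, theta, alloc=(2, 2, 1),
                             alpha=0.025, power=0.8):
    """Total sample size for the retention-of-effect non-inferiority test in
    the three-arm 'gold standard' design with a normal endpoint (larger =
    better):  H0: mu_E - mu_P <= theta * (mu_R - mu_P).
    alloc is the E:R:P allocation ratio.  Illustrative sketch only."""
    c = [a / sum(alloc) for a in alloc]                 # allocation fractions
    effect = mu_e - theta * mu_r - (1 - theta) * mu_p   # value under the alternative
    var_w = 1 / c[0] + theta ** 2 / c[1] + (1 - theta) ** 2 / c[2]
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    return ceil(z ** 2 * sigma ** 2 * var_w / effect ** 2)

# Example: experimental and reference both 6 units better than placebo,
# sigma 10, requiring retention of at least half the reference effect
print(three_arm_ni_sample_size(mu_e=6, mu_r=6, mu_p=0, sigma=10, theta=0.5))
```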

7.
Sample size estimation in clinical trials depends critically on nuisance parameters, such as variances or overall event rates, which have to be guessed or estimated from previous studies in the planning phase of a trial. Blinded sample size reestimation estimates these nuisance parameters based on blinded data from the ongoing trial and allows the sample size to be adjusted based on the acquired information. In the present paper, this methodology is developed for clinical trials with count data as the primary endpoint. In multiple sclerosis such endpoints are commonly used in phase 2 trials (lesion counts in magnetic resonance imaging (MRI)) and phase 3 trials (relapse counts). Sample size adjustment formulas are presented for both Poisson-distributed data and overdispersed Poisson-distributed data. The latter arise from sometimes considerable between-patient heterogeneity, which can be observed in particular in MRI lesion counts. The operating characteristics of the procedure are evaluated by simulations, and recommendations on how to choose the size of the internal pilot study are given. The results suggest that blinded sample size reestimation for count data maintains the required power without an increase in the type I error. Copyright © 2010 John Wiley & Sons, Ltd.
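A minimal sketch of the blinded re-estimation idea for (possibly overdispersed) Poisson counts: estimate the pooled event rate from blinded data, split it into arm-specific rates using the assumed rate ratio, and recompute a Wald-type sample size. The function below is illustrative and not the paper's exact procedure.

```python
from math import log, ceil
from scipy.stats import norm

def blinded_ssr_poisson(total_events, total_exposure, theta, t_planned,
                        kappa=0.0, alpha=0.05, power=0.8):
    """Blinded sample size re-estimation for a two-arm trial with
    (possibly overdispersed) Poisson count data, 1:1 allocation.
    theta: assumed rate ratio (treatment / control) under the alternative;
    kappa: assumed overdispersion (0 = pure Poisson).  Illustrative only."""
    lam_bar = total_events / total_exposure       # blinded pooled event rate
    lam_c = 2.0 * lam_bar / (1.0 + theta)         # implied control rate
    lam_t = theta * lam_c                         # implied treatment rate
    # Wald-type variance of the log rate-ratio per pair of subjects
    v = (1.0 / (t_planned * lam_c) + kappa) + (1.0 / (t_planned * lam_t) + kappa)
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(z ** 2 * v / log(theta) ** 2)     # per-arm sample size

# Example: 180 events over 150 patient-years observed blinded so far,
# assumed rate ratio 0.8, planned follow-up of 1 year per patient
print(blinded_ssr_poisson(180, 150.0, theta=0.8, t_planned=1.0))
```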

8.
Sequential tests are increasingly used to reduce the expected sample size of trials in medical research. The majority of such methods are based on the assumption of normality for test statistics. In clinical trials yielding a single sample of discrete data, that assumption is often poorly satisfied. In this paper we show how a novel application of the spending function approach of Lan and DeMets can be used together with exact calculation methods to design sequential procedures for a single sample of discrete random variables without the assumption of normality. A special case is that of binomial data, and the paper is illustrated by the design of a cytogenetic study which motivated this work.
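The following sketch illustrates the 'spending function plus exact calculation' idea for a single binomial sample: an O'Brien-Fleming-type spending function allocates the type I error across stages, and exact binomial recursions pick the discrete stopping boundaries. It is only an illustration under simplified assumptions, not the paper's design.

```python
import numpy as np
from scipy.stats import norm, binom

def of_spending(t, alpha=0.05):
    """O'Brien-Fleming-type alpha-spending function (Lan & DeMets)."""
    return 2.0 * (1.0 - norm.cdf(norm.ppf(1.0 - alpha / 2.0) / np.sqrt(t)))

def exact_sequential_binomial(stage_sizes, p0, alpha=0.05):
    """Exact upper stopping boundaries for a one-sided sequential test of
    H0: p = p0 based on a single binomial sample observed in stages, with
    type I error allocated by an O'Brien-Fleming-type spending function.
    A sketch of the idea only."""
    n_total = sum(stage_sizes)
    cont = np.array([1.0])          # P(still continuing & S = s), s = 0..cum_n
    spent, cum_n, bounds = 0.0, 0, []
    for n_k in stage_sizes:
        cum_n += n_k
        # distribution of the cumulative count among paths still continuing
        cont = np.convolve(cont, binom.pmf(np.arange(n_k + 1), n_k, p0))
        tail = np.cumsum(cont[::-1])[::-1]       # tail[s] = P(cont & S >= s)
        budget = of_spending(cum_n / n_total, alpha) - spent
        stop = np.where(tail <= budget)[0]
        b = int(stop[0]) if stop.size else cum_n + 1   # +1 => cannot stop here
        bounds.append(b)
        if b <= cum_n:
            spent += tail[b]
            cont[b:] = 0.0          # stopped paths leave the recursion
    return bounds

# Example: three stages of 20 observations each, testing p0 = 0.1
print(exact_sequential_binomial([20, 20, 20], p0=0.1))
```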

9.
Important sources of variation in the spread of HIV in communities arise from overlapping sexual networks and heterogeneity in biological and behavioral risk factors in populations. These sources of variation are not routinely accounted for in the design of HIV prevention trials. In this paper, we use agent-based models to account for these sources of variation. We illustrate the approach with an agent-based model for the spread of HIV infection among men who have sex with men in South Africa. We find that traditional sample size approaches that rely on binomial (or Poisson) models are inadequate and can lead to underpowered studies. We develop sample size and power formulas for community randomized trials that incorporate estimates of variation determined from agent-based models. We conclude that agent-based models offer a useful tool in the design of HIV prevention trials. Copyright © 2014 John Wiley & Sons, Ltd.
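For comparison, the standard coefficient-of-variation-inflated formula for community-randomized trials (Hayes-Bennett type) is sketched below; it shows where between-community variation enters a conventional calculation, which the agent-based approach of the paper replaces with simulation-derived estimates of variation.

```python
from math import ceil
from scipy.stats import norm

def communities_per_arm(lam0, lam1, person_years, cv, alpha=0.05, power=0.8):
    """Communities per arm for a community-randomized trial comparing
    incidence rates lam0 vs lam1 (events per person-year), with person_years
    of follow-up per community and between-community coefficient of
    variation cv.  Standard CV-inflated formula, shown only to illustrate
    how extra-Poisson variation enters; not the paper's agent-based method."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    num = (lam0 + lam1) / person_years + cv ** 2 * (lam0 ** 2 + lam1 ** 2)
    return ceil(1 + z ** 2 * num / (lam0 - lam1) ** 2)

# Example: 5 vs 3 infections per 100 person-years, 1,000 person-years per
# community, CV of 0.25 between communities
print(communities_per_arm(0.05, 0.03, 1000.0, cv=0.25))
```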

10.
The 'gold standard' design for three-arm trials refers to trials with an active control and a placebo control in addition to the experimental treatment group. This trial design is recommended whenever it is ethically justifiable, and it allows the simultaneous comparison of experimental treatment, active control, and placebo. Parametric testing methods have been studied extensively in recent years. However, these methods often tend to be liberal or conservative when distributional assumptions are not met, particularly with small sample sizes. In this article, we introduce a studentized permutation test for testing non-inferiority and superiority of the experimental treatment compared with the active control in three-arm trials in the 'gold standard' design. The performance of the studentized permutation test for finite sample sizes is assessed in a Monte Carlo simulation study under various parameter constellations. Emphasis is put on whether the studentized permutation test meets the target significance level. For comparison purposes, commonly used Wald-type tests, which do not make any distributional assumptions, are included in the simulation study. The simulation study shows that, for count data, the presented studentized permutation test for assessing non-inferiority in three-arm trials in the 'gold standard' design outperforms its competitors, for instance the test based on a quasi-Poisson model. The methods discussed in this paper are implemented in the R package ThreeArmedTrials, which is available on the Comprehensive R Archive Network (CRAN). Copyright © 2016 John Wiley & Sons, Ltd.
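A small sketch of the studentized permutation principle in a plain two-sample setting (Welch-type statistic recomputed over permuted group labels); the paper's actual test targets a non-inferiority hypothesis in the three-arm design and is available in the ThreeArmedTrials package.

```python
import numpy as np

def studentized_permutation_test(x, y, n_perm=10000, seed=1):
    """Two-sided studentized permutation test for a difference in means.
    Illustrates only the studentization + permutation principle; it is not
    the paper's three-arm non-inferiority test for count data."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)

    def t_stat(a, b):
        # Welch-type (studentized) statistic, robust to unequal variances
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        return (a.mean() - b.mean()) / se

    t_obs = t_stat(x, y)
    pooled = np.concatenate([x, y])
    hits = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        if abs(t_stat(perm[:len(x)], perm[len(x):])) >= abs(t_obs):
            hits += 1
    return t_obs, (hits + 1) / (n_perm + 1)

# Example with two small samples of counts
t, p = studentized_permutation_test([4, 7, 2, 9, 5, 6], [1, 3, 2, 4, 0, 2])
print(round(t, 2), round(p, 3))
```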

11.
An equivalence trial is appropriate when the aim is to demonstrate equivalence between two treatments, regimens or interventions (methods), or non-inferiority of a new one compared to a standard one. The conduct of an equivalence trial requires different techniques during design and analysis compared to a superiority trial. The existing formulae for sample size calculation to demonstrate equivalence between two methods using the confidence interval approach are reviewed. The establishment of the margin of equivalence and the choice of the type of test are discussed. Plots of sample sizes required to demonstrate equivalence in the case of binary outcomes are presented for values of proportions and margins of equivalence common in the reproductive health field. Examples are given of method comparisons in the reproductive health field in which the relevant question is to demonstrate non-inferiority. The approach to equivalence is described in the trials included in three published systematic reviews in which these comparisons were conducted, addressing the statement of hypotheses, sample size calculation and the interpretation of results. The use of the conventional superiority approach to design equivalence trials has led to trials that are underpowered to show equivalence within clinically relevant margins. The analysis and interpretation of results from such trials have resulted in conclusions of equivalence based on lack of significance. We draw attention to the lack of awareness of the appropriate techniques for equivalence trials among researchers in the field of reproductive health. Finally, the issue of interim analyses and stopping rules in equivalence trials is addressed.
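A minimal sketch of the standard normal-approximation sample size calculation for showing non-inferiority with a binary outcome via the confidence interval approach, as discussed in the review; parameter values in the example are purely illustrative.

```python
from math import ceil
from scipy.stats import norm

def ni_sample_size_binary(p_std, p_new, margin, alpha=0.025, power=0.8):
    """Per-group sample size to show non-inferiority of a new method versus
    a standard one for a binary outcome (higher proportion = better):
    H0: p_new - p_std <= -margin  vs  H1: p_new - p_std > -margin.
    Standard normal-approximation formula."""
    d = p_new - p_std                        # assumed true difference
    var = p_new * (1 - p_new) + p_std * (1 - p_std)
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    return ceil(z ** 2 * var / (d + margin) ** 2)

# Example: both methods assumed 90% effective, non-inferiority margin 5%
print(ni_sample_size_binary(p_std=0.90, p_new=0.90, margin=0.05))
```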

12.
Between-group comparison based on the restricted mean survival time (RMST) is gaining attention as an alternative to the conventional logrank/hazard ratio approach for time-to-event outcomes in randomized controlled trials (RCTs). The validity of the commonly used nonparametric inference procedure for RMST is well supported by large-sample theory. However, we sometimes encounter cases with a small sample size in practice, where we cannot rely on the large-sample properties. Generally, the permutation approach can be useful to handle these situations in RCTs. However, a numerical issue arises when implementing permutation tests for the difference or ratio of RMST between two groups. In this article, we discuss this numerical issue and consider six permutation methods for comparing survival time distributions between two groups using RMST in the RCT setting. We conducted extensive numerical studies and assessed the type I error rates of these methods. Our numerical studies demonstrate that the inflation of the type I error rate of the asymptotic methods is not negligible when the sample size is small, and that all six permutation methods are workable solutions. Although some permutation methods become a little conservative, no remarkable inflation of the type I error rate was observed. We recommend using permutation tests instead of the asymptotic tests, especially when the sample size is less than 50 per arm.
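A hedged sketch of one such permutation scheme: compute the RMST in each group as the area under the Kaplan-Meier curve up to tau and permute group labels. If a permuted group has no observations beyond tau, this sketch simply carries the last Kaplan-Meier value forward, which is one ad hoc way around the numerical issue the article discusses.

```python
import numpy as np

def rmst(time, event, tau):
    """Restricted mean survival time up to tau: area under the Kaplan-Meier
    curve from 0 to tau (event = 1 for an observed event, 0 for censoring)."""
    order = np.argsort(time)
    time = np.asarray(time, float)[order]
    event = np.asarray(event, int)[order]
    surv, t_prev, area, at_risk = 1.0, 0.0, 0.0, len(time)
    for t, d in zip(time, event):
        if t > tau:
            break
        area += surv * (t - t_prev)
        if d:
            surv *= 1.0 - 1.0 / at_risk      # KM step at an event time
        at_risk -= 1
        t_prev = t
    return area + surv * (tau - t_prev)      # last flat piece up to tau

def rmst_permutation_test(time, event, group, tau, n_perm=2000, seed=1):
    """Permutation test for the difference in RMST between groups 0 and 1;
    one simple variant of the schemes compared in the article."""
    rng = np.random.default_rng(seed)
    time, event, group = (np.asarray(time, float), np.asarray(event, int),
                          np.asarray(group, int))

    def diff(g):
        return (rmst(time[g == 1], event[g == 1], tau)
                - rmst(time[g == 0], event[g == 0], tau))

    obs = diff(group)
    hits = sum(abs(diff(rng.permutation(group))) >= abs(obs)
               for _ in range(n_perm))
    return obs, (hits + 1) / (n_perm + 1)

# Example with a small artificial data set (times in months)
time  = [2, 4, 5, 7, 9, 10, 3, 6, 6, 8, 11, 12]
event = [1, 1, 0, 1, 1, 0,  1, 1, 0, 1, 1,  0]
group = [0, 0, 0, 0, 0, 0,  1, 1, 1, 1, 1,  1]
print(rmst_permutation_test(time, event, group, tau=10))
```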

13.
Traditional methods of sample size and power calculations in clinical trials with a time-to-event end point are based on the logrank test (and its variations), the Cox proportional hazards (PH) assumption, or comparison of the means of two exponential distributions. Of these, sample size calculation based on the PH assumption is likely the most common and allows adjusting for the effect of one or more covariates. However, when designing a trial, there are situations when the assumption of PH may not be appropriate. Additionally, when it is known that there is a rapid decline in the survival curve for a control group, such as from previously conducted observational studies, a design based on the PH assumption may confer only a minor statistical improvement for the treatment group that is neither clinically nor practically meaningful. For such scenarios, a clinical trial design that focuses on improvement in patient longevity is proposed, based on the concept of proportional time using the generalized gamma ratio distribution. Simulations are conducted to evaluate the performance of the proportional time method and to identify the situations in which such a design will be beneficial as compared to the standard design using a PH assumption, a piecewise exponential hazards assumption, and specific cases of a cure rate model. A practical example in which hemorrhagic stroke patients are randomized to one of two arms in a putative clinical trial demonstrates the usefulness of this approach by drastically reducing the number of patients needed for study enrollment.

14.
Few studies have proposed methods for sample size determination and the specification of a passing criterion (e.g., the number needed to pass from a panel of a given size) for respirator fit-tests. One approach is to account for between- and within-subject variability, and thus take full advantage of the multiple donning measurements within each subject, using a random effects model. The corresponding sample size calculation, however, may be difficult to implement in practice, as it depends on the model-specific and test panel-specific variance estimates, and thus does not yield a single sample size or a specific cutoff for the number needed to pass. A simple binomial approach is therefore proposed to simultaneously determine both the required sample size and the optimal cutoff for the number of subjects needed to achieve a passing result. The method essentially conducts a global search of the type I and type II errors under different null and alternative hypotheses, across the range of possible sample sizes, to find the lowest sample size which yields at least one cutoff satisfying, or approximately satisfying, all pre-determined limits for the different error rates. Benchmark testing of 98 respirators (conducted by the National Institute for Occupational Safety and Health) is used to illustrate the binomial approach and show how sample size estimates from the random effects model can vary substantially depending on the estimated variance components. For the binomial approach, probability calculations show that a sample size of 35 to 40 yields acceptable error rates under different null and alternative hypotheses. For the random effects model, the required sample sizes are generally smaller, but can vary substantially based on the estimated variance components. Overall, despite some limitations, the binomial approach represents a highly practical approach with reasonable statistical properties.
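A simple sketch of the global search described: for each candidate panel size, scan the possible passing cutoffs and keep the smallest size for which some cutoff meets the stated type I and type II error limits. The pass probabilities and error limits in the example are illustrative, not the paper's benchmark values.

```python
from scipy.stats import binom

def panel_size_and_cutoff(p0, p1, alpha=0.05, beta=0.20, n_max=200):
    """Smallest panel size n and passing cutoff c such that a respirator
    whose true per-subject pass probability is p0 (poor fit) is accepted
    with probability <= alpha, while one with pass probability p1 (good
    fit) is accepted with probability >= 1 - beta.  'Accept' means at
    least c of the n subjects pass.  Illustrative hypothesis values."""
    for n in range(2, n_max + 1):
        for c in range(1, n + 1):
            type1 = binom.sf(c - 1, n, p0)      # P(X >= c | p0)
            power = binom.sf(c - 1, n, p1)      # P(X >= c | p1)
            if type1 <= alpha and power >= 1 - beta:
                return n, c, type1, power
    return None

# Example: poor-fit pass probability 0.60, good-fit pass probability 0.90
print(panel_size_and_cutoff(p0=0.60, p1=0.90))
```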

15.
This paper describes a method for planning the duration of a randomized parallel group study in which the response of interest is a potentially recurrent event. At the design stage we assume patients accrue at a constant rate, we model events via a homogeneous Poisson process, and we utilize an independent exponential censoring mechanism to reflect loss to follow-up. We derive the appropriate study duration to ensure satisfaction of power requirements for the effect size of interest under a Poisson regression model. An application to a kidney transplant study illustrates the potential savings of the Poisson-based design relative to a design based on the time to the first event. Revised design criteria are also derived to accommodate overdispersed Poisson count data. We examine the frequency properties of two non-parametric tests recently proposed by Lawless and Nadeau for trials based on the above design criteria. In simulation studies involving homogeneous and non-homogeneous Poisson processes they performed well with respect to their type I error rate and power. Results from supplementary simulation studies indicate that these tests are also robust to extra-Poisson variation and to clustering in the event times, making these tests attractive in their generality. We illustrate both tests by application to data from a completed kidney transplant study.

16.
Zero-inflated count outcomes arise quite often in research and practice. Parametric models such as the zero-inflated Poisson and zero-inflated negative binomial are widely used to model such responses. Like most parametric models, they are quite sensitive to departures from assumed distributions. Recently, new approaches have been proposed to provide distribution-free, or semi-parametric, alternatives. These methods extend the generalized estimating equations to provide robust inference for population mixtures defined by zero-inflated count outcomes. In this paper, we propose methods to extend smoothly clipped absolute deviation (SCAD)-based variable selection methods to these new models. Variable selection has been gaining popularity in modern clinical research studies, as determining differential treatment effects of interventions for different subgroups has become the norm, rather than the exception, in the era of patient-centered outcomes research. Such moderation analysis in general creates many explanatory variables in regression analysis, and the advantages of SCAD-based methods over their traditional counterparts render them a great choice for addressing these important and timely issues in clinical research. We illustrate the proposed approach with both simulated and real study data. Copyright © 2016 John Wiley & Sons, Ltd.

17.
Characterizing the technical precision of measurements is a necessary stage in the planning of experiments and in the formal sample size calculation for optimal design. Instruments that measure multiple analytes simultaneously, such as in high-throughput assays arising in biomedical research, pose particular challenges from a statistical perspective. The current most popular method for assessing precision of high-throughput assays is by scatterplotting data from technical replicates. Here, we question the statistical rationale of this approach from both an empirical and theoretical perspective, illustrating our discussion using four example data sets from different genomic platforms. We demonstrate that such scatterplots convey little statistical information of relevance and are potentially highly misleading. We present an alternative framework for assessing the precision of high-throughput assays and planning biomedical experiments. Our methods are based on repeatability, a long-established statistical quantity also known as the intraclass correlation coefficient. We provide guidance and software for estimation and visualization of repeatability of high-throughput assays, and for its incorporation into study design. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
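As an illustration, the classical one-way ANOVA estimator of repeatability (the intraclass correlation coefficient) can be computed directly from a samples-by-replicates matrix; the paper provides fuller guidance and software, so this is only a minimal sketch.

```python
import numpy as np

def repeatability_icc(replicates):
    """One-way random-effects intraclass correlation (repeatability) from a
    2-D array: rows = samples/features, columns = technical replicates.
    ICC = between-sample variance / (between + within)."""
    x = np.asarray(replicates, float)
    n, k = x.shape                              # n samples, k replicates each
    grand = x.mean()
    msb = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)        # between MS
    msw = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))  # within MS
    sigma2_between = max((msb - msw) / k, 0.0)
    return sigma2_between / (sigma2_between + msw)

# Example: 4 samples measured in triplicate
data = [[10.1, 10.3, 9.9], [12.0, 11.8, 12.2], [8.9, 9.1, 9.0], [11.2, 11.0, 11.4]]
print(round(repeatability_icc(data), 3))
```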

18.
OBJECTIVE: Due to a shared environment and similarities among workers within a worksite, the strongest analytical design to evaluate the efficacy of an intervention to reduce occupational health or safety hazards is to randomly assign worksites, not workers, to the intervention and comparison conditions. Statistical methods are well described for estimating the sample size when the unit of assignment is a group, but these methods have not been applied in the evaluation of occupational health and safety interventions. We review and apply the statistical methods for group-randomized trials in planning a study to evaluate the effectiveness of technical/behavioral interventions to reduce wood dust levels among small woodworking businesses. METHODS: We conducted a pilot study in five small woodworking businesses to estimate variance components between and within worksites and between and within workers. In each worksite, 8-h time-weighted dust concentrations were obtained for each production employee on between two and five occasions. With these data, we estimated the parameters necessary to calculate the percent change in dust concentrations that we could detect (alpha = 0.05, power = 80%) for a range of worksites per condition, workers per worksite and repeat measurements per worker. RESULTS: The mean wood dust concentration across woodworking businesses was 4.53 mg/m3. The measure of similarity among workers within a woodworking business was large (intraclass correlation = 0.5086). Repeated measurements within a worker were weakly correlated (r = 0.1927), while repeated measurements within a worksite were strongly correlated (r = 0.8925). The dominant factor in the sample size calculation was the number of worksites per condition, with the number of workers per worksite playing a lesser role. We also observed that increasing the number of repeat measurements per person had little benefit given the low within-worker correlation in our data. We found that 30 worksites per condition and 10 workers per worksite would give us 80% power to detect a reduction of approximately 30% in wood dust levels (alpha = 0.05). CONCLUSIONS: Our results demonstrate the application of the group-randomized trial methodology to evaluate interventions to reduce occupational hazards. The methodology is widely applicable and not limited to the context of wood dust reduction.
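A hedged sketch of how the three variance components translate into a detectable difference for a given design; it uses a normal approximation for simplicity, whereas with few worksites per condition a t-based version (degrees of freedom driven by the number of worksites) is more appropriate. The variance components in the example are illustrative, not the pilot-study estimates.

```python
from math import sqrt
from scipy.stats import norm

def detectable_difference(g, m, r, var_site, var_worker, var_error,
                          alpha=0.05, power=0.8):
    """Smallest detectable difference between two conditions in a
    group-randomized design with g worksites per condition, m workers per
    worksite and r repeat measurements per worker, given the between-site,
    between-worker and residual variance components.  Normal approximation."""
    var_mean = var_site / g + var_worker / (g * m) + var_error / (g * m * r)
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z * sqrt(2 * var_mean)

# Example: 30 worksites per condition, 10 workers each, 2 repeats per worker
# (illustrative variance components on the log-concentration scale)
print(round(detectable_difference(30, 10, 2, 0.50, 0.10, 0.40), 3))
```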

19.
There have been articles comparing methods for global clustering evaluation and cluster detection in disease surveillance, but power and sample size (SS) requirements have not been explored for spatially correlated data in this area. We are developing such requirements for tests of spatial clustering and cluster detection for regional cancer cases. We compared global clustering methods, including Moran's I, Tango's statistic and Besag-Newell's R statistic, and cluster detection methods, including circular and elliptic spatial scan statistics (SaTScan), flexibly shaped spatial scan statistics, Turnbull's cluster evaluation permutation procedure, local indicators of spatial association, and upper-level set scan statistics. We identified eight geographic patterns that are representative of patterns of mortality due to various types of cancer in the U.S. from 1998 to 2002. We then evaluated the selected spatial methods based on state- and county-level data simulated from these different spatial patterns in terms of geographic locations and relative risks, and varying SSs using the 2000 population in each county. The comparison provides insight into the performance of the spatial methods when applied to varying cancer count data in terms of power and precision of cluster detection.
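As a concrete example of the simplest of the global clustering statistics compared, Moran's I can be computed directly from regional values and a spatial weights matrix:

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I statistic for spatial autocorrelation: x is a vector
    of regional values (e.g., standardized cancer counts), w a spatial
    weights matrix (w[i, j] > 0 if regions i and j are neighbours)."""
    x = np.asarray(x, float)
    w = np.asarray(w, float)
    z = x - x.mean()
    n = len(x)
    return (n / w.sum()) * (z @ w @ z) / (z @ z)

# Example: 4 regions on a line, adjacency weights
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)
print(round(morans_i([10, 12, 3, 2], w), 3))
```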

20.
In study design, one often needs to calculate the sample size required for a statistical test of the difference between two sample means. Statistics textbooks provide the calculation formulas and sample size look-up tables for the case of equal sample sizes in the two groups. In practice, however, unequal sample sizes are frequently encountered. This paper presents calculation methods and formulas for the required sample sizes when the two samples are in a 1:K ratio, and when the size of one sample is fixed in advance, which is of considerable practical value.
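A minimal sketch of the standard normal-approximation formula for a 1:K allocation when comparing two means with equal variances; the paper also covers the case where one sample's size is fixed in advance, which is not shown here.

```python
from math import ceil
from scipy.stats import norm

def two_sample_size_1_to_k(delta, sigma, k, alpha=0.05, power=0.8):
    """Sample sizes for comparing two means when the groups are allocated
    in a 1:k ratio (n2 = k * n1), equal variances assumed."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n1 = (1 + 1.0 / k) * (z * sigma / delta) ** 2
    return ceil(n1), ceil(k * n1)

# Example: detect a mean difference of 0.5 SD with a 1:2 allocation
print(two_sample_size_1_to_k(delta=0.5, sigma=1.0, k=2))
```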

