Similar Articles
20 similar articles found.
1.
Analysis of repeated binary measurements presents a challenge because of the correlation between measurements within an individual, and a mixed-effects modelling approach has been used for the analysis of such data. Sample size calculation is an important part of clinical trial design and is often based on the method of analysis. We present a method for calculating the sample size for repeated binary pharmacodynamic measurements based on analysis by mixed-effects modelling and a logit transformation. The Wald test is used for hypothesis testing. The method can be used to calculate the sample size required for detecting parameter differences between subpopulations. Extensions to account for unequal allocation of subjects across groups and unbalanced sampling designs between and within groups were also derived. The proposed method was assessed via simulation of a linear model and estimation using NONMEM. The results showed good agreement between nominal power and power estimated from the NONMEM simulations. The results also showed that sample size increases with variability, at a rate that depends on the difference in parameter estimates between groups, and that sampling schedules based on optimal design can help to reduce cost.
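As a rough illustration of the Wald-test logic behind this type of calculation, the sketch below converts an assumed per-subject standard error of the parameter difference into a subjects-per-group requirement. The inputs `delta` and `se_one` are hypothetical placeholders; in practice they would come from the linearized mixed-effects model, not from this paper.

```python
# Sketch only: generic Wald-test sample size for detecting a parameter
# difference `delta`, assuming the standard error of the estimated difference
# with one subject per group is `se_one` and shrinks as se_one / sqrt(n).
from math import ceil
from scipy.stats import norm

def wald_sample_size(delta, se_one, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided test
    z_beta = norm.ppf(power)
    return ceil(((z_alpha + z_beta) * se_one / delta) ** 2)

# Illustrative call with hypothetical inputs: subjects per group.
print(wald_sample_size(delta=0.5, se_one=2.0))
```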

2.
We present a method for calculating the sample size of a pharmacokinetic study analyzed using a mixed-effects model within a hypothesis-testing framework. A sample size calculation method for repeated measurement data analyzed using generalized estimating equations has been modified for nonlinear models. The Wald test is used for hypothesis testing of pharmacokinetic parameters. A marginal model for the population pharmacokinetics is obtained by linearizing the structural model around the subject-specific random effects. The proposed method is general in that it allows unequal allocation of subjects to the groups and accounts for situations where different blood sampling schedules are required in different groups of patients. The proposed method has been assessed using Monte Carlo simulations under a range of scenarios. NONMEM was used for simulations and data analysis, and the results showed good agreement.

3.
Poisson and negative binomial models are frequently used to analyze count data in clinical trials. While several sample size calculation methods have recently been developed for superiority tests for these two models, similar methods for noninferiority and equivalence tests are not available. When a noninferiority or equivalence trial is designed to compare Poisson or negative binomial rates, an appropriate method is needed to estimate the sample size to ensure the trial is properly powered. In this article, several sample size calculation methods for noninferiority and equivalence tests have been derived based on Poisson and negative binomial models. All of these methods accounted for potential over-dispersion that commonly exists in count data obtained from clinical trials. The precision of these methods was evaluated using simulations. Supplementary materials for this article are available online.
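The article's exact derivations are not reproduced here, but a common large-sample approximation for a noninferiority comparison of two over-dispersed rates on the log scale conveys the role of the dispersion parameters. All numeric inputs below are hypothetical.

```python
# Hedged, textbook-style approximation: n per group for a noninferiority test
# of the rate ratio, with Var(count) = phi * mean (phi = 1 recovers Poisson).
from math import ceil, log
from scipy.stats import norm

def ni_rate_sample_size(lam1, lam2, margin_ratio, phi1=1.0, phi2=1.0,
                        t=1.0, alpha=0.025, power=0.9):
    """H0: lam1/lam2 >= margin_ratio vs H1: lam1/lam2 < margin_ratio."""
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    unit_var = (phi1 / lam1 + phi2 / lam2) / t      # variance of log rate ratio, n = 1
    return ceil(z ** 2 * unit_var / (log(margin_ratio) - log(lam1 / lam2)) ** 2)

# Over-dispersion (phi > 1) inflates the requirement relative to pure Poisson.
print(ni_rate_sample_size(lam1=1.0, lam2=1.0, margin_ratio=1.25, phi1=1.4, phi2=1.4))
```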

4.
An analysis of QTc data collected in four thorough QT studies conducted at Eli Lilly and Company was performed to estimate the variability of the QTc interval and to calculate the variance components related to time-to-time variability, day-to-day variability, etc. The results were used to develop a sample size calculation framework that enables clinical trial researchers to account for key features of their thorough QT studies, including study design (parallel and crossover designs), number of ECG replicates, number of post-baseline ECG recordings, and subject population (based on subject gender and age). The sample size calculation framework is illustrated using several popular study designs.
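For context, a simplified crossover-design calculation in the spirit of such a framework is sketched below: it treats the placebo-corrected QTc contrast as normal, lets replicate ECG averaging shrink the measurement-error component, and uses the usual 10-ms threshold. The variance components and effect size are hypothetical, not estimates from the Eli Lilly data.

```python
# Sketch: subjects needed in a crossover thorough QT study so that the upper
# one-sided 95% confidence bound on the placebo-corrected QTc change stays
# below 10 ms with the desired power. sigma2_tt is time-to-time variance,
# sigma2_rep is replicate (measurement) variance reduced by averaging r ECGs.
from math import ceil
from scipy.stats import norm

def tqt_crossover_n(true_ddqtc, sigma2_tt, sigma2_rep, r=3,
                    threshold=10.0, alpha=0.05, power=0.9):
    sigma2_diff = 2.0 * (sigma2_tt + sigma2_rep / r)   # drug-minus-placebo contrast
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    return ceil(z ** 2 * sigma2_diff / (threshold - true_ddqtc) ** 2)

# Hypothetical variance components (ms^2) and a 3-ms true effect.
print(tqt_crossover_n(true_ddqtc=3.0, sigma2_tt=60.0, sigma2_rep=40.0, r=3))
```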

5.
Over-dispersed count variables are frequently encountered in biomedical research. Despite extensive research in analytical methods, addressing over-dispersion in the design of clinical trials has received much less attention. In this study, we propose to incorporate over-dispersion directly into sample size calculation for clinical trials in which a count outcome is repeatedly measured on each subject. The proposed method is applicable to the comparison of slopes as well as time-averaged responses. It is easy to compute and flexible enough to account for unbalanced randomization, arbitrary missing patterns, and different correlation structures. We show that the sample size requirement is proportional to over-dispersion, which highlights the danger of ignoring over-dispersion in experimental design. Simulation results demonstrate that the proposed sample size calculation methods maintain the nominal levels of power and Type I error over a wide range of scenarios. An application to an epilepsy trial is presented. Supplementary materials for this article are available online.
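The proportionality claim can be seen in a simple GEE-style approximation for comparing time-averaged count responses under a compound-symmetry working correlation; this is a generic textbook formula, not the article's derivation, and the inputs are hypothetical.

```python
# Sketch: n per arm for comparing time-averaged mean counts, with
# Var(count) = phi * mean and exchangeable within-subject correlation rho.
from math import ceil
from scipy.stats import norm

def n_per_arm(mu1, mu2, phi, m, rho, alpha=0.05, power=0.9):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var_term = phi * (mu1 + mu2) * (1 + (m - 1) * rho) / m
    return ceil(z ** 2 * var_term / (mu1 - mu2) ** 2)

# Doubling the over-dispersion roughly doubles the requirement:
print(n_per_arm(mu1=3.5, mu2=2.8, phi=1.0, m=4, rho=0.3))
print(n_per_arm(mu1=3.5, mu2=2.8, phi=2.0, m=4, rho=0.3))
```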

6.
Subject attrition is a ubiquitous problem in any type of clinical trial and thus needs to be taken into consideration at the design stage, particularly to secure adequate statistical power. Here, we focus on longitudinal cluster randomized clinical trials (cluster-RCTs) that aim to test the hypothesis that an intervention has an effect on the rate of change in the outcome over time. In this setting, the cluster-RCT assumes a three-level hierarchical data structure in which subjects are nested within a higher-level unit such as a clinic and are evaluated for the outcome repeatedly over the study period. Furthermore, the subject-specific slopes can be modeled in terms of fixed or random coefficients in a mixed-effects linear model. Closed-form sample size formulas for testing the preceding hypothesis have been developed under an assumption of no attrition. In this article, we propose closed-form approximate sample size determinations with anticipated attrition rates by modifying those existing sample size formulas. With extensive simulations, we examine the performance of the modified formulas under three attrition mechanisms: attrition completely at random, attrition at random, and attrition not at random. In conclusion, the proposed modification is very effective under fixed-slope models but can yield substantially biased statistical power under random-slope models.

7.
Under the classical statistical framework, sample size calculations for a hypothesis test of interest maintain prespecified type I and type II error rates. These methods often suffer from several practical limitations. We propose a framework for hypothesis testing and sample size determination using Bayesian average errors. We consider rejecting the null hypothesis, in favor of the alternative, when a test statistic exceeds a cutoff. We choose the cutoff to minimize a weighted sum of Bayesian average errors and choose the sample size to bound the total error for the hypothesis test. We apply this methodology to several designs common in medical studies.

8.
In this article we study sample size calculation methods for the asymptotic van Elteren test. Because the existing methods are only applicable to continuous data without ties, we develop a new method that can be used on ordinal data. The new method has a closed-form formula and is very easy to calculate. The new sample size formula performs well: our simulations show that the corresponding actual powers are close to the nominal powers.

9.
In ethnic sensitivity studies, it is of interest to know whether the same dose has the same effect across populations in different regions. Glasbrenner and Rosenkranz (2006) proposed a criterion for ethnic sensitivity studies in the context of different dose-exposure models. Their method is liberal in the sense that their sample size will not achieve the target power. We show that the power function can easily be calculated by numerical integration, and the sample size can be determined by bisection.
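The numerical-integration-plus-bisection idea is generic and easy to sketch: given any power function that is monotone in n, bisection returns the smallest n reaching the target. The placeholder power function below is an ordinary two-sample z-test, not the dose-exposure criterion of the paper.

```python
# Sketch of sample size by bisection over a monotone power(n) function.
from scipy.stats import norm

def power_fn(n, delta=0.5, sigma=1.0, alpha=0.05):
    # Placeholder: two-sided two-sample z-test power; in the paper this step
    # would instead be evaluated by numerical integration.
    z_a = norm.ppf(1 - alpha / 2)
    ncp = delta / (sigma * (2.0 / n) ** 0.5)
    return norm.cdf(ncp - z_a) + norm.cdf(-ncp - z_a)

def bisect_sample_size(target=0.9, lo=2, hi=10_000):
    while lo < hi:                      # smallest n with power_fn(n) >= target
        mid = (lo + hi) // 2
        if power_fn(mid) >= target:
            hi = mid
        else:
            lo = mid + 1
    return lo

print(bisect_sample_size())             # per-group n for 90% power
```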

10.
We develop a Bayesian analysis for the study of fixed-dose combinations of two or more drugs. The approach described here does not require knowledge of the dose-response relationships of the components or large sample approximations. We provide a procedure to estimate sample size in this context. In addition, we explore the performance of the Bayesian procedure in situations where existing methods are known to perform poorly.

11.
Microarray is a technology for screening a large number of genes to discover those differentially expressed between clinical subtypes or different conditions of human diseases. Gene discovery using microarray data requires adjustment for the large-scale multiplicity of candidate genes. The family-wise error rate (FWER) has been widely chosen as a global type I error rate adjusting for the multiplicity. Typically in microarray data, the expression levels of different genes are correlated because of coexpressing genes and the common experimental conditions shared by the genes on each array. To accurately control the FWER, the statistical testing procedure should appropriately reflect the dependency among the genes. Permutation methods have been used for accurate control of the FWER in analyzing microarray data. It is important to calculate the required sample size at the design stage of a new (confirmatory) microarray study. Because of the high dimensionality and complexity of the correlation structure in microarray data, however, there have been no sample size calculation methods that accurately reflect the true correlation structure of real microarray data. We propose sample size and power calculation methods that are useful when pilot data are available to design a confirmatory experiment. If no pilot data are available, we recommend a two-stage sample size recalculation based on our proposed method, using the first-stage data as pilot data. Simulations show that the calculated sample sizes accurately maintain the power. A real-data example illustrates the proposed method.

12.
We propose a method for comparing Poisson distributed outcomes. Our method uses the exact distribution of the difference between two Poisson variables to calculate the sample size required to detect a given difference with prespecified power. When the true difference between the two Poisson rates is more than 1.2 units, the number of subjects and events needed at the desired power and Type I error rate is 5–10% less than that computed by simulation based on the normal approximation method. The normal approximation method is more comparable to the exact sample size method when the difference between the rates is less than 1.2 units. The proposed method is more intuitive, efficient, and less subjective than the normal approximation method. Simple R code is provided to estimate the sample size and critical values.
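The difference of two independent Poisson totals follows a Skellam distribution, which makes the exact-distribution idea easy to illustrate. The brute-force search below is a sketch under simplifying assumptions (one-sided test, null evaluated at the control rate), not necessarily the authors' algorithm.

```python
# Sketch: smallest n per group so that an exact one-sided test based on
# D = sum(Y_treatment) - sum(Y_control) reaches the target power.
from scipy.stats import skellam

def exact_poisson_sample_size(lam1, lam2, alpha=0.05, power=0.9, n_max=10_000):
    for n in range(1, n_max + 1):
        null = skellam(n * lam2, n * lam2)        # both groups at the control rate
        crit = null.ppf(1 - alpha) + 1            # smallest c with P(D >= c) <= alpha
        alt = skellam(n * lam1, n * lam2)
        if alt.sf(crit - 1) >= power:             # P(D >= crit) under the alternative
            return n, int(crit)
    return None

print(exact_poisson_sample_size(lam1=2.0, lam2=1.0))
```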

13.
Purpose: To develop a population pharmacokinetic/pharmacodynamic (PK/PD) model that characterizes the effects of major systemic corticosteroids on lymphocyte trafficking and responsiveness. Materials and Methods: Single, presumably equivalent, doses of intravenous hydrocortisone (HC), dexamethasone (DEX), methylprednisolone (MPL), and oral prednisolone (PNL) were administered to five healthy male subjects in a five-way crossover, placebo-controlled study. Measurements included plasma drug and cortisol concentrations, total lymphocyte counts, and whole blood lymphocyte proliferation (WBLP). Population data analysis was performed using a Monte Carlo Parametric Expectation Maximization algorithm. Results: The final indirect, multi-component, mechanism-based model captured well the circadian rhythm exhibited in cortisol production and suppression, lymphocyte trafficking, and the WBLP temporal profiles. In contrast to the PK parameters, the variability of the drug concentrations producing 50% maximal immunosuppression (IC50) was larger between subjects (73–118%). The individual log-transformed reciprocal posterior Bayesian estimates of IC50 for ex vivo WBLP were highly correlated with those determined in vitro for the four drugs (r² = 0.928). Conclusions: The immunosuppressive dynamics of the four corticosteroids were well described by the population PK/PD model with the incorporation of inter-occasion variability for several model components. This study provides improvements in modeling systemic corticosteroid effects and demonstrates greater variability of the system and dynamic parameters compared with the pharmacokinetics. Electronic Supplementary Material: The online version of this article (doi:) contains supplementary material, which is available to authorized users.

14.
We describe an accurate, yet simple and fast, sample size computation method for hypothesis testing in population PK/PD studies. We use a first-order approximation to the nonlinear mixed-effects model and a chi-square-distributed Wald statistic to compute the minimum sample size needed to achieve a given degree of power in rejecting a null hypothesis in population PK/PD studies. The method is an extension of Rochon's sample size computation method for repeated measurement experiments. We compute sample sizes for PK and PK/PD models under different conditions and use Monte Carlo simulation to show that the computed sample size retrieves the required power. We also show the effect on sample size of different sampling strategies, such as minimal sampling (i.e., as many observations per individual as there are parameters in the model) and intensive sampling. The proposed method can produce estimates of the minimum sample size needed to achieve the desired power in hypothesis testing in greatly reduced time compared with currently available simulation-based methods. The method is rapid and efficient for sample size computation in population PK/PD studies using nonlinear mixed-effects models. It is general and can accommodate any type of hierarchical model. Simulation results suggest that intensive sampling allows the number of patients enrolled in a clinical study to be reduced.
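A stripped-down version of this Rochon-type calculation is easy to state: under the alternative, the Wald statistic is approximately noncentral chi-square, with a noncentrality parameter that grows linearly in the number of subjects. The per-subject noncentrality used below is a hypothetical placeholder for the quantity that would come from the first-order linearized model and the chosen sampling design.

```python
# Sketch: minimum sample size from a chi-square distributed Wald statistic.
from scipy.stats import chi2, ncx2

def wald_power(n, ncp_per_subject, df=1, alpha=0.05):
    crit = chi2.ppf(1 - alpha, df)                 # central chi-square cutoff
    return ncx2.sf(crit, df, n * ncp_per_subject)  # power under the alternative

def min_sample_size(target=0.9, ncp_per_subject=0.12, df=1):
    n = 1
    while wald_power(n, ncp_per_subject, df) < target:
        n += 1
    return n

print(min_sample_size())    # hypothetical per-subject noncentrality of 0.12
```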

15.
When identifying the differentially expressed genes (DEGs) in microarray data, we often observe heteroscedasticity between groups and dependence among genes. Incorporating these factors is necessary for sample size calculation in microarray experiments. A penalized t-statistic is widely used to improve the identifiability of DEGs. We develop a formula to calculate sample size with dependence adjustment for the penalized t-statistic. Sample size is determined on the basis of overall power, under conditions that maintain a given false discovery rate. The usefulness of the proposed method is demonstrated by numerical studies using both simulated data and real data.

16.
Clinical trials in the context of comparative effectiveness research (CER) are often conducted to evaluate health outcomes under real-world conditions and standard health care settings. In such settings, three-level hierarchical study designs are increasingly common. For example, patients may be nested within treating physicians, who in turn are nested within an urgent care center or hospital. While many trials randomize the third-level units (e.g., centers) to intervention, in some cases randomization may occur at lower levels of the hierarchy, such as patients or physicians. In this article, we present and verify explicit closed-form sample size and power formulas for three-level designs assuming randomization is at the first or second level. The formulas are based on maximum likelihood estimates from mixed-effect linear models and verified by simulation studies. Results indicate that even with smaller sample sizes, theoretical power derived with known variances is nearly identical to empirically estimated power for the more realistic setting when variances are unknown. In addition, we show that randomization at the second or first level of the hierarchy provides an increasingly statistically efficient alternative to third-level randomization. Power to detect a treatment effect under second-level randomization approaches that of patient-level randomization when there are few patients within each randomized second-level cluster and, most importantly, when the correlation attributable to second-level variation is a small proportion of the overall correlation between patient outcomes.

17.
Some studies are designed to assess the agreement between different raters and/or different instruments in the medical sciences and pharmaceutical research. In practice, the same sample will be used to compare the agreement of two or more assessment methods for simplicity and to take advantage of the positive correlation of the ratings. The concordance correlation coefficient (CCC) is often used as a measure of agreement when the rating is a continuous variable. We present an approach for calculating the sample size required for testing the equality of two CCCs, H0: CCC1 = CCC2 vs. HA: CCC1 ≠ CCC2, where two assessment methods are used on the same sample, with two raters resulting in correlated CCC estimates. Our approach is to simulate one large “exemplary” dataset based on the specification of the joint distribution of the pairwise ratings for the two methods. We then create two new random variables from the simulated data that have the same variance–covariance matrix as the two dependent CCC estimates using the Taylor series linearization method. The method requires minimal computing time and can be easily extended to comparing more than two CCCs, or Kappa statistics.
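For reference, Lin's CCC point estimate is straightforward to compute; the sketch below covers only the estimator itself on simulated ratings, not the article's variance approximation or sample size procedure for comparing two correlated CCCs.

```python
# Sketch: Lin's concordance correlation coefficient for two sets of ratings.
import numpy as np

def ccc(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

rng = np.random.default_rng(0)                      # hypothetical rating data
truth = rng.normal(size=50)
rater1 = truth + rng.normal(scale=0.3, size=50)
rater2 = truth + rng.normal(scale=0.5, size=50)
print(ccc(rater1, rater2))
```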

18.
In clinical research, the parameters required for sample size calculation are usually unknown. A typical approach is to use estimates from pilot studies as the true parameters in the calculation. This approach, however, does not take sampling error into consideration. Thus, the resulting sample size could be misleading if the sampling error is substantial. As an alternative, we suggest a Bayesian approach with a noninformative prior to reflect the uncertainty in the parameters induced by the sampling error. Based on the noninformative prior and data from pilot samples, Bayesian estimators based on appropriate loss functions can be obtained. The traditional sample size calculation procedure can then be carried out using the Bayesian estimates instead of the frequentist estimates. The results indicate that the sample size obtained using the Bayesian approach differs from the traditional sample size by a constant inflation factor, which is determined purely by the size of the pilot study. An example is given for illustration.

19.
In this article, we present a simple method to calculate sample size and power for a simulation-based multiple testing procedure that gives a sharper critical value than the standard Bonferroni method. The method is especially useful when several highly correlated test statistics are involved in a multiple testing procedure. The formula for sample size calculation will be useful in designing clinical trials with multiple endpoints or correlated outcomes. We illustrate our method with a quality-of-life study for patients with early-stage prostate cancer. Our method can also be used for comparing multiple independent groups.
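The gain over Bonferroni comes from using the joint null distribution of the correlated statistics. A minimal single-step sketch, assuming equicorrelated z-statistics (the correlation value is hypothetical), is shown below.

```python
# Sketch: simulation-based critical value for the maximum of K correlated
# |z|-statistics, compared with the Bonferroni cutoff.
import numpy as np
from scipy.stats import norm

def simulated_critical_value(corr, alpha=0.05, n_sim=200_000, seed=1):
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(corr.shape[0]), corr, size=n_sim)
    return np.quantile(np.abs(z).max(axis=1), 1 - alpha)

K, rho = 4, 0.6
corr = np.full((K, K), rho)
np.fill_diagonal(corr, 1.0)
print("simulation-based cutoff:", simulated_critical_value(corr))
print("Bonferroni cutoff      :", norm.ppf(1 - 0.05 / (2 * K)))
```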

20.
Wald tests and F tests are commonly used for analysis, particularly when the regression model is a generalized linear model. When these tests are planned for the analysis, it is important to estimate the power and sample size during the design phase using the same test. Often, though, the information available prior to a study is insufficient to assess whether the response variable distributions assumed for power or sample size calculations are appropriate. This article demonstrates that such complete assumptions about the response distribution are not necessary to estimate power and sample size for moderate to large studies using quasi-likelihood methods. This approach replaces the need to specify the response variable distribution with the weaker specification of only the mean-to-variance relationship. Complex designs, such as designs with interaction terms, are accommodated. Results are presented for data from one- and two-parameter exponential family distributions, which are among the most common distributions assumed in the medical, epidemiologic, and social sciences literature. Examples from mixture distributions are also presented. Monte Carlo simulation was used to estimate power for comparison. Quasi-likelihood power estimates were within 0.03 of the simulation-based estimates for most examples presented.
