Similar Documents
20 similar documents found.
1.
Under the classical statistical framework, sample size calculations for a hypothesis test of interest maintain prespecified type I and type II error rates. These methods often suffer from several practical limitations. We propose a framework for hypothesis testing and sample size determination using Bayesian average errors. We consider rejecting the null hypothesis, in favor of the alternative, when a test statistic exceeds a cutoff. We choose the cutoff to minimize a weighted sum of Bayesian average errors and choose the sample size to bound the total error for the hypothesis test. We apply this methodology to several designs common in medical studies.
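The cutoff-then-sample-size recipe in this abstract can be illustrated numerically. The sketch below is not the authors' method: it assumes a one-sided z-test of H0: theta = 0 and, for simplicity, a point-mass prior at `theta_alt` under the alternative, so the "average" errors reduce to ordinary error rates; all function names are invented for this illustration.

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def average_errors(c, n, theta_alt=0.5):
    """Average type I/II errors for a one-sided z-test of H0: theta = 0,
    with a point-mass prior at theta_alt under the alternative."""
    alpha = 1.0 - phi(c)                      # P(reject | H0)
    beta = phi(c - theta_alt * math.sqrt(n))  # P(accept | H1)
    return alpha, beta

def optimal_cutoff(n, w=0.5):
    """Cutoff minimizing the weighted error sum w*alpha + (1-w)*beta."""
    grid = [i / 100.0 for i in range(-300, 501)]
    return min(grid, key=lambda c: w * average_errors(c, n)[0]
                                   + (1 - w) * average_errors(c, n)[1])

def bayesian_sample_size(total_error=0.1, w=0.5, n_max=500):
    """Smallest n whose minimized weighted error sum meets the bound."""
    for n in range(1, n_max + 1):
        c = optimal_cutoff(n, w)
        a, b = average_errors(c, n)
        if w * a + (1 - w) * b <= total_error:
            return n, c
    return None
```

With a genuine prior distribution under the alternative, `average_errors` would integrate the power over that prior; the cutoff-grid and sample-size searches stay the same.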

2.

The sample size of a prospective clinical study aimed at validating a diagnostic biomarker-based test may be prohibitively large. We present a Bayesian framework that allows incorporating available development-study information about the performance of the test. As a result, the framework allows reducing the sample size required in the validation study, which may render the latter study feasible. The validation is based on Bayesian testing of a hypothesis regarding possible values of the AUC. Toward this end, available information is first translated into a prior distribution. Next, this prior distribution is used in a Bayesian design to evaluate the performance of the diagnostic test. We perform a simulation study to compare the power of the proposed Bayesian design to an approach ignoring development-study information. For each scenario, 1000 studies of sample size 100, 400, and 800 are simulated. Overall, the proposed Bayesian design leads to substantially higher power than the flat-prior design. In some of the considered simulation settings, the Bayesian design requires as little as 50% of the flat-prior design's sample size to reach approximately the same power. Moreover, a simulation-based application strategy is proposed and illustrated with a case study involving the development of a biomarker-based diagnostic test for Alzheimer's disease.

3.
In recent years, Bayesian response-adaptive designs have been used to improve the efficiency of learning in dose-finding studies. Many current methods for analyzing the data at the time of the interim analysis use only the data from patients who have completed the study. Therefore, data collected at intermediate time points are not used for decision making in these studies. However, in some disease areas such as diabetes and obesity, patients may need to be studied for several weeks or months for a drug effect to emerge. Additionally, slow enrollment rates can limit the number of patients who complete the study in a given period of time. Consequently, at the time of an interim analysis, only a small proportion (e.g., 20%) of patients may have completed the study. In this paper, we propose a new Bayesian prediction model that incorporates all the data (from patients who have completed the study and from those who have not) to make decisions about the study at the interim analysis. Examples of decisions made at the interim analysis include adaptive treatment allocation, dropping nonefficacious dose arms, stopping the study for positive efficacy, and stopping the study for futility. The model can handle incomplete longitudinal data, including data considered missing at random (MAR). A utility-function-based decision rule is also discussed. The benefit of the new method is demonstrated through trial simulations: three scenarios are examined, and the results show that the new method outperforms a traditional design with the same sample size in each scenario.

4.
When the Phase III treatment effect is diluted relative to what was observed in Phase II, we propose determining the Bayesian sample size for a Phase III clinical trial based on normal, uniform, and truncated normal prior distributions of the treatment effect on an interval ranging from a minimally acceptable treatment effect to the treatment effect observed in Phase II. After incorporating this prior information, the Bayesian sample size is the number of patients the Phase III trial needs to achieve a given Bayesian Predictive Power (BPP) or Bayesian Historical Predictive Power (BHPP). Numerical simulations are then carried out to determine the Bayesian sample size for the Phase III clinical trial. In particular, a hook phenomenon arises for the BHPP when the number of patients in the Phase II trial equals 70 under the normal, uniform, or truncated normal treatment effect. Moreover, we provide sensitivity analyses of the Bayesian sample size with respect to the simulation parameters. Finally, we determine the Bayesian sample size (number of events or deaths) of the Phase III trial for a fixed power, Bayesian Historical Power (BHP), and BHPP in the axitinib example.

5.

In clinical research, parameters required for sample size calculation are usually unknown. A typical approach is to use estimates from pilot studies as the true parameters in the calculation. This approach, however, does not take sampling error into consideration. Thus, the resulting sample size could be misleading if the sampling error is substantial. As an alternative, we suggest a Bayesian approach with a noninformative prior to reflect the uncertainty in the parameters induced by the sampling error. Based on this prior and the data from the pilot samples, Bayesian estimators under appropriate loss functions can be obtained. The traditional sample size calculation procedure can then be carried out using the Bayesian estimates instead of the frequentist estimates. The results indicate that the sample size obtained using the Bayesian approach differs from the traditional sample size by a constant inflation factor, which is purely determined by the size of the pilot study. An example is given for illustration purposes.
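The "constant inflation factor" can be made concrete with a small sketch. Assuming a two-sample z-test on means and a noninformative prior for which the posterior of sigma^2 is scaled-inverse-chi-square (so its posterior mean is the pilot variance times nu/(nu - 2), nu = n_pilot - 1), the Bayesian estimate simply inflates the classical sample size. This is an illustration under those assumptions, not the paper's exact loss-function-specific estimators; all names are invented here.

```python
import math

def z_quantile(p):
    """Standard normal quantile by bisection on the erf-based CDF."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def n_two_sample(sigma2, delta, alpha=0.05, power=0.8):
    """Classical per-group size for a two-sided two-sample z-test."""
    za, zb = z_quantile(1.0 - alpha / 2.0), z_quantile(power)
    return math.ceil(2.0 * (za + zb) ** 2 * sigma2 / delta ** 2)

def bayes_sigma2(s2_pilot, n_pilot):
    """Posterior mean of sigma^2 under the assumed noninformative prior:
    the pilot variance inflated by nu / (nu - 2), nu = n_pilot - 1. The
    inflation factor depends only on the pilot size, as the abstract notes."""
    nu = n_pilot - 1
    if nu <= 2:
        raise ValueError("pilot study too small for a finite posterior mean")
    return s2_pilot * nu / (nu - 2)
```

For a pilot of size 10, the inflation factor is 9/7, so plugging `bayes_sigma2(s2, 10)` into `n_two_sample` enlarges the classical sample size by about 29% regardless of the observed pilot variance.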

6.
This article presents a Bayesian approach to sample size determination in binomial and Poisson clinical trials. It uses exact methods and Bayesian methodology. Our sample size estimations are based on power calculations under the one-sided alternative hypothesis that a new treatment is better than a control by a clinically important margin. The method resembles a standard frequentist problem formulation and, in the case of conjugate prior distributions with integer parameters, is similar to the frequentist approach. We evaluate Type I and II errors through the use of credible limits in Bayesian models and confidence limits in frequentist models. In particular, for conjugate priors with integer parameters, credible limits are identical to frequentist confidence limits with adjusted numbers of events and sample sizes. We consider conditions under which the minimal Bayesian sample size is less than the frequentist one and vice versa.
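A binomial version of this idea can be sketched by simulation: with a conjugate Beta prior, a trial "succeeds" when the posterior probability that the response rate exceeds a reference value clears a credible threshold, and Bayesian power is the chance of that happening under the alternative. This is a simulation sketch with a uniform Beta(1, 1) prior, not the paper's exact-method calculations; the helper names are invented.

```python
import math
import random

def beta_tail(a, b, x0, steps=500):
    """P(p > x0) under a Beta(a, b) posterior (midpoint-rule integration)."""
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    h = (1.0 - x0) / steps
    total = 0.0
    for i in range(steps):
        p = x0 + (i + 0.5) * h
        total += math.exp(log_norm + (a - 1) * math.log(p)
                          + (b - 1) * math.log(1.0 - p)) * h
    return total

def bayesian_power(n, p_true, p0, thresh=0.975, sims=500, seed=1):
    """Fraction of simulated single-arm trials of size n whose posterior
    probability P(p > p0) (uniform prior) clears the success threshold."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(sims):
        x = sum(rng.random() < p_true for _ in range(n))
        if beta_tail(1 + x, 1 + n - x, p0) >= thresh:
            wins += 1
    return wins / sims
```

Scanning `n` upward until `bayesian_power` reaches a target gives a minimal Bayesian sample size, which can then be set against its frequentist counterpart as the abstract describes.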

7.
Willan AR. PharmacoEconomics. 2011;29(11):933-949.
Methods for determining sample size requirements for cost-effectiveness studies are reviewed and illustrated. Traditional methods based on tests of hypothesis and power arguments are given for the incremental cost-effectiveness ratio and incremental net benefit (INB). In addition, a full Bayesian approach using decision theory to determine the optimal sample size is given for INB. The full Bayesian approach, based on the value of information, is proposed in reaction to concerns that traditional methods rely on arbitrarily chosen error probabilities and an ill-defined notion of the smallest clinically important difference. Furthermore, the results of cost-effectiveness studies are used for decision making (e.g., should a new intervention be adopted or the old one retained?), and employing decision theory, which permits optimal use of current information and the optimal design of new studies, provides a more consistent approach.

8.
Some clinical trialists, especially those working in rare or pediatric disease, have suggested borrowing information from similar but already-completed clinical trials. This article begins with a case study in which relying solely on historical control information would have erroneously resulted in concluding a significant treatment effect. We then attempt to catalog situations where borrowing historical information may or may not be advisable using a series of carefully designed simulation studies. We use an MCMC-driven Bayesian hierarchical parametric survival modeling approach to analyze data from a sponsor’s colorectal cancer study. We also apply these same models to simulated data comparing the effective historical sample size, bias, 95% credible interval widths, and empirical coverage probabilities across the simulated cases. We find that even after accounting for variations in study design, baseline characteristics, and standard-of-care improvement, our approach consistently identifies Bayesianly significant differences between the historical and concurrent controls under a range of priors on the degree of historical data borrowing. Our simulation studies are far from exhaustive, but inform the design of future trials. When the historical and current controls are dissimilar, Bayesian methods can still moderate borrowing to a more appropriate level by adjusting for important covariates and adopting sensible priors.

9.
Poisson and negative binomial models are frequently used to analyze count data in clinical trials. While several sample size calculation methods have recently been developed for superiority tests for these two models, similar methods for noninferiority and equivalence tests are not available. When a noninferiority or equivalence trial is designed to compare Poisson or negative binomial rates, an appropriate method is needed to estimate the sample size to ensure the trial is properly powered. In this article, several sample size calculation methods for noninferiority and equivalence tests are derived based on Poisson and negative binomial models. All of these methods account for the potential over-dispersion that commonly exists in count data obtained from clinical trials. The precision of these methods is evaluated using simulations. Supplementary materials for this article are available online.
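The flavor of such a calculation can be shown with a textbook-style normal approximation on the log rate ratio; this is an assumed approximation for illustration, not the formulas derived in the article. For rates `lam1` (test) and `lam0` (control), noninferiority margin M > 1 on the rate ratio, follow-up time `t`, and negative binomial dispersion `k` (k = 0 recovers Poisson), the per-arm variance of the log rate estimate is taken as (1/(t*lam) + k)/n.

```python
import math

def z_quantile(p):
    """Standard normal quantile by bisection (stdlib only)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def ni_count_sample_size(lam1, lam0, margin, k=0.0, t=1.0,
                         alpha=0.025, power=0.9):
    """Per-arm n for a noninferiority test of the rate ratio lam1/lam0
    against a margin M > 1, using a normal approximation on the log rate
    ratio; k is the assumed negative binomial dispersion (k = 0 -> Poisson)."""
    if margin <= 1.0:
        raise ValueError("noninferiority margin must exceed 1")
    num = ((z_quantile(1.0 - alpha) + z_quantile(power)) ** 2
           * (1.0 / (t * lam1) + 1.0 / (t * lam0) + 2.0 * k))
    den = (math.log(lam1 / lam0) - math.log(margin)) ** 2
    return math.ceil(num / den)
```

Note how the over-dispersion term `2*k` in the numerator directly inflates the required sample size, which is the point the abstract makes about accounting for over-dispersion.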

10.
Abstract

Limited resources are a challenge when planning comparative effectiveness studies of multiple promising treatments, often prompting study planners to reduce the sample size to meet financial constraints. The practical solution is often to increase the efficiency of this sample size by selecting a pair of treatments from the pool of promising treatments before the clinical trial begins. The problem with this approach is that the investigator may inadvertently leave out the most beneficial treatment. This article demonstrates a possible solution to this problem using Bayesian adaptive designs. We use a planned comparative effectiveness clinical trial of treatments for sialorrhea in amyotrophic lateral sclerosis as an example of the approach. Rather than having to guess at the two best treatments to compare based on limited data, we suggest putting more arms in the trial and letting response-adaptive randomization (RAR) identify the better arms. To ground this study relative to the previous literature, we first compare RAR, adaptive equal randomization, arm dropping, and a fixed design. Given the goals of this trial, we demonstrate that we may avoid “Type III errors” (inadvertently leaving out the best treatment) with little loss in power compared to a two-arm design, even when the correct two arms are chosen for the two-armed design. There are appreciable gains in power when the two arms are prescreened at random.
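One standard RAR rule, shown here only as an illustration (the article does not necessarily use this allocation rule), is Thompson sampling: each arm's response rate gets a Beta posterior, and each patient is assigned to the arm with the largest posterior draw, so allocation drifts toward the better arms without dropping any arm outright.

```python
import random

def thompson_trial(true_rates, n_patients, seed=7):
    """Response-adaptive randomization by Thompson sampling: each patient
    is allocated to the arm with the largest draw from its Beta posterior
    (Beta(1, 1) priors on each arm's response rate). Returns per-arm
    allocation counts from a simulated trial."""
    rng = random.Random(seed)
    succ = [0] * len(true_rates)
    fail = [0] * len(true_rates)
    for _ in range(n_patients):
        draws = [rng.betavariate(1 + s, 1 + f) for s, f in zip(succ, fail)]
        a = draws.index(max(draws))
        if rng.random() < true_rates[a]:   # simulate the patient's response
            succ[a] += 1
        else:
            fail[a] += 1
    return [s + f for s, f in zip(succ, fail)]
```

With hypothetical response rates such as [0.1, 0.2, 0.6], the third arm accumulates most of the patients: the design finds the best arm instead of requiring it to be prespecified, which is exactly how RAR guards against "Type III errors".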

11.
In high-dimensional data analyses, such as microarray experiments, the false discovery rate (FDR) has been widely used as an appropriate criterion for controlling the false positive error rate, and some progress has been made on the issue of sample size calculation. However, there is still a lack of a simple and practically useful method for routine use.

This article investigates the power and the related problem of sample size determination for current FDR-controlling procedures under a mixture model involving independent test statistics. An approach is proposed in which one can use a traditional sample size calculation for a single hypothesis with an appropriately adjusted Type I error rate. This adjustment is based on a simple relationship between the desired FDR and power level and the individual Type I error rate. Simulation results show that our approach can be applied successfully under both an independence assumption and certain commonly used correlation structures.
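A relationship of the kind the abstract describes can be written down from the mixture model: if a fraction pi0 of hypotheses are null, each test uses per-test level alpha, and each true alternative is detected with probability `power`, then the expected FDR is roughly pi0*alpha / (pi0*alpha + (1 - pi0)*power); solving for alpha gives the adjusted single-test level. This is a generic back-of-the-envelope version, not necessarily the article's exact adjustment, and the function names are invented.

```python
import math

def z_quantile(p):
    """Standard normal quantile by bisection (stdlib only)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def fdr_adjusted_alpha(fdr, power, pi0):
    """Per-test type I error rate such that the expected FDR equals fdr
    when a fraction pi0 of hypotheses are null and each true alternative
    is detected with the given power:
        FDR = pi0*alpha / (pi0*alpha + (1 - pi0)*power)."""
    return (1.0 - pi0) / pi0 * fdr / (1.0 - fdr) * power

def n_per_group(delta, sigma, alpha, power):
    """Classical two-sample z-test size at the adjusted (one-sided) alpha."""
    za, zb = z_quantile(1.0 - alpha), z_quantile(power)
    return math.ceil(2.0 * ((za + zb) * sigma / delta) ** 2)
```

For example, targeting FDR 0.05 with power 0.8 when 90% of genes are null gives alpha of about 0.0047, and that alpha is then fed into the ordinary single-hypothesis sample size formula.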

12.
In this article, we show how to estimate a transition period for the evolution of resistance to antiretroviral (ARV) drugs or other related treatments within a framework for developing a Bayesian method for jointly analyzing time-to-event and longitudinal data. For HIV/AIDS longitudinal data, developmental trajectories of viral loads tend to show a gradual change from a declining trend after initiation of treatment to an increasing trend, without an abrupt change. Such characteristics of the trajectories are also associated with a time-to-event process. To assess these clinically important features, we develop a joint bent-cable Tobit model for the time-to-event process and a left-censored response variable with skewness and phasic development. Random effects are used to capture the stochastic dependence between the time-to-event process and the response process. The proposed method is illustrated using real data from an AIDS clinical study.

13.
Single-arm studies are typically used in phase II clinical trials, whose main objective is to determine whether a new treatment warrants further testing in a randomized phase III trial. The introduction of randomization in phase II, to avoid the limits of studies based on historical controls, is a critical issue widely debated in the recent literature. We use a Bayesian approach to compare single-arm and randomized studies, based on a binary response variable, in terms of their ability to reach the correct decision about the new treatment, both when it performs better than the standard one and when it is less effective. We evaluate how the historical control rate, the total sample size, and the elicitation of the prior distributions affect the decision about which study performs better.

14.
Wald tests and F tests are commonly used for analysis, particularly when the regression model is a generalized linear model. When these tests are proposed for analysis, it is important to also estimate the power and sample size during the design phase using the same test. Often, though, the information available prior to a study is insufficient to assess whether the response variable distributions assumed for power or sample size calculations are appropriate. This article demonstrates that such complete assumptions about the response distribution are not necessary to estimate power and sample size for moderate to large studies using quasi-likelihood methods. This method replaces the need to specify the response variable distribution with the weaker specification of only the mean-to-variance relationship. Complex designs, such as designs with interaction terms, are accommodated by this approach. Results are presented for data from one- and two-parameter exponential family distributions, which are among the most common distributions assumed in the medical, epidemiologic, and social sciences literature. Examples from mixture distributions are also presented. Monte Carlo simulation was used to estimate power for comparison. Quasi-likelihood power estimates were within 0.03 of estimates generated via simulation for most examples presented.

15.
We present a Bayesian approach to determining the optimal sample size for a historically controlled clinical trial. This work is motivated by a trial of a new coronary stent that uses a retrospective control group formed from seven trials of coronary stents currently marketed in the United States. In studies involving nonrandomized control groups, hierarchical regression, propensity score methods, or other sophisticated models are typically required to account for heterogeneity among groups which, if ignored, could bias the results. Sample size calculations for historically controlled trials of medical devices are often based on formulae derived for randomized trials and fail to account for estimation of model parameters, correlation of observations, and uncertainty in the distribution of covariates of the patients recruited in the new trial. We propose methodology based on stochastic optimization that overcomes these deficiencies. The methodology is demonstrated using an objective function based on the power of the trial from a Bayesian approach. Analytic approximations based on a covariate-free analysis that convey features of the power function are developed. Our principal conclusions are that exact sample size calculations can be substantially different from current approximations, and that stochastic optimization provides a convenient method of computation.

16.
We present a Bayesian adaptive design for a confirmatory trial to select a trial’s sample size based on accumulating data. During accrual, frequent sample size selection analyses are made and predictive probabilities are used to determine whether the current sample size is sufficient or whether continuing accrual would be futile. The algorithm explicitly accounts for complete follow-up of all patients before the primary analysis is conducted. We refer to this as a Goldilocks trial design, as it is constantly asking the question, “Is the sample size too big, too small, or just right?” We describe the adaptive sample size algorithm, describe how the design parameters should be chosen, and show examples for dichotomous and time-to-event endpoints.
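For the dichotomous-endpoint case, the predictive probability at an interim look can be sketched with conjugate Beta/beta-binomial machinery: given x responses among n patients so far, sum over the possible future responses the probability that the final posterior clears a success threshold. This is a simplified, single-arm illustration with a uniform prior and invented names, not the full algorithm described in the article (which also handles follow-up and futility stopping rules).

```python
import math

def beta_tail(a, b, x0, steps=1000):
    """P(p > x0) under a Beta(a, b) posterior (midpoint-rule integration)."""
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    h = (1.0 - x0) / steps
    total = 0.0
    for i in range(steps):
        p = x0 + (i + 0.5) * h
        total += math.exp(log_norm + (a - 1) * math.log(p)
                          + (b - 1) * math.log(1.0 - p)) * h
    return total

def betabinom_pmf(y, m, a, b):
    """P(Y = y) for Y ~ BetaBinomial(m, a, b): the number of responses
    among m future patients given a current Beta(a, b) posterior."""
    return math.comb(m, y) * math.exp(
        math.lgamma(a + y) + math.lgamma(b + m - y) - math.lgamma(a + b + m)
        - math.lgamma(a) - math.lgamma(b) + math.lgamma(a + b))

def predictive_prob_success(x, n, n_max, p0, win_thresh=0.975):
    """Predictive probability (uniform prior) that the final analysis at
    n_max patients declares success, i.e. P(p > p0) >= win_thresh, given
    x responses among the n patients accrued so far."""
    m = n_max - n
    pp = 0.0
    for y in range(m + 1):
        if beta_tail(1 + x + y, 1 + n_max - x - y, p0) >= win_thresh:
            pp += betabinom_pmf(y, m, 1 + x, 1 + n - x)
    return pp
```

A Goldilocks-style rule would then stop accrual as "just right" when this predictive probability is high enough, and stop for futility when it is too low even at the maximum sample size.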

17.
The frailty model is increasingly popular for analyzing multivariate time-to-event data; the most common variant is the shared frailty model. Although study design considerations are as important as analysis strategies, sample size determination methodology for studies with multivariate time-to-event data is greatly lacking in the literature. In this article, we develop a sample size determination method for the shared frailty model to investigate the treatment effect on multivariate event times. We analyze the data using both a parametric model and a piecewise model with unknown baseline hazard, and compare the empirical power with the calculated power. Lastly, we discuss the formula for testing the treatment effect on recurrent events.

19.
Analysis and planning methods for the competing risks model have been described in the literature in recent decades, and noninferiority clinical trials play an important role in current pharmaceutical practice. Analytical methods for noninferiority clinical trials in the presence of competing risks (NiCTCR) were investigated by Parpia et al., who indicated that the proportional sub-distribution hazard (SDH) model is appropriate in the context of biological studies. However, the analytical methods for the competing risks model differ from those appropriate for noninferiority clinical trials with a single outcome; thus, a corresponding method for planning such trials is necessary. A sample size formula for NiCTCR based on the proportional SDH model is presented in this paper, with the primary endpoint relying on the SDH ratio. A total of 120 simulations and an example based on a randomized controlled trial verified the empirical performance of the presented formula. The results demonstrate that the empirical power of the sample size formula based on the Weibull distribution for noninferiority clinical trials with competing risks reaches the targeted power.

20.
Meta-analysis has been widely applied to rare adverse event data because it is very difficult to reliably detect the effect of a treatment on such events in an individual clinical study. However, it is known that standard meta-analysis methods are often biased, especially when the background incidence rate is very low. A recent work by Bhaumik et al. proposed new moment-based approaches under a natural random effects model, to improve estimation and testing of the treatment effect and the between-study heterogeneity parameter. It has been demonstrated that for rare binary events, their methods have superior performance to commonly used meta-analysis methods. However, their comparison does not include any Bayesian methods, although Bayesian approaches are a natural and attractive choice under the random-effects model. In this article, we study a Bayesian hierarchical approach to estimation and testing in meta-analysis of rare binary events using the random effects model in Bhaumik et al. We develop Bayesian estimators of the treatment effect and the heterogeneity parameter, as well as hypothesis testing methods based on Bayesian model selection procedures. We compare them with the existing methods through simulation. A data example is provided to illustrate the Bayesian approach as well.
