Similar articles
20 similar articles found (search time: 15 ms)
1.
This article presents a Bayesian approach to sample size determination in binomial and Poisson clinical trials. It uses exact methods and Bayesian methodology. Our sample size estimations are based on power calculations under the one-sided alternative hypothesis that a new treatment is better than a control by a clinically important margin. The method resembles a standard frequentist problem formulation and, in the case of conjugate prior distributions with integer parameters, is similar to the frequentist approach. We evaluate Type I and II errors through the use of credible limits in Bayesian models and through the use of confidence limits in frequentist models. In particular, for conjugate priors with integer parameters, credible limits are identical to frequentist confidence limits with adjusted numbers of events and sample sizes. We consider conditions under which the minimal Bayesian sample size is less than the frequentist one and vice versa.
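A minimal sketch of the kind of calculation described above (not the authors' exact method): Bayesian "power" for a two-arm binomial trial estimated by simulation, where a trial succeeds if the posterior probability that the new treatment beats control by a margin exceeds a credibility threshold. All rates, priors, and thresholds below are illustrative.

```python
import random

random.seed(1)

def bayesian_power(n, p_t=0.6, p_c=0.4, margin=0.0,
                   a=1.0, b=1.0, cred=0.95, n_trials=200, n_post=200):
    """Fraction of simulated trials with Pr(p_t - p_c > margin | data) > cred.

    Independent Beta(a, b) conjugate priors per arm, so each posterior
    is Beta(a + successes, b + failures).
    """
    wins = 0
    for _ in range(n_trials):
        x_t = sum(random.random() < p_t for _ in range(n))
        x_c = sum(random.random() < p_c for _ in range(n))
        # Monte Carlo estimate of the posterior probability of superiority
        hits = 0
        for _ in range(n_post):
            pt = random.betavariate(a + x_t, b + n - x_t)
            pc = random.betavariate(a + x_c, b + n - x_c)
            if pt - pc > margin:
                hits += 1
        if hits / n_post > cred:
            wins += 1
    return wins / n_trials

# Larger trials clear the credibility threshold more often; a sample-size
# search would pick the smallest n whose estimated power reaches a target.
print(bayesian_power(n=50), bayesian_power(n=200))
```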

2.
We present a Bayesian approach to determining the optimal sample size for a historically controlled clinical trial. This work is motivated by a trial of a new coronary stent that uses a retrospective control group formed from seven trials of coronary stents currently marketed in the United States. In studies involving nonrandomized control groups, hierarchical regression, propensity score methods, or other sophisticated models are typically required to account for heterogeneity among groups which, if ignored, could bias the results. Sample size calculations for historically controlled trials of medical devices are often based on formulae derived for randomized trials and fail to account for estimation of model parameters, correlation of observations, and uncertainty in the distribution of covariates of the patients recruited in the new trial. We propose methodology based on stochastic optimization that overcomes these deficiencies. The methodology is demonstrated using an objective function based on the power of the trial from a Bayesian approach. Analytic approximations based on a covariate-free analysis that convey features of the power function are developed. Our principal conclusions are that exact sample size calculations can be substantially different from current approximations, and stochastic optimization provides a convenient method of computation.

3.
We are now at an amazing time for medical product development in drugs, biological products and medical devices. As a result of dramatic recent advances in biomedical science, information technology and engineering, “big data” from health care in the real-world have become available. Although big data may not necessarily be attuned to provide the preponderance of evidence to a clinical study, high-quality real-world data can be transformed into scientific evidence for regulatory and healthcare decision-making using proven analytical methods and techniques, such as propensity score methodology and Bayesian inference. In this paper, we extend the Bayesian power prior approach for a single-arm study (the current study) to leverage external real-world data. We use propensity score methodology to pre-select a subset of real-world data containing patients that are similar to those in the current study in terms of covariates, and to stratify the selected patients together with those in the current study into more homogeneous strata. The power prior approach is then applied in each stratum to obtain stratum-specific posterior distributions, which are combined to complete the Bayesian inference for the parameters of interest. We evaluate the performance of the proposed method as compared to that of the ordinary power prior approach by simulation and illustrate its implementation using a hypothetical example, based on our regulatory review experience.
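A minimal sketch of the binomial power prior itself (illustrative, not the paper's stratified propensity-score version). External data enter the prior through a discount exponent a0 in [0, 1]; with a Beta initial prior everything stays conjugate, so the posterior is Beta(a + x + a0·x0, b + (n − x) + a0·(n0 − x0)). The counts below are made up.

```python
def power_prior_posterior(x, n, x0, n0, a0, a=1.0, b=1.0):
    """Posterior Beta parameters for the current-study response rate.

    (x, n): current-study responders and sample size;
    (x0, n0): external (real-world) responders and sample size;
    a0: power-prior discount in [0, 1] (0 ignores the external data).
    """
    alpha = a + x + a0 * x0
    beta = b + (n - x) + a0 * (n0 - x0)
    return alpha, beta

def post_mean(alpha, beta):
    return alpha / (alpha + beta)

# Current study: 12/30 responders; external data: 80/160 responders.
# Increasing a0 pulls the posterior mean toward the external rate (0.5).
for a0 in (0.0, 0.5, 1.0):
    alpha, beta = power_prior_posterior(12, 30, 80, 160, a0)
    print(a0, round(post_mean(alpha, beta), 3))
```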

4.
The Food and Drug Administration (FDA) has proposed a parametric tolerance interval test (PTIT) for batch-release testing of inhalation devices. The proposed test examines dose uniformity based on several inhalation units from a batch, with two observations per unit. An underlying assumption is that the observations are a random sample from a univariate normal distribution. Because there are two observations per unit, it may be more appropriate to model the data as stemming from a bivariate normal distribution. We take a bivariate approach and use generalized confidence interval methodology to derive a parametric tolerance interval for the distribution of doses within a batch. We then use Monte Carlo simulation to compare results based on this bivariate approach with those based on the FDA-proposed PTIT.

5.
We describe a tolerance interval approach for assessing agreement in method comparison data that may be left censored. We model the data using a mixed model and discuss a Bayesian and a frequentist methodology for inference. A simulation study suggests that the Bayesian approach with noninformative priors provides a good alternative to the frequentist one for moderate sample sizes, as the latter tends to be liberal. Both may be used for sample sizes of 100 or more, with the Bayesian one being slightly conservative. The proposed methods are illustrated with real data involving comparison of two assays for quantifying viral load in HIV patients.

6.
Population approaches to modeling pharmacokinetic and/or pharmacodynamic data attempt to separate the variability in observed data into within- and between-individual components. This is most naturally achieved via a multistage model. At the first stage of the model the data of a particular individual are modeled, with each individual having its own set of parameters. At the second stage these individual parameters are assumed to have arisen from some unknown population distribution, which we shall denote F. The importance of the choice of second-stage distribution has led to a number of flexible approaches to the modeling of F. A nonparametric maximum likelihood estimate of F was suggested by Mallet, whereas Davidian and Gallant proposed a semiparametric maximum likelihood approach where the maximum likelihood estimate is obtained over a smooth class of distributions. Previous Bayesian work has concentrated largely on F being assigned to a parametric family, typically the normal or Student's t. We describe a Bayesian nonparametric approach using the Dirichlet process. We use Markov chain Monte Carlo simulation to implement the procedure. We discuss each procedure and compare our approach with those of Mallet and of Davidian and Gallant, using simulated data for a pharmacodynamic dose-response model.
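A minimal sketch of the stick-breaking construction of a single draw from a Dirichlet process, DP(alpha, G0), which is the nonparametric prior placed on F above. The base measure G0 = N(0, 1) and concentration alpha = 2 are illustrative choices; this shows only the prior, not the MCMC fitting scheme.

```python
import random

random.seed(5)

def dp_stick_breaking(alpha=2.0, n_atoms=200):
    """Truncated draw from DP(alpha, G0): atoms from G0 = N(0, 1),
    weights formed by breaking the unit stick with Beta(1, alpha) pieces."""
    weights, atoms = [], []
    remaining = 1.0
    for _ in range(n_atoms):
        v = random.betavariate(1.0, alpha)
        weights.append(remaining * v)       # piece broken off the stick
        atoms.append(random.gauss(0.0, 1.0))  # location drawn from G0
        remaining *= (1.0 - v)              # stick left to break
    return weights, atoms

weights, atoms = dp_stick_breaking()
# A draw from a DP is a discrete distribution; after truncation the
# weights sum to essentially 1.
print(round(sum(weights), 6))
```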

7.
ASTM Standard E2810 provides a methodology for establishing confidence in passing the USP <905> Uniformity of Dosage Units (UDU) test, and provides acceptance limits for sample means and standard deviations that can be used as elements of lot release. These acceptance limits are quite conservative, however, due to both the nature and shape of an inverted triangular joint confidence region for the lot mean and standard deviation. We obtain improved (wider) acceptance limits by using a Bayesian approach that focuses on the posterior distribution of the probability of passing the USP <905> UDU test. The Bayesian approach has good sampling properties, and the improvement in acceptance limits can be considerable. For example, for a sample size of 10 units and sample means between 97 and 103, the Bayesian approach results in acceptance limits for sample standard deviations that are at least 25% greater than those in E2810. The impact of the improved acceptance limits is illustrated with operating characteristic curves.
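A minimal sketch of the central quantity above: the posterior distribution of the probability of passing a uniformity test. To stay self-contained, a simplified one-stage pass rule (all 10 sampled units within 85–115% of label claim) stands in for the actual two-stage USP <905> criteria, and a standard noninformative normal prior is assumed; all numbers are illustrative.

```python
import math
import random

random.seed(7)

def prob_pass_posterior(xbar, s, n, n_draws=2000, n_units=10,
                        lo=85.0, hi=115.0):
    """Posterior mean of the probability that a future sample of n_units
    doses all fall in [lo, hi] (simplified stand-in for USP <905>).

    Noninformative prior: sigma^2 | data ~ (n-1) s^2 / chi2_{n-1},
    mu | sigma, data ~ N(xbar, sigma^2 / n).
    """
    total = 0.0
    for _ in range(n_draws):
        chi2 = random.gammavariate((n - 1) / 2.0, 2.0)   # chi-square_{n-1}
        sigma = math.sqrt((n - 1) * s * s / chi2)
        mu = random.gauss(xbar, sigma / math.sqrt(n))
        # P(one unit in [lo, hi]) under N(mu, sigma^2), then all units pass
        z_lo = (lo - mu) / sigma
        z_hi = (hi - mu) / sigma
        p_unit = 0.5 * (math.erf(z_hi / math.sqrt(2))
                        - math.erf(z_lo / math.sqrt(2)))
        total += p_unit ** n_units
    return total / n_draws

# A tight lot (mean on target, small SD) passes with much higher posterior
# probability than a variable one, which is what drives acceptance limits.
p_tight = prob_pass_posterior(100.0, 2.0, 10)
p_loose = prob_pass_posterior(100.0, 8.0, 10)
print(p_tight > p_loose)
```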

9.
10.
The ICH Q8 defines “design space” (DS) as “The multidimensional combination and interaction of input variables (e.g., material attributes) and process parameters that have been demonstrated to provide assurance of quality.” Unfortunately, some pharmaceutical scientists appear to misinterpret the definition of DS as a process monitoring strategy. A more subtle and possibly more misleading issue, however, is the application of standard response surface methodology software applications in an attempt to construct a DS. The methodology of “overlapping mean responses” (OMR), available in many point-and-click oriented statistical packages, provides a tempting opportunity to use this methodology to create a DS. Furthermore, a few recent (and two possibly very influential) papers have been published that appear to propose the use of OMR as a way to construct a DS. However, such a DS may harbor operating conditions with a low probability of meeting process specifications. In this article we compare the OMR approach with a Bayesian predictive approach to DS, and show that the OMR approach produces DSs that are too large and may contain conditions with a low probability of meeting process specifications. In some cases, even the best operating conditions do not have a high probability of meeting all process specifications.
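A minimal illustration (with made-up numbers) of why OMR can overstate a design space: an operating point whose *mean* responses exactly meet two lower specification limits still has a low *joint probability* of meeting both specs once residual variability is accounted for, which is what the Bayesian predictive approach measures.

```python
import random

random.seed(11)

def joint_prob_in_spec(mu1, mu2, sd=1.0, lo1=0.0, lo2=0.0, n_draws=20000):
    """Predictive P(both responses exceed their lower specs), assuming
    independent normal residual noise around the two mean responses."""
    hits = 0
    for _ in range(n_draws):
        y1 = random.gauss(mu1, sd)
        y2 = random.gauss(mu2, sd)
        if y1 >= lo1 and y2 >= lo2:
            hits += 1
    return hits / n_draws

# Means exactly on the two spec limits: OMR would accept this point,
# yet the joint probability of meeting both specs is only about 0.25.
p = joint_prob_in_spec(0.0, 0.0)
print(round(p, 2))
```

Moving the means well inside the specs (e.g. `joint_prob_in_spec(3.0, 3.0)`) drives the joint probability near 1, which is the kind of condition a probability-based DS would retain.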

11.
Human exposure to a specific pesticide or other chemical can occur from a combination of food and drink products. Probabilistic risk assessments are used to quantify the distribution of mean total daily exposures in the population, from the available data on residues and consumptions. We present a new statistical method for estimating this distribution, based on dietary survey data for multiple food types and residue monitoring data. The model allows for between-food correlations in both frequency and amounts of consumption. Three case studies are presented based on consumption data for UK children, considering the distribution of daily intakes of pyrimethanil, captan and chlorpyrifos aggregated over 4, 6 and 10 food types, respectively. We compared three alternative approaches, each using a Bayesian approach to quantify uncertainty: (i) a multivariate model that explicitly includes correlation parameters; (ii) separate independent parametric models for individual food types and (iii) a single parametric model applied to intakes aggregated directly from the data. The results demonstrate the importance of accounting for correlations between foods, using model (i) or (iii), for example, but also show that model (iii) can produce very different results when the aggregated intakes distribution is bimodal. The influence of residue uncertainty is also demonstrated.

12.
Modelling is an important applied tool in drug discovery and development for the prediction and interpretation of drug pharmacokinetics. Preclinical information is used to decide whether a compound will be taken forwards and its pharmacokinetics investigated in human. After proceeding to human, little to no use is made of these often very rich data. We suggest a method where the preclinical data are integrated into a whole body physiologically based pharmacokinetic (WBPBPK) model and this model is then used for estimating population PK parameters in human. This approach offers a continuous flow of information from preclinical to clinical studies without the need for different models or model reduction. Additionally, predictions are based upon single parameter values, but making realistic predictions involves incorporating the various sources of variability and uncertainty. Currently, WBPBPK modelling is undertaken as a two-stage process: (i) estimation (optimisation) of drug-dependent parameters by either least squares regression or maximum likelihood and (ii) accounting for the existing parameter variability and uncertainty by stochastic simulation. To address these issues a general Bayesian approach using WinBUGS for estimation of drug-dependent parameters in WBPBPK models is described. Initially applied to data in rat, this approach is then adapted for extrapolation to human, which allows retention of some parameters and updating of others with the available human data. While the issues surrounding the incorporation of uncertainty and variability within prediction have been explored within WBPBPK modelling methodology, they have equal application to other areas of pharmacokinetics, as well as to pharmacodynamics.

13.
One of the aims of Phase II clinical trials is to determine the dosage regimen(s) that will be investigated during a confirmatory Phase III clinical trial. During Phase II, pharmacodynamic data are collected that enable the efficacy and safety of the drug to be assessed. It is proposed in this paper to use Bayesian decision analysis to determine the optimal dosage regimen based on efficacy and toxicity of the drug oxybutynin, used in the treatment of urinary urge incontinence. Such an approach results in a general framework allowing modeling, inference and decision making to be carried out. For oxybutynin, the repeated-measurement efficacy and toxicity data were modeled using nonlinear hierarchical models and inferences were based on posterior probabilities. The optimal decision in this problem was to determine the dosage regimen that maximized the posterior expected utility given the prior information on the model parameters and the patient response data. The utility function was defined using clinical opinion on the satisfactory levels of efficacy and toxicity, and the components were then combined by weighting the relative importance of each pharmacodynamic response. Markov chain Monte Carlo (MCMC) methodology implemented in WinBUGS 1.3 was used to obtain posterior estimates of the model parameters, probabilities and utilities.

14.
Microarray technology allows one to measure gene expression levels simultaneously on the whole-genome scale. The rapid progress generates both a great wealth of information and challenges in making inferences from such massive data sets. Bayesian statistical modeling offers an alternative approach to frequentist methodologies, and has several features that make these methods advantageous for the analysis of microarray data. These include the incorporation of prior information, flexible exploration of arbitrarily complex hypotheses, easy inclusion of nuisance parameters, and relatively well developed methods to handle missing data. Recent developments in Bayesian methodology generated a variety of techniques for the identification of differentially expressed genes, finding genes with similar expression profiles, and uncovering underlying gene regulatory networks. Bayesian methods will undoubtedly become more common in the future because of their great utility in microarray analysis.

15.
This paper describes a use of Monte Carlo integration for population pharmacokinetics with a multivariate population distribution. In the proposed approach, a multivariate lognormal distribution is assumed for the population distribution of pharmacokinetic (PK) parameters. The maximum likelihood method is employed to estimate the population means, variances, and correlation coefficients of the multivariate lognormal distribution. Instead of a first-order Taylor series approximation to a nonlinear PK model, the proposed approach employs Monte Carlo integration for the multiple integral in maximizing the log likelihood function. Observations below the lower limit of detection, which are usually included in Phase 1 PK data, are also incorporated into the analysis. Applications are given to a simulated data set and an actual Phase 1 trial to show how the proposed approach works in practice.
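A minimal sketch of the core computation above for one subject: the individual marginal likelihood L_i(θ) = ∫ p(y_i | k) p(k | θ) dk approximated by Monte Carlo averaging over draws from a lognormal population distribution. The one-parameter exponential-decay PK model, the parameter values, and the data are all hypothetical stand-ins for the paper's multivariate model.

```python
import math
import random

random.seed(3)

def conc(k, t, c0=100.0):
    """Hypothetical one-compartment bolus model: C(t) = C0 * exp(-k t)."""
    return c0 * math.exp(-k * t)

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def mc_marginal_loglik(times, ys, mu_logk, sd_logk, sd_err=5.0, K=5000):
    """log L_i(theta) ≈ log[(1/K) Σ_j p(y_i | k_j)], with k_j drawn from
    the lognormal population distribution with parameters (mu_logk, sd_logk)."""
    total = 0.0
    for _ in range(K):
        k = math.exp(random.gauss(mu_logk, sd_logk))   # lognormal draw
        lik = 1.0
        for t, y in zip(times, ys):
            lik *= normal_pdf(y, conc(k, t), sd_err)
        total += lik
    return math.log(total / K)

times = [1.0, 2.0, 4.0, 8.0]
ys = [80.0, 66.0, 44.0, 20.0]   # roughly consistent with k ≈ 0.2 per hour
# The marginal log-likelihood favors a population centered near the
# data-generating elimination rate over a badly misspecified one:
ll_good = mc_marginal_loglik(times, ys, math.log(0.2), 0.3)
ll_bad = mc_marginal_loglik(times, ys, math.log(0.8), 0.3)
print(ll_good > ll_bad)
```

Maximum likelihood estimation would wrap this integral in an optimizer over (mu_logk, sd_logk); the paper's approach does this for the full multivariate lognormal.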

16.
Meta-analysis has been widely applied to rare adverse event data because it is very difficult to reliably detect the effect of a treatment on such events in an individual clinical study. However, it is known that standard meta-analysis methods are often biased, especially when the background incidence rate is very low. A recent work by Bhaumik et al. proposed new moment-based approaches under a natural random effects model, to improve estimation and testing of the treatment effect and the between-study heterogeneity parameter. It has been demonstrated that for rare binary events, their methods have superior performance to commonly used meta-analysis methods. However, their comparison does not include any Bayesian methods, although Bayesian approaches are a natural and attractive choice under the random-effects model. In this article, we study a Bayesian hierarchical approach to estimation and testing in meta-analysis of rare binary events using the random effects model in Bhaumik et al. We develop Bayesian estimators of the treatment effect and the heterogeneity parameter, as well as hypothesis testing methods based on Bayesian model selection procedures. We compare them with the existing methods through simulation. A data example is provided to illustrate the Bayesian approach as well.

17.
We wish to use prior information on an existing drug in the design and analysis of a dose-response study for a new drug candidate within the same pharmacological class. Using the Bayesian methodology, this prior information can be used quantitatively and the randomization can be weighted in favor of the new compound, where there is less information. An Emax model is used to describe the dose-response of the existing drug. The estimates from this model provide informative prior information for the design and analysis of the new study to establish the relative potency between the new compound and the existing drug therapy. The assumption is made that the data from previous trials and the new study are exchangeable. The impact of departures from this assumption can be quantified through simulations and by assessing the operating characteristics of various scenarios. Simulations show that relatively modest sample sizes can yield informative results about the magnitude of the relative potency using this approach. The operating characteristics are good when assessing model estimates against clinically important changes in relative potency.
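A minimal sketch of the Emax dose-response model and how relative potency enters it; all parameter values are hypothetical. If the new compound is rho times as potent, its ED50 is the existing drug's ED50 divided by rho, so a dose of d on the new compound matches a dose of rho·d on the existing one.

```python
def emax(dose, e0, emax_, ed50):
    """Standard Emax model: E = E0 + Emax * dose / (ED50 + dose)."""
    return e0 + emax_ * dose / (ed50 + dose)

# Hypothetical existing-drug parameters: E0 = 1, Emax = 8, ED50 = 10.
# A relative potency of rho = 2 means the new compound's ED50 is 10 / 2 = 5,
# so 5 units of the new compound match 10 units of the existing drug:
rho = 2.0
e_existing = emax(10.0, e0=1.0, emax_=8.0, ed50=10.0)
e_new = emax(5.0, e0=1.0, emax_=8.0, ed50=10.0 / rho)
print(e_existing == e_new)
```

In the Bayesian design above, the existing drug's curve supplies informative priors for (E0, Emax, ED50), and rho is the key unknown estimated from the new study.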

18.
In this paper we propose a Bayesian method to combine safety data collected from two separate drug development programs using the same active drug substance but for different indications, formulations, or patient populations. The objective of combining the data across the programs is to better define the level of safety risk associated with the new indication or target population. There may be adverse events (AEs) observed in the new program that represent new safety signals. Our method is to explore the AEs using data from both development programs. Our approach utilizes data collected previously to assist in analyzing safety data from the new program. It is assumed that the frequency of a certain AE follows a distribution with a parameter that characterizes the safety risk level. The parameter is assumed to follow a distribution function. In the Bayesian framework, this distribution function is called a prior distribution in the absence of data and a posterior distribution when updated by real data. The key concept behind our method is to use data from the previous program to construct a posterior distribution that will in turn serve as a prior distribution for the new program. The construction of this updated prior down-weights data from the previous program to emphasize the new program and thus avoids simple pooling of the data across programs. Such “soft use” of previous information minimizes the potential for undue influence of previous data on the analysis. Data from the new program are used to update the prior distribution and compute the posterior distribution for the new program. Key statistics are then calculated from the posterior distribution to quantify the risk level for the new program. We have tested the proposed approach using data from a real Phase 2 study that was conducted as part of a clinical development program for a new indication of an approved drug.
The results indicate that the estimated risk level was affected both by the observed event rates and the extent of exposure across the two development programs. This approach appropriately characterizes the safety profile across the two development programs and properly contextualizes new safety signals from the new program.
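A minimal conjugate sketch of down-weighting a previous program's data when forming the prior for a new one (illustrative; not the authors' exact construction). AE counts are modeled as Poisson with exposure in patient-years and a Gamma-distributed rate, so the discounted prior and the posterior stay in closed form; the weight w in [0, 1] controls how much the previous program informs the new one.

```python
def discounted_posterior(x_new, e_new, x_prev, e_prev, w, a0=0.5, b0=0.001):
    """Gamma(shape, rate) posterior for the AE rate in the new program.

    (x_prev, e_prev): events and exposure (patient-years) in the previous
    program, discounted by w; (x_new, e_new): the new program's data;
    (a0, b0): a vague initial Gamma prior.
    """
    shape = a0 + w * x_prev + x_new
    rate = b0 + w * e_prev + e_new
    return shape, rate

# Previous program: 40 events in 2000 patient-years (rate 0.02/PY).
# New program: 9 events in 300 patient-years (rate 0.03/PY).
# As w grows, the posterior mean rate is pulled toward the previous rate.
for w in (0.0, 0.3, 1.0):
    shape, rate = discounted_posterior(9, 300.0, 40, 2000.0, w)
    print(w, round(shape / rate, 4))
```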

19.
A procedure was developed for the determination of mercurials of pharmaceutical interest. Protic acid cleavage of the compound was followed by reduction of the resulting mercuric ion and vapor phase atomic absorption spectroscopy. This procedure was applied to 11 different mercurial compounds in various pharmaceutical preparations and offers excellent sensitivity compared with presently used compendial assays. Comparative analytical data between this procedure and compendial methodology are presented.

20.
The Bayesian approach has been suggested as a suitable method in the context of mechanistic pharmacokinetic-pharmacodynamic (PK-PD) modeling, as it allows for efficient use of both data and prior knowledge regarding the drug or disease state. However, to this day, published examples of its application to real PK-PD problems have been scarce.

We present an example of a fully Bayesian re-analysis of a previously published mechanistic model describing the time course of circulating neutrophils in stroke patients and healthy individuals.

While priors could be established for all population parameters in the model, not all variability terms were known with any degree of precision. A sensitivity analysis around the assigned priors was performed by testing three different sets of prior values for the population variance terms for which no data were available in the literature: “informative”, “semi-informative”, and “noninformative”. For all variability terms, inverse gamma distributions were used.

It was possible to fit the model to the data using the “informative” priors. However, when the “semi-informative” and “noninformative” priors were used, it was impossible to achieve convergence due to severe correlations between parameters. In addition, due to the complexity of the model, the process of defining priors and running the Markov chains was very time-consuming.

We conclude that the present analysis represents a first example of the fully transparent application of Bayesian methods to a complex, mechanistic PK-PD problem with real data. The approach is time-consuming, but it enables us to make use of all available information from data and scientific evidence. It thereby shows potential both for detection of data gaps and for more reliable predictions of various outcomes and “what if” scenarios.
