Similar Articles
1.
The maximum effective dose (MaxED) is an important quantity for therapeutic drugs. The MaxED is defined as the dose above which no improvement in efficacy is obtained. In this article, we propose two experimental designs and analytic methods (one single-stage design and one two-stage design) to select the MaxED among several fixed doses and to compare the therapeutic effect of the selected MaxED with a control. The selection of the MaxED is based on isotonic regression under the restriction of monotonicity. In the single-stage design, both the selection of the MaxED and the assessment of its efficacy are carried out at the end of the experiment. In the two-stage design, the selection of the MaxED and the assessment of its efficacy are carried out at the interim analysis (first stage); the second-stage experiment is carried out only at the selected MaxED and the control, and only if the first-stage test is not significant. Thus, the two-stage design enables selection of the MaxED at an earlier stage and stopping the trial early if the treatment effect at the MaxED is extreme. Williams' test (1972) is applied to test whether the selected MaxED is significantly different from the control, both in the single-stage design and at the first stage of the two-stage design. Sample size calculations are provided for each design. Extensive simulations illustrate the performance of the proposed methods.
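
To make the isotonic-regression step concrete, below is a minimal Python sketch of the pool-adjacent-violators algorithm (PAVA) followed by one plausible MaxED rule: pick the smallest dose whose isotonic estimate reaches the plateau maximum. The dose levels, group means, and sample sizes are invented, and pava() is a generic textbook implementation rather than the authors' code.

```python
import numpy as np

def pava(y, w):
    """Pool-adjacent-violators: weighted isotonic (nondecreasing) fit."""
    y = np.asarray(y, dtype=float)
    w = np.asarray(w, dtype=float)
    merged = []
    for i, (v, wt) in enumerate(zip(y, w)):
        merged.append([v, wt, [i]])
        # Merge backwards while monotonicity is violated.
        while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
            v2, w2, idx2 = merged.pop()
            v1, w1, idx1 = merged.pop()
            wt12 = w1 + w2
            merged.append([(v1 * w1 + v2 * w2) / wt12, wt12, idx1 + idx2])
    fit = np.empty_like(y)
    for v, _, idx in merged:
        fit[idx] = v
    return fit

# Illustrative dose-group means and sample sizes (not from the article).
doses = np.array([0, 10, 20, 40, 80])
means = np.array([1.0, 1.8, 2.5, 2.6, 2.4])   # observed mean responses
n     = np.array([20, 20, 20, 20, 20])

iso = pava(means, n)
# MaxED pick: smallest dose whose isotonic estimate equals the plateau max.
max_ed = doses[np.argmax(iso >= iso.max() - 1e-12)]
print(iso, max_ed)
```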

2.
Different mixed-effects models were compared to evaluate the population dose–response and relative potency of two albuterol inhalers. Bronchodilator response was measured after ascending doses of each inhaler in 37 asthmatic patients. A linear mixed-effects model was developed based on the approach proposed by Finney for the evaluation of bioassay data. A nonlinear mixed-effects (Emax) model with interindividual and interoccasion variability (IOV) in the different pharmacodynamic parameters was also fit to the data. Both methods produced a similar estimate of relative potency. However, the estimate of relative potency was 22% lower with the nonlinear mixed-effects model if IOV was not taken into account. Monte Carlo simulations based on a similar study design demonstrated that more biased and variable estimates of ED50 and relative potency were obtained when the nonlinear mixed-effects model ignored the presence of IOV in the data. Furthermore, the linear mixed-effects model that did not account for IOV produced confidence intervals for relative potency that were too narrow and thus could lead to erroneous conclusions. These problems were avoided when the estimation model accounted for IOV. Results of the simulations were consistent with those of the experimental data. Although either the linear or the nonlinear mixed-effects model may be used to evaluate population dose–response and relative potency, there are important differences in the assumptions made by each method.
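
As an illustration of the Emax structure discussed above, here is a hedged sketch that fits a fixed-effects Emax curve by nonlinear least squares with scipy. It deliberately omits the interindividual and interoccasion random effects the article shows are important, and the dose and response values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def emax_model(dose, e0, emax, ed50):
    """Standard Emax model: E0 + Emax * dose / (ED50 + dose)."""
    return e0 + emax * dose / (ed50 + dose)

# Invented bronchodilator-style data: dose (µg) vs. mean FEV1 change (L).
dose = np.array([0, 100, 200, 400, 800, 1600], dtype=float)
resp = np.array([0.05, 0.18, 0.26, 0.33, 0.37, 0.39])

params, cov = curve_fit(emax_model, dose, resp, p0=[0.0, 0.4, 200.0])
e0, emax, ed50 = params
print(f"E0={e0:.3f}, Emax={emax:.3f}, ED50={ed50:.1f}")
# The relative potency of two inhalers would be the ratio of their ED50s
# in a joint fit; a full analysis adds interindividual and interoccasion
# random effects, as the article discusses.
```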

3.
Univariate isotonic regression (IR) has been used for nonparametric estimation in dose–response and dose-finding studies. One undesirable property of IR is the prevalence of piecewise-constant stretches in its estimates, whereas the dose–response function is usually assumed to be strictly increasing. We propose a simple modification to IR, called centered isotonic regression (CIR). CIR's estimates are strictly increasing in the interior of the dose range. In the absence of monotonicity violations, CIR and IR both return the original observations. Numerical examination indicates that for sample sizes typical of dose–response studies and with realistic dose–response curves, CIR provides a substantial reduction in estimation error compared with IR when monotonicity violations occur. We also develop analytical interval estimates for IR and CIR, with good coverage behavior. An R package implements these point and interval estimates. Supplementary materials for this article are available online.
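
A sketch of the CIR idea as described above, under the assumption that the estimator collapses each flat stretch of the IR fit to a single point at its weight-centered dose and interpolates between those points; this is an illustration, not the authors' R package. The IR step uses scikit-learn's IsotonicRegression, and all data are invented.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def centered_isotonic(doses, y, w):
    """CIR sketch: weighted IR, then each flat stretch is collapsed to
    one point at its weight-centered dose and interpolated back."""
    doses = np.asarray(doses, float)
    w = np.asarray(w, float)
    ir = IsotonicRegression(increasing=True).fit(doses, y, sample_weight=w)
    iso = ir.predict(doses)
    xs, ys = [], []
    i = 0
    while i < len(iso):
        j = i
        while j + 1 < len(iso) and np.isclose(iso[j + 1], iso[i]):
            j += 1                      # extend the flat stretch
        seg = slice(i, j + 1)
        xs.append(np.average(doses[seg], weights=w[seg]))  # centered dose
        ys.append(iso[i])
        i = j + 1
    return np.interp(doses, xs, ys)     # strictly increasing in interior

doses = np.array([0, 10, 20, 40, 80], float)
means = np.array([1.0, 1.8, 2.5, 2.6, 2.4])
n = np.array([20, 20, 20, 20, 20], float)
print(centered_isotonic(doses, means, n))
```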

4.
An efficient method to reduce the dimensionality of microarray gene expression data from thousands or tens of thousands of cDNA clones down to a subset of the most differentially expressed clones is essential in order to simplify the massive amount of data generated by microarray experiments. An extension of the methods of Efron et al. [Efron, B., Tibshirani, R., Storey, J., Tusher, V. (2001). Empirical Bayes analysis of a microarray experiment. J. Am. Statist. Assoc. 96:1151–1160] is applied to a differential time-course experiment to determine a subset of cDNAs that have the largest probability of being differentially expressed with respect to treatment conditions across a set of unequally spaced time points. The proposed extension, advocated as a screening tool, allows for inference across a continuous variable in addition to incorporating a more complex experimental design and allowing for multiple design replications. With the current data the focus is on a time-course experiment; however, the proposed methods can easily be implemented on a dose–response experiment or any other microarray experiment that contains a continuous variable of interest. The proposed empirical Bayes gene-screening tool is compared with the Efron et al. (2001) method and with an adjusted model-based t-value, using a time-course data set in which the toxicological effect of a specific mixture of chemicals is studied.
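
For intuition about the empirical Bayes screening approach, a minimal sketch in the spirit of Efron et al. (2001): the null density f0 is estimated from permutation scores, the mixture density f from the observed scores, and a posterior probability of differential expression is formed from their ratio. The fixed prior null proportion p0 = 0.9, the histogram density estimates, and all scores are simplifying assumptions, not the article's method for time-course data.

```python
import numpy as np

rng = np.random.default_rng(0)

def eb_posterior_nonnull(z, z_null, p0=0.9, bins=50):
    """Empirical Bayes screen (sketch): estimate f0 from permutation
    scores and f from observed scores, then return the posterior
    P(differential) ~= 1 - p0 * f0(z) / f(z) per gene."""
    edges = np.histogram_bin_edges(np.concatenate([z, z_null]), bins=bins)
    f, _ = np.histogram(z, bins=edges, density=True)
    f0, _ = np.histogram(z_null, bins=edges, density=True)
    idx = np.clip(np.digitize(z, edges) - 1, 0, len(f) - 1)
    ratio = np.where(f[idx] > 0, f0[idx] / f[idx], 1.0)
    return np.clip(1.0 - p0 * ratio, 0.0, 1.0)

# Invented scores: 950 null genes plus 50 shifted ones, and permutation nulls.
z = np.concatenate([rng.normal(0, 1, 950), rng.normal(3, 1, 50)])
z_null = rng.normal(0, 1, 20000)
post = eb_posterior_nonnull(z, z_null)
print((post > 0.9).sum(), "genes flagged for follow-up")
```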

5.
This article reports the results of a meta-analysis of dose–response studies conducted by a large pharmaceutical company between 1998 and 2009. Data collection targeted efficacy endpoints from all compounds with evidence of clinical efficacy during that period; safety data were not extracted. The goal of the meta-analysis was to identify consistent quantitative patterns in dose–response across different compounds and diseases. The article presents summaries of the study designs, including the number of studies conducted for each compound, the dosing range, the number of doses evaluated, and the number of patients per dose. The Emax model, ubiquitous in pharmacology research, was fit for each compound. It described the data well, except for a single compound with nonmonotone dose–response. Compound-specific estimates and Bayesian hierarchical modeling showed that the dose–response curves of most compounds can be approximated by Emax models with "Hill" parameters close to 1.0. Summaries of the potency estimates show that pharmacometric predictions of potency made before the first dose-ranging study fell within a (1/10, 10) multiple of the final estimates for 90% of compounds. The results of the meta-analysis, combined with compound-specific information, provide an empirical basis for designing and analyzing new dose-finding studies using parametric Emax models and Bayesian estimation with empirically derived prior distributions.
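
A small sketch of the sigmoid (Hill) Emax form referenced above, plus the (1/10, 10)-fold potency-band check; the parameter values are invented.

```python
import numpy as np

def hill_emax(dose, e0, emax, ed50, h):
    """Sigmoid Emax with Hill parameter h; h = 1 reduces to the
    hyperbolic Emax form found adequate for most compounds."""
    dose = np.asarray(dose, float)
    return e0 + emax * dose**h / (ed50**h + dose**h)

# Check whether a pre-study potency prediction falls within the
# (1/10, 10)-fold band around a final estimate (illustrative values).
predicted_ed50, final_ed50 = 30.0, 75.0
within = 0.1 <= predicted_ed50 / final_ed50 <= 10.0
print(hill_emax([0, 50, 100], 0.0, 1.0, 75.0, 1.0), within)
```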

6.
A meta-analysis of dose–response studies is reported for small-molecule drugs approved by the U.S. Food and Drug Administration (FDA) between January 2009 and May 2014. Summaries of the study designs are presented, including the number of studies conducted for each drug, the dosing range, and the number of doses evaluated. Most drugs were studied over a roughly 4-fold dose range. Most of the study designs and their analyses focused on a small number of pairwise comparisons of dose groups to placebo. For the meta-analysis, efficacy endpoints were evaluated at a single landmark time; safety endpoints were not collected. The commonly used Emax model was fit for each drug. Due to the limited number of doses and dosing ranges, maximum likelihood estimation applied to drugs separately performed poorly. Bayesian hierarchical models were successfully fit, producing Emax curves that represented the data well. The distributions of the Emax model parameters were consistent with previously reported distributions estimated from a sponsor-specific meta-analysis of dose response. Assessment of model fit, which focused on potential nonmonotone loss of efficacy at the highest doses, supported the use of the Emax curves. The meta-analysis provides an additional empirical basis for Bayesian prior distributions for model parameters. Supplementary materials for this article are available online.

7.
The disposition kinetics of Cyclosporine A (CyA) in the rat, based on measurements in arterial blood, appeared dose-linear over a wide iv dose range (1.2–30 mg/kg). Physiologically based pharmacokinetic (PBPK) analysis, however, demonstrated that this was an apparent observation resulting from counterbalancing nonlinear factors, such as saturable blood and tissue distribution, as well as clearance (CLb). A PBPK model was successfully developed taking these multiple nonlinear factors into account. Tissue distribution was distinctly different among the various organs, being best described by either a linear model (muscle, fat; Model 1), one involving instantaneous saturation (lung, heart, bone, skin, thymus; Model 2), noninstantaneous saturation (kidney, spleen, liver, gut; Model 3), or saturable efflux (brain; Model 4). Overall, the whole-body volume of distribution at steady state for unbound CyA (Vuss) decreased with increasing dose, due at least in part to saturation of tissue-cellular cyclophilin binding. Clearance, essentially hepatic and described by the well-stirred model, was also adequately characterized by Michaelis–Menten kinetics (Km 0.60 µg/ml). In model-based simulations, both the volume of distribution at steady state (Vss,b) and CLb varied in a similar manner with dose, such that the terminal t1/2 remained apparently unchanged; these dose responses were attenuated by saturable blood binding. The CyA concentration measured in arterial blood was not always directly proportional to the true exposure, i.e., unbound or target-tissue concentrations. The PBPK model not only described these complicated PK relationships comprehensively but also permitted assessment of the sensitivity of individual parameters to variation in local nonlinear kinetics. Using this approach, dose-dependent CyA uptake into the brain was shown to be sensitive to both active and passive transport processes, and not merely the affinity of the active (efflux) transporter at the level of the blood–brain barrier.
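
To illustrate one of the nonlinearities discussed, a minimal one-compartment sketch with Michaelis–Menten (saturable) elimination, integrated with scipy; it compares concentration profiles across the article's iv dose range. The parameter values are illustrative, not the rat estimates, and the full PBPK model with saturable binding and organ-specific tissue submodels is far richer.

```python
from scipy.integrate import solve_ivp

# Illustrative parameters (not the article's estimates):
# vmax (µg/ml/h), km (µg/ml), v (l/kg).
vmax, km, v = 5.0, 0.6, 2.0

def dcdt(t, c):
    """Michaelis-Menten (saturable) elimination from one compartment."""
    return -(vmax * c) / (km + c) / v

for dose in (1.2, 30.0):        # mg/kg, the article's iv bolus range
    c0 = dose / v               # initial blood concentration, µg/ml
    sol = solve_ivp(dcdt, (0, 48), [c0])
    print(f"dose {dose} mg/kg -> C(48 h) = {sol.y[0, -1]:.4f} µg/ml")
```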

8.
In dose–response studies with censored time-to-event outcomes, D-optimal designs depend on the true model and the amount of censored data. In practice, such designs can be implemented adaptively, by performing dose assignments according to updated knowledge of the dose–response curve at interim analysis. It is also essential that treatment allocation involves randomization—to mitigate various experimental biases and enable valid statistical inference at the end of the trial. In this work, we perform a comparison of several adaptive randomization procedures that can be used for implementing D-optimal designs for dose–response studies with time-to-event outcomes with small to moderate sample sizes. We consider single-stage, two-stage, and multi-stage adaptive designs. We also explore robustness of the designs to experimental (chronological and selection) biases. Simulation studies provide evidence that both the choice of an allocation design and a randomization procedure to implement the target allocation impact the quality of dose–response estimation, especially for small samples. For best performance, a multi-stage adaptive design with small cohort sizes should be implemented using a randomization procedure that closely attains the targeted D-optimal design at each stage. The results of the current work should help clinical investigators select an appropriate randomization procedure for their dose–response study.
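
For a feel of the D-optimality criterion behind these designs, a hedged sketch comparing two candidate dose allocations by the determinant of the Fisher information. For simplicity it uses a binary-response logistic model rather than the censored time-to-event likelihood the article considers, and all parameter values are assumed.

```python
import numpy as np

def fisher_info(doses, weights, a, b):
    """Fisher information for a two-parameter logistic model with
    success probability expit(a + b*x); weights are allocation
    proportions over the candidate doses."""
    info = np.zeros((2, 2))
    for x, w in zip(doses, weights):
        p = 1.0 / (1.0 + np.exp(-(a + b * x)))
        f = np.array([1.0, x])              # gradient of the linear predictor
        info += w * p * (1.0 - p) * np.outer(f, f)
    return info

# Compare a uniform allocation with an endpoint-heavy one at assumed
# parameters (a = -2, b = 1); larger det => better D-criterion.
doses = [0.0, 1.0, 2.0, 4.0]
for wts in ([0.25] * 4, [0.4, 0.1, 0.1, 0.4]):
    print(wts, np.linalg.det(fisher_info(doses, wts, -2.0, 1.0)))
```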

9.
Substance Use & Misuse, 2013, 48(4), 517–519
Images of drugs and drug use(rs) convey meaning, feelings, and beliefs, and what is being seen is often believed. Images can also deceive in content, meaning, and belief. Drug use(r) researchers, who use images as data, must be cautious in interpreting what is being conveyed and why. As technological advances continue to shape the creation, modification, storage, and analysis of images, researchers must be ever more vigilant about what they are seeing and believing.

10.
Methoxyethanol (ME) produces embryotoxic effects in rodents, rabbits, and nonhuman primates. Mechanistic evaluations of ME dysmorphogenesis have focused mainly on developmental insults and chemical disposition in the mouse. These assessments in mice were based on the developmental phase specificity (DPS) and dose–response relationship (DRR) of ME. DPS and DRR indicated treatments for selectively inducing defects to study ME disposition and expressed dysmorphogenesis. This study was conducted to establish the DPS and DRR of ME in the rat. DPS was determined by injecting 500 mg ME/kg (6.6 mmol/kg) into the tail vein on Gestational Day (gd; sperm-positive day = gd 0) 10, 11, 12, 13, 14, or 15 (n = 6 dams/gd; saline controls on gd 12). On gd 20, the incidence of embryolethality was 100% after gd 10 dosing; after dosing on gd 11 through 15, it was 50, 32, 15, 2, and 5%, respectively (control, 2%). Incidences of external defects in live fetuses exposed on gd 11–15 were 97, 98, 100, 44, and 0%, and those of visceral defects were 100, 62, 44, 10, and 0%, respectively. The predominant anomalies observed were ectrodactyly and renal agenesis. DRR was determined on gd 13, when live embryos/litter and external malformations (ectro- and syndactyly, micromelia) were maximal. Dams (n = 8/dose group) were injected intravenously with 0, 100, 250, 350, or 500 mg ME/kg. On gd 20, fetal defect rates were 0, 0, 82.5, 83.0, and 100% at these doses, respectively. Based on these studies, appropriate ME doses, times of maternal exposure, and critical phases of development in the rat model are available for reproducing selective defects to investigate the biochemical and pharmacokinetic determinants underlying their expression.

11.
Several derivatives of triamterene were synthesized with the aim of obtaining physicochemical properties superior to those of triamterene. Their effects on electrolyte excretion were tested with dose–response curves in rats: a dissociation of the ED50 values for Na+ excretion from those for K+ retention was found. While the ED50 values for natriuresis were structure independent, the ED50 values for potassium retention depended strongly on the charge of the side chain of the triamterene derivatives: acidic compounds displayed low, and amines high, K+-retaining potencies. Hence we postulate that there are at least two sites of action of the tested compounds in the kidney. (i) The first is the Na+ conductance. Its blockade is responsible for the reduction in the lumen-negative electrical potential difference, which is the main driving force for K+ secretion. The affinity for the Na+ conductance is not correlated with the basic/acidic properties of the compounds. (ii) The second site is the finite K+ conductance of the luminal membrane of the distal tubule. The affinity of the drugs for this K+ conductance depends strongly on the charge of the molecule. Only pteridine derivatives with a basic side chain, i.e., with a high pKa value, block the membrane K+ conductance and are therefore potent potassium-retaining drugs.

12.
Dose–response analysis is one of the accepted efficacy endpoints used to establish effectiveness. The purpose of this research was to inform selection of an appropriate pre-specified primary dose–response analysis to demonstrate drug efficacy in a registration trial. The power and type I error rate of the placebo-corrected analysis (i.e., adjusting the observed treatment value by subtracting the placebo mean) and the placebo-anchored analysis (i.e., including the placebo data as dose 0 in the regression) were assessed based on regulatory submission data for two antihypertensive drugs and on simulated data from hypothetical clinical trials. In the simulated hypothetical trials, the impact of different dosing strategies (fixed dose versus weight-based per-kilogram dose), sample size, and scenarios governing the drug exposure–response relationship (e.g., Emax, ED50, and SD) was also evaluated. For each scenario, a total of 300 replications were simulated. The placebo-anchored slope analysis was always more powerful for demonstrating effectiveness in all plausible scenarios. The difference between the placebo-anchored and placebo-corrected analyses was greatest when the studied doses were too high. However, the dose–response analysis was not sensitive to the dosing strategy, and the type I error rates of the two methods were comparable. The design of dose–response studies should carefully consider these results to justify the inclusion of placebo and the analysis method. The pharmaceutical industry and the regulatory agencies are equally responsible for using appropriate methods of primary analysis and providing justification in the protocol.
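
A sketch of the two competing primary analyses on one simulated trial: the placebo-anchored slope regresses all observations with placebo included as dose 0, while the placebo-corrected slope subtracts the placebo mean and regresses the active arms only. Effect sizes, variability, and sample sizes are invented; the article's assessment rests on regulatory data and 300 replications per scenario.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)

# One simulated antihypertensive-style trial (all values invented).
doses = np.array([0.0, 10.0, 20.0, 40.0])        # dose 0 = placebo
n = 50                                           # patients per arm
true_drop = 2 + 8 * doses / (15 + doses)         # Emax-shaped truth, mmHg
y = [rng.normal(mu, 6, n) for mu in true_drop]   # one array per arm

# Placebo-anchored: regress all observations, placebo included as dose 0.
anchored = linregress(np.repeat(doses, n), np.concatenate(y))

# Placebo-corrected: subtract the placebo mean, regress active arms only.
y_corr = np.concatenate([arm - y[0].mean() for arm in y[1:]])
corrected = linregress(np.repeat(doses[1:], n), y_corr)

print(f"anchored slope p = {anchored.pvalue:.2e}, "
      f"corrected slope p = {corrected.pvalue:.2e}")
```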

13.
To test the anticancer effect of combining two drugs that target different biological pathways, a popular way to show a synergistic combination effect is a heat map or surface plot of the percent excess over the Bliss prediction, computed from the average response at each combination dose. Such graphs, however, are inefficient in the drug-screening process and do not provide statistical inference on the synergistic effect. To draw a statistically rigorous and robust conclusion about a drug combination effect, we present a two-stage Bliss independence response surface model that estimates an overall interaction index (τ) with a 95% confidence interval (CI). By taking all data points into account, the overall τ with its 95% CI can be used to determine whether the combination effect is synergistic overall. Using example data, the two-stage model was compared with several classic models that follow the Bliss rule. The results obtained from our model reflect the patterns shown by the other models. The overall τ helps investigators make decisions more easily and accelerates preclinical drug screening.
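
To show what the Bliss-excess heat map in the first sentence displays, a small sketch computing observed-minus-expected effects under Bliss independence on an invented combination grid; the article's contribution, the two-stage response-surface estimate of the overall interaction index τ with its 95% CI, is not reproduced here.

```python
import numpy as np

def bliss_excess(fa, fb, fab):
    """Observed minus Bliss-predicted fractional inhibition.
    fa, fb: single-agent effects; fab: observed combination effect.
    Positive excess suggests synergy under Bliss independence."""
    expected = fa + fb - fa * fb
    return fab - expected

# Illustrative 3x3 combination grid of fractional inhibitions (invented).
fa = np.array([0.1, 0.3, 0.5])             # drug A alone (columns)
fb = np.array([0.2, 0.4, 0.6])             # drug B alone (rows)
fab = np.array([[0.35, 0.55, 0.70],
                [0.50, 0.68, 0.82],
                [0.66, 0.80, 0.92]])        # observed combinations

excess = bliss_excess(fa[None, :], fb[:, None], fab)
print(np.round(excess, 3))                  # what the heat map would show
```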

14.
Clinical trials often involve comparing 2–4 doses or regimens of an experimental therapy with a control treatment. These studies might occur early in drug development, where the aim might be to demonstrate a basic level of proof (so-called proof-of-concept (PoC) studies); at a later stage, to help establish a dose or doses to be used in phase III trials (dose finding); or even in confirmatory studies, where the registration of several doses might be considered. When a small number of doses is examined, the ability to implement parametric modeling is somewhat limited. As an alternative, in this paper a flexible Bayesian model is suggested. In particular, we draw on the idea of using Bayesian model averaging (BMA) to exploit an assumed monotonic dose–response relationship without strong parametric assumptions. The approach is exemplified by assessing operating characteristics in the design of a PoC study examining a new treatment for psoriatic arthritis and by a post hoc data analysis involving three confirmatory clinical trials of an adjunctive treatment for partial epilepsy. Key difficulties, such as prior specification and computation, are discussed. A further extension, based on combining the flexible modeling with a classical multiple-comparisons procedure known as MCP-MOD, is examined. The benefit of this extension is a potential reduction in the number of simulations needed to investigate operating characteristics of the statistical analysis.

15.
The purpose of this research was to evaluate and compare liquid–liquid emulsions (water-in-oil and oil-in-water) prepared using sonication and microfluidization. The emulsions were characterized on the basis of droplet size determined using a laser-based particle size analyzer. An ultrasound-driven benchtop sonicator and an air-driven microfluidizer were used for emulsification. Sonication generated emulsions through ultrasound-driven mechanical vibrations, which caused cavitation; the force associated with the implosion of vapor bubbles reduced emulsion droplet size, and the flow of the bubbles produced mixing. An increase in the viscosity of the dispersion (continuous) phase improved the sonicator's emulsification capability, but an increase in the viscosity of the dispersed phase decreased it. Although sonication can be comparable to homogenization in emulsification efficiency, homogenization was relatively more effective in emulsifying more viscous solutions. Microfluidization, which uses high pressure to force the fluid through microchannels of a special configuration and initiates emulsification via a combined mechanism of cavitation, shear, and impact, exhibited excellent emulsification efficiency. Of the three methodologies, sonication generated the most heat and may be less suitable for emulsion systems involving heat-sensitive materials. Homogenization is in general a more effective liquid–liquid emulsification method. The results of this study can serve as a basis for evaluating large-scale liquid–liquid emulsification in the microencapsulation process.

16.
A public workshop, organized by a steering committee of scientists from government, industry, universities, and research organizations, was held at the National Institute of Environmental Health Sciences (NIEHS) in September 2010. The workshop explored the dose–response implications of toxicant modes of action (MOA) mediated by nuclear receptors. The dominant paradigm in human health risk assessment has been linear extrapolation without a threshold for cancer, and estimation of sub-threshold doses for non-cancer and (in appropriate cases) cancer endpoints. However, recent publications question the application of dose–response modeling approaches with a threshold. The growing body of molecular toxicology information and computational toxicology tools has allowed exploration of the presence or absence of sub-threshold doses for a number of receptor-mediated MOAs. The workshop explored the development of dose–response approaches for nuclear receptor-mediated liver cancer within a MOA Human Relevance Framework (HRF). Case studies addressed activation of the AHR, the CAR, and the PPARα. This article describes the workshop process, the key issues discussed, and the conclusions. The value of an interactive workshop approach for applying current MOA/HRF frameworks was demonstrated. The results may help direct research on the MOA and dose–response of receptor-based toxicity, since many receptors share common basic pathways in the late steps of the MOA and similar data gaps in the early steps. Three additional papers in this series describe the results and conclusions for each case-study receptor regarding its MOA, the relevance of that MOA to humans, and the resulting dose–response implications.

17.
18.
The benchmark dose method has been proposed as an alternative to the no-observed-adverse-effect level (NOAEL) approach for assessing noncancer risks associated with hazardous compounds. The benchmark dose method is a more powerful statistical tool than the traditional NOAEL approach and represents a step toward more accurate risk assessment. It involves fitting a mathematical model to all the dose–response data within a study, so more biological information is incorporated in the resulting estimates of guidance values (e.g., acceptable daily intakes, ADIs). Although there is increasing interest in the benchmark dose approach, it has not yet found its way into regulatory toxicology in Europe, while in the United States the U.S. Environmental Protection Agency (EPA) already uses the benchmark dose in health risk assessment. Several software packages are available today for benchmark dose calculations. The availability of software to facilitate the analysis can make modeling appear simple, but the interpretation of the results is often not trivial, and it is recommended that benchmark dose modeling be performed in collaboration with a toxicologist and someone familiar with this type of statistical analysis. The procedure does not replace the expert judgment of toxicologists and others addressing hazard characterization in risk assessment. The aim of this article is to make risk assessors familiar with the concept, to show how the method can be used, and to describe some possibilities, limitations, and extensions of the benchmark dose approach. The benchmark dose approach is presented in detail and compared with the traditional NOAEL approach. Statistical methods essential for the benchmark dose method are presented in Appendix A, and the different mathematical models used in the U.S. EPA's BMD software, the Crump software, and the Kalliomaa software are described in the text and in Appendix B. For replacement of the NOAEL in health risk assessment, it is considered important that consensus be reached on the crucial parts of the benchmark dose method, that is, the selection of risk types and the determination of a response level corresponding to the BMD, especially for continuous data. It is suggested that the BMD method be used as the first choice and that, in cases where no model can be fit to the data, the traditional NOAEL approach be used instead. The possibilities for making benchmark dose calculations on continuous data need further investigation. In addition, it is important to study whether it would be appropriate to increase the number of dose levels by decreasing the number of animals in each dose group.
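
A minimal sketch of a benchmark-dose calculation of the kind described: fit a quantal log-logistic model by maximum likelihood and solve for the dose giving 10% extra risk (BMD10). The model form, starting values, and data are all invented for illustration and do not correspond to any of the cited software packages.

```python
import numpy as np
from scipy.optimize import brentq, minimize

# Invented quantal study: dose groups, group sizes, affected counts.
doses = np.array([0.0, 10.0, 30.0, 100.0])
n = np.array([50, 50, 50, 50])
x = np.array([2, 5, 12, 30])

def p(dose, g, a, b):
    """Background g plus a log-logistic dose term."""
    with np.errstate(divide="ignore", over="ignore"):
        t = np.where(dose > 0,
                     1.0 / (1.0 + np.exp(-(a + b * np.log(dose)))), 0.0)
    return g + (1.0 - g) * t

def negloglik(theta):
    g, a, b = theta
    pi = np.clip(p(doses, g, a, b), 1e-9, 1 - 1e-9)
    return -np.sum(x * np.log(pi) + (n - x) * np.log(1 - pi))

fit = minimize(negloglik, x0=[0.05, -4.0, 1.0], method="Nelder-Mead")
g, a, b = fit.x

# BMD: dose where extra risk (p(d) - p(0)) / (1 - p(0)) equals BMR = 0.10.
bmr = 0.10
extra = lambda d: (p(d, g, a, b) - g) / (1.0 - g) - bmr
bmd = brentq(extra, 1e-6, 1e4)
print("BMD10 ~", round(bmd, 1))
```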

19.
The Michaelis–Menten (M–M) approximation of the target-mediated drug disposition (TMDD) pharmacokinetic (PK) model was derived under the rapid binding (RB) or quasi-steady-state (QSS) assumptions, which imply that target–drug binding and dissociation are in equilibrium. However, the initial condition for an IV bolus injection in the M–M model does not account for the fraction of the dose bound to the target. We postulated a correction to the initial condition that is consistent with the assumptions underlying the M–M approximation. We determined that the difference between the injected dose and the dose that should be used for the initial condition equals the amount of drug bound to the target upon reaching equilibrium. We also observed that the corrected initial condition makes the internalization rate constant an identifiable parameter, which it is not in the original M–M model. Finally, we performed a simulation exercise to check whether the correction affects model performance and the bias of the M–M parameter estimates. We used literature data to simulate plasma drug concentrations described by the RB/QSS TMDD model, and the simulated data were refitted by both models. All the parameters estimated from the original M–M model were substantially biased. In contrast, the corrected M–M model was able to estimate these parameters accurately, except for the equilibrium constant Km. The weighted sum of squared residuals and the Akaike information criterion suggested better performance of the corrected M–M model than of the original M–M model. Further studies are necessary to determine the importance of this correction when the M–M model is applied to the analysis of TMDD-driven PK data.
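
A sketch of the correction under discussion, with simplified structure and invented parameters: for an IV bolus, the free-drug amount is initialized at the dose minus the amount bound to the target at binding equilibrium, obtained here by fixed-point iteration on the quasi-steady-state binding relation.

```python
from scipy.integrate import solve_ivp

# Invented parameters: V (l), kel (1/h), Vmax/Km for the target-mediated
# pathway, rtot = total target amount (same amount units as the dose).
v, kel, vmax, km, rtot, dose = 3.0, 0.1, 2.0, 0.6, 0.5, 10.0

def mm_rhs(t, y):
    """Linear elimination plus Michaelis-Menten target-mediated loss."""
    c = y[0] / v
    return [-kel * y[0] - vmax * c / (km + c)]

# QSS bound amount at t=0: B = rtot*C/(km + C), solved self-consistently
# for the free concentration left after binding (fixed-point iteration).
c_free = dose / v
for _ in range(50):
    c_free = (dose - rtot * c_free / (km + c_free)) / v
a0_corrected = dose - rtot * c_free / (km + c_free)

for a0, label in [(dose, "original"), (a0_corrected, "corrected")]:
    sol = solve_ivp(mm_rhs, (0, 72), [a0])
    print(label, "initial amount:", round(a0, 4),
          "-> A(72 h):", round(sol.y[0, -1], 6))
```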

20.
Objective: To compare risk factor–based screening tools for identifying prediabetes. Methods: Participants in an employer-based wellness program were tested for glycosylated hemoglobin (A1C) at a regularly scheduled appointment, and prediabetes risk factor information was collected. The likelihood of having prediabetes and the need for laboratory testing were determined with 3 risk factor–based screening tools: the Prediabetes Screening Test (PST), the Prediabetes Risk Test (PRT), and the 2016 American Diabetes Association guidelines (ADA2016). The results from the screening tools were compared with those of the A1C test. The predictive ability of the PST, PRT, and ADA2016 was compared using logistic regression, and results were validated with data from a secondary population. Results: Of the 3 risk factor–based tools examined, the PRT demonstrated the best combination of sensitivity and specificity for identifying prediabetes. From July 2016 to March 2017, 740 beneficiaries of an employer-sponsored wellness program had their A1C tested and provided risk factor information. The population prevalence of prediabetes was 9.3%. Analysis of a second independent population, with a prediabetes prevalence of more than 50%, confirmed the PRT's superiority despite differences in the calculated sensitivity and specificity for each population. Conclusion: Because the PRT predicts prediabetes better than the PST or ADA2016, it should be used preferentially.
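
As a hedged illustration of the comparison metric, a small helper computing sensitivity and specificity of a screening tool against A1C-defined prediabetes status; the data are toy values, and the article's actual comparison used logistic regression.

```python
import numpy as np

def sens_spec(flagged, truth):
    """Sensitivity and specificity of a screening tool versus the
    A1C-based prediabetes status (illustrative helper, not the
    article's analysis code)."""
    flagged = np.asarray(flagged, bool)
    truth = np.asarray(truth, bool)
    sens = (flagged & truth).sum() / truth.sum()
    spec = (~flagged & ~truth).sum() / (~truth).sum()
    return sens, spec

# Toy data: tool flags vs. A1C-confirmed prediabetes for 10 screenees.
flags = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]
a1c_pos = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]
print(sens_spec(flags, a1c_pos))
```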
