Similar Articles
20 similar records found.
1.
In this study, we developed a novel adaptive dose-finding approach for inclusion of correlated bivariate binary and continuous outcomes in designing phase I oncology trials. For this approach, binary toxicity and continuous efficacy outcomes are modeled jointly with a factorization model. The basic strategy of the proposed approach is based primarily on the Bayesian method. We based the dose escalation/de-escalation decision rules on the posterior distributions of both toxicity and efficacy outcomes. We compared the operating characteristics of the proposed and existing methods through simulation studies under various scenarios. We found that the proposed method recommended the true recommended dose (RD) at a higher rate than the existing method when the true RD lay toward the tail end of the tested doses, and at a similar rate when the true RD lay toward the top end.

2.
We introduce a two-stage design for dose-finding in the context of Phase I/II studies, where two binary correlated endpoints are available, for instance, one for efficacy and one for toxicity. The bivariate probit model is used as a working model for the dose-response relationship. Given a 'desirable point' for the marginal probabilities of efficacy and toxicity, the goal is to find the target dose that is 'closest' to the desirable point. The criterion of optimality (objective function) is the variance of the estimator for that dose. Optimal experimental design methodology is used to construct efficient dose allocation procedures that treat patients in the study at doses that are both safe and efficacious.
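The "closest to a desirable point" idea can be sketched in a few lines. The snippet below is an illustrative Python sketch only: the probit intercepts and slopes are invented, and candidate doses are ranked by plain Euclidean distance to the desirable point rather than by the paper's variance-based optimality criterion.

```python
import numpy as np
from scipy.stats import norm

def marginal_probs(dose, a_eff, b_eff, a_tox, b_tox):
    """Marginal efficacy and toxicity probabilities under probit working models."""
    return norm.cdf(a_eff + b_eff * dose), norm.cdf(a_tox + b_tox * dose)

def target_dose(doses, desirable, **params):
    """Dose whose (p_eff, p_tox) pair lies closest (Euclidean) to the desirable point."""
    probs = np.array([marginal_probs(d, **params) for d in doses])
    dists = np.linalg.norm(probs - np.asarray(desirable), axis=1)
    return doses[int(np.argmin(dists))]

doses = np.linspace(0.0, 1.0, 11)
best = target_dose(doses, desirable=(0.8, 0.2),
                   a_eff=-1.0, b_eff=3.0, a_tox=-2.0, b_tox=2.0)
```

With these made-up parameters the dose 0.6 gives (p_eff, p_tox) of roughly (0.79, 0.21), nearly on top of the desirable point (0.8, 0.2).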

3.
We present an adaptive model-based procedure for dose finding in phase I/II clinical trials when both efficacy and toxicity responses are available. In this setting, previous designs aimed at identifying the maximum tolerated dose as a surrogate for efficacy or the most successful dose, defined as the dose with the highest probability of efficacy without toxicity. Rather than using this definition of success, we propose considering all responses conditionally on the probability that dose-limiting toxicity is under a pre-specified threshold. The presented approach uses a joint model for the probability of an efficacy response and toxicity, and is evaluated through simulations. A retrospective application to a Phase I trial conducted in chronic lymphocytic leukemia is presented.

4.
Multivariate random effects meta‐analysis (MRMA) is an appropriate way for synthesizing data from studies reporting multiple correlated outcomes. In a Bayesian framework, it has great potential for integrating evidence from a variety of sources. In this paper, we propose a Bayesian model for MRMA of mixed outcomes, which extends previously developed bivariate models to the trivariate case and also allows for combination of multiple outcomes that are both continuous and binary. We have constructed informative prior distributions for the correlations by using external evidence. Prior distributions for the within‐study correlations were constructed by employing external individual patient data and using a double bootstrap method to obtain the correlations between mixed outcomes. The between‐study model of MRMA was parameterized in the form of a product of a series of univariate conditional normal distributions. This allowed us to place explicit prior distributions on the between‐study correlations, which were constructed using external summary data. Traditionally, independent 'vague' prior distributions are placed on all parameters of the model. In contrast to this approach, we constructed prior distributions for the between‐study model parameters in a way that takes into account the inter‐relationship between them. This is a flexible method that can be extended to incorporate mixed outcomes other than continuous and binary and beyond the trivariate case. We have applied this model to a motivating example in rheumatoid arthritis with the aim of incorporating all available evidence in the synthesis and potentially reducing uncertainty around the estimate of interest. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.

5.
In real life and somewhat contrary to biostatistical textbook knowledge, sensitivity and specificity (and not only predictive values) of diagnostic tests can vary with the underlying prevalence of disease. In meta‐analysis of diagnostic studies, accounting for this fact naturally leads to a trivariate expansion of the traditional bivariate logistic regression model with random study effects. In this paper, a new model is proposed using trivariate copulas and beta‐binomial marginal distributions for sensitivity, specificity, and prevalence as an expansion of the bivariate model. Two different copulas are used, the trivariate Gaussian copula and a trivariate vine copula based on the bivariate Plackett copula. This model has a closed‐form likelihood, so standard software (e.g., SAS PROC NLMIXED) can be used. The results of a simulation study have shown that the copula models perform at least as well as, and frequently better than, the standard model. The methods are illustrated by two examples. Copyright © 2015 John Wiley & Sons, Ltd.
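The latent-variable construction behind a trivariate Gaussian copula is easy to demonstrate by simulation. The sketch below is a simplification under stated assumptions: continuous Beta marginals stand in for the paper's beta-binomial margins, and the correlation matrix and Beta parameters are invented for illustration.

```python
import numpy as np
from scipy.stats import beta, norm

def gaussian_copula_draws(n, corr, beta_params, seed=0):
    """Correlated (sensitivity, specificity, prevalence) draws: a Gaussian
    copula supplies the dependence, Beta distributions the marginals."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(3), corr, size=n)  # latent Gaussian
    u = norm.cdf(z)                                         # uniform marginals
    return np.column_stack([beta.ppf(u[:, j], *beta_params[j]) for j in range(3)])

# Invented example: sensitivity ~ Beta(8,2), specificity ~ Beta(9,1),
# prevalence ~ Beta(2,8), with negative latent correlation between
# sensitivity and specificity (the usual threshold trade-off).
corr = np.array([[1.0, -0.5, 0.3],
                 [-0.5, 1.0, 0.0],
                 [0.3, 0.0, 1.0]])
draws = gaussian_copula_draws(2000, corr, [(8, 2), (9, 1), (2, 8)])
```

The monotone transforms preserve the Beta marginals exactly while the copula controls only the dependence, which is the point of the model class.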

6.
Novel therapies are challenging the standards of drug development. Agents with specific biologic targets and limited toxicity require novel designs to determine doses to be taken forward into larger studies. In this paper, we describe an approach that incorporates both toxicity and efficacy data into the estimation of the biologically optimal dose of an agent in a phase I trial. The approach is based on the flexible continuation-ratio model, and uses straightforward optimal dose selection criteria. Dose selection is based on all patients treated up until that time point, using a continual reassessment method approach. Dose-outcome curves considered include monotonically increasing, monotonically decreasing, and unimodal curves. Our simulation studies demonstrate that the proposed design, which we call TriCRM, has favourable operating characteristics.
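The continuation-ratio idea for a trinary outcome (toxicity; efficacy without toxicity; no response) can be sketched as two stacked logistic curves. This is an illustrative Python sketch, not the TriCRM design itself: the parameter values, the 0.30 toxicity cap, and the dose grid are all invented.

```python
import math

def expit(x):
    return 1.0 / (1.0 + math.exp(-x))

def tricrm_probs(dose, a_tox, b_tox, a_eff, b_eff):
    """Continuation-ratio split of the trinary outcome: model P(toxicity)
    directly, and P(efficacy | no toxicity) conditionally."""
    p_tox = expit(a_tox + b_tox * dose)
    p_eff = (1.0 - p_tox) * expit(a_eff + b_eff * dose)
    return p_tox, p_eff, 1.0 - p_tox - p_eff

# Simple selection criterion: highest P(efficacy without toxicity)
# among doses whose toxicity probability stays under a 0.30 cap.
doses = [0.2 * i for i in range(6)]
safe = [d for d in doses if tricrm_probs(d, -3.0, 2.0, -1.0, 2.0)[0] <= 0.30]
best = max(safe, key=lambda d: tricrm_probs(d, -3.0, 2.0, -1.0, 2.0)[1])
```

Because the three probabilities sum to one by construction, the same machinery handles increasing, decreasing, and unimodal efficacy-minus-toxicity curves.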

7.
Phase I/II trials utilize both toxicity and efficacy data to achieve efficient dose finding. However, due to the requirement of assessing efficacy outcome, which often takes a long period of time to be evaluated, the duration of phase I/II trials is often longer than that of the conventional dose‐finding trials. As a result, phase I/II trials are susceptible to the missing data problem caused by patient dropout, and the missing efficacy outcomes are often nonignorable in the sense that patients who do not experience treatment efficacy are more likely to drop out of the trial. We propose a Bayesian phase I/II trial design to accommodate nonignorable dropouts. We treat toxicity as a binary outcome and efficacy as a time‐to‐event outcome. We model the marginal distribution of toxicity using a logistic regression and jointly model the times to efficacy and dropout using proportional hazard models to adjust for nonignorable dropouts. The correlation between times to efficacy and dropout is modeled using a shared frailty. We propose a two‐stage dose‐finding algorithm to adaptively assign patients to desirable doses. Simulation studies show that the proposed design has desirable operating characteristics. Our design selects the target dose with a high probability and assigns most patients to the target dose. Copyright © 2015 John Wiley & Sons, Ltd.
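The shared-frailty device has a convenient closed form worth noting: if a gamma frailty with mean 1 and variance theta multiplies both conditional hazards, the joint survival of the two times is (1 + theta(H1 + H2))^(-1/theta). The sketch below illustrates that identity with made-up cumulative hazards, not the paper's fitted model.

```python
def joint_survival(H1, H2, theta):
    """Joint survival P(T1 > t1, T2 > t2) for times to efficacy and dropout
    sharing a gamma frailty (variance theta); H1, H2 are the conditional
    cumulative hazards evaluated at t1, t2."""
    return (1.0 + theta * (H1 + H2)) ** (-1.0 / theta)

def marginal_survival(H, theta):
    """Marginal survival obtained by integrating out the gamma frailty."""
    return (1.0 + theta * H) ** (-1.0 / theta)
```

For theta > 0 the joint survival exceeds the product of the marginals (positive dependence between efficacy and dropout times), and as theta shrinks to zero the two times become independent, which is exactly what a shared frailty is meant to encode.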

8.
Novel therapies are challenging the standards of drug development. Agents with specific biologic targets, unknown dose‐efficacy curves, and limited toxicity mandate novel designs to identify biologically optimal doses. We review two model‐based designs that utilize either a proportional odds model or a continuation ratio model to identify an optimal dose of a single or two‐agent combination in a Phase I setting utilizing both toxicity and efficacy data. A continual reassessment method with straightforward dose selection criterion using accumulated data from all patients treated until that time point is employed while allowing for separate toxicity and efficacy curves for each drug in a two‐drug setting. The simulation studies demonstrate considerable promise, at least theoretically, in the ability of such model‐based designs to identify the optimal dose. Despite such favorable operating characteristics, there are several pragmatic challenges that hinder the routine implementation of such model‐based designs in practice. We review and offer practical solutions to potentially overcome some of these challenges. The acceptance and integration of these designs in practice may be quicker and easier if they are developed in concert with a clinical paradigm. Copyright © 2010 John Wiley & Sons, Ltd.

9.
T. D. Tosteson, L. A. Stefanski, D. W. Schafer. Statistics in Medicine, 1989, 8(9):1139-47; discussion 1149.
Exposure assessment poses special problems in air pollution epidemiology. This paper proposes a probit regression model for binary and ordinal outcomes that uses exposure validation information to develop estimates for the coefficient of the true exposure when only the inaccurate 'surrogate' measure of exposure is available for the individuals in the health study. This method is closely related to recently developed measurement-error methods, and is based on the assumption that the outcome and the surrogate exposure are conditionally independent given the true exposure. A test statistic is proposed for checking this conditional independence assumption when more than one surrogate is available, and an interpretation of the coefficient estimate is provided in the event that the assumption is violated. The methods are applied to an example involving nitrogen dioxide exposure and wheeze in children.
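To see why a surrogate exposure biases the coefficient at all, a generic first-order regression-calibration correction is useful. This is a textbook sketch of the related measurement-error idea, not the paper's validation-data estimator: for a classical additive error W = X + U, the naive slope is attenuated by the reliability ratio Var(X)/Var(W), and dividing it back out gives an approximate correction (exact for linear models, approximate for probit).

```python
def attenuation(var_x, var_u):
    """Reliability ratio lambda = Var(X) / Var(W) for W = X + U,
    with U independent classical measurement error."""
    return var_x / (var_x + var_u)

def corrected_slope(naive_slope, var_x, var_u):
    """First-order regression-calibration correction: divide the naive
    (surrogate-based) slope by the attenuation factor."""
    return naive_slope / attenuation(var_x, var_u)
```

In a validation study, var_x and var_u are what the validation subsample is used to estimate.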

10.
We propose a robust two‐stage design to identify the optimal biological dose for phase I/II clinical trials evaluating both toxicity and efficacy outcomes. In the first stage of dose finding, we use the Bayesian model averaging continual reassessment method to monitor the toxicity outcomes and adopt an isotonic regression method based on the efficacy outcomes to guide dose escalation. When the first stage ends, we use the Dirichlet‐multinomial distribution to jointly model the toxicity and efficacy outcomes and pick the candidate doses based on a three‐dimensional volume ratio. The selected candidate doses are then seamlessly advanced to the second stage for dose validation. Both toxicity and efficacy outcomes are continuously monitored so that any overly toxic and/or less efficacious dose can be dropped from the study as the trial continues. When the phase I/II trial ends, we select the optimal biological dose as the dose obtaining the minimal value of the volume ratio within the candidate set. An advantage of the proposed design is that it does not impose a monotonically increasing assumption on the shape of the dose–efficacy curve. We conduct extensive simulation studies to examine the operating characteristics of the proposed design. The simulation results show that the proposed design has desirable operating characteristics across different shapes of the underlying true dose–toxicity and dose–efficacy curves. The software to implement the proposed design is available upon request. Copyright © 2016 John Wiley & Sons, Ltd.

11.
Seamless phase I/II dose‐finding trials are attracting increasing attention nowadays in early‐phase drug development for oncology. Most existing phase I/II dose‐finding methods use sophisticated yet untestable models to quantify dose‐toxicity and dose‐efficacy relationships, which often makes them difficult to implement in practice. To simplify the practical implementation, we extend the Bayesian optimal interval design from maximum tolerated dose finding to optimal biological dose finding in phase I/II trials. In particular, optimized intervals for toxicity and efficacy are derived by minimizing the probabilities of incorrect classification. If the pair of observed toxicity and efficacy probabilities at the current dose is located inside the promising region, we retain the current dose; if the observed probabilities are outside of the promising region, we propose an allocation rule that maximizes the posterior probability that the response rate of the next dose falls inside a prespecified efficacy probability interval while still controlling the level of toxicity. The proposed interval design is model‐free, and thus suitable for various dose‐response relationships. We conduct extensive simulation studies to demonstrate the small‐ and large‐sample performance of the proposed method under various scenarios. Compared to existing phase I/II dose‐finding designs, not only is our interval design easy to implement in practice, but it also possesses desirable and robust operating characteristics.
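The appeal of interval designs is that the per-cohort decision reduces to comparing observed rates against fixed cutoffs. The rule below is a deliberately crude stand-in: the interval boundaries here are invented round numbers, whereas the paper derives optimized boundaries by minimizing misclassification probabilities and uses a posterior-probability allocation rule outside the promising region.

```python
def interval_decision(n, n_tox, n_eff, tox_int=(0.15, 0.35), eff_min=0.30):
    """Toy interval-based decision for the next cohort, given n patients
    at the current dose with n_tox toxicities and n_eff responses.
    Cutoff values are illustrative, not optimized boundaries."""
    p_tox, p_eff = n_tox / n, n_eff / n
    if p_tox > tox_int[1]:
        return "de-escalate"   # observed toxicity above the interval
    if p_tox >= tox_int[0] and p_eff >= eff_min:
        return "stay"          # (p_tox, p_eff) inside the promising region
    return "escalate"          # tolerable so far, but efficacy not yet shown
```

Because the rule depends only on the two observed proportions, it needs no dose-response model, which is what makes interval designs easy to pre-tabulate for a protocol.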

12.
Most phase I dose‐finding methods in oncology aim to find the maximum‐tolerated dose from a set of prespecified doses. However, in practice, because of a lack of understanding of the true dose–toxicity relationship, it is likely that none of these prespecified doses are equal or reasonably close to the true maximum‐tolerated dose. To handle this issue, we propose an adaptive dose modification (ADM) method that can be coupled with any existing dose‐finding method to adaptively modify the dose, when it is needed, during the course of dose finding. To reflect clinical practice, we divide the toxicity probability into three regions: underdosing, acceptable, and overdosing regions. We adaptively add a new dose whenever the observed data suggest that none of the investigational doses are likely to be located in the acceptable region. The new dose is estimated via a nonparametric dose–toxicity model based on local polynomial regression. The simulation study shows that ADM substantially outperforms a similar existing method. We applied ADM to a phase I cancer trial. Copyright © 2016 John Wiley & Sons, Ltd.
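The local-polynomial step can be illustrated with a local linear smoother: fit a kernel-weighted straight line around each candidate dose and read off where the smoothed toxicity curve meets the target rate. This is a generic sketch, assuming a Gaussian kernel and a fixed bandwidth; the paper's actual estimator and bandwidth choice may differ.

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear estimate of E[y | x = x0]: weighted least squares on
    (1, x - x0) with Gaussian kernel weights of bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    W = np.diag(w)
    beta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta_hat[0]  # intercept = fitted value at x0

def new_dose(x, y, target, grid, h=0.5):
    """Grid dose whose smoothed toxicity estimate is closest to the target."""
    fits = np.array([local_linear(g, x, y, h) for g in grid])
    return grid[int(np.argmin(np.abs(fits - target)))]
```

A local linear fit reproduces a straight-line dose-toxicity curve exactly, so on linear toy data the routine recovers the target dose without smoothing bias.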

13.
A statistical definition of surrogate endpoints as well as validation criteria was first presented by Prentice. Freedman et al. supplemented these criteria with the so-called proportion explained. Buyse and Molenberghs pointed to inadequacies of these criteria and suggested a new definition of surrogacy based on (i) the relative effect linking the overall effect of treatment on both endpoints and (ii) an individual-level measure of agreement between both endpoints. Using data from a randomized trial, they showed how a potential surrogate endpoint can be studied using a joint model for the surrogate and the true endpoint. Whereas Buyse and Molenberghs restricted themselves to the fairly simple cases of jointly normal and jointly binary outcomes, we treat the situation where the surrogate is binary and the true endpoint is continuous, or vice versa. In addition, we consider the case of ordinal endpoints. Further, Buyse et al. extended the approach of Buyse and Molenberghs to a meta-analytic context. We will adopt a similar approach for responses of a mixed data type.

14.
Diagnostic test accuracy studies typically report the number of true positives, false positives, true negatives and false negatives. There usually exists a negative association between the number of true positives and true negatives, because studies that adopt a less stringent criterion for declaring a test positive invoke higher sensitivities and lower specificities. A generalized linear mixed model (GLMM) is currently recommended to synthesize diagnostic test accuracy studies. We propose a copula mixed model for bivariate meta‐analysis of diagnostic test accuracy studies. Our general model includes the GLMM as a special case and can also operate on the original scale of sensitivity and specificity. Summary receiver operating characteristic curves are deduced for the proposed model through quantile regression techniques and different characterizations of the bivariate random effects distribution. Our general methodology is demonstrated with an extensive simulation study and illustrated by re‐analysing the data of two published meta‐analyses. Our study suggests that the copula models can improve on the GLMM in fit to the data, making the case for moving to copula random effects models. Our modelling framework is implemented in the package CopulaREMADA within the open source statistical environment R. Copyright © 2015 John Wiley & Sons, Ltd.

15.
Phase I oncology trials are designed to identify a safe dose with an acceptable toxicity profile. The dose is typically determined based on the probability of severe toxicity observed during the first treatment cycle, although patients continue to receive treatment for multiple cycles. In addition, the toxicity data from multiple types and grades are typically summarized into a single binary outcome of dose‐limiting toxicity. A novel endpoint, the total toxicity profile, was previously developed to account for the multiple toxicity types and grades. In this work, we propose to use longitudinal repeated measures of the total toxicity profile over multiple treatment cycles, accounting for cumulative toxicity during dose‐finding. A linear mixed model was utilized in the Bayesian framework, with the addition of Bayesian risk functions for decision‐making in dose assignment. The performance of this design is evaluated using simulation studies and compared with the previously proposed quasi‐likelihood continual reassessment method (QLCRM) design. Twelve clinical scenarios incorporating four different locations of maximum tolerated dose and three different time trends (decreasing, increasing, and no effect) were investigated. The proposed repeated measures design was comparable with the QLCRM when only cycle 1 data were utilized in dose‐finding; however, it demonstrated an improvement over the QLCRM when data from multiple cycles were used across all scenarios. Copyright © 2016 John Wiley & Sons, Ltd.

16.
Genome-wide association (GWA) studies are becoming a powerful tool in deciphering the genetic basis of complex human diseases/traits. Currently, the univariate analysis is the most commonly used method to identify genes associated with a certain disease/phenotype under study. A major limitation of the univariate analysis is that it may not make use of the information in multiple correlated phenotypes, which are usually measured and collected in practical studies. The multivariate analysis has proven to be a powerful approach in linkage studies of complex diseases/traits, but it has received little attention in GWA. In this study, we aim to develop a bivariate analytical method for GWA studies, which can be used for the complex situation in which a continuous trait and a binary trait are measured under study. Based on the modified extended generalized estimating equation (EGEE) method proposed herein, we assessed the performance of our bivariate analyses through extensive simulations as well as real data analyses. To develop an EGEE approach for bivariate genetic analyses, we combined two different generalized linear models corresponding to the phenotypic variables using a seemingly unrelated regression model. The simulation results demonstrated that our EGEE-based bivariate analytical method outperforms univariate analyses in increasing statistical power under a variety of simulation scenarios. Notably, EGEE-based bivariate analyses have consistent advantages over univariate analyses whether or not there exists a phenotypic correlation between the two traits. Our study has practical importance, as one can always use multivariate analyses as a screening tool when multiple phenotypes are available, without extra cost in statistical power or false-positive rate. Analyses of empirical GWA data further affirm the advantages of our bivariate analytical method.

17.
The paradigm of oncology drug development is expanding from developing cytotoxic agents to developing biological or molecularly targeted agents (MTAs). Although it is common for the efficacy and toxicity of cytotoxic agents to increase monotonically with dose escalation, the efficacy of some MTAs may exhibit non‐monotonic patterns in their dose–efficacy relationships. Many adaptive dose‐finding approaches in the available literature account for the non‐monotonic dose–efficacy behavior by including additional model parameters. In this study, we propose a novel adaptive dose‐finding approach based on binary efficacy and toxicity outcomes in phase I trials for monotherapy using an MTA. We develop a dose–efficacy model, the parameters of which are allowed to change in the vicinity of the change point of the dose level, in order to consider the non‐monotonic pattern of the dose–efficacy relationship. The change point is obtained as the dose that maximizes the log‐likelihood of the assumed dose–efficacy and dose‐toxicity models. The dose‐finding algorithm is based on the weighted Mahalanobis distance, calculated using the posterior probabilities of efficacy and toxicity outcomes. We compare the operating characteristics between the proposed and existing methods and examine the sensitivity of the proposed method by simulation studies under various scenarios. Copyright © 2016 John Wiley & Sons, Ltd.
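A change-point dose-efficacy curve of this kind can be written as a logistic model whose slope switches at the change point, so efficacy can rise, plateau, or decline beyond it. The parameterization below is a hypothetical sketch: the paper estimates the change point by maximizing the joint log-likelihood, whereas here it is simply supplied as an argument.

```python
import math

def expit(x):
    return 1.0 / (1.0 + math.exp(-x))

def efficacy_prob(dose, a, b1, b2, tau):
    """Dose-efficacy probability under a broken-stick linear predictor:
    slope b1 below the change point tau, slope b2 above it."""
    eta = a + b1 * min(dose, tau) + b2 * max(dose - tau, 0.0)
    return expit(eta)
```

Setting b2 = 0 gives a plateau above tau, and b2 < 0 gives the non-monotone decline that motivates the design; b1 = b2 recovers the ordinary monotone logistic curve as a special case.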

18.
In many clinical settings, improving patient survival is of interest but a practical surrogate, such as time to disease progression, is instead used as a clinical trial's primary endpoint. A time‐to‐first endpoint (e.g., death or disease progression) is commonly analyzed but may not be adequate to summarize patient outcomes if a subsequent event contains important additional information. We consider a surrogate outcome very generally as one correlated with the true endpoint of interest. Settings of interest include those where the surrogate indicates a beneficial outcome, so that the usual time‐to‐first endpoint of death or surrogate event is nonsensical. We present a new two‐sample test for bivariate, interval‐censored time‐to‐event data, where one endpoint is a surrogate for the second, less frequently observed endpoint of true interest. This test examines whether patient groups have equal clinical severity. If the true endpoint rarely occurs, the proposed test acts like a weighted logrank test on the surrogate; if it occurs for most individuals, then our test acts like a weighted logrank test on the true endpoint. If the surrogate is a useful statistical surrogate, our test can have better power than tests based on the surrogate that naively handle the true endpoint. In settings where the surrogate is not valid (treatment affects the surrogate but not the true endpoint), our test incorporates the information regarding the lack of treatment effect from the observed true endpoints and hence is expected to have a dampened treatment effect compared with tests based on the surrogate alone. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.

19.
Despite an enormous and growing statistical literature, formal procedures for dose‐finding are only slowly being implemented in phase I clinical trials. Even in oncology and other life‐threatening conditions in which a balance between efficacy and toxicity has to be struck, model‐based approaches, such as the Continual Reassessment Method, have not been universally adopted. Two related concerns have limited the adoption of the new methods. One relates to doubts about the appropriateness of models assumed to link the risk of toxicity to dose, and the other is the difficulty of communicating the nature of the process to clinical investigators responsible for early phase studies. In this paper, we adopt a new Bayesian approach involving a simple model assuming only monotonicity in the dose‐toxicity relationship. The parameters that define the model have immediate and simple interpretation. The approach can be applied automatically, and we present a simulation investigation of its properties when it is. More importantly, it can be used in a transparent fashion as one element in the expert consideration of what dose to administer to the next patient or group of patients. The procedure serves to summarize the opinions and the data concerning risks of a binary characterization of toxicity which can then be considered, together with additional and less tidy trial information, by the clinicians responsible for making decisions on the allocation of doses. Graphical displays of these opinions can be used to ease communication with investigators. Copyright © 2010 John Wiley & Sons, Ltd.

20.