Similar Articles
Found 20 similar articles.
1.
In clinical research and development, interim monitoring is critical for better decision‐making and for minimizing the risk of exposing patients to possibly ineffective therapies. For interim futility or efficacy monitoring, predictive probability methods are widely adopted in practice. Those methods have been well studied for univariate variables. However, for longitudinal studies, predictive probability methods using univariate information from only completers may not be most efficient, and data from on‐going subjects can be utilized to improve efficiency. On the other hand, leveraging information from on‐going subjects could allow an interim analysis to be conducted once a sufficient number of subjects reach an earlier time point. For longitudinal outcomes, we derive closed‐form formulas for predictive probabilities, including Bayesian predictive probability, predictive power, and conditional power, and also give closed‐form solutions for the predictive probability of success in a future trial and the predictive probability of success of the best dose. When predictive probabilities are used for interim monitoring, we study their distributions and discuss analytical cutoff values or stopping boundaries that have desired operating characteristics. We show that predictive probabilities utilizing all longitudinal information are more efficient for interim monitoring than those using information from completers only. To illustrate their practical application for longitudinal data, we analyze two real data examples from clinical trials.
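As a point of reference for the completers-only baseline this paper improves upon, here is a minimal sketch of a Bayesian predictive probability of success for a single binary endpoint, assuming a Beta prior; the function name and default thresholds are illustrative assumptions, not the paper's longitudinal formulas:

```python
from scipy.stats import beta, betabinom

def predictive_prob_success(x, n, n_max, p_null, post_cut=0.975, a=1.0, b=1.0):
    """Bayesian predictive probability that, once all n_max subjects are
    observed, the posterior P(p > p_null) exceeds post_cut; Beta(a, b) prior,
    x responders among the n completers so far."""
    m = n_max - n                                    # outcomes still to come
    pp = 0.0
    for y in range(m + 1):                           # possible future responders
        post_final = 1.0 - beta.cdf(p_null, a + x + y, b + n_max - x - y)
        if post_final > post_cut:                    # trial would succeed
            pp += betabinom.pmf(y, m, a + x, b + n - x)
    return pp

print(predictive_prob_success(x=12, n=20, n_max=40, p_null=0.4))
```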

2.
Group sequential designs are widely used in clinical trials to determine whether a trial should be terminated early. In such trials, maximum likelihood estimates are often used to describe the difference in efficacy between the experimental and reference treatments; however, these are well known to display conditional and unconditional biases. Established bias‐adjusted estimators include the conditional mean‐adjusted estimator (CMAE), the conditional median unbiased estimator, the conditional uniformly minimum variance unbiased estimator (CUMVUE), and the weighted estimator. However, their performances have been inadequately investigated. In this study, we review the characteristics of these bias‐adjusted estimators and compare their conditional bias, overall bias, and conditional mean‐squared errors in clinical trials with survival endpoints through simulation studies. The coverage probabilities of the confidence intervals for the four estimators are also evaluated. We find that the CMAE reduced conditional bias and showed relatively small conditional mean‐squared errors when the trials terminated at the interim analysis. The conditional coverage probability of the conditional median unbiased estimator was well below the nominal value. In trials that did not terminate early, the CUMVUE showed less bias and a more acceptable conditional coverage probability than the other estimators. In conclusion, when planning an interim analysis, we recommend using the CUMVUE for trials that do not terminate early and the CMAE for those that terminate early. Copyright © 2017 John Wiley & Sons, Ltd.

3.
This paper considers the analysis of longitudinal data complicated by the fact that during follow‐up patients can be in different disease states, such as remission, relapse, or death. If both the response of interest (for example, quality of life (QOL)) and the amount of missing data depend on this disease state, ignoring the disease state will yield biased means. Death as the final state is an additional complication because no measurements are taken after death, and the outcome of interest is often undefined after death. We discuss a new approach to model these types of data. In our approach, the probability of being in each disease state over time is estimated using multi‐state models. Within each disease state, the conditional mean given the disease state is modeled directly. Generalized estimating equations are used to estimate the parameters of the conditional means, with inverse probability weights to account for unobserved responses. This approach shows the effect of the disease state on the longitudinal response. Furthermore, it yields estimates of the overall mean response over time, either conditionally on being alive or after imputing predefined values for the response after death. Graphical methods to visualize the joint distribution of disease state and response are discussed. As an example, the analysis of a Dutch randomized clinical trial for breast cancer is considered. In this study, the long‐term impact on QOL of two different chemotherapy schedules was studied with three disease states: alive without relapse, alive after relapse, and death. Copyright © 2009 John Wiley & Sons, Ltd.

4.
We consider monitoring a pilot toxicity study in which the adverse outcome is bivariate and the goal is to terminate the trial if evidence of excessive toxicity is encountered. We develop a Bayesian monitoring rule, based on the posterior probability that the frequency of either adverse outcome exceeds that observed under standard therapy. This rule is intuitive and ethical, and extends in a straightforward fashion from the univariate to the multivariate case. Since p-values and confidence intervals are standard methods for reporting the results of clinical trials, we also suggest how frequentist inferences may be drawn at the conclusion of a study monitored in this fashion. This work thus represents an integration of Bayesian and frequentist methodology for sequential clinical trials.
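A minimal sketch of such a posterior-probability stopping rule for one adverse outcome, assuming a Beta(a, b) prior and a fixed standard-therapy rate p0; the cutoff is illustrative. For the bivariate case, the rule is applied to each outcome and the trial stops if either posterior probability exceeds its cutoff:

```python
from scipy.stats import beta

def stop_for_toxicity(x, n, p0, a=1.0, b=1.0, cutoff=0.90):
    """Posterior P(p_tox > p0) after x adverse events in n patients,
    under a Beta(a, b) prior; recommend stopping above the cutoff."""
    post = 1.0 - beta.cdf(p0, a + x, b + n - x)
    return post, post > cutoff

# monitor the two adverse outcomes separately; stop if either rule fires
print(stop_for_toxicity(x=4, n=10, p0=0.15))
print(stop_for_toxicity(x=2, n=10, p0=0.10))
```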

5.
We develop a new modeling approach to enhance a recently proposed method to detect increases of contrast‐enhancing lesions (CELs) on repeated magnetic resonance imaging, which have been used as an indicator for potential adverse events in multiple sclerosis clinical trials. The method signals patients with unusual increases in CEL activity by estimating the probability of observing CEL counts as large as those observed on a patient's recent scans conditional on the patient's CEL counts on previous scans. This conditional probability index (CPI), computed based on a mixed‐effect negative binomial regression model, can vary substantially depending on the choice of distribution for the patient‐specific random effects. Therefore, we relax this parametric assumption to model the random effects with an infinite mixture of beta distributions, using the Dirichlet process, which effectively allows any form of distribution. To our knowledge, no previous literature considers a mixed‐effect regression for longitudinal count variables where the random effect is modeled with a Dirichlet process mixture. As our inference is in the Bayesian framework, we adopt a meta‐analytic approach to develop an informative prior based on previous clinical trials. This is particularly helpful at the early stages of trials when less data are available. Our enhanced method is illustrated with CEL data from 10 previous multiple sclerosis clinical trials. Our simulation study shows that our procedure estimates the CPI more accurately than parametric alternatives when the patient‐specific random effect distribution is misspecified and that an informative prior improves the accuracy of the CPI estimates. Copyright © 2015 John Wiley & Sons, Ltd.

6.
We suggest a conceptually simple Bayesian approach to inferences about the conditional probability of a specimen being infection-free given the outcome of a diagnostic test and covariate information. The approach assumes that the infection state of a specimen is not observable but uses the outcomes of a second test in conjunction with those of the first, that is, dual testing data. Dual testing procedures are often employed in clinical laboratories to assure that positive samples are not contaminated or to increase the likelihood of correct diagnoses. Using the CD4 count and a proxy for risk behaviour as covariates, we apply the method to obtain inferences about the conditional probability of an individual being HIV-1 infection-free given the individual's covariates and a negative outcome with the standard enzyme-linked immunosorbent assay/Western blotting test for HIV-1 detection. Inferences combine data from two studies where specimens were tested with the standard and with the more sensitive polymerase chain reaction test.
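The underlying conditional-probability calculation reduces to Bayes' theorem; the sketch below drops the covariates and dual-testing likelihood of the paper and simply computes the probability of being infection-free given a negative test from assumed prevalence, sensitivity, and specificity:

```python
def p_infection_free_given_negative(prev, sens, spec):
    """Bayes' theorem: probability a specimen is infection-free given a
    negative test, from prevalence, sensitivity, and specificity."""
    p_negative = (1.0 - prev) * spec + prev * (1.0 - sens)
    return (1.0 - prev) * spec / p_negative

# e.g., 2% prevalence, 99.7% sensitivity, 98.5% specificity (illustrative)
print(p_infection_free_given_negative(0.02, 0.997, 0.985))
```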

7.
To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two‐phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semiparametric efficient estimator is applied. This approach is made efficient by specifying the phase two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. We perform simulations to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. We provide proofs and R code. The optimality results are developed to design an HIV vaccine trial, with objective to compare the mean 'importance‐weighted' breadth (Y) of the T‐cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y and measures Y in the optimal subset. We show that the optimal design‐estimation approach can confer anywhere between absent and large efficiency gain (up to 24% in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost‐standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y | W] is important for realizing the efficiency gain, which is aided by an ample phase two sample and by using a robust fitting method. Copyright © 2013 John Wiley & Sons, Ltd.
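A rough sketch of the cost-standardized allocation idea, with phase-two selection probabilities proportional to sd(Y | W) divided by the square root of cost and rescaled to an expected phase-two budget; the capping rule and function name are illustrative assumptions, not the paper's exact optimality formulas:

```python
import numpy as np

def phase2_selection_probs(sd_given_w, cost, n_budget):
    """Per-subject selection probabilities ~ sd(Y | W_i) / sqrt(cost_i),
    rescaled so the expected phase-two sample size is n_budget.
    Capping at 1 makes the realized expectation slightly smaller;
    a full implementation would redistribute the excess."""
    raw = np.asarray(sd_given_w, float) / np.sqrt(np.asarray(cost, float))
    return np.minimum(1.0, raw * n_budget / raw.sum())

# three subjects with different residual spread and measurement cost
print(phase2_selection_probs([1.0, 2.0, 4.0], [1.0, 1.0, 4.0], n_budget=2.0))
```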

8.
In randomized trials, investigators typically rely upon an unadjusted estimate of the mean outcome within each treatment arm to draw causal inferences. Statisticians have underscored the gain in efficiency that can be achieved from covariate adjustment in randomized trials with a focus on problems involving linear models. Despite recent theoretical advances, there has been a reluctance to adjust for covariates based on two primary reasons: (i) covariate-adjusted estimates based on conditional logistic regression models have been shown to be less precise and (ii) concern over the opportunity to manipulate the model selection process for covariate adjustments to obtain favorable results. In this paper, we address these two issues and summarize recent theoretical results underlying a proposed general methodology for covariate adjustment under the framework of targeted maximum likelihood estimation in trials with two arms where the probability of treatment is 50%. The proposed methodology provides an estimate of the true causal parameter of interest representing the population-level treatment effect. It is compared with the estimates based on conditional logistic modeling, which only provide estimates of subgroup-level treatment effects rather than marginal (unconditional) treatment effects. We provide a clear criterion for determining whether a gain in efficiency can be achieved with covariate adjustment over the unadjusted method. We illustrate our strategy using a resampled clinical trial dataset from a placebo-controlled phase 4 study. Results demonstrate that gains in efficiency can be achieved even with binary outcomes through covariate adjustment leading to increased statistical power.
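The plug-in standardization (G-computation) step from which targeted maximum likelihood estimation starts can be sketched as follows; this illustrative simplification omits the TMLE targeting update and assumes a single covariate:

```python
import numpy as np
import statsmodels.api as sm

def standardized_risk_difference(y, a, w):
    """Marginal risk difference by plug-in standardization: fit a logistic
    model for Y given treatment A and covariate W, then contrast the
    average predicted risks with A set to 1 versus 0 for every subject."""
    w = np.asarray(w, dtype=float)
    ones, zeros = np.ones_like(w), np.zeros_like(w)
    fit = sm.GLM(y, np.column_stack([ones, a, w]),
                 family=sm.families.Binomial()).fit()
    risk1 = fit.predict(np.column_stack([ones, ones, w])).mean()
    risk0 = fit.predict(np.column_stack([ones, zeros, w])).mean()
    return risk1 - risk0
```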

9.
We present the results of a Monte Carlo simulation study in which we demonstrate how strong baseline interactions between a confounding variable and a treatment can create an important difference between the marginal effect of exposure on outcome (as estimated by an inverse probability of treatment weighted logistic model) and the conditional effect (as estimated by an adjusted logistic regression model). The scenarios that we explored included one with a rare outcome and a strong and prevalent effect measure modifier where, across 1,000 simulated data sets, the estimates from an adjusted logistic regression model (mean β = 0.475) and an inverse probability of treatment weighted logistic model (mean β = 2.144) do not coincide with the known true effect (β = 0.68925) when the effect measure modifier is not accounted for. When the marginal and conditional estimates do not coincide despite a rare outcome, this may suggest that there is heterogeneity in the effect of treatment between individuals. Failure to specify effect measure modification in the statistical model appears to result in systematic differences between the conditional and marginal estimates. When these differences in estimates are observed, testing for or including interactions or non-linear modeling terms may be advised.
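A compact simulation in the spirit of this scenario (all coefficients below are illustrative, not the paper's): with a strong treatment-by-confounder interaction omitted from both working models, the adjusted (conditional) and IPTW (marginal) log-odds ratios separate:

```python
import numpy as np
import statsmodels.api as sm

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
n = 200_000
c = rng.binomial(1, 0.5, n)                    # prevalent confounder/modifier
a = rng.binomial(1, expit(-1.0 + 1.5 * c), n)  # treatment depends on c
y = rng.binomial(1, expit(-4.0 + 0.3 * a + 1.0 * c + 1.2 * a * c), n)

# conditional estimate: adjusted logistic model omitting the A*C interaction
X = sm.add_constant(np.column_stack([a, c]).astype(float))
beta_cond = sm.GLM(y, X, family=sm.families.Binomial()).fit().params[1]

# marginal estimate: IPTW-weighted logistic model of Y on A alone
ps = sm.GLM(a, sm.add_constant(c.astype(float)),
            family=sm.families.Binomial()).fit().predict()
w = np.where(a == 1, 1.0 / ps, 1.0 / (1.0 - ps))
beta_marg = sm.GLM(y, sm.add_constant(a.astype(float)),
                   family=sm.families.Binomial(),
                   freq_weights=w).fit().params[1]

print(beta_cond, beta_marg)   # the two log-odds ratios diverge
```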

10.
We present continuous and group sequential designs for phase II clinical trials based on the sequential conditional probability ratio test (SCPRT). The SCPRT is derived from a conditional likelihood ratio, where the conditioning is on what the corresponding (reference) fixed sample size test (RFSST) would achieve. In other words, we obtain the sequential design by controlling the maximum probability that the SCPRT does not agree with the RFSST. We shall discuss the difference between SCPRT and stochastic curtailment which also uses the concept of conditional distribution. We show that the power function of the SCPRT is virtually the same as that of the RFSST and its average sample numbers (ASNs) are close to those of Wald's sequential probability ratio test (SPRT), whereas its maximum sample size is no greater than that of the RFSST. Thus the SCPRT has all the desirable properties, such as allowing the use of the RFSST at the last analysis, of the Fleming procedure for phase II trials. The SCPRT, however, preserves the power function of the RFSST better and gives us the option for continuous monitoring. Our recommendation, therefore, is to use a group SCPRT boundary (for interim analyses performed as scheduled) embedded in a continuous SCPRT boundary (for unplanned interim analyses and analyses at times based on data trends). We provide as well a bias-adjusted estimator of the success rate after sequential stopping. We illustrate the method with several examples. The method applies to any single-arm clinical trial with binary endpoints, such as the classic paired design.

11.
Group sequential designs allow stopping a clinical trial for meeting its efficacy objectives based on interim evaluation of the accumulating data. Various methods to determine group sequential boundaries that control the probability of crossing the boundary at an interim or the final analysis have been proposed. To monitor trials with uncertainty in group sizes at each analysis, error spending functions are often used to derive stopping boundaries. Although flexible, most spending functions are generic increasing functions with parameters that are difficult to interpret. They are often selected arbitrarily, sometimes using trial and error, so that the corresponding boundaries approximate the desired behavior numerically. Lan and DeMets proposed a spending function that approximates in a natural way the O'Brien-Fleming boundary based on the Brownian motion process. We extend this approach to a general family that has an additive boundary for the Brownian motion process. The spending function and the group sequential boundary share a common parameter that regulates how fast the error is spent. Three subfamilies are considered with different additive terms. In the first subfamily, the parameter has an interpretation as the conditional error rate, which is the conditional probability to reject the null hypothesis at the final analysis. This parameter also provides a connection between group sequential and adaptive design methodology. More choices of designs are allowed in the other two subfamilies. Numerical results are provided to illustrate flexibility and interpretability of the proposed procedures. A clinical trial is described to illustrate the utility of conditional error in boundary determination.
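As a numerical sketch of the classical starting point this family extends, the code below evaluates the Lan-DeMets O'Brien-Fleming-type spending function and recovers one-sided stopping boundaries by a grid-based recursion; the grid resolution, truncation limits, and one-sided convention are assumptions of this sketch:

```python
import numpy as np
from scipy.stats import norm

def obf_spending(t, alpha=0.025):
    """Lan-DeMets spending function approximating the O'Brien-Fleming
    boundary; `alpha` is the one-sided overall error to be spent."""
    return 2.0 * (1.0 - norm.cdf(norm.ppf(1.0 - alpha / 2.0) / np.sqrt(t)))

def ld_boundaries(info_times, alpha=0.025, lim=8.0, n_grid=2001):
    """Grid-based recursion for the z-scale stopping boundaries that
    spend error according to obf_spending (a numerical sketch)."""
    t = np.asarray(info_times, dtype=float)
    incr = np.diff(np.concatenate(([0.0], obf_spending(t, alpha))))
    x = np.linspace(-lim, lim, n_grid)           # Brownian (score-scale) grid
    dx = x[1] - x[0]
    f = norm.pdf(x, scale=np.sqrt(t[0]))         # density of B(t_1)
    bounds = []
    for k in range(len(t)):
        if k > 0:  # propagate the surviving sub-density to the next look
            kern = norm.pdf(x[:, None] - x[None, :], scale=np.sqrt(t[k] - t[k-1]))
            f = kern @ f * dx
        tail = np.cumsum(f[::-1])[::-1] * dx     # P(B(t_k) >= x, no stop yet)
        c_k = np.interp(incr[k], tail[::-1], x[::-1])
        bounds.append(c_k / np.sqrt(t[k]))       # convert to the z scale
        f = np.where(x >= c_k, 0.0, f)           # remove stopped paths
    return np.array(bounds)

# four equally spaced looks: approximately [4.33, 2.96, 2.36, 2.01]
print(ld_boundaries([0.25, 0.5, 0.75, 1.0]))
```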

12.
This paper reviews Bayesian strategies for monitoring clinical trial data. It focuses on a Bayesian stochastic curtailment method based on the predictive probability of observing a clinically significant outcome at the scheduled end of the study given the observed data. The proposed method is applied to derive efficacy and futility stopping rules in clinical trials with continuous, normally distributed and binary endpoints. The sensitivity of the resulting stopping rules to the choice of prior distributions is examined and guidelines for choosing a prior distribution of the treatment effect are discussed. The Bayesian predictive approach is compared to the frequentist (conditional power) and mixed Bayesian-frequentist (predictive power) approaches. The interim monitoring strategies discussed in the paper are illustrated using examples from a small proof-of-concept study and a large mortality trial.
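Under the Brownian-motion approximation, the conditional power and (flat-prior) predictive power compared here have simple closed forms; a sketch for a one-sided test, with notation of our own choosing:

```python
import numpy as np
from scipy.stats import norm

def conditional_power(z_t, t, theta, z_alpha=1.96):
    """P(Z(1) >= z_alpha | Z(t) = z_t) for an assumed drift theta = E[Z(1)];
    z_t is the interim z-statistic at information fraction t."""
    b = z_t * np.sqrt(t)                              # Brownian value B(t)
    return norm.sf((z_alpha - b - theta * (1.0 - t)) / np.sqrt(1.0 - t))

def predictive_power(z_t, t, z_alpha=1.96):
    """Conditional power averaged over the flat-prior posterior of the drift."""
    return norm.cdf((z_t - z_alpha * np.sqrt(t)) / np.sqrt(1.0 - t))

z_t, t = 1.5, 0.5
print(conditional_power(z_t, t, theta=z_t / np.sqrt(t)))  # current-trend CP
print(predictive_power(z_t, t))
```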

13.
Missing outcome data are commonly encountered in randomized controlled trials and hence may need to be addressed in a meta‐analysis of multiple trials. A common and simple approach to deal with missing data is to restrict analysis to individuals for whom the outcome was obtained (complete case analysis). However, estimated treatment effects from complete case analyses are potentially biased if informative missing data are ignored. We develop methods for estimating meta‐analytic summary treatment effects for continuous outcomes in the presence of missing data for some of the individuals within the trials. We build on a method previously developed for binary outcomes, which quantifies the degree of departure from a missing at random assumption via the informative missingness odds ratio. Our new model quantifies the degree of departure from missing at random using either an informative missingness difference of means or an informative missingness ratio of means, both of which relate the mean value of the missing outcome data to that of the observed data. We propose estimating the treatment effects, adjusted for informative missingness, and their standard errors by a Taylor series approximation and by a Monte Carlo method. We apply the methodology to examples of both pairwise and network meta‐analysis with multi‐arm trials. © 2014 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
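The informative missingness difference of means enters through a simple identity: if a fraction q of an arm's outcomes is missing and the missing values differ from the observed ones by λ on average, the arm mean shifts by qλ. A one-line sketch (notation ours, not the paper's):

```python
def imdom_adjusted_mean(mean_obs, q_missing, imdom):
    """Arm mean adjusted for informative missingness: observed mean plus
    the missing fraction times the assumed missing-vs-observed shift."""
    return mean_obs + q_missing * imdom

# observed mean 10.0, 30% missing, missing values assumed 2.0 lower on average
print(imdom_adjusted_mean(10.0, 0.3, -2.0))
```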

14.
Quality-of-life (QOL) is an important outcome in clinical research, particularly in cancer clinical trials. Typically, data are collected longitudinally from patients during treatment and subsequent follow-up. Missing data are a common problem, and missingness may arise in a non-ignorable fashion. In particular, the probability that a patient misses an assessment may depend on the patient's QOL at the time of the scheduled assessment. We propose a Markov chain model for the analysis of categorical outcomes derived from QOL measures. Our model assumes that transitions between QOL states depend on covariates through generalized logit models or proportional odds models. To account for non-ignorable missingness, we incorporate logistic regression models for the conditional probabilities of observing measurements, given their actual values. The model can accommodate time-dependent covariates. Estimation is by maximum likelihood, summing over all possible values of the missing measurements. We describe options for selecting parsimonious models, and we study the finite-sample properties of the estimators by simulation. We apply the techniques to data from a breast cancer clinical trial in which QOL assessments were made longitudinally, and in which missing data frequently arose.

15.
The Fragility Index has been introduced as a complement to the P-value to summarize the statistical strength of evidence for a trial's result. The Fragility Index (FI) is defined in trials with two equal treatment group sizes, with a dichotomous or time-to-event outcome, and is calculated as the minimum number of conversions from nonevent to event in the treatment group needed to shift the P-value from Fisher's exact test over the .05 threshold. As the index lacks a well-defined probability motivation, its interpretation is challenging for consumers. We clarify what the FI may be capturing by separately considering two scenarios: (a) what the FI is capturing mathematically when the probability model is correct and (b) how well the FI captures violations of probability model assumptions. By calculating the posterior probability of a treatment effect, we show that when the probability model is correct, the FI inappropriately penalizes small trials for using fewer events than larger trials to achieve the same significance level. The analysis shows that for experiments conducted without bias, the FI promotes an incorrect intuition of probability, which has not been noted elsewhere and must be dispelled. We illustrate shortcomings of the FI's ability to quantify departures from model assumptions and contextualize the FI concept within current debate around the null hypothesis significance testing paradigm. Altogether, the FI creates more confusion than it resolves and does not promote statistical thinking. We recommend against its use. Instead, sensitivity analyses are recommended to quantify and communicate robustness of trial results.
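The FI computation itself is mechanical: convert nonevents to events in the treatment arm until Fisher's exact test loses significance. A sketch using scipy (the function name and 2x2 layout are ours):

```python
from scipy.stats import fisher_exact

def fragility_index(events_trt, n_trt, events_ctl, n_ctl, alpha=0.05):
    """Minimum number of nonevent-to-event conversions in the treatment
    arm that lifts Fisher's exact p-value to alpha or above; returns
    None if the result is not significant to begin with."""
    _, p = fisher_exact([[events_trt, n_trt - events_trt],
                         [events_ctl, n_ctl - events_ctl]])
    if p >= alpha:
        return None
    for k in range(1, n_trt - events_trt + 1):
        _, p = fisher_exact([[events_trt + k, n_trt - events_trt - k],
                             [events_ctl, n_ctl - events_ctl]])
        if p >= alpha:
            return k
    return None

print(fragility_index(events_trt=20, n_trt=100, events_ctl=8, n_ctl=100))
```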

16.
Noninferiority trials have recently gained importance for clinical trials of drugs and medical devices. In these trials, most statistical methods have been used from a frequentist perspective, and historical data have been used only for the specification of the noninferiority margin Δ > 0. In contrast, Bayesian methods, which have been studied recently, are advantageous in that they can use historical data to specify prior distributions and are expected to enable more efficient decision making than frequentist methods by borrowing information from historical trials. In the case of noninferiority trials for response probabilities π₁, π₂, Bayesian methods evaluate the posterior probability of H₁: π₁ > π₂ − Δ being true. To numerically calculate this posterior probability, the complicated Appell hypergeometric function or approximation methods are used. Further, the theoretical relationship between Bayesian and frequentist methods is unclear. In this work, we give the exact expression for the posterior probability of noninferiority under some mild conditions and propose a Bayesian noninferiority test framework that can flexibly incorporate historical data by using the conditional power prior. Further, we show the relationship between the Bayesian posterior probability and the P-value of the Fisher exact test. From this relationship, our method can be interpreted as a Bayesian noninferiority extension of the Fisher exact test, and we can treat superiority and noninferiority in the same framework. Our method is illustrated through Monte Carlo simulations to evaluate the operating characteristics, an application to real HIV clinical trial data, and a sample size calculation using historical data.
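In place of the exact Appell-function expression derived in the paper, the posterior probability of noninferiority can be approximated by plain Monte Carlo under independent beta priors; a sketch with illustrative defaults:

```python
import numpy as np

def post_prob_noninferior(x1, n1, x2, n2, delta, a=1.0, b=1.0,
                          draws=10**6, seed=1):
    """Monte Carlo estimate of P(pi1 > pi2 - delta | data) under
    independent Beta(a, b) priors on each arm's response probability."""
    rng = np.random.default_rng(seed)
    p1 = rng.beta(a + x1, b + n1 - x1, draws)   # posterior draws, arm 1
    p2 = rng.beta(a + x2, b + n2 - x2, draws)   # posterior draws, arm 2
    return np.mean(p1 > p2 - delta)

print(post_prob_noninferior(x1=45, n1=60, x2=48, n2=60, delta=0.10))
```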

17.
For a continuous treatment, the generalised propensity score (GPS) is defined as the conditional density of the treatment, given covariates. GPS adjustment may be implemented by including it as a covariate in an outcome regression. Here, the unbiased estimation of the dose–response function assumes correct specification of both the GPS and the outcome‐treatment relationship. This paper introduces a machine learning method, the 'Super Learner', to address model selection in this context. In the two‐stage estimation approach proposed, the Super Learner selects a GPS and then a dose–response function conditional on the GPS, as the convex combination of candidate prediction algorithms. We compare this approach with parametric implementations of the GPS and to regression methods. We contrast the methods in the Risk Adjustment in Neurocritical care cohort study, in which we estimate the marginal effects of increasing transfer time from emergency departments to specialised neuroscience centres, for patients with acute traumatic brain injury. With parametric models for the outcome, we find that dose–response curves differ according to choice of specification. With the Super Learner approach to both regression and the GPS, we find that transfer time does not have a statistically significant marginal effect on the outcomes. © 2015 The Authors. Health Economics Published by John Wiley & Sons Ltd.

18.
When simultaneously testing multiple hypotheses, the usual approach in the context of confirmatory clinical trials is to control the familywise error rate (FWER), which bounds the probability of making at least one false rejection. In many trial settings, these hypotheses will additionally have a hierarchical structure that reflects the relative importance and links between different clinical objectives. The graphical approach of Bretz et al (2009) is a flexible and easily communicable way of controlling the FWER while respecting complex trial objectives and multiple structured hypotheses. However, the FWER can be a very stringent criterion that leads to procedures with low power, and may not be appropriate in exploratory trial settings. This motivates controlling generalized error rates, particularly when the number of hypotheses tested is no longer small. We consider the generalized familywise error rate (k-FWER), which is the probability of making k or more false rejections, as well as the tail probability of the false discovery proportion (FDP), which is the probability that the proportion of false rejections is greater than some threshold. We also consider asymptotic control of the false discovery rate, which is the expectation of the FDP. In this article, we show how to control these generalized error rates when using the graphical approach and its extensions. We demonstrate the utility of the resulting graphical procedures on three clinical trial case studies.
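Not the graphical procedure itself, but a simple k-FWER baseline it can be compared against: the Lehmann-Romano step-down procedure, which reduces to Holm's method at k = 1. A sketch:

```python
import numpy as np

def lehmann_romano_stepdown(pvals, k=2, alpha=0.05):
    """Step-down control of the k-FWER: compare ordered p-values to
    k*alpha/m for the first k ranks and k*alpha/(m + k - i) afterwards
    (i is the 1-based rank); reject until the first failure."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    crit = np.array([k * alpha / (m + k - max(i, k)) for i in range(1, m + 1)])
    below = p[order] <= crit
    n_rej = m if below.all() else int(np.argmin(below))  # first failing rank
    reject = np.zeros(m, dtype=bool)
    reject[order[:n_rej]] = True
    return reject

print(lehmann_romano_stepdown([0.001, 0.004, 0.02, 0.03, 0.2, 0.7], k=2))
```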

19.
A global one-sample test for response rates for stratified phase II clinical trials is proposed. Such a test is analogous to that of a stratified log-rank test for time-to-event data. Both one- and two-stage tests are developed, and conditional and unconditional approaches are introduced in each case, where the conditional approach involves conditioning on the observed sample sizes within the strata. The methodology generates sample sizes and stopping boundaries that provide designs with the desired power and type I error probability. These methods are useful for designing stratified phase II clinical trials. An application to a Children's Oncology Group phase II clinical trial in relapsed neuroblastoma patients is presented.

20.
We consider estimation of treatment effects in two‐stage adaptive multi‐arm trials with a common control. The best treatment is selected at interim, and the primary endpoint is modeled via a Cox proportional hazards model. The maximum partial‐likelihood estimator of the log hazard ratio of the selected treatment will overestimate the true treatment effect in this case. Several methods for reducing the selection bias have been proposed for normal endpoints, including an iterative method based on the estimated conditional selection biases and a shrinkage approach based on empirical Bayes theory. We adapt these methods to time‐to‐event data and compare the bias and mean squared error of all methods in an extensive simulation study and apply the proposed methods to reconstructed data from the FOCUS trial. We find that all methods tend to overcorrect the bias, and only the shrinkage methods can reduce the mean squared error. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

