Similar literature
20 similar articles found.
1.
In clinical trials with time-to-event outcomes, it is common to estimate the marginal hazard ratio from the proportional hazards model, even when the proportional hazards assumption is not valid. This is unavoidable from the perspective that the estimator must be specified a priori if probability statements about treatment effect estimates are desired. Marginal hazard ratio estimates under non-proportional hazards are still useful, as they can be considered to be average treatment effect estimates over the support of the data. However, as many have shown, under non-proportional hazards, the 'usual' unweighted marginal hazard ratio estimate is a function of the censoring distribution, which is not normally considered to be scientifically relevant when describing the treatment effect. In addition, in many practical settings, the censoring distribution is only conditionally independent (e.g., differing across treatment arms), which further complicates the interpretation. In this paper, we investigate an estimator of the hazard ratio that removes the influence of censoring and propose a consistent robust variance estimator. We compare the coverage probability of the estimator to both the usual Cox model estimator and an estimator proposed by Xu and O'Quigley (2000) when censoring is independent of the covariate. The new estimator should be used for inference that does not depend on the censoring distribution. It is particularly relevant to adaptive clinical trials where, by design, censoring distributions differ across treatment arms. Copyright © 2012 John Wiley & Sons, Ltd.

2.
Group sequential designs are widely used in clinical trials to determine whether a trial should be terminated early. In such trials, maximum likelihood estimates are often used to describe the difference in efficacy between the experimental and reference treatments; however, these are well known for displaying conditional and unconditional biases. Established bias-adjusted estimators include the conditional mean-adjusted estimator (CMAE), conditional median unbiased estimator, conditional uniformly minimum variance unbiased estimator (CUMVUE), and weighted estimator. However, their performances have been inadequately investigated. In this study, we review the characteristics of these bias-adjusted estimators and compare their conditional bias, overall bias, and conditional mean-squared errors in clinical trials with survival endpoints through simulation studies. The coverage probabilities of the confidence intervals for the four estimators are also evaluated. We find that the CMAE reduced conditional bias and showed relatively small conditional mean-squared errors when the trials terminated at the interim analysis. The conditional coverage probability of the conditional median unbiased estimator was well below the nominal value. In trials that did not terminate early, the CUMVUE showed less bias and a more acceptable conditional coverage probability than the other estimators. In conclusion, when planning an interim analysis, we recommend using the CUMVUE for trials that do not terminate early and the CMAE for those that terminate early. Copyright © 2017 John Wiley & Sons, Ltd.

3.
Propensity score methods are used to reduce the effects of observed confounding when using observational data to estimate the effects of treatments or exposures. A popular method of using the propensity score is inverse probability of treatment weighting (IPTW). When using this method, a weight is calculated for each subject that is equal to the inverse of the probability of receiving the treatment that was actually received. These weights are then incorporated into the analyses to minimize the effects of observed confounding. Previous research has found that these methods result in unbiased estimation when estimating the effect of treatment on survival outcomes. However, conventional methods of variance estimation were shown to result in biased estimates of standard error. In this study, we conducted an extensive set of Monte Carlo simulations to examine different methods of variance estimation when using a weighted Cox proportional hazards model to estimate the effect of treatment. We considered three variance estimation methods: (i) a naïve model-based variance estimator; (ii) a robust sandwich-type variance estimator; and (iii) a bootstrap variance estimator. We considered estimation of both the average treatment effect and the average treatment effect in the treated. We found that the use of a bootstrap estimator resulted in approximately correct estimates of standard errors and confidence intervals with the correct coverage rates. The other estimators resulted in biased estimates of standard errors and confidence intervals with incorrect coverage rates. Our simulations were informed by a case study examining the effect of statin prescribing on mortality. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
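To make the weighting and resampling steps concrete, the following is a minimal Python sketch (not the authors' simulation code) of the bootstrap variance approach described above. It assumes a pandas data frame df with hypothetical columns time, event, treat, x1 and x2, and uses scikit-learn for the propensity model and lifelines' CoxPHFitter for the weighted Cox fit.

```python
import numpy as np
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

# df: a pandas DataFrame with columns time, event, treat (0/1), x1, x2 (hypothetical names)

def iptw_log_hr(df, estimand="ATE"):
    """Estimate the propensity score, build IPT weights, and return the
    log hazard ratio for treatment from a weighted Cox model."""
    X = df[["x1", "x2"]].to_numpy()
    ps = LogisticRegression().fit(X, df["treat"]).predict_proba(X)[:, 1]
    if estimand == "ATE":
        w = np.where(df["treat"] == 1, 1.0 / ps, 1.0 / (1.0 - ps))
    else:  # ATT: treated subjects get weight 1, controls get ps / (1 - ps)
        w = np.where(df["treat"] == 1, 1.0, ps / (1.0 - ps))
    d = df.assign(w=w)[["time", "event", "treat", "w"]]
    cph = CoxPHFitter().fit(d, duration_col="time", event_col="event",
                            weights_col="w", robust=True)
    return cph.params_["treat"]

def bootstrap_se(df, n_boot=200, estimand="ATE"):
    """Bootstrap standard error: the propensity model is re-estimated in
    every resample, so the uncertainty in the weights is propagated."""
    reps = [iptw_log_hr(df.sample(len(df), replace=True, random_state=b),
                        estimand=estimand)
            for b in range(n_boot)]
    return float(np.std(reps, ddof=1))
```

Re-estimating the propensity score within each bootstrap replicate is what distinguishes this from the naïve and robust sandwich variances, which condition on the weights as if they were known.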

4.
In two-stage randomization designs, patients are randomized to one of the initial treatments, and at the end of the first stage, they are randomized to one of the second stage treatments depending on the outcome of the initial treatment. Statistical inference for survival data from these trials uses methods such as marginal mean models and weighted risk set estimates. In this article, we propose two forms of weighted Kaplan–Meier (WKM) estimators based on inverse-probability weighting: one with fixed weights and the other with time-dependent weights. We compare their properties with those of the standard Kaplan–Meier (SKM) estimator, the marginal mean model-based (MM) estimator and the weighted risk set (WRS) estimator. A simulation study reveals that both forms of weighted Kaplan–Meier estimators are asymptotically unbiased and provide coverage rates similar to those of the MM and WRS estimators. The SKM estimator, however, is biased when the second randomization rates are not the same for the responders and non-responders to initial treatment. The methods described are demonstrated by applying them to a leukemia data set. Copyright © 2010 John Wiley & Sons, Ltd.
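As an illustration of the fixed-weight version, the sketch below computes a weighted Kaplan–Meier curve in plain numpy. The weight construction described in the trailing comment (inverse of the probability of the second-stage assignment actually received, zero for subjects inconsistent with the strategy) is a simplified, hypothetical scheme rather than the paper's exact estimator.

```python
import numpy as np

def weighted_km(time, event, weight):
    """Inverse-probability-weighted Kaplan-Meier estimate: at each distinct
    event time the factor (1 - weighted events / weighted number at risk)
    enters the product. With all weights equal to 1 this is the standard KM."""
    time, event, weight = map(np.asarray, (time, event, weight))
    surv = 1.0
    times, estimate = [0.0], [1.0]
    for t in np.unique(time[event == 1]):
        at_risk = weight[time >= t].sum()
        events = weight[(time == t) & (event == 1)].sum()
        surv *= 1.0 - events / at_risk
        times.append(float(t))
        estimate.append(surv)
    return np.array(times), np.array(estimate)

# Illustrative fixed weights: subjects consistent with the strategy of interest
# get 1 / (probability of the second-stage assignment they actually received);
# inconsistent subjects get weight 0 and drop out of the estimate.
```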

5.
In sequential multiple assignment randomized trials, longitudinal outcomes may be the most important outcomes of interest because such trials are usually conducted in areas of chronic diseases or conditions. We propose a weighted generalized estimating equation (GEE) approach to analyzing data from such trials for comparing two adaptive treatment strategies based on generalized linear models. Although the randomization probabilities are known, we consider estimated weights in which the randomization probabilities are replaced by their empirical estimates, and we prove that the resulting weighted GEE estimator is more efficient than the estimator with the true weights. The variance of the weighted GEE estimator is estimated by an empirical sandwich estimator. The time variable in the model can be linear, piecewise linear, or take more complicated forms. This provides flexibility, which is important because, in the adaptive treatment setting, the treatment changes over time and, hence, a single linear trend over the whole period of study may not be practical. Simulation results show that the weighted GEE estimators of regression coefficients are consistent regardless of the specification of the correlation structure of the longitudinal outcomes. The weighted GEE method is then applied to data from the Clinical Antipsychotic Trials of Intervention Effectiveness. Copyright © 2016 John Wiley & Sons, Ltd.
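The key device, replacing the known randomization probabilities by their empirical estimates, can be sketched as follows. The variable names (stage1, responder, stage2) and the exact weighting scheme are illustrative assumptions rather than the paper's notation.

```python
import numpy as np

def estimated_strategy_weights(stage1, responder, stage2, s1, s2):
    """Inverse-probability weights for subjects whose observed assignments are
    consistent with the strategy (s1, s2), with the known randomization
    probabilities replaced by their empirical (observed-proportion) estimates."""
    stage1, responder, stage2 = map(np.asarray, (stage1, responder, stage2))
    w = np.zeros(len(stage1))
    p1_hat = np.mean(stage1 == s1)        # estimated first-stage assignment probability
    consistent1 = stage1 == s1
    for r in (0, 1):                      # estimate the second-stage probability
        grp = consistent1 & (responder == r)   # separately for responders and
        if grp.any():                          # non-responders
            p2_hat = np.mean(stage2[grp] == s2)
            keep = grp & (stage2 == s2)
            if p2_hat > 0:
                w[keep] = 1.0 / (p1_hat * p2_hat)
    return w  # weight 0 for subjects not consistent with the strategy
```

These weights would then enter a GEE fit (for example, with an independence or other working correlation structure), with the empirical sandwich estimator used for the variance.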

6.
In a cost-effectiveness analysis using clinical trial data, estimates of the between-treatment difference in mean cost and mean effectiveness are needed. Several methods for handling censored data have been suggested. One of them is inverse-probability weighting, which has the advantage that it can also be applied to estimate the parameters from a linear regression of the mean. Such regression models can potentially estimate the treatment contrast more precisely, since some of the residual variance can be explained by baseline covariates. The drawback, however, is that inverse-probability weighting may not be efficient. Using existing results on semi-parametric efficiency, this paper derives the semi-parametric efficient parameter estimates for regression of mean cost, mean quality-adjusted survival time and mean survival time. The performance of these estimates is evaluated through a simulation study. Applying both the new estimators and the inverse-probability weighted estimators to the results of the EVALUATE trial showed that the new estimators achieved a halving of the variance of the estimated treatment contrast for cost. Some practical suggestions for choosing an estimator are offered.
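A minimal sketch of the inverse-probability-weighted regression baseline that the paper improves upon (not the semi-parametric efficient estimator itself): subjects whose costs are fully observed are weighted by the inverse of the Kaplan-Meier estimate of the censoring survivor function at their follow-up time. The column layout and design matrix are hypothetical.

```python
import numpy as np

def censoring_km(followup, delta):
    """Kaplan-Meier survivor function of the censoring time: the 'event' here
    is being censored (delta == 0). Returns step times and step values."""
    followup, delta = map(np.asarray, (followup, delta))
    order = np.argsort(followup)
    t, cens = followup[order], 1 - delta[order]
    surv, times, vals = 1.0, [0.0], [1.0]
    for u in np.unique(t[cens == 1]):
        at_risk = (t >= u).sum()
        n_cens = ((t == u) & (cens == 1)).sum()
        surv *= 1.0 - n_cens / at_risk
        times.append(float(u))
        vals.append(surv)
    return np.array(times), np.array(vals)

def ipw_cost_regression(cost, followup, delta, X):
    """Weighted least squares of total cost on covariates X (first column: the
    treatment indicator). Only subjects observed to complete follow-up
    (delta == 1) contribute, each weighted by 1 / K(T_i-)."""
    cost, followup, delta, X = map(np.asarray, (cost, followup, delta, X))
    times, vals = censoring_km(followup, delta)
    K = np.array([vals[np.searchsorted(times, t, side="left") - 1] for t in followup])
    w = delta / np.clip(K, 1e-12, None)
    design = np.column_stack([np.ones(len(cost)), X])
    sw = np.sqrt(w)[:, None]
    beta, *_ = np.linalg.lstsq(design * sw, cost * np.sqrt(w), rcond=None)
    return beta  # beta[1] is the IPW estimate of the treatment contrast in mean cost
```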

7.
In this article, we study blinded sample size re-estimation in the 'gold standard' design with an internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials, in which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing–Ganju variance estimator results in underpowered trials; this is expected, because an overestimation of the variance, and thus of the sample size, is in general required for the re-estimation procedure to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing–Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing–Ganju variance estimator, such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and, therefore, can be calculated prior to a trial. Moreover, we prove that sample size re-estimation based on the Xing–Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.

8.
Seamless phase II/III clinical trials offer an efficient way to select an experimental treatment and perform confirmatory analysis within a single trial. However, combining the data from both stages in the final analysis can induce bias into the estimates of treatment effects. Methods for bias adjustment developed thus far have made restrictive assumptions about the design and selection rules followed. In order to address these shortcomings, we apply recent methodological advances to derive the uniformly minimum variance conditionally unbiased estimator for two-stage seamless phase II/III trials. Our framework allows for the precision of the treatment arm estimates to take arbitrary values, can be utilised for all treatments that are taken forward to phase III and is applicable when the decision to select or drop treatment arms is driven by a multiplicity-adjusted hypothesis testing procedure. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

9.
We derived results for inference on the parameters of the marginal model of a mixed-effects model with the Box–Cox transformation, based on asymptotic theory. We also provided a robust variance estimator of the maximum likelihood estimator of the parameters of this model that allows for model misspecification. Using these results, we developed an inference procedure for the difference in the model median between treatment groups at a specified occasion, in the context of mixed-effects models for repeated measures analyses of randomized clinical trials, which provides interpretable estimates of the treatment effect. Simulation studies showed that our proposed method controlled the type I error of the statistical test for the model median difference in almost all situations and had moderate to high power compared with existing methods. We illustrate our method with cluster of differentiation 4 (CD4) data from an AIDS clinical trial, where the interpretability of the analysis results based on our proposed method is demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.

10.
In survival analyses, inverse-probability-of-treatment (IPT) and inverse-probability-of-censoring (IPC) weighted estimators of parameters in marginal structural Cox models are often used to estimate treatment effects in the presence of time-dependent confounding and censoring. In most applications, a robust variance estimator of the IPT and IPC weighted estimator is calculated, leading to conservative confidence intervals. This estimator assumes that the weights are known rather than estimated from the data. Although a consistent estimator of the asymptotic variance of the IPT and IPC weighted estimator is generally available, applications, and thus information on the performance of the consistent estimator, are lacking. Reasons might be a cumbersome implementation in statistical software, which is further complicated by missing details on the variance formula. In this paper, we therefore provide a detailed derivation of the variance of the asymptotic distribution of the IPT and IPC weighted estimator and explicitly state the terms needed to calculate a consistent estimator of this variance. We compare the performance of the robust and consistent variance estimators in an application based on routine health care data and in a simulation study. The simulation reveals no substantial differences between the two estimators in medium and large data sets with no unmeasured confounding, but the consistent variance estimator performs poorly in small samples or under unmeasured confounding if the number of confounders is large. We thus conclude that the robust estimator is more appropriate for all practical purposes.

11.
In assessing the mechanism of treatment efficacy in randomized clinical trials, investigators often perform mediation analyses by analyzing whether the significant intent-to-treat treatment effect on outcome occurs through or around a third intermediate or mediating variable: indirect and direct effects, respectively. Standard mediation analyses assume sequential ignorability, i.e., conditional on covariates, the intermediate or mediating factor is randomly assigned, as is the treatment in a randomized clinical trial. This research focuses on the application of the principal stratification (PS) approach for estimating the direct effect of a randomized treatment but without the standard sequential ignorability assumption. This approach is used to estimate the direct effect of treatment as a difference between expectations of potential outcomes within latent subgroups of participants for whom the intermediate variable behavior would be constant, regardless of the randomized treatment assignment. Using a Bayesian estimation procedure, we also assess the sensitivity of results based on the PS approach to heterogeneity of the variances among these principal strata. We assess this approach with simulations and apply it to two psychiatric examples. Both the examples and the simulations indicated robustness of our findings to the homogeneous variance assumption. However, the simulations showed that the magnitude of treatment effects derived under the PS approach was sensitive to model mis-specification. Copyright © 2009 John Wiley & Sons, Ltd.

12.
In cluster randomized trials, the study units usually are not a simple random sample from some clearly defined target population. Instead, the target population tends to be hypothetical or ill-defined, and the selection of study units tends to be systematic, driven by logistical and practical considerations. As a result, the population average treatment effect (PATE) may be neither well defined nor easily interpretable. In contrast, the sample average treatment effect (SATE) is the mean difference in the counterfactual outcomes for the study units. The sample parameter is easily interpretable and arguably the most relevant when the study units are not sampled from some specific super-population of interest. Furthermore, in most settings, the sample parameter will be estimated more efficiently than the population parameter. To the best of our knowledge, this is the first paper to propose using targeted maximum likelihood estimation (TMLE) for estimation and inference of the sample effect in trials with and without pair-matching. We study the asymptotic and finite sample properties of the TMLE for the sample effect and provide a conservative variance estimator. Finite sample simulations illustrate the potential gains in precision and power from selecting the sample effect as the target of inference. This work is motivated by the Sustainable East Africa Research in Community Health (SEARCH) study, a pair-matched, community randomized trial to estimate the effect of population-based HIV testing and streamlined ART on the 5-year cumulative HIV incidence (NCT01864603). The proposed methodology will be used in the primary analysis for the SEARCH trial. Copyright © 2016 John Wiley & Sons, Ltd.

13.
In individually randomised controlled trials, adjustment for baseline characteristics is often undertaken to increase the precision of the treatment effect estimate. This is usually performed using covariate adjustment in outcome regression models. An alternative method of adjustment is to use inverse probability-of-treatment weighting (IPTW), on the basis of estimated propensity scores. We calculate the large-sample marginal variance of IPTW estimators of the mean difference for continuous outcomes, and of the risk difference, risk ratio or odds ratio for binary outcomes. We show that IPTW adjustment always increases the precision of the treatment effect estimate. For continuous outcomes, we demonstrate that the IPTW estimator has the same large-sample marginal variance as the standard analysis of covariance estimator. However, ignoring the estimation of the propensity score in the calculation of the variance leads to the erroneous conclusion that the IPTW treatment effect estimator has the same variance as an unadjusted estimator; thus, it is important to use a variance estimator that correctly takes into account the estimation of the propensity score. The IPTW approach has particular advantages when estimating risk differences or risk ratios. In this case, non-convergence of covariate-adjusted outcome regression models frequently occurs. Such problems can be circumvented by using the IPTW adjustment approach. © 2013 The authors. Statistics in Medicine published by John Wiley & Sons, Ltd.
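As a sketch of the adjustment being analysed, the following computes the IPTW estimate of the marginal mean difference for a continuous outcome, with the propensity score estimated by logistic regression even though treatment is randomized. The paper derives a closed-form large-sample variance; here a bootstrap that re-estimates the propensity score in every replicate stands in for it, since that also reflects the precision gained from estimating the score. Inputs y, a and X are assumed numpy arrays.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def iptw_mean_difference(y, a, X):
    """IPTW (Hajek-type) estimate of the marginal mean difference for a
    continuous outcome y, with the propensity score estimated by logistic
    regression on baseline covariates X even though treatment a is randomized."""
    ps = LogisticRegression().fit(X, a).predict_proba(X)[:, 1]
    w1, w0 = a / ps, (1 - a) / (1 - ps)
    return np.sum(w1 * y) / np.sum(w1) - np.sum(w0 * y) / np.sum(w0)

def bootstrap_se(y, a, X, n_boot=500, seed=0):
    """Resampling standard error that refits the propensity model in every
    replicate, so the precision gained by estimating the score is reflected."""
    rng = np.random.default_rng(seed)
    n = len(y)
    reps = [iptw_mean_difference(y[idx], a[idx], X[idx])
            for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    return float(np.std(reps, ddof=1))
```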

14.
As the costs of medical care increase, more studies are evaluating cost in addition to the effectiveness of treatments. Cost-effectiveness analyses in randomized clinical trials have typically been conducted only at the end of follow-up. However, cost-effectiveness may change over time. We therefore propose a nonparametric estimator to assess the incremental cost-effectiveness ratio over time. We also derive the asymptotic variance of our estimator and present a formulation of Fieller-based simultaneous confidence bands. Simulation studies demonstrate the performance of our point estimators, variance estimators, and confidence bands. We also illustrate our methods using data from a randomized clinical trial, the second Multicenter Automatic Defibrillator Implantation Trial. This trial studied the effects of implantable cardioverter-defibrillators on patients at high risk for cardiac arrhythmia. Results show that our estimator performs well in large samples, indicating promising future directions in the field of cost-effectiveness. Copyright © 2015 John Wiley & Sons, Ltd.
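For intuition, here is a pointwise Fieller-type confidence interval for an incremental cost-effectiveness ratio at a single time point (the paper constructs simultaneous bands over time, which this sketch does not attempt). The numerical inputs in the example are made up.

```python
import numpy as np
from scipy.stats import norm

def fieller_ci(dc, de, var_dc, var_de, cov, alpha=0.05):
    """Fieller confidence interval for the ratio dc/de (e.g. an ICER:
    incremental cost over incremental effectiveness). Solves the quadratic
    (de^2 - z^2*var_de) R^2 - 2(dc*de - z^2*cov) R + (dc^2 - z^2*var_dc) <= 0."""
    z2 = norm.ppf(1 - alpha / 2) ** 2
    a = de ** 2 - z2 * var_de
    b = -2.0 * (dc * de - z2 * cov)
    c = dc ** 2 - z2 * var_dc
    disc = b ** 2 - 4 * a * c
    if a <= 0 or disc < 0:
        return None  # effectiveness difference not clearly nonzero: interval is unbounded
    lo = (-b - np.sqrt(disc)) / (2 * a)
    hi = (-b + np.sqrt(disc)) / (2 * a)
    return lo, hi

# Hypothetical inputs: incremental cost 12,000 (SE 3,000), incremental
# effectiveness 0.40 QALYs (SE 0.10), covariance 50.
print(fieller_ci(12000.0, 0.40, 3000.0 ** 2, 0.10 ** 2, 50.0))
```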

15.
Developments in genome-wide association studies and the increasing availability of summary genetic association data have made application of Mendelian randomization relatively straightforward. However, obtaining reliable results from a Mendelian randomization investigation remains problematic, as the conventional inverse-variance weighted method only gives consistent estimates if all of the genetic variants in the analysis are valid instrumental variables. We present a novel weighted median estimator for combining data on multiple genetic variants into a single causal estimate. This estimator is consistent even when up to 50% of the information comes from invalid instrumental variables. In a simulation analysis, it is shown to have better finite-sample Type 1 error rates than the inverse-variance weighted method, and is complementary to the recently proposed MR-Egger (Mendelian randomization-Egger) regression method. In analyses of the causal effects of low-density lipoprotein cholesterol and high-density lipoprotein cholesterol on coronary artery disease risk, the inverse-variance weighted method suggests a causal effect of both lipid fractions, whereas the weighted median and MR-Egger regression methods suggest a null effect of high-density lipoprotein cholesterol that corresponds with the experimental evidence. Both median-based and MR-Egger regression methods should be considered as sensitivity analyses for Mendelian randomization investigations with multiple genetic variants.
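A compact sketch of the weighted median idea for summary-data Mendelian randomization: per-variant ratio estimates are ordered, and the estimate is the point where the normalized cumulative inverse-variance weight crosses 50%. The input arrays (SNP-exposure betas, SNP-outcome betas and their standard errors) are assumed, and the simple first-order weights shown ignore uncertainty in the SNP-exposure associations.

```python
import numpy as np

def weighted_median(b, w):
    """Weighted median: order the estimates and interpolate to the value at
    which the normalized cumulative weight (offset by half a weight) equals 0.5."""
    order = np.argsort(b)
    b, w = np.asarray(b)[order], np.asarray(w)[order]
    cum = (np.cumsum(w) - 0.5 * w) / np.sum(w)
    return float(np.interp(0.5, cum, b))

def weighted_median_mr(beta_exp, beta_out, se_out):
    """Per-variant ratio estimates beta_out / beta_exp, weighted by their
    inverse delta-method variance (se_out^2 / beta_exp^2)."""
    beta_exp, beta_out, se_out = map(np.asarray, (beta_exp, beta_out, se_out))
    ratio = beta_out / beta_exp
    w = (beta_exp / se_out) ** 2
    return weighted_median(ratio, w)
```

A standard error would typically be obtained by parametric bootstrap of the summary statistics, which this sketch omits.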

16.
Objective: Overall survival is a commonly reported end point in clinical trial publications and a key determinant of therapies' cost-effectiveness. Patients' survival times have skewed distributions. Outcomes are typically presented in clinical trials as the difference in median survival times; we compare the median survival gain with the measure required for economic evaluation, the mean difference. Study Design: We summarize the relationships between median and mean survival in 4 parametric survival distributions, and the relationship of the differences in these measures between trial arms to the parameterized treatment effects. Parametric estimates of mean survival were compared with median survival in a case study of a recent trial in metastatic melanoma. Results: In a trial of alternative therapies in unresectable metastatic melanoma, median overall survival with ipilimumab alone was 10.1 months versus 6.4 months with gp100 alone (hazard ratio 0.66; P = 0.003). A log-normal parametric survivor function fitted the gp100 Kaplan-Meier function, and a time ratio of 1.90 applied only after 90 days gave a suitable fit to the Kaplan-Meier function for ipilimumab, with a mean survival difference of 7 months, compared with an estimate of 5.7 months employing a Weibull distribution, and with a 3.7-month median difference. Conclusion: Parametric assessment of mean survival gain in clinical trials may indicate potential benefits to patients that observed medians may greatly underestimate.
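The median-versus-mean contrast follows directly from the parametric formulas; the short sketch below evaluates both for Weibull and log-normal models. The log-normal sigma in the example is an assumed value, and, unlike the case study, the time ratio is applied from time zero, so the numbers are purely illustrative.

```python
import numpy as np
from scipy.special import gamma

def weibull_median_mean(scale, shape):
    """Weibull(scale, shape): median = scale * ln(2)^(1/shape),
    mean = scale * Gamma(1 + 1/shape)."""
    return scale * np.log(2) ** (1 / shape), scale * gamma(1 + 1 / shape)

def lognormal_median_mean(mu, sigma):
    """Log-normal(mu, sigma): median = exp(mu), mean = exp(mu + sigma^2 / 2)."""
    return np.exp(mu), np.exp(mu + sigma ** 2 / 2)

# Hypothetical illustration: an accelerated failure time factor (time ratio)
# of 1.9 multiplies both the median and the mean of a log-normal model, but in
# absolute months the mean gain exceeds the median gain because of skewness.
sigma = 1.0  # assumed, not taken from the trial
m0, mu0 = lognormal_median_mean(np.log(6.4), sigma)
m1, mu1 = lognormal_median_mean(np.log(6.4) + np.log(1.9), sigma)
print(f"median gain {m1 - m0:.1f} months, mean gain {mu1 - mu0:.1f} months")
```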

17.
An adaptive treatment strategy (ATS) is defined as a sequence of treatments and intermediate responses. ATSs arise when chronic diseases such as cancer and depression are treated over time with various treatment alternatives depending on intermediate responses to earlier treatments. Clinical trials are often designed to compare ATSs on the basis of appropriate designs such as sequential randomization designs. Although the recent literature provides statistical methods for analyzing data from such trials, very few articles have focused on statistical power and sample size issues. This paper presents a sample size formula for comparing the survival probabilities under two treatment strategies that share the same initial treatment but differ in maintenance treatment. The formula is based on the large-sample properties of the inverse-probability-weighted estimator. A simulation study shows strong evidence that the proposed sample size formula guarantees the desired power, regardless of the true distributions of survival times. Copyright © 2009 John Wiley & Sons, Ltd.

18.
There is growing interest and investment in precision medicine as a means to provide the best possible health care. A treatment regime formalizes precision medicine as a sequence of decision rules, one per clinical intervention period, that specify if, when and how current treatment should be adjusted in response to a patient's evolving health status. It is standard to define a regime as optimal if, when applied to a population of interest, it maximizes the mean of some desirable clinical outcome, such as efficacy. However, in many clinical settings, a high-quality treatment regime must balance multiple competing outcomes; e.g., when a high dose is associated with substantial symptom reduction but a greater risk of an adverse event. We consider the problem of estimating the most efficacious treatment regime subject to constraints on the risk of adverse events. We combine nonparametric Q-learning with policy search to estimate a high-quality yet parsimonious treatment regime. This estimator applies to both observational and randomized data, as well as to settings with variable, outcome-dependent follow-up, mixed treatment types, and multiple time points. This work is motivated by and framed in the context of dosing for chronic pain; however, the proposed framework can be applied generally to estimate a treatment regime which maximizes the mean of one primary outcome subject to constraints on one or more secondary outcomes. We illustrate the proposed method using data pooled from 5 open-label flexible dosing clinical trials for chronic pain.

19.
We consider a latent variable hazard model for clustered survival data where clusters are a random sample from an underlying population. We allow interactions between the random cluster effect and covariates. We use a maximum pseudo-likelihood estimator to estimate the mean hazard ratio parameters. We propose a bootstrap sampling scheme to obtain an estimate of the variance of the proposed estimator. Application of this method in large multi-centre clinical trials allows one to assess the mean treatment effect, where we consider participating centres as a random sample from an underlying population. We evaluate properties of the proposed estimators via extensive simulation studies. A real data example from the Studies of Left Ventricular Dysfunction (SOLVD) Prevention Trial illustrates the method. © 1997 by John Wiley & Sons, Ltd.
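A generic version of the resampling idea, drawing whole centres with replacement, is sketched below. The estimator argument stands in for refitting the latent-variable hazard model by maximum pseudo-likelihood, and the array-based data handling is an assumption of the sketch rather than the paper's scheme.

```python
import numpy as np

def cluster_bootstrap_se(data, cluster_ids, estimator, n_boot=500, seed=0):
    """Nonparametric bootstrap over clusters (e.g. centres): resample whole
    clusters with replacement and re-apply the estimator, mimicking the
    sampling of centres from an underlying population."""
    rng = np.random.default_rng(seed)
    cluster_ids = np.asarray(cluster_ids)
    clusters = np.unique(cluster_ids)
    estimates = []
    for _ in range(n_boot):
        chosen = rng.choice(clusters, size=len(clusters), replace=True)
        idx = np.concatenate([np.flatnonzero(cluster_ids == c) for c in chosen])
        estimates.append(estimator(data[idx]))  # refit the model on the resampled data
    return float(np.std(estimates, ddof=1))
```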

20.
Clinical trials that randomize subjects to decision algorithms, which adapt treatments over time according to individual response, have gained considerable interest as investigators seek designs that directly inform clinical decision making. We consider designs in which subjects are randomized sequentially at decision points, among adaptive treatment options under evaluation. We present a sequential method to estimate the comparative effects of the randomized adaptive treatments, which are formalized as adaptive treatment strategies. Our causal estimators are derived using Bayesian predictive inference. We use analytical and empirical calculations to compare the predictive estimators to (i) the 'standard' approach that allocates the sequentially obtained data to separate strategy-specific groups as would arise from randomizing subjects at baseline; (ii) the semi-parametric approach of marginal mean models that, under appropriate experimental conditions, provides the same sequential estimator of causal differences as the proposed approach. Simulation studies demonstrate that sequential causal inference offers substantial efficiency gains over the standard approach to comparing treatments, because the predictive estimators can take advantage of the monotone structure of shared data among adaptive strategies. We further demonstrate that the semi-parametric asymptotic variances, which are marginal 'one-step' estimators, may exhibit significant bias, in contrast to the predictive variances. We show that the conditions under which the sequential method is attractive relative to the other two approaches are those most likely to occur in real studies.
