Similar Articles
20 similar articles found (search time: 0 ms)
1.
The three-arm clinical trial design, which includes a test treatment, an active reference, and a placebo control, is the gold standard for the assessment of non-inferiority. In the presence of non-compliance, one common concern is that an intent-to-treat (ITT) analysis (the standard approach in non-inferiority trials) tends to increase the chance of erroneously concluding non-inferiority, suggesting that the per-protocol (PP) analysis may be preferable for non-inferiority trials despite its inherent bias. The objective of this paper was to develop statistical methodology for dealing with non-compliance in three-arm non-inferiority trials with censored time-to-event data. Changes in treatment were considered the only form of non-compliance. An approach using a three-arm rank-preserving structural failure time model and G-estimation analysis is presented. Using simulations, the impact of non-compliance on non-inferiority trials was investigated in detail under ITT analysis, PP analysis, and the proposed method. Results indicate that the proposed method has good operating characteristics, and that neither the ITT nor the PP analysis can always guarantee the validity of the non-inferiority conclusion. A SAS program implementing the proposed test procedure is available from the authors upon request. Copyright © 2014 John Wiley & Sons, Ltd.
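The G-estimation idea behind rank-preserving structural failure time models can be sketched in a few lines. The Python fragment below is a simplified illustration only: one treatment arm, full compliance while on treatment, and no censoring, so a rank-sum statistic stands in for a censoring-aware log-rank test. The function name and the simulated data are ours, not the authors' SAS implementation.

```python
import numpy as np
from scipy.stats import ranksums

def g_estimate_psi(t_on, t_off, arm, grid=np.linspace(-2, 2, 401)):
    """Grid-search G-estimate of the RPSFT parameter psi.

    t_on  : time spent on the test treatment
    t_off : time spent off the test treatment
    arm   : randomized arm indicator (1 = test, 0 = control)

    For each candidate psi, the counterfactual treatment-free time is
    U(psi) = t_off + exp(psi) * t_on; the G-estimate is the psi at which
    U(psi) is most nearly independent of the randomized arm (judged here
    by a rank-sum statistic; censoring is ignored for brevity).
    """
    stats = []
    for psi in grid:
        u = t_off + np.exp(psi) * t_on
        z, _ = ranksums(u[arm == 1], u[arm == 0])
        stats.append(abs(z))
    return grid[int(np.argmin(stats))]

# Illustrative data: treatment multiplies event time by exp(0.5)
rng = np.random.default_rng(1)
n = 400
arm = rng.integers(0, 2, n)
psi_true = -0.5
u_latent = rng.exponential(10.0, n)                 # latent treatment-free time
t_obs = np.where(arm == 1, np.exp(-psi_true) * u_latent, u_latent)
t_on = np.where(arm == 1, t_obs, 0.0)
t_off = np.where(arm == 1, 0.0, t_obs)
print(g_estimate_psi(t_on, t_off, arm))             # expect a value near -0.5
```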

2.
Lin et al. (http://www.biostatsresearch.com/upennbiostat/papers/, 2006) proposed a nested Markov compliance class model in the Imbens and Rubin compliance class model framework to account for time-varying subject noncompliance in longitudinal randomized intervention studies. We use superclasses, or latent compliance class principal strata, to describe longitudinal compliance patterns, and time-varying compliance classes are assumed to depend on the history of compliance. In this paper, we search for good subject-level baseline predictors of these superclasses and also examine the relationship between these superclasses and all-cause mortality. Since the superclasses are completely latent in all subjects, we utilize multiple imputation techniques to draw inferences. We apply this approach to a randomized intervention study for elderly primary care patients with depression.

3.
This article discusses joint modeling of compliance and outcome for longitudinal studies when noncompliance is present. We focus on two-arm randomized longitudinal studies in which subjects are randomized at baseline, treatment is applied repeatedly over time, and compliance behaviors and clinical outcomes are measured and recorded repeatedly over time. In the proposed Markov compliance and outcome model, we use the potential outcome framework to define pre-randomization principal strata from the joint distribution of compliance under treatment and control arms, and estimate the effect of treatment within each principal stratum. In addition to the causal effect of the treatment, the proposed model can estimate the impact of the treatment's causal effect at a given time on future compliance. Bayesian methods are used to estimate the parameters. The results are illustrated using a study assessing the effect of cognitive behavior therapy on depression. A simulation study is used to assess the repeated sampling properties of the proposed model. Copyright © 2013 John Wiley & Sons, Ltd.

4.
We propose a principal stratification approach to assess causal effects in nonrandomized longitudinal comparative effectiveness studies with a binary endpoint outcome and repeated measures of a continuous intermediate variable. Our method is an extension of the principal stratification approach originally proposed for the longitudinal randomized study "Prevention of Suicide in Primary Care Elderly: Collaborative Trial" to assess the treatment effect on the continuous Hamilton depression score adjusting for the heterogeneity of repeatedly measured binary compliance status. Our motivation for this work comes from a comparison of the effect of two glucose-lowering medications on a clinical cohort of patients with type 2 diabetes. Here, we consider a causal inference problem assessing how well the two medications work relative to one another on two binary endpoint outcomes: cardiovascular disease-related hospitalization and all-cause mortality. Clinically, these glucose-lowering medications can have differential effects on the intermediate outcome, glucose level over time. Ultimately, we want to compare medication effects on the endpoint outcomes among individuals in the same glucose trajectory stratum while accounting for the heterogeneity in baseline covariates (i.e., to obtain 'principal effects' on the endpoint outcomes). The proposed method involves a three-step model estimation procedure. Step 1 identifies principal strata associated with the intermediate variable using hybrid growth mixture modeling analyses. Step 2 obtains the stratum membership using the pseudoclass technique and derives propensity scores for treatment assignment. Step 3 obtains the stratum-specific treatment effect on the endpoint outcome weighted by inverse propensity probabilities derived from Step 2. Copyright © 2014 John Wiley & Sons, Ltd.

5.
In early-phase clinical trials, interim monitoring is commonly conducted based on the estimated intent-to-treat effect, which is subject to bias in the presence of noncompliance. To address this issue, we propose a Bayesian sequential monitoring trial design based on the estimation of the causal effect using a principal stratification approach. The proposed design simultaneously considers efficacy and toxicity outcomes and utilizes covariates to predict a patient's potential compliance behavior and identify the causal effects. Based on accumulating data, we continuously update the posterior estimates of the causal treatment effects and adaptively make the go/no-go decision for the trial. Numerical results show that the proposed method has desirable operating characteristics and addresses the issue of noncompliance. Copyright © 2015 John Wiley & Sons, Ltd.
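The go/no-go logic can be illustrated with a minimal Beta-Binomial sketch. This is not the authors' principal-stratification estimator; it only shows sequential posterior monitoring of efficacy and toxicity, and all names, priors, and cutoffs below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def go_no_go(eff_events, eff_n, tox_events, tox_n,
             eff_target=0.30, tox_limit=0.25,
             eff_cut=0.10, tox_cut=0.90, draws=10_000):
    """Bayesian interim decision with Beta(1, 1) priors.

    Stop for toxicity if P(p_tox > tox_limit | data) > tox_cut;
    stop for futility if P(p_eff > eff_target | data) < eff_cut;
    otherwise continue ("go").
    """
    p_eff = rng.beta(1 + eff_events, 1 + eff_n - eff_events, draws)
    p_tox = rng.beta(1 + tox_events, 1 + tox_n - tox_events, draws)
    if np.mean(p_tox > tox_limit) > tox_cut:
        return "no-go (toxicity)"
    if np.mean(p_eff > eff_target) < eff_cut:
        return "no-go (futility)"
    return "go"

# Interim look after 20 patients: 9 responses, 2 toxicities
print(go_no_go(eff_events=9, eff_n=20, tox_events=2, tox_n=20))
```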

6.
This paper assesses the causal impact of late-term (8th month) maternal smoking on birthweight using data from a randomized clinical trial, in which some women were encouraged not to smoke, while others were not. The estimation of treatment effects in this case is made difficult by the presence of non-compliers: women who would not change their smoking status regardless of the receipt of encouragement. Because these women are not at risk of changing treatment status, treatment effect distributions may be difficult to construct for them. Consequently, the paper focuses on obtaining the distribution of treatment impacts for the subset of compliers found in the data. Because compliance status is not observed for all subjects in the sample, a Bayesian finite mixture model is estimated that recovers the treatment effect parameters of interest. The complier average treatment effect implies that smokers give birth to infants weighing 348 g less than those of non-smokers, on average, although the 95% posterior density interval contains zero. The treatment effect is stronger for women who were moderate smokers prior to pregnancy, implying a birthweight difference of 430 g. However, the model predicts that only about 22% of the women in the sample were at risk of changing their smoking behaviour in response to encouragement to quit.
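Under monotonicity and the exclusion restriction, the complier average treatment effect reduces to the familiar moment-based (Wald/IV) ratio of intent-to-treat effects. A minimal sketch with simulated data mimicking the abstract's figures (about 22% compliers, a 348 g effect); the function and data are illustrative, not the paper's Bayesian finite mixture estimator.

```python
import numpy as np

def cace_wald(z, d, y):
    """Moment-based complier average treatment effect (Wald/IV estimator).

    z : randomized encouragement (1 = encouraged to quit)
    d : treatment actually taken (1 = quit smoking)
    y : outcome (e.g. birthweight in grams)

    CACE = ITT effect on y / ITT effect on uptake, valid under
    monotonicity and the exclusion restriction.
    """
    itt_y = y[z == 1].mean() - y[z == 0].mean()
    itt_d = d[z == 1].mean() - d[z == 0].mean()   # share of compliers
    return itt_y / itt_d

rng = np.random.default_rng(3)
n = 2000
z = rng.integers(0, 2, n)
complier = rng.random(n) < 0.22              # ~22% at risk of changing behaviour
d = np.where(complier, z, 0)                 # compliers quit only if encouraged
y = 3200 + 348 * d + rng.normal(0, 450, n)   # quitting adds ~348 g
print(cace_wald(z, d, y))                    # expect a value near 348
```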

7.
In the presence of non-compliance, conventional analysis by intention-to-treat provides an unbiased comparison of treatment policies but typically under-estimates treatment efficacy. With all-or-nothing compliance, efficacy may be specified as the complier-average causal effect (CACE), where compliers are those who receive intervention if and only if randomised to it. We extend the CACE approach to model longitudinal data with time-dependent non-compliance, focusing on the situation in which those randomised to control may receive treatment and allowing treatment effects to vary arbitrarily over time. Defining compliance type to be the time of surgical intervention if randomised to control, so that compliers are patients who would not have received treatment at all if they had been randomised to control, we construct a causal model for the multivariate outcome conditional on compliance type and randomised arm. This model is applied to the trial of alternative regimens for glue ear treatment evaluating surgical interventions in childhood ear disease, where outcomes are measured over five time points, and receipt of surgical intervention in the control arm may occur at any time. We fit the models using Markov chain Monte Carlo methods to obtain estimates of the CACE at successive times after receiving the intervention. In this trial, over half of those randomised to control eventually receive intervention. We find that surgery is more beneficial than control at 6 months, with a small but non-significant beneficial effect at 12 months. © 2015 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

8.
A critical issue in the analysis of clinical trials is patients' noncompliance to assigned treatments. In the context of a binary treatment with all-or-nothing compliance, the intent-to-treat analysis is a straightforward approach to estimating the effectiveness of the trial. In contrast, there exist three commonly used estimators with varying statistical properties for the efficacy of the trial, formally known as the complier-average causal effect. The instrumental variable estimator may be unbiased but can be extremely variable in many settings. The as-treated and per-protocol estimators are usually more efficient than the instrumental variable estimator, but they may suffer from selection bias. We propose a synthetic approach that incorporates all three estimators in a data-driven manner. The synthetic estimator is a linear convex combination of the instrumental variable, per-protocol, and as-treated estimators, resembling the popular model-averaging approach in the statistical literature. However, our synthetic approach is nonparametric; thus, it is applicable to a variety of outcome types without specific distributional assumptions. We also discuss the construction of the synthetic estimator using an analytic form derived from a simple normal mixture distribution. We apply the synthetic approach to a clinical trial for post-traumatic stress disorder.
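A minimal sketch of the three estimators and one possible convex combination. The inverse-bootstrap-variance weighting below is an illustrative stand-in for the authors' data-driven rule, and the simulated data are ours.

```python
import numpy as np

def iv_pp_at(z, d, y):
    """Instrumental-variable, per-protocol, and as-treated estimates
    of a treatment effect, given randomization z, receipt d, outcome y."""
    iv = (y[z == 1].mean() - y[z == 0].mean()) / \
         (d[z == 1].mean() - d[z == 0].mean())
    pp = y[(z == 1) & (d == 1)].mean() - y[(z == 0) & (d == 0)].mean()
    at = y[d == 1].mean() - y[d == 0].mean()
    return np.array([iv, pp, at])

def synthetic(z, d, y, n_boot=500, rng=None):
    """Convex combination of the three estimators, with weights inversely
    proportional to their bootstrap variances (an illustrative rule)."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(y)
    idx = rng.integers(0, n, (n_boot, n))
    boots = np.array([iv_pp_at(z[i], d[i], y[i]) for i in idx])
    w = 1.0 / boots.var(axis=0)
    w /= w.sum()                                # weights sum to one
    return w @ iv_pp_at(z, d, y), w

rng = np.random.default_rng(5)
n = 1000
z = rng.integers(0, 2, n)
d = np.where(rng.random(n) < 0.8, z, 1 - z)     # ~80% compliance
y = 1.0 * d + rng.normal(0, 2, n)               # true effect = 1
est, w = synthetic(z, d, y, rng=rng)
print(est, w)
```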

9.
10.
Multilevel mixed effects survival models are used in the analysis of clustered survival data, such as repeated events, multicenter clinical trials, and individual participant data (IPD) meta-analyses, to investigate heterogeneity in baseline risk and covariate effects. In this paper, we extend parametric frailty models including the exponential, Weibull and Gompertz proportional hazards (PH) models and the log-logistic, log-normal, and generalized gamma accelerated failure time models to allow any number of normally distributed random effects. Furthermore, we extend the flexible parametric survival model of Royston and Parmar, modeled on the log-cumulative hazard scale using restricted cubic splines, to include random effects while also allowing for non-PH (time-dependent effects). Maximum likelihood is used to estimate the models utilizing adaptive or nonadaptive Gauss–Hermite quadrature. The methods are evaluated through simulation studies representing clinically plausible scenarios of a multicenter trial and IPD meta-analysis, showing good performance of the estimation method. The flexible parametric mixed effects model is illustrated using a dataset of patients with kidney disease and repeated times to infection and an IPD meta-analysis of prognostic factor studies in patients with breast cancer. User-friendly Stata software is provided to implement the methods. Copyright © 2014 John Wiley & Sons, Ltd.
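The nonadaptive Gauss–Hermite marginalization can be sketched for the simplest member of this family: an exponential PH model with one normally distributed random intercept per cluster and no censoring. The code below is an illustration under those assumptions, not the authors' Stata software.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.optimize import minimize

def marginal_loglik(params, t, d, x, cluster, n_quad=15):
    """Nonadaptive Gauss-Hermite marginal log-likelihood for an
    exponential PH model with a normal random intercept per cluster:
    hazard_ij = exp(b0 + b1 * x_ij + u_j), u_j ~ N(0, s^2)."""
    b0, b1, log_s = params
    s = np.exp(log_s)
    nodes, weights = hermgauss(n_quad)
    u = np.sqrt(2.0) * s * nodes        # change of variables for N(0, s^2)
    ll = 0.0
    for j in np.unique(cluster):
        idx = cluster == j
        lp = b0 + b1 * x[idx]
        # per-cluster log-likelihood evaluated at each quadrature point
        contrib = (d[idx, None] * (lp[:, None] + u)
                   - t[idx, None] * np.exp(lp[:, None] + u)).sum(axis=0)
        ll += np.log(np.dot(weights, np.exp(contrib)) / np.sqrt(np.pi))
    return ll

# Simulated multicentre data: 30 centres of 20 patients, no censoring
rng = np.random.default_rng(2)
m, per = 30, 20
cluster = np.repeat(np.arange(m), per)
x = rng.integers(0, 2, m * per)                       # treatment indicator
u_true = rng.normal(0, 0.5, m)[cluster]
t = rng.exponential(1.0 / np.exp(-1.0 + 0.7 * x + u_true))
d = np.ones_like(t, dtype=int)                        # all events observed
fit = minimize(lambda p: -marginal_loglik(p, t, d, x, cluster),
               x0=np.zeros(3), method="Nelder-Mead")
print(fit.x)   # roughly (-1.0, 0.7, log 0.5)
```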

11.
In cluster-randomized trials, intervention effects are often formulated by specifying marginal models, fitting them under a working independence assumption, and using robust variance estimates to address the association in the responses within clusters. We develop sample size criteria within this framework, with analyses based on semiparametric Cox regression models fitted with event times subject to right censoring. At the design stage, copula models are specified to enable derivation of the asymptotic variance of estimators from a marginal Cox regression model and to compute the number of clusters necessary to satisfy power requirements. Simulation studies demonstrate the validity of the sample size formula in finite samples for a range of cluster sizes, censoring rates, and degrees of within-cluster association among event times. The power and relative efficiency implications of copula misspecification are studied, as well as the effect of within-cluster dependence in the censoring times. Sample size criteria and other design issues are also addressed for the setting where the event status is only ascertained at periodic assessments and times are interval censored. Copyright © 2014 John Wiley & Sons, Ltd.

12.
Individual randomized trials (IRTs) and cluster randomized trials (CRTs) with binary outcomes arise in a variety of settings and are often analyzed by logistic regression (fitted using generalized estimating equations for CRTs). The effect of stratification on the required sample size is less well understood for trials with binary outcomes than for continuous outcomes. We propose easy-to-use methods for sample size estimation for stratified IRTs and CRTs and demonstrate the use of these methods for a tuberculosis prevention CRT currently being planned. For both IRTs and CRTs, we also identify the ratio of the sample size for a stratified trial vs a comparably powered unstratified trial, allowing investigators to evaluate how stratification will affect the required sample size when planning a trial. For CRTs, these can be used when the investigator has estimates of the within-stratum intracluster correlation coefficients (ICCs) or by assuming a common within-stratum ICC. Using these methods, we describe scenarios where stratification may have a practically important impact on the required sample size. We find that in the two-stratum case, for both IRTs and for CRTs with very small cluster sizes, there are unlikely to be plausible scenarios in which an important sample size reduction is achieved when the overall probability of a subject experiencing the event of interest is low. When the probability of events is not small, or when cluster sizes are large, however, there are scenarios where practically important reductions in sample size result from stratification.
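For orientation, the unstratified building blocks are the standard normal-approximation two-proportion sample size and the usual cluster design effect 1 + (m - 1)·ICC; a stratified size can then be compared by computing stratum-specific sizes with stratum-specific ICCs and summing. A sketch under those assumptions — these are the textbook formulas, not necessarily the authors' exact expressions.

```python
import numpy as np
from scipy.stats import norm

def n_two_proportions(p1, p2, alpha=0.05, power=0.9):
    """Per-arm sample size for comparing two proportions
    (normal approximation, unpooled variance)."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return (za + zb) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2

def n_crt(p1, p2, m, icc, **kw):
    """Inflate the individually randomized size by the usual
    design effect 1 + (m - 1) * ICC for clusters of size m."""
    return n_two_proportions(p1, p2, **kw) * (1 + (m - 1) * icc)

# Illustrative low-event-probability scenario
print(n_two_proportions(0.10, 0.05))          # IRT, per arm
print(n_crt(0.10, 0.05, m=50, icc=0.01))      # CRT, per arm
```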

13.
Small sample, sequential, multiple assignment, randomized trials (snSMARTs) are multistage trials with the overall goal of determining the best treatment after a fixed amount of time. In snSMART trials, patients are first randomized to one of three treatments and a binary (e.g. response/nonresponse) outcome is measured at the end of the first stage. Responders to first stage treatment continue their treatment. Nonresponders to first stage treatment are rerandomized to one of the remaining treatments. The same binary outcome is measured at the end of the first and second stages, and data from both stages are pooled together to find the best first stage treatment. However, in many settings the primary endpoint may be continuous, and dichotomizing this continuous variable may reduce statistical efficiency. In this article, we extend the snSMART design and methods to allow for continuous outcomes. Instead of requiring a binary outcome at the first stage for rerandomization, the probability of staying on the same treatment or switching treatment is a function of the first stage outcome. Rerandomization based on a mapping function of a continuous outcome allows for snSMART designs without requiring a binary outcome. We perform simulation studies to compare the proposed design with continuous outcomes to standard snSMART designs with binary outcomes. The proposed design results in more efficient treatment effect estimates and similar outcomes for trial patients.
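One concrete choice of mapping function is a logistic curve in the first-stage outcome, so that better outcomes make staying on the same treatment more likely. The sketch below is one illustrative mapping (our midpoint and slope), not the specific function studied in the article.

```python
import numpy as np

rng = np.random.default_rng(11)

def stay_probability(y1, midpoint=0.0, slope=1.5):
    """Probability of staying on the first-stage treatment as a smooth
    (logistic) function of the continuous first-stage outcome y1:
    better outcomes -> more likely to keep the same treatment."""
    return 1.0 / (1.0 + np.exp(-slope * (y1 - midpoint)))

n = 12
y1 = rng.normal(0.0, 1.0, n)                 # first-stage outcomes
first = rng.integers(0, 3, n)                # treatments A=0, B=1, C=2
stay = rng.random(n) < stay_probability(y1)
# switchers are rerandomized to one of the two remaining treatments
second = np.where(stay, first,
                  [(f + rng.integers(1, 3)) % 3 for f in first])
print(np.column_stack([first, stay.astype(int), second]))
```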

14.
In multicentre trials, randomisation is often carried out using permuted blocks stratified by centre. It has previously been shown that stratification variables used in the randomisation process should be adjusted for in the analysis to obtain correct inference. For continuous outcomes, the two primary methods of accounting for centres are fixed-effects and random-effects models. We discuss the differences in interpretation between these two models and the implications that each poses for analysis. We then perform a large simulation study comparing the performance of these analysis methods in a variety of situations. In total, we assessed 378 scenarios. We found that random centre effects performed as well as or better than fixed-effects models in all scenarios. Random centre effects models led to increases in power and precision when the number of patients per centre was small (e.g. 10 patients or fewer) and, in some scenarios, when there was an imbalance between treatments within centres, either due to the randomisation method or to the distribution of patients across centres. With small sample sizes, random-effects models maintained nominal coverage rates when a degree-of-freedom (DF) correction was used. We assessed the robustness of random-effects models when assumptions regarding the distribution of the centre effects were incorrect and found this had no impact on results. We conclude that random-effects models offer many advantages over fixed-effects models in certain situations and should be used more often in practice. Copyright © 2012 John Wiley & Sons, Ltd.
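The two competing analyses are easy to set side by side in statsmodels. A minimal sketch on simulated multicentre data; the variable names and effect sizes are illustrative, and the degree-of-freedom correction discussed in the paper is not applied.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
m, per = 40, 10                                   # many small centres
centre = np.repeat(np.arange(m), per)
treat = rng.integers(0, 2, m * per)               # randomized within centre
y = (0.5 * treat                                  # true treatment effect
     + rng.normal(0, 1, m)[centre]                # centre effects
     + rng.normal(0, 1, m * per))                 # residual error
df = pd.DataFrame({"y": y, "treat": treat, "centre": centre})

# Fixed centre effects: one indicator per centre
fixed = smf.ols("y ~ treat + C(centre)", df).fit()
# Random centre effects: normally distributed random intercepts
mixed = smf.mixedlm("y ~ treat", df, groups=df["centre"]).fit()
print(fixed.params["treat"], mixed.params["treat"])
```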

15.
Despite randomization, selection bias may occur in cluster randomized trials. Classical multivariable regression usually allows for adjusting treatment effect estimates with unbalanced covariates. However, for binary outcomes with low incidence, such a method may fail because of separation problems. This simulation study focused on the performance of propensity score (PS)-based methods to estimate relative risks from cluster randomized trials with binary outcomes with low incidence. The results suggested that among the different approaches used (multivariable regression, direct adjustment on PS, inverse weighting on PS, and stratification on PS), only direct adjustment on the PS fully corrected the bias and moreover had the best statistical properties. Copyright © 2014 John Wiley & Sons, Ltd.
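Direct adjustment on the PS means entering the estimated score as a covariate in the outcome model; with a rare binary outcome, a modified Poisson model with robust standard errors is one common way to read off an adjusted relative risk. The sketch below ignores the clustering for brevity and uses simulated data, so it illustrates only the adjustment step, not the paper's full simulation setting.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 4000
x = rng.normal(size=n)                       # unbalanced covariate (illustrative)
arm = rng.integers(0, 2, n)
p = 0.02 * np.exp(0.5 * arm + 0.4 * x)       # rare outcome, true RR = exp(0.5)
y = (rng.random(n) < np.clip(p, 0, 1)).astype(int)
df = pd.DataFrame({"y": y, "arm": arm, "x": x})

# Step 1: propensity score for arm given the covariate
df["ps"] = smf.logit("arm ~ x", df).fit(disp=0).predict()

# Step 2: direct adjustment -- enter the PS as a covariate in a
# modified Poisson (log-link) model with robust standard errors
fit = smf.glm("y ~ arm + ps", df,
              family=sm.families.Poisson()).fit(cov_type="HC0")
print(np.exp(fit.params["arm"]))             # adjusted RR, near exp(0.5)
```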

16.
When a generic drug is developed, it is important to assess the equivalence of therapeutic efficacy between the new and the standard drugs. Although the number of publications on testing equivalence and its relevant sample size determination is numerous, the discussion on sample size determination for a desired power of detecting equivalence under a randomized clinical trial (RCT) with non-compliance and missing outcomes is limited. In this paper, we derive under the compound exclusion restriction model the maximum likelihood estimator (MLE) for the ratio of probabilities of response among compliers between two treatments in an RCT with both non-compliance and missing outcomes. Using the MLE with the logarithmic transformation, we develop an asymptotic test procedure for assessing equivalence and find that this test procedure can perform well with respect to type I error based on Monte Carlo simulation. We further develop a sample size calculation formula for a desired power of detecting equivalence at a nominal alpha-level. To evaluate the accuracy of the sample size calculation formula, we apply Monte Carlo simulation again to calculate the simulated power of the proposed test procedure corresponding to the resulting sample size for a desired power of 80 per cent at the 0.05 level in a variety of situations. We also include a discussion on determining the optimal ratio of sample size allocation subject to a desired power to minimize a linear cost function and provide a sensitivity analysis of the sample size formula developed here under an alternative model with missing at random.
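Once a point estimate and standard error for the log response-probability ratio are available, the final testing step is a two-one-sided-tests comparison against equivalence margins. The sketch below is deliberately simplified to complete data with full compliance and illustrative margins; it does not reproduce the compound-exclusion-restriction MLE.

```python
import numpy as np
from scipy.stats import norm

def equivalence_test_ratio(x1, n1, x2, n2,
                           lower=0.8, upper=1.25, alpha=0.05):
    """Two one-sided tests for equivalence of response probabilities on
    the log-ratio scale (simplified: complete data, full compliance).

    Declares equivalence if the 100(1 - 2*alpha)% CI for p1/p2 lies
    entirely within (lower, upper)."""
    p1, p2 = x1 / n1, x2 / n2
    log_ratio = np.log(p1 / p2)
    # delta-method SE of log(p1/p2)
    se = np.sqrt((1 - p1) / (n1 * p1) + (1 - p2) / (n2 * p2))
    z = norm.ppf(1 - alpha)
    lo, hi = np.exp(log_ratio - z * se), np.exp(log_ratio + z * se)
    return (lo > lower) and (hi < upper), (lo, hi)

# 150/200 responders on the new drug vs 145/200 on the standard drug
print(equivalence_test_ratio(150, 200, 145, 200))
```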

17.
In cancer clinical trials, patients often experience a recurrence of disease prior to the outcome of interest, overall survival. Additionally, for many cancers, there is a cured fraction of the population who will never experience a recurrence. There is often interest in how different covariates affect the probability of being cured of disease and the time to recurrence, time to death, and time to death after recurrence. We propose a multi-state Markov model with an incorporated cured fraction to jointly model recurrence and death in colon cancer. A Bayesian estimation strategy is used to obtain parameter estimates. The model can be used to assess how individual covariates affect the probability of being cured and each of the transition rates. Checks for the adequacy of the model fit and for the functional forms of covariates are explored. The methods are applied to data from 12 randomized trials in colon cancer, where we show common effects of specific covariates across the trials. Copyright © 2013 John Wiley & Sons, Ltd.

18.
Group-randomized trials are randomized studies that allocate intact groups of individuals to different comparison arms. A frequent practical limitation to adopting such research designs is that only a limited number of groups may be available, and therefore, simple randomization is unable to adequately balance multiple group-level covariates between arms. Therefore, covariate-based constrained randomization was proposed as an allocation technique to achieve balance. Constrained randomization involves generating a large number of possible allocation schemes, calculating a balance score that assesses covariate imbalance, limiting the randomization space to a prespecified percentage of candidate allocations, and randomly selecting one scheme to implement. When the outcome is binary, a number of statistical issues arise regarding the potential advantages of such designs in making inference. In particular, properties found for continuous outcomes may not directly apply, and additional variations on statistical tests are available. Motivated by two recent trials, we conduct a series of Monte Carlo simulations to evaluate the statistical properties of model-based and randomization-based tests under both simple and constrained randomization designs, with varying degrees of analysis-based covariate adjustment. Our results indicate that constrained randomization improves the power of the linearization F-test, the KC-corrected GEE t-test (Kauermann and Carroll, 2001, Journal of the American Statistical Association 96, 1387-1396), and two permutation tests when the prognostic group-level variables are controlled for in the analysis and the size of randomization space is reasonably small. We also demonstrate that constrained randomization reduces power loss from redundant analysis-based adjustment for non-prognostic covariates. Design considerations such as the choice of the balance metric and the size of randomization space are discussed.
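The allocation algorithm described here (enumerate candidate schemes, score covariate balance, constrain the randomization space, then pick one scheme at random) is short enough to sketch directly. The balance score below, a sum of squared standardized differences in group-level covariate means, is one common choice and an assumption on our part, not necessarily the metric used in the motivating trials.

```python
import numpy as np

def constrained_randomization(x, n_cand=10_000, keep=0.10, rng=None):
    """Covariate-based constrained randomization for a group-randomized
    trial with an even number of groups and group-level covariates x
    (rows = groups). Balance score: sum of squared standardized
    differences in covariate means between the two arms."""
    if rng is None:
        rng = np.random.default_rng(0)
    g, _ = x.shape
    xs = (x - x.mean(0)) / x.std(0)
    cands, scores = [], []
    for _ in range(n_cand):
        arm = rng.permutation(np.repeat([0, 1], g // 2))
        score = ((xs[arm == 1].mean(0) - xs[arm == 0].mean(0)) ** 2).sum()
        cands.append(arm)
        scores.append(score)
    cut = np.quantile(scores, keep)              # constrain the space
    pool = [a for a, s in zip(cands, scores) if s <= cut]
    return pool[rng.integers(len(pool))]         # randomly select one scheme

rng = np.random.default_rng(6)
x = rng.normal(size=(12, 3))                     # 12 groups, 3 covariates
print(constrained_randomization(x, rng=rng))
```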

19.
Objective: To compare environmental noise level, individual noise exposure, and cumulative noise exposure as metrics for evaluating the dose-response relationship of hearing loss induced by steady-state noise. Methods: Individual noise exposure data for loom operators over 8-hour work shifts were collected with personal dosimeters and transferred to a microcomputer for storage and analysis. Four groups of workers operating different types of machines in the fine-spinning and weaving workshops were observed, with 3-5 workers selected per group; individual noise exposure was measured for one full shift in each of the morning, afternoon, and night shifts. Environmental noise levels in each group's work area were measured using the grid method with an ordinary sound level meter, and 163 workers in the textile mill exposed to steady-state noise completed questionnaires and audiometric examinations. Results: After adjustment for age and sex, the prevalence of high-frequency hearing loss was 64.4% and the prevalence of speech-frequency hearing loss was 2.5%; the prevalence of high-frequency hearing loss rose with increasing noise exposure dose, showing a typical dose-response relationship. By trend chi-square tests and logistic regression modelling, cumulative noise exposure evaluated the dose-response relationship better than noise level, and individual noise exposure performed better than environmental noise level. Conclusion: Individual noise exposure and cumulative noise exposure are the best exposure metrics for evaluating the dose-response relationship between steady-state noise exposure and high-frequency hearing loss.
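If cumulative noise exposure here follows the common definition CNE = L_Aeq,8h + 10·log10(years of exposure), in dB(A)·year, the metric is a one-line computation. That definition is our assumption, since the abstract does not spell it out.

```python
import numpy as np

def cumulative_noise_exposure(l_aeq_8h, years):
    """Cumulative noise exposure in dB(A)*year, using the common
    definition CNE = L_Aeq,8h + 10 * log10(years of exposure)."""
    return l_aeq_8h + 10 * np.log10(years)

print(cumulative_noise_exposure(95.0, 12))   # e.g. 95 dB(A) for 12 years
```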

20.
The clinical trial design including a test treatment, an active control and a placebo is called the gold standard design. In this paper, we develop a statistical method for planning and evaluating non-inferiority trials with gold standard design for right-censored time-to-event data. We consider both lost to follow-up and administrative censoring. We present a semiparametric approach that only assumes the proportionality of the hazard functions. In particular, we develop an algorithm for calculating the minimal total sample size and its optimal allocation to treatment groups such that a desired power can be attained for a specific parameter constellation under the alternative. For the purpose of sample size calculation, we assume the endpoints to be Weibull distributed. By means of simulations, we investigate the actual type I error rate, power and the accuracy of the calculated sample sizes. Finally, we compare our procedure with a previously proposed procedure assuming exponentially distributed event times. To illustrate our method, we consider a double-blinded, randomized, active and placebo controlled trial in major depression. Copyright © 2013 John Wiley & Sons, Ltd.
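For a feel for the scale of such calculations, the classical Schoenfeld-type event count for a two-arm non-inferiority log-rank comparison (test versus reference) is sketched below; the margin and effect values are illustrative, and the authors' algorithm for the full three-arm Weibull setting is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def required_events(hr_margin, hr_alt, alpha=0.025, power=0.8, p=0.5):
    """Schoenfeld-type number of events for a two-arm non-inferiority
    log-rank comparison with allocation fraction p to the test arm."""
    za, zb = norm.ppf(1 - alpha), norm.ppf(power)
    theta = np.log(hr_alt) - np.log(hr_margin)
    return (za + zb) ** 2 / (p * (1 - p) * theta ** 2)

# e.g. margin HR = 1.3, true HR = 1.0 (test truly equal to reference)
print(int(np.ceil(required_events(1.3, 1.0))))
```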
