Similar Articles (15 results)
1.
Adaptive treatment strategies are useful in the treatment of chronic diseases such as AIDS and cancer because they allow tailoring the treatment to a patient's needs and disease status. We consider two randomization schemes for clinical trials that are commonly used to design studies comparing adaptive treatment strategies, namely, up-front randomization and sequential randomization. Up-front randomization is the classical method of randomization, in which patients are randomized at the beginning of the study to pre-specified treatment strategies. In sequentially randomized trials, patients are randomized sequentially to the available treatment options over the duration of therapy, as they become eligible to receive subsequent treatments. We compare the efficiency and power of traditional up-front randomized trials with those of sequentially randomized trials designed to compare adaptive treatment strategies based on a continuous outcome. The analytical and simulation results indicate that, when properly analyzed, sequentially randomized trials are more efficient and powerful than up-front randomized trials.
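The two schemes can be contrasted with a toy two-stage trial. This is a minimal sketch, not the paper's design: the function names and the responder rule are illustrative assumptions.

```python
import random

def upfront_randomize(n, strategies, seed=0):
    """Up-front scheme: each patient is assigned a complete two-stage
    strategy (initial treatment, maintenance treatment) at baseline."""
    rng = random.Random(seed)
    return [rng.choice(strategies) for _ in range(n)]

def sequential_randomize(n, first_stage, second_stage, responds, seed=0):
    """Sequential scheme: the stage-2 treatment is randomized only when a
    patient becomes eligible (here, only non-responders move to stage 2)."""
    rng = random.Random(seed)
    assignments = []
    for _ in range(n):
        a1 = rng.choice(first_stage)
        if responds(a1, rng):          # responder continues initial treatment
            assignments.append((a1, a1))
        else:                          # non-responder is re-randomized
            assignments.append((a1, rng.choice(second_stage)))
    return assignments
```

Under the sequential scheme the same patient contributes data to every strategy consistent with the treatments actually received, which is the source of the efficiency gain the abstract reports.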

2.
Sequential multiple assignment randomized trials (SMARTs) are increasingly being used to inform clinical and intervention science. In a SMART, each patient is repeatedly randomized over time. Each randomization occurs at a critical decision point in the treatment course. These critical decision points often correspond to milestones in the disease process or other changes in a patient's health status. Thus, the timing and number of randomizations may vary across patients and depend on evolving patient-specific information. This presents unique challenges when analyzing data from a SMART in the presence of missing data. This paper presents the first comprehensive discussion of missing data issues typical of SMART studies: we describe five specific challenges and propose a flexible imputation strategy to facilitate valid statistical estimation and inference using incomplete data from a SMART. To illustrate these contributions, we consider data from the Clinical Antipsychotic Trials of Intervention Effectiveness, one of the most well-known SMARTs to date. Copyright © 2014 John Wiley & Sons, Ltd.

3.
In sequential multiple assignment randomized trials, longitudinal outcomes may be the most important outcomes of interest because this type of trial is usually conducted in areas of chronic diseases or conditions. We propose a weighted generalized estimating equation (GEE) approach to analyzing data from such trials for comparing two adaptive treatment strategies based on generalized linear models. Although the randomization probabilities are known, we consider estimated weights in which the randomization probabilities are replaced by their empirical estimates, and prove that the resulting weighted GEE estimator is more efficient than the estimator with true weights. The variance of the weighted GEE estimator is estimated by an empirical sandwich estimator. The time variable in the model can be linear, piecewise linear, or of a more complicated form. This flexibility is important because, in the adaptive treatment setting, the treatment changes over time and, hence, a single linear trend over the whole study period may not be practical. Simulation results show that the weighted GEE estimators of regression coefficients are consistent regardless of the specification of the correlation structure of the longitudinal outcomes. The weighted GEE method is then applied to data from the Clinical Antipsychotic Trials of Intervention Effectiveness. Copyright © 2016 John Wiley & Sons, Ltd.
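A minimal sketch of the weight construction, assuming a two-stage trial with binary randomizations; the function name and interface are hypothetical, and the full weighted GEE fit is omitted:

```python
import numpy as np

def estimated_ipw_weights(a1, a2, eligible):
    """Inverse-probability weights for a two-stage sequentially randomized
    trial, with the known randomization probabilities replaced by their
    empirical estimates (the abstract reports that this substitution yields
    a more efficient estimator than using the true probabilities).

    a1, a2   : 0/1 arrays of stage-1 / stage-2 treatment indicators;
               a2 is ignored where eligible is False
    eligible : boolean array, True if the patient was re-randomized
    """
    a1 = np.asarray(a1); a2 = np.asarray(a2); eligible = np.asarray(eligible)
    p1 = a1.mean()                           # empirical P(A1 = 1)
    w = np.where(a1 == 1, 1.0 / p1, 1.0 / (1.0 - p1))
    if eligible.any():
        p2 = a2[eligible].mean()             # empirical P(A2 = 1 | eligible)
        w2 = np.where(a2 == 1, 1.0 / p2, 1.0 / (1.0 - p2))
        w = np.where(eligible, w * w2, w)
    return w
```

These weights would then multiply each subject's contribution to the GEE score equations, with the sandwich variance accounting for the estimated weights.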

4.
Clinical trials that randomize subjects to decision algorithms, which adapt treatments over time according to individual response, have gained considerable interest as investigators seek designs that directly inform clinical decision making. We consider designs in which subjects are randomized sequentially at decision points, among adaptive treatment options under evaluation. We present a sequential method to estimate the comparative effects of the randomized adaptive treatments, which are formalized as adaptive treatment strategies. Our causal estimators are derived using Bayesian predictive inference. We use analytical and empirical calculations to compare the predictive estimators to (i) the 'standard' approach that allocates the sequentially obtained data to separate strategy-specific groups as would arise from randomizing subjects at baseline; (ii) the semi-parametric approach of marginal mean models that, under appropriate experimental conditions, provides the same sequential estimator of causal differences as the proposed approach. Simulation studies demonstrate that sequential causal inference offers substantial efficiency gains over the standard approach to comparing treatments, because the predictive estimators can take advantage of the monotone structure of shared data among adaptive strategies. We further demonstrate that the semi-parametric asymptotic variances, which are marginal 'one-step' estimators, may exhibit significant bias, in contrast to the predictive variances. We show that the conditions under which the sequential method is attractive relative to the other two approaches are those most likely to occur in real studies.

5.
Hazard ratios from a Cox proportional hazards regression model are hard to interpret and difficult to convert into prolonged survival time. As the main goal is often to study survival functions, there is increasing interest in summary measures based on the survival function that are easier to interpret than the hazard ratio; the residual mean time is an important example of such measures. However, because of the presence of right censoring, the tail of the survival distribution is often difficult to estimate correctly. Therefore, we consider the restricted residual mean time, which represents a partial area under the survival function, given any time horizon τ, and is interpreted as the residual life expectancy up to τ of a subject surviving to time t. We present a class of regression models for this measure, based on weighted estimating equations and inverse probability of censoring weighted estimators to handle right censoring. Furthermore, we show how to extend the models and the estimators to deal with delayed entries. We demonstrate that the restricted residual mean life estimator is equivalent to integrals of Kaplan–Meier estimates in the case of simple factor variables. Estimation performance is investigated by simulation studies. Using real data from the Danish Monitoring Cardiovascular Risk Factor Surveys, we illustrate an application to additive regression models and discuss the general assumption of right censoring and left truncation being dependent on covariates. Copyright © 2017 John Wiley & Sons, Ltd.
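The equivalence to integrals of Kaplan–Meier estimates suggests a simple plug-in estimator: the restricted mean up to τ is the area under the Kaplan–Meier step function on [0, τ]. This is a generic sketch, not the paper's weighted-estimating-equation estimator, and it ignores delayed entry:

```python
import numpy as np

def km_survival(times, events):
    """Kaplan-Meier survival estimate; returns (event_times, S(t))."""
    times = np.asarray(times, float); events = np.asarray(events, int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    uniq = np.unique(times[events == 1])
    surv, s = [], 1.0
    for t in uniq:
        at_risk = np.sum(times >= t)            # number still at risk at t
        d = np.sum((times == t) & (events == 1))  # events at t
        s *= 1.0 - d / at_risk
        surv.append(s)
    return uniq, np.array(surv)

def restricted_mean(times, events, tau):
    """Restricted mean survival time up to tau: the area under the
    Kaplan-Meier curve on [0, tau] (exact step-function integral)."""
    ts, surv = km_survival(times, events)
    grid = np.concatenate(([0.0], ts[ts < tau], [tau]))
    # S is right-continuous: its value on [t_k, t_{k+1}) is S(t_k)
    vals = np.concatenate(([1.0], surv[ts < tau]))
    return float(np.sum(np.diff(grid) * vals))
```

With no censoring, the restricted mean reduces to the sample mean of min(T, τ), which gives a quick sanity check.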

6.
There is growing interest in how best to adapt and readapt treatments to individuals to maximize clinical benefit. In response, adaptive treatment strategies (ATSs), which operationalize adaptive, sequential clinical decision making, have been developed. From a patient's perspective, an ATS is a sequence of treatments, each individualized to the patient's evolving health status. From a clinician's perspective, an ATS is a sequence of decision rules that input the patient's current health status and output the next recommended treatment. Sequential multiple assignment randomized trials (SMARTs) have been developed to address the sequencing questions that arise in the development of ATSs, but SMARTs are relatively new in clinical research. This article provides an introduction to ATSs and SMART designs. This article also discusses the design of SMART pilot studies to address feasibility concerns and to prepare investigators for a full-scale SMART. We consider an example SMART for the development of an ATS in the treatment of pediatric generalized anxiety disorders. Using the example SMART, we identify and discuss design issues unique to SMARTs that are best addressed in an external pilot study prior to the full-scale SMART. We also address the question of how many participants are needed in a SMART pilot study. A properly executed pilot study can be used to effectively address concerns about acceptability and feasibility in preparation for (that is, prior to) executing a full-scale SMART.

7.
For diseases with some level of associated mortality, the case fatality ratio measures the proportion of diseased individuals who die from the disease. In principle, it is straightforward to estimate this quantity from individual follow-up data that provide times from onset to death or recovery. In particular, in a competing risks context, the case fatality ratio is defined by the limiting value of the sub-distribution function for death, F1(t) = Pr(T ≤ t, J = 1), as t → ∞, where T denotes the time from onset to death (J = 1) or recovery (J = 2). When censoring is present, however, estimation of F1(∞) is complicated by the possibility of little information regarding the right tail of F1, requiring use of estimators of F1(t*) or F1(t*)/(F1(t*) + F2(t*)) where t* is large, with F2(t) = Pr(T ≤ t, J = 2) denoting the sub-distribution function for recovery.
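Ignoring censoring, the sub-distribution functions and the conditional case fatality ratio reduce to simple empirical proportions; a minimal sketch of those definitions (the paper's focus is precisely the harder censored case, which this does not address):

```python
import numpy as np

def sub_distribution(times, causes, t_star, cause=1):
    """Empirical sub-distribution function F_cause(t*) = Pr(T <= t*, J = cause)
    for complete (uncensored) competing-risks data."""
    times = np.asarray(times, float); causes = np.asarray(causes, int)
    return float(np.mean((times <= t_star) & (causes == cause)))

def case_fatality_ratio(times, causes, t_star):
    """Estimate F1(t*) / (F1(t*) + F2(t*)): the case fatality ratio among
    subjects whose outcome (death, J = 1, or recovery, J = 2) has resolved
    by time t*."""
    f1 = sub_distribution(times, causes, t_star, cause=1)  # death
    f2 = sub_distribution(times, causes, t_star, cause=2)  # recovery
    return f1 / (f1 + f2)
```

As t* grows with complete data, F1(t*) approaches F1(∞), the case fatality ratio itself; censoring breaks this convergence in the right tail, which is the estimation problem the abstract describes.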

8.
Frailty models are widely used to model clustered survival data arising in multicenter clinical studies. In the literature, most existing frailty models are based on proportional hazards, additive hazards, or accelerated failure time models. In this paper, we propose a frailty model framework based on mean residual life regression to accommodate intracluster correlation while providing an easily understood and straightforward interpretation of the effects of prognostic factors on the expectation of the remaining lifetime. To overcome estimation challenges, a novel hierarchical quasi-likelihood approach is developed by using the idea of hierarchical likelihood in the construction of the quasi-likelihood function, leading to hierarchical estimating equations. Simulation results show favorable performance of the method regardless of the frailty distribution. The utility of the proposed methodology is illustrated by its application to data from a multi-institutional study of breast cancer.

9.
Nonadherence to assigned treatment jeopardizes the power and interpretability of intent-to-treat comparisons from clinical trial data and continues to be an issue for effectiveness studies, despite their pragmatic emphasis. We posit that new approaches to design need to complement developments in methods for causal inference to address nonadherence, in both experimental and practice settings. This paper considers the conventional study design for psychiatric research and other medical contexts, in which subjects are randomized to treatments that are fixed throughout the trial, and presents an alternative that converts the fixed treatments into an adaptive intervention that reflects best practice. The key element is the introduction of an adaptive decision point midway into the study to address a patient's reluctance to remain on treatment before completing a full-length trial of medication. The clinical uncertainty about the appropriate adaptation prompts a second randomization at the new decision point to evaluate relevant options. Additionally, the standard 'all-or-none' principal stratification (PS) framework is applied to the first stage of the design to address treatment discontinuation that occurs too early for a midtrial adaptation. Drawing upon the adaptive intervention features, we develop assumptions to identify the PS causal estimand and to introduce restrictions on outcome distributions to simplify expectation–maximization calculations. We evaluate the performance of the PS setup, with particular attention to the role played by a binary covariate. The results emphasize the importance of collecting covariate data for use in design and analysis. We consider the generality of our approach beyond the setting of psychiatric research. Copyright © 2015 John Wiley & Sons, Ltd.

10.
An adaptive treatment strategy (ATS) is defined as a sequence of treatments and intermediate responses. ATSs arise when chronic diseases such as cancer and depression are treated over time with various treatment alternatives, depending on intermediate responses to earlier treatments. Clinical trials are often designed to compare ATSs using appropriate designs such as sequential randomization designs. Although the recent literature provides statistical methods for analyzing data from such trials, very few articles have focused on statistical power and sample size issues. This paper presents a sample size formula for comparing the survival probabilities under two treatment strategies sharing the same initial treatment but different maintenance treatments. The formula is based on the large sample properties of the inverse-probability-weighted estimator. A simulation study shows strong evidence that the proposed sample size formula guarantees the desired power, regardless of the true distributions of survival times. Copyright © 2009 John Wiley & Sons, Ltd.

11.
Group sequential design has become more popular in clinical trials because it allows for trials to stop early for futility or efficacy to save time and resources. However, this approach is less well-known for longitudinal analysis. We have observed repeated cases of studies with longitudinal data where there is an interest in early stopping for a lack of treatment effect or in adapting sample size to correct for inappropriate variance assumptions. We propose an information-based group sequential design as a method to deal with both of these issues. Updating the sample size at each interim analysis makes it possible to maintain the target power while controlling the type I error rate. We will illustrate our strategy with examples and simulations and compare the results with those obtained using fixed design and group sequential design without sample size re-estimation. Copyright © 2014 John Wiley & Sons, Ltd.
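The information-based update can be sketched for a two-arm comparison of means: the maximum information needed for the target power is fixed by α, β, and the effect size, and the per-arm sample size is recomputed at interim by plugging in the current variance estimate. The function name and parameterization are illustrative, not the paper's method:

```python
from math import ceil
from statistics import NormalDist

def reestimated_n(delta, sigma_hat, alpha=0.05, power=0.9):
    """Information-based sample size re-estimation for a two-arm comparison
    of means. The maximum information needed to detect effect `delta` is
    I_max = ((z_{1-alpha/2} + z_{power}) / delta)^2; since the information
    for a mean difference with n per arm is n / (2 sigma^2), the per-arm
    size is n = I_max * 2 * sigma_hat^2, using the interim variance
    estimate sigma_hat^2 in place of the design assumption."""
    z = NormalDist().inv_cdf
    i_max = ((z(1 - alpha / 2) + z(power)) / delta) ** 2
    return ceil(i_max * 2 * sigma_hat ** 2)
```

An interim variance larger than the design assumption inflates the re-estimated n, which is exactly how the target power is preserved when the original variance assumption was wrong.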

12.
This paper considers the analysis of longitudinal data complicated by the fact that during follow-up patients can be in different disease states, such as remission, relapse or death. If both the response of interest (for example, quality of life (QOL)) and the amount of missing data depend on this disease state, ignoring the disease state will yield biased means. Death as the final state is an additional complication because no measurements are taken after death and often the outcome of interest is undefined after death. We discuss a new approach to model these types of data. In our approach, the probability of being in each of the different disease states over time is estimated using multi-state models. In each disease state, the conditional mean given the disease state is modeled directly. Generalized estimating equations are used to estimate the parameters of the conditional means, with inverse probability weights to account for unobserved responses. This approach shows the effect of the disease state on the longitudinal response. Furthermore, it yields estimates of the overall mean response over time, either conditionally on being alive or after imputing predefined values for the response after death. Graphical methods to visualize the joint distribution of disease state and response are discussed. As an example, the analysis of a Dutch randomized clinical trial for breast cancer is considered. In this study, the long-term impact on QOL of two different chemotherapy schedules was studied with three disease states: alive without relapse, alive after relapse, and death. Copyright © 2009 John Wiley & Sons, Ltd.

13.
We are motivated by a randomized clinical trial evaluating the efficacy of amitriptyline for the treatment of interstitial cystitis and painful bladder syndrome in treatment-naïve patients. In the trial, both the non-adherence rate and the rate of loss to follow-up are fairly high. To estimate the effect of the treatment received on the outcome, we use the generalized structural mean model (GSMM), originally proposed to deal with non-adherence, to adjust for both non-adherence and loss to follow-up. In the model, loss to follow-up is handled by weighting the estimating equations for the GSMM by one over the probability of not being lost to follow-up, estimated using a logistic regression model. We re-analyzed the data from the trial and found a possible benefit of amitriptyline when administered at a high-dose level. Copyright © 2012 John Wiley & Sons, Ltd.

14.
Small sample, sequential, multiple assignment, randomized trials (snSMARTs) are multistage trials with the overall goal of determining the best treatment after a fixed amount of time. In an snSMART, patients are first randomized to one of three treatments, and a binary (e.g. response/nonresponse) outcome is measured at the end of the first stage. Responders to first stage treatment continue their treatment. Nonresponders to first stage treatment are rerandomized to one of the remaining treatments. The same binary outcome is measured at the end of the first and second stages, and data from both stages are pooled together to find the best first stage treatment. However, in many settings the primary endpoint may be continuous, and dichotomizing this continuous variable may reduce statistical efficiency. In this article, we extend the snSMART design and methods to allow for continuous outcomes. Instead of requiring a binary outcome at the first stage for rerandomization, the probability of staying on the same treatment or switching treatment is a function of the first stage outcome. Rerandomization based on a mapping function of a continuous outcome allows for snSMART designs without requiring a binary outcome. We perform simulation studies to compare the proposed design with continuous outcomes to standard snSMART designs with binary outcomes. The proposed design results in more efficient treatment effect estimates and similar outcomes for trial patients.
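A hypothetical mapping function illustrates the idea: the probability of staying on the first-stage treatment increases smoothly with the continuous first-stage outcome. The logistic form and parameter names are assumptions for illustration, not the paper's actual mapping:

```python
import random
from math import exp

def stay_probability(y, midpoint=0.0, scale=1.0):
    """Hypothetical mapping function: probability of continuing the
    first-stage treatment, increasing in the continuous outcome y
    (logistic form; midpoint and scale are illustrative parameters)."""
    return 1.0 / (1.0 + exp(-(y - midpoint) / scale))

def second_stage(treatment, y, all_treatments, rng):
    """Stay on `treatment` with probability stay_probability(y); otherwise
    rerandomize uniformly among the remaining treatments."""
    if rng.random() < stay_probability(y):
        return treatment
    others = [t for t in all_treatments if t != treatment]
    return rng.choice(others)
```

Setting the mapping to a step function of y recovers the standard binary-outcome snSMART as a special case, which is why the design is a strict generalization.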

15.
In 1996–1997, the AIDS Clinical Trial Group 320 study randomized 1156 HIV‐infected U.S. patients to combination antiretroviral therapy (ART) or highly active ART with equal probability. Ninety‐six patients incurred AIDS or died, 51 (4 per cent) dropped out, and 290 (=51 + 239, 25 per cent) dropped out or stopped their assigned therapy for reasons other than toxicity during a 52‐week follow‐up. Such noncompliance likely results in null‐biased estimates of intent‐to‐treat hazard ratios (HR) of AIDS or death comparing highly active ART with combination ART, which were 0.75 (95 per cent confidence limits [CL]: 0.43, 1.31), 0.30 (95 per cent CL: 0.15, 0.60), and 0.51 (95 per cent CL: 0.33, 0.77) for follow‐up within 15 weeks, beyond 15 weeks, and overall, respectively. Noncompliance correction using Robins and Finkelstein's (RF) inverse probability‐of‐censoring weights (where participants are censored at dropout or when first noncompliant) yielded estimated HR of 0.46 (95 per cent CL: 0.25, 0.85), 0.43 (95 per cent CL: 0.19, 0.96), and 0.45 (95 per cent CL: 0.27, 0.74) for follow‐up within 15 weeks, beyond 15 weeks, and overall, respectively. Weights were estimated conditional on measured age, sex, race, ethnicity, prior Zidovudine use, randomized arm, baseline and time‐varying CD4 cell count, and time‐varying HIV‐related symptoms. Noncompliance corrected results were 63 and 13 per cent farther from the null value of one than intent‐to‐treat results within 15 weeks and overall, respectively, and resolve the apparent non‐proportionality in intent‐to‐treat results. Inverse probability‐of‐censoring weighted methods could help to resolve discrepancies between compliant and noncompliant randomized evidence, as well as between randomized and observational evidence, in a variety of biomedical fields. Copyright © 2009 John Wiley & Sons, Ltd.
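The weighting scheme can be sketched as follows: given fitted per-visit probabilities of remaining uncensored (in the paper these come from a logistic regression on the listed covariates), the weight at visit k is the inverse of their cumulative product. The interface is illustrative:

```python
import numpy as np

def ipcw_weights(uncensored_prob):
    """Inverse-probability-of-censoring weights in the style of Robins and
    Finkelstein: a subject still under observation at visit k receives
    weight 1 / prod_{j <= k} P(uncensored at visit j | covariates).

    uncensored_prob : array of shape (subjects, visits) holding the fitted
                      per-visit probabilities of remaining uncensored
                      (supplied here directly; fitting the logistic model
                      that produces them is omitted).
    """
    p = np.asarray(uncensored_prob, float)
    cum = np.cumprod(p, axis=1)  # prob. of remaining uncensored through visit k
    return 1.0 / cum
```

Subjects with covariate histories that make censoring likely receive large weights, so the weighted risk set reconstructs the population that would have been observed under full compliance.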
