Similar Documents
20 similar documents found (search time: 15 ms)
1.
Intention‐to‐treat (ITT) analysis is commonly used in randomized clinical trials. However, the use of ITT analysis presents a challenge: how to deal with subjects who drop out. Here we focus on randomized trials where the primary outcome is a binary endpoint. Several approaches are available for including the dropout subjects in the ITT analysis, typically chosen prior to unblinding the study. These approaches reduce the potential bias due to breaking the randomization code. However, the validity of the results will highly depend on untestable assumptions about the dropout mechanism. Thus, it is important to evaluate the sensitivity of the results across different missing‐data mechanisms. We propose here a Bayesian pattern‐mixture model for ITT analysis of binary outcomes with dropouts that applies over different types of missing‐data mechanisms. We introduce a new parameterization to identify the model, which is then used for sensitivity analysis. The parameterization is defined as the odds ratio of having an endpoint between the subjects who dropped out and those who completed the study. Such a parameterization is intuitive and easy to use in sensitivity analysis; it also incorporates most of the available methods as special cases. The model is applied to the TRial Of Preventing HYpertension (TROPHY). Copyright © 2008 John Wiley & Sons, Ltd.
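A minimal sketch of the odds-ratio sensitivity sweep this abstract describes, using hypothetical arm-level counts (all numbers invented; a frequentist-style illustration, not the authors' Bayesian model):

```python
import numpy as np

def itt_event_rate(n_complete, events_complete, n_dropout, odds_ratio):
    """Overall event probability in one arm under a pattern-mixture model:
    dropouts' odds of the endpoint = odds_ratio * completers' odds."""
    p_c = events_complete / n_complete                 # completers' event rate
    odds_d = odds_ratio * p_c / (1 - p_c)              # dropouts' assumed odds
    p_d = odds_d / (1 + odds_d)                        # dropouts' event rate
    n = n_complete + n_dropout
    return (n_complete * p_c + n_dropout * p_d) / n    # mixture over patterns

# Hypothetical arm-level counts: (n completers, events, n dropouts)
arms = {"treatment": (180, 54, 20), "control": (170, 68, 30)}

for or_sens in [0.25, 0.5, 1.0, 2.0, 4.0]:             # sensitivity grid; 1.0 ~ dropouts like completers
    p_t = itt_event_rate(*arms["treatment"], or_sens)
    p_c = itt_event_rate(*arms["control"], or_sens)
    print(f"OR={or_sens:4.2f}: risk difference = {p_t - p_c:+.3f}")
```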

2.
In noninferiority (NI) trials, an ongoing methodological challenge is how to handle in the analysis the subjects who are nonadherent to their assigned treatment. Some investigators perform the intent-to-treat (ITT) as the primary analysis and the per-protocol (PP) analysis as sensitivity analysis, whereas others do the reverse since ITT results may be anticonservative in the NI setting. But even when there is agreement between the ITT and PP approaches, NI of the experimental therapy to the comparator is not guaranteed. We propose that a tipping point method be used to further assess the impact of nonadherence on the results of a NI trial. In this approach, data from the nonadherers obtained after treatment discontinuation are not used, and their outcomes under the counterfactual situation of complete adherence are considered missing. The tipping point analysis indicates how sensitive the NI trial results are to the values of these missing counterfactual outcomes. The advantages of this approach are that a model or mechanism for the missing outcomes does not have to be assumed, and all subjects who were randomized are included in the analysis. We consider both binary and continuous outcomes and propose extensions to accommodate different types of nonadherence. The methods are illustrated with examples from two NI trials, one to evaluate different doses of radiation therapy to treat painful bone metastases and the other to compare treatments for reducing depression in adolescents.
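A sketch of the tipping point idea for a binary NI endpoint, sweeping every possible completion of the nonadherers' missing counterfactual outcomes (invented counts; the Wald interval, margin, and function names are assumptions):

```python
import numpy as np
from scipy import stats

def ni_met(x_e, n_e, x_c, n_c, margin=0.10, alpha=0.025):
    """NI shown if the lower confidence bound of
    p_experimental - p_control exceeds -margin (Wald interval)."""
    p_e, p_c = x_e / n_e, x_c / n_c
    se = np.sqrt(p_e * (1 - p_e) / n_e + p_c * (1 - p_c) / n_c)
    lower = (p_e - p_c) - stats.norm.ppf(1 - alpha) * se
    return lower > -margin

# Hypothetical trial: adherers' observed successes, plus nonadherers
# whose counterfactual outcomes are treated as missing and varied.
obs = dict(x_e=150, n_e=200, x_c=155, n_c=200)   # adherers only
m_e, m_c = 15, 12                                # nonadherers per arm

for s_e in range(m_e + 1):                       # successes among exp. nonadherers
    row = []
    for s_c in range(m_c + 1):                   # successes among ctrl nonadherers
        ok = ni_met(obs["x_e"] + s_e, obs["n_e"] + m_e,
                    obs["x_c"] + s_c, obs["n_c"] + m_c)
        row.append("N" if ok else ".")
    print("".join(row))
# The tipping points are the boundaries between 'N' (NI met) and '.' cells.
```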

3.
We extend the pattern‐mixture approach to handle missing continuous outcome data in longitudinal cluster randomized trials, which randomize groups of individuals to treatment arms, rather than the individuals themselves. Individuals who drop out at the same time point are grouped into the same dropout pattern. We approach extrapolation of the pattern‐mixture model by applying multilevel multiple imputation, which imputes missing values while appropriately accounting for the hierarchical data structure found in cluster randomized trials. To assess parameters of interest under various missing data assumptions, imputed values are multiplied by a sensitivity parameter, k, which increases or decreases imputed values. Using simulated data, we show that estimates of parameters of interest can vary widely under differing missing data assumptions. We conduct a sensitivity analysis using real data from a cluster randomized trial by increasing k until the treatment effect inference changes. By performing a sensitivity analysis for missing data, researchers can assess whether certain missing data assumptions are reasonable for their cluster randomized trial.
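A simplified, single-level illustration of the k-multiplier sensitivity analysis (the paper uses multilevel multiple imputation; here one deterministic regression imputation on simulated data stands in for it, and all data are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated trial: arm indicator, baseline covariate, outcome with dropouts
n = 400
arm = rng.integers(0, 2, n)
base = rng.normal(0, 1, n)
y = 0.5 * arm + base + rng.normal(0, 1, n)
y[rng.random(n) < 0.25] = np.nan                      # ~25% dropout

obs = ~np.isnan(y)
X_obs = np.column_stack([np.ones(obs.sum()), arm[obs], base[obs]])
beta = np.linalg.lstsq(X_obs, y[obs], rcond=None)[0]  # imputation model

for k in [0.6, 0.8, 1.0, 1.2, 1.4]:                   # sensitivity multiplier
    y_k = y.copy()
    X_mis = np.column_stack([np.ones((~obs).sum()), arm[~obs], base[~obs]])
    y_k[~obs] = k * (X_mis @ beta)                    # scale imputed values by k
    effect = y_k[arm == 1].mean() - y_k[arm == 0].mean()
    print(f"k={k:.1f}: estimated treatment effect = {effect:+.3f}")
```

Increasing or decreasing k until the inference flips mirrors the tipping-style sensitivity analysis described in the abstract.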

4.
The intention-to-treat (ITT) approach to randomized controlled trials analyzes data on the basis of treatment assignment, not treatment receipt. Alternative approaches make comparisons according to the treatment received at the end of the trial (as-treated analysis) or using only subjects who did not deviate from the assigned treatment (adherers-only analysis). Using a sensitivity analysis on data for a hypothetical trial, we compare these different analytical approaches in the context of two common protocol deviations: loss to follow-up and switching across treatments. In each case, two rates of deviation are considered: 10% and 30%. The analysis shows that biased estimates of effect may occur when deviation is nonrandom, when a large percentage of participants switch treatments or are lost to follow-up, and when the method of estimating missing values accounts inadequately for the process causing loss to follow-up. In general, ITT analysis attenuates between-group effects. Trialists should use sensitivity analyses on their data and should compare the characteristics of participants who do and those who do not deviate from the trial protocol. The ITT approach is not a remedy for unsound design, and imputation of missing values is not a substitute for complete, good quality data.
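A toy illustration of how the three analysis sets differ, on an invented eight-subject data frame:

```python
import pandas as pd

# Hypothetical per-subject records: assigned arm, arm actually received, outcome
df = pd.DataFrame({
    "assigned": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "received": ["A", "A", "B", "A", "B", "A", "B", "B"],
    "outcome":  [1,   0,   1,   1,   0,   1,   0,   0],
})

itt      = df.groupby("assigned")["outcome"].mean()    # by assignment
as_treat = df.groupby("received")["outcome"].mean()    # by treatment received
adherers = (df[df.assigned == df.received]             # non-deviators only
            .groupby("assigned")["outcome"].mean())

print("ITT:          ", (itt["A"] - itt["B"]).round(3))
print("As-treated:   ", (as_treat["A"] - as_treat["B"]).round(3))
print("Adherers-only:", (adherers["A"] - adherers["B"]).round(3))
```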

5.
The standard approach for analysing a randomized clinical trial is based on intent-to-treat (ITT) where subjects are analysed according to their assigned treatment group regardless of actual adherence to the treatment protocol. For therapeutic equivalence trials, it is a common concern that an ITT analysis increases the chance of erroneously concluding equivalence. In this paper, we formally investigate the impact of non-compliance on an ITT analysis of equivalence trials with a binary outcome. We assume 'all-or-none' compliance and independence between compliance and the outcome. Our results indicate that non-compliance does not always make it easier to demonstrate equivalence. The direction and magnitude of changes in the type I error rate and power of the study depend on the patterns of non-compliance, event probabilities, the margin of equivalence and other factors.

6.
Although missing outcome data are an important problem in randomized trials and observational studies, methods to address this issue can be difficult to apply. Using simulated data, the authors compared 3 methods to handle missing outcome data: 1) complete case analysis; 2) single imputation; and 3) multiple imputation (all 3 with and without covariate adjustment). Simulated scenarios focused on continuous or dichotomous missing outcome data from randomized trials or observational studies. When outcomes were missing at random, single and multiple imputations yielded unbiased estimates after covariate adjustment. Estimates obtained by complete case analysis with covariate adjustment were unbiased as well, with coverage close to 95%. When outcome data were missing not at random, all methods gave biased estimates, but handling missing outcome data by means of 1 of the 3 methods reduced bias compared with a complete case analysis without covariate adjustment. Complete case analysis with covariate adjustment and multiple imputation yield similar estimates in the event of missing outcome data, as long as the same predictors of missingness are included. Hence, complete case analysis with covariate adjustment can and should be used as the analysis of choice more often. Multiple imputation, in addition, can accommodate the missing-not-at-random scenario more flexibly, making it especially suited for sensitivity analyses.
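A compact simulation in the spirit of this comparison, with a MAR continuous outcome and hand-rolled single and multiple imputation combined by Rubin's rules (all data and models invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated randomized trial with an outcome missing at random (MAR given x)
n = 500
treat = rng.integers(0, 2, n)
x = rng.normal(0, 1, n)
y = 1.0 * treat + 2.0 * x + rng.normal(0, 1, n)
miss = rng.random(n) < 1 / (1 + np.exp(-x))            # missingness depends on x
y_obs = np.where(miss, np.nan, y)
obs = ~np.isnan(y_obs)
X = np.column_stack([np.ones(n), treat, x])

def ols(X, y):
    """OLS coefficients, covariance matrix, and residual variance."""
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = res[0] / (len(y) - X.shape[1])
    return beta, sigma2 * np.linalg.inv(X.T @ X), sigma2

# 1) Complete case analysis with covariate adjustment
b_cc, cov_cc, s2 = ols(X[obs], y_obs[obs])

# 2) Single (conditional mean) imputation
y_si = y_obs.copy()
y_si[~obs] = X[~obs] @ b_cc
b_si, _, _ = ols(X, y_si)

# 3) Multiple imputation with Rubin's rules
M, ests, wvars = 20, [], []
for _ in range(M):
    b_draw = rng.multivariate_normal(b_cc, cov_cc)     # parameter uncertainty
    y_m = y_obs.copy()
    y_m[~obs] = X[~obs] @ b_draw + rng.normal(0, np.sqrt(s2), (~obs).sum())
    bm, covm, _ = ols(X, y_m)
    ests.append(bm[1]); wvars.append(covm[1, 1])
mi_est = np.mean(ests)
mi_se = np.sqrt(np.mean(wvars) + (1 + 1 / M) * np.var(ests, ddof=1))

print(f"complete case: {b_cc[1]:.3f} (SE {np.sqrt(cov_cc[1, 1]):.3f})")
print(f"single imp.:   {b_si[1]:.3f}")
print(f"multiple imp.: {mi_est:.3f} (SE {mi_se:.3f})")
```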

7.
Standard intent-to-treat analyses of randomized clinical trials can yield biased estimates of treatment efficacy and toxicity when not all patients comply with their assigned treatment. Flexible methods have been proposed which correct for this by modelling expected contrasts between an individual's observed outcome and his/her potential outcome in the absence of exposure. Because such comparisons often require untestable assumptions, a sensitivity analysis is warranted. We show how this can be performed in a meaningful and practically useful way. Following the approach of Molenberghs, Kenward and Goetghebeur in a missing data context, we evaluate the separate contributions of structural uninformativeness and sampling variation to uncertainty about the population parameters. This leads us to consider Honestly Estimated Ignorance Regions (HEIRs) and Estimated Uncertainty RegiOns (EUROs), respectively. We use the results to estimate the causal effect of observed exposure on successful blood pressure reduction in a randomized controlled clinical trial with partial non-compliance.

8.
Longitudinal studies with repeated measures are often subject to non-response. Methods currently employed to alleviate the difficulties caused by missing data are typically unsatisfactory, especially when the cause of the missingness is related to the outcomes. We present an approach for incomplete categorical data in the repeated measures setting that allows missing data to depend on other observed outcomes for a study subject. The proposed methodology also allows a broader examination of study findings through interpretation of results in the framework of the set of all possible test statistics that might have been observed had no data been missing. The proposed approach consists of the following general steps. First, we generate all possible sets of missing values and form a set of possible complete data sets. We then weight each data set according to clearly defined assumptions and apply an appropriate statistical test procedure to each data set, combining the results to give an overall indication of significance. We make use of the EM algorithm and a Bayesian prior in this approach. While not restricted to the one-sample case, the proposed methodology is illustrated for one-sample data and compared to the common complete-case and available-case analysis methods.
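A toy version of the enumerate-weight-combine idea for one-sample binary data (the paper's EM/Bayesian machinery is replaced by a fixed prior weight; everything here is illustrative):

```python
import itertools
from scipy import stats

# Hypothetical one-sample binary data with 3 missing values (None)
data = [1, 1, 0, 1, 1, 1, 0, 1, None, None, None]
obs = [d for d in data if d is not None]
n_mis = sum(d is None for d in data)

p_prior = 0.5            # assumed probability of a 1 for each missing value
results = []
for fill in itertools.product([0, 1], repeat=n_mis):   # every completion
    complete = obs + list(fill)
    k = sum(fill)
    weight = p_prior**k * (1 - p_prior)**(n_mis - k)   # weight of this data set
    # exact binomial test of H0: p = 0.5 on the completed data
    p = stats.binomtest(sum(complete), len(complete), 0.5).pvalue
    results.append((weight, p))

overall = sum(w * p for w, p in results)               # weighted summary
print(f"weighted average p-value over all completions: {overall:.3f}")
```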

9.
Although recent guidelines for dealing with missing data emphasize the need for sensitivity analyses, and such analyses have a long history in statistics, universal recommendations for conducting and displaying these analyses are scarce. We propose graphical displays that help formalize and visualize the results of sensitivity analyses, building upon the idea of ‘tipping‐point’ analysis for randomized experiments with a binary outcome and a dichotomous treatment. The resulting ‘enhanced tipping‐point displays’ are convenient summaries of conclusions obtained from making different modeling assumptions about missingness mechanisms. The primary goal of the displays is to make formal sensitivity analyses more comprehensible to practitioners, thereby helping them assess the robustness of the experiment's conclusions to plausible missingness mechanisms. We also present a recent example of these enhanced displays in a medical device clinical trial that helped lead to FDA approval. Copyright © 2014 John Wiley & Sons, Ltd.
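A bare-bones tipping-point display for a two-arm binary trial, shading the completions of the missing outcomes that keep statistical significance (invented counts; a plain heatmap, not the paper's enhanced displays):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

def pvalue(x_t, n_t, x_c, n_c):
    """Two-sided pooled Wald test for a difference in proportions."""
    p_t, p_c = x_t / n_t, x_c / n_c
    p = (x_t + x_c) / (n_t + n_c)
    se = np.sqrt(p * (1 - p) * (1 / n_t + 1 / n_c))
    return 2 * stats.norm.sf(abs((p_t - p_c) / se))

# Hypothetical trial: observed counts plus missing outcomes in each arm
x_t, n_t, m_t = 120, 160, 18     # treatment: successes, observed n, missing
x_c, n_c, m_c = 100, 160, 22     # control

grid = np.array([[pvalue(x_t + i, n_t + m_t, x_c + j, n_c + m_c)
                  for j in range(m_c + 1)] for i in range(m_t + 1)])

fig, ax = plt.subplots()
ax.imshow(grid < 0.05, origin="lower", cmap="Greys")   # shaded = significant
ax.set_xlabel("imputed successes among missing, control")
ax.set_ylabel("imputed successes among missing, treatment")
ax.set_title("Tipping-point display: shaded cells keep p < 0.05")
plt.show()
```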

10.
Missing data are common in longitudinal studies and can occur in the exposure of interest. There has been little work assessing the impact of missing data in marginal structural models (MSMs), which are used to estimate the effect of an exposure history on an outcome when time‐dependent confounding is present. We design a series of simulations based on the Framingham Heart Study data set to investigate the impact of missing data in the primary exposure of interest in a complex, realistic setting. We use a standard application of MSMs to estimate the causal odds ratio of a specific activity history on outcome. We report and discuss the results of four missing data methods, under seven possible missing data structures, including scenarios in which an unmeasured variable predicts missing information. In all missing data structures, we found that a complete case analysis, where all subjects with missing exposure data are removed from the analysis, provided the least bias. An analysis that censored individuals at the first occasion of missing exposure and includes a censorship model as well as a propensity model when creating the inverse probability weights also performed well. The presence of an unmeasured predictor of missing data only slightly increased bias, except when the exposure had a large impact on missing data and the unmeasured variable had a large impact on both missing data and outcome. A discussion of the results is provided using causal diagrams, showing the usefulness of drawing such diagrams before conducting an analysis. Copyright © 2009 John Wiley & Sons, Ltd.

11.
Despite our best efforts, missing outcomes are common in randomized controlled clinical trials. The National Research Council's Committee on National Statistics panel report titled The Prevention and Treatment of Missing Data in Clinical Trials noted that further research is required to assess the impact of missing data on the power of clinical trials and how to set useful target rates and acceptable rates of missing data in clinical trials. In this article, using binary responses for illustration, we establish that conclusions based on statistical analyses that include only complete cases can be seriously misleading, and that the adverse impact of missing data grows not only with increasing rates of missingness but also with increasing sample size. We illustrate how principled sensitivity analysis can be used to assess the robustness of the conclusions. Finally, we illustrate how sample sizes can be adjusted to account for expected rates of missingness. We find that when sensitivity analyses are considered as part of the primary analysis, the required adjustments to the sample size are dramatically larger than those that are traditionally used. Furthermore, in some cases, especially in large trials with small target effect sizes, it is impossible to achieve the desired power.
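A sketch contrasting the traditional 1/(1 - r) sample size inflation with the article's warning that principled sensitivity analyses require much larger adjustments (design values invented):

```python
import math
from scipy import stats

def n_per_arm(p1, p0, alpha=0.05, power=0.9):
    """Standard two-proportion sample size per arm (normal approximation)."""
    za, zb = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
    pbar = (p1 + p0) / 2
    num = (za * math.sqrt(2 * pbar * (1 - pbar))
           + zb * math.sqrt(p1 * (1 - p1) + p0 * (1 - p0))) ** 2
    return math.ceil(num / (p1 - p0) ** 2)

n = n_per_arm(0.60, 0.50)
print(f"no missing data: {n} per arm")
for r in [0.05, 0.10, 0.20]:                 # anticipated missingness rate
    print(f"missing rate {r:.0%}: naive inflation -> {math.ceil(n / (1 - r))} per arm")
# The article's point: once sensitivity analyses are part of the primary
# analysis, the required inflation can be far larger than this 1/(1-r)
# rule, and for small target effects the desired power may be unattainable.
```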

12.
BACKGROUND: In longitudinal studies, it is extremely rare that all the planned measurements are actually performed. Missing data often result from drop-outs, but may also be intermittent. In both cases, the analysis of incomplete data necessarily requires assumptions that are generally unverifiable, and the need for sensitivity analyses has been advocated over the past few years. In this article, attention is given to longitudinal binary data. METHODS: A method is proposed, which is based on a log-linear model. A sensitivity parameter is introduced that represents the relationship between the response mechanism and the missing data mechanism. It is recommended not to estimate this parameter, but to consider a range of plausible values, and to estimate the parameters of interest conditionally on these plausible values. This makes it possible to assess the sensitivity of the conclusion of a study to various assumptions regarding the missing data mechanism. RESULTS: This method was applied to a randomized clinical trial comparing the efficacy of two treatment regimens in patients with persistent asthma. The sensitivity analysis showed that the conclusion of this study was robust to missing data.

13.
Missing outcome data are commonly encountered in randomized controlled trials and hence may need to be addressed in a meta‐analysis of multiple trials. A common and simple approach to deal with missing data is to restrict analysis to individuals for whom the outcome was obtained (complete case analysis). However, estimated treatment effects from complete case analyses are potentially biased if informative missing data are ignored. We develop methods for estimating meta‐analytic summary treatment effects for continuous outcomes in the presence of missing data for some of the individuals within the trials. We build on a method previously developed for binary outcomes, which quantifies the degree of departure from a missing at random assumption via the informative missingness odds ratio. Our new model quantifies the degree of departure from missing at random using either an informative missingness difference of means or an informative missingness ratio of means, both of which relate the mean value of the missing outcome data to that of the observed data. We propose estimating the treatment effects, adjusted for informative missingness, and their standard errors by a Taylor series approximation and by a Monte Carlo method. We apply the methodology to examples of both pairwise and network meta‐analysis with multi‐arm trials. © 2014 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
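A crude sketch of an IMDoM-style sensitivity analysis for a pairwise meta-analysis (two invented trials; uncertainty in the informative missingness parameter is ignored here, whereas the paper propagates it via a Taylor series approximation or Monte Carlo):

```python
import numpy as np

# Hypothetical two-trial meta-analysis, continuous outcome.
# Each arm: (observed mean, SD, n observed, n missing).
trials = [
    dict(t=(10.2, 4.0, 90, 10), c=(12.0, 4.2, 85, 15)),
    dict(t=( 9.5, 3.8, 70, 20), c=(11.1, 4.1, 75, 12)),
]

def adjusted_arm(mean, sd, n_obs, n_mis, imdom):
    """Shift the arm mean by the informative missingness difference of
    means (IMDoM): missing outcomes are assumed to average mean + imdom."""
    pi = n_mis / (n_obs + n_mis)
    mu = mean + pi * imdom          # mixture mean over observed/missing
    var = sd ** 2 / n_obs           # crude variance, ignoring imdom uncertainty
    return mu, var

for imdom in [-2.0, 0.0, 2.0]:      # sensitivity grid (0 = missing at random)
    effs, ws = [], []
    for tr in trials:
        mt, vt = adjusted_arm(*tr["t"], imdom)
        mc, vc = adjusted_arm(*tr["c"], imdom)
        effs.append(mt - mc); ws.append(1 / (vt + vc))
    pooled = np.average(effs, weights=ws)   # fixed-effect inverse-variance pooling
    print(f"IMDoM={imdom:+.1f}: pooled mean difference = {pooled:+.3f}")
```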

14.
During the course of a clinical trial, subjects may experience treatment failure. For ethical reasons, it is necessary to administer emergency or rescue medications for such subjects. However, the rescue medications may bias the set of response measurements. This bias is of particular concern if a subject has been randomized to the control group, and the rescue medications improve the subject's condition. The standard approach to analysing data from a clinical trial is to perform an intent-to-treat (ITT) analysis, wherein the data are analysed according to treatment randomization. Supplementary analyses may be performed in addition to the ITT analysis to account for the effect of treatment failures and rescue medications. A Bayesian, counterfactual approach, which uses the data augmentation (DA) algorithm, is proposed for supplemental analysis. A simulation study is conducted to compare the operating characteristics of this procedure with a likelihood-based, counterfactual approach based on the EM algorithm. An example from the Asthma Clinical Research Network (ACRN) is used to illustrate the Bayesian procedure.

15.
Cost‐effectiveness analyses (CEA) conducted alongside randomised trials provide key evidence for informing healthcare decision making, but missing data pose substantive challenges. Recently, there have been a number of developments in methods and guidelines addressing missing data in trials. However, it is unclear whether these developments have permeated CEA practice. This paper critically reviews the extent of and methods used to address missing data in recently published trial‐based CEA. Issues of the Health Technology Assessment journal from 2013 to 2015 were searched. Fifty‐two eligible studies were identified. Missing data were very common; the median proportion of trial participants with complete cost‐effectiveness data was 63% (interquartile range: 47%–81%). The most common approach for the primary analysis was to restrict analysis to those with complete data (43%), followed by multiple imputation (30%). Half of the studies conducted some sort of sensitivity analyses, but only 2 (4%) considered possible departures from the missing‐at‐random assumption. Further improvements are needed to address missing data in cost‐effectiveness analyses conducted alongside randomised trials. These should focus on limiting the extent of missing data, choosing an appropriate method for the primary analysis that is valid under contextually plausible assumptions, and conducting sensitivity analyses to departures from the missing‐at‐random assumption.

16.
Current advances in technology provide less invasive or less expensive diagnostic tests for identifying disease status. When a diagnostic test is evaluated against an invasive or expensive gold standard test, one often finds that not all patients undergo the gold standard test. The sensitivity and specificity estimates based only on the patients with verified disease status are often biased. This bias is called verification bias. Many authors have examined the consequences of verification bias and have proposed bias correction methods based on the assumption of independence between disease status and selection for verification conditionally on the test result, or equivalently on the assumption that the disease status is missing at random using missing data terminology. This assumption may not be valid and one may need to consider adjustment for a possible non-ignorable verification bias resulting from the non-ignorable missing data mechanism. Such an adjustment involves ultimately uncheckable assumptions and requires sensitivity analysis. The sensitivity analysis is most often accomplished by perturbing parameters in the chosen model for the missing data mechanism, and it has a local flavour because perturbations are around the fitted model. In this paper we propose a global sensitivity analysis for assessing performance of a diagnostic test in the presence of verification bias. We derive a region of all sensitivity and specificity values consistent with the observed data and call this region a test ignorance region (TIR). The term 'ignorance' refers to the lack of knowledge due to the missing disease status for the not verified patients. The methodology is illustrated with two clinical examples.
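A sketch of extreme-case bounds in the spirit of a test ignorance region, obtained by assigning all unverified subjects to each possible disease status (invented counts; the paper derives the exact joint region rather than these marginal corner bounds):

```python
# Hypothetical verification-bias data, by test result:
# among test-positives: v_pos verified (d_pos diseased), u_pos unverified;
# among test-negatives: v_neg verified (d_neg diseased), u_neg unverified.
v_pos, d_pos, u_pos = 80, 60, 40
v_neg, d_neg, u_neg = 120, 10, 160

def bounds():
    """Bounds on sensitivity/specificity from assigning all unverified
    subjects in each stratum to diseased or non-diseased status."""
    sens, spec = [], []
    for extra_pos in (0, u_pos):          # unverified positives all diseased?
        for extra_neg in (0, u_neg):      # unverified negatives all diseased?
            tp = d_pos + extra_pos
            fn = d_neg + extra_neg
            fp = (v_pos + u_pos) - tp
            tn = (v_neg + u_neg) - fn
            sens.append(tp / (tp + fn))
            spec.append(tn / (tn + fp))
    return (min(sens), max(sens)), (min(spec), max(spec))

(s_lo, s_hi), (sp_lo, sp_hi) = bounds()
print(f"sensitivity ignorance region: [{s_lo:.2f}, {s_hi:.2f}]")
print(f"specificity ignorance region: [{sp_lo:.2f}, {sp_hi:.2f}]")
```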

17.
The effects of missing values for a confounding variable are investigated in the setting of case-control studies in which, for simplicity, the effect of one binary risk factor and one categorical confounding variable on disease risk is under investigation. Some ad hoc techniques with which to deal with missing values are examined under different assumptions about the missing-data mechanism. Examples are given to illustrate that the magnitude of the bias that is introduced by applying an inadequate procedure can be large under circumstances that occur frequently in empirical research. This is true even for so-called complete case analysis, i.e., when only data on subjects with complete information are used. Appropriate bias corrections are derived. Making use of data on those subjects who are neglected in complete case analysis by creating an additional category always results in biased estimation. An alternative is to allocate these subjects to the cells of the contingency table in an appropriate manner. This approach yields consistent estimates if the data are missing at random. Choosing an appropriate method for dealing with missing values always requires some knowledge of why the data are missing. This suggests that investigators should carry out validation studies to understand whether the missing values occur randomly across the study population or occur more frequently in specific subgroups.

18.
Adjustment for baseline variables in a randomized trial can increase power to detect a treatment effect. However, when baseline data are partly missing, analysis of complete cases is inefficient. We consider various possible improvements in the case of normally distributed baseline and outcome variables. Joint modelling of baseline and outcome is the most efficient method. Mean imputation is an excellent alternative, subject to three conditions. Firstly, if baseline and outcome are correlated more than about 0.6 then weighting should be used to allow for the greater information from complete cases. Secondly, imputation should be carried out in a deterministic way, using other baseline variables if possible, but not using randomized arm or outcome. Thirdly, if baselines are not missing completely at random, then a dummy variable for missingness should be included as a covariate (the missing indicator method). The methods are illustrated in a randomized trial in community psychiatry.
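A minimal sketch of deterministic mean imputation for a missing baseline combined with the missing indicator method, on simulated data (statsmodels OLS; all values invented):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300
arm = rng.integers(0, 2, n)
baseline = rng.normal(50, 10, n)
outcome = 0.7 * baseline + 3.0 * arm + rng.normal(0, 8, n)
baseline[rng.random(n) < 0.2] = np.nan        # some baselines missing

miss = np.isnan(baseline)
# Deterministic mean imputation: no randomized arm or outcome is used
b_imp = np.where(miss, np.nanmean(baseline), baseline)

# Missing indicator method: add a dummy for missingness as a covariate
X = sm.add_constant(np.column_stack([arm, b_imp, miss.astype(float)]))
fit = sm.OLS(outcome, X).fit()
print(fit.params)   # [intercept, treatment effect, baseline slope, missing dummy]
```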

19.
OBJECTIVE: Properly handling missing data is a challenge, especially when working with older populations that have high levels of morbidity and mortality. We illustrate methods for understanding whether missing values are ignorable and describe implications of their use in regression modeling. STUDY DESIGN AND SETTING: The use of missingness screens such as Little's test of missing completely at random (MCAR, 1988) and the Index of Sensitivity to Nonignorability (ISNI) by Troxel and colleagues (2004) introduces complications for regression modeling, and, particularly, for risk factor selection. In a case study of older patients with simulated missing values for a delirium outcome set in a 14-bed medical intensive care unit, we outline a model fitting process that incorporates the use of missingness screens, controls for collinearity, and selects variables based on model fit. RESULTS: The proposed model fitting process identifies more actual risk factors for ICU delirium than does a complete case analysis. CONCLUSION: Use of imputation and other methods for handling missing data assists in the identification of risk factors. They do so accurately only when correct assumptions are made about the nature of missing data. Missingness screens enable researchers to investigate these assumptions.

20.
For longitudinal binary data with non‐monotone non‐ignorably missing outcomes over time, a full likelihood approach is complicated algebraically, and with many follow‐up times, maximum likelihood estimation can be computationally prohibitive. As alternatives, two pseudo‐likelihood approaches have been proposed that use minimal parametric assumptions. One formulation requires specification of the marginal distributions of the outcome and missing data mechanism at each time point, but uses an ‘independence working assumption,’ i.e. an assumption that observations are independent over time. Another method avoids having to estimate the missing data mechanism by formulating a ‘protective estimator.’ In simulations, these two estimators can be very inefficient, both for estimating time trends in the first case and for estimating both time‐varying and time‐stationary effects in the second. In this paper, we propose the use of the optimal weighted combination of these two estimators, and in simulations we show that the optimal weighted combination can be much more efficient than either estimator alone. Finally, the proposed method is used to analyze data from two longitudinal clinical trials of HIV‐infected patients. Copyright © 2010 John Wiley & Sons, Ltd.
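A sketch of the minimum-variance weighted combination of two estimators, given their variances and covariance (all numbers invented; the paper's estimators and weighting details differ):

```python
import numpy as np

def optimal_combination(est1, est2, var1, var2, cov12=0.0):
    """Minimum-variance combination w*est1 + (1-w)*est2, with
    w* = (var2 - cov12) / (var1 + var2 - 2*cov12)."""
    w = (var2 - cov12) / (var1 + var2 - 2 * cov12)
    est = w * est1 + (1 - w) * est2
    var = w**2 * var1 + (1 - w)**2 * var2 + 2 * w * (1 - w) * cov12
    return w, est, var

# Hypothetical: working-independence estimator vs protective estimator
w, est, var = optimal_combination(est1=0.42, var1=0.030,
                                  est2=0.55, var2=0.050, cov12=0.010)
print(f"weight on estimator 1: {w:.2f}; combined estimate {est:.3f} "
      f"(SE {np.sqrt(var):.3f})")
```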
