20 similar records found (search time: 15 ms)
1.
A Bayesian framework to account for uncertainty due to missing binary outcome data in pairwise meta-analysis
Missing outcome data are a common threat to the validity of the results from randomised controlled trials (RCTs), which, if not analysed appropriately, can lead to misleading treatment effect estimates. Studies with missing outcome data also threaten the validity of any meta-analysis that includes them. A conceptually simple Bayesian framework is proposed to account for uncertainty due to missing binary outcome data in meta-analysis. A pattern-mixture model is fitted, which allows the incorporation of prior information on a parameter describing the missingness mechanism. We describe several alternative parameterisations, the simplest being a prior on the probability of an event in the missing individuals, together with a series of structural assumptions that can be made concerning the missingness parameters. We use artificial data scenarios to demonstrate the ability of the model to produce a bias-adjusted estimate of treatment effect that accounts for uncertainty. A meta-analysis of haloperidol versus placebo for schizophrenia is used to illustrate the model. We end with a discussion of elicitation of priors, issues with poor reporting and potential extensions of the framework. Our framework allows one to make the best use of evidence produced from RCTs with missing outcome data in a meta-analysis, accounts for any uncertainty induced by missing data and fits easily into a wider evidence synthesis framework for medical decision making. © 2015 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
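As a rough illustration of the pattern-mixture idea described in this abstract, the following Python sketch mixes a Beta posterior for completers with an assumed informative prior on the event probability among the missing. All counts and prior parameters are hypothetical, not the paper's elicited values.

```python
import numpy as np

rng = np.random.default_rng(0)
S = 10_000  # Monte Carlo draws

# Hypothetical two-arm trial: (events, completers, missing) per arm.
arms = {"treatment": (30, 80, 20), "control": (45, 85, 15)}

p_overall = {}
for arm, (events, completers, missing) in arms.items():
    # Posterior for the event probability among completers (Beta(1,1) prior).
    p_obs = rng.beta(1 + events, 1 + completers - events, S)
    # Informative prior on the event probability among those lost to follow-up;
    # the Beta(4, 2) centre (~0.67) is an assumed illustration, not elicited.
    p_miss = rng.beta(4, 2, S)
    n = completers + missing
    # Pattern mixture: overall risk is a weighted mix of the two patterns.
    p_overall[arm] = (completers * p_obs + missing * p_miss) / n

log_or = (np.log(p_overall["treatment"] / (1 - p_overall["treatment"]))
          - np.log(p_overall["control"] / (1 - p_overall["control"])))
print(f"bias-adjusted log-OR: {log_or.mean():.3f} "
      f"(95% interval {np.percentile(log_or, 2.5):.3f}, "
      f"{np.percentile(log_or, 97.5):.3f})")
```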
2.
Linear regression is one of the most popular statistical techniques. In linear regression analysis, missing covariate data occur often. A recent approach to analysing such data is a weighted estimating equation. With weighted estimating equations, the contribution to the estimating equation from a complete observation is weighted by the inverse 'probability of being observed'. In this paper, we propose a weighted estimating equation in which the missing covariates are (incorrectly) assumed to be multivariate normal, but which still produces consistent estimates as long as the probability of being observed is correctly modelled. In simulations, these weighted estimating equations appear to be highly efficient when compared with the most efficient weighted estimating equations proposed by Robins et al. and Lipsitz et al., yet they are much less computationally intensive. We compare the weighted estimating equations proposed in this paper to the efficient weighted estimating equations via an example and a simulation study. We only consider missing data which are missing at random; non-ignorably missing data are not addressed in this paper.
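A minimal sketch of the general inverse-probability-weighting idea behind such weighted estimating equations, not the authors' specific multivariate-normal construction; the data and missingness model are simulated for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2_000
z = rng.normal(size=n)                    # always-observed covariate
x = 0.5 * z + rng.normal(size=n)          # covariate subject to missingness
y = 1.0 + 2.0 * x - 1.0 * z + rng.normal(size=n)

# Missingness of x depends on observed data only (MAR): here on z and y.
p_obs = 1 / (1 + np.exp(-(0.5 + 0.8 * z - 0.3 * y)))
observed = rng.random(n) < p_obs

# Step 1: model the probability of being observed from fully observed variables.
design_r = sm.add_constant(np.column_stack([z, y]))
pi_hat = sm.Logit(observed.astype(float), design_r).fit(disp=0).predict(design_r)

# Step 2: weight complete cases by 1 / P(observed) in the estimating equation.
w = 1.0 / pi_hat[observed]
design_y = sm.add_constant(np.column_stack([x[observed], z[observed]]))
fit = sm.WLS(y[observed], design_y, weights=w).fit()
print(fit.params)   # approx. (1, 2, -1) despite informative missingness
```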
3.
Baptiste Leurent, Manuel Gomes, Suzie Cro, Nicola Wiles, James R. Carpenter. Health Economics, 2020, 29(2): 171-184
Missing data are a common issue in cost-effectiveness analysis (CEA) alongside randomised trials and are often addressed assuming the data are 'missing at random'. However, this assumption is often questionable, and sensitivity analyses are required to assess the implications of departures from missing at random. Reference-based multiple imputation provides an attractive approach for conducting such sensitivity analyses, because missing data assumptions are framed in an intuitive way by making reference to other trial arms. For example, a plausible missing-not-at-random mechanism in a placebo-controlled trial would be to assume that participants in the experimental arm who dropped out stopped taking their treatment and have outcomes similar to those in the placebo arm. Drawing on the increasing use of this approach in other areas, this paper aims to extend and illustrate the reference-based multiple imputation approach in CEA. It introduces the principles of reference-based imputation and proposes an extension to the CEA context. The method is illustrated in the CEA of the CoBalT trial evaluating cognitive behavioural therapy for treatment-resistant depression. Stata code is provided. We find that reference-based multiple imputation provides a relevant and accessible framework for assessing the robustness of CEA conclusions to different missing data assumptions.
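A bare-bones illustration of jump-to-reference-style imputation, assuming a single follow-up outcome regressed on baseline. The data are hypothetical, and a fully proper implementation would also draw the reference-model coefficients from their posterior before each imputation.

```python
import numpy as np

rng = np.random.default_rng(2)
n, M = 200, 20                      # patients per arm, number of imputations

# Hypothetical trial data: baseline and final outcome, treatment effect = -3.
base = {a: rng.normal(50, 8, n) for a in ("control", "active")}
final = {"control": base["control"] + rng.normal(0, 5, n),
         "active":  base["active"] - 3 + rng.normal(0, 5, n)}
dropout = rng.random(n) < 0.3       # 30% dropout in the active arm
final["active"][dropout] = np.nan

# Fit the reference (control) model: final ~ baseline.
X = np.column_stack([np.ones(n), base["control"]])
beta, *_ = np.linalg.lstsq(X, final["control"], rcond=None)
resid_sd = np.std(final["control"] - X @ beta, ddof=2)

# Jump to reference: active-arm dropouts are imputed from the control profile.
# (Simplification: beta is treated as fixed rather than redrawn each time.)
estimates = []
for _ in range(M):
    y = final["active"].copy()
    xb = beta[0] + beta[1] * base["active"][dropout]
    y[dropout] = xb + rng.normal(0, resid_sd, dropout.sum())
    estimates.append(y.mean() - final["control"].mean())
print(f"J2R treatment effect: {np.mean(estimates):.2f}")
```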
4.
Missingness mechanism is in theory unverifiable based only on observed data. If there is a suspicion of missing not at random, researchers often perform a sensitivity analysis to evaluate the impact of various missingness mechanisms. In general, sensitivity analysis approaches require a full specification of the relationship between missing values and missingness probabilities. Such a relationship can be specified based on a selection model, a pattern-mixture model or a shared parameter model. Under the selection modelling framework, we propose a sensitivity analysis approach using a nonparametric multiple imputation strategy. The proposed approach only requires specifying the correlation coefficient between missing values and selection (response) probabilities under a selection model. The correlation coefficient is a standardized measure and can be used as a natural sensitivity analysis parameter. The sensitivity analysis involves multiple imputations of missing values, yet the sensitivity parameter is only used to select imputing/donor sets. Hence, the proposed approach might be more robust against misspecifications of the sensitivity parameter. For illustration, the proposed approach is applied to incomplete measurements of preoperative hemoglobin A1c levels for patients who had high-grade carotid artery stenosis and were scheduled for surgery. A simulation study is conducted to evaluate the performance of the proposed approach.
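A loose sketch of the donor-tilting idea, where a sensitivity parameter shifts the hot-deck donor pool toward lower or higher observed outcomes. This is an illustrative simplification, not the authors' exact nonparametric multiple imputation procedure, and the tilting function is an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000
y = rng.normal(size=n)
resp = rng.random(n) < 0.7               # 30% nonresponse (MCAR here, for simplicity)
y_obs = y[resp]

def impute_mean(rho, M=50):
    """Tilted hot-deck: rho links outcome level to donor-selection probability."""
    z = (y_obs - y_obs.mean()) / y_obs.std()
    w = np.exp(rho * z)                  # rho = 0 reproduces an untilted hot-deck
    w /= w.sum()
    means = []
    for _ in range(M):
        donors = rng.choice(y_obs, size=(~resp).sum(), p=w)
        means.append(np.concatenate([y_obs, donors]).mean())
    return np.mean(means)

for rho in (-0.5, -0.2, 0.0, 0.2, 0.5):  # sensitivity sweep over the parameter
    print(f"rho={rho:+.1f}: estimated mean = {impute_mean(rho):+.3f}")
```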
5.
Missing outcome data and incomplete uptake of randomised interventions are common problems, which complicate the analysis and interpretation of randomised controlled trials, and are rarely addressed well in practice. To promote the implementation of recent methodological developments, we describe sequences of randomisation-based analyses that can be used to explore both issues. We illustrate these in an Internet-based trial evaluating the use of a new interactive website for those seeking help to reduce their alcohol consumption, in which the primary outcome was available for less than half of the participants and uptake of the intervention was limited. For missing outcome data, we first employ data on intermediate outcomes and intervention use to make a missing at random assumption more plausible, with analyses based on generalised estimating equations, mixed models and multiple imputation. We then use data on the ease of obtaining outcome data and sensitivity analyses to explore departures from the missing at random assumption. For incomplete uptake of randomised interventions, we estimate structural mean models by using instrumental variable methods. In the alcohol trial, there is no evidence of benefit unless rather extreme assumptions are made about the missing data, nor is there evidence of an important benefit among more extensive users of the intervention. These findings considerably aid the interpretation of the trial's results. More generally, the analyses proposed are applicable to many trials with missing outcome data or incomplete intervention uptake. To facilitate use by others, Stata code is provided for all methods.
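In the simplest binary-uptake case, the structural-mean-model step reduces to a Wald-type instrumental variable estimator with randomisation as the instrument; a minimal simulated sketch, with all parameter values hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000
zrand = rng.integers(0, 2, n)                 # randomised arm, the instrument
u = rng.normal(size=n)                        # unmeasured confounder of uptake
uptake = (zrand == 1) & (rng.random(n) < 1 / (1 + np.exp(-u)))
y = 1.0 * uptake + u + rng.normal(size=n)     # true effect of actual use = 1.0

# Naive as-treated comparison is confounded by u:
print(f"as-treated difference: {y[uptake].mean() - y[~uptake].mean():.3f}")

# Wald/IV estimator of the structural-mean-model effect of actual use:
itt = y[zrand == 1].mean() - y[zrand == 0].mean()
d_uptake = uptake[zrand == 1].mean() - uptake[zrand == 0].mean()
print(f"IV effect of use: {itt / d_uptake:.3f}")   # recovers approx. 1.0
```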
6.
Pattern-mixture models (PMM) and selection models (SM) are alternative approaches for statistical analysis when faced with incomplete data and a nonignorable missing-data mechanism. Both models make empirically unverifiable assumptions and need additional constraints to identify the parameters. Here, we first introduce intuitive parameterizations to identify PMM for different types of outcome with distribution in the exponential family; then we translate these to their equivalent SM approach. This provides a unified framework for performing sensitivity analysis under either setting. These new parameterizations are transparent, easy-to-use, and provide dual interpretation from both the PMM and SM perspectives. A Bayesian approach is used to perform sensitivity analysis, deriving inferences using informative prior distributions on the sensitivity parameters. These models can be fitted using software that implements Gibbs sampling. Copyright © 2014 John Wiley & Sons, Ltd.
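A minimal Monte Carlo sketch of the pattern-mixture identification with an informative prior on the sensitivity parameter, in place of full Gibbs sampling; the data and prior values are assumed for illustration only.

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical data: 70 observed continuous outcomes, 30 missing.
y_obs = rng.normal(1.0, 1.0, 70)
n_obs, n_mis = len(y_obs), 30

# Pattern-mixture identification: E[Y] = p*mu_obs + (1-p)*(mu_obs + delta),
# where delta, the unidentified sensitivity parameter, gets an informative prior.
S = 20_000
mu_obs = (y_obs.mean()
          + y_obs.std(ddof=1) / np.sqrt(n_obs) * rng.standard_normal(S))
delta = rng.normal(-0.5, 0.25, S)     # assumed prior: nonrespondents do worse
p = n_obs / (n_obs + n_mis)
mu = p * mu_obs + (1 - p) * (mu_obs + delta)
print(f"posterior mean {mu.mean():.3f}, 95% CrI "
      f"({np.percentile(mu, 2.5):.3f}, {np.percentile(mu, 97.5):.3f})")
```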
7.
Decision analytical models are widely used in economic evaluation of health care interventions with the objective of generating valuable information to assist health policy decision-makers to allocate scarce health care resources efficiently. The whole decision modelling process can be summarised in four stages: (i) a systematic review of the relevant data (including meta-analyses), (ii) estimation of all inputs into the model (including effectiveness, transition probabilities and costs), (iii) sensitivity analysis for data and model specifications, and (iv) evaluation of the model. The aim of this paper is to demonstrate how the individual components of decision modelling, outlined above, may be addressed simultaneously in one coherent Bayesian model (sometimes known as a comprehensive decision analytical model) and evaluated using Markov Chain Monte Carlo simulation implemented in the specialist software WinBUGS. To illustrate the method described, it is applied to two illustrative examples: (1) the prophylactic use of neuraminidase inhibitors for the prevention of influenza; (2) the use of taxanes for the second-line treatment of advanced breast cancer. The advantages of integrating the four stages outlined into one comprehensive decision analytical model, compared to the conventional 'two-stage' approach, are discussed.
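A toy probabilistic decision model in this spirit, propagating input uncertainty through to incremental net benefit. All inputs and distributions are hypothetical; the paper itself implements the full comprehensive model in WinBUGS via MCMC.

```python
import numpy as np

rng = np.random.default_rng(6)
S = 10_000   # simulations propagating input uncertainty through the model

# Stage (ii) inputs with uncertainty (all values hypothetical):
p_event_std = rng.beta(40, 60, S)          # event risk under standard care
log_or = rng.normal(-0.4, 0.15, S)         # pooled effect from a meta-analysis
odds = p_event_std / (1 - p_event_std) * np.exp(log_or)
p_event_new = odds / (1 + odds)            # event risk under the new therapy
cost_event = rng.gamma(25, 200, S)         # cost of an event
cost_new_drug = 300.0                      # acquisition cost of the new therapy
qaly_loss_event = rng.beta(20, 80, S)      # QALY loss per event

# Stage (iv): evaluate incremental cost, effect and net benefit at 20k/QALY.
d_cost = cost_new_drug + (p_event_new - p_event_std) * cost_event
d_qaly = (p_event_std - p_event_new) * qaly_loss_event
inb = 20_000 * d_qaly - d_cost
print(f"P(cost-effective at 20k/QALY) = {(inb > 0).mean():.2f}")
```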
8.
9.
Controlled pattern imputation for sensitivity analysis of longitudinal binary and ordinal outcomes with nonignorable dropout
Yongqiang Tang. Statistics in Medicine, 2018, 37(9): 1467-1481
The controlled imputation method refers to a class of pattern mixture models that have been commonly used as sensitivity analyses of longitudinal clinical trials with nonignorable dropout in recent years. These pattern mixture models assume that participants in the experimental arm after dropout have similar response profiles to the control participants or have worse outcomes than otherwise similar participants who remain on the experimental treatment. In spite of its popularity, controlled imputation has not been formally developed for longitudinal binary and ordinal outcomes, partly due to the lack of a natural multivariate distribution for such endpoints. In this paper, we propose two approaches for implementing controlled imputation for binary and ordinal data, based on sequential logistic regression and on the multivariate probit model, respectively. Efficient Markov chain Monte Carlo algorithms are developed for missing data imputation by using the monotone data augmentation technique for the sequential logistic regression and a parameter-expanded monotone data augmentation scheme for the multivariate probit model. We assess the performance of the proposed procedures by simulation and the analysis of a schizophrenia clinical trial and compare them with the fully conditional specification, last observation carried forward, and baseline observation carried forward imputation methods.
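A stripped-down sketch of control-based sequential logistic imputation with two visits and monotone dropout. The data are hypothetical; a fully proper implementation would also draw the reference-model coefficients, and the paper's monotone data augmentation MCMC machinery is not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 400
arm = rng.integers(0, 2, n)                      # 1 = experimental arm
y1 = (rng.random(n) < 0.5).astype(float)         # binary visit-1 response
p2 = 1 / (1 + np.exp(-(-0.5 + 1.2 * y1 + 0.7 * arm)))
y2 = (rng.random(n) < p2).astype(float)          # binary visit-2 response
drop = (arm == 1) & (rng.random(n) < 0.3)        # monotone dropout, active arm
y2_mis = np.where(drop, np.nan, y2)

# Control-based sequential logistic model: fit P(Y2 | Y1) on control data only,
# then impute experimental-arm dropouts from that reference profile.
ctrl = arm == 0
Xc = np.column_stack([np.ones(ctrl.sum()), y1[ctrl]])
ref = sm.Logit(y2_mis[ctrl], Xc).fit(disp=0)

M, effects = 20, []
Xd = np.column_stack([np.ones(drop.sum()), y1[drop]])
for _ in range(M):
    y2_imp = y2_mis.copy()
    p_ref = ref.predict(Xd)
    y2_imp[drop] = (rng.random(drop.sum()) < p_ref).astype(float)
    effects.append(y2_imp[arm == 1].mean() - y2_imp[arm == 0].mean())
print(f"copy-reference risk difference at visit 2: {np.mean(effects):.3f}")
```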
10.
Amit K. Chowdhry, Robert H. Dworkin, Michael P. McDermott. Statistics in Medicine, 2016, 35(17): 3021-3032
We consider a study-level meta-analysis with a normally distributed outcome variable and possibly unequal study-level variances, where the object of inference is the difference in means between a treatment and control group. A common complication in such an analysis is missing sample variances for some studies. A frequently used approach is to impute the weighted (by sample size) mean of the observed variances (mean imputation). Another approach is to include only those studies with variances reported (complete case analysis). Both mean imputation and complete case analysis are only valid under the missing-completely-at-random assumption, and even then the inverse variance weights produced are not necessarily optimal. We propose a multiple imputation method employing gamma meta-regression to impute the missing sample variances. Our method takes advantage of study-level covariates that may be used to provide information about the missing data. Through simulation studies, we show that multiple imputation, when the imputation model is correctly specified, is superior to competing methods in terms of confidence interval coverage probability and type I error probability when testing a specified group difference. Finally, we describe a similar approach to handling missing variances in cross-over studies. Copyright © 2016 John Wiley & Sons, Ltd.
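A minimal sketch of gamma meta-regression imputation of missing study variances, using sample size as the study-level covariate. The data are simulated, only one imputation draw is shown (proper multiple imputation repeats the draw M times), and the shape-from-dispersion step is a standard moment approximation rather than the paper's exact procedure.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
k = 30
n_i = rng.integers(20, 200, k).astype(float)      # study sample sizes
true_var = np.exp(1.0 - 0.002 * n_i)              # variance shrinks with size
s2 = true_var * rng.chisquare(10, k) / 10         # observed sample variances
miss = rng.random(k) < 0.3                        # ~30% of variances unreported

# Gamma meta-regression of observed variances on the study-level covariate.
X = sm.add_constant(n_i)
glm = sm.GLM(s2[~miss], X[~miss],
             family=sm.families.Gamma(sm.families.links.Log()))
fit = glm.fit()

# One stochastic imputation draw from the fitted gamma model:
mu = fit.predict(X[miss])
shape = 1.0 / fit.scale                 # Gamma shape via the GLM dispersion
s2_imp = s2.copy()
s2_imp[miss] = rng.gamma(shape, mu / shape)
weights = n_i / s2_imp                  # illustrative inverse-variance weights
print(f"imputed {miss.sum()} variances; weight range "
      f"{weights.min():.1f}-{weights.max():.1f}")
```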
11.
Existing methods for power analysis for longitudinal study designs are limited in that they do not adequately address random missing data patterns. Although the pattern of missing data can be assessed during data analysis, it is unknown during the design phase of a study. The random nature of the missing data pattern adds another layer of complexity in addressing missing data for power analysis. In this paper, we model the occurrence of missing data with a two-state, first-order Markov process and integrate the modelling information into the power function to account for random missing data patterns. The Markov model is easily specified to accommodate different anticipated missing data processes. We develop this approach for the two most popular longitudinal models: the generalized estimating equations (GEE) and the linear mixed-effects model under the missing completely at random (MCAR) assumption. For GEE, we also limit our consideration to the working independence correlation model. The proposed methodology is illustrated with numerous examples that are motivated by real study designs.
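The two-state Markov missingness process yields marginal observation probabilities by a simple recursion, which can then feed a GEE or mixed-model power function in place of a fixed missingness pattern; a small sketch with assumed transition probabilities.

```python
import numpy as np

# Two-state (observed/missing), first-order Markov model for visit attendance.
# p10: P(observed -> missing), p01: P(missing -> observed); values assumed.
p10, p01, T = 0.15, 0.40, 6

probs = np.empty(T)
probs[0] = 1.0                         # everyone observed at baseline
for t in range(1, T):
    probs[t] = probs[t - 1] * (1 - p10) + (1 - probs[t - 1]) * p01

# These marginal observation probabilities replace a deterministic
# missingness pattern in the power calculation.
print("P(observed) by visit:", np.round(probs, 3))
print("expected observations per subject:", probs.sum().round(2))
```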
12.
Manuel Gomes, Michael G. Kenward, Richard Grieve, James Carpenter. Statistics in Medicine, 2020, 39(11): 1658-1674
Nonignorable missing data pose key challenges for estimating treatment effects because the substantive model may not be identifiable without imposing further assumptions. For example, the Heckman selection model has been widely used for handling nonignorable missing data but requires correct assumptions, both about the joint distribution of the missingness and outcome and about the validity of an exclusion restriction. Recent studies have revisited how alternative selection model approaches, for example estimated by multiple imputation (MI) and maximum likelihood, relate to Heckman-type approaches in addressing the first hurdle. However, the extent to which these different selection models rely on the exclusion restriction assumption with nonignorable missing data is unclear. Motivated by an interventional study (REFLUX) with nonignorable missing outcome data in half of the sample, this article critically examines the role of the exclusion restriction in Heckman, MI, and full-likelihood selection models when addressing nonignorability. We explore the implications of the different methodological choices concerning the exclusion restriction for relative bias and root-mean-squared error in estimating treatment effects. We find that the relative performance of the methods differs in practically important ways according to the relevance and strength of the exclusion restriction. The full-likelihood approach is less sensitive to alternative assumptions about the exclusion restriction than Heckman-type models and appears an appropriate method for handling nonignorable missing data. We illustrate the implications of method choice for inference in the REFLUX study, which evaluates the effect of laparoscopic surgery on long-term quality of life for patients with gastro-oesophageal reflux disease.
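A compact sketch of the classic Heckman two-step estimator with an excluded instrument, on simulated data; the MI and full-likelihood variants compared in the paper are not shown, and all parameter values are hypothetical.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(9)
n = 5_000
x = rng.normal(size=n)              # covariate in the outcome model
w = rng.normal(size=n)              # exclusion restriction: affects selection
                                    # only, not the outcome
e, u = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], n).T
selected = (0.5 + 1.0 * w + 0.5 * x + u) > 0         # response indicator
y = 1.0 + 2.0 * x + e                                # outcome, effect of x = 2

# Step 1: probit selection model including the excluded instrument w.
Zs = sm.add_constant(np.column_stack([w, x]))
probit = sm.Probit(selected.astype(float), Zs).fit(disp=0)
xb = Zs @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)                    # inverse Mills ratio

# Step 2: outcome regression on the selected sample, augmented with the IMR.
Xo = sm.add_constant(np.column_stack([x[selected], imr[selected]]))
print(sm.OLS(y[selected], Xo).fit().params)          # slope on x approx. 2
```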
13.
Michelle Shardell, Gregory E. Hicks, Ram R. Miller, Jay Magaziner. Statistics in Medicine, 2010, 29(22): 2282-2296
We propose a semiparametric marginal modeling approach for longitudinal analysis of cohorts with data missing due to death and non-response, to estimate regression parameters interpreted as conditional on being alive. Our proposed method accommodates outcomes and time-dependent covariates that are missing not at random with non-monotone missingness patterns via inverse-probability weighting. Missing covariates are replaced by consistent estimates derived from a simultaneously solved inverse-probability-weighted estimating equation. Thus, we utilize data points with observed outcomes and missing covariates beyond the estimated weights, while avoiding numerical methods to integrate over missing covariates. The approach is applied to a cohort of elderly female hip fracture patients to estimate the prevalence of walking disability over time as a function of body composition, inflammation, and age. Copyright © 2010 John Wiley & Sons, Ltd.
14.
Objective: To compare the statistical performance of control-based pattern-mixture models (PMM), the mixed-effects model for repeated measures (MMRM), and multiple imputation (MI) when handling quantitative longitudinal data with multiple coexisting missingness mechanisms. Methods: Monte Carlo techniques were used to simulate quantitative longitudinal data sets containing two or three of the following missingness mechanisms: missing completely at random, missing at random, and missing not at random; the statistical performance of the three classes of methods was then evaluated. Results: The control-based PMM kept the type I error rate at a low level but had the lowest power. MMRM and MI controlled the type I error rate and had higher power than the control-based PMM. When there was no true difference in efficacy between the two groups, the estimation errors of all methods were comparable and the control-based PMM achieved the highest 95% confidence interval coverage; when a difference existed, each method was affected by the proportion of missing data consistent with its assumed missingness mechanism. When the data included values missing not at random, the control-based PMM generally did not overestimate the treatment difference and had the highest 95% confidence interval coverage, whereas MMRM and MI overestimated the treatment difference and had lower coverage. The 95% confidence interval widths were comparable across all methods. Conclusion: When analysing longitudinal data with multiple coexisting missingness mechanisms, especially data missing not at random, the statistical performance of MMRM and MI deteriorates; the control-based PMM can be used for sensitivity analysis, but attention must be paid to its specific assumptions to avoid overly conservative estimates.
15.
A pattern-mixture model approach for handling missing continuous outcome data in longitudinal cluster randomized trials
We extend the pattern-mixture approach to handle missing continuous outcome data in longitudinal cluster randomized trials, which randomize groups of individuals to treatment arms, rather than the individuals themselves. Individuals who drop out at the same time point are grouped into the same dropout pattern. We approach extrapolation of the pattern-mixture model by applying multilevel multiple imputation, which imputes missing values while appropriately accounting for the hierarchical data structure found in cluster randomized trials. To assess parameters of interest under various missing data assumptions, imputed values are multiplied by a sensitivity parameter, k, which increases or decreases imputed values. Using simulated data, we show that estimates of parameters of interest can vary widely under differing missing data assumptions. We conduct a sensitivity analysis using real data from a cluster randomized trial by increasing k until the treatment effect inference changes. By performing a sensitivity analysis for missing data, researchers can assess whether certain missing data assumptions are reasonable for their cluster randomized trial.
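A simplified single-level version of the tipping-point idea, multiplying imputed values by k until the inference changes. The data are hypothetical, and the paper's multilevel imputation and Rubin's-rules pooling are not reproduced; the median p-value below is only a crude summary across imputations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
n = 150
y_trt = np.where(rng.random(n) < 0.25, np.nan, rng.normal(1.0, 2.0, n))
y_ctl = rng.normal(0.0, 2.0, n)
mis = np.isnan(y_trt)

def test_at(k, M=50):
    """Impute treatment-arm dropouts, scale imputations by k, test vs control."""
    pvals = []
    obs = y_trt[~mis]
    for _ in range(M):
        y = y_trt.copy()
        y[mis] = k * rng.normal(obs.mean(), obs.std(ddof=1), mis.sum())
        pvals.append(stats.ttest_ind(y, y_ctl).pvalue)
    return np.median(pvals)   # crude summary; Rubin's rules in practice

# Make the sensitivity parameter progressively more severe until the
# treatment effect inference flips.
for k in (1.0, 0.75, 0.5, 0.25, 0.0):
    print(f"k={k:.2f}: median p-value {test_at(k):.4f}")
```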
16.
Biomedical research is plagued with problems of missing data, especially in clinical trials of medical and behavioral therapies adopting longitudinal design. After a literature review on modeling incomplete longitudinal data based on full-likelihood functions, this paper proposes a set of imputation-based strategies for implementing selection, pattern-mixture, and shared-parameter models for handling intermittent missing values and dropouts that are potentially nonignorable according to various criteria. Within the framework of multiple partial imputation, intermittent missing values are first imputed several times; then, each partially imputed data set is analyzed to deal with dropouts with or without further imputation. Depending on the choice of imputation model or measurement model, there exist various strategies that can be jointly applied to the same set of data to study the effect of treatment or intervention from multi-faceted perspectives. For illustration, the strategies were applied to a data set with continuous repeated measures from a smoking cessation clinical trial.
17.
18.
This paper addresses the issue of biases in cost measures used in economic evaluation studies. The basic measure of hospital costs used by most investigators is unit cost. Focusing on this measure, we identify a set of criteria that unit costs must fulfil in order to approximate the marginal cost (MC) of a service for the relevant product in the representative site. Four distinct biases are then identified (a scale bias, a case-mix bias, a methods bias and a site-selection bias), each of which reflects the divergence of the unit cost measure from the desired MC measure. Measures are proposed for several of these biases, and it is suggested how they can be corrected.
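A small arithmetic illustration of the scale bias: with fixed cost F and constant marginal cost m, the unit cost m + F/Q exceeds MC and falls with volume, so high-volume sites look cheaper per case. The numbers below are hypothetical.

```python
# Hypothetical illustration of scale bias in unit costs.
F, m = 100_000.0, 50.0          # assumed fixed cost and marginal cost per case
for Q in (1_000, 5_000, 20_000):
    unit_cost = m + F / Q       # average (unit) cost diverges from MC via F/Q
    print(f"Q={Q:>6}: unit cost = {unit_cost:7.2f}, marginal cost = {m:.2f}")
```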
19.
Models for the economic evaluation of health technologies provide valuable information to decision makers. The choice of model structure is rarely discussed in published studies and can affect the results produced. Many papers describe good modelling practice, but few describe how to choose from the many types of available models. This paper develops a new taxonomy of model structures. The horizontal axis of the taxonomy describes assumptions about the role of expected values, randomness, the heterogeneity of entities, and the degree of non-Markovian structure. Commonly used aggregate models, including decision trees and Markov models, require large population numbers, homogeneous sub-groups and linear interactions. Individual models are more flexible but may require replications with different random numbers to estimate expected values. The vertical axis of the taxonomy describes potential interactions between the individual actors, as well as how the interactions occur through time. Models using interactions, such as system dynamics, some Markov models, and discrete event simulation, are fairly uncommon in health economics but are necessary for modelling infectious diseases and systems with constrained resources. The paper provides guidance for choosing a model, based on key requirements, including output requirements, the population size, and system complexity.
20.
Gianluca Baio. Statistics in Medicine, 2014, 33(11): 1900-1913
Bayesian modelling for cost-effectiveness data has received much attention in both the health economics and the statistical literature in recent years. Cost-effectiveness data are characterised by a relatively complex structure of relationships linking a suitable measure of clinical benefit (e.g. quality-adjusted life years) and the associated costs. Simplifying assumptions, such as (bivariate) normality of the underlying distributions, are usually not granted, particularly for the cost variable, which is characterised by markedly skewed distributions. In addition, individual-level data sets are often characterised by the presence of structural zeros in the cost variable. Hurdle models can be used to account for the presence of excess zeros in a distribution and have been applied in the context of cost data. We extend their application to cost-effectiveness data, defining a full Bayesian specification, which consists of a model for the individual probability of null costs, a marginal model for the costs and a conditional model for the measure of effectiveness (given the observed costs). We present the model using a working example to describe its main features. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.
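A frequentist two-part analogue of the hurdle structure described here (a logistic model for null costs combined with a gamma GLM for positive costs), on simulated data; this is a sketch of the idea, not the paper's full Bayesian specification, which also models effectiveness conditional on costs.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 1_000
age = rng.normal(60, 10, n)

# Hypothetical cost data with structural zeros: some patients incur no cost.
p_zero = 1 / (1 + np.exp(-(4.0 - 0.06 * age)))      # P(zero cost) falls with age
zero = rng.random(n) < p_zero
pos_mean = np.exp(5.0 + 0.03 * age)                 # skewed positive costs
cost = np.where(zero, 0.0, rng.gamma(2.0, pos_mean / 2.0))

X = sm.add_constant(age)
# Hurdle part 1: logistic model for the individual probability of a null cost.
part1 = sm.Logit((cost == 0).astype(float), X).fit(disp=0)
# Hurdle part 2: gamma GLM fitted to the positive costs only.
part2 = sm.GLM(cost[cost > 0], X[cost > 0],
               family=sm.families.Gamma(sm.families.links.Log())).fit()

# Expected cost combines the two parts: E[C] = (1 - P(zero)) * E[C | C > 0].
exp_cost = (1 - part1.predict(X)) * part2.predict(X)
print(f"mean predicted cost: {exp_cost.mean():,.0f}")
```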