20 similar references found
1.
Baptiste Leurent Manuel Gomes Suzie Cro Nicola Wiles James R. Carpenter 《Health economics》2020,29(2):171-184
Missing data are a common issue in cost‐effectiveness analysis (CEA) alongside randomised trials and are often addressed assuming the data are ‘missing at random’. However, this assumption is often questionable, and sensitivity analyses are required to assess the implications of departures from missing at random. Reference‐based multiple imputation provides an attractive approach for conducting such sensitivity analyses, because missing data assumptions are framed in an intuitive way by making reference to other trial arms. For example, a plausible missing not at random mechanism in a placebo‐controlled trial would be to assume that participants in the experimental arm who dropped out stopped taking their treatment and have similar outcomes to those in the placebo arm. Drawing on the increasing use of this approach in other areas, this paper aims to extend and illustrate the reference‐based multiple imputation approach in CEA. It introduces the principles of reference‐based imputation and proposes an extension to the CEA context. The method is illustrated in the CEA of the CoBalT trial evaluating cognitive behavioural therapy for treatment‐resistant depression. Stata code is provided. We find that reference‐based multiple imputation provides a relevant and accessible framework for assessing the robustness of CEA conclusions to different missing data assumptions.
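To make the reference-based idea concrete, here is a minimal Python sketch with made-up outcome data: dropouts in a hypothetical experimental arm are imputed from the observed placebo distribution ("jump to reference"). It is not the authors' Stata implementation, and a full MI procedure would also draw the reference mean and SD from their posterior in each imputation rather than fixing them.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical trial outcomes; np.nan marks dropouts.
y = np.array([5.1, np.nan, 6.3, 4.8, np.nan, 5.9,   # experimental arm
              3.2, 3.8, np.nan, 4.1, 3.5, 3.9])     # placebo arm
arm = np.array(["exp"] * 6 + ["pbo"] * 6)

# Reference (placebo) distribution estimated from its observed outcomes.
ref = y[(arm == "pbo") & ~np.isnan(y)]
mu, sd = ref.mean(), ref.std(ddof=1)

# "Jump to reference": experimental-arm dropouts are imputed as if they
# behaved like placebo participants; placebo dropouts are imputed from
# their own arm, which here is the same distribution.
M = 20
effects = []
for _ in range(M):
    y_imp = y.copy()
    miss = np.isnan(y)
    y_imp[miss] = rng.normal(mu, sd, miss.sum())
    effects.append(y_imp[arm == "exp"].mean() - y_imp[arm == "pbo"].mean())

print(f"treatment effect under jump-to-reference: {np.mean(effects):.2f}")
```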
2.
Randomised controlled trial (RCT)-based cost-effectiveness analyses, which are prone to missing data, are increasingly used in healthcare technology assessment. This has highlighted the need for appropriate methodological approaches to the handling of missing data. This paper reviews missing data methodology used in RCT-based cost-effectiveness analyses since 2003. Complete case analysis, which may lead to inappropriate conclusions, is still the most popular approach and its use has increased with time. The degree of missing data in cost-effectiveness analyses was often poorly reported and the methodology was often unclear. Reporting of missing data sensitivity analyses would improve article transparency.
3.
Suzie Cro Tim P. Morris Michael G. Kenward James R. Carpenter 《Statistics in medicine》2020,39(21):2815-2842
Missing data due to loss to follow-up or intercurrent events are unintended, but unfortunately inevitable in clinical trials. Since the true values of missing data are never known, it is necessary to assess the impact of untestable and unavoidable assumptions about any unobserved data in sensitivity analysis. This tutorial provides an overview of controlled multiple imputation (MI) techniques and a practical guide to their use for sensitivity analysis of trials with missing continuous outcome data. These include δ- and reference-based MI procedures. In δ-based imputation, an offset term, δ, is typically added to the expected value of the missing data to assess the impact of unobserved participants having a worse or better response than those observed. Reference-based imputation draws imputed values with some reference to observed data in other groups of the trial, typically in other treatment arms. We illustrate the accessibility of these methods using data from a pediatric eczema trial and a chronic headache trial and provide Stata code to facilitate adoption. We discuss issues surrounding the choice of δ in δ-based sensitivity analysis. We also review the debate on variance estimation within reference-based analysis and justify the use of Rubin's variance estimator in this setting since, as we elaborate further, it provides information-anchored inference.
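A minimal sketch of the δ-based procedure described above, with made-up data and a fixed MAR imputation model for brevity; it is not the trial analyses or the Stata code from the tutorial. The offset δ shifts imputed values to represent dropouts doing worse than completers, and the treatment effect is re-estimated across a grid of δ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical continuous outcomes; np.nan marks dropouts in the active arm.
y_active = np.array([2.1, np.nan, 1.8, 2.5, np.nan, 2.2, 1.9, 2.4])
y_control = np.array([1.2, 1.5, 1.1, 1.4, 1.3, 1.6, 1.0, 1.4])

obs = y_active[~np.isnan(y_active)]
mu, sd = obs.mean(), obs.std(ddof=1)

# Delta-based sensitivity analysis: impute under MAR, then add an offset
# delta to the imputed values so dropouts respond worse than completers.
for delta in [0.0, -0.5, -1.0, -1.5]:
    effects = []
    for _ in range(50):                      # M = 50 imputations
        y_imp = y_active.copy()
        miss = np.isnan(y_imp)
        y_imp[miss] = rng.normal(mu, sd, miss.sum()) + delta
        effects.append(y_imp.mean() - y_control.mean())
    print(f"delta = {delta:+.1f} -> mean effect = {np.mean(effects):.2f}")
```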
4.
Missing binary outcomes under covariate‐dependent missingness in cluster randomised trials
Missing outcomes are a commonly occurring problem for cluster randomised trials, which can lead to biased and inefficient inference if ignored or handled inappropriately. Two approaches for analysing such trials are cluster‐level analysis and individual‐level analysis. In this study, we assessed the performance of unadjusted cluster‐level analysis, baseline covariate‐adjusted cluster‐level analysis, random effects logistic regression and generalised estimating equations when binary outcomes are missing under a baseline covariate‐dependent missingness mechanism. Missing outcomes were handled using complete records analysis and multilevel multiple imputation. We analytically show that cluster‐level analyses for estimating the risk ratio using complete records are valid if the true data‐generating model has a log link and the intervention groups have the same missingness mechanism and the same covariate effect in the outcome model. We performed a simulation study considering four different scenarios, depending on whether the missingness mechanisms are the same or different between the intervention groups and whether there is an interaction between intervention group and baseline covariate in the outcome model. On the basis of the simulation study and analytical results, we give guidance on the conditions under which each approach is valid. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
5.
When collecting patient-level resource use data for statistical analysis, for some patients and in some categories of resource use, the required count will not be observed. Although this problem must arise in most reported economic evaluations containing patient-level data, it is rare for authors to detail how the problem was overcome. Statistical packages may default to handling missing data through a so-called 'complete case analysis', while some recent cost analyses have appeared to favour an 'available case' approach. Both of these methods are problematic: complete case analysis is inefficient and is likely to be biased; available case analysis, by employing different numbers of observations for each resource use item, generates severe problems for standard statistical inference. Instead we explore imputation methods for generating 'replacement' values for missing data that will permit complete case analysis using the whole data set and we illustrate these methods using two data sets that had incomplete resource use information.
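The contrast among complete case, available case, and simple imputation approaches can be shown in a few lines. The sketch below uses hypothetical resource-use counts and unit costs (all values invented); it illustrates why the three approaches give different mean total costs, not how the paper's imputation models were specified.

```python
import numpy as np

# Hypothetical resource-use counts (rows: patients; columns: categories);
# np.nan marks unobserved counts. Unit costs are made up for illustration.
use = np.array([[2.0, 1.0, np.nan],
                [1.0, np.nan, 4.0],
                [3.0, 2.0, 5.0],
                [np.nan, 1.0, 3.0],
                [2.0, 3.0, 6.0]])
unit_cost = np.array([100.0, 250.0, 40.0])

# Complete case: keep only patients with every category observed.
complete = use[~np.isnan(use).any(axis=1)]
cc_mean = (complete * unit_cost).sum(axis=1).mean()

# Available case: per-category means use different patient subsets,
# which complicates inference on the total cost.
ac_mean = (np.nanmean(use, axis=0) * unit_cost).sum()

# Simple (unconditional) mean imputation: fill each missing count with
# its category mean, then analyse the completed data set.
filled = np.where(np.isnan(use), np.nanmean(use, axis=0), use)
mi_mean = (filled * unit_cost).sum(axis=1).mean()

print(f"complete case: {cc_mean:.0f}, available case: {ac_mean:.0f}, "
      f"mean imputation: {mi_mean:.0f}")
```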
6.
Incomplete data due to premature withdrawal (dropout) constitute a serious problem in prospective economic evaluations that has received little attention to date. The aim of this simulation study was to investigate how standard methods for dealing with incomplete data perform when applied to cost data with various distributions and various types of dropout. Selected methods included the product-limit estimator of Lin et al., the expectation-maximisation (EM) algorithm, several types of multiple imputation (MI) and various simple methods like complete case analysis and mean imputation. Almost all methods were unbiased in the case of dropout completely at random (DCAR), but only the product-limit estimator, the EM algorithm and the MI approaches provided adequate estimates of the standard error (SE). The best estimates of the mean and SE for dropout at random (DAR) were provided by the bootstrap EM algorithm, MI regression and MI Monte Carlo Markov chain. These methods were able to deal with skewed cost data in combination with DAR and only became biased when costs also included the costs of expensive events. None of the methods were able to deal adequately with informative dropout. In conclusion, the EM algorithm with bootstrap, MI regression and MI MCMC are robust to the multivariate normal assumption and are the preferred methods for the analysis of incomplete cost data when the assumption of DCAR is not justified.
7.
During recent discussions, it has been argued that stratified cost‐effectiveness analysis has a key role in reimbursement decision‐making and value‐based pricing (VBP). It has previously been shown that when manufacturers are price takers, reimbursement decisions made in reference to stratified cost‐effectiveness analysis lead to a more efficient allocation of resources than decisions based on whole‐population cost‐effectiveness analysis. However, we demonstrate that when manufacturers are price setters, reimbursement or VBP based on stratified cost‐effectiveness analysis may not be optimal. Using two examples – one considering the choice of thrombolytic treatment for specific patient subgroups and the other considering the extension of coverage for a cancer treatment to include an additional indication – we show that combinations of extended coverage and reduced price can be identified that are advantageous to both payers and manufacturers. The benefits of a given extension in coverage and reduction in price depend both upon the average treatment benefit in the additional population and its size relative to the original population. Negotiation regarding trade‐offs between price and coverage may lead to improved outcomes both for health‐care systems and manufacturers compared with processes where coverage is conditioned simply on stratified cost‐effectiveness at a given price. Copyright © 2010 John Wiley & Sons, Ltd.
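A toy numerical illustration of the price and coverage trade-off described above (all figures hypothetical): at a lower price, the payer can justify covering an additional, less-responsive population, which can raise both the manufacturer's revenue and the payer's net benefit.

```python
# Hypothetical numbers only: a payer values health at a fixed threshold
# per QALY, and covers a subgroup whenever its net benefit is non-negative.
threshold = 30000                                     # willingness to pay per QALY
qaly_gain = {"original": 0.50, "additional": 0.20}    # gain per patient
pop = {"original": 1000, "additional": 3000}          # subgroup sizes

for price in [14000, 10000, 6000]:
    # Payer's net benefit per patient = value of health gain minus price.
    nb = {g: threshold * qaly_gain[g] - price for g in pop}
    covered = [g for g in pop if nb[g] >= 0]
    revenue = sum(pop[g] * price for g in covered)    # manufacturer side
    total_nb = sum(pop[g] * nb[g] for g in covered)   # payer side
    print(f"price {price}: cover {covered}, revenue {revenue}, payer NB {total_nb}")
```

With these invented numbers, cutting the price from 10,000 to 6,000 brings the additional indication into coverage and increases revenue and payer net benefit simultaneously, which is the mutually advantageous region the abstract describes.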
8.
Handling incomplete correlated continuous and binary outcomes in meta‐analysis of individual participant data
Meta‐analysis of individual participant data (IPD) is increasingly utilised to improve the estimation of treatment effects, particularly among different participant subgroups. An important concern in IPD meta‐analysis relates to partially or completely missing outcomes for some studies, a problem exacerbated when interest is on multiple discrete and continuous outcomes. When leveraging information from incomplete correlated outcomes across studies, the fully observed outcomes may provide important information about the incompleteness of the other outcomes. In this paper, we compare two models for handling incomplete continuous and binary outcomes in IPD meta‐analysis: a joint hierarchical model and a sequence of full conditional mixed models. We illustrate how these approaches incorporate the correlation across the multiple outcomes and the between‐study heterogeneity when addressing the missing data. Simulations characterise the performance of the methods across a range of scenarios which differ according to the proportion and type of missingness, strength of correlation between outcomes and the number of studies. The joint model provided confidence interval coverage consistently closer to nominal levels and lower mean squared error compared with the fully conditional approach across the scenarios considered. Methods are illustrated in a meta‐analysis of randomised controlled trials comparing the effectiveness of implantable cardioverter‐defibrillator devices alone to implantable cardioverter‐defibrillator combined with cardiac resynchronisation therapy for treating patients with chronic heart failure. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
9.
Ari D. Panzer Joanna G. Emerson Brittany D'Cruz Avnee Patel Saudamini Dabak Wanrudee Isaranuwatchai Yot Teerawattananon Daniel A. Ollendorf Peter J. Neumann David D. Kim 《Health economics》2020,29(8):945-954
As economic evaluation becomes increasingly essential to support universal health coverage (UHC), we aim to understand the growth, characteristics, and quality of cost‐effectiveness analyses (CEA) conducted for Africa and to assess institutional capacity and relationship patterns among authors. We searched the Tufts Medical Center CEA Registries and four databases to identify CEAs for Africa. After extracting relevant information, we examined study characteristics, cost‐effectiveness ratios, individual and institutional contribution to the literature, and network dyads at the author, institution, and country levels. The 358 identified CEAs for Africa primarily focused on sub‐Saharan Africa (96%) and interventions for communicable diseases (77%). Of 2,121 intervention‐specific ratios, 8% were deemed cost‐saving, and most evaluated immunization strategies. As 64% of studies included at least one African author, we observed widespread collaboration among international researchers and institutions. However, only 23% of first authors were affiliated with African institutions. The top producers of CEAs among African institutions are more adherent to methodological and reporting guidelines. Although economic evidence in Africa has grown substantially, the capacity for generating such evidence remains limited. Increasing the ability of regional institutions to produce high‐quality evidence and facilitate knowledge transfer among African institutions has the potential to inform prioritization decisions for designing UHC.
10.
As the costs of medical care increase, more studies are evaluating cost in addition to effectiveness of treatments. Cost‐effectiveness analyses in randomized clinical trials have typically been conducted only at the end of follow‐up. However, cost‐effectiveness may change over time. We therefore propose a nonparametric estimator to assess the incremental cost‐effectiveness ratio over time. We also derive the asymptotic variance of our estimator and present a formulation of Fieller‐based simultaneous confidence bands. Simulation studies demonstrate the performance of our point estimators, variance estimators, and confidence bands. We also illustrate our methods using data from a randomized clinical trial, the second Multicenter Automatic Defibrillator Implantation Trial. This trial studied the effects of implantable cardioverter‐defibrillators on patients at high risk for cardiac arrhythmia. Results show that our estimator performs well in large samples, indicating promising future directions in the field of cost‐effectiveness. Copyright © 2015 John Wiley & Sons, Ltd.
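For reference, here is a sketch of a simple end-of-follow-up ICER with a Fieller-based confidence interval on simulated data. The paper's contribution, estimating the ICER as a function of time with simultaneous bands, is more general than this two-sample snapshot.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Simulated per-patient costs and effects for two arms (independent here,
# so the cost-effect covariance is near zero; real data would not be).
n = 200
cost_t, eff_t = rng.normal(11000, 2000, n), rng.normal(1.9, 0.5, n)
cost_c, eff_c = rng.normal(9000, 2000, n), rng.normal(1.5, 0.5, n)

dC, dE = cost_t.mean() - cost_c.mean(), eff_t.mean() - eff_c.mean()
vC = cost_t.var(ddof=1) / n + cost_c.var(ddof=1) / n
vE = eff_t.var(ddof=1) / n + eff_c.var(ddof=1) / n
cov = (np.cov(cost_t, eff_t)[0, 1] + np.cov(cost_c, eff_c)[0, 1]) / n

z = stats.norm.ppf(0.975)
# Fieller's theorem: the CI limits for R = dC/dE are the roots in R of
# (dC - R*dE)^2 = z^2 * (vC - 2*R*cov + R^2*vE).
a = dE**2 - z**2 * vE
b = -2 * (dC * dE - z**2 * cov)
c = dC**2 - z**2 * vC
lo, hi = sorted(np.roots([a, b, c]).real)

print(f"ICER = {dC / dE:.0f} per unit effect, 95% Fieller CI ({lo:.0f}, {hi:.0f})")
```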
11.
A pattern‐mixture model approach for handling missing continuous outcome data in longitudinal cluster randomized trials
We extend the pattern‐mixture approach to handle missing continuous outcome data in longitudinal cluster randomized trials, which randomize groups of individuals to treatment arms, rather than the individuals themselves. Individuals who drop out at the same time point are grouped into the same dropout pattern. We approach extrapolation of the pattern‐mixture model by applying multilevel multiple imputation, which imputes missing values while appropriately accounting for the hierarchical data structure found in cluster randomized trials. To assess parameters of interest under various missing data assumptions, imputed values are multiplied by a sensitivity parameter, k, which scales them up or down. Using simulated data, we show that estimates of parameters of interest can vary widely under differing missing data assumptions. We conduct a sensitivity analysis using real data from a cluster randomized trial by increasing k until the treatment effect inference changes. By performing a sensitivity analysis for missing data, researchers can assess whether certain missing data assumptions are reasonable for their cluster randomized trial.
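A minimal sketch of the k-scaling sensitivity analysis on made-up, non-clustered data (a real cluster randomized trial would need the multilevel imputation described above, which this omits). The median p-value across imputations is a crude summary used for the tipping-point display, not Rubin's rules.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical trial: higher outcome = better; np.nan marks dropouts.
y_trt = np.where(rng.random(80) < 0.3, np.nan, rng.normal(10, 3, 80))
y_ctl = rng.normal(8.5, 3, 80)

obs = y_trt[~np.isnan(y_trt)]
mu, sd = obs.mean(), obs.std(ddof=1)

# Multiply MAR-imputed values by the sensitivity parameter k and look for
# the point at which the treatment-effect inference tips (p crosses 0.05).
for k in [1.0, 0.9, 0.8, 0.7, 0.6]:
    pvals = []
    for _ in range(30):                  # M = 30 imputations
        y_imp = y_trt.copy()
        miss = np.isnan(y_imp)
        y_imp[miss] = k * rng.normal(mu, sd, miss.sum())
        pvals.append(stats.ttest_ind(y_imp, y_ctl).pvalue)
    print(f"k = {k:.1f} -> median p = {np.median(pvals):.3f}")
```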
12.
Kaifeng Lu 《Statistics in medicine》2014,33(7):1134-1145
Pattern‐mixture models provide a general and flexible framework for sensitivity analyses of nonignorable missing data. The placebo‐based pattern‐mixture model (Little and Yau, Biometrics 1996; 52:1324–1333) treats missing data in a transparent and clinically interpretable manner and has been used as sensitivity analysis for monotone missing data in longitudinal studies. The standard multiple imputation approach (Rubin, Multiple Imputation for Nonresponse in Surveys, 1987) is often used to implement the placebo‐based pattern‐mixture model. We show that Rubin's variance estimate of the multiple imputation estimator of treatment effect can be overly conservative in this setting. As an alternative to multiple imputation, we derive an analytic expression of the treatment effect for the placebo‐based pattern‐mixture model and propose a posterior simulation or delta method for the inference about the treatment effect. Simulation studies demonstrate that the proposed methods provide consistent variance estimates and outperform the imputation methods in terms of power for the placebo‐based pattern‐mixture model. We illustrate the methods using data from a clinical study of major depressive disorders. Copyright © 2013 John Wiley & Sons, Ltd.
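For context, the standard Rubin's-rules combination whose variance estimate the paper examines can be written in a few lines; the per-imputation estimates below are invented.

```python
import numpy as np

def rubins_rules(estimates, variances):
    """Pool M multiple-imputation estimates with Rubin's rules."""
    est = np.asarray(estimates, dtype=float)
    var = np.asarray(variances, dtype=float)
    m = len(est)
    qbar = est.mean()                   # pooled point estimate
    ubar = var.mean()                   # within-imputation variance
    b = est.var(ddof=1)                 # between-imputation variance
    total = ubar + (1 + 1 / m) * b      # Rubin's total variance
    return qbar, total

# Hypothetical per-imputation treatment effects and variances.
qbar, total = rubins_rules([1.2, 1.4, 1.1, 1.3, 1.5],
                           [0.20, 0.22, 0.19, 0.21, 0.20])
print(f"pooled effect {qbar:.2f}, SE {np.sqrt(total):.2f}")
```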
13.
Niko A. Kaciroti M. Anthony Schork Trivellore Raghunathan Stevo Julius 《Statistics in medicine》2009,28(4):572-585
Intention‐to‐treat (ITT) analysis is commonly used in randomized clinical trials. However, the use of ITT analysis presents a challenge: how to deal with subjects who drop out. Here we focus on randomized trials where the primary outcome is a binary endpoint. Several approaches are available for including dropout subjects in the ITT analysis, mainly chosen prior to unblinding the study. These approaches reduce the potential bias due to breaking the randomization code. However, the validity of the results will highly depend on untestable assumptions about the dropout mechanism. Thus, it is important to evaluate the sensitivity of the results across different missing‐data mechanisms. We propose here a Bayesian pattern‐mixture model for ITT analysis of binary outcomes with dropouts that applies over different types of missing‐data mechanisms. We introduce a new parameterization to identify the model, which is then used for sensitivity analysis. The parameterization is defined as the odds ratio of having an endpoint between the subjects who dropped out and those who completed the study. Such a parameterization is intuitive and easy to use in sensitivity analysis; it also incorporates most of the available methods as special cases. The model is applied to the TRial Of Preventing HYpertension (TROPHY). Copyright © 2008 John Wiley & Sons, Ltd.
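A sketch of the odds-ratio parameterization on hypothetical counts: the sensitivity parameter psi is the odds ratio of the endpoint for dropouts versus completers, and varying it traces out different missing-data mechanisms. This is a simplification of, not a substitute for, the Bayesian pattern-mixture model in the paper.

```python
import numpy as np

rng = np.random.default_rng(11)

def dropout_prob(p_completers, psi):
    """Event probability among dropouts implied by an odds ratio psi
    relative to completers."""
    odds = psi * p_completers / (1 - p_completers)
    return odds / (1 + odds)

# Hypothetical arm: 70 completers with event rate 0.4, plus 30 dropouts.
n_complete, p_obs, n_drop = 70, 0.4, 30

for psi in [0.5, 1.0, 2.0]:   # psi = 1 treats dropouts like completers
    p_drop = dropout_prob(p_obs, psi)
    # Simulate dropout events and add the observed completer events.
    sims = rng.binomial(n_drop, p_drop, 2000) + round(n_complete * p_obs)
    rate = sims.mean() / (n_complete + n_drop)
    print(f"psi = {psi:.1f} -> overall event rate ~ {rate:.3f}")
```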
14.
Md. Abu Manju Math J. J. M. Candel Martijn P. F. Berger 《Statistics in medicine》2014,33(15):2538-2553
In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost‐effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra‐cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost‐effectiveness of an intervention. Copyright © 2014 John Wiley & Sons, Ltd.
15.
Several probability‐based measures are introduced in order to assess the cost‐effectiveness of a treatment. The basic measure consists of the probability that one treatment is less costly and more effective compared with another. Several variants of this measure are suggested as flexible options for cost‐effectiveness analysis. The proposed measures are invariant under monotone transformations of the cost and effectiveness measures. Interval estimation of the proposed measures is investigated under a parametric model, assuming bivariate normality, and also non‐parametrically. The delta method and a generalized pivotal quantity approach are both investigated under the bivariate normal model. A non‐parametric U‐statistics‐based approach is also investigated for computing confidence intervals. Numerical results show that under bivariate normality, the solution based on generalized pivotal quantities exhibits accurate performance in terms of maintaining the coverage probability of the confidence interval. The non‐parametric U‐statistics‐based solution is accurate for sample sizes that are at least moderately large. The results are illustrated using data from a clinical trial for prostate cancer therapy. Copyright © 2016 John Wiley & Sons, Ltd.
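The basic measure has a direct nonparametric estimate: the proportion of cross-arm pairs in which one treatment is both cheaper and more effective. A sketch on simulated data follows; interval estimation, the paper's focus, is omitted. Because the estimate uses only pairwise comparisons, the invariance under monotone transformations noted above is immediate.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated (cost, effect) pairs per arm; all numbers hypothetical.
n = 150
cost_a, eff_a = rng.normal(9000, 1500, n), rng.normal(1.8, 0.4, n)
cost_b, eff_b = rng.normal(9500, 1500, n), rng.normal(1.6, 0.4, n)

# Nonparametric (U-statistic-style) estimate of
# P(cost_A < cost_B and effect_A > effect_B) over all cross-arm pairs.
wins = (cost_a[:, None] < cost_b[None, :]) & (eff_a[:, None] > eff_b[None, :])
print(f"P(A cheaper and more effective) ~ {wins.mean():.3f}")
```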
16.
Stephen Burgess Ian R. White Matthieu Resche‐Rigon Angela M. Wood 《Statistics in medicine》2013,32(26):4499-4514
Multiple imputation is a strategy for the analysis of incomplete data such that the impact of the missingness on the power and bias of estimates is mitigated. When data from multiple studies are collated, we can propose both within‐study and multilevel imputation models to impute missing data on covariates. It is not clear how to choose between imputation models or how to combine imputation and inverse‐variance weighted meta‐analysis methods. This is especially important as often different studies measure data on different variables, meaning that we may need to impute data on a variable which is systematically missing in a particular study. In this paper, we consider a simulation analysis of sporadically missing data in a single covariate with a linear analysis model and discuss how the results would be applicable to the case of systematically missing data. We find in this context that ensuring the congeniality of the imputation and analysis models is important to give correct standard errors and confidence intervals. For example, if the analysis model allows between‐study heterogeneity of a parameter, then we should incorporate this heterogeneity into the imputation model to maintain the congeniality of the two models. In an inverse‐variance weighted meta‐analysis, we should impute missing data and apply Rubin's rules at the study level prior to meta‐analysis, rather than meta‐analyzing each of the multiple imputations and then combining the meta‐analysis estimates using Rubin's rules. We illustrate the results using data from the Emerging Risk Factors Collaboration. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
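A sketch of the recommended ordering with invented per-study numbers: Rubin's rules are applied within each study first, and the pooled study estimates are then combined by fixed-effect inverse-variance meta-analysis.

```python
import numpy as np

def pool_rubin(est, var):
    """Rubin's rules for one study's M imputation-specific results."""
    m = len(est)
    qbar = np.mean(est)
    total = np.mean(var) + (1 + 1 / m) * np.var(est, ddof=1)
    return qbar, total

# Hypothetical per-study, per-imputation estimates and variances
# (3 studies, M = 4 imputations each).
study_est = [[0.30, 0.34, 0.28, 0.31],
             [0.12, 0.15, 0.10, 0.14],
             [0.25, 0.22, 0.27, 0.24]]
study_var = [[0.010, 0.011, 0.010, 0.012],
             [0.008, 0.009, 0.008, 0.008],
             [0.015, 0.014, 0.016, 0.015]]

# Step 1: apply Rubin's rules within each study.
pooled = [pool_rubin(e, v) for e, v in zip(study_est, study_var)]

# Step 2: fixed-effect inverse-variance meta-analysis of the pooled results.
w = np.array([1 / v for _, v in pooled])
q = np.array([q for q, _ in pooled])
meta = (w * q).sum() / w.sum()
print(f"meta-analytic estimate {meta:.3f}, SE {np.sqrt(1 / w.sum()):.3f}")
```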
17.
Manisha Desai Maria E. Montez-Rath Kristopher Kapphahn Vilija R. Joyce Maya B. Mathur Ariadna Garcia Natasha Purington Douglas K. Owens 《Statistics in medicine》2019,38(17):3204-3220
The treatment of missing data in comparative effectiveness studies with right-censored outcomes and time-varying covariates is challenging because of the multilevel structure of the data. In particular, the performance of an accessible method like multiple imputation (MI) under an imputation model that ignores the multilevel structure is unknown and has not been compared to complete-case (CC) and single imputation methods that are most commonly applied in this context. Through an extensive simulation study, we compared statistical properties among CC analysis, last value carried forward, mean imputation, the use of missing indicators, and MI-based approaches with and without auxiliary variables under an extended Cox model when the interest lies in characterizing relationships between non-missing time-varying exposures and right-censored outcomes. MI demonstrated favorable properties under a moderate missing-at-random condition (absolute bias <0.1) and outperformed CC and single imputation methods, even when the MI method did not account for correlated observations in the imputation model. The performance of MI decreased with increasing complexity such as when the missing data mechanism involved the exposure of interest, but was still preferred over other methods considered and performed well in the presence of strong auxiliary variables. We recommend considering MI that ignores the multilevel structure in the imputation model when data are missing in a time-varying confounder, incorporating variables associated with missingness in the MI models as well as conducting sensitivity analyses across plausible assumptions.
18.
When missing data occur in one or more covariates in a regression model, multiple imputation (MI) is widely advocated as an improvement over complete‐case analysis (CC). We use theoretical arguments and simulation studies to compare these methods with MI implemented under a missing at random assumption. When data are missing completely at random, both methods have negligible bias, and MI is more efficient than CC across a wide range of scenarios. For other missing data mechanisms, bias arises in one or both methods. In our simulation setting, CC is biased towards the null when data are missing at random. However, when missingness is independent of the outcome given the covariates, CC has negligible bias and MI is biased away from the null. With more general missing data mechanisms, bias tends to be smaller for MI than for CC. Since MI is not always better than CC for missing covariate problems, the choice of method should take into account what is known about the missing data mechanism in a particular substantive application. Importantly, the choice of method should not be based on comparison of standard errors. We propose new ways to understand empirical differences between MI and CC, which may provide insights into the appropriateness of the assumptions underlying each method, and we propose a new index for assessing the likely gain in precision from MI: the fraction of incomplete cases among the observed values of a covariate (FICO). Copyright © 2010 John Wiley & Sons, Ltd.
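The proposed FICO index is easy to compute. Below is a sketch under one plausible reading of the definition, using an invented two-variable data frame: among cases where covariate x is observed, count the fraction that are incomplete on the remaining analysis variables.

```python
import numpy as np
import pandas as pd

# Hypothetical data: outcome y and covariate x, both with missing values.
df = pd.DataFrame({
    "y": [2.3, 1.9, 2.8, np.nan, 2.1, 3.0, 2.5, np.nan],
    "x": [1.0, np.nan, 0.5, 0.8, np.nan, 1.2, np.nan, 0.9],
})

# FICO for x: among rows where x is observed, the fraction of cases that
# are incomplete on the other analysis variables.
x_observed = df[df["x"].notna()]
fico = x_observed.drop(columns="x").isna().any(axis=1).mean()
print(f"FICO for x: {fico:.2f}")
```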
19.
K.S. Goldfeld 《Statistics in medicine》2014,33(7):1222-1241
Cost‐effectiveness analysis is an important tool that can be applied to the evaluation of a health treatment or policy. When the observed costs and outcomes result from a nonrandomized treatment, making causal inference about the effects of the treatment requires special care. The challenges are compounded when the observation period is truncated for some of the study subjects. This paper presents a method of unbiased estimation of cost‐effectiveness using observational study data that is not fully observed. The method—twice‐weighted multiple interval estimation of a marginal structural model—was developed in order to analyze the cost‐effectiveness of treatment protocols for nursing home residents with advanced dementia when they become acutely ill. A key feature of this estimation approach is that it facilitates a sensitivity analysis that identifies the potential effects of unmeasured confounding on the conclusions concerning cost‐effectiveness. Copyright © 2013 John Wiley & Sons, Ltd.
20.
Matthieu Resche‐Rigon Ian R. White Jonathan W. Bartlett Sanne A.E. Peters Simon G. Thompson 《Statistics in medicine》2013,32(28):4890-4905
A variable is ‘systematically missing’ if it is missing for all individuals within particular studies in an individual participant data meta‐analysis. When a systematically missing variable is a potential confounder in observational epidemiology, standard methods either fail to adjust the exposure–disease association for the potential confounder or exclude studies where it is missing. We propose a new approach to adjust for systematically missing confounders based on multiple imputation by chained equations. Systematically missing data are imputed via multilevel regression models that allow for heterogeneity between studies. A simulation study compares various choices of imputation model. An illustration is given using data from eight studies estimating the association between carotid intima media thickness and subsequent risk of cardiovascular events. Results are compared with standard methods and also with an extension of a published method that exploits the relationship between fully adjusted and partially adjusted estimated effects through a multivariate random effects meta‐analysis model. We conclude that multiple imputation provides a practicable approach that can handle arbitrary patterns of systematic missingness. Bias is reduced by including sufficient between‐study random effects in the imputation model. Copyright © 2013 John Wiley & Sons, Ltd.