Similar Articles (20 results)
1.
In clinical trials comparing different treatments and in health economics and outcomes research, medical costs are frequently analysed to evaluate the economic impact of new treatment options and the economic value of health-care utilization. Since Lin et al. first identified the problems of applying survival analysis techniques to cost data, many new methods have been proposed. In this report, we establish analytic relationships among several widely adopted medical cost estimators that are seemingly different. Specifically, we report the equivalence among various estimators that were introduced by Lin et al., Bang and Tsiatis, and Zhao and Tian. Lin's estimators are known to be asymptotically unbiased in some discrete censoring situations and biased otherwise, whereas all other estimators discussed here are consistent for the expected medical cost. Thus, we identify conditions under which these estimators become identical and, consequently, the biased estimators achieve consistency. We illustrate these relationships using an example from a clinical trial examining the effectiveness of implantable cardiac defibrillators in preventing death among people who had prior myocardial infarctions.
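As a concrete illustration of the estimators being related here, the Bang–Tsiatis "simple weighted" estimator of mean cost averages the complete-case costs, inversely weighted by the Kaplan–Meier estimate of the censoring survival. A minimal sketch, not the authors' implementation (function names are ours; ties between event and censoring times are assumed away):

```python
import numpy as np

def bang_tsiatis_mean_cost(time, delta, cost):
    """Simple weighted (Bang-Tsiatis-type) estimator of mean cost:
    the average of delta_i * cost_i / K(T_i), where K is the Kaplan-Meier
    estimator of the censoring survival evaluated just before T_i.
    Sketch only: assumes no ties between event and censoring times."""
    time, delta, cost = map(np.asarray, (time, delta, cost))
    order = np.argsort(time)
    t, d, c = time[order], delta[order], cost[order]
    n = len(t)
    surv = 1.0
    K_at = np.empty(n)
    for i in range(n):
        K_at[i] = surv              # K(t_i^-): censoring survival just before t_i
        if d[i] == 0:               # a censoring acts as the 'event' for K
            surv *= (n - i - 1) / (n - i)
    # inverse-probability-of-censoring weighted average over complete cases
    return np.sum(d * c / K_at) / n

# With no censoring all weights are 1 and the estimator is the plain mean.
print(bang_tsiatis_mean_cost([2, 4, 6], [1, 1, 1], [10.0, 20.0, 30.0]))  # 20.0
```

With censoring, the complete cases are up-weighted so that the estimator remains consistent for the expected cost, which is the property the equivalences above preserve.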

2.
Marginal structural Cox models have been used to estimate the causal effect of a time-varying treatment on a survival outcome in the presence of time-dependent confounders. These methods rely on the positivity assumption, which states that the propensity scores are bounded away from zero and one. Practical violations of this assumption are common in longitudinal studies, resulting in extreme weights that may yield erroneous inferences. Truncation, which consists of replacing outlying weights with less extreme ones, is the most common approach to control for extreme weights to date. While truncation reduces the variability in the weights and the consequent sampling variability of the estimator, it can also introduce bias. Instead of truncated weights, we propose using optimal probability weights, defined as those that have a specified variance and the smallest Euclidean distance from the original, untruncated weights. The set of optimal weights is obtained by solving a constrained quadratic optimization problem. The proposed weights are evaluated in a simulation study and applied to the assessment of the effect of treatment on time to death among people in Sweden who live with human immunodeficiency virus and inject drugs.
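When the variance constraint is the only binding one, the constrained problem described above has a simple closed form: shrink each weight's deviation from the mean by a common factor. A sketch under that simplifying assumption (the published method solves a general constrained quadratic program, possibly with further constraints such as positivity; the function name is ours):

```python
import numpy as np

def optimal_variance_constrained_weights(w, target_var):
    """Among all weight vectors whose variance does not exceed target_var,
    return the one closest in Euclidean distance to the original weights w.
    With only the variance constraint binding, the minimiser shrinks the
    deviations from the mean by a common factor; the mean is preserved."""
    w = np.asarray(w, dtype=float)
    m, v = w.mean(), w.var()
    if v <= target_var:                    # constraint not binding: keep w
        return w.copy()
    return m + (w - m) * np.sqrt(target_var / v)

w = np.array([1.0, 1.0, 2.0, 12.0])        # one extreme weight
w_opt = optimal_variance_constrained_weights(w, target_var=4.0)
print(round(w_opt.var(), 6))               # 4.0, and the mean is unchanged
```

Unlike truncation, every weight moves a little, rather than only the outliers being clipped, which is why the result stays as close as possible to the untruncated weights.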

3.
Propensity score methods are increasingly being used to reduce or minimize the effects of confounding when estimating the effects of treatments, exposures, or interventions when using observational or non‐randomized data. Under the assumption of no unmeasured confounders, previous research has shown that propensity score methods allow for unbiased estimation of linear treatment effects (e.g., differences in means or proportions). However, in biomedical research, time‐to‐event outcomes occur frequently. There is a paucity of research into the performance of different propensity score methods for estimating the effect of treatment on time‐to‐event outcomes. Furthermore, propensity score methods allow for the estimation of marginal or population‐average treatment effects. We conducted an extensive series of Monte Carlo simulations to examine the performance of propensity score matching (1:1 greedy nearest‐neighbor matching within propensity score calipers), stratification on the propensity score, inverse probability of treatment weighting (IPTW) using the propensity score, and covariate adjustment using the propensity score to estimate marginal hazard ratios. We found that both propensity score matching and IPTW using the propensity score allow for the estimation of marginal hazard ratios with minimal bias. Of these two approaches, IPTW using the propensity score resulted in estimates with lower mean squared error when estimating the effect of treatment in the treated. Stratification on the propensity score and covariate adjustment using the propensity score result in biased estimation of both marginal and conditional hazard ratios. Applied researchers are encouraged to use propensity score matching and IPTW using the propensity score when estimating the relative effect of treatment on time‐to‐event outcomes. Copyright © 2012 John Wiley & Sons, Ltd.
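The IPTW weights referred to above can be written directly from the propensity score; a minimal sketch (the propensity scores are taken as given rather than estimated, and the function name is ours):

```python
import numpy as np

def iptw_weights(treated, ps, estimand="ATE"):
    """Inverse probability of treatment weights from a propensity score ps.
    ATE weights: 1/ps for treated, 1/(1-ps) for controls.
    ATT weights: 1 for treated, ps/(1-ps) for controls.
    Sketch only: in practice ps comes from a fitted model (e.g. logistic
    regression) and extreme scores need care (see positivity, above)."""
    treated = np.asarray(treated, dtype=bool)
    ps = np.asarray(ps, dtype=float)
    if estimand == "ATE":
        return np.where(treated, 1.0 / ps, 1.0 / (1.0 - ps))
    return np.where(treated, 1.0, ps / (1.0 - ps))

print(iptw_weights([1, 0], [0.5, 0.5]))          # [2. 2.]
print(iptw_weights([1, 0], [0.8, 0.2], "ATT"))   # treated keep weight 1
```

A weighted Cox model fitted with these weights targets the marginal hazard ratio, which is the quantity the simulations above evaluate.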

4.
Marginal structural models (MSMs) are commonly used to estimate the causal effect of a time‐varying treatment in the presence of time‐dependent confounding. When fitting an MSM to data, the analyst must specify both the structural model for the outcome and the treatment models for the inverse‐probability‐of‐treatment weights. The use of stabilized weights is recommended because they are generally less variable than the standard weights. In this paper, we are concerned with the use of the common stabilized weights when the structural model is specified to only consider partial treatment history, such as the current or most recent treatments. We present various examples of settings where these stabilized weights yield biased inferences while the standard weights do not. These issues are first investigated on the basis of simulated data and subsequently exemplified using data from the Honolulu Heart Program. Unlike common stabilized weights, we find that basic stabilized weights offer some protection against bias in structural models designed to estimate current or most recent treatment effects. Copyright © 2014 John Wiley & Sons, Ltd.
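For a single time point, standard and stabilized weights differ only in the numerator; a toy sketch (the time-varying case multiplies such factors over visits, and the function name is ours):

```python
import numpy as np

def stabilized_weights(treated, ps):
    """Stabilized inverse-probability-of-treatment weights for a point
    treatment: sw_i = P(A = a_i) / P(A = a_i | L_i).  The numerator is the
    marginal treatment probability; the denominator is the propensity
    score.  The standard weight is the same quantity with numerator 1,
    i.e. 1 / P(A = a_i | L_i)."""
    treated = np.asarray(treated, dtype=bool)
    ps = np.asarray(ps, dtype=float)
    p_treat = treated.mean()                      # marginal P(A = 1)
    num = np.where(treated, p_treat, 1.0 - p_treat)
    den = np.where(treated, ps, 1.0 - ps)
    return num / den

sw = stabilized_weights([1, 1, 0, 0], [0.8, 0.4, 0.3, 0.5])
print(np.round(sw, 3))
```

Because the numerator pulls each weight toward 1, stabilized weights typically have much smaller variance than the standard weights, which is the motivation the abstract questions when the structural model uses only partial treatment history.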

5.
Childhood acute lymphoblastic leukaemia is treated with long-term intensive chemotherapy. During the latter part of the treatment, the maintenance therapy, the patients receive oral doses of two cytostatics. The doses are tailored to blood counts measured on a weekly basis, and the treatment is therefore highly dynamic. In 1992-1996, the Nordic Society of Paediatric Haematology and Oncology (NOPHO) conducted a randomised study (NOPHO-ALL-92) to investigate the effect of a new and more sophisticated dynamic treatment strategy. Unexpectedly, the new strategy worsened the outcome for the girls, whereas there were no treatment differences for the boys. There are as yet no general guidelines for optimising the treatment. On the basis of the data from this study, our goal is to formulate an alternative dosing strategy. We use recently developed methods proposed by van der Laan et al. to obtain statistical models that may be used in the guidance of how the physicians should assign the doses to the patients to obtain the target of the treatment. We present a possible strategy and discuss the reliability of this strategy. The implementation is complicated, and we touch upon the limitations of the methods in relation to the formulation of alternative dosing strategies for the maintenance therapy.

6.
For random effects meta-analysis, seven different estimators of the heterogeneity variance are compared and assessed using a simulation study. The seven estimators are the variance component type estimator (VC), the method of moments estimator (MM), the maximum likelihood estimator (ML), the restricted maximum likelihood estimator (REML), the empirical Bayes estimator (EB), the model error variance type estimator (MV), and a variation of the MV estimator (MVvc). The performance of the estimators is compared in terms of both bias and mean squared error, using Monte Carlo simulation. The results show that the REML and especially the ML and MM estimators are not accurate, having large biases unless the true heterogeneity variance is small. The VC estimator tends to overestimate the heterogeneity variance in general, but is quite accurate when the number of studies is large. The MV estimator is not a good estimator when the heterogeneity variance is small to moderate, but it is reasonably accurate when the heterogeneity variance is large. The MVvc estimator is an improved estimator compared to the MV estimator, especially for small to moderate values of the heterogeneity variance. The two estimators MVvc and EB are found to be the most accurate in general, particularly when the heterogeneity variance is moderate to large.
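The MM estimator compared above is the familiar DerSimonian–Laird moment estimator, obtained from Cochran's Q with fixed-effect weights and truncated at zero; a minimal sketch:

```python
import numpy as np

def mm_tau_squared(effects, variances):
    """Method-of-moments (DerSimonian-Laird-type) estimator of the
    between-study heterogeneity variance tau^2 in a random-effects
    meta-analysis, truncated at zero."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                    # fixed-effect weights
    ybar = np.sum(w * y) / np.sum(w)               # weighted mean effect
    Q = np.sum(w * (y - ybar) ** 2)                # Cochran's Q
    denom = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (Q - (len(y) - 1)) / denom)

print(round(mm_tau_squared([0.0, 0.3, 0.9], [0.04, 0.04, 0.04]), 4))  # 0.17
```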

7.
Correct specification of the inverse probability weighting (IPW) model is necessary for consistent inference from a marginal structural Cox model (MSCM). In practical applications, researchers are typically unaware of the true specification of the weight model. Nonetheless, IPWs are commonly estimated using parametric models, such as the main‐effects logistic regression model. In practice, assumptions underlying such models may not hold and data‐adaptive statistical learning methods may provide an alternative. Many candidate statistical learning approaches are available in the literature. However, the optimal approach for a given dataset is impossible to predict. Super learner (SL) has been proposed as a tool for selecting an optimal learner from a set of candidates using cross‐validation. In this study, we evaluate the usefulness of an SL in estimating IPW in four different MSCM simulation scenarios, in which we varied the true weight model specification (linear and/or additive). Our simulations show that, in the presence of weight model misspecification, with a rich and diverse set of candidate algorithms, SL can generally offer a better alternative to the commonly used statistical learning approaches in terms of MSE as well as the coverage probabilities of the estimated effect in an MSCM. The findings from the simulation studies guided the application of the MSCM in a multiple sclerosis cohort from British Columbia, Canada (1995–2008), to estimate the impact of beta‐interferon treatment in delaying disability progression. Copyright © 2017 John Wiley & Sons, Ltd.
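The "discrete" variant of the super learner, which simply selects the cross-validation risk minimiser among candidates, can be sketched as follows (toy learners and data of our own invention; the full SL additionally forms an optimal weighted combination of the candidates):

```python
import numpy as np

def discrete_super_learner(x, a, candidates, n_folds=5):
    """Toy 'discrete' super learner for a treatment (weight) model: for
    each candidate learner, estimate its cross-validated squared-error
    (Brier) risk for predicting treatment a from covariate x, and return
    the risk-minimising candidate's name together with all risks."""
    idx = np.arange(len(a)) % n_folds            # deterministic fold labels
    risks = {}
    for name, fit in candidates.items():
        errs = []
        for k in range(n_folds):
            tr, te = idx != k, idx == k
            predict = fit(x[tr], a[tr])          # train on the k-th split
            errs.append(np.mean((predict(x[te]) - a[te]) ** 2))
        risks[name] = np.mean(errs)
    return min(risks, key=risks.get), risks

x = np.tile([-1.0, 1.0], 10)
a = (x > 0).astype(float)                        # treatment depends on x
candidates = {
    "marginal": lambda x, a: (lambda xt, m=a.mean(): np.full(len(xt), m)),
    "step": lambda x, a: (lambda xt, hi=a[x > 0].mean(), lo=a[x <= 0].mean():
                          np.where(xt > 0, hi, lo)),
}
best, risks = discrete_super_learner(x, a, candidates)
print(best)  # step
```

Here the covariate-blind "marginal" learner is beaten by the "step" learner, mirroring how SL discards misspecified weight models when better candidates are available.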

8.
Motivated by a previously published study of HIV treatment, we simulated data subject to time‐varying confounding affected by prior treatment to examine some finite‐sample properties of marginal structural Cox proportional hazards models. We compared (a) unadjusted, (b) regression‐adjusted, (c) unstabilized, and (d) stabilized marginal structural (inverse probability‐of‐treatment [IPT] weighted) model estimators of effect in terms of bias, standard error, root mean squared error (MSE), and 95% confidence limit coverage over a range of research scenarios, including relatively small sample sizes and 10 study assessments. In the base‐case scenario resembling the motivating example, where the true hazard ratio was 0.5, both IPT‐weighted analyses were unbiased, whereas crude and adjusted analyses showed substantial bias towards and across the null. Stabilized IPT‐weighted analyses remained unbiased across a range of scenarios, including relatively small sample size; however, the standard error was generally smaller in crude and adjusted models. In many cases, unstabilized weighted analysis showed a substantial increase in standard error compared with other approaches. Root MSE was smallest in the IPT‐weighted analyses for the base‐case scenario. In situations where time‐varying confounding affected by prior treatment was absent, IPT‐weighted analyses were less precise and therefore had greater root MSE compared with adjusted analyses. The 95% confidence limit coverage was close to nominal for all stabilized IPT‐weighted but poor in crude, adjusted, and unstabilized IPT‐weighted analysis. Under realistic scenarios, marginal structural Cox proportional hazards models performed according to expectations based on large‐sample theory and provided accurate estimates of the hazard ratio. Copyright © 2012 John Wiley & Sons, Ltd.

9.
Parametric models are only occasionally used in the analysis of clinical studies of survival although they may offer advantages over Cox's model. In this paper, we report experiences that we have made fitting parametric models to data sets from different clinical trials mainly performed at the Vienna University Medical School. We emphasize the role of residuals for discriminating among candidate models and judging their goodness of fit. The effect of misspecification of the baseline distribution on parameter estimates and testing has been explored. The results from parametric analyses have always been contrasted with those from Cox's model.

10.
Several approaches exist for handling missing covariates in the Cox proportional hazards model. Multiple imputation (MI) is relatively easy to implement, with various software available, and results in consistent estimates if the imputation model is correct. On the other hand, the fully augmented weighted estimators (FAWEs) recover a substantial proportion of the efficiency and have the doubly robust property. In this paper, we compare the FAWEs and MI through a comprehensive simulation study. For MI, we consider multiple imputation by chained equations and focus on two imputation methods: Bayesian linear regression imputation and predictive mean matching. Simulation results show that the imputation methods can be rather sensitive to model misspecification and may have large bias when the censoring time depends on the missing covariates. In contrast, the FAWEs allow the censoring time to depend on the missing covariates and are remarkably robust as long as either the conditional expectations or the selection probability is correctly specified, owing to the doubly robust property. The comparison suggests that the FAWEs show the potential for being a competitive and attractive tool for tackling the analysis of survival data with missing covariates. Copyright © 2010 John Wiley & Sons, Ltd.
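The predictive mean matching method compared here can be illustrated for a single variable; this is a sketch of the matching step only (real MICE iterates across variables and draws the regression coefficients from an approximate posterior; the function name is ours):

```python
import numpy as np

def pmm_impute(x, y, missing, k=3, rng=np.random.default_rng(0)):
    """Single predictive-mean-matching imputation sketch: fit a linear
    regression of y on x using the complete cases, predict for everyone,
    and for each missing y draw a donor value from the k complete cases
    whose predicted values are nearest."""
    obs = ~missing
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X[obs], y[obs], rcond=None)
    pred = X @ beta
    y_imp = y.astype(float).copy()
    obs_idx = np.flatnonzero(obs)
    for i in np.flatnonzero(missing):
        # indices of the k observed cases with the closest predicted means
        nearest = obs_idx[np.argsort(np.abs(pred[obs] - pred[i]))[:k]]
        y_imp[i] = y[rng.choice(nearest)]     # sample an observed donor value
    return y_imp

x = np.arange(10.0)
y = 2.0 * x
missing = np.zeros(10, dtype=bool)
missing[[3, 7]] = True
y_imp = pmm_impute(x, y, missing)
print(y_imp[[3, 7]])   # each value is copied from a nearby observed donor
```

Because imputed values are always observed values, PMM preserves the support of the data, but, as the abstract notes, it still inherits any misspecification of the regression used for matching.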

11.
Given a predictive marker and a time‐to‐event response variable, the proportion of concordant pairs in a data set is called the concordance index. A particularly useful marker is the risk predicted by a survival regression model. This article extends the existing methodology for applications where the length of the follow‐up period depends on the predictor variables. A class of inverse probability of censoring weighted estimators is discussed in which the estimates rely on a working model for the conditional censoring distribution. The estimators are consistent for a truncated concordance index if the working model is correctly specified and if the probability of being uncensored at the truncation time is positive. In this framework, all kinds of prediction models can be assessed, and time trends in the discrimination ability of a model can be captured by varying the truncation time point. For illustration, we re‐analyze a study on risk prediction for prostate cancer patients. The effects of misspecification of the censoring model are studied in simulated data. Copyright © 2012 John Wiley & Sons, Ltd.
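For reference, the unweighted (Harrell-type) concordance index that the IPCW estimators generalise can be sketched as follows (our own function name; the IPCW versions additionally reweight each pair by an estimate of the censoring survival):

```python
def concordance_index(time, event, risk):
    """Harrell-type concordance index: among usable pairs (the subject
    with the shorter follow-up had an observed event), the proportion in
    which the higher predicted risk belongs to the subject who failed
    earlier; ties in risk count as half-concordant."""
    n = len(time)
    concordant = usable = 0.0
    for i in range(n):
        if not event[i]:
            continue                       # i must have an observed event
        for j in range(n):
            if time[i] < time[j]:          # i failed first, j outlived i
                usable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / usable

# A perfectly ranked marker gives C = 1.
print(concordance_index([1, 2, 3, 4], [1, 1, 1, 1], [0.9, 0.7, 0.4, 0.1]))  # 1.0
```

This unweighted version depends on the censoring distribution, which is precisely the deficiency the weighted estimators in the abstract are designed to remove.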

12.
The hazard ratios resulting from a Cox regression model are hard to interpret and difficult to convert into prolonged survival time. As the main goal is often to study survival functions, there is increasing interest in summary measures based on the survival function that are easier to interpret than the hazard ratio; the residual mean time is an important example of such measures. However, because of the presence of right censoring, the tail of the survival distribution is often difficult to estimate correctly. Therefore, we consider the restricted residual mean time, which represents a partial area under the survival function, given any time horizon τ, and is interpreted as the residual life expectancy up to τ of a subject surviving up to time t. We present a class of regression models for this measure, based on weighted estimating equations and inverse probability of censoring weighted estimators to handle potential right censoring. Furthermore, we show how to extend the models and the estimators to deal with delayed entries. We demonstrate that the restricted residual mean life estimator is equivalent to integrals of Kaplan–Meier estimates in the case of simple factor variables. Estimation performance is investigated by simulation studies. Using real data from the Danish Monitoring Cardiovascular Risk Factor Surveys, we illustrate an application to additive regression models and discuss the general assumption of right censoring and left truncation being dependent on covariates. Copyright © 2017 John Wiley & Sons, Ltd.
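The stated equivalence to integrals of Kaplan–Meier estimates is easy to check numerically; a sketch of the t = 0 case (restricted mean survival time as the area under the Kaplan–Meier curve up to τ; function names are ours):

```python
import numpy as np

def km_survival(time, event):
    """Kaplan-Meier estimate: returns the distinct event times and the
    survival probability just after each of them."""
    time, event = np.asarray(time), np.asarray(event)
    ts = np.unique(time[event == 1])
    surv, out = 1.0, []
    for t in ts:
        at_risk = np.sum(time >= t)
        deaths = np.sum((time == t) & (event == 1))
        surv *= 1.0 - deaths / at_risk
        out.append(surv)
    return ts, np.array(out)

def rmst(time, event, tau):
    """Restricted mean survival time up to tau: the area under the
    Kaplan-Meier curve on [0, tau].  The restricted residual mean life at
    time t is the analogous area under S(u)/S(t) on [t, tau]; this sketch
    shows the t = 0 case."""
    ts, S = km_survival(time, event)
    grid = np.concatenate([[0.0], ts[ts < tau], [tau]])
    heights = np.concatenate([[1.0], S[ts < tau]])
    return np.sum(np.diff(grid) * heights)

# With no censoring, RMST(tau) equals the mean of min(T, tau).
print(rmst([1, 2, 3, 4], [1, 1, 1, 1], tau=3))   # 2.25
```

Unlike the hazard ratio, the resulting quantity is in time units (expected survival time up to τ), which is what makes it directly interpretable.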

13.
Zhao H, Cheng Y, Bang H. Statistics in Medicine 2011, 30(19):2381–2388
Censored survival data analysis has been studied for many years. Yet, the analysis of censored mark variables, such as medical cost, quality-adjusted lifetime, and repeated events, faces a unique challenge that makes standard survival analysis techniques invalid. Because of the 'informative' censoring embedded in censored mark variables, the use of the Kaplan-Meier (Journal of the American Statistical Association 1958; 53:457-481) estimator, as an example, will produce biased estimates. Innovative estimators have been developed in the past decade to handle this issue. Even though consistent estimators have been proposed, the formulations and interpretations of some estimators are less intuitive to practitioners. On the other hand, more intuitive estimators have been proposed, but their mathematical properties have not been established. In this paper, we prove the analytic identity between some estimators (a statistically motivated estimator and an intuitive estimator) for censored cost data. Efron (1967) made a similar investigation for censored survival data (between the Kaplan-Meier estimator and the redistribute-to-the-right algorithm). Therefore, we view our study as an extension of Efron's work to informatively censored data, so that our findings can be applied to other mark variables.
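Efron's redistribute-to-the-right algorithm, the intuitive side of the identity for survival data, can be sketched and checked against the Kaplan–Meier jumps (a minimal sketch assuming no tied times; the function name is ours):

```python
import numpy as np

def redistribute_to_the_right(time, event):
    """Efron's redistribute-to-the-right algorithm: start with mass 1/n on
    every observation; scanning left to right, move each censored
    observation's mass equally onto all observations to its right.  The
    mass left on the uncensored points reproduces the Kaplan-Meier jumps,
    which is the identity this abstract extends to censored cost data.
    (A censored observation with nothing to its right keeps its mass,
    matching the KM curve not reaching zero.)"""
    order = np.argsort(time)
    t, d = np.asarray(time)[order], np.asarray(event)[order]
    n = len(t)
    mass = np.full(n, 1.0 / n)
    for i in range(n):
        if d[i] == 0 and i < n - 1:
            mass[i + 1:] += mass[i] / (n - 1 - i)
            mass[i] = 0.0
    return t[d == 1], mass[d == 1]

# times 1, 3, 4 are events, 2 is censored: its mass 1/4 splits over {3, 4},
# matching the Kaplan-Meier jumps 0.25, 0.375, 0.375.
t_ev, mass = redistribute_to_the_right([1, 2, 3, 4], [1, 0, 1, 1])
print(t_ev, mass)
```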

14.
In two‐stage randomization designs, patients are randomized to one of the initial treatments, and at the end of the first stage, they are randomized to one of the second stage treatments depending on the outcome of the initial treatment. Statistical inference for survival data from these trials uses methods such as marginal mean models and weighted risk set estimates. In this article, we propose two forms of weighted Kaplan–Meier (WKM) estimators based on inverse‐probability weighting—one with fixed weights and the other with time‐dependent weights. We compare their properties with that of the standard Kaplan–Meier (SKM) estimator, marginal mean model‐based (MM) estimator and weighted risk set (WRS) estimator. Simulation study reveals that both forms of weighted Kaplan–Meier estimators are asymptotically unbiased, and provide coverage rates similar to that of MM and WRS estimators. The SKM estimator, however, is biased when the second randomization rates are not the same for the responders and non‐responders to initial treatment. The methods described are demonstrated by applying to a leukemia data set. Copyright © 2010 John Wiley & Sons, Ltd.

15.
Inverse probability of treatment weighted (IPTW) estimation for marginal structural models (MSMs) requires the specification of a nuisance model describing the conditional relationship between treatment allocation and confounders. However, there is still limited information on the best strategy for building these treatment models in practice. We developed a series of simulations to systematically determine the effect of including different types of candidate variables in such models. We explored the performance of IPTW estimators across several scenarios of increasing complexity, including one designed to mimic the complexity typically seen in large pharmacoepidemiologic studies. Our results show that including pure predictors of treatment (i.e. not confounders) in treatment models can lead to estimators that are biased and highly variable, particularly in the context of small samples. The bias and mean-squared error of the MSM-based IPTW estimator increase as the complexity of the problem increases. The performance of the estimator is improved by either increasing the sample size or using only variables related to the outcome to develop the treatment model. Estimates of treatment effect based on the true model for the probability of treatment are asymptotically unbiased. We recommend including only pure risk factors and confounders in the treatment model when developing an IPTW-based MSM.

16.
With changes in the age distribution at the time of cancer diagnosis, the administrative censoring due to study end may be informative. This problem has been mentioned frequently in the relative survival field, and an estimator aiming to correct this problem has been developed. In this paper, we review the existing methods for estimation in relative survival, demonstrate their deficiencies, and propose weighting to correct both the recently introduced net survival estimator and the Ederer I estimator. Using simulations and real cancer registry data, we evaluate the magnitude of the informative censoring problem. We clarify the assumptions behind the reviewed methods and provide guidance to their usage in practice. Copyright © 2013 John Wiley & Sons, Ltd.

17.
18.
When statistical models are used to predict the values of unobserved random variables, loss functions are often used to quantify the accuracy of a prediction. The expected loss over some specified set of occasions is called the prediction error. This paper considers the estimation of prediction error when regression models are used to predict survival times and discusses the use of these estimates. Extending the previous work, we consider both point and confidence interval estimations of prediction error, and allow for variable selection and model misspecification. Different estimators are compared in a simulation study for an absolute relative error loss function, and results indicate that cross‐validation procedures typically produce reliable point estimates and confidence intervals, whereas model‐based estimates are sensitive to model misspecification. Links between performance measures for point predictors and for predictive distributions of survival times are also discussed. The methodology is illustrated in a medical setting involving survival after treatment for disease. Copyright © 2009 John Wiley & Sons, Ltd.
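A cross-validation point estimate of prediction error under the absolute relative error loss can be sketched as follows (censoring, which a real survival application must handle when evaluating the loss, is ignored here; function and parameter names are ours):

```python
import numpy as np

def loo_prediction_error(y, predict):
    """Leave-one-out cross-validation estimate of prediction error under
    the absolute relative error loss |y - yhat| / y, for a user-supplied
    training rule `predict(train) -> point prediction`."""
    y = np.asarray(y, dtype=float)
    losses = []
    for i in range(len(y)):
        train = np.delete(y, i)            # hold out observation i
        losses.append(abs(y[i] - predict(train)) / y[i])
    return float(np.mean(losses))

# A trivial 'model' that always predicts the training mean:
print(round(loo_prediction_error([1.0, 2.0, 3.0], np.mean), 4))  # 0.6667
```

Because each held-out loss is computed from a model fitted without that observation, the estimate is largely insensitive to overfitting, which is why the abstract finds cross-validation more reliable than model-based estimates under misspecification.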

19.
When we synthesize research findings via meta‐analysis, it is common to assume that the true underlying effect differs across studies. Total variability consists of the within‐study and between‐study variances (heterogeneity). There are established measures, such as I2, to quantify the proportion of the total variation attributed to heterogeneity. There is a plethora of estimation methods available for estimating heterogeneity. The widely used DerSimonian and Laird estimation method has been challenged, but knowledge of the overall performance of heterogeneity estimators is incomplete. We identified 20 heterogeneity estimators in the literature and evaluated their performance in terms of mean absolute estimation error, coverage probability, and length of the confidence interval for the summary effect via a simulation study. Although previous simulation studies have suggested the Paule‐Mandel estimator, it has not been compared with all the available estimators. For dichotomous outcomes, estimating heterogeneity through Markov chain Monte Carlo is a good choice if an informative prior distribution for heterogeneity is employed (e.g., from published Cochrane reviews). Nonparametric bootstrap and positive DerSimonian and Laird perform well on all assessment criteria for both dichotomous and continuous outcomes. The Hartung‐Makambi estimator can be the best choice when the heterogeneity values are close to 0.07 for dichotomous outcomes and for medium heterogeneity values (0.01, 0.05) for continuous outcomes. Hence, there are heterogeneity estimators (nonparametric bootstrap DerSimonian and Laird and positive DerSimonian and Laird) that perform better than the suggested Paule‐Mandel. Maximum likelihood provides the best performance for both types of outcome in the absence of heterogeneity.
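The I2 measure mentioned above is computed from Cochran's Q; a minimal sketch:

```python
import numpy as np

def i_squared(effects, variances):
    """I^2 statistic: the percentage of total variability in the study
    effects attributed to between-study heterogeneity rather than chance,
    computed from Cochran's Q as max(0, (Q - df) / Q) * 100."""
    y = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
    ybar = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - ybar) ** 2)
    df = len(y) - 1
    return max(0.0, (Q - df) / Q) * 100.0 if Q > 0 else 0.0

print(round(i_squared([0.0, 0.3, 0.9], [0.04, 0.04, 0.04]), 2))  # 80.95
```

Note that I2 quantifies the proportion of variation due to heterogeneity, whereas the 20 estimators compared above target the heterogeneity variance itself.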

20.
In patients with chronic kidney disease (CKD), clinical interest often centers on determining treatments and exposures that are causally related to renal progression. Analyses of longitudinal clinical data in this population are often complicated by clinical competing events, such as end‐stage renal disease (ESRD) and death, and time‐dependent confounding, where patient factors that are predictive of later exposures and outcomes are affected by past exposures. We developed multistate marginal structural models (MS‐MSMs) to assess the effect of time‐varying systolic blood pressure on disease progression in subjects with CKD. The multistate nature of the model allows us to jointly model disease progression characterized by changes in the estimated glomerular filtration rate (eGFR), the onset of ESRD, and death, and thereby avoid unnatural assumptions of death and ESRD as noninformative censoring events for subsequent changes in eGFR. We model the causal effect of systolic blood pressure on the probability of transitioning into 1 of 6 disease states given the current state. We use inverse probability weights with stabilization to account for potential time‐varying confounders, including past eGFR, total protein, serum creatinine, and hemoglobin. We apply the model to data from the Chronic Renal Insufficiency Cohort Study, a multisite observational study of patients with CKD.
