Similar articles (20 results)
1.
In this paper, we propose empirical likelihood methods based on influence function and jackknife techniques for constructing confidence intervals for mean medical cost with censored data. We conduct a simulation study to compare the coverage probabilities and interval lengths of our proposed confidence intervals with those of the existing normal approximation-based confidence intervals and bootstrap confidence intervals. The proposed methods have better finite-sample performance than existing methods. Finally, we illustrate our proposed methods with a relevant example.
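The abstract does not reproduce the estimators themselves. As a rough illustration of the jackknife ingredient, here is a minimal sketch of a jackknife confidence interval for an (uncensored) mean cost; the censoring and empirical-likelihood machinery of the paper is not reproduced, and the `costs` data are made up.

```python
import numpy as np

def jackknife_ci(x, level=0.95):
    """Jackknife leave-one-out CI for a sample mean.

    A simplified, uncensored illustration of the jackknife idea; the
    paper combines it with empirical likelihood and censoring
    adjustments that are not shown here.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    theta_hat = x.mean()
    # Leave-one-out estimates: mean of the sample with point i removed.
    loo = (x.sum() - x) / (n - 1)
    # Jackknife pseudo-values and the standard error based on them.
    pseudo = n * theta_hat - (n - 1) * loo
    se = pseudo.std(ddof=1) / np.sqrt(n)
    z = 1.959963984540054  # ~ Phi^{-1}(0.975) for a 95% interval
    return theta_hat - z * se, theta_hat + z * se

costs = np.array([120.0, 85.0, 300.0, 45.0, 210.0, 95.0, 150.0])
lo, hi = jackknife_ci(costs)
```

For the sample mean the pseudo-values reduce to the observations themselves, so this interval coincides with the usual normal-theory one; the jackknife pays off for less tractable statistics such as the censored-cost estimators above.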

2.
BACKGROUND: The relation between methodological advances in estimation of confidence intervals (CIs) for incremental cost-effectiveness ratios (ICER) and estimation of cost effectiveness in the presence of censoring has not been explored. The authors address the joint problem of estimating ICER precision in the presence of censoring. METHODS: Using patient-level data (n = 168) on cost and survival from a published placebo-controlled trial, the authors compared 2 methods of measuring uncertainty with censored data: 1) Bootstrap with censor adjustment (BCA); 2) Fieller's method with censor adjustment (FCA). The authors estimate the FCA over all possible values for the correlation (rho) between costs and effects (range = -1 to +1) and also examine the use of the correlation between cases without censoring adjustment (i.e., simple time-on-study) for costs and effects as an approximation for rho. RESULTS: Using time-on-study, which considers all censored observations as responders (deaths), yields 0.64 life-years gained at an additional cost of 87.9 for a cost per life-year of 137 (95% CI by bootstrap -5.9 to 392). Censoring adjustment corrects for the bias in the time-on-study approach and reduces the cost per life-year estimate to 132 (=72/0.54). Confidence intervals with censor adjustment were approximately 40% wider than the base-case without adjustment. Using the Fieller method with an approximation of rho based on the uncensored cost and effect correlation provides a 95% CI of (-48 to 529), which is very close to the BCA interval of (-52 to 504). CONCLUSIONS: Adjustment for censoring is necessary in cost-effectiveness studies to obtain unbiased estimates of ICER with appropriate uncertainty limits. In this study, BCA and FCA methods, the latter with approximated covariance, are simple to compute and give similar confidence intervals.
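Fieller's method itself is a closed-form construction. A minimal sketch follows, using the censor-adjusted point estimates from the abstract (incremental cost 72, incremental effect 0.54); the variances and covariance are invented here purely for illustration, since the abstract does not report them.

```python
import math

def fieller_ci(dC, dE, var_C, var_E, cov_CE, z=1.96):
    """Fieller 95% CI for the ratio dC/dE (e.g. an ICER).

    Solves (dC - R*dE)^2 = z^2 * (var_C - 2*R*cov_CE + R^2*var_E)
    as a quadratic in R. Returns (lower, upper) when the interval is
    bounded; raises when the denominator is not significantly nonzero.
    """
    a = dE**2 - z**2 * var_E
    b = -2.0 * (dC * dE - z**2 * cov_CE)
    c = dC**2 - z**2 * var_C
    disc = b**2 - 4.0 * a * c
    if a <= 0 or disc < 0:
        raise ValueError("denominator not significantly nonzero; CI unbounded")
    r1 = (-b - math.sqrt(disc)) / (2.0 * a)
    r2 = (-b + math.sqrt(disc)) / (2.0 * a)
    return min(r1, r2), max(r1, r2)

# Point estimates from the abstract; (co)variances are hypothetical.
lo, hi = fieller_ci(dC=72.0, dE=0.54, var_C=400.0, var_E=0.01, cov_CE=0.5)
```

The quadratic always brackets the plug-in ratio dC/dE whenever the interval is bounded, which is what makes Fieller intervals attractive for ratios with skewed sampling distributions.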

3.
Medical cost data are typically highly skewed to the right with a large proportion of zero costs. It is also common for these data to be censored because of incomplete follow-up and death. In the case of censoring due to death, it is important to consider the potential dependence between cost and survival. This association can occur because patients who incur a greater amount of medical cost tend to be frailer and hence are more likely to die. To handle this informative censoring issue, joint modeling of cost and survival with shared random effects has been proposed. In this paper, we extend this joint modeling approach to handle a final feature of many medical cost data sets: the fact that data were obtained via a complex survey design. Specifically, we extend the joint model by incorporating the sample weights when estimating the parameters and using the Taylor series linearization approach when calculating the standard errors. We use a simulation study to compare the joint modeling approach with and without these adjustments. The simulation study shows that parameter estimates can be seriously biased when information about the complex survey design is ignored. It also shows that standard errors based on the Taylor series linearization approach provide satisfactory confidence interval coverage. The proposed joint model is applied to monthly hospital costs obtained from the 2004 National Long Term Care Survey. Copyright © 2012 John Wiley & Sons, Ltd.

4.
Tian L, Huang J. Statistics in Medicine 2007;26(23):4273-4292
The two-part model is often used to analyse medical cost data which contain a large proportion of zero costs and are highly skewed with some large costs. The total medical costs over a period of time are often censored due to incomplete follow-up, making the analysis difficult as the censoring can be informative. We propose to apply the inverse probability weighting method on a two-part model to analyse right-censored cumulative medical costs with informative censoring. We also introduce a set of simple functionals based on the intermediate cost history to be applied with the efficiency augmentation technique. In addition, we propose a practical model-checking technique based on the cumulative residuals. Simulation studies are conducted to evaluate the finite sample performance of the proposed method. We use a data set on the cardiovascular disease (CVD)-related Medicare costs to illustrate our proposed method.
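The core of inverse probability weighting for censored costs is to up-weight each fully observed cost by the probability of remaining uncensored. Here is a minimal sketch of the simple complete-case weighted estimator (in the spirit of this literature, not the paper's two-part, efficiency-augmented version); the toy data are made up.

```python
import numpy as np

def censoring_km(time, delta, t):
    """Kaplan-Meier estimate of K(t-) = P(C >= t), the probability of
    remaining uncensored just before time t. delta = 1 means the cost is
    fully observed; delta = 0 marks a censoring event."""
    order = np.argsort(time)
    t_sorted = time[order]
    cens = 1 - delta[order]          # censoring is the "event" here
    surv = 1.0
    for ti, ci in zip(t_sorted, cens):
        if ti >= t:                  # stop before t to get the left limit
            break
        at_risk = np.sum(t_sorted >= ti)
        if ci == 1:
            surv *= 1.0 - 1.0 / at_risk
    return surv

def ipw_mean_cost(cost, time, delta):
    """Inverse-probability-weighted mean cumulative cost: complete cases
    are up-weighted by 1 / K(T_i-)."""
    n = len(cost)
    total = 0.0
    for c, t, d in zip(cost, time, delta):
        if d == 1:
            total += c / censoring_km(time, delta, t)
    return total / n

cost = np.array([1200.0, 800.0, 2500.0, 400.0, 1500.0])
time = np.array([3.0, 5.0, 2.0, 6.0, 4.0])
m_uncensored = ipw_mean_cost(cost, time, np.array([1, 1, 1, 1, 1]))
m_censored = ipw_mean_cost(cost, time, np.array([1, 0, 1, 1, 1]))
```

With no censoring the weights are all 1 and the estimator reduces to the plain sample mean, a useful sanity check for any IPW implementation.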

5.
The incidence of new infections is a key measure of the status of the HIV epidemic, but accurate measurement of incidence is often constrained by limited data. Karon et al. (Statist. Med. 2008; 27:4617–4633) developed a model to estimate the incidence of HIV infection from surveillance data with biologic testing for recent infection for newly diagnosed cases. This method has been implemented by public health departments across the United States and is behind the new national incidence estimates, which are about 40 per cent higher than previous estimates. We show that the delta method approximation given for the variance of the estimator is incomplete, leading to an inflated variance estimate. This contributes to the generation of overly conservative confidence intervals, potentially obscuring important differences between populations. We demonstrate via simulation that an innovative model-based bootstrap method using the specified model for the infection and surveillance process improves confidence interval coverage and adjusts for the bias in the point estimate. Confidence interval coverage is about 94–97 per cent after correction, compared with 96–99 per cent before. The simulated bias in the estimate of incidence ranges from -6.3 to +14.6 per cent under the original model but is consistently under 1 per cent after correction by the model-based bootstrap. In an application to data from King County, Washington, in 2007, we observe correction of 7.2 per cent relative bias in the incidence estimate and a 66 per cent reduction in the width of the 95 per cent confidence interval using this method. We provide open-source software to implement the method that can also be extended for alternate models.

6.

Background

This work has investigated under what conditions confidence intervals around the differences in mean costs from a cluster RCT are suitable for estimation using a commonly used cluster-adjusted bootstrap in preference to methods that utilise the Huber-White robust estimator of variance. The bootstrap's main advantage is in dealing with skewed data, which often characterise patient costs. However, it is insufficiently well recognised that one method of adjusting the bootstrap to deal with clustered data is only valid in large samples. In particular, the requirement that the number of clusters randomised should be large would not be satisfied in many cluster RCTs performed to date.

Methods

The performances of confidence intervals for simple differences in mean costs utilising a robust (cluster-adjusted) standard error and from two cluster-adjusted bootstrap procedures were compared in terms of confidence interval coverage in a large number of simulations. Parameters varied included the intracluster correlation coefficient, the sample size and the distributions used to generate the data.

Results

The bootstrap's advantage in dealing with skewed data was found to be outweighed by its poor confidence interval coverage when the number of clusters was at the level frequently found in cluster RCTs in practice. Simulations showed that confidence intervals based on robust methods of standard error estimation achieved coverage rates between 93.5% and 94.8% for a 95% nominal level whereas those for the bootstrap ranged between 86.4% and 93.8%.

Conclusion

In general, 24 clusters per treatment arm is probably the minimum number for which one would even begin to consider the bootstrap in preference to traditional robust methods, for the parameter combinations investigated here. At least this number of clusters and extremely skewed data would be necessary for the bootstrap to be considered in favour of the robust method. There is a need for further investigation of more complex bootstrap procedures if economic data from cluster RCTs are to be analysed appropriately.
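The cluster-adjusted bootstrap discussed above resamples whole clusters, not individual patients. A minimal sketch of that resampling scheme follows; the cluster counts, cost distribution, and parameter values are invented for illustration and do not come from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def cluster_bootstrap_ci(arm_a, arm_b, n_boot=1000, level=0.95):
    """Percentile CI for the difference in mean patient cost between two
    trial arms, resampling whole clusters (with replacement) within arm.

    arm_a / arm_b: lists of 1-D arrays, one array of costs per cluster.
    """
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        pick_a = rng.integers(0, len(arm_a), len(arm_a))
        pick_b = rng.integers(0, len(arm_b), len(arm_b))
        mean_a = np.concatenate([arm_a[i] for i in pick_a]).mean()
        mean_b = np.concatenate([arm_b[j] for j in pick_b]).mean()
        diffs[b] = mean_a - mean_b
    alpha = (1 - level) / 2
    return np.quantile(diffs, alpha), np.quantile(diffs, 1 - alpha)

# Six clusters per arm of skewed (lognormal) costs -- well below the
# ~24 clusters per arm at which the simulations above suggest this
# bootstrap starts to be competitive with robust standard errors.
arm_a = [rng.lognormal(6.0, 1.0, size=20) for _ in range(6)]
arm_b = [rng.lognormal(5.5, 1.0, size=20) for _ in range(6)]
lo, hi = cluster_bootstrap_ci(arm_a, arm_b)
```

With only six clusters per arm there are few distinct resamples, which is exactly the small-sample failure mode the simulations document: the interval can be far too narrow.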

7.
This paper analyses a case in censored failure time data problems where some observations are potentially censored. The traditional models for failure time data implicitly assume that the censoring status for each observation is deterministic. Therefore, they cannot be applied directly to the potentially censored data. We propose an estimator that uses resampling techniques to approximate censoring probabilities for individual observations. A Monte Carlo simulation study shows that the proposed estimator properly corrects biases that would otherwise be present had it been assumed that either all potentially censored observations are censored or that no censoring has occurred. Finally, we apply the estimator to a health insurance claims database.

8.
Economic evaluations of medical technologies involve a consideration of both costs and clinical benefits, and an increasing number of clinical studies include a specific objective of assessing cost-effectiveness. These studies measure the trade-off between costs and benefits using the cost-effectiveness ratio (CE ratio), which is defined as the net incremental cost per unit of benefit provided by the candidate therapy. In this paper we review the statistical methods which have been proposed for estimating 95 per cent confidence intervals for cost-effectiveness ratios. We show that the use of an angular transformation of the standardized ratio stabilizes the variance of the estimated CE ratio, and provides a clearer interpretation of study results. An estimate of the 95 per cent confidence interval for the CE ratio in the transformed scale is easily made using the jack-knife or bootstrap. The available methods are compared using data from a long-term study of mortality in patients with congestive heart failure.
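The abstract does not spell out its standardization, so the following is only a sketch of the general idea of bootstrapping a CE ratio on a bounded angular scale: map each replicate ratio to an angle via the arctangent, take percentile limits there, and back-transform with the tangent. The per-patient incremental data and the exact transform are assumptions for illustration.

```python
import math
import numpy as np

rng = np.random.default_rng(42)

def angular_bootstrap_ci(dcost, deffect, n_boot=500, level=0.95):
    """Bootstrap CI for a cost-effectiveness ratio via an angular scale.

    (cost, effect) pairs are resampled jointly to preserve their
    correlation; each replicate ratio is represented by the angle
    theta = atan2(mean cost, mean effect), whose tangent is the ratio.
    Percentile limits are taken on the bounded theta scale and then
    back-transformed. A sketch only; not the paper's exact procedure."""
    n = len(dcost)
    thetas = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)
        thetas[b] = math.atan2(dcost[idx].mean(), deffect[idx].mean())
    alpha = (1 - level) / 2
    lo_t, hi_t = np.quantile(thetas, [alpha, 1 - alpha])
    return math.tan(lo_t), math.tan(hi_t)

# Hypothetical per-patient incremental costs and effects.
dcost = rng.normal(50.0, 10.0, 100)
deffect = rng.normal(0.5, 0.2, 100)
lo, hi = angular_bootstrap_ci(dcost, deffect)
```

Because the angle lives on a bounded interval, extreme replicate ratios (near-zero effect differences) cannot blow up the way raw ratios do, which is the variance-stabilization point the abstract makes.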

9.
In this paper, we propose a model for medical costs recorded at regular time intervals, e.g. every month, as repeated measures in the presence of a terminating event, such as death. Prior models have related monthly medical costs to time since entry, with extra costs at the final observations at the time of death. Our joint model for monthly medical costs and survival time incorporates two important new features. First, medical cost and survival may be correlated because more 'frail' patients tend to accumulate medical costs faster and die earlier. A joint random effects model is proposed to account for the correlation between medical costs and survival by a shared random effect. Second, monthly medical costs usually increase during the time period prior to death because of the intensive care for dying patients. We present a method for estimating the pattern of cost prior to death, which is applicable if the pattern can be characterized as an additive effect that is limited to a fixed time interval, say b units of time before death. This 'turn back time' method for censored observations censors cost data b units of time before the actual censoring time, while keeping the actual censoring time for the survival data. Time-dependent covariates can be included. Maximum likelihood estimation and inference are carried out through a Monte Carlo EM algorithm with a Metropolis-Hastings sampler in the E-step. An analysis of monthly outpatient EPO medical cost data for dialysis patients is presented to illustrate the proposed methods.

10.
Censoring is a common problem with medical cost data. Methods from traditional survival analysis are not directly applicable to estimate medical costs since patients accumulate costs with different rate functions over time, leading to negatively biased estimates. Heckman's two-step estimator results in large variances when identical explanatory variables that influence selection are included in the structural equation, i.e. when there are no exclusion restrictions. This paper provides a systematic treatment of the correction for nonrandom sample selection bias of medical cost data where the selection rule is described by a censored regression model. The proposed method first uses, in the selection equation, the duration of time a patient is tracked rather than a binary indicator of whether or not that duration is censored. Second, using Tobit residuals instead of the inverse Mills ratio in the structural equation allows us to decrease large variances introduced by the Heckman model when there are no exclusion restrictions. We show that the resulting estimators are consistent and asymptotically normal. Simulation studies confirmed our results. Moreover, we derive a simple test to determine possible sample selection bias due to censoring. Data from a study on the medical cost of breast, prostate, colon, and lung cancer is used as an application of the method.

11.
For a continuous-scale diagnostic test, it is of interest to construct a confidence interval for the sensitivity of the diagnostic test at the cut-off that yields a predetermined level of its specificity (for example, 80, 90 or 95 per cent). In this paper we propose two new intervals for the sensitivity of a continuous-scale diagnostic test at a fixed level of specificity. We then conduct simulation studies to compare the relative performance of these two intervals with the best existing BCa bootstrap interval, proposed by Platt et al. Our simulation results show that the newly proposed intervals are better than the BCa bootstrap interval in terms of coverage accuracy and interval length.

12.
He X, Fung WK. Statistics in Medicine 1999;18(15):1993-2009
The Weibull family of distributions is frequently used in failure time models. The maximum likelihood estimator is very sensitive to occurrence of upper and lower outliers, especially when the hazard function is increasing. We consider the method of medians estimator for the two-parameter Weibull model. As an M-estimator, it has a bounded influence function and is highly robust against outliers. It is easy to compute as it requires solving only one equation instead of a pair of equations as for most other M-estimators. Furthermore, no assumptions or adjustments are needed for the estimator when there are some possibly censored observations at either end of the sample. About 16 per cent of the largest observations and 34 per cent of the smallest observations may be censored without affecting the calculations. We also present a simple criterion to choose between the maximum likelihood estimator and the method of medians estimator to improve on the finite-sample efficiency of the Weibull model. Robust inference on the shape parameter is also considered. The usefulness with contaminated or censored samples is illustrated by examples on three lifetime data sets. A simulation study was carried out to assess the performance of the proposed estimator and the confidence intervals of a variety of contaminated Weibull models.

13.
Interval-censored data, in which the event time is only known to lie in some time interval, arise commonly in practice, for example, in a medical study in which patients visit clinics or hospitals at prescheduled times and the events of interest occur between visits. Such data are appropriately analyzed using methods that account for this uncertainty in event time measurement. In this paper, we propose a survival tree method for interval-censored data based on the conditional inference framework. Using Monte Carlo simulations, we find that the tree is effective in uncovering underlying tree structure, performs similarly to an interval-censored Cox proportional hazards model fit when the true relationship is linear, and performs at least as well as (and in the presence of right-censoring outperforms) the Cox model when the true relationship is not linear. Further, the interval-censored tree outperforms survival trees based on imputing the event time as an endpoint or the midpoint of the censoring interval. We illustrate the application of the method on tooth emergence data.

14.
The statistic of interest in most health economic evaluations is the incremental cost-effectiveness ratio. Since the variance of a ratio estimator is intractable, the health economics literature has suggested a number of alternative approaches to estimating confidence intervals for the cost-effectiveness ratio. In this paper, Monte Carlo simulation techniques are employed to address the question of which of the proposed methods is most appropriate. By repeatedly sampling from a known distribution and applying the different methods of confidence interval estimation, it is possible to calculate the coverage properties of each method to see if these correspond to the chosen confidence level. As the results of a single Monte Carlo experiment would be valid only for that particular set of circumstances, a series of experiments was conducted in order to examine the performance of the different methods under a variety of conditions relating to the sample size, the coefficient of variation of the numerator and denominator of the ratio, and the covariance between costs and effects in the underlying data. Response surface analysis was used to analyse the results, and substantial differences between the different methods of confidence interval estimation were identified. The methods, both parametric and non-parametric, which assume a normal sampling distribution performed poorly, as did the approach based on simply combining the separate intervals on costs and effects. The choice of method for confidence interval estimation can lead to large differences in the estimated confidence limits for cost-effectiveness ratios. The importance of such differences is an empirical question and will depend to a large extent on the role of hypothesis testing in economic appraisal. However, where it is suspected that the sampling distribution is skewed, normal approximation methods produce particularly poor results and should be avoided.

15.
Methods for the evaluation of the predictive accuracy of biomarkers with respect to survival outcomes subject to right censoring have been discussed extensively in the literature. In cancer and other diseases, survival outcomes are commonly subject to interval censoring by design or due to the follow-up schema. In this article, we present an estimator for the area under the time-dependent receiver operating characteristic (ROC) curve for interval-censored data based on a nonparametric sieve maximum likelihood approach. We establish the asymptotic properties of the proposed estimator and illustrate its finite-sample properties using a simulation study. The application of our method is illustrated using data from a cancer clinical study. An open-source R package to implement the proposed method is available on the Comprehensive R Archive Network.

16.
The objective of this paper is to estimate survival curves for two different exposure groups when the exposure group is not known for all observations, and the data are subject to left truncation and right censoring. The situation we consider is when the probability that the exposure group is missing may depend on whether the observation is censored or uncensored, in which case the exposure is not missing at random. The problem was motivated by a study of Alzheimer's disease to estimate the distribution of ages at diagnosis for individuals with and without an apolipoprotein E4 allele (the exposure group). Genotyping for this risk factor was incomplete and performed more frequently on the cases of Alzheimer's disease (the uncensored observations) than the censored observations. The survival curves are estimated in discrete time using an EM algorithm. A bootstrapping procedure is proposed that guarantees each bootstrap sample has the same proportion of observations with missing exposure. A simulation is performed to evaluate the bias of the estimators and to investigate design and efficiency issues. The methods are applied to the Alzheimer's disease study.
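The bootstrap constraint described above (every resample keeps the original proportion of missing exposure) amounts to stratified resampling. A minimal sketch, with the sample size and missingness pattern made up:

```python
import numpy as np

rng = np.random.default_rng(3)

def stratified_missing_bootstrap_idx(missing):
    """Bootstrap indices that preserve the original count of
    observations with missing exposure, by resampling separately
    within the missing and observed strata."""
    missing = np.asarray(missing, dtype=bool)
    idx_missing = np.flatnonzero(missing)
    idx_observed = np.flatnonzero(~missing)
    take_m = rng.choice(idx_missing, size=idx_missing.size, replace=True)
    take_o = rng.choice(idx_observed, size=idx_observed.size, replace=True)
    return np.concatenate([take_m, take_o])

# Hypothetical sample: 7 of 20 subjects have missing exposure.
missing = np.array([True] * 7 + [False] * 13)
idx = stratified_missing_bootstrap_idx(missing)
```

Each bootstrap replicate indexed by `idx` then feeds the EM survival-curve estimation; the stratification guarantees the missing-data mechanism is represented identically in every replicate.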

17.
We review and develop pointwise confidence intervals for a survival distribution with right-censored data for small samples, assuming only independence of censoring and survival. When there is no censoring, at each fixed time point, the problem reduces to making inferences about a binomial parameter. In this case, the recently developed beta product confidence procedure (BPCP) gives the standard exact central binomial confidence intervals of Clopper and Pearson. Additionally, the BPCP has been shown to be exact (gives guaranteed coverage at the nominal level) for progressive type II censoring and has been shown by simulation to be exact for general independent right censoring. In this paper, we modify the BPCP to create a 'mid-p' version, which reduces to the mid-p confidence interval for a binomial parameter when there is no censoring. We perform extensive simulations on both the standard and mid-p BPCP using a method of moments implementation that enforces monotonicity over time. All simulated scenarios suggest that the standard BPCP is exact. The mid-p BPCP, like other mid-p confidence intervals, has simulated coverage closer to the nominal level but may not be exact for all survival times, especially in very low censoring scenarios. In contrast, the two asymptotically based approximations have lower than nominal coverage in many scenarios. This poor coverage is due to the extreme inflation of the lower error rates, although the upper limits are very conservative. Both the standard and the mid-p BPCP methods are available in our bpcp R package. Published 2016. This article is US Government work and is in the public domain in the USA.
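The BPCP itself (a beta-product construction over the censored risk sets) is not reproduced here, but its no-censoring special case, which the abstract highlights, is easy to sketch: the exact Clopper-Pearson interval and its mid-p variant for a binomial proportion, found by bisection on the binomial tail probabilities.

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def _boundary(f):
    """Bisection for the p in (0, 1) where f flips from True to False."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if f(mid):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def clopper_pearson(x, n, alpha=0.05):
    """Exact central (Clopper-Pearson) CI for a binomial proportion x/n."""
    lower = 0.0 if x == 0 else _boundary(
        lambda p: 1 - binom_cdf(x - 1, n, p) <= alpha / 2)  # P(X >= x) small
    upper = 1.0 if x == n else _boundary(
        lambda p: binom_cdf(x, n, p) > alpha / 2)           # P(X <= x) large
    return lower, upper

def midp(x, n, alpha=0.05):
    """Mid-p central CI: only half of P(X = x) is charged to each tail."""
    point = lambda p: comb(n, x) * p**x * (1 - p)**(n - x)
    upper_tail = lambda p: 1 - binom_cdf(x, n, p) + 0.5 * point(p)
    lower_tail = lambda p: (binom_cdf(x - 1, n, p) if x > 0 else 0.0) + 0.5 * point(p)
    lower = 0.0 if x == 0 else _boundary(lambda p: upper_tail(p) <= alpha / 2)
    upper = 1.0 if x == n else _boundary(lambda p: lower_tail(p) > alpha / 2)
    return lower, upper

cp_lo, cp_hi = clopper_pearson(5, 10)
mp_lo, mp_hi = midp(5, 10)
```

As the abstract notes for the BPCP, the mid-p interval is strictly inside the exact one, trading guaranteed coverage for coverage closer to the nominal level.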

18.
OBJECTIVES: This work has investigated under what conditions cost-effectiveness data from a cluster randomized trial (CRT) are suitable for analysis using a cluster-adjusted nonparametric bootstrap. The bootstrap's main advantages are in dealing with skewed data and its ability to take correlations between costs and effects into account. However, there are known theoretical problems with a commonly used cluster bootstrap procedure, and the practical implications of these require investigation. METHODS: Simulations were used to estimate the coverage of confidence intervals around incremental cost-effectiveness ratios from CRTs using two bootstrap methods. RESULTS: The bootstrap gave excessively narrow confidence intervals, but there was evidence to suggest that, when the number of clusters per treatment arm exceeded 24, it might give acceptable results. The method that resampled individuals as well as clusters did not perform well when cost and effectiveness data were correlated. CONCLUSIONS: If economic data from such trials are to be analyzed adequately, then there is a need for further investigations of more complex bootstrap procedures. Similarly, further research is required on methods such as the net benefit approach.

19.
Health economic evaluations are now more commonly being included in pragmatic randomized trials. However, a variety of methods are being used for the presentation and analysis of the resulting cost data, and in many cases the approaches taken are inappropriate. In order to inform health care policy decisions, analysis needs to focus on arithmetic mean costs, since these will reflect the total cost of treating all patients with the disease. Thus, despite the often highly skewed distribution of cost data, standard non-parametric methods or use of normalizing transformations are not appropriate. Although standard parametric methods of comparing arithmetic means may be robust to non-normality for some data sets, this is not guaranteed. While the randomization test can be used to overcome assumptions of normality, its use for comparing means is still restricted by the need for similarly shaped distributions in the two groups. In this paper we show how the non-parametric bootstrap provides a more flexible alternative for comparing arithmetic mean costs between randomized groups, avoiding the assumptions which limit other methods. Details of several bootstrap methods for hypothesis tests and confidence intervals are described and applied to cost data from two randomized trials. The preferred bootstrap approaches are the bootstrap-t or variance stabilized bootstrap-t and the bias corrected and accelerated percentile methods. We conclude that such bootstrap techniques can be recommended either as a check on the robustness of standard parametric methods, or to provide the primary statistical analysis when making inferences about arithmetic means for moderately sized samples of highly skewed data such as costs.
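Of the preferred approaches named above, the bootstrap-t is the most mechanical to sketch: each replicate studentizes its own mean, and the empirical t-quantiles replace the normal-theory ±1.96. A minimal version follows; the skewed cost data are simulated, not from the trials in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_t_ci(x, n_boot=2000, level=0.95):
    """Bootstrap-t CI for an arithmetic mean, suited to skewed cost data.

    Each resample's mean is studentized by that resample's own standard
    error, so the interval adapts to skewness instead of assuming a
    symmetric normal sampling distribution."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    mean = x.mean()
    se = x.std(ddof=1) / np.sqrt(n)
    t_stats = np.empty(n_boot)
    for b in range(n_boot):
        xb = x[rng.integers(0, n, n)]
        se_b = max(xb.std(ddof=1) / np.sqrt(n), 1e-12)  # guard degenerate resamples
        t_stats[b] = (xb.mean() - mean) / se_b
    alpha = (1 - level) / 2
    t_lo, t_hi = np.quantile(t_stats, [alpha, 1 - alpha])
    # Note the reversal: the upper t-quantile forms the lower limit.
    return mean - t_hi * se, mean - t_lo * se

costs = rng.lognormal(7.0, 1.2, size=50)  # hypothetical skewed costs
lo, hi = bootstrap_t_ci(costs)
```

For right-skewed costs the resulting interval is typically asymmetric about the sample mean, extending further toward large values, which is exactly the behaviour a normal-approximation interval cannot reproduce.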

20.
Background
Health utility data often show an apparent truncation effect, where a proportion of individuals achieve the upper bound of 1. The Tobit model and censored least absolute deviations (CLAD) have both been used as analytic solutions to this apparent truncation effect. These models assume that the observed utilities are censored at 1, and hence that the true utility can be greater than 1. We aimed to examine whether the Tobit and CLAD models yielded acceptable results when this censoring assumption was not appropriate.

Methods
Using health utility (captured through EQ5D) data from a diabetes study, we conducted a simulation to compare the performance of the Tobit, CLAD, ordinary least squares (OLS), two-part and latent class estimators in terms of their bias and estimated confidence intervals. We also illustrate the performance of semiparametric and nonparametric bootstrap methods.

Results
When the true utility was conceptually bounded above at 1, the Tobit and CLAD estimators were both biased. The OLS estimator was asymptotically unbiased and, while the model-based and semiparametric bootstrap confidence intervals were too narrow, confidence intervals based on the robust standard errors or the nonparametric bootstrap were acceptable for sample sizes of 100 and larger. Two-part and latent class models also yielded unbiased estimates.

Conclusions
When the intention of the analysis is to inform an economic evaluation, and the utilities should be bounded above at 1, the CLAD and Tobit methods were biased. OLS coupled with robust standard errors or the nonparametric bootstrap is recommended as a simple and valid approach.
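The recommended approach above, OLS with robust standard errors, can be sketched directly with the HC0 sandwich estimator; the utility-style regression data below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def ols_robust(X, y):
    """OLS point estimates with HC0 (sandwich) robust standard errors,
    the kind of heteroskedasticity-robust plug-in the simulation pairs
    with OLS."""
    X = np.column_stack([np.ones(len(y)), X])   # add intercept column
    xtx_inv = np.linalg.inv(X.T @ X)
    beta = xtx_inv @ X.T @ y
    resid = y - X @ beta
    # "Meat" of the sandwich: X' diag(e_i^2) X.
    meat = (X * resid[:, None] ** 2).T @ X
    cov = xtx_inv @ meat @ xtx_inv
    return beta, np.sqrt(np.diag(cov))

# Hypothetical utility-like outcome with heteroskedastic noise.
x = rng.uniform(0.0, 1.0, 200)
y = 0.6 + 0.3 * x + rng.normal(0.0, 0.05 * (1 + x), 200)
beta, se = ols_robust(x, y)
```

Unlike the model-based OLS variance, the sandwich form stays valid when the error variance changes across the utility range, which is typical near the upper bound of 1.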
