Similar articles
20 similar articles found.
1.
In this paper, we propose a hybrid variance estimator for the Kaplan-Meier survival function. This new estimator approximates the true variance by a binomial variance formula, where the proportion parameter is a piecewise non-increasing function of the Kaplan-Meier survival function and its upper bound, and the effective sample size equals the number of subjects not censored prior to that time. In addition, we consider an adjusted hybrid variance estimator that modifies the regular estimator for small sample sizes. We present a simulation study comparing the performance of the regular and adjusted hybrid variance estimators to the Greenwood and Peto variance estimators for small sample sizes. We show that on average these hybrid variance estimators give variance estimates closer to the true values than the traditional variance estimators, and hence confidence intervals constructed with them have coverage rates closer to nominal. Indeed, the Greenwood and Peto variance estimators can substantially underestimate the true variance in the left and right tails of the survival distribution, even with moderately censored data. Finally, we illustrate the use of these hybrid and traditional variance estimators on a data set from a leukaemia clinical trial.
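The Greenwood formula, the traditional variance estimator that the hybrid estimator above is compared against, can be sketched in a few lines. This is a minimal illustration of the standard Kaplan-Meier/Greenwood computation, not the hybrid estimator itself (the abstract does not fully specify that):

```python
import numpy as np

def km_greenwood(times, events):
    """Kaplan-Meier estimate with Greenwood variance at each distinct event time.

    times:  array of follow-up times
    events: 1 = death observed, 0 = censored
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    out = []
    surv, gw_sum = 1.0, 0.0
    for t in np.unique(times[events == 1]):            # distinct event times, ascending
        n_risk = np.sum(times >= t)                    # number at risk just before t
        n_dead = np.sum((times == t) & (events == 1))
        surv *= 1.0 - n_dead / n_risk                  # product-limit step
        if n_risk > n_dead:                            # guard last event time
            gw_sum += n_dead / (n_risk * (n_risk - n_dead))
        var = surv ** 2 * gw_sum                       # Greenwood's formula
        out.append((t, surv, var))
    return out

res = km_greenwood([1, 2, 3, 4, 5], [1, 1, 0, 1, 1])   # toy data with one censoring
```

The underestimation the abstract describes occurs in the tails, where `surv ** 2` shrinks the variance toward zero even though few subjects remain at risk.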

2.
Jiang H  Zhou XH 《Statistics in medicine》2004,23(21):3365-3376
Medical cost data with administratively censored observations often arise in cost-effectiveness studies of treatments for life-threatening diseases. The mean of medical costs incurred from the start of a treatment until death, or until a certain time point after the implementation of treatment, is frequently of interest. In many situations, due to the skewed nature of the cost distribution and the non-uniform rate of cost accumulation over time, the currently available normal-approximation confidence interval has poor coverage accuracy. In this paper, we propose a bootstrap confidence interval for the mean of medical costs with censored observations. In simulation studies, we show that the proposed bootstrap confidence interval had much better coverage accuracy than the normal-approximation one when medical costs had a skewed distribution. When there is light censoring on medical costs (≤25 per cent), we found that the bootstrap confidence interval based on the simple weighted estimator is preferred due to its simplicity and good coverage accuracy. For heavily censored cost data (censoring rate ≥30 per cent) with larger sample sizes (n ≥ 200), the bootstrap confidence interval based on the partitioned estimator has superior performance in terms of both efficiency and coverage accuracy. We also illustrate the use of our methods in a real example.
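The bootstrap idea can be sketched as a generic percentile-bootstrap skeleton for a mean on skewed cost data. The paper's simple weighted and partitioned estimators for censored costs are not reproduced here; a censoring-adjusted estimator would replace the plain mean inside the resampling loop:

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_mean_ci(costs, n_boot=2000, alpha=0.05):
    """Percentile-bootstrap CI for a mean. For censored costs, the plain
    .mean() below would be replaced by a censoring-adjusted estimator
    (e.g. a weighted or partitioned estimator)."""
    costs = np.asarray(costs, dtype=float)
    stats = np.array([rng.choice(costs, size=costs.size, replace=True).mean()
                      for _ in range(n_boot)])
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi

costs = rng.lognormal(mean=8.0, sigma=1.0, size=150)   # simulated skewed cost data
lo, hi = bootstrap_mean_ci(costs)
```

Because the percentile interval is read off the resampled distribution itself, it inherits the skewness of the cost data instead of forcing symmetry around the point estimate, which is what the normal-approximation interval does.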

3.
In medical studies with censored data, the Kaplan-Meier product-limit estimator is frequently used to estimate the survival function. Simultaneous confidence intervals for the survival function at various time points are a useful addition to the analysis. This study compares several such methods. In a simulation investigation we consider two whole-curve confidence bands and four methods based on the Bonferroni inequality. The results show that three Bonferroni-type methods are essentially equivalent, all being better than the other methods when the number of time points is small (3 or 5).
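One simple Bonferroni-type construction of the kind the study evaluates: with m preselected time points, each pointwise normal-approximation interval is built at level alpha/m, so the family-wise coverage is at least 1 − alpha. The survival estimates and standard errors below are made-up inputs standing in for Kaplan-Meier estimates with Greenwood standard errors:

```python
from scipy.stats import norm

def bonferroni_cis(surv, se, alpha=0.05):
    """Simultaneous CIs for S(t) at m chosen time points via Bonferroni:
    each pointwise interval is built at level alpha/m, clipped to [0, 1]."""
    m = len(surv)
    z = norm.ppf(1 - alpha / (2 * m))
    return [(max(0.0, s - z * e), min(1.0, s + z * e))
            for s, e in zip(surv, se)]

# hypothetical KM estimates and Greenwood SEs at three time points
cis = bonferroni_cis([0.9, 0.7, 0.5], [0.04, 0.06, 0.07])
```

The price of simultaneity is width: each interval is wider than the unadjusted 95% interval, which matters little when the number of time points is small (3 or 5), consistent with the study's finding.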

4.
Although estimation and confidence intervals have become popular alternatives to hypothesis testing and p-values, statisticians usually determine sample sizes for randomized clinical trials by controlling the power of a statistical test at an appropriate alternative, even those statisticians who recommend the use of confidence intervals for inference. There is merit in achieving consistency between the techniques used for data analysis and for sample size determination. To that end, this paper compares sample sizes determined from the length of the confidence interval with those obtained by controlling power.

5.
We review and develop pointwise confidence intervals for a survival distribution with right-censored data for small samples, assuming only independence of censoring and survival. When there is no censoring, at each fixed time point the problem reduces to making inferences about a binomial parameter. In this case, the recently developed beta product confidence procedure (BPCP) gives the standard exact central binomial confidence intervals of Clopper and Pearson. Additionally, the BPCP has been shown to be exact (gives guaranteed coverage at the nominal level) for progressive type II censoring and has been shown by simulation to be exact for general independent right censoring. In this paper, we modify the BPCP to create a 'mid-p' version, which reduces to the mid-p confidence interval for a binomial parameter when there is no censoring. We perform extensive simulations on both the standard and mid-p BPCP using a method of moments implementation that enforces monotonicity over time. All simulated scenarios suggest that the standard BPCP is exact. The mid-p BPCP, like other mid-p confidence intervals, has simulated coverage closer to the nominal level but may not be exact for all survival times, especially in very low censoring scenarios. In contrast, the two asymptotically-based approximations have lower than nominal coverage in many scenarios. This poor coverage is due to extreme inflation of the lower error rates, although the upper limits are very conservative. Both the standard and the mid-p BPCP methods are available in our bpcp R package. Published 2016. This article is US Government work and is in the public domain in the USA.
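The no-censoring special case mentioned above, the exact central Clopper-Pearson interval, has a closed form in terms of beta quantiles. A minimal sketch (the mid-p variant and the censored-data BPCP itself live in the authors' bpcp R package and are not reproduced here):

```python
from scipy.stats import beta

def clopper_pearson(x, n, alpha=0.05):
    """Exact central (Clopper-Pearson) CI for a binomial proportion x/n,
    computed from quantiles of the beta distribution."""
    lo = 0.0 if x == 0 else beta.ppf(alpha / 2, x, n - x + 1)
    hi = 1.0 if x == n else beta.ppf(1 - alpha / 2, x + 1, n - x)
    return lo, hi

lo, hi = clopper_pearson(10, 10)   # all 10 subjects surviving
```

Guaranteed (conservative) coverage is exactly the "exact" property the BPCP extends to right-censored survival curves.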

6.
The incremental life expectancy, defined as the difference in mean survival times between two treatment groups, is a crucial quantity of interest in cost-effectiveness analyses. Usually, this quantity is very difficult to estimate from censored survival data with a limited follow-up period. The paper develops estimation procedures for a time-shift survival model that, provided model assumptions are met, gives a reliable estimate of incremental life expectancy without extrapolation beyond the study period. Methods for inference are developed both for individual patient data and when only published Kaplan-Meier curves are available. Through simulation, the estimators are shown to be close to unbiased, and the constructed confidence intervals are shown to have close to nominal coverage for small to moderate sample sizes. Copyright © 2016 John Wiley & Sons, Ltd.

7.
We propose a method for calculating power and sample size for studies involving interval-censored failure time data that involves only the standard software required for fitting the appropriate parametric survival model. We use the framework of a longitudinal study where patients are assessed periodically for a response, and the only resultant information available to the investigators is the failure window: the time between the last negative and first positive test results. The survival model is fit to an expanded data set using easily computed weights. We illustrate with a Weibull survival model and a two-group comparison. The investigator can specify a group difference in terms of a hazard ratio. Our simulation results demonstrate the merits of the proposed power calculations. We also explore how the number of assessments (visits), and thus the corresponding lengths of the failure intervals, affects study power. The proposed method can be easily extended to more complex study designs and a variety of survival and censoring distributions. Copyright © 2015 John Wiley & Sons, Ltd.

8.
X He  W K Fung 《Statistics in medicine》1999,18(15):1993-2009
The Weibull family of distributions is frequently used in failure time models. The maximum likelihood estimator is very sensitive to the occurrence of upper and lower outliers, especially when the hazard function is increasing. We consider the method of medians estimator for the two-parameter Weibull model. As an M-estimator, it has a bounded influence function and is highly robust against outliers. It is easy to compute, as it requires solving only one equation instead of a pair of equations as for most other M-estimators. Furthermore, no assumptions or adjustments are needed for the estimator when there are possibly censored observations at either end of the sample: about 16 per cent of the largest observations and 34 per cent of the smallest observations may be censored without affecting the calculations. We also present a simple criterion for choosing between the maximum likelihood estimator and the method of medians estimator to improve the finite-sample efficiency of the Weibull model. Robust inference on the shape parameter is also considered. The usefulness with contaminated or censored samples is illustrated by examples on three lifetime data sets, and a simulation study assesses the performance of the proposed estimator and its confidence intervals under a variety of contaminated Weibull models.

9.
Methods for estimating the size of a closed population often consist of fitting some model (e.g. a log-linear model) to data with a missing cell corresponding to the members of the population missed by all reporting sources. Although the asymptotic standard error is the usual basis for forming confidence intervals for the population total, the sample sizes are not always large enough to produce valid confidence intervals. We propose a method for forming confidence intervals based upon changes in a goodness-of-fit statistic associated with changes in trial values of the population total.

10.
K Kim 《Statistics in medicine》1992,11(11):1477-1488
The study duration in a clinical trial with censored survival data is the sum of the accrual duration, which determines the sample size in the traditional sense, and the follow-up duration, which more or less controls the number of events to be observed. We propose a design procedure for determining the study duration, or for calculating the power, in a group sequential clinical trial with censored survival data and possibly unequal patient allocation between treatments, adjusting for stratified randomization. The group sequential method is based on the use function approach. We illustrate the proposed procedure with a clinical trial recently activated by the Eastern Cooperative Oncology Group.

11.
The problem of assessing occupational exposure using the mean or an upper percentile of a lognormal distribution is addressed. Inferential methods are proposed for constructing an upper confidence limit for an upper percentile of a lognormal distribution and for finding confidence intervals for a lognormal mean based on samples with multiple detection limits. The proposed methods are based on maximum likelihood estimates. They perform well with respect to coverage probability and power, and are applicable to small sample sizes. The proposed approaches are also applicable for finding confidence limits for percentiles of a gamma distribution. Computational details and a source for the computer programs are given; an advantage of the proposed approach is its ease of computation and implementation. Illustrative examples with real data sets and a simulated data set are given.
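To fix notation, the p-th percentile of a lognormal distribution is exp(mu + z_p * sigma) on the original scale. The sketch below is a crude large-sample (delta-method) upper confidence limit built from log-scale MLEs; it is precisely the kind of normal approximation the paper's methods improve upon for small samples and multiple detection limits, shown here only to make the target quantity concrete:

```python
import numpy as np
from scipy.stats import norm

def lognormal_percentile_ucl(x, p=0.95, conf=0.95):
    """Delta-method upper confidence limit for the p-th percentile of a
    lognormal distribution (rough large-sample stand-in; NOT the exact
    MLE-based limits developed in the paper, and no detection limits)."""
    logx = np.log(np.asarray(x, dtype=float))
    n = logx.size
    mu, sigma = logx.mean(), logx.std(ddof=1)
    zp = norm.ppf(p)
    est = mu + zp * sigma                                  # log-scale percentile
    se = sigma * np.sqrt(1.0 / n + zp ** 2 / (2.0 * n))    # approx SE of est
    return np.exp(est + norm.ppf(conf) * se)

rng = np.random.default_rng(3)
ucl = lognormal_percentile_ucl(rng.lognormal(0.0, 0.5, 100))
```

With censored (non-detect) observations the plain log-scale mean and standard deviation above are no longer valid, which is why the paper develops MLE-based methods for that case.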

12.
In this paper, we propose empirical likelihood methods based on influence function and jackknife techniques for constructing confidence intervals for the mean medical cost with censored data. We conduct a simulation study to compare the coverage probabilities and interval lengths of our proposed confidence intervals with those of the existing normal approximation-based confidence intervals and bootstrap confidence intervals. The proposed methods have better finite-sample performance than the existing methods. Finally, we illustrate our proposed methods with a relevant example.

13.
Analytical data are often subject to left-censoring when the actual values to be quantified fall below the limit of detection. The primary interest of this paper is statistical inference for the two-sample problem. Most of the current publications are centered around naive approaches or the parametric Tobit model approach. These methods may not be suitable for data with high censoring rates and relatively small sample sizes. In this paper, we establish the theoretical equivalence of three nonparametric methods, the Wilcoxon rank sum, the Gehan, and the Peto-Peto tests, under fixed left-censoring and other mild conditions. We then develop a nonparametric point and interval estimation procedure for the location-shift model. A large set of simulations compares 14 methods, including naive, parametric, and nonparametric methods. The results clearly favor the nonparametric methods over a range of sample sizes and censoring rates. The simulations also demonstrate satisfactory point and interval estimation results. Finally, a real data example is given, followed by discussion. Copyright © 2008 John Wiley & Sons, Ltd.
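The practical consequence of fixed left-censoring, every non-detect tied at the detection limit, can be mimicked directly with scipy's rank-sum test. This illustrates the tied-at-the-limit idea on simulated lognormal data; it is not the paper's equivalence proof or its interval estimation procedure:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def rank_sum_with_dl(x, y, dl):
    """Wilcoxon rank-sum (Mann-Whitney) comparison of two samples sharing a
    fixed detection limit: all non-detects are set to the limit itself, so
    they form one large tie group at the bottom of the ranking."""
    x = np.maximum(np.asarray(x, dtype=float), dl)   # left-censor at dl
    y = np.maximum(np.asarray(y, dtype=float), dl)
    return mannwhitneyu(x, y, alternative="two-sided")

rng = np.random.default_rng(7)
a = rng.lognormal(0.0, 1.0, 40)      # simulated control measurements
b = rng.lognormal(2.0, 1.0, 40)      # simulated shifted measurements
res = rank_sum_with_dl(a, b, dl=0.5)
```

Because only ranks matter, the test is unaffected by how far below the limit the censored values actually lie, which is why rank-based methods tolerate high censoring rates better than substitution-based naive approaches.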

14.
Objective: To explore the factors that influence sample size estimation for the log-rank test in survival analysis. Methods: Using the Lachin-Foulkes module of the PASS software, sample sizes for a worked example were calculated under different parameter settings to study the factors influencing log-rank sample size estimation. Results: The significance level (including one- versus two-sided testing), the desired power, the hazard ratio, whether the treatment and control group sizes are balanced, the follow-up duration, and the censoring rate all affect the sample size estimate for the log-rank test. Conclusion: Sample size estimation for the log-rank test in survival analysis is influenced by many factors, all of which should be taken into account at the design stage.
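The influence of several of these design parameters can be seen in the standard approximation for the number of events required by the log-rank test (Schoenfeld's formula). This sketch is the generic textbook version, not the Lachin-Foulkes procedure implemented in PASS:

```python
from math import ceil, log
from scipy.stats import norm

def logrank_required_events(hr, alpha=0.05, power=0.80, ratio=1.0):
    """Approximate number of events needed by a two-sided log-rank test
    (Schoenfeld's formula); ratio = n_treatment / n_control."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    p = ratio / (1 + ratio)                 # fraction allocated to treatment
    return ceil(z ** 2 / (p * (1 - p) * log(hr) ** 2))

d = logrank_required_events(hr=0.5)         # equal allocation
```

Dividing the required event count by the anticipated probability that a patient's event is observed (which depends on accrual, follow-up duration, and censoring) gives the total sample size, which is how the follow-up and censoring factors listed in the abstract enter the calculation.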

15.
Deriving valid confidence intervals for complex estimators is a challenging task in practice. Estimators in dynamic weighted survival modeling (DWSurv), a method to estimate an optimal dynamic treatment regime with censored outcomes, are asymptotically normal and consistent for their target parameters when at least a subset of the nuisance models is correctly specified. However, their behavior in finite samples and the impact of model misspecification on inferences remain unclear. In addition, the estimators' nonregularity may negatively affect inferences under some specific data-generating mechanisms. Our objective was to compare five methods for constructing confidence intervals for the DWSurv parameters in finite samples: two asymptotic variance formulas (with and without adjustment for the estimation of nuisance parameters) and three bootstrap approaches. Via simulations, we considered practical scenarios, for example, when some nuisance models are misspecified or when nonregularity is problematic. We also compared the five methods in an application on the treatment of rheumatoid arthritis. We found that the bootstrap approaches performed consistently well, at the cost of longer computation times. The asymptotic variance with adjustments generally yielded conservative confidence intervals. The asymptotic variance without adjustments yielded nominal coverage for large sample sizes. We recommend using the asymptotic variance with adjustments in small samples and the bootstrap if computationally feasible. Caution should be taken when nonregularity may be an issue.

16.
Background: Health utility data often show an apparent truncation effect, where a proportion of individuals achieve the upper bound of 1. The Tobit model and censored least absolute deviations (CLAD) have both been used as analytic solutions to this apparent truncation effect. These models assume that the observed utilities are censored at 1, and hence that the true utility can be greater than 1. We aimed to examine whether the Tobit and CLAD models yield acceptable results when this censoring assumption is not appropriate. Methods: Using health utility (EQ-5D) data from a diabetes study, we conducted a simulation to compare the performance of the Tobit, CLAD, ordinary least squares (OLS), two-part and latent class estimators in terms of their bias and estimated confidence intervals. We also illustrate the performance of semiparametric and nonparametric bootstrap methods. Results: When the true utility was conceptually bounded above at 1, the Tobit and CLAD estimators were both biased. The OLS estimator was asymptotically unbiased and, while the model-based and semiparametric bootstrap confidence intervals were too narrow, confidence intervals based on robust standard errors or the nonparametric bootstrap were acceptable for sample sizes of 100 and larger. Two-part and latent class models also yielded unbiased estimates. Conclusions: When the intention of the analysis is to inform an economic evaluation, and the utilities should be bounded above at 1, the CLAD and Tobit methods were biased. OLS coupled with robust standard errors or the nonparametric bootstrap is recommended as a simple and valid approach.

17.
In this paper we outline and illustrate an easy-to-program method for analytically calculating both parametric and non-parametric bootstrap-type confidence intervals for quantiles of the survival distribution based on right-censored data. This new approach allows for the incorporation of covariates within the framework of parametric models. The procedure is based upon the notion of fractional order statistics and is carried out using a simple beta transformation of the estimated survival function (parametric or non-parametric). It is the only direct method currently available, in the sense that all other methods are based on inverting test statistics or employing confidence intervals for other survival quantities. We illustrate that the new method has favourable coverage probabilities for median confidence intervals compared with six other competing methods.

18.
Effects of mid-point imputation on the analysis of doubly censored data.
Doubly censored data arise in some cohort studies of the AIDS incubation period because the time of infection may be known only up to an interval defined by two successive screening tests for HIV antibody. A simple analytic approach is to impute the infection time by the mid-point of the interval and then apply standard survival techniques for right-censored data. The objective of this paper is to investigate the statistical properties of such a mid-point imputation approach. We investigated the asymptotic bias of the Kaplan-Meier estimate, the coverage probabilities of associated confidence intervals, the bias in the hazard ratio, and the size of the logrank test. We show that the statistical properties of mid-point imputation depend strongly on the underlying distributions of infection times and incubation periods, and on the width of the interval between screening tests. In the absence of treatment, the median incubation period of HIV infection is approximately 10 years, and we conclude that, for this situation, mid-point imputation is a reasonable procedure for interval widths of 2 years or less.
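The imputation step itself is one line. A sketch under the assumption that each infection time is known only to lie between a last negative and a first positive HIV test; the times are arbitrary illustrative values in years:

```python
import numpy as np

def midpoint_impute(last_negative, first_positive, diagnosis):
    """Impute each interval-censored infection time by the interval
    mid-point, then form the resulting incubation periods."""
    last_negative = np.asarray(last_negative, dtype=float)
    first_positive = np.asarray(first_positive, dtype=float)
    infection = (last_negative + first_positive) / 2.0   # mid-point imputation
    return np.asarray(diagnosis, dtype=float) - infection

# two subjects: screening intervals (0, 2] and (2, 4], diagnosed at 9 and 12
inc = midpoint_impute([0.0, 2.0], [2.0, 4.0], [9.0, 12.0])
```

The resulting incubation times would then be fed into standard right-censored methods (e.g. Kaplan-Meier and the logrank test), which is exactly the procedure whose bias the paper quantifies as a function of the screening-interval width.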

19.
OBJECTIVE: Small sample sizes in Asian, Hispanic, and Native American groups and misreporting of race/ethnicity across all groups (including blacks and whites) limit the usefulness of racial/ethnic comparisons based on Medicare data. The objective of this paper is to compare procedure rates for these groups using Medicare data, to assess how small sample size and misreporting affect the validity of comparisons, and to compare rates after correcting for misreporting. DATA: We use 1997 physician claims data for a 5 percent sample of Medicare beneficiaries aged 65 and older to study cardiac procedures and tests. STUDY DESIGN: We calculate age- and sex-adjusted rates and confidence intervals by race/ethnicity. Confidence intervals are compared among the groups. Out-of-sample data on misreporting of race/ethnicity are used to assess potential bias due to misreporting, and to correct for the bias. PRINCIPAL FINDINGS: Sample sizes are sufficient to find significant ethnic and racial differences for most procedures studied. Blacks' rates tend to be lower than whites'. Asian and Hispanic rates also tend to be lower than whites', and about the same as blacks'. Sample sizes for Native Americans are very small (about 0.1 percent of the data); nonetheless, some significant differences from whites can still be identified. Biases in rates due to misreporting are small (less than 10 percent) for blacks, Hispanics, and whites. Biases in rates for Asians and Native Americans are greater, and exceed 20 percent for some procedures. CONCLUSIONS: Sample sizes for Asians, blacks, and Hispanics are generally adequate to permit meaningful comparisons with whites. Implementing a correction for misreporting makes Medicare data useful for all ethnic groups. With this correction, misreporting of race/ethnicity and small sample sizes do not materially limit the usefulness of Medicare data for comparing rates among racial and ethnic groups.

20.
This paper concerns large-sample prediction intervals for the survival times of a future sample based on an initial sample of censored survival data. Simple procedures are developed for obtaining non-parametric and exponential prediction intervals for the future sample quantiles; the non-parametric interval results from inversion of an appropriate test statistic. A simulation study performed under various conditions evaluates the accuracy of the proposed intervals. An adjuvant chemotherapy study of breast cancer patients illustrates the methodology.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号