Similar Literature
20 related articles found.
1.
Xie J, Liu C. Statistics in Medicine 2005; 24(20): 3089-3110.
Estimation and group comparison of survival curves are two very common issues in survival analysis. In practice, the Kaplan-Meier estimates of survival functions may be biased due to an unbalanced distribution of confounders. Here we develop an adjusted Kaplan-Meier estimator (AKME) to reduce confounding effects using inverse probability of treatment weighting (IPTW). Each observation is weighted by its inverse probability of being in a certain group. The AKME is shown to be a consistent estimate of the survival function, and the variance of the AKME is derived. A weighted log-rank test is proposed for comparing group differences of survival functions. Simulation studies are used to illustrate the performance of the AKME and the weighted log-rank test. The method proposed here outperforms the Kaplan-Meier estimate, and it does better than or as well as other estimators based on stratification. The AKME and the weighted log-rank test are applied to two real examples: one is a study of times to reinfection of sexually transmitted diseases, and the other is the primary biliary cirrhosis (PBC) study.
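As an illustration of the weighting step, here is a minimal sketch of an IPTW-adjusted Kaplan-Meier curve, assuming a pandas DataFrame with a 0/1 group indicator and baseline confounders; the column names, the logistic propensity model, and the use of lifelines are our choices, not details from the paper.

```python
from lifelines import KaplanMeierFitter
from sklearn.linear_model import LogisticRegression

def iptw_km(df, confounders):
    # Propensity of membership in group 1 given the baseline confounders
    ps = LogisticRegression().fit(df[confounders], df["group"]).predict_proba(df[confounders])[:, 1]
    # Weight each subject by the inverse probability of the group it is in
    w = df["group"] / ps + (1 - df["group"]) / (1 - ps)
    curves = {}
    for g in (0, 1):
        m = df["group"] == g
        kmf = KaplanMeierFitter()
        kmf.fit(df.loc[m, "time"], df.loc[m, "event"], weights=w[m], label=f"group {g}")
        curves[g] = kmf  # curves[g].survival_function_ is the AKME-style curve
    return curves
```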

2.
We propose a procedure for estimating the survival function of a time-to-event random variable under arbitrary patterns of censoring. The method is predicated on the mild assumption that the distribution of the random variable, and hence the survival function, has a density that lies in a class of 'smooth' densities whose elements can be represented by an infinite Hermite series. Truncation of the series yields a 'parametric' expression that can well approximate any plausible survival density, and hence survival function, provided the degree of truncation is suitably chosen. The representation admits a convenient expression for the likelihood for the 'parameters' in the approximation under arbitrary censoring/truncation that is straightforward to compute and maximize. A test statistic for comparing two survival functions, which is based on an integrated weighted difference of estimates of each under this representation, is proposed. Via simulation studies and application to a number of data sets, we demonstrate that the approach yields reliable inferences and can result in gains in efficiency over traditional nonparametric methods.
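The following sketch illustrates one concrete reading of the representation: the density of the (log) event times is the square of a truncated orthonormal Hermite-function series, so a unit-norm constraint on the coefficients makes it a proper density, and the right-censored likelihood is maximized numerically. The truncation level, the integration grid, and the log-time transformation are our assumptions for illustration, not the paper's settings.

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from scipy.optimize import minimize
from scipy.special import factorial

K = 6  # truncation level of the series (an illustrative choice)

def phi(x):
    # Orthonormal Hermite functions phi_0 .. phi_{K-1} evaluated at x
    k = np.arange(K)
    norm = np.sqrt(2.0**k * factorial(k) * np.sqrt(np.pi))
    H = np.stack([hermval(x, np.eye(K)[j]) for j in range(K)])  # physicists' H_j
    return (H * np.exp(-x**2 / 2.0) / norm[:, None]).T          # shape (len(x), K)

def density(x, theta):
    # f(x) = (sum_k theta_k phi_k(x))^2 integrates to one when ||theta|| = 1
    return (phi(np.atleast_1d(x)) @ theta) ** 2

def survival(x, theta, grid=np.linspace(-10, 10, 4001)):
    # S(x) = integral of the density from x upward, done on a fixed grid
    F = np.cumsum(density(grid, theta)) * (grid[1] - grid[0])
    return 1.0 - np.interp(x, grid, F)

def negloglik(beta, y, delta):           # y: log event times, delta: 1 = event
    theta = beta / np.linalg.norm(beta)  # enforce the unit-norm constraint
    return -np.sum(delta * np.log(density(y, theta) + 1e-12)
                   + (1 - delta) * np.log(survival(y, theta) + 1e-12))

# fit = minimize(negloglik, x0=np.eye(K)[0] + 0.01, args=(y, delta))
```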

3.
In a cost-effectiveness analysis using clinical trial data, estimates of the between-treatment difference in mean cost and mean effectiveness are needed. Several methods for handling censored data have been suggested. One of them, inverse-probability weighting, has the advantage that it can also be applied to estimate the parameters of a linear regression of the mean. Such regression models can potentially estimate the treatment contrast more precisely, since some of the residual variance can be explained by baseline covariates. The drawback, however, is that inverse-probability weighting may not be efficient. Using existing results on semi-parametric efficiency, this paper derives the semi-parametric efficient parameter estimates for regression of mean cost, mean quality-adjusted survival time and mean survival time. The performance of these estimates is evaluated through a simulation study. Applying both the new estimators and the inverse-probability weighted estimators to the results of the EVALUATE trial showed that the new estimators achieved a halving of the variance of the estimated treatment contrast for cost. Some practical suggestions for choosing an estimator are offered.
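For contrast with the efficient estimators the paper derives, here is a hedged sketch of the baseline inverse-probability-weighted regression for mean cost: fully observed subjects are reweighted by the inverse Kaplan-Meier estimate of the censoring survival function. All names are hypothetical, and the HC1 variance is only a rough stand-in, since a rigorous variance would also account for the estimated weights.

```python
import numpy as np
import statsmodels.api as sm
from lifelines import KaplanMeierFitter

def ipw_cost_regression(time, delta, cost, X):
    # Kaplan-Meier estimate of the *censoring* survival function G
    G_fit = KaplanMeierFitter().fit(time, 1 - delta)
    G = G_fit.survival_function_at_times(time).to_numpy()
    w = delta / np.clip(G, 1e-8, None)   # censored subjects get weight zero
    # Weighted least squares solves the IPW estimating equation for the mean
    return sm.WLS(cost, sm.add_constant(X), weights=w).fit(cov_type="HC1")
```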

4.
Neurodegenerative diseases require an autopsy for confirmation of diagnosis. When death is the event of interest, studies based on autopsy-confirmed diagnoses result in right truncated survival times because individuals who live past the end-of-study date do not receive a pathological diagnosis and are therefore not included in the sample. Furthermore, many studies of neurodegenerative diseases recruit subjects only after the onset of the disease, which may result in left truncated survival times. Therefore, double truncation, the simultaneous presence of left and right truncation, is inherent in many autopsy-confirmed survival studies of neurodegenerative diseases. The main focus of this paper is to inform about the inherent double truncation in these studies and demonstrate how to properly estimate and compare survival distribution functions in this setting. We do so by conducting a case study of subjects with autopsy-confirmed Alzheimer's disease and frontotemporal lobar degeneration. This case study is supported by extensive simulation studies, which provide several new contributions to the literature on survival distribution estimation in the context of double truncation.
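The setting can be made concrete with the classical self-consistency (fixed-point) iteration for the NPMLE of the event-time distribution under double truncation, in the spirit of Efron and Petrosian: subject i, with truncation window [u_i, v_i], is observed only because u_i <= x_i <= v_i. This is a generic sketch, not the paper's own algorithm.

```python
import numpy as np

def npmle_double_truncation(x, u, v, tol=1e-8, max_iter=5000):
    t, n_j = np.unique(x, return_counts=True)     # distinct times, multiplicities
    J = (u[:, None] <= t) & (t <= v[:, None])     # J[i, j] = 1{u_i <= t_j <= v_i}
    f = np.full(t.size, 1.0 / t.size)             # initial probability masses
    for _ in range(max_iter):
        P = J @ f                                  # P_i = prob x_i is observable
        f_new = n_j / (J / P[:, None]).sum(axis=0)
        f_new /= f_new.sum()
        if np.max(np.abs(f_new - f)) < tol:
            break
        f = f_new
    return t, f   # survival estimate at s: f[t > s].sum()
```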

5.
With censored event time observations, the logrank test is the most popular tool for testing the equality of two underlying survival distributions. Although this test is asymptotically distribution free, it may not be powerful when the proportional hazards assumption is violated. Various other novel testing procedures have been proposed, which generally are derived by assuming a class of specific alternative hypotheses with respect to the hazard functions. The test considered by Pepe and Fleming (1989) is based on a linear combination of weighted differences of the two Kaplan-Meier curves over time and is a natural tool to assess the difference of two survival functions directly. In this article, we take a similar approach but choose weights that are proportional to the observed standardized difference of the estimated survival curves at each time point. The new proposal automatically makes weighting adjustments empirically. The new test statistic is aimed at a one-sided general alternative hypothesis and is distributed with a short right tail under the null hypothesis but with a heavy tail under the alternative. The results from extensive numerical studies demonstrate that the new procedure performs well under various general alternatives, with the caveat of a minor inflation of the type I error rate when the sample size or the number of observed events is small. The survival data from a recent comparative cancer study are used to illustrate the implementation of the procedure.
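A rough illustration of the adaptive weighting idea follows: weights proportional to the observed standardized Kaplan-Meier difference, integrated over a uniform grid, with a permutation distribution standing in for the paper's asymptotic null approximation. Every tuning choice here is ours, not the authors'.

```python
import numpy as np

def km_greenwood(t, d, grid):
    # Kaplan-Meier estimate and Greenwood variance evaluated on a time grid
    S, gw = 1.0, 0.0
    out_S, out_V = np.ones(len(grid)), np.zeros(len(grid))
    for tk in np.unique(t[d == 1]):
        n_k = np.sum(t >= tk)                      # number at risk
        d_k = np.sum((t == tk) & (d == 1))         # events at tk
        S *= 1.0 - d_k / n_k
        gw += d_k / (n_k * max(n_k - d_k, 1))
        out_S[grid >= tk], out_V[grid >= tk] = S, S**2 * gw
    return out_S, out_V

def adaptive_stat(time, event, group, grid):
    S1, V1 = km_greenwood(time[group == 1], event[group == 1], grid)
    S0, V0 = km_greenwood(time[group == 0], event[group == 0], grid)
    z = (S1 - S0) / np.sqrt(V1 + V0 + 1e-12)       # standardized difference
    step = grid[1] - grid[0]                       # uniform grid assumed
    return np.sum(np.abs(z) * (S1 - S0)) * step    # |z| plays the weight role

def perm_pvalue(time, event, group, grid, B=2000, seed=0):
    rng = np.random.default_rng(seed)
    obs = adaptive_stat(time, event, group, grid)
    null = [adaptive_stat(time, event, rng.permutation(group), grid) for _ in range(B)]
    return float(np.mean(np.asarray(null) >= obs)) # one-sided alternative
```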

6.
We extend the discussion of Lee et al. and others on methods for performing secondary analyses of case-control sampled data and carry out an extensive investigation of efficiency and robustness. We find that, with the exception of the 'analyse-the-controls-only' strategy for populations in which cases are rare, ad hoc methods in common usage often lead to extremely misleading conclusions, and that it is not possible to tell in advance when this will happen. Weighted likelihood and semi-parametric maximum likelihood (SPML) methods are justified theoretically. We find that semi-parametric maximum likelihood can be as much as twice as efficient as the weighted method, but is subject to bias in estimating parameters of interest when the nuisance models this method requires have been mis-specified. The weighted method needs no nuisance models and thus is robust in this regard, but we cannot tell when it is going to be very inefficient without sophisticated modelling of the kind that underlies the SPML method. Practitioners should routinely use both methods and will often have to weigh up the practical consequences of severe inefficiency and lack of robustness in the context of their enquiries.
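To make the weighted-likelihood strategy concrete, here is a sketch in which each subject is weighted by the ratio of the population share of their case/control stratum to its sample share, and the secondary-outcome regression is fit with those weights; the known prevalence `prev` and all column names are our assumptions.

```python
import numpy as np
import statsmodels.api as sm

def weighted_secondary_analysis(y2, X, case, prev):
    # prev: assumed population case prevalence; case: 0/1 sampling stratum
    n, n_case = len(case), case.sum()
    w = np.where(case == 1, prev / (n_case / n), (1 - prev) / (1 - n_case / n))
    # Weighted regression of the secondary outcome; sandwich errors because
    # the weighting invalidates the usual likelihood-based variance
    return sm.WLS(y2, sm.add_constant(X), weights=w).fit(cov_type="HC1")
```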

7.
For modern evidence-based medicine, a well thought-out risk scoring system for predicting the occurrence of a clinical event plays an important role in selecting prevention and treatment strategies. Such an index system is often established based on the subject's 'baseline' genetic or clinical markers via a working parametric or semi-parametric model. To evaluate the adequacy of such a system, C-statistics are routinely used in the medical literature to quantify the capacity of the estimated risk score in discriminating among subjects with different event times. The C-statistic provides a global assessment of a fitted survival model for the continuous event time rather than focusing on the prediction of t-year survival for a fixed time. When the event time is possibly censored, however, the population parameters corresponding to the commonly used C-statistics may depend on the study-specific censoring distribution. In this article, we present a simple C-statistic without this shortcoming. The new procedure consistently estimates a conventional concordance measure which is free of censoring. We provide a large sample approximation to the distribution of this estimator for making inferences about the concordance measure. Results from numerical studies suggest that the new procedure performs well in finite samples.
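A common censoring-free construction of this type is the inverse-probability-of-censoring-weighted (IPCW) concordance estimator; this numpy sketch (weights 1/G_hat(T_i)^2 for evaluable pairs, ties in the risk score ignored) conveys the idea, though it is not necessarily the paper's exact estimator.

```python
import numpy as np
from lifelines import KaplanMeierFitter

def ipcw_cstat(time, event, risk_score, tau):
    # G_hat: Kaplan-Meier of the censoring distribution (censorings as events)
    G = KaplanMeierFitter().fit(time, 1 - event)
    Gt = np.clip(G.survival_function_at_times(time).to_numpy(), 1e-8, None)
    num = den = 0.0
    for i in range(len(time)):
        if event[i] != 1 or time[i] >= tau:
            continue                         # i must be an observed event < tau
        w = 1.0 / Gt[i] ** 2                 # IPCW weight for pairs anchored at i
        comparable = time > time[i]
        den += w * comparable.sum()
        num += w * np.sum(comparable & (risk_score[i] > risk_score))
    return num / den                         # ties in risk_score ignored here
```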

8.
In randomized clinical trials where time-to-event is the primary outcome, almost routinely, the logrank test is prespecified as the primary test and the hazard ratio is used to quantify the treatment effect. If the ratio of two hazard functions is not constant, the logrank test is not optimal and the interpretation of the hazard ratio is not obvious. When such a nonproportional hazards case is expected at the design stage, the conventional practice is to prespecify another member of the weighted logrank tests, e.g., the Peto-Prentice-Wilcoxon test. Alternatively, one may specify a robust test as the primary test, which can capture various patterns of difference between two event time distributions. However, most of those tests do not have companion procedures to quantify the treatment difference, and investigators have fallen back on reporting treatment effect estimates not associated with the primary test. Such incoherence in the "test/estimation" procedure may potentially mislead clinicians/patients who have to balance risk-benefit for treatment decisions. To address this, we propose a flexible and coherent test/estimation procedure based on restricted mean survival time, where the truncation time τ is selected data dependently. The proposed procedure is composed of a prespecified test and an estimate of the corresponding robust and interpretable quantitative treatment effect. The utility of the new procedure is demonstrated by numerical studies based on two randomized cancer clinical trials; the test is dramatically more powerful than the logrank test, the Wilcoxon test, and the restricted mean survival time-based test with a fixed τ for the patterns of difference seen in these cancer clinical trials.
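To fix ideas, a sketch of an RMST contrast with a data-dependent truncation time follows; taking τ as the smaller of the two arms' largest observed times is one common rule, not necessarily the paper's selection procedure, and inference is omitted.

```python
import numpy as np
from lifelines import KaplanMeierFitter

def rmst(kmf, tau):
    # Area under the Kaplan-Meier step function up to tau
    t = np.asarray(kmf.survival_function_.index, dtype=float)
    s = kmf.survival_function_.iloc[:, 0].to_numpy()
    keep = t < tau
    return float(np.sum(s[keep] * np.diff(np.append(t[keep], tau))))

def rmst_difference(time, event, group):
    kms = {g: KaplanMeierFitter().fit(time[group == g], event[group == g])
           for g in (0, 1)}
    tau = min(time[group == 0].max(), time[group == 1].max())  # data-dependent
    return rmst(kms[1], tau) - rmst(kms[0], tau), tau
```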

9.
Longitudinal studies often gather joint information on time to some event (survival analysis, time to dropout) and serial outcome measures (repeated measures, growth curves). Depending on the purpose of the study, one may wish to estimate and compare serial trends over time while accounting for possibly non-ignorable dropout, or one may wish to investigate any associations that may exist between the event time of interest and various longitudinal trends. In this paper, we consider a class of random-effects models known as shared parameter models that are particularly useful for jointly analysing such data, namely repeated measurements and event time data. Specific attention will be given to the longitudinal setting where the primary goal is to estimate and compare serial trends over time while adjusting for possible informative censoring due to patient dropout. Parametric and semi-parametric survival models for event times together with generalized linear or non-linear mixed-effects models for repeated measurements are proposed for jointly modelling serial outcome measures and event times. Methods of estimation are based on a generalized non-linear mixed-effects model that may be easily implemented using existing software. This approach allows for flexible modelling of both the distribution of event times and of the relationship of the longitudinal response variable to the event time of interest. The model and methods are illustrated using data from a multi-centre study of the effects of diet and blood pressure control on the progression of renal disease, the Modification of Diet in Renal Disease (MDRD) Study.
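A toy simulation makes the shared-parameter mechanism visible: a single random effect raises a subject's longitudinal level and, at the same time, accelerates dropout, so naive visit-wise means are biased for the population trend. All parameter values below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(1)
n, visits = 500, np.arange(5.0)
b = rng.normal(0.0, 1.0, n)                        # shared random effect
# Dropout hazard increases with b (scale = 1 / rate for np.exponential)
dropout = rng.exponential(1.0 / (0.1 * np.exp(0.8 * b)))
y = (10.0 + b[:, None]) - 1.5 * visits + rng.normal(0.0, 1.0, (n, visits.size))
seen = visits[None, :] < dropout[:, None]          # measurements before dropout
# High-b subjects (high outcomes) drop out sooner, so later visit means fall
# faster than the true population trend 10 - 1.5 * t -- the bias that a
# shared parameter model is designed to correct.
print(np.nanmean(np.where(seen, y, np.nan), axis=0))
```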

10.
The hazard ratios resulting from a Cox proportional hazards regression model are hard to interpret and hard to convert into gains in survival time. As the main goal is often to study survival functions, there is increasing interest in summary measures based on the survival function that are easier to interpret than the hazard ratio; the residual mean time is an important example of such measures. However, because of the presence of right censoring, the tail of the survival distribution is often difficult to estimate correctly. Therefore, we consider the restricted residual mean time, which represents a partial area under the survival function, given any time horizon τ, and is interpreted as the residual life expectancy up to τ of a subject surviving up to time t. We present a class of regression models for this measure, based on weighted estimating equations and inverse probability of censoring weighted estimators to handle potential right censoring. Furthermore, we show how to extend the models and the estimators to deal with delayed entries. We demonstrate that the restricted residual mean life estimator is equivalent to integrals of Kaplan-Meier estimates in the case of simple factor variables. Estimation performance is investigated by simulation studies. Using real data from the Danish Monitoring Cardiovascular Risk Factor Surveys, we illustrate an application to additive regression models and discuss the general assumption of right censoring and left truncation being dependent on covariates.
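The reduction to integrals of Kaplan-Meier estimates mentioned in the abstract suggests the following sketch of the restricted residual mean time for a single group, computed on a uniform grid; the step size and all names are illustrative choices.

```python
import numpy as np
from lifelines import KaplanMeierFitter

def restricted_residual_mean(kmf, t, tau, step=0.01):
    # E[min(T, tau) - t | T > t] = integral_t^tau S(u) du / S(t)
    grid = np.arange(t, tau, step)
    S = kmf.survival_function_at_times(grid).to_numpy()
    S_t = float(kmf.survival_function_at_times([t]).iloc[0])
    return float(np.sum(S) * step / S_t)
```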

11.
In oncology clinical trials, overall survival, time to progression, and progression-free survival are three commonly used endpoints. Empirical correlations among them have been published for different cancers, but statistical models describing the dependence structures are limited. Recently, Fleischer et al. proposed a statistical model that is mathematically tractable and shows some flexibility to describe the dependencies in a realistic way, based on the assumption of exponential distributions. This paper aims to extend their model to the more flexible Weibull distribution. We derived theoretical correlations among different survival outcomes, as well as the distribution of overall survival induced by the model. Model parameters were estimated by the maximum likelihood method and the goodness of fit was assessed by plotting estimated versus observed survival curves for overall survival. We applied the method to three cancer clinical trials. In the non-small-cell lung cancer trial, both the exponential and the Weibull models provided an adequate fit to the data, and the estimated correlations were very similar under both models. In the prostate cancer trial and the laryngeal cancer trial, the Weibull model exhibited advantages over the exponential model and yielded larger estimated correlations. Simulations suggested that the proposed Weibull model is robust for data generated from a range of distributions.
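A small simulation conveys the model structure under illustrative Weibull shapes and scales: latent times to progression, to death without progression, and from progression to death induce the joint distribution of PFS and OS, whose correlation can then be read off empirically. This is our schematic of a Fleischer-type model, not the authors' parameterization.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
T1 = rng.weibull(1.3, n) * 12.0   # time to progression
T2 = rng.weibull(0.9, n) * 30.0   # time to death without progression
T3 = rng.weibull(1.1, n) * 10.0   # post-progression survival
pfs = np.minimum(T1, T2)
os_ = np.where(T1 < T2, T1 + T3, T2)   # death after, or without, progression
print("corr(PFS, OS) =", np.corrcoef(pfs, os_)[0, 1])
```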

12.
Analysis of clustered data focusing on inference of the marginal distribution may be problematic when the risk of the outcome is related to the cluster size, termed informative cluster size. In the absence of censoring, Hoffman et al. proposed a within-cluster resampling method, which is asymptotically equivalent to a weighted generalized estimating equations score equation. We investigate the estimation of the marginal distribution for multivariate survival data with informative cluster size using cluster-weighted Weibull and Cox proportional hazards models. The cluster-weighted Cox model can be implemented using standard software. Simulation results demonstrate that the proposed methods produce unbiased parameter estimates in the presence of informative cluster size. To illustrate the proposed approach, we analyze survival data from a lymphatic filariasis study in Recife, Brazil.
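Since the abstract notes that the cluster-weighted Cox model runs in standard software, here is one way that might look in lifelines, with weight 1/(cluster size) per observation and robust, cluster-aware standard errors; the column names are hypothetical.

```python
from lifelines import CoxPHFitter

def cluster_weighted_cox(df):
    # df: 'time', 'event', 'cluster', plus covariate columns (hypothetical)
    df = df.copy()
    df["w"] = 1.0 / df.groupby("cluster")["cluster"].transform("size")
    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event",
            weights_col="w", cluster_col="cluster", robust=True)
    return cph
```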

13.
Event history studies based on disease clinic data often face several complications. Specifically, patients may visit the clinic irregularly, and the intermittent observation times could depend on disease-related variables; this can cause a failure time outcome to be dependently interval-censored. We propose a weighted estimating function approach so that dependently interval-censored failure times can be analysed consistently. A so-called inverse-intensity-of-visit weight is employed to adjust for the informative inspection times. Left truncation of failure times can also be easily handled. Additionally, in observational studies, treatment assignments are typically non-randomized and may depend on disease-related variables. An inverse-probability-of-treatment weight is applied to the estimating functions to further adjust for measured confounders. Simulation studies are conducted to examine the finite sample performance of the proposed estimators. Finally, the Toronto Psoriatic Arthritis Cohort Study is used for illustration.
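Schematically, the two weights could be built as below: an inverse-intensity-of-visit weight from a working model for whether a visit occurs in an interval given lagged disease activity, multiplied by an inverse-probability-of-treatment weight from a propensity model. Both working models, and all column names, are simplified placeholders for the paper's estimating-function machinery.

```python
import numpy as np
import statsmodels.api as sm

def combined_weights(d):
    # d: one row per subject-interval, with 'visited' (0/1), lagged disease
    # activity 'lag_activity', baseline confounder 'x', treatment 'treated'
    Xv = sm.add_constant(d[["lag_activity"]])
    p_visit = sm.Logit(d["visited"], Xv).fit(disp=0).predict(Xv)
    Xt = sm.add_constant(d[["x"]])
    p_trt = sm.Logit(d["treated"], Xt).fit(disp=0).predict(Xt)
    w_iiv = 1.0 / np.clip(p_visit, 1e-6, None)          # inverse visit intensity
    w_ipt = np.where(d["treated"] == 1,
                     1.0 / np.clip(p_trt, 1e-6, None),
                     1.0 / np.clip(1 - p_trt, 1e-6, None))
    return w_iiv * w_ipt   # product weight for the estimating functions
```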

14.
Jung SH. Statistics in Medicine 2008; 27(17): 3350-3365.
This paper introduces a sample size calculation method for the weighted rank test statistics with paired two-sample survival data. Our sample size formula requires specification of joint survival and censoring distributions. For modelling the distribution of paired survival variables, we may use a paired exponential survival distribution that is specified by the marginal hazard rates and a measure of dependency. Also, in most trials randomizing paired subjects, the subjects of each pair are accrued and censored at the same time over an accrual period and an additional follow-up period, so that the paired subjects have a common censoring time. Under these practical settings, the design parameters include type I and type II error probabilities, marginal hazard rates under the alternative hypothesis, correlation coefficient, accrual period (or accrual rate) and follow-up period. If pilot data are available, we may estimate the survival distributions from them, but we specify the censoring distribution based on the specified accrual trend and the follow-up period planned for the new study. Through simulations, the formula is shown to provide accurate sample sizes under practical settings. Real studies are used to demonstrate the proposed method.
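The design setting is easy to mimic in simulation: paired exponential event times made dependent through a Gaussian copula (one convenient choice of dependency, not necessarily the paper's), with one common administrative censoring time per pair generated by staggered accrual plus follow-up. Rates, correlation, and study lengths are illustrative only.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n_pairs, rho = 200, 0.4
lam1, lam2 = 0.10, 0.07            # marginal hazard rates under H1
accrual, followup = 24.0, 12.0     # months

z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], n_pairs)
u = norm.cdf(z)                                   # correlated uniforms
t1 = -np.log(1 - u[:, 0]) / lam1                  # exponential margins
t2 = -np.log(1 - u[:, 1]) / lam2
entry = rng.uniform(0.0, accrual, n_pairs)
c = accrual + followup - entry                    # common censoring per pair
x1, d1 = np.minimum(t1, c), (t1 <= c).astype(int)
x2, d2 = np.minimum(t2, c), (t2 <= c).astype(int)
```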

15.
In observational studies with censored data, exposure-outcome associations are commonly measured with adjusted hazard ratios from multivariable Cox proportional hazards models. The difference in restricted mean survival times (RMSTs) up to a pre-specified time point is an alternative measure that offers a clinically meaningful interpretation. Several regression-based methods exist to estimate an adjusted difference in RMSTs, but they digress from the model-free method of taking the area under the survival function. We derive the adjusted RMST by integrating an adjusted Kaplan-Meier estimator with inverse probability weighting (IPW). The adjusted difference in RMSTs is the area between the two IPW-adjusted survival functions. In a Monte Carlo-type simulation study, we demonstrate that the proposed estimator performs as well as two regression-based approaches: the ANCOVA-type method of Tian et al. and the pseudo-observation method of Andersen et al. We illustrate the methods by reexamining the association between total cholesterol and the 10-year risk of coronary heart disease in the Framingham Heart Study.
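A condensed sketch of the estimator follows: IPW-weighted Kaplan-Meier curves (propensity weights as in the AKME sketch under item 1 above) and the difference of the step-function areas under them up to a pre-specified tau; the column names and the logistic propensity model are our assumptions.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from sklearn.linear_model import LogisticRegression

def ipw_rmst_difference(df, confounders, tau):
    ps = LogisticRegression().fit(df[confounders], df["group"]).predict_proba(df[confounders])[:, 1]
    w = np.where(df["group"] == 1, 1 / ps, 1 / (1 - ps))
    areas = {}
    for g in (0, 1):
        m = df["group"].to_numpy() == g
        kmf = KaplanMeierFitter().fit(df["time"][m], df["event"][m], weights=w[m])
        t = np.asarray(kmf.survival_function_.index, dtype=float)
        s = kmf.survival_function_.iloc[:, 0].to_numpy()
        keep = t < tau
        areas[g] = np.sum(s[keep] * np.diff(np.append(t[keep], tau)))
    return areas[1] - areas[0]   # adjusted difference in RMSTs
```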

16.
In a survival study, it may not be possible to record the exact event time but only that the event has occurred between two time points or still has to occur, leading to interval-censored survival times. Recently, Sun et al. (Scand. J. Stat. 2006; 33(4):637-649) suggested fitting a Clayton copula with nonparametric marginal distributions to estimate the association for bivariate interval-censored failure data. We propose here to model the marginal distributions with an accelerated failure time model with a flexible error term, as suggested by Komárek et al. (J. Comput. Graph. Stat. 2005; 14(3):726-745), in combination with a one-parameter copula. In addition, we allow the association parameter of the copula to depend on covariates. The performance of our method is illustrated by an extensive simulation study, and the method is applied to tooth emergence data for permanent teeth measured on 4468 children from a longitudinal dental study.
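The likelihood building block here is a rectangle probability under the copula; the sketch below shows it for the Clayton family, with the marginal distribution functions left as callables standing in for the flexible AFT fits of the paper.

```python
import numpy as np

def clayton(u, v, theta):
    # Clayton copula C(u, v) for theta > 0
    u = np.clip(u, 1e-12, 1.0)
    v = np.clip(v, 1e-12, 1.0)
    return (u**-theta + v**-theta - 1.0) ** (-1.0 / theta)

def rectangle_prob(F1, F2, l1, r1, l2, r2, theta):
    # P(l1 < T1 <= r1, l2 < T2 <= r2): the bivariate interval-censored
    # likelihood contribution; F1, F2 are marginal CDFs (e.g. from AFT fits)
    u1, u2, v1, v2 = F1(l1), F1(r1), F2(l2), F2(r2)
    return (clayton(u2, v2, theta) - clayton(u1, v2, theta)
            - clayton(u2, v1, theta) + clayton(u1, v1, theta))
# The log-likelihood sums log rectangle_prob over subjects; a covariate-
# dependent association enters by letting theta = g(covariates).
```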

17.
Mixture cure models are usually used to model failure time data with long-term survivors. These models have been applied to grouped survival data. The models provide simultaneous estimates of the proportion of patients cured of the disease and the distribution of the survival times for uncured patients (the latency distribution). However, a crucial issue with mixture cure models is the identifiability of the cure fraction and the parameters of the latency distribution. Cure fraction estimates can be quite sensitive to the choice of latency distribution and the length of follow-up time. In this paper, the sensitivity of parameter estimates under a semi-parametric model and several of the most commonly used parametric models, namely the lognormal, loglogistic, Weibull and generalized Gamma distributions, is explored. The cure fraction estimates from the model with the generalized Gamma distribution are found to be quite robust. A simulation study was carried out to examine the effect of follow-up time and latency distribution specification on cure fraction estimation. The cure models with generalized Gamma latency distribution are applied to population-based survival data for several cancer sites from the Surveillance, Epidemiology and End Results (SEER) Program. Several cautions on the general use of cure models are advised.
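For a flavor of the estimation problem, here is a minimal right-censored mixture cure fit with a Weibull latency (a special case of the generalized Gamma family the paper recommends); parameterizations and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def negloglik(par, t, d):
    pi = expit(par[0])                        # cure fraction in (0, 1)
    shape, scale = np.exp(par[1]), np.exp(par[2])
    Su = np.exp(-(t / scale) ** shape)        # Weibull latency survival
    fu = (shape / scale) * (t / scale) ** (shape - 1) * Su
    # S_pop(t) = pi + (1 - pi) * Su; events contribute the uncured density
    like = np.where(d == 1, (1 - pi) * fu, pi + (1 - pi) * Su)
    return -np.sum(np.log(like + 1e-300))

# fit = minimize(negloglik, x0=[0.0, 0.0, 0.0], args=(t, d), method="Nelder-Mead")
# cure_fraction = expit(fit.x[0])
```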

18.
Clustered right-censored data often arise from tumorigenicity experiments and clinical trials. For testing the equality of two survival functions, Jung and Jeong extended weighted logrank (WLR) tests to two independent samples of clustered right-censored data, while the weighted Kaplan-Meier (WKM) test can be derived from the work of O'Gorman and Akritas. The weight functions in both classes of tests (WLR and WKM) can be selected to be more sensitive to detect a certain alternative; however, since the exact alternative is unknown, it is difficult to specify the selected weights in advance. Since WLR is rank-based, it is not sensitive to the magnitude of the difference in survival times. Although WKM is constructed to be more sensitive to the magnitude of the difference in survival times, it is not sensitive to late hazard differences. Therefore, in order to combine the advantages of these two classes of tests, this paper develops a class of versatile tests based on simultaneously using WLR and WKM for two independent samples of clustered right-censored data. The comparative results from a simulation study are presented and the application of the versatile tests to two real data sets is illustrated.
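A permutation-based caricature of the versatile idea, for the unclustered two-sample case: compute a logrank statistic and a WKM-type statistic (difference in restricted areas under the Kaplan-Meier curves), standardize each against its permutation distribution, and reject based on the maximum. With genuinely clustered data one would permute whole clusters; everything below is illustrative.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def wkm_stat(time, event, group, tau):
    areas = []
    for g in (0, 1):
        kmf = KaplanMeierFitter().fit(time[group == g], event[group == g])
        t = np.asarray(kmf.survival_function_.index, dtype=float)
        s = kmf.survival_function_.iloc[:, 0].to_numpy()
        keep = t < tau
        areas.append(np.sum(s[keep] * np.diff(np.append(t[keep], tau))))
    return areas[1] - areas[0]

def versatile_test(time, event, group, tau, B=1000, seed=0):
    rng = np.random.default_rng(seed)
    def both(g):
        lr = logrank_test(time[g == 1], time[g == 0],
                          event[g == 1], event[g == 0]).test_statistic
        return np.array([lr, wkm_stat(time, event, g, tau)])
    obs = both(group)
    null = np.array([both(rng.permutation(group)) for _ in range(B)])
    z = np.abs(obs - null.mean(0)) / null.std(0)     # standardize each stat
    z_null = np.abs(null - null.mean(0)) / null.std(0)
    return float(np.mean(z_null.max(1) >= z.max()))  # p-value of the maximum
```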

19.
In genetic association studies, the differences between the hazard functions for the individual genotypes are often time-dependent. We address the non-proportional hazards data by using the weighted logrank approach of Fleming and Harrington [1981, Commun Stat-Theor M 10:763-794]. We introduce a weighted FBAT-Logrank whose weights are based on a non-parametric estimator for the genetic marker distribution function under the alternative hypothesis. We show that the computation of the marker distribution under the alternative does not bias the significance level of any subsequently computed FBAT-statistic. Hence, we use the estimated marker distribution to select the Fleming-Harrington weights so that the power of the weighted FBAT-Logrank test is maximized. In simulation studies and applications to an asthma study, we illustrate the practical relevance of the new methodology. In addition to power increases of 100% over the original FBAT-Logrank test, we also gain insight into the age at which a genotype exerts the greatest influence on disease risk.
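The Fleming-Harrington G(rho, gamma) weighting at the heart of this approach can be sketched directly: each logrank increment is weighted by S(t-)^rho * (1 - S(t-))^gamma, with S the pooled Kaplan-Meier estimate. The FBAT-specific machinery is beyond this sketch, and all defaults are illustrative.

```python
import numpy as np

def fh_weighted_logrank(time, event, group, rho=0.0, gamma=1.0):
    S, num, var = 1.0, 0.0, 0.0
    for tk in np.unique(time[event == 1]):
        at_risk = time >= tk
        n, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
        d = ((time == tk) & (event == 1)).sum()
        d1 = ((time == tk) & (event == 1) & (group == 1)).sum()
        w = S**rho * (1 - S) ** gamma               # weight uses S(t-)
        num += w * (d1 - d * n1 / n)                # observed minus expected
        if n > 1:                                   # hypergeometric variance
            var += w**2 * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
        S *= 1 - d / n                              # update pooled KM after tk
    return num / np.sqrt(var)                       # approx N(0, 1) under H0
```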
