1.
Clustered right‐censored data often arise from tumorigenicity experiments and clinical trials. For testing the equality of two survival functions, Jung and Jeong extended weighted logrank (WLR) tests to two independent samples of clustered right‐censored data, while the weighted Kaplan–Meier (WKM) test can be derived from the work of O'Gorman and Akritas. The weight functions in both classes of tests (WLR and WKM) can be selected to be more sensitive to a particular alternative; however, because the true alternative is unknown, it is difficult to specify suitable weights in advance. Since WLR is rank‐based, it is not sensitive to the magnitude of the difference in survival times. Although WKM is constructed to be more sensitive to the magnitude of the difference in survival times, it is not sensitive to late hazard differences. Therefore, to combine the advantages of both classes of tests, this paper develops a class of versatile tests that use WLR and WKM simultaneously for two independent samples of clustered right‐censored data. Comparative results from a simulation study are presented, and the implementation of the versatile tests is illustrated on two real data sets. Copyright © 2009 John Wiley & Sons, Ltd.
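The abstract does not state how the two statistics are combined. A common construction for such versatile tests, offered here as an illustrative assumption rather than the authors' exact definition, takes the maximum of the two standardized statistics and computes the p-value from their joint limiting bivariate normal law with estimated correlation:

```latex
T = \max\bigl(|Z_{\mathrm{WLR}}|,\ |Z_{\mathrm{WKM}}|\bigr), \qquad
p \approx 1 - \Pr\bigl(|Z_1| \le T,\ |Z_2| \le T\bigr), \qquad
(Z_1, Z_2) \sim N_2\!\left(\mathbf{0},\ \begin{pmatrix} 1 & \hat{\rho} \\ \hat{\rho} & 1 \end{pmatrix}\right).
```

The max statistic inherits sensitivity to whichever alternative (early rank differences or late magnitude differences) is actually present, at a modest cost in power relative to the single best test.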
2.
In medical studies with censored data, the Kaplan–Meier product-limit estimator is frequently used to estimate the survival function. Simultaneous confidence intervals for the survival function at several time points are a useful addition to the analysis. This study compares several such methods. In a simulation investigation we consider two whole-curve confidence bands and four methods based on the Bonferroni inequality. The results show that three Bonferroni-type methods are essentially equivalent, all being better than the other methods when the number of time points is small (3 or 5).
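For the simplest Bonferroni-type method, each of the k pointwise intervals is built at level α/k so that the k statements hold jointly at level α. Below is a minimal sketch on the untransformed scale with the usual Greenwood variance; the function names are ours, and the paper's better-performing transformed variants are not shown.

```python
import numpy as np
from scipy.stats import norm

def km_with_greenwood(times, events):
    """Kaplan-Meier estimate and Greenwood variance at each distinct event time."""
    uniq = np.unique(times[events == 1])
    at_risk = np.array([(times >= t).sum() for t in uniq])
    deaths = np.array([((times == t) & (events == 1)).sum() for t in uniq])
    surv = np.cumprod(1.0 - deaths / at_risk)
    # Greenwood: Var(S(t)) ~ S(t)^2 * sum_{t_j <= t} d_j / (n_j (n_j - d_j))
    with np.errstate(divide="ignore"):
        var = surv ** 2 * np.cumsum(deaths / (at_risk * (at_risk - deaths)))
    return uniq, surv, var

def bonferroni_intervals(times, events, t_points, alpha=0.05):
    """Simultaneous CIs for S(t) at k pre-chosen time points via Bonferroni."""
    uniq, surv, var = km_with_greenwood(times, events)
    z = norm.ppf(1.0 - alpha / (2 * len(t_points)))  # adjusted critical value
    out = []
    for t in t_points:
        i = np.searchsorted(uniq, t, side="right") - 1
        s = surv[i] if i >= 0 else 1.0
        se = np.sqrt(var[i]) if i >= 0 else 0.0
        out.append((t, s, max(s - z * se, 0.0), min(s + z * se, 1.0)))
    return out
```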
3.
Ritesh Ramchandani, Dianne M. Finkelstein, David A. Schoenfeld. Statistics in Medicine 2015, 34(9): 1454-1466
The generalized Wilcoxon and log‐rank tests are commonly used for testing differences between two survival distributions. We modify the Wilcoxon test to account for auxiliary information on intermediate disease states that subjects may pass through before failure. For a disease with multiple states where patients are monitored periodically but exact transition times are unknown (e.g. staging in cancer), we first fit a multi‐state Markov model to the full data set; when censoring precludes the comparison of survival times between two subjects, we use the model to estimate the probability that one subject will have survived longer than the other given their censoring times and last observed status, and use these probabilities to compute an expected rank for each subject. These expected ranks form the basis of our test statistic. Simulations demonstrate that the proposed test can improve power over the log‐rank and generalized Wilcoxon tests in some settings while maintaining the nominal type 1 error rate. The method is illustrated on an amyotrophic lateral sclerosis data set. Copyright © 2015 John Wiley & Sons, Ltd.
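The heart of the construction can be sketched once the pairwise probabilities are in hand. Here P[i, j] stands for an estimate of Pr(subject i outlives subject j); in the paper these come from the fitted multi-state Markov model, whereas below they are simply an input, and the permutation calibration is our illustrative stand-in for the authors' asymptotic theory.

```python
import numpy as np

def expected_rank_statistic(P, group):
    """Wilcoxon-type statistic built from pairwise survival probabilities.

    P[i, j] estimates Pr(subject i survives longer than subject j); for a pair
    whose ordering is directly observed, it is simply 0 or 1.
    """
    ranks = P.sum(axis=1)                 # expected rank of each subject
    g = np.asarray(group, dtype=bool)
    return ranks[g].mean() - ranks[~g].mean()

def permutation_pvalue(P, group, n_perm=2000, seed=0):
    """Two-sided p-value by permuting group labels."""
    rng = np.random.default_rng(seed)
    group = np.asarray(group, dtype=bool)
    obs = abs(expected_rank_statistic(P, group))
    hits = sum(abs(expected_rank_statistic(P, rng.permutation(group))) >= obs
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)
```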
4.
In two‐stage randomization designs, patients are randomized to one of the initial treatments, and at the end of the first stage, they are randomized to one of the second‐stage treatments depending on the outcome of the initial treatment. Statistical inference for survival data from these trials uses methods such as marginal mean models and weighted risk set estimates. In this article, we propose two forms of weighted Kaplan–Meier (WKM) estimators based on inverse‐probability weighting: one with fixed weights and the other with time‐dependent weights. We compare their properties with those of the standard Kaplan–Meier (SKM) estimator, the marginal mean model‐based (MM) estimator and the weighted risk set (WRS) estimator. A simulation study reveals that both forms of weighted Kaplan–Meier estimators are asymptotically unbiased and provide coverage rates similar to those of the MM and WRS estimators. The SKM estimator, however, is biased when the second randomization rates are not the same for the responders and non‐responders to initial treatment. The methods described are demonstrated by application to a leukemia data set. Copyright © 2010 John Wiley & Sons, Ltd.
5.
Logistic or other constraints often preclude the possibility of conducting incident cohort studies. A feasible alternative in such cases is to conduct a cross‐sectional prevalent cohort study, for which we recruit prevalent cases, that is, subjects who have already experienced the initiating event, say the onset of a disease. When the interest lies in estimating the lifespan between the initiating event and a terminating event, say death for instance, such subjects may be followed prospectively until the terminating event or loss to follow‐up, whichever happens first. It is well known that prevalent cases have, on average, longer lifespans. As such, they do not constitute a representative random sample from the target population; they comprise a biased sample. If the initiating events are generated from a stationary Poisson process (the so‐called stationarity assumption), this bias is called length bias. The current literature on length‐biased sampling lacks a simple method for estimating the margin of error of commonly used summary statistics. We fill this gap by adapting empirical likelihood‐based confidence intervals to right‐censored length‐biased survival data. Both large‐ and small‐sample behaviors of these confidence intervals are studied. We illustrate our method using a set of data on survival with dementia, collected as part of the Canadian Study of Health and Aging. Copyright © 2012 John Wiley & Sons, Ltd.
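To make the bias concrete: under the stationarity assumption the sampled (length-biased) failure-time density is tilted by the failure time itself, so long survivors are over-represented. This is the standard length-bias relation rather than anything specific to this paper:

```latex
f_{LB}(t) = \frac{t\, f(t)}{\mu}, \qquad \mu = \int_0^{\infty} u\, f(u)\, du,
\qquad \text{so that} \qquad
E_{LB}[T] = \frac{E[T^{2}]}{E[T]} \;\ge\; E[T].
```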
6.
Survival data are described as interval censored when the failure time is not measured exactly but is known only to have occurred within a defined interval. In this paper, we describe and assess three methods for calculating pointwise confidence intervals for the non-parametric survivor function estimated from interval-censored data: the first based on the full information matrix, the second a modification of this approach involving deletion of rows and columns of the information matrix corresponding to zero estimates prior to inversion and the third based on likelihood ratio inference. In a simulation study the likelihood ratio method gave the most accurate confidence intervals with coverage consistently close to the nominal level of 95 per cent.
7.
Adjusted Kaplan-Meier estimator and log-rank test with inverse probability of treatment weighting for survival data
Estimation and group comparison of survival curves are two very common issues in survival analysis. In practice, the Kaplan-Meier estimates of survival functions may be biased due to unbalanced distribution of confounders. Here we develop an adjusted Kaplan-Meier estimator (AKME) to reduce confounding effects using inverse probability of treatment weighting (IPTW). Each observation is weighted by its inverse probability of being in a certain group. The AKME is shown to be a consistent estimate of the survival function, and the variance of the AKME is derived. A weighted log-rank test is proposed for comparing group differences of survival functions. Simulation studies are used to illustrate the performance of the AKME and the weighted log-rank test. The method proposed here outperforms the standard Kaplan-Meier estimate and performs as well as or better than other estimators based on stratification. The AKME and the weighted log-rank test are applied to two real examples: one is a study of times to reinfection of sexually transmitted diseases, and the other is the primary biliary cirrhosis (PBC) study.
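A minimal sketch of the AKME idea follows, with a logistic working model for the propensity score (our choice for illustration; any consistent estimate of the group probabilities would do). The Kaplan-Meier product is simply taken over IPTW-weighted event and at-risk counts; the paper's variance estimator and weighted log-rank test are omitted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def iptw_weights(X, treated):
    """Inverse probability of treatment weights from a logistic propensity model."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    return np.where(treated == 1, 1.0 / ps, 1.0 / (1.0 - ps))

def adjusted_km(times, events, w):
    """Weighted Kaplan-Meier curve: weighted deaths over weighted at-risk counts."""
    uniq = np.unique(times[events == 1])
    s, surv = 1.0, []
    for t in uniq:
        n_w = w[times >= t].sum()                     # weighted number at risk
        d_w = w[(times == t) & (events == 1)].sum()   # weighted number of events
        s *= 1.0 - d_w / n_w
        surv.append(s)
    return uniq, np.array(surv)
```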
8.
In this paper, we propose empirical likelihood methods based on influence function and jackknife techniques for constructing confidence intervals for the mean medical cost with censored data. We conduct a simulation study to compare the coverage probabilities and interval lengths of our proposed confidence intervals with those of the existing normal approximation‐based confidence intervals and bootstrap confidence intervals. The proposed methods have better finite‐sample performance than the existing methods. Finally, we illustrate our proposed methods with a relevant example.
9.
10.
The weighted Kaplan–Meier (WKM) estimator is often used to incorporate prognostic covariates into survival analysis to improve efficiency and correct for potential bias. In this paper, we generalize the WKM estimator to handle multiple prognostic covariates and potentially dependent censoring through the use of prognostic covariates. We propose to combine the multiple prognostic covariates into two risk scores derived from two working proportional hazards models: one model for the event times and the other for the censoring times. These two risk scores are then categorized to define the risk groups needed for the WKM estimator. A method of defining the categories based on principal components is proposed. We show that the WKM estimator is robust to misspecification of either one of the two working models. In simulation studies, we show that the robust WKM approach can reduce bias due to dependent censoring and improve efficiency. We apply the robust WKM approach to a prostate cancer data set. Copyright © 2010 John Wiley & Sons, Ltd.
11.
Response‐adaptive treatment allocation for survival trials with clustered right‐censored data
A comparison of two treatments with survival outcomes in a clinical study may require treatment randomization on clusters of multiple units with correlated responses. For example, for patients with otitis media in both ears, a specific treatment is normally given to a single patient, and hence the two ears constitute a cluster. Statistical procedures are available for the comparison of treatment efficacies. The conventional approach for treatment allocation is the adoption of a balanced design, in which half of the patients are assigned to each treatment arm. However, considering the increasing acceptability of response‐adaptive designs in recent years because of their desirable features, we have developed a response‐adaptive treatment allocation scheme for survival trials with clustered data. The proposed treatment allocation scheme is superior to the balanced design in that it allows more patients to receive the better treatment. At the same time, the test power for comparing treatment efficacies using our treatment allocation scheme remains highly competitive. The advantage of the proposed randomization procedure is supported by a simulation study and the redesign of a clinical study.
12.
A multivariate cure model for left‐censored and right‐censored data with application to colorectal cancer screening patterns
We develop a multivariate cure survival model to estimate lifetime patterns of colorectal cancer screening. Screening data cover long periods of time, with sparse observations for each person. Some events may occur before the study begins or after the study ends, so the data are both left‐censored and right‐censored, and some individuals are never screened (the ‘cured’ population). We propose a multivariate parametric cure model that can be used with left‐censored and right‐censored data. Our model allows for the estimation of the time to screening as well as the average number of times individuals will be screened. We calculate likelihood functions based on the observations for each subject using a distribution that accounts for within‐subject correlation and estimate parameters using Markov chain Monte Carlo methods. We apply our methods to the estimation of lifetime colorectal cancer screening behavior in the SEER‐Medicare data set. Copyright © 2016 John Wiley & Sons, Ltd.
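The univariate building block behind such a model is the standard mixture-cure decomposition, shown schematically below (the paper's model is multivariate with within-subject correlation and is fit by MCMC, none of which appears in this sketch). With cure probability π, here the probability of never being screened, and susceptible-population survival S_u, the population survival and the three likelihood contributions are:

```latex
S_{\mathrm{pop}}(t) = \pi + (1-\pi)\, S_u(t),
\qquad
L_i =
\begin{cases}
(1-\pi)\, f_u(t_i), & \text{event observed at } t_i,\\
(1-\pi)\, F_u(l_i), & \text{left-censored at } l_i,\\
\pi + (1-\pi)\, S_u(r_i), & \text{right-censored at } r_i.
\end{cases}
```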
13.
We propose a method for calculating power and sample size for studies involving interval‐censored failure time data that only involves standard software required for fitting the appropriate parametric survival model. We use the framework of a longitudinal study where patients are assessed periodically for a response and the only resultant information available to the investigators is the failure window: the time between the last negative and first positive test results. The survival model is fit to an expanded data set using easily computed weights. We illustrate with a Weibull survival model and a two‐group comparison. The investigator can specify a group difference in terms of a hazards ratio. Our simulation results demonstrate the merits of these proposed power calculations. We also explore how the number of assessments (visits), and thus the corresponding lengths of the failure intervals, affect study power. The proposed method can be easily extended to more complex study designs and a variety of survival and censoring distributions. Copyright © 2015 John Wiley & Sons, Ltd.
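The abstract's weighted-expansion shortcut cannot be reproduced from the summary alone, but the quantity it approximates can be sketched by brute force: simulate two-arm Weibull data, reduce each failure time to its failure window between scheduled visits, fit the interval-censored Weibull by maximum likelihood, and count likelihood-ratio rejections. All names and default values below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def ic_weibull_nll(params, L, R, grp):
    """Negative log-likelihood for an interval-censored Weibull PH model:
    S(t | g) = exp(-exp(beta*g) * (t/scale)^shape); each subject contributes
    log(S(L) - S(R)), with R = inf encoding right-censoring."""
    log_shape, log_scale = params[0], params[1]
    beta = params[2] if len(params) > 2 else 0.0     # the null model omits beta
    shape, scale = np.exp(log_shape), np.exp(log_scale)
    lam = np.exp(beta * grp)
    SL = np.exp(-lam * (L / scale) ** shape)         # S(0) = 1 covers L = 0
    SR = np.exp(-lam * (R / scale) ** shape)         # R = inf gives S = 0
    return -np.sum(np.log(np.clip(SL - SR, 1e-300, None)))

def simulate_power(n_per_arm=100, hr=1.75, shape=1.2, scale=3.0,
                   visits=np.arange(0.5, 5.01, 0.5), n_sim=200, alpha=0.05,
                   seed=1):
    """Monte Carlo power of the likelihood-ratio test for a group effect."""
    rng = np.random.default_rng(seed)
    grp = np.repeat([0.0, 1.0], n_per_arm)
    rejections = 0
    for _ in range(n_sim):
        u = rng.uniform(size=grp.size)
        t = scale * (-np.log(u) / hr ** grp) ** (1.0 / shape)
        # Reduce each failure time to its failure window between visits
        L = np.array([visits[visits < ti].max(initial=0.0) for ti in t])
        R = np.array([visits[visits >= ti].min(initial=np.inf) for ti in t])
        full = minimize(ic_weibull_nll, [0.0, np.log(scale), 0.0],
                        args=(L, R, grp), method="Nelder-Mead")
        null = minimize(ic_weibull_nll, [0.0, np.log(scale)],
                        args=(L, R, grp), method="Nelder-Mead")
        rejections += 2.0 * (null.fun - full.fun) > chi2.ppf(1 - alpha, df=1)
    return rejections / n_sim
```

Power is then read off as the rejection fraction; varying `visits` shows how assessment frequency, and hence failure-interval width, moves the power curve.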
14.
D.J. Lowsky, Y. Ding, D.K.K. Lee, C.E. McCulloch, L.F. Ross, J.R. Thistlethwaite, S.A. Zenios. Statistics in Medicine 2013, 32(12): 2062-2069
We introduce a nonparametric survival prediction method for right‐censored data. The method generates a survival curve prediction by constructing a (weighted) Kaplan–Meier estimator using the outcomes of the K most similar training observations. Each observation has an associated set of covariates, and a metric on the covariate space is used to measure similarity between observations. We apply our method to a kidney transplantation data set to generate patient‐specific distributions of graft survival and to a simulated data set in which the proportional hazards assumption is explicitly violated. We compare the performance of our method with the standard Cox model and the random survival forests method. Copyright © 2012 John Wiley & Sons, Ltd.
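A minimal sketch of the unweighted variant under our own naming: Euclidean distance on pre-scaled covariates selects the K most similar training subjects, and an ordinary Kaplan-Meier estimator is fit to their outcomes. The paper also allows kernel weights that decay with distance, which this sketch omits.

```python
import numpy as np

def km_curve(times, events):
    """Plain Kaplan-Meier curve over the distinct event times."""
    uniq = np.unique(times[events == 1])
    s, surv = 1.0, []
    for t in uniq:
        n = (times >= t).sum()
        d = ((times == t) & (events == 1)).sum()
        s *= 1.0 - d / n
        surv.append(s)
    return uniq, np.array(surv)

def knn_survival_curve(X_train, times, events, x_new, k=50):
    """Patient-specific survival curve from the K nearest training subjects."""
    dist = np.linalg.norm(X_train - x_new, axis=1)  # metric on covariate space
    nn = np.argsort(dist)[:k]                       # indices of the K nearest
    return km_curve(times[nn], events[nn])
```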
15.
Mixture models for undiagnosed prevalent disease and interval‐censored incident disease: applications to a cohort assembled from electronic health records
Li C. Cheung, Qing Pan, Noorie Hyun, Mark Schiffman, Barbara Fetterman, Philip E. Castle, Thomas Lorey, Hormuzd A. Katki. Statistics in Medicine 2017, 36(22): 3583-3595
For cost‐effectiveness and efficiency, many large‐scale general‐purpose cohort studies are being assembled within large health‐care providers who use electronic health records. Two key features of such data are that incident disease is interval‐censored between irregular visits and there can be pre‐existing (prevalent) disease. Because prevalent disease is not always immediately diagnosed, some disease diagnosed at later visits is actually undiagnosed prevalent disease. We consider prevalent disease as a point mass at time zero for clinical applications where there is no interest in the time of prevalent disease onset. We demonstrate that the naive Kaplan–Meier cumulative risk estimator underestimates risks at early time points and overestimates later risks. We propose a general family of mixture models for undiagnosed prevalent disease and interval‐censored incident disease that we call prevalence–incidence models. Parameters for parametric prevalence–incidence models, such as the logistic regression and Weibull survival (logistic–Weibull) model, are estimated by direct likelihood maximization or by the EM algorithm. Non‐parametric methods are proposed to calculate cumulative risks for the case without covariates. We compare naive Kaplan–Meier, logistic–Weibull, and non‐parametric estimates of cumulative risk in the cervical cancer screening program at Kaiser Permanente Northern California. Kaplan–Meier provided poor estimates, while the logistic–Weibull model was a close fit to the non‐parametric estimates. Our findings support our use of logistic–Weibull models to develop the risk estimates that underlie current US risk‐based cervical cancer screening guidelines. Published 2017. This article has been contributed to by US Government employees and their work is in the public domain in the USA.
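Schematically, the logistic-Weibull prevalence-incidence model places a covariate-dependent point mass p(x) at time zero for undiagnosed prevalent disease and a Weibull distribution for incident disease. The parameterization below is our reading of the family described, not a quotation of the paper:

```latex
\Pr(T \le t \mid \mathbf{x})
= p(\mathbf{x}) + \bigl(1 - p(\mathbf{x})\bigr)\Bigl(1 - e^{-(t/\lambda(\mathbf{x}))^{\gamma}}\Bigr),
\qquad
p(\mathbf{x}) = \operatorname{logit}^{-1}\!\bigl(\alpha_0 + \boldsymbol{\alpha}^{\top}\mathbf{x}\bigr).
```

At t = 0 the cumulative risk equals p(x), which is exactly the early-time mass that the naive Kaplan-Meier estimator smears over later time points.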
16.
Interval‐censored data, in which the event time is only known to lie in some time interval, arise commonly in practice, for example, in a medical study in which patients visit clinics or hospitals at prescheduled times and the events of interest occur between visits. Such data are appropriately analyzed using methods that account for this uncertainty in event time measurement. In this paper, we propose a survival tree method for interval‐censored data based on the conditional inference framework. Using Monte Carlo simulations, we find that the tree is effective in uncovering underlying tree structure, performs similarly to an interval‐censored Cox proportional hazards model fit when the true relationship is linear, and performs at least as well as (and in the presence of right‐censoring outperforms) the Cox model when the true relationship is not linear. Further, the interval‐censored tree outperforms survival trees based on imputing the event time as an endpoint or the midpoint of the censoring interval. We illustrate the application of the method on tooth emergence data.
17.
BinomiRare: A robust test of the association of a rare variant with a disease for pooled analysis and meta‐analysis, with application to the HCHS/SOL
Tamar Sofer. Genetic Epidemiology 2017, 41(5): 388-395
Most regression‐based tests of the association between a low‐count variant and a binary outcome do not protect type 1 error, especially when tests are rejected based on a very low significance threshold. A noted exception is the Firth test. However, it was recently shown that in meta‐analyses of multiple studies all asymptotic, regression‐based tests, including the Firth test, may not control type 1 error in some settings, and the Firth test may suffer a substantial loss of power. The problem is exacerbated when the case‐control proportions differ between studies. I propose the BinomiRare exact test, which circumvents the calibration problems of regression‐based estimators. It quantifies the strength of association between the variant and the disease outcome based on the departure of the number of diseased individuals carrying the variant from the distribution expected under the null hypothesis of no association between the disease outcome and the rare variant. I provide a meta‐analytic strategy to combine tests across multiple cohorts, which requires that each cohort provide the disease probabilities of all carriers of the variant in question and the number of diseased individuals among the carriers. I show that BinomiRare controls type 1 error in meta‐analysis even when the case‐control proportions differ between the studies, and does not lose power compared with pooled analysis. I demonstrate the test by studying the association of rare variants with asthma in the Hispanic Community Health Study/Study of Latinos.
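Under the null, the number of diseased carriers is a sum of independent Bernoulli draws with the carriers' model-based disease probabilities, i.e., a Poisson-binomial variable. The sketch below computes its exact pmf and a two-sided p-value by the "sum of outcomes no more likely than the one observed" convention; the published test may use a different two-sided rule, and the function names are ours.

```python
import numpy as np

def poisson_binomial_pmf(probs):
    """Exact pmf of a sum of independent Bernoulli(p_i) via repeated convolution."""
    pmf = np.array([1.0])
    for p in probs:
        pmf = np.convolve(pmf, [1.0 - p, p])
    return pmf

def binomirare_pvalue(carrier_probs, n_diseased_carriers):
    """Exact two-sided p-value for the observed diseased-carrier count."""
    pmf = poisson_binomial_pmf(np.asarray(carrier_probs, dtype=float))
    p_obs = pmf[n_diseased_carriers]
    return pmf[pmf <= p_obs + 1e-12].sum()
```

Because only the carriers' disease probabilities and the diseased-carrier count enter, cohorts can share these two summaries for meta-analysis without pooling individual-level data, which matches the strategy described in the abstract.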
18.
Prognostic studies are widely conducted to examine whether biomarkers are associated with patients' prognoses and play important roles in medical decisions. Because findings from any one prognostic study may be very limited, meta‐analyses may be useful for obtaining sound evidence. However, prognostic studies are often analyzed by relying on a study‐specific cut‐off value, which can make it difficult to apply standard meta‐analysis techniques. In this paper, we propose two methods to estimate a time‐dependent version of the summary receiver operating characteristic curve for meta‐analyses of prognostic studies with a right‐censored time‐to‐event outcome. We introduce a bivariate normal model for the pair of time‐dependent sensitivity and specificity and propose a method to form inferences based on summary statistics reported in published papers. This method provides a valid inference asymptotically. In addition, we consider a bivariate binomial model. To draw inferences from this bivariate binomial model, we introduce a multiple imputation method. The multiple imputation is found to be approximately proper, and thus the standard Rubin's variance formula is justified from a Bayesian viewpoint. Our simulation study and application to a real dataset revealed that both methods work well with a moderate or large number of studies and that the bivariate binomial model coupled with multiple imputation outperforms the bivariate normal model with a small number of studies. Copyright © 2016 John Wiley & Sons, Ltd.
19.
Confidence interval (CI) construction for the proportion/rate difference for paired binary data has become a standard procedure in many clinical trials and medical studies. When the sample size is small and incomplete data are present, asymptotic CIs may be dubious, and exact CIs are not yet available. In this article, we propose exact and approximate unconditional test‐based methods for constructing CIs for the proportion/rate difference in the presence of incomplete paired binary data. Approaches based on one‐ and two‐sided Wald tests are considered. Unlike asymptotic CI estimators, exact unconditional CI estimators always guarantee coverage probabilities at or above the pre‐specified confidence level. Our empirical studies further show that (i) approximate unconditional CI estimators usually yield a shorter expected confidence width (ECW), with their coverage probabilities well controlled around the pre‐specified confidence level; and (ii) the ECWs of the unconditional two‐sided‐test‐based CI estimators are generally narrower than those of the unconditional one‐sided‐test‐based CI estimators. Moreover, the ECWs of asymptotic CIs may not necessarily be narrower than those of two‐sided‐test‐based exact unconditional CIs. Two real examples are used to illustrate our methodologies. Copyright © 2008 John Wiley & Sons, Ltd.
20.
Hong Zhu. Statistics in Medicine 2014, 33(14): 2467-2479
Regression methods for survival data with right censoring have been extensively studied under semiparametric transformation models such as the Cox regression model and the proportional odds model. However, their practical application could be limited because of possible violation of model assumptions or lack of a ready interpretation for the regression coefficients in some cases. As an alternative, in this paper the proportional likelihood ratio model introduced by Luo and Tsai is extended to flexibly model the relationship between survival outcome and covariates. This model has a natural connection with many important semiparametric models, such as the generalized linear model and the density ratio model, and is closely related to biased sampling problems. Compared with the semiparametric transformation model, the proportional likelihood ratio model is appealing and practical in many ways because of its model flexibility and quite direct clinical interpretation. We present two likelihood approaches for the estimation and inference of the target regression parameters under independent and dependent censoring assumptions. Based on a conditional likelihood approach using uncensored failure times, a numerically simple estimation procedure is developed by maximizing a pairwise pseudo‐likelihood. We also develop a full likelihood approach, in which the most efficient maximum likelihood estimator is obtained via a profile likelihood. Simulation studies are conducted to assess the finite‐sample properties of the proposed estimators and to compare the efficiency of the two likelihood approaches. An application to survival data for bone marrow transplantation patients with acute leukemia is provided to illustrate the proposed method and other approaches for handling non‐proportionality. The relative merits of these methods are discussed in the concluding remarks. Copyright © 2014 John Wiley & Sons, Ltd.
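For orientation, the proportional likelihood ratio model is usually written as an exponential tilt of an unspecified baseline density f_0; we state this standard form as background (the paper extends it to censored survival data). The baseline cancels from the conditional comparison of two uncensored subjects, which is what makes the pairwise pseudo-likelihood numerically simple:

```latex
f(t \mid \mathbf{x}) = \frac{e^{\,t\,\boldsymbol{\beta}^{\top}\mathbf{x}}\, f_0(t)}
{\int e^{\,u\,\boldsymbol{\beta}^{\top}\mathbf{x}}\, f_0(u)\, du},
\qquad
L_{ij}(\boldsymbol{\beta}) =
\frac{1}{1 + \exp\!\bigl((t_i - t_j)\,\boldsymbol{\beta}^{\top}(\mathbf{x}_j - \mathbf{x}_i)\bigr)},
```

where L_ij is the conditional probability that subject i is the one observed at t_i, given that the pair's failure times are {t_i, t_j}.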