Similar Articles
20 similar articles retrieved (search time: 31 ms)
1.
Recurrent event data are commonly observed in biomedical longitudinal studies. In many instances, there exists a terminal event, which precludes the occurrence of additional repeated events, and there is usually a nonignorable correlation between the terminal event and the recurrent events. In this article, we propose a partly additive Aalen model with a multiplicative frailty for the rate function of the recurrent event process and assume a Cox frailty model for the terminal event time. A shared gamma frailty is used to describe the correlation between the two types of events. Consequently, this joint model provides information on the temporal influence of absolute covariate effects on the rate of the recurrent event process, which is often helpful in physicians' decision-making. An estimating equation approach is developed to estimate the marginal and association parameters in the joint model. The consistency of the proposed estimators is established. Simulation studies demonstrate that the proposed approach is appropriate for practical use. We apply the proposed method to a peritonitis cohort data set for illustration. Copyright © 2015 John Wiley & Sons, Ltd.
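In symbols, one common way to write such a joint specification is sketched below. The notation is ours and may differ from the authors'; in particular, the "partly additive" structure may also carry multiplicative covariate terms, which this sketch omits. Here $u_i$ is the shared gamma frailty.

```latex
% Recurrent-event rate: additive (Aalen) covariate effects, scaled by the frailty
r_i(t \mid u_i) = u_i \,\bigl\{ \alpha_0(t) + \alpha(t)^\top X_i \bigr\}

% Terminal-event hazard: Cox proportional hazards sharing the same frailty
\lambda_i^{D}(t \mid u_i) = u_i \,\lambda_0(t) \exp\bigl( \gamma^\top Z_i \bigr)

% Shared frailty: u_i ~ Gamma(1/\theta, 1/\theta), so E[u_i] = 1, Var(u_i) = \theta
```

Because both intensities increase in $u_i$, a large frailty simultaneously raises the recurrence rate and shortens the time to the terminal event, which is how the model captures their positive correlation.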

2.
The purpose of this paper is to develop a formula for calculating the required sample size for paired recurrent events data. The developed formula is based on robust non‐parametric tests for comparing the marginal mean function of events between paired samples. This calculation can accommodate the associations among a sequence of paired recurrent event times with a specification of correlated gamma frailty variables for a proportional intensity model. We evaluate the performance of the proposed method with comprehensive simulations including the impacts of paired correlations, homogeneous or nonhomogeneous processes, marginal hazard rates, censoring rate, accrual and follow‐up times, as well as the sensitivity analysis for the assumption of the frailty distribution. The use of the formula is also demonstrated using a premature infant study from the neonatal intensive care unit of a tertiary center in southern Taiwan. Copyright © 2017 John Wiley & Sons, Ltd.

3.
Times between sequentially ordered events (gap times) are often of interest in biomedical studies. For example, in a cancer study, the gap times from incidence-to-remission and remission-to-recurrence may be examined. Such data are usually subject to right censoring, and within-subject failure times are generally not independent. Statistical challenges in the analysis of the second and subsequent gap times include induced dependent censoring and non-identifiability of the marginal distributions. We propose a non-parametric method for constructing one-sample estimators of conditional gap-time specific survival functions. The estimators are uniformly consistent and, upon standardization, converge weakly to a zero-mean Gaussian process, with a covariance function which can be consistently estimated. Simulation studies reveal that the asymptotic approximations are appropriate for finite samples. Methods for confidence bands are provided. The proposed methods are illustrated on a renal failure data set, where the probabilities of transplant wait-listing and kidney transplantation are of interest.
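The "induced dependent censoring" problem the abstract mentions can be made concrete with a tiny numeric sketch (the numbers are ours, purely illustrative): if total follow-up is censored at C, then the second gap time can only be observed up to C minus the first gap, so its effective censoring time is determined by the first event.

```python
import numpy as np

# Hypothetical subjects, all with total follow-up C = 10.
# Observed first gap times T1 (e.g., incidence-to-remission):
C = 10.0
t1 = np.array([2.0, 6.0, 9.0])

# The second gap time T2 is observable only up to C - T1, so the
# effective censoring time for T2 is induced by T1:
censor2 = C - t1
print(censor2)  # [8. 4. 1.]

# If T1 and T2 are positively correlated within subjects, a large T1
# both predicts a large T2 and leaves little follow-up in which to
# observe it -- the censoring of T2 depends on the event process itself,
# so a naive Kaplan-Meier on the observed second gaps is biased.
```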

4.
This paper considers the analysis of a repeat event outcome in clinical trials of chronic diseases in the context of dependent censoring (e.g. mortality). It has particular application in the context of recurrent heart failure hospitalisations in trials of heart failure. Semi‐parametric joint frailty models (JFMs) simultaneously analyse recurrent heart failure hospitalisations and time to cardiovascular death, estimating distinct hazard ratios whilst individual‐specific latent variables induce associations between the two processes. A simulation study was carried out to assess the suitability of the JFM versus marginal analyses of recurrent events and cardiovascular death using standard methods. Hazard ratios were consistently overestimated when marginal models were used, whilst the JFM produced good, well‐estimated results. An application to the Candesartan in Heart failure: Assessment of Reduction in Mortality and morbidity programme was considered. The JFM gave unbiased estimates of treatment effects in the presence of dependent censoring. We advocate the use of the JFM for future trials that consider recurrent events as the primary outcome. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

5.
Alternating recurrent event data arise frequently in clinical and epidemiologic studies, where 2 types of events such as hospital admission and discharge occur alternately over time. The 2 alternating states defined by these recurrent events could each carry important and distinct information about a patient's underlying health condition and/or the quality of care. In this paper, we propose a semiparametric method for evaluating covariate effects on the 2 alternating states jointly. The proposed methodology accounts for the dependence among the alternating states as well as the heterogeneity across patients via a frailty with unspecified distribution. Moreover, the estimation procedure, which is based on smooth estimating equations, not only properly addresses challenges such as induced dependent censoring and intercept sampling bias commonly confronted in serial event gap time data but also is more computationally tractable than the existing rank‐based methods. The proposed methods are evaluated by simulation studies and illustrated by analyzing psychiatric contacts from the South Verona Psychiatric Case Register.

6.
Recurrent events arise when an event occurs many times for a subject. Many models have been developed to analyze this kind of data: the Andersen-Gill model, the Prentice-Williams-Peterson model, the Wei-Lin-Weissfeld model, and frailty models, all assuming independent and noninformative censoring. In practice, however, these assumptions may be violated by the existence of a terminal event that permanently stops the recurrent process (eg, death). Indeed, a patient who experiences an early terminal event is likely to have fewer recurrent events than a patient who experiences a terminal event later. Thus, ignoring terminal events in the analysis may lead to biased results. Many methods have been developed to handle terminal events. In this paper, we describe the existing methods, classifying them as conditional or marginal, and compare them in a simulation study to highlight the bias in results when an inappropriate method is used and the recurrent events and terminal event are correlated. In addition, we apply the different models to a real dataset to show how the results should be interpreted. Finally, we provide recommendations for choosing the appropriate method for analyzing recurrent events in the presence of a terminal event.
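All of the named models (Andersen-Gill and its relatives) are fit on data laid out in counting-process form, one (start, stop, status) row per at-risk interval. A minimal sketch of that layout follows; the helper name and the example subject are ours.

```python
def to_counting_process(event_times, followup):
    """Split one subject's timeline into (start, stop, status) rows.

    event_times : sorted recurrent-event times
    followup    : end of observation (censoring or terminal event)
    """
    rows, start = [], 0.0
    for t in event_times:
        rows.append((start, t, 1))          # interval ending in an event
        start = t
    if start < followup:
        rows.append((start, followup, 0))   # final censored interval
    return rows

# Subject with recurrent events at t = 2 and t = 5, followed until t = 8:
print(to_counting_process([2.0, 5.0], 8.0))
# [(0.0, 2.0, 1), (2.0, 5.0, 1), (5.0, 8.0, 0)]
```

Andersen-Gill treats these rows with a common baseline; Prentice-Williams-Peterson stratifies by event number; gap-time variants reset the clock to zero at each event.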

7.
We provide non-parametric estimates of the marginal cumulative distribution of stage occupation times (waiting times) and non-parametric estimates of marginal cumulative incidence function (proportion of persons who leave stage j for stage j' within time t of entering stage j) using right-censored data from a multi-stage model. We allow for stage and path dependent censoring where the censoring hazard for an individual may depend on his or her natural covariate history such as the collection of stages visited before the current stage and their occupation times. Additional external time dependent covariates that may induce dependent censoring can also be incorporated into our estimates, if available. Our approach requires modelling the censoring hazard so that an estimate of the integrated censoring hazard can be used in constructing the estimates of the waiting times distributions. For this purpose, we propose the use of an additive hazard model which results in very flexible (robust) estimates. Examples based on data from burn patients and simulated data with tracking are also provided to demonstrate the performance of our estimators.

8.
We develop an approach, based on multiple imputation, that estimates the marginal survival distribution in survival analysis using auxiliary variables to recover information for censored observations. To conduct the imputation, we use two working survival models to define a nearest neighbour imputing risk set. One model is for the event times and the other for the censoring times. Based on the imputing risk set, two non-parametric multiple imputation methods are considered: risk set imputation, and Kaplan-Meier imputation. For both methods a future event or censoring time is imputed for each censored observation. With a categorical auxiliary variable, we show that with a large number of imputes the estimates from the Kaplan-Meier imputation method correspond to the weighted Kaplan-Meier estimator. We also show that the Kaplan-Meier imputation method is robust to mis-specification of either one of the two working models. In a simulation study with time independent and time-dependent auxiliary variables, we compare the multiple imputation approaches with an inverse probability of censoring weighted method. We show that all approaches can reduce bias due to dependent censoring and improve the efficiency. We apply the approaches to AIDS clinical trial data comparing ZDV and placebo, in which CD4 count is the time-dependent auxiliary variable.
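The core of risk-set imputation can be sketched in a few lines: for a censored subject, restrict attention to subjects still at risk past the censoring time, narrow that set to nearest neighbours on an auxiliary variable, and draw one of their (time, status) pairs. This is a simplified illustration, not the authors' procedure (in particular, the paper defines neighbours through two working survival models rather than a raw covariate distance), and all names here are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def risk_set_impute(times, status, aux, i, k=5):
    """One risk-set imputation draw for censored subject i.

    Among subjects with observed time beyond subject i's censoring time,
    keep the k closest on the auxiliary variable, then draw one
    (time, status) pair at random from that imputing risk set.
    """
    at_risk = np.flatnonzero(times > times[i])
    if at_risk.size == 0:
        return times[i], status[i]  # empty risk set: nothing to impute from
    nearest = at_risk[np.argsort(np.abs(aux[at_risk] - aux[i]))[:k]]
    j = rng.choice(nearest)
    return times[j], status[j]

times  = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
status = np.array([0,   1,   1,   0,   1])   # 0 = censored, 1 = event
aux    = np.array([10., 11., 30., 12., 31.])
print(risk_set_impute(times, status, aux, i=0, k=2))
```

Repeating the draw many times and applying Kaplan-Meier to each completed dataset, then combining, gives the multiple-imputation estimate of the marginal survival curve.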

9.
One-sample non-parametric tests are proposed here for inference on recurring events. The focus is on the marginal mean function of events and the basis for inference is the standardized distance between the observed and the expected number of events under a specified reference rate. Different weights are considered in order to account for various types of alternative hypotheses on the mean function of the recurrent events process. A robust version and a stratified version of the test are also proposed. The performance of these tests was investigated through simulation studies under various underlying event generation processes, such as homogeneous and nonhomogeneous Poisson processes, autoregressive and renewal processes, with and without frailty effects. The robust versions of the test have been shown to be suitable in a wide variety of event generating processes. The motivating context is a study on gene therapy in a very rare immunodeficiency in children, where a major end-point is the recurrence of severe infections. Robust non-parametric one-sample tests for recurrent events can be useful to assess efficacy and especially safety in non-randomized studies or in epidemiological studies for comparison with a standard population.
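The simplest member of this family of statistics is easy to state: compare the observed event total with its expectation under a constant reference rate, standardised by a Poisson variance. This is a deliberately stripped-down sketch of the idea (the paper's tests allow general weights, stratification, and robust variance estimates, none of which are reproduced here).

```python
import math

def one_sample_recurrence_z(n_events, followup_times, ref_rate):
    """Unweighted standardised distance between observed and expected
    event counts under a constant reference rate (Poisson null variance)."""
    expected = ref_rate * sum(followup_times)
    return (n_events - expected) / math.sqrt(expected)

# 4 severe infections observed over follow-ups of 1, 2 and 3 years,
# against a reference rate of 0.5 events/year (expected count = 3):
z = one_sample_recurrence_z(4, [1.0, 2.0, 3.0], 0.5)
print(round(z, 3))  # 0.577
```

Replacing the Poisson variance with an empirical between-subject variance gives a robust version that remains valid under over-dispersed (e.g., frailty-driven) event processes.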

10.
In clinical trials with time‐to‐event outcomes, it is common to estimate the marginal hazard ratio from the proportional hazards model, even when the proportional hazards assumption is not valid. This is unavoidable from the perspective that the estimator must be specified a priori if probability statements about treatment effect estimates are desired. Marginal hazard ratio estimates under non‐proportional hazards are still useful, as they can be considered to be average treatment effect estimates over the support of the data. However, as many have shown, under non‐proportional hazards, the 'usual' unweighted marginal hazard ratio estimate is a function of the censoring distribution, which is not normally considered to be scientifically relevant when describing the treatment effect. In addition, in many practical settings, the censoring distribution is only conditionally independent (e.g., differing across treatment arms), which further complicates the interpretation. In this paper, we investigate an estimator of the hazard ratio that removes the influence of censoring and propose a consistent robust variance estimator. We compare the coverage probability of the estimator to both the usual Cox model estimator and an estimator proposed by Xu and O'Quigley (2000) when censoring is independent of the covariate. The new estimator should be used for inference that does not depend on the censoring distribution. It is particularly relevant to adaptive clinical trials where, by design, censoring distributions differ across treatment arms. Copyright © 2012 John Wiley & Sons, Ltd.

11.
Wang M, Long Q. Statistics in Medicine 2011; 30(11):1278-1291
Generalized estimating equations (GEE; Biometrika 1986; 73(1):13-22) is a general statistical method to fit marginal models for correlated or clustered responses, and it uses a robust sandwich estimator to estimate the variance-covariance matrix of the regression coefficient estimates. While this sandwich estimator is robust to the misspecification of the correlation structure of the responses, its finite sample performance deteriorates as the number of clusters or observations per cluster decreases. To address this limitation, Pan (Biometrika 2001; 88(3):901-906) and Mancl and DeRouen (Biometrics 2001; 57(1):126-134) investigated two modifications to the original sandwich variance estimator. Motivated by the ideas underlying these two modifications, we propose a novel robust variance estimator that combines the strengths of these estimators. Our theoretical and numerical results show that the proposed estimator attains better efficiency and achieves better finite sample performance compared with existing estimators. In particular, when the sample size or cluster size is small, our proposed estimator exhibits lower bias and the resulting confidence intervals for GEE estimates achieve better coverage rates. We illustrate the proposed method using data from a dental study.
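The "original sandwich" that all of these corrections modify is straightforward to compute for a linear marginal model with an independence working correlation; the following minimal sketch (our code, not the authors' estimator) makes the bread-meat-bread structure explicit.

```python
import numpy as np

def sandwich_se(X, y, cluster):
    """Coefficients and robust (sandwich) standard errors for a linear
    marginal model under an independence working correlation.

    bread = (X'X)^{-1}; meat = sum over clusters of the outer product
    of cluster-level scores X_c' r_c. Small-sample corrections (Pan;
    Mancl & DeRouen) rescale or inflate the residuals in the meat.
    """
    bread = np.linalg.inv(X.T @ X)
    beta = bread @ X.T @ y
    resid = y - X @ beta
    meat = np.zeros((X.shape[1], X.shape[1]))
    for c in np.unique(cluster):
        m = cluster == c
        g = X[m].T @ resid[m]          # cluster-level score contribution
        meat += np.outer(g, g)
    cov = bread @ meat @ bread
    return beta, np.sqrt(np.diag(cov))

X = np.ones((4, 1))                    # intercept-only model
y = np.array([1.0, 2.0, 3.0, 4.0])
cl = np.array([0, 0, 1, 1])            # two clusters of size two
beta, se = sandwich_se(X, y, cl)
print(beta, se)  # intercept 2.5, robust SE ~0.707
```

With only two clusters, the meat is built from two score vectors, which is exactly the regime where the uncorrected sandwich is known to be biased downward.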

12.
Researchers routinely adopt composite endpoints in multicenter randomized trials designed to evaluate the effect of experimental interventions in cardiovascular disease, diabetes, and cancer. Despite their widespread use, relatively little attention has been paid to the statistical properties of estimators of treatment effect based on composite endpoints. We consider this here in the context of multivariate models for time to event data in which copula functions link marginal distributions with a proportional hazards structure. We then examine the asymptotic and empirical properties of the estimator of treatment effect arising from a Cox regression model for the time to the first event. We point out that even when the treatment effect is the same for the component events, the limiting value of the estimator based on the composite endpoint is usually inconsistent for this common value. We find that in this context the limiting value is determined by the degree of association between the events, the stochastic ordering of events, and the censoring distribution. Within the framework adopted, marginal methods for the analysis of multivariate failure time data yield consistent estimators of treatment effect and are therefore preferred. We illustrate the methods by application to a recent asthma study. Copyright © 2012 John Wiley & Sons, Ltd.

13.
We extend the shared frailty model of recurrent events and a dependent terminal event to allow for a nonparametric covariate function. We include a Gaussian random effect (frailty) in the intensity functions of both the recurrent and terminal events to capture correlation between the two processes. We employ the penalized cubic spline method to describe the nonparametric covariate function in the recurrent events model. We use Laplace approximation to evaluate the marginal penalized partial likelihood without a closed form. We also propose the variance estimates for regression coefficients. Numerical analysis results show that the proposed estimates perform well for both the nonparametric and parametric components. We apply this method to analyze the hospitalization rate of patients with heart failure in the presence of death.
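The Laplace approximation the abstract relies on can be sketched generically (our notation; the paper applies it to the marginal penalized partial likelihood): the contribution of subject $i$ with Gaussian frailty $b_i$ is an integral with no closed form, approximated by a second-order expansion around its mode.

```latex
% h_i(b) collects the log-likelihood terms for subject i plus the
% Gaussian log-density of the frailty b:
\int_{-\infty}^{\infty} e^{h_i(b)}\, db
  \;\approx\; \sqrt{2\pi}\;\bigl| -h_i''(\hat{b}_i) \bigr|^{-1/2}\, e^{h_i(\hat{b}_i)},
\qquad \hat{b}_i = \arg\max_{b}\, h_i(b)
```

Maximizing the resulting approximate marginal likelihood alternates between solving for each $\hat{b}_i$ (an inner optimization) and updating the regression and spline coefficients (the outer optimization).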

14.
Repeated events processes are ubiquitous across a great range of important health, medical, and public policy applications, but models for these processes have serious limitations. Alternative estimators often produce different inferences concerning treatment effects due to bias and inefficiency. We recommend a robust strategy for the estimation of effects in medical treatments, social conditions, individual behaviours, and public policy programs in repeated events survival models under three common conditions: heterogeneity across individuals, dependence across the number of events, and both heterogeneity and event dependence. We compare several models for analysing recurrent event data that exhibit both heterogeneity and event dependence. The conditional frailty model best accounts for the various conditions of heterogeneity and event dependence by using a frailty term, stratification, and gap time formulation of the risk set. We examine the performance of recurrent event models that are commonly used in applied work using Monte Carlo simulations, and apply the findings to data on chronic granulomatous disease and cystic fibrosis.

15.
Multivariate survival data are frequently encountered in biomedical applications in the form of clustered failures (or recurrent events data). A popular way of analyzing such data is by using shared frailty models, which assume that the proportional hazards assumption holds conditional on an unobserved cluster-specific random effect. Such models are often incorporated in more complicated joint models in survival analysis. If the random effect distribution has finite expectation, then the conditional proportional hazards assumption does not carry over to the marginal models. It has been shown that, for univariate data, this makes it impossible to distinguish between the presence of unobserved heterogeneity (eg, due to missing covariates) and marginal nonproportional hazards. We show that time-dependent covariate effects may falsely appear as evidence in favor of a frailty model also in the case of clustered failures or recurrent events data, when the cluster size or number of recurrent events is small. When true unobserved heterogeneity is present, the presence of nonproportional hazards leads to overestimating the frailty effect. We show that this phenomenon is somewhat mitigated as the cluster size grows. We carry out a simulation study to assess the behavior of test statistics and estimators for frailty models in such contexts. The gamma, inverse Gaussian, and positive stable shared frailty models are contrasted using a novel software implementation for estimating semiparametric shared frailty models. Two main questions are addressed in the contexts of clustered failures and recurrent events: whether covariates with a time-dependent effect may appear as indication of unobserved heterogeneity and whether the additional presence of unobserved heterogeneity can be detected in this case. Finally, the practical implications are illustrated in a real-world data analysis example.

16.
In longitudinal studies, matched designs are often employed to control the potential confounding effects in the field of biomedical research and public health. Because of clinical interest, recurrent time‐to‐event data are captured during the follow‐up. Meanwhile, the terminal event of death is always encountered, which should be taken into account for valid inference because of informative censoring. In some scenarios, a certain large portion of subjects may not have any recurrent events during the study period due to nonsusceptibility to events or censoring; thus, the zero‐inflated nature of data should be considered in analysis. In this paper, a joint frailty model with recurrent events and death is proposed to adjust for zero inflation and matched designs. We incorporate 2 frailties to measure the dependency between subjects within a matched pair and that among recurrent events within each individual. By sharing the random effects, 2 event processes of recurrent events and death are dependent with each other. The maximum likelihood based approach is applied for parameter estimation, where the Monte Carlo expectation‐maximization algorithm is adopted, and the corresponding R program is developed and available for public usage. In addition, alternative estimation methods such as Gaussian quadrature (PROC NLMIXED) and a Bayesian approach (PROC MCMC) are also considered for comparison to show our method's superiority. Extensive simulations are conducted, and a real data application on acute ischemic studies is provided in the end.

17.

Background

In matched-pair cohort studies with censored events, the hazard ratio (HR) may be of main interest. However, it is less well known in the epidemiologic literature that the partial maximum likelihood estimator of a common HR conditional on matched pairs can be written in a simple form, namely, the ratio of the numbers of two pair-types. Moreover, because the HR is a noncollapsible measure and its constancy across matched pairs is a restrictive assumption, the marginal HR as an "average" HR may be targeted more often than the conditional HR in analysis.
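The "ratio of the numbers of two pair-types" estimator can be sketched directly; under our simplified reading, the informative pairs are those in which one member is observed to fail first while the other is still at risk, split by which member (exposed or unexposed) fails first. The function and the Wald interval below are our illustration, not the paper's exact derivation.

```python
import math

def common_hr(n_exposed_first, n_unexposed_first):
    """Conditional common-HR estimate as the ratio of the two informative
    pair-type counts, with a rough Wald interval on the log scale."""
    hr = n_exposed_first / n_unexposed_first
    se = math.sqrt(1 / n_exposed_first + 1 / n_unexposed_first)
    lo, hi = hr * math.exp(-1.96 * se), hr * math.exp(1.96 * se)
    return hr, (lo, hi)

# 30 pairs where the exposed member failed first, 20 where the
# unexposed member failed first:
print(common_hr(30, 20))  # HR 1.5, with its log-scale Wald interval
```

Note the formal resemblance to the discordant-pair odds ratio in matched case-control analysis, which is what makes the estimator's interpretation as the odds of a pairwise C-statistic (discussed in the Methods below) natural.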

Methods

Based on its simple expression, we provided an alternative interpretation of the common HR estimator as the odds of the matched-pair analog of C-statistic for censored time-to-event data. Through simulations assuming proportional hazards within matched pairs, the influence of various censoring patterns on the marginal and common HR estimators of unstratified and stratified proportional hazards models, respectively, was evaluated. The methods were applied to a real propensity-score matched dataset from the Rotterdam tumor bank of primary breast cancer.

Results

We showed that stratified models unbiasedly estimated a common HR under proportional hazards within matched pairs. However, the marginal HR estimator with a robust variance estimator lacks interpretation as an "average" marginal HR even if censoring is unconditionally independent of the event, unless no censoring occurs or no exposure effect is present. Furthermore, exposure-dependent censoring biased the marginal HR estimator away from both the conditional HR and an "average" marginal HR, irrespective of whether an exposure effect is present. From the matched Rotterdam dataset, we estimated the HR for relapse-free survival of absence versus presence of chemotherapy; estimates (95% confidence interval) were 1.47 (1.18–1.83) for the common HR and 1.33 (1.13–1.57) for the marginal HR.

Conclusion

The simple expression of the common HR estimator would be a useful summary of the exposure effect, one that is less sensitive to censoring patterns than the marginal HR estimator. The common and marginal HR estimators, each relying on distinct assumptions and interpretations, are complementary alternatives to one another.

19.
The process by which patients experience a series of recurrent events, such as hospitalizations, may be subject to death. In cohort studies, one strategy for analyzing such data is to fit a joint frailty model for the intensities of the recurrent event and death, which estimates covariate effects on the two event types while accounting for their dependence. When certain covariates are difficult to obtain, however, researchers may only have the resources to subsample patients on whom to collect complete data: one way is using the nested case–control (NCC) design, in which risk set sampling is performed based on a single outcome. We develop a general framework for the design of NCC studies in the presence of recurrent and terminal events and propose estimation and inference for a joint frailty model for recurrence and death using data arising from such studies. We propose a maximum weighted penalized likelihood approach using flexible spline models for the baseline intensity functions. Two standard error estimators are proposed: a sandwich estimator and a perturbation resampling procedure. We investigate operating characteristics of our estimators as well as design considerations via a simulation study and illustrate our methods using two studies: one on recurrent cardiac hospitalizations in patients with heart failure and the other on local recurrence and metastasis in patients with breast cancer.

20.
We employ a general bias preventive approach developed by Firth (Biometrika 1993; 80:27-38) to reduce the bias of an estimator of the log-odds ratio parameter in a matched case-control study by solving a modified score equation. We also propose a method to calculate the standard error of the resultant estimator. A closed-form expression for the estimator of the log-odds ratio parameter is derived in the case of a dichotomous exposure variable. Finite sample properties of the estimator are investigated via a simulation study. Finally, we apply the method to analyze a matched case-control data from a low birthweight study.
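For a dichotomous exposure in 1:1 matched pairs, the conditional ML odds ratio is the ratio of the two discordant-pair counts, and a widely used bias-reduced variant simply adds 1/2 to each count. The sketch below shows that variant for illustration only; we have not verified that it coincides with the closed-form Firth estimator derived in the paper.

```python
# Discordant 1:1 matched pairs: a = pairs with case exposed and control
# unexposed; b = pairs with case unexposed and control exposed.
a, b = 10, 5

# Conditional maximum likelihood estimate of the odds ratio:
or_mle = a / b

# A common bias-reduced variant adds 1/2 to each discordant count
# (illustrative; not necessarily the paper's closed form):
or_firth = (a + 0.5) / (b + 0.5)

print(or_mle, round(or_firth, 4))  # 2.0 1.9091
```

The correction pulls the estimate toward 1 and, unlike the raw ratio, remains finite when one discordant count is zero, which is the finite-sample pathology Firth-type methods are designed to prevent.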
