Similar Literature
20 similar documents retrieved.
1.
Event history studies based on disease clinic data often face several complications. Specifically, patients may visit the clinic irregularly, and the intermittent observation times could depend on disease-related variables; this can cause a failure time outcome to be dependently interval-censored. We propose a weighted estimating function approach so that dependently interval-censored failure times can be analysed consistently. A so-called inverse-intensity-of-visit weight is employed to adjust for the informative inspection times. Left truncation of failure times can also be easily handled. Additionally, in observational studies, treatment assignments are typically non-randomized and may depend on disease-related variables. An inverse-probability-of-treatment weight is applied to estimating functions to further adjust for measured confounders. Simulation studies are conducted to examine the finite sample performances of the proposed estimators. Finally, the Toronto Psoriatic Arthritis Cohort Study is used for illustration. Copyright © 2017 John Wiley & Sons, Ltd.
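To make the double-weighting idea concrete, the following display is a minimal sketch with notation assumed for illustration (it is not taken from the paper): each subject's contribution to the estimating function at time t is weighted by the inverse of the estimated visit intensity and by the inverse of the estimated probability of the treatment actually received,

\[
w_i(t) \;=\; \frac{1}{\hat{\lambda}^{V}_i(t)} \;\times\; \frac{1}{\widehat{\Pr}(A_i = a_i \mid X_i)},
\]

so subjects who are under-represented because of infrequent visits or an unlikely treatment assignment are up-weighted, in the spirit of inverse-intensity-of-visit and inverse-probability-of-treatment weighting.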

2.
Interval-censored failure-time data arise when subjects are examined or observed periodically such that the failure time of interest is not observed exactly but is only known to be bracketed between two adjacent observation times. The commonly used approaches assume that the examination times and the failure time are independent or conditionally independent given covariates. In many practical applications, patients who are already in poor health or have a weak immune system before treatment tend to visit physicians more often after treatment than those in better health. In this situation, the visiting rate is positively correlated with the risk of failure due to the underlying health status, which results in dependent interval-censored data. While some measurable factors affecting health status, such as age, gender, and physical symptoms, can be included in the covariates, some health-related latent variables cannot be observed or measured. To deal with dependent interval censoring involving an unobserved latent variable, we characterize the visiting/examination process as a recurrent event process and propose a joint frailty model to account for the association between the failure time and the visiting process. A shared gamma frailty is incorporated multiplicatively into the Cox model for the failure time and the proportional intensity model for the visiting process. We propose a semiparametric maximum likelihood approach for estimating the model parameters and establish the asymptotic properties, including consistency and weak convergence. Extensive simulation studies are conducted, and a data set of bladder cancer is analyzed for illustrative purposes. Copyright © 2017 John Wiley & Sons, Ltd.
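A minimal sketch of such a shared-frailty formulation, with notation assumed for illustration: given a subject-specific gamma frailty \(u_i\) with mean 1 and variance \(\theta\), the failure hazard and the visit intensity are both multiplied by the same frailty,

\[
\lambda_i(t \mid u_i) = u_i\,\lambda_0(t)\exp(x_i^{\top}\beta),
\qquad
r_i(t \mid u_i) = u_i\,r_0(t)\exp(x_i^{\top}\gamma),
\]

so a large \(u_i\) simultaneously raises the failure risk and the visiting rate, which is the positive dependence described in the abstract.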

3.
Multivariate interval-censored failure time data arise commonly in many studies of epidemiology and biomedicine. Analysis of this type of data is more challenging than that of right-censored data. We propose a simple multiple imputation strategy to recover the order of occurrences based on the interval-censored event times, using a conditional predictive distribution function derived from a parametric gamma random effects model. By imputing the interval-censored failure times, estimation of the regression and dependence parameters of a gamma frailty proportional hazards model becomes possible with the well-developed EM algorithm. A robust estimator for the covariance matrix is suggested to adjust for possible misspecification of the parametric baseline hazard function. The finite sample properties of the proposed method are investigated via simulation. The performance of the proposed method is highly satisfactory, whereas the computational burden is minimal. The proposed method is also applied to the diabetic retinopathy study (DRS) data for illustration, and the estimates are compared with those based on other existing methods for bivariate grouped survival data. Copyright © 2010 John Wiley & Sons, Ltd.
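A minimal sketch of the gamma frailty proportional hazards structure underlying the imputation, with notation assumed for illustration: for event \(j\) in cluster \(i\), conditional on a cluster-level gamma frailty \(u_i\) with mean 1 and variance \(\theta\),

\[
\lambda_{ij}(t \mid u_i) = u_i\,\lambda_0(t)\exp(x_{ij}^{\top}\beta),
\]

where \(\theta\) captures the within-cluster dependence; the imputation step draws exact event times compatible with each observed censoring interval from the conditional predictive distribution implied by such a model.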

4.
Interval-censored data occur naturally in many fields, and their main feature is that the failure time of interest is not observed exactly but is known to fall within some interval. In this paper, we propose a semiparametric probit model for analyzing case 2 interval-censored data as an alternative to the existing semiparametric models in the literature. Specifically, we propose to approximate the unknown nonparametric nondecreasing function in the probit model with a linear combination of monotone splines, leading to only a finite number of parameters to estimate. Both maximum likelihood and Bayesian estimation methods are proposed. For each method, the regression parameters and the baseline survival function are estimated jointly. The proposed methods make no assumptions about the observation process, are applicable to any interval-censored data, and are easy to implement. The methods are evaluated by simulation studies and are illustrated by two real-life interval-censored data applications. Copyright © 2010 John Wiley & Sons, Ltd.
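A minimal sketch of the monotone-spline device, with notation assumed for illustration: writing \(F(t \mid x)\) for the failure time distribution given covariates \(x\), a semiparametric probit model can be expressed as

\[
\Phi^{-1}\{F(t \mid x)\} = \alpha(t) + x^{\top}\beta,
\qquad
\alpha(t) \approx \sum_{l=1}^{L} \gamma_l\, b_l(t), \quad \gamma_l \ge 0,
\]

where \(\Phi\) is the standard normal distribution function, the \(b_l(\cdot)\) are monotone spline basis functions, and the nonnegativity constraints keep the estimated \(\alpha(t)\) nondecreasing, reducing the unknown function to a finite number of parameters.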

5.
A model is developed for chronic diseases with an indolent phase that is followed by a phase with more active disease resulting in progression and damage. The time scales for the intensity functions for the active phase are more naturally based on the time since the start of the active phase, corresponding to a semi-Markov formulation. This two-phase model enables one to fit a separate regression model for the duration of the indolent phase and intensity-based models for the more active second phase. In cohort studies for which the disease status is only known at a series of clinical assessment times, transition times are interval-censored, which means the time origin for phase II is interval-censored. Weakly parametric models with piecewise constant baseline hazard and rate functions are specified, and an expectation-maximization algorithm is described for model fitting. Simulation studies examining the performance of the proposed model show good performance under maximum likelihood and two-stage estimation. An application to data from the motivating study of disease progression in psoriatic arthritis illustrates the procedure and identifies new human leukocyte antigens associated with the duration of the indolent phase. Copyright © 2017 John Wiley & Sons, Ltd.
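A minimal sketch of the semi-Markov time scale, with notation assumed for illustration: if \(T_1\) denotes the unobserved entry time into the active phase, the phase II intensities depend on the elapsed time \(t - T_1\) rather than on \(t\) itself, with a piecewise constant baseline,

\[
\lambda(t \mid T_1, x) = \lambda_{0k}\exp(x^{\top}\beta)
\quad\text{for } t - T_1 \in (a_{k-1}, a_k],\ k = 1,\dots,K,
\]

so interval censoring of \(T_1\) means the time origin of the second phase is itself only known up to an interval, which is what the EM algorithm must average over.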

6.
Interval-censored data, in which the event time is only known to lie in some time interval, arise commonly in practice, for example, in a medical study in which patients visit clinics or hospitals at prescheduled times and the events of interest occur between visits. Such data are appropriately analyzed using methods that account for this uncertainty in event time measurement. In this paper, we propose a survival tree method for interval-censored data based on the conditional inference framework. Using Monte Carlo simulations, we find that the tree is effective in uncovering underlying tree structure, performs similarly to an interval-censored Cox proportional hazards model fit when the true relationship is linear, and performs at least as well as (and in the presence of right-censoring outperforms) the Cox model when the true relationship is not linear. Further, the interval-censored tree outperforms survival trees based on imputing the event time as an endpoint or the midpoint of the censoring interval. We illustrate the application of the method on tooth emergence data.
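As a point of reference for the comparison above, a minimal statement of the naive single-imputation strategies, with notation assumed for illustration: if an event is only known to lie in the censoring interval \((L, R]\), these strategies replace the interval by a single time,

\[
\hat{T}_{\text{right}} = R,
\qquad
\hat{T}_{\text{mid}} = \frac{L + R}{2},
\]

after which a standard right-censored survival tree is grown on the imputed times; the interval-censored conditional inference tree avoids collapsing the interval to a point.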

7.
Non-parametric tests of independence, as well as accompanying measures of association, are essential tools for the analysis of bivariate data. Such tests and measures have been developed for uncensored and right censored failure time data, but have not been developed for interval censored failure time data. Bivariate interval censored data arise in AIDS studies in which screening tests for early signs of viral and bacterial infection are done at clinic visits. Because of missed clinic visits, the actual times of first positive screening tests are interval censored. To handle such data, we propose an extension of Kendall's coefficient of concordance. We apply it to data from an AIDS study that recorded times of shedding of cytomegalovirus (CMV) and times of colonization of Mycobacterium avium complex (MAC). We examine the performance of our proposed measure through a simulation study.
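For context, the classical quantity being extended, stated here for illustration: for independent copies \((X_1, Y_1)\) and \((X_2, Y_2)\) of a bivariate pair, Kendall's coefficient is

\[
\tau = \Pr\{(X_1 - X_2)(Y_1 - Y_2) > 0\} - \Pr\{(X_1 - X_2)(Y_1 - Y_2) < 0\},
\]

the probability of a concordant pair minus that of a discordant pair; with interval censored times the concordance status of a pair can be ambiguous, which is the difficulty the proposed extension addresses.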

8.
Irreversible multi-state models provide a convenient framework for characterizing disease processes that arise when the states represent the degree of organ or tissue damage incurred by a progressive disease. In many settings, however, individuals are only observed at periodic clinic visits and so the precise times of the transitions are not observed. If the life history and observation processes are not independent, the observation process contains information about the life history process, and more importantly, likelihoods based on the disease process alone are invalid. With interval-censored failure time data, joint models are nonidentifiable and data analysts must rely on sensitivity analyses to assess the effect of the dependent observation times. This paper is concerned, however, with the analysis of data from progressive multi-state disease processes in which individuals are scheduled to be seen at periodic pre-scheduled assessment times. We cast the problem in the framework used for incomplete longitudinal data problems. Maximum likelihood estimation via an EM algorithm is advocated for parameter estimation. Simulation studies demonstrate that the proposed method works well under a variety of situations. Data from a cohort of patients with psoriatic arthritis are analyzed for illustration. Copyright © 2010 John Wiley & Sons, Ltd.

9.
In conventional survival analysis there is an underlying assumption that all study subjects are susceptible to the event. In general, this assumption does not adequately hold when investigating the time to an event other than death. Owing to genetic and/or environmental etiology, study subjects may not be susceptible to the disease. Analyzing nonsusceptibility has become an important topic in biomedical, epidemiological, and sociological research, with recent statistical studies proposing several mixture models for right-censored data in regression analysis. In longitudinal studies, we often encounter left, interval, and right-censored data because of incomplete observations of the time endpoint, as well as possibly left-truncated data arising from the dissimilar entry ages of recruited healthy subjects. To analyze these kinds of incomplete data while accounting for nonsusceptibility and possible crossing hazards in the framework of mixture regression models, we utilize a logistic regression model to specify the probability of susceptibility, and a generalized gamma distribution, or a log-logistic distribution, in the accelerated failure time location-scale regression model to formulate the time to the event. Relative times of the conditional event time distribution for susceptible subjects are extended in the accelerated failure time location-scale submodel. We also construct graphical goodness-of-fit procedures on the basis of the Turnbull–Frydman estimator and newly proposed residuals. Simulation studies were conducted to demonstrate the validity of the proposed estimation procedure. The mixture regression models are illustrated with alcohol abuse data from the Taiwan Aboriginal Study Project and hypertriglyceridemia data from the Cardiovascular Disease Risk Factor Two-township Study in Taiwan. Copyright © 2013 John Wiley & Sons, Ltd.
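A minimal sketch of the mixture structure, with notation assumed for illustration: writing \(\pi(z)\) for the probability that a subject with covariates \(z\) is susceptible and \(S_u(t \mid x)\) for the survival function of the event time among susceptible subjects,

\[
S(t \mid x, z) = 1 - \pi(z) + \pi(z)\,S_u(t \mid x),
\qquad
\operatorname{logit}\pi(z) = z^{\top}\alpha,
\qquad
\log T = x^{\top}\beta + \sigma\,\varepsilon,
\]

where the error \(\varepsilon\) follows, for example, the form implied by a generalized gamma or log-logistic distribution, giving the accelerated failure time location-scale submodel for the susceptible fraction.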

10.
In contrast to the usual ROC analysis with a contemporaneous reference standard, the time-dependent setting introduces the possibility that the reference standard refers to an event at a future time and may not be known for every patient due to censoring. The goal of this research is to determine the sample size required for a study design to address the question of the accuracy of a diagnostic test using the area under the curve in time-dependent ROC analysis. We adapt a previously published estimator of the time-dependent area under the ROC curve, which is a function of the expected conditional survival functions. This estimator accommodates censored data. The estimation of the required sample size is based on approximations of the expected conditional survival functions and their variances, derived under parametric assumptions of an exponential failure time and an exponential censoring time. We also consider different patient enrollment strategies. The proposed method can provide an adequate sample size to ensure that the test's accuracy is estimated to a prespecified precision. We present results of a simulation study to assess the accuracy of the method and its robustness to departures from the parametric assumptions. We apply the proposed method to the design of a study of positron emission tomography as a predictor of disease-free survival in women undergoing therapy for cervical cancer. Copyright © 2013 John Wiley & Sons, Ltd.

11.
Nonparametric comparison of survival functions is one of the most commonly required tasks in failure time studies such as clinical trials, and many procedures have been developed for it under various situations. This paper considers a situation that often occurs in practice but has not been discussed much: comparison based on interval-censored data in the presence of unequal censoring. That is, one observes only interval-censored data, and the distributions of, or the mechanisms behind, the censoring variables may depend on treatments and thus differ between the subjects in different treatment groups. For this problem, a test procedure is developed that takes into account the difference between the distributions of the censoring variables, and the asymptotic normality of the test statistics is established. To assess the performance of the procedure, a simulation study is conducted; it suggests that the procedure works well in practical situations. An illustrative example is provided. Copyright © 2017 John Wiley & Sons, Ltd.

12.
For censored survival outcomes, it can be of great interest to evaluate the predictive power of individual markers or their functions. Compared with alternative evaluation approaches, approaches based on the time-dependent receiver operating characteristic (ROC) curve rely on much weaker assumptions, can be more robust, and hence are preferred. In this article, we examine the evaluation of markers' predictive power using the time-dependent ROC curve and a concordance measure that can be viewed as a weighted area under the profile of the time-dependent area under the ROC curve (AUC). This study significantly advances existing time-dependent ROC studies by developing nonparametric estimators of the summary indexes and, more importantly, rigorously establishing their asymptotic properties. It reinforces the statistical foundation of time-dependent ROC-based evaluation approaches for censored survival outcomes. Numerical studies, including simulations and an application to an HIV clinical trial, demonstrate the satisfactory finite-sample performance of the proposed approaches. Copyright © 2012 John Wiley & Sons, Ltd.
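A minimal sketch of the two summary indexes, with notation assumed for illustration: for a marker \(M\) and survival time \(T\), the cumulative/dynamic time-dependent AUC and a weighted concordance summary over a follow-up window \([0, \tau]\) can be written as

\[
\mathrm{AUC}(t) = \Pr\left(M_i > M_j \mid T_i \le t,\ T_j > t\right),
\qquad
C_{\tau} = \int_0^{\tau} \mathrm{AUC}(t)\, w(t)\, dt,
\]

where \((M_i, T_i)\) and \((M_j, T_j)\) are independent subjects and \(w(t)\) is a weight function integrating to one over \([0, \tau]\), often derived from the failure time distribution.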

13.
Cancer clinical trials are routinely designed to assess the effect of treatment on disease progression and death, often in terms of a composite endpoint called progression-free survival. When progression status is known only at periodic assessment times, the progression time is interval censored, and complications arise in the analysis of progression-free survival. Despite the advances in methods for dealing with interval-censored data, naive methods such as right-endpoint imputation are widely adopted in this setting. We examine the asymptotic and empirical properties of estimators of the marginal progression-free survival functions and associated treatment effects under this scheme. Specifically, we explore the determinants of the asymptotic bias and point out that there is typically a loss in power of tests for treatment effects. Copyright © 2015 John Wiley & Sons, Ltd.

14.
Two-period two-treatment (2×2) crossover designs are commonly used in clinical trials. For continuous endpoints, it has been shown that baseline (pretreatment) measurements collected before the start of each treatment period can be useful in improving the power of the analysis. Methods to achieve a corresponding gain for censored time-to-event endpoints have not been adequately studied. We propose a method in which censored values are treated as missing data and multiply imputed using prespecified parametric event time models. The event times in each imputed data set are then log-transformed and analyzed using a linear model suitable for a 2×2 crossover design with continuous endpoints, with the difference in period-specific baselines included as a covariate. Results obtained from the imputed data sets are synthesized for point and confidence interval estimation of the treatment ratio of geometric mean event times using model averaging in conjunction with Rubin's combination rule. We use simulations to illustrate the favorable operating characteristics of our method relative to two other methods for crossover trials with censored time-to-event data, i.e., a hierarchical rank test that ignores the baselines and a stratified Cox model that uses each study subject as a stratum and includes period-specific baselines as a covariate. Application to a real data example is provided.
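For reference, Rubin's combination rule used in the synthesis step, stated in its standard form (the model-averaging layer is not shown): with \(M\) imputed data sets yielding estimates \(\hat{Q}_m\) and within-imputation variances \(U_m\),

\[
\bar{Q} = \frac{1}{M}\sum_{m=1}^{M}\hat{Q}_m,
\qquad
B = \frac{1}{M-1}\sum_{m=1}^{M}\bigl(\hat{Q}_m - \bar{Q}\bigr)^2,
\qquad
T = \frac{1}{M}\sum_{m=1}^{M} U_m + \Bigl(1 + \frac{1}{M}\Bigr)B,
\]

so the total variance \(T\) adds the between-imputation variance, inflated by \(1 + 1/M\), to the average within-imputation variance, reflecting the extra uncertainty introduced by imputing the censored event times.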

15.
Outcome variables that are semicontinuous with clumping at zero are commonly seen in biomedical research. In addition, the outcome measurement is sometimes subject to interval censoring and a lower detection limit (LDL). This gives rise to interval-censored observations with clumping below the LDL. The level of antibody against influenza virus measured by the hemagglutination inhibition assay is an example. The interval censoring is due to the assay's technical procedure. The clumping below the LDL is likely a result of the lack of prior exposure in some individuals, such that they either have a zero level of antibodies or do not have a detectable level of antibodies. Given a pair of such measurements from the same subject at two time points, a binary 'fold-increase' endpoint can be defined according to the ratio of these two measurements, as is often done in vaccine clinical trials. The intervention effect or vaccine immunogenicity can be assessed by comparing the binary endpoint between groups of subjects given different vaccines or placebos. We introduce a two-part random effects model for modeling the paired interval-censored data with clumping below the LDL. Based on the estimated model parameters, we propose to use Monte Carlo approximation for estimation of the 'fold-increase' endpoint and the intervention effect. Bootstrapping is used for variance estimation. The performance of the proposed method is demonstrated by simulation. We analyze antibody data from an influenza vaccine trial for illustration. Copyright © 2014 John Wiley & Sons, Ltd.
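A minimal sketch of the Monte Carlo step, with notation assumed for illustration (the four-fold threshold is a common convention in vaccine trials and is used here only as an example; handling of values in the below-LDL clump follows whatever convention defines the ratio): with paired values \((Y_{1}^{(b)}, Y_{2}^{(b)})\), \(b = 1, \dots, B\), simulated from the fitted two-part random effects model, the probability of the binary endpoint can be approximated by

\[
\hat{p} = \frac{1}{B}\sum_{b=1}^{B} I\!\left\{ Y_{2}^{(b)} \big/ Y_{1}^{(b)} \ge 4 \right\},
\]

and the intervention effect is then a contrast of such probabilities between treatment groups, with bootstrap replication of the whole procedure supplying the variance.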

16.
Multi-state models are useful for modelling disease progression where the state space of the process is used to represent the discrete disease status of subjects. Often, the disease process is only observed at clinical visits, and the schedule of these visits can depend on the disease status of patients. In such situations, the frequency and timing of observations may depend on transition times that are themselves unobserved in an interval-censored setting. There is a potential for bias if a disease process with informative observation times is analysed as though the observation scheme were non-informative with pre-specified examination times. In this paper, we develop a joint model for the disease and observation processes to ensure valid inference, because the follow-up process may itself contain information about the disease process. The transitions for each subject are modelled using a Markov process, where bivariate subject-specific random effects are used to link the disease and observation models. Inference is based on a Bayesian framework, and we apply our joint model to the analysis of a large study examining functional decline trajectories of palliative care patients. Copyright © 2015 John Wiley & Sons, Ltd.

17.
This paper analyses a case in censored failure time data problems where some observations are potentially censored. The traditional models for failure time data implicitly assume that the censoring status for each observation is deterministic. Therefore, they cannot be applied directly to the potentially censored data. We propose an estimator that uses resampling techniques to approximate censoring probabilities for individual observations. A Monte Carlo simulation study shows that the proposed estimator properly corrects biases that would otherwise be present had it been assumed that either all potentially censored observations are censored or that no censoring has occurred. Finally, we apply the estimator to a health insurance claims database.

18.
Multivariate current-status failure time data consist of several possibly related event times of interest, in which the status of each event is determined at a single examination time. If the examination time is intrinsically related to the event times, this is referred to as dependent censoring and needs to be taken into account. Such data often occur in clinical studies and animal carcinogenicity experiments. To accommodate possible dependent censoring, this paper proposes a joint frailty model for the event times and the dependent censoring time. We develop a likelihood approach using Gaussian quadrature techniques for obtaining maximum likelihood estimates. We conduct extensive simulation studies to investigate the finite-sample properties of the proposed method. We illustrate the proposed method with an analysis of patients with ankylosing spondylitis, where the examination time may be dependent on the event times of interest. Copyright © 2013 John Wiley & Sons, Ltd.
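A minimal sketch of the quadrature step, stated in its generic form (the paper's exact frailty specification is not reproduced here): when the shared frailty \(b\) is normally distributed with variance \(\sigma^2\), the marginal likelihood contribution of a subject requires integrating the conditional likelihood \(L_i(b)\) over the frailty, which Gauss–Hermite quadrature approximates as

\[
\int_{-\infty}^{\infty} L_i(b)\,\phi(b; 0, \sigma^2)\,db
\;\approx\; \frac{1}{\sqrt{\pi}} \sum_{k=1}^{K} w_k\, L_i\!\left(\sqrt{2}\,\sigma\, x_k\right),
\]

where \((x_k, w_k)\) are the nodes and weights of the \(K\)-point Gauss–Hermite rule and \(\phi(\cdot\,; 0, \sigma^2)\) is the normal density.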

19.
Continuous-time multistate survival models can be used to describe health-related processes over time. In the presence of interval-censored times for transitions between the living states, the likelihood is constructed using transition probabilities. Models can be specified using parametric or semiparametric shapes for the hazards. Semiparametric hazards can be fitted using P-splines and penalised maximum likelihood estimation. This paper presents a method to estimate flexible multistate models that allow for parametric and semiparametric hazard specifications. The estimation is based on a scoring algorithm. The method is illustrated with data from the English Longitudinal Study of Ageing.
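A minimal sketch of the penalised estimation, with notation assumed for illustration: with the log of each transition-specific baseline hazard expanded in a B-spline basis with coefficient vector \(\theta_q\), the P-spline approach maximises a penalised log-likelihood

\[
\ell_p(\beta, \theta) = \ell(\beta, \theta) - \tfrac{1}{2}\sum_{q} \kappa_q\, \theta_q^{\top} D_q^{\top} D_q\, \theta_q,
\]

where \(D_q\) is a difference matrix acting on the spline coefficients for transition \(q\) and \(\kappa_q\) is a smoothing parameter; the scoring algorithm then updates the parameters using the derivatives of \(\ell_p\).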

20.
Sequentially administered, laboratory-based diagnostic tests or self-reported questionnaires are often used to determine the occurrence of a silent event. In this paper, we consider issues relevant in the design of studies aimed at estimating the association of one or more covariates with a non-recurring, time-to-event outcome that is observed using a repeatedly administered, error-prone diagnostic procedure. The problem is motivated by the Women's Health Initiative, in which diabetes incidence among the approximately 160,000 women is obtained from annually collected self-reported data. For settings of imperfect diagnostic tests or self-reports with known sensitivity and specificity, we evaluate the effects of various factors on the resulting power and sample size calculations and compare the relative efficiency of different study designs. The methods illustrated in this paper are readily implemented using our freely available R software package icensmis, which is available on the Comprehensive R Archive Network website. An important special case is that when diagnostic procedures are perfect, they result in interval-censored, time-to-event outcomes. The proposed methods are applicable for the design of studies in which a time-to-event outcome is interval censored. Copyright © 2016 John Wiley & Sons, Ltd.
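A minimal sketch of how sensitivity and specificity enter such calculations, a single-visit illustration under assumed notation rather than the package's full likelihood: for a test administered at visit time \(v\) with sensitivity \(\mathrm{Se}\) and specificity \(\mathrm{Sp}\), and a non-recurring event time \(T\), the probability of a positive result is

\[
\Pr(\text{positive at } v) = \mathrm{Se}\,\Pr(T \le v) + (1 - \mathrm{Sp})\,\Pr(T > v),
\]

so an imperfect test mixes true and false positives; the likelihood for a sequence of visits combines such terms over the possible intervals containing \(T\), and setting \(\mathrm{Se} = \mathrm{Sp} = 1\) recovers ordinary interval censoring.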
