Similar Articles
20 similar articles found (search time: 641 ms).
1.
Therapeutic advances in cancer mean that it is now impractical to perform phase III randomized trials evaluating experimental treatments on the basis of overall survival. As a result, the composite endpoint of progression‐free survival has been routinely adopted in recent years, as it is viewed as enabling a more timely and cost‐effective approach to assessing the clinical benefit of novel interventions. This article considers the design of cancer trials directed at the evaluation of treatment effects on progression‐free survival. In particular, we derive sample size criteria based on an illness‐death model that considers cancer progression and death jointly while accounting for the fact that progression is assessed only intermittently. An alternative approach to design is also considered, in which the sample size is derived from a misspecified Cox model that uses the documented time of progression as the progression time rather than dealing with the interval censoring. Simulation studies show the validity of the proposed methods.
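The intermittent-assessment mechanism this abstract refers to is easy to sketch. The following is a minimal, hypothetical illustration (not the paper's illness‐death machinery) of how scheduled clinic visits turn an exact progression time into an interval‐censored observation; the visit schedule and times are made up for the example.

```python
def observed_interval(true_time, visit_times):
    """Map a true progression time to the censoring interval implied by
    intermittent assessment: (last progression-free visit, first visit at
    which progression is detected). Returns (L, R), with R = float('inf')
    if progression has not occurred by the last visit (right-censoring)."""
    left = 0.0
    for v in sorted(visit_times):
        if true_time <= v:
            return (left, v)      # progression first detected at visit v
        left = v
    return (left, float('inf'))   # still progression-free at last visit

# Visits every 3 months; a true progression at month 7.2 is only known
# to lie in the interval (6, 9]:
print(observed_interval(7.2, [3, 6, 9, 12]))  # -> (6, 9)
```

Using the documented detection time (here, month 9) as if it were the exact progression time is precisely the misspecification the abstract's alternative design is based on.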

2.
Clinical trials often assess efficacy by comparing treatments on the basis of two or more event‐time outcomes. In the case of cancer clinical trials, progression‐free survival (PFS), which is the minimum of the time from randomization to progression or to death, summarizes the comparison of treatments on the hazards for disease progression and mortality. However, the analysis of PFS does not utilize all the information we have on patients in the trial. First, if both progression and death times are recorded, then information on death time is ignored in the PFS analysis. Second, disease progression is monitored at regular clinic visits, and progression time is recorded as the first visit at which evidence of progression is detected. However, many patients miss or have irregular visits (resulting in interval‐censored data) and sometimes die of the cancer before progression is recorded. In this case, the previous progression‐free time could provide additional information on the treatment efficacy. The aim of this paper is to propose a method for comparing treatments that could more fully utilize the data on progression and death. We develop a test for treatment effect based on the joint distribution of progression and survival. The issue of interval censoring is handled using the very simple and intuitive approach of the Conditional Expected Score Test (CEST). We focus on the application of these methods in cancer research. Copyright © 2014 John Wiley & Sons, Ltd.
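The composite PFS endpoint defined at the start of this abstract (not the CEST procedure itself) reduces to taking the minimum of the two event times, with censoring at the end of follow-up; a minimal sketch, with the patient data invented for illustration:

```python
def pfs_endpoint(prog_time, death_time, censor_time):
    """Progression-free survival: time to the first of progression or death.

    prog_time / death_time may be None if the corresponding event was not
    observed. Returns (time, event_indicator) with indicator 1 for an
    observed event and 0 for censoring at last follow-up."""
    observed = [t for t in (prog_time, death_time)
                if t is not None and t <= censor_time]
    if observed:
        return min(observed), 1
    return censor_time, 0

# A patient who progresses at month 8 and dies at month 14, followed 24 months:
print(pfs_endpoint(8.0, 14.0, 24.0))   # -> (8.0, 1)
# A patient with neither event by end of follow-up:
print(pfs_endpoint(None, None, 24.0))  # -> (24.0, 0)
```

The first example makes the abstract's point concrete: once progression at month 8 determines PFS, the death time at month 14 contributes nothing further to a PFS-only analysis.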

3.
Interval‐censored data, in which the event time is only known to lie in some time interval, arise commonly in practice, for example, in a medical study in which patients visit clinics or hospitals at prescheduled times and the events of interest occur between visits. Such data are appropriately analyzed using methods that account for this uncertainty in event time measurement. In this paper, we propose a survival tree method for interval‐censored data based on the conditional inference framework. Using Monte Carlo simulations, we find that the tree is effective in uncovering underlying tree structure, performs similarly to an interval‐censored Cox proportional hazards model fit when the true relationship is linear, and performs at least as well as (and in the presence of right‐censoring outperforms) the Cox model when the true relationship is not linear. Further, the interval‐censored tree outperforms survival trees based on imputing the event time as an endpoint or the midpoint of the censoring interval. We illustrate the application of the method on tooth emergence data.
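The two naive imputations the interval‐censored tree is compared against are trivial to state; a hedged sketch, assuming the common (L, R] interval convention (the convention and fallback for right-censored cases are assumptions for illustration, not taken from the paper):

```python
def impute_event_time(L, R, method="midpoint"):
    """Single imputation for an event time known only to lie in (L, R]:
    either the interval midpoint or its right endpoint. Right-censored
    observations (R = inf) fall back to the left endpoint."""
    if R == float("inf"):
        return L
    return (L + R) / 2 if method == "midpoint" else R

print(impute_event_time(6, 9))                  # -> 7.5
print(impute_event_time(6, 9, method="right"))  # -> 9
```

Feeding such imputed times to an ordinary survival tree discards the within-interval uncertainty, which is what the conditional-inference tree above is designed to retain.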

4.
Varying‐coefficient models have claimed an increasing portion of statistical research and are now applied to censored data analysis in medical studies. We incorporate such flexible semiparametric regression tools for interval‐censored data with a cured proportion. We adopt a two‐part model to describe the overall survival experience for such complicated data. To fit the unknown functional components in the model, we take a local polynomial approach with bandwidth chosen by cross‐validation. We establish the consistency and asymptotic distribution of the estimators and propose to use the bootstrap for inference. We construct a BIC‐type model selection method to recommend an appropriate specification of the parametric and nonparametric components in the model. We conducted extensive simulations to assess the performance of our methods. An application to decompression sickness data illustrates our methods. Copyright © 2013 John Wiley & Sons, Ltd.

5.
Multivariate interval‐censored failure time data arise commonly in many studies in epidemiology and biomedicine. Analysis of this type of data is more challenging than that of right‐censored data. We propose a simple multiple imputation strategy to recover the order of occurrences based on the interval‐censored event times using a conditional predictive distribution function derived from a parametric gamma random effects model. By imputing the interval‐censored failure times, the estimation of the regression and dependence parameters in the context of a gamma frailty proportional hazards model using the well‐developed EM algorithm is made possible. A robust estimator for the covariance matrix is suggested to adjust for the possible misspecification of the parametric baseline hazard function. The finite sample properties of the proposed method are investigated via simulation. The performance of the proposed method is highly satisfactory, whereas the computation burden is minimal. The proposed method is also applied to the diabetic retinopathy study (DRS) data for illustration purposes, and the estimates are compared with those based on other existing methods for bivariate grouped survival data. Copyright © 2010 John Wiley & Sons, Ltd.

6.
Interval‐censored data occur naturally in many fields and the main feature is that the failure time of interest is not observed exactly, but is known to fall within some interval. In this paper, we propose a semiparametric probit model for analyzing case 2 interval‐censored data as an alternative to the existing semiparametric models in the literature. Specifically, we propose to approximate the unknown nonparametric nondecreasing function in the probit model with a linear combination of monotone splines, leading to only a finite number of parameters to estimate. Both maximum likelihood and Bayesian estimation methods are proposed. For each method, regression parameters and the baseline survival function are estimated jointly. The proposed methods make no assumptions about the observation process, are applicable to any interval‐censored data, and are easy to implement. The methods are evaluated by simulation studies and are illustrated by two real‐life interval‐censored data applications. Copyright © 2010 John Wiley & Sons, Ltd.

7.
In many clinical settings, improving patient survival is of interest but a practical surrogate, such as time to disease progression, is instead used as a clinical trial's primary endpoint. A time‐to‐first endpoint (e.g., death or disease progression) is commonly analyzed but may not be adequate to summarize patient outcomes if a subsequent event contains important additional information. We consider a surrogate outcome very generally as one correlated with the true endpoint of interest. Settings of interest include those where the surrogate indicates a beneficial outcome, so that the usual time‐to‐first endpoint of death or surrogate event is nonsensical. We present a new two‐sample test for bivariate, interval‐censored time‐to‐event data, where one endpoint is a surrogate for the second, less frequently observed endpoint of true interest. This test examines whether patient groups have equal clinical severity. If the true endpoint rarely occurs, the proposed test acts like a weighted logrank test on the surrogate; if it occurs for most individuals, then our test acts like a weighted logrank test on the true endpoint. If the surrogate is a useful statistical surrogate, our test can have better power than tests based on the surrogate that naively handle the true endpoint. In settings where the surrogate is not valid (treatment affects the surrogate but not the true endpoint), our test incorporates the information regarding the lack of treatment effect from the observed true endpoints and hence is expected to have a dampened treatment effect compared with tests based on the surrogate alone. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.

8.
Outcome variables that are semicontinuous with clumping at zero are commonly seen in biomedical research. In addition, the outcome measurement is sometimes subject to interval censoring and a lower detection limit (LDL). This gives rise to interval‐censored observations with clumping below the LDL. Level of antibody against influenza virus measured by the hemagglutination inhibition assay is an example. The interval censoring is due to the assay's technical procedure. The clumping below LDL is likely a result of the lack of prior exposure in some individuals such that they either have zero level of antibodies or do not have detectable level of antibodies. Given a pair of such measurements from the same subject at two time points, a binary ‘fold‐increase’ endpoint can be defined according to the ratio of these two measurements, as it often is in vaccine clinical trials. The intervention effect or vaccine immunogenicity can be assessed by comparing the binary endpoint between groups of subjects given different vaccines or placebos. We introduce a two‐part random effects model for modeling the paired interval‐censored data with clumping below the LDL. Based on the estimated model parameters, we propose to use Monte Carlo approximation for estimation of the ‘fold‐increase’ endpoint and the intervention effect. Bootstrapping is used for variance estimation. The performance of the proposed method is demonstrated by simulation. We analyze antibody data from an influenza vaccine trial for illustration. Copyright © 2014 John Wiley & Sons, Ltd.
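A naive version of the 'fold‐increase' endpoint can be written down directly from a pair of titers. This sketch is purely illustrative: the LDL/2 substitution for below-limit readings and the 4‐fold threshold are common conventions assumed here, not the paper's two‐part random effects model.

```python
def fold_increase(pre, post, ldl=10.0, threshold=4.0):
    """Binary 'fold-increase' endpoint from paired titers at two time points.
    Readings below the lower detection limit are replaced by ldl/2 (an
    assumed convention); the endpoint is 1 if the post/pre ratio reaches
    the assumed 4-fold threshold."""
    pre_adj = pre if pre >= ldl else ldl / 2
    post_adj = post if post >= ldl else ldl / 2
    return int(post_adj / pre_adj >= threshold)

print(fold_increase(5, 40))   # below-LDL baseline: 40 / 5.0 = 8-fold rise -> 1
print(fold_increase(20, 40))  # 2-fold rise only -> 0
```

The sensitivity of the first result to the arbitrary LDL/2 substitution is exactly why the abstract models the clumping below the LDL explicitly rather than plugging in a constant.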

9.
We derive a nonparametric maximum likelihood estimate of the overall survival distribution in an illness–death model from interval censored observations with unknown status of the nonfatal event. This expanded model is applied to the re‐analysis of data from a randomized trial where infants, born to women infected with HIV‐1 who were randomly assigned to breastfeeding or counseling for formula feeding, were followed for 24 months for HIV‐1 positivity, HIV‐1‐free survival, and overall survival. HIV‐1 positivity, assessed by postpartum venous blood tests, is the interval censored nonfatal event, and HIV‐1 positivity status is unknown for a subset of infants due to periodic assessment. The analysis demonstrates that estimation of the overall and the pre‐ and post‐nonfatal event survival distributions with the proposed methods provides novel insights into how overall survival is influenced by the occurrence of the nonfatal event. More generally, it suggests the usefulness of this expanded illness–death model when evaluating composite endpoints as potential surrogates for overall survival in a given disease setting. Copyright © 2010 John Wiley & Sons, Ltd.

10.
Transform methods have proved effective for networks describing a progression of events. In semi‐Markov networks, we calculated the transform of time to a terminating event from corresponding transforms of intermediate steps. Saddlepoint inversion then provided survival and hazard functions, which integrated, and fully utilised, the network data. However, the presence of censored data introduces significant difficulties for these methods. Many participants in controlled trials commonly remain event‐free at study completion, a consequence of the limited period of follow‐up specified in the trial design. Transforms are not estimable using nonparametric methods in states with survival truncated by end‐of‐study censoring. We propose the use of parametric models specifying residual survival to next event. As a simple approach to extrapolation with competing alternative states, we imposed a proportional incidence (constant relative hazard) assumption beyond the range of study data. No proportional hazards assumptions are necessary for inferences concerning time to endpoint; indeed, estimation of survival and hazard functions can proceed in a single study arm. We demonstrate feasibility and efficiency of transform inversion in a large randomised controlled trial of cholesterol‐lowering therapy, the Long‐Term Intervention with Pravastatin in Ischaemic Disease study. Transform inversion integrates information available in components of multistate models: estimates of transition probabilities and empirical survival distributions. As a by‐product, it provides some ability to forecast survival and hazard functions forward, beyond the time horizon of available follow‐up. Functionals of survival and hazard functions provide inference, which proves sharper than that of log‐rank and related methods for survival comparisons ignoring intermediate events. Copyright © 2013 John Wiley & Sons, Ltd.

11.
Nonparametric comparison of survival functions is one of the most commonly required tasks in failure time studies such as clinical trials, and for this, many procedures have been developed for various situations. This paper considers a situation that often occurs in practice but has not been discussed much: the comparison based on interval‐censored data in the presence of unequal censoring. That is, one observes only interval‐censored data, and the distributions of, or the mechanisms behind, the censoring variables may depend on treatments and thus be different for the subjects in different treatment groups. For the problem, a test procedure is developed that takes into account the difference between the distributions of the censoring variables, and the asymptotic normality of the test statistics is given. For the assessment of the performance of the procedure, a simulation study is conducted and suggests that it works well for practical situations. An illustrative example is provided. Copyright © 2017 John Wiley & Sons, Ltd.

12.
13.
Two‐period two‐treatment (2×2) crossover designs are commonly used in clinical trials. For continuous endpoints, it has been shown that baseline (pretreatment) measurements collected before the start of each treatment period can be useful in improving the power of the analysis. Methods to achieve a corresponding gain for censored time‐to‐event endpoints have not been adequately studied. We propose a method in which censored values are treated as missing data and multiply imputed using prespecified parametric event time models. The event times in each imputed data set are then log‐transformed and analyzed using a linear model suitable for a 2×2 crossover design with continuous endpoints, with the difference in period‐specific baselines included as a covariate. Results obtained from the imputed data sets are synthesized for point and confidence interval estimation of the treatment ratio of geometric mean event times using model averaging in conjunction with Rubin's combination rule. We use simulations to illustrate the favorable operating characteristics of our method relative to two other methods for crossover trials with censored time‐to‐event data, ie, a hierarchical rank test that ignores the baselines and a stratified Cox model that uses each study subject as a stratum and includes the period‐specific baselines as covariates. Application to a real data example is provided.

14.
In oncology clinical trials, progression‐free survival (PFS), generally defined as the time from randomization until disease progression or death, has been a key endpoint to support licensing approval. The U.S. Food and Drug Administration guidance for industry (May 2007) concerning PFS as a primary or co‐primary clinical trial endpoint recommends having tumor assessments verified by an independent review committee blinded to study treatments, especially in open‐label studies. Agreement between the treatment effect estimates from the investigators' and the independent review committee's evaluations is considered reassuring evidence against reader‐evaluation bias. The agreement between these evaluations may vary between subjects with short and long PFS, yet no existing statistical quantities completely account for this temporal pattern of agreement. Therefore, in this paper, we propose a new method to assess temporal agreement between two time‐to‐event endpoints, where the two event times are assumed to have a positive probability of being identical. This method measures agreement in terms of the two event times being identical at a given time or both being greater than a given time. Overall scores of agreement over a period of time are also proposed. We propose maximum likelihood estimation to infer the proposed agreement measures using empirical data, accounting for different censoring mechanisms, including reader's censoring (an event from one reader dependently censored by an event from the other reader). The proposed method is demonstrated to perform well in small samples via extensive simulation studies and is illustrated through a head and neck cancer trial. Copyright © 2014 John Wiley & Sons, Ltd.

15.
We develop a multivariate cure survival model to estimate lifetime patterns of colorectal cancer screening. Screening data cover long periods of time, with sparse observations for each person. Some events may occur before the study begins or after the study ends, so the data are both left‐censored and right‐censored, and some individuals are never screened (the ‘cured’ population). We propose a multivariate parametric cure model that can be used with left‐censored and right‐censored data. Our model allows for the estimation of the time to screening as well as the average number of times individuals will be screened. We calculate likelihood functions based on the observations for each subject using a distribution that accounts for within‐subject correlation and estimate parameters using Markov chain Monte Carlo methods. We apply our methods to the estimation of lifetime colorectal cancer screening behavior in the SEER‐Medicare data set. Copyright © 2016 John Wiley & Sons, Ltd.

16.
In conventional survival analysis there is an underlying assumption that all study subjects are susceptible to the event. In general, this assumption does not adequately hold when investigating the time to an event other than death. Owing to genetic and/or environmental etiology, study subjects may not be susceptible to the disease. Analyzing nonsusceptibility has become an important topic in biomedical, epidemiological, and sociological research, with recent statistical studies proposing several mixture models for right‐censored data in regression analysis. In longitudinal studies, we often encounter left, interval, and right‐censored data because of incomplete observations of the time endpoint, as well as possibly left‐truncated data arising from the dissimilar entry ages of recruited healthy subjects. To analyze these kinds of incomplete data while accounting for nonsusceptibility and possible crossing hazards in the framework of mixture regression models, we utilize a logistic regression model to specify the probability of susceptibility, and a generalized gamma distribution, or a log‐logistic distribution, in the accelerated failure time location‐scale regression model to formulate the time to the event. Relative times of the conditional event time distribution for susceptible subjects are extended in the accelerated failure time location‐scale submodel. We also construct graphical goodness‐of‐fit procedures on the basis of the Turnbull–Frydman estimator and newly proposed residuals. Simulation studies were conducted to demonstrate the validity of the proposed estimation procedure. The mixture regression models are illustrated with alcohol abuse data from the Taiwan Aboriginal Study Project and hypertriglyceridemia data from the Cardiovascular Disease Risk Factor Two‐township Study in Taiwan. Copyright © 2013 John Wiley & Sons, Ltd.

17.
A model is developed for chronic diseases with an indolent phase that is followed by a phase with more active disease resulting in progression and damage. The time scales for the intensity functions for the active phase are more naturally based on the time since the start of the active phase, corresponding to a semi‐Markov formulation. This two‐phase model enables one to fit a separate regression model for the duration of the indolent phase and intensity‐based models for the more active second phase. In cohort studies for which the disease status is only known at a series of clinical assessment times, transition times are interval‐censored, which means the time origin for phase II is interval‐censored. Weakly parametric models with piecewise constant baseline hazard and rate functions are specified, and an expectation‐maximization algorithm is described for model fitting. Simulation studies examining the performance of the proposed model show good performance under maximum likelihood and two‐stage estimation. An application to data from the motivating study of disease progression in psoriatic arthritis illustrates the procedure and identifies new human leukocyte antigens associated with the duration of the indolent phase. Copyright © 2017 John Wiley & Sons, Ltd.

18.
19.
In clinical trials with a survival endpoint, it is common to observe an overlap between the two Kaplan–Meier curves of the treatment and control groups during the early stage of the trial, indicating a potential delayed treatment effect. Formulas have been derived for the asymptotic power of the log‐rank test in the presence of a delayed treatment effect and its accompanying sample size calculation. In this paper, we first reformulate the alternative hypothesis with the delayed treatment effect in a rescaled time domain, which can yield a simplified sample size formula for the log‐rank test in this context. We further propose an intersection‐union test to examine the efficacy of treatment with delayed effect and show it to be more powerful than the log‐rank test. Simulation studies are conducted to demonstrate the proposed methods. Copyright © 2016 John Wiley & Sons, Ltd.
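The early overlap of Kaplan–Meier curves described above is easy to reproduce under a piecewise‐exponential model in which the treatment hazard equals the control hazard until some delay and is reduced thereafter. This is a generic illustration of a delayed treatment effect, with made‐up rates, not the paper's rescaled‐time formulation.

```python
import math

def surv_delayed(t, lam0, hr, delay):
    """Survival function for an arm whose hazard equals the control hazard
    lam0 up to `delay` and lam0 * hr afterwards (piecewise exponential).
    With hr < 1 the curve coincides with the control curve until `delay`
    and only then separates from it."""
    if t <= delay:
        return math.exp(-lam0 * t)
    return math.exp(-lam0 * delay - lam0 * hr * (t - delay))

# Control (hr = 1) vs. a treatment with hr = 0.6 after a 6-month delay;
# the survival curves are identical through month 6, then diverge.
for t in (3, 6, 12, 24):
    print(t,
          round(surv_delayed(t, 0.05, 1.0, 6), 3),
          round(surv_delayed(t, 0.05, 0.6, 6), 3))
```

Because the arms contribute no between-group difference before the delay, a standard log‐rank test dilutes its power over that interval, which is the motivation for the intersection‐union test proposed in the abstract.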

20.
Sequentially administered, laboratory‐based diagnostic tests or self‐reported questionnaires are often used to determine the occurrence of a silent event. In this paper, we consider issues relevant in design of studies aimed at estimating the association of one or more covariates with a non‐recurring, time‐to‐event outcome that is observed using a repeatedly administered, error‐prone diagnostic procedure. The problem is motivated by the Women's Health Initiative, in which diabetes incidence among the approximately 160,000 women is obtained from annually collected self‐reported data. For settings of imperfect diagnostic tests or self‐reports with known sensitivity and specificity, we evaluate the effects of various factors on resulting power and sample size calculations and compare the relative efficiency of different study designs. The methods illustrated in this paper are readily implemented using our freely available R software package icensmis, which is available at the Comprehensive R Archive Network website. An important special case is that when diagnostic procedures are perfect, they result in interval‐censored, time‐to‐event outcomes. The proposed methods are applicable for the design of studies in which a time‐to‐event outcome is interval censored. Copyright © 2016 John Wiley & Sons, Ltd.
