Similar Articles
20 similar articles found.
1.
Our aim is to develop a rich and coherent framework for modeling correlated time‐to‐event data, including (1) survival regression models with different links and (2) flexible modeling for time‐dependent and nonlinear effects with rich postestimation. We extend the class of generalized survival models, which expresses a transformed survival function in terms of a linear predictor, by incorporating a shared frailty or random effects for correlated survival data. The proposed approach can include parametric or penalized smooth functions for time, time‐dependent effects, nonlinear effects, and their interactions. The maximum (penalized) marginal likelihood method is used to estimate the regression coefficients and the variance for the frailty or random effects. The optimal smoothing parameters for the penalized marginal likelihood estimation can be automatically selected by a likelihood‐based cross‐validation criterion. For models with normal random effects, Gauss‐Hermite quadrature can be used to obtain the cluster‐level marginal likelihoods. The Akaike Information Criterion can be used to compare models and select the link function. We have implemented these methods in the R package rstpm2. In simulations with both small and larger clusters, we find that this approach performs well. Through two applications, we demonstrate (1) a comparison of proportional hazards and proportional odds models with random effects for clustered survival data and (2) the estimation of time‐varying effects on the log‐time scale, age‐varying effects for a specific treatment, and two‐dimensional splines for time and age.
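
A rough sketch of the model class described above, in notation introduced here rather than taken from the paper: a generalized survival model with a shared frailty writes a link‐transformed survival function as a flexible linear predictor plus a cluster‐level random effect,

    g\{S(t \mid x_{ij}, u_i)\} = s(\log t; \gamma) + x_{ij}^\top \beta + u_i, \qquad u_i \sim N(0, \sigma_u^2),

where g is the link (\log(-\log) gives a proportional hazards model, \mathrm{logit} a proportional odds model), s(\cdot) is a parametric or penalized smooth function of log time, and time‐dependent or nonlinear effects enter through additional smooth terms. The (penalized) marginal likelihood is obtained by integrating out u_i, for normal random effects via Gauss–Hermite quadrature.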

2.
The generalized Wilcoxon and log‐rank tests are commonly used for testing differences between two survival distributions. We modify the Wilcoxon test to account for auxiliary information on intermediate disease states that subjects may pass through before failure. For a disease with multiple states where patients are monitored periodically but exact transition times are unknown (e.g. staging in cancer), we first fit a multi‐state Markov model to the full data set; when censoring precludes the comparison of survival times between two subjects, we use the model to estimate the probability that one subject will have survived longer than the other given their censoring times and last observed status, and use these probabilities to compute an expected rank for each subject. These expected ranks form the basis of our test statistic. Simulations demonstrate that the proposed test can improve power over the log‐rank and generalized Wilcoxon tests in some settings while maintaining the nominal type 1 error rate. The method is illustrated on an amyotrophic lateral sclerosis data set.
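
A minimal sketch of the expected‐rank idea, in notation we introduce for illustration only: from the fitted multi‐state Markov model, each pair of subjects (i, j) yields an estimated probability \hat{p}_{ij} = \hat{P}(T_i > T_j \mid \text{observed histories}), which reduces to the indicator I(T_i > T_j) when both survival times are fully observed. Subject i then receives the expected rank

    R_i = \sum_{j \ne i} \hat{p}_{ij},

and the test compares the sums of expected ranks between the two treatment groups in the spirit of a Wilcoxon rank‐sum statistic.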

3.
To compare the survival functions based on right-truncated data, Lagakos et al. proposed a weighted logrank test based on a reverse time scale. This is in contrast to Bilker and Wang, who suggested a semi-parametric version of the Mann-Whitney test by assuming that the distribution of truncation times is known or can be estimated parametrically. The approach of Lagakos et al. is simple and elegant, but the weight function in their method depends on the underlying cumulative hazard functions even under proportional hazards models. On the other hand, a semi-parametric test may have better efficiency, but it may be sensitive to misspecification of the distribution of truncation times. Therefore, this paper proposes a non-parametric test statistic based on the integrated weighted difference between two estimated survival functions in forward time. The comparative results from a simulation study are presented and the application of these methods to a real data set is demonstrated.
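
A hedged sketch of the type of statistic described (our notation): with \hat{S}_1 and \hat{S}_2 denoting the two estimated survival functions in forward time and w(t) a weight function, the test is based on an integrated weighted difference of the form

    W = \int_0^\tau w(t)\,\{\hat{S}_1(t) - \hat{S}_2(t)\}\,dt,

with a variance estimate so that W / \widehat{\mathrm{se}}(W) can be referred to a standard normal distribution; the choice of w(t) and of the upper limit \tau determines which portion of the survival curves drives the comparison.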

4.
Measurement errors in joint models of longitudinal and survival data are commonly assumed to be normally distributed, but this assumption may lead to unreasonable or even misleading results when the longitudinal data exhibit skewness. This paper proposes a new joint model for multivariate longitudinal and multivariate survival data by incorporating a nonparametric function into the trajectory function and hazard function and assuming that measurement errors in longitudinal measurement models follow a skew‐normal distribution. A Monte Carlo Expectation‐Maximization (EM) algorithm together with the penalized‐splines technique and the Metropolis–Hastings algorithm within the Gibbs sampler is developed to estimate parameters and nonparametric functions in the considered joint models. Case deletion diagnostic measures are proposed to identify potential influential observations, and an extended local influence method is presented to assess the local influence of minor perturbations. Simulation studies and a real example from a clinical trial are presented to illustrate the proposed methodologies.

5.
Family‐based designs enriched with affected subjects and disease‐associated variants can increase statistical power for identifying functional rare variants. However, few rare variant analysis approaches are available for time‐to‐event traits in family designs, and none of them is applicable to the X chromosome. We developed novel pedigree‐based burden and kernel association tests for right‐censored time‐to‐event outcomes in pedigree data, referred to as FamRATS (family‐based rare variant association tests for survival traits). Cox proportional hazards models are employed to relate a time‐to‐event trait to rare variants, with the flexibility to encompass different ranges of variants and the collapsing of multiple variants. In addition, robustness to violations of the proportional hazards assumption was investigated for the proposed tests and four existing tests: the conventional population‐based Cox proportional hazards model and the burden, kernel, and sum of squares statistic (SSQ) tests for family data. The proposed tests can be applied to large‐scale whole‐genome sequencing data. They are appropriate for practical use under a wide range of misspecified Cox models, as well as for population‐based, pedigree‐based, or hybrid designs. In our extensive simulation study and data example, we showed that the proposed kernel test is the most powerful and robust choice among the proposed burden test and the four existing rare variant survival association tests. When applied to the Diabetes Heart Study, the proposed tests found that exome variants of the JAK1 gene on chromosome 1 showed the most significant association with age at onset of type 2 diabetes in the exome‐wide analysis.
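
A hedged sketch of the two generic test types named above (standard forms, not necessarily the authors' exact statistics): a burden test collapses the rare variants of a region into a single per‐subject score,

    B_i = \sum_v w_v G_{iv},

and tests its coefficient in a Cox model for the time‐to‐event trait (with a frailty or kinship adjustment for pedigree structure), whereas a kernel (SKAT‐type) test treats the variant effects as random and uses a variance‐component score statistic of the form Q = r^\top K\, r, with r the vector of residuals from the null survival model and K = G W G^\top a kernel built from the genotype matrix G and weight matrix W.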

6.
An improved method of sample size calculation for the one‐sample log‐rank test is provided. The one‐sample log‐rank test may be the method of choice if the survival curve of a single treatment group is to be compared with that of a historic control. Such settings arise, for example, in clinical phase‐II trials if the response to a new treatment is measured by a survival endpoint. Present sample size formulas for the one‐sample log‐rank test are based on the number of events to be observed; that is, in order to achieve approximately the desired power for the allocated significance level and effect, the trial is stopped as soon as a certain critical number of events is reached. We propose a new stopping criterion. Both approaches are shown to be asymptotically equivalent. For small sample sizes, though, a simulation study indicates that the new criterion might be preferred when planning a corresponding trial. In our simulations, the trial is usually underpowered and the nominal significance level is not fully exploited if the traditional stopping criterion based on the number of events is used, whereas a trial based on the new stopping criterion maintains power with the type‐I error rate still controlled.
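
For orientation, a standard form of the one‐sample log‐rank statistic (our notation; the paper's exact criterion may differ): with \Lambda_0 the cumulative hazard of the historic control, subject i followed for time t_i and event indicator \delta_i, the observed and expected numbers of events are

    O = \sum_i \delta_i, \qquad E = \sum_i \Lambda_0(t_i),

and Z = (O - E)/\sqrt{E} is compared with a standard normal distribution. The traditional rule stops the trial once O reaches a prespecified critical number of events; the paper proposes an asymptotically equivalent alternative stopping rule that behaves better in small samples.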

7.
This work studies a new survival modeling technique based on least‐squares support vector machines. We propose the use of a least‐squares support vector machine combining ranking and regression. The advantage of this kernel‐based model is threefold: (i) the problem formulation is convex and can be solved conveniently by a linear system; (ii) non‐linearity is introduced by using kernels, componentwise kernels in particular are useful to obtain interpretable results; and (iii) introduction of ranking constraints makes it possible to handle censored data. In an experimental setup, the model is used as a preprocessing step for the standard Cox proportional hazard regression by estimating the functional forms of the covariates. The proposed model was compared with different survival models from the literature on the clinical German Breast Cancer Study Group data and on the high‐dimensional Norway/Stanford Breast Cancer Data set.

8.
When regression models for the mean quality‐adjusted survival time are specified from the hazard functions of transitions between two states, the mean quality‐adjusted survival time may be a complex function of the covariates. We discuss a regression model for the mean quality‐adjusted survival (QAS) time based on pseudo‐observations, which has the advantage of directly modeling the effect of covariates on the mean QAS time. Both Monte Carlo simulations and a real data set are studied.
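
A brief sketch of the pseudo‐observation device referred to above (the standard construction; notation ours): with \hat{\theta} an estimator of the mean quality‐adjusted survival time from the full sample and \hat{\theta}^{(-i)} the same estimator with subject i removed, the i‑th pseudo‐observation is

    \hat{\theta}_i = n\,\hat{\theta} - (n-1)\,\hat{\theta}^{(-i)},

and the pseudo‐observations are regressed on covariates through a generalized estimating equation with a chosen link, g\{E(\hat{\theta}_i \mid x_i)\} = x_i^\top \beta, which is what allows covariate effects to be modeled directly on the mean QAS time despite censoring.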

9.
The long‐term survivor mixture model is commonly applied to analyse survival data when some individuals may never experience the failure event of interest. A score test is presented to assess whether the cured proportion is significant, so as to justify the long‐term survivor mixture model. The sampling distribution and power of the test statistic are evaluated by simulation studies. The results confirm that the proposed test statistic performs well in finite sample situations. The test procedure is illustrated using a breast cancer survival data set and the clustered multivariate failure times from a multi‐centre clinical trial of carcinoma.
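
For context, the long‐term survivor (cure) mixture model referred to above is usually written (our notation) as

    S_{pop}(t) = \pi + (1 - \pi)\,S_u(t),

where \pi is the cured proportion and S_u the survival function of the susceptible subjects; the score test assesses the boundary null hypothesis H_0: \pi = 0 (no cured fraction), so that rejecting H_0 supports using the mixture model rather than a standard survival model.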

10.
We propose a score‐type statistic to evaluate heterogeneity in zero‐inflated models for count data in a stratified population, where heterogeneity is defined as instances in which the zero counts are generated from two sources. Evaluating heterogeneity in this class of models has attracted considerable attention in the literature, but existing testing procedures have primarily relied on the constancy assumption under the alternative hypothesis. In this paper, we extend the literature by describing a score‐type test to evaluate homogeneity against general alternatives that do not neglect the stratification information under the alternative hypothesis. The limiting null distribution of the proposed test statistic is a mixture of chi‐squared distributions that can be well approximated by a simple parametric bootstrap procedure. Our numerical simulation studies show that the proposed test can greatly improve efficiency over tests of heterogeneity that ignore the stratification information. An empirical application to dental caries data in early childhood further shows the importance and practical utility of the methodology in using the stratification profile to detect heterogeneity in the population.
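
As a reminder of the model class in question (a generic zero‐inflated formulation, notation ours): for a count Y in stratum s,

    P(Y = 0) = p_s + (1 - p_s)\,f(0; \lambda_s), \qquad P(Y = y) = (1 - p_s)\,f(y; \lambda_s), \quad y \ge 1,

where f is a count density such as the Poisson and p_s is the zero‐inflation probability. Homogeneity corresponds to H_0: p_s = 0 for all strata, and the score‐type test described above allows the p_s to differ across strata under the alternative rather than forcing a common value.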

11.
Multilevel mixed effects survival models are used in the analysis of clustered survival data, such as repeated events, multicenter clinical trials, and individual participant data (IPD) meta‐analyses, to investigate heterogeneity in baseline risk and covariate effects. In this paper, we extend parametric frailty models including the exponential, Weibull, and Gompertz proportional hazards (PH) models and the log‐logistic, log‐normal, and generalized gamma accelerated failure time models to allow any number of normally distributed random effects. Furthermore, we extend the flexible parametric survival model of Royston and Parmar, modeled on the log‐cumulative hazard scale using restricted cubic splines, to include random effects while also allowing for non‐PH (time‐dependent effects). Maximum likelihood is used to estimate the models utilizing adaptive or nonadaptive Gauss–Hermite quadrature. The methods are evaluated through simulation studies representing clinically plausible scenarios of a multicenter trial and IPD meta‐analysis, showing good performance of the estimation method. The flexible parametric mixed effects model is illustrated using a dataset of patients with kidney disease and repeated times to infection and an IPD meta‐analysis of prognostic factor studies in patients with breast cancer. User‐friendly Stata software is provided to implement the methods.
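
A hedged sketch of the flexible parametric mixed‐effects model described (our notation): on the log cumulative hazard scale,

    \log H(t \mid x_{ij}, b_i) = s(\log t; \gamma) + x_{ij}^\top \beta + z_{ij}^\top b_i, \qquad b_i \sim N(0, \Sigma),

where s(\cdot) is a restricted cubic spline in log time and time‐dependent effects enter as interactions with spline terms. The cluster‐level likelihood contribution

    L_i = \int \prod_j L_{ij}(b)\,\phi(b; 0, \Sigma)\,db

is approximated by (adaptive) Gauss–Hermite quadrature before maximizing over the fixed effects, spline coefficients, and variance components.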

12.
Prognostic studies often estimate survival curves for patients with different covariate vectors, but the validity of their results depends largely on the accuracy of the estimated covariate effects. To avoid conventional proportional hazards and linearity assumptions, flexible extensions of Cox's proportional hazards model incorporate non‐linear (NL) and/or time‐dependent (TD) covariate effects. However, their impact on survival curve estimation is unclear. Our primary goal is to develop and validate a flexible method for estimating individual patients' survival curves, conditional on multiple predictors with possibly NL and/or TD effects. We first obtain maximum partial likelihood estimates of NL and TD effects and use backward elimination to select statistically significant effects into a final multivariable model. We then plug the selected NL and TD estimates into the full likelihood function and estimate the baseline hazard function and the resulting survival curves, conditional on individual covariate vectors. The TD and NL functions and the log hazard are modeled with unpenalized regression B‐splines. In simulations, our flexible survival curve estimates were unbiased and had much lower mean square errors than the conventional estimates. In real‐life analyses of mortality after septic shock, our model significantly improved the deviance (likelihood ratio test = 84.8, df = 20, p < 0.0001) and substantially changed the predicted survival for several subjects.
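
For concreteness, the flexible extension of the Cox model referred to above typically takes a form like (notation ours)

    \lambda(t \mid x) = \lambda_0(t)\,\exp\Big(\sum_j \beta_j(t)\,f_j(x_j)\Big),

where each f_j is a possibly nonlinear (NL) transform of covariate x_j and each \beta_j(t) a possibly time‐dependent (TD) coefficient, both represented here by regression B‐splines; setting f_j(x_j) = x_j recovers linearity and \beta_j(t) \equiv \beta_j recovers proportional hazards, which is what the backward elimination step effectively tests.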

13.
Control rate regression is a widely used approach to account for heterogeneity among studies in a meta‐analysis by including information about the outcome risk of patients in the control condition. Correcting for the presence of measurement error affecting risk information in the treated and control groups has been recognized as a necessary step to derive reliable inferential conclusions. Within this framework, the paper considers the problem of small sample size as an additional source of misleading inference about the slope of the control rate regression. Likelihood procedures relying on first‐order approximations are shown to be substantially inaccurate, especially when dealing with increasing heterogeneity and correlated measurement errors. We suggest addressing the problem by relying on higher‐order asymptotics. In particular, we derive Skovgaard's statistic as an instrument to improve the accuracy of the approximation of the signed profile log‐likelihood ratio statistic to the standard normal distribution. The proposal is shown to provide much more accurate results than standard likelihood solutions, with no appreciable computational effort. The advantages of Skovgaard's statistic in control rate regression are shown in a series of simulation experiments and illustrated in a real data example. R code for applying the first‐ and second‐order statistics for inference on the slope of the control rate regression is provided.
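
A hedged sketch of the underlying model, following the usual measurement‐error formulation of control rate regression (notation ours): with \xi_i the true control‐group risk measure (e.g., log odds) and \theta_i the true treatment effect in study i, the structural model is

    \theta_i = \alpha + \beta\,\xi_i + u_i, \qquad u_i \sim N(0, \tau^2),

while the observed estimates (\hat{\theta}_i, \hat{\xi}_i) equal the true values plus within‐study measurement errors, which are typically correlated because both quantities use the same control arm. Interest centres on the slope \beta, and the inaccuracy of first‐order likelihood inference about \beta with few studies is what motivates Skovgaard's second‐order statistic.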

14.
Multi‐state models generalize survival or duration time analysis to the estimation of transition‐specific hazard rate functions for multiple transitions. When each of the transition‐specific risk functions is parametrized with several distinct covariate effect coefficients, this leads to a model of potentially high dimension. To decrease the dimensionality of the parameter space and to work out a clear image of the underlying multi‐state model structure, one can aim either at setting some coefficients to zero or at making coefficients for the same covariate but two different transitions equal. The first issue can be approached by penalizing the absolute values of the covariate coefficients, as in lasso regularization. If, instead, absolute differences between coefficients of the same covariate on different transitions are penalized, this leads to sparse competing risk relations within a multi‐state model, that is, equality of covariate effect coefficients. In this paper, a new estimation approach providing sparse multi‐state modelling by the aforementioned principles is established, based on the estimation of multi‐state models and a simultaneous penalization of the L1‐norm of covariate coefficients and their differences in a structured way. The new multi‐state modelling approach is illustrated on peritoneal dialysis study data and implemented in the R package penMSM.
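
A sketch of the structured penalty described above (a generic fused‐lasso‐type form, notation ours): with \beta_{pq} the coefficient of covariate p on transition q, the penalized criterion subtracts a term such as

    \lambda_1 \sum_{p,q} |\beta_{pq}| \;+\; \lambda_2 \sum_p \sum_{q < q'} |\beta_{pq} - \beta_{pq'}|,

so the first L1 term shrinks individual transition‐specific effects exactly to zero, while the second shrinks differences between transitions to zero, i.e. fuses the effect of a covariate across transitions; the tuning constants \lambda_1 and \lambda_2 control how aggressively each kind of sparsity is imposed.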

15.
This article considers sample size determination for jointly testing a cause‐specific hazard and the all‐cause hazard for competing risks data. The cause‐specific hazard and the all‐cause hazard jointly characterize important study end points such as the disease‐specific survival and overall survival, which are commonly used as coprimary end points in clinical trials. Specifically, we derive sample size calculation methods for two‐group comparisons based on an asymptotic chi‐square joint test and a maximum joint test of the aforementioned quantities, taking into account censoring due to loss to follow‐up as well as staggered entry and administrative censoring. We illustrate the application of the proposed methods using the Die Deutsche Diabetes Dialyse Studie clinical trial. An R package “powerCompRisk” has been developed and made available at the CRAN R library.
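
For orientation (our notation): with K competing causes, the cause‐specific hazard for cause 1 and the all‐cause hazard are

    \lambda_1(t) = \lim_{\Delta \to 0} P(t \le T < t+\Delta, \text{cause}=1 \mid T \ge t)/\Delta, \qquad \lambda_\bullet(t) = \sum_{k=1}^{K} \lambda_k(t),

and the joint tests referred to above combine the two group‐comparison statistics for \lambda_1 and \lambda_\bullet, either through an asymptotic chi‐square statistic or through the maximum of the two standardized statistics, with the sample size chosen so that the joint test attains the target power under the design's censoring, staggered‐entry, and follow‐up assumptions.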

16.
The process of developing and validating a prognostic model for survival time data has been much discussed in the literature. Assessment of the performance of candidate prognostic models on data other than that used to fit the models is essential for choosing a model that will generalize well to independent data. However, there remain difficulties in current methods of measuring the accuracy of predictions of prognostic models for censored survival time data. In this paper, flexible parametric models based on the Weibull, loglogistic and lognormal distributions with spline smoothing of the baseline log cumulative hazard function are used to fit a set of candidate prognostic models across k data sets. The model that generalizes best to new data is chosen using a cross-validation scheme which fits the model on k-1 data sets and tests the predictive accuracy on the omitted data set. The procedure is repeated, omitting each data set in turn. The quality of the predictions is measured using three different methods: two commonly proposed validation methods, Harrell's concordance statistic and the Brier statistic, and a novel method using deviance differences. The results show that the deviance statistic is able to discriminate between quite similar models and can be used to choose a prognostic model that generalizes well to new data. The methods are illustrated by using a model developed to predict progression to a new AIDS event or death in HIV-1 positive patients starting antiretroviral therapy.
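
For reference, one common form of the prediction‐error measures mentioned above (our notation; censoring handled by inverse‐probability‐of‐censoring weights): the Brier score at time t for a model with predicted survival \hat{S}(t \mid x_i) is

    BS(t) = \frac{1}{n} \sum_i \hat{w}_i(t)\,\{ I(T_i > t) - \hat{S}(t \mid x_i) \}^2,

with smaller values indicating better calibrated and sharper predictions, while Harrell's concordance statistic measures only the rank agreement between predicted risk and the observed ordering of event times. The deviance‐difference criterion, as we read the abstract, instead compares candidate models through differences in deviance (minus twice the log‐likelihood) evaluated on the omitted data set.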

17.
Clustered right‐censored data often arise from tumorigenicity experiments and clinical trials. For testing the equality of two survival functions, Jung and Jeong extended weighted logrank (WLR) tests to two independent samples of clustered right‐censored data, while the weighted Kaplan–Meier (WKM) test can be derived from the work of O'Gorman and Akritas. The weight functions in both classes of tests (WLR and WKM) can be selected to be more sensitive to detect a certain alternative; however, since the exact alternative is unknown, it is difficult to specify the selected weights in advance. Since WLR is rank‐based, it is not sensitive to the magnitude of the difference in survival times. Although WKM is constructed to be more sensitive to the magnitude of the difference in survival times, it is not sensitive to late hazard differences. Therefore, in order to combine the advantages of these two classes of tests, this paper develops a class of versatile tests based on simultaneously using WLR and WKM for two independent samples of clustered right‐censored data. The comparative results from a simulation study are presented and the application of the versatile tests to two real data sets is illustrated.
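
A hedged sketch of how such a versatile test can be formed (a generic construction; the paper's exact statistic may differ): with Z_{WLR} and Z_{WKM} the standardized weighted log‐rank and weighted Kaplan–Meier statistics computed from the clustered data, a combined statistic such as

    V = \max(|Z_{WLR}|, |Z_{WKM}|)

is referred to its joint null distribution, obtained from the asymptotic bivariate normality of (Z_{WLR}, Z_{WKM}) with their correlation estimated from the data, so that the test retains good power whether the true difference shows up mainly in the ranks or in the magnitude of the survival separation.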

18.
Transform methods have proved effective for networks describing a progression of events. In semi‐Markov networks, we calculated the transform of time to a terminating event from corresponding transforms of intermediate steps. Saddlepoint inversion then provided survival and hazard functions, which integrated, and fully utilised, the network data. However, the presence of censored data introduces significant difficulties for these methods. Participants in controlled trials commonly remain event‐free at study completion, a consequence of the limited period of follow‐up specified in the trial design. Transforms are not estimable using nonparametric methods in states with survival truncated by end‐of‐study censoring. We propose the use of parametric models specifying residual survival to the next event. As a simple approach to extrapolation with competing alternative states, we imposed a proportional incidence (constant relative hazard) assumption beyond the range of study data. No proportional hazards assumptions are necessary for inferences concerning time to endpoint; indeed, estimation of survival and hazard functions can proceed in a single study arm. We demonstrate the feasibility and efficiency of transform inversion in a large randomised controlled trial of cholesterol‐lowering therapy, the Long‐Term Intervention with Pravastatin in Ischaemic Disease study. Transform inversion integrates information available in components of multistate models: estimates of transition probabilities and empirical survival distributions. As a by‐product, it provides some ability to forecast survival and hazard functions forward, beyond the time horizon of available follow‐up. Functionals of survival and hazard functions provide inference, which proves sharper than that of log‐rank and related methods for survival comparisons ignoring intermediate events.
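
A brief sketch of the transform idea (our notation): if a path through the network to the terminating event consists of sojourn times X_1, \dots, X_m that are independent given the path, the transform (e.g., moment generating function) of the total time T factorizes as

    M_T(s) = \prod_{k=1}^{m} M_{X_k}(s),

and with several possible paths the overall transform is a probability‐weighted mixture of such products. Saddlepoint inversion of the resulting transform then recovers the survival and hazard functions of T; it is exactly where end‐of‐study censoring truncates the sojourn distributions that nonparametric transform estimates break down and the parametric residual‐survival models proposed above are substituted.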

19.
This work arises from consideration of sarcoma patients in which fluorodeoxyglucose positron emission tomography (FDG‐PET) imaging pre‐therapy and post‐chemotherapy is used to assess treatment response. Our focus is on methods for evaluation of the statistical uncertainty in the measured response for an individual patient. The gamma distribution is often used to describe data with constant coefficient of variation, but it can be adapted to describe the pseudo‐Poisson character of PET measurements. We propose co‐registering the pre‐therapy and post‐therapy images and modeling the approximately paired voxel‐level data using gamma statistics. Expressions for the estimation of the treatment effect and its variability are provided. Simulation studies explore the performance in the context of testing for a treatment effect. The impact of misregistration errors, and how test power is affected by estimation of variability using simplified sampling assumptions, as might be produced by direct bootstrapping, are also clarified. The results illustrate a marked benefit in using a properly constructed paired approach. Remarkably, the power of the paired analysis is maintained even if the pre‐image and post‐image data are poorly registered. A theoretical explanation for this is indicated. The methodology is further illustrated in the context of a series of fluorodeoxyglucose‐PET sarcoma patient studies. These data demonstrate the additional prognostic value of the proposed treatment effect test statistic.
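
For context, the distributional device mentioned above works roughly as follows (our notation, not the authors' exact formulation): a voxel value with mean \mu can be modelled as Y \sim \mathrm{Gamma}(\alpha, \alpha/\mu) (shape–rate parametrization), so that

    E(Y) = \mu, \qquad \mathrm{CV}(Y) = 1/\sqrt{\alpha},

with the coefficient of variation free of \mu. With co‐registered pre‐ and post‐therapy values (Y^{pre}_v, Y^{post}_v) treated as paired at the voxel level, a natural treatment‐effect summary is a ratio of means or a mean log‐ratio, e.g. \hat{\Delta} = \mathrm{mean}_v\{\log Y^{post}_v - \log Y^{pre}_v\}, whose variability is then estimated while respecting the pairing rather than by resampling the two images independently.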

20.
In conventional survival analysis there is an underlying assumption that all study subjects are susceptible to the event. In general, this assumption does not adequately hold when investigating the time to an event other than death. Owing to genetic and/or environmental etiology, study subjects may not be susceptible to the disease. Analyzing nonsusceptibility has become an important topic in biomedical, epidemiological, and sociological research, with recent statistical studies proposing several mixture models for right‐censored data in regression analysis. In longitudinal studies, we often encounter left, interval, and right‐censored data because of incomplete observations of the time endpoint, as well as possibly left‐truncated data arising from the dissimilar entry ages of recruited healthy subjects. To analyze these kinds of incomplete data while accounting for nonsusceptibility and possible crossing hazards in the framework of mixture regression models, we utilize a logistic regression model to specify the probability of susceptibility, and a generalized gamma distribution, or a log‐logistic distribution, in the accelerated failure time location‐scale regression model to formulate the time to the event. Relative times of the conditional event time distribution for susceptible subjects are extended in the accelerated failure time location‐scale submodel. We also construct graphical goodness‐of‐fit procedures on the basis of the Turnbull–Frydman estimator and newly proposed residuals. Simulation studies were conducted to demonstrate the validity of the proposed estimation procedure. The mixture regression models are illustrated with alcohol abuse data from the Taiwan Aboriginal Study Project and hypertriglyceridemia data from the Cardiovascular Disease Risk Factor Two‐township Study in Taiwan.
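
A compact sketch of the mixture regression structure described (our notation): susceptibility follows a logistic model and the event time of susceptible subjects follows an accelerated failure time location‐scale model,

    \mathrm{logit}\,P(\text{susceptible} \mid z) = z^\top \gamma, \qquad \log T = \mu(x) + \sigma(x)\,\varepsilon,

where \varepsilon has a generalized gamma or log‐logistic type baseline distribution, so that the observed‐data survival function is

    S(t \mid x, z) = 1 - \pi(z) + \pi(z)\,S_u(t \mid x),

with \pi(z) the susceptibility probability and S_u the conditional survival of susceptible subjects; left‐, interval‐, and right‐censored (and left‐truncated) observations contribute the probabilities of their observed intervals to the likelihood.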
