Similar Articles
20 similar articles found.
1.
Although sample size calculations have become an important element in the design of research projects, such methods for studies involving current status data are scarce. Here, we propose a method for calculating power and sample size for studies using current status data. This method is based on a Weibull survival model for a two‐group comparison. The Weibull model allows the investigator to specify a group difference in terms of a hazards ratio or a failure time ratio. We consider exponential, Weibull and uniformly distributed censoring distributions. We base our power calculations on a parametric approach with the Wald test because it is easy for medical investigators to conceptualize and specify the required input variables. As expected, studies with current status data have substantially less power than studies with the usual right‐censored failure time data. Our simulation results demonstrate the merits of these proposed power calculations. Copyright © 2009 John Wiley & Sons, Ltd.
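A minimal simulation sketch of this kind of calculation, assuming a Weibull proportional hazards model with uniformly distributed monitoring times (the paper works with analytic approximations; everything below, including all parameter values, is illustrative): power is estimated as the rejection rate of a Wald test on the group effect, fitted by maximum likelihood from current status observations only.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def simulate(n, shape, scale, log_hr, cmax):
    z = rng.integers(0, 2, n)                        # group indicator (0/1)
    u = rng.uniform(size=n)
    # Weibull PH: S(t | z) = exp(-(t / scale)**shape * exp(log_hr * z))
    t = scale * (-np.log(u) / np.exp(log_hr * z)) ** (1.0 / shape)
    c = rng.uniform(0.0, cmax, n)                    # one monitoring time each
    return z, c, (t <= c).astype(float)              # current status indicator

def negloglik(par, z, c, delta):
    shape, scale, beta = np.exp(par[0]), np.exp(par[1]), par[2]
    s = np.exp(-(c / scale) ** shape * np.exp(beta * z))   # S(c | z)
    return -np.sum(delta * np.log(1.0 - s + 1e-12) +
                   (1.0 - delta) * np.log(s + 1e-12))

def power(nsim=200, n=200, shape=1.5, scale=1.0, log_hr=np.log(2.0), cmax=2.0):
    reject = 0
    for _ in range(nsim):
        z, c, delta = simulate(n, shape, scale, log_hr, cmax)
        fit = minimize(negloglik, np.zeros(3), args=(z, c, delta), method="BFGS")
        se = np.sqrt(fit.hess_inv[2, 2])             # BFGS inverse-Hessian approx.
        reject += abs(fit.x[2] / se) > 1.96          # two-sided Wald test, 5% level
    return reject / nsim

print("estimated power:", power())
```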

2.
In many medical problems that collect multiple observations per subject, the time to an event is often of interest. Sometimes, the occurrence of the event can be recorded at regular intervals, leading to interval‐censored data. It is further desirable to obtain the most parsimonious model, in order to increase predictive power and ease of interpretation. Variable selection, and in the case of clustered data random effects selection, become crucial in such applications. We propose a Bayesian method for random effects selection in mixed effects accelerated failure time (AFT) models. The proposed method relies on the Cholesky decomposition of the random effects covariance matrix and the parameter‐expansion method for the selection of random effects. A Dirichlet prior is used to model the uncertainty in the random effects. The error distribution of the accelerated failure time model is specified using a Gaussian mixture, allowing a flexible error density and prediction of the survival and hazard functions. We demonstrate the model using extensive simulations and the Signal Tandmobiel Study®. Copyright © 2013 John Wiley & Sons, Ltd.

3.
Existing joint models for longitudinal and survival data are not applicable to longitudinal ordinal outcomes with possibly non‐ignorable missing values arising from multiple causes. We propose a joint model for longitudinal ordinal measurements and competing risks failure time data, in which a partial proportional odds model for the longitudinal ordinal outcome is linked to the event times by latent random variables. At the survival endpoint, our model adopts the competing risks framework to model multiple failure types simultaneously. The partial proportional odds model, an extension of the popular proportional odds model for ordinal outcomes, is more flexible and also provides a tool to test the proportional odds assumption. We use a likelihood approach and derive an EM algorithm to obtain the maximum likelihood estimates of the parameters. We further show that all the parameters at the survival endpoint are identifiable from the data. Our joint model enables one to make inference for both the longitudinal ordinal outcome and the failure times simultaneously. In addition, the inference at the longitudinal endpoint is adjusted for possible non‐ignorable missing data caused by the failure times. We apply the method to the NINDS rt‐PA stroke trial. Our study considers the modified Rankin Scale only; other ordinal outcomes in the trial, such as the Barthel and Glasgow scales, can be treated in the same way. Copyright © 2009 John Wiley & Sons, Ltd.

4.
Multivariate interval‐censored failure time data arise commonly in many studies of epidemiology and biomedicine. Analysis of this type of data is more challenging than that of right‐censored data. We propose a simple multiple imputation strategy to recover the order of occurrences based on the interval‐censored event times, using a conditional predictive distribution function derived from a parametric gamma random effects model. By imputing the interval‐censored failure times, estimation of the regression and dependence parameters in a gamma frailty proportional hazards model becomes possible via the well‐developed EM algorithm. A robust estimator for the covariance matrix is suggested to adjust for possible misspecification of the parametric baseline hazard function. The finite sample properties of the proposed method are investigated via simulation. The performance of the proposed method is highly satisfactory, whereas the computational burden is minimal. The proposed method is also applied to the diabetic retinopathy study (DRS) data for illustration purposes, and the estimates are compared with those based on other existing methods for bivariate grouped survival data. Copyright © 2010 John Wiley & Sons, Ltd.

5.
In contrast to the usual ROC analysis with a contemporaneous reference standard, the time‐dependent setting introduces the possibility that the reference standard refers to an event at a future time and may not be known for every patient due to censoring. The goal of this research is to determine the sample size required for a study design to address the question of the accuracy of a diagnostic test, using the area under the curve in time‐dependent ROC analysis. We adapt a previously published estimator of the time‐dependent area under the ROC curve, which is a function of the expected conditional survival functions and accommodates censored data. The estimation of the required sample size is based on approximations of the expected conditional survival functions and their variances, derived under parametric assumptions of an exponential failure time and an exponential censoring time. We also consider different patient enrollment strategies. The proposed method can provide an adequate sample size to ensure that the test's accuracy is estimated to a prespecified precision. We present results of a simulation study assessing the accuracy of the method and its robustness to departures from the parametric assumptions. We apply the proposed method to the design of a study of positron emission tomography as a predictor of disease‐free survival in women undergoing therapy for cervical cancer. Copyright © 2013 John Wiley & Sons, Ltd.

6.
We consider a marginal model for the regression analysis of clustered failure time data with a cure fraction. We propose to use novel generalized estimating equations in an expectation–maximization algorithm to estimate the regression parameters in a semiparametric proportional hazards mixture cure model. The dependence among the cure statuses, and among the survival times of uncured patients within clusters, is modeled by working correlation matrices in the estimating equations. We use a bootstrap method to obtain the variances of the estimates. We report a simulation study demonstrating a substantial efficiency gain of the proposed method over an existing marginal method. Finally, we apply the model and the proposed method to a set of data from a multi‐institutional study of tonsil cancer patients treated with radiation therapy. Copyright © 2012 John Wiley & Sons, Ltd.

7.
The survival median is commonly used to compare treatment groups in cancer‐related research. The current literature focuses on developing tests for independent survival data; however, researchers often encounter dependent survival data such as matched pair data or clustered data. We propose a pseudo‐value approach to test the equality of survival medians for both independent and dependent survival data. We investigate the type I error and power of the proposed method by a simulation study in which we examine both independent and dependent data. The simulation study shows that the proposed method performs equivalently to the existing methods for independent survival data and performs better for dependent survival data. A study comparing median survival times for bone marrow transplants illustrates the proposed method. Copyright © 2013 John Wiley & Sons, Ltd.
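A minimal sketch of the pseudo-value construction, assuming a hand-rolled Kaplan–Meier median (the paper embeds the pseudo-values in tests that also handle dependent data; the two-group z-comparison and all data below are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def km_median(time, event):
    """Kaplan-Meier median: first time at which S(t) drops to 0.5 or below."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    at_risk = len(time) - np.arange(len(time))
    surv = np.cumprod(1.0 - event / at_risk)         # product-limit estimate
    hit = np.flatnonzero(surv <= 0.5)
    return time[hit[0]] if hit.size else np.inf      # inf if S never reaches 0.5

def pseudo_values(time, event):
    """Jackknife pseudo-values: theta_i = n*theta_hat - (n-1)*theta_hat(-i)."""
    n = len(time)
    full = km_median(time, event)
    loo = np.array([km_median(np.delete(time, i), np.delete(event, i))
                    for i in range(n)])
    return n * full - (n - 1) * loo

# Toy comparison of two independent groups via a z-test on pseudo-value means
t1, t2 = rng.exponential(1.0, 100), rng.exponential(1.5, 100)
c1, c2 = rng.exponential(3.0, 100), rng.exponential(3.0, 100)
y1, d1 = np.minimum(t1, c1), (t1 <= c1).astype(float)
y2, d2 = np.minimum(t2, c2), (t2 <= c2).astype(float)
p1, p2 = pseudo_values(y1, d1), pseudo_values(y2, d2)
z = (p2.mean() - p1.mean()) / np.sqrt(p1.var(ddof=1)/100 + p2.var(ddof=1)/100)
print("z statistic for equal medians:", round(z, 2))
```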

8.
We study the problem of testing a hypothesized distribution in survival regression models when the data are right‐censored and survival times are influenced by covariates. A modified chi‐squared type test, known as the Nikulin‐Rao‐Robson statistic, is applied for the comparison of accelerated failure time models. This statistic is used to test the goodness‐of‐fit of the hypertabastic survival model and of four other unimodal hazard rate functions. The results of a simulation study show that the hypertabastic distribution can be used as an alternative to the log‐logistic and log‐normal distributions. In statistical modeling, because of the flexible shape of its hazard function, this distribution can also be used as a competitor of the Birnbaum‐Saunders and inverse Gaussian distributions. Results for a real data application are also shown. Copyright © 2017 John Wiley & Sons, Ltd.

9.
The long‐term survivor mixture model is commonly applied to analyse survival data when some individuals may never experience the failure event of interest. A score test is presented to assess whether the cured proportion is significant enough to justify the long‐term survivor mixture model. The sampling distribution and power of the test statistic are evaluated by simulation studies. The results confirm that the proposed test statistic performs well in finite sample situations. The test procedure is illustrated using a breast cancer survival data set and the clustered multivariate failure times from a multi‐centre clinical trial of carcinoma. Copyright © 2009 John Wiley & Sons, Ltd.

10.
This paper addresses model‐based Bayesian inference in the analysis of data arising from bioassay experiments. In such experiments, increasing doses of a chemical substance are given to treatment groups (usually of rats or mice) for a fixed period of time (usually 2 years). The goal of such an experiment is to determine whether an increased dosage of the chemical is associated with an increased probability of an adverse effect (usually the presence of adenoma or carcinoma). The data consist of dosage, survival time, and the occurrence of the adverse event for each unit in the study. To determine whether such a relationship exists, this paper proposes using Bayes factors to compare two probit models: the model that assumes increasing dose effects and the model that assumes no dose effect. These models account for the survival time of each unit through a Poly‐k type correction. In order to increase statistical power, the proposed approach allows the incorporation of information from control groups of previous studies. The proposed method is able to handle data with very few occurrences of the adverse event. The proposed method is compared with a variation of the Peddada test via simulation and is shown to have higher power. We demonstrate the method by applying it to two bioassay experiment datasets previously analyzed by other authors. Copyright © 2017 John Wiley & Sons, Ltd.
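A small illustrative sketch of the Poly-k survival correction mentioned above, assuming the standard Bailer-Portier weighting (the Bayes-factor machinery of the paper is not reproduced, and all numbers are made up): animals that die early without the adverse event contribute only a fractional weight (t/t_max)^k to the effective group size.

```python
import numpy as np

def poly_k_adjusted_rate(time, tumor, t_max, k=3):
    """Poly-k adjusted tumor incidence (Bailer-Portier weights).

    Animals with the tumor, or surviving to terminal sacrifice at t_max,
    get weight 1; early deaths without tumor get weight (t / t_max) ** k.
    """
    w = np.where(tumor == 1, 1.0, (time / t_max) ** k)
    return tumor.sum() / w.sum()          # tumors / effective number at risk

time  = np.array([50.0, 104.0, 78.0, 104.0, 30.0])   # weeks on study
tumor = np.array([1, 0, 0, 1, 0])                    # adverse event observed?
print(poly_k_adjusted_rate(time, tumor, t_max=104.0))
```

Note that survivors to terminal sacrifice automatically receive weight (t_max/t_max)^k = 1, so only early deaths without the event are down-weighted.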

11.
Flexible modelling in survival analysis can be useful for both exploratory and predictive purposes. Feed-forward neural networks were recently considered for flexible non-linear modelling of censored survival data, through the generalization of both discrete and continuous time models. We show that by treating the time interval as an input variable in a standard feed-forward network with logistic activation and entropy error function, it is possible to estimate smoothed discrete hazards as conditional probabilities of failure. We consider an easily implementable approach with a fast selection criterion for the best configurations. Examples on data sets from two clinical trials are provided. The proposed artificial neural network (ANN) approach can be applied to the estimation of the functional relationships between covariates and time in survival data, to improve model predictivity in the presence of complex prognostic relationships. © 1998 John Wiley & Sons, Ltd.
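A minimal sketch of this discrete-hazard idea (the person-period expansion, network size, and toy data are illustrative assumptions, not the authors' exact configuration): each subject contributes one record per interval at risk, the interval index enters as an input, and a logistic-output network trained with cross-entropy yields smoothed discrete hazards h(k | x).

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def person_period(x, last_interval, event):
    """One record per subject-interval at risk; target = failure in interval."""
    rows, target = [], []
    for xi, ki, di in zip(x, last_interval, event):
        for k in range(1, ki + 1):
            rows.append(np.append(xi, k))            # interval index as an input
            target.append(1 if (k == ki and di == 1) else 0)
    return np.array(rows), np.array(target)

# Toy data: 2 covariates, follow-up discretized into at most 10 intervals
n = 300
x = rng.normal(size=(n, 2))
last = rng.integers(1, 11, n)                        # last interval at risk
event = rng.integers(0, 2, n)                        # failed there, or censored
X, y = person_period(x, last, event)

net = MLPClassifier(hidden_layer_sizes=(8,), activation="logistic",
                    max_iter=2000, random_state=0).fit(X, y)

# Smoothed discrete hazards and survival curve for a new covariate profile
x_new = np.array([0.5, -1.0])
hazard = np.array([net.predict_proba(np.append(x_new, k)[None, :])[0, 1]
                   for k in range(1, 11)])
print("S(k):", np.round(np.cumprod(1.0 - hazard), 3))
```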

12.
We consider the situation of estimating the marginal survival distribution from censored data subject to dependent censoring, using auxiliary variables. We had previously developed a nonparametric multiple imputation approach. That method used two working proportional hazards (PH) models, one for the event times and the other for the censoring times, to define a nearest-neighbor imputing risk set; this risk set was then used to impute failure times for censored observations. Here, we adapt the method to the situation where the event and censoring times follow accelerated failure time models, and propose to use the Buckley–James estimator as the two working models. Besides studying the performance of the proposed method, we also compare it with two popular methods for handling dependent censoring through auxiliary variables, the inverse probability of censoring weighted method and parametric multiple imputation methods, to shed light on their use. In a simulation study with time‐independent auxiliary variables, we show that all approaches can reduce bias due to dependent censoring. The proposed method is robust to misspecification of either one of the two working models and of their link function; this suggests that working proportional hazards models are preferable, since accelerated failure time models are more cumbersome to fit. In contrast, the inverse probability of censoring weighted method is not robust to misspecification of the link function of the censoring time model, and the parametric imputation methods rely on the specification of the event time model. The approaches are applied to a prostate cancer dataset. Copyright © 2015 John Wiley & Sons, Ltd.

13.
Accelerated failure time (AFT) models allowing for random effects are linear mixed models under a log-transformation of survival time with censoring, and describe dependence in correlated survival data. It is well known that AFT models are useful alternatives to frailty models. To the best of our knowledge, however, there is no literature on variable selection methods for such AFT models. In this paper, we propose a simple but unified procedure for the selection of fixed effects in AFT random-effect models using penalized h-likelihood (HL). We consider four penalty functions (i.e., the least absolute shrinkage and selection operator (LASSO), adaptive LASSO, smoothly clipped absolute deviation (SCAD), and HL). We show that the proposed method can be easily implemented via a slight modification to existing h-likelihood estimation procedures, and demonstrate that it can also be easily extended to AFT models with multilevel (or nested) structures. Simulation studies show that the procedure using the adaptive LASSO, SCAD, or HL penalty performs well. In particular, we find via the simulation results that the variable selection method with the HL penalty provides a higher probability of choosing the true model than the other three methods. The usefulness of the new method is illustrated using two actual datasets from multicenter clinical trials.

14.
Competing risks analysis considers the time to the first event (‘survival time’) and the event type (‘cause’), possibly subject to right‐censoring. The cause‐specific (i.e., event‐specific) hazards completely determine the competing risks process, yet simulation studies often fall back on the much‐criticized latent failure time model. Cause‐specific hazard‐driven simulation appears to be the exception; when done, usually only constant hazards are considered, which is unrealistic in many medical situations. We explain how to simulate competing risks data based on possibly time‐dependent cause‐specific hazards. The simulation design is as easy as any other, relies on identifiable quantities only, and adds to our understanding of the competing risks process; in addition, it immediately generalizes to more complex multistate models. We apply the proposed simulation design to computing the least false parameter of a misspecified proportional subdistribution hazards model, a research question of independent interest in competing risks. The simulation specifications were motivated by data on infectious complications in stem‐cell transplanted patients, where results from cause‐specific hazards analyses were difficult to interpret in terms of cumulative event probabilities. The simulation illustrates that results from a misspecified proportional subdistribution hazards analysis can be interpreted as a time‐averaged effect on the cumulative event probability scale. Copyright © 2009 John Wiley & Sons, Ltd.
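A minimal sketch of the cause-specific-hazard simulation design, assuming constant hazards for brevity (for time-dependent hazards the event time is drawn by numerically inverting the cumulative all-cause hazard instead of using the closed form below); all rates are illustrative: draw the event time from the all-cause hazard, then assign the cause by a weighted coin flip at that time.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_competing_risks(n, a1, a2, cens_rate):
    a = a1 + a2                                      # all-cause hazard
    t = rng.exponential(1.0 / a, n)                  # event time from all-cause hazard
    cause = rng.choice([1, 2], size=n, p=[a1 / a, a2 / a])  # cause drawn at time t
    c = rng.exponential(1.0 / cens_rate, n)          # independent right-censoring
    time = np.minimum(t, c)
    status = np.where(t <= c, cause, 0)              # 0 codes a censored observation
    return time, status

time, status = simulate_competing_risks(1000, a1=0.3, a2=0.1, cens_rate=0.2)
print(np.bincount(status) / len(status))             # shares: censored, cause 1, cause 2
```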

15.
Family‐based designs enriched with affected subjects and disease‐associated variants can increase statistical power for identifying functional rare variants. However, few rare variant analysis approaches are available for time‐to‐event traits in family designs, and none of them is applicable to the X chromosome. We developed novel pedigree‐based burden and kernel association tests for time‐to‐event outcomes with right censoring in pedigree data, referred to as FamRATS (family‐based rare variant association tests for survival traits). Cox proportional hazards models were employed to relate a time‐to‐event trait to rare variants, with the flexibility to encompass a range of weighting and collapsing schemes for multiple variants. In addition, robustness to violations of the proportional hazards assumption was investigated for the proposed tests and four existing tests, including the conventional population‐based Cox proportional hazards model and the burden, kernel, and sum of squares statistic (SSQ) tests for family data. The proposed tests can be applied to large‐scale whole‐genome sequencing data. They are appropriate for practical use under a wide range of misspecified Cox models, as well as for population‐based, pedigree‐based, or hybrid designs. In an extensive simulation study and a data example, we show that the proposed kernel test is the most powerful and robust choice among the proposed burden test and the four existing rare variant survival association tests. When applied to the Diabetes Heart Study, the proposed tests found that exome variants of the JAK1 gene on chromosome 1 showed the most significant association with age at onset of type 2 diabetes in the exome‐wide analysis.
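A minimal population-based burden-test sketch (the FamRATS tests additionally model pedigree relatedness, which is omitted here; the genotype frequencies, effect size, and the lifelines dependency are all illustrative assumptions): rare-variant counts are collapsed into a per-subject burden score, whose effect on the event time is then tested in a Cox proportional hazards model.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n, m = 500, 20                                       # subjects, rare variants
geno = rng.binomial(2, 0.01, size=(n, m))            # toy genotypes (MAF ~ 1%)
burden = geno.sum(axis=1)                            # collapse variants per subject
t = rng.exponential(np.exp(-0.4 * burden))           # onset time depends on burden
c = rng.exponential(1.5, n)                          # independent censoring
df = pd.DataFrame({"time": np.minimum(t, c),
                   "event": (t <= c).astype(int),
                   "burden": burden})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
cph.print_summary()                                  # Wald test on the burden score
```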

16.
Interval‐censored data occur naturally in many fields; their main feature is that the failure time of interest is not observed exactly, but is known to fall within some interval. In this paper, we propose a semiparametric probit model for analyzing case 2 interval‐censored data as an alternative to the existing semiparametric models in the literature. Specifically, we propose to approximate the unknown nonparametric nondecreasing function in the probit model with a linear combination of monotone splines, leading to only a finite number of parameters to estimate. Both maximum likelihood and Bayesian estimation methods are proposed; for each, the regression parameters and the baseline survival function are estimated jointly. The proposed methods make no assumptions about the observation process, are applicable to any interval‐censored data, and are easy to implement. The methods are evaluated by simulation studies and illustrated by two real‐life interval‐censored data applications. Copyright © 2010 John Wiley & Sons, Ltd.
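A minimal maximum-likelihood sketch of the probit construction, assuming the simplest monotone basis, degree-one I-splines (nondecreasing ramps with nonnegative weights, so alpha(t) = a0 + sum_k w_k * ramp_k(t) is nondecreasing); the knot placement, toy data, and optimizer are illustrative choices. An observed interval (L, R] contributes F(R|x) - F(L|x) to the likelihood, with F(t|x) = Phi(alpha(t) + x*b).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
knots = np.linspace(0.0, 3.0, 6)                     # 5 ramp basis functions

def ramps(t):
    """Degree-one I-splines: 0 before a knot span, linear inside, 1 after."""
    lo, hi = knots[:-1], knots[1:]
    return np.clip((np.atleast_1d(t)[:, None] - lo) / (hi - lo), 0.0, 1.0)

def F(t, x, a0, w, b):
    """Model CDF: F(t | x) = Phi(alpha(t) + x*b), alpha nondecreasing."""
    return norm.cdf(a0 + ramps(t) @ w + b * x)

def negloglik(par, L, R, x):
    a0, w, b = par[0], np.exp(par[1:6]), par[6]      # exp() keeps weights >= 0
    FL = np.where(L <= 0, 0.0, F(L, x, a0, w, b))    # F(0) = 0
    FR = np.where(np.isinf(R), 1.0, F(R, x, a0, w, b))
    return -np.sum(np.log(FR - FL + 1e-10))

# Toy case 2 interval-censored data: T only known relative to exams at u < v
n = 400
x = rng.normal(size=n)
t = rng.weibull(1.5, n) * np.exp(-0.3 * x)           # latent failure times
u = rng.uniform(0.2, 1.2, n); v = u + rng.uniform(0.2, 1.2, n)
L = np.where(t <= u, 0.0, np.where(t <= v, u, v))
R = np.where(t <= u, u, np.where(t <= v, v, np.inf))

res = minimize(negloglik, x0=np.zeros(7), args=(L, R, x), method="Nelder-Mead",
               options={"maxiter": 20000})
print("estimated covariate effect:", round(res.x[6], 2))
```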

17.
Zhang J, Peng Y. Statistics in Medicine 2007; 26(16): 3157–3171.
The proportional hazards (PH) mixture cure model and the accelerated failure time (AFT) mixture cure model are usually used in analysing failure time data with long-term survivors. However, the semiparametric AFT mixture cure model has attracted less attention than the semiparametric PH mixture cure model because of the complexity of its estimation method. In this paper, we propose a new estimation method for the semiparametric AFT mixture cure model. This method employs the EM algorithm and the rank estimator of the AFT model to estimate the parameters of interest. The M-step of the EM algorithm, which incorporates a rank-like estimating equation, can be carried out easily using linear programming. To evaluate the performance of the proposed method, we conduct a simulation study; the results demonstrate that the proposed method performs better than the existing estimation method, and that the semiparametric AFT mixture cure model improves the identifiability of the parameters in comparison with the parametric AFT mixture cure model. To illustrate, we apply the model and the proposed method to a data set of failure times from bone marrow transplant patients.

18.
We consider structural measurement error models for group testing data. Likelihood inference based on structural measurement error models requires one to specify a model for the latent true predictors. Inappropriate specification of this model can lead to erroneous inference. We propose a new method tailored to detect latent‐variable model misspecification in structural measurement error models for group testing data. Compared with the existing diagnostic methods developed for the same purpose, our method shows vast improvement in the power to detect latent‐variable model misspecification in group testing design. We illustrate the implementation and performance of the proposed method via simulation and application to a real data example. Copyright © 2009 John Wiley & Sons, Ltd.

19.
We propose a flexible cure rate model that accommodates different censoring distributions for the cured and uncured groups and also allows some individuals to be observed as cured when their survival time exceeds a known threshold. We model the survival times of the uncured group using an accelerated failure time model with errors distributed according to the seminonparametric distribution, potentially truncated at a known threshold. We suggest a straightforward extension of the usual expectation–maximization approach for obtaining estimates in cure rate models to accommodate the cure threshold and dependent censoring, and we additionally suggest a likelihood ratio test for the presence of dependent censoring in the proposed cure rate model. We show through numerical studies that our model has desirable properties and leads to approximately unbiased parameter estimates in a variety of scenarios. To demonstrate how our method performs in practice, we analyze data from a bone marrow transplantation study and a liver transplant study. Copyright © 2016 John Wiley & Sons, Ltd.

20.
We study Bayesian linear regression models with skew‐symmetric scale mixtures of normal error distributions. These kinds of models can be used to capture departures from the usual assumption of normality of the errors in terms of heavy tails and asymmetry. We propose a general noninformative prior structure for these regression models and show that the corresponding posterior distribution is proper under mild conditions. We extend these propriety results to cases where the response variables are censored. The latter scenario is of interest in the context of accelerated failure time models, which are relevant in survival analysis. We present a simulation study that demonstrates good frequentist properties of the posterior credible intervals associated with the proposed priors. This study also sheds some light on the trade‐off between increased model flexibility and the risk of over‐fitting. We illustrate the performance of the proposed models with real data. Although we focus on models with univariate response variables, we also present some extensions to the multivariate case in the Supporting Information. Copyright © 2016 John Wiley & Sons, Ltd.
