Similar articles
 20 similar articles found (search time: 15 ms)
1.
    
This paper provides a comparison of the three‐parameter exponentiated Weibull (EW) and generalized gamma (GG) distributions. The connection between these two different families is that the hazard functions of both have the four standard shapes (increasing, decreasing, bathtub, and arc shaped), and in fact, the shape of the hazard is the same for identical values of the three parameters. For a given EW distribution, we define a matching GG using simulation and also by matching the 5th, 50th, and 95th percentiles. We compare EW and matching GG distributions graphically and using the Kullback–Leibler distance. We find that the survival functions for the EW and matching GG are graphically indistinguishable, and only the hazard functions can sometimes be seen to be slightly different. The Kullback–Leibler distances are very small and decrease with increasing sample size. We conclude that the similarity between the two distributions is striking, and therefore, the EW represents a convenient alternative to the GG with the identical richness of hazard behavior. More importantly, these results suggest that having the four basic hazard shapes may to some extent be an important structural characteristic of any family of distributions. Copyright © 2014 John Wiley & Sons, Ltd.
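The percentile-matching step described in the abstract can be sketched directly, because the EW distribution has a closed-form quantile function. The parameter values below are illustrative, not taken from the paper:

```python
import math

def ew_cdf(t, k, lam, theta):
    # Exponentiated Weibull CDF: F(t) = [1 - exp(-(t/lam)^k)]^theta
    return (1.0 - math.exp(-((t / lam) ** k))) ** theta

def ew_quantile(p, k, lam, theta):
    # Closed-form inverse: t_p = lam * (-log(1 - p**(1/theta)))**(1/k)
    return lam * (-math.log(1.0 - p ** (1.0 / theta))) ** (1.0 / k)

# Illustrative EW parameters (assumed, not from the paper)
k, lam, theta = 1.5, 2.0, 0.7

# The three percentiles used for matching in the abstract: 5th, 50th, 95th
targets = [ew_quantile(p, k, lam, theta) for p in (0.05, 0.50, 0.95)]
```

A matching GG would then be chosen so its quantile function agrees with `targets` at these three probabilities.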

2.
    
The proportional hazard model is one of the most important statistical models used in medical research involving time‐to‐event data. Simulation studies are routinely used to evaluate the performance and properties of the model and other alternative statistical models for time‐to‐event outcomes under a variety of situations. Complex simulations that examine multiple situations with different censoring rates demand approaches that can accommodate this variety. In this paper, we propose a general framework for simulating right‐censored survival data for proportional hazards models by simultaneously incorporating a baseline hazard function from a known survival distribution, a known censoring time distribution, and a set of baseline covariates. Specifically, we present scenarios in which time to event is generated from exponential or Weibull distributions and censoring time has a uniform or Weibull distribution. The proposed framework incorporates any combination of covariate distributions. We describe the steps involved in nested numerical integration and using a root‐finding algorithm to choose the censoring parameter that achieves predefined censoring rates in simulated survival data. We conducted simulation studies to assess the performance of the proposed framework. We demonstrated the application of the new framework in a comprehensively designed simulation study. We investigated the effect of censoring rate on potential bias in estimating the conditional treatment effect using the proportional hazard model in the presence of unmeasured confounding variables. Copyright © 2016 John Wiley & Sons, Ltd.
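The root-finding step for hitting a prespecified censoring rate has a closed form in the simplest exponential-event, uniform-censoring scenario: with T ~ Exp(lam) and C ~ Uniform(0, b), P(C < T) = (1 - exp(-lam*b)) / (lam*b). A minimal bisection sketch under that assumption (the paper's general framework uses nested numerical integration; the target rate and lam below are illustrative):

```python
import math

def censor_rate(b, lam):
    # P(censored) = P(C < T) with T ~ Exp(lam), C ~ Uniform(0, b):
    # integrates S_T(c) = exp(-lam * c) against the uniform censoring density
    return (1.0 - math.exp(-lam * b)) / (lam * b)

def find_censor_bound(target, lam, lo=1e-6, hi=1e6):
    # Bisection: censor_rate is strictly decreasing in b,
    # from ~1 as b -> 0 down to ~0 as b -> infinity
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if censor_rate(mid, lam) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Choose the uniform upper bound giving a 30% censoring rate when lam = 1
b = find_censor_bound(0.30, lam=1.0)
```

Generating C ~ Uniform(0, b) with this b then yields, on average, the predefined censoring proportion.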

3.
    
We develop flexible multiparameter regression (MPR) survival models for interval-censored survival data arising in longitudinal prospective studies and longitudinal randomised controlled clinical trials. A multiparameter Weibull regression survival model, which is wholly parametric, and has nonproportional hazards, is the main focus of the article. We describe the basic model, develop the interval-censored likelihood, and extend the model to include gamma frailty and a dispersion model. We evaluate the models by means of a simulation study and a detailed reanalysis of data from the Signal Tandmobiel study. The results demonstrate that the MPR model with frailty is computationally efficient and provides an excellent fit to the data.

4.
    
It is known that the ages of onset of many diseases are determined by both a genetic predisposition to disease as well as environmental risk factors that are capable of either triggering or hastening the onset of disease. Difficulties in modelling onset ages arise when a large fraction fail to inherit the disease-causing gene, and multiple reasons for censoring result in unobserved onset ages. We present a parametric Bayesian model that includes subjects with missing age information, non-susceptible subjects and allows for regression on risk factor information. The model is fit using Markov chain Monte Carlo simulation from the posterior distribution, and allows the simultaneous estimation of the proportion of the population at risk of disease, the mean onset age of disease, survival after disease onset, and the association of risk factors with susceptibility, onset age and survival after onset. An example employing Huntington's disease data is presented.

5.
    
Traditional methods of sample size and power calculations in clinical trials with a time‐to‐event end point are based on the logrank test (and its variations), Cox proportional hazards (PH) assumption, or comparison of means of 2 exponential distributions. Of these, sample size calculation based on PH assumption is likely the most common and allows adjusting for the effect of one or more covariates. However, when designing a trial, there are situations when the assumption of PH may not be appropriate. Additionally, when it is known that there is a rapid decline in the survival curve for a control group, such as from previously conducted observational studies, a design based on the PH assumption may confer only a minor statistical improvement for the treatment group that is neither clinically nor practically meaningful. For such scenarios, a clinical trial design that focuses on improvement in patient longevity is proposed, based on the concept of proportional time using the generalized gamma ratio distribution. Simulations are conducted to evaluate the performance of the proportional time method and to identify the situations in which such a design will be beneficial as compared to the standard design using a PH assumption, piecewise exponential hazards assumption, and specific cases of a cure rate model. A practical example in which hemorrhagic stroke patients are randomized to 1 of 2 arms in a putative clinical trial demonstrates the usefulness of this approach by drastically reducing the number of patients needed for study enrollment.

6.
    
Spatial scan statistics are widely applied to identify spatial clusters in geographic disease surveillance. To evaluate the statistical significance of detected clusters, Monte Carlo hypothesis testing is often used because the null distribution of spatial scan statistics is not known. A drawback of the method is that we have to increase the number of replications to obtain accurate p‐values. Gumbel‐based p‐value approximations for spatial scan statistics have recently been proposed and evaluated for Poisson and Bernoulli models. In this study, we examine the use of a generalized extreme value distribution to approximate the null distribution of spatial scan statistics as well as the Gumbel distribution. Through simulation, p‐value approximations using extreme value distributions for spatial scan statistics are assessed for multinomial and ordinal models in addition to Poisson and Bernoulli models. Copyright © 2014 John Wiley & Sons, Ltd.
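A Gumbel-based p-value approximation of the kind evaluated here can be sketched as follows: fit a Gumbel distribution by the method of moments to the Monte Carlo replicates of the scan statistic, then read approximate p-values off the fitted upper tail. In this self-contained sketch, standard Gumbel draws stand in for the scan-statistic replicates:

```python
import math
import random
import statistics

EULER_GAMMA = 0.5772156649015329

def fit_gumbel(sample):
    # Method-of-moments fit: Gumbel variance is (pi^2 / 6) * beta^2
    # and mean is mu + EULER_GAMMA * beta
    beta = statistics.stdev(sample) * math.sqrt(6.0) / math.pi
    mu = statistics.mean(sample) - EULER_GAMMA * beta
    return mu, beta

def gumbel_pvalue(x, mu, beta):
    # Upper-tail p-value under the fitted Gumbel null distribution
    return 1.0 - math.exp(-math.exp(-(x - mu) / beta))

random.seed(0)
# Stand-in for Monte Carlo replicates of the maximum scan statistic:
# standard Gumbel draws via the inverse transform -log(-log(U))
reps = [-math.log(-math.log(random.random())) for _ in range(100_000)]
mu, beta = fit_gumbel(reps)
p_approx = gumbel_pvalue(3.0, mu, beta)
```

The appeal, as the abstract notes, is that far fewer Monte Carlo replications are needed for accurate small p-values than with the purely empirical rank-based p-value.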

7.
    
Left truncated right censored (LTRC) data arise quite commonly from survival studies. In this article, a model based on piecewise linear approximation is proposed for the analysis of LTRC data with covariates. Specifically, the model involves a piecewise linear approximation for the cumulative baseline hazard function of the proportional hazards model. The principal advantage of the proposed model is that it does not depend on restrictive parametric assumptions while being flexible and data-driven. Likelihood inference for the model is developed. Through detailed simulation studies, the robustness property of the model is studied by fitting it to LTRC data generated from different processes covering a wide range of lifetime distributions. A sensitivity analysis is also carried out by fitting the model to LTRC data generated from a process with a piecewise constant baseline hazard. It is observed that the performance of the model is quite satisfactory in all those cases. Analyses of two real LTRC datasets by using the model are provided as illustrative examples. Applications of the model in some practical prediction issues are discussed. In summary, the proposed model provides a comprehensive and flexible approach to model a general structure for LTRC lifetime data.
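The piecewise linear cumulative baseline hazard at the heart of the proposed model can be sketched as simple interpolation over a knot grid; the knot locations and hazard values below are illustrative, not estimates from the article:

```python
import bisect
import math

# Knots of the piecewise linear cumulative baseline hazard H0 (illustrative);
# the first knot is 0 with H0(0) = 0, and H must be nondecreasing
knots = [0.0, 1.0, 2.0, 5.0]
H = [0.0, 0.3, 0.8, 2.0]

def cum_hazard(t, knots, H):
    # Linear interpolation between knots; beyond the last knot this
    # extrapolates with the slope of the final segment
    i = min(bisect.bisect_right(knots, t) - 1, len(knots) - 2)
    frac = (t - knots[i]) / (knots[i + 1] - knots[i])
    return H[i] + frac * (H[i + 1] - H[i])

def survival(t, knots, H, lin_pred=0.0):
    # Proportional hazards survival: S(t | x) = exp(-H0(t) * exp(x'beta))
    return math.exp(-cum_hazard(t, knots, H) * math.exp(lin_pred))
```

In the actual model the knot values would be estimated by maximising the LTRC likelihood; left truncation enters by conditioning each contribution on survival past the entry time.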

8.
    
In conventional survival analysis there is an underlying assumption that all study subjects are susceptible to the event. In general, this assumption does not adequately hold when investigating the time to an event other than death. Owing to genetic and/or environmental etiology, study subjects may not be susceptible to the disease. Analyzing nonsusceptibility has become an important topic in biomedical, epidemiological, and sociological research, with recent statistical studies proposing several mixture models for right‐censored data in regression analysis. In longitudinal studies, we often encounter left, interval, and right‐censored data because of incomplete observations of the time endpoint, as well as possibly left‐truncated data arising from the dissimilar entry ages of recruited healthy subjects. To analyze these kinds of incomplete data while accounting for nonsusceptibility and possible crossing hazards in the framework of mixture regression models, we utilize a logistic regression model to specify the probability of susceptibility, and a generalized gamma distribution, or a log‐logistic distribution, in the accelerated failure time location‐scale regression model to formulate the time to the event. Relative times of the conditional event time distribution for susceptible subjects are extended in the accelerated failure time location‐scale submodel. We also construct graphical goodness‐of‐fit procedures on the basis of the Turnbull–Frydman estimator and newly proposed residuals. Simulation studies were conducted to demonstrate the validity of the proposed estimation procedure. The mixture regression models are illustrated with alcohol abuse data from the Taiwan Aboriginal Study Project and hypertriglyceridemia data from the Cardiovascular Disease Risk Factor Two‐township Study in Taiwan. Copyright © 2013 John Wiley & Sons, Ltd.
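The mixture structure described, a logistic susceptibility probability combined with an AFT model for susceptible subjects, yields a population survival function that plateaus at the nonsusceptible fraction: S_pop(t) = (1 - p) + p * S_u(t). A sketch with a log-logistic susceptible-time distribution (the parameterization here is assumed for illustration, not the paper's exact location-scale form):

```python
import math

def mixture_survival(t, p, mu, sigma):
    # Cure/mixture survival: S_pop(t) = (1 - p) + p * S_u(t), where S_u is
    # log-logistic with scale exp(mu) and shape 1/sigma (assumed form);
    # p is the susceptibility probability from the logistic submodel
    s_u = 1.0 / (1.0 + (t / math.exp(mu)) ** (1.0 / sigma))
    return (1.0 - p) + p * s_u

s0 = mixture_survival(0.0, 0.4, 0.0, 1.0)     # equals 1 at t = 0
s_inf = mixture_survival(1e9, 0.4, 0.0, 1.0)  # plateaus near 1 - p = 0.6
```

In the full model, p would itself depend on covariates through logistic regression, and mu and sigma through the AFT location-scale submodel.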

9.
    
Childhood and adolescent overweight or obesity, which may be quantified through the body mass index (BMI), is strongly associated with adult obesity and other health problems. Motivated by the Child and Adolescent Behaviors in Long‐term Evolution (CABLE) study, we are interested in individual, family, and school factors associated with marginal quantiles of longitudinal adolescent BMI values. We propose a new method for composite marginal quantile regression analysis for longitudinal outcome data, which performs marginal quantile regressions at multiple quantile levels simultaneously. The proposed method extends the quantile regression coefficient modeling method introduced by Frumento and Bottai (Biometrics 2016; 72: 74–84) to longitudinal data, accounting suitably for the correlation structure in longitudinal observations. A goodness‐of‐fit test for the proposed modeling is also developed. Simulation results show that the proposed method can be much more efficient than the analysis without taking correlation into account and the analysis performing separate quantile regressions at different quantile levels. The application to the longitudinal adolescent BMI data from the CABLE study demonstrates the practical utility of our proposal. Copyright © 2017 John Wiley & Sons, Ltd.

10.
    
Song X, Ma S. Statistics in Medicine 2008, 27(16): 3178–3190
There has been substantial effort devoted to the analysis of censored failure time with covariates that are subject to measurement error. Previous studies have focused on right-censored survival data, but interval-censored survival data with covariate measurement error are yet to be investigated. Our study is partly motivated by analysis of the HIV clinical trial AIDS Clinical Trial Group (ACTG) 175 data, where the occurrence time of AIDS is interval censored and the covariate CD4 count is subject to measurement error. We assume that the data are realized from a proportional hazards model. A multiple augmentation approach is proposed to convert interval-censored data to right-censored data, and the conditional score approach is then employed to account for measurement error. The proposed approach is easy to implement and can be readily extended to other semiparametric models. Extensive simulations show that the proposed approach has satisfactory finite-sample performance. The ACTG 175 data are then analyzed.
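The multiple-augmentation idea, converting interval-censored records into several right-censored datasets, can be sketched as repeated imputation of an exact event time within each observed interval. This is a simplification (uniform imputation; the paper's augmentation is tied to the assumed proportional hazards model), with hypothetical records:

```python
import random

def augment(intervals, m=5, seed=0):
    # Multiple augmentation (sketch): for each finite interval (L, R],
    # draw an exact event time uniformly on the interval, m times,
    # yielding m right-censored datasets of (time, event) pairs;
    # records with R = inf pass through as right-censored at L
    random.seed(seed)
    datasets = []
    for _ in range(m):
        ds = []
        for left, right in intervals:
            if right == float("inf"):
                ds.append((left, 0))                         # censored
            else:
                ds.append((random.uniform(left, right), 1))  # imputed event
        datasets.append(ds)
    return datasets

# Hypothetical records: (last negative exam, first positive exam)
intervals = [(1.0, 2.0), (0.5, float("inf")), (3.0, 4.5)]
datasets = augment(intervals, m=10)
```

Each augmented dataset can then be analysed with standard right-censored machinery (here, the conditional score approach), and the m results combined.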

11.
    
We present an approach that uses latent variable modeling and multiple imputation to correct rater bias when one group of raters tends to be more lenient in assigning a diagnosis than another. Our method assumes that there exists an unobserved moderate category of patient who is assigned a positive diagnosis by one type of rater and a negative diagnosis by the other type. We present a Bayesian random effects censored ordinal probit model that allows us to calibrate the diagnoses across rater types by identifying and multiply imputing 'case' or 'non-case' status for patients in the moderate category. A Markov chain Monte Carlo algorithm is presented to estimate the posterior distribution of the model parameters and generate multiple imputations. Our method enables the calibrated diagnosis variable to be used in subsequent analyses while also preserving uncertainty in true diagnosis. We apply our model to diagnoses of posttraumatic stress disorder (PTSD) from a depression study where nurse practitioners were twice as likely as clinical psychologists to diagnose PTSD despite the fact that participants were randomly assigned to either a nurse or a psychologist. Our model appears to balance PTSD rates across raters, provides a good fit to the data, and preserves between-rater variability. After calibrating the diagnoses of PTSD across rater types, we perform an analysis looking at the effects of comorbid PTSD on changes in depression scores over time. Results are compared with an analysis that uses the original diagnoses and show that calibrating the PTSD diagnoses can yield different inferences.

12.
    
This paper presents a parametric method of fitting semi‐Markov models with piecewise‐constant hazards in the presence of left, right and interval censoring. We investigate transition intensities in a three‐state illness–death model with no recovery. We relax the Markov assumption by adjusting the intensity for the transition from state 2 (illness) to state 3 (death) for the time spent in state 2 through a time‐varying covariate. This involves the exact time of the transition from state 1 (healthy) to state 2. When the data are subject to left or interval censoring, this time is unknown. In the estimation of the likelihood, we take into account interval censoring by integrating out all possible times for the transition from state 1 to state 2. For left censoring, we use an Expectation–Maximisation-inspired algorithm. A simulation study reflects the performance of the method. The proposed combination of statistical procedures provides great flexibility. We illustrate the method in an application by using data on stroke onset for the older population from the UK Medical Research Council Cognitive Function and Ageing Study. Copyright © 2012 John Wiley & Sons, Ltd.

13.
    

14.
Total citations: 1 (self-citations: 0, citations by others: 1)
Zhang Z, Sun J, Sun L. Statistics in Medicine 2005, 24(9): 1399–1407
Current status data arise when each study subject is observed only once and the survival time of interest is known only to be either less or greater than the observation time. Such data often occur in, for example, cross-sectional studies, demographic investigations and tumorigenicity experiments, and several semi-parametric and non-parametric methods for their analysis have been proposed. However, most of these methods deal only with the situation where the observation time is independent of the underlying survival time, either completely or given covariates. This paper discusses regression analysis of current status data when the observation time may be related to the underlying survival time, and inference procedures are presented for estimation of regression parameters under the additive hazards regression model. The procedures can be easily implemented and are applied to two motivating examples.

15.
Objective: To address inference on the difference between population means when the variances are unknown and unequal. Methods: By Bayesian theory, the posterior distribution of the difference between two population means is the Behrens–Fisher distribution; Monte Carlo simulation is used to generate the frequency distribution of the Behrens–Fisher statistic for statistical inference. Results: When the simulated sample is sufficiently large, the exact Behrens–Fisher distribution, a confidence interval for the mean difference, and hypothesis-testing results are obtained. Conclusion: The results agree with those of other Behrens–Fisher approximation methods, but the proposed method does not rely on approximations.
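The Monte Carlo approach in this abstract can be sketched as follows: under noninformative priors, each population mean has a shifted, scaled Student-t posterior, so the posterior of the mean difference is simulated by drawing independent t variates for the two groups. The summary statistics below are illustrative:

```python
import math
import random

def t_draw(df):
    # Student-t variate: Z / sqrt(chi2_df / df), with chi2 built
    # from a sum of squared standard normals (stdlib only)
    z = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df))
    return z / math.sqrt(chi2 / df)

def behrens_fisher_interval(m1, s1, n1, m2, s2, n2,
                            draws=50_000, level=0.95):
    # Posterior of each mean is m_i + t_{n_i - 1} * s_i / sqrt(n_i);
    # simulate the difference and read off empirical quantiles
    diffs = sorted(
        (m1 + t_draw(n1 - 1) * s1 / math.sqrt(n1))
        - (m2 + t_draw(n2 - 1) * s2 / math.sqrt(n2))
        for _ in range(draws)
    )
    a = (1.0 - level) / 2.0
    return diffs[int(a * draws)], diffs[int((1.0 - a) * draws)]

random.seed(1)
# Illustrative summaries: mean, SD, n for each group
lo, hi = behrens_fisher_interval(10.0, 1.2, 15, 8.0, 2.0, 12)
```

A two-sided test of equal means follows by checking whether 0 falls inside the simulated interval.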

16.
    
We propose a changepoint model for the analysis of longitudinal CD4 T-cell counts for HIV infected subjects following highly active antiretroviral treatment. The profile of CD4 counts for each subject follows a simple, 'broken stick' changepoint model, with random subject-specific parameters, including the changepoint. The model accounts for baseline covariates. The longitudinal CD4 records are censored at the time of the subject going off-study-treatment. This is a potentially informative drop-out mechanism, which we address by modelling it jointly with the CD4 count outcome. The drop-out model incorporates terms from the CD4 model, including the changepoint. The estimation is done in a Bayesian framework, with implementation via Markov chain Monte Carlo methods in the WinBUGS software. Model selection using DIC indicates that the data support the complex random changepoint and informative censoring model.

17.
    
Multivariate current‐status failure time data consist of several possibly related event times of interest, in which the status of each event is determined at a single examination time. If the examination time is intrinsically related to the event times, the examination is referred to as dependent censoring and needs to be taken into account. Such data often occur in clinical studies and animal carcinogenicity experiments. To accommodate for possible dependent censoring, this paper proposes a joint frailty model for event times and dependent censoring time. We develop a likelihood approach using Gaussian quadrature techniques for obtaining maximum likelihood estimates. We conduct extensive simulation studies for investigating finite‐sample properties of the proposed method. We illustrate the proposed method with an analysis of patients with ankylosing spondylitis, where the examination time may be dependent on the event times of interest. Copyright © 2013 John Wiley & Sons, Ltd.

18.
    
Cox C. Statistics in Medicine 2008, 27(21): 4301–4312

19.
    
Burden analysis in public health often involves the estimation of exposure‐attributable fractions from observed time series. When the entire population is exposed, the association between the exposure and outcome must be carefully modelled before the attributable fractions can be estimated. This article derives asymptotic convergences for the estimation of attributable fractions for commonly used time series models (ARMAX, Poisson, negative binomial, and Serfling), using for the most part the delta method. For the Poisson regression, the estimation of the attributable fraction is achieved by a Monte Carlo algorithm, taking into account both an estimation and a prediction error. A simulation study compares these estimations in the case of an epidemic exposure and highlights the importance of thorough analysis of the data: when the outcome is generated under an additive model, additive models perform well and multiplicative models perform poorly, and vice versa. However, the Serfling model performs poorly in all cases. Of note, a misspecification in the form or delay of the association between the exposure and the outcome leads to mediocre estimation of the attributable fraction. An application to the fraction of French outpatient antibiotic use attributable to influenza between 2003 and 2010 illustrates the asymptotic convergences. This study suggests that the Serfling model should be avoided when estimating attributable fractions, while the model of choice should be selected after careful investigation of the association between the exposure and outcome.
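For the multiplicative (Poisson-type) models discussed, the attributable fraction compares fitted expected counts with and without the exposure term. A minimal sketch with assumed coefficients and a hypothetical exposure series (the article's full procedure additionally propagates estimation and prediction error via Monte Carlo):

```python
import math

def attributable_fraction(alpha, beta, exposure):
    # Multiplicative model: mu_t = exp(alpha + beta * e_t);
    # AF = (sum(mu_t) - sum(mu_t with exposure set to 0)) / sum(mu_t)
    mu = [math.exp(alpha + beta * e) for e in exposure]
    mu0 = [math.exp(alpha) for _ in exposure]
    return (sum(mu) - sum(mu0)) / sum(mu)

# Hypothetical weekly epidemic exposure series (e.g. influenza incidence)
exposure = [0.0, 0.0, 1.5, 3.0, 2.0, 0.5, 0.0]
af = attributable_fraction(alpha=2.0, beta=0.2, exposure=exposure)
```

Setting beta to 0 removes the association and the attributable fraction collapses to zero, which is the sanity check behind the counterfactual baseline.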

20.
    
Longitudinal measurement of biomarkers is important in determining risk factors for binary endpoints such as infection or disease. However, biomarkers are subject to measurement error, and some are also subject to left‐censoring due to a lower limit of detection. Statistical methods to address these issues are few. We herein propose a generalized linear mixed model and estimate the model parameters using the Monte Carlo Newton‐Raphson (MCNR) method. Inferences regarding the parameters are made by applying Louis's method and the delta method. Simulation studies were conducted to compare the proposed MCNR method with existing methods including the maximum likelihood (ML) method and the ad hoc approach of replacing the left‐censored values with half of the detection limit (HDL). The results showed that the performance of the MCNR method is superior to ML and HDL with respect to the empirical standard error, as well as the coverage probability for the 95% confidence interval. The HDL method uses an incorrect imputation, and its computation is constrained by the number of quadrature points; the ML method suffers from the same constraint, whereas the MCNR method has no such limitation and approximates the likelihood function better than the other methods. The improvement of the MCNR method is further illustrated with real‐world data from a longitudinal study of local cervicovaginal HIV viral load and its effects on oncogenic HPV detection in HIV‐positive women.
