Similar Literature
20 similar documents found.
1.
Brookmeyer and Crowley derived a non-parametric confidence interval for the median survival time of a homogeneous population by inverting a generalization of the sign test for censored data. The 1 - α confidence interval for the median is essentially the set of all values t such that the Kaplan-Meier estimate of the survival curve at time t does not differ significantly from one-half at the two-sided α level. Su and Wei extended this approach to the two-sample problem and derived a confidence interval for the difference in median survival times based on the Kaplan-Meier estimates of the individual survival curves and a 'minimum dispersion' test statistic. Here, I incorporate covariates into the analysis by assuming a proportional hazards model for the covariate effects, while leaving the two underlying survival curves virtually unconstrained. I generate a simultaneous confidence region for the two median survival times, adjusted to any selected value, z, of the covariate vector, using a test-based approach analogous to Brookmeyer and Crowley's for the one-sample case. This region is, in turn, used to derive a confidence interval for the difference in median survival times between the two treatment groups at the selected value of z. Employment of a procedure suggested by Aitchison sets the level of the simultaneous region to a value that should yield, at least approximately, the desired confidence coefficient for the difference in medians. Simulation studies indicate that the method provides reasonably accurate coverage probabilities.
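As a rough Python illustration of this test-based construction, the sketch below inverts a Wald-type test of S(t) = 1/2 using the Kaplan-Meier estimate and Greenwood's variance; the function name and the exact test statistic are illustrative choices, not the authors' implementation.

```python
# A minimal sketch (not the authors' code): invert a Wald-type test of
# S(t) = 1/2 using the Kaplan-Meier estimate and Greenwood's variance to get
# a test-based confidence interval for the median survival time.
import numpy as np
from scipy.stats import norm

def km_median_ci(time, event, alpha=0.05):
    time, event = np.asarray(time, float), np.asarray(event, int)
    order = np.argsort(time)
    time, event = time[order], event[order]
    t_uniq = np.unique(time[event == 1])                     # distinct event times
    n_risk = np.array([(time >= t).sum() for t in t_uniq])   # numbers at risk
    d = np.array([((time == t) & (event == 1)).sum() for t in t_uniq])  # deaths
    surv = np.cumprod(1 - d / n_risk)                        # Kaplan-Meier estimate
    denom = n_risk * (n_risk - d)
    var_terms = np.where(denom > 0, d / np.maximum(denom, 1), 0.0)
    greenwood = surv**2 * np.cumsum(var_terms)               # Greenwood variance
    z = norm.ppf(1 - alpha / 2)
    inside = np.abs(surv - 0.5) <= z * np.sqrt(greenwood)    # non-rejection set
    if not inside.any():
        return None                                          # set is empty
    return t_uniq[inside].min(), t_uniq[inside].max()        # endpoints of the set
```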

2.
A nonparametric test for equality of survival medians
In clinical trials, researchers often encounter testing for equality of survival medians across study arms based on censored data. Even though Brookmeyer and Crowley introduced a method for comparing the medians of several survival distributions, some researchers still misuse procedures that are designed for testing the homogeneity of survival curves, including the log-rank test, the Wilcoxon test, and the Cox model. This practice leads to inflation of the probability of a type I error, particularly when the underlying assumptions of these procedures are not met. We propose a new nonparametric method for testing the equality of several survival medians based on Kaplan-Meier estimation from randomly right-censored data. We derive asymptotic properties of this test statistic. Through simulations, we compute and compare the empirical probabilities of type I errors and the power of this new procedure with those of the Brookmeyer-Crowley, log-rank, and Wilcoxon methods. Our simulation results indicate that the performance of these test procedures depends on the level of censoring and the appropriateness of the underlying assumptions. When the objective is to test homogeneity of survival medians rather than survival curves and the assumptions of these tests are not met, some of these procedures severely inflate the probability of a type I error. In these situations, our test statistic provides an alternative to the Brookmeyer-Crowley test.

3.
Objective: To illustrate, with a worked example, the fitting of a piecewise exponential model for survival data and its implementation in SAS. Methods: The survival time is partitioned into several intervals, and a piecewise exponential model is fitted to the survival data using PROC GENMOD or PROC LIFEREG in SAS; the hazard rate and survival probability for each interval are then computed from the parameter estimates. Results: The piecewise exponential distribution fitted the data satisfactorily; the mortality hazards differed across the three time intervals, and the difference between the first interval and the second and third intervals was statistically significant (P<0.05). Conclusion: The piecewise exponential model has a simple form and easily estimated parameters, and, when fitted appropriately, it can improve statistical efficiency.
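The paper works in SAS (PROC GENMOD / PROC LIFEREG); as a rough analogue of the same model class, the Python sketch below fits a piecewise exponential model with the lifelines package, assuming it is available. The breakpoints and simulated data are made up for illustration.

```python
# Illustrative Python analogue of the piecewise exponential fit (the paper
# itself uses SAS); assumes the lifelines package is installed.
import numpy as np
from lifelines import PiecewiseExponentialFitter

rng = np.random.default_rng(1)
T = rng.exponential(scale=12, size=200)        # latent survival times
C = rng.uniform(5, 25, size=200)               # censoring times
durations = np.minimum(T, C)
observed = (T <= C).astype(int)

# three time intervals: (0, 5], (5, 15], (15, inf) -> two breakpoints
pwf = PiecewiseExponentialFitter(breakpoints=[5, 15]).fit(durations, observed)
pwf.print_summary()                            # one parameter per interval
print(pwf.survival_function_at_times([5, 15, 20]))
```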

4.
We propose an extension of the landmark model for ordinary survival data as a new approach to the problem of dynamic prediction in competing risks with time-dependent covariates. We fix a set of landmark time points t_LM within the follow-up interval. For each of these landmark time points t_LM, we create a landmark data set by selecting individuals at risk at t_LM; we fix the value of the time-dependent covariate in each landmark data set at t_LM. We assume Cox proportional hazards models for the cause-specific hazards and consider smoothing the (possibly) time-dependent effect of the covariate across the different landmark data sets. Fitting this model is possible within standard statistical software. We illustrate the features of landmark modelling on a real data set on bone marrow transplantation. Copyright © 2012 John Wiley & Sons, Ltd.
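A minimal sketch of the landmark data-set construction described above, in Python with pandas: at each landmark time, keep subjects still at risk and carry forward the last covariate value observed by that time. All column names (id, time, status, obs_time, value) are hypothetical.

```python
# Sketch of landmark data-set construction (hypothetical column names):
# at landmark time t_lm, keep subjects still at risk, carry forward the last
# covariate value observed at or before t_lm, then censor at t_lm + horizon.
import pandas as pd

def make_landmark_data(surv, meas, t_lm, horizon):
    """surv: one row per subject ('id', 'time', 'status');
    meas: long-format covariate measurements ('id', 'obs_time', 'value')."""
    at_risk = surv[surv["time"] > t_lm].copy()
    last = (meas[meas["obs_time"] <= t_lm]
            .sort_values("obs_time")
            .groupby("id")["value"].last()
            .rename("cov_at_lm")
            .reset_index())
    out = at_risk.merge(last, on="id", how="inner")   # drops subjects with no prior measurement
    out["status"] = out["status"].where(out["time"] <= t_lm + horizon, 0)
    out["time"] = out["time"].clip(upper=t_lm + horizon)
    out["landmark"] = t_lm
    return out

# stacked ("super") landmark data set over a grid of landmark times:
# landmark_df = pd.concat([make_landmark_data(surv, meas, s, horizon=5) for s in (0, 1, 2, 3)])
```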

5.
In survival studies, information lost through censoring can be partially recaptured through repeated measures data that are predictive of survival. In addition, such data may be useful in removing bias in survival estimates due to censoring that depends upon the repeated measures. Here we investigate joint models for survival T and repeated measurements Y, given a vector of covariates Z. Mixture models indexed as f(T|Z) f(Y|T,Z) are well suited for assessing covariate effects on survival time. Our objective is efficiency gains, using non-parametric models for Y in order to avoid introducing bias by misspecification of the distribution for Y. We model (T|Z) as a piecewise exponential distribution with a proportional hazards covariate effect. The component (Y|T,Z) has a multinomial model. The joint likelihood for the survival and longitudinal data is maximized using the EM algorithm. The estimate of the covariate effect is compared to the estimate based on the standard proportional hazards model and to an alternative joint-model-based estimate. We demonstrate modest gains in efficiency when using the joint piecewise exponential model. In a simulation, the estimated efficiency gain over the standard proportional hazards model is 6.4 per cent. In clinical trial data, the estimated efficiency gain over the standard proportional hazards model is 10.2 per cent.

6.
7.
Modeling the incubation period of inhalational anthrax.
Ever since the pioneering work of Philip Sartwell, the incubation period distribution for infectious diseases has most often been modeled using a lognormal distribution. Theoretical models based on underlying disease mechanisms in the host are less well developed. This article modifies a theoretical model originally developed by Brookmeyer and others for the inhalational anthrax incubation period distribution in humans by using a more accurate distribution to represent the in vivo bacterial growth phase and by extending the model to represent the time from exposure to death, thereby allowing the model to be fit to nonhuman primate time-to-death data. The resulting incubation period distribution and the dose dependence of the median incubation period are in good agreement with human data from the 1979 accidental atmospheric anthrax release in Sverdlovsk, Russia, and with limited nonhuman primate data. The median incubation period for the Sverdlovsk victims is 9.05 days (95% confidence interval = 8.0-10.3), shorter than previous estimates, and it is predicted to drop to less than 2.5 days at doses above 10^6 spores. The incubation period distribution is important because the left tail determines the time at which clinical diagnosis or syndromic surveillance systems might first detect an anthrax outbreak based on early symptomatic cases, the entire distribution determines the efficacy of medical intervention (which depends on the speed of the prophylaxis campaign relative to the incubation period), and the right tail influences the recommended duration of antibiotic treatment.

8.
Cystic fibrosis (CF) is a progressive, genetic disease characterized by frequent, prolonged drops in lung function. Accurately predicting rapid underlying lung-function decline is essential for clinical decision support and timely intervention. Determining whether an individual is experiencing a period of rapid decline is complicated by its heterogeneous timing and extent and by the error component of measured lung function. We construct individualized predictive probabilities for "nowcasting" rapid decline. We assume each patient's true longitudinal lung function, S(t), follows a nonlinear, nonstationary stochastic process, and accommodate between-patient heterogeneity through random effects. The corresponding lung-function decline at time t is defined as the rate of change, S′(t). We predict S′(t) conditional on observed covariate and measurement history by modeling measured lung function as a noisy version of S(t). The method is applied to data on 30 879 US CF Registry patients. Results are contrasted with a currently employed decision rule using single-center data on 212 individuals. Rapid decline is identified earlier using predictive probabilities than with the center's currently employed decision rule (mean difference: 0.65 years; 95% confidence interval (CI): 0.41, 0.89). We constructed a bootstrapping algorithm to obtain CIs for the predictive probabilities. We illustrate real-time implementation with R Shiny. Predictive accuracy is investigated using empirical simulations, which suggest this approach detects peak decline more accurately than a uniform threshold of rapid decline. Median area under the ROC curve estimates (Q1-Q3) were 0.817 (0.814-0.822) and 0.745 (0.741-0.747), respectively, implying reasonable accuracy for both. This article demonstrates how individualized rate-of-change estimates can be coupled with probabilistic predictive inference and implementation for a useful medical-monitoring approach.

9.
This article explores Bayesian joint models for a quantile of the longitudinal response, a mismeasured covariate, and an event time outcome, with an attempt to (i) characterize the entire conditional distribution of the response variable based on quantile regression, which may be more robust to outliers and misspecification of the error distribution; (ii) account for measurement error, evaluate non-ignorable missing observations, and adjust for departures from normality in the covariate; and (iii) avoid relying on a confidently specified parametric time-to-event model. When statistical inference is carried out for a longitudinal data set with non-central location, non-linearity, non-normality, measurement error, and missing values, as well as an interval-censored event time, it is important to account for these data features simultaneously in order to obtain more reliable and robust inferential results. Toward this end, we develop a Bayesian joint modeling approach to simultaneously estimate all parameters in three models: a quantile-regression-based nonlinear mixed-effects model for the response using the asymmetric Laplace distribution, a linear mixed-effects model with a skew-t distribution for the mismeasured covariate in the presence of informative missingness, and an accelerated failure time model with an unspecified nonparametric distribution for the event time. We apply the proposed modeling approach to analyzing an AIDS clinical data set and conduct simulation studies to assess the performance of the proposed joint models and method. Copyright © 2016 John Wiley & Sons, Ltd.
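For reference, the asymmetric Laplace working likelihood that typically underlies this kind of Bayesian quantile regression, for quantile level τ, can be written as

```latex
f(y \mid \mu, \sigma, \tau) \;=\; \frac{\tau(1-\tau)}{\sigma}
  \exp\!\left\{ -\rho_\tau\!\left( \frac{y-\mu}{\sigma} \right) \right\},
\qquad
\rho_\tau(u) \;=\; u\,\bigl\{ \tau - I(u<0) \bigr\},
```

so that maximizing this working likelihood in μ is equivalent to minimizing the usual check loss of τ-quantile regression.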

10.
We present a reversible jump Bayesian piecewise log-linear hazard model that extends the Bayesian piecewise exponential hazard to a continuous function of piecewise linear log hazards. A simulation study encompassing several different hazard shapes, accrual rates, censoring proportions, and sample sizes showed that the Bayesian piecewise linear log-hazard model estimated the true mean survival time and survival distributions better than the piecewise exponential hazard. Survival data from Wake Forest Baptist Medical Center are analyzed by both methods and the posterior results are compared.
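A small numpy sketch of the hazard and survival functions implied by a continuous piecewise linear log hazard (the model class described above); the knots and log-hazard values are made up for illustration.

```python
# Sketch of the survival function implied by a continuous piecewise linear
# log hazard; knots and log-hazard values are illustrative only.
import numpy as np

knots = np.array([0.0, 1.0, 3.0, 6.0])       # split points on the time axis
log_h = np.array([-1.5, -0.8, -1.0, -0.4])   # log hazard at each knot

def hazard(t):
    # linear interpolation of the log hazard, held flat beyond the last knot
    return np.exp(np.interp(t, knots, log_h))

def survival(t, step=0.001):
    # S(t) = exp(-integral_0^t h(u) du), integrated numerically
    grid = np.arange(0.0, t + step, step)
    return np.exp(-np.trapz(hazard(grid), grid))

print(survival(2.0), survival(5.0))
```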

11.
A linear relative risk form for the Cox model is sometimes more appropriate than the usual exponential form. The usual asymptotic confidence interval may not have the appropriate coverage, however, due to flatness of the likelihood in the neighbourhood of beta. For a single continuous covariate, we derive bootstrapped confidence intervals using two resampling methods. The first resamples the original data and yields both one-step and fully iterated estimates of beta. The second resamples the score and information quantities at each failure time to yield a one-step estimate. We computed the bootstrapped confidence intervals by three different methods and compared these intervals to one based on the asymptotic standard error and to a likelihood-based interval. The bootstrapped intervals did not perform well and underestimated the true coverage in most cases.
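A hedged sketch of the first resampling scheme (case resampling of the original data) for a Cox regression coefficient, using lifelines; note it uses the usual exponential relative risk rather than the linear relative risk form studied in the paper, and the column names ('time', 'event', 'x') are hypothetical.

```python
# Case-resampling bootstrap percentile interval for a Cox coefficient (sketch).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def bootstrap_cox_ci(df, covariate="x", n_boot=500, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    betas = []
    for _ in range(n_boot):
        boot = df.sample(n=len(df), replace=True,
                         random_state=int(rng.integers(1 << 31)))
        fit = CoxPHFitter().fit(boot, duration_col="time", event_col="event")
        betas.append(fit.params_[covariate])
    lo, hi = np.percentile(betas, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi          # percentile bootstrap interval for beta

# example with simulated data:
# rng = np.random.default_rng(1); x = rng.normal(size=300)
# t = rng.exponential(np.exp(-0.5 * x)); c = rng.exponential(1.5, 300)
# data = pd.DataFrame({"time": np.minimum(t, c), "event": (t <= c).astype(int), "x": x})
# print(bootstrap_cox_ci(data))
```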

12.
I describe general expressions for the evaluation of sample size and power for the K group Mantel-logrank test or the Cox proportional hazards (PH) model score test. Under an exponential model, the method of Lachin and Foulkes for the 2 group case is extended to the K group case using the non-centrality parameter of the K - 1 df chi-square test. I also show that similar results apply to the K group score test in a Cox PH model. Lachin and Foulkes employed a truncated exponential distribution to provide for a non-linear rate of enrollment. I present expressions for the mean time of enrollment and the expected follow-up time in the presence of exponential losses to follow-up. When used with the expression for the non-centrality parameter of the test, equations are derived for the evaluation of sample size and power under specific designs with r years of recruitment and T years total duration. I also describe sample size and power for a stratified-adjusted K group test and for the assessment of a group by stratum interaction. Similarly, I describe computations for a stratified-adjusted analysis of a quantitative covariate and a test of a stratum by covariate interaction in the Cox PH model. Copyright © 2013 John Wiley & Sons, Ltd.
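Given a noncentrality parameter computed from the design (event rates, allocation, accrual and follow-up), power for the K group test follows from the noncentral chi-square distribution with K - 1 df; a minimal Python sketch, with the noncentrality value chosen arbitrarily:

```python
# Power of the K-group Mantel-logrank / Cox score test from the noncentrality
# parameter of its (K-1)-df chi-square statistic. The value 'nc' would come
# from the design; here it is simply an illustrative number.
from scipy.stats import chi2, ncx2

def k_group_power(nc, k, alpha=0.05):
    crit = chi2.ppf(1 - alpha, df=k - 1)     # critical value under H0
    return ncx2.sf(crit, df=k - 1, nc=nc)    # rejection probability under H1

print(k_group_power(nc=10.5, k=3))
```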

13.
Relative survival is used to estimate patient survival excluding causes of death not related to the disease of interest. Rather than using cause-of-death information from death certificates, which is often poorly recorded, relative survival compares the observed survival to that expected in a matched group from the general population. Models for relative survival can be expressed on the hazard (mortality) rate scale, with the total mortality rate written as the sum of the underlying baseline (expected) mortality rate and the excess mortality rate due to the disease of interest. Previous models for relative survival have assumed that covariate effects act multiplicatively and have thus provided relative effects of differences between groups as excess mortality rate ratios. In this paper we consider (i) the use of an additive covariate model, which provides estimates of the absolute difference in the excess mortality rate; and (ii) the use of fractional polynomials in relative survival models for the baseline excess mortality rate and time-dependent effects. The approaches are illustrated using data on 115 331 female breast cancer patients diagnosed between 1 January 1986 and 31 December 1990. Additive covariate relative survival models can be useful when the excess mortality rate is zero or slightly less than zero and can provide useful information from a public health perspective. The use of fractional polynomials has advantages over the usual piecewise estimation by providing smooth estimates of the baseline excess mortality rate and time-dependent effects for both the multiplicative and additive covariate models. All models presented in this paper can be estimated within a generalized linear models framework and thus can be implemented using standard software.
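In the notation suggested by the abstract, with λ*(t; z) the expected (population) mortality rate and ν(t; z) the excess mortality rate, one common way to write the two variants (not necessarily the authors' exact notation) is

```latex
\lambda(t; z) \;=\; \lambda^{*}(t; z) \;+\; \nu(t; z),
\qquad
\nu(t; z) = \nu_0(t)\,\exp(z^{\top}\beta) \ \ \text{(multiplicative)},
\qquad
\nu(t; z) = \nu_0(t) + z^{\top}\gamma \ \ \text{(additive)}.
```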

14.
In situations in which one cannot specify a single primary outcome, epidemiologic analyses often examine multiple associations between outcomes and explanatory covariates or risk factors. To compare alternative approaches to the analysis of multiple outcomes in regression models, I used generalized estimating equations (GEE) models, a multivariate extension of generalized linear models, to incorporate the dependence among the outcomes from the same subject and to provide robust variance estimates of the regression coefficients. I applied the methods in a hospital-population-based study of complications of surgical anaesthesia, using GEE model fitting and quasi-likelihood score and Wald tests. In one GEE model specification, I allowed the associations between each of the outcomes and a covariate to differ, yielding a regression coefficient for each of the outcome and covariate combinations; I obtained the covariances among the set of outcome-specific regression coefficients for each covariate from the robust 'sandwich' variance estimator. To address the problem of multiple inference, I used simultaneous methods that make adjustments to the test statistic p-values and the confidence interval widths, to control type I error and simultaneous coverage, respectively. In a second model specification, for each of the covariates I assumed a common association between the outcomes and the covariate, which eliminates the problem of multiplicity by use of a global test of association. In an alternative approach to multiplicity, I used empirical Bayes methods to shrink the outcome-specific coefficients toward a pooled mean that is similar to the common effect coefficient. GEE regression models can provide a flexible framework for estimation and testing of multiple outcomes. © 1998 John Wiley & Sons, Ltd.
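A minimal statsmodels GEE sketch in the spirit of the first model specification above (outcome-specific covariate effects via an outcome-by-covariate interaction, exchangeable working correlation, robust "sandwich" standard errors); the data set, outcome types, and covariate are invented for illustration.

```python
# GEE sketch: binary outcomes clustered by subject, exchangeable working
# correlation, robust variance estimates. Data are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, outcomes = 200, ["cardiac", "respiratory", "nausea"]
df = pd.DataFrame({
    "subject_id": np.repeat(np.arange(n_subj), len(outcomes)),
    "outcome_type": np.tile(outcomes, n_subj),
    "age": np.repeat(rng.normal(50, 12, n_subj), len(outcomes)),
})
df["y"] = rng.binomial(1, np.clip(0.2 + 0.002 * (df["age"] - 50), 0.01, 0.99))

model = smf.gee("y ~ C(outcome_type) * age", groups="subject_id", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())   # robust standard errors are the GEE default
```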

15.
Lee M, Fine JP. Statistics in Medicine 2011;30(27):3221-3235.
In survival analysis, a point estimate and confidence interval for median survival time have been frequently used to summarize the survival curve. However, such quantile analyses on competing risks data have not been widely investigated. In this paper, we propose parametric inferences for quantiles from the cumulative incidence function and develop parametric confidence intervals for quantiles. In addition, we study a simplified method of inference for the nonparametric approach. We compare the parametric and nonparametric inferences in empirical studies. Simulation studies show that the procedures perform well, with parametric analyses yielding smaller mean square error when the model is not too badly misspecified. We illustrate the methods with data from a breast cancer clinical trial.
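In the usual notation, with F_k(t) = P(T ≤ t, cause = k) the cumulative incidence function for cause k, the quantile being estimated here is

```latex
q_p \;=\; \inf\{\, t : F_k(t) \ge p \,\}, \qquad 0 < p < F_k(\infty),
\quad \text{where } F_k(t) = \Pr(T \le t,\; \text{cause} = k),
```

and the parametric and nonparametric intervals discussed in the abstract are confidence intervals for q_p.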

16.
Prognostic studies often estimate survival curves for patients with different covariate vectors, but the validity of their results depends largely on the accuracy of the estimated covariate effects. To avoid conventional proportional hazards and linearity assumptions, flexible extensions of Cox's proportional hazards model incorporate non-linear (NL) and/or time-dependent (TD) covariate effects. However, their impact on survival curve estimation is unclear. Our primary goal is to develop and validate a flexible method for estimating individual patients' survival curves, conditional on multiple predictors with possibly NL and/or TD effects. We first obtain maximum partial likelihood estimates of NL and TD effects and use backward elimination to select statistically significant effects into a final multivariable model. We then plug the selected NL and TD estimates into the full likelihood function and estimate the baseline hazard function and the resulting survival curves, conditional on individual covariate vectors. The TD and NL functions and the log hazard are modeled with unpenalized regression B-splines. In simulations, our flexible survival curve estimates were unbiased and had much lower mean square errors than the conventional estimates. In real-life analyses of mortality after septic shock, our model improved the deviance significantly (likelihood ratio test = 84.8, df = 20, p < 0.0001) and changed substantially the predicted survival for several subjects. Copyright © 2015 John Wiley & Sons, Ltd.

17.
This paper presents a parametric method of fitting semi-Markov models with piecewise-constant hazards in the presence of left, right and interval censoring. We investigate transition intensities in a three-state illness-death model with no recovery. We relax the Markov assumption by adjusting the intensity for the transition from state 2 (illness) to state 3 (death) for the time spent in state 2 through a time-varying covariate. This involves the exact time of the transition from state 1 (healthy) to state 2. When the data are subject to left or interval censoring, this time is unknown. In the estimation of the likelihood, we take into account interval censoring by integrating out all possible times for the transition from state 1 to state 2. For left censoring, we use an Expectation-Maximisation inspired algorithm. A simulation study reflects the performance of the method. The proposed combination of statistical procedures provides great flexibility. We illustrate the method in an application by using data on stroke onset for the older population from the UK Medical Research Council Cognitive Function and Ageing Study. Copyright © 2012 John Wiley & Sons, Ltd.

18.
Value in Health 2022;25(4):595-604.
Objectives: State-transition models (STMs) applied in oncology have given limited consideration to modeling postprogression survival data. This study presents an application of an STM focusing on methods to evaluate the postprogression transition and its impact on survival predictions. Methods: Data from the lenalidomide plus dexamethasone arm of the ASPIRE trial were used to estimate transition rates for an STM. The model accounted for the competing risk between the progression and preprogression death events and included an explicit structural link between the time to progression and subsequent death. The modeled transition rates were used to simulate individual disease trajectories in a discrete event simulation framework, based on which progression-free survival and overall survival over a 30-year time horizon were estimated. Survival predictions were compared with the observed trial data, matched external data, and estimates obtained from a more conventional partitioned survival analysis approach. Results: The rates of progression and preprogression death were modeled using piecewise exponential functions. The rate of postprogression mortality was modeled using an exponential function accounting for the nonlinear effect of the time to progression. The STM provided survival estimates that closely fitted the trial data and gave more plausible long-term survival predictions than the best-fitting Weibull model applied in a partitioned survival analysis. Conclusions: The fit of the STM suggested that the modeled transition rates accurately captured the underlying disease process over the modeled time horizon. The considerations of this study may apply to other settings and facilitate wider use of STMs in oncology.
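A toy Python discrete event simulation of the three-state structure described above (progression vs. preprogression death as competing risks, then postprogression death at a rate linked to the time of progression); all rates and the form of the link are invented for illustration and are not the ASPIRE estimates.

```python
# Toy discrete event simulation of a progression / preprogression-death /
# postprogression-death structure. All rates are illustrative only.
import numpy as np

rng = np.random.default_rng(42)

def simulate_patient():
    t_prog = rng.exponential(1 / 0.05)        # time to progression
    t_death_pre = rng.exponential(1 / 0.01)   # death before progression
    if t_death_pre <= t_prog:
        return t_death_pre, t_death_pre       # PFS event and OS event coincide
    post_rate = 0.03 * np.exp(-0.02 * t_prog) # earlier progression -> higher risk
    t_post = rng.exponential(1 / post_rate)   # postprogression survival
    return t_prog, t_prog + t_post            # (PFS time, OS time)

sims = np.array([simulate_patient() for _ in range(10_000)])
print("median PFS:", np.median(sims[:, 0]), "median OS:", np.median(sims[:, 1]))
```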

19.
Xenograft trials allow tumor growth in human cell lines to be monitored over time in a mouse model. We consider the problem of inferring the effect of treatment combinations on tumor growth. A piecewise quadratic model with flexible phase change locations is proposed to model the effect of change in therapy over time. Each piece represents a growth phase, with phase changes in response to change in treatment. Piecewise slopes represent phase-specific (log) linear growth rates and curvature parameters represent departure from linear growth. Trial data are analyzed in two stages: (i) subject-specific curve fitting; (ii) analysis of slope and curvature estimates across subjects. A least-squares approach with a penalty on phase change point location is proposed for curve fitting. In simulation studies, the method is shown to give consistent estimates of slope and curvature parameters under independent and AR(1) measurement error. The piecewise quadratic model is shown to give an excellent fit (median R^2 = 0.98) to growth data from a six-armed xenograft trial on a lung carcinoma cell line. Copyright © 2010 John Wiley & Sons, Ltd.

20.
The Michigan Female Health Study (MFHS) conducted research focusing on reproductive health outcomes among women exposed to polybrominated biphenyls (PBBs). In the work presented here, the available longitudinal serum PBB exposure measurements are used to obtain predictions of PBB exposure for specific time points of interest via random effects models. In a two-stage approach, a prediction of the PBB exposure is obtained and then used in a second-stage health outcome model. This paper illustrates how a unified approach, which links the exposure and outcome in a joint model, provides an efficient adjustment for covariate measurement error. We compare the use of empirical Bayes predictions in the two-stage approach with results from a joint modeling approach, with and without an adjustment for left- and interval-censored data. The unified approach with the adjustment for left- and interval-censored data resulted in little bias and near-nominal confidence interval coverage in both the logistic and linear model setting. Published in 2010 by John Wiley & Sons, Ltd.
