Similar Articles (20 results)
1.
Brookmeyer and Crowley derived a non-parametric confidence interval for the median survival time of a homogeneous population by inverting a generalization of the sign test for censored data. The 1−α confidence interval for the median is essentially the set of all values t such that the Kaplan–Meier estimate of the survival function at time t does not differ significantly from one-half at significance level α. Here I extend the method to incorporate covariates into the analysis by assuming an underlying piecewise exponential model with proportional hazards covariate effects. Maximum likelihood estimates of the model parameters are obtained via iterative techniques, from which the estimated (log) survival curve is easily constructed. The delta method provides asymptotic standard errors. Following Brookmeyer and Crowley, I find the confidence interval for the median survival time at a specified value of the covariate vector by inverting the sign test. I illustrate the methods using data from a clinical trial conducted by the Radiation Therapy Oncology Group in cancer of the mouth and throat. The piecewise exponential model provides considerable flexibility in accommodating the shape of the underlying survival curve and thus offers advantages over other, more restrictive parametric models. Simulation studies indicate that the method provides reasonably accurate coverage probabilities.
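The inversion idea in the first two sentences can be sketched for the homogeneous (no-covariate) case. This is a simplified stand-in, not the paper's method: it uses the ordinary Kaplan–Meier estimate with Greenwood standard errors in place of the generalized sign test variance and the piecewise exponential extension, and the helper names (`km_greenwood`, `median_ci`) are ours:

```python
import math

def km_greenwood(times, events):
    """Kaplan-Meier survival estimate with Greenwood standard errors.
    events: 1 = event observed, 0 = right-censored.
    Returns [(t, S(t), se of S(t)), ...] at each distinct event time."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    s, gw = 1.0, 0.0                      # survival estimate, Greenwood sum
    out = []
    for t in sorted(set(times)):
        d = sum(1 for tt, e in data if tt == t and e == 1)  # events at t
        m = sum(1 for tt, _ in data if tt == t)             # leaving risk set
        if d > 0 and at_risk > d:
            s *= 1.0 - d / at_risk
            gw += d / (at_risk * (at_risk - d))
            out.append((t, s, s * math.sqrt(gw)))
        at_risk -= m
    return out

def median_ci(times, events, z=1.96):
    """Approximate 95% CI for the median survival time: the set of event
    times t at which S(t) does not differ significantly from one-half."""
    inside = [t for t, s, se in km_greenwood(times, events)
              if se > 0 and abs(s - 0.5) <= z * se]
    return (min(inside), max(inside)) if inside else None
```

For ten subjects failing at times 1 through 10 (the last one censored), the resulting interval is the range of event times whose survival estimate is statistically compatible with 0.5.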

2.
We propose an extension of the landmark model for ordinary survival data as a new approach to the problem of dynamic prediction in competing risks with time‐dependent covariates. We fix a set of landmark time points tLM within the follow‐up interval. For each of these landmark time points tLM, we create a landmark data set by selecting individuals at risk at tLM; we fix the value of the time‐dependent covariate in each landmark data set at tLM. We assume Cox proportional hazards models for the cause‐specific hazards and consider smoothing the (possibly) time‐dependent effect of the covariate across the different landmark data sets. Fitting this model is possible within standard statistical software. We illustrate the features of landmark modelling on a real data set on bone marrow transplantation. Copyright © 2012 John Wiley & Sons, Ltd.

3.
Accuracy and sample size issues concerning the estimation of covariate‐dependent quantile curves are considered. It is proposed to measure the precision of an estimate of the pth quantile at a given covariate value by the probability with which this estimate lies between the p1th and p2th quantile, where p1 < p < p2. Requiring that this probability exceeds a given confidence bound for all covariate values in a specified range leads to a sample size criterion. Approximate formulae for the precision and sample size are derived for the normal parametric regression approach and for the semiparametric quantile regression method. A simulation study is performed to evaluate the accuracy of the approximations. Numerical evaluations show that rather large numbers of subjects are needed to construct quantile curves with a reasonable amount of accuracy, especially if the quantile regression method is applied. Copyright © 2013 John Wiley & Sons, Ltd.
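The precision measure described here can be illustrated in the simplest setting of a single standard-normal sample without covariates, using the usual asymptotic normality of a sample quantile. The function names are ours and the formulas are a sketch of the general idea, not the paper's covariate-dependent formulae:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_quantile(p, lo=-10.0, hi=10.0):
    """Invert the normal CDF by bisection (adequate for a sketch)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def precision(n, p, p1, p2):
    """P(q_{p1} < qhat_p < q_{p2}) for the sample p-quantile of n
    standard-normal observations, via qhat_p ~ N(q_p, p(1-p)/(n f(q_p)^2))."""
    q = norm_quantile(p)
    f = math.exp(-0.5 * q * q) / math.sqrt(2.0 * math.pi)  # density at q_p
    se = math.sqrt(p * (1.0 - p)) / (f * math.sqrt(n))
    return phi((norm_quantile(p2) - q) / se) - phi((norm_quantile(p1) - q) / se)

def sample_size(target, p, p1, p2):
    """Smallest n whose precision exceeds `target` (simple linear scan)."""
    n = 2
    while precision(n, p, p1, p2) < target:
        n += 1
    return n
```

For the median with p1 = 0.4 and p2 = 0.6, already nearly a hundred observations are needed for 95% precision, consistent with the abstract's observation that rather large samples are required.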

4.
We study the properties of test statistics for a covariate effect in Aalen's additive hazard model and propose several new test statistics. The proposed statistics are derived by using the weights from linear rank statistics for comparing two survival curves. We compare these statistics with the two statistics proposed by Aalen using Monte Carlo simulations. Several different survival configurations are considered in the simulation study: proportional hazards, crossing hazards, hazard differences early in time, and hazard differences for large survival times. Of the proposed test statistics, one is superior for detecting hazard differences for large survival times and another is superior for detecting early hazard differences and crossing hazards. © 1998 John Wiley & Sons, Ltd.

5.
This article explores Bayesian joint models for a quantile of a longitudinal response, a mismeasured covariate, and an event time outcome, with the aims of (i) characterizing the entire conditional distribution of the response variable through quantile regression, which may be more robust to outliers and misspecification of the error distribution; (ii) accounting for measurement error, evaluating non‐ignorable missing observations, and adjusting for departures from normality in the covariate; and (iii) avoiding full reliance on a parametric specification of the time‐to‐event model. When statistical inference is carried out for a longitudinal data set with non‐central location, non‐linearity, non‐normality, measurement error, and missing values, as well as an interval‐censored event time, it is important to treat these data features simultaneously in order to obtain more reliable and robust inferential results. Toward this end, we develop a Bayesian joint modeling approach to estimate simultaneously all parameters in three models: a quantile‐regression‐based nonlinear mixed‐effects model for the response using the asymmetric Laplace distribution, a linear mixed‐effects model with a skew‐t distribution for the mismeasured covariate in the presence of informative missingness, and an accelerated failure time model with an unspecified nonparametric distribution for the event time. We apply the proposed modeling approach to an AIDS clinical data set and conduct simulation studies to assess the performance of the proposed joint models and method. Copyright © 2016 John Wiley & Sons, Ltd.

6.
In many experiments, it is necessary to evaluate the effectiveness of a treatment by comparing the responses of two groups of subjects. This evaluation is often performed by using a confidence interval for the difference between the population means. To compute the limits of this confidence interval, researchers usually use the pooled t formulas, which are derived by assuming normally distributed errors. When the normality assumption does not seem reasonable, the researcher may have little confidence in the confidence interval because the actual one‐sided coverage probability may not be close to the nominal coverage probability. This problem can be avoided by using the Robbins–Monro iterative search method to calculate the limits. One problem with this iterative procedure is that it is not clear when the procedure produces a sufficiently accurate estimate of a limit. In this paper, we describe a multiple search method that allows the user to specify the accuracy of the limits. We also give guidance concerning the number of iterations that would typically be needed to achieve a specified accuracy. This multiple iterative search method will produce limits for one‐sided and two‐sided confidence intervals that maintain their coverage probabilities with non‐normal distributions. Copyright © 2014 John Wiley & Sons, Ltd.

7.
The difference in restricted mean survival times between two groups is a clinically relevant summary measure. With observational data, there may be imbalances in confounding variables between the two groups. One approach to account for such imbalances is estimating a covariate‐adjusted restricted mean difference by modeling the covariate‐adjusted survival distribution and then marginalizing over the covariate distribution. Because the estimator for the restricted mean difference is defined by the estimator for the covariate‐adjusted survival distribution, it is natural to expect that a better estimator of the covariate‐adjusted survival distribution is associated with a better estimator of the restricted mean difference. We therefore propose estimating restricted mean differences with stacked survival models. Stacked survival models estimate a weighted average of several survival models by minimizing predicted error. By including a range of parametric, semi‐parametric, and non‐parametric models, stacked survival models can robustly estimate a covariate‐adjusted survival distribution and, therefore, the restricted mean treatment effect in a wide range of scenarios. We demonstrate through a simulation study that better performance of the covariate‐adjusted survival distribution often leads to better mean squared error of the restricted mean difference, although there are notable exceptions. In addition, we demonstrate that the proposed estimator can perform nearly as well as Cox regression when the proportional hazards assumption is satisfied and significantly better when the proportional hazards assumption is violated. Finally, the proposed estimator is illustrated with data from the United Network for Organ Sharing to evaluate post‐lung transplant survival between large‐volume and small‐volume centers. Copyright © 2016 John Wiley & Sons, Ltd.

8.
In protection tests on white mice vaccinated with BCG vaccine and challenged with a pathogenic strain of Mycobacterium bovis, the survival times are considerably altered by several variables. In the strains of mice used mainly in this study (NMRI and Albany), the median survival time of a group was roughly doubled in the sensitive range of the test system either by a twofold increase in the immunization period, a threefold decrease in the challenge dose, or a 100-fold or smaller increase in the vaccine dose. The shape of the survival curve of an animal group depends on the median survival time achieved. The Gaussian distributions (sum curves) of the logarithms of the individual survival times are near linearity and parallelity in groups of animals which survive either for short or for very long periods. In an intermediate range, however, the survival curves show a flatter and sometimes S-shaped course. This intermediate range of survival corresponds to the time at which the lung findings shift from acute to chronic. The occurrence of acute or chronic findings depends on the individual survival time after challenge. The autopsies show that both findings are equally frequent approximately 35 days after challenge. Individual survival times should be evaluated by non-parametric methods due to their non-normal (bimodal) distribution. Evaluation of the gross lung findings supports these results but is less efficient. The discriminating power of the test system can be altered by changes in any of the variables and is best when animal groups attaining a median survival time of less than 20 days are compared with groups attaining more than 30 days. A twofold increase in the median survival time generally provides evidence of significance that may already be obtained 30 days after challenge.
With a vaccination-challenge interval of 21 days or more, a 50 microliter vaccine dose generally induces a significant increase in the survival times of the vaccinated animals versus non-vaccinated controls. With increasing immunization periods (vaccination-challenge interval), however, a difference in the efficacy of several vaccines or vaccine doses will be evened out.

9.
Experimenters in toxicology often compare the concentration-response relationship between two distinct populations using the median lethal concentration (LC50). This comparison is sometimes done by calculating the 95% confidence interval for the LC50 for each population, concluding that no significant difference exists if the two confidence intervals overlap. A more appropriate test compares the ratio of the LC50s to 1 or the log(LC50 ratio) to 0. In this ratio test, we conclude that no difference exists in LC50s if the confidence interval for the ratio of the LC50s contains 1 or the confidence interval for the log(LC50 ratio) contains 0. A Monte Carlo simulation study was conducted to compare the confidence interval overlap test to the ratio test. The confidence interval overlap test performs substantially below the nominal alpha = 0.05 level, closer to 0.005, and therefore has considerably less power for detecting true differences than the ratio test. The ratio-based method exhibited better type I error rates and superior power; thus, a ratio-based statistical procedure is preferred to using simple overlap of two independently derived confidence intervals.
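The contrast between the two tests can be made concrete. Given an estimated log(LC50) and its standard error for each population (however they were fitted), the two decision rules are, in a minimal sketch with hypothetical function names:

```python
import math

Z = 1.959963984540054  # two-sided 95% normal critical value

def overlap_test(log_lc50_1, se1, log_lc50_2, se2):
    """Returns True ('no significant difference') if the two
    individual 95% CIs for log(LC50) overlap."""
    lo1, hi1 = log_lc50_1 - Z * se1, log_lc50_1 + Z * se1
    lo2, hi2 = log_lc50_2 - Z * se2, log_lc50_2 + Z * se2
    return hi1 >= lo2 and hi2 >= lo1

def ratio_test(log_lc50_1, se1, log_lc50_2, se2):
    """Returns True ('no difference') if the 95% CI for the
    log(LC50 ratio) contains 0."""
    d = log_lc50_1 - log_lc50_2
    se = math.sqrt(se1 ** 2 + se2 ** 2)
    return d - Z * se <= 0.0 <= d + Z * se
```

With equal standard errors of 0.1, the overlap rule only rejects when the difference exceeds 1.96(se1 + se2) = 0.392, while the ratio test rejects beyond 1.96·sqrt(se1² + se2²) ≈ 0.277, which is exactly the source of the overlap test's conservatism and lost power.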

10.
Interest in equivalence trials has been increasing for many years, though the methodology which has been developed for such trials is mainly for uncensored data. In cancer research we are more often concerned with survival. In an efficacy trial, the null hypothesis specifies equality of the two survival distributions, but in an equivalence trial, a null hypothesis of inequivalence H0 has to be tested. The usual logrank test has to be modified to test whether the true value r of the ratio of hazard rates in two treatment groups is at least equal to a limit value r0. If prognostic factors have to be taken into account, the Cox model provides tests of H0, and a useful confidence interval for the adjusted relative risk derived from the regression parameter for the treatment indicator. An equivalence trial of maintenance therapy was carried out in children with B non-Hodgkin lymphoma, and serves as an illustration.
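The Cox-model version of the inequivalence test reduces to a one-sided comparison of the treatment coefficient (the log hazard ratio) against log r0. A minimal sketch, with a hypothetical function name and an illustrative margin of r0 = 1.25:

```python
import math

def equivalence_claimed(beta_hat, se, r0=1.25):
    """One-sided test of the inequivalence null H0: HR >= r0 against
    H1: HR < r0, where beta_hat is the Cox regression coefficient
    (log hazard ratio) for the treatment indicator and se its
    standard error. Returning True means H0 is rejected, which
    supports equivalence at margin r0."""
    z_alpha = 1.6448536269514722   # one-sided 5% critical value
    z = (beta_hat - math.log(r0)) / se
    return z < -z_alpha
```

Equivalently, equivalence is claimed when the upper one-sided confidence limit exp(beta_hat + 1.645·se) falls below r0, which is the confidence interval formulation mentioned in the abstract.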

11.
In studies in which a binary response for each subject is observed, the success probability and functions of this quantity are of interest. The use of confidence intervals has been increasingly encouraged as complementary to, and indeed preferable to, p‐values as the primary expression of the impact of sampling uncertainty on the findings. The asymptotic confidence interval, based on a normal approximation, is often considered, but this interval can have poor statistical properties when the sample size is small and/or when the success probability is near 0 or 1. In this paper, an estimate of the risk difference based on median unbiased estimates (MUEs) of the two group probabilities is proposed. A corresponding confidence interval is derived using a fully specified bootstrap sample space. The proposed method is compared with Chen's quasi‐exact method, Wald intervals, and Agresti and Caffo's method with regard to mean square error and coverage probability. For a variety of settings, the MUE‐based estimate of the risk difference has a mean square error uniformly smaller than that of the maximum likelihood estimate within a certain range of risk difference. The fully specified bootstrap had better coverage probability in the tail area than Chen's quasi‐exact method, Wald intervals, and Agresti and Caffo's intervals. Copyright © 2009 John Wiley & Sons, Ltd.
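Of the comparison methods named here, the Agresti and Caffo interval is simple enough to state in full: add one success and one failure to each group, then apply the ordinary Wald formula to the adjusted proportions. A sketch (the function name is ours):

```python
import math

def agresti_caffo_ci(x1, n1, x2, n2, z=1.959963984540054):
    """Approximate 95% CI for p1 - p2 by the Agresti-Caffo 'add
    one success and one failure per group' adjustment, followed
    by the Wald interval on the adjusted proportions."""
    p1 = (x1 + 1) / (n1 + 2)
    p2 = (x2 + 1) / (n2 + 2)
    se = math.sqrt(p1 * (1 - p1) / (n1 + 2) + p2 * (1 - p2) / (n2 + 2))
    d = p1 - p2
    return d - z * se, d + z * se
```

The adjustment pulls both proportions toward 1/2, which is what rescues the Wald interval's coverage in small samples and near the 0/1 boundaries, the exact weakness of the unadjusted asymptotic interval noted above.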

12.
Prognostic studies often estimate survival curves for patients with different covariate vectors, but the validity of their results depends largely on the accuracy of the estimated covariate effects. To avoid conventional proportional hazards and linearity assumptions, flexible extensions of Cox's proportional hazards model incorporate non‐linear (NL) and/or time‐dependent (TD) covariate effects. However, their impact on survival curve estimation is unclear. Our primary goal is to develop and validate a flexible method for estimating individual patients' survival curves, conditional on multiple predictors with possibly NL and/or TD effects. We first obtain maximum partial likelihood estimates of NL and TD effects and use backward elimination to select statistically significant effects into a final multivariable model. We then plug the selected NL and TD estimates into the full likelihood function and estimate the baseline hazard function and the resulting survival curves, conditional on individual covariate vectors. The TD and NL functions and the log hazard are modeled with unpenalized regression B‐splines. In simulations, our flexible survival curve estimates were unbiased and had much lower mean square errors than the conventional estimates. In real‐life analyses of mortality after a septic shock, our model significantly improved the deviance (likelihood ratio test = 84.8, df = 20, p < 0.0001) and substantially changed the predicted survival for several subjects. Copyright © 2015 John Wiley & Sons, Ltd.

13.
Cystic fibrosis (CF) is a progressive, genetic disease characterized by frequent, prolonged drops in lung function. Accurately predicting rapid underlying lung-function decline is essential for clinical decision support and timely intervention. Determining whether an individual is experiencing a period of rapid decline is complicated by its heterogeneous timing and extent and by the error component of measured lung function. We construct individualized predictive probabilities for “nowcasting” rapid decline. We assume each patient's true longitudinal lung function, S(t), follows a nonlinear, nonstationary stochastic process, and accommodate between-patient heterogeneity through random effects. The corresponding lung-function decline at time t is defined as the rate of change, S′(t). We predict S′(t) conditional on observed covariate and measurement history by modeling measured lung function as a noisy version of S(t). The method is applied to data on 30,879 US CF Registry patients. Results are contrasted with a currently employed decision rule using single-center data on 212 individuals. Rapid decline is identified earlier using predictive probabilities than with the center's currently employed decision rule (mean difference: 0.65 years; 95% confidence interval (CI): 0.41, 0.89). We constructed a bootstrapping algorithm to obtain CIs for the predictive probabilities. We illustrate real-time implementation with R Shiny. Predictive accuracy is investigated using empirical simulations, which suggest this approach detects peak decline more accurately than a uniform threshold of rapid decline. Median area under the ROC curve estimates (Q1-Q3) were 0.817 (0.814-0.822) and 0.745 (0.741-0.747), respectively, implying reasonable accuracy for both. This article demonstrates how individualized rate-of-change estimates can be coupled with probabilistic predictive inference and implementation for a useful medical-monitoring approach.

14.
Identification of subgroups with differential treatment effects in randomized trials is attracting much attention. Many methods use regression tree algorithms. This article addresses 2 important questions arising from the subgroups: how to ensure that treatment effects in subgroups are not confounded with effects of prognostic variables and how to determine the statistical significance of treatment effects in the subgroups. We address the first question by selectively including linear prognostic effects in the subgroups in a regression tree model. The second question is more difficult because it falls within the subject of postselection inference. We use a bootstrap technique to calibrate normal-theory t intervals so that their expected coverage probability, averaged over all the subgroups in a fitted model, approximates the desired confidence level. It can also provide simultaneous confidence intervals for all subgroups. The first solution is implemented in the GUIDE algorithm and is applicable to data with missing covariate values, 2 or more treatment arms, and outcomes subject to right censoring. Bootstrap calibration is applicable to any subgroup identification method; it is not restricted to regression tree models. Two real examples are used for illustration: a diabetes trial where the outcomes are completely observed but some covariate values are missing and a breast cancer trial where the outcome is right censored.

15.
In situations in which one cannot specify a single primary outcome, epidemiologic analyses often examine multiple associations between outcomes and explanatory covariates or risk factors. To compare alternative approaches to the analysis of multiple outcomes in regression models, I used generalized estimating equations (GEE) models, a multivariate extension of generalized linear models, to incorporate the dependence among the outcomes from the same subject and to provide robust variance estimates of the regression coefficients. I applied the methods in a hospital-population-based study of complications of surgical anaesthesia, using GEE model fitting and quasi-likelihood score and Wald tests. In one GEE model specification, I allowed the associations between each of the outcomes and a covariate to differ, yielding a regression coefficient for each of the outcome and covariate combinations; I obtained the covariances among the set of outcome-specific regression coefficients for each covariate from the robust ‘sandwich’ variance estimator. To address the problem of multiple inference, I used simultaneous methods that make adjustments to the test statistic p-values and the confidence interval widths, to control type I error and simultaneous coverage, respectively. In a second model specification, for each of the covariates I assumed a common association between the outcomes and the covariate, which eliminates the problem of multiplicity by use of a global test of association. In an alternative approach to multiplicity, I used empirical Bayes methods to shrink the outcome-specific coefficients toward a pooled mean that is similar to the common effect coefficient. GEE regression models can provide a flexible framework for estimation and testing of multiple outcomes. © 1998 John Wiley & Sons, Ltd.

16.
Receiver operating characteristic (ROC) curves, and in particular the area under the curve (AUC), are widely used to examine the effectiveness of diagnostic markers. Diagnostic markers and their corresponding ROC curves can be strongly influenced by covariate variables. When several diagnostic markers are available, they can be combined by a best linear combination such that the area under the ROC curve of the combination is maximized among all possible linear combinations. In this paper we discuss covariate effects on this linear combination assuming that the multiple markers, possibly transformed, follow a multivariate normal distribution. The ROC curve of this linear combination when markers are adjusted for covariates is estimated and approximate confidence intervals for the corresponding AUC are derived. An example of two biomarkers of coronary heart disease for which covariate information on age and gender is available is used to illustrate this methodology.
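Under the multivariate normal assumption, the best linear combination has a closed form (due to Su and Liu): the coefficient vector is proportional to the inverse of the summed group covariance matrices applied to the mean difference, and the maximal AUC is Φ of the square root of the corresponding quadratic form. A two-marker sketch without covariate adjustment (the function name is ours):

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def best_linear_auc(mu_d, mu_n, cov_d, cov_n):
    """Su-Liu best linear combination of two normally distributed markers.
    mu_d / mu_n: length-2 mean vectors (diseased / non-diseased).
    cov_d / cov_n: 2x2 covariance matrices as nested lists.
    Returns (coefficient vector a, maximal AUC)."""
    d = [mu_d[0] - mu_n[0], mu_d[1] - mu_n[1]]
    s = [[cov_d[i][j] + cov_n[i][j] for j in range(2)] for i in range(2)]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det],
           [-s[1][0] / det, s[0][0] / det]]
    a = [inv[0][0] * d[0] + inv[0][1] * d[1],
         inv[1][0] * d[0] + inv[1][1] * d[1]]
    quad = a[0] * d[0] + a[1] * d[1]      # d' (cov_d + cov_n)^{-1} d
    return a, phi(math.sqrt(quad))
```

With unit-variance independent markers and a mean shift of one unit in each, the optimal combination weights the markers equally and the maximal AUC is Φ(1) ≈ 0.84.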

17.
Cancer patients are frequently affected by malnutrition and weight loss, which affect their prognosis, length of hospital stay, health care costs, quality of life, and survival. Our aim was to assess the prognostic value of different scores based on malnutrition or systemic inflammatory response in 91 metastatic or recurrent gastric cancer patients considered for palliative chemotherapy at the Masaryk Memorial Cancer Institute. We investigated their overall survival according to the following measures: Onodera's Prognostic Nutritional Index (OPNI), Glasgow Prognostic Score (GPS), nutritional risk indicator (NRI), Cancer Cachexia Study Group (CCSG) score, as previously defined, and a simple preadmission weight loss. The OPNI, GPS, and CCSG provided very significant prognostic value for survival (log-rank test P value < 0.001). For example, the median survival for patients with GPS 0 was 12.3 mo [95% confidence interval (CI): 7.7–16.7], whereas the median survival for patients with GPS 2 was only 2.9 mo (95% CI: 1.9–4.8). Significantly worse survival of malnourished patients was also suggested by a multivariate model. The GPS, OPNI, and CCSG represent useful tools for the evaluation of patients’ prognosis and should be part of a routine evaluation of patients to provide timely nutrition support.

18.
I propose a new confidence interval for the difference between two binomial probabilities that requires only the solution of a quadratic equation. The procedure is based on estimating the variance of the observed difference at the boundaries of the confidence interval, and uses least squares estimation rather than maximum likelihood as previously suggested. The proposed procedure is non-iterative, agrees with the conventional test of equality of two binomial probabilities, and, even for fairly small sample sizes, appears to yield actual 95 per cent confidence intervals with mean or median coverage probabilities very close to 0.95. The Yates continuity correction appears to generate confidence intervals with the conditional probability of coverage at least equal to nominal levels. © 1997 by John Wiley & Sons, Ltd.

19.
The goal in stratified medicine is to administer the “best” treatment to a patient. Not all patients might benefit from the same treatment; the choice of best treatment can depend on certain patient characteristics. In this article, it is assumed that a time-to-event outcome is considered as a patient-relevant outcome and that a qualitative interaction between a continuous covariate and treatment exists, i.e., that patients with different values of one specific covariate should be treated differently. We suggest and investigate different methods of confidence interval estimation for the covariate value at which the treatment recommendation should change, based on data collected in a randomized clinical trial. An adaptation of Fieller's theorem, the delta method, and different bootstrap approaches (normal, percentile-based, wild bootstrap) are investigated and compared in a simulation study. Extensions to multivariable problems are presented and evaluated. We observed appropriate confidence interval coverage following Fieller's theorem irrespective of sample size, but at the cost of very wide or even infinite confidence intervals. The delta method and the wild bootstrap approach provided the smallest intervals but inadequate coverage for small to moderate event numbers, also depending on the location of the true changepoint. For the percentile-based bootstrap, wide intervals were observed, and it was slightly conservative regarding coverage, whereas the normal bootstrap did not provide acceptable results for many scenarios. The described methods were also applied to data from a randomized clinical trial comparing two treatments for patients with symptomatic, severe carotid artery stenosis, considering patient's age as a predictive marker.

20.
Stratification is commonly employed in clinical trials to reduce chance covariate imbalances and increase the precision of the treatment effect estimate. We propose a general framework for constructing the confidence interval (CI) for a difference or ratio effect parameter under stratified sampling by the method of variance estimates recovery (MOVER). We consider the additive variance and additive CI approaches for the difference, in which either the CI for the weighted difference, or the CI for the weighted effect in each group, or the variance for the weighted difference is calculated as the weighted sum of the corresponding stratum-specific statistics. The CI for the ratio is derived by the Fieller and log-ratio methods. The weights can be random quantities under the assumption of a constant effect across strata, but this assumption is not needed for fixed weights. These methods can be easily applied to different endpoints in that they require only the point estimate, CI, and variance estimate for the measure of interest in each group across strata. The methods are illustrated with two real examples. In one example, we derive the MOVER CIs for the risk difference and risk ratio for binary outcomes. In the other example, we compare the restricted mean survival time and milestone survival in a stratified analysis of time-to-event outcomes. Simulations show that the proposed MOVER CIs generally outperform the standard large sample CIs, and that the additive CI approach performs better than the additive variance approach. Sample SAS code is provided in the Supplementary Material.
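The core MOVER construction (recovering variance estimates from the limits of each group's own CI) is easy to sketch for an unstratified risk difference, using Wilson intervals for the individual proportions. This is the single-stratum building block, not the paper's stratified framework, and the function names are ours:

```python
import math

Z = 1.959963984540054  # two-sided 95% normal critical value

def wilson_ci(x, n):
    """95% Wilson score interval for a single binomial proportion."""
    p = x / n
    z2 = Z * Z
    centre = (p + z2 / (2 * n)) / (1 + z2 / n)
    half = (Z / (1 + z2 / n)) * math.sqrt(p * (1 - p) / n + z2 / (4 * n * n))
    return centre - half, centre + half

def mover_diff_ci(x1, n1, x2, n2):
    """MOVER 95% CI for p1 - p2: recover each group's variance from
    the distance between its point estimate and its Wilson limits,
    then combine the relevant limits for the difference."""
    p1, p2 = x1 / n1, x2 / n2
    l1, u1 = wilson_ci(x1, n1)
    l2, u2 = wilson_ci(x2, n2)
    d = p1 - p2
    lower = d - math.sqrt((p1 - l1) ** 2 + (u2 - p2) ** 2)
    upper = d + math.sqrt((u1 - p1) ** 2 + (p2 - l2) ** 2)
    return lower, upper
```

Because the recovered variances differ at the lower and upper limits, the interval is asymmetric around the point estimate, which is what lets MOVER inherit the boundary-respecting behaviour of the Wilson intervals it is built from.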
