Similar Articles
20 similar articles found.
1.
Objective: Cox proportional hazards regression models are frequently used to determine the association between exposure and time-to-event outcomes in both randomized controlled trials and in observational cohort studies. The resultant hazard ratio is a relative measure of effect that provides limited clinical information.
Study Design and Setting: A method is described for deriving absolute reductions in the risk of an event occurring within a given duration of follow-up time from a Cox regression model. The associated number needed to treat can be derived from this quantity. The method involves determining the probability of the outcome occurring within the specified duration of follow-up if each subject in the cohort was treated and if each subject was untreated, based on the covariates in the regression model. These probabilities are then averaged across the study population to determine the average probability of the occurrence of an event within a specific duration of follow-up in the population if all subjects were treated and if all subjects were untreated.
Results: Risk differences and numbers needed to treat.
Conclusions: Absolute measures of treatment effect can be derived in prospective studies when Cox regression is used to adjust for possible imbalance in prognostically important baseline covariates.
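The averaging step described in this abstract can be sketched in a few lines of Python. This is an illustrative sketch only, assuming the lifelines package and hypothetical column names (a duration column, an event indicator, and a binary treatment column); it is not the authors' implementation.

from lifelines import CoxPHFitter

def average_risk_difference(df, duration_col, event_col, treatment_col, horizon):
    # Fit the Cox model on all covariates in df other than the time and event columns.
    cph = CoxPHFitter()
    cph.fit(df, duration_col=duration_col, event_col=event_col)
    covs = df.drop(columns=[duration_col, event_col])
    treated, untreated = covs.copy(), covs.copy()
    treated[treatment_col], untreated[treatment_col] = 1, 0
    # Predicted risk by the horizon = 1 - predicted survival, averaged over all subjects.
    risk_treated = 1 - cph.predict_survival_function(treated, times=[horizon]).iloc[0]
    risk_untreated = 1 - cph.predict_survival_function(untreated, times=[horizon]).iloc[0]
    ard = risk_untreated.mean() - risk_treated.mean()   # absolute risk reduction at the horizon
    return ard, 1.0 / abs(ard)                          # and the implied number needed to treat

Confidence intervals for the resulting risk difference would still need bootstrapping or an analytic variance (see item 10 below).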

2.
3.
4.
Background: When estimating the number needed to treat (NNT) from randomized controlled trials (RCTs) with time-to-event outcomes, varying follow-up times have to be considered. Two methods have been proposed, namely (1) inverting risk differences estimated by survival time methods (RD approach) and (2) inverting incidence differences (ID approach).
Study Design and Setting: A simulation study was conducted to compare the RD and the ID approaches regarding bias and coverage probability (CP) considering various distributions, baseline risks, effect sizes, and sample sizes. Additionally, the two approaches were compared by using two real data examples.
Results: The RD approach showed good estimation and coverage properties with only a few exceptions in the case of small sample sizes and small effect sizes. The ID approach showed considerable bias and low CPs in most of the considered data situations.
Conclusions: Absolute risks estimated by means of survival time methods rather than incidence rates should be used to estimate NNTs in RCTs with time-to-event outcomes.
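As a rough illustration of the two approaches being compared, the sketch below (Python, assuming the lifelines package and two-arm trial data given as arrays of follow-up times and event indicators) contrasts an NNT from Kaplan-Meier risk differences with one from incidence rate differences; the exact ID definition used in the paper may differ from the constant-hazard conversion shown here.

import numpy as np
from lifelines import KaplanMeierFitter

def nnt_rd_vs_id(time_trt, event_trt, time_ctl, event_ctl, horizon):
    # RD approach: invert the difference in Kaplan-Meier event risks at the horizon.
    km_trt, km_ctl = KaplanMeierFitter(), KaplanMeierFitter()
    km_trt.fit(time_trt, event_trt)
    km_ctl.fit(time_ctl, event_ctl)
    risk_trt = 1 - km_trt.survival_function_at_times(horizon).iloc[0]
    risk_ctl = 1 - km_ctl.survival_function_at_times(horizon).iloc[0]
    nnt_rd = 1.0 / (risk_ctl - risk_trt)
    # ID approach: convert incidence rates (events per person-time) to risks
    # via a constant-hazard assumption, then invert their difference.
    rate_trt = np.sum(event_trt) / np.sum(time_trt)
    rate_ctl = np.sum(event_ctl) / np.sum(time_ctl)
    nnt_id = 1.0 / ((1 - np.exp(-rate_ctl * horizon)) - (1 - np.exp(-rate_trt * horizon)))
    return nnt_rd, nnt_id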

5.
Objective: The estimation of the number needed to be exposed (NNE) with adjustment for covariates can be performed by inverting the corresponding adjusted risk difference. The latter can be estimated by several approaches based on binomial and Poisson regression with or without constraints. A novel proposal is given by logistic regression with average risk difference (LR-ARD) estimation. Finally, the use of ordinary linear regression and unadjusted estimation can be considered.
Study Design and Setting: LR-ARD is compared with alternative approaches regarding bias, precision, and coverage probability by means of an extensive simulation study.
Results: LR-ARD was found to be superior compared with the other approaches. In the case of balanced covariates and large sample sizes, unadjusted estimation and ordinary linear regression can also be used. In general, however, LR-ARD seems to be the most appropriate approach to estimate adjusted risk differences and NNEs.
Conclusions: To estimate risk differences and NNEs with adjustment for covariates, the LR-ARD approach should be used.
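A minimal sketch of the LR-ARD idea, assuming statsmodels and a binary exposure with hypothetical variable names (not the authors' code): fit a logistic model, predict every subject's risk with exposure set to 1 and to 0, and invert the averaged difference.

import numpy as np
import statsmodels.api as sm

def lr_ard_nne(y, exposure, covariates):
    X = sm.add_constant(np.column_stack([exposure, covariates]))
    fit = sm.Logit(y, X).fit(disp=0)
    X1, X0 = X.copy(), X.copy()
    X1[:, 1], X0[:, 1] = 1, 0                 # exposure sits in column 1, after the constant
    ard = fit.predict(X1).mean() - fit.predict(X0).mean()   # adjusted (average) risk difference
    return ard, 1.0 / abs(ard)                # number needed to be exposed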

6.
In epidemiology, the risk of disease in terms of a set of covariates is often modelled by logistic regression. The resulting linear predictor can be used to define the extent of risk between extremes, and to calculate an attributable risk for the covariates taken together. As is well known, straightforward use of the linear predictor, on the sample from which it was derived, to obtain estimates of the relative and attributable risks will be biased, often seriously. Use of the jack-knife technique is extended to produce asymptotically unbiased estimates of relative and attributable risks. The asymptotic variances associated with these estimates are derived by using the formulae of conditional variances. The methods are applied to the results of a case-control study of stomach cancer.
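The generic jackknife bias correction underlying this approach can be written compactly; the sketch below is an assumption-laden stand-in (plain NumPy, with any scalar estimator function such as a relative risk from a refitted logistic model), not the paper's conditional-variance formulae.

import numpy as np

def jackknife_corrected(estimator, data):
    # estimator: a function mapping a data array (rows = subjects) to a scalar estimate.
    n = len(data)
    theta_full = estimator(data)
    loo = np.array([estimator(np.delete(data, i, axis=0)) for i in range(n)])
    theta_jack = n * theta_full - (n - 1) * loo.mean()                 # bias-corrected estimate
    se_jack = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))   # jackknife standard error
    return theta_jack, se_jack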

7.
In survival analysis with competing risks, the transformation model allows different functions between the outcome and explanatory variables. However, the model's prediction accuracy and the interpretation of parameters may be sensitive to the choice of link function. We review the practical implications of different link functions for regression of the absolute risk (or cumulative incidence) of an event. Specifically, we consider models in which the regression coefficients β have the following interpretation: The probability of dying from cause D during the next t years changes with a factor exp(β) for a one unit change of the corresponding predictor variable, given fixed values for the other predictor variables. The models have a direct interpretation for the predictive ability of the risk factors. We propose some tools to justify the models in comparison with traditional approaches that combine a series of cause‐specific Cox regression models or use the Fine–Gray model. We illustrate the methods with the use of bone marrow transplant data. Copyright © 2012 John Wiley & Sons, Ltd.
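One simple way to write such a model (assuming a log link on the cumulative incidence of cause D; the other link functions reviewed in the paper behave differently) is

\[
  F_D(t \mid x) \;=\; F_{D,0}(t)\,\exp\!\left(\beta^\top x\right),
  \qquad
  \frac{F_D(t \mid x_j + 1,\, x_{-j})}{F_D(t \mid x_j,\, x_{-j})} \;=\; \exp(\beta_j),
\]

i.e., a one-unit change in the j-th predictor multiplies the t-year absolute risk by exp(β_j), holding the other predictors fixed.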

8.
Objectives: The uncertainty around the number needed to treat (NNT) is often represented through a confidence interval (CI). However, it is not clear how the CI can help inform treatment decisions. We developed decision-theoretic measures of uncertainty for the NNT.
Study Design and Setting: We build our argument on the basis that a risk-neutral decision maker should always choose the treatment with the highest expected benefit, regardless of uncertainty. From this perspective, uncertainty can be seen as a source of "opportunity loss" owing to its associated chance of choosing the suboptimal treatment. Motivated by the concept of the expected value of perfect information (EVPI) in decision analysis, we quantify such opportunity loss and propose novel measures of uncertainty around the NNT: the Lost NNT and the Lost Opportunity Index (LOI).
Results: The Lost NNT is the quantification of the lost opportunity expressed on the same scale as the NNT. The LOI is a scale-free measure quantifying the loss in terms of the relative efficacy of treatment. We illustrate the method using a sample of published NNT values.
Conclusion: Decision-theoretic concepts have the potential to be applied in this context to provide measures of uncertainty that can have relevant implications.
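The EVPI idea that motivates these measures can be sketched numerically. The snippet below (plain NumPy, a normal approximation to the sampling distribution of the risk difference, hypothetical numbers) computes the expected opportunity loss of deciding on the point estimate; it is not the paper's Lost NNT or LOI formula.

import numpy as np

def expected_opportunity_loss(rd_hat, se_rd, n_draws=100_000, seed=0):
    rng = np.random.default_rng(seed)
    rd = rng.normal(rd_hat, se_rd, n_draws)        # plausible values of the true risk difference
    value_with_perfect_info = np.maximum(rd, 0.0)  # treat only when treatment truly helps
    value_of_current_decision = max(rd_hat, 0.0)   # treat iff the point estimate favours treatment
    return value_with_perfect_info.mean() - value_of_current_decision

# Example with hypothetical numbers: estimated RD = 0.05 with standard error 0.03
# print(expected_opportunity_loss(0.05, 0.03))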

9.
Objective: Ordinal scales often generate scores with skewed data distributions. The optimal method of analyzing such data is not entirely clear. The objective was to compare four statistical multivariable strategies for analyzing skewed health-related quality of life (HRQOL) outcome data. HRQOL data were collected at 1 year following catheterization using the Seattle Angina Questionnaire (SAQ), a disease-specific quality of life and symptom rating scale.
Study Design and Setting: In this methodological study, four regression models were constructed. The first model used linear regression. The second and third models used logistic regression with two different cutpoints, and the fourth model used ordinal regression. To compare the results of these four models, odds ratios, 95% confidence intervals, and 95% confidence interval widths (i.e., ratios of upper to lower confidence interval endpoints) were assessed.
Results: Relative to the two logistic regression analyses, the linear regression model and the ordinal regression model produced more stable parameter estimates with smaller confidence interval widths.
Conclusion: A combination of analysis results from both of these models (adjusted SAQ scores and odds ratios) provides the most comprehensive interpretation of the data.
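For orientation, the four strategies could be set up along the following lines in Python with statsmodels; the cutpoints, the direction of dichotomization, and the variable names are assumptions, not the study's actual analysis.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.miscmodels.ordinal_model import OrderedModel

def fit_four_models(score, X, cut_low, cut_high):
    Xc = sm.add_constant(X)
    linear = sm.OLS(score, Xc).fit()                                        # model 1: linear regression
    logit_low = sm.Logit((score <= cut_low).astype(int), Xc).fit(disp=0)    # model 2: first cutpoint
    logit_high = sm.Logit((score <= cut_high).astype(int), Xc).fit(disp=0)  # model 3: second cutpoint
    # model 4: proportional-odds (ordinal) regression on the categorized score
    levels = pd.Categorical(np.digitize(score, [cut_low, cut_high]), ordered=True)
    ordinal = OrderedModel(levels, X, distr='logit').fit(method='bfgs', disp=0)
    return linear, logit_low, logit_high, ordinal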

10.
Recently, Laubender and Bender (Stat. Med. 2010; 29: 851–859) applied the average risk difference (RD) approach to estimate adjusted RD and corresponding number needed to treat measures in the Cox proportional hazards model. We calculated standard errors and confidence intervals by using bootstrap techniques. In this paper, we develop asymptotic variance estimates of the adjusted RD measures and corresponding asymptotic confidence intervals within counting process theory and evaluate them in a simulation study. We illustrate the use of the asymptotic confidence intervals by means of data from the Düsseldorf Obesity Mortality Study. Copyright © 2013 John Wiley & Sons, Ltd.
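The bootstrap alternative that the asymptotic formulas replace is easy to sketch (plain NumPy and pandas; ard_fn stands for any function, such as the Cox average-risk-difference sketch in item 1, that returns the adjusted RD for a resampled data frame). This is an illustration, not the paper's variance derivation.

import numpy as np

def bootstrap_ard_ci(df, ard_fn, n_boot=500, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    stats = np.array([
        ard_fn(df.sample(len(df), replace=True, random_state=int(rng.integers(1_000_000_000))))
        for _ in range(n_boot)
    ])
    lower, upper = np.quantile(stats, [alpha / 2, 1 - alpha / 2])   # percentile bootstrap CI
    return ard_fn(df), (lower, upper)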

11.
Objective: To assess alternative statistical methods for estimating relative risks and their confidence intervals from multivariable binary regression when outcomes are common.
Study Design and Setting: We performed simulations on two hypothetical groups of patients in a single-center study, either randomized or cohort, and reanalyzed a published observational study. Outcomes of interest were the bias of relative risk estimates, coverage of 95% confidence intervals, and the Akaike information criterion.
Results: According to simulations, a commonly used method of computing confidence intervals for relative risk substantially overstates statistical significance in typical applications when outcomes are common. Generalized linear models other than logistic regression sometimes failed to converge, or produced estimated risks that exceeded 1.0. Conditional or marginal standardization using logistic regression and bootstrap resampling estimated risks within the [0,1] bounds and relative risks with appropriate confidence intervals.
Conclusion: Especially when outcomes are common, relative risks and confidence intervals are easily computed indirectly from multivariable logistic regression. Log-linear regression models, by contrast, are problematic when outcomes are common.
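A minimal sketch of marginal standardization for the relative risk, assuming statsmodels and a binary exposure in the first covariate column; it mirrors the LR-ARD sketch in item 5, except that the ratio of the standardized risks, rather than their difference, gives the adjusted relative risk. A bootstrap along the lines of item 10 would supply the confidence interval.

import numpy as np
import statsmodels.api as sm

def marginal_rr(y, exposure, covariates):
    X = sm.add_constant(np.column_stack([exposure, covariates]))
    fit = sm.Logit(y, X).fit(disp=0)
    X1, X0 = X.copy(), X.copy()
    X1[:, 1], X0[:, 1] = 1, 0
    p1, p0 = fit.predict(X1).mean(), fit.predict(X0).mean()   # standardized risks, always in [0, 1]
    return p1 / p0                                            # adjusted relative risk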

12.
Numbers needed to treat (NNTs) may be used to present the effects of treatment and are the reciprocal of the absolute risk difference between treatment and control groups in a randomized controlled trial. NNTs are sensitive to factors that change the baseline risk of trial participants: the outcome considered; characteristics of patients; secular trends in incidence and case-fatality; and clinical setting. NNTs derived from pooled absolute risk differences in meta-analyses are commonly presented and easily calculated by meta-analytic software but may be seriously misleading because of heterogeneity between trials included in meta-analyses. Meaningful NNTs are obtained by applying the pooled relative risk reductions calculated from meta-analyses or individual trials to the baseline risk relevant to specific patient groups. This process will give a range of NNTs depending on whether patients are at high, low, or intermediate levels of risk, rather than a potentially misleading single number.
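The recommended calculation is simply NNT = 1 / (baseline risk × relative risk reduction); a tiny worked example with hypothetical numbers:

# Hypothetical pooled relative risk reduction of 25% applied to three baseline risks.
rrr = 0.25
for baseline_risk in (0.02, 0.10, 0.30):        # low, intermediate, and high risk groups
    nnt = 1.0 / (baseline_risk * rrr)
    print(baseline_risk, round(nnt, 1))          # -> 200.0, 40.0, 13.3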

13.
B Rosner, W C Willett, D Spiegelman. Statistics in Medicine 1989;8(9):1051-69; discussion 1071-3.
Errors in the measurement of exposure that are independent of disease status tend to bias relative risk estimates and other measures of effect in epidemiologic studies toward the null value. Two methods are provided to correct relative risk estimates obtained from logistic regression models for measurement errors in continuous exposures within cohort studies that may be due to either random (unbiased) within-person variation or to systematic errors for individual subjects. These methods require a separate validation study to estimate the regression coefficient lambda relating the surrogate measure to true exposure. In the linear approximation method, the true logistic regression coefficient beta* is estimated by beta/lambda, where beta is the observed logistic regression coefficient based on the surrogate measure. In the likelihood approximation method, a second-order Taylor series expansion is used to approximate the logistic function, enabling closed-form likelihood estimation of beta*. Confidence intervals for the corrected relative risks are provided that include a component representing error in the estimation of lambda. Based on simulation studies, both methods perform well for true odds ratios up to 3.0; for higher odds ratios the likelihood approximation method was superior with respect to both bias and coverage probability. An example is provided based on data from a prospective study of dietary fat intake and risk of breast cancer and a validation study of the questionnaire used to assess dietary fat intake.
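A worked example of the linear approximation method with hypothetical numbers; the delta-method standard error shown is one common way to let the interval reflect uncertainty in lambda, and the paper's exact variance formula may differ.

import numpy as np

beta_obs, se_beta = 0.35, 0.10    # observed logistic coefficient for the surrogate (hypothetical)
lam, se_lam = 0.70, 0.05          # calibration slope from the validation study (hypothetical)

beta_corr = beta_obs / lam                                                  # corrected log odds ratio
se_corr = np.sqrt(se_beta**2 / lam**2 + beta_obs**2 * se_lam**2 / lam**4)   # delta-method SE
or_corr = np.exp(beta_corr)
ci = np.exp([beta_corr - 1.96 * se_corr, beta_corr + 1.96 * se_corr])
print(or_corr, ci)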

14.
Frequently, covariates used in a logistic regression are measured with error. The authors previously described the correction of logistic regression relative risk estimates for measurement error in one or more covariates when a "gold standard" is available for exposure assessment. For some exposures (e.g., serum cholesterol), no gold standard exists, and one must assess measurement error via a reproducibility substudy. In this paper, the authors present measurement error methods for logistic regression when there is error (possibly correlated) in one or more covariates and one has data from both a main study and a reproducibility substudy. Confidence intervals from this procedure reflect error in parameter estimates from both studies. These methods are applied to the Framingham Heart Study, where the 10-year incidence of coronary heart disease is related to several coronary risk factors among 1,731 men disease-free at examination 4. Reproducibility data are obtained from the subgroup of 1,346 men seen at examinations 2 and 3. Estimated odds ratios comparing extreme quintiles for risk factors with substantial error were increased after correction for measurement error (serum cholesterol, 2.2 vs. 2.9; serum glucose, 1.3 vs. 1.5; systolic blood pressure, 2.8 vs. 3.8), but were generally decreased or unchanged for risk factors with little or no error (body mass index, 1.6 vs. 1.6; age 65-69 years vs. 35-44 years, 4.3 vs. 3.8; smoking, 1.7 vs. 1.7).
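For a single error-prone covariate, the flavour of the correction can be conveyed with a crude reliability coefficient estimated from two replicate measurements; this univariate sketch (plain NumPy, method-of-moments variance components) glosses over the multivariate, possibly correlated errors that the paper's procedure handles.

import numpy as np

def reliability_from_replicates(x1, x2):
    # Classical error model: observed = true + error, with two replicates per subject.
    var_within = np.mean((x1 - x2) ** 2) / 2.0          # error (within-person) variance
    var_of_means = np.var((x1 + x2) / 2.0, ddof=1)
    var_between = var_of_means - var_within / 2.0       # true (between-person) variance
    return var_between / (var_between + var_within)     # attenuation / reliability ratio

# Illustrative correction for one covariate:
# beta_corrected = beta_observed / reliability_from_replicates(exam2_value, exam3_value)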

15.
For comparative evaluation, discriminant analysis, logistic regression, and Cox's model were used to select risk factors for total and coronary deaths among 6595 men aged 20-49 followed for 9 years. Groups with mortality between 5 and 93 per 1000 were considered. Discriminant analysis selected variable sets only marginally different from those of the logistic and Cox methods, which always selected the same sets. A time-saving option, offered for both the logistic and Cox selection, showed no advantage compared with discriminant analysis. Analysing more than 3800 subjects, the logistic and Cox methods consumed, respectively, 80 and 10 times more computer time than discriminant analysis. When including the same set of variables in non-stepwise analyses, all methods estimated coefficients that in most cases were almost identical. In conclusion, discriminant analysis is advocated for preliminary or stepwise analysis; otherwise, Cox's method should be used.

16.
Background and Objective: We consider the number needed to treat (NNT) when the event of interest is defined by dichotomizing a continuous response at a threshold level. If the response is measured with error, the resulting NNT is biased. We consider methods to reduce this bias.
Methods: Bias adjustment was studied using simulations in which we varied the distributions of the underlying response and measurement error, including both normal and nonnormal distributions. We studied a maximum likelihood estimate (MLE) based on normality assumptions, and also considered a simulation-extrapolation estimate (SIMEX) without such assumptions. The treatment effect across all potential thresholds was summarized using an NNT threshold curve.
Results: Crude NNT estimation was substantially biased due to measurement error. The MLE performed well under normality, and it continued to perform well with nonnormal measurement error, but when the underlying response was nonnormal the MLE was unacceptably biased and was outperformed by the SIMEX estimate. The simulation results were also reflected in empirical data from a randomized study of cholesterol-lowering therapy.
Conclusion: Ignoring measurement error can lead to substantial bias in NNT, which can have an important practical effect on the interpretation of analyses. Analysis methods that adjust for measurement error bias can be used to assess the sensitivity of NNT estimates to this effect.
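A bare-bones SIMEX sketch for this setting, assuming additive normal measurement error with a known standard deviation sigma_u (plain NumPy; the paper's estimator and extrapolation details may differ): add extra simulated error, track how the estimated event risk changes, and extrapolate back to zero error.

import numpy as np

def simex_risk(y_obs, threshold, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), n_sim=200, seed=0):
    rng = np.random.default_rng(seed)
    lam_grid = [0.0] + list(lambdas)
    risks = [np.mean(y_obs > threshold)]                 # naive estimate (lambda = 0)
    for lam in lambdas:
        sims = [np.mean(y_obs + rng.normal(0.0, np.sqrt(lam) * sigma_u, len(y_obs)) > threshold)
                for _ in range(n_sim)]
        risks.append(np.mean(sims))
    coefs = np.polyfit(lam_grid, risks, 2)               # quadratic extrapolant in lambda
    return np.polyval(coefs, -1.0)                       # lambda = -1 corresponds to no error

# NNT from SIMEX-adjusted risks in two arms (illustrative):
# nnt = 1.0 / (simex_risk(y_control, thr, su) - simex_risk(y_treated, thr, su))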

17.
18.
In a community based, prospective study to determine risk factors for falls, 465 women and 296 men aged 70 years and over were followed for 1 year and 507 falls were documented. A greater proportion of women (32.7%) than men (23.0%) experienced at least one fall in which there was no or minimal external contribution. Using unconditional logistic regression models, we investigated the effect of physical and sociological variables on the sex difference in fall rate. Controlling for age, use of psychotropic drugs, inability to rise from a chair without using the arms, going outdoors less than daily, and living alone decreased the relative risk of women falling compared to men from 2.02 (95% CI, 1.40–2.92) to 1.55 (95% CI, 1.04–2.31). Some of the increased risk of falling associated with being a woman could thus be explained and is potentially correctable. But even after controlling for the physical and social variables which we had assessed, women still had a significantly increased relative risk of falling compared with men.

19.
Statistical analyses of the joint effects of several factors (covariates) on the risk of disease, death, or other dichotomous outcomes are frequently based on a model that relates the effect of the covariates to some function of the probability of the outcome. The odds ratio, relative risk, and the difference in risks are among the simplest candidates for the outcome function. Each can be specified as a special case of the generalized linear model, but their use has been limited to researchers with access to specialized computer programs that are not yet generally available in standard computer packages. The purpose of this communication is to describe how to implement the maximum likelihood estimation procedures and hypothesis testing associated with the generalized linear model using any computer program that can perform weighted least squares analyses. The procedure is applied specifically to models for relative risks, risk differences, and odds ratios. The techniques are illustrated with SAS and SPSSx programs for data sets previously presented.
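To make the weighted-least-squares idea concrete, here is a minimal iteratively reweighted least squares loop for the log-binomial (relative risk) model in plain NumPy; the same skeleton covers the identity link (risk differences) and the logit link (odds ratios) by changing the link, weights, and working response. It is an illustration, not the paper's SAS/SPSSx code.

import numpy as np

def irls_log_binomial(X, y, n_iter=100, tol=1e-8):
    # X: design matrix with the intercept in the first column; y: 0/1 outcomes.
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(max(y.mean(), 1e-3))               # start from the marginal risk
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.clip(np.exp(eta), 1e-10, 1 - 1e-10)     # fitted risks, kept inside (0, 1)
        w = mu / (1.0 - mu)                             # (dmu/deta)^2 / Var(y) for the log link
        z = eta + (y - mu) / mu                         # working response
        beta_new = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * z))
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new                             # exp(beta) are adjusted relative risks
        beta = beta_new
    return beta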

20.