Similar Articles
20 similar articles found (search time: 31 ms)
1.
PURPOSE: To compare adjusted effects of drug treatment for hypertension on the risk of stroke from propensity score (PS) methods with a multivariable Cox proportional hazards (Cox PH) regression in an observational study with censored data. METHODS: From two prospective population-based cohort studies in The Netherlands, a selection of subjects was used who either received drug treatment for hypertension (n = 1293) or were untreated 'candidates' for treatment (n = 954). A multivariable Cox PH regression was performed on the risk of stroke using eight covariates, along with three PS methods. RESULTS: In multivariable Cox PH regression the adjusted hazard ratio (HR) for treatment was 0.64 (CI(95%): 0.42, 0.98). After stratification on the PS the HR was 0.58 (CI(95%): 0.38, 0.89). Matching on the PS yielded a HR of 0.49 (CI(95%): 0.27, 0.88), whereas adjustment with a continuous PS gave results similar to those of Cox regression. When more covariates were added (not possible in the multivariable Cox model), a similar reduction in HR was reached by all PS methods. The inclusion of a simulated balanced covariate produced the largest changes in HR with the multivariable Cox model and with matching on the PS. CONCLUSIONS: In general, PS methods can accommodate a larger number of confounders. In this data set, matching on the PS is sensitive to small changes in the model, probably because of the small number of events. Stratification and covariate adjustment were less sensitive to the inclusion of a non-confounder than multivariable Cox PH regression. Attention should be paid to PS model building and balance checking.
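A minimal sketch of the stratification approach described above, using simulated data: estimate the PS with logistic regression, then divide subjects into PS quintiles within which treated and untreated subjects are compared. All names and the simulated data are illustrative assumptions, not the study's code or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2247                            # treated (1293) + untreated candidates (954)
X = rng.normal(size=(n, 8))         # eight baseline covariates (assumed)
logit = 0.5 * X[:, 0] - 0.3 * X[:, 1]
treated = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Step 1: estimate the PS = P(treatment | covariates).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: stratify on PS quintiles; treatment effects are then estimated
# within strata and pooled (here we only form and check the strata).
quintiles = np.quantile(ps, [0.2, 0.4, 0.6, 0.8])
stratum = np.digitize(ps, quintiles)
sizes = np.bincount(stratum, minlength=5)
```

Matching on the PS would instead pair each treated subject with the untreated subject with the nearest PS; both designs need the balance checking the conclusions call for.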

2.
ABSTRACT

A crucial component of making individualized treatment decisions is to accurately predict each patient’s disease risk. In clinical oncology, disease risks are often measured through time-to-event data, such as overall survival and progression/recurrence-free survival, and are often subject to censoring. Risk prediction models based on recursive partitioning methods are becoming increasingly popular, largely due to their ability to handle nonlinear relationships, higher-order interactions, and/or high-dimensional covariates. The most popular recursive partitioning methods are versions of the Classification and Regression Tree (CART) algorithm, which builds a simple interpretable tree-structured model. With the aim of increasing prediction accuracy, the random forest algorithm averages multiple CART trees, creating a flexible risk prediction model. Risk prediction models used in clinical oncology commonly use both traditional demographic and tumor pathological factors as well as high-dimensional genetic markers and treatment parameters from multimodality treatments. In this article, we describe the most commonly used extensions of the CART and random forest algorithms to right-censored outcomes. We focus on how they differ from the methods for noncensored outcomes, and how the different splitting rules and methods for cost-complexity pruning impact these algorithms. We demonstrate these algorithms by analyzing a randomized Phase III clinical trial of breast cancer. We also conduct Monte Carlo simulations to compare the prediction accuracy of survival forests with more commonly used regression models under various scenarios. These simulation studies aim to evaluate how sensitive the prediction accuracy is to the underlying model specifications, the choice of tuning parameters, and the degree of missing covariates.
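A common splitting rule in the survival-tree extensions mentioned above is the two-sample log-rank statistic: a candidate split is scored by how strongly it separates the survival experience of the two child nodes under right censoring. The sketch below (illustrative, not the article's code) computes that statistic in NumPy.

```python
import numpy as np

def logrank_stat(time, event, group):
    """Two-sample log-rank statistic ((O1 - E1)^2 / Var) for right-censored data."""
    obs_minus_exp, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):           # distinct event times
        at_risk = time >= t
        d = np.sum((time == t) & (event == 1))      # events at t, both groups
        n_tot = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d1 = np.sum((time == t) & (event == 1) & (group == 1))
        obs_minus_exp += d1 - d * n1 / n_tot        # observed - expected in group 1
        if n_tot > 1:
            var += d * (n1 / n_tot) * (1 - n1 / n_tot) * (n_tot - d) / (n_tot - 1)
    return obs_minus_exp ** 2 / var

# Simulated right-censored data with a real group effect (assumed scenario).
rng = np.random.default_rng(1)
g = rng.binomial(1, 0.5, 200)
t_event = rng.exponential(1.0 + 1.5 * g)            # group 1 survives longer
t_cens = rng.exponential(3.0, 200)                  # independent censoring
time, event = np.minimum(t_event, t_cens), (t_event <= t_cens).astype(int)
stat = logrank_stat(time, event, g)
```

A survival tree evaluates this statistic for every candidate split and keeps the split with the largest value; a survival forest repeats this over bootstrap samples and random covariate subsets.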

3.
Background: There is uncertainty about whether piperacillin/tazobactam (PT) increases the risk of acute kidney injury (AKI) in patients without concomitant use of vancomycin. This study compared the risk of hospital-acquired AKI (HA-AKI) among adults treated with PT or antipseudomonal β-lactams (meropenem, ceftazidime) without concomitant use of vancomycin. Methods: This real-world study analysed data from the China Renal Data System and assessed the risk of HA-AKI in adults hospitalized with infection after exposure to PT, meropenem or ceftazidime in the absence of concomitant vancomycin. The primary outcome was any stage of HA-AKI according to the Kidney Disease Improving Global Outcomes guidelines. A multivariable Cox regression model and different propensity score (PS) matching models were used. Results: Among the 29,441 adults [mean (standard deviation) age 62.44 (16.84) years; 17,980 females (61.1%)] included in this study, 14,721 (50%) used PT, 9081 (31%) used meropenem and 5639 (19%) used ceftazidime. During a median follow-up period of 8 days, 2601 (8.8%) developed HA-AKI. The use of PT was not associated with significantly higher risk of HA-AKI compared with meropenem [adjusted hazard ratio (aHR) 1.07, 95% confidence interval (CI) 0.97–1.19], ceftazidime (aHR 1.09, 95% CI 0.92–1.30) or both agents (aHR 1.07, 95% CI 0.97–1.17) after adjusting for confounders. Results were consistent in stratified analyses, in PS matching using logistic regression or random forest methods to generate the PS, and in an analysis restricting outcomes to AKI stage 2–3. Conclusions: Without concomitant use of vancomycin, the risk of AKI following PT therapy is comparable with that of meropenem or ceftazidime among adults hospitalized with infection.

4.
Confounding bias often occurs in the analysis of the exposure–safety relationship due to confounding factors that have impacts on both drug exposure and safety outcomes. Instrumental variable (IV) methods have been widely used to eliminate or reduce this bias in observational studies in, for example, epidemiology. Recently, applications of IV methods can also be found in clinical trials to deal with problems such as treatment non-compliance. IV methods have rarely been used in pharmacokinetic/pharmacodynamic analyses in clinical trials, although in a randomized trial with multiple dose levels dose may be a powerful IV. We consider modeling the relationship between pharmacokinetics as a measure of drug exposure and risk of adverse events with Poisson regression models and dose as an IV. We show that although IV methods for nonlinear models are in general complex, simple approaches are available for the combination of Poisson regression models and routinely used dose–exposure models. We propose two simple methods that are intuitive and easy to implement. Both methods consist of two stages, with the first stage fitting the dose–exposure model; the fitted model is then used in fitting the Poisson regression model in two different ways. The properties of the two methods are compared under several practical scenarios with simulation. A numerical example is used to illustrate an application of the methods.
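A minimal two-stage sketch of the idea described above: randomized dose serves as the instrument, stage 1 fits the dose–exposure model, and stage 2 plugs the fitted exposure into a Poisson regression for adverse-event counts, breaking the link to the unobserved confounder. The data-generating model and all parameter values are illustrative assumptions, not the authors' models.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, PoissonRegressor

rng = np.random.default_rng(2)
n = 1000
dose = rng.choice([10.0, 20.0, 40.0, 80.0], size=n)   # randomized dose levels
u = rng.normal(size=n)                                 # unobserved confounder
exposure = 0.5 * dose + u + rng.normal(size=n)         # PK exposure measure
rate = np.exp(0.02 * exposure + 0.3 * u)               # confounded AE rate
events = rng.poisson(rate)

# Stage 1: fit the dose -> exposure model.
stage1 = LinearRegression().fit(dose[:, None], exposure)
exposure_hat = stage1.predict(dose[:, None])

# Stage 2: Poisson regression of event counts on the *fitted* exposure;
# because dose is randomized, exposure_hat is independent of u.
stage2 = PoissonRegressor(alpha=0.0).fit(exposure_hat[:, None], events)
beta_iv = stage2.coef_[0]                              # ~ true value 0.02
```

A naive Poisson regression on the observed exposure would instead absorb part of the confounder's effect into the exposure coefficient.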

6.
ABSTRACT

Objective: Gastro-oesophageal reflux disease (GORD) is a recurring condition, with many patients requiring long-term maintenance therapy. Therefore, the initial choice of treatment has long-term cost implications. The aim was to compare the costs and effectiveness of treatment of GORD (unconfirmed by endoscopy) with seven proton pump inhibitors (PPIs: esomeprazole, lansoprazole (capsules and oro-dispersible tablets), omeprazole (generic and branded), pantoprazole and rabeprazole) over one year.

Design and methods: A treatment model comprising 13 interconnected Markov models was developed, incorporating acute treatment of symptoms, long-term therapy and subsequent decisions to undertake endoscopy to confirm the diagnosis. Patients were allowed to stop treatment or to receive maintenance treatment either continuously or on-demand, depending on response to therapy. The long-term dosing schedule (high dose or step-down dose) was based on current market data. Efficacy of treatments was based on clinical trials and follow-up studies, while resource use patterns were determined by a panel of physicians.

Main outcome measures: The model predicts total expected annual costs, number of symptom-free days and quality-adjusted life-years (QALY).

Results: Generic omeprazole and rabeprazole dominated the other PPIs (i.e. they cost less and resulted in more symptom-free days and higher QALY gains). Rabeprazole had a favourable cost-effectiveness ratio of £3.42 per symptom-free day and £8308 per QALY gained when compared with generic omeprazole. Rabeprazole remained cost-effective regardless of the choice of maintenance treatment (i.e. the proportion of patients remaining on continuous versus on-demand treatment).

Conclusions: Economic models provide a useful framework to evaluate PPIs in realistic clinical scenarios. Our findings show that rabeprazole is cost-effective for the treatment of GORD.
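The cost-effectiveness ratios reported above are incremental cost-effectiveness ratios (ICERs): the extra cost divided by the extra benefit of one strategy over another. The sketch below shows the arithmetic with hypothetical annual costs and outcomes chosen only to reproduce the same order of magnitude; they are not the model's actual inputs.

```python
# Hypothetical inputs (GBP, symptom-free days, QALYs) for two strategies.
cost_a, cost_b = 350.0, 290.0     # annual cost: strategy A vs comparator B
sfd_a, sfd_b = 330.0, 312.5       # symptom-free days per year
qaly_a, qaly_b = 0.920, 0.913     # quality-adjusted life-years

# ICER = incremental cost / incremental effect, per effectiveness measure.
icer_sfd = (cost_a - cost_b) / (sfd_a - sfd_b)     # GBP per symptom-free day
icer_qaly = (cost_a - cost_b) / (qaly_a - qaly_b)  # GBP per QALY gained
```

A strategy "dominates" another when its incremental cost is negative while its incremental effect is positive, in which case no ICER needs to be computed.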

7.
ABSTRACT

Methods for assessing whether a single biomarker is prognostic or predictive in the context of a control and experimental treatment are well known. With a panel of biomarkers, each component biomarker potentially measuring sensitivity to a different drug, it is not obvious how to extend these methods. We consider two situations, which lead to different ways of defining whether a biomarker panel is prognostic or predictive. In one, there are multiple experimental targeted treatments, each with an associated biomarker assay of the relevant target in the panel, along with a control treatment; the extension of the single-biomarker scenario to this situation is straightforward. In the other situation, there are many (nontargeted) treatments and a single assay that can be used to assess the sensitivity of the patient’s tumor to the different treatments. In addition to evaluating previous approaches to this situation, we propose using regression models with varying assumptions to assess such panel biomarkers. Missing biomarker data can be problematic with the regression models, and, after demonstrating that a multiple imputation procedure does not work, we suggest a modified regression model that can accommodate some forms of missing data. We also address the notions of qualitative interactions in the biomarker panel setting.

8.
PURPOSE: Both propensity score (PS) matching and inverse probability of treatment weighting (IPTW) allow causal contrasts, albeit different ones. In the presence of effect-measure modification, different analytic approaches produce different summary estimates. METHODS: We present a spreadsheet example that assumes a dichotomous exposure, covariate, and outcome. The covariate can be a confounder or not and a modifier of the relative risk (RR) or not. Based on expected cell counts, we calculate RR estimates using five summary estimators: Mantel-Haenszel (MH), maximum likelihood (ML), the standardized mortality ratio (SMR), PS matching, and a common implementation of IPTW. RESULTS: Without effect-measure modification, all approaches produce identical results. In the presence of effect-measure modification and regardless of the presence of confounding, results from the SMR and PS are identical, but IPTW can produce strikingly different results (e.g., RR = 0.83 vs. RR = 1.50). In such settings, MH and ML do not estimate a population parameter and results for those measures fall between PS and IPTW. CONCLUSIONS: Discrepancies between PS and IPTW reflect different weighting of stratum-specific effect estimates. SMR and PS matching assign weights according to the distribution of the effect-measure modifier in the exposed subpopulation, whereas IPTW assigns weights according to the distribution of the entire study population. In pharmacoepidemiology, contraindications to treatment that also modify the effect might be prevalent in the population, but would be rare among the exposed. In such settings, estimating the effect of exposure in the exposed rather than the whole population is preferable.
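A spreadsheet-style illustration of the weighting point above, with assumed numbers (not the article's actual cells): when the baseline risk is equal across strata, the summary RR under each standard reduces to a weighted average of the stratum-specific RRs, with SMR/PS-matching weights taken from the exposed distribution and IPTW weights from the total population.

```python
import numpy as np

# Two strata of an effect-measure modifier, chosen so the modifier is rare
# among the exposed but common in the population (e.g. a contraindication).
rr      = np.array([0.5, 2.0])        # stratum-specific relative risks
exposed = np.array([900.0, 100.0])    # exposed subjects per stratum
total   = np.array([1000.0, 9000.0])  # all subjects per stratum

# SMR / PS matching: standardize to the exposed; IPTW: to the population.
# (Simplification: equal baseline risks across strata assumed, so each
# summary RR is a weighted mean of stratum RRs.)
rr_smr  = np.sum(rr * exposed) / exposed.sum()
rr_iptw = np.sum(rr * total) / total.sum()
```

With these numbers the SMR-weighted estimate is protective (0.65) while the IPTW estimate is harmful (1.85), the same qualitative reversal the abstract reports.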

9.
Abstract

Recently, various approaches have been suggested for dose escalation studies based on observations of both undesirable events and evidence of therapeutic benefit. This article concerns a Bayesian approach to dose escalation that requires the user to make numerous design decisions relating to the number of doses to make available, the choice of the prior distribution, the imposition of safety constraints and stopping rules, and the criteria by which the design is to be optimized. Results are presented of a substantial simulation study conducted to investigate the influence of some of these factors on the safety and the accuracy of the procedure, with a view toward providing general guidance for investigators conducting such studies. The Bayesian procedures evaluated use logistic regression to model the two responses, which are both assumed to be binary. The simulation study is based on features of a recently completed study of a compound with potential benefit to patients suffering from inflammatory diseases of the lung.

10.
11.
ABSTRACT

A personalized treatment policy requires defining the optimal treatment for each patient based on their clinical and other characteristics. Here we consider a commonly encountered situation in practice, when analyzing data from observational cohorts, in which there are auxiliary variables that affect both the treatment and the outcome, yet these variables are not of primary interest to be included in a generalizable treatment strategy. Furthermore, there is not enough prior knowledge of the effect of the treatments or of the importance of the covariates for us to explicitly specify the dependency between the outcome and the different covariates, so we choose a model flexible enough to accommodate the possibly complex dependence of the outcome on the covariates. We consider observational studies with a survival outcome and propose to use Random Survival Forest with Weighted Bootstrap (RSFWB) to model the counterfactual outcomes while marginalizing over the auxiliary covariates. By maximizing the restricted mean survival time, we estimate the optimal regime for a target population based on a selected set of covariates. Simulation studies illustrate that the proposed method performs reliably across a range of different scenarios. We further apply RSFWB to a prostate cancer study.

12.
Little research has been done to evaluate the effect of adjusting for baseline in the analysis of repeated incomplete binary data through simulation study. In this article, covariate-adjusted and unadjusted implementations of the following methods were compared in analyzing incomplete repeated binary data when the outcome at the study endpoint is of interest: logistic regression with the last observation carried forward (LOCF), generalized estimating equations (GEE), weighted GEE (WGEE), generalized linear mixed models (GLMM), and multiple imputation (MI) with analyses via GEE. Incomplete data mimicking several clinical trial scenarios were generated using missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR) mechanisms. Across the various analytic methods and scenarios, covariate-adjusted analyses generally yielded larger, less biased treatment effect estimates and larger standard errors compared with their unadjusted counterparts. The net result of these factors was increased power from the covariate-adjusted analyses without increasing Type I error rates. Although all methods were biased in at least some of the MNAR scenarios, the Type I error rates from LOCF exceeded 20%, whereas the highest rate from any other method in any scenario was less than 10%. LOCF also yielded biased results in MCAR and MAR data, whereas the other methods were not biased or had smaller biases than LOCF. These results support longitudinal modeling of repeated binary data over LOCF logistic regression of the study endpoint only. These results also support covariate adjustment for baseline severity in these longitudinal models.
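For concreteness, LOCF, the naive single-imputation approach the abstract argues against, can be sketched in a few lines for a subjects-by-visits binary outcome matrix with NaN marking missed visits. The data here are illustrative.

```python
import numpy as np

def locf(y):
    """Carry each subject's last observed value forward over NaN gaps."""
    y = y.astype(float).copy()
    for row in y:                      # one row per subject
        last = np.nan
        for j in range(row.size):
            if np.isnan(row[j]):
                row[j] = last          # stays NaN if nothing observed yet
            else:
                last = row[j]
    return y

y = np.array([[1.0, np.nan, np.nan],   # drops out after visit 1
              [0.0, 1.0,    np.nan],   # drops out after visit 2
              [np.nan, 0.0, 0.0]])     # misses visit 1 only
filled = locf(y)
endpoint = filled[:, -1]               # endpoint outcome analyzed after LOCF
```

The longitudinal alternatives (GEE, WGEE, GLMM, MI) instead model all observed visits directly, which is what gives them their better Type I error behavior under MAR.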

13.
BACKGROUND: A correctly specified propensity score (PS) estimated in a cohort ("cohort PS") should, in expectation, remain valid in a subgroup population. OBJECTIVE: We sought to determine whether a cohort PS can be validly applied to subgroup analyses and, thus, add efficiency to studies with many subgroups or restricted data. METHODS: In each of three cohort studies, we estimated a cohort PS, defined five subgroups, and then estimated subgroup-specific PSs. We compared the difference in treatment effect estimates for subgroup analyses adjusted by cohort PSs versus subgroup-specific PSs. Then, over 10 million times, we simulated a population with known characteristics of confounding, subgroup size, treatment interactions, and treatment effect and again assessed the difference in point estimates. RESULTS: We observed that point estimates in most subgroups were substantially similar with the two methods of adjustment. In simulations, the effect estimates differed by a median of 3.4% (interquartile (IQ) range 1.3-10.0%). The IQ range exceeded 10% only in cases where the subgroup had

14.
OBJECTIVE: Using orally administered tiopronin as an example, to establish a model for predicting the area under the plasma concentration-time curve (AUC) and to compare internal validation methods for the model. METHODS: Twenty healthy volunteers received a single oral dose of tiopronin capsules. A multiple regression model for predicting AUC was established with a limited sampling strategy (LSS), and the performance of the simulation, bootstrap, and jackknife approaches in validating the regression model was compared using RMSE and Pe. RESULTS: Validation with simulated data showed that the regression model (AUC0-48 = 9.36 + 1.81·C3 + 23.28·C8) had good predictive performance. After the sample size was expanded by the simulation, bootstrap, and jackknife methods, the estimated model parameters were, respectively: Intercept = 17.58, 9.33, and 9.84; the regression coefficient for C3 (M1) = 1.01, 1.76, and 1.84; and the regression coefficient for C8 (M2) = 6.17, 23.14, and 21.36, with RMSE values of 3.87, 1.94, and 2.23 computed on the original data. CONCLUSION: Simulated concentration-time (c-t) data can be used to validate the LSS regression model for predicting AUC, but estimating the LSS regression model parameters from simulated c-t data cannot serve this purpose; the bootstrap and jackknife methods can both validate the LSS regression model, with similar results.
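A hypothetical sketch of the LSS workflow described above: fit AUC = b0 + b1·C3 + b2·C8 by multiple regression on two sampled concentrations, then validate with a bootstrap RMSE. The simulated concentrations and noise level are illustrative assumptions, not the trial's data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20                                   # 20 volunteers, as in the abstract
C3 = rng.uniform(2, 8, n)                # concentration at 3 h (assumed units)
C8 = rng.uniform(0.5, 3, n)              # concentration at 8 h
auc = 9.36 + 1.81 * C3 + 23.28 * C8 + rng.normal(0, 2, n)  # assumed model

# Fit the LSS multiple regression AUC = b0 + b1*C3 + b2*C8.
X = np.column_stack([np.ones(n), C3, C8])
beta, *_ = np.linalg.lstsq(X, auc, rcond=None)

# Bootstrap validation: refit on resamples, score against the original data.
rmses = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    b, *_ = np.linalg.lstsq(X[idx], auc[idx], rcond=None)
    rmses.append(np.sqrt(np.mean((X @ b - auc) ** 2)))
boot_rmse = float(np.mean(rmses))
```

The jackknife variant would instead refit n times, leaving out one volunteer each time and predicting the held-out AUC.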

15.
ABSTRACT

Observational studies provide a core resource in assessing post-market drug safety and effectiveness. Propensity scores are a predominant method for confounding adjustment to achieve unbiased estimation of average treatment effects in observational data. However, the use of propensity score methods has been limited to comparing two treatment groups, while medical situations frequently present with multiple treatment options. Inverse probability of treatment weighting (IPTW) is a popular propensity score adjustment method, but its performance degrades with decreased positivity leading to extreme weights, a problem that can be amplified with multiple treatment groups. Meanwhile, regression on a spline of the propensity score has shown favorable performance compared to other propensity score methods in recent studies involving two treatments. This project utilizes a simulation study to compare IPTW and propensity score splines as adjustment methods in a three-treatment setting. We test a variety of spline methods, including natural cubic splines with varying numbers of interior knots, and thin-plate regression splines. We vary several parameters across simulations, including the degree of propensity score overlap among treatment groups, treatment prevalence, outcome prevalence, and true marginal relative risk. We assess methods based on their bias, root mean squared error, and coverage of the true marginal relative risk across simulations. We find that all methods perform similarly well when there is good propensity score distribution overlap. However, with even moderate decrease in overlap or low outcome prevalence, IPTW produces more biased estimates and higher variance than propensity score splines. Low treatment prevalence or unequal treatment prevalences across groups also worsens IPTW performance. Overall, a natural cubic spline with a relatively small number of interior knots provides good performance across a range of simulations.
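A minimal three-treatment IPTW sketch on illustrative data: estimate generalized propensity scores with multinomial logistic regression, then weight each subject by the inverse probability of the treatment actually received. Extreme weights are exactly the low-positivity symptom the abstract describes; truncation is a common (if imperfect) remedy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 600
X = rng.normal(size=(n, 3))
# Treatment choice depends on the first covariate -> confounded comparison.
logits = np.column_stack([0.8 * X[:, 0], np.zeros(n), -0.8 * X[:, 0]])
p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
trt = np.array([rng.choice(3, p=pi) for pi in p])

# Generalized propensity scores: P(trt = k | X) for k = 0, 1, 2.
gps = LogisticRegression(max_iter=1000).fit(X, trt).predict_proba(X)
w = 1.0 / gps[np.arange(n), trt]       # inverse probability of received trt
w_trunc = np.minimum(w, np.quantile(w, 0.99))   # truncate extreme weights
```

The spline alternative the abstract favors would instead include smooth functions of the (multinomial) propensity scores as covariates in the outcome regression rather than weighting by their inverse.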

16.
Use of propensity scores to identify and control for confounding in observational studies that relate medications to outcomes has increased substantially in recent years. However, it remains unclear whether, and if so when, use of propensity scores provides estimates of drug effects that are less biased than those obtained from conventional multivariate models. In the great majority of published studies that have used both approaches, estimated effects from propensity score and regression methods have been similar. Simulation studies further suggest comparable performance of the two approaches in many settings. We discuss five reasons that favour use of propensity scores: the value of focus on indications for drug use; optimal matching strategies from alternative designs; improved control of confounding with scarce outcomes; ability to identify interactions between propensity of treatment and drug effects on outcomes; and correction for unobserved confounders via propensity score calibration. We describe alternative approaches to estimate and implement propensity scores and the limitations of the C-statistic for evaluation. Use of propensity scores will not correct biases from unmeasured confounders, but can aid in understanding determinants of drug use and lead to improved estimates of drug effects in some settings.

17.
The purpose of this study was to use the stochastic simulation and estimation method to evaluate the effects of sample size and the number of samples per individual on model development and evaluation. The pharmacokinetic parameters and inter- and intra-individual variation were obtained from a population pharmacokinetic model of clinical trials of amlodipine. Stochastic simulation and estimation were performed to evaluate the efficiency of different sparse sampling scenarios in estimating the compartment model. Simulated data were generated 1000 times, and three candidate models were used to fit the 1000 data sets. Fifty-five sparse sampling scenarios were investigated and compared. The results showed that designs of 60 subjects with three sampling points each, or 20 subjects with five points each, are recommended, and that the quantitative methodology of stochastic simulation and estimation is valuable for efficiently estimating the compartment model and can be used for other similar model development and evaluation approaches.
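The stochastic simulation and estimation loop can be sketched as follows: simulate concentration-time data from an assumed one-compartment oral model with between-subject variability, then re-estimate the parameters from a sparse three-point design. All parameter values, the sampling times, and the pooled (naive) fitting approach are illustrative assumptions, not the amlodipine model.

```python
import numpy as np
from scipy.optimize import curve_fit

def conc(t, ka, ke, v):
    """One-compartment oral model (unit bioavailability, dose = 10)."""
    dose = 10.0
    return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

rng = np.random.default_rng(6)
ka0, ke0, v0 = 1.2, 0.15, 20.0              # assumed population parameters
t_sparse = np.array([0.5, 2.0, 8.0])        # sparse design: 3 samples/subject

times, obs = [], []
for _ in range(60):                         # 60 subjects, as recommended
    ka = ka0 * np.exp(rng.normal(0, 0.2))   # log-normal between-subject var.
    ke = ke0 * np.exp(rng.normal(0, 0.2))
    c = conc(t_sparse, ka, ke, v0) * np.exp(rng.normal(0, 0.1, 3))
    times.append(t_sparse)
    obs.append(c)

t_all, c_all = np.concatenate(times), np.concatenate(obs)
popt, _ = curve_fit(conc, t_all, c_all, p0=[1.0, 0.1, 15.0],
                    bounds=([0.3, 0.01, 1.0], [5.0, 1.0, 100.0]))
```

A full SSE study would repeat this simulate-and-fit cycle 1000 times per design and compare the bias and precision of the recovered parameters across designs; population PK software would fit a mixed-effects model rather than this naive pooled fit.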

18.
19.
In longitudinal data, interest is usually focused on the repeatedly measured variable itself. In some situations, however, the pattern of variation of the variable over time may contain information about a separate outcome variable. In such situations, longitudinal data provide an opportunity to develop predictive models for future observations of the separate outcome variable given the current data for an individual. In particular, longitudinally changing patterns of repeated measurements of a variable measured up to time t, or trajectories, can be used to predict an outcome measure or event that occurs after time t.

In this article, we propose a method for predicting an outcome variable based on a generalized linear model, specifically, a logistic regression model, the covariates of which are variables that characterize the trajectory of an individual. Since the trajectory of an individual contains estimation error, the proposed logistic regression model constitutes a measurement error model. The model is fitted in two steps. First, a linear mixed model is fitted to the longitudinal data to estimate the random effect that characterizes the trajectory for each individual while adjusting for other covariates. In the second step, a conditional likelihood approach is applied to account for the estimation error in the trajectory. Prediction of an outcome variable is based on the logistic regression model in the second step. The receiver operating characteristic curve is used to compare the discrimination ability of a model with trajectories to one without trajectories as covariates. A simulation study is used to assess the performance of the proposed method, and the method is applied to clinical trial data.
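A simplified two-step sketch of the approach described above: a per-subject least-squares slope stands in for the mixed-model random effect, and an ordinary logistic fit stands in for the conditional-likelihood correction for estimation error. The simulated data and all names are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n_subj, n_visits = 150, 5
t = np.arange(n_visits, dtype=float)

# Longitudinal data: each subject has a latent trajectory slope.
slopes_true = rng.normal(0.0, 1.0, n_subj)
y_long = slopes_true[:, None] * t + rng.normal(0, 0.5, (n_subj, n_visits))

# Step 1: summarize each subject's trajectory by an estimated slope.
slopes_hat = np.array([np.polyfit(t, y_long[i], 1)[0] for i in range(n_subj)])

# Later outcome depends on the true trajectory (steeper slope -> higher risk).
outcome = rng.binomial(1, 1 / (1 + np.exp(-2.0 * slopes_true)))

# Step 2: logistic regression of the later outcome on the trajectory feature.
clf = LogisticRegression().fit(slopes_hat[:, None], outcome)
accuracy = clf.score(slopes_hat[:, None], outcome)   # crude check, not an ROC
```

The article's conditional-likelihood step additionally corrects for the fact that slopes_hat measures the true slope with error, which otherwise attenuates the logistic coefficient.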

20.
Identifying subgroups that respond differently to a treatment, both in terms of efficacy and safety, is an important part of drug development. A well-known challenge in exploratory subgroup analyses is the small sample size in the considered subgroups, which is usually too low to allow for definite comparisons. In early phase trials, this problem is further exacerbated because limited or no clinical prior information on the drug and plausible subgroups is available. We evaluate novel strategies for treatment effect estimation in these settings in a simulation study motivated by real clinical trial situations. We compare several approaches to estimating treatment effects for selected subgroups, employing model averaging, resampling, and Lasso regression methods. Two subgroup identification approaches are employed, one based on categorization of covariates and the other based on splines. Our results show that naive estimation of the treatment effect, which ignores that a selection has taken place, leads to bias and overoptimistic conclusions. For the considered simulation scenarios, virtually all evaluated novel methods provide more adequate estimates of the treatment effect for selected subgroups, in terms of bias, mean squared error (MSE), and confidence interval coverage. Supplementary materials for this article are available online.
