Similar Documents (20 results found)
1.
We develop a randomization-based analysis for repeated binary outcomes with non-compliance. A randomization-based semi-parametric estimation procedure for both the causal risk difference and the causal risk ratio is proposed for repeated binary data. Although we assume simple structural models for potential outcomes, we choose to avoid making any assumptions about comparability beyond those implied by randomization at time zero. The proposed methods can incorporate non-compliance information while preserving the validity of the test of the null hypothesis, and even in the presence of non-random non-compliance they can estimate the causal effect that treatment would have if all individuals complied with their assigned treatment. The methods are applied to data from a randomized clinical trial for reduction of febrile neutropenia events among acute myeloid leukaemia patients, in which prophylactic use of macrophage colony-stimulating factor (M-CSF) was compared with placebo during courses of intensive chemotherapy.
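A minimal sketch of the randomization-based estimation idea, simplified to a single binary outcome at one time point (the paper's repeated-measures machinery is omitted and all names are hypothetical): under a structural model in which psi is the causal risk difference, g-estimation picks psi so that the "treatment-free" residual Y - psi*D is balanced across the randomized arms.

```python
import numpy as np

def g_estimate_risk_difference(z, d, y):
    """Randomization-based g-estimate of the causal risk difference psi.

    z : 0/1 randomized assignment, d : 0/1 treatment actually received,
    y : 0/1 outcome.  Solves sum_i (z_i - zbar) * (y_i - psi * d_i) = 0,
    i.e. balances H(psi) = Y - psi*D across the randomized arms; only
    randomization at time zero is assumed.
    """
    z, d, y = map(np.asarray, (z, d, y))
    zc = z - z.mean()
    return np.sum(zc * y) / np.sum(zc * d)

# Toy usage with non-random noncompliance confined to the active arm:
rng = np.random.default_rng(0)
n = 5000
z = rng.integers(0, 2, n)
frail = rng.binomial(1, 0.3, n)                  # frail patients comply less, worse outcomes
d = z * rng.binomial(1, np.where(frail, 0.4, 0.9))
y = rng.binomial(1, 0.25 + 0.15 * frail - 0.10 * d)
print(g_estimate_risk_difference(z, d, y))       # targets psi = -0.10
```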

2.
Adjusting for a causal intermediate is a common analytic strategy for estimating an average causal direct effect (ACDE). The ACDE is the component of the total exposure effect that is not relayed through the specified intermediate. Even if the total effect is unconfounded, the usual ACDE estimate may be biased when an unmeasured variable affects the intermediate and outcome variables. Using linear programming optimization to compute non-parametric bounds, we develop new ACDE estimators for binary measured variables in this causal structure, and use root mean square confounding bias (RMSB) to compare their performance with the usual stratified estimator in simulated distributions of target populations comprising the 64 possible potential response types, as well as distributions of target populations restricted to subsets of 18 or 12 potential response types defined by monotonicity or no-interaction assumptions on unit-level causal effects. We also consider target population distributions conditioned on fixed outcome risk among the unexposed, or on fixed true ACDE in one stratum of the intermediate. Results show that a midpoint estimator constructed from the optimization bounds has consistently lower RMSB than the usual stratified estimator, both unconditionally and conditioned on any risk in the unexposed. When conditioning on the true ACDE, this midpoint estimator performs more poorly only when conditioned on an extreme true ACDE in one stratum of the intermediate, yet outperforms the stratified estimator in the other stratum when interaction is permitted. An alternative 'limit-modified crude' estimator can never perform less favourably than the stratified estimator, and often has lower RMSB.

3.
In this article, we show the general relation between standardization methods and marginal structural models. Standardization has long been recognized as a method to control confounding and to estimate causal parameters of interest. Because standardization requires stratification by confounders, stratifying on many confounders leads to sparse data and hence to unstable estimators. A new class of causal models called marginal structural models has recently been proposed, in which the parameters are consistently estimated by the inverse-probability-of-treatment weighting method. Marginal structural models give a nonparametric standardization using the total group (exposed and unexposed) as the standard. In epidemiologic analysis, it is also important to know the change in the average risk of the exposed (or the unexposed) subgroup produced by exposure, which corresponds to taking the exposed (or the unexposed) group as the standard. We propose modifications of the weights in marginal structural models that give nonparametric estimates of these standardized parameters. With the proposed weights, marginal structural models become a useful tool for nonparametric multivariate standardization.
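A sketch of the weighting idea with hypothetical names: the usual IPTW weights standardize to the total group, while taking the exposed group as the standard corresponds to weighting the exposed by 1 and the unexposed by e(X)/(1 - e(X)), the familiar ATT-style weighting (the paper's exact weight modifications may differ in detail).

```python
import numpy as np

def standardization_weights(a, ps, standard="total"):
    """Inverse-probability weights for different standard populations.

    a : 0/1 exposure indicator; ps : estimated propensity score P(A=1|X).
    'total'     -> usual IPTW, standard = exposed + unexposed combined;
    'exposed'   -> weight 1 for exposed, ps/(1-ps) for unexposed;
    'unexposed' -> weight (1-ps)/ps for exposed, 1 for unexposed.
    """
    a, ps = np.asarray(a, float), np.asarray(ps, float)
    if standard == "total":
        return a / ps + (1 - a) / (1 - ps)
    if standard == "exposed":
        return a + (1 - a) * ps / (1 - ps)
    if standard == "unexposed":
        return a * (1 - ps) / ps + (1 - a)
    raise ValueError(standard)

def standardized_risks(a, y, w):
    """Weighted risks in each exposure group, and their difference."""
    a, y, w = map(np.asarray, (a, y, w))
    r1 = np.sum(w * a * y) / np.sum(w * a)
    r0 = np.sum(w * (1 - a) * y) / np.sum(w * (1 - a))
    return r1, r0, r1 - r0
```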

4.
For a two-group comparative study, a stratified inference procedure is routinely used to estimate an overall group contrast with greater precision than the simple two-sample estimator. Unfortunately, the most commonly used methods, including the Cochran-Mantel-Haenszel statistic for a binary outcome and the stratified Cox procedure for an event-time endpoint, do not serve this purpose well. In fact, these procedures may be worse than their two-sample counterparts even when the observed treatment allocations are imbalanced across strata. Various procedures beyond the conventional stratified methods have been proposed to increase the precision of estimation when the naive estimator is consistent. In this paper, we are interested in the case when the treatment allocation proportions vary markedly across strata. We study the stochastic properties of the two-sample naive estimator conditional on the ancillary statistics, namely the observed treatment allocation proportions and/or the stratum sizes, and present a bias-adjusted estimator. This adjusted estimator is asymptotically equivalent to the augmentation estimators proposed under the unconditional setting. Moreover, this consistent estimation procedure is equivalent to a rather simple procedure, which first estimates the mean response of each treatment group via a stratum-size weighted average and then constructs the group contrast estimate, as sketched below. This simple procedure is flexible and readily applicable to any target patient population by choosing appropriate stratum weights. All the proposals are illustrated with data from a cardiovascular clinical trial whose treatment allocations are imbalanced.
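The "rather simple procedure" is short enough to sketch directly (hypothetical names, assuming a two-arm study): estimate each arm's mean as a stratum-size weighted average of the within-stratum arm means, then take the contrast.

```python
import numpy as np
import pandas as pd

def stratum_weighted_contrast(df, stratum="stratum", arm="arm", y="y"):
    """Stratum-size weighted group means and their contrast.

    Each arm's mean response is a weighted average of the within-stratum
    arm means, weighted by the overall stratum sizes (not the per-arm
    sizes), so imbalanced treatment allocation across strata does not
    shift the target population.
    """
    sizes = df.groupby(stratum).size()
    weights = sizes / sizes.sum()                 # replaceable stratum weights
    arm_means = df.groupby([stratum, arm])[y].mean().unstack(arm)
    mu = arm_means.mul(weights, axis=0).sum()     # weighted mean per arm
    return mu, mu.iloc[1] - mu.iloc[0]            # assumes two arms, sorted labels
```

Swapping in other stratum weights retargets the estimate to a different patient population, as the abstract notes.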

5.
In observational studies, estimation of the average causal treatment effect on a patient's response should adjust for confounders that are associated with both treatment exposure and response. In addition, the response, such as medical cost, may have incomplete follow-up. In this article, a double robust estimator is proposed for the average causal treatment effect with right-censored medical cost data. The estimator is double robust in the sense that it remains consistent when either the model for the treatment assignment or the regression model for the response is correctly specified; double robust estimators therefore increase the likelihood that the results represent valid inference. Asymptotic normality is obtained for the proposed estimator, and an estimator for the asymptotic variance is also derived. Simulation studies show good finite-sample performance of the proposed estimator, and a real data analysis using the proposed method is provided as illustration.
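For the uncensored case, the double-robustness idea can be sketched with the familiar augmented IPW form (a simplification: the paper's estimator additionally handles right-censored costs, and all names here are hypothetical).

```python
import numpy as np

def aipw_ate(a, y, ps, m1, m0):
    """Augmented IPW estimate of the average causal treatment effect.

    a : 0/1 treatment; y : response; ps : estimated P(A=1|X);
    m1, m0 : outcome-regression predictions E[Y|A=1,X] and E[Y|A=0,X].
    Consistent if either the treatment-assignment model or the
    outcome-regression model is correctly specified.
    """
    a, y, ps, m1, m0 = map(np.asarray, (a, y, ps, m1, m0))
    mu1 = np.mean(a * y / ps - (a - ps) / ps * m1)
    mu0 = np.mean((1 - a) * y / (1 - ps) + (a - ps) / (1 - ps) * m0)
    return mu1 - mu0
```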

6.
Population prevalence rates of dementia using stratified sampling have previously been estimated using two methods: standard weighted estimates and a logistic model-based approach. An earlier study described this application of the model-based approach and reported a small computer simulation comparing the performance of this estimator to the standard weighted estimator. In this article we use large-scale computer simulations based on data from the recently completed Kame survey of prevalent dementia in the Japanese-American residents of King County, Washington, to describe the performance of these estimators. We found that the standard weighted estimator was unbiased. This estimator performed well for a sample design with proportional allocation, but performed poorly for a sample design that included large strata that were lightly sampled. The logistic model-based estimator performed consistently well for all sample designs considered in terms of the extent of variability in estimation, although some modest bias was observed.
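The standard weighted estimator has a one-line form: weight each stratum's sample prevalence by that stratum's share of the population (hypothetical names; the logistic model-based alternative would replace the raw stratum prevalences with model-smoothed predictions).

```python
import numpy as np

def stratified_prevalence(pop_sizes, sampled, cases):
    """Standard weighted prevalence estimate under stratified sampling.

    pop_sizes : population size of each stratum;
    sampled   : number of subjects sampled per stratum;
    cases     : number of cases found among those sampled.
    """
    pop_sizes, sampled, cases = map(np.asarray, (pop_sizes, sampled, cases))
    p_h = cases / sampled                        # stratum sample prevalences
    return np.sum(pop_sizes * p_h) / np.sum(pop_sizes)

# A large, lightly sampled stratum (first entry) drives the variance,
# matching the design under which this estimator performed poorly:
print(stratified_prevalence([8000, 500], [40, 250], [3, 38]))
```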

7.
Miettinen and Caro (J Clin Epidemiol 1989; 42: 325-331) [6] put forth principles of non-experimental assessment of excess risks in case-base studies; in such a design, risk difference is assessed by estimating the denominators of the proportions of cases among exposed and comparable unexposed subjects by means of a representative sample from the base population of the study. They provided appropriate formulations for the point estimation of risk differences for various exposure patterns with allowance for covariables by means of stratified analysis. However, in small samples point estimates can be uncertain. In this paper, first, likelihood-based statistics are derived which can be used for interval estimation (and also point estimation and significance testing) of risk differences. The unified approach generalizes to stratified analysis. Second, the procedure is parameterized for inferences about risk ratios using a chi-square function in analogy with the profile likelihood method for full cohort analysis. Third, the approach is extended from the study of a binary exposure variable to a risk function analysis under the Poisson regression model. The suggested estimation procedure is conceptually clear and computationally simple in that the modelling, unlike the models considered by Prentice (Biometrika 1986; 73: 1-11) [3] for the analysis of case-base data, focuses directly on the comparison of risks between exposure categories in the study base and thus involves no covariance between the cases and the base sample.

8.
Missing responses are a common problem in medical, social, and economic studies. When responses are missing at random, a complete-case analysis may be biased. A popular bias-correction method is the inverse probability weighting approach proposed by Horvitz and Thompson. To improve efficiency, Robins et al. proposed an augmented inverse probability weighting method; the augmented inverse probability weighting estimator has a double-robustness property and achieves the semiparametric efficiency lower bound when the regression model and the propensity score model are both correctly specified. In this paper, we introduce an empirical likelihood-based estimator as an alternative to that of Qin and Zhang (2007). Our proposed estimator is also doubly robust and locally efficient. Simulation results show that the proposed estimator has better performance when the propensity score is correctly modeled. Moreover, the proposed method can be applied to the estimation of average treatment effects in observational causal inference. Finally, we apply our method to an observational study of smoking, using data from the Cardiovascular Outcomes in Renal Atherosclerotic Lesions clinical trial.
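The two weighting estimators the abstract builds on can be sketched for a missing-at-random response (hypothetical names): the Horvitz-Thompson inverse probability weighting estimator, and its augmented version, which adds an outcome-regression correction term.

```python
import numpy as np

def ht_mean(r, y, pi):
    """Horvitz-Thompson IPW estimate of E[Y] under missingness at random.

    r : 0/1 response indicator; y : outcome (any finite placeholder
    where missing, since it is multiplied by r = 0);
    pi : estimated response propensity P(R=1|X)."""
    r, y, pi = map(np.asarray, (r, y, pi))
    return np.mean(r * y / pi)

def aipw_mean(r, y, pi, m):
    """Augmented IPW estimate of E[Y]; m : predictions E[Y|X].

    Doubly robust: consistent if either the propensity model or the
    outcome-regression model is correct, and semiparametric-efficient
    when both are."""
    r, y, pi, m = map(np.asarray, (r, y, pi, m))
    return np.mean(r * y / pi - (r - pi) / pi * m)
```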

9.
Classical methods for fitting a varying-intercept logistic regression model to stratified data are based on the conditional likelihood principle to eliminate the stratum-specific nuisance parameters. When the outcome variable has multiple ordered categories, a natural choice for the outcome model is a stratified proportional odds or cumulative logit model. However, classical conditioning techniques do not apply to the general K-category cumulative logit model (K>2) with varying stratum-specific intercepts, as there is no reduction due to sufficiency; the nuisance parameters remain in the conditional likelihood. We propose a methodology for fitting the stratified proportional odds model by amalgamating conditional likelihoods obtained from all possible binary collapsings of the ordinal scale. The method allows for categorical and continuous covariates in a general regression framework. We provide a robust sandwich estimate of the variance of the proposed estimator. For binary exposures, we show equivalence of our approach to estimators already proposed in the literature. The proposed recipe can be implemented very easily in standard software. We illustrate the methods via three real data examples related to biomedical research. Simulation results comparing the proposed method with a random effects model on the stratification parameters are also furnished.

10.
Noncompliance often complicates estimation of treatment efficacy from randomized trials. Under random noncompliance, per-protocol analyses or even simple regression adjustments for noncompliance can be adequate for causal inference, but special methods are needed when noncompliance is related to risk. For survival data, Robins and Tsiatis introduced the semi-parametric structural Causal Accelerated Life Model (CALM), which allows time-dependent departures from randomized treatment in either arm and relates each observed event time to a potential event time that would have been observed if the control treatment had been given throughout the trial. Alternatively, Loeys and Goetghebeur developed a structural Proportional Hazards (C-Prophet) model for all-or-nothing noncompliance in the treatment arm only. White et al. proposed a 'complier average causal effect' method for proportional hazards estimation which allows time-dependent departures from randomized treatment in the active arm; a time-invariant version of this estimator (CHARM) consists of a simple adjustment to the intention-to-treat hazard ratio estimate. We used simulation studies mimicking a randomized controlled trial of active treatment versus control with censored time-to-event data, under both random and non-random time-dependent noncompliance, to evaluate the performance of these methods in terms of 95 per cent confidence interval coverage, bias, and root mean square error (RMSE). All methods performed well in terms of bias, even the C-Prophet used after treating time-varying compliance as all-or-nothing. Coverage of the latter method, as implemented in Stata, was too low. The CALM method performed best in terms of bias and coverage but had the largest RMSE.

11.
In this paper, we discuss causal inference on the efficacy of a treatment or medication on a time-to-event outcome with competing risks. Although the treatment group can be randomized, there may be confounding between compliance and the outcome, and unmeasured confounding may persist even after adjustment for measured covariates. Instrumental variable methods are commonly used to obtain consistent estimates of causal parameters in the presence of unmeasured confounding. On the basis of a semiparametric additive hazards model for the subdistribution hazard, we propose an instrumental variable estimator that yields consistent estimates of efficacy in the presence of unmeasured confounding in competing risk settings, and we derive its asymptotic properties. Simulation results show that the estimator performs well in finite samples. We applied our method to a real transplant data example and showed that unmeasured confounding leads to significant bias in the estimated effect (attenuation of about 50%).

12.
In clinical research, investigators are interested in inferring the average causal effect of a treatment. However, the causal parameter from which the average causal effect is derived is not well defined for ordinal outcomes. Although some definitions have been proposed, they are limited in that they do not reduce to the well-defined causal risk for a binary outcome, which is the simplest ordinal outcome. In this paper, we propose a causal parameter for an ordinal outcome, defined as the proportion for which the potential outcome under one treatment condition would not be smaller than that under the other condition. For a binary outcome, this proportion is identical to the causal risk. Unfortunately, the proposed causal parameter cannot be identified, even under randomization; we therefore present a numerical method to calculate the sharp nonparametric bounds within a sample, reflecting the impact of confounding. When the assumption of independent potential outcomes is added, the causal parameter is identified under randomization. We then present exact tests and the associated confidence intervals for the relative treatment effect using the randomization-based approach, which extend existing methods for a binary outcome. Our methodologies are illustrated using data from an emetic prevention clinical trial.
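Under the added independence assumption and randomization, the proposed parameter P(Y(1) >= Y(0)) is a simple functional of the two arms' category probabilities; a minimal sketch with hypothetical names:

```python
import numpy as np

def prob_not_smaller(y_treat, y_ctrl, k):
    """Estimate P(Y(1) >= Y(0)) for an ordinal outcome with categories 0..k-1.

    Assumes randomized arms and independent potential outcomes, under
    which the parameter is identified as sum_{i >= j} P(Y(1)=i) P(Y(0)=j).
    y_treat, y_ctrl : observed outcomes in the two arms."""
    p1 = np.bincount(y_treat, minlength=k) / len(y_treat)
    p0 = np.bincount(y_ctrl, minlength=k) / len(y_ctrl)
    return float(sum(p1[i] * p0[j]
                     for i in range(k) for j in range(k) if i >= j))
```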

13.
There is growing interest in using routinely collected data from health care databases to study the safety and effectiveness of therapies in "real-world" conditions, as it can provide complementary evidence to that of randomized controlled trials. Causal inference from health care databases is challenging because the data are typically noisy, high dimensional, and most importantly, observational. It requires methods that can estimate heterogeneous treatment effects while controlling for confounding in high dimensions. Bayesian additive regression trees, causal forests, causal boosting, and causal multivariate adaptive regression splines are off-the-shelf methods that have shown good performance for estimation of heterogeneous treatment effects in observational studies of continuous outcomes. However, it is not clear how these methods would perform in health care database studies where outcomes are often binary and rare and data structures are complex. In this study, we evaluate these methods in simulation studies that recapitulate key characteristics of comparative effectiveness studies. We focus on the conditional average effect of a binary treatment on a binary outcome using the conditional risk difference as an estimand. To emulate health care database studies, we propose a simulation design where real covariate and treatment assignment data are used and only outcomes are simulated based on nonparametric models of the real outcomes. We apply this design to 4 published observational studies that used records from 2 major health care databases in the United States. Our results suggest that Bayesian additive regression trees and causal boosting consistently provide low bias in conditional risk difference estimates in the context of health care database studies.
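The proposed simulation design can be sketched in a few lines (hypothetical names; any flexible classifier can stand in for the paper's nonparametric outcome models): keep the real covariates and treatment assignments, fit a model to the real binary outcomes, then redraw outcomes from that model so the true conditional risk differences are known.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def simulate_database_outcomes(X, a, y, rng):
    """Outcome-only simulation for emulating health care database studies.

    Fits P(Y=1 | X, A) on the real data, then simulates new binary
    outcomes; covariates X and treatments a are reused unchanged, so the
    simulated data keep the real confounding and covariate structure.
    Returns the simulated outcomes and the true conditional risk
    differences for benchmarking estimators."""
    model = GradientBoostingClassifier().fit(np.column_stack([X, a]), y)
    r1 = model.predict_proba(np.column_stack([X, np.ones_like(a)]))[:, 1]
    r0 = model.predict_proba(np.column_stack([X, np.zeros_like(a)]))[:, 1]
    y_sim = rng.binomial(1, np.where(a == 1, r1, r0))
    return y_sim, r1 - r0
```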

14.
In causal inference, interest often lies in estimating the average causal effect, but other quantities, such as the quantile treatment effect, may be of interest as well. In this article, we propose a multiply robust method for estimating the marginal quantiles of potential outcomes by achieving mean balance in (a) the propensity score and (b) the conditional distributions of potential outcomes. An empirical likelihood or entropy measure approach can be utilized for estimation instead of inverse probability weighting, which is known to be sensitive to misspecification of the propensity score model. Simulation studies are conducted across different scenarios of correctness in both the propensity score models and the outcome models; both the simulation results and the theoretical development indicate that our proposed estimator is consistent if any of the models is correctly specified. In the data analysis, we investigate the quantile treatment effect of mothers' smoking status on infants' birthweight.
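As the baseline the abstract contrasts against, the inverse-probability-weighted marginal quantile can be sketched directly (hypothetical names); the multiply robust construction would replace these raw weights with empirical-likelihood or entropy-calibrated ones.

```python
import numpy as np

def ipw_quantile(a, y, ps, q, arm=1):
    """IPW estimate of the q-th marginal quantile of a potential outcome.

    Weights subjects in the chosen arm by 1/ps (or 1/(1-ps)) and reads
    the quantile off the weighted empirical distribution; sensitive to
    misspecification of the propensity score model ps."""
    a, y, ps = map(np.asarray, (a, y, ps))
    keep = a == arm
    w = 1.0 / ps[keep] if arm == 1 else 1.0 / (1.0 - ps[keep])
    order = np.argsort(y[keep])
    y_sorted, w_sorted = y[keep][order], w[order]
    cdf = np.cumsum(w_sorted) / np.sum(w_sorted)
    idx = min(np.searchsorted(cdf, q), len(cdf) - 1)
    return y_sorted[idx]

# Quantile treatment effect at the median (hypothetical data):
# qte = ipw_quantile(a, y, ps, 0.5, arm=1) - ipw_quantile(a, y, ps, 0.5, arm=0)
```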

15.
The behavior of the conditional logistic estimator is analyzed under a causal model for two-arm experimental studies with possible non-compliance, in which the effect of the treatment is measured by a binary response variable. We show that, when non-compliance may only be observed in the treatment arm, the effect (measured on the logit scale) of the treatment on compliers and that of the control on non-compliers can be identified and consistently estimated under mild conditions; the same does not hold for the effect of the control on compliers. A simple correction of the conditional logistic estimator is then proposed, which considerably reduces the bias in estimating this quantity and the causal effect of the treatment over control on compliers. The result is a two-step estimator, on the basis of which we can also set up a Wald test for the hypothesis of no causal effect of the treatment. The asymptotic properties of the estimator are studied by exploiting the general theory on maximum likelihood estimation of misspecified models, and finite-sample properties of the estimator and of the related Wald test are studied by simulation. The extension of the approach to the case of missing responses is also outlined. The approach is illustrated by an application to a dataset from a study on the efficacy of a training course on breast self-examination practice.

16.
Estimation of causal effects in non-randomized studies comprises two distinct phases: design, without outcome data, and analysis of the outcome data according to a specified protocol. Recently, Gutman and Rubin (2013) proposed a new analysis-phase method for estimating treatment effects when the outcome is binary and there is only one covariate, a method that views causal effect estimation explicitly as a missing data problem. Here, we extend this method to situations with continuous outcomes and multiple covariates and compare it with other commonly used methods, such as matching, subclassification, weighting, and covariance adjustment. We show, using an extensive simulation, that of all the methods considered, and in many of the experimental conditions examined, our new 'multiple-imputation using two subclassification splines' method appears to be the most efficient and has coverage levels closest to nominal. In addition, it can estimate finite population average causal effects as well as non-linear causal estimands, and it allows the identification of subgroups of units for which the effect appears to be especially beneficial or harmful.

17.
Many longitudinal databases record the occurrence of recurrent events over time. In this article, we propose a new method to estimate the average causal effect of a binary treatment for recurrent event data in the presence of confounders. We propose a doubly robust semiparametric estimator based on a weighted version of the Nelson-Aalen estimator and a conditional regression estimator under an assumed semiparametric multiplicative rate model for recurrent event data. We show that the proposed doubly robust estimator is consistent and asymptotically normal. In addition, a model diagnostic plot of residuals is presented to assess the adequacy of our proposed semiparametric model. We then evaluate the finite sample behavior of the proposed estimators under a number of simulation scenarios. Finally, we illustrate the proposed methodology via a database of circus artist injuries.

18.
Developments in genome-wide association studies and the increasing availability of summary genetic association data have made application of Mendelian randomization relatively straightforward. However, obtaining reliable results from a Mendelian randomization investigation remains problematic, as the conventional inverse-variance weighted method only gives consistent estimates if all of the genetic variants in the analysis are valid instrumental variables. We present a novel weighted median estimator for combining data on multiple genetic variants into a single causal estimate. This estimator is consistent even when up to 50% of the information comes from invalid instrumental variables. In a simulation analysis, it is shown to have better finite-sample Type 1 error rates than the inverse-variance weighted method, and is complementary to the recently proposed MR-Egger (Mendelian randomization-Egger) regression method. In analyses of the causal effects of low-density lipoprotein cholesterol and high-density lipoprotein cholesterol on coronary artery disease risk, the inverse-variance weighted method suggests a causal effect of both lipid fractions, whereas the weighted median and MR-Egger regression methods suggest a null effect of high-density lipoprotein cholesterol that corresponds with the experimental evidence. Both median-based and MR-Egger regression methods should be considered as sensitivity analyses for Mendelian randomization investigations with multiple genetic variants.
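The weighted median estimator itself is compact enough to sketch (hypothetical names): form per-variant ratio estimates, sort them, and take the point where the inverse-variance weights accumulate to 50%, interpolating between adjacent variants.

```python
import numpy as np

def weighted_median_mr(beta_exp, beta_out, se_out):
    """Weighted median Mendelian randomization estimate.

    beta_exp : variant-exposure associations; beta_out : variant-outcome
    associations; se_out : standard errors of beta_out.  Consistent when
    valid instruments supply at least 50% of the total weight."""
    beta_exp, beta_out, se_out = map(np.asarray, (beta_exp, beta_out, se_out))
    ratio = beta_out / beta_exp                  # per-variant causal estimates
    w = (beta_exp / se_out) ** 2                 # first-order inverse-variance weights
    order = np.argsort(ratio)
    ratio, w = ratio[order], w[order]
    pctl = (np.cumsum(w) - 0.5 * w) / np.sum(w)  # weighted percentile of each variant
    return float(np.interp(0.5, pctl, ratio))    # interpolated weighted median
```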

19.
Recent developments in statistical methods for epidemiology have revived the application of sampling techniques in the design and analysis of cohort studies. The 'case-base' design involves sampling of both the cases and the cohort base of the study. This paper reviews some data-analytic imperfections of existing approaches to risk ratio estimation, and modifies and advances a consistent likelihood-based procedure, analogous to Miettinen and Nurminen's proposal for a full cohort design, for interval estimation (and also point estimation and significance testing) with binary case-base data. First, the procedure avoids the use of Taylor-series approximations to derive variance estimators for non-linear functions of parameters. Second, the asymptotic condition yields a simple computational expression for the chi-square function of risk ratios that is universally applicable to small samples. The statistical modelling underlying the method allows inferences about risk ratios without the rare-disease assumption, either for the general population or for a particular base. The paper also extends the analysis to encompass stratified data. Finally, a numerical evaluation demonstrated the accurate small-sample properties of the proposed method.
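To illustrate the likelihood-inversion idea in the simplest setting, here is a sketch of a profile likelihood-ratio interval for the risk ratio from two independent binomials (a stand-in under assumed names: the paper's chi-square function is built on the case-base likelihood and extends to stratified data).

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

def profile_loglik(psi, x1, n1, x0, n0):
    """Binomial log-likelihood maximized over p0, with p1 = psi * p0 held fixed."""
    def negll(p0):
        p1 = psi * p0
        return -(x1 * np.log(p1) + (n1 - x1) * np.log(1 - p1)
                 + x0 * np.log(p0) + (n0 - x0) * np.log(1 - p0))
    hi = min(1.0, 1.0 / psi)                      # keep p1 = psi*p0 inside (0, 1)
    res = minimize_scalar(negll, bounds=(1e-9, hi - 1e-9), method="bounded")
    return -res.fun

def risk_ratio_ci(x1, n1, x0, n0, level=0.95):
    """Interval for psi = p1/p0 by inverting the likelihood-ratio chi-square."""
    lmax = profile_loglik((x1 / n1) / (x0 / n0), x1, n1, x0, n0)
    cutoff = chi2.ppf(level, df=1)
    grid = np.exp(np.linspace(np.log(0.01), np.log(100.0), 2000))
    keep = [psi for psi in grid
            if 2 * (lmax - profile_loglik(psi, x1, n1, x0, n0)) <= cutoff]
    return min(keep), max(keep)

print(risk_ratio_ci(15, 100, 8, 120))  # e.g. 15/100 exposed vs 8/120 unexposed cases
```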

20.
Cluster randomized trials (CRTs) were originally proposed for use when randomization at the subject level is practically infeasible or may lead to severe bias in estimating the treatment effect. However, recruiting an additional cluster costs more than enrolling an additional subject in an individually randomized trial, and under budget constraints researchers have proposed optimal sample sizes for two-level CRTs. CRTs may also have a three-level structure, in which two levels of clustering must be considered. In this paper, we propose optimal designs for three-level CRTs with a binary outcome, assuming a nested exchangeable correlation structure in generalized estimating equation models. We provide the variance of the estimators of three commonly used measures: risk difference, risk ratio, and odds ratio. For a given sampling budget, we discuss how many clusters and how many subjects per cluster are needed to minimize the variance of each measure's estimator. For known association parameters, the locally optimal design is proposed; when the association parameters are unknown but lie within predetermined ranges, the MaxiMin design is proposed to maximize the minimum relative efficiency over those ranges, that is, to minimize the risk of the worst-case scenario.
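The budget-constrained search for a locally optimal design can be sketched as a grid search (an illustration only: the design-effect form below, DE = 1 + (n - 1)*rho1 + n*(m - 1)*rho2 for a nested exchangeable structure, and the linear cost model are assumptions, and the paper's exact variance expressions for each effect measure would replace the generic variance factor).

```python
import itertools

def optimal_three_level_design(budget, c_cluster, c_sub, c_subject,
                               rho1, rho2, max_units=60):
    """Grid search for a locally optimal three-level CRT design.

    Minimizes the variance factor DE / (k*m*n) over k clusters, m
    subclusters per cluster, and n subjects per subcluster, subject to
    the budget k*(c_cluster + m*(c_sub + n*c_subject)); rho1 is the
    within-subcluster and rho2 the between-subcluster (within-cluster)
    correlation."""
    best = None
    for k, m, n in itertools.product(range(2, max_units), repeat=3):
        cost = k * (c_cluster + m * (c_sub + n * c_subject))
        if cost > budget:
            continue
        de = 1 + (n - 1) * rho1 + n * (m - 1) * rho2
        var_factor = de / (k * m * n)     # proportional to Var(effect estimate)
        if best is None or var_factor < best[0]:
            best = (var_factor, k, m, n)
    return best

print(optimal_three_level_design(10_000, 200, 50, 5, rho1=0.05, rho2=0.02))
```

A MaxiMin variant would repeat this search over a grid of (rho1, rho2) pairs and keep the design whose worst-case relative efficiency is highest.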
