Similar Documents
20 similar documents found (search time: 46 ms).
1.
In many observational studies, the objective is to estimate the effect of treatment or state-change on the recurrent event rate. If treatment is assigned after the start of follow-up, traditional methods (eg, adjustment for baseline-only covariates or fully conditional adjustment for time-dependent covariates) may give biased results. We propose a two-stage modeling approach using the method of sequential stratification to accurately estimate the effect of a time-dependent treatment on the recurrent event rate. At the first stage, we estimate the pretreatment recurrent event trajectory using a proportional rates model censored at the time of treatment. Prognostic scores are estimated from the linear predictor of this model and used to match treated patients to as-yet-untreated controls based on the prognostic score at the time of treatment for the index patient. The final model is stratified on matched sets and compares the posttreatment recurrent event rate to the recurrent event rate of the matched controls. We demonstrate through simulation that bias due to dependent censoring is negligible provided the treatment frequency is low, and we investigate the threshold at which correction for dependent censoring is needed. The method is applied to liver transplant (LT), where we estimate the effect of development of post-LT end-stage renal disease (ESRD) on the rate of days hospitalized.
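A rough sketch in Python of the matching step described above, assuming a pretreatment proportional rates model has already been fitted and its linear predictor stored as a prognostic score; the column names, the greedy nearest-score matching, and the fixed number of controls are illustrative assumptions rather than the authors' exact algorithm.

```python
import numpy as np
import pandas as pd

def sequential_prognostic_match(df, n_controls=2):
    """Sketch: at each index patient's treatment time, match to the
    n_controls not-yet-treated patients with the closest prognostic score.
    Assumed columns: 'id', 'treat_time' (NaN if never treated), 'prog_score'."""
    matched_sets = []
    for _, row in df[df["treat_time"].notna()].sort_values("treat_time").iterrows():
        # controls must still be untreated at the index patient's treatment time
        at_risk = df[df["treat_time"].isna() | (df["treat_time"] > row["treat_time"])]
        if at_risk.empty:
            continue
        dist = (at_risk["prog_score"] - row["prog_score"]).abs()
        matched_sets.append({
            "treated": row["id"],
            "controls": at_risk.loc[dist.nsmallest(n_controls).index, "id"].tolist(),
            "match_time": row["treat_time"],
        })
    return pd.DataFrame(matched_sets)

# toy illustration
toy = pd.DataFrame({"id": range(6),
                    "treat_time": [2.0, np.nan, 5.0, np.nan, np.nan, np.nan],
                    "prog_score": [0.9, 1.0, 0.2, 0.3, 0.8, 0.1]})
print(sequential_prognostic_match(toy))
```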

2.
Modeling events requires accounting for differential follow-up duration, especially when combining randomized and observational studies. Although events occur at any point over a follow-up period and censoring occurs throughout, most applied researchers use odds ratios as association measures, assuming follow-up duration is similar across treatment groups. We derive the bias of the rate ratio when incorrectly assuming equal follow-up duration in the single study binary treatment setting. Simulations illustrate bias, efficiency, and coverage and demonstrate that bias and coverage worsen rapidly as the ratio of follow-up duration between arms moves away from one. Combining study rate ratios with hierarchical Poisson regression models, we examine bias and coverage for the overall rate ratio via simulation in three cases: when average arm-specific follow-up duration is available for all studies, some studies, and no study. In the null case, bias and coverage are poor when the study average follow-up is used and improve even if some arm-specific follow-up information is available. As the rate ratio gets further from the null, bias and coverage remain poor. We investigate the effectiveness of cardiac resynchronization therapy devices compared with those with cardioverter-defibrillator capacity, where three of eight studies report arm-specific follow-up duration. Copyright © 2015 John Wiley & Sons, Ltd.
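As a quick numeric illustration of the bias described above (made-up numbers, not from the paper): with unequal follow-up between arms, the event-count ratio that implicitly assumes equal follow-up differs from the person-time rate ratio by exactly the ratio of the two follow-up durations.

```python
# Illustrative numbers only: events, subjects, and average follow-up (years) per arm.
events_trt, n_trt, fu_trt = 30, 200, 1.5   # treated arm, shorter follow-up
events_ctl, n_ctl, fu_ctl = 40, 200, 3.0   # control arm, longer follow-up

# Correct rate ratio uses arm-specific person-time.
rr_correct = (events_trt / (n_trt * fu_trt)) / (events_ctl / (n_ctl * fu_ctl))

# Assuming equal follow-up reduces to a ratio of event counts per subject.
rr_naive = (events_trt / n_trt) / (events_ctl / n_ctl)

print(f"rate ratio with person-time:        {rr_correct:.2f}")  # 1.50
print(f"naive ratio assuming equal follow-up: {rr_naive:.2f}")  # 0.75
```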

3.
In longitudinal studies, matched designs are often employed to control for potential confounding effects in biomedical research and public health. Because of clinical interest, recurrent time-to-event data are captured during follow-up. Meanwhile, the terminal event of death is often encountered and should be taken into account for valid inference because of informative censoring. In some scenarios, a large portion of subjects may not have any recurrent events during the study period due to nonsusceptibility to events or censoring; thus, the zero-inflated nature of the data should be considered in the analysis. In this paper, a joint frailty model for recurrent events and death is proposed to adjust for zero inflation and matched designs. We incorporate 2 frailties to measure the dependency between subjects within a matched pair and that among recurrent events within each individual. By sharing the random effects, the 2 event processes of recurrent events and death are dependent on each other. A maximum likelihood based approach is applied for parameter estimation, where the Monte Carlo expectation-maximization algorithm is adopted, and the corresponding R program is developed and available for public use. In addition, alternative estimation methods such as Gaussian quadrature (PROC NLMIXED) and a Bayesian approach (PROC MCMC) are also considered for comparison to show our method's superiority. Extensive simulations are conducted, and a real data application to acute ischemic studies is provided at the end.

4.
Conventional longitudinal data analysis methods assume that outcomes are independent of the data-collection schedule. However, the independence assumption may be violated, for example, when a specific treatment necessitates a different follow-up schedule than the control arm or when adverse events trigger additional physician visits between prescheduled follow-ups. Dependence between outcomes and observation times may introduce bias when estimating the marginal association between covariates and outcomes using a standard longitudinal regression model. We formulate a framework of outcome-observation dependence mechanisms to describe conditional independence given observed observation-time process covariates or shared latent variables. We compare four recently developed semi-parametric methods that each accommodate one of these mechanisms. To allow greater flexibility, we extend these methods to accommodate a combination of mechanisms. In simulation studies, we show how incorrectly specifying the outcome-observation dependence may yield biased estimates of covariate-outcome associations and how our proposed extensions can accommodate a greater number of dependence mechanisms. We illustrate the implications of different modeling strategies in an application to bladder cancer data. In longitudinal studies with potentially outcome-dependent observation times, we recommend that analysts carefully explore the conditional independence mechanism between the outcome and observation-time processes to ensure valid inference regarding covariate-outcome associations. Copyright © 2014 John Wiley & Sons, Ltd.

5.
Observational comparative effectiveness and safety studies are often subject to immortal person-time, a period of follow-up during which outcomes cannot occur because of the treatment definition. Common approaches, like excluding immortal time from the analysis or naively including immortal time in the analysis, are known to result in biased estimates of treatment effect. Other approaches, such as the Mantel-Byar and landmark methods, have been proposed to handle immortal time. Little is known about the performance of the landmark method in different scenarios. We conducted extensive Monte Carlo simulations to assess the performance of the landmark method compared with other methods in settings that reflect realistic scenarios. We considered four landmark times for the landmark method. We found that the Mantel-Byar method provided unbiased estimates in all scenarios, whereas the exclusion and naive methods resulted in substantial bias when the hazard of the event was constant or decreased over time. The landmark method performed well in correcting immortal person-time bias in all scenarios when the treatment effect was small, and provided unbiased estimates when there was no treatment effect. The bias associated with the landmark method tended to be small when the treatment rate was higher in the early follow-up period than it was later. These findings were confirmed in a case study of chronic obstructive pulmonary disease. Copyright © 2016 John Wiley & Sons, Ltd.
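A minimal sketch of the landmark-method data preparation discussed above, assuming a simple one-row-per-subject data set with hypothetical column names; the Mantel-Byar approach (treatment as a time-dependent covariate) and the choice of landmark time are not shown.

```python
import numpy as np
import pandas as pd

def landmark_dataset(df, landmark):
    """Sketch: classify exposure at the landmark time and keep only subjects
    still event-free at that time, measuring follow-up from the landmark.
    Assumed columns: 'event_time', 'event', 'treat_start' (NaN if never treated)."""
    lm = df[df["event_time"] > landmark].copy()                 # still at risk at the landmark
    lm["treated_at_lm"] = (lm["treat_start"].notna()
                           & (lm["treat_start"] <= landmark)).astype(int)
    lm["time_from_lm"] = lm["event_time"] - landmark
    return lm

# A naive analysis would instead label "ever treated" subjects as exposed from
# time zero, crediting the pre-treatment (immortal) person-time to treatment.
toy = pd.DataFrame({"event_time": [0.5, 2.0, 3.0, 4.0],
                    "event": [1, 1, 0, 1],
                    "treat_start": [np.nan, 0.8, np.nan, 3.5]})
print(landmark_dataset(toy, landmark=1.0))
```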

6.
Sumi M, Tango T. Statistics in Medicine 2010;29(30):3186-3193.
As statistical methods for testing the null hypothesis of no difference between two groups under the matched pairs design, the paired t-test, the Wilcoxon signed rank test, and the McNemar test are well known. However, there is no simple test for the comparison of incidence rates of recurrent events. This paper proposes a simple statistical method and a sample size formula for the comparison of counts of recurrent events over a specified period of observation under the matched pairs design, where the subject-specific incidence of recurrent events is assumed to follow a time-homogeneous Poisson process. As a special case, the proposed method is found to be virtually equivalent in form to the Mantel-Haenszel method for a common rate ratio among a set of stratified tables based on person-time data. The proposed methods are illustrated with the within-arm comparison of data from a clinical trial of 59 epileptics with baseline count data.
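The exact form of the proposed statistic is not reproduced in this abstract, so the following is only a sketch of the standard building block it relates to: with equal observation periods, the two members' Poisson counts in a pair can be compared conditionally on their total, which is Binomial(total, 1/2) under the null of equal rates, and pooling over pairs gives a Mantel-Haenszel-type person-time comparison. Data are simulated.

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(0)

# Toy matched-pair recurrent-event counts over a common observation period.
pair_rate = rng.gamma(2.0, 1.0, size=50)        # pair-specific baseline rates
counts_a = rng.poisson(pair_rate)               # member A of each pair
counts_b = rng.poisson(1.3 * pair_rate)         # member B, true rate ratio 1.3

# Under H0 of equal rates (and equal observation time), member A's count is
# Binomial(pair total, 1/2) conditional on the total; pooling across pairs
# keeps this binomial form, giving a simple exact comparison.
total_a = int(counts_a.sum())
total = int(counts_a.sum() + counts_b.sum())
result = binomtest(total_a, n=total, p=0.5)
print(f"member-A events: {total_a}/{total}; two-sided p = {result.pvalue:.3g}")
```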

7.
We present an estimate of the kappa coefficient of agreement between two methods of rating based on matched pairs of binary responses and show that the estimate depends on the common intraclass correlation coefficient between the pairs. Via Monte Carlo simulation, we investigate the power of the significance test on kappa, and the large-sample bias and variance of its maximum likelihood estimator.
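For orientation, a small sketch of the basic kappa computation from a 2x2 table of paired binary ratings; the matched-pairs estimator studied in the paper, which involves the common intraclass correlation, is not reproduced here. Counts are invented.

```python
import numpy as np

def cohen_kappa(table):
    """Kappa for a 2x2 table of two binary ratings:
    kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_obs = np.trace(table) / n
    p_chance = np.sum(table.sum(axis=0) * table.sum(axis=1)) / n**2
    return (p_obs - p_chance) / (1.0 - p_chance)

# rows = method 1 (yes/no), columns = method 2 (yes/no); made-up counts
print(round(cohen_kappa([[40, 10], [5, 45]]), 3))   # 0.7
```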

8.
Propensity-score matching allows one to reduce the effects of treatment-selection bias or confounding when estimating the effects of treatments using observational data. Some authors have suggested that methods of inference appropriate for independent samples can be used for assessing the statistical significance of treatment effects when using propensity-score matching. Indeed, many authors in the applied medical literature use methods for independent samples when making inferences about treatment effects in propensity-score matched samples. Dichotomous outcomes are common in healthcare research. In this study, we used Monte Carlo simulations to examine the effect on inferences about risk differences (or absolute risk reductions) when statistical methods for independent samples are used compared with when statistical methods for paired samples are used in propensity-score matched samples. We found that, compared with using methods for independent samples, the use of methods for paired samples resulted in: (i) empirical type I error rates that were closer to the advertised rate; (ii) empirical coverage rates of 95 per cent confidence intervals that were closer to the advertised rate; (iii) narrower 95 per cent confidence intervals; and (iv) estimated standard errors that more closely reflected the sampling variability of the estimated risk difference. Differences between the empirical and advertised performance of methods for independent samples were greater when the treatment-selection process was stronger than when it was weaker. We recommend using statistical methods for paired samples when using propensity-score matched samples for making inferences about the effect of treatment on the reduction in the probability of an event occurring.
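A hedged sketch of the contrast being studied: for a risk difference computed from matched pairs, the paired standard error uses the discordant-pair counts and is generally narrower than the independent-samples formula when outcomes within a pair are positively correlated. Cell counts below are invented.

```python
import numpy as np

def risk_difference_se(e, f, g, h):
    """Risk difference and two standard errors from n matched pairs.
    e: event in both members, f: event in treated only,
    g: event in control only, h: event in neither."""
    n = e + f + g + h
    p1, p2 = (e + f) / n, (e + g) / n
    rd = p1 - p2                                   # equals (f - g) / n
    se_indep = np.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
    se_paired = np.sqrt(f + g - (f - g) ** 2 / n) / n
    return rd, se_indep, se_paired

rd, se_i, se_p = risk_difference_se(e=60, f=30, g=15, h=95)
print(f"RD = {rd:.3f}, SE independent = {se_i:.4f}, SE paired = {se_p:.4f}")
```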

9.
Propensity-score matching is frequently used to estimate the effect of treatments, exposures, and interventions when using observational data. An important issue when using propensity-score matching is how to estimate the standard error of the estimated treatment effect. Accurate variance estimation permits construction of confidence intervals that have the advertised coverage rates and tests of statistical significance that have the correct type I error rates. There is disagreement in the literature as to how standard errors should be estimated. The bootstrap is a commonly used resampling method that permits estimation of the sampling variability of estimated parameters. Bootstrap methods are rarely used in conjunction with propensity-score matching. We propose two different bootstrap methods for use when using propensity-score matching without replacement and examine their performance with a series of Monte Carlo simulations. The first method involves drawing bootstrap samples from the matched pairs in the propensity-score matched sample. The second method involves drawing bootstrap samples from the original sample, estimating the propensity score separately in each bootstrap sample, and creating a matched sample within each of these bootstrap samples. The former approach was found to result in estimates of the standard error that were closer to the empirical standard deviation of the sampling distribution of estimated effects. © 2014 The Authors. Statistics in Medicine Published by John Wiley & Sons, Ltd.
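A compact simulation sketch of the two resampling schemes, assuming scikit-learn for the propensity model and a deliberately crude greedy 1:1 matching routine; this illustrates the bootstrap logic only and is not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def match_without_replacement(ps, treat):
    """Greedy 1:1 nearest-propensity-score matching without replacement (sketch)."""
    controls = list(np.where(treat == 0)[0])
    pairs = []
    for i in np.where(treat == 1)[0]:
        if not controls:
            break
        j = min(controls, key=lambda c: abs(ps[c] - ps[i]))
        controls.remove(j)
        pairs.append((i, j))
    return pairs

def matched_effect(pairs, y):
    return float(np.mean([y[i] - y[j] for i, j in pairs]))

# toy data: one confounder, binary treatment, continuous outcome, true effect 1.0
n = 300
x = rng.normal(size=n)
treat = rng.binomial(1, 1 / (1 + np.exp(-x)))
y = 1.0 * treat + x + rng.normal(size=n)

ps = LogisticRegression().fit(x[:, None], treat).predict_proba(x[:, None])[:, 1]
pairs = match_without_replacement(ps, treat)

# Method 1: resample matched pairs from the propensity-score matched sample.
boot1 = []
for _ in range(200):
    idx = rng.integers(0, len(pairs), len(pairs))
    boot1.append(matched_effect([pairs[k] for k in idx], y))

# Method 2: resample the original sample, then re-estimate the propensity
# score and re-match within each bootstrap sample.
boot2 = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    xb, tb, yb = x[idx], treat[idx], y[idx]
    psb = LogisticRegression().fit(xb[:, None], tb).predict_proba(xb[:, None])[:, 1]
    boot2.append(matched_effect(match_without_replacement(psb, tb), yb))

print(f"bootstrap SE, matched-pair resampling: {np.std(boot1, ddof=1):.3f}")
print(f"bootstrap SE, full-sample resampling:  {np.std(boot2, ddof=1):.3f}")
```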

10.
Non-inferiority tests for matched-pair data in which pairs are mutually independent may not be appropriate when pairs are clustered. The tests may require an adjustment to account for the correlation within a cluster. We consider the adjusted score and Wald-type tests, and a modification of Obuchowski's method for non-inferiority, and compare them with the non-inferiority test based on a method of moments estimate in terms of Type 1 error rate and power, by simulation, for small cluster sizes under various correlation structures. In general, the score test adjusted by an inflation factor and the modified Obuchowski method perform as well as the test based on the moments estimate in the accuracy of Type 1 error rates. The latter does not provide Type 1 error rates reasonably close to the nominal level when the number of clusters is 25 or smaller and the positive response rate for the standard procedure is 20 per cent or lower. The adjusted score test, the method based on the moments estimate, and the modified test are comparable in power. The adjusted Wald-type test is too anti-conservative, and we caution against its use. Since the number of clusters is strongly related to the accuracy of the empirical Type 1 error rate and power, it is very important to have a sufficiently large number of clusters when designing a clustered matched-pair study for non-inferiority. Published in 2009 by John Wiley & Sons, Ltd.

11.
Many observational studies estimate causal effects using methods based on matching on the propensity score. Full matching on the propensity score is an effective and flexible method for utilizing all available data and for creating well-balanced treatment and control groups. An important component of the full matching algorithm is the decision about whether to impose a restriction on the maximum ratio of controls matched to each treated subject. Despite the possible effect of this restriction on subsequent inferences, this issue has not been examined. We used a series of Monte Carlo simulations to evaluate the effect of imposing a restriction on the maximum ratio of controls matched to each treated subject when estimating risk differences. We considered full matching both with and without a caliper restriction. When using full matching with a caliper restriction, the imposition of a subsequent constraint on the maximum ratio of controls matched to each treated subject had no effect on the quality of inferences. However, when using full matching without a caliper restriction, imposing such a constraint tended to increase the bias of the estimated risk difference, although this increase in bias tended to be accompanied by a corresponding decrease in the sampling variability of the estimated risk difference. We illustrate the consequences of these restrictions using observational data to estimate the effect of medication prescribing on survival following hospitalization for a heart attack.

12.
The most common data structures in biomedical studies have been matched or unmatched designs. Data structures resulting from a hybrid of the two may create challenges for statistical inference. The question may arise whether to use parametric or nonparametric methods on the hybrid data structure. The Early Treatment for Retinopathy of Prematurity study was a multicenter clinical trial sponsored by the National Eye Institute. The design produced data requiring a statistical method of a hybrid nature. An infant in this multicenter randomized clinical trial had high-risk prethreshold retinopathy of prematurity that was eligible for treatment in one or both eyes at entry into the trial. During follow-up, recognition visual acuity was assessed for both eyes. Data from both eyes (matched) and from only one eye (unmatched) were eligible to be used in the trial. The new hybrid nonparametric method is a meta-analysis based on combining the Hodges-Lehmann estimates of treatment effects from the Wilcoxon signed rank and rank sum tests. To compare the new method, we used the classic meta-analysis with the t-test method to combine estimates of treatment effects from the paired and two-sample t-tests. We used simulations to calculate the empirical size and power of the test statistics, as well as the bias, mean square error, and confidence interval width of the corresponding estimators. The proposed method provides an effective tool to evaluate data from clinical trials and similar comparative studies. Copyright © 2013 John Wiley & Sons, Ltd.
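The following is a hedged sketch of the general idea: compute the Hodges-Lehmann estimate separately for the paired (both-eyes) and unpaired (one-eye) subsets and pool them with inverse-variance weights, here using bootstrap standard errors. The data, the weighting, and the bootstrap are illustrative choices, not the trial's actual analysis.

```python
import numpy as np

rng = np.random.default_rng(4)

def hl_paired(d):
    """Hodges-Lehmann estimate for paired differences: median of Walsh averages."""
    d = np.asarray(d)
    i, j = np.triu_indices(len(d))
    return np.median((d[i] + d[j]) / 2)

def hl_two_sample(x, y):
    """Hodges-Lehmann shift estimate for two independent samples."""
    return np.median(np.subtract.outer(np.asarray(x), np.asarray(y)))

def boot_se(estimator, *samples, reps=500):
    """Bootstrap standard error, resampling each supplied sample independently."""
    stats = [estimator(*[s[rng.integers(0, len(s), len(s))] for s in samples])
             for _ in range(reps)]
    return float(np.std(stats, ddof=1))

# toy data: paired within-child differences, plus unmatched single-eye measurements
d_paired = rng.normal(0.3, 1.0, 40)
x_single = rng.normal(0.3, 1.0, 25)
y_single = rng.normal(0.0, 1.0, 25)

est = np.array([hl_paired(d_paired), hl_two_sample(x_single, y_single)])
se = np.array([boot_se(hl_paired, d_paired),
               boot_se(hl_two_sample, x_single, y_single)])

w = 1 / se**2                                   # inverse-variance (fixed-effect) pooling
print(f"combined estimate = {np.sum(w * est) / np.sum(w):.2f} "
      f"(SE {np.sqrt(1 / np.sum(w)):.2f})")
```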

13.
If past treatment assignments are unmasked, selection bias may arise even in randomized controlled trials. The impact of such bias can be measured by considering the type I error probability. In the case of a normally distributed outcome, there already exists a model accounting for selection bias that permits calculating the corresponding type I error probabilities. To model selection bias for trials with a time-to-event outcome, we introduce a new biasing policy for exponentially distributed data. Using this biasing policy, we derive an exact formula to compute type I error probabilities whenever an F-test is performed and no observations are censored. Two exemplary settings, with and without random censoring, are considered in order to illustrate how our results can be applied to compare distinct randomization procedures with respect to their performance in the presence of selection bias. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

14.
We describe methods for meta-analysis of randomised trials where a continuous outcome is of interest, such as blood pressure, recorded at both baseline (pre treatment) and follow-up (post treatment). We used four examples for illustration, covering situations with and without individual participant data (IPD) and with and without baseline imbalance between treatment groups in each trial. Given IPD, meta-analysts can choose to synthesise treatment effect estimates derived using analysis of covariance (ANCOVA), a regression of just final scores, or a regression of the change scores. When there is baseline balance in each trial, treatment effect estimates derived using ANCOVA are more precise and thus preferred. However, we show that meta-analysis results for the summary treatment effect are similar regardless of the approach taken. Thus, without IPD, if trials are balanced, reviewers can happily utilise treatment effect estimates derived from any of the approaches. However, when some trials have baseline imbalance, meta-analysts should use treatment effect estimates derived from ANCOVA, as this adjusts for imbalance and accounts for the correlation between baseline and follow-up; we show that the other approaches can give substantially different meta-analysis results. Without IPD and with unavailable ANCOVA estimates, reviewers should limit meta-analyses to those trials with baseline balance. Trowman's method to adjust for baseline imbalance without IPD performs poorly in our examples and so is not recommended. Finally, we extend the ANCOVA model to estimate the interaction between treatment effect and baseline values and compare options for estimating this interaction given only aggregate data. Copyright © 2013 John Wiley & Sons, Ltd.
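To make the contrast concrete, here is a small simulated sketch (not the paper's examples) of the three single-trial analyses: final scores only, change scores, and ANCOVA. With deliberate baseline imbalance, only ANCOVA recovers the simulated treatment effect.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
treat = rng.binomial(1, 0.5, n)
baseline = rng.normal(150, 10, n) + 3 * treat                   # deliberate baseline imbalance
followup = 0.6 * baseline - 5 * treat + rng.normal(0, 8, n)     # true treatment effect: -5

def ols_effect(design, y):
    """Return the treatment coefficient from an OLS fit (intercept in column 0)."""
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta[1]

ones = np.ones(n)
final  = ols_effect(np.column_stack([ones, treat]), followup)             # final scores only
change = ols_effect(np.column_stack([ones, treat]), followup - baseline)  # change scores
ancova = ols_effect(np.column_stack([ones, treat, baseline]), followup)   # ANCOVA

print(f"final-score estimate:  {final:6.2f}")
print(f"change-score estimate: {change:6.2f}")
print(f"ANCOVA estimate:       {ancova:6.2f}")
```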

15.
Stratified medicine utilizes individual-level covariates that are associated with a differential treatment effect, also known as treatment-covariate interactions. When multiple trials are available, meta-analysis is used to help detect true treatment-covariate interactions by combining their data. Meta-regression of trial-level information is prone to low power and ecological bias, and therefore, individual participant data (IPD) meta-analyses are preferable to examine interactions utilizing individual-level information. However, one-stage IPD models are often wrongly specified, such that interactions are based on amalgamating within- and across-trial information. We compare, through simulations and an applied example, fixed-effect and random-effects models for a one-stage IPD meta-analysis of time-to-event data where the goal is to estimate a treatment-covariate interaction. We show that it is crucial to centre patient-level covariates by their mean value in each trial, in order to separate out within-trial and across-trial information. Otherwise, bias and coverage of interaction estimates may be adversely affected, leading to potentially erroneous conclusions driven by ecological bias. We revisit an IPD meta-analysis of five epilepsy trials and examine age as a treatment effect modifier. The interaction is −0.011 (95% CI: −0.019 to −0.003; p = 0.004), and thus highly significant, when amalgamating within-trial and across-trial information. However, when separating within-trial from across-trial information, the interaction is −0.007 (95% CI: −0.019 to 0.005; p = 0.22), and thus its magnitude and statistical significance are greatly reduced. We recommend that meta-analysts should only use within-trial information to examine individual predictors of treatment effect and that one-stage IPD models should separate within-trial from across-trial information to avoid ecological bias. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
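A small sketch of the recommended covariate construction, assuming an individual participant data frame with hypothetical column names: centre the patient-level covariate at its trial mean so the interaction with treatment uses only within-trial information, and carry the trial-mean interaction as a separate across-trial term.

```python
import pandas as pd

def add_centered_interaction_terms(ipd, covariate="age", trial_col="trial",
                                   treat_col="treat"):
    """Sketch: split a treatment-covariate interaction into within-trial and
    across-trial components by centering the covariate at its trial mean."""
    ipd = ipd.copy()
    trial_mean = ipd.groupby(trial_col)[covariate].transform("mean")
    ipd[f"{covariate}_centered"] = ipd[covariate] - trial_mean             # within-trial info
    ipd[f"treat_x_{covariate}_within"] = ipd[treat_col] * ipd[f"{covariate}_centered"]
    ipd[f"treat_x_{covariate}_across"] = ipd[treat_col] * trial_mean       # across-trial info
    return ipd

# In a one-stage model, the coefficient on treat_x_age_within estimates the
# patient-level interaction free of ecological (across-trial) bias.
toy = pd.DataFrame({"trial": [1, 1, 2, 2], "treat": [0, 1, 0, 1],
                    "age": [40, 60, 20, 30]})
print(add_centered_interaction_terms(toy))
```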

16.
For time-to-event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact of, or addressing, errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression-free survival or time to AIDS progression) can be difficult to assess or reliant on self-report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log-linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic.
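A generic SIMEX sketch, assuming the lifelines package for the Cox fit and a multiplicative log-normal error in the recorded event times with known variance; this illustrates the simulation-extrapolation mechanics only and is not the authors' specific extension.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n, beta_true, sigma_u = 500, 0.7, 0.4

x = rng.binomial(1, 0.5, n)
t_true = rng.exponential(np.exp(-beta_true * x))          # exponential event times
t_obs = t_true * np.exp(rng.normal(0, sigma_u, n))        # error-prone recorded times

# SIMEX step 1: add extra error of variance lambda * sigma_u^2 and refit repeatedly.
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
estimates = []
for lam in lambdas:
    fits = []
    for _ in range(20):
        t_sim = t_obs * np.exp(rng.normal(0, np.sqrt(lam) * sigma_u, n))
        df = pd.DataFrame({"T": t_sim, "E": 1, "x": x})
        fits.append(CoxPHFitter().fit(df, duration_col="T", event_col="E").params_["x"])
    estimates.append(np.mean(fits))

# SIMEX step 2: extrapolate the estimate-vs-lambda curve back to lambda = -1.
coefs = np.polyfit(lambdas, estimates, deg=2)
print(f"naive log hazard ratio:      {estimates[0]: .3f}")
print(f"SIMEX-extrapolated estimate: {np.polyval(coefs, -1.0): .3f}")
```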

17.
The statistical analysis of panel count data has recently attracted a great deal of attention, and a number of approaches have been developed. However, most of these approaches are for situations where the observation and follow-up processes are independent of the underlying recurrent event process, either unconditionally or conditionally on covariates. In this paper, we discuss a more general situation where both the observation and the follow-up processes may be related to the recurrent event process of interest. For regression analysis, we present a class of semiparametric transformation models and develop estimating equations for estimation of the regression parameters. Numerical studies conducted under different settings to assess the proposed methodology suggest that it works well in practical situations, and the approach is applied to the skin cancer study that motivated this work. Copyright © 2013 John Wiley & Sons, Ltd.

18.
A comparison of methods to detect publication bias in meta-analysis
Meta-analyses are subject to bias for many reasons, including publication bias. Asymmetry in a funnel plot of study size against treatment effect is often used to identify such bias. We compare the performance of three simple methods of testing for bias: the rank correlation method; a simple linear regression of the standardized estimate of treatment effect on the precision of the estimate; and a regression of the treatment effect on sample size. The tests are applied to simulated meta-analyses in the presence and absence of publication bias. Both one-sided and two-sided censoring of studies based on statistical significance were used. The results indicate that none of the tests performs consistently well. Test performance varied with the magnitude of the true treatment effect, the distribution of study size, and whether a one- or two-tailed significance test was employed. Overall, the power of the tests was low when the number of studies per meta-analysis was close to that often observed in practice. Tests that showed the highest power also had type I error rates higher than the nominal level. Based on the empirical type I error rates, a regression of treatment effect on sample size, weighted by the inverse of the variance of the logit of the pooled proportion (using the marginal total), is the preferred method.
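For orientation, a hedged sketch of two of the tests being compared, on made-up data: a simplified Begg-style rank correlation between effect estimates and their variances, and an Egger-style regression of the standardized effect on precision, where an intercept away from zero suggests funnel-plot asymmetry. The third, sample-size-based regression with the specific weighting described above is not reproduced.

```python
import numpy as np
from scipy.stats import kendalltau, t

rng = np.random.default_rng(3)

# Made-up meta-analysis: 15 studies, log odds ratios and their standard errors.
se = rng.uniform(0.1, 0.5, 15)
effect = rng.normal(0.2, se)              # true effect 0.2, no publication bias here

# Simplified Begg-type check: rank correlation between effects and their variances.
tau, p_begg = kendalltau(effect, se**2)

# Egger-type regression: standardized effect on precision; a nonzero intercept
# suggests small-study (funnel-plot) asymmetry.
y, x = effect / se, 1.0 / se
X = np.column_stack([np.ones_like(x), x])
beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
dof = len(y) - 2
cov = (res[0] / dof) * np.linalg.inv(X.T @ X)
t_int = beta[0] / np.sqrt(cov[0, 0])
p_egger = 2 * t.sf(abs(t_int), dof)

print(f"Begg tau = {tau:.2f} (p = {p_begg:.2f}); Egger intercept p = {p_egger:.2f}")
```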

19.
The matched-pairs design enables researchers to efficiently infer causal effects from randomized experiments. In this paper, we exploit the key feature of the matched-pairs design and develop a sensitivity analysis for missing outcomes due to truncation by death, in which the outcomes of interest (e.g., quality of life measures) are not even well defined for some units (e.g., deceased patients). Our key idea is that if 2 nearly identical observations are paired prior to the randomization of the treatment, the missingness of one unit's outcome is informative about the potential missingness of the other unit's outcome under an alternative treatment condition. We consider the average treatment effect among always-observed pairs (ATOP) whose units exhibit no missing outcome regardless of their treatment status. The naive estimator based on available pairs is unbiased for the ATOP if 2 units of the same pair are identical in terms of their missingness patterns. The proposed sensitivity analysis characterizes how the bounds of the ATOP widen as the degree of the within-pair similarity decreases. We further extend the methodology to the matched-pairs design in observational studies. Our simulation studies show that informative bounds can be obtained under some scenarios when the proportion of missing data is not too large. The proposed methodology is also applied to the randomized evaluation of the Mexican universal health insurance program. An open-source software package is available for implementing the proposed research.

20.
Kim YJ, Jhun M. Statistics in Medicine 2008;27(7):1075-1085.
In the analysis of recurrent event data, recurrent events are not completely experienced when the terminating event occurs before the end of a study. To make valid inference about recurrent events, several methods have been suggested for accommodating the terminating event (Statist. Med. 1997; 16:911-924; Biometrics 2000; 56:554-562). In this paper, our interest is in a particular situation where intermittent dropouts result in observation gaps during which no recurrent events are observed. In this situation, risk status varies over time and the usual definition of the risk variable is not applicable. In particular, we consider the case where information on the observation gap is incomplete, that is, the starting time of an intermittent dropout is known but the terminating time is not available. This incomplete information is modeled in terms of an interval-censored mechanism. Our proposed method is applied to the study of the Young Traffic Offenders Program on conviction rates, wherein a certain proportion of subjects experienced suspensions with intermittent dropouts during the study.
