Similar Literature
20 similar records found (search time: 46 ms)
1.
Three recent sequential methods, group sequential analysis (GSA), the sequential probability ratio test (SPRT), and the triangular test (TT), are well suited to randomized clinical trials with a censored response criterion, as they do not require matched pairs of patients. We undertook a simulation study to investigate their statistical properties and to compare these three methods with the fixed-sample design. Our results suggest that the three methods have the expected statistical properties for size and power; that they allow an important reduction in the average number of events before stopping, except with GSA when there is no treatment difference; that the triangular test (closed design) appears to be the optimal design, as the variance of the number of events is smaller than with the sequential probability ratio test (open design); and that analysis after every twenty new events does not alter the statistical properties of these sequential methods and enhances their usefulness.

2.
This paper focuses on statistical analyses in scenarios where some samples from the matched pairs design are missing, resulting in partially matched samples. Motivated by the idea of meta‐analysis, we recast the partially matched samples as coming from two experimental designs and propose a simple yet robust approach based on the weighted Z‐test to integrate the p‐values computed from these two designs. We show that the proposed approach achieves better operating characteristics in simulations and a case study, compared with existing methods for partially matched samples. Copyright © 2013 John Wiley & Sons, Ltd.
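The weighted Z-test described above can be sketched as a Stouffer-type combination: convert each design's p-value to a Z-score and average with weights. This is a minimal illustration, assuming square-root-of-sample-size weights and one-sided p-values; the paper's exact weighting scheme may differ.

```python
import numpy as np
from scipy import stats

def weighted_z_combine(p1, n1, p2, n2):
    """Combine two one-sided p-values with a weighted Z-test.

    p1, n1: p-value and sample size from the matched-pairs (complete) portion.
    p2, n2: p-value and sample size from the unmatched (incomplete) portion.
    Weights proportional to sqrt(sample size) are a common choice.
    """
    z1, z2 = stats.norm.isf(p1), stats.norm.isf(p2)
    w1, w2 = np.sqrt(n1), np.sqrt(n2)
    z_comb = (w1 * z1 + w2 * z2) / np.sqrt(w1**2 + w2**2)
    return z_comb, stats.norm.sf(z_comb)

# e.g. a paired test on 30 complete pairs and a two-sample test on the
# remaining unpaired observations (hypothetical p-values)
z, p = weighted_z_combine(0.04, 30, 0.09, 18)
```

Two moderately small p-values reinforce each other, so the combined p-value is smaller than either input.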

3.
Propensity-score matching has been used widely in observational studies to balance confounders across treatment groups. However, whether matched-pairs analyses should be used as a primary approach is still in debate. We compared the statistical power and type I error rate for four commonly used methods of analyzing propensity-score–matched samples with continuous outcomes: (1) an unadjusted mixed-effects model, (2) an unadjusted generalized estimating method, (3) simple linear regression, and (4) multiple linear regression. Multiple linear regression had the highest statistical power among the four competing methods. We also found that the degree of intraclass correlation within matched pairs depends on the dissimilarity between the coefficient vectors of confounders in the outcome and treatment models. Multiple linear regression is superior to the unadjusted matched-pairs analyses for propensity-score–matched data.

4.
Statistical methods for the analysis of recurrent events are often evaluated in simulation studies. A factor rarely varied in such studies is the underlying event generation process. If the relative performance of statistical methods differs across generation processes, then studies based upon one process may mislead. This paper describes the simulation of recurrent events data using four models of the generation process: Poisson, mixed Poisson, autoregressive, and Weibull. For each model, four commonly used statistical methods for the analysis of recurrent events (Cox's proportional hazards method, the Andersen-Gill method, negative binomial regression, the Prentice-Williams-Peterson method) were applied to 200 simulated data sets, and the mean estimates, standard errors, and confidence intervals were obtained. All methods performed well for the Poisson process. Otherwise, negative binomial regression only performed well for the mixed Poisson process, as did the Andersen-Gill method with a robust estimate of the standard error. The Prentice-Williams-Peterson method performed well only for the autoregressive and Weibull processes. Thus, the relative performance of statistical methods depended upon the model of event generation used to simulate data. In conclusion, it is important that simulation studies of statistical methods for recurrent events include simulated data sets based upon a range of models for event generation.
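The first two generation processes above can be sketched in a few lines: a homogeneous Poisson process yields Poisson event counts over a fixed follow-up, and adding a gamma frailty with mean 1 gives the mixed Poisson (overdispersed) process. A minimal sketch, with hypothetical rate and follow-up values:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_counts(n, rate, followup, frailty_var=0.0):
    """Simulate per-subject recurrent-event counts over a fixed follow-up.

    frailty_var = 0 gives a homogeneous Poisson process; frailty_var > 0
    multiplies each subject's rate by a mean-1 gamma frailty, yielding
    overdispersed (negative-binomial-like) counts -- the mixed Poisson
    generation process.
    """
    if frailty_var > 0:
        shape = 1.0 / frailty_var
        frailty = rng.gamma(shape, scale=frailty_var, size=n)
    else:
        frailty = np.ones(n)
    return rng.poisson(rate * followup * frailty)

poisson_counts = simulate_counts(20000, rate=0.5, followup=4.0)
mixed_counts = simulate_counts(20000, rate=0.5, followup=4.0, frailty_var=0.5)
# Both have mean ~2 events per subject, but the mixed Poisson counts are
# overdispersed: Var = mu + frailty_var * mu^2 ~ 2 + 0.5 * 4 = 4, not 2.
```

This overdispersion is exactly what negative binomial regression (and Andersen-Gill with robust standard errors) accommodates, per the findings above.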

5.
The missing-indicator method and conditional logistic regression have been recommended as alternative approaches for data analysis in matched case-control studies with missing exposure values. The authors evaluated the performance of the two methods using Monte Carlo simulation. Data were generated from a 1:m matched design based on McNemar's 2 x 2 tables with four scenarios for missing values: completely-at-random, case-dependent, exposure-dependent, and case/exposure-dependent. In their analysis, the authors used conditional logistic regression for complete pairs and the missing-indicator method for all pairs. For 1:1 matched studies, given no confounding between exposure and disease, the two methods yielded unbiased estimates. Otherwise, conditional logistic regression produced unbiased estimates with empirical confidence interval coverage similar to nominal coverage under the first three missing-value scenarios, whereas the missing-indicator method produced slightly more bias and lower confidence interval coverage. An increased number of matched controls was associated with slightly more bias and lower confidence interval coverage. Under the case/exposure-dependent missing-value scenario, neither method performed satisfactorily; this indicates the need for more sophisticated statistical methods for handling such missing values. Overall, compared with the missing-indicator method, conditional logistic regression provided a slight advantage in terms of bias and coverage probability, at the cost of slightly reduced statistical power and efficiency.
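For the 1:1 matched case with a single binary exposure mentioned above, conditional logistic regression reduces to the classical McNemar analysis: only the discordant pairs are informative, and the conditional odds ratio is their ratio. A minimal sketch with hypothetical pair counts:

```python
import math

def matched_pair_or(b, c):
    """Conditional ML odds ratio and McNemar statistic for a 1:1 matched
    case-control study with a binary exposure.

    b: pairs where only the case is exposed; c: pairs where only the
    control is exposed. Concordant pairs carry no information under
    conditional logistic regression, which here reduces to OR = b / c.
    """
    or_hat = b / c
    se_log_or = math.sqrt(1.0 / b + 1.0 / c)   # SE of log(OR)
    chi2 = (b - c) ** 2 / (b + c)              # McNemar test statistic
    return or_hat, se_log_or, chi2

or_hat, se, chi2 = matched_pair_or(b=15, c=5)
# or_hat = 3.0, chi2 = 5.0
```

With missing exposure values, this analysis uses complete pairs only, which is the source of the power loss the abstract attributes to conditional logistic regression.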

6.
In longitudinal studies, matched designs are often employed to control potential confounding effects in biomedical research and public health. Because of clinical interest, recurrent time‐to‐event data are captured during follow‐up. Meanwhile, the terminal event of death is always encountered, which should be taken into account for valid inference because of informative censoring. In some scenarios, a large portion of subjects may not have any recurrent events during the study period due to nonsusceptibility to events or censoring; thus, the zero‐inflated nature of the data should be considered in the analysis. In this paper, a joint frailty model with recurrent events and death is proposed to adjust for zero inflation and matched designs. We incorporate two frailties to measure the dependency between subjects within a matched pair and that among recurrent events within each individual. By sharing the random effects, the two event processes of recurrent events and death are dependent on each other. A maximum-likelihood-based approach is applied for parameter estimation, where the Monte Carlo expectation‐maximization algorithm is adopted, and the corresponding R program is developed and available for public use. In addition, alternative estimation methods such as Gaussian quadrature (PROC NLMIXED) and a Bayesian approach (PROC MCMC) are also considered for comparison to show our method's superiority. Extensive simulations are conducted, and a real data application on acute ischemic studies is provided in the end.

7.
Luo X, Sorock GS. Statistics in Medicine 2008, 27(15):2890-2901
The case-crossover design is useful for studying the effects of transient exposures on short-term risk of diseases or injuries when only data on cases are available. The crossover nature of this design allows each subject to serve as his/her own control. While the original design was proposed for univariate event data, in many applications recurrent events are encountered (e.g. elderly falls, gout attacks, and sexually transmitted infections). In such situations, the within-subject dependence among recurrent events needs to be taken into account in the analysis. We review three existing conditional logistic regression (CLR)-based approaches for recurrent event data under the case-crossover design. A simple approach is to use only the first event for each subject; however, we would expect loss of efficiency in estimation. The other two reviewed approaches rely on independence assumptions for the recurrent events, conditionally on a set of covariates. Furthermore, we propose new methods that adjust the CLR using either a within-subject pairwise resampling technique or a weighted estimating equation. No specific dependency structure among recurrent events is needed therein, and hence, they have more flexibility than the existing methods in the situations with unknown correlation structures. We also propose a weighted Mantel-Haenszel estimator, which is easy to implement for data with a binary exposure. In simulation studies, we show that all discussed methods yield virtually unbiased estimates when the conditional independence assumption holds. These methods are illustrated using data from a study of the effect of medication changes on falls among the elderly.

8.
The statistical analysis of recurrent events relies on the assumption of independent censoring. When random effects are used, this means, in addition, that the censoring cannot depend on the random effect. Whenever the recurrent event process is terminated by death, this assumption might not be satisfied. Because joint models arising from such situations are more difficult to fit and interpret, clinicians rarely check whether joint modeling is preferred. In this paper, we propose and compare simple, yet efficient methods for testing whether the terminal event and the recurrent events are associated or not. The performance of the proposed methods is evaluated in a simulation study, and the sensitivity to misspecification of the model is assessed. Finally, the methods are illustrated on a data set comprising repeated observations of skin tumors on T‐cell lymphoma patients. Copyright © 2016 John Wiley & Sons, Ltd.

9.
STUDY OBJECTIVE: The purpose of this paper is to give an overview and comparison of different easily applicable statistical techniques to analyse recurrent event data. SETTING: These techniques include naive techniques and longitudinal techniques such as Cox regression for recurrent events, generalised estimating equations (GEE), and random coefficient analysis. The different techniques are illustrated with a dataset from a randomised controlled trial regarding the treatment of lateral epicondylitis. MAIN RESULTS: The use of different statistical techniques leads to different results and different conclusions regarding the effectiveness of the different intervention strategies. CONCLUSIONS: If a particular short-term or long-term result is of interest, simple naive techniques are appropriate. However, if the development of a particular outcome over time is of interest, statistical techniques that consider the recurrent events and additionally correct for the dependency of the observations are necessary.

10.
This paper examines the problem of hypothesis testing for comparison of time-dependent recurrent events between categorical exposure groups. We compare three methods (a summary chi-square test based on a generalization of the simple Poisson distribution, a chi-square test based on a generalization of the compound Poisson distribution, and a test on risk scores based on individual observed to expected ratios), when the dependent variable may be autocorrelated within an individual, and disease risk may be heterogeneous among subjects. We present a simulation study and an application to a cohort of sickle cell anaemia patients. With autocorrelation or heterogeneous risk present, the simple chi-square test is inappropriate, while the other two methods perform well. An attraction of the risk score method is its relative ease of application.

11.
We consider statistical procedures for hypothesis testing of real-valued functionals of matched pairs with missing values. To improve the accuracy of existing methods, we propose a novel multiplication combination procedure. Dividing the observed data into dependent (completely observed) pairs and independent (incompletely observed) components, the procedure combines the separate results of adequate tests for the two sub-datasets. Our methods can be applied for parametric as well as semiparametric and nonparametric models and make use of all available data. In particular, the approaches are flexible and can be used to test different hypotheses in various models of interest. This is exemplified by a detailed study of mean-based as well as rank-based approaches under different missingness mechanisms with different amounts of missing data. Extensive simulations show that in most considered situations, the proposed procedures are more accurate than existing competitors, particularly for the nonparametric Behrens-Fisher problem. A real data set illustrates the application of the methods.

12.
Comparative studies of the accuracy of diagnostic procedures often use a paired design to gain efficiency. Standard methods for analysing data from paired designs require complete observations. In many studies, however, one of the test results may be missing for some patients. In this paper, we propose a simple correction to the existing complete-data methods for comparing areas under ROC curves derived from paired designs. The approach makes it possible to use the entire available data set in carrying out the comparison, provided that the probability of having both tests does not depend on the test results. As an illustration, we apply our method to the analysis of data from a prospective comparison of MRI and ultrasound in detecting periprostatic invasion.

13.
One-sample non-parametric tests are proposed here for inference on recurring events. The focus is on the marginal mean function of events and the basis for inference is the standardized distance between the observed and the expected number of events under a specified reference rate. Different weights are considered in order to account for various types of alternative hypotheses on the mean function of the recurrent events process. A robust version and a stratified version of the test are also proposed. The performance of these tests was investigated through simulation studies under various underlying event generation processes, such as homogeneous and nonhomogeneous Poisson processes, autoregressive and renewal processes, with and without frailty effects. The robust versions of the test have been shown to be suitable in a wide variety of event generating processes. The motivating context is a study on gene therapy in a very rare immunodeficiency in children, where a major end-point is the recurrence of severe infections. Robust non-parametric one-sample tests for recurrent events can be useful to assess efficacy and especially safety in non-randomized studies or in epidemiological studies for comparison with a standard population.
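The "standardized distance between the observed and the expected number of events" above can be sketched, in its simplest unweighted form, as a Poisson O-versus-E statistic: expected events come from the reference rate times total follow-up, and the Poisson variance equals the mean. A minimal sketch with hypothetical follow-up data (the paper's weighted and robust versions differ):

```python
import math
from scipy import stats

def one_sample_recurrence_test(counts, followup, ref_rate):
    """Unweighted one-sample test of recurrent-event counts against a
    homogeneous Poisson reference rate.

    counts: observed events per subject; followup: years at risk per
    subject; ref_rate: events per person-year under the null.
    """
    observed = sum(counts)
    expected = ref_rate * sum(followup)
    z = (observed - expected) / math.sqrt(expected)  # Poisson: Var = mean
    p = 2 * stats.norm.sf(abs(z))
    return z, p

# five children followed for varying times, reference rate 1 infection/year
z, p = one_sample_recurrence_test(counts=[3, 1, 2, 4, 0],
                                  followup=[2.0, 1.5, 2.0, 1.0, 1.5],
                                  ref_rate=1.0)
```

Frailty or within-subject correlation inflates the variance beyond the Poisson mean, which is why the abstract's robust version is needed outside the homogeneous Poisson case.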

14.
Over the past two decades, a variety of fruitful statistical methods for the analysis of recurrent events has been proposed for estimating covariate effects using the Cox proportional hazards model. Besides frailty modelling, two simple trends of modelling have developed: the first uses stratification on the rank of the event, whereas the second, more closely related to Poisson process theory, does not use stratification. Although both take into account the correlation of the unit failure times, each of these approaches emphasizes a different aspect of the underlying point process, and there is still an ongoing debate concerning the most appropriate method. The aim of this paper is to stress current interests and trends concerning these two approaches. For each model, the main statistical methods for estimating covariate effects are presented. The methods are illustrated and compared in two randomized clinical trials involving recurrences of severe adverse events following chemotherapy in 938 patients with chronic lymphocytic leukaemia, and recurrences of infectious rhinitis episodes in 327 patients. The discussion, based on these examples and on the properties of the underlying statistical inference, deals with the appropriateness of the model choice, which is closely related to the data structure.
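Both modelling trends above start from the same counting-process data layout: each subject's follow-up is split into (start, stop, status) intervals, one per event plus a final censored interval. A minimal sketch of that expansion; the rank-stratified approach uses the `rank` column as a stratum, while the unstratified (Andersen-Gill-type) approach ignores it:

```python
def to_counting_process(event_times, censor_time):
    """Expand one subject's recurrent event times into (start, stop,
    status, rank) intervals for Cox-type recurrent-event models.

    status is 1 for an observed event, 0 for the final censored interval;
    rank numbers the at-risk interval for stratification on event order.
    """
    rows, start = [], 0.0
    for rank, t in enumerate(sorted(event_times), start=1):
        rows.append((start, t, 1, rank))
        start = t
    if start < censor_time:
        rows.append((start, censor_time, 0, len(event_times) + 1))
    return rows

rows = to_counting_process([2.0, 5.5], censor_time=8.0)
# [(0.0, 2.0, 1, 1), (2.0, 5.5, 1, 2), (5.5, 8.0, 0, 3)]
```

These rows can then be fed to any Cox implementation that accepts interval-format survival data.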

15.
Although recurrent event data analysis is a rapidly evolving area of research, rigorous studies on estimating the effects of intermittently observed time‐varying covariates on the risk of recurrent events have been lacking. Existing methods for analyzing recurrent event data usually require that the covariate processes be observed throughout the entire follow‐up period. However, covariates are often observed periodically rather than continuously. We propose a novel semiparametric estimator for the regression parameters in the popular proportional rate model. The proposed estimator is based on an estimated score function in which we kernel smooth the mean covariate process. We show that the proposed semiparametric estimator is asymptotically unbiased and normally distributed, and we derive its asymptotic variance. Simulation studies are conducted to compare the performance of the proposed estimator with the simple method of carrying forward the last observed covariate values. The different methods are applied to an observational study designed to assess the effect of group A streptococcus on pharyngitis among school children in India. Copyright © 2016 John Wiley & Sons, Ltd.

16.
Two statistical methods, the Cochran-Armitage trend (CAT) test and linear regression (LR), were used to analyse the temporal trend in the incidence of acute myocardial infarction (AMI) among Tianjin residents from 1999 to 2013, and the results of the two methods were compared. Based on the actual population, the statistical power of the CAT test was greater than that of LR (CAT P values < LR P values), both for the overall incidence trend and for the trends within age subgroups. When the actual population was reduced by a factor of 100 with the AMI incidence rate held constant, the P values of the CAT test increased markedly while those of LR remained unchanged. Each of the two statistical methods has its own advantages and disadvantages in trend analysis of epidemiological rates; researchers can choose the better-fitting model according to the data, or combine the two methods to describe the statistical results comprehensively.
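The Cochran-Armitage trend test used in the study above can be implemented directly from its textbook formula: score the ordered groups, compare observed case counts to those expected under a common proportion, and standardize. A minimal sketch with hypothetical counts over three time periods (scores default to 0, 1, 2, ...):

```python
import numpy as np
from scipy import stats

def cochran_armitage(cases, totals, scores=None):
    """Cochran-Armitage test for a linear trend in proportions across
    ordered groups. Returns the Z statistic and two-sided p-value.
    """
    cases = np.asarray(cases, dtype=float)
    totals = np.asarray(totals, dtype=float)
    w = (np.arange(len(cases), dtype=float) if scores is None
         else np.asarray(scores, dtype=float))
    n = totals.sum()
    p_bar = cases.sum() / n
    num = np.sum(w * (cases - totals * p_bar))
    var = p_bar * (1 - p_bar) * (np.sum(totals * w**2)
                                 - np.sum(totals * w)**2 / n)
    z = num / np.sqrt(var)
    return z, 2 * stats.norm.sf(abs(z))

# incidence rising from 10% to 40% across three periods
z, p = cochran_armitage(cases=[10, 20, 40], totals=[100, 100, 100])
```

Because the statistic is standardized by the group sizes, shrinking the population (with rates fixed) weakens the evidence, matching the study's observation that the CAT P values rose when the population was scaled down 100-fold.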

17.
BACKGROUND: The published literature on cluster randomized trials focuses on outcomes that are either continuous or binary. In many trials, the outcome is an incidence rate, such as mortality, based on person-years data. In this paper we review methods for the analysis of such data in cluster randomized trials and present some simple approaches. METHODS: We discuss the choice of the measure of intervention effect and present methods for confidence interval estimation and hypothesis testing which are conceptually simple and easy to perform using standard statistical software. The method proposed for hypothesis testing applies a t-test to cluster observations. To control confounding, a Poisson regression model is fitted to the data incorporating all covariates except intervention status, and the analysis is carried out on the residuals from this model. The methods are presented for unpaired data, and extensions to paired or stratified clusters are outlined. RESULTS: The methods are evaluated by simulation and illustrated by application to data from a trial of the effect of insecticide-impregnated bednets on child mortality. CONCLUSIONS: The techniques provide a straightforward approach to the analysis of incidence rates in cluster randomized trials. Both the unadjusted analysis and the analysis adjusting for confounders are shown to be robust, even for very small numbers of clusters, in situations that are likely to arise in randomized trials.
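The unadjusted version of the t-test on cluster observations described above amounts to computing one incidence rate per cluster and comparing the cluster-level rates between arms with an ordinary two-sample t-test. A minimal sketch with hypothetical event counts and person-years for six clusters per arm (the confounder-adjusted version would substitute Poisson-regression residuals for the raw rates):

```python
import numpy as np
from scipy import stats

# hypothetical cluster-level data: events and person-years, 6 clusters/arm
events_ctrl = np.array([30, 42, 25, 38, 33, 29])
pyears_ctrl = np.array([410, 520, 300, 450, 400, 360])
events_trt = np.array([18, 25, 15, 22, 20, 16])
pyears_trt = np.array([400, 510, 310, 440, 390, 350])

# one incidence rate per cluster, then a two-sample t-test on the rates
rates_ctrl = events_ctrl / pyears_ctrl
rates_trt = events_trt / pyears_trt
t, p = stats.ttest_ind(rates_ctrl, rates_trt)
```

Treating the cluster as the unit of analysis is what makes this valid under within-cluster correlation, and is why it remains robust with very small numbers of clusters.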

18.
As myocardial infarction (MI) hospital fatalities decline, survivors are candidates for recurrent events. However, little is known about morbidity after MI and how it may have changed over time. The authors examined the incidence of sudden cardiac death and recurrent ischemic events post-MI to test the hypothesis that it has declined over time. MIs were validated by using standardized criteria. Sudden cardiac death and recurrent ischemic events (recurrent MI or unstable angina) were identified through Olmsted County, Minnesota, community medical records and their association with time examined after adjustment for age, sex, and comorbidity. Between 1979 and 1998, 2,277 MIs occurred (57% in men; mean age, 67 (standard deviation, 14) years). After 3 years, the event-free survival rate was 94% (95% confidence interval: 93, 95) for sudden cardiac death and 56% (95% confidence interval: 54, 58) for recurrent ischemic events. Both outcomes were more frequent with older age and greater comorbidity. The temporal decline in both events was of similar magnitude; for an MI occurring in 1998 versus 1979, risk of subsequent recurrent ischemic events or sudden cardiac death declined by 24% (relative risk = 0.76, 95% confidence interval: 0.63, 0.93). Thus, in the community, recurrent ischemic events are frequent post-MI, while sudden cardiac death is less common. Their incidence declined over time, supporting the notion that contemporary treatments effectively improve outcomes after MI.

19.
Statistical methodology for paired cluster designs
Recently developed methodology is applied to the analysis of data arising from a design in which an experimental intervention is assigned at random to one of two clusters in a matched pair. Standard statistical techniques do not apply to such data, since they do not take into account variation arising from differences among clusters as well as within clusters. The methodology presented provides a significance test over k pairs of clusters with respect to a dichotomous outcome variable while controlling for confounding. The investigation of interaction effects is also discussed.

20.
The matched‐pairs design enables researchers to efficiently infer causal effects from randomized experiments. In this paper, we exploit the key feature of the matched‐pairs design and develop a sensitivity analysis for missing outcomes due to truncation by death, in which the outcomes of interest (e.g., quality-of-life measures) are not even well defined for some units (e.g., deceased patients). Our key idea is that if two nearly identical observations are paired prior to the randomization of the treatment, the missingness of one unit's outcome is informative about the potential missingness of the other unit's outcome under the alternative treatment condition. We consider the average treatment effect among always‐observed pairs (ATOP), whose units exhibit no missing outcome regardless of their treatment status. The naive estimator based on available pairs is unbiased for the ATOP if the two units of a pair are identical in terms of their missingness patterns. The proposed sensitivity analysis characterizes how the bounds of the ATOP widen as the degree of within‐pair similarity decreases. We further extend the methodology to the matched‐pairs design in observational studies. Our simulation studies show that informative bounds can be obtained under some scenarios when the proportion of missing data is not too large. The proposed methodology is also applied to the randomized evaluation of the Mexican universal health insurance program. An open‐source software package is available for implementing the proposed research.
