20 similar records found
1.
We used simulation to compare the accuracy of estimation and confidence interval coverage of several methods for analysing binary outcomes from cluster randomized trials. The following methods were used to estimate the population-averaged intervention effect on the log-odds scale: marginal logistic regression fitted using generalized estimating equations with information sandwich estimates of standard error (GEE); unweighted cluster-level mean difference (CL/U); weighted cluster-level mean difference (CL/W); and cluster-level random effects linear regression (CL/RE). Methods were compared across trials simulated with different numbers of clusters per trial arm, numbers of subjects per cluster, intraclass correlation coefficients (rho), and intervention versus control arm proportions. Two thousand data sets were generated for each combination of design parameter values. The results showed that the GEE method has generally acceptable properties, including close to nominal levels of confidence interval coverage, when a simple adjustment is made for data with relatively few clusters. CL/U and CL/W have good properties for trials where the number of subjects per cluster is sufficiently large and rho is sufficiently small. CL/RE also has good properties in this situation, provided a t-distribution multiplier is used for confidence interval calculation in studies with small numbers of clusters. When the number of subjects per cluster is small and rho is large, all cluster-level methods may perform poorly in studies with between 10 and 50 clusters per trial arm.
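To make the comparison concrete, here is a minimal sketch, not the authors' simulation code: cluster counts, sizes, and proportions are illustrative, and Python's statsmodels/scipy stand in for whatever software the study used. It fits the GEE marginal logistic model with sandwich standard errors and runs the unweighted cluster-level (CL/U) t-test on one simulated trial.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(1)
n_clusters, m, rho = 15, 20, 0.05        # clusters per arm, cluster size, ICC

rows = []
for arm, p in ((0, 0.30), (1, 0.45)):    # control vs intervention proportions
    for c in range(n_clusters):
        # beta-binomial construction gives an exchangeable ICC of exactly rho
        a, b = p * (1 - rho) / rho, (1 - p) * (1 - rho) / rho
        pc = rng.beta(a, b)
        rows += [{"cluster": arm * n_clusters + c, "arm": arm,
                  "y": rng.binomial(1, pc)} for _ in range(m)]
df = pd.DataFrame(rows)

# GEE: population-averaged log-odds ratio with robust (sandwich) SEs
gee = sm.GEE.from_formula("y ~ arm", groups="cluster", data=df,
                          family=sm.families.Binomial(),
                          cov_struct=sm.cov_struct.Exchangeable()).fit()
print(gee.params["arm"], gee.bse["arm"])

# CL/U: unweighted cluster-level proportions compared by a two-sample t-test
props = df.groupby(["arm", "cluster"])["y"].mean()
print(stats.ttest_ind(props.loc[1], props.loc[0]))
```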
2.
Philip M. Westgate. Statistics in Medicine 2016, 35(19): 3272-3284
In this paper, we focus on cluster randomized trials, also known as group randomized trials, which randomize clusters, or groups, of subjects to different trial arms, such as intervention or control. Outcomes from subjects within the same cluster tend to exhibit an exchangeable correlation measured by the intra-cluster correlation coefficient (ICC). Our primary interest is in testing whether the intervention has an impact on the marginal mean of an outcome. Using recently developed methods, we propose how to select a working ICC structure with the goal of choosing the structure that results in the smallest standard errors for regression parameter estimates, and thus the greatest power for this test. Specifically, we utilize small-sample corrections for the estimation of the covariance matrix of regression parameter estimates. This matrix is incorporated within correlation selection criteria proposed in the generalized estimating equations literature to choose one of multiple working ICC structures under consideration. We demonstrate the potential power and utility of this approach in cluster randomized trial settings via a simulation study and an application example, and we discuss practical considerations for its use in practice.
3.
Fan Li, Elizabeth L. Turner, Patrick J. Heagerty, David M. Murray, William M. Vollmer, Elizabeth R. DeLong. Statistics in Medicine 2017, 36(24): 3791-3806
Group-randomized trials are randomized studies that allocate intact groups of individuals to different comparison arms. A frequent practical limitation of such designs is that only a limited number of groups may be available, so simple randomization is unable to adequately balance multiple group-level covariates between arms. Covariate-based constrained randomization was therefore proposed as an allocation technique to achieve balance. Constrained randomization involves generating a large number of possible allocation schemes, calculating a balance score that assesses covariate imbalance, limiting the randomization space to a prespecified percentage of candidate allocations, and randomly selecting one scheme to implement. When the outcome is binary, a number of statistical issues arise regarding the potential advantages of such designs for inference. In particular, properties found for continuous outcomes may not directly apply, and additional variations on statistical tests are available. Motivated by two recent trials, we conduct a series of Monte Carlo simulations to evaluate the statistical properties of model-based and randomization-based tests under both simple and constrained randomization designs, with varying degrees of analysis-based covariate adjustment. Our results indicate that constrained randomization improves the power of the linearization F-test, the KC-corrected GEE t-test (Kauermann and Carroll, 2001, Journal of the American Statistical Association 96, 1387-1396), and two permutation tests when the prognostic group-level variables are controlled for in the analysis and the size of the randomization space is reasonably small. We also demonstrate that constrained randomization reduces power loss from redundant analysis-based adjustment for non-prognostic covariates. Design considerations such as the choice of the balance metric and the size of the randomization space are discussed.
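A hedged sketch of the allocation steps the abstract describes, with an assumed balance score (sum of squared standardized arm differences in cluster-level covariate means) rather than the paper's specific metric; cluster counts and covariates are illustrative.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)
n_clusters = 12                                  # 6 clusters per arm
X = rng.normal(size=(n_clusters, 3))             # 3 cluster-level covariates

def balance_score(treated_idx):
    z = np.zeros(n_clusters, dtype=bool)
    z[list(treated_idx)] = True
    diff = X[z].mean(axis=0) - X[~z].mean(axis=0)
    return np.sum((diff / X.std(axis=0)) ** 2)

# Enumerate all allocations of 6 of 12 clusters to intervention and score them
schemes = list(combinations(range(n_clusters), n_clusters // 2))
scores = np.array([balance_score(s) for s in schemes])

# Constrain the randomization space to the best-balanced 10% of schemes,
# then randomly pick one scheme to implement
cutoff = np.quantile(scores, 0.10)
candidates = [s for s, sc in zip(schemes, scores) if sc <= cutoff]
chosen = candidates[rng.integers(len(candidates))]
print("intervention clusters:", chosen)
```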
4.
Individual randomized trials (IRTs) and cluster randomized trials (CRTs) with binary outcomes arise in a variety of settings and are often analyzed by logistic regression (fitted using generalized estimating equations for CRTs). The effect of stratification on the required sample size is less well understood for trials with binary outcomes than for those with continuous outcomes. We propose easy-to-use methods for sample size estimation for stratified IRTs and CRTs and demonstrate their use for a tuberculosis prevention CRT currently being planned. For both IRTs and CRTs, we also identify the ratio of the sample size for a stratified trial versus a comparably powered unstratified trial, allowing investigators to evaluate how stratification will affect the required sample size when planning a trial. For CRTs, these methods can be used when the investigator has estimates of the within-stratum intracluster correlation coefficients (ICCs) or by assuming a common within-stratum ICC. Using these methods, we describe scenarios where stratification may have a practically important impact on the required sample size. We find that in the two-stratum case, for both IRTs and for CRTs with very small cluster sizes, there are unlikely to be plausible scenarios in which an important sample size reduction is achieved when the overall probability of a subject experiencing the event of interest is low. When the probability of events is not small, or when cluster sizes are large, however, there are scenarios where practically important reductions in sample size result from stratification.
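The paper's stratified formulas are not reproduced here; as a baseline for comparison, the following sketch computes the standard unstratified CRT sample size for two proportions, inflating the individually randomized requirement by the usual design effect 1 + (m - 1)·rho. All inputs are illustrative.

```python
import math
from scipy.stats import norm

def crt_clusters_per_arm(p1, p2, m, rho, alpha=0.05, power=0.80):
    """Unstratified CRT: clusters per arm to compare two proportions."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_ind = z**2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
    deff = 1 + (m - 1) * rho           # variance inflation from clustering
    return math.ceil(n_ind * deff / m)

print(crt_clusters_per_arm(p1=0.10, p2=0.05, m=50, rho=0.02))   # -> 18
```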
5.
This paper evaluates methods for unadjusted analyses of binary outcomes in cluster randomized trials (CRTs). Under the generalized estimating equations (GEE) method, the identity, log and logit link functions may be specified to make inferences on the risk difference, risk ratio and odds ratio scales, respectively. An alternative, 'cluster-level', method applies the t-test to summary statistics calculated for each cluster, using proportions, log proportions and log odds, to make inferences on the respective scales. Simulation was used to estimate the bias of the unadjusted intervention effect estimates and confidence interval coverage, generating data sets with different combinations of number of clusters, number of participants per cluster, intra-cluster correlation coefficient rho, and intervention effect. When the identity link was specified, GEE had little bias and good coverage, performing slightly better than the log and logit link functions. The cluster-level method provided unbiased point estimates when proportions were used to summarize the clusters. When the log proportion and log odds were used, however, the method often had markedly large bias for two reasons: (i) bias in the modified summary statistic used for cluster-level estimation when a cluster has zero cases with the outcome of interest (arising when the number of participants sampled per cluster is small and the outcome prevalence is low), and (ii) asymptotically, the method estimates the ratio of geometric means of the cluster proportions or odds, respectively, between the trial arms rather than the ratio of arithmetic means.
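A small illustration of the zero-cell problem the abstract identifies: log proportions and log odds are undefined for clusters with no events, so cluster-level summaries need a modification such as the 0.5-type continuity correction used below (a common fix; the paper's exact modification may differ). Data and parameters are simulated for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 10                                            # participants per cluster
events = rng.binomial(n, [[0.08], [0.12]], size=(2, 20))  # 2 arms x 20 clusters

def summaries(x):
    p = (x + 0.5) / (n + 1)                       # continuity-corrected
    return x / n, np.log(p), np.log(p / (1 - p))  # proportion, log p, log odds

labels = ["proportion", "log proportion", "log odds"]
for label, a0, a1 in zip(labels, summaries(events[0]), summaries(events[1])):
    print(label, stats.ttest_ind(a1, a0).pvalue)  # cluster-level t-test
```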
6.
In cluster-randomized trials, it is commonly assumed that the magnitude of the correlation among subjects within a cluster is constant across clusters. However, the correlation may in fact be heterogeneous and depend on cluster characteristics. Accurate modeling of the correlation has the potential to improve inference. We use second-order generalized estimating equations to model heterogeneous correlation in cluster-randomized trials. Using simulation studies we show that accurate modeling of heterogeneous correlation can improve inference when the correlation is high or varies by cluster size. We apply the methods to a cluster-randomized trial of an intervention to promote breast cancer screening.
7.
Philip M. Westgate. Statistics in Medicine 2013, 32(16): 2850-2858
Generalized estimating equations (GEEs) are routinely used for the marginal analysis of correlated data. The efficiency of GEE depends on how closely the working covariance structure resembles the true structure, and therefore accurate modeling of the working correlation of the data is important. A popular approach is the use of an unstructured working correlation matrix, as it is not as restrictive as simpler structures such as exchangeable and AR-1 and thus can theoretically improve efficiency. However, because of the potential for having to estimate a large number of correlation parameters, variances of regression parameter estimates can be larger than theoretically expected when utilizing the unstructured working correlation matrix. Therefore, standard error estimates can be negatively biased. To account for this additional finite-sample variability, we derive a bias correction that can be applied to typical estimators of the covariance matrix of parameter estimates. Via simulation and in application to a longitudinal study, we show that our proposed correction improves standard error estimation and statistical inference.
8.
Fan Li, Andrew B. Forbes, Elizabeth L. Turner, John S. Preisser. Statistics in Medicine 2019, 38(4): 636-649
The cluster randomized crossover design has been proposed to improve efficiency over the traditional parallel cluster randomized design, which often involves a limited number of clusters. In recent years, the cluster randomized crossover design has been increasingly used to evaluate the effectiveness of health care policy or programs, and the interest often lies in quantifying the population-averaged intervention effect. In this paper, we consider the two-treatment two-period crossover design, and develop sample size procedures for continuous and binary outcomes corresponding to a population-averaged model estimated by generalized estimating equations, accounting for both within-period and interperiod correlations. In particular, we show that the required sample size depends on the correlation parameters through an eigenvalue of the within-cluster correlation matrix for continuous outcomes, and through two distinct eigenvalues of the correlation matrix for binary outcomes. We demonstrate that the empirical power corresponds well with the power predicted by the proposed formulae for as few as eight clusters, when outcomes are analyzed using the matrix-adjusted estimating equations for the correlation parameters concurrently with a suitable bias-corrected sandwich variance estimator.
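To illustrate the eigenvalue dependence, the sketch below builds one plausible within-cluster correlation matrix for a two-period crossover (a block-exchangeable structure with assumed within-period correlation a1 and inter-period correlation a2; the paper's parametrization may differ) and extracts its distinct eigenvalues.

```python
import numpy as np

m, a1, a2 = 5, 0.05, 0.02          # subjects per period; assumed correlations
J, I = np.ones((m, m)), np.eye(m)
within = (1 - a1) * I + a1 * J     # same-period (exchangeable) correlation
between = a2 * J                   # inter-period correlation
R = np.block([[within, between], [between, within]])

# Distinct eigenvalues: 1 - a1 (repeated), and 1 + (m-1)*a1 -/+ m*a2
print(np.unique(np.round(np.linalg.eigvalsh(R), 10)))
```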
9.
Recent methodological advances in covariate adjustment in randomized clinical trials have used semiparametric theory to improve the efficiency of inferences by incorporating baseline covariates; these methods have focused on independent outcomes. We modify one of these approaches, augmentation of standard estimators, for use within cluster randomized trials in which treatments are assigned to groups of individuals, thereby inducing correlation. We demonstrate the potential for imbalance correction and efficiency improvement through consideration of both cluster-level and individual-level covariates. To improve small-sample estimation, we consider several variance adjustments. We evaluate this approach for continuous and binary outcomes through simulation and apply it to data from a cluster randomized trial of a community behavioral intervention related to HIV prevention in Tanzania.
10.
We consider the problem of sample size determination for count data. Such data arise naturally in the context of multicenter (or cluster) randomized clinical trials, where patients are nested within research centers. We consider cluster-specific estimators (maximum likelihood based on generalized mixed-effects regression) for subject-level randomized designs and population-averaged estimators (generalized estimating equations) for cluster-level randomized designs. We provide simple expressions for calculating the number of clusters when comparing event rates of two groups in cross-sectional studies. The expressions we derive have closed-form solutions and are based on either between-cluster variation or intercluster correlation for cross-sectional studies. We provide both theoretical and numerical comparisons of our methods with other existing methods. We specifically show that the performance of the proposed method is better for subject-level randomized designs, whereas the comparative performance depends on the rate ratio for cluster-level randomized designs. We also provide a versatile method for longitudinal studies. Three real data examples illustrate the results.
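The paper's own closed-form expressions are not reproduced here; for orientation, this sketch implements the widely used Hayes-Bennett formula for clusters per arm when comparing event rates in an unmatched cluster-randomized study, which plays the same role. The inputs are illustrative.

```python
import math
from scipy.stats import norm

def rate_clusters_per_arm(lam0, lam1, y, k, alpha=0.05, power=0.80):
    """Clusters per arm; y = person-years per cluster, k = between-cluster CV."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var_term = (lam0 + lam1) / y + k**2 * (lam0**2 + lam1**2)
    return math.ceil(1 + z**2 * var_term / (lam0 - lam1) ** 2)

print(rate_clusters_per_arm(lam0=0.010, lam1=0.005, y=1000, k=0.25))  # -> 9
```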
11.
Longitudinal data analysis methods are powerful tools for exploring scientific questions regarding change and are well suited to evaluate the impact of a new policy. However, there are challenging aspects of policy change data that require consideration, such as defining comparison groups, separating the effect of time from that of the policy, and accounting for heterogeneity in the policy effect. We compare currently available methods to evaluate a policy change and illustrate issues specific to a policy change analysis via a case study of laws that eliminate gun-use restrictions (shall-issue laws) and firearm-related homicide. We obtain homicide rate ratios estimating the effect of enacting a shall-issue law, which vary between 0.903 and 1.101. We conclude that in a policy change analysis it is essential to select a mean model that most accurately characterizes the anticipated effect of the policy intervention, thoroughly model temporal trends, and select methods that accommodate unit-specific policy effects. We also conclude that several longitudinal data analysis methods are useful to evaluate a policy change, but not all may be appropriate in certain contexts. Analysts must carefully decide which methods are appropriate for their application and must be aware of the differences between methods to select a procedure that generates valid inference.
12.
The analysis of very small samples of Gaussian repeated measurements can be challenging. First, due to a very small number of independent subjects contributing outcomes over time, statistical power can be quite small. Second, nuisance covariance parameters must be appropriately accounted for in the analysis in order to maintain the nominal test size. However, available statistical strategies that ensure valid statistical inference may lack power, whereas more powerful methods may have the potential for inflated test sizes. Therefore, we explore an alternative approach to the analysis of very small samples of Gaussian repeated measurements, with the goal of maintaining valid inference while also improving statistical power relative to other valid methods. This approach uses generalized estimating equations with a bias-corrected empirical covariance matrix that accounts for all small-sample aspects of nuisance correlation parameter estimation in order to maintain valid inference. Furthermore, the approach utilizes correlation selection strategies with the goal of choosing the working structure that will result in the greatest power. In our study, we show that when accurate modeling of the nuisance correlation structure impacts the efficiency of regression parameter estimation, this method can improve power relative to existing methods that yield valid inference.
13.
Generalized estimating equations (GEEs) are commonly used for the marginal analysis of longitudinal data. In order to obtain consistent regression parameter estimates, these estimating equations must be unbiased. However, in the presence of certain types of time-dependent covariates, these equations can be biased unless they incorporate the independence working correlation structure. Moreover, in this case, regression parameter estimation can be very inefficient because not all valid moment conditions are incorporated within the corresponding estimating equations. Therefore, approaches based on the generalized method of moments or quadratic inference functions have been proposed in order to utilize all valid moment conditions. However, we have found in previous studies, as well as the current study, that such methods will not always provide valid inference and can also be improved upon in terms of finite-sample regression parameter estimation. Therefore, we propose both a modified GEE approach and a method selection strategy in order to ensure valid inference with the goal of improving regression parameter estimation. In a simulation study and application example, we compare existing and proposed methods and demonstrate that our modified GEE approach performs well, and the correlation information criterion has good accuracy with respect to selecting the best approach in terms of regression parameter estimation.
14.
The sandwich estimator in the generalized estimating equations (GEE) approach underestimates the true variance in small samples and consequently results in inflated type I error rates in hypothesis testing. This limits the application of GEE in cluster-randomized trials (CRTs) with few clusters. Under various CRT scenarios with correlated binary outcomes, we evaluate the small-sample properties of GEE Wald tests using bias-corrected sandwich estimators. Our results suggest that the GEE Wald z-test should be avoided in analyses of CRTs with few clusters, even when bias-corrected sandwich estimators are used. With a t-distribution approximation, the Kauermann and Carroll (KC) correction can keep the test size at nominal levels even when the number of clusters is as low as 10, and is robust to moderate variation in cluster sizes. In cases with large variation in cluster sizes, however, the Fay and Graubard (FG) correction should be used instead. Furthermore, we derive a formula to calculate the power and the minimum total number of clusters needed using the t-test and KC correction for CRTs with binary outcomes. The power levels predicted by the proposed formula agree well with the empirical powers from the simulations. The proposed methods are illustrated using real CRT data. We conclude that, with appropriate control of type I error rates under small sample sizes, the GEE approach is recommended in CRTs with binary outcomes because it makes fewer assumptions and is robust to misspecification of the covariance structure.
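statsmodels does not expose the KC or FG corrections directly, but its bias-reduced (Mancl-DeRouen) sandwich estimator pairs with a t reference distribution in the same way; the sketch below (simulated data, illustrative cluster counts) shows that pattern, with the KC/FG corrections themselves left to be coded separately.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(11)
G, m = 10, 25                                     # few clusters, binary outcomes
df = pd.DataFrame({"cluster": np.repeat(np.arange(G), m),
                   "arm": np.repeat(np.arange(G) % 2, m)})
df["y"] = rng.binomial(1, 0.3 + 0.1 * df["arm"])  # no cluster effect, for brevity

res = sm.GEE.from_formula("y ~ arm", groups="cluster", data=df,
                          family=sm.families.Binomial(),
                          cov_struct=sm.cov_struct.Exchangeable()).fit()

# Bias-reduced (Mancl-DeRouen) sandwich SE, then a t-test with G - 2 df
se_bc = np.asarray(res.standard_errors(cov_type="bias_reduced"))[1]
t_stat = res.params["arm"] / se_bc
print(t_stat, 2 * stats.t.sf(abs(t_stat), df=G - 2))
```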
15.
Philip M. Westgate. Statistics in Medicine 2014, 33(13): 2222-2237
Generalized estimating equations are commonly used to analyze correlated data. Choosing an appropriate working correlation structure for the data is important, as the efficiency of generalized estimating equations depends on how closely this structure approximates the true structure. Multiple criteria have therefore been proposed to select the working correlation structure, although some of these criteria have neither been compared nor extensively studied. To ease the correlation selection process, we propose a criterion that utilizes the trace of the empirical covariance matrix. Furthermore, use of the unstructured working correlation can potentially improve estimation precision and therefore should be considered when data arise from a balanced longitudinal study. However, most previous studies have not allowed the unstructured working correlation to be selected, as it estimates more nuisance correlation parameters than other structures such as AR-1 or exchangeable. We therefore propose appropriate penalties for the selection criteria that can be imposed upon the unstructured working correlation. Via simulation in multiple scenarios and in application to a longitudinal study, we show that the trace of the empirical covariance matrix works very well relative to existing criteria. We further show that allowing criteria to select the unstructured working correlation when utilizing the penalties can substantially improve parameter estimation.
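A simplified sketch of the trace-based criterion, omitting the paper's penalties and the unstructured candidate: fit the same marginal model under several working structures and keep the one whose estimated covariance matrix of the regression estimates has the smallest trace. The data are simulated and the structure names are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
n_subj, n_time = 40, 4                            # balanced longitudinal layout
df = pd.DataFrame({"id": np.repeat(np.arange(n_subj), n_time),
                   "t": np.tile(np.arange(n_time), n_subj).astype(float)})
u = np.repeat(rng.normal(scale=0.7, size=n_subj), n_time)   # subject effect
df["y"] = 1.0 + 0.5 * df["t"] + u + rng.normal(size=len(df))

structures = {"independence": sm.cov_struct.Independence(),
              "exchangeable": sm.cov_struct.Exchangeable(),
              "ar1": sm.cov_struct.Autoregressive()}
traces = {}
for name, cs in structures.items():
    res = sm.GEE.from_formula("y ~ t", groups="id", time="t", data=df,
                              cov_struct=cs).fit()
    traces[name] = np.trace(res.cov_params())     # trace of empirical covariance

print(min(traces, key=traces.get), traces)
```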
16.
Generalized estimating equations (GEE) are commonly used for the analysis of correlated data. However, use of quadratic inference functions (QIFs) is becoming popular because it increases efficiency relative to GEE when the working covariance structure is misspecified. Although shown to be advantageous in the literature, the impacts of covariates and imbalanced cluster sizes on the estimation performance of the QIF method in finite samples have not been studied. This cluster size variation causes QIF's estimating equations and GEE to be in separate classes when an exchangeable correlation structure is implemented, causing QIF and GEE to be incomparable in terms of efficiency. When utilizing this structure and the number of clusters is not large, we discuss how covariates and cluster size imbalance can cause QIF, rather than GEE, to produce estimates with the larger variability. This occurrence is mainly due to the empirical nature of weighting QIF employs, rather than differences in estimating equations classes. We demonstrate QIF's lost estimation precision through simulation studies covering a variety of general cluster randomized trial scenarios and compare QIF and GEE in the analysis of data from a cluster randomized trial.
17.
Alejandro Salazar, Begoña Ojeda, María Dueñas, Fernando Fernández, Inmaculada Failde. Statistics in Medicine 2016, 35(19): 3424-3448
Missing data are a common problem in clinical and epidemiological research, especially in longitudinal studies. Despite many methodological advances in recent decades, many papers on clinical trials and epidemiological studies do not report using principled statistical methods to accommodate missing data, or use ineffective or inappropriate techniques. Two refined techniques are presented here: generalized estimating equations (GEEs) and weighted generalized estimating equations (WGEEs). These techniques extend generalized linear models to longitudinal or clustered data, where observations are no longer independent. They can appropriately handle missing data when the missingness is completely at random (GEE and WGEE) or at random (WGEE), and do not require the outcome to be normally distributed. Our aim is to describe and illustrate with a real example, in a simple way accessible to researchers, these techniques for handling missing data in the context of longitudinal studies subject to dropout, and to show how to implement them in R. We apply them to assess the evolution of health-related quality of life in coronary patients in a data set subject to dropout.
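The paper implements these methods in R; as a rough Python analogue (an assumption, not the authors' code), the sketch below builds deliberately simplified inverse-probability-of-remaining weights from a dropout model and passes them to a GEE fit via statsmodels' weights argument. Production WGEE needs more care (e.g., fitting the dropout model only among those still at risk).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(9)
n, T = 200, 3
df = pd.DataFrame({"id": np.repeat(np.arange(n), T),
                   "t": np.tile(np.arange(T), n).astype(float)})
df["y"] = 2.0 - 0.3 * df["t"] + rng.normal(size=len(df))
df["y_lag"] = df.groupby("id")["y"].shift().fillna(0.0)

# MAR monotone dropout: the chance of remaining depends on the prior outcome
stay_p = 1.0 / (1.0 + np.exp(-(2.0 + 0.5 * df["y_lag"])))
df["observed"] = ((rng.uniform(size=len(df)) < stay_p).astype(int)
                  .groupby(df["id"]).cummin())

# Model the probability of remaining; weight each observed record by the
# inverse cumulative product of fitted probabilities (simplified here)
X = sm.add_constant(df["y_lag"])
fit_drop = sm.Logit(df["observed"], X).fit(disp=0)
df["w"] = 1.0 / fit_drop.predict(X).groupby(df["id"]).cumprod()

obs = df[df["observed"] == 1]
wgee = sm.GEE.from_formula("y ~ t", groups="id", data=obs,
                           cov_struct=sm.cov_struct.Independence(),
                           weights=np.asarray(obs["w"])).fit()
print(wgee.params)
```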
18.
A number of methods for analysing longitudinal ordinal categorical data with missing-at-random drop-outs are considered. Two are maximum-likelihood methods (MAXLIK) which employ marginal global odds ratios to model associations. The remainder use weighted or unweighted generalized estimating equations (GEE). Two of the GEE methods use Cholesky-decomposed standardized residuals to model the association structure, while another three extend methods developed for longitudinal binary data in which the association structures are modelled using Gaussian estimation, multivariate normal estimating equations or conditional residuals. Simulated data sets were used to assess differences among the methods in terms of biases, variances and convergence rates when the association structure is misspecified. The methods were also applied to a real medical data set. Two of the GEE methods, referred to as Cond and ML-norm in this paper and by their originators, were found to have relatively good convergence rates and mean squared errors for all sample sizes (80, 120, 300) considered, and a third, referred to as MGEE in this paper and by its originators, worked fairly well for all but the smallest sample size, 80.
19.
Generalized estimating equations (GEE) is a general statistical method to fit marginal models for longitudinal data in biomedical studies. The variance-covariance matrix of the regression coefficient estimates is usually estimated by a robust "sandwich" variance estimator, which does not perform satisfactorily when the sample size is small. To reduce the downward bias and improve efficiency, several modified variance estimators have been proposed for bias correction or efficiency improvement. In this paper, we provide a comprehensive review of recent developments in modified variance estimators and compare their small-sample performance theoretically and numerically through simulation and real data examples. In particular, Wald tests and t-tests based on different variance estimators are used for hypothesis testing, and guidance on appropriate sample sizes for each estimator is provided for preserving type I error in general cases, based on numerical results. Moreover, we develop a user-friendly R package "geesmv" incorporating all of these variance estimators for public use in practice.
20.
Generalized estimating equations (GEE) are often used for the marginal analysis of longitudinal data. Although much work has been performed to improve the validity of GEE for the analysis of data arising from small-sample studies, little attention has been given to power in such settings. Therefore, we propose a valid GEE approach to improve power in small-sample longitudinal study settings in which the temporal spacing of outcomes is the same for each subject. Specifically, we use a modified empirical sandwich covariance matrix estimator within correlation structure selection criteria and test statistics. Use of this estimator can improve the accuracy of selection criteria and increase the degrees of freedom to be used for inference. The resulting impacts on power are demonstrated via a simulation study and application example.