Similar Articles (10 results found)
1.
Cluster randomized trials (CRTs) are increasingly used to evaluate the effectiveness of health‐care interventions. A key feature of CRTs is that the observations on individuals within clusters are correlated as a result of between‐cluster variability. Sample size formulae exist which account for such correlations, but they make different assumptions regarding the between‐cluster variability in the intervention arm of a trial, resulting in different sample size estimates. We explore the relationship for binary outcome data between two common measures of between‐cluster variability: k, the coefficient of variation and ρ, the intracluster correlation coefficient. We then assess how the assumptions of constant k or ρ across treatment arms correspond to different assumptions about intervention effects. We assess implications for sample size estimation and present a simple solution to the problems outlined. Copyright © 2009 John Wiley & Sons, Ltd.
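For binary outcomes the two variability measures above are linked by a simple identity: with outcome proportion π and between-cluster standard deviation σb, ρ = σb²/(π(1−π)) and k = σb/π, so ρ = k²π/(1−π). A minimal sketch of the conversion and the textbook design-effect sample-size calculation (illustrative code of my own, not the authors' formulae; function names are invented):

```python
import math

def icc_from_k(k, pi):
    # same between-cluster variance expressed two ways:
    # sigma_b^2 = (k*pi)^2 = rho*pi*(1-pi)  =>  rho = k^2*pi/(1-pi)
    return k * k * pi / (1.0 - pi)

def k_from_icc(rho, pi):
    return math.sqrt(rho * (1.0 - pi) / pi)

def clusters_per_arm(p1, p2, m, rho):
    # standard two-proportion sample size (alpha=0.05 two-sided, 80% power),
    # inflated by the design effect 1 + (m-1)*rho, converted to clusters of size m
    z_a, z_b = 1.959964, 0.841621
    n_ind = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
    return math.ceil(n_ind * (1 + (m - 1) * rho) / m)
```

The round trip between k and ρ depends on π, which is exactly why assuming one of them constant across arms implies a different intervention-effect model than assuming the other constant.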

2.
We used simulation to compare accuracy of estimation and confidence interval coverage of several methods for analysing binary outcomes from cluster randomized trials. The following methods were used to estimate the population-averaged intervention effect on the log-odds scale: marginal logistic regression using generalized estimating equations with information sandwich estimates of standard error (GEE); unweighted cluster-level mean difference (CL/U); weighted cluster-level mean difference (CL/W); and cluster-level random effects linear regression (CL/RE). Methods were compared across trials simulated with different numbers of clusters per trial arm, numbers of subjects per cluster, intraclass correlation coefficients (rho), and intervention versus control arm proportions. Two thousand data sets were generated for each combination of design parameter values. The results showed that the GEE method has generally acceptable properties, including close to nominal levels of confidence interval coverage, when a simple adjustment is made for data with relatively few clusters. CL/U and CL/W have good properties for trials where the number of subjects per cluster is sufficiently large and rho is sufficiently small. CL/RE also has good properties in this situation, provided a t-distribution multiplier is used for confidence interval calculation in studies with small numbers of clusters. When the number of subjects per cluster is small and rho is large, all cluster-level methods may perform poorly for studies with between 10 and 50 clusters per trial arm.
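The cluster-level methods compared above reduce each cluster to a single summary statistic. A hedged sketch of the unweighted cluster-level analysis (CL/U) as a two-sample t-test on observed cluster proportions (illustrative only; the function and variable names are mine, and the t multiplier would be looked up at the returned degrees of freedom):

```python
import math
import statistics

def cl_u_test(props_control, props_interv):
    # unweighted cluster-level analysis: each cluster's observed event
    # proportion becomes one data point in an ordinary two-sample t-test
    n1, n2 = len(props_control), len(props_interv)
    m1, m2 = statistics.mean(props_control), statistics.mean(props_interv)
    v1, v2 = statistics.variance(props_control), statistics.variance(props_interv)
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)   # pooled variance
    t = (m2 - m1) / math.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))
    return m2 - m1, t, n1 + n2 - 2   # effect, t statistic, degrees of freedom
```

Because the degrees of freedom are driven by the number of clusters, not the number of subjects, this makes vivid why performance degrades when clusters are few.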

3.
Consider the problem of testing H0: p ≤ p0 versus H1: p > p0, where p could, for example, represent the response rate to a new drug. The group sequential triangular test (TT) is an efficient alternative to a single‐stage test, as it can provide a substantial reduction in the expected number of test subjects. Whitehead provides formulas for determining stopping boundaries for this test. Existing research shows that test designs based on these formulas (WTTs) may not meet Type I error and/or power specifications, or may be over‐powered at the expense of requiring more test subjects than are necessary. We present a search algorithm, with program available from the author, which provides an alternative approach to triangular test design. The primary advantage of the algorithm is that it generates test designs that consistently meet error specifications. In tests on nearly 1000 example combinations of n (group size), p0, p1, α, and β, the algorithm‐determined triangular test (ATT) design met specified Type I error and power constraints in every case considered, whereas WTT designs met constraints in only 10 cases. Actual Type I error and power values for the ATTs tend to be close to specified values, leading to test designs with favorable average sample number performance. For cases where the WTT designs did meet Type I error and power constraints, the corresponding ATT designs also had the advantage of providing, on average, a modest reduction in average sample numbers calculated at p0, p1, and (p0 + p1)/2. Copyright © 2010 John Wiley & Sons, Ltd.
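The idea of achieving exact error control by direct search, rather than by asymptotic boundary formulas, can be illustrated on a simpler fixed-sample analogue: search for the smallest single-stage design (n, r) whose exact binomial Type I error and power meet the specification. This is not Whitehead's boundary construction or the authors' algorithm, just a sketch of the search principle:

```python
from math import comb

def binom_sf(r, n, p):
    # P(X >= r) for X ~ Binomial(n, p)
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(r, n + 1))

def single_stage_design(p0, p1, alpha, beta, n_max=200):
    # smallest n admitting a cutoff r ("reject H0 if X >= r") with exact
    # Type I error <= alpha under p0 and power >= 1 - beta under p1
    for n in range(1, n_max + 1):
        for r in range(n + 1):
            if binom_sf(r, n, p0) <= alpha and binom_sf(r, n, p1) >= 1 - beta:
                return n, r
    return None
```

Because every candidate is checked against the exact binomial distribution, any design returned meets the specification by construction, which is the property the abstract reports for the ATT designs.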

4.
We used theoretical and simulation‐based approaches to study Type I error rates for one‐stage and two‐stage analytic methods for cluster‐randomized designs. The one‐stage approach uses the observed data as outcomes and accounts for within‐cluster correlation using a general linear mixed model. The two‐stage model uses the cluster specific means as the outcomes in a general linear univariate model. We demonstrate analytically that both one‐stage and two‐stage models achieve exact Type I error rates when cluster sizes are equal. With unbalanced data, an exact size α test does not exist, and Type I error inflation may occur. Via simulation, we compare the Type I error rates for four one‐stage and six two‐stage hypothesis testing approaches for unbalanced data. With unbalanced data, the two‐stage model, weighted by the inverse of the estimated theoretical variance of the cluster means, and with variance constrained to be positive, provided the best Type I error control for studies having at least six clusters per arm. The one‐stage model with Kenward–Roger degrees of freedom and unconstrained variance performed well for studies having at least 14 clusters per arm. The popular analytic method of using a one‐stage model with denominator degrees of freedom appropriate for balanced data performed poorly for small sample sizes and low intracluster correlation. Because small sample sizes and low intracluster correlation are common features of cluster‐randomized trials, the Kenward–Roger method is the preferred one‐stage approach. Copyright © 2015 John Wiley & Sons, Ltd.
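A sketch of the best-performing two-stage approach described above: weight each cluster mean by the inverse of its theoretical variance sb² + sw²/nj under a random-intercept model, with the between-cluster variance constrained to be non-negative (illustrative code; the names and the way the variance components are supplied are my assumptions, not the paper's implementation):

```python
def weighted_cluster_effect(means_c, sizes_c, means_i, sizes_i, sw2, sb2):
    # two-stage analysis: a cluster mean based on n_j subjects has
    # theoretical variance sb2 + sw2/n_j; weight by its inverse,
    # with the between-cluster variance constrained to be non-negative
    sb2 = max(sb2, 0.0)
    def weighted(means, sizes):
        ws = [1.0 / (sb2 + sw2 / n) for n in sizes]
        return sum(w * m for w, m in zip(ws, means)) / sum(ws), sum(ws)
    mc, wc = weighted(means_c, sizes_c)
    mi, wi = weighted(means_i, sizes_i)
    return mi - mc, 1.0 / wc + 1.0 / wi   # estimate and its variance
```

With equal cluster sizes the weights are equal and the estimator reduces to the simple difference of arm-level means of cluster means, which is why balance gives exact Type I error rates.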

5.
The aim of this study was to determine the efficiency of Dugesia dorotocephala in removing methyl parathion (MeP). An initial concentration of 1.25 μg mL−1 of MeP was used to evaluate the removal capacity of the planarian. First-order removal kinetics were obtained, with a disappearance rate constant (kr) of 0.49 days−1 and 69% removal efficiency. This is significantly different (p < 0.05) from the degradation occurring in control systems, leading us to conclude that D. dorotocephala effectively removes MeP from contaminated water.
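First-order kinetics imply C(t) = C0·e^(−kr·t). A small sketch using the abstract's kr = 0.49 days−1 (the helper names are mine):

```python
import math

def concentration(c0, kr, t):
    # first-order disappearance: C(t) = C0 * exp(-kr * t)
    return c0 * math.exp(-kr * t)

def half_life(kr):
    return math.log(2.0) / kr

def time_to_removal(kr, fraction):
    # time at which the given fraction of the compound has disappeared
    return -math.log(1.0 - fraction) / kr
```

With kr = 0.49 days−1 the half-life is about 1.41 days, and 69% removal corresponds to roughly 2.4 days of pure first-order decay.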

6.
Trials in which treatments induce clustering of observations in one of two treatment arms, such as when comparing group therapy with pharmacological treatment or with a waiting‐list group, are examined with respect to the efficiency loss caused by varying cluster sizes. When observations are (approximately) normally distributed, treatment effects can be estimated and tested through linear mixed model analysis. For maximum likelihood estimation, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. In an extensive Monte Carlo simulation for small sample sizes, the asymptotic relative efficiency turns out to be accurate for the treatment effect, but less accurate for the random intercept variance. For the treatment effect, the efficiency loss due to varying cluster sizes rarely exceeds 10 per cent, which can be regained by recruiting 11 per cent more clusters for one arm and 11 per cent more persons for the other. For the intercept variance the loss can be 16 per cent, which requires recruiting 19 per cent more clusters for one arm, with no additional recruitment of subjects for the other arm. Copyright © 2009 John Wiley & Sons, Ltd.
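The efficiency comparison above can be sketched numerically for a random-intercept model: each cluster mean contributes precision 1/(σb² + σw²/nj), and the relative efficiency compares unequal sizes against equal sizes with the same number of clusters and subjects (my own illustrative computation, not the paper's asymptotic derivation):

```python
def information(sizes, sb2, sw2):
    # under a random-intercept model each cluster mean has variance
    # sb2 + sw2/n_j; total precision is the sum of the inverses
    return sum(1.0 / (sb2 + sw2 / n) for n in sizes)

def relative_efficiency(sizes, sb2, sw2):
    # unequal vs equal cluster sizes, holding the number of clusters
    # and the total number of subjects fixed
    nbar = sum(sizes) / len(sizes)
    return information(sizes, sb2, sw2) / information([nbar] * len(sizes), sb2, sw2)
```

Because 1/(sb2 + sw2/n) is concave in n, the relative efficiency is always at most 1, and 1/RE − 1 gives the fraction of extra clusters needed to compensate.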

7.
Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second‐order penalized quasi‐likelihood estimation (PQL). Starting from first‐order marginal quasi‐likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second‐order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed‐form formulas for sample size calculation are based on first‐order MQL, planning a trial also requires a conversion factor to obtain the variance of the second‐order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. Copyright © 2010 John Wiley & Sons, Ltd.
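As a rough planning sketch only: a generic cluster-level approximation on the log-odds scale (this is not the paper's MQL/PQL-based formula; the variance expression and function names are my assumptions, while the 1.25 conversion factor and the 14 per cent cluster inflation are the worst cases reported in the abstract):

```python
import math

def clusters_per_arm_logistic(p0, p1, m, su2, varying_sizes=True, pql_factor=1.25):
    # rough cluster-level approximation, alpha = 0.05 two-sided, 80% power:
    # each arm's cluster-level log-odds has variance about 1/(m*p*(1-p)) + su2,
    # where su2 is the random-intercept variance
    z = 1.959964 + 0.841621
    log_or = math.log(p1 / (1 - p1)) - math.log(p0 / (1 - p0))
    var_sum = 1 / (m * p0 * (1 - p0)) + 1 / (m * p1 * (1 - p1)) + 2 * su2
    k = z * z * var_sum / log_or ** 2
    k *= pql_factor          # worst-case first-order MQL -> second-order PQL
    if varying_sizes:
        k *= 1.14            # 14 per cent more clusters for varying sizes
    return math.ceil(k)
```

The two multipliers illustrate how the abstract's corrections compose: the PQL conversion inflates the MQL-based variance, and the cluster inflation repairs the efficiency loss from unequal cluster sizes.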

8.
Uptake, biotransformation, and elimination rates were determined for pentachlorophenol (PCP), methyl parathion (MP), fluoranthene (FU), and 2,2′,4,4′,5,5′-hexachlorobiphenyl (HCBP) using juvenile Hyalella azteca under water-only exposures. A two-compartment model that included biotransformation described the kinetics for each chemical. The uptake clearance coefficients (ku) were 25.7 ± 2.9, 11.5 ± 1.1, 184.4 ± 9.3, and 251.7 ± 9.0 (ml g−1 h−1) for PCP, MP, FU, and HCBP, respectively. The elimination rate constant of the parent compound (kep) for MP was almost an order of magnitude larger (0.403 ± 0.070 h−1) than for PCP and FU (0.061 ± 0.034 and 0.040 ± 0.008 h−1, respectively). The elimination rate constants for FU and PCP metabolites (kem), 0.040 ± 0.005 h−1 and 0.076 ± 0.012 h−1 respectively, were similar to those of the parent compounds. For MP, the metabolites were excreted much more slowly than the parent compound (0.021 ± 0.001 h−1). For PCP, FU, and MP, whose metabolites were measured, the biological half-life (t1/2p) of the parent compound was shorter than that of the metabolites (t1/2m), because parent loss is driven by both elimination and biotransformation. Thus, H. azteca is capable of metabolizing compounds with varying chemical structures and modes of toxic action, which may complicate interpretation of toxicity and bioaccumulation results. This finding improves our understanding of H. azteca as a test organism, because most biomonitoring activities do not account for biotransformation and some metabolites can contribute significantly to the noted toxicity. Received: 13 June 2002/Accepted: 21 October 2002  相似文献   
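A two-compartment model with biotransformation can be sketched as a pair of differential equations integrated by Euler's method (the structure and parameter names reflect my reading of the abstract; the biotransformation rate km is a parameter the model needs but whose value is not reported there):

```python
import math

def simulate_tk(ku, kep, km, kem, cw, t_end, dt=0.01):
    # parent:     dCp/dt = ku*Cw - (kep + km)*Cp
    # metabolite: dCm/dt = km*Cp - kem*Cm      (forward-Euler integration)
    cp = cm = 0.0
    for _ in range(int(t_end / dt)):
        cp, cm = (cp + (ku * cw - (kep + km) * cp) * dt,
                  cm + (km * cp - kem * cm) * dt)
    return cp, cm

def half_life(k):
    return math.log(2.0) / k
```

For example, the abstract's kep = 0.403 h−1 for the MP parent compound corresponds to a half-life of about 1.72 h, while its slowly excreted metabolites (kem = 0.021 h−1) have a half-life of roughly 33 h.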

9.
The elimination rate constants (k2) of nine polycyclic aromatic hydrocarbons (PAHs) were examined for the freshwater mussel Elliptio complanata. The concentrations of fluorene, phenanthrene, anthracene, fluoranthene, pyrene, benzo[a]anthracene, chrysene, benzo[b]fluoranthene, and indeno[1,2,3-c,d]pyrene revealed a significant inverse relationship with time, and their k2 values ranged from 0.10 to 0.22 day−1. The k2 values of these significantly cleared PAHs were similar to k2 values observed for nonmetabolized organochlorines in mussels previously reported in the literature. The inverse relationship between k2 and Kow provides evidence that the nine PAHs were being passively eliminated from the mussels and that they can be used to calibrate the mussel as a quantitative biomonitor. A general expression relating elimination rate constants and chemical Kow is derived for hydrophobic contaminants in E. complanata. The k2 versus log Kow regression equation for mussels developed herein was similar to those of other studies documenting the elimination of PCBs and PAHs in a number of bivalve species. Received: 13 August 2001/Accepted: 25 April 2002
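The k2-versus-log Kow relationship above is an ordinary least-squares fit of one compound-level quantity on another. A minimal sketch of the fitting step (the code is generic; no real data from the study are hard-coded, and any inputs fed to it here would be hypothetical):

```python
def linear_fit(xs, ys):
    # ordinary least squares for y = a + b*x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b   # intercept a, slope b
```

Fitting log k2 against log Kow, a negative slope b reproduces the inverse relationship the abstract reports: more hydrophobic compounds are eliminated more slowly.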

10.
Diagnostic errors are an important source of medical errors. Problematic information-gathering is a common cause of diagnostic errors among physicians and medical students. The objectives of this study were to (1) determine if medical students’ information-gathering patterns formed clusters of similar strategies, and if so (2) to calculate the percentage of incorrect diagnoses in each cluster. A total of 141 second-year medical students completed a computer case simulation. Each student’s information-gathering pattern included the sequence of history, physical examination, and ancillary testing items chosen from a predefined list. We analyzed the patterns using an artificial neural network and compared percentages of incorrect diagnoses among clusters of information-gathering patterns. We input the patterns into a 35 × 35 self-organizing map. The network was trained for 10,000 epochs. The number of students at each neuron formed a surface that was statistically smoothed into clusters. Each student was assigned to one cluster, the cluster that contributed the largest value to the smoothed function at the student’s location in the grid. Seven clusters were identified. Percentage of incorrect diagnoses differed significantly among clusters (range 0–42%, χ2 = 13.62, P = .034). Distance of each cluster from the worst performing cluster was used to rank clusters. This rank was compared to rank determined by percentage incorrect. We found a high positive correlation (Spearman correlation = .893, P = .007). Clusters closest to the worst performing cluster had the highest percentages of incorrect diagnoses. Patterns of information-gathering were distinct and had different rates of diagnostic error.
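A self-organizing map like the 35 × 35 grid used here can be sketched in a few lines: repeatedly pick a sample, find its best-matching unit (BMU), and pull that unit and its grid neighbours toward the sample with a shrinking Gaussian neighbourhood (a tiny illustrative SOM of my own, not the study's network or its smoothing procedure):

```python
import math
import random

def train_som(data, grid=5, epochs=500, lr0=0.5, sigma0=2.0, seed=0):
    # minimal SOM on a grid x grid sheet: pick a sample, find its
    # best-matching unit, pull the BMU and its grid neighbours toward
    # the sample with a Gaussian neighbourhood that shrinks over time
    rng = random.Random(seed)
    dim = len(data[0])
    w = [[rng.random() for _ in range(dim)] for _ in range(grid * grid)]
    for e in range(epochs):
        lr = lr0 * (1.0 - e / epochs)
        sigma = max(sigma0 * (1.0 - e / epochs), 0.5)
        x = data[rng.randrange(len(data))]
        br, bc = divmod(bmu(w, x), grid)
        for i, wi in enumerate(w):
            r, c = divmod(i, grid)
            h = lr * math.exp(-((r - br) ** 2 + (c - bc) ** 2) / (2.0 * sigma * sigma))
            w[i] = [u + h * (xj - u) for u, xj in zip(wi, x)]
    return w

def bmu(w, x):
    # index of the unit whose weight vector is closest to x
    return min(range(len(w)), key=lambda i: sum((u - xj) ** 2 for u, xj in zip(w[i], x)))
```

After training, nearby samples map to nearby grid units, which is what makes the map's neuron-count surface meaningful to smooth into clusters as the study describes.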

