Similar articles
20 similar articles found (search time: 171 ms)
1.
Objective To investigate how Fisher's combined p-value method affects Type I error and power when the sample size is re-estimated in a two-stage adaptive design. Methods Monte Carlo simulation was used to model the two-stage adaptive design process under different sample sizes; the final data were analysed both with the combined p-value method and with the t-test, and the two approaches were compared with respect to Type I error and power. Results When the first-stage sample size was small, the t-test maintained power but did not control the Type I error well, whereas the combined p-value method controlled the Type I error well at a substantial cost in power. Conclusion When the sample size is re-estimated from the first-stage variance and between-group mean difference, the combined p-value method should be used if the first-stage sample size lies between one third and one half of the planned sample size; beyond one half of the planned sample size, the t-test should be used.
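As a sketch of the combined p-value approach described above: under H0 the two stage-wise p-values are independent and uniform, so Fisher's statistic -2(ln p1 + ln p2) follows a chi-square distribution with 4 degrees of freedom. The stage sizes and replication count below are illustrative choices, not those of the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)


def fisher_combined_p(p1, p2):
    """Fisher's method: -2 * sum(log p) ~ chi-square with 2k df under H0."""
    t = -2.0 * (np.log(p1) + np.log(p2))
    return stats.chi2.sf(t, df=4)


def two_stage_trial(n1, n2, delta=0.0, sd=1.0):
    """Simulate a two-stage two-arm trial and combine the stage-wise p-values."""
    ps = []
    for n in (n1, n2):
        a = rng.normal(0.0, sd, n)
        b = rng.normal(delta, sd, n)
        ps.append(stats.ttest_ind(a, b).pvalue)
    return fisher_combined_p(*ps)


# Empirical Type I error under H0 (delta = 0) stays near the nominal 0.05
reps = 2000
type1 = np.mean([two_stage_trial(20, 20) < 0.05 for _ in range(reps)])
print(round(type1, 3))
```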

2.
Objective To evaluate five mediation-analysis methods used in cancer prognosis studies (the VanderWeele, Baron-Kenny, Imai, Sobel and InverseWeight methods) and provide a basis for choosing among them in applied analyses. Methods Simulated data were generated under a range of parameter settings, and the five methods were compared with respect to Type I error, power and computing time. Results Except for the InverseWeight method, whose Type I error was inflated when the correlation coefficient was large, all methods held the Type I error near 0.05 across parameter settings. The power of the five methods followed the same trend: it increased with sample size, mediation proportion and total effect, and decreased with the censoring rate. With a small sample (N = 100) and a mediation proportion of at most 30%, the InverseWeight method had lower power than the other four. The InverseWeight, Baron-Kenny and Imai methods were far less computationally efficient than the VanderWeele and Sobel methods. Conclusion Weighing Type I error control, power and computational efficiency together, the VanderWeele method is recommended for mediation analysis in prognosis studies.
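Of the methods compared above, the Sobel test is the most compact to write down: the mediated effect a·b is divided by its delta-method standard error, z = a·b / sqrt(b²·sa² + a²·sb²). A minimal NumPy sketch with illustrative simulated data follows; the simulation settings are not those of the paper, which also involves censored survival outcomes.

```python
import numpy as np

rng = np.random.default_rng(1)


def sobel_z(x, m, y):
    """Sobel statistic: a is the X->M slope, b is the M->Y slope adjusting for X."""
    n = len(x)
    # Regress M on X to get a and its standard error sa
    X1 = np.column_stack([np.ones(n), x])
    coef1, res1 = np.linalg.lstsq(X1, m, rcond=None)[:2]
    a = coef1[1]
    sa = np.sqrt(res1[0] / (n - 2) * np.linalg.inv(X1.T @ X1)[1, 1])
    # Regress Y on X and M to get b and its standard error sb
    X2 = np.column_stack([np.ones(n), x, m])
    coef2 = np.linalg.lstsq(X2, y, rcond=None)[0]
    b = coef2[2]
    resid = y - X2 @ coef2
    sb = np.sqrt(resid @ resid / (n - 3) * np.linalg.inv(X2.T @ X2)[2, 2])
    return a * b / np.sqrt(b**2 * sa**2 + a**2 * sb**2)


# A dataset with a genuine mediated path X -> M -> Y
n = 500
x = rng.normal(size=n)
med = 0.5 * x + rng.normal(size=n)
y = 0.5 * med + rng.normal(size=n)
print(abs(sobel_z(x, med, y)) > 1.96)  # mediation detected at the 5% level
```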

3.
Objective To use Monte Carlo simulation to assess how sample size and the intraclass correlation coefficient affect inference about the intervention effect in cluster randomized trials. Methods The intervention effect was tested with the Wald chi-square test in SAS PROC MIXED and with t-tests under two different degrees-of-freedom choices, and the Type I error and confidence-interval coverage were evaluated. Results When designing a cluster randomized trial, using more than 20 clusters per arm improves the confidence-interval coverage of the chi-square test and reduces the Type I error. The cluster degrees-of-freedom method outperformed the other two, but with more than 25 clusters per arm the different tests in PROC MIXED gave similar results. Conclusion The number of clusters is the most important factor for inference about the intervention effect, and the t-tests in PROC MIXED require an appropriate choice of degrees of freedom.
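The core issue above can be illustrated without PROC MIXED: under H0, analysing individual observations while ignoring clustering inflates the Type I error by the design effect 1 + (m-1)·ICC, whereas a t-test on cluster means (the simplest valid cluster-level analysis, used here in place of the mixed model) stays near nominal. The cluster counts and ICC below are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)


def one_trial(k=10, m=20, icc=0.1):
    """k clusters per arm, m subjects per cluster, under H0 (no intervention effect)."""
    sb, sw = np.sqrt(icc), np.sqrt(1 - icc)   # between- and within-cluster SDs
    clusters = [rng.normal(rng.normal(0, sb), sw, m) for _ in range(2 * k)]
    arm0, arm1 = clusters[:k], clusters[k:]
    # Naive analysis: pool all individual observations, ignore clustering
    p_naive = stats.ttest_ind(np.concatenate(arm0), np.concatenate(arm1)).pvalue
    # Cluster-level analysis: t-test on the k cluster means per arm
    p_cluster = stats.ttest_ind([c.mean() for c in arm0],
                                [c.mean() for c in arm1]).pvalue
    return p_naive < 0.05, p_cluster < 0.05


res = np.array([one_trial() for _ in range(2000)])
print(res.mean(axis=0))  # naive rejection rate inflated, cluster-level near 0.05
```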

4.
Objective To explore bootstrap resampling for assessing non-inferiority in three-arm clinical trials that include a placebo group. Methods Monte Carlo simulation was used to generate random samples from normal, log-normal and Gamma distributions, and the Welch-corrected t-test and the bootstrap method were validated and compared via simulations of α (Type I error) and power. Results When the data were normally distributed and the sample size was large, both the Welch-corrected t-test and the bootstrap showed good statistical performance. When the data were skewed, however, the Type I error rate of the Welch-corrected t-test drifted from the pre-specified α level, whereas the bootstrap held it essentially at the nominal level for large samples. The power simulations gave essentially identical results for the two methods. Conclusion When the data in a three-arm trial with a placebo group are not normally distributed, the bootstrap can serve as an effective method for assessing non-inferiority.
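A minimal sketch of a percentile-bootstrap non-inferiority decision of the kind compared above, for a skewed (log-normal) outcome. The margin, sample sizes and "higher is better" convention are illustrative assumptions, not taken from the paper, which additionally handles the three-arm structure with a placebo reference.

```python
import numpy as np

rng = np.random.default_rng(2)


def bootstrap_noninferiority(trt, ref, margin, n_boot=2000, alpha=0.05):
    """Percentile-bootstrap lower bound for mean(trt) - mean(ref).
    Non-inferiority (higher = better) is claimed when the lower bound of the
    one-sided (1 - alpha) confidence interval exceeds -margin."""
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        t = rng.choice(trt, size=len(trt), replace=True)
        r = rng.choice(ref, size=len(ref), replace=True)
        diffs[i] = t.mean() - r.mean()
    return np.quantile(diffs, alpha) > -margin


# Skewed (log-normal) data where the test treatment is truly equivalent
trt = rng.lognormal(mean=1.0, sigma=0.5, size=200)
ref = rng.lognormal(mean=1.0, sigma=0.5, size=200)
ok = bootstrap_noninferiority(trt, ref, margin=0.8)
print(ok)
```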

5.
Objective To compare the robustness and power of the rank-sum test, the t'-test with adjusted degrees of freedom, the mixed-effects model (mixed model) and variance-weighted least squares (VWLS) for comparing the means of two or more independent samples when variances are unequal. Methods Simulations were designed with equal or unequal population means under various combinations of standard deviation and sample size, and the Type I error and power of the methods were compared for two and three groups. Results (1) With equal sample sizes, the t-test was robust for comparing two means under unequal variances, but analysis of variance was not robust for three groups even with equal sample sizes. (2) For both two and three groups, the rank-sum test was robust to unequal variances under certain conditions. (3) For two groups with unequal variances, the t'-test and the mixed model had more robust Type I error and were more stable than VWLS, and the power of the three methods was similar. (4) For three groups with unequal variances, the mixed model had more robust Type I error than VWLS when samples were small, but this advantage disappeared as sample size increased, while the power of VWLS was clearly higher than that of the mixed model. Conclusion For two groups with unequal variances, the t'-test, mixed model or VWLS can be used, with the more robust t'-test or mixed model preferred. For three groups with unequal variances, the mixed model or VWLS can be used: the mixed model is preferred with small samples, and VWLS becomes preferable as the sample size increases.
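Point (1) and point (3) above can be illustrated directly: under unequal variances with unequal group sizes, the pooled-variance t-test inflates the Type I error, while the t'-test with Satterthwaite-adjusted degrees of freedom (Welch's test, as implemented by `equal_var=False` in SciPy) stays near the nominal 0.05. The parameter values below are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)


def type1_rate(equal_var, n1=10, n2=40, sd1=3.0, sd2=1.0, reps=4000):
    """Empirical Type I error under H0 with unequal variances and group sizes."""
    hits = 0
    for _ in range(reps):
        a = rng.normal(0.0, sd1, n1)
        b = rng.normal(0.0, sd2, n2)
        hits += stats.ttest_ind(a, b, equal_var=equal_var).pvalue < 0.05
    return hits / reps


pooled = type1_rate(equal_var=True)    # classic pooled-variance t-test
welch = type1_rate(equal_var=False)    # t' with Satterthwaite-adjusted df
print(round(pooled, 3), round(welch, 3))  # pooled inflated, Welch near 0.05
```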

6.
Objective To compare the statistical performance of several tests on simulated quantitative N-of-1 data with correlation between time points and no carryover effect. Methods The simulation parameters were a sample size of 10 (model 1), three treatment cycles (models 2-4) and a between-time-point correlation of 0.8 (models 5-7), with no carryover effect. Multivariate normal data were generated under fixed effect differences, and paired t-tests, mixed-effects models and mixed-effects models on the differences were fitted. The models were evaluated by the Type I error, power, mean error (ME), mean absolute error (AE) and root mean square error (RMSE) of the estimated effect difference. Results The mean estimate from every model was very close to the true effect difference, and the ME, AE and RMSE of all models were small. Except for model 7, the Type I error probability of every model was approximately 0.05. The power of all models increased with the effect difference. When the effect difference between groups was small (<1.0), model 5 had the highest power and models 2-4 the lowest; when the effect difference was large (≥1.0), the power of the models differed by less than 0.010. Conclusion Mixed-effects models are better suited than the paired t-test to correlated N-of-1 data. The mixed-effects model outperformed the mixed-effects model on the differences, and the best-performing model was the mixed-effects model with a compound-symmetry (CS) covariance structure.

7.
Objective To evaluate common nonparametric tests of whether two non-normal samples come from the same distribution, as a reference for choosing among them. Methods Programs written in Matlab 7.5 simulated data under different distribution types, with equal or unequal sample sizes, equal or unequal variances, variances and sample sizes varying in the same or opposite directions, and equal or unequal means. The Wilcoxon test, the Wald-Wolfowitz runs test (WWR), the Kolmogorov-Smirnov test (K-S) and Hollander's test of extreme reactions were applied in each setting. Results Estimated Type I and Type II errors are reported for the four tests. Conclusion When the two population means are equal, the Hollander test is recommended; when the two population variances are equal, the Wilcoxon or K-S test is recommended; and when both the variances and the means differ, but not greatly, any of the Wilcoxon, K-S or Hollander tests may be used.
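The contrast between these tests can be illustrated with a pure scale difference: the Wilcoxon rank-sum test targets location and typically misses it, while the K-S test, which responds to any difference between the distribution functions, detects it easily. The normal populations below are an illustrative choice (the paper simulates non-normal data).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Same mean, three times the spread: a scale difference, not a location shift
a = rng.normal(0.0, 1.0, 300)
b = rng.normal(0.0, 3.0, 300)
p_wilcoxon = stats.ranksums(a, b).pvalue
p_ks = stats.ks_2samp(a, b).pvalue
print(round(p_wilcoxon, 3), round(p_ks, 6))
```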

8.
An adaptive sample size adjustment method for clinical trials
Objective To introduce an adaptive sample size adjustment method for clinical trials and to examine the Type I error rate and power of the statistical analysis after the adjustment. Methods Monte Carlo simulation was used to study the influence of the first-stage sample size n1 on the final sample size Nf and to estimate the bias of the final variance; the Type I error rate and power of the post-adjustment analysis were also studied by simulation. Results (1) The final sample size Nf obtained with this adjustment method was very close to its true value N0, especially when the adjustment was made at π = 0.4. (2) The corrected t-test applied after sample size adjustment both controlled the Type I error rate α effectively and fully preserved the power (1-β) of the trial. Conclusion Although these results were obtained under an ordinary one-sided two-sample t-test, the adjustment method can also be applied in superiority and non-inferiority clinical trials.
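The re-estimation step can be sketched as an internal pilot: run a fraction π of the planned trial, recompute the per-group sample size from the observed pooled SD, then complete the trial. This sketch ends with a plain one-sided t-test rather than the paper's corrected t-test, whose exact form the abstract does not give; all numeric settings are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)


def required_n(s, delta=1.0, alpha=0.05, beta=0.2):
    """Normal-approximation per-group size: n = 2*(z_a + z_b)^2 * s^2 / delta^2."""
    za, zb = stats.norm.isf(alpha), stats.norm.isf(beta)   # one-sided test
    return int(np.ceil(2 * (za + zb) ** 2 * s ** 2 / delta ** 2))


sigma, delta = 2.0, 1.0
n0 = required_n(sigma, delta)           # planned size if sigma were known
n1 = int(0.4 * n0)                      # first-stage size (pi = 0.4)
a1, b1 = rng.normal(0, sigma, n1), rng.normal(delta, sigma, n1)
s1 = np.sqrt((a1.var(ddof=1) + b1.var(ddof=1)) / 2)   # first-stage pooled SD
nf = max(required_n(s1, delta), n1)     # re-estimated final per-group size
# Second stage completes the trial; final one-sided t-test on all data
a = np.concatenate([a1, rng.normal(0, sigma, nf - n1)])
b = np.concatenate([b1, rng.normal(delta, sigma, nf - n1)])
p = stats.ttest_ind(b, a, alternative='greater').pvalue
print(n0, nf, p < 0.05)
```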

9.
A power comparison of the t-test and the rank-sum test in completely randomized two-group designs
Objective To compare the power of the t-test and the rank-sum test for completely randomized two-group data. Methods Simulation programs written in SAS were used to study and compare the power of the two tests under different population conditions. Results When the population distribution was symmetric, the t-test was more powerful for small samples and the two tests performed similarly for large samples; when the population distribution was asymmetric, the rank-sum test was more powerful than the t-test. Conclusion When the sample size is sufficiently large, the rank-sum test can be used in place of the t-test.
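The asymmetric-distribution case can be sketched in a few lines: for a location shift of a strongly skewed (exponential) population, the rank-sum test is markedly more powerful than the t-test. The shift, sample size and distribution are illustrative choices, not those of the original SAS program.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)


def power(test, n=30, shift=0.5, reps=2000):
    """Empirical power for a location shift of an exponential distribution."""
    hits = 0
    for _ in range(reps):
        a = rng.exponential(1.0, n)
        b = rng.exponential(1.0, n) + shift
        hits += test(a, b).pvalue < 0.05
    return hits / reps


p_t = power(stats.ttest_ind)   # two-sample t-test
p_w = power(stats.ranksums)    # Wilcoxon rank-sum test
print(p_t, p_w)  # rank-sum clearly more powerful for this skewed population
```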

10.
Objective Taking survival outcomes as the setting, to examine mediation survival models with two mediators (the Aalen additive hazards model, the Cox proportional hazards model and the accelerated failure time (AFT) model) and offer practical guidance for choosing a multiple-mediator analysis method in prognosis studies. Methods Statistical simulations with varying correlation coefficients, effect ratios and censoring rates were used to evaluate the statistical properties of the three models in terms of Type I error and power. Results The larger the correlation between mediator and exposure, the easier it was to detect the mediation effect. The censoring rate and effect ratio strongly affected the Aalen model but had little effect on the other two models; as the censoring rate decreased, the Type I error of the Aalen model actually inflated, so the Aalen model is unsuitable for multiple-mediator analysis. As the sample size grew, the differences in power among the three models shrank and stabilized. Across parameter settings, the AFT model had the highest power, followed by the Cox model and then the Aalen model. Conclusion The AFT model outperforms the other two methods and is recommended for multiple-mediator mediation analysis of survival outcomes; mediation analysis also requires an adequate sample size.

11.
Medical researchers can employ repeated measures designs to study the effects of a treatment over time or when each subject receives all treatments. Univariate F tests and multiple comparison procedures for comparing means constitute the methods to test for the presence of treatment effects. For validity, however, these tests must satisfy the sphericity assumption. To circumvent the biasing effects of non-sphericity, this paper shows the applicability of the Greenhouse and Geisser three stage approach for univariate omnibus hypothesis testing in repeated measures designs containing any number of repeated factors. In addition, we present a multiple comparison procedure which provides a valid or robust test and thus controls the overall probability of a Type I error.

12.
Multivariable fractional polynomial (MFP) models are commonly used in medical research. The datasets in which MFP models are applied often contain covariates with missing values. To handle the missing values, we describe methods for combining multiple imputation with MFP modelling, considering in turn three issues: first, how to impute so that the imputation model does not favour certain fractional polynomial (FP) models over others; second, how to estimate the FP exponents in multiply imputed data; and third, how to choose between models of differing complexity. Two imputation methods are outlined for different settings. For model selection, methods based on Wald‐type statistics and weighted likelihood‐ratio tests are proposed and evaluated in simulation studies. The Wald‐based method is very slightly better at estimating FP exponents. Type I error rates are very similar for both methods, although slightly less well controlled than analysis of complete records; however, there is potential for substantial gains in power over the analysis of complete records. We illustrate the two methods in a dataset from five trauma registries for which a prognostic model has previously been published, contrasting the selected models with that obtained by analysing the complete records only. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

13.
Single-variant-based genome-wide association studies have successfully detected many genetic variants that are associated with a number of complex traits. However, their power is limited by weak marginal signals and by ignoring potential complex interactions among genetic variants. The set-based strategy was proposed as a remedy: multiple genetic variants in a given set (e.g., gene or pathway) are jointly evaluated, so that the systematic effect of the set is considered. Among many such approaches, the kernel-based testing (KBT) framework is one of the most popular and powerful methods in set-based association studies. Given a set of candidate kernels, it has been proposed to choose the one with the smallest p-value. Such a method, however, can yield an inflated Type I error, especially when the number of variants in a set is large. Alternatively, one can obtain p-values by permutation, which can be very time-consuming. In this study, we proposed an efficient testing procedure for quantitative trait association studies that not only controls the Type I error rate but also has power close to that obtained under the optimal kernel in the candidate set. Our method, a maximum kernel-based U-statistic method, is built upon the KBT framework and is based on asymptotic results under a high-dimensional setting. Hence it can efficiently deal with the case where the number of variants in a set is much larger than the sample size. Both simulation and real data analysis demonstrate the advantages of the method compared with its counterparts.

14.
We compared the statistical performance of sibpair-based and variance components approaches to multipoint linkage analysis of a quantitative trait in unselected samples. As a benchmark dataset, we used the simulated family data from Genetic Analysis Workshop 10 [Goldin et al., 1997], and each method was used to screen all 200 replications of the GAW10 genome for evidence of linkage to quantitative trait Q1. The sibpair and variance components methods were each applied to datasets comprising single-sibpairs and complete sibships, and for further comparison we also applied the variance components method to the nuclear family and extended pedigree datasets. For each analysis, the unbiasedness and efficiency of parameter estimation, the power to detect linkage, and the Type I error rate were estimated empirically. Sibpair and variance components methods exhibited comparable performance in terms of the unbiasedness of the estimate of QTL location and the Type I error rate. Within the single-sibpair and sibship sampling units, the variance components approach gave consistently superior power and efficiency of parameter estimation. Within each method, the statistical performance was improved by the use of the larger and more informative sampling units.

15.
With varying, but substantial, proportions of heritability remaining unexplained by summaries of single‐SNP genetic variation, there is a demand for methods that extract maximal information from genetic association studies. One source of variation that is difficult to assess is genetic interactions. A major challenge for naive detection methods is the large number of possible combinations, with a requisite need to correct for multiple testing. Assumptions of large marginal effects, to reduce the search space, may be restrictive and miss higher order interactions with modest marginal effects. In this paper, we propose a new procedure for detecting gene‐by‐gene interactions through heterogeneity in estimated low‐order (e.g., marginal) effect sizes by leveraging population structure, or ancestral differences, among studies in which the same phenotypes were measured. We implement this approach in a meta‐analytic framework, which offers numerous advantages, such as robustness and computational efficiency, and is necessary when data‐sharing limitations restrict joint analysis. We effectively apply a dimension reduction procedure that scales to allow searches for higher order interactions. For comparison to our method, which we term phylogenY‐aware Effect‐size Tests for Interactions (YETI), we adapt an existing method that assumes interacting loci will exhibit strong marginal effects to our meta‐analytic framework. As expected, YETI excels when multiple studies are from highly differentiated populations and maintains its superiority in these conditions even when marginal effects are small. When these conditions are less extreme, the advantage of our method wanes. We assess the Type‐I error and power characteristics of complementary approaches to evaluate their strengths and limitations.

16.
[Objective] To compare the power and Type I error of propensity score matching and logistic regression. [Methods] Monte Carlo simulation was used to compare propensity score matching with logistic regression for analysing binary data. [Results] Propensity score matching and logistic regression showed no difference in Type I error, and full propensity score matching had slightly higher power. [Conclusion] Propensity score matching is of great practical value in observational studies.
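A minimal sketch of propensity score matching on a single confounder: fit a logistic propensity model, match each treated subject to its nearest control on the score, and average the matched outcome differences. The data-generating model and the 1:1 nearest-neighbour matching-with-replacement scheme are illustrative assumptions (the paper evaluates full matching, among others, on binary outcomes).

```python
import numpy as np

rng = np.random.default_rng(5)


def propensity_logit(X, t, n_iter=25):
    """Logistic regression by Newton-Raphson; returns the linear predictor
    (the logit of the estimated propensity score) for each subject."""
    Xd = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(Xd.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xd @ beta))
        H = Xd.T @ (Xd * (p * (1 - p))[:, None])   # Hessian of the log-likelihood
        beta += np.linalg.solve(H, Xd.T @ (t - p))
    return Xd @ beta


# Simulated observational data: confounder x drives both treatment and outcome
n = 1000
x = rng.normal(size=n)
t = (rng.random(n) < 1 / (1 + np.exp(-x))).astype(float)
y = 1.0 * t + 2.0 * x + rng.normal(size=n)        # true treatment effect = 1

ps = propensity_logit(x[:, None], t)
treated = np.flatnonzero(t == 1)
controls = np.flatnonzero(t == 0)
# 1:1 nearest-neighbour matching on the propensity score (with replacement)
matches = controls[np.argmin(np.abs(ps[treated][:, None] - ps[controls]), axis=1)]
att = np.mean(y[treated] - y[matches])
print(round(att, 2))  # close to the true effect of 1 after deconfounding
```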

17.
OBJECTIVE: To determine what health behaviors patients choose to change in response to medical advice when they are given the potential net-present value (reduction in biological age) of modifying a behavior. METHODS: Baseline data for multiple health-risk behaviors that were recommended for change among 660 coronary angioplasty patients at the New York-Presbyterian Hospital-Weill-Cornell Medical Center who were enrolled during 2000--02 in one of two arms of a behavioral intervention trial designed to compare different approaches to communicating health risk (net-present vs. future value) were analyzed using multivariate statistical methods. RESULTS: Although there was no difference between study arms, knowing the biological-age value of behaviors, stage of change, and the total number of behaviors recommended for change was associated with choosing several behaviors. Notably, stage of change was associated in both groups with strength training (intervention OR 2.82, 95% CI 1.85, 4.30; comparison OR 2.84, 95% CI 1.83, 4.43, p<.0001) and reducing weight (intervention OR 2.49, 95% CI 1.32, 4.67, p=.005; comparison OR 1.98, 95% CI 1.80, 3.31, p=.01). CONCLUSION: Patients with coronary disease are more likely to choose strength training and reducing weight regardless of knowing the biological-age reduction of any given behavior.

18.
Objective To use simulations of blinded internal pilot study (IPS) sample size re-estimation to find a statistical approach that effectively controls the Type I error and preserves power in the presence of covariates. Methods The Monte Carlo method was used to simulate a two-stage adaptive design with covariates. The two stages of data were analysed with analysis of covariance (ANCOVA) and with analysis of variance (ANOVA), the final result of the test was determined with the combined p-value method, and the effects of the two analyses on Type I error and power were compared. Results ANOVA inflated the Type I error more than ANCOVA and also had slightly lower power, the inflation of the Type I error being the more pronounced difference. Conclusion When the sample size is re-estimated from the first-stage variance and between-group mean difference and covariates are present, ANCOVA should be used to analyse the first- and second-stage data separately, with the combined p-value method used for the final statistical inference.

19.
Genome‐wide association studies (GWAS) require considerable investment, so researchers often study multiple traits collected on the same set of subjects to maximize return. However, many GWAS have adopted a case‐control design; improperly accounting for case‐control ascertainment can lead to biased estimates of association between markers and secondary traits. We show that under the null hypothesis of no marker‐secondary trait association, naïve analyses that ignore ascertainment or stratify on case‐control status have proper Type I error rates except when both the marker and secondary trait are independently associated with disease risk. Under the alternative hypothesis, these methods are unbiased when the secondary trait is not associated with disease risk. We also show that inverse‐probability‐of‐sampling‐weighted (IPW) regression provides unbiased estimates of marker‐secondary trait association. We use simulation to quantify the Type I error, power and bias of naïve and IPW methods. IPW regression has appropriate Type I error in all situations we consider, but has lower power than naïve analyses. The bias for naïve analyses is small provided the marker is independent of disease risk. Considering the majority of tested markers in a GWAS are not associated with disease risk, naïve analyses provide valid tests of and nearly unbiased estimates of marker‐secondary trait association. Care must be taken when there is evidence that both the secondary trait and tested marker are associated with the primary disease, a situation we illustrate using an analysis of the relationship between a marker in FGFR2 and mammographic density in a breast cancer case‐control sample. Genet. Epidemiol. 33:717–728, 2009. © 2009 Wiley‐Liss, Inc.
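The IPW idea above can be sketched as follows: each sampled subject is weighted by the inverse of its probability of being sampled into the case-control study, which restores the population-level marker-secondary trait regression even though both are associated with disease. The disease model, effect sizes and allele frequency below are illustrative assumptions, not those of the FGFR2 example.

```python
import numpy as np

rng = np.random.default_rng(6)

# Population: genotype g affects secondary trait s; both g and s raise disease risk
N = 200_000
g = rng.binomial(2, 0.3, N)                      # marker genotype (0/1/2)
s = 0.3 * g + rng.normal(size=N)                 # secondary trait, true slope 0.3
logit = -3.0 + 0.5 * s + 0.5 * g
d = rng.random(N) < 1 / (1 + np.exp(-logit))     # disease status

# Case-control sample: all cases plus an equal number of random controls
cases = np.flatnonzero(d)
controls = rng.choice(np.flatnonzero(~d), size=len(cases), replace=False)
idx = np.concatenate([cases, controls])

# Sampling probabilities: 1 for cases, n_sampled/n_population for controls
pi = np.where(d[idx], 1.0, len(cases) / (~d).sum())
W = 1.0 / pi                                     # inverse-probability weights
X = np.column_stack([np.ones(len(idx)), g[idx]])
beta = np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (W * s[idx]))
print(round(beta[1], 2))  # near the population slope of 0.3
```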

20.
The statistical properties of sib-pair and variance-components linkage methods were compared using the nuclear family data from Problem 2. Overall, the power to detect linkage was not high for either method. The variance-components method had better power for detection of linkage, particularly when covariates were included in the model. Type I error rates were similar to nominal error rates for both methods. © 1997 Wiley-Liss, Inc.

