Similar References
 (20 similar references found)
1.
Some clinical trial designs compare the effects of different treatments between patients; the statistical methods commonly used for them include the group (two-sample) t-test and analysis of variance for completely randomized designs. Other clinical trial designs compare different treatments within patients, for which the commonly used methods include the paired t-test and analysis of variance for randomized block designs.

2.
Several issues to note in the analysis of variance for randomized block designs
于晓洁  王彤 《现代预防医学》2012,39(8):1881-1884
Objective: To discuss the concept of the blocking factor and several prerequisites for the design and analysis of randomized block experiments. Methods: Example 1 was analyzed in three different ways — as a paired design, as a randomized block design, and as a one-way (completely randomized) design — and the results were compared. Example 2 was analyzed both ignoring and allowing for a block-by-treatment interaction, and the results were compared. Results: For Example 1, the first two analyses gave identical results, but their P value differed from that of the third method. For Example 2, applying the randomized block ANOVA directly ignores a possible interaction and biases the results. Conclusion: The randomized block design is a single-factor design, and the paired design is a special case of it. The block ANOVA should not be replaced by a one-way ANOVA in disregard of the study design. Residual scatter plots can be used to check the assumptions of the randomized block ANOVA, and the presence of a block-by-treatment interaction can be judged with Tukey's one-degree-of-freedom test.
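As a concrete illustration of the analysis this abstract describes — not the authors' own code — the following Python sketch fits a randomized block ANOVA with statsmodels and applies Tukey's one-degree-of-freedom test for a block-by-treatment interaction to simulated data; the data and variable names (block, treatment, y) are hypothetical.

```python
# Minimal sketch (simulated data, hypothetical names): randomized block ANOVA
# plus Tukey's one-degree-of-freedom test for non-additivity.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
blocks, treatments = 8, 3
df = pd.DataFrame(
    [(b, t) for b in range(blocks) for t in range(treatments)],
    columns=["block", "treatment"],
)
df["y"] = 0.5 * df["block"] + 1.0 * df["treatment"] + rng.normal(0, 1, len(df))

# Additive (randomized block) model: one observation per block-treatment cell.
additive = smf.ols("y ~ C(block) + C(treatment)", data=df).fit()
print(sm.stats.anova_lm(additive, typ=2))

# Tukey's 1-df test: add the product of row and column effects as a single
# extra regressor; a significant coefficient suggests a multiplicative
# block-by-treatment interaction.
grand = df["y"].mean()
df["row_eff"] = df.groupby("block")["y"].transform("mean") - grand
df["col_eff"] = df.groupby("treatment")["y"].transform("mean") - grand
df["tukey"] = df["row_eff"] * df["col_eff"]
nonadd = smf.ols("y ~ C(block) + C(treatment) + tukey", data=df).fit()
print(nonadd.t_test("tukey = 0"))  # 1-df test of non-additivity
```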

3.
A comparison of SAS and SPSS implementations of analysis of variance for unequally spaced repeated measures designs
Objective: To discuss the assumptions and interpretation of results for analysis of variance of repeated measures taken at unequally spaced time points. Methods: The analysis was implemented with SAS and SPSS. Results: The assumptions of the analysis, the SAS and SPSS programs, and the interpretation of the main output are presented. Conclusion: The validity of ANOVA for unequally spaced repeated measures can be guaranteed only when its assumptions are satisfied and the results are interpreted correctly and reasonably.

4.
Estimation of missing values in randomized block designs
In practice, experimental data are sometimes lost for unexpected reasons: a patient misses a scheduled examination during clinical observation, an animal falls ill or dies during an experiment, or an instrument suddenly breaks down during measurement, leaving one or two observations missing. The usual randomized block ANOVA can then no longer be carried out, yet it would be a pity to discard the incomplete block altogether. In such cases the missing data can be filled in by a regression method or by missing-value estimation. Because the regression method is computationally tedious and not easy to popularize, ...
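The closed-form estimate alluded to here is usually attributed to Yates: as a hedged sketch (not taken from the abstract itself), a single missing observation in a randomized block design can be estimated as x = (bB' + tT' - G') / ((b - 1)(t - 1)), where b and t are the numbers of blocks and treatments and B', T', G' are the block, treatment, and grand totals of the observed values. A small Python helper illustrating this formula on hypothetical data:

```python
import numpy as np

def yates_missing_value(y: np.ndarray) -> float:
    """Estimate a single missing value (np.nan) in a randomized block design
    laid out as a (blocks x treatments) array, using Yates' formula
    x_hat = (b*B' + t*T' - G') / ((b - 1) * (t - 1)),
    where B', T', G' are the block, treatment, and grand totals of the
    observed values.  Assumes exactly one cell is missing."""
    b, t = y.shape
    rows, cols = np.where(np.isnan(y))
    if len(rows) != 1:
        raise ValueError("exactly one missing cell is assumed")
    i, j = rows[0], cols[0]
    B = np.nansum(y[i, :])   # total of the block with the missing value
    T = np.nansum(y[:, j])   # total of the treatment with the missing value
    G = np.nansum(y)         # grand total of all observed values
    return (b * B + t * T - G) / ((b - 1) * (t - 1))

# Hypothetical 4-block, 3-treatment example with one missing cell.
data = np.array([[10.0, 12.0, 14.0],
                 [11.0, np.nan, 15.0],
                 [ 9.0, 11.0, 13.0],
                 [12.0, 14.0, 16.0]])
print(round(yates_missing_value(data), 2))
```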

5.
A discussion of statistical methods for crossover trial designs. 李菊英, 苏炳华 (Shanghai Second Medical University, 200025). Some clinical trial designs compare the effects of different treatments between patients; the statistical methods commonly used for them include the group t-test and analysis of variance for completely randomized designs. Other clinical trial designs compare different treatments within patients, for which the commonly used methods include the paired t-...

6.
Objective: To compare the merits of two methods for estimating the flash point of mixed organic solvents — an "approximate estimation method" and a "saturated vapor pressure derivation method" — and to identify the characteristics and applicability of each. Methods: Five mixed liquids were prepared with different mole fractions of n-butanol and methanol; the flash point of each mixture was estimated by both methods and compared with measured values. Results: The differences between the approximate estimation method and the measured values ranged from 1°C to 4°C; the differences for the saturated vapor pressure derivation method ranged from 0°C to 1°C. Conclusion: Compared with the approximate estimation method, the saturated vapor pressure derivation method involves more parameters and more complex calculation, but it is more accurate.

7.
Objective: To enable patients to receive appropriate treatment by improving the behavior of both prescribers and patients. Methods: Three groups of care-seeking patients were randomly selected and screened against three criteria: clarity of diagnosis, suitability of medication, and rationality of drug combinations. Results: Many patients did not receive appropriate treatment. Group 1: unclear diagnosis 11.3%, unsuitable medication 17.5%, irrational combinations 13.2%. Group 2: unclear diagnosis 10%, unsuitable medication 20.8%, irrational combinations 12%. Group 3: unclear diagnosis 11.2%, unsuitable medication 24.2%, irrational combinations 11%. Conclusion: Only by improving inappropriate prescribing and care-seeking behavior can patients receive proper treatment and recover more quickly.

8.
《现代医院》2019,(9):1318-1320
Objective: To analyze the effect of a multidisciplinary coordination mechanism on the inter-campus transfer of critically ill patients. Methods: 102 critically ill patients admitted to Ganzhou People's Hospital between January 2017 and July 2018 who required inter-campus transfer were divided into an observation group and a control group (51 per group) by a random-number table. The control group was transferred in the conventional way; the observation group was transferred under the multidisciplinary coordination mechanism. The quality of safe transfer, staff satisfaction, and the incidence of transfer-related adverse events were compared. Results: Transfer preparation time was significantly shorter in the observation group than in the control group (P < 0.05); the rates of missing record entries, missing drug checks, and missing equipment checks were significantly lower (P < 0.05); scores for doctor-patient communication, en-route care, and handling after returning to the hospital were significantly higher (P < 0.05); and the rates of missed pre-transfer assessment, accidental extubation, tube dislodgement or blockage, equipment failure, and NEWS ≥ 4 were significantly lower (P < 0.05). Conclusion: A multidisciplinary coordination mechanism can effectively improve the quality of inter-campus transfer of critically ill patients and reduce the incidence of transfer-related adverse events.

9.
Objective: To explore the application of second-order polynomial curve fitting in the determination of zinc. Methods: Zinc at several concentrations was measured repeatedly by flame atomic absorption spectrophotometry, and the results were analyzed with both linear regression and second-order polynomial curve fitting. Results: At the 0.10, 0.20, 0.40, 0.60, and 1.00 μg/mL concentration points, the relative errors obtained with linear regression were all larger than those obtained with second-order polynomial fitting, and the differences were statistically significant (P < 0.01); only at the 0.80 μg/mL point was the relative error from linear regression smaller than that from second-order polynomial fitting, and that difference was not statistically significant (P > 0.05). Conclusion: In the determination of zinc, second-order polynomial curve fitting is superior to linear regression.
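By way of illustration only — the concentrations below mirror those in the abstract, but the absorbance readings are invented — a Python sketch comparing a straight-line calibration with a second-order polynomial fit:

```python
# Hypothetical calibration data: concentrations follow the abstract,
# absorbances are made up for illustration.
import numpy as np

conc = np.array([0.10, 0.20, 0.40, 0.60, 0.80, 1.00])   # ug/mL
absorbance = np.array([0.052, 0.101, 0.192, 0.276, 0.352, 0.421])

lin = np.polyfit(absorbance, conc, deg=1)    # straight-line calibration
quad = np.polyfit(absorbance, conc, deg=2)   # second-order polynomial

pred_lin = np.polyval(lin, absorbance)
pred_quad = np.polyval(quad, absorbance)

# Relative error of the back-calculated concentration at each point.
rel_err_lin = np.abs(pred_lin - conc) / conc * 100
rel_err_quad = np.abs(pred_quad - conc) / conc * 100
for c, e1, e2 in zip(conc, rel_err_lin, rel_err_quad):
    print(f"{c:.2f} ug/mL  linear {e1:5.2f}%  quadratic {e2:5.2f}%")
```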

10.
韩竞  王彤  郭军 《现代预防医学》2011,38(22):4757-4761
[Objective] To compare the efficacy of fractionated versus single-session gamma-knife treatment of pituitary adenoma using traditional multivariable analysis and three propensity-score-based methods, and to confirm that propensity scores can balance covariates between the two groups and thereby reduce bias. [Methods] The effect of each factor was analyzed one at a time with t-tests and simple linear regression, and multivariable regression was used to balance confounders when assessing the treatment effect; after balancing the data with propensity scores, the treatment effect was assessed by 1:1 matching, by stratification, and by regression adjustment after stratification. [Results] (1) After other factors were balanced in the multivariable analysis, the treatment method remained a significant determinant of postoperative outcome. (2) After matching and stratification, the change in pituitary tumor volume in the fractionated gamma-knife group was smaller than in the control group, and the differences were statistically significant; with regression adjustment after stratification, the risk of volume increase with fractionated gamma-knife treatment was 22.1% of that with single-session treatment, so fractionated gamma-knife treatment was superior. (3) The two approaches gave consistent results: after other factors were balanced, postoperative outcomes differed between treatment methods, with fractionated gamma-knife treatment superior to single-session treatment. [Conclusion] Propensity score methods can effectively balance the distribution and composition of covariates between comparison groups and, on that balanced basis, evaluate the association between an intervention or risk factor and the outcome.
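A minimal, hedged sketch of the propensity-score workflow this abstract describes — estimate the score with logistic regression, then perform greedy 1:1 nearest-neighbour matching and compare outcomes — on simulated data; all variable names are hypothetical and this is not the authors' analysis code.

```python
# Simulated-data sketch of propensity-score estimation and greedy 1:1
# nearest-neighbour matching; variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
age = rng.normal(45, 10, n)
volume = rng.normal(5, 2, n)                  # baseline tumour volume
lin = -8 + 0.12 * age + 0.3 * volume
treated = rng.binomial(1, 1 / (1 + np.exp(-lin)))          # fractionated = 1
outcome = 2 - 1.0 * treated + 0.05 * age + 0.2 * volume + rng.normal(0, 1, n)
df = pd.DataFrame({"age": age, "volume": volume,
                   "treated": treated, "outcome": outcome})

# 1) Propensity score from a logistic regression on the baseline covariates.
X = sm.add_constant(df[["age", "volume"]])
df["ps"] = sm.Logit(df["treated"], X).fit(disp=False).predict(X)

# 2) Greedy 1:1 nearest-neighbour matching on the propensity score.
treated_idx = df.index[df["treated"] == 1].tolist()
control_idx = df.index[df["treated"] == 0].tolist()
pairs = []
for i in treated_idx:
    if not control_idx:
        break
    j = min(control_idx, key=lambda k: abs(df.at[k, "ps"] - df.at[i, "ps"]))
    pairs.append((i, j))
    control_idx.remove(j)

# 3) Compare mean outcomes within the matched sample.
t_ids = [i for i, _ in pairs]
c_ids = [j for _, j in pairs]
diff = df.loc[t_ids, "outcome"].mean() - df.loc[c_ids, "outcome"].mean()
print(f"matched pairs: {len(pairs)}, mean outcome difference: {diff:.3f}")
```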

11.
Covariate adjustment using linear models for continuous outcomes in randomized trials has been shown to increase efficiency and power over the unadjusted method in estimating the marginal effect of treatment. However, for binary outcomes, investigators generally rely on the unadjusted estimate as the literature indicates that covariate-adjusted estimates based on the logistic regression models are less efficient. The crucial step that has been missing when adjusting for covariates is that one must integrate/average the adjusted estimate over those covariates in order to obtain the marginal effect. We apply the method of targeted maximum likelihood estimation (tMLE) to obtain estimators for the marginal effect using covariate adjustment for binary outcomes. We show that the covariate adjustment in randomized trials using the logistic regression models can be mapped, by averaging over the covariate(s), to obtain a fully robust and efficient estimator of the marginal effect, which equals a targeted maximum likelihood estimator. This tMLE is obtained by simply adding a clever covariate to a fixed initial regression. We present simulation studies that demonstrate that this tMLE increases efficiency and power over the unadjusted method, particularly for smaller sample sizes, even when the regression model is mis-specified.  相似文献   
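The key step this abstract highlights — fitting a covariate-adjusted logistic regression and then averaging the covariate-specific predictions to recover the marginal treatment effect — can be sketched as below. This shows only the simple standardization ("averaging over the covariates") step on simulated data, not the full targeted maximum likelihood procedure, and all names are hypothetical.

```python
# Sketch: covariate adjustment with a logistic model followed by averaging
# over the covariate to obtain a marginal risk difference (standardization /
# g-computation); simulated data, hypothetical names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1000
w = rng.normal(size=n)                       # baseline covariate
a = rng.binomial(1, 0.5, n)                  # randomized treatment
p = 1 / (1 + np.exp(-(-0.5 + 0.8 * a + 1.0 * w)))
y = rng.binomial(1, p)
df = pd.DataFrame({"y": y, "a": a, "w": w})

# Covariate-adjusted logistic regression.
fit = smf.logit("y ~ a + w", data=df).fit(disp=False)

# Average the adjusted predictions over the observed covariate distribution
# under a = 1 and a = 0 to get the marginal event probabilities.
p1 = fit.predict(df.assign(a=1)).mean()
p0 = fit.predict(df.assign(a=0)).mean()
print(f"marginal risk difference: {p1 - p0:.3f}")

# Unadjusted estimate, for comparison.
print(f"unadjusted risk difference: "
      f"{df.loc[df.a == 1, 'y'].mean() - df.loc[df.a == 0, 'y'].mean():.3f}")
```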

12.
Identification of subgroups with differential treatment effects in randomized trials is attracting much attention. Many methods use regression tree algorithms. This article addresses 2 important questions arising from the subgroups: how to ensure that treatment effects in subgroups are not confounded with effects of prognostic variables and how to determine the statistical significance of treatment effects in the subgroups. We address the first question by selectively including linear prognostic effects in the subgroups in a regression tree model. The second question is more difficult because it falls within the subject of postselection inference. We use a bootstrap technique to calibrate normal-theory t intervals so that their expected coverage probability, averaged over all the subgroups in a fitted model, approximates the desired confidence level. It can also provide simultaneous confidence intervals for all subgroups. The first solution is implemented in the GUIDE algorithm and is applicable to data with missing covariate values, 2 or more treatment arms, and outcomes subject to right censoring. Bootstrap calibration is applicable to any subgroup identification method; it is not restricted to regression tree models. Two real examples are used for illustration: a diabetes trial where the outcomes are completely observed but some covariate values are missing and a breast cancer trial where the outcome is right censored.  相似文献   

13.
We derive the closed‐form restricted maximum likelihood estimator and Kenward–Roger's variance estimator for fixed effects in the mixed effects model for repeated measures (MMRM) when the missing data pattern is monotone. As an important application of the analytic result, we present the formula for calculating the power of treatment comparison using the Wald t‐test with the Kenward–Roger adjusted variance estimate in MMRM. It allows adjustment for baseline covariates without the need to specify the covariate distribution in randomized trials. A simple two‐step procedure is proposed to determine the sample size needed to achieve the targeted power. The proposed method performs well for both normal and moderately non‐normal data even in small samples (n=20) in simulations. An antidepressant trial is analyzed for illustrative purposes. Copyright © 2017 John Wiley & Sons, Ltd.  相似文献   

14.
When analysing the survival of patients in comparative randomized clinical trials using the Cox proportional hazards model, important prognostic factors may be included for the adjustment of the treatment effect. In this paper we examine two of the most common misspecifications of the model: (i) an important prognostic factor is omitted from the analysis; and (ii) an important prognostic factor originally present on continuous scale is included in categorized form. Both situations may emerge from the occurrence of missing values. We investigate the properties of the maximum partial likelihood estimator of the treatment effect under this kind of misspecification. The estimate of the treatment effect is found to be asymptotically biased toward zero. For its asymptotic variance we obtain a quantity with the so-called ‘sandwich’ structure. Thus, variance estimation by the inverse of the second-order derivative of the likelihood is not consistent. The magnitude of overestimation or underestimation is evaluated numerically for specific settings. The precision of the treatment effect estimate under covariate omission or categorization is compared with the precision of the estimate in the correct and not misspecified model. It turns out that correct adjustment does not lead to a higher precision of the treatment effect estimate, but due to the resulting underestimation, covariate omission or categorization lead to loss of power of the test of no treatment effect. © 1997 by John Wiley & Sons, Ltd.  相似文献   
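The attenuation-toward-zero phenomenon described here can be illustrated with a small simulation. The sketch below, which assumes the lifelines package and uses made-up parameter values, fits a Cox model with and without a prognostic covariate and compares the treatment coefficients.

```python
# Simulation sketch of misspecification (i): omitting a prognostic covariate
# from a Cox model attenuates the treatment-effect estimate toward zero.
# Assumes the `lifelines` package; all parameter values are made up.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 2000
trt = rng.binomial(1, 0.5, n)                # randomized treatment
z = rng.normal(size=n)                       # prognostic covariate
hazard = 0.1 * np.exp(-0.7 * trt + 1.0 * z)  # true log HR for treatment: -0.7
time = rng.exponential(1 / hazard)
cens = rng.exponential(10, n)                # independent censoring
df = pd.DataFrame({"T": np.minimum(time, cens),
                   "E": (time <= cens).astype(int),
                   "trt": trt, "z": z})

full = CoxPHFitter().fit(df, duration_col="T", event_col="E")
omitted = CoxPHFitter().fit(df.drop(columns="z"),
                            duration_col="T", event_col="E")
print("adjusted log HR for trt:", round(full.params_["trt"], 3))
print("log HR with z omitted:  ", round(omitted.params_["trt"], 3))
```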

15.
In survival analysis, median residual lifetime is often used as a summary measure to assess treatment effectiveness; it is not clear, however, how such a quantity could be estimated for a given dynamic treatment regimen using data from sequential randomized clinical trials. We propose a method to estimate a dynamic treatment regimen‐specific median residual life (MERL) function from sequential multiple assignment randomized trials. We present the MERL estimator, which is based on inverse probability weighting, as well as two variance estimates for the MERL estimator. One variance estimate follows from Lunceford, Davidian and Tsiatis' 2002 survival function‐based variance estimate and the other uses the sandwich estimator. The MERL estimator is evaluated, and its two variance estimates are compared through simulation studies, showing that the estimator and both variance estimates produce approximately unbiased results in large samples. To demonstrate our methods, the estimator has been applied to data from a sequentially randomized leukemia clinical trial. Copyright © 2013 John Wiley & Sons, Ltd.

16.
Analysis of a randomized trial with missing outcome data involves untestable assumptions, such as the missing at random (MAR) assumption. Estimated treatment effects are potentially biased if these assumptions are wrong. We quantify the degree of departure from the MAR assumption by the informative missingness odds ratio (IMOR). We incorporate prior beliefs about the IMOR in a Bayesian pattern-mixture model and derive a point estimate and standard error that take account of the uncertainty about the IMOR. In meta-analysis, this model should be used for four separate sensitivity analyses which explore the impact of IMORs that either agree or contrast across trial arms on pooled results via their effects on point estimates or on standard errors. We also propose a variance inflation factor that can be used to assess the influence of trials with many missing outcomes on the meta-analysis. We illustrate the methods using a meta-analysis on psychiatric interventions in deliberate self-harm.  相似文献   

17.
BACKGROUND: In longitudinal studies, it is extremely rare that all the planned measurements are actually performed. Missing data often result from drop-out, but may also be intermittent. In both cases, the analysis of incomplete data necessarily requires assumptions that are generally unverifiable, and the need for sensitivity analyses has been advocated over the past few years. In this article, attention is given to longitudinal binary data. METHODS: A method is proposed that is based on a log-linear model. A sensitivity parameter is introduced that represents the relationship between the response mechanism and the missing data mechanism. It is recommended not to estimate this parameter, but to consider a range of plausible values and to estimate the parameters of interest conditionally on these plausible values. This allows one to assess the sensitivity of the conclusion of a study to various assumptions regarding the missing data mechanism. RESULTS: This method was applied to a randomized clinical trial comparing the efficacy of two treatment regimens in patients with persistent asthma. The sensitivity analysis showed that the conclusion of this study was robust to missing data.

18.
In a prospective cohort study, examining all participants for incidence of the condition of interest may be prohibitively expensive. For example, the “gold standard” for diagnosing temporomandibular disorder (TMD) is a physical examination by a trained clinician. In large studies, examining all participants in this manner is infeasible. Instead, it is common to use questionnaires to screen for incidence of TMD and perform the “gold standard” examination only on participants who screen positively. Unfortunately, some participants may leave the study before receiving the “gold standard” examination. Within the framework of survival analysis, this results in missing failure indicators. Motivated by the Orofacial Pain: Prospective Evaluation and Risk Assessment (OPPERA) study, a large cohort study of TMD, we propose a method for parameter estimation in survival models with missing failure indicators. We estimate the probability of being an incident case for those lacking a “gold standard” examination using logistic regression. These estimated probabilities are used to generate multiple imputations of case status for each missing examination that are combined with observed data in appropriate regression models. The variance introduced by the procedure is estimated using multiple imputation. The method can be used to estimate both regression coefficients in Cox proportional hazard models as well as incidence rates using Poisson regression. We simulate data with missing failure indicators and show that our method performs as well as or better than competing methods. Finally, we apply the proposed method to data from the OPPERA study. Copyright © 2015 John Wiley & Sons, Ltd.  相似文献   
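A schematic of the imputation strategy described above, under stated assumptions (simulated data, hypothetical variable names, Poisson regression for an incidence-rate ratio, Rubin's rules for combining); it sketches the general idea only and is not the OPPERA analysis code.

```python
# Sketch: multiply impute missing failure indicators from a logistic model
# and combine Poisson-regression estimates with Rubin's rules.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n, m_imputations = 1500, 20
x = rng.normal(size=n)                               # risk factor
case = rng.binomial(1, 1 / (1 + np.exp(-(-2 + 0.6 * x))))
examined = rng.binomial(1, 0.7, n).astype(bool)      # got the gold-standard exam
df = pd.DataFrame({"x": x, "case": case.astype(float), "examined": examined})
df.loc[~df.examined, "case"] = np.nan                # failure indicator missing

# 1) Model P(case | x) among those with a gold-standard examination.
imp_model = smf.logit("case ~ x", data=df[df.examined]).fit(disp=False)
p_missing = imp_model.predict(df[~df.examined])

# 2) Multiple imputation: draw case status for the unexamined, refit each time.
betas, variances = [], []
for _ in range(m_imputations):
    d = df.copy()
    d.loc[~d.examined, "case"] = rng.binomial(1, p_missing)
    fit = smf.poisson("case ~ x", data=d).fit(disp=False)
    betas.append(fit.params["x"])
    variances.append(fit.bse["x"] ** 2)

# 3) Rubin's rules: combine point estimates and variances.
betas, variances = np.array(betas), np.array(variances)
b_bar = betas.mean()
total_var = variances.mean() + (1 + 1 / m_imputations) * betas.var(ddof=1)
print(f"log incidence-rate ratio {b_bar:.3f} (SE {np.sqrt(total_var):.3f})")
```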

19.
Pattern‐mixture models provide a general and flexible framework for sensitivity analyses of nonignorable missing data. The placebo‐based pattern‐mixture model (Little and Yau, Biometrics 1996; 52 :1324–1333) treats missing data in a transparent and clinically interpretable manner and has been used as sensitivity analysis for monotone missing data in longitudinal studies. The standard multiple imputation approach (Rubin, Multiple Imputation for Nonresponse in Surveys, 1987) is often used to implement the placebo‐based pattern‐mixture model. We show that Rubin's variance estimate of the multiple imputation estimator of treatment effect can be overly conservative in this setting. As an alternative to multiple imputation, we derive an analytic expression of the treatment effect for the placebo‐based pattern‐mixture model and propose a posterior simulation or delta method for the inference about the treatment effect. Simulation studies demonstrate that the proposed methods provide consistent variance estimates and outperform the imputation methods in terms of power for the placebo‐based pattern‐mixture model. We illustrate the methods using data from a clinical study of major depressive disorders. Copyright © 2013 John Wiley & Sons, Ltd.  相似文献   

20.
This paper extends the line‐segment parametrization of the structural measurement error (ME) model to situations in which the error variance on both variables is not constant over all observations. Under these conditions, we develop a method‐of‐moments estimate of the slope, and derive its asymptotic variance. We further derive an accurate estimator of the variability of the slope estimate based on sample data in a rather general setting. We perform simulations that validate our results and demonstrate that our estimates are more precise than estimates under a different model when the ME variance is not small. Finally, we illustrate our estimation approach using real data involving heteroscedastic ME, and compare its performance with that of earlier models. Copyright © 2010 John Wiley & Sons, Ltd.  相似文献   
