Similar articles
20 similar articles found (search time: 78 ms)
1.
Objective To evaluate a method that uses a mixed-effects model to simulate long-term longitudinal data from short-term longitudinal data. Methods Dietary cadmium exposure data served as a worked example. Short-term cadmium exposure of adults in one province was first estimated; on this basis, three models were built and their parameters computed, including fixed-effect parameters, random-effect parameters, and a first-order autocorrelation coefficient. Monte Carlo simulation then produced the population's daily exposures and their distribution for different numbers of simulated subjects and days. Three criteria measured the similarity of the simulated data to the original data: the overall mean and standard deviation, the proportions of the variance components, and the autocorrelation coefficient. Results Fixed effects: year, region, and their first-order interaction entered the mixed-effects model. Random effects: between-subject variance σb² = 0.056, within-subject variance σe² = 0.717, first-order autocorrelation AR(1) ρ = −0.026. Model selection: with the same numbers of simulated subjects and days, the mixed-effects model with an autocorrelation structure reproduced the original data structure most closely. For that model, with the number of simulated subjects fixed at 1000 and the number of days increasing from 30 to 360, the within- and between-subject variance proportions and the autocorrelation coefficient remained stable as the time span grew, staying close to those of the original data...
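The simulation scheme described above can be sketched in a few lines: a subject-level random intercept plus AR(1) within-subject errors, using the variance components reported in the abstract (σb² = 0.056, σe² = 0.717, ρ = −0.026). The fixed-effects part (year, region, interaction) is collapsed into a single placeholder mean, so this is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def simulate_exposure(n_subjects, n_days, mu=0.0,
                      var_between=0.056, var_within=0.717, rho=-0.026,
                      seed=0):
    """Simulate daily exposures from a mixed-effects model with a
    subject random intercept and stationary AR(1) within-subject errors.
    Variance components follow the abstract; mu stands in for the
    fixed-effects part (year, region, interaction)."""
    rng = np.random.default_rng(seed)
    b = rng.normal(0.0, np.sqrt(var_between), size=n_subjects)  # between-subject
    e = np.empty((n_subjects, n_days))
    # stationary AR(1): e_t = rho*e_{t-1} + innovation, Var(e_t) = var_within
    sd_innov = np.sqrt(var_within * (1 - rho**2))
    e[:, 0] = rng.normal(0.0, np.sqrt(var_within), size=n_subjects)
    for t in range(1, n_days):
        e[:, t] = rho * e[:, t - 1] + rng.normal(0.0, sd_innov, size=n_subjects)
    return mu + b[:, None] + e

y = simulate_exposure(1000, 30)
```

Extending `n_days` from 30 to 360 while holding `n_subjects` at 1000 reproduces the abstract's scenario of lengthening a short-term series.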

2.
Objective To discuss the characteristics and scope of application of analysis of variance for repeated-measures data. Methods Body-weight changes over a two-year period were analyzed in male rats fed diets containing high, medium, or low doses of pesticide X. Results The high dose of pesticide X had a highly significant effect on the rats' body weight; the medium and low doses had no significant effect. Conclusion Repeated-measures ANOVA is macroscopic, holistic, and prospective in character.

3.
A random-simulation comparison of multiple-outcome survival models with the Cox model
高峻  董伟  高尔生  赵耐青 《中国卫生统计》2007,24(3):248-250,254
Objective To evaluate, through random simulation, the properties of different multiple-outcome survival models. Methods Using randomly simulated data, the precision of the regression-coefficient estimates was compared among a multiple-outcome survival model, a single-outcome Cox model that pooled the multiple outcomes into one, and separate single-outcome Cox models fitted to each outcome. Results The multiple-outcome survival model estimated the regression coefficients most accurately, with the highest percentage of 95% confidence intervals covering the true coefficients and the highest percentage of 95% confidence intervals excluding 0. Conclusion Analyzing multiple-outcome survival data by fitting separate Cox models to each outcome, or by pooling the outcomes into a single-outcome Cox model, leads to inaccurate coefficient estimates and reduced power.

4.
Objective To compare hospital technical-efficiency results measured by different data envelopment analysis (DEA) models, as a methodological reference for measuring hospital efficiency. Methods The CCR, BCC, SBM, SE-CCR, and SE-SBM models were each used to measure the technical efficiency of 287 public hospitals in Sichuan Province; the DEA computations were run in MyDEA 1.0 and the statistical analysis in SPSS 17.0. Results SBM efficiency scores were less than or equal to CCR scores; the super-efficiency models further discriminated among the DEA-efficient hospitals; and the two super-efficiency models gave different results when comparing grade 2A with grade 2B hospitals. Conclusion Super-efficiency models are a useful supplement to traditional DEA models when evaluating hospital technical efficiency; the SE-SBM model is recommended for ranking the hospitals evaluated.

5.
Objective To compare the statistical performance of last observation carried forward (LOCF), the mixed-effects model for repeated measures (MMRM), and multiple imputation (MI) in handling missing longitudinal data. Methods Against the background of a two-arm design with 4 visits and 3 levels of between-visit correlation, Monte Carlo simulation generated complete longitudinal data, from which missing-data sets were created under two missingness proportions and three missingness mechanisms: missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR). Taking the complete-data analysis as the benchmark, the methods were evaluated on type I error, power, the estimation error of the between-group treatment difference, and the width of its 95% confidence interval (95% CI). Results MMRM and MI controlled the type I error in all settings, with power slightly below that of the complete data; LOCF mostly failed to control the type I error, and its power varied widely. In most settings MMRM and MI had small point-estimation errors, whereas LOCF was unstable. In all settings MI had the widest 95% CI, MMRM the next widest, and LOCF the narrowest. Conclusion Under MCAR and MAR, MMRM and MI perform comparably and respond to the influencing factors in a regular way; either may be chosen as the primary analysis as circumstances dictate. The peculiarity of LOCF's imputation gives it small variability and high apparent precision, but its major flaws are a lack of robustness and failure to control the type I error, so it should be used with caution. A sensitivity analysis of the missing data under the MNAR mechanism is necessary to examine the robustness of trial results.
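Of the three methods compared, LOCF is the simplest to express in code; MMRM and MI require a mixed-model or imputation library. A minimal sketch of LOCF on a subject-by-visit matrix, with NaN marking missed visits (the data here are invented for illustration):

```python
import numpy as np

def locf(data):
    """Last observation carried forward along each row (subject x visit).
    Assumes the baseline (first column) is always observed."""
    filled = data.copy()
    for j in range(1, filled.shape[1]):
        missing = np.isnan(filled[:, j])
        filled[missing, j] = filled[missing, j - 1]  # carry previous visit forward
    return filled

visits = np.array([[10.0, 11.0, np.nan, np.nan],
                   [ 9.0, np.nan, np.nan, 12.0]])
completed = locf(visits)
```

Because the carried-forward values are treated as real observations, the between-group variance shrinks, which is exactly the source of LOCF's deceptively narrow confidence intervals noted in the abstract.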

6.
李晨 《医疗设备信息》2010,(2):55-57,72
As cost accounting develops and management demands continue to grow, a variety of analytical techniques are being applied to hospital cost analysis. Among them, cost-volume-profit (CVP) analysis was adopted early for cost accounting and analysis and for forecasting a hospital's break-even point and future operating position. Building a reliable CVP model requires introducing a cost-prediction equation. This paper presents three common methods for building such prediction equations, compares their strengths and weaknesses, and offers hospitals a reference for choosing among them when constructing CVP models and forecasting costs.
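The core of the cost-volume-profit model discussed above is the break-even relation: the volume at which total revenue equals fixed plus variable cost. A generic sketch (the figures are hypothetical, not from any hospital):

```python
def break_even_volume(fixed_cost, price, unit_variable_cost):
    """Cost-volume-profit break-even point: service volume at which
    revenue equals total cost, i.e. fixed cost divided by the
    contribution margin per unit."""
    return fixed_cost / (price - unit_variable_cost)

# hypothetical figures: 120,000 fixed cost, price 80, variable cost 50 per unit
volume = break_even_volume(120_000, 80, 50)
```

The cost-prediction equation the paper discusses supplies the `fixed_cost` and `unit_variable_cost` inputs to this relation.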

7.
Objective To select the optimal early-warning model and parameters for measles in Beijing, providing technical support for automated early warning. Methods Outbreak-simulation software generated a series of outbreak signals of different character, which were injected into the actual daily measles counts reported in Beijing during 2005-2007. Warning models including the exponentially weighted moving average (EWMA), C1-MILD (C1), C2-MEDIUM (C2), C3-ULTRA (C3), and the space-time permutation scan statistic were applied to detect the injected signals. Youden's index (YD index) and detection time (DT) were compared across each model's parameter settings to select its optimal parameters, and the models were then compared at those optima to select the best warning model. Results The optimal parameters were λ = 0.6, κ = 1.0 for EWMA; k = 0.1, H = 3σ for C1; k = 0.1, H = 3σ for C2; k = 1.0, H = 4σ for C3; and, for the space-time permutation scan, a maximum temporal cluster of 7 d and a maximum spatial cluster of 5 km. Warning performance at the optima: EWMA, YD index 90.8%, DT 0.121 d; C1, YD index 88.7%, DT 0.142 d; C2, YD index 92.9%, DT 0.121 d; C3, YD index 87.9%, DT 0.058 d; space-time permutation scan, YD index 94.3%, DT 0.176 d. Conclusion Of the five models, the space-time permutation scan gave the best warning performance.
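The EWMA chart evaluated above smooths the daily counts and alarms when the smoothed value crosses an upper control limit; a minimal sketch using the reported optimum λ = 0.6, κ = 1.0 (the baseline length and the injected series below are invented for illustration):

```python
import numpy as np

def ewma_alarms(counts, lam=0.6, k=1.0, baseline=28):
    """EWMA aberration-detection sketch. The first `baseline` days
    estimate the in-control mean/sd; lam and k follow the optimal
    parameters reported in the abstract (lambda = 0.6, kappa = 1.0)."""
    x = np.asarray(counts, dtype=float)
    mu, sigma = x[:baseline].mean(), x[:baseline].std(ddof=1)
    limit = mu + k * sigma * np.sqrt(lam / (2 - lam))  # asymptotic upper limit
    z, alarms = mu, []
    for t in range(baseline, len(x)):
        z = lam * x[t] + (1 - lam) * z                 # exponential smoothing
        if z > limit:
            alarms.append(t)
    return alarms

series = [1, 2, 3, 2] * 7 + [2, 3, 12, 15]  # 28 baseline days, then a spike
flagged = ewma_alarms(series)
```

With κ this small the chart is deliberately sensitive, which trades a higher false-alarm rate for the short detection times reported in the abstract.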

8.
Applications of several common Excel analysis tools in medical statistics
Excel, being simple and easy to learn, has been mastered by most researchers, and its Analysis ToolPak can perform a number of common statistical computations; one need only match the practical problem to the corresponding tool. The following examples illustrate the use of several of these tools. 1. Descriptive Statistics. This tool computes the common summary statistics. Example 1: the heights (m) of 15 men were measured as 1.73, 1.72, 1.69, 1.74, 1.66, 1.80, 1.78, 1.76, 1.70, 1.62, 1.82, 1.71, 1.75, 1.70, 1.77; compute the descriptive statistics of these data. Steps: (1) enter the data in a column of an Excel worksheet; (2) click Tools → Data Analysis → Descriptive Statistics to open the Descriptive Statistics dialog; (3) specify the data range; if the column label was included in the selection, check the Labels...
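The same summary that Excel's Descriptive Statistics tool produces can be computed with Python's standard library; the height values below follow the worked example (as reconstructed from the garbled digits, so treat them as illustrative):

```python
import statistics as st

# 15 men's heights (m), per the Descriptive Statistics example
heights = [1.73, 1.72, 1.69, 1.74, 1.66, 1.80, 1.78, 1.76,
           1.70, 1.62, 1.82, 1.71, 1.75, 1.70, 1.77]

summary = {
    "n": len(heights),
    "mean": st.mean(heights),
    "median": st.median(heights),
    "sd": st.stdev(heights),   # sample standard deviation, like Excel's STDEV
    "min": min(heights),
    "max": max(heights),
}
```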

9.
Objective To compare the performance of aberration-detection algorithms for infectious-disease outbreaks based on two different types of baseline data. Methods Cases and outbreaks of hand-foot-and-mouth disease (HFMD) reported by six provinces (municipalities) of China in 2009 were used as the data source. The C1, C2, and C3 algorithms were each run on two types of baseline data, one distinguishing and one not distinguishing weekdays from weekends. Time to detection (TTD) and false-alarm rate (FAR) served as the evaluation indices for comparing the detection performance of the three algorithms on the two baselines. Results A total of 405 460 HFMD cases were reported by the six provinces in 2009; on average, each county reported 1.78 cases per day on weekdays and 1.29 cases per day on weekends, a statistically significant difference (P < 0.01). With the baseline that did not distinguish weekdays from weekends, the optimal thresholds for C1, C2, and C3 were 0.2, 0.4, and 0.6, the TTD was 1 day for all three, and the FARs were 5.33%, 4.88%, and 4.50%, respectively. With the baseline that distinguished weekdays from weekends, the optimal thresholds became 0.4, 0.6, and 1.0, the TTD remained 1 day for all three, and the FARs fell to 4.81%, 4.75%, and 4.16%, respectively, all lower than with the undistinguished baseline. Conclusion The numbers of HFMD cases reported on weekdays and on weekends differ significantly; using baseline data that distinguish weekdays from weekends lowers the FAR of the C1, C2, and C3 algorithms and improves the accuracy of outbreak detection.
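The C-family algorithms compared here score each day against a short moving baseline. A hypothetical C2-style sketch (7-day baseline with a 2-day guard gap, as in the EARS family; the counts, the 0.5 floor on the standard deviation, and any threshold choice are illustrative assumptions, not the study's settings):

```python
import numpy as np

def c2_score(day_index, counts, window=7, gap=2):
    """EARS C2-style statistic: compare today's count to the mean/sd of a
    7-day baseline window ending `gap` days earlier. Restricting `counts`
    to weekdays only (or weekends only) implements the split-baseline
    idea evaluated in the abstract."""
    start = day_index - gap - window
    base = np.asarray(counts[start:start + window], dtype=float)
    mu, sd = base.mean(), max(base.std(ddof=1), 0.5)  # floor sd to avoid 0
    return max(0.0, (counts[day_index] - mu) / sd)

# lower weekend reporting (cf. 1.78 vs 1.29 cases/day) inflates baseline
# variance when the two day types are mixed, raising the false-alarm rate
counts = [2, 2, 2, 2, 2, 1, 1] * 3 + [9]  # three baseline weeks, then a spike
score = c2_score(21, counts)
```

A day alarms when its score exceeds the chosen threshold (the abstract's optima were 0.4-1.0 depending on algorithm and baseline type).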

10.
Skewed data are usually handled by the log-transformation method or the percentile method when computing the mean and its upper limit; order statistics provide a further way to analyze skewed data. Taking as an example the 290 urinary fluoride measurements made in Tanggu District in 1983 (positively skewed), the 95% upper limit was computed by each of the three methods above in order to evaluate them.
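The three approaches mentioned (log transformation, percentile, order statistics) can be compared on simulated skewed data; here a lognormal sample stands in for the urinary fluoride values (sample size matches the example, all distribution parameters are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.0, sigma=0.5, size=290)  # skewed sample of 290 values

# 1) log-transform method: normal 95% upper limit on the log scale, back-transformed
log_limit = np.exp(np.log(x).mean() + 1.645 * np.log(x).std(ddof=1))

# 2) percentile method
pct_limit = np.percentile(x, 95)

# 3) order-statistic method: the k-th ranked value, k = ceil(0.95 * n)
k = int(np.ceil(0.95 * len(x)))
order_limit = np.sort(x)[k - 1]
```

On well-behaved lognormal data the three limits agree closely; they diverge when the log-normality assumption behind method 1 fails, which is the practical point of comparing them.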

11.
This paper presents a Monte Carlo study of the CPD analysis-of-variance method. The results show that the statistics used in CPD ANOVA are all asymptotically F-distributed, and that once the sample size reaches a moderate level the method has good power. CPD ANOVA can therefore be regarded as an effective method of analysis in experimental design.

12.
The split-split-plot design and its analysis of variance
Objective To introduce the split-split-plot design and its analysis of variance. Methods Worked examples illustrate the correct identification and sensible arrangement of the experimental units and the levels of the experimental factors in a split-split-plot design, so that the right ANOVA model can be chosen. Results Only when the experimental units and factor levels of a split-split-plot design are correctly identified can the correct ANOVA model, and hence the correct analysis results, be obtained. Conclusion Split-plot and split-split-plot designs are multifactor research designs built on blocking control, a synthesis of the randomized complete block design, the Latin square design, and related designs.

13.
Penetrance‐based linkage analysis and variance component linkage analysis are two methods that are widely used to localize genes influencing quantitative traits. Using computer programs PAP and SOLAR as representative software implementations, we have conducted an empirical comparison of both methods' power to map quantitative trait loci in extended, randomly ascertained pedigrees, using simulated data. Two‐point linkage analyses were conducted on several quantitative traits of different genetic and environmental etiology using both programs, and the lod scores were compared. The two methods appear to have similar power when the underlying quantitative trait locus is diallelic, with one or the other method being slightly more powerful depending on the characteristics of the quantitative trait and the quantitative trait locus. In the case of a multiallelic quantitative trait locus, however, the variance component approach has much greater power. These findings suggest that one should give careful thought to the likely allelic architecture of the quantitative trait to be analyzed when choosing between these two analytical approaches. It may be the case in general that linkage methods which explicitly or implicitly rely on the assumption of a diallelic trait locus fare poorly when this assumption is incorrect. © 2001 Wiley‐Liss, Inc.

14.
An extension of the traditional regression of offspring on midparent (ROMP) method was used to estimate the heritability of the trait, test for marker association, and estimate the heritability attributable to a marker locus. The fifty replicates of the Genetic Analysis Workshop (GAW) 12 simulated general population data were used to compare the ROMP method with the variance components method as implemented in SOLAR as a test for marker association, and to a standard analysis of variance (ANOVA) method. Large sample statistical properties of the ROMP and ANOVA methods were compared using 2,000 replicates resampled from the families of the original 50 replicates. Overall, the power to detect a completely associated single nucleotide polymorphism (SNP) marker was high, and the type I error rates were similar to nominal significance levels for all three methods. The standard deviations of the estimates of the heritability of the trait were large for both SOLAR and ROMP, but the estimates were, on average, close to those of the generating model for both methods. However, on average, SOLAR overestimated the heritability attributable to the associated SNP marker (by 256%) while ROMP underestimated it (by 26%). © 2001 Wiley‐Liss, Inc.
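The ROMP idea — heritability estimated as the slope of the regression of offspring phenotype on midparent phenotype — can be illustrated on simulated data. This is a deliberately simplified additive model with invented values, not the GAW12 data or the paper's extension:

```python
import numpy as np

rng = np.random.default_rng(42)
n, h2 = 2000, 0.5                       # assumed true heritability (illustrative)
midparent = rng.normal(0.0, 1.0, n)     # standardized midparent phenotype
# under a simple additive model, E[offspring | midparent] = h2 * midparent;
# residual sd chosen so Var(offspring) = 1
offspring = h2 * midparent + rng.normal(0.0, np.sqrt(1 - h2**2), n)
h2_hat = np.polyfit(midparent, offspring, 1)[0]  # slope = heritability estimate
```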

15.
We propose a family of log-linear models for ordinal data that contain parameters reflecting change patterns to compare treatments relative to change from baseline. Under the most general model, rates of change can depend not only upon the direction of change, but also upon the level of the baseline classification. We describe methods for selection of a parsimonious model and for tests of hypotheses concerning treatment differences. Interpretation of treatment differences in the follow-up response profiles, within baseline strata, employs the concept of stochastic ordering. Data from two clinical trials illustrate the proposed procedure.

16.
We contrast the pooling of multiple data sets with the compound HLOD (HLODC) and the posterior probability of linkage (PPL), two approaches that have been shown to have more power in the presence of genetic heterogeneity. We also propose and evaluate several multipoint extensions. © 2001 Wiley‐Liss, Inc.

17.
Objective To evaluate the effect of one booster dose of different hepatitis B (HepB) vaccines in antibody-positive children 5-11 years after primary immunization with recombinant HepB vaccine in Zhejiang Province. Methods In September 2009, 4407 children older than 5 years who had completed the full course of HepB vaccination within their first year of life were selected in Taizhou, Lishui, and Quzhou. Venous blood was collected and tested by chemiluminescence assay for hepatitis B surface antigen (HBsAg), hepatitis B surface antibody (anti-HBs), and hepatitis B core antibody (anti-HBc). The 1994 children positive only for anti-HBs received one booster dose of HepB vaccine, and anti-HBs was measured 1 month and 6 months after the dose to observe the booster effect. Results Analyzing the 1-month and 6-month anti-HBs measurements by multivariate analysis of variance, repeated-measures analysis of variance, and multiple linear regression showed that one booster dose of any of the vaccines elicited a strong immune response, although the effects of the different vaccines were not entirely consistent. Conclusion Since the HepB immunization strategy was implemented, the vaccines have performed well, but anti-HBs positivity shows a gradual decline over time; in antibody-positive children older than 5 years, one booster dose of any HepB vaccine achieves a good immune response.

18.
Objective To investigate variance estimation for the regression coefficients of models fitted to censored dependent-variable data under complex sampling. Methods Data censored from the left or the right under complex sampling were simulated, and parametric and semiparametric regression models were fitted both with and without allowance for the sampling design; the standard errors of the regression coefficients under the two treatments were compared. Results With the sample size fixed, the coefficient estimates of the censored regression model that allowed for the complex design agreed with those obtained under an assumption of simple random sampling, but the standard errors of the coefficients differed. When within-cluster heterogeneity is high and the intraclass correlation coefficient very small, the standard errors under the complex design are lower than those obtained by ignoring it. Conclusion For censored data from a complex sample with a complete sampling frame, the sampling design should be taken into account wherever possible; variance estimates that respect the complex design are closer to reality and make the statistical inference more trustworthy.

19.
OBJECTIVE: Randomized clinical trials that compare two treatments on a continuous outcome can be analyzed using analysis of covariance (ANCOVA) or a t-test approach. We present a method for the sample size calculation when ANCOVA is used. STUDY DESIGN AND SETTING: We derived an approximate sample size formula. Simulations were used to verify the accuracy of the formula and to improve the approximation for small trials. The sample size calculations are illustrated in a clinical trial in rheumatoid arthritis. RESULTS: If the correlation between the outcome measured at baseline and at follow-up is ρ, ANCOVA comparing groups of (1 − ρ²)·n subjects has the same power as a t-test comparing groups of n subjects. When, on the same data, ANCOVA is used instead of the t-test, the precision of the treatment estimate is increased, and the length of the confidence interval is reduced by a factor 1 − ρ². CONCLUSION: ANCOVA may considerably reduce the number of patients required for a trial.
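The abstract's result translates directly into a sample-size calculation: compute the two-sample t-test size (normal approximation) and multiply by 1 − ρ². A sketch with made-up effect size and standard deviation:

```python
from math import ceil
from statistics import NormalDist

def n_per_group_ttest(delta, sd, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample t-test
    detecting mean difference `delta` with common sd `sd`."""
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) * sd / delta) ** 2
    return ceil(n)

def n_per_group_ancova(delta, sd, rho, alpha=0.05, power=0.80):
    """The abstract's relation: ANCOVA with baseline-follow-up correlation
    rho needs (1 - rho**2) times the t-test sample size."""
    return ceil((1 - rho ** 2) * n_per_group_ttest(delta, sd, alpha, power))

n_t = n_per_group_ttest(delta=5, sd=10)              # t-test design
n_a = n_per_group_ancova(delta=5, sd=10, rho=0.7)    # ANCOVA design
```

With ρ = 0.7, roughly half the patients suffice, which is the practical payoff the conclusion points to.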

20.
《Value in health》2021,24(11):1634-1642
Objectives: Curative treatments can result in complex hazard functions. The use of standard survival models may result in poor extrapolations. Several models for data which may have a cure fraction are available, but comparisons of their extrapolation performance are lacking. A simulation study was performed to assess the performance of models with and without a cure fraction when fit to data with a cure fraction. Methods: Data were simulated from a Weibull cure model, with 9 scenarios corresponding to different lengths of follow-up and sample sizes. Cure and noncure versions of standard parametric, Royston-Parmar, and dynamic survival models were considered along with noncure fractional polynomial and generalized additive models. The mean-squared error and bias in estimates of the hazard function were estimated. Results: With the shortest follow-up, none of the cure models provided good extrapolations. Performance improved with increasing follow-up, except for the misspecified standard parametric cure model (lognormal). The performance of the flexible cure models was similar to that of the correctly specified cure model. Accurate estimates of the cured fraction were not necessary for accurate hazard estimates. Models without a cure fraction provided markedly worse extrapolations. Conclusions: For curative treatments, failure to model the cured fraction can lead to very poor extrapolations. Cure models provide improved extrapolations, but with immature data there may be insufficient evidence to choose between cure and noncure models, emphasizing the importance of clinical knowledge for model choice. Dynamic cure fraction models were robust to model misspecification, but standard parametric cure models were not.
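A Weibull mixture cure model of the kind simulated from in this study can be sketched directly: a cured fraction that never experiences the event, Weibull event times for the rest, and administrative censoring to mimic limited follow-up (all parameter values here are illustrative, not the study's nine scenarios):

```python
import numpy as np

def simulate_cure_times(n, cure_frac=0.3, shape=1.5, scale=2.0,
                        censor_at=5.0, seed=0):
    """Simulate from a Weibull mixture cure model: a fraction `cure_frac`
    of subjects never experiences the event; the rest have Weibull event
    times. Follow-up ends at `censor_at` (administrative censoring)."""
    rng = np.random.default_rng(seed)
    cured = rng.random(n) < cure_frac
    t = scale * rng.weibull(shape, size=n)   # event times for the uncured
    t[cured] = np.inf                        # cured: the event never occurs
    observed = np.minimum(t, censor_at)      # censor at end of follow-up
    event = (t <= censor_at).astype(int)     # 1 = event observed, 0 = censored
    return observed, event

times, events = simulate_cure_times(10_000)
```

Shortening `censor_at` reproduces the immature-data setting in which, as the abstract notes, cure and noncure models become hard to distinguish.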


Copyright©北京勤云科技发展有限公司  京ICP备09084417号