Similar Documents
A total of 18 similar documents were found.
1.
Objective: To evaluate three methods for estimating confidence intervals of age-adjusted rates, and to identify a method suited to estimating confidence intervals for the age-adjusted prevalence in the Jiangsu cervical cancer screening study. Methods: Interval estimates of age-adjusted rates were computed with the binomial normal-approximation method, the Gamma-distribution method, and the "exact probability" method; statistical simulation was used to examine the coverage and width of the intervals under various rates and numbers of positives. Results: When the sample was small (few positives), the exact method outperformed the Gamma method in both the deviation of coverage from the nominal confidence level and the interval width, and both clearly outperformed the normal approximation in coverage. As the number of positives increased, the coverage deviation and interval width of all three methods shrank, and the differences among the methods narrowed. Once the number of positives exceeded 30, the coverage deviations of the exact and normal-approximation methods were both within ±1% and their interval widths were similar, whereas the Gamma method required more than 100 positives to bring its coverage deviation within 1%. These patterns held whether the sample composition deviated slightly or markedly from the population composition. Conclusion: Weighing coverage, interval width, and computational convenience, the exact probability method is recommended for the confidence interval of an adjusted rate when the total number of positives is below 30, and the normal approximation when it is 30 or more.

2.
彭斌  易东  田考聪  钟晓妮 《现代预防医学》2007,34(13):2472-2474,2476
[Objective] To adjust the Clopper-Pearson exact method for estimating the confidence interval of a population proportion from small binomial samples, so as to reduce the exact method's conservatism and improve the precision of the interval. [Methods] A Monte Carlo sampling program for the binomial distribution was written in SAS, and the actual coverage of the 95% confidence interval was computed to search for a suitable correction coefficient k. [Results] With k = 0.5, the actual coverage of the adjusted 95% interval was closer to the nominal 95% than that of the exact method, and the interval was narrower; when the sample size was below 15, k = 0.6 gave the best adjustment. [Conclusion] The adjusted method reduces the conservatism of the exact method and improves the precision of the confidence interval.
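As a minimal stdlib-only illustration of the "exact" (Clopper-Pearson) interval that items 1 and 2 build on — a sketch, not the authors' SAS program; the bisection approach and helper names are illustrative — the exact limits can be found by inverting the binomial tail probabilities:

```python
import math

def binom_sf(x, n, p):
    """P(X >= x) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(x, n + 1))

def binom_cdf(x, n, p):
    """P(X <= x) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(x + 1))

def clopper_pearson(x, n, alpha=0.05):
    """Exact (Clopper-Pearson) CI for a binomial proportion,
    found by bisection on the binomial tail probabilities."""
    def bisect(too_small, lo=0.0, hi=1.0):
        for _ in range(60):          # 60 halvings: ample precision
            mid = (lo + hi) / 2
            if too_small(mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    # lower limit: p solving P(X >= x | p) = alpha/2 (tail increases with p)
    lower = 0.0 if x == 0 else bisect(lambda p: binom_sf(x, n, p) < alpha / 2)
    # upper limit: p solving P(X <= x | p) = alpha/2 (tail decreases with p)
    upper = 1.0 if x == n else bisect(lambda p: binom_cdf(x, n, p) > alpha / 2)
    return lower, upper
```

For 3 positives out of 10 this gives roughly (0.067, 0.652), noticeably wider than the normal-approximation interval — the conservatism that the correction coefficient k in item 2 is meant to reduce.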

3.
Health Statistics
Verifying the sample-size formula for non-inferiority clinical trials by random simulation; revision and evaluation of the quality-of-life scale for patients with benign prostatic hyperplasia (scale revision and item-screening methods); application of kernel-smoothed semiparametric regression models to repeated-measures data; unbiased estimation of prevalence under negative binomial sampling; point and interval estimation of prevalence when the number of cases is unknown.

4.
Objective: To evaluate the applicability conditions of the commonly used normal-approximation method for computing confidence intervals of a population proportion, providing a theoretical basis and practical guidance for its correct use. Methods: Exact confidence intervals were computed from the binomial distribution and compared with those from the normal approximation; coverage was evaluated by Monte Carlo sampling; probability distributions of binary data were plotted with SAS and Excel. Results: With n×p = 5 as the approximation condition, the normal-approximation interval can carry substantial relative error. With n×p held constant, the relative error increases roughly linearly as p decreases, and increases nonlinearly as n increases. Conclusion: The conventional applicability conditions for the normal approximation do not guarantee the nominal coverage or the accuracy of the estimate. Based on the experimental results, a new set of applicability conditions is proposed for using the normal approximation to estimate a 95% confidence interval for a population proportion.
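The evaluation strategy in item 4 — estimating the actual coverage of the normal-approximation (Wald) interval by Monte Carlo — can be sketched as follows; the replication count, seed, and parameter values below are illustrative choices, not the paper's settings:

```python
import math
import random

def wald_coverage(n, p, alpha=0.05, reps=2000, seed=1):
    """Monte Carlo estimate of the actual coverage of the Wald
    (normal-approximation) CI for a binomial proportion."""
    rng = random.Random(seed)
    z = 1.959963985          # ~97.5th percentile of N(0, 1)
    hits = 0
    for _ in range(reps):
        x = sum(rng.random() < p for _ in range(n))   # one binomial draw
        phat = x / n
        half = z * math.sqrt(phat * (1 - phat) / n)
        if phat - half <= p <= phat + half:
            hits += 1
    return hits / reps
```

At n = 100 and p = 0.05 (so n×p = 5), the simulated coverage falls well short of the nominal 95%, which is exactly the paper's point about the conventional applicability condition; at larger n×p the coverage approaches 95%.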

5.
This paper explores how to implement bootstrap resampling for hierarchically structured data, offering a choice of methods for computing confidence intervals of the intraclass correlation coefficient (ICC). Mixed-effects models were used to estimate the ICC for repeated-measures data and two-stage sampling data, the bootstrap was used to estimate its confidence interval, and results under different resampling schemes were compared. In the repeated-measures example, the confidence interval from cluster (whole-group) bootstrap resampling contained the true ICC, whereas ignoring the hierarchical structure of the data yielded an invalid interval. In the two-stage sampling example, the mean ICC from cluster bootstrap resampling deviated least from the original-sample ICC, though the interval was wide. Bootstrap resampling of hierarchical data must therefore respect the data-generating mechanism: resampling at the higher level yields statistics closer to those of the original sample.
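The key point of item 5 — resample whole clusters (the higher-level units), not individual observations — can be illustrated with a stdlib-only sketch; for simplicity the bootstrapped statistic here is the overall mean rather than the ICC, which would require a mixed-model fit:

```python
import random
import statistics

def cluster_bootstrap_mean_ci(clusters, alpha=0.05, reps=2000, seed=3):
    """Percentile-bootstrap CI for the overall mean of hierarchical data,
    resampling whole clusters with replacement so the within-cluster
    correlation structure is preserved."""
    rng = random.Random(seed)
    k = len(clusters)
    stats = []
    for _ in range(reps):
        sample = rng.choices(clusters, k=k)            # resample clusters
        pooled = [v for cluster in sample for v in cluster]
        stats.append(statistics.fmean(pooled))
    stats.sort()
    lo = stats[int(reps * alpha / 2)]
    hi = stats[int(reps * (1 - alpha / 2)) - 1]
    return lo, hi
```

Resampling individual observations instead would treat correlated within-cluster values as independent and typically produce an interval that is too narrow — the "invalid interval" the abstract warns about.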

6.
Computing confidence intervals for the median with the bootstrap method   Cited: 7 (self-citations: 2, other citations: 5)
In clinical trials, besides knowing whether the efficacy of two groups differs, one also wants to know the size of the difference. Hypothesis testing answers whether a difference exists but not how large it actually is. ICH GCP requires that, in addition to the P value from hypothesis testing, the statistical analysis report give confidence intervals for the statistical inferences. For normally distributed data, a confidence interval can be obtained from the mean and its standard error. When the distribution of the data is unknown, however, the median becomes the better measure of central tendency, yet the confidence interval of the median is difficult to compute. In this situation, bootstrap resampling becomes a good way to estimate a confidence interval for the median. In some circumstances, when some information is not...
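A percentile-bootstrap confidence interval for the median, as the abstract describes, can be sketched in a few lines (the replication count and seed are illustrative):

```python
import random
import statistics

def bootstrap_median_ci(data, alpha=0.05, reps=5000, seed=42):
    """Percentile-bootstrap CI for the median: resample the data with
    replacement, compute the median of each resample, and take the
    empirical quantiles of the resampled medians."""
    rng = random.Random(seed)
    n = len(data)
    meds = sorted(statistics.median(rng.choices(data, k=n))
                  for _ in range(reps))
    lo = meds[int(reps * alpha / 2)]
    hi = meds[int(reps * (1 - alpha / 2)) - 1]
    return lo, hi
```

Unlike the mean-and-standard-error interval, this requires no distributional assumption, which is why it suits data whose distribution is unknown.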

7.
A comparative study of four methods for computing confidence intervals of a population proportion   Cited: 1 (self-citations: 3, other citations: 1)
刘沛 《中国卫生统计》2005,22(6):354-358
Objective: To compare the precision, coverage, and relative error of 95% confidence intervals for a population proportion computed by the first-order approximation, the corrected first-order approximation, the second-order approximation, and the exact binomial method, and to discuss the applicability conditions and caveats of the first-order and corrected first-order approximations. Methods: A Monte Carlo sampling program written in SAS was used to compare the coverage of the four methods; with the exact method as the standard, the precision and relative error of the other methods were computed. Results: The corrected first-order approximation had coverage and precision similar to the exact method, with markedly smaller relative error than the first-order and second-order approximations. The second-order approximation was clearly worse than the exact method in both coverage and precision, with substantial relative error. Conclusion: The corrected first-order approximation is recommended for interval estimation of a proportion.

8.
After reviewing the statistical principles behind sample-size calculation for lot quality assurance sampling (LQAS), this paper uses the hypergeometric distribution to compute the sample size for single-stage LQAS under given conditions, demonstrates the principle and implementation of binomial Monte Carlo simulation in Excel and SAS, and completes Monte Carlo estimation of the LQAS sample size with the SAS hypergeometric random-number function. The single-stage LQAS sample sizes obtained by hypergeometric calculation and by Monte Carlo simulation agree with existing LQAS sample-size tables. Monte Carlo estimation of LQAS sample sizes embodies a deep statistical idea yet is computationally simple and easy to understand; LQAS can be widely applied for rapid assessment in public health.
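The abstract's core idea — drawing without replacement, so that the number of positives in the sample follows a hypergeometric distribution — can be sketched with the standard library alone; the lot sizes and accept-if-at-most-d decision rule below are illustrative, not values from the LQAS tables:

```python
import random

def lqas_accept_prob(N, D, n, d, reps=5000, seed=7):
    """Monte Carlo estimate of the probability that a lot of size N
    containing D positives is 'accepted' under a single-stage LQAS rule:
    draw n units without replacement (hypergeometric) and accept the lot
    iff at most d positives are found."""
    rng = random.Random(seed)
    lot = [1] * D + [0] * (N - D)
    accepted = sum(sum(rng.sample(lot, n)) <= d for _ in range(reps))
    return accepted / reps
```

Sweeping n (and d) until the acceptance probabilities at the chosen upper and lower prevalence thresholds meet the target error rates reproduces the sample-size search the paper describes.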

9.
Objective: To estimate the distribution of the person-time-based relative risk (RR) using Monte Carlo simulation. Methods: Using a worked example, an RR model was constructed, Latin hypercube sampling was applied, probability distributions were fitted, and several methods of computing the RR confidence interval were compared. Results: The simulated RR frequency distribution fit the Pearson5, Lognormal, Gamma, and InvGauss distributions, with Pearson5 fitting best. The simulated 95% confidence interval for RR was broadly comparable to the values from the statistic's formula, the Wald method, and the score method, though both its upper and lower limits were slightly smaller. Conclusion: Monte Carlo simulation combined with Latin hypercube sampling achieves distribution estimation for the person-time-based relative risk, and the approach can be applied to more complex parameter-distribution estimation.

10.
This paper continues the series "Sample Size Estimation and Its Implementation in nQuery + nTerim and SAS: Comparison of Means" [1-7], published in this journal in 2012 by Professor Chen Pingyan's team at Southern Medical University. The earlier papers covered continuous and ordinal variables; this one addresses discrete variables, namely sample-size estimation for comparing the means of two groups under Poisson and negative binomial distributions. Formula and example numbering follows the earlier series to preserve the original structure.

11.
The confidence interval (CI) on the population average (PA) odds ratio (OR) is a useful measure of agreement among different diagnostic methods when no gold standard is available. It can be calculated by the repeated measures logistic regression procedure (GENMOD, SAS). We compare the width of CIs from paired and independent samples with an identical number of measurements and an identical probability of positive response among them. For two and three diagnostic methods with binomial endpoints, the best performing sampling strategy is analytically described. The asymptotic formulae of the ratio of the CI widths for paired and independent samples are provided. We numerically study the dependence of the width of the CIs on the number of positive concordant outcomes. The width of CIs from independent samples is an increasing function of the sample size with a saturation asymptote and rather weak dependence on the argument. The width of CIs from paired samples is a decreasing function of the sample size with a saturation asymptote and significant dependence on the argument when the sample size is small. If curves for paired and independent samples intersect, a critical sample size exists. At this point, a small change in the sample size can reverse the choice of the best performing sampling policy. We numerically validated the robustness of the critical point to variations of the conditional OR.

12.
The two-test two-population model, originally formulated by Hui and Walter for estimating test accuracy and prevalence, assumes conditionally independent tests, constant accuracy across populations, and binomial sampling. The binomial assumption is incorrect if all individuals in a population (e.g. a child-care centre, a village in Africa, or a cattle herd) are sampled, or if the sample size is large relative to the population size. In this paper, we develop statistical methods for evaluating diagnostic test accuracy and estimating prevalence from finite-sample data in the absence of a gold standard. Moreover, two tests are often applied simultaneously to obtain a 'joint' testing strategy with either higher overall sensitivity or higher overall specificity than either of the two tests considered singly. Sequential versions of such strategies are often applied in order to reduce the cost of testing. We thus discuss joint (simultaneous and sequential) testing strategies and inference for them. Using the developed methods, we analyse two real and one simulated data sets, and we compare 'hypergeometric' and 'binomial-based' inferences. Our findings indicate that the posterior standard deviations for prevalence (but not sensitivity and specificity) based on finite population sampling tend to be smaller than their counterparts for infinite population sampling. Finally, we make recommendations about how small the sample size should be relative to the population size to warrant use of the binomial model for prevalence estimation.

13.
BACKGROUND: The most commonly used measures of association in cross-sectional studies are the odds ratio (OR) and the prevalence ratio (PR). Some cross-sectional epidemiologic studies describe their results as ORs but use the definition of the PR. The main aim of this study was to describe and compare different calculation methods for the PR described in the literature, in two situations (prevalence < 20% and prevalence > 20%). MATERIAL AND METHODS: A literature search was carried out to determine the most commonly used techniques for estimating the PR. The four most frequent methods were: 1) obtaining the OR using non-conditional logistic regression but applying the correct definition; 2) using Breslow-Cox regression; 3) using a generalized linear model with logarithmic link and binomial family; and 4) using the conversion formula from OR to PR. The models found were replicated for both situations (prevalence below 20% and above 20%) using real data from the 1994 Catalan Health Interview Survey. RESULTS: When prevalence was low, no substantial differences were observed in either the estimators or the standard errors obtained with the four procedures. When prevalence was high, differences were found between estimators and confidence intervals, although all the measures maintained statistical significance. CONCLUSION: All the methods have advantages and disadvantages. Individual researchers should decide which technique is the most appropriate for their data and should be consistent when using an estimator and interpreting it.
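Method 4 above, converting an OR to a PR, is commonly done with the formula PR = OR / (1 − P0 + P0·OR), where P0 is the outcome prevalence in the unexposed group (the formula often attributed to Zhang and Yu; the function name is illustrative):

```python
def or_to_pr(odds_ratio, p0):
    """Convert an odds ratio to a prevalence (risk) ratio, given the
    outcome prevalence p0 in the unexposed/reference group."""
    return odds_ratio / (1 - p0 + p0 * odds_ratio)
```

When P0 is small, the denominator approaches 1 and the PR approaches the OR, which is why the four methods barely differed in the low-prevalence situation.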

14.
We investigate methods for the construction of confidence intervals for a proportion in a stratified two-stage sampling design with few events occurring in a small number of large, unequal size strata. The critical aspect is the incorporation of the weighting scheme into the construction of a single overall confidence interval. With small numbers of events, the binomial based methods may be inadequate since the normal approximation is not valid. Computer simulations compare coverage probability and bias for five methods of obtaining confidence intervals for proportions by combining: (1) binomial variances; (2) confidence intervals based on the F-distribution approximation to the cumulative binomial; (3) the binomial variance method with exact confidence limits when a zero prevalence occurs in any stratum; (4) confidence intervals based on the F-distribution using a rescaling factor; and (5) the binomial variance method with exact confidence limits using a rescaling factor. The method that performs best in terms of coverage probability is the combination of stratum specific confidence intervals based on the F-distribution using a rescaling factor. The methods involving the binomial variance tend to be negatively biased and the methods based on the F-distribution tend to be positively biased. Application of these methods with data from a study of adolescent depression that employs a stratified two-stage sampling design is consistent with these results.

15.
BACKGROUND: The purpose of this study was to evaluate the bias and precision of 46 methods published from 1953 to 2000 for estimating resting energy expenditure (REE) of thermally injured patients. METHODS: Twenty-four adult patients with ≥20% body surface area burn admitted to a burn center who required specialized nutrition support and who had their REE measured via indirect calorimetry (IC) were evaluated. Patients with morbid obesity, human immunodeficiency virus, malignancy, pregnancy, hepatic or renal failure, neuromuscular paralysis, or those requiring a FiO2 >50% or positive end expiratory pressure (PEEP) ≥10 cm H2O were excluded. One steady-state measured REE measurement (MEE) was obtained per patient. The methods of Sheiner and Beal were used to assess bias and precision of these methods. The formulas were considered unbiased if the 95% confidence interval (CI) for the error (kilocalories per day) intersected 0, and were considered precise if the 95% CI for the absolute error (%) was within 15% of MEE. RESULTS: MEE was 2780±567 kcal/d, or 158%±34% of the Harris Benedict equations. None of the methods was precise (≤15% CI error). Over one-half (57%) of the 46 methods had a 95% confidence interval error >30% of the MEE. Forty-eight percent of the methods were unbiased, 33% were biased toward overpredicting MEE, and 19% consistently underpredicted MEE. The pre-1980s methods more frequently overpredicted MEE compared with the 1990 to 2000 (p < .01) and 1980 to 1989 (p < .05) published methods, respectively. The most precise unbiased methods for estimating MEE were those of Milner (1994) at a mean error of 16% (CI of 10% to 22%), Zawacki (1970) with a mean error of 16% (CI of 9% to 23%), and Xie (1993) at a mean error of 18% (CI of 12% to 24%). The "conventional 1.5 times the Harris Benedict equations" was also unbiased and had a mean error of 19% (CI of 9% to 29%).
CONCLUSIONS: Thermally injured patients are variably hypermetabolic, and energy expenditure cannot be precisely predicted. If IC is not available, the most precise, unbiased methods were those of Milner (1994), Zawacki (1970), and Xie (1993).

16.
OBJECTIVE: When studies report proportions such as sensitivity or specificity, it is customary to meta-analyze them using the DerSimonian and Laird random effects model. This method approximates the within-study variability of the proportion by a normal distribution, which may lead to bias for several reasons. Alternatively, an exact likelihood approach based on the binomial within-study distribution can be used. This method can easily be performed in standard statistical packages. We investigate the performance of the standard method and the alternative approach. STUDY DESIGN AND SETTING: We compare the two approaches through a simulation study, in terms of bias, mean-squared error, and coverage probabilities. We varied the size of the overall sensitivity or specificity, the between-studies variance, the within-study sample sizes, and the number of studies. The methods are illustrated using a published meta-analysis data set. RESULTS: The exact likelihood approach always performs better than the approximate approach and gives unbiased estimates. The coverage probability, in particular for the profile likelihood, is also reasonably acceptable. In contrast, the approximate approach produces large bias with very poor coverage probability in many cases. CONCLUSION: The exact likelihood approach is the method of preference and should be used whenever feasible.

17.
Four interval estimation methods for the ratio of marginal binomial proportions are compared in terms of expected interval width and exact coverage probability. Two new methods are proposed that are based on combining two Wilson score intervals. The new methods are easy to compute and perform as well or better than the method recently proposed by Nam and Blackwelder. Two sample size formulas are proposed to approximate the sample size required to achieve an interval estimate with desired confidence level and width.
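The Wilson score interval that the two new methods combine can be computed directly; a minimal sketch (the hard-coded z corresponds to 95% confidence):

```python
import math

def wilson_ci(x, n, z=1.959963985):
    """Wilson score confidence interval for a binomial proportion x/n."""
    phat = x / n
    denom = 1 + z**2 / n
    centre = (phat + z**2 / (2 * n)) / denom        # shrunk toward 1/2
    half = (z / denom) * math.sqrt(phat * (1 - phat) / n + z**2 / (4 * n**2))
    return centre - half, centre + half
```

For 3 positives out of 10 this gives roughly (0.108, 0.603). Unlike the Wald interval, it never escapes [0, 1] and behaves sensibly at x = 0 or x = n, which is one reason score intervals are attractive building blocks for ratio methods.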

18.
Propensity score methods are increasingly being used to estimate the effects of treatments on health outcomes using observational data. There are four methods for using the propensity score to estimate treatment effects: covariate adjustment using the propensity score, stratification on the propensity score, propensity-score matching, and inverse probability of treatment weighting (IPTW) using the propensity score. When outcomes are binary, the effect of treatment on the outcome can be described using odds ratios, relative risks, risk differences, or the number needed to treat. Several clinical commentators suggested that risk differences and numbers needed to treat are more meaningful for clinical decision making than are odds ratios or relative risks. However, there is a paucity of information about the relative performance of the different propensity-score methods for estimating risk differences. We conducted a series of Monte Carlo simulations to examine this issue. We examined bias, variance estimation, coverage of confidence intervals, mean-squared error (MSE), and type I error rates. A doubly robust version of IPTW had superior performance compared with the other propensity-score methods. It resulted in unbiased estimation of risk differences, treatment effects with the lowest standard errors, confidence intervals with the correct coverage rates, and correct type I error rates. Stratification, matching on the propensity score, and covariate adjustment using the propensity score resulted in minor to modest bias in estimating risk differences. Estimators based on IPTW had lower MSE compared with other propensity-score methods. Differences between IPTW and propensity-score matching may reflect that these two methods estimate the average treatment effect and the average treatment effect for the treated, respectively. Copyright © 2010 John Wiley & Sons, Ltd.
