Similar Documents
18 similar documents found (search time: 140 ms)
1.
Objective: To explore order-statistics regression and apply it to estimating heavy-metal residues in food. Methods: Order-statistics regression was applied to the non-detect values in the national food chemical contaminant monitoring database; a SAS program was written to compute the order statistics, impute the non-detects, and estimate residue levels. Results: Compared with the classical substitution method, the cadmium residue levels estimated by order-statistics regression were more robust and more convenient for subsequent analysis. Conclusion: Order-statistics regression is a recommended method for handling non-detect data, although further applied research on its use is needed.
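The order-statistics (regression-on-order-statistics, ROS) idea in abstracts 1-2 can be sketched in a few lines of Python (the papers themselves use SAS; `ros_impute`, the Blom plotting positions, and the single shared detection limit are illustrative assumptions, not the authors' code): detected values are regressed on their normal scores, and the fitted line supplies imputations for the non-detects.

```python
import numpy as np
from scipy import stats

def ros_impute(values, detected, dist=stats.norm):
    """Regression on order statistics (ROS), single-detection-limit sketch:
    fit log(concentration) ~ normal score on the detected observations only,
    then impute the non-detects from the fitted line at their own quantiles.
    Production tools (e.g. the NADA package) handle multiple limits."""
    values = np.asarray(values, dtype=float)
    detected = np.asarray(detected, dtype=bool)
    n = len(values)
    order = np.argsort(values)            # non-detects (at the LOD) sort low
    ranks = np.empty(n)
    ranks[order] = np.arange(1, n + 1)
    pp = (ranks - 0.375) / (n + 0.25)     # Blom plotting positions
    q = dist.ppf(pp)                      # normal scores
    slope, intercept, *_ = stats.linregress(q[detected], np.log(values[detected]))
    out = values.copy()
    out[~detected] = np.exp(intercept + slope * q[~detected])
    return out
```

Detected values pass through unchanged; only the censored positions receive imputed values from the lower tail of the fitted lognormal.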

2.
Objective: To compare the estimation performance of order-statistics regression against classical substitution for trace-measurement data containing non-detects. Methods: Bootstrap-based simulation experiments generated data following normal and non-normal distributions; statistics were estimated with order-statistics regression and with classical substitution, compared against the estimates from the corresponding complete data, and the relative estimation error was used as the evaluation criterion. Results: When the data followed normal or lognormal distributions, order-statistics regression clearly outperformed simple substitution; for non-normally distributed data it showed no clear advantage. Conclusion: Order-statistics regression is a more effective and robust method for handling left-censored data, but when estimating classical mean-level statistics an appropriate normality assumption must be considered.

3.
Assessment of dietary cadmium exposure among adults in Shanghai   Cited by 2 (0 self-citations, 2 by others)
[Objective] To obtain baseline data on dietary cadmium exposure among Shanghai adults and assess the associated risk. [Methods] Cadmium content was monitored in 1,680 market food samples across 16 categories using stratified random sampling; dietary intake of 1,368 Shanghai adults was surveyed by multi-stage random sampling; the WHO-recommended point-estimate method for dietary exposure to chemical contaminants in food was applied to assess dietary cadmium exposure of Shanghai adults. [Results] Mean weekly dietary cadmium exposure was 0.1494 mg/person, or 34.56% of the provisional tolerable weekly intake (PTWI). The median was 0.0324 mg/person, or 7.50% of PTWI. Weekly exposure at the P90 dietary intake level and the extreme P90 level was 0.2879 and 0.9372 mg/person, or 66.59% and 216.80% of PTWI, respectively. [Conclusion] Under normal conditions, dietary cadmium exposure of Shanghai adults is below the PTWI, but further reduction is still warranted.
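The point-estimate comparison against the PTWI used throughout these exposure studies is simple arithmetic. A hedged sketch (the 0.007 mg/kg bw/week cadmium PTWI in force at the time, the ~62 kg reference body weight, and the function name are all assumptions, since the abstract reports per-person figures only):

```python
def pct_of_ptwi(weekly_intake_mg, body_weight_kg, ptwi_mg_per_kg=0.007):
    """Weekly dietary exposure expressed as a percentage of the PTWI.
    0.007 mg/kg bw/week was the JECFA cadmium PTWI at the time (assumed);
    the reference body weight is likewise an assumption."""
    return weekly_intake_mg / (body_weight_kg * ptwi_mg_per_kg) * 100.0
```

With a roughly 62 kg reference weight this reproduces the abstract's 34.56% for 0.1494 mg/person/week.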

4.
Both the odds ratio (OR) and the relative risk (RR) are common measures of the association between an exposure and an outcome. In cohorts with rare outcomes, the OR is often used as an approximation to the RR, although the RR is more directly interpretable. This study compared RR and OR estimates from different multivariable regression models in cohort studies of rare outcomes, to inform both the choice of regression method and which association measure to report preferentially. Using data from the China birth cohort, with birth defects of all types as the outcome, mode of conception as the exposure, and maternal age, family history of birth defects, and other evidence-supported variables as covariates, logistic, log-binomial, and Poisson regressions were fitted and the point estimates and 95% CIs of OR and RR were compared. Results showed that in this rare-outcome cohort the OR from logistic regression approximated the RRs from log-binomial and Poisson regression, but the log-binomial and Poisson effect estimates were closer to 1.00 with narrower 95% CIs, although possibly subject to non-convergence or overdispersion. For cohort studies of rare outcomes, where applicable, it is recommended to preferentially report estimates based on log-b...
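The OR-versus-RR contrast at the heart of abstract 4 is easy to reproduce from a 2×2 table; this illustrative helper (not from the paper) shows the OR approximating the RR only when the outcome is rare:

```python
def or_rr(a, b, c, d):
    """OR and RR from a 2x2 table:
       exposed:   a events, b non-events
       unexposed: c events, d non-events."""
    rr = (a / (a + b)) / (c / (c + d))
    orr = (a * d) / (b * c)
    return orr, rr

# Rare outcome (risks 0.3% vs 0.15%): OR closely approximates RR
odds_ratio, risk_ratio = or_rr(30, 9970, 15, 9985)   # RR = 2.0, OR ≈ 2.003
```

With a common outcome (risks 40% vs 20%), the same true RR of 2.0 yields an OR of about 2.67, which is exactly the divergence the abstract warns about.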

5.
[Objective] To investigate heavy-metal cadmium contamination of market aquatic products in a district of Shanghai and assess residents' dietary cadmium exposure risk from these products. [Methods] During 2018-2019, 397 animal-origin aquatic product samples were collected by multi-stage random sampling from farmers' markets and supermarkets in the district's 11 townships/sub-districts. Cadmium content was measured per national standards; the single-factor pollution index (Pi) was used to evaluate contamination; combined with the district's 2013 resident dietary survey, a point-estimate assessment of dietary cadmium intake was performed, and the margin of safety (MOS) for each aquatic category was calculated to evaluate health risk. [Results] Cadmium was detected in 75.06% of the 397 samples; 10 samples exceeded the limit, all crabs (2.25%). Median (P50) cadmium levels in crabs and bivalves were 140.0 and 90.0 μg/kg, higher than the 11.0, 7.6, and 3.8 μg/kg in shrimp, gastropods, and marine fish (χ²=186.41, P<0.005); no cadmium was detected in freshwater fish. The Pi for market crabs was 0.280, with all other categories below 0.100, indicating that crustacean crabs were lightly polluted and the remaining categories unpolluted. Freshwater fish was the most consumed aquatic product, followed by shrimp, marine fish, and crabs; dietary exposure assessment showed an MOS above 1 for every category, so the health risk from cadmium via market aquatic products is acceptable. [Conclusion] Residents' dietary cadmium risk from market aquatic products is low, but cadmium contamination of crustacean crabs warrants vigilance: crab intake should be limited, and the relevant authorities should strengthen monitoring and management.

6.
Tong Feng, Chen Kun. 《中国卫生统计》 (Chinese Journal of Health Statistics), 2006, 23(5): 410-412
Objective: To introduce the modified Poisson regression model for computing precise interval estimates of the adjusted relative risk of an exposure in prospective studies with common outcomes. Methods: The robust (sandwich) variance estimator was used to correct the estimated variance of the relative risk (RR), with modified Poisson regression implemented via the REPEATED statement of the GENMOD procedure in SAS. Five hypothetical datasets were also analyzed with different statistical methods for comparison. Results: With the stratified Mantel-Haenszel method as the reference standard, modified Poisson regression gave satisfactory point and interval estimates of the adjusted RR; ordinary Poisson regression gave conservative interval estimates; and the adjusted OR from logistic regression deviated markedly from the true RR. Conclusion: Modified Poisson regression is well suited to prospective study data with common outcomes.
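A numpy-only sketch of the modified Poisson idea (Newton-Raphson Poisson fit plus the robust sandwich variance; the paper implements this in SAS PROC GENMOD with a REPEATED statement, and `modified_poisson` is a hypothetical helper name):

```python
import numpy as np

def modified_poisson(X, y, iters=25):
    """Poisson regression with log link on a binary outcome, fitted by
    Newton-Raphson, then the model-based variance is replaced by the robust
    sandwich estimator A^{-1} B A^{-1} so that exp(beta) estimates the RR
    with valid standard errors ("modified Poisson", Zou 2004)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        beta = beta + np.linalg.solve(X.T @ (X * mu[:, None]), X.T @ (y - mu))
    mu = np.exp(X @ beta)
    bread = np.linalg.inv(X.T @ (X * mu[:, None]))       # A^{-1}
    meat = X.T @ (X * ((y - mu) ** 2)[:, None])          # B
    robust_se = np.sqrt(np.diag(bread @ meat @ bread))   # sandwich SEs
    return beta, robust_se
```

On simulated data with a true RR of 2 for a common outcome, exp(beta) recovers the RR, where the OR from logistic regression would overshoot.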

7.
Objective: To obtain baseline data on dietary lead and cadmium exposure of Chengdu residents and assess the associated risk. Methods: Lead and cadmium content in Chengdu market foods was monitored; dietary intakes were taken from the Chengdu portion of the 2002 China National Nutrition and Health Survey; the WHO-recommended dietary exposure assessment method for chemical contaminants in food was applied. Results: Mean weekly dietary lead exposure was 0.0069 mg/kg bw, or 27.6% of the provisional tolerable weekly intake (PTWI); cadmium exposure was 0.0051 mg/kg bw, or 72.86% of the PTWI. Conclusion: Dietary cadmium exposure of Chengdu residents is higher than lead exposure; both are normally below the PTWI, but further reduction remains desirable.

8.
Objective: To characterize cadmium contamination in commonly consumed foods in Guangzhou and assess residents' dietary exposure risk. Methods: Cadmium content in 11 food categories was monitored during 2013-2015; combined with the 2011 Guangzhou resident dietary intake survey, the WHO-recommended dietary exposure assessment method for chemical contaminants in food was applied. Results: Among 3,999 food samples, the mean cadmium content was 0.1889 mg/kg; the P50 and P95 were 0.01 and 0.751 mg/kg; detected values ranged from 0.0005 to 7.83 mg/kg. The overall detection rate was 75.04% (3,001/3,999) and the exceedance rate 1.50% (60/3,999). Mean weekly dietary cadmium exposure was 0.0033 mg/kg bw, or 47.14% of the tolerable intake (PTWI). Rice, laver, vegetables, and aquatic products were the main sources, together contributing 91.66% of total dietary cadmium; for high consumers (P95), however, total weekly exposure was 0.0116 mg/kg bw, or 165.71% of the PTWI. Conclusion: Dietary cadmium exposure of most Guangzhou residents is within the safety limit, but the high-intake group exceeds the PTWI, a potential health risk that deserves attention.

9.
A method for estimating non-detect values in reference-range studies   Cited by 1 (0 self-citations, 1 by others)
In studies of human reference values, the limits of measurement techniques mean that some samples in a set often go undetected, and whether these non-detects can be estimated reasonably bears directly on the validity of the entire analysis. In practice, some investigators substitute the smallest detected value in the dataset, or half the instrument's detection limit, for the non-detects; Tian Fengdiao described three estimation methods: maximum likelihood, the four-point two-mean method, and rank-based estimation. This paper applies the EM algorithm of Dempster, Laird and Rubin (1977) for general incomplete data: under a suitable parametric distributional assumption for the measurements, estimates of the non-detects are obtained, after which an appropriate method is used to estimate the reference range.
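A minimal version of the EM approach for a common detection limit, assuming normality as the abstract does (the conditional-moment formulas are the standard truncated-normal results; `em_censored_normal` is an illustrative sketch, not the author's code):

```python
import numpy as np
from scipy import stats

def em_censored_normal(obs, n_cens, lod, iters=200):
    """EM estimates of (mu, sigma) for normal data with n_cens observations
    left-censored at a common limit `lod`, in the spirit of the
    Dempster-Laird-Rubin algorithm.  E-step: conditional moments of a normal
    truncated above at the LOD; M-step: closed-form normal MLEs."""
    obs = np.asarray(obs, dtype=float)
    n = len(obs) + n_cens
    mu, sigma = obs.mean(), obs.std()               # crude starting values
    for _ in range(iters):
        a = (lod - mu) / sigma
        lam = stats.norm.pdf(a) / stats.norm.cdf(a)      # inverse Mills ratio
        e1 = mu - sigma * lam                             # E[X   | X < lod]
        e2 = sigma**2 * (1 - lam * (lam + a)) + e1**2     # E[X^2 | X < lod]
        mu = (obs.sum() + n_cens * e1) / n
        sigma = np.sqrt(max((np.square(obs).sum() + n_cens * e2) / n - mu**2, 1e-12))
    return float(mu), float(sigma)
```

Each iteration replaces the censored observations by their conditional moments under the current fit, then re-solves the complete-data normal MLE, converging to the censored-likelihood estimates.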

10.
Objective: To investigate internal trihalomethane (THM) exposure levels during pregnancy and their determinants, providing a scientific basis for exposure assessment of chlorination disinfection by-products in drinking water. Methods: 277 pregnant women admitted for delivery to a hospital in Wuhan, Hubei Province, during July-December 2011 were surveyed by questionnaire; blood concentrations of four THMs (chloroform, bromodichloromethane, dibromochloromethane, and bromoform) were measured by headspace solid-phase microextraction gas chromatography, and generalized linear regression models were used to explore determinants of internal THM exposure. Results: Blood chloroform ranged from 13.08 to 480.26 ng/L (median 70.17 ng/L, detection rate 100%); bromodichloromethane from non-detect to 38.24 ng/L (median 3.30 ng/L, 86.6%); dibromochloromethane from non-detect to 20.92 ng/L (median 0.48 ng/L, 27.4%); bromoform from non-detect to 78.76 ng/L (median 1.41 ng/L, 43.7%); and total THMs from 17.04 to 491.42 ng/L (median 84.86 ng/L). Generalized linear regression showed that blood chloroform and total THMs were both positively associated with age (β=0.007, P=0.039; β=0.007, P=0.016) and negatively associated with boiling drinking water (β=-0.170, P=0.003; β=-0.135, P=0.004). Conclusion: Age and boiling drinking water may influence internal THM exposure during pregnancy.

11.
Studies of determinants of occupational exposure frequently involve left-censored lognormally distributed data, often with repeated measures. Left censoring occurs when observations are below the analytical limit of detection (LOD); repeated measures data results from taking multiple measurements on the same worker. A common method of dealing with this type of data has been to substitute a value (such as LOD/2) for the censored data followed by statistical analysis using the 'usual' methods. Recently, maximum likelihood estimation (MLE) methods have been employed to reduce bias associated with the substitution method. We compared substitution and MLE methods using simulated lognormally distributed exposure data subjected to varying amounts of censoring using two procedures available in SAS: LIFEREG and NLMIXED. In these simulations, the MLE method resulted in less bias and performed well even for censoring up to 80%, whereas the substitution method resulted in considerable bias. We illustrate the NLMIXED procedure using a dataset of chlorpyrifos air measurements collected from termiticide applicators on consecutive days over a 5-day workweek. We provide sample SAS code for several situations including one and two groups, with and without repeated measures, random slopes, and nested random effects.
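A scipy analogue of the MLE step (the study itself uses SAS LIFEREG/NLMIXED; this sketch assumes a single detection limit, no repeated measures, and an illustrative function name): detected values contribute density terms to the likelihood, non-detects contribute Φ(log LOD) terms.

```python
import numpy as np
from scipy import stats, optimize

def censored_lognormal_mle(detected, n_cens, lod):
    """MLE of the log-scale mean and SD of a lognormal sample in which
    n_cens observations fell below the detection limit `lod`.  Parameterized
    with log(sigma) so the optimization is unconstrained."""
    logs = np.log(np.asarray(detected, dtype=float))
    def nll(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)
        ll = stats.norm.logpdf(logs, mu, sigma).sum()
        ll += n_cens * stats.norm.logcdf(np.log(lod), mu, sigma)
        return -ll
    res = optimize.minimize(nll, x0=[logs.mean(), np.log(logs.std())])
    return float(res.x[0]), float(np.exp(res.x[1]))
```

Unlike LOD/2 substitution, this recovers the underlying log-scale parameters without systematic bias, which is the paper's central finding.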

12.
Objective: To outline the concepts of, distinctions between, and computation of standardized rates and adjusted rates, to implement the calculation of directly standardized, indirectly standardized, and adjusted rates with SAS macros, and to output the results as formatted tables directly to an rtf file. Methods: General-purpose SAS macros were written for the computation of directly standardized, indirectly standardized, and adjusted rates. Results: After preparing the raw data and setting the macro parameters, running the macros quickly yields the directly standardized, indirectly standardized, and adjusted rates. Conclusion: The authors' SAS macros for directly standardized, indirectly standardized, and adjusted rates are general, simple, and practical, and are of real utility in epidemiological research.
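The direct and indirect standardization that the macros automate reduce to weighted sums; a plain-Python sketch (function names are illustrative, not from the paper):

```python
def direct_standardized_rate(stratum_rates, standard_pop):
    """Direct standardization: weight each stratum-specific rate of the
    study population by the standard population's share of that stratum."""
    total = sum(standard_pop)
    return sum(r * p for r, p in zip(stratum_rates, standard_pop)) / total

def smr(observed, stratum_pops, standard_rates):
    """Indirect standardization: SMR = observed / expected events, where
    expected events apply the standard rates to the study population."""
    expected = sum(p * r for p, r in zip(stratum_pops, standard_rates))
    return observed / expected
```

For example, stratum rates of 0.01 and 0.02 weighted by equal standard-population strata give a directly standardized rate of 0.015; 30 observed events against 30 expected give an SMR of 1.0.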

13.
Developments in genome-wide association studies and the increasing availability of summary genetic association data have made application of Mendelian randomization relatively straightforward. However, obtaining reliable results from a Mendelian randomization investigation remains problematic, as the conventional inverse-variance weighted method only gives consistent estimates if all of the genetic variants in the analysis are valid instrumental variables. We present a novel weighted median estimator for combining data on multiple genetic variants into a single causal estimate. This estimator is consistent even when up to 50% of the information comes from invalid instrumental variables. In a simulation analysis, it is shown to have better finite-sample Type 1 error rates than the inverse-variance weighted method, and is complementary to the recently proposed MR-Egger (Mendelian randomization-Egger) regression method. In analyses of the causal effects of low-density lipoprotein cholesterol and high-density lipoprotein cholesterol on coronary artery disease risk, the inverse-variance weighted method suggests a causal effect of both lipid fractions, whereas the weighted median and MR-Egger regression methods suggest a null effect of high-density lipoprotein cholesterol that corresponds with the experimental evidence. Both median-based and MR-Egger regression methods should be considered as sensitivity analyses for Mendelian randomization investigations with multiple genetic variants.
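The core of the weighted median estimator is a weighted median over the per-variant causal (ratio) estimates; a sketch (the weights would be inverse variances, and the paper's bootstrap confidence intervals are omitted):

```python
import numpy as np

def weighted_median(estimates, weights):
    """Weighted median of per-variant causal estimates: sort the estimates,
    accumulate normalized weights at interval midpoints, and interpolate at
    cumulative weight 0.5.  Consistent when valid instruments carry more
    than half of the total weight."""
    order = np.argsort(estimates)
    b = np.asarray(estimates, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    w = w / w.sum()
    cum = np.cumsum(w) - 0.5 * w        # midpoint cumulative weights
    return float(np.interp(0.5, cum, b))
```

With equal weights this is the ordinary median, so a minority of wildly invalid instruments cannot drag the estimate away, unlike an inverse-variance weighted mean.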

14.

Background

Environmental and biomedical researchers frequently encounter laboratory data constrained by a lower limit of detection (LOD). Commonly used methods to address these left-censored data, such as simple substitution of a constant for all values < LOD, may bias parameter estimation. In contrast, multiple imputation (MI) methods yield valid and robust parameter estimates and explicit imputed values for variables that can be analyzed as outcomes or predictors.

Objective

In this article we expand distribution-based MI methods for left-censored data to a bivariate setting, specifically, a longitudinal study with biological measures at two points in time.

Methods

We have presented the likelihood function for a bivariate normal distribution taking into account values < LOD as well as missing data assumed missing at random, and we use the estimated distributional parameters to impute values < LOD and to generate multiple plausible data sets for analysis by standard statistical methods. We conducted a simulation study to evaluate the sampling properties of the estimators, and we illustrate a practical application using data from the Community Participatory Approach to Measuring Farmworker Pesticide Exposure (PACE3) study to estimate associations between urinary acephate (APE) concentrations (indicating pesticide exposure) at two points in time and self-reported symptoms.

Results

Simulation study results demonstrated that imputed and observed values together were consistent with the assumed and estimated underlying distribution. Our analysis of PACE3 data using MI to impute APE values < LOD showed that urinary APE concentration was significantly associated with potential pesticide poisoning symptoms. Results based on simple substitution methods were substantially different from those based on the MI method.

Conclusions

The distribution-based MI method is a valid and feasible approach to analyze bivariate data with values < LOD, especially when explicit values for the nondetections are needed. We recommend the use of this approach in environmental and biomedical research.
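A toy univariate version of distribution-based MI for values < LOD with Rubin's-rules pooling (the paper's method is bivariate and likelihood-based; here the observed-data moments stand in for the MLE step and the helper name is hypothetical, so this is only an illustration of the imputation-and-pooling mechanics):

```python
import numpy as np
from scipy import stats

def mi_censored_mean(obs, n_cens, lod, m=20, rng=None):
    """Impute each non-detect from a fitted normal truncated above at the
    LOD, complete the data m times, and pool the m means and their variances
    with Rubin's rules to get a point estimate and a total standard error."""
    rng = np.random.default_rng(rng)
    obs = np.asarray(obs, dtype=float)
    mu0, sd0 = obs.mean(), obs.std()            # crude stand-in for the MLE
    p_c = stats.norm.cdf((lod - mu0) / sd0)     # fitted mass below the LOD
    n = len(obs) + n_cens
    means, variances = [], []
    for _ in range(m):
        u = p_c * (1.0 - rng.random(n_cens))    # uniforms in (0, p_c]
        draws = mu0 + sd0 * stats.norm.ppf(u)   # truncated-normal draws < LOD
        full = np.concatenate([obs, draws])
        means.append(full.mean())
        variances.append(full.var(ddof=1) / n)  # within-imputation variance
    qbar = float(np.mean(means))
    total_var = np.mean(variances) + (1 + 1 / m) * np.var(means, ddof=1)  # Rubin
    return qbar, float(np.sqrt(total_var))
```

The pooled variance combines within-imputation sampling variance with between-imputation variance, which is what makes MI inference honest about the imputation uncertainty.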

15.
The problem of assessing occupational exposure using the mean or an upper percentile of a lognormal distribution is addressed. Inferential methods for constructing an upper confidence limit for an upper percentile of a lognormal distribution and for finding confidence intervals for a lognormal mean based on samples with multiple detection limits are proposed. The proposed methods are based on the maximum likelihood estimates. They perform well with respect to coverage probabilities as well as power and are applicable to small sample sizes. The proposed approaches are also applicable for finding confidence limits for the percentiles of a gamma distribution. Computational details and a source for the computer programs are given. An advantage of the proposed approach is the ease of computation and implementation. Illustrative examples with real data sets and a simulated data set are given.

16.
Comparison of SAS methods for conditional logistic regression   Cited by 1 (0 self-citations, 1 by others)
In etiological research, 1:1 matched-pair logistic regression is commonly used to examine the role of risk factors. SAS offers many ways to fit conditional logistic regression; this paper introduces and compares several common approaches and finds that a SAS macro handles the problem very conveniently.

17.
Monte Carlo simulations were used to evaluate statistical methods for estimating 95% upper confidence limits of mean constituent concentrations for left-censored data with nonuniform detection limits. Two primary scenarios were evaluated: data sets with 15 to 50% nondetected samples and data sets with 51 to 80% nondetected samples. Sample size and the percentage of nondetected samples were allowed to vary randomly to generate a variety of left-censored data sets. All statistical methods were evaluated for efficacy by comparing the 95% upper confidence limits for the left-censored data with the 95% upper confidence limits for the noncensored data and by determining percent coverage of the true mean (μ). For data sets with 15 to 50% nondetected samples, the trimmed mean, Winsorization, Aitchison's, and log-probit regression methods were evaluated. The log-probit regression was the only method that yielded sufficient coverage (99-100%) of μ, as well as a high correlation coefficient (r2 = 0.99) and small average percent residuals (-0.1%) between upper confidence limits for censored versus noncensored data sets. For data sets with 51 to 80% nondetected samples, a bounding method was effective (r2 = 0.96 - 0.99, average residual = -5% to -7%, 95-98% coverage of μ), except when applied to distributions with low coefficients of variation (standard deviation/μ < 0.5). Thus, the following recommendations are supported by this research: for data sets with 15 to 50% nondetected samples, the log-probit regression method and use of the Chebyshev theorem to estimate 95% upper confidence limits; for data sets with 51 to 80% nondetected samples, the bounding method and use of the Chebyshev theorem to estimate 95% upper confidence limits.
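The Chebyshev-theorem upper confidence limit recommended above is a one-liner: for a sample mean x̄ with standard deviation s and sample size n, the distribution-free (1−α) upper limit is x̄ + sqrt(1/α − 1)·s/√n (the function name is illustrative):

```python
import math

def chebyshev_ucl(mean, sd, n, conf=0.95):
    """Distribution-free upper confidence limit for the mean via the
    Chebyshev inequality: UCL = mean + sqrt(1/alpha - 1) * sd / sqrt(n).
    Conservative by construction, which is why it pairs well with
    ROS/bounding estimates for heavily censored data."""
    alpha = 1 - conf
    return mean + math.sqrt(1 / alpha - 1) * sd / math.sqrt(n)
```

For x̄ = 10, s = 2, n = 25 at 95% confidence this gives 10 + √19 · 0.4 ≈ 11.744, noticeably wider than the normal-theory limit of about 10.68.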

18.
Propensity-score matching has been used widely in observational studies to balance confounders across treatment groups. However, whether matched-pairs analyses should be used as a primary approach is still in debate. We compared the statistical power and type 1 error rate for four commonly used methods of analyzing propensity-score–matched samples with continuous outcomes: (1) an unadjusted mixed-effects model, (2) an unadjusted generalized estimating method, (3) simple linear regression, and (4) multiple linear regression. Multiple linear regression had the highest statistical power among the four competing methods. We also found that the degree of intraclass correlation within matched pairs depends on the dissimilarity between the coefficient vectors of confounders in the outcome and treatment models. Multiple linear regression is superior to the unadjusted matched-pairs analyses for propensity-score–matched data.
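The matching step that precedes the four compared analyses can be sketched as greedy 1:1 nearest-neighbor matching on the propensity score within a caliper (the paper does not specify this exact algorithm; it is a common default, and the function name is illustrative):

```python
import numpy as np

def greedy_match(ps_treated, ps_control, caliper=0.05):
    """Greedy 1:1 nearest-neighbor propensity-score matching: each treated
    unit takes the closest not-yet-used control whose score lies within the
    caliper; treated units with no eligible control go unmatched."""
    ps_control = np.asarray(ps_control, dtype=float)
    used, pairs = set(), []
    for i, p in enumerate(ps_treated):
        dists = np.abs(ps_control - p)
        for j in map(int, np.argsort(dists)):   # controls by closeness
            if j not in used and dists[j] <= caliper:
                pairs.append((i, j))
                used.add(j)
                break
    return pairs
```

The resulting pairs define the clusters whose intraclass correlation drives the mixed-effects versus regression comparison in the abstract.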
