Similar Articles (20 results)
1.
In this paper, we are concerned with the estimation of the discrepancy between two treatments when right-censored survival data are accompanied by covariates. Conditional confidence intervals given the available covariates are constructed for the difference between or ratio of two median survival times under the unstratified and stratified Cox proportional hazards models, respectively. The proposed confidence intervals provide information about the difference in survivorship for patients with common covariates but in different treatments. The results of a simulation study of the coverage probability and expected length of the confidence intervals suggest using the interval designed for the stratified Cox model when the data fit that model reasonably well. When the stratified Cox model is not feasible, however, the interval designed for the unstratified Cox model is recommended. The use of the confidence intervals is finally illustrated with an HIV+ data set.
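As a rough companion to this abstract, the sketch below fits a treatment-stratified Cox model with the lifelines package and extracts stratum-specific median survival for a fixed covariate profile; it is not the authors' conditional-interval construction, the column names and data are simulated, and an interval for the difference of medians could be attached, for example, by bootstrapping this quantity.

```python
# Sketch (not the authors' method): treatment-specific median survival for a
# fixed covariate profile from a Cox model stratified by treatment.
# All column names and data are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "age": rng.normal(50, 10, n),
    "treatment": rng.integers(0, 2, n),
})
df["time"] = rng.exponential(10 / (1 + 0.5 * df["treatment"]), n)
df["event"] = (rng.uniform(size=n) < 0.8).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event", strata=["treatment"])

# Median survival in each treatment stratum for a patient aged 55;
# the difference (or ratio) of these two numbers is the quantity of interest.
profile = pd.DataFrame({"age": [55.0, 55.0], "treatment": [0, 1]})
print(cph.predict_median(profile))
```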

2.
I create a general model to perform score tests on interval censored data. Special cases of this model are the score tests of Finkelstein, Sun and Fay. Although Sun's was derived as a test for discrete data and Finkelstein's and Fay's tests were derived under a grouped continuous model, by writing all tests under one general model we see that as long as the regularity conditions hold, any of these three classes of tests may be applied to either grouped continuous or discrete data. I show the equivalence between the weighted logrank form of the general test and the form with a term for each individual, the form often used with permutation tests. From the weighted logrank form of the tests, we see that Sun's and Finkelstein's tests are similar, giving constant (or approximately constant) weights to differences in survival distributions over time. In contrast, the proportional odds model (Fay's model with logistic error) gives more weight to early differences.

3.
Objective: To explore the feasibility and advantages of ICEstimator for calculating the median lethal concentration (LC50) and its 95% confidence interval (95% CI). Methods: ICEstimator was used to calculate the LC50 and 95% CI of trichlorfon and of cyfluthrin against Aedes albopictus and Culex pipiens quinquefasciatus, and the results were compared with those obtained by the currently common calculation tools (SPSS, SAS, DPS). Results: The LC50 (mg/ml) values and 95% CIs of cyfluthrin against Ae. albopictus and Cx. pipiens quinquefasciatus obtained with the different tools were 5.03 (4.68–5.38), 5.20 (4.90–5.60), 5.14 (4.83–5.46), 5.14 (4.83–5.46) and 5.31 (4.58–6.03), 5.40 (4.75–6.15), 5.28 (2.37–7.11), 5.28 (2.37–7.11), respectively; those of trichlorfon were 92.92 (83.27–102.58), 100.60 (90.60–110.70), 96.00 (87.88–105.33), 96.00 (87.88–105.33) and 1123.02 (998.89–1247.14), 1123.70 (800.60–1652.40), 1111.91 (725.47–1745.88), 1111.90 (725.46–1745.87). A nonparametric Kruskal–Wallis test showed no statistically significant difference among the estimates and their 95% CIs (χ² = 5.595, P = 0.113). Conclusion: ICEstimator can quickly produce the LC50 and 95% CI of an insecticide against a given mosquito species; compared with SPSS and SAS, the calculation is simpler and faster and makes more effective use of the raw data.
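As a rough point of comparison, the sketch below shows a standard probit-on-log-dose calculation of an LC50 with a delta-method 95% CI in Python; it is not the ICEstimator algorithm, and the dose-response counts are invented for illustration.

```python
# Illustrative sketch (not ICEstimator): probit regression on log10(concentration)
# to estimate an LC50 and a delta-method 95% CI. Data are hypothetical.
import numpy as np
import statsmodels.api as sm

conc = np.array([2.0, 4.0, 8.0, 16.0, 32.0])   # mg/L, hypothetical
n    = np.array([50, 50, 50, 50, 50])           # larvae exposed
dead = np.array([4, 13, 27, 41, 48])            # larvae dead

x = np.log10(conc)
X = sm.add_constant(x)
model = sm.GLM(np.column_stack([dead, n - dead]), X,
               family=sm.families.Binomial(link=sm.families.links.Probit()))
res = model.fit()
b0, b1 = res.params
cov = res.cov_params()

log_lc50 = -b0 / b1                       # probit(0.5) = 0 => b0 + b1*log10(LC50) = 0
grad = np.array([-1 / b1, b0 / b1**2])    # gradient for the delta method
se = np.sqrt(grad @ cov @ grad)
lo, hi = log_lc50 - 1.96 * se, log_lc50 + 1.96 * se
print(f"LC50 = {10**log_lc50:.2f} (95% CI {10**lo:.2f}-{10**hi:.2f}) mg/L")
```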

4.
5.
The confidence interval (CI) on the population average (PA) odds ratio (OR) is a useful measure of agreement among different diagnostic methods when no gold standard is available. It can be calculated by the repeated measures logistic regression procedure (GENMOD, SAS). We compare the width of CIs from paired and independent samples with an identical number of measurements and an identical probability of positive response among them. For two and three diagnostic methods with binomial endpoints, the best performing sampling strategy is analytically described. The asymptotic formulae of the ratio of the CI widths for paired and independent samples are provided. We numerically study the dependence of the width of the CIs on the number of positive concordant outcomes. The width of CIs from independent samples is an increasing function of the sample size with a saturation asymptote and rather weak dependence on the argument. The width of CIs from paired samples is a decreasing function of the sample size with a saturation asymptote and significant dependence on the argument when the sample size is small. If curves for paired and independent samples intersect, a critical sample size exists. At this point, a small change in the sample size can reverse the choice of the best performing sampling policy. We numerically validated the robustness of the critical point to variations of the conditional OR.

6.
A mechanistic model that explains how toxic effects depend on the duration of exposure has been developed. Derived from the dynamic energy budget toxicity (DEBtox) model, it expresses the hazard rate as a function of the toxic concentration in the organism. Using linear approximations in accordance with the general simplifications made in DEBtox, the concentration that induces x% of lethality (LCx), and in particular the lethal concentration 50% (LC50), are expressed explicitly as functions of time. Only three parameters are required: an asymptotic effect concentration, a time constant, and an effect velocity. More sophisticated (but still analytic) models are possible, describing more complex toxicity patterns such as an increase of sensitivity with time or, conversely, an adaptation. These models can be fitted to the common and widespread LC50 endpoints available from the literature for various aquatic species and chemicals. The interpretation of the values assigned to the parameters will help explain the toxicity processes and standardize toxicity values from different sources for comparisons.
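The sketch below is a minimal numeric illustration of a hazard-rate (stochastic-death) model of this general DEBtox/GUTS type, not the paper's exact parameterization: the internal concentration rises with a time constant, the hazard is proportional to the excess over a no-effect concentration, and LC50(t) is obtained by root-finding. All parameter names and values are assumptions for illustration.

```python
# Illustrative hazard-rate model in the DEBtox/GUTS family (not the paper's
# exact equations). Parameter values are made up.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

ke  = 0.5    # 1/day, elimination rate (inverse of the time constant)
nec = 1.0    # mg/L, no-effect (asymptotic effect) concentration
b   = 0.3    # L/(mg*day), killing rate ("effect velocity")

def survival(c, t):
    """Probability of surviving exposure to concentration c until time t."""
    hazard = lambda s: b * max(0.0, c * (1.0 - np.exp(-ke * s)) - nec)
    cum_hazard, _ = quad(hazard, 0.0, t)
    return np.exp(-cum_hazard)

def lc50(t):
    """Concentration giving 50% mortality at time t (decreases toward nec)."""
    return brentq(lambda c: survival(c, t) - 0.5, nec + 1e-9, 1e4)

for t in (1, 2, 4, 7, 14):
    print(f"day {t:2d}: LC50 ~ {lc50(t):.2f} mg/L")
```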

7.
In analyzing standardized mortality ratios (SMRs), it is of interest to calculate a confidence interval for the true SMR. The exact limits of a specific interval can be obtained by means of the Poisson distribution, either within an iterative procedure or from one of the available tables. The limits can be approximated using one of various shortcut methods. In this paper, a method is described for calculating the exact limits in a simple and easy way. The method is based on the link between the χ² distribution and the Poisson distribution. Only a table of the χ² distribution is necessary.
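For reference, the chi-square link referred to here is the standard exact relationship (stated in common notation, which may differ from the paper's): with d observed deaths, E expected deaths, and confidence level 1 − α, the exact limits for the Poisson mean, and hence for the SMR = d/E, are

```latex
\mu_L = \tfrac{1}{2}\,\chi^{2}_{2d,\ \alpha/2}, \qquad
\mu_U = \tfrac{1}{2}\,\chi^{2}_{2(d+1),\ 1-\alpha/2}, \qquad
\mathrm{SMR}_L = \mu_L / E, \quad \mathrm{SMR}_U = \mu_U / E,
```

where χ²_{ν, p} denotes the p-quantile of the chi-square distribution with ν degrees of freedom, and μ_L = 0 when d = 0. Only chi-square quantiles are needed, as the abstract notes.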

8.
9.
Stratification is commonly employed in clinical trials to reduce chance covariate imbalances and increase the precision of the treatment effect estimate. We propose a general framework for constructing the confidence interval (CI) for a difference or ratio effect parameter under stratified sampling by the method of variance estimates recovery (MOVER). We consider the additive variance and additive CI approaches for the difference, in which either the CI for the weighted difference, or the CI for the weighted effect in each group, or the variance for the weighted difference is calculated as the weighted sum of the corresponding stratum-specific statistics. The CI for the ratio is derived by the Fieller and log-ratio methods. The weights can be random quantities under the assumption of a constant effect across strata, but this assumption is not needed for fixed weights. These methods can be easily applied to different endpoints in that they require only the point estimate, CI, and variance estimate for the measure of interest in each group across strata. The methods are illustrated with two real examples. In one example, we derive the MOVER CIs for the risk difference and risk ratio for binary outcomes. In the other example, we compare the restricted mean survival time and milestone survival in stratified analysis of time-to-event outcomes. Simulations show that the proposed MOVER CIs generally outperform the standard large sample CIs, and that the additive CI approach performs better than the additive variance approach. Sample SAS code is provided in the Supplementary Material.
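The MOVER recovery step itself is simple; the sketch below applies it to a difference of two effect estimates given each group's own CI (for example, the stratum-weighted estimates described above). It is illustrative only and is not the authors' SAS code; the numbers are made up.

```python
# A minimal sketch of the MOVER confidence interval for a difference of two
# effect estimates, given each group's point estimate and CI.
import math

def mover_diff(est1, lo1, hi1, est2, lo2, hi2):
    """MOVER CI for est1 - est2, recovered from the two groups' own CIs."""
    diff = est1 - est2
    lower = diff - math.sqrt((est1 - lo1) ** 2 + (hi2 - est2) ** 2)
    upper = diff + math.sqrt((hi1 - est1) ** 2 + (est2 - lo2) ** 2)
    return diff, lower, upper

# Hypothetical example: restricted mean survival times (months) with 95% CIs.
print(mover_diff(18.2, 16.0, 20.4, 14.7, 12.9, 16.5))
```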

10.
The diagnostic abilities of two or more diagnostic tests are traditionally compared by their respective sensitivities and specificities, either separately or using a summary of them such as Youden's index. Several authors have argued that the likelihood ratios provide a more appropriate, if in practice a less intuitive, comparison. We present a simple graphic which incorporates all these measures and admits easily interpreted comparison of two or more diagnostic tests. We show, using likelihood ratios and this graphic, that a test can be superior to a competitor in terms of predictive values while having either sensitivity or specificity smaller. A decision theoretic basis for the interpretation of the graph is given by relating it to the tent graph of Hilden and Glasziou (Statistics in Medicine, 1996). Finally, a brief example comparing two serodiagnostic tests for Lyme disease is presented. Published in 2000 by John Wiley & Sons, Ltd.
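A small numeric illustration of that point, with hypothetical operating characteristics: test A has lower sensitivity than test B, yet both of its likelihood ratios, and hence both predictive values at any prevalence, are better.

```python
# Hypothetical numbers only: lower sensitivity can still mean better likelihood
# ratios and better predictive values.
def summarize(name, sens, spec, prev=0.10):
    lr_pos = sens / (1 - spec)
    lr_neg = (1 - sens) / spec
    ppv = prev * sens / (prev * sens + (1 - prev) * (1 - spec))
    npv = (1 - prev) * spec / ((1 - prev) * spec + prev * (1 - sens))
    print(f"{name}: sens={sens:.2f} spec={spec:.2f} "
          f"LR+={lr_pos:.1f} LR-={lr_neg:.2f} PPV={ppv:.3f} NPV={npv:.4f}")

summarize("Test A", sens=0.85, spec=0.99)   # lower sensitivity, better LRs
summarize("Test B", sens=0.90, spec=0.50)   # higher sensitivity, worse LRs
```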

11.
We propose a non-parametric method to calculate a confidence interval for the difference or ratio of two median failure times for paired observations with censoring. The new method is simple to calculate, does not involve non-parametric density estimates, and is valid asymptotically even when the two underlying distribution functions differ in shape. The method also allows missing observations. We report numerical studies to examine the performance of the new method for practical sample sizes.

12.
Exposure duration and intensity (concentration or dose) determine lethal effects of toxicants. However, environmental regulators have focused on exposure intensity and have considered duration only peripherally. Conventional toxicity testing tends to fix the exposure time and to use the median lethal concentration (LC50) at that time to quantify mortality. Fixing the exposure duration and selecting the 50% mortality level for reasons of statistical and logistical convenience result in the loss of ecologically relevant information generated at all other times and ignore latent mortality that manifests after the exposure ends. In the present study, we used survival analysis, which is widely employed in other fields, to include both time and concentration as covariates and to quantify latent mortality. This was done with two contrasting toxicants, copper sulfate (CuSO4) and sodium pentachlorophenol (NaPCP). Amphipods (Hyalella azteca) were exposed to different toxicant concentrations, and the percentage mortalities were noted both during and after the exposure ended. For CuSO4 at the conventional 48-h LC50 concentrations, the predicted proportions dead after including latent mortality were 65 to 85%, not 50%. In contrast, only 5% or fewer additional animals died if the latent mortality was included for NaPCP. The data (including exposure time, concentration, and proportion dead at each time) for each toxicant were then successfully fit with survival models. The proportion of organisms dying at any combination of exposure concentration and time can be predicted from such models. Survival models including latent mortality produced predictions of lethal effects that were more meaningful in an ecological or field context than those from conventional LC50 methods.
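The sketch below illustrates the general idea, not the authors' fitted model, with a parametric (Weibull) survival regression in lifelines using exposure concentration as a covariate, so that the proportion dead can be predicted for any concentration-time combination, including after the exposure window. The data and column names are simulated assumptions.

```python
# Sketch: parametric survival regression with concentration as a covariate,
# then predicted proportion dead at chosen times. Data are simulated.
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter

rng = np.random.default_rng(7)
conc = rng.choice([0.0, 0.1, 0.2, 0.4, 0.8], size=300)   # mg/L, hypothetical
scale = 120 * np.exp(-2.5 * conc)                         # higher conc -> earlier death
time = rng.weibull(1.5, 300) * scale
observed = time < 96                                      # follow-up past the 48-h exposure
df = pd.DataFrame({"time": np.minimum(time, 96),
                   "dead": observed.astype(int),
                   "conc": conc})

aft = WeibullAFTFitter()
aft.fit(df, duration_col="time", event_col="dead")

# Predicted proportion dead by 48 h and by 96 h (i.e. including latent mortality)
grid = pd.DataFrame({"conc": [0.2, 0.4, 0.8]})
surv = aft.predict_survival_function(grid, times=[48, 96])
print(1 - surv)   # rows: times, columns: the three concentrations
```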

13.
14.
The purpose of the present study was to develop sensitive, rapid, and easily quantified avoidance tests for small fish (Danio rerio) in order to provide important ecological information during toxicity assessments. Fish were exposed in three replicate linear flow-through chambers consisting of five compartments. The test system was found to provide a linear contamination gradient, with mean dilutions in each compartment of 90, 70, 50, 30, and 10%. Also, in the absence of a toxic gradient, the fish were uniformly distributed along the five-compartment chambers. Then the apparatus was evaluated by exposing fish to a concentration gradient of copper and a dilution gradient of a field sample contaminated with acid mine drainage (AMD). Avoidance was monitored at 24-h intervals up to 96 h of exposure. The avoidance of copper and AMD by D. rerio was confirmed. The apparatus enabled quantification of median avoidance effect concentrations or dilutions (EC50 or EDil50) and also lowest-observed-effect gradients, which express the minimum toxicant gradient eliciting avoidance, a parameter increasing the ecological relevance of the laboratory avoidance responses. For quantifying avoidance, a 24-h exposure was sufficient, as the 24- to 96-h EC50 and EDil50 values were similar. The avoidance response was easy and rapid to quantify, making this test well suited to routine use in environmental risk assessment.

15.
More than one odds ratio estimate will often arise from a single epidemiologic study. Examples of designs where this may occur include those where there is more than one case or control group, and investigations of several risk factors as part of the same study. Various methods for presenting multiple interval estimates are discussed, including: the naive method, the Bonferroni method, the Dunn method, the Scheffé method, and the Dunnett method. For rectangular regions the Dunnett method gives a region with the most appropriate confidence level, but this region contains a different set of odds ratio estimates than are implied by the usual significance tests. A confidence ellipse circumscribed by the Scheffé limits gives the best agreement with the significance tests. Each of these methods is illustrated with a numerical example.
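As a concrete illustration of how the interval widths differ, the sketch below computes naive and Bonferroni-adjusted Wald intervals for two odds ratios from the same (hypothetical) study; the Dunn, Scheffé, and Dunnett methods adjust the critical value differently.

```python
# Naive vs. Bonferroni simultaneous Wald CIs for two odds ratios (hypothetical 2x2 counts).
import numpy as np
from scipy.stats import norm

def or_ci(a, b, c, d, z):
    """Wald CI for the odds ratio of a 2x2 table [[a, b], [c, d]]."""
    log_or = np.log(a * d / (b * c))
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return np.exp(log_or), np.exp(log_or - z * se), np.exp(log_or + z * se)

tables = {"factor 1": (30, 70, 15, 85), "factor 2": (42, 58, 25, 75)}
k = len(tables)
z_naive = norm.ppf(1 - 0.05 / 2)            # per-interval 95%
z_bonf  = norm.ppf(1 - 0.05 / (2 * k))      # simultaneous 95% over k intervals

for name, t in tables.items():
    print(name, "naive:", or_ci(*t, z_naive), "Bonferroni:", or_ci(*t, z_bonf))
```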

16.
Objective: To construct a table of 95% confidence intervals for the Poisson distribution and to use it for radiation biological dose estimation.
Methods: Based on the cumulative probabilities of the Poisson distribution, a procedure for calculating the confidence interval of a count X was built with Excel functions and iteration; a macro looped over values of X to generate the 95% confidence interval table. An Excel application for radiation biological dose estimation was then built on this table, and the accuracy of the table and of the estimated doses was verified.
Results: A 95% confidence interval table for X = 0 to 500 was built in Excel, and its values agreed with the confidence intervals given in authoritative textbooks. The normal approximation and the Poisson table method differed markedly in the estimated 95% confidence interval of the radiation biological dose when the aberration count was small, and only slightly when the count was large; the 95% confidence intervals from the Poisson table method agreed with the results of CABAS 2.0, the estimation software recommended by the International Atomic Energy Agency (IAEA).
Conclusion: The program calculates Poisson confidence intervals accurately, and using this confidence interval table to estimate the 95% confidence interval of the radiation biological dose is more reasonable. The program is broadly applicable and easy to use, and can meet the need for biological dose estimation of large numbers of exposed individuals in radiation emergencies.
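The same table can also be generated outside Excel; the sketch below reproduces a 95% confidence interval table for X = 0 to 500 from the exact chi-square relationship, rather than the authors' macro iteration over cumulative probabilities.

```python
# Exact Poisson 95% confidence intervals for counts 0..500 via chi-square quantiles.
from scipy.stats import chi2

def poisson_exact_ci(x, conf=0.95):
    alpha = 1 - conf
    lower = 0.0 if x == 0 else chi2.ppf(alpha / 2, 2 * x) / 2
    upper = chi2.ppf(1 - alpha / 2, 2 * (x + 1)) / 2
    return lower, upper

table = {x: poisson_exact_ci(x) for x in range(501)}
print(table[0])    # roughly (0.0, 3.69)
print(table[10])   # roughly (4.80, 18.39)
```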

17.
From 1981 through 1985, the authors studied the changes in monthly nosocomial infection rates at the University of Virginia Hospital in Charlottesville, Virginia, using the 95% confidence interval for infection rates as a marker of the efficacy of infection control activities. For a 99-month baseline period, monthly infection rates were calculated and the 95% confidence interval was established. In the 60 study months, each monthly rate was compared with the 95% confidence interval for that particular month. At the end of each study year, the monthly infection rates were incorporated into the existing confidence interval. Of 60 monthly rates during the study period, 30 were below the confidence interval (p < 0.00001), two were above the confidence interval (p = 0.23), and 28 were within the confidence interval. Since there was no reduction in surveillance activity, patient case-mix index, or laboratory sensitivity for organism recovery, these results suggest that monthly nosocomial infection rates at this hospital decreased when compared with the baseline period. The use of the 95% confidence interval may provide a measure of the efficacy of infection control activities, suggest temporal intervals requiring more intensive infection surveillance, and provide a method for examining the variability in monthly infection rates.

18.
Objective: To investigate the appropriate amounts of 18F radioactivity and activity concentration used during performance testing of positron emission tomography/computed tomography (PET/CT) scanners. Methods: Following the NEMA NU2-2001 standard and using its corresponding phantoms, the 18F radioactivity or activity concentration used in the performance test items of Philips Gemini, GE Discovery, and Siemens Biograph PET/CT systems was analyzed. Results: (1) Spatial resolution: Philips 1.48–2.22 GBq/ml, GE > 185 MBq/ml, Siemens 1.11 GBq/ml; (2) scatter fraction, count losses and randoms measurement, and accuracy (corrections for count losses and randoms): Philips 481–555 MBq, GE 900 MBq, Siemens 1.07 GBq; (3) sensitivity: Philips 7.4 MBq, GE 10 MBq, Siemens 4.6 MBq. Conclusion: The radioactivity and activity concentration values provided by each manufacturer can be used for testing to the NEMA standard.

19.
Objective: To retrospectively analyze the epidemiological characteristics of SARS-CoV-2 Delta variant infections and re-positive cases in Fujian Province from March 2021 to February 2022. Methods: Information on SARS-CoV-2 infections in Fujian Province from October 1, 2020 to September 30, 2022 was collected from infectious disease report cards in the China Information System for Disease Control and Prevention, and case-to-case transmission relationships in local outbreaks were obtained from epidemiological investigation reports in the Public Health Emergency Management Information System. Descriptive epidemiology was used to analyze the distribution of Delta variant infections by time, place, and person and their incubation period; the R packages "R0" and "EpiEstim" were used to fit the serial interval (SI) of cases in the outbreaks and to estimate the time-varying reproduction number (Rt); the incidence of re-positivity and the corresponding nucleic acid test results were analyzed, and logistic regression was used to identify factors influencing re-positivity. Results: The Delta variant circulated in Fujian Province roughly from March 2021 to February 2022, with continuous sporadic importations and local outbreaks. Imported cases were mainly seafarers and long-distance drivers (28.99%), students (15.94%), and commercial service workers (14.49%); the largest number of imported cases entered through Xiamen, mostly from Southeast Asia, Japan, the United States, and the United Kingdom. Local outbreaks occurred mainly in schools and factories in Putian and Xiamen and spread rapidly, with a maximum Rt of 7.35; the SI followed a gamma distribution with mean 2.28 and standard deviation 2.06, and the median incubation period was 6.50 days. The re-positivity rate was 37.98%; local cases had 2.68 times the risk of re-positivity of imported cases (95% CI: 1.45–4.95), and cases with a disease course of 15–30 days had 4.12 times the risk of cases with a course of more than 45 days (95% CI: 1.18–14.34). Conclusion: The Delta variant circulated in Fujian Province roughly from March 2021 to February 2022 with continuous sporadic importations and local outbreaks; the pressure from imported cases was considerable, but the outbreaks were quickly and effectively controlled. The re-positivity rate among Delta variant infections was 37.98%, and disease course may be a factor influencing re-positivity.
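As an illustration of the final analysis step only, the sketch below runs a logistic regression for re-positivity with case source and disease course as covariates and reports odds ratios with 95% CIs; the data frame is simulated and is not the Fujian data or the authors' exact model.

```python
# Sketch of a logistic regression for re-positivity risk factors (simulated data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "local_case": rng.integers(0, 2, n),       # 1 = local case, 0 = imported
    "course_15_30d": rng.integers(0, 2, n),    # 1 = disease course 15-30 days
})
logit_p = -1.0 + 0.9 * df["local_case"] + 1.2 * df["course_15_30d"]
df["repositive"] = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit_p))).astype(int)

X = sm.add_constant(df[["local_case", "course_15_30d"]])
res = sm.Logit(df["repositive"], X).fit(disp=0)
odds_ratios = np.exp(res.params)
ci = np.exp(res.conf_int())
print(pd.concat([odds_ratios.rename("OR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```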

20.