Similar Articles
 (20 results found)
1.
In low-level radioactivity measurements, it is often important to decide whether a measurement differs from background. A traditional formula for the decision level (DL) is given in numerous sources, including the recent ANSI/HPS N13.30-1996, Performance Criteria for Radiobioassay, and the Multi-Agency Radiation Survey and Site Investigation Manual (MARSSIM). This formula, which we dub the N13.30 rule, does not adequately account for the discrete nature of the Poisson distribution for paired-blank measurements (equal count times for background and sample), especially at low numbers of counts. We calculate the actual false positive rates that occur using the N13.30 DL formula as a function of the a priori false positive rate α and the background Poisson mean μ = ρt, where ρ is the underlying Poisson rate and t is the counting time. False positive rates exceed α by significant amounts for α ≤ 0.2 and μ < 100 counts, peaking at 25% at μ ≈ 0.71, nearly independent of α. Monte Carlo simulations verified these calculations. Currie's derivation of the N13.30 DL assumed a good estimate of the mean and standard deviation of background, an assumption that does not hold for paired blanks and low background rates. We propose one new decision rule (simply add 1 to the number of background counts), and we present six additional decision rules from various sources. We evaluate the actual false positive rate for all eight decision rules as a function of the a priori false positive rate and background mean. All seven alternative rules perform better than the N13.30 rule; each has advantages and drawbacks. Given these results, we believe that many regulations, national standards, guidance documents, and texts should be corrected or modified to use a better decision rule.
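A minimal sketch of this kind of calculation, assuming the common paired-blank form of the N13.30 decision level, D_C = z(1−α)·√(2B), applied to the net count S − B with sample count S and blank count B both Poisson(μ) under the null; the DL form and grid limits are assumptions, not taken from the paper.

```python
# Sketch: actual false-positive rate of the paired-blank N13.30-style rule,
# assuming D_C = z_{1-alpha} * sqrt(2B) and a decision "differs from
# background" when the net count S - B exceeds D_C.
import numpy as np
from scipy import stats

def actual_fpr(mu, alpha=0.05, nmax=400):
    z = stats.norm.ppf(1 - alpha)
    k = np.arange(nmax)                      # possible blank counts B = k
    pB = stats.poisson.pmf(k, mu)            # P(B = k)
    dc = z * np.sqrt(2 * k)                  # decision level given B = k
    tail = stats.poisson.sf(k + dc, mu)      # P(S > k + dc), S ~ Poisson(mu)
    return float(np.sum(pB * tail))

for mu in (0.71, 3, 10, 100):
    print(f"mu = {mu:6.2f}: actual FPR = {actual_fpr(mu):.3f} (nominal 0.05)")
```

At μ ≈ 0.71 and nominal α = 0.05, this evaluation should land near the 25% peak noted above.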

2.
In structuring decision models of medical interventions, it is commonly recommended that only 2 branches be used for each chance node, to avoid logical inconsistencies that can arise during sensitivity analyses if the branching probabilities do not sum to 1. However, information may be naturally available in an unconditional form, and structuring a tree in conditional form may complicate rather than simplify the sensitivity analysis of the unconditional probabilities. Current guidance emphasizes probabilistic sensitivity analysis, so a method is required to place probability distributions over multiple branch probabilities that appropriately represents uncertainty while satisfying the requirement that mutually exclusive event probabilities sum to 1. The authors argue that the Dirichlet distribution, the multivariate equivalent of the beta distribution, is appropriate for this purpose and illustrate its use for generating a fully probabilistic transition matrix for a Markov model. Furthermore, they demonstrate that adopting a Bayesian approach overcomes the problem of observing zero counts for transitions of interest.
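A minimal sketch of that idea, assuming made-up transition counts and a uniform Dirichlet(1, …, 1) prior (one convenient Bayesian choice; the paper's exact prior is not reproduced here): each posterior draw is a complete probability row that sums to 1, and the zero-count transition still receives non-zero probability.

```python
# Sketch: probabilistic transition probabilities for one Markov-model row via
# the Dirichlet distribution; counts and prior are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
counts = np.array([250, 40, 0, 10])          # observed transitions; one cell is zero
prior = np.ones_like(counts)                 # Dirichlet(1,1,1,1) prior
draws = rng.dirichlet(counts + prior, size=10_000)

print(draws.mean(axis=0))                    # posterior mean of each branch
print(draws.sum(axis=1)[:3])                 # each draw sums to exactly 1
```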

3.
OBJECTIVES: To evaluate the reproducibility and workability of the in vivo test model of the European test standard EN 12791 regarding the effectiveness of surgical hand antiseptics and, as a secondary objective, to evaluate the power of the model to discriminate between the effectiveness of various formulations of surgical hand antiseptics. DESIGN: Prospective, randomized, multicenter study with a Latin square design. SETTING: Five laboratories at 2 universities, 2 disinfectant manufacturers, and 1 private testing institution. PARTICIPANTS: Twenty healthy adults in each laboratory. INTERVENTION: Surgical hand antisepsis was performed by scrubbing with chlorhexidine gluconate 4% detergent (CHG) or by rubbing the hands with propan-2-ol (70% by volume; Iso 70) or ethanol 85% (E 85); rubbing the hands and forearms for 3 minutes with propan-1-ol (N 60) was used as the reference disinfection procedure. We deliberately chose to use these antiseptics at the given concentrations because they were intended to cover the range of typical antiseptics submitted for approval according to EN 12791. METHODS: In once-weekly tests, the immediate effects of the 4 antiseptics were established according to the method laid down in EN 12791 by assessing the release of skin flora from the fingertips as viable bacteria counts per milliliter of sampling fluid before treatment and immediately after treatment, separately for both hands, such that after 4 weeks each volunteer had used every formulation once. RESULTS: The mean log reduction factor (RF; calculated as the log count before treatment minus the log count after treatment) and corresponding standard deviations for the 4 hand antisepsis formulations were as follows: for CHG, 1.1 ± 0.3 colony-forming units (cfu) per milliliter of sampled fluid; for Iso 70, 1.7 ± 0.3 cfu/mL; for E 85, 2.1 ± 0.3 cfu/mL; and for N 60, 2.4 ± 0.4 cfu/mL. The differences between these values proved significant (P < .001) by analysis of variance and by Tukey's honestly significant difference (HSD) post hoc test. Although the same ranking of the antiseptics' immediate antibacterial activity was found at all laboratories, the levels of efficacy differed significantly across laboratories (P < .001); no statistical difference was found between left and right hands (P > .01). Relating the log RF values of the other 3 formulations to those of the reference formulation (N 60) abolished the differences between laboratories (P = .16); in addition, the intraclass correlation coefficient decreased from 9.1% to 4.5%. With 20 volunteers, a minimum difference of 0.47 log between the mean log RFs of the reference formulation and an inferior test formulation will be detected as significant at an α of .05 (1-sided) and a power (1 − β) of .8. CONCLUSION: The test method described in EN 12791 yielded the same conclusion on the effectiveness of the tested formulations in every laboratory and therefore proved reproducible and workable.

4.
J M Kemp, M Kajihara, S Nagahara, A Sano, M Brandon, S Lofthouse. Vaccine, 2002, 20(7-8): 1089-1098
Two continuous-delivery injectable silicone implants were tested to determine whether they were capable of delivering vaccines in a single shot. The Type A implant delivers antigen in vitro over a 1-month period and the Type B over several months. Vaccination studies in sheep were designed to compare the responses induced by the Type A and B implants, Alzet mini-osmotic pumps, and conventional antigen delivery. A model antigen, avidin, was used along with IL-1β or alum as adjuvants. Sheep were immunised with various formulations, and the titre and isotype of the antigen-specific antibodies were monitored. The Type B implant induced antibody (Ab) titres of greater magnitude and duration than soluble vaccines or the Type A implant with adjuvant, but only if IL-1β was included in the formulation. Both implants induced antibodies of IgG1 and IgG2 isotype. A memory response to soluble antigen challenge was induced by the Type B + IL-1β implant, and it was predominantly of an IgG1 isotype.

5.
The purpose of this study was to compare the levels of interleukin-6 (IL-6) and malondialdehyde (MDA) in healthy controls and in diabetic patients without coronary heart disease (CHD). Fasting serum IL-6 and MDA were determined in 30 healthy controls and 52 diabetic patients (20 Type 1 and 32 Type 2) without clinical evidence of CHD. MDA served as the marker of oxidative stress, and the IL-6 concentration was used to evaluate cytokine function. The results showed that serum IL-6 and MDA concentrations were significantly higher in Type 2 diabetic patients (p < 0.05). In conclusion, we demonstrated that Type 2 diabetic patients are more affected by oxidative stress than Type 1 diabetic patients and healthy controls, owing to the action of cytokines.

6.
Dipper samples of Anopheles quadrimaculatus immatures from stocked enclosures in Arkansas rice fields were used to develop regression equations relating dipper sample counts to absolute density. Confidence limits were developed for the mean number of immatures collected at each density and stadium, including combined stadia. These data can be used to estimate absolute density from the mean dipper count. The distribution of rice field immatures approximated but did not fit the Poisson distribution. Sample size was calculated for precision levels of 10, 25, and 50% of the true mean, at various levels of Type I and Type II error. A sample size of N = 6,424 was necessary to detect differences within 10% of the true mean, with 5% and 10% probabilities of Type I and Type II error, respectively.
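A minimal sketch of this style of sample size calculation, assuming the normal-approximation formula n = ((z(1−α) + z(1−β)) · CV / d)², where d is the target precision as a fraction of the true mean and CV is the coefficient of variation; the formula and the CV value are assumptions for illustration, not the paper's equations.

```python
# Sketch: sample size to detect differences within a fraction d of the true
# mean, for given one-sided Type I and Type II error rates; the CV below is
# an invented value for a clumped count distribution (variance >> mean).
from math import ceil
from scipy.stats import norm

def sample_size(cv, d, alpha, beta):
    z = norm.ppf(1 - alpha) + norm.ppf(1 - beta)
    return ceil((z * cv / d) ** 2)

for d in (0.10, 0.25, 0.50):
    print(d, sample_size(cv=2.7, d=d, alpha=0.05, beta=0.10))
```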

7.
Stone (Statistics in Medicine, 7, 649-660 (1988)) proposed a method of testing for elevation of disease risk around a point source. Stone's test is appropriate to data consisting of counts of the numbers of cases, Y_i say, in each of n regions that can be ordered in increasing distance from a point source. The test assumes that the Y_i are mutually independent Poisson variates with means μ_i = E_i λ_i, where the E_i are the expected numbers of cases, for example based on appropriately standardized national incidence rates, and the λ_i are relative risks. The null hypothesis that the λ_i are constant is then tested against the alternative that they are monotone non-increasing with distance from the source. We propose an extension to Stone's test which allows for covariate adjustment via a log-linear model, μ_i = E_i λ_i exp(Σ_{j=1}^{p} x_{ij} β_j), where the x_{ij} are the values of each of p explanatory variables in each of the n regions, and the β_j are unknown regression parameters. Our methods are illustrated using data on the incidence of stomach cancer near two municipal incinerators.
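One way to fit the covariate-adjustment part of this model is as a Poisson GLM with log(E_i) as an offset; the sketch below uses simulated placeholder data and statsmodels, and it implements only the log-linear adjustment, not Stone's monotone-trend test itself.

```python
# Sketch: the covariate-adjusted log-linear model
# mu_i = E_i * lambda_i * exp(sum_j x_ij * beta_j),
# fit as a Poisson GLM with log(E_i) as an offset on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 30
E = rng.uniform(5, 50, n)                   # expected cases per region
x = rng.normal(size=(n, 2))                 # two regional covariates
beta = np.array([0.3, -0.2])
lam = 1.5                                   # common relative risk (no trend)
Y = rng.poisson(E * lam * np.exp(x @ beta))

X = sm.add_constant(x)
fit = sm.GLM(Y, X, family=sm.families.Poisson(), offset=np.log(E)).fit()
print(fit.params)                           # const ~ log(lam), then the betas
```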

8.
OBJECTIVE: To demonstrate how the beta distribution may be used to find confidence limits on a standardised mortality ratio (SMR) when the expected number of events is subject to random variation, and to compare these limits with those obtained with the standard exact approach used for SMRs and with a Fieller-based confidence interval. DESIGN: The relationship of the binomial and beta distributions is explained. For cohort studies in which deaths are counted in exposed and unexposed groups, exact confidence limits on the relative risk are found conditional on the total number of observed deaths. A similar method for the SMR is justified by the analogy between the SMR and the relative risk found from such cohort studies, and by the fact that the relevant (beta) distribution does not require integer parameters. SOURCE OF DATA: Illustrative examples of hypothetical data were used, together with a MINITAB macro (see appendix) to perform the calculations. MAIN RESULTS: Exact confidence intervals that include error in the expected number are much wider than those found with the standard exact method. Fieller intervals are comparable with the new exact method provided the observed and expected numbers (taken to be means of Poisson variates) are large enough to approximate normality. As the expected number increases, the standard method gives results closer to the new method, but may still lead to different conclusions even with as many as 100 expected events. CONCLUSIONS: If there is reason to suppose that the expected number of deaths in an SMR is subject to sampling error (because of imprecisely estimated rates in the standard population), then exact confidence limits should be found by the methods described here, or by approximate Fieller-based limits provided enough events are observed and expected to approximate normality.
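A hedged sketch of the conditional-beta idea: if the observed count D and the "expected" count E are both treated as Poisson, then conditional on the total D + E the observed count is binomial, and exact Clopper-Pearson-style beta limits on that proportion convert to limits on the ratio of the two means. The counts below are invented, and the beta parameters need not be integers.

```python
# Sketch: exact confidence limits on the ratio of two Poisson means (an
# SMR-like quantity) allowing for error in the expected count, via beta
# quantiles for the conditional binomial proportion p = mu1 / (mu1 + mu2).
from scipy.stats import beta

def ratio_ci(d, e, alpha=0.05):
    n = d + e
    lo = beta.ppf(alpha / 2, d, n - d + 1) if d > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, d + 1, n - d) if d < n else 1.0
    # convert limits on p to limits on the ratio mu1 / mu2 = p / (1 - p)
    return lo / (1 - lo), hi / (1 - hi)

print(ratio_ci(d=15, e=10.0))   # 15 observed deaths vs an "expected" count of 10
```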

9.
Past computer solutions for confidence intervals in paired counting are extended to the case where the ratio of the sample count time to the blank count time is taken to be an integer, IRR. Previously, these confidence intervals have been called Neyman-Pearson confidence intervals; more correctly, they should have been called Neyman confidence intervals, or simply confidence intervals. The technique mimics one used by Pearson and Hartley to tabulate confidence intervals for the expected value of the discrete Poisson and binomial distributions. The blank count and the contribution of the sample to the gross count are assumed to be Poisson distributed, and the expected value of the blank count in the sample count time is assumed known. The net count, OC, is taken to be the gross count minus the product of IRR and the blank count. The probability density function (PDF) for the net count can then be determined in a straightforward manner.
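A minimal sketch of that last step, assuming independent Poisson counts: the mass function of the net count OC = G − IRR·B is obtained by direct convolution over the two Poisson distributions. The rates and the IRR value below are illustrative.

```python
# Sketch: probability mass function of the net count OC = G - IRR * B, where
# the gross count G ~ Poisson(IRR * mu_b + mu_s) and the blank count
# B ~ Poisson(mu_b) are independent; parameter values are illustrative.
import numpy as np
from scipy.stats import poisson

def net_count_pmf(mu_b, mu_s, irr, nmax=200):
    g = np.arange(nmax)
    b = np.arange(nmax)
    pg = poisson.pmf(g, irr * mu_b + mu_s)     # gross count in sample time
    pb = poisson.pmf(b, mu_b)                  # blank count in blank time
    pmf = {}
    for gi, pgi in zip(g, pg):
        for bi, pbi in zip(b, pb):
            oc = gi - irr * bi
            pmf[oc] = pmf.get(oc, 0.0) + pgi * pbi
    return pmf

pmf = net_count_pmf(mu_b=3.0, mu_s=5.0, irr=2)
print(sum(p for oc, p in pmf.items() if oc <= 0))   # P(net count <= 0)
```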

10.
We use a hierarchical model for a meta-analysis that combines information from autopsy studies of adenoma prevalence and counts. The studies we included reported findings using a variety of adenoma prevalence groupings and age categories. We use a non-homogeneous Poisson model for multinomial bin probabilities. The Poisson model allows risk to depend on age and sex, and incorporates extra-Poisson variability. We evaluate model fit using the posterior predicted distribution of adenoma prevalence reported by the studies included in our analyses and validate our model using adenoma prevalence reported by more recent colonoscopy studies. For 1990, the estimated adenoma prevalence among Americans at age 60 is 40.3 per cent for men compared to 29.2 per cent for women.

11.
OBJECTIVE. Health services researchers often need to compute the probability of observing a certain number of events when only a few such events are expected. Our objective is to show that the standard approaches (Poisson, binomial, and normal approximations) are inappropriate in such instances, and to suggest an alternative. DATA SOURCES. Patients undergoing cholecystectomy (34,234) in 465 California hospitals in 1983 are used to demonstrate the biases arising from various methods of calculating the probability of observing a given number of deaths in each hospital. Similar data from other procedures and diagnoses with lower and higher mortality rates are also used for illustration. STUDY DESIGN. The computational methods to derive probabilities using the Poisson, normal, simulation, and exact approaches are discussed. Using a previously developed risk factor model, the probability of observing the actual number of deaths (or more) is calculated given the expectation of death for each patient in each hospital. Results for the four methods are compared, showing the types of random and systematic errors in the Poisson, normal, and simulation approaches. DATA COLLECTION. Routinely collected hospital discharge abstract data were provided by the California Office of Statewide Planning and Development. PRINCIPAL FINDINGS. The Poisson and normal approximations are often substantially biased in calculating upper-tail p-values, especially when the expected number of adverse outcomes is less than five. Simulations allow unbiased calculations, and the degree of random error can be made arbitrarily small given enough trials. Exact calculations using a simple recursive algorithm can be done very efficiently on either a mainframe or a personal computer; for example, the whole set of cholecystectomy patients can be assessed in less than 90 seconds on a Macintosh. CONCLUSIONS. Calculating the probability of observing a small number of events using standard approaches may result in substantial errors. The availability of a simple and inexpensive method of calculating these probabilities exactly can avoid these errors.
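A minimal sketch of the exact calculation, assuming the "simple recursive algorithm" is the standard Poisson-binomial recursion over patients (the paper's exact implementation is not reproduced here); the per-patient probabilities below are simulated placeholders.

```python
# Sketch: exact P(number of deaths >= d) given each patient's individual
# death probability, via the Poisson-binomial recursion over patients.
import numpy as np

def exact_tail(p, d):
    dist = np.array([1.0])                    # P(0 deaths) before any patient
    for pi in p:
        new = np.zeros(len(dist) + 1)
        new[:-1] += dist * (1 - pi)           # patient survives
        new[1:] += dist * pi                  # patient dies
        dist = new
    return dist[d:].sum()

rng = np.random.default_rng(2)
p = rng.uniform(0.001, 0.05, size=500)        # per-patient expected mortality
print(exact_tail(p, d=10), "vs Poisson mean", p.sum())
```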

12.
The impact of measurement error and temporal variability of risk factors on estimates of disease probabilities based on the logistic function is discussed. Monte Carlo results and empirical findings from the Multiple Risk Factor Intervention Trial indicate that the degree of attenuation of logistic parameter estimates is well approximated by the reliability coefficient when the errors are assumed to be normal random variates and event probabilities are small. In the design of intervention studies, measurement error and temporal variability of risk factors do not usually influence estimates of the probability of developing the disease in the control group, but they can result in mis-estimation of that probability in the experimental group, substantially reducing the statistical power of the clinical trial.
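A hedged simulation of the attenuation claim, assuming classical normal measurement error added to a normal risk factor and a rare outcome; all parameter values are invented. The fitted slope should land near R·β1, where R = var(X)/(var(X) + var(error)) is the reliability coefficient.

```python
# Sketch: with classical normal measurement error and rare events, the
# logistic slope attenuates by roughly the reliability coefficient R.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n, beta0, beta1, err_sd = 200_000, -5.0, 0.8, 0.7
x = rng.normal(size=n)                       # true risk factor, var = 1
y = rng.binomial(1, 1 / (1 + np.exp(-(beta0 + beta1 * x))))
w = x + rng.normal(scale=err_sd, size=n)     # observed, error-prone measurement

fit = sm.Logit(y, sm.add_constant(w)).fit(disp=0)
R = 1 / (1 + err_sd**2)                      # reliability coefficient
print(f"estimated slope {fit.params[1]:.3f} ~ R * beta1 = {R * beta1:.3f}")
```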

13.
Various methods have been described for re-estimating the final sample size in a clinical trial based on an interim assessment of the treatment effect. Many re-weight the observations after re-sizing so as to control the resulting inflation in the type I error probability α. Lan and Trost (Estimation of parameters and sample size re-estimation. Proceedings of the American Statistical Association Biopharmaceutical Section 1997; 48-51) proposed a simple procedure based on conditional power calculated under the current trend in the data (CPT). The study is terminated for futility if CPT ≤ C_L, continued unchanged if CPT ≥ C_U, or re-sized by a factor m to yield CPT = C_U if C_L < CPT < C_U, where C_L and C_U are pre-specified probability levels. The overall level α can be preserved because the reduction due to stopping for futility can balance the inflation due to sample size re-estimation, thus permitting any form of final analysis with no re-weighting. Herein the statistical properties of this approach are described, including an evaluation of the probabilities of stopping for futility or re-sizing, the distribution of the re-sizing factor m, and the unconditional type I and II error probabilities α and β. Since futility stopping cannot produce a type I error but does commit a type II error, as the probability of stopping for futility increases, α decreases and β increases. An iterative procedure is described for choosing the critical test value and the futility stopping boundary so as to ensure that the specified α and β are obtained. However, inflation in β is controlled by reducing the probability of futility stopping, which in turn dramatically increases the possible re-sizing factor m. The procedure is also generalized to limit the maximum sample size inflation factor, for example at m_max = 4; doing so, however, allows a non-trivial fraction of studies to be re-sized at this level while still having low conditional power. These properties also apply to other methods for sample size re-estimation with a provision for stopping for futility. Sample size re-estimation procedures should be used with caution, and the impact on the overall type II error probability should be assessed.
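A minimal sketch of the CPT decision rule, assuming the standard Brownian-motion conditional power formula under the current trend, CPT = Φ((z_t/√t − z(1−α))/√(1−t)), evaluated at information fraction t; the bounds C_L and C_U below are illustrative, not Lan and Trost's values.

```python
# Sketch: conditional power under the current trend from the interim
# z-statistic z_t at information fraction t, and the resulting CPT decision.
from scipy.stats import norm

def cpt(z_t, t, alpha=0.025):
    return norm.cdf((z_t / t**0.5 - norm.ppf(1 - alpha)) / (1 - t) ** 0.5)

CL, CU = 0.10, 0.80                           # illustrative futility/re-size bounds
for z in (0.3, 1.0, 1.8):
    p = cpt(z, t=0.5)
    action = ("stop for futility" if p <= CL
              else "continue" if p >= CU else "re-size")
    print(f"z = {z:.1f}: CPT = {p:.3f} -> {action}")
```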

14.
Currently, the analysis of clinical trials for treatment of paroxysmal atrial fibrillation (PAF) relies on the assumption that the events are distributed according to a Poisson process. We contend that the occurrence of PAF events is clearly not Poisson and that events tend to occur in clusters. A candidate parametric model of the inter-event interval, the Weibull distribution, is presented. When the events form a Poisson process, the time to the first event (TFE) has the same distribution as the inter-event intervals (IEI), owing to the 'memoryless' property of the exponential inter-event intervals; hence the TFE can be used instead of the IEI. When the events do not form a Poisson process, the TFE does not have the same distribution as the IEI. We show that for the Weibull distribution, when the TFE is used to model the IEI, both the mean and the survivor distribution are biased. The bias in the survivor function is a function both of time and of the parameters of the distribution. Therefore, when two groups have different distribution parameters (as in the case of different treatment effects), the discrepancy between the survivor distribution of the IEI and that of the TFE is affected differentially. We demonstrate the low coverage probabilities of the mean and the survivor function which result when the underlying distribution is Weibull with shape parameter κ < 1.0. It is likely that this problem will arise for other clustered event processes. This suggests that careful empirical investigation of the distribution of IEI for recurrent events is necessary before choosing to analyse the data using the TFE.
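A hedged simulation of the TFE-versus-IEI discrepancy, assuming a stationary Weibull renewal process with shape κ < 1 observed from a random time origin; all parameters are illustrative. The forward recurrence time (the TFE) comes out with a different mean and survivor function from the IEI, exactly the kind of bias described above.

```python
# Sketch: for a Weibull renewal process with shape < 1, the time from a
# random observation start to the first event (TFE) is not distributed like
# the inter-event intervals (IEI).
import numpy as np

rng = np.random.default_rng(4)
shape, scale, n = 0.6, 1.0, 200_000
iei = scale * rng.weibull(shape, size=n)      # inter-event intervals

times = np.cumsum(iei)                        # one long event sequence
origin = rng.uniform(0, times[-2], size=50_000)
idx = np.searchsorted(times, origin)
tfe = times[idx] - origin                     # forward recurrence times

print("mean IEI:", iei.mean(), " mean TFE:", tfe.mean())
print("P(IEI > 1):", (iei > 1).mean(), " P(TFE > 1):", (tfe > 1).mean())
```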

15.
Effects of cryopreservation and incubation on oxidative stress levels in human spermatozoa
Objective: To investigate the effects of cryopreservation and incubation on oxidative stress levels in human spermatozoa. Methods: Sixty semen samples were each divided into five aliquots assigned to five groups (pre-treatment control, cryopreservation blank control, cryopreservation sample, incubation blank control, and incubation sample). Advanced oxidation protein products (AOPP) served as the marker of protein oxidation and malondialdehyde (MDA) as the marker of lipid peroxidation; both were measured spectrophotometrically. Results: The MDA level in the cryopreservation sample group was significantly higher than in the pre-treatment control and cryopreservation blank control groups (P < 0.05), whereas AOPP levels did not differ significantly (P > 0.05). In the incubation sample group, both AOPP and MDA levels were higher than in the pre-treatment control, incubation blank control, and cryopreservation sample groups (P < 0.05). Conclusion: Both cryopreservation and incubation inflict oxidative stress damage on spermatozoa. Cryopreservation mainly causes lipid peroxidation damage, whereas incubation causes both lipid peroxidation and protein oxidation damage.

16.
This note discusses the use of blank or background counting data that are measured for times that differ from the times used for the sample counts. The correct formula for the minimum detectable activity under this condition is MDA = [3 + 3.29 √(R_b t_g (1 + t_g/t_b))] / (ε t_g), where R_b denotes the background count rate, t_b and t_g denote the background and gross count times, and ε denotes the counting efficiency. Counting backgrounds for a long time reduces decision levels, uncertainties, and minimum detectable activities. These benefits are fully available only when there is no source of variability other than random fluctuations in count rates.
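A direct evaluation of the note's formula; the count rate, times, and efficiency below are illustrative values, not taken from the note.

```python
# Sketch: minimum detectable activity from
# MDA = (3 + 3.29 * sqrt(Rb * tg * (1 + tg / tb))) / (eps * tg).
from math import sqrt

def mda(rb, tg, tb, eps):
    """rb: background rate (counts/s); tg, tb: gross/background count times (s);
    eps: counting efficiency (counts per decay)."""
    return (3 + 3.29 * sqrt(rb * tg * (1 + tg / tb))) / (eps * tg)

# counting the background 10x longer than the sample lowers the MDA
print(mda(rb=0.05, tg=600, tb=600, eps=0.3))
print(mda(rb=0.05, tg=600, tb=6000, eps=0.3))
```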

17.
This paper examines the effects of systematic and random errors in recall, and of selection bias, in case-control studies of mobile phone use and cancer. These sensitivity analyses are based on Monte Carlo computer simulations and were carried out within the INTERPHONE Study, an international collaborative case-control study in 13 countries. Recall error scenarios simulated plausible values of random and systematic, non-differential and differential recall errors in the amount of mobile phone use reported by study subjects; plausible values for the recall error were obtained from validation studies. Selection bias scenarios assumed varying selection probabilities for cases and controls, mobile phone users, and non-users; where possible, these selection probabilities were based on existing information from non-respondents in INTERPHONE. Simulations used exposure distributions based on existing INTERPHONE data and assumed varying levels of the true risk of brain cancer related to mobile phone use. Results suggest that random recall errors of plausible magnitude can lead to a large underestimation of the risk of brain cancer associated with mobile phone use. Random errors were found to have a larger impact than plausible systematic errors, and differential errors in recall had very little additional impact in the presence of large random errors. Selection bias resulting from underselection of unexposed controls led to J-shaped exposure-response patterns, with risk apparently decreasing at low to moderate exposure levels. The present results, in conjunction with those of the validation studies conducted within the INTERPHONE Study, will play an important role in the interpretation of existing and future case-control studies of mobile phone use and cancer risk, including the INTERPHONE Study itself.
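A hedged toy simulation of the random, non-differential recall error mechanism (a simplified cohort-style stand-in for the case-control design; the effect size, error magnitude, and sample size are all invented): multiplicative recall error pulls the fitted exposure slope toward the null.

```python
# Sketch: random non-differential recall error attenuating an exposure-
# response estimate toward the null; all parameters are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 100_000
true_use = rng.lognormal(mean=0.0, sigma=1.0, size=n)   # true hours of use
logit = -4.0 + 0.4 * np.log(true_use + 1)               # true exposure effect
case = rng.binomial(1, 1 / (1 + np.exp(-logit)))
recalled = true_use * rng.lognormal(0.0, 0.8, size=n)   # multiplicative recall error

for name, x in [("true", true_use), ("recalled", recalled)]:
    fit = sm.Logit(case, sm.add_constant(np.log(x + 1))).fit(disp=0)
    print(name, "slope:", round(fit.params[1], 3))
```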

18.
The frequency of chromosome aberrations per traversal of a nucleus by a charged particle at the low-dose limit increases proportionally to the square of the linear energy transfer (LET), peaks at about 100 keV/μm, and then decreases with further increase of LET. This has long been interpreted as an excessive deposition of energy over that required to produce a biologically effective event. Here, we present an alternative interpretation. A cell traversed by a charged particle has a certain probability of receiving lethal damage leading to direct death; such events may increase with an increase of LET and with the number of charged particles traversing the cell. Assuming that the lethal damage is distributed according to a Poisson distribution, the probability that a cell has no such damage is expressed by e^(−cLx), where c is a constant, L is LET, and x is the number of charged particles traversing the cell. From these assumptions, the frequency of chromosome aberrations in surviving cells can be described by Y = αSD + βS²D², consistent with the empirical relation Y = αD + βD² in the low-LET region, where S = e^(−cL), α is a value proportional to LET, β is a constant, and D is the absorbed dose. This model readily explains the empirically established relationship between LET and relative biological effectiveness (RBE). The model can also be applied to clonogenic survival: if cells survive with neither unstable chromosome aberrations nor other lethal damage, the LET-RBE relationship for clonogenic survival forms a humped curve. The relationship between LET and inactivation cross-section becomes proportional to the square of LET in the low-LET region when the frequency of directly lethal events is sufficiently smaller than unity, and the inactivation cross-section saturates to the cell nucleus cross-sectional area with increasing LET in the high-LET region.
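A minimal numerical sketch of the humped LET-response curve implied by Y = αSD + βS²D² with S = e^(−cL) and α proportional to LET; the constants a0, b, c and the dose are invented for illustration and are not fitted values from the paper.

```python
# Sketch: aberration frequency in surviving cells, Y = a*S*D + b*(S*D)^2 with
# S = exp(-c*L) and a = a0*L, evaluated over LET to show the humped curve.
import numpy as np

L = np.logspace(0, 3, 200)         # LET in keV/micron
a0, b, c, D = 0.002, 0.05, 0.01, 1.0
S = np.exp(-c * L)                 # probability of no directly lethal damage
Y = a0 * L * S * D + b * (S * D) ** 2

i = np.argmax(Y)
print(f"peak effectiveness near LET = {L[i]:.0f} keV/micron")
```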

19.
Simple exact analysis of the standardised mortality ratio.
The standardised mortality ratio (SMR) is the ratio of deaths observed, D, to those expected, E, on the basis of the mortality rates of some reference population. On the usual assumptions (that D was generated by a Poisson process and that E is based on such large numbers that it can be taken as without error), the long-established but apparently little-known link between the Poisson and χ² distributions provides both an exact test of significance and expressions for obtaining exact (1 − α) confidence limits on the SMR. When a table of the χ² distribution gives values for 1 − ½α and ½α with the required degrees of freedom, the procedures are not only precise but very simple. When the required values of χ² are not tabulated, only slightly less simple procedures are shown to be highly reliable for D > 5; they are more reliable for all D and α than even the best of three approximate methods. For small D, all approximations can be seriously unreliable. The exact procedures are therefore recommended for use wherever the basic assumptions (Poisson D and fixed E) apply.
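A minimal sketch of the Poisson-χ² link: the exact (1 − α) limits on the SMR are χ²(α/2; 2D)/(2E) and χ²(1 − α/2; 2D + 2)/(2E), and the exact one-sided p-value is the Poisson tail P(X ≥ D | mean E). The D and E values below are illustrative.

```python
# Sketch: exact Poisson test and confidence limits on the SMR via the
# chi-squared link, assuming Poisson D and fixed (error-free) E.
from scipy.stats import chi2, poisson

def smr_exact(D, E, alpha=0.05):
    lo = chi2.ppf(alpha / 2, 2 * D) / (2 * E) if D > 0 else 0.0
    hi = chi2.ppf(1 - alpha / 2, 2 * (D + 1)) / (2 * E)
    p_upper = poisson.sf(D - 1, E)            # P(X >= D | mean E), one-sided
    return D / E, (lo, hi), p_upper

smr, ci, p = smr_exact(D=12, E=6.5)
print(f"SMR = {smr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), p = {p:.4f}")
```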

20.
Shared savings arrangements are designed to financially reward provider groups that reduce healthcare spending through improved care coordination. A major concern with these arrangements is that annual changes in spending are subject to a variety of random factors that are unrelated to care coordination efforts. As a result, resources can be misallocated if providers who are unsuccessful at controlling spending are inappropriately rewarded and providers who are successful are inappropriately denied rewards. This paper provides a systematic analysis of the role of random variation using a general statistical model based on shared savings arrangements that are currently evolving in the public and private sectors. The model focuses specifically on the variance of the average savings rate (ASR), the quantity used to determine whether and by how much a provider group will be rewarded. Variance in the ASR is a major driver of the probabilities of Type I error (inappropriately rewarding providers) and Type II error (inappropriately failing to reward providers), which can lead to major resource misallocations. We find that the probabilities of Type I and Type II errors associated with common approaches to savings measurement can be quite high, often exceeding 10% and 25%, respectively. We also find that the likelihood of both types of errors can be substantially reduced through careful planning and design of savings measurement schemes before payers and providers enter into shared savings agreements.
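A hedged toy simulation of the Type I/Type II error mechanism described above; the baseline spending, patient-level variance, savings threshold, and true savings rate are all invented, and the measurement scheme is deliberately simplistic.

```python
# Sketch: random variation in the measured average savings rate (ASR) driving
# Type I errors (rewarding a group with no true savings) and Type II errors
# (denying a group with true savings); all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(6)
n_patients, sd, threshold, trials = 5_000, 9_000.0, 0.02, 20_000

def reward_prob(true_savings_rate, baseline_mean=10_000.0):
    target = baseline_mean * (1 - true_savings_rate)
    se = sd / n_patients ** 0.5               # SE of the group's mean spend
    spend = rng.normal(target, se, size=trials)
    asr = 1 - spend / baseline_mean           # measured average savings rate
    return float((asr >= threshold).mean())   # share of trials rewarded

print("Type I error (no true savings, rewarded): ", reward_prob(0.00))
print("Type II error (3% true savings, denied):", 1 - reward_prob(0.03))
```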
