Similar documents
20 similar documents found.
1.
This paper shows that extending the simple procedure of George and Elston for calculating confidence limits for the underlying prevalence rate to accommodate any finite number of cases in inverse sampling is straightforward. Because a confidence interval calculated on the basis of the first single case may be too wide for general use, I include a quantitative discussion of how increasing the number of cases requested in the sample affects the expected length of the confidence interval. To facilitate application of these results, I present a table summarizing, for a variety of situations, the minimum number of cases required for the ratio of the expected length of a confidence interval to the underlying prevalence rate to be less than or equal to a given value. I also discuss the relation between Cleman's confidence limits on the expected number of trials before failure of a given device and those presented here.
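The single-case version of the procedure can be illustrated with a short sketch. The limits below assume the standard exact tail-inversion for a geometric observation (the first case appears on trial n) and are illustrative, not taken from the paper itself:

```python
def geometric_prevalence_ci(n, alpha=0.05):
    """Exact tail-inversion confidence limits for a prevalence rate p when
    sampling continues until the first case, which is observed on trial n.
    Lower limit solves P(X <= n | p) = alpha/2, i.e. 1 - (1-p)^n = alpha/2.
    Upper limit solves P(X >= n | p) = alpha/2, i.e. (1-p)^(n-1) = alpha/2."""
    lower = 1.0 - (1.0 - alpha / 2.0) ** (1.0 / n)
    upper = 1.0 if n == 1 else 1.0 - (alpha / 2.0) ** (1.0 / (n - 1))
    return lower, upper

lo, hi = geometric_prevalence_ci(100)
print(f"first case on trial 100: 95% CI for p = ({lo:.5f}, {hi:.5f})")
```

Note how wide the interval is relative to its own lower limit, which is exactly the motivation for requesting more than one case before stopping.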

2.
The indirectly standardized rate is widely used, but no method of interval estimation for its population value has been reported. The authors discuss the standard error of the indirectly standardized rate and suggest that this standard error be used to determine confidence limits for the population rate. In this paper the authors put forward another method (the Confidence Factors Method) that can easily be applied to determine the confidence limits; it gives approximately the same results as the method mentioned above.

4.
Existing methods for setting confidence intervals for the difference θ between binomial proportions based on paired data perform inadequately. The asymptotic method can produce limits outside the range of validity. The ‘exact’ conditional method can yield an interval which is effectively only one-sided. Both these methods also have poor coverage properties. Better methods are described, based on the profile likelihood obtained by conditionally maximizing the proportion of discordant pairs. A refinement (methods 5 and 6) which aligns 1−α with an aggregate of tail areas produces appropriate coverage properties. A computationally simpler method based on the score interval for the single proportion also performs well (method 10). © 1998 John Wiley & Sons, Ltd.
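The failure mode of the asymptotic method is easy to reproduce. The sketch below, with made-up counts, computes the textbook Wald interval for the paired difference and shows an upper limit outside the valid range [−1, 1]:

```python
def paired_wald_ci(b, c, n, z=1.96):
    """Asymptotic (Wald) interval for the difference of paired binomial
    proportions, using discordant counts b and c out of n pairs.
    theta_hat = (b - c)/n; var_hat = (b + c - (b - c)**2 / n) / n**2."""
    theta = (b - c) / n
    se = ((b + c - (b - c) ** 2 / n) / n ** 2) ** 0.5
    return theta - z * se, theta + z * se

# 10 pairs, 9 discordant in one direction: the upper limit exceeds 1.
lo, hi = paired_wald_ci(b=9, c=0, n=10)
print(f"Wald interval: ({lo:.3f}, {hi:.3f})")  # upper limit > 1
```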

6.
Methods for estimating the size of a closed population often consist of fitting some model (e.g. a log-linear model) to data with a missing cell corresponding to the members of the population missed by all reporting sources. Although the use of the asymptotic standard error is the usual method for forming confidence intervals for the population total, the sample sizes are not always large enough to produce valid confidence intervals. We propose a method for forming confidence intervals based upon changes in a goodness-of-fit statistic associated with changes in trial values of the population total.
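A toy version of the proposed inversion, for the simplest two-source (Lincoln–Petersen) setting rather than the authors' general log-linear case, with made-up counts: for each trial value of the total N the independence model is refitted, and trial values whose likelihood-ratio statistic lies within 3.84 (chi-square, 1 df) of the minimum form the interval.

```python
from math import lgamma, log

def profile_loglik(N, n11, n10, n01):
    """Profile log-likelihood of population size N under a two-source
    independence model; capture probabilities are replaced by their MLEs
    p1 = (n11 + n10)/N and p2 = (n11 + n01)/N."""
    n_obs = n11 + n10 + n01
    p1, p2 = (n11 + n10) / N, (n11 + n01) / N
    ll = lgamma(N + 1) - lgamma(N - n_obs + 1)  # combinatorial part
    for count, prob in ((n11, p1 * p2), (n10, p1 * (1 - p2)),
                        (n01, (1 - p1) * p2), (N - n_obs, (1 - p1) * (1 - p2))):
        if count > 0:
            ll += count * log(prob)
    return ll

def gof_interval(n11, n10, n01, crit=3.84):
    """Interval of trial totals N whose likelihood-ratio statistic is
    within crit of the minimum over a grid of candidate values."""
    n_obs = n11 + n10 + n01
    grid = range(n_obs, 20 * n_obs)
    ll = {N: profile_loglik(N, n11, n10, n01) for N in grid}
    n_best = max(ll, key=ll.get)
    inside = [N for N in grid if 2 * (ll[n_best] - ll[N]) <= crit]
    return n_best, min(inside), max(inside)

n_hat, lo, hi = gof_interval(n11=20, n10=30, n01=40)
print(f"N_hat = {n_hat}, 95% interval = ({lo}, {hi})")
```

The point estimate agrees with the familiar Lincoln–Petersen value (50 × 60 / 20 = 150), while the interval is asymmetric, reflecting the skewness of the profile likelihood.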

8.
Two new methods are proposed for constructing confidence limits for quartiles. The small-sample behaviour of these two methods is compared with the Jennison and Turnbull modified Brookmeyer-Crowley method at three quartiles. The simulation study indicates that one of the new methods, the smoothed modified reflected method, is preferred over the other methods when the censoring rate is less than 40 per cent, while the Jennison and Turnbull method is preferred for heavier censoring. Results for the upper quartile are similar to those for the median. For the lower quartile with small samples and heavy censoring, semi-infinite intervals may often occur. The correct practice is to report the semi-infinite intervals without modifying them, since doing so is more informative and gives coverage closer to the nominal level.

9.
Estimates of rates of genetic and congenital disorders reported in the literature are often based on log-linear methods that model possible interactions among sources. Often the analyst chooses the simplest model consistent with the data to estimate the size of a closed population and calculates confidence intervals on the assumption that this simple model is correct. However, despite an apparently excellent fit of the data to such a model, we note here that the resulting confidence intervals may well be misleading in that they can fail to provide adequate coverage probability. We illustrate this with a simulation of a hypothetical population based on data reported in the literature from three sources. The simulated nominal 95 per cent confidence intervals contained the modelled population size only 30 per cent of the time. Use of the simpler model's interval is justified only if external considerations support the assumption of plausible interactions among sources.

12.
Thirteen methods for computing binomial confidence intervals are compared on the basis of their coverage properties, widths, and errors relative to exact limits. Use of the standard textbook method, p̂ ± 1.96√[p̂(1 − p̂)/n] with p̂ = x/n, or its continuity-corrected version, is strongly discouraged. A commonly cited rule of thumb, that alternatives to exact methods may be used when the estimated proportion p̂ satisfies np̂ > 5 and n(1 − p̂) > 5, does not ensure adequate accuracy. Score limits are easily calculated from closed-form solutions to quadratic equations and can be used at all times. Based on coverage functions, the continuity-corrected score method is recommended over exact methods. Its conservative nature should be kept in mind, as should the wider fluctuation in actual coverage that accompanies omission of the continuity correction.
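The contrast between the textbook (Wald) interval and the score interval is easy to compute. Both formulas below are standard, shown here without the continuity correction the paper ultimately recommends adding:

```python
from math import sqrt

def wald_ci(x, n, z=1.96):
    """Textbook interval p_hat +/- z * sqrt(p_hat * (1 - p_hat) / n)."""
    p = x / n
    half = z * sqrt(p * (1 - p) / n)
    return p - half, p + half

def score_ci(x, n, z=1.96):
    """Wilson score interval: closed-form roots of the quadratic
    (p_hat - p)^2 = z^2 * p * (1 - p) / n."""
    p = x / n
    center = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = (z / (1 + z * z / n)) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

# Degenerate case x = 0: the Wald interval collapses to a point,
# while the score interval still has positive width.
print(wald_ci(0, 10))   # (0.0, 0.0)
print(score_ci(0, 10))  # upper limit about 0.28
```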

15.
This paper describes a method and associated computer program for calculating exact confidence limits and P values, along with the conditional maximum likelihood estimate of the common odds ratio for a series of 2 × 2 tables. The program can be used to calculate exact estimates for matched data and is considerably faster than others currently available.

16.
A rapidly converging algorithm is given for calculating exact confidence intervals about the adjusted relative risk in follow-up studies with stratified incidence-density data. The network approach that Mehta developed for tables with person-count numerators and denominators is adapted to tables with person-count numerators and person-time denominators. This algorithm updates an earlier program by Guess et al., yielding the same quantities but with running times between ten and a hundred times faster. Applications to Poisson regression are discussed.
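For a single stratum the underlying conditional argument can be sketched directly (this is the elementary version, not the network algorithm): given the total number of cases m = x1 + x0, the exposed count is binomial with success probability π = RR·T1/(RR·T1 + T0), so exact limits for π translate into exact limits for the rate ratio. The counts below are made up.

```python
from math import comb

def binom_sf(x, m, p):
    """P(X >= x) for X ~ Binomial(m, p)."""
    return sum(comb(m, k) * p**k * (1 - p)**(m - k) for k in range(x, m + 1))

def solve(f, target, lo=0.0, hi=1.0, iters=100):
    """Bisection for f increasing on [0, 1]."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def exact_rate_ratio_ci(x1, t1, x0, t0, alpha=0.05):
    """Exact (Clopper-Pearson-style) limits for the ratio of two incidence
    densities x1/t1 and x0/t0, via the conditional binomial argument."""
    m = x1 + x0
    # Limits for pi = P(a case is exposed); binomial tails are monotone in pi.
    pi_lo = 0.0 if x1 == 0 else solve(lambda p: binom_sf(x1, m, p), alpha / 2)
    pi_hi = 1.0 if x1 == m else solve(lambda p: binom_sf(x1 + 1, m, p),
                                      1 - alpha / 2)

    def to_rr(pi):
        # Map the conditional probability pi back to the rate ratio.
        return float("inf") if pi >= 1.0 else pi / (1.0 - pi) * t0 / t1

    return to_rr(pi_lo), to_rr(pi_hi)

lo, hi = exact_rate_ratio_ci(x1=10, t1=1000.0, x0=20, t0=2000.0)
print(f"exact 95% CI for the rate ratio: ({lo:.3f}, {hi:.3f})")
```

The network algorithm of the paper extends this idea to many strata, where the conditional distribution is no longer a single binomial.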

17.
OBJECTIVE: To demonstrate how the beta distribution may be used to find confidence limits on a standardised mortality ratio (SMR) when the expected number of events is subject to random variation, and to compare these limits with those obtained from the standard exact approach used for SMRs and from a Fieller-based confidence interval. DESIGN: The relationship between the binomial and beta distributions is explained. For cohort studies in which deaths are counted in exposed and unexposed groups, exact confidence limits on the relative risk are found conditional on the total number of observed deaths. A similar method for the SMR is justified by analogy between the SMR and the relative risk found from such cohort studies, and by the fact that the relevant (beta) distribution does not require integer parameters. SOURCE OF DATA: Illustrative examples of hypothetical data were used, together with a MINITAB macro (see appendix) to perform the calculations. MAIN RESULTS: Exact confidence intervals that include error in the expected number are much wider than those found with the standard exact method. Fieller intervals are comparable with the new exact method provided the observed and expected numbers (taken to be means of Poisson variates) are large enough to approximate normality. As the expected number increases, the standard method gives results closer to the new method, but may still lead to different conclusions even with as many as 100 expected events. CONCLUSIONS: If there is reason to suppose that the expected number of deaths in an SMR is subject to sampling error (because of imprecisely estimated rates in the standard population), then exact confidence limits should be found by the methods described here, or approximate Fieller-based limits used, provided enough events are observed and expected to approximate normality.
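The "standard exact approach" the paper compares against, in which the expected number E is treated as fixed, can be sketched as follows: exact Poisson limits for the observed count, divided by E. The numbers are illustrative.

```python
from math import exp

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), by direct summation."""
    term, total = exp(-lam), exp(-lam)
    for i in range(1, k + 1):
        term *= lam / i
        total += term
    return total

def bisect(f, target, lo, hi, iters=200):
    """Bisection for f decreasing in its argument."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def exact_smr_ci(observed, expected, alpha=0.05):
    """Standard exact limits for an SMR with a fixed expected count:
    the lower limit solves P(X >= O | lam) = alpha/2 and the upper limit
    solves P(X <= O | lam) = alpha/2; both are then divided by E."""
    top = 5.0 * observed + 20.0  # generous search range for lam
    lam_lo = 0.0 if observed == 0 else bisect(
        lambda lam: poisson_cdf(observed - 1, lam), 1 - alpha / 2, 0.0, top)
    lam_hi = bisect(lambda lam: poisson_cdf(observed, lam), alpha / 2, 0.0, top)
    return lam_lo / expected, lam_hi / expected

lo, hi = exact_smr_ci(observed=10, expected=5.0)
print(f"SMR = 2.00, standard exact 95% CI: ({lo:.3f}, {hi:.3f})")
```

The paper's point is that these limits are too narrow whenever E itself carries sampling error; its beta-based limits widen accordingly.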

19.
Hofer E. Health Physics 2007;92(3):226-235.
Hypothesis testing, statistical power, and confidence limits are concepts from classical statistics that require data from observations. In some important recent applications some of the data are not observational but are reconstructed by computer models. There is generally epistemic uncertainty in the model formulations, as well as in parameter and input values. The resulting epistemic uncertainty of the reconstructed data is determined by an uncertainty analysis and is expressed by subjective probability distributions. Sometimes only the mean or median values of these distributions are used in the concepts mentioned above, which hides the uncertainty of the data and thereby renders misleading results. Misleading results are also obtained if the epistemic uncertainty of the data is combined incorrectly with the stochastic variability of the outcome of the actual random process concerned. This paper argues that an uncertainty analysis of the application of the classical statistical concepts is essentially the correct way of dealing with the epistemic uncertainty of the data. A practical example serves as an illustration.
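The argument can be illustrated with a toy two-dimensional Monte Carlo: an outer loop samples the epistemic uncertainty of a model-reconstructed quantity, an inner step applies a classical procedure to stochastic data, and the output is a distribution of confidence limits rather than a single interval. All distributions and numbers below are invented for illustration.

```python
import random
import statistics

random.seed(1)

N_EPISTEMIC = 200   # outer loop: subjective distribution of the model input
N_OBS = 25          # inner loop: stochastic variability of the observations

lower_limits = []
for _ in range(N_EPISTEMIC):
    # Epistemic draw: the mean reconstructed by the model is itself
    # uncertain (subjective, lognormal-like distribution).
    true_mean = random.lognormvariate(mu=0.0, sigma=0.3)
    # Aleatory draws: observations scatter around the sampled mean.
    sample = [random.gauss(true_mean, 0.5) for _ in range(N_OBS)]
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / N_OBS ** 0.5
    lower_limits.append(m - 1.96 * se)  # classical 95% lower limit

# Using only the median epistemic value would report a single limit
# and hide this spread entirely.
print(f"median lower limit: {statistics.median(lower_limits):.3f}")
print(f"epistemic spread:   {min(lower_limits):.3f} .. {max(lower_limits):.3f}")
```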
