Similar Literature
A total of 20 similar records were found.
1.
2.
STUDY OBJECTIVE: To establish the prevalence of problem drug use in the 10 local authorities within the Metropolitan County of Greater Manchester between April 2000 and March 2001. SETTING AND PARTICIPANTS: Problem drug users aged 16-54 resident within Greater Manchester who attended community based statutory drug treatment agencies, were in contact with general practitioners, were assessed by arrest referral workers, were in contact with the probation service, or were arrested under the Misuse of Drugs Act for offences involving possession of opioids, cocaine, or benzodiazepines. DESIGN: Multi-sample stratified capture-recapture analysis. Patterns of overlap between data sources were modelled in a log-linear regression to estimate the hidden number of drug users within each of 60 area, age group, and gender strata. Simulation methods were used to generate 95% confidence intervals for the sums of the stratified estimates. MAIN RESULTS: The total number of problem drug users in Greater Manchester was estimated to be 19 255, giving a prevalence of problem drug use of 13.7 (95% CI 13.4 to 15.7) per 1000 population aged 16-54. The ratio of men to women was 3.5:1. The distribution of problem drug users varied across three age groups (16-24, 25-34, and 35-54) and varied between the 10 areas. CONCLUSIONS: Areas in close geographical proximity display different patterns of drug use in terms of prevalence rates and age and gender patterns. This has important implications, both for future planning of service provision and for the way in which the impact of drug misuse interventions is evaluated.
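The estimation step can be made concrete. Below is a minimal sketch (not the study's code) of log-linear capture-recapture with three hypothetical sources and a main-effects-only (independence) model; the study itself combined more sources, richer interaction terms, and 60 strata, with simulation for the interval estimates. All counts here are invented.

```python
# A minimal log-linear capture-recapture sketch: three hypothetical
# sources A, B, C, a main-effects (independence) Poisson model, and the
# unobserved cell read off the fitted intercept. Counts are invented.
import numpy as np
import statsmodels.api as sm

# One row per observed overlap pattern (in source A / B / C or not).
patterns = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                     [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
counts = np.array([420, 310, 150, 55, 30, 20, 10])

X = sm.add_constant(patterns.astype(float))
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()

# The missing (0,0,0) cell is the fitted count at all-zero covariates,
# i.e. exp(intercept); the total adds it to everyone who was observed.
hidden = np.exp(fit.params[0])
print(f"hidden: {hidden:.0f}, estimated total N: {counts.sum() + hidden:.0f}")
```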

3.
Health researchers commonly use logistic regression when profiling health providers. Data from the patients treated by the providers are used to construct models predicting the expected number of outcomes for each provider, and the ratio of observed to expected outcomes (O/E ratio) is used as a risk-adjusted measure of provider performance. Typically, when calculating the standard deviation (SD) of O/E ratios, only O is treated as a random variable. We used the propagation of errors (Pe) to derive an SD estimate that accounts for variability in both O and the estimate of E. Using data previously used to profile Canadian cardiac surgery providers, we compared Pe-SD estimates with typical SD (SDT) estimates. The SDT estimates and confidence intervals were always larger than the Pe estimates, most notably when one or more providers treated a large proportion of the patients. This was confirmed using computer simulations. SDT estimates should be abandoned in favor of more sophisticated estimates.
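The contrast the abstract draws can be written with the first-order delta method. A hedged sketch, not the paper's derivation: for R = O/E, propagation of errors gives Var(R) ≈ Var(O)/E² + O²·Var(E)/E⁴ − 2·O·Cov(O,E)/E³, with Var(O) = O under a Poisson assumption; the typical SD keeps only the first term. The covariance term is what can pull the propagated SD below the typical one when a provider's patients dominate the data used to fit the risk model. All numbers below are invented.

```python
# First-order propagation of errors for R = O/E (a sketch, not the
# paper's derivation). The "typical" SD treats E as a known constant;
# the propagated version carries Var(E) and Cov(O, E) as well.
import math

def oe_sd_typical(O, E):
    # Var(O/E) with only O random and Var(O) = O (Poisson): sqrt(O)/E.
    return math.sqrt(O) / E

def oe_sd_propagated(O, E, var_E, cov_OE=0.0):
    # Var(O/E) ~= Var(O)/E^2 + O^2*Var(E)/E^4 - 2*O*Cov(O,E)/E^3.
    # A positive Cov(O, E), e.g. a provider whose patients dominate the
    # fitted risk model, can make this smaller than the typical SD.
    return math.sqrt(O / E**2 + O**2 * var_E / E**4 - 2 * O * cov_OE / E**3)

# Hypothetical provider: 30 observed events, 25.0 expected; Var(E) and
# Cov(O, E) would come from the fitted model (values invented here).
print(oe_sd_typical(30, 25.0))                # ~0.219
print(oe_sd_propagated(30, 25.0, 4.0, 5.0))   # ~0.195, smaller
```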

4.
Nauta JJ, de Bruijn IA. Vaccine 2006;24(44-46):6643-6644
In a well-known CHMP Note for Guidance, criteria are given for evaluating the results of annual-update studies. It is sometimes suggested that these criteria could be improved by the use of confidence intervals. Here it is argued that this suggestion rests on a misinterpretation of what confidence intervals are: such intervals require that subjects be chosen at random from a population, which is never the case in influenza studies.

5.
Methods for estimating the size of a closed population often consist of fitting some model (e.g. a log-linear model) to data with a missing cell corresponding to the members of the population missed by all reporting sources. Although the use of the asymptotic standard error is the usual method for forming confidence intervals for the population total, the sample sizes are not always large enough to produce valid confidence intervals. We propose a method for forming confidence intervals based upon changes in a goodness-of-fit statistic associated with changes in trial values of the population total.
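A toy version of the proposal, for the simplest case of two sources under independence: treat the total N as the trial value, plug in the capture-probability MLEs given N, and keep every N whose likelihood-ratio statistic (the change in the goodness-of-fit measure) stays under the chi-square cutoff. Counts are invented; the general case profiles a log-linear model the same way.

```python
# Toy version: two sources, independence model, hypothetical counts.
# Scan trial totals N; keep every N whose likelihood-ratio statistic
# stays under the 95% chi-square cutoff.
import numpy as np
from scipy.special import gammaln
from scipy.stats import chi2

n11, n10, n01 = 60, 140, 90           # in both / A only / B only
n_obs = n11 + n10 + n01

def profile_loglik(N):
    n00 = N - n_obs                   # members missed by both sources
    pA = (n11 + n10) / N              # capture-probability MLEs given N
    pB = (n11 + n01) / N
    cells = np.array([n11, n10, n01, n00])
    probs = np.array([pA * pB, pA * (1 - pB), (1 - pA) * pB,
                      (1 - pA) * (1 - pB)])
    return (gammaln(N + 1) - gammaln(cells + 1).sum()
            + (cells * np.log(probs)).sum())

Ns = np.arange(n_obs, 3 * n_obs)
ll = np.array([profile_loglik(N) for N in Ns])
inside = 2 * (ll.max() - ll) <= chi2.ppf(0.95, df=1)
print(f"N_hat = {Ns[ll.argmax()]}, 95% CI {Ns[inside][0]} to {Ns[inside][-1]}")
```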

6.
Influenza vaccine trials typically report vaccine efficacy for infection-confirmed symptomatic illness. Data on indirect vaccine efficacy for susceptibility, the degree of vaccine protection of susceptibles, or indirect vaccine efficacy for illness given infection are sparse. Using inactivated influenza vaccine randomized trial data, we calculated an indirect vaccine efficacy for susceptibility of 20% [95% CI 9-30] and an indirect vaccine efficacy for illness among infected persons of 12% [95% CI 2-22], values inferior to a direct vaccine efficacy for infection-confirmed symptomatic illness of 55% [95% CI −21 to 84] and an indirect effect of 61% [95% CI 8-83]. Such data reveal how the vaccine's protective efficacy varies across the multiple direct and indirect efficacy measures.
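The decomposition behind these measures is simple arithmetic on attack rates, sketched below with invented counts (not the trial's data): efficacy for susceptibility uses infection attack rates, efficacy for illness given infection conditions on the infected, and the two multiply into the efficacy for symptomatic illness.

```python
# Invented counts, chosen only to make the decomposition visible.
n_vac = n_pla = 1000
inf_vac, inf_pla = 160, 200       # laboratory-confirmed infections
ill_vac, ill_pla = 56, 80         # symptomatic cases among the infected

VE_S = 1 - (inf_vac / n_vac) / (inf_pla / n_pla)      # susceptibility: 0.20
VE_P = 1 - (ill_vac / inf_vac) / (ill_pla / inf_pla)  # illness | infection: 0.125
VE_SP = 1 - (ill_vac / n_vac) / (ill_pla / n_pla)     # symptomatic illness: 0.30

# The measures multiply: (1 - VE_SP) == (1 - VE_S) * (1 - VE_P).
print(VE_S, VE_P, VE_SP)
```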

7.
8.
In many experiments, it is necessary to evaluate the effectiveness of a treatment by comparing the responses of two groups of subjects. This evaluation is often performed by using a confidence interval for the difference between the population means. To compute the limits of this confidence interval, researchers usually use the pooled t formulas, which are derived by assuming normally distributed errors. When the normality assumption does not seem reasonable, the researcher may have little confidence in the confidence interval because the actual one-sided coverage probability may not be close to the nominal coverage probability. This problem can be avoided by using the Robbins–Monro iterative search method to calculate the limits. One problem with this iterative procedure is that it is not clear when the procedure produces a sufficiently accurate estimate of a limit. In this paper, we describe a multiple search method that allows the user to specify the accuracy of the limits. We also give guidance concerning the number of iterations that would typically be needed to achieve a specified accuracy. This multiple iterative search method will produce limits for one-sided and two-sided confidence intervals that maintain their coverage probabilities with non-normal distributions. Copyright © 2014 John Wiley & Sons, Ltd.
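The abstract does not reproduce the multiple-search procedure itself, but the underlying Robbins–Monro search is easy to sketch on a case with a known answer. Assumptions here: a binomial proportion instead of a two-group mean difference, one upper limit, and a fixed tuning constant c; the exact Clopper-Pearson limit serves as the check.

```python
# Robbins-Monro search for the upper limit u of a binomial proportion.
# The drift of the update is proportional to P(X <= x_obs | u) - alpha,
# so the 1/i steps settle where that tail probability equals alpha,
# which is the defining equation of the exact upper limit.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(1)
x_obs, n, alpha = 12, 50, 0.025        # one tail of a 95% interval
u, c = x_obs / n, 0.3                  # start at p-hat; c is a tuning constant

for i in range(1, 50_001):
    x_star = rng.binomial(n, u)        # simulate at the current trial limit
    if x_star <= x_obs:
        u += c * (1 - alpha) / i       # limit too low: push up
    else:
        u -= c * alpha / i             # limit too high: pull down
    u = min(max(u, 1e-6), 1 - 1e-6)

print("Robbins-Monro upper limit:  ", round(u, 4))
print("Clopper-Pearson upper limit:",
      round(beta.ppf(1 - alpha, x_obs + 1, n - x_obs), 4))
```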

9.
Rosner B, Willett WC, Spiegelman D. Statistics in Medicine 1989;8(9):1051-69; discussion 1071-3
Errors in the measurement of exposure that are independent of disease status tend to bias relative risk estimates and other measures of effect in epidemiologic studies toward the null value. Two methods are provided to correct relative risk estimates obtained from logistic regression models for measurement errors in continuous exposures within cohort studies that may be due to either random (unbiased) within-person variation or to systematic errors for individual subjects. These methods require a separate validation study to estimate the regression coefficient lambda relating the surrogate measure to true exposure. In the linear approximation method, the true logistic regression coefficient beta* is estimated by beta/lambda, where beta is the observed logistic regression coefficient based on the surrogate measure. In the likelihood approximation method, a second-order Taylor series expansion is used to approximate the logistic function, enabling closed-form likelihood estimation of beta*. Confidence intervals for the corrected relative risks are provided that include a component representing error in the estimation of lambda. Based on simulation studies, both methods perform well for true odds ratios up to 3.0; for higher odds ratios the likelihood approximation method was superior with respect to both bias and coverage probability. An example is provided based on data from a prospective study of dietary fat intake and risk of breast cancer and a validation study of the questionnaire used to assess dietary fat intake.
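A sketch of the linear approximation method as stated, on simulated data with invented effect sizes; the CI uses a delta-method variance that adds the component for error in lambda, and the likelihood approximation method is not shown. (For simplicity the "validation study" here is a subset of the simulated subjects rather than a separate study.)

```python
# Regression calibration sketch: beta* = beta / lambda, with
# Var(beta*) ~= Var(beta)/lambda^2 + beta^2*Var(lambda)/lambda^4.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, beta_true = 5000, 0.8

x_true = rng.normal(size=n)                       # true exposure
x_surr = x_true + rng.normal(scale=0.7, size=n)   # error-prone surrogate
y = rng.binomial(1, 1 / (1 + np.exp(-(-1 + beta_true * x_true))))

# Main study: logistic regression on the surrogate measure.
main = sm.GLM(y, sm.add_constant(x_surr), family=sm.families.Binomial()).fit()
b, se_b = main.params[1], main.bse[1]

# Validation subset: regress true exposure on the surrogate for lambda.
val = sm.OLS(x_true[:500], sm.add_constant(x_surr[:500])).fit()
lam, se_lam = val.params[1], val.bse[1]

b_corr = b / lam
se_corr = np.sqrt(se_b**2 / lam**2 + b**2 * se_lam**2 / lam**4)
lo, hi = b_corr - 1.96 * se_corr, b_corr + 1.96 * se_corr
print(f"OR naive {np.exp(b):.2f}, corrected {np.exp(b_corr):.2f} "
      f"(95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f})")
```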

10.
Conventionally, a confidence interval (CI) for the standardized mortality ratio is set using the conservative CI for a Poisson expectation, μ. Employing the mid-P argument, we present alternative CIs that are shorter than the conventional ones. The mid-P intervals do not guarantee the nominal confidence level, but the true coverage probability falls below the nominal level only for a few short ranges of μ. The implications for mid-P confidence intervals of various proposed definitions of two-sided tests for discrete data are discussed.
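A sketch of the mid-P limits (illustrative counts, not the paper's): each limit solves a tail equation in which the probability of the observed count gets half weight rather than full weight, which is what shortens the interval relative to the conventional exact one.

```python
# Mid-P interval for a Poisson expectation mu, applied to SMR = x / E.
from scipy.optimize import brentq
from scipy.stats import poisson

def midp_ci(x, alpha=0.05):
    def lower_eq(mu):   # P(X > x | mu) + 0.5*P(X = x | mu) - alpha/2
        return poisson.sf(x, mu) + 0.5 * poisson.pmf(x, mu) - alpha / 2
    def upper_eq(mu):   # P(X < x | mu) + 0.5*P(X = x | mu) - alpha/2
        return poisson.cdf(x - 1, mu) + 0.5 * poisson.pmf(x, mu) - alpha / 2
    lo = brentq(lower_eq, 1e-8, x) if x > 0 else 0.0
    hi = brentq(upper_eq, x, 10 * x + 20)
    return lo, hi

x, E = 15, 9.5                       # observed and expected deaths (invented)
lo, hi = midp_ci(x)
print(f"SMR {x / E:.2f}, 95% mid-P CI {lo / E:.2f} to {hi / E:.2f}")
```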

11.
Frequently, covariates used in a logistic regression are measured with error. The authors previously described the correction of logistic regression relative risk estimates for measurement error in one or more covariates when a "gold standard" is available for exposure assessment. For some exposures (e.g., serum cholesterol), no gold standard exists, and one must assess measurement error via a reproducibility substudy. In this paper, the authors present measurement error methods for logistic regression when there is error (possibly correlated) in one or more covariates and one has data from both a main study and a reproducibility substudy. Confidence intervals from this procedure reflect error in parameter estimates from both studies. These methods are applied to the Framingham Heart Study, where the 10-year incidence of coronary heart disease is related to several coronary risk factors among 1,731 men disease-free at examination 4. Reproducibility data are obtained from the subgroup of 1,346 men seen at examinations 2 and 3. Estimated odds ratios comparing extreme quintiles for risk factors with substantial error were increased after correction for measurement error (serum cholesterol, 2.2 vs. 2.9; serum glucose, 1.3 vs. 1.5; systolic blood pressure, 2.8 vs. 3.8), but were generally decreased or unchanged for risk factors with little or no error (body mass index, 1.6 vs. 1.6; age 65-69 years vs. 35-44 years, 4.3 vs. 3.8; smoking, 1.7 vs. 1.7).

12.
13.
We propose two measures of performance for a confidence interval for a binomial proportion p: the root mean squared error and the mean absolute deviation. We also devise a confidence interval for p based on the actual coverage function that combines several existing approximate confidence intervals. This "Ensemble" confidence interval has improved statistical properties over the constituent confidence intervals. Software in an R package, which can be used in devising and assessing these confidence intervals, is available on CRAN.
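The two measures are easy to compute exactly for any interval, because binomial coverage is a finite sum. The sketch below evaluates Wald and Wilson intervals; the paper's Ensemble interval lives in its R package, whose name the abstract does not give, so it is not reproduced here.

```python
# Exact coverage C(p) of a binomial CI, then RMSE and MAD of the miss
# from the nominal level over a grid of p.
import numpy as np
from scipy.stats import binom

def wald(x, n, z=1.96):
    p = x / n
    h = z * np.sqrt(p * (1 - p) / n)
    return p - h, p + h

def wilson(x, n, z=1.96):
    p, z2 = x / n, z * z
    mid = (p + z2 / (2 * n)) / (1 + z2 / n)
    h = z * np.sqrt(p * (1 - p) / n + z2 / (4 * n**2)) / (1 + z2 / n)
    return mid - h, mid + h

def coverage(ci, n, p):
    xs = np.arange(n + 1)
    lo, hi = np.array([ci(x, n) for x in xs]).T
    return binom.pmf(xs, n, p)[(lo <= p) & (p <= hi)].sum()

n, level = 40, 0.95
grid = np.linspace(0.01, 0.99, 197)
for name, ci in [("Wald", wald), ("Wilson", wilson)]:
    cov = np.array([coverage(ci, n, p) for p in grid])
    print(f"{name}: RMSE {np.sqrt(np.mean((cov - level)**2)):.4f}, "
          f"MAD {np.mean(np.abs(cov - level)):.4f}")
```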

14.
15.
In this paper we outline and illustrate an easy-to-program method for analytically calculating both parametric and non-parametric bootstrap-type confidence intervals for quantiles of the survival distribution based on right-censored data. This new approach allows for the incorporation of covariates within the framework of parametric models. The procedure is based upon the notion of fractional order statistics and is carried out using a simple beta transformation of the estimated survival function (parametric or non-parametric). It is the only direct method currently available, in the sense that all other methods are based on inverting test statistics or employing confidence intervals for other survival quantities. We illustrate that the new method has favourable coverage probabilities for median confidence intervals compared with six other competing methods.

16.
Likelihood-based confidence intervals for a log-normal mean
Wu J, Wong AC, Jiang G. Statistics in Medicine 2003;22(11):1849-1860
To construct a confidence interval for the mean of a log-normal distribution in small samples, we propose likelihood-based approaches: the signed log-likelihood ratio and modified signed log-likelihood ratio methods. Extensive Monte Carlo simulation results show the advantages of the modified signed log-likelihood ratio method over the signed log-likelihood ratio method and other methods. In particular, the modified signed log-likelihood ratio method produces a confidence interval with a nearly exact coverage probability and highly accurate and symmetric error probabilities even for extremely small sample sizes. We then apply the methods to two sets of real-life data.
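A sketch of the first (unmodified) method on simulated data: write ψ = μ + σ²/2 for the log of the log-normal mean, profile out the nuisance variance, and invert the signed log-likelihood root r(ψ) at ±1.96. The modified method adds a higher-order adjustment to r that is not reproduced here.

```python
# Signed log-likelihood ratio CI for the log-normal mean exp(psi),
# psi = mu + sigma^2/2, nuisance variance profiled out numerically.
import numpy as np
from scipy.optimize import minimize_scalar, brentq

rng = np.random.default_rng(7)
y = np.log(rng.lognormal(mean=1.0, sigma=0.8, size=15))   # log-scale data
n, ybar, s2 = len(y), y.mean(), y.var()                   # MLE variance

def loglik(mu, sig2):
    return -0.5 * n * np.log(sig2) - ((y - mu)**2).sum() / (2 * sig2)

def profile(psi):   # maximize over sigma^2 with mu = psi - sigma^2/2
    res = minimize_scalar(
        lambda ls: -loglik(psi - np.exp(ls) / 2, np.exp(ls)),
        bounds=(np.log(s2) - 8, np.log(s2) + 8), method="bounded")
    return -res.fun

psi_hat = ybar + s2 / 2
l_max = loglik(ybar, s2)

def r(psi):         # signed log-likelihood root
    return np.sign(psi_hat - psi) * np.sqrt(2 * max(l_max - profile(psi), 0.0))

lo = brentq(lambda p: r(p) - 1.96, psi_hat - 5, psi_hat)
hi = brentq(lambda p: r(p) + 1.96, psi_hat, psi_hat + 5)
print(f"95% CI for the log-normal mean: {np.exp(lo):.2f} to {np.exp(hi):.2f}")
```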

17.
18.
Although confidence intervals (CIs) for binary isotonic regression and current status survival data have been well studied theoretically, their practical application has been limited, in part because of poor performance in small samples and in part because of computational difficulties. Ghosh, Banerjee, and Biswas (2008, Biometrics 64, 1009-1017) described three approaches to constructing CIs: (i) the Wald-based method; (ii) the subsampling-based method; and (iii) the likelihood-ratio test (LRT)-based method. In simulation studies, they found that the subsampling-based and LRT-based methods tend to have better coverage probabilities than a simple Wald-based method, which may perform poorly at realistic sample sizes. However, software implementing these approaches is currently unavailable. In this article, we show that by using transformations, simple Wald-based CIs can be improved in small and moderate samples to have performance competitive with the LRT-based method. Our simulations further show that a simple nonparametric bootstrap gives approximately correct CIs for the data generating mechanisms that we consider. We provide an R package that can be used to compute the Wald-type and bootstrap CIs and demonstrate its practical utility with two real data analyses. Copyright © 2012 John Wiley & Sons, Ltd.
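The R package is not named in the abstract, so below is only a generic sketch of the simple nonparametric bootstrap it describes: resample (x, y) pairs, refit the isotonic regression, and take percentile limits for the fitted probability at one point. Data are simulated; sklearn's pool-adjacent-violators implementation stands in for whatever the package uses.

```python
# Pairs-bootstrap percentile CI for binary isotonic regression at x0.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(3)
n, x0 = 200, 0.5
x = rng.uniform(0, 1, n)
y = rng.binomial(1, 0.2 + 0.6 * x)          # true P(Y=1|x) is increasing

def iso_at(xs, ys, x_new):
    return IsotonicRegression(out_of_bounds="clip").fit(xs, ys).predict([x_new])[0]

est = iso_at(x, y, x0)
boot = np.empty(2000)
for b in range(boot.size):
    idx = rng.integers(0, n, n)             # resample (x, y) pairs
    boot[b] = iso_at(x[idx], y[idx], x0)

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"estimate at x0={x0}: {est:.3f}, 95% bootstrap CI ({lo:.3f}, {hi:.3f})")
```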

19.
The intraclass correlation coefficient rho plays a key role in the design of cluster randomized trials. Estimates of rho obtained from previous cluster trials and used to inform sample size calculation in planned trials may be imprecise due to the typically small numbers of clusters in such studies. It may be useful to quantify this imprecision. This study used simulation to compare different methods for assigning bootstrap confidence intervals to rho for continuous outcomes from a balanced design. Data were simulated for combinations of numbers of clusters (10, 30, 50), intraclass correlation coefficients (0.001, 0.01, 0.05, 0.3) and outcome distributions (normal, non-normal continuous). The basic, bootstrap-t, percentile, bias corrected and bias corrected accelerated bootstrap intervals were compared with new methods using the basic and bootstrap-t intervals applied to a variance stabilizing transformation of rho. The standard bootstrap methods provided coverage levels for 95 per cent intervals that were markedly lower than the nominal level for data sets with only 10 clusters, and only provided close to 95 per cent coverage when there were 50 clusters. Application of the bootstrap-t method to the variance stabilizing transformation of rho improved upon the performance of the standard bootstrap methods, providing close to nominal coverage.
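A sketch of the transformation idea on simulated balanced data, with two stated simplifications: the basic bootstrap interval is used instead of the bootstrap-t that the paper found best, and the variance stabilizing transformation is taken to be the Fisher-type map Z = ½·log((1+(m−1)ρ)/(1−ρ)) for cluster size m. Clusters, not individuals, are resampled.

```python
# Cluster bootstrap for the one-way ANOVA ICC, basic interval taken on
# a Fisher-type transformed scale and back-transformed at the end.
import numpy as np

rng = np.random.default_rng(11)
k, m, rho_true = 30, 10, 0.05
data = (rng.normal(0, np.sqrt(rho_true), (k, 1))
        + rng.normal(0, np.sqrt(1 - rho_true), (k, m)))

def icc(y):                      # one-way ANOVA estimator, balanced design
    k_, m_ = y.shape
    msb = m_ * y.mean(axis=1).var(ddof=1)
    msw = y.var(axis=1, ddof=1).mean()
    return (msb - msw) / (msb + (m_ - 1) * msw)

def z(rho):                      # variance stabilizing transform
    return 0.5 * np.log((1 + (m - 1) * rho) / (1 - rho))

def z_inv(zv):
    e = np.exp(2 * zv)
    return (e - 1) / (e + m - 1)

z_hat = z(icc(data))
z_boot = np.array([z(icc(data[rng.integers(0, k, k)])) for _ in range(2000)])

# Basic bootstrap interval on the transformed scale, then back-transform.
lo_z = 2 * z_hat - np.percentile(z_boot, 97.5)
hi_z = 2 * z_hat - np.percentile(z_boot, 2.5)
print(f"ICC {icc(data):.3f}, 95% CI ({z_inv(lo_z):.3f}, {z_inv(hi_z):.3f})")
```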

20.
This paper proposes a method for computing conservative confidence intervals for a group sequential test in which an adaptive design change is made one or more times over the course of the trial. The key idea, due to Müller and Schäfer (Biometrics 2001; 57:886-891), is that by preserving the null conditional rejection probability of the remainder of the trial at the time of each adaptive change, the overall type I error rate, taken unconditionally over all possible design modifications, is also preserved. We show how this principle may be extended to construct one-sided confidence intervals by applying the idea to a sequence of dual tests derived from the repeated confidence intervals (RCIs) proposed by Jennison and Turnbull (J. Roy. Statist. Soc. B 1989; 51:301-361). These adaptive RCIs, like their classical counterparts, have the advantage that they preserve the desired coverage probability even if the pre-specified stopping rule is over-ruled. The statistical methodology is explored by simulations and is illustrated by an application to a clinical trial of deep brain stimulation for Parkinson's disease.
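The quantity everything hinges on, the null conditional rejection probability, is a one-line normal computation in the simplest two-stage case. A sketch under stated assumptions (a single final analysis with critical value c, independent normal increments, information fraction t at the interim; not the paper's full RCI construction):

```python
# Conditional rejection probability under H0 for a two-stage design:
# any redesign of the remainder of the trial that spends exactly this
# conditional level preserves the overall type I error rate.
from scipy.stats import norm

def crp_h0(z1, t, c):
    """P(reject at the end | interim z1, H0), for information fraction t
    and final critical value c, using independent normal increments:
    Z_final = sqrt(t)*z1 + sqrt(1-t)*Z_rest."""
    return norm.sf((c - t**0.5 * z1) / (1 - t)**0.5)

c = norm.ppf(0.975)              # final critical value, alpha = 0.025 one-sided
for z1 in (0.0, 1.0, 2.0):       # illustrative interim statistics
    print(f"z1 = {z1:.1f}: conditional rejection probability "
          f"{crp_h0(z1, t=0.5, c=c):.4f}")
```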
