Similar documents
20 similar documents found (search time: 46 ms).
1.
Confidence intervals for the 50 per cent response dose are usually computed using either the Delta method or Fieller's procedure. Recently, confidence intervals computed by inverting the asymptotic likelihood ratio test have also been recommended. There is some controversy as to which of these methods should be used. By means of an extensive simulation study we examine these methods as well as confidence intervals obtained by the approximate bootstrap confidence (ABC) procedure and an adjusted form of the likelihood ratio based confidence intervals.
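The contrast between the delta method and Fieller's procedure can be sketched as follows, assuming a logistic dose-response fit with intercept a, slope b, and their estimated covariance terms; the 50 per cent response dose is m = −a/b. All numeric values here are hypothetical illustrations, not from the study.

```python
import math

def delta_ci(a, b, Vaa, Vab, Vbb, z=1.96):
    """Delta-method CI for the 50% response dose m = -a/b."""
    m = -a / b
    # Var(m) from the gradient (-1/b, -m/b) of m with respect to (a, b)
    se = math.sqrt(Vaa + 2 * m * Vab + m * m * Vbb) / abs(b)
    return m - z * se, m + z * se

def fieller_ci(a, b, Vaa, Vab, Vbb, z=1.96):
    """Fieller CI: the set of m with (a + b*m)^2 <= z^2 * Var(a + b*m),
    obtained as the roots of a quadratic in m."""
    A = b * b - z * z * Vbb
    B = 2 * (a * b - z * z * Vab)
    C = a * a - z * z * Vaa
    disc = B * B - 4 * A * C
    if A <= 0 or disc < 0:
        return None  # slope not clearly nonzero: the Fieller set is unbounded
    r = math.sqrt(disc)
    return (-B - r) / (2 * A), (-B + r) / (2 * A)
```

The delta interval is always symmetric about −a/b, while Fieller's interval need not be; that asymmetry (and the possibility of an unbounded Fieller set) is what simulation comparisons like this one probe.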

2.
Methods for estimating the size of a closed population often consist of fitting some model (e.g. a log-linear model) to data with a missing cell corresponding to the members of the population missed by all reporting sources. Although the use of the asymptotic standard error is the usual method for forming confidence intervals for the population total, the sample sizes are not always large enough to produce valid confidence intervals. We propose a method for forming confidence intervals based upon changes in a goodness-of-fit statistic associated with changes in trial values of the population total.
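A minimal sketch of this idea for the simplest such model — a two-list capture-recapture table with independent lists and a missing cell x00 — is to scan trial totals N and keep those whose deviance stays within the chi-square(1) cutoff of the best-fitting N. The counts below are hypothetical; the paper's log-linear setting is more general.

```python
import math

def loglik(N, x11, x10, x01):
    """Multinomial log-likelihood of population size N for a two-list
    capture-recapture model with independent lists (x00 = missed by both)."""
    n_obs = x11 + x10 + x01
    x00 = N - n_obs
    if x00 < 0:
        return float("-inf")
    p1 = (x11 + x10) / N          # conditional MLEs of the capture probabilities
    p2 = (x11 + x01) / N
    cells = [(x11, p1 * p2), (x10, p1 * (1 - p2)),
             (x01, (1 - p1) * p2), (x00, (1 - p1) * (1 - p2))]
    ll = math.lgamma(N + 1) - sum(math.lgamma(x + 1) for x, _ in cells)
    return ll + sum(x * math.log(p) for x, p in cells if x > 0)

def profile_ci(x11, x10, x01, crit=3.84):
    """Approximate 95% CI: the trial totals N whose deviance relative to
    the best N stays within the chi-square(1) critical value."""
    n_obs = x11 + x10 + x01
    lls = {N: loglik(N, x11, x10, x01) for N in range(n_obs, 20 * n_obs)}
    best = max(lls.values())
    keep = [N for N, ll in lls.items() if 2 * (best - ll) <= crit]
    return min(keep), max(keep)
```

For x11 = 20, x10 = 30, x01 = 40 the best-fitting N sits near the Lincoln-Petersen estimate (50 × 60) / 20 = 150, and the interval is asymmetric around it, which is the behaviour a standard-error-based interval cannot capture.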

3.
OBJECTIVE: We compared outcomes, safety, and resource utilization in a collaborative management birth center model of perinatal care versus traditional physician-based care. METHODS: We studied 2957 low-risk, low-income women: 1808 receiving collaborative care and 1149 receiving traditional care. RESULTS: Major antepartum (adjusted risk difference [RD] = -0.5%; 95% confidence interval [CI] = -2.5, 1.5), intrapartum (adjusted RD = 0.8%; 95% CI = -2.4, 4.0), and neonatal (adjusted RD = -1.8%; 95% CI = -3.8, 0.1) complications were similar, as were neonatal intensive care unit admissions (adjusted RD = -1.3%; 95% CI = -3.8, 1.1). Collaborative care had a greater number of normal spontaneous vaginal deliveries (adjusted RD = 14.9%; 95% CI = 11.5, 18.3) and less use of epidural anesthesia (adjusted RD = -35.7%; 95% CI = -39.5, -31.8). CONCLUSIONS: For low-risk women, both scenarios result in safe outcomes for mothers and babies. However, fewer operative deliveries and medical resources were used in collaborative care.

4.
Objective: The most probable number (MPN) method is widely used to test for many food-safety indicator organisms, but because most practitioners are unfamiliar with its underlying principles and rules of use, the method is often misapplied and its results misjudged. This paper addresses these problems to make the method easier to apply correctly. Methods: MPN values and their confidence intervals were analyzed comprehensively using the Poisson model for bacterial concentration; domestic and international standard methods were compared, and worked examples were used to illustrate the rules for applying MPN tables in practice. Results: An exact formula for computing the confidence interval of an MPN value was derived; a new MPN table was constructed by introducing the probability-class grouping of the ISO standard; and the FDA rule for selecting consecutive dilutions with five parallel samples was extended to the nine-tube method, with a table of worked examples. Conclusion: The confidence interval of an MPN value can be computed exactly from the mathematical model, and the results provide a useful reference for further revision of domestic and international standard methods. The key to the MPN method is serially diluting the sample until negative tubes appear; determining the MPN value depends on correctly selecting the consecutive dilutions from the test results.
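The Poisson dilution model behind the MPN value can be sketched in a few lines: at concentration λ, a tube inoculated with volume v is positive with probability 1 − e^(−λv), and the MPN is the λ that maximizes the resulting likelihood. This is a minimal illustration using the standard 3-tube, three ten-fold dilution layout, not the paper's exact confidence-interval derivation.

```python
import math

def mpn(volumes, tubes, positives, lo=1e-6, hi=1e6):
    """MPN per unit volume: MLE of lambda in the Poisson dilution model,
    found by bisection on the (monotone decreasing) score function.
    Assumes at least one positive and at least one negative tube overall."""
    def score(lam):
        s = 0.0
        for v, n, x in zip(volumes, tubes, positives):
            t = lam * v
            if x > 0 and t < 700.0:            # term vanishes for huge t
                s += x * v / (math.exp(t) - 1.0)
            s -= (n - x) * v
        return s
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if score(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

For the positive pattern 3-1-0 with 3 tubes at 0.1, 0.01 and 0.001 g, the MLE comes out near 43 per gram, in line with the value tabulated for that pattern in standard MPN tables.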

5.
The over‐dispersion parameter is an important and versatile measure in the analysis of one‐way layout of count data in biological studies. For example, it is commonly used as an inverse measure of aggregation in biological count data. Its estimation from finite data sets is a recognized challenge. Many simulation studies have examined the bias and efficiency of different estimators of the over‐dispersion parameter for finite data sets (see, for example, Clark and Perry, Biometrics 1989; 45:309–316 and Piegorsch, Biometrics 1990; 46:863–867), but little attention has been paid to the accuracy of its confidence intervals (CIs). In this paper, we first derive asymptotic procedures for the construction of confidence limits for the over‐dispersion parameter using four estimators that are specified by only the first two moments of the counts. We also obtain closed‐form asymptotic variance formulae for these four estimators. In addition, we consider the asymptotic CI based on the maximum likelihood (ML) estimator using the negative binomial model. It appears from the simulation results that the asymptotic CIs based on these five estimators have coverage below the nominal coverage probability. To remedy this, we also study the properties of the asymptotic CIs based on the restricted estimates of ML, extended quasi‐likelihood, and double extended quasi‐likelihood by eliminating the nuisance parameter effect using their adjusted profile likelihood and quasi‐likelihoods. It is shown that these CIs outperform the competitors by providing coverage levels close to nominal over a wide range of parameter combinations. Two examples with biological count data are presented. Copyright © 2010 John Wiley & Sons, Ltd.

6.
Objective: Risk difference (RD) is often estimated from relative association measures generated by meta-analysis and a particular group's baseline risk. We describe a problematic situation in using this approach. Study Design and Setting: We encountered a meta-analysis in which a confidence interval (CI) of relative risk (RR) overlapped 1.0; the point estimate favored treatment A, but when we used RR and median baseline risk to calculate a CI for RD, a greater portion of the CI favored treatment B (a result that some may find counterintuitive). We then calculated 10 different RDs from recently published meta-analyses for outcomes in which CIs of RR crossed 1.0, using three methods: estimation from RR, estimation from the odds ratio, and pooling RDs across trials. Results: When RD is estimated from relative measures, the counterintuitive result occurred in 2 of 10 instances. This discordance of interpretation arises because the logarithmic transformation makes CIs of relative measures asymmetric around their point estimates. Conclusion: When RD is estimated from relative association measures that are nonsignificant and this counterintuitive situation occurs, it may be more appropriate to pool RD across studies. Pooling is particularly valid when baseline risks across studies are homogeneous.
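The mechanism can be reproduced with a few lines of arithmetic (the RR, its CI, and the baseline risk below are hypothetical): an RR interval that is symmetric on the log scale becomes asymmetric on the RD scale, so the side of the RD interval opposite the point estimate can end up the wider one.

```python
def rd_from_rr(rr, rr_lo, rr_hi, baseline):
    """Risk difference (and its CI limits) implied by applying a pooled
    relative risk to a baseline risk: RD = baseline * (RR - 1)."""
    return baseline * (rr - 1), baseline * (rr_lo - 1), baseline * (rr_hi - 1)

# hypothetical meta-analytic RR = 0.95 with 95% CI 0.58 to 1.55 (crosses 1.0)
rd, rd_lo, rd_hi = rd_from_rr(0.95, 0.58, 1.55, baseline=0.20)
# the point estimate favours treatment A (rd < 0), yet the portion of the
# RD interval above zero is wider than the portion below it
```

Here rd = −0.01 favours A, but the interval runs from −0.084 to +0.110, so more of it lies on the B side of zero — the counterintuitive case the abstract describes.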

7.
Familiar measures of association for 2 x 2 tables are the odds ratio, the risk ratio and the risk difference. Analogous measures of outcome-exposure association are desirable when there are several degrees of severity of both exposure and disease outcome. One such measure (alpha), which we label the general odds ratio (OR(G)), was proposed by Agresti. Convenient methods are given for calculation of both standard error and 95 per cent confidence intervals for OR(G). Other approaches to generalizing the odds ratio entail fitting statistical models which might not fit the data, and cannot handle some zero frequencies. We propose a generalization of the risk ratio (RR(G)) following the statistical approaches of Agresti, Goodman and Kruskal. A method of calculating the standard error and 95 per cent confidence interval for RR(G) is provided. A known statistic, Somers' d, fulfils the characteristics necessary for a generalized risk difference (RD(G)). These measures have straightforward interpretations, are easily computed, are at least as precise as other methods and do not require fitting statistical models to the data. We also examine the pooling of such measures as in, for example, meta-analysis.
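For intuition: Agresti's general odds ratio is the ratio of concordant to discordant pair counts in the ordered table, and Somers' d contrasts the same counts against the pairs untied on exposure. A brute-force sketch (fine for small tables; the example counts are hypothetical):

```python
def pair_counts(table):
    """Concordant (C) and discordant (D) pair counts for an ordered r x c
    table; OR(G) = C / D, which for a 2x2 table reduces to the ordinary OR."""
    C = D = 0
    r, c = len(table), len(table[0])
    for i in range(r):
        for j in range(c):
            for k in range(i + 1, r):        # second member in a later row
                for l in range(c):
                    if l > j:
                        C += table[i][j] * table[k][l]
                    elif l < j:
                        D += table[i][j] * table[k][l]
    return C, D

def somers_d(table):
    """Somers' d (outcome given exposure): (C - D) / pairs untied on rows."""
    C, D = pair_counts(table)
    rows = [sum(row) for row in table]
    n = sum(rows)
    untied = (n * n - sum(m * m for m in rows)) // 2
    return (C - D) / untied
```

For the 2×2 table [[10, 5], [3, 12]], C = 10·12 = 120 and D = 5·3 = 15, so OR(G) = 8.0, the familiar cross-product ratio, and Somers' d = 105/225.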

8.
Propensity score methods are increasingly being used to estimate the effects of treatments on health outcomes using observational data. There are four methods for using the propensity score to estimate treatment effects: covariate adjustment using the propensity score, stratification on the propensity score, propensity‐score matching, and inverse probability of treatment weighting (IPTW) using the propensity score. When outcomes are binary, the effect of treatment on the outcome can be described using odds ratios, relative risks, risk differences, or the number needed to treat. Several clinical commentators suggested that risk differences and numbers needed to treat are more meaningful for clinical decision making than are odds ratios or relative risks. However, there is a paucity of information about the relative performance of the different propensity‐score methods for estimating risk differences. We conducted a series of Monte Carlo simulations to examine this issue. We examined bias, variance estimation, coverage of confidence intervals, mean‐squared error (MSE), and type I error rates. A doubly robust version of IPTW had superior performance compared with the other propensity‐score methods. It resulted in unbiased estimation of risk differences, treatment effects with the lowest standard errors, confidence intervals with the correct coverage rates, and correct type I error rates. Stratification, matching on the propensity score, and covariate adjustment using the propensity score resulted in minor to modest bias in estimating risk differences. Estimators based on IPTW had lower MSE compared with other propensity‐score methods. Differences between IPTW and propensity‐score matching may reflect that these two methods estimate the average treatment effect and the average treatment effect for the treated, respectively. Copyright © 2010 John Wiley & Sons, Ltd.
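The basic (non-doubly-robust) IPTW estimator of the risk difference can be sketched directly: each subject is weighted by the inverse probability of the treatment actually received. Propensity scores are taken as given here; in practice they would come from a fitted model such as logistic regression.

```python
def iptw_risk_difference(treat, outcome, ps):
    """IPTW estimate of the risk difference for a binary outcome.
    treat:   1 = treated, 0 = control
    outcome: binary outcome indicator
    ps:      estimated propensity score P(treat = 1 | covariates)"""
    num1 = den1 = num0 = den0 = 0.0
    for t, y, e in zip(treat, outcome, ps):
        w = 1.0 / e if t == 1 else 1.0 / (1.0 - e)  # inverse-probability weight
        if t == 1:
            num1 += w * y
            den1 += w
        else:
            num0 += w * y
            den0 += w
    return num1 / den1 - num0 / den0   # weighted risk(treated) - risk(control)
```

With constant propensity scores (a randomized design), the estimator collapses to the plain difference in proportions, which is a quick sanity check.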

9.
A linear relative risk form for the Cox model is sometimes more appropriate than the usual exponential form. The usual asymptotic confidence interval may not have the appropriate coverage, however, due to flatness of the likelihood in the neighbourhood of beta. For a single continuous covariate, we derive bootstrapped confidence intervals with use of two resampling methods. The first resamples the original data and yields both one-step and fully iterated estimates of beta. The second resamples the score and information quantities at each failure time to yield a one-step estimate. We computed the bootstrapped confidence intervals by three different methods and compared these intervals to one based on the asymptotic standard error and to a likelihood-based interval. The bootstrapped intervals did not perform well and underestimated the true coverage in most cases.

10.
We have formulated the problem of determining whether there has been an upturn in HIV-1 seroconversion incidence over the first five years of follow-up in the Multicenter AIDS Cohort Study (MACS) as one of locating the minimum of a quadratic regression, or of examining two-knot piecewise spline models. Under a quadratic model, we propose a method to obtain a direct estimate and a bootstrap estimate for the location of the temporal turning point (local minimum) of HIV-1 seroconversion incidence, and three methods to estimate confidence intervals for the location of the turning point: (1) a Wald confidence interval, with or without log transformation, assuming asymptotic normality and applying the Delta method; (2) asymmetric confidence intervals using Fieller's Theorem and its modification; and (3) bootstrapped confidence intervals. Inferences for the temporal turning point based on Wald tests for a single regression term in a non-linear regression model were not reliable compared to inferences based on confidence intervals placed on calendar time. We present results from applying these different methods to the MACS data and obtain power estimates to illustrate their performance.
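Method (1), the Wald/Delta interval for the turning point, can be sketched as follows: for a fitted quadratic trend b0 + b1·t + b2·t², the minimum sits at t* = −b1/(2·b2), and the delta method propagates the coefficient covariances through that ratio. The coefficient values in the check below are hypothetical.

```python
import math

def turning_point_ci(b1, b2, V11, V12, V22, z=1.96):
    """Delta-method CI for the minimum t* = -b1/(2*b2) of a quadratic trend,
    given the estimated variances/covariance of (b1, b2)."""
    t = -b1 / (2.0 * b2)
    # gradient of t* with respect to (b1, b2): (-1/(2*b2), -t/b2)
    g1 = -1.0 / (2.0 * b2)
    g2 = -t / b2
    var = g1 * g1 * V11 + 2 * g1 * g2 * V12 + g2 * g2 * V22
    se = math.sqrt(var)
    return t, t - z * se, t + z * se
```

Like any delta interval for a ratio, this is symmetric about t* by construction; the Fieller and bootstrap alternatives in the abstract exist precisely because that symmetry can be a poor approximation when b2 is imprecisely estimated.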

11.
Three interval estimation procedures were evaluated to determine the method which provides the most accurate estimates for the recombination fraction, θ. The lod–0.83 support interval, the jackknife confidence interval, and the confidence interval based on estimated asymptotic standard error were compared by calculating the coverage probabilities of each. Family data that were simulated under the model of a single fully penetrant, dominant disease locus at some distance, θ, from fully informative matings were used. Comparisons were based on 1,000 random samples of size 20, 60, and 100 families. In addition, a methodology for obtaining prediction intervals for θ was developed. This procedure is of practical use and does not require asymptotic assumptions based on large sample theory. The results provide an a priori idea about precision of the estimates, as well as empirical interval estimates of θ. Graphs of the authors' Monte Carlo intervals are presented for these simulations. Investigators studying different traits, however, could condition specifically on the family structure and distribution of the disease they are investigating and obtain similar graphs. ©1995 Wiley-Liss, Inc.

12.
Deriving valid confidence intervals for complex estimators is a challenging task in practice. Estimators of dynamic weighted survival modeling (DWSurv), a method to estimate an optimal dynamic treatment regime of censored outcomes, are asymptotically normal and consistent for their target parameters when at least a subset of the nuisance models is correctly specified. However, their behavior in finite samples and the impact of model misspecification on inferences remain unclear. In addition, the estimators' nonregularity may negatively affect the inferences under some specific data generating mechanisms. Our objective was to compare five methods, two asymptotic variance formulas (adjusting or not for the estimation of nuisance parameters) to three bootstrap approaches, to construct confidence intervals for the DWSurv parameters in finite samples. Via simulations, we considered practical scenarios, for example, when some nuisance models are misspecified or when nonregularity is problematic. We also compared the five methods in an application about the treatment of rheumatoid arthritis. We found that the bootstrap approaches performed consistently well at the cost of longer computational times. The asymptotic variance with adjustments generally yielded conservative confidence intervals. The asymptotic variance without adjustments yielded nominal coverages for large sample sizes. We recommend using the asymptotic variance with adjustments in small samples and the bootstrap if computationally feasible. Caution should be taken when nonregularity may be an issue.

13.
Background: Polychlorinated biphenyls (PCBs) manufactured in Anniston, Alabama, from 1929 to 1971 caused significant environmental contamination. The Anniston population remains one of the most highly exposed in the world. Objectives: Reports of increased diabetes in PCB-exposed populations led us to examine possible associations in Anniston residents. Methods: Volunteers (n = 774) from a cross-sectional study of randomly selected households and adults who completed the Anniston Community Health Survey also underwent measurements of height, weight, fasting glucose, lipid, and PCB congener levels and verification of medications. Odds ratios (ORs) and 95% confidence intervals (CIs) were calculated to assess the relationships between PCBs and diabetes, adjusting for diabetes risk factors. Participants with prediabetes were excluded from the logistic regression analyses. Results: Participants were 47% African American, 70% female, with a mean age of 54.8 years. The prevalence of diabetes was 27% in the study population, corresponding to an estimated prevalence of 16% for Anniston overall; the PCB body burden of 35 major congeners ranged from 0.11 to 170.42 ppb, wet weight. The adjusted OR comparing the prevalence of diabetes in the fifth versus first quintile of serum PCB was 2.78 (95% CI: 1.00, 7.73), with similar associations estimated for second through fourth quintiles. In participants < 55 years of age, the adjusted OR for diabetes for the highest versus lowest quintile was 4.78 (95% CI: 1.11, 20.6), whereas in those ≥ 55 years of age, we observed no significant associations with PCBs. Elevated diabetes prevalence was observed with a 1 SD increase in log PCB levels in women (OR = 1.52; 95% CI: 1.01, 2.28); a decreased prevalence was observed in men (OR = 0.68; 95% CI: 0.33, 1.41). Conclusions: We observed significant associations between elevated PCB levels and diabetes mostly due to associations in women and in individuals < 55 years of age.

14.
In population pharmacokinetic studies, one of the main objectives is to estimate the population pharmacokinetic parameters specifying the population distributions of pharmacokinetic parameters. Confidence intervals for population pharmacokinetic parameters are generally estimated by assuming asymptotic normality, a large-sample property that holds only when sample sizes are sufficiently large. In actual clinical trials, however, sample sizes are limited and often modest. Likelihood functions in population pharmacokinetic modelling include a multiple integral and are quite complicated. We therefore suspect that the sample sizes of actual trials are often not large enough to justify the asymptotic normality assumption, and that asymptotic confidence intervals understate the uncertainty of the estimated population pharmacokinetic parameters. As an alternative to the asymptotic normality approach, we can employ a bootstrap approach. This paper proposes a bootstrap standard error approach for constructing confidence intervals for population pharmacokinetic parameters. Comparisons between the asymptotic and bootstrap confidence intervals are made through applications to a simulated data set and an actual phase I trial.
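The bootstrap standard-error approach is generic: resample subjects with replacement, re-estimate, and take the standard deviation of the replicate estimates as the SE. The sketch below uses an arbitrary estimator for illustration; a population pharmacokinetic application would replace the simple mean with a full model refit on each resampled data set.

```python
import random
import statistics

def bootstrap_se_ci(data, estimator, n_boot=2000, z=1.96, seed=1):
    """Bootstrap standard-error CI: resample with replacement, re-estimate,
    and use the SD of the bootstrap replicates as the standard error."""
    rng = random.Random(seed)
    est = estimator(data)
    reps = []
    for _ in range(n_boot):
        resample = [rng.choice(data) for _ in data]  # resample whole subjects
        reps.append(estimator(resample))
    se = statistics.stdev(reps)
    return est, est - z * se, est + z * se
```

The key design point for hierarchical data such as population pharmacokinetics is that the resampling unit is the subject (all of a subject's observations together), not the individual observation, so that within-subject correlation is preserved.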

15.
The authors present a computer program for hypothesis testing and calculation of exact binomial confidence intervals for the adjusted relative risk in follow-up studies involving multiple strata with incidence density (person-time) denominators and small or zero person-count numerators. The program is an extension to multiple tables of a single-table method by Rothman and Boice (NIH publication no. 79-1649, Washington, DC: US GPO, 1979) and represents a counterpart for person-time denominators to the program of Thomas (Comput Biomed Res 1975;8:423-46) for exact analysis of multiple tables with person-count denominators. Comparisons with asymptotic analyses of real and simulated data are given. Copies of the program are available from the authors on request.
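The single-table building block here — an exact (Garwood-type) interval for a rate with a person-time denominator — can be sketched by inverting the Poisson tail probabilities numerically; the program described in the abstract extends this idea to the adjusted relative risk across multiple strata.

```python
import math

def poisson_cdf(k, mu):
    """P(X <= k) for X ~ Poisson(mu), by direct summation."""
    term = total = math.exp(-mu)
    for i in range(1, k + 1):
        term *= mu / i
        total += term
    return total

def exact_poisson_ci(x, person_time, alpha=0.05):
    """Exact two-sided CI for a rate from x events in person_time units,
    by bisection on the Poisson tail probabilities."""
    def solve(f, lo, hi):
        for _ in range(200):            # f is False at lo, True at hi
            mid = (lo + hi) / 2.0
            if f(mid):
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2.0
    hi0 = 10.0 * (x + 5)
    lower = 0.0 if x == 0 else solve(
        lambda m: 1 - poisson_cdf(x - 1, m) > alpha / 2, 0.0, hi0)
    upper = solve(lambda m: poisson_cdf(x, m) < alpha / 2, 0.0, hi0)
    return lower / person_time, upper / person_time
```

For 10 events the 95% limits on the mean come out near the textbook values 4.80 and 18.39, and the zero-numerator case the abstract emphasizes is handled with a lower limit of exactly 0.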

16.
Biostatisticians have frequently uncritically accepted the measurements provided by their medical colleagues engaged in clinical research. Such measures often involve considerable loss of information. Particularly unfortunate is the widespread use of the so‐called 'responder analysis', which may involve not only a loss of information through dichotomization, but also extravagant and unjustified causal inference regarding individual treatment effects at the patient level, and, increasingly, the use of the so‐called number needed to treat scale of measurement. Other problems involve inefficient use of baseline measurements, the use of covariates measured after the start of treatment, the interpretation of titrations and composite response measures. Many of these bad practices are becoming enshrined in the regulatory guidance to the pharmaceutical industry. We consider the losses involved in inappropriate measures and suggest that statisticians should pay more attention to this aspect of their work. Copyright © 2009 John Wiley & Sons, Ltd.

17.
The change in area under the curve (ΔAUC), the integrated discrimination improvement (IDI), and net reclassification index (NRI) are commonly used measures of risk prediction model performance. Some authors have reported good validity of associated methods of estimating their standard errors (SE) and construction of confidence intervals, whereas others have questioned their performance. To address these issues, we unite the ΔAUC, IDI, and three versions of the NRI under the umbrella of the U‐statistics family. We rigorously show that the asymptotic behavior of ΔAUC, NRIs, and IDI fits the asymptotic distribution theory developed for U‐statistics. We prove that the ΔAUC, NRIs, and IDI are asymptotically normal, unless they compare nested models under the null hypothesis. In the latter case, asymptotic normality and existing SE estimates cannot be applied to ΔAUC, NRIs, or IDI. In the former case, SE formulas proposed in the literature are equivalent to SE formulas obtained from U‐statistics theory if we ignore adjustment for estimated parameters. We use the Sukhatme–Randles–deWet condition to determine when adjustment for estimated parameters is necessary. We show that adjustment is not necessary for SEs of the ΔAUC and two versions of the NRI when added predictor variables are significant and normally distributed. The SEs of the IDI and three‐category NRI should always be adjusted for estimated parameters. These results allow us to define when existing formulas for SE estimates can be used and when resampling methods such as the bootstrap should be used instead when comparing nested models. We also use the U‐statistic theory to develop a new SE estimate of ΔAUC. Copyright © 2017 John Wiley & Sons, Ltd.

18.
Biomarkers are often measured with error due to imperfect lab conditions or temporal variability within subjects. Using an internal reliability sample of the biomarker, we develop a parametric bias‐correction approach for estimating a variety of diagnostic performance measures including sensitivity, specificity, the Youden index with its associated optimal cut‐point, positive and negative predictive values, and positive and negative diagnostic likelihood ratios when the biomarker is subject to measurement error. We derive the asymptotic properties of the proposed likelihood‐based estimators and show that they are consistent and asymptotically normally distributed. We propose confidence intervals for these estimators and confidence bands for the receiver operating characteristic curve. We demonstrate through extensive simulations that the proposed approach removes the bias due to measurement error and outperforms the naïve approach (which ignores the measurement error) in both point and interval estimation. We also derive the asymptotic bias of naïve estimates and discuss conditions in which naïve estimates of the diagnostic measures are biased toward estimates produced when the biomarker is ineffective (i.e., when sensitivity equals 1 − specificity) or are anticonservatively biased. The proposed method has broad biomedical applications and is illustrated using a biomarker study in Alzheimer's disease. We recommend collecting an internal reliability sample during the biomarker discovery phase in order to adequately evaluate the performance of biomarkers with careful adjustment for measurement error. Copyright © 2013 John Wiley & Sons, Ltd.

19.
Health indices provide information to the general public on the health condition of the community. They can also be used to inform the government's policy making, to evaluate the effect of a current policy or healthcare program, or for program planning and priority setting. It is a common practice that the health indices across different geographic units are ranked and the ranks are reported as fixed values. We argue that the ranks should be viewed as random and hence should be accompanied by an indication of precision (i.e., the confidence intervals). A technical difficulty in doing so is how to account for the dependence among the ranks in the construction of confidence intervals. In this paper, we propose a novel Monte Carlo method for constructing the individual and simultaneous confidence intervals of ranks for age‐adjusted rates. The proposed method uses as input age‐specific counts (of cases of disease or deaths) and their associated populations. We have further extended it to the case in which only the age‐adjusted rates and confidence intervals are available. Finally, we demonstrate the proposed method to analyze US age‐adjusted cancer incidence rates and mortality rates for cancer and other diseases by states and counties within a state using a website that will be publicly available. The results show that for rare or relatively rare disease (especially at the county level), ranks are essentially meaningless because of their large variability, while for more common disease in larger geographic units, ranks can be effectively utilized. Copyright © 2014 John Wiley & Sons, Ltd.
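The core Monte Carlo idea for individual rank intervals can be sketched in a few lines, assuming for illustration a normal sampling distribution for each unit's rate (the paper works from age-specific counts and populations): simulate the rates, rank each simulated set, and take percentiles of each unit's simulated rank.

```python
import random

def rank_cis(rates, ses, n_sim=5000, alpha=0.05, seed=7):
    """Monte Carlo CIs for ranks: draw each unit's rate from an assumed
    normal(rate, se), rank the draws within each simulation, and take
    percentiles of each unit's simulated rank (rank 1 = lowest rate)."""
    rng = random.Random(seed)
    k = len(rates)
    sim_ranks = [[] for _ in range(k)]
    for _ in range(n_sim):
        draws = [rng.gauss(r, s) for r, s in zip(rates, ses)]
        order = sorted(range(k), key=lambda i: draws[i])
        for rank, i in enumerate(order, start=1):
            sim_ranks[i].append(rank)
    cis = []
    for ranks in sim_ranks:
        ranks.sort()
        lo = ranks[int(alpha / 2 * n_sim)]
        hi = ranks[int((1 - alpha / 2) * n_sim) - 1]
        cis.append((lo, hi))
    return cis
```

A unit whose rate is far from all the others gets a degenerate interval (its rank is certain), while units with overlapping sampling distributions get wide rank intervals — exactly the "ranks of rare-disease rates are essentially meaningless" phenomenon the abstract reports.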

20.
In affected-sib-pair (ASP) studies, parameters such as the locus-specific sibling relative risk, lambda(s), may be estimated and used to decide whether or not to continue the search for susceptibility genes. Typically, a maximum likelihood point estimate of lambda(s) is given, but since this estimate may have substantial variability, it is of interest to obtain confidence limits for the true value of lambda(s). While a variety of methods for doing this exist, there is considerable uncertainty over their reliability. This is because the discrete nature of ASP data and the imposition of genetic "possible triangle" constraints during the likelihood maximization mean that asymptotic results may not apply. In this paper, we use simulation to evaluate the reliability of various asymptotic and simulation-based confidence intervals, the latter being based on a resampling, or bootstrap approach. We seek to identify, from the large pool of methods available, those methods that yield short intervals with accurate coverage probabilities for ASP data. Our results show that many of the most popular bootstrap confidence interval methods perform poorly for ASP data, giving coverage probabilities much lower than claimed. The test-inversion, profile-likelihood, and asymptotic methods, however, perform well, although some care is needed in choice of nuisance parameter. Overall, in simulations under a variety of different genetic hypotheses, we find that the asymptotic methods of confidence interval evaluation are the most reliable, even in small samples. We illustrate our results with a practical application to a real data set, obtaining confidence intervals for the sibling relative risks associated with several loci involved in type 1 diabetes.
