20 similar documents found.
1.
Testing for or against a qualitative interaction is relevant in randomized clinical trials that use a common primary treatment factor and have a secondary factor, such as centre, region, subgroup, gender or biomarker. Interaction contrasts are formulated as ratios of differences between the levels of the primary treatment factor. Simultaneous confidence intervals allow the magnitude and the relevance of the qualitative interaction to be interpreted. The proposed method is demonstrated by means of a multi-centre clinical trial, using the R package mratios. Copyright © 2013 John Wiley & Sons, Ltd.
2.
We consider the problems of interval estimation and hypothesis testing for the intraclass correlation coefficient in an interrater reliability study when both raters and subjects are assumed to be randomly selected from populations of raters and subjects, respectively. We propose a novel approach to confidence interval estimation and hypothesis testing using the concepts of the generalized confidence interval (GCI) and generalized P-values. A simulation study is conducted to investigate the coverage probabilities of the GCI approach relative to the modified large sample (MLS) approach. Both methods tend to provide somewhat conservative coverage. Relative to the MLS approach, the GCI approach is closer to the correct (nominal) coverage for a two-sided interval, but farther from it for a one-sided lower interval. Unlike the MLS approach, the GCI approach can also easily provide P-values. Because it is suitable both for confidence interval estimation and for obtaining P-values, the GCI approach is a suitable candidate for making inference about interrater reliability.
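To make the GCI construction concrete, here is a minimal Python sketch for the simpler one-way random-effects ICC (subjects random, a single variance ratio); the paper's two-way raters-and-subjects model is not reproduced, and the data, seed, and function name are hypothetical. Replacing the sums of squares by chi-square draws is the standard generalized-pivotal recipe.

```python
import numpy as np

rng = np.random.default_rng(1)

def icc_gci(y, alpha=0.05, n_draws=10_000):
    """Generalized CI for the one-way random-effects ICC.

    y: (k, n) array, k groups (e.g. subjects) with n replicates each.
    """
    k, n = y.shape
    grand = y.mean()
    ssa = n * ((y.mean(axis=1) - grand) ** 2).sum()          # between-group SS
    sse = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum()   # within-group SS

    # Generalized pivotal quantities: chi-square draws stand in for the
    # sampling distributions of the observed sums of squares.
    u1 = rng.chisquare(k - 1, n_draws)
    u2 = rng.chisquare(k * (n - 1), n_draws)
    sig_e = sse / u2
    sig_a = np.maximum((ssa / u1 - sig_e) / n, 0.0)
    rho = sig_a / (sig_a + sig_e)
    return np.quantile(rho, [alpha / 2, 1 - alpha / 2])

# small demonstration with a true ICC of 0.5
k, n, sa, se = 30, 4, 1.0, 1.0
y = rng.normal(0, sa, (k, 1)) + rng.normal(0, se, (k, n))
print(icc_gci(y))   # interval should tend to cover 0.5
```

A generalized P-value for, say, H0: ρ ≤ ρ0 is simply the proportion of simulated pivots at or below ρ0, so the same draws serve both purposes.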
3.
Tian L 《Statistics in medicine》2008,27(21):4221-4237
Recently, there has been emerging interest in inference about P(Y1 > Y2), where Y1 and Y2 stand for two independent continuous random variables. So far, most of the research in this field has focused on simply comparing two outcomes without adjusting for covariates. This paper mainly presents a large-sample approach based on a noncentral t distribution for confidence interval estimation of P(Y1 > Y2) with normal outcomes in linear models. Furthermore, the performance of the proposed large-sample approach is compared with that of a generalized variable approach and a bootstrap approach. Simulation studies demonstrate that for small-to-medium sample sizes, both the large-sample approach and the generalized variable approach provide confidence intervals with satisfactory coverage probabilities, whereas the bootstrap approach can be slightly liberal in certain scenarios. The proposed approaches are applied to three real-life data sets.
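As a point of reference for what is being estimated, here is a minimal sketch (simulated data, no covariate adjustment, and none of the paper's interval constructions) comparing the normal-theory plug-in P(Y1 > Y2) = Φ((μ̂1 − μ̂2)/√(σ̂1² + σ̂2²)) with the distribution-free proportion of concordant pairs:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y1 = rng.normal(1.0, 1.0, 50)   # hypothetical outcomes under treatment 1
y2 = rng.normal(0.5, 1.5, 60)   # hypothetical outcomes under treatment 2

# Normal-theory plug-in: P(Y1 > Y2) = Phi((mu1 - mu2) / sqrt(s1^2 + s2^2))
theta_norm = stats.norm.cdf(
    (y1.mean() - y2.mean()) / np.sqrt(y1.var(ddof=1) + y2.var(ddof=1))
)

# Distribution-free check: the proportion of (y1, y2) pairs with y1 > y2
theta_emp = (y1[:, None] > y2[None, :]).mean()

print(theta_norm, theta_emp)
```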
4.
Samuel D. Oman 《Statistics in medicine》2013,32(23):4090-4101
Let x denote a precise measurement of a quantity and Y an inexact measurement that is, however, less expensive or more easily obtained than x. We have available a calibration set comprising clustered sets of (x, Y) observations obtained from different sampling units. At the prediction step, we will only observe Y for a new unit, and we wish to estimate the corresponding unknown x, which we denote by ξ. This problem has been treated under the assumption that x and Y are linearly related. Here, we expand on those results in three directions. First, we show that if we center ξ about a known value c, for example the mean x-value of the calibration set, then the proposed estimator shrinks toward c. Second, we examine in detail the performance of the estimator proposed for the case in which one or more (x, Y) observations can be obtained for the new subject. Third, we compare the previously proposed Fieller-like confidence intervals with t-like intervals based on asymptotic moments of the point estimate. We illustrate and evaluate our procedures in the context of a data set of true bladder volumes (x) and ultrasound measurements (Y). Copyright © 2013 John Wiley & Sons, Ltd.
5.
Viechtbauer W 《Statistics in medicine》2007,26(1):37-52
Effect size estimates to be combined in a systematic review are often found to be more variable than one would expect based on sampling differences alone. This is usually interpreted as evidence that the effect sizes are heterogeneous. A random-effects model is then often used to account for the heterogeneity in the effect sizes. A novel method for constructing confidence intervals for the amount of heterogeneity in the effect sizes is proposed that guarantees nominal coverage probabilities even in small samples when model assumptions are satisfied. A variety of existing approaches for constructing such confidence intervals are summarized and the various methods are applied to an example to illustrate their use. A simulation study reveals that the newly proposed method yields the most accurate coverage probabilities under conditions more analogous to practice, where assumptions about normally distributed effect size estimates and known sampling variances only hold asymptotically.
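The abstract does not name the construction, so the sketch below shows a Q-profile-style interval, a well-known way of obtaining nominal small-sample coverage under the stated assumptions by inverting the generalized Q statistic; treating this as representative of the paper's proposal is an assumption, and the effect sizes and sampling variances are invented.

```python
import numpy as np
from scipy import stats, optimize

def q_stat(tau2, y, v):
    """Generalized Q statistic at a candidate heterogeneity value tau2."""
    w = 1.0 / (v + tau2)
    mu = np.sum(w * y) / np.sum(w)
    return np.sum(w * (y - mu) ** 2)

def q_profile_ci(y, v, alpha=0.05):
    """CI for tau^2 by matching Q(tau2) to chi-square quantiles (df = k - 1)."""
    k = len(y)
    hi_target = stats.chi2.ppf(1 - alpha / 2, k - 1)  # gives the lower bound
    lo_target = stats.chi2.ppf(alpha / 2, k - 1)      # gives the upper bound
    bracket_hi = 100 * (np.max(v) + np.var(y))        # generous search bracket

    def solve(target):
        f = lambda t: q_stat(t, y, v) - target
        if f(0.0) < 0:          # Q(0) already below target: truncate at 0
            return 0.0
        return optimize.brentq(f, 0.0, bracket_hi)

    return solve(hi_target), solve(lo_target)

# hypothetical meta-analysis: effect estimates and sampling variances
y = np.array([0.30, 0.10, 0.55, 0.45, 0.05, 0.60])
v = np.array([0.02, 0.04, 0.03, 0.05, 0.02, 0.06])
print(q_profile_ci(y, v))
```

Because Q(τ²) is decreasing in τ², the lower bound comes from the upper chi-square quantile and vice versa; bounds that would be negative are truncated at zero.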
6.
Assessment of interaction is an important part of epidemiological data analysis. Exponential models widely used in etiological research, such as logistic regression and the Cox proportional hazards model, commonly include product terms of risk factors, and the coefficients of these product terms reflect multiplicative interaction between the factors; for public health purposes, however, interaction analysis is more appropriately based on an additive model. Using the indices proposed by Rothman for assessing additive interaction, this paper fits a Cox proportional hazards model to data from a cohort study and computes the additive-interaction indices for two factors from RR values; bootstrap confidence intervals are then obtained conveniently with the built-in bootstrap facilities of S-Plus. This avoids the estimation bias that arises when OR values are used for cohort data and achieves higher estimation precision. The combined patterns of additive and multiplicative interaction analysis are quite complex; when the two conflict, the additive model should be preferred.
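A rough Python sketch of the ingredients described above: Rothman's additive-interaction indices (RERI, the attributable proportion AP, and the synergy index S) with percentile-bootstrap intervals. Risks here are plain incidence proportions rather than RRs from a fitted Cox model, and the cohort, effect sizes, and names are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

def additive_interaction(a, b, d):
    """Rothman's additive-interaction indices from cohort data.

    a, b: binary exposure indicators; d: binary outcome (1 = event).
    Risks are simple incidence proportions; RRs are relative to (0, 0).
    """
    risk = lambda ma, mb: d[(a == ma) & (b == mb)].mean()
    r00 = risk(0, 0)
    rr10, rr01, rr11 = risk(1, 0) / r00, risk(0, 1) / r00, risk(1, 1) / r00
    reri = rr11 - rr10 - rr01 + 1                  # excess RR due to interaction
    ap = reri / rr11                               # attributable proportion
    s = (rr11 - 1) / ((rr10 - 1) + (rr01 - 1))     # synergy index
    return np.array([reri, ap, s])

def bootstrap_ci(a, b, d, n_boot=2000, alpha=0.05):
    n = len(d)
    stats_ = np.array([
        additive_interaction(*(x[idx] for x in (a, b, d)))
        for idx in rng.integers(0, n, (n_boot, n))
    ])
    return np.quantile(stats_, [alpha / 2, 1 - alpha / 2], axis=0)

# hypothetical cohort with two synergistic exposures
n = 5000
a, b = rng.integers(0, 2, n), rng.integers(0, 2, n)
p = 0.02 * (1 + 1.0 * a + 1.5 * b + 2.0 * a * b)   # baseline risk plus interaction
d = (rng.random(n) < p).astype(int)
print(additive_interaction(a, b, d))   # point estimates [RERI, AP, S]
print(bootstrap_ci(a, b, d))           # percentile CIs (rows: lower, upper)
```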
7.
Tian L 《Statistics in medicine》2005,24(11):1745-1753
In this paper, we propose a novel approach using the concept of the generalized variable (GV) for confidence interval estimation of the difference between two intraclass correlation coefficients under unequal family sizes. This approach can also easily provide P-values for hypothesis testing. Simulation results show that the GV approach provides confidence intervals with good coverage properties and performs hypothesis testing with satisfactory type-I error control. Furthermore, the confidence intervals and P-values from the GV approach can be easily obtained by simulation. The GV approach is therefore a suitable candidate for making inference concerning two intraclass correlation coefficients.
8.
We evaluated four methods for computing confidence intervals for cost–effectiveness ratios developed from randomized controlled trials: the box method, the Taylor series method, the nonparametric bootstrap method and the Fieller theorem method. We performed a Monte Carlo experiment to compare these methods. We investigated the relative performance of each method and assessed whether or not it was affected by differing distributions of costs (normal and log normal) and effects (a 10% absolute difference in mortality resulting from mortality rates of 25% versus 15% in the two groups, as well as from mortality rates of 55% versus 45%) or by differing levels of correlation between the costs and effects (correlations of −0.50, −0.25, 0.0, 0.25 and 0.50). The principal criterion used to evaluate the performance of the methods was the probability of miscoverage. Symmetry of miscoverage was used as a secondary criterion for evaluating the four methods. Overall probabilities of miscoverage for the nonparametric bootstrap method and the Fieller theorem method were more accurate than those for the other two methods. The Taylor series method produced confidence intervals that asymmetrically underestimated the upper limit of the interval. Confidence intervals for cost–effectiveness ratios resulting from the nonparametric bootstrap method and the Fieller theorem method were more dependably accurate than those estimated using the Taylor series or box methods. Routine reporting of these intervals will allow individuals using cost–effectiveness ratios to make clinical and policy judgments to better identify when an intervention is a good value for its cost. © 1997 by John Wiley & Sons, Ltd.
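For the Fieller theorem method, the interval is the set of ratios ρ for which the standardized difference ΔC − ρΔE is consistent with zero, which reduces to a quadratic in ρ. A minimal sketch with invented trial summaries follows (the correlation of −0.25 mirrors one of the investigated levels):

```python
import numpy as np
from scipy import stats

def fieller_ratio_ci(dc, de, var_dc, var_de, cov_cde, alpha=0.05):
    """Fieller-theorem CI for the ratio dc/de (e.g. a cost-effectiveness ratio).

    dc, de: estimated between-treatment differences in cost and effect;
    var/cov: their sampling (co)variances.
    Solves (dc - rho*de)^2 <= z^2 * Var(dc - rho*de), a quadratic in rho.
    """
    z2 = stats.norm.ppf(1 - alpha / 2) ** 2
    a = de**2 - z2 * var_de
    b = -2 * (dc * de - z2 * cov_cde)
    c = dc**2 - z2 * var_dc
    disc = b**2 - 4 * a * c
    if a <= 0 or disc < 0:
        raise ValueError("Fieller set is not a finite interval")
    roots = (-b + np.array([-1.0, 1.0]) * np.sqrt(disc)) / (2 * a)
    return tuple(np.sort(roots))

# hypothetical trial summaries: +$2000 cost, +0.10 survival probability
print(fieller_ratio_ci(dc=2000.0, de=0.10, var_dc=250.0**2,
                       var_de=0.03**2, cov_cde=-0.25 * 250.0 * 0.03))
```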
9.
We consider the problem of constructing a confidence interval for the intraclass correlation coefficient in an interrater reliability study in which both raters and subjects are random effects in a two-way random effects model. A simulation study is conducted to investigate and compare the interval coverage percentages of three methods of approximation: (i) Satterthwaite's two-moment approximation (the standard approach); (ii) a modified three-moment approximation (a newer approach); and (iii) the modified large-sample method (the newest approach for this particular problem). One-sided lower and two-sided bounds are examined. For the two-moment approach, coverage of confidence intervals can be understated for both one-sided and two-sided bounds. Coverage for the modified large-sample approach is either correct or conservative, and it provides narrower two-sided widths than the modified three-moment approach, which can understate coverage for one-sided and two-sided bounds with two or three raters. The competing methods are illustrated with data from a reliability study of four raters evaluating the number of decayed, missing, and filled surfaces in the permanent teeth of ten subjects. Overall, the modified large-sample approach provides the most satisfactory performance of the three methods.
10.
Application of cost-effectiveness analysis (CEA) is growing rapidly in health care. Two general approaches to analysis are differentiated by the type of data available: (i) deterministic models based upon secondary analysis of retrospective data from one or more trials and other sources; and (ii) stochastic analyses in which the design of a randomized controlled trial is adapted to prospectively collect patient-specific data on costs and effectiveness. An important methodological difference between these two approaches is how uncertainty is handled. Deterministic CEA models typically rely upon sensitivity analysis to determine the robustness of findings to alternative assumptions, whereas stochastic CEA, as part of a prospective study, permits the use of conventional statistical methods on the cost and effectiveness data for both inference (hypothesis testing) and estimation. This paper presents a procedure for the statistical analysis of cost-effectiveness data, with specific application to studies in which effectiveness is measured as a binary outcome. Specifically, Fieller's theorem was used to calculate confidence intervals for the ratio of the two random variables formed by the between-treatment differences in observed costs and effectiveness, i.e. the incremental cost-effectiveness ratio. It is also shown how this approach can be used to determine sample size requirements for cost-effectiveness studies.
11.
When investigating the effects of potential prognostic or risk factors that have been measured on a quantitative scale, values of these factors are often categorized into two groups. Sometimes an 'optimal' cutpoint is chosen that gives the best separation in terms of a two-sample test statistic. It is well known that this approach leads to a serious inflation of the type I error and to an overestimation of the effect of the prognostic or risk factor in absolute terms. In this paper, we illustrate that the resulting confidence intervals are similarly affected. We show that the application of a shrinkage procedure to correct for bias, together with bootstrap resampling for estimating the variance, yields confidence intervals for the effect of a potential prognostic or risk factor with the desired coverage.
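A small simulation makes the inflation easy to see: under the null, scanning cutpoints and keeping the best two-sample p-value rejects far more often than the nominal 5%. The grid of candidate cutpoints is an arbitrary choice, and the paper's shrinkage-plus-bootstrap correction is not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def min_p_over_cutpoints(x, y):
    """Smallest two-sample t-test p-value over candidate cutpoints of x."""
    pmin = 1.0
    for c in np.quantile(x, np.linspace(0.1, 0.9, 17)):
        lo, hi = y[x <= c], y[x > c]
        pmin = min(pmin, stats.ttest_ind(lo, hi).pvalue)
    return pmin

n_sim, n = 2000, 100
rejections = 0
for _ in range(n_sim):
    x = rng.normal(size=n)     # candidate prognostic factor
    y = rng.normal(size=n)     # outcome, independent of x (the null is true)
    rejections += min_p_over_cutpoints(x, y) < 0.05

print(rejections / n_sim)      # typically far above the nominal 0.05
```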
12.
Dankmar Böhning 《Statistics in medicine》1988,7(8):865-875
The problem of estimating a rate or proportion is considered. Four methods for constructing an approximate confidence interval are discussed and compared via a simulation study, and the most accurate method is identified. Also, for each method a sharp upper bound (dependent only on the sample size) is given for the length of the confidence interval. By choosing an appropriate sample size, this bound enables the practitioner to achieve a prespecified maximum length for the confidence interval without knowing the population rate. The striking result is that the most accurate method has the smallest bound, thus requiring the fewest sample units.
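The abstract does not identify the four methods, so purely as an illustration of the length-bound idea: for the simple Wald interval, p(1 − p) ≤ 1/4 gives a length bound of z/√n that depends only on n, and the bound can be inverted to choose a sample size.

```python
import math

Z95 = 1.959963984540054   # 97.5th percentile of the standard normal

def max_wald_length(n):
    """Wald CI length 2 z sqrt(p(1-p)/n) is at most z/sqrt(n), since p(1-p) <= 1/4."""
    return Z95 / math.sqrt(n)

def n_for_max_length(target):
    """Smallest n guaranteeing a 95% Wald CI no longer than `target`."""
    return math.ceil((Z95 / target) ** 2)

print(n_for_max_length(0.10))    # -> 385 subjects
print(max_wald_length(385))      # -> ~0.0999, just under the target
```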
13.
Paired dichotomous data may arise in clinical trials such as pre-/post-test comparison studies and equivalence trials. Reporting parameter estimates (e.g. odds ratio, rate difference and rate ratio) along with their associated confidence interval estimates has become a necessity in many medical journals. Various asymptotic confidence interval estimators have long been developed for differences in correlated binary proportions. Nevertheless, these asymptotic methods may have poor coverage properties in small samples. In this article, we investigate several alternative confidence interval estimators for the difference between binomial proportions based on small-sample paired data. Specifically, we consider exact and approximate unconditional confidence intervals for the rate difference obtained by inverting a score test. The exact unconditional confidence interval guarantees the coverage probability and is recommended if strict control of coverage probability is required. However, the exact method tends to be overly conservative and computationally demanding. Our empirical results show that the approximate unconditional score confidence interval estimators based on inverting the score test demonstrate reasonably good coverage properties even in small-sample designs, yet they are relatively easy to implement computationally. We illustrate the methods using real examples from a pain management study and a cancer study.
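A numerical sketch of the score-inversion idea for the paired rate difference δ = p10 − p01: profile out the nuisance discordance probability by constrained maximum likelihood and collect the δ values not rejected by the score-type statistic. Whether this matches the paper's exact statistic is an assumption, and the counts are invented.

```python
import numpy as np
from scipy import optimize

def score_stat(delta0, b, c, n):
    """Score-type statistic for H0: p10 - p01 = delta0 in paired binary data.

    b, c: discordant counts (yes/no and no/yes); n: number of pairs.
    The nuisance p01 is profiled out by constrained maximum likelihood.
    """
    def negloglik(p01):
        p10 = p01 + delta0
        p_same = 1.0 - p01 - p10           # total concordant probability
        if min(p10, p01, p_same) <= 0:
            return np.inf
        return -(b * np.log(p10) + c * np.log(p01)
                 + (n - b - c) * np.log(p_same))

    lo = max(1e-9, -delta0) + 1e-9
    hi = (1.0 - delta0) / 2.0 - 1e-9
    p01_t = optimize.minimize_scalar(negloglik, bounds=(lo, hi),
                                     method="bounded").x
    var = n * (2 * p01_t + delta0 * (1 - delta0))   # Var(b - c) under H0
    return (b - c - n * delta0) / np.sqrt(var)

def score_ci(b, c, n, z=1.959964):
    """Approximate unconditional CI by inverting the score-type test."""
    est = (b - c) / n
    lower = optimize.brentq(lambda d: score_stat(d, b, c, n) - z, -0.999, est)
    upper = optimize.brentq(lambda d: score_stat(d, b, c, n) + z, est, 0.999)
    return lower, upper

# example: 50 pairs, 12 discordant one way, 4 the other
print(score_ci(b=12, c=4, n=50))
```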
14.
In this paper, we are concerned with estimating the discrepancy between two treatments when right-censored survival data are accompanied by covariates. Conditional confidence intervals given the available covariates are constructed for the difference between, or ratio of, two median survival times under the unstratified and stratified Cox proportional hazards models, respectively. The proposed confidence intervals provide information about the difference in survivorship for patients with common covariates but different treatments. The results of a simulation study of the coverage probability and expected length of the confidence intervals suggest using the interval designed for the stratified Cox model when the data fit the model reasonably well. When the stratified Cox model is not feasible, however, the interval designed for the unstratified Cox model is recommended. The use of the confidence intervals is illustrated with an HIV+ data set.
15.
16.
Using the bootstrap to improve estimation and confidence intervals for regression coefficients selected using backwards variable elimination
Austin PC 《Statistics in medicine》2008,27(17):3286-3300
Applied researchers frequently use automated model selection methods, such as backwards variable elimination, to develop parsimonious regression models. Statisticians have criticized the use of these methods for several reasons, among them the facts that the estimated regression coefficients are biased and that the derived confidence intervals do not have the advertised coverage rates. We developed a method, employing backwards variable elimination in multiple bootstrap samples, to improve estimation of regression coefficients and confidence intervals. In a given bootstrap sample, predictor variables that are not selected for inclusion in the final regression model have their regression coefficients set to zero. Regression coefficients are averaged across the bootstrap samples, and non-parametric percentile bootstrap confidence intervals are then constructed for each regression coefficient. We conducted a series of Monte Carlo simulations to examine the performance of this method for estimating regression coefficients and constructing confidence intervals for variables selected using backwards variable elimination. We demonstrated that this method results in confidence intervals with superior coverage compared with those developed from conventional backwards variable elimination. We illustrate the utility of our method by applying it to a large sample of subjects hospitalized with a heart attack.
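A compact sketch of the procedure as described: backwards elimination is run inside each bootstrap sample, unselected coefficients are recorded as zero, and estimates and percentile intervals are formed across samples. The p-value retention threshold, the simulated data, and the helper names are all invented; plain OLS stands in for whatever regression family a given application would use.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

def backward_eliminate(X, y, threshold=0.05):
    """Backwards variable elimination on OLS p-values; returns kept column indices."""
    kept = list(range(X.shape[1]))
    while kept:
        fit = sm.OLS(y, sm.add_constant(X[:, kept])).fit()
        pvals = fit.pvalues[1:]                # skip the intercept
        worst = int(np.argmax(pvals))
        if pvals[worst] <= threshold:
            break
        kept.pop(worst)
    return kept

def bootstrap_bwe(X, y, n_boot=500, alpha=0.05):
    """Average coefficients (zeros when unselected) over bootstrap eliminations."""
    n, p = X.shape
    coefs = np.zeros((n_boot, p))
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        kept = backward_eliminate(X[idx], y[idx])
        if kept:
            fit = sm.OLS(y[idx], sm.add_constant(X[idx][:, kept])).fit()
            coefs[i, kept] = fit.params[1:]
    ci = np.quantile(coefs, [alpha / 2, 1 - alpha / 2], axis=0)
    return coefs.mean(axis=0), ci

# hypothetical data: 5 predictors, only the first two truly matter
n = 300
X = rng.normal(size=(n, 5))
y = 1.0 + 0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(size=n)
est, ci = bootstrap_bwe(X, y)
print(est)   # bootstrap-smoothed coefficients
print(ci)    # percentile intervals (rows: lower, upper)
```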
17.
For a continuous outcome in a two-arm trial that satisfies normal distribution assumptions, we can transform the standardized mean difference, with the use of the cumulative distribution function, into the effect size measure P(X < Y). This measure is already established within engineering as the reliability parameter in stress–strength models, where Y represents the strength of a component and X represents the stress the component undergoes: if X is greater than Y, the component will fail. In this paper, we consider the closely related effect size measure λ = 2P(X < Y) − 1. This measure is also known as Somers' d, which was introduced by Somers in 1962 as an ordinal measure of association, and we explore it as a treatment effect size for a continuous outcome. Although point estimates for λ are easily calculated, an interval estimate is not so readily obtained. We compare kernel density estimation, and the use of bootstrap and jackknife methods to estimate confidence intervals, against two further methods for estimating P(X < Y) and their respective intervals, one of which makes no assumption about the underlying distribution and the other of which assumes a normal distribution. Simulations show that the choice of the best estimator depends on the value of λ, the variability within the data, and the underlying distribution of the data. Copyright © 2012 John Wiley & Sons, Ltd.
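A minimal sketch of the point estimate λ̂ = P̂(X < Y) − P̂(X > Y) with a percentile-bootstrap interval; the paper's kernel-density, jackknife, and parametric competitors are not reproduced, and the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(11)

def somers_lambda(x, y):
    """lambda = P(X < Y) - P(X > Y); equals 2 P(X < Y) - 1 when ties are absent."""
    p_xy = (x[:, None] < y[None, :]).mean()   # P(X < Y) over all pairs
    p_yx = (x[:, None] > y[None, :]).mean()   # P(X > Y) over all pairs
    return p_xy - p_yx

def bootstrap_ci(x, y, n_boot=2000, alpha=0.05):
    vals = [somers_lambda(rng.choice(x, x.size), rng.choice(y, y.size))
            for _ in range(n_boot)]
    return np.quantile(vals, [alpha / 2, 1 - alpha / 2])

x = rng.normal(0.0, 1.0, 40)   # e.g. control-arm outcomes ("stress")
y = rng.normal(0.8, 1.0, 40)   # e.g. treatment-arm outcomes ("strength")
print(somers_lambda(x, y))
print(bootstrap_ci(x, y))
```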
18.
Peter K. Kimani, Susan Todd, Lindsay A. Renfro, Ekkehard Glimm, Josephine N. Khan, John A. Kairalla, Nigel Stallard 《Statistics in medicine》2020,39(19):2568-2586
In personalized medicine, it is often desired to determine whether all patients or only a subset of them benefit from a treatment. We consider estimation in two-stage adaptive designs that in stage 1 recruit patients from the full population. In stage 2, patient recruitment is restricted to the part of the population which, based on stage 1 data, benefits from the experimental treatment. Existing estimators, which adjust both for using stage 1 data to select the part of the population from which stage 2 patients are recruited and for the confirmatory analysis after stage 2, do not consider time-to-event patient outcomes. In this work, for time-to-event data, we derive a new asymptotically unbiased estimator for the log hazard ratio and a new interval estimator with good coverage probabilities and probabilities that the upper bounds are below the true values. The estimators are appropriate for several selection rules based on a single biomarker or on multiple biomarkers, which can be categorical or continuous.
19.
Sheng Luo, Xiao Su, Stacia M. DeSantis, Xuelin Huang, Min Yi, Kelly K. Hunt 《Statistics in medicine》2014,33(15):2554-2566
Breast cancer patients who undergo breast conservation therapy often develop ipsilateral breast tumor relapse (IBTR), whose classification (true local recurrence versus new ipsilateral primary tumor) is subject to error, and no gold standard is available. Some patients may die of breast cancer before IBTR develops. Because this terminal event may be related to the individual patient's unobserved disease status and time to IBTR, the terminal mechanism is non-ignorable. This article presents a joint analysis framework to model the binomial regression with misclassified binary outcome and the correlated time to IBTR, subject to a dependent terminal event and in the absence of a gold standard. Shared random effects are used to link together the two survival times. The proposed approach is evaluated in a simulation study and is applied to a breast cancer data set consisting of 4477 breast cancer patients. The proposed joint model can be conveniently fit using the adaptive Gaussian quadrature tools implemented in the SAS 9.3 (SAS Institute Inc., Cary, NC, USA) procedure NLMIXED. Copyright © 2014 John Wiley & Sons, Ltd.
20.
Newcombe RG 《Statistics in medicine》2007,26(18):3429-3442
Estimating the false-negative rate is a major issue in evaluating sentinel node biopsy (SNB) for staging cancer. In a large multicentre trial of SNB for intra-operative staging of clinically node-negative breast cancer, two sources of information on the false-negative rate are available. Direct information comes from a preliminary validation phase, in which all patients underwent SNB followed by axillary nodal clearance or sampling: of 803 patients with successful sentinel node localization, 19 (2.4 per cent) were classed as false negatives. Indirect information is also available from the randomized phase. Ninety-seven (25.4 per cent) of 382 control patients undergoing axillary clearance had positive axillae. In the experimental group, 94/366 (25.7 per cent) were apparently node positive. Taking a simple difference of these proportions gives a point estimate of -0.3 per cent for the proportion of patients who had positive axillae but were missed by SNB; this estimate is clearly inadmissible. In this situation, a Bayesian analysis yields interpretable point and interval estimates. We consider the single proportion estimate from the validation phase; the difference between independent proportions from the randomized phase, both unconstrained and constrained to non-negativity; and combined information from the two parts of the study. As well as tail-based and highest posterior density interval estimates, we examine three obvious point estimates: the posterior mean, median and mode. Posterior means and medians are similar for the validation and randomized phases separately and combined, all between 2 and 3 per cent, indicating similarity rather than conflict between the two data sources.
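A sketch of the two posterior analyses using the counts quoted above, under uniform Beta(1, 1) priors, which is an assumption; the paper's actual priors, the HPD intervals, and the combined-phases analysis are not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Validation phase: 19 false negatives among 803 successful localizations.
# Uniform Beta(1, 1) prior (an assumption) gives a Beta(20, 785) posterior.
a, b = 1 + 19, 1 + (803 - 19)
post = stats.beta(a, b)
mode = (a - 1) / (a + b - 2)                  # = 19/803, the sample proportion
print(post.mean(), post.median(), mode)       # three point estimates
print(post.ppf([0.025, 0.975]))               # tail-based 95% interval

# Randomized phase: delta = p_control - p_experimental, optionally
# constrained to be non-negative, by simple Monte Carlo.
p_ctl = stats.beta(1 + 97, 1 + 382 - 97).rvs(200_000, random_state=rng)
p_exp = stats.beta(1 + 94, 1 + 366 - 94).rvs(200_000, random_state=rng)
delta = p_ctl - p_exp
delta_pos = delta[delta >= 0]                 # non-negativity constraint
print(np.quantile(delta, [0.025, 0.5, 0.975]))   # unconstrained
print(np.quantile(delta_pos, [0.5, 0.975]))      # constrained
```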