Similar articles
20 similar articles retrieved (search time: 15 ms)
1.
In network meta-analyses that synthesize direct and indirect comparison evidence concerning multiple treatments, multivariate random effects models have been routinely used for addressing between-studies heterogeneities. Although their standard inference methods depend on large sample approximations (eg, restricted maximum likelihood estimation) for the number of trials synthesized, the numbers of trials are often moderate or small. In these situations, standard estimators cannot be expected to behave in accordance with asymptotic theory; in particular, confidence intervals cannot be assumed to exhibit their nominal coverage probabilities (likewise, the type I error probabilities of the corresponding tests cannot be retained). This invalidity may seriously influence the overall conclusions of network meta-analyses. In this article, we develop several improved inference methods for network meta-analyses to resolve these problems. We first introduce 2 efficient likelihood-based inference methods, the likelihood ratio test-based and efficient score test-based methods, in a general framework of network meta-analysis. Then, to improve the small-sample inferences, we develop improved higher-order asymptotic methods using Bartlett-type corrections and bootstrap adjustment methods. The proposed methods adopt Monte Carlo approaches using parametric bootstraps to effectively circumvent complicated case-by-case analytical calculations and to permit flexible application to the various statistical models of network meta-analysis. These methods can also be straightforwardly applied to multivariate meta-regression analyses and to tests for the evaluation of inconsistency. In numerical evaluations via simulations, the proposed methods generally performed well compared with the ordinary restricted maximum likelihood-based inference method. Applications to 2 network meta-analysis datasets are provided.

2.
In this paper we explore the potential of multilevel models for meta-analysis of trials with binary outcomes for both summary data, such as log-odds ratios, and individual patient data. Conventional fixed effect and random effects models are put into a multilevel model framework, which provides maximum likelihood or restricted maximum likelihood estimation. To exemplify the methods, we use the results from 22 trials to prevent respiratory tract infections; we also make comparisons with a second example data set comprising fewer trials. Within summary data methods, confidence intervals for the overall treatment effect and for the between-trial variance may be derived from likelihood based methods or a parametric bootstrap as well as from Wald methods; the bootstrap intervals are preferred because they relax the assumptions required by the other two methods. When modelling individual patient data, a bias corrected bootstrap may be used to provide unbiased estimation and correctly located confidence intervals; this method is particularly valuable for the between-trial variance. The trial effects may be modelled as either fixed or random within individual data models, and we discuss the corresponding assumptions and implications. If random trial effects are used, the covariance between these and the random treatment effects should be included; the resulting model is equivalent to a bivariate approach to meta-analysis. Having implemented these techniques, the flexibility of multilevel modelling may be exploited in facilitating extensions to standard meta-analysis methods.
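The summary-data parametric bootstrap mentioned above can be sketched in a few lines. This is a minimal illustration, not the paper's multilevel implementation: the DerSimonian-Laird moment estimator stands in for (RE)ML, and the log-odds ratios and within-trial variances below are hypothetical.

```python
import math
import random

def dl_estimate(y, s2):
    """DerSimonian-Laird moment estimates of the overall effect mu and
    the between-trial variance tau^2, from trial effects y and
    within-trial variances s2."""
    w = [1.0 / v for v in s2]
    sw = sum(w)
    mu_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sw
    q = sum(wi * (yi - mu_fixed) ** 2 for wi, yi in zip(w, y))
    tau2 = max(0.0, (q - (len(y) - 1)) / (sw - sum(wi ** 2 for wi in w) / sw))
    # re-estimate mu with random-effects weights
    wr = [1.0 / (v + tau2) for v in s2]
    mu = sum(wi * yi for wi, yi in zip(wr, y)) / sum(wr)
    return mu, tau2

def parametric_bootstrap_ci(y, s2, b=1000, alpha=0.05, seed=1):
    """Percentile CI for tau^2: simulate trials from the fitted model
    y_i* ~ N(mu, tau^2 + s_i^2) and re-estimate tau^2 on each replicate."""
    mu, tau2 = dl_estimate(y, s2)
    rng = random.Random(seed)
    draws = sorted(
        dl_estimate([rng.gauss(mu, math.sqrt(tau2 + v)) for v in s2], s2)[1]
        for _ in range(b)
    )
    return tau2, (draws[int(b * alpha / 2)], draws[int(b * (1 - alpha / 2)) - 1])
```

Because the moment estimator is truncated at zero, the percentile interval conveys the skewed, boundary-constrained sampling distribution of tau^2 far better than a Wald interval does.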

3.
The Cox proportional hazards regression model is a popular tool for analyzing the relationship between a censored lifetime variable and other relevant factors. The semiparametric Cox model is widely used to study different types of data arising from applied disciplines such as medical science, biology, and reliability studies. A fully parametric version of the Cox regression model, if properly specified, can yield more efficient parameter estimates, leading to better insight generation. However, the existing maximum likelihood approach to inference under the fully parametric proportional hazards model is highly nonrobust against data contamination (often manifested through outliers), which restricts its practical usage. In this paper, we develop a robust estimation procedure for the parametric proportional hazards model based on the minimum density power divergence approach. The proposed minimum density power divergence estimator produces highly robust estimates under data contamination with only a slight loss in efficiency under pure data. Further, it consistently generates more precise inference than the likelihood-based estimates under the semiparametric Cox models or their existing robust versions. We also justify its robustness theoretically through an influence function analysis. The practical applicability and usefulness of the proposal are illustrated through simulations and real data examples.
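To show the minimum density power divergence idea in its simplest setting, the sketch below fits a one-parameter exponential model rather than the paper's parametric proportional hazards model; the tuning constant alpha = 0.5, the grid search, and the contaminated sample are illustrative assumptions.

```python
import math
import random

def dpd_objective(lam, data, alpha=0.5):
    """Empirical density power divergence objective for an Exponential(lam)
    model: the integral term lam^alpha / (1 + alpha) minus
    (1 + 1/alpha) * mean of f(x)^alpha.  Gross outliers contribute almost
    nothing through f(x)^alpha, so they are automatically downweighted."""
    integral = lam ** alpha / (1.0 + alpha)
    mean_f_alpha = sum(
        (lam * math.exp(-lam * x)) ** alpha for x in data
    ) / len(data)
    return integral - (1.0 + 1.0 / alpha) * mean_f_alpha

def mdpde_rate(data, alpha=0.5, grid=1000):
    """Minimum DPD estimate of the exponential rate via a simple grid search."""
    cands = [0.01 + 5.0 * i / grid for i in range(grid)]
    return min(cands, key=lambda lam: dpd_objective(lam, data, alpha))
```

With 5% gross outliers the nonrobust MLE of the rate (the reciprocal of the sample mean) collapses toward zero, while the minimum DPD estimate stays near the rate of the clean component, which is the robustness-versus-efficiency trade-off the abstract describes.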

4.
Where OLS regression seeks to model the mean of a random variable as a function of observed variables, quantile regression seeks to model the quantiles of a random variable as functions of observed variables. Tests for the dependence of the quantiles of a random variable upon observed variables have only been developed through the use of computer resampling or based on asymptotic approximations resting on distributional assumptions. We propose an exceedingly simple but heretofore undocumented likelihood ratio test within a logistic regression framework to test the dependence of a quantile of a random variable upon observed variables. Simulated data sets are used to illustrate the rationale, ease, and utility of the hypothesis test. Simulations have been performed over a variety of situations to estimate the type I error rates and statistical power of the procedure. Results from this procedure are compared to (1) previously proposed asymptotic tests for quantile regression and (2) bootstrap techniques commonly used for quantile regression inference. Results show that this less computationally intense method has appropriate type I error control, which is not true for all competing procedures, comparable power to both previously proposed asymptotic tests and bootstrap techniques, and greater computational ease. We illustrate the approach using data from 779 adolescent boys age 12-18 from the Third National Health and Nutrition Examination Survey (NHANES III) to test hypotheses regarding age, ethnicity, and their interaction upon quantiles of waist circumference.
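The test's core mechanic, dichotomizing the response at its pooled sample tau-quantile and applying a logistic-regression likelihood ratio test, can be sketched for one binary covariate, where the logistic LRT reduces to a comparison of binomial log-likelihoods (a toy version with simulated data, not the authors' code):

```python
import math
import random

def lrt_quantile_dependence(y, group, tau=0.5):
    """Test whether the tau-th quantile of y depends on a binary covariate:
    dichotomize y at the pooled sample tau-quantile, then compare per-group
    binomial log-likelihoods against the pooled one.  With a single binary
    covariate this equals the logistic-regression likelihood ratio test."""
    cut = sorted(y)[int(math.ceil(tau * len(y))) - 1]

    def loglik(k, n):
        # binomial log-likelihood at the MLE p = k/n (0 * log 0 := 0)
        if k == 0 or k == n:
            return 0.0
        p = k / n
        return k * math.log(p) + (n - k) * math.log(1 - p)

    n1 = sum(1 for g in group if g == 1)
    n0 = len(y) - n1
    k1 = sum(1 for yi, g in zip(y, group) if g == 1 and yi > cut)
    k0 = sum(1 for yi, g in zip(y, group) if g == 0 and yi > cut)
    # 2 * (full - null); compare with the chi-square(1) critical value 3.84
    return 2 * (loglik(k1, n1) + loglik(k0, n0) - loglik(k1 + k0, n1 + n0))
```

For continuous or multiple covariates the same statistic comes from fitting the full and null logistic regressions to the dichotomized response, which is what makes the procedure so cheap relative to resampling.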

5.
This paper investigates the use of likelihood methods for meta-analysis, within the random-effects models framework. We show that likelihood inference relying on first-order approximations, while improving common meta-analysis techniques, can be prone to misleading results. This drawback is very evident in the case of small sample sizes, which are typical in meta-analysis. We alleviate the problem by exploiting the theory of higher-order asymptotics. In particular, we focus on a second-order adjustment to the log-likelihood ratio statistic. Simulation studies in meta-analysis and meta-regression show that higher-order likelihood inference provides much more accurate results than its first-order counterpart, while being of a computationally feasible form. We illustrate the application of the proposed approach on a real example.

6.
Proportion data with support in the interval [0,1] are commonplace in various domains of medicine and public health. When these data are available as clusters, it is important to correctly incorporate the within-cluster correlation to improve estimation efficiency when conducting regression-based risk evaluation. Furthermore, covariates may exhibit a nonlinear relationship with the (proportion) responses while quantifying disease status. As an alternative to various existing classical methods for modeling proportion data (such as augmented Beta regression) that use maximum likelihood or generalized estimating equations, we develop a partially linear additive model based on the quadratic inference function. Relying on quasi-likelihood estimation techniques and polynomial spline approximation for the unknown nonparametric functions, we obtain estimators for both the parametric and nonparametric parts of our model and study their large-sample theoretical properties. We illustrate the advantages and usefulness of our proposal over other alternatives via extensive simulation studies and application to a real dataset from a clinical periodontal study.

7.
Prognostic models are increasingly common in the biomedical literature. These models are frequently evaluated with respect to their ability to discriminate between those with and without an outcome. The area under the receiver-operating characteristic curve (AROC) is often used to assess discrimination. In this study, we introduce a bootstrap method, and, using Monte Carlo simulation, we compare three different bootstrap approaches with four commonly used methods in their ability to accurately estimate 95% confidence intervals (CIs) around the AROC for a simple prognostic model. We also evaluated the power of a bootstrap method and the commonly used trapezoid rule to compare different prognostic models. We show that several good methods exist for calculating 95% CIs of the AROC, but the maximum likelihood estimation method should not be used with small sample sizes. We further show that for our simple prognostic model a bootstrap z-statistic approach is preferred over the trapezoidal method when comparing the AROCs of two related models.
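One of the bootstrap variants being compared, a percentile interval with cases and controls resampled separately, can be sketched as follows; the risk scores are hypothetical and the Mann-Whitney estimator stands in for whatever AROC estimator a given prognostic model produces:

```python
import random

def auc(cases, controls):
    """Mann-Whitney estimate of the area under the ROC curve:
    the fraction of case/control pairs the model ranks correctly."""
    wins = sum(
        1.0 if c > d else 0.5 if c == d else 0.0
        for c in cases for d in controls
    )
    return wins / (len(cases) * len(controls))

def bootstrap_auc_ci(cases, controls, b=1000, alpha=0.05, seed=7):
    """Percentile bootstrap CI: resample cases and controls separately so
    every replicate preserves the two-group structure of the data."""
    rng = random.Random(seed)
    draws = sorted(
        auc(rng.choices(cases, k=len(cases)),
            rng.choices(controls, k=len(controls)))
        for _ in range(b)
    )
    return draws[int(b * alpha / 2)], draws[int(b * (1 - alpha / 2)) - 1]
```

Stratified resampling keeps the case/control ratio fixed across replicates, which matters when one group is small, exactly the small-sample regime where the abstract warns against the maximum likelihood interval.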

8.
Hong Zhu. Statistics in Medicine 2014, 33(14): 2467-2479
Regression methods for survival data with right censoring have been extensively studied under semiparametric transformation models such as the Cox regression model and the proportional odds model. However, their practical application could be limited because of possible violation of model assumption or lack of ready interpretation for the regression coefficients in some cases. As an alternative, in this paper, the proportional likelihood ratio model introduced by Luo and Tsai is extended to flexibly model the relationship between survival outcome and covariates. This model has a natural connection with many important semiparametric models such as generalized linear model and density ratio model and is closely related to biased sampling problems. Compared with the semiparametric transformation model, the proportional likelihood ratio model is appealing and practical in many ways because of its model flexibility and quite direct clinical interpretation. We present two likelihood approaches for the estimation and inference on the target regression parameters under independent and dependent censoring assumptions. Based on a conditional likelihood approach using uncensored failure times, a numerically simple estimation procedure is developed by maximizing a pairwise pseudo-likelihood. We also develop a full likelihood approach, and the most efficient maximum likelihood estimator is obtained by a profile likelihood. Simulation studies are conducted to assess the finite-sample properties of the proposed estimators and compare the efficiency of the two likelihood approaches. An application to survival data for bone marrow transplantation patients of acute leukemia is provided to illustrate the proposed method and other approaches for handling non-proportionality. The relative merits of these methods are discussed in concluding remarks. Copyright © 2014 John Wiley & Sons, Ltd.

9.
Cheng KF, Lin WJ. Statistics in Medicine 2005, 24(21): 3289-3310
Association analysis of genetic polymorphisms has been mostly performed in a case-control setting in connection with the traditional logistic regression analysis. However, in a case-control study, subjects are recruited according to their disease status and their past exposures are determined. Thus the natural model for making inference is the retrospective model. In this paper, we discuss some retrospective models and give maximum likelihood estimators of exposure effects and estimators of asymptotic variances, when the frequency distribution of exposures in controls contains information about the parameters of interest. Two situations about the control population are considered in this paper: (a) the control population or its subpopulations are in Hardy-Weinberg equilibrium; and (b) genetic and environmental factors are independent in the control population. Using the concept of asymptotic relative efficiency, we shall show the precision advantages of such retrospective analysis over the traditional prospective analysis. Maximum likelihood estimates and variance estimates under retrospective models are simple in computation and thus can be applied in many practical applications. We present one real example to illustrate our methods.
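For situation (a), the control genotype distribution is governed by a single allele frequency with a familiar closed-form MLE; the sketch below computes that estimate and a 1-df likelihood ratio test of Hardy-Weinberg equilibrium (generic HWE machinery, not the authors' full retrospective likelihood):

```python
import math

def hwe_mle_and_lrt(n_aa, n_ab, n_bb):
    """MLE of the allele-A frequency under Hardy-Weinberg equilibrium,
    q_hat = (2*n_AA + n_AB) / (2n), plus the likelihood ratio statistic
    comparing the saturated multinomial model with the HWE-constrained
    model (1 degree of freedom)."""
    n = n_aa + n_ab + n_bb
    q = (2 * n_aa + n_ab) / (2 * n)
    hwe_probs = [q * q, 2 * q * (1 - q), (1 - q) * (1 - q)]
    counts = [n_aa, n_ab, n_bb]

    def ll(probs):
        # multinomial log-likelihood; categories with zero count drop out
        return sum(k * math.log(p) for k, p in zip(counts, probs) if k > 0)

    sat_probs = [k / n for k in counts]
    return q, 2 * (ll(sat_probs) - ll(hwe_probs))
```

Exploiting the HWE constraint is precisely where the retrospective analysis gains efficiency: two genotype probabilities are replaced by one allele-frequency parameter estimated from the controls.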

10.
Marginal structural models were developed as a semiparametric alternative to the G-computation formula to estimate causal effects of exposures. In practice, these models are often specified using parametric regression models. As such, the usual conventions regarding regression model specification apply. This paper outlines strategies for marginal structural model specification and considerations for the functional form of the exposure metric in the final structural model. We propose a quasi-likelihood information criterion adapted from use in generalized estimating equations. We evaluate the properties of our proposed information criterion using a limited simulation study. We illustrate our approach using two empirical examples. In the first example, we use data from a randomized breastfeeding promotion trial to estimate the effect of breastfeeding duration on infant weight at 1 year. In the second example, we use data from two prospective cohort studies to estimate the effect of highly active antiretroviral therapy on CD4 count in an observational cohort of HIV-infected men and women. The marginal structural model specified should reflect the scientific question being addressed but can also assist in exploration of other plausible and closely related questions. In marginal structural models, as in any regression setting, correct inference depends on correct model specification. Our proposed information criterion provides a formal method for comparing model fit for different specifications. Copyright © 2012 John Wiley & Sons, Ltd.

11.
In a medical diagnostic testing problem, multiple diagnostic tests are often available in distinguishing between diseased and nondiseased subjects. Different diagnostic tests are usually sensitive to different aspects of the disease. A desirable approach is to combine multiple diagnostic tests so as to obtain an optimal composite diagnostic test with higher sensitivity and specificity that detects the presence of the disease more accurately. To accomplish this, it has been observed, via signal detection theory developed in the 1950s and 1960s, that the optimal combination of different diagnostic variables (i.e. the diagnostic test results) is determined by the likelihood ratio function for the diseased and nondiseased groups. The conventional approach is to fit parametric models for the diseased and nondiseased groups separately and then to use the fitted likelihood ratio function for the best combination of test results. However, this approach is not so robust if the underlying distribution functions are misspecified. Since the optimal combination depends only on the likelihood ratio function, it would be more appropriate to model this function directly. A two-sample semiparametric inference technique is applied to the model for the likelihood ratio function. We consider the best combination of multiple diagnostic tests, and study semiparametric likelihood estimation of the optimal receiver operating characteristic curve and the area under the curve. We present a bootstrap procedure along with some results on simulation and on analysis of two real data sets. Copyright © 2010 John Wiley & Sons, Ltd.

12.
Many complex human diseases such as alcoholism and cancer are rated on ordinal scales. Well-developed statistical methods for the genetic mapping of quantitative traits may not be appropriate for ordinal traits. We propose a class of variance-component models for the joint linkage and association analysis of ordinal traits. The proposed models accommodate arbitrary pedigrees and allow covariates and gene-environment interactions. We develop efficient likelihood-based inference procedures under the proposed models. The maximum likelihood estimators are approximately unbiased, normally distributed, and statistically efficient. Extensive simulation studies demonstrate that the proposed methods perform well in practical situations. An application to data from the Collaborative Study on the Genetics of Alcoholism is provided. Genet. Epidemiol. 34: 232-237, 2010. © 2009 Wiley-Liss, Inc.

13.
Substantial advances in Bayesian methods for causal inference have been made in recent years. We provide an introduction to Bayesian inference for causal effects for practicing statisticians who have some familiarity with Bayesian models and would like an overview of what it can add to causal estimation in practical settings. In the paper, we demonstrate how priors can induce shrinkage and sparsity in parametric models and be used to perform probabilistic sensitivity analyses around causal assumptions. We provide an overview of nonparametric Bayesian estimation and survey their applications in the causal inference literature. Inference in the point-treatment and time-varying treatment settings are considered. For the latter, we explore both static and dynamic treatment regimes. Throughout, we illustrate implementation using off-the-shelf open source software. We hope to leave the reader with implementation-level knowledge of Bayesian causal inference using both parametric and nonparametric models. All synthetic examples and code used in the paper are publicly available on a companion GitHub repository.

14.
Bootstrap estimation and its applications
This paper introduces an application-oriented, computation-intensive method of statistical inference: the bootstrap. The bootstrap is a simulation-based resampling method built on the original data, used to study the distributional characteristics of a statistic of those data; it is particularly suitable for interval estimation and hypothesis testing involving parameters whose distributions are difficult to derive by conventional methods. We discuss parametric and nonparametric bootstrap methods, bootstrap estimation error, the distinction between the bootstrap and Monte Carlo simulation, and combined jackknife-bootstrap estimation.
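A minimal nonparametric percentile bootstrap in the spirit described above, illustrated for the sample median, a statistic whose sampling distribution is awkward to derive by conventional methods:

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.median, b=2000, alpha=0.05, seed=42):
    """Nonparametric percentile bootstrap: resample the original data with
    replacement, recompute the statistic on each resample, and read off
    empirical quantiles of the resulting distribution."""
    rng = random.Random(seed)
    draws = sorted(stat(rng.choices(data, k=len(data))) for _ in range(b))
    return draws[int(b * alpha / 2)], draws[int(b * (1 - alpha / 2)) - 1]
```

Passing a different `stat` (a trimmed mean, a ratio, a correlation) gives an interval for that statistic with no new derivation, which is the practical appeal the abstract emphasizes.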

15.
Varying-coefficient models have claimed an increasing portion of statistical research and are now applied to censored data analysis in medical studies. We incorporate such flexible semiparametric regression tools for interval-censored data with a cured proportion. We adopt a two-part model to describe the overall survival experience for such complicated data. To fit the unknown functional components in the model, we take the local polynomial approach with bandwidth chosen by cross-validation. We establish consistency and the asymptotic distribution of the estimators and propose to use the bootstrap for inference. We construct a BIC-type model selection method to recommend an appropriate specification of the parametric and nonparametric components in the model. We conduct extensive simulations to assess the performance of our methods. An application to decompression sickness data illustrates our methods. Copyright © 2013 John Wiley & Sons, Ltd.

16.
Applied researchers frequently use automated model selection methods, such as backwards variable elimination, to develop parsimonious regression models. Statisticians have criticized the use of these methods for several reasons, amongst them are the facts that the estimated regression coefficients are biased and that the derived confidence intervals do not have the advertised coverage rates. We developed a method to improve estimation of regression coefficients and confidence intervals which employs backwards variable elimination in multiple bootstrap samples. In a given bootstrap sample, predictor variables that are not selected for inclusion in the final regression model have their regression coefficient set to zero. Regression coefficients are averaged across the bootstrap samples, and non-parametric percentile bootstrap confidence intervals are then constructed for each regression coefficient. We conducted a series of Monte Carlo simulations to examine the performance of this method for estimating regression coefficients and constructing confidence intervals for variables selected using backwards variable elimination. We demonstrated that this method results in confidence intervals with superior coverage compared with those developed from conventional backwards variable elimination. We illustrate the utility of our method by applying it to a large sample of subjects hospitalized with a heart attack.
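The resampling scheme can be sketched in a deliberately reduced form: one predictor, with "backwards elimination" collapsing to a keep-or-drop t-test, the coefficient set to zero whenever the predictor is dropped, and averaging plus percentile intervals across bootstrap samples. This is an illustrative stand-in, not the authors' multivariable procedure, and the t-threshold is an assumption.

```python
import math
import random
import statistics

def fit_slope(x, y):
    """OLS slope and its t-statistic for a one-predictor model."""
    n = len(x)
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    rss = sum((yi - my - b * (xi - mx)) ** 2 for xi, yi in zip(x, y))
    se = math.sqrt(rss / (n - 2) / sxx)
    return b, b / se

def bootstrap_selection_estimate(x, y, t_crit=2.0, b=500, seed=11):
    """Repeat the selection step inside every bootstrap sample: refit,
    keep the predictor only if |t| > t_crit (else record a zero), then
    average coefficients and take percentile intervals across samples."""
    rng = random.Random(seed)
    n = len(x)
    draws = []
    for _ in range(b):
        idx = [rng.randrange(n) for _ in range(n)]
        coef, t = fit_slope([x[i] for i in idx], [y[i] for i in idx])
        draws.append(coef if abs(t) > t_crit else 0.0)
    draws.sort()
    return sum(draws) / b, (draws[int(0.025 * b)], draws[int(0.975 * b) - 1])
```

The zeros recorded when the predictor is dropped are what let the averaged coefficient and its percentile interval reflect the extra variability introduced by the selection step itself.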

17.
When several independent groups have conducted studies to estimate a procedure's success rate, it is often of interest to combine the results of these studies in the hopes of obtaining a better estimate for the true unknown success rate of the procedure. In this paper we present two hierarchical methods for estimating the overall rate of success. Both methods take into account the within-study and between-study variation and assume in the first stage that the number of successes within each study follows a binomial distribution given each study's own success rate. They differ, however, in their second stage assumptions. The first method assumes in the second stage that the rates of success from individual studies form a random sample having a constant expected value and variance. Generalized estimating equations (GEE) are then used to estimate the overall rate of success and its variance. The second method assumes in the second stage that the success rates from different studies follow a beta distribution. Both methods use the maximum likelihood approach to derive an estimate for the overall success rate and to construct the corresponding confidence intervals. We also present a two-stage bootstrap approach to estimating a confidence interval for the success rate when the number of studies is small. We then perform a simulation study to compare the two methods. Finally, we illustrate these two methods and obtain bootstrap confidence intervals in a medical example analysing the effectiveness of hyperdynamic therapy for cerebral vasospasm.
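The two-stage bootstrap for small numbers of studies can be sketched directly: stage 1 resamples whole studies to capture between-study variation, and stage 2 redraws each resampled study's success count from its fitted binomial to capture within-study variation. The study counts below are hypothetical, and the pooled-proportion estimator is a simplifying assumption.

```python
import random

def two_stage_bootstrap_ci(successes, sizes, b=1000, alpha=0.05, seed=3):
    """Two-stage bootstrap CI for an overall success rate."""
    rng = random.Random(seed)
    k = len(sizes)
    draws = []
    for _ in range(b):
        chosen = [rng.randrange(k) for _ in range(k)]      # stage 1: studies
        tot_s = tot_n = 0
        for i in chosen:
            p = successes[i] / sizes[i]
            # stage 2: redraw the study's successes from Binomial(n_i, p_hat_i)
            tot_s += sum(rng.random() < p for _ in range(sizes[i]))
            tot_n += sizes[i]
        draws.append(tot_s / tot_n)
    draws.sort()
    return draws[int(b * alpha / 2)], draws[int(b * (1 - alpha / 2)) - 1]
```

Resampling at both levels is what keeps the interval honest when only a handful of studies are available and asymptotic variance formulas cannot be trusted.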

18.
In affected-sib-pair (ASP) studies, parameters such as the locus-specific sibling relative risk, lambda(s), may be estimated and used to decide whether or not to continue the search for susceptibility genes. Typically, a maximum likelihood point estimate of lambda(s) is given, but since this estimate may have substantial variability, it is of interest to obtain confidence limits for the true value of lambda(s). While a variety of methods for doing this exist, there is considerable uncertainty over their reliability. This is because the discrete nature of ASP data and the imposition of genetic "possible triangle" constraints during the likelihood maximization mean that asymptotic results may not apply. In this paper, we use simulation to evaluate the reliability of various asymptotic and simulation-based confidence intervals, the latter being based on a resampling, or bootstrap approach. We seek to identify, from the large pool of methods available, those methods that yield short intervals with accurate coverage probabilities for ASP data. Our results show that many of the most popular bootstrap confidence interval methods perform poorly for ASP data, giving coverage probabilities much lower than claimed. The test-inversion, profile-likelihood, and asymptotic methods, however, perform well, although some care is needed in choice of nuisance parameter. Overall, in simulations under a variety of different genetic hypotheses, we find that the asymptotic methods of confidence interval evaluation are the most reliable, even in small samples. We illustrate our results with a practical application to a real data set, obtaining confidence intervals for the sibling relative risks associated with several loci involved in type 1 diabetes.

19.
Group testing, where specimens are tested initially in pools, is widely used to screen individuals for sexually transmitted diseases. However, a common problem encountered in practice is that group testing can increase the number of false negative test results. This occurs primarily when positive individual specimens within a pool are diluted by negative ones, resulting in positive pools testing negatively. If the goal is to estimate a population-level regression model relating individual disease status to observed covariates, severe bias can result if an adjustment for dilution is not made. Recognizing this as a critical issue, recent binary regression approaches in group testing have utilized continuous biomarker information to acknowledge the effect of dilution. In this paper, we have the same overall goal but take a different approach. We augment existing group testing regression models (that assume no dilution) with a parametric dilution submodel for pool-level sensitivity and estimate all parameters using maximum likelihood. An advantage of our approach is that it does not rely on external biomarker test data, which may not be available in surveillance studies. Furthermore, unlike previous approaches, our framework allows one to formally test whether dilution is present based on the observed group testing data. We use simulation to illustrate the performance of our estimation and inference methods, and we apply these methods to 2 infectious disease data sets.
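A minimal version of the augmented-likelihood idea can be sketched as follows. The logistic-in-log(k/c) dilution submodel, its parameter values, and the grid-search fit are all illustrative assumptions rather than the authors' specification, and the sketch estimates only a common prevalence instead of a regression model:

```python
import math

def pool_positive_prob(k, c, a=4.0, b=1.5, spec=0.99):
    """Assumed dilution submodel: pool sensitivity falls as the fraction
    of positive specimens k/c shrinks (logistic in log(k/c)); a pool with
    no positives tests positive only through imperfect specificity."""
    if k == 0:
        return 1.0 - spec
    return 1.0 / (1.0 + math.exp(-(a + b * math.log(k / c))))

def loglik(p, pools):
    """Marginal log-likelihood of observed pool results under prevalence p,
    summing over the unobserved number of positives K ~ Binomial(c, p)."""
    ll = 0.0
    for c, tested_positive in pools:
        prob_pos = sum(
            math.comb(c, k) * p ** k * (1 - p) ** (c - k)
            * pool_positive_prob(k, c)
            for k in range(c + 1)
        )
        ll += math.log(prob_pos if tested_positive else 1.0 - prob_pos)
    return ll

def mle_prevalence(pools, grid=400):
    """Grid-search maximum likelihood estimate of individual prevalence."""
    cands = [(i + 1) / (grid + 2) for i in range(grid)]
    return max(cands, key=lambda p: loglik(p, pools))
```

Because the dilution parameters enter the same likelihood, in the full approach they can be estimated jointly with the regression coefficients, and setting the dilution slope to zero gives the no-dilution null the abstract proposes to test against.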

20.
Recently, interest has grown in the development of inferential techniques to compare treatment variabilities in the setting of a cross-over experiment. In particular, comparison of treatments with respect to intra-subject variability has greater interest than has inter-subject variability. We begin with a presentation of a general approach for statistical inference within a cross-over design. We discuss three different statistical models where model choice depends on the design and assumptions about carry-over effects. Each model incorporates t-variate random subject effects, where t is the number of treatments. We develop maximum likelihood (ML) and restricted maximum likelihood (REML) approaches to derive parameter estimators and we consider a special case in which closed-form expressions for the variance component estimators are available. Finally, we illustrate the methodologies with the analysis of data from three examples.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号