Similar Articles
20 similar articles found.
1.
Effect size estimates to be combined in a systematic review are often found to be more variable than one would expect based on sampling differences alone. This is usually interpreted as evidence that the effect sizes are heterogeneous. A random-effects model is then often used to account for the heterogeneity in the effect sizes. A novel method for constructing confidence intervals for the amount of heterogeneity in the effect sizes is proposed that guarantees nominal coverage probabilities even in small samples when model assumptions are satisfied. A variety of existing approaches for constructing such confidence intervals are summarized, and the various methods are applied to an example to illustrate their use. A simulation study reveals that the newly proposed method yields the most accurate coverage probabilities under conditions more analogous to practice, where assumptions about normally distributed effect size estimates and known sampling variances only hold asymptotically.
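
The abstract leaves the interval construction implicit. As a generic illustration of this class of methods, the sketch below computes a Q-profile-type confidence interval for the heterogeneity variance τ² under the standard random-effects model; it is not necessarily the paper's new proposal, and the function names and toy data are invented.

```python
import numpy as np
from scipy import optimize, stats

def gen_q(tau2, y, v):
    """Generalized Q statistic at a trial value of tau^2."""
    w = 1.0 / (v + tau2)
    mu = np.sum(w * y) / np.sum(w)          # weighted mean at this tau^2
    return np.sum(w * (y - mu) ** 2)

def q_profile_ci(y, v, level=0.95):
    """Invert the chi^2_{k-1} reference distribution of gen_q to get a
    confidence interval for tau^2 (bounds truncated at zero)."""
    k = len(y)
    hi_target = stats.chi2.ppf(1 - (1 - level) / 2, k - 1)
    lo_target = stats.chi2.ppf((1 - level) / 2, k - 1)

    def solve(target, upper=1e4):
        # gen_q decreases in tau^2, so a root is bracketed on [0, upper]
        if gen_q(0.0, y, v) <= target:
            return 0.0
        return optimize.brentq(lambda t2: gen_q(t2, y, v) - target, 0.0, upper)

    return solve(hi_target), solve(lo_target)   # (lower, upper) bound

y = np.array([0.38, 0.10, 0.62, 0.45, 0.20])    # toy effect size estimates
v = np.array([0.04, 0.05, 0.03, 0.06, 0.05])    # their sampling variances
print(q_profile_ci(y, v))
```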

2.
Ma Y, Mazumdar M. Statistics in Medicine 2011;30(24):2911-2929.
Meta-analysis is the methodology for combining findings from similar research studies asking the same question. When the question of interest involves multiple outcomes, multivariate meta-analysis is used to synthesize the outcomes simultaneously, taking into account the correlation between them. Likelihood-based approaches, in particular the restricted maximum likelihood (REML) method, are commonly utilized in this context. REML assumes a multivariate normal distribution for the random-effects model. This assumption is difficult to verify, especially for a meta-analysis with a small number of component studies. The use of REML also requires iterative estimation of parameters, needing moderately high computation time, especially when the dimension of outcomes is large. A multivariate method of moments (MMM) is available and has been shown to perform as well as REML. However, there is a lack of information on the performance of these two methods when the true data distribution is far from normality. In this paper, we propose a new nonparametric and non-iterative method for multivariate meta-analysis based on the theory of U-statistics and compare the properties of these three procedures under both normal and skewed data through simulation studies. It is shown that the effect of a non-normal data distribution on REML estimates is marginal and that the estimates from the MMM and U-statistic-based approaches are very similar. Therefore, we conclude that for performing multivariate meta-analysis, the U-statistic estimation procedure is a viable alternative to REML and MMM. Easy implementation of all three methods is illustrated by their application to data from two published meta-analyses from the fields of hip fracture and periodontal disease. We discuss ideas for future research based on U-statistics for testing the significance of between-study heterogeneity and for extending the work to the meta-regression setting.

3.
In clinical trials with time-to-event outcomes, it is common to estimate the marginal hazard ratio from the proportional hazards model, even when the proportional hazards assumption is not valid. This is unavoidable from the perspective that the estimator must be specified a priori if probability statements about treatment effect estimates are desired. Marginal hazard ratio estimates under non-proportional hazards are still useful, as they can be considered to be average treatment effect estimates over the support of the data. However, as many have shown, under non-proportional hazards, the 'usual' unweighted marginal hazard ratio estimate is a function of the censoring distribution, which is not normally considered to be scientifically relevant when describing the treatment effect. In addition, in many practical settings, the censoring distribution is only conditionally independent (e.g., differing across treatment arms), which further complicates the interpretation. In this paper, we investigate an estimator of the hazard ratio that removes the influence of censoring and propose a consistent robust variance estimator. We compare the coverage probability of the estimator to both the usual Cox model estimator and an estimator proposed by Xu and O'Quigley (2000) when censoring is independent of the covariate. The new estimator should be used for inference that does not depend on the censoring distribution. It is particularly relevant to adaptive clinical trials where, by design, censoring distributions differ across treatment arms. Copyright © 2012 John Wiley & Sons, Ltd.

4.
Reliability measures have been well studied over many years, particularly for two-factor models. Motivated by a medical research problem, point and confidence interval estimates of the intraclass correlation coefficient are extended to models containing three crossed random factors: subjects, raters, and occasions. The estimation is conducted using both analysis of variance and Markov chain Monte Carlo methods.
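
For intuition, here is a minimal sketch of the two-factor (subjects × raters) ANOVA estimator that the paper extends; the three-factor extension and the interval estimates are not attempted here, and the data layout and function name are illustrative assumptions.

```python
import numpy as np

def icc_two_way(x):
    """ICC from a fully crossed subjects-by-raters matrix (one rating per
    cell), estimating variance components from ANOVA mean squares."""
    n, k = x.shape
    grand = x.mean()
    ms_subj = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_rater = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    var_subj = max((ms_subj - ms_err) / k, 0.0)    # E[MS_subj]  = s_e^2 + k*s_s^2
    var_rater = max((ms_rater - ms_err) / n, 0.0)  # E[MS_rater] = s_e^2 + n*s_r^2
    return var_subj / (var_subj + var_rater + ms_err)

ratings = np.array([[9., 2., 5.], [6., 1., 3.], [8., 4., 6.], [7., 1., 2.]])
print(icc_two_way(ratings))  # subjects in rows, raters in columns
```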

5.
A normal copula-based selection model is proposed for continuous longitudinal data with a non-ignorable, non-monotone missing-data process. The normal copula is used to combine the distribution of the outcome of interest and that of the missing-data indicators given the covariates. Parameters in the model are estimated by a pseudo-likelihood method. We first use GEE with a logistic link to estimate the parameters associated with the marginal distribution of the missing-data indicator given the covariates, assuming that covariates are always observed. Then we estimate the other parameters by inserting the estimates from the first step into the full likelihood function. A simulation study is conducted to assess the robustness of the assumed model under different missing-data processes. The proposed method is then applied to an example from a community cohort study to demonstrate its capability to reduce bias.

6.
In the context of a mathematical model describing HIV infection, we discuss a Bayesian modelling approach to a non-linear random effects estimation problem. The model and the data exhibit a number of features that make the use of an ordinary non-linear mixed effects model intractable: (i) the data are from two compartments fitted simultaneously against the implicit numerical solution of a system of ordinary differential equations; (ii) data from one compartment are subject to censoring; (iii) random effects for one variable are assumed to be from a beta distribution. We show how the Bayesian framework can be exploited by incorporating prior knowledge on some of the parameters, and by combining the posterior distributions of the parameters to obtain estimates of quantities of interest that follow from the postulated model.

7.
Rice K. Statistics in Medicine 2003;22(20):3177-3194.
We consider analysis of matched case-control studies where a binary exposure is potentially misclassified, and there may be a variety of matching ratios. The parameter of interest is the ratio of odds of case exposure to control exposure. By extending the conditional model for perfectly classified data via a random effects or Bayesian formulation, we obtain estimates and confidence intervals for the misclassified case which reduce back to standard analytic forms as the error probabilities reduce to zero. Several examples are given, highlighting different analytic phenomena. In a simulation study using mixed matching ratios, the coverage of the intervals is found to be good, although point estimates are slightly biased on the log scale. Extensions of the basic model are given, allowing for uncertainty in the knowledge of misclassification rates and the inclusion of prior information about the parameter of interest.

8.
It is of interest to estimate the distribution of usual nutrient intake for a population from repeat 24-h dietary recall assessments. A mixed effects model and quantile estimation procedure, developed at the National Cancer Institute (NCI), may be used for this purpose. The model incorporates a Box-Cox parameter and covariates to estimate usual daily intake of nutrients; model parameters are estimated via quasi-Newton optimization of a likelihood approximated by adaptive Gaussian quadrature. The parameter estimates are used in a Monte Carlo approach to generate empirical quantiles; standard errors are estimated by bootstrap. The NCI method is illustrated and compared with current estimation methods, including the individual mean and the semi-parametric method developed at Iowa State University (ISU), using data from a random sample and computer simulations. Both the NCI and ISU methods for nutrients are superior to the distribution of individual means. For simple (no covariate) models, quantile estimates are similar between the NCI and ISU methods. The bootstrap approach used by the NCI method to estimate standard errors of quantiles appears preferable to Taylor linearization. One major advantage of the NCI method is its ability to provide estimates for subpopulations through the incorporation of covariates into the model. The NCI method may be used for estimating the distribution of usual nutrient intake for populations and subpopulations as part of a unified framework of estimation of usual intake of dietary constituents. Copyright © 2010 John Wiley & Sons, Ltd.
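
A heavily stripped-down sketch of the Monte Carlo quantile step, assuming the mixed model has already been fitted: draw between-person effects on the Box-Cox scale, back-transform, and read off empirical quantiles. It omits the NCI method's covariates and its bias adjustment for within-person variability, and all parameter values and names are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def usual_intake_quantiles(mu, sigma_b, lam, probs, n_sim=100_000):
    # Between-person random effects on the fitted Box-Cox scale.
    z = mu + rng.normal(0.0, sigma_b, n_sim)
    if lam == 0:                          # Box-Cox with lambda = 0 is the log scale
        intake = np.exp(z)
    else:                                 # inverse Box-Cox; clip to its domain
        intake = np.maximum(lam * z + 1.0, 0.0) ** (1.0 / lam)
    return np.quantile(intake, probs)

# e.g. median and 95th percentile under made-up parameter values
print(usual_intake_quantiles(mu=2.0, sigma_b=0.5, lam=0.25, probs=[0.5, 0.95]))
```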

9.
The use of standard univariate fixed- and random-effects models in meta-analysis has become well known in the last 20 years. However, these models are unsuitable for meta-analysis of clinical trials that present multiple survival estimates (usually illustrated by a survival curve) during a follow-up period. Therefore, special methods are needed to combine the survival curve data from different trials in a meta-analysis. For this purpose, only fixed-effects models have been suggested in the literature. In this paper, we propose a multivariate random-effects model for joint analysis of survival proportions reported at multiple time points and in different studies, to be combined in a meta-analysis. The model can be seen as a generalization of the fixed-effects model of Dear (Biometrics 1994; 50:989-1002). We illustrate the method using a simulated data example as well as a clinical example of meta-analysis with aggregated survival curve data. All analyses can be carried out with standard general linear MIXED model software. Copyright © 2008 John Wiley & Sons, Ltd.

10.
We propose a methodology for evaluation of agreement between two methods of measuring a continuous variable whose variability changes with magnitude. This problem routinely arises in method comparison studies that are common in health-related disciplines. Assuming replicated measurements, we first model the data using a heteroscedastic mixed-effects model, wherein a suitably defined true measurement serves as the variance covariate. Fitting this model poses some computational difficulties as the likelihood function is not available in a closed form. We deal with this issue by suggesting four estimation methods to obtain approximate maximum likelihood estimates. Two of these methods are based on numerical approximation of the likelihood, and the other two are based on approximation of the model. Next, we extend the existing agreement evaluation methodology designed for homoscedastic data to work under the proposed heteroscedastic model. This methodology can be used with any scalar measure of agreement. Simulations show that the suggested inference procedures generally work well for moderately large samples. They are illustrated by analyzing a data set of cholesterol measurements. Copyright © 2013 John Wiley & Sons, Ltd.

11.
For complex traits, most associated single nucleotide variants (SNV) discovered to date have a small effect, and detection of association is only possible with large sample sizes. Because of patient confidentiality concerns, it is often not possible to pool genetic data from multiple cohorts, and meta-analysis has emerged as the method of choice to combine results from multiple studies. Many meta-analysis methods are available for single SNV analyses. As new approaches allow the capture of low frequency and rare genetic variation, it is of interest to jointly consider multiple variants to improve power. However, for the analysis of haplotypes formed by multiple SNVs, meta-analysis remains a challenge, because different haplotypes may be observed across studies. We propose a two-stage meta-analysis approach to combine haplotype analysis results. In the first stage, each cohort estimates haplotype effect sizes in a regression framework, accounting for relatedness among observations if appropriate. In the second stage, we use a multivariate generalized least squares meta-analysis approach to combine haplotype effect estimates from multiple cohorts. Haplotype-specific association tests and a global test of independence between haplotypes and traits are obtained within our framework. We demonstrate through simulation studies that we control the type I error rate and that our approach is more powerful than inverse-variance weighted meta-analysis of single SNV analyses when haplotype effects are present. We replicate a published haplotype association between the fasting glucose-associated locus G6PC2 and fasting glucose in seven studies from the Cohorts for Heart and Aging Research in Genomic Epidemiology Consortium, and we provide more precise haplotype effect estimates.
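
The second-stage combination is standard multivariate generalized least squares. A bare-bones sketch is given below; the function and variable names are illustrative, and it ignores complications such as haplotypes unobserved in some cohorts.

```python
import numpy as np
from scipy import stats

def gls_meta(betas, covs):
    """Second-stage multivariate GLS pooling of per-cohort haplotype
    effect vectors 'betas' with their covariance matrices 'covs'."""
    W = [np.linalg.inv(V) for V in covs]         # per-cohort precision
    S = np.linalg.inv(sum(W))                    # covariance of pooled estimate
    b = S @ sum(w @ beta for w, beta in zip(W, betas))
    wald = float(b @ np.linalg.inv(S) @ b)       # global test: all effects zero
    p_global = stats.chi2.sf(wald, df=len(b))
    z = b / np.sqrt(np.diag(S))                  # haplotype-specific tests
    return b, S, z, p_global
```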

12.
In the process of identifying potential anticancer agents, a new agent is tested for cytotoxic activity against a panel of standard cancer cell lines. The National Cancer Institute (NCI) presents the cytotoxic profile for each agent as a set of estimates of the dose required to inhibit the growth of each cell line. The NCI estimates are obtained from a linear interpolation method applied to the dose-response curves. In this paper non-linear fits are proposed as an alternative to interpolation. This is illustrated with data from two agents recently submitted to the NCI for potential anticancer activity. Fitting individual non-linear curves proved difficult, but a non-linear mixed model applied to the full set of cell lines overcame most of the problems. Two non-linear functional forms were fitted using random effects models by both maximum likelihood and a full Bayesian approach. Model-based toxicity estimates have some advantages over those obtained from interpolation: they provide standard errors for toxicity estimates and other derived quantities and allow model comparisons, examples of which are given.
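
As a toy illustration of why model-based estimates come with standard errors, here is a single-cell-line four-parameter logistic fit; the paper's actual approach pools all cell lines through a non-linear mixed model, and the functional form, parameter names, and data below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(log_dose, top, bottom, log_gi50, slope):
    """Four-parameter logistic dose-response curve on the log-dose scale."""
    return bottom + (top - bottom) / (1.0 + np.exp(slope * (log_dose - log_gi50)))

# invented single-cell-line data: growth (% of control) at five log10 doses
log_dose = np.array([-8.0, -7.0, -6.0, -5.0, -4.0])
growth = np.array([98.0, 85.0, 52.0, 18.0, 5.0])

params, cov = curve_fit(logistic4, log_dose, growth, p0=[100.0, 0.0, -6.0, 2.0])
print(params[2], np.sqrt(cov[2, 2]))  # model-based log-dose estimate and its SE
```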

13.
This article proposes a joint modeling framework for longitudinal insomnia measurements and a stochastic smoking cessation process in the presence of a latent permanent quitting state (i.e., 'cure'). We use a generalized linear mixed-effects model for the longitudinal measurements of insomnia symptoms and a stochastic mixed-effects model for the smoking cessation process, and we link the two models via the latent random effects. We develop a Bayesian framework and a Markov chain Monte Carlo algorithm to obtain the parameter estimates, and we formulate and compute the likelihood functions involving time-dependent covariates. We explore the within-subject correlation between the insomnia and smoking processes. We apply the proposed methodology to simulation studies and to the motivating dataset, the Alpha-Tocopherol, Beta-Carotene Lung Cancer Prevention study, a large longitudinal cohort study of smokers from Finland. Copyright © 2013 John Wiley & Sons, Ltd.

14.
In order to yield more flexible models, the Cox regression model, $\lambda(t;x) = \lambda_0(t)\exp(\beta x)$, has been generalized using different non-parametric model estimation techniques. One generalization is the relaxation of log-linearity in $x$, $\lambda(t;x) = \lambda_0(t)\exp[r(x)]$. Another is the relaxation of the proportional hazards assumption, $\lambda(t;x) = \lambda_0(t)\exp[\beta(t)x]$. These generalizations are typically considered independently of each other. We propose the product model, $\lambda(t;x) = \lambda_0(t)\exp[\beta(t)r(x)]$, which allows for joint estimation of both effects, and investigate its properties. The functions describing the time-dependent $\beta(t)$ and non-linear $r(x)$ effects are modelled simultaneously using regression splines and estimated by maximum partial likelihood. Likelihood ratio tests are proposed to compare alternative models. Simulations indicate that both the recovery of the shapes of the two functions and the size of the tests are reasonably accurate provided they are based on the correct model. By contrast, type I error rates may be highly inflated, and the estimates considerably biased, if the model is misspecified. Applications in cancer epidemiology illustrate how the product model may yield new insights about the role of prognostic factors.

15.
The propensity adjustment is used to reduce bias in treatment effectiveness estimates from observational data. We show here that a mixed-effects implementation of the propensity adjustment can reduce bias in longitudinal studies of non-equivalent comparison groups. The strategy examined here involves two stages. Initially, a mixed-effects ordinal logistic regression model of propensity for treatment intensity includes variables that differentiate subjects who receive various doses of time-varying treatments. Second, a mixed-effects linear regression model compares the effectiveness of those ordinal doses on a continuous outcome over time. Here, a simulation study compares the bias reduction achieved by implementing this propensity adjustment through various forms of stratification. The simulations demonstrate that bias decreased monotonically as the number of quantiles used for stratification increased from two to five. This was particularly pronounced with stronger effects of the confounding variables. The quartile and quintile strategies typically removed in excess of 80-90 per cent of the bias detected in unadjusted models, whereas a median-split approach removed from 20 to 45 per cent of the bias. The approach is illustrated in an evaluation of the effectiveness of somatic treatments for major depression in a longitudinal, observational study of affective disorders.
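
A simplified cross-sectional sketch of quantile stratification with a binary treatment and a plain logistic propensity model follows; the paper's implementation is ordinal, longitudinal, and mixed-effects, so everything here (names, estimator, weighting) is an illustrative assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def stratified_effect(X, treat, y, n_strata=5):
    """Propensity-quantile stratification: average the treated-vs-control
    mean difference within each propensity stratum."""
    # Fit a propensity model and score every subject.
    ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
    # Cut the scores at sample quantiles (quintiles by default).
    edges = np.quantile(ps, np.linspace(0, 1, n_strata + 1))
    strata = np.clip(np.searchsorted(edges, ps, side="right") - 1, 0, n_strata - 1)
    diffs, sizes = [], []
    for s in range(n_strata):
        m = strata == s
        if y[m][treat[m] == 1].size and y[m][treat[m] == 0].size:
            diffs.append(y[m][treat[m] == 1].mean() - y[m][treat[m] == 0].mean())
            sizes.append(m.sum())
    # Stratum-size-weighted average of within-stratum mean differences.
    return np.average(diffs, weights=sizes)
```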

16.
In epidemiology, the analyses of family or twin studies do not always fully exploit the data, as information on differences between siblings is often used while between-family effects are not considered. We show how cross-sectional time-series linear regression analysis can be easily implemented to estimate within- and between-family effects simultaneously and how these effects can then be compared using the Hausman test. We illustrate this approach with data from the Uppsala family study on blood pressure in children, with age ranging from 5.5 to 12.3 years for the younger and from 7.5 to 13.8 years for the older siblings. Comparing the effect of differences in birth weight on blood pressure within family (in full siblings) and between families (in unrelated children) allows us to study the contributions of fixed and pregnancy-specific maternal effects on birth weight and consequently on blood pressure. Our data showed a 0.88 mmHg decrease (95 per cent confidence interval: -1.7 to -0.03 mmHg) in systolic blood pressure for one standard deviation increase in birth weight between siblings within a family and a 0.88 mmHg decrease (95 per cent confidence interval: -1.6 to -0.2 mmHg) in systolic blood pressure for one standard deviation increase in birth weight between unrelated children. These estimates were controlled for sex, age, pubertal stage, body size and pulse rate of the children at examination and for maternal body size and systolic blood pressure. The within- and between-family effects were not significantly different (p = 0.19), suggesting that fixed and pregnancy-specific factors have similar effects on childhood systolic blood pressure.
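
A bare-bones OLS sketch of the within/between decomposition, with a simple z-contrast standing in for the Hausman test; the paper fits a cross-sectional time-series model with many covariates, so the code below is illustrative only.

```python
import numpy as np

def within_between(y, x, family):
    """Decompose x into its family mean (between-family contrast) and the
    within-family deviation, then estimate both effects by OLS."""
    fam, idx = np.unique(family, return_inverse=True)
    fam_mean = np.bincount(idx, weights=x) / np.bincount(idx)
    between = fam_mean[idx]                 # family mean of x per child
    within = x - between                    # deviation from the family mean
    X = np.column_stack([np.ones_like(y), within, between])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    cov = (resid @ resid / (len(y) - 3)) * np.linalg.inv(X.T @ X)
    # Hausman-style contrast: do within and between coefficients agree?
    se_diff = np.sqrt(cov[1, 1] + cov[2, 2] - 2 * cov[1, 2])
    return beta[1], beta[2], (beta[1] - beta[2]) / se_diff
```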

17.
Standard methods for fixed effects meta-analysis assume that standard errors for study-specific estimates are known, not estimated. While the impact of this simplifying assumption has been shown in a few special cases, its general impact is not well understood, nor are general-purpose tools available for inference under more realistic assumptions. In this paper, we aim to elucidate the impact of using estimated standard errors in fixed effects meta-analysis, showing why it does not go away in large samples and quantifying how badly miscalibrated standard inference will be if it is ignored. We also show the important role of a particular measure of heterogeneity in this miscalibration. These developments lead to confidence intervals for fixed effects meta-analysis with improved performance for both location and scale parameters.
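
For reference, the standard estimator whose calibration the paper examines is plain inverse-variance pooling, which plugs in the estimated standard errors as if they were known:

```python
import numpy as np

def fixed_effects_meta(est, se):
    """Inverse-variance fixed-effects pooling; 'se' holds the *estimated*
    standard errors, treated here as known constants (the simplification
    whose consequences the paper quantifies)."""
    w = 1.0 / se ** 2
    pooled = np.sum(w * est) / np.sum(w)
    return pooled, np.sqrt(1.0 / np.sum(w))

est = np.array([0.42, 0.31, 0.55])   # toy study-specific estimates
se = np.array([0.10, 0.12, 0.09])    # their estimated standard errors
print(fixed_effects_meta(est, se))
```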

18.
Fixed-effects meta-analysis has been criticized because the assumption of homogeneity is often unrealistic and can result in underestimation of parameter uncertainty. Random-effects meta-analysis and meta-regression are therefore typically used to accommodate explained and unexplained between-study variability. However, it is not unusual to obtain a boundary estimate of zero for the (residual) between-study standard deviation, resulting in fixed-effects estimates of the other parameters and their standard errors. To avoid such boundary estimates, we suggest using Bayes modal (BM) estimation with a gamma prior on the between-study standard deviation. When no prior information is available regarding the magnitude of the between-study standard deviation, a weakly informative default prior can be used (with shape parameter 2 and rate parameter close to 0) that produces positive estimates but does not overrule the data, leading to only a small decrease in the log likelihood from its maximum. We review the most commonly used estimation methods for meta-analysis and meta-regression, including classical and Bayesian methods, and apply these methods, as well as our BM estimator, to real datasets. We then perform simulations to compare BM estimation with the other methods and find that BM estimation performs well by (i) avoiding boundary estimates; (ii) having smaller root mean squared error for the between-study standard deviation; and (iii) better coverage for the overall effects than the other methods when the true model has at least a small or moderate amount of unexplained heterogeneity. Copyright © 2013 John Wiley & Sons, Ltd.
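
A minimal sketch of the penalized-likelihood idea, using a profile ML objective rather than the paper's restricted likelihood: the Gamma(shape = 2, rate ≈ 0) prior contributes (shape − 1)·log τ − rate·τ to the log posterior, which tends to −∞ as τ → 0 and so keeps the mode off the boundary. Function names and toy data are illustrative.

```python
import numpy as np
from scipy import optimize

def profile_loglik(tau, y, v):
    """Random-effects meta-analysis log-likelihood with the overall mean
    profiled out (ML version; the paper penalizes the restricted likelihood)."""
    w = 1.0 / (v + tau ** 2)
    mu = np.sum(w * y) / np.sum(w)
    return 0.5 * np.sum(np.log(w)) - 0.5 * np.sum(w * (y - mu) ** 2)

def bayes_modal_tau(y, v, shape=2.0, rate=1e-4):
    """Posterior mode of tau under a Gamma(shape, rate) prior on tau."""
    def neg(log_tau):
        tau = np.exp(log_tau)   # optimize on the log scale to keep tau > 0
        return -(profile_loglik(tau, y, v)
                 + (shape - 1.0) * np.log(tau) - rate * tau)
    res = optimize.minimize_scalar(neg, bounds=(-10.0, 5.0), method="bounded")
    return float(np.exp(res.x))

y = np.array([0.12, 0.20, 0.08, 0.15])     # toy study effects
v = np.array([0.01, 0.02, 0.01, 0.015])    # their sampling variances
print(bayes_modal_tau(y, v))
```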

19.
Measurement error models offer a flexible framework for modeling data collected in studies comparing methods of quantitative measurement. These models generally make two simplifying assumptions: (i) the measurements are homoscedastic, and (ii) the unobservable true values of the methods are linearly related. One or both of these assumptions may be violated in practice. In particular, error variabilities of the methods may depend on the magnitude of measurement, or the true values may be nonlinearly related. Data with these features call for a heteroscedastic measurement error model that allows nonlinear relationships in the true values. We present such a model for the case when the measurements are replicated, discuss its fitting, and explain how to evaluate similarity of measurement methods and agreement between them, which are two common goals of data analysis, under this model. Model fitting involves dealing with lack of a closed form for the likelihood function. We consider estimation methods that approximate either the likelihood or the model to yield approximate maximum likelihood estimates. The fitting methods are evaluated in a simulation study. The proposed methodology is used to analyze a cholesterol dataset. Copyright © 2015 John Wiley & Sons, Ltd.

20.
Longitudinal observational studies provide rich opportunities to examine treatment effectiveness during the course of a chronic illness. However, there are threats to the validity of observational inferences. For instance, clinician judgment and self-selection play key roles in treatment assignment. To account for this, an adjustment such as the propensity score can be used if certain assumptions are fulfilled. Here, we consider a problem that could surface in a longitudinal observational study and has been largely overlooked. It can occur when subjects have a varying number of distinct periods of therapeutic intervention. We evaluate the implications of baseline variables in the propensity model being associated with the number of post-baseline observations per subject and refer to this as 'covariate-dependent representation'. An observational study of antidepressant treatment effectiveness serves as a motivating example. The analyses examine the first 20 years of follow-up data from the National Institute of Mental Health Collaborative Depression Study, a longitudinal, observational study. A simulation study evaluates the consequences of covariate-dependent representation in longitudinal observational studies of treatment effectiveness under a range of data specifications. The simulations found that estimates were adversely affected by underrepresentation when the intraclass correlation among repeated doses and among repeated outcomes was lower. Copyright © 2012 John Wiley & Sons, Ltd.
