Similar documents
20 similar documents retrieved.
1.
Methodology for the meta-analysis of individual patient data with survival end-points is proposed. Motivated by questions about the reliance on hazard ratios as summary measures of treatment effects, a parametric approach is considered and percentile ratios are introduced as an alternative to hazard ratios. The generalized log-gamma model, which includes many common time-to-event distributions as special cases, is discussed in detail. Likelihood inference for percentile ratios is outlined. The proposed methodology is applied to a meta-analysis of glioma data, one of the studies that motivated this work. A simulation study exploring the validity of the proposed methodology is available electronically.
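As a hedged illustration of the percentile-ratio idea, not the paper's generalized log-gamma machinery, the sketch below fits a Weibull accelerated failure time (AFT) model, one special case of the log-gamma family, to simulated right-censored two-arm data by maximum likelihood. Under an AFT model the ratio of any survival percentile between arms equals exp(beta), so the exponentiated treatment coefficient is a median (or any other percentile) ratio. All data and values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 400
x = rng.integers(0, 2, n)                          # treatment indicator (hypothetical data)
t_event = rng.weibull(1.5, n) * np.exp(0.3 * x)    # true acceleration factor exp(0.3)
t_cens = rng.exponential(2.0, n)                   # independent censoring times
time = np.minimum(t_event, t_cens)
event = (t_event <= t_cens).astype(float)

def neg_loglik(par):
    """Right-censored log-likelihood of a Weibull AFT model: log T = mu + beta*x + sigma*W."""
    mu, beta, log_sigma = par
    sigma = np.exp(log_sigma)
    z = (np.log(time) - mu - beta * x) / sigma
    log_f = z - np.exp(z) - np.log(sigma) - np.log(time)   # log-density at observed events
    log_s = -np.exp(z)                                      # log-survival at censored times
    return -np.sum(event * log_f + (1 - event) * log_s)

fit = minimize(neg_loglik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
beta_hat = fit.x[1]
print("estimated percentile (e.g. median) ratio:", np.exp(beta_hat))   # ~ exp(0.3)
```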

2.
Differences across studies in terms of design features and methodology, clinical procedures, and patient characteristics are factors that can contribute to variability in the treatment effect between studies in a meta-analysis (statistical heterogeneity). Regression modelling can be used to examine relationships between treatment effect and covariates with the aim of explaining the variability in terms of clinical, methodological, or other factors. Such an investigation can be undertaken using aggregate data or individual patient data. An aggregate data approach can be problematic as sufficient data are rarely available, and translating aggregate effects to individual patients can often be misleading. An individual patient data approach, although usually more resource demanding, allows a more thorough investigation of potential sources of heterogeneity and enables a fuller analysis of time-to-event outcomes in meta-analysis. Hierarchical Cox regression models are used to identify and explore the evidence for heterogeneity in meta-analysis and to examine the relationship between covariates and censored failure-time data in this context. Alternative formulations of the model are possible and are illustrated using individual patient data from a meta-analysis of five randomized controlled trials comparing two drugs for the treatment of epilepsy. The models are further applied to simulated data examples in which the degree of heterogeneity and the magnitude of treatment effect are varied. The behaviour of each model in each situation is explored and compared.
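One common way to write such a hierarchical Cox model, sketched here in notation of my own (the abstract itself notes that alternative formulations are possible): for patient $j$ in trial $i$,

$$
h_{ij}(t) = h_{0i}(t)\,\exp\{\beta_i\,\mathrm{treat}_{ij} + \boldsymbol{\gamma}^{\mathsf T}\mathbf{x}_{ij}\},
\qquad \beta_i \sim N(\beta,\ \tau^2),
$$

where the trial-specific baseline hazards $h_{0i}(t)$ absorb differences in underlying risk, $\tau^2$ quantifies between-trial heterogeneity in the treatment effect, and patient- or trial-level covariates $\mathbf{x}_{ij}$ (possibly interacted with treatment) are added in an attempt to explain that heterogeneity.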

3.
Meta-analyses of clinical trials often treat the number of patients experiencing a medical event as binomially distributed when individual patient data for fitting standard time-to-event models are unavailable. Assuming identical drop-out time distributions across arms, random censorship, and low proportions of patients with an event, a binomial approach results in a valid test of the null hypothesis of no treatment effect with minimal loss in efficiency compared with time-to-event methods. To deal with differences in follow-up, at the cost of assuming specific distributions for event and drop-out times, we propose a hierarchical multivariate meta-analysis model using the aggregate data likelihood based on the number of cases, fatal cases, and discontinuations in each group, as well as the planned trial duration and group sizes. Such a model also enables exchangeability assumptions about the parameters of the survival distributions, which are more appropriate than exchangeability of the expected proportion of patients with an event across trials of substantially different length. Borrowing information from other trials within a meta-analysis or from historical data is particularly useful for rare event data. Prior information or exchangeability assumptions also avoid the parameter identifiability problems that arise when using more flexible event and drop-out time distributions than the exponential one. We discuss the derivation of robust historical priors and illustrate the discussed methods using an example. We also compare the proposed approach against other aggregate data meta-analysis methods in a simulation study. Copyright © 2016 John Wiley & Sons, Ltd.
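A hedged sketch of the kind of aggregate-data likelihood the abstract refers to, under the simplifying assumption of constant (exponential) event and drop-out hazards acting as competing risks over the planned trial duration: the per-arm counts of events, discontinuations, and completers are then multinomial. This is a deliberately reduced parameterization of my own (it ignores the fatal/non-fatal split and the hierarchical priors), and the counts and hazards below are hypothetical.

```python
import numpy as np
from scipy.stats import multinomial

def arm_loglik(n_events, n_dropouts, n_total, lam_e, lam_d, tau):
    """Aggregate-data log-likelihood for one arm with exponential event (lam_e)
    and drop-out (lam_d) hazards over a planned follow-up of length tau."""
    lam = lam_e + lam_d
    p_any = 1.0 - np.exp(-lam * tau)       # probability something happens before tau
    p_event = (lam_e / lam) * p_any        # event occurs first
    p_drop = (lam_d / lam) * p_any         # drop-out occurs first
    p_complete = np.exp(-lam * tau)        # completes follow-up event-free
    counts = [n_events, n_dropouts, n_total - n_events - n_dropouts]
    return multinomial.logpmf(counts, n_total, [p_event, p_drop, p_complete])

# hypothetical arm: 12 events and 30 discontinuations among 500 patients in a 1-year trial
print(arm_loglik(12, 30, 500, lam_e=0.025, lam_d=0.065, tau=1.0))
```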

4.
Meta-analysis of time-to-event outcomes using the hazard ratio as a treatment effect measure has an underlying assumption that hazards are proportional. The between-arm difference in the restricted mean survival time is a measure that avoids this assumption and allows the treatment effect to vary with time. We describe and evaluate meta-analysis based on the restricted mean survival time for dealing with non-proportional hazards and present a diagnostic method for the overall proportional hazards assumption. The methods are illustrated with application to two individual participant data meta-analyses in cancer. The examples were chosen because they differ in disease severity and the patterns of follow-up, in order to understand the potential impacts on the hazards and the overall effect estimates. We further investigate the estimation methods for the restricted mean survival time in a simulation study. Copyright © 2015 John Wiley & Sons, Ltd.
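A minimal sketch of the restricted mean survival time (RMST) for a single trial, using hypothetical simulated data: the RMST up to a truncation time t* is the area under the Kaplan-Meier curve, and the between-arm difference in RMST is the effect measure the abstract proposes pooling; the meta-analytic pooling step itself is not shown.

```python
import numpy as np

def km_rmst(time, event, t_star):
    """Area under the Kaplan-Meier curve up to t_star (assumes no tied times)."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    at_risk, surv, t_prev, rmst = len(time), 1.0, 0.0, 0.0
    for t, d in zip(time, event):
        if t > t_star:
            break
        rmst += surv * (t - t_prev)        # rectangle under the current KM step
        if d:                              # event: the KM curve drops
            surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1
        t_prev = t
    return rmst + surv * (t_star - t_prev) # final piece up to t_star

rng = np.random.default_rng(0)
t_ctl, t_trt = rng.exponential(1.0, 200), rng.exponential(1.4, 200)
c = rng.uniform(0, 3, 200)                 # administrative censoring
rmst_diff = (km_rmst(np.minimum(t_trt, c), t_trt <= c, 2.0)
             - km_rmst(np.minimum(t_ctl, c), t_ctl <= c, 2.0))
print("RMST difference at t* = 2:", rmst_diff)
```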

5.
Keene ON. Statistics in Medicine 2002; 21(23): 3687-3700.
Estimates of the efficacy of new medicines are key to the investigation of their clinical effectiveness. The most widely recommended approach to summarizing time-to-event data from clinical trials is to use a hazard ratio. When the proportional hazards assumption is questionable, a hazard ratio depends on the length of patient follow-up. Hazard ratios do not directly translate into differences in times to events and therefore can present difficulties in interpretation. This paper describes an area where summary by hazard ratio would seem unsuitable and explores alternative estimates of efficacy. In particular, the difference in median time to event between treatments can provide a useful and consistent measure of efficacy. Methods of calculating confidence intervals for differences in medians for censored time-to-event data are described. Accelerated failure time models provide a useful alternative approach to proportional hazards modelling. Estimates of the ratio of the median time to event between treatments are directly available from these models. One of the reasons given for summarizing time-to-event studies by a hazard ratio is to facilitate meta-analyses. The bootstrap estimate of the standard error for the difference in medians in each trial can provide a method for combining results based on summary statistics.
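A sketch of the bootstrap standard error for the between-treatment difference in median survival that the abstract proposes as a basis for combining trials: each arm's median is read off a simple Kaplan-Meier curve and patients are resampled within arm. The helper names, data, and bootstrap size are my own; in a meta-analysis the per-trial difference and its bootstrap SE would then be pooled with standard inverse-variance methods.

```python
import numpy as np

def km_median(time, event):
    """Smallest observed time at which the Kaplan-Meier estimate falls to 0.5 or below."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    at_risk, surv = len(time), 1.0
    for t, d in zip(time, event):
        if d:
            surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1
        if surv <= 0.5:
            return t
    return np.nan                               # median not reached

def boot_se_median_diff(t1, e1, t0, e0, n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    diffs = []
    for _ in range(n_boot):
        i1 = rng.integers(0, len(t1), len(t1))  # resample patients within each arm
        i0 = rng.integers(0, len(t0), len(t0))
        diffs.append(km_median(t1[i1], e1[i1]) - km_median(t0[i0], e0[i0]))
    return np.nanstd(diffs, ddof=1)

rng = np.random.default_rng(1)
t1, t0 = rng.exponential(1.5, 150), rng.exponential(1.0, 150)
c1, c0 = rng.uniform(0, 4, 150), rng.uniform(0, 4, 150)
print(boot_se_median_diff(np.minimum(t1, c1), t1 <= c1, np.minimum(t0, c0), t0 <= c0))
```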

6.
In many longitudinal studies, the outcomes recorded on each subject include both a sequence of repeated measurements at pre-specified times and the time at which an event of particular interest occurs: for example, death, recurrence of symptoms, or dropout from the study. The event time for each subject may be recorded exactly, interval censored, or right censored. The term joint modelling refers to the statistical analysis of the resulting data while taking account of any association between the repeated measurement and time-to-event outcomes. In this paper, we first discuss different approaches to joint modelling and argue that the analysis strategy should depend on the scientific focus of the study. We then describe in detail a particularly simple, fully parametric approach. Finally, we use this approach to re-analyse data from a clinical trial of drug therapies for schizophrenic patients, in which the event time is an interval-censored or right-censored time to withdrawal from the study due to adverse side effects.

7.
The number needed to treat is a tool often used in clinical settings to illustrate the effect of a treatment. It has been widely adopted in the communication of risks to both clinicians and non-clinicians, such as patients, who are better able to understand this measure than absolute risk or rate reductions. The concept was introduced by Laupacis, Sackett, and Roberts in 1988 for binary data, and extended to time-to-event data by Altman and Andersen in 1999. However, up to the present, there is no definition of the number needed to treat for time-to-event data with competing risks. This paper introduces such a definition using the cumulative incidence function and suggests non-parametric and semi-parametric inferential methods for right-censored time-to-event data in the presence of competing risks. The procedures are illustrated using the data from a breast cancer clinical trial. Copyright © 2013 John Wiley & Sons, Ltd.
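A hedged sketch of the competing-risks number needed to treat described above: NNT(t) = 1 / (CIF_control(t) - CIF_treatment(t)), with the cumulative incidence function (CIF) of the event of interest estimated non-parametrically in the Aalen-Johansen fashion. The cause coding and helper functions are my own; the paper also develops semi-parametric inference, which is not shown.

```python
import numpy as np

def cif(time, cause, t_star, cause_of_interest=1):
    """Non-parametric cumulative incidence at t_star.
    cause: 0 = censored, 1 = event of interest, 2 = competing event (no tied times assumed)."""
    order = np.argsort(time)
    time, cause = np.asarray(time)[order], np.asarray(cause)[order]
    at_risk, all_cause_surv, cif_val = len(time), 1.0, 0.0
    for t, c in zip(time, cause):
        if t > t_star:
            break
        if c == cause_of_interest:
            cif_val += all_cause_surv / at_risk     # S(t-) * d/n increment
        if c != 0:                                  # any event lowers all-cause survival
            all_cause_surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1
    return cif_val

def nnt(time_trt, cause_trt, time_ctl, cause_ctl, t_star):
    risk_diff = cif(time_ctl, cause_ctl, t_star) - cif(time_trt, cause_trt, t_star)
    return 1.0 / risk_diff if risk_diff > 0 else np.inf   # no benefit: NNT undefined/infinite
```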

8.
A new test for the detection of publication bias in meta-analysis with sparse binary data is proposed. The test statistic is based on observed and expected cell frequencies, and the variance of the observed cell frequencies. These quantities are utilized in a rank correlation test. Type I error rate and power of the test are evaluated in simulations; results are compared to those of two other commonly used test procedures. Sample sizes were generated according to findings in a survey of eight German medical journals. Simulation results indicate that, in contrast to existing test procedures, the new test holds the prescribed significance level when data are sparse. However, the power of all tests is low in many situations of practical importance.
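One way such a test statistic can be assembled, sketched under my own assumptions (the paper's exact construction may differ): for each 2x2 table, take the observed treatment-arm event count O together with its hypergeometric mean E and variance V given the table margins, and rank-correlate the standardized deviations (O - E)/sqrt(V) with V using Kendall's tau. The trial data below are hypothetical.

```python
import numpy as np
from scipy.stats import kendalltau

def bias_test(a, n1, c, n2):
    """a, c: event counts in treatment / control arms; n1, n2: arm sizes (one entry per study)."""
    a, n1, c, n2 = map(np.asarray, (a, n1, c, n2))
    m1, n = a + c, n1 + n2                                   # total events, total patients
    e = n1 * m1 / n                                          # hypergeometric mean of a
    v = n1 * n2 * m1 * (n - m1) / (n**2 * (n - 1))           # hypergeometric variance of a
    z = (a - e) / np.sqrt(v)
    return kendalltau(z, v)                                  # (tau, p-value)

a  = [1, 0, 2, 1, 3, 0, 1, 2]
c  = [3, 2, 4, 2, 5, 1, 3, 4]
n1 = [25, 30, 40, 20, 50, 15, 35, 45]
n2 = [25, 28, 42, 22, 48, 16, 33, 44]
print(bias_test(a, n1, c, n2))
```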

9.
The use of standard univariate fixed- and random-effects models in meta-analysis has become well known in the last 20 years. However, these models are unsuitable for meta-analysis of clinical trials that present multiple survival estimates (usually illustrated by a survival curve) during a follow-up period. Therefore, special methods are needed to combine the survival curve data from different trials in a meta-analysis. For this purpose, only fixed-effects models have been suggested in the literature. In this paper, we propose a multivariate random-effects model for joint analysis of survival proportions reported at multiple time points and in different studies, to be combined in a meta-analysis. The model could be seen as a generalization of the fixed-effects model of Dear (Biometrics 1994; 50:989-1002). We illustrate the method by using a simulated data example as well as a clinical data example of meta-analysis with aggregated survival curve data. All analyses can be carried out with standard general linear MIXED model software. Copyright © 2008 John Wiley & Sons, Ltd.

10.
The standard analysis of a time-to-event variable often involves the calculation of a hazard ratio based on a survival model such as Cox regression; however, many people consider such relative measures of effect to be poor expressions of clinical meaningfulness. Two absolute measures of effect are often used to assess clinical meaningfulness: (1) many disease areas frequently use the absolute difference in event rates (or its inverse, the number-needed-to-treat) and (2) oncology frequently uses the difference between the median survival times in the two groups. While both of these measures appear reasonable, they directly contradict each other. This paper describes the basic mathematics leading to the two measures and shows examples. The contradiction described here raises questions about the concept of clinical meaningfulness.
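A worked numerical illustration of the contradiction described above, assuming exponential survival with hypothetical hazards: both scenarios share a hazard ratio of 0.7, yet the scenario with the much larger gain in median survival has the much smaller one-year event-rate difference, and hence the larger number needed to treat.

```python
import numpy as np

def summary(lam_ctl, lam_trt, t=1.0):
    """Exponential survival: gain in median, absolute event-rate difference at t, and NNT."""
    median_gain = np.log(2) * (1 / lam_trt - 1 / lam_ctl)
    risk_diff = np.exp(-lam_trt * t) - np.exp(-lam_ctl * t)   # fewer events on treatment
    return median_gain, risk_diff, 1 / risk_diff

print(summary(0.10, 0.07))   # indolent disease: median gain ~3.0 years, NNT ~36
print(summary(1.00, 0.70))   # aggressive disease: median gain ~0.3 years, NNT ~8
```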

11.
Meta-analysis of individual patient data (IPD) is the gold standard for synthesizing evidence across clinical studies. However, for some studies IPD may not be available and only aggregate data (AD), such as a treatment effect estimate and its standard error, may be obtained. In this situation, methods for combining IPD and AD are important to utilize all the available evidence. In this paper, we develop and assess a range of statistical methods for combining IPD and AD in meta-analysis of continuous outcomes from randomized controlled trials. The methods take either a one-step or a two-step approach. The latter is simple, with IPD reduced to AD so that standard AD meta-analysis techniques can be employed. The one-step approach is more complex but offers a flexible framework to include both patient-level and trial-level parameters. It uses a dummy variable to distinguish IPD trials from AD trials and to constrain which parameters the AD trials estimate. We show that this is important when assessing how patient-level covariates modify treatment effect, as aggregate-level relationships across trials are subject to ecological bias and confounding. We thus develop models to separate within-trial and across-trial treatment-covariate interactions; this ensures that only IPD trials estimate the former, whilst both IPD and AD trials estimate the latter in addition to the pooled treatment effect and any between-study heterogeneity. Extension to multiple correlated outcomes is also considered. Ten IPD trials in hypertension, with blood pressure the continuous outcome of interest, are used to assess the models and identify the benefits of utilizing AD alongside IPD.
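One common way to write the within-/across-trial separation described above, sketched in notation of my own rather than the authors' exact parameterization: for patient $j$ in IPD trial $i$ with outcome $y_{ij}$, covariate $x_{ij}$, and trial mean $\bar{x}_i$,

$$
y_{ij} = \alpha_i + \varphi_i\,\mathrm{treat}_{ij} + \gamma_W\,(x_{ij}-\bar{x}_i)\,\mathrm{treat}_{ij} + e_{ij},
\qquad
\varphi_i = \varphi + \gamma_A\,\bar{x}_i + u_i,\quad u_i \sim N(0,\tau^2),
$$

while an AD trial contributes only its treatment-effect estimate, $\hat{\varphi}_i \sim N(\varphi + \gamma_A\,\bar{x}_i,\ \widehat{\mathrm{var}}(\hat{\varphi}_i) + \tau^2)$. Only IPD trials can inform the within-trial interaction $\gamma_W$; both trial types inform the pooled effect $\varphi$, the across-trial association $\gamma_A$, and the heterogeneity $\tau^2$, and keeping $\gamma_W$ distinct from $\gamma_A$ is what protects the interaction of interest from ecological bias.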

12.
Huang Y, Dagne G, Wu L. Statistics in Medicine 2011; 30(24): 2930-2946.
Normality (symmetry) of the model random errors is a routine assumption for mixed-effects models in many longitudinal studies, but it may unrealistically obscure important features of subject variation. Covariates are usually introduced in the models to partially explain inter-subject variation, but some covariates, such as CD4 cell count, may often be measured with substantial error. This paper formulates a general class of models that allows the model errors to follow skew-normal distributions for the joint behaviour of longitudinal dynamic processes and a time-to-event process of interest. For estimating model parameters, we propose a Bayesian approach to jointly model three components (response, covariate, and time-to-event processes) linked through the random effects that characterize the underlying individual-specific longitudinal processes. We discuss in detail special cases of the model class that jointly model HIV dynamic response in the presence of a CD4 covariate process measured with error and the time to decrease in the CD4/CD8 ratio, providing a tool to assess antiretroviral treatment and to monitor disease progression. We illustrate the proposed methods using data from a clinical trial of HIV treatment. The findings suggest that joint models with a skew-normal distribution may provide more reliable and robust results when the data exhibit skewness, which may be particularly important for HIV/AIDS studies in providing quantitative guidance for better understanding virologic responses to antiretroviral treatment.

13.
14.
Meta-analyses of clinical trials are increasingly seeking to go beyond estimating the effect of a treatment and may also aim to investigate the effect of other covariates and how they alter treatment effectiveness. This requires the estimation of treatment-covariate interactions. Meta-regression can be used to estimate such interactions using published data, but it is known to lack statistical power and to be prone to bias. The use of individual patient data (IPD) can improve estimation of such interactions, among other benefits, but it can be difficult and time-consuming to collect and analyse. This paper derives, under certain conditions, the power of meta-regression and IPD methods to detect treatment-covariate interactions. These power formulae are shown to depend on heterogeneity in the covariate distributions across studies. This allows the derivation of simple tests, based on heterogeneity statistics, for comparing the statistical power of the analysis methods.
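A generic sketch of the kind of power comparison the abstract discusses, using a standard normal approximation rather than the paper's own formulae: the power to detect an interaction of size gamma depends on the standard error its estimator attains, and that standard error is typically far smaller for an IPD analysis (which exploits within-study covariate spread) than for meta-regression (which relies on between-study spread in covariate means). The numbers are hypothetical.

```python
from scipy.stats import norm

def interaction_power(gamma, se, alpha=0.05):
    """Two-sided power of a Wald test for an interaction of size gamma with standard error se."""
    z = norm.ppf(1 - alpha / 2)
    return norm.cdf(abs(gamma) / se - z) + norm.cdf(-abs(gamma) / se - z)

# the same interaction, estimated with meta-regression-like vs IPD-like precision
print(interaction_power(0.3, se=0.25))   # ~0.22: low power
print(interaction_power(0.3, se=0.08))   # ~0.96: high power
```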

15.

Background

Joint modelling of longitudinal and time‐to‐event data is often preferred over separate longitudinal or time‐to‐event analyses as it can account for study dropout, error in longitudinally measured covariates, and correlation between longitudinal and time‐to‐event outcomes. The joint modelling literature focuses mainly on the analysis of single studies with no methods currently available for the meta‐analysis of joint model estimates from multiple studies.

Methods

We propose a 2‐stage method for meta‐analysis of joint model estimates. These methods are applied to the INDANA dataset to combine joint model estimates of systolic blood pressure with time to death, time to myocardial infarction, and time to stroke. Results are compared to meta‐analyses of separate longitudinal or time‐to‐event models. A simulation study is conducted to contrast separate versus joint analyses over a range of scenarios.

Results

Using the real dataset, similar results were obtained by using the separate and joint analyses. However, the simulation study indicated a benefit of use of joint rather than separate methods in a meta‐analytic setting where association exists between the longitudinal and time‐to‐event outcomes.

Conclusions

Where evidence of association between longitudinal and time-to-event outcomes exists, results from joint models, rather than from standalone analyses, should be pooled in 2-stage meta-analyses.
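A sketch of the second (pooling) stage of such a 2-stage approach: the study-specific association estimates produced by the first-stage joint models (for example, the log hazard ratio per unit of the longitudinal marker) and their standard errors are combined with a standard DerSimonian-Laird random-effects meta-analysis. The first-stage joint-model fits are not shown and the estimates below are hypothetical.

```python
import numpy as np

def dersimonian_laird(estimates, ses):
    """Random-effects pooling of study-level estimates with known standard errors."""
    y, se = np.asarray(estimates, float), np.asarray(ses, float)
    w = 1 / se**2
    fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - fixed) ** 2)                       # Cochran's Q
    tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1 / (se**2 + tau2)
    pooled = np.sum(w_star * y) / np.sum(w_star)
    return pooled, np.sqrt(1 / np.sum(w_star)), tau2       # estimate, its SE, heterogeneity

print(dersimonian_laird([0.21, 0.35, 0.18, 0.29], [0.08, 0.10, 0.12, 0.09]))
```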

16.
Often multiple outcomes are of interest in each study identified by a systematic review, and in this situation a separate univariate meta-analysis is usually applied to synthesize the evidence for each outcome independently; an alternative approach is a single multivariate meta-analysis model that utilizes any correlation between outcomes and obtains all the pooled estimates jointly. Surprisingly, multivariate meta-analysis is rarely considered in practice, so in this paper we illustrate the benefits and limitations of the approach to provide helpful insight for practitioners. We compare a bivariate random-effects meta-analysis (BRMA) to two independent univariate random-effects meta-analyses (URMA), and show how and why a BRMA is able to 'borrow strength' across outcomes. Then, on application to two examples in healthcare, we show: (i) given complete data for both outcomes in each study, BRMA is likely to produce individual pooled estimates with very similar standard errors to those from URMA; (ii) given some studies where one of the outcomes is missing at random, the 'borrowing of strength' is likely to allow BRMA to produce individual pooled estimates with noticeably smaller standard errors than those from URMA; (iii) for either complete data or missing data, BRMA will produce a more appropriate standard error of the pooled difference between outcomes as it incorporates their correlation, which is not possible using URMA; and (iv) despite its advantages, BRMA may often not be possible due to the difficulty in obtaining the within-study correlations required to fit the model. Bivariate meta-regression and further research priorities are also discussed.
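For reference, the standard bivariate random-effects formulation being compared with two URMAs can be written as follows (generic notation of my own): study $i$ provides estimates $\hat{\theta}_{1i}, \hat{\theta}_{2i}$ with known within-study standard errors $s_{1i}, s_{2i}$ and within-study correlation $\rho_{Wi}$,

$$
\begin{pmatrix}\hat{\theta}_{1i}\\ \hat{\theta}_{2i}\end{pmatrix}
\sim N\!\left(\begin{pmatrix}\theta_{1i}\\ \theta_{2i}\end{pmatrix},
\begin{pmatrix}s_{1i}^2 & \rho_{Wi}s_{1i}s_{2i}\\ \rho_{Wi}s_{1i}s_{2i} & s_{2i}^2\end{pmatrix}\right),
\qquad
\begin{pmatrix}\theta_{1i}\\ \theta_{2i}\end{pmatrix}
\sim N\!\left(\begin{pmatrix}\theta_{1}\\ \theta_{2}\end{pmatrix},
\begin{pmatrix}\tau_{1}^2 & \rho_{B}\tau_{1}\tau_{2}\\ \rho_{B}\tau_{1}\tau_{2} & \tau_{2}^2\end{pmatrix}\right).
$$

The within-study correlations $\rho_{Wi}$ are what drive the 'borrowing of strength' and, as point (iv) notes, are often the hardest quantities to obtain; setting $\rho_{Wi}$ and $\rho_B$ to zero reduces the model to two independent URMAs.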

17.
Motivated by a real data example on renal graft failure, we propose a new semiparametric multivariate joint model that relates multiple longitudinal outcomes to a time-to-event. To allow for greater flexibility, key components of the model are modelled nonparametrically. In particular, for the subject-specific longitudinal evolutions we use a spline-based approach, the baseline risk function is assumed piecewise constant, and the distribution of the latent terms is modelled using a Dirichlet process prior formulation. Additionally, we discuss the choice of a suitable parameterization, from a practitioner's point of view, to relate the longitudinal process to the survival outcome. Specifically, we present three main families of parameterizations, discuss their features, and present tools to choose between them.

18.
This paper addresses the problem of combining information from independent clinical trials which compare survival distributions of two treatment groups. Current meta-analytic methods which take censoring into account are often not feasible for meta-analyses which synthesize summarized results in published (or unpublished) references, as these methods require information usually not reported. The paper presents methodology which uses the log(-log) survival function difference, i.e. log(-log S2(t)) - log(-log S1(t)), as the contrast index to represent the multiplicative treatment effect on survival in independent trials. This article shows by the second mean value theorem for integrals that this contrast index, denoted theta, is interpretable as a weighted average on a natural logarithmic scale of hazard ratios within the interval [0,t] in a trial. When the within-trial proportional hazards assumption is true, theta is the logarithm of the proportionality constant for the common hazard ratio for the interval considered within the trial. In this situation, an important advantage of using theta as a contrast index in the proposed methodology is that the estimation of theta is not affected by length of follow-up time. Other commonly used indices such as the odds ratio, risk ratio, and risk difference do not have this invariance property under the proportional hazards model, since their estimation may be affected by length of follow-up time as a technical artefact. Thus, the proposed methodology obviates problems which often occur in survival meta-analysis because trials do not report survival at the same length of follow-up time. Even when the within-trial proportional hazards assumption is not realistic, the proposed methodology has the capability of testing a global null hypothesis of no multiplicative treatment effect on the survival distributions of two groups for all studies. A discussion of weighting schemes for meta-analysis is provided; in particular, a weighting scheme based on effective sample sizes is suggested for the meta-analysis of time-to-event data which involves censoring. A medical example illustrating the methodology is given. A simulation investigation suggested that the methodology performs well in the presence of moderate censoring.
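The contrast index itself is directly computable from survival proportions reported at (possibly different) follow-up times, as in this small sketch with illustrative numbers: theta = log(-log S2(t)) - log(-log S1(t)), and under within-trial proportional hazards exp(theta) is the hazard ratio, whatever follow-up time t each trial happens to report.

```python
import numpy as np

def theta(s_treatment, s_control):
    """log(-log) survival difference; equals the log hazard ratio under proportional hazards."""
    return np.log(-np.log(s_treatment)) - np.log(-np.log(s_control))

# e.g. a trial reporting 2-year survival of 0.70 (treatment) vs 0.60 (control)
print(theta(0.70, 0.60), np.exp(theta(0.70, 0.60)))   # theta ~ -0.36, hazard ratio ~ 0.70
```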

19.
In this paper, we model multivariate time-to-event data by composite likelihood of pairwise frailty likelihoods and marginal hazards using natural cubic splines. Both right- and interval-censored data are considered. The suggested approach is applied to two types of family studies using the gamma and stable frailty distributions. The first study is on adoption data, where the association between survival in families of adopted children and their adoptive and biological parents is studied. The second study is a cross-sectional study of the occurrence of back and neck pain in twins, illustrating the methodology in the context of genetic epidemiology. Copyright © 2010 John Wiley & Sons, Ltd.

20.
Functional data are increasingly collected in public health and medical studies to better understand many complex diseases. Besides the functional data, other clinical measures are often collected repeatedly. Investigating the association between these longitudinal data and time to a survival event is of great interest to these studies. In this article, we develop a functional joint model (FJM) to account for functional predictors in both longitudinal and survival submodels in the joint modeling framework. The parameters of FJM are estimated in a maximum likelihood framework via an expectation maximization algorithm. The proposed FJM provides a flexible framework to incorporate many features both in joint modeling of longitudinal and survival data and in functional data analysis. The FJM is evaluated by a simulation study and is applied to the Alzheimer's Disease Neuroimaging Initiative study, a motivating clinical study testing whether serial brain imaging, clinical, and neuropsychological assessments can be combined to measure the progression of Alzheimer's disease. Copyright © 2017 John Wiley & Sons, Ltd.
