Similar Documents
1.
Information from historical trials is important for the design, interim monitoring, analysis, and interpretation of clinical trials. Meta‐analytic models can be used to synthesize the evidence from historical data, which are often only available in aggregate form. We consider evidence synthesis methods for trials with recurrent event endpoints, which are common in many therapeutic areas. Such endpoints are typically analyzed by negative binomial regression. However, the individual patient data necessary to fit such a model are usually unavailable for historical trials reported in the medical literature. We describe approaches for back‐calculating model parameter estimates and their standard errors from available summary statistics with various techniques, including approximate Bayesian computation. We propose to use a quadratic approximation to the log‐likelihood for each historical trial based on two independent terms for the log mean rate and the log of the dispersion parameter. A Bayesian hierarchical meta‐analysis model then provides the posterior predictive distribution for these parameters. Simulations show that this approach, using back‐calculated parameter estimates, yields inference very similar to that obtained using parameter estimates from individual patient data as the input. We illustrate how to design and analyze a new randomized placebo‐controlled exacerbation trial in severe eosinophilic asthma using data from 11 historical trials.
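
A minimal sketch of the pooling step, assuming hypothetical back-calculated log-rate summaries from five historical trials; it substitutes a method-of-moments (DerSimonian-Laird) random-effects model for the paper's Bayesian hierarchical model, and every number is invented:

```python
import numpy as np

# Hypothetical back-calculated summaries from historical trials:
# estimated log mean exacerbation rate and its standard error per trial.
log_rate = np.array([-0.51, -0.22, -0.36, -0.45, -0.30])
se = np.array([0.10, 0.12, 0.08, 0.15, 0.11])

# DerSimonian-Laird method-of-moments estimate of between-trial heterogeneity.
w = 1.0 / se**2
mu_fixed = np.sum(w * log_rate) / np.sum(w)
Q = np.sum(w * (log_rate - mu_fixed) ** 2)
tau2 = max(0.0, (Q - (len(log_rate) - 1)) /
           (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects pooled log rate and the (normal-approximation) predictive
# distribution for the log rate in a new trial.
w_re = 1.0 / (se**2 + tau2)
mu = np.sum(w_re * log_rate) / np.sum(w_re)
var_mu = 1.0 / np.sum(w_re)
pred_sd = np.sqrt(var_mu + tau2)

print(f"pooled log rate: {mu:.3f} (SE {np.sqrt(var_mu):.3f})")
print(f"tau^2: {tau2:.4f}; predictive SD for a new trial: {pred_sd:.3f}")
```

Under the paper's approach, the log of the dispersion parameter would be handled analogously as the second, independent term of the quadratic log-likelihood approximation.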

2.
Joint modeling of longitudinal and survival data can provide more efficient and less biased estimates of treatment effects through accounting for the associations between these two data types. Sponsors of oncology clinical trials routinely and increasingly include patient-reported outcome (PRO) instruments to evaluate the effect of treatment on symptoms, functioning, and quality of life. Published reports of these trials, however, typically do not include jointly modeled analyses and results. We formulated several joint models based on a latent growth model for longitudinal PRO data and a Cox proportional hazards model for survival data. The longitudinal and survival components were linked through either a latent growth trajectory or shared random effects. We applied these models to data from a randomized phase III oncology clinical trial in mesothelioma. We compared the results derived under different model specifications and showed that the use of joint modeling may result in improved estimates of the overall treatment effect.
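
A rough two-stage sketch of the linkage idea, not the paper's joint likelihood: per-patient least-squares slopes summarize each PRO trajectory, and the estimated slope then enters a Cox model alongside treatment. The simulated data and the use of the lifelines package are assumptions for illustration; a genuine joint model estimates both submodels simultaneously:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
visits = np.array([0.0, 1.5, 3.0, 4.5, 6.0])  # months

rows, surv = [], []
for i in range(200):
    treat = i % 2
    slope = rng.normal(-0.3 + 0.4 * treat, 0.25)   # latent PRO trajectory slope
    pro = 50 + slope * visits + rng.normal(0, 2, visits.size)
    t = rng.exponential(np.exp(2.5 + slope))       # hazard tied to the slope
    t_obs, event = min(t, 6.0), int(t <= 6.0)
    for v, y in zip(visits, pro):
        if v <= t_obs:                             # PRO truncated at death/censoring
            rows.append({"id": i, "time": v, "pro": y})
    surv.append({"id": i, "treat": treat, "T": t_obs, "E": event})

long_df, surv_df = pd.DataFrame(rows), pd.DataFrame(surv)

# Stage 1: per-patient least-squares slope of the PRO trajectory.
slopes = long_df.groupby("id").apply(
    lambda d: np.polyfit(d["time"], d["pro"], 1)[0] if len(d) > 1 else np.nan
).rename("slope")

# Stage 2: Cox model with the estimated slope as the linking covariate.
df = surv_df.join(slopes, on="id").dropna()
cph = CoxPHFitter()
cph.fit(df[["T", "E", "treat", "slope"]], duration_col="T", event_col="E")
print(cph.params_)
```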

3.
Quality of life (QOL) assessment is a key component of many clinical studies and frequently requires the use of single global summary measures that capture the overall balance of findings from a potentially wide-ranging assessment of QOL issues. We propose and evaluate an irregular multilevel latent variable model suitable for use as a global summary tool for health-related QOL assessments. The proposed model is a multiple-indicator, multiple-cause (MIMIC) style of model with a two-level latent variable structure. We approach the modeling from a general multilevel modeling perspective, using a combination of random and nonrandom cluster types to accommodate the mixture of issues commonly evaluated in health-related QOL assessments: overall perceptions of QOL and health, along with specific psychological, physical, social, and functional issues. Using clinical trial data, we evaluate the merits and application of this approach in detail, both for mean global QOL and for change from baseline. We show that the proposed model generally performs well in comparing global patterns of treatment effect and provides more precise and reliable estimates than several common alternatives, such as selecting from or averaging observed global item measures. A variety of computational methods could be used for estimation. We derived a closed-form expression for the marginal likelihood that can be used to obtain maximum likelihood parameter estimates when normality assumptions are reasonable. Our approach is useful for QOL evaluations aimed at pharmacoeconomic or individual clinical decision making and in obtaining summary QOL measures for use in quality-adjusted survival analyses.
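
To show only the closed-form marginal-likelihood point, here is a deliberately simplified single-level, one-factor version under normality; the paper's model is multilevel with a mixture of random and nonrandom clusters, and all loadings, variances, and data below are invented:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# One latent global-QOL factor measured by four observed items; a deliberately
# simplified stand-in for the paper's multilevel structure.
lam = np.array([1.0, 0.8, 0.9, 0.7])     # factor loadings
theta = np.array([0.5, 0.6, 0.4, 0.7])   # unique (residual) variances
mu = np.array([60.0, 55.0, 58.0, 62.0])  # item means

# Integrating out the latent factor leaves a closed-form marginal:
# y ~ N(mu, lam lam' + diag(theta)), so no numerical integration is needed
# when normality is reasonable.
Sigma = np.outer(lam, lam) + np.diag(theta)

y = rng.multivariate_normal(mu, Sigma, size=300)         # simulated item data
loglik = multivariate_normal(mu, Sigma).logpdf(y).sum()  # marginal log-likelihood
print(f"marginal log-likelihood: {loglik:.1f}")
# ML estimation would maximise this quantity over (lam, theta, mu), e.g. with
# scipy.optimize.minimize applied to the negative log-likelihood.
```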

4.
Economic evaluation is often seen as a branch of health economics divorced from mainstream econometric techniques. Instead, it is perceived as relying on statistical methods for clinical trials. Furthermore, the statistic of interest in cost-effectiveness analysis, the incremental cost-effectiveness ratio, is not amenable to regression-based methods, hence the traditional reliance on comparing aggregate measures across the arms of a clinical trial. In this paper, we explore the potential for health economists undertaking cost-effectiveness analysis to exploit the plethora of established econometric techniques through the use of the net-benefit framework, a recently suggested reformulation of the cost-effectiveness problem that avoids the reliance on cost-effectiveness ratios and their associated statistical problems. This allows the cost-effectiveness problem to be formulated within a standard regression framework. We provide an example with empirical data to illustrate how a regression framework can enhance the net-benefit method. We go on to suggest that the practical advantages of the net-benefit regression approach include being able to use established econometric techniques, adjust for imperfect randomisation, and identify important subgroups in order to estimate the marginal cost-effectiveness of an intervention.
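
A minimal sketch of net-benefit regression: convert each patient's cost and effect into a net benefit at a chosen willingness-to-pay, then fit an ordinary regression. The data, willingness-to-pay value, and covariate below are invented; the treatment coefficient estimates the incremental net benefit, and covariates can adjust for imperfect randomisation:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 300
treat = rng.integers(0, 2, n)
age = rng.normal(60, 10, n)

# Hypothetical patient-level costs and effects (e.g. QALYs).
cost = 2000 + 800 * treat + 15 * (age - 60) + rng.normal(0, 400, n)
effect = 0.70 + 0.05 * treat - 0.002 * (age - 60) + rng.normal(0, 0.10, n)

lam = 30000.0  # willingness-to-pay per unit of effect (lambda)
df = pd.DataFrame({"nb": lam * effect - cost, "treat": treat, "age": age})

# The coefficient on `treat` estimates the incremental net benefit
# (here the true value is 30000 * 0.05 - 800 = 700); a positive value
# means the intervention is cost-effective at this lambda.
fit = smf.ols("nb ~ treat + age", data=df).fit()
print(fit.summary().tables[1])
```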

5.
This paper describes methods appropriate for calculating sample sizes for clinical trials assessing quality of life (QOL). An example from a randomized trial of patients with small cell lung cancer completing the Hospital Anxiety and Depression Scale (HADS) is used for illustration. Sample size estimates calculated assuming that the data are either Normal or binary are compared with estimates derived using an ordered categorical approach. In our example, since the data are very skewed, the Normal and binary approaches are shown to be unsatisfactory: binary methods may lead to substantial overestimates of sample size, and Normal methods take no account of the asymmetric nature of the distribution. When summarizing normative data for QOL scores, the frequency distributions should always be given so that one can assess whether non-parametric methods should be used for sample size calculations and analysis. Further work is needed to discover what changes in QOL scores represent clinical importance for health technology interventions.
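
A sketch of the ordered categorical calculation, using the proportional-odds sample-size formula usually attributed to Whitehead (1993); the anticipated category proportions and planning odds ratio below are invented:

```python
import math
from scipy.stats import norm

def ordinal_sample_size(pbar, odds_ratio, alpha=0.05, power=0.90):
    """Total sample size (1:1 allocation) for an ordered categorical outcome
    under the proportional odds model:
    N = 12 * (z_{1-a/2} + z_{1-b})^2 / ((log OR)^2 * (1 - sum(pbar_k^3)))."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    denom = math.log(odds_ratio) ** 2 * (1 - sum(p**3 for p in pbar))
    return math.ceil(12 * z**2 / denom)

# Invented planning values: anticipated HADS category proportions averaged
# over the two arms, and an odds ratio of 2 on the proportional odds scale.
print(ordinal_sample_size([0.50, 0.25, 0.15, 0.10], odds_ratio=2.0))
```

With only two categories this expression reduces to the usual binary formula, which is one way to see what the binary approach discards when the outcome is genuinely ordered.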

6.
Quality of life (QoL) has become an accepted and widely used endpoint in clinical trials. The analytical tools used for QoL evaluations in clinical trials differ from those used for the more traditional endpoints, such as response to disease, overall survival, or progression-free survival. Since QoL assessments are generally performed on self-administered questionnaires, QoL endpoints are more prone to a placebo effect than traditional clinical endpoints. The placebo effect is a well-documented phenomenon in clinical trials, with dramatic consequences for the clinical development of new therapeutic agents. In order to account for the placebo effect, a multivariate latent variable model is proposed, which allows for misclassification in the QoL item responses. The approach is flexible in the sense that it can be used for the analysis of a wide variety of multi-dimensional QoL instruments. For statistical inference, maximum likelihood estimates and their standard errors are obtained using a Monte Carlo EM algorithm. The approach is illustrated with an analysis of data from a cardiovascular phase III clinical trial.

7.
Group sequential methods are becoming increasingly popular for monitoring and analysing large controlled trials, especially clinical trials. They not only allow trialists to monitor the data as they accumulate, but also reduce the expected sample size. Such methods are traditionally based on preserving the overall type I error by increasing the conservatism of the hypothesis tests performed at any single analysis. Using methods based on hypothesis testing in this way makes point estimation and the calculation of confidence intervals difficult and controversial. We describe a class of group sequential procedures based on a single parameter which reflects initial scepticism towards unexpectedly large effects. These procedures have good expected and maximum sample sizes, and lead to natural point and interval estimates of the treatment difference. Hypothesis tests, point estimates and interval estimates calculated using this procedure are consistent with each other, and tests and estimates made at the end of the trial are consistent with interim tests and estimates. This class of sequential tests can be viewed either in a traditional group sequential manner or as a Bayesian solution to the problem.
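
One way to encode a single scepticism parameter is a normal prior centred at no effect, scaled so that unexpectedly large effects receive little prior weight; the sketch below monitors the resulting posterior at each interim look. The thresholds, planning effect, and data are invented, and the paper's exact procedure and boundaries will differ:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

theta_alt = 0.4  # optimistic planning effect
# Sceptical prior: centred at zero, with only a 5% prior chance that the
# true effect exceeds the optimistic planning value.
tau = theta_alt / norm.ppf(0.95)
prior_prec = 1.0 / tau**2

sigma = 1.0       # known outcome SD
true_theta = 0.35
eps = 0.025       # stop for efficacy when P(theta <= 0 | data) < eps

n_per_look, n_tot, sum_y = 50, 0, 0.0
for k in range(1, 6):
    y = rng.normal(true_theta, sigma, n_per_look)  # new group of observations
    n_tot += n_per_look
    sum_y += y.sum()
    post_var = 1.0 / (prior_prec + n_tot / sigma**2)
    post_mean = post_var * (sum_y / sigma**2)      # prior mean is zero
    p_neg = norm.cdf(0.0, post_mean, np.sqrt(post_var))
    print(f"look {k}: posterior mean {post_mean:.3f}, P(theta<=0) = {p_neg:.4f}")
    if p_neg < eps:
        print("stop for efficacy")
        break
```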

8.
Integrative Data Analysis (IDA) encompasses a collection of methods for data synthesis that pools participant-level data across multiple studies. Compared with single-study analyses, IDA provides larger sample sizes, better representation of participant characteristics, and often increased statistical power. Many of the methods currently available for IDA have focused on examining developmental changes using longitudinal observational studies employing different measures across time and study. However, IDA can also be useful in synthesizing across multiple randomized clinical trials to improve our understanding of the comprehensive effectiveness of interventions, as well as mediators and moderators of those effects. The pooling of data from randomized clinical trials presents a number of methodological challenges, and we discuss ways to examine potential threats to internal and external validity. Using as an illustration a synthesis of 19 randomized clinical trials on the prevention of adolescent depression, we articulate IDA methods that can be used to minimize threats to internal validity, including (1) heterogeneity in the outcome measures across trials, (2) heterogeneity in the follow-up assessments across trials, (3) heterogeneity in the sample characteristics across trials, (4) heterogeneity in the comparison conditions across trials, and (5) heterogeneity in the impact trajectories. We also demonstrate a technique for minimizing threats to external validity in synthesis analysis that may result from non-availability of some trial datasets. The proposed methods rely heavily on latent variable modeling extensions of the latent growth curve model, as well as missing data procedures. The goal is to provide strategies for researchers considering IDA.

9.
Many long-term clinical trials collect both a vector of repeated measurements and an event time on each subject; often, the two outcomes are dependent. One example is the use of surrogate markers to predict disease onset or survival. Another is longitudinal trials which have outcome-related dropout. We describe a mixture model for the joint distribution which accommodates incomplete repeated measures and right-censored event times, and provide methods for full maximum likelihood estimation. The methods are illustrated through analysis of data from a clinical trial for a new schizophrenia therapy; in the trial, dropout time is closely related to outcome, and the dropout process differs between treatments. The parameter estimates from the model are used to make a treatment comparison after adjusting for the effects of dropout. An added benefit of the analysis is that it permits using the repeated measures to increase efficiency of estimates of the event time distribution. © 1997 by John Wiley & Sons, Ltd.
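
The mixture idea can be sketched as a pattern-mixture toy example: estimate the outcome model within each dropout pattern, then average over the estimated pattern probabilities. So that the adjusted answer can be checked, this simulation generates outcomes even for dropouts, which real data would not provide; all numbers are invented:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 400
treat = rng.integers(0, 2, n)
# Dropout differs by arm and is outcome-related: dropouts fare worse.
complete = rng.binomial(1, np.where(treat == 1, 0.8, 0.6))
# For checkability the 'final' outcome is generated for everyone, including
# dropouts; in practice the within-pattern models would supply these values.
y = 20 - 4 * treat + 6 * (1 - complete) + rng.normal(0, 3, n)

df = pd.DataFrame({"treat": treat, "complete": complete, "y": y})

# Pattern-mixture idea: estimate the outcome within each dropout pattern,
# then average over the estimated pattern probabilities in each arm.
means = df.groupby(["treat", "complete"])["y"].mean()
p_comp = df.groupby("treat")["complete"].mean()
adj = {t: p_comp[t] * means[(t, 1)] + (1 - p_comp[t]) * means[(t, 0)]
       for t in (0, 1)}

naive = df[df["complete"] == 1].groupby("treat")["y"].mean()
print(f"completers-only difference:       {naive[1] - naive[0]:+.2f}")
print(f"pattern-mixture (all randomized): {adj[1] - adj[0]:+.2f}")
```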

10.
In clinical trials with a long period of time between randomization and the primary assessment of the patient, the same assessments are often undertaken at intermediate times. When an interim analysis is conducted, in addition to the patients who have completed the primary assessment, there will be others who have by then undergone only intermediate assessments. The efficiency of the interim analysis can be increased by including data from these additional patients. This paper compares four methods, based on model-free estimates of transition probabilities, for incorporating intermediate assessments from patients who have not completed the trial. It is assumed that the observations are binary and that there is one intermediate assessment. The methods are the score and Wald approaches, each with the log-odds ratio and probability difference parameterizations. Simulations show that all four approaches have good properties in moderate to large sample sizes.
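
A minimal sketch with one binary intermediate assessment: transition probabilities estimated model-free from completers stand in for the unobserved final outcomes of patients still in follow-up. The plug-in estimator below is simpler than the paper's score and Wald constructions, and all data are simulated:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 200

# One arm: binary intermediate assessment X and binary final assessment Y,
# with P(Y=1 | X=0) = 0.3 and P(Y=1 | X=1) = 0.8 (invented values).
x = rng.binomial(1, 0.5, n)
y = rng.binomial(1, np.where(x == 1, 0.8, 0.3))
completed = rng.random(n) < 0.6   # 60% have reached the final assessment

# Model-free transition probabilities P(Y=1 | X=x), estimated from completers.
p_hat = {k: y[completed & (x == k)].mean() for k in (0, 1)}

# Plug-in estimate of P(Y=1): observed finals for completers, expected finals
# (via the transition probabilities) for patients with only the intermediate.
expected = np.array([p_hat[int(v)] for v in x])
augmented = np.where(completed, y, expected).mean()

print(f"transition probabilities: {p_hat}")
print(f"completers only: {y[completed].mean():.3f}; augmented: {augmented:.3f}")
```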

11.
BACKGROUND: It has been recommended that onset of antidepressant action be assessed using survival analyses with assessments taken at least twice per week. However, such an assessment schedule is problematic to implement. The present study assessed the feasibility of comparing onset of action between treatments using a categorical repeated measures approach with a traditional assessment schedule. METHOD: Four scenarios representative of antidepressant clinical trials were created by varying mean improvements over time. Two assessment schedules were compared within the simulated 8-week studies: (i) 'frequent' assessment: 16 postbaseline visits (twice weekly for 8 weeks); (ii) 'traditional' assessment: 5 postbaseline visits (Weeks 1, 2, 4, 6, and 8). Onset was defined as a 20 per cent improvement from baseline, which had to be sustained at all subsequent assessments. Differences between treatments were analysed with a survival analysis (KM = Kaplan-Meier product limit method) and a categorical mixed-effects model repeated measures analysis (MMRM-CAT). RESULTS: More frequent assessments resulted in small reductions in empirical standard errors compared with traditional assessments for both analytic methods. More frequent assessments altered estimates of treatment group differences in KM such that power was increased when the difference between treatments was increasing over time, but power decreased when the treatment difference decreased over time. More frequent assessments had a minimal effect on estimates of treatment group differences in MMRM-CAT. The MMRM-CAT analysis of data from a traditional assessment schedule provided adequate control of type I error, and had power comparable to or greater than that of KM analyses of data from either a frequent or a traditional assessment schedule. CONCLUSION: In the scenarios tested in this study it was reasonable to assess treatment group differences in onset of action with MMRM-CAT and a traditional assessment schedule. Additional research is needed to assess whether these findings hold in data with drop-out and across definitions of onset.
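
The sustained-onset definition is straightforward to operationalize; the sketch below (invented, HAMD-style trajectories where lower scores are better) returns an onset week and event indicator that could feed either a KM or a categorical repeated measures analysis:

```python
import numpy as np

def onset_visit(baseline, scores, visit_weeks, threshold=0.20):
    """First visit with >= `threshold` improvement from baseline that is
    sustained at every subsequent visit; returns (week, observed_flag)."""
    improved = (baseline - np.asarray(scores, float)) / baseline >= threshold
    for j in range(len(improved)):
        if improved[j:].all():          # sustained from visit j onward
            return visit_weeks[j], True
    return visit_weeks[-1], False       # no sustained onset: censored

# Invented trajectories on the 'traditional' schedule (baseline score 24,
# so a 20% improvement means a score of 19.2 or lower).
weeks = [1, 2, 4, 6, 8]
print(onset_visit(24, [22, 18, 17, 15, 14], weeks))  # (2, True)
print(onset_visit(24, [20, 15, 21, 19, 18], weeks))  # (6, True): the week-4
                                                     # relapse delays onset
```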

12.
A complication when assessing quality of life data longitudinally is that in many trials a substantial percentage of patients die before completing all of the assessments. Furthermore, a patient's risk of dying might be predicted by his current quality of life. This suggests jointly modelling quality of life and survival, and using this combined information to summarize the outcome. The aim of this paper is to address the complicated issues, such as death, present in analysing multiple-item ordinal quality of life data in clinical trials, while recognizing the psychometric properties of the quality of life instrument being used. This is done by combining an item response model and Cox's proportional hazards model, where a latent variable process for quality of life determines the probability of selecting various options on quality of life items, and also serves as a time-dependent covariate in the survival model. We use Markov chain Monte Carlo methods to obtain parameter estimates, and then compute a summary measure, the area under the QOL curve, to compare the efficacy of the treatments. The methods are illustrated with an analysis of data from the Vesnarinone trial of patients with severe heart failure, in which quality of life was assessed with the Minnesota Living with Heart Failure Questionnaire.
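
Once the latent QOL trajectory has been estimated (in the paper, via MCMC), the area-under-QOL-curve summary is a simple trapezoid-rule integral; a sketch with invented posterior-mean trajectories, truncated at death or end of follow-up:

```python
import numpy as np

def auc_trapezoid(times, qol):
    """Area under a QOL trajectory by the trapezoid rule."""
    t, q = np.asarray(times, float), np.asarray(qol, float)
    return float(np.sum(0.5 * (q[1:] + q[:-1]) * np.diff(t)))

# Invented posterior-mean latent QOL trajectories (higher = better),
# one patient per arm; the placebo patient dies at month 9.
t_active, q_active = [0, 3, 6, 9, 12], [0.2, 0.5, 0.6, 0.6, 0.5]
t_placebo, q_placebo = [0, 3, 6, 9], [0.2, 0.1, 0.0, -0.2]

print(f"active AUC:  {auc_trapezoid(t_active, q_active):.2f}")
print(f"placebo AUC: {auc_trapezoid(t_placebo, q_placebo):.2f}")
```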

13.
When randomized trial results are available for several different groups of patients, neither applying the overall results to each type of patient nor using group-specific results is entirely satisfactory. Instead, we estimate group-specific treatment effects using a Bayesian approach with informative priors for the treatment × group interactions. We describe how we elicited these prior beliefs about the effects of a new drug for the treatment of heart failure in three different patient groups. Using results from three trials, one in each patient group, the posterior mean treatment effects are very similar to the trial-specific maximum likelihood estimates, showing that in this case each trial effectively stands by itself. Our methods can also be applied to subgroup analyses in a single clinical trial, where subgroup-specific posterior means are likely to lie between the subgroup-specific maximum likelihood estimates and the pooled maximum likelihood estimates.
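
Under normal approximations this Bayesian calculation is compact: each group-specific posterior mean is a precision-weighted average of the trial-specific estimate and the prior mean for that group's effect. The estimates and prior settings below are invented; note that with a prior this diffuse the posterior means sit close to the trial-specific MLEs, echoing the finding that each trial effectively stands by itself:

```python
import numpy as np

# Invented trial-specific log hazard ratio estimates (one trial per patient
# group) and their standard errors.
theta_hat = np.array([-0.35, -0.10, 0.05])
se = np.array([0.12, 0.15, 0.20])

# Exchangeable prior for the group-specific effects: centred at mu, SD tau.
# A small tau pulls the groups together; a large tau lets each trial stand
# by itself.
mu, tau = -0.15, 0.50

post_prec = 1 / se**2 + 1 / tau**2
post_mean = (theta_hat / se**2 + mu / tau**2) / post_prec
post_sd = np.sqrt(1 / post_prec)

for g in range(3):
    print(f"group {g}: MLE {theta_hat[g]:+.3f} -> posterior "
          f"{post_mean[g]:+.3f} (SD {post_sd[g]:.3f})")
```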

14.
Value in Health, 2013, 16(1): 164-176
Objectives: To present a step-by-step example of the examination of heterogeneity within clinical trial data by using a growth mixture modeling (GMM) approach. Methods: Secondary data from a longitudinal double-blind clinical drug study were used. Patients received enalapril or placebo and were followed for 2 years during the drug component, followed by a 3-year postdrug component. Primary variables of interest were creatinine levels during the drug component and number of hospitalizations in the postdrug component. Latent growth modeling (LGM) methods were used to examine the treatment response variability in the data. GMM methods were applied where substantial variability was found to identify latent (unobserved) subsets of differential responders, using treatment groups as known classes. Post hoc analyses were applied to characterize emergent subgroups. Results: LGM methods demonstrated a large variability in creatinine levels. GMM methods identified two subsets of patients for each treatment group. Placebo class 2 (7.0% of the total sample) and enalapril class 2 (8.5%) include individuals whose creatinine levels start at 1.114 mg/dl and 1.108 mg/dl, respectively, and show worsening (slopes: 0.023 and 0.017, respectively). Placebo class 1 (43.1%) and enalapril class 1 (41.4%) individuals start with lower creatinine levels (1.082 and 1.083 mg/dl, respectively) and show very minimal change (0.008 and 0.003, respectively). Post hoc analyses revealed significant differences between placebo/enalapril class 1 and placebo/enalapril class 2 in terms of New York Heart Association functional ability, depression, functional impairment, creatinine levels, mortality, and hospitalizations. Conclusions: GMM methods can identify subsets of differential responders in clinical trial data. This can result in a more accurate understanding of treatment effects.
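
A two-stage sketch of the GMM idea with scikit-learn: summarize each patient's trajectory by a least-squares intercept and slope, then fit a Gaussian mixture over those growth summaries. A full GMM estimates the mixture and the growth model jointly (typically in specialised latent variable software); the class sizes and creatinine values below are loosely patterned on the abstract and otherwise invented:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(15)
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # years in the drug component

def trajectories(n, intercept, slope, noise=0.015):
    """Creatinine trajectories with visit-to-visit measurement noise."""
    return intercept + slope * t + rng.normal(0, noise, (n, t.size))

# Invented mix echoing the abstract: a large stable class and a small
# class starting higher and worsening.
Y = np.vstack([trajectories(430, 1.08, 0.005),
               trajectories(70, 1.12, 0.025)])

# Stage 1: per-patient growth summaries (intercept, slope).
X = np.array([np.polyfit(t, y, 1)[::-1] for y in Y])

# Stage 2: Gaussian mixture over the growth summaries.
gm = GaussianMixture(n_components=2, n_init=10, random_state=0).fit(X)
for k in range(2):
    print(f"class {k}: weight {gm.weights_[k]:5.1%}, "
          f"intercept {gm.means_[k, 0]:.3f}, slope {gm.means_[k, 1]:+.4f}")
```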

15.
Because costs and outcomes of medical treatments may vary from country to country in important ways, decision makers are increasingly interested in having data based on their own country's health care situations. This paper proposes methods for estimating country-specific cost-effectiveness ratios from data available from multinational clinical trials. It examines how clinical and economic outcomes interact when estimating treatment effects on cost and proposes empirical methods for capturing these interactions and incorporating them when making country-specific estimates. We use data from a multinational phase III trial of tirilazad mesylate for the treatment of subarachnoid haemorrhage to illustrate these methods. Our findings suggest that it is possible for meaningful country-by-country differences to be found in such trial data. These differences can be useful in informing reimbursement, utilization, and other decisions taken at the country level. © 1998 John Wiley & Sons, Ltd.

16.
Risk assessments and intervention trials have been used by the U.S. Environmental Protection Agency to estimate drinking water health risks. Seldom are both methods used concurrently. Between 2001 and 2003, illness data from a trial were collected simultaneously with exposure data, providing a unique opportunity to compare direct risk estimates of waterborne disease from the intervention trial with indirect estimates from a risk assessment. Comparing the group with water treatment (active) with that without water treatment (sham), the estimated annual attributable disease rate (cases per 10,000 persons per year) from the trial provided no evidence of a significantly elevated drinking water risk [attributable risk = -365 cases/year, sham minus active; 95% confidence interval (CI), -2,555 to 1,825]. The predicted mean rate of disease per 10,000 persons per person-year from the risk assessment was 13.9 (2.5, 97.5 percentiles: 1.6, 37.7) assuming 4 log removal due to viral disinfection and 5.5 (2.5, 97.5 percentiles: 1.4, 19.2) assuming 6 log removal. Risk assessments are important under conditions of low risk when estimates are difficult to attain from trials. In particular, this assessment pointed toward the importance of attaining site-specific treatment data and the clear need for a better understanding of viral removal by disinfection. Trials provide direct risk estimates, and the upper confidence limit estimates, even if not statistically significant, are informative about possible upper estimates of likely risk. These differences suggest that conclusions about waterborne disease risk may be strengthened by the joint use of these two approaches. Key words: drinking water, gastrointestinal, intervention trial, microbial risk assessment, waterborne pathogens.
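
The reported interval can be unpacked with simple arithmetic, assuming a symmetric normal-based confidence interval:

```python
# Reported contrast (cases per 10,000 person-years, sham minus active) and
# its 95% confidence interval, taken from the trial results above.
ar, lo, hi = -365.0, -2555.0, 1825.0

se = (hi - lo) / (2 * 1.96)   # implied standard error under normality
z = ar / se
print(f"implied SE: {se:.0f}; z = {z:.2f}")   # z of about -0.33: no evidence
# Even so, the upper limit (~1,825 cases/10,000 per year) bounds the plausible
# waterborne risk; it sits far above the risk-assessment predictions of
# ~5.5-13.9, which is why the two approaches complement each other.
```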

17.
As evidence accumulates within a meta‐analysis, it is desirable to determine when the results could be considered conclusive to guide systematic review updates and future trial designs. Adapting sequential testing methodology from clinical trials for application to pooled meta‐analytic effect size estimates appears well suited for this objective. In this paper, we describe a Bayesian sequential meta‐analysis method, in which an informative heterogeneity prior is employed and stopping rule criteria are applied directly to the posterior distribution for the treatment effect parameter. Using simulation studies, we examine how well this approach performs under different parameter combinations by monitoring the proportion of sequential meta‐analyses that reach incorrect conclusions (to yield error rates), the number of studies required to reach conclusion, and the resulting parameter estimates. By adjusting the stopping rule thresholds, the overall error rates can be controlled within the target levels and are no higher than those of alternative frequentist and semi‐Bayes methods for the majority of the simulation scenarios. To illustrate the potential application of this method, we consider two contrasting meta‐analyses using data from the Cochrane Library and compare the results of employing different sequential methods while examining the effect of the heterogeneity prior in the proposed Bayesian approach. Copyright © 2016 John Wiley & Sons, Ltd.
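
A minimal sketch of the sequential updating, with the heterogeneity variance fixed at a single value rather than given the paper's informative prior, and with invented study results; the stopping rule is applied directly to the posterior probability of benefit (here, a negative log odds ratio):

```python
import numpy as np
from scipy.stats import norm

# Study-level log odds ratio estimates and SEs, in accrual order (invented).
y = np.array([-0.42, -0.10, -0.55, -0.28, -0.35, -0.20])
se = np.array([0.30, 0.35, 0.28, 0.25, 0.22, 0.20])

tau2 = 0.04   # heterogeneity fixed at one value for brevity; the paper
              # places an informative prior on it instead
post_mean, post_var = 0.0, 1.0**2   # vague prior on the pooled effect

for k in range(len(y)):
    v = se[k] ** 2 + tau2                      # marginal variance of study k
    w_post, w_new = 1 / post_var, 1 / v
    post_mean = (w_post * post_mean + w_new * y[k]) / (w_post + w_new)
    post_var = 1 / (w_post + w_new)
    p_benefit = norm.cdf(0, post_mean, np.sqrt(post_var))  # P(theta < 0)
    print(f"after study {k + 1}: theta = {post_mean:+.3f}, "
          f"P(benefit) = {p_benefit:.4f}")
    if p_benefit > 0.99:
        print("stopping rule met: evidence considered conclusive")
        break
```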

18.
In meta-analysis combining results from parallel and cross-over trials, there is a risk of bias originating from the carry-over effect in cross-over trials. When pooling treatment effects estimated from parallel trials and two-period, two-treatment cross-over trials, meta-analytic estimators of treatment effect can be obtained by combining parallel trial results either with cross-over trial results based on data from the first period only or with cross-over trial results analysed with data from both periods. Taking data from the first cross-over period protects against carry-over but gives less efficient treatment estimators and may lead to selection bias. This study evaluates, in terms of variance reduction and mean squared error, the cost of calculating meta-analysis estimates with data from the first period instead of data from the two cross-over periods. If information on the cross-over sequence is available, we recommend performing two combined-design meta-analyses, one using the first cross-over period data and one based on data from both cross-over periods. To investigate simultaneously the statistical significance of these two estimators, as well as the carry-over at the meta-analysis level, a method based on a multivariate analysis of the meta-analytic treatment effect and carry-over estimates is proposed.

19.
We describe methods for meta‐analysis of randomised trials where a continuous outcome is of interest, such as blood pressure, recorded at both baseline (pre treatment) and follow‐up (post treatment). We used four examples for illustration, covering situations with and without individual participant data (IPD) and with and without baseline imbalance between treatment groups in each trial. Given IPD, meta‐analysts can choose to synthesise treatment effect estimates derived using analysis of covariance (ANCOVA), a regression of just final scores, or a regression of the change scores. When there is baseline balance in each trial, treatment effect estimates derived using ANCOVA are more precise and thus preferred. However, we show that meta‐analysis results for the summary treatment effect are similar regardless of the approach taken. Thus, without IPD, if trials are balanced, reviewers can happily utilise treatment effect estimates derived from any of the approaches. However, when some trials have baseline imbalance, meta‐analysts should use treatment effect estimates derived from ANCOVA, as this adjusts for imbalance and accounts for the correlation between baseline and follow‐up; we show that the other approaches can give substantially different meta‐analysis results. Without IPD and with unavailable ANCOVA estimates, reviewers should limit meta‐analyses to those trials with baseline balance. Trowman's method to adjust for baseline imbalance without IPD performs poorly in our examples and so is not recommended. Finally, we extend the ANCOVA model to estimate the interaction between treatment effect and baseline values and compare options for estimating this interaction given only aggregate data. Copyright © 2013 John Wiley & Sons, Ltd.
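
A small simulation (all numbers invented; true effect -8, with deliberate baseline imbalance) illustrates why ANCOVA is preferred when arms are imbalanced at baseline:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(20)
n = 200
treat = rng.integers(0, 2, n)
base = rng.normal(150, 15, n) + 5 * treat    # deliberate baseline imbalance
final = 55 + 0.6 * base - 8 * treat + rng.normal(0, 10, n)

df = pd.DataFrame({"base": base, "final": final,
                   "change": final - base, "treat": treat})

# Three estimators of the same treatment effect (true value: -8). With the
# imbalance above, final-score and change-score estimates are biased in
# opposite directions; ANCOVA adjusts for baseline and recovers the effect.
for label, formula in [("final-score ", "final ~ treat"),
                       ("change-score", "change ~ treat"),
                       ("ANCOVA      ", "final ~ treat + base")]:
    fit = smf.ols(formula, data=df).fit()
    print(f"{label}: {fit.params['treat']:+.2f} (SE {fit.bse['treat']:.2f})")
```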
