Similar Literature
A total of 20 similar documents were retrieved.
1.
In randomized clinical trials, subjects are recruited at multiple study centres. Factors that vary across centres may exert a powerful independent influence on study outcomes. A common problem is how to incorporate these centre effects into the analysis of censored time-to-event data. We survey various methods and find substantial advantages in the gamma frailty model. This approach compares favourably with competing methods and appears minimally affected by violation of the assumption of a gamma-distributed frailty. Recent computational advances make the gamma frailty model a practical and appealing tool for addressing centre effects in the analysis of multicentre trials.
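In generic notation (a sketch of the standard shared gamma frailty setup, not a formulation taken from the abstract above), the hazard for subject j in centre i is

\lambda_{ij}(t \mid Z_i) = Z_i \, \lambda_0(t) \exp(\beta^\top x_{ij}), \qquad Z_i \sim \mathrm{Gamma}(1/\theta,\, 1/\theta),

so that E(Z_i) = 1 and Var(Z_i) = \theta; the value \theta = 0 corresponds to the absence of a centre effect.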

2.
Despite the use of standardized protocols in multi-centre, randomized clinical trials, outcome may vary between centres. Such heterogeneity may alter the interpretation and reporting of the treatment effect. Here, we propose a general frailty modelling approach for investigating, inter alia, putative treatment-by-centre interactions in time-to-event data in multi-centre clinical trials. A correlated random effects model is used to model the baseline risk and the treatment effect across centres. It may be based on shared, individual or correlated random effects. For inference we develop the hierarchical-likelihood (or h-likelihood) approach, which facilitates computation of prediction intervals for the random effects with proper precision. We illustrate our methods using disease-free time-to-event data on bladder cancer patients participating in a European Organization for Research and Treatment of Cancer trial, and a simulation study. We also demonstrate model selection using h-likelihood criteria.
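As a hedged sketch of the general idea (the paper's exact formulation may differ), the h-likelihood for data y and unobserved random effects v, with fixed effects \beta and dispersion parameters \theta, is the joint log-density

h(\beta, \theta, v) = \log f(y \mid v; \beta) + \log f(v; \theta),

which is maximized jointly in v and \beta, with adjusted profile versions of h used for the dispersion parameters.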

3.
In a meta-analysis combining survival data from different clinical trials, an important issue is the possible heterogeneity between trials. Such intertrial variation can be explained not only by heterogeneity of treatment effects across trials but also by heterogeneity of their baseline risk. In addition, one might examine the relationship between the magnitude of the treatment effect and the underlying risk of the patients in the different trials. Such a scenario can be accounted for by using additive random effects in the Cox model, with a random trial effect and a random treatment-by-trial interaction. We propose to use this kind of model with a general correlation structure for the random effects and to estimate the parameters and the hazard function using a semi-parametric penalized marginal likelihood method (maximum penalized likelihood estimators). This approach gives smoothed estimates of the hazard function, which represents incidence in epidemiology. The idea for the approach comes from the study of heterogeneity in a large meta-analysis of randomized trials in patients with head and neck cancers (the meta-analysis of chemotherapy in head and neck cancers) and of the effect of adding chemotherapy to locoregional treatment. The simulation study and the application demonstrate that the proposed approach yields satisfactory results and illustrate the need to use a flexible variance-covariance structure for the random effects.
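In generic notation (an illustrative sketch, not necessarily the paper's exact specification), a Cox model with an additive random trial effect and a random treatment-by-trial interaction is

\lambda_{ij}(t) = \lambda_0(t) \exp\{\beta\, x_{ij} + u_i + w_i\, x_{ij}\}, \qquad (u_i, w_i)^\top \sim N(0, \Sigma),

where x_{ij} is the treatment indicator for patient j in trial i and \Sigma allows a general (unstructured) correlation between the trial effect u_i and the interaction w_i.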

4.
Lam KF, Ip D. Statistics in Medicine 2003;22(12):2025-2034
Clustered grouped survival data arise naturally in clinical medicine and biological research. For example, in a randomized clinical trial, the variable of interest is the time to occurrence of a certain event with or without a new treatment, and the data are collected from possibly correlated subjects from independent clusters. However, it is sometimes impossible or too expensive to monitor the experimental subjects continuously. The subjects are examined regularly, and the continuous survival data are thus grouped into a discrete time scale. With such a design, researchers are mainly interested in the effectiveness of the new treatment as well as the correlation among subjects from the same cluster, namely the intracluster correlation. This paper suggests a random effects approach to the estimation of the regression parameter, with various choices of regression model, and also of the dependence parameter which characterizes the intracluster correlation. Time-dependent covariates can be accommodated in the proposed model, and the estimation procedure is not further complicated by large cluster sizes. The proposed method is applied to data from the Diabetic Retinopathy Study, the objective of which was to evaluate the effectiveness of laser photocoagulation in delaying or preventing the onset of blindness in the left and right eyes of individuals with diabetes-associated retinopathy. The intracluster correlation under a grouped proportional hazards regression model can be estimated, and the relationship between the regression parameter estimates based on the random effects approach and those based on the marginal approach using a dynamic logistic regression model is discussed. Results from a simulation study of the proposed method are also presented.
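For grouped (interval-reported) survival times, a common way to write a grouped proportional hazards model with a cluster-level random effect — a sketch in generic notation, not necessarily the authors' exact parameterization — is

P(T_{ij} = t \mid T_{ij} \ge t,\, b_i) = 1 - \exp\{-\exp(\gamma_t + \beta^\top x_{ij} + b_i)\},

where \gamma_t are interval-specific baseline parameters and b_i is the cluster random effect that induces the intracluster correlation.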

5.
Clustered binary data arise frequently in medical research such as cross-over clinical trials and twin studies. For the analysis of such data either a random-effects model or a conditional likelihood approach can be used. In this paper, we compare numerically the random-effects model estimator and the conditional likelihood estimator and discuss their relative merits for the analysis of binary data.
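In generic notation (an illustrative sketch rather than the paper's exact setup), the two approaches handle the cluster effect differently. The random-intercept logistic model specifies

P(Y_{ij} = 1 \mid b_i) = \mathrm{expit}(x_{ij}^\top \beta + b_i), \qquad b_i \sim N(0, \sigma^2),

whereas the conditional likelihood eliminates cluster-specific intercepts by conditioning on the cluster totals; for a pair with exactly one success, the contribution is

P(Y_{i1} = 1 \mid Y_{i1} + Y_{i2} = 1) = \frac{\exp(x_{i1}^\top \beta)}{\exp(x_{i1}^\top \beta) + \exp(x_{i2}^\top \beta)}.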

6.
A generalized linear mixed model is an increasingly popular choice for the modelling of correlated, non-normal responses in a regression setting. A number of methods are currently available for fitting a generalized linear mixed model, including Markov chain Monte Carlo maximum likelihood algorithms, approximate maximum likelihood (PQL), iterative bias correction, and others. Of interest in this paper is to compare parameter estimation across the various methods in the modelling of a count data set, the incidence of polio in the USA over the period 1970-1983, using a log-linear generalized linear mixed model with an autoregressive correlation structure. Despite the fact that all of these methods are considered valid modelling techniques, we find that parameter estimates and standard errors differ substantially between analyses, particularly in the estimation of the parameters describing the random effects distribution. A small simulation study is helpful in understanding some of these differences. The methods lead to reasonably similar predictions for future observations, with small differences observed in some monthly counts.
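A minimal numpy sketch of the kind of data-generating process described above (a Poisson log-linear GLMM with an AR(1) latent process); the parameter values are illustrative and not taken from the polio analysis:

```python
import numpy as np

rng = np.random.default_rng(42)

n_months = 168           # monthly counts over 14 years (illustrative length)
phi, sigma = 0.8, 0.5    # AR(1) coefficient and innovation SD (illustrative)
beta0 = 0.2              # intercept on the log scale (illustrative)

# latent AR(1) random effect: b_t = phi * b_{t-1} + e_t
b = np.zeros(n_months)
for t in range(1, n_months):
    b[t] = phi * b[t - 1] + rng.normal(0.0, sigma)

# log-linear mean and Poisson counts
mu = np.exp(beta0 + b)
y = rng.poisson(mu)
print(y[:12])            # first simulated year of monthly counts
```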

7.
Binocular data typically arise in ophthalmology where pairs of eyes are evaluated, through some diagnostic procedure, for the presence of certain diseases or pathologies. Treating eyes as independent and adopting the usual approach in estimating the sensitivity and specificity of a diagnostic test ignores the correlation between eyes. This may consequently yield incorrect estimates, especially of the standard errors. The paper proposes a likelihood-based method of accounting for the correlations between eyes and estimating sensitivity and specificity using a model for binocular or paired binary outcomes. Estimation of model parameters via maximum likelihood is outlined and approximate tests are provided. The efficiency of the estimates is assessed in a simulation study. An extension of the methodology to the case of several diagnostic tests, or the same test measured on several occasions, which arises in multi-reader studies, is given. A further extension to the case of multiple diseases is outlined as well. Data from a study on diabetic retinopathy are analysed to illustrate the methodology.
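For a pair of diseased eyes with a common per-eye sensitivity Se and within-person correlation \rho, one standard way to write the joint probabilities of the two test results (a generic sketch, not necessarily the paper's parameterization) is

P(+,+) = Se^2 + \rho\, Se(1-Se), \qquad P(+,-) = P(-,+) = Se(1-Se)(1-\rho), \qquad P(-,-) = (1-Se)^2 + \rho\, Se(1-Se),

with analogous expressions in terms of specificity for disease-free pairs; the likelihood is then a product of such contributions over persons, and \rho = 0 recovers the naive independence analysis.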

8.
In randomized clinical trials, it is common for patients to stop taking their assigned treatments and then switch to a standard treatment (the standard of care available to the patient) but not to the treatments under investigation. Although the limited retrieved data on patients who switch to standard treatment, called off-protocol data, could be highly valuable in assessing the treatment effect associated with the experimental therapy, they lead to a complex data structure requiring the development of models that link the information in the per-protocol data with the off-protocol data. In this paper, we develop a novel Bayesian method to jointly model longitudinal treatment measurements under various dropout scenarios. Specifically, we propose a multivariate normal mixed-effects model for repeated measurements from the assigned treatments and the standard treatment, a multivariate logistic regression model for those stopping the assigned treatments, logistic regression models for those starting a standard treatment off protocol, and a conditional multivariate logistic regression model for completely withdrawing from the study. We assume that withdrawal from the study is non-ignorable, but intermittent missingness is assumed to be missing at random. We examine various properties of the proposed model. We develop an efficient Markov chain Monte Carlo sampling algorithm. We analyze in detail, via the proposed method, a real dataset from a clinical trial.

9.
Aortic gradient and aortic regurgitation are echocardiographic markers of aortic valve function. Both are biomarkers repeatedly measured in patients with valve abnormalities, and thus it is expected that they are biologically interrelated. Loss to follow-up could have multiple causes, including causes related to valve progression, such as an intervention or even the death of the patient. In that case, it is of interest and appropriate to analyze these outcomes jointly. Joint models have recently received much attention because they cover a wide range of clinical applications and have shown promising results. We propose a joint model consisting of two longitudinal outcomes, one continuous (aortic gradient) and one ordinal (aortic regurgitation), and two time-to-event outcomes (death and reoperation). Moreover, we allow more flexibility in the average evolution and the subject-specific profiles of the continuous repeated outcome by using B-splines. A disadvantage, however, is that when adopting a non-linear structure for the model, interpretation of the results may become difficult. To overcome this problem, we propose a graphical approach. In this paper, we apply the proposed joint models under the Bayesian framework, using a data set including serial echocardiographic measurements of aortic gradient and aortic regurgitation and records of the occurrence of death and reoperation in patients who received a human tissue valve in the aortic position. The interpretation of the results is discussed.

10.
Medical cost data are typically highly skewed to the right, with a large proportion of zero costs. It is also common for these data to be censored because of incomplete follow-up and death. In the case of censoring due to death, it is important to consider the potential dependence between cost and survival. This association can occur because patients who incur a greater amount of medical cost tend to be frailer and hence are more likely to die. To handle this informative censoring issue, joint modeling of cost and survival with shared random effects has been proposed. In this paper, we extend this joint modeling approach to handle a further feature of many medical cost data sets: the fact that the data were obtained via a complex survey design. Specifically, we extend the joint model by incorporating the sample weights when estimating the parameters and by using the Taylor series linearization approach when calculating the standard errors. We use a simulation study to compare the joint modeling approach with and without these adjustments. The simulation study shows that parameter estimates can be seriously biased when information about the complex survey design is ignored. It also shows that standard errors based on the Taylor series linearization approach provide satisfactory confidence interval coverage. The proposed joint model is applied to monthly hospital costs obtained from the 2004 National Long Term Care Survey.

11.
A mixed effects model is proposed to jointly analyze multivariate longitudinal data with continuous, proportion, count, and binary responses. The association among the variables is modeled through the correlation of random effects. We use a quasi-likelihood type approximation for the nonlinear variables and transform the proposed model into a multivariate linear mixed model framework for estimation and inference. Via an extension of the EM approach, an efficient algorithm is developed to fit the model. The method is applied to physical activity data collected with a wearable accelerometer device that measures daily movement and energy expenditure. Our approach is also evaluated in a simulation study.

12.
Longitudinal binomial data are frequently generated from multiple questionnaires and assessments in various scientific settings, and such binomial data are often overdispersed. The standard generalized linear mixed effects model may severely underestimate the standard errors of the estimated regression parameters in such cases and hence potentially bias the statistical inference. In this paper, we propose a longitudinal beta-binomial model for overdispersed binomial data and estimate the regression parameters under a probit model using the generalized estimating equation method. A hybrid algorithm combining Fisher scoring and the method of moments is implemented for the computation. Extensive simulation studies are conducted to justify the validity of the proposed method. Finally, the proposed method is applied to analyze functional impairment in subjects who are at risk of Huntington disease, using data from a multisite observational study of prodromal Huntington disease.
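In generic notation (a sketch of the standard beta-binomial setup rather than the paper's exact model), overdispersion arises by letting the success probability itself be random,

Y_{ij} \mid p_{ij} \sim \mathrm{Binomial}(n_{ij}, p_{ij}), \qquad p_{ij} \sim \mathrm{Beta}\bigl(\mu_{ij}\phi,\, (1-\mu_{ij})\phi\bigr),

so that

\mathrm{Var}(Y_{ij}) = n_{ij}\,\mu_{ij}(1-\mu_{ij})\,\{1 + (n_{ij}-1)\rho\}, \qquad \rho = \frac{1}{\phi + 1},

with a probit link relating the mean to covariates, \Phi^{-1}(\mu_{ij}) = x_{ij}^\top \beta.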

13.
Huang X, Wolfe RA, Hu C. Statistics in Medicine 2004;23(13):2089-2107
Frailty models are frequently used to analyse clustered survival data. The assumption of non-informative censoring is commonly used by these models, even though it may not be true in many situations. This article proposes a test for this assumption. It uses the estimated correlation between two types of martingale residuals, one from a model for failure and the other from a model for censoring. It distinguishes two types of censoring, namely withdrawal and the end of the study. Simulation studies show that the proposed test works well under various scenarios. For illustration, the test is applied to a data set for kidney disease patients from multiple dialysis centres.

14.
Xiang L, Ma X, Yau KK. Statistics in Medicine 2011;30(9):995-1006
The mixture cure model is an effective tool for the analysis of survival data with a cure fraction. This approach combines a logistic regression model for the proportion of cured subjects with a survival model (either the Cox proportional hazards or the accelerated failure time model) for the uncured subjects. Methods based on the mixture cure model have been extensively investigated in the literature for data with exact failure/censoring times. In this paper, we propose a mixture cure modeling procedure for analyzing clustered and interval-censored survival time data by incorporating random effects in both the logistic regression and the PH regression components. Under the generalized linear mixed model framework, we develop REML estimation for the parameters, as well as an iterative algorithm for estimating the survival function from interval-censored data. The estimation procedure is implemented via an EM algorithm. A simulation study is conducted to evaluate the performance of the proposed method in various practical situations. To demonstrate its usefulness, we apply the proposed method to analyze interval-censored relapse time data from a smoking cessation study whose subjects were recruited from 51 zip code regions in the southeastern corner of Minnesota.
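In generic notation (illustrative, not necessarily the authors' exact parameterization), a mixture cure model with cluster-level random effects can be written as

S_{ij}(t \mid x_{ij}, z_{ij}) = \pi_{ij} + (1 - \pi_{ij})\, S_u(t \mid x_{ij}, v_i), \qquad \mathrm{logit}(1 - \pi_{ij}) = \gamma^\top z_{ij} + u_i,

where \pi_{ij} is the cure probability, the uncured survival follows a PH model \lambda_u(t \mid x_{ij}, v_i) = \lambda_0(t)\exp(\beta^\top x_{ij} + v_i), and u_i, v_i are random effects shared by subjects in cluster i.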

15.
When analysing multicentre data, it may be of interest to test whether the distribution of the endpoint varies among centres. In a mixed-effects model, testing for such a centre effect amounts to testing whether the variance component of a random centre effect is zero. It has been shown that the usual asymptotic χ2 distribution of the likelihood ratio and score statistics under the null does not necessarily hold. In the case of censored data, mixed-effects Cox models have been used to account for random effects, but few works have concentrated on testing whether the variance component of the random effects is zero. We propose a permutation test, based on random permutation of the cluster indices, to test for a centre effect in multilevel censored data. Results from a simulation study indicate that the permutation tests have correct type I error rates, contrary to standard likelihood ratio tests, and are more powerful. The proposed tests are illustrated using data from a multicentre clinical trial of induction therapy in acute myeloid leukaemia patients.
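A minimal sketch of the permutation idea, with `stat_fn` standing in as a hypothetical placeholder for whatever test statistic is used (e.g. a likelihood ratio statistic from a mixed-effects Cox fit); this is not the authors' implementation:

```python
import numpy as np

def permutation_test(centre_ids, data, stat_fn, n_perm=1000, seed=0):
    """Permutation test for a centre effect: repeatedly shuffle the centre
    labels across subjects and recompute the test statistic.

    `stat_fn(data, centre_ids)` is a hypothetical placeholder for the chosen
    statistic, e.g. a likelihood ratio statistic comparing models with and
    without the random centre effect."""
    rng = np.random.default_rng(seed)
    observed = stat_fn(data, centre_ids)
    perm_stats = np.empty(n_perm)
    for b in range(n_perm):
        shuffled = rng.permutation(centre_ids)   # break any real centre effect
        perm_stats[b] = stat_fn(data, shuffled)
    # one-sided p-value: large statistics indicate a centre effect
    return (1 + np.sum(perm_stats >= observed)) / (n_perm + 1)
```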

16.
BACKGROUND: It has been recommended that onset of antidepressant action be assessed using survival analyses with assessments taken at least twice per week. However, such an assessment schedule is problematic to implement. The present study assessed the feasibility of comparing onset of action between treatments using a categorical repeated measures approach with a traditional assessment schedule. METHOD: Four scenarios representative of antidepressant clinical trials were created by varying mean improvements over time. Two assessment schedules were compared within the simulated 8-week studies: (i) 'frequent' assessment--16 postbaseline visits (twice-weekly for 8 weeks); (ii) 'traditional' assessment--5 postbaseline visits (Weeks 1, 2, 4, 6, and 8). Onset was defined as a 20 per cent improvement from baseline, and had to be sustained at all subsequent assessments. Differences between treatments were analysed with a survival analysis (KM = Kaplan-Meier product limit method) and a categorical mixed-effects model repeated measures analysis (MMRM-CAT). RESULTS: More frequent assessments resulted in small reductions in empirical standard errors compared with traditional assessments for both analytic methods. More frequent assessments altered estimates of treatment group differences in KM such that power was increased when the difference between treatments was increasing over time, but power decreased when the treatment difference decreased over time. More frequent assessments had a minimal effect on estimates of treatment group differences in MMRM-CAT. The MMRM-CAT analysis of data from a traditional assessment schedule provided adequate control of type I error, and had power comparable to or greater than that with KM analyses of data from either a frequent or a traditional assessment schedule. CONCLUSION: In the scenarios tested in this study it was reasonable to assess treatment group differences in onset of action with MMRM-CAT and a traditional assessment schedule. Additional research is needed to assess whether these findings hold in data with drop-out and across definitions of onset.
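As an illustration of the onset definition used above (a 20 per cent improvement from baseline that is sustained at all subsequent assessments), a small helper — hypothetical, not taken from the paper — might look like:

```python
import numpy as np

def onset_visit(scores, threshold=0.20):
    """Return the 1-based index of the first post-baseline visit at which
    improvement from baseline reaches `threshold` and is sustained at every
    subsequent visit; return None if onset never occurs.
    `scores` holds the baseline score followed by post-baseline scores, with
    lower scores meaning improvement (as on a depression rating scale)."""
    scores = np.asarray(scores, dtype=float)
    baseline, post = scores[0], scores[1:]
    improved = (baseline - post) / baseline >= threshold
    for visit in range(len(post)):
        if improved[visit:].all():      # sustained through end of study
            return visit + 1
    return None

# Example: onset at post-baseline visit 3, sustained thereafter
print(onset_visit([30, 28, 27, 23, 22, 21]))   # -> 3
```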

17.
A strategy for early-stage breast cancer trials in recent years consists of a neoadjuvant trial with pathological complete response (pCR) at the time of surgery as the efficacy endpoint, followed by the collection of long-term data to show efficacy in survival. To calculate an appropriate sample size to detect a survival difference based upon pCR data, it is necessary to relate the effect size in pCR to the effect size in survival. Here, we propose an exponential mixture model for survival time, with parameters for the neoadjuvant pCR rates and an estimated benefit of achieving pCR, to determine the treatment effect size. Through simulation studies, we demonstrate how to estimate the empirical power for detecting survival efficacy under a given parameter setting. We also show a more efficient way to estimate this power through estimated average hazard ratios and the Schoenfeld formula. Our method can be used to power future confirmatory adjuvant trials based on the preliminary data obtained from the neoadjuvant component.
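To make the last step concrete: under the exponential mixture idea, the marginal survival in an arm is a pCR-weighted mixture, S(t) = p e^{-\lambda_1 t} + (1-p) e^{-\lambda_2 t}, and once an (average) hazard ratio has been derived, Schoenfeld's formula gives the required number of events. A minimal sketch with illustrative inputs, not values from the paper:

```python
import numpy as np
from scipy.stats import norm

def schoenfeld_events(hr, alpha=0.05, power=0.80, alloc=0.5):
    """Approximate total number of events needed to detect hazard ratio `hr`
    with a two-sided log-rank test at level `alpha` (Schoenfeld's formula).
    `alloc` is the proportion of patients randomized to the experimental arm."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return (z_a + z_b) ** 2 / (alloc * (1 - alloc) * np.log(hr) ** 2)

print(round(schoenfeld_events(hr=0.75)))   # about 379 events for HR = 0.75
```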

18.
We present a simple semiparametric model for fitting subject-specific curves for longitudinal data. Individual curves are modelled as penalized splines with random coefficients. This model has a mixed model representation, and it is easily implemented in standard statistical software. We conduct an analysis of the long-term effect of radiation therapy on the height of children suffering from acute lymphoblastic leukaemia using penalized splines in the framework of semiparametric mixed effects models. The analysis revealed significant differences between therapies and showed that the growth rate of girls in the study cannot be fully explained by the group-average curve and that individual curves are necessary to reflect the individual response to treatment. We also show how to implement these models in S-PLUS and R in the appendix.
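As a sketch of the mixed-model representation referred to above (generic notation with a truncated-line basis; the paper's basis may differ), a population curve can be written as

y_{ij} = \beta_0 + \beta_1 t_{ij} + \sum_{k=1}^{K} u_k (t_{ij} - \kappa_k)_+ + \varepsilon_{ij}, \qquad u_k \sim N(0, \sigma_u^2), \; \varepsilon_{ij} \sim N(0, \sigma_\varepsilon^2),

where \kappa_1 < \dots < \kappa_K are knots. Treating the spline coefficients u_k as random effects makes the amount of smoothing a variance ratio, so the model can be fitted with standard mixed-model software; subject-specific curves add analogous random coefficients for each subject.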

19.
Existing methods for power analysis for longitudinal study designs are limited in that they do not adequately address random missing data patterns. Although the pattern of missing data can be assessed during data analysis, it is unknown during the design phase of a study. The random nature of the missing data pattern adds another layer of complexity in addressing missing data for power analysis. In this paper, we model the occurrence of missing data with a two-state, first-order Markov process and integrate the modelling information into the power function to account for random missing data patterns. The Markov model is easily specified to accommodate different anticipated missing data processes. We develop this approach for the two most popular longitudinal models: the generalized estimating equations (GEE) and the linear mixed-effects model under the missing completely at random (MCAR) assumption. For GEE, we also limit our consideration to the working independence correlation model. The proposed methodology is illustrated with numerous examples that are motivated by real study designs.
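A small numpy sketch of a two-state, first-order Markov process for visit-level missingness; the transition probabilities are illustrative, not taken from the paper:

```python
import numpy as np

def simulate_missingness(n_subjects, n_visits, p_stay_obs=0.9, p_return=0.3, seed=0):
    """Simulate observed/missing indicators from a two-state, first-order
    Markov chain: an observed visit is followed by another observed visit
    with probability `p_stay_obs`, and a missed visit is followed by an
    observed visit with probability `p_return`."""
    rng = np.random.default_rng(seed)
    obs = np.ones((n_subjects, n_visits), dtype=bool)   # everyone observed at visit 1
    for t in range(1, n_visits):
        p_obs = np.where(obs[:, t - 1], p_stay_obs, p_return)
        obs[:, t] = rng.random(n_subjects) < p_obs
    return obs

R = simulate_missingness(500, 6)
print(R.mean(axis=0))   # proportion observed at each visit
```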

20.
Existing methods for power and sample size estimation for longitudinal and other clustered study designs have limited applicability. In this paper, we review and extend existing approaches to address these limitations. In particular, we focus on power analysis for the two most popular approaches for clustered data analysis, the generalized estimating equations and the linear mixed-effects models. By basing the derivation of the power function on the asymptotic distribution of the model estimates, the proposed approach provides estimates of power that are consistent with the methods of inference used for data analysis. The proposed methodology is illustrated with numerous examples that are motivated by real study designs.
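As a sketch of the Wald-type power calculation implied above (generic notation, not necessarily the authors' exact derivation): if \hat\theta is asymptotically normal with standard error \mathrm{se}(\hat\theta) under the planned design, the power of a two-sided level-\alpha test of \theta = 0 is approximately

\text{power} \approx \Phi\!\left(\frac{|\theta|}{\mathrm{se}(\hat\theta)} - z_{1-\alpha/2}\right),

where \mathrm{se}(\hat\theta) comes from the asymptotic (model-based or sandwich) variance of the GEE or mixed-model estimator and shrinks at rate 1/\sqrt{n}.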
