Similar Articles
20 similar articles found.
1.
Measurements on subjects in longitudinal medical studies are often collected at several different times or under different experimental conditions. Such multiple observations on the same subject generally produce serially correlated outcomes. Traditional regression methods assume that observations within subjects are independent, which is not true for longitudinal data. In this paper we develop a Bayesian analysis for the traditional non-linear random-effects model with errors that follow a continuous-time autoregressive process. In this way, unequally spaced observations do not present a problem in the analysis. Parameter estimation for this model is carried out via the Gibbs sampling algorithm. The method is illustrated with data from a study of pregnant women in Santiago, Chile, involving the non-linear regression of plasma volume on gestational age.
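To see why a continuous-time autoregressive error process accommodates unequally spaced visits, note that the within-subject error covariance depends only on the observed time gaps, for example sigma^2 * exp(-phi * |t_i - t_j|). The sketch below (illustrative values only, not the authors' code) builds such a covariance matrix for one subject's irregular visit schedule.

```python
import numpy as np

def car1_covariance(times, sigma2, phi):
    """Covariance matrix of a continuous-time AR(1) error process
    evaluated at (possibly unequally spaced) observation times."""
    times = np.asarray(times, dtype=float)
    gaps = np.abs(times[:, None] - times[None, :])   # |t_i - t_j| for all pairs
    return sigma2 * np.exp(-phi * gaps)

# Hypothetical gestational ages (weeks) for one woman with irregular visit spacing
visit_weeks = [10.0, 14.5, 22.0, 30.0, 38.0]
Sigma = car1_covariance(visit_weeks, sigma2=1.2, phi=0.15)
print(np.round(Sigma, 3))
```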

2.
The use of longitudinal data for predicting a subsequent binary event is often the focus of diagnostic studies. This is particularly important in obstetrics, where ultrasound measurements taken during fetal development may be useful for predicting various poor pregnancy outcomes. We propose a modeling framework for predicting a binary event from longitudinal measurements where a shared random effect links the two processes together. Under a Gaussian random effects assumption, the approach is simple to implement with standard statistical software. Using asymptotic and simulation results, we show that estimates of predictive accuracy under a Gaussian random effects distribution are robust to severe misspecification of this distribution. However, under some circumstances, estimates of individual risk may be sensitive to severe random effects misspecification. We illustrate the methodology with data from a longitudinal fetal growth study.
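Under a shared Gaussian random effect linking the longitudinal and binary processes, an individual's predicted event risk is an integral of a success probability over that subject's random-effect distribution. A minimal sketch of this integration step, assuming a logistic link and a normal distribution for the subject's random effect (all parameter values hypothetical):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def event_risk(alpha0, alpha1, b_mean, b_sd, n_nodes=30):
    """P(event) = E[ logit^{-1}(alpha0 + alpha1 * b) ] with b ~ N(b_mean, b_sd^2),
    approximated by Gauss-Hermite quadrature."""
    nodes, weights = hermgauss(n_nodes)
    b = b_mean + np.sqrt(2.0) * b_sd * nodes          # change of variables for N(mean, sd^2)
    probs = 1.0 / (1.0 + np.exp(-(alpha0 + alpha1 * b)))
    return np.sum(weights * probs) / np.sqrt(np.pi)

# Hypothetical subject whose growth trajectory implies b ~ N(0.8, 0.4^2)
print(event_risk(alpha0=-2.0, alpha1=1.5, b_mean=0.8, b_sd=0.4))
```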

3.
We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design-based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, and we introduce a method for computing randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve size and power well under model misspecification. Copyright © 2014 John Wiley & Sons, Ltd.
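The Monte Carlo idea can be sketched in a few lines: re-generate the treatment assignment under the trial's randomization procedure many times, recompute an outcome statistic built from model residuals each time, and compare the observed statistic with this reference distribution. The following sketch assumes a permuted block design and a simple difference in mean residuals; it is illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(2024)

def mc_randomization_test(residuals, treatment, blocks, n_rep=10_000):
    """Design-based Monte Carlo randomization test: re-randomize treatment
    within permuted blocks and compare differences in mean residuals."""
    residuals = np.asarray(residuals, float)
    treatment = np.asarray(treatment)
    observed = residuals[treatment == 1].mean() - residuals[treatment == 0].mean()
    count = 0
    for _ in range(n_rep):
        perm = np.empty_like(treatment)
        for b in np.unique(blocks):
            idx = np.where(blocks == b)[0]
            perm[idx] = rng.permutation(treatment[idx])   # respects block balance
        stat = residuals[perm == 1].mean() - residuals[perm == 0].mean()
        count += abs(stat) >= abs(observed)
    return observed, count / n_rep

# Hypothetical trial: 4 blocks of size 4, residuals from a fitted regression model
blocks = np.repeat(np.arange(4), 4)
treatment = np.tile([1, 1, 0, 0], 4)
residuals = rng.normal(size=16) + 0.5 * treatment
print(mc_randomization_test(residuals, treatment, blocks, n_rep=2000))
```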

4.
In this paper, we extend several regression diagnostic techniques commonly used in linear regression, such as leverage, infinitesimal influence, case deletion diagnostics, Cook's distance, and local influence, to the linear mixed-effects model. In each case, the proposed new measure has a direct interpretation in terms of the effects on a parameter of interest, and collapses to the familiar linear regression measure when there are no random effects. The new measures are explicitly defined functions and do not require re-estimation of the model, notably for cluster deletion diagnostics. The basis for both the cluster deletion diagnostics and Cook's distance is a generalization of Miller's simple update formula for case deletion in linear models. Pregibon's infinitesimal case deletion diagnostic is adapted to the linear mixed-effects model. A simple compact matrix formula is derived to assess the local influence of the fixed-effects regression coefficients. Finally, a link between the local influence approach and Cook's distance is established. These influence measures are applied to an analysis of 5-year Medicare reimbursements to colon cancer patients to identify the most influential observations and their effects on the fixed-effects coefficients.
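For orientation, the familiar ordinary-least-squares measure that these diagnostics collapse to can itself be computed without refitting, using only leverages and residuals from a single fit. A minimal sketch of that baseline case with hypothetical data:

```python
import numpy as np

def cooks_distance(X, y):
    """Case-deletion Cook's distance for ordinary least squares, computed
    from leverages and residuals without re-fitting the model."""
    X = np.column_stack([np.ones(len(y)), X])          # add intercept
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    H = X @ XtX_inv @ X.T                              # hat matrix
    h = np.diag(H)                                     # leverages
    beta = XtX_inv @ X.T @ y
    e = y - X @ beta                                   # residuals
    s2 = e @ e / (n - p)
    return (e**2 * h) / (p * s2 * (1.0 - h)**2)

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 1.0 + 2.0 * x + rng.normal(size=50)
y[0] += 8.0                                            # an artificially influential point
print(np.argsort(cooks_distance(x, y))[-3:])           # indices of most influential cases
```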

5.
Household contact studies, a mainstay of tuberculosis transmission research, often assume that tuberculosis-infected household contacts of an index case were infected within the household. However, strain genotyping has provided evidence against this assumption. Understanding the household versus community infection dynamic is essential for designing interventions. The misattribution of infection sources can also bias estimates of household transmission predictors. We present a household-community transmission model that estimates the probability of community infection, that is, the probability that a household contact of an index case was actually infected from a source outside the home, and simultaneously estimates transmission predictors. We show through simulation that our method accurately predicts the probability of community infection in several scenarios and that not accounting for community-acquired infection in household contact studies can bias risk factor estimates. Applying the model to data from Vitória, Brazil, produced household risk factor estimates similar to two other standard methods for age and sex. However, our model gave different estimates for sleeping proximity to the index case and for disease severity score. These results show that estimating both the probability of community infection and household transmission predictors is feasible and that standard tuberculosis transmission models likely underestimate the risk for two important transmission predictors. Copyright © 2017 John Wiley & Sons, Ltd.
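One way to see the household-versus-community decomposition is to write each contact's infection probability as the complement of escaping both sources of exposure. The sketch below is a schematic likelihood of that form; the parameterization, the single covariate (an exposure measure), and the fitting by direct optimization are illustrative assumptions, not the authors' model.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def neg_log_lik(params, infected, exposure):
    """Schematic two-source model: a contact escapes infection only by
    escaping both the community source and the household source."""
    logit_pc, logit_ph = params
    p_comm = expit(logit_pc)                  # community infection probability
    p_house = expit(logit_ph)                 # per-unit-exposure household probability
    p_inf = 1.0 - (1.0 - p_comm) * (1.0 - p_house) ** exposure
    p_inf = np.clip(p_inf, 1e-10, 1 - 1e-10)
    return -np.sum(infected * np.log(p_inf) + (1 - infected) * np.log(1 - p_inf))

rng = np.random.default_rng(1)
exposure = rng.integers(1, 10, size=300)              # e.g., hours per day near the index case
true_p = 1 - (1 - 0.15) * (1 - 0.04) ** exposure
infected = rng.binomial(1, true_p)
fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(infected, exposure))
print(expit(fit.x))                                    # estimated (p_comm, p_house)
```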

6.
Wu H, Zhang JT. Statistics in Medicine 2002, 21(23): 3655-3675
Modelling HIV dynamics has played an important role in understanding the pathogenesis of HIV infection over the past several years. Non-linear parametric models, derived from the mechanisms of HIV infection and drug action, have been used to fit short-term clinical data from AIDS clinical trials. However, parametric models may not be adequate for fitting long-term HIV dynamic data. To preserve the meaningful interpretation of the short-term HIV dynamic models as well as to characterize the long-term dynamics, we introduce a class of semi-parametric non-linear mixed-effects (NLME) models. The models are non-linear in population characteristics (fixed effects) and individual variations (random effects), both of which are modelled semi-parametrically. A basis-based approach is proposed to fit the models, which transforms a general semi-parametric NLME model into a set of standard parametric NLME models indexed by the bases used. The bases we employ are natural cubic splines, chosen for easy implementation. The resulting standard NLME models are low-dimensional and easy to solve. Statistical inferences, including tests of parametric against semi-parametric mixed-effects, are investigated. Innovative bootstrap procedures are developed for simulating the empirical distributions of the test statistics. Small-scale simulation and bootstrap studies show that our bootstrap procedures work well. The proposed approach and procedures are applied to long-term HIV dynamic data from an AIDS clinical study.
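The basis-based idea, representing the nonparametric parts with a low-dimensional spline basis so that standard mixed-model software can handle the fit, can be sketched as follows. Here patsy's natural cubic spline basis and a random intercept stand in for the paper's full semi-parametric NLME specification; the simulated viral-load data and variable names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Hypothetical long-format viral-load data: repeated measurements per patient
n_subj, n_obs = 30, 8
df = pd.DataFrame({
    "id": np.repeat(np.arange(n_subj), n_obs),
    "day": np.tile(np.linspace(0, 200, n_obs), n_subj),
})
subj_eff = rng.normal(0, 0.5, n_subj)
df["log10_rna"] = (4.5 - 0.02 * df["day"] + 0.00005 * df["day"] ** 2
                   + subj_eff[df["id"]] + rng.normal(0, 0.3, len(df)))

# Natural cubic spline basis (df=4) for the population curve; a random intercept
# accommodates between-subject variation.
model = smf.mixedlm("log10_rna ~ cr(day, df=4)", data=df, groups=df["id"])
result = model.fit()
print(result.summary())
```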

7.
We propose statistical definitions of the individual benefit of a medical or behavioral treatment and of the severity of a chronic illness. These definitions are used to develop a graphical method that can be used by statisticians and clinicians in the data analysis of clinical trials from the perspective of personalized medicine. The method focuses on assessing and comparing individual effects of treatments rather than average effects and can be used with continuous and discrete responses, including dichotomous and count responses. The method is based on new developments in generalized linear mixed-effects models, which are introduced in this article. To illustrate, analyses of data from the Sequenced Treatment Alternatives to Relieve Depression clinical trial of sequences of treatments for depression and data from a clinical trial of respiratory treatments are presented. The estimation of individual benefits is also explained. Copyright © 2016 John Wiley & Sons, Ltd.

8.
It is common practice to analyze complex longitudinal data using nonlinear mixed-effects (NLME) models with a normality assumption. NLME models with normal distributions provide the most popular framework for modeling continuous longitudinal outcomes, assuming individuals are from a homogeneous population and relying on random effects to accommodate inter-individual variation. However, two issues may stand out: (i) the normality assumption for model errors may cause lack of robustness and subsequently lead to invalid inference and unreasonable estimates, particularly if the data exhibit skewness; and (ii) a homogeneous population assumption may unrealistically obscure important features of between-subject and within-subject variations, which may result in unreliable modeling results. There have been relatively few studies concerning longitudinal data with both heterogeneity and skewness features. In the last two decades, skew distributions have proven beneficial in dealing with asymmetric data in various applications. In this article, our objective is to address the simultaneous impact of both features arising from longitudinal data by developing a flexible finite mixture of NLME models with skew distributions under a Bayesian framework that allows estimation of both model parameters and class membership probabilities for longitudinal data. Simulation studies are conducted to assess the performance of the proposed models and methods, and a real example from an AIDS clinical trial illustrates the methodology by modeling the viral dynamics to compare potential models with different distribution specifications; the analysis results are reported. Copyright © 2014 John Wiley & Sons, Ltd.

9.
Generalized linear models with random effects are often used to explain the serial dependence of longitudinal categorical data. Marginalized random effects models (MREMs) permit likelihood-based estimation of marginal mean parameters and also explain the serial dependence of longitudinal data. In this paper, we extend the MREM to accommodate multivariate longitudinal binary data using a new covariance matrix with a Kronecker decomposition, which easily captures both the serial dependence and the time-specific response correlation. Maximum marginal likelihood estimation is proposed, utilizing a quasi-Newton algorithm with quasi-Monte Carlo integration of the random effects. Our approach is applied to analyze metabolic syndrome data from the Korean Genomic Epidemiology Study for Korean adults. Copyright © 2009 John Wiley & Sons, Ltd.
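The Kronecker decomposition factors the covariance into a between-outcome component and a serial (time) component, so the full matrix never has to be parameterized freely. A small sketch of that construction, assuming an AR(1) serial structure and hypothetical dimensions:

```python
import numpy as np

def kronecker_covariance(sigma_outcome, rho, n_times):
    """Covariance for multivariate longitudinal random effects:
    (between-outcome covariance) x (AR(1) serial correlation across times)."""
    times = np.arange(n_times)
    serial = rho ** np.abs(times[:, None] - times[None, :])   # AR(1) correlation
    return np.kron(sigma_outcome, serial)

# Two binary outcomes measured at four visits
sigma_outcome = np.array([[1.0, 0.6],
                          [0.6, 1.5]])
Sigma = kronecker_covariance(sigma_outcome, rho=0.7, n_times=4)
print(Sigma.shape)          # (8, 8): 2 outcomes x 4 time points
```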

10.
Gibbs sampling-based generalized linear mixed models (GLMMs) provide a convenient and flexible way to extend variance components models for multivariate normally distributed continuous traits to other classes of phenotype. This includes binary traits and right-censored failure times such as age-at-onset data. The approach has applications in many areas of genetic epidemiology. However, the required GLMMs are sensitive to nonrandom ascertainment. In the absence of an appropriate correction for ascertainment, they can exhibit marked positive bias in the estimated grand mean and serious shrinkage in the estimated magnitude of variance components. To compound practical difficulties, it is currently difficult to implement a conventional adjustment for ascertainment because of the need to undertake repeated integration across the distribution of random effects. This is prohibitively slow when it must be repeated at every iteration of the Markov chain Monte Carlo (MCMC) procedure. This paper motivates a correction for ascertainment that is based on sampling random effects rather than integrating across them and can therefore be implemented in a general-purpose Gibbs sampling environment such as WinBUGS. The approach has the characteristic that it returns ascertainment-adjusted parameter estimates that pertain to the true distribution of determinants in the ascertained sample rather than in the general population. The implications of this characteristic are investigated and discussed. This paper extends the utility of Gibbs sampling-based GLMMs to a variety of settings in which family data are ascertained nonrandomly.

11.
In metagenomic studies, testing the association between microbiome composition and clinical outcomes translates to testing the nullity of variance components. Motivated by a lung human immunodeficiency virus (HIV) microbiome project, we study longitudinal microbiome data by using variance component models with more than two variance components. Current testing strategies apply only to models with exactly two variance components and only when sample sizes are large; therefore, they are not applicable to longitudinal microbiome studies. In this paper, we propose exact tests (score test, likelihood ratio test, and restricted likelihood ratio test) to (a) test the association of the overall microbiome composition in a longitudinal design and (b) detect the association of one specific microbiome cluster while adjusting for the effects from related clusters. Our approach combines exact tests for a null hypothesis involving a single variance component with a strategy for reducing multiple variance components to a single one. Simulation studies demonstrate that our method has a correct type I error rate and superior power compared with existing methods at small sample sizes and weak signals. Finally, we apply our method to a longitudinal pulmonary microbiome study of HIV-infected patients and reveal two interesting genera, Prevotella and Veillonella, associated with forced vital capacity. Our findings shed light on the impact of the lung microbiome on HIV complexities. The method is implemented in the open-source, high-performance computing language Julia and is freely available at https://github.com/JingZhai63/VCmicrobiome.

12.
Studies of HIV dynamics in AIDS research are very important for understanding the pathogenesis of HIV-1 infection and for assessing the effectiveness of antiviral therapies. Nonlinear mixed-effects (NLME) models have been used for modeling between-subject and within-subject variations in viral load measurements. Normality of both the within-subject random errors and the random effects is a routine assumption for NLME models, but it may be unrealistic, obscuring important features of between-subject and within-subject variations, particularly if the data exhibit skewness. In this paper, we develop a Bayesian approach to NLME models and relax the normality assumption by allowing both the model random errors and the random effects to follow a multivariate skew-normal distribution. The proposed model provides flexibility in capturing a broad range of non-normal behavior and includes normality as a special case. We use a real data set from an AIDS study to illustrate the proposed approach by comparing various candidate models. We find that the model with skew-normality provides a better fit to the observed data and that the corresponding parameter estimates are significantly different from those based on the model with normality when skewness is present in the data. These findings suggest that it is important to adopt a model with a skew-normal distribution in order to achieve robust and reliable results, in particular when the data exhibit skewness. Copyright © 2010 John Wiley & Sons, Ltd.
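A quick way to check whether a skew-normal error assumption is worth the extra parameter is to compare normal and skew-normal fits to a set of residuals. The sketch below does this with simulated skewed residuals and an AIC comparison; it illustrates the general idea only and is not the paper's Bayesian model comparison.

```python
import numpy as np
from scipy import stats

# Simulated "residuals" with pronounced right skew
resid = stats.skewnorm.rvs(a=4.0, size=500, random_state=42)

# Fit a normal and a skew-normal distribution, then compare via AIC (lower is better)
mu, sd = stats.norm.fit(resid)
a, loc, scale = stats.skewnorm.fit(resid)
aic_norm = 2 * 2 - 2 * np.sum(stats.norm.logpdf(resid, mu, sd))
aic_skew = 2 * 3 - 2 * np.sum(stats.skewnorm.logpdf(resid, a, loc, scale))
print(round(aic_norm, 1), round(aic_skew, 1))
```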

13.
Longitudinal data sets in biomedical research often consist of large numbers of repeated measures. In many cases, the trajectories do not look globally linear or polynomial, making it difficult to summarize the data or test hypotheses using standard longitudinal data analysis based on various linear models. An alternative approach is to apply functional data analysis, which directly targets the continuous nonlinear curves underlying discretely sampled repeated measures. For the purposes of data exploration, many functional data analysis strategies have been developed based on various schemes of smoothing, but fewer options are available for making causal inferences regarding predictor-outcome relationships, a common task in hypothesis-driven medical studies. To compare groups of curves, two testing strategies with good power have been proposed for high-dimensional analysis of variance: the Fourier-based adaptive Neyman test and the wavelet-based thresholding test. Using a smoking cessation clinical trial data set, this paper demonstrates how to extend these strategies for hypothesis testing to the framework of functional linear regression models (FLRMs) with continuous functional responses and categorical or continuous scalar predictors. The analysis procedure consists of three steps: first, apply the Fourier or wavelet transform to the original repeated measures; then fit a multivariate linear model in the transformed domain; and finally, test the regression coefficients using either adaptive Neyman or thresholding statistics. Since an FLRM can be viewed as a natural extension of the traditional multiple linear regression model, the development of this model and the accompanying computational tools should enhance the capacity of medical statistics for longitudinal data.
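The three-step recipe (transform, fit in the transformed domain, test with an adaptive statistic) can be sketched for the simplest two-group comparison. The code below applies the Fourier-based adaptive Neyman statistic to standardized differences of the leading Fourier coefficients and calibrates it by permutation rather than by the asymptotic distribution; the data, the dimensions, and the use of only the real parts of the coefficients are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)

def adaptive_neyman(z):
    """Adaptive Neyman statistic for standardized coefficient differences z_j."""
    cums = np.cumsum(z**2 - 1.0)
    m = np.arange(1, len(z) + 1)
    return np.max(cums / np.sqrt(2.0 * m))

def two_group_curve_test(curves_a, curves_b, n_coef=20, n_perm=2000):
    """Transform each subject's repeated measures with the FFT, then compare
    the leading Fourier coefficients of the two groups."""
    def coefs(curves):
        return np.fft.rfft(curves, axis=1).real[:, :n_coef]

    all_curves = np.vstack([curves_a, curves_b])
    labels = np.array([0] * len(curves_a) + [1] * len(curves_b))

    def statistic(lab):
        ca, cb = coefs(all_curves[lab == 0]), coefs(all_curves[lab == 1])
        se = np.sqrt(ca.var(0, ddof=1) / len(ca) + cb.var(0, ddof=1) / len(cb))
        return adaptive_neyman((ca.mean(0) - cb.mean(0)) / se)

    observed = statistic(labels)
    perms = np.array([statistic(rng.permutation(labels)) for _ in range(n_perm)])
    return observed, np.mean(perms >= observed)

# Hypothetical data: two groups of 20 subjects, 64 time points per curve
t = np.linspace(0, 1, 64)
group_a = rng.normal(size=(20, 64)) + np.sin(2 * np.pi * t)
group_b = rng.normal(size=(20, 64)) + np.sin(2 * np.pi * t) + 0.4 * t
print(two_group_curve_test(group_a, group_b))
```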

14.
The use of random-effects models for the analysis of longitudinal data with missing responses has been discussed by several authors. In this paper, we extend the non-linear random-effects model for a single response to the case of multiple responses, allowing for arbitrary patterns of observed and missing data. Parameters for this model are estimated via the EM algorithm and by the first-order approximation available in SAS Proc NLMIXED. The set of equations for this estimation procedure is derived, and these are appropriately modified to deal with missing data. The methodology is illustrated with an example using data from a study of 161 pregnant women presenting to a private obstetrics clinic in Santiago, Chile.

15.
Epidemiologic and clinical studies routinely collect longitudinal measures of multiple outcomes, including biomarker measures, cognitive functions, and clinical symptoms. These longitudinal outcomes can be used to establish the temporal order of relevant biological processes and their association with the onset of clinical symptoms. Univariate change point models have been used to model various clinical endpoints, such as CD4 count in studying the progression of HIV infection and cognitive function in the elderly. We propose to use bivariate change point models for two longitudinal outcomes with a focus on the correlation between the two change points. We consider three types of change point models in the bivariate model setting: the broken-stick model, the Bacon–Watts model, and the smooth polynomial model. We adopt a Bayesian approach using a Markov chain Monte Carlo sampling method for parameter estimation and inference. We assess the proposed methods in simulation studies and demonstrate the methodology using data from a longitudinal study of dementia. Copyright © 2012 John Wiley & Sons, Ltd.
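Of the three mean structures, the broken-stick form is the simplest: a trajectory that is linear before and after a change point. A minimal least-squares sketch for a single outcome is shown below (illustrative data only; the paper's formulation is Bayesian and bivariate, linking two such change points).

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_stick(t, b0, b1, b2, cp):
    """Mean trajectory whose slope changes from b1 to b1 + b2 at change point cp."""
    return b0 + b1 * t + b2 * np.maximum(t - cp, 0.0)

rng = np.random.default_rng(3)
t = np.linspace(60, 90, 40)                      # e.g., age in a dementia-style study
y = broken_stick(t, 28.0, -0.05, -0.6, 78.0) + rng.normal(0, 0.8, t.size)

params, _ = curve_fit(broken_stick, t, y, p0=[28.0, 0.0, -0.5, 75.0])
print(dict(zip(["b0", "b1", "b2", "change_point"], np.round(params, 2))))
```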

16.
Provider profiling entails comparing the performance of hospitals on indicators of quality of care. Many common indicators of healthcare quality are binary (eg, short-term mortality, use of appropriate medications). Typically, provider profiling examines the variation in each indicator in isolation across hospitals. We developed Bayesian multivariate response random effects logistic regression models that allow one to simultaneously examine variation and covariation in multiple binary indicators across hospitals. Use of this model allows for (i) determining the probability that a hospital has poor performance on a single indicator; (ii) determining the probability that a hospital has poor performance on multiple indicators simultaneously; (iii) determining, by using the Mahalanobis distance, how far the performance of a given hospital is from that of an average hospital. We illustrate the utility of the method by applying it to 10 881 patients hospitalized with acute myocardial infarction at 102 hospitals. We considered six binary patient-level indicators of quality of care: use of reperfusion, assessment of left ventricular ejection fraction, measurement of cardiac troponins, use of acetylsalicylic acid within 6 hours of hospital arrival, use of beta-blockers within 12 hours of hospital arrival, and survival to 30 days after hospital admission. When considering the five measures evaluating processes of care, we found that there was a strong correlation between a hospital's performance on one indicator and its performance on a second indicator for five of the 10 possible comparisons. We compared inferences made using this approach with those obtained using a latent variable item response theory model.
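The third use of the model, asking how far a hospital's vector of indicator-specific effects lies from the average hospital, is a Mahalanobis distance computed with respect to the random-effects covariance. A small sketch with hypothetical effect estimates and covariance:

```python
import numpy as np

def hospital_distance(effects, sigma):
    """Mahalanobis distance of a hospital's random-effect vector from the
    'average' hospital (the zero vector), using the random-effects covariance."""
    sigma_inv = np.linalg.inv(sigma)
    return float(np.sqrt(effects @ sigma_inv @ effects))

# Hypothetical posterior means for three quality indicators at one hospital
effects = np.array([-0.9, -0.4, 0.2])
sigma = np.array([[0.50, 0.20, 0.05],
                  [0.20, 0.40, 0.10],
                  [0.05, 0.10, 0.30]])
print(hospital_distance(effects, sigma))
```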

17.
Lim J, Wang X, Lee S, Jung SH. Statistics in Medicine 2008, 27(19): 3833-3846
We propose a distribution-free procedure, an analogue of the DIP test in non-parametric regression, to test whether the means of responses are constant over time in repeated measures data. Unlike existing tests, the proposed procedure requires only minimal assumptions on the distributions of both the random effects and the errors. We study the asymptotic reference distribution of the test statistic analytically and propose a permutation procedure to approximate the finite-sample reference distribution. The size and power of the proposed test are illustrated and compared with competitors through several simulation studies. We find that it performs well for small sample sizes, regardless of model specification. Finally, we apply our test to a data example comparing the effect of fatigue between two different methods used for cardiopulmonary resuscitation.
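The permutation step can be sketched directly: under the null hypothesis of constant means over time, the time labels within each subject are exchangeable, so permuting them within rows approximates the reference distribution. The statistic below is a generic measure of between-time-point variation standing in for the paper's DIP-type statistic, and the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(5)

def between_time_stat(data):
    """Generic statistic: variability of time-point means around the grand mean."""
    return np.var(data.mean(axis=0))

def permutation_test(data, n_perm=5000):
    """Permute time labels within each subject (row) to approximate the null
    distribution of the statistic under constant means over time."""
    observed = between_time_stat(data)
    count = 0
    for _ in range(n_perm):
        permuted = np.apply_along_axis(rng.permutation, 1, data)
        count += between_time_stat(permuted) >= observed
    return observed, count / n_perm

# Hypothetical repeated-measures data: 12 subjects x 6 time points, slight time trend
data = rng.normal(size=(12, 6)) + np.linspace(0, 0.8, 6)
print(permutation_test(data, n_perm=2000))
```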

18.
We analyze data obtained from a study designed to evaluate training effects on the performance of certain motor activities of Parkinson's disease patients. Maximum likelihood methods were used to fit beta-binomial/Poisson regression models tailored to evaluate the effects of training on the numbers of attempted and successful specified manual movements in 1-minute periods, controlling for disease stage and use of the preferred hand. We extend models previously considered by other authors in univariate settings to account for the repeated-measures nature of the data. The results suggest that the expected numbers of attempts and successes increase with training, except for patients in advanced stages of the disease using the non-preferred hand.
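The beta-binomial part of such a model handles overdispersed counts of successes out of attempts. Below is a minimal sketch of that likelihood in a mean/overdispersion parameterization with a single illustrative covariate (training session); it omits the Poisson model for attempts and the repeated-measures structure, so it is a simplified stand-in rather than the authors' specification.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
from scipy.stats import betabinom

def neg_log_lik(params, successes, attempts, session):
    """Beta-binomial regression: logit(mean success probability) depends on
    training session; phi controls overdispersion."""
    b0, b1, log_phi = params
    mu = expit(b0 + b1 * session)
    phi = np.exp(log_phi)
    a, b = mu * phi, (1.0 - mu) * phi                     # beta parameters
    return -np.sum(betabinom.logpmf(successes, attempts, a, b))

rng = np.random.default_rng(9)
session = np.repeat(np.arange(5), 20)                     # five training sessions
attempts = rng.integers(10, 25, size=session.size)
p = rng.beta(3 + 0.8 * session, 5)                        # success probability rises with training
successes = rng.binomial(attempts, p)
fit = minimize(neg_log_lik, x0=[0.0, 0.0, 1.0],
               args=(successes, attempts, session), method="Nelder-Mead")
print(np.round(fit.x, 3))
```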

19.
For analyses of longitudinal repeated-measures data, statistical methods include the random-effects model, the fixed-effects model, and the method of generalized estimating equations (GEE). We examine the assumptions that underlie these approaches to assessing covariate effects on the mean of a continuous, dichotomous, or count outcome. Access to statistical software implementing these models has led to widespread application in numerous disciplines. However, careful consideration should be paid to their critical assumptions to ascertain which model might be appropriate in a given setting. To illustrate similarities and differences that might exist in empirical results, we use a study that assessed depressive symptoms in low-income pregnant women using a structured instrument with up to five assessments spanning the pre-natal and post-natal periods. Understanding the conceptual differences between the methods is important for their proper application, even though empirically they might not differ substantively. The choice of model in specific applications depends on the relevant questions being addressed, which in turn informs the type of design and data collection that would be relevant. Copyright © 2008 John Wiley & Sons, Ltd.
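To make the contrast concrete, the sketch below fits the same simulated continuous outcome with a random-intercept mixed model (a subject-specific formulation) and with GEE under an exchangeable working correlation (a population-averaged formulation); the data and variable names are hypothetical. For a linear model with a continuous outcome, the two fixed-effect estimates typically agree closely, which echoes the point that conceptual differences need not translate into large empirical differences.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Hypothetical depression-score data: 100 women, up to 5 visits each
n_subj, n_visits = 100, 5
df = pd.DataFrame({
    "id": np.repeat(np.arange(n_subj), n_visits),
    "visit": np.tile(np.arange(n_visits), n_subj),
})
subj_eff = rng.normal(0, 2.0, n_subj)
df["score"] = 15 - 0.8 * df["visit"] + subj_eff[df["id"]] + rng.normal(0, 1.5, len(df))

# Random-intercept (subject-specific) model
mixed = smf.mixedlm("score ~ visit", data=df, groups=df["id"]).fit()

# GEE (population-averaged) with an exchangeable working correlation
gee = smf.gee("score ~ visit", groups="id", data=df,
              cov_struct=sm.cov_struct.Exchangeable(),
              family=sm.families.Gaussian()).fit()

print(mixed.params["visit"], gee.params["visit"])   # typically very close here
```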

20.
Lin TI, Lee JC. Statistics in Medicine 2008, 27(9): 1490-1507
This paper extends the classical linear mixed model by considering a multivariate skew-normal assumption for the distribution of the random effects. We present an efficient hybrid ECME-NR algorithm for the computation of maximum-likelihood estimates of the parameters. A score test statistic for testing the existence of skewness among the random effects is developed. The technique for the prediction of future responses under this model is also investigated. The methodology is illustrated through an application to the Framingham cholesterol data and a simulation study.
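A convenient way to build intuition for the multivariate skew-normal assumption on the random effects is its selection representation: draw a latent standard normal jointly with the effect vector and flip the sign of the effects whenever the latent variable is negative. The sketch below generates such draws with illustrative parameter values.

```python
import numpy as np

rng = np.random.default_rng(13)

def rmvskewnorm(n, omega_bar, delta):
    """Draw n multivariate skew-normal vectors via the selection representation:
    sample (X0, X) jointly normal with corr(X0, X_j) = delta_j, then flip the
    sign of X whenever the latent X0 is negative."""
    d = len(delta)
    cov = np.empty((d + 1, d + 1))
    cov[0, 0] = 1.0
    cov[0, 1:] = cov[1:, 0] = delta
    cov[1:, 1:] = omega_bar
    draws = rng.multivariate_normal(np.zeros(d + 1), cov, size=n)
    x0, x = draws[:, 0], draws[:, 1:]
    return np.where(x0[:, None] > 0, x, -x)

# Hypothetical bivariate random effects (intercept, slope) with positive skewness
omega_bar = np.array([[1.0, 0.3],
                      [0.3, 1.0]])
delta = np.array([0.7, 0.4])
b = rmvskewnorm(5000, omega_bar, delta)
print(b.mean(axis=0))        # non-zero means reflect the induced skewness
```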
