Similar Articles (20 results found)
1.
Many endocrine systems are regulated by pulsatile hormones – hormones that are secreted intermittently in boluses rather than continuously over time. To study pulsatile secretion, blood is drawn every few minutes for an extended period. The result is a time series of hormone concentrations for each individual. The goal is to estimate pulsatile hormone secretion features such as frequency, location, duration, and amount of pulsatile and non-pulsatile secretion and compare these features between groups. Various statistical approaches to analyzing these data have been proposed, but validation has generally focused on one hormone. Thus, we lack a broad understanding of each method's performance. By using simulated data with features seen in reproductive and stress hormones, we investigated the performance of three recently developed statistical approaches for analyzing pulsatile hormone data and compared them to a frequently used deconvolution approach. We found that methods incorporating a changing baseline modeled both constant and changing baseline shapes well; however, the added model flexibility resulted in a slight increase in bias in other model parameters. When pulses were well defined and the baseline constant, Bayesian approaches performed similarly to the existing deconvolution method. The greater computation time of the Bayesian approaches was rewarded with improved estimation and more accurate quantification of estimation uncertainty in situations where pulse locations were not clearly identifiable. Within the class of deconvolution models for fitting pulsatile hormone data, the Bayesian approach with a changing baseline offered adequate results over the widest range of data. Copyright © 2013 John Wiley & Sons, Ltd.
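To make the data structure concrete, the following sketch simulates the kind of series these deconvolution methods target: Gaussian-shaped secretion boluses at random locations, convolved with first-order (exponential) elimination, on top of a constant baseline with multiplicative error. All parameter values (pulse count, half-life, sampling interval) are illustrative choices, not figures from the paper.

```python
import numpy as np

def simulate_pulsatile_series(n_minutes=720, sample_every=10, n_pulses=8,
                              baseline=1.5, half_life=45.0, pulse_mass=8.0,
                              pulse_sd=4.0, noise_sd=0.15, seed=0):
    """Simulate a hormone concentration time series with pulsatile secretion.

    Secretion is a sum of Gaussian-shaped boluses at random locations;
    concentration is that secretion convolved with exponential elimination,
    plus a constant baseline and multiplicative measurement error.
    Parameter values are illustrative only.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(0, n_minutes, 1.0)                     # 1-minute grid
    locs = np.sort(rng.uniform(0, n_minutes, n_pulses))  # pulse locations
    secretion = sum(pulse_mass / (pulse_sd * np.sqrt(2 * np.pi))
                    * np.exp(-0.5 * ((t - m) / pulse_sd) ** 2) for m in locs)
    decay = np.log(2) / half_life
    kernel = np.exp(-decay * t)                          # elimination kernel
    conc = baseline + np.convolve(secretion, kernel)[: len(t)]
    conc *= np.exp(rng.normal(0.0, noise_sd, len(t)))    # multiplicative error
    idx = np.arange(0, n_minutes, sample_every)          # periodic blood draws
    return t[idx], conc[idx]

t, y = simulate_pulsatile_series()
```

Estimation methods are then judged on how well they recover the pulse locations and masses fed into a simulator like this one.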

2.
In the context of a mathematical model describing HIV infection, we discuss a Bayesian modelling approach to a non-linear random effects estimation problem. The model and the data exhibit a number of features that make the use of an ordinary non-linear mixed effects model intractable: (i) the data are from two compartments fitted simultaneously against the implicit numerical solution of a system of ordinary differential equations; (ii) data from one compartment are subject to censoring; (iii) random effects for one variable are assumed to be from a beta distribution. We show how the Bayesian framework can be exploited by incorporating prior knowledge on some of the parameters, and by combining the posterior distributions of the parameters to obtain estimates of quantities of interest that follow from the postulated model.
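The "implicit numerical solution of a system of ordinary differential equations" step can be sketched with a generic target-cell-limited HIV model (uninfected cells T, infected cells I, free virus V). This is a standard textbook system, not necessarily the compartment model of the paper, and the parameter values are generic illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

def viral_load_curve(t_eval, lam=1e4, d=0.01, beta=2e-7, delta=0.4,
                     p=100.0, c=3.0, T0=1e6, I0=0.0, V0=50.0):
    """Solve a standard target-cell-limited HIV ODE model numerically.

    A Bayesian NLME fit wraps this kind of implicit ODE solution inside
    the likelihood; parameters here are generic, not the paper's.
    """
    def rhs(t, y):
        T, I, V = y
        return [lam - d * T - beta * T * V,   # uninfected target cells
                beta * T * V - delta * I,     # productively infected cells
                p * I - c * V]                # free virus
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), [T0, I0, V0],
                    t_eval=t_eval, rtol=1e-8, atol=1e-8)
    return sol.y[2]                           # viral load over time
```

In the full model, each subject gets their own parameter vector drawn from the random-effects distributions (including the beta-distributed effect), and censored viral loads contribute a cumulative-probability term to the likelihood.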

3.
It is of interest to estimate the distribution of usual nutrient intake for a population from repeat 24-h dietary recall assessments. A mixed effects model and quantile estimation procedure, developed at the National Cancer Institute (NCI), may be used for this purpose. The model incorporates a Box–Cox parameter and covariates to estimate usual daily intake of nutrients; model parameters are estimated via quasi-Newton optimization of a likelihood approximated by adaptive Gaussian quadrature. The parameter estimates are used in a Monte Carlo approach to generate empirical quantiles; standard errors are estimated by bootstrap. The NCI method is illustrated and compared with current estimation methods, including the individual mean and the semi-parametric method developed at Iowa State University (ISU), using data from a random sample and computer simulations. Both the NCI and ISU methods for nutrients are superior to the distribution of individual means. For simple (no covariate) models, quantile estimates are similar between the NCI and ISU methods. The bootstrap approach used by the NCI method to estimate standard errors of quantiles appears preferable to Taylor linearization. One major advantage of the NCI method is its ability to provide estimates for subpopulations through the incorporation of covariates into the model. The NCI method may be used for estimating the distribution of usual nutrient intake for populations and subpopulations as part of a unified framework of estimation of usual intake of dietary constituents. Copyright © 2010 John Wiley & Sons, Ltd.
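The core idea (transform, separate within-person from between-person variance, then Monte Carlo the usual-intake distribution) can be sketched as follows. A plain log transform stands in for the estimated Box-Cox transform, variance components come from a simple method-of-moments decomposition rather than adaptive Gaussian quadrature, and covariates are omitted, so this is an illustration of the logic, not the NCI estimator itself.

```python
import numpy as np

def usual_intake_quantiles(recalls, probs=(0.05, 0.25, 0.5, 0.75, 0.95),
                           n_mc=100_000, seed=0):
    """Monte Carlo quantiles of usual intake from repeat 24-h recalls.

    recalls: array (n_people, n_reps) of strictly positive intakes.
    Log transform stands in for Box-Cox; variance components are
    method-of-moments. Illustrative sketch, not the full NCI procedure.
    """
    z = np.log(recalls)
    n, r = z.shape
    person_mean = z.mean(axis=1)
    within_var = z.var(axis=1, ddof=1).mean()                 # day-to-day
    between_var = max(person_mean.var(ddof=1) - within_var / r, 0.0)
    rng = np.random.default_rng(seed)
    # usual intake: back-transform simulated person-level means
    usual = np.exp(rng.normal(z.mean(), np.sqrt(between_var), n_mc))
    return np.quantile(usual, probs)
```

Subtracting `within_var / r` removes the inflation of the person-mean variance caused by averaging only `r` noisy recalls per person; without it, the tails of the usual-intake distribution would be too wide.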

4.
Studies of older adults often involve interview questions regarding subjective constructs such as perceived disability. In some studies, when subjects are unable (e.g. due to cognitive impairment) or unwilling to respond to these questions, proxies (e.g. relatives or other care givers) are recruited to provide responses in place of the subject. Proxies are usually not approached to respond on behalf of subjects who respond for themselves; thus, for each subject, data from only one of the subject or proxy are available. Typically, proxy responses are simply substituted for missing subject responses, and standard complete-data analyses are performed. However, this approach may introduce measurement error and produce biased parameter estimates. In this paper, we propose using pattern-mixture models that relate non-identifiable parameters to identifiable parameters to analyze data with proxy respondents. We posit three interpretable pattern-mixture restrictions to be used with proxy data, and we propose estimation procedures using maximum likelihood and multiple imputation. The methods are applied to a cohort of elderly hip-fracture patients. Copyright © 2010 John Wiley & Sons, Ltd.
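The flavor of a pattern-mixture identifying restriction can be shown with a deliberately minimal example: estimate an overall mean when some subjects contributed only a proxy response, under the assumption that a proxy-only subject's own mean equals the proxy mean plus a fixed offset delta. Delta is a sensitivity parameter that the data cannot identify; this toy restriction is for illustration and is not one of the paper's three restrictions.

```python
import numpy as np

def pattern_mixture_mean(self_resp, proxy_resp, delta=0.0):
    """Overall mean under a simple pattern-mixture restriction.

    self_resp: responses from subjects who answered themselves.
    proxy_resp: responses from proxies for the remaining subjects.
    delta: assumed (non-identifiable) offset between a proxy's answer
    and the subject's own answer. Illustrative toy restriction only.
    """
    n1, n2 = len(self_resp), len(proxy_resp)
    adjusted = np.mean(proxy_resp) + delta   # imputed subject-pattern mean
    return (n1 * np.mean(self_resp) + n2 * adjusted) / (n1 + n2)
```

Varying delta over a plausible range and re-estimating is the simplest form of the sensitivity analysis such restrictions enable.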

5.
This paper describes a semi-parametric Bayesian approach for estimating receiver operating characteristic (ROC) curves based on mixtures of Dirichlet process priors (MDP). We address difficulties in modelling the underlying distribution of screening scores due to non-normality that may lead to incorrect choices of diagnostic cut-offs and unreliable estimates of prevalence of the disease. MDP is a robust tool for modelling non-standard diagnostic distributions associated with imperfect classification of an underlying diseased population, for example, when a diagnostic test is not a gold standard. For posterior computations, we propose an efficient Gibbs sampling framework based on a finite-dimensional approximation to MDP. We show, using both simulated and real data sets, that MDP modelling for ROC curve estimation closely parallels the frequentist kernel density estimation (KDE) approach.
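For reference, the frequentist baseline being paralleled starts from the empirical ROC curve: sweep a cut-off across the observed scores and record the true- and false-positive rates. The sketch below computes that curve and its area; the MDP and KDE approaches of the paper effectively smooth this step function.

```python
import numpy as np

def empirical_roc(diseased, healthy):
    """Empirical ROC curve and AUC from diagnostic scores.

    diseased, healthy: 1-d arrays of scores (higher = more suspicious).
    Returns (fpr, tpr, auc) for thresholds swept over observed scores.
    """
    cuts = np.sort(np.unique(np.concatenate([diseased, healthy])))
    # sweep thresholds from high to low: sensitivity / 1 - specificity
    tpr = np.array([np.mean(diseased >= c) for c in cuts[::-1]])
    fpr = np.array([np.mean(healthy >= c) for c in cuts[::-1]])
    tpr = np.concatenate([[0.0], tpr, [1.0]])
    fpr = np.concatenate([[0.0], fpr, [1.0]])
    auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0)  # trapezoid rule
    return fpr, tpr, auc
```

With non-normal or multimodal score distributions, the cut-off that looks optimal on this raw curve can be misleading, which is the motivation for the model-based smoothing above.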

6.
In this paper, we formalize the application of multivariate meta-analysis and meta-regression to synthesize estimates of multi-parameter associations obtained from different studies. This modelling approach extends the standard two-stage analysis used to combine results across different sub-groups or populations. The most straightforward application is for the meta-analysis of non-linear relationships, described for example by regression coefficients of splines or other functions, but the methodology easily generalizes to any setting where complex associations are described by multiple correlated parameters. The modelling framework of multivariate meta-analysis is implemented in the package mvmeta within the statistical environment R. As an illustrative example, we propose a two-stage analysis for investigating the non-linear exposure–response relationship between temperature and non-accidental mortality using time-series data from multiple cities. Multivariate meta-analysis represents a useful analytical tool for studying complex associations through a two-stage procedure. Copyright © 2012 John Wiley & Sons, Ltd.
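The second stage of such an analysis, in its simplest fixed-effects form, is a generalized-least-squares pooling of the per-study coefficient vectors weighted by their inverse covariance matrices. The sketch below shows that step only (no random effects or meta-regression covariates, which mvmeta adds on top).

```python
import numpy as np

def mv_meta_fixed(betas, covs):
    """Fixed-effects multivariate meta-analysis (second-stage GLS pooling).

    betas: sequence of k coefficient vectors, each length p (e.g. the
    spline coefficients of a city-specific exposure-response curve).
    covs: sequence of k (p, p) covariance matrices for those vectors.
    Minimal sketch of the pooling step; random effects omitted.
    """
    W = [np.linalg.inv(S) for S in covs]              # precision weights
    V = np.linalg.inv(sum(W))                         # pooled covariance
    beta = V @ sum(w @ b for w, b in zip(W, betas))   # pooled coefficients
    return beta, V
```

Pooling whole coefficient vectors with their cross-term covariances, rather than one parameter at a time, is what lets the combined curve keep its shape.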

7.
Xenograft trials allow tumor growth in human cell lines to be monitored over time in a mouse model. We consider the problem of inferring the effect of treatment combinations on tumor growth. A piecewise quadratic model with flexible phase change locations is proposed to model the effect of change in therapy over time. Each piece represents a growth phase, with phase changes in response to change in treatment. Piecewise slopes represent phase-specific (log) linear growth rates and curvature parameters represent departure from linear growth. Trial data are analyzed in two stages: (i) subject-specific curve fitting and (ii) analysis of slope and curvature estimates across subjects. A least-squares approach with a penalty for phase change point location is proposed for curve fitting. In simulation studies, the method is shown to give consistent estimates of slope and curvature parameters under independent and AR(1) measurement error. The piecewise quadratic model is shown to give excellent fit (median R² = 0.98) to growth data from a six-armed xenograft trial on a lung carcinoma cell line. Copyright © 2010 John Wiley & Sons, Ltd.
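With the change points treated as known, stage (i) reduces to ordinary least squares on a truncated-power basis: intercept, t, t², plus (t − c)₊ and (t − c)₊² terms for each change point c, so slope and curvature may jump at each phase change while the fitted curve stays continuous. The paper's penalized search over change-point locations is not reproduced here.

```python
import numpy as np

def fit_piecewise_quadratic(t, y, change_points):
    """Least-squares fit of a continuous piecewise quadratic in time.

    change_points are taken as known; the paper additionally searches
    over their locations with a penalty. Sketch of the curve-fitting
    stage only.
    """
    X = [np.ones_like(t), t, t ** 2]
    for c in change_points:
        tp = np.clip(t - c, 0.0, None)         # truncated power basis
        X += [tp, tp ** 2]
    X = np.column_stack(X)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, X @ coef                      # coefficients, fitted values
```

Stage (ii) then treats each subject's fitted slopes and curvatures as responses in a cross-subject analysis of treatment effects.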

8.
It is common practice to analyze complex longitudinal data using nonlinear mixed-effects (NLME) models with a normality assumption. NLME models with normal distributions provide the most popular framework for modeling continuous longitudinal outcomes, assuming individuals are from a homogeneous population and relying on random effects to accommodate inter-individual variation. However, two issues may stand out: (i) the normality assumption for model errors may cause a lack of robustness and subsequently lead to invalid inference and unreasonable estimates, particularly if the data exhibit skewness, and (ii) a homogeneous population assumption may be unrealistic, obscuring important features of between-subject and within-subject variation, which may result in unreliable modeling results. There have been relatively few studies concerning longitudinal data with both heterogeneity and skewness features. In the last two decades, skew distributions have proven beneficial in dealing with asymmetric data in various applications. In this article, our objective is to address the simultaneous impact of both features arising in longitudinal data by developing a flexible finite mixture of NLME models with skew distributions under a Bayesian framework that allows estimation of both model parameters and class membership probabilities for longitudinal data. Simulation studies are conducted to assess the performance of the proposed models and methods, and a real example from an AIDS clinical trial illustrates the methodology by modeling the viral dynamics to compare potential models with different distribution specifications; the analysis results are reported. Copyright © 2014 John Wiley & Sons, Ltd.
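The skewed error laws substituted for the normal in such models can be illustrated with the skew-normal distribution, generated via its standard additive representation Z = δ|U0| + √(1 − δ²)U1 with δ = α/√(1 + α²). This sketches the distribution only, not the mixture-model fit.

```python
import numpy as np

def rskewnormal(n, loc=0.0, scale=1.0, alpha=5.0, seed=0):
    """Draw n variates from a skew-normal distribution.

    Uses the additive representation Z = delta*|U0| + sqrt(1-delta^2)*U1
    with shape parameter alpha (alpha=0 recovers the normal). A sketch
    of the error distribution only, not the paper's model fit.
    """
    rng = np.random.default_rng(seed)
    delta = alpha / np.sqrt(1.0 + alpha ** 2)
    u0, u1 = rng.normal(size=(2, n))
    return loc + scale * (delta * np.abs(u0) + np.sqrt(1.0 - delta ** 2) * u1)
```

In the full model, each mixture component gets its own NLME trajectory with such skewed errors, and the posterior yields both component parameters and class membership probabilities.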

9.
We provide a simple and practical, yet flexible, penalized estimation method for a Cox proportional hazards model with current status data. We approximate the baseline cumulative hazard function by monotone B-splines and use a hybrid approach based on the Fisher-scoring algorithm and isotonic regression to compute the penalized estimates. We show that the penalized estimator of the nonparametric component achieves the optimal rate of convergence under certain smoothness conditions and that the estimators of the regression parameters are asymptotically normal and efficient. Moreover, a simple variance estimation method is considered for inference on the regression parameters. We perform two extensive Monte Carlo studies to evaluate the finite-sample performance of the penalized approach and compare it with three competing R packages: C1.coxph, intcox, and ICsurv. A goodness-of-fit test and model diagnostics are also discussed. The methodology is illustrated with two real applications.
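The isotonic-regression building block in the hybrid algorithm is the pool-adjacent-violators algorithm (PAVA): the monotone nondecreasing least-squares fit to a sequence, which is what keeps the estimated cumulative hazard monotone. The standalone sketch below is the generic algorithm, not the paper's full estimator.

```python
import numpy as np

def pava(y, w=None):
    """Pool-adjacent-violators: weighted isotonic (nondecreasing)
    least-squares fit to the sequence y. Generic algorithm sketch."""
    y = np.asarray(y, float)
    w = np.ones_like(y) if w is None else np.asarray(w, float)
    blocks = [[v, wt] for v, wt in zip(y, w)]   # [block mean, block weight]
    sizes = [1] * len(y)
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:     # violation: merge the blocks
            v0, w0 = blocks[i]
            v1, w1 = blocks[i + 1]
            blocks[i] = [(w0 * v0 + w1 * v1) / (w0 + w1), w0 + w1]
            sizes[i] += sizes[i + 1]
            del blocks[i + 1]
            del sizes[i + 1]
            i = max(i - 1, 0)                   # re-check against previous block
        else:
            i += 1
    return np.repeat([b[0] for b in blocks], sizes)
```

Alternating a Fisher-scoring update of the spline coefficients with a projection of this kind is one way to enforce monotonicity at each iteration.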

10.
Pulsatile secretion of hormones in the hypothalamic-pituitary-gonadal axis is critical for normal functioning of the reproductive system. Thus, appropriate characterization of pulsatile secretion is important for identifying the (patho)physiology of reproductive conditions. Existing analysis methods often fail to adequately characterize pulsatility, especially when the signal-to-noise ratio is low. Newer Bayesian analysis methods for pulsatile hormones may offer improved secretion quantification in noisier data. The objective of this study was to extensively validate a Bayesian analysis approach for analyzing pulsatile hormones in settings that occur in reproductive studies. An investigative approach was chosen so that clinical research teams will have the knowledge to adopt this newer analysis approach in practice. Three experimental conditions were investigated: luteinizing hormone (LH) profiles in ovariectomized ewes (N=6; high signal-to-noise setting), LH profiles in young ovulating women (N=12; lower signal-to-noise setting), and computer-simulated scenarios (N=200). For each experimental condition, differences in luteinizing hormone pulse outcomes (pulse number, average pulse size, hormone half-life, and non-pulse secretion) were obtained and compared between non-Bayesian and Bayesian pulse analysis methods. For the ewe model, the estimated pulse number and mass were comparable between the Bayesian and non-Bayesian analyses. For the human model, only 4 of 12 subjects could be fitted with the non-Bayesian analysis compared to 10 of the 12 with Bayesian analysis. In general, the Bayesian analysis had lower false negative rates (<4.5%) compared to the non-Bayesian analysis while maintaining a high specificity (false positive rate <2.5%). The Bayesian analysis also had less biased estimates of all pulse features.
In conclusion, Bayesian analysis provides a more reliable pulse characterization in low signal-to-noise experiments and should be used for the analysis of reproductive physiology studies of pulsatile hormones. Software is available at www.github.com/BayesPulse.

Abbreviations: LH: luteinizing hormone; FSH: follicle stimulating hormone; GnRH: gonadotropin-releasing hormone; FP: false positive; FN: false negative


11.
Nonresponses and missing data are common in observational studies. Ignoring or inadequately handling missing data may lead to biased parameter estimation, incorrect standard errors and, as a consequence, incorrect statistical inference and conclusions. We present a strategy for modelling non-ignorable missingness where the probability of nonresponse depends on the outcome. Using a simple case of logistic regression, we quantify the bias in regression estimates and show the observed likelihood is non-identifiable under a non-ignorable missing data mechanism. We then adopt a selection model factorisation of the joint distribution as the basis for a sensitivity analysis to study changes in estimated parameters and the robustness of study conclusions against different assumptions. A Bayesian framework for model estimation is used as it provides a flexible approach for incorporating different missing data assumptions and conducting sensitivity analysis. Using simulated data, we explore the performance of the Bayesian selection model in correcting for bias in a logistic regression. We then implement our strategy using survey data from the 45 and Up Study to investigate factors associated with worsening health from the baseline to follow-up survey. Our findings have practical implications for the use of the 45 and Up Study data to answer important research questions relating to health and quality-of-life. Copyright © 2017 John Wiley & Sons, Ltd.  相似文献

12.
It is routinely argued that, unlike standard regression-based estimates, inverse probability weighted (IPW) estimates of the parameters of a correctly specified Cox marginal structural model (MSM) may remain unbiased in the presence of a time-varying confounder affected by prior treatment. Previously proposed methods for simulating from a known Cox MSM lack knowledge of the law of the observed outcome conditional on the measured past. Although unbiased IPW estimation does not require this knowledge, standard regression-based estimates rely on correct specification of this law. Thus, in typical high-dimensional settings, such simulation methods cannot isolate bias due to complex time-varying confounding as it may be conflated with bias due to misspecification of the outcome regression model. In this paper, we describe an approach to Cox MSM data generation that allows for a comparison of the bias of IPW estimates versus that of standard regression-based estimates in the complete absence of model misspecification. This approach involves simulating data from a standard parametrization of the likelihood and solving for the underlying Cox MSM. We prove that solutions exist and computations are tractable under many data-generating mechanisms. We show analytically and confirm in simulations that, in the absence of model misspecification, the bias of standard regression-based estimates for the parameters of a Cox MSM is indeed a function of the coefficients in observed data models quantifying the presence of a time-varying confounder affected by prior treatment. We discuss limitations of this approach including that implied by the 'g-null paradox'. Copyright © 2013 John Wiley & Sons, Ltd.

13.
The case/pseudocontrol method provides a convenient framework for family-based association analysis of case-parent trios, incorporating several previously proposed methods such as the transmission/disequilibrium test and log-linear modelling of parent-of-origin effects. The method allows genotype and haplotype analysis at an arbitrary number of linked and unlinked multiallelic loci, as well as modelling of more complex effects such as epistasis, parent-of-origin effects, maternal genotype and mother-child interaction effects, and gene-environment interactions. Here we extend the method for analysis of quantitative as opposed to dichotomous (e.g. disease) traits. The resulting method can be thought of as a retrospective approach, modelling genotype given trait value, in contrast to prospective approaches that model trait given genotype. Through simulations and analytical derivations, we examine the power and properties of our proposed approach, and compare it to several previously proposed single-locus methods for quantitative trait association analysis. We investigate the performance of the different methods when extended to allow analysis of haplotype, maternal genotype and parent-of-origin effects. With randomly ascertained families, with or without population stratification, the prospective approach (modelling trait value given genotype) is found to be generally most effective, although the retrospective approach has some advantages with regard to estimation and interpretability of parameter estimates when applied to selected samples.
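At a single biallelic locus, the pseudocontrols are simply the offspring genotypes the parents could have transmitted but did not: the four possible transmissions give the case plus three matched pseudocontrols, counted with multiplicity. The sketch below enumerates that single-locus set; the multi-locus, haplotype, and parent-of-origin extensions of the framework are not reproduced.

```python
from itertools import product

def pseudocontrols(mother, father, case):
    """Enumerate the matched pseudocontrol set for one biallelic locus.

    Genotypes are allele pairs like (0, 1). The four mother x father
    allele transmissions give the case genotype plus three matched
    pseudocontrols (with multiplicity). Single-locus sketch only.
    """
    offspring = [tuple(sorted((m, f)))
                 for m, f in product(mother, father)]    # 4 transmissions
    case = tuple(sorted(case))
    assert case in offspring, "case genotype inconsistent with parents"
    pseudo = list(offspring)
    pseudo.remove(case)                                  # drop one copy of the case
    return pseudo
```

Conditional logistic regression of case versus pseudocontrols within each trio then yields the genotype relative-risk estimates.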

14.
Quantitative evidence synthesis through meta-analysis is central to evidence-based medicine. For well-documented reasons, the meta-analysis of individual patient data is held in higher regard than aggregate data. With access to individual patient data, the analysis is not restricted to a "two-stage" approach (combining estimates and standard errors) but can estimate parameters of interest by fitting a single model to all of the data, a so-called "one-stage" analysis. There has been debate about the merits of one- and two-stage analysis. Arguments for one-stage analysis have typically noted that a wider range of models can be fitted and overall estimates may be more precise. The two-stage side has emphasised that the models that can be fitted in two stages are sufficient to answer the relevant questions, with less scope for mistakes because there are fewer modelling choices to be made in the two-stage approach. For Gaussian data, we consider the statistical arguments for flexibility and precision in small-sample settings. Regarding flexibility, several of the models that can be fitted only in one stage may not be of serious interest to most meta-analysis practitioners. Regarding precision, we consider fixed- and random-effects meta-analysis and see that, for a model making certain assumptions, the number of stages used to fit this model is irrelevant; the precision will be approximately equal. Meta-analysts should choose modelling assumptions carefully. Sometimes relevant models can only be fitted in one stage. Otherwise, meta-analysts are free to use whichever procedure is most convenient to fit the identified model.
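The equivalence claim has an exact special case that is easy to verify: for Gaussian data under a common-effect model with a shared known within-study variance, the one-stage estimate (weighted mean of all observations) and the two-stage estimate (inverse-variance pooling of study means) coincide. The sketch below demonstrates that case; it simplifies by sharing one variance across studies.

```python
import numpy as np

def one_vs_two_stage(study_data, sigma2):
    """Common-effect meta-analysis of Gaussian data, one stage vs two.

    study_data: list of 1-d arrays of raw observations per study.
    sigma2: known within-study variance, shared across studies for
    simplicity. The two estimates coincide under this model.
    """
    # one stage: every observation has weight 1/sigma2 -> grand mean
    one_stage = np.concatenate(study_data).mean()
    # two stages: study means with variances sigma2/n_i, then pool
    means = np.array([d.mean() for d in study_data])
    w = np.array([len(d) / sigma2 for d in study_data])
    two_stage = np.sum(w * means) / np.sum(w)
    return one_stage, two_stage
```

The interesting questions arise when the model's assumptions are relaxed (unknown variances, random effects, small studies), where the paper argues the precision remains approximately, rather than exactly, equal.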

15.
A recent meta-regression of antidepressant efficacy on baseline depression severity has caused considerable controversy in the popular media. A central source of the controversy is a lack of clarity about the relation of meta-regression parameters to corresponding parameters in models for subject-level data. This paper focuses on a linear regression with continuous outcome and predictor, a case that is often considered less problematic. We frame meta-regression in a general mixture setting that encompasses both finite and infinite mixture models. In many applications of meta-analysis, the goal is to evaluate the efficacy of a treatment from several studies, and authors use meta-regression on grouped data to explain variations in the treatment efficacy by study features. When the study feature is a characteristic that has been averaged over subjects, it is difficult not to interpret the meta-regression results on a subject level, a practice that is still widespread in medical research. Although much of the attention in the literature is on methods of estimating meta-regression model parameters, our results illustrate that estimation methods cannot protect against erroneous interpretations of meta-regression on grouped data. We derive relations between meta-regression parameters and within-study model parameters and show that the conditions under which slopes from these models are equal cannot be verified on the basis of group-level information only. The effects of these model violations cannot be known without subject-level data. We conclude that interpretations of meta-regression results are highly problematic when the predictor is a subject-level characteristic that has been averaged over study subjects. Copyright © 2013 John Wiley & Sons, Ltd.
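The divergence between subject-level and group-level slopes can be made concrete with a small simulation: every study has subject-level slope 1.0, but study intercepts are correlated with the study's mean predictor level, so regressing study means on averaged predictors recovers a very different slope. The data-generating numbers are illustrative only.

```python
import numpy as np

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

def aggregation_bias_demo(n_studies=50, n_per=200, seed=2):
    """Between-study (meta-regression) slope when the within-study
    slope is 1.0 everywhere but study intercepts track the study's
    mean predictor level (a study-level confounder). Illustrative."""
    rng = np.random.default_rng(seed)
    mu = rng.normal(0.0, 1.0, n_studies)             # mean predictor levels
    a = 2.0 * mu + rng.normal(0.0, 0.1, n_studies)   # intercepts track mu
    xbar, ybar = [], []
    for i in range(n_studies):
        x = mu[i] + rng.normal(0.0, 1.0, n_per)
        y = a[i] + 1.0 * x + rng.normal(0.0, 1.0, n_per)
        xbar.append(x.mean())
        ybar.append(y.mean())
    return slope(xbar, ybar)                          # within-study slope is 1.0
```

Nothing in the group-level data (xbar, ybar) reveals the correlation between intercepts and mean predictor levels, which is the paper's point: the equality of the two slopes cannot be verified from group-level information alone.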

16.
We develop a multivariate cure survival model to estimate lifetime patterns of colorectal cancer screening. Screening data cover long periods of time, with sparse observations for each person. Some events may occur before the study begins or after the study ends, so the data are both left-censored and right-censored, and some individuals are never screened (the 'cured' population). We propose a multivariate parametric cure model that can be used with left-censored and right-censored data. Our model allows for the estimation of the time to screening as well as the average number of times individuals will be screened. We calculate likelihood functions based on the observations for each subject using a distribution that accounts for within-subject correlation and estimate parameters using Markov chain Monte Carlo methods. We apply our methods to the estimation of lifetime colorectal cancer screening behavior in the SEER-Medicare data set. Copyright © 2016 John Wiley & Sons, Ltd.
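The 'cured' fraction idea can be sketched with the simplest mixture cure data-generating process: a fraction of the population never experiences the event (time = +inf), and the rest draw an event time from a parametric law (Weibull here). Censoring and the multivariate within-subject structure of the paper's model are omitted, and the parameter values are illustrative.

```python
import numpy as np

def simulate_cure_times(n, p_cured=0.35, shape=1.5, scale=10.0, seed=0):
    """Simulate event times under a univariate mixture cure model.

    With probability p_cured the subject is 'cured' (never screened,
    time = +inf); otherwise the time to screening is Weibull(shape)
    rescaled by `scale`. Illustrative sketch; censoring and the
    multivariate structure are omitted.
    """
    rng = np.random.default_rng(seed)
    cured = rng.random(n) < p_cured
    times = scale * rng.weibull(shape, n)
    times[cured] = np.inf                 # cured subjects never have the event
    return times
```

Fitting reverses this: the likelihood mixes the cured point mass with the parametric density, and left/right censoring enters through the corresponding survival-function terms.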

17.
This note implements an unstructured decaying product matrix via the quasi-least squares approach for estimation of the correlation parameters in the framework of generalized estimating equations. The structure we consider is fairly general without requiring the large number of parameters that are involved in a fully unstructured matrix. It is straightforward to show that the quasi-least squares estimators of the correlation parameters yield feasible values for the unstructured decaying product structure. Furthermore, subject to conditions that are easily checked, the quasi-least squares estimators are valid for longitudinal Bernoulli data. We demonstrate implementation of the structure in a longitudinal clinical trial with both a continuous and binary outcome variable. Copyright © 2014 John Wiley & Sons, Ltd.

18.
Ecological momentary assessment studies usually produce intensively measured longitudinal data with large numbers of observations per unit, and research interest is often centered on understanding the changes in variation of people's thoughts, emotions and behaviors. Hedeker et al. developed a 2-level mixed effects location scale model that allows observed covariates as well as unobserved variables to influence both the mean and the within-subjects variance, for a 2-level data structure where observations are nested within subjects. In some ecological momentary assessment studies, subjects are measured at multiple waves, and within each wave, subjects are measured over time. Li and Hedeker extended the original 2-level model to a 3-level data structure where observations are nested within days and days are then nested within subjects, by including a random location and scale intercept at the intermediate wave level. However, the 3-level random intercept model assumes a constant response change rate for both the mean and variance. To account for changes in variance across waves, as well as clustering attributable to waves, we propose a more comprehensive location scale model that allows subject heterogeneity at baseline as well as across different waves, for a 3-level data structure where observations are nested within waves and waves are then further nested within subjects. The model parameters are estimated using Markov chain Monte Carlo methods. We provide details on the Bayesian estimation approach and demonstrate how the Stan statistical software can be used to sample from the desired distributions and achieve consistent estimates. The proposed model is validated via a series of simulation studies. Data from an adolescent smoking study are analyzed to demonstrate this approach. The analyses clearly favor the proposed model and show significant subject heterogeneity at baseline as well as change over time, for both mood mean and variance.
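The defining feature of location scale data can be sketched at the 2-level case: each subject has a random location (shifting their mean) and a random scale (a log-normal multiplier on their within-subject variance). The wave level, covariates, and the Stan-based estimation are omitted; parameter values are illustrative.

```python
import numpy as np

def simulate_location_scale(n_subj=100, n_obs=30, sd_loc=1.0,
                            sd_scale=0.5, seed=0):
    """Simulate 2-level data with random location AND scale effects.

    Each subject i gets a random mean shift loc[i] and a random
    log-variance log_scale[i]; observations are then normal around
    loc[i] with subject-specific spread. Illustrative 2-level sketch
    of the structure the 3-level model generalizes.
    """
    rng = np.random.default_rng(seed)
    loc = rng.normal(0.0, sd_loc, n_subj)          # random subject means
    log_scale = rng.normal(0.0, sd_scale, n_subj)  # random log-variances
    noise = rng.normal(0.0, 1.0, (n_subj, n_obs))
    return loc[:, None] + np.exp(log_scale / 2.0)[:, None] * noise
```

A standard mixed model would treat the row variances as equal up to sampling noise; here they genuinely differ across subjects, which is the signal location scale models are built to capture.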
The proposed 3-level location scale model can be widely applied to areas of research where the interest lies in the consistency in addition to the mean level of the responses.

19.
Chen Z, Shi NZ, Gao W, Tang ML. Statistics in Medicine 2012; 31(13): 1323–1341.
Semiparametric methods for longitudinal data with association within subjects have recently received considerable attention. However, existing methods for semiparametric longitudinal binary regression modeling (i) mainly concern mean structures, with association parameters treated as nuisance; (ii) generally require correct specification of the covariance structure, because a misspecified covariance structure may lead to inefficient mean parameter estimates; and (iii) usually run into computation and estimation problems when the time points are irregular and possibly subject-specific. In this article, we propose a semiparametric logistic regression model, which simultaneously takes into account both the mean and response-association structures (via conditional log-odds ratio) for multivariate longitudinal binary outcomes. Our main interest lies in efficient estimation of both the marginal and association parameters. The estimators of the parameters are obtained via the profile kernel approach. We evaluate the proposed methodology through simulation studies and apply it to a real dataset. Both theoretical and empirical results demonstrate that the proposed method yields highly efficient estimators and performs satisfactorily.
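An odds-ratio association structure for a pair of binary outcomes can be made concrete with the Plackett construction: given the two marginal success probabilities and the odds ratio, the joint probability P(Y1=1, Y2=1) is the root of a quadratic. This sketches the association structure only, not the paper's profile kernel estimation; the closed form below assumes the odds ratio differs from 1.

```python
import numpy as np

def joint_prob_from_odds_ratio(p1, p2, psi):
    """Joint success probability p11 for two binary outcomes with
    marginals p1, p2 and odds ratio psi (Plackett construction).

    Requires psi != 1 (psi = 1 gives independence, p11 = p1 * p2).
    Sketch of an odds-ratio association structure only.
    """
    a = 1.0 + (p1 + p2) * (psi - 1.0)
    disc = a * a - 4.0 * psi * (psi - 1.0) * p1 * p2
    return (a - np.sqrt(disc)) / (2.0 * (psi - 1.0))
```

Given p11, the remaining cell probabilities follow by subtraction (p10 = p1 − p11, etc.), which is what makes the odds ratio a convenient association parameter for binary longitudinal pairs.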
