20 similar documents found; search time 15 ms
1.
Mixed-effects models have recently become popular for analyzing sparse longitudinal data that arise naturally in biological, agricultural and biomedical studies. Traditional approaches assume independent residuals over time and explain the longitudinal dependence by random effects. However, when bivariate or multivariate traits are measured longitudinally, this fundamental assumption is likely to be violated because of intertrait dependence over time. We provide a more general framework where the dependence of the observations from the same subject over time is not assumed to be explained completely by the random effects of the model. We propose a novel, mixed model-based approach and estimate the error covariance structure nonparametrically under a generalized linear model framework. We use penalized splines to model the general effect of time, and we consider a Dirichlet process mixture of normal prior for the random-effects distribution. We analyze blood pressure data from the Framingham Heart Study where body mass index, gender and time are treated as covariates. We compare our method with traditional methods including parametric modeling of the random effects and independent residual errors over time. We conduct extensive simulation studies to investigate the practical usefulness of the proposed method. The current approach is very helpful in analyzing bivariate irregular longitudinal traits. Copyright © 2013 John Wiley & Sons, Ltd.
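The penalized-spline component named in this abstract can be illustrated with a short sketch. The basis construction, knot placement and penalty value below are a generic illustration of penalized regression splines with hypothetical data, not the authors' full mixed model (which also includes random effects and a nonparametric error covariance).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sparse longitudinal measurements of a single trait over time
t = np.sort(rng.uniform(0, 10, size=200))
y = np.sin(t) + 0.1 * t + rng.normal(scale=0.3, size=t.shape)

# Truncated-power cubic spline basis for the overall effect of time
knots = np.linspace(0, 10, 12)[1:-1]                    # interior knots
X = np.column_stack([t**p for p in range(4)] +
                    [np.clip(t - k, 0, None)**3 for k in knots])

# Penalized least squares: ridge-type penalty on the knot coefficients only
lam = 1.0
D = np.diag([0.0] * 4 + [1.0] * len(knots))
beta = np.linalg.solve(X.T @ X + lam * D, X.T @ y)

print("residual SD:", np.std(y - X @ beta))
```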
2.
This paper presents a Bayesian model for meta-analysis of sparse discrete binomial data, which fall outside the scope of the usual hierarchical normal random-effect models. Treatment effectiveness data are often of this type. The crucial linking distribution between the effectiveness conditional on the healthcare center and the unconditional effectiveness is constructed from specific bivariate classes of distributions with given marginals. This ensures coherence between the marginal and conditional prior distributions utilized in the analysis. Further, we impose a bivariate class of priors that is able to accommodate a wide range of degrees of heterogeneity between the multicenter clinical trials involved. Applications to real multicenter data are given and compared with previous meta-analyses. Copyright © 2014 John Wiley & Sons, Ltd.
3.
Geethanjali Purushothaman Raunak Vikas 《Australasian physical & engineering sciences in medicine / supported by the Australasian College of Physical Scientists in Medicine and the Australasian Association of Physical Sciences in Medicine》2018,41(2):549-559
This paper focuses on identifying an effective pattern recognition scheme with the least number of time-domain features for dexterous control of a prosthetic hand, recognizing various finger movements from surface electromyogram (EMG) signals. Eight-channel EMG recordings from 8 able-bodied subjects performing 15 individual and combined finger activities were considered in this work. An attempt has been made to recognize a large number of classes with the least number of features. Therefore, the EMG signals are pre-processed using the dual tree complex wavelet transform to improve the discriminating capability of the features, and time-domain features such as zero crossing, slope sign change, mean absolute value, and waveform length are extracted from the pre-processed data. The performance of the extracted features is studied with different classifiers, such as linear discriminant analysis, the naive Bayes classifier, the quadratic support vector machine and the cubic support vector machine, with and without feature selection algorithms. Feature selection has been studied using particle swarm optimization (PSO) and ant colony optimization (ACO) with different numbers of features to identify the effect of the features. The results demonstrate that the naive Bayes classifier with ant colony optimization shows an average classification accuracy of 88.89% with a response time of 0.058025 ms for recognizing the 15 different finger movements with 16 features, with a significant difference in accuracy compared to the SVM classifier with feature selection at a significance level of 0.05. There is no significant difference in the accuracy, specificity and sensitivity of an SVM classifier with and without feature selection, but its processing time is significantly greater than that of the LDA and NB classifiers. The PSO and ACO results revealed that slope sign changes contribute to recognizing the activity. In PSO, mean absolute value was found to be more effective than waveform length, in contrast to ACO. Further, zero crossings were found to be ineffective for classifying finger movements in both methods.
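The four time-domain features named above are standard and straightforward to compute from a windowed EMG signal. The sketch below is a generic illustration with a hypothetical window and threshold, not the authors' full pipeline (which also involves dual tree complex wavelet pre-processing, PSO/ACO feature selection and classifier training).

```python
import numpy as np

def td_features(x, thresh=0.0):
    """Mean absolute value, waveform length, zero crossings and
    slope sign changes for one EMG window (1-D array)."""
    dx = np.diff(x)
    mav = np.mean(np.abs(x))
    wl = np.sum(np.abs(dx))
    zc = np.sum((x[:-1] * x[1:] < 0) & (np.abs(dx) >= thresh))
    ssc = np.sum((dx[:-1] * dx[1:] < 0) &
                 ((np.abs(dx[:-1]) >= thresh) | (np.abs(dx[1:]) >= thresh)))
    return np.array([mav, wl, zc, ssc])

# Hypothetical 8-channel window: one feature vector per channel, concatenated
window = np.random.randn(8, 256)               # 8 channels x 256 samples
features = np.concatenate([td_features(ch) for ch in window])
print(features.shape)                          # (32,) = 4 features x 8 channels
```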
4.
We introduce a new approach to inference for subgroups in clinical trials. We use Bayesian model selection and a threshold on posterior model probabilities to identify subgroup effects for reporting. For each covariate of interest, we define a separate class of models and use the posterior probability associated with each model, together with the threshold, to determine the existence of a subgroup effect. As usual in Bayesian clinical trial design, we compute frequentist operating characteristics and achieve the desired error probabilities by choosing appropriate thresholds for the posterior probabilities.
5.
Objective: Based on machine learning, to propose a computer recognition algorithm for solid-state nuclear track images that achieves automatic, fast and accurate track recognition and improves the efficiency of solid-track image analysis. Methods: Morphological methods were first used to scan 143 images containing tracks, locate candidate track positions and crop 1,250 sample patches. 50% of the patches were used as the training set and 30% as the validation set to train a machine learning model; the remaining 20% served as the test set to evaluate recognition performance. The algorithm was implemented and trained in MATLAB. Results: The resulting solid-track recognition algorithm performed well, reaching a recognition accuracy of 84.8% on the test set. The machine learning model continues to improve as more training data are supplied, further increasing accuracy. Conclusion: Combining image morphology with machine learning, the algorithm achieves effective automatic recognition of solid-state tracks. Future work will increase the amount of training data and optimize the algorithm to improve accuracy, providing a more precise and efficient method for automatic track recognition in images.
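A minimal sketch of the 50/30/20 split and a simple classifier, written in Python with scikit-learn rather than the MATLAB used by the authors; the patch array, labels and classifier choice are hypothetical stand-ins for the 1,250 cropped candidate-track patches.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.random((1250, 32 * 32))      # hypothetical flattened 32x32 patches
y = rng.integers(0, 2, size=1250)    # 1 = true track, 0 = artefact

# 50% training, 30% validation, 20% test
X_train, X_rest, y_train, y_rest = train_test_split(X, y, train_size=0.5, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.4, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)
print("validation accuracy:", clf.score(X_val, y_val))
print("test accuracy:", clf.score(X_test, y_test))
```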
6.
While genome-wide association studies (GWASs) have been widely used to uncover associations between diseases and genetic variants, standard SNP-level GWASs often lack the power to identify SNPs that individually have a moderate effect size but jointly contribute to the disease. To overcome this problem, pathway-based GWAS methods have been developed as an alternative strategy that complements SNP-level approaches. We propose a Bayesian method that uses the generalized fused hierarchical structured variable selection prior to identify pathways associated with the disease using SNP-level summary statistics. Our prior has the flexibility to incorporate pathway structural information, so it can model gene-level correlation based on prior biological knowledge, an important feature that makes it appealing compared to existing pathway-based methods. Using simulations, we show that our method outperforms competing methods in various scenarios, particularly when pathway structural information involving complex gene-gene interactions is available. We apply our method to the Wellcome Trust Case Control Consortium Crohn's disease GWAS data, demonstrating its practical application to real data.
7.
Recent advances in human neuroimaging have shown that it is possible to accurately decode how the brain perceives information based only on non-invasive functional magnetic resonance imaging measurements of brain activity. Two commonly used statistical approaches, namely univariate analysis and multivariate pattern analysis, often lead to distinct patterns of selected voxels. One current debate in brain decoding concerns whether the brain's representation of sound categories is localized or distributed. We hypothesize that the distributed pattern of voxels selected by most multivariate pattern analysis models can be an artifact of the spatial correlation among voxels. Here, we propose a Bayesian spatially varying coefficient model, where the spatial correlation is modeled through the variance-covariance matrix of the model coefficients. Combined with a proposed region selection strategy, we demonstrate that our approach is effective in identifying truly localized patterns of voxels while remaining robust enough to discover truly distributed patterns. In addition, we show that localized or clustered patterns can be artificially identified as distributed if the spatial correlation information in fMRI data is not used properly. Copyright © 2016 John Wiley & Sons, Ltd.
8.
In medical studies, we commonly encounter multiple-events data such as recurrent infection or attack times in patients suffering from a given disease. A number of statistical procedures for the analysis of such data use the Cox proportional hazards model, modified to include a random effect term called frailty, which summarizes the dependence of recurrent times within a subject. These unobserved random frailty effects capture subject effects that are not explained by the known covariates. They are typically modelled as constant over time and are assumed to be independently and identically distributed across subjects. However, in some situations, the subject-specific random frailty may change over time in the same manner as time-dependent covariate effects. This paper presents a time-dependent frailty model for recurrent failure time data in the Bayesian context and estimates it using a Markov chain Monte Carlo method. Our approach is illustrated with a data set relating to patients with chronic granulomatous disease and is compared to the constant frailty model using the deviance information criterion.
9.
We introduce a principled method for Bayesian subgroup analysis. The approach is based on casting subgroup analysis as a Bayesian decision problem. The two main innovations are: (1) the explicit consideration of a “subgroup report,” comprising multiple subpopulations; and (2) adapting an inhomogeneous Markov chain Monte Carlo simulation scheme to implement stochastic optimization. The latter makes the search for “subgroup reports” practically feasible.
10.
A surveillance system is proposed to detect an increase in the mean of a Poisson distribution of cases of a disease. This system, called short memory (SM), is based on conditional binomial tests which are performed sequentially at fixed time intervals. The probability of rejection at each test defines the run length distribution, which has a geometric tail. A standard SM scheme outperforms other SM schemes. The CUSUM outperforms the SM schemes when the baseline mean is specified correctly; misspecification of the baseline mean, however, does not affect the SM scheme.
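One standard way to form such a conditional binomial test (a generic sketch, not necessarily the exact SM scheme): if the current-interval count X ~ Poisson(λ) and the pooled count from k comparable baseline intervals is Y ~ Poisson(kλ0), then under the null hypothesis λ = λ0 the distribution of X given X + Y = n is Binomial(n, 1/(k+1)), so an increase is flagged when the binomial tail probability is small. The counts and alarm level below are hypothetical.

```python
from scipy.stats import binom

def sm_test(current, baseline, k, alpha=0.01):
    """Conditional binomial test for an increase in a Poisson mean.
    current  -- count in the current interval
    baseline -- pooled count over k baseline intervals
    """
    n = current + baseline
    p0 = 1.0 / (k + 1.0)                       # null success probability
    p_value = binom.sf(current - 1, n, p0)     # P(X >= current | n, p0)
    return p_value, p_value < alpha

# Hypothetical counts: 12 cases this week vs 28 over the previous 4 weeks
print(sm_test(current=12, baseline=28, k=4))
```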
11.
Bayesian Markov chain Monte Carlo (MCMC) techniques have shown promise in dissecting complex genetic traits. The methods introduced by Heath ([1997], Am. J. Hum. Genet. 61:748-760), and implemented in the program Loki, have been able to localize genes for complex traits in both real and simulated data sets. Loki estimates the posterior probability of quantitative trait loci (QTL) at locations on a chromosome in an iterative MCMC process. Unfortunately, interpretation of the results and assessment of their significance have been difficult. Here, we introduce a score, the log of the posterior placement probability ratio (LOP), for assessing oligogenic QTL detection and localization. The LOP is the log of the posterior probability of linkage to the real chromosome divided by the posterior probability of linkage to an unlinked pseudochromosome, with marker informativeness similar to the marker data on the real chromosome. Since the LOP cannot be calculated exactly, we estimate it by simultaneous MCMC on both the real and pseudochromosomes. We investigate empirically the distributional properties of the LOP in the presence and absence of trait genes. The LOP is not subject to trait model misspecification in the way a lod score may be, and we show that the LOP can detect linkage for loci of small effect when the lod score cannot. We show how, in the absence of linkage, an empirical distribution of the LOP may be estimated by simulation and used to provide an assessment of linkage detection significance.
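As a rough illustration of how a LOP-type score might be tabulated from MCMC output, the sketch below assumes a base-10 log and estimates the posterior placement probabilities as the fraction of iterations that place at least one QTL on each chromosome; the indicator arrays are hypothetical, and the actual Loki-based estimation is more involved.

```python
import numpy as np

# Hypothetical MCMC output: for each iteration, does the sampled model place
# at least one QTL on the real chromosome / on the unlinked pseudochromosome?
rng = np.random.default_rng(2)
on_real = rng.random(20_000) < 0.30
on_pseudo = rng.random(20_000) < 0.05

p_real = on_real.mean()
p_pseudo = on_pseudo.mean()
lop = np.log10(p_real / p_pseudo)   # log posterior placement probability ratio
print(f"LOP = {lop:.2f}")
```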
12.
Interim monitoring is routinely conducted in phase II clinical trials to terminate the trial early if the experimental treatment is futile. Interim monitoring requires that patients’ responses be ascertained shortly after the initiation of treatment so that the outcomes are known by the time the interim decision must be made. However, in some cases, response outcomes require a long time to be assessed, which causes difficulties for interim monitoring. To address this issue, we propose a Bayesian trial design that allows continuous monitoring of phase II clinical trials in the presence of delayed responses. We treat the delayed responses as missing data and handle them using a multiple imputation approach. Extensive simulations show that the proposed design yields desirable operating characteristics under various settings and dramatically reduces the trial duration. Copyright © 2014 John Wiley & Sons, Ltd.
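A minimal sketch of the multiple-imputation idea for a binary endpoint, in a generic Beta-Binomial setting with hypothetical counts and cut-offs; the authors' design is more elaborate and ties the imputation to the follow-up time already accrued.

```python
import numpy as np

rng = np.random.default_rng(3)

n_enrolled, n_observed, n_resp = 20, 14, 5     # 6 responses still pending
a, b = 0.5, 0.5                                # Beta prior on the response rate
p_target, futility_cut, n_imp = 0.30, 0.05, 1000

prob_promising = np.empty(n_imp)
for m in range(n_imp):
    # Impute pending outcomes from the posterior predictive distribution
    p_draw = rng.beta(a + n_resp, b + n_observed - n_resp)
    imputed = rng.binomial(n_enrolled - n_observed, p_draw)
    r_complete = n_resp + imputed
    # Posterior probability that the response rate exceeds the target
    post = rng.beta(a + r_complete, b + n_enrolled - r_complete, size=5000)
    prob_promising[m] = np.mean(post > p_target)

stop_for_futility = prob_promising.mean() < futility_cut
print(prob_promising.mean(), stop_for_futility)
```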
13.
Joseph G. Ibrahim Ming‐Hui Chen Mani Lakshminarayanan Guanghan F. Liu Joseph F. Heyse 《Statistics in medicine》2015,34(2):249-264
Developing sophisticated statistical methods for go/no-go decisions is crucial for clinical trials, as planning phase III or phase IV trials is costly and time consuming. In this paper, we develop a novel Bayesian methodology for determining the probability of success of a treatment regimen on the basis of the current data of a given trial. We introduce a new criterion for calculating the probability of success that allows for the inclusion of covariates as well as historical data based on the treatment regimen and patient characteristics. A new class of prior distributions and covariate distributions is developed to achieve this goal. The methodology is quite general, can be used with univariate or multivariate continuous or discrete data, and generalizes Chuang-Stein's work. This methodology will be invaluable for informing the scientist on the likelihood of success of the compound, while incorporating covariate information on patient characteristics in the trial population for planning future pre-market or post-market trials. Copyright © 2014 John Wiley & Sons, Ltd.
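The basic probability-of-success (assurance) calculation that this work builds on can be sketched by averaging the chance of a significant future trial over the current posterior of the treatment effect. The normal-normal setting, sample sizes and significance rule below are hypothetical, and the paper's actual criterion additionally incorporates covariates and historical data.

```python
import numpy as np
from scipy.stats import norm

# Posterior for the treatment effect from the current trial (assumed normal)
post_mean, post_sd = 0.25, 0.12

# Planned future trial: n per arm, outcome SD, two-sided alpha = 0.05
n_future, sigma, z_alpha = 200, 1.0, norm.ppf(0.975)
se_future = sigma * np.sqrt(2.0 / n_future)

rng = np.random.default_rng(4)
theta = rng.normal(post_mean, post_sd, size=100_000)   # posterior draws

# Power of the future trial conditional on each drawn effect, then average
power_given_theta = norm.sf(z_alpha - theta / se_future)
print("probability of success:", power_given_theta.mean())
```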
14.
Pereira JA Pleguezuelos E Merí A Molina-Ros A Molina-Tomás MC Masdeu C 《Medical education》2007,41(2):189-195
OBJECTIVES: This study aimed to implement innovative teaching methods--blended learning strategies--that include the use of new information technologies in the teaching of human anatomy, and to analyse both the impact of these strategies on academic performance and the degree of user satisfaction. METHODS: The study was carried out among students in Year 1 of the biology degree curriculum (human biology profile) at Pompeu Fabra University, Barcelona. Two groups of students were tested on knowledge of the anatomy of the locomotor system and the results compared between groups. Blended learning strategies were employed in 1 group (BL group, n = 69); the other (TT group, n = 65) received traditional teaching aided by complementary material that could be accessed on the Internet. Both groups were evaluated using the same types of examination. RESULTS: The average marks showed statistically significant differences (BL 6.3 versus TT 5.0; P < 0.0001). The percentage pass rate for the subject at the first sitting was higher in the BL group (87.9% versus 71.4%; P = 0.02), reflecting a lower incidence of students who failed to sit the examination (BL 4.3% versus TT 13.8%; P = 0.05). There were no differences regarding overall satisfaction with the teaching received. CONCLUSIONS: Blended learning was more effective than traditional teaching for teaching human anatomy.
15.
Background
Directly standardized rates (DSRs) adjust for different age distributions in different populations and enable, say, the rates of disease between the populations to be directly compared. They are routinely published but there is concern that a DSR is not valid when it is based on a “small” number of events. The aim of this study was to determine the value at which a DSR should not be published when analyzing real data in England.
Methods
Standard Monte Carlo simulation techniques were used assuming the numbers of events in 19 age groups (i.e., 0–4, 5–9, ... 90+ years) follow independent Poisson distributions. The total number of events, age-specific risks, and the population sizes in each age group were varied. For each of 10,000 simulations the DSR (using the 2013 European Standard Population weights), together with the coverage of three different methods (normal approximation, Dobson, and Tiwari modified gamma) of estimating the 95% confidence intervals (CIs), were calculated.
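For reference, a directly standardized rate is the weighted sum of the age-specific rates, DSR = Σ w_i d_i / n_i with Σ w_i = 1, and the normal-approximation CI uses Var(DSR) = Σ w_i² d_i / n_i² under independent Poisson counts. Below is a small sketch with hypothetical counts, populations and weights (not the actual 2013 European Standard Population, and the Dobson and Tiwari intervals studied here are not shown).

```python
import numpy as np
from scipy.stats import norm

# Hypothetical data for a few age groups (not the 19 ESP 2013 groups)
deaths = np.array([2, 5, 11, 30])            # observed events d_i
pop    = np.array([1e4, 2e4, 1.5e4, 1e4])    # person-years at risk n_i
w      = np.array([0.30, 0.30, 0.25, 0.15])  # standard-population weights, sum to 1

rates = deaths / pop
dsr = np.sum(w * rates)                      # directly standardized rate

# Normal-approximation 95% CI (known to behave poorly with few events)
var = np.sum(w**2 * deaths / pop**2)
lo, hi = dsr + np.array([-1, 1]) * norm.ppf(0.975) * np.sqrt(var)
print(f"DSR per 100,000: {1e5*dsr:.1f} (95% CI {1e5*lo:.1f} to {1e5*hi:.1f})")
```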
Results
The normal approximation was, as expected, not suitable for use when fewer than 100 events occurred. The Tiwari method and the Dobson method of calculating confidence intervals produced similar estimates and either was suitable when the expected or observed numbers of events were 10 or greater. The accuracy of the CIs was not influenced by the distribution of the events across categories (i.e., the degree of clustering, the age distributions of the sampling populations, and the number of categories with no events occurring in them).
Conclusions
DSRs should not be given when the total observed number of events is less than 10. The Dobson method might be considered the preferred method because its formulae are simpler than those of the Tiwari method and its coverage is slightly more accurate.
16.
17.
A phage-typing scheme for Salmonella enteritidis
For many years phage typing has proved invaluable in epidemiological studies on Salmonella typhi, S. paratyphi A and B, S. typhimurium and a few other serotypes. A phage-typing scheme for S. enteritidis is described. This scheme to date differentiates 27 types using 10 typing phages.
18.
Sten P. Willemsen Paul H. C. Eilers Régine P. M. Steegers‐Theunissen Emmanuel Lesaffre 《Statistics in medicine》2015,34(8):1351-1365
Most longitudinal growth curve models evaluate the evolution of each of the anthropometric measurements separately. When applied to a ‘reference population’, this exercise leads to univariate reference curves against which new individuals can be evaluated. However, growth should be evaluated in its totality, that is, by evaluating all body characteristics jointly. Recently, Cole et al. suggested the Superimposition by Translation and Rotation (SITAR) model, which expresses individual growth curves by three subject-specific parameters indicating their deviation from a flexible overall growth curve. This model allows the characterization of normal growth in a flexible though compact manner. In this paper, we generalize the SITAR model in a Bayesian way to multiple dimensions. The multivariate SITAR model allows us to create multivariate reference regions, which is advantageous for prediction. The usefulness of the model is illustrated on longitudinal measurements of embryonic growth obtained in the first trimester of pregnancy, collected in the ongoing Rotterdam Predict study. Further, we demonstrate how the model can be used to find determinants of embryonic growth. Copyright © 2015 John Wiley & Sons, Ltd.
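For context, the univariate SITAR model of Cole et al. is commonly written with three subject-specific random effects, a size shift, a timing shift and a growth-rate scaling, around a common spline curve h(·); the multivariate extension described here places a joint Bayesian distribution over these effects across several measurements. A sketch of the commonly cited univariate form (notation assumed, not taken from this abstract):

```latex
y_{ij} = \alpha_i + h\!\left( \frac{t_{ij} - \beta_i}{\exp(-\gamma_i)} \right) + \varepsilon_{ij},
\qquad \varepsilon_{ij} \sim N(0, \sigma^2),
```

where \(y_{ij}\) is the measurement of subject \(i\) at time \(t_{ij}\), and \((\alpha_i, \beta_i, \gamma_i)\) are the subject-specific size, tempo and velocity random effects.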
19.
A Bayesian toolkit for genetic association studies
We present a range of modelling components designed to facilitate Bayesian analysis of genetic-association-study data. A key feature of our approach is the ability to combine different submodels together, almost arbitrarily, for dealing with the complexities of real data. In particular, we propose various techniques for selecting the "best" subset of genetic predictors for a specific phenotype (or set of phenotypes). At the same time, we may control for complex, non-linear relationships between phenotypes and additional (non-genetic) covariates as well as accounting for any residual correlation that exists among multiple phenotypes. Both of these additional modelling components are shown to potentially aid in detecting the underlying genetic signal. We may also account for uncertainty regarding missing genotype data. Indeed, at the heart of our approach is a novel method for reconstructing unobserved haplotypes and/or inferring the values of missing genotypes. This can be deployed independently or, alternatively, it can be fully integrated into arbitrary genotype- or haplotype-based association models such that the missing data and the association model are "estimated" simultaneously. The impact of such simultaneous analysis on inferences drawn from the association model is shown to be potentially significant. Our modelling components are packaged as an "add-on" interface to the widely used WinBUGS software, which allows Markov chain Monte Carlo analysis of a wide range of statistical models. We illustrate their use with a series of increasingly complex analyses conducted on simulated data based on a real pharmacogenetic example.
20.
Sebastiani P Mandl KD Szolovits P Kohane IS Ramoni MF 《Statistics in medicine》2006,25(11):1803-16; discussion 1817-25
The severe acute respiratory syndrome (SARS) epidemic, the growing fear of an influenza pandemic and the recent shortage of flu vaccine highlight the need for surveillance systems able to provide early, quantitative predictions of epidemic events. We use dynamic Bayesian networks to discover the interplay among four data sources that are monitored for influenza surveillance. By integrating these different data sources into a dynamic model, we identify children and infants presenting to the pediatric emergency department with respiratory syndromes as an early indicator of impending influenza morbidity and mortality. Our findings show the importance of modelling the complex dynamics of data collected for influenza surveillance, and suggest that dynamic Bayesian networks could be suitable modelling tools for developing epidemic surveillance systems.