Similar Articles
20 similar articles found.
1.
Classical and Bayesian inference in neuroimaging: theory
This paper reviews hierarchical observation models, used in functional neuroimaging, in a Bayesian light. It emphasizes the common ground shared by classical and Bayesian methods to show that conventional analyses of neuroimaging data can be usefully extended within an empirical Bayesian framework. In particular, we formulate the procedures used in conventional data analysis in terms of hierarchical linear models and establish a connection between classical inference and parametric empirical Bayes (PEB) through covariance component estimation. This estimation is based on an expectation-maximization (EM) algorithm. The key point is that hierarchical models not only provide for appropriate inference at the highest level but also allow one to revisit lower levels suitably equipped to make Bayesian inferences. Bayesian inferences eschew many of the difficulties encountered with classical inference and characterize brain responses in a way that is more directly predicated on what one is interested in. The motivation for Bayesian approaches is reviewed, and the theoretical background is presented in a way that relates to conventional methods, in particular restricted maximum likelihood (ReML). This paper is a technical and theoretical prelude to subsequent papers that deal with applications of the theory to a range of important issues in neuroimaging. These issues include: (i) estimating nonsphericity or variance components in fMRI time series that arise from serial correlations within subjects or are induced by multisubject (i.e., hierarchical) studies; (ii) spatiotemporal Bayesian models for imaging data, in which voxel-specific effects are constrained by responses in other voxels; (iii) Bayesian estimation of nonlinear models of hemodynamic responses; and (iv) principled ways of mixing structural and functional priors in EEG source reconstruction. Although diverse, all these estimation problems are accommodated by the PEB framework described in this paper.
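To make the covariance-component machinery concrete, here is a minimal numerical sketch of ReML estimation for a linear model with two variance components (white noise plus serial correlation). It is a toy illustration under invented data and components, not the EM-based implementation the paper develops; the ReML objective is simply maximized with a general-purpose optimizer.

```python
# Toy ReML covariance-component estimation for y = X b + e, Cov(e) = l1*Q1 + l2*Q2.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 120
X = np.column_stack([np.ones(n), np.sin(np.arange(n) / 8.0)])  # toy design matrix
Q1 = np.eye(n)                                                 # white-noise component
idx = np.arange(n)
Q2 = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 4.0)        # serial-correlation component

V_true = 1.0 * Q1 + 2.0 * Q2
y = X @ np.array([1.0, 3.0]) + rng.multivariate_normal(np.zeros(n), V_true)

def neg_reml(theta):
    """Negative ReML log-likelihood; theta holds log-variances for positivity."""
    V = np.exp(theta[0]) * Q1 + np.exp(theta[1]) * Q2
    Vi = np.linalg.inv(V)
    XtViX = X.T @ Vi @ X
    P = Vi - Vi @ X @ np.linalg.solve(XtViX, X.T @ Vi)
    _, logdetV = np.linalg.slogdet(V)
    _, logdetF = np.linalg.slogdet(XtViX)
    return 0.5 * (logdetV + logdetF + y @ P @ y)

res = minimize(neg_reml, x0=np.zeros(2), method="Nelder-Mead")
print("estimated variance components:", np.exp(res.x))  # compare to the true (1.0, 2.0)
```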

2.
The estimation of activity-related ion currents by measuring the induced electromagnetic fields at the head surface is a challenging and severely ill-posed inverse problem. This is especially true for the recovery of brain networks involving deep-lying sources from EEG/MEG recordings, which remains a challenging task for any inverse method. Recently, hierarchical Bayesian modeling (HBM) emerged as a unifying framework for current density reconstruction (CDR) approaches, comprising most established methods as well as offering promising new ones. Our work examines the performance of fully Bayesian inference methods for HBM on source configurations consisting of few, focal sources when used with realistic, high-resolution finite element (FE) head models. The main foci of interest are correct depth localization, a well-known source of systematic error in many CDR methods, and the separation of single sources in multiple-source scenarios. Both aspects are very important in the analysis of neurophysiological data and in clinical applications. For these tasks, HBM provides a promising framework and is able to improve upon established CDR methods such as minimum norm estimation (MNE) or sLORETA in many respects. For challenging multiple-source scenarios where the established methods make substantial errors, promising results are attained. Additionally, we introduce Wasserstein distances as performance measures for the validation of inverse methods in complex source scenarios.
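As a small illustration of the validation measure introduced in the last sentence, the sketch below computes a 1-D Wasserstein (earth mover's) distance between a true and an estimated source-amplitude distribution along a line of source positions. Positions and amplitude profiles are invented toy data, not the paper's test cases.

```python
# Score an estimated source distribution against ground truth with the
# 1-D Wasserstein distance, treating amplitudes as weights on position.
import numpy as np
from scipy.stats import wasserstein_distance

positions = np.linspace(0.0, 100.0, 200)                     # source positions (mm)
true_amp = np.exp(-0.5 * ((positions - 30.0) / 2.0) ** 2)    # focal true source
est_amp = np.exp(-0.5 * ((positions - 34.0) / 6.0) ** 2)     # blurred, shifted estimate

d = wasserstein_distance(positions, positions,
                         u_weights=true_amp, v_weights=est_amp)
print(f"Wasserstein distance between true and estimated sources: {d:.2f} mm")
```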

3.
Magnetoencephalography (MEG) provides millisecond-scale temporal resolution for noninvasive mapping of human brain functions, but the problem of reconstructing the underlying source currents from the extracranial data has no unique solution. Several distributed source estimation methods based on different prior assumptions have been suggested for the resolution of this inverse problem. Recently, a hierarchical Bayesian generalization of the traditional minimum norm estimate (MNE) was proposed, in which the variance of the distributed current at each cortical location is considered a random variable and estimated from the data using the variational Bayesian (VB) framework. Here, we introduce an alternative scheme for performing Bayesian inference in the context of this hierarchical model by using Markov chain Monte Carlo (MCMC) strategies. In principle, the MCMC method is capable of numerically representing the true posterior distribution of the currents, whereas the VB approach is inherently approximate. We point out some potential problems related to hyperprior selection in the previous work and study some possible solutions. A hyperprior sensitivity analysis is then performed, and the structure of the posterior distribution as revealed by the MCMC method is investigated. We show that the structure of the true posterior is rather complex, with multiple modes corresponding to different possible solutions to the source reconstruction problem. We compare the results from the VB algorithm to those obtained from the MCMC simulation under different hyperparameter settings. The difficulties in using a unimodal variational distribution as a proxy for a truly multimodal distribution are also discussed. Simulated MEG data with realistic sensor and source geometries are used in performing the analyses.
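The MCMC side of this comparison can be conveyed with a very small example: a random-walk Metropolis sampler for the posterior of a single source-variance hyperparameter in a one-sensor, one-source toy model. The model, hyperprior, and numbers below are illustrative assumptions, not the paper's setup.

```python
# Random-walk Metropolis over the log of a source-variance hyperparameter.
import numpy as np

rng = np.random.default_rng(1)
lead = 2.0             # toy lead-field gain
sigma2 = 0.5           # sensor noise variance (assumed known)
y = rng.normal(0.0, np.sqrt(lead**2 * 1.5 + sigma2), size=50)  # data, true v = 1.5

def log_post(log_v):
    s2 = lead**2 * np.exp(log_v) + sigma2        # marginal variance of y
    loglik = -0.5 * np.sum(y**2) / s2 - 0.5 * y.size * np.log(2 * np.pi * s2)
    logprior = -0.5 * log_v**2 / 4.0             # lognormal hyperprior on v
    return loglik + logprior

samples, log_v = [], 0.0
for _ in range(20000):
    prop = log_v + rng.normal(0.0, 0.3)          # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(log_v):
        log_v = prop                             # accept
    samples.append(np.exp(log_v))

print("posterior mean of the source variance:", np.mean(samples[5000:]))
```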

4.
We present Posterior Temperature Optimized Bayesian Inverse Models (POTOBIM), an unsupervised Bayesian approach to inverse problems in medical imaging using mean-field variational inference with a fully tempered posterior. Bayesian methods exhibit useful properties for approaching inverse tasks, such as tomographic reconstruction or image denoising. A suitable prior distribution introduces regularization, which is needed to solve the ill-posed problem and reduces overfitting the data. In practice, however, this often results in a suboptimal posterior temperature, and the full potential of the Bayesian approach is not exploited. In POTOBIM, we optimize both the parameters of the prior distribution and the posterior temperature with respect to reconstruction accuracy using Bayesian optimization with Gaussian process regression. Our method is extensively evaluated on four different inverse tasks across a variety of modalities with images from public data sets, and we demonstrate that an optimized posterior temperature outperforms both non-Bayesian and Bayesian approaches without temperature optimization. The use of an optimized prior distribution and posterior temperature leads to improved accuracy and uncertainty estimation, and we show that it is sufficient to determine these hyperparameters once per task domain. Well-tempered posteriors yield calibrated uncertainty, which increases the reliability of the predictions. Our source code is publicly available at github.com/Cardio-AI/mfvi-dip-mia.

5.
Monitoring the quality of image segmentation is key to many clinical applications. This quality assessment can be carried out by a human expert when the number of cases is limited. However, it becomes onerous when dealing with large image databases, so partial automation of this process is preferable. Previous studies have proposed both supervised and unsupervised methods for the automated control of image segmentations. The former assume the availability of a subset of trusted segmented images on which supervised learning is performed, while the latter do not. In this paper, we introduce a novel unsupervised approach for quality assessment of segmented images based on a generic probabilistic model. Quality estimates are produced by comparing each segmentation with the output of a probabilistic segmentation model that relies on intensity and smoothness assumptions. Ranking cases with respect to these two assumptions allows the most challenging cases in a dataset to be detected. Furthermore, unlike prior work, our approach enables possible segmentation errors to be localized within an image. The proposed generic probabilistic segmentation method combines intensity mixture distributions with spatial regularization prior models whose parameters are estimated with variational Bayesian techniques. We introduce a novel smoothness prior based on the penalization of the derivatives of label maps, which allows an automatic estimation of its hyperparameter in a fully data-driven way. Extensive evaluation of quality control on medical and COCO datasets is conducted, showing the ability to isolate atypical segmentations automatically and to predict, in some cases, the performance of segmentation algorithms.

6.
Classical and Bayesian inference in neuroimaging: applications
In Friston et al. (2002, NeuroImage 16:465-483) we introduced empirical Bayes as a potentially useful way to estimate and make inferences about effects in hierarchical models. In this paper we present a series of models that exemplify the diversity of problems that can be addressed within this framework. In hierarchical linear observation models, both classical and empirical Bayesian approaches can be framed in terms of covariance component estimation (e.g., variance partitioning). To illustrate the use of the expectation-maximization (EM) algorithm in covariance component estimation, we focus first on two important problems in fMRI: nonsphericity induced by (i) serial or temporal correlations among errors and (ii) variance components caused by the hierarchical nature of multisubject studies. In hierarchical observation models, variance components at higher levels can be used as constraints on the parameter estimates of lower levels. This enables the use of parametric empirical Bayesian (PEB) estimators, as distinct from classical maximum likelihood (ML) estimates. We develop this distinction to address: (i) the difference between response estimates based on ML and the conditional means from a Bayesian approach, and the implications for estimates of intersubject variability; (ii) the relationship between fixed- and random-effect analyses; (iii) the specificity and sensitivity of Bayesian inference; and, finally, (iv) the relative importance of the number of scans and subjects. The foregoing is concerned with within- and between-subject variability in multisubject hierarchical fMRI studies. In the second half of this paper we turn to Bayesian inference at the first (within-voxel) level, using PET data to show how priors can be derived from the (between-voxel) distribution of activations over the brain. This application uses exactly the same ideas and formalism but, in this instance, the second level is provided by observations over voxels as opposed to subjects. The ensuing posterior probability maps (PPMs) have enhanced anatomical precision and greater face validity in relation to the underlying anatomy. Furthermore, in comparison to conventional SPMs, they are not confounded by the multiple-comparison problem that, in a classical context, dictates high thresholds and low sensitivity. We conclude with some general comments on Bayesian approaches to image analysis and on some unresolved issues.
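At each voxel, the PPM construction described above reduces to a simple computation on the posterior moments. Below is a minimal sketch under an assumed Gaussian posterior; the means, standard deviations, and threshold are invented numbers.

```python
# Posterior probability that a voxel's effect exceeds a size threshold gamma,
# given the conditional (posterior) mean and standard deviation at each voxel.
import numpy as np
from scipy.stats import norm

cond_mean = np.array([0.1, 0.8, 1.5, 0.3])   # posterior means per voxel
cond_sd = np.array([0.4, 0.3, 0.5, 0.2])     # posterior standard deviations
gamma = 0.5                                  # effect-size threshold

ppm = 1.0 - norm.cdf((gamma - cond_mean) / cond_sd)
print("P(effect > gamma) per voxel:", np.round(ppm, 3))
# Voxels whose posterior probability exceeds, say, 0.95 would appear in the map.
```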

7.
Recently, we described a Bayesian inference approach to the MEG/EEG inverse problem that used numerical techniques to estimate the full posterior probability distributions of likely solutions upon which all inferences were based [Schmidt, D.M., George, J.S., Wood, C.C., 1999. Bayesian inference applied to the electromagnetic inverse problem. Human Brain Mapping 7, 195; Schmidt, D.M., George, J.S., Ranken, D.M., Wood, C.C., 2001. Spatial-temporal Bayesian inference for MEG/EEG. In: Nenonen, J., Ilmoniemi, R.J., Katila, T. (Eds.), Biomag 2000: 12th International Conference on Biomagnetism. Espoo, Finland, p. 671]. Schmidt et al. (1999) focused on the analysis of data at a single point in time employing an extended region source model. They subsequently extended their work to a spatiotemporal Bayesian inference analysis of the full spatiotemporal MEG/EEG data set. Here, we formulate spatiotemporal Bayesian inference analysis using a multi-dipole model of neural activity. This approach is faster than the extended region model, does not require use of the subject's anatomical information, does not require prior determination of the number of dipoles, and yields quantitative probabilistic inferences. In addition, we have incorporated the ability to handle much more complex and realistic estimates of the background noise, which may be represented as a sum of Kronecker products of temporal and spatial noise covariance components. This reduces the effects of undermodeling the noise. In order to reduce the rigidity of the multi-dipole formulation, which commonly causes problems due to multiple local minima, we treat the given covariance of the background as uncertain and marginalize over it in the analysis. Markov chain Monte Carlo (MCMC) was used to sample the many plausible solutions. The spatiotemporal Bayesian dipole analysis is demonstrated using simulated and empirical whole-head MEG data.
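The background-noise model mentioned above, a sum of Kronecker products of temporal and spatial covariance components, is easy to sketch numerically. The sizes and the particular components below are toy assumptions.

```python
# Spatiotemporal noise covariance as a sum of Kronecker products of
# temporal and spatial components, for vectorized (time x sensor) data.
import numpy as np

n_sensors, n_times = 4, 3
C_space = np.eye(n_sensors)                              # spatial component
t = np.arange(n_times)
C_time = np.exp(-np.abs(t[:, None] - t[None, :]) / 2.0)  # temporal component

C = 1.0 * np.kron(C_time, C_space) + 0.1 * np.kron(np.eye(n_times), C_space)
print(C.shape)  # (n_times * n_sensors, n_times * n_sensors)
```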

8.
Myocardial blood flow can be quantified from dynamic contrast-enhanced magnetic resonance (MR) images through the fitting of tracer-kinetic models to the observed imaging data. The use of multi-compartment exchange models is desirable as they are physiologically motivated and resolve directly for both blood flow and microvascular function. However, the parameter estimates obtained with such models can be unreliable. This is due to the complexity of the models relative to the observed data, which are limited by the low signal-to-noise ratio, the temporal resolution, the length of the acquisitions, and other complex imaging artefacts. In this work, a Bayesian inference scheme is proposed that allows the reliable estimation of the parameters of the two-compartment exchange model from myocardial perfusion MR data. The Bayesian scheme allows the incorporation of prior knowledge on the physiological ranges of the model parameters and facilitates the use of the additional information that neighbouring voxels are likely to have similar kinetic parameter values. Hierarchical priors are used to avoid making a priori assumptions on the health of the patients. We provide both a theoretical introduction to Bayesian inference for tracer-kinetic modelling and specific implementation details for this application. This approach is validated in both in silico and in vivo settings. In silico, there was a significant reduction in mean-squared error with respect to the ground-truth parameters using Bayesian inference as compared to standard non-linear least-squares fitting. When applied to patient data, the Bayesian inference scheme returns parameter values that are in line with those previously reported in the literature, as well as giving parameter maps that match the independent clinical diagnoses of those patients.
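The essence of this kind of Bayesian fitting, priors that keep kinetic parameters within physiological ranges, can be sketched as a maximum a posteriori (MAP) penalized least-squares fit. For brevity the sketch below uses a toy one-compartment uptake curve rather than the paper's two-compartment exchange model, and its priors, noise level, and data are all invented.

```python
# MAP kinetic fit: least squares plus Gaussian prior penalties on parameters.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 2.0, 40)              # time (minutes)

def model(p):                              # p = (amplitude, rate constant)
    return p[0] * (1.0 - np.exp(-p[1] * t))

rng = np.random.default_rng(2)
y = model([1.2, 3.0]) + rng.normal(0.0, 0.05, t.size)   # noisy toy curve

prior_mean = np.array([1.0, 2.5])          # physiological prior means
prior_sd = np.array([0.5, 1.0])            # prior widths
noise_sd = 0.05

def neg_log_post(p):
    resid = (y - model(p)) / noise_sd      # data misfit
    prior = (p - prior_mean) / prior_sd    # prior penalty
    return 0.5 * np.sum(resid**2) + 0.5 * np.sum(prior**2)

fit = minimize(neg_log_post, x0=prior_mean, method="Nelder-Mead")
print("MAP estimates:", fit.x)             # compare to the true (1.2, 3.0)
```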

9.
Hierarchical Bayesian estimation for MEG inverse problem
Source current estimation from MEG measurements is an ill-posed problem that requires prior assumptions about brain activity and an efficient estimation algorithm. In this article, we propose a new hierarchical Bayesian method introducing a hierarchical prior that can effectively incorporate both structural and functional MRI data. In our method, the variance of the source current at each source location is considered an unknown parameter and estimated from the observed MEG data and prior information by using the variational Bayesian method. The fMRI information can be imposed as prior information on the variance distribution rather than on the variance itself, so that it provides a soft constraint on the variance. A spatial smoothness constraint (that neural activity within a few-millimeter radius tends to be similar due to neural connections) can also be implemented as a hierarchical prior. The proposed method provides a unified theory to deal with the following three situations: (1) MEG with no other data, (2) MEG with structural MRI data on cortical surfaces, and (3) MEG with both structural MRI and fMRI data. We investigated the performance of our method and conventional linear inverse methods under these three conditions. Simulation results indicate that our method has better accuracy and spatial resolution than the conventional linear inverse methods under all three conditions. It is also shown that the accuracy of our method improves as MRI and fMRI information becomes available. Simulation results demonstrate that our method appropriately resolves the inverse problem even if the fMRI data convey inaccurate information, whereas the Wiener filter method is seriously degraded by inaccurate fMRI information.

10.
Hauk O, Wakeman DG, Henson R. NeuroImage 2011, 54(3):1966-1974.
Noise normalization has been shown to partly compensate for the localization bias towards superficial sources in minimum norm estimation. However, it has been argued that in order to make inferences for the case of multiple sources, localization properties alone are insufficient. Instead, multiple measures of resolution should be applied to both point-spread and cross-talk functions (PSFs and CTFs). Here, we demonstrate that noise normalization affects the shapes of PSFs, but not of CTFs. We evaluated PSFs and CTFs for the MNE, dSPM, and sLORETA inverse operators on the metrics of dipole localization error (DLE), spatial dispersion (SD), and overall amplitude (OA). We used 306-channel MEG configurations obtained from 17 subjects in a real experiment, including individual noise covariance matrices and head geometries. We confirmed that for PSFs, DLE improves after noise normalization and is zero for sLORETA. However, SD was generally lower for the unnormalized MNE. OA distributions were similar for all three methods, indicating that all three methods may greatly underestimate some sources relative to others. The reliability of differences between methods across subjects was demonstrated using distributions of standard deviations and p-values from paired t-tests. As predicted, the shapes of CTFs were the same for all methods, reflecting the general resolution limits of the inverse problem. This means that noise normalization is of no consequence where linear estimation procedures are used as "spatial filters." While low DLE is advantageous for the localization of a single source, or possibly a few spatially distinct sources, the benefit for the case of complex source distributions is not obvious. We suggest that software packages for source estimation should include comprehensive tools for evaluating the performance of different methods.
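PSFs, CTFs, and DLE all derive from the resolution matrix R = W L, where L is the lead field and W a linear inverse operator: columns of R are point-spread functions and rows are cross-talk functions. The sketch below uses an invented 1-D source space and a generic Tikhonov-regularized minimum-norm operator, not the actual operators or geometry of the study.

```python
# Resolution-matrix view of linear inverse operators: PSF, CTF, and DLE.
import numpy as np

rng = np.random.default_rng(3)
n_sensors, n_sources = 30, 100
src_pos = np.linspace(0.0, 100.0, n_sources)        # 1-D source space (mm)
L = rng.normal(size=(n_sensors, n_sources))         # toy lead field

lam = 0.1 * np.trace(L @ L.T) / n_sensors           # Tikhonov regularization
W = L.T @ np.linalg.inv(L @ L.T + lam * np.eye(n_sensors))  # minimum-norm-style W

R = W @ L                                           # resolution matrix
j = 40                                              # index of a test source
psf = np.abs(R[:, j])                               # point-spread function of source j
ctf = np.abs(R[j, :])                               # cross-talk function of estimate j
dle = abs(src_pos[np.argmax(psf)] - src_pos[j])     # dipole localization error
print(f"DLE for source {j}: {dle:.1f} mm")
```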

11.
This study shows that the spatial specificity of MEG beamformer estimates of electrical activity can be affected significantly by the way in which covariance estimates are calculated. We define spatial specificity as the ability to extract independent timecourse estimates of electrical brain activity from two separate brain locations in close proximity. Previous analytical and simulated results have shown that beamformer estimates are affected by narrowing the time-frequency window in which covariance estimates are made. Here we build on this both by experimentally validating previous results and by investigating the effect of data averaging prior to covariance estimation. In appropriate circumstances, we show that averaging has a marked effect on spatial specificity. However, the averaging process results in ill-conditioned covariance matrices, necessitating a suitable matrix regularisation strategy, an example of which is described. We apply our findings to an MEG retinotopic mapping paradigm. A moving visual stimulus is used to elicit brain activation at different retinotopic locations in the visual cortex. This gives the impression of a moving electrical dipolar source in the brain. We show that if appropriate beamformer optimisation is applied, the moving source can be tracked in the cortex. In addition to spatial reconstruction of the moving source, we show that timecourse estimates can be extracted from neighbouring locations of interest in the visual cortex. If appropriate methodology is employed, the sequential activation of separate retinotopic locations can be observed. The retinotopic paradigm represents an ideal platform to test the spatial specificity of source localisation strategies. We suggest that future comparisons of MEG source localisation techniques (e.g. beamformer, minimum norm, Bayesian) could be made using this retinotopic mapping paradigm.
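For context, here is a minimal sketch of LCMV beamformer weights with diagonal loading, one common regularization strategy for an ill-conditioned covariance estimate. The lead field, data, and loading level are invented, and this is not necessarily the regularisation scheme the study describes.

```python
# LCMV beamformer weights with diagonal loading of the covariance matrix.
import numpy as np

rng = np.random.default_rng(4)
n_sensors, n_times = 30, 50                 # few samples -> noisy covariance estimate
data = rng.normal(size=(n_sensors, n_times))
C = data @ data.T / n_times                 # sample covariance

mu = 0.05 * np.trace(C) / n_sensors         # diagonal loading level
C_reg = C + mu * np.eye(n_sensors)          # regularized covariance

l = rng.normal(size=n_sensors)              # lead field of the target location
Ci_l = np.linalg.solve(C_reg, l)
w = Ci_l / (l @ Ci_l)                       # w = C^-1 l / (l^T C^-1 l)

timecourse = w @ data                       # beamformer timecourse estimate
print(timecourse.shape)                     # (n_times,)
```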

12.
Intensity normalization is an important pre-processing step in the study and analysis of Magnetic Resonance Images (MRI) of human brains. As most parametric supervised automatic image segmentation and classification methods base their assumptions regarding the intensity distributions on a standardized intensity range, intensity normalization plays a very significant role. One of the fast and accurate approaches proposed for intensity normalization is that of Nyul and colleagues. In this work, we present, for the first time, an extensive validation of this approach in a real clinical domain where, even after intensity inhomogeneity correction that accounts for scanner-specific artifacts, the MRI volumes can be affected by variations such as data heterogeneity resulting from multi-site, multi-scanner acquisitions, the presence of multiple sclerosis (MS) lesions, and the stage of disease progression in the brain. Using distributional divergence criteria, we evaluate the effectiveness of the normalization in rendering, under the distributional assumptions of segmentation approaches, intensities that are more homogeneous for the same tissue type while simultaneously resulting in better tissue-type separation. We also demonstrate the advantage of the decile-based piece-wise linear approach over a linear normalization approach on the task of MS lesion segmentation, across three image segmentation algorithms: a standard Bayesian classifier, an outlier-detection-based approach, and a Bayesian classifier with Markov random field (MRF) based post-processing. Finally, to demonstrate that the effectiveness of normalization is independent of the complexity of the segmentation algorithm, we evaluate the Nyul method against linear normalization on Bayesian algorithms of increasing complexity, including a standard Bayesian classifier with maximum-likelihood parameter estimation and a Bayesian classifier with integrated data priors, in addition to the above Bayesian classifier with MRF-based post-processing to smooth the posteriors. In all relevant cases, the observed results are verified using statistical significance tests.
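The decile-based piece-wise linear mapping at the heart of the Nyul approach can be sketched in a few lines: learn a standard scale from decile landmarks, then map each image's deciles onto that scale with linear interpolation in between. This is a simplified illustration of the idea, not the full published protocol, which treats the ends of the intensity range and landmark training more carefully.

```python
# Decile-based piece-wise linear intensity normalization (simplified sketch).
import numpy as np

def normalize(image, standard_landmarks):
    percentiles = np.arange(10, 100, 10)              # decile landmarks 10..90
    landmarks = np.percentile(image, percentiles)
    # Piece-wise linear map from this image's deciles to the standard scale;
    # np.interp clamps values outside the landmark range.
    return np.interp(image, landmarks, standard_landmarks)

rng = np.random.default_rng(5)
img_a = rng.gamma(2.0, 50.0, size=10000)              # two "scanners" with
img_b = rng.gamma(2.0, 80.0, size=10000)              # different intensity scales

standard = np.percentile(np.concatenate([img_a, img_b]), np.arange(10, 100, 10))
norm_a = normalize(img_a, standard)
norm_b = normalize(img_b, standard)
print(np.median(norm_a), np.median(norm_b))           # medians now agree closely
```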

13.
Wu W, Chen Z, Gao S, Brown EN. NeuroImage 2011, 56(4):1929-1945.
Multichannel electroencephalography (EEG) offers a non-invasive tool to explore the spatio-temporal dynamics of brain activity. With EEG recordings consisting of multiple trials, traditional signal processing approaches that ignore inter-trial variability in the data may fail to accurately estimate the underlying spatio-temporal brain patterns. Moreover, precise characterization of such inter-trial variability per se can be of high scientific value in establishing the relationship between brain activity and behavior. In this paper, a statistical modeling framework is introduced for learning spatio-temporal decompositions of multiple-trial EEG data recorded under two contrasting experimental conditions. By modeling the variance of source signals as random variables varying across trials, the proposed two-stage hierarchical Bayesian model is able to capture inter-trial amplitude variability in the data in a sparse way, whereby a parsimonious representation of the data can be obtained. A variational Bayesian (VB) algorithm is developed for statistical inference of the hierarchical model. The efficacy of the proposed modeling framework is validated with the analysis of both synthetic and real EEG data. In the simulation study we show that even at low signal-to-noise ratios our approach is able to recover with high precision the underlying spatio-temporal patterns and the dynamics of source amplitude across trials; on two brain-computer interface (BCI) data sets we show that our VB algorithm can extract physiologically meaningful spatio-temporal patterns and make more accurate predictions than two other widely used algorithms: the common spatial patterns (CSP) algorithm and the Infomax algorithm for independent component analysis (ICA). The results demonstrate that our statistical modeling framework can serve as a powerful tool for extracting brain patterns, characterizing trial-to-trial brain dynamics, and decoding brain states by exploiting useful structures in the data.

14.
We present a novel approach to MEG source estimation based on a regularized first-order multipole solution. The Gaussian regularizing prior is obtained by calculating the sample mean and covariance matrix of the equivalent moments of realistic simulated cortical activity. We compare the regularized multipole localization framework to the classical dipole and general multipole source estimation methods by evaluating the ability of all three solutions to localize the centroids of physiologically plausible patches of activity simulated on the surface of a human cerebral cortex. The results, obtained with a realistic sensor configuration and a spherical head model and reported in terms of field and localization error, depict the performance of the dipolar and multipolar models as a function of source surface area (50-500 mm²), noise conditions (20, 10, and 5 dB SNR), source orientation (0-90°), and source depth (3-11 cm). We show that as the sources increase in size, they become less accurately modeled as current dipoles. The regularized multipole systematically outperforms the single dipole model, increasingly so as the spatial extent of the sources increases. In addition, our simulations demonstrate that as the orientation of the sources becomes more radial, dipole localization accuracy decreases substantially, while the performance of the regularized multipole model is far less sensitive to orientation and even succeeds in localizing quasi-radial source configurations. Furthermore, our results show that the multipole model is able to localize superficial sources with higher accuracy than the current dipole. These results indicate that the regularized multipole solution may be an attractive alternative to current-dipole-based source estimation methods in MEG.

15.
To use electroencephalography (EEG) and magnetoencephalography (MEG) as functional brain 3D imaging techniques, identifiable distributed source models are required. The reconstruction of EEG/MEG sources rests on inverting these models and is ill-posed because the solution does not depend continuously on the data and there is no unique solution in the absence of prior information or constraints. We have described a general framework that can account for several priors in a common inverse solution. An empirical Bayesian framework based on hierarchical linear models was proposed for the analysis of functional neuroimaging data [Friston, K., Penny, W., Phillips, C., Kiebel, S., Hinton, G., Ashburner, J., 2002. Classical and Bayesian inference in neuroimaging: theory. NeuroImage 16, 465-483] and was evaluated recently in the context of EEG [Phillips, C., Mattout, J., Rugg, M.D., Maquet, P., Friston, K., 2005. An empirical Bayesian solution to the source reconstruction problem in EEG. NeuroImage 24, 997-1011]. The approach consists of estimating the expected source distribution and its conditional variance, constrained by an empirically determined mixture of prior variance components. Estimation uses Expectation-Maximization (EM) to give the Restricted Maximum Likelihood (ReML) estimate of the variance components (in terms of hyperparameters) and the Maximum A Posteriori (MAP) estimate of the source parameters. In this paper, we extend the framework to compare different combinations of priors, using a second level of inference based on Bayesian model selection. Using Monte Carlo simulations, ReML is first compared to a classic Weighted Minimum Norm (WMN) solution under a single constraint. Then, the ReML estimates are evaluated using various combinations of priors. Both standard criteria and ROC-based measures were used to assess localization and detection performance. The empirical Bayes approach proved useful in that: (1) ReML was significantly better than WMN for single priors; (2) valid location priors improved ReML source localization; and (3) invalid location priors did not significantly impair performance. Finally, we show how model selection, using the log-evidence, can be used to select the best combination of priors. This enables a global strategy for multiple-prior-based regularization of the MEG/EEG source reconstruction.
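The WMN baseline used in this comparison has a compact closed form. Below is a toy sketch with an invented lead field and data, a flat source prior, and a single hand-set regularization hyperparameter; in the empirical Bayes scheme above, the hyperparameters and the mixture of priors are instead estimated by ReML.

```python
# Weighted minimum norm estimate: J = R L^T (L R L^T + h C)^-1 M.
import numpy as np

rng = np.random.default_rng(6)
n_sensors, n_sources, n_times = 20, 80, 5
L = rng.normal(size=(n_sensors, n_sources))   # lead field
M = rng.normal(size=(n_sensors, n_times))     # measured data

R = np.eye(n_sources)                         # source prior covariance (flat weights)
C = np.eye(n_sensors)                         # sensor noise covariance
h = 0.1 * np.trace(L @ R @ L.T) / n_sensors   # regularization hyperparameter

G = L @ R @ L.T + h * C
J = R @ L.T @ np.linalg.solve(G, M)           # source current estimates
print(J.shape)                                # (n_sources, n_times)
```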

16.
We describe a Bayesian learning algorithm for Robust General Linear Models (RGLMs). The noise is modeled as a mixture of Gaussians rather than the usual single Gaussian. This allows different data points to be associated with different noise levels and effectively provides robust estimation of the regression coefficients. A variational inference framework is used to prevent overfitting and provides a selection criterion for the order of the noise model. This allows the RGLM to default to the usual GLM when robustness is not required. The method is compared to other robust regression methods and applied to synthetic data and fMRI data.

17.
In this paper, we present an extensive performance evaluation of a novel source localization algorithm, Champagne. It is derived in an empirical Bayesian framework that yields sparse solutions to the inverse problem. It is robust to correlated sources and learns the statistics of non-stimulus-evoked activity to suppress the effect of noise and interfering brain activity. We tested Champagne on both simulated and real M/EEG data. The source locations used for the simulated data were chosen to test the performance on challenging source configurations. In simulations, we found that Champagne outperforms the benchmark algorithms in terms of both the accuracy of the source localizations and the correct estimation of source time courses. We also demonstrate that Champagne is more robust to correlated brain activity present in real MEG data and is able to resolve many distinct and functionally relevant brain areas with real MEG and EEG data.

18.
Most existing voxel-based lesion-symptom mapping methods are based on the same statistical foundation: null hypothesis significance testing (NHST). The two major limitations of these methods are the inability to infer that there is no difference in lesion proportions, and a requirement for multiple-comparison correction. We propose a Bayesian approach that directly models the posterior distribution of lesion-proportion difference, and makes decisions based on inference on this posterior distribution. Compared to NHST-based approaches, our Bayesian approach yields inference results with clearer semantics, and does not require multiple-comparison correction. We evaluated our Bayesian method using simulated data, and data from a study of acute ischemic left-hemispheric stroke. Results of both experiments indicate that the Bayesian approach is sensitive in detecting regions that characterize group differences.
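The core move, modelling the posterior of the lesion-proportion difference directly, can be sketched at a single voxel with independent Beta posteriors (uniform priors) and Monte Carlo draws. The counts and this simple Beta-binomial setup are illustrative assumptions, not the paper's full model.

```python
# Posterior of a lesion-proportion difference at one voxel via Beta posteriors.
import numpy as np

rng = np.random.default_rng(7)
k1, n1 = 18, 40   # lesioned / total, patients with the symptom
k2, n2 = 6, 45    # lesioned / total, patients without it

p1 = rng.beta(k1 + 1, n1 - k1 + 1, size=100_000)   # posterior draws, group 1
p2 = rng.beta(k2 + 1, n2 - k2 + 1, size=100_000)   # posterior draws, group 2
diff = p1 - p2

lo, hi = np.percentile(diff, [2.5, 97.5])
print(f"posterior mean difference {diff.mean():.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")
# An interval concentrated near zero would support 'no difference', an
# inference that NHST cannot deliver, and no multiple-comparison correction
# is involved in forming the posterior itself.
```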

19.
Rasch parameter estimation methods can be classified as non-iterative and iterative. Non-iterative methods include the normal approximation algorithm (PROX) for complete dichotomous data. Iterative methods fall into three types. Datum-by-datum methods include Gaussian least-squares, minimum chi-square, and the pairwise (PAIR) method. Marginal methods without distributional assumptions include conditional maximum-likelihood estimation (CMLE), joint maximum-likelihood estimation (JMLE), and log-linear approaches. Marginal methods with distributional assumptions include marginal maximum-likelihood estimation (MMLE) and the normal approximation algorithm (PROX) for missing data. Estimates from all methods are characterized by standard errors and quality-control fit statistics. Standard errors can be local (defined relative to the measure of a particular item) or general (defined relative to the abstract origin of the scale). They can also be ideal (as though the data fit the model) or inflated by the misfit to the model present in the data. Five computer programs, implementing different estimation methods, produce statistically equivalent estimates. Nevertheless, comparing estimates from different programs requires care.
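For the simplest case mentioned, PROX on complete dichotomous data, a compact sketch follows: abilities and difficulties are logits of raw scores, stretched by expansion factors sqrt(1 + var/2.9) that account for the spread of the opposing facet, and iterated to convergence. This is a textbook-style rendering under simulated data (an assumption, not tied to any of the five programs), and persons with extreme zero or perfect scores are dropped for simplicity.

```python
# PROX (normal approximation) Rasch estimation for complete dichotomous data.
import numpy as np

rng = np.random.default_rng(8)
true_b = rng.normal(0.0, 1.0, size=200)               # person abilities (logits)
true_d = np.linspace(-1.5, 1.5, 10)                   # item difficulties (logits)
p = 1.0 / (1.0 + np.exp(-(true_b[:, None] - true_d[None, :])))
X = (rng.uniform(size=p.shape) < p).astype(float)     # simulated 0/1 responses

R = X.sum(axis=1)                                     # person raw scores
X = X[(R > 0) & (R < X.shape[1])]                     # drop extreme scores
R = X.sum(axis=1)
S = X.sum(axis=0)                                     # item scores
n_persons, n_items = X.shape

b = np.log(R / (n_items - R))                         # initial person logits
d = np.log((n_persons - S) / S)                       # initial item logits
for _ in range(10):                                   # iterate to convergence
    d = d - d.mean()                                  # anchor the scale on items
    b = d.mean() + np.sqrt(1 + d.var() / 2.9) * np.log(R / (n_items - R))
    d = b.mean() + np.sqrt(1 + b.var() / 2.9) * np.log((n_persons - S) / S)

print(f"correlation with true difficulties: {np.corrcoef(d, true_d)[0, 1]:.3f}")
```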

20.
Variational filtering
Friston KJ. NeuroImage 2008, 41(3):747-766.
This note presents a simple Bayesian filtering scheme, using variational calculus, for inference on the hidden states of dynamic systems. Variational filtering is a stochastic scheme that propagates particles over a changing variational energy landscape, such that their sample density approximates the conditional density of hidden states and inputs. The key innovation, on which variational filtering rests, is a formulation in generalised coordinates of motion. This renders the scheme much simpler and more versatile than existing approaches, such as those based on particle filtering. We demonstrate variational filtering using simulated and real data from hemodynamic systems studied in neuroimaging and provide comparative evaluations using particle filtering and the fixed-form homologue of variational filtering, namely dynamic expectation maximisation.
