Similar documents
20 similar documents found (search time: 31 ms)
1.
An empirical Bayesian solution to the source reconstruction problem in EEG
Distributed linear solutions of the EEG source localisation problem are used routinely. In contrast to discrete dipole equivalent models, distributed linear solutions do not assume a fixed number of active sources and rest on a discretised fully 3D representation of the electrical activity of the brain. The ensuing inverse problem is underdetermined and constraints or priors are required to ensure the uniqueness of the solution. In a Bayesian framework, the conditional expectation of the source distribution, given the data, is attained by carefully balancing the minimisation of the residuals induced by noise and the improbability of the estimates as determined by their priors. This balance is specified by hyperparameters that control the relative importance of fitting and conforming to various constraints. Here we formulate the conventional "Weighted Minimum Norm" (WMN) solution in terms of hierarchical linear models. An "Expectation-Maximisation" (EM) algorithm is used to obtain a "Restricted Maximum Likelihood" (ReML) estimate of the hyperparameters, before estimating the "Maximum a Posteriori" solution itself. This procedure can be considered a generalisation of previous work that encompasses multiple constraints. Our approach was compared with the "classic" WMN and Maximum Smoothness solutions, using a simplified 2D source model with synthetic noisy data. The ReML solution was assessed with four types of source location priors: no priors, accurate priors, inaccurate priors, and both accurate and inaccurate priors. The ReML approach proved useful as: (1) The regularisation (or influence of the a priori source covariance) increased as the noise level increased. (2) The localisation error (LE) was negligible when accurate location priors were used. (3) When accurate and inaccurate location priors were used simultaneously, the solution was not influenced by the inaccurate priors. 
The ReML solution was then applied to real somatosensory-evoked responses to illustrate the application in an empirical setting.
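The balance the abstract describes, between fitting the data and conforming to priors, reduces for a linear observation model to a regularised (MAP) estimator. A minimal numpy sketch under invented toy dimensions (the lead field, source, and noise values below are our assumptions, not the paper's; the ReML hyperparameter estimation itself is omitted and the noise variance is treated as known):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions for illustration only).
n_sensors, n_sources = 8, 32
G = rng.standard_normal((n_sensors, n_sources))  # lead-field (gain) matrix
j_true = np.zeros(n_sources)
j_true[5] = 1.0                                  # a single active source
y = G @ j_true + 0.1 * rng.standard_normal(n_sensors)

def wmn_map(y, G, C_prior, sigma2):
    """MAP / weighted-minimum-norm estimate:
    j = C G' (G C G' + sigma2 * I)^{-1} y."""
    S = G @ C_prior @ G.T + sigma2 * np.eye(G.shape[0])
    return C_prior @ G.T @ np.linalg.solve(S, y)

C = np.eye(n_sources)                  # classic minimum-norm prior covariance
j_low = wmn_map(y, G, C, sigma2=0.01)  # low assumed noise
j_high = wmn_map(y, G, C, sigma2=1.0)  # high assumed noise
# A higher noise hyperparameter shrinks the estimate more strongly,
# mirroring conclusion (1) above: regularisation grows with noise.
```

The hyperparameter `sigma2` plays the role the abstract assigns to ReML estimates; here it is simply set by hand.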

2.
Classical and Bayesian inference in neuroimaging: theory
This paper reviews hierarchical observation models, used in functional neuroimaging, in a Bayesian light. It emphasizes the common ground shared by classical and Bayesian methods to show that conventional analyses of neuroimaging data can be usefully extended within an empirical Bayesian framework. In particular we formulate the procedures used in conventional data analysis in terms of hierarchical linear models and establish a connection between classical inference and parametric empirical Bayes (PEB) through covariance component estimation. This estimation is based on an expectation-maximization or EM algorithm. The key point is that hierarchical models not only provide for appropriate inference at the highest level but that one can revisit lower levels suitably equipped to make Bayesian inferences. Bayesian inferences eschew many of the difficulties encountered with classical inference and characterize brain responses in a way that is more directly predicated on what one is interested in. The motivation for Bayesian approaches is reviewed and the theoretical background is presented in a way that relates to conventional methods, in particular restricted maximum likelihood (ReML). This paper is a technical and theoretical prelude to subsequent papers that deal with applications of the theory to a range of important issues in neuroimaging. These issues include: (i) estimating nonsphericity or variance components in fMRI time-series that can arise from serial correlations within subject, or are induced by multisubject (i.e., hierarchical) studies; (ii) spatiotemporal Bayesian models for imaging data, in which voxel-specific effects are constrained by responses in other voxels; (iii) Bayesian estimation of nonlinear models of hemodynamic responses; and (iv) principled ways of mixing structural and functional priors in EEG source reconstruction. Although diverse, all these estimation problems are accommodated by the PEB framework described in this paper.

3.
We recently outlined a Bayesian scheme for analyzing fMRI data using diffusion-based spatial priors [Harrison, L.M., Penny, W., Ashburner, J., Trujillo-Barreto, N., Friston, K.J., 2007. Diffusion-based spatial priors for imaging. NeuroImage 38, 677-695]. The current paper continues this theme, applying it to a single-subject functional magnetic resonance imaging (fMRI) study of the auditory system. We show that spatial priors on functional activations, based on diffusion, can be formulated in terms of the eigenmodes of a graph Laplacian. This allows one to discard eigenmodes with small eigenvalues, to provide a computationally efficient scheme. Furthermore, this formulation shows that diffusion-based priors are a generalization of conventional Laplacian priors [Penny, W.D., Trujillo-Barreto, N.J., Friston, K.J., 2005. Bayesian fMRI time series analysis with spatial priors. NeuroImage 24, 350-362]. Finally, we show how diffusion-based priors are a special case of Gaussian process models that can be inverted using classical covariance component estimation techniques like restricted maximum likelihood [Patterson, H.D., Thompson, R., 1974. Maximum likelihood estimation of components of variance. Paper presented at: 8th International Biometrics Conference (Constanta, Romania)]. The convention in SPM is to smooth data with a fixed isotropic Gaussian kernel before inverting a mass-univariate statistical model. This entails the strong assumption that data are generated smoothly throughout the brain. However, there is no way to determine if this assumption is supported by the data, because data are smoothed before statistical modeling. In contrast, if a spatial prior is used, smoothness is estimated given non-smoothed data. Explicit spatial priors enable formal model comparison of different prior assumptions, e.g., that data are generated from a stationary (i.e., fixed throughout the brain) or non-stationary spatial process. 
Indeed, for the auditory data we provide strong evidence for a non-stationary process, which concurs with a qualitative comparison of predicted activations at the boundary of functionally selective regions.
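The eigenmode formulation can be illustrated on a toy graph. A hedged sketch (the path graph, diffusion time, and truncation tolerance are our stand-ins for a cortical mesh and its estimated smoothness): the diffusion prior covariance exp(-t·L) shares eigenvectors with the graph Laplacian L, and modes whose prior weight is negligible can be discarded to reduce rank.

```python
import numpy as np

# Path graph on n nodes (a toy stand-in for a cortical mesh).
n = 20
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
Lap = np.diag(A.sum(1)) - A          # graph Laplacian

lam, U = np.linalg.eigh(Lap)         # eigenmodes of the Laplacian
t = 3.0                              # diffusion time (assumed, not estimated here)
w = np.exp(-t * lam)                 # prior weight of each mode under exp(-t*Lap)

keep = w > 1e-3 * w.max()            # discard modes that contribute negligibly
U_r, w_r = U[:, keep], w[keep]
K_r = (U_r * w_r) @ U_r.T            # reduced-rank diffusion prior covariance
```

In the paper the diffusion time is itself a hyperparameter estimated from the data; here it is fixed for illustration.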

4.
Dynamic causal modelling (DCM) is a modelling framework used to describe causal interactions in dynamical systems. It was developed to infer the causal architecture of networks of neuronal populations in the brain [Friston, K.J., Harrison, L., Penny, W., 2003. Dynamic causal modelling. NeuroImage 19(4), 1273-1302]. In current formulations of DCM, the mean structure of the likelihood is a nonlinear and numerical function of the parameters, which precludes exact or analytic Bayesian inversion. To date, approximations to the posterior depend on the assumption of normality (i.e., the Laplace assumption). In particular, two arguments have been used to motivate normality of the prior and posterior distributions. First, Gaussian priors on the parameters are specified carefully to ensure that activity in the dynamic system of neuronal populations converges to a steady state (i.e., the dynamic system is dissipative). Second, normality of the posterior is an approximation based on general asymptotic results regarding the form of the posterior under infinite data [Friston, K.J., Harrison, L., Penny, W., 2003. Dynamic causal modelling. NeuroImage 19(4), 1273-1302]. Here, we provide a critique of these assumptions and evaluate them numerically. We use a Bayesian inversion scheme (the Metropolis-Hastings algorithm) that eschews both assumptions. This affords an independent route to the posterior and an external means to assess the performance of conventional schemes for DCM. It also allows us to assess the sensitivity of the posterior to different priors. First, we retain the conventional priors and compare the ensuing approximate posterior (Laplace) to the exact posterior (MCMC). Our analyses show that the Laplace approximation is appropriate for practical purposes. In a second, independent set of analyses, we compare the exact posterior under conventional priors with an exact posterior under newly defined uninformative priors.
Reassuringly, we observe that the posterior is, for all practical purposes, insensitive to the choice of prior.
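The Metropolis-Hastings scheme used above as the reference inversion can be sketched in a few lines. Here the target is deliberately a one-dimensional Gaussian log-posterior, our stand-in (DCM's actual posterior comes from a nonlinear dynamical model), so the sampler's output can be checked against the known answer:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(x):
    # Stand-in log-posterior: Gaussian with mean 2.0, sd 0.5.
    return -0.5 * ((x - 2.0) / 0.5) ** 2

def metropolis_hastings(log_p, x0, n_iter=20000, step=0.5):
    """Random-walk Metropolis-Hastings: propose, then accept with
    probability min(1, p(proposal)/p(current))."""
    x, lp = x0, log_p(x0)
    samples = []
    for _ in range(n_iter):
        prop = x + step * rng.standard_normal()
        lp_prop = log_p(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        samples.append(x)
    return np.array(samples)

s = metropolis_hastings(log_post, x0=0.0)[5000:]   # discard burn-in
```

Unlike the Laplace scheme, nothing here assumes the posterior is Gaussian; for this toy target the sample mean and spread recover the truth.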

5.
Multiple sparse priors for the M/EEG inverse problem
This paper describes an application of hierarchical or empirical Bayes to the distributed source reconstruction problem in electro- and magnetoencephalography (EEG and MEG). The key contribution is the automatic selection of multiple cortical sources with compact spatial support that are specified in terms of empirical priors. This obviates the need to use priors with a specific form (e.g., smoothness or minimum norm) or with spatial structure (e.g., priors based on depth constraints or functional magnetic resonance imaging results). Furthermore, the inversion scheme allows for a sparse solution for distributed sources, of the sort enforced by equivalent current dipole (ECD) models. This means the approach automatically selects either a sparse or a distributed model, depending on the data. The scheme is compared with conventional applications of Bayesian solutions to quantify the improvement in performance.

6.
Mixed-effects and fMRI studies
This note concerns mixed-effect (MFX) analyses in multisession functional magnetic resonance imaging (fMRI) studies. It clarifies the relationship between mixed-effect analyses and the two-stage "summary statistics" procedure (Holmes, A.P., Friston, K.J., 1998. Generalisability, random effects and population inference. NeuroImage 7, S754) that has been adopted widely for analyses of fMRI data at the group level. We describe a simple procedure, based on restricted maximum likelihood (ReML) estimates of covariance components, that enables full mixed-effects analyses in the context of statistical parametric mapping. Using this procedure, we compare the results of a full mixed-effects analysis with those obtained from the simpler two-stage procedure and comment on the situations when the two approaches may give different results.
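The two-stage summary-statistics procedure is easy to state in code. A minimal sketch with simulated data and invented effect sizes (the full ReML-based mixed-effects machinery the note describes is not reproduced here):

```python
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(3)

# Stage 1: a first-level GLM per subject (one regressor, toy data).
n_subjects, n_scans = 12, 50
X = rng.standard_normal((n_scans, 1))            # shared design matrix
betas = []
for _ in range(n_subjects):
    beta_s = 1.0 + 0.2 * rng.standard_normal()   # subject-level effect (assumed)
    y_s = X[:, 0] * beta_s + 0.5 * rng.standard_normal(n_scans)
    b_hat, *_ = np.linalg.lstsq(X, y_s, rcond=None)
    betas.append(b_hat[0])

# Stage 2: treat the per-subject estimates as summary statistics
# and test them against zero at the group level.
t_stat, p_value = ttest_1samp(betas, 0.0)
```

The second-level t-test on the subject-wise contrasts is exactly the "summary statistics" shortcut; a full mixed-effects analysis would instead estimate within- and between-subject variance components jointly.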

7.
Valid conjunction inference with the minimum statistic
In logic, a conjunction is defined as an AND between truth statements. In neuroimaging, investigators may look for brain areas activated by task A AND by task B, or a conjunction of tasks (Price, C.J., Friston, K.J., 1997. Cognitive conjunction: a new approach to brain activation experiments. NeuroImage 5, 261-270). Friston et al. (Friston, K., Holmes, A., Price, C., Buchel, C., Worsley, K., 1999. Multisubject fMRI studies and conjunction analyses. NeuroImage 10, 385-396) introduced a minimum statistic test for conjunction. We refer to this method as the minimum statistic compared to the global null (MS/GN). The MS/GN is implemented in SPM2 and SPM99 software, and has been widely used as a test of conjunction. However, we assert that it does not have the correct null hypothesis for a test of logical AND, and further, this has led to confusion in the neuroimaging community. In this paper, we define a conjunction and explain the problem with the MS/GN test as a conjunction method. We present a survey of recent practice in neuroimaging which reveals that the MS/GN test is very often misinterpreted as evidence of a logical AND. We show that a correct test for a logical AND requires that all the comparisons in the conjunction are individually significant. This result holds even if the comparisons are not independent. We suggest that the revised test proposed here is the appropriate means for conjunction inference in neuroimaging.
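The valid test described above, requiring each comparison to be individually significant, amounts to thresholding the maximum p-value across comparisons (an intersection-union test). A minimal sketch with made-up p-values:

```python
def conjunction_p(p_values):
    """Valid conjunction (logical AND) inference: the conjunction is
    significant only if every comparison is individually significant,
    i.e. threshold the maximum p-value (equivalently, the minimum
    statistic compared to the conjunction null, not the global null)."""
    return max(p_values)

# Task A significant on its own, task B not: no valid conjunction.
alpha = 0.05
p_conj = conjunction_p([0.001, 0.20])
significant = p_conj < alpha
```

Because this test rejects only when all comparisons reject at level alpha, its false-positive rate is controlled even when the comparisons are dependent, which is the key property the MS/GN test lacks.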

8.
Distributed linear solutions of the EEG source localization problem are used routinely. Here we describe an approach based on the weighted minimum norm method that imposes constraints using anatomical and physiological information derived from other imaging modalities to regularize the solution. In this approach the hyperparameters controlling the degree of regularization are estimated using restricted maximum likelihood (ReML). EEG data are always contaminated by noise, e.g., exogenous noise and background brain activity. The conditional expectation of the source distribution, given the data, is attained by carefully balancing the minimization of the residuals induced by noise and the improbability of the estimates as determined by their priors. This balance is specified by hyperparameters that control the relative importance of fitting and conforming to prior constraints. Here we introduce a systematic approach to this regularization problem, in the context of a linear observation model we have described previously. In this model, basis functions are extracted to reduce the solution space a priori in the spatial and temporal domains. The basis sets are motivated by knowledge of the evoked EEG response and information theory. In this paper we focus on an iterative "expectation-maximization" procedure to jointly estimate the conditional expectation of the source distribution and the ReML hyperparameters on which this solution rests. We used simulated data mixed with real EEG noise to explore the behavior of the approach with various source locations, priors, and noise levels. The results enabled us to conclude: (i) Solutions in the space of informed basis functions have a high face and construct validity, in relation to conventional analyses. (ii) The hyperparameters controlling the degree of regularization vary considerably with source geometry and noise. The second conclusion speaks to the usefulness of using adaptive ReML hyperparameter estimates.

9.
Recently, we described a Bayesian inference approach to the MEG/EEG inverse problem that used numerical techniques to estimate the full posterior probability distributions of likely solutions upon which all inferences were based [Schmidt, D.M., George, J.S., Wood, C.C., 1999. Bayesian inference applied to the electromagnetic inverse problem. Human Brain Mapping 7, 195; Schmidt, D.M., George, J.S., Ranken, D.M., Wood, C.C., 2001. Spatial-temporal Bayesian inference for MEG/EEG. In: Nenonen, J., Ilmoniemi, R.J., Katila, T. (Eds.), Biomag 2000: 12th International Conference on Biomagnetism. Espoo, Finland, p. 671]. Schmidt et al. (1999) focused on the analysis of data at a single point in time employing an extended region source model. They subsequently extended their work to a spatiotemporal Bayesian inference analysis of the full spatiotemporal MEG/EEG data set. Here, we formulate spatiotemporal Bayesian inference analysis using a multi-dipole model of neural activity. This approach is faster than the extended region model, does not require use of the subject's anatomical information, does not require prior determination of the number of dipoles, and yields quantitative probabilistic inferences. In addition, we have incorporated the ability to handle much more complex and realistic estimates of the background noise, which may be represented as a sum of Kronecker products of temporal and spatial noise covariance components. This reduces the effects of undermodeling noise. In order to reduce the rigidity of the multi-dipole formulation, which commonly causes problems due to multiple local minima, we treat the given covariance of the background as uncertain and marginalize over it in the analysis. Markov chain Monte Carlo (MCMC) was used to sample the many possible likely solutions. The spatiotemporal Bayesian dipole analysis is demonstrated using simulated and empirical whole-head MEG data.

10.
The fundamental problem faced by noninvasive neuroimaging techniques such as EEG/MEG is to elucidate functionally important aspects of the microscopic neuronal network dynamics from macroscopic aggregate measurements. Due to the mixing of the activities of large neuronal populations in the observed macroscopic aggregate, recovering the underlying network that generates the signal in the absence of any additional information represents a considerable challenge. Recent MEG studies have shown that macroscopic measurements contain sufficient information to allow the differentiation between patterns of activity, which are likely to represent different stimulus-specific collective modes in the underlying network (Hadjipapas, A., Adjamian, P., Swettenham, J.B., Holliday, I.E., Barnes, G.R., 2007. Stimuli of varying spatial scale induce gamma activity with distinct temporal characteristics in human visual cortex. NeuroImage 35, 518-530). The next question arising in this context is whether aspects of collective network activity can be recovered from a macroscopic aggregate signal. We propose that this issue is most appropriately addressed if MEG/EEG signals are viewed as macroscopic aggregates arising from networks of coupled systems, as opposed to aggregates across a mass of largely independent neural systems. We show that collective modes arising in a network of simulated coupled systems can indeed be recovered from the macroscopic aggregate. Moreover, we show that nonlinear state space methods yield a good approximation of the number of effective degrees of freedom in the network. Importantly, information about hidden variables, which do not directly contribute to the aggregate signal, can also be recovered. Finally, this theoretical framework can be applied to experimental MEG/EEG data in the future, enabling the inference of state-dependent changes in the degree of local synchrony in the underlying network.

11.
Wipf, D., Nagarajan, S. NeuroImage, 2009, 44(3): 947-966
The ill-posed nature of the MEG (or related EEG) source localization problem requires the incorporation of prior assumptions when choosing an appropriate solution out of an infinite set of candidates. Bayesian approaches are useful in this capacity because they allow these assumptions to be explicitly quantified using postulated prior distributions. However, the means by which these priors are chosen, as well as the estimation and inference procedures that are subsequently adopted to affect localization, have led to a daunting array of algorithms with seemingly very different properties and assumptions. From the vantage point of a simple Gaussian scale mixture model with flexible covariance components, this paper analyzes and extends several broad categories of Bayesian inference directly applicable to source localization including empirical Bayesian approaches, standard MAP estimation, and multiple variational Bayesian (VB) approximations. Theoretical properties related to convergence, global and local minima, and localization bias are analyzed and fast algorithms are derived that improve upon existing methods. This perspective leads to explicit connections between many established algorithms and suggests natural extensions for handling unknown dipole orientations, extended source configurations, correlated sources, temporal smoothness, and computational expediency. Specific imaging methods elucidated under this paradigm include the weighted minimum ℓ2-norm, FOCUSS, minimum current estimation, VESTAL, sLORETA, restricted maximum likelihood, covariance component estimation, beamforming, variational Bayes, the Laplace approximation, and automatic relevance determination, as well as many others.
Perhaps surprisingly, all of these methods can be formulated as particular cases of covariance component estimation using different concave regularization terms and optimization rules, making general theoretical analyses and algorithmic extensions/improvements particularly relevant.

12.
We address some key issues entailed by population inference about responses evoked in distributed brain systems using magnetoencephalography (MEG). In particular, we look at model selection issues at the within-subject level and feature selection issues at the between-subject level, using responses evoked by intact and scrambled faces around 170 ms (M170). We compared the face validity of subject-specific forward models and their summary statistics in terms of how estimated responses reproduced over subjects. At the within-subject level, we focused on the use of multiple constraints, or priors, for inverting distributed source models. We used restricted maximum likelihood (ReML) estimates of prior covariance components (in both sensor and source space) and show that their relative importance is conserved over subjects. At the between-subject level, we used standard anatomical normalization methods to create posterior probability maps that furnish inference about regionally specific population responses. We used these to compare different summary statistics, namely: (i) whether to test for differences between condition-specific source estimates, or to test the source estimate of differences between conditions; and (ii) whether to accommodate differences in source orientation by using signed or unsigned (absolute) estimates of source activity.

13.
Magnetoencephalography (MEG) provides millisecond-scale temporal resolution for noninvasive mapping of human brain functions, but the problem of reconstructing the underlying source currents from the extracranial data has no unique solution. Several distributed source estimation methods based on different prior assumptions have been suggested for the resolution of this inverse problem. Recently, a hierarchical Bayesian generalization of the traditional minimum norm estimate (MNE) was proposed, in which the variance of distributed current at each cortical location is considered as a random variable and estimated from the data using the variational Bayesian (VB) framework. Here, we introduce an alternative scheme for performing Bayesian inference in the context of this hierarchical model by using Markov chain Monte Carlo (MCMC) strategies. In principle, the MCMC method is capable of numerically representing the true posterior distribution of the currents whereas the VB approach is inherently approximative. We point out some potential problems related to hyperprior selection in the previous work and study some possible solutions. A hyperprior sensitivity analysis is then performed, and the structure of the posterior distribution as revealed by the MCMC method is investigated. We show that the structure of the true posterior is rather complex with multiple modes corresponding to different possible solutions to the source reconstruction problem. We compare the results from the VB algorithm to those obtained from the MCMC simulation under different hyperparameter settings. The difficulties in using a unimodal variational distribution as a proxy for a truly multimodal distribution are also discussed. Simulated MEG data with realistic sensor and source geometries are used in performing the analyses.

14.
Lohmann, G., Erfurth, K., Müller, K., Turner, R. NeuroImage, 2012, 59(3): 2322-2329
Dynamic causal modelling (DCM) (Friston et al., 2003) is a technique designed to investigate the influence between brain areas using time series data obtained by EEG/MEG or functional magnetic resonance imaging (fMRI). The basic idea is to fit various models to time series data, and select one of those models using Bayesian model comparison. Here, we present a critical evaluation showing that DCM can be challenged on several grounds. We will discuss three main points relating to combinatorial explosion, the validity of the model selection procedure, and problems with respect to model validation.

15.
16.
17.
Classical and Bayesian inference in neuroimaging: applications
In Friston et al. (2002, NeuroImage 16: 465-483) we introduced empirical Bayes as a potentially useful way to estimate and make inferences about effects in hierarchical models. In this paper we present a series of models that exemplify the diversity of problems that can be addressed within this framework. In hierarchical linear observation models, both classical and empirical Bayesian approaches can be framed in terms of covariance component estimation (e.g., variance partitioning). To illustrate the use of the expectation-maximization (EM) algorithm in covariance component estimation we focus first on two important problems in fMRI: nonsphericity induced by (i) serial or temporal correlations among errors and (ii) variance components caused by the hierarchical nature of multisubject studies. In hierarchical observation models, variance components at higher levels can be used as constraints on the parameter estimates of lower levels. This enables the use of parametric empirical Bayesian (PEB) estimators, as distinct from classical maximum likelihood (ML) estimates. We develop this distinction to address: (i) the difference between response estimates based on ML and the conditional means from a Bayesian approach, and the implications for estimates of intersubject variability; (ii) the relationship between fixed- and random-effect analyses; (iii) the specificity and sensitivity of Bayesian inference; and, finally, (iv) the relative importance of the number of scans and subjects. The foregoing is concerned with within- and between-subject variability in multisubject hierarchical fMRI studies. In the second half of this paper we turn to Bayesian inference at the first (within-voxel) level, using PET data to show how priors can be derived from the (between-voxel) distribution of activations over the brain. This application uses exactly the same ideas and formalism but, in this instance, the second level is provided by observations over voxels as opposed to subjects.
The ensuing posterior probability maps (PPMs) have enhanced anatomical precision and greater face validity, in relation to underlying anatomy. Furthermore, in comparison to conventional SPMs they are not confounded by the multiple comparison problem that, in a classical context, dictates high thresholds and low sensitivity. We conclude with some general comments on Bayesian approaches to image analysis and on some unresolved issues.
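A posterior probability map assigns each voxel the posterior probability that its effect exceeds a chosen size. A hedged sketch with invented posterior moments (a Gaussian posterior per voxel is assumed, as in the PEB scheme; the numbers below are illustrative only):

```python
import numpy as np
from scipy.stats import norm

# Hypothetical per-voxel posterior moments (conditional mean and sd).
post_mean = np.array([0.1, 0.8, 1.5])
post_sd = np.array([0.5, 0.4, 0.3])
gamma = 0.5    # effect-size threshold defining "activation"

# PPM value: posterior probability that the effect exceeds gamma.
ppm = norm.sf(gamma, loc=post_mean, scale=post_sd)
active = ppm > 0.95    # display threshold on posterior probability
```

Unlike a classical SPM, thresholding a PPM at a fixed posterior probability does not become more conservative as the number of voxels grows, which is why the multiple comparison problem does not arise in the same form.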

18.
Lapalme, E., Lina, J.M., Mattout, J. NeuroImage, 2006, 30(1): 160-171
In Amblard et al. [Amblard, C., Lapalme, E., Lina, J.M., 2004. Biomagnetic source detection by maximum entropy and graphical models. IEEE Trans. Biomed. Eng. 51(3), 427-442], the authors introduced the maximum entropy on the mean (MEM) as a methodological framework for solving the magnetoencephalography (MEG) inverse problem. The main component of the MEM is a reference probability density that enables one to include all kinds of prior information on the source intensity distribution to be estimated. This reference law also encompasses the definition of a model. We consider a distributed source model together with a clustering hypothesis that assumes functionally coherent dipoles. The reference probability distribution is defined as a prior parceling of the cortical surface. In this paper, we present a data-driven approach for parceling out the cortex into regions that are functionally coherent. Based on the recently developed multivariate source pre-localization (MSP) principle [Mattout, J., Pelegrini-Issac, M., Garnero, L., Benali, H., 2005. Multivariate source pre-localization (MSP): use of functionally informed basis functions for better conditioning the MEG inverse problem. NeuroImage 26(2), 356-373], the data-driven clustering (DDC) of the dipoles provides an efficient parceling of the sources as well as an estimate of the parameters of the initial reference probability distribution. On simulated MEG data, the DDC is shown to further improve the MEM inverse approach, as evaluated using two different iterative algorithms, classical error metrics, and ROC (receiver operating characteristic) curve analysis. The MEM solution is also compared to a LORETA-like inverse approach. The data-driven clustering takes full advantage of the MEM formalism. Its main strengths lie in the flexible probabilistic way of introducing priors and in the notion of spatially coherent regions of activation. The latter reduces the dimensionality of the problem.
In so doing, it narrows the gap between the two types of inverse methods: the popular dipolar approaches and the distributed ones.

19.
Liu, Z., He, B. NeuroImage, 2008, 39(3): 1198-1214
In response to the need of establishing a high-resolution spatiotemporal neuroimaging technique, tremendous efforts have been focused on developing multimodal strategies that combine the complementary advantages of high-spatial-resolution functional magnetic resonance imaging (fMRI) and high-temporal-resolution electroencephalography (EEG) or magnetoencephalography (MEG). A critical challenge to the fMRI-EEG/MEG integration lies in the spatial mismatches between fMRI activations and instantaneous electrical source activities. Such mismatches are fundamentally due to the fact that fMRI and EEG/MEG signals are generated and collected in highly different time scales. In this paper, we propose a new theoretical framework to solve the problem of fMRI-EEG integrated cortical source imaging. The new framework has two principal technical advancements. First, by assuming a linear neurovascular coupling, a method is derived to quantify the fMRI signal in each voxel as proportional to the time integral of the power of local electrical current during the period of event-related potentials (ERP). Second, the EEG inverse problem is solved for every time instant using an adaptive Wiener filter, in which the prior time-variant source covariance matrix is estimated by combining the quantified fMRI responses and the segmented EEG signals before response averaging. A series of computer simulations were conducted to evaluate the proposed methods in terms of imaging the instantaneous cortical current density (CCD) distribution and estimating the source time courses with a millisecond temporal resolution. As shown in the simulation results, the instantaneous CCD reconstruction by using the proposed fMRI-EEG integration method was robust against both fMRI false positives and false negatives while retaining a spatial resolution nearly as high as that of fMRI. 
The proposed method could also reliably estimate the source waveforms when multiple sources were temporally correlated or uncorrelated, sustained or transient, distinguished by features in frequency or phase, or followed even more complicated temporal dynamics. Moreover, applying the proposed method to real fMRI and EEG data acquired in a visual experiment yielded a time series of reconstructed CCD images in agreement with the traditional view of hierarchical visual processing. In conclusion, the proposed method provides a reliable technique for fMRI-EEG integration, represents a significant advance over conventional fMRI-weighted EEG (or MEG) source imaging techniques, and is also applicable to integrated fMRI-MEG source imaging.

20.
In magneto- and electroencephalography (M/EEG), spatial modelling of sensor data is necessary to make inferences about underlying brain activity. Most source reconstruction techniques belong to one of two approaches: point source models, which explain the data with a small number of equivalent current dipoles, and distributed source or imaging models, which use thousands of dipoles. Much methodological research has been devoted to developing sophisticated Bayesian source imaging inversion schemes, while dipoles have received less such attention. Dipole models have their advantages; they are often appropriate summaries of evoked responses or helpful first approximations. Here, we propose a variational Bayesian algorithm that enables the fast Bayesian inversion of dipole models. The approach allows for specification of priors on all the model parameters. The posterior distributions can be used to form Bayesian confidence intervals for interesting parameters, like dipole locations. Furthermore, competing models (e.g., models with different numbers of dipoles) can be compared using their evidence or marginal likelihood. Using synthetic data, we found that the scheme provides accurate dipole localizations. We illustrate the advantage of our Bayesian scheme using a multi-subject EEG auditory study, where we compare competing models for the generation of the N100 component.
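Model comparison via the evidence can be illustrated with a deliberately simple stand-in: two nested linear models scored with BIC, which approximates -2 log model evidence (the paper itself scores dipole models with a variational free-energy bound, not BIC; everything below is an invented toy, not the authors' scheme):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data from a 1-parameter model, compared against a needlessly
# flexible 5-parameter alternative (stand-ins for competing dipole models).
n = 100
x = np.linspace(0, 1, n)
y = 2.0 * x + 0.1 * rng.standard_normal(n)

def bic(y, X):
    """BIC = m*log(RSS/m) + k*log(m); lower is better (higher evidence)."""
    m = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return m * np.log(rss / m) + X.shape[1] * np.log(m)

X1 = x[:, None]          # simple model: y = b*x
X5 = np.vander(x, 5)     # flexible model: quartic polynomial
# The complexity penalty outweighs the flexible model's small gain in fit,
# so the evidence favours the simpler (true) model.
```

The same logic, with log evidence in place of BIC, is what lets the scheme above decide between, say, one- and two-dipole models.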
