Similar Documents
20 similar documents found.
1.
We have developed a novel probabilistic model that estimates neural source activity measured by MEG and EEG data while suppressing the effect of interference and noise sources. The model estimates contributions to sensor data from evoked sources, interference sources and sensor noise using Bayesian methods and by exploiting knowledge about their timing and spatial covariance properties. Full posterior distributions are computed rather than just the MAP estimates. In simulation, the algorithm can accurately localize and estimate the time courses of several simultaneously active dipoles, with rotating or fixed orientation, at noise levels typical for averaged MEG data. The algorithm even performs reasonably at noise levels typical of an average of just a few trials. The algorithm is superior to beamforming techniques, which we show to be an approximation to our graphical model, in estimation of temporally correlated sources. The algorithm's success with real MEG data is also demonstrated for localizing bilateral auditory cortex, low-SNR somatosensory activations, and an epileptic spike source.

2.
The synchronous brain activity measured via MEG (or EEG) can be interpreted as arising from a collection (possibly large) of current dipoles or sources located throughout the cortex. Estimating the number, location, and time course of these sources remains a challenging task, one that is significantly compounded by the effects of source correlations and unknown orientations and by the presence of interference from spontaneous brain activity, sensor noise, and other artifacts. This paper derives an empirical Bayesian method for addressing each of these issues in a principled fashion. The resulting algorithm guarantees descent of a cost function uniquely designed to handle unknown orientations and arbitrary correlations. Robust interference suppression is also easily incorporated. In a restricted setting, the proposed method is shown to produce theoretically zero reconstruction error when estimating multiple dipoles, even in the presence of strong correlations and unknown orientations, unlike a variety of existing Bayesian localization methods or common signal processing techniques such as beamforming and sLORETA. Empirical results on both simulated and real data sets verify the efficacy of this approach.

3.
Adaptive spatial filters (beamformers) have gained popularity as an effective method for the localization of brain activity from magnetoencephalography (MEG) data. Among the attractive features of some beamforming methods are high spatial resolution and no localization bias even in the presence of random noise. A drawback common to all beamforming methods, however, is significant degradation in performance in the presence of sources with high temporal correlations. Using numerical simulations and examples of auditory and visual evoked field responses, we demonstrate that, at typical signal-to-noise levels, complete attenuation of fully correlated brain activity is unlikely to occur, although significant localization and amplitude biases may arise. We compared various methods for correcting these biases and found the coherent source suppression model (CSSM) (Dalal et al., 2006) to be the most effective, with small biases for widely separated sources (e.g., bilateral auditory areas); however, amplitude biases increased systematically as the distance between the sources decreased. We assessed the performance and systematic biases that may result from the use of this model, and confirmed our findings with real examples of correlated brain activity in bilateral occipital and inferior temporal areas evoked by visually presented faces in a group of 21 adults. We demonstrated the ability to localize source activity in both regions, including correlated sources in close proximity (~3 cm) in bilateral primary visual cortex, when a priori information regarding source location is used. We conclude that CSSM, when carefully applied, can significantly improve localization accuracy, although amplitude biases may remain.
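To make the beamforming approach discussed above concrete, the sketch below implements a generic scalar LCMV beamformer on synthetic data; it is not the CSSM variant, the lead field and noise level are made up, and all variable names are illustrative. Setting `rho` near 1.0 reproduces the correlated-source attenuation the abstract describes.

```python
# A minimal sketch of a scalar LCMV beamformer (not the CSSM variant discussed
# above); lead field and data here are synthetic, and all names are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_times = 32, 500

# Hypothetical fixed-orientation lead-field vectors for two candidate locations.
L = rng.standard_normal((n_sensors, 2))

# Two source time courses; set `rho` close to 1.0 to see correlated-source attenuation.
rho = 0.0
s1 = np.sin(2 * np.pi * 10 * np.arange(n_times) / 250.0)
s2 = rho * s1 + np.sqrt(1 - rho**2) * np.sin(2 * np.pi * 17 * np.arange(n_times) / 250.0)
data = L @ np.vstack([s1, s2]) + 0.5 * rng.standard_normal((n_sensors, n_times))

# Regularized data covariance.
C = np.cov(data)
C += 0.05 * np.trace(C) / n_sensors * np.eye(n_sensors)
Cinv = np.linalg.inv(C)

for k in range(2):
    l = L[:, [k]]                                   # lead field of candidate source k
    w = Cinv @ l @ np.linalg.inv(l.T @ Cinv @ l)    # LCMV weights
    power = 1.0 / (l.T @ Cinv @ l)                  # estimated source power
    timecourse = (w.T @ data).ravel()               # estimated source time course
    print(f"source {k}: power = {power.item():.3f}")
```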

4.
MEG and EEG data contain additive correlated noise generated by environmental and physiological sources. To suppress this type of spatially coloured noise, source estimation is often performed with spatial whitening based on a measured or estimated noise covariance matrix. However, artifacts that span relatively small noise subspaces, such as cardiac, ocular, and muscle artifacts, are often explicitly removed by a variety of denoising methods (e.g., signal space projection) before source imaging. Here, we introduce a new approach, the spectral signal space projection (S³P) algorithm, in which time-frequency (TF)-specific spatial projectors are designed and applied to the noisy TF-transformed data, and whitened source estimation is performed in the TF domain. The approach can be used to derive spectral variants of all linear time-domain whitened source estimation algorithms. The denoised sensor and source time series are obtained by the corresponding inverse TF transform. The method is evaluated and compared with existing subspace projection and signal separation techniques using experimental data. Altogether, S³P provides an expanded framework for MEG/EEG data denoising and whitened source imaging in both the time and frequency/scale domains.
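For readers unfamiliar with the projection step, the following is a minimal sketch of ordinary time-domain signal space projection on synthetic data; S³P applies the same kind of projector per time-frequency bin, which is not reproduced here, and the artifact topography is assumed known for simplicity.

```python
# A minimal sketch of ordinary signal-space projection (SSP); S³P applies the same
# idea per time-frequency bin, which is not reproduced here. Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_times = 64, 1000

artifact_topo = rng.standard_normal((n_sensors, 1))          # e.g., a cardiac topography
artifact = artifact_topo @ rng.standard_normal((1, n_times))
brain = rng.standard_normal((n_sensors, n_times))
data = brain + 5.0 * artifact

# Estimate the artifact subspace from a segment dominated by the artifact
# (here we cheat and use the artifact itself), keeping the top component.
U, _, _ = np.linalg.svd(artifact, full_matrices=False)
U1 = U[:, :1]

P = np.eye(n_sensors) - U1 @ U1.T      # projector onto the orthogonal complement
cleaned = P @ data

print("residual artifact power:", np.linalg.norm(P @ artifact) / np.linalg.norm(artifact))
```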

5.
This paper formulates a novel probabilistic graphical model for noisy stimulus-evoked MEG and EEG sensor data obtained in the presence of large background brain activity. The model describes the observed data in terms of unobserved evoked and background factors with additive sensor noise. We present an expectation maximization (EM) algorithm that estimates the model parameters from data. Using the model, the algorithm cleans the stimulus-evoked data by removing interference from background factors and noise artifacts and separates those data into contributions from independent factors. We demonstrate on real and simulated data that the algorithm outperforms benchmark methods for denoising and separation. We also show that the algorithm improves the performance of localization with beamforming algorithms.
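As a loose, hedged stand-in for the separation idea (not the authors' EM algorithm), the sketch below fits an ordinary factor-analysis model to synthetic pre-stimulus data to learn background spatial patterns and then projects them out of the post-stimulus data; all dimensions and names are assumptions.

```python
# A loose stand-in for the paper's EM-based separation (not the authors' algorithm):
# learn background factors from pre-stimulus data with ordinary factor analysis,
# then project their spatial patterns out of the post-stimulus data. Names and
# dimensions are illustrative.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n_sensors, n_pre, n_post, n_bg = 32, 2000, 500, 3

bg_mix = rng.standard_normal((n_sensors, n_bg))               # background spatial patterns
pre = bg_mix @ rng.standard_normal((n_bg, n_pre)) + 0.2 * rng.standard_normal((n_sensors, n_pre))

evoked_topo = rng.standard_normal((n_sensors, 1))
evoked = evoked_topo @ np.sin(2 * np.pi * 5 * np.arange(n_post) / 250.0)[None, :]
post = evoked + bg_mix @ rng.standard_normal((n_bg, n_post)) + 0.2 * rng.standard_normal((n_sensors, n_post))

fa = FactorAnalysis(n_components=n_bg).fit(pre.T)             # samples x sensors
A = fa.components_.T                                          # (n_sensors, n_bg) background patterns

# Remove the span of the learned background patterns from the post-stimulus data.
P = np.eye(n_sensors) - A @ np.linalg.pinv(A)
cleaned = P @ post
print("fraction of evoked topography retained:", np.linalg.norm(P @ evoked_topo) / np.linalg.norm(evoked_topo))
```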

6.
In understanding and modeling brain functioning by EEG/MEG, it is important not only to identify active areas but also to understand the interactions among different areas. The EEG/MEG signals result from the superimposition of underlying brain source activities volume conducted through the head. The effects of volume conduction produce spurious interactions in the measured signals. It is fundamental to separate true source interactions from noise and to unmix the contributions of different systems composed of interacting sources in order to understand interaction mechanisms. As a prerequisite, we consider the problem of unmixing the contribution of uncorrelated sources to a measured field. This problem is equivalent to the problem of unmixing the contributions of different uncorrelated compound systems composed of interacting sources. To this end, we develop a principal component analysis-based method, source principal component analysis (sPCA), which exploits the underlying assumption of orthogonality for sources estimated from linear inverse methods to extract essential features in signal space. We then consider the problem of demixing the contributions of correlated sources that comprise each of the compound systems identified by sPCA. While the sPCA orthogonality assumption is sufficient to separate uncorrelated systems, it cannot separate the individual components within each system. To address this problem, we introduce Minimum Overlap Component Analysis (MOCA), which employs a purely spatial criterion to unmix pairs of correlated (or coherent) sources. The proposed methods are tested in simulations and applied to EEG data from human mu and alpha rhythms.
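The sketch below only illustrates the orthogonality assumption that sPCA builds on, by extracting uncorrelated components from synthetic sensor data with a plain PCA/SVD; it is not the actual sPCA or MOCA procedure.

```python
# A minimal PCA decomposition of simulated sensor data into uncorrelated components;
# this only illustrates the orthogonality assumption underlying sPCA, not the actual
# sPCA or MOCA procedures, and all quantities are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n_sensors, n_times = 16, 2000

topo = rng.standard_normal((n_sensors, 2))                 # two uncorrelated systems
sources = np.vstack([np.sin(2 * np.pi * 10 * np.arange(n_times) / 500.0),
                     rng.standard_normal(n_times)])
data = topo @ sources + 0.1 * rng.standard_normal((n_sensors, n_times))

data_centered = data - data.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(data_centered, full_matrices=False)

explained = S**2 / np.sum(S**2)
print("variance explained by first two components:", explained[:2].round(3))
components = Vt[:2]                                         # uncorrelated component time courses
```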

7.
In the context of 3D medical imaging, accurate segmentation of multimodal brain MR images is of interest for many brain disorders. However, due to several factors such as noise, imaging artifacts, intrinsic tissue variation and partial volume effects, tissue classification remains a challenging task. In this paper, we present a unifying framework for unsupervised segmentation of multimodal brain MR images that includes partial volume effect modeling, bias field correction, and information given by a probabilistic atlas. The proposed method takes into account neighborhood information using a Hidden Markov Chain (HMC) model. Because of the limited resolution of imaging devices, voxels may be composed of a mixture of different tissue types; this partial volume effect is included to achieve an accurate segmentation of brain tissues. Instead of assigning each voxel to a single tissue class (i.e., hard classification), we compute the relative amount of each pure tissue class in each voxel (mixture estimation). Further, a bias field estimation step is added to the proposed algorithm to correct intensity inhomogeneities. Atlas priors are incorporated using a probabilistic brain atlas containing prior expectations about the spatial localization of the different tissue classes. This atlas is considered as a complementary sensor, and the proposed method is extended to multimodal brain MRI without any user-tunable parameter (unsupervised algorithm). To validate this new unifying framework, we present experimental results on both synthetic and real brain images for which the ground truth is available. Comparison with other commonly used techniques demonstrates the accuracy and robustness of this new Markovian segmentation scheme.
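As a much-simplified illustration of soft tissue classification (not the HMC model above, and without spatial context, partial volume mixing, bias field correction, or atlas priors), the sketch below fits a Gaussian mixture to synthetic voxel intensities and reports per-voxel tissue posteriors.

```python
# A much-simplified stand-in for the paper's HMC segmentation: a plain Gaussian
# mixture gives each voxel posterior tissue probabilities (soft classification),
# but ignores spatial context, partial volume mixing, bias field and atlas priors.
# Intensities and class parameters are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
n_vox = 5000
means, labels = np.array([30.0, 80.0, 120.0]), rng.integers(0, 3, n_vox)   # CSF / GM / WM
intensities = means[labels] + 8.0 * rng.standard_normal(n_vox)

gmm = GaussianMixture(n_components=3, random_state=0).fit(intensities[:, None])
posteriors = gmm.predict_proba(intensities[:, None])    # per-voxel tissue probabilities

print("estimated class means:", np.sort(gmm.means_.ravel()).round(1))
print("first voxel posteriors:", posteriors[0].round(2))
```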

8.
In this paper, we present an extensive performance evaluation of a novel source localization algorithm, Champagne. It is derived in an empirical Bayesian framework that yields sparse solutions to the inverse problem. It is robust to correlated sources and learns the statistics of non-stimulus-evoked activity to suppress the effect of noise and interfering brain activity. We tested Champagne on both simulated and real M/EEG data. The source locations used for the simulated data were chosen to test the performance on challenging source configurations. In simulations, we found that Champagne outperforms the benchmark algorithms in terms of both the accuracy of the source localizations and the correct estimation of source time courses. We also demonstrate that Champagne is more robust to correlated brain activity present in real MEG data and is able to resolve many distinct and functionally relevant brain areas with real MEG and EEG data.

9.
Linearly constrained minimum variance beamformers are highly effective for analysis of weakly correlated brain activity, but their performance degrades when correlations become significant. Multiple constrained minimum variance (MCMV) beamformers are insensitive to source correlations but require a priori information about the source locations. Moreover, the question of whether unbiased estimates of source positions and orientations can be obtained has remained unanswered. In this work, we derive MCMV-based source localizers that can be applied to both induced and evoked brain activity. They may be regarded as a generalization of scalar minimum-variance beamformers to the case of multiple correlated sources. We show that for arbitrary noise covariance these beamformers provide simultaneous unbiased estimates of multiple source positions and orientations and remain bounded at singular points. We also propose an iterative search algorithm that makes it possible to find sources approximately without a priori assumptions about their locations and orientations. Simulations and analyses of real MEG data demonstrate that the presented approach is superior to traditional single-source beamformers in situations where correlations between the sources are significant.

10.
We present several methods to improve the resolution of human brain mapping by combining information obtained from surface electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) of the same participants performing the same task in separate imaging sessions. As an initial step in our methods, we used independent component analysis (ICA) to obtain task-related sources for both EEG and fMRI. We then used that information in an integrated cost function that attempts to match both data sources and trades goodness of fit in one regime for another. We compared the performance and drawbacks of each method in localizing sources for a dual visual evoked response experiment, and we contrasted the results of adding fMRI information to simple EEG-only inversion methods. We found that adding fMRI information in a variety of ways gives superior results to classical minimum norm source estimation. Our findings lead us to favor a method that attempts to match EEG scalp dynamics along with voxel power obtained from ICA-processed blood oxygenation level dependent (BOLD) data; this method of joint inversion enables us to treat the two data sources as symmetrically as possible.
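To illustrate only the ICA step described above (not the joint EEG-fMRI cost function), the following sketch unmixes synthetic multichannel data with FastICA; channel counts, sources, and names are assumptions.

```python
# A minimal ICA decomposition with FastICA, illustrating only the first step
# (extracting candidate task-related components); the joint EEG/fMRI cost function
# described above is not reproduced. Data are synthetic.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(5)
n_times = 2000
t = np.arange(n_times) / 250.0

sources = np.vstack([np.sign(np.sin(2 * np.pi * 3 * t)),      # a task-locked square wave
                     np.sin(2 * np.pi * 10 * t),              # ongoing alpha-like rhythm
                     rng.laplace(size=n_times)])              # noisy artifact
mixing = rng.standard_normal((20, 3))                         # 20 "electrodes"
eeg = (mixing @ sources).T                                     # samples x channels

ica = FastICA(n_components=3, random_state=0)
components = ica.fit_transform(eeg)                            # samples x components
patterns = ica.mixing_                                         # channels x components (scalp patterns)
print("recovered component array shape:", components.shape)
```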

11.
Rowe DB. NeuroImage 2005;25(4):1310-1324.
In MRI and fMRI, images or voxel measurements are complex valued, or bivariate, at each time point. Rowe and Logan (Rowe, D.B., Logan, B.R., 2004. A complex way to compute fMRI activation. NeuroImage 23 (3), 1078-1092) introduced an fMRI magnitude activation model that utilized both the real and imaginary data in each voxel. This model, following traditional beliefs, specified that the phase time course was a fixed unknown quantity that may be estimated voxel-by-voxel. Subsequently, Rowe and Logan (Rowe, D.B., Logan, B.R., 2005. Complex fMRI analysis with unrestricted phase is equivalent to a magnitude-only model. NeuroImage 24 (2), 603-606) generalized the model to place no restrictions on the phase time course. They showed that this unrestricted phase model was mathematically equivalent to the usual magnitude-only data model, including regression coefficients and voxel activation statistic, but philosophically different due to its derivation from complex data. Recent findings by Hoogenraad et al. (Hoogenraad, F.G., Reichenbach, J.R., Haacke, E.M., Lai, S., Kuppusamy, K., Sprenger, M., 1998. In vivo measurement of changes in venous blood-oxygenation with high resolution functional MRI at 0.95 Tesla by measuring changes in susceptibility and velocity. Magn. Reson. Med. 39 (1), 97-107) and Menon (Menon, R.S., 2002. Postacquisition suppression of large-vessel BOLD signals in high-resolution fMRI. Magn. Reson. Med. 47 (1), 1-9) indicate that the voxel phase time course may exhibit task-related changes. In this paper, a general complex fMRI activation model is introduced that describes both the magnitude and phase in complex data and can be used to specifically characterize task-related change in both. Hypotheses regarding task-related magnitude and/or phase changes are evaluated using derived activation statistics. It was found that the Rowe-Logan complex constant-phase model strongly biases against voxels with task-related phase changes, and that the present, very general complex linear phase model can be cast to address several different hypotheses sensitive to different magnitude/phase changes.

12.
Performing an accurate localization of sources of interictal spikes from EEG scalp measurements is of particular interest during the presurgical investigation of epilepsy. The purpose of this paper is to study the ability of six distributed source localization methods to recover extended sources of activated cortex. Because of the frequent lack of a gold standard for evaluating source localization methods, our evaluation was performed in a controlled environment using realistic simulations of EEG interictal spikes, involving several anatomical locations with several spatial extents. Simulated data were corrupted by physiological EEG noise. Simulations involving pairs of sources with the same amplitude were also studied. In addition to standard validation criteria (e.g., geodesic distance or mean square error), we propose an original criterion dedicated to assessing detection accuracy, based on receiver operating characteristic (ROC) analysis. Six source localization methods were evaluated: the minimum norm, the minimum norm weighted by multivariate source prelocalization (MSP), cortical LORETA with or without additional minimum norm regularization, and two derivations of the maximum entropy on the mean (MEM) approach. Results showed that LORETA-based and MEM-based methods were able to accurately recover sources of different spatial extents, with the exception of sources in temporo-mesial and fronto-mesial regions. However, these methods also generated several spurious sources, whereas methods using the MSP always located the maximum of activity very accurately but not its spatial extent. These findings suggest that one should always take into account the results from different localization methods when analyzing real interictal spikes.
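Of the six approaches compared above, the plain minimum norm is simple enough to sketch; the following toy example applies a Tikhonov-regularized minimum-norm inverse to a synthetic lead field (the MSP weighting, LORETA, and MEM variants are not shown, and the regularization choice is ad hoc).

```python
# A minimal (Tikhonov-regularized) minimum-norm inverse, the simplest of the methods
# compared above; the MSP weighting, LORETA and MEM variants are not shown. The lead
# field and data are synthetic and all names are illustrative.
import numpy as np

rng = np.random.default_rng(6)
n_sensors, n_sources = 32, 500

L = rng.standard_normal((n_sensors, n_sources))        # hypothetical lead field
j_true = np.zeros(n_sources)
j_true[[40, 41, 42, 300]] = [1.0, 0.8, 0.9, 1.2]       # a small extended patch plus one focal source
y = L @ j_true + 0.1 * rng.standard_normal(n_sensors)

lam = 0.1 * np.trace(L @ L.T) / n_sensors              # regularization parameter (ad hoc choice)
j_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)

print("indices of the largest estimated sources:", np.argsort(np.abs(j_hat))[-5:])
```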

13.
Ultrasound images are very noisy. Along with system noise, a significant noise source is the speckle phenomenon, caused by interference within the viewed object. Most past approaches to denoising ultrasound images essentially blur the image, and they do not handle attenuation. We discuss an approach that neither blurs the image nor ignores attenuation. It is based on frequency compounding, in which images of the same object are acquired at different acoustic frequencies and then compounded. Existing frequency compounding methods have been based on simple averaging and have achieved only limited enhancement. The reason is that the statistical and physical characteristics of the signal and noise vary with depth, and the noise is correlated between acoustic frequencies. Hence, we suggest two spatially varying frequency compounding methods based on an understanding of these characteristics. As demonstrated in experiments, the proposed approaches suppress various noise sources and also recover attenuated objects while maintaining high resolution.
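The toy sketch below illustrates the general idea of compounding images acquired at different frequencies with depth-dependent weights rather than a plain average; it is not the authors' estimator, does not model speckle statistics, and all images and attenuation coefficients are synthetic.

```python
# A toy illustration of frequency compounding with depth-dependent weights (not the
# authors' estimator): the high-frequency image is sharper but attenuates faster with
# depth, so its weight is reduced at depth. All images and parameters are synthetic.
import numpy as np

rng = np.random.default_rng(7)
depth_px, width_px = 200, 100
depth = np.linspace(0, 1, depth_px)[:, None]            # normalized depth, 0 (near) to 1 (far)

scene = (rng.random((depth_px, width_px)) > 0.97).astype(float)   # sparse reflectors
noise = lambda: 0.05 * rng.standard_normal((depth_px, width_px))

img_low = scene * np.exp(-1.0 * depth) + noise()         # low frequency: mild attenuation
img_high = scene * np.exp(-3.0 * depth) + noise()        # high frequency: strong attenuation

# Simple averaging vs. depth-weighted compounding that favours the less attenuated image at depth.
avg = 0.5 * (img_low + img_high)
w_high = np.exp(-3.0 * depth) / (np.exp(-1.0 * depth) + np.exp(-3.0 * depth))
compounded = w_high * img_high + (1.0 - w_high) * img_low

print("deep-region mean, plain average vs. weighted:",
      avg[150:].mean().round(4), compounded[150:].mean().round(4))
```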

14.
We describe the construction of a digital brain atlas composed of manually delineated MRI data. A total of 56 structures were labeled in the MRI of 40 healthy, normal volunteers. This labeling was performed according to a set of protocols developed for this project. Pairs of raters were assigned to each structure and trained on the protocol for that structure. Each rater pair was tested for concordance on 6 of the 40 brains; once they had achieved reliability standards, they divided the task of delineating the remaining 34 brains. The data were then spatially normalized to well-known templates using 3 popular algorithms: AIR 5.2.5's nonlinear warp (Woods et al., 1998), paired with the ICBM452 Warp 5 atlas (Rex et al., 2003); FSL's FLIRT (Smith et al., 2004), paired with its own template, a skull-stripped version of the ICBM152 T1 average; and SPM5's unified segmentation method (Ashburner and Friston, 2005), paired with its canonical brain, the whole-head ICBM152 T1 average. We thus produced 3 variants of our atlas, each constructed from 40 representative samples of a data processing stream that one might use for analysis. For each normalization algorithm, the individual structure delineations were then resampled according to the computed transformations. We next computed averages at each voxel location to estimate the probability of that voxel belonging to each of the 56 structures. Each version of the atlas contains, for every voxel, probability densities for each region, thus providing a resource for automated probabilistic labeling of external data types registered into standard spaces; we also computed average intensity images and tissue density maps based on the three methods and target spaces. These atlases will serve as a resource for diverse applications, including meta-analysis of functional and structural imaging data and other bioinformatics applications where display of arbitrary labels in probabilistically defined anatomic space will facilitate both knowledge-based development and visualization of findings from multiple disciplines.
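The probability-map step lends itself to a short sketch: once each subject's labels are in a common space, the per-voxel probability of a structure is simply the across-subject mean of the corresponding binary maps. The example below uses synthetic labels and dimensions.

```python
# A minimal sketch of the probability-map step: once each subject's binary label maps
# are in a common space, the per-voxel probability of a structure is the across-subject
# mean of those maps. Subjects, shapes and structure counts here are synthetic.
import numpy as np

rng = np.random.default_rng(8)
n_subjects, shape, n_structures = 40, (16, 16, 16), 3

# labels[s] holds one integer label per voxel for subject s (0 = background).
labels = rng.integers(0, n_structures + 1, size=(n_subjects, *shape))

prob_maps = np.stack([(labels == k).mean(axis=0) for k in range(1, n_structures + 1)])
print("probability maps shape (structure, x, y, z):", prob_maps.shape)
print("probabilities sum to <= 1 everywhere:", bool((prob_maps.sum(axis=0) <= 1.0 + 1e-9).all()))
```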

15.
Correlating the activation foci identified in functional imaging studies of the human brain with structural (e.g., cytoarchitectonic) information on the activated areas is a major methodological challenge for neuroscience research. We here present a new approach for making use of three-dimensional probabilistic cytoarchitectonic maps, obtained from the analysis of human post-mortem brains, to correlate microscopical, anatomical and functional imaging data of the cerebral cortex. We introduce a new, MATLAB-based toolbox for the SPM2 software package which enables the integration of probabilistic cytoarchitectonic maps and results of functional imaging studies. The toolbox includes functionality for the construction of summary maps that combine the probabilities of several cortical areas by finding the most probable assignment of each voxel to one of these areas. Its main feature is to provide several measures defining the degree of correspondence between architectonic areas and functional foci. The software, together with the presently available probability maps, is available as open source software to the neuroimaging community. This new toolbox provides an easy-to-use tool for the integrated analysis of functional and anatomical data in a common reference space.

16.
We present a numerical method to estimate the true threshold values in random fields needed to determine the significance of apparent signals observed in noisy images. To accomplish this, a quantile estimation algorithm is applied to derive the threshold, with a predefined confidence interval, from a large number of simulated random fields. A computationally efficient method for generating the random field simulations using resampling techniques is also presented. Applying these techniques, thresholds have been determined for a large variety of parameter settings (smoothness, voxel size, brain shape, type of statistic). By means of interpolation, thresholds for additional arbitrary settings can be derived quickly without the need to run individual simulations. Compared to the parametric approach of Worsley et al. (Worsley, K.J., Marrett, S., Neelin, P., Vandal, A.C., Friston, K.J., Evans, A.C., 1996. A unified statistical approach for determining significant signals in images of cerebral activation. Hum. Brain Mapp. 4, 58-73) and Friston et al. (Friston, K.J., Frith, C.D., Liddle, P.F., Frackowiak, R.S., 1991. Comparing functional (PET) images: the assessment of significant change. J. Cereb. Blood Flow Metab. 11(4), 690-699), and to the Bonferroni approach, these optimized thresholds lead to higher levels of significance (i.e., lower p values) for a given amount of activation, especially for fields of moderate smoothness (i.e., with a relative full width at half maximum between 2 and 6). Alternatively, the threshold for a specified level of significance can be lowered. This improved statistical sensitivity is illustrated by the analysis of an actual event-related functional magnetic resonance imaging data set, and its limitations are tested by determining the false positive rate with experimental MR noise data. The grid of estimated threshold values, as well as the interpolation algorithm to derive thresholds for arbitrary parameter settings, is available over the internet (http://neuro2.med.uni-magdeburg.de/quantile_estimation).
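A bare-bones Monte Carlo version of this idea is sketched below: simulate many smoothed Gaussian fields, record the maximum statistic of each, and take a quantile as the family-wise threshold. The authors' resampling scheme and interpolation grid are not reproduced, and field size, smoothness, and simulation count are arbitrary.

```python
# A Monte Carlo sketch of maximum-statistic threshold estimation for a smoothed
# Gaussian field (the authors' resampling scheme and interpolation grid are not
# reproduced). Field size, smoothness and the number of simulations are arbitrary.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(9)
shape, fwhm_vox, n_sim, alpha = (32, 32, 32), 4.0, 500, 0.05
sigma = fwhm_vox / 2.3548                      # FWHM -> Gaussian kernel sigma

maxima = np.empty(n_sim)
for i in range(n_sim):
    field = gaussian_filter(rng.standard_normal(shape), sigma)
    field /= field.std()                       # rescale to unit variance (approximate z-field)
    maxima[i] = field.max()

threshold = np.quantile(maxima, 1.0 - alpha)   # family-wise threshold at level alpha
print(f"estimated z threshold for alpha={alpha}: {threshold:.2f}")
```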

17.
Rowe DB, Logan BR. NeuroImage 2004;23(3):1078-1092.
In functional magnetic resonance imaging, voxel time courses after Fourier or non-Fourier "image reconstruction" are complex valued as a result of phase imperfections due to magnetic field inhomogeneities. Nearly all fMRI studies derive functional "activation" based on magnitude voxel time courses [Bandettini, P., Jesmanowicz, A., Wong, E., Hyde, J.S., 1993. Processing strategies for time-course data sets in functional MRI of the human brain. Magn. Reson. Med. 30 (2): 161-173 and Cox, R.W., Jesmanowicz, A., Hyde, J.S., 1995. Real-time functional magnetic resonance imaging. Magn. Reson. Med. 33 (2): 230-236]. Here, we propose to directly model the entire complex or bivariate data rather than just the magnitude-only data. A nonlinear multiple regression model is used to model activation of the complex signal, and a likelihood ratio test is derived to determine activation in each voxel. We investigate the performance of the model on a real dataset, then compare the magnitude-only and complex models under varying signal-to-noise ratios in a simulation study with varying activation contrast effects.
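To make the voxel-wise test concrete, the toy sketch below computes a likelihood-ratio activation statistic on synthetic complex-valued data; for simplicity it regresses the real and imaginary parts separately (a linear, unrestricted-phase stand-in) rather than fitting the paper's constant-phase nonlinear model.

```python
# A toy likelihood-ratio activation test on complex-valued voxel data. For simplicity
# the real and imaginary parts are regressed separately (an unrestricted-phase linear
# stand-in), not the constant-phase nonlinear model of the paper. Data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
n = 200
task = (np.arange(n) // 20) % 2                       # boxcar task regressor
X_full = np.column_stack([np.ones(n), task])          # baseline + task
X_null = X_full[:, :1]                                # baseline only

phase = 0.3
signal = 100.0 + 2.0 * task                           # task-related magnitude change
y = signal * np.exp(1j * phase) + rng.standard_normal(n) + 1j * rng.standard_normal(n)

def rss(X, y):
    """Residual sum of squares over real and imaginary parts."""
    beta_re, *_ = np.linalg.lstsq(X, y.real, rcond=None)
    beta_im, *_ = np.linalg.lstsq(X, y.imag, rcond=None)
    return np.sum((y.real - X @ beta_re) ** 2) + np.sum((y.imag - X @ beta_im) ** 2)

lrt = 2 * n * np.log(rss(X_null, y) / rss(X_full, y))   # -2 log likelihood ratio
p = stats.chi2.sf(lrt, df=2)                             # 2 extra parameters (real and imaginary task betas)
print(f"LRT = {lrt:.1f}, p = {p:.2e}")
```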

18.
Research on the cortical sources of nociceptive laser-evoked brain potentials (LEPs) began almost two decades ago (Tarkka and Treede, 1993). Whereas there is a large consensus on the sources of the late part of the LEP waveform (N2 and P2 waves), the relative contribution of the primary somatosensory cortex (S1) to the early part of the LEP waveform (N1 wave) is still debated. To address this issue, we recorded LEPs elicited by the stimulation of four limbs in a large population (n = 35). Early LEP generators were estimated both at the single-subject and group level, using three different approaches: distributed source analysis, dipolar source modeling, and probabilistic independent component analysis (ICA). We show that the scalp distribution of the earliest LEP response to hand stimulation was maximal over the central-parietal electrodes contralateral to the stimulated side, while that of the earliest LEP response to foot stimulation was maximal over the central-parietal midline electrodes. Crucially, all three approaches indicated hand and foot S1 areas as generators of the earliest LEP response. Altogether, these findings indicate that the earliest part of the scalp response elicited by a selective nociceptive stimulus is largely explained by activity in the contralateral S1, with negligible contribution from the secondary somatosensory cortex (S2).

19.
Deneux T, Faugeras O. NeuroImage 2006;32(4):1669-1689.
There is increasing interest in using physiologically plausible models in fMRI analysis. These models raise new mathematical problems in terms of parameter estimation and interpretation of the measured data. In this paper, we show how to use physiological models to map and analyze brain activity from fMRI data. We describe a maximum likelihood parameter estimation algorithm and a statistical test that together allow two things: selecting the most statistically significant hemodynamic model for the measured data and deriving activation maps based on that model. Furthermore, as parameter estimation may leave considerable uncertainty about the exact values of the parameters, model identifiability characterization is a particular focus of our work. We applied these methods to different variations of the Balloon Model (Buxton, R.B., Wong, E.C., and Frank, L.R. 1998. Dynamics of blood flow and oxygenation changes during brain activation: the balloon model. Magn. Reson. Med. 39: 855-864; Buxton, R.B., Uludağ, K., Dubowitz, D.J., and Liu, T.T. 2004. Modelling the hemodynamic response to brain activation. NeuroImage 23: 220-233; Friston, K.J., Mechelli, A., Turner, R., and Price, C.J. 2000. Nonlinear responses in fMRI: the balloon model, Volterra kernels, and other hemodynamics. NeuroImage 12: 466-477) in a visual perception checkerboard experiment. Our model selection showed that hemodynamic models explain the BOLD response better than linear convolution, in particular because they are able to capture features such as the poststimulus undershoot and nonlinear effects. On the other hand, nonlinear and linear models become comparable as the signals get noisier, which explains why activation maps obtained in both frameworks are comparable. The tools we have developed demonstrate that statistical inference methods used in the framework of the General Linear Model can be generalized to nonlinear models.
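To make the generative model concrete, the sketch below forward-simulates one common balloon-model variant (roughly the Friston et al., 2000 parameterization) and the associated BOLD signal; the parameter values are ordinary literature choices treated as assumptions, and the paper's maximum-likelihood estimation and model selection are not reproduced.

```python
# A forward simulation of one common balloon-model variant (roughly the Friston et al.
# 2000 parameterization), shown only to make the generative model concrete; the
# paper's maximum-likelihood estimation and model-selection machinery are not
# reproduced, and the parameter values below are ordinary literature choices.
import numpy as np
from scipy.integrate import solve_ivp

eps, tau_s, tau_f, tau_0, alpha, E0, V0 = 0.5, 0.8, 0.4, 1.0, 0.32, 0.34, 0.02

def stimulus(t):
    return 1.0 if 2.0 <= t <= 12.0 else 0.0          # a 10 s block of stimulation

def balloon(t, x):
    s, f, v, q = x                                    # vasodilatory signal, flow, volume, deoxyhemoglobin
    ds = eps * stimulus(t) - s / tau_s - (f - 1.0) / tau_f
    df = s
    dv = (f - v ** (1.0 / alpha)) / tau_0
    dq = (f * (1.0 - (1.0 - E0) ** (1.0 / f)) / E0 - q * v ** (1.0 / alpha - 1.0)) / tau_0
    return [ds, df, dv, dq]

t_eval = np.linspace(0, 30, 301)
sol = solve_ivp(balloon, (0, 30), [0.0, 1.0, 1.0, 1.0], t_eval=t_eval, max_step=0.1)
s, f, v, q = sol.y

k1, k2, k3 = 7.0 * E0, 2.0, 2.0 * E0 - 0.2            # standard BOLD observation constants
bold = V0 * (k1 * (1.0 - q) + k2 * (1.0 - q / v) + k3 * (1.0 - v))
print("peak BOLD change: %.4f, post-stimulus minimum: %.4f" % (bold.max(), bold.min()))
```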

20.
Cortical rhythmic activity is increasingly employed for characterizing human brain function. Using MEG, it is possible to localize the generators of these rhythms. Traditionally, the source locations have been estimated using sequential dipole modeling. Recently, two new methods for localizing rhythmic activity have been developed: Dynamic Imaging of Coherent Sources (DICS) and frequency-domain Minimum Current Estimation (MCE-FD). With new analysis methods emerging, the researcher faces the problem of choosing an appropriate strategy. The aim of this study was to compare the performance and reliability of these three methods. The evaluation was performed using measured data from four healthy subjects, as well as simulations of rhythmic activity. We found that the methods gave comparable results and that all three approaches localized the principal sources of oscillatory activity very well. Dipole modeling is a very powerful tool once appropriate subsets of sensors have been selected. MCE-FD provides simultaneous localization of sources and was found to give a good overview of the data. With DICS, it was possible to separate nearby sources that were not retrieved by the other two methods.
