Similar Articles
20 similar articles retrieved.
1.
We have developed a novel probabilistic model that estimates neural source activity measured by MEG and EEG data while suppressing the effect of interference and noise sources. The model estimates contributions to sensor data from evoked sources, interference sources and sensor noise using Bayesian methods and by exploiting knowledge about their timing and spatial covariance properties. Full posterior distributions are computed rather than just the MAP estimates. In simulation, the algorithm can accurately localize and estimate the time courses of several simultaneously active dipoles, with rotating or fixed orientation, at noise levels typical for averaged MEG data. The algorithm even performs reasonably at noise levels typical of an average of just a few trials. The algorithm is superior to beamforming techniques, which we show to be an approximation to our graphical model, in estimation of temporally correlated sources. The success of this algorithm on MEG data for localizing bilateral auditory cortex, low-SNR somatosensory activations, and an epileptic spike source is also demonstrated.

2.
We present two related probabilistic methods for neural source reconstruction from MEG/EEG data that reduce effects of interference, noise, and correlated sources. Both methods localize source activity using a linear mixture of temporal basis functions (TBFs) learned from the data. In contrast to existing methods that use predetermined TBFs, we compute TBFs from data using a graphical factor analysis based model [Nagarajan, S.S., Attias, H.T., Hild, K.E., Sekihara, K., 2007a. A probabilistic algorithm for robust interference suppression in bioelectromagnetic sensor data. Stat Med 26, 3886-3910], which separates evoked or event-related source activity from ongoing spontaneous background brain activity. Both algorithms compute an optimal weighting of these TBFs at each voxel to provide a spatiotemporal map of activity across the brain and a source image map from the likelihood of a dipole source at each voxel. We explicitly model, with two different robust parameterizations, the contribution from signals outside a voxel of interest. The two models differ in a trade-off of computational speed versus accuracy of learning the unknown interference contributions. Performance in simulations and real data, both with large noise and interference and/or correlated sources, demonstrates significant improvement over existing source localization methods.

3.
The feasibility of linear normalization of child brain images with structural abnormalities due to periventricular leukomalacia (PVL) was assessed in terms of success rate and accuracy of the normalization algorithm. Ten T1-weighted brain images from healthy adult subjects and 51 from children (4-11 years of age) were linearly transformed to achieve spatial registration with the standard MNI brain template. Twelve of the child brain images were radiologically normal, 22 showed PVL and 17 showed PVL with additional enlargement of the lateral ventricles. The effects of simple modifications to the normalization process were evaluated: changing the initial orientation and zoom parameters, masking non-brain areas, smoothing the images and using a pediatric template instead of the MNI template. Normalization failure was reduced by changing the initial zoom parameters and by removing background noise. The overall performance of the normalization algorithm was only improved when background noise was removed from the images. The results show that linear normalization of PVL-affected brain images is feasible.

4.
Functional magnetic resonance imaging (fMRI) signal changes can be separated from background noise by various processing algorithms, including the well-known deconvolution method. However, discriminating signal changes due to task-related brain activities from those due to task-related head motion or other artifacts correlated in time to the task has been little addressed. We examine whether three exploratory fractal scaling analyses correctly classify these possibilities by capturing temporal self-similarity; namely, fluctuation analysis, wavelet multi-resolution analysis, and detrended fluctuation analysis (DFA). We specifically evaluate whether these fractal analytic methods can be effective and reliable in discriminating activations from artifacts. DFA is indeed robust for such classification. Brain activation maps derived by DFA are similar, but not identical, to maps derived by deconvolution. Deconvolution explicitly utilizes task timing to extract the signals whereas DFA does not, so these methods reveal somewhat different information from the data. DFA is better than deconvolution for distinguishing fMRI activations from task-related artifacts, although a combination of these approaches is superior to either one taken alone. We also present a method for estimating noise levels in fMRI data, validated with numerical simulations suggesting that Birn's model is effective for simulating fMRI signals. Simulations further corroborate that DFA is excellent at discriminating signal changes due to task-related brain activities from those due to task-related artifacts, under a range of conditions.
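The DFA step described above follows a standard recipe: integrate the mean-removed signal, detrend it within windows of increasing size, and fit the log-log slope of fluctuation versus window size. A minimal pure-Python sketch of that recipe (not the authors' implementation; the window sizes and the white-noise test signal are chosen only for illustration):

```python
import math
import random

def dfa_alpha(x, scales=(8, 16, 32, 64, 128)):
    """Detrended fluctuation analysis: return the scaling exponent
    from a log-log fit of RMS fluctuation F(n) versus window size n."""
    mean = sum(x) / len(x)
    profile, s = [], 0.0
    for v in x:                      # integrate the mean-removed signal
        s += v - mean
        profile.append(s)
    log_n, log_f = [], []
    for n in scales:
        resid_sq, count = 0.0, 0
        for start in range(0, len(profile) - n + 1, n):
            w = profile[start:start + n]
            t = range(n)
            tm, wm = (n - 1) / 2.0, sum(w) / n
            denom = sum((ti - tm) ** 2 for ti in t)
            slope = sum((ti - tm) * (wi - wm) for ti, wi in zip(t, w)) / denom
            inter = wm - slope * tm
            # accumulate squared residuals after linear detrending
            resid_sq += sum((wi - (slope * ti + inter)) ** 2
                            for ti, wi in zip(t, w))
            count += n
        log_n.append(math.log(n))
        log_f.append(0.5 * math.log(resid_sq / count))
    # slope of log F(n) versus log n is the DFA exponent alpha
    nm, fm = sum(log_n) / len(log_n), sum(log_f) / len(log_f)
    return (sum((a - nm) * (b - fm) for a, b in zip(log_n, log_f))
            / sum((a - nm) ** 2 for a in log_n))

random.seed(0)
white = [random.gauss(0.0, 1.0) for _ in range(4096)]
alpha = dfa_alpha(white)  # near 0.5 for uncorrelated noise
```

For uncorrelated noise the exponent sits near 0.5; temporally self-similar, 1/f-like signals push it toward 1.0, which is the property the classification above exploits.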

5.
We propose a novel algorithm for voxel-by-voxel compartment model analysis based on a maximum a posteriori (MAP) algorithm. Voxel-by-voxel compartment model analysis can derive functional images of living tissues, but it suffers from high noise statistics in voxel-based PET data and extended calculation times. We initially set up a feature space of the target radiopharmaceutical composed of a measured plasma time activity curve and a set of compartment model parameters, and measured the noise distribution of the PET data. The dynamic PET data were projected onto the feature space, and then clustered using the Mahalanobis distance. Our method was validated using simulation studies, and compared with ROI-based ordinary kinetic analysis for FDG. The parametric images exhibited an acceptable linear relation with the simulations and the ROI-based results, and the calculation took about 10 min. We therefore concluded that our proposed MAP-based algorithm is practical.
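The clustering step above relies on the Mahalanobis distance, which weights each feature direction by the inverse covariance of the cluster. A minimal 2-D sketch of that distance (illustrative only; the paper's actual feature space has more dimensions):

```python
import math

def mahalanobis_2d(x, mean, cov):
    """Mahalanobis distance sqrt((x-mean)^T cov^{-1} (x-mean)) for a
    2-D point, with cov given as a 2x2 matrix [[a, b], [b, c]]."""
    dx, dy = x[0] - mean[0], x[1] - mean[1]
    a, b = cov[0]
    b2, c = cov[1]
    det = a * c - b * b2
    # closed-form inverse of a 2x2 matrix
    inv = [[c / det, -b / det], [-b2 / det, a / det]]
    d2 = (dx * (inv[0][0] * dx + inv[0][1] * dy)
          + dy * (inv[1][0] * dx + inv[1][1] * dy))
    return math.sqrt(d2)

# With an identity covariance this reduces to the Euclidean distance.
d = mahalanobis_2d((3.0, 4.0), (0.0, 0.0), [[1.0, 0.0], [0.0, 1.0]])
```

Using the measured noise covariance instead of the identity is what makes the clustering robust to the high, anisotropic noise of voxel-based PET data.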

6.
Jong Geun Park, Chulhee Lee. NeuroImage 2009, 47(4):1394-1407
In this paper, we propose a new skull stripping method for T1-weighted magnetic resonance (MR) brain images. Skull stripping has played an important role in neuroimage research because it is a basic preliminary step in many clinical applications. The process of skull stripping can be challenging due to the complexity of the human brain, variable parameters of MR scanners, individual characteristics, etc. In this paper, we aim to develop a computationally efficient and robust method. In the proposed algorithm, after eliminating the background voxels with histogram analysis, two seed regions of the brain and non-brain regions were automatically identified using a mask produced by morphological operations. Then we expanded these seed regions with a 2D region growing algorithm based on general brain anatomy information. The proposed algorithm was validated using 56 volumes of human brain data and simulated phantom data with manually segmented masks. It was compared with two popular automated skull stripping methods: the brain surface extractor (BSE) and the brain extraction tool (BET). The experimental results showed that the proposed algorithm produced accurate and stable results against data sets acquired from various MR scanners and effectively addressed difficult problems such as low contrast and large anatomical connections between the brain and surrounding tissues. The proposed method was also robust against noise, RF, and intensity inhomogeneities.
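The region-growing step above can be sketched with a simple intensity-similarity rule (a generic 4-connected region grower on a toy 2-D array; the paper's anatomy-informed criteria are not modeled here):

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from a seed pixel: add 4-connected neighbours
    whose intensity differs from the seed intensity by at most tol."""
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    grown = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in grown
                    and abs(image[nr][nc] - seed_val) <= tol):
                grown.add((nr, nc))
                queue.append((nr, nc))
    return grown

# Toy "image": a bright 2x2 blob on a dark background.
img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
region = region_grow(img, (1, 1), tol=1)  # grows over the four 9s only
```

In the actual method the brain and non-brain seeds grow competitively and the similarity rule is informed by brain anatomy; this sketch only shows the basic queue-driven expansion.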

7.
Obstructive sleep apnea (OSA) is a highly prevalent disease in which upper airways are collapsed during sleep, leading to serious consequences. Snoring is the earliest symptom of OSA, but its potential in clinical diagnosis is not fully recognized yet. The first task in the automatic analysis of snore-related sounds (SRS) is to segment the SRS data as accurately as possible into three main classes: snoring (voiced non-silence), breathing (unvoiced non-silence) and silence. SRS data are generally contaminated with background noise. In this paper, we present classification performance of a new segmentation algorithm based on pattern recognition. We considered four features derived from SRS to classify samples of SRS into three classes. The features--number of zero crossings, energy of the signal, normalized autocorrelation coefficient at 1 ms delay and the first predictor coefficient of linear predictive coding (LPC) analysis--in combination were able to achieve a classification accuracy of 90.74% in classifying a set of test data. We also investigated the performance of the algorithm when three commonly used noise reduction (NR) techniques in speech processing--amplitude spectral subtraction (ASS), power spectral subtraction (PSS) and short time spectral amplitude (STSA) estimation--are used for noise reduction. We found that noise reduction together with a proper choice of features could improve the classification accuracy to 96.78%, making the automated analysis a possibility.
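Three of the four features listed above are simple per-frame statistics. A sketch of how they are computed (the sine frame stands in for a voiced snore segment; frame length and lag are arbitrary here, not the paper's 1 ms delay at a specific sampling rate):

```python
import math

def zero_crossings(frame):
    """Count sign changes between consecutive samples."""
    return sum(1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0))

def frame_energy(frame):
    """Sum of squared samples in the frame."""
    return sum(v * v for v in frame)

def norm_autocorr(frame, lag):
    """Autocorrelation at `lag` samples, normalised by frame energy."""
    e = frame_energy(frame)
    if e == 0:
        return 0.0
    return sum(a * b for a, b in zip(frame, frame[lag:])) / e

# A periodic (voiced) frame shows few zero crossings and a high
# short-lag autocorrelation; a noise-like (unvoiced) frame shows
# many crossings and a low autocorrelation.
voiced = [math.sin(2 * math.pi * 0.01 * n) for n in range(200)]
zc = zero_crossings(voiced)    # a handful of crossings
ac = norm_autocorr(voiced, 8)  # high for a periodic frame
```

Combining such features in a classifier is what separates the voiced, unvoiced, and silence classes; the LPC predictor coefficient (the fourth feature) requires a Levinson-Durbin recursion and is omitted from this sketch.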

8.
Xie X, Cao Z, Weng X. NeuroImage 2008, 40(4):1672-1685
In this work, the spatiotemporal nonlinearity in resting-state fMRI data of the human brain was detected by nonlinear dynamics methods. Nine human subjects during resting state were imaged using single-shot gradient echo planar imaging on a 1.5T scanner. Eigenvalue spectra for the covariance matrix, correlation dimensions and Spatiotemporal Lyapunov Exponents were calculated to detect the spatiotemporal nonlinearity in resting-state fMRI data. By simulating, adjusting, and comparing the eigenvalue spectra of pure correlated noise with the corresponding real fMRI data, the intrinsic dimensionality was estimated. The intrinsic dimensionality was used to extract the first few principal components from the real fMRI data using Principal Component Analysis, which preserves the correct phase dynamics while reducing both the computational load and the noise level of the data. The phase space was then reconstructed using the time-delay embedding method for the principal components, and the correlation dimension was estimated by the Grassberger-Procaccia algorithm for multiple variable series. The Spatiotemporal Lyapunov Exponents were calculated using a method based on coupled map lattices. Nonlinearity testing showed significant differences in correlation dimensions and Spatiotemporal Lyapunov Exponents between the fMRI data and their surrogate data. The fractal dimension and the positive Spatiotemporal Lyapunov Exponents characterize the spatiotemporal nonlinear dynamics of resting-state fMRI data. The results therefore suggest that the fluctuations present in the resting state may be an inherent mode of basal neural activation of the human brain and cannot be fully attributed to noise.
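The phase-space reconstruction mentioned above uses time-delay embedding, which maps a scalar time series into vectors of lagged samples. A minimal sketch (embedding dimension and delay are arbitrary example values; choosing them properly is its own problem):

```python
def embed(x, dim, delay):
    """Time-delay embedding: map a scalar series into dim-dimensional
    vectors (x[t], x[t+delay], ..., x[t+(dim-1)*delay])."""
    span = (dim - 1) * delay
    return [tuple(x[t + k * delay] for k in range(dim))
            for t in range(len(x) - span)]

series = [0.1, 0.4, 0.9, 0.2, 0.6, 0.8, 0.3]
vectors = embed(series, dim=3, delay=2)  # three 3-D state vectors
```

The Grassberger-Procaccia correlation dimension is then estimated from how the number of close pairs among these embedded vectors scales with distance; that scaling analysis is beyond this sketch.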

9.
We have developed a fuzzy logic-based algorithm to qualify the reliability of heart rate (HR) and respiratory rate (RR) vital-sign time-series data by assigning a confidence level to the data points while they are measured as a continuous data stream. The algorithm's membership functions are derived from physiology-based performance limits and mass-assignment-based data-driven characteristics of the signals. The assigned confidence levels are based on the reliability of each HR and RR measurement as well as the relationship between them. The algorithm was tested on HR and RR data collected from subjects undertaking a range of physical activities, and it showed acceptable performance in detecting four types of faults that result in low-confidence data points (receiver operating characteristic areas under the curve ranged from 0.67 (SD 0.04) to 0.83 (SD 0.03), mean and standard deviation (SD) over all faults). The algorithm is sensitive to noise in the raw HR and RR data and will flag many data points as low confidence if the data are noisy; prior processing of the data to reduce noise allows identification of only the most substantial faults. Depending on how HR and RR data are processed, the algorithm can be applied as a tool to evaluate sensor performance or to qualify HR and RR time-series data in terms of their reliability before use in automated decision-assist systems.
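A physiology-based membership function of the kind described above can be sketched with a trapezoid (the numeric heart-rate limits below are hypothetical stand-ins, not the paper's calibrated values):

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 below a, ramps up on [a, b],
    1 on [b, c], ramps down on [c, d], 0 above d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical physiology-based limits for heart rate (beats/min):
# full confidence between 40 and 180, fading to zero outside 30-220.
hr_conf = trapezoid(70.0, 30, 40, 180, 220)   # plausible resting HR
bad_conf = trapezoid(25.0, 30, 40, 180, 220)  # physiologically implausible
```

The actual algorithm combines several such memberships (per-signal limits plus the HR/RR relationship) into one confidence level per data point; this sketch shows only a single membership evaluation.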

10.
Visible-light optical coherence tomography (vis-OCT) has enabled new spectroscopic applications, such as retinal oximetry, as a result of increased optical absorption and scattering contrasts in biological tissue and improved axial resolution. Besides extracting tissue properties from back-scattered light, spectroscopic analyses must consider spectral alterations induced by image reconstruction itself. We investigated an intrinsic spectral bias in the background noise floor, referred to here as the spectrally-dependent background (SDBG). We developed an analytical model to predict the SDBG-induced bias and validated this model using numerically simulated and experimentally acquired data. We found that SDBG systematically altered the measured spectra of blood in human retinal vessels in vis-OCT, as compared to literature data. We provide solutions to quantify and compensate for SDBG in retinal oximetry. This work is particularly significant for clinical applications of vis-OCT.

11.
We present an algorithm that provides a partial volume segmentation of a T1-weighted image of the brain into gray matter, white matter and cerebrospinal fluid. The algorithm incorporates a non-uniform partial volume density that takes the curved nature of the cortex into account. The pure gray and white matter intensities are estimated from the image, using scanner noise and cortical partial volume effects. Expected tissue fractions are subsequently computed in each voxel. The algorithm has been tested for reliability, correct estimation of the pure tissue intensities on both real (repeated) MRI data and on simulated (brain) images. Intra-class correlation coefficients (ICCs) were above 0.93 for all volumes of the three tissue types for repeated scans from the same scanner, as well as for scans with different voxel sizes from different scanners with different field strengths. The implementation of our non-uniform partial volume density provided more reliable volumes and tissue fractions, compared to a uniform partial volume density. Applying the algorithm to simulated images showed that the pure tissue intensities were estimated accurately. Variations in cortical thickness did not influence the accuracy of the volume estimates, which is a valuable property when studying (possible) group differences. In conclusion, we have presented a new partial volume segmentation algorithm that allows for comparisons over scanners and voxel sizes.
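The idea of a tissue fraction per voxel can be illustrated with the simplest two-tissue linear mixing model (a sketch only: the intensities below are invented, and the paper's non-uniform partial volume density is not modeled):

```python
def tissue_fraction(intensity, pure_a, pure_b):
    """Fraction of tissue A in a voxel under a two-tissue linear
    mixing model: intensity = f*pure_a + (1 - f)*pure_b,
    with f clamped to [0, 1]."""
    f = (intensity - pure_b) / (pure_a - pure_b)
    return min(1.0, max(0.0, f))

# With a (hypothetical) pure-GM intensity of 60 and pure-WM intensity
# of 100, a voxel measured at 70 is three-quarters gray matter.
f_gm = tissue_fraction(70.0, 60.0, 100.0)
```

The algorithm above goes further by estimating the pure intensities from the image itself and by replacing the implicit uniform prior on f with a curvature-aware, non-uniform density.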

12.
On the vertical accuracy of the ALOS world 3D-30m digital elevation model
In this contribution, we assess the vertical accuracy of the Advanced Land Observing Satellite (ALOS) World 3D 30 m (AW3D30) digital elevation model (DEM) using the runway method (RWYM). The RWYM utilizes the longitudinal profiles of runways, which are reliable and ubiquitous reference data. The reference dataset used in this project consists of 36 runways located at various points throughout the world. We found that AW3D30 has a remarkably low root mean square error (RMSE) of 1.78 m (one sigma). However, analysis of the results revealed that it also contains a widespread elevation anomaly. We conclude that this anomaly is the result of uncompensated sensor noise and the data processing algorithm. We also note that traditional accuracy assessment of a DEM does not allow for identification of this type of anomaly.
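The headline figure above is an RMSE between DEM elevations and reference profiles. As a reminder of what that statistic is, a minimal sketch (the elevation values are invented for the example):

```python
import math

def vertical_rmse(dem_heights, reference_heights):
    """Root mean square error between DEM samples and reference
    elevations taken at the same locations."""
    n = len(dem_heights)
    return math.sqrt(sum((d - r) ** 2
                         for d, r in zip(dem_heights, reference_heights)) / n)

# Toy profile: DEM elevations sampled along a runway versus a flat
# surveyed reference at 100.0 m.
dem = [101.2, 100.8, 99.5, 100.1]
ref = [100.0, 100.0, 100.0, 100.0]
rmse = vertical_rmse(dem, ref)
```

The runway method exploits the fact that runway profiles are smooth and precisely surveyed, so residuals along them expose both random error (captured by the RMSE) and systematic anomalies (which the RMSE alone hides, as the abstract notes).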

13.
Objective: To investigate how to eliminate interference during the application of cDNA microarray technology, ensure analytical quality, and obtain valid data for further analysis. Methods: A cDNA microarray of 10,368 genes was used to study overall trends in gene expression of human coronary artery endothelial cells and to compare different functional groups. Interfering factors (non-specific background, spotting-uniformity error, non-specific mismatches, homologous-tissue mismatches, and others) were analyzed and excluded step by step to obtain valid processed data. Results: Across three replicate experiments, the 10,368-spot microarray data excluded 89, 132, and 116 hybridization spots below the non-specific background fluorescence intensity; 281, 368, and 220 uncertain hybridization reactions covering less than 10% of the fluorescence hybridization area; and 1113, 1026, and 1096 non-specific cross-species mismatches, followed by reproducibility screening. This yielded relatively valid data: 323 genes expressed more than 1.5-fold above the reference total RNA, 1192 expressed more than 1.5-fold below it, and 2036 within the 1.5-fold range. Conclusion: Microarray technology is affected by many factors in application; with effective methods to exclude interfering factors, reasonably objective results can be obtained, which is a necessary prerequisite for further analysis.

14.
The signal-to-noise ratio of high-speed fluorescence microscopy is heavily influenced by photon counting noise and sensor noise due to the expected low photon budget. Denoising algorithms are developed to decrease these noise fluctuations in microscopy data by incorporating additional knowledge or assumptions about imaging systems or biological specimens. This raises the question of whether a theoretical precision limit exists for the performance of a microscopy denoising algorithm. In this paper, combining the Cramér-Rao Lower Bound with constraints and the low-pass-filter property of microscope systems, we develop a method to calculate a theoretical variance lower bound for microscopy image denoising. We show that this lower bound is influenced by photon count, readout noise, detection wavelength, effective pixel size and the numerical aperture of the microscope system. We demonstrate our development by comparing multiple state-of-the-art denoising algorithms to this bound. This method establishes a framework to generate a theoretical performance limit, under specific prior knowledge or assumptions, as a reference benchmark for microscopy denoising algorithms.
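The flavor of such a bound can be shown in the simplest setting: estimating a constant Poisson photon rate from independent pixel samples. This is a textbook Cramér-Rao calculation, not the paper's constrained, optics-aware bound; the readout-noise variant uses a common Gaussian approximation and is likewise an illustrative assumption:

```python
def poisson_crlb(photon_mean, n_samples):
    """Cramér-Rao lower bound on the variance of an unbiased estimate
    of a Poisson photon rate lambda from n independent samples.
    Fisher information per sample is 1/lambda, so the bound is
    lambda / n."""
    return photon_mean / n_samples

def approx_crlb_with_readout(photon_mean, read_sigma, n_samples):
    """Same bound under a Gaussian approximation where readout noise
    (variance read_sigma^2) adds to the shot-noise variance."""
    return (photon_mean + read_sigma ** 2) / n_samples

bound = poisson_crlb(100.0, 50)                          # shot-noise only
noisy_bound = approx_crlb_with_readout(100.0, 2.0, 50)   # with readout noise
```

Even this toy version shows the qualitative dependence the abstract lists: more photons per pixel or more averaged samples tighten the bound, while readout noise loosens it.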

15.
In this article we introduce the DRIFTER algorithm, a new model-based Bayesian method for retrospective elimination of physiological noise from functional magnetic resonance imaging (fMRI) data. In the method, we first estimate the frequency trajectories of the physiological signals with the interacting multiple models (IMM) filter algorithm. The frequency trajectories can be estimated from external reference signals or, if the temporal resolution is high enough, from the fMRI data. The estimated frequency trajectories are then used in a state space model in combination with a Kalman filter (KF) and a Rauch-Tung-Striebel (RTS) smoother, which separates the signal into an activation-related cleaned signal, physiological noise, and white measurement noise components. Using experimental data, we show that the method outperforms the RETROICOR algorithm if the shape and amplitude of the physiological signals change over time.
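The predict/correct cycle at the heart of the Kalman filtering step can be sketched in scalar form (a random-walk state observed in white noise; far simpler than DRIFTER's multi-component state-space model, and the process/measurement variances below are made-up example values):

```python
def kalman_1d(measurements, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state observed in white
    noise: x_k = x_{k-1} + w_k (variance q), z_k = x_k + v_k (variance r)."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += q                # predict: state uncertainty grows by q
        k = p / (p + r)       # Kalman gain
        x += k * (z - x)      # correct with the innovation z - x
        p *= (1.0 - k)        # posterior variance shrinks
        estimates.append(x)
    return estimates

# A constant 5.0 level observed in noise: the estimate converges
# toward the level as the gain settles.
zs = [5.3, 4.8, 5.1, 4.9, 5.2, 5.0, 4.95, 5.05]
est = kalman_1d(zs, q=0.001, r=0.25)
```

DRIFTER runs this recursion forward and then applies the RTS smoother backward over the same state-space model, which is what allows the signal, physiological, and measurement-noise components to be separated retrospectively.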

16.
Fluorescence microscopy images are inevitably contaminated by background intensity contributions. Fluorescence from out-of-focus planes and scattered light are important sources of slowly varying, low spatial frequency background, whereas background varying from pixel to pixel (high frequency noise) is introduced by the detection system. Here we present a powerful, easy-to-use software, wavelet-based background and noise subtraction (WBNS), which effectively removes both of these components. To assess its performance, we apply WBNS to synthetic images and compare the results quantitatively with the ground truth and with images processed by other background removal algorithms. We further evaluate WBNS on real images taken with a light-sheet microscope and a super-resolution stimulated emission depletion microscope. For both cases, we compare the WBNS algorithm with hardware-based background removal techniques and present a quantitative assessment of the results. WBNS shows an excellent performance in all these applications and significantly enhances the visual appearance of fluorescence images. Moreover, it may serve as a pre-processing step for further quantitative analysis.

17.
In the frame of 3D medical imaging, accurate segmentation of multimodal brain MR images is of interest for many brain disorders. However, due to several factors such as noise, imaging artifacts, intrinsic tissue variation and partial volume effects, tissue classification remains a challenging task. In this paper, we present a unifying framework for unsupervised segmentation of multimodal brain MR images that includes partial volume effects, bias field correction, and information given by a probabilistic atlas. The proposed method takes into account neighborhood information using a Hidden Markov Chain (HMC) model. Due to the limited resolution of imaging devices, voxels may be composed of a mixture of different tissue types; this partial volume effect is modeled to achieve an accurate segmentation of brain tissues. Instead of assigning each voxel to a single tissue class (i.e., hard classification), we compute the relative amount of each pure tissue class in each voxel (mixture estimation). Further, a bias field estimation step is added to the proposed algorithm to correct intensity inhomogeneities. Atlas priors were incorporated using a probabilistic brain atlas containing prior expectations about the spatial localization of different tissue classes. This atlas is considered as a complementary sensor, and the proposed method is extended to multimodal brain MRI without any user-tunable parameter (unsupervised algorithm). To validate this new unifying framework, we present experimental results on both synthetic and real brain images for which the ground truth is available. Comparison with other commonly used techniques demonstrates the accuracy and the robustness of this new Markovian segmentation scheme.

18.
Shan ZY, Yue GH, Liu JZ. NeuroImage 2002, 17(3):1587-1598
Current semiautomated magnetic resonance (MR)-based brain segmentation and volume measurement methods are complex and not sufficiently accurate for certain applications. We have developed a simpler, more accurate automated algorithm for whole-brain segmentation and volume measurement in T1-weighted, three-dimensional MR images. This histogram-based brain segmentation (HBRS) algorithm is based on histograms and simple morphological operations. The algorithm's three steps are foreground/background thresholding, disconnection of brain from skull, and removal of residue fragments (sinus, cerebrospinal fluid, dura, and marrow). Brain volume was measured by counting the number of brain voxels. Accuracy was determined by applying HBRS to both simulated and real MR data. Comparing the brain volume rendered by HBRS with the volume on which the simulation is based, the average error was 1.38%. By applying HBRS to 20 normal MR data sets downloaded from the Internet Brain Segmentation Repository and comparing them with expert-segmented data, the average Jaccard similarity was 0.963 and the kappa index was 0.981. The reproducibility of brain volume measurements was assessed by comparing data from two sessions (four total data sets) with human volunteers. Intrasession variability of brain volumes for sessions 1 and 2 was 0.55 +/- 0.56% and 0.74 +/- 0.56%, respectively; the mean difference between the two sessions was 0.60 +/- 0.46%. These results show that the HBRS algorithm is a simple, fast, and accurate method to determine brain volume with high reproducibility. This algorithm may be applied to various research and clinical investigations in which brain segmentation and volume measurement involving MRI data are needed.
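The first HBRS step, foreground/background thresholding on the intensity histogram, can be sketched with a standard histogram thresholding technique. Otsu's method shown here is a common choice for such a step, not necessarily the authors' exact criterion, and the toy histogram is invented:

```python
def otsu_threshold(hist):
    """Otsu's method: pick the histogram bin t that maximises the
    between-class variance of the two classes {bins <= t} and {bins > t}."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0, sum0 = 0.0, 0.0
    for t in range(len(hist) - 1):
        w0 += hist[t]            # running class-0 weight
        sum0 += t * hist[t]      # running class-0 intensity sum
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy histogram: dark background mode near bin 2, brighter
# tissue mode near bin 12; the threshold lands in the gap between them.
hist = [0, 5, 20, 5, 0, 0, 0, 0, 0, 0, 5, 20, 40, 20, 5, 0]
t = otsu_threshold(hist)
```

After such a threshold separates head from background, the remaining HBRS steps (morphological disconnection from the skull and residue removal) refine the binary mask before voxel counting.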

19.
In this work we treat fMRI data analysis as a spatiotemporal system identification problem and address issues of model formulation, estimation, and model comparison. We present a new model that includes a physiologically based hemodynamic response and an empirically derived low-frequency noise model. We introduce an estimation method employing spatial regularization that improves the precision of spatially varying noise estimates. We call the algorithm locally regularized spatiotemporal (LRST) modeling. We develop a new model selection criterion and compare our model to the SPM-GLM method. Our findings suggest that our method offers a better approach to identifying appropriate statistical models for fMRI studies.

20.
A general framework for automatic model extraction from magnetic resonance (MR) images is described. The framework is based on a two-stage algorithm. In the first stage, a geometrical and topological multiresolution prior model is constructed, based on a pyramid of graphs. In the second stage, a matching algorithm is described. This algorithm is used to deform the prior pyramid in a constrained manner. The topological and the main geometrical properties of the model are preserved while, at the same time, the model adapts itself to the input data. We show that it performs fast and robust model extraction from image data containing unstructured information and noise. The efficiency of the deformable pyramid is illustrated on a synthetic image. Several examples of the method applied to MR volumes are also presented.

