Similar literature
20 similar records found
1.
Diffusion-weighted magnetic resonance imaging (DWI) relies on fast acquisition with specialized spin-echo echo-planar sequences and is therefore easily corrupted by noise, so effective denoising is needed before subsequent use. Existing denoising methods are mostly extensions of general-purpose image denoising and do not exploit the fact that a DWI dataset consists of multiple volumes acquired along different gradient directions. This paper proposes a linear minimum mean-square error (LMMSE) restoration method for Rician noise in DWI: local statistics are used to estimate the Rician noise level, and the LMMSE restoration is modified to pool information across multiple gradient directions. Simulations and experiments on synthetic DWI data and real human brain DWI data show that, compared with the widely used direction-by-direction denoising approaches, the proposed method removes Rician noise more effectively and improves the validity and accuracy of the magnitude and orientation information of the diffusion tensor images (DTI) computed from the data.
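
A minimal sketch of a voxelwise LMMSE-type restoration for Rician-corrupted magnitude data, assuming a known, spatially constant noise level σ and using local second- and fourth-order moments; the joint use of multiple gradient directions described in the abstract is not reproduced here, and the window size is an arbitrary choice.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lmmse_rician(mag, sigma, win=7):
    """Voxelwise LMMSE-type estimate of the noise-free signal from a Rician
    magnitude image `mag`, given the noise standard deviation `sigma`.
    Local moments are computed over a `win`-sized box neighbourhood."""
    m2 = mag.astype(np.float64) ** 2
    m4 = m2 ** 2
    e_m2 = uniform_filter(m2, size=win)            # local <M^2>
    e_m4 = uniform_filter(m4, size=win)            # local <M^4>
    var_m2 = np.maximum(e_m4 - e_m2 ** 2, 1e-12)   # local variance of M^2
    # Gain of the closed-form LMMSE estimator for the squared signal A^2
    k = np.clip(1.0 - 4.0 * sigma**2 * (e_m2 - sigma**2) / var_m2, 0.0, 1.0)
    a2 = e_m2 - 2.0 * sigma**2 + k * (m2 - e_m2)   # uses E[M^2] = A^2 + 2*sigma^2
    return np.sqrt(np.maximum(a2, 0.0))            # back to magnitude units
```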

2.
Noise in diffusion-weighted MR images follows a Rician distribution, so conventional Wiener filters, which assume Gaussian noise, tend to introduce errors; diffusion-weighted data also contain multiple volumes acquired along similar gradient directions. This work therefore performs Wiener-filter restoration jointly over diffusion-weighted images from multiple similar gradient directions, modifies the conventional Gaussian-noise Wiener filter for the Rician noise distribution, and introduces the notion of anisotropy into the estimation of the restoration parameters to improve their accuracy and thus the restoration quality. Simulations and experiments on synthetic and real brain diffusion-weighted MR images show that the method effectively reduces the influence of noise and improves the magnitude and orientation information of the diffusion tensor images computed from the data: under 10% Rician noise, the peak signal-to-noise ratio of the diffusion-weighted images increases by 10 dB and the mean angular deviation of the computed diffusion tensor images decreases by 5 degrees, safeguarding the accuracy and reliability of subsequent applications.

3.
To suppress spatially varying Rician noise in MR images, a noise-level-field estimation method is proposed and combined with a variance-stabilizing transform (VST) and the BM3D algorithm for MR image denoising. The noise level field is estimated from local estimates of the Rician noise level under a sparsity-constraint model; this field is used to apply a spatially adaptive variance-stabilizing transform to the magnitude image so that the noise becomes independent of the signal amplitude and spatial location, BM3D then suppresses the noise, and the inverse variance-stabilizing transform finally yields an unbiased denoised image. In simulations, the mean relative error of the estimated noise level field is below 0.2%, and denoising with the spatially adaptive VST improves the peak signal-to-noise ratio of the denoised image by 2 dB over the standard VST; in denoising experiments on real breast MR images, the adaptive VST achieves higher Q-metric scores. The results show that the proposed method can effectively estimate the Rician noise level field and use it to suppress spatially varying noise in MR images.
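
As a rough, simplified illustration of the pipeline above (not the exact variance-stabilizing transform of the paper), the Rician bias can be corrected with the second-moment relation E[M²] = A² + 2σ²(x), the result divided by the local noise level so the remaining noise is approximately unit variance everywhere, any Gaussian-noise denoiser applied in that domain, and the scaling inverted; `noise_field` and the `gaussian_denoiser` callable (standing in for BM3D) are assumed inputs.

```python
import numpy as np

def denoise_with_adaptive_stabilization(mag, noise_field, gaussian_denoiser):
    """Simplified spatially adaptive stabilization for Rician magnitude data.

    mag               : noisy magnitude image
    noise_field       : per-pixel noise standard deviation sigma(x)
    gaussian_denoiser : denoiser tuned for ~unit-variance Gaussian noise,
                        e.g. a BM3D implementation used as a black box
    """
    # Second-moment bias correction: E[M^2] = A^2 + 2*sigma(x)^2
    a_hat = np.sqrt(np.maximum(mag.astype(np.float64) ** 2
                               - 2.0 * noise_field ** 2, 0.0))
    stabilized = a_hat / noise_field           # noise roughly unit variance everywhere
    denoised = gaussian_denoiser(stabilized)   # suppress noise in the stable domain
    return denoised * noise_field              # undo the spatial scaling
```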

4.
Objective: To better remove noise from digital radiography (DR) medical images. Methods: The noise sources were analyzed and a wavelet-denoising scheme was improved accordingly. A variance-stabilizing transform was introduced to convert the noise model of the original image into a Gaussian one. The image was decomposed into wavelet coefficients in subbands of different frequencies, and each subband was filtered with its own threshold. Results: Compared with ordinary global wavelet denoising, the method preserves edge information while increasing the peak signal-to-noise ratio of the denoised image. Conclusion: For DR images, the method outperforms traditional Gaussian filtering and global wavelet-threshold filtering in terms of noise removal, detail quality, and bone sharpening.

5.
A medical image denoising method based on fuzzy mean deviation and the wavelet transform
Wavelet threshold shrinkage can effectively remove noise from images; the denoising threshold directly determines the result, and the noise standard deviation plays a crucial role in setting that threshold. Considering the characteristics of medical images and seeking a more suitable estimator of the noise standard deviation, this study proposes a new method that estimates the noise standard deviation with the fuzzy mean deviation instead of the ordinary standard deviation. The noise standard deviation is estimated with a fuzzy integral in the low-frequency image at each level of the wavelet decomposition, the denoising threshold for each level is then determined, and the image is denoised. Experimental results show that the algorithm removes noise while preserving image detail well.

6.
Images are denoised with a wavelet adaptive-threshold method based on Bayesian estimation. Gaussian filtering and three wavelet-transform approaches (conventional hard thresholding, conventional soft thresholding, and Bayesian-estimation-based adaptive thresholding) were applied to signals corrupted with Rician noise of different standard deviations σ, in order to compare Gaussian filtering with conventional wavelet thresholding and to verify the advantage of the new Bayesian adaptive-threshold wavelet denoising for magnetic resonance imaging (MRI) signals. After wavelet denoising, the signal-to-noise ratio is higher and the root-mean-square error lower than after Gaussian filtering. The Bayesian adaptive-threshold wavelet method preserves more useful signal than Gaussian filtering, and the optimized oxygen extraction fraction (OEF) values increase somewhat, bringing the results closer to the positron emission tomography (PET) gold standard. Signal and noise were thus successfully separated, and this new Bayesian adaptive wavelet-threshold denoising was applied to noise reduction in functional MRI analysis with good results.
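
A compact sketch of subband-adaptive soft thresholding in the wavelet domain, in the spirit of the Bayesian adaptive thresholding compared above: the noise level is estimated from the finest diagonal subband with the usual median rule, and each detail subband gets a BayesShrink-style threshold σ²/σ_x. The wavelet, decomposition level and threshold rule are illustrative choices, not necessarily those of the study.

```python
import numpy as np
import pywt

def adaptive_wavelet_denoise(img, wavelet="db4", level=3):
    """Subband-adaptive soft thresholding with a BayesShrink-style threshold."""
    coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=level)
    # Robust noise estimate from the finest diagonal detail subband
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    out = [coeffs[0]]                                   # keep approximation untouched
    for (ch, cv, cd) in coeffs[1:]:
        shrunk = []
        for c in (ch, cv, cd):
            # sigma_x^2 = max(var(Y) - sigma^2, 0); threshold T = sigma^2 / sigma_x
            sigma_x = np.sqrt(max(c.var() - sigma**2, 1e-12))
            shrunk.append(pywt.threshold(c, sigma**2 / sigma_x, mode="soft"))
        out.append(tuple(shrunk))
    return pywt.waverec2(out, wavelet)
```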

7.
Image denoising in the wavelet-transform domain has been an active research topic in recent years. Many results exist for removing additive noise in the wavelet domain; the methods of Donoho and others, for example, have been applied with good success. The complexity of the noise in ultrasound images, however, places higher demands on denoising methods. To remove noise while better preserving edges and useful detail, this study draws on the nonparametric adaptive estimation theory of Birgé and Massart and proposes a method for denoising ultrasound images in the stationary wavelet-transform domain. Experiments show that this ultrasound denoising method based on nonparametric adaptive estimation improves on Donoho's threshold denoising.

8.
Anisotropic diffusion models cannot effectively preserve image detail when removing speckle noise from ultrasound images. To address this, this paper proposes an adaptive minimum-energy denoising model based on the variational method. First, the anisotropic diffusion model expressed as a differential equation is converted directly into a minimum-energy variational model; Euler's elastica energy model is then introduced to preserve and enhance image detail while removing noise. To resolve the trade-off between the number of iterations and the step size that arises in the numerical solution, an iteration stopping criterion and an adaptive variable-step-size denoising algorithm are also proposed. Experiments on simulated and real ultrasound images show that the variational adaptive speckle-filtering algorithm preserves detail well while denoising and effectively reduces the number of iterations.

9.
Artifact removal from exercise ECG signals based on the wavelet transform and unbiased likelihood estimation
A new method is introduced that combines the wavelet transform with unbiased likelihood estimation to remove baseline drift and electromyographic noise from exercise ECG signals, and two indices are proposed for evaluating the effectiveness of ECG denoising algorithms. Using the multiresolution property of the wavelet transform, the original exercise ECG is decomposed over multiple scales and reconstructed branch by branch; guided by the characteristics of the exercise ECG itself, unbiased likelihood estimation is then used to apply threshold denoising to the different ECG detail components. The results show that the method effectively removes interference from exercise ECG signals and offers a new route for further research on feature recognition and analysis of exercise ECGs.

10.
To improve ultrasound image quality and overcome the difficulty traditional denoising algorithms have in suppressing speckle noise while preserving ultrasound texture, a convolutional-neural-network-based speckle-removal algorithm, DSCNN (De-speckling CNN), is proposed. The algorithm exploits the strong fitting ability of convolutional neural networks to learn the complex mapping from an ultrasound image to its corresponding high-quality image, and an improved loss function reduces the loss of texture information and the blurring of detail during denoising. Rather than simply assuming that ultrasound speckle is multiplicative noise, the method uses simulated ultrasound imaging based on an ultrasound acquisition model and a speckle-formation model to generate training data that better match real ultrasound images, addressing the scarcity of training data for deep learning and the clinical impossibility of obtaining noise-free images spatially registered to the ultrasound images as labels. Compared with other representative ultrasound denoising algorithms, images denoised with DSCNN achieve better results both visually and in image-quality metrics, with an SSIM of 0.8569, the highest among all methods in the paper.

11.
Estimation of the noise variance of a magnetic resonance (MR) image is important for various post-processing tasks. In the literature, various methods for noise variance estimation from MR images are available, most of which however require user interaction and/or multiple (perfectly aligned) images. In this paper, we focus on automatic histogram-based noise variance estimation techniques. Previously described methods are reviewed and a new method based on the maximum likelihood (ML) principle is presented. Using Monte Carlo simulation experiments as well as experimental MR data sets, the noise variance estimation methods are compared in terms of the root mean squared error (RMSE). The results show that the newly proposed method is superior in terms of the RMSE.
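
For context, a much-simplified relative of the histogram-based ML estimators discussed above: if signal-free background pixels can be identified, their magnitudes follow a Rayleigh distribution and the ML estimate of the noise variance has a closed form. Selecting the background with a simple intensity quantile, as below, is an illustrative shortcut rather than the paper's automatic histogram method.

```python
import numpy as np

def noise_variance_from_background(mag, background_mask=None, cutoff_quantile=0.2):
    """ML estimate of the noise variance from background (signal-free) pixels.

    For magnitudes m_i ~ Rayleigh(sigma), maximizing the likelihood gives
    sigma^2 = sum(m_i^2) / (2 * N).
    """
    m = np.asarray(mag, dtype=np.float64)
    if background_mask is None:
        # Crude stand-in for a proper background / histogram-mode selection
        background_mask = m <= np.quantile(m, cutoff_quantile)
    bg = m[background_mask]
    return float(np.sum(bg ** 2) / (2.0 * bg.size))
```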

12.
Most existing wavelet-based image denoising techniques are developed for additive white Gaussian noise. In applications to speckle reduction in medical ultrasound (US) images, the traditional approach is first to perform the logarithmic transform (homomorphic processing) to convert the multiplicative speckle noise model to an additive one, and then the wavelet filtering is performed on the log-transformed image, followed by an exponential operation. However, this non-linear operation leads to biased estimation of the signal and increases the computational complexity of the filtering method. To overcome these drawbacks, an efficient, non-homomorphic technique for speckle reduction in medical US images is proposed. The method relies on the true characterisation of the marginal statistics of the signal and speckle wavelet coefficients. The speckle component was modelled using the generalised Nakagami distribution, which is versatile enough to model the speckle statistics under various scattering conditions of interest in medical US images. By combining this speckle model with a generalised Gaussian prior for the signal, the Bayesian shrinkage functions were derived using the maximum a posteriori (MAP) criterion. The resulting Bayesian processor used the local image statistics to achieve soft-adaptation from homogeneous to highly heterogeneous areas. Finally, the results showed that the proposed method, named GNDShrink, yielded a signal-to-noise ratio (SNR) gain of 0.42 dB over the best state-of-the-art despeckling method reported in the literature, 1.73 dB over the Lee filter and 1.31 dB over the Kuan filter at an input SNR of 12.0 dB, when tested on a US image. Further, the visual comparison of despeckled US images indicated that the new method suppressed the speckle noise well, while preserving the texture and organ surfaces.

13.
In this study, we evaluate whether diffusion-weighted magnetic resonance imaging (DW-MRI) data after denoising can provide a reliable estimation of brain intravoxel incoherent motion (IVIM) perfusion parameters. Brain DW-MRI was performed in five healthy volunteers on a 3 T clinical scanner with 12 different b-values ranging from 0 to 1000 s/mm². DW-MRI data denoised using the proposed method were fitted with a biexponential model to extract the perfusion fraction (PF), diffusion coefficient (D) and pseudo-diffusion coefficient (D*). To further evaluate the accuracy and precision of parameter estimation, IVIM parametric images obtained from one volunteer were used to resimulate the DW-MRI data using the biexponential model with the same b-values. Rician noise was added to generate DW-MRI data with various signal-to-noise ratio (SNR) levels. The experimental results showed that the denoised DW-MRI data yielded precise estimates for all IVIM parameters. We also found that IVIM parameters were significantly different between gray matter and white matter (P < 0.05), except for D* (P = 0.6). Our simulation results show that the proposed image denoising method displays good performance in estimating IVIM parameters (both bias and coefficient of variation were <12% for PF, D and D*) in the presence of different levels of simulated Rician noise (SNR at b = 0 of 20-40). Simulations and experiments show that brain DW-MRI data after denoising can provide a reliable estimation of IVIM parameters.
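
A minimal sketch of the biexponential IVIM fit described above, using a generic least-squares routine; the b-values, initial guesses and bounds below are illustrative (only the 0-1000 s/mm² range matches the abstract), and segmented or Bayesian fitting strategies are omitted.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim_model(b, s0, pf, d, d_star):
    """Biexponential IVIM signal: S(b) = S0*(PF*exp(-b*D*) + (1-PF)*exp(-b*D))."""
    return s0 * (pf * np.exp(-b * d_star) + (1.0 - pf) * np.exp(-b * d))

def fit_ivim(bvals, signal):
    """Fit S0, PF, D and D* from a single voxel's multi-b-value signal."""
    p0 = [signal[0], 0.1, 1e-3, 1e-2]                  # S0, PF, D, D* (mm^2/s)
    bounds = ([0, 0, 0, 0], [np.inf, 1, 5e-3, 1e-1])
    popt, _ = curve_fit(ivim_model, bvals, signal, p0=p0, bounds=bounds)
    return dict(zip(["S0", "PF", "D", "D*"], popt))

# Synthetic example at 12 made-up b-values spanning 0-1000 s/mm^2
bvals = np.array([0, 10, 20, 40, 80, 110, 140, 170, 200, 300, 500, 1000], float)
signal = ivim_model(bvals, 1.0, 0.12, 0.8e-3, 1.5e-2)
print(fit_ivim(bvals, signal))
```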

14.
In this paper, we propose a method to denoise magnitude magnetic resonance (MR) images, which are Rician distributed. Conventionally, maximum likelihood methods incorporate the Rice distribution to estimate the true, underlying signal from a local neighborhood within which the signal is assumed to be constant. However, if this assumption is not met, such filtering will lead to blurred edges and loss of fine structures. As a solution to this problem, we put forward the concept of restricted local neighborhoods where the true intensity for each noisy pixel is estimated from a set of preselected neighboring pixels. To this end, a reference image is created from the noisy image using a recently proposed nonlocal means algorithm. This reference image is used as a prior for further noise reduction. A scheme is developed to locally select an appropriate subset of pixels from which the underlying signal is estimated. Experimental results based on the peak signal to noise ratio, structural similarity index measure, Bhattacharyya coefficient and mean absolute difference from synthetic and real MR images demonstrate the superior performance of the proposed method over other state-of-the-art methods.

15.
Li X, Li L, Lu H, Liang Z. Medical Physics, 2005, 32(7): 2337-2345
Noise, the partial volume (PV) effect, and image-intensity inhomogeneity make segmentation of brain magnetic resonance (MR) images a challenging task. Most of the current MR image segmentation methods focus on only one or two of the above-mentioned effects. The objective of this paper is to propose a unified framework, based on the maximum a posteriori probability principle, by taking all these effects into account simultaneously in order to improve image segmentation performance. Instead of labeling each image voxel with a unique tissue type, the percentage of each voxel belonging to different tissues, which we call a mixture, is considered to address the PV effect. A Markov random field model is used to describe the noise effect by considering the nearby spatial information of the tissue mixture. The inhomogeneity effect is modeled as a bias field characterized by a zero mean Gaussian prior probability. The well-known fuzzy C-mean model is extended to define the likelihood function of the observed image. This framework reduces theoretically, under some assumptions, to the adaptive fuzzy C-mean (AFCM) algorithm proposed by Pham and Prince. Digital phantom and real clinical MR images were used to test the proposed framework. Improved performance over the AFCM algorithm was observed in a clinical environment where the inhomogeneity, noise level, and PV effect are commonly encountered.

16.
Limiting dilution assays. Experimental design and statistical analysis
Two issues in limiting dilution analysis are considered. The first concerns the experimental design: a mathematical algorithm has been developed which calculates the number of replicate culture groups, and the (mean) number of cells per well to be used on the basis of the experimenter's a priori information about the unknown frequency. The procedure guarantees useful data if the a priori interval estimate of the frequency to be determined is correct and the cells are willing to grow. The second issue concerns the statistical method to be used for estimation of the unknown frequency. Several methods (minimum chi-square, maximum likelihood and the jackknife version of the maximum likelihood method) have been evaluated with artificial data from extensive Monte Carlo experiments. All three methods were useful in the statistical analysis of data. As a result of these experiments and theoretical considerations the jackknife version of the maximum likelihood estimation procedure is proposed as the statistical procedure of choice. The next best method is the maximum likelihood procedure.
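
As background to the comparison above, a small sketch of maximum-likelihood frequency estimation under the standard single-hit Poisson model, in which the probability that a well seeded with d cells remains negative is exp(-f·d); the jackknife refinement recommended in the abstract is not shown, and the example doses and counts are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ml_frequency(doses, n_wells, n_negative):
    """ML estimate of the responding-cell frequency f in a limiting dilution assay.

    Single-hit Poisson model: P(well negative | dose d) = exp(-f * d).
    """
    doses = np.asarray(doses, float)
    neg = np.asarray(n_negative, float)
    pos = np.asarray(n_wells, float) - neg

    def neg_log_lik(f):
        p_neg = np.exp(-f * doses)
        # Guard against log(0) when 1 - exp(-f*d) underflows
        return -np.sum(neg * (-f * doses)
                       + pos * np.log(np.maximum(1.0 - p_neg, 1e-300)))

    return minimize_scalar(neg_log_lik, bounds=(1e-9, 1.0), method="bounded").x

# Hypothetical assay: cells per well, replicate wells per group, negative wells
doses, n_wells, n_negative = [3000, 10000, 30000, 100000], [24] * 4, [22, 17, 8, 1]
print("estimated frequency ~ 1 /", round(1.0 / ml_frequency(doses, n_wells, n_negative)))
```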

17.
To improve texture-based discrimination of benign and malignant lung nodules, a texture-feature extraction method based on the local jet transform space is proposed. The original nodule image is first transformed into local jet texture images using the partial derivatives of a two-dimensional Gaussian up to third order, and texture descriptors are then used to extract feature parameters in that space. Using the first four moments of the gray levels and gray-level co-occurrence-matrix features as texture descriptors, feature parameters are extracted from both the original nodule images and the transformed texture images. With a BP neural network as the classifier, the feature sets from the two image spaces under the same descriptors, refined by kernel principal component analysis, are used to classify nodules as benign or malignant. In comparative experiments on 157 lung nodules (51 benign, 106 malignant), the features extracted in the local jet transform space achieved classification accuracies of 82.69% and 86.54% for the two descriptors, 6%-8% higher than in the original image space, while the AUC increased by about 10%. The results show that texture features extracted in the local jet transform space can effectively improve the classification of benign and malignant lung nodules.
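
A small sketch of the local jet construction referred to above: the image is convolved with Gaussian derivative filters up to a chosen order (third order in the paper; second order here to keep the example short), and texture descriptors are then computed on each response channel as well as on the original ROI. The scale parameter is an arbitrary choice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_jet(img, sigma=1.5, max_order=2):
    """Gaussian-derivative (local jet) responses of `img` up to `max_order`.

    Returns a dict mapping (order_y, order_x) -> filtered image; for example
    (0, 1) is the first derivative along x at scale `sigma`.
    """
    img = np.asarray(img, dtype=np.float64)
    jet = {}
    for oy in range(max_order + 1):
        for ox in range(max_order + 1 - oy):
            jet[(oy, ox)] = gaussian_filter(img, sigma=sigma, order=(oy, ox))
    return jet

# Texture descriptors (gray-level moments, GLCM features, ...) would then be
# extracted per channel, e.g. from local_jet(nodule_roi)[(1, 0)].
```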

18.
In this paper, a novel lesion segmentation method for breast ultrasound (BUS) images based on the cellular automata principle is proposed. Its energy transition function is formulated based on global image information difference and local image information difference using different energy transfer strategies. First, an energy decrease strategy is used for modeling the spatial relation information of pixels. For modeling global image information difference, a seed information comparison function is developed using an energy preserve strategy. Then, a texture information comparison function is proposed for considering local image difference in different regions, which is helpful for handling blurry boundaries. Moreover, two neighborhood systems (von Neumann and Moore neighborhood systems) are integrated as the evolution environment, and a similarity-based criterion is used for suppressing noise and reducing computation complexity. The proposed method was applied to 205 clinical BUS images for studying its characteristic and functionality, and several overlapping area error metrics and statistical evaluation methods are utilized for evaluating its performance. The experimental results demonstrate that the proposed method can handle BUS images with blurry boundaries and low contrast well and can segment breast lesions accurately and effectively.

19.
OBJECTIVE: The objective of this paper is to classify 3D medical images by analyzing spatial distributions to model and characterize the arrangement of the regions of interest (ROIs) in 3D space. METHODS AND MATERIAL: Two methods are proposed for facilitating such classification. The first method uses measures of similarity, such as the Mahalanobis distance and the Kullback-Leibler (KL) divergence, to compute the difference between spatial probability distributions of ROIs in an image of a new subject and each of the considered classes represented by historical data (e.g., normal versus disease class). A new subject is predicted to belong to the class corresponding to the most similar dataset. The second method employs the maximum likelihood (ML) principle to predict the class that most likely produced the dataset of the new subject. RESULTS: The proposed methods have been experimentally evaluated on three datasets: synthetic data (mixtures of Gaussian distributions), realistic lesion-deficit data (generated by a simulator conforming to a clinical study), and functional MRI activation data obtained from a study designed to explore neuroanatomical correlates of semantic processing in Alzheimer's disease (AD). CONCLUSION: The experiments demonstrated that the approaches based on the KL divergence and the ML method provide superior accuracy compared to the Mahalanobis distance. The latter technique could still be a method of choice when the distributions differ significantly, since it is faster and less complex. The obtained classification accuracy, with errors smaller than 1%, supports the conclusion that useful diagnostic assistance could be achieved assuming sufficiently informative historic data and sufficient information on the new subject.
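
For reference, minimal implementations of the two dissimilarity measures compared above, under the simplifying assumption that each class's spatial ROI distribution is summarized by a mean vector and covariance matrix; the KL expression is the standard closed form for two multivariate Gaussians.

```python
import numpy as np

def mahalanobis_distance(x, mean, cov):
    """Mahalanobis distance of a feature vector x from a class (mean, cov)."""
    diff = np.asarray(x, float) - np.asarray(mean, float)
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))

def kl_gaussians(mean0, cov0, mean1, cov1):
    """KL(N0 || N1) between two multivariate Gaussians (closed form)."""
    mean0, mean1 = np.asarray(mean0, float), np.asarray(mean1, float)
    cov0, cov1 = np.asarray(cov0, float), np.asarray(cov1, float)
    k = mean0.size
    diff = mean1 - mean0
    inv1 = np.linalg.inv(cov1)
    return 0.5 * (np.trace(inv1 @ cov0)                       # covariance mismatch
                  + diff @ inv1 @ diff                        # mean shift
                  - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))
```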

20.
Weighted least squares (WLS) is the technique of choice for parameter estimation from noisy data in physiological modeling. WLS can be derived from maximum likelihood theory, provided that the measurement error variance is known and independent of the model parameters and the weights are calculated as the inverse of the measurement error variance. However, using measured values in lieu of predicted values to quantify the measurement error variance is approximately valid only when the noise in the data is relatively low. This practice may thus introduce sampling variation in the resulting estimates, as weights can be seriously mis-specified. To avoid this, extended least squares (ELS) has been used, especially in pharmacokinetics. ELS uses an augmented objective function where the measurement error variance depends explicitly on the model parameters. Although it is more complex, ELS accounts for the Gaussian maximum likelihood statistical model of the data better than WLS, yet its usage is not as widespread. The use of ELS in high data noise situations will result in more accurate parameter estimates than WLS (when the underlying model is correct). To support this claim, we have undertaken a simulation study using four different models with varying amounts of noise in the data and further assuming that the measurement error standard deviation is proportional to the model prediction. We also motivate this in terms of maximum likelihood and comment on the practical consequences of using WLS and ELS as well as give practical guidelines for choosing one method over the other.
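
The two objective functions contrasted above can be written side by side. With model prediction f(x_i, θ) and an error standard deviation that depends on the parameters (for example σ_i(θ) = a·f(x_i, θ), proportional to the prediction as in the simulation study), the extra log-variance term is what keeps ELS consistent with the Gaussian maximum-likelihood model:

```latex
% Weighted least squares: weights treated as known constants
\mathrm{WLS}(\theta) = \sum_{i=1}^{N} \frac{\bigl(y_i - f(x_i,\theta)\bigr)^2}{\sigma_i^2}

% Extended least squares: the variance model depends on theta, so the
% log-variance term of the Gaussian likelihood must be retained
\mathrm{ELS}(\theta) = \sum_{i=1}^{N}
  \left[ \frac{\bigl(y_i - f(x_i,\theta)\bigr)^2}{\sigma_i^2(\theta)}
         + \ln \sigma_i^2(\theta) \right]
```

Minimizing ELS(θ) is, up to an additive constant, equivalent to maximizing the Gaussian log-likelihood when the variance model is estimated jointly with the parameters, which is why ELS is preferred at high noise levels when the underlying model is correct.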
