Similar Documents
Found 18 similar documents (search time: 160 ms)
1.
Objective: To develop a new wavelet-transform-based enhancement algorithm for digital chest radiographs. Methods: After wavelet decomposition, wavelet thresholding is first applied as a denoising pre-processing step; the high-frequency components are then enhanced nonlinearly, the low-frequency component is enhanced by unsharp masking, and the enhanced image is reconstructed with the inverse wavelet transform. Results: Experimental comparison with conventional enhancement methods verified that the proposed wavelet-based algorithm achieves better enhancement of digital chest radiographs. Conclusion: For digital chest radiographs with low resolution, severe noise, and uneven illumination, the proposed wavelet-based enhancement method preserves image detail while effectively removing noise.
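The two building blocks this abstract names — wavelet-domain soft thresholding for denoising and unsharp masking for the low-frequency band — can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' code; the 3×3 box blur and the `amount` gain are assumptions standing in for whatever smoothing kernel and weights the paper actually used.

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Soft-threshold wavelet detail coefficients: shrink toward zero by t."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def unsharp_mask(img, amount=1.0):
    """Unsharp masking: add back a scaled high-pass residual (img - blur)."""
    # 3x3 box blur via edge padding and neighbourhood averaging
    p = np.pad(img, 1, mode="edge")
    blur = sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return img + amount * (img - blur)

# toy demo on a tiny "image" and a few detail coefficients
img = np.array([[0., 0., 0.], [0., 9., 0.], [0., 0., 0.]])
sharpened = unsharp_mask(img, amount=1.0)
shrunk = soft_threshold(np.array([-3.0, 0.5, 2.0]), 1.0)
```

In the full pipeline these would be applied to the high- and low-frequency subbands of a wavelet decomposition before inverse transformation.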

2.
Automatic segmentation of weak-edge medical ultrasound images based on threshold segmentation and the Snake model  Cited by 1 (1 self-citation, 0 other citations)
Medical ultrasound image segmentation is a key technique in image processing. Taking gallstone ultrasound images as an example, this paper introduces a new automatic segmentation algorithm for ultrasound images with weak edges. First, a threshold segmentation method based on histogram concavity analysis determines the initial snake of the Snake model; the target is then segmented using the Snake model combined with a greedy algorithm. Experimental results show that, for medical ultrasound images with pronounced weak edges, the algorithm localizes targets accurately and segments them well, and that it is a fully automatic method for medical ultrasound image segmentation.

3.
Wavelet-transform-based denoising and enhancement of medical ultrasound images  Cited by 6 (3 self-citations, 6 other citations)
Objective: To develop a wavelet-transform-based method for denoising and enhancing medical ultrasound images. Methods: A comprehensive noise-suppression method based on wavelet analysis is proposed. The ultrasound image first undergoes a logarithmic transform, converting multiplicative noise into additive noise. A multiscale wavelet transform then decomposes the image into wavelet coefficients at a series of scales; the high-frequency sub-images at each scale are processed with nonlinear wavelet soft thresholding and subsequently enhanced. Finally, the denoised image is recovered via the inverse wavelet transform and an exponential transform. Results: Speckle noise in the original image was effectively removed while edge detail was preserved. Conclusion: The method effectively preserves detail signals while removing speckle noise to the greatest possible extent.
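The log/exp trick this abstract describes — turn multiplicative speckle into additive noise, shrink the high-frequency residual, transform back — can be outlined as below. A local-mean split stands in for the paper's multiscale wavelet decomposition, so this is a hedged sketch of the idea rather than the published method.

```python
import numpy as np

def despeckle_log_soft(img, t):
    """Log transform -> soft-threshold the high-frequency residual -> exp.
    The 3x3 local mean is a crude stand-in for the low-frequency wavelet band."""
    logimg = np.log(img)
    p = np.pad(logimg, 1, mode="edge")
    low = sum(p[i:i + img.shape[0], j:j + img.shape[1]]
              for i in range(3) for j in range(3)) / 9.0
    high = logimg - low
    # soft thresholding of the "detail" part, as in wavelet shrinkage
    high = np.sign(high) * np.maximum(np.abs(high) - t, 0.0)
    return np.exp(low + high)
```

A speckle-free (constant) region passes through unchanged, since its high-frequency residual is zero before and after thresholding.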

4.
Background: Image segmentation algorithms based on Markov random fields (MRF) have become an important approach in medical image segmentation, and the value of the Gibbs-field prior parameter strongly affects segmentation accuracy. Objective: To explore a method for estimating the Gibbs-field prior parameter from the imaging characteristics of brain MR images, thereby improving segmentation accuracy. Methods: Statistical analysis of brain MR images yields a correspondence between the variance of the image's Gaussian noise and the Gibbs-field prior parameter. During the iterations of the MRF-based segmentation algorithm, the prior parameter is then estimated by interpolation from the estimated variance of the Gaussian distribution. Results and Conclusion: Segmentation experiments on simulated and clinical brain MR images show that the method segments more accurately than the traditional approach of fixing the Gibbs prior parameter at a constant, achieves adaptive segmentation, and is simple, fast, and robust.
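The interpolation step — mapping an estimated noise level to a Gibbs prior parameter — reduces to a table lookup. The grid values below are invented for illustration only; the paper derives its own correspondence from statistics of brain MR images.

```python
import numpy as np

# hypothetical lookup table: estimated Gaussian noise std -> Gibbs prior beta
noise_std_grid = np.array([5.0, 10.0, 20.0, 40.0])
beta_grid      = np.array([0.5,  1.0,  1.8,  2.5])

def estimate_beta(noise_std):
    """Linearly interpolate beta from the noise level estimated
    during the MRF segmentation iterations."""
    return float(np.interp(noise_std, noise_std_grid, beta_grid))
```

Each segmentation iteration would re-estimate the noise variance and call `estimate_beta` instead of using a fixed constant.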

5.
Background: X-ray examination is widely used as a routine examination, but owing to the limitations of current technology, X-ray images often suffer from low gray-level contrast and noise, and therefore frequently fall short of physicians' requirements. Objective: To enhance and denoise low-contrast, noisy X-ray images so that they are easier for physicians to interpret and read. Methods: To address the shortcomings of spatial-domain and transform-domain enhancement of X-ray images, an enhancement algorithm based on gray-level contrast and an adaptive wavelet transform is proposed. First, the gray-level ranges to be enhanced or attenuated are selected, the image is gray-level transformed with an eight-neighbourhood contrast enhancement algorithm, and the result is smoothed with a median filter. The image is then wavelet-decomposed, and the magnitude of the correlation coefficients between adjacent decomposition levels is used to separate detail signal from noise. Results and Conclusion: The combined algorithm organically unites spatial-domain and transform-domain enhancement and outperforms traditional single-method enhancement. Experiments show that it adaptively enhances the gray-level contrast of X-ray images, renders image detail more clearly, removes noise interference to a degree, and is especially effective for images with low gray-level contrast.
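The adjacent-level test for separating detail from noise can be sketched as a pointwise product of detail coefficients at neighbouring wavelet scales: edges correlate across levels, noise does not. The global mean threshold used here is a simplifying assumption, not the paper's rule.

```python
import numpy as np

def interscale_correlation_mask(d1, d2):
    """Keep level-1 detail coefficients d1 where their pointwise product
    with the next level d2 is large (edge) and zero them otherwise (noise)."""
    corr = d1 * d2
    thresh = corr.mean()  # crude global threshold on the correlation
    return np.where(corr > thresh, d1, 0.0)
```

Coefficients with matching sign and magnitude at both scales survive; isolated, uncorrelated spikes are suppressed.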

6.
Automatic segmentation of liver CT images based on the Mean Shift method  Cited by 1 (1 self-citation, 0 other citations)
Objective: To investigate an automatic segmentation algorithm for liver CT images based on the Mean Shift method. Methods: The original image is first smoothed with a single Mean Shift pass to filter out noise and improve the algorithm's robustness; initial seed points are then selected automatically by Mean Shift iteration; finally, region growing completes the automatic segmentation of the liver CT image. Results: Experiments show that this is an accurate, fast, and effective automatic liver segmentation method. Conclusion: The proposed method achieves effective automatic segmentation of the liver.
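Mean Shift seed selection can be illustrated in one dimension on gray levels: repeatedly move a point to the mean of the samples inside its bandwidth until it settles on a local density mode. The flat kernel and the parameter values below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def mean_shift_1d(samples, x0, bandwidth=2.0, iters=50, tol=1e-6):
    """Flat-kernel mean-shift mode seeking: move x to the mean of the
    samples within `bandwidth` until the update is smaller than `tol`."""
    x = float(x0)
    for _ in range(iters):
        window = samples[np.abs(samples - x) <= bandwidth]
        if window.size == 0:
            break
        new_x = float(window.mean())
        if abs(new_x - x) < tol:
            break
        x = new_x
    return x

# gray levels clustered around 10 (one tissue) and 100 (another)
grays = np.array([9., 10., 11., 99., 100., 101.])
seed = mean_shift_1d(grays, x0=12.0)
```

Starting points drift to the nearest cluster centre, which then serves as a seed for region growing.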

7.
Segmentation of colour blood cell images  Cited by 2 (0 self-citations, 2 other citations)
This paper systematically reviews the colour spaces and traditional methods used to segment colour blood cell images (including thresholding, edge detection, and watershed segmentation), and also introduces several newer methods, such as edge detection based on deformable models, watershed algorithms based on the wavelet transform, and granulometric analysis based on mathematical morphology. It points out that more accurate cell image segmentation will require techniques such as statistical pattern recognition and neural networks.

8.
A new segmentation method for digital human brain slice images  Cited by 4 (2 self-citations, 2 other citations)
Objective: To propose an automatic segmentation algorithm for human brain slice images that overcomes existing methods' dependence on extensive manual intervention. Methods: Tailored to the characteristics of brain slice images, a gray-level histogram thresholding segmentation algorithm based on region growing is proposed. The image is first coarsely segmented by a region-growing process, then finely segmented a second time by histogram thresholding to extract the target region. Results: The method accurately and effectively segmented the cerebral white matter and cerebral cortex. Conclusion: By combining the global and local information of slice images, the algorithm is an effective segmentation method.
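The coarse first pass — region growing from a seed with an intensity tolerance — is a standard breadth-first flood fill. A minimal sketch follows; the 4-connectivity and the fixed tolerance are assumptions, and the histogram-threshold refinement stage is omitted.

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol):
    """Grow a region from `seed`, adding 4-neighbours whose intensity is
    within `tol` of the seed value (coarse first-pass segmentation)."""
    h, w = img.shape
    seed_val = img[seed]
    mask = np.zeros((h, w), dtype=bool)
    q = deque([seed])
    mask[seed] = True
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] \
                    and abs(img[nr, nc] - seed_val) <= tol:
                mask[nr, nc] = True
                q.append((nr, nc))
    return mask

img = np.array([[10, 10, 90],
                [10, 90, 90],
                [10, 10, 90]], dtype=float)
mask = region_grow(img, (0, 0), tol=5)
```

The resulting mask covers the connected low-intensity region around the seed; a second, histogram-based thresholding pass would then refine it.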

9.
Objective: To denoise medical images using the wavelet transform. Methods: Under the dyadic wavelet transform, the modulus maxima of a signal are generally larger than those of noise, and the modulus maxima of noise decay sharply as the scale increases while those of the signal change little. Exploiting this, a more effective denoising criterion is constructed: different thresholds are chosen at different scales, based on the modulus-maxima information at each scale, to filter out noise. Results: Applied to medical images, the method maintains a high peak signal-to-noise ratio and preserves image detail, edge features, and sharpness. Conclusion: Denoising based on wavelet modulus-maxima information effectively reduces noise in medical images.
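The scale-comparison rule the abstract describes — signal modulus maxima persist across scales while noise maxima decay — can be reduced to a per-coefficient keep/kill test between two adjacent scales. This is a simplified illustration; the paper selects scale-dependent thresholds rather than the fixed ratio used here.

```python
import numpy as np

def keep_signal_maxima(coeff_fine, coeff_coarse, ratio=1.0):
    """Modulus-maxima rule: keep fine-scale coefficients only where the
    coarser-scale modulus is at least `ratio` times the fine-scale modulus;
    coefficients whose modulus collapses at the coarser scale are noise."""
    keep = np.abs(coeff_coarse) >= ratio * np.abs(coeff_fine)
    return np.where(keep, coeff_fine, 0.0)
```

A coefficient of 4 that is still 5 at the next scale survives; one of 3 that drops to 0.5 is zeroed.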

10.
Noise reduction is a very important problem in medical image processing. Traditional denoising methods blur image edges as they reduce noise, whereas anisotropic diffusion filtering preserves edges while reducing noise. The wavelet transform decomposes an image into multiple scales, allowing it to be processed at each scale separately. In this paper, MRI images are denoised with anisotropic diffusion filtering and then enhanced with the stationary wavelet transform. Experimental results show that the method removes noise effectively while enhancing image detail, markedly improving image quality.
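A single explicit step of Perona-Malik diffusion, the classic instance of the edge-preserving anisotropic filtering this abstract relies on, looks like the following. The exponential conduction function and the `kappa` and `dt` values are conventional choices, not taken from the paper.

```python
import numpy as np

def perona_malik_step(img, kappa=10.0, dt=0.2):
    """One explicit Perona-Malik diffusion step: smooth where gradients are
    small, preserve edges where gradients exceed the scale kappa."""
    # differences toward the four neighbours (replicated border)
    n = np.vstack([img[:1], img[:-1]]) - img
    s = np.vstack([img[1:], img[-1:]]) - img
    e = np.hstack([img[:, 1:], img[:, -1:]]) - img
    w = np.hstack([img[:, :1], img[:, :-1]]) - img
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping conduction
    return img + dt * (g(n) * n + g(s) * s + g(e) * e + g(w) * w)
```

Iterating this a few dozen times flattens noisy regions while leaving strong edges (large `d` relative to `kappa`) nearly untouched; the stationary-wavelet enhancement stage would follow.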

11.
A multiscale maximum entropy method (MEM) for image deconvolution is implemented and applied to MODIS (moderate resolution imaging spectroradiometer) data to remove instrument point-spread function (PSF) effects. The implementation utilizes three efficient computational methods: a fast Fourier transform convolution, a wavelet image decomposition and an algorithm for gradient method step-size estimation that together enable rapid image deconvolution. Multiscale entropy uses wavelet transforms to implicitly include an image's two-dimensional structural information into the algorithm's entropy calculation. An evaluation using synthetic data shows that the deconvolution algorithm reduces the maximum individual pixel error from 90.01 to 0.34%. Deconvolution of MODIS data is shown to resolve significant features and is most effective in regions where there are large changes in radiance such as coastal zones or contrasting land covers.

12.
We propose a classification algorithm that utilizes the alpha-stable distribution to model the texture features of synthetic aperture radar (SAR) images. The SAR image is first decomposed by stationary wavelet transform (SWT). After that, the alpha-stable distribution is applied to model the high-frequency subband coefficients of the image at each decomposition scale. A regression-type method is then used to estimate the alpha-stable distribution parameters, which form a feature vector that fully describes the texture. Finally, a SAR image classification algorithm is derived by exploiting this feature vector based on the support vector machines (SVM) approach. Because different combinations of alpha-stable distribution parameters contribute to differences in classification precision, a multilevel SVM (MSVM) classification algorithm is also presented to address the issue. Experimental results indicate that the proposed SAR image classification algorithm is effective and the MSVM algorithm improves the classification performance. Moreover, our proposed algorithm has low computational cost as only a small number of the alpha-stable distribution parameters are processed.

13.
Photon shot noise is the main noise source of optical microscopy images and can be modeled by a Poisson process. Several discrete wavelet transform based methods have been proposed in the literature for denoising images corrupted by Poisson noise. However, the discrete wavelet transform (DWT) has disadvantages such as shift variance, aliasing, and lack of directional selectivity. To overcome these problems, a dual tree complex wavelet transform is used in our proposed denoising algorithm. Our denoising algorithm is based on the assumption that, in the Poisson noise case, threshold values for wavelet coefficients can be estimated from the approximation coefficients. Our proposed method was compared with one of the state-of-the-art denoising algorithms. Better results were obtained by using the proposed algorithm in terms of image quality metrics. Furthermore, the contrast enhancement effect of the proposed method on collagen fiber images is examined. Our method allows fast and efficient enhancement of images obtained under low light intensity conditions.
OCIS codes: (100.0100) Image processing, (100.7410) Wavelets, (100.3020) Image reconstruction-restoration

14.
A new adaptive wavelet packet-based approach to minimize speckle noise in ultrasound images is proposed. This method combines wavelet packet thresholding with a bilateral filter. Here, the best bases after wavelet packet decomposition are selected by comparing the first singular value of all sub-bands, and the noisy coefficients are thresholded using a modified NeighShrink technique. The algorithm is tested with various ultrasound images, and the results, in terms of peak signal-to-noise ratio and mean structural similarity values, are compared with those for some well-known de-speckling techniques. The simulation results indicate that the proposed method has better potential to minimize speckle noise and retain fine details of the ultrasound image.
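The NeighShrink idea — shrink each wavelet coefficient by a factor driven by the energy of its neighbourhood — can be sketched as below. This shows the classic rule c · max(1 − λ²/S, 0), not the paper's modified variant, and the 3×3 window is an assumption.

```python
import numpy as np

def neighshrink(coeffs, lam, win=3):
    """Classic NeighShrink: each coefficient is scaled by
    max(1 - lam^2 / S, 0), where S is the sum of squared coefficients
    in its win x win neighbourhood (edge-padded)."""
    r = win // 2
    p = np.pad(coeffs, r, mode="edge")
    out = np.zeros_like(coeffs, dtype=float)
    h, w = coeffs.shape
    for i in range(h):
        for j in range(w):
            s = float(np.sum(p[i:i + win, j:j + win] ** 2))
            if s > 0:
                out[i, j] = coeffs[i, j] * max(1.0 - lam ** 2 / s, 0.0)
    return out
```

Isolated small coefficients (low neighbourhood energy, likely speckle) are killed outright, while coefficients embedded in high-energy neighbourhoods (structure) are barely shrunk.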

15.
Optical coherence tomography angiography (OCTA) is a novel and clinically promising imaging modality to image retinal and sub-retinal vasculature. Based on repeated optical coherence tomography (OCT) scans, intensity changes are observed over time and used to compute OCTA image data. OCTA data are prone to noise and artifacts caused by variations in flow speed and patient movement. We propose a novel iterative maximum a posteriori signal recovery algorithm in order to generate OCTA volumes with reduced noise and increased image quality. This algorithm is based on previous work on probabilistic OCTA signal models and maximum likelihood estimates. Reconstruction results using total variation minimization and wavelet shrinkage for regularization were compared against an OCTA ground truth volume, merged from six co-registered single OCTA volumes. The results show a significant improvement in peak signal-to-noise ratio and structural similarity. The presented algorithm brings together OCTA image generation and Bayesian statistics and can be developed into new OCTA image generation and denoising algorithms.

16.
Wavelet denoising of multiframe optical coherence tomography data  Cited by 2 (0 self-citations, 2 other citations)
We introduce a novel speckle noise reduction algorithm for OCT images. Contrary to present approaches, the algorithm does not rely on simple averaging of multiple image frames or denoising on the final averaged image. Instead it uses wavelet decompositions of the single frames for a local noise and structure estimation. Based on this analysis, the wavelet detail coefficients are weighted, averaged and reconstructed. At a signal-to-noise gain of about 100% we observe only a minor sharpness decrease, as measured by a full-width-half-maximum reduction of 10.5%. While a similar signal-to-noise gain would require averaging of 29 frames, we achieve this result using only 8 frames as input to the algorithm. A possible application of the proposed algorithm is preprocessing in retinal structure segmentation algorithms, to allow a better differentiation between real tissue information and unwanted speckle noise.
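The frame-weighting idea — trust detail coefficients that agree across frames, suppress frame-specific speckle — can be illustrated with a simple mean-vs-variance weight across the frame stack. The weighting formula below is an illustrative stand-in for the paper's local noise and structure estimate, not its actual weighting scheme.

```python
import numpy as np

def weighted_detail_average(detail_stack, eps=1e-12):
    """detail_stack: (frames, h, w) wavelet detail coefficients.
    Weight the across-frame mean by how strongly it dominates the
    across-frame variance: stable structure -> weight ~1,
    zero-mean frame-specific speckle -> weight ~0."""
    mean = detail_stack.mean(axis=0)
    var = detail_stack.var(axis=0)
    w = mean ** 2 / (mean ** 2 + var + eps)
    return w * mean
```

Coefficients that are identical in every frame pass through unchanged, while coefficients that flip sign frame to frame average toward zero.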

17.
Remote Sensing Letters, 2013, 4(12): 1153–1162

In this paper, in order to obtain high-quality training samples and improve the accuracy of the synthetic aperture radar (SAR) image change detection, we propose a coarse-to-fine SAR image change detection method. In the coarse change detection stage, we construct the difference image (DI) by the use of the wavelet frequency difference (WFD) method, and obtain the DI saliency map by the use of quaternion Fourier transform (QFT). At the same time, the noise in the non-salient regions is suppressed. Finally, the coarse change detection map obtained by selecting a threshold for the DI saliency map is pre-classified (changed pixels, unchanged pixels, undetermined pixels) by the fuzzy c-means (FCM) clustering algorithm. In the fine change detection phase, the neighbourhood features of the changed pixels and the unchanged pixels in the coarse change map are extracted and used as reliable samples for extreme learning machine (ELM) training. The trained ELM classifier is then used to perform change detection on the coarse change detection map, to obtain the final change detection map. Experiments on two real SAR datasets show that the proposed method can not only obtain reliable training samples, but it can also result in a significant improvement in change detection performance.

18.
To reduce scanning time and/or improve spatial/temporal resolution in some Magnetic Resonance Imaging (MRI) applications, parallel MRI acquisition techniques with multiple coils acquisition have emerged since the early 1990s as powerful imaging methods that allow a faster acquisition process. In these techniques, the full FOV image has to be reconstructed from the resulting acquired undersampled k-space data. To this end, several reconstruction techniques have been proposed such as the widely-used SENSitivity Encoding (SENSE) method. However, the reconstructed image generally presents artifacts when perturbations occur in both the measured data and the estimated coil sensitivity profiles. In this paper, we aim at achieving accurate image reconstruction under degraded experimental conditions (low magnetic field and high reduction factor), in which neither the SENSE method nor the Tikhonov regularization in the image domain give convincing results. To this end, we present a novel method for SENSE-based reconstruction which proceeds with regularization in the complex wavelet domain by promoting sparsity. The proposed approach relies on a fast algorithm that enables the minimization of regularized non-differentiable criteria including more general penalties than a classical ℓ1 term. To further enhance the reconstructed image quality, local convex constraints are added to the regularization process. In vivo human brain experiments carried out on Gradient-Echo (GRE) anatomical and Echo Planar Imaging (EPI) functional MRI data at 1.5T indicate that our algorithm provides reconstructed images with reduced artifacts for high reduction factors.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号