Similar articles
 19 similar articles found (search time: 126 ms)
1.
Current image fusion algorithms are implemented at the pixel level, do not fully exploit image texture features, and yield unsatisfactory fusion results. To address this, this paper constructs feature templates from the eigenvalues of the structure tensor and obtains a fused gradient field in the gradient domain through feature weighting, so that image features fully contribute to the fusion decisions and a feature-level fusion idea is embedded in the algorithm. On this basis, three-dimensional data are fused directly, with the information in all three dimensions treated equally. In head PET/CT fusion experiments, the proposed algorithm improved clarity by 64% and cross entropy by 21% compared with a wavelet-transform-based 3D fusion method, a clear advantage in objective evaluation. In visual terms such as overall brightness and edge sharpness, the proposed algorithm also outperformed the wavelet-based method. Finally, the fused grayscale image and the source PET image are combined by alpha-blended semi-transparent overlay for pseudo-color display, which improves the recognizability of useful information in the fusion result.
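As an illustration of the structure-tensor weighting idea described in this abstract, here is a minimal 2D Python sketch (the paper works on 3D volumes and reconstructs the image from the fused gradient field, which is omitted here; the function names, the Gaussian smoothing scale, and the choice of the largest eigenvalue as the feature weight are assumptions, not the authors' exact construction):

    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    def feature_weight(img, sigma=1.5):
        """Per-pixel feature strength from structure-tensor eigenvalues (2D sketch)."""
        img = img.astype(float)
        ix, iy = sobel(img, axis=1), sobel(img, axis=0)
        # smoothed structure-tensor components
        jxx = gaussian_filter(ix * ix, sigma)
        jxy = gaussian_filter(ix * iy, sigma)
        jyy = gaussian_filter(iy * iy, sigma)
        # largest eigenvalue of [[jxx, jxy], [jxy, jyy]]
        return 0.5 * (jxx + jyy + np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2))

    def fused_gradient(ct, pet, eps=1e-12):
        """Feature-weighted average of the two gradient fields."""
        ct, pet = ct.astype(float), pet.astype(float)
        w_ct, w_pet = feature_weight(ct), feature_weight(pet)
        w = w_ct / (w_ct + w_pet + eps)          # per-pixel weight of the CT gradients
        gx = w * sobel(ct, axis=1) + (1 - w) * sobel(pet, axis=1)
        gy = w * sobel(ct, axis=0) + (1 - w) * sobel(pet, axis=0)
        return gx, gy   # a Poisson solver would reconstruct the fused image from (gx, gy)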

2.
Objective: To study a medical image fusion algorithm based on the multiwavelet transform. Methods: Registered PET and CT images were pre-filtered and then decomposed with a multiwavelet transform; the low-frequency components were fused using an average-gradient rule and the high-frequency components using an adaptive weighting rule; the fused image was obtained after multiwavelet reconstruction and post-filtering. Results: By combining information from the source images, the fused image contained more detail and texture information, giving a good fusion result. Conclusion: Experiments show that the algorithm yields an optimal fusion of the images.
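A rough sketch of the two fusion rules named in this abstract, using an ordinary discrete wavelet transform (PyWavelets) as a stand-in for the multiwavelet transform and omitting the pre/post-filtering; the window size and the energy-based interpretation of "adaptive weighting" are assumptions:

    import numpy as np
    import pywt
    from scipy.ndimage import sobel, uniform_filter

    def avg_gradient(band):
        return np.mean(np.hypot(sobel(band, axis=1), sobel(band, axis=0)))

    def fuse_dwt(pet, ct, wavelet="db4", level=2):
        a = pywt.wavedec2(pet.astype(float), wavelet, level=level)
        b = pywt.wavedec2(ct.astype(float), wavelet, level=level)
        # low-frequency: weight the two approximation bands by their average gradient
        wa, wb = avg_gradient(a[0]), avg_gradient(b[0])
        fused = [(wa * a[0] + wb * b[0]) / (wa + wb + 1e-12)]
        # high-frequency: adaptive per-pixel weighting by local energy
        for da, db in zip(a[1:], b[1:]):
            subs = []
            for ha, hb in zip(da, db):
                ea, eb = uniform_filter(ha * ha, 3), uniform_filter(hb * hb, 3)
                w = ea / (ea + eb + 1e-12)
                subs.append(w * ha + (1 - w) * hb)
            fused.append(tuple(subs))
        return pywt.waverec2(fused, wavelet)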

3.
Objective: To fuse PET/CT/MRI medical images so that the resulting image contains as much edge and texture information as possible, better distinguishes lesions and tumors from normal tissues and organs, and provides more useful information for diagnosis. Methods: A fusion method based on the non-subsampled shearlet transform (NSST) and the pulse-coupled neural network (PCNN) model is proposed. First, the NSST low-frequency coefficients are fused by weighting according to the local region energy sum; then, the NSST high-frequency directional coefficients are selected according to the firing counts of the PCNN neurons; finally, the fused image is obtained by the inverse NSST. Results: Fusion experiments on seven groups of MRI/PET and CT/PET images produced results with good visual quality that outperformed other algorithms in a combined evaluation of four metrics: mutual information, edge similarity, gradient similarity and spatial frequency. Conclusion: The method adaptively captures edge and texture information and gives good fusion results.
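A minimal sketch of the low-frequency rule described here (weighting by local region energy); the NSST decomposition and the PCNN rule for the directional subbands are not reproduced, and the 3×3 window is an assumption:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def fuse_lowpass_by_region_energy(low_a, low_b, window=3):
        """Weight two low-frequency subbands by their local region energy sums."""
        e_a = uniform_filter(low_a.astype(float) ** 2, window) * window ** 2
        e_b = uniform_filter(low_b.astype(float) ** 2, window) * window ** 2
        w = e_a / (e_a + e_b + 1e-12)
        return w * low_a + (1.0 - w) * low_b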

4.
A study of image fusion accuracy testing for PET/CT imaging
Objective: To test the image fusion accuracy of 3D whole-body PET/CT imaging under simulated and clinical conditions. Methods: A test phantom with its matching ²²Na solid point source was used to simulate body width and thickness, and point-source PET/CT image data were acquired; the same point source was then placed under the mattress on both sides of the examination couch, and PET/CT data were acquired with the couch unloaded and loaded with 85 kg. After image reconstruction, the centers of the CT and PET point-source images were located in the three sectional planes of the fused PET/CT image, and the deviation between the two centers in millimeters was measured as the PET/CT image fusion accuracy. Results: Misalignment between the fused PET and CT images was clearly visible to the naked eye. The maximum deviation was (4.25±0.26) mm for the phantom point source, (3.96±0.26) mm with the couch unloaded, and (5.36±0.26) mm with the couch loaded with 85 kg. Conclusion: The method measures PET/CT image fusion accuracy fairly precisely and can effectively guide acceptance testing and fault detection of PET/CT scanners.
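A simple stand-in for the center-offset measurement described above, using intensity-weighted centers of mass instead of per-plane manual center finding; the voxel-size argument and function name are illustrative:

    import numpy as np
    from scipy.ndimage import center_of_mass

    def fusion_offset_mm(ct_vol, pet_vol, voxel_mm):
        """Distance in mm between the CT and PET point-source centers on the
        common (fused) grid. voxel_mm = (dz, dy, dx) voxel size in mm."""
        c_ct = np.array(center_of_mass(np.clip(ct_vol.astype(float), 0, None)))
        c_pet = np.array(center_of_mass(np.clip(pet_vol.astype(float), 0, None)))
        return float(np.linalg.norm((c_ct - c_pet) * np.asarray(voxel_mm)))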

5.
Verification of target localization accuracy for radiotherapy planning on fused images and preliminary clinical results
Objective: To verify the target localization accuracy of image-fusion-based three-dimensional conformal radiotherapy, and to compare preliminary clinical results of target delineation based on fused images with delineation based on CT images alone. Methods: A custom phantom was imaged with CT, MRI and PET, and CT/MRI and CT/PET fusions were performed; the localization accuracy of the custom markers after fusion was verified. For three special cases, three-dimensional conformal radiotherapy targets were delineated on CT images alone and on fused images, and the target definitions were compared between different physicians and for the same physician at different times. Results: The overall localization accuracy of MRI/CT fusion was less than 2 mm. For PET/CT fusion (including same-scanner and separate-scanner fusion), localization accuracy differed significantly between fusion algorithms (P<0.01, t=5.385). When targets were defined on CT images alone, there were differences between physicians and across time (P<0.05), whereas the fusion technique reduced these disagreements and differences. Conclusion: Multimodality image fusion improves the accuracy of target definition and benefits precise three-dimensional conformal radiotherapy.

6.
Optimal parameters for wavelet-transform-based CT/PET image fusion
To improve the performance of wavelet-transform-based image fusion, a method for determining the optimal wavelet basis and decomposition level is proposed for a relatively fixed set of fusion rules. Starting from image entropy, the best decomposition level for each wavelet basis is selected by comparing how close the entropy difference of the low-frequency subband images is to the entropy difference of the original images; with the decomposition level fixed, the best wavelet basis is then selected with the help of fusion quality metrics. Compared with determining the wavelet parameters in a closed loop that feeds fusion-quality evaluation back into the selection, this method greatly simplifies the decision process. Applied to CT/PET image fusion, it produced good fusion results. The experiments show that the method is simple and feasible, and offers useful guidance for choosing wavelet parameters in wavelet-based image fusion.
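One plausible reading of the entropy criterion described here, sketched in Python: for each candidate level, compare the entropy difference of the two low-frequency subbands with the entropy difference of the original CT and PET images, and keep the closest level. The histogram bin count and the exact form of the comparison are assumptions:

    import numpy as np
    import pywt

    def entropy(img, bins=256):
        hist, _ = np.histogram(img, bins=bins)
        p = hist / max(hist.sum(), 1)
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def best_level(ct, pet, wavelet, max_level=5):
        """Level whose low-frequency subband entropy difference is closest to
        the entropy difference of the original images."""
        target = entropy(ct) - entropy(pet)
        scores = {}
        for lvl in range(1, max_level + 1):
            ll_ct = pywt.wavedec2(ct.astype(float), wavelet, level=lvl)[0]
            ll_pet = pywt.wavedec2(pet.astype(float), wavelet, level=lvl)[0]
            scores[lvl] = abs((entropy(ll_ct) - entropy(ll_pet)) - target)
        return min(scores, key=scores.get)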

7.
Comparative analysis of wavelet-transform-based medical image fusion algorithms
Wavelet-transform fusion has important practical value, and the choice of fusion rules directly affects the fusion result. To obtain a wavelet fusion algorithm that is practical for clinical use, standard CT/MRI images were used in simulation experiments that adjusted and combined various low-frequency and high-frequency fusion rules, and the effect of each rule on medical image fusion performance was analyzed in depth. On this basis, an improved algorithm combining maximum-energy selection for the low-frequency coefficients with maximum-absolute-value selection for the high-frequency coefficients is proposed; it clearly improves fusion quality and all objective evaluation metrics over fusion based on traditional wavelet rules and performs best among the algorithms compared. Validation with multi-focus images and clinical CT/MRI images confirms the effectiveness of the method. Theoretical analysis and experiments show that the choice of fusion rule strongly affects the result and that the proposed algorithm is simple and effective.
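A minimal PyWavelets sketch of the improved rule named in this abstract (low-frequency: larger local energy wins; high-frequency: larger absolute coefficient wins); the wavelet, level and window size are placeholder choices:

    import numpy as np
    import pywt
    from scipy.ndimage import uniform_filter

    def fuse_max_energy_max_abs(ct, mri, wavelet="db2", level=2, window=3):
        a = pywt.wavedec2(ct.astype(float), wavelet, level=level)
        b = pywt.wavedec2(mri.astype(float), wavelet, level=level)
        # low-frequency: take the coefficient whose local energy is larger
        ea = uniform_filter(a[0] ** 2, window)
        eb = uniform_filter(b[0] ** 2, window)
        fused = [np.where(ea >= eb, a[0], b[0])]
        # high-frequency: take the coefficient with the larger absolute value
        for da, db in zip(a[1:], b[1:]):
            fused.append(tuple(np.where(np.abs(ha) >= np.abs(hb), ha, hb)
                               for ha, hb in zip(da, db)))
        return pywt.waverec2(fused, wavelet)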

8.
To address the low diagnostic accuracy of three existing computer-aided diagnosis approaches based on neural networks, cluster analysis and support vector machines, this study proposes a new random-forest-based computer-aided diagnosis method for lung tumors in PET/CT images. The method first preprocesses the PET/CT images, including grayscale conversion, smoothing and segmentation; it then extracts grayscale, shape and texture features from the images; finally, a random forest classifier performs the auxiliary recognition of lung tumors on PET/CT to support pathological diagnosis. The results show that the ROC curve of this method is better than those of the three methods above, improving diagnostic accuracy and providing a useful reference for clinicians.
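A minimal scikit-learn sketch of the random-forest classification step; the synthetic feature matrix stands in for the gray-level, shape and texture features described above, and the forest size and split are arbitrary:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    # Stand-in data: one row of ROI features per case and a pathology label
    # (1 = tumor, 0 = benign); real features would come from the preprocessing
    # and feature-extraction steps described in the abstract.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 30))
    y = (X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=400) > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))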

9.
This study proposes a new ensemble-SVM-based computer-aided diagnosis method for lung tumors that uses three modalities of PET/CT data. First, ROI regions of the same lesion are extracted from clinically acquired tri-modal image data of lung tumor patients, with 2000 cases each of PET, CT and PET/CT. Then, according to the different characteristics of CT, PET and PET/CT, shape features, grayscale features, Tamura texture features and GLCM features are extracted from the ROI regions of the three modalities, forming feature vectors of 80, 98 and 98 dimensions respectively, and an individual classifier is built in each feature space: CT-SVM, PET-SVM and PET/CT-SVM. Finally, CT-SVM, PET-SVM and PET/CT-SVM are combined under the relative majority voting rule to recognize lung tumors. The experimental results show that the method effectively improves the diagnostic accuracy for lung tumors.
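A short sketch of the per-modality SVMs combined by majority vote; the kernel choice and function signature are assumptions, and the feature matrices are expected to come from the modality-specific feature extraction described above:

    import numpy as np
    from sklearn.svm import SVC

    def ensemble_predict(feats_ct, feats_pet, feats_petct, labels,
                         test_ct, test_pet, test_petct):
        """Train one SVM per modality-specific feature space and combine the
        three predictions by relative majority vote."""
        svms, tests = [], [test_ct, test_pet, test_petct]
        for X in (feats_ct, feats_pet, feats_petct):
            svms.append(SVC(kernel="rbf").fit(X, labels))
        votes = np.stack([clf.predict(t) for clf, t in zip(svms, tests)])  # (3, n)
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(),
                                   0, votes.astype(int))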

10.
In this paper we study multimodality medical image registration using an algorithm based on iso-valued feature surfaces of CT, MR and PET images. Iso-valued feature surfaces are extracted from the original CT, MR and PET images and used for geometric alignment, the results are evaluated preliminarily, and the robustness of the algorithm, the nearest-point search strategy and the sampling strategy are investigated. The results show that this method achieves sub-pixel registration accuracy and is a robust, high-precision, fully automatic registration method.

11.
Feature-based registration is an effective technique for clinical use, because it can greatly reduce computational costs. However, this technique, which estimates the transformation using feature points extracted from two images, may cause misalignments, particularly in brain PET and CT images, which have low correspondence rates between features due to differences in image characteristics. To cope with this limitation, we propose a robust feature-based registration technique using a Gaussian-weighted distance map (GWDM) that finds the best alignment of feature points even when the features of the two images are mismatched. A GWDM is generated by propagating the value of a Gaussian-weighted mask from the feature points of CT images and guides the feature points of PET images to an optimal location even when there is a localization error between the feature points extracted from PET and CT images. Feature points are extracted from the two images by our automatic brain segmentation method. In our experiments, simulated and clinical data sets were used to compare our method with conventional methods such as normalized mutual information (NMI)-based registration and chamfer matching in accuracy, robustness, and computational time. Experimental results showed that our method aligned the images robustly even in cases where conventional methods failed to find optimal locations. In addition, the accuracy of our method was comparable to that of the NMI-based registration method.
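A rough stand-in for the GWDM idea, not the authors' exact construction: impulses at the CT feature points are spread with a Gaussian so that an alignment score decays smoothly with distance, and the (transformed) PET feature points are scored against that map. The sigma value and function names are assumptions:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def gaussian_weighted_map(shape, ct_points, sigma=3.0):
        """Build a smooth map that peaks at the CT feature points (z, y, x)."""
        m = np.zeros(shape, dtype=float)
        for z, y, x in ct_points:
            m[int(z), int(y), int(x)] = 1.0
        return gaussian_filter(m, sigma)

    def alignment_score(gwdm, pet_points):
        """Higher when the (transformed) PET feature points fall near CT features."""
        pts = np.round(np.asarray(pet_points)).astype(int)
        inside = np.all((pts >= 0) & (pts < np.array(gwdm.shape)), axis=1)
        return float(gwdm[tuple(pts[inside].T)].sum())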

12.
A robust and fast hybrid method using a shell volume that consists of high-contrast voxels and their neighbors is proposed for registering PET and MR/CT brain images. Whereas conventional hybrid methods find the best matched pairs from several manually selected or automatically extracted local regions, our method automatically selects a shell volume in the PET image and finds the best matched corresponding volume using normalized mutual information (NMI) in the overlapping volumes while transforming the shell volume into the MR or CT image. A shell volume not only reduces irrelevant corresponding voxels between the two images during optimization of the transformation parameters, but also yields a more robust registration with less computational cost. Experimental results on clinical data sets showed that our method successfully aligned all PET and MR/CT image pairs without losing any diagnostic information, while the conventional registration methods failed in some cases.
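For reference, a minimal NumPy sketch of the NMI similarity measure named in this abstract (the shell-volume selection and the optimizer are not reproduced; the bin count is an assumption):

    import numpy as np

    def normalized_mutual_information(a, b, bins=64):
        """NMI = (H(A) + H(B)) / H(A, B) over a joint histogram of overlapping voxels."""
        hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = hist / hist.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        def h(p):
            p = p[p > 0]
            return -(p * np.log(p)).sum()
        return (h(px) + h(py)) / h(pxy.ravel())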

13.
Starting from the characteristics of PET-CT multimodal image series, we propose a new registration and fusion method. Cubic spline interpolation is used for inter-slice interpolation of the PET-CT images, the maximum mutual information method is then used for registration, and finally an improved principal component analysis (PCA) method fuses the PET-CT images to enhance the PET visualization, yielding satisfactory registration and fusion results. Using cubic spline interpolation between slices recovers the information of missing slices, compensates for shortcomings of existing registration methods, improves registration accuracy, and makes the fused image closer to the actual physical section. The method has been successfully applied in the development of a three-dimensional conformal radiotherapy (3D-CRT) system.

14.
Image fusion integrates information from one image into another. By their nature, medical images are divided into structural (such as CT and MRI) and functional (such as SPECT and PET). This article fuses MRI and PET images with the aim of adding structural information from MRI to the functional information of PET. The images are decomposed with the nonsubsampled contourlet transform and then fused by applying fusion rules: the coefficients of the low-frequency band are combined by a maximal-energy rule, and the coefficients of the high-frequency bands are combined by a maximal-variance rule. Finally, visual and quantitative criteria were used to evaluate the fusion result. In the visual evaluation the opinions of two radiologists were used; in the quantitative evaluation the proposed fusion method was compared with six existing methods using entropy, mutual information, discrepancy and overall performance as criteria.
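A short sketch of the two band-combination rules named in this abstract, applied to generic subband arrays (the nonsubsampled contourlet decomposition itself is not reproduced here, and the window size is an assumption):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def fuse_low_by_max_energy(low_a, low_b, window=3):
        """Low-frequency band: keep the coefficient with the larger local energy."""
        ea = uniform_filter(low_a.astype(float) ** 2, window)
        eb = uniform_filter(low_b.astype(float) ** 2, window)
        return np.where(ea >= eb, low_a, low_b)

    def fuse_high_by_max_variance(high_a, high_b, window=3):
        """High-frequency bands: keep the coefficient with the larger local variance."""
        def local_var(x):
            x = x.astype(float)
            return uniform_filter(x * x, window) - uniform_filter(x, window) ** 2
        return np.where(local_var(high_a) >= local_var(high_b), high_a, high_b)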

15.
The quality of dosimetry in radiotherapy treatment requires accurate delimitation of the gross tumor volume. This can be achieved by complementing the anatomical detail provided by CT images through fusion with other imaging modalities that provide additional metabolic and physiological information. Therefore, the use of multiple imaging modalities for radiotherapy treatment planning requires an accurate image registration method. This work describes tests carried out on a Discovery LS positron emission/computed tomography (PET/CT) system by General Electric Medical Systems (GEMS), for its later use to obtain images to delimit the target in radiotherapy treatment. Several phantoms were used to verify image correlation, in combination with fiducial markers used as a system of external landmarks. We analyzed the geometrical accuracy of two different fusion methods with the images obtained with these phantoms. We first studied the fusion method used by the PET/CT system by GEMS (hardware fusion), which rests on satisfactory coincidence between the reconstruction centers of the CT and PET systems; we then studied fiducial fusion, a registration method based on a least-squares fit of a system of landmark points. The study concluded with verification of the centroid position of some phantom components in both imaging modalities. Centroids were estimated through a center-of-mass-like calculation, weighted by the CT number and by the uptake intensity in PET. The mean deviations (|Δx| ± σ) found for the hardware fusion method were 3.3 mm ± 1.0 mm and 3.6 mm ± 1.0 mm. These values improved substantially upon applying the fiducial fusion based on external landmark points: 0.7 mm ± 0.8 mm and 0.3 mm ± 1.7 mm. We also noted that the differences found for each of the fusion methods were similar for both the axial and helical CT image acquisition protocols.

16.
Color blending is a popular display method for functional and anatomic image fusion. The underlay image is typically displayed in grayscale, and the overlay image is displayed in pseudo colors. This pixel-level fusion provides too much information for reviewers to analyze quickly and effectively and clutters the display. To improve fusion image reviewing speed and reduce the information clutter, a pixel-feature hybrid fusion method is proposed and tested for PET/CT images. Segments of the colormap are selectively masked to a few discrete colors, and pixels displayed in the masked colors are made transparent. The colormap thus creates a false contouring effect on overlay images and allows the underlay to show through to give the contours an anatomic context. The PET standardized uptake value (SUV) is used to control where colormap segments are masked. Examples show that SUV features can be extracted and blended with the CT image instantaneously for viewing and diagnosis, and the non-feature part of the PET image is transparent. The proposed pixel-feature hybrid fusion highlights PET SUV features on CT images and reduces display clutter. It is easy to implement and can be used complementarily to existing pixel-level fusion methods.
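An approximation of the display idea described above, using a single SUV cutoff rather than the paper's selectively masked colormap segments: PET pixels below the cutoff are made fully transparent so the CT underlay shows through. The threshold, colormap and alpha value are assumptions:

    import numpy as np
    import matplotlib.pyplot as plt

    def show_suv_feature_overlay(ct_slice, suv_slice, suv_threshold=2.5):
        """Grayscale CT underlay with only the PET pixels above an SUV cutoff drawn."""
        overlay = np.ma.masked_less(suv_slice, suv_threshold)  # masked pixels stay transparent
        plt.imshow(ct_slice, cmap="gray")
        plt.imshow(overlay, cmap="hot", alpha=0.6)
        plt.axis("off")
        plt.show()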

17.
Registration based on mutual information is currently a popular technique for medical image registration, but the computation of the mutual information is complex and the registration speed is slow. In this work, a new slice accumulation pyramid (SAP) data structure was proposed to expedite the registration process. A numerical comparative study between the new data structure and the existing wavelet pyramid (WP) data structure was carried out, and the results confirmed that the new pyramid data structure was superior to the WP in both calculation efficiency and optimization performance. Finally, the SAP was applied to remove artifacts between CT and MRI data sets, and the results demonstrated its validity for the registration of multimodality images.

18.
Positron emission tomography (PET) provides important information on tumor biology, but lacks detailed anatomical information. Our aim in the present study was to develop and validate an automatic registration method for matching PET and CT scans of the head and neck. Three difficulties in achieving this goal are (1) nonrigid motions of the neck can hamper the use of automatic rigid-body transformations; (2) emission scans contain too little anatomical information to apply standard image fusion methods; and (3) no objective way exists to quantify the quality of the match results. These problems are solved as follows: accurate and reproducible positioning of the patient was achieved by using a radiotherapy treatment mask. The proposed method makes use of the transmission rather than the emission scan. To obtain sufficient (anatomical) information for matching, two bed positions for the transmission scan were included in the protocol. A mutual information-based algorithm was used as the registration technique. PET and CT data were obtained in seven patients; each patient had two CT scans and one PET scan. The datasets were used to estimate consistency by matching PET to CT1, CT1 to CT2, and CT2 to PET using the full-circle consistency test. Using our method, an average consistency of 4 mm and 1.3 degrees was obtained. The PET voxels used for registration were 5.15 mm, so the errors compared quite favorably with the voxel size. Cropping the images (removing the scanner bed from the images) did not improve the consistency of the algorithm. The transmission scan, however, could potentially be reduced to a single position using this approach. In conclusion, the presented algorithm and validation technique have several features that are attractive from both theoretical and practical points of view: the technique is a user-independent, automatic validation method for matching CT and PET scans of the head and neck, and it makes it possible to compare different image enhancements.
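A minimal sketch of the full-circle consistency idea used above, assuming the three registrations are expressed as 4×4 rigid transforms: composing PET→CT1, CT1→CT2 and CT2→PET should give the identity, and the residual translation and rotation quantify the consistency. The function name and argument order are assumptions:

    import numpy as np

    def circle_residual(t_pet_ct1, t_ct1_ct2, t_ct2_pet):
        """Residual translation (mm) and rotation (degrees) of the composed loop."""
        m = t_ct2_pet @ t_ct1_ct2 @ t_pet_ct1          # ideally the identity
        translation = float(np.linalg.norm(m[:3, 3]))
        # rotation angle from the trace of the residual rotation matrix
        cos_theta = np.clip((np.trace(m[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
        return translation, float(np.degrees(np.arccos(cos_theta)))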

19.
In recent years, many medical image fusion methods have been developed to derive useful information from multimodality medical image data, but there has been no fusion algorithm well suited to anatomical and functional medical images. In this paper, the traditional wavelet fusion method is improved and a new fusion algorithm for anatomical and functional medical images is proposed, in which the high-frequency and low-frequency coefficients are treated separately. For the high-frequency coefficients, the global gradient of each subimage is calculated to realize adaptive fusion, so that the fused image preserves the functional information; the low-frequency coefficients are chosen based on an analysis of neighborhood region energy, so that the fused image preserves the edge and texture features of the anatomical image. Experimental results and the quality evaluation parameters show that the improved fusion algorithm enhances edge and texture features and effectively retains both functional and anatomical information.

