Image fusion integrates information from one image into another. Medical images are divided, according to their nature, into structural (e.g., CT and MRI) and functional (e.g., SPECT and PET) modalities. This article fuses MRI and PET images with the aim of adding structural information from MRI to the functional information of PET. The images were decomposed with the nonsubsampled contourlet transform (NSCT), and the two images were then fused by applying fusion rules: coefficients of the low-frequency band were combined by a maximal-energy rule, and coefficients of the high-frequency bands by a maximal-variance rule. Finally, visual and quantitative criteria were used to evaluate the fusion result. For visual evaluation, the opinions of two radiologists were used; for quantitative evaluation, the proposed fusion method was compared with six existing methods using entropy, mutual information, discrepancy, and overall performance as criteria.
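The two fusion rules above can be sketched in a few lines of numpy. This is a minimal illustration only: it assumes the NSCT sub-bands are already available as 2-D arrays, uses a 3×3 local window, and breaks ties toward the first image; none of these choices is specified by the abstract.

```python
import numpy as np

def _window_sums(arr, r=1):
    """Sum each (2r+1)x(2r+1) neighborhood via shifted additions."""
    p = np.pad(arr, r, mode="reflect")
    out = np.zeros_like(arr, dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + arr.shape[0], dx:dx + arr.shape[1]]
    return out

def local_energy(c, r=1):
    """Local energy: sum of squared coefficients in the window."""
    return _window_sums(c.astype(float) ** 2, r)

def local_variance(c, r=1):
    """Local variance: E[c^2] - (E[c])^2 over the window."""
    n = (2 * r + 1) ** 2
    s = _window_sums(c.astype(float), r)
    s2 = _window_sums(c.astype(float) ** 2, r)
    return s2 / n - (s / n) ** 2

def fuse_low(a, b):
    """Maximal-energy rule for the low-frequency band."""
    return np.where(local_energy(a) >= local_energy(b), a, b)

def fuse_high(a, b):
    """Maximal-variance rule for the high-frequency bands."""
    return np.where(local_variance(a) >= local_variance(b), a, b)
```

In practice these selectors would be applied per sub-band before the inverse NSCT reconstructs the fused image.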
In recent years, many medical image fusion methods have been developed to derive useful information from multimodality medical image data, but there is still no fully appropriate fusion algorithm for anatomical and functional medical images. In this paper, the traditional wavelet fusion method is improved and a new fusion algorithm for anatomical and functional medical images is proposed, in which high-frequency and low-frequency coefficients are treated separately. When choosing high-frequency coefficients, the global gradient of each sub-image is calculated to realize adaptive fusion, so that the fused image preserves the functional information; the low-frequency coefficients are chosen based on an analysis of the neighborhood-region energy, so that the fused image preserves the anatomical image's edge and texture features. Experimental results and the quality-evaluation parameters show that the improved fusion algorithm enhances edge and texture features and retains the functional and anatomical information effectively.
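The two selection criteria named above can be sketched as follows. The sketch assumes wavelet sub-images are given as 2-D arrays; the exact gradient operator, window size, and the paper's adaptive weighting scheme are not specified in the abstract, so simple stand-ins (numpy's central-difference gradient, a 3×3 energy window, hard selection) are used.

```python
import numpy as np

def global_gradient(sub):
    """Mean gradient magnitude of a high-frequency sub-image."""
    gy, gx = np.gradient(sub.astype(float))
    return np.mean(np.hypot(gx, gy))

def select_high(sub_a, sub_b):
    """Take the whole sub-band from the source with the larger global gradient."""
    return sub_a if global_gradient(sub_a) >= global_gradient(sub_b) else sub_b

def neighborhood_energy(c, r=1):
    """Sum of squared coefficients over a (2r+1)x(2r+1) neighborhood."""
    p = np.pad(c.astype(float) ** 2, r, mode="edge")
    e = np.zeros(c.shape)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            e += p[dy:dy + c.shape[0], dx:dx + c.shape[1]]
    return e

def select_low(low_a, low_b):
    """Pixel-wise low-frequency choice by neighborhood-region energy."""
    return np.where(neighborhood_energy(low_a) >= neighborhood_energy(low_b),
                    low_a, low_b)
```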
Fusion of CT and MR images allows simultaneous visualization of the details of bony anatomy provided by the CT image and the details of soft-tissue anatomy provided by the MR image. This helps the radiologist make a precise diagnosis and carry out more effective interventional treatment procedures. This paper aims at designing an effective CT and MR image fusion method. In the proposed method, the source images are first decomposed using the nonsubsampled contourlet transform (NSCT), a shift-invariant, multiresolution, and multidirection image decomposition transform. Maximum entropy of the squares of the coefficients within a local window is used for low-frequency sub-band coefficient selection, and maximum weighted sum-modified Laplacian is used for high-frequency sub-band coefficient selection. Finally, the fused image is obtained through the inverse NSCT. CT and MR images of different cases have been used to test the proposed method, and the results are compared with those of other conventional image fusion methods. Both visual analysis and quantitative evaluation of the experimental results show the superiority of the proposed method over the other methods.
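The sum-modified Laplacian (SML) used here as the high-frequency activity measure has a standard definition that can be sketched directly; the weighting of the paper's "weighted" variant is not given in the abstract, so the plain SML with a 3×3 window is shown as an illustrative stand-in.

```python
import numpy as np

def modified_laplacian(c):
    """ML(i,j) = |2c - left - right| + |2c - up - down|."""
    p = np.pad(c.astype(float), 1, mode="reflect")
    mlx = np.abs(2 * p[1:-1, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:])
    mly = np.abs(2 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1])
    return mlx + mly

def sum_modified_laplacian(c, r=1):
    """Sum the ML over a (2r+1)x(2r+1) window around each coefficient."""
    ml = modified_laplacian(c)
    p = np.pad(ml, r, mode="reflect")
    s = np.zeros_like(ml)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            s += p[dy:dy + ml.shape[0], dx:dx + ml.shape[1]]
    return s

def fuse_high_sml(ca, cb):
    """Pick the high-frequency coefficient with the larger SML activity."""
    return np.where(sum_modified_laplacian(ca) >= sum_modified_laplacian(cb),
                    ca, cb)
```

The low-frequency entropy-of-squares rule follows the same select-by-local-activity pattern, with windowed entropy replacing the SML.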
In this paper, a detail-enhanced multimodality medical image fusion algorithm is proposed, based on a multi-scale joint decomposition framework (MJDF) and a shearing filter (SF). The MJDF, constructed with a gradient-minimization smoothing filter (GMSF) and a Gaussian low-pass filter (GLF), decomposes the source images into low-pass layers, edge layers, and detail layers at multiple scales. To highlight detail information in the fused image, the edge layer and the detail layer at each scale are combined, with weights, into a detail-enhanced layer. Because a directional filter is effective at capturing salient information, the SF is applied to the detail-enhanced layer to extract geometrical features and obtain directional coefficients. A visual-saliency-map-based fusion rule is designed for fusing the low-pass layers, and the sum of standard deviations is used as the activity-level measurement for directional-coefficient fusion. The final fusion result is obtained by synthesizing the fused low-pass layers and directional coefficients. Experimental results show that the proposed method, with its shift invariance, directional selectivity, and detail-enhancement property, is efficient at preserving and enhancing the detail information of multimodality medical images.
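One scale of the joint decomposition can be sketched as below. The GMSF is an L0-gradient-minimization smoother whose implementation is beyond an abstract-level sketch, so a second Gaussian blur stands in for it here; the layer algebra (low-pass, edge, and detail layers that sum back to the input) and the weighted detail-enhanced combination follow the description above, with the weights chosen arbitrarily for illustration.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian low-pass filter (the GLF; also our GMSF stand-in)."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    p = np.pad(img.astype(float), r, mode="reflect")
    tmp = np.zeros_like(p)
    for i, w in enumerate(k):          # vertical pass
        tmp += w * np.roll(p, i - r, axis=0)
    out = np.zeros_like(p)
    for i, w in enumerate(k):          # horizontal pass
        out += w * np.roll(tmp, i - r, axis=1)
    return out[r:-r, r:-r]

def mjdf_layers(img, edge_sigma=1.0, lp_sigma=3.0):
    """One scale of the joint decomposition: low-pass, edge, detail layers."""
    smooth = gaussian_blur(img, edge_sigma)   # GMSF stand-in
    low = gaussian_blur(img, lp_sigma)        # GLF
    edge = smooth - low                       # structure between the two scales
    detail = img - smooth                     # fine residual detail
    return low, edge, detail

def detail_enhanced(edge, detail, w_edge=1.0, w_detail=1.5):
    """Weighted combination that highlights the detail information."""
    return w_edge * edge + w_detail * detail
```

With unit weights the three layers reconstruct the input exactly, which is the property the multi-scale framework relies on.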
Color blending is a popular display method for functional and anatomic image fusion. The underlay image is typically displayed in grayscale, and the overlay image is displayed in pseudocolor. This pixel-level fusion provides too much information for reviewers to analyze quickly and effectively, and it clutters the display. To improve fusion-image reviewing speed and reduce the information clutter, a pixel-feature hybrid fusion method is proposed and tested for PET/CT images. Segments of the colormap are selectively masked so that only a few discrete colors remain, and pixels displayed in the masked colors are made transparent. The colormap thus creates a false-contouring effect on the overlay image and allows the underlay to show through, giving the contours an anatomic context. The PET standardized uptake value (SUV) is used to control where colormap segments are masked. Examples show that SUV features can be extracted and blended with the CT image instantaneously for viewing and diagnosis, while the non-feature part of the PET image remains transparent. The proposed pixel-feature hybrid fusion highlights PET SUV features on CT images and reduces display clutter. It is easy to implement and can be used as a complement to existing pixel-level fusion methods.
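The masking idea above can be sketched as follows: the CT underlay stays grayscale, and only pixels whose SUV exceeds the lowest band threshold receive one of a few discrete colors; everything else is "transparent" (i.e., the CT shows through unchanged). The SUV thresholds, the three colors, and the blend weight are illustrative values, not the paper's.

```python
import numpy as np

def hybrid_fuse(ct, suv,
                bands=((2.5, (1, 1, 0)),    # illustrative SUV bands/colors
                       (5.0, (1, 0.5, 0)),
                       (8.0, (1, 0, 0))),
                alpha=0.6):
    """Pixel-feature hybrid display: grayscale CT underlay, with PET shown
    only where SUV >= the first band threshold, in discrete colors."""
    ct = ct.astype(float)
    ct = (ct - ct.min()) / (ct.max() - ct.min() + 1e-9)
    rgb = np.stack([ct] * 3, axis=-1)         # grayscale underlay
    overlay = np.zeros_like(rgb)
    mask = np.zeros(suv.shape, dtype=bool)
    for thr, color in bands:                  # discrete false-contour colors
        sel = suv >= thr
        overlay[sel] = color
        mask |= sel
    # Blend only where a feature exists; masked colors stay fully transparent.
    rgb[mask] = (1 - alpha) * rgb[mask] + alpha * overlay[mask]
    return rgb
```

Because each higher band overwrites the lower one, the result is a stepped false-contour overlay whose boundaries double as iso-SUV contours on the CT anatomy.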