Similar Articles
1.
N C Phelan  J T Ennis 《Medical physics》1999,26(8):1607-1611
Image compression is fundamental to the efficient and cost-effective use of digital medical imaging technology and applications. Wavelet transform techniques currently provide the most promising approach to the high-quality image compression that is essential for diagnostic medical applications. A novel approach to image compression based on the wavelet decomposition has been developed which uses the shape, or morphology, of wavelet transform coefficients in the wavelet domain to isolate and retain significant coefficients corresponding to image structure and features. The remaining coefficients are further compressed using a combination of run-length and Huffman coding. The technique has been implemented and applied to full 16-bit medical image data for a range of compression ratios. Objective peak signal-to-noise ratio performance of the compression technique was analyzed. Results indicate that good reconstructed image quality can be achieved at compression ratios of up to 15:1 for the image types studied. This technique represents an effective approach to the compression of diagnostic medical images and is worthy of further, more thorough evaluation of diagnostic quality and accuracy in a clinical setting.
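The pipeline this abstract describes — wavelet decomposition, retention of significant coefficients, then run-length coding of the rest — can be sketched in miniature with a single-level 1-D Haar transform. This is an illustrative toy, not the authors' method: their morphology-based coefficient selection and the Huffman stage are omitted, and all function names are ours.

```python
def haar_forward(x):
    """One level of the unnormalized Haar transform: averages + details."""
    avg = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    det = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return avg, det

def threshold(coeffs, t):
    """Zero out coefficients below magnitude t; keep 'significant' ones."""
    return [c if abs(c) >= t else 0 for c in coeffs]

def run_length_encode(coeffs):
    """Encode as (value, run length) pairs; long zero runs compress well."""
    out = []
    for c in coeffs:
        if out and out[-1][0] == c:
            out[-1] = (c, out[-1][1] + 1)
        else:
            out.append((c, 1))
    return out

signal = [10, 10, 10, 10, 50, 52, 10, 10]
avg, det = haar_forward(signal)        # smooth part + detail coefficients
encoded = run_length_encode(threshold(det, 1.0))
```

Flat image regions yield near-zero detail coefficients, which thresholding turns into exact zeros, so the run-length stage sees the long zero runs the abstract refers to.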

2.
3.
Quality Degradation in Lossy Wavelet Image Compression
The objective of this study was to develop a method for measuring quality degradation in lossy wavelet image compression. Quality degradation is due to denoising and edge-blurring effects that cause smoothness in the compressed image. The peak Moran z histogram ratio between the reconstructed and original images is used as an index of degradation after image compression. The Moran test is applied to images randomly selected from each medical modality (computerized tomography, magnetic resonance imaging, and computed radiography) and compressed using wavelet compression at various levels. The relationship between quality degradation and compression ratio for each image modality agrees with previous reports that showed a preference for mildly compressed images. Preliminary results show that the peak Moran z histogram ratio can be used to quantify quality degradation in lossy image compression. A potential application of this method is determining the optimal compression ratio of an image for teleradiology (the maximum compression that does not seriously degrade image quality).
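The Moran statistic underlying this index measures spatial autocorrelation: a heavily compressed (smoothed) image has more positively correlated neighbouring pixels. Here is a minimal pure-Python Moran's I with rook (4-neighbour) weights — a generic sketch of the statistic, not the paper's peak-z-histogram procedure:

```python
def morans_i(grid):
    """Moran's I over a 2-D grid with rook (horizontal/vertical) adjacency.
    I > 0: neighbours tend to be similar (smooth); I < 0: dissimilar."""
    n_rows, n_cols = len(grid), len(grid[0])
    vals = [v for row in grid for v in row]
    n = len(vals)
    mean = sum(vals) / n
    dev = [[v - mean for v in row] for row in grid]
    cross, weight_sum = 0.0, 0
    for i in range(n_rows):
        for j in range(n_cols):
            if j + 1 < n_cols:                      # right neighbour
                cross += dev[i][j] * dev[i][j + 1]; weight_sum += 1
            if i + 1 < n_rows:                      # lower neighbour
                cross += dev[i][j] * dev[i + 1][j]; weight_sum += 1
    denom = sum(d * d for row in dev for d in row)
    return (n / weight_sum) * (cross / denom)

# A checkerboard is maximally anti-correlated; a banded gradient is smooth.
checker = morans_i([[0, 1], [1, 0]])
smooth = morans_i([[1, 1, 1], [2, 2, 2], [3, 3, 3]])
```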

4.
New nonlinear image processing techniques, in particular smoothing based on an understanding of the image, may create computerized tomography (CT) images of good quality using less radiation. Such techniques may be applied before the reconstruction and particularly after it. Current CT scanners use strong linear low-pass filters applied to the CT projections, reducing noise but also deteriorating the resolution of the image. The method in this study was to apply a weak low-pass filter on the projections, to perform the reconstruction, and only then to apply a nonlinear filter on the image. Various kinds of nonlinear filters were investigated based on the fact that the image is approximately piecewise constant. The filters were applied with many values of several parameters, and the effects on spatial resolution and noise reduction were evaluated. The signal-to-noise ratio (SNR) of a high-contrast phantom image processed with the nonlinear filter was compared with the SNR of phantom images obtained with the built-in CT linear filters in two scanning modes, the normal and the ultra-high-resolution (UHR) modes. It was found that the nonlinear filters improve the SNR of the image, compared to the built-in filters, by about a factor of three for the normal mode and a factor of two for the UHR scanning mode. The most successful filter was also applied to a low-contrast phantom image and seems to lead to promising results there as well. These results suggest that applying nonlinear filters to CT images might yield better image quality than the current linear filters. (ABSTRACT TRUNCATED AT 250 WORDS)
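One classic nonlinear filter suited to approximately piecewise-constant images is the median filter: it removes impulse-like noise while leaving step edges intact, which a linear low-pass filter cannot do. A 1-D sketch for illustration (the study's actual filters and parameter values are not specified here):

```python
def median_filter(x, radius=1):
    """Replace each sample by the median of its (2*radius+1)-wide window.
    Windows are clamped to the signal at the edges."""
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - radius), min(len(x), i + radius + 1)
        window = sorted(x[lo:hi])
        out.append(window[len(window) // 2])
    return out

# The lone noise spike at index 2 is removed; the step edge is preserved.
noisy_edge = [10, 10, 80, 10, 10, 200, 200, 200]
filtered = median_filter(noisy_edge)   # → [10, 10, 10, 10, 10, 200, 200, 200]
```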

5.
The purpose of this report is to assess the effect of clinically acceptable compression ratios on the detection of brain lesions at magnetic resonance imaging (MRI). Four consecutive T2-weighted and the corresponding T1-weighted images obtained in 20 patients were studied for 109 anatomic sites, including 50 with lesions and 59 without lesions. The images were obtained on a 1.5-T MR unit with a pixel size of 0.9 to 1.2 x 0.47 mm and a section thickness of 5 mm. The image data were compressed by a wavelet-based algorithm at ratios of 20:1, 40:1, and 60:1. Three radiologists reviewed these images on an interactive workstation and rated the presence or absence of a lesion with a 50-point scale for each anatomic site. The authors also evaluated the influence of pixel size on the quality of image compression. At receiver operating characteristic (ROC) analysis, no statistically significant difference was detected at a compression ratio of 20:1. A significant difference was observed with 40:1 compressed images for one reader (P = .023), and with 60:1 for all readers (P = .001 to .012). The root mean squared error (RMSE) was higher in 0.94- x 0.94-mm pixel size images than in 0.94- x 0.47-mm pixel size images at every compression ratio, indicating that compression tolerance is lower for the larger pixel size. The RMSE, subjective image quality, and error images of 10:1 compressed 0.94- x 0.94-mm pixel size images were comparable with those of 20:1 compressed 0.94- x 0.47-mm pixel size images. Wavelet compression can be clinically acceptable at ratios as high as 20:1 for brain MR images when the pixel size at image acquisition is around 1.0 x 0.5 mm, and as high as 10:1 for those with a pixel size around 1.0 x 1.0 mm.

6.
In this paper, a 3-channel wavelet transform method is introduced to recover palm print images following serious deformation. The deformation processing is essentially a kind of digital re-sampling. A filter bank consisting of three filters is implemented for wavelet decomposition of the palm print image, and a procedure of binary interpolation is then performed after the image is reconstructed by another filter bank of three filters. The design of the multi-channel wavelet filter bank is based on the Quadrature Mirror Filter (QMF) method. Because the noise is caused by Moiré stripes, the images are further de-noised after the geometric deformation is addressed. Acceptable results have been obtained.
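For readers unfamiliar with the QMF construction mentioned here, the classic two-band case derives the high-pass analysis filter from the low-pass one by alternating tap signs, h1[n] = (-1)^n · h0[n]. This is a generic two-band sketch with the Haar low-pass filter, not the paper's three-channel bank:

```python
import math

def qmf_highpass(h0):
    """Quadrature mirror of a low-pass filter: negate every other tap."""
    return [((-1) ** n) * h for n, h in enumerate(h0)]

h0 = [1 / math.sqrt(2), 1 / math.sqrt(2)]   # Haar low-pass
h1 = qmf_highpass(h0)                        # mirrored high-pass
```

The mirrored filter rejects the DC component (its taps sum to zero), which is what makes the pair split a signal into smooth and detail channels.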

7.
We attempt to develop a systematic scheme, based on high-pass filtering (HPF), to better resolve value-preserving images such as medical images. Our approach is derived from the Poisson maximum a posteriori superresolution algorithm employing HP filters; four filters are considered: two filters based on low-pass-filter combinations, a wavelet filter, and a negative-oriented Laplacian HP filter. The proposed approach is incorporated into the procedure of finite-element-method (FEM)-based image reconstruction for diffuse optical tomography in the direct-current domain, posterior to each iteration, without altering the original FEM modeling. This approach is justified with various HP filters for different cases in which breast-like phantoms embedded with two or three tumor-imitating inclusions are employed to examine resolution performance under certain extreme conditions. The proposed approach to enhancing image resolution is evaluated for all tested cases. A qualitative investigation of reconstruction performance for each case is presented. Following this, we define a set of quantitative measures covering a range of resolution properties, including separation, size, contrast, and location, thereby providing an evaluation comparable to visual quality. The most satisfactory result is obtained with the wavelet HP filter, which successfully justifies our proposed scheme.
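As a rough illustration of the kind of high-pass filtering applied after each reconstruction iteration, here is a plain 3×3 Laplacian convolution. The kernel choice and valid-region boundary handling are our assumptions, not taken from the paper:

```python
# 3x3 Laplacian kernel: responds to local intensity peaks, zero on flat regions.
KERNEL = [[0, -1, 0],
          [-1, 4, -1],
          [0, -1, 0]]

def laplacian_hp(img):
    """Valid-region convolution of a 2-D list-of-lists image with KERNEL."""
    rows, cols = len(img), len(img[0])
    out = []
    for i in range(1, rows - 1):
        row = []
        for j in range(1, cols - 1):
            acc = sum(KERNEL[di][dj] * img[i - 1 + di][j - 1 + dj]
                      for di in range(3) for dj in range(3))
            row.append(acc)
        out.append(row)
    return out

flat = [[5] * 3 for _ in range(3)]             # no high-frequency content
peak = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]       # a sharp local feature
```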

8.
The modified embedded zero-tree wavelet (MEZW) compression algorithm for one-dimensional signals was derived from Shapiro's EZW algorithm, originally developed for image compression. The proposed codec is significantly more efficient in compression and in computation than previously proposed ECG compression schemes. The coder also attains exact bit rate control and generates a bit stream progressive in quality or rate. The EZW and MEZW algorithms apply chosen threshold values or expressions to identify the significant transformed coefficients. Thus, two different threshold definitions, namely percentage and dyadic thresholds, are used, and they are applied to different wavelet types in the biorthogonal and orthogonal classes. The MEZW and EZW results are quantitatively compared in terms of the compression ratio (CR) and percentage root mean square difference (PRD). Experiments are carried out on selected records from the MIT-BIH arrhythmia database and an original ECG signal. The MEZW algorithm shows a clear advantage over the traditional EZW in the CR achieved for a given PRD, and it gives better results for biorthogonal wavelets than for orthogonal wavelets.
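The two threshold families compared can be sketched as follows: a dyadic threshold starts at the largest power of two not exceeding the maximum coefficient magnitude (as in Shapiro's EZW), while a percentage threshold is a fixed fraction of that maximum. The paper's exact definitions may differ; this is a generic sketch:

```python
import math

def dyadic_threshold(coeffs):
    """EZW-style initial threshold: 2**floor(log2(max |c|))."""
    m = max(abs(c) for c in coeffs)
    return 2 ** math.floor(math.log2(m))

def percentage_threshold(coeffs, fraction):
    """Threshold as a fixed fraction of the maximum coefficient magnitude."""
    return fraction * max(abs(c) for c in coeffs)

coeffs = [57, -3, 12, 0.5, -40]
t_dyadic = dyadic_threshold(coeffs)            # largest power of 2 <= 57
t_pct = percentage_threshold(coeffs, 0.1)      # 10% of the peak magnitude
```

In EZW-type coders the dyadic threshold is then halved on each pass, refining coefficients one bit-plane at a time, which is what gives the bit stream its progressive quality.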

9.
Wavelet compression of medical imagery.
Wavelet compression is a transform-based compression technique recently shown to provide diagnostic-quality images at compression ratios as great as 30:1. Based on a recently developed field of applied mathematics, wavelet compression has found success in compression applications from digital fingerprints to seismic data. The underlying strength of the method is attributable in large part to the efficient representation of image data by the wavelet transform. This efficient or sparse representation forms the basis for high-quality image compression by providing subsequent steps of the compression scheme with data likely to result in long runs of zero. These long runs of zero in turn compress very efficiently, allowing wavelet compression to deliver substantially better performance than existing Fourier-based methods. Although the lack of standardization has historically been an impediment to widespread adoption of wavelet compression, this situation may begin to change as the operational benefits of the technology become better known.

10.

11.
This paper introduces an effective technique for the compression of one-dimensional signals using wavelet transforms. It is based on generating a binary stream of 1s and 0s that encodes the structure of the wavelet coefficients (i.e., the locations of zero and nonzero coefficients). A new coding algorithm, similar to run-length encoding, has been developed for the compression of the binary stream. The compression performance of the technique is measured using the compression ratio (CR) and percent root-mean-square difference (PRD). To assess the technique properly, we evaluated the effect of signal length, threshold level selection, and wavelet filters on the quality of the reconstructed signal. The effect of finite word length representation on the compression ratio and PRD is also discussed. The technique is tested on the compression of normal and abnormal electrocardiogram (ECG) signals. The performance parameters of the proposed coding algorithm are measured, and compression ratios of 19:1 and 45:1 are achieved with PRDs of 1% and 2.8%, respectively. At the receiver end, the received signal is decoded and inverse transformed before being processed. Finally, the merits and demerits of the technique are discussed.
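The core idea — a binary map of zero/nonzero coefficient locations, run-length coded — together with the PRD quality measure can be sketched as below. This is illustrative only; the paper's coding algorithm differs in detail:

```python
import math

def significance_bits(coeffs, t):
    """1 where |coefficient| >= t (kept), 0 where it is discarded."""
    return [1 if abs(c) >= t else 0 for c in coeffs]

def rle(bits):
    """Run-length encode the binary stream as [bit, run] pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return runs

def prd(original, reconstructed):
    """Percent root-mean-square difference between two signals."""
    num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    den = sum(o ** 2 for o in original)
    return 100 * math.sqrt(num / den)

bits = significance_bits([0.1, 5, 5, 0.2, 0.1, 0.3], 1.0)
runs = rle(bits)   # the 0-runs dominate for sparse wavelet coefficients
```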

12.
A prior ultrasound study indicated that images with low to moderate levels of JPEG and wavelet compression were acceptable for diagnostic purposes. The purpose of this study is to validate this prior finding using the Joint Photographic Experts Group (JPEG) baseline compression algorithm, at a compression ratio of approximately 10:1, on a sufficiently large number of grayscale and color ultrasound images to attain a statistically significant result. The practical goal of this study is to determine if it is feasible for radiologists to use irreversibly compressed images as an integral part of day-to-day ultrasound practice (i.e., perform primary diagnosis with, and store, irreversibly compressed images in the ultrasound PACS archive). In this study, 5 radiologists were asked to review 300 grayscale and color static ultrasound images selected from 4 major anatomic groups. Each image was compressed and decompressed using the JPEG baseline compression algorithm at a fixed quality factor resulting in an average compression ratio of approximately 9:1. The images were presented in pairs (original and compressed) in a blinded fashion on a PACS workstation in the ultrasound reading areas, and radiologists were asked to pick which image they preferred in terms of diagnostic utility and their degree of certainty (on a scale from 1 to 4). Of the 1499 total readings, 50.17% (95% confidence interval 47.6% to 52.7%) indicated a preference for the original image in the pair, and 49.83% (95% confidence interval 47.3% to 52.0%) indicated a preference for the compressed image. These findings led the authors to conclude that static color and grayscale ultrasound images compressed with JPEG at approximately 9:1 are statistically indistinguishable from the originals for primary diagnostic purposes.
Based on the authors' laboratory experience with compression and the results of this and other prior studies, JPEG compression is now being applied to all ultrasound images in the authors' radiology practice before reading. No image quality-related issues have been encountered after 12 months of operation (approximately 48,000 examinations).
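The confidence interval quoted for the 50.17% preference rate over 1499 readings is consistent with a standard normal-approximation binomial interval, which can be reproduced as follows (the authors' exact statistical procedure is not stated, so this is a plausibility check, not their code):

```python
import math

def binomial_ci(successes, n, z=1.96):
    """95% normal-approximation confidence interval for a proportion."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

# 752/1499 ≈ 50.17% of readings preferred the original image.
lo, hi = binomial_ci(752, 1499)
```

An interval straddling 50% is exactly what "statistically indistinguishable" means here: the readers' preference is consistent with coin-flipping between original and compressed.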

13.
This presentation focuses on the quantitative comparison of three lossy compression methods applied to a variety of 12-bit medical images. One Joint Photographic Experts Group (JPEG) and two wavelet algorithms were used on a population of 60 images. The medical images were obtained in Digital Imaging and Communications in Medicine (DICOM) file format and ranged in matrix size from 256 × 256 (magnetic resonance [MR]) to 2,560 × 2,048 (computed radiography [CR], digital radiography [DR], etc). The algorithms were applied to each image at multiple levels of compression such that comparable compressed file sizes were obtained at each level. Each compressed image was then decompressed, and quantitative analysis was performed to compare each compressed-then-decompressed image with its corresponding original. The statistical measures computed were the sum of absolute differences, the sum of squared differences, and the peak signal-to-noise ratio (PSNR). Our results corroborate other research studies showing that wavelet compression yields better quality than JPEG at constant compressed file sizes. The DICOM standard does not yet include wavelet as a recognized lossy compression standard. For implementers and users to adopt wavelet technology as part of their image management and communication installations, there would have to be significant differences in quality and compressibility compared with JPEG to justify expensive software licenses and the introduction of proprietary elements into the standard. Our study shows that different wavelet implementations vary in their capacity to differentiate themselves from the old, established lossy JPEG.
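The three measures used — sum of absolute differences, sum of squared differences, and PSNR with the 12-bit peak value 4095 — are straightforward to state precisely (a sketch over flat pixel lists; real images would be 2-D arrays):

```python
import math

def sad(a, b):
    """Sum of absolute differences."""
    return sum(abs(x - y) for x, y in zip(a, b))

def ssd(a, b):
    """Sum of squared differences."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def psnr(a, b, peak=4095):
    """Peak signal-to-noise ratio in dB for 12-bit pixel data."""
    mse = ssd(a, b) / len(a)
    return 10 * math.log10(peak ** 2 / mse)

orig, recon = [0, 100, 4095], [1, 100, 4094]
quality_db = psnr(orig, recon)   # higher PSNR = closer to the original
```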

14.
A "Lossy-Lossless" Compression Scheme for Medical Image Sequences
Targeting the characteristics of medical image sequences, a difference-prediction method based on image registration is proposed to reduce the correlated redundancy between images in a sequence. The prediction difference images are then encoded with an EBCOT method based on a reversible integer wavelet transform, achieving "lossy-lossless" compression of "non-essential/essential" regions. Experiments on real medical (CT) images show that the method has good compression performance and can effectively meet the compression requirements of modern medical imaging.
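The reversible integer wavelet transform that enables the lossless part of such a scheme can be illustrated with the integer Haar lifting step, which maps integer pixels to integer coefficients and inverts exactly. This is a generic sketch, not the paper's specific transform:

```python
def int_haar_forward(a, b):
    """Lifting-based integer Haar: detail d and approximation s."""
    d = b - a
    s = a + (d >> 1)        # arithmetic shift = floor division by 2
    return s, d

def int_haar_inverse(s, d):
    """Exact inverse of the lifting step: recovers (a, b) bit-for-bit."""
    a = s - (d >> 1)
    return a, a + d

# Perfect reconstruction over a range of integer pixel pairs.
ok = all(int_haar_inverse(*int_haar_forward(a, b)) == (a, b)
         for a in range(-4, 5) for b in range(-4, 5))
```

Because every step is integer arithmetic with an exact inverse, no rounding error accumulates, which is exactly the property a lossless ("essential region") coding path requires.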

15.
To address the low compression efficiency of lossless compression and the low image quality of general near-lossless compression, this paper proposes a novel near-lossless compression algorithm based on adaptive spatial prediction for medical sequence images intended for diagnostic use. The proposed method employs adaptive block-size-based spatial prediction to predict blocks directly in the spatial domain, and applies a lossless Hadamard transform before quantization to improve the quality of reconstructed images. The block-based prediction breaks the pixel neighborhood constraint and takes full advantage of the local spatial correlations found in medical images. The adaptive block size guarantees a more rational division of images and improved use of the local structure. The results indicate that the proposed algorithm compresses medical images efficiently and produces a better peak signal-to-noise ratio (PSNR) under the same pre-defined distortion than other near-lossless methods.
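Near-lossless coding typically bounds the per-pixel error by a chosen δ by quantizing prediction residuals with step 2δ+1. A minimal integer sketch of that guarantee (the paper's block prediction and Hadamard step are omitted; the quantizer form is a common convention, not necessarily theirs):

```python
def quantize(residual, delta):
    """Map an integer residual to a bin index with step 2*delta + 1."""
    step = 2 * delta + 1
    return (residual + delta) // step   # floor division centres the bins

def dequantize(index, delta):
    """Reconstruct the residual from its bin index."""
    return index * (2 * delta + 1)

# Every integer residual is reconstructed within +/- delta.
delta = 2
worst = max(abs(r - dequantize(quantize(r, delta), delta))
            for r in range(-20, 21))
```

Setting delta = 0 collapses the quantizer to the identity, so the same machinery covers the strictly lossless case.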

16.
This paper investigates the use of morphology-based nonlinear filters and performs deterministic and statistical analysis of linear combinations of the filters for image quality enhancement of B-mode ultrasound images. The fact that the structuring element's shape greatly influences the output of the filter is one of the most important features of mathematical morphology. The present work comparatively evaluates structuring elements for morphological liver image enhancement and verifies the hypothesis that the speckles visible in US images are short, slightly ‘banana-shaped’ white lines. Initially, five different liver images were morphologically filtered using 10 different structuring elements, and the filtered images were assessed quantitatively. Image quality parameters such as peak signal-to-noise ratio, mean square error, and correlation coefficient were used to evaluate the performance of the morphological filters with different structuring elements. To corroborate the quantitative analysis, the filtered images were then evaluated qualitatively, based on the image features examined by the medical fraternity. The evaluation parameters were chosen on the basis of suggestions made by a group of radiologists, and the processed images were then evaluated by a different group of radiologists. A multi-point rank order method was used to identify small differences or trends in observation. The subjective analysis by the radiologists indicates that a morphological filter using a line-shaped structuring element of length 2 performs better than the other structuring elements. These observations were found to be in line with those of the quantitative analysis.
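Morphological filtering with a short line structuring element can be sketched in 1-D: grayscale opening (erosion then dilation) removes bright features narrower than the element, which is why short line elements suppress line-shaped speckle. This toy version ignores 2-D shape and orientation:

```python
def erode(x, k):
    """Grayscale erosion with a flat line element of length k."""
    return [min(x[i:i + k]) for i in range(len(x) - k + 1)]

def dilate(x, k):
    """Grayscale dilation with a flat line element of length k."""
    return [max(x[i:i + k]) for i in range(len(x) - k + 1)]

def opening(x, k):
    """Opening = erosion followed by dilation: removes narrow bright spikes."""
    return dilate(erode(x, k), k)

# The one-sample bright speckle is removed; the background level is kept.
cleaned = opening([5, 5, 9, 5, 5], 2)   # → [5, 5, 5]
```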

17.
OBJECTIVE: So far there is no ideal speckle reduction filtering technique that is capable of enhancing and reducing the level of noise in medical ultrasound (US) images while efficiently responding to medical experts' validation criteria, which quite often include a subjective component. This paper presents an interactive tool called the evolutionary speckle reducing anisotropic diffusion filter (EVOSRAD) that performs adaptive speckle filtering on ultrasound B-mode still images. The medical expert runs the algorithm interactively, with permanent control over the output, and guides the filtering process toward enhanced images that meet his or her subjective quality criteria. METHODS AND MATERIAL: We employ an interactive evolutionary algorithm (IGA) to adapt the parameters of a speckle reducing anisotropic diffusion (SRAD) filter on-line. For a given input US image, the algorithm evolves the parameters of the SRAD filter according to the subjective criteria of the medical expert who runs the interactive algorithm. The method and its validation are applied to a test bed comprising both real and simulated obstetrics and gynecology (OB/GYN) ultrasound images. RESULTS: The potential of the method is analyzed in comparison to other speckle reduction filters: the original SRAD filter and the anisotropic diffusion, offset, and median filters. Results obtained show the good potential of the method on several classes of OB/GYN ultrasound images, as well as on a synthetic image simulating a real fetal US image. Quality criteria for the evaluation and validation of the method include subjective scoring given by the medical expert who runs the interactive method, as well as objective global and local quality criteria. CONCLUSIONS: The method presented allows the medical expert to design his or her own filters according to the degree of medical expertise as well as to particular, and often subjective, assessment criteria. A filter is designed for a given class of ultrasound images and for a given medical expert who will later use the respective filter in clinical practice. The process of designing a filter is simple and employs an interactive visualization and scoring stage that does not require image processing knowledge. Results show that filters tailored using the presented method achieve better quality scores than other, more generic speckle filtering techniques.
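The SRAD filter adapted here belongs to the anisotropic-diffusion family: each iteration smooths where local gradients are small and stops at strong edges. A 1-D Perona–Malik-style step conveys the idea; SRAD's speckle-specific diffusion coefficient and the interactive IGA loop are not reproduced, and the parameter values are arbitrary:

```python
def diffuse_step(x, kappa=1.0, lam=0.2):
    """One explicit diffusion step; g() shrinks flux across strong edges."""
    g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)
    out = list(x)
    for i in range(1, len(x) - 1):
        d_r = x[i + 1] - x[i]       # gradient toward the right neighbour
        d_l = x[i - 1] - x[i]       # gradient toward the left neighbour
        out[i] = x[i] + lam * (g(d_r) * d_r + g(d_l) * d_l)
    return out

# Small speckle-like bump at index 2 is smoothed away over iterations,
# while the strong edge between indices 3 and 4 survives largely intact.
signal = [0.0, 0.0, 0.4, 0.0, 10.0, 10.0, 10.0]
for _ in range(20):
    signal = diffuse_step(signal)
```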

18.
S C Lo  E L Shen  S K Mun  J Chen 《Medical physics》1991,18(5):939-946
A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in this radiological image compression study. In these experiments, the impact of this decomposition method on image compression was tested by employing it with two coding techniques on a set of clinically used CT images and several laser-film-digitized chest radiographs. One of the compression techniques used was zonal full-frame bit allocation in the discrete cosine transform (DCT) domain, an enhanced full-frame DCT technique that has proven effective for radiological image compression. The other was vector quantization with pruned tree-structured encoding, which recent research has also found to produce a low mean-square error and a high compression ratio. The parameters used in this study were the mean-square error and the bit rate required for the compressed file. In addition, the differences between the original and reconstructed images were presented so that the specific artifacts generated by each technique could be discerned through visual perception.
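Zonal bit allocation in the DCT domain keeps a low-frequency zone of coefficients and discards the rest. A 1-D sketch with a DCT-II/DCT-III pair conveys the idea; the paper's full-frame 2-D allocation is considerably more elaborate:

```python
import math

def dct2(x):
    """DCT-II of a 1-D signal (unnormalized)."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
            for k in range(N)]

def idct2(X):
    """Inverse (scaled DCT-III): exact for a full coefficient set."""
    N = len(X)
    return [(X[0] / 2 + sum(X[k] * math.cos(math.pi * (n + 0.5) * k / N)
                            for k in range(1, N))) * 2 / N
            for n in range(N)]

def zonal(X, keep):
    """Zonal selection: retain only the first `keep` (lowest) frequencies."""
    return [c if k < keep else 0.0 for k, c in enumerate(X)]

x = [4.0, 3.0, 2.0, 1.0]
round_trip = idct2(dct2(x))            # lossless with all coefficients
approx = idct2(zonal(dct2(x), 2))      # lossy: high-frequency zone dropped
```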

19.
The purpose of this research was to determine if digitization and the application of various compression routines to digital images of temporomandibular joint (TMJ) radiographs would diminish observer accuracy in the detection of specific osseous characteristics associated with TMJ degenerative joint disease (DJD). Nine observers viewed 6 cropped hard-copy radiographic films each of 34 TMJs (17 radiographic series). Regions of interest measuring 2 in x 2 in were digitized using an 8-bit scanner with a transparency adapter at 300 dpi. The images were placed into a montage of 6 images and stored as tagged image file format (TIFF), compressed at 4 levels (25:1, 50:1, 75:1, and 100:1) using a wavelet algorithm, and displayed to the observers on a computer monitor. Their observations regarding condylar faceting, sclerosis, osteophyte formation, erosion, and abnormal shape were analyzed using receiver operating characteristic (ROC) analysis. Kappa values were determined for relative condylar size and condylar position within the glenoid fossa. Indices were compared using ANOVA at a significance level of P < .05. Although significant and substantial observer variability was demonstrated, there were no statistically significant differences between image modalities, except for condylar position, in which TIFF and wavelet (at all compression ratios) performed better than the original image. For faceting, wavelet 100:1 performed better than radiographic film images. Little actual image file reduction was achieved at compression ratios above 25:1.

20.
A novel homomorphic wavelet thresholding technique for reducing speckle noise in medical ultrasound images is presented. First, we show that the speckle wavelet coefficients in the logarithmically transformed ultrasound images are best described by the Nakagami family of distributions. By exploiting this speckle model and a Laplacian signal prior, a closed-form, data-driven, and spatially adaptive threshold is derived in the Bayesian framework. The spatial adaptivity allows additional information about the image (such as the identification of homogeneous or heterogeneous regions) to be incorporated into the algorithm. Further, the threshold has been extended to the redundant wavelet representation, which yields better results than the decimated wavelet transform. Experimental results demonstrate the improved performance of the proposed method over other well-known speckle reduction filters. The application of the proposed method to a realistic US test image shows that the new technique, named HomoGenThresh, outperforms the best wavelet-based denoising method reported in [1] by more than 1.6 dB, the Lee filter by 3.6 dB, the Kuan filter by 3.1 dB, and band-adaptive soft thresholding [2] by 2.1 dB at an input signal-to-noise ratio (SNR) of 13.6 dB.
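The "homomorphic" part means the image is log-transformed (turning multiplicative speckle into additive noise), denoised by shrinking wavelet coefficients, then exponentiated back. The shrinkage rule is the standard soft threshold; here the paper's Nakagami-derived, data-driven threshold is replaced by a fixed placeholder t, so this is only the skeleton of the method:

```python
import math

def soft_threshold(c, t):
    """Shrink a coefficient toward zero by t (standard soft thresholding)."""
    return math.copysign(max(abs(c) - t, 0.0), c)

def homomorphic(value, denoise, eps=1e-12):
    """Log-transform (multiplicative speckle -> additive), denoise, exp back."""
    return math.exp(denoise(math.log(value + eps)))

shrunk = soft_threshold(3.0, 1.0)            # large coefficient survives, reduced
killed = soft_threshold(-0.5, 1.0)           # small (noise) coefficient zeroed
identity_check = homomorphic(5.0, lambda v: v)
```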


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号