Similar Documents
1.
The increasing use of computerized ECG processing systems calls for effective electrocardiogram (ECG) data compression techniques that increase effective storage capacity and improve data transmission over telephone and Internet lines. This paper presents a compression technique for ECG signals that combines the singular value decomposition (SVD) with the discrete wavelet transform (DWT). The central idea is to transform the ECG signal into a rectangular matrix, compute its SVD, and discard the small singular values of the matrix. The resulting rank-reduced matrix is wavelet transformed, thresholded and coded to increase the compression ratio. The number of retained singular values and the threshold level are chosen from the required percentage root-mean-square difference (PRD) and compression ratio. The technique has been tested on ECG signals from the MIT-BIH arrhythmia database. The results show that substantial data reduction with high signal fidelity can be achieved, with an average compression ratio of 25.2:1 and an average PRD of 3.14. Comparison with recently published results shows that the proposed technique performs better.
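A minimal sketch of the pipeline described above, assuming PyWavelets (pywt) and NumPy; the matrix width, rank, wavelet and threshold below are illustrative parameters, not values from the paper:

```python
import numpy as np
import pywt

def svd_dwt_compress(ecg, n_cols=256, k=10, wavelet="db4", level=4, thresh=0.01):
    """Sketch of the SVD + DWT idea: reshape the signal into a matrix, keep
    the k largest singular values, then wavelet-transform and threshold the
    rank-reduced matrix before entropy coding."""
    # Reshape the 1-D ECG into a rectangular matrix (pad with the last sample).
    n_rows = int(np.ceil(len(ecg) / n_cols))
    padded = np.pad(ecg, (0, n_rows * n_cols - len(ecg)), mode="edge")
    X = padded.reshape(n_rows, n_cols)

    # Rank-k approximation: discard small singular values.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    # Wavelet-transform the rank-reduced matrix and hard-threshold
    # coefficients below a fraction of the largest coefficient.
    coeffs = pywt.wavedec2(X_k, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    arr[np.abs(arr) < thresh * np.abs(arr).max()] = 0.0
    return arr, slices  # sparse coefficients, ready for entropy coding
```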

2.
This paper presents a modified version of the Set Partitioning In Hierarchical Trees (SPIHT) wavelet compression method, developed for ECG signal compression. Two steps have been added to the existing technique to achieve a higher compression ratio (CR) and a lower percentage rms difference (PRD). The method has been tested on selected records from the MIT-BIH arrhythmia database. Even with the two additional steps, the method retains its simplicity, computational efficiency and self-adaptiveness, without compromising any other performance parameter.

3.
In this work, a filter-bank-based algorithm for electrocardiogram (ECG) signal compression is proposed. The new coder consists of three stages. In the first, the subband decomposition stage, we compare the performance of a nearly perfect reconstruction (N-PR) cosine-modulated filter bank with the wavelet packet (WP) technique. Both schemes use the same coding algorithm, permitting an effective comparison. The target of the comparison is the quality of the reconstructed signal, which must remain within predetermined accuracy limits. We employ the most widely used quality criterion for compressed ECG, the percentage root-mean-square difference (PRD), complemented by the maximum amplitude error (MAX). The tests were run on the 12 principal cardiac leads, and the amount of compression is evaluated by the mean number of bits per sample (MBPS) and the compression ratio (CR). The implementation cost of both the filter bank and the WP technique has also been studied. The results show that the N-PR cosine-modulated filter bank method outperforms the WP technique in both quality and efficiency.
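The two quality criteria named in this abstract have standard definitions; a small reference implementation (NumPy assumed):

```python
import numpy as np

def prd(x, x_rec):
    """Percentage root-mean-square difference between the original signal x
    and its reconstruction x_rec (the usual ECG compression quality index)."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def max_amplitude_error(x, x_rec):
    """Maximum absolute reconstruction error (MAX)."""
    return np.max(np.abs(x - x_rec))
```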

4.
Objective: To reduce the data volume required to store and transmit ECG signals, and to overcome the limitation of traditional ECG compression methods that exploit only intra-lead correlation, this paper proposes a compression method based on wavelet-domain principal component analysis and layered coding (wPCA_LC). Methods: Twelve-channel ECG data are acquired through ECG electrodes, and a wavelet transform is applied to every channel; the wavelet coefficients at each scale form a coefficient matrix, and principal component analysis (PCA) is performed on each matrix. Principal components with small wavelet coefficients are then coded as [position increment, value] pairs, while the other components are Huffman coded. The algorithm was used to compress the St. Petersburg arrhythmia database. Results: Experiments show that at a root-mean-square error of 5.2%, the proposed algorithm achieves a compression ratio of 71, far higher than methods based on sparse decomposition or on wavelet-transform threshold selection. Conclusion: The wavelet-domain PCA compression algorithm offers good compression performance for multi-lead ECG signals.
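A hedged sketch of the wavelet-domain PCA stage, assuming PyWavelets and NumPy; the wavelet and level are illustrative, and the layered [position increment, value]/Huffman coding that follows is omitted:

```python
import numpy as np
import pywt

def wavelet_domain_pca(ecg_leads, wavelet="db4", level=4):
    """Stack the wavelet coefficients of all leads scale by scale and
    decorrelate each scale with PCA. ecg_leads: shape (n_leads, n_samples)."""
    per_lead = [pywt.wavedec(lead, wavelet, level=level) for lead in ecg_leads]
    components = []
    for scale in range(level + 1):
        # Rows = leads, columns = wavelet coefficients of this scale.
        M = np.vstack([coeffs[scale] for coeffs in per_lead])
        mean = M.mean(axis=0)
        # PCA via SVD on the centered matrix; rows of Vt are principal axes.
        U, s, Vt = np.linalg.svd(M - mean, full_matrices=False)
        scores = U * s  # principal-component scores per lead
        components.append((scores, Vt, mean))
    return components  # small-magnitude components would then be
                       # position-coded, the rest Huffman coded
```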

5.
Traditional soft- and hard-thresholding algorithms reduce the amplitude of the denoised electrocardiogram (ECG) and leave local spurious spikes when removing electromyographic (EMG) noise, resulting in poor denoising performance. By studying the denoising principle and optimization rules of wavelet thresholding, this paper constructs a continuous, structurally simple and highly flexible adjustable threshold function based on the hyperbolic tangent function, together with an improved level-dependent threshold, and determines the best wavelet basis and decomposition level for decomposing the noisy ECG signal, yielding an improved wavelet-thresholding algorithm. The soft- and hard-thresholding algorithms, threshold algorithms from the literature, and the proposed algorithm were compared on ECG signals contaminated with real EMG noise. Experimental results show that the improved algorithm removes EMG noise from the ECG more thoroughly, preserves the ECG waveform features better, and achieves a higher Pearson correlation coefficient than the other threshold algorithms. Both qualitative and quantitative results indicate that the proposed improved thresholding algorithm denoises EMG noise in ECG signals effectively.
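The abstract does not give the exact tanh-based threshold function, so the following is only one plausible shrinkage rule with the stated properties (continuity, simple structure, a tunable shape parameter), in NumPy:

```python
import numpy as np

def tanh_threshold(w, thr, alpha=2.0):
    """One plausible tanh-based shrinkage rule (not the paper's formula):
    continuous at |w| = thr, soft-threshold-like just above it, and
    approaching hard thresholding as alpha grows."""
    out = np.zeros_like(w)
    keep = np.abs(w) > thr
    out[keep] = w[keep] * np.tanh(alpha * (np.abs(w[keep]) - thr) / thr)
    return out
```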

6.
This paper presents a combined wavelet and modified run-length encoding scheme for the compression of electrocardiogram (ECG) signals. First, a discrete wavelet transform is applied to the ECG signal, and the resulting coefficients are classified into significant and insignificant ones based on the required PRD (percentage root-mean-square difference). Second, both sets of coefficients are encoded using a modified run-length coding method. The scheme has been tested on ECG signals from the MIT-BIH Compression Database. A compression ratio of 20:1 (equivalent to 150 bits per second) is achieved with a PRD of less than 10.
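The abstract does not specify the paper's "modified" scheme, so the following shows only the standard zero-run idea applied to thresholded wavelet coefficients (plain Python):

```python
def run_length_encode(coeffs):
    """Zero runs become ('z', run_length) tokens; significant coefficients
    are emitted verbatim as ('v', value) tokens."""
    encoded, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            if run:
                encoded.append(("z", run))
                run = 0
            encoded.append(("v", c))
    if run:
        encoded.append(("z", run))
    return encoded
```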

7.
Objective: To meet the need of embedded mobile wireless terminals to transmit high-sample-rate ECG signals, a real-time ECG data compression algorithm is designed. Methods: Based on the characteristics of ECG data, the Huffman, LZ77 and LZW algorithms were implemented and compared on the embedded S3C2440 platform, and on this basis a hybrid compression algorithm combining first-order differencing with Huffman and LZ77 coding was designed. Results: Compression tests on ECG data show that the algorithm achieves a compression ratio of 7.20 with an average computation time of 392 ms, giving a higher compression ratio and lower time complexity than common general-purpose compression algorithms. Conclusion: Applying this compression algorithm in a remote wireless monitoring terminal satisfies the system design requirements.
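An illustration of the same idea with stock tools, not the authors' implementation: zlib's DEFLATE internally combines LZ77 and Huffman coding, so first-order differencing followed by zlib approximates the described hybrid:

```python
import numpy as np
import zlib

def diff_deflate_compress(ecg):
    """Stand-in for the hybrid coder: first-order differencing shrinks the
    sample magnitudes, then DEFLATE entropy-codes the residuals."""
    samples = np.asarray(ecg, dtype=np.int16)
    residual = np.diff(samples, prepend=samples[:1])  # first-order difference
    return zlib.compress(residual.tobytes(), level=9)

def diff_deflate_decompress(blob):
    """Inverse: inflate, then a cumulative sum undoes the differencing
    (int16 wrap-around cancels out, so the round trip is lossless)."""
    residual = np.frombuffer(zlib.decompress(blob), dtype=np.int16)
    return np.cumsum(residual, dtype=np.int64).astype(np.int16)
```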

8.
Myocardial infarction (MI) is a coronary artery disease caused by a lack of blood supply to one or more sections of the myocardium, resulting in necrosis in that region; it is classified into types according to the region of necrosis. In this paper, a statistical approach to the classification of anteroseptal MI (ASMI) is proposed. The first step of the method involves noise elimination and feature extraction from the electrocardiogram (ECG) signals using multi-resolution wavelet analysis and thresholding-based techniques. In the next step, a classification scheme is developed using the nearest-neighbour classification rule (NN rule). Both temporal and amplitude features relevant to automatic ASMI diagnosis are extracted from the four chest leads V1-V4. The distance metric for the NN classifier is computed using both the Euclidean distance and the Mahalanobis distance. A comparison between the two shows that the latter is superior to the former, as evident from the classification accuracy. The proposed method is tested and validated on the PTB diagnostic database. Classification accuracies for the Mahalanobis- and Euclidean-distance-based NN rule are 95.14% and 81.83%, respectively.
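A compact sketch of 1-NN classification with the two distance metrics compared above (NumPy assumed; the wavelet feature extraction is omitted):

```python
import numpy as np

def nn_classify(x, templates, labels, cov=None):
    """1-NN assignment with Euclidean or Mahalanobis distance.
    templates: (n, d) feature vectors; cov: feature covariance
    (None selects plain Euclidean distance)."""
    diff = templates - x
    if cov is None:
        d2 = np.sum(diff ** 2, axis=1)                 # squared Euclidean
    else:
        VI = np.linalg.inv(cov)
        d2 = np.einsum("ij,jk,ik->i", diff, VI, diff)  # squared Mahalanobis
    return labels[np.argmin(d2)]
```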

9.
The electrocardiogram (ECG) is one of the signals most used in the diagnosis of heart disease. It contains different waves that directly correlate with heart activity, and various methods have been used to detect these waves and thereby support diagnosis. This paper focuses on the detection of the T-wave, which represents the repolarization phase of cardiac activity. The proposed approach is an algorithmic procedure that detects the T-wave using several filters, including mean and median filters. The algorithm is implemented and tested on sets of ECG recordings taken from the European ST-T, MIT-BIH and MIT-BIH ST databases, respectively. The results are very satisfactory in terms of sensitivity, predictivity and error compared with other work in the field.
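The abstract names only mean and median filters, so the following sketches just that filtering stage under assumed window lengths (SciPy assumed); the actual T-wave localization logic is not reproduced:

```python
import numpy as np
from scipy.signal import medfilt

def smooth_for_twave(ecg, mean_win=9, median_win=31):
    """Filtering stage only: a moving-average filter suppresses
    high-frequency noise, then a median filter flattens sharp QRS
    deflections, leaving the slower T-wave easier to locate.
    Window lengths are illustrative, not taken from the paper."""
    kernel = np.ones(mean_win) / mean_win
    smoothed = np.convolve(ecg, kernel, mode="same")   # mean filter
    return medfilt(smoothed, kernel_size=median_win)   # median filter
```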

10.
Background: Heart sound signals carry a wealth of physiological information about heart valve activity, so heart sound analysis has important clinical significance for diagnosing heart disease. Objective: To extract the heart sound envelope and analyse the signal's features in order to decide whether murmurs are present, thereby overcoming the drawbacks of traditional auscultation, namely its heavy dependence on physician experience and its limited listening range. Methods: A wavelet-transform-based method for extracting the heart sound envelope is proposed. Compared with envelope extraction based on the Hilbert-Huang transform, mathematical morphology and average Shannon energy, the method is shown to be simpler, to produce smoother curves and to make feature points more prominent. Results and Conclusion: The method was applied to envelope extraction from real clinical heart sounds, and a support vector machine was trained on two extracted features, envelope area and wavelet energy, to decide whether a heart sound signal clearly contains murmurs. Validation on 35 heart sound recordings shows that the algorithm reaches 95% accuracy and is highly practical.
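A hedged sketch of a wavelet-based envelope extractor in the spirit described, assuming PyWavelets; which detail scales carry the S1/S2 energy depends on the sampling rate, so the indices below are illustrative:

```python
import numpy as np
import pywt

def wavelet_envelope(pcg, wavelet="db6", level=5, keep=(3, 4)):
    """Reconstruct only the detail scales assumed to hold heart-sound
    energy, rectify, and smooth to obtain an envelope."""
    coeffs = pywt.wavedec(pcg, wavelet, level=level)
    # Zero every band except the chosen detail scales.
    kept = [c if i in keep else np.zeros_like(c) for i, c in enumerate(coeffs)]
    band = pywt.waverec(kept, wavelet)[: len(pcg)]
    rectified = np.abs(band)
    win = np.ones(101) / 101.0              # simple moving-average smoother
    return np.convolve(rectified, win, mode="same")
```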

11.
Error propagation and word-length growth are two intrinsic effects that degrade the performance of wavelet-based ECG data compression methods. To overcome them, a non-recursive 1-D discrete periodized wavelet transform (1-D NRDPWT) and a reversible round-off linear transformation (RROLT) theorem are developed. The 1-D NRDPWT resists truncation-error propagation in the decomposition process. By suppressing the word-length-growth effect, the RROLT theorem enables the 1-D NRDPWT process to obtain reversible octave coefficients with minimum dynamic range (MDR). A non-linear quantization algorithm with a high compression ratio (CR) is also developed; it assigns small decimal quantization scales to high octave coefficients and large ones to low octave coefficients. Evaluation is based on the percentage root-mean-square difference (PRD), the maximum amplitude error (MAE), and visual inspection of the reconstructed signals. Experiments on the MIT-BIH arrhythmia database show that the new approach achieves superior compression performance, particularly at high CRs.

12.
Compression of the electrocardiogram (ECG) is necessary for efficient storage and transmission of digitized ECG signals. The discrete wavelet transform (DWT) has recently emerged as a powerful technique for ECG signal compression owing to its multi-resolution signal decomposition and locality properties. This paper presents an ECG compressor based on selecting optimal threshold levels for the DWT coefficients in the different subbands, achieving maximum data-volume reduction while preserving the significant morphological features of the signal upon reconstruction. First, the ECG is wavelet transformed into m subbands and the wavelet coefficients of each subband are thresholded with an optimal threshold level; thresholding removes excessively small features and replaces them with zeroes. The threshold levels are set for each signal so that the bit rate is minimized for a target distortion or, alternatively, the distortion is minimized for a target compression ratio. After thresholding, the remaining significant wavelet coefficients are coded using a multi embedded zero tree (MEZW) coding technique. To assess the performance of the proposed compressor, records from the MIT-BIH Arrhythmia Database were compressed at different distortion levels, measured by the percentage rms difference (PRD), and compression ratios (CR). The method achieves good CR values with excellent reconstruction quality, comparing favourably with various classical and state-of-the-art ECG compressors. Finally, the proposed method is flexible in controlling the quality of the reconstructed signals and the volume of the compressed signals by establishing a target PRD or a target CR a priori.
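A sketch of per-signal threshold selection for a target compression ratio, assuming PyWavelets; it bisects on the surviving-coefficient fraction as a cheap proxy for coded size, whereas the paper optimizes the actual bit rate or PRD:

```python
import numpy as np
import pywt

def threshold_for_target_cr(ecg, target_cr, wavelet="bior4.4", level=5, iters=30):
    """Bisect the threshold until the fraction of surviving coefficients
    roughly matches the target (nonzero fraction ~ 1/CR)."""
    coeffs = pywt.wavedec(ecg, wavelet, level=level)
    arr, _ = pywt.coeffs_to_array(coeffs)
    lo, hi = 0.0, np.abs(arr).max()
    for _ in range(iters):
        thr = 0.5 * (lo + hi)
        nonzero_frac = np.mean(np.abs(arr) > thr)
        if nonzero_frac > 1.0 / target_cr:
            lo = thr     # too many survivors: raise the threshold
        else:
            hi = thr     # too few: lower it
    return 0.5 * (lo + hi)
```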

13.
Lossy image compression is considered a necessity as radiology moves toward a filmless environment. Compression algorithms based on the discrete cosine transform (DCT) are limited by the infinite support of the cosine basis function. Wavelets, basis functions with compact or nearly compact support, are mathematically better suited for decorrelating medical image data. A lossy compression algorithm based on semiorthogonal cubic spline wavelets has been implemented and tested on six image modalities (magnetic resonance, X-ray computed tomography, single photon emission tomography, digital fluoroscopy, computed radiography, and ultrasound). The fidelity of the reconstructed wavelet images was compared with that of images compressed by a DCT algorithm at compression ratios of up to 40:1. The wavelet algorithm was found to have generally lower average error metrics and higher peak signal-to-noise ratios than the DCT algorithm.
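The fidelity metric cited, peak signal-to-noise ratio, has a standard definition; a small reference implementation in NumPy:

```python
import numpy as np

def psnr(img, img_rec, peak=255.0):
    """Peak signal-to-noise ratio in dB, for images with the given peak value."""
    mse = np.mean((img.astype(float) - img_rec.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```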

14.
Objective: To improve the classification accuracy of ECG signals while reducing algorithmic complexity. Methods: MIT-BIH ECG data were first used as learning templates; three kinds of features were then extracted from the ECG signal in the frequency and time domains, namely the discrete cosine transform (DCT), the RR interval and the QRS complex; finally, a minimum-Euclidean-distance classifier was used to determine the class of the ECG signal under test. Results: Validated on the MIT-BIH and AHA standard ECG databases, the classification model achieved accuracies of 96.6% and 94.1%, respectively. Conclusion: What most distinguishes this ECG classification model from other algorithms is its low computational complexity, which is the key to real-time detection of and warning about abnormal rhythms; the model has already been implemented on an ordinary mobile phone platform.
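A hedged sketch of minimum-Euclidean-distance template matching on DCT-based features (SciPy/NumPy assumed); the feature vector here is illustrative and omits the QRS measurements the paper uses:

```python
import numpy as np
from scipy.fft import dct

def beat_features(beat, rr, n_dct=20):
    """Illustrative feature vector: leading DCT coefficients of the beat
    plus its RR interval."""
    return np.concatenate([dct(beat, norm="ortho")[:n_dct], [rr]])

def classify(beat, rr, template_feats, template_labels):
    """Minimum-Euclidean-distance assignment against learned templates."""
    f = beat_features(beat, rr)
    d = np.linalg.norm(template_feats - f, axis=1)
    return template_labels[np.argmin(d)]
```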

15.
For many years heart function has been assessed from the electrocardiogram (ECG) signal, but the sounds produced by the heart also carry information indicating normal or abnormal heart function. The use of the phonocardiogram (PCG) has been limited by experts' lack of mastery in interpreting these sounds and by the signal's high susceptibility to noise. PCG is a non-invasive signal for monitoring physiological parameters of the heart, and it can make the diagnosis of heart disease more efficient. In recent years, attempts have been made to use the PCG to detect heart disease independently, without needing to align it with the ECG. We propose a hybrid algorithm, comprising empirical mode decomposition (EMD), the Hilbert transform and a Gaussian function, for detecting heart sounds and distinguishing the first (S1) and second (S2) cardiac sounds while eliminating the effect of cardiac murmurs. In this article, 250 normal and 250 abnormal sound signals were examined. The overall positive predictivity for normal and abnormal S1 and S2 is 98.98%, 98.78%, 98.78% and 98.37%, respectively. Our results show that the proposed method has high potential for heart sound detection while maintaining simplicity and a reasonable computational time.
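A sketch of the EMD + Hilbert-envelope stage, assuming the PyEMD package (EMD-signal) and SciPy; the kept IMF indices are assumptions, and the peak picking and Gaussian-function steps described above are omitted:

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD  # pip install EMD-signal

def s1s2_envelope(pcg, keep_imfs=(1, 2, 3)):
    """Decompose the PCG into intrinsic mode functions, keep the
    mid-frequency IMFs assumed to hold S1/S2 energy, and take the
    Hilbert envelope of the partial reconstruction."""
    imfs = EMD().emd(np.asarray(pcg, dtype=float))
    idx = [i for i in keep_imfs if i < len(imfs)]  # guard short decompositions
    band = np.sum(imfs[idx], axis=0)               # partial reconstruction
    return np.abs(hilbert(band))                   # analytic-signal envelope
```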

16.
Mutual-information-based image registration suffers from high computational complexity and slow registration speed. To address this problem, this paper proposes a medical image registration method that combines an improved genetic algorithm with the Powell algorithm. To remedy the slow convergence and premature convergence of the traditional genetic algorithm, an improved strategy for the crossover operation is proposed; the improved genetic algorithm is then combined with the Powell algorithm, making full use of the genetic algorithm's global search capability and the Powell algorithm's local search capability. Compared with the Powell algorithm alone and with the unimproved genetic algorithm, the proposed algorithm greatly shortens the time needed for image registration while improving robustness to noise.
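A sketch of the global-then-local strategy with stock SciPy optimizers; differential evolution stands in for the paper's improved genetic algorithm, followed by Powell refinement, with negative mutual information as the cost function:

```python
from scipy.optimize import differential_evolution, minimize

def register(neg_mutual_info, bounds):
    """Global evolutionary search followed by Powell local refinement.
    neg_mutual_info(params) -> negative MI of the transform 'params';
    bounds: list of (low, high) per transform parameter, e.g. tx, ty, angle."""
    coarse = differential_evolution(neg_mutual_info, bounds, maxiter=50)
    fine = minimize(neg_mutual_info, coarse.x, method="Powell")
    return fine.x  # refined registration parameters
```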

17.
The use of machine learning tools in medical diagnosis is increasing gradually, mainly because the effectiveness of classification and recognition systems has improved a great deal in helping medical experts diagnose diseases. One such disease is breast cancer, a very common type of cancer among women. As its incidence has increased significantly in recent years, machine learning applications to this problem have attracted considerable attention as well as medical consideration. This study aims at diagnosing breast cancer with a new hybrid machine learning method. By hybridizing a fuzzy artificial immune system with the k-nearest neighbour algorithm, a method was obtained that solves this diagnosis problem by classifying the Wisconsin Breast Cancer Dataset (WBCD). This data set is very commonly used in the literature on classification systems for breast cancer diagnosis, and it was used here to compare the classification performance of the proposed method with that of other studies. We obtained a classification accuracy of 99.14%, the highest reached so far, via 10-fold cross-validation. This result is for the WBCD, but it suggests that the method can be used confidently for other breast cancer diagnosis problems as well.
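For orientation, a plain k-NN baseline with 10-fold cross-validation on the Wisconsin breast cancer data bundled with scikit-learn (the diagnostic variant, which may differ from the exact WBCD file used above); the fuzzy artificial immune preprocessing credited for the 99.14% figure is not reproduced:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Baseline: standardized features + 5-NN, scored with 10-fold CV.
X, y = load_breast_cancer(return_X_y=True)
clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
scores = cross_val_score(clf, X, y, cv=10)
print(f"10-fold accuracy: {scores.mean():.4f} +/- {scores.std():.4f}")
```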
