Similar Documents
20 similar documents found (search time: 234 ms)
1.
In this paper, a recursive principal component analysis (RPCA)-based algorithm is applied to detect and quantify the motion-artifact episodes encountered in an ECG signal. The motion-artifact signal is synthesized by low-pass filtering a random noise signal over different spectral ranges of the LPF (low-pass filter): 0–5 Hz, 0–10 Hz, 0–15 Hz and 0–20 Hz. The analysis is carried out for different SNR levels and forgetting factors (α) of the RPCA algorithm. The algorithm derives an error signal that flags a motion-artifact (noise) episode wherever one is present in the ECG record, with 100% accuracy. The RPCA error magnitude is almost zero over clean signal portions and considerably higher wherever motion artifacts (noisy episodes) are encountered. The general trend is a smaller error magnitude for higher SNR (i.e. a lower noise level) and vice versa. The algorithm is quantified by applying it to 25 ECG datasets of different morphologies and genres, with three SNR values for each forgetting factor and each of the four spectral ranges.
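The error-signal idea can be sketched in a few lines of numpy. This is an illustrative sliding-window recursion with a forgetting factor α, not the authors' exact RPCA formulation: the windowed covariance is updated recursively, and the reconstruction error of each window against the leading principal components spikes wherever the signal departs from its learned subspace.

```python
import numpy as np

def rpca_error(signal, win=16, alpha=0.99, n_pc=2):
    """Sliding-window recursive-PCA reconstruction error.

    A clean quasi-periodic signal is well captured by a few principal
    components of its windowed covariance; motion-artifact episodes
    inflate the reconstruction residual.
    """
    cov = np.eye(win)                       # recursive covariance estimate
    errors = np.zeros(len(signal) - win)
    for i in range(len(signal) - win):
        x = signal[i:i + win]
        cov = alpha * cov + (1 - alpha) * np.outer(x, x)
        vals, vecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
        basis = vecs[:, -n_pc:]             # top principal components
        recon = basis @ (basis.T @ x)       # projection onto learned subspace
        errors[i] = np.linalg.norm(x - recon)
    return errors
```

On a synthetic sine with a noise burst, the error trace is near zero on the clean portions and rises sharply over the burst, mirroring the behaviour the abstract describes.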

2.
Abstract

Ambulatory ECG monitoring records the electrical activity of the heart while a person carries out normal daily activities. The recorded signal therefore consists of the cardiac signal along with motion artifacts introduced by body movements, and detecting the artifacts caused by different physical activities can aid further cardiac diagnosis. This paper addresses ambulatory ECG analysis for the detection of various motion artifacts using an adaptive filtering approach. A BIOPAC MP 36 system was used to acquire the ECG. Signals from five healthy subjects (aged 22–30 years) were recorded in lead I configuration while each performed various body movements: up-and-down movement of the left hand, up-and-down movement of the right hand, waist twisting while standing, and moving from sitting on a chair to standing up. An adaptive filter was used to extract the motion-artifact component from the ambulatory ECG, and features of the artifact signal, extracted using the Gabor transform, were used to train an artificial neural network (ANN) to classify the body movements.
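The adaptive-filter extraction step can be sketched with a standard LMS noise canceller. The reference input here is an assumption (any channel correlated with the artifact but not the heart, e.g. an accelerometer), and the filter order and step size are illustrative, not the paper's settings:

```python
import numpy as np

def lms_cancel(primary, reference, order=8, mu=0.01):
    """LMS adaptive noise canceller.

    The filter shapes `reference` (correlated with the artifact only)
    to track the artifact in `primary` (ECG + artifact); the error is
    the cleaned ECG and the filter output is the extracted
    motion-artifact component.
    """
    w = np.zeros(order)
    cleaned = np.zeros(len(primary))
    artifact = np.zeros(len(primary))
    for n in range(order - 1, len(primary)):
        x = reference[n - order + 1:n + 1][::-1]  # newest sample first
        y = w @ x                                 # artifact estimate
        e = primary[n] - y                        # cleaned ECG sample
        w += 2 * mu * e * x                       # LMS weight update
        artifact[n] = y
        cleaned[n] = e
    return cleaned, artifact
```

The extracted `artifact` stream is what would then be fed to the Gabor-transform feature extraction and ANN classifier.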

3.
This work explores a new ECG data-compression method that exploits the power-spectrum characteristics of the ECG. The signal is decomposed by the wavelet transform into high- and low-frequency components; the low-frequency component is decomposed further to the required number of levels, while the high-frequency components are retained or discarded according to the energy in their frequency bands and their value for clinical diagnosis. Compression and reconstruction experiments on ECG data from the MIT physiological signal database show that the method balances the trade-off between compression ratio and reconstruction accuracy, achieving both a high compression ratio and high reconstruction fidelity, with markedly improved adaptability to different signals. The method also provides some denoising. These results indicate the advantage of combining ECG power-spectrum characteristics with the wavelet transform for ECG compression.

4.
This paper describes a hybrid technique, combining the wavelet transform with linear prediction, to achieve very effective electrocardiogram (ECG) data compression. First, the ECG signal is wavelet transformed using four different discrete wavelet transforms (Daubechies, Coiflet, Biorthogonal and Symmlet), all based on dyadic scales and decomposing the ECG into five detail levels and one approximation. The wavelet coefficients are then linearly predicted, with the predictor chosen to minimize the error between the coefficients and their predictions. Because the residuals of the wavelet coefficients are largely uncorrelated, they can be represented with fewer bits than the original signal. To increase the compression rate further, the residual sequence obtained after linear prediction is coded using a newly developed coding technique. As a result, a compression ratio (CR) of 20:1 is achieved with a percentage root-mean-square difference (PRD) below 4%. The algorithm is compared with an alternative compression algorithm based on the direct use of wavelet transforms; experiments on selected records from the MIT-BIH arrhythmia database show the proposed method to be significantly more efficient. The proposed scheme may find application in digital Holter recording, ECG signal archiving and ECG data transmission over communication channels.
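The linear-prediction step can be illustrated with plain least squares. This is a generic sketch (the paper's predictor optimisation and its residual coder are not reproduced): fit an order-p predictor to a coefficient sequence and return the residuals, whose variance is much smaller for correlated coefficients.

```python
import numpy as np

def lp_residual(coeffs, order=4):
    """Fit an order-p linear predictor by least squares.

    Returns (weights, residuals): residual[i] is the prediction error
    for coeffs[order + i].  For correlated wavelet coefficients the
    residuals have far smaller variance, so they need fewer bits.
    """
    y = coeffs[order:]
    # column k holds the lag-(k+1) values aligned with y
    X = np.column_stack([coeffs[order - k - 1:len(coeffs) - k - 1]
                         for k in range(order)])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w, y - X @ w
```

On an AR(1)-style correlated sequence the residual variance drops to a fraction of the coefficient variance, which is exactly what makes the subsequent entropy coding effective.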

5.
Background: This paper proposes a novel method for automatically identifying sleep apnea (SA) severity based on deep learning from a short-term normal electrocardiography (ECG) signal.
Methods: A convolutional neural network (CNN) was used as the identification model, implemented with one-dimensional convolutional, pooling, and fully connected layers. An optimal architecture is incorporated into the CNN model for precise identification of SA severity. A total of 144 subjects were studied. The nocturnal single-lead ECG signal was collected, and the short-term normal ECG was extracted from it, segmented into 30-second durations, and divided into two datasets for training and evaluation. The training set consists of 82,952 segments (66,360 training, 16,592 validation) from 117 subjects, while the test set has 20,738 segments from 27 subjects.
Results: An F1-score of 98.0% was obtained on the test set. Mild and moderate SA can be identified with an accuracy of 99.0%.
Conclusion: The results show the possibility of automatically identifying SA severity from a short-term normal ECG signal.

6.
To address ECG data compression, this study proposes a new two-dimensional ECG compression algorithm based on the wavelet transform. The algorithm first maps the one-dimensional ECG signal into a two-dimensional sequence so that both kinds of correlation present in ECG data can be fully exploited. A wavelet transform is then applied to the 2-D sequence, and an improved vector quantization (VQ) scheme is applied to the transformed coefficients, in which a new tree vector (TV) is constructed according to the characteristics of the wavelet coefficients. Comparative compression experiments on arrhythmia data from the MIT/BIH database, against existing wavelet-based algorithms and other 2-D ECG compression algorithms, show that the proposed algorithm suits ECG signals of various waveform characteristics and achieves a high compression ratio while preserving reconstruction quality, demonstrating practical value.

7.
Compression of the electrocardiogram (ECG) is necessary for efficient storage and transmission of digitized ECG signals. The discrete wavelet transform (DWT) has recently emerged as a powerful technique for ECG compression owing to its multi-resolution decomposition and locality properties. This paper presents an ECG compressor based on selecting optimum threshold levels for the DWT coefficients in different subbands, achieving maximum data-volume reduction while preserving the significant signal-morphology features upon reconstruction. First, the ECG is wavelet transformed into m subbands and the coefficients of each subband are thresholded at an optimal level; thresholding removes excessively small features and replaces them with zeroes. The threshold levels are defined per signal so that either the bit rate is minimized for a target distortion or the distortion is minimized for a target compression ratio. The significant wavelet coefficients that remain are then coded using the multi embedded zero tree (MEZW) coding technique. To assess the performance of the proposed compressor, records from the MIT-BIH Arrhythmia Database were compressed at different distortion levels, measured by the percentage rms difference (PRD), and compression ratios (CR). The method achieves good CR values with excellent reconstruction quality, comparing favourably with various classical and state-of-the-art ECG compressors. Finally, the method is flexible in controlling the quality of the reconstructed signals and the volume of the compressed signals by setting a target PRD or a target CR a priori.
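The "target compression ratio" side of the threshold selection can be sketched with a one-level orthonormal Haar transform standing in for the m-subband DWT, and a simple keep-the-largest rule standing in for the optimised per-subband thresholds; the MEZW coder itself is not reproduced here.

```python
import numpy as np

def haar_level(x):
    """One level of the orthonormal Haar DWT -> (approximation, detail)."""
    p = x.reshape(-1, 2)
    return (p[:, 0] + p[:, 1]) / np.sqrt(2), (p[:, 0] - p[:, 1]) / np.sqrt(2)

def inv_haar_level(a, d):
    out = np.empty(2 * len(a))
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

def compress_to_cr(x, target_cr):
    """Zero all but the N/target_cr largest-magnitude coefficients and
    report the percentage RMS difference (PRD) of the reconstruction."""
    a, d = haar_level(x)
    c = np.concatenate([a, d])
    keep = max(1, len(c) // target_cr)
    thr = np.sort(np.abs(c))[::-1][keep - 1]    # keep-th largest magnitude
    c = np.where(np.abs(c) >= thr, c, 0.0)
    y = inv_haar_level(c[:len(a)], c[len(a):])
    prd = 100 * np.linalg.norm(x - y) / np.linalg.norm(x)
    return y, prd
```

Because the transform is orthonormal, the energy discarded in the coefficient domain translates directly into the PRD of the reconstruction, which is why thresholding against a distortion target is tractable.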

8.
A quasi-lossless ECG compression algorithm based on the wavelet transform
This paper proposes a quasi-lossless ECG compression algorithm based on the wavelet transform. After a one-level wavelet decomposition of the original signal, the high-frequency and low-frequency components are losslessly compressed separately, according to the different number of bits each occupies. Experimental results show that the method introduces very little distortion, and that the algorithm is simple and fast.

9.
Abstract

Separating an information-bearing signal from background noise is a general problem in signal processing. In a clinical environment, the acquired ECG signal is corrupted by various noise sources such as powerline interference (PLI), baseline wander and muscle artifacts. This paper presents novel methods for reducing powerline interference in ECG signals using the empirical wavelet transform (EWT) and adaptive filtering. The proposed methods are compared with empirical mode decomposition (EMD)-based PLI cancellation methods. In total, six PLI-reduction methods based on EMD and EWT are analysed and their results are presented. The EWT-based de-noising methods have lower computational complexity and are more efficient than the EMD-based methods.
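The adaptive-filtering side of the comparison can be illustrated with the classic two-weight LMS notch (a standard PLI canceller, not the paper's EWT method): a quadrature pair at the interference frequency serves as the reference, and the weights converge to the interference's amplitude and phase.

```python
import numpy as np

def adaptive_pli_notch(ecg, fs, f0=50.0, mu=0.005):
    """Two-weight LMS notch filter: the error output is the cleaned ECG.

    The sin/cos reference pair spans every sinusoid at f0, so the
    adaptive weights lock onto the powerline component regardless of
    its phase, while the notch is narrow enough to leave the cardiac
    frequencies untouched.
    """
    n = np.arange(len(ecg))
    r = np.vstack([np.sin(2 * np.pi * f0 * n / fs),
                   np.cos(2 * np.pi * f0 * n / fs)])
    w = np.zeros(2)
    out = np.zeros(len(ecg))
    for i in range(len(ecg)):
        y = w @ r[:, i]             # current PLI estimate
        e = ecg[i] - y              # cleaned sample
        w += 2 * mu * e * r[:, i]   # LMS update
        out[i] = e
    return out
```

The step size `mu` trades convergence speed against notch bandwidth; the value here is illustrative.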

10.
Abstract

Atrial and ventricular arrhythmias are symptoms of the most common causes of sudden death. The severity of these arrhythmias depends on whether they occur within the atria or the ventricles, and such abnormalities of heart activity may cause immediate death or damage the heart. In this paper, a new algorithm is proposed for the classification of the life-threatening cardiac arrhythmias atrial fibrillation (AF), ventricular tachycardia (VT) and ventricular fibrillation (VF). The proposed technique uses simple signal processing to analyse the non-linear dynamics of ECG signals in the time domain. The classification algorithm is based on the distribution of the attractor in the reconstructed phase space (RPS): different arrhythmias are found to occupy different regions of the RPS, and three regions prove most representative of the considered arrhythmias, so only three simple features are extracted as classification parameters. To evaluate performance, real datasets were obtained from the MIT database: a learning dataset was used to design the classification algorithm, which was tuned to achieve both 100% sensitivity and 100% specificity on it, and a testing dataset was used to verify the algorithm. Validated on 45 ECG signals spanning the considered life-threatening arrhythmias, the classifier attains a sensitivity of 85.7–100%, a specificity of 86.7–100% and an overall accuracy of 95.55%.
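The reconstructed phase space and the region-occupancy features can be sketched as follows; the embedding delay, the grid, and the choice of cells are illustrative stand-ins for the paper's tuned values.

```python
import numpy as np

def reconstruct_phase_space(x, delay=8, dim=2):
    """Time-delay embedding: row i is (x[i], x[i+delay], ...)."""
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[k * delay:k * delay + n] for k in range(dim)])

def region_occupancy(rps, edges):
    """Fraction of attractor points falling in each grid cell.

    A few discriminative cells, chosen from such a grid, give the
    classification features: an organised rhythm traces a thin closed
    orbit, while disorganised activity spreads over the plane.
    """
    h, _, _ = np.histogram2d(rps[:, 0], rps[:, 1], bins=[edges, edges])
    return h / len(rps)
```

A regular oscillation leaves the central cell of the grid empty (its orbit is a ring), whereas noise-like activity fills it, which is the kind of regional contrast the classifier exploits.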

11.
The modified embedded zero-tree wavelet (MEZW) compression algorithm for one-dimensional signals was derived from Shapiro's EZW algorithm, which was originally developed for image compression. The proposed codec is shown to be significantly more efficient, in both compression and computation, than previously proposed ECG compression schemes. The coder also attains exact bit-rate control and generates a bit stream progressive in quality or rate. The EZW and MEZW algorithms apply chosen threshold values, or threshold expressions, to decide which transformed coefficients are significant. Two different threshold definitions, percentage and dyadic thresholds, are used, and they are applied to different wavelet types in the biorthogonal and orthogonal classes. The MEZW and EZW results are compared quantitatively in terms of the compression ratio (CR) and percentage root-mean-square difference (PRD). Experiments are carried out on selected records from the MIT-BIH arrhythmia database and on an original ECG signal. The MEZW algorithm shows a clear advantage over the traditional EZW in the CR achieved for a given PRD, and gives better results with biorthogonal wavelets than with orthogonal ones.

12.
Study Objectives: The frequency of cortical arousals is an indicator of sleep quality, and cortical arousals are also used to identify hypopneic events. However, it is inconvenient to record electroencephalogram (EEG) data during home sleep testing. Fortunately, most cortical arousal events are associated with autonomic nervous system activity that can be observed in an electrocardiography (ECG) signal, and ECG data have lower noise and are easier to record at home than EEG. In this study, we developed a deep learning-based cortical arousal detection algorithm that uses a single-lead ECG to detect arousal during sleep.
Methods: This study included 1,547 polysomnography records that met the inclusion criteria, selected from the Multi-Ethnic Study of Atherosclerosis database. We developed an end-to-end deep learning model consisting of convolutional and recurrent neural networks which (1) accepts physiological data of varying length, (2) extracts features directly from the raw ECG signal, (3) captures long-range dependencies in the data, and (4) produces arousal probabilities at 1-s resolution.
Results: On a test set (n = 311), the model achieved a gross area under the precision-recall curve of 0.62 and a gross area under the receiver operating characteristic curve of 0.93.
Conclusion: This study demonstrated that an end-to-end deep learning approach using a single-lead ECG has the potential to accurately detect arousals in home sleep tests.

13.
Background: Over the past two decades, high false alarm (FA) rates have remained an important yet unresolved concern in the Intensive Care Unit (ICU). High FA rates desensitize the attending staff to such warnings, slowing response times and degrading the quality of patient care. False arrhythmia alarms are commonly due to single-channel ECG artifacts and low-voltage signals, so FA rates may be reduced if information from other independent signals is used to form a more robust hypothesis of an alarm's etiology.
Methods: A large multi-parameter ICU database (PhysioNet's MIMIC II database) was used to investigate the frequency of five categories of false critical ("red" or "life-threatening") ECG arrhythmia alarms produced by a commercial ICU monitoring system: asystole, extreme bradycardia, extreme tachycardia, ventricular tachycardia and ventricular fibrillation/tachycardia. Non-critical ("yellow") arrhythmia alarms were not considered in this study. Multiple expert reviews of 5386 critical ECG arrhythmia alarms from 447 adult patient records in the MIMIC II database were made using the associated 41,301 h of simultaneous ECG and arterial blood pressure (ABP) waveforms. An algorithm to suppress false critical ECG arrhythmia alarms using morphological and timing information derived from the ABP signal was then tested.
Results: An average of 42.7% of the critical ECG arrhythmia alarms were found to be false, with the five alarm categories having FA rates between 23.1% and 90.7%. The suppression algorithm removed 59.7% of the false alarms, with FA reduction rates as high as 93.5% for asystole and 81.0% for extreme bradycardia. FA reduction rates were lowest for extreme tachycardia (63.7%) and ventricular-related alarms (58.2% for ventricular fibrillation/tachycardia and 33.0% for ventricular tachycardia). True alarm (TA) reduction rates were all 0%, except for ventricular tachycardia alarms (9.4%).
Conclusions: The FA suppression algorithm reduced the incidence of false critical ECG arrhythmia alarms from 42.7% to 17.2% where simultaneous ECG and ABP data were available. It demonstrates the potential of data fusion to reduce false ECG arrhythmia alarms in a clinical setting, but the non-zero TA reduction rate for ventricular tachycardia indicates the need for further refinement of the suppression strategy. To avoid suppressing any true alarms, the algorithm could be implemented for all alarms except ventricular tachycardia, which would reduce the FA rate from 42.7% to 22.7%; this implementation should be considered for prospective clinical evaluation. The public availability of a real-world ICU database of multi-parameter physiologic waveforms, together with their associated annotated alarms, is a new and valuable research resource for algorithm developers.
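The data-fusion idea, vetoing an ECG asystole alarm when the ABP channel still shows pulsatile beats, can be sketched crudely. This is a deliberately simple stand-in: the paper's actual ABP morphological and timing features are more sophisticated, and the window length and pulse-pressure floor below are illustrative assumptions.

```python
import numpy as np

def abp_has_pulse(abp, fs, window_s=4, min_pulse_mmhg=20.0):
    """Crude pulse check for vetoing an asystole alarm.

    Returns True if every 1-s chunk of the last `window_s` seconds of
    the ABP waveform shows an excursion above `min_pulse_mmhg`: the
    heart is still ejecting blood, so a simultaneous ECG asystole
    alarm is likely caused by an ECG artifact and can be suppressed.
    """
    seg = abp[-window_s * fs:]
    chunks = seg[:len(seg) // fs * fs].reshape(-1, fs)   # 1-s chunks
    excursion = chunks.max(axis=1) - chunks.min(axis=1)
    return bool(np.all(excursion > min_pulse_mmhg))
```

In a full system this check would run only inside the alarm window, and each alarm category would get its own ABP-derived evidence.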

14.
A new adaptive thresholding mechanism to determine the significant wavelet coefficients of an electrocardiogram (ECG) signal is proposed. It estimates thresholds for the different sub-bands using the concept of energy packing efficiency (EPE), and then optimizes the thresholds with the particle swarm optimization (PSO) algorithm to achieve a target compression ratio with minimum distortion. Simulation results on several records from the MIT-BIH Arrhythmia database show that the PSO converges exactly to the target compression ratio after four iterations, while the cost function reaches its minimum after six iterations. Compared with previously published schemes, lower distortion is achieved at the same compression ratios.
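The EPE-based threshold for a single sub-band can be written directly; the PSO loop that tunes the per-band EPE targets toward the overall compression-ratio goal is omitted from this sketch.

```python
import numpy as np

def epe_threshold(band, epe_percent):
    """Smallest magnitude threshold whose surviving coefficients retain
    at least `epe_percent` % of the sub-band's energy (the energy
    packing efficiency).  Coefficients below the threshold are the ones
    a compressor would zero out.
    """
    mags = np.sort(np.abs(band))[::-1]          # magnitudes, descending
    cum = np.cumsum(mags ** 2)                  # cumulative energy
    k = int(np.searchsorted(cum, epe_percent / 100.0 * cum[-1]))
    return mags[min(k, len(mags) - 1)]
```

In the paper's scheme, one such EPE value per sub-band is the decision variable that PSO adjusts until the target compression ratio is met with minimum distortion.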


16.
Abstract

Non-invasive detection of atrial fibrillation (AF) and atrial flutter (AFL) from the ECG at the time of onset can avert impending danger to patients. Most previous detection algorithms include a filtering step to remove noise and artefacts from the signal. In this paper, a method for AF and AFL detection from the ECG is proposed without the conventional filtering stage. The Phase Rectified Signal Average (PRSA) technique is used with a novel optimized windowing method to obtain an averaged signal free of quasi-periodicities. Both time-domain and statistical features are extracted from a novel SQ concatenated section of the signal for classification with a non-linear support vector machine (SVM). The proposed algorithm, tested on the MIT-BIH Arrhythmia database, yields good performance parameters, as reported in the results section.
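The PRSA core is compact. This sketch uses the standard fixed ±L window and the simplest anchor rule (x[i] > x[i-1]); the paper's optimised windowing and its SQ-section feature extraction are not reproduced.

```python
import numpy as np

def prsa(x, L=50):
    """Phase-Rectified Signal Averaging.

    Average windows of 2L samples centred on every 'increase' anchor
    (x[i] > x[i-1]).  Oscillations that are not phase-locked to the
    anchors average toward zero, which is how PRSA removes
    quasi-periodic components without a conventional filter.
    """
    anchors = [i for i in range(L, len(x) - L) if x[i] > x[i - 1]]
    return np.mean([x[i - L:i + L] for i in anchors], axis=0)
```

The averaged curve retains the anchor-defining step at its centre while background activity cancels, leaving the phase-coherent structure that the features are then computed from.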

17.
This paper proposes an ECG classification algorithm based on a convolutional network, with a dilated-convolution pooling pyramid module: information is extracted by dilated convolutions of different sizes and the channels are then aggregated, which strengthens the network's feature extraction while reducing the parameter count. The work focuses on four classes: sinus rhythm, premature atrial contraction, tachycardia and bradycardia. The ECG dataset comes from measured hospital data and contains records from 75,000 different subjects. In testing, the proposed model achieved an F1 score of 0.89 on this dataset and 0.87 on the CinC2017 dataset. The results show that the classification algorithm has excellent feature-extraction and classification ability and is promising for real-time ECG classification.
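The dilated-convolution building block can be sketched in numpy (forward pass only, correlation convention). The pyramid aggregation here is a simplified stand-in for the paper's module; the pooling layers and training are beyond a sketch.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """'Valid' 1-D correlation with taps spaced `dilation` apart:
    out[i] = sum_j kernel[j] * x[i + j*dilation].  The receptive field
    grows with the dilation while the parameter count stays fixed."""
    span = (len(kernel) - 1) * dilation
    out = np.zeros(len(x) - span)
    for j, kj in enumerate(kernel):
        out += kj * x[j * dilation:j * dilation + len(out)]
    return out

def pyramid(x, kernel, dilations=(1, 2, 4)):
    """Apply the same kernel at several dilation rates and aggregate
    the branches (cropped to a common length and summed), so one set
    of weights sees the signal at several temporal scales."""
    outs = [dilated_conv1d(x, kernel, d) for d in dilations]
    n = min(len(o) for o in outs)
    return sum(o[:n] for o in outs)
```

This is the mechanism by which the module widens the receptive field over the ECG without adding parameters.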

18.
19.
A quasi-periodic signal is a periodic signal with period and amplitude variations; the electrocardiogram (ECG) and several other physiological signals can be treated as quasi-periodic. Vector quantization (VQ) is a valuable and universal tool for signal compression. However, the periodicity of a quasi-periodic signal causes data redundancy in the VQ codebook, where many codevectors are highly correlated. This paper exploits codebook (CB) redundancy to increase storage efficiency for quasi-periodic physiological signals. A quantitative CB redundancy measure and two redundancy-reducing algorithms are proposed. Both algorithms use a mixed CB structure containing one- and two-dimensional CBs. The first algorithm is applied to a CB directly; the second uses an LBG-like training algorithm to obtain a storage-efficient CB from a set of training vectors. Experiments with the MIT/BIH ECG database show that both algorithms reduce the CB redundancy effectively with essentially no loss of signal quality. For comparison, the mean-shape VQ (MSVQ) proposed by Cárdenas-Barrera and Lorenzo-Ginori for ECG compression was implemented, yielding an average percentage root-mean-square difference (PRD) of 10.78%. The first algorithm reduces the CB storage space by 40% with an average PRD of 10.87%; the second reduces it by 75% with an average PRD of 10.27%, which is even better than the original MSVQ.
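A plain LBG/k-means codebook trainer shows the baseline the paper builds on; the contribution here (the redundancy measure and the mixed 1-D/2-D codebook) sits on top of a procedure like this, which is not reproduced.

```python
import numpy as np

def lbg_codebook(train, n_code, iters=20):
    """Train a VQ codebook: repeatedly assign each training vector to
    its nearest codevector, then move each codevector to the centroid
    of its cell.  Initialisation is a deterministic spread over the
    training set.  Encoding a vector is a nearest-codevector search."""
    init = np.linspace(0, len(train) - 1, n_code).astype(int)
    cb = train[init].astype(float)
    for _ in range(iters):
        # squared distances: (n_train, n_code)
        dist = ((train[:, None, :] - cb[None, :, :]) ** 2).sum(axis=2)
        idx = dist.argmin(axis=1)
        for k in range(n_code):
            members = train[idx == k]
            if len(members):
                cb[k] = members.mean(axis=0)
    return cb
```

For a quasi-periodic source, many of the resulting codevectors end up highly correlated, which is exactly the codebook redundancy the paper measures and removes.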

20.
This paper introduces an effective technique for the compression of one-dimensional signals using wavelet transforms. It is based on generating a binary stream of 1s and 0s that encodes the structure of the wavelet coefficients (i.e., the locations of the zero and nonzero coefficients). A new coding algorithm, similar to run-length encoding, has been developed to compress the binary stream. Compression performance is measured using the compression ratio (CR) and the percentage root-mean-square difference (PRD). To assess the technique properly, the effects of signal length, threshold-level selection and wavelet filters on the quality of the reconstructed signal are evaluated, and the effect of finite word-length representation on the CR and PRD is also discussed. The technique is tested on the compression of normal and abnormal electrocardiogram (ECG) signals, achieving compression ratios of 19:1 and 45:1 with PRDs of 1% and 2.8%, respectively. At the receiver end, the received stream is decoded and inverse transformed before further processing. Finally, the merits and limitations of the technique are discussed.
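The binary significance stream and its run-length code can be sketched in a few lines of plain Python; the paper's actual bit-packing and its modified run-length scheme are not reproduced.

```python
def rle_encode(bits):
    """Run-length encode a 0/1 significance map (1 = nonzero wavelet
    coefficient at this position) as (symbol, run-length) pairs.
    Thresholded wavelet coefficients produce long zero runs, which is
    what makes this map cheap to transmit."""
    runs = []
    prev, count = bits[0], 1
    for b in bits[1:]:
        if b == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = b, 1
    runs.append((prev, count))
    return runs

def rle_decode(runs):
    """Invert rle_encode, recovering the exact significance map."""
    out = []
    for sym, count in runs:
        out.extend([sym] * count)
    return out
```

The decoder restores the map exactly, so only the nonzero coefficient values themselves need to be sent alongside it.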


Copyright©北京勤云科技发展有限公司  京ICP备09084417号