Results 1 - 18 of 18
1.
Comput Methods Programs Biomed ; 249: 108157, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38582037

ABSTRACT

BACKGROUND AND OBJECTIVE: T-wave alternans (TWA) is a fluctuation in the repolarization morphology of the ECG. It is associated with cardiac instability and sudden cardiac death risk. Diverse methods have been proposed for TWA analysis. However, TWA detection in ambulatory settings remains a challenge due to the absence of standardized evaluation metrics and detection thresholds. METHODS: In this work, we use traditional signal processing-based TWA analysis methods for feature extraction, and two machine learning (ML) methods, namely K-nearest-neighbor (KNN) and random forest (RF), for TWA detection, addressing hyper-parameter tuning and feature selection. The final goal is the detection of short, non-sustained, and sparse TWA events in ambulatory recordings. RESULTS: We train the ML methods to detect a wide range of alternans voltages, from 20 to 100 µV, i.e., from non-visible micro-alternans to higher-amplitude TWA, in concordance with risk stratification. In classification, RF significantly outperforms the signal processing methods in recall, at the expense of a small loss in precision. Although ambulatory detection is an imbalanced-class problem, the trained ML systems always outperform the signal processing methods. CONCLUSIONS: We propose a comprehensive integration of multiple variables inspired by TWA signal processing methods to feed learning-based methods. The ML models consistently outperform the best signal processing methods, yielding superior recall scores.
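As a rough illustration of the pipeline described above, the sketch below trains a random forest with cross-validated hyper-parameter tuning on pre-extracted TWA features; the feature layout and data are placeholders, not the authors' setup.

```python
# Minimal sketch (not the authors' code): random forest TWA detection from
# features produced by classical TWA analysis methods.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# X: one row per ECG segment; columns could hold, e.g., spectral-method
# alternans voltage, MMA maximum alternans, time-domain statistics, noise level.
X = np.random.rand(500, 8)          # stand-in for real extracted features
y = np.random.randint(0, 2, 500)    # 1 = TWA episode present, 0 = absent

# Hyper-parameter tuning via cross-validated grid search, as the abstract describes.
grid = GridSearchCV(
    RandomForestClassifier(class_weight="balanced", random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
    scoring="recall",  # recall is the metric the paper emphasizes
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```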


Subject(s)
Arrhythmias, Cardiac , Electrocardiography, Ambulatory , Humans , Electrocardiography, Ambulatory/methods , Heart Rate , Arrhythmias, Cardiac/diagnosis , Death, Sudden, Cardiac , Signal Processing, Computer-Assisted , Electrocardiography/methods
2.
Heliyon ; 9(1): e12947, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36699267

ABSTRACT

Background and objective: T-wave alternans (TWA) is a fluctuation of the ST-T complex of the surface electrocardiogram (ECG) on an every-other-beat basis. It has been shown to be clinically helpful for sudden cardiac death stratification, though the lack of a gold standard to benchmark detection methods limits its application and impairs the development of alternative techniques. In this work, a novel approach based on machine learning for TWA detection is proposed. Additionally, a complete experimental setup is presented for benchmarking TWA detection methods. Methods: The proposed experimental setup is based on the use of open-source databases to enable experiment replication and on real ECG signals with added TWA episodes. Intra-patient overfitting and class imbalance have been carefully avoided. The Spectral Method (SM), the Modified Moving Average method (MMA), and the Time Domain method (TM) are used to obtain input features for the machine learning (ML) algorithms, namely K-nearest neighbor, decision trees, random forest, support vector machine, and multi-layer perceptron. Results: No large differences were found in the performance of the different ML algorithms. Decision trees showed the best overall performance (accuracy 0.88 ± 0.04, precision 0.89 ± 0.05, recall 0.90 ± 0.05, F1 score 0.89 ± 0.03). Compared to the SM (accuracy 0.79, precision 0.93, recall 0.64, F1 score 0.76), there was an improvement in every metric except precision. Conclusions: In this work, a realistic database to test for the presence of TWA using ML algorithms was assembled. The ML algorithms overall outperformed the SM used as the gold standard. Learning from data to identify alternans yields a substantial increase in detection at the expense of a small increase in false alarms.
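For reference, a minimal sketch of the classical Spectral Method used here as the baseline, under the usual assumptions (an even number of pre-aligned ST-T complexes, e.g., 128, and a noise band around 0.40-0.46 cycles/beat); this is a textbook reading, not the paper's code.

```python
# Minimal sketch of the Spectral Method (SM) for TWA detection.
import numpy as np

def spectral_method(st_t, noise_band=(0.40, 0.46)):
    """st_t: (n_beats, n_samples) matrix of aligned ST-T complexes; n_beats even."""
    n_beats = st_t.shape[0]
    # Power spectrum along the beat axis, averaged over the samples of the complex.
    spec = np.abs(np.fft.rfft(st_t - st_t.mean(axis=0), axis=0)) ** 2
    spec = spec.mean(axis=1)
    freqs = np.fft.rfftfreq(n_beats)                 # in cycles/beat
    p_alt = spec[-1]                                 # bin at 0.5 cycles/beat
    noise = spec[(freqs >= noise_band[0]) & (freqs < noise_band[1])]
    k_score = (p_alt - noise.mean()) / noise.std()   # TWA usually declared if K > 3
    v_alt = np.sqrt(max(p_alt - noise.mean(), 0.0))  # alternans voltage estimate
    return k_score, v_alt
```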

3.
Sensors (Basel) ; 22(20)2022 Oct 14.
Article in English | MEDLINE | ID: mdl-36298178

ABSTRACT

Power line infrastructure is available almost everywhere. Positioning systems aim to estimate where a device or target is. Consequently, there may be an opportunity to use power lines for positioning purposes. This survey article reports the different efforts, working principles, and possibilities for implementing power line positioning systems (PLPS), i.e., positioning systems that rely on power line infrastructure. Since Power Line Communication (PLC) systems of different characteristics have been deployed to provide communication services over the existing mains, we also address how PLC systems may be employed to build positioning systems. Although some efforts exist, PLPS are still prospective and thus open to research and development, and we indicate possible directions and potential applications for PLPS.

4.
Sensors (Basel) ; 21(1)2021 Jan 04.
Article in English | MEDLINE | ID: mdl-33406684

ABSTRACT

The aim of this paper is to formulate the physical layer of the broadband and narrowband power line communication (PLC) systems described in standards IEEE 1901 and IEEE 1901.2, which address new communication technologies over electrical networks for Smart Grid and Internet of Things applications. Specifically, this paper presents a matrix-based mathematical formulation of a transmitter and receiver system based on windowed OFDM. The proposed formulation is essential for obtaining the input-output relation, as well as for analysing the interference present in the system. It is very useful for simulating PLC systems with software designed to operate primarily on whole matrices and arrays, such as Matlab. In addition, it eases the analysis and design of different receiver configurations, simply by modifying or adding a matrix. Since the relevant standards only describe the blocks corresponding to the transmitter, and leave the set-up of the receiver open to the manufacturer, we analysed four different possible schemes that include window functions in different configurations. In simulations, the behaviour of each of these schemes is analysed in terms of bit error rate and achievable data rate using artificial and real noise.
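A minimal sketch of the kind of matrix formulation the paper develops, assuming a simple raised-cosine window and illustrative parameter values; the actual IEEE 1901 / 1901.2 block structure is richer than this.

```python
# Minimal sketch (assumed notation, not the paper's exact formulation): a
# windowed-OFDM transmitter written purely as matrix products.
import numpy as np

N, cp, w = 8, 2, 2                      # subcarriers, cyclic prefix, window tail
F = np.fft.ifft(np.eye(N), axis=0)      # N-point IDFT matrix (acts on symbol vector)
# Extension matrix: prepend the last cp+w samples, append the first w samples.
E = np.vstack([np.eye(N)[-(cp + w):], np.eye(N), np.eye(N)[:w]])
# Diagonal windowing matrix: raised-cosine tapers on the w edge samples.
taper = 0.5 * (1 - np.cos(np.pi * (np.arange(w) + 0.5) / w))
win = np.concatenate([taper, np.ones(N + cp), taper[::-1]])
G = np.diag(win) @ E @ F                # full transmit matrix

s = np.random.randn(N) + 1j * np.random.randn(N)   # one symbol vector
x = G @ s                                # time-domain windowed OFDM symbol
```

Writing the chain as a single matrix G makes the input-output relation and the interference terms directly readable from matrix products, which is the point the abstract makes about simulation in matrix-oriented software.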

5.
Comput Methods Programs Biomed ; 145: 147-155, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28552120

ABSTRACT

BACKGROUND AND OBJECTIVE: T-wave alternans (TWA) is a fluctuation of the ST-T complex of the surface electrocardiogram (ECG) occurring on an every-other-beat basis. It has been shown to be an informative risk stratifier for sudden cardiac death, though the lack of a gold standard to benchmark detection methods has promoted the use of synthetic signals. This work proposes a novel signal model to study the performance of TWA detection. Additionally, the methodological validation of a denoising technique based on empirical mode decomposition (EMD), which is used here along with the spectral method, is also tackled. METHODS: The proposed test bed system is based on the following guidelines: (1) use of open-source databases to enable experimental replication; (2) use of real ECG signals and physiological noise; (3) inclusion of randomized TWA episodes. Both sensitivity (Se) and specificity (Sp) are analyzed separately. Also, a nonparametric hypothesis test, based on bootstrap resampling, is used to determine whether the presence of the EMD block actually improves performance. RESULTS: The results show an outstanding specificity when the EMD block is used, even in very noisy conditions (0.96 compared to 0.72 for SNR = 8 dB), always superior to that of the conventional SM alone. Regarding sensitivity, the EMD method also performs better in noisy conditions (0.57 compared to 0.46 for SNR = 8 dB), while it decreases in noiseless conditions. CONCLUSIONS: The proposed test setting guarantees that the actual physiological variability of the cardiac system is reproduced. The use of the EMD-based block in noisy environments enables the identification of most patients with fatal arrhythmias.
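A minimal sketch of the bootstrap comparison described in the methods, with synthetic stand-ins for the per-recording detector outcomes; the real test operates on the paper's detection results.

```python
# Minimal sketch (assumed setup): nonparametric bootstrap test of whether the
# EMD-enhanced detector improves sensitivity over the plain spectral method.
import numpy as np

rng = np.random.default_rng(0)
# Per-episode detection outcomes on TWA-positive episodes (1 = detected).
hits_sm  = rng.binomial(1, 0.46, 200)   # stand-ins for real detector outputs
hits_emd = rng.binomial(1, 0.57, 200)

diffs = []
for _ in range(10_000):                  # resample episodes with replacement
    idx = rng.integers(0, 200, 200)
    diffs.append(hits_emd[idx].mean() - hits_sm[idx].mean())
p_value = np.mean(np.array(diffs) <= 0)  # one-sided: is the improvement > 0?
print(f"bootstrap p-value = {p_value:.4f}")
```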


Subject(s)
Arrhythmias, Cardiac/diagnosis , Electrocardiography/standards , Benchmarking , Humans , Sensitivity and Specificity
6.
Physiol Meas ; 36(9): 1981-94, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26260978

ABSTRACT

The aim of electrocardiogram (ECG) compression is to reduce the amount of data as much as possible while preserving the information that is significant for diagnosis. Objective metrics derived directly from the signal are suitable for controlling the quality of compressed ECGs in practical applications. Many approaches have employed figures of merit based on the percentage root-mean-square difference (PRD) for this purpose. The benefits and drawbacks of the PRD measures, along with other metrics for quality assessment in ECG compression, are analysed in this work. We propose the use of the root mean square error (RMSE) for quality control because it provides a clearer and more stable idea of how much the retrieved ECG waveform, which is the reference signal for establishing a diagnosis, deviates from the original. For this reason, the RMSE is applied here as the target metric in a thresholding algorithm that relies on the retained energy. A state-of-the-art compressor based on this approach, and its PRD-based counterpart, are implemented to test the actual capabilities of the proposed technique. Both compression schemes are employed in several experiments with the whole MIT-BIH Arrhythmia Database to assess both global and local signal distortion. The results show that, using the RMSE for quality control, the distortion of the reconstructed signal is better controlled without reducing the compression ratio.
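The two metrics under discussion, as commonly defined (our sketch, not the paper's code); PRD is relative to the signal energy, while RMSE is expressed in the signal's own units.

```python
# Common definitions of the two quality metrics compared in this work.
import numpy as np

def prd(x, x_rec):
    # Percentage root-mean-square difference, normalized by signal energy.
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def rmse(x, x_rec):
    # Root mean square error, in the same units as the signal (e.g., µV).
    return np.sqrt(np.mean((x - x_rec) ** 2))
```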


Subject(s)
Data Compression/methods , Electrocardiography/methods , Algorithms , Arrhythmias, Cardiac/physiopathology , Data Compression/standards , Databases, Factual , Electrocardiography/standards , Quality Control
7.
Physiol Meas ; 33(7): 1237-47, 2012 Jul.
Article in English | MEDLINE | ID: mdl-22735392

ABSTRACT

Coding distortion in lossy electroencephalographic (EEG) signal compression methods is evaluated through tractable objective criteria. The percentage root-mean-square difference, a global and relative indicator of the quality of the reconstructed waveforms, is the most widely used criterion. However, this parameter does not ensure compliance with clinical standard guidelines that specify limits on allowable noise in EEG recordings. As a result, expert clinicians may have difficulty interpreting the resulting distortion of the EEG for a given value of this parameter. Conversely, the root-mean-square error is an alternative criterion that quantifies distortion in understandable units. In this paper, we demonstrate that the root-mean-square error is better suited to controlling and assessing the distortion introduced by compression methods. The experiments conducted in this paper show that using the root-mean-square error as the target parameter in EEG compression allows both clinicians and scientists to infer whether the coding error is clinically acceptable, at no cost in compression ratio.


Subject(s)
Data Compression/methods , Electroencephalography/methods , Statistics as Topic/methods , Adolescent , Child , Child, Preschool , Databases as Topic , Female , Humans , Infant , Male , Young Adult
8.
Med Eng Phys ; 34(7): 892-9, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22056794

ABSTRACT

Long-term electroencephalographic (EEG) recordings are becoming more frequent due to their diagnostic potential and the growth of novel signal processing methods that deal with this type of recording. In these cases, the considerable volume of data to be managed makes compression necessary to reduce the bit rate for transmission and storage applications. In this paper, a new compression algorithm specifically designed to encode EEG signals is proposed. Cosine-modulated filter banks are used to decompose the EEG signal into a set of subbands well adapted to the characteristic frequency bands of the EEG. Given that no regular pattern can be easily extracted from the signal in the time domain, a thresholding-based method is applied to quantize the samples. A retained-energy method is designed to efficiently compute the threshold in the decomposition domain, which at the same time allows the quality of the reconstructed EEG to be controlled. The experiments are conducted over a large set of signals taken from two public databases available at Physionet, and the results show that the compression scheme yields better compression than other reported methods.
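A minimal sketch of a retained-energy thresholding rule consistent with the description above; the exact computation in the paper may differ.

```python
# Minimal sketch: keep the smallest set of largest-magnitude coefficients whose
# energy reaches a target fraction of the total, then zero the rest.
import numpy as np

def retained_energy_threshold(coeffs, energy_fraction=0.999):
    """coeffs: 1-D array of subband coefficients; returns a thresholded copy."""
    order = np.argsort(np.abs(coeffs))[::-1]          # largest magnitude first
    energy = np.cumsum(coeffs[order] ** 2)
    k = np.searchsorted(energy, energy_fraction * energy[-1]) + 1
    out = np.zeros_like(coeffs)
    out[order[:k]] = coeffs[order[:k]]                # keep the top-k coefficients
    return out
```

Because the fraction of retained energy bounds the reconstruction error in the decomposition domain, tuning it gives direct control over the quality of the reconstructed EEG, which is the property the abstract highlights.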


Subject(s)
Electroencephalography/methods , Signal Processing, Computer-Assisted , Adolescent , Algorithms , Child , Child, Preschool , Databases, Factual , Entropy , Female , Humans , Infant , Male , Young Adult
9.
Article in English | MEDLINE | ID: mdl-22255966

ABSTRACT

Due to the large volume of information generated in an electroencephalographic (EEG) study, compression is needed for storage, processing, or transmission for analysis. In this paper, we evaluate and compare two lossy compression techniques applied to EEG signals: schemes based on decomposition by filter banks and by the wavelet packet transform, seeking the best compression, the best quality, and the most efficient real-time implementation. Due to specific properties of EEG signals, we propose a quantization stage adapted to the dynamic range of each band, aiming at higher quality. The results show that the filter bank-based compressor performs better than the transform-based methods, and that quantization adapted to the dynamic range significantly enhances quality.
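A minimal sketch of a uniform quantizer whose step is adapted to each subband's dynamic range, as the abstract proposes; parameter choices are illustrative.

```python
# Minimal sketch (assumed design): per-band uniform quantization with the step
# size derived from that band's dynamic range.
import numpy as np

def quantize_band(band, n_bits=8):
    lo, hi = band.min(), band.max()                   # band's dynamic range
    span = hi - lo
    step = span / (2 ** n_bits - 1) if span > 0 else 1.0
    codes = np.round((band - lo) / step).astype(int)  # integer codewords
    return codes, lo, step                            # side info needed to decode

def dequantize_band(codes, lo, step):
    return codes * step + lo
```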


Subject(s)
Electroencephalography/instrumentation , Electroencephalography/methods , Signal Processing, Computer-Assisted , Algorithms , Computers , Data Compression , Humans , Models, Statistical , Polysomnography/instrumentation , Polysomnography/methods , Reproducibility of Results , Software , Wavelet Analysis
10.
Article in English | MEDLINE | ID: mdl-22255969

ABSTRACT

The aim of electrocardiogram (ECG) compression is to achieve as much compression as possible while preserving the significant information in the reconstructed signal. Lossy thresholding-based compressors have shown good performance while requiring few computational resources. In this work, two compression schemes that include nearly perfect reconstruction cosine-modulated filter banks for the signal decomposition are proposed. They are evaluated for highly reliable applications, where the reconstructed signal must be very similar to the original. The whole MIT-BIH Arrhythmia Database and suitable metrics are used in the assessment to obtain representative results. The results show that the proposed compressors yield better performance than discrete wavelet transform-based techniques when high quality requirements are imposed.


Subject(s)
Data Compression/methods , Electrocardiography/methods , Signal Processing, Computer-Assisted , Algorithms , Arrhythmias, Cardiac/physiopathology , Computers , Humans , Models, Statistical , Reproducibility of Results , Software , Wavelet Analysis
11.
IEEE Trans Biomed Eng ; 57(10): 2402-12, 2010 Oct.
Article in English | MEDLINE | ID: mdl-20409985

ABSTRACT

Repolarization alternans, or T-wave alternans (TWA), is a subject of great interest as it has been shown to be a risk stratifier for sudden cardiac death. As TWA consists of subtle, non-visible variations of the ST-T complex, its detection becomes more difficult in noisy environments, such as stress testing or Holter recordings. In this paper, a technique based on empirical mode decomposition (EMD) is proposed to separate the useful information of the ST-T complex from noise and artifacts. The identification of the useful part of the signal is based on the study of complexity in the EMD domain by means of the Hjorth descriptors. As a result, a robust technique to extract the trend of the ST-T complex has been achieved. The method is evaluated with the spectral method (SM) over several public-domain databases with ECGs sampled at different frequencies. The results show that the SM with the proposed technique outperforms the traditional SM by more than 2 dB. The robustness of the technique is also guaranteed, as it does not introduce any additional distortion to the detector in noiseless conditions.
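For reference, the Hjorth descriptors used to rank complexity in the EMD domain; the definitions below are standard, while the surrounding mode-selection logic is the paper's.

```python
# The three Hjorth descriptors of a 1-D signal (standard definitions).
import numpy as np

def hjorth(x):
    dx = np.diff(x)                                   # first derivative
    ddx = np.diff(dx)                                 # second derivative
    activity = np.var(x)                              # signal power
    mobility = np.sqrt(np.var(dx) / activity)         # mean-frequency proxy
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity
```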


Subject(s)
Electrocardiography/methods , Models, Cardiovascular , Signal Processing, Computer-Assisted , Algorithms , Artifacts , Computer Simulation , Databases, Factual , Heart Ventricles/physiopathology , Humans , Nonlinear Dynamics
12.
J Voice ; 24(6): 667-77, 2010 Nov.
Article in English | MEDLINE | ID: mdl-20207107

ABSTRACT

A new index is introduced in this article to measure the degree of normality in speech. The proposed parameter has been shown to correlate with perceived hoarseness, giving an indication of the degree of normality. The calculation of the parameter is based on a statistical model developed to represent normal and pathological voices. The modeling is built around Gaussian mixture models and Mel-frequency cepstral coefficients. The proposed index has been named the pathological likelihood index (PLI). The PLI is compared with other aperiodicity features (such as jitter and shimmer) and with measurements sensitive to additive noise (such as the harmonics-to-noise ratio (HNR), cepstrum-based HNR, normalized noise energy, and glottal-to-noise excitation ratio). The proposed parameter proves to be a good estimator of the presence of pathology, showing lower correlation with the noise, frequency, and amplitude perturbation parameters than these classical features show among themselves.
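A minimal sketch of how such a likelihood index can be built from two class-conditional GMMs over MFCC frames; this is our reading of the abstract, and the function names are ours.

```python
# Minimal sketch (assumed construction): two GMMs over MFCC frames, one per
# class; the index is the average log-likelihood ratio of a test utterance.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_models(mfcc_normal, mfcc_pathological, n_components=16):
    # Each input: (n_frames, n_coeffs) matrix of MFCC frames for that class.
    gm_n = GaussianMixture(n_components, covariance_type="diag").fit(mfcc_normal)
    gm_p = GaussianMixture(n_components, covariance_type="diag").fit(mfcc_pathological)
    return gm_n, gm_p

def pli(mfcc_frames, gm_n, gm_p):
    # Positive values lean pathological, negative lean normal.
    return np.mean(gm_p.score_samples(mfcc_frames) - gm_n.score_samples(mfcc_frames))
```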


Subject(s)
Hoarseness/diagnosis , Likelihood Functions , Phonation , Speech Perception , Voice Quality , Fourier Analysis , Hoarseness/physiopathology , Hoarseness/psychology , Humans , Reproducibility of Results , Severity of Illness Index , Signal Processing, Computer-Assisted , Sound Spectrography , Speech Acoustics , Speech Production Measurement , Time Factors
13.
J Voice ; 24(1): 47-56, 2010 Jan.
Article in English | MEDLINE | ID: mdl-19135854

ABSTRACT

This paper evaluates the capabilities of the Glottal to Noise Excitation Ratio for the screening of voice disorders. Much effort has been devoted to using this parameter to evaluate voice quality, but no studies evaluate the capability of this acoustic parameter to discriminate between normal and pathological voices, nor are there previous studies reporting normative values that could be used for screening purposes. A set of 226 speakers (53 normal and 173 pathological) taken from a voice disorders database was used to evaluate the usefulness of this parameter for discriminating between normal and pathological voices. The effects of the bandwidth of the Hilbert envelopes and of the frequency shift have been analyzed, concluding that good discrimination is obtained with a bandwidth of 1000 Hz and a frequency shift of 300 Hz. The results confirm that the Glottal to Noise Excitation Ratio provides reliable measurements in terms of discrimination between normal and pathological voices, comparable to other classical long-term noise measurements found in the literature, such as the Normalized Noise Energy or the Harmonics to Noise Ratio, so this parameter can be considered a good choice for screening purposes.
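A simplified sketch of the Glottal to Noise Excitation Ratio with the reported settings (1000 Hz bandwidth, 300 Hz frequency shift); LPC inverse filtering and the lag search are omitted, so this is an approximation rather than the paper's implementation.

```python
# Simplified GNE sketch: Hilbert envelopes of fixed-bandwidth bands, then the
# maximum zero-lag correlation between envelopes of well-separated bands.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def gne(x, fs, bw=1000.0, shift=300.0):
    centers = np.arange(bw / 2 + shift, fs / 2 - bw / 2, shift)
    envs = []
    for fc in centers:
        b, a = butter(4, [(fc - bw / 2) / (fs / 2), (fc + bw / 2) / (fs / 2)], "band")
        env = np.abs(hilbert(filtfilt(b, a, x)))      # Hilbert envelope of the band
        envs.append((env - env.mean()) / env.std())   # standardize for correlation
    best, n = 0.0, len(x)
    for i in range(len(envs)):
        for j in range(i + 1, len(envs)):
            if centers[j] - centers[i] > bw / 2:      # only well-separated bands
                best = max(best, np.dot(envs[i], envs[j]) / n)
    return best
```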


Subject(s)
Glottis/physiopathology , Noise , Speech Acoustics , Voice Disorders/diagnosis , Voice Disorders/physiopathology , Adult , Algorithms , Area Under Curve , Databases as Topic , Female , Humans , Male , Middle Aged , ROC Curve , Sex Characteristics , Voice , Young Adult
14.
IEEE Trans Biomed Eng ; 55(12): 2831-5, 2008 Dec.
Article in English | MEDLINE | ID: mdl-19126465

ABSTRACT

This paper investigates the performance of an automatic system for voice pathology detection when the voice samples have been compressed in MP3 format at different bit rates (160, 96, 64, 48, 24, and 8 kb/s). The detectors employ cepstral and noise measurements, along with their derivatives, to characterize the voice signals. The classification is performed using Gaussian mixture models and support vector machines. The results of the different proposed detectors are compared by means of detector error tradeoff (DET) and receiver operating characteristic (ROC) curves, concluding that there are no significant differences in the performance of the detector when the bit rate of the compressed data is above 64 kb/s. This has useful applications in telemedicine, reducing the storage space of voice recordings or allowing them to be transmitted over narrow-band communication channels.
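A minimal sketch of the kind of ROC-based comparison across bit rates described here, assuming detector scores are already available for each compression condition.

```python
# Minimal sketch (ours): comparing detectors evaluated on voices compressed at
# different MP3 bit rates via ROC curves, in the spirit of this evaluation.
from sklearn.metrics import roc_curve, auc

def compare(scores_by_rate, labels):
    """scores_by_rate: dict mapping bit rate (kb/s) -> detector scores
    on the same test set; labels: 1 = pathological, 0 = normal."""
    for rate, scores in scores_by_rate.items():
        fpr, tpr, _ = roc_curve(labels, scores)
        print(f"{rate} kb/s: AUC = {auc(fpr, tpr):.3f}")
```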


Subject(s)
Artifacts , Data Compression/methods , Sound Spectrography/methods , Speech Acoustics , Voice Disorders/diagnosis , Artificial Intelligence , Fourier Analysis , Humans , Multimedia , Normal Distribution , Pattern Recognition, Automated/methods , ROC Curve , Voice , Voice Disorders/physiopathology , Voice Quality
15.
IEEE Trans Biomed Eng ; 54(4): 766-9, 2007 Apr.
Article in English | MEDLINE | ID: mdl-17405386

ABSTRACT

Most of the recent electrocardiogram (ECG) compression approaches developed with the wavelet transform are implemented using the discrete wavelet transform. Conversely, wavelet packets (WP) are not extensively used, although they provide an adaptive decomposition for representing signals. In this paper, we present a thresholding-based method to encode ECG signals using WP. The design of the compressor has been carried out according to two main goals: (1) the scheme should be simple, to allow real-time implementation; (2) quality, i.e., the reconstructed signal should be as similar as possible to the original. The proposed scheme is versatile in that neither QRS detection nor a priori signal information is required; it can thus be applied to any ECG. The results show that WP perform efficiently and can now be considered an alternative in ECG compression applications.
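A minimal sketch of a wavelet packet thresholding compressor in the spirit of the paper, using PyWavelets; the wavelet, depth, and threshold are illustrative choices, not the paper's configuration.

```python
# Minimal sketch: wavelet-packet decomposition, hard thresholding of the
# leaf-node coefficients, and reconstruction of an ECG segment.
import numpy as np
import pywt

def wp_compress(x, wavelet="bior4.4", level=4, threshold=0.01):
    wp = pywt.WaveletPacket(x, wavelet, maxlevel=level)
    for node in wp.get_level(level):
        node.data = pywt.threshold(node.data, threshold, mode="hard")
    return wp.reconstruct(update=False)

x = np.random.randn(1024)            # stand-in for an ECG segment
x_rec = wp_compress(x)
```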


Subject(s)
Algorithms , Artifacts , Data Compression/methods , Electrocardiography/methods , Signal Processing, Computer-Assisted , Feasibility Studies , Humans , Reproducibility of Results , Sensitivity and Specificity
16.
Conf Proc IEEE Eng Med Biol Soc ; 2006: 2478-81, 2006.
Article in English | MEDLINE | ID: mdl-17946516

ABSTRACT

Nowadays, the most widespread techniques for measuring voice quality are based on perceptual evaluation by well-trained professionals. The GRBAS scale is a widely used method for perceptual evaluation of voice quality, common in Japan and attracting increasing interest in both Europe and the United States. However, this technique requires well-trained experts and is based on the evaluator's expertise, depending heavily on the evaluator's own psycho-physical state; moreover, considerable variability is observed in the assessments from one evaluator to another. Therefore, an objective method to provide such a measurement of voice quality would be very valuable. In this paper, the automatic assessment of voice quality is addressed by means of short-term Mel-frequency cepstral coefficients (MFCC) and learning vector quantization (LVQ) in a pattern recognition stage. The results show that this approach provides acceptable results for this purpose, with an accuracy of around 65% at best.
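A minimal hand-rolled sketch of the LVQ1 update rule on MFCC feature vectors; the paper's actual LVQ configuration is not specified in the abstract.

```python
# Minimal LVQ1 sketch: prototypes are pulled toward frames of their own class
# and pushed away from frames of other classes.
import numpy as np

def lvq1(X, y, n_protos_per_class=4, lr=0.05, epochs=20, seed=0):
    """X: (n_frames, n_coeffs) float MFCC vectors; y: class labels per frame."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    protos = np.vstack([X[y == c][rng.choice((y == c).sum(), n_protos_per_class)]
                        for c in classes]).astype(float)
    proto_y = np.repeat(classes, n_protos_per_class)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = np.argmin(np.sum((protos - X[i]) ** 2, axis=1))  # nearest prototype
            sign = 1.0 if proto_y[j] == y[i] else -1.0
            protos[j] += sign * lr * (X[i] - protos[j])
    return protos, proto_y
```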


Subject(s)
Diagnosis, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Severity of Illness Index , Sound Spectrography/methods , Speech Production Measurement/methods , Voice Disorders/diagnosis , Voice Quality , Algorithms , Artificial Intelligence , Humans , Reproducibility of Results , Sensitivity and Specificity , Voice Disorders/classification
17.
Med Eng Phys ; 27(9): 798-802, 2005 Nov.
Article in English | MEDLINE | ID: mdl-15869896

ABSTRACT

The quality of the reconstructed signal in an electrocardiogram (ECG) compression scheme must be measured by objective means, the percentage root-mean-square difference (PRD) being the most widely used. However, this parameter depends on the dc level of the signal, which can lead to confusion when evaluating ECG compressors. In this communication, it is shown that if the performance of an ECG coder is evaluated only in terms of quality, considering exclusively the PRD, incorrect conclusions can be drawn. The objective of this work is to propose the joint use of several parameters; as the simulations show, the effectiveness and performance of the ECG coder are then evaluated with more precision, and conclusions can be drawn from the results more reliably.
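A small numeric illustration (ours, not the paper's) of the dc-level issue: shifting the same signal by a dc offset leaves the RMSE of a fixed reconstruction error unchanged but shrinks the PRD, flattering the compressor.

```python
# Demonstration: the PRD of an identical error shrinks as the dc level grows,
# because the PRD is normalized by total signal energy, dc included.
import numpy as np

rng = np.random.default_rng(1)
x = np.sin(np.linspace(0, 8 * np.pi, 1000))      # toy "ECG"
err = 0.05 * rng.standard_normal(1000)           # fixed reconstruction error

for dc in (0.0, 5.0):
    xo = x + dc
    prd = 100 * np.sqrt(np.sum(err ** 2) / np.sum(xo ** 2))
    rmse = np.sqrt(np.mean(err ** 2))
    print(f"dc={dc}: PRD={prd:.2f}%  RMSE={rmse:.4f}")
```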


Subject(s)
Algorithms , Data Compression/methods , Diagnosis, Computer-Assisted/methods , Electrocardiography/methods , Signal Processing, Computer-Assisted , Data Interpretation, Statistical , Humans , Models, Cardiovascular , Models, Statistical
18.
Med Eng Phys ; 26(7): 553-68, 2004 Sep.
Article in English | MEDLINE | ID: mdl-15271283

ABSTRACT

In this work, a filter bank-based algorithm for electrocardiogram (ECG) signal compression is proposed. The new coder consists of three different stages. In the first stage, subband decomposition, we compare the performance of a nearly perfect reconstruction (N-PR) cosine-modulated filter bank with the wavelet packet (WP) technique. Both schemes use the same coding algorithm, thus permitting an effective comparison. The target of the comparison is the quality of the reconstructed signal, which must remain within predetermined accuracy limits. We employ the most widely used quality criterion for compressed ECGs, the percentage root-mean-square difference (PRD), complemented by the maximum amplitude error (MAX). The tests have been done for the 12 principal cardiac leads, and the amount of compression is evaluated by means of the mean number of bits per sample (MBPS) and the compression ratio (CR). The implementation cost of both the filter bank and the WP technique has also been studied. The results show that the N-PR cosine-modulated filter bank method outperforms the WP technique in both quality and efficiency.
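For reference, the textbook construction of the analysis filters of an M-band cosine-modulated filter bank from a lowpass prototype; this is assumed background, not the paper's N-PR design procedure.

```python
# Standard cosine modulation of a lowpass prototype p of length L into M
# analysis filters (the prototype design itself is the hard part and is omitted).
import numpy as np

def cmfb_analysis(p, M):
    L = len(p)
    n = np.arange(L)
    H = np.empty((M, L))
    for k in range(M):
        H[k] = 2 * p * np.cos(
            (np.pi / M) * (k + 0.5) * (n - (L - 1) / 2) + (-1) ** k * np.pi / 4
        )
    return H   # row k: impulse response of the band-k analysis filter
```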


Subject(s)
Algorithms , Electrocardiography , Models, Cardiovascular , Signal Processing, Computer-Assisted , Biomedical Engineering , Data Interpretation, Statistical , Time Factors