1.
Comput Biol Med ; 155: 106655, 2023 03.
Article in English | MEDLINE | ID: mdl-36812811

ABSTRACT

BACKGROUND/AIM: In atrial fibrillation (AF) ablation procedures, it is desirable to know whether a proper disconnection of the pulmonary veins (PVs) was achieved. We hypothesize that information about their isolation can be provided by analyzing changes in the P-wave after ablation. Thus, we present a method to detect PV disconnection using P-wave signal analysis. METHODS: Conventional P-wave feature extraction was compared to an automatic feature extraction procedure based on creating low-dimensional latent spaces for cardiac signals with the Uniform Manifold Approximation and Projection (UMAP) method. A database of patients (19 controls and 16 AF individuals who underwent a PV ablation procedure) was collected. The standard 12-lead ECG was recorded, and P-waves were segmented and averaged to extract conventional features (duration, amplitude, and area) and their manifold representations provided by UMAP on a 3-dimensional latent space. A virtual patient was used to further validate these results and to study the spatial distribution of the extracted characteristics over the whole torso surface. RESULTS: Both methods showed differences between P-waves before and after ablation. The conventional method was more prone to noise, P-wave delineation errors, and inter-patient variability. P-wave differences were observed in the standard lead recordings, but larger differences appeared in the torso region over the precordial leads. Recordings near the left scapula also yielded noticeable differences. CONCLUSIONS: P-wave analysis based on UMAP parameters detects PV disconnection after ablation in AF patients and is more robust than heuristic parameterization. Moreover, leads beyond the standard 12-lead ECG should be used to better detect PV isolation and possible future reconnections.
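A minimal sketch of the UMAP-based feature extraction described above: segmented, averaged P-waves are projected onto a 3-dimensional latent space. The p_waves matrix is a synthetic stand-in; the paper's preprocessing pipeline is not reproduced here.

```python
# Sketch: 3-D UMAP embedding of averaged P-waves (synthetic stand-in data).
import numpy as np
import umap  # pip install umap-learn

rng = np.random.default_rng(0)
p_waves = rng.standard_normal((35, 120))  # 35 subjects x 120 samples per averaged P-wave

# Project each P-wave onto a 3-dimensional latent space, as in the paper.
embedding = umap.UMAP(n_components=3, random_state=0).fit_transform(p_waves)
print(embedding.shape)  # (35, 3): one latent coordinate triplet per subject
```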


Subject(s)
Atrial Fibrillation , Catheter Ablation , Cryosurgery , Pulmonary Veins , Humans , Heart Conduction System , Electrocardiography , Cryosurgery/methods , Catheter Ablation/methods , Treatment Outcome , Recurrence
2.
IEEE Trans Biomed Eng ; 69(10): 3029-3038, 2022 10.
Article in English | MEDLINE | ID: mdl-35294340

ABSTRACT

Electrocardiographic Imaging (ECGI) aims to estimate the intracardiac potentials noninvasively, hence allowing clinicians to better visualize and understand many arrhythmia mechanisms. Most estimators of epicardial potentials use a signal model based on an estimated spatial transfer matrix together with Tikhonov regularization techniques, which works especially well in simulations but can give limited accuracy on some real data. Based on the quasielectrostatic potential superposition principle, we propose a simple signal model that supports the implementation of principled out-of-sample algorithms for several of the most widely used regularization criteria in ECGI problems, hence improving the generalization capabilities of several current estimation methods. Experiments on simple cases (cylindrical and Gaussian shapes, scrutinizing fast and slow changes, respectively) and on real data (torso tank measurements from the University of Utah, and animal torso and epicardium measurements from Maastricht University, both in the EDGAR public repository) show that the superposition-based out-of-sample tuning of regularization parameters stabilizes the estimation errors of the unknown source potentials, while slightly increasing the re-estimation error on the measured data, as is natural in non-overfitted solutions. The superposition signal model can be used to design adequate out-of-sample tuning of Tikhonov regularization techniques, and it can be taken into account when using other regularization techniques in current commercial systems and research toolboxes for ECGI.
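The following is a hedged sketch of zero-order Tikhonov inversion with the regularization parameter tuned out-of-sample by holding out a subset of torso electrodes, an assumption about how the proposed criterion could be implemented; the transfer matrix A and torso potentials y are synthetic stand-ins.

```python
# Sketch: Tikhonov ECGI inversion with out-of-sample lambda selection.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((120, 80))          # torso-to-epicardium transfer matrix (synthetic)
x_true = rng.standard_normal(80)            # unknown epicardial potentials
y = A @ x_true + 0.05 * rng.standard_normal(120)

def tikhonov(A, y, lam):
    # x = (A^T A + lam^2 I)^(-1) A^T y  (zero-order Tikhonov solution)
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ y)

held_out = np.arange(0, 120, 5)             # electrodes left out of the fit
kept = np.setdiff1d(np.arange(120), held_out)

best_lam, best_err = None, np.inf
for lam in np.logspace(-4, 1, 30):
    x_hat = tikhonov(A[kept], y[kept], lam)
    err = np.linalg.norm(A[held_out] @ x_hat - y[held_out])  # out-of-sample residual
    if err < best_err:
        best_lam, best_err = lam, err
print(best_lam)
```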


Subject(s)
Electrocardiography , Pericardium , Algorithms , Animals , Body Surface Potential Mapping/methods , Electrocardiography/methods , Humans , Normal Distribution , Pericardium/diagnostic imaging
3.
Sensors (Basel) ; 20(11)2020 Jun 01.
Article in English | MEDLINE | ID: mdl-32492938

ABSTRACT

In recent years, Electrocardiographic Imaging (ECGI) has emerged as a powerful and promising clinical tool to support cardiologists. Starting from a plurality of potential measurements on the torso, ECGI yields a noninvasive estimate of the causative potentials on the epicardium. This unprecedented amount of measured cardiac signals needs to be conditioned and adapted to current knowledge and methods in cardiac electrophysiology in order to maximize its support to clinical practice. In this setting, many cardiac indices are defined in terms of so-called bipolar electrograms, which correspond to differential potentials between two spatially close potential measurements. Our aim was to contribute to the usefulness of ECGI recordings within the current knowledge and methods of cardiac electrophysiology. For this purpose, we first analyzed the basic stages of conventional cardiac signal processing and scrutinized the implications of the spatial-temporal nature of signals in ECGI scenarios. Specifically, the stages of baseline wander removal, low-pass filtering, and beat segmentation and synchronization were considered. We also aimed to establish a mathematical operator providing suitable bipolar electrograms from the ECGI-estimated epicardial potentials. Results were obtained on data from an infarction patient and from a healthy subject. First, the low-frequency and high-frequency noises are shown to be non-independently distributed in the ECGI-estimated recordings due to their spatial dimension. Second, bipolar electrograms are better estimated using the criterion of the maximum-amplitude difference between spatial neighbors, but a discrete-time delay of about 40 samples must also be included to obtain the morphology usual in clinical bipolar electrograms from catheters. We conclude that spatial-temporal digital signal processing and bipolar electrograms can pave the way towards the usefulness of ECGI recordings in cardiological clinical practice. The companion paper is devoted to analyzing clinical indices obtained from ECGI epicardial electrograms, measuring waveform variability and repolarization tissue properties.
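A sketch of the bipolar-electrogram operator described above: for each epicardial node, pick the spatial neighbor maximizing the amplitude difference and subtract its recording with a roughly 40-sample discrete-time delay. The unipolar data and neighbor lists are hypothetical, and np.roll is a crude stand-in for a proper delay.

```python
# Sketch: bipolar EGM via max-amplitude-difference neighbor plus delay.
import numpy as np

rng = np.random.default_rng(2)
unipolar = rng.standard_normal((50, 1000))   # 50 epicardial nodes x 1000 time samples
neighbors = [rng.choice(50, size=4, replace=False) for _ in range(50)]  # hypothetical adjacency
DELAY = 40                                   # samples, as suggested in the text

def bipolar_egm(node):
    # Choose the neighbor with the largest amplitude difference.
    diffs = [np.max(np.abs(unipolar[node] - unipolar[nb])) for nb in neighbors[node]]
    best = neighbors[node][int(np.argmax(diffs))]
    delayed = np.roll(unipolar[best], DELAY)  # crude delay implementation
    return unipolar[node] - delayed

egm_bip = bipolar_egm(0)
```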


Subject(s)
Body Surface Potential Mapping , Electrocardiography , Pericardium/physiology , Signal Processing, Computer-Assisted , Diagnostic Imaging , Humans
4.
Sensors (Basel) ; 20(11)2020 May 29.
Article in English | MEDLINE | ID: mdl-32485879

ABSTRACT

In recent years, the first commercially available equipment for Electrocardiographic Imaging (ECGI) has attracted both attention and controversy, as this new cardiac diagnostic tool opens up a field of diagnostic possibilities. The previous knowledge and criteria of cardiologists using intracardiac Electrograms (EGM) should be revisited in light of the newly available spatial-temporal potentials, and digital signal processing should be readapted to this new data structure. Aiming to contribute to the usefulness of ECGI recordings within the current knowledge and methods of cardiac electrophysiology, we previously presented two results: first, spatial consistency can be observed even for very basic cardiac signal processing stages (such as baseline wander removal and low-pass filtering); second, useful bipolar EGMs can be obtained by a digital processing operator searching for the maximum amplitude and including a time delay. In addition, this work aims to demonstrate the functionality of ECGI for cardiac electrophysiology from a twofold view, namely, through the analysis of the EGM waveforms, and by studying the ventricular repolarization properties. The former is scrutinized in terms of the clustering properties of the unipolar and bipolar EGM waveforms in control and myocardial infarction subjects, and the latter is analyzed using the properties of T-wave alternans (TWA) in control and Long-QT syndrome (LQTS) example subjects. Clustered regions of the EGMs were spatially consistent and congruent with the presence of infarcted tissue in unipolar EGMs, and bipolar EGMs with adequate signal processing operators preserved this consistency while yielding a larger, yet moderate, number of spatial-temporal regions. TWA was not present in the control subject compared with the LQTS subject in terms of the alternans amplitude estimated from the unipolar EGMs; however, higher spatial-temporal variation was present in the LQTS torso and epicardium measurements, and this was consistent across three different methods of alternans estimation. We conclude that spatial-temporal analysis of EGMs in ECGI will pave the way towards enhanced usefulness in clinical practice, so that the atomic (signal-by-signal) processing approach should be conveniently revisited to deal with the large amount of information that ECGI conveys to the clinician.
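A minimal sketch of the EGM waveform clustering analyzed above; k-means is an assumption, since the abstract does not name the clustering algorithm, and the waveforms are synthetic.

```python
# Sketch: clustering unipolar EGM waveforms into spatial-temporal regions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
egms = rng.standard_normal((200, 300))  # 200 epicardial nodes x 300 samples (synthetic)

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(egms)
# Spatial consistency can then be checked by mapping the cluster labels
# back onto the epicardial geometry (not shown here).
```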


Subject(s)
Arrhythmias, Cardiac , Electrocardiography , Electrophysiologic Techniques, Cardiac , Arrhythmias, Cardiac/diagnosis , Body Surface Potential Mapping , Cluster Analysis , Humans
5.
Entropy (Basel) ; 21(4)2019 Apr 19.
Article in English | MEDLINE | ID: mdl-33267133

ABSTRACT

Customer Relationship Management (CRM) is nowadays a fundamental tool in the hospitality industry, which can be seen as a big-data scenario due to the large number of records annually handled by managers. Data quality is crucial for the success of these systems, and one of the main issues to be solved by businesses in general, and by hospitality businesses in particular, is the identification of duplicated customers, which has received little attention in the recent literature, probably in part because it is not an easy problem to state in statistical terms. In the present work, we address the problem statement of duplicated-customer identification as a large-scale data analysis, and we propose and benchmark a general-purpose solution for it. Our system consists of four basic elements: (a) a generic feature representation for the customer fields in a simple table-shaped database; (b) an efficient distance for comparison among feature values, in terms of the Wagner-Fischer algorithm for calculating the Levenshtein distance; (c) a big-data implementation using basic map-reduce techniques to readily support the comparison of strategies; (d) an X-from-M criterion to identify the possible neighbors of a duplicated-customer candidate. We analyzed the probability mass function of the distances in the CRM text-based fields and characterized their behavior and consistency in terms of the entropy of, and the mutual information between, these fields. Our experiments on a large CRM from a multinational hospitality chain show that the distance distributions are statistically consistent for each feature, and that neighborhood thresholds are automatically adjusted by the system at a first step and can subsequently be fine-tuned according to manager experience. The entropy distributions for the different variables, as well as the mutual information between pairs, are characterized by multimodal profiles, where a wide gap between close and far fields is often present. This motivates the proposed X-from-M strategy, which is shown to be computationally affordable and can provide the expert with a reduced number of duplicated candidates to supervise, with low X values being enough to warrant the sensitivity required at the automatic detection stage. The proposed system supports the benefits of big-data technologies in CRM scenarios for hotel chains and, rather than the use of ad hoc heuristic rules, it promotes the research and development of theoretically principled approaches.
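The Wagner-Fischer dynamic program for the Levenshtein distance, named in the abstract as the field-comparison distance, is the standard textbook algorithm:

```python
# Wagner-Fischer dynamic programming for the Levenshtein edit distance.
def levenshtein(a: str, b: str) -> int:
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                           # cost of i deletions
    for j in range(n + 1):
        d[0][j] = j                           # cost of j insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n]

assert levenshtein("smith", "smyth") == 1  # one substitution apart
```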

6.
Entropy (Basel) ; 21(6)2019 Jun 15.
Article in English | MEDLINE | ID: mdl-33267308

ABSTRACT

The identification of patients at increased risk of Sudden Cardiac Death (SCD) has been widely studied during recent decades, and several quantitative measurements have been proposed from the analysis of the electrocardiogram (ECG) stored in 1-day Holter recordings. Indices based on the nonlinear dynamics of Heart Rate Variability (HRV) have been shown to convey predictive information in terms of factors related to cardiac regulation by the autonomic nervous system, and among them, multiscale methods aim to provide more complete descriptions than single-scale measures. However, there is limited knowledge on the suitability of nonlinear measurements for characterizing cardiac dynamics in current long-term monitoring scenarios of several days. Here, we scrutinized the long-term robustness properties of three nonlinear methods for HRV characterization, namely, Multiscale Entropy (MSE), Multiscale Time Irreversibility (MTI), and the Multifractal Spectrum (MFS). These indices were selected because all of them have been theoretically designed to take into account the multiple time scales inherent in healthy and pathological cardiac dynamics, and they have so far been analyzed when monitoring up to 24 h of ECG signals, corresponding to about 20 time scales. We analyzed them in 7-day Holter recordings from two data sets, namely, patients with Atrial Fibrillation and with Congestive Heart Failure, reaching up to 100 time scales. In addition, a new procedure is proposed to statistically compare the population multiscale representations under different patient or processing conditions, in terms of the non-parametric estimation of confidence intervals for the averaged median differences. Our results show that variance reduction is actually obtained in the multiscale estimators. The MSE (MTI) exhibited the lowest (largest) bias and variance at large scales, whereas all the methods exhibited a consistent description of the large-scale processes in terms of multiscale index robustness. In all the methods, the algorithms used could introduce some inconsistency into the multiscale profile, which was checked not to be due to the presence of artifacts, although its origin remains unclear. The reduction in standard error for several-day recordings compared to one-day recordings was more evident in MSE, whereas bias was more patent in MFS. Our results pave the way for the use of these techniques, with improved algorithmic implementations and nonparametric statistical tests, in long-term cardiac Holter monitoring scenarios.
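A minimal sketch of the MSE computation named above, assuming the common parameter defaults m = 2 and r = 0.15·std, which the abstract does not specify; the RR-interval series is synthetic.

```python
# Sketch: Multiscale Entropy = sample entropy of coarse-grained series.
import numpy as np

def coarse_grain(x, scale):
    # Average non-overlapping windows of length `scale`.
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r=None):
    # SampEn(m, r): -log of the conditional probability that sequences
    # matching for m points also match for m + 1 points.
    r = 0.15 * np.std(x) if r is None else r
    def matches(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)
        return (np.sum(d <= r) - len(templ)) / 2.0  # unordered pairs, no self-matches
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.nan

rr = np.random.default_rng(4).standard_normal(1000)  # stand-in RR-interval series
mse_profile = [sample_entropy(coarse_grain(rr, s)) for s in range(1, 11)]
```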

7.
Sensors (Basel) ; 18(8)2018 Aug 04.
Article in English | MEDLINE | ID: mdl-30081559

ABSTRACT

In recent years, attention has been paid to wireless sensor networks (WSNs) applied to precision agriculture. However, few studies have compared the technologies of different communication standards in terms of topology and energy efficiency. This paper presents the design and implementation of the hardware and software of three WSNs with different wireless communication technologies and topologies for tomato greenhouses in the Andean region of Ecuador, as well as a comparative study of their performance. Two companion papers describe the study of the dynamics of the energy consumption and of the monitored variables. Three WSNs were deployed, two of them following the IEEE 802.15.4 standard with star and mesh topologies (ZigBee and DigiMesh, respectively), and a third following the IEEE 802.11 standard with access-point topology (WiFi). The measured variables were selected after investigating the climatic conditions required for efficient tomato growth. The measurements of each variable could be displayed in real time using either a laboratory virtual instrument engineering workbench (LabVIEW™) interface or an Android mobile application. The comparative study of the three networks made evident that the configuration of the DigiMesh network is the most complex for adding new nodes, due to its mesh topology. However, DigiMesh maintains the bit rate and prevents the data loss caused by node placement at varying crop heights. It was also shown that the WiFi network has better stability, with greater precision in its measurements.

8.
Sensors (Basel) ; 18(8)2018 Aug 04.
Article in English | MEDLINE | ID: mdl-30081565

ABSTRACT

Tomato greenhouses are a crucial element in the Ecuadorian economy. Wireless sensor networks (WSNs) have received much attention in recent years in specialized applications such as precision farming. Energy consumption is nowadays relevant to the adequate operation of WSNs, and attention is being paid to analyzing the affecting factors, to energy optimization techniques working on the network hardware or software, and to characterizing the consumption in the nodes (especially in the ZigBee standard). However, limited information exists on the analysis of the consumption dynamics in each node, across different network technologies and communication topologies, or on the incidence of the data transmission speed. The present study aims to provide a detailed analysis of the dynamics of the energy consumption for tomato greenhouse monitoring in Ecuador, in three types of WSNs, namely, ZigBee with star topology, ZigBee with mesh topology (referred to here as DigiMesh), and WiFi with access-point topology. The networks were installed and kept in operation with a 2-m line-of-sight distance between nodes, while the energy consumption measurements of each node were acquired and stored in the laboratory. Each experiment was repeated ten times, and consumption measurements were taken every ten milliseconds, yielding fifty thousand samples per realization. The dynamics were scrutinized by analyzing the recorded time series using stochastic-process analysis methods, including amplitude probability functions and temporal autocorrelation, as well as bootstrap resampling techniques and representations of multiple realizations with so-called M-mode plots. Our results show that the energy consumption of each network strongly depends on the type of sensors installed in the nodes and on the network topology. Specifically, the CO2 sensor has the highest power consumption, because its chemical sensing principle requires preheating before measurements can be logged. The ZigBee network is the most energy-efficient independently of the transmission rate, since its communication modules have lower average consumption in data transmission, in contrast to the DigiMesh network, whose consumption is high due to its topology. Results also show that the average energy consumption of the WiFi network is the highest, given that the coordinator node is a Meshlium™ router with a larger energy demand. The transmission duration in the ZigBee network is lower than in the other two networks. In conclusion, the ZigBee network with star topology is the most energy-suitable when designing wireless monitoring systems in greenhouses. The proposed methodology for analyzing consumption dynamics in tomato greenhouse WSNs can be applied to other scenarios where the practical choice of an energy-efficient network is necessary due to energy constraints in the sensor and coordinator nodes.
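A sketch of two of the stochastic-process descriptors mentioned above, temporal autocorrelation and a bootstrap confidence interval for the mean consumption; the consumption trace is synthetic and shorter than the real recordings.

```python
# Sketch: autocorrelation and bootstrap CI for a power-consumption trace.
import numpy as np

rng = np.random.default_rng(5)
power = 1.5 + 0.1 * rng.standard_normal(5000)    # stand-in consumption samples (W)

def autocorr(x, max_lag=100):
    # Normalized temporal autocorrelation up to max_lag.
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    return acf[:max_lag + 1] / acf[0]

acf = autocorr(power)
boot_means = [rng.choice(power, size=len(power), replace=True).mean()
              for _ in range(1000)]
ci = np.percentile(boot_means, [2.5, 97.5])      # 95% bootstrap CI for the mean
```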

9.
Sensors (Basel) ; 18(8)2018 Aug 04.
Article in English | MEDLINE | ID: mdl-30081567

ABSTRACT

World population growth currently brings unequal access to food, whereas crop yields are not increasing at a similar rate, so that future food demand could go unmet. Many recent research works address the use of optimization techniques and technological resources in precision agriculture, especially in high-demand crops, including the monitoring of climatic variables using wireless sensor networks (WSNs). However, few studies have focused on analyzing the dynamics of the environmental measurement properties in greenhouses. In the two companion papers, we describe the design and implementation of three WSNs with different technologies and topologies, further scrutinizing their comparative performance, and present a detailed analysis of their energy consumption dynamics, both in tomato greenhouses in the Andean region of Ecuador. The three WSNs use ZigBee with star topology, ZigBee with mesh topology (referred to here as DigiMesh), and WiFi with access-point topology. The present study provides a systematic and detailed analysis of the environmental measurement dynamics from multiparametric monitoring in Ecuadorian tomato greenhouses. A set of monitored variables (including CO2, air temperature, and wind direction, among others) are first analyzed in terms of their intrinsic variability and their short-term (circadian) rhythmometric behavior. Then, their cross-information is scrutinized in terms of scatter representations and mutual information analysis. Based on Bland-Altman diagrams, good-quality rhythmometric models were obtained from signals sampled at high rates over four days when using moderate regularization and a 100-coefficient preprocessing filter. Accordingly, and especially for the adjustment of fast-transition variables, it is appropriate to use high sampling rates and then filter the signal to reject false peaks and noise. In addition, for variables with similar behavior, a longer period of data acquisition is required for adequate processing, which makes the long-term modeling of the environmental signals more precise.
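A minimal sketch of a rhythmometric (cosinor) fit for one monitored variable: a 24-h cosine plus mesor fitted by least squares. The temperature series, sampling rate, and peak time are hypothetical.

```python
# Sketch: cosinor fit of a circadian rhythm for a greenhouse variable.
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(0, 96, 0.25)                        # four days, hours, 15-min sampling
temp = 20 + 5 * np.cos(2 * np.pi * (t - 14) / 24) + rng.standard_normal(len(t))

# Linear model: temp ~ M + a*cos(w*t) + b*sin(w*t), with w = 2*pi/24
w = 2 * np.pi / 24
X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
mesor, a, b = np.linalg.lstsq(X, temp, rcond=None)[0]
amplitude = np.hypot(a, b)
acrophase = np.arctan2(-b, a)                     # radians; peak time = -acrophase / w
print(mesor, amplitude)
```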

10.
Biomed Eng Online ; 17(1): 86, 2018 Jun 20.
Article in English | MEDLINE | ID: mdl-29925384

ABSTRACT

BACKGROUND: The inverse problem in electrophysiology consists of the accurate estimation of the intracardiac electrical sources from a reduced set of electrodes at short distances from, and outside, the heart. This estimation can provide an image with relevant knowledge on arrhythmia mechanisms for clinical practice. Methods based on truncated singular value decomposition (TSVD) and regularized least squares require a matrix inversion, which limits their resolution due to the unavoidable low-pass filtering effect of Tikhonov regularization techniques. METHODS: We propose to use, for the first time, a Mercer kernel given by the Laplacian of the distance in the quasielectrostatic field equations, hence providing a Support Vector Regression (SVR) formulation that follows the Dual Signal Model (DSM) principles for creating kernel algorithms. RESULTS: Simulations in one- and two-dimensional models show the performance of our Laplacian distance kernel technique versus several conventional methods. First, the one-dimensional model is adjusted to yield recorded electrograms similar to those usually observed in electrophysiological studies, and a suitable strategy is designed for the free-parameter search. Second, simulations in both one- and two-dimensional models show larger noise sensitivity in the estimated transfer matrix than in the observation measurements, and DSM-SVR is shown to be more robust to a noisy transfer matrix than TSVD. CONCLUSION: These results suggest that the proposed DSM-SVR with a Laplacian distance kernel can be an efficient alternative for improving the resolution of current and emerging intracardiac imaging systems.
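A sketch of kernel regression with a Laplacian distance kernel, in the spirit of the DSM-SVR above; sklearn's SVR with a precomputed kernel is used as a stand-in for the paper's formulation, and the one-dimensional source geometry and kernel width are hypothetical.

```python
# Sketch: SVR with a precomputed Laplacian distance kernel in 1-D.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(7)
x_src = np.linspace(0, 1, 60)[:, None]             # 1-D source positions
y_obs = np.sin(4 * np.pi * x_src).ravel() + 0.1 * rng.standard_normal(60)

def laplacian_kernel(X, Y, sigma=0.1):
    # k(x, x') = exp(-|x - x'| / sigma)
    d = np.abs(X - Y.T)
    return np.exp(-d / sigma)

K = laplacian_kernel(x_src, x_src)
model = SVR(kernel="precomputed", C=10.0, epsilon=0.01).fit(K, y_obs)
y_hat = model.predict(K)
```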


Subject(s)
Electrophysiological Phenomena , Heart/physiology , Models, Cardiovascular , Electroencephalography , Least-Squares Analysis , Signal-To-Noise Ratio , Support Vector Machine
11.
Sensors (Basel) ; 18(5)2018 May 02.
Article in English | MEDLINE | ID: mdl-29724033

ABSTRACT

Intracardiac electrical activation maps are commonly used as a guide in the ablation of cardiac arrhythmias. The use of catheters with force sensors has been proposed in order to know whether the electrode is in contact with the tissue during the recording of intracardiac electrograms (EGM). Although threshold criteria on force signals are often used to determine catheter contact, this may be a limited criterion due to the complexity of the heart dynamics and cardiac vorticity. The present paper is devoted to determining the criteria and force-signal profiles that guarantee the contact of the electrode with the tissue. In this study, we analyzed 1391 force signals and their associated EGM, recorded during 2 and 8 s, respectively, in 17 patients (82 ± 60 points per patient). We aimed to establish a contact pattern by first visually examining and classifying the signals according to their likely-contact joint profile, following the suggestions of experts in doubtful cases. First, we used Principal Component Analysis to scrutinize the force-signal dynamics by analyzing the main eigen-directions, first globally and then grouped according to the certainty of the tissue-catheter contact. Second, we used two different linear classifiers (Fisher discriminant and support vector machines) to identify the most relevant components of the previous signal models. We obtained three main types of eigenvectors, namely, pulsatile relevant, non-pulsatile relevant, and irrelevant components. The classifiers reached a moderate to sufficient discrimination capacity (areas under the curve between 0.84 and 0.95, depending on the contact certainty and on the classifier), which allowed us to analyze the relevant properties of the force signals. We conclude that the catheter-tissue contact profiles in force recordings are complex and depend not only on the signal intensity being above a threshold at a single time instant, but also on time pulsatility and trends. These findings pave the way towards a subsystem that can be included in current intracardiac navigation systems assisted by contact-force sensors, providing the clinician with an estimate of the reliability of the tissue-catheter contact in the point-by-point EGM acquisition procedure.
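A sketch of the pipeline described above: PCA to obtain the main eigen-directions of the force signals, then a linear classifier evaluated by area under the curve. The Fisher discriminant is implemented via sklearn's LDA, and the data and labels are synthetic, so the printed AUC is near chance level.

```python
# Sketch: PCA + linear discriminant classification of force signals.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(8)
force = rng.standard_normal((1391, 200))          # 1391 force signals x 200 samples
contact = rng.integers(0, 2, 1391)                # contact / no-contact labels (synthetic)

scores = PCA(n_components=10).fit_transform(force)  # main eigen-directions
probs = cross_val_predict(LinearDiscriminantAnalysis(), scores, contact,
                          cv=5, method="predict_proba")[:, 1]
print(roc_auc_score(contact, probs))              # random labels give AUC near 0.5
```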


Subject(s)
Arrhythmias, Cardiac/therapy , Catheter Ablation/methods , Electrophysiologic Techniques, Cardiac/standards , Humans , Reproducibility of Results
12.
Sensors (Basel) ; 17(10)2017 Oct 16.
Article in English | MEDLINE | ID: mdl-29035333

ABSTRACT

Pollution of water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniformly sampled spatio-temporal data structure for characterizing complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatial-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information on the quality parameter measurements through Support Vector Regression (SVR) based on Mercer kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown through statistically interpolated spatial-temporal maps. The best interpolation performance in terms of mean absolute error was obtained by the SVR with the Mercer kernel given either by the Mahalanobis spatial-temporal covariance matrix or by the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently across all six water quality variables, which highlights the relevance of including a priori knowledge of the problem.
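The following is a plausible reading of the Mahalanobis-kernel variant above, sketched as SVR over (space, time) coordinates with a Mahalanobis-distance kernel; the covariance matrix, sampled river data, and kernel scaling are all hypothetical.

```python
# Sketch: spatio-temporal SVR interpolation with a Mahalanobis kernel.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(9)
coords = rng.uniform(0, 1, size=(150, 2))         # (distance along river, time)
conc = np.sin(3 * coords[:, 0]) + 0.5 * coords[:, 1] + 0.1 * rng.standard_normal(150)

S_inv = np.linalg.inv(np.array([[0.04, 0.01],      # assumed spatial-temporal
                                [0.01, 0.09]]))    # covariance matrix

def mahalanobis_kernel(X, Y):
    # k(x, x') = exp(-0.5 * (x - x')^T S^{-1} (x - x'))
    diff = X[:, None, :] - Y[None, :, :]
    d2 = np.einsum("ijk,kl,ijl->ij", diff, S_inv, diff)
    return np.exp(-0.5 * d2)

K = mahalanobis_kernel(coords, coords)
model = SVR(kernel="precomputed", C=10.0).fit(K, conc)
grid = rng.uniform(0, 1, size=(500, 2))           # points to interpolate
conc_hat = model.predict(mahalanobis_kernel(grid, coords))
```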
