Results 1 - 6 of 6
1.
Physiol Meas; 44(7), 2023 Jul 24.
Article in English | MEDLINE | ID: mdl-37336241

ABSTRACT

Background. The analysis of multi-lead electrocardiographic (ECG) signals requires integrating the information derived from each lead to reach clinically relevant conclusions. This analysis could benefit from data-driven methods that compact the information in those leads into lower-dimensional representations (i.e. 2 or 3 dimensions instead of 12). Objective. We propose Laplacian Eigenmaps (LE) to create a unified framework in which ECGs from different subjects can be compared and their abnormalities enhanced. Approach. We conceive a normal reference ECG space based on LE, calculated using signals of healthy subjects in sinus rhythm. Signals from new subjects can be mapped onto this reference space, creating a loop per heartbeat that captures ECG abnormalities. A set of parameters, based on distance metrics and on the shape of the loops, is proposed to quantify the differences between subjects. Main results. This methodology was applied to find structural and arrhythmogenic changes in the ECG. The LE framework consistently captured the characteristics of healthy ECGs, confirming that normal signals behave similarly in the LE space. Significant differences were detected between normal signals and those from patients with ischemic heart disease or dilated cardiomyopathy. In contrast, LE biomarkers did not identify differences between patients with cardiomyopathy and a history of ventricular arrhythmia and their matched controls. Significance. This unified LE framework offers a new representation of multi-lead signals, reducing dimensionality while enhancing otherwise imperceptible abnormalities and enabling the comparison of signals from different subjects.
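As a rough illustration of the reference-space idea, the sketch below embeds stand-in "healthy" 12-lead samples into three dimensions with scikit-learn's SpectralEmbedding, a Laplacian Eigenmaps implementation. The synthetic data, the RBF affinity, and the 3-dimensional target are assumptions for illustration only; the paper's out-of-sample mapping of new subjects and its loop-shape parameters are not reproduced.

```python
# Hedged sketch: build a low-dimensional "reference space" from multi-lead ECG
# samples with Laplacian Eigenmaps. Synthetic data stands in for healthy beats.
import numpy as np
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(0)
# Stand-in reference recording: 500 time samples x 12 leads.
reference_samples = rng.standard_normal((500, 12))

# SpectralEmbedding implements Laplacian Eigenmaps; an RBF affinity and a
# 3-dimensional target space are illustrative choices, not the paper's.
le = SpectralEmbedding(n_components=3, affinity="rbf", random_state=0)
embedded = le.fit_transform(reference_samples)   # shape (500, 3)

# Consecutive embedded samples of one heartbeat trace a loop in this space;
# distance-based markers could then compare a loop to the reference cloud.
print(embedded.shape)
```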


Subjects
Electrocardiography; Myocardial Ischemia; Humans; Electrocardiography/methods; Arrhythmias, Cardiac; Heart Rate
2.
Sci Rep; 12(1): 6783, 2022 Apr 26.
Article in English | MEDLINE | ID: mdl-35474073

ABSTRACT

Fragmented QRS (fQRS) is an electrocardiographic (ECG) marker of myocardial conduction abnormality, characterized by additional notches in the QRS complex. The presence of fQRS has been associated with an increased risk of all-cause mortality and arrhythmia in patients with cardiovascular disease. However, the current binary visual analysis is prone to intra- and inter-observer variability, and differing definitions are problematic in clinical practice. Objective quantification of fQRS is therefore needed and could further improve the risk stratification of these patients. We present an automated method for fQRS detection and quantification. First, a novel robust QRS-complex segmentation strategy is proposed, which combines multi-lead information and automatically excludes abnormal heartbeats. Afterwards, features based on variational mode decomposition (VMD), phase-rectified signal averaging (PRSA), and the number of baseline crossings of the ECG were extracted and used to train a machine-learning classifier (Support Vector Machine) to discriminate fragmented from non-fragmented ECG traces, using multi-center data and combining different fQRS criteria used in clinical settings. The best model was trained on the combination of two independent, previously annotated datasets and, compared with these visual fQRS annotations, achieved Kappa scores of 0.68 and 0.44, respectively. We also show that the algorithm can be applied both to regular sinus rhythm and to irregular beats during atrial fibrillation. These results demonstrate that the proposed approach could be relevant for clinical practice by objectively assessing and quantifying fQRS, and they set the path for further clinical application of the automated fQRS algorithm.
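A minimal sketch of one ingredient of such a pipeline is shown below: counting baseline crossings of a QRS segment and feeding that single feature to a support vector machine, with synthetic stand-ins for fragmented and non-fragmented beats. The VMD- and PRSA-based features and the multi-lead segmentation strategy from the paper are not reproduced here.

```python
# Hedged sketch: one toy feature (baseline crossings) + an SVM classifier.
import numpy as np
from sklearn.svm import SVC

def baseline_crossings(qrs):
    """Count crossings of the segment's own baseline (its mean)."""
    centred = qrs - qrs.mean()
    return int(np.sum(centred[:-1] * centred[1:] < 0))

rng = np.random.default_rng(1)
hump = np.sin(np.linspace(0, np.pi, 40))                  # smooth QRS-like shape
smooth = hump + 0.02 * rng.standard_normal((100, 40))     # non-fragmented beats
notched = smooth + 0.3 * np.sin(np.linspace(0, 12 * np.pi, 40))  # added notches

X = np.array([[baseline_crossings(s)] for s in np.vstack([smooth, notched])])
y = np.array([0] * 100 + [1] * 100)   # 0 = non-fragmented, 1 = fragmented

clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```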


Subjects
Atrial Fibrillation; Electrocardiography; Algorithms; Atrial Fibrillation/diagnosis; Electrocardiography/methods; Humans; Machine Learning; Support Vector Machine
3.
PeerJ Comput Sci; 7: e477, 2021.
Article in English | MEDLINE | ID: mdl-33981839

ABSTRACT

Feature selection techniques are very useful approaches for dimensionality reduction in data analysis. They provide interpretable results by reducing the dimensions of the data to a subset of the original set of features. When the data lack annotations, unsupervised feature selectors are required for their analysis. Several algorithms for this purpose exist in the literature, but despite their wide applicability they can be inaccessible or cumbersome to use, mainly because of non-intuitive parameters that must be tuned and high computational demands. In this work, a publicly available, ready-to-use unsupervised feature selector is proposed, with results comparable to the state-of-the-art at a much lower computational cost. The suggested approach belongs to the class of spectral feature selectors. These methods generally consist of two stages: manifold learning and subset selection. In the first stage, the underlying structures in the high-dimensional data are extracted, while in the second stage a subset of the features is selected to replicate these structures. This paper contributes to each of the two stages. In the manifold learning stage, the effect of non-linearities in the data is explored using a radial basis function (RBF) kernel, and an alternative estimator of the kernel parameter is presented for high-dimensional data. In the subset selection stage, a backwards greedy approach based on a least-squares utility metric is proposed. The combination of these new ingredients yields the utility metric for unsupervised feature selection (U2FS) algorithm. The proposed U2FS algorithm succeeds in selecting the correct features in a simulation environment. In addition, its performance on benchmark datasets is comparable to the state-of-the-art while requiring less computational time, and, unlike the state-of-the-art, U2FS does not require any parameter tuning.
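The sketch below illustrates the two-stage structure under stated assumptions: an RBF affinity whose kernel parameter is set by the common median-distance heuristic (the paper presents its own estimator), a Laplacian embedding, and a backwards greedy selector that drops the feature whose removal least degrades a least-squares reconstruction of the embedding. It is a toy version of the idea, not the U2FS implementation.

```python
# Hedged sketch of a two-stage spectral feature selector in the spirit of U2FS.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.linalg import eigh, lstsq

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 8))   # 100 samples, 8 candidate features

# Stage 1: manifold learning. RBF affinity with the median-distance heuristic
# for sigma (an assumption), then the low eigenvectors of the normalized
# Laplacian as the embedding.
D = squareform(pdist(X))
sigma = np.median(D[D > 0])
W = np.exp(-(D ** 2) / (2 * sigma ** 2))
deg = W.sum(axis=1)
L = np.eye(len(X)) - W / np.sqrt(np.outer(deg, deg))
_, vecs = eigh(L)
E = vecs[:, 1:4]                    # skip the trivial constant eigenvector

# Stage 2: backwards greedy subset selection with a least-squares utility:
# repeatedly drop the feature whose removal degrades the fit to E the least.
def ls_error(cols):
    coef, *_ = lstsq(X[:, cols], E)
    return np.linalg.norm(X[:, cols] @ coef - E)

selected = list(range(X.shape[1]))
while len(selected) > 3:
    drop = min(selected, key=lambda j: ls_error([c for c in selected if c != j]))
    selected.remove(drop)
print("selected features:", selected)
```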

4.
Sensors (Basel); 21(2), 2021 Jan 19.
Article in English | MEDLINE | ID: mdl-33477888

ABSTRACT

The electrocardiogram (ECG) is an important diagnostic tool for identifying cardiac problems. Nowadays, new ways to record ECG signals outside of the hospital are being investigated. A promising technique is capacitively coupled ECG (ccECG), which allows ECG signals to be recorded through insulating materials. However, as the ECG is no longer recorded in a controlled environment, this inevitably introduces more artefacts, which artefact detection algorithms are used to detect and remove. Typically, training a new algorithm requires a large amount of ground-truth data, which is costly to obtain. Since many labelled contact-ECG datasets exist, labelling new ccECG signals can be avoided by reusing this prior knowledge, and transfer learning can be used for this purpose. Here, we applied transfer learning to optimise the performance of an artefact detection model, trained on contact ECG, towards ccECG. We used ECG recordings from three different datasets, recorded with three different devices. We show that the accuracy of a contact-ECG classifier improved by 5 to 8% through transfer learning when tested on a ccECG dataset. Furthermore, we show that only 20 segments of the ccECG dataset are sufficient to significantly increase the accuracy.
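As a toy illustration of the adaptation step, the sketch below pre-trains a linear classifier on synthetic stand-ins for contact-ECG features and then continues training on 20 labelled ccECG segments via partial_fit. The paper's actual features, model architecture, and fine-tuning strategy are not specified here; this only shows the general idea of adapting a source-domain classifier with few target labels.

```python
# Hedged sketch: transfer by continued training on a small target set.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(3)
# Source domain: plentiful labelled contact-ECG feature vectors (synthetic).
X_src = rng.standard_normal((1000, 10))
y_src = rng.integers(0, 2, 1000)
# Target domain: only 20 labelled ccECG segments, with a shifted distribution.
X_tgt = rng.standard_normal((20, 10)) + 0.5
y_tgt = rng.integers(0, 2, 20)

clf = SGDClassifier(loss="log_loss", random_state=0)
clf.partial_fit(X_src, y_src, classes=[0, 1])   # pre-train on contact ECG
for _ in range(10):                             # fine-tune on the few ccECG labels
    clf.partial_fit(X_tgt, y_tgt)
print("target accuracy:", clf.score(X_tgt, y_tgt))
```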


Subjects
Artifacts; Electrocardiography; Signal Processing, Computer-Assisted; Algorithms; Heart Diseases/diagnosis; Humans; Support Vector Machine
5.
Comput Methods Programs Biomed; 182: 105050, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31473442

ABSTRACT

BACKGROUND AND OBJECTIVES: The presence of noise sources can reduce the diagnostic capability of the ECG signal and result in inappropriate treatment decisions. To mitigate this problem, automated algorithms are needed that detect artefacts and quantify the quality of the recorded signal. In this study we present an automated method for artefact detection and signal quality quantification. The suggested methodology extracts descriptive features from the autocorrelation function and feeds these to a RUSBoost classifier. The posterior probability of the clean class is used to create a continuous signal quality assessment index. First, the robustness of the proposed algorithm is investigated and, second, the novel signal quality assessment index is evaluated. METHODS: Data from three different studies were used: a Sleep study, the PhysioNet 2017 Challenge and a Stress study. Binary labels, clean or contaminated, were available from different annotators with experience in ECG analysis. Two types of realistic ECG noise from the MIT-BIH Noise Stress Test Database (NSTDB) were added to the Sleep study to test the quality index. First, the model was trained on the Sleep dataset and subsequently tested on a subset of the other two datasets. Second, all recording conditions were taken into account by training the model on a subset derived from all three datasets. Lastly, the posterior probabilities of the model were compared across the different levels of agreement between the annotators. RESULTS: AUC values between 0.988 and 1.000 were obtained when training the model on the Sleep dataset. These results improved further when training on all three datasets and thus taking all recording conditions into account. A Pearson correlation coefficient of 0.8131 was observed between the score of the clean class and the level of agreement. Additionally, significant quality decreases per noise level were observed for both types of added noise. CONCLUSIONS: The main novelty of this study is the new approach to ECG signal quality assessment based on the posterior clean-class probability of the classifier.
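The sketch below illustrates the core idea of the continuous quality index: train a classifier on a feature derived from the autocorrelation function and read the posterior probability of the "clean" class as the quality score. RUSBoostClassifier comes from the imbalanced-learn package (an assumption; the paper does not name its implementation), and the single feature used here is illustrative only.

```python
# Hedged sketch: autocorrelation feature -> RUSBoost -> posterior as SQI.
import numpy as np
from imblearn.ensemble import RUSBoostClassifier  # pip install imbalanced-learn

def acf_peak(sig):
    """Height of the largest non-zero-lag peak of the normalized autocorrelation."""
    acf = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    return (acf / acf[0])[1:].max()

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 1000)
clean = [np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.standard_normal(t.size)
         for _ in range(50)]                      # quasi-periodic "clean" signals
noisy = [rng.standard_normal(t.size) for _ in range(50)]   # pure-noise segments

X = np.array([[acf_peak(s)] for s in clean + noisy])
y = np.array([1] * 50 + [0] * 50)                 # 1 = clean, 0 = contaminated

model = RUSBoostClassifier(random_state=0).fit(X, y)
sqi = model.predict_proba(X)[:, 1]                # posterior of the clean class
print("mean SQI, clean vs noisy:", sqi[:50].mean(), sqi[50:].mean())
```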


Subjects
Artifacts; Electrocardiography, Ambulatory/methods; Algorithms; Humans; Machine Learning; Probability; Signal-to-Noise Ratio
6.
Annu Int Conf IEEE Eng Med Biol Soc; 2019: 6363-6366, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31947298

ABSTRACT

Despite the many studies dealing with heartbeat classification, the accurate detection of supraventricular heartbeats (SVEB) remains very challenging. This study therefore questions the current protocol followed to report heartbeat classification results, which impedes improving the SVEB class without falling into over-fitting. A novel approach based on variational mode decomposition (VMD) as a source of features is proposed, and the impact of using the MIT-BIH Arrhythmia database is analyzed. The method is based on a single-lead electrocardiogram and characterizes heartbeats by a set of 45 features: 5 related to the time intervals between consecutive heartbeats, and the rest related to VMD. Each heartbeat is decomposed into its variational modes, which are, in turn, characterized by their frequency content, morphology and higher-order statistics. The 10 most relevant features are selected using a backwards wrapper feature selector and fed into an LS-SVM classifier, which is trained to separate Normal (N), Supraventricular (SVEB), Ventricular (VEB) and Fusion (F) heartbeats. An inter-patient approach, using patient-independent training, is adopted as suggested in the literature. The method achieves sensitivities above 80% for the three most important classes of the database (N, SVEB and VEB), and high specificities for the N and VEB classes. Given the challenges related to the SVEB and F classes reported in the literature, the composition of the MIT-BIH database is analyzed and alternatives are suggested in order to train heartbeat classification algorithms in a novel and more realistic way.
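The per-beat feature extraction could look roughly like the sketch below, which uses the third-party vmdpy package for VMD (an assumption; the paper names no implementation) and summarizes each mode by its final center frequency, RMS energy, and kurtosis. The full 45-feature set, the wrapper feature selector, and the LS-SVM classifier are omitted.

```python
# Hedged sketch: VMD decomposition of a heartbeat and simple per-mode features.
import numpy as np
from scipy.stats import kurtosis
from vmdpy import VMD   # third-party package: pip install vmdpy

rng = np.random.default_rng(5)
# Stand-in heartbeat: a low-frequency wave plus noise, 256 samples.
beat = np.sin(np.linspace(0, 4 * np.pi, 256)) + 0.1 * rng.standard_normal(256)

# Decompose into K=5 variational modes; alpha, tau, DC, init and tol follow
# commonly used vmdpy settings, not values from the paper.
u, u_hat, omega = VMD(beat, alpha=2000, tau=0.0, K=5, DC=0, init=1, tol=1e-7)

centre_freqs = omega[-1]             # final center frequency of each mode
features = []
for k, mode in enumerate(u):         # one row of u per variational mode
    features += [
        centre_freqs[k],             # frequency content
        np.sqrt(np.mean(mode ** 2)), # RMS energy as a morphology proxy
        kurtosis(mode),              # a higher-order statistic
    ]
print(len(features), "features for this beat")
```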


Subjects
Algorithms; Heart Rate; Signal Processing, Computer-Assisted; Arrhythmias, Cardiac/diagnosis; Calibration; Electrocardiography; Humans