A heart sound segmentation method based on multi-feature fusion network / Chinese Journal of Clinical Thoracic and Cardiovascular Surgery
Article in Chinese | WPRIM | ID: wpr-1031682
Responsible library: WPRO
ABSTRACT
Objective To propose a heart sound segmentation method based on a multi-feature fusion network. Methods Data were obtained from the CinC/PhysioNet 2016 Challenge dataset (a total of 3 153 recordings from 764 patients, about 91.93% of whom were male, with an average age of 30.36 years). First, features were extracted in the time domain and the time-frequency domain, and redundant features were removed by dimensionality reduction. Then, the best-performing features were selected separately from the two feature spaces through feature selection. Next, multi-feature fusion was carried out through multi-scale dilated convolution, cooperative fusion, and a channel attention mechanism. Finally, the fused features were fed into a bidirectional gated recurrent unit (BiGRU) network to obtain the heart sound segmentation results. Results The proposed method achieved a precision, recall, and F1 score of 96.70%, 96.99%, and 96.84%, respectively. Conclusion The multi-feature fusion network proposed in this study delivers better heart sound segmentation performance and can provide high-accuracy heart sound segmentation support for the design of automatic heart disease analysis systems based on heart sounds.
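The abstract describes the pipeline only at a high level, so the following is a minimal, non-authoritative PyTorch sketch of the fusion-plus-BiGRU stage, not the authors' implementation. All module names, layer dimensions, and the four-state output (S1, systole, S2, diastole, the conventional states for this task but not stated in the abstract) are assumptions; the cooperative-fusion step is approximated here by simple channel concatenation.

```python
import torch
import torch.nn as nn

class MultiScaleDilatedConv(nn.Module):
    """Parallel 1-D convolutions with increasing dilation rates,
    concatenated along the channel axis to capture multi-scale context."""
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])

    def forward(self, x):                      # x: (batch, in_ch, frames)
        return torch.cat([b(x) for b in self.branches], dim=1)

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: global average
    pooling over time, then a two-layer gate that reweights channels."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (batch, channels, frames)
        w = self.gate(x.mean(dim=2))           # squeeze over the time axis
        return x * w.unsqueeze(2)              # reweight each channel

class FusionSegmenter(nn.Module):
    """Fuses a time-domain and a time-frequency feature stream, then
    labels every frame with a BiGRU (assumed 4 heart sound states)."""
    def __init__(self, td_dim, tf_dim, hidden=64, n_states=4):
        super().__init__()
        self.td_branch = MultiScaleDilatedConv(td_dim, hidden)
        self.tf_branch = MultiScaleDilatedConv(tf_dim, hidden)
        fused = 2 * 3 * hidden                 # two streams x three dilation scales
        self.attn = ChannelAttention(fused)
        self.bigru = nn.GRU(fused, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_states)

    def forward(self, td_feats, tf_feats):     # each: (batch, frames, dim)
        td = self.td_branch(td_feats.transpose(1, 2))
        tf = self.tf_branch(tf_feats.transpose(1, 2))
        fused = self.attn(torch.cat([td, tf], dim=1))
        out, _ = self.bigru(fused.transpose(1, 2))
        return self.head(out)                  # per-frame state logits

# Usage with hypothetical dimensions: 10 selected features per stream,
# a batch of 2 recordings, 500 frames each.
model = FusionSegmenter(td_dim=10, tf_dim=10)
logits = model(torch.randn(2, 500, 10), torch.randn(2, 500, 10))
print(logits.shape)  # torch.Size([2, 500, 4])
```

Per-frame logits of this shape would typically be trained with a frame-wise cross-entropy loss against the annotated state sequence; the abstract does not specify the training objective.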
Full text: 1 Index: WPRIM Language: Chinese Journal: Chinese Journal of Clinical Thoracic and Cardiovascular Surgery Publication year: 2024 Document type: Article