Results 1 - 20 of 51,507
1.
Sci Rep ; 14(1): 12823, 2024 06 04.
Article in English | MEDLINE | ID: mdl-38834839

ABSTRACT

The prevalence of cardiovascular disease (CVD) has surged in recent years, making it the foremost cause of human mortality. The electrocardiogram (ECG), one of the pivotal diagnostic tools for cardiovascular disease, is gaining prominence in the field of machine learning. However, prevailing neural network models frequently disregard the spatial features inherent in ECG signals. In this paper, we propose an ECG autoencoder network architecture incorporating low-rank attention (LRA-autoencoder). It is designed to capture potential spatial features of ECG signals by interpreting the signals from a spatial perspective and extracting correlations between different signal points. Additionally, the low-rank attention block (LRA-block) obtains spatial features of ECG signals through singular value decomposition and then assigns these spatial features as weights to the signals, thereby enhancing the differentiation of features among different categories. Finally, we use a ResNet-18 classifier to assess the performance of the LRA-autoencoder on both the MIT-BIH Arrhythmia and PhysioNet Challenge 2017 datasets. The experimental results show that the proposed method achieves superior classification performance. The mean accuracy on the MIT-BIH Arrhythmia dataset is as high as 0.997, and the mean accuracy and F1-score on the PhysioNet Challenge 2017 dataset are 0.850 and 0.843, respectively.
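The abstract does not specify the LRA-block's internals; a minimal numpy sketch of the core idea — deriving per-lead spatial weights from a truncated SVD of a multi-lead segment and reweighting the signal — might look like this (the function name, rank choice, and energy-based weighting rule are all illustrative assumptions, not the paper's design):

```python
import numpy as np

def low_rank_spatial_weights(ecg, rank=2):
    """Hypothetical sketch: truncate the SVD of a (leads x samples) ECG
    segment, score each lead by its energy in the retained subspace, and
    use the normalised scores as spatial weights on the signal."""
    U, s, Vt = np.linalg.svd(ecg, full_matrices=False)
    low_rank = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]  # rank-r approximation
    energy = np.sum(low_rank ** 2, axis=1)                  # per-lead energy
    weights = energy / energy.sum()                         # normalised weights
    return weights[:, None] * ecg                           # reweighted signal

rng = np.random.default_rng(0)
segment = rng.standard_normal((12, 500))    # toy 12-lead ECG segment
weighted = low_rank_spatial_weights(segment)
print(weighted.shape)                        # (12, 500)
```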


Subject(s)
Electrocardiography , Neural Networks, Computer , Electrocardiography/methods , Humans , Arrhythmias, Cardiac/diagnosis , Arrhythmias, Cardiac/physiopathology , Machine Learning , Signal Processing, Computer-Assisted , Algorithms , Cardiovascular Diseases/diagnosis
2.
J Acoust Soc Am ; 155(6): 3639-3653, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38836771

ABSTRACT

The estimation of auditory evoked potentials requires deconvolution when the duration of the responses to be recovered exceeds the inter-stimulus interval. Based on least-squares deconvolution, in this article we extend the procedure to the case of a multi-response convolutional model, that is, a model in which different categories of stimulus are expected to evoke different responses. The computational cost of the multi-response deconvolution increases significantly with the number of responses to be deconvolved, which restricts its applicability in practical situations. In order to alleviate this restriction, we propose performing the multi-response deconvolution in a reduced representation space associated with a latency-dependent filtering of auditory responses, which provides a significant dimensionality reduction. We demonstrate the practical viability of the multi-response deconvolution with auditory responses evoked by clicks presented at different levels and categorized according to their stimulation level. The multi-response deconvolution applied in a reduced representation space provides the least-squares estimation of the responses with a reasonable computational load. MATLAB/Octave code implementing the proposed procedure is included as supplementary material.


Subject(s)
Acoustic Stimulation , Evoked Potentials, Auditory , Evoked Potentials, Auditory/physiology , Humans , Acoustic Stimulation/methods , Male , Adult , Electroencephalography/methods , Female , Least-Squares Analysis , Young Adult , Signal Processing, Computer-Assisted , Reaction Time , Auditory Perception/physiology
3.
PLoS One ; 19(6): e0304531, 2024.
Article in English | MEDLINE | ID: mdl-38843235

ABSTRACT

With the rapid development of modern communication technology, finding effective new modulation schemes and accurately classifying automatically modulated signals have become core problems in the field of communication. To further improve communication quality and system processing efficiency, this study combines two different neural network algorithms to optimize traditional automatic modulation classification. In this paper, the basic technology involved in the communication process, including automatic signal modulation and signal classification, is discussed. Then, automatic modulation classification models with three different connection paths are constructed by combining a parallel convolutional network and a simple cyclic unit network. The performance tests show that the classification model reaches a stable training and validation state when the two networks are connected; after 20 and 29 iterations, the loss values are 0.13 and 0.18, respectively. In addition, when the signal-to-noise ratio (SNR) is 25 dB, the classification accuracy of the parallel convolutional neural network and simple cyclic unit network model reaches 0.99. Finally, the correct classification probabilities of these models remain stable when Doppler shift is introduced as interference in a practical application environment. In summary, the designed neural network fusion model significantly mitigates the shortcomings of traditional automatic modulation classification methods and further improves the classification accuracy of modulated signals.
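The 25 dB SNR operating point quoted above is straightforward to reproduce in simulation. A minimal numpy sketch of generating a modulated signal with additive white Gaussian noise at a target SNR (the classifier itself is not reproduced; QPSK and the helper name are illustrative choices):

```python
import numpy as np

def awgn(signal, snr_db, rng):
    """Add complex white Gaussian noise to a baseband signal so that the
    resulting SNR equals `snr_db` decibels."""
    p_sig = np.mean(np.abs(signal) ** 2)
    p_noise = p_sig / 10 ** (snr_db / 10)
    noise = np.sqrt(p_noise / 2) * (rng.standard_normal(len(signal))
                                    + 1j * rng.standard_normal(len(signal)))
    return signal + noise

rng = np.random.default_rng(0)
symbols = rng.integers(0, 4, 4000)
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * symbols))  # unit-power QPSK
rx = awgn(qpsk, 25.0, rng)
# Measure the realised SNR from the known transmitted symbols.
measured = 10 * np.log10(1.0 / np.mean(np.abs(rx - qpsk) ** 2))
print(round(measured, 1))                               # close to 25 dB
```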


Subject(s)
Algorithms , Neural Networks, Computer , Signal-To-Noise Ratio , Signal Processing, Computer-Assisted , Humans
4.
Sci Rep ; 14(1): 12615, 2024 06 01.
Article in English | MEDLINE | ID: mdl-38824217

ABSTRACT

Standard clinical practice for assessing fetal well-being during labour utilises monitoring of the fetal heart rate (FHR) using cardiotocography. However, visual evaluation of FHR signals can result in subjective interpretations, leading to inter- and intra-observer disagreement. Therefore, recent studies have proposed deep-learning-based methods to interpret FHR signals and detect fetal compromise. These methods have typically focused on evaluating fixed-length FHR segments at the conclusion of labour, leaving little time for clinicians to intervene. In this study, we propose a novel FHR evaluation method using an input-length-invariant deep learning model (FHR-LINet) to progressively evaluate FHR as labour progresses and achieve rapid detection of fetal compromise. Using our FHR-LINet model, we obtained an approximately 25% reduction in the time taken to detect fetal compromise compared to the state-of-the-art multimodal convolutional neural network, while achieving 27.5%, 45.0%, 56.5% and 65.0% mean true positive rates at 5%, 10%, 15% and 20% false positive rates, respectively. A diagnostic system based on our approach could potentially enable earlier intervention for fetal compromise and improve clinical outcomes.
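The "TPR at a fixed FPR" operating points reported above are a standard way to summarise a detector; computing one from classifier scores is a one-liner worth making explicit (the synthetic score distributions below are purely illustrative):

```python
import numpy as np

def tpr_at_fpr(scores, labels, fpr_target):
    """True positive rate at a given false positive rate: threshold at
    the (1 - fpr_target) quantile of the negative-class scores."""
    neg = scores[labels == 0]
    thr = np.quantile(neg, 1.0 - fpr_target)   # FPR = P(neg score > thr)
    return np.mean(scores[labels == 1] > thr)

rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 5000)
scores = np.concatenate([rng.normal(0.0, 1, 5000),   # no compromise
                         rng.normal(1.5, 1, 5000)])  # compromise present
tpr = tpr_at_fpr(scores, labels, 0.10)
print(round(tpr, 2))
```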


Subject(s)
Cardiotocography , Deep Learning , Heart Rate, Fetal , Heart Rate, Fetal/physiology , Humans , Pregnancy , Female , Cardiotocography/methods , Neural Networks, Computer , Fetal Monitoring/methods , Signal Processing, Computer-Assisted , Fetus
5.
Biomed Eng Online ; 23(1): 50, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38824547

ABSTRACT

BACKGROUND: Over 60% of epilepsy patients globally are children, whose early diagnosis and treatment are critical for their development and can substantially reduce the disease's burden on both families and society. Numerous algorithms for automated epilepsy detection from EEGs have been proposed. Yet, the occurrence of epileptic seizures during an EEG exam cannot always be guaranteed in clinical practice. Models that exclusively use seizure EEGs for detection risk artificially enhanced performance metrics. Therefore, there is a pressing need for a universally applicable model that can perform automatic epilepsy detection in a variety of complex real-world scenarios. METHOD: To address this problem, we have devised a novel technique employing a temporal convolutional neural network with self-attention (TCN-SA). Our model comprises two primary components: a TCN for extracting time-variant features from EEG signals, followed by a self-attention (SA) layer that assigns importance to these features. By focusing on key features, our model achieves heightened classification accuracy for epilepsy detection. RESULTS: The efficacy of our model was validated on a pediatric epilepsy dataset we collected and on the Bonn dataset, attaining accuracies of 95.50% on our dataset and of 97.37% (A vs. E) and 93.50% (B vs. E) on the Bonn dataset. When compared with other deep learning architectures (temporal convolutional neural network, self-attention network, and standardized convolutional neural network) using the same datasets, our TCN-SA model demonstrated superior performance in the automated detection of epilepsy. CONCLUSION: The proven effectiveness of the TCN-SA approach substantiates its potential as a valuable tool for the automated detection of epilepsy, offering significant benefits in diverse and complex real-world clinical settings.
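The SA layer's mechanism — weighting TCN feature vectors by learned pairwise relevance — is standard single-head self-attention. A minimal numpy sketch (random projection matrices stand in for trained weights, so this shows only the data flow, not the paper's model):

```python
import numpy as np

def self_attention(x):
    """Single-head self-attention over a (time steps x features) array:
    softmax(Q K^T / sqrt(d)) V with randomly initialised projections."""
    rng = np.random.default_rng(0)
    d = x.shape[1]
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    logits = Q @ K.T / np.sqrt(d)
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)    # row-wise softmax
    return weights @ V                               # attention-weighted features

feats = np.random.default_rng(1).standard_normal((50, 16))  # 50 steps, 16 features
out = self_attention(feats)
print(out.shape)                                      # (50, 16)
```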


Subject(s)
Electroencephalography , Epilepsy , Neural Networks, Computer , Epilepsy/diagnosis , Humans , Signal Processing, Computer-Assisted , Automation , Child , Deep Learning , Diagnosis, Computer-Assisted/methods , Time Factors
6.
Trends Hear ; 28: 23312165241260029, 2024.
Article in English | MEDLINE | ID: mdl-38831646

ABSTRACT

The extent to which active noise cancelation (ANC), when combined with hearing assistance, can improve speech intelligibility in noise is not well understood. One possible source of benefit is ANC's ability to reduce the sound level of the direct (i.e., vent-transmitted) path. This reduction lowers the "floor" imposed by the direct path, thereby allowing any increases to the signal-to-noise ratio (SNR) created in the amplified path to be "realized" at the eardrum. Here we used a modeling approach to estimate this benefit. We compared pairs of simulated hearing aids that differ only in terms of their ability to provide ANC and computed intelligibility metrics on their outputs. The difference in metric scores between simulated devices is termed the "ANC Benefit." These simulations show that ANC Benefit increases as (1) the environmental sound level increases, (2) the ability of the hearing aid to improve SNR increases, (3) the strength of the ANC increases, and (4) the hearing loss severity decreases. The predicted size of the ANC Benefit can be substantial. For a moderate hearing loss, the model predicts improvement in intelligibility metrics of >30% when environments are moderately loud (>70 dB SPL) and devices are moderately capable of increasing SNR (by >4 dB). It appears that ANC can be a critical ingredient in hearing devices that attempt to improve SNR in loud environments. ANC will become increasingly important as advanced SNR-improving algorithms (e.g., artificial intelligence speech enhancement) are included in hearing devices.
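The two-path mechanism described above can be illustrated with simple power summation: speech and noise reach the eardrum via the vent (direct) path and the amplified path, and ANC attenuates the direct path. This toy model is my own construction to show the arithmetic, not the article's simulation; the 0 dB input SNR and all parameter values are assumptions:

```python
import numpy as np

def eardrum_snr(snr_gain_db, gain_db, anc_db):
    """Toy model: eardrum SNR when a 0 dB-SNR input arrives via an
    ANC-attenuated direct path plus an amplified path whose speech
    component is boosted by an extra `snr_gain_db`."""
    def db2pow(x):
        return 10 ** (x / 10)
    speech_in = noise_in = 1.0                  # 0 dB input SNR (assumed)
    speech_direct = speech_in / db2pow(anc_db)  # vent path, ANC-attenuated
    noise_direct = noise_in / db2pow(anc_db)
    speech_amp = speech_in * db2pow(gain_db + snr_gain_db)
    noise_amp = noise_in * db2pow(gain_db)
    snr = (speech_direct + speech_amp) / (noise_direct + noise_amp)
    return 10 * np.log10(snr)

benefit = (eardrum_snr(snr_gain_db=4, gain_db=0, anc_db=10)
           - eardrum_snr(snr_gain_db=4, gain_db=0, anc_db=0))
print(round(benefit, 2))    # positive: ANC lets the SNR gain be realized
```

Consistent with the article's qualitative claim, attenuating the direct path lets more of the amplified path's 4 dB SNR improvement reach the eardrum.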


Subject(s)
Hearing Aids , Noise , Perceptual Masking , Signal-To-Noise Ratio , Speech Intelligibility , Speech Perception , Humans , Noise/adverse effects , Computer Simulation , Acoustic Stimulation , Correction of Hearing Impairment/instrumentation , Persons With Hearing Impairments/rehabilitation , Persons With Hearing Impairments/psychology , Hearing Loss/diagnosis , Hearing Loss/rehabilitation , Hearing Loss/physiopathology , Equipment Design , Signal Processing, Computer-Assisted
7.
Sci Rep ; 14(1): 10371, 2024 05 06.
Article in English | MEDLINE | ID: mdl-38710806

ABSTRACT

Emotion is a human faculty that can influence an individual's quality of life in both positive and negative ways. The ability to distinguish different types of emotion can help researchers estimate a patient's current state or the probability of future disease. Recognizing emotions from facial images is unreliable because individuals can conceal their feelings by modifying their facial expressions. This led researchers to consider electroencephalography (EEG) signals for more accurate emotion detection. However, the complexity of EEG recordings and data analysis using conventional machine learning algorithms caused inconsistent emotion recognition. Therefore, utilizing hybrid deep learning models and other techniques has become common due to their ability to analyze complicated data and achieve higher performance by integrating diverse features of the models. At the same time, researchers prioritize models with fewer parameters while seeking the highest average accuracy. This study improves the Convolutional Fuzzy Neural Network (CFNN) for emotion recognition using EEG signals to achieve a reliable detection system. Initially, the pre-processing and feature extraction phases are implemented to obtain noiseless and informative data. Then, the CFNN with modified architecture is trained to classify emotions. Several parametric and comparative experiments are performed. The proposed model achieved reliable performance for emotion recognition with an average accuracy of 98.21% and 98.08% for valence (pleasantness) and arousal (intensity), respectively, and outperformed state-of-the-art methods.


Subject(s)
Electroencephalography , Emotions , Fuzzy Logic , Neural Networks, Computer , Humans , Electroencephalography/methods , Emotions/physiology , Male , Female , Adult , Algorithms , Young Adult , Signal Processing, Computer-Assisted , Deep Learning , Facial Expression
8.
BMC Med Inform Decis Mak ; 24(1): 119, 2024 May 06.
Article in English | MEDLINE | ID: mdl-38711099

ABSTRACT

The goal is to enhance an automated sleep staging system's performance by leveraging the diverse signals captured through multi-modal polysomnography (PSG) recordings. Three modalities of PSG signals, namely electroencephalogram (EEG), electrooculogram (EOG), and electromyogram (EMG), were considered to obtain the optimal fusions of the PSG signals, and 63 features were extracted. These include frequency-based, time-based, statistical, entropy-based, and non-linear features. We adopted the ReliefF (ReF) feature selection algorithm to identify the most suitable features for each signal and for superpositions of PSG signals. The twelve top features most correlated with the sleep stages were selected from the extracted feature sets. The selected features were fed into an AdaBoost with Random Forest (ADB + RF) classifier to validate the chosen segments and classify the sleep stages. The experiments in this study used two testing schemes, epoch-wise testing and subject-wise testing, and were conducted on four publicly available datasets: ISRUC-Sleep subgroup1 (ISRUC-SG1), sleep-EDF (S-EDF), the PhysioBank CAP sleep database (PB-CAPSDB), and S-EDF-78. This work demonstrates that the proposed fusion strategy outperforms the common use of individual PSG signals.
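The ReliefF selector used above ranks features by how well they separate nearest neighbours of different classes. A bare-bones numpy sketch of binary Relief (the simplest relative of ReliefF; the function name and toy data are illustrative):

```python
import numpy as np

def relief_weights(X, y, n_iter=100, rng=None):
    """Binary Relief: features that differ more for the nearest sample of
    the opposite class ("miss") than for the nearest sample of the same
    class ("hit") receive higher weights."""
    if rng is None:
        rng = np.random.default_rng(0)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        dist = np.sum((X - X[i]) ** 2, axis=1)
        dist[i] = np.inf                       # exclude the sample itself
        hit = np.argmin(np.where(y == y[i], dist, np.inf))
        miss = np.argmin(np.where(y != y[i], dist, np.inf))
        w += (X[i] - X[miss]) ** 2 - (X[i] - X[hit]) ** 2
    return w / n_iter

rng = np.random.default_rng(2)
y = np.repeat([0, 1], 100)
X = rng.standard_normal((200, 5))
X[:, 0] += 2 * y              # only feature 0 is informative
w = relief_weights(X, y)
print(np.argmax(w))           # 0: the informative feature ranks first
```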


Subject(s)
Electroencephalography , Electromyography , Electrooculography , Machine Learning , Polysomnography , Sleep Stages , Humans , Sleep Stages/physiology , Adult , Male , Female , Signal Processing, Computer-Assisted
9.
PLoS One ; 19(5): e0302707, 2024.
Article in English | MEDLINE | ID: mdl-38713653

ABSTRACT

Knee osteoarthritis (OA) is a prevalent, debilitating joint condition primarily affecting the elderly. This investigation aims to develop an electromyography (EMG)-based method for diagnosing knee pathologies. EMG signals of the muscles surrounding the knee joint were examined and recorded. The principal components of the proposed method were preprocessing, high-order spectral analysis (HOSA), and diagnosis/recognition through deep learning. EMG signals from individuals with normal and OA knees while walking were extracted from a publicly available database. This examination focused on the quadriceps femoris, the medial gastrocnemius, the rectus femoris, the semitendinosus, and the vastus medialis. Filtration and rectification were applied beforehand to remove noise and smooth the EMG signals. The signals' higher-order spectra were analyzed with HOSA to obtain information about nonlinear interactions and phase coupling. First, the bicoherence representation of the EMG signals was computed. The resulting images were fed into a deep learning system for identification and analysis. A deep learning algorithm using an adapted ResNet101 CNN model examined the images to determine whether the EMG signals were normal or indicative of knee osteoarthritis (KOA). The validated test results demonstrated high accuracy and robust metrics, indicating that the proposed method is effective. The medial gastrocnemius (MG) muscle was able to distinguish KOA patients from normal subjects with 96.3±1.7% accuracy and 0.994±0.008 AUC. MG has the highest prediction accuracy for KOA and can be used as the muscle of interest in future analyses. Despite the proposed method's strengths, some limitations still require special consideration and will be addressed in future research.
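The bicoherence images fed to the CNN quantify quadratic phase coupling between frequency pairs. A minimal numpy estimator in the common Kim-Powers normalisation (segment length, windowing, and the toy signal are assumptions; the paper's exact HOSA settings are not given):

```python
import numpy as np

def bicoherence(x, nperseg=256):
    """Segment-averaged bicoherence: phase coupling between (f1, f2) and
    f1 + f2, normalised to [0, 1]."""
    segs = x[: len(x) // nperseg * nperseg].reshape(-1, nperseg)
    segs = segs - segs.mean(axis=1, keepdims=True)
    X = np.fft.rfft(segs * np.hanning(nperseg), axis=1)
    nf = X.shape[1] // 2
    f1 = np.arange(nf)[:, None]
    f2 = np.arange(nf)[None, :]
    num = np.mean(X[:, f1] * X[:, f2] * np.conj(X[:, f1 + f2]), axis=0)
    den = np.sqrt(np.mean(np.abs(X[:, f1] * X[:, f2]) ** 2, axis=0)
                  * np.mean(np.abs(X[:, f1 + f2]) ** 2, axis=0))
    return np.abs(num) / np.maximum(den, 1e-12)

# Quadratically coupled components: 16 Hz + 40 Hz -> 56 Hz, fs = 512 Hz.
fs = 512
t = np.arange(fs * 32) / fs
rng = np.random.default_rng(0)
x = (np.cos(2 * np.pi * 16 * t) + np.cos(2 * np.pi * 40 * t)
     + 0.5 * np.cos(2 * np.pi * 56 * t) + 0.1 * rng.standard_normal(t.size))
b = bicoherence(x)
print(round(float(b[8, 20]), 2))   # strong coupling at bins (8, 20) = (16, 40) Hz
```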


Subject(s)
Deep Learning , Electromyography , Knee Joint , Osteoarthritis, Knee , Humans , Electromyography/methods , Osteoarthritis, Knee/diagnosis , Osteoarthritis, Knee/physiopathology , Knee Joint/physiopathology , Male , Female , Muscle, Skeletal/physiopathology , Middle Aged , Signal Processing, Computer-Assisted , Algorithms , Adult , Aged
10.
Chaos ; 34(5)2024 May 01.
Article in English | MEDLINE | ID: mdl-38717398

ABSTRACT

We use a multiscale symbolic approach to study the complex dynamics of temporal lobe refractory epilepsy employing high-resolution intracranial electroencephalogram (iEEG). We consider the basal and preictal phases and meticulously analyze the dynamics across frequency bands, focusing on high-frequency oscillations up to 240 Hz. Our results reveal significant periodicities and critical time scales within neural dynamics across frequency bands. By bandpass filtering neural signals into delta, theta, alpha, beta, gamma, and ripple high-frequency oscillation (HFO) bands, each associated with specific neural processes, we examine the distinct nonlinear dynamics. Our method introduces a reliable approach to pinpoint intrinsic time lag scales τ within frequency bands of the basal and preictal signals, which are crucial for the study of refractory epilepsy. Using metrics such as permutation entropy (H), Fisher information (F), and complexity (C), we explore nonlinear patterns within iEEG signals. We reveal the intrinsic τmax values that maximize complexity within each frequency band, unveiling the subtle nonlinear patterns of the temporal structures within the basal and preictal signals. Examining the H×F and C×F values allows us to identify differences in the delta band and a band between 200 and 220 Hz (HFO 6) when comparing basal and preictal signals. Differences in Fisher information in the delta and HFO 6 bands before seizures highlight their role in capturing important system dynamics. This offers new perspectives on the intricate relationship between delta oscillations and HFO waves in patients with focal epilepsy, highlighting the importance of these patterns and their potential as biomarkers.
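The permutation entropy H used above is computed from the relative frequencies of ordinal patterns in delay vectors with lag τ. A compact numpy implementation (embedding order 3 and the toy signals are illustrative choices):

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, tau=1):
    """Normalised permutation entropy for embedding dimension `order`
    and time lag `tau` (1 = maximally irregular, 0 = fully ordered)."""
    n = len(x) - (order - 1) * tau
    # Delay vectors, each mapped to its ordinal (rank-order) pattern.
    idx = np.arange(n)[:, None] + tau * np.arange(order)[None, :]
    patterns = np.argsort(x[idx], axis=1)
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p)) / np.log(factorial(order))

rng = np.random.default_rng(0)
h_noise = permutation_entropy(rng.standard_normal(10000))   # ~1 for white noise
h_sine = permutation_entropy(np.sin(np.arange(10000) * 0.05))  # lower for a sine
print(round(h_noise, 2), round(h_sine, 2))
```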


Subject(s)
Biomarkers , Delta Rhythm , Humans , Biomarkers/metabolism , Delta Rhythm/physiology , Electroencephalography/methods , Epilepsy/physiopathology , Signal Processing, Computer-Assisted , Male , Nonlinear Dynamics , Female , Adult , Epilepsy, Temporal Lobe/physiopathology
11.
Biomed Eng Online ; 23(1): 45, 2024 May 05.
Article in English | MEDLINE | ID: mdl-38705982

ABSTRACT

BACKGROUND: Sleep-disordered breathing (SDB) affects a significant portion of the population. As such, there is a need for accessible and affordable assessment methods for diagnosis but also case-finding and long-term follow-up. Research has focused on exploiting cardiac and respiratory signals to extract proxy measures for sleep combined with SDB event detection. We introduce a novel multi-task model combining cardiac activity and respiratory effort to perform sleep-wake classification and SDB event detection in order to automatically estimate the apnea-hypopnea index (AHI) as severity indicator. METHODS: The proposed multi-task model utilized both convolutional and recurrent neural networks and was formed by a shared part for common feature extraction, a task-specific part for sleep-wake classification, and a task-specific part for SDB event detection. The model was trained with RR intervals derived from electrocardiogram and respiratory effort signals. To assess performance, overnight polysomnography (PSG) recordings from 198 patients with varying degrees of SDB were included, with manually annotated sleep stages and SDB events. RESULTS: We achieved a Cohen's kappa of 0.70 in the sleep-wake classification task, corresponding to a Spearman's correlation coefficient (R) of 0.830 between the estimated total sleep time (TST) and the TST obtained from PSG-based sleep scoring. Combining the sleep-wake classification and SDB detection results of the multi-task model, we obtained an R of 0.891 between the estimated and the reference AHI. For severity classification of SDB groups based on AHI, a Cohen's kappa of 0.58 was achieved. The multi-task model performed better than a single-task model proposed in a previous study for AHI estimation, in particular for patients with a lower sleep efficiency (R of 0.861 with the multi-task model and R of 0.746 with the single-task model for subjects with sleep efficiency < 60%).
CONCLUSION: Assisted with automatic sleep-wake classification, our multi-task model demonstrated proficiency in estimating AHI and assessing SDB severity based on AHI in a fully automatic manner using RR intervals and respiratory effort. This shows the potential for improving SDB screening with unobtrusive sensors, also for subjects with low sleep efficiency, without adding dedicated sensors for sleep-wake detection.
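The quantity the model ultimately outputs, the AHI, is simply detected events per hour of sleep, bucketed into the standard clinical severity classes (cut-offs of 5, 15, and 30 events/hour are the widely used convention; the example numbers are illustrative):

```python
def ahi_and_severity(n_events, total_sleep_hours):
    """Apnea-hypopnea index (events per hour of sleep) and the standard
    clinical severity bucket derived from it."""
    ahi = n_events / total_sleep_hours
    if ahi < 5:
        severity = "normal"
    elif ahi < 15:
        severity = "mild"
    elif ahi < 30:
        severity = "moderate"
    else:
        severity = "severe"
    return ahi, severity

# 84 detected SDB events over an estimated 6.5 h of sleep.
ahi, sev = ahi_and_severity(84, 6.5)
print(round(ahi, 1), sev)   # 12.9 mild
```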


Subject(s)
Respiration , Signal Processing, Computer-Assisted , Sleep Apnea Syndromes , Sleep Apnea Syndromes/physiopathology , Sleep Apnea Syndromes/diagnosis , Humans , Male , Middle Aged , Polysomnography , Female , Machine Learning , Adult , Neural Networks, Computer , Electrocardiography , Aged , Wakefulness/physiology , Sleep
12.
Sci Rep ; 14(1): 10792, 2024 05 11.
Article in English | MEDLINE | ID: mdl-38734752

ABSTRACT

Epilepsy is a chronic neurological disease, characterized by spontaneous, unprovoked, recurrent seizures that may lead to long-term disability and premature death. Despite significant efforts made to improve epilepsy detection clinically and pre-clinically, the pervasive presence of noise in EEG signals continues to pose substantial challenges to their effective application. In addition, discriminant features for epilepsy detection have not been investigated yet. The objective of this study is to develop a hybrid model for epilepsy detection from noisy and fragmented EEG signals. We hypothesized that a hybrid model could surpass existing single models in epilepsy detection. Our approach involves manual noise rejection and a novel statistical channel selection technique to detect epilepsy even from noisy EEG signals. Our proposed Base-2-Meta stacking classifier achieved notable accuracy (0.98 ± 0.05), precision (0.98 ± 0.07), recall (0.98 ± 0.05), and F1 score (0.98 ± 0.04) even with noisy 5-s segmented EEG signals. Applying our approach to the specific problem of detecting epilepsy from noisy and fragmented EEG data reveals performance that is not only superior to existing methods but also translationally relevant, highlighting its potential application in clinical settings, where EEG signals are often noisy or scarce. Our proposed metric DF-A (Discriminant feature-accuracy), for the first time, identified the most discriminant feature with models that give A accuracy or above (A = 95% in this study). This groundbreaking approach allows for detecting discriminant features and can be used as potential electrographic biomarkers in epilepsy detection research. Moreover, our study introduces innovative insights into the understanding of these features, epilepsy detection, and cross-validation, markedly improving epilepsy detection in ways previously unavailable.


Subject(s)
Electroencephalography , Epilepsy , Electroencephalography/methods , Humans , Epilepsy/diagnosis , Epilepsy/physiopathology , Signal Processing, Computer-Assisted , Algorithms , Signal-To-Noise Ratio
13.
Biomed Eng Online ; 23(1): 48, 2024 May 17.
Article in English | MEDLINE | ID: mdl-38760808

ABSTRACT

Monitoring of ingestive activities is critically important for managing the health and wellness of individuals with various health conditions, including the elderly, diabetics, and individuals seeking better weight control. Monitoring swallowing events can be an ideal surrogate for developing streamlined methods for effective monitoring and quantification of eating or drinking events. Swallowing is an essential process for maintaining life. This seemingly simple process is the result of coordinated actions of several muscles and nerves in a complex fashion. In this study, we introduce automated methods for the detection and quantification of various eating and drinking activities. Wireless surface electromyography (sEMG) was used to detect chewing and swallowing from sEMG signals obtained from the sternocleidomastoid muscle, in addition to signals obtained from a wrist-mounted IMU sensor. A total of 4675 swallows were collected from 55 participants in the study. Multiple methods were employed to estimate bolus volumes in the case of fluid intake, including regression and classification models. Among the tested models, neural-network-based regression achieved an R2 of 0.88 and a root mean squared error of 0.2 (minimum bolus volume was 10 ml). Convolutional neural-network-based classification (when considering each bolus volume as a separate class) achieved an accuracy of over 99% using random cross-validation and around 66% using cross-subject validation. Multiple classification methods were also used for solid bolus type detection, including SVM and decision trees (DT), which achieved an accuracy above 99% with random validation and above 94% in cross-subject validation. Finally, regression models with both random and cross-subject validation were used for estimating the solid bolus volume, with an R2 value that approached 1 and root mean squared error values as low as 0.00037 (minimum solid bolus weight was 3 g).
These reported results lay the foundation for a cost-effective and non-invasive method for monitoring swallowing activities, which can be extremely beneficial in managing various chronic health conditions, such as diabetes and obesity.


Subject(s)
Deglutition , Electromyography , Humans , Deglutition/physiology , Male , Female , Automation , Signal Processing, Computer-Assisted , Adult , Neural Networks, Computer , Wireless Technology
14.
J Neurosci Methods ; 407: 110162, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38740142

ABSTRACT

BACKGROUND: Progress in advancing sleep research employing polysomnography (PSG) has been negatively impacted by the limited availability of widely available, open-source sleep-specific analysis tools. NEW METHOD: Here, we introduce Counting Sheep PSG, an EEGLAB-compatible software for signal processing, visualization, event marking and manual sleep stage scoring of PSG data for MATLAB. RESULTS: Key features include: (1) signal processing tools including bad channel interpolation, down-sampling, re-referencing, filtering, independent component analysis, artifact subspace reconstruction, and power spectral analysis, (2) customizable display of polysomnographic data and hypnogram, (3) event marking mode including manual sleep stage scoring, (4) automatic event detections including movement artifact, sleep spindles, slow waves and eye movements, and (5) export of main descriptive sleep architecture statistics, event statistics and publication-ready hypnogram. COMPARISON WITH EXISTING METHODS: Counting Sheep PSG was built on the foundation created by sleepSMG (https://sleepsmg.sourceforge.net/). The scope and functionalities of the current software have made significant advancements in terms of EEGLAB integration/compatibility, preprocessing, artifact correction, event detection, functionality and ease of use. By comparison, commercial software can be costly and utilize proprietary data formats and algorithms, thereby restricting the ability to distribute and share data and analysis results. CONCLUSIONS: The field of sleep research remains shackled by an industry that resists standardization, prevents interoperability, builds in planned obsolescence, and maintains proprietary, black-box data formats and analysis approaches. This presents a major challenge for the field. Free, open-source software that can read open-format data is essential for scientific advancement in the field.


Subject(s)
Polysomnography , Signal Processing, Computer-Assisted , Sleep Stages , Software , Polysomnography/methods , Humans , Sleep Stages/physiology , Electroencephalography/methods , Artifacts
15.
Biosensors (Basel) ; 14(5)2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38785685

ABSTRACT

Brain-computer interface (BCI) for motor imagery is an advanced technology used in the field of medical rehabilitation. However, due to the poor accuracy of electroencephalogram feature classification, BCI systems often misrecognize user commands. Although many state-of-the-art feature selection methods aim to enhance classification accuracy, they usually overlook the interrelationships between individual features, indirectly impacting the accuracy of feature classification. To overcome this issue, we propose an adaptive feature learning model that employs a Riemannian geometric approach to generate a feature matrix from electroencephalogram signals, serving as the model's input. By integrating the enhanced adaptive L1 penalty and weighted fusion penalty into the sparse learning model, we select the most informative features from the matrix. Specifically, we measure the importance of features using mutual information and introduce an adaptive weight construction strategy to penalize regression coefficients corresponding to each variable adaptively. Moreover, the weighted fusion penalty balances weight differences among correlated variables, reducing the model's overreliance on specific variables and enhancing accuracy. The performance of the proposed method was validated on BCI Competition IV datasets IIa and IIb using the support vector machine. Experimental results demonstrate the effectiveness and superiority of the proposed model compared to the existing models.


Subject(s)
Brain-Computer Interfaces , Electroencephalography , Humans , Support Vector Machine , Algorithms , Signal Processing, Computer-Assisted , Machine Learning , Imagination/physiology
16.
Stud Health Technol Inform ; 314: 151-152, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38785022

ABSTRACT

This study proposes an innovative application of the Goertzel Algorithm (GA) for the processing of vocal signals in dysphonia evaluation. Compared to the Fast Fourier Transform (FFT), the gold-standard analysis technique in this context, the GA demonstrates higher efficiency in terms of processing time and memory usage, while also showing improved discrimination between healthy and pathological conditions. This suggests that GA-based approaches could enhance the reliability and efficiency of vocal signal analysis, thus supporting physicians in dysphonia research and clinical monitoring.
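The Goertzel algorithm's efficiency comes from evaluating a single DFT bin with one second-order recursion instead of a full FFT, which pays off when only a few frequency bins of a voice signal are needed. A standard implementation, checked against the FFT:

```python
import numpy as np

def goertzel_power(x, k):
    """Power of the k-th DFT bin of `x` via the Goertzel recursion."""
    n = len(x)
    coeff = 2 * np.cos(2 * np.pi * k / n)
    s1 = s2 = 0.0
    for sample in x:
        s0 = sample + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2   # |X[k]|^2

# A pure tone at bin 10: Goertzel must match the FFT bin power.
x = np.cos(2 * np.pi * 10 * np.arange(128) / 128)
g = goertzel_power(x, 10)
fft_power = np.abs(np.fft.fft(x)[10]) ** 2
print(np.isclose(g, fft_power))   # True
```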


Subject(s)
Algorithms , Dysphonia , Humans , Dysphonia/diagnosis , Signal Processing, Computer-Assisted , Sound Spectrography/methods , Reproducibility of Results , Fourier Analysis , Female , Male
17.
Stud Health Technol Inform ; 314: 155-159, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38785023

ABSTRACT

Among its main benefits, telemonitoring enables personalized management of chronic diseases by means of biomarkers extracted from signals. In these applications, a thorough quality assessment is required to ensure the reliability of the monitored parameters. Motion artifacts are a common problem in recordings from wearable devices. In this work, we propose a fully automated and personalized method to detect motion artifacts in multimodal recordings devoted to monitoring Cardiac Time Intervals (CTIs). Motion artifacts were detected by template matching against a personalized template. The method yielded a balanced accuracy of 86%. Moreover, it proved effective in decreasing the variability of the estimated CTIs by at least 17%. Our preliminary results show that personalized detection of motion artifacts improves the robustness of CTI assessment and opens the way to its use in wearable systems.
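Template matching of this kind is commonly done with a normalized cross-correlation score: a segment that no longer resembles the subject's personalized template is flagged as artifactual. The sketch below is a generic illustration of that idea with an assumed threshold, not the study's actual pipeline or tuning.

```python
import math

def ncc(segment, template):
    """Zero-mean normalized cross-correlation between a beat segment
    and the subject's personalized template (equal lengths assumed)."""
    n = len(template)
    ms = sum(segment) / n
    mt = sum(template) / n
    num = sum((s - ms) * (t - mt) for s, t in zip(segment, template))
    ds = sum((s - ms) ** 2 for s in segment) ** 0.5
    dt = sum((t - mt) ** 2 for t in template) ** 0.5
    return num / (ds * dt) if ds and dt else 0.0

def is_motion_artifact(segment, template, threshold=0.7):
    """Flag a segment when it no longer resembles the template
    (threshold is an illustrative assumption)."""
    return ncc(segment, template) < threshold

template = [math.sin(2 * math.pi * i / 50) for i in range(50)]
clean = [0.9 * x for x in template]                    # scaled copy
corrupted = [x + ((-1) ** i) * 1.5 for i, x in enumerate(template)]
```

Because the correlation is amplitude-normalized, a clean beat that differs from the template only in gain still scores near 1, while motion-corrupted morphology drops below the threshold.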


Subject(s)
Artifacts , Telemedicine , Humans , Wearable Electronic Devices , Reproducibility of Results , Monitoring, Physiologic/methods , Electrocardiography , Signal Processing, Computer-Assisted
18.
Med Eng Phys ; 128: 104154, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38697881

ABSTRACT

Brain-computer interfaces (BCIs) are used to understand brain functioning and to develop therapies for neurological and neurodegenerative disorders. BCIs are therefore crucial in rehabilitating motor dysfunction and advancing motor imagery applications. For motor imagery, electroencephalogram (EEG) signals are used to classify the subject's intention of moving a body part without actually moving it. This paper presents a two-stage transformer-based architecture that employs handcrafted features and deep learning techniques to enhance classification performance on benchmarked EEG signals. Stage 1 is built on a parallel-convolution EEGNet, multi-head attention, and separable temporal convolution networks for spatiotemporal feature extraction. Further, in stage 2, additional features and embeddings extracted from stage 1 are used to train TabNet for enhanced classification. In addition, a novel channel cluster swapping data augmentation technique is developed to handle the issue of limited samples for training deep learning architectures. The developed two-stage architecture offered average classification accuracies of 88.5% and 88.3% on the BCI Competition IV-2a and IV-2b datasets, respectively, approximately 3.0% higher than similar recently reported works.
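One plausible reading of "channel cluster swapping" augmentation is exchanging a cluster of EEG channels between two trials of the same class, yielding two new plausible trials without changing labels. The sketch below illustrates that reading; the cluster choice and data layout (channels x samples) are assumptions, not the paper's specification.

```python
def swap_channel_cluster(trial_a, trial_b, cluster):
    """Exchange the listed channels between two same-class trials,
    producing two augmented trials; the originals are left intact."""
    aug_a = [row[:] for row in trial_a]
    aug_b = [row[:] for row in trial_b]
    for ch in cluster:
        aug_a[ch] = trial_b[ch][:]
        aug_b[ch] = trial_a[ch][:]
    return aug_a, aug_b

# Two toy 3-channel x 4-sample trials of the same class.
a = [[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3]]
b = [[9, 9, 9, 9], [8, 8, 8, 8], [7, 7, 7, 7]]
aug_a, aug_b = swap_channel_cluster(a, b, cluster=[0, 2])
```

Swapping whole clusters (rather than single channels) preserves local spatial correlations within the cluster, which is what makes the augmented trials realistic enough to help when training samples are scarce.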


Subject(s)
Brain-Computer Interfaces , Electroencephalography , Signal Processing, Computer-Assisted , Humans , Imagination/physiology , Deep Learning , Motor Activity/physiology , Movement , Neural Networks, Computer
19.
Article in English | MEDLINE | ID: mdl-38768007

ABSTRACT

Electroencephalogram (EEG) is widely used in basic and clinical neuroscience to explore neural states in various populations, and classifying these EEG recordings is a fundamental challenge. While machine learning shows promising results in classifying long multivariate time series, optimal prediction models and feature extraction methods for EEG classification remain elusive. Our study addressed the problem of EEG classification under the framework of brain age prediction, applying a deep learning model to EEG time series. We hypothesized that decomposing EEG signals into oscillatory modes would yield more accurate age predictions than using raw or canonically frequency-filtered EEG. Specifically, we employed multivariate intrinsic mode functions (MIMFs), an empirical mode decomposition (EMD) variant based on multivariate iterative filtering (MIF), with a convolutional neural network (CNN) model. Testing a large dataset of routine clinical EEG scans (n = 6540) from patients aged 1 to 103 years, we found that an ad hoc CNN model without fine-tuning could reasonably predict brain age from EEGs. Crucially, MIMF decomposition significantly improved performance compared to canonical brain rhythms (from delta to lower gamma oscillations). Our approach achieved a mean absolute error (MAE) of 13.76 ± 0.33 and a correlation coefficient of 0.64 ± 0.01 in brain age prediction over the entire lifespan. Our findings indicate that CNN models applied to EEGs that preserve their original temporal structure remain a promising framework for EEG classification, and that adaptive signal decompositions such as MIF can enhance CNN performance in this task.
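The essence of iterative-filtering decompositions is that each mode is obtained by subtracting a local mean (a moving-average low-pass) from the signal, leaving the fast oscillation as the mode and the slow trend as the residual. The toy single-channel, single-step sketch below illustrates that principle only; the actual MIF/MIMF algorithm iterates this with adaptive, multivariate filter lengths.

```python
import math

def moving_average(x, w):
    """Centered moving average with edge clamping -- the low-pass
    operator at the heart of iterative-filtering decompositions."""
    n, h = len(x), w // 2
    out = []
    for i in range(n):
        lo, hi = max(0, i - h), min(n, i + h + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def extract_mode(x, w):
    """One filtering step: the fast oscillatory mode is the signal
    minus its local mean; the residual keeps the slow trend."""
    mean = moving_average(x, w)
    mode = [a - b for a, b in zip(x, mean)]
    return mode, mean

# A fast 25 Hz ripple riding on a slow 1 Hz drift (fs = 200 Hz).
fs = 200
sig = [math.sin(2 * math.pi * 25 * i / fs)
       + 2 * math.sin(2 * math.pi * 1 * i / fs) for i in range(fs)]
mode, residual = extract_mode(sig, w=9)   # window ~ one 25 Hz period
```

Feeding such modes to a CNN, rather than the raw mixture, is the hypothesis the abstract tests: each input channel then carries one oscillatory timescale.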


Subject(s)
Brain , Electroencephalography , Neural Networks, Computer , Humans , Electroencephalography/methods , Young Adult , Adult , Child , Aged , Adolescent , Infant , Child, Preschool , Middle Aged , Aged, 80 and over , Male , Female , Brain/physiology , Algorithms , Deep Learning , Multivariate Analysis , Machine Learning , Signal Processing, Computer-Assisted
20.
J Neural Eng ; 21(3)2024 May 16.
Article in English | MEDLINE | ID: mdl-38718785

ABSTRACT

Objective. Recently, demand for wearable devices using electroencephalography (EEG) has increased rapidly in many fields. Owing to volume and computation constraints, wearable devices usually compress EEG and transmit it to external devices for analysis. However, current EEG compression algorithms are not tailor-made for wearable devices with limited computing and storage: first, their huge parameter counts make them difficult to deploy on wearable hardware; second, the low signal-to-noise ratio of EEG makes its distribution hard to learn, leading to excessive reconstruction error and suboptimal compression performance. Approach. Here, a feature-enhanced asymmetric encoding-decoding network is proposed. EEG is encoded with a lightweight model and subsequently decoded with a multi-level feature fusion network that extracts the encoded features deeply and reconstructs the signal through a two-branch structure. Main results. On public EEG datasets (motor imagery and event-related potentials), experimental results show that the proposed method achieves state-of-the-art compression performance. In addition, neural representation analysis and the classification performance of the reconstructed EEG signals show that our method tends to retain more task-related information as the compression ratio increases and preserves reliable discriminative information after compression. Significance. This paper tailors an asymmetric EEG compression method for wearable devices that achieves state-of-the-art compression performance in a lightweight manner, paving the way for the application of EEG-based wearable devices.
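The asymmetric principle, cheap encoder on the device, expensive decoder off the device, can be illustrated with a deliberately trivial stand-in: block-mean downsampling as the "lightweight encoder" and linear interpolation as the "heavier decoder". This is only a conceptual sketch of the asymmetry, in no way the paper's network; the ratio and signal are made up.

```python
def encode(signal, ratio):
    """Lightweight encoder: non-overlapping block means, shrinking
    the signal by `ratio` (cheap enough for the wearable side)."""
    usable = len(signal) - len(signal) % ratio
    return [sum(signal[i:i + ratio]) / ratio
            for i in range(0, usable, ratio)]

def decode(code, ratio, length):
    """Heavier decoder: linear interpolation between block means,
    meant to run on the external device where compute is plentiful."""
    out = []
    for i in range(length):
        pos = i / ratio - 0.5                   # position in code space
        j = max(0, min(len(code) - 2, int(pos)))
        frac = min(max(pos - j, 0.0), 1.0)
        out.append(code[j] * (1.0 - frac) + code[j + 1] * frac)
    return out

sig = [float(v) for v in range(16)]   # toy ramp "EEG" segment
code = encode(sig, ratio=4)           # 4x compression
recon = decode(code, ratio=4, length=16)
```

In the real system both sides are learned networks, but the division of labor is the same: the device transmits a short code, and the reconstruction effort is pushed to the receiver.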


Subject(s)
Data Compression , Electroencephalography , Electroencephalography/methods , Data Compression/methods , Humans , Wearable Electronic Devices , Neural Networks, Computer , Algorithms , Signal Processing, Computer-Assisted , Imagination/physiology