Results 1 - 20 of 751
1.
Article in English | MEDLINE | ID: mdl-38848223

ABSTRACT

Sleep staging serves as a fundamental assessment for sleep quality measurement and sleep disorder diagnosis. Although current deep learning approaches have successfully integrated multimodal sleep signals, enhancing the accuracy of automatic sleep staging, certain challenges remain, as follows: 1) optimizing the utilization of multi-modal information complementarity, 2) effectively extracting both long- and short-range temporal features of sleep information, and 3) addressing the class imbalance problem in sleep data. To address these challenges, this paper proposes a two-stream encoder-decoder network, named TSEDSleepNet, which is inspired by the depth sensitive attention and automatic multi-modal fusion (DSA2F) framework. In TSEDSleepNet, a two-stream encoder is used to extract the multiscale features of electrooculogram (EOG) and electroencephalogram (EEG) signals, and a self-attention mechanism is utilized to fuse the multiscale features, generating multi-modal saliency features. Subsequently, the coarser-scale construction module (CSCM) is adopted to extract and construct multi-resolution features from the multiscale features and the salient features. Thereafter, a Transformer module is applied to capture both long- and short-range temporal features from the multi-resolution features. Finally, the long- and short-range temporal features are restored with low-layer details and mapped to the predicted classification results. Additionally, the Lovász loss function is applied to alleviate the class imbalance problem in sleep datasets. Our proposed method was tested on the Sleep-EDF-39 and Sleep-EDF-153 datasets, and it achieved classification accuracies of 88.9% and 85.2% and Macro-F1 scores of 84.8% and 79.7%, respectively, thus outperforming conventional baseline models. These results highlight the efficacy of the proposed method in fusing multi-modal information. This method has potential for application as an adjunct tool for diagnosing sleep disorders.
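The paper's full architecture is not reproduced here, but the core idea of encoding EEG and EOG in separate streams and fusing their multiscale features with self-attention can be sketched in a few lines of PyTorch. The layer sizes, kernel widths, and the `TwoStreamFusion` module below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    """Sketch: per-modality 1-D conv encoders + self-attention fusion."""
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        # One small convolutional encoder per modality (EEG, EOG).
        self.eeg_enc = nn.Sequential(
            nn.Conv1d(1, d_model, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=7, stride=2, padding=3), nn.ReLU(),
        )
        self.eog_enc = nn.Sequential(
            nn.Conv1d(1, d_model, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=7, stride=2, padding=3), nn.ReLU(),
        )
        # Self-attention over the concatenated token sequence fuses the modalities.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, eeg, eog):
        # eeg, eog: (batch, 1, samples) single-channel 30-s epochs
        e = self.eeg_enc(eeg).transpose(1, 2)   # (batch, tokens, d_model)
        o = self.eog_enc(eog).transpose(1, 2)
        tokens = torch.cat([e, o], dim=1)       # concatenate modality tokens
        fused, _ = self.attn(tokens, tokens, tokens)
        return fused                            # multimodal saliency features

x_eeg = torch.randn(8, 1, 3000)  # e.g. 30 s at 100 Hz
x_eog = torch.randn(8, 1, 3000)
print(TwoStreamFusion()(x_eeg, x_eog).shape)   # torch.Size([8, 1500, 64])
```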


Subject(s)
Algorithms , Deep Learning , Electroencephalography , Electrooculography , Neural Networks, Computer , Sleep Stages , Humans , Electroencephalography/methods , Sleep Stages/physiology , Electrooculography/methods , Male , Female , Adult , Polysomnography/methods , Signal Processing, Computer-Assisted , Young Adult
2.
PLoS One ; 19(5): e0303565, 2024.
Article in English | MEDLINE | ID: mdl-38781127

ABSTRACT

In this study, we attempted to improve brain-computer interface (BCI) systems by means of auditory stream segregation, in which alternately presented tones are perceived as sequences of different tones (streams). A 3-class BCI using three tone sequences, which were perceived as three different tone streams, was investigated and evaluated. Each presented musical tone was generated by a software synthesizer. Eleven subjects took part in the experiment. Stimuli were presented to each user's right ear. Subjects were requested to attend to one of three streams and to count the number of target stimuli in the attended stream. In addition, 64-channel electroencephalogram (EEG) and two-channel electrooculogram (EOG) signals were recorded from participants with a sampling frequency of 1000 Hz. The measured EEG data were classified based on Riemannian geometry to detect the object of the subject's selective attention. P300 activity was elicited by the target stimuli in the segregated tone streams. In five out of eleven subjects, P300 activity was elicited only by the target stimuli included in the attended stream. In a 10-fold cross validation test, a classification accuracy over 80% for five subjects and over 75% for nine subjects was achieved. For subjects whose accuracy was lower than 75%, either the P300 was also elicited for nonattended streams or the amplitude of P300 was small. It was concluded that the number of classes in BCI systems based on auditory stream segregation can be increased to three, and that these classes can be detected through a single ear without the aid of any visual modality.
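The abstract does not give implementation details, but Riemannian-geometry classification of epoched EEG is commonly built from per-trial covariance matrices and a minimum-distance-to-mean classifier, for example with the pyriemann package. The sketch below, with invented array shapes and labels, is one plausible pipeline and not the authors' code.

```python
import numpy as np
from pyriemann.estimation import Covariances
from pyriemann.classification import MDM
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Hypothetical epoched data: (n_trials, n_channels, n_samples) at 1000 Hz
X = np.random.randn(120, 64, 600)
y = np.random.randint(0, 3, size=120)   # 3 attended-stream classes (toy labels)

# Spatial covariance per trial, then minimum distance to the Riemannian mean
clf = make_pipeline(Covariances(estimator="oas"),
                    MDM(metric="riemann"))
scores = cross_val_score(clf, X, y, cv=10)   # 10-fold CV as in the study
print(f"mean accuracy: {scores.mean():.2f}")
```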


Subject(s)
Acoustic Stimulation , Attention , Brain-Computer Interfaces , Electroencephalography , Humans , Male , Female , Electroencephalography/methods , Adult , Attention/physiology , Acoustic Stimulation/methods , Auditory Perception/physiology , Young Adult , Event-Related Potentials, P300/physiology , Electrooculography/methods
3.
Talanta ; 275: 126180, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-38703480

ABSTRACT

Organic Electrochemical Transistors (OECTs) are integral in detecting human bioelectric signals, their significance attributable to distinct electrochemical properties, the use of soft materials, compact dimensions, and pronounced biocompatibility. This review traverses the technological evolution of OECT, highlighting its profound impact on non-invasive detection methodologies within the biomedical field. Four sensor types rooted in OECT technology are introduced: Electrocardiogram (ECG), Electroencephalogram (EEG), Electromyography (EMG), and Electrooculography (EOG), which hold promise for integration into wearable detection systems. The fundamental detection principles, material compositions, and functional attributes of these sensors are examined. Additionally, the performance metrics of assorted physiological electrical detection sensors are discussed, and viable optimization strategies are delineated. The overarching goal of this review is to foster deeper insights into the generation, propagation, and modulation of electrophysiological signals, thereby advancing the application and development of OECT in medical sciences.


Subject(s)
Transistors, Electronic , Humans , Electromyography/methods , Electrocardiography/methods , Electrochemical Techniques/methods , Electrooculography/methods , Electroencephalography
4.
Article in English | MEDLINE | ID: mdl-38635384

ABSTRACT

Polysomnography (PSG) recordings have been widely used for sleep staging in clinics, containing multiple modality signals (i.e., EEG and EOG). Recently, many studies have combined EEG and EOG modalities for sleep staging, since they are the most and the second most powerful modalities for sleep staging among PSG recordings, respectively. However, EEG is complex to collect and sensitive to environmental noise or other body activities, impeding its use in clinical practice. Comparatively, EOG is much easier to obtain. In order to make full use of the powerful ability of EEG and the easy collection of EOG, we propose a novel framework to simplify multimodal sleep staging with a single EOG modality. It still performs well with only the EOG modality in the absence of EEG. Specifically, we first model the correlation between EEG and EOG, and then, based on the correlation, we generate multimodal features with time- and frequency-guided generators by adopting the idea of generative adversarial learning. We collected a real-world sleep dataset containing 67 recordings and used four other public datasets for evaluation. Compared with other existing sleep staging methods, our framework performs the best when solely using the EOG modality. Moreover, under our framework, EOG provides a comparable performance to EEG.
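As a rough illustration of the generative-adversarial idea (producing EEG-like features from EOG features), the minimal PyTorch sketch below uses plain MLPs and a single adversarial step; the feature dimension and network sizes are placeholders and this is not the paper's time- and frequency-guided generator design.

```python
import torch
import torch.nn as nn

feat_dim = 128  # placeholder feature dimension

# Generator: EOG features -> pseudo "multimodal" (EEG-like) features
G = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, feat_dim))
# Discriminator: real EEG features vs. generated ones
D = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

eog_feat = torch.randn(32, feat_dim)   # toy batch of EOG features
eeg_feat = torch.randn(32, feat_dim)   # paired EEG features (available at training time)

# --- one adversarial training step ---
fake = G(eog_feat)
d_loss = bce(D(eeg_feat), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

g_loss = bce(D(fake), torch.ones(32, 1))       # generator tries to fool the discriminator
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print(float(d_loss), float(g_loss))
```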


Subject(s)
Algorithms , Electroencephalography , Electrooculography , Polysomnography , Sleep Stages , Humans , Electroencephalography/methods , Sleep Stages/physiology , Polysomnography/methods , Electrooculography/methods , Male , Adult , Female , Young Adult
5.
Physiol Meas ; 45(5)2024 May 15.
Article in English | MEDLINE | ID: mdl-38653318

ABSTRACT

Objective. Sleep staging based on full polysomnography is the gold standard in the diagnosis of many sleep disorders. It is however costly, complex, and obtrusive due to the use of multiple electrodes. Automatic sleep staging based on single-channel electro-oculography (EOG) is a promising alternative, requiring fewer electrodes which could be self-applied below the hairline. EOG sleep staging algorithms are however yet to be validated in clinical populations with sleep disorders. Approach. We utilized the SOMNIA dataset, comprising 774 recordings from subjects with various sleep disorders, including insomnia, sleep-disordered breathing, hypersomnolence, circadian rhythm disorders, parasomnias, and movement disorders. The recordings were divided into train (574), validation (100), and test (100) groups. We trained a neural network that integrated transformers within a U-Net backbone. This design facilitated learning of arbitrary-distance temporal relationships within and between the EOG and hypnogram. Main results. For 5-class sleep staging, we achieved median accuracies of 85.0% and 85.2% and Cohen's kappas of 0.781 and 0.796 for left and right EOG, respectively. The performance using the right EOG was significantly better than using the left EOG, possibly because in the recommended AASM setup, this electrode is located closer to the scalp. The proposed model is robust to the presence of a variety of sleep disorders, displaying no significant difference in performance for subjects with a certain sleep disorder compared to those without. Significance. The results show that accurate sleep staging using single-channel EOG can be done reliably for subjects with a variety of sleep disorders.
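For reference, the reported per-recording accuracy and Cohen's kappa can be computed with scikit-learn as below; the hypnogram arrays here are invented placeholders standing in for the model's 5-class output and the reference scoring.

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Placeholder: (reference, predicted) 5-class hypnograms for a few recordings
rng = np.random.default_rng(0)
recordings = [(rng.integers(0, 5, 900), rng.integers(0, 5, 900)) for _ in range(10)]

accs = [accuracy_score(ref, pred) for ref, pred in recordings]
kappas = [cohen_kappa_score(ref, pred) for ref, pred in recordings]
print("median accuracy:", np.median(accs))
print("median kappa:", np.median(kappas))
```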


Subject(s)
Electrooculography , Sleep Stages , Sleep Wake Disorders , Humans , Sleep Stages/physiology , Electrooculography/methods , Sleep Wake Disorders/diagnosis , Sleep Wake Disorders/physiopathology , Male , Female , Adult , Cohort Studies , Middle Aged , Signal Processing, Computer-Assisted , Neural Networks, Computer , Young Adult , Polysomnography
6.
Comput Biol Med ; 173: 108314, 2024 May.
Article in English | MEDLINE | ID: mdl-38513392

ABSTRACT

Sleep staging is a vital aspect of sleep assessment, serving as a critical tool for evaluating the quality of sleep and identifying sleep disorders. Manual sleep staging is a laborious process, while automatic sleep staging is seldom utilized in clinical practice due to issues related to the inadequate accuracy and interpretability of classification results in automatic sleep staging models. In this work, a hybrid intelligent model is presented for automatic sleep staging, which integrates data intelligence and knowledge intelligence, to attain a balance between accuracy, interpretability, and generalizability in the sleep stage classification. Specifically, it can be built on any combination of typical electroencephalography (EEG) and electrooculography (EOG) channels and comprises a temporal fully convolutional network based on the U-Net architecture and a multi-task feature mapping structure. The experimental results show that, compared to current interpretable automatic sleep staging models, our model achieves a Macro-F1 score of 0.804 on the ISRUC dataset and 0.780 on the Sleep-EDFx dataset. Moreover, we use knowledge intelligence to address issues of excessive jumps and unreasonable sleep stage transitions in the coarse sleep graphs obtained by the model. We also explore the different ways knowledge intelligence affects coarse sleep graphs by combining different sleep graph correction methods. Our research can offer convenient support for sleep physicians, indicating its significant potential in improving the efficiency of clinical sleep staging.
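The abstract does not specify its sleep-graph correction rules, but one simple example of "knowledge intelligence" applied to a coarse hypnogram is removing isolated one-epoch stage jumps, as in the hypothetical sketch below; a real system would encode richer transition knowledge.

```python
import numpy as np

def smooth_hypnogram(stages: np.ndarray) -> np.ndarray:
    """Replace isolated single-epoch stages with the surrounding stage.

    A crude, hypothetical correction rule, not the paper's method.
    """
    out = stages.copy()
    for i in range(1, len(stages) - 1):
        if stages[i - 1] == stages[i + 1] and stages[i] != stages[i - 1]:
            out[i] = stages[i - 1]          # remove the one-epoch jump
    return out

coarse = np.array([0, 0, 2, 0, 1, 1, 3, 1, 1, 2, 2, 2])
print(smooth_hypnogram(coarse))             # [0 0 0 0 1 1 1 1 1 2 2 2]
```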


Subject(s)
Sleep Stages , Sleep , Polysomnography/methods , Electroencephalography/methods , Electrooculography/methods
7.
IEEE J Biomed Health Inform ; 28(6): 3466-3477, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38502613

ABSTRACT

Over recent decades, electroencephalogram (EEG) has become an essential tool in the field of clinical analysis and neurological disease research. However, EEG recordings are notably vulnerable to artifacts during acquisition, especially in clinical settings, which can significantly impede the accurate interpretation of neuronal activity. Blind source separation is currently the most popular method for EEG denoising, but most of the sources it separates often contain both artifacts and brain activity, which may lead to substantial information loss if handled improperly. In this paper, we introduce a dual-threshold denoising method combining spatial filtering with frequency-domain filtering to automatically eliminate electrooculogram (EOG) and electromyogram (EMG) artifacts from multi-channel EEG. The proposed method employs a fusion of second-order blind identification (SOBI) and canonical correlation analysis (CCA) to enhance source separation quality, followed by an adaptive threshold to localize the artifact sources and a strict fixed threshold to remove strong artifact sources. Stationary wavelet transform (SWT) is utilized to decompose the weak artifact sources, with subsequent adjustment of wavelet coefficients in respective frequency bands tailored to the distinct characteristics of each artifact. The results on synthetic and real datasets show that our proposed method maximally retains the time-domain and frequency-domain information in the EEG during denoising. Compared with existing techniques, the proposed method achieves better denoising performance, which establishes a reliable foundation for subsequent clinical analyses.
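As a simplified illustration of the stationary-wavelet step only (not the full SOBI/CCA pipeline), the sketch below uses PyWavelets to decompose a weak artifact source, soft-threshold its detail coefficients, and reconstruct it; the wavelet, level, and threshold rule are assumptions rather than the authors' band-specific adjustments.

```python
import numpy as np
import pywt

def swt_denoise(source, wavelet="db4", level=4):
    """Soft-threshold SWT detail coefficients of a separated source."""
    n = len(source) - len(source) % 2**level       # SWT needs a multiple of 2**level
    coeffs = pywt.swt(source[:n], wavelet, level=level)
    cleaned = []
    for cA, cD in coeffs:
        thr = np.median(np.abs(cD)) / 0.6745 * np.sqrt(2 * np.log(n))  # universal threshold
        cleaned.append((cA, pywt.threshold(cD, thr, mode="soft")))
    return pywt.iswt(cleaned, wavelet)

fs = 250
t = np.arange(0, 8, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(len(t))  # toy separated source
print(swt_denoise(x).shape)
```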


Subject(s)
Artifacts , Electroencephalography , Signal Processing, Computer-Assisted , Humans , Electroencephalography/methods , Algorithms , Electromyography/methods , Adult , Wavelet Analysis , Electrooculography/methods , Male , Young Adult , Female
8.
Comput Methods Programs Biomed ; 244: 107992, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38218118

ABSTRACT

BACKGROUND AND OBJECTIVE: Sleep staging is an essential step for sleep disorder diagnosis, which is time-intensive and laborious for experts to perform manually. Automatic sleep stage classification methods not only alleviate experts from these demanding tasks but also enhance the accuracy and efficiency of the classification process. METHODS: A novel multi-channel biosignal-based model, constructed by combining a 3D convolutional operation and a graph convolutional operation, is proposed for automated sleep staging using various physiological signals. Both the 3D convolution and graph convolution can aggregate information from neighboring brain areas, which helps to learn intrinsic connections from the biosignals. Electroencephalogram (EEG), electromyogram (EMG), electrooculogram (EOG) and electrocardiogram (ECG) signals are employed to extract time domain and frequency domain features. Subsequently, these signals are input to the 3D convolutional and graph convolutional branches, respectively. The 3D convolution branch can explore the correlations between multi-channel signals and multi-band waves in each channel in the time series, while the graph convolution branch can explore the connections between each channel and each frequency band. In this work, we have developed the proposed multi-channel convolution combined sleep stage classification model (MixSleepNet) using ISRUC datasets (Subgroup 3 and 50 random samples from Subgroup 1). RESULTS: Based on the first expert's labels, MixSleepNet yielded an accuracy, F1-score and Cohen kappa score of 0.830, 0.821 and 0.782, respectively, for ISRUC-S3. It obtained accuracy, F1-score and Cohen kappa scores of 0.812, 0.786, and 0.756, respectively, for the ISRUC-S1 dataset. In accordance with the evaluations conducted by the second expert, the comprehensive accuracies, F1-scores, and Cohen kappa coefficients for the ISRUC-S3 and ISRUC-S1 datasets are determined to be 0.837, 0.820, 0.789, and 0.829, 0.791, 0.775, respectively. CONCLUSION: The performance metrics of the proposed method are much better than those of all the compared models. Additional experiments were carried out on the ISRUC-S3 sub-dataset to evaluate the contributions of each module towards the classification performance.
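The 3D-convolution branch is omitted here, but the graph-convolution idea of aggregating information across neighboring electrode channels can be sketched as a single GCN-style layer in PyTorch; the ring adjacency matrix and feature dimensions below are illustrative, not the MixSleepNet configuration.

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """One GCN-style layer: X' = relu(A_hat @ X @ W)."""
    def __init__(self, in_dim, out_dim, adjacency):
        super().__init__()
        a = adjacency + torch.eye(adjacency.shape[0])       # add self-loops
        d = a.sum(dim=1)
        self.register_buffer("a_hat", a / torch.sqrt(d[:, None] * d[None, :]))
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x):            # x: (batch, n_channels, n_features)
        return torch.relu(self.lin(self.a_hat @ x))

# Toy electrode graph: 6 channels with ring connectivity, 16 features per channel
adj = torch.zeros(6, 6)
for i in range(6):
    adj[i, (i + 1) % 6] = adj[(i + 1) % 6, i] = 1.0
layer = GraphConv(16, 32, adj)
print(layer(torch.randn(4, 6, 16)).shape)    # torch.Size([4, 6, 32])
```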


Subject(s)
Sleep Stages , Sleep , Sleep Stages/physiology , Time Factors , Electroencephalography/methods , Electrooculography/methods
9.
J Sleep Res ; 33(2): e13977, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37400248

ABSTRACT

Sleep recordings are increasingly being conducted in patients' homes where patients apply the sensors themselves according to instructions. However, certain sensor types such as cup electrodes used in conventional polysomnography are unfeasible for self-application. To overcome this, self-applied forehead montages with electroencephalography and electro-oculography sensors have been developed. We evaluated the technical feasibility of a self-applied electrode set from Nox Medical (Reykjavik, Iceland) through home sleep recordings of healthy and suspected sleep-disordered adults (n = 174) in the context of sleep staging. Subjects slept with a double setup of conventional type II polysomnography sensors and self-applied forehead sensors. We found that the self-applied electroencephalography and electro-oculography electrodes had acceptable impedance levels but were more prone to losing proper skin-electrode contact than the conventional cup electrodes. Moreover, the forehead electroencephalography signals recorded using the self-applied electrodes expressed lower amplitudes (difference 25.3%-43.9%, p < 0.001) and less absolute power (at 1-40 Hz, p < 0.001) than the polysomnography electroencephalography signals in all sleep stages. However, the signals recorded with the self-applied electroencephalography electrodes expressed more relative power (p < 0.001) at very low frequencies (0.3-1.0 Hz) in all sleep stages. The electro-oculography signals recorded with the self-applied electrodes expressed comparable characteristics with standard electro-oculography. In conclusion, the results support the technical feasibility of the self-applied electroencephalography and electro-oculography for sleep staging in home sleep recordings, after adjustment for amplitude differences, especially for scoring Stage N3 sleep.
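The reported amplitude and power comparisons rest on standard spectral estimation; the sketch below computes relative power in the 0.3-1.0 Hz band with SciPy's Welch method on a synthetic epoch, with the band edges taken from the abstract and the sampling rate, window length, and total band assumed.

```python
import numpy as np
from scipy.signal import welch

def relative_band_power(x, fs, band=(0.3, 1.0), total=(0.3, 40.0)):
    """Fraction of spectral power inside `band` relative to `total`."""
    f, pxx = welch(x, fs=fs, nperseg=4 * fs)        # 4-s windows
    in_band = (f >= band[0]) & (f <= band[1])
    in_total = (f >= total[0]) & (f <= total[1])
    return np.trapz(pxx[in_band], f[in_band]) / np.trapz(pxx[in_total], f[in_total])

fs = 200                                            # assumed sampling rate (Hz)
x = np.random.randn(30 * fs)                        # one 30-s epoch (toy data)
print(relative_band_power(x, fs))
```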


Subject(s)
Electroencephalography , Sleep , Adult , Humans , Polysomnography/methods , Feasibility Studies , Electrooculography/methods , Sleep Stages , Electrodes
10.
IEEE Trans Biomed Circuits Syst ; 18(2): 322-333, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37851555

ABSTRACT

Human eye activity has been widely studied in many fields such as psychology, neuroscience, medicine, and human-computer interaction engineering. In previous studies, monitoring of human eye activity has mainly depended on the electrooculogram (EOG), which requires contact sensors. This article proposes a novel eye movement monitoring method called continuous wave Doppler oculogram (cDOG). Unlike conventional EOG-based eye movement monitoring methods, cDOG, based on a continuous wave Doppler radar sensor (cDRS), can remotely measure human eye activity without placing electrodes on the head. To verify the feasibility of using cDOG for eye movement monitoring, we first theoretically analyzed the association between the radar signal and the corresponding eye movements measured with EOG. Afterward, we conducted an experiment to compare EOG and cDOG measurements under the conditions of eye closure and opening. In addition, different eye movement states were considered, including right-left saccade, up-down saccade, eye-blink, and fixation. Several representative time domain and frequency domain features obtained from cDOG and from EOG were compared in these states, allowing us to demonstrate the feasibility of using cDOG for monitoring eye movements. The experimental results show that there is a correlation between cDOG and EOG in the time and frequency domain features, the average time error of a single eye movement is less than 280.5 ms, and the accuracy of cDOG in eye movement detection is higher than 92.35%, when the distance between the cDRS and the face is 10 cm and the eyes are facing the radar directly.
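The reported time- and frequency-domain comparison amounts to correlating per-event features from the two modalities; a toy version with SciPy's pearsonr is given below, with all feature values invented for illustration.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
# Hypothetical per-event features (e.g. blink duration in s) from EOG and radar
eog_feat = rng.uniform(0.1, 0.4, 50)
cdog_feat = eog_feat + rng.normal(0, 0.03, 50)        # radar estimate with noise

r, p = pearsonr(eog_feat, cdog_feat)
timing_error = np.abs(cdog_feat - eog_feat).mean()    # average timing error (s)
print(f"r = {r:.2f}, p = {p:.1e}, mean error = {timing_error * 1000:.1f} ms")
```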


Subject(s)
Eye Movements , Radar , Humans , Feasibility Studies , Electrooculography/methods , Blinking
11.
Psychophysiology ; 61(3): e14461, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37855151

ABSTRACT

This study aimed to evaluate the utility and applicability of electrooculography (EOG) when studying ocular activity during complex motor behavior. Due to its lower spatial resolution relative to eye tracking (ET), it is unclear whether EOG can provide valid and accurate temporal measurements such as the duration of the Quiet Eye (QE), that is the uninterrupted dwell time on the visual target prior to and during action. However, because of its greater temporal resolution, EOG is better suited for temporal-spectral decomposition, a technique that allows us to distinguish between lower and higher frequency activity as a function of time. Sixteen golfers of varying expertise (novices to experts) putted 60 balls to a 4-m distant target on a flat surface while we recorded EOG, ET, performance accuracy, and putter kinematics. Correlational and discrepancy analyses confirmed that EOG yielded valid and accurate QE measurements, but only when using certain processing parameters. Nested cross-validation indicated that, among a set of ET and EOG temporal and spectral oculomotor features, EOG power was the most useful when predicting performance accuracy through robust regression. Follow-up cross-validation and correlational analyses revealed that more accurate performance was preceded by diminished lower-frequency activity immediately before movement initiation and elevated higher-frequency activity during movement recorded from the horizontal channel. This higher-frequency activity was also found to accompany a smoother movement execution. This study validates EOG algorithms (code provided) for measuring temporal parameters and presents a novel approach to extracting temporal and spectral oculomotor features during complex motor behavior.
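Quiet Eye duration is essentially the final uninterrupted dwell on the target before and during movement; a hypothetical helper for computing it from a per-sample on-target boolean trace is sketched below (the sampling rate and trace are made up, and the published algorithms operate on EOG-derived gaze rather than a ready-made boolean trace).

```python
import numpy as np

def quiet_eye_duration(on_target: np.ndarray, movement_onset: int, fs: float) -> float:
    """Duration (s) of the last uninterrupted on-target dwell spanning movement onset.

    on_target: boolean per-sample trace (True while gaze dwells on the target).
    """
    start = movement_onset
    while start > 0 and on_target[start - 1]:         # walk back to QE onset
        start -= 1
    end = movement_onset
    while end < len(on_target) and on_target[end]:    # extend through movement to QE offset
        end += 1
    return (end - start) / fs

fs = 500                                              # assumed EOG sampling rate (Hz)
trace = np.zeros(1000, dtype=bool)
trace[300:750] = True                                 # dwell from 0.6 s to 1.5 s
print(quiet_eye_duration(trace, movement_onset=500, fs=fs))   # 0.9
```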


Subject(s)
Algorithms , Eye Movements , Humans , Electrooculography/methods , Eye-Tracking Technology , Biomechanical Phenomena
12.
Article in English | MEDLINE | ID: mdl-38088999

ABSTRACT

Gaze estimation, as a technique that reflects individual attention, can be used for disability assistance and for assisting physicians in diagnosing diseases such as autism spectrum disorder (ASD), Parkinson's disease, and attention deficit hyperactivity disorder (ADHD). Various techniques have been proposed for gaze estimation and have achieved high resolution. Among these approaches, electrooculography (EOG)-based gaze estimation, as an economical and effective method, offers a promising solution for practical applications. OBJECTIVE: In this paper, we systematically investigated the possible EOG electrode locations spatially distributed around the orbital cavity. Afterward, numerous informative features characterizing the physiological information of eye movement in the temporal-spectral domain are extracted from the seven differential channels. METHODS AND PROCEDURES: To select the optimal channels and relevant features and eliminate irrelevant information, a heuristic search algorithm (i.e., a forward stepwise strategy) is applied. Subsequently, a comparative analysis of the impacts of electrode placement and feature contributions on gaze estimation is conducted via 6 classic models with 18 subjects. RESULTS: Experimental results showed that promising performance was achieved in both the Mean Absolute Error (MAE) and the Root Mean Square Error (RMSE) within a wide gaze range from -50° to +50°. The MAE and RMSE can be improved to 2.80° and 3.74°, respectively, while using only 10 features extracted from 2 channels. Compared with prevailing EOG-based techniques, the performance improvements in MAE and RMSE range from 0.70° to 5.48° and 0.66° to 5.42°, respectively. CONCLUSION: We proposed a robust EOG-based gaze estimation approach by systematically investigating the optimal channel/feature combination. The experimental results indicated not only the superiority of the proposed approach but also its potential for clinical application. Clinical and translational impact statement: Accurate gaze estimation is a key step for disability assistance and the accurate diagnosis of various diseases, including ASD, Parkinson's disease, and ADHD. The proposed approach can accurately estimate the point of gaze via EOG signals, and thus has the potential for various related medical applications.
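Forward stepwise selection of channel features for gaze-angle regression can be approximated with scikit-learn's SequentialFeatureSelector; the feature matrix, target angles, and Ridge regressor below are placeholders rather than the paper's setup.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 40))            # 200 trials x 40 EOG features (toy)
y = X[:, :3] @ np.array([20.0, 15.0, 5.0]) + rng.normal(0, 2, 200)  # gaze angle (deg)

sfs = SequentialFeatureSelector(Ridge(), n_features_to_select=10, direction="forward",
                                scoring="neg_mean_absolute_error", cv=5)
sfs.fit(X, y)
X_sel = sfs.transform(X)
mae = -cross_val_score(Ridge(), X_sel, y, scoring="neg_mean_absolute_error", cv=5).mean()
print("selected:", np.flatnonzero(sfs.get_support()), "MAE:", round(mae, 2))
```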


Subject(s)
Autism Spectrum Disorder , Parkinson Disease , Humans , Electrooculography/methods , Autism Spectrum Disorder/diagnosis , Parkinson Disease/diagnosis , Eye Movements , Electrodes
13.
Article in English | MEDLINE | ID: mdl-38083276

ABSTRACT

Human-machine interfaces (HMIs) based on electro-oculogram (EOG) signals have been widely explored. However, due to individual variability, it is still challenging for an EOG-based eye movement recognition model to achieve favorable cross-subject results. Classical transfer learning methods such as CORrelation Alignment (CORAL), Transfer Component Analysis (TCA), and Joint Distribution Adaptation (JDA) are mainly based on feature transformation and distribution alignment, which do not consider the similarities/dissimilarities between the target subject and the source subjects. In this paper, the Kullback-Leibler (KL) divergence of the log-Power Spectral Density (log-PSD) features of horizontal EOG (HEOG) between the target subject and each source subject is calculated to adaptively select the subset of source subjects assumed to have a distribution similar to the target subject's for further training. This not only accounts for similarity but also reduces computational consumption. The results show that the proposed approach is superior to the baseline and classical transfer learning methods, and significantly improves the performance of target subjects who perform poorly with the primary classifiers. The largest improvement with the Support Vector Machine (SVM) classifier was 13.1%, for subject 31, compared with the baseline result. The preliminary results of this study demonstrate the effectiveness of the proposed transfer framework and provide a promising tool for implementing cross-subject eye movement recognition models in real-life scenarios.
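The subject-selection step can be sketched directly from the abstract: compute the log-PSD of horizontal EOG for each subject, then rank source subjects by KL divergence to the target. The code below is a minimal version on synthetic signals; the PSD settings, normalization, and the number of retained subjects are assumptions.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import entropy

def log_psd_distribution(heog, fs=250):
    """Normalized log-PSD of a horizontal-EOG recording (toy settings)."""
    _, pxx = welch(heog, fs=fs, nperseg=2 * fs)
    logp = np.log(pxx + 1e-12)
    logp -= logp.min()                      # shift to non-negative values
    return logp / logp.sum()                # treat as a probability distribution

rng = np.random.default_rng(1)
target = log_psd_distribution(rng.standard_normal(60 * 250))
sources = {f"s{i:02d}": log_psd_distribution(rng.standard_normal(60 * 250))
           for i in range(20)}

# KL divergence of each source subject from the target; keep the k most similar
kl = {name: entropy(target, dist) for name, dist in sources.items()}
selected = sorted(kl, key=kl.get)[:5]
print("selected source subjects:", selected)
```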


Subject(s)
Electroencephalography , Eye Movements , Humans , Electrooculography/methods , Electroencephalography/methods , Movement , Support Vector Machine
14.
Article in English | MEDLINE | ID: mdl-38083601

ABSTRACT

The rise in population and aging has led to a significant increase in the number of individuals affected by common causes of vision loss. Early diagnosis and treatment are crucial to avoid the consequences of visual impairment. However, many visual problems are difficult to detect in their early stages. Visual adaptation can compensate for several visual deficits with adaptive eye movements. These adaptive eye movements may serve as indicators of vision loss. In this work, we investigate the association between eye movement and blurred vision. By using electrooculography (EOG) to record eye movements, we propose a new tracking model to identify the deterioration of refractive power. We verify the technical feasibility of this method by designing a blurred vision simulation experiment. Six sets of prescription lenses and a pair of flat lenses were used to create different levels of blurring effects. We analyzed binocular movements through EOG signals and performed a seven-class classification using the ResNet18 architecture. The results revealed an average classification accuracy of 94.7% for the subject-dependent model. However, the subject-independent model presented poor performance, with the highest accuracy reaching only 34.5%. Therefore, the potential of an EOG-based visual quality monitoring system is demonstrated. Furthermore, our experimental design provides a novel approach to assessing blurred vision.


Subject(s)
Eye Movements , Vision, Low , Humans , Electrooculography/methods , Vision Disorders
15.
Article in English | MEDLINE | ID: mdl-38083634

ABSTRACT

Driving after consuming alcohol can be dangerous, as it negatively affects judgement, reaction time, coordination, and decision-making abilities, increasing the risk of accidents and putting oneself and other road users in danger. Therefore, it is critical to establish reliable and accurate methods to detect and assess intoxication levels. One such approach is electrooculography (EOG), a non-invasive technique that measures eye movements, which has been linked to intoxication levels and holds promise as a method of estimating them. In recent years, machine learning algorithms have been utilized to analyze EOG signals to estimate various physiological and behavioural states. The purpose of this study was to investigate the viability of using EOG analysis and machine learning to estimate intoxication levels in a simulated driving scenario. EOG signals were measured using JINS MEME_R smart glasses and the level of intoxication was simulated using drunk vision goggles. We employed traditional signal processing techniques and feature engineering strategies. For classification, we used boosted decision trees, obtaining a prediction accuracy of over 94% for a four-class classification problem. Our results indicate that EOG analysis and machine learning can be utilized to accurately estimate intoxication levels in a simulated driving scenario.
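A boosted-decision-tree classifier over engineered EOG features is readily available in scikit-learn; the sketch below runs HistGradientBoostingClassifier on a synthetic four-class feature set, which stands in for (and is not) the study's boosted-tree pipeline or its features.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 400
X = rng.standard_normal((n, 24))                 # 24 engineered EOG features (toy)
y = rng.integers(0, 4, n)                        # 4 simulated intoxication levels
X[:, 0] += y                                     # inject a weak class-dependent signal

clf = HistGradientBoostingClassifier(max_iter=200)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```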


Subject(s)
Algorithms , Eye Movements , Electrooculography/methods , Reaction Time , Machine Learning
16.
Comput Biol Med ; 167: 107590, 2023 12.
Article in English | MEDLINE | ID: mdl-37897962

ABSTRACT

A large number of traffic accidents are caused by drowsiness while driving. In-vehicle alert systems based on physiological signals are among the most promising solutions for monitoring driving fatigue. However, different physiological modalities can be used, and many related studies have compared different modalities without considering the implementation feasibility of portable or wearable devices. Moreover, evaluations of each modality in previous studies were based on inconsistent choices of fatigue labels and signal features, making it hard to compare the results of different studies. Therefore, the modality comparison and fusion for continuous drowsiness estimation while driving is still unclear. This work sought to comprehensively compare widely used physiological modalities, including forehead electroencephalogram (EEG), electrooculogram (EOG), R-R intervals (RRI) and breath, in a hardware setting feasible for portable or wearable devices to monitor driving fatigue. Moreover, a more general conclusion on modality comparison and fusion was reached based on the regression of features, or their combinations, onto the awake-to-drowsy transition. Finally, the feature subset of the fused modalities was produced by a feature selection method to select the optimal feature combination and reduce computational consumption. Considering practical feasibility, the most effective combination with the highest correlation coefficient used forehead EEG or EOG, along with RRI and RRI-derived breath. If more comfort and convenience were required, the combination of RRI and RRI-derived breath was also promising.


Subject(s)
Electroencephalography , Wakefulness , Humans , Electroencephalography/methods , Accidents, Traffic/prevention & control , Electrooculography/methods , Fatigue
17.
Comput Biol Med ; 163: 107127, 2023 09.
Article in English | MEDLINE | ID: mdl-37311382

ABSTRACT

Nowadays, many sleep staging algorithms have not been widely used in practical situations because their generalization beyond the given datasets has not been convincingly demonstrated. Thus, to improve generalization, we select seven highly heterogeneous datasets covering 9970 records with over 20k hours among 7226 subjects spanning 950 days for training, validation, and evaluation. In this paper, we propose an automatic sleep staging architecture called TinyUStaging using single-lead EEG and EOG. The TinyUStaging is a lightweight U-Net with multiple attention modules to perform adaptive recalibration of the features, including a Channel and Spatial Joint Attention (CSJA) block and a Squeeze and Excitation (SE) block. Notably, to address the class imbalance problem, we design sampling strategies with probability compensation and propose a class-aware Sparse Weighted Dice and Focal (SWDF) loss function to improve the recognition rate for minority classes (N1) and hard-to-classify samples (N3), especially for OSA patients. Additionally, two hold-out sets containing healthy and sleep-disordered subjects are considered to verify the generalization. Against the background of large-scale, imbalanced, heterogeneous data, we perform subject-wise 5-fold cross-validation on each dataset, and the results demonstrate that our model outperforms many methods, especially in N1, achieving an average overall accuracy, macro F1-score (MF1), and kappa of 84.62%, 79.6%, and 0.764 on heterogeneous datasets under optimal partitioning, providing a solid foundation for out-of-hospital sleep monitoring. Moreover, the overall standard deviation of MF1 under different folds remains within 0.175, indicating that the model is relatively stable.
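The exact Sparse Weighted Dice and Focal (SWDF) loss is not given in the abstract; as a rough stand-in for its class-imbalance handling, the PyTorch sketch below implements a standard class-weighted focal loss, with the weights and gamma chosen arbitrarily.

```python
import torch
import torch.nn.functional as F

def weighted_focal_loss(logits, targets, class_weights, gamma=2.0):
    """Class-weighted focal loss for per-epoch sleep-stage logits.

    logits: (N, C), targets: (N,) stage indices, class_weights: (C,).
    This is a generic focal loss, not the paper's SWDF loss.
    """
    log_p = F.log_softmax(logits, dim=1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)   # log prob of true class
    pt = log_pt.exp()
    w = class_weights[targets]
    return (-w * (1 - pt) ** gamma * log_pt).mean()

logits = torch.randn(16, 5)                        # 5 sleep stages
targets = torch.randint(0, 5, (16,))
weights = torch.tensor([1.0, 3.0, 1.0, 2.0, 1.0])  # e.g. up-weight minority stages
print(weighted_focal_loss(logits, targets, weights))
```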


Subject(s)
Electroencephalography , Sleep Stages , Humans , Electrooculography/methods , Polysomnography/methods , Electroencephalography/methods , Sleep
18.
Sensors (Basel) ; 23(9)2023 May 07.
Article in English | MEDLINE | ID: mdl-37177757

ABSTRACT

The work carried out in this paper consists of the classification of the physiological signal generated by eye movement, the electrooculogram (EOG). When focusing on an object, the human eyes move simultaneously, generating a potential difference between the retinal epithelium and the cornea; the eyeball can thus be modeled as a dipole with a positive and a negative hemisphere. Supervised learning algorithms were implemented to classify five eye movements: left, right, down, up and blink. The wavelet transform was used to obtain frequency-domain information characterizing the EOG signal within a bandwidth of 0.5 to 50 Hz; training accuracies of 69.4% with K-Nearest Neighbors (KNN), 76.9% with a Support Vector Machine (SVM), and 60.5% with a Decision Tree (DT) were obtained, and accuracy was verified through the Jaccard index and other metrics such as the confusion matrix and the ROC (Receiver Operating Characteristic) curve. As a result, the SVM was the best classifier for this application according to the Jaccard index.
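The feature-extraction and classifier comparison can be sketched with PyWavelets and scikit-learn; the decomposition level, statistical features, and synthetic EOG segments below are illustrative choices, not the paper's exact configuration.

```python
import numpy as np
import pywt
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def wavelet_features(segment, wavelet="db4", level=4):
    """Mean absolute value and std of each wavelet sub-band."""
    coeffs = pywt.wavedec(segment, wavelet, level=level)
    return np.array([f(np.abs(c)) for c in coeffs for f in (np.mean, np.std)])

rng = np.random.default_rng(0)
segments = rng.standard_normal((250, 500))       # toy EOG segments (e.g. 2 s @ 250 Hz)
labels = rng.integers(0, 5, 250)                 # left, right, down, up, blink
X = np.vstack([wavelet_features(s) for s in segments])

for name, clf in [("KNN", KNeighborsClassifier()), ("SVM", SVC()),
                  ("DT", DecisionTreeClassifier())]:
    print(name, cross_val_score(clf, X, labels, cv=5).mean().round(3))
```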


Subject(s)
Algorithms , Support Vector Machine , Humans , Electrooculography/methods , Eye Movements , Wavelet Analysis
19.
Biomed Tech (Berl) ; 68(4): 361-372, 2023 Aug 28.
Article in English | MEDLINE | ID: mdl-36848391

ABSTRACT

Driver states are reported as one of the principal factors in driving safety. Distinguishing the driver's state based on artifact-free electroencephalogram (EEG) signals is an effective means, but redundant information and noise will inevitably reduce the signal-to-noise ratio of the EEG signal. This study proposes a method to automatically remove electrooculography (EOG) artifacts by noise fraction analysis. Specifically, multi-channel EEG recordings are collected after the driver has driven for a long time and after a certain period of rest, respectively. Noise fraction analysis is then applied to remove EOG artifacts by separating the multichannel EEG into components while optimizing the signal-to-noise quotient. The representation of the data characteristics of the denoised EEG is found in the Fisher ratio space. Additionally, a novel clustering algorithm is designed to identify the denoised EEG by combining a cluster ensemble and a probability mixture model (CEPM). EEG mapping plots are used to illustrate the effectiveness and efficiency of noise fraction analysis in denoising the EEG signals. The adjusted Rand index (ARI) and accuracy (ACC) are used to demonstrate clustering performance and precision. The results showed that the noise artifacts in the EEG were removed and that the clustering accuracy for all participants was above 90%, resulting in a high driver fatigue recognition rate.
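The cluster-ensemble/probability-mixture (CEPM) step is not detailed in the abstract; a single Gaussian mixture model evaluated with the adjusted Rand index, as in the hedged sketch below on placeholder feature vectors, illustrates the same kind of clustering-plus-ARI/ACC evaluation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score, accuracy_score

rng = np.random.default_rng(3)
# Toy Fisher-ratio features: two driver states (rested vs. fatigued)
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(2, 1, (100, 4))])
y = np.repeat([0, 1], 100)

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
pred = gmm.predict(X)
# Align cluster labels with states before computing accuracy
if accuracy_score(y, pred) < 0.5:
    pred = 1 - pred
print("ARI:", adjusted_rand_score(y, pred), "ACC:", accuracy_score(y, pred))
```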


Subject(s)
Algorithms , Electroencephalography , Humans , Electroencephalography/methods , Electrooculography/methods , Cluster Analysis , Artifacts , Signal Processing, Computer-Assisted
20.
Sensors (Basel) ; 23(3)2023 Jan 21.
Article in English | MEDLINE | ID: mdl-36772275

ABSTRACT

Background: Portable electroencephalogram (EEG) systems are often used in health care applications to record brain signals because of their ease of use. The electrooculogram (EOG) produced by eye blinks is a common low-frequency, high-amplitude artifact that might confound disease diagnosis. As a result, artifact removal approaches for portable single-channel EEG devices are in high demand. Materials: Dataset 2a from the BCI Competition IV was employed. It contains the EEG data of nine subjects. To determine the EOG effect, each session starts with 5 min of EEG data; this recording consists of two minutes with the eyes open, one minute with the eyes closed, and one minute with eye movements. Methodology: This article presents the automated removal of EOG artifacts from EEG signals. Circulant Singular Spectrum Analysis (CiSSA) was used to decompose the EOG-contaminated EEG signals into intrinsic mode functions (IMFs). Next, we identified the artifact signal components using kurtosis and energy values and removed them using a 4-level discrete wavelet transform (DWT). Results: The proposed approach was evaluated on synthetic and real EEG data and found to be effective in eliminating EOG artifacts while maintaining low-frequency EEG information. CiSSA-DWT achieved the best signal-to-artifact ratio (SAR), mean absolute error (MAE), relative root mean square error (RRMSE), and correlation coefficient (CC) of 1.4525, 0.0801, 18.274, and 0.9883, respectively. Comparison: The developed technique outperforms existing artifact suppression techniques according to the performance measures. Conclusions: This advancement is important for brain science and can contribute as an initial pre-processing step for research related to EEG signals.
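For readers reproducing the evaluation, the four reported metrics can be computed as below from a clean reference and a denoised estimate; the formulas follow one common set of definitions in EEG-denoising papers (which may differ in detail from this study's), and the signals are synthetic.

```python
import numpy as np

def denoising_metrics(clean, denoised):
    """SAR (dB), MAE, RRMSE and CC under common definitions."""
    residual = denoised - clean                       # remaining artifact + distortion
    sar = 10 * np.log10(np.var(clean) / np.var(residual))
    mae = np.mean(np.abs(residual))
    rrmse = np.sqrt(np.mean(residual ** 2)) / np.sqrt(np.mean(clean ** 2))
    cc = np.corrcoef(clean, denoised)[0, 1]
    return sar, mae, rrmse, cc

fs = 250
t = np.arange(0, 10, 1 / fs)
clean = np.sin(2 * np.pi * 10 * t)                           # toy EEG component
contaminated = clean + 2.0 * np.sin(2 * np.pi * 0.5 * t)     # slow blink-like artifact
denoised = contaminated - 1.9 * np.sin(2 * np.pi * 0.5 * t)  # imperfect removal
print(denoising_metrics(clean, denoised))
```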


Subject(s)
Artifacts , Wavelet Analysis , Humans , Electrooculography/methods , Eye Movements , Electroencephalography/methods , Algorithms , Signal Processing, Computer-Assisted