Results 1 - 20 of 59
1.
Cogn Neurodyn ; 18(3): 863-875, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38826642

ABSTRACT

The human brain can perform Facial Expression Recognition (FER) effectively from only a few samples by exploiting its cognitive ability. Unlike the human brain, however, even a well-trained deep neural network is data-dependent and lacks cognitive ability. To tackle this challenge, this paper proposes a novel framework, Brain Machine Generative Adversarial Networks (BM-GAN), which uses the brain's cognitive ability to guide a Convolutional Neural Network to generate LIKE-electroencephalograph (EEG) features. More specifically, we first obtain EEG signals triggered by facial emotion images, then adopt BM-GAN to carry out the mutual generation of image visual features and EEG cognitive features. BM-GAN uses the cognitive knowledge learned from EEG signals to teach the model to perceive LIKE-EEG features, and thereby achieves superior performance for FER, much as the human brain does. The proposed model consists of VisualNet, EEGNet, and BM-GAN: VisualNet extracts image visual features from facial emotion images, EEGNet extracts EEG cognitive features from EEG signals, and BM-GAN completes the mutual generation of the two feature types. Finally, the predicted LIKE-EEG features of test images are used for FER. After learning, without any EEG signals involved, an average classification accuracy of 96.6% is obtained on the Chinese Facial Affective Picture System dataset using LIKE-EEG features for FER. Experiments demonstrate that the proposed method produces excellent FER performance.

2.
J Neural Eng ; 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38866001

ABSTRACT

OBJECTIVE: Electroencephalography (EEG) signals are promising biometrics owing to their invisibility, suiting application scenarios with high security requirements. However, it is challenging to extract EEG identity features without interference from device and subject-state differences across sessions. Existing methods treat the training sessions as a single domain and are thus affected by the differing data distributions among sessions. Although most multi-source unsupervised domain adaptation (MUDA) methods bridge the domain gap between each source domain and the target domain individually, the relationships among the domain-invariant features of each distribution alignment are neglected. APPROACH: In this paper, we propose a MUDA method, the Tensorized Spatial-Frequency Attention Network (TSFAN), to improve target-domain performance for EEG-based biometric recognition. Specifically, significant relationships among domain-invariant features are modeled via a tensorized attention mechanism. It jointly incorporates appropriate common spatial-frequency representations of pairwise source-target domains as well as cross-source domains, without being affected by the distribution discrepancy among source domains. Additionally, considering the curse of dimensionality, TSFAN is approximately represented in Tucker format. Benefiting from the low-rank Tucker network, TSFAN scales linearly in the number of domains, giving us the flexibility to extend it to an arbitrary number of sessions. MAIN RESULTS: Extensive experiments on representative benchmarks demonstrate the effectiveness of TSFAN in EEG-based biometric recognition, outperforming state-of-the-art approaches under cross-session validation. SIGNIFICANCE: The proposed TSFAN investigates the presence of consistent EEG identity features across sessions. This is achieved with a novel tensorized attention mechanism that combines intra-source transferable information with inter-source interactions while remaining unaffected by domain shifts across multiple source domains. Furthermore, electrode selection shows that EEG-based identity features are distributed across brain regions, and 20 electrodes based on the 10-20 standard system suffice to extract stable identity information.
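
The Tucker format mentioned above can be sketched for a generic third-order weight tensor as follows (this is the standard decomposition, not necessarily the paper's exact parameterization):

```latex
\mathcal{W}\;\approx\;\mathcal{G}\times_1 U^{(1)}\times_2 U^{(2)}\times_3 U^{(3)},
\qquad
w_{ijk}\;\approx\;\sum_{p=1}^{r_1}\sum_{q=1}^{r_2}\sum_{s=1}^{r_3}
g_{pqs}\,u^{(1)}_{ip}\,u^{(2)}_{jq}\,u^{(3)}_{ks}.
```

Storage drops from IJK entries to r1·r2·r3 + I·r1 + J·r2 + K·r3, which is why a fixed-rank Tucker network can grow linearly as domains (sessions) are added.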

3.
Med Biol Eng Comput ; 2024 May 03.
Article in English | MEDLINE | ID: mdl-38700614

ABSTRACT

Electroencephalogram (EEG) signals originate in the central nervous system and are inherently difficult to camouflage, which accounts for the recent popularity of EEG-based emotion recognition. However, due to the non-stationary nature of EEG, inter-subject variability makes it hard for recognition models to adapt to different subjects. In this paper, we propose a novel approach called semi-supervised bipartite graph construction with active EEG sample selection (SBGASS) for cross-subject emotion recognition, which offers two significant advantages. First, SBGASS adaptively learns a bipartite graph to characterize the underlying relationships between labeled and unlabeled EEG samples, effectively establishing semantic connections between samples from different subjects. Second, we employ an active sample selection technique to reduce the impact of negative samples (outliers or noise in the data) on bipartite graph construction. From the experimental results on the SEED-IV data set, we gained three insights. (1) SBGASS actively rejects negative labeled samples, which mitigates their impact when constructing the optimal bipartite graph and improves model performance. (2) Through the learned optimal bipartite graph, the transferability of labeled EEG samples is quantitatively analyzed; it exhibits a decreasing tendency as the distance between each labeled sample and its class centroid increases. (3) Besides the improved recognition accuracy, the spatial-frequency patterns involved in emotion recognition are investigated via the acquired projection matrix.

4.
Article in English | MEDLINE | ID: mdl-38709613

ABSTRACT

Accurately decoding finger motor imagery is essential for fine motor control using EEG signals. However, decoding finger motor imagery is particularly challenging compared with ordinary motor imagery. This paper proposes a novel EEG decoding method combining feature-dependent frequency band selection, feature fusion, and ensemble learning (DSFE) for finger motor imagery. First, a feature-dependent frequency band selection method based on the correlation coefficient (FDCC) is proposed to select effective bands specific to each feature. Second, a feature fusion method is proposed to fuse different types of candidate features into multiple refined sets of decoding features. Finally, an ensemble model using a weighted voting strategy is proposed to make full use of these diverse sets of final features. Results on a public EEG dataset of five-finger motor imagery show that DSFE is effective and achieves a decoding accuracy of 50.64%, which is 7.64% higher than existing studies using exactly the same data. The experiments further reveal that in finger motor imagery, the effective frequency bands differ both across subjects and across feature types. Furthermore, compared with two-hand motor imagery, the effective decoding information of finger motor imagery shifts to lower frequencies. The ideas and findings in this paper provide a valuable perspective for understanding fine motor imagery in depth.
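
As an illustration of the band-selection idea, the sketch below ranks candidate frequency bands by the absolute Pearson correlation between a scalar feature and the class labels. The function and data names are hypothetical, and the paper's FDCC pipeline operates on real EEG features rather than this toy data.

```python
import math

def pearson_r(x, y):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def select_bands(band_features, labels, top_k=1):
    """Rank frequency bands by |r| between a scalar feature and the labels.

    band_features: dict band_name -> list of per-trial feature values.
    Returns the top_k band names for this feature type.
    """
    scored = sorted(
        band_features,
        key=lambda b: abs(pearson_r(band_features[b], labels)),
        reverse=True,
    )
    return scored[:top_k]

# Toy example: the 'beta' feature tracks the labels, 'delta' does not.
labels = [0, 1, 0, 1, 0, 1]
bands = {
    "delta": [0.5, 0.4, 0.6, 0.5, 0.4, 0.6],
    "beta":  [0.1, 0.9, 0.2, 0.8, 0.1, 0.9],
}
print(select_bands(bands, labels))  # ['beta']
```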

5.
Health Inf Sci Syst ; 12(1): 9, 2024 Dec.
Article in English | MEDLINE | ID: mdl-38375134

ABSTRACT

The electroencephalograph (EEG) has been a reliable data source for building brain-computer interface (BCI) systems; however, it is not reasonable to perform recognition directly on the feature vector extracted from multiple EEG channels and frequency bands, due to two deficiencies. One is that EEG data are weak and non-stationary, so different EEG samples can easily differ in quality. The other is that different feature dimensions, corresponding to different brain regions and frequency bands, correlate differently with a given mental task, which has not been sufficiently investigated. To this end, a Joint Sample and Feature importance Assessment (JSFA) model is proposed to simultaneously explore the differing impacts of EEG samples and features in mental state recognition; sample importance is handled with a self-paced learning technique, while feature importance is handled with a feature self-weighting technique. The efficacy of JSFA is extensively evaluated on two EEG data sets, SEED-IV and SEED-VIG: a classification task for emotion recognition and a regression task for driving fatigue detection, respectively. Experimental results demonstrate that JSFA effectively identifies the importance of different EEG samples and features, enhancing the recognition performance of the corresponding BCI systems.
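
The sample-importance side of such a model can be illustrated with the classic hard self-paced learning rule (Kumar et al.); this is a minimal sketch, not JSFA's actual regularizer.

```python
def self_paced_weights(losses, age):
    """Hard self-paced weighting: keep samples whose loss is below the age
    threshold, discard the rest. As `age` grows, harder samples enter
    training. This is the classic rule; JSFA's exact formulation may differ."""
    return [1.0 if l < age else 0.0 for l in losses]

losses = [0.1, 0.8, 0.3, 2.0]
print(self_paced_weights(losses, age=0.5))  # [1.0, 0.0, 1.0, 0.0]
print(self_paced_weights(losses, age=1.0))  # [1.0, 1.0, 1.0, 0.0]
```

Raising the age parameter over training admits progressively harder samples, which is what lets the model discount low-quality EEG trials early on.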

6.
Article in English | MEDLINE | ID: mdl-37995161

ABSTRACT

Electroencephalography (EEG)-based motor imagery (MI) is one of the main brain-computer interface (BCI) paradigms, aiming to build a direct communication pathway between the human brain and external devices by decoding brain activity. Traditionally, an MI BCI relies on a single brain, which suffers from limitations such as low accuracy and weak stability. To alleviate these limitations, multi-brain BCIs have emerged, integrating the intelligence of multiple individuals. Nevertheless, existing decoding methods mainly use linear averaging or feature-integration learning on multi-brain EEG data and do not effectively utilize coupling-relationship features, resulting in unsatisfactory decoding accuracy. To overcome these challenges, we propose an EEG-based multi-brain MI decoding method that uses coupling feature extraction and few-shot learning to capture coupling relationships among multiple brains from only limited EEG data. We collected EEG data from multiple persons engaged in the same task simultaneously and compared the methods on the collected data. Our proposed method improved performance by 14.23% over the single-brain mode in the 10-shot three-class decoding task, demonstrating the method's effectiveness and its usability when only a small amount of EEG data is available.


Subjects
Brain-Computer Interfaces, Imagination, Humans, Electroencephalography/methods, Brain, Algorithms
7.
Cogn Neurodyn ; 17(5): 1271-1281, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37786664

ABSTRACT

The electroencephalogram (EEG) has become popular in emotion recognition for its capability to selectively reflect real emotional states. Existing graph-based methods have made progress in representing pairwise spatial relationships but leave out higher-order relationships among EEG channels and within EEG series. Constructing a hypergraph is a general way of representing such higher-order relations. In this paper, we propose a spatial-temporal hypergraph convolutional network (STHGCN) to capture the higher-order relationships present in EEG recordings. STHGCN is a two-block hypergraph convolutional network in which feature hypergraphs are constructed over the spectral, spatial, and temporal domains to explore correlations under specific emotional states, namely the correlations among EEG channels and the dynamic relationships among time stamps. Moreover, a self-attention mechanism is combined with the hypergraph convolutional network to initialize and update the relationships of EEG series. The experimental results demonstrate that the constructed feature hypergraphs effectively capture the correlations among valuable EEG channels and within valuable EEG series, yielding the best emotion recognition accuracy among graph methods. In addition, compared with other competitive methods, the proposed method achieves state-of-the-art results on the SEED and SEED-IV datasets.
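
Hypergraph convolution is commonly written in the following form (the standard HGNN propagation rule of Feng et al.; STHGCN's two blocks build on this kind of layer, with the attention mechanism layered on top):

```latex
X^{(l+1)} \;=\; \sigma\!\left( D_v^{-1/2}\, H\, W\, D_e^{-1}\, H^{\top}\, D_v^{-1/2}\, X^{(l)}\, \Theta^{(l)} \right)
```

where H is the node-hyperedge incidence matrix, W holds the hyperedge weights, D_v and D_e are the node and hyperedge degree matrices, and Θ^(l) is the layer's learnable weight matrix. Because H can connect more than two nodes per hyperedge, one propagation step mixes information across whole groups of channels or time stamps rather than pairs.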

8.
J Neurosci Methods ; 395: 109909, 2023 07 15.
Article in English | MEDLINE | ID: mdl-37399992

ABSTRACT

BACKGROUND: A common but easily overlooked problem, affective overlap, has not received enough attention in electroencephalogram (EEG)-based emotion recognition research. In real life, affective overlap means that a person's current emotional state is sometimes easily influenced by his or her recent mood. In stimulus-evoked EEG collection experiments, the short rest interval between consecutive trials and the inner mechanisms of neural responses mean that subjects cannot switch emotional states easily and quickly, which can lead to affective overlap. For example, we might still be somewhat sad while watching a comedy because we just saw a tragedy. In pattern recognition terms, affective overlap usually means that there is feature-label inconsistency in the EEG data. NEW METHODS: To alleviate the impact of inconsistent EEG data, we introduce a variable to adaptively capture sample inconsistency during emotion recognition model development. We then propose a semi-supervised emotion recognition model for joint sample inconsistency and feature importance exploration (SIFIAE), together with an efficient optimization method. RESULTS: Extensive experiments on the SEED-V dataset demonstrate the effectiveness of SIFIAE. Specifically, SIFIAE achieves average accuracies of 69.10%, 67.01%, 71.50%, 73.26%, 72.07% and 71.35% in six cross-session emotion recognition tasks. CONCLUSION: The sample weights show a rising trend at the beginning of most trials, which coincides with the affective overlap hypothesis. The feature importance factor indicates that the critical bands and channels are more pronounced than in models that ignore EEG feature-label inconsistency.


Subjects
Emotions, Recognition, Psychology, Humans, Male, Female, Emotions/physiology, Affect, Electroencephalography/methods
9.
Math Biosci Eng ; 20(6): 11379-11402, 2023 04 27.
Article in English | MEDLINE | ID: mdl-37322987

ABSTRACT

Electroencephalogram (EEG) signals are widely used in emotion recognition because they are resistant to camouflage and contain abundant physiological information. However, EEG signals are non-stationary and have a low signal-to-noise ratio, making them harder to decode than modalities such as facial expression and text. In this paper, we propose a model termed semi-supervised regression with adaptive graph learning (SRAGL) for cross-session EEG emotion recognition, which has two merits. On one hand, the emotional label information of unlabeled samples is jointly estimated with the other model variables by semi-supervised regression. On the other hand, SRAGL adaptively learns a graph depicting the connections among EEG data samples, which further facilitates the label estimation process. From the experimental results on the SEED-IV data set, we draw the following insights. 1) SRAGL achieves superior performance compared to state-of-the-art algorithms, with average accuracies of 78.18%, 80.55%, and 81.90% in the three cross-session emotion recognition tasks. 2) As the iterations proceed, SRAGL converges quickly and gradually optimizes the emotion metric of the EEG samples, finally yielding a reliable similarity matrix. 3) From the learned regression projection matrix we obtain each EEG feature's contribution, enabling us to automatically identify critical frequency bands and brain regions for emotion recognition.


Subjects
Brain, Emotions, Emotions/physiology, Brain/physiology, Algorithms, Learning, Electroencephalography
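
The graph-based label estimation at the heart of such models can be illustrated with plain label propagation on a fixed similarity graph; SRAGL additionally learns the graph jointly with the labels, which this sketch omits.

```python
def mat_mul(S, F):
    """Multiply matrix S (n x n) by matrix F (n x c), plain nested lists."""
    n, c = len(S), len(F[0])
    return [[sum(S[i][k] * F[k][j] for k in range(n)) for j in range(c)]
            for i in range(n)]

def propagate_labels(S, Y, n_iter=100, alpha=0.9):
    """Graph-based label propagation: diffuse soft labels along the
    row-normalized similarity graph S while pulling labeled rows back
    toward their seed labels Y (zero rows mark unlabeled samples)."""
    F = [row[:] for row in Y]
    for _ in range(n_iter):
        SF = mat_mul(S, F)
        F = [[alpha * SF[i][j] + (1 - alpha) * Y[i][j] for j in range(len(Y[0]))]
             for i in range(len(Y))]
    return [max(range(len(row)), key=row.__getitem__) for row in F]

# Samples 0 and 1 carry labels (classes 0 and 1); samples 2 and 3 are unlabeled.
S = [[0.0, 0.0, 1.0, 0.0],   # sample 0 is closest to sample 2
     [0.0, 0.0, 0.0, 1.0],   # sample 1 is closest to sample 3
     [0.9, 0.0, 0.0, 0.1],
     [0.0, 0.9, 0.1, 0.0]]
Y = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0]]
print(propagate_labels(S, Y))  # [0, 1, 0, 1]
```

Each unlabeled sample inherits the label of the labeled samples it is most strongly connected to, which is exactly why a well-learned similarity matrix matters.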
10.
Cogn Neurodyn ; 17(3): 671-680, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37265659

ABSTRACT

In recent years, emotion recognition using physiological signals has become a popular research topic, since physiological signals reflect an individual's real emotional state. Multimodal signals provide more discriminative information than a single modality, which has aroused the interest of researchers. However, current studies on multimodal emotion recognition normally adopt a one-stage fusion method, which overlooks cross-modal interaction. To solve this problem, we propose a multi-stage multimodal dynamical fusion network (MSMDFN) that obtains a joint representation based on cross-modal correlation. Initially, the latent and essential interactions among features extracted independently from multiple modalities are explored. Subsequently, the multi-stage fusion network splits the fusion procedure into multiple stages using the correlations observed before, allowing much more fine-grained unimodal, bimodal, and trimodal intercorrelations to be exploited. The MSMDFN was evaluated on the multimodal benchmark DEAP; the experiments indicate that our method outperforms related one-stage multimodal emotion recognition works.

11.
Comput Methods Programs Biomed ; 238: 107593, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37209578

ABSTRACT

BACKGROUND AND OBJECTIVE: Extracting cognitive-representation and computational-representation information simultaneously from electroencephalography (EEG) data, and building a corresponding information interaction model, can effectively improve the recognition of brain cognitive status. However, because of the huge gap between the two types of information, existing studies have yet to exploit the advantages of their interaction. METHODS: This paper introduces a novel architecture named the bidirectional interaction-based hybrid network (BIHN) for EEG cognitive recognition. BIHN consists of two networks: a cognition-based network named CogN (e.g., a graph convolution network, GCN, or a capsule network, CapsNet) and a computing-based network named ComN (e.g., EEGNet). CogN extracts cognitive-representation features from EEG data, while ComN extracts computational-representation features. Additionally, a bidirectional distillation-based coadaptation (BDC) algorithm is proposed to facilitate information interaction between CogN and ComN, realizing coadaptation of the two networks through bidirectional closed-loop feedback. RESULTS: Cross-subject cognitive recognition experiments were performed on the Fatigue-Awake EEG dataset (FAAD, 2-class classification) and the SEED dataset (3-class classification), with hybrid network pairs GCN + EEGNet and CapsNet + EEGNet. The proposed method achieved average accuracies of 78.76% (GCN + EEGNet) and 77.58% (CapsNet + EEGNet) on FAAD, and 55.38% (GCN + EEGNet) and 55.10% (CapsNet + EEGNet) on SEED, outperforming the hybrid networks without the bidirectional interaction strategy. CONCLUSIONS: BIHN achieves superior performance on two EEG datasets and enhances the ability of both CogN and ComN in EEG processing and cognitive recognition; its effectiveness was validated with different hybrid network pairs. The proposed method could greatly promote the development of brain-computer collaborative intelligence.


Subjects
Algorithms, Brain, Electroencephalography, Cognition
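
The mutual-distillation direction of the BDC idea can be sketched as a symmetric KL term between the two networks' softened predictions; the symmetric form and the temperature used here are illustrative assumptions, not the paper's exact loss.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax over a list of logits."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_div(p, q):
    """KL(p || q) for discrete distributions given as lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def bidirectional_distill_loss(logits_cog, logits_com, T=2.0):
    """A minimal sketch of bidirectional distillation: each network is
    pulled toward the other's softened prediction. BIHN's BDC algorithm
    wraps such a term in a closed-loop training schedule."""
    p = softmax(logits_cog, T)
    q = softmax(logits_com, T)
    return kl_div(p, q) + kl_div(q, p)

print(round(bidirectional_distill_loss([2.0, 0.5], [2.0, 0.5]), 6))  # 0.0
```

The loss vanishes when the two networks already agree and grows with their disagreement, so minimizing it nudges CogN and ComN toward each other from both directions.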
12.
IEEE Trans Pattern Anal Mach Intell ; 45(9): 10703-10717, 2023 09.
Article in English | MEDLINE | ID: mdl-37030724

ABSTRACT

Neural network models in machine learning have shown promising prospects for visual tasks such as facial emotion recognition (FER). However, the generalization of a model trained on a dataset with few samples is limited. Unlike the machine, the human brain can effectively extract the required information from a few samples to complete visual tasks. To learn the brain's generalization ability, in this article we propose a novel brain-machine coupled learning method for facial emotion recognition that lets the neural network learn the visual knowledge of the machine and the cognitive knowledge of the brain simultaneously. The proposed method uses visual images and electroencephalogram (EEG) signals to couple the training of models in the visual and cognitive domains. Each domain model consists of two types of interactive channels, common and private. Since EEG signals reflect brain activity, the cognitive process of the brain is decoded by a model following reverse engineering. By decoding the EEG signals induced by the facial emotion images, the common channel in the visual domain can approach the cognitive process in the cognitive domain. Moreover, the knowledge specific to each domain is captured in each private channel using an adversarial strategy. After learning, without the participation of EEG signals, only the concatenation of both channels in the visual domain is used to classify facial emotion images, based on the visual knowledge of the machine and the cognitive knowledge learned from the brain. Experiments demonstrate that the proposed method produces excellent performance on several public datasets. Further experiments show that the method trained with EEG signals generalizes well to new datasets and can be applied to other network models, illustrating its potential for practical applications.


Subjects
Algorithms, Facial Recognition, Humans, Brain/diagnostic imaging, Emotions, Neural Networks, Computer, Electroencephalography/methods
13.
Brain Sci ; 13(3)2023 Mar 13.
Article in English | MEDLINE | ID: mdl-36979295

ABSTRACT

BACKGROUND: It is crucial to understand the neural feedback mechanisms and the cognitive decision-making of the brain during reward processing. Here, we report the first simultaneous electroencephalography (EEG)-functional magnetic resonance imaging (fMRI) study of a gambling task utilizing tensor decomposition. METHODS: First, the single-subject EEG data are represented as a third-order spectrogram tensor to extract frequency features. Next, the EEG and fMRI data are jointly decomposed into a superposition of multiple sources characterized by space-time-frequency profiles using coupled matrix tensor factorization (CMTF). Finally, graph-structured clustering is used to select the most appropriate model according to four quantitative indices. RESULTS: The results clearly show that not only are the regions of interest (ROIs) reported in other literature activated, but also the olfactory cortex and fusiform gyrus, which are usually ignored. Regions including the orbitofrontal cortex and insula are activated by both winning and losing stimuli. Meanwhile, regions such as the superior orbital frontal gyrus and anterior cingulate cortex are activated by winning stimuli, whereas the inferior frontal gyrus, cingulate cortex, and medial superior frontal gyrus are activated by losing stimuli. CONCLUSION: This work sheds light on the reward-processing process, provides a deeper understanding of brain function, and opens a new avenue in the investigation of neurovascular coupling via CMTF.

14.
Sensors (Basel) ; 23(3)2023 Jan 26.
Article in English | MEDLINE | ID: mdl-36772444

ABSTRACT

Various relations existing in electroencephalogram (EEG) data are significant for EEG feature representation, so graph-based studies focus on extracting the relevance between EEG channels. The shortcoming of existing graph studies is that they consider only a single relationship between EEG electrodes, which results in an incomplete representation of EEG data and relatively low emotion recognition accuracy. In this paper, we propose a fusion graph convolutional network (FGCN) that extracts the various relations existing in EEG data and fuses them to represent EEG data more comprehensively for emotion recognition. First, the FGCN mines brain connection features on topology, causality, and function. Then, we propose a local fusion strategy to fuse these three graphs and fully utilize the channels with strong topological, causal, and functional relations. Finally, a graph convolutional neural network is adopted to better represent EEG data for emotion recognition. Experiments on SEED and SEED-IV demonstrate that fusing different relation graphs is effective for improving emotion recognition: the 3-class and 4-class recognition accuracies are higher than those of other state-of-the-art methods.


Subjects
Emotions, Recognition, Psychology, Brain, Electroencephalography, Neural Networks, Computer
15.
Med Biol Eng Comput ; 61(5): 951-965, 2023 May.
Article in English | MEDLINE | ID: mdl-36662378

ABSTRACT

The visual movement illusion (VMI) is a subjective experience produced by watching a video of the subject's own motion; at the same time, VMI evokes awareness of body ownership. We applied the power spectral density (PSD) matrix and the partial directed correlation (PDC) matrix to build the PPDC matrix for the γ2 band (34-98.5 Hz), combining cerebral cortical and musculomotor cortical complexity with PPDC to quantify the degree of body ownership. Thirty-five healthy subjects were recruited for this experiment. The subjects' electroencephalography (EEG) and surface electromyography (sEMG) data were recorded under resting conditions, observation conditions, illusion conditions, and actual seated front-kick movements. The results show the following: (1) VMI activates the cerebral cortex to some extent; (2) VMI enhances cortical muscle excitability in the rectus femoris and medial vastus muscles; (3) VMI induces a sense of body ownership; (4) PPDC values, muscle fuzzy entropy values, and cerebral cortex fuzzy entropy values can quantify whether VMI induces awareness of body ownership. These results illustrate that PPDC can serve as a biomarker showing that VMI affects changes in the cerebral cortex and as a quantitative tool indicating whether body ownership awareness arises.


Subjects
Illusions, Humans, Electromyography, Illusions/physiology, Ownership, Electroencephalography, Movement/physiology, Lower Extremity, Hand/physiology
16.
Int J Neurosci ; : 1-9, 2022 Jul 14.
Article in English | MEDLINE | ID: mdl-35815432

ABSTRACT

Objective: Stroke is the leading cause of disability worldwide. Traditionally, stroke rehabilitation is assessed by doctors, which can be subjective; therefore, an objective assessment method is required. Methods: We investigated the changes in brain functional connectivity patterns and corticomuscular coupling in stroke patients during rehabilitation. Electroencephalogram (EEG) and electromyogram (EMG) signals of stroke patients were collected synchronously at baseline (BL), two weeks after BL, and four weeks after BL. A brain functional network was established, and the corticomuscular coupling relationship was calculated using phase transfer entropy (PTE). Results: We found that during rehabilitation, the overall connectivity of the brain functional network strengthened and the network characteristic values increased. The average corticomuscular PTE first decreased and subsequently increased, with a significant PTE increase in the frontal lobe. Value: In this study, PTE was used for the first time to analyze the relationship between EEG signals in patients with hemiplegia. We believe our findings contribute to evaluating the rehabilitation of stroke patients with hemiplegia.
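
Transfer entropy itself can be estimated from symbol counts; PTE applies the same estimator to binned instantaneous phases (obtained, e.g., via the Hilbert transform). Below is a minimal plug-in sketch with the phase-extraction and binning steps omitted.

```python
from collections import Counter
from math import log

def transfer_entropy(x, y):
    """Plug-in transfer entropy TE(X -> Y) for symbol sequences:
    how much knowing x[t] reduces uncertainty about y[t+1] beyond y[t]."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_next, y_now, x_now)
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    singles_y = Counter(y[:-1])
    n = len(y) - 1
    te = 0.0
    for (yn, yc, xc), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(yc, xc)]        # p(y_next | y_now, x_now)
        p_cond_y = pairs_yy[(yn, yc)] / singles_y[yc]  # p(y_next | y_now)
        te += p_joint * log(p_cond_full / p_cond_y)
    return te

# Y copies X with a one-step lag, so information flows X -> Y only.
x = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0] * 8
y = [0] + x[:-1]
print(transfer_entropy(x, y) > transfer_entropy(y, x))  # True
```

The asymmetry of the estimate is what makes PTE a directed coupling measure, unlike coherence-style metrics.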

17.
Front Psychiatry ; 13: 928781, 2022.
Article in English | MEDLINE | ID: mdl-35898631

ABSTRACT

Electroencephalogram (EEG)-based tools for brain functional connectivity (FC) analysis and visualization play an important role in evaluating brain cognitive function. However, existing FC analysis tools visualize only in two dimensions (2D), are highly prone to visual clutter, and cannot dynamically reflect changes in brain connectivity over time. We therefore design and implement an EEG-based FC visualization framework, named EEG-FCV, for brain cognitive state evaluation. EEG-FCV is composed of three parts: a Data Processing module, a Connectivity Analysis module, and a Visualization module. FC is visualized in three dimensions (3D) using three existing metrics: the Pearson Correlation Coefficient (PCC), Coherence, and PLV. Furthermore, a novel metric named Comprehensive is proposed to address the visual clutter problem. EEG-FCV can also visualize brain FC changes dynamically over time. Experimental results on two available datasets show that EEG-FCV not only produces results consistent with existing studies on brain FC but can also reflect brain FC changes dynamically over time. We believe EEG-FCV could prompt further progress in brain cognitive function evaluation.

18.
IEEE J Biomed Health Inform ; 26(10): 5085-5096, 2022 10.
Article in English | MEDLINE | ID: mdl-35881606

ABSTRACT

Functional corticomuscular coupling (FCMC) between the cerebral motor cortex and muscle activity reflects multi-layer, nonlinear interactions in the sensorimotor system. Considering the inherent multiscale characteristics of physiological signals, we propose multiscale transfer spectral entropy (MSTSE) and introduce the unidirectionally coupled Hénon-map model to verify its effectiveness. We recorded electroencephalogram (EEG) and surface electromyography (sEMG) signals during steady-state grip tasks from 29 healthy participants and 27 patients. We then used MSTSE to analyze FCMC based on the EEG of the bilateral motor areas and the sEMG of the flexor digitorum superficialis (FDS). The results show that MSTSE is superior to the transfer spectral entropy (TSE) method in suppressing spurious coupling and detecting coupling more accurately. Coupling strength was higher in the ß1, ß2, and γ2 bands; it was highest in the ß1 band, reaching its maximum at scales 22-30. Regarding directionality, the coupling strength in the EEG→sEMG direction exceeded that of the opposite direction in most cases. In addition, the coupling strength of the stroke-affected side was lower than that of healthy controls' right hand in the ß1 and ß2 bands, and lower than the stroke-unaffected side in the ß1 band; in the sEMG→EEG direction of the γ2 band, the stroke-affected side was higher than both the stroke-unaffected side and the right hand of healthy controls. This study provides a new perspective and lays a foundation for analyzing FCMC and motor dysfunction.


Subjects
Motor Cortex, Stroke, Electroencephalography/methods, Electromyography/methods, Entropy, Humans, Motor Cortex/physiology, Muscle, Skeletal/physiology
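
The multiscale part of MSTSE rests on the standard coarse-graining step shared by multiscale entropy methods; only that step is shown in this minimal sketch.

```python
def coarse_grain(signal, scale):
    """Standard coarse-graining used by multiscale entropy methods:
    average consecutive non-overlapping windows of length `scale`.
    MSTSE then computes transfer spectral entropy on the coarse-grained
    series at each scale."""
    n = len(signal) // scale
    return [sum(signal[i * scale:(i + 1) * scale]) / scale for i in range(n)]

print(coarse_grain([1, 2, 3, 4, 5, 6], 2))     # [1.5, 3.5, 5.5]
print(coarse_grain([1, 2, 3, 4, 5, 6, 7], 3))  # [2.0, 5.0]
```

Scanning the scale parameter (e.g., up to the 22-30 range reported above) is what lets the method locate the time scale at which coupling peaks.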
19.
Cogn Neurodyn ; 16(4): 859-870, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35847542

ABSTRACT

With the popularity of smartphones and the spread of mobile apps, people, especially the young, spend more and more time interacting with a diversity of apps on their smartphones. This raises a question: how do people allocate attention to app interfaces while using them? To address this question, we designed an experiment with two sessions (Session 1: browsing original interfaces; Session 2: browsing interfaces after removal of colors and background) integrated with an eye-tracking system. Fixation durations were recorded by an eye-tracker while participants browsed app interfaces, and the smartphone screen was divided into four even regions for analysis. The results revealed that participants gave significantly longer total fixation duration to the bottom-left region compared to other regions in Session 1. The longer total fixation duration on the bottom was preserved in Session 2, but with no significant difference between the left and right sides. Similar to the total fixation duration, the first fixation duration also fell predominantly on the bottom area of the interface. Moreover, skill in mobile phone use was quantified by assessing familiarity and accuracy of phone operation and examined in association with the fixation durations. We found that the first fixation duration on the bottom-left region was significantly negatively correlated with smartphone operation level in Session 1, but not in Session 2. As for ratios, the ratio of first fixation duration to total fixation duration did not differ significantly between areas of interest in either session. These findings provide insights into attention allocation during app browsing and have implications for the design of app interfaces and advertisements, since layouts can be optimized according to attention allocation to deliver information maximally.

20.
Front Hum Neurosci ; 16: 866118, 2022.
Article in English | MEDLINE | ID: mdl-35669201

ABSTRACT

Human errors are widely considered among the major causes of road accidents, and it is estimated that more than 90% of vehicle crashes causing fatal and permanent injuries are directly related to mental tiredness, fatigue, and drowsiness of drivers. Driving drowsiness is recognized as a crucial aspect of road safety, since drowsy drivers can suddenly lose control of the car, and drowsiness episodes mostly appear suddenly without prior behavioral evidence. The present study aimed to characterize the onset of drowsiness in car drivers by means of a multimodal neurophysiological approach, developing a synthetic electroencephalographic (EEG)-based index able to detect drowsy events. The study involved 19 participants in a simulated scenario structured as a sequence of driving tasks under different situations and traffic conditions, with the experimental conditions designed to induce prominent mental drowsiness in the final part. The EEG-based index, the so-called "MDrow index", was developed and validated to detect the participants' driving drowsiness. The MDrow index was derived from the Global Field Power calculated in the Alpha EEG frequency band over the parietal brain sites. The results demonstrated the reliability of the proposed MDrow index in detecting driving drowsiness, proving more sensitive and timely than more conventional autonomic parameters, such as the Eye Blink Rate and Heart Rate Variability, and than subjective self-reports.
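
Global Field Power has a simple closed form: the standard deviation across channels at each time point (Lehmann and Skrandies). A minimal sketch follows; the band-pass filtering to the Alpha band and the restriction to parietal sites used for the MDrow index are omitted here.

```python
import math

def global_field_power(samples):
    """Global Field Power: the standard deviation of the scalp map
    (all channel values) at each time point."""
    gfp = []
    for channels in samples:  # one row = all channel values at time t
        m = sum(channels) / len(channels)
        gfp.append(math.sqrt(sum((v - m) ** 2 for v in channels) / len(channels)))
    return gfp

# Two time points over four channels: a flat map gives 0, a spread map is positive.
print(global_field_power([[1.0, 1.0, 1.0, 1.0], [1.0, -1.0, 1.0, -1.0]]))  # [0.0, 1.0]
```

A high GFP marks moments when the scalp potential field is strongly structured, which is why band-limited GFP over parietal sites can serve as a compact drowsiness marker.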
