Results 1 - 4 of 4
1.
J Neural Eng ; 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38866001

ABSTRACT

OBJECTIVE: Electroencephalography (EEG) signals are promising biometrics owing to their invisibility, making them suited to application scenarios with high security requirements. However, it is challenging to extract EEG identity features without interference from device and subject-state differences across sessions. Existing methods treat the training sessions as a single domain and are therefore affected by the differing data distributions among sessions. Although most multi-source unsupervised domain adaptation (MUDA) methods bridge the gap between each source domain and the target domain individually, the relationships among the domain-invariant features produced by each distribution alignment are neglected. APPROACH: In this paper, we propose a MUDA method, the Tensorized Spatial-Frequency Attention Network (TSFAN), to improve target-domain performance for EEG-based biometric recognition. Specifically, the significant relationships among domain-invariant features are modeled via a tensorized attention mechanism, which jointly incorporates appropriate common spatial-frequency representations not only of pairwise source-target domains but also of cross-source domains, without being affected by the distribution discrepancy among source domains. Additionally, to address the curse of dimensionality, TSFAN is approximately represented in Tucker format. Benefiting from the low-rank Tucker network, TSFAN scales linearly with the number of domains, giving us the flexibility to extend it to an arbitrary number of sessions. MAIN RESULTS: Extensive experiments on representative benchmarks demonstrate the effectiveness of TSFAN for EEG-based biometric recognition, outperforming state-of-the-art approaches under cross-session validation. SIGNIFICANCE: The proposed TSFAN investigates the presence of consistent EEG identity features across sessions. This is achieved through a novel tensorized attention mechanism that combines intra-source transferable information with inter-source interactions while remaining unaffected by domain shifts among the multiple source domains. Furthermore, electrode selection shows that cross-session EEG identity features are distributed across brain regions, and that 20 electrodes of the 10-20 standard system suffice to extract stable identity information.
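As a rough illustration of the tensorized attention idea described above, the sketch below builds a third-order attention weight tensor in Tucker format and contracts it with per-domain feature vectors, so the parameter count grows linearly with the number of domains rather than with their product. All shapes, ranks, and the scoring formula are assumptions for illustration, not the authors' implementation (Python/NumPy).

```python
# Rough sketch of a Tucker-factorized attention weight tensor; shapes, ranks,
# and the scoring formula are illustrative assumptions, not the TSFAN code.
import numpy as np

rng = np.random.default_rng(0)

n_domains, d = 4, 64          # e.g. 3 source sessions + 1 target, 64-dim features
r1, r2, r3 = 8, 8, 4          # Tucker ranks (much smaller than d)

# Tucker factors of the weight tensor W ~ G x1 U1 x2 U2 x3 U3 (d x d x n_domains)
G  = rng.standard_normal((r1, r2, r3))
U1 = rng.standard_normal((d, r1))
U2 = rng.standard_normal((d, r2))
U3 = rng.standard_normal((n_domains, r3))

# One domain-invariant feature vector per domain
F = rng.standard_normal((n_domains, d))

# score[i, j] = F[i] @ W[:, :, j] @ F[j], computed without ever forming W,
# so storage grows as O(d*r + n_domains*r + r^3) instead of O(d*d*n_domains).
left   = F @ U1                                   # (n_domains, r1)
right  = F @ U2                                   # (n_domains, r2)
core   = np.einsum('abc,jc->abj', G, U3)          # (r1, r2, n_domains)
scores = np.einsum('ia,abj,jb->ij', left, core, right)

# Row-wise softmax gives the attention of each domain over all domains
scores -= scores.max(axis=1, keepdims=True)
attn = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
print(attn.shape)                                 # (4, 4)
```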

2.
IEEE Trans Pattern Anal Mach Intell ; 45(9): 10703-10717, 2023 09.
Article in English | MEDLINE | ID: mdl-37030724

ABSTRACT

Neural network models have shown promising prospects for visual tasks such as facial emotion recognition (FER). However, the generalization of models trained on datasets with few samples is limited. Unlike machines, the human brain can effectively extract the required information from a few samples to complete visual tasks. To learn this generalization ability from the brain, in this article we propose a novel brain-machine coupled learning method for facial emotion recognition that lets the neural network learn the visual knowledge of the machine and the cognitive knowledge of the brain simultaneously. The proposed method uses visual images and electroencephalogram (EEG) signals to jointly train models in the visual and cognitive domains. Each domain model consists of two types of interactive channels, common and private. Since EEG signals reflect brain activity, the cognitive process of the brain is decoded by a model through reverse engineering. By decoding the EEG signals evoked by the facial emotion images, the common channel in the visual domain can approach the cognitive process in the cognitive domain. Moreover, the knowledge specific to each domain is captured in each private channel using an adversarial strategy. After training, without any EEG signals, only the concatenation of the two channels in the visual domain is used to classify facial emotion images, drawing on both the visual knowledge of the machine and the cognitive knowledge learned from the brain. Experiments demonstrate that the proposed method achieves excellent performance on several public datasets. Further experiments show that the model trained with EEG signals generalizes well to new datasets and can be applied to other network models, illustrating its potential for practical applications.
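The common/private channel coupling described above can be sketched roughly as follows: a visual-domain model and a cognitive (EEG) domain model each split into a common and a private branch, the two common branches are pulled together by an alignment loss during training, and only the visual branches are used at test time. The module names, dimensions, and losses below are assumptions, not the paper's architecture; the adversarial term for the private channels is omitted for brevity (Python/PyTorch).

```python
# Assumption-laden sketch of "common/private" channel coupling: a visual model
# and a cognitive (EEG) model each split into a common and a private branch;
# the common branches are aligned during training, and only the visual model
# is used at test time. Names, dimensions and losses are illustrative; the
# adversarial term for the private channels is omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainModel(nn.Module):
    def __init__(self, in_dim, feat_dim=64, n_classes=7):
        super().__init__()
        self.common  = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.private = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.head    = nn.Linear(2 * feat_dim, n_classes)

    def forward(self, x):
        c, p = self.common(x), self.private(x)
        return c, p, self.head(torch.cat([c, p], dim=1))

visual    = DomainModel(in_dim=512)    # e.g. image features
cognitive = DomainModel(in_dim=310)    # e.g. EEG features

img_feat = torch.randn(8, 512)
eeg_feat = torch.randn(8, 310)
labels   = torch.randint(0, 7, (8,))

vc, vp, v_logits = visual(img_feat)
cc, cp, c_logits = cognitive(eeg_feat)

loss = (F.cross_entropy(v_logits, labels)       # visual-domain classification
        + F.cross_entropy(c_logits, labels)     # cognitive-domain classification
        + F.mse_loss(vc, cc))                   # pull visual-common toward EEG-common

# At inference, only the visual model is needed (no EEG signals required)
with torch.no_grad():
    _, _, pred = visual(img_feat)
    print(pred.argmax(dim=1))
```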


Subject(s)
Algorithms , Facial Recognition , Humans , Brain/diagnostic imaging , Emotions , Neural Networks, Computer , Electroencephalography/methods
3.
Article in English | MEDLINE | ID: mdl-33147145

ABSTRACT

Brainprint is a new type of biometric in the form of EEG that is directly linked to intrinsic identity. Currently, most methods for brainprint recognition are based on traditional machine learning and focus on only a single brain-cognition task. Owing to its ability to extract high-level features and latent dependencies, deep learning can effectively overcome the limitation to specific tasks, but it requires numerous samples for model training. Brainprint recognition in realistic scenes, with many individuals and only a small number of samples per class, is therefore challenging for deep learning. This article proposes a Convolutional Tensor-Train Neural Network (CTNN) for multi-task brainprint recognition with a small number of training samples. First, local temporal and spatial features of the brainprint are extracted by a convolutional neural network (CNN) with a depthwise separable convolution mechanism. Afterwards, we implement a TensorNet (TN) via low-rank representation to capture the multilinear intercorrelations, integrating the local information into a global representation with very few parameters. The experimental results indicate that CTNN achieves recognition accuracy above 99% on all four datasets, exploits the brainprint across multiple tasks efficiently, and scales well with the number of training samples. Additionally, our method provides an interpretable biomarker, showing that seven specific channels dominate the recognition tasks.
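A minimal sketch of the two building blocks named in this abstract, a depthwise separable convolution and a low-rank, tensor-train-style factorization of the final dense layer, is given below. Shapes, ranks, and the pooling step are illustrative assumptions, not the CTNN configuration (Python/PyTorch).

```python
# Illustrative sketch (not the CTNN implementation) of the two ingredients the
# abstract names: a depthwise separable convolution for local spatio-temporal
# EEG features and a two-core, tensor-train-style factorization of the dense
# layer that would otherwise dominate the parameter count. Shapes are assumed.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, k=7):
        super().__init__()
        # depthwise: one temporal filter per input map, then pointwise mixing
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=(1, k),
                                   padding=(0, k // 2), groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class TTLinear(nn.Module):
    """Dense layer (m1*m2 -> n1*n2) stored as two small tensor cores."""
    def __init__(self, m=(16, 16), n=(8, 8), rank=4):
        super().__init__()
        self.m, self.n = m, n
        self.core1 = nn.Parameter(torch.randn(m[0], n[0], rank) * 0.02)
        self.core2 = nn.Parameter(torch.randn(rank, m[1], n[1]) * 0.02)

    def forward(self, x):                         # x: (batch, m1*m2)
        b = x.shape[0]
        x = x.view(b, *self.m)                    # (batch, m1, m2)
        y = torch.einsum('bik,iar,rkc->bac', x, self.core1, self.core2)
        return y.reshape(b, self.n[0] * self.n[1])

# Toy forward pass: 8 feature maps over a (20 electrodes x 125 samples) window
x = torch.randn(2, 8, 20, 125)
h = DepthwiseSeparableConv(8, 16)(x)              # (2, 16, 20, 125)
h = nn.AdaptiveAvgPool2d((4, 4))(h).flatten(1)    # (2, 256)
out = TTLinear(m=(16, 16), n=(8, 8), rank=4)(h)   # (2, 64)
# The two cores hold 16*8*4 + 4*16*8 = 1024 weights vs 256*64 = 16384 for a
# full dense layer -- the parameter saving the abstract alludes to.
print(out.shape)
```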


Subject(s)
Machine Learning , Neural Networks, Computer , Brain , Humans
4.
Shanghai Kou Qiang Yi Xue ; 17(6): 591-4, 2008 Dec.
Article in Chinese | MEDLINE | ID: mdl-19148444

ABSTRACT

PURPOSE: To determine the age and sex characteristics of children treated under dental general anesthesia (DGA) and the types of dental procedures performed, and to assess the results after six months to one year of follow-up. METHODS: A sample of 30 patients treated under DGA during 2006-2007 in the Department of Pediatric Dentistry of China Medical University was reviewed. All teeth were treated in a single visit. The dental procedures performed included caries restoration, indirect pulp capping, pulpotomy, root canal therapy (RCT), and dental extraction. Oral prophylaxis and topical fluoride application were performed on all teeth, and pit-and-fissure sealing was performed on all healthy premolars and molars. The SPSS 10.0 software package was used for statistical analysis. The chi-square test was used to analyze differences in sex distribution across age groups and differences in the procedures performed between primary and permanent teeth. RESULTS: The patients' ages ranged from 19 months to 14 years. Patients with mental retardation accounted for 10% of the sample and mentally healthy patients for 90%. Males outnumbered females by a ratio of about 2 to 1 in each age group. The procedures performed were caries restoration (18.67%), indirect pulp capping (23.26%), pulpotomy (0.77%), RCT (29.16%), dental extraction (2.05%), and fissure sealing (26.09%). The percentage of RCT was higher than that of caries restoration in primary teeth, whereas the opposite held for permanent teeth, as indicated by the chi-square test (χ²=11.630, P=0.001). No new dental caries was found, except in 2 patients with dysnoesia who were not cooperative enough for regular examination. Fillings were lost in 3 cases, involving 3 anterior teeth and 2 posterior teeth after RCT. All children were cooperative at the follow-up visit except the two patients with mental retardation. CONCLUSIONS: Caries restoration and RCT are the most frequently performed procedures in pediatric patients under DGA. This indicates the need to design and implement integrated control and prevention programs for these special pediatric patients. DGA is a safe and effective behavior-management technique for treating uncooperative children.
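For readers unfamiliar with the reported statistic, the snippet below shows how a 2x2 chi-square comparison of this kind (RCT vs. caries restoration in primary vs. permanent teeth) is computed. The counts are hypothetical placeholders, not the study's data (Python/SciPy).

```python
# Hypothetical illustration only: how a 2x2 chi-square comparison of this kind
# (RCT vs. caries restoration in primary vs. permanent teeth) is computed.
# The counts below are placeholders, NOT the study's data.
from scipy.stats import chi2_contingency

table = [[60, 30],   # primary teeth:   [RCT, caries restoration]  (placeholder)
         [10, 20]]   # permanent teeth: [RCT, caries restoration]  (placeholder)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.3f}, p={p:.4f}, dof={dof}")
```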


Subject(s)
Anesthesia, General , Pediatric Dentistry , Child , China , Dental Care , Dental Caries , Female , Fluorides, Topical , Humans , Male , Molar , Pit and Fissure Sealants , Root Canal Therapy , Tooth Extraction , Tooth, Deciduous