Results 1 - 3 of 3
1.
Neural Netw ; 179: 106505, 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-39002205

ABSTRACT

Unsupervised domain adaptation (UDA) aims to transfer knowledge from previous, related labeled datasets (sources) to a new unlabeled dataset (target). Despite impressive performance, existing approaches have largely focused on image-based UDA, while video-based UDA remains relatively understudied owing to the difficulty of adapting diverse modal video features and modeling temporal associations efficiently. To address this, existing studies use optical flow to capture motion cues between consecutive in-domain frames, but optical flow carries heavy compute requirements, and modeling flow patterns across diverse domains is equally challenging. In this work, we propose an adversarial domain adaptation approach for video semantic segmentation that aims to align temporally associated pixels in successive source and target domain frames without relying on optical flow. Specifically, we introduce a Perceptual Consistency Matching (PCM) strategy that leverages perceptual similarity to identify pixels with high correlation across consecutive frames, and infers that such pixels should correspond to the same class. We can therefore enhance prediction accuracy for video-UDA by enforcing consistency not only between in-domain frames but also across domains using PCM objectives during model training. Extensive experiments on public datasets show the benefit of our approach over existing state-of-the-art UDA methods. Our approach not only addresses a crucial task in video domain adaptation but also offers notable performance improvements with faster inference times.
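The abstract does not give the PCM objective's exact form, but the idea it describes (select cross-frame pixel pairs with high perceptual similarity, then penalize disagreement in their class predictions) can be sketched as follows. Everything here is an assumption: the function name, the use of cosine similarity as the perceptual measure, the threshold `tau`, and the symmetric-KL consistency term are illustrative stand-ins, not the authors' implementation.

```python
import numpy as np

def pcm_consistency_loss(feat_t, feat_t1, prob_t, prob_t1, tau=0.9):
    """Hypothetical sketch of a Perceptual Consistency Matching objective.

    feat_t, feat_t1 : (N, D) per-pixel features from consecutive frames
    prob_t, prob_t1 : (N, C) per-pixel class probabilities
    tau             : similarity threshold selecting "matched" pixels
    """
    # Cosine similarity between corresponding pixels across the two frames
    # (one possible stand-in for perceptual similarity).
    f0 = feat_t / np.linalg.norm(feat_t, axis=1, keepdims=True)
    f1 = feat_t1 / np.linalg.norm(feat_t1, axis=1, keepdims=True)
    sim = np.sum(f0 * f1, axis=1)          # (N,)
    mask = sim > tau                       # high-correlation pixel pairs
    if not mask.any():
        return 0.0
    # Matched pixels should share a class: penalize divergent predictions
    # with a symmetric KL term (one of several possible consistency losses).
    eps = 1e-8
    p, q = prob_t[mask] + eps, prob_t1[mask] + eps
    kl = np.sum(p * np.log(p / q) + q * np.log(q / p), axis=1)
    return float(kl.mean())
```

In the paper's setting this loss would be applied both to in-domain frame pairs and across domains during adversarial training; here it simply illustrates the matching-then-consistency structure.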

2.
Article in English | MEDLINE | ID: mdl-37379192

ABSTRACT

Recently, motor imagery (MI) electroencephalography (EEG) classification techniques using deep learning have shown improved performance over conventional techniques. However, improving the classification accuracy on unseen subjects is still challenging due to intersubject variability, scarcity of labeled unseen subject data, and low signal-to-noise ratio (SNR). In this context, we propose a novel two-way few-shot network able to efficiently learn how to learn representative features of unseen subject categories and classify them with limited MI EEG data. The pipeline includes an embedding module that learns feature representations from a set of signals, a temporal-attention module to emphasize important temporal features, an aggregation-attention module for key support signal discovery, and a relation module for final classification based on relation scores between a support set and a query signal. In addition to the unified learning of feature similarity and a few-shot classifier, our method can emphasize informative features in support data relevant to the query, which generalizes better on unseen subjects. Furthermore, we propose to fine-tune the model before testing by arbitrarily sampling a query signal from the provided support set to adapt to the distribution of the unseen subject. We evaluate our proposed method with three different embedding modules on cross-subject and cross-dataset classification tasks using brain-computer interface (BCI) competition IV 2a, 2b, and GIST datasets. Extensive experiments show that our model significantly improves over the baselines and outperforms existing few-shot approaches.
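The relation-scoring step described above (aggregation attention over support signals, then classification of a query by relation score) can be illustrated with a minimal sketch. All names and choices here are assumptions: embeddings are taken as given vectors, attention is a softmax over query-support similarity (standing in for the aggregation-attention module), and a negative-distance score stands in for the learned relation module.

```python
import numpy as np

def relation_classify(support, support_labels, query, n_classes):
    """Hypothetical few-shot relation-scoring sketch.

    support        : (S, D) embedded support signals
    support_labels : (S,) integer class labels
    query          : (D,) embedded query signal
    """
    # Attention weights emphasizing support signals relevant to the query
    # (a stand-in for the aggregation-attention module).
    sims = support @ query
    w = np.exp(sims - sims.max())
    w /= w.sum()
    scores = np.full(n_classes, -np.inf)
    for c in range(n_classes):
        m = support_labels == c
        if m.any():
            # Attention-weighted class prototype from the support set.
            proto = (w[m][:, None] * support[m]).sum(0) / w[m].sum()
            # Relation score: negative distance to the query
            # (a stand-in for the learned relation module).
            scores[c] = -np.linalg.norm(proto - query)
    return int(np.argmax(scores))
```

The paper's model learns both the embedding and the relation function end to end, and additionally fine-tunes on queries sampled from the support set before testing; this sketch only shows the support-query scoring structure.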

3.
IEEE Trans Haptics ; 15(3): 560-571, 2022.
Article in English | MEDLINE | ID: mdl-35622790

ABSTRACT

In this study, a convolutional neural network (CNN) classification model for intention recognition was developed using electromyography (EMG) signals acquired from the subject. For sensory feedback, a rule-based wearable proprioceptive haptic device was proposed as a new method for conveying the grip information of a robotic prosthesis. We then constructed a closed-loop integrated system consisting of the CNN-based EMG classification model, the proposed haptic device, and a robotic prosthetic hand. Finally, an experiment was conducted in which the closed-loop system was used to simultaneously evaluate intention-recognition and sensory-feedback performance for a subject. The trained EMG classification model and the proposed haptic device achieved intention-recognition and sensory-feedback accuracy of 97% or higher across 10 grip states. Although the EMG classification model occasionally misrecognized intentions, the subject's grip intention was generally identified accurately, and the grip pattern was likewise transmitted accurately to the subject by the proposed haptic device. The integrated system, combining CNN-based EMG intention recognition with rule-based sensory feedback from the proposed haptic device, is expected to be useful for robotic prosthetic hand control in participants with limb loss.
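One iteration of the closed loop described above (classify the EMG window, then issue rule-based haptic feedback for the recognized grip) can be sketched as follows. This is purely illustrative: the grip names, the two-element feedback patterns, the RMS features, and the nearest-centroid classifier are all hypothetical stand-ins for the paper's 10 grip states, proprioceptive device rules, and trained CNN.

```python
import numpy as np

# Hypothetical grip classes and rule-based haptic patterns (assumed names).
GRIPS = ["rest", "power", "pinch"]
HAPTIC_RULES = {"rest": [0, 0], "power": [1, 1], "pinch": [1, 0]}

def rms_features(emg_window):
    """Root-mean-square per channel, a common EMG feature.
    emg_window: (channels, samples)."""
    return np.sqrt(np.mean(np.square(emg_window), axis=1))

def classify_grip(emg_window, centroids):
    """Nearest-centroid stand-in for the paper's CNN classifier."""
    f = rms_features(emg_window)
    dists = [np.linalg.norm(f - c) for c in centroids]
    return GRIPS[int(np.argmin(dists))]

def closed_loop_step(emg_window, centroids):
    """One loop iteration: intention recognition, then haptic feedback."""
    grip = classify_grip(emg_window, centroids)
    return grip, HAPTIC_RULES[grip]
```

In the actual system the CNN's predicted grip drives the robotic hand while the haptic device simultaneously renders the grip state back to the user; this sketch only shows the recognize-then-feed-back control flow.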


Subjects
Artificial Limbs, Robotic Surgical Procedures, Electromyography/methods, Sensory Feedback, Hand, Haptic Interface, Humans, Intention