1.
IEEE Trans Image Process ; 31: 2683-2694, 2022.
Article in English | MEDLINE | ID: mdl-35320102

ABSTRACT

Sketch recognition relies on two types of information: spatial contexts, such as the local structures in images, and temporal contexts, such as the order of strokes. Existing methods usually adopt convolutional neural networks (CNNs) to model spatial contexts and recurrent neural networks (RNNs) to model temporal contexts. However, most of them combine spatial and temporal features with late fusion or a single-stage transformation, which is prone to losing informative details in sketches. To tackle this problem, we propose a novel framework aimed at multi-stage interaction and refinement of spatial and temporal features. Specifically, given a sketch represented by a stroke array, we first generate a temporal-enriched image (TEI), a pseudo-color image that retains the temporal order of strokes, to overcome the difficulty CNNs have in leveraging temporal information. We then construct a dual-branch network in which a CNN branch and an RNN branch process the TEI and the stroke array, respectively. In the early stages of the network, considering the limited ability of RNNs to capture spatial structures, we utilize multiple enhancement modules to enhance the stroke features with the TEI features. In the last stage, we propose a spatio-temporal enhancement module that refines stroke features and TEI features in a joint feature space. Furthermore, a bidirectional temporal-compatible unit, which adaptively merges features in opposite temporal orders, is proposed to help the RNN handle abrupt strokes. Comprehensive experimental results on QuickDraw and TU-Berlin demonstrate that the proposed method is a robust and efficient solution for sketch recognition.
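The TEI idea above can be illustrated with a minimal sketch: rasterize each stroke into an image whose channel values encode the stroke's position in the drawing order. The function below is an illustrative stand-in, not the paper's exact encoding; the color mapping, image size, and input convention (strokes as point arrays with coordinates in [0, 1]) are assumptions.

```python
import numpy as np

def temporal_enriched_image(strokes, size=64):
    """Rasterize a stroke array into a pseudo-color image whose channel
    values encode each stroke's temporal order (illustrative encoding only).

    strokes: list of (N_i, 2) arrays of (x, y) coordinates in [0, 1],
             ordered by drawing time.
    """
    img = np.zeros((size, size, 3), dtype=np.float32)
    n = max(len(strokes), 1)
    for i, pts in enumerate(strokes):
        t = i / max(n - 1, 1)  # normalized temporal position in [0, 1]
        # Map drawing order to a color: early strokes red-ish, late green-ish.
        color = np.array([1.0 - t, t, 0.5], dtype=np.float32)
        xy = np.clip((np.asarray(pts) * (size - 1)).astype(int), 0, size - 1)
        img[xy[:, 1], xy[:, 0]] = color  # later strokes overwrite earlier ones
    return img
```

A CNN applied to such an image sees both the spatial layout of the sketch and, through the colors, a coarse record of the stroke order.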


Subject(s)
Neural Networks, Computer
2.
IEEE Trans Image Process ; 30: 7926-7937, 2021.
Article in English | MEDLINE | ID: mdl-34534079

ABSTRACT

Recent methods, including CoViAR and DMC-Net, provide a new paradigm for action recognition because they operate directly on compressed videos (e.g., MPEG4 files). This paradigm avoids the cumbersome decoding procedure of traditional methods and leverages the pre-encoded motion vectors and residuals in compressed videos to perform recognition efficiently. However, motion vectors and residuals are noisy, sparse, and highly correlated, and cannot be effectively exploited by plain, separate networks. To tackle these issues, we propose a joint feature optimization and fusion framework that better utilizes motion vectors and residuals in the following three aspects. (i) We model feature optimization as a reconstruction process that represents features by a set of bases, and propose a joint feature optimization module that extracts bases from both modalities. (ii) A low-rank non-local attention module, which combines the non-local operation with a low-rank constraint, is proposed to address the noise and sparsity problems during feature reconstruction. (iii) A lightweight feature fusion module and a self-adaptive knowledge distillation method are introduced, which use motion vectors and residuals to generate predictions similar to those of networks that use optical flow. With these components embedded in a baseline network, the proposed network not only achieves state-of-the-art performance on HMDB-51 and UCF-101, but also maintains its advantage in computational complexity.
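The low-rank non-local attention in point (ii) can be sketched as ordinary non-local self-attention whose affinity matrix is forced to low rank. The version below imposes the constraint by truncated SVD of the softmax affinity; this is a simplified illustration under assumed shapes and learned projections (wq, wk, wv), not the paper's exact formulation.

```python
import numpy as np

def low_rank_nonlocal_attention(x, wq, wk, wv, rank=4):
    """Non-local (self-attention) operation with a low-rank constraint on
    the affinity matrix. Illustrative sketch, not the paper's formulation.

    x:          (N, d) feature matrix (N positions, d channels).
    wq, wk, wv: (d, d) projection weights (learned in the real model).
    rank:       number of SVD components kept in the affinity.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    logits = q @ k.T / np.sqrt(x.shape[1])
    a = np.exp(logits - logits.max(axis=1, keepdims=True))
    a /= a.sum(axis=1, keepdims=True)  # row-stochastic N x N affinity
    # Low-rank constraint: keep only the top-`rank` singular components,
    # suppressing noisy, incoherent entries of the affinity.
    u, s, vt = np.linalg.svd(a, full_matrices=False)
    a_lr = (u[:, :rank] * s[:rank]) @ vt[:rank]
    return x + a_lr @ v  # residual connection, as in standard non-local blocks
```

The low-rank truncation acts as a denoiser on the affinity: sparse, noisy motion-vector features produce spurious pairwise affinities, and keeping only the dominant components retains the coherent global structure.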
