1.
IEEE Trans Image Process; 30: 5477-5489, 2021.
Article in English | MEDLINE | ID: mdl-33950840

ABSTRACT

Vision-language research, which focuses on understanding visual content, language semantics, and the relationships between them, has attracted wide interest. Video question answering (Video QA) is one of its typical tasks. Recently, several BERT-style pre-training methods have been proposed and shown to be effective on various vision-language tasks. In this work, we leverage the successful vision-language transformer structure to solve the Video QA problem. However, we do not pre-train it on any video data, because video pre-training requires massive computing resources and is hard to perform with only a few GPUs. Instead, our work aims to leverage image-language pre-training to help with video-language modeling, by sharing a common module design. We further introduce an adaptive spatio-temporal graph to enhance vision-language representation learning: we adaptively refine the spatio-temporal tubes of salient objects according to their spatio-temporal relations, learned through a hierarchical graph convolution process. This yields a set of fine-grained tube-level video object representations, which serve as the visual inputs to the vision-language transformer module. Experiments on three widely used Video QA datasets show that our model achieves new state-of-the-art results.
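
The abstract gives no code, but the central idea, refining per-object tube features with a graph convolution whose adjacency is learned from the data, can be sketched as below. This is a minimal PyTorch illustration, not the authors' implementation; the module name AdaptiveGraphConv, the similarity-based adjacency, and the stacked-layer stand-in for "hierarchical" refinement are all assumptions.

# Minimal sketch (not the authors' code) of adaptive graph refinement of
# object-tube features. All names and design details here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveGraphConv(nn.Module):
    """One graph-convolution step with a data-dependent adjacency matrix."""
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.update = nn.Linear(dim, dim)

    def forward(self, x):
        # x: (batch, num_tubes, dim) -- one feature vector per object tube.
        # The adjacency is inferred from pairwise similarity of tube
        # features, standing in for the learned spatio-temporal relations.
        adj = torch.softmax(
            self.query(x) @ self.key(x).transpose(1, 2) / x.size(-1) ** 0.5,
            dim=-1)
        # Aggregate neighbor information and refine each tube representation.
        return F.relu(self.update(adj @ x)) + x  # residual connection

# Stacking a few layers loosely mimics hierarchical refinement; the refined
# tube features would then be the visual inputs to the transformer module.
layers = nn.Sequential(*[AdaptiveGraphConv(256) for _ in range(2)])
tube_feats = torch.randn(4, 10, 256)  # 4 videos, 10 object tubes, dim 256
refined = layers(tube_feats)          # (4, 10, 256)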

2.
IEEE Trans Image Process; 30: 2758-2770, 2021.
Article in English | MEDLINE | ID: mdl-33476268

ABSTRACT

Video question answering is an important task combining Natural Language Processing and Computer Vision, and it requires a machine to obtain a thorough understanding of the video. Most existing approaches capture spatio-temporal information in videos simply by combining recurrent and convolutional neural networks. Moreover, most previous work focuses only on salient frames or regions and therefore misses significant details, such as potential location and action relations. In this paper, we propose a new method called Graph-based Multi-interaction Network for video question answering. In our model, a new attention mechanism named multi-interaction is designed to capture both element-wise and segment-wise sequence interactions simultaneously, both between and inside the multi-modal inputs. Moreover, we propose a graph-based relation-aware neural network that explores a more fine-grained visual representation by modeling the relationships and dependencies between objects spatially and temporally. We evaluate our method on TGIF-QA and two other video QA datasets. The qualitative and quantitative experimental results show the effectiveness of our model, which achieves state-of-the-art performance.
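
As a rough illustration of the multi-interaction idea, the sketch below runs a cross-modal attention pass at two granularities, element-wise (frame-to-word) and segment-wise (pooled frames to words), and fuses the two. It is not the paper's implementation; the module name MultiInteraction, the mean-pooled segments, and the fusion layer are assumptions, and the intra-modal interactions the abstract also mentions are omitted for brevity.

# Minimal sketch (hypothetical, not the paper's code) of two-granularity
# cross-modal interaction between video frames and question words.
import torch
import torch.nn as nn

def cross_attend(q, k, v):
    """Standard scaled dot-product attention between two sequences."""
    scores = q @ k.transpose(1, 2) / q.size(-1) ** 0.5
    return torch.softmax(scores, dim=-1) @ v

class MultiInteraction(nn.Module):
    def __init__(self, dim, segment_len=4):
        super().__init__()
        self.segment_len = segment_len
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, video, question):
        # video: (B, T, D) frame features; question: (B, L, D) word features.
        # Element-wise interaction: each frame attends to individual words.
        elem = cross_attend(video, question, question)
        # Segment-wise interaction: pool frames into short segments, attend
        # to the question at that coarser granularity, then upsample back.
        B, T, D = video.shape
        seg = video.view(B, T // self.segment_len, self.segment_len, D).mean(2)
        seg = cross_attend(seg, question, question)
        seg = seg.repeat_interleave(self.segment_len, dim=1)
        # Fuse both interaction granularities into one representation.
        return self.fuse(torch.cat([elem, seg], dim=-1))

video = torch.randn(2, 16, 128)    # 2 clips, 16 frames each
question = torch.randn(2, 9, 128)  # 2 questions, 9 tokens each
out = MultiInteraction(128)(video, question)  # (2, 16, 128)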
