1.
Article in English | MEDLINE | ID: mdl-38564351

ABSTRACT

This paper delves into the challenges of achieving scalable and effective multi-object modeling for semi-supervised Video Object Segmentation (VOS). Previous VOS methods decode features with a single positive object, limiting the learning of multi-object representation as they must match and segment each target separately under multi-object scenarios. Additionally, earlier techniques catered to specific application objectives and lacked the flexibility to fulfill different speed-accuracy requirements. To address these problems, we present two innovative approaches, Associating Objects with Transformers (AOT) and Associating Objects with Scalable Transformers (AOST). In pursuit of effective multi-object modeling, AOT introduces the IDentification (ID) mechanism to allocate each object a unique identity. This approach enables the network to model the associations among all objects simultaneously, thus facilitating the tracking and segmentation of objects in a single network pass. To address the challenge of inflexible deployment, AOST further integrates scalable long short-term transformers that incorporate scalable supervision and layer-wise ID-based attention. This enables online architecture scalability in VOS for the first time and overcomes the representation limitations of ID embeddings. Given the absence of a benchmark for VOS with dense multi-object annotations, we propose a challenging Video Object Segmentation in the Wild (VOSW) benchmark to validate our approaches. We evaluated various AOT and AOST variants using extensive experiments across VOSW and five commonly used VOS benchmarks, including YouTube-VOS 2018 & 2019 Val, DAVIS-2017 Val & Test, and DAVIS-2016. Our approaches surpass state-of-the-art competitors and display exceptional efficiency and scalability consistently across all six benchmarks. Moreover, we achieved 1st place in the 3rd Large-scale Video Object Segmentation Challenge. Project page: https://github.com/yoxu515/aot-benchmark.
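
For readers who want a concrete picture of the ID mechanism described above, the following is a minimal PyTorch sketch, assuming a learnable bank of identity embeddings indexed by object label; the module and parameter names (IDAssignment, id_bank) and the toy shapes are illustrative assumptions, not the authors' released code.

import torch
import torch.nn as nn

class IDAssignment(nn.Module):
    def __init__(self, max_objects=10, embed_dim=256):
        super().__init__()
        # one learnable identity vector per object slot, plus background at index 0
        self.id_bank = nn.Parameter(torch.randn(max_objects + 1, embed_dim) * 0.02)

    def forward(self, masks):
        # masks: (B, H, W) integer map, 0 = background, 1..N = object identities
        id_emb = self.id_bank[masks]        # (B, H, W, C) identity embedding lookup
        return id_emb.flatten(1, 2)         # (B, H*W, C), aligned with flattened features

# toy usage: fuse identity embeddings with encoded reference-frame features to build
# identity-aware memory tokens that one transformer pass can attend to for all objects
ident = IDAssignment(max_objects=10, embed_dim=256)
ref_masks = torch.randint(0, 4, (2, 30, 30))   # two clips, three objects plus background
ref_feats = torch.randn(2, 900, 256)           # flattened backbone features (B, H*W, C)
memory = ref_feats + ident(ref_masks)          # memory for the long short-term transformer

Because every object carries its own embedding, matching and segmentation no longer have to be repeated once per target, which is the efficiency argument made in the abstract.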

2.
IEEE Trans Pattern Anal Mach Intell ; 45(8): 10055-10069, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37819831

ABSTRACT

We explore the task of language-guided video segmentation (LVS). Previous algorithms mostly adopt 3D CNNs to learn video representations, struggling to capture long-term context and easily suffering from visual-linguistic misalignment. In light of this, we present Locater (local-global context aware Transformer), which augments the Transformer architecture with a finite memory so as to query the entire video with the language expression in an efficient manner. The memory is designed with two components: one persistently preserves global video content, and the other dynamically gathers local temporal context and segmentation history. Based on the memorized local-global context and the particular content of each frame, Locater holistically and flexibly comprehends the expression as an adaptive query vector for each frame, which is then used to query the corresponding frame for mask generation. The memory also allows Locater to process videos in linear time with constant-size memory, whereas Transformer-style self-attention scales quadratically with sequence length. To thoroughly examine the visual grounding capability of LVS models, we contribute a new LVS dataset, A2D-S+, which is built upon the A2D-S dataset but poses increased challenges in disambiguating among similar objects. Experiments on three LVS datasets and our A2D-S+ show that Locater outperforms previous state-of-the-art methods. Further, we won 1st place in the Referring Video Object Segmentation Track of the 3rd Large-scale Video Object Segmentation Challenge, where Locater served as the foundation for the winning solution.
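
As a rough illustration of the finite-memory idea, here is a small PyTorch sketch under assumed shapes and an assumed gated, rolling update rule; FiniteMemory, adapt_query, and the mean-pooled query are illustrative stand-ins rather than the paper's exact design.

import torch
import torch.nn as nn

class FiniteMemory(nn.Module):
    def __init__(self, dim=256, n_global=8, n_local=8):
        super().__init__()
        self.global_mem = nn.Parameter(torch.randn(n_global, dim) * 0.02)  # persistent global content
        self.register_buffer("local_mem", torch.zeros(n_local, dim))       # rolling temporal context
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.gate = nn.Linear(2 * dim, dim)

    def adapt_query(self, expr_emb, frame_feats):
        # expr_emb: (1, L, C) language tokens; frame_feats: (1, HW, C) current-frame features
        mem = torch.cat([self.global_mem, self.local_mem], dim=0).unsqueeze(0)  # (1, M, C)
        ctx, _ = self.attn(expr_emb, mem, mem)      # read the memory with the expression
        query = ctx.mean(dim=1)                     # (1, C) adaptive query vector for this frame
        # gated write of the current frame's summary into the rolling local memory
        summary = frame_feats.mean(dim=1)           # (1, C)
        g = torch.sigmoid(self.gate(torch.cat([summary, query], dim=-1)))
        self.local_mem = torch.roll(self.local_mem, 1, dims=0)
        self.local_mem[0] = (g * summary).squeeze(0).detach()
        return query                                # dot with frame features to produce mask logits

Because the memory has a fixed number of slots, the per-frame read and write cost stays constant, which is how the abstract's linear-time claim can be realised.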

3.
IEEE Trans Pattern Anal Mach Intell ; 45(9): 11297-11308, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37037230

ABSTRACT

Scene understanding through pixel-level semantic parsing is one of the main problems in computer vision. To date, image-based methods and datasets for scene parsing have been well explored. However, the real world is naturally dynamic rather than static, so learning to perform video scene parsing is more practical for real-world applications. Considering that few datasets cover an extensive range of scenes and object categories with temporal pixel-level annotations, in this work we present a large-scale video scene parsing dataset, namely VSPW (Video Scene Parsing in the Wild). Specifically, VSPW contains a total of 251,633 frames from 3,536 videos with dense pixel-wise annotations, covering a large variety of 231 scenes and 124 object categories. Moreover, VSPW is densely annotated at a high frame rate of 15 fps, and over 96% of its videos have high spatial resolutions from 720P to 4K. To the best of our knowledge, VSPW is the first attempt to address the challenging video scene parsing task in the wild by considering diverse scenes. Based on VSPW, we further propose Temporal Attention Blending (TAB) Networks to harness temporal context information for better pixel-level semantic understanding of videos. Extensive experiments on VSPW demonstrate the superiority of the proposed TAB over other baseline approaches. We hope the proposed dataset and the explorations in this work can help advance the challenging yet practical video scene parsing task. Both the dataset and the code are available at www.vspwdataset.com.
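
To make the temporal-context idea concrete, the sketch below shows one plausible attention-style blending of past-frame features into the current frame before per-pixel classification; TemporalAttentionBlend and its frame-level affinity weights are assumptions for illustration, not the released TAB code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalAttentionBlend(nn.Module):
    def __init__(self, channels=256, num_classes=124):
        super().__init__()
        self.key = nn.Conv2d(channels, channels // 4, kernel_size=1)
        self.query = nn.Conv2d(channels, channels // 4, kernel_size=1)
        self.classifier = nn.Conv2d(channels, num_classes, kernel_size=1)

    def forward(self, cur_feat, past_feats):
        # cur_feat: (B, C, H, W) current frame; past_feats: list of (B, C, H, W) earlier frames
        q = self.query(cur_feat).flatten(2)                       # (B, C/4, HW)
        affinities = []
        for f in past_feats:
            k = self.key(f).flatten(2)                            # (B, C/4, HW)
            affinities.append((q * k).sum(dim=1).mean(dim=-1, keepdim=True))  # (B, 1) per frame
        w = F.softmax(torch.cat(affinities, dim=1), dim=1)        # (B, T) blending weights
        blended = cur_feat
        for i, f in enumerate(past_feats):
            blended = blended + w[:, i].view(-1, 1, 1, 1) * f     # weighted temporal blending
        return self.classifier(blended)                           # (B, num_classes, H, W) logits

The blending weights let frames that agree with the current view contribute more context, which is one way temporal information can sharpen per-pixel predictions.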

4.
IEEE Trans Neural Netw Learn Syst ; 33(9): 4624-4634, 2022 Sep.
Article in English | MEDLINE | ID: mdl-33651698

ABSTRACT

We focus on the occlusion problem in person re-identification (re-id), which is one of the main challenges in real-world person retrieval scenarios. Previous methods for occluded re-id usually assume that only the probes are occluded and remove occlusions by manual cropping. However, this may not always hold in practice. This article relaxes this assumption and investigates a more general occlusion problem, where both the probe and gallery images may be occluded. The key to this challenging problem is suppressing noisy information by distinguishing bodies from occlusions. We propose to incorporate pose information into the re-id framework, which benefits the model in three aspects. First, it provides the location of the body: we design a Pose-Masked Feature Branch to make our model focus only on the body region and filter out the noisy features introduced by occlusions. Second, the estimated pose reveals which body parts are visible, providing a cue for constructing more informative person features: we propose a Pose-Embedded Feature Branch to adaptively re-calibrate channel-wise feature responses based on the visible body parts. Third, at test time, the estimated pose indicates which regions are informative and reliable for both probe and gallery images; we explicitly split the extracted spatial feature into parts, and only features from the commonly visible parts are used in retrieval. To better evaluate performance on occluded re-id, we also propose a large-scale occluded re-id dataset with more than 35,000 images, namely Occluded-DukeMTMC. Extensive experiments show that our approach surpasses previous methods on occluded, partial, and non-occluded re-id datasets.


Subject(s)
Biometric Identification; Biometric Identification/methods; Humans; Neural Networks, Computer
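
The pose-guided masking in the first branch of this entry can be illustrated with a short PyTorch sketch, assuming keypoint heatmaps from an off-the-shelf pose estimator are available at feature resolution; PoseMaskedBranch, its 1x1 fusion layer, and the masked average pooling are illustrative assumptions, not the authors' network.

import torch
import torch.nn as nn

class PoseMaskedBranch(nn.Module):
    def __init__(self, channels=2048, num_keypoints=17):
        super().__init__()
        self.fuse = nn.Conv2d(num_keypoints, 1, kernel_size=1)   # heatmaps -> soft body mask
        self.embed = nn.Linear(channels, 256)

    def forward(self, feat, heatmaps):
        # feat: (B, C, H, W) backbone features; heatmaps: (B, K, H, W) pose keypoint heatmaps
        mask = torch.sigmoid(self.fuse(heatmaps))                 # (B, 1, H, W) soft body mask
        masked = feat * mask                                      # suppress occluders/background
        pooled = masked.sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)  # masked average pooling
        return self.embed(pooled)                                 # (B, 256) re-id descriptor

The same visibility signal can then drive the channel re-calibration and the part-wise matching described in the abstract, since the heatmaps indicate which body parts are actually observed.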