Results 1 - 4 of 4

1.
IEEE Trans Pattern Anal Mach Intell ; 46(5): 3406-3421, 2024 May.
Article in English | MEDLINE | ID: mdl-38109234

ABSTRACT

Vision-Language Pre-Training (VLP) has demonstrated remarkable potential in aligning image and text pairs, paving the way for a wide range of cross-modal learning tasks. Nevertheless, we have observed that VLP models often fall short in terms of visual grounding and localization capabilities, which are crucial for many downstream tasks, such as visual reasoning. In response, we introduce a novel Position-guided Text Prompt (PTP) paradigm to bolster the visual grounding abilities of cross-modal models trained with VLP. In the VLP phase, PTP divides an image into N x N blocks and employs a widely-used object detector to identify objects within each block. PTP then reframes the visual grounding task as a fill-in-the-blank problem, encouraging the model to predict objects in given blocks or regress the blocks of a given object, exemplified by filling "[P]" or "[O]" in a PTP sentence such as "The block [P] has a [O]." This strategy enhances the visual grounding capabilities of VLP models, enabling them to better tackle various downstream tasks. Additionally, we integrate the second-order relationships between objects to further enhance the visual grounding capabilities of our proposed PTP paradigm. Incorporating PTP into several state-of-the-art VLP frameworks leads to consistently significant improvements across representative cross-modal learning model architectures and multiple benchmarks, such as zero-shot Flickr30K Retrieval (+5.6 in average recall@1) for the ViLT baseline, and COCO Captioning (+5.5 in CIDEr) for the state-of-the-art BLIP baseline. Furthermore, PTP attains results comparable to object-detector-based methods while offering faster inference, as it discards its object detector during inference, unlike other approaches.
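
A minimal sketch of how position-guided text prompts could be generated from detector outputs, assuming a simple grid assignment; the function names, block-id format, and detection tuple layout are illustrative assumptions rather than the authors' released API.

# Hypothetical PTP prompt generation: split the image into an N x N grid,
# map each detected object's center to a block, and emit a fill-in-the-blank
# sentence such as "The block [P] has a [O]."
from typing import List, Tuple

def assign_block(cx: float, cy: float, width: int, height: int, n: int) -> int:
    """Map an object's center (cx, cy) to one of the N x N block indices."""
    col = min(int(cx / width * n), n - 1)
    row = min(int(cy / height * n), n - 1)
    return row * n + col

def build_ptp_prompts(detections: List[Tuple[str, float, float]],
                      width: int, height: int, n: int = 3) -> List[str]:
    """detections: list of (object_name, center_x, center_y) from a detector."""
    prompts = []
    for name, cx, cy in detections:
        block_id = assign_block(cx, cy, width, height, n)
        prompts.append(f"The block {block_id} has a {name}.")
    return prompts

# Example: two detections on a 300 x 300 image with a 3 x 3 grid.
print(build_ptp_prompts([("dog", 50.0, 60.0), ("ball", 250.0, 280.0)], 300, 300))

During pre-training, either the block token or the object token in such a sentence would be masked so the model learns to predict the missing position or object.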

2.
IEEE Trans Image Process ; 32: 3254-3265, 2023.
Article in English | MEDLINE | ID: mdl-37256800

ABSTRACT

Early activity prediction/recognition aims to recognize action categories before they are fully conveyed. Compared to full-length action sequences, partial video sequences provide only limited discriminative information, which makes predicting the class labels of similar activities challenging, especially when only very few frames can be observed. To address this challenge, in this paper, we propose a novel meta negative network, namely, Magi-Net, that utilizes a contrastive learning scheme to alleviate the insufficiency of discriminative information. In our Magi-Net model, the positive samples are generated by augmenting an input anchor conditioned on all observation ratios, while the negative samples are selected from a trainable negative look-up memory (LUM) table, which stores the training samples and the corresponding misleading categories. Furthermore, a meta negative sample optimization strategy (MetaSOS) is proposed to boost the training of Magi-Net by encouraging the model to learn from the most informative negative samples via a meta learning scheme. Extensive experiments are conducted on several public skeleton-based activity datasets, and the results show the efficacy of the proposed Magi-Net model.
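
A hedged sketch of the contrastive objective described above, assuming an InfoNCE-style loss; the negative look-up memory is modeled here as a plain tensor of stored embeddings, and the selection and meta-optimization (MetaSOS) steps are omitted, so all names are illustrative.

# Contrastive loss over one anchor, one augmented positive, and K negatives
# drawn from a look-up memory of stored embeddings.
import torch
import torch.nn.functional as F

def contrastive_loss(anchor: torch.Tensor,
                     positive: torch.Tensor,
                     negatives: torch.Tensor,
                     temperature: float = 0.1) -> torch.Tensor:
    """anchor, positive: (D,) embeddings; negatives: (K, D) from the memory."""
    anchor = F.normalize(anchor, dim=0)
    positive = F.normalize(positive, dim=0)
    negatives = F.normalize(negatives, dim=1)
    pos_sim = torch.dot(anchor, positive) / temperature           # scalar
    neg_sim = negatives @ anchor / temperature                    # (K,)
    logits = torch.cat([pos_sim.unsqueeze(0), neg_sim])           # (1+K,)
    # The positive pair occupies index 0, so the target label is 0.
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))

# Example with random embeddings: one anchor, one positive, eight negatives.
loss = contrastive_loss(torch.randn(128), torch.randn(128), torch.randn(8, 128))
print(loss.item())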

3.
IEEE Int Conf Comput Vis Workshops ; 2023: 11674-11685, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38784111

ABSTRACT

Curriculum design is a fundamental component of education. For example, when we learn mathematics at school, we build upon our knowledge of addition to learn multiplication. These and other concepts must be mastered before our first algebra lesson, which also reinforces our addition and multiplication skills. Designing a curriculum for teaching either a human or a machine shares the underlying goal of maximizing knowledge transfer from earlier to later tasks, while also minimizing forgetting of learned tasks. Prior research on curriculum design for image classification focuses on the ordering of training examples during a single offline task. Here, we investigate the effect of the order in which multiple distinct tasks are learned in a sequence. We focus on the online class-incremental continual learning setting, where algorithms or humans must learn image classes one at a time during a single pass through a dataset. We find that curriculum consistently influences learning outcomes for humans and for multiple continual machine learning algorithms across several benchmark datasets. We introduce a novel-object recognition dataset for human curriculum learning experiments and observe that curricula that are effective for humans are highly correlated with those that are effective for machines. As an initial step towards automated curriculum design for online class-incremental learning, we propose a novel algorithm, dubbed Curriculum Designer (CD), that designs and ranks curricula based on inter-class feature similarities. We find significant overlap between curricula that are empirically highly effective and those that are highly ranked by our CD. Our study establishes a framework for further research on teaching humans and machines to learn continuously using optimized curricula. Our code and data are available through this link.
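
An illustrative sketch of ranking class orderings by inter-class feature similarity, in the spirit of the Curriculum Designer described above; the exact scoring rule from the paper is not reproduced here, and this sketch assumes class prototypes (mean feature vectors) scored by the average cosine similarity between consecutively learned classes.

# Rank all class orderings (practical only for small class counts) by a
# similarity-based curriculum score.
from itertools import permutations
import numpy as np

def class_prototypes(features: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Mean feature vector per class; features: (N, D), labels: (N,)."""
    classes = np.unique(labels)
    return np.stack([features[labels == c].mean(axis=0) for c in classes])

def curriculum_score(order: tuple, protos: np.ndarray) -> float:
    """Average cosine similarity between consecutively learned classes."""
    sims = []
    for a, b in zip(order[:-1], order[1:]):
        u, v = protos[a], protos[b]
        sims.append(float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v))))
    return float(np.mean(sims))

def rank_curricula(protos: np.ndarray) -> list:
    """Enumerate all class orderings and rank them from highest to lowest score."""
    orders = list(permutations(range(len(protos))))
    return sorted(orders, key=lambda o: curriculum_score(o, protos), reverse=True)

# Example with 4 classes and random 16-dimensional features.
rng = np.random.default_rng(0)
feats, labs = rng.normal(size=(200, 16)), rng.integers(0, 4, size=200)
print(rank_curricula(class_prototypes(feats, labs))[:3])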

4.
IEEE Trans Image Process ; 31: 5203-5213, 2022.
Article in English | MEDLINE | ID: mdl-35914045

ABSTRACT

Weakly-Supervised Temporal Action Localization (WSTAL) aims to localize actions in untrimmed videos with only video-level labels. Currently, most state-of-the-art WSTAL methods follow a Multi-Instance Learning (MIL) pipeline: producing snippet-level predictions first and then aggregating them into a video-level prediction. However, we argue that existing methods have overlooked two important drawbacks: 1) inadequate use of motion information and 2) the incompatibility of the prevailing cross-entropy training loss. In this paper, we show that the motion cues underlying optical flow features carry complementary information. Inspired by this, we propose to build a context-dependent motion prior, termed motionness. Specifically, a motion graph is introduced to model motionness based on a local motion carrier (e.g., optical flow). In addition, to highlight more informative video snippets, a motion-guided loss is proposed to modulate the network training conditioned on motionness scores. Extensive ablation studies confirm that motionness effectively models actions of interest, and the motion-guided loss leads to more accurate results. Moreover, our motion-guided loss is a plug-and-play loss function and can be applied to existing WSTAL methods. Without loss of generality, based on the standard MIL pipeline, our method achieves new state-of-the-art performance on three challenging benchmarks, including THUMOS'14, ActivityNet v1.2 and v1.3.
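
A hedged sketch of a motion-guided loss in which per-snippet losses are re-weighted by motionness scores so that snippets with stronger motion evidence contribute more to training; the motion-graph computation of motionness is not shown, and this particular softmax weighting scheme is an assumption for illustration, not the paper's exact formulation.

# Re-weight per-snippet losses by motionness scores over T video snippets.
import torch

def motion_guided_loss(snippet_losses: torch.Tensor,
                       motionness: torch.Tensor) -> torch.Tensor:
    """snippet_losses, motionness: (T,) tensors over T video snippets."""
    weights = torch.softmax(motionness, dim=0)   # emphasize high-motion snippets
    return (weights * snippet_losses).sum()

# Example: 5 snippets with per-snippet classification losses and motionness scores.
losses = torch.tensor([0.9, 0.2, 1.3, 0.4, 0.7])
scores = torch.tensor([0.1, 0.05, 0.8, 0.3, 0.6])
print(motion_guided_loss(losses, scores).item())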
