Results 1 - 17 of 17

1.
IEEE Trans Image Process ; 33: 297-309, 2024.
Article in English | MEDLINE | ID: mdl-38100340

ABSTRACT

Recognizing actions performed on unseen objects, known as Compositional Action Recognition (CAR), has attracted increasing attention in recent years. The main challenge is to overcome the distribution shift of "action-object" pairs between the training and testing sets. Previous works on CAR usually introduce extra information (e.g., bounding boxes) to enhance the dynamic cues of video features. However, these approaches do not essentially eliminate the inherent inductive bias in the video, which can be regarded as the stumbling block for model generalization, because video features are usually extracted from visually cluttered areas in which many objects cannot be removed or masked explicitly. To this end, this work attempts to implicitly accomplish semantic-level decoupling of "object-action" in the high-level feature space. Specifically, we propose a novel Semantic-Decoupling Transformer framework, dubbed DeFormer, which contains two insightful sub-modules: the Objects-Motion Decoupler (OMD) and the Semantic-Decoupling Constrainer (SDC). In OMD, we initialize several learnable tokens incorporating annotation priors to learn an instance-level representation and then decouple it into an appearance feature and a motion feature in the high-level visual space. In SDC, we use textual information in the high-level language space to construct a dual-contrastive association that constrains the decoupled appearance and motion features obtained in OMD. Extensive experiments verify the generalization ability of DeFormer. Specifically, compared to the baseline method, DeFormer achieves absolute improvements of 3%, 3.3%, and 5.4% under three different settings on STH-ELSE, while the corresponding improvements on EPIC-KITCHENS-55 are 4.7%, 9.2%, and 4.4%. Besides, DeFormer achieves state-of-the-art results on both ground-truth and detected annotations.
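As a rough illustration of the dual-contrastive association in SDC, the sketch below aligns the decoupled appearance feature with an object-text embedding and the motion feature with a verb-text embedding via a symmetric InfoNCE loss. All names, shapes, and the loss form are our assumptions, not the authors' released code:

```python
# Hypothetical sketch of a dual-contrastive association between decoupled
# visual features and text embeddings; every detail here is illustrative.
import torch
import torch.nn.functional as F

def info_nce(a: torch.Tensor, b: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE between two aligned batches of embeddings [B, D]."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / tau                      # [B, B] similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

B, D = 8, 256
appearance, motion = torch.randn(B, D), torch.randn(B, D)      # decoupled by OMD
object_text, verb_text = torch.randn(B, D), torch.randn(B, D)  # text encoder output

# Appearance aligns with the object phrase, motion with the verb phrase.
loss = info_nce(appearance, object_text) + info_nce(motion, verb_text)
```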

2.
IEEE Trans Pattern Anal Mach Intell ; 45(8): 10317-10330, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37030795

ABSTRACT

In order to enable the model to generalize to unseen "action-object" pairs (compositional actions), previous methods encode multiple pieces of information (i.e., the appearance, position, and identity of visual instances) independently and concatenate them for classification. However, these methods ignore the potential supervisory role of instance information (i.e., position and identity) in the process of visual perception. To this end, we present a novel framework, namely Progressive Instance-aware Feature Learning (PIFL), to progressively extract, reason about, and predict dynamic cues of moving instances from videos for compositional action recognition. Specifically, this framework extracts features from foreground instances that are likely to be relevant to human actions (Position-aware Appearance Feature Extraction in Section III-B1), performs identity-aware reasoning among instance-centric features with semantic-specific interactions (Identity-aware Feature Interaction in Section III-B2), and finally predicts the instances' positions from observed states to force the model to perceive their movement (Semantic-aware Position Prediction in Section III-B3). We evaluate our approach on two compositional action recognition benchmarks, namely Something-Else and IKEA-Assembly. Our approach achieves consistent accuracy gains over off-the-shelf action recognition algorithms using both ground-truth and detected instance positions.


Subject(s)
Algorithms; Visual Perception; Humans; Learning
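The Semantic-aware Position Prediction step described above can be pictured as an auxiliary regression task that forces the model to anticipate where instances move next. A minimal sketch, assuming instance features and next-frame boxes as inputs; the module name and shapes are hypothetical:

```python
# Illustrative sketch of a position-prediction auxiliary task; all names,
# shapes, and the box parameterization are assumptions, not the PIFL code.
import torch
import torch.nn as nn

class PositionPredictor(nn.Module):
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        # Regress the next-step box (cx, cy, w, h) from an instance feature.
        self.head = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                  nn.Linear(128, 4))

    def forward(self, inst_feat: torch.Tensor) -> torch.Tensor:
        return self.head(inst_feat)

B, N, D = 4, 3, 256                 # batch, instances per frame, feature dim
feats = torch.randn(B, N, D)        # observed instance features at time t
boxes_next = torch.rand(B, N, 4)    # ground-truth boxes at time t+1
pred = PositionPredictor(D)(feats)
aux_loss = nn.functional.mse_loss(pred, boxes_next)  # forces motion awareness
```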
3.
Article in English | MEDLINE | ID: mdl-37028288

ABSTRACT

Contrastive learning has been successfully leveraged to learn action representations for addressing the problem of semi-supervised skeleton-based action recognition. However, most contrastive-learning-based methods only contrast global features that mix spatiotemporal information, which confuses the spatial- and temporal-specific information reflecting different semantics at the frame level and joint level. Thus, we propose a novel spatiotemporal decouple-and-squeeze contrastive learning (SDS-CL) framework to comprehensively learn richer representations of skeleton-based actions by jointly contrasting spatial-squeezing features, temporal-squeezing features, and global features. In SDS-CL, we design a new spatiotemporal-decoupling intra-inter attention (SIIA) mechanism to obtain spatiotemporal-decoupling attentive features that capture spatiotemporal-specific information, by calculating spatial- and temporal-decoupling intra-attention maps among joint/motion features, as well as spatial- and temporal-decoupling inter-attention maps between joint and motion features. Moreover, we present a new spatial-squeezing temporal-contrasting loss (STL), a new temporal-squeezing spatial-contrasting loss (TSL), and a global-contrasting loss (GL) to contrast the spatial-squeezed joint and motion features at the frame level, the temporal-squeezed joint and motion features at the joint level, and the global joint and motion features at the skeleton level. Extensive experimental results on four public datasets show that the proposed SDS-CL achieves performance gains compared with other competitive methods.
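A toy sketch of the squeezing-then-contrasting idea: spatial squeezing yields frame-level features, temporal squeezing yields joint-level features, and the joint and motion streams are contrasted at each level. The [B, T, J, D] tensor layout and the InfoNCE form are our assumptions:

```python
# Rough sketch of spatial/temporal "squeezing" followed by contrast between a
# joint stream and a motion (frame-difference) stream; details are assumed.
import torch
import torch.nn.functional as F

def nce(a, b, tau=0.1):
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / tau
    return F.cross_entropy(logits, torch.arange(a.size(0), device=a.device))

B, T, J, D = 4, 16, 25, 128
joint_feat = torch.randn(B, T, J, D)      # joint stream
motion_feat = torch.randn(B, T, J, D)     # motion stream

# Spatial squeezing -> frame-level [B*T, D]; temporal squeezing -> joint-level
# [B*J, D]; full squeezing -> skeleton-level [B, D].
stl = nce(joint_feat.mean(2).reshape(-1, D), motion_feat.mean(2).reshape(-1, D))
tsl = nce(joint_feat.mean(1).reshape(-1, D), motion_feat.mean(1).reshape(-1, D))
gl = nce(joint_feat.mean((1, 2)), motion_feat.mean((1, 2)))
loss = stl + tsl + gl
```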

4.
IEEE Trans Pattern Anal Mach Intell ; 45(6): 6955-6968, 2023 Jun.
Article in English | MEDLINE | ID: mdl-33108281

ABSTRACT

Group activity recognition (GAR) is a challenging task aimed at recognizing the behavior of a group of people. It is a complex inference process in which visual cues collected from individuals are integrated into the final prediction while remaining aware of the interactions between them. This paper goes one step beyond existing approaches by designing a Hierarchical Graph-based Cross Inference Network (HiGCIN), in which three levels of information, i.e., the body-region level, person level, and group-activity level, are constructed, learned, and inferred in an end-to-end manner. Primarily, we present a generic Cross Inference Block (CIB), which is able to concurrently capture the latent spatiotemporal dependencies among body regions and among persons. Based on the CIB, two modules are designed to extract and refine features for group activities at each level. Experiments on two popular benchmarks verify the effectiveness of our approach, particularly its ability to infer with multilevel visual cues. In addition, training our approach does not require individual action labels, which greatly reduces the labor required for data annotation.
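The cross-inference idea of concurrently capturing dependencies among body regions and among persons might be sketched as two attention passes over the two axes of a person-by-region feature map; the block below is our schematic reading, not the HiGCIN implementation:

```python
# Rough sketch of concurrent dependency capture across two axes (body regions
# within a person, persons within the scene); the design below is assumed.
import torch
import torch.nn as nn

class CrossInferenceBlock(nn.Module):
    def __init__(self, d: int, heads: int = 4):
        super().__init__()
        self.region_attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.person_attn = nn.MultiheadAttention(d, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """x: [P, R, D] -- P persons, R body regions, D channels."""
        x, _ = self.region_attn(x, x, x)     # regions interact within a person
        x = x.transpose(0, 1)                # [R, P, D]
        x, _ = self.person_attn(x, x, x)     # persons interact per region
        return x.transpose(0, 1)             # back to [P, R, D]

x = torch.randn(6, 9, 128)                   # 6 persons, 9 body regions
y = CrossInferenceBlock(128)(x)
```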

5.
IEEE Trans Pattern Anal Mach Intell ; 45(6): 7559-7576, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36395133

ABSTRACT

In the semi-supervised skeleton-based action recognition task, obtaining more discriminative information from both labeled and unlabeled data is a challenging problem. As the current mainstream approach, contrastive learning can learn more representations of augmented data, which can be considered the pretext task of action recognition. However, such methods still confront three main limitations: 1) they usually learn global-granularity features that cannot well reflect local motion information; 2) the positive/negative pairs are usually pre-defined, some of which are ambiguous; and 3) they generally measure the distance between positive/negative pairs only within the same granularity, which neglects the contrast between cross-granularity positive and negative pairs. To address these limitations, we propose a novel Multi-granularity Anchor-Contrastive representation Learning (dubbed MAC-Learning) approach to learn multi-granularity representations by conducting inter- and intra-granularity contrastive pretext tasks on the learnable and structural-link skeletons across three types of granularities covering local, context, and global views. To avoid the disturbance of ambiguous pairs from noisy and outlier samples, we design a more reliable Multi-granularity Anchor-Contrastive Loss (dubbed MAC-Loss) that measures the agreement/disagreement between high-confidence soft-positive/negative pairs based on an anchor graph, instead of the hard-positive/negative pairs in the conventional contrastive loss. Extensive experiments on both the NTU RGB+D and Northwestern-UCLA datasets show that the proposed MAC-Learning outperforms existing competitive methods in semi-supervised skeleton-based action recognition.
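A hedged sketch of the anchor-contrastive idea: pair agreement is weighted by soft confidences (here, an arbitrary matrix standing in for anchor-graph affinities) instead of hard positive/negative labels. The loss form and normalization are assumptions:

```python
# Sketch of a soft, confidence-weighted contrastive loss; the anchor-graph
# confidences `w` are mocked with random values for illustration only.
import torch
import torch.nn.functional as F

def soft_anchor_contrastive(z: torch.Tensor, w: torch.Tensor, tau: float = 0.1):
    """z: [B, D] embeddings; w: [B, B] soft pair confidences in [0, 1]."""
    z = F.normalize(z, dim=-1)
    logits = z @ z.t() / tau
    logits.fill_diagonal_(-1e9)                          # exclude self-pairs
    log_p = F.log_softmax(logits, dim=1)
    w = w / w.sum(dim=1, keepdim=True).clamp_min(1e-8)   # normalize confidences
    return -(w * log_p).sum(dim=1).mean()

B, D = 8, 64
z = torch.randn(B, D)
w = torch.rand(B, B)          # stand-in for anchor-graph soft confidences
w.fill_diagonal_(0.0)
loss = soft_anchor_contrastive(z, w)
```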

6.
IEEE Trans Image Process ; 31: 3852-3867, 2022.
Article in English | MEDLINE | ID: mdl-35617181

ABSTRACT

Semi-supervised skeleton-based action recognition is a challenging problem due to insufficient labeled data. To address this problem, some representative methods leverage contrastive learning to obtain more features from pre-augmented skeleton actions. Such methods usually adopt a two-stage pipeline: first randomly augment samples, then learn their representations via contrastive learning. Since the skeleton samples have already been randomly augmented, the representation ability of the subsequent contrastive learning is limited by the inconsistency between the augmentations and representations. Thus, we propose a novel X-invariant Contrastive Augmentation and Representation learning (X-CAR) framework to thoroughly obtain rotate-shear-scale (X for short) invariant features by learning augmentations and representations of skeleton sequences in a one-stage way. In X-CAR, a new Adaptive-combination Augmentation (AA) mechanism is designed to rotate, shear, and scale skeletons via learnable controlling factors in an adaptive rather than random way. These controlling factors are learned within the whole contrastive learning process, which facilitates consistency between the learned augmentations and representations of skeleton sequences. In addition, we relax the pre-definition of positive and negative samples to avoid the confusing allocation of ambiguous samples, and present a new Pull-Push Contrastive Loss (PPCL) to pull the augmented skeleton close to the original skeleton while pushing it away from other skeletons. Experimental results on both the NTU RGB+D and Northwestern-UCLA datasets show that the proposed X-CAR achieves higher accuracy than other competitive methods in the semi-supervised scenario.
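The adaptive-combination augmentation can be pictured as a differentiable affine transform whose rotation, shear, and scale factors are learnable parameters updated by the contrastive objective. A minimal sketch for 2D skeletons, with a parameterization of our own choosing:

```python
# Sketch of a learnable rotate-shear-scale augmentation for 2D skeletons, in
# the spirit of the AA mechanism; the parameterization is an assumption.
import torch
import torch.nn as nn

class AdaptiveAugment(nn.Module):
    def __init__(self):
        super().__init__()
        # Learnable controlling factors, updated by the contrastive objective.
        self.theta = nn.Parameter(torch.zeros(1))   # rotation angle
        self.shear = nn.Parameter(torch.zeros(1))   # shear factor
        self.scale = nn.Parameter(torch.ones(1))    # isotropic scale

    def forward(self, skel: torch.Tensor) -> torch.Tensor:
        """skel: [..., J, 2] joint coordinates."""
        c, s = torch.cos(self.theta), torch.sin(self.theta)
        rot = torch.stack([torch.cat([c, -s]), torch.cat([s, c])])   # [2, 2]
        shear = torch.eye(2) + self.shear * torch.tensor([[0.0, 1.0],
                                                          [0.0, 0.0]])
        return self.scale * skel @ (rot @ shear).t()

aug = AdaptiveAugment()
skel = torch.randn(4, 16, 25, 2)     # batch, frames, joints, (x, y)
out = aug(skel)                      # differentiable w.r.t. the factors
```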

7.
IEEE Trans Neural Netw Learn Syst ; 33(12): 7574-7588, 2022 12.
Article in English | MEDLINE | ID: mdl-34138718

ABSTRACT

Group activity recognition (GAR), which aims at understanding the behavior of a group of people in a video clip, has received increasing attention recently. Nevertheless, most existing solutions ignore that not all persons contribute to the group activity of the scene equally. That is to say, the contribution from different individual behaviors to the group activity differs; meanwhile, the contribution from people with different spatial positions also differs. To this end, we propose a novel Position-aware Participation-Contributed Temporal Dynamic Model (P2CTDM), in which two types of key actors are constructed and learned. Specifically, we focus on the behaviors of key actors, who either maintain steady motions (long moving time, called long motions) or display remarkable motions at a certain moment (closely related to other people and the group activity, called flash motions). For capturing long motions, we rank individual motions according to their intensity measured by stacking optical flows. For capturing flash motions that are closely related to other people, we design a position-aware interaction module (PIM) that simultaneously considers feature similarity and position information. Beyond that, for capturing flash motions that are highly related to the group activity, we present an aggregation long short-term memory (Agg-LSTM) to fuse the outputs from the PIM via time-varying trainable attention factors. Four widely used benchmarks are adopted to evaluate the performance of the proposed P2CTDM against the state of the art.


Subject(s)
Motion Perception; Neural Networks, Computer; Humans; Recognition, Psychology; Attention
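The position-aware interaction module, as described, combines feature similarity with position information. Below is a minimal sketch under that reading; the additive distance penalty and its weight are hypothetical choices:

```python
# Loose sketch of position-aware interaction: attention weights combine
# feature similarity with a penalty on spatial distance. Details are assumed.
import torch
import torch.nn.functional as F

def position_aware_attention(feats, centers, alpha: float = 1.0):
    """feats: [N, D] person features; centers: [N, 2] person positions."""
    f = F.normalize(feats, dim=-1)
    sim = f @ f.t()                               # [N, N] feature similarity
    dist = torch.cdist(centers, centers)          # pairwise spatial distance
    attn = F.softmax(sim - alpha * dist, dim=1)   # nearby, similar people win
    return attn @ feats                           # position-aware aggregation

feats = torch.randn(6, 128)                       # 6 people in the scene
centers = torch.rand(6, 2)
out = position_aware_attention(feats, centers)
```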
8.
IEEE Trans Pattern Anal Mach Intell ; 44(6): 3300-3315, 2022 06.
Article in English | MEDLINE | ID: mdl-33434123

ABSTRACT

Human motion prediction aims to generate future motions based on observed human motions. Witnessing the success of Recurrent Neural Networks (RNN) in modeling sequential data, recent works utilize RNNs to model human-skeleton motion on the observed motion sequence and predict future human motions. However, these methods disregard the existence of spatial coherence among joints and temporal evolution among skeletons, which reflect the crucial characteristics of human motion in spatiotemporal space. To this end, we propose a novel Skeleton-joint Co-attention Recurrent Neural Network (SC-RNN) to capture the spatial coherence among joints and the temporal evolution among skeletons simultaneously on a skeleton-joint co-attention feature map in spatiotemporal space. First, a skeleton-joint feature map is constructed as the representation of the observed motion sequence. Second, we design a new Skeleton-joint Co-Attention (SCA) mechanism to dynamically learn a skeleton-joint co-attention feature map from this skeleton-joint feature map, which refines the useful observed motion information for predicting future motion. Third, a variant of GRU embedded with SCA collaboratively models the human-skeleton motion and human-joint motion in spatiotemporal space by regarding the skeleton-joint co-attention feature map as the motion context. Experimental results on human motion prediction demonstrate that the proposed method outperforms competing methods.


Subject(s)
Algorithms; Neural Networks, Computer; Attention; Humans; Motion; Skeleton
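One plausible reading of the skeleton-joint co-attention map is an outer product of a per-skeleton (temporal) attention vector and a per-joint attention vector over the feature map. The sketch below follows that reading and is not the authors' code:

```python
# Hypothetical co-attention over a [T, J, D] skeleton-joint feature map: row
# (skeleton) and column (joint) attentions are combined multiplicatively.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoAttention(nn.Module):
    def __init__(self, d: int):
        super().__init__()
        self.skel_score = nn.Linear(d, 1)    # scores each skeleton (time step)
        self.joint_score = nn.Linear(d, 1)   # scores each joint

    def forward(self, fmap: torch.Tensor) -> torch.Tensor:
        """fmap: [T, J, D] skeleton-joint feature map."""
        a_t = F.softmax(self.skel_score(fmap.mean(1)), dim=0)   # [T, 1]
        a_j = F.softmax(self.joint_score(fmap.mean(0)), dim=0)  # [J, 1]
        co = a_t.unsqueeze(1) * a_j.unsqueeze(0)                # [T, J, 1]
        return co * fmap               # refined co-attentive feature map

fmap = torch.randn(16, 25, 64)         # 16 skeletons (frames), 25 joints
refined = CoAttention(64)(fmap)
```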
9.
IEEE Trans Pattern Anal Mach Intell ; 44(2): 636-647, 2022 02.
Article in English | MEDLINE | ID: mdl-31329548

ABSTRACT

This work aims to address the group activity recognition problem by exploring human motion characteristics. Traditional methods hold that the motions of all persons contribute equally to the group activity, which suppresses the contributions of some relevant motions to the whole activity while overstating some irrelevant ones. To address this problem, we present a Spatio-Temporal Context Coherence (STCC) constraint and a Global Context Coherence (GCC) constraint to capture the relevant motions and quantify their contributions to the group activity, respectively. Based on this, we propose a novel Coherence Constrained Graph LSTM (CCG-LSTM) with STCC and GCC to effectively recognize group activity, modeling the relevant motions of individuals while suppressing the irrelevant ones. Specifically, to capture the relevant motions, we build the CCG-LSTM with a temporal confidence gate and a spatial confidence gate to control the memory-state updating in terms of the temporally previous state and the spatially neighboring states, respectively. In addition, an attention mechanism is employed to quantify the contribution of a certain motion by measuring the consistency between it and the whole activity at each time step. Finally, we conduct experiments on two widely used datasets to illustrate the effectiveness of the proposed CCG-LSTM compared with state-of-the-art methods.


Subject(s)
Algorithms; Neural Networks, Computer; Humans; Motion
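The confidence-gate idea of admitting memory updates in proportion to their coherence with the previous state might be sketched as below; the cosine-based gate is our stand-in for the paper's formulation:

```python
# Minimal sketch of a confidence-gated memory update: the candidate cell state
# is admitted in proportion to its coherence with the previous state. This
# illustrates only the gating idea, not the full CCG-LSTM.
import torch
import torch.nn.functional as F

def confidence_gated_update(c_prev: torch.Tensor, c_cand: torch.Tensor):
    """c_prev, c_cand: [B, D] previous and candidate cell states."""
    conf = F.cosine_similarity(c_cand, c_prev, dim=-1, eps=1e-8)
    conf = conf.clamp(min=0.0).unsqueeze(-1)       # keep only coherent updates
    return conf * c_cand + (1.0 - conf) * c_prev   # gated memory update

B, D = 4, 32
c_new = confidence_gated_update(torch.randn(B, D), torch.randn(B, D))
```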
10.
Article in English | MEDLINE | ID: mdl-34138719

ABSTRACT

Cross-modality visible-infrared person reidentification (VI-ReID), which aims to retrieve pedestrian images captured by both visible and infrared cameras, is a challenging but essential task for smart surveillance systems. The huge barrier between visible and infrared images leads to large cross-modality discrepancy and intraclass variations. Most existing VI-ReID methods tend to learn discriminative modality-sharable features based on either global or part-based representations, lacking effective optimization objectives. In this article, we propose a novel global-local multichannel (GLMC) network for VI-ReID, which can learn multigranularity representations based on both global and local features. The coarse- and fine-grained information complement each other to form a more discriminative feature descriptor. Besides, we propose a novel center loss function that simultaneously improves the intraclass cross-modality similarity and enlarges the interclass discrepancy, explicitly handling the cross-modality discrepancy issue and avoiding the model-fluctuation problem. Experimental results on two public datasets demonstrate the superiority of the proposed method over state-of-the-art approaches.
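A hedged sketch of a center loss with the two stated goals: pulling same-identity features from both modalities toward a shared center while pushing different-identity centers apart by a margin. The exact formulation here is an assumption, not the paper's loss:

```python
# Illustrative cross-modality center loss; formulation and margin are assumed.
import torch
import torch.nn.functional as F

def cross_modality_center_loss(vis, ir, labels, margin: float = 0.5):
    """vis, ir: [B, D] features from each modality; labels: [B] identity ids."""
    ids = labels.unique()                                       # sorted ids
    centers = torch.stack([torch.cat([vis[labels == i], ir[labels == i]]).mean(0)
                           for i in ids])                       # [C, D]
    idx = torch.searchsorted(ids, labels)
    # Intra-class: both modalities close to the shared identity center.
    intra = ((vis - centers[idx]) ** 2 + (ir - centers[idx]) ** 2).sum(1).mean()
    # Inter-class: identity centers at least `margin` apart.
    d = torch.cdist(centers, centers)
    off = ~torch.eye(len(ids), dtype=torch.bool)
    inter = F.relu(margin - d[off]).mean()
    return intra + inter

vis, ir = torch.randn(8, 64), torch.randn(8, 64)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
loss = cross_modality_center_loss(vis, ir, labels)
```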

11.
IEEE Trans Neural Netw Learn Syst ; 32(2): 663-674, 2021 02.
Article in English | MEDLINE | ID: mdl-32275607

ABSTRACT

This article aims to tackle the problem of group activity recognition in multiple-person scenes. To model the group activity with multiple persons, most long short-term memory (LSTM)-based methods first learn person-level action representations via several LSTMs and then integrate all the person-level representations into a following LSTM to learn the group-level activity representation. This type of solution is a two-stage strategy, which neglects the "host-parasite" relationship between the group-level activity ("host") and person-level actions ("parasite") in spatiotemporal space. To this end, we propose a novel graph LSTM-in-LSTM (GLIL) for group activity recognition that models the person-level actions and the group-level activity simultaneously. GLIL is a "host-parasite" architecture, which can be seen as several person LSTMs (P-LSTMs) in the local view or a graph LSTM (G-LSTM) in the global view. Specifically, P-LSTMs model the person-level actions based on the interactions among persons. Meanwhile, G-LSTM models the group-level activity, where the person-level motion information in multiple P-LSTMs is selectively integrated and stored in G-LSTM based on its contribution to the inference of the group activity class. Furthermore, to use person-level temporal features instead of person-level static features as the input of GLIL, we introduce a residual LSTM with a residual connection to learn person-level residual features, consisting of temporal and static features. Experimental results on two public data sets illustrate the effectiveness of the proposed GLIL compared with state-of-the-art methods.


Subject(s)
Host-Parasite Interactions; Mass Gatherings; Memory, Long-Term; Memory, Short-Term; Algorithms; Deep Learning; Humans; Models, Neurological; Neural Networks, Computer; Social Interaction
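The selective integration of person-level states into the group-level LSTM might be sketched as contribution-weighted pooling feeding an LSTM cell; the scoring head and all names below are hypothetical:

```python
# Schematic sketch of selective integration: person-level hidden states are
# weighted by estimated contribution before entering a group-level cell.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelectiveIntegration(nn.Module):
    def __init__(self, d: int):
        super().__init__()
        self.score = nn.Linear(d, 1)      # contribution of each person state
        self.g_cell = nn.LSTMCell(d, d)   # group-level recurrent cell

    def forward(self, person_h, g_state):
        """person_h: [N, D] P-LSTM states; g_state: (h, c), each [1, D]."""
        w = F.softmax(self.score(person_h), dim=0)       # [N, 1]
        pooled = (w * person_h).sum(0, keepdim=True)     # [1, D]
        return self.g_cell(pooled, g_state)

N, D = 5, 64
mod = SelectiveIntegration(D)
h = c = torch.zeros(1, D)
h, c = mod(torch.randn(N, D), (h, c))
```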
12.
IEEE Trans Pattern Anal Mach Intell ; 43(3): 1110-1118, 2021 03.
Article in English | MEDLINE | ID: mdl-31545711

ABSTRACT

In this work, we aim to address the problem of human interaction recognition in videos by exploring the long-term inter-related dynamics among multiple persons. Recently, Long Short-Term Memory (LSTM) has become a popular choice for modeling individual dynamics in single-person action recognition, due to its ability to capture temporal motion information over a range of time. However, most existing LSTM-based methods focus only on capturing the dynamics of human interaction by simply combining all individuals' dynamics or modeling them as a whole. Such methods neglect the inter-related dynamics of how human interactions change over time. To this end, we propose a novel Hierarchical Long Short-Term Concurrent Memory (H-LSTCM) to model the long-term inter-related dynamics among a group of persons for recognizing human interactions. Specifically, we first feed each person's static features into a Single-Person LSTM to model the single-person dynamic. Subsequently, at each time step, the outputs of all Single-Person LSTM units are fed into a novel Concurrent LSTM (Co-LSTM) unit, which mainly consists of multiple sub-memory units, a new cell gate, and a new co-memory cell. In the Co-LSTM unit, each sub-memory unit stores individual motion information, while the unit selectively integrates and stores inter-related motion information between multiple interacting persons from the sub-memory units via the cell gate and co-memory cell, respectively. Extensive experiments on several public datasets validate the effectiveness of the proposed H-LSTCM by comparing it against baseline and state-of-the-art methods.


Subject(s)
Algorithms; Neural Networks, Computer; Humans; Memory, Long-Term; Motion; Recognition, Psychology
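The co-memory cell, as described, selectively absorbs inter-related information from per-person sub-memories through a cell gate. A schematic sketch under that reading, with the gating form as an assumption:

```python
# Schematic co-memory sketch: a shared memory absorbs gated contributions
# from per-person sub-memories; the gate design here is our assumption.
import torch
import torch.nn as nn

class CoMemory(nn.Module):
    def __init__(self, d: int):
        super().__init__()
        self.gate = nn.Linear(2 * d, d)   # cell gate on (sub-memory, co-memory)

    def forward(self, sub_mems: torch.Tensor, co_mem: torch.Tensor):
        """sub_mems: [N, D] per-person memories; co_mem: [D] shared memory."""
        g = torch.sigmoid(self.gate(torch.cat(
            [sub_mems, co_mem.expand_as(sub_mems)], dim=-1)))   # [N, D]
        contrib = (g * sub_mems).mean(0)   # gated inter-related information
        return co_mem + contrib            # updated co-memory cell

co = CoMemory(64)
new_co = co(torch.randn(5, 64), torch.zeros(64))
```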
13.
IEEE Trans Pattern Anal Mach Intell ; 41(8): 2027-2034, 2019 08.
Article in English | MEDLINE | ID: mdl-30908192

ABSTRACT

Image retagging aims to improve the tag quality of social images by completing missing tags, rectifying noise-corrupted tags, and assigning new high-quality tags. Recent approaches simultaneously explore visual, user, and tag information to improve the performance of image retagging by mining tag-image-user associations. However, such methods become computationally infeasible with the rapidly increasing numbers of images, tags, and users. It has been proven that an anchor graph can significantly accelerate large-scale graph-based learning by exploring only a small number of anchor points. Inspired by this, we propose a novel Social anchor-Unit GrAph Regularized Tensor Completion (SUGAR-TC) method to efficiently refine the tags of social images, which is insensitive to the scale of the data. First, we construct an anchor-unit graph across multiple domains (e.g., the image and user domains) rather than a traditional anchor graph in a single domain. Second, tensor completion based on Social anchor-Unit GrAph Regularization (SUGAR) is implemented to refine the tags of the anchor images. Finally, we efficiently assign tags to non-anchor images by leveraging the relationship between the non-anchor units and the anchor units. Experimental results on a real-world social image database demonstrate the effectiveness and efficiency of SUGAR-TC, which outperforms state-of-the-art methods.
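The anchor-unit efficiency argument rests on propagating refined anchor tags to non-anchor items through a small, sparse affinity matrix. A toy sketch of that propagation step, with all sizes and the nearest-anchor weighting purely illustrative:

```python
# Toy anchor-graph propagation: non-anchor items inherit tags from a small
# set of anchors through a sparse row-normalized affinity matrix Z.
import torch

n, m, t = 1000, 32, 20            # items, anchors, tags (illustrative sizes)
feats = torch.randn(n, 64)
anchors = torch.randn(m, 64)

# Sparse affinities: keep each item's 3 nearest anchors, softmax-normalize.
d = torch.cdist(feats, anchors)
topv, topi = d.topk(3, dim=1, largest=False)
Z = torch.zeros(n, m).scatter_(1, topi, torch.softmax(-topv, dim=1))

anchor_tags = torch.rand(m, t)    # refined tags of the anchor images
item_tags = Z @ anchor_tags       # cheap propagation to all non-anchor items
```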

14.
IEEE Trans Image Process ; 28(5): 2173-2186, 2019 May.
Article in English | MEDLINE | ID: mdl-30507504

ABSTRACT

Hashing has attracted increasing research attention in recent years due to its high efficiency of computation and storage in image retrieval. Recent works have demonstrated the superiority of simultaneously learning feature representations and hash functions with deep neural networks. However, most existing deep hashing methods directly learn hash functions by encoding global semantic information while ignoring the local spatial information of images. The loss of local spatial structure creates a performance bottleneck for the hash functions, thereby limiting their application to accurate similarity retrieval. In this paper, we propose a novel deep ordinal hashing (DOH) method, which learns ordinal representations to generate ranking-based hash codes by leveraging the ranking structure of the feature space from both local and global views. In particular, to effectively build the ranking structure, we propose to learn the rank correlation space by simultaneously exploiting the local spatial information from a fully convolutional network and the global semantic information from a convolutional neural network. More specifically, an effective spatial attention model is designed to capture the local spatial information by selectively attending to well-specified locations closely related to the target objects. In this hashing framework, the local spatial and global semantic nature of images is captured in an end-to-end ranking-to-hashing manner. Experimental results on three widely used datasets demonstrate that the proposed DOH method significantly outperforms state-of-the-art hashing methods.
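The spatial attention branch can be pictured as a per-location gate over a convolutional feature map that pools a local descriptor. The sketch below assumes a [B, C, H, W] input and a 1x1 scoring convolution; both are our choices, not the DOH architecture:

```python
# Minimal spatial-attention sketch over an FCN feature map; details assumed.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.Conv2d(channels, 1, kernel_size=1)  # per-location score

    def forward(self, fmap: torch.Tensor) -> torch.Tensor:
        """fmap: [B, C, H, W] from a fully convolutional network."""
        a = torch.sigmoid(self.attn(fmap))   # [B, 1, H, W] gate in (0, 1)
        weighted = a * fmap                  # emphasize object-related regions
        # Attention-weighted average pooling into a local descriptor [B, C].
        return weighted.flatten(2).sum(-1) / a.flatten(2).sum(-1).clamp_min(1e-8)

fmap = torch.randn(2, 512, 14, 14)
local_feat = SpatialAttention(512)(fmap)     # [2, 512] local descriptor
```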

15.
IEEE Trans Pattern Anal Mach Intell ; 40(4): 905-917, 2018 04.
Article in English | MEDLINE | ID: mdl-28534768

ABSTRACT

Age progression is defined as aesthetically re-rendering an individual's face at any future age. In this work, we aim to automatically render aging faces in a personalized way. Basically, for each age group, we learn an aging dictionary to reveal its aging characteristics (e.g., wrinkles), where the dictionary bases corresponding to the same index yet from two neighboring aging dictionaries form a particular aging pattern across these two age groups, and a linear combination of all these patterns expresses a particular personalized aging process. Moreover, two factors are taken into consideration in the dictionary learning process. First, beyond the aging dictionaries, each person may have extra personalized facial characteristics, e.g., a mole, which are invariant during aging. Second, it is challenging or even impossible to collect faces of all age groups for a particular person, yet much easier and more practical to obtain face pairs from neighboring age groups. To this end, we propose a novel Bi-level Dictionary Learning based Personalized Age Progression (BDL-PAP) method, in which bi-level dictionary learning is formulated to learn the aging dictionaries based on face pairs from neighboring age groups. Extensive experiments demonstrate the advantages of the proposed BDL-PAP over other state-of-the-art methods in terms of personalized age progression, as well as the performance gain for cross-age face verification by synthesizing aging faces.


Subject(s)
Aging/physiology; Deep Learning; Face/physiology; Image Processing, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Adolescent; Adult; Aged; Aged, 80 and over; Child; Databases, Factual; Female; Humans; Male; Middle Aged; Young Adult
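The aging-pattern mechanism pairs same-index bases from neighboring age-group dictionaries, so a face coded on one dictionary can be re-rendered with the next. A toy sketch of that coding-and-transfer step, where least squares stands in for the actual sparse solver and all dictionaries are random:

```python
# Toy reading of the aging-pattern idea: shared codes across two neighboring
# age-group dictionaries; this is schematic, not the BDL-PAP solver.
import torch

d, k = 1024, 64                          # face-feature dim, dictionary atoms
D_young = torch.randn(d, k)              # dictionary for age group g
D_old = torch.randn(d, k)                # dictionary for age group g + 1

face = torch.randn(d)
# Code on the young dictionary (least squares as a stand-in for sparse coding).
alpha = torch.linalg.lstsq(D_young, face.unsqueeze(1)).solution
aged_face = (D_old @ alpha).squeeze(1)   # same code, older dictionary
```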
16.
IEEE Trans Pattern Anal Mach Intell ; 39(8): 1662-1674, 2017 08.
Article in English | MEDLINE | ID: mdl-28113651

ABSTRACT

Social image tag refinement, which aims to improve tag quality by automatically completing missing tags and rectifying noise-corrupted ones, is an essential component of social image search. Conventional approaches mainly focus on exploring the visual and tag information, without considering the user information, which often reveals important hints on the (in)correct tags of social images. Towards this end, we propose a novel tri-clustered tensor completion framework to collaboratively explore these three kinds of information to improve the performance of social image tag refinement. Specifically, the inter-relations among users, images, and tags are modeled by a tensor, and the intra-relations between users, images, and tags are explored by three regularizations, respectively. To address the challenges of super-sparse and large-scale tensor factorization, which demands expensive computation and memory, we propose a novel tri-clustering method to divide the tensor into a number of sub-tensors by simultaneously clustering users, images, and tags into tri-clusters. We then investigate two strategies to complete these sub-tensors by considering the (in)dependence between them. Experimental results on a real-world social image database demonstrate the superiority of the proposed method compared with state-of-the-art methods.
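At its core, the completion step fits a low-rank factorization to the observed user-image-tag entries. A toy CP-factorization sketch of that objective, omitting the tri-clustering and the three regularizations; all sizes and the rank are illustrative:

```python
# Toy tensor completion by rank-8 CP factorization over observed entries.
import torch

U = torch.randn(50, 8, requires_grad=True)    # users  x rank
V = torch.randn(200, 8, requires_grad=True)   # images x rank
W = torch.randn(100, 8, requires_grad=True)   # tags   x rank

obs = torch.stack([torch.randint(50, (500,)), torch.randint(200, (500,)),
                   torch.randint(100, (500,))], dim=1)   # observed indices
vals = torch.rand(500)                                   # observed entries

opt = torch.optim.Adam([U, V, W], lr=0.05)
for _ in range(200):
    pred = (U[obs[:, 0]] * V[obs[:, 1]] * W[obs[:, 2]]).sum(1)
    loss = ((pred - vals) ** 2).mean() + 1e-3 * (U.norm() + V.norm() + W.norm())
    opt.zero_grad()
    loss.backward()
    opt.step()
```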

17.
IEEE Trans Image Process ; 25(6): 2469-79, 2016 06.
Article in English | MEDLINE | ID: mdl-27019492

ABSTRACT

Similarity-preserving hashing is a commonly used method for nearest neighbor search in large-scale image retrieval. For image retrieval, deep-network-based hashing methods are appealing, since they can simultaneously learn effective image representations and compact hash codes. This paper focuses on deep-network-based hashing for multi-label images, each of which may contain objects of multiple categories. In most existing hashing methods, each image is represented by one piece of hash code, which is referred to as semantic hashing. This setting may be suboptimal for multi-label image retrieval. To solve this problem, we propose a deep architecture that learns instance-aware image representations for multi-label image data, organized in multiple groups with each group containing the features for one category. The instance-aware representations not only benefit semantic hashing but can also be used in category-aware hashing, in which an image is represented by multiple hash codes, each corresponding to a category. Extensive evaluations on several benchmark datasets demonstrate that, for both semantic hashing and category-aware hashing, the proposed method shows substantial improvements over state-of-the-art supervised and unsupervised hashing methods.
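The semantic versus category-aware distinction can be pictured as one pooled code per image versus one code per category group. A minimal sketch, with sign binarization and the group layout as assumptions:

```python
# Sketch of category-aware vs. semantic hashing from grouped features.
import torch

B, C, D = 4, 10, 48                   # batch, categories, bits per code
group_feats = torch.randn(B, C, D)    # instance-aware per-category features

# Category-aware hashing: one +/-1 code per category group.
category_codes = torch.sign(group_feats)            # [B, C, D]

# Semantic hashing variant: pool groups into a single code per image.
semantic_code = torch.sign(group_feats.mean(1))     # [B, D]
```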
