Results 1 - 2 of 2
1.
IEEE Trans Pattern Anal Mach Intell; 45(12): 14920-14937, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37672380

ABSTRACT

Gait depicts an individual's unique and distinguishing walking pattern and has become one of the most promising biometric features for human identification. As a fine-grained recognition task, gait recognition is easily affected by many factors and usually requires a large amount of fully annotated data, which is costly to obtain. This paper proposes a large-scale self-supervised benchmark for gait recognition with contrastive learning, aiming to learn general gait representations from massive unlabelled walking videos for practical applications by offering informative walking priors and diverse real-world variations. Specifically, we collect a large-scale unlabelled gait dataset, GaitLU-1M, consisting of 1.02M walking sequences, and propose a conceptually simple yet empirically powerful baseline model, GaitSSB. Experimentally, we evaluate the pre-trained model on four widely used gait benchmarks, CASIA-B, OU-MVLP, GREW and Gait3D, with and without transfer learning. The unsupervised results are comparable to or even better than those of early model-based and GEI-based methods. After transfer learning, GaitSSB outperforms existing methods by a large margin in most cases and also showcases superior generalization capacity. Further experiments indicate that pre-training can save about 50% and 80% of the annotation costs of GREW and Gait3D, respectively. Theoretically, we discuss the critical issues for a gait-specific contrastive framework and present some insights for further study. As far as we know, GaitLU-1M is the first large-scale unlabelled gait dataset, and GaitSSB is the first method to achieve remarkable unsupervised results on the aforementioned benchmarks.


Subject(s)
Algorithms , Benchmarking , Humans , Gait , Walking , Videotape Recording
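The contrastive pre-training described in the abstract above can be illustrated with an InfoNCE-style loss over paired embeddings of two augmented views of the same walking sequence. This is a generic sketch of the contrastive-learning idea, not GaitSSB's actual loss or architecture; the function name, temperature value, and numpy formulation are assumptions for illustration.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE-style contrastive loss (illustrative sketch, not GaitSSB's exact loss).

    z1, z2: (N, D) arrays of L2-normalized embeddings of two augmented views
    of the same N sequences; positives are row-aligned, all other rows in the
    batch serve as negatives.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)           # (2N, D)
    sim = z @ z.T / temperature                    # cosine similarities (unit-norm rows)
    np.fill_diagonal(sim, -np.inf)                 # exclude self-similarity
    pos_idx = (np.arange(2 * n) + n) % (2 * n)     # positive for row i is row i+N (mod 2N)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos_idx].mean()
```

When the two views embed identically, the positive pair dominates the softmax and the loss is near zero; for unrelated embeddings the loss is substantially higher, which is the signal that drives representation learning.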
2.
IEEE Trans Neural Netw Learn Syst; 34(11): 8978-8988, 2023 Nov.
Article in English | MEDLINE | ID: mdl-35294358

ABSTRACT

Gait recognition has received increasing attention since it can be conducted at a long distance in a nonintrusive way and remains applicable under clothing changes. Most existing methods take the silhouettes of gait sequences as input and learn a unified representation from multiple silhouettes to match probe and gallery. However, these models lack interpretability; e.g., it is not clear which silhouette in a gait sequence and which part of the human body are relatively more important for recognition. In this work, we propose a gait quality aware network (GQAN) for gait recognition which explicitly assesses the quality of each silhouette and each part via two blocks: the frame quality block (FQBlock) and the part quality block (PQBlock). Specifically, FQBlock works in a squeeze-and-excitation style to recalibrate the features for each silhouette, and the scores of all the channels are summed as a frame quality indicator. PQBlock predicts a score for each part, which is used to compute the weighted distance between the probe and gallery. In particular, we propose a part quality loss (PQLoss) which enables GQAN to be trained in an end-to-end manner with only sequence-level identity annotations. This work moves toward the interpretability of silhouette-based gait recognition, and our method also achieves very competitive performance on CASIA-B and OUMVLP.


Subject(s)
Algorithms , Pattern Recognition, Automated , Humans , Pattern Recognition, Automated/methods , Neural Networks, Computer , Gait , Learning
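The part-weighted matching described in the abstract above can be sketched as follows. In the paper, PQBlock predicts each part's quality score with a learned sub-network; here the scores are simply taken as given inputs, and the softmax normalization, function names, and Euclidean per-part distance are assumptions for illustration.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def part_weighted_distance(probe_parts, gallery_parts, part_scores):
    """Quality-weighted probe-gallery distance (illustrative sketch of the PQBlock idea).

    probe_parts, gallery_parts: (P, D) per-part feature vectors.
    part_scores: (P,) raw quality scores; softmax turns them into weights so
    that low-quality parts contribute less to the final distance.
    """
    w = softmax(np.asarray(part_scores, dtype=float))
    d = np.linalg.norm(probe_parts - gallery_parts, axis=1)  # per-part Euclidean distance
    return float((w * d).sum())
```

Down-weighting a corrupted part (e.g., an occluded leg region) shrinks its contribution, so two sequences of the same person still match closely even when one part is unreliable.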