1.
Article in English | MEDLINE | ID: mdl-36107890

ABSTRACT

Recently, deep metric learning (DML) has achieved great success. Some existing DML methods propose adaptive sample mining strategies that learn to weight samples, yielding promising performance. However, these methods draw on only a small memory (e.g., a single training batch), which limits their efficacy. In this work, we introduce a data-driven method, the meta-mining strategy with semiglobal information (MMSI), which applies meta-learning to learn sample weights throughout training, yielding an adaptive mining strategy. To introduce richer information than a single training batch provides, we carefully exploit the validation set of meta-learning by implicitly adding validation sample information to training. Furthermore, motivated by recent self-supervised learning, we introduce a dictionary (memory) that maintains a very large and diverse pool of embeddings. Together with the validation set, this dictionary presents much richer information to training, leading to promising performance. In addition, we propose a new theoretical framework that formulates pairwise and tripletwise metric learning losses in a unified form. This framework offers new insights to the community and allows us to generalize MMSI to many existing DML methods. We conduct extensive experiments on three public datasets: CUB200-2011, Cars-196, and Stanford Online Products (SOP). Results show that our method achieves state-of-the-art or highly competitive performance. Our source code is available at https://github.com/NUST-Machine-Intelligence-Laboratory/MMSI.
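The abstract names two mechanisms that a short sketch can make concrete: a large dictionary of past embeddings (as in self-supervised learning) and a weighting network tuned by a meta step on a validation batch. The PyTorch sketch below is illustrative only and is not the released MMSI code; all names (WeightNet, pair_loss), sizes, and the one-step lookahead meta update are assumptions.

# Hedged sketch, not the authors' implementation: a dictionary of past
# embeddings plus a per-pair weighting net updated by a one-step bi-level
# (meta) gradient on a validation batch. Names and sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

DIM, EMB, QUEUE, CLASSES = 32, 16, 512, 10

class WeightNet(nn.Module):
    """Maps a pair's cosine similarity to a mining weight in (0, 1)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
    def forward(self, sims):                       # sims: [B, K]
        return torch.sigmoid(self.net(sims.unsqueeze(-1))).squeeze(-1)

def pair_loss(emb, labels, q_feats, q_labels, weights=None, margin=0.5):
    """Contrastive pair loss against the dictionary; optionally weighted."""
    sims = emb @ q_feats.t()                       # [B, K] similarities
    pos = (labels[:, None] == q_labels[None, :]).float()
    per_pair = pos * F.relu(1.0 - sims) + (1 - pos) * F.relu(sims - margin)
    if weights is not None:
        per_pair = weights * per_pair
    return per_pair.mean()

embedder = nn.Linear(DIM, EMB)                     # stand-in for a deep backbone
weight_net = WeightNet()
opt = torch.optim.SGD(embedder.parameters(), lr=0.1)
meta_opt = torch.optim.SGD(weight_net.parameters(), lr=0.01)

# dictionary of past (normalized) embeddings and their labels
q_feats = F.normalize(torch.randn(QUEUE, EMB), dim=1)
q_labels = torch.randint(0, CLASSES, (QUEUE,))
x_tr, y_tr = torch.randn(64, DIM), torch.randint(0, CLASSES, (64,))
x_val, y_val = torch.randn(64, DIM), torch.randint(0, CLASSES, (64,))

# meta step: differentiate validation loss through a lookahead update
emb_tr = F.normalize(embedder(x_tr), dim=1)
w = weight_net(emb_tr @ q_feats.t())
train_loss = pair_loss(emb_tr, y_tr, q_feats, q_labels, weights=w)
gW, gb = torch.autograd.grad(train_loss, (embedder.weight, embedder.bias),
                             create_graph=True)
W2, b2 = embedder.weight - 0.1 * gW, embedder.bias - 0.1 * gb  # lookahead
emb_val = F.normalize(x_val @ W2.t() + b2, dim=1)
val_loss = pair_loss(emb_val, y_val, q_feats, q_labels)        # unweighted
meta_opt.zero_grad()
val_loss.backward()                 # gradient reaches weight_net via lookahead
meta_opt.step()

# ordinary step with the (now updated) mining weights
emb_tr = F.normalize(embedder(x_tr), dim=1)
w = weight_net(emb_tr @ q_feats.t()).detach()
opt.zero_grad()
pair_loss(emb_tr, y_tr, q_feats, q_labels, weights=w).backward()
opt.step()

# enqueue the new embeddings so the dictionary stays large and diverse
q_feats = torch.cat([emb_tr.detach(), q_feats])[:QUEUE]
q_labels = torch.cat([y_tr, q_labels])[:QUEUE]

The detached weights in the ordinary step keep the weighting net fixed during the embedder update; only the meta step, which sees how training weights affect held-out validation loss through the lookahead, adjusts it.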

2.
Entropy (Basel); 24(4), 2022 Mar 25.
Article in English | MEDLINE | ID: mdl-35455120

ABSTRACT

This work proposes a new computational framework for learning a structured generative model for real-world datasets. In particular, we propose to learn a Closed-loop Transcription between a multi-class, multi-dimensional data distribution and a Linear discriminative representation (CTRL) in a feature space consisting of multiple independent multi-dimensional linear subspaces. We argue that the optimal encoding and decoding mappings can be formulated as a two-player minimax game between the encoder and decoder over the learned representation. A natural utility function for this game is the so-called rate reduction, a simple information-theoretic measure of distance between mixtures of subspace-like Gaussians in the feature space. Our formulation draws inspiration from closed-loop error feedback in control systems and avoids the expensive evaluation and minimization of approximated distances between arbitrary distributions in either the data space or the feature space. To a large extent, this new formulation unifies the concepts and benefits of auto-encoding and GANs and naturally extends them to learning a representation that is both discriminative and generative for multi-class, multi-dimensional real-world data. Our extensive experiments on many benchmark image datasets demonstrate the great potential of this new closed-loop formulation: under fair comparison, the visual quality of the learned decoder and the classification performance of the encoder are competitive with, and arguably better than, existing methods based on GANs, VAEs, or a combination of both. Unlike in existing generative models, the learned features of the multiple classes are structured rather than hidden: different classes are explicitly mapped onto corresponding independent principal subspaces in the feature space, and diverse visual attributes within each class are modeled by the independent principal components within each subspace.
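The utility function the abstract refers to, rate reduction, has a simple closed form: the coding rate of all features minus the weighted coding rates of the class-conditional parts (the MCR^2 measure of Yu et al.). The PyTorch sketch below states that quantity under illustrative shapes and epsilon; note that in CTRL the full game utility also includes rate-reduction terms between features of real and transcribed samples, which the encoder maximizes and the decoder minimizes.

# Hedged sketch of the rate-reduction measure, not the full CTRL utility.
import torch
import torch.nn.functional as F

def coding_rate(Z, eps=0.5):
    """R(Z) = 1/2 * logdet(I + d/(n*eps^2) * Z Z^T), with Z of shape [d, n]."""
    d, n = Z.shape
    return 0.5 * torch.logdet(torch.eye(d) + (d / (n * eps**2)) * Z @ Z.t())

def rate_reduction(Z, labels, num_classes, eps=0.5):
    """Delta R = R(Z) - sum_j (n_j / n) * R(Z_j): the volume of all features
    minus the class-conditional volumes. It is large when classes occupy
    large, mutually incoherent subspaces."""
    n = Z.shape[1]
    cond = 0.0
    for j in range(num_classes):
        Zj = Z[:, labels == j]
        nj = Zj.shape[1]
        if nj > 0:
            cond = cond + (nj / n) * coding_rate(Zj, eps)
    return coding_rate(Z, eps) - cond

# usage with random unit-norm features (columns are samples)
d, n = 16, 200
labels = torch.randint(0, 4, (n,))
Z = F.normalize(torch.randn(d, n), dim=0)
print(rate_reduction(Z, labels, num_classes=4))

Maximizing this quantity pushes different classes toward independent subspaces while expanding the span within each class, which is exactly the structured, non-hidden feature layout the abstract describes.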
