Results 1 - 7 of 7
1.
Nat Commun ; 13(1): 3094, 2022 06 02.
Article in English | MEDLINE | ID: mdl-35655064

ABSTRACT

The fundamental goal of artificial intelligence (AI) is to mimic the core cognitive activities of humans. Despite tremendous success in AI research, most existing methods possess only a single cognitive ability. To overcome this limitation and take a solid step towards artificial general intelligence (AGI), we develop a foundation model pre-trained on huge multimodal data, which can be quickly adapted to various downstream cognitive tasks. To achieve this goal, we propose to pre-train our foundation model by self-supervised learning on weakly semantically correlated data crawled from the Internet, and show that promising results can be obtained on a wide range of downstream tasks. In particular, with the developed model-interpretability tools, we demonstrate that our foundation model now possesses strong imagination ability. We believe that our work makes a transformative stride towards AGI, from the common practice of "weak or narrow AI" to that of "strong or generalized AI".
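The abstract does not disclose the concrete pre-training objective; a common choice for self-supervised learning on weakly correlated image-text pairs is a symmetric contrastive (InfoNCE-style) loss. A minimal NumPy sketch, in which the batch size, embedding dimension, and temperature are all illustrative assumptions rather than the paper's settings:

```python
import numpy as np

def info_nce_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of weakly correlated
    image-text pairs: matching pairs are pulled together while every
    other pairing in the batch acts as a negative."""
    # L2-normalise so that the dot product is cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # (B, B) similarity matrix
    labels = np.arange(len(logits))             # i-th image matches i-th text

    def xent(l):
        # numerically stable cross-entropy against the diagonal targets
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average the image-to-text and text-to-image directions
    return (xent(logits) + xent(logits.T)) / 2

rng = np.random.default_rng(0)
img = rng.normal(size=(4, 8))
# nearly matching text embeddings should yield a small loss
loss = info_nce_loss(img, img + 0.01 * rng.normal(size=(4, 8)))
```

The loss is small when paired embeddings agree and grows as the pairing becomes random, which is what drives the representations of weakly correlated modalities together during pre-training.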


Subjects
Artificial Intelligence, Intelligence, Data Collection, Humans
2.
IEEE Trans Pattern Anal Mach Intell ; 44(6): 3123-3138, 2022 Jun.
Article in English | MEDLINE | ID: mdl-33434122

ABSTRACT

Supervised dimensionality reduction for sequence data learns a transformation that maps the observations in sequences onto a low-dimensional subspace by maximizing the separability of sequences in different classes. It is typically more challenging than conventional dimensionality reduction for static data, because measuring the separability of sequences involves non-linear procedures to manipulate the temporal structures. In this paper, we propose a linear method, called order-preserving Wasserstein discriminant analysis (OWDA), and its deep extension, namely DeepOWDA, to learn linear and non-linear discriminative subspaces for sequence data, respectively. We construct novel separability measures between sequence classes based on the order-preserving Wasserstein (OPW) distance to capture the essential differences among their temporal structures. Specifically, for each class, we extract the OPW barycenter and construct the intra-class scatter as the dispersion of the training sequences around the barycenter. The inter-class distance is measured as the OPW distance between the corresponding barycenters. We learn the linear and non-linear transformations by maximizing the inter-class distance and minimizing the intra-class scatter. In this way, the proposed OWDA and DeepOWDA are able to concentrate on the distinctive differences among classes by lifting the geometric relations with temporal constraints. Experiments on four 3D action recognition datasets show the effectiveness of OWDA and DeepOWDA.
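The objective described above can be pictured as scoring a linear projection W by the ratio of inter-class barycenter distance to intra-class scatter. The sketch below is a deliberate simplification, not the paper's method: it assumes equal-length sequences, uses the Euclidean mean as a stand-in for the OPW barycenter, and squared Euclidean distance in place of the OPW distance.

```python
import numpy as np

def discriminant_objective(seqs, labels, W):
    """OWDA-style score for a linear projection W: inter-class
    barycenter distance divided by intra-class scatter (simplified:
    Euclidean mean/distance instead of OPW barycenter/distance)."""
    proj = [s @ W for s in seqs]                 # project each (T, d) sequence
    classes = sorted(set(labels))
    bary = {c: np.mean([p for p, l in zip(proj, labels) if l == c], axis=0)
            for c in classes}
    # dispersion of each sequence around its class barycenter
    intra = sum(np.sum((p - bary[l]) ** 2) for p, l in zip(proj, labels))
    # pairwise distances between class barycenters
    inter = sum(np.sum((bary[a] - bary[b]) ** 2)
                for i, a in enumerate(classes) for b in classes[i + 1:])
    return inter / intra

rng = np.random.default_rng(1)
seqs, labels = [], []
for c in (0, 1):
    for _ in range(6):
        s = rng.normal(scale=0.1, size=(5, 3))
        s[:, 0] += 5.0 * c                       # classes differ along feature 0
        seqs.append(s)
        labels.append(c)

W_good = np.array([[1.0], [0.0], [0.0]])         # keeps the discriminative axis
W_bad = np.array([[0.0], [1.0], [0.0]])          # keeps a pure-noise axis
```

A projection retaining the discriminative axis scores far higher than one retaining noise; OWDA searches for such a W directly, and DeepOWDA replaces the linear map with a network.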

3.
IEEE Trans Pattern Anal Mach Intell ; 44(12): 9844-9859, 2022 12.
Article in English | MEDLINE | ID: mdl-34941503

ABSTRACT

Audiovisual scenes are pervasive in our daily life. It is commonplace for humans to discriminatively localize different sounding objects, but quite challenging for machines to achieve class-aware sounding object localization without category annotations, i.e., localizing the sounding object and recognizing its category. To address this problem, we propose a two-stage step-by-step learning framework to localize and recognize sounding objects in complex audiovisual scenarios using only the correspondence between audio and vision. First, we propose to determine the sounding area via coarse-grained audiovisual correspondence in the single source cases. Then visual features in the sounding area are leveraged as candidate object representations to establish a category-representation object dictionary for expressive visual character extraction. We generate class-aware object localization maps in cocktail-party scenarios and use audiovisual correspondence to suppress silent areas by referring to this dictionary. Finally, we employ category-level audiovisual consistency as the supervision to achieve fine-grained audio and sounding object distribution alignment. Experiments on both realistic and synthesized videos show that our model is superior in localizing and recognizing objects as well as filtering out silent ones. We also transfer the learned audiovisual network into the unsupervised object detection task, obtaining reasonable performance.
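The first stage's coarse-grained audiovisual correspondence can be pictured as a similarity heat map between a clip-level audio embedding and per-location visual features. A toy sketch, where the shared embedding space, its dimension, and the grid size are assumptions for illustration, not the paper's architecture:

```python
import numpy as np

def sounding_area_map(audio_emb, visual_feats):
    """Coarse audiovisual correspondence: cosine similarity between one
    audio embedding (d,) and visual features on an H x W grid (H, W, d),
    giving a heat map whose peak marks the likely sounding area."""
    v = visual_feats / (np.linalg.norm(visual_feats, axis=-1, keepdims=True) + 1e-8)
    a = audio_emb / (np.linalg.norm(audio_emb) + 1e-8)
    return v @ a                                 # (H, W) similarity map

rng = np.random.default_rng(2)
audio = rng.normal(size=16)
grid = rng.normal(size=(4, 4, 16))
grid[2, 3] = audio                               # plant a matching "sounding object"
heat = sounding_area_map(audio, grid)
```

In the framework above, features pooled from the peak region would then seed the category-representation dictionary used in the later, class-aware stage.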


Subjects
Algorithms, Visual Perception, Humans, Learning
4.
IEEE Trans Pattern Anal Mach Intell ; 43(7): 2510-2523, 2021 Jul.
Article in English | MEDLINE | ID: mdl-31940521

ABSTRACT

Zero-shot learning (ZSL) is made possible by learning a projection function between a feature space and a semantic space (e.g., an attribute space). Key to ZSL is thus to learn a projection that is robust against the often large domain gap between the seen and unseen class domains. In this work, this is achieved by unseen class data synthesis and robust projection function learning. Specifically, a novel semantic data synthesis strategy is proposed, by which semantic class prototypes (e.g., attribute vectors) are used to simply perturb seen class data for generating unseen class ones. As in any data synthesis/hallucination approach, there are ambiguities and uncertainties on how well the synthesised data can capture the targeted unseen class data distribution. To cope with this, the second contribution of this work is a novel projection learning model termed competitive bidirectional projection learning (BPL) designed to best utilise the ambiguous synthesised data. Specifically, we assume that each synthesised data point can belong to any unseen class; and the most likely two class candidates are exploited to learn a robust projection function in a competitive fashion. As a third contribution, we show that the proposed ZSL model can be easily extended to few-shot learning (FSL) by again exploiting semantic (class prototype guided) feature synthesis and competitive BPL. Extensive experiments show that our model achieves state-of-the-art results on both problems.
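The core synthesis idea, perturbing seen-class data towards an unseen class, can be sketched as follows. For illustration only: class prototypes are assumed to live directly in the feature space, a single additive shift stands in for the paper's attribute-guided perturbation, and the competitive BPL step is omitted.

```python
import numpy as np

def synthesize_unseen(seen_feats, seen_proto, unseen_proto, alpha=1.0):
    """Perturbation-based synthesis: shift seen-class features by the
    difference between the unseen and seen class prototypes, so the
    synthesized points inherit the seen class's within-class variation
    but sit near the unseen prototype. alpha scales the perturbation."""
    return seen_feats + alpha * (unseen_proto - seen_proto)

rng = np.random.default_rng(3)
seen_proto = np.zeros(5)
unseen_proto = np.full(5, 3.0)
seen_feats = seen_proto + rng.normal(scale=0.5, size=(10, 5))
synth = synthesize_unseen(seen_feats, seen_proto, unseen_proto)

# synthesized points should now sit nearer the unseen prototype
d_unseen = np.linalg.norm(synth - unseen_proto, axis=1)
d_seen = np.linalg.norm(synth - seen_proto, axis=1)
```

Because such synthesized points are only approximations of the true unseen distribution, the paper's competitive BPL then hedges over the two most likely class candidates for each point rather than trusting a single label.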

5.
IEEE Trans Image Process ; 28(4): 1720-1731, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30452369

ABSTRACT

Image annotation aims to annotate a given image with a variable number of class labels corresponding to diverse visual concepts. In this paper, we address two main issues in large-scale image annotation: 1) how to learn a rich feature representation suitable for predicting a diverse set of visual concepts ranging from objects and scenes to abstract concepts and 2) how to annotate an image with the optimal number of class labels. To address the first issue, we propose a novel multi-scale deep model for extracting rich and discriminative features capable of representing a wide range of visual concepts. Specifically, a novel two-branch deep neural network architecture is proposed, which comprises a very deep main network branch and a companion feature fusion network branch designed for fusing the multi-scale features computed from the main branch. The deep model is also made multi-modal by taking noisy user-provided tags as model input to complement the image input. For tackling the second issue, we introduce a label quantity prediction auxiliary task to the main label prediction task to explicitly estimate the optimal label number for a given image. Extensive experiments are carried out on two large-scale image annotation benchmark datasets, and the results show that our method significantly outperforms the state of the art.
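The label-quantity auxiliary task changes how predictions are decoded: rather than thresholding label scores globally, the model keeps the k highest-scoring labels, with k predicted per image by the auxiliary head. A sketch of that decoding step, with illustrative scores and k values:

```python
import numpy as np

def annotate_top_k(label_scores, k_pred):
    """Label-quantity-aware annotation: keep the k highest-scoring
    labels, where k is the per-image output of an auxiliary
    quantity-prediction head, instead of a fixed global threshold."""
    order = np.argsort(label_scores)[::-1]       # label indices, best first
    return sorted(order[:k_pred].tolist())

# per-label confidence scores for one image (illustrative)
scores = np.array([0.1, 0.9, 0.05, 0.7, 0.6])
```

With a fixed threshold, images with many valid concepts and images with few would be forced through the same cutoff; predicting k per image sidesteps that mismatch.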

6.
IEEE Trans Cybern ; 48(1): 253-263, 2018 Jan.
Article in English | MEDLINE | ID: mdl-28114055

ABSTRACT

In this paper, we present a large-scale sparse learning (LSSL) approach to solve the challenging task of semantic segmentation of images with noisy tags. Different from the traditional strongly supervised methods that exploit pixel-level labels for semantic segmentation, we make use of much weaker supervision (i.e., noisy tags of images) and then formulate the task of semantic segmentation as a weakly supervised learning (WSL) problem from the viewpoint of noise reduction of superpixel labels. By learning the data manifolds, we transform the WSL problem into an LSSL problem. Based on nonlinear approximation and dimension reduction techniques, a linear-time-complexity algorithm is developed to solve the LSSL problem efficiently. We further extend the LSSL approach to visual feature refinement for semantic segmentation. The experiments demonstrate that the proposed LSSL approach can achieve promising results in semantic segmentation of images with noisy tags.
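The noise-reduction view of WSL can be illustrated with a standard graph-based label-propagation scheme over superpixels, which stands in here for the paper's large-scale sparse learning formulation; the chain-shaped affinity graph and the hyperparameters are assumptions for the sketch.

```python
import numpy as np

def smooth_superpixel_labels(affinity, noisy_labels, alpha=0.8, iters=50):
    """Manifold-based noise reduction for superpixel label scores:
    repeatedly average each superpixel's label distribution with its
    neighbours' (via the row-normalised affinity matrix) while staying
    anchored to the original noisy tags."""
    P = affinity / affinity.sum(axis=1, keepdims=True)   # transition matrix
    F = noisy_labels.astype(float).copy()
    for _ in range(iters):
        F = alpha * (P @ F) + (1 - alpha) * noisy_labels
    return F

# 5 superpixels on a chain, with self-loops, 2 classes
A = np.eye(5)
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0

# all superpixels belong to class 0, but node 2 is noisily tagged class 1
Y = np.array([[1, 0], [1, 0], [0, 1], [1, 0], [1, 0]], dtype=float)
smoothed = smooth_superpixel_labels(A, Y)
```

After smoothing, the mislabeled superpixel is outvoted by its neighbours on the manifold, which is the intuition behind recasting weak supervision as label-noise reduction.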

7.
IEEE Trans Image Process ; 24(1): 176-88, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25438314

ABSTRACT

This paper presents a new semantic sparse recoding method to generate a more descriptive and robust representation of visual content for image applications. Although the visual bag-of-words (BOW) representation has been reported to achieve promising results in different image applications, its visual codebook is completely learnt from low-level visual features using quantization techniques and thus the so-called semantic gap remains unbridgeable. To handle this challenging issue, we utilize the annotations (predicted by algorithms or shared by users) of all the images to improve the original visual BOW representation. This is further formulated as a sparse coding problem so that the noise issue induced by the inaccurate quantization of visual features can also be handled to some extent. By developing an efficient sparse coding algorithm, we successfully generate a new visual BOW representation for image applications. Since such sparse coding has actually incorporated the high-level semantic information into the original visual codebook, we thus consider it as semantic sparse recoding of the visual content. Finally, we apply our semantic sparse recoding method to automatic image annotation and social image classification. The experimental results on several benchmark datasets show the promising performance of our semantic sparse recoding method in these two image applications.
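The sparse coding step can be sketched with a generic ISTA solver for the lasso objective (1/2)||x - Dz||^2 + lam*||z||_1, where x is a visual BOW histogram and D is the codebook (random here; semantically enriched in the paper). This is a stand-in for the idea, not the paper's specific efficient algorithm:

```python
import numpy as np

def sparse_code(x, D, lam=0.1, iters=200):
    """Sparse coding of a BOW vector x over codebook D (d x k) by ISTA:
    a gradient step on the reconstruction error followed by
    soft-thresholding, yielding a sparse recoded representation z."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2       # 1 / Lipschitz constant
    z = np.zeros(D.shape[1])
    for _ in range(iters):
        g = D.T @ (D @ z - x)                    # gradient of 0.5*||Dz - x||^2
        z = z - step * g
        z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return z

rng = np.random.default_rng(4)
D = rng.normal(size=(10, 15))
D /= np.linalg.norm(D, axis=0)                   # unit-norm codebook atoms
z_true = np.zeros(15)
z_true[[2, 7]] = 2.0                             # signal built from two atoms
x = D @ z_true
z = sparse_code(x, D)
```

The soft-thresholding step is what produces sparsity: most codebook atoms receive exactly zero weight, so the recoded representation stays compact while absorbing the higher-level (semantic) structure built into the codebook.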
