A Web Knowledge-Driven Multimodal Retrieval Method in Computational Social Systems: Unsupervised and Robust Graph Convolutional Hashing
IEEE Transactions on Computational Social Systems, pp. 1-11, 2022.
Article in English | Web of Science | ID: covidwho-2123176
ABSTRACT
Multimodal retrieval has received widespread attention since it can provide massive related-data support for the development of computational social systems (CSSs). However, existing works still face the following challenges: 1) they rely on a tedious manual labeling process when extended to CSSs, which not only introduces subjective errors but also consumes abundant time and labor; 2) they train only on strongly aligned data and neglect adjacency information, which leads to poor robustness and makes the semantic heterogeneity gap difficult to fit effectively; and 3) they map features into real-valued forms, which results in high storage costs and low retrieval efficiency. To address these issues in turn, we have designed a web-knowledge-driven multimodal retrieval framework called unsupervised and robust graph convolutional hashing (URGCH). The specific implementations are as follows: first, a "secondary semantic self-fusion" approach is proposed, which extracts semantically rich features through pretrained neural networks, constructs a joint semantic matrix through semantic fusion, and eliminates the manual labeling process; second, an "adaptive computing" approach is designed to construct enhanced semantic graph features through the knowledge infusion of neighborhoods and uses graph convolutional networks for knowledge-fusion encoding, which enables URGCH to sufficiently fit the semantic modality gap while obtaining robust features; third, combined with hash learning, the multimodal data are mapped into binary codes, which reduces storage requirements and improves retrieval efficiency. Finally, we perform extensive experiments on web datasets. The results show that URGCH exceeds other baselines by about 1%-3.7% in mean average precision (MAP), displays superior performance in all aspects, and can meaningfully provide multimodal data retrieval services to CSSs.
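To make the pipeline in the abstract concrete, the following is a minimal, hypothetical sketch of two of its ingredients: fusing per-modality similarity graphs into a joint semantic matrix (standing in for the "secondary semantic self-fusion" step, which in the paper uses pretrained-network features), and mapping features to binary hash codes. The weighting parameter `alpha` and the random-projection hashing are illustrative assumptions; the paper learns the encoding with graph convolutional networks rather than random projections.

```python
import numpy as np

def joint_semantic_matrix(img_feats, txt_feats, alpha=0.5):
    """Fuse cosine-similarity graphs of two modalities into one joint
    semantic matrix. `alpha` is a hypothetical fusion weight, not a
    value from the paper."""
    def cos_sim(x):
        xn = x / np.linalg.norm(x, axis=1, keepdims=True)
        return xn @ xn.T
    return alpha * cos_sim(img_feats) + (1 - alpha) * cos_sim(txt_feats)

def binary_codes(feats, n_bits, seed=0):
    """Toy random-projection hashing: project features and take the sign,
    yielding compact {-1, +1} codes. URGCH instead learns this mapping
    jointly with the GCN encoder."""
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((feats.shape[1], n_bits))
    return np.sign(feats @ proj).astype(np.int8)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.standard_normal((5, 8))   # 5 items, 8-dim image features
    txt = rng.standard_normal((5, 6))   # same items, 6-dim text features
    S = joint_semantic_matrix(img, txt)
    B = binary_codes(img, n_bits=16)
    print(S.shape, B.shape)
```

Binary codes of this kind are what yield the storage and retrieval-speed gains the abstract mentions: Hamming distance between codes can be computed with bitwise operations instead of floating-point arithmetic.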
Full text: Available Collection: International organizations databases Database: Web of Science Language: English Journal: IEEE Transactions on Computational Social Systems Year: 2022 Document type: Article
