Results 1 - 2 of 2
1.
Article in English | MEDLINE | ID: mdl-37999962

ABSTRACT

Graph neural networks (GNNs) have achieved state-of-the-art performance in various graph representation learning scenarios. However, when applied to real-world graph data, GNNs encounter scalability issues. Existing GNNs often carry a high computational load in both the training and inference stages, making them unable to meet the performance needs of large-scale scenarios with a large number of nodes. Although several studies on scalable GNNs have been developed, they either improve GNNs with only limited scalability or do so at the expense of reduced effectiveness. Inspired by knowledge distillation's (KD's) success in preserving performance while balancing scalability in computer vision and natural language processing, we propose an enhanced scalable GNN via KD (KD-SGNN) to improve the scalability and effectiveness of GNNs. On the one hand, KD-SGNN adopts the idea of decoupled GNNs, which separates feature transformation from feature propagation and leverages preprocessing techniques to improve scalability. On the other hand, KD-SGNN proposes two KD mechanisms (i.e., soft-target (ST) distillation and shallow imitation (SI) distillation) to improve expressiveness. The scalability and effectiveness of KD-SGNN are evaluated on multiple real datasets, and the effectiveness of the proposed KD mechanisms is further verified through comprehensive analyses.
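
The decoupling and soft-target distillation ideas summarized above can be sketched in a few lines of PyTorch. This is a minimal illustration under assumed names (precompute_propagation, DecoupledStudent, soft_target_loss) and hyperparameters, not the authors' implementation: propagation is precomputed once over a normalized adjacency, the student is a plain MLP, and the soft-target term is a temperature-scaled KL divergence against a teacher's logits.

# Minimal sketch of the decoupled-GNN-plus-distillation idea described above.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def precompute_propagation(adj_norm, features, k=2):
    """Feature propagation done once as preprocessing (no learnable weights),
    so training only touches a plain MLP on the propagated features."""
    x = features
    for _ in range(k):
        x = adj_norm @ x          # A_hat^k X via repeated (sparse) matmul
    return x

class DecoupledStudent(nn.Module):
    """Feature transformation decoupled from propagation: just an MLP."""
    def __init__(self, in_dim, hidden, n_classes):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_classes))
    def forward(self, x):
        return self.mlp(x)

def soft_target_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened teacher and student outputs."""
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

# Training step (sketch): supervised cross-entropy plus the soft-target term.
# x_prop = precompute_propagation(adj_norm, features, k=2)
# student = DecoupledStudent(x_prop.size(1), 64, n_classes)
# logits = student(x_prop)
# loss = F.cross_entropy(logits[train_idx], labels[train_idx]) \
#        + lambda_kd * soft_target_loss(logits, teacher_logits)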

2.
IEEE Trans Neural Netw Learn Syst; 34(8): 4296-4307, 2023 Aug.
Article in English | MEDLINE | ID: mdl-34637383

ABSTRACT

Graph convolutional networks (GCNs) have achieved great success in many applications and have attracted significant attention in both academic and industrial domains. However, repeatedly stacking graph convolutional layers renders the node embeddings indistinguishable. To avoid this oversmoothing, most GCN-based models are restricted to a shallow architecture. The expressive power of these models is therefore insufficient, since they ignore information beyond local neighborhoods. Furthermore, existing methods either do not consider the semantics of high-order local structures or neglect node homophily (i.e., node similarity), which severely limits model performance. In this article, we take the above problems into consideration and propose a novel Semantics and Homophily preserving Network Embedding (SHNE) model. In particular, SHNE leverages higher-order connectivity patterns to capture structural semantics. To exploit node homophily, SHNE uses both structural and feature similarity to discover potentially correlated neighbors for each node from the whole graph; thus, distant but informative nodes can also contribute to the model. Moreover, with the proposed dual-attention mechanisms, SHNE learns comprehensive embeddings that incorporate additional information from various semantic spaces. We also design a semantic regularizer to improve the quality of the combined representation. Extensive experiments demonstrate that SHNE outperforms state-of-the-art methods on benchmark datasets.
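
The neighbor-discovery and semantic-attention steps described above can be sketched roughly as follows in PyTorch. All names, the similarity weighting, and the attention layout are assumptions for illustration rather than SHNE's actual design: structural similarity is taken from rows of a k-hop adjacency power, feature similarity from cosine similarity, and embeddings from different semantic spaces are fused with a learned softmax weighting.

# Sketch of discovering correlated neighbors from the whole graph by mixing
# structural and feature similarity; names and the alpha weighting are assumptions.
import torch
import torch.nn.functional as F

def correlated_neighbors(adj, features, k_hop=2, top_k=5, alpha=0.5):
    # Higher-order structural context: rows of A^k describe k-hop connectivity.
    a_k = torch.linalg.matrix_power(adj, k_hop)
    struct_sim = F.normalize(a_k, dim=1) @ F.normalize(a_k, dim=1).T
    feat_sim = F.normalize(features, dim=1) @ F.normalize(features, dim=1).T
    sim = alpha * struct_sim + (1 - alpha) * feat_sim
    sim.fill_diagonal_(-float("inf"))      # a node is not its own neighbor
    return sim.topk(top_k, dim=1).indices  # [n_nodes, top_k] candidate neighbors

# Attention-based fusion of embeddings from several semantic spaces (sketch):
# score each space, softmax over spaces, then take a weighted sum per node.
class SemanticAttention(torch.nn.Module):
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.score = torch.nn.Sequential(torch.nn.Linear(dim, hidden),
                                         torch.nn.Tanh(),
                                         torch.nn.Linear(hidden, 1, bias=False))
    def forward(self, embeddings):                  # [n_spaces, n_nodes, dim]
        w = self.score(embeddings).mean(dim=1)      # one score per space
        beta = torch.softmax(w, dim=0).unsqueeze(-1)
        return (beta * embeddings).sum(dim=0)       # fused [n_nodes, dim]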
