Results 1 - 9 of 9

1.
Sci Rep ; 14(1): 9711, 2024 Apr 27.
Article in English | MEDLINE | ID: mdl-38678041

ABSTRACT

Based on system dynamics theory, this paper establishes an evolution model for environmental mass events and explores the laws governing how mass events triggered by environmental problems evolve. Methodologically, the mixed-strategy evolutionary game principle is combined with dynamic punishment measures, and simulation analysis is carried out in AnyLogic. The results show that the traditional asymmetric mixed-strategy game model has no stable evolutionary equilibrium for the two players; after the payoff matrix is adjusted and a dynamic punishment strategy is incorporated, stable evolutionary equilibria emerge and the system tends toward stability. The simulation process and its conclusions provide a methodological reference and theoretical support for analyzing the evolution of environmental mass events.
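
A minimal illustrative sketch of the kind of dynamics described above: a two-population replicator simulation of an asymmetric mixed-strategy game between an enterprise (pollute vs. comply) and a regulator (strict vs. lax supervision), with a punishment term that scales with the current polluting probability. All payoff values and the specific payoff structure are hypothetical assumptions for illustration; the paper's own simulation uses AnyLogic.

import numpy as np

def simulate(steps=20000, dt=0.001, x0=0.6, y0=0.4, f0=8.0):
    # x: probability the enterprise pollutes; y: probability the regulator is strict
    x, y = x0, y0
    b_pollute, c_comply = 5.0, 2.0    # enterprise's gain from polluting / cost of complying
    c_strict, r_fine = 3.0, 4.0       # regulator's supervision cost / share of the collected fine
    traj = []
    for _ in range(steps):
        fine = f0 * x                 # dynamic punishment: fine grows with polluting frequency
        # expected payoffs of the enterprise's two pure strategies
        u_pollute = y * (b_pollute - fine) + (1 - y) * b_pollute
        u_comply = -c_comply
        # expected payoffs of the regulator's two pure strategies
        u_strict = x * (r_fine * fine - c_strict) + (1 - x) * (-c_strict)
        u_lax = 0.0
        # replicator equations for both populations, Euler-integrated
        x += dt * x * (1 - x) * (u_pollute - u_comply)
        y += dt * y * (1 - y) * (u_strict - u_lax)
        traj.append((x, y))
    return np.array(traj)

if __name__ == "__main__":
    path = simulate()
    print("final strategy mix (pollute, strict):", path[-1])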

2.
Math Biosci Eng ; 21(3): 3563-3593, 2024 Feb 05.
Article in English | MEDLINE | ID: mdl-38549296

ABSTRACT

Dynamic recommendation systems aim to achieve real-time updates and dynamic migration of user interests, primarily utilizing timestamped user-item interaction sequences to capture the dynamic changes in user interests and item attributes. Recent research has mainly centered on two aspects: modeling the dynamic interaction relationships between users and items with dynamic graphs, and mining their long-term and short-term interaction patterns through the joint learning of static and dynamic embeddings for both users and items. Although most existing methods have achieved some success in modeling the historical interaction sequences between users and items, there is still room for improvement, particularly in modeling the long-term dependency structure of dynamic interaction histories and extracting the most relevant delayed interaction patterns. To address this issue, we proposed a Dynamic Context-Aware Recommendation System. Specifically, our model is built on a dynamic graph and utilizes the static embeddings of recent user-item interactions as dynamic context. We constructed a Gated Multi-Layer Perceptron encoder to capture the long-term dependency structure in the dynamic interaction history and extract high-level features, and introduced an Attention Pooling network to learn similarity scores between high-level features in the user-item dynamic interaction history. By calculating bidirectional attention weights, we extracted the most relevant delayed interaction patterns from the historical sequence to predict the dynamic embeddings of users and items. We also proposed a Pairwise Cosine Similarity loss to jointly optimize the static and dynamic embeddings of the two types of nodes. Finally, extensive experiments on two real-world datasets, LastFM and the Global Terrorism Database, showed that our model achieves consistent improvements over state-of-the-art baselines.
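
A minimal PyTorch sketch of the ingredients named above: a gated MLP encoder over the static embeddings of recent interactions, bidirectional attention pooling between the user-side and item-side feature sequences, and a simplified cosine-similarity objective standing in for the paper's Pairwise Cosine Similarity loss. Shapes, layer sizes, and wiring are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedMLPEncoder(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, dim)

    def forward(self, ctx):                  # ctx: (batch, seq_len, dim) static context embeddings
        h = torch.tanh(self.proj(ctx))
        g = torch.sigmoid(self.gate(ctx))    # element-wise gate decides what to keep
        return g * h + (1 - g) * ctx         # (batch, seq_len, dim) high-level features

def attention_pooling(user_feats, item_feats):
    # bidirectional attention between the user-side and item-side feature sequences
    scores = torch.bmm(user_feats, item_feats.transpose(1, 2))      # (batch, Lu, Li)
    u2i = torch.softmax(scores, dim=-1) @ item_feats                # user side attends to item history
    i2u = torch.softmax(scores.transpose(1, 2), dim=-1) @ user_feats
    return u2i.mean(dim=1), i2u.mean(dim=1)                         # pooled dynamic embeddings

def pairwise_cosine_loss(user_dyn, item_dyn):
    # pull the dynamic embeddings of interacting user/item pairs together
    return (1.0 - F.cosine_similarity(user_dyn, item_dyn, dim=-1)).mean()

if __name__ == "__main__":
    enc = GatedMLPEncoder(dim=32)
    u_ctx, i_ctx = torch.randn(4, 10, 32), torch.randn(4, 10, 32)
    u_dyn, i_dyn = attention_pooling(enc(u_ctx), enc(i_ctx))
    print(pairwise_cosine_loss(u_dyn, i_dyn).item())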

3.
Sci Rep ; 13(1): 16966, 2023 Oct 08.
Article in English | MEDLINE | ID: mdl-37807013

ABSTRACT

Graph neural networks (GNNs) have significant advantages in dealing with non-Euclidean data and have been widely used in various fields. However, most existing GNN models face two main challenges. (1) Most GNN models built upon the message-passing framework have a shallow structure, which hampers their ability to transmit information efficiently between distant nodes; we therefore aim to propose a novel message-passing framework that enables GNN models with deep architectures akin to convolutional neural networks (CNNs), potentially comprising dozens or even hundreds of layers. (2) Existing models often treat the learning of edge features and node features as separate tasks; to overcome this limitation, we aim to develop a deep graph convolutional learning framework that acquires edge embeddings and node embeddings simultaneously and uses the learned multi-dimensional edge feature matrix to construct multi-channel filters that capture node features more accurately. To address these challenges, we propose Co-embedding of Edges and Nodes with Deep Graph Convolutional Neural Networks (CEN-DGCNN). Our approach introduces a novel message-passing framework that fully integrates and utilizes both node features and multi-dimensional edge features. Based on this framework, we develop a deep graph convolutional neural network model that prevents over-smoothing and obtains non-local structural features and refined high-order node features by extracting long-distance dependencies between nodes and utilizing multi-dimensional edge features. Moreover, we propose a novel graph convolutional layer that learns node embeddings and multi-dimensional edge embeddings simultaneously; it updates the multi-dimensional edge embeddings across layers based on node features and an attention mechanism, enabling efficient utilization and fusion of both node and edge features. Additionally, we propose a multi-dimensional edge feature encoding method based on directed edges and use the resulting multi-dimensional edge feature matrix to construct a multi-channel filter that filters node information. Lastly, extensive experiments show that CEN-DGCNN outperforms a large number of graph neural network baselines, demonstrating the effectiveness of the proposed method.
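
A rough sketch of one graph-convolution layer that co-updates node embeddings and multi-dimensional edge embeddings, in the spirit of the layer described above. The attention form, aggregation, residual connection, and dimensions are illustrative assumptions, not the CEN-DGCNN reference implementation.

import torch
import torch.nn as nn

class CoEmbeddingLayer(nn.Module):
    def __init__(self, node_dim, edge_dim):
        super().__init__()
        self.edge_update = nn.Linear(2 * node_dim + edge_dim, edge_dim)
        self.attn = nn.Linear(2 * node_dim + edge_dim, 1)
        self.node_update = nn.Linear(node_dim + edge_dim, node_dim)

    def forward(self, x, edge_index, e):
        # x: (N, node_dim) node features; e: (E, edge_dim) multi-dimensional edge features
        # edge_index: (2, E) directed edges src -> dst
        src, dst = edge_index
        pair = torch.cat([x[src], x[dst], e], dim=-1)
        e_new = torch.relu(self.edge_update(pair))          # edge embeddings evolve with node features
        alpha = torch.sigmoid(self.attn(pair))              # per-edge attention weight
        msg = alpha * torch.cat([x[src], e_new], dim=-1)    # message carries node + edge information
        agg = torch.zeros(x.size(0), msg.size(1))
        agg.index_add_(0, dst, msg)                         # sum incoming messages per destination node
        x_new = torch.relu(self.node_update(agg)) + x       # residual connection helps deeper stacks
        return x_new, e_new

if __name__ == "__main__":
    x = torch.randn(5, 16)
    e = torch.randn(7, 8)
    edge_index = torch.randint(0, 5, (2, 7))
    layer = CoEmbeddingLayer(16, 8)
    x2, e2 = layer(x, edge_index, e)
    print(x2.shape, e2.shape)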

4.
Math Biosci Eng ; 20(8): 14096-14116, 2023 Jun 25.
Article in English | MEDLINE | ID: mdl-37679127

ABSTRACT

With the rise of multi-modal methods, multi-modal knowledge graphs have become a better choice for storing human knowledge. However, knowledge graphs often suffer from incompleteness because knowledge is unbounded and constantly updated, which has motivated the task of knowledge graph completion. Existing multi-modal knowledge graph completion methods mostly rely on either embedding-based representations or graph neural networks, and there is still room for improvement in interpretability and in handling multi-hop tasks. Therefore, we propose a new method for multi-modal knowledge graph completion. Our method aims to learn multi-level graph structural features to fully explore hidden relationships within the knowledge graph and to improve reasoning accuracy. Specifically, we first use a Transformer architecture to learn data representations for the image and text modalities separately. Then, with the help of multi-modal gating units, we filter out irrelevant information and perform feature fusion to obtain a unified encoding of knowledge representations. Furthermore, we extract multi-level path features using a width-adjustable sliding window and learn structural features of the knowledge graph using graph convolutional operations. Finally, we use a scoring function to evaluate the probability that an encoded triplet is true and to complete the prediction task. To demonstrate the effectiveness of the model, we conduct experiments on two publicly available datasets, FB15K-237-IMG and WN18-IMG, and achieve improvements of 1.8% and 0.7%, respectively, on the Hits@1 metric.
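
A hedged sketch of two of the pieces mentioned above: a multimodal gating unit that fuses the image and text representations of an entity into one encoding, and a simple bilinear (DistMult-style) scoring function over triplets. The gating form and the choice of scorer are illustrative assumptions, not the paper's exact design.

import torch
import torch.nn as nn

class MultimodalGate(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, text_emb, img_emb):
        g = torch.sigmoid(self.gate(torch.cat([text_emb, img_emb], dim=-1)))
        return g * text_emb + (1 - g) * img_emb   # gate filters out irrelevant modality content

def score_triplet(head, rel, tail):
    # DistMult-style plausibility score for (head, relation, tail)
    return (head * rel * tail).sum(dim=-1)

if __name__ == "__main__":
    dim = 64
    fuse = MultimodalGate(dim)
    h = fuse(torch.randn(2, dim), torch.randn(2, dim))   # fused head-entity encodings
    t = fuse(torch.randn(2, dim), torch.randn(2, dim))   # fused tail-entity encodings
    r = torch.randn(2, dim)                              # relation embeddings
    print(torch.sigmoid(score_triplet(h, r, t)))         # probability-like truthfulness scores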

5.
Front Neurorobot ; 17: 1181143, 2023.
Article in English | MEDLINE | ID: mdl-37408584

ABSTRACT

In the field of human-computer interaction, accurately identifying the objects being talked about helps robots accomplish subsequent tasks such as decision-making or recommendation; object determination is therefore an important prerequisite task. Whether in named entity recognition (NER) in natural language processing (NLP) or object detection (OD) in computer vision (CV), the essence is object recognition. Multimodal approaches are now widely used in basic image recognition and natural language processing tasks, and such architectures can perform entity recognition more accurately; however, when faced with short texts and noisy images, we find that there is still room for optimization in image-text multimodal named entity recognition (MNER) architectures. In this study, we propose a new multi-level multimodal named entity recognition architecture, a network that extracts useful visual information to boost semantic understanding and thereby improve entity identification. Specifically, we first performed image and text encoding separately and then built a symmetric Transformer-based neural network architecture for multimodal feature fusion. We utilized a gating mechanism to filter visual information that is significantly related to the textual content, in order to enhance text understanding and achieve semantic disambiguation. Furthermore, we incorporated character-level vector encoding to reduce text noise. Finally, we employed Conditional Random Fields for the label classification task. Experiments on the Twitter dataset show that our model improves the accuracy of the MNER task.
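
An illustrative sketch of the visual-filtering idea: each text token attends over image region features, and a gate decides how much of the attended visual signal to mix into the token representation before tagging. Module names, shapes, and the downstream tagger are assumptions; the paper uses a symmetric Transformer-based fusion network followed by a CRF layer.

import torch
import torch.nn as nn

class VisualGateFusion(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, text, visual):
        # text: (batch, T, dim) token embeddings; visual: (batch, R, dim) region features
        attn = torch.softmax(self.q(text) @ self.k(visual).transpose(1, 2) / text.size(-1) ** 0.5, dim=-1)
        vis_ctx = attn @ self.v(visual)                                   # text-conditioned visual context
        g = torch.sigmoid(self.gate(torch.cat([text, vis_ctx], dim=-1)))  # keep only text-relevant visual info
        return text + g * vis_ctx                                         # enriched token representations

if __name__ == "__main__":
    fusion = VisualGateFusion(dim=32)
    tokens = torch.randn(2, 12, 32)    # e.g. word + character-level encodings, already combined
    regions = torch.randn(2, 49, 32)   # e.g. a 7x7 grid of CNN region features
    print(fusion(tokens, regions).shape)   # (2, 12, 32), ready for a CRF tagging head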

7.
PeerJ Comput Sci ; 9: e1368, 2023.
Article in English | MEDLINE | ID: mdl-37346515

ABSTRACT

The dynamic recommender system realizes real-time recommendation for users by learning dynamic interest characteristics, which is especially suitable for scenarios in which user interests shift rapidly, such as e-commerce and social media. The dynamic recommendation model mainly depends on the timestamped user-item interaction sequence, whose historical records reflect changes in users' true interests and in the popularity of items. Previous methods usually model interaction sequences to learn the dynamic embeddings of users and items. However, these methods cannot directly capture the excitation effects of different historical information on the evolution of both sides of the interaction, i.e., the ability of one event to influence the occurrence of another. In this work, we propose a Dynamic Graph Hawkes Process based on Linear complexity Self-Attention (DGHP-LISA) for dynamic recommender systems, a new framework for modeling the dynamic relationship between users and items simultaneously. Specifically, DGHP-LISA is built on a dynamic graph and uses a Hawkes process to capture the excitation effects between events. In addition, we propose a new self-attention with linear complexity to model the time correlation of different historical events and the dynamic correlation between different update mechanisms, which drives more accurate modeling of the evolution of both sides of the interaction. Extensive experiments on three real-world datasets show that our model achieves consistent improvements over state-of-the-art baselines.
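
A sketch of the two ideas named above, under simplifying assumptions: a kernelized self-attention whose cost is linear in sequence length (using the generic feature map elu(x)+1 from standard linear-attention formulations), and a Hawkes-style intensity in which past events excite future ones with exponential decay. This is not the DGHP-LISA code; the paper's attention and intensity parameterizations may differ.

import torch
import torch.nn.functional as F

def linear_self_attention(q, k, v):
    # q, k, v: (seq_len, dim); cost is O(seq_len * dim^2) instead of O(seq_len^2 * dim)
    phi_q = F.elu(q) + 1
    phi_k = F.elu(k) + 1
    kv = phi_k.t() @ v                                          # (dim, dim) summary of keys and values
    normalizer = phi_q @ phi_k.sum(dim=0, keepdim=True).t()     # (seq_len, 1)
    return (phi_q @ kv) / (normalizer + 1e-8)

def hawkes_intensity(t, event_times, event_marks, base=0.1, decay=1.0):
    # intensity at time t: base rate plus decayed excitation from all earlier events
    past = event_times < t
    excitation = (event_marks[past] * torch.exp(-decay * (t - event_times[past]))).sum()
    return F.softplus(base + excitation)

if __name__ == "__main__":
    seq = torch.randn(6, 8)
    print(linear_self_attention(seq, seq, seq).shape)   # (6, 8)
    times = torch.tensor([0.5, 1.0, 2.5])
    marks = torch.tensor([0.3, 0.7, 0.2])               # e.g. learned excitation weights
    print(hawkes_intensity(torch.tensor(3.0), times, marks))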

8.
Sci Rep ; 13(1): 6887, 2023 Apr 27.
Article in English | MEDLINE | ID: mdl-37106057

ABSTRACT

Although numerous spatiotemporal approaches have been proposed to address the problem of missing spatiotemporal data, they are still limited in concurrently capturing the underlying spatiotemporal dependencies of spatiotemporal graph data. Furthermore, most imputation methods miss the hidden, dynamically changing associations that form between graph nodes over time. To address this spatiotemporal data imputation challenge, we present an attention-based message passing and dynamic graph convolution network (ADGCN). Specifically, this paper uses attention mechanisms to unify temporal and spatial continuity and to aggregate node neighbor information in multiple directions. Furthermore, a dynamic graph convolution module is designed to capture the constantly changing spatial correlations among sensors, using a new gated dynamic graph generation method to transmit node information. Extensive imputation tests in the air quality and traffic flow domains were carried out on four real-world datasets with missing values. Experiments show that ADGCN outperforms the state-of-the-art baselines.
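
A rough sketch of the dynamic-graph idea: at each step a data-driven adjacency is generated from learnable node embeddings, gated against a static adjacency, and used in a simple graph convolution to propagate information toward nodes with missing readings. The gating form, embedding size, and convolution are illustrative assumptions, not the ADGCN layers.

import torch
import torch.nn as nn

class DynamicGraphConv(nn.Module):
    def __init__(self, in_dim, out_dim, num_nodes):
        super().__init__()
        self.node_emb = nn.Parameter(torch.randn(num_nodes, 16))   # learnable node embeddings
        self.gate = nn.Parameter(torch.zeros(1))
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, static_adj):
        # x: (num_nodes, in_dim) current node signals; static_adj: (num_nodes, num_nodes)
        dyn_adj = torch.softmax(torch.relu(self.node_emb @ self.node_emb.t()), dim=-1)
        g = torch.sigmoid(self.gate)
        adj = g * static_adj + (1 - g) * dyn_adj    # gated mix of fixed and learned structure
        return torch.relu(self.lin(adj @ x))        # one propagation step

if __name__ == "__main__":
    num_nodes = 6
    x = torch.randn(num_nodes, 4)
    mask = (torch.rand(num_nodes, 4) > 0.3).float()
    x = x * mask                                    # zero out "missing" sensor readings
    static_adj = torch.eye(num_nodes)
    layer = DynamicGraphConv(4, 4, num_nodes)
    print(layer(x, static_adj).shape)               # an imputation head would map this back to readings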

9.
PLoS One ; 18(3): e0279604, 2023.
Article in English | MEDLINE | ID: mdl-36897837

ABSTRACT

Graph Convolutional Networks (GCNs) are powerful deep learning methods for non-Euclidean structured data and achieve impressive performance in many fields. However, most state-of-the-art GCN models are shallow structures of no more than 3 to 4 layers, which greatly limits their ability to extract high-level node features. There are two main reasons for this: 1) stacking too many graph convolution layers leads to over-smoothing; 2) graph convolution is a localized filter and is easily affected by local properties. To solve these problems, we first propose a novel general framework for graph neural networks called Non-local Message Passing (NLMP), under which very deep graph convolutional networks can be designed flexibly and the over-smoothing phenomenon can be suppressed very effectively. Second, we propose a new spatial graph convolution layer to extract multiscale high-level node features. Finally, we design an end-to-end Deep Graph Convolutional Neural Network II (DGCNNII) model for the graph classification task, which is up to 32 layers deep. The effectiveness of the proposed method is demonstrated by quantifying the graph smoothness of each layer and through ablation studies. Experiments on benchmark graph classification datasets show that DGCNNII outperforms a large number of shallow graph neural network baselines.
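
A sketch of a graph layer that mixes local neighborhood aggregation with a non-local term (attention over all nodes) and an initial-feature residual, three generic devices for pushing GCNs deeper without over-smoothing. The concrete NLMP formulation in the paper may differ; the dimensions, weighting, and toy adjacency here are assumptions.

import torch
import torch.nn as nn

class NonLocalGraphLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.local = nn.Linear(dim, dim)
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.alpha = 0.2   # weight of the initial-feature residual

    def forward(self, x, adj, x0):
        # x: (N, dim) current features; adj: (N, N) normalized adjacency; x0: initial features
        local = adj @ self.local(x)                                          # standard local propagation
        attn = torch.softmax(self.q(x) @ self.k(x).t() / x.size(-1) ** 0.5, dim=-1)
        non_local = attn @ x                                                 # every node can reach every other node
        return torch.relu((1 - self.alpha) * (local + non_local) + self.alpha * x0)

if __name__ == "__main__":
    n, dim = 8, 16
    x0 = torch.randn(n, dim)
    adj = torch.softmax(torch.randn(n, n), dim=-1)         # stand-in for a normalized adjacency
    layers = [NonLocalGraphLayer(dim) for _ in range(32)]   # deep stack, as in the 32-layer model
    x = x0
    for layer in layers:
        x = layer(x, adj, x0)
    print(x.shape)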


Subject(s)
Benchmarking , Neural Networks, Computer