1.
PLoS One; 19(4): e0290291, 2024.
Article in English | MEDLINE | ID: mdl-38648224

ABSTRACT

As social media becomes a primary news source, it is increasingly difficult to distinguish rumors from genuine information, which invites malicious manipulation that can harm public health or cause financial loss. Traditional models perform poorly when the session structure of comment sections is deliberately disrupted. To address this, we propose a novel rumor detection architecture that combines attention filtering, adversarial training, and dual contrastive learning. The attention filter module screens out malicious as well as uninformative comments, so that nodes enter the GAT graph neural network carrying richer structural information. The adversarial training module (ADV) simulates malicious comments through perturbation, lending the model robustness against such attacks; the perturbed samples also serve as hard negatives for the dual contrastive learning module (DCL), which learns the differences between comment types and contributes its objective to the final loss to strengthen the model. Experimental results show that our AGAD (Attention Graph Adversarial Dual Contrast Learning) model outperforms other state-of-the-art algorithms on a number of rumor detection tasks. The code is available at https://github.com/icezhangGG/AGAD.git.
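
The linked repository contains the authors' implementation; the following is an independent, minimal PyTorch sketch of the three ideas the abstract names: an attention filter that soft-masks low-relevance comment nodes before a graph layer, an FGSM-style embedding perturbation standing in for the adversarial training (ADV) module, and an InfoNCE-style term standing in for the dual contrastive objective (DCL). All class names, shapes, and hyperparameters here are illustrative assumptions, not the paper's specification.

```python
# Minimal sketch of the AGAD components, assuming a PyTorch setting.
# Not the authors' code (see the linked repository); names, shapes,
# and hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFilter(nn.Module):
    """Scores each comment node and soft-masks low-relevance ones,
    so downstream graph attention sees less noise."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x):                        # x: (num_nodes, dim)
        gate = torch.sigmoid(self.score(x))      # (num_nodes, 1) relevance
        return x * gate                          # suppress useless/malicious comments

def fgsm_perturb(emb, loss, eps=0.01):
    """One-step gradient-sign perturbation of node embeddings; a common
    way to simulate adversarial (malicious) comments."""
    grad, = torch.autograd.grad(loss, emb, retain_graph=True)
    return (emb + eps * grad.sign()).detach()

def info_nce(anchor, positive, negatives, tau=0.1):
    """Contrastive term: pull anchor toward its positive view and away
    from negatives (e.g., perturbed embeddings as hard negatives)."""
    pos = F.cosine_similarity(anchor, positive, dim=-1) / tau            # scalar
    neg = F.cosine_similarity(anchor.unsqueeze(0), negatives, -1) / tau  # (K,)
    return -(pos - torch.logsumexp(torch.cat([pos.view(1), neg]), dim=0))

# Toy usage: 5 comment nodes with 16-dim features. In the full model the
# filtered nodes would feed a GAT layer (e.g., torch_geometric's GATConv).
x = torch.randn(5, 16, requires_grad=True)
h = AttentionFilter(16)(x)
task_loss = h.pow(2).mean()            # placeholder for the detection loss
x_adv = fgsm_perturb(x, task_loss)     # perturbed view -> hard negatives for DCL
dcl = info_nce(h[0], h[1], x_adv)      # anchor/positive pair vs. adversarial negatives
```

The key design point the abstract describes is that the same adversarial perturbation does double duty: it regularizes the encoder against malicious comments and supplies hard negatives for the contrastive term.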


Subject(s)
Algorithms , Neural Networks, Computer , Social Media , Humans , Attention , Machine Learning
2.
PLoS One; 18(6): e0286915, 2023.
Article in English | MEDLINE | ID: mdl-37289767

ABSTRACT

Few-shot relation classification identifies the relation between target entity pairs in unstructured natural-language text by training on a small number of labeled samples. Recent prototype-network studies have focused on enhancing the prototype representation capability of models by incorporating external knowledge. However, most of these works constrain the representation of class prototypes only implicitly, through complex network structures such as multi-attention mechanisms, graph neural networks, and contrastive learning, which limits the model's ability to generalize. In addition, most models trained with a triplet loss disregard intra-class compactness, limiting their ability to handle outlier samples with low semantic similarity. This paper therefore proposes a non-weighted prototype enhancement module that uses the feature-level similarity between prototypes and relation information as a gate to filter and complete features. We also design a class cluster loss that mines hard positive and negative samples and explicitly constrains both intra-class compactness and inter-class separability, learning a metric space with high discriminability. Extensive experiments were conducted on the publicly available FewRel 1.0 and 2.0 datasets, and the results demonstrate the effectiveness of the proposed model.
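
As an illustration only (the abstract does not publish code), here is a minimal PyTorch sketch of the two ideas described above: a non-weighted gate that completes prototype features from a relation-information embedding, and a class cluster loss that mines the hardest positive and negative per anchor to enforce intra-class compactness and inter-class separability. All function names, margins, and shapes are assumptions.

```python
# Minimal sketch of the two modules described above, assuming PyTorch.
# Not the paper's implementation; names, margins, and shapes are illustrative.
import torch
import torch.nn.functional as F

def enhance_prototype(proto, rel_emb):
    """Non-weighted gate: use feature-level similarity between the class
    prototype and the relation-information embedding to filter weak
    prototype features and complete them from the relation embedding."""
    gate = torch.sigmoid(proto * rel_emb)          # per-feature agreement in (0, 1)
    return gate * proto + (1.0 - gate) * rel_emb   # keep strong features, fill the rest

def class_cluster_loss(emb, labels, pos_margin=0.2, neg_margin=0.8):
    """Hard-mined margin loss: pull the farthest same-class sample inside
    pos_margin (intra-class compactness) and push the nearest other-class
    sample beyond neg_margin (inter-class separability)."""
    dist = torch.cdist(emb, emb)                           # (N, N) pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)      # (N, N) same-class mask
    eye = torch.eye(len(labels), dtype=torch.bool)
    hard_pos = (dist * (same & ~eye)).max(dim=1).values                # farthest positive
    hard_neg = dist.masked_fill(same, float("inf")).min(dim=1).values  # nearest negative
    return (F.relu(hard_pos - pos_margin) + F.relu(neg_margin - hard_neg)).mean()

# Toy usage: 8 support embeddings from two classes.
emb = F.normalize(torch.randn(8, 32), dim=1)
labels = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
loss = class_cluster_loss(emb, labels)
```

Unlike a plain triplet loss, both terms here are explicitly bounded by margins, so outlier samples with low semantic similarity are still pulled toward their class rather than ignored.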


Subject(s)
Knowledge , Language , Learning , Neural Networks, Computer , Semantics