Results 1 - 2 of 2
1.
JMIR Form Res; 6(2): e18539, 2022 Feb 14.
Article in English | MEDLINE | ID: mdl-35156925

ABSTRACT

BACKGROUND: With the advent of digital technology, and specifically user-generated content on social media, new ways have emerged for studying the possible stigmatization of people in relation to mental health. Several studies have examined the discourse about psychiatric pathologies on Twitter, mostly considering tweets in English and a limited number of psychiatric disorder terms. This paper proposes the first study to analyze the use of a wide range of psychiatric terms in tweets in French.

OBJECTIVE: Our aim is to study how generic, nosographic, and therapeutic psychiatric terms are used on Twitter in French. More specifically, our study has 3 complementary goals: (1) to analyze the types of psychiatric word use (medical, misuse, or irrelevant), (2) to analyze the polarity conveyed in the tweets that use these terms (positive, negative, or neutral), and (3) to compare the frequency of these terms with those observed in related work (mainly in English).

METHODS: Our study was conducted on a corpus of tweets in French posted from January 1, 2016, to December 31, 2018, and collected using dedicated keywords. The corpus was manually annotated by clinical psychiatrists following a multilayer annotation scheme that includes the type of word use and the opinion orientation of the tweet. A qualitative analysis was performed to measure the reliability of the manual annotation, followed by a quantitative analysis focused mainly on term frequency in each layer and the interactions between layers.

RESULTS: The first result is a resource in the form of an annotated dataset. The initial dataset is composed of 22,579 tweets in French containing at least one of the selected psychiatric terms. From this set, experts in psychiatry annotated 3040 randomly selected tweets, which constitute the resource resulting from our work. The second result is the analysis of the annotations, showing that terms are misused in 45.33% (1378/3040) of the tweets and that the associated polarity is negative in 86.21% (1188/1378) of these cases. When considering the 3 types of term use, 52.14% (1585/3040) of the tweets are associated with a negative polarity. Misused terms related to psychotic disorders (721/1300, 55.46%) were more frequent than those related to depression (15/280, 5.4%).

CONCLUSIONS: Some psychiatric terms are misused in the corpora we studied, which is consistent with the results reported in related work in other languages. Thanks to the wide range of terms studied, this work highlights disparities in how psychiatric terms are represented and used. Moreover, our study can help psychiatrists become aware of how these terms are used in widely adopted communication media such as social networks. The study is reproducible thanks to the framework and guidelines we produced, so it can be repeated to analyze how term usage evolves. While the newly built dataset is a valuable resource for other analytical studies, it could also serve to train machine learning algorithms to automatically identify stigma in social media.
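As an illustration of the kind of frequency analysis reported in the RESULTS section, the sketch below tabulates term-use types and the polarity of misused terms from a hypothetical annotated file. The file name, column names, and label values are assumptions for illustration only; the authors' analysis code is not published here.

# Minimal sketch, assuming a hypothetical CSV with one row per annotated tweet
# and two annotation layers as columns: "term_use" (medical / misuse / irrelevant)
# and "polarity" (positive / negative / neutral).
import pandas as pd

df = pd.read_csv("annotated_tweets.csv")  # hypothetical file name

# Share of each type of term use across all annotated tweets
use_shares = (df["term_use"].value_counts() / len(df) * 100).round(2)
print("Term-use distribution (%):")
print(use_shares)

# Polarity distribution restricted to tweets where the term is misused
misused = df[df["term_use"] == "misuse"]
polarity_shares = (misused["polarity"].value_counts() / len(misused) * 100).round(2)
print("Polarity among misused terms (%):")
print(polarity_shares)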

2.
Cognit Comput; 14(1): 322-352, 2022.
Article in English | MEDLINE | ID: mdl-34221180

ABSTRACT

Hate speech and harassment are widespread in online communication, due to users' freedom and anonymity and the lack of regulation provided by social media platforms. Hate speech is topically focused (misogyny, sexism, racism, xenophobia, homophobia, etc.), and each specific manifestation of hate speech targets different vulnerable groups based on characteristics such as gender (misogyny, sexism), ethnicity, race, religion (xenophobia, racism, Islamophobia), sexual orientation (homophobia), and so on. Most automatic hate speech detection approaches cast the problem as a binary classification task without addressing either the topical focus or the target-oriented nature of hate speech. In this paper, we propose to tackle, for the first time, hate speech detection from a multi-target perspective. We leverage manually annotated datasets to investigate the problem of transferring knowledge from different datasets with different topical focuses and targets. Our contribution is threefold: (1) we explore the ability of hate speech detection models to capture common properties from topic-generic datasets and transfer this knowledge to recognize specific manifestations of hate speech; (2) we experiment with the development of models to detect both topics (racism, xenophobia, sexism, misogyny) and hate speech targets, going beyond standard binary classification, to investigate how to detect hate speech at a finer level of granularity and how to transfer knowledge across different topics and targets; and (3) we study the impact of affective knowledge encoded in sentic computing resources (SenticNet, EmoSenticNet) and in semantically structured hate lexicons (HurtLex) on determining specific manifestations of hate speech. We experimented with different neural models, including multi-task approaches. Our study shows that: (1) training a model on a combination of training sets from several topic-specific datasets is more effective than training a model on a topic-generic dataset; (2) the multi-task approach outperforms a single-task model when detecting both the hatefulness of a tweet and its topical focus in the context of a multi-label classification approach; and (3) the models incorporating EmoSenticNet emotions, the first-level emotions of SenticNet, a blend of SenticNet and EmoSenticNet emotions, or affective features based on HurtLex obtained the best results. Our results demonstrate that multi-target hate speech detection from existing datasets is feasible, which is a first step towards hate speech detection for a specific topic or target when dedicated annotated data are missing. Moreover, we show that domain-independent affective knowledge, injected into our models, helps finer-grained hate speech detection.
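To make the multi-task setup described above more concrete, the sketch below shows the general shape of such a model: a shared text encoder feeding two heads, one predicting whether a tweet is hateful and one predicting its topical focus as a multi-label output, with optional affective lexicon features (e.g., HurtLex category counts) concatenated to the shared representation. This is not the authors' implementation; all dimensions, the encoder choice, and the feature extraction are assumptions.

# Minimal multi-task sketch in PyTorch (illustrative assumptions throughout).
import torch
import torch.nn as nn

class MultiTaskHateSpeechModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128,
                 n_topics=4, affect_dim=0):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        rep_dim = 2 * hidden_dim + affect_dim
        self.hate_head = nn.Linear(rep_dim, 1)          # hateful vs. not hateful
        self.topic_head = nn.Linear(rep_dim, n_topics)  # multi-label topical focus

    def forward(self, token_ids, affect_feats=None):
        emb = self.embedding(token_ids)
        _, (h, _) = self.encoder(emb)
        # Concatenate forward and backward final hidden states
        rep = torch.cat([h[-2], h[-1]], dim=-1)
        if affect_feats is not None:
            # Optional lexicon-based affective features (e.g., HurtLex counts)
            rep = torch.cat([rep, affect_feats], dim=-1)
        return self.hate_head(rep), self.topic_head(rep)

# Joint loss: binary cross-entropy for both tasks (topics are multi-label).
def multitask_loss(hate_logits, topic_logits, hate_labels, topic_labels):
    bce = nn.BCEWithLogitsLoss()
    return (bce(hate_logits.squeeze(-1), hate_labels)
            + bce(topic_logits, topic_labels))

Summing the two task losses over a shared encoder is one simple way to realize the multi-task training the abstract refers to; weighting the losses or sharing only lower layers are common variants.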
