1.
J Med Internet Res; 25: e49435, 2023 Nov 23.
Article in English | MEDLINE | ID: mdl-37850906

ABSTRACT

BACKGROUND: To contain and curb the spread of COVID-19, governments around the world have used different strategies (lockdown, mandatory vaccination, immunity passports, voluntary social distancing, etc). OBJECTIVE: This study aims to examine the reactions produced by the public announcement of a binding political decision presented by the president of the French Republic, Emmanuel Macron, on July 12, 2021, which imposed vaccination on caregivers and an immunity passport on all French people for access to restaurants, cinemas, bars, and so forth. METHODS: To measure reactions to this announcement, 901,908 unique tweets posted on Twitter (Twitter Inc) between July 12 and August 11, 2021, were extracted. A neural network was constructed to examine the arguments of the tweets and to identify the types of arguments used by Twitter users. RESULTS: This study shows that in the debate about mandatory vaccination and immunity passports, mostly "con" arguments (399,803/847,725, 47%; χ²₆=952.8; P<.001) and "scientific" arguments (317,156/803,583, 39%; χ²₆=5006.8; P<.001) were used. CONCLUSIONS: This study shows that during July and August 2021, social events permeating the public sphere and discussions about mandatory vaccination and immunity passports collided on Twitter. Moreover, a political decision based on scientific arguments led citizens to challenge it with pseudoscientific arguments contesting the effectiveness of vaccination and the validity of these political decisions.
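
A minimal sketch of the kind of chi-square goodness-of-fit test reported in the RESULTS above, written in Python with SciPy. Only the "con" count (399,803) and the total (847,725) come from the abstract; the split of the remaining argument categories, and the assumption of seven equally likely categories (giving df=6), are illustrative placeholders rather than the study's actual data.

from scipy.stats import chisquare

# Hypothetical counts of tweets per argument category (7 categories -> df = 6).
# Only the "con" count and the 847,725 total are taken from the abstract;
# the other six counts are invented for illustration.
observed = [399_803, 120_000, 90_000, 80_000, 70_000, 50_000, 37_922]

# Null hypothesis: each argument category is equally likely.
result = chisquare(observed)
print(f"chi2 = {result.statistic:.1f}, p = {result.pvalue:.3g}")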


Subject(s)
COVID-19 , Social Media , Humans , COVID-19/prevention & control , Natural Language Processing , Communicable Disease Control , Neural Networks, Computer
2.
JMIR Med Inform; 10(5): e37831, 2022 May 17.
Article in English | MEDLINE | ID: mdl-35512274

ABSTRACT

BACKGROUND: As the COVID-19 pandemic progressed, disinformation, fake news, and conspiracy theories spread through many parts of society. According to the literature, disinformation spreading through social media is one of the causes of increased COVID-19 vaccine hesitancy. In this context, the analysis of social media posts is particularly important, but the large amount of data exchanged on social media platforms requires specific methods. This is why machine learning and natural language processing models are increasingly applied to social media data. OBJECTIVE: The aim of this study is to examine the capability of the CamemBERT French-language model to faithfully predict the categories elaborated for this study, given that tweets about vaccination are often ambiguous, sarcastic, or irrelevant to the studied topic. METHODS: A total of 901,908 unique French-language tweets related to vaccination published between July 12, 2021, and August 11, 2021, were extracted using Twitter's application programming interface (version 2; Twitter Inc). Approximately 2000 randomly selected tweets were labeled with 2 types of categorizations: (1) arguments for (pros) or against (cons) vaccination (health measures included) and (2) type of content (scientific, political, social, or vaccination status). The CamemBERT model was fine-tuned and tested for the classification of French-language tweets. The model's performance was assessed by computing the F1-score, and confusion matrices were obtained. RESULTS: The accuracy of the applied machine learning model reached up to 70.6% for the first classification (pro and con tweets) and up to 90% for the second classification (scientific and political tweets). Furthermore, a tweet was 1.86 times more likely to be incorrectly classified by the model if it contained fewer than 170 characters (odds ratio 1.86; 95% CI 1.20-2.86). CONCLUSIONS: The accuracy of the model is affected by the classification chosen and the topic of the message examined. When the vaccine debate is stirred up by contested political decisions, tweet content becomes so heterogeneous that the accuracy of the model drops for less differentiated classes. However, our tests showed that it is possible to improve the accuracy by selecting tweets using a new method based on tweet length.
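
A minimal sketch, not the authors' code, of fine-tuning CamemBERT for the pro/con tweet classification described above, using Hugging Face Transformers, Datasets, and scikit-learn. The example tweets, labels, output directory, and hyperparameters are invented placeholders; in practice the roughly 2000 annotated tweets would be split into training and held-out evaluation sets.

import numpy as np
from datasets import Dataset
from sklearn.metrics import confusion_matrix, f1_score
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("camembert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "camembert-base", num_labels=2)  # 0 = con, 1 = pro

# Placeholder annotated tweets; the study labeled ~2000 randomly selected tweets.
data = Dataset.from_dict({
    "text": [
        "Le vaccin nous protège tous.",
        "Non au pass sanitaire !",
        "Je viens de recevoir ma deuxième dose.",
        "Vaccination obligatoire = dictature.",
    ],
    "label": [1, 0, 1, 0],
})

def tokenize(batch):
    # Pad to a fixed length so the default collator can batch the examples.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

data = data.map(tokenize, batched=True)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    print(confusion_matrix(labels, preds))  # per-class error structure
    return {"macro_f1": f1_score(labels, preds, average="macro")}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="camembert-procon",
                           num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=data,
    eval_dataset=data,  # in practice, a held-out split
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate())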

3.
J Med Internet Res; 24(5): e28354, 2022 May 27.
Article in English | MEDLINE | ID: mdl-35622395

ABSTRACT

Google Scholar (GS) is a free tool that may be used by researchers to analyze citations; find appropriate literature; or evaluate the quality of an author or a contender for tenure, promotion, a faculty position, funding, or research grants. GS has become a major bibliographic and citation database. For assessing the literature, databases such as PubMed, PsycINFO, Scopus, and Web of Science can be used in place of GS because they are more reliable. The aim of this study was to examine the accuracy of citation data collected from GS and provide a comprehensive description of the errors and miscounts identified. For this purpose, 281 documents that cited 2 specific works were retrieved via Publish or Perish software (PoP) and examined; the 2 cited works studied the false-positive issue inherent in the analysis of neuroimaging data. The results revealed an unprecedented error rate, with 279 of 281 (99.3%) examined references containing at least one error. Nonacademic documents tended to contain more errors than academic publications (U=5117.0; P<.001). This viewpoint article, based on a case study examining GS data accuracy, shows that GS data not only fail to be accurate but also potentially expose researchers who use these data without verification to substantial biases in their analyses and results. Further work must be conducted to assess the consequences of using GS data extracted by PoP.
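
A minimal sketch, not the authors' analysis, of the Mann-Whitney U comparison reported above between academic and nonacademic documents. The per-document error counts below are invented placeholders, not the study's data.

from scipy.stats import mannwhitneyu

# Hypothetical numbers of citation errors per document in each group.
academic_errors = [1, 2, 1, 3, 2, 1, 2]
nonacademic_errors = [4, 6, 3, 5, 7, 4, 5]

# One-sided test: are error counts in academic documents stochastically
# smaller than those in nonacademic documents?
u_stat, p_value = mannwhitneyu(academic_errors, nonacademic_errors,
                               alternative="less")
print(f"U = {u_stat:.1f}, p = {p_value:.3g}")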


Subject(s)
Bibliometrics , Search Engine , Databases, Factual , Humans , PubMed , Publishing