1.
Alcohol Alcohol; 59(2), 2024 Jan 17.
Article in English | MEDLINE | ID: mdl-38234055

ABSTRACT

BACKGROUND: Music is an integral part of our lives and is often played in public places such as restaurants. In a bar scenario, people exposed to music containing alcohol-related lyrics consumed significantly more alcohol than those exposed to music with fewer alcohol-related lyrics. Existing methods to quantify alcohol exposure in song lyrics rely on manual annotation, which is burdensome and time-intensive. In this paper, we aim to build a deep learning algorithm (LYDIA) that can automatically detect and identify alcohol exposure and its context in song lyrics.
METHODS: We identified 673 potentially alcohol-related words, including brand names, urban slang, and beverage names. We collected the lyrics of all Billboard top-100 songs from 1959 to 2020 (N = 6110). We developed an annotation tool to annotate both the alcohol-relation of each word (alcohol, non-alcohol, or unsure) and its context (positive, negative, or neutral) in the song lyrics.
RESULTS: LYDIA achieved an accuracy of 86.6% in identifying the alcohol-relation of a word, and 72.9% in identifying its context. LYDIA can distinguish with an accuracy of 97.24% between words with a positive and a negative relation to alcohol, and with an accuracy of 98.37% between positive and negative contexts.
CONCLUSION: LYDIA can automatically identify alcohol exposure and its context in song lyrics, which will allow the swift analysis of future lyrics and can be used to help raise awareness of the amount of alcohol in music.
Highlights:
- Developed a deep learning algorithm (LYDIA) to identify alcohol words in songs.
- LYDIA achieved an accuracy of 86.6% in identifying the alcohol-relation of words.
- LYDIA's accuracy in identifying positive, negative, or neutral context was 72.9%.
- LYDIA can automatically provide evidence of alcohol in millions of songs.
- This can raise awareness of the harms of listening to songs with alcohol words.
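The first stage of an approach like LYDIA's — matching a lexicon of potentially alcohol-related words against lyrics before classifying each hit — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the tiny lexicon and the helper name `find_candidate_words` are assumptions, standing in for the paper's 673-word list and its deep learning classifier.

```python
import re

# Hypothetical miniature lexicon; the paper's actual list has 673 entries
# covering brand names, urban slang, and beverage names.
LEXICON = {"beer", "wine", "whiskey", "hennessy", "booze", "champagne"}

def find_candidate_words(lyrics: str) -> list:
    """Return (position, word) pairs for lexicon hits in the lyrics.

    In a full pipeline, each hit plus its surrounding context would be
    passed to a classifier that labels the relation (alcohol /
    non-alcohol / unsure) and the context (positive / negative / neutral).
    """
    hits = []
    for match in re.finditer(r"[a-z']+", lyrics.lower()):
        if match.group() in LEXICON:
            hits.append((match.start(), match.group()))
    return hits

hits = find_candidate_words("Pour up some wine, sip that Hennessy slow")
print([word for _, word in hits])  # ['wine', 'hennessy']
```

Lexicon matching alone cannot disambiguate words like "shot" or brand names used in non-alcohol senses, which is exactly why the paper adds the relation/context classification step.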


Subjects
Deep Learning, Music, Humans, Magnetic Resonance Imaging, Beverages
2.
Addiction; 119(5): 951-959, 2024 May.
Article in English | MEDLINE | ID: mdl-38212974

ABSTRACT

A vast amount of media-related text data is generated daily in the form of social media posts, news stories, and academic articles. These text data provide opportunities for researchers to analyse and understand how substance-related issues are being discussed. The main methods for analysing large text data (content analyses or specifically trained deep-learning models) require substantial manual annotation and resources. A machine-learning approach called 'zero-shot learning' may be quicker, more flexible, and require fewer resources. Zero-shot learning uses models trained on large, unlabelled (or weakly labelled) data sets to classify previously unseen data into categories on which the model has not been specifically trained. This means that a pre-existing zero-shot learning model can be used to analyse media-related text data without the need for task-specific annotation or model training. This approach may be particularly important for analysing time-critical data. This article describes the relatively new concept of zero-shot learning and how it can be applied to text data in substance use research, including a brief practical tutorial.
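The core zero-shot idea — scoring a text against natural-language label descriptions the model was never trained on, rather than against a fixed output layer — can be illustrated with a toy sketch. Real zero-shot classification uses a large pretrained encoder or NLI model; here a simple bag-of-words cosine similarity stands in for that encoder, and all names and label descriptions are assumptions for illustration only.

```python
from collections import Counter
import math

def bow(text: str) -> Counter:
    """Crude bag-of-words vector; a pretrained encoder would go here."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def zero_shot_classify(text: str, label_descriptions: dict) -> str:
    """Assign the label whose description is most similar to the text.

    No task-specific training happens: changing the categories means
    only changing the label descriptions."""
    scores = {label: cosine(bow(text), bow(desc))
              for label, desc in label_descriptions.items()}
    return max(scores, key=scores.get)

labels = {
    "alcohol-related": "drinking alcohol beer wine drunk bar",
    "not alcohol-related": "weather sports travel music news",
}
print(zero_shot_classify("had a few beers at the bar last night", labels))
```

The flexibility the abstract highlights is visible here: adding a new category requires only writing its description, not collecting annotated examples.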


Subjects
Social Media, Substance-Related Disorders, Humans, Machine Learning
3.
Sci Rep; 13(1): 11891, 2023 Jul 23.
Article in English | MEDLINE | ID: mdl-37482586

ABSTRACT

Exposure to alcohol content in media increases alcohol consumption and related harm. With the exponential growth of media content, it is important to use algorithms to automatically detect and quantify alcohol exposure. Foundation models such as Contrastive Language-Image Pretraining (CLIP) can detect alcohol exposure through Zero-Shot Learning (ZSL) without any additional training. In this paper, we evaluated the ZSL performance of CLIP against a supervised algorithm called Alcoholic Beverage Identification Deep Learning Algorithm Version-2 (ABIDLA2), which is specifically trained to recognise alcoholic beverages in images, across three tasks. We found that ZSL achieved performance similar to ABIDLA2 in two out of three tasks. However, ABIDLA2 outperformed ZSL in a fine-grained classification task in which determining subtle differences among alcoholic beverages (including containers) is essential. We also found that phrase engineering is essential for improving the performance of ZSL. To conclude, like ABIDLA2, ZSL with a little phrase engineering can achieve promising performance in identifying alcohol exposure in images. This makes it easier for researchers with little or no programming background to implement ZSL effectively and obtain insightful analytics from digital media. Such analytics can assist researchers and policy makers in proposing regulations that reduce alcohol exposure and, ultimately, alcohol consumption.
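Phrase engineering for CLIP-style ZSL amounts to wrapping each class name in several natural-language templates before embedding them with the text encoder. The templates and class names below are assumptions for illustration (the paper's actual phrasings are not given here), and only the prompt-construction step is shown; scoring would require the CLIP model itself.

```python
# Hypothetical phrase templates and classes; the phrasings actually
# evaluated in the paper are not reproduced here.
TEMPLATES = [
    "a photo of {}",
    "a photo containing {}",
    "an advertisement showing {}",
]
CLASSES = ["a bottle of beer", "a glass of wine", "no alcoholic beverage"]

def build_prompts(templates, classes):
    """Expand every class name into several natural-language phrasings.

    With CLIP, each prompt would be embedded by the text encoder, prompt
    scores averaged per class, and an image assigned the class whose
    prompts match its embedding best. Here we only build the prompt set."""
    return {c: [t.format(c) for t in templates] for c in classes}

prompts = build_prompts(TEMPLATES, CLASSES)
print(prompts["a glass of wine"][0])  # a photo of a glass of wine
```

Averaging over several templates per class tends to be more robust than a single bare class name, which is one plausible reading of why phrase engineering mattered in the evaluation.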

5.
Alcohol; 109: 49-54, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36584742

ABSTRACT

BACKGROUND: Acute alcohol intoxication impairs cognitive and psychomotor abilities, leading to various public health hazards such as road traffic accidents and alcohol-related violence. Intoxicated individuals are usually identified by measuring their blood alcohol concentration (BAC) using breathalyzers, which are expensive and labor-intensive. In this paper, we developed the Audio-based Deep Learning Algorithm to Identify Alcohol Inebriation (ADLAIA), which can instantly predict an individual's intoxication status from a 12-s recording of their speech.
METHODS: ADLAIA was trained on the publicly available German Alcohol Language Corpus, which comprises 12,360 audio clips of inebriated and sober speakers (162 speakers in total, aged 21-64, 47.7% female). ADLAIA's performance was determined by computing the unweighted average recall (UAR) and accuracy of inebriation prediction.
RESULTS: ADLAIA was able to identify inebriated speakers - with a BAC of 0.05% or higher - with a UAR of 68.09% and an accuracy of 67.67%. ADLAIA had a higher performance (UAR of 75.7%) in identifying intoxicated speakers (BAC > 0.12%).
CONCLUSION: Being able to identify intoxicated individuals solely from their speech, ADLAIA could be integrated into mobile applications and used in environments (such as bars or sports stadiums) to obtain instantaneous results about the inebriation status of individuals.
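The abstract reports UAR alongside accuracy because the corpus is class-imbalanced: UAR averages per-class recall so the minority class counts as much as the majority class. A minimal implementation, with toy labels that are purely illustrative:

```python
def unweighted_average_recall(y_true, y_pred):
    """UAR: the mean of per-class recall, so each class contributes
    equally even when one class (e.g. sober clips) dominates the data."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        correct = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(correct / len(idx))
    return sum(recalls) / len(recalls)

# Toy example: "sober" is the majority class; a classifier that misses
# half the inebriated clips still has 90% plain accuracy but 75% UAR.
y_true = ["sober"] * 8 + ["inebriated"] * 2
y_pred = ["sober"] * 8 + ["inebriated", "sober"]
print(unweighted_average_recall(y_true, y_pred))  # 0.75
```

A degenerate classifier that always predicts "sober" would score 80% accuracy on this toy data but only 50% UAR, which is why UAR is the more honest headline number for imbalanced speech corpora.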


Subjects
Alcoholic Intoxication, Deep Learning, Humans, Female, Male, Blood Alcohol Content, Alcohol Drinking, Violence, Ethanol
6.
Alcohol Clin Exp Res; 46(10): 1837-1845, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36242596

ABSTRACT

BACKGROUND: Seeing alcohol in media has been demonstrated to increase alcohol craving, impulsive decision-making, and hazardous drinking. Because of the exponential growth of (social) media use, it is important to develop algorithms that quantify alcohol exposure in electronic images efficiently. In this article, we describe the development of an improved version of the Alcoholic Beverage Identification Deep Learning Algorithm (ABIDLA), called ABIDLA2.
METHODS: ABIDLA2 was trained on 191,286 images downloaded from Google Image Search and Bing Image Search results (based on search terms). In Task-1, ABIDLA2 identified images as containing one of eight beverage categories (beer/cider cup, beer/cider bottle, beer/cider can, wine, champagne, cocktails, whiskey/cognac/brandy, other images). In Task-2, ABIDLA2 made a binary classification between images containing an "alcoholic beverage" or "other". An ablation study was performed to determine which techniques improved algorithm performance.
RESULTS: ABIDLA2 was most accurate in identifying whiskey/cognac/brandy (88.1%), followed by beer/cider can (80.5%), beer/cider bottle (78.3%), and wine (77.8%). Its overall accuracy was 77.0% (Task-1) and 87.7% (Task-2). Even the identification of the least accurate beverage category (champagne, 64.5%) was more than five times higher than random chance (12.5% = 1/8 categories). The implementation of a balanced data sampler to address class skewness, and the use of self-training to exploit a large, secondary, weakly labeled dataset, particularly improved overall algorithm performance.
CONCLUSION: With extended capabilities and higher accuracy, ABIDLA2 outperforms its predecessor and enables the rapid screening of any kind of electronic media to estimate the quantity of alcohol exposure. Quantifying alcohol exposure automatically through algorithms like ABIDLA2 is important because viewing images of alcoholic beverages in media tends to increase alcohol consumption and related harms.
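The balanced data sampler mentioned in the results addresses class skewness by drawing training examples with probability inversely proportional to class frequency. A minimal sketch of that idea, with toy labels that are assumptions for illustration (a real training loop would hand such weights to a weighted sampler in the deep learning framework):

```python
from collections import Counter
import random

def balanced_sample_weights(labels):
    """Weight each example inversely to its class frequency, so a
    weighted sampler draws all classes at roughly equal rates and the
    minority class (e.g. champagne) is not drowned out."""
    counts = Counter(labels)
    return [1.0 / counts[lab] for lab in labels]

# Skewed toy dataset: 6 beer, 3 wine, 1 champagne image label.
labels = ["beer"] * 6 + ["wine"] * 3 + ["champagne"]
weights = balanced_sample_weights(labels)

random.seed(0)
draws = random.choices(labels, weights=weights, k=3000)
print(Counter(draws))  # each class drawn roughly 1000 times
```

Each class's weights sum to 1.0 regardless of its size, which is what equalizes the draw rates; without the weights, champagne would appear in only ~10% of training batches.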


Subjects
Deep Learning, Alcoholic Beverages, Beer, Ethanol, Beverages, Electronics
7.
Drug Alcohol Depend; 208: 107841, 2020 Mar 1.
Article in English | MEDLINE | ID: mdl-31954949

ABSTRACT

BACKGROUND: Evidence demonstrates that seeing alcoholic beverages in electronic media increases alcohol initiation and frequent and excessive drinking, particularly among young people. To assess this exposure efficiently, the aim was to develop the Alcoholic Beverage Identification Deep Learning Algorithm (ABIDLA) to automatically identify beer, wine, and champagne/sparkling wine in images.
METHODS: Using specifically developed software, three coders annotated 57,186 images downloaded from Google. Supplemented by 10,000 images from ImageNet, images were split randomly into training data (70%), validation data (10%), and testing data (20%). For retest reliability, a fourth coder re-annotated a random subset of 2004 images. Algorithms were trained using two state-of-the-art convolutional neural networks, Resnet (with different depths) and Densenet-121.
RESULTS: With a correct classification (accuracy) of 73.75% when using six beverage categories (beer glass, beer bottle, beer can, wine, champagne, and other images), 84.09% with three (beer, wine/champagne, others), and 85.22% with two (beer/wine/champagne, others), Densenet-121 slightly outperformed all Resnet models. The highest accuracy was obtained for wine (78.91%), followed by beer can (77.43%) and beer cup (73.56%). Interrater reliability was almost perfect between the coders and the expert (kappa = 0.903) and substantial between Densenet-121 and the coders (kappa = 0.681).
CONCLUSIONS: Free from any response or coding burden and with a relatively high accuracy, the ABIDLA offers the possibility of screening all kinds of electronic media for images of alcohol. Providing more comprehensive evidence on exposure to alcoholic beverages is important because exposure instigates alcohol initiation and frequent and excessive drinking.
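The kappa values reported here are Cohen's kappa, which corrects raw agreement for the agreement two raters would reach by chance given their label frequencies. A minimal implementation, with toy ratings that are purely illustrative (not the study's data):

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa: observed agreement between two raters, corrected
    for chance agreement implied by each rater's label frequencies."""
    n = len(r1)
    labels = set(r1) | set(r2)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    p_chance = sum((r1.count(c) / n) * (r2.count(c) / n) for c in labels)
    return (p_obs - p_chance) / (1 - p_chance)

# Toy ratings of 10 images by a human coder and a model.
coder = ["beer", "beer", "wine", "wine", "other",
         "other", "beer", "wine", "other", "beer"]
model = ["beer", "beer", "wine", "other", "other",
         "other", "beer", "wine", "beer", "beer"]
print(round(cohens_kappa(coder, model), 3))  # 0.692
```

Here raw agreement is 80%, but kappa is lower (about 0.69) because some of that agreement is expected by chance; on the conventional scale used in the abstract, values above 0.8 are "almost perfect" and 0.61-0.80 "substantial".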


Subjects
Alcohol Drinking/psychology, Alcoholic Beverages/classification, Algorithms, Deep Learning, Mass Media/classification, Pattern Recognition, Automated/classification, Adolescent, Adult, Beer/classification, Female, Humans, Male, Pattern Recognition, Automated/methods, Reproducibility of Results, Wine/classification