Results 1 - 12 of 12
1.
JMIR Infodemiology ; 3: e43011, 2023 Aug 16.
Article in English | MEDLINE | ID: mdl-37379362

ABSTRACT

BACKGROUND: During the COVID-19 pandemic, web-based media coverage of preventative strategies proliferated substantially. News media constantly informed people about changes in public health policy and practices such as mask-wearing. Hence, exploring news media content on face mask use is useful for analyzing dominant topics and their trends. OBJECTIVE: The aim of the study was to examine news related to face masks and to identify related topics and temporal trends in Australian web-based news media during the early COVID-19 pandemic period. METHODS: Following data collection from the Google News platform, a trend analysis of mask-related news titles from Australian news publishers was conducted. A latent Dirichlet allocation topic modeling algorithm was then applied, together with quantitative and qualitative evaluation metrics. Afterward, topic trends were developed and analyzed in the context of mask use during the pandemic. RESULTS: A total of 2345 eligible face mask-related news titles were collected from January 25, 2020, to January 25, 2021. Mask-related news showed an increasing trend corresponding to increasing COVID-19 cases in Australia. The best-fitted latent Dirichlet allocation model discovered 8 topics with a coherence score of 0.66 and a perplexity measure of -11.29. The major topics were T1 (mask-related international affairs), T2 (introduction of mask mandates in places such as Melbourne and Sydney), and T4 (antimask sentiment). Topic trends revealed that T2 was the most frequent topic in January 2021 (77 news titles), corresponding to the mandatory mask-wearing policy in Sydney. CONCLUSIONS: This study demonstrated that Australian news media reflected a wide range of community concerns about face masks, peaking as COVID-19 incidence increased. Harnessing news media platforms to understand the media agenda and community concerns may assist in effective health communication during a pandemic response.
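The abstract does not include the modelling pipeline; the sketch below shows, under assumptions, how LDA topic modelling with coherence and perplexity evaluation is commonly set up with gensim, using a few invented news titles as placeholder data rather than the study's corpus.

```python
# Minimal sketch of LDA topic modelling over news titles with coherence and
# perplexity evaluation. The titles are placeholder data; the study's own
# corpus and preprocessing are not reproduced here.
from gensim.corpora import Dictionary
from gensim.models import LdaModel, CoherenceModel

titles = [
    "melbourne introduces mandatory mask wearing in public places",
    "sydney extends mask mandate on public transport",
    "protesters rally against face mask rules in melbourne",
    "health officials urge mask wearing as cases rise in sydney",
]
texts = [t.split() for t in titles]            # naive tokenisation
dictionary = Dictionary(texts)                 # word <-> id mapping
corpus = [dictionary.doc2bow(t) for t in texts]

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
               passes=10, random_state=0)

coherence = CoherenceModel(model=lda, texts=texts, dictionary=dictionary,
                           coherence="c_v").get_coherence()
perplexity = lda.log_perplexity(corpus)        # per-word log-perplexity bound
print(f"coherence={coherence:.2f}, log perplexity={perplexity:.2f}")
for topic_id, words in lda.print_topics(num_words=5):
    print(topic_id, words)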

2.
Article in English | MEDLINE | ID: mdl-35263257

ABSTRACT

Detecting a community in a network means discerning the distinct features and connections of a group of members that set them apart from other communities. The ability to do this is of great significance in network analysis. Beyond the classic spectral clustering and statistical inference methods, recent years have brought significant developments in deep learning techniques for community detection, particularly for handling high-dimensional network data. Hence, a comprehensive review of the latest progress in community detection through deep learning is timely. To frame the survey, we have devised a new taxonomy covering different state-of-the-art methods, including deep learning models based on deep neural networks (DNNs), deep nonnegative matrix factorization, and deep sparse filtering. The main category, DNNs, is further divided into convolutional networks, graph attention networks, generative adversarial networks, and autoencoders. The popular benchmark datasets, evaluation metrics, and open-source implementations used in experimental settings are also summarized. This is followed by a discussion of practical applications of community detection in various domains. The survey concludes with suggestions for challenging topics that would make fruitful future research directions in this fast-growing deep learning field.
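As a concrete point of reference for the classic baseline that the survey contrasts with deep methods, here is a minimal spectral-clustering sketch using networkx's karate club graph and scikit-learn; this is illustrative tooling only, not code from the survey.

```python
# Classic spectral clustering for community detection on a small benchmark
# graph. Stand-in example, not from the survey.
import networkx as nx
import numpy as np
from sklearn.cluster import SpectralClustering

G = nx.karate_club_graph()                     # 34-node benchmark graph
A = nx.to_numpy_array(G)                       # adjacency matrix

model = SpectralClustering(n_clusters=2, affinity="precomputed",
                           assign_labels="kmeans", random_state=0)
labels = model.fit_predict(A)                  # community label per node

for community in np.unique(labels):
    members = [n for n, lab in zip(G.nodes(), labels) if lab == community]
    print(f"community {community}: {members}")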

3.
IEEE Comput Graph Appl ; 41(2): 8-16, 2021.
Article in English | MEDLINE | ID: mdl-33729921

ABSTRACT

We argue that visualization research has overwhelmingly focused on users from the economically developed world. However, billions of people around the world are rapidly emerging as new users of information technology. Most of the next billion users of visualization technologies will come from parts of the world that are extremely populous but have historically been ignored by the visualization research community. Their needs may differ from those of the users that researchers have targeted in the past, but, at the same time, they may have even more to gain in terms of access to data potentially affecting their quality of life. We propose a call to action for the visualization community: identify opportunities and use cases where these users can benefit from visualization; develop universal design principles; extend evaluations to include the general population; and engage with a wider global population.

4.
BMC Bioinformatics ; 21(Suppl 19): 572, 2020 Dec 21.
Article in English | MEDLINE | ID: mdl-33349237

ABSTRACT

BACKGROUND: Finding relevant literature is crucial for many biomedical research activities and in the practice of evidence-based medicine. Search engines such as PubMed provide a means to search and retrieve published literature, given a query. However, they are limited in how users can control the processing of queries and articles (or, as we call them, documents) by the search engine. To give this control to both biomedical researchers and computer scientists working in biomedical information retrieval, we introduce a public online tool for searching over biomedical literature. Our setup is guided by the NIST setup of the relevant TREC evaluation tasks in genomics, clinical decision support, and precision medicine. RESULTS: To provide benchmark results for some of the most common biomedical information retrieval strategies, such as querying MeSH subject headings with a specific weight or querying over the titles of articles only, we present our evaluations on public datasets. Our experiments report well-known information retrieval metrics such as precision at a cutoff of ranked documents. CONCLUSIONS: We introduce the A2A search and benchmarking tool, which is publicly available to researchers who want to explore different search strategies over published biomedical literature. We outline several query formulation strategies and present their evaluations against known human judgements for a large pool of topics, from genomics to precision medicine.
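The reported metric, precision at a cutoff of ranked documents, can be illustrated with a short self-contained function; the document identifiers and relevance judgements below are invented for the example and are not from the A2A datasets.

```python
# Precision-at-k: fraction of the top-k retrieved documents that are relevant.
# Invented ranking and judgements, for illustration only.
def precision_at_k(ranked_docs, relevant_docs, k):
    """Compute precision over the first k documents of a ranking."""
    top_k = ranked_docs[:k]
    hits = sum(1 for doc in top_k if doc in relevant_docs)
    return hits / k

ranked = ["d3", "d7", "d1", "d9", "d2"]        # system ranking for one topic
relevant = {"d1", "d3", "d4"}                  # human judgements (qrels)
print(precision_at_k(ranked, relevant, k=3))   # d3 and d1 relevant -> 0.666...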


Subjects
Information Storage and Retrieval/methods; Software; Biomedical Research; Databases, Factual; Humans; Medical Subject Headings
5.
Sci Rep ; 10(1): 18241, 2020 10 26.
Article in English | MEDLINE | ID: mdl-33106506

ABSTRACT

This study examines publicly available online search data in China to investigate the spread of public awareness of the 2019 novel coronavirus (SARS-CoV-2) outbreak. We found that cities that had previously suffered from SARS (in 2003-04) and had greater migration ties to Wuhan developed earlier, stronger, and more durable public awareness of the outbreak. Our data indicate that 48 such cities developed awareness up to 19 days earlier than 255 comparable cities, giving them an opportunity to better prepare. This study suggests that it is important to consider the memory of prior catastrophic events, as such memories influence the public response to emerging threats.


Subjects
Awareness; Coronavirus Infections/pathology; Interpersonal Relations; Pneumonia, Viral/pathology; Betacoronavirus/isolation & purification; Blogging; COVID-19; China/epidemiology; Coronavirus Infections/epidemiology; Coronavirus Infections/virology; Disease Outbreaks; Humans; Memory; Pandemics; Pneumonia, Viral/epidemiology; Pneumonia, Viral/virology; SARS-CoV-2; Social Media
6.
PLoS One ; 15(3): e0230322, 2020.
Article in English | MEDLINE | ID: mdl-32182277

ABSTRACT

First reported in March 2014, an Ebola epidemic affected West Africa, most notably Liberia, Guinea, and Sierra Leone. We demonstrate the value of social media for automated surveillance of infectious diseases such as the West African Ebola epidemic. We experiment with two variations of an existing surveillance architecture: the first aggregates tweets related to different symptoms together, while the second considers tweets about each symptom separately and then aggregates the set of alerts generated by the architecture. Using a dataset of tweets posted from the affected region from 2011 to 2014, we obtain alerts in December 2013, three months prior to the official announcement of the epidemic. Of the two variations, the second, which produces a restricted but useful set of alerts, can potentially be applied to other infectious disease surveillance and alert systems.
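A minimal sketch of the two aggregation strategies described above: pooling counts of symptom-related tweets into a single stream before alerting, versus alerting per symptom and merging the alerts. The simple threshold rule and the counts are placeholders; the paper's actual monitoring algorithm is not reproduced here.

```python
# Two ways of combining per-symptom tweet streams into alerts.
# Toy counts and a placeholder threshold rule, for illustration only.
from collections import defaultdict

def alerts(daily_counts, threshold):
    """Return the days on which the count exceeds a fixed threshold."""
    return {day for day, count in daily_counts.items() if count > threshold}

counts = {  # toy daily tweet counts per symptom
    "fever":    {"2013-12-01": 4, "2013-12-02": 9},
    "vomiting": {"2013-12-01": 2, "2013-12-02": 6},
}

# Variation 1: aggregate all symptoms into a single stream, then alert.
pooled = defaultdict(int)
for per_day in counts.values():
    for day, n in per_day.items():
        pooled[day] += n
alerts_v1 = alerts(pooled, threshold=10)

# Variation 2: alert per symptom, then take the union of the alert sets.
alerts_v2 = set().union(*(alerts(c, threshold=5) for c in counts.values()))

print("variation 1 alerts:", alerts_v1)   # {'2013-12-02'}
print("variation 2 alerts:", alerts_v2)   # {'2013-12-02'}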


Subjects
Data Mining/methods; Epidemics/prevention & control; Epidemiological Monitoring; Hemorrhagic Fever, Ebola/epidemiology; Social Media/statistics & numerical data; Datasets as Topic; Ebolavirus; Epidemics/statistics & numerical data; Guinea/epidemiology; Hemorrhagic Fever, Ebola/diagnosis; Hemorrhagic Fever, Ebola/virology; Humans; Liberia/epidemiology; Sierra Leone/epidemiology
7.
Epidemiology ; 31(1): 90-97, 2020 01.
Article in English | MEDLINE | ID: mdl-31651659

ABSTRACT

BACKGROUND: Melbourne, Australia, witnessed a thunderstorm asthma outbreak on 21 November 2016, resulting in over 8,000 hospital admissions by 6 P.M. This is a typical acute disease event. Because the time to respond to acute disease events is short, an algorithm based on the time between events has shown promise: the shorter the time between consecutive incidents of the disease, the more likely an outbreak. Social media posts such as tweets can be used as input to the monitoring algorithm. However, due to the large volume of tweets, a large number of alerts may be produced. We refer to this problem as alert swamping. METHODS: We present a four-step architecture for the early detection of acute disease events using social media posts (tweets) on Twitter. To curb alert swamping, the first three steps of the algorithm ensure the relevance of the tweets. The fourth step is a monitoring algorithm based on the time between events. We experiment with a dataset of tweets posted in Melbourne from 2014 to 2016, focusing on the thunderstorm asthma outbreak in Melbourne in November 2016. RESULTS: Out of our 18 experiment combinations, three detected the thunderstorm asthma outbreak up to 9 hours before the time mentioned in the official report, and five detected it before the first news report. CONCLUSIONS: With appropriate checks against alert swamping in place, and using a monitoring algorithm based on the time between events, tweets can provide early alerts for an acute disease event such as thunderstorm asthma.
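The time-between-events idea can be illustrated with a small sketch: raise an alert whenever the gap between consecutive relevant tweets falls below a chosen threshold. The threshold rule and timestamps are assumptions for illustration, not the study's actual algorithm or parameters.

```python
# Illustrative time-between-events monitor: an alert is raised when the gap
# between consecutive relevant tweets drops below a fixed threshold.
from datetime import datetime, timedelta

def gap_alerts(timestamps, max_gap):
    """Yield the timestamps at which the inter-event gap falls below max_gap."""
    timestamps = sorted(timestamps)
    for previous, current in zip(timestamps, timestamps[1:]):
        if current - previous < max_gap:
            yield current

tweets = [
    datetime(2016, 11, 21, 17, 0),
    datetime(2016, 11, 21, 17, 40),
    datetime(2016, 11, 21, 17, 45),   # only 5 minutes after the previous tweet
]
for alert_time in gap_alerts(tweets, max_gap=timedelta(minutes=10)):
    print("alert at", alert_time)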


Subjects
Asthma; Disease Outbreaks; Public Health Surveillance; Social Media; Acute Disease; Algorithms; Asthma/epidemiology; Australia/epidemiology; Humans; Public Health Surveillance/methods
8.
Behav Res Methods ; 51(4): 1766-1781, 2019 08.
Article in English | MEDLINE | ID: mdl-30941697

ABSTRACT

To qualitative researchers, social media offers a novel opportunity to harvest a massive and diverse range of content without the need for intrusive or intensive data collection procedures. However, performing a qualitative analysis across a massive social media data set is cumbersome and impractical. Instead, researchers often extract a subset of content to analyze, but a framework to facilitate this process is currently lacking. We present a four-phased framework for improving this extraction process, which blends the capacity of data science techniques to compress large data sets into smaller spaces with the capability of qualitative analysis to address research questions. We demonstrate this framework by investigating the topics of Australian Twitter commentary on climate change, using quantitative (non-negative matrix inter-joint factorization; topic alignment) and qualitative (thematic analysis) techniques. Our approach is useful for researchers seeking to perform qualitative analyses of social media, or for researchers wanting to supplement their quantitative work with a qualitative analysis of broader social context and meaning.
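The framework's quantitative phase uses non-negative matrix inter-joint factorisation (NMijF); as a simpler stand-in, the sketch below extracts topics from a handful of invented tweets with ordinary NMF from scikit-learn, which conveys the compression idea without reproducing the paper's method.

```python
# Topic extraction with plain NMF as a stand-in for NMijF.
# The tweets are invented placeholder strings.
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

tweets = [
    "climate change is driving longer bushfire seasons",
    "carbon emissions target debated in parliament",
    "renewable energy investment grows despite policy uncertainty",
    "bushfire smoke and climate change linked in new report",
]
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(tweets)                 # documents x terms

nmf = NMF(n_components=2, random_state=0)
doc_topics = nmf.fit_transform(X)                    # documents x topics
terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(nmf.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:4]]
    print(f"topic {topic_idx}: {top_terms}")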


Subjects
Social Media/statistics & numerical data; Algorithms; Data Collection; Qualitative Research; Social Environment
9.
J Biomed Inform ; 59: 169-84, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26631762

ABSTRACT

BACKGROUND: Evidence-based medicine practice requires medical practitioners to rely on the best available evidence, in addition to their expertise, when making clinical decisions. The medical domain boasts a large amount of published medical research data, indexed in various medical databases such as MEDLINE. As the size of this data grows, practitioners increasingly face the problem of information overload, and past research has established the time-related obstacles faced by evidence-based medicine practitioners. In this paper, we focus on the problem of automatic text summarisation to help practitioners quickly find query-focused information from relevant documents. METHODS: We utilise an annotated corpus that is specialised for the task of evidence-based summarisation of text. In contrast to past summarisation approaches, which mostly rely on surface-level features to identify the salient pieces of text that form the summaries, our approach focuses on the use of corpus-based statistics and domain-specific lexical knowledge for the identification of summary content. We also apply a target-sentence-specific summarisation technique that reduces the problem of underfitting that persists in generic summarisation models. RESULTS: In automatic evaluations run over a large number of annotated summaries, our extractive summarisation technique statistically outperforms various baseline and benchmark summarisation models, with a percentile rank of 96.8%. A manual evaluation shows that our extractive summarisation approach is capable of selecting content with high recall and precision, and may thus be used to generate bottom-line answers to practitioners' queries. CONCLUSIONS: Our research shows that the incorporation of specialised data and domain-specific knowledge can significantly improve text summarisation performance in the medical domain. Given the vast amount of medical text available and the high growth of this form of data, we expect such summarisation techniques to help address the time-related obstacles associated with evidence-based medicine.
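The corpus statistics and domain-specific lexical knowledge that drive the summariser are not reproduced here; the sketch below shows only the generic skeleton of query-focused extractive summarisation, ranking sentences by tf-idf cosine similarity to the query, using invented sentences.

```python
# Generic query-focused extractive summarisation skeleton: rank sentences by
# tf-idf similarity to the query and keep the top two. Invented text; not the
# paper's corpus-statistics approach.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

query = "does aspirin reduce the risk of stroke"
sentences = [
    "Aspirin reduced the risk of ischaemic stroke in the treatment group.",
    "Participants were recruited from three outpatient clinics.",
    "Gastrointestinal bleeding was more frequent with aspirin.",
]
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([query] + sentences)
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()

# take the two highest-scoring sentences as the extractive summary
summary = [s for _, s in sorted(zip(scores, sentences), reverse=True)[:2]]
print(summary)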


Subjects
Data Mining/methods; Evidence-Based Medicine; Medical Informatics/methods; Humans; Models, Statistical
10.
Artif Intell Med ; 64(2): 89-103, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25983133

ABSTRACT

BACKGROUND: Evidence-based medicine practice requires practitioners to obtain the best available medical evidence and to appraise the quality of that evidence when making clinical decisions. Primarily due to the plethora of electronically available data from the medical literature, the manual appraisal of the quality of evidence is a time-consuming process. We present a fully automatic approach for predicting the quality of medical evidence in order to aid practitioners at the point of care. METHODS: Our approach extracts relevant information from medical article abstracts and utilises data from a specialised corpus to apply supervised machine learning for the prediction of quality grades. Following an in-depth analysis of the usefulness of features (e.g., the publication types of articles), the features are extracted from the text via rule-based approaches and from the metadata associated with the articles, and then applied in the supervised classification model. We propose a highly scalable and portable approach using a sequence of high-precision classifiers, and introduce a simple evaluation metric called average error distance (AED) that simplifies the comparison of systems. We also perform elaborate human evaluations to compare the performance of our system against human judgements. RESULTS: We test and evaluate our approaches on a publicly available, specialised, annotated corpus containing 1132 evidence-based recommendations. Our rule-based approach performs exceptionally well at the automatic extraction of publication types of articles, with F-scores of up to 0.99 for high-quality publication types. For evidence quality classification, our approach obtains an accuracy of 63.84% and an AED of 0.271. The human evaluations show that the performance of our system, in terms of AED and accuracy, is comparable to the performance of humans on the same data. CONCLUSIONS: The experiments suggest that our structured text classification framework achieves evaluation results comparable to human performance. Our overall classification approach and evaluation technique are also highly portable and can be used with various evidence grading scales.
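The abstract introduces average error distance (AED) without spelling out its formula. A natural reading for ordinal evidence grades is the mean absolute distance between predicted and gold grade positions, sketched below; treat this definition and the A-C grade scale as assumptions, not the paper's exact specification.

```python
# Assumed reading of average error distance (AED) over ordinal evidence
# grades: mean absolute distance between predicted and gold grade positions.
GRADE_INDEX = {"A": 0, "B": 1, "C": 2}

def average_error_distance(predicted, gold):
    """Mean absolute grade distance between two equal-length label lists."""
    distances = [abs(GRADE_INDEX[p] - GRADE_INDEX[g])
                 for p, g in zip(predicted, gold)]
    return sum(distances) / len(distances)

predicted = ["A", "B", "C", "B"]
gold      = ["A", "C", "C", "A"]
print(average_error_distance(predicted, gold))   # (0 + 1 + 0 + 1) / 4 = 0.5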


Subjects
Data Mining/methods; Decision Support Systems, Clinical; Decision Support Techniques; Evidence-Based Medicine/classification; Natural Language Processing; Pattern Recognition, Automated; Quality of Health Care; Automation; Guideline Adherence; Humans; Judgment; Practice Guidelines as Topic; Programming Languages
11.
IEEE J Biomed Health Inform ; 19(4): 1246-52, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25700477

ABSTRACT

Research data on predisposition to mental health problems, and on the fluctuation and regulation of emotions, thoughts, and behaviors, are traditionally collected through surveys, which cannot provide real-time insight into the emotional state of individuals or communities. Large datasets such as World Health Organization (WHO) statistics are collected at most once per year, whereas social network platforms, such as Twitter, offer the opportunity for real-time analysis of expressed mood. Such patterns are valuable to the mental health research community for understanding the periods and locations of greatest demand and unmet need. We describe the "We Feel" system for analyzing global and regional variations in emotional expression, and report the results of validation against known patterns of variation in mood. A total of 2.73 × 10⁹ emotional tweets were collected over a 12-week period and automatically annotated for emotion, geographic location, and gender. Principal component analysis (PCA) of the data illustrated a dominant in-phase pattern across all emotions, modulated by antiphase patterns for "positive" and "negative" emotions. The first three principal components accounted for over 90% of the variation in the data. PCA was also used to remove the dominant diurnal and weekly variations, allowing identification of significant events within the data, with z-scores showing expression of emotions over 80 standard deviations from the mean. We also correlated emotional expression with WHO data at a national level; although no correlations were observed for the burden of depression, the burden of anxiety and suicide rates appeared to correlate with the expression of particular emotions.
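A minimal sketch of the PCA step described above, run on synthetic hourly emotion counts that stand in for the annotated tweet data: it reports the variance explained by the leading components and computes per-emotion z-scores of the kind used to flag unusually strong expression.

```python
# PCA over a matrix of hourly emotion counts, plus z-scores for flagging
# unusual hours. Synthetic data stands in for the annotated tweet counts.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
hours = np.arange(24 * 7 * 12)                          # 12 weeks of hourly bins
diurnal = 1 + 0.5 * np.sin(2 * np.pi * hours / 24)      # shared daily cycle
counts = np.column_stack([diurnal + 0.1 * rng.standard_normal(len(hours))
                          for _ in range(5)])           # hours x 5 emotions

pca = PCA()
pca.fit(counts)
print("variance explained by first 3 components:",
      pca.explained_variance_ratio_[:3].round(3))

# z-scores per emotion, used to flag hours of unusually strong expression
z = (counts - counts.mean(axis=0)) / counts.std(axis=0)
print("largest z-score observed:", z.max().round(2))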


Subjects
Emotions/classification; Medical Informatics Computing; Mental Health/classification; Social Media; Female; Humans; Male
12.
Australas Med J ; 5(9): 478-81, 2012.
Article in English | MEDLINE | ID: mdl-23115581

ABSTRACT

BACKGROUND: Evidence-based medicine (EBM) practice requires practitioners to extract evidence from published medical research when answering clinical queries. Due to the time-consuming nature of this practice, there is a strong motivation for systems that can automatically summarise medical documents and help practitioners find relevant information. AIM: The aim of this work is to propose an automatic query-focused, extractive summarisation approach that selects informative sentences from medical documents. METHOD: We use a corpus that is specifically designed for summarisation in the EBM domain. We use approximately half the corpus to derive statistics associated with the best possible extractive summaries, taking into account factors such as sentence position, length, content, and the type of query posed. Using the statistics from the first set, we evaluate our approach on a separate set. The quality of the generated summaries is evaluated automatically using ROUGE, a popular tool for evaluating automatic summaries. RESULTS: Our summarisation approach outperforms all baselines (best baseline score: 0.1594; our score: 0.1653). Further improvements are achieved when query types are taken into account. CONCLUSION: The quality of extractive summarisation in the medical domain can be significantly improved by incorporating domain knowledge and statistics derived from a specialised corpus. Such techniques can therefore be applied for content selection in end-to-end summarisation systems.
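ROUGE itself is a separate toolkit; for a self-contained illustration, the sketch below computes ROUGE-1 recall (unigram overlap with a reference summary) by hand on invented text.

```python
# Hand-rolled ROUGE-1 recall: fraction of reference unigrams that also
# appear in the candidate summary. Invented example texts.
from collections import Counter

def rouge1_recall(candidate, reference):
    """Clipped unigram overlap divided by the reference length."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(cand[w], ref[w]) for w in ref)
    return overlap / sum(ref.values())

reference = "aspirin lowers the risk of stroke in adults"
candidate = "aspirin lowers stroke risk"
print(round(rouge1_recall(candidate, reference), 4))   # 4 of 8 reference words -> 0.5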
