Results 1 - 5 of 5
1.
Entropy (Basel); 24(5), 2022 Apr 25.
Article in English | MEDLINE | ID: mdl-35626485

ABSTRACT

A novel yet simple extension of the symmetric logistic distribution is proposed by introducing a skewness parameter. It is shown how the three parameters of the ensuing skew logistic distribution may be estimated using maximum likelihood. The skew logistic distribution is then extended to the skew bi-logistic distribution to allow the modelling of multiple waves in epidemic time series data. The proposed skew logistic model is validated on COVID-19 data from the UK, and is evaluated for goodness-of-fit against the logistic and normal distributions using the recently formulated empirical survival Jensen-Shannon divergence (ESJS) and the Kolmogorov-Smirnov two-sample test statistic (KS2). We employ 95% bootstrap confidence intervals to assess the improvement in goodness-of-fit of the skew logistic distribution over the other distributions. The confidence intervals obtained for the ESJS are narrower than those for the KS2 when using this dataset, implying that the ESJS is more powerful than the KS2.
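
As a rough illustration of the kind of fitting and goodness-of-fit comparison described above, the sketch below estimates a three-parameter skew logistic by maximum likelihood and compares it against a symmetric logistic fit using the two-sample Kolmogorov-Smirnov statistic (KS2) with a bootstrap confidence interval. It uses SciPy's Type I generalised logistic as a stand-in skew logistic, which is not necessarily the parameterisation proposed in the paper, and the ESJS computation is omitted.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic data drawn from a skewed logistic-type distribution (stand-in for epidemic data)
data = stats.genlogistic.rvs(c=2.5, loc=10.0, scale=3.0, size=500, random_state=rng)

# Maximum-likelihood estimates of the three parameters (skewness/shape, location, scale)
c_hat, loc_hat, scale_hat = stats.genlogistic.fit(data)
loc_sym, scale_sym = stats.logistic.fit(data)

def ks2(sample, draw):
    """Two-sample KS statistic between the data and a synthetic sample from a fitted model."""
    return stats.ks_2samp(sample, draw(len(sample))).statistic

draw_skew = lambda n: stats.genlogistic.rvs(c_hat, loc_hat, scale_hat, size=n, random_state=rng)
draw_sym = lambda n: stats.logistic.rvs(loc_sym, scale_sym, size=n, random_state=rng)

# 95% bootstrap confidence interval for the improvement in fit (symmetric minus skew)
diffs = []
for _ in range(200):
    boot = rng.choice(data, size=len(data), replace=True)
    diffs.append(ks2(boot, draw_sym) - ks2(boot, draw_skew))
low, high = np.percentile(diffs, [2.5, 97.5])

print(f"KS2 skew fit: {ks2(data, draw_skew):.3f}, symmetric fit: {ks2(data, draw_sym):.3f}")
print(f"95% bootstrap CI for the improvement: [{low:.3f}, {high:.3f}]")
```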

2.
J Med Internet Res; 24(2): e30397, 2022 Feb 28.
Article in English | MEDLINE | ID: mdl-35142636

ABSTRACT

BACKGROUND: The COVID-19 pandemic has created a pressing need for integrating information from disparate sources to assist decision makers. Social media is important in this respect; however, to make sense of the textual information it provides and to automate the processing of large amounts of data, natural language processing methods are needed. Social media posts are often noisy, yet they may provide valuable insights into the severity and prevalence of the disease in the population. Here, we adopt a triage and diagnosis approach to analyzing social media posts using machine learning techniques for the purpose of disease detection and surveillance. We thus obtain useful prevalence and incidence statistics to identify disease symptoms and their severities, motivated by public health concerns.

OBJECTIVE: This study aims to develop an end-to-end natural language processing pipeline for triage and diagnosis of COVID-19 from patient-authored social media posts, in order to provide researchers and public health practitioners with additional information on the symptoms, severity, and prevalence of the disease rather than an actionable decision at the individual level.

METHODS: The text processing pipeline first extracted COVID-19 symptoms and related concepts, such as severity, duration, negations, and body parts, from patients' posts using conditional random fields. An unsupervised rule-based algorithm was then applied to establish relations between concepts in the next step of the pipeline. The extracted concepts and relations were subsequently used to construct 2 different vector representations of each post. These vectors were separately used to build support vector machine models to triage patients into 3 categories and diagnose them for COVID-19.

RESULTS: We report macro- and microaveraged F1 scores in the range of 71%-96% and 61%-87%, respectively, for the triage and diagnosis of COVID-19 when the models were trained on human-labeled data. Our experimental results indicate that similar performance can be achieved when the models are trained using predicted labels from the concept extraction and rule-based classifiers, thus yielding end-to-end machine learning. In addition, we highlight important features uncovered by our diagnostic machine learning models and compare them with the most frequent symptoms revealed in another COVID-19 data set. In particular, we found that the most important features are not always the most frequent ones.

CONCLUSIONS: Our preliminary results show that it is possible to automatically triage and diagnose patients for COVID-19 from social media natural language narratives, using a machine learning pipeline, in order to provide information on the severity and prevalence of the disease for use within health surveillance systems.


Subjects
COVID-19, Social Media, COVID-19/diagnosis, COVID-19/epidemiology, Humans, Natural Language Processing, Pandemics, SARS-CoV-2, Triage
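
A minimal sketch of the downstream classification step in a pipeline like the one described: concepts and relations (assumed already extracted upstream, e.g. by a CRF tagger and rule-based relation module) are turned into bag-of-concepts vectors and fed to a linear SVM that triages posts into three classes. The posts, feature strings, and class names below are invented for illustration; the real pipeline's feature design would differ.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Each "document" is the concatenation of concept/relation tokens produced upstream
posts_as_concepts = [
    "symptom:cough severity:mild duration:3days",
    "symptom:fever symptom:shortness_of_breath severity:severe",
    "negation:no_symptom",
    "symptom:loss_of_taste symptom:fatigue severity:moderate",
]
triage_labels = ["stay_home", "seek_care", "no_action", "stay_home"]  # hypothetical 3-class scheme

# Bag-of-concepts vectorisation followed by a linear SVM classifier
model = make_pipeline(CountVectorizer(token_pattern=r"\S+"), LinearSVC())
model.fit(posts_as_concepts, triage_labels)

print(model.predict(["symptom:fever severity:severe duration:1week"]))
```
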
3.
J Biomed Inform; 110: 103568, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32942027

ABSTRACT

Our goal is to summarise and aggregate information from social media regarding the symptoms of a disease, the drugs used, and the treatment effects, both positive and negative. To achieve this, we first apply a supervised machine learning method to automatically extract medical concepts from natural language text. In an environment such as social media, where new data is continuously streamed, we need a methodology that allows us to continuously train with the new data. To attain such incremental re-training, a semi-supervised methodology is developed, which is capable of learning new concepts from a small set of labelled data together with the much larger set of unlabelled data. The semi-supervised methodology deploys a conditional random field (CRF) as the baseline training algorithm for extracting medical concepts. The methodology iteratively augments the training set with sentences classified with high confidence, and adds terms to existing dictionaries to be used as features with the baseline model for further classification. Our empirical results show that the baseline CRF performs strongly across a range of different dictionary and training sizes; when the baseline is built with the full training data, the F1 score reaches the range 84%-90%. Moreover, we show that the semi-supervised method produces a mild but significant improvement over the baseline. We also discuss the significance of this potential improvement and find that, in most cases, the semi-supervised methodology is significantly more accurate than the underlying baseline model.


Subjects
Social Media, Algorithms, Humans, Language, Supervised Machine Learning
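
The self-training loop at the heart of such a semi-supervised methodology can be sketched as follows. The paper's baseline is a CRF over token sequences with dictionary features; here an ordinary sentence-level classifier stands in, purely to show the mechanism of adding high-confidence predictions on unlabelled sentences back into the training set and retraining. The sentences, labels, and confidence threshold are illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labelled = ["fever and cough for two days", "prescribed paracetamol", "no side effects at all"]
labels = ["symptom", "drug", "effect"]
unlabelled = ["a dry cough and a headache", "started taking ibuprofen yesterday",
              "the drug caused mild nausea", "slight fever since monday"]

vec = TfidfVectorizer()
clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(labelled), labels)

CONFIDENCE = 0.4                               # illustrative threshold
for _ in range(3):                             # a few self-training rounds
    if not unlabelled:
        break
    proba = clf.predict_proba(vec.transform(unlabelled))
    keep = proba.max(axis=1) >= CONFIDENCE     # high-confidence sentences only
    if not keep.any():
        break
    labelled += [s for s, k in zip(unlabelled, keep) if k]
    labels += list(clf.classes_[proba.argmax(axis=1)][keep])
    unlabelled = [s for s, k in zip(unlabelled, keep) if not k]
    # Refit features and retrain on the augmented training set
    clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(labelled), labels)

print(len(labelled), "sentences in the augmented training set")
```
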
4.
PLoS One; 13(7): e0200098, 2018.
Article in English | MEDLINE | ID: mdl-29990357

ABSTRACT

We propose the χ-index as a bibliometric indicator that generalises the h-index. While the h-index is determined by the maximum square that fits under the citation curve of an author, when plotting the number of citations in decreasing order, the χ-index is determined by the maximum-area rectangle that fits under the curve. The height of the maximum rectangle is the number of citations c_k of the kth most-cited publication, where k is the width of the rectangle. The χ-index is then defined as χ = √(k·c_k), the square root of the area of this rectangle, for convenience of comparison with the h-index and other similar indices. We present a comprehensive empirical comparison between the χ-index and other bibliometric indices, focusing on a comparison with the h-index, by analysing two datasets: a large set of Google Scholar profiles and a small set of Nobel prize winners. Our results show that, although the χ and h indices are strongly correlated, they do exhibit significant differences. In particular, we show that, for these datasets, there are a substantial number of profiles for which χ is significantly larger than h. Furthermore, restricting these profiles to the cases where c_k > k or c_k < k corresponds, respectively, to classifying researchers as tending to be influential, i.e. having many more than h citations, or tending to be prolific, i.e. having many more than h publications.


Subjects
Bibliometrics, Algorithms, Humans, Nobel Prize, Publications, Researchers, Scholarly Communication
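
A small sketch, following the definitions in the abstract, computing h and χ for a hypothetical citation profile: h is the side of the largest square that fits under the citation curve, while χ is the square root of the area of the largest rectangle under it.

```python
def h_index(citations):
    """Largest h such that at least h publications have at least h citations each."""
    cites = sorted(citations, reverse=True)
    return sum(1 for k, c in enumerate(cites, start=1) if c >= k)

def chi_index(citations):
    """Square root of the area of the maximum rectangle under the citation curve."""
    cites = sorted(citations, reverse=True)
    max_area = max(k * c for k, c in enumerate(cites, start=1))
    return max_area ** 0.5

profile = [45, 30, 22, 10, 9, 4, 3, 1]   # hypothetical citation counts, one per publication
print(h_index(profile), round(chi_index(profile), 2))   # 5 and ~8.12 here, so chi > h
```
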
5.
PLoS One; 11(5): e0155285, 2016.
Article in English | MEDLINE | ID: mdl-27171426

ABSTRACT

Previous research shows that users tend to change their assessment of search results over time. This is the first study to investigate the factors and reasons for these changes, and it describes a stochastic model of user behaviour that may explain them. In particular, we hypothesise that most of the changes are local, i.e. between results with similar or close relevance to the query, and thus belonging to the same "coarse" relevance category. According to the theory of coarse beliefs and categorical thinking, humans tend to divide the range of values under consideration into coarse categories, and are thus able to distinguish between values in different categories but not within the same category. To test this hypothesis, we conducted five experiments with about 120 subjects divided into three groups. Each subject in every group was asked to rank and assign relevance scores to the same set of search results over two or three rounds, with a period of three to nine weeks between rounds. The subjects of the last, three-round experiment were then shown the differences in their judgements and asked to explain them. We use a Markov chain model to measure change in users' judgements between the different rounds. The Markov chain demonstrates that the changes converge, and that a majority of the changes are local to a neighbouring relevance category. We found that most of the subjects were satisfied with their changes and did not perceive them as mistakes but rather as a legitimate phenomenon, since they believe that time has influenced their relevance assessment. Both our quantitative analysis and the users' comments support the hypothesis that coarse relevance categories, resulting from categorical thinking, exist in the context of user evaluation of search results.


Subjects
Markov Chains, Theoretical Models, Search Engine, Algorithms, Judgment, Nonparametric Statistics
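
The Markov-chain analysis can be illustrated with a short sketch: from two rounds of graded judgements of the same results, estimate a transition matrix between relevance categories and count how many of the changes are "local", i.e. move to a neighbouring category. The judgement data and the four-grade scale below are invented for illustration.

```python
import numpy as np

CATEGORIES = 4  # e.g. relevance grades 0..3
round1 = [3, 2, 2, 1, 0, 3, 1, 2, 0, 3]   # hypothetical first-round grades
round2 = [3, 1, 2, 1, 1, 2, 1, 3, 0, 3]   # hypothetical second-round grades

# Count transitions between grades across the two rounds
counts = np.zeros((CATEGORIES, CATEGORIES))
for a, b in zip(round1, round2):
    counts[a, b] += 1

# Row-normalise to a stochastic (transition) matrix, guarding against empty rows
transition = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)

changes = [(a, b) for a, b in zip(round1, round2) if a != b]
local = sum(abs(a - b) == 1 for a, b in changes)
print(transition)
print(f"{local}/{len(changes)} of the changes are to a neighbouring category")
```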