Results 1 - 4 of 4
1.
Animals (Basel) ; 14(14)2024 Jul 09.
Article in English | MEDLINE | ID: mdl-39061483

ABSTRACT

Federated learning is a collaborative machine learning paradigm in which multiple parties jointly train a predictive model while keeping their data local. Multi-label learning, in turn, deals with classification tasks in which instances may belong to multiple classes simultaneously. This study introduces Federated Multi-Label Learning (FMLL), which combines these two approaches. The proposed method applies federated learning principles to multi-label classification: it adopts the Binary Relevance (BR) strategy to handle the multi-label nature of the data and employs the Reduced-Error Pruning Tree (REPTree) as the base classifier. The effectiveness of FMLL was demonstrated through experiments on three diverse datasets in animal science: Amphibians, Anuran-Calls-(MFCCs), and HackerEarth-Adopt-A-Buddy. The accuracy rates achieved on these datasets were 73.24%, 94.50%, and 86.12%, respectively. Compared to state-of-the-art methods, FMLL delivered notable improvements (above 10%) in average accuracy, precision, recall, and F-score.
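
To make the FMLL recipe concrete, the following minimal Python sketch trains one binary decision tree per label on each client's local data (Binary Relevance) and aggregates client predictions by majority vote. This is a sketch under assumptions, not the authors' implementation: scikit-learn's DecisionTreeClassifier stands in for Weka's REPTree, the data are synthetic, and vote-based aggregation is assumed rather than taken from the paper.

# Minimal Binary Relevance + per-client decision trees, aggregated by majority vote.
# Sketch only: DecisionTreeClassifier stands in for REPTree, and the vote-based
# aggregation is an assumption, not the paper's exact federated protocol.
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.tree import DecisionTreeClassifier

X, Y = make_multilabel_classification(n_samples=600, n_features=20,
                                       n_classes=5, random_state=0)
X_test, Y_test = X[500:], Y[500:]
client_data = [(X[i:i + 250], Y[i:i + 250]) for i in (0, 250)]  # two clients keep their data locally

def train_binary_relevance(X_local, Y_local):
    """One binary tree per label (Binary Relevance), trained on a client's private data."""
    return [DecisionTreeClassifier(max_depth=5).fit(X_local, Y_local[:, j])
            for j in range(Y_local.shape[1])]

client_models = [train_binary_relevance(Xc, Yc) for Xc, Yc in client_data]

def federated_predict(models_per_client, X_new):
    """Aggregate clients' per-label votes; only predictions are shared, never raw data."""
    votes = np.array([[m.predict(X_new) for m in models] for models in models_per_client])
    return (votes.mean(axis=0) >= 0.5).astype(int).T  # majority vote per label

Y_pred = federated_predict(client_models, X_test)
print("subset accuracy:", (Y_pred == Y_test).all(axis=1).mean())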

2.
Entropy (Basel) ; 26(5)2024 May 04.
Article in English | MEDLINE | ID: mdl-38785652

ABSTRACT

In a standard binary supervised classification task, both negative and positive samples are required in the training dataset to construct a classification model. However, this condition is not met in certain applications where only one class of samples is obtainable. To overcome this problem, a different classification method, one that learns from positive and unlabeled (PU) data, must be employed. This study presents a novel method: neighborhood-based positive unlabeled learning using a decision tree (NPULUD). NPULUD first uses a nearest-neighborhood approach for the PU strategy and then employs a decision tree algorithm for the classification task, utilizing the entropy measure. Entropy plays a pivotal role in assessing the uncertainty of the training dataset as the decision tree is built for classification. Through experiments, we validated our method on 24 real-world datasets. The proposed method attained an average accuracy of 87.24%, while the traditional supervised learning approach obtained an average accuracy of 83.99% on the same datasets. Our method also achieved a statistically significant improvement (7.74% on average) over its state-of-the-art peers.
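
The core PU idea described above (treat unlabeled points that lie far from all labeled positives as reliable negatives, then fit an entropy-based decision tree) can be sketched as follows. The neighbor rule, the 60th-percentile distance threshold, and the synthetic data are illustrative assumptions, not the exact NPULUD procedure.

# Generic PU-learning sketch in the spirit of NPULUD: pick reliable negatives among
# the unlabeled points farthest from the labeled positives, then fit an entropy tree.
# The distance threshold below is an illustrative assumption.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

X, y_true = make_classification(n_samples=500, n_features=10, random_state=1)
rng = np.random.default_rng(1)
pos_idx = np.flatnonzero(y_true == 1)
labeled_pos = rng.choice(pos_idx, size=60, replace=False)        # only some positives are labeled
unlabeled = np.setdiff1d(np.arange(len(X)), labeled_pos)         # everything else is unlabeled

# Distance from each unlabeled point to its nearest labeled positive.
nn = NearestNeighbors(n_neighbors=1).fit(X[labeled_pos])
dist, _ = nn.kneighbors(X[unlabeled])
reliable_neg = unlabeled[dist.ravel() > np.percentile(dist, 60)] # farthest 40% -> assumed negatives

X_train = np.vstack([X[labeled_pos], X[reliable_neg]])
y_train = np.hstack([np.ones(len(labeled_pos)), np.zeros(len(reliable_neg))])

tree = DecisionTreeClassifier(criterion="entropy", max_depth=5).fit(X_train, y_train)
print("accuracy vs. hidden ground truth:", tree.score(X, y_true))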

3.
Entropy (Basel) ; 25(1)2023 Jan 11.
Article in English | MEDLINE | ID: mdl-36673290

ABSTRACT

As an entropy-based method, the k-Star algorithm draws on information theory to compute the distances between data instances during classification. k-Star is a machine learning method with high classification performance and strong generalization ability. Nevertheless, as a standard supervised learning method, it learns only from labeled data. This paper proposes an improved method, Semi-Supervised k-Star (SSS), which makes efficient predictions by considering unlabeled data in addition to labeled data. It also introduces a novel semi-supervised learning approach, called holo-training, as an alternative to self-training. Holo-training builds a powerful and robust model of the data by combining multiple classifiers and using an entropy measure. Extensive experiments showed that the proposed holo-training approach outperformed self-training on 13 of the 18 datasets. Furthermore, the proposed SSS method achieved higher average accuracy (95.25%) than state-of-the-art semi-supervised methods (90.01%). The significance of the experimental results was validated with both the binomial sign test and the Friedman test.
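
A rough sketch of the combine-multiple-classifiers idea behind this kind of semi-supervised training is given below: several base learners pseudo-label the unlabeled pool only where they agree, and the final model is retrained on the enlarged set. KNeighborsClassifier stands in for k-Star (which scikit-learn does not provide), and the unanimity rule is an assumption for illustration, not the paper's exact holo-training algorithm.

# Consensus-style semi-supervised sketch: base learners pseudo-label the unlabeled
# pool only where they all agree, then a final model is retrained on labeled plus
# pseudo-labeled data. KNeighborsClassifier is a stand-in for k-Star; the unanimity
# rule is an illustrative assumption.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=400, n_features=8, random_state=2)
labeled, unlabeled, test = np.arange(60), np.arange(60, 300), np.arange(300, 400)

learners = [KNeighborsClassifier(n_neighbors=5),
            DecisionTreeClassifier(criterion="entropy", max_depth=4, random_state=2),
            GaussianNB()]
preds = np.array([clf.fit(X[labeled], y[labeled]).predict(X[unlabeled]) for clf in learners])

agree = (preds == preds[0]).all(axis=0)                  # keep only unanimous pseudo-labels
X_aug = np.vstack([X[labeled], X[unlabeled][agree]])
y_aug = np.hstack([y[labeled], preds[0][agree]])

final = KNeighborsClassifier(n_neighbors=5).fit(X_aug, y_aug)
print("test accuracy:", final.score(X[test], y[test]))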

4.
Entropy (Basel) ; 25(1)2022 Dec 23.
Article in English | MEDLINE | ID: mdl-36673169

ABSTRACT

The aim of this study is to develop a new approach for correctly predicting the outcome of electronic sports (eSports) matches using machine learning methods. Previous research has emphasized player-centric prediction and has used standard (single-instance) classification techniques. However, a team-centric classification is required, since team cooperation is essential to completing game missions and achieving final success. To bridge this gap, we propose a new approach called Multi-Objective Multi-Instance Learning (MOMIL); this is the first study to apply multi-instance learning to win prediction in eSports. The proposed approach jointly considers the objectives of the players in a team to capture relationships between players during classification. Entropy was used to measure the impurity (uncertainty) of the training dataset when building decision trees for classification. Experiments on a publicly available eSports dataset show that the proposed multi-objective multi-instance classification approach outperforms the standard classification approach in terms of accuracy. Unlike previous studies, we built the models on season-based data. Our approach achieves up to 95% accuracy for win prediction in eSports and outperformed the state-of-the-art methods tested on the same dataset.
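
A simple way to picture team-centric (multi-instance) win prediction is to treat each team as a bag of player instances, collapse the bag into summary features, and fit an entropy-based decision tree, as in the sketch below. The synthetic data, the mean/max aggregation, and the single-objective setup are illustrative assumptions, not the MOMIL method or the eSports dataset used in the paper.

# Bag-level sketch of multi-instance win prediction: each team (bag) contains several
# player instances; bag features are summarized by mean and max over players before an
# entropy-based decision tree is fit. Data and aggregation rule are illustrative only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_teams, players_per_team, n_feats = 300, 5, 6
bags = rng.normal(size=(n_teams, players_per_team, n_feats))    # per-player stats for each team
wins = (bags.mean(axis=(1, 2)) + rng.normal(scale=0.2, size=n_teams) > 0).astype(int)

# Collapse each bag into one feature vector (mean and max over the team's players).
bag_features = np.hstack([bags.mean(axis=1), bags.max(axis=1)])

X_tr, X_te, y_tr, y_te = train_test_split(bag_features, wins, random_state=3)
tree = DecisionTreeClassifier(criterion="entropy", max_depth=4).fit(X_tr, y_tr)
print("win-prediction accuracy:", tree.score(X_te, y_te))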
