Results 1 - 3 of 3
1.
Article in English | MEDLINE | ID: mdl-38319779

ABSTRACT

Uncertainty quantification is critical for ensuring the safety of deep learning-enabled health diagnostics, as it helps the model account for unknown factors and reduces the risk of misdiagnosis. However, existing uncertainty quantification studies often overlook the significant issue of class imbalance, which is common in medical data. In this paper, we propose a class-balanced evidential deep learning framework to achieve fair and reliable uncertainty estimates for health diagnostic models. This framework advances the state-of-the-art uncertainty quantification method of evidential deep learning with two novel mechanisms that address the challenges posed by class imbalance. Specifically, we introduce a pooling loss that enables the model to learn less biased evidence among classes, and a learnable prior that regularizes the posterior distribution to account for the quality of uncertainty estimates. Extensive experiments on benchmark data with varying degrees of imbalance and on several naturally imbalanced health datasets demonstrate the effectiveness and superiority of our method. Our work pushes uncertainty quantification from theoretical studies toward realistic healthcare application scenarios. By enhancing uncertainty estimation for class-imbalanced data, we contribute to the development of more reliable and practical deep learning-enabled health diagnostic systems.
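
For context, evidential deep learning places a Dirichlet distribution over the class probabilities and reads epistemic uncertainty from the total evidence. The sketch below shows the standard formulation (Sensoy et al., 2018), with an optional inverse-frequency class weighting as a hypothetical stand-in for class-balanced evidence learning; the paper's actual pooling loss and learnable prior are not specified in the abstract.

# Hedged sketch (not the paper's implementation): standard evidential deep
# learning with Dirichlet evidence, plus an optional per-class weight as an
# illustrative stand-in for class-balanced evidence learning.
import torch
import torch.nn.functional as F

def edl_uncertainty(logits):
    # Non-negative evidence e_k, Dirichlet parameters alpha_k = e_k + 1,
    # total strength S = sum_k alpha_k, epistemic uncertainty u = K / S.
    evidence = F.softplus(logits)
    alpha = evidence + 1.0
    strength = alpha.sum(dim=-1, keepdim=True)
    belief = evidence / strength
    uncertainty = logits.shape[-1] / strength
    return alpha, belief, uncertainty

def edl_mse_loss(alpha, targets_onehot, class_weights=None):
    # Expected mean-squared error under the Dirichlet (Sensoy et al., 2018).
    strength = alpha.sum(dim=-1, keepdim=True)
    prob = alpha / strength
    err = (targets_onehot - prob) ** 2
    var = prob * (1.0 - prob) / (strength + 1.0)
    loss = (err + var).sum(dim=-1)
    if class_weights is not None:
        # Hypothetical reweighting: emphasize minority classes so the learned
        # evidence is less biased toward majority classes.
        loss = loss * (targets_onehot * class_weights).sum(dim=-1)
    return loss.mean()

In this toy setup, low total evidence (large u) flags inputs the model should defer on rather than diagnose.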

2.
Article in English | MEDLINE | ID: mdl-38083574

ABSTRACT

Supervised machine learning (ML) is revolutionising healthcare, but acquiring reliable labels for signals harvested from medical sensors is usually challenging, manual, and costly. Active learning can assist in establishing labels on-the-fly by querying the user only for the most uncertain (and thus most informative) samples. However, current approaches rely on naive data selection algorithms and still require many iterations to achieve the desired accuracy. To this end, we introduce a novel framework that exploits data augmentation to estimate the uncertainty introduced by sensor signals. Our experiments on classifying medical signals show that our framework selects informative samples that are up to 50% more diverse. Sample diversity is a key indicator of uncertainty, and our framework captures it better than previous solutions: during the first queries it picks unlabelled samples with a higher average point distance, whereas the baselines pick samples that are closer together. Through our experiments, we show that augmentation-based uncertainty leads to better selection decisions, as the most informative signals are labelled first and the learner can train on samples with more diverse features earlier on, enabling the potential expansion of ML into more real-life healthcare use cases.


Subjects
Algorithms, Supervised Machine Learning, Uncertainty, Problem-Based Learning
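
To illustrate the general idea behind the abstract above, the following minimal sketch scores unlabelled signals by the disagreement of model predictions across augmented views and queries the most uncertain ones. The augmentations (jitter, amplitude scaling), the variance-based score, and the function names are illustrative assumptions, not the framework's actual design.

# Minimal, hypothetical sketch of augmentation-based sample selection for
# active learning on 1-D sensor signals (not the paper's implementation).
import numpy as np

def jitter(x, sigma=0.03):
    # Add Gaussian noise to a signal (a common time-series augmentation).
    return x + np.random.normal(0.0, sigma, size=x.shape)

def scale(x, sigma=0.1):
    # Randomly rescale the signal's amplitude.
    return x * np.random.normal(1.0, sigma)

def augmentation_uncertainty(predict_proba, x, n_views=8):
    # Predictive disagreement across augmented views of one sample:
    # higher variance across views means higher estimated uncertainty.
    views = [jitter(scale(x)) for _ in range(n_views)]
    probs = np.stack([predict_proba(v) for v in views])  # (n_views, n_classes)
    return probs.var(axis=0).sum()

def select_queries(predict_proba, unlabelled, budget=10):
    # Query the `budget` unlabelled signals the model is least certain about.
    scores = [augmentation_uncertainty(predict_proba, x) for x in unlabelled]
    return np.argsort(scores)[::-1][:budget]

Samples whose predictions flip under mild augmentation are assumed to be the most informative to label first.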
3.
Annu Int Conf IEEE Eng Med Biol Soc; 2022: 313-316, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36086386

ABSTRACT

Deep learning techniques are increasingly used for decision-making in health applications; however, they can easily be manipulated by adversarial examples across different clinical domains. Their security and privacy vulnerabilities raise concerns about the practical deployment of these systems. The number and variety of adversarial attacks grow continuously, making it difficult for mitigation approaches to provide effective solutions. Current mitigation techniques often rely on expensive re-training procedures as new attacks emerge. In this paper, we propose a novel adversarial mitigation technique for biosignal classification tasks. Our approach is based on recent findings that interpret early-exit neural networks as an ensemble of weight-sharing sub-networks. Our experiments on state-of-the-art deep learning models show that early-exit ensembles can provide robustness that generalises to various white-box and universal adversarial attacks. The approach increases the accuracy of vulnerable deep learning models by up to 60 percentage points, while providing adversarial mitigation comparable to adversarial training. This is achieved without prior exposure to the adversarial perturbation or the computational burden of re-training.


Subjects
Neural Networks, Computer
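
As a rough illustration of the early-exit ensemble idea in the abstract above, the sketch below attaches a classifier head after each backbone block and averages the exits' softmax outputs at inference time. The toy architecture, layer sizes, and names are assumptions for illustration, not the model evaluated in the paper.

# Hypothetical sketch of an early-exit network used as an ensemble of
# weight-sharing sub-networks (illustrative only; not the paper's model).
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    def __init__(self, in_dim=256, n_classes=5):
        super().__init__()
        self.block1 = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.block2 = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
        self.block3 = nn.Sequential(nn.Linear(64, 32), nn.ReLU())
        # One classifier head ("exit") after each block; every exit shares
        # the backbone weights of the blocks that precede it.
        self.heads = nn.ModuleList([
            nn.Linear(128, n_classes),
            nn.Linear(64, n_classes),
            nn.Linear(32, n_classes),
        ])

    def forward(self, x):
        h1 = self.block1(x)
        h2 = self.block2(h1)
        h3 = self.block3(h2)
        return [head(h) for head, h in zip(self.heads, (h1, h2, h3))]

def ensemble_predict(model, x):
    # Average the softmax outputs of all exits; a perturbation crafted
    # against the full network tends not to fool every exit at once,
    # which is what the ensemble view exploits.
    with torch.no_grad():
        probs = torch.stack([torch.softmax(z, dim=-1) for z in model(x)])
    return probs.mean(dim=0)

Because no re-training is involved, such an ensemble could in principle be read off an already-trained multi-exit model.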