Results 1 - 3 of 3
1.
Front Bioinform ; 3: 1194993, 2023.
Article in English | MEDLINE | ID: mdl-37484865

ABSTRACT

Artificial Intelligence (AI) has achieved remarkable success in image generation, image analysis, and language modeling, making data-driven techniques increasingly relevant in practical real-world applications and promising enhanced creativity and efficiency for human users. However, the deployment of AI in high-stakes domains such as infrastructure and healthcare still raises concerns regarding algorithm accountability and safety. The emerging field of explainable AI (XAI) has made significant strides in developing interfaces that enable humans to comprehend the decisions made by data-driven models. Among these approaches, concept-based explainability stands out due to its ability to align explanations with high-level concepts familiar to users. Nonetheless, early research in adversarial machine learning has revealed that exposing model explanations can render victim models more susceptible to attacks. This is the first study to investigate and compare the impact of concept-based explanations on the privacy of Deep Learning based AI models in the context of biomedical image analysis. An extensive privacy benchmark is conducted on three different state-of-the-art model architectures (ResNet50, NFNet, ConvNeXt) trained on two biomedical datasets (ISIC and EyePACS) and one synthetic dataset (SCDB). The success of membership inference attacks while exposing varying degrees of attribution-based and concept-based explanations is systematically compared. The findings indicate that, in theory, concept-based explanations can increase the vulnerability of a private AI system by up to 16% compared to attributions in the baseline setting. However, it is demonstrated that, in more realistic attack scenarios, the threat posed by explanations is negligible in practice. Furthermore, actionable recommendations are provided to ensure the safe deployment of concept-based XAI systems. In addition, the impact of differential privacy (DP) on the quality of concept-based explanations is explored, revealing that, besides degrading explanation quality, DP can also adversely affect the models' privacy.
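
To make the attack setting described above concrete, the sketch below shows one way a membership inference attack can exploit exposed explanations: an attack classifier is trained on the victim model's prediction confidence together with summary statistics of the exposed explanation, and its AUC measures the leaked membership signal. The feature construction, the logistic-regression attacker, and the synthetic data are illustrative assumptions only; they do not reproduce the paper's benchmark protocol or its ResNet50/NFNet/ConvNeXt victim models.

```python
# Hypothetical membership inference attack augmented with explanation features.
# All data and design choices below are stand-ins for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score


def attack_features(confidences: np.ndarray, explanations: np.ndarray) -> np.ndarray:
    """Concatenate per-sample prediction confidence with summary statistics
    of an exposed explanation (e.g. an attribution map or concept scores)."""
    expl_stats = np.stack(
        [explanations.mean(axis=1),
         explanations.std(axis=1),
         np.abs(explanations).max(axis=1)],
        axis=1,
    )
    return np.concatenate([confidences, expl_stats], axis=1)


rng = np.random.default_rng(0)
# Synthetic stand-ins: members were seen by the victim model during training,
# non-members were not; members tend to receive higher confidence.
conf_members = rng.beta(8, 2, (500, 1))
conf_nonmembers = rng.beta(5, 3, (500, 1))
expl_members = rng.normal(0.0, 1.0, (500, 64))
expl_nonmembers = rng.normal(0.0, 1.2, (500, 64))

X = np.vstack([attack_features(conf_members, expl_members),
               attack_features(conf_nonmembers, expl_nonmembers)])
y = np.concatenate([np.ones(500), np.zeros(500)])  # 1 = member, 0 = non-member

# Train the attack classifier on half of the samples, evaluate membership AUC on the rest.
idx = rng.permutation(len(y))
train, test = idx[:500], idx[500:]
attack = LogisticRegression(max_iter=1000).fit(X[train], y[train])
print("membership inference AUC:", roc_auc_score(y[test], attack.predict_proba(X[test])[:, 1]))
```

Under this framing, any extra membership signal carried by the explanation shows up directly as a higher attack AUC relative to a confidence-only attacker.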

2.
Health Inf Sci Syst ; 10(1): 21, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36039095

ABSTRACT

Purpose: Diabetic foot is a common complication of diabetes mellitus (DM) that leads to ulcerations in the feet. Due to diabetic neuropathy, most patients have reduced sensitivity to pain. As a result, minor injuries go unnoticed and progress into ulcers. Timely detection of potential ulceration points, followed by intervention, is crucial in preventing amputation. Changes in plantar temperature are one of the early signs of ulceration. Previous studies have focused on either binary classification or grading of DM severity, but have neglected a holistic treatment of the problem. Moreover, multi-class studies exhibit severe performance variations between different classes. Methods: We propose a new convolutional neural network for discriminating between non-DM and five DM severity grades from plantar thermal images and compare its performance against pre-trained networks such as AlexNet and related works. We address the scarcity of data and the imbalanced class distribution prevalent in prior work, achieving well-balanced classification performance. Results: Our proposed model achieved the best performance, with a mean accuracy of 0.9827, mean sensitivity of 0.9684, and mean specificity of 0.9892 in combined diabetic foot detection and grading. Conclusion: To the best of our knowledge, this study sets a new state of the art in plantar thermogram detection and grading, while being the first to implement a holistic multi-class classification and grading solution. Reliable automatic thermogram grading is a first step towards the development of smart health devices for DM patients.
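
For readers unfamiliar with this classification setup, the sketch below outlines a small CNN for six-way plantar thermogram grading (non-DM plus five DM severity grades) with a class-weighted loss to counteract the imbalance mentioned above. The architecture, input resolution, and class weights are assumptions for illustration and are not the network proposed in the paper.

```python
# Illustrative six-class thermogram classifier; layer sizes and weights are assumed.
import torch
import torch.nn as nn


class ThermogramNet(nn.Module):
    def __init__(self, num_classes: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))


model = ThermogramNet()
# Hypothetical class weights to counteract the imbalanced grade distribution.
class_weights = torch.tensor([1.0, 2.0, 2.0, 3.0, 3.0, 4.0])
criterion = nn.CrossEntropyLoss(weight=class_weights)

dummy_batch = torch.randn(8, 3, 224, 224)   # stand-in for plantar thermal images
dummy_labels = torch.randint(0, 6, (8,))    # stand-in severity grades
loss = criterion(model(dummy_batch), dummy_labels)
loss.backward()
print("logits shape:", model(dummy_batch).shape, "| loss:", float(loss))
```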

3.
Comput Methods Programs Biomed ; 215: 106620, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35033756

ABSTRACT

BACKGROUND AND OBJECTIVES: One principal impediment to the successful deployment of Artificial Intelligence (AI) based Computer-Aided Diagnosis (CAD) systems in everyday clinical workflows is their lack of transparent decision-making. Although commonly used eXplainable AI (XAI) methods provide insights into these largely opaque algorithms, such explanations are usually convoluted and not readily comprehensible. The explanation of decisions regarding the malignancy of skin lesions from dermoscopic images demands particular clarity, as the underlying medical problem definition is itself ambiguous. This work presents ExAID (Explainable AI for Dermatology), a novel XAI framework for biomedical image analysis that provides multi-modal concept-based explanations, consisting of easy-to-understand textual explanations and visual maps, to justify the predictions. METHODS: Our framework relies on Concept Activation Vectors to map human-understandable concepts to those learned by an arbitrary Deep Learning (DL) based algorithm, and on Concept Localisation Maps to highlight those concepts in the input space. The identified relevant concepts are then used to construct fine-grained textual explanations supplemented by concept-wise location information, yielding comprehensive and coherent multi-modal explanations. All decision-related information is presented in a diagnostic interface for use in clinical routines. Moreover, the framework includes an educational mode providing dataset-level explanation statistics as well as tools for data and model exploration to aid medical research and education. RESULTS: Through rigorous quantitative and qualitative evaluation of our framework on a range of publicly available dermoscopic image datasets, we show the utility of multi-modal explanations in CAD-assisted scenarios even in cases of incorrect disease predictions. We demonstrate that concept detectors for the explanation of pre-trained networks reach accuracies of up to 81.46%, which is comparable to supervised networks trained end-to-end. CONCLUSIONS: We present a new end-to-end framework for the multi-modal explanation of DL-based biomedical image analysis in melanoma classification and evaluate its utility on an array of datasets. Since perspicuous explanation is one of the cornerstones of any CAD system, we believe that ExAID will accelerate the transition from AI research to practice by providing dermatologists and researchers with an effective tool that they can both understand and trust. ExAID can also serve as the basis for similar applications in other biomedical fields.


Subject(s)
Artificial Intelligence; Melanoma; Algorithms; Computers; Diagnosis, Computer-Assisted; Humans
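
As a minimal illustration of the Concept Activation Vector mechanism underlying ExAID, the sketch below fits a linear probe that separates layer activations of concept examples from random examples; the probe's normalised weight vector is the CAV, and the directional derivative of a class logit along it indicates concept sensitivity (the quantity behind TCAV-style scores). The activations, concept data, and logit head are placeholders; ExAID's actual pipeline, including Concept Localisation Maps and textual explanation generation, is not reproduced here.

```python
# Minimal CAV sketch on synthetic activations; all data and the logit head are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
act_dim = 256  # assumed width of the chosen network layer

# Stand-ins for layer activations; in practice these would be extracted from the
# trained dermoscopy model on images with and without the concept (e.g. "streaks").
concept_acts = rng.normal(0.5, 1.0, (200, act_dim))
random_acts = rng.normal(0.0, 1.0, (200, act_dim))

X = np.vstack([concept_acts, random_acts])
y = np.concatenate([np.ones(200), np.zeros(200)])

probe = LogisticRegression(max_iter=1000).fit(X, y)
cav = probe.coef_[0] / np.linalg.norm(probe.coef_[0])  # unit concept direction

# Hypothetical class-logit head standing in for the real model's output layer.
head_w = rng.normal(0.0, 1.0, act_dim)

def class_logit(activation: np.ndarray) -> float:
    return float(activation @ head_w)

# Concept sensitivity: finite-difference approximation of the directional
# derivative of the class logit along the CAV direction.
sample = rng.normal(0.0, 1.0, act_dim)
eps = 1e-2
sensitivity = (class_logit(sample + eps * cav) - class_logit(sample)) / eps

print("concept probe accuracy:", probe.score(X, y))
print("concept sensitivity of one sample:", sensitivity)
```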