Analysis of the No-sign Adversarial Attack on the COVID Chest X-ray Classification
2022 International Conference on Image Processing and Media Computing, ICIPMC 2022 ; : 73-79, 2022.
Article in English | Scopus | ID: covidwho-2078213
ABSTRACT
Coronavirus disease 2019 (COVID-19) has spread worldwide in recent years owing to its extreme contagiousness. Diagnosing COVID-19 is crucial for prevention, control, and treatment. Deep learning-based image classification models have proved effective for pneumonia classification from chest X-ray images, helping physicians diagnose and treat the disease more effectively. However, researchers have shown that deep neural networks are vulnerable to tiny injected perturbations that are imperceptible to humans. Such adversarial samples pose a major threat to medical safety systems, especially in the disease detection field. In this regard, we conducted experiments attacking chest X-ray images to investigate the efficiency of two types of attack methods, with and without the sign operator. The Fast Gradient Sign Method (FGSM), Basic Iterative Method (BIM), and Projected Gradient Descent (PGD) were selected and transformed into no-sign attack methods, and their effectiveness was analyzed under white-box and black-box testing. We confirmed theoretically and experimentally that the alternative no-sign attack methods were more efficient. © 2022 IEEE.
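To illustrate the contrast the abstract draws, the following is a minimal NumPy sketch of a single attack step with and without the sign operator. The classic FGSM step is epsilon * sign(gradient); the "no-sign" variant shown here steps along the raw, L2-normalized gradient. The normalization choice is an assumption for illustration, as this abstract does not give the paper's exact no-sign formulation.

```python
import numpy as np

def fgsm_sign(x, grad, eps):
    """Classic FGSM step: perturb each pixel by eps in the sign of the gradient."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

def fgsm_no_sign(x, grad, eps):
    """Hypothetical no-sign step: follow the raw gradient direction.

    The gradient is L2-normalized so eps still bounds the overall step size.
    (Assumed formulation; the abstract does not specify the exact operator.)
    """
    g = grad / (np.linalg.norm(grad) + 1e-12)
    return np.clip(x + eps * g, 0.0, 1.0)

# Toy example: a flat "image" and a random loss gradient.
np.random.seed(0)
x = np.full((4, 4), 0.5)
grad = np.random.randn(4, 4)
adv_sign = fgsm_sign(x, grad, eps=0.03)
adv_no_sign = fgsm_no_sign(x, grad, eps=0.03)
```

The sign step perturbs every pixel by the full eps budget (an L-infinity-bounded step), while the normalized no-sign step concentrates the budget where the gradient is largest, which is one intuition for why the two can differ in attack efficiency.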
Full text: Available | Collection: Databases of international organizations | Database: Scopus | Language: English | Journal: 2022 International Conference on Image Processing and Media Computing, ICIPMC 2022 | Year: 2022 | Document Type: Article
