1.
J Imaging; 8(2), 2022 Feb 04.
Article in English | MEDLINE | ID: mdl-35200740

ABSTRACT

Transfer learning from natural images is widely used in deep neural networks (DNNs) for medical image classification to achieve computer-aided clinical diagnosis. Although the adversarial vulnerability of DNNs hinders practical applications owing to the high stakes of diagnosis, adversarial attacks have been expected to be limited because the training datasets (medical images) that such attacks often require are generally unavailable for security and privacy reasons. Nevertheless, in this study, we demonstrated that adversarial attacks on medical DNN models built with transfer learning are also possible using natural images, even when the medical images are unavailable; in particular, we showed that universal adversarial perturbations (UAPs) can be generated from natural images. UAPs from natural images are effective for both non-targeted and targeted attacks, and their performance was significantly higher than that of random controls. The use of transfer learning thus opens a security hole, which decreases the reliability and safety of computer-based disease diagnosis. Training the models from random initialization reduced the performance of UAPs generated from natural images; however, it did not completely remove the vulnerability to UAPs. The vulnerability to UAPs generated from natural images is expected to become a significant security threat.
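
To make the attack surface concrete, the following is a minimal sketch of how a non-targeted UAP could be crafted from natural images against a transfer-learned classifier, in the spirit of the abstract above. It assumes a PyTorch model, a DataLoader of natural images, 224x224 RGB inputs, and an L-infinity budget; none of these names or values come from the paper, and the simplified sign-gradient update is not the authors' exact algorithm.

```python
# Sketch: crafting a non-targeted universal adversarial perturbation (UAP)
# from *natural* images against a transfer-learned medical classifier.
# Simplified sign-gradient variant; model, loader, and eps are assumptions.
import torch
import torch.nn.functional as F

def craft_uap_from_natural_images(model, natural_loader, eps=8 / 255,
                                  step=1 / 255, epochs=5, device="cpu"):
    """Accumulate a single perturbation that degrades accuracy on most inputs."""
    model.eval().to(device)
    uap = torch.zeros(1, 3, 224, 224, device=device)  # assumed input shape
    for _ in range(epochs):
        for x, _ in natural_loader:            # labels of natural images unused
            x = x.to(device)
            x_adv = (x + uap).clamp(0, 1).requires_grad_(True)
            logits = model(x_adv)
            # push each input away from the model's current prediction
            loss = F.cross_entropy(logits, logits.argmax(dim=1))
            grad = torch.autograd.grad(loss, x_adv)[0]
            uap = uap + step * grad.sign().mean(dim=0, keepdim=True)
            uap = uap.clamp(-eps, eps).detach()        # project to L-inf ball
    return uap
```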

2.
BMC Med Imaging; 21(1): 9, 2021 Jan 07.
Article in English | MEDLINE | ID: mdl-33413181

ABSTRACT

BACKGROUND: Deep neural networks (DNNs) are widely investigated in medical image classification to achieve automated support for clinical diagnosis. Because high-stakes decisions will be made based on the diagnosis, it is necessary to evaluate the robustness of medical DNNs against adversarial attacks. Several previous studies have considered simple adversarial attacks; however, the vulnerability of DNNs to more realistic and higher-risk attacks, such as universal adversarial perturbation (UAP), a single perturbation that can induce DNN failure on most input images, has not yet been evaluated. METHODS: We focus on three representative DNN-based medical image classification tasks (skin cancer, referable diabetic retinopathy, and pneumonia classification) and investigate the vulnerability of seven model architectures to UAPs. RESULTS: We demonstrate that the DNNs are vulnerable both to non-targeted UAPs, which cause a task failure by assigning an input an incorrect class, and to targeted UAPs, which cause the DNN to classify an input into a specific class. Almost imperceptible UAPs achieved >80% success rates for non-targeted and targeted attacks. The vulnerability to UAPs depended very little on the model architecture. Moreover, we found that adversarial retraining, which is known to be an effective adversarial defense, increased the DNNs' robustness against UAPs in only very few cases. CONCLUSION: Contrary to previous assumptions, the results indicate that DNN-based clinical diagnosis is easier to deceive with adversarial attacks than expected. Adversaries can cause failed diagnoses at lower cost (e.g., without knowledge of the data distribution) and can steer the diagnosis toward a class of their choosing; moreover, the effects of adversarial defenses may be limited. Our findings emphasize that more careful consideration is required when developing DNNs for medical imaging and deploying them in practice.


Subject(s)
Diagnostic Imaging/classification; Image Interpretation, Computer-Assisted/methods; Image Interpretation, Computer-Assisted/standards; Neural Networks, Computer; Diabetic Retinopathy/classification; Diabetic Retinopathy/diagnostic imaging; Diagnostic Imaging/standards; Humans; Photography/classification; Pneumonia/classification; Pneumonia/diagnostic imaging; Radiography, Thoracic/classification; Skin Neoplasms/classification; Skin Neoplasms/diagnostic imaging; Tomography, Optical Coherence/classification
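
As a companion to the abstract above, here is a minimal sketch of a targeted UAP and of measuring its targeted attack success rate. The model, loader, target_class, and eps are illustrative assumptions, not the authors' implementation.

```python
# Sketch: a targeted UAP that steers predictions toward one chosen class, plus a
# helper to measure the targeted success rate. Simplified illustration only;
# model, loader, target_class, and eps are assumed placeholders.
import torch
import torch.nn.functional as F

def craft_targeted_uap(model, loader, target_class, eps=8 / 255,
                       step=1 / 255, epochs=5, device="cpu"):
    model.eval().to(device)
    uap = torch.zeros(1, 3, 224, 224, device=device)   # assumed input shape
    for _ in range(epochs):
        for x, _ in loader:
            x = x.to(device)
            x_adv = (x + uap).clamp(0, 1).requires_grad_(True)
            target = torch.full((x.size(0),), target_class,
                                dtype=torch.long, device=device)
            loss = F.cross_entropy(model(x_adv), target)
            grad = torch.autograd.grad(loss, x_adv)[0]
            # descend the targeted loss so predictions move toward target_class
            uap = uap - step * grad.sign().mean(dim=0, keepdim=True)
            uap = uap.clamp(-eps, eps).detach()
    return uap

@torch.no_grad()
def targeted_success_rate(model, loader, uap, target_class, device="cpu"):
    hits, total = 0, 0
    for x, _ in loader:
        pred = model((x.to(device) + uap).clamp(0, 1)).argmax(dim=1)
        hits += (pred == target_class).sum().item()
        total += x.size(0)
    return hits / total
```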
3.
PLoS One; 15(12): e0243963, 2020.
Article in English | MEDLINE | ID: mdl-33332412

ABSTRACT

Owing to the epidemic of the novel coronavirus disease 2019 (COVID-19), chest X-ray and computed tomography imaging are being used to screen COVID-19 patients. Because the need for expert radiologists, who are limited in number, is a bottleneck for screening, computer-aided systems based on deep neural networks (DNNs), often developed as open-source projects, are being used to detect COVID-19 cases rapidly and accurately. However, the vulnerability of such DNN-based systems has thus far been poorly evaluated, even though realistic and high-risk attacks using universal adversarial perturbation (UAP), a single (input-image-agnostic) perturbation that can induce DNN failure on most input images, are available. Thus, we focus on representative DNN models for detecting COVID-19 cases from chest X-ray images and evaluate their vulnerability to UAPs. We consider non-targeted UAPs, which cause a task failure by assigning an input an incorrect label, and targeted UAPs, which cause the DNN to classify an input into a specific class. The results demonstrate that the models are vulnerable to non-targeted and targeted UAPs, even when the UAPs are small. In particular, UAPs whose norm is only 2% of the average image norm in the dataset achieve >85% and >90% success rates for non-targeted and targeted attacks, respectively. With non-targeted UAPs, the DNN models judge most chest X-ray images to be COVID-19 cases; targeted UAPs make the models classify most chest X-ray images into a specified target class. The results indicate that careful consideration is required in practical applications of DNNs to COVID-19 diagnosis; in particular, they emphasize the need for strategies to address security concerns. As an example, we show that iterative fine-tuning of DNN models using UAPs improves their robustness against UAPs.


Subject(s)
COVID-19/diagnostic imaging; Databases, Factual; Lung/diagnostic imaging; Neural Networks, Computer; SARS-CoV-2; Tomography, X-Ray Computed; COVID-19/epidemiology; Female; Humans; Male; Thorax/diagnostic imaging
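
The abstract above mentions iterative fine-tuning with UAPs as an example defense. The sketch below shows one plausible form of that loop, assuming a craft_uap generator like the earlier sketches and a standard PyTorch training setup; all names and hyper-parameters are assumptions rather than the paper's code.

```python
# Sketch: iterative "fine-tune on UAP-perturbed images" defence, in simplified
# form. craft_uap is an assumed callable that attacks the *current* model;
# model, train_loader, rounds, lr, and eps are placeholders.
import torch
import torch.nn.functional as F

def uap_finetune(model, train_loader, craft_uap, rounds=3, lr=1e-4,
                 eps=8 / 255, device="cpu"):
    model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(rounds):
        # regenerate a UAP against the current model parameters each round
        uap = craft_uap(model, train_loader, eps=eps, device=device)
        model.train()
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            x_mix = torch.cat([x, (x + uap).clamp(0, 1)])   # clean + perturbed
            y_mix = torch.cat([y, y])
            opt.zero_grad()
            loss = F.cross_entropy(model(x_mix), y_mix)
            loss.backward()
            opt.step()
    return model
```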
4.
BMC Bioinformatics; 20(1): 329, 2019 Jun 13.
Article in English | MEDLINE | ID: mdl-31195956

ABSTRACT

BACKGROUND: Co-occurrence networks (ecological associations between sampled populations of microbial communities, inferred from taxonomic composition data obtained with high-throughput sequencing) are widely used in microbial ecology, and several methods for inferring them have been proposed. Co-occurrence network methods only infer ecological associations, yet they are often used to discuss species interactions, and the validity of this use is currently debated. In particular, these methods have so far been evaluated only with simple parametric statistical models, even though microbial compositions are determined through population dynamics. RESULTS: We comprehensively evaluated the validity of common methods for inferring microbial ecological networks through realistic simulations, assessing how correctly nine widely used methods describe interaction patterns in ecological communities. Contrary to previous studies, the performance of the co-occurrence network methods on compositional data was almost equal to or lower than that of classical methods (e.g., Pearson's correlation). The methods described the interaction patterns in dense and/or heterogeneous networks rather inadequately. Performance also depended on interaction type: interaction patterns in competitive communities were predicted relatively accurately, whereas those in predator-prey (parasitic) communities were predicted relatively poorly. CONCLUSIONS: Our findings indicate that co-occurrence network approaches may be insufficient for interpreting species interactions in microbiome studies. However, the results do not diminish the importance of these approaches; rather, they highlight the need for further careful evaluation of these widely used methods and for the development of methods better suited to inferring microbial ecological networks.


Subject(s)
Ecosystem; Microbiota; Models, Biological
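
For reference, the following is a minimal sketch of the kind of classical Pearson-correlation baseline the study compares against dedicated co-occurrence network methods: counts are converted to relative abundances, taxon-taxon correlations are computed, and strong correlations are kept as edges. The pseudo-count and threshold are illustrative assumptions, not values from the paper.

```python
# Sketch: a classical correlation baseline for co-occurrence network inference
# from compositional (relative abundance) data. Pseudo-count and threshold are
# illustrative assumptions.
import numpy as np

def pearson_cooccurrence_network(counts, threshold=0.6, pseudo_count=1.0):
    """counts: (n_samples, n_taxa) matrix of read counts."""
    counts = np.asarray(counts, dtype=float) + pseudo_count
    rel = counts / counts.sum(axis=1, keepdims=True)   # compositional data
    corr = np.corrcoef(rel, rowvar=False)               # taxon-by-taxon Pearson r
    adjacency = np.abs(corr) >= threshold
    np.fill_diagonal(adjacency, False)                  # drop self-edges
    return adjacency, corr

# Example with random data:
rng = np.random.default_rng(0)
adj, r = pearson_cooccurrence_network(rng.poisson(20, size=(50, 10)))
```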