An Algorithm for Out-Of-Distribution Attack to Neural Network Encoder (preprint)
arXiv; 2020.
Preprint in English | PREPRINT-ARXIV | ID: ppzbmed-2009.08016v3
ABSTRACT
Deep neural networks (DNNs), especially convolutional neural networks, have achieved superior performance on image classification tasks. However, such performance is only guaranteed if the input to a trained model is similar to the training samples, i.e., if the input follows the probability distribution of the training set. Out-Of-Distribution (OOD) samples do not follow the distribution of the training set, and therefore the predicted class labels on OOD samples become meaningless. Classification-based methods have been proposed for OOD detection; however, in this study we show that this type of method is theoretically ineffective and practically breakable because of dimensionality reduction in the model. We also show that Glow likelihood-based OOD detection is ineffective as well. Our analysis is demonstrated on five open datasets, including a COVID-19 CT dataset. Finally, we present a simple theoretical solution with guaranteed performance for OOD detection.
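As background for the classification-based detection the abstract critiques, a common baseline scores each input by its maximum softmax probability (MSP) and flags low-confidence inputs as OOD. The sketch below is an illustration of that general idea under assumed logits and a hypothetical threshold, not a reproduction of this paper's method or its proposed solution.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    # Maximum softmax probability: the classifier's top-class confidence.
    return softmax(logits).max(axis=-1)

def is_ood(logits, threshold=0.5):
    # Flag an input as OOD when its top-class confidence is below the
    # threshold (0.5 is an arbitrary illustrative choice).
    return msp_score(logits) < threshold

# A confident (in-distribution-like) prediction vs. a nearly flat (OOD-like) one.
in_dist_logits = np.array([8.0, 0.5, 0.2])
ood_logits = np.array([0.3, 0.2, 0.25])
print(is_ood(in_dist_logits), is_ood(ood_logits))  # → False True
```

The paper's point is that such scores can be broken: because the encoder reduces dimensionality, very different inputs can map to the same confident prediction, so high MSP does not guarantee an in-distribution input.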
Full text: Available | Collection: Preprints | Database: PREPRINT-ARXIV | Main subject: COVID-19 | Language: English | Year: 2020 | Document Type: Preprint
