ABSTRACT
Transferable adversarial attacks against deep neural networks (DNNs) have received broad attention in recent years. An adversarial example can be crafted on a surrogate model and then successfully attack an unknown target model, posing a severe threat to DNNs. The exact underlying reasons for this transferability are still not completely understood. Previous work mostly explores the causes from the model perspective, e.g., decision boundary, model architecture, and model capacity. Here, we investigate transferability from the data distribution perspective and hypothesize that pushing an image away from its original distribution enhances adversarial transferability. Specifically, moving an image out of its original distribution makes it hard for different models to classify the image correctly, which benefits the untargeted attack, while dragging an image into the target distribution misleads models into classifying it as the target class, which benefits the targeted attack. Towards this end, we propose a novel method that crafts adversarial examples by manipulating the distribution of the image. We conduct comprehensive transferable attacks against multiple DNNs to demonstrate the effectiveness of the proposed method. Our method significantly improves the transferability of the crafted attacks and achieves state-of-the-art performance in both untargeted and targeted scenarios, surpassing the previous best method by up to 40% in some cases. In summary, our work provides new insight into the study of adversarial transferability and a strong counterpart for future research on adversarial defense.
Subject(s): Neural Networks, Computer

ABSTRACT
Three-dimensional (3-D) meshes are commonly used to represent virtual surfaces and volumes. Over the past decade, 3-D meshes have emerged in industrial, medical, and entertainment applications, making 3-D mesh steganography and steganalysis of great practical significance. In this article, we provide a systematic survey of the literature on 3-D mesh steganography and steganalysis. Compared with an earlier survey (Girdhar et al., 2017), we propose a new taxonomy of steganographic algorithms with four categories: 1) two-state domain, 2) LSB domain, 3) permutation domain, and 4) transform domain. Regarding steganalysis algorithms, we divide them into two categories: 1) universal steganalysis and 2) specific steganalysis. For each category, the history of technical development and the current technological level are introduced and discussed. Finally, we highlight some promising future research directions and challenges in improving the performance of 3-D mesh steganography and steganalysis.
ABSTRACT
The standard tensor voting technique shows its versatility in tasks such as object recognition and semantic segmentation by recognizing feature points and sharp edges that can segment a model into several patches. Because existing steganalytic methods do not analyze correlations among neighboring faces, they are not very effective at discriminating stego meshes from cover meshes. In this paper, we propose a neighborhood-level representation-guided tensor voting model for 3-D mesh steganalysis that reveals the artifacts caused by data embedding. In the proposed steganalytic scheme, the normal voting tensor (NVT) operation is performed on the original mesh faces and the smoothed mesh faces separately. Then, the absolute differences between the eigenvalues of the two tensors (from the original face and the smoothed face) are taken as features that capture intricate relationships among the vertices. Subsequently, the extracted features are processed with a nonlinear mapping to boost their effectiveness. The experimental results show that the proposed feature sets prevail over state-of-the-art feature sets, including LFS64 and ELFS124, under various steganographic schemes.
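The feature pipeline described above (NVT on original and smoothed faces, absolute eigenvalue differences, nonlinear mapping) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the neighborhood definition (faces sharing a vertex), the area-based voting weight, the uniform Laplacian smoother, and the `log1p` nonlinear mapping are all assumptions made for concreteness.

```python
import numpy as np

def face_normals_areas(V, F):
    """Unit normals and areas of triangular faces (V: Nx3, F: Mx3 indices)."""
    cr = np.cross(V[F[:, 1]] - V[F[:, 0]], V[F[:, 2]] - V[F[:, 0]])
    norm = np.linalg.norm(cr, axis=1, keepdims=True)
    return cr / (norm + 1e-12), 0.5 * norm.ravel()

def laplacian_smooth(V, F, iters=1, lam=0.5):
    """Uniform Laplacian smoothing (assumed smoother; the paper may differ)."""
    nbrs = [set() for _ in range(len(V))]
    for f in F:
        for a in f:
            nbrs[a].update(v for v in f if v != a)
    Vs = V.copy()
    for _ in range(iters):
        Vn = Vs.copy()
        for i, ns in enumerate(nbrs):
            if ns:
                Vn[i] = Vs[i] + lam * (Vs[list(ns)].mean(axis=0) - Vs[i])
        Vs = Vn
    return Vs

def nvt_eigvals(V, F):
    """Per-face eigenvalues of the normal voting tensor over 1-ring neighbors."""
    n, area = face_normals_areas(V, F)
    vert2face = [[] for _ in range(len(V))]
    for fi, f in enumerate(F):
        for v in f:
            vert2face[v].append(fi)
    amax = area.max() + 1e-12
    eig = np.zeros((len(F), 3))
    for fi, f in enumerate(F):
        nb = set()                      # faces sharing a vertex with face fi
        for v in f:
            nb.update(vert2face[v])
        T = np.zeros((3, 3))
        for fj in nb:
            w = area[fj] / amax         # area-based voting weight (assumed)
            T += w * np.outer(n[fj], n[fj])
        eig[fi] = np.sort(np.linalg.eigvalsh(T))[::-1]
    return eig

def nvt_features(V, F, k=100.0):
    """|eig(original) - eig(smoothed)| per face, then a nonlinear mapping."""
    d = np.abs(nvt_eigvals(V, F) - nvt_eigvals(laplacian_smooth(V, F), F))
    return np.log1p(k * d)              # illustrative nonlinear boost
```

The per-face feature vectors would then be aggregated (e.g., by histogram or moments) into a fixed-length descriptor and fed to a classifier, as is standard in mesh steganalysis.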
ABSTRACT
Message hiding in texture image synthesis is a novel steganography approach in which a smaller source texture is resampled to synthesize a new texture image of arbitrary size with a similar local appearance. However, the mirror operation applied at the image boundary is flawed and easy to attack. We propose an attack on this steganographic scheme that can not only detect the stego images but also extract the hidden messages.