Results 1 - 2 of 2
1.
ISA Trans; 132: 80-93, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36494214

ABSTRACT

Gait identification based on Deep Learning (DL) techniques has recently emerged as a biometric technology for surveillance. We exploit the vulnerabilities and decision-making behavior of DL models in gait-based autonomous surveillance systems under the assumption that attackers have no access to the underlying model gradients or structure, using a patch-based black-box adversarial attack with Reinforcement Learning (RL). Because these automated surveillance systems are secured and block attacker access, the attack is framed as an RL problem in which the agent's goal is to determine the optimal image location that, when perturbed with random pixels, causes the model to misclassify. The proposed adversarial attack yields encouraging results (maximum success rate of 77.59%). Researchers should explore system resilience under such scenarios (e.g., when attackers have no system access) before deploying these models in surveillance applications.
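The attack setup described in the abstract lends itself to a compact illustration. The following Python code is a minimal sketch of the idea, not the authors' implementation: it assumes the victim model is exposed only as a label-returning query function, discretizes the image into a grid of candidate patch locations, and uses a simple epsilon-greedy bandit as a stand-in for the full RL agent. All names, sizes, and budgets are illustrative.

# Minimal sketch of a patch-based black-box attack, assuming label-only
# query access to the model. The epsilon-greedy bandit is a simplified
# stand-in for the RL agent described in the abstract.
import numpy as np

GRID = 8          # candidate patch locations per axis
PATCH = 16        # patch side length in pixels
QUERIES = 500     # query budget against the black-box model

def black_box_predict(image):
    """Stand-in for the secured surveillance model: label queries only."""
    rng = np.random.default_rng(int(image.sum()) % (2**32))
    return int(rng.integers(0, 10))

def apply_patch(image, cell, rng):
    """Overwrite one grid cell with random pixels (the perturbation)."""
    h, w, _ = image.shape
    y = (cell // GRID) * (h // GRID)
    x = (cell % GRID) * (w // GRID)
    patched = image.copy()
    patched[y:y + PATCH, x:x + PATCH] = rng.integers(0, 256, (PATCH, PATCH, 3))
    return patched

def attack(image, true_label, epsilon=0.2, seed=0):
    """Epsilon-greedy search for the patch location that flips the label."""
    rng = np.random.default_rng(seed)
    value = np.zeros(GRID * GRID)   # estimated success value per location
    count = np.zeros(GRID * GRID)
    for _ in range(QUERIES):
        if rng.random() < epsilon:
            cell = int(rng.integers(0, GRID * GRID))  # explore a random cell
        else:
            cell = int(value.argmax())                # exploit best cell so far
        patched = apply_patch(image, cell, rng)
        reward = float(black_box_predict(patched) != true_label)
        count[cell] += 1
        value[cell] += (reward - value[cell]) / count[cell]
        if reward:   # label flipped: attack succeeded at this location
            return cell, patched
    return None, image

img = np.zeros((128, 128, 3), dtype=np.uint8)
cell, adv = attack(img, true_label=black_box_predict(img))
print("successful patch location:", cell)

The key point the sketch preserves is that the agent never sees gradients or internals: it learns where to place the patch purely from accept/reject feedback on its queries.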


Subjects
Neural Networks, Computer; Reinforcement, Psychology; Biometry; Gait; Technology
2.
PeerJ Comput Sci; 8: e1125, 2022.
Article in English | MEDLINE | ID: mdl-36426246

ABSTRACT

Background: Deepfakes are fake images or videos generated by deep learning algorithms. Ongoing progress in deep learning techniques such as auto-encoders and generative adversarial networks (GANs) is approaching a level that makes reliable deepfake detection nearly impossible. A deepfake is created by swapping videos, images, or audio with a target, raising threats to digital media across the internet. Much work has been done on detecting deepfake videos through feature detection with convolutional neural networks (CNNs), recurrent neural networks (RNNs), and spatiotemporal CNNs. However, these techniques are unlikely to remain effective as GANs continue to improve: StyleGANs can already create fake videos that cannot be easily detected. Hence, deepfake prevention, rather than mere detection, is the need of the hour.

Methods: Recently, blockchain-based ownership methods, image tags, and watermarks in video frames have been used to prevent deepfakes. However, these approaches are not fully effective: an image frame can be faked by copying its watermark and reusing it to create a deepfake. In this research, an enhanced, modified version of the steganography technique RivaGAN is used to address this issue. The proposed approach encodes watermarks into the features of the video frames by training an "attention model" with the ReLU activation function to achieve a fast learning rate.

Results: The proposed attention-generating approach was validated with multiple activation functions and learning rates. It achieved 99.7% accuracy in embedding watermarks into the frames of the video. After generating the attention model, the generative adversarial network was trained using DeepFaceLab 2.0 and tested for prevention of deepfake attacks on watermark-embedded videos comprising 8,074 frames from different benchmark datasets. The proposed approach achieved a 100% success rate in preventing deepfake attacks. Our code is available at https://github.com/shahidmuneer/deepfakes-watermarking-technique.
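The prevention idea, embed a watermark across the frame so that tampering breaks it, can be illustrated with a short sketch. The following Python code is not the authors' RivaGAN-based method: it substitutes a plain least-significant-bit scheme spread over strided pixel positions for the attention-based encoder, and flags a frame as manipulated when the recovered watermark no longer matches. The watermark payload and all names are assumptions for illustration.

# Minimal sketch of watermark-based deepfake prevention, assuming a plain
# LSB scheme in place of the RivaGAN attention encoder. A frame is treated
# as authentic only if the full watermark can be recovered.
import numpy as np

WATERMARK = np.unpackbits(np.frombuffer(b"owner-id", dtype=np.uint8))  # 64 bits

def watermark_positions(frame):
    """Spread the watermark bits at evenly strided offsets over the frame."""
    return np.linspace(0, frame.size - 1, WATERMARK.size).astype(int)

def embed(frame):
    """Write each watermark bit into the LSB of one strided byte."""
    flat = frame.reshape(-1).copy()
    idx = watermark_positions(frame)
    flat[idx] = (flat[idx] & 0xFE) | WATERMARK
    return flat.reshape(frame.shape)

def verify(frame):
    """Recover the strided LSBs and compare against the expected watermark."""
    idx = watermark_positions(frame)
    bits = frame.reshape(-1)[idx] & 1
    return bool((bits == WATERMARK).all())

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
marked = embed(frame)
print("intact frame authentic:", verify(marked))   # True
fake = marked.copy()
fake[10:30, 10:30] = 0                             # simulated face-swap region
print("tampered frame authentic:", verify(fake))   # False: watermark broken

Spreading the bits across the whole frame is the design point: unlike a single visible watermark that an attacker can copy, any localized edit such as a face swap overwrites some embedded bits, so the recovered watermark fails verification.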
