Results 1 - 2 of 2
1.
Scanning; 2023: 7705844, 2023.
Article in English | MEDLINE | ID: mdl-37101709

ABSTRACT

In this work, ultrasonic severe surface rolling (USSR), a new surface nanocrystallization technique, is used to prepare a gradient nanostructure (GNS) on commercial Q345 structural steel. The microstructure of the GNS surface layer is characterized by EBSD and TEM; the results indicate that a nanoscale substructure forms in the topmost surface layer. The substructures consist of subgrains and dislocation cells with an average size of 309.4 nm. After a single USSR pass, the GNS surface layer is approximately 300 µm thick. Uniaxial tensile testing indicates that the yield strength of the USSR sample is 25.1% higher than that of the as-received sample, with only slightly decreased ductility. The nanoscale substructure, refined grains, high dislocation density, and hetero-deformation-induced strengthening are identified as responsible for the enhanced strength. This study provides a feasible approach to improving the mechanical properties of structural steel for a wide range of applications.

2.
Sensors (Basel); 22(9), 2022 Apr 30.
Article in English | MEDLINE | ID: mdl-35591134

ABSTRACT

Deep-learning technologies have shown impressive performance on many tasks in recent years. However, using them carries multiple serious security risks. For example, state-of-the-art deep-learning models are vulnerable to adversarial examples, in which specific subtle perturbations cause the model's predictions to be wrong, and these technologies can be abused to tamper with and forge multimedia, i.e., deep forgery. In this paper, we propose a universal detection framework for adversarial examples and fake images. We observe differences in the distribution of model outputs between normal and adversarial examples (or fake images) and train a detector to learn these differences. We perform extensive experiments on the CIFAR10 and CIFAR100 datasets. Experimental results show that the proposed framework is feasible and effective in detecting adversarial examples and fake images. Moreover, the framework generalizes well across different datasets and model structures.
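The core idea in this abstract, that normal and adversarial inputs produce differently distributed model outputs, and that a detector can be trained on those differences, can be illustrated with a minimal sketch. The entropy-threshold detector and the synthetic logits below are illustrative assumptions, not the authors' actual framework: real adversarial examples would come from an attack on a trained model, and the paper's detector is learned rather than a fixed threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over the class axis.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def entropy(p):
    # Shannon entropy of each output distribution (one scalar per sample).
    return -(p * np.log(p + 1e-12)).sum(axis=1)

# Synthetic stand-ins (assumption): "normal" inputs yield confident,
# peaked 10-class outputs; "adversarial" inputs yield flatter outputs.
normal_logits = rng.normal(0.0, 1.0, (500, 10))
normal_logits[:, 0] += 6.0               # one dominant class
adv_logits = rng.normal(0.0, 1.0, (500, 10))

normal_H = entropy(softmax(normal_logits))
adv_H = entropy(softmax(adv_logits))

# Simplest possible "detector": flag inputs whose output entropy exceeds
# a threshold chosen from normal data (95th percentile).
threshold = np.quantile(normal_H, 0.95)
tpr = (adv_H > threshold).mean()      # adversarial correctly flagged
fpr = (normal_H > threshold).mean()   # normal falsely flagged
```

On this synthetic data the two entropy distributions separate cleanly, so the threshold detector achieves a high true-positive rate at a low false-positive rate; the paper's learned detector plays the same role but is trained on richer output statistics.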


Subjects
Deep Learning, Neural Networks (Computer), Multimedia