Results 1 - 5 of 5
1.
PLoS One ; 17(4): e0266060, 2022.
Article in English | MEDLINE | ID: mdl-35476838

ABSTRACT

The reason for the existence of adversarial samples is still barely understood. Here, we explore the transferability of learned features to Out-of-Distribution (OoD) classes. We do this by assessing neural networks' capability to encode the existing features, revealing an intriguing connection with adversarial attacks and defences. The principal idea is that, "if an algorithm learns rich features, such features should represent Out-of-Distribution classes as a combination of previously learned In-Distribution (ID) classes". This is because OoD classes usually share several regular features with ID classes, provided that the features learned are general enough. We further introduce two metrics to assess the transferred features representing OoD classes. One is based on inter-cluster validation techniques, while the other captures the influence of a class over learned features. Experiments suggest that several adversarial defences decrease the accuracy of some attacks and improve the transferability-of-features as measured by our metrics. Experiments also reveal a relationship between the proposed metrics and adversarial attacks (a high Pearson correlation coefficient and low p-value). Further, statistical tests suggest that several adversarial defences, in general, significantly improve transferability. Our tests suggest that models with a higher transferability-of-features generally have higher robustness against adversarial attacks. Thus, the experiments suggest that the objectives of adversarial machine learning might be much closer to those of domain transfer learning than previously thought.
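The inter-cluster transferability idea above can be illustrated with a minimal sketch, not the paper's actual metrics: a hypothetical `transferability_score` measuring how closely each OoD feature vector matches its best-matching ID class centroid, and a plain Pearson correlation of the kind used to relate such metrics to attack results. The function names and scoring rule are illustrative assumptions.

```python
import numpy as np

def transferability_score(ood_feats, id_centroids):
    """Crude transferability proxy (assumed, not the paper's metric):
    mean cosine similarity of each OoD feature vector to its
    closest In-Distribution class centroid."""
    ood = ood_feats / np.linalg.norm(ood_feats, axis=1, keepdims=True)
    cen = id_centroids / np.linalg.norm(id_centroids, axis=1, keepdims=True)
    sims = ood @ cen.T                # (n_ood, n_id) cosine similarities
    return sims.max(axis=1).mean()   # closeness to best-matching ID class

def pearson(x, y):
    """Pearson correlation coefficient between two metric series."""
    return np.corrcoef(x, y)[0, 1]
```

A score near 1 would mean OoD samples sit close to some ID cluster; correlating such scores with per-defence attack results is the kind of analysis the abstract describes.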


Subjects
Machine Learning, Neural Networks (Computer), Algorithms, Data Correlation
2.
PLoS One ; 17(4): e0265723, 2022.
Article in English | MEDLINE | ID: mdl-35421125

ABSTRACT

There are many types of adversarial attacks and defences for machine learning algorithms, which makes assessing the robustness of an algorithm a daunting task. To make matters worse, there is an intrinsic bias in these adversarial attacks and defences. Here, we organise the problems faced: a) Model Dependence, b) Insufficient Evaluation, c) False Adversarial Samples, and d) Perturbation Dependent Results. Based on this, we propose a model-agnostic adversarial robustness assessment method based on L0 and L∞ distance-based norms and the concept of robustness levels to tackle these problems. We validate our robustness assessment on several neural network architectures (WideResNet, ResNet, AllConv, DenseNet, NIN, LeNet and CapsNet) and adversarial defences for the image classification problem. The proposed assessment reveals that robustness may vary significantly depending on the metric used (i.e., L0 or L∞); hence, this duality should be taken into account for a correct evaluation. Moreover, a mathematical derivation and a counter-example suggest that the L1 and L2 metrics alone are not sufficient to avoid spurious adversarial samples. Interestingly, the threshold attack of the proposed assessment is a novel L∞ black-box adversarial method which requires even less perturbation than the One-Pixel Attack (only 12% of the One-Pixel Attack's amount of perturbation) to achieve similar results. We further show that all current networks and defences are vulnerable at all levels of robustness, suggesting that current networks and defences are only effective against a few attacks, leaving the models vulnerable to other types of attacks.
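The two distance-based norms at the core of the assessment can be sketched generically; this illustrates L0 (number of changed pixels) and L∞ (largest single change) between an image and its adversarial counterpart, and is not the paper's evaluation code.

```python
import numpy as np

def l0_distance(x, x_adv):
    """L0 'norm' of the perturbation: the number of pixels
    changed in any channel (the quantity a one-pixel attack bounds)."""
    return int(np.count_nonzero(np.any(x != x_adv, axis=-1)))

def linf_distance(x, x_adv):
    """L-infinity norm of the perturbation: the largest absolute
    change applied to any single value (what a threshold attack bounds)."""
    return float(np.max(np.abs(x.astype(float) - x_adv.astype(float))))
```

An attack can be small in one norm and large in the other, which is the duality the abstract argues must be evaluated jointly.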


Subjects
Machine Learning, Neural Networks (Computer), Algorithms
3.
Cogn Neurodyn ; 15(5): 743-755, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34603540

ABSTRACT

Decision-making models in the behavioral, cognitive, and neural sciences typically consist of forced-choice paradigms with two alternatives. While it is theoretically feasible to translate any decision situation into a sequence of binary choices, real-life decision-making is typically more complex and nonlinear, involving choices among multiple items, graded judgments, and deferments of decision-making. Here, we discuss how the complexity of real-life decision-making can be addressed using conventional decision-making models by focusing on the interactive dynamics between criteria settings and the collection of evidence. Decision-makers can engage in multi-stage, parallel decision-making by exploiting the space for deliberation, with non-binary readings of evidence available at any point in time. The interactive dynamics principally adhere to the speed-accuracy tradeoff, such that increasing the space for deliberation enables extended data collection. The setting of the space for deliberation reflects a form of meta-decision-making that can, and should, be studied empirically as a value-based exercise that weighs the prior propensities, the economics of information seeking, and the potential outcomes. Importantly, the control of the space for deliberation raises a question of agency. Decision-makers may actively and explicitly set their own decision parameters, but these parameters may also be set by environmental pressures. Thus, decision-makers may be influenced, or nudged in a particular direction, by how decision problems are framed, for example with a sense of urgency or a binary definition of choice options. We argue that a proper understanding of these mechanisms has important practical implications for the optimal usage of the space for deliberation.
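The speed-accuracy tradeoff described above is commonly formalised as evidence accumulation toward a decision boundary; the following is a minimal random-walk toy, an illustrative assumption rather than a model from the paper, in which widening the boundary (the "space for deliberation") trades decision time for accuracy.

```python
import random

def accumulate(drift, threshold, max_steps=10_000, noise=1.0, rng=None):
    """Random-walk evidence accumulation toward +/- threshold.
    Returns (choice, steps): choice is +1 or -1 (0 if deferred),
    steps is the decision time. A larger threshold means more
    deliberation: slower but more accurate decisions."""
    rng = rng or random.Random()
    evidence = 0.0
    for step in range(1, max_steps + 1):
        evidence += drift + rng.gauss(0.0, noise)
        if abs(evidence) >= threshold:
            return (1 if evidence > 0 else -1), step
    return 0, max_steps  # decision deferred
```

With a positive drift, raising the threshold raises the fraction of +1 (correct) choices while lengthening mean decision time, which is the tradeoff the abstract describes.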

4.
IEEE Trans Neural Netw Learn Syst ; 28(8): 1759-1773, 2017 08.
Article in English | MEDLINE | ID: mdl-28113564

ABSTRACT

Learning algorithms are being increasingly adopted in various applications. However, further expansion will require methods that work more automatically. To enable this level of automation, a more powerful solution representation is needed. However, increasing the representation's complexity raises a second problem: the search space becomes huge, and therefore an associated scalable and efficient search algorithm is also required. To solve both problems, first, a powerful representation is proposed that unifies most of the neural network features from the literature into one representation. Second, a new diversity-preserving method called spectrum diversity is created, based on the new concept of a chromosome spectrum that builds a spectrum out of the characteristics and frequency of alleles in a chromosome. The combination of spectrum diversity with a unified neuron representation enables the algorithm to either surpass or equal NeuroEvolution of Augmenting Topologies on all five classes of problems tested. Ablation tests justify the good results, showing the importance of the new features added to the unified neuron representation. Part of the success is attributed to the novelty-focused evolution and to the good scalability with chromosome size provided by spectrum diversity. Thus, this paper sheds light on a new representation and diversity-preserving mechanism that should influence algorithms and applications to come.
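The chromosome-spectrum idea can be sketched generically: build a relative-frequency "spectrum" of the allele types in a chromosome, then compare spectra so that dissimilar individuals can be favoured to preserve diversity. The representation and distance below are illustrative assumptions, not the paper's definitions.

```python
from collections import Counter

def chromosome_spectrum(chromosome):
    """Build a 'spectrum': the relative frequency of each allele
    type in the chromosome (here a plain list of gene labels)."""
    counts = Counter(chromosome)
    total = len(chromosome)
    return {allele: counts[allele] / total for allele in counts}

def spectrum_distance(spec_a, spec_b):
    """L1 distance between two spectra; individuals with distant
    spectra could be kept alive to maintain population diversity."""
    keys = set(spec_a) | set(spec_b)
    return sum(abs(spec_a.get(k, 0.0) - spec_b.get(k, 0.0)) for k in keys)
```

Because the spectrum depends only on allele frequencies, it scales with chromosome size without pairwise gene matching, which is one plausible reading of the scalability claim.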

5.
Evol Comput ; 23(1): 1-36, 2015.
Article in English | MEDLINE | ID: mdl-24437665

ABSTRACT

Structured evolutionary algorithms have been investigated for some time. However, they have been underexplored, especially in the field of multi-objective optimization. Despite good results, the use of complex dynamics and structures keeps the understanding and adoption rate of structured evolutionary algorithms low. Here, we propose a general subpopulation framework that is capable of integrating optimization algorithms without restrictions, as well as aiding the design of structured algorithms. The proposed framework can generalize most structured evolutionary algorithms, such as cellular algorithms, island models, spatial predator-prey, and restricted-mating-based algorithms. Moreover, we propose two algorithms based on the general subpopulation framework, demonstrating that with the simple addition of a number of single-objective differential evolution algorithms, one for each objective, the results improve greatly, even when the combined algorithms behave poorly when evaluated alone in the tests. Most importantly, the comparison between the subpopulation algorithms and their related panmictic algorithms suggests that competition between different strategies inside one population can have deleterious consequences for an algorithm, and reveals a strong benefit of using the subpopulation framework.
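The subpopulation idea, one single-objective optimiser per objective with no competition across subpopulations, can be sketched with a toy (1+1)-style hill climber standing in for differential evolution; all names and parameters below are hypothetical illustrations, not the paper's algorithms.

```python
import random

def evolve_subpopulations(objectives, pop_size=10, generations=50,
                          dim=3, seed=0):
    """Minimal subpopulation sketch: one subpopulation per objective,
    each evolved by mutation hill climbing on its own single objective
    (minimisation), so strategies never compete across subpopulations."""
    rng = random.Random(seed)
    subpops = [[[rng.uniform(-5, 5) for _ in range(dim)]
                for _ in range(pop_size)] for _ in objectives]
    for _ in range(generations):
        for f, pop in zip(objectives, subpops):
            for i, ind in enumerate(pop):
                child = [g + rng.gauss(0.0, 0.3) for g in ind]
                if f(child) < f(ind):  # keep only improvements
                    pop[i] = child
    return subpops
```

Isolating each objective in its own subpopulation avoids the deleterious within-population competition between strategies that the abstract reports for panmictic algorithms.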


Subjects
Algorithms, Computing Methodologies, Theoretical Models, Computer Simulation