Results 1 - 2 of 2
1.
IEEE Trans Neural Netw; 21(12): 1915-24, 2010 Dec.
Article in English | MEDLINE | ID: mdl-20937580

ABSTRACT

Neural cryptography deals with the problem of "key exchange" between two neural networks using the mutual learning concept. The two networks exchange their outputs (in bits), and the key shared by the two communicating parties is eventually represented in the final learned weights, at which point the networks are said to be synchronized. The security of neural synchronization is put at risk if an attacker is capable of synchronizing with either of the two parties during the training process. Diminishing the probability of such a threat therefore improves the reliability of exchanging the output bits over a public channel. The synchronization-with-feedback algorithm is one existing algorithm that enhances the security of neural cryptography. This paper proposes three new algorithms that enhance the mutual learning process. They depend mainly on disrupting the attacker's confidence in the exchanged outputs and input patterns during training. The first algorithm, "Do not Trust My Partner" (DTMP), relies on one party sending erroneous output bits, with the other party being capable of predicting and correcting the error. The second algorithm, "Synchronization with Common Secret Feedback" (SCSFB), keeps the inputs partially secret, so the attacker has to train its network on input patterns that differ from the training sets used by the communicating parties. The third algorithm is a hybrid technique combining the features of DTMP and SCSFB. The proposed approaches are shown to outperform the synchronization-with-feedback algorithm in the time needed for the parties to synchronize.


Subject(s)
Algorithms; Neural Networks, Computer; Feedback; Probability; Research Design
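
The algorithms above harden the baseline tree parity machine (TPM) mutual-learning protocol. As a point of reference, here is a minimal sketch of that baseline protocol only (not the paper's DTMP/SCSFB variants); the network sizes K, N and weight bound L are illustrative choices, not values taken from the paper:

```python
import numpy as np

K, N, L = 3, 10, 3  # illustrative: hidden units, inputs per unit, weight bound

rng = np.random.default_rng(0)

def tpm_output(w, x):
    """Hidden-unit signs and overall parity of a tree parity machine."""
    sigma = np.sign(np.sum(w * x, axis=1))
    sigma[sigma == 0] = -1          # break ties deterministically
    return sigma, int(np.prod(sigma))

def hebbian_update(w, x, sigma, tau):
    """Update only the hidden units that agree with the machine's output."""
    for k in range(K):
        if sigma[k] == tau:
            w[k] = np.clip(w[k] + tau * x[k], -L, L)

wA = rng.integers(-L, L + 1, size=(K, N))   # party A's secret weights
wB = rng.integers(-L, L + 1, size=(K, N))   # party B's secret weights

steps = 0
while not np.array_equal(wA, wB):
    x = rng.choice([-1, 1], size=(K, N))    # public random input pattern
    sA, tauA = tpm_output(wA, x)
    sB, tauB = tpm_output(wB, x)
    if tauA == tauB:                        # only outputs cross the channel
        hebbian_update(wA, x, sA, tauA)
        hebbian_update(wB, x, sB, tauB)
    steps += 1

print(f"synchronized after {steps} exchanged inputs; key = final weights")
```

An eavesdropper who sees the same public inputs and output bits can run the same update rule on its own network, which is precisely why the paper's algorithms deliberately corrupt the exchanged outputs (DTMP) or keep part of the inputs secret (SCSFB).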
2.
IEEE Trans Neural Netw; 15(2): 505-14, 2004 Mar.
Article in English | MEDLINE | ID: mdl-15384542

ABSTRACT

Training neural networks on parallel architectures has been used to assess the performance of many parallel machines. In this paper, we investigate the implementation of backpropagation (BP) on the Alex AVX-2 coarse-grained MIMD machine. A host-worker parallel implementation is carried out to train different networks to learn the NetTalk dictionary. First, a computational model is constructed using a single processor to complete the learning process. A communication model for the host-worker topology is also developed to estimate the communication overhead of the broadcasting/gathering process. Both models are then used to predict the machine's performance when p processors are used, and the prediction is compared with the measured performance of the parallel implementation. Simulation results show that both models effectively predict the machine's performance on the NetTalk problem. Finally, the AVX-2 NetTalk implementation is compared with the performance of other parallel platforms.


Subject(s)
Databases, Factual; Neural Networks, Computer; Artificial Intelligence
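
To illustrate the kind of prediction the abstract describes, here is a hedged sketch of a host-worker performance model: per-epoch computation is assumed to divide evenly across p workers, while broadcast/gather overhead is assumed to grow linearly with p. All constants are hypothetical placeholders, not the paper's measured AVX-2 values:

```python
def predicted_epoch_time(p, t_comp=120.0, t_bcast=0.4, t_gather=0.6):
    """Host-worker model: computation divides across p workers,
    while broadcast/gather overhead grows linearly with p.
    All parameter values are illustrative placeholders."""
    return t_comp / p + p * (t_bcast + t_gather)

def speedup(p, **kw):
    """Predicted speedup relative to the single-processor model."""
    return predicted_epoch_time(1, **kw) / predicted_epoch_time(p, **kw)

for p in (1, 2, 4, 8, 16):
    print(f"p={p:2d}  T={predicted_epoch_time(p):7.2f}s  S={speedup(p):5.2f}")
```

Under these assumed costs the predicted speedup peaks at an intermediate p and then degrades as communication dominates, which is exactly the trade-off such computational/communication models are built to expose before committing to a processor count.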