Results 1 - 2 of 2
1.
IEEE Trans Cybern ; 47(8): 2044-2057, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28371788

ABSTRACT

This paper considers a class of optimization problems modeled as the sum of all agents' convex cost functions, where each agent has access only to its own function. Communication between agents in the multiagent network is assumed to be limited: each agent can exchange information with its neighbors only through time-varying communication channels of limited capacity. A technique that overcomes this limitation is to quantize the exchanged information. The quantized information is first encoded as a binary sequence at the sending agent. After the binary sequence is received by the neighboring agent, a corresponding decoding scheme recovers the original information up to an error introduced by the quantization process. Using each agent's encoding states (associated with its out-channels) and decoding states (associated with its in-channels), we devise a set of distributed optimization algorithms that generate two iterative sequences, one converging to the optimal solution and the other to the optimal value. We prove that if the parameters satisfy some mild conditions, the quantization errors are bounded and consensus optimization is achieved. We also explore thoroughly how to minimize the number of quantization levels on each connected communication channel in fixed networks. It is found that, with properly chosen system parameters, one bit of information exchange suffices to ensure consensus optimization. Finally, we present two numerical simulation experiments that illustrate the efficacy of the algorithms and validate the theoretical findings.
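The setting above can be illustrated with a toy simulation. This is a hypothetical sketch, not the paper's encoder/decoder scheme: three agents minimize the sum of quadratics f(x) = Σᵢ (x − aᵢ)², each knowing only its own aᵢ, and exchange uniformly quantized states with neighbors to model the finite-capacity channels. The quantizer step, mixing weight, and step-size schedule are illustrative assumptions.

```python
import numpy as np

def quantize(x, step=0.05):
    """Uniform quantizer modeling the finite-capacity channel."""
    return step * np.round(x / step)

a = np.array([1.0, 2.0, 6.0])                   # agent i knows only a[i]
x = np.zeros(3)                                 # local estimates
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}   # fixed connected graph

for k in range(2000):
    alpha = 1.0 / (k + 10)                      # diminishing step size
    q = quantize(x)                             # values actually transmitted
    x_new = np.empty(3)
    for i in range(3):
        # consensus step on quantized neighbor states ...
        mix = q[i] + 0.3 * sum(q[j] - q[i] for j in neighbors[i])
        # ... followed by a local (sub)gradient step on f_i(x) = (x - a_i)^2
        x_new[i] = mix - alpha * 2 * (x[i] - a[i])
    x = x_new

print(np.round(x, 2))   # all agents near the global minimizer mean(a) = 3.0
```

With bounded quantization error, the agents agree up to roughly the quantizer step while converging toward the minimizer of the sum; the paper's contribution is achieving this with encoded binary sequences and, in the limit, one-bit channels.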

2.
IEEE Trans Cybern ; 47(3): 809-814, 2017 Mar.
Article in English | MEDLINE | ID: mdl-26887026

ABSTRACT

Recently, the projection neural network (PNN) was proposed for solving monotone variational inequalities (VIs) and related convex optimization problems. In this paper, by incorporating an inertial term into the first-order PNN, an inertial PNN (IPNN) is proposed for solving VIs. Under certain conditions, the IPNN is proved to be stable and can be applied to a broader class of constrained optimization problems related to VIs. Compared with existing neural networks (NNs), the inertial term allows us to overcome some drawbacks of the many NNs constructed on the steepest descent method, and the model is more convenient for exploring different Karush-Kuhn-Tucker (KKT) optimal solutions of nonconvex optimization problems. Finally, simulation results on three numerical examples show the effectiveness and performance of the proposed NN.
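The idea can be sketched numerically. The classical PNN for VI(F, Ω) has dynamics ẋ = P_Ω(x − F(x)) − x; an inertial variant adds a second-order (momentum) term, here written as ẍ + γẋ = P_Ω(x − F(x)) − x and integrated by Euler. This is an illustrative sketch, assuming a box constraint set, a strongly monotone F(x) = 2(x − c) whose VI solution is x* = c, and hand-picked damping and step constants, none taken from the paper.

```python
import numpy as np

c = np.array([0.3, 0.7])                 # VI solution lies inside the box
F = lambda x: 2.0 * (x - c)              # strongly monotone operator
proj = lambda x: np.clip(x, 0.0, 1.0)    # projection onto Omega = [0,1]^2

x = np.array([1.0, 0.0])                 # state
v = np.zeros(2)                          # velocity: the inertial term
h, gamma = 0.05, 2.0                     # Euler step and damping coefficient

for _ in range(3000):
    # second-order dynamics: v' = -gamma*v + (P(x - F(x)) - x), x' = v
    v += h * (-gamma * v + proj(x - F(x)) - x)
    x += h * v

print(np.round(x, 3))                    # approaches the VI solution x* = c
```

Near the equilibrium these dynamics behave like a damped oscillator, which is the momentum-style behavior that lets an inertial network escape the slow flat regions that plague steepest-descent-based NNs.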


Subjects
Models, Theoretical; Neural Networks, Computer; Algorithms