Results 1 - 10 of 10
1.
Neural Netw ; 175: 106291, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38593557

ABSTRACT

This paper considers a distributed constrained optimization problem over a multi-agent network in the non-Euclidean sense. The gossip protocol is adopted to relieve the communication burden, and it also adapts to the constantly changing topology of the network. Based on this idea, a gossip-based distributed stochastic mirror descent (GB-DSMD) algorithm is proposed to handle the problem under consideration. The performance of GB-DSMD is analyzed for constant and diminishing step sizes. When the step size is constant, an error bound is derived between the optimal function value and the expected function value at the average of the algorithm's iterates. For a diminishing step size, it is proved that the output of the algorithm uniformly approaches the optimal value with probability 1. Finally, as a numerical example, distributed logistic regression is reported to demonstrate the effectiveness of the GB-DSMD algorithm.


Subjects
Algorithms, Neural Networks (Computer), Stochastic Processes, Computer Simulation, Logistic Models
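
For illustration only, the following is a minimal sketch of one GB-DSMD-style iteration, assuming a pairwise gossip average with a single neighbor and the negative-entropy mirror map on the probability simplex (so the Bregman projection is a simple normalization); the function names, weights, and step size are hypothetical and not taken from the paper.

```python
import numpy as np

def gossip_mirror_descent_step(x_i, x_j, stoch_grad, step):
    """One illustrative gossip + stochastic mirror descent update for agent i.

    x_i, x_j   : current iterates of agent i and its randomly gossiping neighbor j
                 (both assumed to lie on the probability simplex)
    stoch_grad : a sampled subgradient of agent i's local objective at x_i
    step       : step size (constant or diminishing)
    """
    # Gossip step: average with the single contacted neighbor, which relieves
    # the communication burden compared with exchanging data with all neighbors.
    y = 0.5 * (x_i + x_j)

    # Mirror descent step with the negative-entropy mirror map (KL Bregman
    # divergence); this yields the exponentiated-gradient update, and the
    # final normalization is the Bregman projection back onto the simplex.
    z = y * np.exp(-step * stoch_grad)
    return z / z.sum()

# Toy usage on a 3-dimensional simplex
x_i = np.array([0.5, 0.3, 0.2])
x_j = np.array([0.2, 0.4, 0.4])
g = np.array([1.0, -0.5, 0.2])   # stand-in for a sampled subgradient
print(gossip_mirror_descent_step(x_i, x_j, g, step=0.1))
```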
2.
IEEE Trans Cybern ; 53(6): 3561-3573, 2023 Jun.
Article in English | MEDLINE | ID: mdl-34818207

ABSTRACT

This article is concerned with the distributed stochastic multiagent constrained optimization problem over a time-varying network with a class of communication noise. The problem is considered in a composite optimization setting, which is more general than most of the literature on noisy network optimization. It is noteworthy that the mainstream existing methods for noisy network optimization are Euclidean-projection based. Based on the Bregman-projection-based mirror descent scheme, we present a non-Euclidean method and investigate its convergence behavior. The method is a distributed stochastic composite mirror descent method for noisy networks (DSCMD-N), which provides a more general algorithmic framework. Some new error bounds for DSCMD-N are obtained. To the best of our knowledge, this is the first work to analyze and derive convergence rates for an optimization algorithm in noisy network optimization. We also show that the optimal rate of O(1/√T) for nonsmooth convex optimization can be obtained by the proposed method under an appropriate communication noise condition. Moreover, novel convergence results are comprehensively derived in the senses of convergence in expectation, convergence with high probability, and almost sure convergence.
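
As a rough illustration of the composite setting under communication noise (not the paper's exact DSCMD-N algorithm), the sketch below performs a consensus step on noise-corrupted neighbor iterates followed by a composite mirror-descent step; the Euclidean Bregman divergence and l1 regularizer are assumed only to keep the proximal map explicit (soft-thresholding), and all names and the Gaussian noise model are hypothetical.

```python
import numpy as np

def noisy_composite_step(neighbor_vals, weights, grad, step, lam,
                         noise_std=0.01, rng=np.random.default_rng()):
    """Consensus over noisy links, then a composite (mirror-descent-style) step.

    neighbor_vals : list of neighbors' iterates as received over the network
    weights       : consensus weights for those neighbors (row of a stochastic matrix)
    grad          : stochastic gradient of the smooth part of the local loss
    step, lam     : step size and weight of the l1 regularizer (the composite part)
    """
    # Each received iterate is corrupted by additive communication noise.
    noisy = [v + noise_std * rng.standard_normal(v.shape) for v in neighbor_vals]
    y = sum(w * v for w, v in zip(weights, noisy))

    # With the Euclidean Bregman divergence, the proximal map of lam*||.||_1 is
    # soft-thresholding; the paper's method is stated for general Bregman
    # divergences, of which this is only the simplest instance.
    z = y - step * grad
    return np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
```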

3.
IEEE Trans Neural Netw Learn Syst ; 34(9): 6480-6491, 2023 Sep.
Article in English | MEDLINE | ID: mdl-34982702

ABSTRACT

This article is concerned with distributed convex constrained optimization over a time-varying multiagent network in the non-Euclidean sense, where the bandwidth limitation of the network is taken into account. To save network resources and thereby reduce communication costs, we apply an event-triggered strategy (ETS) to the information exchange among the agents over the network. Then, an event-triggered distributed stochastic mirror descent (ET-DSMD) algorithm, which utilizes the Bregman divergence as the distance-measuring function, is presented to investigate the multiagent optimization problem subject to a convex constraint set. Moreover, we analyze the convergence of the developed ET-DSMD algorithm. An upper bound on the convergence result of each agent is established, which depends on the trigger threshold. It shows that a sublinear upper bound can be guaranteed if the trigger threshold converges to zero as time goes to infinity. Finally, a distributed logistic regression example is provided to demonstrate the feasibility of the developed ET-DSMD algorithm.
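
For intuition only, a minimal sketch of the event-triggered communication idea: an agent transmits its state only when it has drifted sufficiently far from the value it last broadcast. The trigger rule, names, and threshold sequence below are illustrative assumptions, not the paper's exact ETS.

```python
import numpy as np

def maybe_broadcast(x_current, x_last_broadcast, threshold):
    """Transmit only when the deviation from the last broadcast state exceeds
    the trigger threshold; otherwise stay silent and let neighbors reuse the
    previously received value."""
    if np.linalg.norm(x_current - x_last_broadcast) > threshold:
        return x_current, x_current          # transmit and refresh the stored value
    return None, x_last_broadcast            # no transmission this round

# A threshold sequence that converges to zero, as the sublinear bound requires
thresholds = [1.0 / (t + 1) for t in range(1, 6)]
```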

4.
IEEE Trans Cybern ; PP, 2022 Jun 20.
Article in English | MEDLINE | ID: mdl-35724296

ABSTRACT

This article studies the distributed online stochastic convex optimization problem with time-varying constraints over a multiagent system. The sequences of cost functions and constraint functions, both of which have dynamic parameters following time-varying distributions, are not known to the agents ahead of time. Agents in the network can interact with their neighbors through a sequence of strongly connected, time-varying graphs. We develop an adaptive distributed bandit primal-dual algorithm whose step-size and regularization sequences are adaptive and require no prior knowledge of the total iteration span T. The algorithm applies bandit feedback with a one-point or two-point gradient estimator to evaluate gradient values. We show that if the drift of the benchmark sequence is sublinear, then the adaptive distributed bandit primal-dual algorithm achieves sublinear expected dynamic regret and constraint violation with either kind of gradient estimator. We present a numerical experiment to show the performance of the proposed method.
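
The two bandit-feedback estimators mentioned above have standard forms in the literature; below is a sketch of one common construction of each (the exact scalings and smoothing parameters used in the paper may differ, and the names are hypothetical).

```python
import numpy as np

def one_point_estimator(f, x, delta, rng=np.random.default_rng()):
    """One-point estimator: only the single value f(x + delta*u) is observed."""
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                    # uniform direction on the unit sphere
    return (d / delta) * f(x + delta * u) * u

def two_point_estimator(f, x, delta, rng=np.random.default_rng()):
    """Two-point estimator: two function evaluations per round give a
    lower-variance gradient estimate."""
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    return (d / (2 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u
```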

5.
IEEE Trans Cybern ; 52(4): 2263-2273, 2022 Apr.
Article in English | MEDLINE | ID: mdl-32609617

ABSTRACT

In this article, we concentrate on distributed online convex optimization problems over multiagent systems, where the communication between nodes is represented by a class of time-varying, uniformly strongly connected directed graphs. The problem is considered under bandit feedback, in the sense that at each time only the cost function value at the committed point is revealed to each node. Nodes then update their decisions by exchanging information with their neighbors only. To deal with Lipschitz continuous and strongly convex cost functions, a distributed online convex optimization algorithm that achieves sublinear individual regret for every node is developed. The algorithm is built on the push-sum scheme, which removes the requirement of doubly stochastic weight matrices, and on a one-point gradient estimator, which requires the loss function value at only one point per iteration instead of gradient information. The expected regret of the proposed algorithm scales as O(T^{2/3} ln^{2/3}(T)), where T is the number of iterations. To validate the performance of the developed algorithm, we provide a simulation on a common numerical example.
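
To make the push-sum idea concrete, here is a minimal sketch of one mixing step: each node propagates both its value and a scalar weight through a column-stochastic (not necessarily doubly stochastic) matrix and uses their ratio as a de-biased estimate. The names and the synchronous, full-information form are illustrative assumptions.

```python
import numpy as np

def push_sum_step(x_vals, w_vals, A, i):
    """One push-sum mixing step at node i.

    x_vals : list of all nodes' value vectors at the current round
    w_vals : list of the accompanying push-sum scalar weights (initially 1.0)
    A      : column-stochastic mixing matrix; doubly stochastic weights are not needed
    i      : index of the node performing the update
    """
    n = len(x_vals)
    x_new = sum(A[i, j] * x_vals[j] for j in range(n))
    w_new = sum(A[i, j] * w_vals[j] for j in range(n))
    z_new = x_new / w_new       # de-biased estimate used in the decision update
    return x_new, w_new, z_new
```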

6.
IEEE Trans Neural Netw Learn Syst ; 32(6): 2344-2357, 2021 Jun.
Article in English | MEDLINE | ID: mdl-32614775

ABSTRACT

This article considers the problem of stochastic strongly convex optimization over a network of multiple interacting nodes. The optimization is subject to a global inequality constraint, with the restriction that nodes only have access to stochastic gradients of their objective functions. We propose an efficient distributed non-primal-dual algorithm by incorporating the inequality constraint into the objective via a smoothing technique. We show that the proposed algorithm achieves the optimal O(1/T) convergence rate in mean square distance from the optimal solution, where T is the total number of iterations. In particular, we establish a high-probability bound for the proposed algorithm, showing that with probability at least 1-δ, the algorithm converges at a rate of O(ln(ln(T)/δ)/T). Finally, we provide numerical experiments to demonstrate the efficacy of the proposed algorithm.
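
As an illustration of how an inequality constraint g(x) <= 0 can be folded into the objective by smoothing (one common construction; not necessarily the exact smoothing used in the paper), the sketch below replaces the exact penalty max(0, g(x)) with a softplus surrogate whose gradient is available in closed form. All names and the penalty weight are hypothetical.

```python
import numpy as np

def smoothed_penalty(g_val, mu):
    """Smooth surrogate for max(0, g); it approaches the exact penalty as mu -> 0."""
    return mu * np.log1p(np.exp(g_val / mu))

def penalized_stochastic_grad(grad_f, g_val, grad_g, mu, rho):
    """Stochastic gradient of f(x) + rho * smoothed_penalty(g(x)), so a
    non-primal-dual method can handle the global constraint g(x) <= 0."""
    sigma = 1.0 / (1.0 + np.exp(-g_val / mu))   # derivative of the surrogate w.r.t. g
    return grad_f + rho * sigma * grad_g
```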

7.
IEEE Trans Cybern ; 48(11): 3045-3055, 2018 Nov.
Article in English | MEDLINE | ID: mdl-28991757

ABSTRACT

In this paper, we consider the problem of solving distributed constrained optimization over a multiagent network consisting of multiple interacting nodes in an online setting, where the objective functions of the nodes are time-varying and the constraint set is characterized by an inequality. By introducing a regularized convex-concave function, we present a consensus-based adaptive primal-dual subgradient algorithm that removes the need to know the total number of iterations in advance. We show that the proposed algorithm attains sublinear bounds on the regret and on the violation of constraints; in addition, we show an improved regret bound when the objective functions are strongly convex. The proposed algorithm allows a novel tradeoff between the regret and the violation of constraints. Finally, a numerical example is provided to illustrate the effectiveness of the algorithm.
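
A minimal sketch of a regularized primal-dual subgradient update is given below, assuming the regularized convex-concave function has the common form L(x, λ) = f_t(x) + λ g(x) − (θ/2)λ²; the consensus (mixing) step used by the paper's algorithm is omitted, and the names, step size, and regularization weight are hypothetical.

```python
import numpy as np

def primal_dual_step(x, lam, subgrad_f, g_val, subgrad_g, step, theta):
    """One regularized primal-dual subgradient update.

    The term -(theta/2)*lam**2 keeps the dual variable bounded and governs the
    tradeoff between regret and constraint violation.
    """
    x_new = x - step * (subgrad_f + lam * subgrad_g)        # primal descent on L
    lam_new = max(0.0, lam + step * (g_val - theta * lam))  # dual ascent, projected to lam >= 0
    return x_new, lam_new
```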

8.
IEEE Trans Neural Netw Learn Syst ; 27(2): 284-94, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26469757

ABSTRACT

This paper studies the problem of minimizing a sum of (possibly nonsmooth) convex functions corresponding to multiple interacting nodes, subject to a convex state constraint set. A time-varying directed network is considered. Two types of computational constraints are investigated: one where gradient information is not available, and one where the projection steps can only be computed approximately. We devise a distributed zeroth-order method whose implementation requires only function evaluations and approximate projections. In particular, we show that the proposed method generates expected function value sequences that converge to the optimal value, provided that the projection errors decrease at appropriate rates.
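
Purely as a sketch of the two computational restrictions described above, the iteration below builds a gradient estimate from function evaluations only and then calls a user-supplied inexact projection routine; project_approx and the tolerance schedule are hypothetical placeholders, not the paper's construction.

```python
import numpy as np

def zeroth_order_step(f, x, delta, step, project_approx, tol,
                      rng=np.random.default_rng()):
    """Zeroth-order update with an approximate projection.

    project_approx(y, tol) is assumed to return a point within distance tol of
    the exact projection of y onto the constraint set (e.g. an iterative
    projection solver stopped early).  The convergence guarantee needs the
    tolerances to decrease at an appropriate rate, e.g. tol_k ~ 1/k**2.
    """
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)
    grad_est = (x.size / (2 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u
    return project_approx(x - step * grad_est, tol)
```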

9.
IEEE Trans Cybern ; 46(9): 2109-18, 2016 Sep.
Article in English | MEDLINE | ID: mdl-26285232

ABSTRACT

In this paper, we study the distributed constrained optimization problem in which the objective function is the sum of local convex cost functions of distributed nodes in a network, subject to a global inequality constraint. To solve this problem, we propose a consensus-based distributed regularized primal-dual subgradient method. In contrast to existing methods, most of which require projecting the estimates onto the constraint set at every iteration, only one projection, at the last iteration, is needed for the proposed method. We establish convergence by showing that the method achieves an O(K^{-1/4}) convergence rate for general distributed constrained optimization, where K is the iteration counter. Finally, a numerical example is provided to validate the convergence of the proposed method.
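
To illustrate the single-projection feature only (not the full regularized primal-dual, consensus-based method), the sketch below runs unprojected updates and projects just once, on the averaged iterate at the end; all names and the plain subgradient update are assumptions.

```python
import numpy as np

def run_with_single_projection(x0, subgrad, step, num_iters, project):
    """Unprojected iterates inside the loop; one projection at the last iteration."""
    x = x0.copy()
    running_sum = np.zeros_like(x0)
    for k in range(1, num_iters + 1):
        x = x - step(k) * subgrad(x)          # no projection inside the loop
        running_sum += x
    return project(running_sum / num_iters)   # the only projection, performed once
```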

10.
IEEE Trans Neural Netw Learn Syst ; 26(6): 1342-7, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25099738

ABSTRACT

In this brief, we consider multiagent optimization over a network where multiple agents try to minimize a sum of nonsmooth but Lipschitz continuous functions, subject to a convex state constraint set. The underlying network topology is modeled as time-varying. We propose a randomized derivative-free method in which, at each update, random gradient-free oracles are utilized instead of subgradients (SGs). In contrast to existing work, we do not require that agents be able to compute the SGs of their objective functions. We establish convergence of the method to an approximate solution of the multiagent optimization problem, to within an error level that depends on the smoothing parameter and the Lipschitz constant of each agent's objective function. Finally, a numerical example is provided to demonstrate the effectiveness of the method.
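
For concreteness, one standard construction of a random gradient-free oracle (in the spirit of Gaussian-smoothing schemes) is sketched below; the exact oracle, smoothing parameter, and names used in the brief may differ.

```python
import numpy as np

def gradient_free_oracle(f, x, mu, rng=np.random.default_rng()):
    """Two function values along a random Gaussian direction replace a subgradient;
    the resulting error level is governed by the smoothing parameter mu and the
    Lipschitz constant of f."""
    u = rng.standard_normal(x.shape)          # Gaussian direction
    return ((f(x + mu * u) - f(x)) / mu) * u
```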
