Results 1 - 4 of 4
1.
Sensors (Basel) ; 23(20)2023 Oct 22.
Article in English | MEDLINE | ID: mdl-37896724

ABSTRACT

This paper proposes an adaptive distributed hybrid control approach to the output containment tracking problem for heterogeneous wide-area networks with intermittent communication. First, a clustered network is modeled for the wide-area scenario, and an aperiodic intermittent communication mechanism is imposed on the clusters so that clusters communicate only through their leaders. Second, to remove the assumption that each follower must know the leaders' system matrix and to achieve output containment, a distributed adaptive hybrid control strategy is proposed for each agent based on an internal model and an adaptive estimation mechanism. Third, sufficient conditions based on the average dwell time are provided for achieving output containment via a Lyapunov function method, and the exponential stability of the closed-loop system is established. Finally, simulation results demonstrate the effectiveness of the proposed adaptive distributed intermittent control strategy.
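As a rough intuition for the intermittent-communication aspect described above, the following Python sketch drives single-integrator followers toward the convex hull of fixed leader outputs, with neighbor information available only during aperiodic communication windows. The topology, gains, schedule, and dynamics are illustrative assumptions, not the paper's clustered wide-area model or hybrid controller.

```python
import numpy as np

# Minimal illustration (not the paper's controller): single-integrator followers
# are pulled toward the convex hull of static leader outputs, but only receive
# neighbor information during aperiodic "communication-on" windows.

rng = np.random.default_rng(0)
leaders = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])   # fixed leader outputs
followers = rng.uniform(-5.0, 5.0, size=(5, 2))            # follower states

dt, T = 0.01, 20.0

def comm_on(t):
    # Aperiodic intermittent communication: active for 60% of each cycle,
    # with a slowly varying cycle length (purely illustrative schedule).
    cycle = 1.0 + 0.5 * np.sin(0.3 * t)
    return (t % cycle) < 0.6 * cycle

for k in range(int(T / dt)):
    t = k * dt
    if not comm_on(t):
        continue  # no neighbor information available: hold the current state
    centroid_L = leaders.mean(axis=0)    # a point inside the leaders' convex hull
    centroid_F = followers.mean(axis=0)
    # Consensus-like update pulling each follower toward the leaders' centroid
    # and toward the follower centroid.
    followers += dt * (2.0 * (centroid_L - followers) + 1.0 * (centroid_F - followers))

print("final follower states:\n", followers.round(3))
print("leaders' centroid:", leaders.mean(axis=0))
```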

2.
Neural Netw ; 167: 588-600, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37703669

ABSTRACT

This paper considers the optimal control of an affine nonlinear system with unknown dynamics. A new identifier-critic (IC) framework is proposed to solve the optimal control problem. First, a neural network (NN) identifier is built to estimate the unknown system dynamics, and a critic NN is constructed to solve the Hamilton-Jacobi-Bellman equation associated with the optimal control problem. A dynamic regressor extension and mixing (DREM) technique is applied to design the weight update laws, relaxing the persistence-of-excitation conditions for the two classes of neural networks. The parameter estimation of the update laws and the stability of the closed-loop system under the adaptive optimal control are analyzed using a Lyapunov function method. Numerical simulation results demonstrate the effectiveness of the proposed IC-learning-based optimal control algorithm for the affine nonlinear system.


Subject(s)
Neural Networks, Computer , Nonlinear Dynamics , Computer Simulation , Algorithms , Learning
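The weight update laws in this abstract rely on dynamic regressor extension and mixing (DREM). The sketch below illustrates the DREM idea on a generic linear-in-parameters regression y(t) = phi(t)^T theta rather than on the paper's identifier and critic networks: a delayed copy of the regressor extends the regression to a square system, mixing with the adjugate decouples it into scalar regressions, and each parameter is then estimated with a simple gradient law. The regressor, delay, and gain are assumptions made for illustration.

```python
import numpy as np

# Sketch of dynamic regressor extension and mixing (DREM) for y(t) = phi(t)^T theta:
# a delayed copy of the regression gives a square system Phi * theta = Y; mixing
# with the adjugate decouples it, so each parameter obeys its own scalar regression
# and can be estimated with a simple gradient law.

theta_true = np.array([1.5, -0.7])                       # "unknown" parameters
phi = lambda t: np.array([np.sin(t), np.cos(2.0 * t)])   # regressor
y = lambda t: phi(t) @ theta_true                         # measured output

dt, gain, delay = 0.01, 10.0, 0.3
theta_hat = np.zeros(2)

for k in range(int(20.0 / dt)):
    t = k * dt + delay
    Phi = np.vstack([phi(t), phi(t - delay)])   # extension: original + delayed copy
    Y = np.array([y(t), y(t - delay)])
    Delta = np.linalg.det(Phi)                  # mixing: adj(Phi) @ Phi = Delta * I
    if abs(Delta) < 1e-8:
        continue                                # skip poorly conditioned instants
    Y_mixed = Delta * np.linalg.solve(Phi, Y)   # adj(Phi) @ Y = Delta * theta
    # Decoupled scalar gradient update for each parameter estimate.
    theta_hat += dt * gain * Delta * (Y_mixed - Delta * theta_hat)

print("estimated:", theta_hat.round(3), " true:", theta_true)
```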
3.
Neural Netw ; 164: 105-114, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37148606

ABSTRACT

In this paper, a novel adaptive critic control method is designed to solve the optimal H∞ tracking control problem for continuous nonlinear systems with nonzero equilibrium, based on adaptive dynamic programming (ADP). To guarantee a finite cost, traditional methods generally assume that the controlled system has a zero equilibrium point, which is not the case in practical systems. To overcome this obstacle and achieve H∞ optimal tracking control, this paper proposes a novel cost function defined in terms of the disturbance, the tracking error, and the derivative of the tracking error. Based on the designed cost function, the H∞ control problem is formulated as a two-player zero-sum differential game, and a policy iteration (PI) algorithm is proposed to solve the corresponding Hamilton-Jacobi-Isaacs (HJI) equation. To obtain an online solution of the HJI equation, a single-critic neural network structure based on the PI algorithm is established to learn the optimal control policy and the worst-case disturbance law. Notably, the proposed adaptive critic control method simplifies the controller design process when the equilibrium of the system is not zero. Finally, simulations are conducted to evaluate the tracking performance of the proposed control method.


Subject(s)
Neural Networks, Computer , Nonlinear Dynamics , Feedback , Algorithms , Learning
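To make the zero-sum-game and policy-iteration structure in this abstract concrete, here is a toy Python iteration for a scalar linear plant dx = a*x + b*u + k*d with an H∞-type cost; it alternates policy evaluation (a scalar Lyapunov equation for V(x) = p*x^2) with policy improvement for the control and the worst-case disturbance. This is a stand-in under assumed coefficients, not the paper's nonlinear tracking formulation or its single-critic network.

```python
# Toy illustration of the two-player zero-sum game / policy-iteration (PI)
# structure behind H-infinity design, on the scalar plant dx = a*x + b*u + k*d
# with cost  integral( q*x^2 + r*u^2 - gamma^2*d^2 ) dt.  All numbers are assumed.

a, b, k = -1.0, 1.0, 0.5           # assumed plant coefficients
q, r, gamma = 1.0, 1.0, 2.0        # assumed weights and attenuation level

Ku, Kd = 1.0, 0.0                  # initial admissible control / disturbance gains
for i in range(20):
    # Policy evaluation: solve the scalar Lyapunov equation for V(x) = p*x^2
    # under u = -Ku*x (controller) and d = Kd*x (current worst-case disturbance).
    a_cl = a - b * Ku + k * Kd
    p = -(q + r * Ku**2 - gamma**2 * Kd**2) / (2.0 * a_cl)
    # Policy improvement: the minimizing player updates u, the maximizing player d.
    Ku, Kd = b * p / r, k * p / gamma**2
    print(f"iter {i}: p = {p:.6f}, Ku = {Ku:.6f}, Kd = {Kd:.6f}")

# p converges to the stabilizing root of 2*a*p + q - (b**2/r - k**2/gamma**2)*p**2 = 0.
```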
4.
IEEE Trans Neural Netw Learn Syst ; 33(8): 4043-4055, 2022 Aug.
Article in English | MEDLINE | ID: mdl-33587710

ABSTRACT

In this article, a novel reinforcement learning (RL) method is developed to solve the optimal tracking control problem of unknown nonlinear multiagent systems (MASs). Unlike representative RL-based optimal control algorithms, the proposed internal reinforce Q-learning (IrQL) method introduces an internal reinforce reward (IRR) function for each agent to improve its ability to gather long-term information from the local environment. In the IrQL design, a Q-function is defined on the basis of the IRR function, and an iterative IrQL algorithm is developed to learn the optimal distributed control scheme, followed by a rigorous convergence and stability analysis. Furthermore, a distributed online learning framework, namely reinforce-critic-actor neural networks, is established to implement the proposed approach; the three networks estimate the IRR function, the Q-function, and the optimal control scheme, respectively. The procedure is data-driven and requires no knowledge of the system dynamics. Finally, simulations and comparisons with a classical method demonstrate the effectiveness of the proposed tracking control method.
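As a loose, single-agent analogue of the reinforce-critic-actor idea in the abstract above, the tabular Python sketch below first learns an internal reward R(s,a) as a discounted aggregate of the external reward, then learns a Q-function against R instead of the raw reward, and reads off a greedy actor from Q. The toy MDP, discount factors, and learning rate are assumptions and do not reproduce the paper's multiagent IrQL algorithm.

```python
import numpy as np

# Illustrative single-agent, tabular analogue of the reinforce-critic-actor idea:
# an internal reward R(s, a) is learned as a discounted aggregate of the external
# reward, Q(s, a) is then learned against R rather than the raw reward, and the
# actor is the greedy policy read off from Q.  The MDP and gains are assumed.

rng = np.random.default_rng(2)
nS, nA, gamma_r, gamma_q, alpha = 4, 2, 0.8, 0.95, 0.1
P = rng.dirichlet(np.ones(nS), size=(nS, nA))     # random transition kernel P[s, a, s']
ext_reward = rng.uniform(0.0, 1.0, size=(nS, nA)) # external (local) reward

R = np.zeros((nS, nA))   # "reinforce" table: internal reinforce reward
Q = np.zeros((nS, nA))   # "critic" table: Q-function built on R
s = 0
for step in range(20000):
    # "Actor": epsilon-greedy policy derived from Q.
    a = rng.integers(nA) if rng.random() < 0.1 else int(np.argmax(Q[s]))
    s_next = rng.choice(nS, p=P[s, a])
    a_next = int(np.argmax(Q[s_next]))
    # Internal reinforce reward: discounted aggregation of the external reward.
    R[s, a] += alpha * (ext_reward[s, a] + gamma_r * R[s_next, a_next] - R[s, a])
    # Q-learning step driven by the internal reward instead of the raw one.
    Q[s, a] += alpha * (R[s, a] + gamma_q * Q[s_next].max() - Q[s, a])
    s = s_next

print("greedy policy per state:", np.argmax(Q, axis=1))
```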
