Results 1 - 2 of 2
1.
Sensors (Basel); 21(13), 2021 Jun 23.
Article in English | MEDLINE | ID: mdl-34201820

ABSTRACT

Model predictive control (MPC) is a multi-objective control technique that can handle system constraints. However, the performance of an MPC controller relies heavily on proper prioritization weights for each objective, which highlights the need for a precise weight-tuning technique. In this paper, we propose an analytical tuning technique that matches the MPC controller's performance with that of a linear quadratic regulator (LQR). The proposed methodology derives the transformation of an LQR weighting matrix with a fixed weighting factor using the discrete algebraic Riccati equation (DARE) and designs an MPC controller based on the discrete-time linear quadratic tracking (LQT) problem in the presence of constraints. The methodology ensures equivalent optimal performance between the unconstrained MPC and LQR controllers and provides a sub-optimal solution while constraints are active during transient operations. The resulting MPC behaves as the discrete-time LQR when an appropriate weighting matrix is selected in the MPC control problem, and it ensures asymptotic stability of the system. The effectiveness of the proposed technique is investigated in a novel vehicle collision avoidance application, formulated as linear inequality constraints within the MPC. The simulation results confirm the ability of the proposed MPC technique to generate a safe, feasible, and collision-free path while respecting the input, state, and collision avoidance constraints.
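
The DARE-to-LQR link this abstract relies on can be sketched in a few lines of Python. The sketch below is not the authors' implementation: the double-integrator model and the weights Q and R are illustrative assumptions, and using the Riccati solution P as the MPC terminal weight is the standard construction under which the unconstrained finite-horizon MPC reproduces the infinite-horizon discrete-time LQR.

import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative double-integrator model (assumed, not from the article).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])

# LQR stage weights with a fixed weighting factor (assumed values).
Q = np.diag([1.0, 0.1])
R = np.array([[0.01]])

# Solve the discrete algebraic Riccati equation (DARE).
P = solve_discrete_are(A, B, Q, R)

# Discrete-time LQR gain: u = -K x.
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Using P as the terminal weight in the finite-horizon MPC cost
#   sum_k (x_k' Q x_k + u_k' R u_k) + x_N' P x_N
# makes the unconstrained MPC match the infinite-horizon LQR, so the
# constrained MPC behaves like the discrete-time LQR whenever no
# constraint (e.g. a collision avoidance inequality) is active.
print("LQR gain K:", K)
print("Closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))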

2.
Front Robot AI; 8: 638849, 2021.
Article in English | MEDLINE | ID: mdl-34017860

ABSTRACT

This paper adds to ongoing efforts to provide more autonomy to space robots and introduces programming by demonstration, or imitation learning, for trajectory planning of manipulators on free-floating spacecraft. A redundant 7-DoF robotic arm is mounted on a small spacecraft dedicated to debris removal, on-orbit servicing and assembly, and autonomous rendezvous and docking. The motion of the robot (manipulator) arm induces reaction forces on the spacecraft, changing its attitude and prompting the Attitude Determination and Control System (ADCS) to take large corrective actions. The method introduced here finds the trajectory that minimizes attitude changes, thereby reducing the load on the ADCS. One of the critical elements in spacecraft trajectory planning and control is power consumption. The approach introduced in this work carries out trajectory learning offline by collecting data from demonstrations and encoding it as a probabilistic distribution over trajectories. The learned distribution can be used for planning in previously unseen situations by conditioning it on the new situation, so almost no computational power is required after deployment. Sampling from the conditioned distribution provides several possible trajectories between the same start and goal states. To determine the trajectory that minimizes attitude changes, a cost term is defined and the trajectory that minimizes this cost is considered the optimal one.
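
The offline encode-condition-sample-rank loop this abstract describes can be illustrated with a minimal Gaussian trajectory model. This is an assumed, ProMP-like sketch rather than the paper's implementation: the synthetic single-joint demonstrations, the waypoint indices used for conditioning, and the velocity-based stand-in for the attitude-disturbance cost are all hypothetical; it only shows how demonstrations are encoded as a probabilistic distribution, conditioned on a new start and goal, sampled, and ranked by a cost term.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical demonstration data (not from the paper):
# each demonstration is T waypoints of a single joint, flattened to a vector.
T = 20
n_demos = 50
t = np.linspace(0.0, 1.0, T)
demos = np.stack([
    np.sin(np.pi * t) * rng.uniform(0.5, 1.5) + 0.05 * rng.standard_normal(T)
    for _ in range(n_demos)
])

# Encode the demonstrations as a Gaussian distribution over trajectories.
mu = demos.mean(axis=0)
Sigma = np.cov(demos, rowvar=False) + 1e-6 * np.eye(T)

# Condition the distribution on a new start and goal waypoint.
obs_idx = np.array([0, T - 1])     # indices of the observed waypoints
obs_val = np.array([0.0, 0.3])     # new start / goal values
free_idx = np.setdiff1d(np.arange(T), obs_idx)

S_oo = Sigma[np.ix_(obs_idx, obs_idx)]
S_fo = Sigma[np.ix_(free_idx, obs_idx)]
S_ff = Sigma[np.ix_(free_idx, free_idx)]

gain = S_fo @ np.linalg.inv(S_oo)
mu_cond = mu[free_idx] + gain @ (obs_val - mu[obs_idx])
Sigma_cond = S_ff - gain @ S_fo.T
Sigma_cond = 0.5 * (Sigma_cond + Sigma_cond.T) + 1e-9 * np.eye(free_idx.size)

def attitude_cost(traj):
    # Stand-in cost: penalize joint velocity magnitude, a rough proxy for
    # the reaction torques transferred to the free-floating base.
    return np.sum(np.diff(traj) ** 2)

# Sample candidate trajectories from the conditioned distribution and
# keep the one with the smallest cost.
candidates = rng.multivariate_normal(mu_cond, Sigma_cond, size=100)
full = np.empty((100, T))
full[:, obs_idx] = obs_val
full[:, free_idx] = candidates
best = full[np.argmin([attitude_cost(tr) for tr in full])]
print("Best-candidate cost:", attitude_cost(best))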
