1.
ISA Trans; 144: 228-244, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38030447

ABSTRACT

In this paper, a new off-policy two-dimensional (2D) reinforcement learning approach is proposed to deal with the optimal tracking control (OTC) problem of batch processes with network-induced dropout and disturbances. A dropout 2D augmented Smith predictor is first devised to estimate the present extended state using past data along the time and batch directions. The dropout 2D value function and Q-function are then defined, and their relation is analyzed to meet the optimal performance. On this basis, the dropout 2D Bellman equation is derived from the Q-function. To address the dropout 2D OTC problem of batch processes, two algorithms are presented: an off-line 2D policy iteration algorithm and an off-policy 2D Q-learning algorithm. The latter is developed using only the input and the estimated state, without the underlying system information. The unbiasedness of the solutions and the convergence of the algorithms are analyzed separately. The effectiveness of the proposed methods is finally validated on a simulated filling-process case.
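
The core model-free ingredient above is Q-learning policy iteration driven only by measured inputs and states. The sketch below illustrates that ingredient in a plain one-dimensional (time-only) linear-quadratic setting, not the paper's 2D batch/time formulation with dropout and Smith prediction: data are collected once under an exploratory behavior policy, the Q-function kernel H is estimated by least squares from the Bellman equation, and the feedback gain is improved from H. The matrices A, B, Q, R, the exploration noise, and the quadratic feature parameterization are illustrative assumptions.

import numpy as np

# Minimal off-policy Q-learning sketch for a linear-quadratic problem (time
# dimension only; the paper's method is 2D and handles dropout). A, B, Q, R
# and the exploration noise are illustrative assumptions.
np.random.seed(0)
A = np.array([[0.95, 0.10], [0.00, 0.90]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])
n, m = 2, 1

def phi(z):
    # Quadratic features: upper-triangular entries of z z^T, so q(z) = h . phi(z).
    return np.outer(z, z)[np.triu_indices(len(z))]

def collect_data(K_behavior, steps=200):
    # Behavior policy u = -K_behavior x + exploration noise generates the data set once.
    data, x = [], np.array([1.0, -1.0])
    for _ in range(steps):
        u = -K_behavior @ x + 0.2 * np.random.randn(m)
        x_next = A @ x + B @ u
        cost = x @ Q @ x + u @ R @ u
        data.append((x, u, cost, x_next))
        x = x_next
    return data

def policy_evaluation(K, data):
    # Least-squares solution of the Q-function Bellman equation for target policy u = -K x.
    Phi, rhs = [], []
    for x, u, cost, xn in data:
        z = np.concatenate([x, u])
        zn = np.concatenate([xn, -K @ xn])       # target policy applied at the next state
        Phi.append(phi(z) - phi(zn))
        rhs.append(cost)
    h = np.linalg.lstsq(np.array(Phi), np.array(rhs), rcond=None)[0]
    H = np.zeros((n + m, n + m))
    H[np.triu_indices(n + m)] = h                # rebuild the symmetric kernel H
    return (H + H.T) / 2

K = np.zeros((m, n))                             # initial stabilizing gain (A is stable here)
data = collect_data(K)                           # off-policy: the same data serve every iteration
for _ in range(10):
    H = policy_evaluation(K, data)
    K = np.linalg.solve(H[n:, n:], H[n:, :n])    # policy improvement: K = H_uu^{-1} H_ux
print("learned feedback gain:", K)               # should approach the LQ-optimal gain

In the paper's setting, the next-state measurement would instead come from the dropout 2D Smith predictor's estimate, and the value function is defined over both the time and batch directions.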

3.
Sci Rep; 12(1): 7548, 2022 May 9.
Article in English | MEDLINE | ID: mdl-35534491

ABSTRACT

The fluidized catalytic cracking unit (FCCU) main fractionator is a complex multivariable, nonlinear, and uncertain system. Modeling it is difficult, and ordinary modeling methods struggle to estimate its dynamic characteristics accurately. In this work, the gray wolf optimizer with bubble-net predation (GWO_BP) is proposed to solve this complex optimization problem. GWO_BP effectively balances exploration and exploitation, finding the optimum faster and with higher accuracy. In GWO, the head wolf has the best fitness value. GWO_BP replaces the head wolf's encircling-hunting scheme with the whale's spiral bubble-net predation, which enhances the global search ability and speeds up convergence. In addition, Lévy flight is applied to the wolf search strategy for updating the pack's positions, overcoming the tendency to fall into local optima. Experiments with the basic GWO, particle swarm optimization (PSO), and GWO_BP are carried out on 12 typical test functions. The experimental results show that GWO_BP has the best optimization accuracy. GWO_BP is then used to solve the parameter estimation problem of the FCCU main fractionator model. The simulation results show that the FCCU main fractionator model established by the proposed modeling method accurately reflects the dynamic characteristics of the real process.


Subjects
Predatory Behavior, Wolves, Algorithms, Animals, Computer Simulation
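
As a rough illustration of the algorithmic recipe described above, the sketch below combines the standard grey wolf position update with a whale-style spiral (bubble-net) move toward the best wolf and a Lévy-flight perturbation, and minimizes a simple sphere function. The specific operators, coefficients, and the Mantegna Lévy-step generator are assumptions that follow common GWO/WOA formulations rather than the authors' exact implementation.

import numpy as np

def levy_step(dim, beta=1.5):
    # Mantegna's algorithm for Lévy-distributed steps.
    from math import gamma, sin, pi
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.randn(dim) * sigma
    v = np.random.randn(dim)
    return u / np.abs(v) ** (1 / beta)

def gwo_bubble_net(f, dim=10, wolves=30, iters=300, lb=-10.0, ub=10.0):
    X = np.random.uniform(lb, ub, (wolves, dim))
    for t in range(iters):
        fit = np.array([f(x) for x in X])
        order = np.argsort(fit)
        alpha, beta_w, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2 - 2 * t / iters                      # linearly decreasing coefficient
        for i in range(wolves):
            # Spiral (bubble-net) move toward the alpha wolf replaces encircling it.
            d_alpha = np.abs(alpha - X[i])
            l = np.random.uniform(-1, 1, dim)
            x1 = d_alpha * np.exp(l) * np.cos(2 * np.pi * l) + alpha
            # Standard GWO moves toward the beta and delta wolves.
            x2 = beta_w - (2 * a * np.random.rand(dim) - a) * np.abs(
                2 * np.random.rand(dim) * beta_w - X[i])
            x3 = delta - (2 * a * np.random.rand(dim) - a) * np.abs(
                2 * np.random.rand(dim) * delta - X[i])
            X[i] = (x1 + x2 + x3) / 3 + 0.01 * levy_step(dim)   # Lévy perturbation
            X[i] = np.clip(X[i], lb, ub)
    fit = np.array([f(x) for x in X])
    return X[np.argmin(fit)], fit.min()

# Example run: minimize the sphere function.
best_x, best_f = gwo_bubble_net(lambda x: float(np.sum(x ** 2)))
print("best objective:", best_f)

For the application in the paper, the objective would be a model-fit error over the FCCU main fractionator parameters rather than the sphere function used here.
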
4.
ISA Trans; 125: 10-21, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34130858

ABSTRACT

Because previous control methods usually rely too heavily on models of the batch process and struggle with practical batch processes whose dynamics are unknown, a novel data-driven two-dimensional (2D) off-policy Q-learning approach for optimal tracking control (OTC) is proposed to obtain a model-free control law for the batch process. First, an extended state-space equation composed of the state and the output error is established to ensure the tracking performance of the designed controller. Second, a behavior policy for generating data and a target policy for optimization and learning are introduced based on this extended system. Then, a Bellman equation independent of model parameters is derived by analyzing the relation between the 2D value function and the 2D Q-function. Only measured data along the batch and time directions of the batch process are needed to carry out the policy iteration, which solves the optimal control problem despite the lack of system dynamic information. The unbiasedness and convergence of the designed 2D off-policy Q-learning algorithm are proved. Finally, a simulation case of an injection molding process shows that the control and tracking performance gradually improve as the number of batches increases.
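
A minimal sketch of the first step described above, the extended state that stacks the state increment with the output tracking error, is given below in a single-dimension (time-only) form rather than the paper's 2D batch/time form. The matrices A, B, C, the weights, and the reference are assumptions, and the gain here is computed from the model with a Riccati recursion purely to check the closed loop; in the paper the corresponding gain is learned from measured data by off-policy 2D Q-learning.

import numpy as np

# Single-dimension (time-only) sketch of the extended state used for tracking:
# z_k stacks the state increment with the output tracking error, so that a static
# feedback on z_k gives integral-like tracking action. A, B, C, the weights and the
# reference are illustrative assumptions.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.5]])
C = np.array([[1.0, 0.0]])
n, m, p = 2, 1, 1

# Extended dynamics for z_k = [dx_k; e_k], with dx_k = x_k - x_{k-1},
# du_k = u_k - u_{k-1}, e_k = r - y_k and a constant reference r:
#   dx_{k+1} = A dx_k + B du_k,   e_{k+1} = e_k - C A dx_k - C B du_k
A_bar = np.block([[A, np.zeros((n, p))], [-C @ A, np.eye(p)]])
B_bar = np.vstack([B, -C @ B])

# Model-based LQ baseline on the extended state (fixed-point Riccati recursion),
# standing in for the data-driven gain learned in the paper.
Qz, Ru = np.eye(n + p), 0.1 * np.eye(m)
P = np.eye(n + p)
for _ in range(500):
    K = np.linalg.solve(Ru + B_bar.T @ P @ B_bar, B_bar.T @ P @ A_bar)
    P = Qz + A_bar.T @ P @ (A_bar - B_bar @ K)

# Closed-loop check: du_k = -K z_k should drive the output toward the reference.
r = 1.0
x, x_prev, u, ys = np.zeros(n), np.zeros(n), np.zeros(m), []
for _ in range(60):
    y = (C @ x).item()
    z = np.concatenate([x - x_prev, [r - y]])
    u = u + (-K @ z)                       # incremental control on the extended state
    x_prev, x = x, A @ x + B @ u
    ys.append(y)
print("last outputs:", np.round(ys[-3:], 3))   # expected to settle near r = 1.0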

5.
ISA Trans; 102: 23-32, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32139034

ABSTRACT

To suppress more effectively the influence of lumped system disturbances, such as external disturbances and internal disturbances caused by model mismatch and coupling between variables, a multivariable non-minimum state space predictive control method based on a disturbance observer (MNMSSPC-D) is proposed in this paper. Most existing methods based on feedback control and feedforward compensation cannot guarantee optimal output. Unlike these methods, the proposed method extends the estimated disturbance and the output variables into the state variables, forming a multivariable non-minimum state space (MNMSS) prediction model, and then uses the rolling-optimization principle of predictive control to design the controller based on this prediction model. The main advantages of the proposed method are that the state of the MNMSS model is guaranteed to be available and that the designed controller achieves optimal control performance and disturbance rejection. The proposed MNMSSPC-D method is verified by simulation on a heavy oil fractionator.
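
The sketch below illustrates the general idea in a reduced form: the lumped disturbance is treated as an extra state, estimated by an observer, and an unconstrained receding-horizon controller is designed on the augmented model in incremental (delta-u) form. A simple SISO state-space model stands in for the paper's multivariable non-minimum state space formulation, and all matrices, horizons, weights, and the constant-disturbance assumption are illustrative.

import numpy as np

# Reduced sketch: estimate a lumped constant disturbance with an observer and apply
# unconstrained receding-horizon control to the augmented model in delta-u form.
A = np.array([[0.9, 0.1], [0.0, 0.7]])
B = np.array([[0.0], [0.4]])
C = np.array([[1.0, 0.0]])
E = np.array([[0.0], [1.0]])          # assumed channel through which the disturbance enters
n, m = 2, 1

# Observer model with a constant-disturbance state: [x; d].
Aa = np.block([[A, E], [np.zeros((1, n)), np.eye(1)]])
Ba = np.vstack([B, np.zeros((1, m))])
Ca = np.hstack([C, np.zeros((1, 1))])

# Observer gain from a Kalman-style Riccati recursion (assumed noise weights).
P = np.eye(n + 1)
for _ in range(500):
    L = Aa @ P @ Ca.T @ np.linalg.inv(Ca @ P @ Ca.T + 1.0)
    P = Aa @ P @ Aa.T - L @ Ca @ P @ Aa.T + 0.1 * np.eye(n + 1)

# Prediction model in delta-u form: state w = [x_hat; d_hat; previous u].
Az = np.block([[Aa, Ba], [np.zeros((m, n + 1)), np.eye(m)]])
Bz = np.vstack([Ba, np.eye(m)])
Cz = np.hstack([Ca, np.zeros((1, m))])
N = 30
F = np.vstack([Cz @ np.linalg.matrix_power(Az, i + 1) for i in range(N)])
Phi = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        Phi[i, j] = (Cz @ np.linalg.matrix_power(Az, i - j) @ Bz).item()
K_mpc = np.linalg.solve(Phi.T @ Phi + 1.0 * np.eye(N), Phi.T)[0]   # first optimal move

# Closed loop against a plant with an unknown constant input disturbance.
r, d_true = 1.0, -0.2
x = np.zeros((n, 1)); xa_hat = np.zeros((n + 1, 1)); u = np.zeros((m, 1)); ys = []
for _ in range(150):
    y = C @ x
    w = np.vstack([xa_hat, u])                    # [x_hat; d_hat; previous u]
    du = (K_mpc @ (r - F @ w)).item()             # receding-horizon input increment
    u = u + du
    x = A @ x + B @ u + E * d_true                # true plant with the disturbance
    xa_hat = Aa @ xa_hat + Ba @ u + L @ (y - Ca @ xa_hat)
    ys.append(y.item())
print("last outputs:", np.round(ys[-3:], 3))      # expected to approach r = 1.0 despite d_true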

6.
Cell Biosci; 9: 34, 2019.
Article in English | MEDLINE | ID: mdl-31044068

ABSTRACT

BACKGROUND: Previous studies have shown that in myogenic precursors, the homeoprotein Msx1 and its protein partners, histone methyltransferases and repressive histone marks, tend to be enriched on target myogenic regulatory genes at the nuclear periphery. This nuclear periphery localization of Msx1 and its protein partners is required for Msx1's function of preventing myogenic precursors from maturing prematurely by repressing target myogenic regulatory genes. However, the mechanisms that maintain the nuclear periphery localization of Msx1 and its protein partners are unknown. RESULTS: We show that the inner nuclear membrane protein Emerin acts as an anchor at the inner nuclear membrane that keeps Msx1 and its protein partners Ezh2 and H3K27me3 enriched at the nuclear periphery, and that it participates in Msx1-mediated inhibition of myogenesis. Msx1 interacts with Emerin in both C2C12 myoblasts and developing mouse limbs, which is a prerequisite for Emerin to mediate the precise localization of Msx1, Ezh2, and H3K27me3. Emerin deficiency in C2C12 myoblasts disturbs the nuclear periphery localization of Msx1, Ezh2, and H3K27me3, directly indicating that Emerin functions as an anchor. Furthermore, Emerin cooperates with Msx1 to repress target myogenic regulatory genes and assists Msx1 in inhibiting myogenesis. CONCLUSIONS: Emerin cooperates with Msx1 to inhibit myogenesis by maintaining the nuclear periphery localization of Msx1 and Msx1's protein partners.
