Results 1 - 3 of 3
1.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 4998-5004, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34892330

ABSTRACT

MIT's Emergency-Vent Project was launched in March 2020 to develop safe guidance and a reference design for a bridge ventilator that could be rapidly produced in a distributed manner worldwide. The system uses a novel servo-based robotic gripper to automate the squeezing of a manual resuscitator bag evenly from both sides, providing ventilation according to clinically specified parameters. In just one month, the team designed and built prototype ventilators, tested them in a series of porcine trials, and collaborated with industry partners to enable mass production. We released the design, including mechanical drawings, design spreadsheets, circuit diagrams, and control code, in an open-source format and assisted production efforts worldwide. Clinical relevance: This work demonstrated the viability of automating the compression of a manual resuscitator bag, with pressure feedback, to provide bridge ventilation support.


Subjects
COVID-19 , Animals , Humans , Respiration , Resuscitation , SARS-CoV-2 , Swine , Ventilators, Mechanical
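The core automation idea above, servo compression of the bag guarded by airway-pressure feedback, can be illustrated with a deliberately simplified control-loop sketch. Everything here (function name, gain, retract step, the 40 cmH2O limit) is a hypothetical illustration, not the project's actual control code or clinical parameters:

```python
# Hypothetical sketch of pressure-guarded bag compression: a servo angle is
# driven toward a commanded squeeze angle, but the gripper backs off whenever
# measured airway pressure exceeds a safety limit. Gains and limits below are
# illustrative assumptions, not the project's clinical parameters.
def squeeze_step(target_angle, current_angle, airway_pressure_cmH2O,
                 p_max_cmH2O=40.0, kp=0.3, retract_step=1.5):
    """One control-loop tick; returns the next servo angle command."""
    if airway_pressure_cmH2O >= p_max_cmH2O:
        return current_angle - retract_step      # relieve over-pressure first
    return current_angle + kp * (target_angle - current_angle)

# While pressure stays safe, the servo converges toward the squeeze target.
angle = 0.0
for _ in range(20):
    angle = squeeze_step(60.0, angle, airway_pressure_cmH2O=20.0)
```

The pressure check runs every tick and overrides the position loop, which mirrors the abstract's point that pressure feedback is what makes automated compression safe.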
2.
IEEE Trans Syst Man Cybern B Cybern ; 38(4): 943-9, 2008 Aug.
Article in English | MEDLINE | ID: mdl-18632382

ABSTRACT

Convergence of the value-iteration-based heuristic dynamic programming (HDP) algorithm is proven in the case of general nonlinear systems. That is, it is shown that HDP converges to the optimal control and the optimal value function that solves the Hamilton-Jacobi-Bellman equation appearing in infinite-horizon discrete-time (DT) nonlinear optimal control. It is assumed that, at each iteration, the value and action update equations can be exactly solved. Two standard neural networks (NNs) are used: a critic NN approximates the value function, and an action NN approximates the optimal control policy. This approach allows HDP to be implemented without knowing the internal dynamics of the system. The exact-solution assumption holds for some classes of nonlinear systems and, in particular, for the DT linear quadratic regulator (LQR), where the action is linear and the value is quadratic in the states, so the NNs have zero approximation error. Notably, for the LQR, HDP may be implemented without knowing the system A matrix by using two NNs. This fact is not generally appreciated in the folklore of HDP for the DT LQR, where only one critic NN is typically used.


Subjects
Algorithms , Models, Theoretical , Linear Programming , Systems Theory , Computer Simulation , Feedback
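For the DT LQR special case discussed in the abstract, the critic's value is quadratic, V_j(x) = x' P_j x, so the HDP value update reduces to the Riccati difference equation and convergence can be checked numerically. A minimal sketch (the matrices A, B, Q, R are illustrative assumptions, not from the paper):

```python
import numpy as np

# HDP value iteration for the DT LQR: starting from P_0 = 0, alternating the
# action update (greedy policy) and value update (Riccati difference equation)
# drives P_j to the stabilizing solution of the discrete algebraic Riccati
# equation (DARE), as the convergence proof guarantees.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])      # assumed dynamics (discretized double integrator)
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)                   # state cost
R = np.array([[1.0]])           # control cost

P = np.zeros((2, 2))
for _ in range(2000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # action update
    P_next = Q + A.T @ P @ (A - B @ K)                 # value update
    if np.max(np.abs(P_next - P)) < 1e-12:
        break
    P = P_next

# At convergence P is a fixed point, so the DARE residual is ~0.
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
dare_residual = Q + A.T @ P @ (A - B @ K) - P
```

Note that each iteration uses only A and B through products with data-dependent quantities; in the NN implementation the abstract describes, these model terms are replaced by the two-network approximation, which is what removes the need to know A.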
3.
IEEE Trans Syst Man Cybern B Cybern ; 37(1): 240-7, 2007 Feb.
Article in English | MEDLINE | ID: mdl-17278575

ABSTRACT

In this correspondence, adaptive critic approximate dynamic programming designs are derived to solve the discrete-time zero-sum game in which the state and action spaces are continuous. This yields a forward-in-time reinforcement learning algorithm that converges to the Nash equilibrium of the corresponding zero-sum game. The results can be thought of as a way to solve the Riccati equation of the well-known discrete-time H(infinity) optimal control problem forward in time. Two schemes are presented, namely 1) heuristic dynamic programming and 2) dual heuristic dynamic programming, to solve for the value function and the costate of the game, respectively. An H(infinity) autopilot design for an F-16 aircraft is presented to illustrate the results.


Subjects
Artificial Intelligence , Game Theory , Models, Theoretical , Signal Processing, Computer-Assisted , Computer Simulation
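The forward-in-time idea can be sketched for the linear-quadratic case, where value iteration on the game's quadratic value V_j(x) = x' P_j x becomes an iteration on the game (H-infinity) Riccati equation: the minimizing control and maximizing disturbance are solved jointly as a saddle point at each step. All matrices and the attenuation level gamma below are illustrative assumptions:

```python
import numpy as np

# Value iteration for a DT linear-quadratic zero-sum game: control u minimizes,
# disturbance w maximizes, with attenuation level gamma. Iterating the coupled
# saddle-point update forward in time, starting from P_0 = 0, converges to the
# solution of the game Riccati equation without any backward integration.
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])      # assumed stable dynamics
B = np.array([[0.0], [1.0]])    # control input channel
E = np.array([[1.0], [0.0]])    # disturbance input channel
Q = np.eye(2)
R = np.eye(1)
gamma = 5.0                     # assumed attenuation level (above the optimum)

P = np.zeros((2, 2))
for _ in range(2000):
    # Coupled min-max (saddle-point) gains solved as one block system
    S = np.block([[R + B.T @ P @ B,               B.T @ P @ E],
                  [E.T @ P @ B, E.T @ P @ E - gamma**2 * np.eye(1)]])
    L = np.linalg.solve(S, np.vstack([B.T @ P @ A, E.T @ P @ A]))
    P_next = Q + A.T @ P @ A - np.hstack([A.T @ P @ B, A.T @ P @ E]) @ L
    if np.max(np.abs(P_next - P)) < 1e-12:
        break
    P = P_next
```

The indefinite block matrix S is what distinguishes the game from the one-player LQR: its lower-right block must stay negative definite (gamma large enough), which is the solvability condition of the H(infinity) problem.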