Article in English | MEDLINE | ID: mdl-38968016

ABSTRACT

Intermittent pneumatic compression (IPC) systems apply external pressure to the lower limbs and enhance peripheral blood flow. We previously introduced a cardiac-gated compression system that enhanced arterial blood velocity (BV) in the lower limb compared to fixed compression timing (CT) for seated and standing subjects. However, these pilot studies found that the CT that maximized BV was not constant across individuals and could change over time. Current CT modelling methods for IPC are limited to predictions for a single day and one heartbeat ahead. However, IPC therapy may span weeks or longer, the BV response to compression can vary with physiological state, and the best CT for eliciting the desired physiological outcome may change, even for the same individual. We propose that a deep reinforcement learning (DRL) algorithm can learn and adaptively modify CT to achieve a selected outcome using IPC. Herein, we target maximizing lower limb arterial BV as the desired outcome and build participant-specific simulated lower limb environments for 6 participants. We show that DRL can adaptively learn the CT for IPC that maximizes arterial BV. Compared to previous work, the DRL agent achieves 98 ± 2% of the resultant blood flow and is faster at maximizing BV; the DRL agent can learn an "optimal" policy in 15 ± 2 minutes on average and can adapt on the fly. Given a desired objective, we posit that the proposed DRL agent can be implemented in IPC systems to rapidly learn the (potentially time-varying) "optimal" CT with a human-in-the-loop.
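The adaptive-timing idea in the abstract can be sketched in miniature. The paper itself uses a deep RL agent trained against participant-specific simulated lower-limb environments; the snippet below is only an illustrative stand-in, substituting a simple epsilon-greedy bandit over a discrete set of candidate compression timings and a hypothetical, made-up blood-velocity response model. The timing grid, response curve, and all parameter values are assumptions for illustration, not taken from the study.

```python
import math
import random

# Hypothetical candidate CT delays after the cardiac trigger, in seconds
# (assumed grid; the real system's timing resolution is not specified here).
TIMINGS = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]

def simulated_bv(ct, optimal_ct=0.3, peak=60.0, width=0.15):
    """Made-up arterial BV response (cm/s) to compression timing ct.

    A Gaussian bump around an assumed best timing, plus measurement noise.
    This stands in for the participant-specific simulated environment.
    """
    return peak * math.exp(-((ct - optimal_ct) / width) ** 2) + random.gauss(0.0, 1.0)

def learn_ct(steps=2000, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit: try timings, keep running reward estimates,
    and converge on the CT that maximizes the simulated BV response."""
    random.seed(seed)
    q = {ct: 0.0 for ct in TIMINGS}  # reward estimate per timing
    n = {ct: 0 for ct in TIMINGS}    # visit count per timing
    for _ in range(steps):
        if random.random() < epsilon:
            ct = random.choice(TIMINGS)      # explore
        else:
            ct = max(q, key=q.get)           # exploit current best estimate
        reward = simulated_bv(ct)
        n[ct] += 1
        q[ct] += (reward - q[ct]) / n[ct]    # incremental mean update
    return max(q, key=q.get)

print(learn_ct())  # settles on the timing nearest the simulated optimum
```

Because the reward model drifts can be injected by changing `optimal_ct` mid-run, the same update rule also adapts when the best timing shifts, which is the property the abstract highlights for multi-week therapy.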
