Results 1 - 2 of 2
1.
Accid Anal Prev; 190: 107179, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37385116

ABSTRACT

Many freeway accident disposals are well documented in accident reports and surveillance videos, but the emergency experience embedded in these records is difficult to reuse. To reuse such experience for better emergency decision-making, this paper proposes a knowledge-based experience transfer method that transfers task-level freeway accident disposal experience via a multi-agent reinforcement learning algorithm with policy distillation. First, a Markov decision process is used to model the emergency decision-making process for multiple types of freeway accident scenes at the task level. Then, an adaptive knowledge transfer method, the policy distilled multi-agent deep deterministic policy gradient (PD-MADDPG) algorithm, is proposed to reuse experience from past freeway accident records for current accidents, enabling fast decision-making and optimal onsite disposal. The performance of the proposed algorithm is evaluated on instantiated cases of accidents that occurred on freeways in Shaanxi Province, China. Besides outperforming several typical decision-making methods, a decision maker with transferred knowledge attains 65.22%, 11.37%, 9.23%, 7.76%, and 1.71% higher average reward than one without in the five studied cases, respectively, indicating that the emergency experience transferred from past accidents contributes to fast emergency decision-making and optimal onsite accident disposal.
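To make the transfer mechanism concrete, the sketch below illustrates policy distillation for deterministic actors in the spirit of PD-MADDPG: several "teacher" actors, each assumed to have been trained on a past accident type, are compressed into a single student actor by regressing the student's actions onto the teachers'. The linear-tanh actors, dimensions, and MSE distillation loss are illustrative assumptions; the abstract does not specify the paper's network architecture or loss.

```python
# Minimal sketch of task-level policy distillation (hypothetical setup;
# not the paper's exact PD-MADDPG architecture).
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM = 8, 3                  # assumed sizes for illustration

def actor(theta, s):
    """Deterministic policy a = tanh(W s): stand-in for a DDPG actor network."""
    return np.tanh(theta @ s)

# Two "teacher" actors, each assumed trained on a past accident scenario.
teachers = [rng.normal(size=(ACTION_DIM, STATE_DIM)) for _ in range(2)]
student = np.zeros((ACTION_DIM, STATE_DIM))   # distilled multi-task policy

LR = 0.05
for step in range(2000):
    s = rng.normal(size=STATE_DIM)            # sampled accident-scene state
    teacher = teachers[step % len(teachers)]  # round-robin over past tasks
    a_teacher = actor(teacher, s)
    a_student = actor(student, s)
    # Distillation loss L = 0.5 * ||a_student - a_teacher||^2
    err = a_student - a_teacher
    # Gradient through tanh: dL/dW = (err * (1 - a_student^2)) outer s
    student -= LR * np.outer(err * (1.0 - a_student ** 2), s)
```

After training, the single student policy approximates both teachers' action choices, which is the sense in which experience from past accident records is "transferred" to new scenes.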


Subject(s)
Accidents, Traffic; Algorithms; Humans; China
2.
Adv Mater; 34(6): e2107811, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34791712

ABSTRACT

Reinforcement learning (RL) has shown outstanding performance in handling complex tasks in recent years. The eligibility trace (ET), a fundamental mechanism in reinforcement learning, records critical states with attenuation and guides policy updates, playing a crucial role in accelerating the convergence of RL training. However, implementing ETs on conventional digital computing hardware is energy hungry and restricted by the memory wall, owing to the massive computation of exponential decay functions. Here, an in-memory realization of the ET for energy-efficient reinforcement learning is demonstrated, with outstanding performance in discrete- and continuous-state RL tasks. For the first time, the inherent conductance drift of phase change memory is exploited as a physical decay function to realize the in-memory eligibility trace, demonstrating excellent performance during RL training on various tasks. The spontaneous in-memory decay computation and the storage of the policy in the same phase change memory yield significantly enhanced energy efficiency compared with traditional graphics processing unit platforms. This work therefore provides a holistic, energy- and hardware-efficient method for both training and inference in reinforcement learning.
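For intuition, the following minimal tabular TD(lambda) sketch shows the eligibility-trace update that the paper maps onto hardware: the trace decays exponentially at every step (the operation that phase-change-memory conductance drift performs physically, "for free") and weights value updates by recency. The random-walk environment and all hyperparameters are hypothetical illustrations, not the paper's benchmark tasks.

```python
# Minimal tabular TD(lambda) with an accumulating eligibility trace.
import numpy as np

rng = np.random.default_rng(1)
N_STATES = 7                      # states 0..6; 0 and 6 are terminal
GAMMA, LAM, ALPHA = 0.99, 0.9, 0.1

V = np.zeros(N_STATES)            # state-value table (the quantity stored in
                                  # PCM cells in an in-memory implementation)
for episode in range(500):
    e = np.zeros(N_STATES)        # eligibility trace, one entry per state
    s = 3                         # start in the middle of the walk
    while s not in (0, 6):
        s_next = s + rng.choice((-1, 1))
        r = 1.0 if s_next == 6 else 0.0
        nonterminal = s_next not in (0, 6)
        delta = r + GAMMA * V[s_next] * nonterminal - V[s]
        e *= GAMMA * LAM          # exponential decay of all trace entries:
                                  # the step realized by PCM conductance drift
        e[s] += 1.0               # accumulating trace marks the visited state
        V += ALPHA * delta * e    # credit assignment weighted by recency
        s = s_next
```

The `e *= GAMMA * LAM` line is the energy bottleneck on digital hardware that the abstract refers to: it touches every trace entry each step, which is why replacing it with a spontaneous physical decay is attractive.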


Subject(s)
Learning; Reinforcement, Psychology