Deep reinforcement learning-based driving strategy for avoidance of chain collisions and its safety efficiency analysis in autonomous vehicles
Abstract
Vehicle control in autonomous traffic flow is often handled with reinforcement learning methods for decision-making. However, unexpected critical situations can make collisions more severe and trigger chain collisions. In this work, we first review the leading causes of chain collisions and their subsequent chain events, which indicates how such crashes can be prevented and their severity mitigated. We then formulate chain collision avoidance as a Markov Decision Process in order to propose a reinforcement learning-based decision-making strategy and analyse the safety efficiency of existing methods for driving security. To this end, a reward function is developed to address the challenge of multi-vehicle collision avoidance, and a formation-based perception network structure built on actor-critic methodologies is employed to enhance the decision-making process. Finally, in the safety efficiency analysis phase, we investigate the safety performance of the agent vehicle in both single-agent and multi-agent autonomous driving environments. Three state-of-the-art actor-critic algorithms are used in an extensive simulation built in Unity3D. Moreover, to demonstrate the accuracy of the safety efficiency analysis, multiple training runs of the neural networks are reported in terms of training performance, training speed, success rate, and stability of rewards, with a trade-off between exploitation and exploration during training. The efficiency of the algorithms is assessed from two aspects (single-agent and multi-agent), and each aspect is analysed with respect to traffic flow: (1) control efficiency in unexpected traffic situations caused by sudden slowdowns, (2) abrupt lane changes, and (3) smoothly reaching the destination. The findings are intended to shed light on the benefits of a larger, more reliable autonomous traffic set-up for academics and policymakers, and to pave the way for the practical realisation of a driverless traffic world.
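The abstract refers to an MDP formulation with a purpose-built reward function for multi-vehicle collision avoidance, trained with actor-critic algorithms in a Unity3D simulation. The authors' exact reward design is not reproduced on this page; the sketch below is a minimal, hypothetical illustration of how such a shaped reward might be structured, where all signal names, weights, and thresholds are assumptions rather than values from the paper.

```python
# Hypothetical sketch (not the authors' implementation): a shaped per-step reward
# for multi-vehicle (chain) collision avoidance in an MDP formulation.
# Signal names, weights, and thresholds below are illustrative assumptions.

def collision_avoidance_reward(ego_speed, target_speed, gaps_to_neighbours,
                               collided, reached_goal,
                               safe_gap=10.0, w_speed=0.1, w_gap=0.5):
    """Return a scalar reward for one simulation time step.

    ego_speed / target_speed : current and desired speed (m/s)
    gaps_to_neighbours       : distances (m) to surrounding vehicles
    collided / reached_goal  : terminal flags reported by the simulator
    """
    if collided:        # large penalty on collision, ends the episode
        return -100.0
    if reached_goal:    # bonus for smoothly reaching the destination
        return 100.0

    reward = 0.0
    # Progress term: encourage tracking the desired speed.
    reward -= w_speed * abs(ego_speed - target_speed)
    # Safety term: penalise closing in on any neighbour below a safe gap,
    # since short headways are what propagate rear-end chain collisions.
    for gap in gaps_to_neighbours:
        if gap < safe_gap:
            reward -= w_gap * (safe_gap - gap) / safe_gap
    return reward
```

In an actor-critic setting, a reward of this shape would be fed back to the critic at every step, so the policy learns to trade off progress toward the destination against keeping safe gaps during sudden slowdowns and abrupt lane changes.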
Citation
Muzahid, A.J.M., Kamarulzaman, S.F., Rahman, M.A. and Alenezi, A.H. (2022) Deep reinforcement learning-based driving strategy for avoidance of chain collisions and its safety efficiency analysis in autonomous vehicles. IEEE Access, 10, pp. 43303-43319.
Publisher
IEEE
Journal
IEEE Access
Additional Links
https://ieeexplore.ieee.org/document/9758806
Type
Journal article
Language
en
Description
© 2022 The Authors. Published by IEEE. This is an open access article available under a Creative Commons licence. The published version can be accessed at the following link on the publisher's website: https://ieeexplore.ieee.org/document/9758806
ISSN
2169-3536
EISSN
2169-3536
Sponsors
This work was supported in part by the Ministry of Higher Education of Malaysia through the Fundamental Research Grant Scheme under Grant FRGS/1/2018/TK08/UMP/02/2; and in part by the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia under Project IF-2020-NBU-418.
DOI
10.1109/ACCESS.2022.3167812
Except where otherwise noted, this item's license is described as https://creativecommons.org/licenses/by/4.0/