Open Access Repository

A reinforcement learning-based vehicle platoon control strategy for reducing energy consumption in traffic oscillations

Li, M, Cao, Z ORCID: 0000-0003-3656-0328 and Li, Z 2021, 'A reinforcement learning-based vehicle platoon control strategy for reducing energy consumption in traffic oscillations', IEEE Transactions on Neural Networks and Learning Systems, pp. 1-15, doi: 10.1109/TNNLS.2021.3071959.

Full text not available from this repository.


Vehicle platooning is expected to be the dominant driving mode on future roads. To the best of our knowledge, few reinforcement learning (RL) algorithms have been applied to vehicle platoon control, which involves large-scale action and state spaces. Existing RL-based methods have mostly been applied to single-agent problems; tackling multiagent problems such as platoon control calls for multiagent RL algorithms, because the parameter space of a single-agent formulation grows exponentially with the number of agents involved. However, previous multiagent RL algorithms often supply agents with redundant, largely useless or unrelated information, which hampers training convergence and makes it difficult to extract useful patterns from the shared information. In addition, random actions frequently cause crashes, especially at the beginning of training. In this study, a communication proximal policy optimization (CommPPO) algorithm is proposed to tackle these issues. Specifically, the CommPPO model adopts a parameter-sharing structure that allows the number of agents to vary dynamically, and thus handles various platoon dynamics, including splitting and merging. The communication protocol of CommPPO consists of two parts. In the state part, the widely used predecessor-leader follower topology in the platoon is adopted to transmit global and local state information to agents. In the reward part, a new reward communication channel is proposed to solve the spurious reward and ``lazy agent'' problems found in some existing multiagent RLs. Moreover, a curriculum learning approach is adopted to reduce crashes and speed up training. To validate the proposed strategy for platoon control, two existing multiagent RLs and a traditional platoon control strategy were applied in the same scenarios for comparison. Results showed that the CommPPO algorithm gained more rewards and achieved the largest fuel consumption reduction (11.6%).
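The key property of the parameter-sharing structure described above is that a single policy, applied per agent to its own observation, works for any platoon size, so splits and merges need no retraining. The following minimal sketch illustrates only that idea; the linear policy, observation layout, and dimensions are illustrative assumptions, not the CommPPO network from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper):
# each follower observes e.g. gap, relative speed, own speed, leader speed,
# and outputs a single acceleration command.
OBS_DIM = 4
ACT_DIM = 1

# One shared parameter set used by every follower in the platoon.
W = rng.normal(scale=0.1, size=(ACT_DIM, OBS_DIM))
b = np.zeros(ACT_DIM)

def shared_policy(obs):
    """Mean action from the shared (here, linear) policy."""
    return W @ obs + b

def platoon_step(observations):
    """Apply the same shared policy to each agent's local observation."""
    return [shared_policy(o) for o in observations]

# A platoon of 3 followers, then 5 after a merge: the same parameters apply,
# which is what lets the agent count vary dynamically.
acts3 = platoon_step([rng.normal(size=OBS_DIM) for _ in range(3)])
acts5 = platoon_step([rng.normal(size=OBS_DIM) for _ in range(5)])
print(len(acts3), len(acts5))
```

In a full PPO implementation the shared policy would be a neural network trained on experience pooled from all agents, but the variable-agent-count mechanism is the same: the policy is a function of one agent's observation, not of the whole platoon state.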

Item Type: Article
Authors/Creators: Li, M and Cao, Z and Li, Z
Keywords: communication channel, multi-agent, reinforcement learning, vehicle platoon, fuel consumption
Journal or Publication Title: IEEE Transactions on Neural Networks and Learning Systems
Publisher: Institute of Electrical and Electronics Engineers
ISSN: 2162-237X
DOI / ID Number: 10.1109/TNNLS.2021.3071959
Copyright Information: Copyright 2021 IEEE

