Conference Papers

[2020 ]Integrating deep reinforcement learning with optimal trajectory planner for automated driving

Weitao Zhou; Kun Jiang; Zhong Cao; Nanshan Deng; Diange Yang

Trajectory planning at intersections is challenging due to the highly uncertain intentions of surrounding agents. Conventional methods may fail in corner cases when ad-hoc parameters or predictions do not match the real traffic. This paper proposes a trajectory planning method adaptive to uncertain interactions, called the Value-Estimation-Guided (VEG) trajectory planner. The method builds on the Frenét-frame trajectory planner and, at the same time, uses deep reinforcement learning to handle the high uncertainty. The deep reinforcement learning agent learns from past failures and adjusts the sampling direction of the optimal planner in the Frenét frame. In this way, the generated trajectory can be partially optimal while also adapting to the stochasticity. This method drives the automated vehicle through intersections and completes the unprotected left-turn mission. During testing, the traffic density and the surrounding vehicles' types and intentions are all generated randomly. The statistical results show that the proposed trajectory planner works well under high uncertainty: it helps the automated vehicle finish the unprotected left turn with a success rate of 94.4%, compared with 90% for the baseline method.
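The idea described in the abstract can be illustrated with a minimal sketch: a Frenét-frame planner enumerates candidate terminal states and scores them with a conventional cost, while a learned value estimate (standing in for the paper's deep-RL guidance) biases which candidate is selected. All function names, candidate sets, and the toy cost/value functions below are illustrative assumptions, not the authors' implementation.

```python
def sample_candidates(d_targets, t_targets):
    """Enumerate candidate terminal states (lateral offset d, horizon t)
    in the Frenet frame -- a simplified stand-in for polynomial
    trajectory sampling in a full planner."""
    return [(d, t) for d in d_targets for t in t_targets]

def veg_select(candidates, cost_fn, value_fn, alpha=1.0):
    """Combine the optimal planner's cost with a learned value estimate.
    In the paper the RL agent adjusts the sampling direction after past
    failures; here that guidance is approximated by subtracting a
    weighted value bonus from the planner cost (illustrative only)."""
    return min(candidates, key=lambda c: cost_fn(c) - alpha * value_fn(c))

# Toy planner cost: prefer small lateral offset and a ~3 s horizon.
cost = lambda c: c[0] ** 2 + (c[1] - 3.0) ** 2
# Toy learned value: pretend RL learned that larger offsets avoid conflict.
value = lambda c: 0.5 * abs(c[0])

cands = sample_candidates([-1.0, 0.0, 1.0], [2.0, 3.0, 4.0])
best = veg_select(cands, cost, value, alpha=4.0)
print(best)  # → (-1.0, 3.0): the value bias shifts the pick off-center
```

With `alpha=0` the selection reduces to the plain optimal planner; increasing `alpha` lets the learned estimate override the hand-tuned cost, which mirrors how the paper's guidance adapts the planner to cases the ad-hoc cost misjudges.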

Lab Leader

Diange Yang ydg@tsinghua.edu.cn

Deputy Director of Lab

Kun Jiang jiangkun@tsinghua.edu.cn