Generalizing Motion Planners with Mixture of Experts for Autonomous Driving

Qiao Sun1, Huimin Wang4, Jiahao Zhan2, Fan Nie3, Xin Wen4, Leimeng Xu4, Kun Zhan4, Peng Jia4, Xianpeng Lang4, Hang Zhao5
1. Shanghai Qi Zhi Institute  2. Fudan University  3. Stanford University  4. LiAuto  5. Tsinghua University

Abstract

Large real-world driving datasets have sparked significant research into various aspects of data-driven motion planners for autonomous driving, including data augmentation, model architecture, reward design, training strategies, and planner pipelines. These planners promise better generalization on complicated and few-shot cases than previous methods. However, experimental results show that many of these approaches achieve only limited generalization in planning performance due to overly complex designs or training paradigms. In this paper, we review and benchmark previous methods with a focus on generalization. The experimental results indicate that, as models are appropriately scaled, many design elements become redundant. We introduce StateTransformer-2 (STR2), a scalable, decoder-only motion planner that uses a Vision Transformer (ViT) encoder and a mixture-of-experts (MoE) causal Transformer architecture. The MoE backbone addresses modality collapse and reward balancing through expert routing during training. Extensive experiments on the NuPlan dataset show that our method generalizes better than previous approaches across different test sets and closed-loop simulations. Furthermore, we assess its scalability on billions of real-world urban driving scenarios, demonstrating consistent accuracy improvements as both data and model size grow.

Method Overview

Model Architecture

An overview of the STR2-CPKS model, which models a sequence of context, proposal, key points, and future states with the MoE backbone. In STR2-CKS, the proposal tokens are removed from the sequence for better efficiency. The context part contains rasterized environment information, encoded by scalable ViT encoders, together with past ego states.
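To make the sequence layout concrete, below is a minimal PyTorch-style sketch of how the [context | proposal | key points | future states] token sequence could be assembled and fed to a causal Transformer backbone. All class names, token dimensions (e.g., the 7-dim ego state and 4-dim output state), and hyperparameters are illustrative assumptions, not the authors' actual code; the MoE feed-forward layers are replaced by standard layers here for brevity (see the routing sketch further below).

```python
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    """Toy stand-in for the scalable ViT encoder over the rasterized context."""
    def __init__(self, img_size=224, patch=16, dim=256, depth=2, heads=4):
        super().__init__()
        n_patches = (img_size // patch) ** 2
        self.patchify = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)

    def forward(self, raster):                      # raster: (B, 3, H, W)
        x = self.patchify(raster).flatten(2).transpose(1, 2)  # (B, P, dim)
        return self.encoder(x + self.pos)           # (B, P, dim) context tokens

class STR2Sketch(nn.Module):
    """Assembles [context | proposal | key points | future states] tokens and
    runs a causal Transformer over them (MoE layers omitted in this sketch)."""
    def __init__(self, dim=256, horizon=80, n_keypoints=5, n_proposals=16):
        super().__init__()
        self.vit = TinyViT(dim=dim)
        self.ego_embed = nn.Linear(7, dim)            # past ego states -> tokens
        self.proposal_embed = nn.Embedding(n_proposals, dim)
        self.kp_queries = nn.Parameter(torch.zeros(1, n_keypoints, dim))
        self.future_queries = nn.Parameter(torch.zeros(1, horizon, dim))
        layer = nn.TransformerEncoderLayer(dim, 8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        self.state_head = nn.Linear(dim, 4)           # e.g. (x, y, heading, speed)

    def forward(self, raster, past_ego, proposal_id):
        ctx = torch.cat([self.vit(raster), self.ego_embed(past_ego)], dim=1)
        B = raster.size(0)
        seq = torch.cat([
            ctx,                                               # context
            self.proposal_embed(proposal_id).unsqueeze(1),     # proposal
            self.kp_queries.expand(B, -1, -1),                 # key points
            self.future_queries.expand(B, -1, -1),             # future states
        ], dim=1)
        # A causal mask makes the backbone decoder-only / autoregressive.
        mask = nn.Transformer.generate_square_subsequent_mask(seq.size(1))
        h = self.backbone(seq, mask=mask)
        horizon = self.future_queries.size(1)
        return self.state_head(h[:, -horizon:])       # predicted trajectory
```

The point of the ordering is that proposal and key-point tokens can attend only to the context, while future-state tokens can attend to everything before them, so the backbone decodes coarse intent before fine-grained states.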

Model Performance

Top: planning results (in red) from PDM-Hybrid and STR2 in a pickup area. Bottom: an illustration of the MoE model learning and balancing different explicit rewards.
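For readers unfamiliar with expert routing, the sketch below shows a generic top-k mixture-of-experts feed-forward layer with a Switch-Transformer-style load-balancing auxiliary loss. This is a common MoE formulation and only an assumption here; the exact routing and reward-balancing scheme used by STR2 is not reproduced, and all names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEFeedForward(nn.Module):
    def __init__(self, dim=256, hidden=1024, n_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)       # learned token router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(n_experts)
        )
        self.top_k = top_k
        self.n_experts = n_experts

    def forward(self, x):                             # x: (tokens, dim)
        logits = self.router(x)                       # (tokens, n_experts)
        probs = logits.softmax(dim=-1)
        weights, idx = probs.topk(self.top_k, dim=-1)
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            for slot in range(self.top_k):
                hit = idx[:, slot] == e               # tokens routed to expert e
                if hit.any():
                    out[hit] += weights[hit, slot, None] * expert(x[hit])
        # Load-balancing auxiliary loss: pushes the router toward an even
        # split of tokens and routing probability across experts.
        frac_tokens = F.one_hot(idx[:, 0], self.n_experts).float().mean(dim=0)
        frac_probs = probs.mean(dim=0)
        aux_loss = self.n_experts * (frac_tokens * frac_probs).sum()
        return out, aux_loss
```

In training, the auxiliary loss is added to the planning loss with a small weight, which is one standard way such a backbone can keep several experts active instead of collapsing onto a single mode.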

Performance comparison on the testHard set, with details of the closed-loop reactive simulations. Higher scores indicate better performance.

Left: scaling results with the size of the training dataset, measured as the number of tokens D. Right: scaling results with the number of model parameters N. All axes are logarithmically scaled.
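One common way to summarize such log-log scaling curves is a power-law fit of the form loss ≈ a · N^(−α) (or a · D^(−α) for data size). The snippet below is only an illustration of that fit under this assumption; the data points and coefficients are placeholders, not values reported in the paper.

```python
import numpy as np

def fit_power_law(sizes, losses):
    """Fit log(loss) = log(a) - alpha * log(size) by least squares."""
    log_n, log_l = np.log(sizes), np.log(losses)
    slope, intercept = np.polyfit(log_n, log_l, deg=1)
    return np.exp(intercept), -slope                 # (a, alpha)

# Illustrative data points only.
model_params = np.array([1e7, 1e8, 1e9, 1e10])
val_losses   = np.array([1.30, 1.05, 0.86, 0.71])
a, alpha = fit_power_law(model_params, val_losses)
print(f"loss ~ {a:.2f} * N^(-{alpha:.3f})")
```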

Qualitative Results (good cases)

Traversing pickup-dropoff (Test14-hard).
Overtaking a parked vehicle.
Construction zone.

Model Ablation Results