DriViDOC: Driving from Vision through Differentiable Optimal Control

1Siemens Digital Industries Software, Leuven, Belgium 2Dept. of Mechanical Engineering, KU Leuven, Belgium 3Dept. of Electrical Engineering (ESAT), KU Leuven, Belgium

Accepted to IROS 2024!

Figure: DriViDOC structure.

Abstract

This paper proposes DriViDOC, a framework for Driving from Vision through Differentiable Optimal Control, and its application to learning autonomous driving controllers from human demonstrations. DriViDOC combines the automatic inference of relevant features from camera frames with the properties of nonlinear model predictive control (NMPC), such as constraint satisfaction. Our approach leverages the differentiability of parametric NMPC, allowing for end-to-end learning of the driving model from images to control. The model is trained on an offline dataset comprising various human demonstrations collected on a motion-base driving simulator. During online testing, the model successfully imitates different driving styles, and the learned NMPC parameters, being interpretable, offer insight into how specific driving behaviors are achieved. Our experimental results show that DriViDOC outperforms other methods combining NMPC and neural networks, achieving an average improvement of 20% in imitation scores.

Figure: NMPC parameters are dynamically adjusted by the CNN based on the driving context.
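To make this pipeline concrete, here is a minimal PyTorch sketch of the idea: an imitation loss is backpropagated through a differentiable optimal-control layer into a CNN that produces context-dependent controller parameters. This is not the paper's implementation, which differentiates a parametric NMPC solver with nonlinear dynamics and constraints; the toy layer below unrolls gradient descent on a quadratic tracking cost with trivial integrator dynamics, purely to show that gradients reach the CNN. All class and variable names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParamCNN(nn.Module):
    """Maps a camera frame to positive NMPC-style cost weights."""
    def __init__(self, n_params=2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_params),
        )

    def forward(self, img):
        return F.softplus(self.backbone(img))  # keep cost weights positive

def differentiable_oc(theta, x0, x_ref, horizon=10, iters=50, lr=0.1):
    """Toy optimal-control layer: unrolled gradient descent on a
    parametric tracking cost with trivial integrator dynamics.
    theta[:, 0] weights tracking error, theta[:, 1] control effort.
    Every step is autograd-traceable, so d(controls)/d(theta) exists."""
    u = torch.zeros(x0.shape[0], horizon, requires_grad=True)
    for _ in range(iters):
        x, cost = x0, 0.0
        for t in range(horizon):
            x = x + u[:, t]  # x_{t+1} = x_t + u_t
            cost = cost + theta[:, 0] * (x - x_ref) ** 2 \
                        + theta[:, 1] * u[:, t] ** 2
        (grad,) = torch.autograd.grad(cost.sum(), u, create_graph=True)
        u = u - lr * grad
    return u  # u[:, 0] would be the control applied to the vehicle

cnn = ParamCNN()
img = torch.randn(4, 3, 64, 64)          # hypothetical batch of camera frames
theta = cnn(img)                          # context-dependent parameters
u = differentiable_oc(theta, x0=torch.zeros(4), x_ref=torch.ones(4))
loss = ((u[:, 0] - 0.3) ** 2).mean()      # imitate a demonstrated control
loss.backward()                           # gradients flow back into the CNN
```

In the paper, the optimal-control layer instead solves a constrained nonlinear program whose solution is differentiated with respect to its parameters, but the gradient path is analogous: imitation loss, through the controls, through the NMPC parameters, into the CNN.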

Video

BibTeX

@inproceedings{acerbo2024drividoc,
  author    = {Acerbo, Flavia Sofia and Swevers, Jan and Tuytelaars, Tinne and Tong, Son},
  title     = {Driving from Vision through Differentiable Optimal Control},
  booktitle = {Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year      = {2024},
}