6. Autonomous Vehicles
Self-driving cars can be thought of as large-scale wheeled mobile robots that navigate in the real world based on sensor data.
In this chapter we look at some of the basic concepts involved in autonomous driving. Needless to say, the topic of autonomous vehicles is rather large, and we cover only a small selection of it here.
We begin by becoming a bit more serious about movement in the plane, first introducing the matrix group SO(2) to represent rotation, and then extending this to the matrix group SE(2), which can represent both rotation and translation in the plane. We then introduce kinematics in the form of Ackermann steering, which is common in automobiles.
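As a preview, and using the standard matrix forms of these groups, a rotation by an angle $\theta$ is a $2\times 2$ orthogonal matrix with unit determinant, and a rigid motion that also translates by $(x, y)$ can be written as a $3\times 3$ homogeneous transform:

$$
R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \in SO(2),
\qquad
T(\theta, x, y) = \begin{pmatrix} \cos\theta & -\sin\theta & x \\ \sin\theta & \cos\theta & y \\ 0 & 0 & 1 \end{pmatrix} \in SE(2).
$$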
In addition to cameras, a very popular sensor in autonomous driving is LIDAR. We develop the basic geometry of LIDAR sensors, and then present the iterative closest point (ICP) algorithm as a way to obtain relative pose measurements from successive LIDAR scans. This leads naturally to the problem of simultaneous localization and mapping, or SLAM, a very popular topic in robotics. Here we cover the most basic version, Pose SLAM, which needs only relative pose measurements.
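To make the idea behind ICP concrete, below is a minimal 2D point-to-point sketch in Python using NumPy. The function names, the brute-force nearest-neighbor step, and the fixed iteration count are illustrative assumptions, not the implementation developed later in the chapter:

```python
import numpy as np

def best_rigid_transform(source, target):
    """Least-squares rotation R and translation t mapping source onto target
    (both N x 2 arrays of corresponding points), via the SVD-based Kabsch method."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    U, _, Vt = np.linalg.svd((source - mu_s).T @ (target - mu_t))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    return R, t

def icp(scan, reference, iterations=20):
    """Estimate the relative pose (R, t) that aligns `scan` with `reference`
    by alternating nearest-neighbor association and rigid alignment."""
    R, t = np.eye(2), np.zeros(2)
    current = scan.copy()
    for _ in range(iterations):
        # Associate each transformed scan point with its closest reference point.
        d = np.linalg.norm(current[:, None, :] - reference[None, :, :], axis=2)
        matched = reference[d.argmin(axis=1)]
        # Re-estimate the transform from the original scan to the matched points.
        R, t = best_rigid_transform(scan, matched)
        current = scan @ R.T + t
    return R, t
```

A real implementation would use a k-d tree for the nearest-neighbor search, reject outlier correspondences, and stop when the alignment error converges; the returned pair $(R, t)$ is exactly the kind of relative pose measurement that Pose SLAM consumes.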
In section 5 we look at motion primitives as a way to do motion planning on the road. Finally, in section 6, we discuss the basics of deep reinforcement learning.