4. Warehouse Robots in 2D
Warehouse operations and logistics are among the most successful application areas for mobile robots.
In this chapter, we consider a simple mobile robot that operates in a warehouse and whose main task is to transport products from various storage locations to a shipping station. The representations and models in this chapter are significantly more complex than those in previous chapters, but they are also much more realistic; in some cases, they are not so different from what is used in current state-of-the-art robotic systems.
For the first time, we will consider the case in which the state space is continuous. While we do not consider rotation in this chapter, we will assume that the robot can translate freely to any point on the warehouse floor, so the state space will be a subset of \(\mathbb{R}^2\). To represent uncertainty in state, we will introduce continuous probability distributions on \(\mathbb{R}^2\), specifically the multivariate Gaussian distribution. To model the motion of such a robot, we introduce omni-wheels and investigate the geometry of motion using such wheels. Then, to model uncertainty in the motion model, we again use the multivariate Gaussian distribution.
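As a preview of how Gaussian motion uncertainty can be used in practice, here is a minimal sketch in Python; the function name `motion_model` and the noise covariance `Q` are illustrative assumptions, not the chapter's actual model:

```python
import numpy as np

def motion_model(state, command, Q, rng):
    """Predict the next 2D state: the current state plus the commanded
    displacement, corrupted by zero-mean multivariate Gaussian noise
    with covariance Q (an illustrative choice)."""
    noise = rng.multivariate_normal(mean=np.zeros(2), cov=Q)
    return state + command + noise

# Example: robot at (2.0, 3.0) commanded to move 1 m in the x direction.
rng = np.random.default_rng(42)
Q = np.diag([0.01, 0.01])          # 0.1 m std. dev. per axis -> variance 0.01 m^2
state = np.array([2.0, 3.0])
command = np.array([1.0, 0.0])
print(motion_model(state, command, Q, rng))
```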
For sensing and perception, we again use continuous probability distributions: one-dimensional distributions for a range sensor, and two-dimensional distributions for a GPS-like sensor. We then introduce the perception problem of localization: given a map of the environment along with the action and sensor measurement histories, determine the robot’s current location (i.e., the current state). We first look at “Markov Localization,” which is a straightforward application of hidden Markov models (HMMs). We then introduce a particle filter as a better alternative; when applied to localization, the resulting algorithm is called “Monte Carlo Localization.” Finally, as an advanced technique, we discuss Kalman smoothing. These three solution approaches demonstrate trade-offs between computational cost and accuracy.
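To make the particle filter idea concrete, below is a minimal Monte Carlo Localization sketch, assuming a Gaussian motion model like the one above and a GPS-like sensor that returns the true position plus 2D Gaussian noise; the covariances `Q` and `R` and the function name `mcl_step` are illustrative assumptions:

```python
import numpy as np
from scipy.stats import multivariate_normal

def mcl_step(particles, command, measurement, Q, R, rng):
    """One predict-update-resample cycle of Monte Carlo Localization.
    particles: (N, 2) array of hypothesized robot positions."""
    N = len(particles)
    # Predict: push every particle through the noisy motion model.
    particles = particles + command + rng.multivariate_normal(np.zeros(2), Q, size=N)
    # Update: weight particles by the likelihood of the GPS-like measurement
    # (the Gaussian is symmetric in measurement - particle, so we can center it
    # on the measurement and evaluate at all particles at once).
    weights = multivariate_normal(mean=measurement, cov=R).pdf(particles)
    weights /= weights.sum()
    # Resample: draw N particles with replacement, proportional to weight.
    indices = rng.choice(N, size=N, p=weights)
    return particles[indices]

# Example usage with illustrative noise covariances.
rng = np.random.default_rng(1)
Q = np.diag([0.01, 0.01])   # motion noise covariance (m^2)
R = np.diag([0.25, 0.25])   # GPS-like sensor noise covariance (m^2)
particles = rng.uniform(low=[0, 0], high=[10, 10], size=(1000, 2))
particles = mcl_step(particles, command=np.array([1.0, 0.0]),
                     measurement=np.array([3.1, 2.9]), Q=Q, R=R, rng=rng)
print(particles.mean(axis=0))   # posterior mean estimate of the robot's position
```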
For planning, we will extend the value iteration algorithm from the previous chapter to the problem of planning motions for a robot that translates in the plane when there is uncertainty both in the motion model and in our knowledge of the current state. This requires defining a reward function that guides the robot to its goal while avoiding collisions with obstacles.
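As a rough preview, here is a minimal value iteration sketch on a coarse grid discretization of the warehouse floor; the grid, reward values, four-action motion model, and slip probability are all illustrative assumptions, not the chapter's actual setup:

```python
import numpy as np

# Illustrative 5x5 grid: 0 = free space, 1 = obstacle; goal in the far corner.
grid = np.zeros((5, 5))
grid[2, 1:4] = 1
goal = (4, 4)
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]    # up, down, left, right
gamma = 0.9                                     # discount factor

def reward(cell):
    """Reward that attracts the robot to the goal and penalizes obstacles."""
    if cell == goal:
        return 10.0
    if grid[cell] == 1:
        return -10.0
    return -0.1    # small per-step cost to encourage short paths

V = np.zeros(grid.shape)
for _ in range(100):                            # value iteration sweeps
    V_new = np.zeros_like(V)
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            values = []
            for di, dj in actions:
                ni = min(max(i + di, 0), grid.shape[0] - 1)
                nj = min(max(j + dj, 0), grid.shape[1] - 1)
                # Simple motion uncertainty: the move succeeds with probability
                # 0.8; otherwise the robot slips and stays in place.
                q = 0.8 * (reward((ni, nj)) + gamma * V[ni, nj]) \
                    + 0.2 * (reward((i, j)) + gamma * V[i, j])
                values.append(q)
            V_new[i, j] = max(values)           # Bellman backup
    V = V_new
print(np.round(V, 1))
```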