{ "cells": [ { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "view-in-github", "tags": [ "no-tex" ] }, "source": [ "\"Open" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "id": "JoW4C_OkOMhe", "tags": [ "remove-cell" ] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Note: you may need to restart the kernel to use updated packages.\n" ] } ], "source": [ "%pip install -q -U gtbook\n" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "id": "10-snNDwOSuC", "tags": [ "remove-cell" ] }, "outputs": [], "source": [ "import numpy as np\n", "import gtsam\n", "\n", "import gtbook\n", "import gtbook.display\n", "from gtbook import vacuum\n", "from gtbook.discrete import Variables\n", "VARIABLES = Variables()\n", "def pretty(obj):\n", " return gtbook.display.pretty(obj, VARIABLES)\n", "def show(obj, **kwargs):\n", " return gtbook.display.show(obj, VARIABLES, **kwargs)\n", "\n", "# From section 3.2:\n", "N = 5\n", "X = VARIABLES.discrete_series(\"X\", range(1, N+1), vacuum.rooms)\n", "A = VARIABLES.discrete_series(\"A\", range(1, N), vacuum.action_space)\n", "\n", "# From section 3.5:\n", "conditional = gtsam.DiscreteConditional((2,5), [(0,5), (1,4)], vacuum.action_spec)\n", "R = np.empty((5, 4, 5), float)\n", "T = np.empty((5, 4, 5), float)\n", "for assignment, value in conditional.enumerate():\n", " x, a, y = assignment[0], assignment[1], assignment[2]\n", " R[x, a, y] = 10.0 if y == vacuum.rooms.index(\"Living Room\") else 0.0\n", " T[x, a, y] = value" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": { "id": "nAvx4-UCNzt2" }, "source": [ "```{index} learning; reinforcement learning\n", "```\n", "\n", "# Learning to Act Optimally\n", "\n", "> Learning to act optimally in a stochastic world.\n", "\n", "\"Splash\n", "\n", "When a Markov Decision Process is fully specified we can *compute* an optimal policy.\n", "Below we first define optimal value functions and examine their properties, most notably the Bellman equation.\n", "We then discuss value iteration and policy iteration, two algorithms to calculate the optimal value function and its associated optimal policy. However, both these algorithms need a fully-defined MDP.\n", "\n", "When the MDP is not known in advance, however, we have to *learn* an optimal policy over time. There are two main approaches: model-based and model-free." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{index} optimal policy, principle of optimality\n", "```\n", "## The Optimal Value Function\n", "\n", "> The optimal policy maximizes the value function.\n", "\n", "We now turn our attention to defining the *optimal* value function,\n", "which can be used to construct the **optimal policy** $\\pi^*$.\n", "From Section 3.5 we know how to compute the value function for an arbitrary policy $\\pi$:\n", "\n", "$$V^\\pi(x) = \\bar{R}(x,\\pi(x)) + \\gamma \\sum_{x'} P(x'|x, \\pi(x)) V^\\pi(x').$$\n", "\n", "\n", "To begin, we recall the famous **principle of optimality**\n", "as stated by Bellman in a\n", "[1960 article in the IEEE Transactions on Automatic Control](https://www.rand.org/content/dam/rand/pubs/papers/2008/P1416.pdf) {cite:p}`Bellman60`:\n", "\n", "> *An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision.*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{index} optimal value function, Bellman equation\n", "```\n", "This principle enables a key step in deriving a recursive formulation for the optimal policy. Indeed, the **optimal value function** $V^*: {\\cal X} \\rightarrow {\\cal A}$\n", "is merely the value function for the optimal policy.\n", "This can be written mathematically as\n", "\n", "$$\n", "\\begin{aligned}\n", "V^*(x) &= \\max_\\pi V^{\\pi}(x) \\\\\n", "&=\n", "\\max_\\pi \\left\\{ \\bar{R}(x,\\pi(x)) + \\gamma \\sum_{x'} P(x'|x, \\pi(x)) V^\\pi(x') \\right\\}\\\\\n", "&=\n", "\\max_\\pi \\left\\{ \\bar{R}(x,\\pi(x)) + \\gamma \\sum_{x'} P(x'|x, \\pi(x)) V^*(x') \\right\\}\\\\\n", "&=\n", "\\max_a \\left\\{ \\bar{R}(x,a) + \\gamma \\sum_{x'} P(x'|x, a) V^*(x') \\right\\} \\\\\n", "\\end{aligned}\n", "$$\n", "\n", "In the above, the second line follows immediately by using the definition of $V^\\pi$ above. The third line is more interesting.\n", "By applying the principle of optimality, we replace $V^\\pi(x')$ with $V^*(x')$.\n", "Simply put, if remaining decisions from state $x'$ must constitute an optimal policy,\n", "the corresponding value function at $x'$ will be the optimal value function for $x'$.\n", "For the fourth line,\n", "because the value function has been written in recursive form,\n", "$\\pi$ is applied only to the current state (i.e., when $\\pi$ is evaluated in the optimization,\n", "it always appears as $\\pi(x)$).\n", "Therefore, we can write the optimization\n", "as a maximization with respect to the *action* applied in the *current state*, rather than as a\n", "maximization with respect to the entire policy $\\pi$!\n", "\n", "\n", "This equation is known as the **Bellman equation**.\n", "It is named after Richard Bellman, the mathematician\n", "who discovered it, and it is one of the most important equations in all of computer science.\n", "The Bellman equation has a very nice interpretation: \n", "the optimal value function of a state is the maximum expected reward \n", "*plus* the discounted expected value function when acting optimally in the future." 
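, "\n", "\n", "To make this concrete, the right-hand side of the Bellman equation can be viewed as a *backup operator* applied to a candidate value function, and $V^*$ is precisely the fixed point of that operator. Below is a minimal sketch of one such backup; it is our own illustration, reusing the `T` and `R` arrays loaded at the top of this notebook and assuming a discount factor $\\gamma=0.9$:\n", "\n", "```python\n", "def bellman_backup(V, gamma=0.9):\n", "    \"\"\"Apply the right-hand side of the Bellman equation to a candidate value function V.\"\"\"\n", "    # For each state x: max over actions of expected reward plus discounted expected value.\n", "    return np.array([max(T[x, a] @ (R[x, a] + gamma * V) for a in range(4))\n", "                     for x in range(5)])\n", "```\n", "\n", "If `V` happens to be the optimal value function, `bellman_backup(V)` returns (numerically) the same values; for any other `V`, repeated backups move the estimate toward $V^*$, which is the idea behind value iteration later in this section."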
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{index} action values\n", "```\n", "## Action Values and the Optimal Policy\n", "\n", "Using Bellman's equation, it is straightforward to compute the optimal policy $\\pi^*$ from a given state $x$:\n", "\n", "$$\n", "\\pi^*(x) = \\arg\n", "\\max_a \\left\\{ \\bar{R}(x,a) + \\gamma \\sum_{x'} P(x'|x, a) V^*(x') \\right\\}.\n", "$$\n", "\n", "This computation is performed so often that it is convenient to introduce the so-called **$Q$-function**, which is the value of being in state $x$ and taking action $a$, for a given value function $V$:\n", "\n", "$$\n", "\\begin{aligned}\n", "Q(x,a; V) \\doteq \\bar{R}(x,a) + \\gamma \\sum_{x'} P(x'|x, a) V(x') \n", "\\end{aligned}\n", "$$\n", "\n", "Another name for Q-values is **action values**, to be contrasted with the state values, i.e., the value function $V(x)$. They allow us to write the optimal policy $\\pi^*(x)$ simply as picking, for any given state $x$, the action $a$ with the highest action value $Q(x,a; V^*)$ computed from the optimal value function $V^*$:\n", "\n", "$$\n", "\\pi^*(x) = \\arg \\max_a Q(x,a; V^*)\n", "$$\n", "\n", "We will use $Q$-values in many of the algorithms in this section, and an efficient way to compute a Q-value from a value function is given below:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "def Q_value(V, x, a, gamma=0.9):\n", " \"\"\"Calculate Q(x,a) from given value function\"\"\"\n", " return T[x,a] @ (R[x,a] + gamma * V)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A very efficient way to compute all Q-values for all state-action pairs at once, using `numpy`, is\n", "\n", "```python\n", "Q = np.sum(T * (R + gamma * V), axis=2)\n", "```\n", "\n", "which we will also use below. It yields a matrix of size $|X| \\times |A|$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise\n", "\n", "1. Try to understand the function `Q_value` above for calculating the Q-values. Use the notebook to investigate the calculation for specific values of $x$ and $a$.\n", "\n", "2. Similarly, try to understand the \"vectorized\" form above that yields the entire table of Q-values at once." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Policy Iteration\n", "\n", "> By iteratively improving an estimate of the optimal policy, we eventually find $\\pi^*$." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{index} policy Iteration\n", "```\n", "We will describe two methods for determining the optimal policy.\n", "The method we describe below, policy iteration, iteratively improves candidate policies, ultimately converging to the optimal policy $\\pi^*$.\n", "The second method, value iteration, iteratively improves an estimate of $V^*$, ultimately converging to the optimal value function.\n", "Both, however, need access to the MDP's transition probabilities and the reward function.\n", "\n", "**Policy Iteration** starts with an initial guess at the optimal policy, and then iteratively improves our guess until no further improvements are possible.\n", "In particular, policy iteration generates a sequence of policies\n", "$\\pi^0, \\pi^1, \\dots \\pi^n$, such that $\\pi^{i+1}$ is better than policy $\\pi^i$.\n", "This process ends when no further improvement is possible, which\n", "occurs when $\\pi^{i+1} = \\pi^i.$\n", "\n", "To improve the policy $\\pi^i$, we update the action chosen *for each state* by applying\n", "Bellman's equation using $\\pi^i$ in place of $\\pi^*$.\n", "This can be achieved with the following algorithm:\n", "\n", "Start with a random policy $\\pi^0$ and $i=0$, and repeat until convergence:\n", "1. Compute the value function $V^{\\pi^i}$\n", "2. Improve the policy for each state $x \\in {\\cal X}$ using the update rule: \n", "\n", "$$\n", "\\pi^{i+1}(x) \\leftarrow\\arg \\max_a Q(x,a; V^{\\pi^i})\n", "$$\n", "\n", "3. Increment $i$\n", "\n", "Notice that this algorithm has the side benefit of computing \n", "successively better approximations to the value function at each iteration.\n", "Because there are a finite number of actions that can be applied in each state, there are only finitely many ways to update\n", "a policy. Therefore, we expect this policy iteration algorithm to converge in finite time." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We already know how to do step (1) above, using the`calculate_value_function`.\n", "The second step of the algorithm is easily implemented with the following code:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "def update_policy(value_function):\n", " \"\"\"Update policy given a value function\"\"\"\n", " new_policy = [None for _ in range(5)]\n", " for x, room in enumerate(vacuum.rooms):\n", " Q_values = [Q_value(value_function, x, a) for a in range(4)]\n", " new_policy[x] = np.argmax(Q_values)\n", " return new_policy\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The whole policy iteration algorithm then simply iterates these until the policy no longer changes. 
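, { "cell_type": "markdown", "metadata": {}, "source": [ "For step (1), here is a minimal sketch of what such a policy-evaluation step could compute; this is our own illustrative stand-in, not necessarily how `vacuum.calculate_value_function` is implemented. For a fixed policy $\\pi$, the value function satisfies the *linear* system $V^\\pi = \\bar{R}_\\pi + \\gamma T_\\pi V^\\pi$, which we can solve directly:\n", "\n", "```python\n", "def evaluate_policy(pi, gamma=0.9):\n", "    \"\"\"Solve V = R_bar + gamma * T_pi @ V for a fixed policy pi (one action per state).\"\"\"\n", "    n = len(vacuum.rooms)\n", "    T_pi = np.array([T[x, pi[x]] for x in range(n)])                 # P(x'|x, pi(x))\n", "    R_bar = np.array([T[x, pi[x]] @ R[x, pi[x]] for x in range(n)])  # expected reward in state x\n", "    return np.linalg.solve(np.eye(n) - gamma * T_pi, R_bar)\n", "```\n", "\n", "In the code below we simply call `vacuum.calculate_value_function`, which plays this role." ] }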
, { "cell_type": "markdown", "metadata": {}, "source": [ "The whole policy iteration algorithm then simply iterates these two steps until the policy no longer changes. If no initial policy is given, we can\n", "start with a zero value function\n", "$V^{\\pi^0}(x) = 0$ for all $x$:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "def policy_iteration(pi=None, max_iterations=100):\n", " \"\"\"Do policy iteration, starting from policy `pi`.\"\"\"\n", " for _ in range(max_iterations):\n", " value_for_pi = vacuum.calculate_value_function(R, T, pi) if pi is not None else np.zeros((5,))\n", " new_policy = update_policy(value_for_pi)\n", " if new_policy == pi:\n", " return pi, value_for_pi\n", " pi = new_policy\n", " raise RuntimeError(f\"No stable policy found after {max_iterations} iterations\")\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "On the other hand, if we have a guess for the initial policy, we can initialize\n", "$\\pi^0$ accordingly.\n", "For example, we can start with a not-so-smart `always_right` policy:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "RIGHT = vacuum.action_space.index(\"R\")\n", "\n", "always_right = [RIGHT, RIGHT, RIGHT, RIGHT, RIGHT]\n" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['L', 'L', 'R', 'U', 'U']\n" ] } ], "source": [ "optimal_policy, optimal_value_function = policy_iteration(always_right)\n", "print([vacuum.action_space[a] for a in optimal_policy])\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Starting with the `always_right` policy, our policy iteration algorithm converges to an\n", "intuitively pleasing policy.\n", "In the living room and kitchen we go `left`, in the office we go `right`, and in the hallway and dining room we go `up`.\n", "This is significantly different from the `always_right` policy (which might be better named `almost_always_wrong`).\n", "In fact, it is exactly the `reasonable_policy` that we created in Section 3.5.\n", "We already knew that it should be pretty good at getting to the living room as fast as possible. In fact, it is optimal!\n",
"\n", "We also print out the optimal value function below, which shows that if we are close to the living room the value function is very high, but it is a bit lower in the office and the dining room:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " Living Room : 100.00\n", " Kitchen : 97.56\n", " Office : 85.66\n", " Hallway : 97.56\n", " Dining Room : 85.66\n" ] } ], "source": [ "for i,room in enumerate(vacuum.rooms):\n", " print(f\" {room:12}: {optimal_value_function[i]:.2f}\")\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The optimal policy is also obtained when we start without a policy, starting with a zero value function:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['L', 'L', 'R', 'U', 'U']\n" ] } ], "source": [ "optimal_policy, _ = policy_iteration()\n", "print([vacuum.action_space[a] for a in optimal_policy])\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{index} value iteration\n", "```\n", "## Value Iteration\n", "\n", "> Dynamic programming can be used to obtain the optimal value function.\n", "\n", "Let us restate Bellman's equation, which must hold for each state $x$:\n", "\n", "$$\n", "V^*(x) = \\max_a \\left\\{ \\bar{R}(x,a) + \\gamma \\sum_{x'} P(x'|x, a) V^*(x') \\right\\}.\n", "$$\n", "\n", "With $n$ states we have $n$ such equations, so it seems like we should be able to solve for the $n$ unknown values $V^*(x)$.\n", "Sadly, they are not *linear* equations, as the maximization operation is not linear. Hence, unlike the case when the policy is fixed, we cannot just solve a system of linear equations to recover $V^*$.\n", "\n", "**Value iteration** approximates $V^*$ by constructing a sequence of estimates,\n", "$V^0, V^1, \\dots , V^n$ that converges to $V^*$.\n", "Starting with an initial guess, $V^0$, at each iteration we update\n", "our approximation of the value function for each state by the update rule:\n", "\n", "$$\n", "V^{i+1}(x) \\leftarrow \\max_a \\left\\{ \\bar{R}(x,a) + \\gamma \\sum_{x'} P(x'|x, a) V^i(x') \\right\\} \n", "$$\n", "\n", "Notice that the right-hand side includes two terms:\n", "the expected reward (which we can compute exactly), and a term in $V^i$ (our current best guess at the value function).\n", "Value iteration operates by iteratively using our *current best guess* $V^i$ along with the *known* expected reward to update the approximation.\n", "Unlike policy iteration, we do not expect value iteration to converge to the exact result in finite time.\n", "Therefore, we cannot use $V^{i+1} = V^i$ as our termination condition.\n", "Instead, we often use a condition such as $|V^{i+1} - V^i| < \\epsilon$, for some small value of $\\epsilon$\n", "as the termination condition.\n", "\n", "Finally, note that we can once again use the Q-values to obtain a concise description for the value update:\n", "\n", "$$\n", "V^{i+1}(x) \\leftarrow \\max_a Q(x, a; V^i).\n", "$$\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In code, this is actually easier than policy iteration, using the concise vectorized Q-table update we discussed above:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[100. 98. 90. 98. 90.]\n", "[100. 97.64 86.76 97.64 86.76]\n", "[100. 
97.58 85.92 97.58 85.92]\n", "[100. 97.56 85.72 97.56 85.72]\n", "[100. 97.56 85.68 97.56 85.68]\n", "[100. 97.56 85.67 97.56 85.67]\n", "[100. 97.56 85.66 97.56 85.66]\n", "[100. 97.56 85.66 97.56 85.66]\n", "[100. 97.56 85.66 97.56 85.66]\n", "[100. 97.56 85.66 97.56 85.66]\n" ] } ], "source": [ "V_k = np.full((5,), 100)\n", "for k in range(10):\n", " Q_k = np.sum(T * (R + 0.9 * V_k), axis=2) # 5 x 4\n", " V_k = np.max(Q_k, axis=1) # max over actions\n", " print(np.round(V_k,2))\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Compare with optimal value function:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[100. 97.56 85.66 97.56 85.66]\n" ] } ], "source": [ "print(np.round(optimal_value_function, 2))\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And we can easily *extract* the optimal policy:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "policy = [0 0 1 2 0]\n", "['L', 'L', 'R', 'U', 'L']\n" ] } ], "source": [ "Q_k = np.sum(T * (R + 0.9 * V_k), axis=2)\n", "pi_k = np.argmax(Q_k, axis=1)\n", "print(f\"policy = {pi_k}\")\n", "print([vacuum.action_space[a] for a in pi_k])\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise\n", "\n", "1. Above we initialized the value function at 100 everywhere. Examine the effect on convergence of initializing it differently.\n", "\n", "2. Implement a convergence criterion that stops the iterations after convergence." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Model-based Reinforcement Learning\n", "\n", "> Just explore, then solve the MDP.\n", "\n", "We can attempt to *learn* the MDP and then solve it. Both policy and value iteration require access to the transition probabilities $T$ and the reward function $R$. However, when faced with a new environment, we might not know how our robot will behave. And likewise, we might not have access to the reward function: how can we know in advance where we will find pots of gold?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One way to learn the MDP is to randomly explore. Let's adapt the `policy_rollout` code from the previous section to generate a whole lot of *experiences* of the form $(x,a,x',r)$." 
] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "def explore_randomly(x1, horizon=N):\n", " \"\"\"Roll out states given a random policy, for given horizon.\"\"\"\n", " data = []\n", " x = x1\n", " for _ in range(1, horizon):\n", " a = np.random.choice(4)\n", " next_state_distribution = gtsam.DiscreteDistribution(X[1], T[x, a])\n", " x_prime = next_state_distribution.sample()\n", " data.append((x, a, x_prime, R[x, a, x_prime]))\n", " x = x_prime\n", " return data\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us use it to create 499 experiences and show the first 5:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[(0, 3, 0, 10.0), (0, 2, 0, 10.0), (0, 3, 3, 0.0), (3, 3, 3, 0.0), (3, 2, 3, 0.0)]\n" ] } ], "source": [ "data = explore_randomly(vacuum.rooms.index(\"Living Room\"), horizon=500)\n", "print(data[:5])\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can *estimate* the transition probabilities $T$ and reward table $R$ from the data, and then we can use the algorithms from before to calculate the value function and/or optimal policy.\n", "\n", "The math is just a variant of what we saw in the learning section of the last chapter. The rewards are the easiest to estimate:\n", "\n", "$$\n", "R(x,a,x') \\approx \\frac{1}{N(x,a,x')} \\sum_{x,a,x'} r\n", "$$\n", "\n", "where $N(x,a,x')$ counts how many times an experience $(x,a,x')$ was recorded. The transition probabilities are a bit trickier:\n", "\n", "$$\n", "P(x'|x,a) \\approx \\frac{N(x,a,x)}{N(x,a)}\n", "$$\n", "\n", "where $N(x,a)=\\sum_{x'} N(x,a,x')$ is the number of times we took action $a$ in a state $x$. \n", "\n", "The code associated with that is fairly simple, modulo some numpy trickery to deal with division by zero and *broadcasting* the division:" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": [ "R_sum = np.zeros((5, 4, 5), float)\n", "T_count = np.zeros((5, 4, 5), float)\n", "count = np.zeros((5, 4), int)\n", "for x, a, x_prime, r in data:\n", " R_sum[x, a, x_prime] += r\n", " T_count[x, a, x_prime] += 1\n", "R_estimate = np.divide(R_sum, T_count, where=T_count!=0)\n", "xa_count = np.sum(T_count, axis=2)\n", "T_estimate = T_count/np.expand_dims(xa_count, axis=-1)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Above `T_count` corresponds to $N(x,a,x')$, and the variable `xa_count` is $N(x,a)$. It is good to check the latter to see whether our experiences were more or less representative, i.e., visited all state-action pairs:" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "array([[30., 22., 23., 30.],\n", " [23., 24., 26., 25.],\n", " [13., 25., 19., 19.],\n", " [23., 32., 32., 33.],\n", " [28., 25., 30., 17.]])" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "xa_count\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This seems pretty good. If not, we can always gather more data, which we encourage you to experiment with.\n", "\n", "We can compare the ground truth transition probabilities $T$ with the estimated transition probabilities $\\hat{T}$, e.g., for the living room:" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "ground truth:\n", "[[1. 0. 0. 0. 0. ]\n", " [0.2 0.8 0. 0. 0. 
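, { "cell_type": "markdown", "metadata": {}, "source": [ "To close the loop, we can feed these estimates into the planning algorithms from earlier in this section. The following sketch is our own illustration: it simply reuses the vectorized value iteration update with $\\gamma=0.9$, but on $\\hat{T}$ and $\\hat{R}$ instead of the true model, and then extracts the greedy policy:\n", "\n", "```python\n", "# Value iteration on the *estimated* model (assumes every (x, a) pair was visited; see xa_count above).\n", "V_hat = np.zeros((5,))\n", "for _ in range(100):\n", "    Q_hat = np.sum(T_estimate * (R_estimate + 0.9 * V_hat), axis=2)\n", "    V_hat = np.max(Q_hat, axis=1)\n", "estimated_policy = np.argmax(Q_hat, axis=1)\n", "print([vacuum.action_space[a] for a in estimated_policy])\n", "```\n", "\n", "With enough exploration data this should recover the same policy as planning with the true MDP, but you should verify that for your own sampled `data`." ] }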
]\n", " [1. 0. 0. 0. 0. ]\n", " [0.2 0. 0. 0.8 0. ]]\n", "estimate:\n", "[[1. 0. 0. 0. 0. ]\n", " [0.32 0.68 0. 0. 0. ]\n", " [1. 0. 0. 0. 0. ]\n", " [0.2 0. 0. 0.8 0. ]]\n" ] } ], "source": [ "print(f\"ground truth:\\n{T[0]}\")\n", "print(f\"estimate:\\n{np.round(T_estimate[0],2)}\")\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Not bad. And for the rewards:" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "ground truth:\n", "[[10. 0. 0. 0. 0.]\n", " [10. 0. 0. 0. 0.]\n", " [10. 0. 0. 0. 0.]\n", " [10. 0. 0. 0. 0.]]\n", "estimate:\n", "[[10. 0. 0. 0. 0.]\n", " [10. 0. 0. 0. 0.]\n", " [10. 0. 0. 0. 0.]\n", " [10. 0. 0. 0. 0.]]\n" ] } ], "source": [ "print(f\"ground truth:\\n{R[0]}\")\n", "print(f\"estimate:\\n{np.round(R_estimate[0],2)}\")\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In summary, learning in this context can simply be done by gathering lots of experiences, and estimating models for how the world behaves. After that, you can use either policy or value iteration to recover the optimal policy." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Model-free Reinforcement Learning\n", "\n", "> All you need is Q." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A different, model-free approach is **Q-learning**. In the above we tried to *model* the world by trying estimate the (large) transition and reward tables. However, remember from the previous section that there is a much smaller table of Q-values $Q(x,a)$ that also allow us to act optimally. This is because we can calculate the optimal policy $\\pi^*(x)$ from the optimal Q-values $Q^*(x,a) \\doteq Q(x, a; V^*)$:\n", "\n", "$$\n", "\\pi^*(x) = \\arg \\max_a Q^*(x,a).\n", "$$\n", "\n", "This begs the question whether we can simply learn the Q-values instead, which might be more *sample-efficient*. In other words, we would get more accurate values with less training data, as we have less quantities to estimate.\n", "\n", "To do this, recall that the Bellman equation can be written as \n", "\n", "$$\n", "V^*(x) = \\max_a Q^*(x,a)\n", "$$\n", "\n", "allowing us to rewrite the Q-values from above as \n", "\n", "$$\n", "Q^*(x,a) = \\sum_{x'} P(x'|x, a) \\{ R(x,a,x') + \\gamma \\max_{a'} Q^*(x',a') \\}\n", "$$\n", "\n", "This gives us a way to estimate the Q-values, as we can approximate the above using a Monte Carlo estimate, summing over our experiences:\n", "\n", "$$\n", "Q^*(x,a) \\approx \\frac{1}{N(x,a)} \\sum_{x,a,x'} R(x,a,x') + \\gamma \\max_{a'} Q^*(x',a')\n", "$$\n", "\n", "Unfortunately the estimate above *depends* on the optimal Q-values. 
"\n", "$$\n", "\\hat{Q}(x,a) \\leftarrow (1-\\alpha) \\hat{Q}(x,a) + \\alpha \\{R(x,a,x') + \\gamma \\max_{a'} \\hat{Q}(x',a') \\}\n", "$$\n", "\n", "In code:" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[86.40766254 77.92098421 84.74734752 78.34155364]\n", " [77.06276123 72.92342935 72.46003579 66.34761267]\n", " [46.22374231 72.00930878 47.51226021 54.96612765]\n", " [58.46553085 67.75606372 85.08905827 73.9787325 ]\n", " [74.71184623 63.98503874 72.80753072 66.4324432 ]]\n" ] } ], "source": [ "alpha = 0.5 # learning rate\n", "gamma = 0.9 # discount factor\n", "Q = np.zeros((5, 4), float)\n", "for x, a, x_prime, r in data:\n", " old_Q_estimate = Q[x,a]\n", " new_Q_estimate = r + gamma * np.max(Q[x_prime])\n", " Q[x, a] = (1.0-alpha) * old_Q_estimate + alpha * new_Q_estimate\n", "print(Q)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "These values are not yet quite accurate, as you can ascertain yourself by changing the number of experiences above, but note that an optimal policy can be achieved before the Q-values even converge." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{index} greedy action selection\n", "```\n", "### Exploration vs Exploitation\n", "\n", "The above assumed that we gather data by acting *randomly*, but that might be very inefficient. Indeed, we might be spending a lot of time - literally - bumping our heads into the walls. A better idea might be to act randomly at first (exploration), but as time progresses, spend more and more time refining the optimal policy by trying to act optimally (exploitation).\n", "\n", "Greedy action selection can lead to bad learning outcomes. We will use Q-learning as an example, but similar problems exist for other reinforcement learning methods. During Q-learning, upon reaching a state $x$, the **greedy action selection** method is to simply pick the action $a^*$ according to the *current* estimate of the Q-values:\n", "\n", "$$\n", "a^* = \\arg \\max_a \\hat{Q}(x,a).\n", "$$\n", "\n", "Unfortunately, this often leads to Q-learning getting stuck in local minima of the policy search space: state-action pairs that might be more promising are never visited, because their true (higher) Q-values have not yet been estimated, so they always get passed over.\n", "\n", "**Epsilon-greedy** (or $\\epsilon$-greedy) methods balance exploration with exploitation while learning: instead of always choosing the best possible action according to the current estimate, we choose an action at random a fraction of the time, say with probability $\\epsilon$. Typical values for $\\epsilon$ are 0.01 or even 0.1, i.e., 10% of the time we choose to act randomly. Schemes also exist to decrease $\\epsilon$ over time.\n", "\n", "### Exercise\n", "\n", "Think about how to apply $\\epsilon$-greedy methods in the model-based reinforcement learning method we discussed above."
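 ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To make the exploration-exploitation trade-off concrete, here is a minimal sketch of $\\epsilon$-greedy action selection itself; the helper name `epsilon_greedy` is our own, and it assumes a table of current Q-value estimates like the `Q` array computed above:\n", "\n", "```python\n", "def epsilon_greedy(Q, x, epsilon=0.1):\n", "    \"\"\"Pick a random action with probability epsilon, else the greedy action for state x.\"\"\"\n", "    if np.random.rand() < epsilon:\n", "        return np.random.choice(Q.shape[1])  # explore\n", "    return np.argmax(Q[x])  # exploit\n", "```\n", "\n", "During learning, such a selection rule would replace the purely random `np.random.choice(4)` in `explore_randomly`, gradually biasing exploration toward actions that currently look promising."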
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Summary\n", "\n", "We discussed\n", "\n", "- The optimal policy and value function, governed by the Bellman equation.\n", "- Two algorithms to compute those: policy iteration and value iteration.\n", "- A model-based method to learn from experience.\n", "- A model-free method, Q-learning, that updates the action values.\n", "- Balancing exploitation and exploration.\n", "\n", "The field of reinforcement learning is much richer, and we will return to it several times throughout this book.\n", "\n" ] } ], "metadata": { "colab": { "collapsed_sections": [], "include_colab_link": true, "name": "S36_vacuum_RL.ipynb", "provenance": [] }, "interpreter": { "hash": "c6e4e9f98eb68ad3b7c296f83d20e6de614cb42e90992a65aa266555a3137d0d" }, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.19" }, "latex_metadata": { "affiliation": "Georgia Institute of Technology", "author": "Frank Dellaert and Seth Hutchinson", "title": "Introduction to Robotics" } }, "nbformat": 4, "nbformat_minor": 2 }