Markov Decision Processes and Dynamic Programming


Markov Decision Processes and Dynamic Programming. A. Lazaric (SequeL Team, INRIA Lille). ENS Cachan, Master 2 MVA, MVA-RL Course, September 29th, 2015.

Outline: How to model an RL problem, the Markov Decision Process.
- Mathematical Tools
- The Model
- Value Functions

Mathematical Tools: Probability Theory
Definition (Conditional probability). Given two events A and B with P(B) > 0, the conditional probability of A given B is
$P(A \mid B) = \frac{P(A \cap B)}{P(B)}$.
Similarly, if X and Y are non-degenerate and jointly continuous random variables with density $f_{X,Y}(x,y)$, and B has positive measure, then the conditional probability is
$P(X \in A \mid Y \in B) = \frac{\int_{y \in B}\int_{x \in A} f_{X,Y}(x,y)\,dx\,dy}{\int_{y \in B}\int_{x} f_{X,Y}(x,y)\,dx\,dy}$.

Definition (Law of total expectation). Given a function f and two random variables X, Y, we have that
$E_{X,Y}[f(X,Y)] = E_X\big[E_Y[f(x,Y) \mid X = x]\big]$.

Mathematical Tools: Norms and Contractions
Definition. Given a vector space $V \subseteq \mathbb{R}^d$, a function $f : V \to \mathbb{R}^+_0$ is a norm if and only if
- If $f(v) = 0$ for some $v \in V$, then $v = 0$.
- For any $\lambda \in \mathbb{R}$, $v \in V$, $f(\lambda v) = |\lambda|\, f(v)$.
- Triangle inequality: for any $v, u \in V$, $f(v + u) \le f(v) + f(u)$.

Examples:
$L_p$-norm: $\|v\|_p = \big(\sum_{i=1}^d |v_i|^p\big)^{1/p}$.
$L_\infty$-norm: $\|v\|_\infty = \max_{1 \le i \le d} |v_i|$.
$L_{\mu,p}$-norm: $\|v\|_{\mu,p} = \big(\sum_{i=1}^d \frac{|v_i|^p}{\mu_i}\big)^{1/p}$.
$L_{\mu,\infty}$-norm: $\|v\|_{\mu,\infty} = \max_{1 \le i \le d} \frac{|v_i|}{\mu_i}$.
$L_{2,P}$-matrix norm (P a positive definite matrix): $\|v\|_P^2 = v^\top P v$.

Mathematical Tools: Norms and Contractions
Definition. A sequence of vectors $v_n \in V$ (with $n \in \mathbb{N}$) converges in norm to $v \in V$ if $\lim_{n \to \infty} \|v_n - v\| = 0$.
Definition. A sequence of vectors $v_n \in V$ (with $n \in \mathbb{N}$) is a Cauchy sequence if $\lim_{n \to \infty} \sup_{m \ge n} \|v_n - v_m\| = 0$.
Definition. A vector space V equipped with a norm $\|\cdot\|$ is complete if every Cauchy sequence in V is convergent in the norm of the space.

Definition. An operator $T : V \to V$ is L-Lipschitz if for any $v, u \in V$, $\|Tv - Tu\| \le L\,\|u - v\|$. If $L \le 1$ then T is a non-expansion, while if $L < 1$ then T is an L-contraction. If T is Lipschitz then it is also continuous, that is, if $v_n \to v$ then $T v_n \to T v$.
Definition. A vector $v \in V$ is a fixed point of the operator $T : V \to V$ if $T v = v$.

Proposition (Banach Fixed Point Theorem). Let V be a complete vector space equipped with the norm $\|\cdot\|$ and let $T : V \to V$ be a γ-contraction mapping. Then
1. T admits a unique fixed point $v^*$.
2. For any $v_0 \in V$, if $v_{n+1} = T v_n$ then $v_n \to v^*$ with a geometric convergence rate: $\|v_n - v^*\| \le \gamma^n \|v_0 - v^*\|$.
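A minimal numerical sketch of the theorem (an illustration, not from the slides): iterating an affine γ-contraction $Tv = \gamma A v + b$ with a row-stochastic matrix A, the error to the fixed point decays geometrically. The matrix A and vector b below are arbitrary illustrative choices.

```python
import numpy as np

# Illustrative gamma-contraction on R^3 (assumption: any row-stochastic A works,
# since ||gamma * A (v - u)||_inf <= gamma * ||v - u||_inf).
gamma = 0.9
A = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.1, 0.9]])
b = np.array([1.0, 0.0, 2.0])
T = lambda v: gamma * A @ v + b

v_star = np.linalg.solve(np.eye(3) - gamma * A, b)  # exact fixed point of T
v = np.zeros(3)
for n in range(30):
    v = T(v)   # geometric decay: ||v_n - v*||_inf <= gamma^n ||v_0 - v*||_inf
print(np.max(np.abs(v - v_star)))  # of order gamma^30 * ||v_0 - v*||_inf
```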

Mathematical Tools: Linear Algebra
Given a square matrix $A \in \mathbb{R}^{N \times N}$:
Eigenvalues of a matrix (1). $v \in \mathbb{R}^N$ and $\lambda \in \mathbb{R}$ are an eigenvector and eigenvalue of A if $Av = \lambda v$.
Eigenvalues of a matrix (2). If A has eigenvalues $\{\lambda_i\}_{i=1}^N$, then $B = (I - \alpha A)$ has eigenvalues $\{\mu_i\}$ with $\mu_i = 1 - \alpha \lambda_i$.
Matrix inversion. A can be inverted if and only if $\forall i,\ \lambda_i \ne 0$.

Stochastic matrix. A square matrix $P \in \mathbb{R}^{N \times N}$ is a stochastic matrix if
1. all entries are non-negative: $\forall i, j,\ [P]_{i,j} \ge 0$,
2. all the rows sum to one: $\forall i,\ \sum_{j=1}^N [P]_{i,j} = 1$.
All the eigenvalues of a stochastic matrix are bounded by 1, i.e., $\forall i,\ |\lambda_i| \le 1$.

How to model an RL problem: the Markov Decision Process (the Model)

The Reinforcement Learning Model
[Diagrams in the original slides: the agent acts on the environment (action/actuation) and perceives its state (state/perception); a critic additionally provides a reward signal to the learning agent.]

The Reinforcement Learning Model
The environment
- Controllability: full (e.g., chess) or partial (e.g., portfolio optimization)
- Uncertainty: deterministic (e.g., chess) or stochastic (e.g., backgammon)
- Reactivity: adversarial (e.g., chess) or fixed (e.g., Tetris)
- Observability: full (e.g., chess) or partial (e.g., robotics)
- Availability: known (e.g., chess) or unknown (e.g., robotics)
The critic
- Sparse (e.g., win or lose) vs. informative (e.g., closer or further) reward
- Preference reward
- Frequent or sporadic
- Known or unknown
The agent
- Open-loop control
- Closed-loop control (i.e., adaptive)
- Non-stationary closed-loop control (i.e., learning)

Markov Chains
Definition (Markov chain). Let the state space X be a bounded compact subset of the Euclidean space; the discrete-time dynamic system $(x_t)_{t \in \mathbb{N}} \in X$ is a Markov chain if it satisfies the Markov property
$P(x_{t+1} = x \mid x_t, x_{t-1}, \dots, x_0) = P(x_{t+1} = x \mid x_t)$.
Given an initial state $x_0 \in X$, a Markov chain is defined by the transition probability p with $p(y \mid x) = P(x_{t+1} = y \mid x_t = x)$.

Markov Decision Process
Definition (Markov decision process [1, 4, 3, 5, 2]). A Markov decision process is defined as a tuple M = (X, A, p, r) where
- X is the state space,
- A is the action space,
- $p(y \mid x, a)$ is the transition probability, with $p(y \mid x, a) = P(x_{t+1} = y \mid x_t = x, a_t = a)$,
- $r(x, a, y)$ is the reward of transition (x, a, y).

Markov Decision Process: the Assumptions
Time assumption: time is discrete, $t \to t+1$.
Possible relaxations: identify the proper time granularity; most of the MDP literature extends to continuous time.

Markov assumption: the current state x and action a are a sufficient statistic for the next state y:
$p(y \mid x, a) = P(x_{t+1} = y \mid x_t = x, a_t = a)$.
Possible relaxations: define a new state $h_t = (x_t, x_{t-1}, x_{t-2}, \dots)$; move to partially observable MDPs (PO-MDPs); move to the predictive state representation (PSR) model.

Reward assumption: the reward is uniquely defined by a transition (or part of it): $r(x, a, y)$.
Possible relaxations: distinguish between the global goal and the reward function; move to inverse reinforcement learning (IRL) to induce the reward function from desired behaviors.

Stationarity assumption: the dynamics and reward do not change over time, i.e., $p(y \mid x, a) = P(x_{t+1} = y \mid x_t = x, a_t = a)$ and $r(x, a, y)$ are fixed.
Possible relaxations: identify and remove the non-stationary components (e.g., cyclo-stationary dynamics); identify the time-scale of the changes.

Question: is the MDP formalism powerful enough? Let's try!

Example: the Retail Store Management Problem
Description. At each month t, a store contains $x_t$ items of a specific good and the demand for that good is $D_t$. At the end of each month the manager of the store can order $a_t$ more items from his supplier. Furthermore we know that:
- The cost of maintaining an inventory of x items is h(x).
- The cost of ordering a items is C(a).
- The income for selling q items is f(q).
- If the demand D is bigger than the available inventory x, customers that cannot be served leave.
- The value of the remaining inventory at the end of the year is g(x).
- Constraint: the store has a maximum capacity M.

Example: the Retail Store Management Problem
State space: $x \in X = \{0, 1, \dots, M\}$.
Action space: it is not possible to order more items than the capacity of the store, so the action space depends on the current state. Formally, at state x, $a \in A(x) = \{0, 1, \dots, M - x\}$.
Dynamics: $x_{t+1} = [x_t + a_t - D_t]^+$.
Problem: the dynamics should be Markov and stationary! This holds if the demand $D_t$ is stochastic and time-independent; formally, $D_t \overset{i.i.d.}{\sim} \mathcal{D}$.
Reward: $r_t = -C(a_t) - h(x_t + a_t) + f([x_t + a_t - x_{t+1}]^+)$.
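A minimal sketch of how this MDP could be written down explicitly. The demand distribution and the functions h, C, f below are illustrative assumptions, not part of the slides.

```python
import numpy as np
from scipy.stats import poisson

M = 20                      # store capacity (illustrative)
demand_mean = 4.0           # assumed Poisson demand distribution D
h = lambda x: 0.1 * x       # holding cost (assumption)
C = lambda a: 1.0 * a       # ordering cost (assumption)
f = lambda q: 2.0 * q       # income for selling q items (assumption)

def transition_and_reward(x, a):
    """Return the distribution p(.|x,a) over next inventories and the expected reward r(x,a)."""
    assert 0 <= a <= M - x
    stock = x + a
    p = np.zeros(M + 1)
    r = -C(a) - h(stock)
    for d in range(stock + 1):
        # P(D = d); all demand >= stock is lumped into "everything is sold"
        pd = poisson.pmf(d, demand_mean) if d < stock else 1 - poisson.cdf(stock - 1, demand_mean)
        y = stock - d                     # next state: leftover inventory
        p[y] += pd
        r += pd * f(stock - y)            # income for the items actually sold
    return p, r
```

Stacking transition_and_reward over all admissible pairs (x, a) yields arrays P[x, a, y] and R[x, a] of the kind used by the dynamic programming algorithms later in the lecture.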

Exercise: the Parking Problem
A driver wants to park his car as close as possible to the restaurant.
[Figure: a line of parking places 1, 2, ..., T leading to the restaurant, each available with some probability p(t); the reward depends on the place and is 0 if the driver passes the restaurant without parking.]
The driver cannot see whether a place is available unless he is in front of it. There are P places. At each place i the driver can either move to the next place or park (if the place is available). The closer the parking place is to the restaurant, the higher the satisfaction. If the driver doesn't park anywhere, then he/she leaves the restaurant and has to find another one.

Policy
Definition (Policy). A decision rule $\pi_t$ can be
- Deterministic: $\pi_t : X \to A$,
- Stochastic: $\pi_t : X \to \Delta(A)$.
A policy (strategy, plan) can be
- Non-stationary: $\pi = (\pi_0, \pi_1, \pi_2, \dots)$,
- Stationary (Markovian): $\pi = (\pi, \pi, \pi, \dots)$.
Remark: an MDP M together with a stationary policy π defines a Markov chain with state space X and transition probability $p(y \mid x) = p(y \mid x, \pi(x))$.

Example: the Retail Store Management Problem
Stationary policy 1: $\pi(x) = M - x$ if $x < M/4$, and 0 otherwise.
Stationary policy 2: $\pi(x) = \max\{(M - x)/2 - x;\ 0\}$.
Non-stationary policy: $\pi_t(x) = M - x$ if $t < 6$, and $(M - x)/5$ otherwise.
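For illustration, the three decision rules above can be written directly as functions of the state (and time). M is the same hypothetical capacity used in the earlier sketch, and the integer divisions are an implementation choice to keep actions inside A(x).

```python
M = 20  # store capacity (same illustrative value as before)

def policy_1(x):
    # refill the store completely, but only when the stock is low
    return M - x if x < M / 4 else 0

def policy_2(x):
    # order about half of the missing stock, corrected by the current stock
    return max((M - x) // 2 - x, 0)

def policy_3(x, t):
    # non-stationary: aggressive refill in the first months, mild afterwards
    return M - x if t < 6 else (M - x) // 5
```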

How to model an RL problem: the Markov Decision Process (Value Functions)

Question: how do we evaluate a policy and compare two policies? With the value function!

Optimization over Time Horizon
- Finite time horizon T: deadline at time T, the agent focuses on the sum of the rewards up to T.
- Infinite time horizon with discount: the problem never terminates, but rewards that are closer in time receive a higher importance.
- Infinite time horizon with terminal state: the problem never terminates, but the agent will eventually reach a termination state.
- Infinite time horizon with average reward: the problem never terminates, but the agent only focuses on the (expected) average of the rewards.

State Value Function
Finite time horizon T: deadline at time T, the agent focuses on the sum of the rewards up to T:
$V^\pi(t, x) = E\Big[\sum_{s=t}^{T-1} r(x_s, \pi_s(x_s)) + R(x_T) \,\Big|\, x_t = x; \pi\Big]$,
where R is a value function for the final state.
Used when: there is an intrinsic deadline to meet.

State Value Function
Infinite time horizon with discount: the problem never terminates, but rewards that are closer in time receive a higher importance:
$V^\pi(x) = E\Big[\sum_{t=0}^{\infty} \gamma^t r(x_t, \pi(x_t)) \,\Big|\, x_0 = x; \pi\Big]$,
with discount factor $0 \le \gamma < 1$: a small γ favors short-term rewards, a large γ favors long-term rewards; for any $\gamma \in [0,1)$ the series always converges (for bounded rewards).
Used when: there is uncertainty about the deadline and/or an intrinsic definition of discount.

State Value Function
Infinite time horizon with terminal state: the problem never terminates, but the agent will eventually reach a termination state:
$V^\pi(x) = E\Big[\sum_{t=0}^{T} r(x_t, \pi(x_t)) \,\Big|\, x_0 = x; \pi\Big]$,
where T is the first (random) time when the termination state is reached.
Used when: there is a known goal or a failure condition.

Infinite time horizon with average reward: the problem never terminates, but the agent only focuses on the (expected) average of the rewards:
$V^\pi(x) = \lim_{T \to \infty} E\Big[\frac{1}{T}\sum_{t=0}^{T-1} r(x_t, \pi(x_t)) \,\Big|\, x_0 = x; \pi\Big]$.
Used when: the system should be constantly controlled over time.

State Value Function
Technical note: the expectations refer to all possible stochastic trajectories. A (possibly non-stationary) policy π applied from state $x_0$ returns $(x_0, r_0, x_1, r_1, x_2, r_2, \dots)$, where $r_t = r(x_t, \pi_t(x_t))$ and $x_t \sim p(\cdot \mid x_{t-1}, a_{t-1} = \pi_{t-1}(x_{t-1}))$ are random realizations. The value function (discounted infinite horizon) is
$V^\pi(x) = E_{(x_1, x_2, \dots)}\Big[\sum_{t=0}^{\infty} \gamma^t r(x_t, \pi(x_t)) \,\Big|\, x_0 = x; \pi\Big]$.

Example: the Retail Store Management Problem: Simulation
[Figure in the original slides: simulated trajectories of the retail store problem under example policies.]
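One possible way to reproduce such a simulation is a Monte-Carlo estimate of the (truncated) discounted value of a policy. This sketch reuses the hypothetical transition_and_reward and policy_1 functions introduced above; the discount factor, horizon and number of trajectories are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.95          # discount factor (illustrative)
n_steps = 200         # truncation horizon for the discounted sum
n_trajectories = 1000

def rollout(x0, policy):
    """Simulate one trajectory and return its truncated discounted return."""
    x, ret = x0, 0.0
    for t in range(n_steps):
        a = policy(x)
        p, r = transition_and_reward(x, a)   # expected immediate reward keeps the estimator unbiased
        ret += (gamma ** t) * r
        x = rng.choice(len(p), p=p)          # sample the next inventory level
    return ret

returns = [rollout(x0=0, policy=policy_1) for _ in range(n_trajectories)]
print("estimated V^pi(0) ~", np.mean(returns), "+/-", np.std(returns) / np.sqrt(n_trajectories))
```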

Optimal Value Function
Definition (Optimal policy and optimal value function). The solution to an MDP is an optimal policy $\pi^*$ satisfying
$\pi^* \in \arg\max_{\pi \in \Pi} V^\pi$
in all the states $x \in X$, where Π is some policy set of interest. The corresponding value function is the optimal value function $V^* = V^{\pi^*}$.

Remarks
1. $\pi^* \in \arg\max(\cdot)$ and not $\pi^* = \arg\max(\cdot)$, because an MDP may admit more than one optimal policy.
2. $\pi^*$ achieves the largest possible value function in every state.
3. There always exists an optimal deterministic policy.
4. Except for problems with a finite horizon, there always exists an optimal stationary policy.

Summary
1. The MDP is a powerful model of the interaction between an agent and a stochastic environment.
2. The value function defines the objective to optimize.

Limitations
1. All the previous value functions define an objective in expectation.
2. Other utility functions may be used.
3. Risk measures could be integrated, but they may induce ill-behaved problems and make the solution more difficult.

How to solve exactly an MDP: Dynamic Programming
- Bellman Equations
- Value Iteration
- Policy Iteration

Notice: from now on we mostly work in the discounted infinite-horizon setting. Most results smoothly extend to the other settings.

The Optimization Problem
$\max_\pi V^\pi(x_0) = \max_\pi E\big[r(x_0, \pi(x_0)) + \gamma r(x_1, \pi(x_1)) + \gamma^2 r(x_2, \pi(x_2)) + \dots\big]$
This is very challenging: a brute-force search might need to try as many as $|A|^N$ deterministic policies! We need to leverage the structure of the MDP to simplify the optimization problem.

Dynamic Programming: Bellman Equations

The Bellman Equation
Proposition. For any stationary policy $\pi = (\pi, \pi, \dots)$, the state value function at a state $x \in X$ satisfies the Bellman equation:
$V^\pi(x) = r(x, \pi(x)) + \gamma \sum_y p(y \mid x, \pi(x)) V^\pi(y)$.

Proof. For any policy π,
$V^\pi(x) = E\big[\sum_{t \ge 0} \gamma^t r(x_t, \pi(x_t)) \mid x_0 = x; \pi\big]$
$= r(x, \pi(x)) + E\big[\sum_{t \ge 1} \gamma^t r(x_t, \pi(x_t)) \mid x_0 = x; \pi\big]$
$= r(x, \pi(x)) + \gamma \sum_y P(x_1 = y \mid x_0 = x; \pi(x_0))\, E\big[\sum_{t \ge 1} \gamma^{t-1} r(x_t, \pi(x_t)) \mid x_1 = y; \pi\big]$
$= r(x, \pi(x)) + \gamma \sum_y p(y \mid x, \pi(x)) V^\pi(y)$.

Example: the student dilemma
[Figure: a seven-state graph. In states 1 to 4 the student chooses between Rest and Work; transitions are stochastic (with probabilities such as 0.5, 0.4, 0.3, 0.6, 0.7, 0.9, 0.1, 1) and carry rewards 0, 1, -1 and -10; states 5, 6, 7 are terminal with rewards -10, 100 and -1000.]

Model: all the transitions are Markov; states $x_5$, $x_6$, $x_7$ are terminal.
Setting: infinite horizon with terminal states.
Objective: find the policy that maximizes the expected sum of rewards before reaching a terminal state.
Notice: this is not a discounted infinite-horizon setting, but the Bellman equations hold unchanged.

[Figure: the same graph annotated with the values of a fixed policy: $V_1 = 88.3$, $V_2 = 88.3$, $V_3 = 86.9$, $V_4 = 88.9$, $V_5 = -10$, $V_6 = 100$, $V_7 = -1000$.]

Example: the student dilemma
Computing $V_4$ (with $V_6 = 100$):
$V_4 = -10 + (0.9\,V_6 + 0.1\,V_4) \;\Longrightarrow\; V_4 = \frac{-10 + 0.9\,V_6}{0.9} \approx 88.9$.

Computing $V_3$ (no need to consider all possible trajectories; with $V_4 \approx 88.9$):
$V_3 = -1 + (0.5\,V_4 + 0.5\,V_3) \;\Longrightarrow\; V_3 = \frac{-1 + 0.5\,V_4}{0.5} \approx 86.9$,
and so on for the rest.

The Optimal Bellman Equation
Bellman's Principle of Optimality [1]: "An optimal policy has the property that, whatever the initial state and the initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision."

Proposition. The optimal value function $V^*$ (i.e., $V^* = \max_\pi V^\pi$) is the solution to the optimal Bellman equation:
$V^*(x) = \max_{a \in A}\big[r(x, a) + \gamma \sum_y p(y \mid x, a) V^*(y)\big]$,
and the optimal policy is
$\pi^*(x) = \arg\max_{a \in A}\big[r(x, a) + \gamma \sum_y p(y \mid x, a) V^*(y)\big]$.

Proof. For any policy $\pi = (a, \pi')$ (possibly non-stationary),
$V^*(x) \overset{(a)}{=} \max_\pi E\big[\sum_{t \ge 0} \gamma^t r(x_t, \pi(x_t)) \mid x_0 = x; \pi\big]$
$\overset{(b)}{=} \max_{(a, \pi')} \big[r(x, a) + \gamma \sum_y p(y \mid x, a) V^{\pi'}(y)\big]$
$\overset{(c)}{=} \max_{a} \big[r(x, a) + \gamma \sum_y p(y \mid x, a) \max_{\pi'} V^{\pi'}(y)\big]$
$\overset{(d)}{=} \max_{a} \big[r(x, a) + \gamma \sum_y p(y \mid x, a) V^*(y)\big]$.

System of Equations
The Bellman equation
$V^\pi(x) = r(x, \pi(x)) + \gamma \sum_y p(y \mid x, \pi(x)) V^\pi(y)$
is a linear system of equations with N unknowns and N linear constraints.

Example: the student dilemma. For the fixed policy annotated in the figure above, the system reads
$V_1 = 0 + 0.5\,V_1 + 0.5\,V_2$
$V_2 = 1 + 0.3\,V_1 + 0.7\,V_3$
$V_3 = -1 + 0.5\,V_4 + 0.5\,V_3$
$V_4 = -10 + 0.9\,V_6 + 0.1\,V_4$
$V_5 = -10, \quad V_6 = 100, \quad V_7 = -1000$
In vector form, with $V, R \in \mathbb{R}^7$ and $P \in \mathbb{R}^{7 \times 7}$:
$V = R + PV \;\Longrightarrow\; V = (I - P)^{-1} R$.
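As a numerical sanity check (a sketch, assuming the signs reconstructed above), this linear system can be solved directly:

```python
import numpy as np

# States 1..7 mapped to indices 0..6; terminal states 5, 6, 7 contribute no
# future value, so their rows of P are left at zero.
P = np.zeros((7, 7))
P[0, [0, 1]] = [0.5, 0.5]   # state 1
P[1, [0, 2]] = [0.3, 0.7]   # state 2
P[2, [3, 2]] = [0.5, 0.5]   # state 3
P[3, [5, 3]] = [0.9, 0.1]   # state 4
R = np.array([0.0, 1.0, -1.0, -10.0, -10.0, 100.0, -1000.0])

V = np.linalg.solve(np.eye(7) - P, R)
print(np.round(V, 1))  # roughly [88.3, 88.3, 86.9, 88.9, -10, 100, -1000]
```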

System of Equations
The optimal Bellman equation
$V^*(x) = \max_{a \in A}\big[r(x, a) + \gamma \sum_y p(y \mid x, a) V^*(y)\big]$
is a (highly) non-linear system of equations with N unknowns and N non-linear constraints (i.e., the max operator).

Example: the student dilemma. The optimal system reads
$V_1 = \max\{\,0 + 0.5\,V_1 + 0.5\,V_2;\ 0 + 0.5\,V_1 + 0.5\,V_3\,\}$
$V_2 = \max\{\,1 + 0.4\,V_5 + 0.6\,V_2;\ 1 + 0.3\,V_1 + 0.7\,V_3\,\}$
$V_3 = \max\{\,-1 + 0.4\,V_2 + 0.6\,V_3;\ -1 + 0.5\,V_4 + 0.5\,V_3\,\}$
$V_4 = \max\{\,-10 + 0.9\,V_6 + 0.1\,V_4;\ -10 + V_7\,\}$
$V_5 = -10, \quad V_6 = 100, \quad V_7 = -1000$
This is too complicated to solve directly; we need to find an alternative solution.

The Bellman Operators
Notation: w.l.o.g. we consider a discrete state space with $|X| = N$, so that $V^\pi \in \mathbb{R}^N$.
Definition. For any $W \in \mathbb{R}^N$, the Bellman operator $T^\pi : \mathbb{R}^N \to \mathbb{R}^N$ is
$T^\pi W(x) = r(x, \pi(x)) + \gamma \sum_y p(y \mid x, \pi(x)) W(y)$,
and the optimal Bellman operator (or dynamic programming operator) is
$T W(x) = \max_{a \in A}\big[r(x, a) + \gamma \sum_y p(y \mid x, a) W(y)\big]$.

The Bellman Operators
Proposition (properties of the Bellman operators).
1. Monotonicity: for any $W_1, W_2 \in \mathbb{R}^N$, if $W_1 \le W_2$ component-wise, then $T^\pi W_1 \le T^\pi W_2$ and $T W_1 \le T W_2$.
2. Offset: for any scalar $c \in \mathbb{R}$, $T^\pi(W + c I_N) = T^\pi W + \gamma c I_N$ and $T(W + c I_N) = T W + \gamma c I_N$.
3. Contraction in $L_\infty$-norm: for any $W_1, W_2 \in \mathbb{R}^N$, $\|T^\pi W_1 - T^\pi W_2\|_\infty \le \gamma \|W_1 - W_2\|_\infty$ and $\|T W_1 - T W_2\|_\infty \le \gamma \|W_1 - W_2\|_\infty$.
4. Fixed point: for any policy π, $V^\pi$ is the unique fixed point of $T^\pi$, and $V^*$ is the unique fixed point of T. Furthermore, for any $W \in \mathbb{R}^N$ and any stationary policy π, $\lim_{k \to \infty} (T^\pi)^k W = V^\pi$ and $\lim_{k \to \infty} (T)^k W = V^*$.

The Bellman Operators
Proof. The contraction property (3) holds since for any $x \in X$ we have
$T W_1(x) - T W_2(x) = \max_a \big[r(x,a) + \gamma \sum_y p(y|x,a) W_1(y)\big] - \max_{a'} \big[r(x,a') + \gamma \sum_y p(y|x,a') W_2(y)\big]$
$\overset{(a)}{\le} \max_a \Big(\big[r(x,a) + \gamma \sum_y p(y|x,a) W_1(y)\big] - \big[r(x,a) + \gamma \sum_y p(y|x,a) W_2(y)\big]\Big)$
$= \gamma \max_a \sum_y p(y|x,a)\big(W_1(y) - W_2(y)\big)$
$\le \gamma \|W_1 - W_2\|_\infty \max_a \sum_y p(y|x,a) = \gamma \|W_1 - W_2\|_\infty$,
where in (a) we used $\max_a f(a) - \max_{a'} g(a') \le \max_a (f(a) - g(a))$. Exchanging the roles of $W_1$ and $W_2$ gives the same bound on $T W_2(x) - T W_1(x)$, hence the contraction.
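A small illustration (not part of the slides) of both operators for a finite MDP stored as arrays P[x, a, y] and R[x, a], with a numerical check of the γ-contraction on a randomly generated MDP; the sizes and the random model are arbitrary assumptions.

```python
import numpy as np

def bellman_op_pi(W, P, R, pi, gamma):
    """T^pi W for a deterministic policy pi (array of action indices, one per state)."""
    idx = np.arange(len(W))
    return R[idx, pi] + gamma * np.einsum('xy,y->x', P[idx, pi], W)

def bellman_op_opt(W, P, R, gamma):
    """(T W)(x) = max_a [ R[x,a] + gamma * sum_y P[x,a,y] W[y] ]."""
    return np.max(R + gamma * P @ W, axis=1)

# Numerical check of the contraction property on a random MDP (illustrative sizes).
rng = np.random.default_rng(0)
N, A, gamma = 5, 3, 0.9
P = rng.random((N, A, N)); P /= P.sum(axis=2, keepdims=True)   # row-stochastic
R = rng.random((N, A))
W1, W2 = rng.normal(size=N), rng.normal(size=N)
lhs = np.max(np.abs(bellman_op_opt(W1, P, R, gamma) - bellman_op_opt(W2, P, R, gamma)))
assert lhs <= gamma * np.max(np.abs(W1 - W2)) + 1e-12
```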

Exercise: Fixed Point
Revise the Banach fixed point theorem and prove the fixed point property of the Bellman operators.

Dynamic Programming: Value Iteration

Question: how do we compute the value functions / solve an MDP? With the Value and Policy Iteration algorithms!


Value Iteration: the Idea
1. Let $V_0$ be any vector in $\mathbb{R}^N$.
2. At each iteration k = 1, 2, ..., K: compute $V_{k+1} = T V_k$.
3. Return the greedy policy
$\pi_K(x) \in \arg\max_{a \in A}\big[r(x, a) + \gamma \sum_y p(y \mid x, a) V_K(y)\big]$.
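A direct sketch of this scheme for a finite MDP in the same array format as above (an illustration, not an optimized implementation):

```python
import numpy as np

def value_iteration(P, R, gamma, K):
    """Run K steps of V_{k+1} = T V_k and return (V_K, greedy policy w.r.t. V_K)."""
    N, A, _ = P.shape
    V = np.zeros(N)                            # any V_0 works
    for _ in range(K):
        V = np.max(R + gamma * P @ V, axis=1)  # V_{k+1} = T V_k
    Q = R + gamma * P @ V                      # one-step look-ahead values
    return V, np.argmax(Q, axis=1)             # greedy policy
```

For instance, on the random MDP of the previous sketch, value_iteration(P, R, 0.9, 200) returns a value vector within about $\gamma^{200}\|V_0 - V^*\|_\infty$ of $V^*$.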

Value Iteration: the Guarantees
From the fixed point property of T: $\lim_{k \to \infty} V_k = V^*$.
From the contraction property of T:
$\|V_{k+1} - V^*\|_\infty = \|T V_k - T V^*\|_\infty \le \gamma \|V_k - V^*\|_\infty \le \gamma^{k+1}\|V_0 - V^*\|_\infty \to 0$.
Convergence rate. Let $\epsilon > 0$ and $\|r\|_\infty \le r_{max}$. Then after at most
$K = \frac{\log(r_{max}/\epsilon)}{\log(1/\gamma)}$
iterations, $\|V_K - V^*\|_\infty \le \epsilon$.

Value Iteration: the Complexity
Time complexity: each iteration, as well as the computation of the greedy policy, takes $O(N^2 |A|)$ operations, since
$V_{k+1}(x) = T V_k(x) = \max_{a \in A}\big[r(x,a) + \gamma \sum_y p(y|x,a) V_k(y)\big]$ and $\pi_K(x) \in \arg\max_{a \in A}\big[r(x,a) + \gamma \sum_y p(y|x,a) V_K(y)\big]$.
Total time complexity: $O(K N^2 |A|)$.
Space complexity: storing the MDP takes $O(N^2 |A|)$ for the dynamics and $O(N |A|)$ for the reward; storing the value function and the optimal policy takes $O(N)$.

State-Action Value Function
Definition. In discounted infinite-horizon problems, for any policy π, the state-action value function (or Q-function) $Q^\pi : X \times A \to \mathbb{R}$ is
$Q^\pi(x, a) = E\big[\sum_{t \ge 0} \gamma^t r(x_t, a_t) \mid x_0 = x, a_0 = a, a_t = \pi(x_t)\ \forall t \ge 1\big]$,
and the corresponding optimal Q-function is $Q^*(x, a) = \max_\pi Q^\pi(x, a)$.

The relationships between the V-function and the Q-function are:
$Q^\pi(x, a) = r(x, a) + \gamma \sum_{y \in X} p(y \mid x, a) V^\pi(y)$
$V^\pi(x) = Q^\pi(x, \pi(x))$
$Q^*(x, a) = r(x, a) + \gamma \sum_{y \in X} p(y \mid x, a) V^*(y)$
$V^*(x) = Q^*(x, \pi^*(x)) = \max_{a \in A} Q^*(x, a)$.

Value Iteration: Extensions and Implementations
Q-iteration.
1. Let $Q_0$ be any Q-function.
2. At each iteration k = 1, 2, ..., K: compute $Q_{k+1} = T Q_k$.
3. Return the greedy policy $\pi_K(x) \in \arg\max_{a \in A} Q_K(x, a)$.
Comparison: space and time complexity increase to $O(N |A|)$ and $O(N^2 |A|^2)$, but computing the greedy policy is cheaper, $O(N |A|)$.
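A corresponding Q-iteration sketch in the same array format (illustrative):

```python
import numpy as np

def q_iteration(P, R, gamma, K):
    """Q_{k+1}(x,a) = R[x,a] + gamma * sum_y P[x,a,y] * max_a' Q_k(y,a')."""
    N, A, _ = P.shape
    Q = np.zeros((N, A))                       # any Q_0 works
    for _ in range(K):
        Q = R + gamma * P @ np.max(Q, axis=1)
    return Q, np.argmax(Q, axis=1)             # greedy policy: O(N|A|)
```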

Value Iteration: Extensions and Implementations
Asynchronous VI.
1. Let $V_0$ be any vector in $\mathbb{R}^N$.
2. At each iteration k = 1, 2, ..., K: choose a state $x_k$ and compute $V_{k+1}(x_k) = T V_k(x_k)$ (leaving the other components unchanged).
3. Return the greedy policy $\pi_K(x) \in \arg\max_{a \in A}\big[r(x, a) + \gamma \sum_y p(y \mid x, a) V_K(y)\big]$.
Comparison: the per-iteration time complexity is reduced to $O(N |A|)$; the number of iterations increases to at most $O(KN)$, but it is much smaller in practice if states are properly prioritized; the convergence guarantees are preserved.

Dynamic Programming: Policy Iteration

Policy Iteration: the Idea
1. Let $\pi_0$ be any stationary policy.
2. At each iteration k = 1, 2, ..., K:
   - Policy evaluation: given $\pi_k$, compute $V^{\pi_k}$.
   - Policy improvement: compute the greedy policy
     $\pi_{k+1}(x) \in \arg\max_{a \in A}\big[r(x, a) + \gamma \sum_y p(y \mid x, a) V^{\pi_k}(y)\big]$.
3. Return the last policy $\pi_K$.
Remark: usually K is the smallest k such that $V^{\pi_k} = V^{\pi_{k+1}}$.
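A compact sketch of this loop, using exact policy evaluation through the linear system $V^\pi = (I - \gamma P^\pi)^{-1} r^\pi$ (anticipating the policy-evaluation slide below); again an illustration for MDPs stored as arrays P[x, a, y] and R[x, a]:

```python
import numpy as np

def policy_iteration(P, R, gamma):
    N, A, _ = P.shape
    pi = np.zeros(N, dtype=int)           # pi_0: any stationary policy
    while True:
        # Policy evaluation: V^pi = (I - gamma P^pi)^{-1} r^pi
        P_pi = P[np.arange(N), pi]        # N x N transition matrix under pi
        r_pi = R[np.arange(N), pi]
        V = np.linalg.solve(np.eye(N) - gamma * P_pi, r_pi)
        # Policy improvement: greedy policy w.r.t. V^{pi_k}
        pi_new = np.argmax(R + gamma * P @ V, axis=1)
        if np.array_equal(pi_new, pi):    # V^{pi_k} = V^{pi_{k+1}}: stop
            return pi, V
        pi = pi_new
```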

Policy Iteration: the Guarantees
Proposition. The policy iteration algorithm generates a sequence of policies with non-decreasing performance, $V^{\pi_{k+1}} \ge V^{\pi_k}$, and it converges to $\pi^*$ in a finite number of iterations.

Policy Iteration: the Guarantees
Proof. From the definition of the Bellman operators and of the greedy policy $\pi_{k+1}$,
$V^{\pi_k} = T^{\pi_k} V^{\pi_k} \le T V^{\pi_k} = T^{\pi_{k+1}} V^{\pi_k}$,   (1)
and from the monotonicity property of $T^{\pi_{k+1}}$ it follows that
$V^{\pi_k} \le T^{\pi_{k+1}} V^{\pi_k},\quad T^{\pi_{k+1}} V^{\pi_k} \le (T^{\pi_{k+1}})^2 V^{\pi_k},\ \dots,\ (T^{\pi_{k+1}})^{n-1} V^{\pi_k} \le (T^{\pi_{k+1}})^n V^{\pi_k},\ \dots$
Joining all the inequalities in the chain, we obtain
$V^{\pi_k} \le \lim_{n \to \infty} (T^{\pi_{k+1}})^n V^{\pi_k} = V^{\pi_{k+1}}$.
Then $(V^{\pi_k})_k$ is a non-decreasing sequence.

Proof (cont'd). Since a finite MDP admits a finite number of policies, the termination condition is eventually met for a specific k. At that point eq. (1) holds with equality, so $V^{\pi_k} = T V^{\pi_k}$ and hence $V^{\pi_k} = V^*$, which implies that $\pi_k$ is an optimal policy.

Policy Iteration: Notation
For any policy π, the reward vector is $r^\pi(x) = r(x, \pi(x))$ and the transition matrix is $[P^\pi]_{x,y} = p(y \mid x, \pi(x))$.

Policy Iteration: the Policy Evaluation Step
Direct computation. For any policy π, compute $V^\pi = (I - \gamma P^\pi)^{-1} r^\pi$. Complexity: $O(N^3)$ (improvable to $O(N^{2.807})$).
Iterative policy evaluation. For any policy π, $\lim_{n \to \infty} (T^\pi)^n V_0 = V^\pi$. Complexity: an ε-approximation of $V^\pi$ requires $O\big(N^2 \frac{\log(1/\epsilon)}{\log(1/\gamma)}\big)$ steps.
Monte-Carlo simulation. In each state x, simulate n trajectories $((x_t^i)_{t \ge 0})_{1 \le i \le n}$ following policy π and compute
$\hat V^\pi(x) \approx \frac{1}{n}\sum_{i=1}^n \sum_{t \ge 0} \gamma^t r(x_t^i, \pi(x_t^i))$.
Complexity: in each state, the approximation error is $O(1/\sqrt{n})$.

Policy Iteration: the Policy Improvement Step
If the policy is evaluated with V, the policy improvement has complexity $O(N |A|)$ (it requires computing an expectation over next states for every action).
If the policy is evaluated with Q, the policy improvement has complexity $O(|A|)$ per state, corresponding to $\pi_{k+1}(x) \in \arg\max_{a \in A} Q(x, a)$.

Policy Iteration: Number of Iterations
At most $O\big(\frac{N |A|}{1 - \gamma} \log\frac{1}{1 - \gamma}\big)$.

Comparison between Value and Policy Iteration
Value Iteration. Pros: each iteration is very computationally efficient. Cons: convergence is only asymptotic.
Policy Iteration. Pros: it converges in a finite number of iterations (often small in practice). Cons: each iteration requires a full policy evaluation, which might be expensive.

The Grid-World Problem
[Figure in the original slides: a grid-world example.]
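To make the example concrete, here is a small hypothetical grid-world (the layout, rewards and deterministic dynamics are my own illustrative choices, not necessarily those of the original slide), solved with the value_iteration sketch above:

```python
import numpy as np

# A hypothetical 4x4 grid-world: 4 actions (up, down, left, right), deterministic
# moves clipped at the borders, reward 1 for entering the top-right goal cell,
# which is absorbing; 0 reward elsewhere.
H = W = 4
N, A, gamma = H * W, 4, 0.9
moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
goal = 0 * W + (W - 1)                       # top-right cell

P = np.zeros((N, A, N))
R = np.zeros((N, A))
for s in range(N):
    i, j = divmod(s, W)
    for a, (di, dj) in enumerate(moves):
        if s == goal:
            P[s, a, s] = 1.0                 # absorbing goal state, zero reward
            continue
        ni, nj = min(max(i + di, 0), H - 1), min(max(j + dj, 0), W - 1)
        ns = ni * W + nj
        P[s, a, ns] = 1.0
        R[s, a] = 1.0 if ns == goal else 0.0

V, pi = value_iteration(P, R, gamma, K=100)
print(np.round(V.reshape(H, W), 2))          # values decay with the distance to the goal
```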

Other Algorithms
- Modified Policy Iteration
- λ-Policy Iteration
- Linear programming
- Policy search

Summary
1. Bellman equations provide a compact formulation of value functions.
2. Dynamic programming provides a general set of tools to solve MDPs.

Bibliography
[1] R. E. Bellman. Dynamic Programming. Princeton University Press, Princeton, NJ, 1957.
[2] D. P. Bertsekas and J. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, Belmont, MA, 1996.
[3] W. Fleming and R. Rishel. Deterministic and Stochastic Optimal Control. Applications of Mathematics 1, Springer-Verlag, Berlin, New York, 1975.
[4] R. A. Howard. Dynamic Programming and Markov Processes. MIT Press, Cambridge, MA, 1960.
[5] M. L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, New York, 1994.

Reinforcement Learning
Alessandro Lazaric
alessandro.lazaric@inria.fr
sequel.lille.inria.fr