Reinforcement Learning


Reinforcement Learning: RL in Continuous MDPs (March-April 2015)

Large/Continuous MDPs
- Large/continuous state space: a tabular representation cannot be used
- Large/continuous action space: maximization over the action space is problematic
- Continuous time: leads to the Hamilton-Jacobi-Bellman equation

Function Approximation in RL: Why?
Reinforcement learning can be used to solve large problems, e.g.
- Backgammon: 10^20 states
- Computer Go: 10^170 states
- Helicopter: continuous state space
How can we scale up the model-free methods for prediction and control?

Value Function Approximation
So far we have represented the value function by a lookup table:
- Every state s has an entry V(s)
- Or every state-action pair (s, a) has an entry Q(s, a)
Problem with large MDPs:
- There are too many states and/or actions to store in memory
- It is too slow to learn the value of each state individually
Solution for large MDPs:
- Estimate the value function with function approximation: $V_\theta(s) \approx V^\pi(s)$, $Q_\theta(s,a) \approx Q^\pi(s,a)$
- Generalize from seen states to unseen states
- Update the parameter θ using MC or TD learning

Which Function Approximator?
There are many function approximators, e.g.
- Artificial neural networks
- Decision trees
- Nearest neighbour
- Fourier/wavelet bases
- Coarse coding
In principle, any function approximator can be used. However, the choice may be affected by some properties of RL:
- Experience is not i.i.d.: successive time steps are correlated
- During control, the value function $V^\pi(s)$ is non-stationary
- The agent's actions affect the subsequent data it receives
- Feedback is delayed, not instantaneous

Gradient Descent
Let L(θ) be a differentiable function of the parameter vector θ. Define the gradient of L(θ) to be
$$\nabla_\theta L(\theta) = \left(\frac{\partial L(\theta)}{\partial \theta_1}, \ldots, \frac{\partial L(\theta)}{\partial \theta_n}\right)^{\!\top}$$
To find a local minimum of L(θ), adjust the parameter θ in the direction of the negative gradient,
$$\Delta\theta = -\tfrac{1}{2}\alpha \nabla_\theta L(\theta)$$
where α is a step-size parameter.

Value Function Approximation by Stochastic Gradient Descent
Goal: find the parameter vector θ minimizing the mean squared error between the approximate value function $V_\theta(s)$ and the true value function $V^\pi(s)$:
$$L(\theta) = \mathbb{E}_\pi\!\left[(V^\pi(s) - V_\theta(s))^2 \mid s_t = s\right]$$
Gradient descent finds a local minimum:
$$\Delta\theta = -\tfrac{1}{2}\alpha \nabla_\theta L(\theta) = \alpha\,\mathbb{E}_\pi\!\left[(V^\pi(s) - V_\theta(s))\nabla_\theta V_\theta(s) \mid s_t = s\right]$$
Stochastic gradient descent samples the gradient:
$$\Delta\theta = \alpha (V^\pi(s) - V_\theta(s)) \nabla_\theta V_\theta(s)$$
The expected update is equal to the full gradient update.

Feature Vectors
Represent the state by a feature vector
$$\phi(s) = (\phi_1(s), \ldots, \phi_n(s))^{\top}$$
For example:
- Distance of a robot from landmarks
- Trends in the stock market
- Piece and pawn configurations in chess

Linear Value Function Approximation
Represent the value function by a linear combination of features:
$$V_\theta(s) = \phi(s)^{\top}\theta = \sum_{j=1}^{n} \phi_j(s)\,\theta_j$$
The objective function is quadratic in θ:
$$J(\theta) = \mathbb{E}\!\left[(V^\pi(s) - \phi(s)^{\top}\theta)^2 \mid s_t = s\right]$$
so stochastic gradient descent converges to the global optimum. The update rule is particularly simple:
$$\nabla_\theta V_\theta(s) = \phi(s), \qquad \Delta\theta = \alpha (V^\pi(s) - V_\theta(s))\,\phi(s)$$
Update = step-size × prediction error × feature value
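For concreteness, here is a minimal Python (NumPy) sketch of the linear update; the feature vector and the target value below are illustrative placeholders, not anything specified in the slides:

```python
import numpy as np

def linear_value(theta, phi_s):
    """V_theta(s) = phi(s)^T theta."""
    return phi_s @ theta

def sgd_update(theta, phi_s, v_target, alpha):
    """One stochastic gradient step: theta += alpha * (target - V_theta(s)) * phi(s)."""
    prediction_error = v_target - linear_value(theta, phi_s)
    return theta + alpha * prediction_error * phi_s

# Hypothetical usage with random features and a made-up target
theta = np.zeros(8)
phi_s = np.random.rand(8)
theta = sgd_update(theta, phi_s, v_target=1.0, alpha=0.1)
```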

Table Lookup Features
Table lookup is a special case of linear value function approximation, using the indicator features
$$\phi_{\text{table}}(s) = \big(\mathbf{1}(s = s_1), \ldots, \mathbf{1}(s = s_n)\big)^{\top}$$
The parameter vector θ then gives the value of each individual state:
$$V(s) = \sum_{i=1}^{n} \mathbf{1}(s = s_i)\,\theta_i$$

Coarse Coding
An example of linear value function approximation: coarse coding provides a large feature vector φ(s), and the parameter vector θ gives a value to each feature.

Generalization in Coarse Coding

Stochastic Gradient Descent with Coarse Coding

Tile Coding
- One binary feature for each tile
- The number of features present at any one time is constant
- Binary features make the weighted sum easy to compute
- The indices of the active features are easy to compute

Tile Coding (continued)
- Irregular tilings
- Hashing
- CMAC [Albus, 1971]: Cerebellar Model Architecture Computer
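As an illustration of the idea (not of CMAC or hashing specifically), here is a minimal sketch of a uniform 2-D tiling with offset tilings; the grid size, offsets, and state range are assumptions of this sketch:

```python
import numpy as np

def tile_indices(x, y, n_tilings=8, tiles_per_dim=10, low=0.0, high=1.0):
    """Return the index of the active tile in each tiling for a 2-D state (x, y).
    Each tiling is shifted by a fraction of one tile width (a simple offset scheme)."""
    width = (high - low) / tiles_per_dim
    indices = []
    for t in range(n_tilings):
        offset = t * width / n_tilings
        ix = int((x - low + offset) / width) % tiles_per_dim
        iy = int((y - low + offset) / width) % tiles_per_dim
        indices.append(t * tiles_per_dim**2 + ix * tiles_per_dim + iy)
    return indices

def tile_features(x, y, n_tilings=8, tiles_per_dim=10):
    """Binary feature vector with exactly n_tilings active features."""
    phi = np.zeros(n_tilings * tiles_per_dim**2)
    phi[tile_indices(x, y, n_tilings, tiles_per_dim)] = 1.0
    return phi
```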

Radial Basis Functions (RBFs)
e.g., Gaussians:
$$\phi_i(s) = \exp\!\left(-\frac{\lVert s - c_i \rVert^2}{2\sigma_i^2}\right)$$
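A short sketch of Gaussian RBF features; the centres and bandwidth used in the usage example are arbitrary choices for illustration:

```python
import numpy as np

def rbf_features(s, centers, sigma):
    """Gaussian RBF features phi_i(s) = exp(-||s - c_i||^2 / (2 sigma^2))."""
    s = np.atleast_1d(s)
    diffs = centers - s              # (n_features, state_dim)
    sq_dists = np.sum(diffs**2, axis=1)
    return np.exp(-sq_dists / (2.0 * sigma**2))

# Hypothetical usage: 10 evenly spaced centres on a 1-D state space [0, 1]
centers = np.linspace(0.0, 1.0, 10).reshape(-1, 1)
phi = rbf_features(0.37, centers, sigma=0.1)
```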

Prediction Algorithms
So far we have assumed the true value function $V^\pi(s)$ is given by a supervisor. But in RL there is no supervisor, only rewards. In practice, we substitute a target for $V^\pi(s)$:
- For MC, the target is the return $v_t$:
$$\Delta\theta = \alpha (v_t - V_\theta(s)) \nabla_\theta V_\theta(s)$$
- For TD(0), the target is the TD target $r + \gamma V(s')$:
$$\Delta\theta = \alpha (r + \gamma V(s') - V_\theta(s)) \nabla_\theta V_\theta(s)$$
- For TD(λ), the target is the λ-return $v_t^\lambda$:
$$\Delta\theta = \alpha (v_t^\lambda - V_\theta(s)) \nabla_\theta V_\theta(s)$$

Monte Carlo with Value Function Approximation
The return $v_t$ is an unbiased, noisy sample of the true value $V^\pi(s)$. We can therefore apply supervised learning to the "training data"
$$\langle s_1, v_1\rangle, \langle s_2, v_2\rangle, \ldots, \langle s_T, v_T\rangle$$
For example, using linear Monte Carlo policy evaluation:
$$\Delta\theta = \alpha (v_t - V_\theta(s)) \nabla_\theta V_\theta(s) = \alpha (v_t - V_\theta(s))\,\phi(s)$$
Monte Carlo evaluation converges to a local optimum, even when using non-linear value function approximation.
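A minimal sketch of linear Monte Carlo policy evaluation; the episode format and feature map `phi` are assumed interfaces rather than anything fixed by the slides:

```python
import numpy as np

def linear_mc_evaluation(episodes, phi, n_features, alpha=0.05, gamma=1.0):
    """Every-visit linear Monte Carlo policy evaluation.
    `episodes` is a list of trajectories [(s_0, r_1), (s_1, r_2), ...],
    where each reward follows the paired state."""
    theta = np.zeros(n_features)
    for episode in episodes:
        # Compute returns v_t backwards from the end of the episode
        g = 0.0
        returns = []
        for (_, r) in reversed(episode):
            g = r + gamma * g
            returns.append(g)
        returns.reverse()
        # SGD update towards each sampled return
        for (s, _), v_t in zip(episode, returns):
            phi_s = phi(s)
            theta += alpha * (v_t - phi_s @ theta) * phi_s
    return theta
```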

TD Learning with Value Function Approximation
The TD target $r_{t+1} + \gamma V_\theta(s_{t+1})$ is a biased sample of the true value $V^\pi(s_t)$. We can still apply supervised learning to the "training data"
$$\langle s_1, r_2 + \gamma V_\theta(s_2)\rangle, \langle s_2, r_3 + \gamma V_\theta(s_3)\rangle, \ldots, \langle s_{T-1}, r_T\rangle$$
For example, using linear TD(0):
$$\Delta\theta = \alpha (r + \gamma V_\theta(s') - V_\theta(s)) \nabla_\theta V_\theta(s) = \alpha\,\delta\,\phi(s)$$
Linear TD(0) converges (close) to the global optimum.
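A sketch of linear TD(0) policy evaluation; the environment interface (`reset`, `step`) and `policy` are placeholder assumptions:

```python
import numpy as np

def linear_td0(env, policy, phi, n_features, n_episodes=100, alpha=0.05, gamma=0.99):
    """Linear TD(0) policy evaluation.
    Assumes env.reset() -> s, env.step(a) -> (s', r, done), and policy(s) -> a."""
    theta = np.zeros(n_features)
    for _ in range(n_episodes):
        s = env.reset()
        done = False
        while not done:
            s_next, r, done = env.step(policy(s))
            v_next = 0.0 if done else phi(s_next) @ theta
            delta = r + gamma * v_next - phi(s) @ theta   # TD error
            theta += alpha * delta * phi(s)               # delta * grad V_theta(s)
            s = s_next
    return theta
```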

TD(λ) with Value Function Approximation
The λ-return $v_t^\lambda$ is also a biased sample of the true value $V^\pi(s)$. We can again apply supervised learning to the "training data"
$$\langle s_1, v_1^\lambda\rangle, \langle s_2, v_2^\lambda\rangle, \ldots, \langle s_{T-1}, v_{T-1}^\lambda\rangle$$
Forward-view linear TD(λ):
$$\Delta\theta = \alpha (v_t^\lambda - V_\theta(s_t)) \nabla_\theta V_\theta(s_t) = \alpha (v_t^\lambda - V_\theta(s_t))\,\phi(s_t)$$
Backward-view linear TD(λ):
$$\delta_t = r_{t+1} + \gamma V(s_{t+1}) - V(s_t), \qquad e_t = \gamma\lambda e_{t-1} + \phi(s_t), \qquad \Delta\theta = \alpha\,\delta_t\,e_t$$
The forward view and backward view of linear TD(λ) are equivalent.
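A sketch of the backward view with accumulating eligibility traces, under the same placeholder env/policy interface as the TD(0) sketch:

```python
import numpy as np

def linear_td_lambda(env, policy, phi, n_features, n_episodes=100,
                     alpha=0.05, gamma=0.99, lam=0.9):
    """Backward-view linear TD(lambda) with accumulating eligibility traces."""
    theta = np.zeros(n_features)
    for _ in range(n_episodes):
        s = env.reset()
        e = np.zeros(n_features)          # eligibility trace
        done = False
        while not done:
            s_next, r, done = env.step(policy(s))
            v_next = 0.0 if done else phi(s_next) @ theta
            delta = r + gamma * v_next - phi(s) @ theta
            e = gamma * lam * e + phi(s)  # decay trace, then add current features
            theta += alpha * delta * e
            s = s_next
    return theta
```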

Control with Value Function Approximation
- Policy evaluation: approximate policy evaluation, $Q_\theta \approx Q^\pi$
- Policy improvement: ε-greedy policy improvement

Action-Value Function Approximation
Approximate the action-value function: $Q_\theta(s,a) \approx Q^\pi(s,a)$. Minimize the mean squared error between the approximate action-value function $Q_\theta(s,a)$ and the true action-value function $Q^\pi(s,a)$, and use stochastic gradient descent to find a local minimum:
$$-\tfrac{1}{2}\nabla_\theta J(\theta) = (Q^\pi(s,a) - Q_\theta(s,a)) \nabla_\theta Q_\theta(s,a)$$
$$\Delta\theta = \alpha (Q^\pi(s,a) - Q_\theta(s,a)) \nabla_\theta Q_\theta(s,a)$$

Linear Action-Value Function Approximation
Represent the state and action by a feature vector
$$\phi(s,a) = (\phi_1(s,a), \ldots, \phi_n(s,a))^{\top}$$
Represent the action-value function by a linear combination of features:
$$Q_\theta(s,a) = \phi(s,a)^{\top}\theta = \sum_{j=1}^{n} \phi_j(s,a)\,\theta_j$$
Stochastic gradient descent update:
$$\nabla_\theta Q_\theta(s,a) = \phi(s,a), \qquad \Delta\theta = \alpha (Q^\pi(s,a) - Q_\theta(s,a))\,\phi(s,a)$$

Control Algorithms
As in prediction, we must substitute a target for $Q^\pi(s,a)$:
- For MC, the target is the return $v_t$:
$$\Delta\theta = \alpha (v_t - Q_\theta(s_t,a_t))\,\phi(s_t,a_t)$$
- For TD(0), the target is the TD target $r_{t+1} + \gamma Q(s_{t+1},a_{t+1})$:
$$\Delta\theta = \alpha (r_{t+1} + \gamma Q(s_{t+1},a_{t+1}) - Q_\theta(s_t,a_t))\,\phi(s_t,a_t)$$
- For forward-view TD(λ), the target is the λ-return $v_t^\lambda$:
$$\Delta\theta = \alpha (v_t^\lambda - Q_\theta(s_t,a_t))\,\phi(s_t,a_t)$$
- For backward-view TD(λ), the equivalent update is
$$\delta_t = r_{t+1} + \gamma Q_\theta(s_{t+1},a_{t+1}) - Q_\theta(s_t,a_t), \qquad e_t = \gamma\lambda e_{t-1} + \phi(s_t,a_t), \qquad \Delta\theta = \alpha\,\delta_t\,e_t$$

GPI: Linear Gradient-Descent SARSA(0)
Initialize θ arbitrarily
loop {for each episode}
    s, a ← initial state and action of the episode
    Q(s, a) ← θ⊤φ(s, a)
    repeat {for each step of the episode}
        ∇Q_a ← φ(s, a)
        Take action a, observe reward r and next state s'
        δ ← r − θ⊤φ(s, a)
        Uniformly draw a number ρ ∈ [0, 1]
        if ρ ≤ 1 − ε then
            for all a' ∈ A(s') do
                Q_{a'} ← θ⊤φ(s', a')
            end for
            a' ← arg max_{a'} Q_{a'}
        else
            a' ← a random action in A(s')
        end if
        Q_{a'} ← θ⊤φ(s', a')
        δ ← δ + γ Q_{a'}
        θ ← θ + α δ ∇Q_a
        s ← s'; a ← a'
    until s' is terminal
end loop
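The pseudocode above translates to roughly the following Python sketch; the environment interface, feature map `phi(s, a)`, and hyperparameters are assumptions and not part of the slides:

```python
import numpy as np

def linear_sarsa0(env, phi, n_features, n_actions, n_episodes=500,
                  alpha=0.1, gamma=0.99, epsilon=0.1):
    """Linear gradient-descent SARSA(0) with epsilon-greedy exploration.
    Assumes env.reset() -> s and env.step(a) -> (s', r, done)."""
    theta = np.zeros(n_features)

    def q(s, a):
        return phi(s, a) @ theta

    def epsilon_greedy(s):
        if np.random.rand() < epsilon:
            return np.random.randint(n_actions)
        return int(np.argmax([q(s, a) for a in range(n_actions)]))

    for _ in range(n_episodes):
        s = env.reset()
        a = epsilon_greedy(s)
        done = False
        while not done:
            s_next, r, done = env.step(a)
            if done:
                delta = r - q(s, a)
            else:
                a_next = epsilon_greedy(s_next)
                delta = r + gamma * q(s_next, a_next) - q(s, a)
            theta += alpha * delta * phi(s, a)   # delta * grad Q_theta(s, a)
            if not done:
                s, a = s_next, a_next
    return theta
```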

GPI: Linear Gradient-Descent Watkins' Q(0)
Initialize θ arbitrarily
loop {for each episode}
    s, a ← initial state and action of the episode
    Q(s, a) ← θ⊤φ(s, a)
    repeat {for each step of the episode}
        ∇Q_a ← φ(s, a)
        Take action a, observe reward r and next state s'
        δ ← r − θ⊤φ(s, a)
        for all a' ∈ A(s') do
            Q_{a'} ← θ⊤φ(s', a')
        end for
        δ ← δ + γ max_{a'} Q_{a'}
        θ ← θ + α δ ∇Q_a
        Uniformly draw a number ρ ∈ [0, 1]
        if ρ ≤ 1 − ε then
            for all a' ∈ A(s') do
                Q_{a'} ← θ⊤φ(s', a')
            end for
            a' ← arg max_{a'} Q_{a'}
        else
            a' ← a random action in A(s')
        end if
        Q_{a'} ← θ⊤φ(s', a')
        s ← s'; a ← a'
    until s' is terminal
end loop

Linear SARSA with Coarse Coding in Mountain Car

Linear SARSA with Radial Basis Functions in Mountain Car

Study of λ: Should We Bootstrap?

Convergence Questions
The previous results show that it is desirable to bootstrap, but now we consider convergence issues. When do incremental prediction algorithms converge?
- When using bootstrapping (i.e., TD with λ < 1)?
- When using linear function approximation?
- When using off-policy learning?
Ideally, we would like algorithms that converge in all cases.

Baird's Counterexample
[Figure: an MDP with states valued $V_k(s) = \theta(7) + 2\theta(i)$ for $i = 1, \ldots, 5$ and one state valued $V_k(s) = 2\theta(7) + \theta(6)$; transitions are labelled 100%, 99%, and 1% (to a terminal state). The reward is always zero.]

Parameter Divergence in Baird's Counterexample
[Figure: parameter values $\theta_k(i)$ over iterations (log scale, broken at ±1), diverging under the uniform distribution with $\gamma = 0.99$, $\alpha = 0.01$, $\theta_0 = (1, 1, 1, 1, 1, 10, 1)^{\top}$.]

Another Example
Baird's counterexample has two simple fixes: use the on-policy distribution, or, instead of taking small steps towards the expected one-step returns, set the value function to the best least-squares approximation. The second fix works if the feature vectors form a linearly independent set; when an exact solution is not possible, it can fail even if we take the best approximation at each iteration. Consider a two-state problem with approximate values θ and 2θ, zero reward everywhere, where the first state moves to the second, and the second stays in itself with probability 1 − ε and terminates with probability ε:
$$\theta_{k+1} = \arg\min_{\theta \in \mathbb{R}} \sum_{s \in S} \big(V_\theta(s) - \mathbb{E}_\pi[r_{t+1} + \gamma V_{\theta_k}(s_{t+1}) \mid s_t = s]\big)^2 = \arg\min_{\theta \in \mathbb{R}} \big[\theta - 2\gamma\theta_k\big]^2 + \big[2\theta - 2(1-\varepsilon)\gamma\theta_k\big]^2 = \frac{6 - 4\varepsilon}{5}\,\gamma\theta_k$$
which diverges whenever $\frac{6 - 4\varepsilon}{5}\gamma > 1$, e.g. for small ε and γ close to 1.

Convergence of Prediction Algorithms

On/Off-Policy | Algorithm | Table Lookup | Linear | Non-Linear
On-Policy     | MC        | OK           | OK     | OK
On-Policy     | TD(0)     | OK           | OK     | KO
On-Policy     | TD(λ)     | OK           | OK     | KO
Off-Policy    | MC        | OK           | OK     | OK
Off-Policy    | TD(0)     | OK           | KO     | KO
Off-Policy    | TD(λ)     | OK           | KO     | KO

The Problematic Triumvirate
- Bootstrapping
- Off-policy learning
- Non-linear function approximation
We have not quite achieved our ideal goal for prediction algorithms.

Convergence of Control Algorithms

Algorithm           | Table Lookup | Linear
Monte Carlo Control | OK           | OK
SARSA               | OK           | (OK)
Q-learning          | OK           | KO

(OK) = chatters around a near-optimal value function

What Objective Function Does TD(0) Optimize?
- Mean squared error: $\text{MSE}(\theta) = \lVert V_\theta - V^\pi \rVert_D^2 = \mathbb{E}_{s \sim D}\big[(V_\theta(s) - V^\pi(s))^2\big]$
- Mean squared TD error: $\text{MSTDE}(\theta) = \mathbb{E}_{D,P,\pi}\big[\delta_t^2 \mid s_t = s\big]$
- Mean squared Bellman error: $\text{MSBE}(\theta) = \lVert V_\theta - T^\pi V_\theta \rVert_D^2 = \mathbb{E}_{s \sim D}\big[\mathbb{E}_{P,\pi}[\delta_t \mid s_t = s]^2\big]$
- Mean squared projected Bellman error: $\text{MSPBE}(\theta) = \lVert V_\theta - \Pi T^\pi V_\theta \rVert_D^2$

Split-A Example
- Solution minimizing MSE: the correct solution, V(A) = 0.5, V(B) = 1, V(C) = 0
- Solution minimizing MSTDE (with table lookup): V(A) = 0.5, V(B) = 0.75, V(C) = 0.25
- Solution minimizing MSBE (with function approximation): V(A) = 0.5, V(B) = 0.75, V(C) = 0.25
- Solution minimizing MSPBE: the correct solution, V(A) = 0.5, V(B) = 1, V(C) = 0

TD(0) Objective
TD(0) finds the solution minimizing the MSPBE, but TD does not follow the gradient of any objective function. This is why TD(0) can diverge when run off-policy or with non-linear function approximation.

Gradient Temporal-Difference Learning
Gradient TD learning explicitly follows the gradient of the MSPBE:
$$\Delta\theta = -\tfrac{1}{2}\alpha \nabla_\theta \lVert V_\theta - \Pi T^\pi V_\theta \rVert_D^2$$
The MSPBE is optimized at the TD(0) fixed point, and gradient descent on the MSPBE finds that fixed point robustly.

Gradient Temporal-Difference Learning Algorithm
$$\text{MSPBE}(\theta) = \lVert V_\theta - \Pi T^\pi V_\theta \rVert_D^2$$
$$-\tfrac{1}{2}\nabla_\theta \text{MSPBE}(\theta) = \mathbb{E}[\delta\phi] - \gamma\,\mathbb{E}[\phi'\phi^{\top}]\,\mathbb{E}[\phi\phi^{\top}]^{-1}\mathbb{E}[\delta\phi] \;\stackrel{\text{def}}{=}\; \mathbb{E}[\delta\phi] - \gamma\,\mathbb{E}[\phi'\phi^{\top}]\,w$$
This leads to the linear TDC algorithm:
$$\Delta\theta = \alpha\big(\delta\phi - \gamma\phi'(\phi^{\top}w)\big), \qquad \Delta w = \beta\big(\delta - \phi^{\top}w\big)\phi$$
The second term, $-\gamma\phi'(\phi^{\top}w)$, is a correction of the TD(0) update $\Delta\theta = \alpha\delta\phi$.
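A minimal sketch of one on-policy TDC step (importance-sampling ratios for the off-policy case are omitted); the step sizes are illustrative:

```python
import numpy as np

def tdc_update(theta, w, phi_s, phi_next, r, alpha=0.01, beta=0.1, gamma=0.99):
    """One step of linear TDC (gradient TD with correction).
    phi_s and phi_next are the feature vectors of the current and next state."""
    delta = r + gamma * phi_next @ theta - phi_s @ theta            # TD error
    theta = theta + alpha * (delta * phi_s - gamma * phi_next * (phi_s @ w))
    w = w + beta * (delta - phi_s @ w) * phi_s                      # auxiliary weights
    return theta, w
```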

Gradient Temporal Difference Learning Results

Convergence of Prediction Algorithms (with Gradient TD)

On/Off-Policy | Algorithm | Table Lookup | Linear | Non-Linear
On-Policy     | MC        | OK           | OK     | OK
On-Policy     | TD(0)     | OK           | OK     | KO
On-Policy     | TDC       | OK           | OK     | OK
Off-Policy    | MC        | OK           | OK     | OK
Off-Policy    | TD(0)     | OK           | KO     | KO
Off-Policy    | TDC       | OK           | OK     | OK

Batch Reinforcement Learning
Gradient descent is simple and appealing, but it is not sample efficient. Batch methods instead seek the best-fitting value function given the agent's experience (the "training data").

Least Squares Prediction
Given the value function approximation $V_\theta(s) \approx V^\pi(s)$ and experience $\mathcal{D}$ consisting of (state, value) pairs
$$\mathcal{D} = \{\langle s_1, v_1^\pi\rangle, \langle s_2, v_2^\pi\rangle, \ldots, \langle s_T, v_T^\pi\rangle\}$$
which parameters θ give the best-fitting value function $V_\theta(s)$? Least squares algorithms find the parameter vector θ minimizing the sum of squared errors between $V_\theta(s_t)$ and the target values $v_t^\pi$:
$$LS(\theta) = \sum_{t=1}^{T} (v_t^\pi - V_\theta(s_t))^2 = \mathbb{E}_{\mathcal{D}}\big[(v^\pi - V_\theta(s))^2\big]$$

Stochastic Gradient Descent with Experience Replay
Given experience consisting of (state, value) pairs
$$\mathcal{D} = \{\langle s_1, v_1^\pi\rangle, \langle s_2, v_2^\pi\rangle, \ldots, \langle s_T, v_T^\pi\rangle\}$$
repeat:
1. Sample a (state, value) pair from the experience: $\langle s, v^\pi\rangle \sim \mathcal{D}$
2. Apply a stochastic gradient descent update: $\Delta\theta = \alpha(v^\pi - V_\theta(s))\nabla_\theta V_\theta(s)$
This converges to the least squares solution $\theta^\pi = \arg\min_\theta LS(\theta)$.
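A short sketch of this replay loop for the linear case; the data format and the number of updates are assumptions of the sketch:

```python
import numpy as np

def replay_sgd(data, phi, n_features, alpha=0.05, n_updates=10_000, seed=0):
    """Stochastic gradient descent with experience replay for linear V_theta.
    `data` is a list of (state, target_value) pairs."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(n_features)
    for _ in range(n_updates):
        s, v_target = data[rng.integers(len(data))]   # sample from experience
        phi_s = phi(s)
        theta += alpha * (v_target - phi_s @ theta) * phi_s
    return theta
```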

Linear Least Squares Prediction
Experience replay finds the least squares solution, but it may take many iterations. Using linear value function approximation, $V_\theta(s) = \phi(s)^{\top}\theta$, we can instead solve for the least squares solution directly.

Linear Least Squares Prediction (2)
At the minimum of LS(θ), the expected update must be zero:
$$\mathbb{E}_{\mathcal{D}}[\Delta\theta] = 0 \;\Rightarrow\; \sum_{t=1}^{T}\phi(s_t)\big(v_t^\pi - \phi(s_t)^{\top}\theta\big) = 0 \;\Rightarrow\; \sum_{t=1}^{T}\phi(s_t)v_t^\pi = \sum_{t=1}^{T}\phi(s_t)\phi(s_t)^{\top}\theta$$
$$\theta = \left(\sum_{t=1}^{T}\phi(s_t)\phi(s_t)^{\top}\right)^{-1}\sum_{t=1}^{T}\phi(s_t)v_t^\pi$$
For N features, the direct solution time is O(N^3); incremental solution time is O(N^2) using the Sherman-Morrison formula.
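A minimal sketch of the direct solve; the small ridge term is an addition for numerical stability and is not part of the slides:

```python
import numpy as np

def linear_ls_solve(states, targets, phi, n_features, reg=1e-6):
    """Direct least-squares fit: solve (sum phi phi^T) theta = sum phi * v."""
    A = reg * np.eye(n_features)   # ridge term to guarantee invertibility (assumption)
    b = np.zeros(n_features)
    for s, v in zip(states, targets):
        p = phi(s)
        A += np.outer(p, p)
        b += p * v
    return np.linalg.solve(A, b)
```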

Linear Least Squares Prediction Algorithms
We do not know the true values $v_t^\pi$; in practice, our training data must use noisy or biased samples of $v_t^\pi$:
- LSMC: Least Squares Monte Carlo uses the return, $v_t^\pi \approx v_t$
- LSTD: Least Squares Temporal Difference uses the TD target, $v_t^\pi \approx r_{t+1} + \gamma V_\theta(s_{t+1})$
- LSTD(λ): Least Squares TD(λ) uses the λ-return, $v_t^\pi \approx v_t^\lambda$
In each case, solve directly for the fixed point of MC/TD/TD(λ).

Linear Least Squares Prediction Algorithms (2)
LSMC:
$$0 = \sum_{t=1}^{T} \alpha (v_t - V_\theta(s_t))\phi(s_t) \;\Rightarrow\; \theta = \left(\sum_{t=1}^{T}\phi(s_t)\phi(s_t)^{\top}\right)^{-1}\sum_{t=1}^{T}\phi(s_t)v_t$$
LSTD:
$$0 = \sum_{t=1}^{T} \alpha (r_{t+1} + \gamma V_\theta(s_{t+1}) - V_\theta(s_t))\phi(s_t) \;\Rightarrow\; \theta = \left(\sum_{t=1}^{T}\phi(s_t)\big(\phi(s_t) - \gamma\phi(s_{t+1})\big)^{\top}\right)^{-1}\sum_{t=1}^{T}\phi(s_t)r_{t+1}$$
LSTD(λ):
$$0 = \sum_{t=1}^{T} \alpha\,\delta_t\,e_t \;\Rightarrow\; \theta = \left(\sum_{t=1}^{T} e_t\big(\phi(s_t) - \gamma\phi(s_{t+1})\big)^{\top}\right)^{-1}\sum_{t=1}^{T} e_t r_{t+1}$$
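A sketch of batch LSTD from a fixed set of transitions; the transition format, feature map, and ridge term are assumptions of this sketch:

```python
import numpy as np

def lstd(transitions, phi, n_features, gamma=0.99, reg=1e-6):
    """Batch LSTD: theta = A^{-1} b with
    A = sum phi(s)(phi(s) - gamma*phi(s'))^T and b = sum phi(s)*r.
    `transitions` is a list of (s, r, s_next, done) tuples."""
    A = reg * np.eye(n_features)   # small ridge term for invertibility (assumption)
    b = np.zeros(n_features)
    for s, r, s_next, done in transitions:
        p = phi(s)
        p_next = np.zeros(n_features) if done else phi(s_next)
        A += np.outer(p, p - gamma * p_next)
        b += p * r
    return np.linalg.solve(A, b)
```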

Convergence of Linear Least Squares Prediction Algorithms

On/Off-Policy | Algorithm | Table Lookup | Linear
On-Policy     | LSMC      | OK           | OK
On-Policy     | LSTD(0)   | OK           | OK
On-Policy     | LSTD(λ)   | OK           | OK
Off-Policy    | LSMC      | OK           | OK
Off-Policy    | LSTD(0)   | OK           | OK
Off-Policy    | LSTD(λ)   | OK           | OK

Least Squares Policy Iteration
- Policy evaluation: least squares policy evaluation, $Q_\theta \approx Q^\pi$
- Policy improvement: ε-greedy policy improvement

Least Squares Action-Value Function Approximation
Approximate Q(s, a) using a linear combination of features:
$$Q_\theta(s,a) = \phi(s,a)^{\top}\theta \approx Q^\pi(s,a)$$
Minimize the least squares error between the approximate action-value function $Q_\theta(s,a)$ and the true action-value function $Q^\pi(s,a)$. The LSTDQ algorithm minimizes the least squares TD error: the expected SARSA update must be zero at the TD fixed point,
$$0 = \sum_{t=1}^{T}\alpha\,\delta_t\,\phi(s_t,a_t) = \sum_{t=1}^{T}\alpha\big(r_{t+1} + \gamma Q_\theta(s_{t+1},a_{t+1}) - Q_\theta(s_t,a_t)\big)\phi(s_t,a_t)$$
$$\theta = \left(\sum_{t=1}^{T}\phi(s_t,a_t)\big(\phi(s_t,a_t) - \gamma\phi(s_{t+1},a_{t+1})\big)^{\top}\right)^{-1}\sum_{t=1}^{T}\phi(s_t,a_t)\,r_{t+1}$$
Similarly for LSMCQ and LSTDQ(λ).

Least Squares Policy Iteration Algorithm
The following pseudocode uses LSTDQ for policy evaluation. It repeatedly re-evaluates the same experience D with different policies.

function LSPI-TD(D, π0)
    π' ← π0
    repeat
        π ← π'
        Q ← LSTDQ(π, D)
        for all s ∈ S do
            π'(s) ← arg max_{a ∈ A} Q(s, a)
        end for
    until π ≈ π'
    return π
end function
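A compact Python sketch of LSPI built on an LSTDQ solve; the transition format, the state-action feature map `phi_sa`, and the fixed iteration count (in place of the π ≈ π' test above) are assumptions of this sketch:

```python
import numpy as np

def lstdq(transitions, policy, phi_sa, n_features, gamma=0.99, reg=1e-6):
    """LSTDQ: evaluate `policy` from a fixed batch of (s, a, r, s', done) tuples."""
    A = reg * np.eye(n_features)
    b = np.zeros(n_features)
    for s, a, r, s_next, done in transitions:
        p = phi_sa(s, a)
        p_next = np.zeros(n_features) if done else phi_sa(s_next, policy(s_next))
        A += np.outer(p, p - gamma * p_next)
        b += p * r
    return np.linalg.solve(A, b)

def lspi(transitions, phi_sa, n_features, n_actions, gamma=0.99, n_iters=20):
    """Least Squares Policy Iteration: alternate LSTDQ evaluation and greedy improvement."""
    theta = np.zeros(n_features)

    def greedy(s):
        # Greedy policy with respect to the current theta
        return int(np.argmax([phi_sa(s, a) @ theta for a in range(n_actions)]))

    for _ in range(n_iters):
        theta = lstdq(transitions, greedy, phi_sa, n_features, gamma)
    return theta, greedy
```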

Convergence of Control Algorithms (Least Squares)

Algorithm   | Table Lookup | Linear
LSPI-MC     | OK           | OK
LSPI-TD(0)  | OK           | OK
LSPI-TD(λ)  | OK           | OK

Chain Walk Example
Consider the 50-state version of this problem:
- Reward +1 in states 10 and 41, 0 elsewhere
- Optimal policy: R (1-9), L (10-25), R (26-41), L (42-50)
- Features: 10 evenly spaced Gaussians (σ = 4) for each action
- Experience: 10,000 steps from a random walk policy

LSPI in Chain Walk Action Value Function

LSPI in Chain Walk Policy

Fitted Q-Iteration
- Implements fitted value iteration
- Given a dataset of experience tuples D, solve a sequence of regression problems
- At iteration i, build an approximation $\hat{Q}_i$ over a dataset obtained by applying $(T\hat{Q}_{i-1})(s,a)$
- Allows a large class of regressors (averagers), e.g. kernel averaging, regression trees, fuzzy regression
- With other regressors it may diverge; in practice, good results are also obtained with neural networks

Fitted Q-Iteration Algorithm
Input: a set of four-tuples $D = \{\langle s_i, a_i, r_{i+1}, s_{i+1}\rangle\}_{i=1}^{L}$ and a regression algorithm
Initialize: set N ← 0 and $\hat{Q}_N(s,a) = 0$ for all s, a
repeat
    N ← N + 1
    Build a training set $TS = \{\langle (s_i, a_i),\; r_{i+1} + \gamma \max_{a \in A}\hat{Q}_{N-1}(s_{i+1}, a)\rangle\}_{i=1}^{L}$
    Use the regression algorithm on TS to build $\hat{Q}_N(s,a)$
until the stopping condition is met
Stopping condition: a fixed number of iterations, or when the distance between $\hat{Q}_N$ and $\hat{Q}_{N-1}$ drops below a threshold
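A sketch of this loop, assuming transitions of the form (s, a, r, s', done) with vector-valued (or scalar) states and any regressor exposing fit/predict; these interfaces are assumptions, not part of the slides:

```python
import numpy as np

def fitted_q_iteration(transitions, make_regressor, n_actions, gamma=0.99, n_iters=50):
    """Fitted Q-Iteration sketch. `make_regressor()` returns a fresh regressor
    with fit(X, y) / predict(X), e.g. a tree ensemble."""
    X = np.array([np.append(s, a) for s, a, _, _, _ in transitions])
    rewards = np.array([r for _, _, r, _, _ in transitions])
    next_states = np.array([s_next for _, _, _, s_next, _ in transitions])
    dones = np.array([done for _, _, _, _, done in transitions])

    q_model = None
    for _ in range(n_iters):
        if q_model is None:
            targets = rewards                     # Q_0 = 0, so the first targets are the rewards
        else:
            # max over actions of Q_{N-1}(s', a) for every transition
            q_next = np.column_stack([
                q_model.predict(np.column_stack([next_states,
                                                 np.full(len(next_states), a)]))
                for a in range(n_actions)])
            targets = rewards + gamma * (1.0 - dones) * q_next.max(axis=1)
        q_model = make_regressor()
        q_model.fit(X, targets)                   # regress (s, a) -> TQ_{N-1} target
    return q_model
```

For instance, `make_regressor` could be something like `lambda: ExtraTreesRegressor(n_estimators=50)` from scikit-learn, in line with the tree-based regressors mentioned above.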