Reinforcement Learning


1 Reinforcement Learning: RL in continuous MDPs (March-April 2015)

2 Large/Continuous MDPs Large/continuous state space: a tabular representation cannot be used. Large/continuous action space: maximization over the action space is problematic. Continuous time: Hamilton-Jacobi-Bellman equation.

3 Function Approximation in RL: Why? Reinforcement learning can be used to solve large problems, e.g. Backgammon: about 10^20 states; Computer Go: about 10^170 states; Helicopter: continuous state space. How can we scale up the model-free methods for prediction and control?

4 Value Function Approximation So far we have represented the value function by a lookup table: every state s has an entry V(s), or every state-action pair (s, a) has an entry Q(s, a). Problem with large MDPs: there are too many states and/or actions to store in memory, and it is too slow to learn the value of each state individually. Solution for large MDPs: estimate the value function with function approximation, $V_\theta(s) \approx V^\pi(s)$, $Q_\theta(s, a) \approx Q^\pi(s, a)$; generalize from seen states to unseen states; update the parameter $\theta$ using MC or TD learning.

5 Which Function Approximation? There are many function approximators, e.g. artificial neural networks, decision trees, nearest neighbor, Fourier/wavelet bases, coarse coding. In principle, any function approximator can be used. However, the choice may be affected by some properties of RL: experience is not i.i.d. (successive timesteps are correlated); during control, the value function $V^\pi(s)$ is non-stationary; the agent's actions affect the subsequent data it receives; feedback is delayed, not instantaneous.

6 Gradient Descent Let $L(\theta)$ be a differentiable function of the parameter vector $\theta$. Define the gradient of $L(\theta)$ to be
$$\nabla_\theta L(\theta) = \left( \frac{\partial L(\theta)}{\partial \theta_1}, \ldots, \frac{\partial L(\theta)}{\partial \theta_n} \right)^T$$
To find a local minimum of $L(\theta)$, adjust the parameter $\theta$ in the direction of the negative gradient,
$$\Delta\theta = -\frac{1}{2}\, \alpha\, \nabla_\theta L(\theta)$$
where $\alpha$ is a step-size parameter.
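A minimal sketch of the update above in Python, assuming a hand-supplied gradient function and an illustrative quadratic loss (the constant 1/2 is absorbed into the step size):

```python
import numpy as np

def gradient_descent(grad_L, theta0, alpha=0.1, n_steps=100):
    """Follow the negative gradient of L(θ) towards a local minimum."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_steps):
        theta -= alpha * grad_L(theta)   # Δθ ∝ -∇_θ L(θ)
    return theta

# Example: L(θ) = ||θ - c||², with gradient 2(θ - c); the minimum is at θ = c.
c = np.array([1.0, -2.0])
print(gradient_descent(lambda th: 2 * (th - c), theta0=np.zeros(2)))
```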

7 Value Function Approximation by Stochastic Gradient Descent Goal: find the parameter vector $\theta$ minimizing the mean squared error between the approximate value function $V_\theta(s)$ and the true value function $V^\pi(s)$,
$$L(\theta) = E_\pi\left[(V^\pi(s) - V_\theta(s))^2 \mid s_t = s\right]$$
Gradient descent finds a local minimum:
$$\Delta\theta = -\frac{1}{2}\, \alpha\, \nabla_\theta L(\theta) = \alpha\, E_\pi\left[(V^\pi(s) - V_\theta(s))\, \nabla_\theta V_\theta(s) \mid s_t = s\right]$$
Stochastic gradient descent samples the gradient:
$$\Delta\theta = \alpha\, (V^\pi(s) - V_\theta(s))\, \nabla_\theta V_\theta(s)$$
The expected update is equal to the full gradient update.

8 Feature Vectors Represent the state by a feature vector $\phi(s) = (\phi_1(s), \ldots, \phi_n(s))^T$. For example: distance of a robot from landmarks; trends in the stock market; piece and pawn configurations in chess.

9 Linear Value Function Approximation Represent the value function by a linear combination of features,
$$V_\theta(s) = \phi(s)^T \theta = \sum_{j=1}^{n} \phi_j(s)\, \theta_j$$
The objective function is quadratic in $\theta$,
$$J(\theta) = E_\pi\left[(V^\pi(s) - \phi(s)^T \theta)^2 \mid s_t = s\right]$$
so stochastic gradient descent converges to the global optimum. The update rule is particularly simple:
$$\nabla_\theta V_\theta(s) = \phi(s), \qquad \Delta\theta = \alpha\, (V^\pi(s) - V_\theta(s))\, \phi(s)$$
Update = step-size × prediction error × feature value.
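A sketch of the linear case in code, assuming a target value (the true value or a sampled stand-in) is available for the visited state; all names are illustrative:

```python
import numpy as np

def linear_value(theta, phi_s):
    """V_θ(s) = φ(s)ᵀθ"""
    return phi_s @ theta

def linear_sgd_update(theta, phi_s, target, alpha=0.01):
    """Δθ = α (target - V_θ(s)) φ(s): step-size × prediction error × feature value."""
    return theta + alpha * (target - linear_value(theta, phi_s)) * phi_s
```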

10 Table Lookup Features Table lookup is a special case of linear value function approximation, using the table-lookup features
$$\phi_{\text{table}}(s) = (\mathbf{1}(s = s_1), \ldots, \mathbf{1}(s = s_n))^T$$
The parameter vector $\theta$ gives the value of each individual state,
$$V(s) = (\mathbf{1}(s = s_1), \ldots, \mathbf{1}(s = s_n)) \cdot (\theta_1, \ldots, \theta_n)^T$$
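A tiny sketch of table-lookup features as one-hot vectors; with these features, $\phi(s)^T\theta$ reduces to reading the table entry θ[s]:

```python
import numpy as np

def table_lookup_features(state_index, n_states):
    """φ_table(s): indicator vector with a single 1 at the index of the current state."""
    phi = np.zeros(n_states)
    phi[state_index] = 1.0
    return phi
```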

11 Coarse Coding An example of linear value function approximation: coarse coding provides a large feature vector $\phi(s)$, and the parameter vector $\theta$ gives a value to each feature.

12 Generalization in Coarse Coding

13 Stochastic Gradient Descent with Coarse Coding

14 Tile Coding Binary feature for each tile. The number of features present at any one time is constant. Binary features mean the weighted sum is easy to compute. It is easy to compute the indices of the features present, as sketched below.
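A minimal sketch of tile coding for a one-dimensional state, assuming a few uniformly offset tilings over a bounded range; the tiling counts and offsets are illustrative, not part of the lecture:

```python
import numpy as np

def tile_indices(x, n_tilings=4, tiles_per_tiling=10, lo=0.0, hi=1.0):
    """Return one active tile index per tiling for a scalar state x in [lo, hi]."""
    x = (x - lo) / (hi - lo)                            # normalize to [0, 1]
    active = []
    for t in range(n_tilings):
        offset = t / (n_tilings * tiles_per_tiling)     # each tiling is slightly shifted
        tile = min(int((x + offset) * tiles_per_tiling), tiles_per_tiling - 1)
        active.append(t * tiles_per_tiling + tile)      # flat index into the feature vector
    return active                                       # exactly n_tilings features are active

def tile_features(x, n_tilings=4, tiles_per_tiling=10):
    """Binary feature vector: the weighted sum θᵀφ(s) is just a sum over active indices."""
    phi = np.zeros(n_tilings * tiles_per_tiling)
    phi[tile_indices(x, n_tilings, tiles_per_tiling)] = 1.0
    return phi
```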

15 Tile Coding Irregular tilings. Hashing. CMAC [Albus, 1971]: Cerebellar Model Articulation Controller.

16 Radial Basis Functions (RBFs) e.g., Gaussians:
$$\phi_i(s) = \exp\left(-\frac{\lVert s - c_i \rVert^2}{2\sigma_i^2}\right)$$
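A short sketch of Gaussian RBF features; the centers and widths below are illustrative placeholders:

```python
import numpy as np

def rbf_features(s, centers, sigmas):
    """φ_i(s) = exp(-||s - c_i||² / (2σ_i²)) for each center c_i."""
    dists = np.linalg.norm(centers - np.atleast_1d(s), axis=1)
    return np.exp(-dists**2 / (2.0 * sigmas**2))

# Example: 10 evenly spaced centers on [0, 1], all with the same width.
centers = np.linspace(0.0, 1.0, 10).reshape(-1, 1)
phi = rbf_features(0.3, centers, sigmas=np.full(10, 0.1))
```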

17 Prediction Algorithm We have assumed that the true value function $V^\pi(s)$ is given by a supervisor, but in RL there is no supervisor, only rewards. In practice, we substitute a target for $V^\pi(s)$. For MC, the target is the return $v_t$:
$$\Delta\theta = \alpha\, (v_t - V_\theta(s))\, \nabla_\theta V_\theta(s)$$
For TD(0), the target is the TD target $r + \gamma V(s')$:
$$\Delta\theta = \alpha\, (r + \gamma V(s') - V_\theta(s))\, \nabla_\theta V_\theta(s)$$
For TD(λ), the target is the λ-return $v_t^\lambda$:
$$\Delta\theta = \alpha\, (v_t^\lambda - V_\theta(s))\, \nabla_\theta V_\theta(s)$$

18 Monte Carlo with Value Function Approximation The return $v_t$ is an unbiased, noisy sample of the true value $V^\pi(s_t)$. We can therefore apply supervised learning to the training data $\langle s_1, v_1 \rangle, \langle s_2, v_2 \rangle, \ldots, \langle s_T, v_T \rangle$. For example, using linear Monte Carlo policy evaluation,
$$\Delta\theta = \alpha\, (v_t - V_\theta(s_t))\, \nabla_\theta V_\theta(s_t) = \alpha\, (v_t - V_\theta(s_t))\, \phi(s_t)$$
Monte Carlo evaluation converges to a local optimum, even when using non-linear value function approximation.
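A sketch of linear gradient Monte Carlo evaluation over a single episode, assuming the episode is given as parallel lists of feature vectors φ(s_t) and rewards r_{t+1}:

```python
import numpy as np

def mc_evaluate_episode(theta, features, rewards, alpha=0.01, gamma=1.0):
    """Linear gradient MC: move θ towards the observed return from each visited state."""
    G = 0.0
    # Sweep backwards so the return v_t is available in a single pass.
    for phi_s, r in zip(reversed(features), reversed(rewards)):
        G = r + gamma * G                                 # return v_t
        theta = theta + alpha * (G - phi_s @ theta) * phi_s
    return theta
```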

19 TD Learning with Value Function Approximation The TD target $r_{t+1} + \gamma V_\theta(s_{t+1})$ is a biased sample of the true value $V^\pi(s_t)$. We can still apply supervised learning to the training data $\langle s_1, r_2 + \gamma V_\theta(s_2) \rangle, \langle s_2, r_3 + \gamma V_\theta(s_3) \rangle, \ldots, \langle s_{T-1}, r_T \rangle$. For example, using linear TD(0),
$$\Delta\theta = \alpha\, (r + \gamma V_\theta(s') - V_\theta(s))\, \nabla_\theta V_\theta(s) = \alpha\, \delta\, \phi(s)$$
Linear TD(0) converges (close) to the global optimum.
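The corresponding one-step update as a small sketch (names illustrative):

```python
import numpy as np

def td0_update(theta, phi_s, reward, phi_next, alpha=0.01, gamma=0.99, terminal=False):
    """One linear TD(0) step: Δθ = α δ φ(s) with δ = r + γ V_θ(s') - V_θ(s)."""
    v_next = 0.0 if terminal else phi_next @ theta
    delta = reward + gamma * v_next - phi_s @ theta
    return theta + alpha * delta * phi_s
```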

20 TD(λ) with Value Function Approximation The λ-return $v_t^\lambda$ is also a biased sample of the true value $V^\pi(s_t)$. We can again apply supervised learning to the training data $\langle s_1, v_1^\lambda \rangle, \langle s_2, v_2^\lambda \rangle, \ldots, \langle s_{T-1}, v_{T-1}^\lambda \rangle$. Forward-view linear TD(λ):
$$\Delta\theta = \alpha\, (v_t^\lambda - V_\theta(s_t))\, \nabla_\theta V_\theta(s_t) = \alpha\, (v_t^\lambda - V_\theta(s_t))\, \phi(s_t)$$
Backward-view linear TD(λ):
$$\delta_t = r_{t+1} + \gamma V(s_{t+1}) - V(s_t), \qquad e_t = \gamma\lambda e_{t-1} + \phi(s_t), \qquad \Delta\theta = \alpha\, \delta_t\, e_t$$
The forward view and backward view of linear TD(λ) are equivalent.
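A sketch of the backward view with accumulating eligibility traces; the caller keeps the trace vector e between steps and resets it at the start of each episode:

```python
import numpy as np

def td_lambda_update(theta, e, phi_s, reward, phi_next,
                     alpha=0.01, gamma=0.99, lam=0.8, terminal=False):
    """One backward-view linear TD(λ) step; returns the updated (θ, e)."""
    v_next = 0.0 if terminal else phi_next @ theta
    delta = reward + gamma * v_next - phi_s @ theta     # δ_t
    e = gamma * lam * e + phi_s                         # e_t = γλ e_{t-1} + φ(s_t)
    return theta + alpha * delta * e, e
```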

21 Control with Value Function Approximation Policy evaluation: approximate policy evaluation, $Q_\theta \approx Q^\pi$. Policy improvement: ε-greedy policy improvement.

22 Action Value Function Approximation Approximate the action-value function, $Q_\theta(s, a) \approx Q^\pi(s, a)$. Minimize the mean squared error between the approximate action-value function $Q_\theta(s, a)$ and the true action-value function $Q^\pi(s, a)$. Use stochastic gradient descent to find a local minimum:
$$-\frac{1}{2}\nabla_\theta J(\theta) = (Q^\pi(s, a) - Q_\theta(s, a))\, \nabla_\theta Q_\theta(s, a), \qquad \Delta\theta = \alpha\, (Q^\pi(s, a) - Q_\theta(s, a))\, \nabla_\theta Q_\theta(s, a)$$

23 Linear Action Value Function Approximation Represent state and action by a feature vector $\phi(s, a) = (\phi_1(s, a), \ldots, \phi_n(s, a))^T$. Represent the action-value function by a linear combination of features,
$$Q_\theta(s, a) = \phi(s, a)^T \theta = \sum_{j=1}^{n} \phi_j(s, a)\, \theta_j$$
Stochastic gradient descent update:
$$\nabla_\theta Q_\theta(s, a) = \phi(s, a), \qquad \Delta\theta = \alpha\, (Q^\pi(s, a) - Q_\theta(s, a))\, \phi(s, a)$$

24 Control Algorithms As in prediction, we must substitute a target for $Q^\pi(s, a)$. For MC, the target is the return $v_t$:
$$\Delta\theta = \alpha\, (v_t - Q_\theta(s_t, a_t))\, \phi(s_t, a_t)$$
For TD(0), the target is the TD target $r_{t+1} + \gamma Q(s_{t+1}, a_{t+1})$:
$$\Delta\theta = \alpha\, (r_{t+1} + \gamma Q(s_{t+1}, a_{t+1}) - Q_\theta(s_t, a_t))\, \phi(s_t, a_t)$$
For forward-view TD(λ), the target is the λ-return $v_t^\lambda$:
$$\Delta\theta = \alpha\, (v_t^\lambda - Q_\theta(s_t, a_t))\, \phi(s_t, a_t)$$
For backward-view TD(λ), the equivalent update is
$$\delta_t = r_{t+1} + \gamma Q_\theta(s_{t+1}, a_{t+1}) - Q_\theta(s_t, a_t), \qquad e_t = \gamma\lambda e_{t-1} + \phi(s_t, a_t), \qquad \Delta\theta = \alpha\, \delta_t\, e_t$$

25 GPI Linear Gradient Descent SARSA(0)

Initialize θ arbitrarily
loop {for each episode}
    s, a ← initial state and action of episode
    Q_a ← θᵀ φ(s, a)
    repeat {for each step of the episode}
        ∇Q_a ← φ(s, a)
        Take action a, observe reward r and next state s
        δ ← r − Q_a
        Uniformly draw a number ρ ∈ [0, 1]
        if ρ ≤ 1 − ε then
            for all a ∈ A(s) do
                Q_a ← θᵀ φ(s, a)
            end for
            a ← arg max_a Q_a
        else
            a ← a random action in A(s)
        end if
        Q_a ← θᵀ φ(s, a)
        δ ← δ + γ Q_a
        θ ← θ + α δ ∇Q_a
    until s is terminal
end loop
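A Python sketch of the same loop, assuming an environment whose reset() returns a state and whose step(a) returns (next_state, reward, done), plus a state-action feature function phi(s, a); these names are assumptions, not part of the pseudocode above:

```python
import numpy as np

def epsilon_greedy(theta, phi, s, actions, epsilon, rng):
    """Pick argmax_a θᵀφ(s, a) with probability 1 - ε, otherwise a random action."""
    if rng.random() <= 1.0 - epsilon:
        q = [theta @ phi(s, a) for a in actions]
        return actions[int(np.argmax(q))]
    return actions[rng.integers(len(actions))]

def linear_sarsa0(env, phi, actions, n_features, n_episodes=100,
                  alpha=0.01, gamma=0.99, epsilon=0.1, seed=0):
    """Linear gradient-descent SARSA(0) with ε-greedy action selection."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(n_features)
    for _ in range(n_episodes):
        s = env.reset()
        a = epsilon_greedy(theta, phi, s, actions, epsilon, rng)
        done = False
        while not done:
            grad = phi(s, a)                           # ∇Q_a = φ(s, a)
            delta = -(grad @ theta)                    # start with δ = -Q_a
            s, reward, done = env.step(a)              # s is overwritten by the next state
            delta += reward                            # δ = r - Q_a
            a = epsilon_greedy(theta, phi, s, actions, epsilon, rng)
            if not done:
                delta += gamma * (theta @ phi(s, a))   # δ += γ Q_a'
            theta = theta + alpha * delta * grad       # θ ← θ + α δ ∇Q_a
    return theta
```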

26 GPI Linear Gradient Descent Watkins Q(0)

Initialize θ arbitrarily
loop {for each episode}
    s, a ← initial state and action of episode
    Q_a ← θᵀ φ(s, a)
    repeat {for each step of the episode}
        ∇Q_a ← φ(s, a)
        Take action a, observe reward r and next state s
        δ ← r − Q_a
        for all a ∈ A(s) do
            Q_a ← θᵀ φ(s, a)
        end for
        δ ← δ + γ max_a Q_a
        θ ← θ + α δ ∇Q_a
        Uniformly draw a number ρ ∈ [0, 1]
        if ρ ≤ 1 − ε then
            for all a ∈ A(s) do
                Q_a ← θᵀ φ(s, a)
            end for
            a ← arg max_a Q_a
        else
            a ← a random action in A(s)
        end if
        Q_a ← θᵀ φ(s, a)
    until s is terminal
end loop

27 Linear SARSA with Coarse Coding in Mountain Car

28 Linear SARSA with Radial Basis Functions in Mountain Car

29 Study of λ: Should We Bootstrap?

30 Convergence Questions The previous results show it is desirable to bootstrap, but now we consider convergence issues. When do incremental prediction algorithms converge? When using bootstrapping (i.e., TD with λ < 1)? When using linear function approximation? When using off-policy learning? Ideally, we would like algorithms that converge in all cases.

31 Baird's Counterexample [Figure: star MDP with linear features. Five upper states have approximate values V_k(s) = θ(7) + 2θ(i), i = 1, ..., 5, and the lower state has V_k(s) = 2θ(7) + θ(6); transitions marked 100% and 99%, with a 1% transition to a terminal state.] The reward is always zero.

32 Parameter Divergence in Baird's Counterexample [Figure: parameter values θ_k(i) over iterations, log scale, broken at ±1.] Using the uniform distribution, γ = 0.99, α = 0.01, θ_0 = [1, 1, 1, 1, 1, 10, 1]^T.

33 Another Example Baird's counterexample has two simple fixes: use the on-policy distribution, or, instead of taking small steps towards the expected one-step returns, change the value function to its best least-squares approximation. The latter works if the feature vectors form a linearly independent set. When an exact solution is not possible, it does not work even if we take the best approximation at each iteration. [Figure: two-state MDP with approximate values θ and 2θ; the second state loops with probability 1 − ε and terminates with probability ε.]
$$\theta_{k+1} = \arg\min_{\theta \in \mathbb{R}} \sum_{s \in S} \left( V_\theta(s) - E_\pi[r_{t+1} + \gamma V_{\theta_k}(s_{t+1}) \mid s_t = s] \right)^2 = \arg\min_{\theta \in \mathbb{R}} \left[ \theta - 2\gamma\theta_k \right]^2 + \left[ 2\theta - 2(1-\varepsilon)\gamma\theta_k \right]^2 = \frac{6 - 4\varepsilon}{5}\, \gamma\, \theta_k$$

34 Convergence of Prediction Algorithms

On/Off-Policy   Algorithm   Table Lookup   Linear   Non-Linear
On-Policy       MC          OK             OK       OK
On-Policy       TD(0)       OK             OK       KO
On-Policy       TD(λ)       OK             OK       KO
Off-Policy      MC          OK             OK       OK
Off-Policy      TD(0)       OK             KO       KO
Off-Policy      TD(λ)       OK             KO       KO

35 Problematic Triumvirate Bootstrapping, off-policy learning, and non-linear function approximation. We have not quite achieved our ideal goal for prediction algorithms.

36 Convergence of Control Algorithms

Algorithm             Table Lookup   Linear
Monte Carlo Control   OK             OK
SARSA                 OK             (OK)
Q-learning            OK             KO

(OK) = chatters around the near-optimal value function

37 What Objective Function does TD(0) Optimize? Mean squared error:
$$\text{MSE}(\theta) = \lVert V_\theta - V^\pi \rVert_D^2 = E_{s \sim D}\left[(V_\theta(s) - V^\pi(s))^2\right]$$
Mean squared TD error:
$$\text{MSTDE}(\theta) = E_{D,P,\pi}\left[\delta_t^2 \mid s_t = s\right]$$
Mean squared Bellman error:
$$\text{MSBE}(\theta) = \lVert V_\theta - T^\pi V_\theta \rVert_D^2 = E_{s \sim D}\left[E_{P,\pi}[\delta_t \mid s_t = s]^2\right]$$
Mean squared projected Bellman error:
$$\text{MSPBE}(\theta) = \lVert V_\theta - \Pi T^\pi V_\theta \rVert_D^2$$

38 Split-A Example Solution minimizing MSE: the correct solution, V(A) = 0.5, V(B) = 1, V(C) = 0. Solution minimizing MSTDE (with table lookup): V(A) = 0.5, V(B) = 0.75, V(C) = 0.25. Solution minimizing MSBE (with function approximation): V(A) = 0.5, V(B) = 0.75, V(C) = 0.25. Solution minimizing MSPBE: the correct solution, V(A) = 0.5, V(B) = 1, V(C) = 0.

39 TD(0) Objective TD(0) finds the solution minimizing the MSPBE, but TD does not follow the gradient of any objective function. This is why TD(0) can diverge when off-policy or when using non-linear function approximation.

40 Gradient Temporal Difference Learning Gradient TD learning explicitly follows the gradient of the MSPBE,
$$\Delta\theta = -\frac{1}{2}\, \alpha\, \nabla_\theta \lVert V_\theta - \Pi T^\pi V_\theta \rVert_D^2$$
The MSPBE is optimized at the TD(0) fixed point, so gradient descent on the MSPBE finds the TD(0) fixed point robustly.

41 Gradient Temporal Difference Learning Algorithm
$$\text{MSPBE}(\theta) = \lVert V_\theta - \Pi T^\pi V_\theta \rVert_D^2$$
$$-\frac{1}{2}\nabla_\theta \text{MSPBE}(\theta) = E[\delta\phi] - \gamma E[\phi'\phi^T]\, E[\phi\phi^T]^{-1} E[\delta\phi] \stackrel{\text{def}}{=} E[\delta\phi] - \gamma E[\phi'\phi^T]\, w$$
This leads to the linear TDC algorithm:
$$\Delta\theta = \alpha\left(\delta\phi - \gamma\phi'(\phi^T w)\right), \qquad \Delta w = \beta\left(\delta - \phi^T w\right)\phi$$
The term $-\gamma\phi'(\phi^T w)$ is a correction of the TD(0) update $\Delta\theta = \alpha\delta\phi$.
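A sketch of the TDC updates for one transition; the two step sizes α and β are illustrative, with β typically chosen larger than α:

```python
import numpy as np

def tdc_update(theta, w, phi_s, reward, phi_next,
               alpha=0.01, beta=0.05, gamma=0.99, terminal=False):
    """Linear TDC: θ follows the MSPBE gradient; w estimates E[φφᵀ]⁻¹E[δφ]."""
    if terminal:
        phi_next = np.zeros_like(phi_s)
    delta = reward + gamma * (phi_next @ theta) - phi_s @ theta
    theta = theta + alpha * (delta * phi_s - gamma * phi_next * (phi_s @ w))
    w = w + beta * (delta - phi_s @ w) * phi_s
    return theta, w
```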

42 Gradient Temporal Difference Learning Results

43 Convergence of Prediction Algorithms

On/Off-Policy   Algorithm   Table Lookup   Linear   Non-Linear
On-Policy       MC          OK             OK       OK
On-Policy       TD(0)       OK             OK       KO
On-Policy       TDC         OK             OK       OK
Off-Policy      MC          OK             OK       OK
Off-Policy      TD(0)       OK             KO       KO
Off-Policy      TDC         OK             OK       OK

44 Reinforcement Learning Gradient descent is simple and appealing, but it is not sample efficient. Batch methods seek to find the best-fitting value function given the agent's experience ("training data").

45 Least Squares Prediction Given a value function approximation $V_\theta(s) \approx V^\pi(s)$ and experience $D$ consisting of (state, value) pairs, $D = \{\langle s_1, v_1^\pi\rangle, \langle s_2, v_2^\pi\rangle, \ldots, \langle s_T, v_T^\pi\rangle\}$, which parameters $\theta$ give the best-fitting value function $V_\theta(s)$? Least squares algorithms find the parameter vector $\theta$ minimizing the sum of squared errors between $V_\theta(s_t)$ and the target values $v_t^\pi$,
$$LS(\theta) = \sum_{t=1}^{T} (v_t^\pi - V_\theta(s_t))^2 = E_D\left[(v^\pi - V_\theta(s))^2\right]$$

46 Stochastic Gradient Descent with Experience Replay Given experience consisting of (state, value) pairs, $D = \{\langle s_1, v_1^\pi\rangle, \ldots, \langle s_T, v_T^\pi\rangle\}$, repeat: 1. sample a (state, value) pair from the experience, $\langle s, v^\pi \rangle \sim D$; 2. apply the stochastic gradient descent update $\Delta\theta = \alpha\,(v^\pi - V_\theta(s))\, \nabla_\theta V_\theta(s)$. This converges to the least squares solution $\theta^\pi = \arg\min_\theta LS(\theta)$.
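A sketch of this loop, assuming the experience is stored as ⟨feature vector, target value⟩ pairs:

```python
import numpy as np

def replay_sgd(data, n_features, alpha=0.01, n_iters=10000, seed=0):
    """Experience replay: repeatedly sample a pair from D and apply the SGD update."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(n_features)
    for _ in range(n_iters):
        phi_s, target = data[rng.integers(len(data))]   # sample ⟨s, v^π⟩ ~ D
        theta += alpha * (target - phi_s @ theta) * phi_s
    return theta
```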

47 Linear Least Squares Prediction Experience replay finds the least squares solution, but it may take many iterations. Using linear value function approximation, $V_\theta(s) = \phi(s)^T\theta$, we can solve for the least squares solution directly.

48 Linear Least Squares Prediction (2) At the minimum of $LS(\theta)$, the expected update must be zero:
$$E_D[\Delta\theta] = 0 \;\Rightarrow\; \sum_{t=1}^{T} \phi(s_t)\,(v_t^\pi - \phi(s_t)^T\theta) = 0 \;\Rightarrow\; \sum_{t=1}^{T} \phi(s_t)\, v_t^\pi = \sum_{t=1}^{T} \phi(s_t)\phi(s_t)^T\,\theta$$
$$\theta = \left(\sum_{t=1}^{T} \phi(s_t)\phi(s_t)^T\right)^{-1} \sum_{t=1}^{T} \phi(s_t)\, v_t^\pi$$
For N features, the direct solution time is O(N^3); the incremental solution time is O(N^2) using Sherman-Morrison.
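A sketch of the O(N²) incremental variant: maintain the inverse of Σ φφᵀ with the Sherman-Morrison identity instead of re-solving; the small ridge term used to initialize the inverse is an assumption for numerical stability, not part of the slide:

```python
import numpy as np

class IncrementalLeastSquares:
    """Maintain θ = A⁻¹b with A = Σ φφᵀ (plus ridge) and b = Σ φ·target, in O(N²) per sample."""

    def __init__(self, n_features, ridge=1e-3):
        self.A_inv = np.eye(n_features) / ridge    # inverse of the initial ridge·I
        self.b = np.zeros(n_features)

    def update(self, phi, target):
        # Sherman-Morrison: (A + φφᵀ)⁻¹ = A⁻¹ - (A⁻¹φ)(A⁻¹φ)ᵀ / (1 + φᵀA⁻¹φ)
        Ainv_phi = self.A_inv @ phi
        self.A_inv -= np.outer(Ainv_phi, Ainv_phi) / (1.0 + phi @ Ainv_phi)
        self.b += phi * target

    @property
    def theta(self):
        return self.A_inv @ self.b
```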

49 Linear Least Squares Prediction Algorithms We do not know the true values $v_t^\pi$; in practice, our training data must use noisy or biased samples of $v_t^\pi$. LSMC (Least Squares Monte Carlo) uses the return, $v_t^\pi \approx v_t$. LSTD (Least Squares Temporal Difference) uses the TD target, $v_t^\pi \approx r_{t+1} + \gamma V_\theta(s_{t+1})$. LSTD(λ) (Least Squares TD(λ)) uses the λ-return, $v_t^\pi \approx v_t^\lambda$. In each case we solve directly for the fixed point of MC/TD/TD(λ).

50 Linear Least Squares Prediction Algorithms (2)
LSMC:
$$0 = \sum_{t=1}^{T} \alpha\, (v_t - V_\theta(s_t))\,\phi(s_t) \;\Rightarrow\; \theta = \left(\sum_{t=1}^{T} \phi(s_t)\phi(s_t)^T\right)^{-1} \sum_{t=1}^{T} \phi(s_t)\, v_t$$
LSTD:
$$0 = \sum_{t=1}^{T} \alpha\, (r_{t+1} + \gamma V_\theta(s_{t+1}) - V_\theta(s_t))\,\phi(s_t) \;\Rightarrow\; \theta = \left(\sum_{t=1}^{T} \phi(s_t)\,(\phi(s_t) - \gamma\phi(s_{t+1}))^T\right)^{-1} \sum_{t=1}^{T} \phi(s_t)\, r_{t+1}$$
LSTD(λ):
$$0 = \sum_{t=1}^{T} \alpha\, \delta_t\, e_t \;\Rightarrow\; \theta = \left(\sum_{t=1}^{T} e_t\,(\phi(s_t) - \gamma\phi(s_{t+1}))^T\right)^{-1} \sum_{t=1}^{T} e_t\, r_{t+1}$$
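A sketch of batch LSTD over a list of ⟨φ(s), r, φ(s')⟩ transitions; the ridge term is an assumption to keep the matrix invertible:

```python
import numpy as np

def lstd(transitions, n_features, gamma=0.99, ridge=1e-3):
    """Solve A θ = b with A = Σ φ(s)(φ(s) - γφ(s'))ᵀ and b = Σ φ(s) r."""
    A = ridge * np.eye(n_features)
    b = np.zeros(n_features)
    for phi_s, reward, phi_next in transitions:
        A += np.outer(phi_s, phi_s - gamma * phi_next)
        b += phi_s * reward
    return np.linalg.solve(A, b)
```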

51 Convergence of Linear Least Squares Prediction Algorithms

On/Off-Policy   Algorithm   Table Lookup   Linear
On-Policy       LSMC        OK             OK
On-Policy       LSTD(0)     OK             OK
On-Policy       LSTD(λ)     OK             OK
Off-Policy      LSMC        OK             OK
Off-Policy      LSTD(0)     OK             OK
Off-Policy      LSTD(λ)     OK             OK

52 Least Squares Policy Iteration Policy evaluation: least squares policy evaluation, $Q_\theta \approx Q^\pi$. Policy improvement: ε-greedy policy improvement.

53 Least Squares Action Value Function Approximation Approximate Q(s, a) using a linear combination of features, $Q_\theta(s, a) = \phi(s, a)^T\theta \approx Q^\pi(s, a)$. Minimize the least squares error between the approximate action-value function $Q_\theta(s, a)$ and the true action-value function $Q^\pi(s, a)$. The LSTDQ algorithm minimizes the least squares TD error: the expected SARSA update must be zero at the TD fixed point,
$$0 = \sum_{t=1}^{T} \alpha\,\delta_t\,\phi(s_t, a_t) = \sum_{t=1}^{T} \alpha\,(r_{t+1} + \gamma Q_\theta(s_{t+1}, a_{t+1}) - Q_\theta(s_t, a_t))\,\phi(s_t, a_t)$$
$$\theta = \left(\sum_{t=1}^{T} \phi(s_t, a_t)\,(\phi(s_t, a_t) - \gamma\phi(s_{t+1}, a_{t+1}))^T\right)^{-1} \sum_{t=1}^{T} \phi(s_t, a_t)\, r_{t+1}$$
Similarly for LSMCQ and LSTDQ(λ).

54 Least Squares Policy Iteration Algorithm The following pseudocode uses LSTDQ for policy evaluation. It repeatedly re-evaluates the experience D with different policies.

function LSPI-TD(D, π_0)
    π' ← π_0
    repeat
        π ← π'
        Q ← LSTDQ(π, D)
        for all s ∈ S do
            π'(s) ← arg max_{a ∈ A} Q(s, a)
        end for
    until π' ≈ π
    return π
end function
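A sketch of LSPI over a fixed dataset of ⟨s, a, r, s'⟩ samples, assuming a state-action feature function phi(s, a) and a finite action set; representing the policy implicitly by its parameter vector is a design choice of this sketch:

```python
import numpy as np

def lstdq(data, theta, phi, actions, n_features, gamma=0.99, ridge=1e-3):
    """Evaluate the greedy policy w.r.t. θ by solving the LSTDQ linear system."""
    A = ridge * np.eye(n_features)
    b = np.zeros(n_features)
    for s, a, r, s_next in data:
        phi_sa = phi(s, a)
        a_next = max(actions, key=lambda ap: phi(s_next, ap) @ theta)   # greedy action in s'
        A += np.outer(phi_sa, phi_sa - gamma * phi(s_next, a_next))
        b += phi_sa * r
    return np.linalg.solve(A, b)

def lspi(data, phi, actions, n_features, n_iters=20, tol=1e-4):
    """Least Squares Policy Iteration: alternate LSTDQ evaluation and greedy improvement."""
    theta = np.zeros(n_features)
    for _ in range(n_iters):
        theta_new = lstdq(data, theta, phi, actions, n_features)
        if np.linalg.norm(theta_new - theta) < tol:    # parameters (hence policy) converged
            return theta_new
        theta = theta_new
    return theta
```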

55 Convergence of Control Algorithms

Algorithm     Table Lookup   Linear
LSPI-MC       OK             OK
LSPI-TD(0)    OK             OK
LSPI-TD(λ)    OK             OK

56 Chain Walk Example Consider the 50-state version of this problem. Reward +1 in states 10 and 41, 0 elsewhere. Optimal policy: R (1-9), L (10-25), R (26-41), L (42-50). Features: 10 evenly spaced Gaussians (σ = 4) for each action. Experience: 10,000 steps from a random-walk policy.

57 LSPI in Chain Walk Action Value Function

58 LSPI in Chain Walk Policy

59 Fitted Q Iteration Implements fitted value iteration. Given a dataset of experience tuples D, solve a sequence of regression problems. At iteration i, build an approximation $\hat{Q}_i$ over a dataset obtained by applying the empirical Bellman operator, $(T\hat{Q}_{i-1})(s, a)$. This allows the use of a large class of regressors (averagers), e.g. kernel averaging, regression trees, fuzzy regression. With other regressors it may diverge; in practice, good results are also obtained with neural networks.

60 Fitted Q Iteration Algorithm

Input: a set of four-tuples $D = \{\langle s_i, a_i, r_{i+1}, s_{i+1} \rangle\}_{i=1}^{L}$ and a regression algorithm
Initialize: set N ← 0 and $\hat{Q}_N(s, a) = 0$ for all s, a
repeat
    N ← N + 1
    Build a training set $TS = \{\langle (s_i, a_i),\; r_{i+1} + \gamma \max_{a \in A} \hat{Q}_{N-1}(s_{i+1}, a) \rangle\}_{i=1}^{L}$
    Use the regression algorithm on TS to build $\hat{Q}_N(s, a)$
until stopping condition

Stopping condition: a fixed number of iterations, or when the distance between $\hat{Q}_N$ and $\hat{Q}_{N-1}$ drops below a threshold.
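A sketch of the algorithm with a tree-ensemble regressor (a common choice for FQI), assuming vector states, numeric actions, and no special handling of terminal transitions; sklearn's ExtraTreesRegressor stands in for the generic regression algorithm:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def fitted_q_iteration(data, actions, n_iters=50, gamma=0.99):
    """FQI over ⟨s, a, r, s'⟩ tuples: repeatedly regress on bootstrapped Q targets."""
    S      = np.array([np.atleast_1d(s)      for s, a, r, s_next in data])
    A      = np.array([[a]                   for s, a, r, s_next in data])
    R      = np.array([r                     for s, a, r, s_next in data])
    S_next = np.array([np.atleast_1d(s_next) for s, a, r, s_next in data])
    X = np.hstack([S, A])                            # regression inputs are (state, action) pairs
    q_hat = None                                     # corresponds to Q̂_0(s, a) = 0
    for _ in range(n_iters):
        if q_hat is None:
            targets = R                              # r + γ max_a Q̂_0(s', a) = r
        else:
            q_next = np.column_stack([               # Q̂_{N-1}(s', a) for every candidate action
                q_hat.predict(np.hstack([S_next, np.full((len(data), 1), a)]))
                for a in actions
            ])
            targets = R + gamma * q_next.max(axis=1)
        q_hat = ExtraTreesRegressor(n_estimators=50).fit(X, targets)
    return q_hat
```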
