Fast Gradient-Descent Methods for Temporal-Difference Learning with Linear Function Approximation


1 Fast Gradient-Descent Methods for Temporal-Difference Learning with Linear Function Approximation. Rich Sutton, University of Alberta; Hamid Maei, University of Alberta; Doina Precup, McGill University; Shalabh Bhatnagar, Indian Institute of Science, Bangalore; David Silver, University of Alberta; Csaba Szepesvari, University of Alberta; Eric Wiewiora, University of Alberta

2 a breakthrough in RL: function approximation in TD learning is now straightforward, as straightforward as it is in supervised learning; TD learning can now be done as gradient descent on a novel Bellman error objective

3 limitations (for this paper): linear function approximation; one-step TD methods (λ = 0); prediction (policy evaluation), not control

4 limitations (for this paper): linear function approximation; one-step TD methods (λ = 0); prediction (policy evaluation), not control. All of these are being removed in current work.

5 keys to the breakthrough: a new Bellman error objective function, and an algorithmic trick, a second set of weights used to estimate one of the sub-expectations and avoid the need for double sampling, introduced in prior work (Sutton, Szepesvari & Maei, 2008)

6 outline: ways in which TD with FA has not been straightforward; the new Bellman error objective function; derivation of new algorithms (the trick); results (theory and experiments)

7 TD+FA was not straightforward: with linear FA, off-policy methods such as Q-learning diverge on some problems (Baird, 1995); with nonlinear FA, even on-policy methods can diverge (Tsitsiklis & Van Roy, 1997); convergence is guaranteed only for one very important special case, linear FA with learning about the policy being followed; second-order or importance-sampling methods are complex, slow, or messy; there were no true gradient-descent methods

8 Baird's counterexample: a simple Markov chain with linear FA, all rewards zero; deterministic, expectation-based full backups (as in DP); each state updated once per sweep (as in DP); the weights can diverge to ±∞. [Figure: RMSPBE error versus sweeps for TD, GTD, GTD2, and TDC on this problem.]
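To make the divergence concrete, here is a minimal sketch of expectation-based, DP-style sweeps on a Baird-style star problem. The specific sizes (7 states, 8 parameters), the feature scheme, γ = 0.99, and the step size are assumptions taken from a common textbook formulation, not from the slide.

```python
import numpy as np

# Hypothetical 7-state, 8-parameter Baird-style star problem: all rewards are zero and,
# under the target policy, every transition goes to the "lower" state (index 6).
n_states, n_params, gamma, alpha = 7, 8, 0.99, 0.01

Phi = np.zeros((n_states, n_params))         # row s is the feature vector phi(s)
for s in range(6):
    Phi[s, s], Phi[s, 7] = 2.0, 1.0          # upper states: V(s) = 2*theta_s + theta_7
Phi[6, 6], Phi[6, 7] = 1.0, 2.0              # lower state:  V(6) = theta_6 + 2*theta_7

theta = np.ones(n_params)
theta[6] = 10.0                              # a typical initialisation in the literature

for sweep in range(5000):
    total = np.zeros(n_params)
    for s in range(n_states):
        # Expected TD(0) update from state s under the target policy (reward 0, next state 6):
        td_error = gamma * Phi[6] @ theta - Phi[s] @ theta
        total += td_error * Phi[s]
    theta += (alpha / n_states) * total      # one deterministic, DP-like sweep over all states

print(np.linalg.norm(theta))                 # keeps growing with more sweeps: the weights diverge
```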

9 outline: ways in which TD with FA has not been straightforward; the new Bellman error objective function; derivation of new algorithms (the trick); results (theory and experiments)

10 e.g., linear value-function approximation in Computer Go: a state s (roughly 10^35 states) is mapped to a binary feature vector φ_s (roughly 10^5 binary features); the estimated value is its inner product with a parameter vector θ (roughly 10^5 parameters), e.g., θ⊤φ_s = 3.

15 Notation
state transitions: s --r--> s′
feature vectors: φ, φ′ ∈ ℝⁿ, with n ≪ #states
approximate values: V_θ(s) = θ⊤φ, where θ ∈ ℝⁿ is the parameter vector
TD error: δ = r + γ θ⊤φ′ − θ⊤φ, with γ ∈ [0, 1)
TD(0) algorithm: Δθ = α δ φ, with α > 0
true values: V(s) = E[r | s] + γ Σ_{s′} P_{ss′} V(s′)
Bellman operator (over per-state value vectors): TV = R + γPV, and the true values satisfy V = TV
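As a concrete reference point for the notation above, a minimal sketch of linear TD(0); the two-state alternating chain used to exercise it is a made-up example, not one from the talk.

```python
import numpy as np

def td0_update(theta, phi, r, phi_next, alpha=0.05, gamma=0.9):
    """One linear TD(0) step: delta = r + gamma*theta'phi' - theta'phi, then theta += alpha*delta*phi."""
    delta = r + gamma * theta @ phi_next - theta @ phi
    return theta + alpha * delta * phi

# Made-up example: two states alternating forever; the transition out of state 1 pays reward 1.
features = np.eye(2)                 # tabular features, so theta holds the state values directly
theta = np.zeros(2)
s = 0
for _ in range(5000):
    s_next = 1 - s
    r = 1.0 if s == 1 else 0.0
    theta = td0_update(theta, features[s], r, features[s_next])
    s = s_next

print(theta)   # approaches the true values, roughly [4.74, 5.26] for gamma = 0.9
```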

18 Value function geometry. [Figure: the subspace of representable value functions, spanned by the feature vectors and weighted by the state-visitation distribution D. V_θ lies in this subspace; the Bellman operator T takes you outside the space (to TV_θ), and the projection Π projects you back into it (to ΠTV_θ). The distance from V_θ to TV_θ is the RMSBE; the distance from V_θ to ΠTV_θ is the RMSPBE.] Previous work on gradient methods for TD minimized the Bellman error objective (Baird 1995, 1999). A better objective function is the Mean Square Projected Bellman Error (MSPBE), the squared distance from V_θ to ΠTV_θ.
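For a small chain whose transition matrix and stationary distribution are known, the MSPBE can be computed both from its geometric definition and from the expectation form E[δφ]⊤ E[φφ⊤]⁻¹ E[δφ] that drives the derivation later in the deck. The three-state chain below is a made-up example.

```python
import numpy as np

gamma = 0.9
P = np.array([[0.0, 1.0, 0.0],     # made-up deterministic 3-cycle
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
R = np.array([0.0, 0.0, 1.0])      # expected reward on leaving each state
d = np.array([1/3, 1/3, 1/3])      # stationary state-visitation distribution of the cycle
Phi = np.array([[1.0, 0.0],        # two features for three states
                [0.0, 1.0],
                [1.0, 1.0]])
theta = np.array([0.5, -0.2])

D = np.diag(d)
V = Phi @ theta
TV = R + gamma * P @ V                                  # Bellman operator applied to V_theta

# Geometric definition: || V_theta - Pi T V_theta ||^2_D with the D-weighted projection Pi.
Pi = Phi @ np.linalg.inv(Phi.T @ D @ Phi) @ Phi.T @ D
diff = V - Pi @ TV
mspbe_geometric = diff @ D @ diff

# Expectation form: E[delta*phi]^T E[phi phi^T]^{-1} E[delta*phi].
delta_bar = TV - V                                      # expected TD error in each state
E_dphi = Phi.T @ D @ delta_bar
E_phiphi = Phi.T @ D @ Phi
mspbe_expectation = E_dphi @ np.linalg.inv(E_phiphi) @ E_dphi

print(mspbe_geometric, mspbe_expectation)               # the two values agree
```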

23 TD objective functions (to be minimized):
Error from the true values: ‖V_θ − V‖²_D (Not TD)
Error in the Bellman equation (Bellman residual): ‖V_θ − TV_θ‖²_D (Not right)
Error in the Bellman equation after projection (MSPBE): ‖V_θ − ΠTV_θ‖²_D (Right!)
Zero expected TD update: V_θ = ΠTV_θ (Not an objective)
Norm of the expected TD update: ‖E[Δθ_TD]‖
Expected squared TD error: E[δ²]

29 backwards-bootstrapping example: A1 goes to B, which terminates with reward 1; A2 goes to C, which terminates with reward 0; all transitions are deterministic, so Bellman error = TD error. The two A states look the same: they share a single feature and must be given the same approximate value, V(A1) = V(A2) = 1/2. Clearly the right solution is V(B) = 1, V(C) = 0. But with function approximation, the solution that minimizes the Bellman error is V(B) = 3/4, V(C) = 1/4.
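A quick numerical check of the example, under the natural reading that A1 leads to B (which terminates with reward 1) and A2 leads to C (which terminates with reward 0), with γ = 1 and the two starts equally likely: minimizing the sum of squared per-transition Bellman/TD residuals subject to V(A1) = V(A2) reproduces the 3/4 and 1/4 values.

```python
import numpy as np

# Unknowns x = [a, b, c] with a = V(A1) = V(A2) (the shared feature), b = V(B), c = V(C).
# Each deterministic transition contributes one residual, written as a row of A @ x - y:
#   A1 -> B (r = 0): b - a        B -> terminal (r = 1): b - 1
#   A2 -> C (r = 0): c - a        C -> terminal (r = 0): c - 0
A = np.array([[-1.0, 1.0, 0.0],
              [-1.0, 0.0, 1.0],
              [ 0.0, 1.0, 0.0],
              [ 0.0, 0.0, 1.0]])
y = np.array([0.0, 0.0, 1.0, 0.0])

x, *_ = np.linalg.lstsq(A, y, rcond=None)
print(x)   # approximately [0.5, 0.75, 0.25]: V(A) = 1/2, V(B) = 3/4, V(C) = 1/4
```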

31 TD objective functions (continued): the zero-expected-TD-update condition, V_θ = ΠTV_θ, is equivalent to E[Δθ_TD] = 0, but it is not an objective function; the norm of the expected TD update, ‖E[Δθ_TD]‖, was the objective used in previous work (Sutton, Szepesvari & Maei, 2008); the expected squared TD error, E[δ²], is not right either; it is the residual-gradient objective.

34 outline: ways in which TD with FA has not been straightforward; the new Bellman error objective function; derivation of new algorithms (the trick); results (theory and experiments)

35 Gradient-descent learning:
1. Pick an objective function J(θ), a parameterized function to be minimized.
2. Use calculus to analytically compute the gradient ∇_θ J(θ).
3. Find a sample gradient that you can compute on every time step and whose expected value equals the gradient.
4. Take small steps in θ proportional to the negative sample gradient: Δθ = −α ∇_θ J_t(θ).
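As an illustration only of the four-step recipe (not one of the new algorithms in this talk), here it is applied to the mean squared value error J(θ) = E[(G_t − θ⊤φ(S_t))²] with Monte Carlo returns, i.e., gradient Monte Carlo; the three-state episodic chain is made up.

```python
import numpy as np

gamma = 1.0
phi = np.eye(3)                    # tabular features for a made-up 3-state episodic chain
alpha = 0.1
theta = np.zeros(3)

def episode():
    """Walk 0 -> 1 -> 2 -> terminal; reward 1 arrives only on the final transition."""
    return [0, 1, 2], [0.0, 0.0, 1.0]

for _ in range(500):
    states, rewards = episode()
    # Compute the Monte Carlo return G_t for every visited state (step 3's unbiased sample).
    G, returns = 0.0, []
    for r in reversed(rewards):
        G = r + gamma * G
        returns.append(G)
    returns.reverse()
    for s, G_t in zip(states, returns):
        # Step 4: the sampled gradient of J is -2 (G_t - theta'phi) phi, so step the other way
        # (the factor of 2 is folded into alpha).
        theta += alpha * (G_t - theta @ phi[s]) * phi[s]

print(theta)   # approaches the true values [1, 1, 1]
```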

36 Derivation of the TDC algorithm, for a transition s --r--> s′ with feature vectors φ, φ′:
Δθ = −½ α ∇_θ J(θ)
= −½ α ∇_θ ‖V_θ − ΠTV_θ‖²_D
= −½ α ∇_θ ( E[δφ]⊤ E[φφ⊤]⁻¹ E[δφ] )
= −α ( ∇_θ E[δφ]⊤ ) E[φφ⊤]⁻¹ E[δφ]
= −α E[ ∇_θ( r + γθ⊤φ′ − θ⊤φ ) φ⊤ ] E[φφ⊤]⁻¹ E[δφ]
= −α E[ (γφ′ − φ) φ⊤ ] E[φφ⊤]⁻¹ E[δφ]
= α ( E[φφ⊤] − γ E[φ′φ⊤] ) E[φφ⊤]⁻¹ E[δφ]
= α E[δφ] − αγ E[φ′φ⊤] E[φφ⊤]⁻¹ E[δφ]
≈ α E[δφ] − αγ E[φ′φ⊤] w (this is the main trick: w ∈ ℝⁿ is a second set of weights that estimates E[φφ⊤]⁻¹ E[δφ])
≈ α δφ − αγ φ′ (φ⊤w) (sampling)
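The derivation can be checked numerically on a small chain (the same kind of made-up three-state example as before): the next-to-last line, E[δφ] − γE[φ′φ⊤]E[φφ⊤]⁻¹E[δφ], should match −½ times a finite-difference gradient of the MSPBE (α omitted on both sides).

```python
import numpy as np

gamma = 0.9
P = np.array([[0.0, 1.0, 0.0],     # made-up deterministic 3-cycle
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
R = np.array([0.0, 0.0, 1.0])
d = np.array([1/3, 1/3, 1/3])
D = np.diag(d)
Phi = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

def mspbe(theta):
    delta_bar = R + gamma * P @ (Phi @ theta) - Phi @ theta   # expected TD error per state
    E_dphi = Phi.T @ D @ delta_bar
    return E_dphi @ np.linalg.inv(Phi.T @ D @ Phi) @ E_dphi

theta = np.array([0.5, -0.2])

# Right-hand side of the derivation (expected form, before sampling).
delta_bar = R + gamma * P @ (Phi @ theta) - Phi @ theta
E_dphi = Phi.T @ D @ delta_bar                   # E[delta * phi]
E_phiphi = Phi.T @ D @ Phi                       # E[phi phi^T]
E_phinext_phi = (P @ Phi).T @ D @ Phi            # E[phi' phi^T]
rhs = E_dphi - gamma * E_phinext_phi @ np.linalg.inv(E_phiphi) @ E_dphi

# Left-hand side: -1/2 of a central finite-difference gradient of the MSPBE.
eps = 1e-6
grad = np.array([(mspbe(theta + eps * e) - mspbe(theta - eps * e)) / (2 * eps) for e in np.eye(2)])

print(-0.5 * grad, rhs)                          # the two vectors agree
```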

47 The complete TD with gradient correction (TDC) algorithm: on each transition s --r--> s′, with feature vectors φ, φ′, update two parameters:
θ ← θ + αδφ − αγφ′(φ⊤w) (the first term is the TD(0) update; the second is the gradient correction)
w ← w + β(δ − φ⊤w)φ
where δ = r + γθ⊤φ′ − θ⊤φ, and φ⊤w is an estimate of the expected TD error (δ) for the current state.
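A direct transcription of the TDC updates above into code; the step sizes and the two-state test chain are made-up choices for illustration.

```python
import numpy as np

def tdc_update(theta, w, phi, r, phi_next, alpha=0.05, beta=0.5, gamma=0.9):
    """One TDC step: the TD(0) term plus the gradient correction, and the second-weight update."""
    delta = r + gamma * theta @ phi_next - theta @ phi
    new_theta = theta + alpha * delta * phi - alpha * gamma * phi_next * (phi @ w)
    new_w = w + beta * (delta - phi @ w) * phi
    return new_theta, new_w

# Made-up two-state alternating chain (reward 1 on the transition out of state 1).
features = np.eye(2)
theta, w = np.zeros(2), np.zeros(2)
s = 0
for _ in range(20000):
    s_next = 1 - s
    r = 1.0 if s == 1 else 0.0
    theta, w = tdc_update(theta, w, features[s], r, features[s_next])
    s = s_next

print(theta)   # close to the chain's true values, roughly [4.74, 5.26] for gamma = 0.9
```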

51 outline: ways in which TD with FA has not been straightforward; the new Bellman error objective function; derivation of new algorithms (the trick); results (theory and experiments)

52 Three new algorithms: GTD, the original gradient-TD algorithm (Sutton, Szepesvari & Maei, 2008); GTD2, a second-generation GTD; and TDC
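For comparison, per-transition updates for the other two algorithms, written from the corresponding papers as best I recall them; treat this as a sketch rather than a verified transcription. GTD maintains u ≈ E[δφ], while GTD2 uses the same second weight vector w as TDC.

```python
import numpy as np

def gtd_update(theta, u, phi, r, phi_next, alpha=0.05, beta=0.05, gamma=0.9):
    """Original GTD (Sutton, Szepesvari & Maei, 2008): u is a running estimate of E[delta * phi]."""
    delta = r + gamma * theta @ phi_next - theta @ phi
    new_u = u + beta * (delta * phi - u)
    new_theta = theta + alpha * (phi - gamma * phi_next) * (phi @ u)
    return new_theta, new_u

def gtd2_update(theta, w, phi, r, phi_next, alpha=0.05, beta=0.05, gamma=0.9):
    """GTD2: same w update as TDC, but theta moves along (phi - gamma*phi') scaled by phi'w."""
    delta = r + gamma * theta @ phi_next - theta @ phi
    new_theta = theta + alpha * (phi - gamma * phi_next) * (phi @ w)
    new_w = w + beta * (delta - phi @ w) * phi
    return new_theta, new_w
```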

53 Convergence theorems: for arbitrary on- or off-policy training, all the algorithms converge w.p.1 to the TD fixed point. GTD and GTD2 converge on a single time scale (α = β); TDC converges in a two-time-scale sense (α, β → 0 with α/β → 0).

54 Summary of empirical results on small problems. [Figure: four panels showing RMSPBE for TD, GTD, GTD2, and TDC, as a function of episodes and of the step size α, on the Random Walk with tabular, inverted, and dependent features, and on the Boyan Chain.] Usually GTD << GTD2 < TD, TDC.
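A self-contained sketch of this kind of comparison on the classic 5-state random walk with tabular features. The reward scheme, γ = 1, and the step sizes are assumptions matching the textbook version of the problem, and plain RMS error against the known true values is reported instead of the RMSPBE, to keep the sketch short.

```python
import numpy as np

rng = np.random.default_rng(0)
n, gamma = 5, 1.0
features = np.eye(n)                              # tabular features
true_v = np.arange(1, n + 1) / (n + 1)            # 1/6 ... 5/6 for this chain

def run(algorithm, episodes=500, alpha=0.05, beta=0.05):
    theta, w = np.zeros(n), np.zeros(n)
    for _ in range(episodes):
        s = n // 2                                # every episode starts in the centre state
        while True:
            s_next = s + (1 if rng.random() < 0.5 else -1)
            done = s_next < 0 or s_next >= n
            r = 1.0 if s_next >= n else 0.0       # +1 only when exiting on the right
            phi = features[s]
            phi_next = np.zeros(n) if done else features[s_next]
            delta = r + gamma * theta @ phi_next - theta @ phi
            if algorithm == "TD":
                theta = theta + alpha * delta * phi
            else:                                 # TDC
                theta = theta + alpha * delta * phi - alpha * gamma * phi_next * (phi @ w)
                w = w + beta * (delta - phi @ w) * phi
            if done:
                break
            s = s_next
    return np.sqrt(np.mean((theta - true_v) ** 2))

for algorithm in ("TD", "TDC"):
    print(algorithm, run(algorithm))              # both end up near the true values
```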

55 Computer Go experiment: learn a linear value function (probability of winning) for 9x9 Go from self-play; one million features, each corresponding to a template on a part of the Go board; an established experimental testbed. [Figure: RNEU, the norm of the expected TD update ‖E[Δθ_TD]‖, as a function of the step size α for TD, GTD, GTD2, and TDC.]

56 conclusions: the new algorithms are roughly as efficient as conventional TD on on-policy problems, but are guaranteed convergent under general off-policy training as well; their key ideas appear to extend quite broadly, to control, general λ, nonlinear settings, DP, intra-option learning, TD nets...; TD with FA is now straightforward, and the curse of dimensionality is removed
