Reinforcement Learning: the basics


1 Reinforcement Learning: the basics Olivier Sigaud Université Pierre et Marie Curie, PARIS 6 August 6, / 46

2 Introduction Action selection/planning Learning by trial-and-error (main model: Reinforcement Learning) 2 / 46

3 Reinforcement Learning: the basics Introduction Introductory books 1. [Sutton & Barto, 1998]: the ultimate introduction to the field, in the discrete case 2. [Buffet & Sigaud, 2008]: in French 3. [Sigaud & Buffet, 2010]: (improved) translation of 2 3 / 46

4 Introduction Different learning mechanisms Supervised learning The supervisor indicates to the agent the expected answer The agent corrects a model based on the answer Typical mechanisms: gradient backpropagation, RLS Applications: classification, regression, function approximation 4 / 46

5 Introduction Different learning mechanisms Self-supervised learning When an agent learns to predict, it proposes its prediction The environment provides the correct answer: the next state Supervised learning without a supervisor Difficult to distinguish from associative learning 5 / 46

6 Introduction Different learning mechanisms Cost-sensitive learning The environment provides the value of an action (reward, penalty) Application: behaviour optimization 6 / 46

7 Introduction Different learning mechanisms Reinforcement learning In RL, the value signal is given as a scalar: how good is it? Necessity of exploration 7 / 46

8 Introduction Different learning mechanisms The exploration/exploitation trade-off Exploring can be (very) harmful Shall I exploit what I know or look for a better policy? Am I optimal? Shall I keep exploring or stop? Decrease the rate of exploration over time ɛ-greedy: take the best action most of the time, and a random action from time to time 8 / 46
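As an illustration, the ɛ-greedy rule above can be sketched in a few lines of Python (a minimal sketch; the function name and the list-based action values are assumptions, not from the slides):

```python
import random

def epsilon_greedy(q_values, epsilon):
    """Pick a random action with probability epsilon, else the greedy one."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    # Greedy choice: index of the largest action value
    return max(range(len(q_values)), key=lambda a: q_values[a])

# With epsilon = 0 the choice is purely greedy (hypothetical values)
greedy_demo = epsilon_greedy([0.1, 0.9, 0.3], epsilon=0.0)
```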

9 Introduction Different learning mechanisms Different mechanisms: reminder Supervised learning: for a given input, the learner gets as feedback the output it should have given Reinforcement learning: for a given input, the learner gets as feedback a scalar representing the immediate value of its output Unsupervised learning: for a given input, the learner gets no feedback: it just extracts correlations Note: the self-supervised learning case is hard to distinguish from the unsupervised learning case 9 / 46

10 Introduction Different learning mechanisms Outline Goals of this class: Present the basics of discrete RL and dynamic programming Content: Dynamic programming Model-free Reinforcement Learning Actor-critic approach Model-based Reinforcement Learning 10 / 46

11 Dynamic programming Markov Decision Processes S: state space A: action space T : S × A → Π(S): transition function r : S × A → ℝ: reward function An MDP defines s_{t+1} and r_{t+1} as f(s_t, a_t) It describes a problem, not a solution Markov property: p(s_{t+1} | s_t, a_t) = p(s_{t+1} | s_t, a_t, s_{t-1}, a_{t-1}, ..., s_0, a_0) Reactive agents: a_{t+1} = f(s_t), without internal states nor memory In an MDP, a memory of the past does not provide any useful advantage 11 / 46
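A discrete MDP of this kind is easy to write down concretely. Below is a hypothetical two-state toy example using plain Python dicts (the states, actions, and values are invented for illustration, not taken from the slides):

```python
# T[s][a] maps each next state s' to p(s' | s, a); r[s][a] is the reward r(s, a)
states = ["s0", "s1"]
actions = ["stay", "go"]
T = {
    "s0": {"stay": {"s0": 1.0}, "go": {"s1": 0.9, "s0": 0.1}},
    "s1": {"stay": {"s1": 1.0}, "go": {"s0": 1.0}},
}
r = {
    "s0": {"stay": 0.0, "go": 0.0},
    "s1": {"stay": 1.0, "go": 0.0},
}
# Each transition distribution must sum to 1, as required by T : S × A → Π(S)
assert all(abs(sum(T[s][a].values()) - 1.0) < 1e-9
           for s in states for a in actions)
```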

12 Dynamic programming Markov property: limitations The Markov property is not verified if: the state does not contain all the information useful to take decisions or if the next state depends on decisions of several agents or if transitions depend on time 12 / 46

13 Dynamic programming Example: tic-tac-toe The state is not always a location The opponent is seen as part of the environment (might be stochastic) 13 / 46

14 Dynamic programming A stochastic problem A deterministic problem is a special case of a stochastic one T(s_t, a_t, s_{t+1}) = p(s_{t+1} | s_t, a_t) 14 / 46

15 Dynamic programming A stochastic policy For any MDP, there exists a deterministic policy that is optimal 15 / 46

16 Dynamic programming Rewards over a Markov chain: on states or actions? Reward over states Reward over actions in states Below, we assume the latter (we note r(s, a)) 16 / 46

17 Dynamic programming Policy and value functions Goal: find a policy π : S → A maximising the aggregation of reward in the long run The value function V^π : S → ℝ records the aggregation of reward in the long run for each state (following policy π). It is a vector with one entry per state The action value function Q^π : S × A → ℝ records the aggregation of reward in the long run for doing each action in each state (and then following policy π). It is a matrix with one entry per state and per action In the remainder, we focus on V; it is trivial to transpose to Q 17 / 46

18 Dynamic programming Aggregation criteria The computation of value functions assumes the choice of an aggregation criterion (discounted, average, etc.) Mere sum (finite horizon): V^π(s_0) = r_0 + r_1 + ... + r_N Equivalent: average over the horizon 18 / 46

19 Dynamic programming Aggregation criteria The computation of value functions assumes the choice of an aggregation criterion (discounted, average, etc.) Average criterion over a window: V^π(s_0) = (r_0 + r_1 + ... + r_N)/(N + 1) 18 / 46

20 Dynamic programming Aggregation criteria The computation of value functions assumes the choice of an aggregation criterion (discounted, average, etc.) Discounted criterion: V^π(s_{t_0}) = Σ_{t=t_0}^∞ γ^{t-t_0} r(s_t, π(s_t)) γ ∈ [0, 1]: discount factor if γ = 0, sensitive only to immediate reward if γ = 1, future rewards are as important as immediate rewards The discounted case is the most used 18 / 46
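The discounted criterion is just a weighted sum and can be computed directly for a finite reward sequence (a minimal sketch; the function name is an assumption):

```python
def discounted_return(rewards, gamma):
    """Sum_{t >= 0} gamma^t * r_t for a finite sequence of rewards."""
    return sum((gamma ** t) * r_t for t, r_t in enumerate(rewards))

# With gamma = 0.5: 1 + 0.5 + 0.25 = 1.75 (hypothetical rewards)
g_demo = discounted_return([1.0, 1.0, 1.0], 0.5)
```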

21 Dynamic programming Bellman equation over a Markov chain: recursion Given the discounted reward aggregation criterion: V(s_0) = r_0 + γ V(s_1) 19 / 46

22 Dynamic programming Bellman equation: general case Generalisation of the recursion V(s_0) = r_0 + γ V(s_1) over all possible trajectories Deterministic π: V^π(s) = r(s, π(s)) + γ Σ_{s'} p(s' | s, π(s)) V^π(s') 20 / 46

23 Dynamic programming Bellman equation: general case Generalisation of the recursion V(s_0) = r_0 + γ V(s_1) over all possible trajectories Stochastic π: V^π(s) = Σ_a π(s, a) [r(s, a) + γ Σ_{s'} p(s' | s, a) V^π(s')] 20 / 46

24 Dynamic programming Bellman operator and dynamic programming We get V^π(s) = r(s, π(s)) + γ Σ_{s'} p(s' | s, π(s)) V^π(s') We call Bellman operator (noted T^π) the application V(s) ← r(s, π(s)) + γ Σ_{s'} p(s' | s, π(s)) V(s') We call Bellman optimality operator (noted T*) the application V(s) ← max_{a∈A} [r(s, a) + γ Σ_{s'} p(s' | s, a) V(s')] The optimal value function is a fixed point of the Bellman optimality operator T*: V* = T* V* Value iteration: V_{i+1} ← T* V_i Policy iteration: policy evaluation (with V^π_{i+1} ← T^π V^π_i) + policy improvement with ∀s ∈ S, π'(s) ← argmax_{a∈A} Σ_{s'} p(s' | s, a) [r(s, a) + γ V^π(s')] 21 / 46
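Value iteration, i.e. repeated application of the Bellman optimality operator T*, can be sketched as follows (a minimal sketch; the dict-based MDP layout and function name are assumptions, not from the slides):

```python
def value_iteration(states, actions, p, r, gamma, n_iters=100):
    """Iterate V_{i+1} <- T* V_i on a dict-based MDP.

    p[s][a] is a dict {s': prob}, r[s][a] a scalar reward.
    """
    V = {s: 0.0 for s in states}
    for _ in range(n_iters):
        V = {s: max(r[s][a] + gamma * sum(q * V[s2] for s2, q in p[s][a].items())
                    for a in actions)
             for s in states}
    return V

# Tiny single-state demo (hypothetical): reward 1 every step, gamma = 0.5 => V = 2
V_demo = value_iteration(["s"], ["a"],
                         {"s": {"a": {"s": 1.0}}},
                         {"s": {"a": 1.0}}, gamma=0.5)
```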

25 Dynamic programming Value Iteration in practice ∀s ∈ S, V_{i+1}(s) ← max_{a∈A} [r(s, a) + γ Σ_{s'} p(s' | s, a) V_i(s')] 22 / 46

29 Dynamic programming Value Iteration in practice π*(s) = argmax_{a∈A} [r(s, a) + γ Σ_{s'} p(s' | s, a) V*(s')] 22 / 46

30 Dynamic programming Policy Iteration in practice ∀s ∈ S, V_i(s) ← evaluate(π_i(s)) 23 / 46

31 Dynamic programming Policy Iteration in practice ∀s ∈ S, π_{i+1}(s) ← improve(π_i(s), V_i(s)) 23 / 46
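The evaluate/improve alternation of policy iteration can be sketched as follows (a minimal sketch under the same dict-based MDP assumption as before; names are hypothetical):

```python
def policy_iteration(states, actions, p, r, gamma, n_sweeps=50):
    """Alternate policy evaluation and greedy policy improvement until stable."""
    pi = {s: actions[0] for s in states}
    while True:
        # Policy evaluation: iterate the Bellman operator T^pi
        V = {s: 0.0 for s in states}
        for _ in range(n_sweeps):
            V = {s: r[s][pi[s]] + gamma * sum(q * V[s2]
                                              for s2, q in p[s][pi[s]].items())
                 for s in states}
        # Policy improvement: greedy with respect to V
        new_pi = {s: max(actions,
                         key=lambda a: r[s][a] + gamma * sum(
                             q * V[s2] for s2, q in p[s][a].items()))
                  for s in states}
        if new_pi == pi:
            return pi, V
        pi = new_pi

# Hypothetical two-state demo: reward 1 for staying in s1, nothing elsewhere
_p = {"s0": {"stay": {"s0": 1.0}, "go": {"s1": 1.0}},
      "s1": {"stay": {"s1": 1.0}, "go": {"s0": 1.0}}}
_r = {"s0": {"stay": 0.0, "go": 0.0}, "s1": {"stay": 1.0, "go": 0.0}}
pi_star, V_star = policy_iteration(["s0", "s1"], ["stay", "go"], _p, _r, 0.9)
```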

38 Dynamic programming Families of methods Critic: the (action) value function, an evaluation of the policy Actor: the policy itself Value iteration is a pure critic method: it iterates on the value function up to convergence without storing a policy, then computes the optimal policy Policy iteration is implemented as an actor-critic method, updating in parallel one structure for the actor and one for the critic In the continuous case, there are pure actor methods 24 / 46

39 Model-free Reinforcement learning Reinforcement learning In DP (planning), T and r are given Reinforcement learning goal: build π without knowing T and r Model-free approach: build π without estimating T nor r Actor-critic approach: a special case of model-free Model-based approach: build a model of T and r and use it to improve the policy 25 / 46

40 Model-free Reinforcement learning Temporal difference methods Incremental estimation Estimating the average immediate (stochastic) reward in a state s: E_k(s) = (r_1 + r_2 + ... + r_k)/k E_{k+1}(s) = (r_1 + r_2 + ... + r_k + r_{k+1})/(k + 1) Thus E_{k+1}(s) = k/(k + 1) E_k(s) + r_{k+1}/(k + 1) Or E_{k+1}(s) = (k + 1)/(k + 1) E_k(s) - E_k(s)/(k + 1) + r_{k+1}/(k + 1) Or E_{k+1}(s) = E_k(s) + 1/(k + 1) [r_{k+1} - E_k(s)] This still needs to store k It can be approximated as E_{k+1}(s) = E_k(s) + α [r_{k+1} - E_k(s)] (1) which converges to the true average (slower or faster depending on α) without storing anything Equation (1) is everywhere in reinforcement learning 26 / 46
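The derivation above boils down to a one-line update. A minimal sketch (function name is an assumption): with step 1/(k+1) it computes the exact mean, with a constant α it is equation (1):

```python
def running_mean(rewards, alpha=None):
    """Incremental estimate E_{k+1} = E_k + step * (r_{k+1} - E_k).

    step = 1/(k+1) gives the exact mean; a constant alpha gives equation (1)
    and needs no counter k.
    """
    e = 0.0
    for k, r_k in enumerate(rewards):
        step = alpha if alpha is not None else 1.0 / (k + 1)
        e += step * (r_k - e)
    return e

mean_demo = running_mean([1.0, 2.0, 3.0])          # exact mean: 2.0
approx_demo = running_mean([1.0] * 100, alpha=0.1)  # converges toward 1.0
```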

41 Model-free Reinforcement learning Temporal difference methods Temporal difference error The goal of TD methods is to estimate the value function V(s) If the estimates V(s_t) and V(s_{t+1}) were exact, we would get: V(s_t) = r_{t+1} + γ r_{t+2} + γ^2 r_{t+3} + γ^3 r_{t+4} + ... V(s_{t+1}) = r_{t+2} + γ r_{t+3} + γ^2 r_{t+4} + ... Thus V(s_t) = r_{t+1} + γ V(s_{t+1}) δ_k = r_{k+1} + γ V(s_{k+1}) - V(s_k) measures the error between the current values of V and the values they should have 27 / 46

42 Model-free Reinforcement learning Temporal difference methods Monte Carlo methods Much used in games (Go...) to evaluate a state Generate a lot of trajectories: s_0, s_1, ..., s_N with observed rewards r_0, r_1, ..., r_N Update state values V(s_k), k = 0, ..., N-1 with: V(s_k) ← V(s_k) + α(s_k) (r_k + r_{k+1} + ... + r_N - V(s_k)) This uses the average estimation method (1) 28 / 46
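The Monte Carlo update can be sketched directly from the formula, moving each visited state's value toward the observed return-to-go (a minimal sketch assuming an undiscounted sum as on the slide; names are hypothetical):

```python
def monte_carlo_update(V, trajectory, rewards, alpha):
    """V(s_k) <- V(s_k) + alpha * (r_k + ... + r_N - V(s_k)) for k = 0..N-1.

    trajectory = [s_0, ..., s_N], rewards = [r_0, ..., r_N].
    """
    for k in range(len(trajectory) - 1):
        target = sum(rewards[k:])  # observed return from step k
        s = trajectory[k]
        v = V.get(s, 0.0)
        V[s] = v + alpha * (target - v)

# Hypothetical 3-state trajectory with a single final reward
V_demo = {}
monte_carlo_update(V_demo, ["a", "b", "c"], [0.0, 0.0, 1.0], alpha=0.5)
```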

43 Model-free Reinforcement learning Temporal difference methods Temporal Difference (TD) methods Temporal Difference (TD) methods combine the properties of DP methods and Monte Carlo methods: in Monte Carlo, T and r are unknown, but the value update is global: trajectories are needed in DP, T and r are known, but the value update is local TD: as in DP, V(s_t) is updated locally given an estimate of V(s_{t+1}), and T and r are unknown Note: Monte Carlo can be reformulated incrementally using the temporal difference δ_k update 29 / 46

44 Model-free Reinforcement learning Temporal difference methods Policy evaluation: TD(0) Given a policy π, the agent performs a sequence s_0, a_0, r_1, ..., s_t, a_t, r_{t+1}, s_{t+1}, a_{t+1}, ... V(s_t) ← V(s_t) + α [r_{t+1} + γ V(s_{t+1}) - V(s_t)] Combines the TD update (propagation from V(s_{t+1}) to V(s_t)) from DP and the incremental estimation method from Monte Carlo Updates are local, from s_t, s_{t+1} and r_{t+1} Proof of convergence: [Dayan & Sejnowski, 1994] 30 / 46
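One TD(0) step can be sketched as follows (a minimal sketch with a dict-based V table; names are assumptions):

```python
def td0_update(V, s, r_next, s_next, alpha, gamma):
    """V(s) <- V(s) + alpha * [r_{t+1} + gamma * V(s_{t+1}) - V(s)].

    Returns the TD error delta; unseen states default to value 0.
    """
    delta = r_next + gamma * V.get(s_next, 0.0) - V.get(s, 0.0)
    V[s] = V.get(s, 0.0) + alpha * delta
    return delta

# One hand-checkable step (hypothetical values): delta = 1 + 0.9 * 1 - 0 = 1.9
V_demo = {"s": 0.0, "s2": 1.0}
delta_demo = td0_update(V_demo, "s", 1.0, "s2", alpha=0.5, gamma=0.9)
```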

45 Model-free Reinforcement learning Temporal difference methods TD(0): limitation TD(0) evaluates V(s) One cannot infer π(s) from V(s) without knowing T: one must know which a leads to the best V(s') Three solutions: Work with Q(s, a) rather than V(s) Learn a model of T: model-based (or indirect) reinforcement learning Actor-critic methods (simultaneously learn V and update π) 31 / 46

46 Model-free Reinforcement learning Action Value Function Approaches Value function and action value function The value function V^π : S → ℝ records the aggregation of reward in the long run for each state (following policy π). It is a vector with one entry per state The action value function Q^π : S × A → ℝ records the aggregation of reward in the long run for doing each action in each state (and then following policy π). It is a matrix with one entry per state and per action 32 / 46

47 Model-free Reinforcement learning Action Value Function Approaches Sarsa Reminder (TD): V(s_t) ← V(s_t) + α [r_{t+1} + γ V(s_{t+1}) - V(s_t)] Sarsa: for each observed (s_t, a_t, r_{t+1}, s_{t+1}, a_{t+1}): Q(s_t, a_t) ← Q(s_t, a_t) + α [r_{t+1} + γ Q(s_{t+1}, a_{t+1}) - Q(s_t, a_t)] Policy: perform exploration (e.g. ɛ-greedy) One must know the action a_{t+1}, which constrains exploration On-policy method: more complex convergence proof [Singh et al., 2000] 33 / 46
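The Sarsa back-up is a direct transcription of the formula: the target uses the action a_{t+1} that was actually taken (a minimal sketch with a dict keyed by (s, a) pairs; names are assumptions):

```python
def sarsa_update(Q, s, a, r_next, s_next, a_next, alpha, gamma):
    """On-policy update: Q(s,a) <- Q(s,a) + alpha * [r + gamma * Q(s',a') - Q(s,a)]."""
    q = Q.get((s, a), 0.0)
    target = r_next + gamma * Q.get((s_next, a_next), 0.0)
    Q[(s, a)] = q + alpha * (target - q)

# Hand-checkable step (hypothetical values): target = 1 + 0.5 * 2 = 2
Q_demo = {("s2", "a"): 2.0}
sarsa_update(Q_demo, "s1", "a", 1.0, "s2", "a", alpha=0.5, gamma=0.5)
```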

48 Model-free Reinforcement learning Action Value Function Approaches Q-Learning For each observed (s_t, a_t, r_{t+1}, s_{t+1}): Q(s_t, a_t) ← Q(s_t, a_t) + α [r_{t+1} + γ max_{a∈A} Q(s_{t+1}, a) - Q(s_t, a_t)] max_{a∈A} Q(s_{t+1}, a) instead of Q(s_{t+1}, a_{t+1}) Off-policy method: no more need to know a_{t+1} [Watkins, 1989] Policy: perform exploration (e.g. ɛ-greedy) Convergence proved provided infinite exploration [Dayan & Sejnowski, 1994] 34 / 46
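The only change from Sarsa is the max over actions in the target, which makes the update off-policy (same dict-based sketch as above; names are assumptions):

```python
def q_learning_update(Q, actions, s, a, r_next, s_next, alpha, gamma):
    """Off-policy update: the target uses max_{a'} Q(s_{t+1}, a')."""
    q = Q.get((s, a), 0.0)
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    Q[(s, a)] = q + alpha * (r_next + gamma * best_next - q)

# Hand-checkable step (hypothetical values): max_a Q(s2, a) = 3, target = 0 + 0.5 * 3
Q_demo = {("s2", "x"): 1.0, ("s2", "y"): 3.0}
q_learning_update(Q_demo, ["x", "y"], "s1", "x", 0.0, "s2", alpha=1.0, gamma=0.5)
```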

49 Model-free Reinforcement learning Action Value Function Approaches Q-Learning in practice (Q-learning: the movie) Build a states × actions table (Q-Table, possibly built incrementally) Initialise it (randomly or with 0) Apply the update equation after each action Problem: it is (very) slow 35 / 46

50 Model-free Reinforcement learning Actor-Critic approaches From Q(s, a) to Actor-Critic (1) [Q-Table figure: states e_0...e_5 × actions a_0...a_3; cell values not recovered] In Q-learning, given a Q-Table, one must determine the max at each step This becomes expensive if there are numerous actions 36 / 46

51 Model-free Reinforcement learning Actor-Critic approaches From Q(s, a) to Actor-Critic (2) [Q-Table figure: the best value in each row is starred, e.g. 0.43, 0.73, 0.81, 0.9] One can store the best value for each state Then one can update the max by just comparing the changed value and the max No more maximum over actions (only in one case) 37 / 46

52 Model-free Reinforcement learning Actor-Critic approaches From Q(s, a) to Actor-Critic (3) [Q-Table figure with starred best values, plus the induced policy: e_0 → a_1, e_1 → a_2, e_2 → a_2, e_3 → a_2, e_4 → a_1, e_5 → a_1] Storing the max is equivalent to storing the policy Update the policy as a function of value updates This is the basic actor-critic scheme 38 / 46

53 Model-free Reinforcement learning Actor-Critic approaches Dynamic Programming and Actor-Critic (1) In both PI and AC, the architecture contains a representation of the value function (the critic) and of the policy (the actor) In PI, the MDP (T and r) is known PI alternates two stages: 1. Policy evaluation: update V(s) (or Q(s, a)) given the current policy 2. Policy improvement: follow the value gradient 39 / 46

54 Model-free Reinforcement learning Actor-Critic approaches Dynamic Programming and Actor-Critic (2) In AC, T and r are unknown and not represented (model-free) Information from the environment generates updates in the critic, then in the actor 40 / 46

55 Model-free Reinforcement learning Actor-Critic approaches Naive design Discrete states and actions, stochastic policy An update in the critic generates a local update in the actor Critic: compute δ and update V(s) with V_k(s) ← V_k(s) + α_k δ_k Actor: P^π(a | s) ← P^π(a | s) + α_k δ_k NB: no need for a max over actions NB2: one must then know how to draw an action from a probabilistic policy (not obvious for continuous actions) 41 / 46
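This naive actor-critic scheme can be sketched as a single shared TD step (a minimal sketch; here P holds unnormalised action preferences rather than a proper probability table, a simplifying assumption, and all names are hypothetical):

```python
def actor_critic_update(V, P, s, a, r_next, s_next, alpha, gamma):
    """One naive actor-critic step: the same TD error delta updates both structures."""
    delta = r_next + gamma * V.get(s_next, 0.0) - V.get(s, 0.0)
    V[s] = V.get(s, 0.0) + alpha * delta              # critic update
    P[(s, a)] = P.get((s, a), 0.0) + alpha * delta    # actor update (no max needed)
    return delta

# Hand-checkable step (hypothetical values): delta = 1 + 0.9 * 0 - 0 = 1
V_demo, P_demo = {}, {}
delta_demo = actor_critic_update(V_demo, P_demo, "s", "a", 1.0, "s2",
                                 alpha=0.5, gamma=0.9)
```

A real implementation would renormalise P(· | s) into a distribution before sampling the next action.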

56 Model-based reinforcement learning Eligibility traces To improve over Q-learning Naive approach: store all (s, a) pairs and back-propagate values Limited to finite-horizon trajectories Speed/memory trade-off TD(λ), Sarsa(λ) and Q(λ): a more sophisticated approach to deal with infinite-horizon trajectories A variable e(s) is decayed with a factor λ after s was visited and reinitialised each time s is visited again TD(λ): V(s) ← V(s) + α δ e(s) (similar for Sarsa(λ) and Q(λ)) If λ = 0, e(s) goes to 0 immediately, thus we get TD(0), Sarsa or Q-learning TD(1) = Monte Carlo 42 / 46
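One TD(λ) step can be sketched as follows (a minimal sketch using the common γλ trace decay and replacing traces, which is slightly more specific than the slide's "decayed with a factor λ"; names are assumptions):

```python
def td_lambda_step(V, e, s, r_next, s_next, alpha, gamma, lam):
    """Decay all traces, reset the trace of the visited state, update every state.

    With lam = 0 only e(s) = 1 survives, so this reduces to TD(0).
    """
    delta = r_next + gamma * V.get(s_next, 0.0) - V.get(s, 0.0)
    for x in e:
        e[x] *= gamma * lam
    e[s] = 1.0  # replacing-traces variant: reinitialise on each visit
    for x, e_x in e.items():
        V[x] = V.get(x, 0.0) + alpha * delta * e_x
    return delta

# Hypothetical two-step demo: the reward seen at s2 also flows back to s1
V_demo, e_demo = {}, {}
td_lambda_step(V_demo, e_demo, "s1", 0.0, "s2", alpha=0.5, gamma=1.0, lam=1.0)
td_lambda_step(V_demo, e_demo, "s2", 1.0, "s3", alpha=0.5, gamma=1.0, lam=1.0)
```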

57 Model-based reinforcement learning Model-based Reinforcement Learning General idea: planning with a learnt model of T and r amounts to performing back-ups in the agent's head ([Sutton, 1990a, Sutton, 1990b]) Learning T and r is an incremental self-supervised learning problem Several approaches: Draw a random transition in the model and apply a TD back-up Use Policy Iteration (Dyna-PI) or Q-learning (Dyna-Q) to get V or Q Dyna-AC also exists Better propagation: Prioritized Sweeping [Moore & Atkeson, 1993, Peng & Williams, 1992] 43 / 46
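The "draw a random transition and apply a TD back-up" idea behind Dyna-Q can be sketched as follows (a minimal sketch assuming a learnt deterministic model stored as a dict, a simplification of the general stochastic case; names are hypothetical):

```python
import random

def dyna_q_planning(Q, model, actions, alpha, gamma, n_updates):
    """Replay random remembered transitions through the Q-learning back-up.

    model maps (s, a) -> (r, s'), i.e. a learnt deterministic model.
    """
    for _ in range(n_updates):
        (s, a), (r_sa, s_next) = random.choice(list(model.items()))
        best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
        q = Q.get((s, a), 0.0)
        Q[(s, a)] = q + alpha * (r_sa + gamma * best_next - q)

# Hypothetical self-loop with reward 1 and gamma = 0.5: Q converges toward 2
model_demo = {("s1", "a"): (1.0, "s1")}
Q_demo = {}
dyna_q_planning(Q_demo, model_demo, ["a"], alpha=1.0, gamma=0.5, n_updates=100)
```

In a full Dyna agent these planning back-ups are interleaved with real-environment steps that also update Q and refine the model.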

58 Model-based reinforcement learning Dyna architecture and generalization (Dyna-like video (good model)) (Dyna-like video (bad model)) Thanks to the model of transitions, Dyna can propagate values more often Problem: in the stochastic case, the model of transitions is in card(S) × card(S) × card(A) Hence the usefulness of compact models MACS [Gérard et al., 2005]: Dyna with generalisation (Learning Classifier Systems) SPITI [Degris et al., 2006]: Dyna with generalisation (Factored MDPs) 44 / 46

59 Model-based reinforcement learning Messages Dynamic programming and reinforcement learning methods can be split into pure actor, pure critic and actor-critic methods Dynamic programming, value iteration and policy iteration apply when you know the transition and reward functions Model-free RL is based on the TD error Actor-critic RL is a model-free, PI-like algorithm Model-based RL combines dynamic programming and model learning The continuous case is more complicated 45 / 46

60 Model-based reinforcement learning Any question? 46 / 46

61 Model-based reinforcement learning Buffet, O. & Sigaud, O. (2008). Processus décisionnels de Markov en intelligence artificielle. Lavoisier. Dayan, P. & Sejnowski, T. (1994). TD(λ) converges with probability 1. Machine Learning, 14(3). Degris, T., Sigaud, O., & Wuillemin, P.-H. (2006). Learning the structure of Factored Markov Decision Processes in reinforcement learning problems. In Proceedings of the 23rd International Conference on Machine Learning (ICML'2006), CMU, Pennsylvania. Gérard, P., Meyer, J.-A., & Sigaud, O. (2005). Combining latent learning with dynamic programming in MACS. European Journal of Operational Research, 160. Moore, A. W. & Atkeson, C. (1993). Prioritized sweeping: Reinforcement learning with less data and less real time. Machine Learning, 13. Peng, J. & Williams, R. (1992). Efficient learning and planning within the DYNA framework. In Meyer, J.-A., Roitblat, H. L., & Wilson, S. W., editors, Proceedings of the Second International Conference on Simulation of Adaptive Behavior, Cambridge, MA. MIT Press. Sigaud, O. & Buffet, O. (2010). Markov Decision Processes in Artificial Intelligence. ISTE - Wiley. Singh, S. P., Jaakkola, T., Littman, M. L., & Szepesvari, C. (2000). Convergence results for single-step on-policy reinforcement learning algorithms. Machine Learning, 38(3). 46 / 46

62 Model-based reinforcement learning Sutton, R. S. (1990a). Integrating architectures for learning, planning, and reacting based on approximating dynamic programming. In Proceedings of the Seventh International Conference on Machine Learning (ICML'90), San Mateo, CA. Morgan Kaufmann. Sutton, R. S. (1990b). Planning by incremental dynamic programming. In Proceedings of the Eighth International Conference on Machine Learning, San Mateo, CA. Morgan Kaufmann. Sutton, R. S. & Barto, A. G. (1998). Reinforcement Learning: An Introduction. MIT Press. Watkins, C. J. C. H. (1989). Learning from Delayed Rewards. PhD thesis, Psychology Department, University of Cambridge, England. 46 / 46


More information

An Introduction to Reinforcement Learning

An Introduction to Reinforcement Learning An Introduction to Reinforcement Learning Shivaram Kalyanakrishnan shivaram@cse.iitb.ac.in Department of Computer Science and Engineering Indian Institute of Technology Bombay April 2018 What is Reinforcement

More information

Notes on Reinforcement Learning

Notes on Reinforcement Learning 1 Introduction Notes on Reinforcement Learning Paulo Eduardo Rauber 2014 Reinforcement learning is the study of agents that act in an environment with the goal of maximizing cumulative reward signals.

More information

Reinforcement Learning Part 2

Reinforcement Learning Part 2 Reinforcement Learning Part 2 Dipendra Misra Cornell University dkm@cs.cornell.edu https://dipendramisra.wordpress.com/ From previous tutorial Reinforcement Learning Exploration No supervision Agent-Reward-Environment

More information

Lecture 1: March 7, 2018

Lecture 1: March 7, 2018 Reinforcement Learning Spring Semester, 2017/8 Lecture 1: March 7, 2018 Lecturer: Yishay Mansour Scribe: ym DISCLAIMER: Based on Learning and Planning in Dynamical Systems by Shie Mannor c, all rights

More information

Internet Monetization

Internet Monetization Internet Monetization March May, 2013 Discrete time Finite A decision process (MDP) is reward process with decisions. It models an environment in which all states are and time is divided into stages. Definition

More information

Reinforcement Learning and NLP

Reinforcement Learning and NLP 1 Reinforcement Learning and NLP Kapil Thadani kapil@cs.columbia.edu RESEARCH Outline 2 Model-free RL Markov decision processes (MDPs) Derivative-free optimization Policy gradients Variance reduction Value

More information

Decision Theory: Markov Decision Processes

Decision Theory: Markov Decision Processes Decision Theory: Markov Decision Processes CPSC 322 Lecture 33 March 31, 2006 Textbook 12.5 Decision Theory: Markov Decision Processes CPSC 322 Lecture 33, Slide 1 Lecture Overview Recap Rewards and Policies

More information

Reinforcement Learning

Reinforcement Learning 1 Reinforcement Learning Chris Watkins Department of Computer Science Royal Holloway, University of London July 27, 2015 2 Plan 1 Why reinforcement learning? Where does this theory come from? Markov decision

More information

Prioritized Sweeping Converges to the Optimal Value Function

Prioritized Sweeping Converges to the Optimal Value Function Technical Report DCS-TR-631 Prioritized Sweeping Converges to the Optimal Value Function Lihong Li and Michael L. Littman {lihong,mlittman}@cs.rutgers.edu RL 3 Laboratory Department of Computer Science

More information

arxiv: v1 [cs.ai] 5 Nov 2017

arxiv: v1 [cs.ai] 5 Nov 2017 arxiv:1711.01569v1 [cs.ai] 5 Nov 2017 Markus Dumke Department of Statistics Ludwig-Maximilians-Universität München markus.dumke@campus.lmu.de Abstract Temporal-difference (TD) learning is an important

More information

(Deep) Reinforcement Learning

(Deep) Reinforcement Learning Martin Matyášek Artificial Intelligence Center Czech Technical University in Prague October 27, 2016 Martin Matyášek VPD, 2016 1 / 17 Reinforcement Learning in a picture R. S. Sutton and A. G. Barto 2015

More information

Machine Learning. Machine Learning: Jordan Boyd-Graber University of Maryland REINFORCEMENT LEARNING. Slides adapted from Tom Mitchell and Peter Abeel

Machine Learning. Machine Learning: Jordan Boyd-Graber University of Maryland REINFORCEMENT LEARNING. Slides adapted from Tom Mitchell and Peter Abeel Machine Learning Machine Learning: Jordan Boyd-Graber University of Maryland REINFORCEMENT LEARNING Slides adapted from Tom Mitchell and Peter Abeel Machine Learning: Jordan Boyd-Graber UMD Machine Learning

More information

Temporal Difference. Learning KENNETH TRAN. Principal Research Engineer, MSR AI

Temporal Difference. Learning KENNETH TRAN. Principal Research Engineer, MSR AI Temporal Difference Learning KENNETH TRAN Principal Research Engineer, MSR AI Temporal Difference Learning Policy Evaluation Intro to model-free learning Monte Carlo Learning Temporal Difference Learning

More information

Machine Learning and Bayesian Inference. Unsupervised learning. Can we find regularity in data without the aid of labels?

Machine Learning and Bayesian Inference. Unsupervised learning. Can we find regularity in data without the aid of labels? Machine Learning and Bayesian Inference Dr Sean Holden Computer Laboratory, Room FC6 Telephone extension 6372 Email: sbh11@cl.cam.ac.uk www.cl.cam.ac.uk/ sbh11/ Unsupervised learning Can we find regularity

More information

An Introduction to Reinforcement Learning

An Introduction to Reinforcement Learning An Introduction to Reinforcement Learning Shivaram Kalyanakrishnan shivaram@csa.iisc.ernet.in Department of Computer Science and Automation Indian Institute of Science August 2014 What is Reinforcement

More information

CMU Lecture 12: Reinforcement Learning. Teacher: Gianni A. Di Caro

CMU Lecture 12: Reinforcement Learning. Teacher: Gianni A. Di Caro CMU 15-781 Lecture 12: Reinforcement Learning Teacher: Gianni A. Di Caro REINFORCEMENT LEARNING Transition Model? State Action Reward model? Agent Goal: Maximize expected sum of future rewards 2 MDP PLANNING

More information

Reinforcement Learning for Continuous. Action using Stochastic Gradient Ascent. Hajime KIMURA, Shigenobu KOBAYASHI JAPAN

Reinforcement Learning for Continuous. Action using Stochastic Gradient Ascent. Hajime KIMURA, Shigenobu KOBAYASHI JAPAN Reinforcement Learning for Continuous Action using Stochastic Gradient Ascent Hajime KIMURA, Shigenobu KOBAYASHI Tokyo Institute of Technology, 4259 Nagatsuda, Midori-ku Yokohama 226-852 JAPAN Abstract:

More information

Lecture 3: Markov Decision Processes

Lecture 3: Markov Decision Processes Lecture 3: Markov Decision Processes Joseph Modayil 1 Markov Processes 2 Markov Reward Processes 3 Markov Decision Processes 4 Extensions to MDPs Markov Processes Introduction Introduction to MDPs Markov

More information

Artificial Intelligence

Artificial Intelligence Artificial Intelligence Dynamic Programming Marc Toussaint University of Stuttgart Winter 2018/19 Motivation: So far we focussed on tree search-like solvers for decision problems. There is a second important

More information

Reinforcement Learning

Reinforcement Learning Reinforcement Learning Function approximation Mario Martin CS-UPC May 18, 2018 Mario Martin (CS-UPC) Reinforcement Learning May 18, 2018 / 65 Recap Algorithms: MonteCarlo methods for Policy Evaluation

More information

Reinforcement Learning

Reinforcement Learning Reinforcement Learning Ron Parr CompSci 7 Department of Computer Science Duke University With thanks to Kris Hauser for some content RL Highlights Everybody likes to learn from experience Use ML techniques

More information

Lecture 7: Value Function Approximation

Lecture 7: Value Function Approximation Lecture 7: Value Function Approximation Joseph Modayil Outline 1 Introduction 2 3 Batch Methods Introduction Large-Scale Reinforcement Learning Reinforcement learning can be used to solve large problems,

More information

Sequential Decision Problems

Sequential Decision Problems Sequential Decision Problems Michael A. Goodrich November 10, 2006 If I make changes to these notes after they are posted and if these changes are important (beyond cosmetic), the changes will highlighted

More information

Dueling Network Architectures for Deep Reinforcement Learning (ICML 2016)

Dueling Network Architectures for Deep Reinforcement Learning (ICML 2016) Dueling Network Architectures for Deep Reinforcement Learning (ICML 2016) Yoonho Lee Department of Computer Science and Engineering Pohang University of Science and Technology October 11, 2016 Outline

More information

INF 5860 Machine learning for image classification. Lecture 14: Reinforcement learning May 9, 2018

INF 5860 Machine learning for image classification. Lecture 14: Reinforcement learning May 9, 2018 Machine learning for image classification Lecture 14: Reinforcement learning May 9, 2018 Page 3 Outline Motivation Introduction to reinforcement learning (RL) Value function based methods (Q-learning)

More information

MARKOV DECISION PROCESSES (MDP) AND REINFORCEMENT LEARNING (RL) Versione originale delle slide fornita dal Prof. Francesco Lo Presti

MARKOV DECISION PROCESSES (MDP) AND REINFORCEMENT LEARNING (RL) Versione originale delle slide fornita dal Prof. Francesco Lo Presti 1 MARKOV DECISION PROCESSES (MDP) AND REINFORCEMENT LEARNING (RL) Versione originale delle slide fornita dal Prof. Francesco Lo Presti Historical background 2 Original motivation: animal learning Early

More information

Linear Least-squares Dyna-style Planning

Linear Least-squares Dyna-style Planning Linear Least-squares Dyna-style Planning Hengshuai Yao Department of Computing Science University of Alberta Edmonton, AB, Canada T6G2E8 hengshua@cs.ualberta.ca Abstract World model is very important for

More information

Reinforcement Learning. Machine Learning, Fall 2010

Reinforcement Learning. Machine Learning, Fall 2010 Reinforcement Learning Machine Learning, Fall 2010 1 Administrativia This week: finish RL, most likely start graphical models LA2: due on Thursday LA3: comes out on Thursday TA Office hours: Today 1:30-2:30

More information

Reinforcement Learning and Deep Reinforcement Learning

Reinforcement Learning and Deep Reinforcement Learning Reinforcement Learning and Deep Reinforcement Learning Ashis Kumer Biswas, Ph.D. ashis.biswas@ucdenver.edu Deep Learning November 5, 2018 1 / 64 Outlines 1 Principles of Reinforcement Learning 2 The Q

More information

PART A and ONE question from PART B; or ONE question from PART A and TWO questions from PART B.

PART A and ONE question from PART B; or ONE question from PART A and TWO questions from PART B. Advanced Topics in Machine Learning, GI13, 2010/11 Advanced Topics in Machine Learning, GI13, 2010/11 Answer any THREE questions. Each question is worth 20 marks. Use separate answer books Answer any THREE

More information

An Introduction to Reinforcement Learning

An Introduction to Reinforcement Learning 1 / 58 An Introduction to Reinforcement Learning Lecture 01: Introduction Dr. Johannes A. Stork School of Computer Science and Communication KTH Royal Institute of Technology January 19, 2017 2 / 58 ../fig/reward-00.jpg

More information

arxiv: v1 [cs.ai] 1 Jul 2015

arxiv: v1 [cs.ai] 1 Jul 2015 arxiv:507.00353v [cs.ai] Jul 205 Harm van Seijen harm.vanseijen@ualberta.ca A. Rupam Mahmood ashique@ualberta.ca Patrick M. Pilarski patrick.pilarski@ualberta.ca Richard S. Sutton sutton@cs.ualberta.ca

More information

REINFORCE Framework for Stochastic Policy Optimization and its use in Deep Learning

REINFORCE Framework for Stochastic Policy Optimization and its use in Deep Learning REINFORCE Framework for Stochastic Policy Optimization and its use in Deep Learning Ronen Tamari The Hebrew University of Jerusalem Advanced Seminar in Deep Learning (#67679) February 28, 2016 Ronen Tamari

More information

Reinforcement Learning. Donglin Zeng, Department of Biostatistics, University of North Carolina

Reinforcement Learning. Donglin Zeng, Department of Biostatistics, University of North Carolina Reinforcement Learning Introduction Introduction Unsupervised learning has no outcome (no feedback). Supervised learning has outcome so we know what to predict. Reinforcement learning is in between it

More information

Course 16:198:520: Introduction To Artificial Intelligence Lecture 13. Decision Making. Abdeslam Boularias. Wednesday, December 7, 2016

Course 16:198:520: Introduction To Artificial Intelligence Lecture 13. Decision Making. Abdeslam Boularias. Wednesday, December 7, 2016 Course 16:198:520: Introduction To Artificial Intelligence Lecture 13 Decision Making Abdeslam Boularias Wednesday, December 7, 2016 1 / 45 Overview We consider probabilistic temporal models where the

More information

Reinforcement Learning

Reinforcement Learning Reinforcement Learning RL in continuous MDPs March April, 2015 Large/Continuous MDPs Large/Continuous state space Tabular representation cannot be used Large/Continuous action space Maximization over action

More information

An Adaptive Clustering Method for Model-free Reinforcement Learning

An Adaptive Clustering Method for Model-free Reinforcement Learning An Adaptive Clustering Method for Model-free Reinforcement Learning Andreas Matt and Georg Regensburger Institute of Mathematics University of Innsbruck, Austria {andreas.matt, georg.regensburger}@uibk.ac.at

More information

CS 287: Advanced Robotics Fall Lecture 14: Reinforcement Learning with Function Approximation and TD Gammon case study

CS 287: Advanced Robotics Fall Lecture 14: Reinforcement Learning with Function Approximation and TD Gammon case study CS 287: Advanced Robotics Fall 2009 Lecture 14: Reinforcement Learning with Function Approximation and TD Gammon case study Pieter Abbeel UC Berkeley EECS Assignment #1 Roll-out: nice example paper: X.

More information

15-889e Policy Search: Gradient Methods Emma Brunskill. All slides from David Silver (with EB adding minor modificafons), unless otherwise noted

15-889e Policy Search: Gradient Methods Emma Brunskill. All slides from David Silver (with EB adding minor modificafons), unless otherwise noted 15-889e Policy Search: Gradient Methods Emma Brunskill All slides from David Silver (with EB adding minor modificafons), unless otherwise noted Outline 1 Introduction 2 Finite Difference Policy Gradient

More information

Elements of Reinforcement Learning

Elements of Reinforcement Learning Elements of Reinforcement Learning Policy: way learning algorithm behaves (mapping from state to action) Reward function: Mapping of state action pair to reward or cost Value function: long term reward,

More information

This question has three parts, each of which can be answered concisely, but be prepared to explain and justify your concise answer.

This question has three parts, each of which can be answered concisely, but be prepared to explain and justify your concise answer. This question has three parts, each of which can be answered concisely, but be prepared to explain and justify your concise answer. 1. Suppose you have a policy and its action-value function, q, then you

More information

Q-Learning in Continuous State Action Spaces

Q-Learning in Continuous State Action Spaces Q-Learning in Continuous State Action Spaces Alex Irpan alexirpan@berkeley.edu December 5, 2015 Contents 1 Introduction 1 2 Background 1 3 Q-Learning 2 4 Q-Learning In Continuous Spaces 4 5 Experimental

More information

Reinforcement Learning

Reinforcement Learning CS7/CS7 Fall 005 Supervised Learning: Training examples: (x,y) Direct feedback y for each input x Sequence of decisions with eventual feedback No teacher that critiques individual actions Learn to act

More information

Reinforcement Learning using Continuous Actions. Hado van Hasselt

Reinforcement Learning using Continuous Actions. Hado van Hasselt Reinforcement Learning using Continuous Actions Hado van Hasselt 2005 Concluding thesis for Cognitive Artificial Intelligence University of Utrecht First supervisor: Dr. Marco A. Wiering, University of

More information

MDP Preliminaries. Nan Jiang. February 10, 2019

MDP Preliminaries. Nan Jiang. February 10, 2019 MDP Preliminaries Nan Jiang February 10, 2019 1 Markov Decision Processes In reinforcement learning, the interactions between the agent and the environment are often described by a Markov Decision Process

More information

Reinforcement Learning. Yishay Mansour Tel-Aviv University

Reinforcement Learning. Yishay Mansour Tel-Aviv University Reinforcement Learning Yishay Mansour Tel-Aviv University 1 Reinforcement Learning: Course Information Classes: Wednesday Lecture 10-13 Yishay Mansour Recitations:14-15/15-16 Eliya Nachmani Adam Polyak

More information

15-780: Graduate Artificial Intelligence. Reinforcement learning (RL)

15-780: Graduate Artificial Intelligence. Reinforcement learning (RL) 15-780: Graduate Artificial Intelligence Reinforcement learning (RL) From MDPs to RL We still use the same Markov model with rewards and actions But there are a few differences: 1. We do not assume we

More information

Markov Decision Processes and Solving Finite Problems. February 8, 2017

Markov Decision Processes and Solving Finite Problems. February 8, 2017 Markov Decision Processes and Solving Finite Problems February 8, 2017 Overview of Upcoming Lectures Feb 8: Markov decision processes, value iteration, policy iteration Feb 13: Policy gradients Feb 15:

More information

Reinforcement Learning

Reinforcement Learning Reinforcement Learning Dipendra Misra Cornell University dkm@cs.cornell.edu https://dipendramisra.wordpress.com/ Task Grasp the green cup. Output: Sequence of controller actions Setup from Lenz et. al.

More information

Proceedings of the International Conference on Neural Networks, Orlando Florida, June Leemon C. Baird III

Proceedings of the International Conference on Neural Networks, Orlando Florida, June Leemon C. Baird III Proceedings of the International Conference on Neural Networks, Orlando Florida, June 1994. REINFORCEMENT LEARNING IN CONTINUOUS TIME: ADVANTAGE UPDATING Leemon C. Baird III bairdlc@wl.wpafb.af.mil Wright

More information

Reinforcement learning

Reinforcement learning Reinforcement learning Stuart Russell, UC Berkeley Stuart Russell, UC Berkeley 1 Outline Sequential decision making Dynamic programming algorithms Reinforcement learning algorithms temporal difference

More information

Reinforcement Learning

Reinforcement Learning Reinforcement Learning March May, 2013 Schedule Update Introduction 03/13/2015 (10:15-12:15) Sala conferenze MDPs 03/18/2015 (10:15-12:15) Sala conferenze Solving MDPs 03/20/2015 (10:15-12:15) Aula Alpha

More information

1 Problem Formulation

1 Problem Formulation Book Review Self-Learning Control of Finite Markov Chains by A. S. Poznyak, K. Najim, and E. Gómez-Ramírez Review by Benjamin Van Roy This book presents a collection of work on algorithms for learning

More information

Reinforcement Learning

Reinforcement Learning Reinforcement Learning Model-Based Reinforcement Learning Model-based, PAC-MDP, sample complexity, exploration/exploitation, RMAX, E3, Bayes-optimal, Bayesian RL, model learning Vien Ngo MLR, University

More information

Reinforcement Learning

Reinforcement Learning Reinforcement Learning Function approximation Daniel Hennes 19.06.2017 University Stuttgart - IPVS - Machine Learning & Robotics 1 Today Eligibility traces n-step TD returns Forward and backward view Function

More information

RL 3: Reinforcement Learning

RL 3: Reinforcement Learning RL 3: Reinforcement Learning Q-Learning Michael Herrmann University of Edinburgh, School of Informatics 20/01/2015 Last time: Multi-Armed Bandits (10 Points to remember) MAB applications do exist (e.g.

More information

On the Convergence of Optimistic Policy Iteration

On the Convergence of Optimistic Policy Iteration Journal of Machine Learning Research 3 (2002) 59 72 Submitted 10/01; Published 7/02 On the Convergence of Optimistic Policy Iteration John N. Tsitsiklis LIDS, Room 35-209 Massachusetts Institute of Technology

More information