Reinforcement Learning. Spring 2018 Defining MDPs, Planning


1 Reinforcement Learning Spring 2018 Defining MDPs, Planning

2 [Slide figure: a sketch of understandability versus time, with a marker labeled "You are here".]

3 Markov Process Where you will go depends only on where you are

4 Markov Process: Information state This spider doesn't like to turn back. The information state of a Markov process may be different from its physical state

5 Markov Reward Process Random wandering through states will occasionally win you a reward

6 The Fly Markov Reward Process [Transition diagram over states s_0, s_1, s_2, s_3 with transition probabilities 1/3, 2/3 and 1.0; rewards R = -1 and R = 0.] There are, in fact, only four states, not eight: Manhattan distance between fly and spider = 0 (s_0), distance = 1 (s_1), distance = 2 (s_2), distance = 3 (s_3). Can, in fact, redefine the MRP entirely in terms of these 4 states

7 The discounted return G_t = r_{t+1} + γ r_{t+2} + γ^2 r_{t+3} + ... = Σ_{k=0}^∞ γ^k r_{t+k+1} Total future reward all the way to the end
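
The following short Python sketch (not from the slides) makes the definition concrete by computing the discounted return of a finite reward sequence; the function name and example numbers are illustrative.

    def discounted_return(rewards, gamma):
        """Compute G_t given the rewards r_{t+1}, ..., r_T that follow time t."""
        g = 0.0
        # Accumulate from the end, using G_t = r_{t+1} + gamma * G_{t+1}.
        for r in reversed(rewards):
            g = r + gamma * g
        return g

    # Example: three rewards of -1 followed by a final reward of 0, gamma = 0.9
    print(discounted_return([-1, -1, -1, 0], 0.9))  # -2.71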

8 Markov Decision Process [Slide figure: corners c_1 ... c_8.] A Markov Reward Process with the following change: the agent has real agency; the agent's actions modify the environment's behavior

9 The Fly Markov Decision Process [Transition diagram: from states s_1, s_2, s_3 the actions a+ and a- lead to neighboring distance states with probabilities 1/3, 2/3 and 1; the process ends at s_0.]

10 Policy [Slide figure: corners c_1 ... c_8.] The policy is the agent's choice of action in each state. May be stochastic

11 The Bellman Expectation Equations The Bellman expectation equation for the state value function: v_π(s) = Σ_{a∈A} π(a|s) [ R_s^a + γ Σ_{s'} P^a_{ss'} v_π(s') ] The Bellman expectation equation for the action value function: q_π(s, a) = R_s^a + γ Σ_{s'} P^a_{ss'} Σ_{a'∈A} π(a'|s') q_π(s', a')

12 Optimal Policies The optimal policy is the policy that will maximize the expected total discounted reward at every state: E[G_t | S_t = s] = E[ Σ_{k=0}^∞ γ^k r_{t+k+1} | S_t = s ] Optimal Policy Theorem: For any MDP there exist optimal policies π* that are better than or equal to every other policy π: π* ≥ π, v_{π*}(s) ≥ v_π(s) and q_{π*}(s, a) ≥ q_π(s, a) for all s and all (s, a). Why do we consider the discounted return, rather than the actual return Σ_{k=0}^∞ r_{t+k+1}?

13 The optimal value function π*(a|s) = 1 for a = argmax_{a'} q*(s, a'), 0 otherwise v*(s) = max_a q*(s, a)

14 Bellman Optimality Equations Optimal value function equation: v*(s) = max_a [ R_s^a + γ Σ_{s'} P^a_{ss'} v*(s') ] Optimal action value equation: q*(s, a) = R_s^a + γ Σ_{s'} P^a_{ss'} max_{a'} q*(s', a')

15 Planning with an MDP Problem: Given an MDP (S, P, A, R, γ), find the optimal policy π*. Can either use a value-based solution: find the optimal value (or action value) function, and derive the policy from it; OR a policy-based solution: find the optimal policy directly

16 Value-based Planning Value-based solution breakdown: Prediction: given any policy π, find the value function v_π(s). Control: find the optimal policy

17 Prediction DP Iterate: v_π^{(k+1)}(s) = Σ_{a∈A} π(a|s) [ R_s^a + γ Σ_{s'} P^a_{ss'} v_π^{(k)}(s') ]
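
A minimal Python sketch of this prediction DP (not from the slides), assuming the dynamics are given as arrays P[s, a, s'] (transition probabilities) and R[s, a] (expected rewards) and the policy as probabilities pi[s, a]; all names are illustrative.

    import numpy as np

    def policy_evaluation(P, R, pi, gamma, tol=1e-8):
        """Iterate the Bellman expectation backup until the values stop changing."""
        v = np.zeros(P.shape[0])
        while True:
            q = R + gamma * P @ v            # one-step lookahead values, shape (S, A)
            v_new = (pi * q).sum(axis=1)     # average over actions according to pi
            if np.max(np.abs(v_new - v)) < tol:
                return v_new
            v = v_new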

18 Policy Iteration Start with any policy π^(0) Iterate (k = 0, 1, ... until convergence): Use prediction DP to find the value function v_{π^(k)}(s) Find the greedy policy π^(k+1)(s) = greedy(v_{π^(k)}(s))
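
Continuing the previous sketch, the outer policy-iteration loop might look as follows (illustrative; greedy improvement is implemented as a one-step lookahead argmax).

    def policy_iteration(P, R, gamma):
        """Alternate prediction DP and greedy policy improvement."""
        n_states, n_actions = R.shape
        pi = np.full((n_states, n_actions), 1.0 / n_actions)   # start from the uniform policy
        while True:
            v = policy_evaluation(P, R, pi, gamma)
            greedy = np.argmax(R + gamma * P @ v, axis=1)      # greedy(v_pi)
            new_pi = np.eye(n_actions)[greedy]                 # deterministic greedy policy
            if np.array_equal(new_pi, pi):
                return pi, v
            pi = new_pi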

19 Value iteration v^{(k)}(s) = max_a [ R_s^a + γ Σ_{s'} P^a_{ss'} v^{(k-1)}(s') ] Each state simply inherits the cost of its best neighbour state. The cost of a neighbour is the value of the neighbour plus the cost of getting there
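
A corresponding value-iteration sketch under the same array conventions (illustrative, not from the slides):

    import numpy as np

    def value_iteration(P, R, gamma, tol=1e-8):
        """Apply the Bellman optimality backup until convergence, then read off a greedy policy."""
        v = np.zeros(P.shape[0])
        while True:
            v_new = np.max(R + gamma * P @ v, axis=1)   # value of the best one-step lookahead
            if np.max(np.abs(v_new - v)) < tol:
                return v_new, np.argmax(R + gamma * P @ v_new, axis=1)
            v = v_new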

20 Problem so far Given all details of the MDP Compute optimal value function Compute optimal action value function Compute optimal policy This is the problem of planning Problem: In real life, nobody gives you the MDP How do we plan???

21 Model-Free Methods AKA model-free reinforcement learning How do you find the value of a policy, without knowing the underlying MDP? Model-free prediction How do you find the optimal policy, without knowing the underlying MDP? Model-free control

22 Model-Free Methods AKA model-free reinforcement learning How do you find the value of a policy, without knowing the underlying MDP? Model-free prediction How do you find the optimal policy, without knowing the underlying MDP? Model-free control Assumption: We can identify the states, know the actions, and measure rewards, but have no knowledge of the system dynamics The key knowledge required to solve for the best policy A reasonable assumption in many discrete-state scenarios Can be generalized to other scenarios with infinite or unknowable state

23 Model-Free Assumption Can see the fly Know the distance to the fly Know possible actions (get closer/farther) But have no idea of how the fly will respond Will it move, and if so, to what corner

24 Model-Free Methods AKA model-free reinforcement learning How do you find the value of a policy, without knowing the underlying MDP? Model-free prediction How do you find the optimal policy, without knowing the underlying MDP? Model-free control

25 Model-Free Assumption Can see the fly and distance to the fly But have no idea of how the fly will respond to actions Will it move, and if so, to what corner But will always try to reduce distance to fly (have a known, fixed, policy) What is the value of being a distance D from the fly?

26 Methods Monte-Carlo Learning Temporal-Difference Learning TD(1) TD(K) TD(λ)

27 Monte-Carlo learning to learn the value of a policy π Just let the system run while following the policy π and learn the value of different states Procedure: Record several episodes of the following: Take actions according to policy π Note states visited and rewards obtained as a result Record the entire sequence: S_1, A_1, R_2, S_2, A_2, R_3, ..., S_T Assumption: Each episode ends at some time Estimate value functions based on observations by counting

28 Monte-Carlo Value Estimation Objective: Estimate the value function v_π(s) for every state s, given recordings of the kind: S_1, A_1, R_2, S_2, A_2, R_3, ..., S_T Recall, the value function is the expected return: v_π(s) = E[G_t | S_t = s] = E[ R_{t+1} + γ R_{t+2} + ... + γ^{T-t-1} R_T | S_t = s ] To estimate this, we replace the statistical expectation E[G_t | S_t = s] by the empirical average avg(G_t | S_t = s)

29 A bit of notation We actually record many episodes: episode 1 = S_{11}, A_{11}, R_{12}, S_{12}, A_{12}, R_{13}, ..., S_{1T_1} episode 2 = S_{21}, A_{21}, R_{22}, S_{22}, A_{22}, R_{23}, ..., S_{2T_2} Different episodes may be of different lengths

30 Counting Returns For each episode, we count the returns at all times: S_{11}, A_{11}, R_{12}, S_{12}, A_{12}, R_{13}, S_{13}, A_{13}, R_{14}, ..., S_{1T_1} Return at time t (here highlighting G_{1,1}): G_{1,1} = R_{12} + γ R_{13} + ... + γ^{T_1-2} R_{1T_1} G_{1,2} = R_{13} + γ R_{14} + ... + γ^{T_1-3} R_{1T_1} G_{1,t} = R_{1,t+1} + γ R_{1,t+2} + ... + γ^{T_1-t-1} R_{1T_1}

31 Counting Returns For each episode, we count the returns at all times: S_{11}, A_{11}, R_{12}, S_{12}, A_{12}, R_{13}, S_{13}, A_{13}, R_{14}, ..., S_{1T_1} Return at time t (here highlighting G_{1,2}): G_{1,1} = R_{12} + γ R_{13} + ... + γ^{T_1-2} R_{1T_1} G_{1,2} = R_{13} + γ R_{14} + ... + γ^{T_1-3} R_{1T_1} G_{1,t} = R_{1,t+1} + γ R_{1,t+2} + ... + γ^{T_1-t-1} R_{1T_1}

32 Counting Returns For each episode, we count the returns at all times: S_{11}, A_{11}, R_{12}, S_{12}, A_{12}, R_{13}, S_{13}, A_{13}, R_{14}, ..., S_{1T_1} Return at time t: G_{1,1} = R_{12} + γ R_{13} + ... + γ^{T_1-2} R_{1T_1} G_{1,2} = R_{13} + γ R_{14} + ... + γ^{T_1-3} R_{1T_1} G_{1,t} = R_{1,t+1} + γ R_{1,t+2} + ... + γ^{T_1-t-1} R_{1T_1}

33 Estimating the Value of a State To estimate the value of any state, identify the instances of that state in the episodes: S_{11}, A_{11}, R_{12}, S_{12}, A_{12}, R_{13}, S_{13}, A_{13}, R_{14}, ..., S_{1T_1} (e.g. S_{11} = s_a, S_{12} = s_b, S_{13} = s_a, ...) Compute the average return from those instances: v_π(s_a) = avg(G_{1,1}, G_{1,3}, ...)

34 Estimating the Value of a State
  For every state s:
    Initialize: count N(s) = 0, total return v_π(s) = 0
    For every episode e:
      For every time t = 1 ... T_e:
        Compute G_t
        If (S_t == s): N(s) = N(s) + 1; v_π(s) = v_π(s) + G_t
    v_π(s) = v_π(s) / N(s)
  Can be done more efficiently..

35 Online Version
  For all s, initialize: count N(s) = 0, total return totv_π(s) = 0
  For every episode e:
    For every time t = 1 ... T_e:
      Compute G_t
      N(S_t) = N(S_t) + 1
      totv_π(S_t) = totv_π(S_t) + G_t
  For every state s: v_π(s) = totv_π(s) / N(s)
  Updating values at the end of each episode; can be done more efficiently..
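
A runnable Python sketch of this every-visit Monte-Carlo estimator (illustrative; episodes are assumed to be lists of (state, reward) pairs, where the reward is the one received on leaving that state):

    from collections import defaultdict

    def mc_value_estimate(episodes, gamma):
        """Every-visit Monte-Carlo estimate of v_pi from recorded episodes."""
        counts = defaultdict(int)
        totals = defaultdict(float)
        for episode in episodes:
            g = 0.0
            # Walk the episode backwards so that g holds G_t when (S_t, R_{t+1}) is processed.
            for state, reward in reversed(episode):
                g = reward + gamma * g
                counts[state] += 1
                totals[state] += g
        return {s: totals[s] / counts[s] for s in counts}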

36 Monte Carlo estimation Learning from experience explicitly After a sufficiently large number of episodes, in which all states have been visited a sufficiently large number of times, we will obtain good estimates of the value functions of all states Easily extended to evaluating action value functions

37 Estimating the Action Value function To estimate the value of any state-action pair, identify the instances of that state-action pair in the episodes: S_1, A_1, R_2, S_2, A_2, R_3, S_3, A_3, R_4, ..., S_T (e.g. (S_1, A_1) = (s_a, a_x), (S_2, A_2) = (s_b, a_y), (S_3, A_3) = (s_a, a_y), ...) Compute the average return from those instances: q_π(s_a, a_x) = avg(G_{1,1}, ...)

38 Online Version
  For all (s, a), initialize: count N(s, a) = 0, total value totq_π(s, a) = 0
  For every episode e:
    For every time t = 1 ... T_e:
      Compute G_t
      N(S_t, A_t) = N(S_t, A_t) + 1
      totq_π(S_t, A_t) = totq_π(S_t, A_t) + G_t
  For every (s, a): q_π(s, a) = totq_π(s, a) / N(s, a)
  Updating values at the end of each episode

39 Monte Carlo: Good and Bad Good: Will eventually get to the right answer Unbiased estimate Bad: Cannot update anything until the end of an episode Which may last for ever High variance! Each return adds many random values Slow to converge

40 Online methods for estimating the value of a policy: Temporal Difference Learning (TD) Idea: Update your value estimates after every observation S_1, A_1, R_2, S_2, A_2, R_3, S_3, A_3, R_4, ..., S_T Update for S_1 Update for S_2 Update for S_3 Do not actually wait until the end of the episode

41 Incremental Update of Averages Given a sequence x_1, x_2, x_3, ..., a running estimate of their average can be computed as μ_k = (1/k) Σ_{i=1}^k x_i This can be rewritten as: μ_k = ((k-1) μ_{k-1} + x_k) / k And further refined to μ_k = μ_{k-1} + (1/k)(x_k - μ_{k-1})

42 Incremental Update of Averages Given a sequence x_1, x_2, x_3, ..., a running estimate of their average can be computed as μ_k = μ_{k-1} + (1/k)(x_k - μ_{k-1}) Or more generally as μ_k = μ_{k-1} + α (x_k - μ_{k-1}) The latter is particularly useful for non-stationary environments
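
A small Python sketch contrasting the two update rules (illustrative):

    def running_averages(xs, alpha=0.1):
        """Track the exact running mean (1/k step size) and a constant-alpha estimate."""
        mu_exact, mu_alpha = 0.0, 0.0
        for k, x in enumerate(xs, start=1):
            mu_exact += (x - mu_exact) / k       # unbiased, converges to the true mean
            mu_alpha += alpha * (x - mu_alpha)   # biased early on, but tracks non-stationary data
        return mu_exact, mu_alpha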

43 Incremental Updates μ_k = μ_{k-1} + (1/k)(x_k - μ_{k-1}) versus μ_k = μ_{k-1} + α (x_k - μ_{k-1}) [Plots for α = 0.1, 0.05, 0.03] Example of a running average of a uniform random variable

44 Incremental Updates μ_k = μ_{k-1} + (1/k)(x_k - μ_{k-1}) versus μ_k = μ_{k-1} + α (x_k - μ_{k-1}) [Plots for α = 0.1, 0.05, 0.03] The correct (1/k) equation is unbiased and converges to the true value. The equation with α is biased (early estimates can be expected to be wrong) but converges to the true value

45 Updating the Value Function Incrementally Actual update: v_π(s) = (1/N(s)) Σ_{i=1}^{N(s)} G_{t(i)} N(s) is the total number of visits to state s across all episodes G_{t(i)} is the discounted return at the time instant of the i-th visit to state s

46 Online update Given any episode S_1, A_1, R_2, S_2, A_2, R_3, S_3, A_3, R_4, ..., S_T Update the value of each state visited: N(S_t) = N(S_t) + 1, v_π(S_t) = v_π(S_t) + (1/N(S_t)) (G_t - v_π(S_t)) Incremental version: v_π(S_t) = v_π(S_t) + α (G_t - v_π(S_t)) Still an unrealistic rule: requires the entire track until the end of the episode to compute G_t

47 Online update Given any episode S_1, A_1, R_2, S_2, A_2, R_3, S_3, A_3, R_4, ..., S_T Update the value of each state visited: N(S_t) = N(S_t) + 1, v_π(S_t) = v_π(S_t) + (1/N(S_t)) (G_t - v_π(S_t)) Incremental version: v_π(S_t) = v_π(S_t) + α (G_t - v_π(S_t)) Still an unrealistic rule. Problem: requires the entire track until the end of the episode to compute G_t

48 TD solution v_π(S_t) = v_π(S_t) + α (G_t - v_π(S_t)) The problem term is G_t. But G_t = R_{t+1} + γ G_{t+1} We can approximate G_{t+1} by the expected return at the next state S_{t+1}

49 Counting Returns For each episode, we count the returns at all times: S_1, A_1, R_2, S_2, A_2, R_3, S_3, A_3, R_4, ..., S_T Return at time t: G_1 = R_2 + γ R_3 + ... + γ^{T-2} R_T G_2 = R_3 + γ R_4 + ... + γ^{T-3} R_T G_t = R_{t+1} + γ R_{t+2} + ... + γ^{T-t-1} R_T Can rewrite as G_1 = R_2 + γ G_2 Or G_1 = R_2 + γ R_3 + γ^2 G_3 In general, G_t = R_{t+1} + Σ_{i=1}^N γ^i R_{t+1+i} + γ^{N+1} G_{t+1+N}

50 TD solution v_π(S_t) = v_π(S_t) + α (G_t - v_π(S_t)) The problem term is G_t, but G_t = R_{t+1} + γ G_{t+1} We can approximate G_{t+1} by the expected return at the next state S_{t+1}, i.e. v_π(S_{t+1}): G_t ≈ R_{t+1} + γ v_π(S_{t+1}) We don't know the real value of v_π(S_{t+1}), but we can bootstrap it by its current estimate

51 TD(1) true online update v_π(S_t) = v_π(S_t) + α (G_t - v_π(S_t)) where G_t ≈ R_{t+1} + γ v_π(S_{t+1}) Giving us v_π(S_t) = v_π(S_t) + α ( R_{t+1} + γ v_π(S_{t+1}) - v_π(S_t) )

52 TD(1) true online update v_π(S_t) = v_π(S_t) + α δ_t where δ_t = R_{t+1} + γ v_π(S_{t+1}) - v_π(S_t) δ_t is the TD error: the error between an (estimated) observation of G_t and the current estimate v_π(S_t)

53 TD(1) true online update
  For all s, initialize v_π(s) = 0
  For every episode e:
    For every time t = 1 ... T_e:
      v_π(S_t) = v_π(S_t) + α ( R_{t+1} + γ v_π(S_{t+1}) - v_π(S_t) )
  There's a lookahead of one state, to know which state the process arrives at at the next time, but it is otherwise online, with continuous updates
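
A runnable sketch of this update loop in Python (illustrative; env.reset() -> state and env.step(action) -> (next_state, reward, done) are an assumed interface, and the name follows the slides' use of "TD(1)" for the one-step TD update):

    from collections import defaultdict

    def td_prediction(env, policy, gamma, alpha, num_episodes):
        """One-step TD evaluation of a fixed policy ("TD(1)" in these slides' naming)."""
        v = defaultdict(float)
        for _ in range(num_episodes):
            state, done = env.reset(), False
            while not done:
                next_state, reward, done = env.step(policy(state))
                target = reward + (0.0 if done else gamma * v[next_state])
                v[state] += alpha * (target - v[state])   # v(S_t) += alpha * delta_t
                state = next_state
        return v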

54 TD(1) Updates continuously improve estimates as soon as you observe a state (and its successor) Can work even with infinitely long processes that never terminate Guaranteed to converge to the true values eventually Although initial values will be biased as seen before Is actually lower variance than MC!! Only incorporates one RV at any time TD can give correct answers when MC goes wrong Particularly when TD is allowed to loop over all learning episodes

55 TD vs MC What are v(a) and v(b)? Using MC Using TD(1), where you are allowed to repeatedly go over the data

56 TD: look ahead further? TD(1) has a lookahead of 1 time step: G_t ≈ R_{t+1} + γ v_π(S_{t+1}) But we can look ahead further out: G_t^(2) = R_{t+1} + γ R_{t+2} + γ^2 v_π(S_{t+2}) G_t^(N) = R_{t+1} + Σ_{i=1}^{N-1} γ^i R_{t+1+i} + γ^N v_π(S_{t+N})

57 TD(N) with lookahead v_π(S_t) = v_π(S_t) + α δ_t^(N) where δ_t^(N) = R_{t+1} + Σ_{i=1}^{N-1} γ^i R_{t+1+i} + γ^N v_π(S_{t+N}) - v_π(S_t) δ_t^(N) is the TD error with N-step lookahead

58 Lookahead is good Good: The further you look ahead, the better your estimates get Problems: But you also get more variance At infinite lookahead, you're back at MC Also, you have to wait to update your estimates A lag between observation and estimate So how much lookahead must you use?

59 Looking Into The Future How much various TDs look into the future Which do we use?

60 Solution: Why choose? Each lookahead provides an estimate of G_t Why not just combine the lot with discounting?

61 TD(λ) G_t^λ = (1 - λ) Σ_{n=1}^∞ λ^{n-1} G_t^(n) Combine the predictions from all lookaheads with an exponentially falling weight Weights sum to 1.0 V(S_t) ← V(S_t) + α ( G_t^λ - V(S_t) )

62 Something magical just happened TD(λ) looks into the infinite future I.e. we must have all the rewards of the future to compute our updates How does that help?

63 The contribution of future rewards to the present update [Slide figure: the future rewards R_{t+1}, R_{t+2}, R_{t+3}, ... contribute to the update of state S_t with weights (1-λ), (1-λ)λ, (1-λ)λ^2, ... along the time axis.] All future rewards contribute to the update of the value of the current state

64 The contribution of the current reward to past states [Slide figure: the current reward R_t contributes to the values of earlier states with weights (1-λ), (1-λ)λ, (1-λ)λ^2, ... decreasing the further back the state was visited.] All current reward contributes to the update of the value of all past states!

65 TD(λ) backward view [Slide figure: weights (1-λ), (1-λ)λ, ..., (1-λ)λ^6 attached to past visits of a state; add these weights to compute the contribution of R_t to that state.] The eligibility trace: keeps track of the total weight for any state, which may have occurred at multiple times in the past

66 TD(λ) Maintain an eligibility trace for every state: E_0(s) = 0 E_t(s) = γ λ E_{t-1}(s) + 1(S_t = s) Computes the total weight for the state until the present time

67 TD(λ) At every time, update the value of every state according to its eligibility trace: δ_t = R_{t+1} + γ V(S_{t+1}) - V(S_t) V(s) ← V(s) + α δ_t E_t(s) Any state that was visited will be updated Those that were not will not be, though
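
A backward-view TD(λ) sketch (illustrative; same assumed env/policy interface as the earlier TD snippet):

    from collections import defaultdict

    def td_lambda_prediction(env, policy, gamma, lam, alpha, num_episodes):
        """Backward-view TD(lambda) evaluation with accumulating eligibility traces."""
        v = defaultdict(float)
        for _ in range(num_episodes):
            traces = defaultdict(float)
            state, done = env.reset(), False
            while not done:
                next_state, reward, done = env.step(policy(state))
                delta = reward + (0.0 if done else gamma * v[next_state]) - v[state]
                traces[state] += 1.0                      # bump the trace of the visited state
                for s in list(traces):
                    v[s] += alpha * delta * traces[s]     # every past state shares the TD error
                    traces[s] *= gamma * lam              # decay all traces
                state = next_state
        return v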

68 The magic of TD(λ) Managed to get the effect of infinite lookahead, by performing infinite lookbehind Or at least lookbehind to the beginning Every reward updates the value of all states leading to the reward! E.g., in a chess game, if we win, we want to increase the value of all game states we visited, not just the final move But early states/moves must gain much less than later moves When λ = 1 this is exactly equivalent to MC

69 Story so far Want to compute the values of all states, given a policy, but no knowledge of dynamics Have seen Monte-Carlo and temporal difference solutions TD is quicker to update, and in many situations the better solution TD(λ) actually emulates an infinite lookahead But we must choose good values of α and λ

70 Optimal Policy: Control We learned how to estimate the state value functions for an MDP whose transition probabilities are unknown for a given policy How do we find the optimal policy?

71 Value vs. Action Value The solution we saw so far only computes the value functions of states This is not sufficient to compute the optimal policy: from value functions alone we would need extra information, namely transition probabilities, which we do not have Instead, we can use the same method to compute action value functions Optimal policy in any state: choose the action that has the largest optimal action value

72 Value vs. Action value Given only value functions, the optimal policy must be estimated as: π*(s) = argmax_{a∈A} [ R_s^a + γ Σ_{s'} P^a_{ss'} V(s') ] Needs knowledge of transition probabilities Given action value functions, we can find it as: π*(s) = argmax_{a∈A} Q(s, a) This is model free (no need for knowledge of model parameters)

73 Problem of optimal control From a series of episodes of the kind: S_1, A_1, R_2, S_2, A_2, R_3, S_3, A_3, R_4, ..., S_T Find the optimal action value function q*(s, a) The optimal policy can be found from it Ideally do this online So that we can continuously improve our policy from ongoing experience

74 Exploration vs. Exploitation Optimal policy search happens while gathering experience while following a policy For fastest learning, we will follow an estimate of the optimal policy Risk: We run the risk of positive feedback Only learn to evaluate our current policy Will never learn about alternate policies that may turn out to be better Solution: We will follow our current optimal policy 1-ε of the time But choose a random action ε of the time The epsilon-greedy policy
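
A minimal ε-greedy selection sketch matching the rule described above (illustrative; q is a mapping from (state, action) pairs to estimated values, e.g. a defaultdict(float)):

    import random

    def epsilon_greedy(q, state, actions, epsilon):
        """Greedy action with probability 1 - epsilon; otherwise one of the other actions."""
        greedy = max(actions, key=lambda a: q[(state, a)])
        if random.random() < epsilon and len(actions) > 1:
            others = [a for a in actions if a != greedy]
            return random.choice(others)   # remaining epsilon mass split over non-greedy actions
        return greedy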

75 GLIE Monte Carlo (Greedy in the limit with infinite exploration)
  Start with some random initial policy π
  Start the process at the initial state, and follow an action according to the initial policy π
  Produce the episode S_1, A_1, R_2, S_2, A_2, R_3, S_3, A_3, R_4, ..., S_T
  Process the episode using the following online update rules
  Compute the ε-greedy policy for each state: π(a|s) = 1-ε for a = argmax_{a'} Q(s, a'), ε/(N_a - 1) otherwise
  Repeat

76 GLIE Monte Carlo (Greedy in the limit with infinite exploration)
  Start with some random initial policy π
  Start the process at the initial state, and follow an action according to the initial policy π
  Produce the episode S_1, A_1, R_2, S_2, A_2, R_3, S_3, A_3, R_4, ..., S_T
  Process the episode using the following online update rules
  Compute the ε-greedy policy for each state: π(a|s) = 1-ε for a = argmax_{a'} Q(s, a'), ε/(N_a - 1) otherwise
  Repeat

77 On-line version of GLIE: SARSA Replace G_t with a TD(1) or TD(λ) estimate, just as in the prediction problem TD(1) SARSA: Q(S, A) ← Q(S, A) + α ( R + γ Q(S', A') - Q(S, A) )

78 SARSA
  Initialize Q(s, a) for all s, a
  Start at the initial state S_1 and select an initial action A_1
  For t = 1 ... until termination:
    Get reward R_t
    Let the system transition to the new state S_{t+1}
    Draw A_{t+1} according to the ε-greedy policy: π(a|s) = 1-ε for a = argmax_{a'} Q(s, a'), ε/(N_a - 1) otherwise
    Update: Q(S_t, A_t) = Q(S_t, A_t) + α ( R_t + γ Q(S_{t+1}, A_{t+1}) - Q(S_t, A_t) )
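
A runnable SARSA sketch in Python (illustrative; it reuses the hypothetical env interface and the epsilon_greedy helper sketched earlier):

    from collections import defaultdict

    def sarsa(env, actions, gamma, alpha, epsilon, num_episodes):
        """On-policy TD control: learn Q while following the epsilon-greedy policy derived from it."""
        q = defaultdict(float)
        for _ in range(num_episodes):
            state = env.reset()
            action = epsilon_greedy(q, state, actions, epsilon)
            done = False
            while not done:
                next_state, reward, done = env.step(action)
                next_action = epsilon_greedy(q, next_state, actions, epsilon)
                target = reward + (0.0 if done else gamma * q[(next_state, next_action)])
                q[(state, action)] += alpha * (target - q[(state, action)])
                state, action = next_state, next_action
        return q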

79 SARSA(λ) Again, the TD(1) estimate can be replaced by a TD(λ) estimate Maintain an eligibility trace for every state-action pair: E_0(s, a) = 0 E_t(s, a) = γ λ E_{t-1}(s, a) + 1(S_t = s, A_t = a) Update every state-action pair visited so far: δ_t = R_{t+1} + γ Q(S_{t+1}, A_{t+1}) - Q(S_t, A_t) Q(s, a) ← Q(s, a) + α δ_t E_t(s, a)

80 SARSA(λ)
  For all s, a: initialize Q(s, a)
  For each episode e:
    For all s, a: initialize E(s, a) = 0
    Initialize S_1, A_1
    For t = 1 ... until termination:
      Observe R_{t+1}, S_{t+1}
      Choose action A_{t+1} using the policy obtained from Q
      δ = R_{t+1} + γ Q(S_{t+1}, A_{t+1}) - Q(S_t, A_t)
      E(S_t, A_t) += 1
      For all s, a:
        Q(s, a) = Q(s, a) + α δ E(s, a)
        E(s, a) = γ λ E(s, a)

81 On-policy vs. Off-policy SARSA assumes you're following the same policy that you're learning It's possible to follow one policy, while learning from others E.g. learning by observation The policy for learning is the "what if" policy: S_1, A_1, R_2, S_2, A_2, R_3, S_3, A_3, R_4, ..., S_T with hypothetical actions Â_2, Â_3, ... Modifies the learning rule from Q(S_t, A_t) = Q(S_t, A_t) + α ( R_{t+1} + γ Q(S_{t+1}, A_{t+1}) - Q(S_t, A_t) ) to the hypothetical Q(S_t, A_t) = Q(S_t, A_t) + α ( R_{t+1} + γ Q(S_{t+1}, Â_{t+1}) - Q(S_t, A_t) ) Q will actually represent the action value function of the hypothetical policy

82 SARSA: Suboptimality SARSA: From any state-action pair (S, A), accept reward R, transition to the next state S', choose the next action A' Use TD rules to update: δ = R + γ Q(S', A') - Q(S, A) Problem: which policy do we use to choose A'?

83 SARSA: Suboptimality SARSA: From any state-action pair (S, A), accept reward R, transition to the next state S', choose the next action A' Problem: which policy do we use to choose A'? If we choose the current judgment of the best action at S', we will become too greedy Never explore If we choose a sub-optimal policy to follow, we will never find the best policy

84 Solution: Off-policy learning The policy for learning is the "what if" policy: S_1, A_1, R_2, S_2, A_2, R_3, S_3, A_3, R_4, ..., S_T with hypothetical actions Â_2, Â_3, ... Use the best action for S_{t+1} as your hypothetical off-policy action But actually follow an epsilon-greedy action The hypothetical action is guaranteed to be at least as good as the one you actually took But you still explore (non-greedy)

85 Q-Learning From any state-action pair (S, A): Accept reward R Transition to S' Find the best action A' for S' Use it to update Q(S, A) But then actually perform an epsilon-greedy action A'' from S'

86 Q-Learning (TD(1) version)
  For all s, a: initialize Q(s, a)
  For each episode e:
    Initialize S_1, A_1
    For t = 1 ... until termination:
      Observe R_{t+1}, S_{t+1}
      Choose action A_{t+1} at S_{t+1} using the ε-greedy policy obtained from Q
      Choose action Â_{t+1} at S_{t+1} as Â_{t+1} = argmax_a Q(S_{t+1}, a)
      δ = R_{t+1} + γ Q(S_{t+1}, Â_{t+1}) - Q(S_t, A_t)
      Q(S_t, A_t) = Q(S_t, A_t) + α δ
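
A runnable Q-learning sketch (illustrative; same assumed helpers as the SARSA snippet), showing the off-policy structure: behave ε-greedily, but bootstrap with the greedy action.

    from collections import defaultdict

    def q_learning(env, actions, gamma, alpha, epsilon, num_episodes):
        """Off-policy TD control: the target uses max_a Q(S', a); the behavior is epsilon-greedy."""
        q = defaultdict(float)
        for _ in range(num_episodes):
            state, done = env.reset(), False
            while not done:
                action = epsilon_greedy(q, state, actions, epsilon)    # behavior policy
                next_state, reward, done = env.step(action)
                best_next = max(q[(next_state, a)] for a in actions)   # greedy "what-if" value
                target = reward + (0.0 if done else gamma * best_next)
                q[(state, action)] += alpha * (target - q[(state, action)])
                state = next_state
        return q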

87 Q-Learning (TD(λ) version)
  For all s, a: initialize Q(s, a)
  For each episode e:
    For all s, a: initialize E(s, a) = 0
    Initialize S_1, A_1
    For t = 1 ... until termination:
      Observe R_{t+1}, S_{t+1}
      Choose action A_{t+1} at S_{t+1} using the ε-greedy policy obtained from Q
      Choose action Â_{t+1} at S_{t+1} as Â_{t+1} = argmax_a Q(S_{t+1}, a)
      δ = R_{t+1} + γ Q(S_{t+1}, Â_{t+1}) - Q(S_t, A_t)
      E(S_t, A_t) += 1
      For all s, a:
        Q(s, a) = Q(s, a) + α δ E(s, a)
        E(s, a) = γ λ E(s, a)

88 What about the actual policy? Optimal greedy policy: π(a|s) = 1 for a = argmax_{a'} Q(s, a'), 0 otherwise Exploration policy: π(a|s) = 1-ε for a = argmax_{a'} Q(s, a'), ε/(N_a - 1) otherwise Ideally ε should decrease with time

89 Q-Learning Currently the most popular RL algorithm Topics not covered: Value function approximation Continuous state spaces Deep Q-learning Action replay Application to real problems...
