Machine Learning
Reinforcement Learning
Hamid Beigy
Sharif University of Technology
Fall 1396
Table of contents
1 Introduction
2 Non-associative reinforcement learning
3 Associative reinforcement learning
4 Goals, rewards, and returns
5 Markov decision process
6 Dynamic programming
7 Monte Carlo methods
8 Temporal-difference methods
9 Reading
Introduction
Reinforcement learning is learning what to do (how to map situations to actions) so as to maximize a scalar reward/reinforcement signal.
The learner is not told which actions to take, as in supervised learning, but must discover which actions yield the most reward by trying them.
Trial-and-error search and delayed reward are the two most important features of reinforcement learning.
Reinforcement learning is defined not by characterizing learning algorithms, but by characterizing a learning problem: any algorithm that is well suited to solving that problem is considered a reinforcement learning algorithm.
One of the challenges that arises in reinforcement learning is the trade-off between exploration and exploitation.
Introduction (cont.)
A key feature of reinforcement learning is that it explicitly considers the whole problem of a goal-directed agent interacting with an uncertain environment.
[Figure: the agent–environment interaction loop. At step t the agent, in state s_t, takes action a_t; the environment returns reward r_{t+1} and next state s_{t+1}.]
Elements of RL
Policy: a mapping from received states of the environment to actions to be taken (what to do?).
Reward function: defines the goal of the RL problem. It maps each state–action pair to a single number, called the reinforcement signal, indicating the goodness of the action (what is good?).
Value function: specifies what is good in the long run (what is good because it predicts reward?).
Model of the environment (optional): something that mimics the behavior of the environment (what follows what?).
An example: Tic-Tac-Toe
Consider a two-player game (Tic-Tac-Toe).
[Figure: a game tree of alternating "our move" / "opponent's move" levels, with positions labeled a, b, c, c*, d, e*, e, f, g, g*, ..., starting from the empty board; backup arrows run from each later position to the earlier one whose value it updates.]
Consider the following update of the estimated value of a state s, using the state s' reached after the next move:
V(s) ← V(s) + α [V(s') − V(s)]
Types of reinforcement learning
Non-associative reinforcement learning: the learning method does not involve learning to act in more than one state.
[Figure: the interaction of a learning automaton and its environment. The automaton applies an action to a random environment; the environment evaluates the action and returns a reinforcement signal; the automaton uses the response to modify its action probability vector p and select the next action. By repeating this process, the automaton learns to select the action with the highest reward.]
Associative reinforcement learning: the learning method involves learning to act in more than one state.
[Figure: the agent–environment interaction loop with state s_t, action a_t, and reward r_{t+1}.]
Multi-armed bandit problem
Consider that you are faced repeatedly with a choice among n different options, or actions.
After each choice, you receive a numerical reward chosen from a stationary probability distribution that depends on the action you selected.
Your objective is to maximize the expected total reward over some time period.
This is the original form of the n-armed bandit problem, so named by analogy to a slot machine ("one-armed bandit").
Action-value methods
Consider some simple methods for estimating the values of actions, and for then using the estimates to select actions.
Let the true value of action a be denoted Q*(a) and its estimated value at the t-th play Q_t(a).
The true value of an action is the mean reward received when that action is selected.
One natural way to estimate it is to average the rewards actually received when the action was selected.
In other words, if prior to the t-th play action a has been chosen k_a times, yielding rewards r_1, r_2, ..., r_{k_a}, then its value is estimated as
Q_t(a) = (r_1 + r_2 + ... + r_{k_a}) / k_a
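The sample average above need not store every past reward: it is equivalent to the incremental update Q_{k+1} = Q_k + (1/(k+1)) (r − Q_k). A minimal sketch (the helper name update_estimate is ours, not from the slides):

```python
def update_estimate(q, n, reward):
    """Incremental sample average: after the update, q equals the mean
    of all n rewards seen so far, using O(1) memory per action."""
    n += 1
    q += (reward - q) / n
    return q, n

# four plays of one action with rewards 1, 0, 1, 1
q, n = 0.0, 0
for r in [1.0, 0.0, 1.0, 1.0]:
    q, n = update_estimate(q, n, r)
```

After these four updates q holds the plain average 3/4, with no reward history kept.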
Action selection strategies
Greedy action selection: selects the action with the highest estimated action value,
a_t = argmax_a Q_t(a)
ε-greedy action selection: selects the action with the highest estimated action value most of the time, but with small probability ε selects an action at random, uniformly, independently of the action-value estimates.
Softmax action selection: selects actions with probabilities given as a graded function of estimated value,
p_t(a) = e^{Q_t(a)/τ} / Σ_b e^{Q_t(b)/τ}
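These selection strategies can be sketched in a few lines; the names epsilon_greedy and softmax_probs are illustrative, not from the slides:

```python
import math
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """With probability epsilon pick uniformly at random (explore),
    otherwise pick the action with the highest estimate (exploit)."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

def softmax_probs(q_values, tau):
    """Gibbs/Boltzmann action probabilities: a large temperature tau
    gives near-uniform choice, tau -> 0 approaches greedy selection."""
    exps = [math.exp(q / tau) for q in q_values]
    z = sum(exps)
    return [e / z for e in exps]

probs = softmax_probs([1.0, 2.0, 3.0], tau=1.0)
greedy_choice = epsilon_greedy([0.1, 0.9], epsilon=0.0)
```

With ε = 0, ε-greedy reduces to pure greedy selection; softmax always keeps every action's probability strictly positive.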
Learning automata
The environment is represented by a triple ⟨α, β, C⟩, where
1 α = {α_1, α_2, ..., α_r} is the set of inputs,
2 β = {0, 1} is the set of values the reinforcement signal can take,
3 C = {c_1, c_2, ..., c_r} is the set of penalty probabilities, where c_i = Prob[β(k) = 1 | α(k) = α_i].
A variable-structure learning automaton is represented by a triple ⟨β, α, T⟩, where
1 β = {0, 1} is the set of inputs,
2 α = {α_1, α_2, ..., α_r} is the set of actions,
3 T is a learning algorithm used to modify the action probability vector p.
L_{R-εP} learning algorithm
In the linear reward-ε-penalty algorithm (L_{R-εP}), when action α_i is chosen at instant k, the updating rule for p is defined, when β(k) = 0 (reward), as
p_j(k+1) = p_j(k) + a [1 − p_j(k)]   if j = i
p_j(k+1) = p_j(k) − a p_j(k)   if j ≠ i
and, when β(k) = 1 (penalty), as
p_j(k+1) = p_j(k) (1 − b)   if j = i
p_j(k+1) = b/(r − 1) + p_j(k) (1 − b)   if j ≠ i
Parameters 0 < b ≪ a < 1 represent step lengths.
When a = b, we call it the linear reward-penalty (L_{R-P}) algorithm.
When b = 0, we call it the linear reward-inaction (L_{R-I}) algorithm.
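As a concrete instance, the reward branch of this update, which is the only branch that acts in L_{R-I} (on penalty, L_{R-I} leaves p unchanged), can be sketched as follows; the function name is ours:

```python
def reward_update(p, i, a=0.1):
    """One reward step (beta = 0) of the automaton update: move
    probability mass toward the chosen action i. The total probability
    stays 1, since the chosen action gains exactly what the others lose."""
    return [pj + a * (1.0 - pj) if j == i else pj * (1.0 - a)
            for j, pj in enumerate(p)]

# two actions, action 0 was chosen and rewarded
p = reward_update([0.5, 0.5], i=0)
```

Repeating this whenever action 0 is rewarded drives p_0 toward 1, which is how the automaton converges on the action with the lowest penalty probability.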
Measure of learning in learning automata
In stationary environments, the average penalty received by the automaton is
M(k) = E[β(k) | p(k)] = Prob[β(k) = 1 | p(k)] = Σ_{i=1}^{r} c_i p_i(k).
A learning automaton is called expedient if
lim_{k→∞} E[M(k)] < M(0)
A learning automaton is called optimal if
lim_{k→∞} E[M(k)] = min_i c_i
A learning automaton is called ε-optimal if, for arbitrary ε > 0,
lim_{k→∞} E[M(k)] < min_i c_i + ε
Associative reinforcement learning
The learning method involves learning to act in more than one state.
[Figure: the agent–environment interaction loop. At step t the agent, in state s_t, takes action a_t; the environment returns reward r_{t+1} and next state s_{t+1}.]
Goals, rewards, and returns
In reinforcement learning, the goal of the agent is formalized in terms of a special reward signal passing from the environment to the agent.
The agent's goal is to maximize the total amount of reward it receives. This means maximizing not immediate reward, but cumulative reward in the long run.
How might the goal be formally defined?
In episodic tasks the return, R_t, is defined as
R_t = r_{t+1} + r_{t+2} + ... + r_T
In continuing tasks the return, R_t, is defined as
R_t = Σ_{k=0}^{∞} γ^k r_{t+k+1}
The unified approach treats episode termination as entering an absorbing state that transitions only to itself and generates only zero rewards.
[Figure: a state sequence s_0 → s_1 → s_2 ending in an absorbing state, with rewards r_1 = +1, r_2 = +1, r_3 = +1, r_4 = 0, r_5 = 0, ...]
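The return over a finite reward sequence can be computed by the right-to-left recursion R_t = r_{t+1} + γ R_{t+1}. A small sketch (function name ours):

```python
def discounted_return(rewards, gamma):
    """rewards = [r_{t+1}, r_{t+2}, ..., r_T]; computes
    R_t = r_{t+1} + gamma * r_{t+2} + ... + gamma^{T-t-1} * r_T
    via the recursion R_t = r_{t+1} + gamma * R_{t+1}, right to left."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g
```

With γ = 1 on a finite episode this reduces to the plain sum of rewards, matching the episodic definition.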
Markov decision process
An RL task satisfying the Markov property is called a Markov decision process (MDP). If the state and action spaces are finite, it is called a finite MDP.
A particular finite MDP is defined by its state and action sets and by the one-step dynamics of the environment:
P^a_{ss'} = Prob{s_{t+1} = s' | s_t = s, a_t = a}
R^a_{ss'} = E[r_{t+1} | s_t = s, a_t = a, s_{t+1} = s']
[Figure: the Recycling Robot MDP — states high and low; actions search, wait, and recharge, with transition probabilities α, β, 1 − α, 1 − β and rewards R_search, R_wait, −3, and 0 on the arcs.]
Value functions
Suppose that in state s, action a is selected with probability π(s, a).
The value of state s under a policy π is the expected return when starting in s and following π thereafter:
V^π(s) = E_π{R_t | s_t = s} = E_π{ Σ_{k=0}^{∞} γ^k r_{t+k+1} | s_t = s } = Σ_a π(s, a) Σ_{s'} P^a_{ss'} [ R^a_{ss'} + γ V^π(s') ].
The value of action a in state s under a policy π is the expected return when starting in s, taking action a, and following π thereafter:
Q^π(s, a) = E_π{R_t | s_t = s, a_t = a} = E_π{ Σ_{k=0}^{∞} γ^k r_{t+k+1} | s_t = s, a_t = a }
Optimal value functions
Policy π is better than or equal to π' iff V^π(s) ≥ V^{π'}(s) for all s.
There is always at least one policy that is better than or equal to all other policies; this is an optimal policy.
The value of state s under the optimal policy, V*(s), equals
V*(s) = max_π V^π(s)
The value of action a in state s under the optimal policy, Q*(s, a), equals
Q*(s, a) = max_π Q^π(s, a)
[Figure: backup diagrams for V* and Q* — (a) a max over actions a followed by an expectation over next states s'; (b) an expectation over next states s' followed by a max over next actions a'.]
Dynamic programming
The key idea of DP is the use of value functions to organize and structure the search for good policies.
We can easily obtain optimal policies once we have found the optimal value functions, V* or Q*, which satisfy the Bellman optimality equations:
V*(s) = max_a E{r_{t+1} + γ V*(s_{t+1}) | s_t = s, a_t = a} = max_a Σ_{s'} P^a_{ss'} [ R^a_{ss'} + γ V*(s') ].
Similarly, for the optimal action-value function,
Q*(s, a) = E{r_{t+1} + γ max_{a'} Q*(s_{t+1}, a') | s_t = s, a_t = a} = Σ_{s'} P^a_{ss'} [ R^a_{ss'} + γ max_{a'} Q*(s', a') ].
Policy iteration
Policy iteration is an iterative process
π_0 →(E) V^{π_0} →(I) π_1 →(E) V^{π_1} →(I) π_2 →(E) ... →(I) π* →(E) V*
where →(E) denotes policy evaluation and →(I) denotes policy improvement.
In policy evaluation, we compute state or state–action value functions:
V^π(s) = E_π{R_t | s_t = s} = E_π{ Σ_{k=0}^{∞} γ^k r_{t+k+1} | s_t = s } = Σ_a π(s, a) Σ_{s'} P^a_{ss'} [ R^a_{ss'} + γ V^π(s') ].
In policy improvement, we change the policy to obtain a better policy:
π'(s) = argmax_a Q^π(s, a) = argmax_a Σ_{s'} P^a_{ss'} [ R^a_{ss'} + γ V^π(s') ].
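A compact sketch of both phases on a hypothetical two-state MDP. The representation simplifies the notation above: P[s][a] lists (probability, next state) pairs and R[s][a] is the expected immediate reward, i.e. Σ_{s'} P^a_{ss'} R^a_{ss'}; the toy MDP and all names are ours:

```python
def policy_evaluation(policy, P, R, gamma, theta=1e-8):
    """Iterative policy evaluation: sweep the Bellman equation for the
    fixed policy until the largest change in V falls below theta."""
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            a = policy[s]
            v = R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
            delta = max(delta, abs(v - V[s]))
            V[s] = v
        if delta < theta:
            return V

def policy_improvement(V, P, R, gamma):
    """Greedy policy with respect to V: pick the action with the best
    one-step backup in each state."""
    def backup(s, a):
        return R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
    return {s: max(P[s], key=lambda a: backup(s, a)) for s in P}

# toy MDP: from state 0, 'go' earns 1 and reaches state 1,
# where 'stay' earns 2 forever
P = {0: {'stay': [(1.0, 0)], 'go': [(1.0, 1)]},
     1: {'stay': [(1.0, 1)], 'go': [(1.0, 0)]}}
R = {0: {'stay': 0.0, 'go': 1.0},
     1: {'stay': 2.0, 'go': 0.0}}

policy = {0: 'stay', 1: 'stay'}
while True:  # alternate E and I phases until the policy is stable
    V = policy_evaluation(policy, P, R, gamma=0.9)
    improved = policy_improvement(V, P, R, gamma=0.9)
    if improved == policy:
        break
    policy = improved
```

On this MDP the loop stabilizes at the policy that goes to state 1 and stays there, with V(1) = 2/(1 − 0.9) = 20 and V(0) = 1 + 0.9 · 20 = 19.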
Value and generalized policy iteration
In value iteration we have
V_{k+1}(s) = max_a E{r_{t+1} + γ V_k(s_{t+1}) | s_t = s, a_t = a} = max_a Σ_{s'} P^a_{ss'} [ R^a_{ss'} + γ V_k(s') ].
[Figure: generalized policy iteration — the evaluation process (V → V^π) and the improvement process (π → greedy(V)) interact until both stabilize at π* and V*.]
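Value iteration folds the improvement max directly into each sweep. A sketch under the same simplified representation as before (P[s][a] as (probability, next-state) pairs, R[s][a] as expected immediate reward; the toy MDP is hypothetical):

```python
def value_iteration(P, R, gamma, theta=1e-8):
    """V_{k+1}(s) = max_a [ R[s][a] + gamma * sum_s' P * V_k(s') ],
    swept until the largest change falls below theta."""
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            v = max(R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
                    for a in P[s])
            delta = max(delta, abs(v - V[s]))
            V[s] = v
        if delta < theta:
            return V

# same toy two-state MDP as in the policy-iteration sketch
P = {0: {'stay': [(1.0, 0)], 'go': [(1.0, 1)]},
     1: {'stay': [(1.0, 1)], 'go': [(1.0, 0)]}}
R = {0: {'stay': 0.0, 'go': 1.0},
     1: {'stay': 2.0, 'go': 0.0}}
V = value_iteration(P, R, gamma=0.9)
```

Unlike policy iteration, no explicit policy is maintained during the sweeps; a greedy policy is read off from the converged V at the end.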
Dynamic programming backup
V(s_t) ← E_π[r_{t+1} + γ V(s_{t+1})]
[Figure: DP backup diagram — a full-width backup from s_t over all actions and all successor states s_{t+1}, using the one-step rewards r_{t+1}.]
Monte Carlo (MC) methods
MC methods learn directly from episodes of experience.
MC is model-free: no knowledge of MDP transitions or rewards is required.
MC learns from complete episodes.
MC uses the simplest possible idea: value = mean return.
Goal: learn V^π from episodes of experience generated under policy π:
S_1, A_1, R_2, S_2, A_2, R_3, ..., A_{k−1}, R_k, S_k
The return is the total discounted reward:
G_t = R_{t+1} + γ R_{t+2} + ... + γ^{T−1} R_T
The value function is the expected return:
V^π(s) = E_π[G_t | S_t = s]
Monte Carlo policy evaluation uses the empirical mean return instead of the expected return.
First-visit Monte Carlo policy evaluation
To evaluate state s:
The first time step t at which state s is visited in an episode,
increment the counter: N(s) ← N(s) + 1
increment the total return: S(s) ← S(s) + G_t
The value is estimated by the mean return: V(s) = S(s) / N(s)
By the law of large numbers, V(s) → v^π(s) as N(s) → ∞.
Every-visit Monte Carlo policy evaluation
To evaluate state s:
Every time step t at which state s is visited in an episode,
increment the counter: N(s) ← N(s) + 1
increment the total return: S(s) ← S(s) + G_t
The value is estimated by the mean return: V(s) = S(s) / N(s)
By the law of large numbers, V(s) → v^π(s) as N(s) → ∞.
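The first-visit variant can be sketched in a few lines, assuming episodes are given as lists of (state, reward-that-followed) pairs; this data format and the function name are ours:

```python
from collections import defaultdict

def first_visit_mc(episodes, gamma=1.0):
    """Each episode is a list of (state, reward) pairs, where the reward
    is the one received after leaving that state. Estimates V(s) as the
    mean return G_t observed from the first visit to s in each episode."""
    total = defaultdict(float)
    count = defaultdict(int)
    for episode in episodes:
        # compute the return at every step with a backward scan:
        # G_t = r_{t+1} + gamma * G_{t+1}
        g = 0.0
        returns = []
        for s, r in reversed(episode):
            g = r + gamma * g
            returns.append((s, g))
        returns.reverse()
        first = {}
        for s, g in returns:
            first.setdefault(s, g)  # keep only the first visit to each state
        for s, g in first.items():
            total[s] += g
            count[s] += 1
    return {s: total[s] / count[s] for s in count}

# two hypothetical episodes generated under some policy pi
episodes = [[('A', 1.0), ('B', 0.0), ('A', 2.0)], [('B', 3.0)]]
V = first_visit_mc(episodes)
```

Changing `setdefault` to accumulate every occurrence would turn this into the every-visit variant.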
Monte Carlo backup
V(S_t) ← V(S_t) + α (G_t − V(S_t))
[Figure: MC backup diagram — a single sampled trajectory from S_t all the way to the terminal state.]
Temporal-difference methods
TD learning is a combination of Monte Carlo ideas and dynamic programming (DP) ideas.
Like Monte Carlo methods, TD methods can learn directly from raw experience without a model of the environment's dynamics.
Like DP, TD methods update estimates based in part on other learned estimates, without waiting for a final outcome (they bootstrap).
Monte Carlo methods wait until the return following the visit is known and then use that return as a target for V(s_t), while TD methods need wait only until the next time step.
The simplest TD method, known as TD(0), is
V(s_t) ← V(s_t) + α [r_{t+1} + γ V(s_{t+1}) − V(s_t)]
Temporal-difference backup
V(s_t) ← V(s_t) + α [r_{t+1} + γ V(s_{t+1}) − V(s_t)]
[Figure: TD backup diagram — a single sampled transition from s_t to s_{t+1} with reward r_{t+1}.]
Temporal-difference methods (cont.)
[Figure: pseudocode box for tabular TD(0) — initialize V arbitrarily, then for each step of each episode apply the TD(0) update above.]
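The tabular TD(0) procedure can be sketched as follows, assuming episodes are given as lists of (s, r, s') transitions with s' = None at the terminal state; this data format and the function name are ours:

```python
from collections import defaultdict

def td0_evaluate(episodes, alpha=0.1, gamma=0.9):
    """Tabular TD(0): after each observed transition (s, r, s'), apply
    V(s) <- V(s) + alpha * [r + gamma * V(s') - V(s)].
    Terminal transitions (s' is None) contribute no bootstrap term."""
    V = defaultdict(float)  # V initialized to 0 for all states
    for episode in episodes:
        for s, r, s_next in episode:
            target = r + (gamma * V[s_next] if s_next is not None else 0.0)
            V[s] += alpha * (target - V[s])
    return V

# three one-step episodes, each ending with reward 1 from state 'A'
V = td0_evaluate([[('A', 1.0, None)]] * 3)
```

Each pass moves V('A') a fraction α of the way toward the target 1, so after three episodes V('A') = 1 − (1 − α)^3 = 0.271.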
Temporal-difference methods (SARSA)
An episode consists of an alternating sequence of states and state–action pairs:
..., s_t, a_t →(r_{t+1}) s_{t+1}, a_{t+1} →(r_{t+2}) s_{t+2}, a_{t+2}, ...
SARSA, which is an on-policy method, updates values using
Q(s_t, a_t) ← Q(s_t, a_t) + α [r_{t+1} + γ Q(s_{t+1}, a_{t+1}) − Q(s_t, a_t)]
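A single SARSA update step can be sketched as follows, with Q stored as a dict keyed by (state, action) pairs (this representation and the function name are ours):

```python
def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    """On-policy TD control: the target uses the action a_next that the
    current (e.g. epsilon-greedy) policy actually takes in s_next."""
    q = Q.get((s, a), 0.0)
    target = r + gamma * Q.get((s_next, a_next), 0.0)
    Q[(s, a)] = q + alpha * (target - q)

Q = {}
# one observed quintuple (s, a, r, s', a') gives the method its name
sarsa_update(Q, 's0', 'a0', 1.0, 's1', 'a1')
```

Because the bootstrap term is the value of the action the policy will actually take, SARSA evaluates the behavior policy itself, exploration included.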
Temporal-difference methods (Q-learning)
An episode consists of an alternating sequence of states and state–action pairs:
..., s_t, a_t →(r_{t+1}) s_{t+1}, a_{t+1} →(r_{t+2}) s_{t+2}, a_{t+2}, ...
Q-learning, which is an off-policy method, updates values using
Q(s_t, a_t) ← Q(s_t, a_t) + α [r_{t+1} + γ max_a Q(s_{t+1}, a) − Q(s_t, a_t)]
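The corresponding Q-learning update, again with Q as a dict keyed by (state, action); the caller supplies the action set so the max can be taken (representation and names ours):

```python
def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """Off-policy TD control: the target maximizes over all next actions,
    regardless of which action the behavior policy actually takes next."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    q = Q.get((s, a), 0.0)
    Q[(s, a)] = q + alpha * (r + gamma * best_next - q)

# bootstrap from the best known value in s1 (here Q(s1, a1) = 1.0)
Q = {('s1', 'a1'): 1.0}
q_learning_update(Q, 's0', 'a0', 0.0, 's1', ['a1', 'a2'])
```

The only difference from SARSA is the max in the target, which is what lets Q-learning learn about the greedy policy while following an exploratory one.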
Reading
Read chapters 1-6 of the following book:
Richard S. Sutton and Andrew G. Barto, Reinforcement Learning: An Introduction, MIT Press, 1998.