The Application of Markov Chain in the Realization of AI of Draughts


The Application of Markov Chain in the Realization of AI of Draughts

Junchao CHEN
Master 1 Model Statistic
31/3/2017

Abstract

Artificial intelligence has achieved great success in the field of chess. This paper is devoted to the application of the Markov decision process model to the realization of artificial intelligence for draughts. In order to improve the intelligent effect and the calculation speed, I used some knowledge of reinforcement learning. In theory, the size of the checker board that I studied is p × q, with r empty lines in the middle, where p, q, r are even numbers. In the code, the values of p, q, r are decided by the player; in order to reduce the computational complexity, we generally let p = q = 6, r = 2. In this paper, I used the Markov decision process to establish the model of the game. However, we cannot express the transition matrix explicitly, since there are too many states and actions. A back propagation neural network was used to evaluate the state values, and the temporal difference learning method was used to optimize the neural network. Finally, I chose the α-β search method to decide the actions. In Chapter 5, I evaluate the model with different parameters.

Keywords: draughts; Markov Decision Process; Back Propagation Neural Network; Temporal Difference Learning; Minimax search method; artificial intelligence

Contents

Abstract
1 Introduction
  1.1 Preface
  1.2 Structure of the paper
  1.3 Introduction of draughts
    1.3.1 Rules
    1.3.2 Computer complexity
2 Fundamental knowledge
  2.1 Fundamentals of the discrete-time Markov chains
  2.2 Basic model of the discrete Markov decision process
    2.2.1 The simple decision model in which the actions only rely on the state they are in
    2.2.2 The construction of the optimal Markov policy under time-bounded total reward
  2.3 Back propagation neural network
  2.4 Temporal difference learning method
  2.5 α-β minimax search method
    2.5.1 Minimax search method
    2.5.2 α-β search method
  2.6 Summary of this chapter
3 Establish the MDP model for draughts
  3.1 State space
  3.2 Action space
  3.3 Transition matrix, discount factor, reward function and state value
  3.4 Summary of this chapter
4 Algorithm of realizing the AI of draughts
  4.1 Algorithm of BPNN
  4.2 Algorithm of TD learning
  4.3 Algorithm of the α-β minimax method
  4.4 Summary of this chapter
5 Analysis of the effect of the AI
  5.1 The speed of computer computing
    5.1.1 Compare search depth
    5.1.2 Compare size of the checker board
  5.2 The level of the intelligent computer
    5.2.1 Compare search depth
    5.2.2 Compare the number of learning games
    5.2.3 Compare parameter γ
    5.2.4 Compare parameter λ
  5.3 Advantages and disadvantages
  5.4 Summary of this chapter
6 Summary and Looking Ahead
  6.1 Conclusion
  6.2 Future work
Acknowledgements
Bibliography
Appendix A: Other important pseudo codes
Appendix B: The checker board of draughts

Chapter 1 Introduction

1.1 Preface

In the field of artificial intelligence research, the agent-based formalization method provides a unified framework for the modeling, design and implementation of intelligent systems. Nowadays, artificial intelligence has achieved great success, especially in the field of chess. On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov. In March 2016, AlphaGo beat Lee Sedol, becoming the first computer program to defeat a professional Go player without handicap. Draughts, too, has been solved by AI. Draughts is a two-person board game. Its history is longer than that of chess; its ancestor came from the Middle East, and 8×8 draughts is an official event of the first World Mind Sports Games. From 1952 to 1962, Arthur Samuel, an IBM employee, wrote the first checkers program. In July 2007, Canadian computer scientists officially announced that draughts had been solved[1].

Decision theory, represented by the Markov decision process, provides an important theoretical and algorithmic basis for the optimal decision-making of artificial intelligence. The MDP model is a very useful tool for solving checkers. Completely solving a large MDP exactly is usually intractable due to the curse of dimensionality, that is, the size of the state space increases exponentially with the number of state variables[2]. We therefore need the help of some other knowledge.

Artificial neural networks are usually optimized by methods based on mathematical statistics. By means of standard statistical methods we can obtain a large number of local structures of the space that can be expressed by functions; by the Cybenko theorem, a neural network can approximate any continuous function. The BP (back propagation) neural network, also known as the multi-layer feedforward neural network, is a relatively mature artificial neural network method. It uses a non-feedback multi-layer forward network topology, and the BP algorithm is also called the back propagation algorithm. In this paper, we use the BP neural network to approximate the state value function. Reinforcement learning is a field of machine learning, and temporal difference learning is its core idea; we optimize the neural network by temporal difference learning. The minimax search method is one of the most useful methods to search for the best action within a finite depth, and the α-β search method can prune many unnecessary nodes so that we can improve the computation speed.

1.2 Structure of the paper

First, I will give some theoretical knowledge in Chapter 2, including the discrete-time Markov process and discrete Markov decision process theory, neural networks, temporal difference learning and the minimax search method. All of them will be adapted in the model to solve the problem. In Chapter 3, I will give an introduction to how to establish the MDP model for realizing the AI of draughts. A Markov decision process is a 5-tuple (states, actions, transition matrix, reward, discount factor). The difficulty is how to calculate the transition matrix in the actual model and how to decide the reward function. Is there any other method to solve it? In Chapter 4, I will introduce some important algorithms for realizing the draughts AI. In Chapter 5, I will evaluate this model and the degree of artificial intelligence. Chapter 6 is the summary of this paper.

1.3 Introduction of draughts

Draughts is one of the world's oldest and most popular board games. Draughts variants developed on the basis of checkers are popular in many countries; the International Checkers Association has more than 50 member states. Many mathematicians and computer experts have studied programs to solve checkers. In July 2007, the Canadian computer scientist Jonathan Schaeffer officially announced that English checkers had been solved by a program called Chinook. He said the program can find the best way to play, and if both sides play according to the best way, the game will end in a draw. However, this only solves the 8×8 variant of draughts[1].

In this paper, I will try to realize the artificial intelligence of draughts by using the Markov decision process and the Monte Carlo method. The draughts game is developed with Code::Blocks on the Windows 10 platform. In this game, two computers, or one player with one computer, can play the game. In the design process, we use the object-oriented programming method. Because many functions need to access some data, these data are often designed as global variables.

1.3.1 Rules

Draughts is played by two opponents on opposite sides of the checker board. One player has the black pieces; the other has the white pieces. Players alternate turns. All pieces move only on the dark squares.

Move and jump: A man can only move one square diagonally and cannot go backwards. When a black piece and a white piece are adjacent on a diagonal, and it is one player's turn, and the square just beyond the opponent's piece is empty, then that player can jump over the opponent's piece, capturing it and removing it from the board. Before the start of the game, the pieces on both sides of the checker board are men. During the game, when a man moves or jumps to the opponent's back row and stops there, it is crowned a king, but it can only enjoy the rights of a king from the next move.

A king can move a long distance along a diagonal, and a king can go backwards. When a black piece and a white piece are on the same diagonal, and it is one player's turn, and there is an empty square beyond the opponent's piece, then that player can jump over the opponent's piece, capturing it and removing it from the board. A king can jump over several squares.

The rules of capturing: When there is an opportunity to jump, or to jump continuously, you have to jump, regardless of whether it is beneficial to you; this applies especially to the king. After a piece captures an opponent's piece, if it can continue to capture a new enemy piece, it must do so, until no more captures are possible. If there are two routes or two pieces that can capture opponent's pieces, then whether or not it is for your own benefit, you must capture as many pieces as you can.

Game over:
1. If all your pieces are captured by the enemy, you lose.
2. If all your remaining pieces on the board are blocked by the enemy so that no piece can move, you lose.
3. If the game reaches a position from which neither player can possibly win, the result is a draw.

1.3.2 Computer complexity

Draughts is played on a p × q board. It is PSPACE-hard (in computational complexity theory, PSPACE is the set of all decision problems that can be solved by a Turing machine using a polynomial amount of space) to determine whether a specified player has a winning strategy. If a polynomial bound is placed on the number of moves that are allowed between jumps (which is a reasonable generalization of the drawing rule in standard checkers), then the problem is in PSPACE, thus it is PSPACE-complete[3]. However, without this bound, checkers is EXPTIME-complete[4] (in computational complexity theory, the complexity class EXPTIME is the set of all decision problems that have exponential runtime).

Chapter 2 Fundamental knowledge

In this chapter, I will introduce some fundamental knowledge for solving the problem. If we want to realize the artificial intelligence (AI) of draughts through the Markov decision process, we must first have a good understanding of the Markov chain, and we need the help of neural networks and some knowledge of reinforcement learning. We only consider the discrete-time Markov process, since we do not use the continuous-time Markov process in solving the problem of implementing the artificial intelligence of draughts. A Markov process (MP) defined on a discrete state space is also called a Markov chain (MC).

The size of the checker board of traditional draughts is 10×10 or 8×8, and the number of empty lines in the middle is 2. In this paper, the size of the checker board is p × q and the number of empty lines in the middle is r, where p, q, r are chosen by the players, but they should all be even numbers and r < p.

2.1 Fundamentals of the discrete-time Markov chains

A Markov chain is a stochastic process with the Markov property. The term "Markov chain" refers to the sequence of random variables such a process moves through, with the Markov property defining serial dependence only between adjacent periods (as in a "chain"). It can thus be used for describing systems that follow a chain of linked events, where what happens next depends only on the current state of the system[5].

Definition (Markov chain)[6] A Markov chain on a finite or countably infinite state space S is a family of S-valued random variables {X_n : n ≥ 0} with the property that, for all

n ≥ 0 and (i_0, ..., i_n, j) ∈ S^{n+2},

P(X_{n+1} = j | X_0 = i_0, ..., X_n = i_n) = (P)_{i_n j}    (2.1.1)

where P is a matrix all of whose entries are non-negative and each of whose rows sums to 1. Equivalently,

P(X_{n+1} = j | X_0, ..., X_n) = (P)_{X_n j}    (2.1.2)

Remark. It should be clear that (2.1.2) is a mathematically precise expression of the idea that, when a Markov chain jumps, the distribution of where it lands depends only on where it was at the time and not on where it was in the past.

Definition [7] A transition matrix describes a Markov chain X_t over a finite state space S with cardinality n. If the probability of moving from i to j in one step is P(j|i) = P_{i,j}, the transition matrix P is given by using P_{i,j} as the i-th row and j-th column element:

P = ( P_{1,1} ... P_{1,j} ... P_{1,n}
      ...
      P_{i,1} ... P_{i,j} ... P_{i,n}
      ...
      P_{n,1} ... P_{n,j} ... P_{n,n} )

Since the total transition probability from a state i to all other states must be 1,

Σ_{j=1}^{n} P_{i,j} = 1.

Property (Markov property)[6] The Markov property refers to the memoryless property of a stochastic process. Mathematically, if m, n ≥ 1 and F : S^{n+1} → R is either bounded or non-negative, then

E[F(X_m, X_{m+1}, ..., X_{m+n}) | X_0 = i_0, ..., X_m = i_m] = E[F(X_0, X_1, ..., X_n) | X_0 = i_m]    (2.1.3)

A stochastic process has the Markov property if the conditional probability distribution of future states of the process (conditional on both past and present values) depends only on the present state; that is, given the present state, the future does not depend on the past states.
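To make the transition-matrix definition concrete, here is a minimal Python sketch (my own illustration, not part of the thesis code). It stores a small, invented three-state transition matrix as a NumPy array, checks that each row sums to 1, and simulates a few steps of the chain; the probabilities are arbitrary.

import numpy as np

# A made-up 3-state transition matrix: entry P[i, j] is the probability
# of moving from state i to state j in one step.
P = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.0, 0.5, 0.5],
])

# Each row must sum to 1, as required by the definition above.
assert np.allclose(P.sum(axis=1), 1.0)

def simulate(P, start, n_steps, seed=0):
    """Simulate a trajectory X_0, X_1, ..., X_n of the Markov chain."""
    rng = np.random.default_rng(seed)
    states = [start]
    for _ in range(n_steps):
        current = states[-1]
        # The next state depends only on the current state (Markov property).
        states.append(int(rng.choice(len(P), p=P[current])))
    return states

print(simulate(P, start=0, n_steps=10))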

2.2 Basic model of the discrete Markov decision process

Before the Markov decision process, we must know the Markov reward process. In probability theory, a Markov reward process is a Markov process which extends a Markov chain by adding a reward to each state. An additional variable records the reward accumulated up to the current time. Features of interest in the model include the expected reward at a given time and the expected time to accumulate a given reward.

Definition (Markov reward process) A Markov reward process (MRP) is a tuple ⟨S, P, R, γ⟩.
- S is a finite set of states.
- P is a state transition probability matrix, P_{ij} = P[S_{t+1} = j | S_t = i].
- R is a reward function, R_s = E[R_{t+1} | S_t = s].
- γ is a discount factor, γ ∈ [0,1].

Definition (Return) The return G_t is the total discounted reward from time step t:

G_t = R_{t+1} + γR_{t+2} + ... = Σ_{i=0}^{∞} γ^i R_{t+i+1}    (2.2.1)

A Markov decision process (MDP) is a Markov reward process with decisions. It is an environment in which all states are Markov.

Definition (Markov decision process) A Markov decision process (MDP) is a tuple ⟨S, A, P, R, γ⟩.
- S is a finite set of states.
- A is a finite set of actions; A(i) is the action space of state i ∈ S, so A = ∪_{i∈S} A(i).
- P is a state transition probability matrix, P^a_{ij} = P[S_{t+1} = j | S_t = i, A_t = a].
- R is a reward function, R^a_s = E[R_{t+1} | S_t = s, A_t = a].
- γ is a discount factor, γ ∈ [0,1].

2.2.1 The simple decision model in which the actions only rely on the state they are in

Definition (MDP policy) A policy π is a distribution over actions given states at time t:

π(a|s) = P[A_t = a | S_t = s]    (2.2.2)

A policy fully defines the behaviour of an agent. MDP policies depend only on the current state (not the history), and policies are stationary (time-independent): for any t > 0 we have A_t ~ π(·|S_t). In other words, π(·|s) is a mapping from the state space S to the action space A(s). Given an MDP M = ⟨S, A, P, R, γ⟩ and a policy π(·|s) for each s ∈ S, the state and action sequence S_1, a_1, S_2, a_2, ..., S_N, a_N, where a_i ∈ A(S_i), i = 1, 2, ..., N, forms a finite MDP. Let

P^π_{s,s'} = Σ_{a∈A(s)} π(a|s) P^a_{s,s'}    (2.2.3)

R^π_s = Σ_{a∈A(s)} π(a|s) R^a_s    (2.2.4)

Definition (State value function) The state value function v_π(s) of an MDP is the expected return starting from state s and then following the policy π:

v_π(s) = E[G_t | S_t = s]    (2.2.5)

Definition (Action value function) The action value function q_π(s,a) of an MDP is the expected return starting from state s, taking action a ∈ A(s), and then following the policy π:

q_π(s,a) = E[G_t | S_t = s, A_t = a]    (2.2.6)

Theorem 2.2.1 (Bellman Expectation Equation)

v_π(s) = Σ_{a∈A(s)} π(a|s) q_π(s,a) = Σ_{a∈A(s)} π(a|s) ( R^a_s + γ Σ_{s'∈S} P^a_{s,s'} v_π(s') )    (2.2.7)

q_π(s,a) = R^a_s + γ Σ_{s'∈S} P^a_{s,s'} v_π(s') = R^a_s + γ Σ_{s'∈S} P^a_{s,s'} Σ_{a'∈A(s')} π(a'|s') q_π(s',a')    (2.2.8)

The Bellman equation can be expressed concisely using matrices,

v_π = R^π + γ P^π v_π    (2.2.9)

with direct solution

v_π = (I − γ P^π)^{−1} R^π    (2.2.10)
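As a quick illustration of the matrix form (2.2.9)-(2.2.10), the following sketch (my own, not from the thesis; the two-state example is invented) solves v_π = (I − γP^π)^{-1} R^π directly with NumPy.

import numpy as np

# A made-up 2-state example: transition matrix P^pi and reward vector R^pi
# induced by some fixed policy pi.
P_pi = np.array([
    [0.9, 0.1],
    [0.2, 0.8],
])
R_pi = np.array([1.0, -1.0])
gamma = 0.9

# Direct solution of the Bellman expectation equation v = R + gamma * P v,
# i.e. v = (I - gamma * P)^(-1) R.
v_pi = np.linalg.solve(np.eye(len(R_pi)) - gamma * P_pi, R_pi)
print(v_pi)

For the draughts model of Chapter 3 this direct solution is of course not available, because the transition matrix cannot be written down; that is exactly why the thesis turns to function approximation.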

2.2.2 The construction of the optimal Markov policy under time-bounded total reward

Definition The optimal state value function v*(s) is the maximum value function over all policies,

v*(s) = max_π v_π(s)    (2.2.11)

The optimal action value function q*(s,a) is the maximum action value function over all policies,

q*(s,a) = max_π q_π(s,a)    (2.2.12)

The optimal value function specifies the best possible performance in the MDP. An MDP is solved when we know the optimal value q*(s,a): if we know q*(s,a), we immediately have an optimal policy.

Theorem The optimal Markov policy under time-bounded total reward exists among all Markov policies, but it is not necessarily unique. Mathematically, for any Markov decision process there exists an optimal policy π* that is better than or equal to all other policies, v_{π*}(s) ≥ v_π(s) for all π and s. All optimal policies achieve the optimal value function, v_{π*}(s) = v*(s), and all optimal policies achieve the optimal action value function, q_{π*}(s,a) = q*(s,a).

By Theorem 2.2.1, we can get

v*(s) = max_{a∈A(s)} ( R^a_s + γ Σ_{s'∈S} P^a_{ss'} v*(s') )    (2.2.13)

q*(s,a) = R^a_s + γ Σ_{s'∈S} P^a_{ss'} max_{a'∈A(s')} q*(s',a')    (2.2.14)

In practical problems, sometimes even though the optimal Markov policy exists, the computation is too large or the computation time too long, so that finding it becomes virtually impossible. In such cases the ε-optimal Markov policies defined below are more practicable than the optimal Markov policy.

Definition We say that a_ε ∈ A(s) is an ε-optimal Markov action if

q(s, a_ε) > q*(s,a) − ε    (2.2.15)

Remark. When the computational complexity of the optimal Markov policy is non-polynomial, the ε-optimal Markov policy can often reduce the computational complexity to polynomial, so the small cost ε can bring great benefits.

Remark. It is not difficult to compute q*(s,a) by a computer algorithm. The problem is that when the action set A is large, the amount of computation will be very large and may even be impossible to complete in the allowed time; in that case we use the ε-optimal Markov policy instead.

2.3 Back propagation neural network

As we know, for a p × q checker board there are 5^{pq/2} states, which is a huge number for a computer when p, q ≥ 6. It is impossible to use a tabular representation for the values of the different states, so it is necessary to use an approximation function, here provided by an artificial neural network. I will now introduce the knowledge about neural networks that we need in this paper.

The neural network that I used is the back propagation neural network (BPNN). Back propagation is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. A neural network consists of three parts: an input layer, hidden layers and an output layer. There is exactly one input layer and one output layer, but there can be many hidden layers; I only introduce the model with one hidden layer, since I use just one hidden layer in this paper. The neural network therefore has three layers in total, each made up of neurons. Suppose there are N_1, N_2, N_3 neurons in the input layer, hidden layer and output layer, respectively. The three-layer back propagation neural network structure is shown in Figure 2.1. X = (x_1, x_2, ..., x_{N_1}) is the value of the input layer, Y = (y_1, y_2, ..., y_{N_2}) is the value of the hidden layer, and Z = (z_1, z_2, ..., z_{N_3}) is the value of the output layer. w_{ij}, i = 1, 2, ..., N_2, j = 1, 2, ..., N_1 are the weights between the input layer and the hidden layer. v_{ij}, i = 1, 2, ..., N_3, j = 1, 2, ..., N_2 are the weights between the hidden layer and the output layer. T = (T_1, T_2, ..., T_{N_3}) is the expected value. In other words, (X, T) is a sample.

Figure 2.1: Three-layer BPNN

Remark. (1) When the number of hidden layer neurons is too small, the network cannot learn well: the number of training iterations will be larger and the training accuracy will not be high. (2) When the number of hidden layer neurons is too large, the network becomes more powerful and the accuracy higher, but the number of training iterations is also large and over-fitting may appear.

The relations between X and Y, and between Y and Z, are[8]

y_i = f( Σ_{j=1}^{N_1} x_j w_{ij} ),  i = 1, 2, ..., N_2    (2.3.1)

z_i = f( Σ_{j=1}^{N_2} y_j v_{ij} ),  i = 1, 2, ..., N_3    (2.3.2)

where f(·) is a transfer function. The transfer function adopted by the BPNN is a nonlinear transformation called the sigmoid function. It is characterized by the fact that the function

itself and its derivatives are continuous, which is very convenient to work with.

Figure 2.2: Unipolar S-type function curve
Figure 2.3: Bipolar S-type function curve

There are two sigmoid functions[8]. The unipolar S-type function curve is shown in Figure 2.2 and the bipolar S-type function curve in Figure 2.3:

f_1(x) = 1 / (1 + e^{−x})    (2.3.3)

f_2(x) = (e^x − e^{−x}) / (e^x + e^{−x})    (2.3.4)

The range of f_1(x) is (0,1) and the range of f_2(x) is (−1,1). It is easy to verify that f_1'(x) = f_1(x)(1 − f_1(x)) and f_2'(x) = 1 − f_2(x)^2. Here I only discuss the bipolar S-type function, because I chose to use it in the model; I will give the reason later.

Definition [8] When the value of the output layer is not equal to the expected value, there is

an output error E, defined as follows:

E = (1/2)(T − Z)² = (1/2) Σ_{k=1}^{N_3} (T_k − z_k)²
  = (1/2) Σ_{k=1}^{N_3} ( T_k − f_2( Σ_{j=1}^{N_2} v_{kj} y_j ) )²
  = (1/2) Σ_{k=1}^{N_3} ( T_k − f_2( Σ_{j=1}^{N_2} v_{kj} f_2( Σ_{i=1}^{N_1} w_{ji} x_i ) ) )²    (2.3.5)

From formula (2.3.5) we can see that the error E is a function of the weights w_{ij} and v_{ij}. In order to reduce the error E, we adjust the weights in proportion to the negative error gradient, that is,

Δw_{ij} = −α ∂E/∂w_{ij} = α (1 − y_i²) x_j Σ_{k=1}^{N_3} (T_k − z_k)(1 − z_k²) v_{ki},  i = 1, ..., N_2, j = 1, ..., N_1

Δv_{kj} = −α ∂E/∂v_{kj} = α (T_k − z_k)(1 − z_k²) y_j,  k = 1, ..., N_3, j = 1, ..., N_2    (2.3.6)

where α is the learning rate, α ∈ (0,1). Large learning rates can lead to system instability, but small learning rates lead to convergence that is too slow and require longer training times. For more complex networks, different learning rates may be required at different locations on the error surface. In order to reduce the number of training iterations and the time spent searching for a suitable learning rate, a more appropriate approach is to use a changing, adaptive learning rate that gives the network different learning rates at different stages. After that, we obtain the new weights:

w_{ij} = w_{ij} + Δw_{ij},  i = 1, ..., N_2, j = 1, ..., N_1
v_{kj} = v_{kj} + Δv_{kj},  k = 1, ..., N_3, j = 1, ..., N_2    (2.3.7)

Theory has proved that a network with biases and at least one S-type hidden layer plus a linear output layer can approximate any rational function; increasing the number of layers can further reduce the error and improve the accuracy, but it also makes the network more complicated. An improvement of the training accuracy can also be achieved by using one hidden layer and increasing the number of neurons, which is much simpler in structure than adding network layers.
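To make equations (2.3.5)-(2.3.7) concrete, here is a small NumPy sketch of one weight update for a single sample (my own illustration, not the thesis code; the layer sizes and data are arbitrary). It uses the bipolar sigmoid f_2(x) = tanh(x), whose derivative 1 − f_2(x)² appears in the update.

import numpy as np

def f2(x):
    # Bipolar S-type (tanh) transfer function, equation (2.3.4).
    return np.tanh(x)

def bp_update(x, t, w, v, alpha=0.1):
    """One back propagation step for a three-layer network.

    x: input vector (N1,), t: target vector (N3,),
    w: hidden weights (N2, N1), v: output weights (N3, N2).
    """
    y = f2(w @ x)                       # hidden layer, equation (2.3.1)
    z = f2(v @ y)                       # output layer, equation (2.3.2)
    E = 0.5 * np.sum((t - z) ** 2)      # output error, equation (2.3.5)

    delta_out = (t - z) * (1 - z ** 2)              # (T_k - z_k)(1 - z_k^2)
    dv = alpha * np.outer(delta_out, y)             # Delta v_{kj}, equation (2.3.6)
    dw = alpha * np.outer((1 - y ** 2) * (v.T @ delta_out), x)  # Delta w_{ij}
    return w + dw, v + dv, E                        # new weights, equation (2.3.7)

# Arbitrary example sizes: N1 = 4 inputs, N2 = 3 hidden, N3 = 1 output.
rng = np.random.default_rng(0)
w = rng.uniform(-0.05, 0.05, size=(3, 4))
v = rng.uniform(-0.05, 0.05, size=(1, 3))
w, v, E = bp_update(rng.standard_normal(4), np.array([0.5]), w, v)
print(E)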

2.4 Temporal difference learning method

Temporal difference (TD) learning is a reinforcement learning method and one of the most used approaches for policy evaluation. To derive optimal control, policies have to be evaluated, and this task requires value function approximation; at this point TD methods find application. TD methods are called bootstrapping methods, as they do not learn from the difference to the final outcome but from the difference between successive update steps. Instead of a single update, TD methods calculate T − 1 updates for an episode of T time steps[9]. TD learning is particularly well suited to game playing because, instead of forming pairs between the actual outcome and each state encountered in the game, it updates its prediction at each time step towards the prediction at the next time step[10].

TD learning aims to obtain an approximation function V_θ(S) as close to the state value function V(S) as possible by minimizing the mean squared error (MSE), defined as

MSE = (1/n) Σ_{i=1}^{n} ( V_θ(S_i) − V(S_i) )²    (2.4.1)

As V(S) is unknown, it is estimated by applying the Bellman equation to the current approximation:

V(S_t) ≈ E[ R_{t+1} + γ V_θ(S_{t+1}) ]    (2.4.2)

The application of the Bellman equation is the core idea of temporal difference learning and allows us to calculate the error in equation (2.4.1). However, analytic computation of the minimum of the error is not possible for systems with huge state spaces. Instead, a local minimum is searched for numerically by stochastic gradient descent (SGD). The method calculates new weights θ by following an approximation of the gradient of the error function[9]:

θ_{t+1} = θ_t + α ( V(S_t) − V_{θ_t}(S_t) ) ∇_θ V_{θ_t}(S_t)    (2.4.3)

where α is a learning rate factor, introduced before, which is used to adjust the step size of the SGD method and prevent overshooting. Applying the value function approximation (2.4.2) to equation (2.4.3) requires calculating the gradient.

To reduce the number of computations needed, the gradient is approximated by φ(S_t). The update function of the TD learning method can be written as:

V_{θ_{t+1}}(S_t) = V_{θ_t}(S_t) + α δ_t e_t
δ_t = R_{t+1} + γ V_{θ_t}(S_{t+1}) − V_{θ_t}(S_t)    (2.4.4)
e_t = φ(S_t)

where δ_t denotes the temporal difference and e_t denotes the approximation of the gradient ∇_θ V_{θ_t}(S_t). The eligibility trace e_t allows the method to carry rewards backward over the sampled trajectory without the need to store the trajectory itself. The reach of this effect depends on the factor λ ∈ [0,1], which determines the degree to which the changes are propagated. The value of λ is so important on real tasks that the algorithm is named TD(λ)[9]. One kind of attenuation is[11]

e_t(S) = γλ e_{t−1}(S) + 1   if S = S_t
e_t(S) = γλ e_{t−1}(S)       if S ≠ S_t

When λ = 1 the method is the same as Monte Carlo learning, while λ = 0 results in a one-step lookahead.

2.5 α-β minimax search method

2.5.1 Minimax search method

The minimax search method is generally used in game search, for games such as Go, backgammon and chess, where there are three possible outcomes: victory, defeat and draw. If we searched by brute force for the final result of the game, the depth of the search tree would be too large for the machine to handle, so in general we fix a search depth and perform a depth-first search within this depth.

We assume that the two players are Black and White, and suppose we are Black. We always want to choose the action that strives for the greatest benefit at each step; this is called the Max process. We do not know the level of White, so we suppose that he also strives for the greatest benefit at each step, which means he strives to give us the smallest benefit

at each step; this is called the Min process. So the game is actually the Max process and the Min process alternating until the game is over or the search depth is reached. In the search tree, a node where Black moves is a max node, and a node where White moves is a min node.

Example 2.5.1 Figure 2.4 is a minimax search tree for the black player. The root node is a max node and the depth of this search tree is 3. The value of each leaf node (depth 3) was calculated

Figure 2.4: Minimax tree for black

by the state value function, obtaining the values shown. At depth 2 the nodes are max nodes (the max process): each equals the largest value of its child nodes; for example, the first node equals max(1, −1), that is 1. At depth 1 the nodes are min nodes; for example, the second node equals min(5, −1), that is −1. The root node is a max node, equal to max(1, −1) = 1. So if we are in the state of the root node, the action should be to move to the first node of the second layer, and we will reach a state whose value is at least 1. The white player, however, will try to find a state with the smallest value. The minimax search tree for the white player is shown in Figure 2.5; he will reach a state whose value is at most 3.

Figure 2.5: Minimax tree for white

2.5.2 α-β search method

For the general minimax search method, even if there are only a few choices at each step, the number of searched positions grows exponentially with the search depth, which is difficult for a common computer to handle. Based on the general minimax search method, the α-β search method can prune many unnecessary nodes and so save a lot of computation time.

In Figure 2.5 of Example 2.5.1, we first get that the value of the first node of the second layer is 3. Later, we get that the value of the fourth node of the third layer is 5. Since the second layer is a max process, the value of the second node of the second layer is not smaller than 5. The value of the root node then equals min(3, 5) = 3, so it is not necessary to calculate the value of the fifth node of the third layer.

When the algorithm is implemented, two values are introduced for each node: α and β, which represent the lower and upper limits of the node's estimate, respectively. They constitute an interval, meaning that the state value belongs to (α, β). At the root node, α and β are defined by ourselves; we can let them be −∞ and +∞, respectively. The α and β values of other nodes are inherited from their parent nodes during the depth-first search. After we get the value of the fourth node of the third layer, if it is larger than 3, we do not need to search the fifth node. In fact, 3 is the upper limit of the root node after updating, β = 3, and 5 is the lower limit of the second node of the second layer, its α. When α ≥ β, we do not need to search the other child

nodes of this node. The values α and β are updated during the depth-first search: for a max process, if the return value of a child node is larger than α, we set α to this value; for a min process, if the return value of a child node is smaller than β, we set β to this value.

Remark. The α-β search method can prune many unnecessary nodes, but how much it prunes depends on the order in which the nodes appear.

2.6 Summary of this chapter

This chapter introduced in detail the knowledge that the paper needs. A Markov chain requires a memoryless property: the probability distribution of the next state is determined only by the current state. The MDP provides the basic theoretical model of action decisions in uncertain environments. The Bellman expectation equation expresses the value of a decision problem at a certain point in time in terms of the payoff from some initial choices and the value of the remaining decision problem that results from those initial choices. The BPNN is a common method of training artificial neural networks, since it can be trained to approximate any function; this chapter gave the detailed principle of how the three-layer neural network runs with the bipolar S-type transfer function. TD(λ) learning is one of the most important reinforcement learning methods, combining Monte Carlo learning and dynamic programming: instead of learning once per game, we can learn from each step of the game with TD(λ) learning. This chapter introduced in theory how the expected state value is updated by the TD(λ) learning method. The minimax search method chooses the best action for the player by traversing all the possibilities within a finite depth, but the depth is limited by the available time; the α-β search method is an effective way to save search time. This chapter explained how these two methods search.

Chapter 3 Establish the MDP model for draughts

In Chapter 1, I gave an introduction to draughts. Using the theoretical knowledge of Chapter 2, we can establish a Markov decision process model for draughts. As we know, an MDP model must include the state space S, the action space A, the transition matrix P and the reward function f, and we need a discount factor γ ∈ [0,1], since immediate rewards are more important than delayed rewards. Let us decide the structure of the model in this chapter. The arrangement of pieces on the checker board is a state of the model; moving or jumping a piece is an action of the model; the transition matrix is decided by the probabilities of the starting point and end point of the selected pieces, but there are too many states to write down a transition matrix. The discount factor is decided by comparing the intelligent effect of the computer. There is a reward only when the game is over.

3.1 State space

In the checker board there are p × q squares, and each square holds a black king, a black man, a white king, a white man, or is empty. Let the checker board be a matrix of size p × q, and let

S_{i,j} = 1     if there is a black king in position (i, j)
S_{i,j} = 0.5   if there is a black man in position (i, j)
S_{i,j} = 0     if there is no piece in position (i, j)
S_{i,j} = −0.5  if there is a white man in position (i, j)
S_{i,j} = −1    if there is a white king in position (i, j)
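To make this encoding concrete, here is a small Python sketch (my own illustration, not code from the thesis) that builds the p × q state matrix of the initial position with r empty middle rows. It uses the convention, noted just below, that squares with i + j even never carry a piece, and it arbitrarily assumes that the black men start in the top rows.

import numpy as np

def initial_state(p=6, q=6, r=2):
    """Initial p x q state matrix with r empty middle rows.

    Encoding as in Section 3.1: black man = 0.5, white man = -0.5,
    kings = +/-1, empty = 0. Which colour starts at the top is an
    assumption (black here); pieces sit only on squares with i + j odd.
    """
    S = np.zeros((p, q))
    rows_per_side = (p - r) // 2
    for i in range(p):
        for j in range(q):
            if (i + j) % 2 == 1:          # dark squares only
                if i < rows_per_side:
                    S[i, j] = 0.5         # black man (assumed on top)
                elif i >= p - rows_per_side:
                    S[i, j] = -0.5        # white man
    return S

print(initial_state())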

As we know, when i + j is an even number there is never a piece in position (i, j) of the checker board, so S_{i,j} = 0 whenever i + j is divisible by 2. The initial state matrix S_0 corresponds to the standard starting arrangement of the pieces.

The set of all possible scenarios on the checker board, that is, all possible matrices S, forms a state space S. When one of the players moves or jumps a piece, the state matrix changes from S_i to S_{i+1}. We know that S_{i+1} depends only on S_i and the piece that the player moved, which means that there is no relation between S_{i+1} and S_0, S_1, ..., S_{i−1}. So {S_n : n ≥ 0} is a discrete-time Markov process, with state space

S = { S_k : k = 0, 1, 2, ..., n }, where each S_k is a p × q matrix as above and S_n is the matrix of the final scenario of the checker board.    (3.1.1)

This is a finite Markov chain, since the game ends in a limited time.

3.2 Action space

Now the state of the checker board is S and it is your turn to move a piece. You must choose the route of the pieces that captures the largest number of the opponent's pieces. Since different kinds of pieces have different ways to move, the actions for each piece are different, and the positions of the pieces also influence the actions. Let us discuss the actions in each situation.

Suppose your pieces are white, and you move or jump the piece at position (i_1, j_1) to position (i_2, j_2). If your choice is allowed, we know that |i_1 − i_2| = |j_1 − j_2|, since a piece can only move along a diagonal.

If the piece is a man, |i_1 − i_2| = |j_1 − j_2| = 1 or 2. There are 2 possible actions when the piece moves, since it cannot go backwards, and 4 possible actions when it jumps. So there are 6 actions (Figure 3.1) for this piece in total.

Figure 3.1: The actions of men

If the piece is a king, |j_1 − j_2| ≥ 1. There are 4 directions in which the piece can move, since a king can go backwards, and 4 directions in which it can jump; there may be several actions in each direction, since a king can jump more than 2 squares. So there are 8 directions (Figure 3.2) for this piece in total. The number of actions, however, is decided by the position of the piece. We also need to notice that the actions are not the same for the two players. By the analysis above, the actions are determined as a function of the player (pl), the kind of piece (pi), the start position (i, j) (po), the direction (dir) and the distance that the piece moves or jumps (dis):

a(pl, po, pi, dir, dis)    (3.2.1)

where

pl = 1   if the player's pieces are white
pl = −1  if the player's pieces are black    (3.2.2)

Figure 3.2: The actions of kings

So we get the action space from

pi = k   if the piece is a king
pi = m   if the piece is a man    (3.2.3)

dir = 1  if the piece moves to the southeast
dir = 2  if the piece moves to the southwest
dir = 3  if the piece moves to the northwest
dir = 4  if the piece moves to the northeast
dir = 5  if the piece jumps to the southeast
dir = 6  if the piece jumps to the southwest
dir = 7  if the piece jumps to the northwest
dir = 8  if the piece jumps to the northeast    (3.2.4)

dis = 1                               if dir = 1, 2, 3, 4
dis = 2                               if pi = m and dir = 5, 6, 7, 8
dis ∈ {2, ..., min{p − i, q − j}}     if pi = k and dir = 5, 6, 7, 8    (3.2.5)

A = { a(pl, po, pi, dir, dis) : pl, po, pi, dir, dis as defined above }    (3.2.6)

For each fixed state i ∈ S, that is, for each step of the game, the player and the pieces in each square are fixed. So the actions are decided only by po, dir, dis; therefore the action space for state i is

A(i) = { a(po, dir, dis) : po, dir, dis as defined above }    (3.2.7)

And the overall action space is

A = ∪_{i=0}^{n} A(i)    (3.2.8)

3.3 Transition matrix, discount factor, reward function and state value

Transition matrix. Since the number of states is too large to write the transition matrix explicitly, the amount of storage required to create a lookup table is too large, and we do not know the probabilities in practice, we cannot solve the problem by the traditional method. However, we can solve the problem with a neural network; this will be explained in Chapter 4.

Discount factor. We cannot know the exact value of γ now, but we can let it equal 0.6, 0.7, 0.8, 0.9 respectively and compare the effect of the AI in practice. But how to compare? We can train the same number of times for each γ, let each of them play against the random computer 50 times, and compare the winning rates.

Reward function. We set the reward function to be

R(S, a) = 1   if the game is over and the white player wins
R(S, a) = 0   if the game is not over, or the game is a draw
R(S, a) = −1  if the game is over and the black player wins    (3.3.1)

State value. In the beginning, all state values are 0. Further, the states that correspond to final boards are considered to have a value of 0; the values of the rest of the states are given by the output of the function approximation obtained by the BPNN[12].
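As a small illustration, here is a sketch (my own, not from the thesis) of how the action tuple a(pl, po, pi, dir, dis) of Section 3.2 and the reward function (3.3.1) could be represented in code; the field names and the game_over/winner arguments are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    """Action a(pl, po, pi, dir, dis) from Section 3.2."""
    pl: int          # 1 for white, -1 for black
    po: tuple        # start position (i, j)
    pi: str          # "k" for king, "m" for man
    dir: int         # 1-4 move, 5-8 jump (SE, SW, NW, NE)
    dis: int         # distance moved or jumped

def reward(game_over: bool, winner: int) -> int:
    """Reward function (3.3.1); winner is 1 (white), -1 (black) or 0 (draw)."""
    if not game_over or winner == 0:
        return 0
    return 1 if winner == 1 else -1

# Example: a white man at (2, 3) jumping two squares to the southeast.
a = Action(pl=1, po=(2, 3), pi="m", dir=5, dis=2)
print(a, reward(game_over=True, winner=1))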

3.4 Summary of this chapter

In this chapter, based on the actual rules of draughts, we constructed a Markov decision process model that includes the state space, the action space, the transition matrix, the reward function and the discount factor. The checker board has too many configurations to enumerate each state; a state matrix effectively represents each configuration of the board, which constructs the state space. The rules of draughts are relatively complex, so the action space is also very large; however, this chapter successfully identified the determining factors of the actions. We cannot express the transition matrix of the model exactly. The discount factor should be chosen by comparison experiments. The reward is given only to the last state of each game.

Chapter 4 Algorithm of realizing the AI of draughts

After establishing the model in Chapter 3, we now need to write the code of the game. I do not introduce how to write the code for the rules of the game, though it is not easy, because the most important point is how to achieve artificial intelligence. Therefore, I just introduce the three key algorithms of the code: BPNN, TD learning and the α-β search method.

4.1 Algorithm of BPNN

From the introduction of draughts in Chapter 1, we know that we win the game when the opponent has no piece left or all his pieces cannot move. In the game a king is much more valuable than a man, and the state value also depends on the positions of the pieces. So I use a vector of length pq + 2 to store the features of the checker board. Let it be

X[pq+2] = (S_{11}, S_{12}, ..., S_{1q}, S_{21}, ..., S_{2q}, ..., S_{pq}, move(white), move(black))    (4.1.1)

where move(·) is a function that counts how many pieces of one player can move. So the number of nodes of the input layer of the neural network is pq + 2. The number of nodes of the hidden layer is decided by ourselves; we usually let it be half the number of input nodes, that is (pq+2)/2, which is an integer since p and q are even numbers. There is only one node in the output layer. We expect the output to be 1 when the white pieces win the game, −1 when the white pieces lose, and 0 when it is a draw.
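A minimal sketch of this feature extraction (my own illustration; counting legal moves requires the full rules of Section 1.3, so the move counts are supplied by the caller here):

import numpy as np

def features(S, move_white, move_black):
    """Feature vector X of length p*q + 2, equation (4.1.1): the flattened
    board followed by move(white) and move(black)."""
    return np.concatenate([S.ravel(), [move_white, move_black]])

# Example on an empty 6 x 6 board with made-up move counts.
S = np.zeros((6, 6))
print(features(S, move_white=7, move_black=7).shape)   # (38,) = 6*6 + 2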

We choose the bipolar S-type function as the network transfer function, because we want the range of the output to be (−1, 1), and the output error is defined as in (2.3.5). The learning rate changes with the number of learning steps: as the number of learning steps increases, the learning rate gets smaller and smaller,

learnrate = 1 / (1 + learntimes)    (4.1.2)

When training the neural network for the first time, we use pseudo-random numbers to generate two arrays of sizes (pq+2) × (pq+2)/2 and (pq+2)/2, each element belonging to (−0.05, 0.05). These two arrays are the initial values of the weights w and v, where w holds the weights between the input layer and the hidden layer and v the weights between the hidden layer and the output layer. They are updated by formulas (2.3.6) and (2.3.7) at each learning step. The specific steps of the learning process are as follows:

(1) Give random values to the weights w and v.
(2) For the state S, extract the features into the vector X[pq+2], and obtain the expected value of the state S.
(3) Calculate the values of the nodes of the hidden layer with the transfer function and the weights w, using formula (2.3.1).
(4) Calculate the value of the output layer with the transfer function and the weights v, using formula (2.3.2).
(5) Calculate the error from the expected value and the value of the output layer.
(6) Update the weights with formulas (2.3.6) and (2.3.7).
(7) Choose another state and go back to step (2).
(8) Learn until the error is smaller than ε; then learning is finished.

The pseudocode of BPNN is given as Algorithm 1.

Algorithm 1 Back propagation neural network
input: transfer function f(x), feature function g(S), states and the corresponding expected values
output: weights
1: Initialize the weight arrays w and v.
2: repeat
3:   Input a state S;
4:   X[pq+2] = g(S);                          ▷ the vector of the input layer
5:   for each i ∈ [1, N_2] do                 ▷ compute the values of the hidden layer nodes
6:     y_i = Σ_{j=1}^{N_1} x_j w_{ij};
7:     y_i = f(y_i);
8:   end for
9:   z = Σ_{j=1}^{N_2} y_j v_j;
10:  z = f(z);                                ▷ the value of the output layer
11:  E = (1/2)(T − z)²;                       ▷ compute the error
12:  learnrate = 1/(1 + learntimes);          ▷ update the learning rate
13:  for each j ∈ [1, (pq+2)/2] do            ▷ update the weights v
14:    Δv_j = α(T − z)(1 − z²) y_j;
15:    v_j = v_j + Δv_j;
16:  end for
17:  for each i ∈ [1, (pq+2)/2] do            ▷ update the weights w
18:    for each j ∈ [1, pq+2] do
19:      Δw_{ij} = α(1 − y_i²) x_j (T − z)(1 − z²) v_i;
20:      w_{ij} = w_{ij} + Δw_{ij};
21:    end for
22:  end for
23: until E < ε
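For readers who prefer running code, here is a rough Python counterpart of the training loop in Algorithm 1 (my own sketch, not the thesis implementation): weights are initialised in (−0.05, 0.05), the adaptive learning rate (4.1.2) is applied, and an update function such as the bp_update sketch from Section 2.3 is passed in and called until the error drops below ε.

import numpy as np

def train_bpnn(samples, update_step, n_input, eps=1e-3, max_epochs=10_000):
    """Training loop in the spirit of Algorithm 1 (a sketch, not the thesis code).

    samples: list of (X, T) pairs, X of length n_input = p*q + 2 and T an
    array of length 1 (the expected value).
    update_step(X, T, w, v, alpha) must return (new_w, new_v, error).
    """
    n_hidden = n_input // 2                          # (pq + 2) / 2 hidden nodes
    rng = np.random.default_rng(0)
    # Initial weights: every element drawn from (-0.05, 0.05).
    w = rng.uniform(-0.05, 0.05, size=(n_hidden, n_input))
    v = rng.uniform(-0.05, 0.05, size=(1, n_hidden))

    learn_times = 0
    for _ in range(max_epochs):
        worst_error = 0.0
        for X, T in samples:
            alpha = 1.0 / (1.0 + learn_times)        # adaptive rate, formula (4.1.2)
            w, v, E = update_step(X, T, w, v, alpha)
            worst_error = max(worst_error, E)
        learn_times += 1                             # assumed to grow once per pass
        if worst_error < eps:                        # stop when the error is small
            break
    return w, v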

4.2 Algorithm of TD learning

In the algorithm of TD learning, we also choose the learning rate to be given by formula (4.1.2), and the discount factor has been introduced in Section 3.3. After the game finishes, there is a Markov decision chain {(S_i, a_i) : i = 0, 1, 2, ..., n}, whose rewards are R_1 = R_2 = ... = R_{n−1} = 0, with R_n decided by the reward function (3.3.1). Now we can compute the expected state value of every state by formula (2.4.4). Suppose v̂_i is the expected state value of state S_i. We can train the neural network by updating the weights. But how do we update the weights? We let the output value of the transfer function for each state equal the corresponding expected state value, that is:

f(S_n) = v̂_n
f(S_{n−1}) = v̂_{n−1}
...    (4.2.1)
f(S_0) = v̂_0

Remark. The equation set must be learned from state S_n down to state S_0, because we can only calculate the expected state values from state S_n back to state S_0.

The specific steps are as follows:
(1) Select the states one by one in descending order.
(2) For the state S_i, compute the learning rate.
(3) Compute the approximation of the gradient e_t.
(4) Compute f_2(S_{i+1}) and f_2(S_i).
(5) For the state S_i, compute v̂_i by formula (2.4.4).
(6) Update the weights by Algorithm 1.

The pseudocode of TD learning is given as Algorithm 2, after the short sketch below.
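The following Python sketch (my own illustration, not the thesis code) shows the backward pass that Algorithm 2 performs: starting from the final state, it forms the TD(λ) target of equation (2.4.4) for each state; net_value is a hypothetical stand-in for the BPNN forward pass f_2, and the resulting (state, target) pairs would then be fed to Algorithm 1.

def td_lambda_targets(states, final_reward, net_value, gamma=0.8, lam=0.7,
                      learn_times=0):
    """Compute TD(lambda) training targets for one finished game.

    states: S_0, ..., S_n; final_reward: R_n from equation (3.3.1);
    net_value(S) is a hypothetical BPNN forward pass f_2(S).
    Returns (state, target) pairs in the order they would be trained.
    """
    # Terminal state: trained toward the game result here (an assumption).
    targets = [(states[-1], float(final_reward))]
    e = 1.0                                         # eligibility factor
    for i in range(len(states) - 2, -1, -1):        # from S_{n-1} down to S_0
        alpha = 1.0 / (1.0 + learn_times)           # learning rate, formula (4.1.2)
        e *= gamma * lam                            # decay of the eligibility trace
        reward = final_reward if i == len(states) - 2 else 0.0
        v_next = net_value(states[i + 1])
        v_i = net_value(states[i])
        # TD(lambda) update of the expected value, equation (2.4.4)
        target = v_i + alpha * (reward + gamma * v_next - v_i) * e
        targets.append((states[i], target))
    return targets

# Example with a dummy value function that always returns 0.
print(td_lambda_targets(["s0", "s1", "s2"], final_reward=1, net_value=lambda s: 0.0)[:2])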

Algorithm 2 Algorithm of TD learning
input: states S_i, i = 0, 1, ..., n, and the result of the game
1: i ← n;
2: e ← 1;
3: Train state S_n by BPNN;
4: i ← i − 1;
5: while i ≥ 0 do
6:   learnrate = 1/(1 + learntimes);
7:   e ← γλe;
8:   v̂_{i+1} ← f_2(S_{i+1});
9:   v̂_i ← f_2(S_i);
10:  v̂_i ← v̂_i + learnrate · (R_{i+1} + γ v̂_{i+1} − v̂_i) · e;
11:  Train state S_i by BPNN;
12: end while

4.3 Algorithm of the α-β minimax method

Unlike other game models, in draughts one player can jump many steps continuously, so the Max process and the Min process do not always strictly alternate. This is not a problem: since each player must always choose the action that captures the most opponent pieces, the nodes of every layer of the minimax search tree belong to the same player. Although the number of steps a player can jump continuously is sometimes larger than the search depth, we can still search to the search depth and find the best action; after we execute this action, we search again. When we jump to the last step, we search again, so we still search the states after the opponent has moved.

Example In Figure 4.1, if the search depth is 4, we get that the value of the root is 6. But if the depth is 2, we first compute the values of the nodes of the third layer and choose the best action state. Then we arrive at the second layer of the search tree, compute the values of the nodes of the fourth layer and choose the best action state. Finally we arrive at the third layer of the search tree, compute the values of the nodes of the fifth layer and choose the best action state.

Figure 4.1: Minimax search tree

Remark. In our model the value of every state belongs to (−1, 1); the values here are just an example to explain the idea.

Since the α-β search method is just the minimax search method with pruning, I only give the pseudocode of the α-β search method; see Algorithms 3 and 4.

Algorithm 3 Algorithm of the α-β search method
input: state S, α, β, search depth depth, player player.
output: the coordinates (end_i, end_j) of the best action
1: Search depth depthlimit;
2: function ALPHABETA(player, state, depth, α, β)
3:   if depth = 0 or gameover then
4:     return Evaluate(board);
5:   else if player = white then
6:     for each position (i, j) that can move or jump do
7:       ALPHABETAIJ(white, state, depth, α, β, i, j);
8:     end for
9:   else
10:    for each position (i, j) that can move or jump do
11:      ALPHABETAIJ(black, state, depth, α, β, i, j);
12:    end for
13:  end if
14: end function
15: function ALPHABETAIJ(player, state, depth, α, β, i, j)
16:   float score;
17:   if depth = 0 or gameover then
18:     return Evaluate(board);
19:   else if player = white then
20:     for each position (x, y) to which piece (i, j) can move or jump do
21:       Move or jump the piece from (i, j) to (x, y);        ▷ get a new state S'
22:       depth ← depth − 1;
23:       if the piece jumped and can continue to jump then
24:         score ← ALPHABETAIJ(white, S', depth, α, β, x, y);
25:       else
26:         score ← ALPHABETA(black, S', depth, α, β);
27:       end if
28:       if score > α then
29:         α ← score;
30:         if depth = depthlimit − 1 then
31:           (end_i, end_j) ← (x, y);

Algorithm 4 Algorithm of the α-β search method (continued)
32:         end if
33:       end if
34:       depth ← depth + 1;
35:       Move or jump the piece from (x, y) back to (i, j);   ▷ go back to the original state
36:     end for
37:     return α;
38:   else                                                     ▷ the player has the black pieces
39:     for each position (x, y) to which piece (i, j) can move or jump do
40:       Move or jump the piece from (i, j) to (x, y);        ▷ get a new state S'
41:       depth ← depth − 1;
42:       if the piece jumped and can continue to jump then
43:         score ← ALPHABETAIJ(black, S', depth, α, β, x, y);
44:       else
45:         score ← ALPHABETA(white, S', depth, α, β);
46:       end if
47:       if score < β then
48:         β ← score;
49:         if depth = depthlimit − 1 then
50:           (end_i, end_j) ← (x, y);
51:         end if
52:       end if
53:       depth ← depth + 1;
54:       Move or jump the piece from (x, y) back to (i, j);   ▷ go back to the original state
55:     end for
56:     return β;
57:   end if
58: end function
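As a compact, runnable counterpart to Algorithms 3 and 4, here is a generic α-β minimax sketch over an abstract game interface (my own illustration; the children and evaluate callables and the toy leaf values are hypothetical, and the draughts-specific multi-jump handling described above is not implemented).

import math

def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    """Generic alpha-beta search.

    children(state) yields (action, next_state) pairs; evaluate(state) is the
    leaf evaluation (in the thesis this would be the BPNN state value).
    Returns (value, best_action).
    """
    succ = list(children(state))
    if depth == 0 or not succ:
        return evaluate(state), None
    best_action = None
    if maximizing:                                   # max process
        value = -math.inf
        for action, nxt in succ:
            score, _ = alphabeta(nxt, depth - 1, alpha, beta, False, children, evaluate)
            if score > value:
                value, best_action = score, action
            alpha = max(alpha, value)
            if alpha >= beta:                        # beta cut-off: prune remaining children
                break
    else:                                            # min process
        value = math.inf
        for action, nxt in succ:
            score, _ = alphabeta(nxt, depth - 1, alpha, beta, True, children, evaluate)
            if score < value:
                value, best_action = score, action
            beta = min(beta, value)
            if alpha >= beta:                        # alpha cut-off
                break
    return value, best_action

# Tiny toy game: states are tuples of leaf values, actions are indices.
def children(state):
    return [(i, v) for i, v in enumerate(state)] if isinstance(state, tuple) else []

def evaluate(state):
    return state if isinstance(state, (int, float)) else 0

tree = ((3, 5), (6, 9))       # made-up two-level tree of leaf values
print(alphabeta(tree, 2, -math.inf, math.inf, True, children, evaluate))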

4.4 Summary of this chapter

This chapter described in detail the processes and algorithms of the BPNN, TD(λ) learning and the minimax search method, combined with the actual situation of draughts. We use a vector of length pq + 2 to store the features of the checker board, and then calculate the output value with the transfer function. The TD learning method is used to estimate the expected state values of the game, and the weights of the neural network are then updated by back propagation. Finally, the weights converge to the optimal solution for evaluating the state values of draughts, so we can use the transfer function to calculate the value of each state. At last, the α-β search method is used to choose the best actions for the AI player.


More information

6 Reinforcement Learning

6 Reinforcement Learning 6 Reinforcement Learning As discussed above, a basic form of supervised learning is function approximation, relating input vectors to output vectors, or, more generally, finding density functions p(y,

More information

Machine Learning. Machine Learning: Jordan Boyd-Graber University of Maryland REINFORCEMENT LEARNING. Slides adapted from Tom Mitchell and Peter Abeel

Machine Learning. Machine Learning: Jordan Boyd-Graber University of Maryland REINFORCEMENT LEARNING. Slides adapted from Tom Mitchell and Peter Abeel Machine Learning Machine Learning: Jordan Boyd-Graber University of Maryland REINFORCEMENT LEARNING Slides adapted from Tom Mitchell and Peter Abeel Machine Learning: Jordan Boyd-Graber UMD Machine Learning

More information

CS 188 Introduction to Fall 2007 Artificial Intelligence Midterm

CS 188 Introduction to Fall 2007 Artificial Intelligence Midterm NAME: SID#: Login: Sec: 1 CS 188 Introduction to Fall 2007 Artificial Intelligence Midterm You have 80 minutes. The exam is closed book, closed notes except a one-page crib sheet, basic calculators only.

More information

Reinforcement Learning

Reinforcement Learning Reinforcement Learning Ron Parr CompSci 7 Department of Computer Science Duke University With thanks to Kris Hauser for some content RL Highlights Everybody likes to learn from experience Use ML techniques

More information

Lecture 25: Learning 4. Victor R. Lesser. CMPSCI 683 Fall 2010

Lecture 25: Learning 4. Victor R. Lesser. CMPSCI 683 Fall 2010 Lecture 25: Learning 4 Victor R. Lesser CMPSCI 683 Fall 2010 Final Exam Information Final EXAM on Th 12/16 at 4:00pm in Lederle Grad Res Ctr Rm A301 2 Hours but obviously you can leave early! Open Book

More information

Reinforcement Learning. Introduction

Reinforcement Learning. Introduction Reinforcement Learning Introduction Reinforcement Learning Agent interacts and learns from a stochastic environment Science of sequential decision making Many faces of reinforcement learning Optimal control

More information

ARTIFICIAL INTELLIGENCE. Reinforcement learning

ARTIFICIAL INTELLIGENCE. Reinforcement learning INFOB2KI 2018-2019 Utrecht University The Netherlands ARTIFICIAL INTELLIGENCE Reinforcement learning Lecturer: Silja Renooij These slides are part of the INFOB2KI Course Notes available from www.cs.uu.nl/docs/vakken/b2ki/schema.html

More information

CSE250A Fall 12: Discussion Week 9

CSE250A Fall 12: Discussion Week 9 CSE250A Fall 12: Discussion Week 9 Aditya Menon (akmenon@ucsd.edu) December 4, 2012 1 Schedule for today Recap of Markov Decision Processes. Examples: slot machines and maze traversal. Planning and learning.

More information

Reinforcement Learning

Reinforcement Learning Reinforcement Learning Temporal Difference Learning Temporal difference learning, TD prediction, Q-learning, elibigility traces. (many slides from Marc Toussaint) Vien Ngo Marc Toussaint University of

More information

15-889e Policy Search: Gradient Methods Emma Brunskill. All slides from David Silver (with EB adding minor modificafons), unless otherwise noted

15-889e Policy Search: Gradient Methods Emma Brunskill. All slides from David Silver (with EB adding minor modificafons), unless otherwise noted 15-889e Policy Search: Gradient Methods Emma Brunskill All slides from David Silver (with EB adding minor modificafons), unless otherwise noted Outline 1 Introduction 2 Finite Difference Policy Gradient

More information

INF 5860 Machine learning for image classification. Lecture 14: Reinforcement learning May 9, 2018

INF 5860 Machine learning for image classification. Lecture 14: Reinforcement learning May 9, 2018 Machine learning for image classification Lecture 14: Reinforcement learning May 9, 2018 Page 3 Outline Motivation Introduction to reinforcement learning (RL) Value function based methods (Q-learning)

More information

Machine Learning I Continuous Reinforcement Learning

Machine Learning I Continuous Reinforcement Learning Machine Learning I Continuous Reinforcement Learning Thomas Rückstieß Technische Universität München January 7/8, 2010 RL Problem Statement (reminder) state s t+1 ENVIRONMENT reward r t+1 new step r t

More information

Reinforcement Learning and NLP

Reinforcement Learning and NLP 1 Reinforcement Learning and NLP Kapil Thadani kapil@cs.columbia.edu RESEARCH Outline 2 Model-free RL Markov decision processes (MDPs) Derivative-free optimization Policy gradients Variance reduction Value

More information

Evolutionary Computation: introduction

Evolutionary Computation: introduction Evolutionary Computation: introduction Dirk Thierens Universiteit Utrecht The Netherlands Dirk Thierens (Universiteit Utrecht) EC Introduction 1 / 42 What? Evolutionary Computation Evolutionary Computation

More information

Reinforcement Learning. Summer 2017 Defining MDPs, Planning

Reinforcement Learning. Summer 2017 Defining MDPs, Planning Reinforcement Learning Summer 2017 Defining MDPs, Planning understandability 0 Slide 10 time You are here Markov Process Where you will go depends only on where you are Markov Process: Information state

More information

Replacing eligibility trace for action-value learning with function approximation

Replacing eligibility trace for action-value learning with function approximation Replacing eligibility trace for action-value learning with function approximation Kary FRÄMLING Helsinki University of Technology PL 5500, FI-02015 TKK - Finland Abstract. The eligibility trace is one

More information

Planning in Markov Decision Processes

Planning in Markov Decision Processes Carnegie Mellon School of Computer Science Deep Reinforcement Learning and Control Planning in Markov Decision Processes Lecture 3, CMU 10703 Katerina Fragkiadaki Markov Decision Process (MDP) A Markov

More information

(Deep) Reinforcement Learning

(Deep) Reinforcement Learning Martin Matyášek Artificial Intelligence Center Czech Technical University in Prague October 27, 2016 Martin Matyášek VPD, 2016 1 / 17 Reinforcement Learning in a picture R. S. Sutton and A. G. Barto 2015

More information

Notes on Reinforcement Learning

Notes on Reinforcement Learning 1 Introduction Notes on Reinforcement Learning Paulo Eduardo Rauber 2014 Reinforcement learning is the study of agents that act in an environment with the goal of maximizing cumulative reward signals.

More information

Today s s Lecture. Applicability of Neural Networks. Back-propagation. Review of Neural Networks. Lecture 20: Learning -4. Markov-Decision Processes

Today s s Lecture. Applicability of Neural Networks. Back-propagation. Review of Neural Networks. Lecture 20: Learning -4. Markov-Decision Processes Today s s Lecture Lecture 20: Learning -4 Review of Neural Networks Markov-Decision Processes Victor Lesser CMPSCI 683 Fall 2004 Reinforcement learning 2 Back-propagation Applicability of Neural Networks

More information

CS 4100 // artificial intelligence. Recap/midterm review!

CS 4100 // artificial intelligence. Recap/midterm review! CS 4100 // artificial intelligence instructor: byron wallace Recap/midterm review! Attribution: many of these slides are modified versions of those distributed with the UC Berkeley CS188 materials Thanks

More information

Lecture 3: Policy Evaluation Without Knowing How the World Works / Model Free Policy Evaluation

Lecture 3: Policy Evaluation Without Knowing How the World Works / Model Free Policy Evaluation Lecture 3: Policy Evaluation Without Knowing How the World Works / Model Free Policy Evaluation CS234: RL Emma Brunskill Winter 2018 Material builds on structure from David SIlver s Lecture 4: Model-Free

More information

CS 7180: Behavioral Modeling and Decisionmaking

CS 7180: Behavioral Modeling and Decisionmaking CS 7180: Behavioral Modeling and Decisionmaking in AI Markov Decision Processes for Complex Decisionmaking Prof. Amy Sliva October 17, 2012 Decisions are nondeterministic In many situations, behavior and

More information

REINFORCE Framework for Stochastic Policy Optimization and its use in Deep Learning

REINFORCE Framework for Stochastic Policy Optimization and its use in Deep Learning REINFORCE Framework for Stochastic Policy Optimization and its use in Deep Learning Ronen Tamari The Hebrew University of Jerusalem Advanced Seminar in Deep Learning (#67679) February 28, 2016 Ronen Tamari

More information

Grundlagen der Künstlichen Intelligenz

Grundlagen der Künstlichen Intelligenz Grundlagen der Künstlichen Intelligenz Reinforcement learning Daniel Hennes 4.12.2017 (WS 2017/18) University Stuttgart - IPVS - Machine Learning & Robotics 1 Today Reinforcement learning Model based and

More information

Reinforcement Learning with Function Approximation. Joseph Christian G. Noel

Reinforcement Learning with Function Approximation. Joseph Christian G. Noel Reinforcement Learning with Function Approximation Joseph Christian G. Noel November 2011 Abstract Reinforcement learning (RL) is a key problem in the field of Artificial Intelligence. The main goal is

More information

Reinforcement Learning. Yishay Mansour Tel-Aviv University

Reinforcement Learning. Yishay Mansour Tel-Aviv University Reinforcement Learning Yishay Mansour Tel-Aviv University 1 Reinforcement Learning: Course Information Classes: Wednesday Lecture 10-13 Yishay Mansour Recitations:14-15/15-16 Eliya Nachmani Adam Polyak

More information

Decision Theory: Markov Decision Processes

Decision Theory: Markov Decision Processes Decision Theory: Markov Decision Processes CPSC 322 Lecture 33 March 31, 2006 Textbook 12.5 Decision Theory: Markov Decision Processes CPSC 322 Lecture 33, Slide 1 Lecture Overview Recap Rewards and Policies

More information

Learning in Zero-Sum Team Markov Games using Factored Value Functions

Learning in Zero-Sum Team Markov Games using Factored Value Functions Learning in Zero-Sum Team Markov Games using Factored Value Functions Michail G. Lagoudakis Department of Computer Science Duke University Durham, NC 27708 mgl@cs.duke.edu Ronald Parr Department of Computer

More information

Christopher Watkins and Peter Dayan. Noga Zaslavsky. The Hebrew University of Jerusalem Advanced Seminar in Deep Learning (67679) November 1, 2015

Christopher Watkins and Peter Dayan. Noga Zaslavsky. The Hebrew University of Jerusalem Advanced Seminar in Deep Learning (67679) November 1, 2015 Q-Learning Christopher Watkins and Peter Dayan Noga Zaslavsky The Hebrew University of Jerusalem Advanced Seminar in Deep Learning (67679) November 1, 2015 Noga Zaslavsky Q-Learning (Watkins & Dayan, 1992)

More information

CSC321 Lecture 22: Q-Learning

CSC321 Lecture 22: Q-Learning CSC321 Lecture 22: Q-Learning Roger Grosse Roger Grosse CSC321 Lecture 22: Q-Learning 1 / 21 Overview Second of 3 lectures on reinforcement learning Last time: policy gradient (e.g. REINFORCE) Optimize

More information

Markov Decision Processes (and a small amount of reinforcement learning)

Markov Decision Processes (and a small amount of reinforcement learning) Markov Decision Processes (and a small amount of reinforcement learning) Slides adapted from: Brian Williams, MIT Manuela Veloso, Andrew Moore, Reid Simmons, & Tom Mitchell, CMU Nicholas Roy 16.4/13 Session

More information

CSL302/612 Artificial Intelligence End-Semester Exam 120 Minutes

CSL302/612 Artificial Intelligence End-Semester Exam 120 Minutes CSL302/612 Artificial Intelligence End-Semester Exam 120 Minutes Name: Roll Number: Please read the following instructions carefully Ø Calculators are allowed. However, laptops or mobile phones are not

More information

Lecture 18: Reinforcement Learning Sanjeev Arora Elad Hazan

Lecture 18: Reinforcement Learning Sanjeev Arora Elad Hazan COS 402 Machine Learning and Artificial Intelligence Fall 2016 Lecture 18: Reinforcement Learning Sanjeev Arora Elad Hazan Some slides borrowed from Peter Bodik and David Silver Course progress Learning

More information

Today s Outline. Recap: MDPs. Bellman Equations. Q-Value Iteration. Bellman Backup 5/7/2012. CSE 473: Artificial Intelligence Reinforcement Learning

Today s Outline. Recap: MDPs. Bellman Equations. Q-Value Iteration. Bellman Backup 5/7/2012. CSE 473: Artificial Intelligence Reinforcement Learning CSE 473: Artificial Intelligence Reinforcement Learning Dan Weld Today s Outline Reinforcement Learning Q-value iteration Q-learning Exploration / exploitation Linear function approximation Many slides

More information

Temporal Difference. Learning KENNETH TRAN. Principal Research Engineer, MSR AI

Temporal Difference. Learning KENNETH TRAN. Principal Research Engineer, MSR AI Temporal Difference Learning KENNETH TRAN Principal Research Engineer, MSR AI Temporal Difference Learning Policy Evaluation Intro to model-free learning Monte Carlo Learning Temporal Difference Learning

More information

Basics of reinforcement learning

Basics of reinforcement learning Basics of reinforcement learning Lucian Buşoniu TMLSS, 20 July 2018 Main idea of reinforcement learning (RL) Learn a sequential decision policy to optimize the cumulative performance of an unknown system

More information

Prof. Dr. Ann Nowé. Artificial Intelligence Lab ai.vub.ac.be

Prof. Dr. Ann Nowé. Artificial Intelligence Lab ai.vub.ac.be REINFORCEMENT LEARNING AN INTRODUCTION Prof. Dr. Ann Nowé Artificial Intelligence Lab ai.vub.ac.be REINFORCEMENT LEARNING WHAT IS IT? What is it? Learning from interaction Learning about, from, and while

More information

Sequential Decision Problems

Sequential Decision Problems Sequential Decision Problems Michael A. Goodrich November 10, 2006 If I make changes to these notes after they are posted and if these changes are important (beyond cosmetic), the changes will highlighted

More information

CMU Lecture 12: Reinforcement Learning. Teacher: Gianni A. Di Caro

CMU Lecture 12: Reinforcement Learning. Teacher: Gianni A. Di Caro CMU 15-781 Lecture 12: Reinforcement Learning Teacher: Gianni A. Di Caro REINFORCEMENT LEARNING Transition Model? State Action Reward model? Agent Goal: Maximize expected sum of future rewards 2 MDP PLANNING

More information

Reinforcement Learning II. George Konidaris

Reinforcement Learning II. George Konidaris Reinforcement Learning II George Konidaris gdk@cs.brown.edu Fall 2017 Reinforcement Learning π : S A max R = t=0 t r t MDPs Agent interacts with an environment At each time t: Receives sensor signal Executes

More information

Reinforcement Learning. Machine Learning, Fall 2010

Reinforcement Learning. Machine Learning, Fall 2010 Reinforcement Learning Machine Learning, Fall 2010 1 Administrativia This week: finish RL, most likely start graphical models LA2: due on Thursday LA3: comes out on Thursday TA Office hours: Today 1:30-2:30

More information

Reinforcement Learning

Reinforcement Learning Reinforcement Learning Temporal Difference Learning Temporal difference learning, TD prediction, Q-learning, elibigility traces. (many slides from Marc Toussaint) Vien Ngo MLR, University of Stuttgart

More information

Learning Tetris. 1 Tetris. February 3, 2009

Learning Tetris. 1 Tetris. February 3, 2009 Learning Tetris Matt Zucker Andrew Maas February 3, 2009 1 Tetris The Tetris game has been used as a benchmark for Machine Learning tasks because its large state space (over 2 200 cell configurations are

More information

Reinforcement Learning II. George Konidaris

Reinforcement Learning II. George Konidaris Reinforcement Learning II George Konidaris gdk@cs.brown.edu Fall 2018 Reinforcement Learning π : S A max R = t=0 t r t MDPs Agent interacts with an environment At each time t: Receives sensor signal Executes

More information

Decision making, Markov decision processes

Decision making, Markov decision processes Decision making, Markov decision processes Solved tasks Collected by: Jiří Kléma, klema@fel.cvut.cz Spring 2017 The main goal: The text presents solved tasks to support labs in the A4B33ZUI course. 1 Simple

More information

Chapter 7: Eligibility Traces. R. S. Sutton and A. G. Barto: Reinforcement Learning: An Introduction 1

Chapter 7: Eligibility Traces. R. S. Sutton and A. G. Barto: Reinforcement Learning: An Introduction 1 Chapter 7: Eligibility Traces R. S. Sutton and A. G. Barto: Reinforcement Learning: An Introduction 1 Midterm Mean = 77.33 Median = 82 R. S. Sutton and A. G. Barto: Reinforcement Learning: An Introduction

More information

Human-level control through deep reinforcement. Liia Butler

Human-level control through deep reinforcement. Liia Butler Humanlevel control through deep reinforcement Liia Butler But first... A quote "The question of whether machines can think... is about as relevant as the question of whether submarines can swim" Edsger

More information

Generalization and Function Approximation

Generalization and Function Approximation Generalization and Function Approximation 0 Generalization and Function Approximation Suggested reading: Chapter 8 in R. S. Sutton, A. G. Barto: Reinforcement Learning: An Introduction MIT Press, 1998.

More information

An Introduction to Reinforcement Learning

An Introduction to Reinforcement Learning An Introduction to Reinforcement Learning Shivaram Kalyanakrishnan shivaram@cse.iitb.ac.in Department of Computer Science and Engineering Indian Institute of Technology Bombay April 2018 What is Reinforcement

More information

CS885 Reinforcement Learning Lecture 7a: May 23, 2018

CS885 Reinforcement Learning Lecture 7a: May 23, 2018 CS885 Reinforcement Learning Lecture 7a: May 23, 2018 Policy Gradient Methods [SutBar] Sec. 13.1-13.3, 13.7 [SigBuf] Sec. 5.1-5.2, [RusNor] Sec. 21.5 CS885 Spring 2018 Pascal Poupart 1 Outline Stochastic

More information

Approximate Q-Learning. Dan Weld / University of Washington

Approximate Q-Learning. Dan Weld / University of Washington Approximate Q-Learning Dan Weld / University of Washington [Many slides taken from Dan Klein and Pieter Abbeel / CS188 Intro to AI at UC Berkeley materials available at http://ai.berkeley.edu.] Q Learning

More information

Reinforcement Learning. Donglin Zeng, Department of Biostatistics, University of North Carolina

Reinforcement Learning. Donglin Zeng, Department of Biostatistics, University of North Carolina Reinforcement Learning Introduction Introduction Unsupervised learning has no outcome (no feedback). Supervised learning has outcome so we know what to predict. Reinforcement learning is in between it

More information

Autonomous Helicopter Flight via Reinforcement Learning

Autonomous Helicopter Flight via Reinforcement Learning Autonomous Helicopter Flight via Reinforcement Learning Authors: Andrew Y. Ng, H. Jin Kim, Michael I. Jordan, Shankar Sastry Presenters: Shiv Ballianda, Jerrolyn Hebert, Shuiwang Ji, Kenley Malveaux, Huy

More information

Least squares policy iteration (LSPI)

Least squares policy iteration (LSPI) Least squares policy iteration (LSPI) Charles Elkan elkan@cs.ucsd.edu December 6, 2012 1 Policy evaluation and policy improvement Let π be a non-deterministic but stationary policy, so p(a s; π) is the

More information

Reinforcement learning an introduction

Reinforcement learning an introduction Reinforcement learning an introduction Prof. Dr. Ann Nowé Computational Modeling Group AIlab ai.vub.ac.be November 2013 Reinforcement Learning What is it? Learning from interaction Learning about, from,

More information

David Silver, Google DeepMind

David Silver, Google DeepMind Tutorial: Deep Reinforcement Learning David Silver, Google DeepMind Outline Introduction to Deep Learning Introduction to Reinforcement Learning Value-Based Deep RL Policy-Based Deep RL Model-Based Deep

More information

Course 16:198:520: Introduction To Artificial Intelligence Lecture 13. Decision Making. Abdeslam Boularias. Wednesday, December 7, 2016

Course 16:198:520: Introduction To Artificial Intelligence Lecture 13. Decision Making. Abdeslam Boularias. Wednesday, December 7, 2016 Course 16:198:520: Introduction To Artificial Intelligence Lecture 13 Decision Making Abdeslam Boularias Wednesday, December 7, 2016 1 / 45 Overview We consider probabilistic temporal models where the

More information

Decision Theory: Q-Learning

Decision Theory: Q-Learning Decision Theory: Q-Learning CPSC 322 Decision Theory 5 Textbook 12.5 Decision Theory: Q-Learning CPSC 322 Decision Theory 5, Slide 1 Lecture Overview 1 Recap 2 Asynchronous Value Iteration 3 Q-Learning

More information

Reinforcement Learning

Reinforcement Learning Reinforcement Learning Cyber Rodent Project Some slides from: David Silver, Radford Neal CSC411: Machine Learning and Data Mining, Winter 2017 Michael Guerzhoy 1 Reinforcement Learning Supervised learning:

More information

PART A and ONE question from PART B; or ONE question from PART A and TWO questions from PART B.

PART A and ONE question from PART B; or ONE question from PART A and TWO questions from PART B. Advanced Topics in Machine Learning, GI13, 2010/11 Advanced Topics in Machine Learning, GI13, 2010/11 Answer any THREE questions. Each question is worth 20 marks. Use separate answer books Answer any THREE

More information

Exponential Moving Average Based Multiagent Reinforcement Learning Algorithms

Exponential Moving Average Based Multiagent Reinforcement Learning Algorithms Exponential Moving Average Based Multiagent Reinforcement Learning Algorithms Mostafa D. Awheda Department of Systems and Computer Engineering Carleton University Ottawa, Canada KS 5B6 Email: mawheda@sce.carleton.ca

More information

This question has three parts, each of which can be answered concisely, but be prepared to explain and justify your concise answer.

This question has three parts, each of which can be answered concisely, but be prepared to explain and justify your concise answer. This question has three parts, each of which can be answered concisely, but be prepared to explain and justify your concise answer. 1. Suppose you have a policy and its action-value function, q, then you

More information

Q-learning. Tambet Matiisen

Q-learning. Tambet Matiisen Q-learning Tambet Matiisen (based on chapter 11.3 of online book Artificial Intelligence, foundations of computational agents by David Poole and Alan Mackworth) Stochastic gradient descent Experience

More information