
1 Tutorial: Deep Reinforcement Learning
David Silver, Google DeepMind

2 Outline
- Introduction to Deep Learning
- Introduction to Reinforcement Learning
- Value-Based Deep RL
- Policy-Based Deep RL
- Model-Based Deep RL

3 Reinforcement Learning in a nutshell
RL is a general-purpose framework for decision-making:
- RL is for an agent with the capacity to act
- Each action influences the agent's future state
- Success is measured by a scalar reward signal
- Goal: select actions to maximise future reward

4 Deep Learning in a nutshell
DL is a general-purpose framework for representation learning:
- Given an objective
- Learn the representation required to achieve that objective
- Directly from raw inputs
- Using minimal domain knowledge

5 Deep Reinforcement Learning: AI = RL + DL
We seek a single agent which can solve any human-level task:
- RL defines the objective
- DL gives the mechanism
- RL + DL = general intelligence

6 Examples of Deep RL
- Play games: Atari, poker, Go, ...
- Explore worlds: 3D worlds, Labyrinth, ...
- Control physical systems: manipulate, walk, swim, ...
- Interact with users: recommend, optimise, personalise, ...

7 Outline
- Introduction to Deep Learning
- Introduction to Reinforcement Learning
- Value-Based Deep RL
- Policy-Based Deep RL
- Model-Based Deep RL

8 Deep Representations
- A deep representation is a composition of many functions: $x \to h_1 \to \dots \to h_n \to y \to l$, with weights $w_1, \dots, w_n$
- Its gradient can be backpropagated by the chain rule: $\frac{\partial l}{\partial w_k} = \frac{\partial l}{\partial h_n} \frac{\partial h_n}{\partial h_{n-1}} \cdots \frac{\partial h_{k+1}}{\partial h_k} \frac{\partial h_k}{\partial w_k}$

9 Deep Neural Network
A deep neural network is typically composed of (see the sketch below):
- Linear transformations: $h_{k+1} = W h_k$
- Non-linear activation functions: $h_{k+2} = f(h_{k+1})$
- A loss function on the output, e.g.
  - mean-squared error $l = \|y^* - y\|^2$
  - log likelihood $l = \log \mathbb{P}[y^*]$
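
To make the composition concrete, here is a minimal NumPy sketch (not from the slides; the layer sizes and data are arbitrary) of one linear layer, one non-linearity, and the two losses above:

```python
import numpy as np

def forward(x, W1, W2):
    """Compose linear maps with a non-linearity: h1 = W1 x, h2 = f(h1), y = W2 h2."""
    h1 = W1 @ x                # linear transformation
    h2 = np.maximum(h1, 0.0)   # non-linear activation (ReLU standing in for f)
    return W2 @ h2             # linear output layer

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(32, 8)), rng.normal(size=(4, 32))
x, y_star = rng.normal(size=8), rng.normal(size=4)

y = forward(x, W1, W2)
mse = np.sum((y_star - y) ** 2)      # mean-squared error l = ||y* - y||^2
probs = np.exp(y) / np.exp(y).sum()  # softmax output for a classification head
log_lik = np.log(probs[0])           # log likelihood of (say) target class 0
```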

10 Training Neural Networks by Stochastic Gradient Descent
- Sample the gradient of the expected loss $L(w) = \mathbb{E}[l]$: the per-example gradient $\frac{\partial l}{\partial w}$ is an unbiased sample of $\frac{\partial L(w)}{\partial w}$
- Adjust w down the sampled gradient: $\Delta w \propto -\frac{\partial l}{\partial w}$ (see the sketch below)
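
A minimal sketch of the idea, using a hypothetical linear-regression loss (not from the slides): sample one example, compute the gradient of its loss, and step the weights down that sampled gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])   # weights generating the (synthetic) data
w = np.zeros(2)                  # weights being trained
alpha = 0.1                      # step size

for t in range(1000):
    x = rng.normal(size=2)                      # sample an input
    y_star = w_true @ x + 0.01 * rng.normal()   # noisy target
    y = w @ x                                   # prediction
    grad = 2 * (y - y_star) * x                 # sampled gradient of l = (y* - y)^2
    w -= alpha * grad                           # adjust w down the sampled gradient

print(w)  # approaches w_true in expectation
```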

11 Weight Sharing
- A recurrent neural network shares the same weights w between time-steps: inputs $x_t, x_{t+1}, \dots$ feed hidden states $h_t \to h_{t+1} \to \dots$ and outputs $y_t, y_{t+1}, \dots$
- A convolutional neural network shares weights $(w_1, w_2, \dots)$ between local regions of the input (see the sketch below)
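
An illustrative sketch of convolutional weight sharing (not from the slides): the same small weight vector is applied at every position of the input, just as an RNN applies the same weights at every time-step.

```python
import numpy as np

def conv1d(x, w):
    """Slide one shared weight vector w across x: every output reuses the same weights."""
    k = len(w)
    return np.array([x[i:i + k] @ w for i in range(len(x) - k + 1)])

x = np.arange(6.0)
w = np.array([1.0, -1.0])   # the shared weights (w1, w2 in the diagram)
print(conv1d(x, w))         # each hidden unit h_i = w1 * x_i + w2 * x_{i+1}
```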

12 Outline
- Introduction to Deep Learning
- Introduction to Reinforcement Learning
- Value-Based Deep RL
- Policy-Based Deep RL
- Model-Based Deep RL

13 Many Faces of Reinforcement Learning
RL sits at the intersection of many fields: machine learning (computer science), optimal control (engineering), operations research (mathematics), rationality and game theory (economics), reward systems (neuroscience), and classical/operant conditioning (psychology).

14 Agent and Environment
- At each step t the agent:
  - Executes action $a_t$
  - Receives observation $o_t$
  - Receives scalar reward $r_t$
- The environment:
  - Receives action $a_t$
  - Emits observation $o_{t+1}$
  - Emits scalar reward $r_{t+1}$

15 State
- Experience is a sequence of observations, actions, rewards: $o_1, r_1, a_1, \dots, a_{t-1}, o_t, r_t$
- The state is a summary of experience: $s_t = f(o_1, r_1, a_1, \dots, a_{t-1}, o_t, r_t)$
- In a fully observed environment: $s_t = f(o_t)$

16 Major Components of an RL Agent
An RL agent may include one or more of these components:
- Policy: the agent's behaviour function
- Value function: how good is each state and/or action
- Model: the agent's representation of the environment

17 Policy
- A policy is the agent's behaviour
- It is a map from state to action:
  - Deterministic policy: $a = \pi(s)$
  - Stochastic policy: $\pi(a|s) = \mathbb{P}[a|s]$

18-19 Value Function
- A value function is a prediction of future reward: how much reward will I get from action a in state s?
- The Q-value function gives expected total reward, from state s and action a, under policy $\pi$, with discount factor $\gamma$:
  $Q^\pi(s, a) = \mathbb{E}\left[ r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + \dots \mid s, a \right]$
- Value functions decompose into a Bellman equation:
  $Q^\pi(s, a) = \mathbb{E}_{s', a'}\left[ r + \gamma Q^\pi(s', a') \mid s, a \right]$

20-23 Optimal Value Functions
- An optimal value function is the maximum achievable value:
  $Q^*(s, a) = \max_\pi Q^\pi(s, a) = Q^{\pi^*}(s, a)$
- Once we have $Q^*$ we can act optimally:
  $\pi^*(s) = \operatorname{argmax}_a Q^*(s, a)$
- Optimal value maximises over all decisions. Informally:
  $Q^*(s, a) = r_{t+1} + \gamma \max_{a_{t+1}} r_{t+2} + \gamma^2 \max_{a_{t+2}} r_{t+3} + \dots = r_{t+1} + \gamma \max_{a_{t+1}} Q^*(s_{t+1}, a_{t+1})$
- Formally, optimal values decompose into a Bellman equation (see the value-iteration sketch below):
  $Q^*(s, a) = \mathbb{E}_{s'}\left[ r + \gamma \max_{a'} Q^*(s', a') \mid s, a \right]$
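
As an illustration (a made-up 2-state, 2-action MDP, not from the tutorial), Q-value iteration repeatedly applies this Bellman optimality backup until it converges to $Q^*$, after which the greedy policy is optimal:

```python
import numpy as np

n_states, n_actions, gamma = 2, 2, 0.9
# P[s, a, s'] = transition probability; R[s, a] = expected reward (hypothetical numbers)
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.0, 1.0], [0.5, 0.5]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

Q = np.zeros((n_states, n_actions))
for _ in range(1000):
    # Bellman optimality backup: Q(s,a) = E_{s'}[ r + gamma * max_{a'} Q(s',a') ]
    Q = R + gamma * P @ Q.max(axis=1)

greedy_policy = Q.argmax(axis=1)  # act optimally: pi*(s) = argmax_a Q*(s,a)
```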

24 Value Function Demo

25-26 Model
- A model is learnt from experience
- It acts as a proxy for the environment: the agent takes action $a_t$, the model emits observation $o_t$ and reward $r_t$
- A planner interacts with the model, e.g. using lookahead search

27 Approaches To Reinforcement Learning
Value-based RL:
- Estimate the optimal value function $Q^*(s, a)$
- This is the maximum value achievable under any policy
Policy-based RL:
- Search directly for the optimal policy $\pi^*$
- This is the policy achieving maximum future reward
Model-based RL:
- Build a model of the environment
- Plan (e.g. by lookahead) using the model

28 Deep Reinforcement Learning
- Use deep neural networks to represent:
  - the value function
  - the policy
  - the model
- Optimise loss functions by stochastic gradient descent

29 Outline
- Introduction to Deep Learning
- Introduction to Reinforcement Learning
- Value-Based Deep RL
- Policy-Based Deep RL
- Model-Based Deep RL

30 Q-Networks
Represent the value function by a Q-network with weights w: $Q(s, a, w) \approx Q^*(s, a)$
Two architectures: take state s and action a as input and output a single value $Q(s, a, w)$; or take only s as input and output one value $Q(s, a_i, w)$ per action $a_1, \dots, a_m$.

31-33 Q-Learning
- Optimal Q-values should obey the Bellman equation:
  $Q^*(s, a) = \mathbb{E}_{s'}\left[ r + \gamma \max_{a'} Q^*(s', a') \mid s, a \right]$
- Treat the right-hand side $r + \gamma \max_{a'} Q(s', a', w)$ as a target
- Minimise the MSE loss by stochastic gradient descent:
  $l = \left( r + \gamma \max_{a'} Q(s', a', w) - Q(s, a, w) \right)^2$
- Converges to $Q^*$ using a table-lookup representation (see the sketch below)
- But diverges using neural networks, due to:
  - correlations between samples
  - non-stationary targets
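
A minimal sketch of the tabular case (the state and action indices are hypothetical): one stochastic-gradient step on the squared error between Q(s, a) and the Bellman target.

```python
import numpy as np

def q_learning_step(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One Q-learning update: move Q(s,a) towards the target r + gamma * max_a' Q(s',a')."""
    target = r + gamma * np.max(Q[s_next])   # right-hand side of the Bellman equation
    Q[s, a] += alpha * (target - Q[s, a])    # gradient step on the squared TD error
    return Q

Q = np.zeros((5, 2))                         # table lookup: 5 states, 2 actions
Q = q_learning_step(Q, s=0, a=1, r=1.0, s_next=3)
```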

34 Deep Q-Networks (DQN): Experience Replay
To remove correlations, build a data-set from the agent's own experience: store transitions $(s_t, a_t, r_{t+1}, s_{t+1})$, e.g. $(s_1, a_1, r_2, s_2), (s_2, a_2, r_3, s_3), (s_3, a_3, r_4, s_4), \dots$
Sample transitions $(s, a, r, s')$ from the data-set and apply the update:
$l = \left( r + \gamma \max_{a'} Q(s', a', w^-) - Q(s, a, w) \right)^2$
To deal with non-stationarity, the target parameters $w^-$ are held fixed (both mechanisms are sketched below).
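
A sketch of the two mechanisms, with small Q-tables standing in for the online and target networks (all names and sizes here are assumptions, not the DeepMind implementation):

```python
import random
from collections import deque
import numpy as np

buffer = deque(maxlen=100_000)                    # data-set of transitions (s, a, r, s')

def store(s, a, r, s_next):
    buffer.append((s, a, r, s_next))

def dqn_targets(batch, Q_target, gamma=0.99):
    """Targets r + gamma * max_a' Q(s', a', w-) using frozen target parameters w-."""
    return np.array([r + gamma * np.max(Q_target[s_next])
                     for (_, _, r, s_next) in batch])

# Usage sketch: sample uncorrelated transitions, regress Q(s, a, w) towards the targets.
Q, Q_target = np.zeros((5, 2)), np.zeros((5, 2))  # tables standing in for online/target nets
store(0, 1, 1.0, 3)
store(3, 0, 0.0, 4)
batch = random.sample(list(buffer), k=2)
y = dqn_targets(batch, Q_target)
# Every C steps, refresh the frozen parameters: Q_target = Q.copy()
```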

35 Deep Reinforcement Learning in Atari
(Agent-environment loop: the agent observes state $s_t$, selects action $a_t$, and receives reward $r_t$.)

36 DQN in Atari
- End-to-end learning of values Q(s, a) from pixels s
- Input state s is a stack of raw pixels from the last 4 frames
- Output is Q(s, a) for 18 joystick/button positions
- Reward is the change in score for that step
Network architecture and hyperparameters are fixed across all games.

37 DQN Results in Atari

38 DQN Atari Demo
- DQN paper: Nature, 26 February 2015, Vol. 518, No. 7540 (cover story: "Self-taught AI software attains human-level performance in video games")
- DQN source code: sites.google.com/a/deepmind.com/dqn/

39-42 Improvements since Nature DQN
- Double DQN: remove the upward bias caused by $\max_{a'} Q(s', a', w)$
  - The current Q-network w is used to select actions
  - The older Q-network $w^-$ is used to evaluate actions
  $l = \left( r + \gamma Q(s', \operatorname{argmax}_{a'} Q(s', a', w), w^-) - Q(s, a, w) \right)^2$
- Prioritised replay: weight experience according to surprise
  - Store experience in a priority queue according to the DQN error $\left| r + \gamma \max_{a'} Q(s', a', w^-) - Q(s, a, w) \right|$
- Duelling network: split the Q-network into two channels
  - Action-independent value function $V(s, v)$
  - Action-dependent advantage function $A(s, a, w)$
  $Q(s, a) = V(s, v) + A(s, a, w)$
Combined algorithm: 3x mean Atari score vs Nature DQN (each modification is sketched below)
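
Sketches of the three modifications, again with tables standing in for networks (illustrative only):

```python
import numpy as np

def double_dqn_target(Q, Q_target, r, s_next, gamma=0.99):
    """Select the action with the current network w, evaluate it with the older network w-."""
    a_star = np.argmax(Q[s_next])                 # selection: argmax_a' Q(s', a', w)
    return r + gamma * Q_target[s_next, a_star]   # evaluation: Q(s', a*, w-)

def priority(Q, Q_target, s, a, r, s_next, gamma=0.99):
    """Prioritised replay: weight a transition by the magnitude of its DQN error."""
    return abs(r + gamma * np.max(Q_target[s_next]) - Q[s, a])

def duelling_q(V, A):
    """Combine the two channels as on the slide: Q(s, a) = V(s) + A(s, a)."""
    return V[:, None] + A
```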

43 Gorila (General Reinforcement Learning Architecture)
- 10x faster than Nature DQN on 38 out of 49 Atari games
- Applied to recommender systems within Google

44 Asynchronous Reinforcement Learning
- Exploits the multithreading of standard CPUs
- Execute many instances of the agent in parallel
- Network parameters are shared between threads
- Parallelism decorrelates the data
- A viable alternative to experience replay
- Similar speedup to Gorila, on a single machine!

45 Outline
- Introduction to Deep Learning
- Introduction to Reinforcement Learning
- Value-Based Deep RL
- Policy-Based Deep RL
- Model-Based Deep RL

46 Deep Policy Networks
- Represent the policy by a deep network with weights u: $a = \pi(a|s, u)$ or $a = \pi(s, u)$
- Define the objective function as total discounted reward:
  $L(u) = \mathbb{E}\left[ r_1 + \gamma r_2 + \gamma^2 r_3 + \dots \mid \pi(\cdot, u) \right]$
- Optimise the objective end-to-end by SGD
- i.e. adjust the policy parameters u to achieve more reward

47-48 Policy Gradients
How to make high-value actions more likely:
- The gradient of a stochastic policy $\pi(a|s, u)$ is (see the sketch below)
  $\frac{\partial L(u)}{\partial u} = \mathbb{E}\left[ \frac{\partial \log \pi(a|s, u)}{\partial u} Q^\pi(s, a) \right]$
- The gradient of a deterministic policy $a = \pi(s)$ is
  $\frac{\partial L(u)}{\partial u} = \mathbb{E}\left[ \frac{\partial Q^\pi(s, a)}{\partial a} \frac{\partial a}{\partial u} \right]$
  (if a is continuous and Q is differentiable)
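
A minimal sketch of the stochastic policy gradient on a hypothetical one-state, three-action problem (not from the slides): sample an action, estimate $Q$ by the sampled reward, and ascend $\mathbb{E}\left[ \frac{\partial \log \pi}{\partial u} Q \right]$.

```python
import numpy as np

rng = np.random.default_rng(0)
u = np.zeros(3)                                 # policy parameters for 3 actions
true_reward = np.array([1.0, 2.0, 0.5])         # hypothetical expected rewards
alpha = 0.1

def pi(u):
    e = np.exp(u - u.max())
    return e / e.sum()                          # softmax policy pi(a|u)

for t in range(2000):
    p = pi(u)
    a = rng.choice(3, p=p)                      # sample action from the stochastic policy
    q = true_reward[a] + rng.normal(scale=0.1)  # sampled estimate of Q(s, a)
    grad_log = -p
    grad_log[a] += 1.0                          # d/du log pi(a|u) for a softmax
    u += alpha * grad_log * q                   # ascend E[ dlog pi/du * Q ]

print(pi(u))  # concentrates on the highest-value action
```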

49 Actor-Critic Algorithm
- Estimate the value function: $Q(s, a, w) \approx Q^\pi(s, a)$
- Update the policy parameters u by stochastic gradient ascent:
  $\frac{\partial L}{\partial u} = \frac{\partial \log \pi(a|s, u)}{\partial u} Q(s, a, w)$

50-52 Asynchronous Advantage Actor-Critic (A3C)
- Estimate the state-value function: $V(s, v) \approx \mathbb{E}\left[ r_{t+1} + \gamma r_{t+2} + \dots \mid s \right]$
- The Q-value is estimated by an n-step sample (see the sketch below):
  $q_t = r_{t+1} + \gamma r_{t+2} + \dots + \gamma^{n-1} r_{t+n} + \gamma^n V(s_{t+n}, v)$
- The actor is updated towards the target:
  $\frac{\partial l_u}{\partial u} = \frac{\partial \log \pi(a_t|s_t, u)}{\partial u} (q_t - V(s_t, v))$
- The critic is updated to minimise the MSE w.r.t. the target:
  $l_v = (q_t - V(s_t, v))^2$
- 4x mean Atari score vs Nature DQN
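
A sketch of the n-step estimate and the two resulting quantities (the reward and value numbers are placeholders):

```python
import numpy as np

def n_step_return(rewards, v_bootstrap, gamma=0.99):
    """q_t = r_{t+1} + gamma r_{t+2} + ... + gamma^{n-1} r_{t+n} + gamma^n V(s_{t+n})."""
    q = v_bootstrap
    for r in reversed(rewards):   # fold rewards back from the bootstrap value
        q = r + gamma * q
    return q

rewards = [0.0, 0.0, 1.0]         # r_{t+1}..r_{t+n} from an n = 3 step rollout
q_t = n_step_return(rewards, v_bootstrap=0.5)
v_t = 0.2                         # critic's estimate V(s_t, v)
advantage = q_t - v_t             # scales the actor's log-probability gradient
critic_loss = advantage ** 2      # l_v = (q_t - V(s_t, v))^2
```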

53 Deep Reinforcement Learning in Labyrinth

54 A3C in Labyrinth
(Network unrolled over time: at each step the recurrent state feeds both $\pi(a|s_t)$ and $V(s_t)$.)
- End-to-end learning of a softmax policy $\pi(a|s_t)$ from pixels
- Observations $o_t$ are raw pixels from the current frame
- State $s_t = f(o_1, \dots, o_t)$ is a recurrent neural network (LSTM)
- Outputs both the value $V(s)$ and a softmax over actions $\pi(a|s)$
- The task is to collect apples (+1 reward) and escape (+10 reward)

55 A3C Labyrinth Demo
- Labyrinth source code (coming soon): sites.google.com/a/deepmind.com/labyrinth/

56 Deep Reinforcement Learning with Continuous Actions
How can we deal with high-dimensional continuous action spaces?
- Can't easily compute $\max_a Q(s, a)$
- Actor-critic algorithms learn without the max
- Q-values are differentiable w.r.t. a
- Deterministic policy gradients exploit $\frac{\partial Q}{\partial a}$

57 Deep DPG
DPG is the continuous analogue of DQN:
- Experience replay: build a data-set from the agent's experience
- The critic estimates the value of the current policy by DQN:
  $l_w = \left( r + \gamma Q(s', \pi(s', u^-), w^-) - Q(s, a, w) \right)^2$
- The actor updates the policy in the direction that improves Q:
  $\frac{\partial l_u}{\partial u} = \frac{\partial Q(s, a, w)}{\partial a} \frac{\partial a}{\partial u}$
- In other words, the critic provides a loss function for the actor (see the sketch below)
To deal with non-stationarity, the target parameters $u^-, w^-$ are held fixed.
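
A sketch of one DDPG-style update, with linear function approximators standing in for the two networks (everything here is a simplification under stated assumptions, not the DeepMind implementation):

```python
import numpy as np

# Linear stand-ins: Q(s, a, w) = w . [s, a]; pi(s, u) = u . s (scalar action)
def ddpg_step(w, w_targ, u, u_targ, s, a, r, s_next, gamma=0.99, lr=1e-3):
    # Critic: regress Q(s, a, w) towards the frozen target r + gamma * Q(s', pi(s', u-), w-)
    a_next = u_targ @ s_next
    target = r + gamma * (w_targ @ np.append(s_next, a_next))
    x = np.append(s, a)
    td_err = target - w @ x
    w = w + lr * td_err * x        # descend l_w = td_err^2

    # Actor: follow the critic's action-gradient, dQ/da * da/du
    dq_da = w[-1]                  # for linear Q, dQ/da is the action weight
    u = u + lr * dq_da * s         # da/du = s for the linear policy
    return w, u
    # Periodically (or slowly) refresh the targets: w_targ <- w, u_targ <- u
```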

58 DPG in Simulated Physics
- Physics domains are simulated in MuJoCo
- End-to-end learning of a control policy from raw pixels s
- Input state s is a stack of raw pixels from the last 4 frames
- Two separate convnets are used for $Q(s, a)$ and $\pi(s)$
- The policy $\pi$ is adjusted in the direction that most improves Q

59 DPG in Simulated Physics Demo
- Demo: DPG from pixels

60 A3C in Simulated Physics Demo
- Asynchronous RL is a viable alternative to experience replay
- Train a hierarchical, recurrent locomotion controller
- Retrain the controller on more challenging tasks

61 Fictitious Self-Play (FSP)
Can deep RL find Nash equilibria in multi-agent games?
- The Q-network learns the best response to opponent policies, by applying DQN with experience replay (c.f. fictitious play)
- The policy network $\pi(a|s, u)$ learns an average of the best responses:
  $\frac{\partial l}{\partial u} = \frac{\partial \log \pi(a|s, u)}{\partial u}$
- Actions a are sampled from a mixture of the policy network and the best response

62 Neural FSP in Texas Hold'em Poker
- Heads-up limit Texas Hold'em
- NFSP with raw inputs only (no prior knowledge of poker)
- vs SmooCT (3x medal winner 2015, handcrafted knowledge)
(Plot omitted: win rate in mbb/h over training iterations for SmooCT and for NFSP's best-response, greedy-average, and average strategies.)

63 Outline Introduction to Deep Learning Introduction to Reinforcement Learning Value-Based Deep RL Policy-Based Deep RL Model-Based Deep RL

64 Learning Models of the Environment
- Demo: generative model of Atari
- Challenging to plan due to compounding errors:
  - Errors in the transition model compound over the trajectory
  - Planning trajectories differ from executed trajectories
  - At the end of a long, unusual trajectory, rewards are totally wrong

65 Deep Reinforcement Learning in Go
What if we have a perfect model? e.g. the game rules are known.
- AlphaGo paper: Nature, 28 January 2016, Vol. 529 (cover story: "At last, a computer program that can beat a champion Go player")
- AlphaGo resources: deepmind.com/alphago/

66 Conclusion
- General, stable and scalable RL is now possible
- Using deep networks to represent the value function, policy and model
- Successful in Atari, Labyrinth, physics, poker, Go
- Using a variety of deep RL paradigms
