CSE 190: Reinforcement Learning: An Introduction


Course basics

- The website for the class is linked off my homepage.
- Grades will be based on programming assignments, homeworks, and class participation.
- Homeworks will be turned in, but not graded, as we will discuss the answers in class in small groups. Turning one in means I can see that you are holding up your end of the conversation (this is a major part of class participation).
- Programming assignments will be graded.
- Acknowledgment: a good number of these slides are cribbed from Rich Sutton.
- Any email sent to me about the course should have CSE 190 in the subject line.

Course goals

After taking this course you should:
- Understand what is unique about Reinforcement Learning
- Understand the tradeoff between exploration and exploitation
- Be conversant in Markov Decision Problems (MDPs)
- Know the various solution methods for solving the RL problem: dynamic programming (value iteration, policy iteration, etc.), Monte Carlo, TD learning
- Know what an eligibility trace is
- Be aware of several well-known applications of RL
- Be able to read papers in the field and understand 75% of each paper

Last Time

Difference from other forms of learning: learning by interaction with the environment, which leads to the exploration/exploitation tradeoff. An agent learning by interacting with the environment must:
- Exploit its knowledge to maximize reward
- Explore the environment to ensure that its knowledge is correct
The agent must try everything while favoring, over time, the most rewarding actions.

Elements of RL:
- a policy
- a reward function
- a value function
- optionally, a model of the environment

Last Time: Elements of RL

- A policy: a mapping from states to actions, possibly stochastic. Deterministic: π(s) = a. Stochastic: π(s,a) = P(a|s).
- A reward function: specified in the environment, not in the agent; given as part of the problem. Usually a scalar value at each state.
- A value function: a prediction of reward from a state; a mapping from states to the expected total reward from that state if we follow policy π. Written V^π(s).
- Optionally, a model of the environment: something that tells us what to expect if we take an action in a state: P^a_{ss'}, i.e., the probability of getting to state s' from state s if we take action a.
This should (almost) all be familiar from your programming assignment.

Elements of RL

A model of the environment supports planning through simulating the future ("if I do this, then he'll do that"). In general, RL agents can span the gamut from reactive to far-sighted.

Example 1: Tic-tac-toe

Since one can always play to a draw, let's assume an imperfect opponent. Reinforcement learning approach:
- Set up a table, V[s_i], i = 1…n, where n is the number of possible states of the board and s_i is a state.
- Each entry of V[] is an estimate of the probability of a win from that state: the value of that state.
- Assume we always play X's. Initialize V as:
  V[s_i = a state with three X's in a row] = 1
  V[s_i = a state with three O's in a row] = -1
  V[s_i = all other states] = 0
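A minimal sketch of that initialization in code, assuming a hypothetical encoding of the board as a 9-character string over 'X', 'O', and ' ' (the slide's truncated "all other states" entry is taken to be 0, the midpoint of +1 and -1):

```python
def initial_value(board):
    """Initial value table entry for one tic-tac-toe state, per the slide:
    +1 for three X's in a row, -1 for three O's in a row, 0 otherwise.
    `board` is a 9-character string over 'X', 'O', ' ' (an assumed encoding)."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    if any(all(board[i] == 'X' for i in line) for line in lines):
        return 1.0
    if any(all(board[i] == 'O' for i in line) for line in lines):
        return -1.0
    return 0.0
```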

Example 1: Tic-tac-toe

- Play many games against our imperfect opponent to learn the values of states.
- How do we play? We need a policy. Let's use one called ε-greedy:
  - For each move, ε of the time we pick a move uniformly at random from the possible moves.
  - Otherwise, we pick the move that gets us to the state with the highest value V[s_i] (greedy).

ε-greedy policy (details):

  π(s_i) = { s_k chosen uniformly at random over states s_k reachable from s_i,  with probability ε
           { argmax_k V(s_k) over states s_k reachable from s_i,                 with probability 1-ε

The ε case is exploration. The other (greedy, i.e., take the highest-value state) case is exploitation.

Example 1: Tic-tac-toe

How do we update previous values?

  V(s_i) <- V(s_i) + α [V(s') - V(s_i)]

where s' is the state reached after the opponent's move and α is a learning rate. This is called a temporal difference method because it uses values of states from two different time steps. It provably converges to the optimal policy. Why do I say that updating the value estimates changes the policy?
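Both rules translate almost directly into code. A minimal sketch, with hypothetical names; V is assumed to be a dict from states to estimated values:

```python
import random

def epsilon_greedy_move(V, reachable_states, epsilon=0.1):
    """Pick the next board state: explore with probability epsilon,
    otherwise move greedily to the highest-valued reachable state."""
    if random.random() < epsilon:
        return random.choice(reachable_states)        # exploration
    return max(reachable_states, key=lambda s: V[s])  # exploitation (greedy)

def td_update(V, s_i, s_next, alpha=0.1):
    """Temporal-difference update from the slide:
    V(s_i) <- V(s_i) + alpha * [V(s') - V(s_i)],
    where s_next is the state reached after the opponent's reply."""
    V[s_i] += alpha * (V[s_next] - V[s_i])
```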

Example 1: Tic-tac-toe

Why does updating the V's change π?

  V(s_i) <- V(s_i) + α [V(s') - V(s_i)]

Because the policy is a function of V:

  π(s_i) = { s_k chosen uniformly at random over states s_k reachable from s_i,  with probability ε
           { argmax_k V(s_k) over states s_k reachable from s_i,                 with probability 1-ε

As we get better value estimates, we make better choices.

Example 2: n-armed bandit

- n = 10 possible actions
- Each Q*(a_i) is chosen randomly from a normal distribution: Q*(a_i) ~ N(0,1)
- Each reward r_t is also normal: r_t ~ N(Q*(a_i), 1)
- Estimate Q-values using a running average: Q_t(a) = (r_1 + r_2 + … + r_{k_a}) / k_a
- Policy is ε-greedy
- Use 1000 plays; repeat the whole thing 2000 times and average the results
Hopefully, this is what you did for your programming assignment.

We compare three policies: greedy, and ε-greedy with ε = 0.1 and ε = 0.01. Note that greedy never explores. And this is your programming assignment (a simulation sketch follows).
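For reference, the whole testbed fits in a few lines. A sketch under the slide's stated setup (Q*(a) ~ N(0,1), r ~ N(Q*(a),1), ε-greedy, 1000 plays, 2000 runs); not the official assignment code:

```python
import random

def run_bandit(n=10, plays=1000, epsilon=0.1):
    """One run of the n-armed testbed with sample-average estimates."""
    q_star = [random.gauss(0.0, 1.0) for _ in range(n)]  # true action values
    Q = [0.0] * n       # running-average estimates
    counts = [0] * n    # times each action has been tried
    rewards = []
    for _ in range(plays):
        if random.random() < epsilon:
            a = random.randrange(n)                 # explore
        else:
            a = max(range(n), key=lambda i: Q[i])   # exploit (greedy)
        r = random.gauss(q_star[a], 1.0)
        counts[a] += 1
        Q[a] += (r - Q[a]) / counts[a]              # update running average
        rewards.append(r)
    return rewards

# Average reward at each play over 2000 independent runs, as on the slide:
avg = [sum(step) / 2000.0
       for step in zip(*(run_bandit() for _ in range(2000)))]
```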

Another approach to exploration: the Softmax policy

  P(a) = e^{Q_t(a)/τ} / Σ_{b=1}^{n} e^{Q_t(b)/τ}

- P(a) is the probability of taking action a.
- Q_t(a) is the current estimate of Q*(a). The higher Q_t(a) is, the more likely we are to choose action a.
- τ is the temperature: this is annealed (starts hot and slowly cools over trials). As τ -> 0, the policy becomes deterministic.

Why use this? Note that ε-greedy keeps exploring randomly long after it has learned what the right values are. Softmax quickly gives states or actions with better evidence higher priority, while still exploring. (A code sketch appears at the end of this page.)

Incremental estimation of Q's

Recall (what you should have implemented in your programming assignment) the sample-average method for estimating Q_k(a):

  Q_t(a) = (r_1 + r_2 + … + r_{k_a}) / k_a

Can we do this incrementally (without storing all of the rewards)?

  Q_{k+1}(a) = Q_k(a) + (1/(k+1)) [r_{k+1} - Q_k(a)]

where k is the number of times we have tried action a.
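The softmax sketch referenced above, assuming Q is a list of current estimates (the annealing schedule for τ is left out):

```python
import math
import random

def softmax_action(Q, tau=0.5):
    """Sample an action with probability proportional to exp(Q(a)/tau).
    High tau: nearly uniform (lots of exploration); tau -> 0: greedy."""
    prefs = [math.exp(q / tau) for q in Q]
    total = sum(prefs)
    r = random.random() * total
    cumulative = 0.0
    for a, p in enumerate(prefs):
        cumulative += p
        if r <= cumulative:
            return a
    return len(Q) - 1  # guard against floating-point round-off
```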

Incremental estimation of Q's

  Q_{k+1}(a) = Q_k(a) + (1/(k+1)) [r_{k+1} - Q_k(a)]

Note that this is a common form of update rule:

  NewEstimate = OldEstimate + StepSize [Target - OldEstimate]

Note that in the version above, the step size changes over time.

Tracking nonstationary problems

Choosing Q_k to be a sample average is appropriate in a stationary problem, i.e., when none of the Q*(a) change over time. When the Q*(a) change over time, this is called a nonstationary problem. In this case, the following is better:

  Q_{k+1}(a) = Q_k(a) + α [r_{k+1} - Q_k(a)]

where α is a constant between zero and one. Equivalently:

  Q_{k+1}(a) = (1-α) Q_k(a) + α r_{k+1}

and, unrolling the recursion:

  Q_k(a) = (1-α)^k Q_0(a) + Σ_{i=1}^{k} α (1-α)^{k-i} r_i
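Both step-size choices are one-liners; a sketch of the two update rules side by side:

```python
def update_sample_average(Q, k, r):
    """Q_{k+1} = Q_k + 1/(k+1) * (r_{k+1} - Q_k): the exact running
    average; appropriate when the true Q*(a) are stationary."""
    return Q + (r - Q) / (k + 1)

def update_constant_alpha(Q, r, alpha=0.1):
    """Q_{k+1} = Q_k + alpha * (r_{k+1} - Q_k): a recency-weighted
    exponential average; better when the true Q*(a) drift over time."""
    return Q + alpha * (r - Q)
```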

Tracking nonstationary problems

The rule for nonstationary problems:

  Q_{k+1}(a) = Q_k(a) + α [r_{k+1} - Q_k(a)]
  Q_k(a) = (1-α)^k Q_0(a) + Σ_{i=1}^{k} α (1-α)^{k-i} r_i

i.e., a recency-weighted exponential average.

Optimistic Initial Values

All of the methods above are biased by the initial values, although in the one we just looked at the dependence on Q_0 clearly decreases over time. This is bad: more parameters to pick. This is good: prior knowledge can be incorporated into the initial Q-values.

One way to bias the model is to use optimistic initial values. Suppose instead of 0 for the n-armed bandit, we used +5. Now everything looks good, especially to a greedy method, so this encourages exploration. This looks good, but why does the greedy method oscillate in the beginning? Pause for effect...
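A tiny demonstration of the effect, reusing the earlier bandit assumptions (the +5 follows the slide; the constant step size is an assumption here). Each sampled reward pulls the chosen arm's inflated estimate down toward its true value, so a purely greedy learner keeps switching arms until every arm has been "disappointed", which is exactly the early oscillation the slide asks about:

```python
import random

def greedy_with_optimism(q_star, plays=200, q0=5.0, alpha=0.1):
    """Purely greedy selection with optimistic initial estimates Q_0 = +5.
    Exploration happens anyway, because every early reward lowers the
    chosen arm's estimate below some other still-optimistic arm."""
    n = len(q_star)
    Q = [q0] * n
    for _ in range(plays):
        a = max(range(n), key=lambda i: Q[i])   # always greedy
        r = random.gauss(q_star[a], 1.0)
        Q[a] += alpha * (r - Q[a])              # constant step size
    return Q

Q = greedy_with_optimism([random.gauss(0.0, 1.0) for _ in range(10)])
```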

Chapter 3: The Reinforcement Learning Problem

Objectives of what I will talk about from this chapter:
- describe the RL problem we will be studying for the remainder of the course
- present an idealized form of the RL problem for which we have precise theoretical results
- introduce key components of the mathematics: value functions and Bellman equations
- describe trade-offs between applicability and mathematical tractability

The Agent-Environment Interface

Agent and environment interact at discrete time steps t = 0, 1, 2, …
- Agent observes state at step t: s_t ∈ S
- produces action at step t: a_t ∈ A(s_t)
- gets resulting reward: r_{t+1} ∈ R
- and resulting next state: s_{t+1}
The interaction produces the trajectory s_t, a_t, r_{t+1}, s_{t+1}, a_{t+1}, r_{t+2}, s_{t+2}, a_{t+2}, r_{t+3}, … (see the loop sketch at the end of this page).

The Agent Learns a Policy

Policy at step t, π_t: a mapping from states to action probabilities; π_t(s,a) = probability that a_t = a when s_t = s. Reinforcement learning methods specify how the agent changes its policy as a result of experience. Roughly, the agent's goal is to get as much reward as it can over the long run.

Getting the Degree of Abstraction Right

- Time steps need not refer to fixed intervals of real time.
- Actions can be low level (e.g., voltages to motors), or high level (e.g., accept a job offer), mental (e.g., shift in focus of attention), etc.
- States can be low-level sensations, or they can be abstract, symbolic, based on memory, or subjective (e.g., the state of being "surprised" or "lost").
- An RL agent is not necessarily like a whole animal or robot.
- Rewards are in the agent's environment because the agent cannot change them arbitrarily; otherwise, it could simply reward itself and call it a day.
- The environment is not necessarily unknown to the agent, only incompletely controllable.
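The loop sketch referenced above; env_step and policy are hypothetical callables standing in for the environment's dynamics and the agent's policy:

```python
def interaction_loop(env_step, policy, s0, steps=100):
    """The agent-environment interface as a loop: at each step t the
    agent sees s_t, picks a_t from pi(s_t), and the environment returns
    r_{t+1} and s_{t+1}. Returns the (s, a, r) trajectory."""
    s = s0
    trajectory = []
    for t in range(steps):
        a = policy(s)                # a_t, drawn from pi_t(s, .)
        r, s_next = env_step(s, a)   # environment yields r_{t+1}, s_{t+1}
        trajectory.append((s, a, r))
        s = s_next
    return trajectory
```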

Goals and Rewards

Is a scalar reward signal an adequate notion of a goal? Maybe not, but it is surprisingly flexible.
- A goal should specify what we want to achieve, not how we want to achieve it.
- A goal must be outside the agent's direct control, thus outside the agent.
- The agent must be able to measure success: explicitly, and frequently during its lifespan.

The reward hypothesis

All of what we mean by goals and purposes can be thought of as the maximization of the cumulative sum of a received scalar signal (reward). A sort of null hypothesis. Probably ultimately wrong, but so simple we have to disprove it before considering anything more complicated.

Returns

Suppose the sequence of rewards after step t is r_{t+1}, r_{t+2}, r_{t+3}, … What do we want to maximize? In general, we want to maximize the expected return, E{R_t}, for each step t.

Episodic tasks: interaction breaks naturally into episodes, e.g., plays of a game, trips through a maze.

  R_t = r_{t+1} + r_{t+2} + … + r_T

where T is a final time step at which a terminal state is reached, ending an episode.

Returns for Continuing Tasks

Continuing tasks: interaction does not have natural episodes. Instead, we use the discounted return:

  R_t = r_{t+1} + γ r_{t+2} + γ² r_{t+3} + … = Σ_{k=0}^{∞} γ^k r_{t+k+1}

where 0 ≤ γ ≤ 1 is the discount rate. This ensures that the expected reward converges. γ near 0: shortsighted; γ near 1: farsighted.
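Both kinds of return are easy to compute from a reward sequence; a sketch:

```python
def episodic_return(rewards):
    """R_t = r_{t+1} + r_{t+2} + ... + r_T for an episodic task."""
    return sum(rewards)

def discounted_return(rewards, gamma=0.9):
    """R_t = sum_k gamma^k * r_{t+k+1} for a continuing task
    (0 <= gamma < 1). Truncates at the end of the given reward list."""
    return sum((gamma ** k) * r for k, r in enumerate(rewards))

# A constant reward of 1 forever approaches 1 / (1 - gamma) = 10:
print(discounted_return([1.0] * 1000, gamma=0.9))  # ~ 10.0
```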

An Example

Avoid failure: the pole falling beyond a critical angle, or the cart hitting the end of the track.
- As an episodic task where the episode ends upon failure: reward = +1 for each step before failure; return = number of steps before failure.
- As a continuing task with discounted return: reward = -1 upon failure, 0 otherwise; return = -γ^k, for k steps before failure.
In either case, return is maximized by avoiding failure for as long as possible.

Another Example

Get to the top of the hill as quickly as possible.
- reward = -1 for each step where not at top of hill
- return = -(number of steps) before reaching top of hill
Return is maximized by minimizing the number of steps to reach the top of the hill.

A Unified Notation

- In episodic tasks, we number the time steps of each episode starting from zero.
- We usually do not have to distinguish between episodes, so we write s_t instead of s_{t,j} for the state at step t of episode j.
- Think of each episode as ending in an absorbing state that always produces reward of zero.
- We can cover all cases by writing

  R_t = Σ_{k=0}^{∞} γ^k r_{t+k+1}

where γ can be 1 only if a zero-reward absorbing state is always reached.

The Markov Property

By "the state" at step t, the book means whatever information is available to the agent at step t about its environment. The state can include immediate sensations, highly processed sensations, and structures built up over time from sequences of sensations. Ideally, a state should summarize past sensations so as to retain all essential information, i.e., it should have the Markov Property:

  Pr{s_{t+1} = s', r_{t+1} = r | s_t, a_t, r_t, s_{t-1}, a_{t-1}, …, r_1, s_0, a_0} = Pr{s_{t+1} = s', r_{t+1} = r | s_t, a_t}

for all s', r, and histories s_t, a_t, r_t, s_{t-1}, a_{t-1}, …, r_1, s_0, a_0.

Markov Decision Processes

If a reinforcement learning task has the Markov Property, it is basically a Markov Decision Process (MDP). If the state and action sets are finite, it is a finite MDP. To define a finite MDP, you need to give:
- state and action sets
- one-step dynamics defined by transition probabilities:

  P^a_{ss'} = Pr{s_{t+1} = s' | s_t = s, a_t = a}   for all s, s' ∈ S, a ∈ A(s)

- expected rewards:

  R^a_{ss'} = E{r_{t+1} | s_t = s, a_t = a, s_{t+1} = s'}   for all s, s' ∈ S, a ∈ A(s)

An Example Finite MDP: Recycling Robot

At each step, the robot has to decide whether it should (1) actively search for a can, (2) wait for someone to bring it a can, or (3) go to home base and recharge. Searching is better but runs down the battery; if it runs out of power while searching, it has to be rescued (which is bad). Decisions are made on the basis of the current energy level: high, low. Reward = number of cans collected.

Recycling Robot MDP

  S = {high, low}
  A(high) = {search, wait}
  A(low) = {search, wait, recharge}
  R^search = expected number of cans while searching
  R^wait = expected number of cans while waiting
  R^search > R^wait

(The sketch at the end of this page encodes this MDP as plain tables.)

Value Functions

The value of a state is the expected return starting from that state; it depends on the agent's policy.

State-value function for policy π:

  V^π(s) = E_π{R_t | s_t = s} = E_π{ Σ_{k=0}^{∞} γ^k r_{t+k+1} | s_t = s }

The value of taking an action in a state under policy π is the expected return starting from that state, taking that action, and thereafter following π.

Action-value function for policy π:

  Q^π(s,a) = E_π{R_t | s_t = s, a_t = a} = E_π{ Σ_{k=0}^{∞} γ^k r_{t+k+1} | s_t = s, a_t = a }
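The table encoding referenced above. The specific probabilities and rewards below are placeholders of my own (the slides leave them symbolic), chosen only to make the tables concrete:

```python
# States, actions, transitions P[(s, a)] -> list of (s_next, probability),
# and expected rewards R[(s, a, s_next)]. Numbers are illustrative only.
S = ["high", "low"]
A = {"high": ["search", "wait"], "low": ["search", "wait", "recharge"]}

P = {
    ("high", "search"):  [("high", 0.7), ("low", 0.3)],
    ("high", "wait"):    [("high", 1.0)],
    ("low", "search"):   [("low", 0.6), ("high", 0.4)],  # 0.4: depleted, rescued
    ("low", "wait"):     [("low", 1.0)],
    ("low", "recharge"): [("high", 1.0)],
}
R = {
    ("high", "search", "high"): 2.0, ("high", "search", "low"): 2.0,
    ("high", "wait", "high"): 1.0,
    ("low", "search", "low"): 2.0, ("low", "search", "high"): -3.0,  # rescue
    ("low", "wait", "low"): 1.0,
    ("low", "recharge", "high"): 0.0,
}
```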

Bellman Equation for a Policy π

The basic idea:

  R_t = r_{t+1} + γ r_{t+2} + γ² r_{t+3} + γ³ r_{t+4} + …
      = r_{t+1} + γ (r_{t+2} + γ r_{t+3} + γ² r_{t+4} + …)
      = r_{t+1} + γ R_{t+1}

So:

  V^π(s) = E_π{R_t | s_t = s} = E_π{r_{t+1} + γ V^π(s_{t+1}) | s_t = s}

Or, without the expectation operator:

  V^π(s) = Σ_a π(s,a) Σ_{s'} P^a_{ss'} [R^a_{ss'} + γ V^π(s')]

More on the Bellman Equation

This is a set of equations (in fact, linear), one for each state. The value function for π is its unique solution. (A small solver sketch appears at the end of this page.) Backup diagrams: one for V^π, one for Q^π.

Gridworld

- Actions: north, south, east, west; deterministic.
- If an action would take the agent off the grid: no move, but reward = -1.
- Other actions produce reward = 0, except actions that move the agent out of the special states A and B as shown.
- State-value function shown for the equiprobable random policy, γ = 0.9.

Golf

- State is ball location.
- Reward of -1 for each stroke until the ball is in the hole.
- Value of a state?
- Actions: putt (use putter), driver (use driver).
- putt succeeds anywhere on the green.
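The solver sketch referenced above: because the Bellman equations are linear, repeatedly sweeping the update converges to V^π (iterative policy evaluation, covered properly in Chapter 4). It assumes the table layout from the recycling-robot sketch, with the policy pi given as a dict keyed by (s, a):

```python
def policy_evaluation(S, A, P, R, pi, gamma=0.9, tol=1e-8):
    """Iterate V(s) <- sum_a pi(s,a) sum_s' P[s,a,s'] * (R[s,a,s'] + gamma*V(s'))
    until the largest change in any state's value falls below tol."""
    V = {s: 0.0 for s in S}
    while True:
        delta = 0.0
        for s in S:
            v = sum(pi.get((s, a), 0.0) *
                    sum(p * (R.get((s, a, s2), 0.0) + gamma * V[s2])
                        for s2, p in P.get((s, a), []))
                    for a in A[s])
            delta = max(delta, abs(v - V[s]))
            V[s] = v
        if delta < tol:
            return V

# e.g., for the recycling robot with an equiprobable policy:
# pi = {(s, a): 1.0 / len(A[s]) for s in S for a in A[s]}
# V = policy_evaluation(S, A, P, R, pi)
```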

Optimal Value Functions

For finite MDPs, policies can be partially ordered:

  π ≥ π'  if and only if  V^π(s) ≥ V^π'(s) for all s ∈ S

There are always one or more policies that are better than or equal to all the others. These are the optimal policies. We denote them all π*. Optimal policies share the same optimal state-value function:

  V*(s) = max_π V^π(s)   for all s ∈ S

Optimal policies also share the same optimal action-value function:

  Q*(s,a) = max_π Q^π(s,a)   for all s ∈ S and a ∈ A(s)

This is the expected return for taking action a in state s and thereafter following an optimal policy.

Optimal Value Function for Golf

We can hit the ball farther with driver than with putter, but with less accuracy. Q*(s, driver) gives the value of using driver first, then using whichever actions are best.

Bellman Optimality Equation for V*

The value of a state under an optimal policy must equal the expected return for the best action from that state:

  V*(s) = max_{a ∈ A(s)} Q*(s,a)
        = max_{a ∈ A(s)} E{r_{t+1} + γ V*(s_{t+1}) | s_t = s, a_t = a}
        = max_{a ∈ A(s)} Σ_{s'} P^a_{ss'} [R^a_{ss'} + γ V*(s')]

V* is the unique solution of this system of nonlinear equations. (The relevant backup diagram appears on the slide.)

Bellman Optimality Equation for Q*

  Q*(s,a) = E{r_{t+1} + γ max_{a'} Q*(s_{t+1}, a') | s_t = s, a_t = a}
          = Σ_{s'} P^a_{ss'} [R^a_{ss'} + γ max_{a'} Q*(s', a')]

Q* is the unique solution of this system of nonlinear equations. (The relevant backup diagram appears on the slide.)
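Turning the Bellman optimality equation for V* into a fixed-point update gives value iteration (also a Chapter 4 method, not something the slides derive here); a sketch using the same table layout as before:

```python
def value_iteration(S, A, P, R, gamma=0.9, tol=1e-8):
    """Iterate V(s) <- max_a sum_s' P[s,a,s'] * (R[s,a,s'] + gamma * V(s'))
    until convergence; the fixed point is V*."""
    V = {s: 0.0 for s in S}
    while True:
        delta = 0.0
        for s in S:
            v = max(sum(p * (R.get((s, a, s2), 0.0) + gamma * V[s2])
                        for s2, p in P[(s, a)])
                    for a in A[s])
            delta = max(delta, abs(v - V[s]))
            V[s] = v
        if delta < tol:
            return V
```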

Why Optimal State-Value Functions are Useful

Any policy that is greedy with respect to V* is an optimal policy. E.g., back to the gridworld: given V*, one-step-ahead search produces the long-term optimal actions.

What About Optimal Action-Value Functions?

Given Q*, the agent does not even have to do a one-step-ahead search:

  π*(s) = argmax_{a ∈ A(s)} Q*(s,a)

(Sketches of both extraction rules follow the summary below.)

Solving the Bellman Optimality Equation

Finding an optimal policy by solving the Bellman Optimality Equation requires the following:
- accurate knowledge of environment dynamics;
- enough space and time to do the computation;
- the Markov Property.
How much space and time do we need? Polynomial in the number of states (via dynamic programming methods; Chapter 4), BUT the number of states is often huge (e.g., backgammon has about 10^20 states). We usually have to settle for approximations. Many RL methods can be understood as approximately solving the Bellman Optimality Equation.

Summary

- Agent-environment interaction: states, actions, rewards
- Policy: stochastic rule for selecting actions
- Return: the function of future rewards the agent tries to maximize
- Episodic and continuing tasks
- Markov Property
- Markov Decision Process: transition probabilities, expected rewards
- Value functions: state-value and action-value functions for a policy; optimal state-value and action-value functions
- Optimal value functions and optimal policies
- Bellman Equations
- The need for approximation
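The two extraction rules promised above, as sketches using the same table layout as before:

```python
def greedy_policy_from_V(S, A, P, R, V, gamma=0.9):
    """One-step-ahead search: in each state, pick the action whose
    expected lookahead value under V* is highest."""
    return {s: max(A[s],
                   key=lambda a: sum(p * (R.get((s, a, s2), 0.0) + gamma * V[s2])
                                     for s2, p in P[(s, a)]))
            for s in S}

def greedy_policy_from_Q(S, A, Q):
    """With Q*, no lookahead at all: pi*(s) = argmax_a Q*(s, a),
    where Q is a dict keyed by (s, a)."""
    return {s: max(A[s], key=lambda a: Q[(s, a)]) for s in S}
```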

END
