Reinforcement Learning


Reinforcement Learning
Ron Parr, CompSci 7, Department of Computer Science, Duke University
With thanks to Kris Hauser for some content

RL Highlights
Everybody likes to learn from experience. Use ML techniques to generalize from relatively small amounts of experience. Some notable successes: Backgammon, Go, flying a helicopter upside down (from Andrew Ng's home page), Atari games. The Sutton & Barto RL book is the 7th most cited CS reference in CiteSeerX.

Comparison w/ Other Kinds of Learning
Learning is often viewed as classification (supervised) or model learning (unsupervised). RL sits between these: the signal is delayed. What's the last thing that happens before an accident? (Image source: Damnsoft 9 at English Wikipedia, CC BY, https://commons.wikimedia.org/w/index.php?curid=85)

Why We Need RL
Where do we get transition probabilities? How do we store them? Big problems have big models; model size is quadratic in state space size. And where do we get the reward function?

RL Framework
Learn by trial and error. No assumptions about the model; no assumptions about the reward function. Assumes: the true state is known at all times, the immediate reward is known, and the discount is known.

What to Learn? (a spectrum from model-free, with less online deliberation, to model-based, with more online deliberation)
- Learn a policy π; online: execute π(s); learning method: learning from demonstration.
- Learn an action-utility function Q(s,a); online: π(s) = arg max_a Q(s,a); learning method: Q-learning, SARSA.
- Learn a value function V; online: π(s) = arg max_a Σ_s' P(s'|s,a) V(s'); learning method: direct utility estimation, TD-learning.
- Learn a model of R and T; online: solve the MDP; learning method: adaptive dynamic programming.
Trade-off: simpler execution at the model-free end; fewer examples needed to learn (?) at the model-based end.

RL for Our Game Show
Problem: we don't know the probability of answering correctly. Solution (image source: Wikipedia page for Who Wants to Be a Millionaire): buy the home version of the game, practice on the home game to refine our strategy, and deploy that strategy when we play the real game.

Model Learning Approach
Learn a model, then solve it. How to learn a model: take action a in state s and observe s'. If you take action a in state s n times and observe s' m times, estimate P(s'|s,a) = m/n. Fill in the transition matrix for each action, compute the average reward for each state, and solve the learned model as an MDP (previous lecture).

Limitations of Model Learning
It partitions learning and solution into two phases, and the model may be large: it is hard to visit every state many times (note: we can't completely get around this problem), model storage is expensive, and model manipulation is expensive.
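As a concrete illustration (not from the slides), here is a minimal sketch of the counting estimator described above, assuming experience arrives as (s, a, r, s') tuples with integer state and action indices; the learned P and R could then be handed to any MDP solver from the previous lecture.

```python
import numpy as np

def estimate_model(transitions, n_states, n_actions):
    """Estimate P(s'|s,a) and R(s) from (s, a, r, s') tuples by counting."""
    counts = np.zeros((n_states, n_actions, n_states))
    reward_sum = np.zeros(n_states)
    reward_count = np.zeros(n_states)
    for s, a, r, s_next in transitions:
        counts[s, a, s_next] += 1
        reward_sum[s] += r
        reward_count[s] += 1
    # P(s'|s,a) = m/n; unvisited (s,a) pairs default to a uniform guess
    totals = counts.sum(axis=2, keepdims=True)
    P = np.where(totals > 0, counts / np.maximum(totals, 1), 1.0 / n_states)
    R = np.where(reward_count > 0, reward_sum / np.maximum(reward_count, 1), 0.0)
    return P, R
```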

First Steps: Passive RL
Observe execution trials of an agent that acts according to some fixed but unobserved policy π. Problem: estimate the value function V^π. [Recall V^π(s) = E[Σ_t γ^t R(S_t)], where S_t is the random variable denoting the state reached at time t, starting from S_0 = s.]

Direct Utility Estimation
[Figure: grid world showing estimated state utilities, with terminal states +1 and -1.]
1. Observe trials τ^(i) = (s_0^(i), a_0^(i), s_1^(i), r_1^(i), ..., a^(i), s_{t_i}^(i), r_{t_i}^(i)) for i = 1, ..., n.
2. For each state s ∈ S:
3. Find all trials τ^(i) that pass through s, say at step k.
4. Compute the subsequent value V_{τ^(i)}(s) = Σ_{t=k}^{t_i} γ^{t-k} r_t^(i).
5. Set V^π(s) to the average of the observed values.
Limitations: clunky, and it learns only when an end state is reached.

Incremental ("Online") Function Learning
Data is streaming into the learner: (x_1, y_1), ..., (x_n, y_n), with y_i = f(x_i). The learner observes x_{n+1} and must make a prediction for the next time step, y_{n+1}. Batch approach: store all data at step n, then use your learner of choice on all data up to time n to predict for time n+1. Can we do this using less memory?
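Returning to the direct utility estimation procedure (steps 1-5 above), here is a minimal sketch of my own, assuming each completed trial is recorded as a list of (state, reward) pairs and that we average every-visit returns:

```python
from collections import defaultdict

def direct_utility_estimation(trials, gamma):
    """Estimate V^pi(s) as the average discounted return observed after
    visiting s, over a set of completed trials (every-visit averaging)."""
    returns = defaultdict(list)
    for trial in trials:              # trial: list of (state, reward) pairs
        G = 0.0
        # Scan backwards so G holds the discounted return from step k onward.
        for state, reward in reversed(trial):
            G = reward + gamma * G
            returns[state].append(G)
    return {s: sum(gs) / len(gs) for s, gs in returns.items()}
```

Under a different convention for where rewards attach (on leaving vs. entering a state) the bookkeeping shifts by one step, but the averaging idea is the same.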

Example: Mean Estimation
y_i = θ + error term (no x's). Current estimate: θ_n = (1/n) Σ_{i=1}^{n} y_i.
[Figure: observations y_1, ..., y_5 with the current estimate θ_5; a new observation y_6 arrives.]
θ_{n+1} = (1/(n+1)) Σ_{i=1}^{n+1} y_i
        = (1/(n+1)) (y_{n+1} + Σ_{i=1}^{n} y_i)
        = (1/(n+1)) (y_{n+1} + n θ_n)
        = θ_n + (1/(n+1)) (y_{n+1} - θ_n)

Example: Mean Estimation (continued)
With the new observation, θ_6 = (5/6) θ_5 + (1/6) y_6.
In incremental form, θ_{n+1} = θ_n + (1/(n+1)) (y_{n+1} - θ_n), so we only need to store n and θ_n.
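A tiny sketch (not from the slides) of this incremental mean update; only n and the current estimate are stored:

```python
class RunningMean:
    """Incremental mean: theta_{n+1} = theta_n + (y_{n+1} - theta_n) / (n+1)."""
    def __init__(self):
        self.n = 0
        self.theta = 0.0

    def update(self, y):
        self.n += 1
        self.theta += (y - self.theta) / self.n
        return self.theta
```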

Learning Rates
In fact, θ_{n+1} = θ_n + α_n (y_{n+1} - θ_n) converges to the mean for any α_n such that α_n → 0 as n → ∞, Σ_n α_n = ∞, and Σ_n α_n² ≤ C < ∞. α_n = O(1/n) does the trick. If α_n is close to 1, the estimate shifts strongly toward recent data; close to 0, and the old estimate is preserved.

Online Implementation
[Figure: grid world showing estimated state utilities, with terminal states +1 and -1.]
1. Store counts N[s] and estimated values V^π(s).
2. After a trial τ, for each state s in the trial:
3. Set N[s] ← N[s] + 1.
4. Adjust value V^π(s) ← V^π(s) + α(N[s]) (V_τ(s) - V^π(s)).
This is simple averaging. Learning is slow, because the Bellman equation is not used to pass knowledge between adjacent states.
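A tiny sketch (my own) of the online update in steps 1-4, applied after a completed trial. Here `trial_values` is assumed to map each state visited in the trial to its observed return V_τ(s), and α(N[s]) = 1/N[s] is the simple-averaging choice mentioned above.

```python
def online_direct_utility_update(V, N, trial_values):
    """Nudge V[s] toward the observed return V_tau(s) with rate 1/N[s]."""
    for s, v_tau in trial_values.items():
        N[s] = N.get(s, 0) + 1
        alpha = 1.0 / N[s]            # alpha(N[s]) = 1/N[s]: simple averaging
        v_old = V.get(s, 0.0)
        V[s] = v_old + alpha * (v_tau - v_old)
    return V, N
```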

Temporal Difference Learning
Online estimation of the mean value over next states: instead of averaging at the level of trajectories, average at the level of states.
1. Store counts N[s] and estimated values V^π(s).
2. For each observed transition (s, r, a, s'):
3. Set N[s] ← N[s] + 1.
4. Adjust value V^π(s) ← V^π(s) + α(N[s]) (r + γ V^π(s') - V^π(s)).

[Figures: a sequence of grid-world snapshots with learning rate α = 0.5, showing the value estimates after the first trajectory, after a second trajectory from start to +1, after a third trajectory from start to +1, after a fourth trajectory where our luck starts to run out, and finally as we recover and reach the goal. The same update rule is applied after every observed transition.]

For any s, the distribution of s' approaches P(s'|s, π(s)). TD uses relationships between adjacent states to adjust utilities toward equilibrium. Unlike direct estimation, it learns before the trial is terminated.
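Here is a minimal sketch (my own, not the lecture's code) of this tabular TD update for passive policy evaluation, assuming transitions arrive as (s, r, a, s') tuples as in the slide and using the count-based learning rate α(N[s]) = 1/N[s]:

```python
from collections import defaultdict

def td_policy_evaluation(experience_stream, gamma):
    """Tabular TD(0): after each observed transition (s, r, a, s'),
    move V(s) toward r + gamma * V(s')."""
    V = defaultdict(float)
    N = defaultdict(int)
    for s, r, a, s_next in experience_stream:   # action a is observed but unused
        N[s] += 1
        alpha = 1.0 / N[s]                       # learning rate alpha(N[s])
        V[s] += alpha * (r + gamma * V[s_next] - V[s])
    return dict(V)
```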

Using TD for Control
Recall value iteration: V_{i+1}(s) = max_a [ R(s,a) + γ Σ_s' P(s'|s,a) V_i(s') ].
Why not pick the maximizing a and then do V(s) ← V(s) + α(N[s]) (r + γ V(s') - V(s)), where s' is the observed next state after taking action a?

What breaks? Action selection: how do we pick a? We would need P(s'|s,a), but the reason we're doing RL is that we don't know it! And even if we magically knew the best action, we can only learn the value of the policy we are following: if the initial guess for V suggests a stupid policy, we'll never learn otherwise.

Q-Values
Learning V is not enough for action selection, because a transition model is needed. Solution: learn Q-values. Q(s,a) is the utility of choosing action a in state s. Shift the Bellman equation:
V(s) = max_a Q(s,a)
Q(s,a) = R(s) + γ Σ_s' P(s'|s,a) max_a' Q(s',a')
So far everything is the same, but what about the learning rule?

Q-learning Update
Recall TD: update V(s) ← V(s) + α(N[s]) (r + γ V(s') - V(s)); but to pick actions we would need P: a ← arg max_a Σ_s' P(s'|s,a) V(s').
Q-Learning: update Q(s,a) ← Q(s,a) + α(N[s,a]) (r + γ max_a' Q(s',a') - Q(s,a)); select action a ← arg max_a f(Q(s,a)), where f allows for exploration.
Key difference: the average over P(s'|s,a) is baked into the Q function. Q-learning is therefore a model-free active learner.

Q-learning vs. TD-learning
TD converges to the value of the policy you are following. Q-learning converges to the values of the optimal policy, independent of whatever policy you follow during learning! Caveats: it converges in the limit, assuming all states are visited infinitely often; in the case of Q-learning, all states and actions must be tried infinitely often. Note: if there is only one action possible in each state, then Q-learning and TD-learning are identical.
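A minimal tabular Q-learning sketch (my own, not the lecture's): it assumes a toy environment with reset() returning a state and step(a) returning (next_state, reward, done), uses ε-greedy exploration in place of the exploration function f above, and uses the count-based learning rate α(N[s,a]) = 1/N[s,a].

```python
import random
from collections import defaultdict

def q_learning(env, actions, episodes, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning with epsilon-greedy exploration."""
    Q = defaultdict(float)        # Q[(s, a)]
    N = defaultdict(int)          # visit counts per (s, a)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            if random.random() < epsilon:
                a = random.choice(actions)                       # explore
            else:
                a = max(actions, key=lambda act: Q[(s, act)])    # exploit
            s_next, r, done = env.step(a)
            N[(s, a)] += 1
            alpha = 1.0 / N[(s, a)]
            target = r if done else r + gamma * max(Q[(s_next, act)] for act in actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s_next
    return Q
```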

Brief Comments on Learning from Demonstration
LfD is a powerful method to convey human expertise to (ro)bots. It is useful for imitating human policies, and less useful for surpassing human ability (but it can smooth out noise in human demos). It has been used, e.g., for acrobatic helicopter flight.

Advanced (but Unavoidable) Topics
Exploration vs. exploitation; value function approximation.

Exploration vs. Exploitation
A greedy strategy purely exploits its current knowledge, and the quality of this knowledge improves only for those states that the agent observes often. A good learner must perform exploration in order to improve its knowledge about states that are not often observed. But pure exploration is useless (and costly) if it is never exploited.

Restaurant Problem
[Image-only slide illustrating the exploration vs. exploitation trade-off.]

Exploration vs. Exploitation in Practice
We can assign an exploration bonus to parts of the world we haven't seen much. In practice, ε-greedy action selection is used most often.
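A small sketch (not from the slides) combining the two ideas above: ε-greedy selection over a Q-table, with an optional count-based exploration bonus for rarely tried actions. The particular bonus form 1/sqrt(n+1) and its scale are my assumptions.

```python
import math
import random

def select_action(Q, N, state, actions, epsilon=0.1, bonus=1.0):
    """Epsilon-greedy selection with a count-based exploration bonus."""
    if random.random() < epsilon:
        return random.choice(actions)          # explore uniformly at random
    def score(a):
        n = N.get((state, a), 0)
        return Q.get((state, a), 0.0) + bonus / math.sqrt(n + 1)
    return max(actions, key=score)             # exploit, nudged toward rare actions
```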

Value Function Representation
The fundamental problem remains unsolved: TD/Q-learning solves the model-learning problem, but large models still have large value functions. It is too expensive to store these functions, and impossible to visit every state in large models. Function approximation: use machine learning methods to generalize and avoid the need to visit every state.

Function Approximation
General problem: learn a function f(s). Options include linear regression, neural networks, and state aggregation (which violates the Markov property). Idea: approximate f(s) with g(s;w), where g is some easily computable function of s and w, and try to find the w that minimizes the error in g.

Linear Regression
Define a set of basis functions (vectors) φ_1(s), φ_2(s), ..., φ_k(s) and approximate f with a weighted combination of these:
g(s;w) = Σ_{j=1}^{k} w_j φ_j(s)
Example, the space of quadratic functions: φ_1(s) = 1, φ_2(s) = s, φ_3(s) = s². Orthogonal projection minimizes the SSE.

Updates with Approximation
Recall the regular TD update: V(s) ← V(s) + α(N[s]) (r + γ V(s') - V(s)).
With function approximation V(s) ≈ V(s;w), the update becomes a vector operation:
w_{i+1} = w_i + α (r + γ V(s';w) - V(s;w)) ∇_w V(s;w)
Neural networks are a special case of this.
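A small sketch (my own) of the quadratic-basis example above for a one-dimensional state s, showing the linear-in-the-weights form g(s;w) = Σ_j w_j φ_j(s):

```python
import numpy as np

def quadratic_features(s):
    """Basis for the space of quadratic functions of a scalar state:
    phi_1(s) = 1, phi_2(s) = s, phi_3(s) = s^2."""
    return np.array([1.0, s, s * s])

def g(s, w):
    """Weighted combination of basis functions: g(s; w) = w . phi(s)."""
    return float(np.dot(w, quadratic_features(s)))
```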

For Linear Value Functions
V(s;w) = Σ_{j=1}^{k} w_j φ_j(s), so the gradient is trivial: ∇_{w_j} V(s;w) = φ_j(s), and the update is trivial for each individual component:
w_j^{i+1} = w_j^i + α (r + γ V(s';w) - V(s;w)) φ_j(s)

Properties of Approximate RL
The exact case (tabular representation) is a special case, and the approach can be combined with Q-learning. Convergence is not guaranteed: policy evaluation with linear function approximation converges if samples are drawn on-policy, but in general convergence is not guaranteed, since we are chasing a moving target and errors can compound. Success has traditionally required very carefully chosen features; DeepMind has recently had success using no feature engineering but lots of training data.
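A minimal sketch (not from the lecture) of on-policy TD policy evaluation with a linear value function, using a feature map such as quadratic_features above. The constant learning rate and the per-transition done flag for terminal states are my assumptions.

```python
import numpy as np

def linear_td_evaluation(experience_stream, features, n_features,
                         gamma=0.99, alpha=0.05):
    """TD(0) with linear function approximation:
    w <- w + alpha * (r + gamma * V(s'; w) - V(s; w)) * phi(s)."""
    w = np.zeros(n_features)
    for s, r, s_next, done in experience_stream:
        phi = features(s)
        v = float(np.dot(w, phi))
        v_next = 0.0 if done else float(np.dot(w, features(s_next)))
        td_error = r + gamma * v_next - v
        w += alpha * td_error * phi      # the component-wise rule, as one vector step
    return w
```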

Conclusions
Reinforcement learning solves an MDP. It converges for exact value function representations and can be combined with approximation methods. Good results require good features and/or lots of data.