Tutorial on Policy Gradient Methods. Jan Peters
1 Tutorial on Policy Gradient Methods Jan Peters
2 Outline 1. Reinforcement Learning 2. Finite Difference vs Likelihood-Ratio Policy Gradients 3. Likelihood-Ratio Policy Gradients 4. Conclusion
3 General Setup. Policy: u = π(x) (deterministic) or u ∼ π(u|x) (stochastic). System: p(x′|x, u). State x ∈ ℝⁿ, action u ∈ ℝᵐ, reward r ∈ ℝ. Goal: Find a policy that maximizes the reward. 1. Reinforcement Learning
4 Goal of RL. What does maximizing your rewards mean?
$$J(\pi) = E\left\{ \frac{1}{T} \sum_{t=1}^{T} r(x_t, u_t, t) \right\}$$
Find a policy that maximizes the rewards! A policy tells you which actions to use for each state! 1. Reinforcement Learning
5 Policy Search vs Value Function Methods.
Value Function View. Critic: Policy Evaluation, $Q(x_t, u_t, t) = E\left\{ \sum_{\tau=t}^{T} r(x_\tau, u_\tau, \tau) \,\middle|\, x_t, u_t \right\}$. Actor: Compute Optimal Policy, $u_t = \pi(x_t, t) = \arg\max_u Q(x_t, u, t)$.
Policy Search View. Critic: Policy Sensitivity, $J(\pi) = E\left\{ \sum_{t=0}^{T} r(x_t, u_t, t) \right\}$. Actor: Policy Improvement, $\pi' = \arg\max_{\pi'} \{ J(\pi') - J(\pi) \}$.
1. Reinforcement Learning
6 Greedy vs Gradients.
Greedy updates: $\pi' = \arg\max_{\theta} E^{\pi_\theta}\{ Q^\pi(x, u) \}$. A small change in $V^\pi$ can cause a large change in the policy $\pi$: a potentially unstable learning process with large policy jumps.
Policy gradient updates: $\theta_{\pi'} = \theta_\pi + \alpha \left.\frac{dJ(\theta)}{d\theta}\right|_{\theta = \theta_\pi}$. A small change in $\theta_\pi$ causes only a small change in $\pi$: a stable learning process with smooth policy improvement.
2. Value Function Methods
7 Outline 1. Reinforcement Learning for Motor Control? 2. Finite Difference vs Likelihood-Ratio Policy Gradients 3. Likelihood-Ratio Policy Gradients 4. Conclusion
8 Why Policy Gradient Methods? Smooth changes in the parameters result in stability. Prior information can be incorporated with ease. Works with incomplete information. The exploration-exploitation dilemma is treated implicitly. Unbiased! Requires far fewer samples. 3. FD vs LR gradients
9 Finite Difference Gradients. Blackbox approach: perturb the parameters θ of your policy by δθ, run the system, and compare the returns J(θ + δθ) and J(θ):
$$\frac{dJ}{d\theta} \approx \frac{J(\theta + \delta\theta) - J(\theta)}{\delta\theta}$$
3. FD vs LR gradients
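For illustration (not part of the original slides), a minimal sketch of this blackbox estimator, assuming a user-supplied `rollout_return(theta)` that runs the policy with parameters θ on the system and returns the estimated return; for stochastic systems each call would itself need to average many rollouts:

```python
import numpy as np

def fd_policy_gradient(rollout_return, theta, delta=1e-2):
    """Forward-difference estimate of dJ/dtheta (sketch).

    rollout_return(theta) is assumed to run the policy with parameters
    theta on the system and return the (average) return J(theta).
    """
    J_ref = rollout_return(theta)
    grad = np.zeros_like(theta)
    for k in range(len(theta)):
        theta_pert = theta.copy()
        theta_pert[k] += delta          # perturb one parameter at a time
        grad[k] = (rollout_return(theta_pert) - J_ref) / delta
    return grad
```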
10 Finite Difference Gradients. Why use finite difference gradients? Only needs a black box! Works with any parameterization and with deterministic policies. Fast to estimate for deterministic systems. State of the art in the simulation community. Why not? Parameter perturbation can destroy your robot. Exploration is hard to include. For stochastic systems it is very slow. 3. FD vs LR gradients
11 Likelihood Ratio Gradients. Whitebox approach: perturb the actions using a stochastic policy u ∼ π(u|x), i.e., send actions with noise into the system p(x′|x, u), and observe the rewards r. 3. FD vs LR gradients
12 Likelihood Ratio Gradients. Whitebox approach: the likelihood ratio trick.
$$\begin{aligned} \frac{d}{d\theta} J(\theta) &= \frac{d}{d\theta} \int_U \pi(u)\, r(u)\, du, &(1)\\ &= \int_U \frac{d\pi(u)}{d\theta}\, r(u)\, du, &(2)\\ &= \int_U \pi(u) \frac{1}{\pi(u)} \frac{d\pi(u)}{d\theta}\, r(u)\, du, &(3)\\ &= \int_U \pi(u) \frac{d \log \pi(u)}{d\theta}\, r(u)\, du, &(4)\\ &= E\left\{ \frac{d \log \pi(u)}{d\theta}\, r(u) \right\} \approx \frac{1}{N} \sum_{i=1}^{N} \frac{d \log \pi(u_i)}{d\theta}\, r(u_i). &(5) \end{aligned}$$
3. FD vs LR gradients
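As a hedged illustration of Eq. (5) (my own sketch, not from the slides), consider a one-dimensional Gaussian policy π(u) = N(u | θ, σ²), for which d log π(u)/dθ = (u − θ)/σ²:

```python
import numpy as np

def lr_gradient(reward, theta, sigma=0.5, n_samples=100):
    """Monte Carlo likelihood-ratio gradient, cf. Eq. (5):
    (1/N) * sum_i  d log pi(u_i)/dtheta * r(u_i),
    for a Gaussian policy pi(u) = N(u | theta, sigma^2)."""
    u = theta + sigma * np.random.randn(n_samples)  # sample actions from pi
    dlogpi = (u - theta) / sigma**2                 # score function
    return np.mean(dlogpi * reward(u))

# Toy usage: the reward r(u) = -(u - 2)^2 drives theta toward 2.
theta = 0.0
for _ in range(200):
    theta += 0.05 * lr_gradient(lambda u: -(u - 2.0) ** 2, theta)
```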
13 Likelihood Ratio Gradients. Why use likelihood ratio gradients? Fastest gradient method! We know the policy derivative, thus they are more sample-efficient than the finite-difference methods of the simulation community. Only the motor commands are perturbed, so the policies remain stable! Why not? Requires a stochastic policy. Injects noise into the system. The theory is much more complex! 3. FD vs LR gradients
14 Outline 1. Reinforcement Learning for Motor Control? 2. Finite Difference vs Likelihood-Ratio Policy Gradients 3. Likelihood-Ratio Policy Gradients 4. Conclusion
15 Goal of RL Revisited. Goal: optimize the expected return
$$J(\theta) = \int_X d^\pi(x) \int_U \pi(u|x)\, r(x,u)\, du\, dx = (1-\gamma)\, E\left\{ \sum_{t=0}^{\infty} \gamma^t r_t \right\},$$
where $d^\pi(x)$ is the state distribution, $\pi(u|x)$ the policy (we can choose it), and $r$ the reward. 4. Likelihood Ratio Gradients
16 Policy Gradient Methods. According to the Policy Gradient Theorem, the gradient of the expected return can be computed as
$$\nabla_\theta J(\theta) = \int_X d^\pi(x) \int_U \nabla_\theta \pi(u|x) \left( Q^\pi(x,u) - b^\pi(x) \right) du\, dx,$$
where the derivative is taken only of the policy, $Q^\pi(x,u)$ is the state-action value function, and $b^\pi(x)$ is an arbitrary baseline function. Problems: high variance, very slow convergence, dependence on the baseline! Originally discovered: Aleksandrov, 1968; Glynn, 1986. Examples: episodic REINFORCE, SRV, GPOMDP. 4. Likelihood Ratio Gradients
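Since the slide names episodic REINFORCE as an example, here is a hedged sketch (an illustration under my own assumptions, not the original algorithm's exact form) using a constant baseline, the mean return; the theorem permits any state-dependent baseline $b^\pi(x)$:

```python
import numpy as np

def reinforce_gradient(trajectories, grad_logpi, gamma=0.99):
    """Episodic REINFORCE with a constant baseline (sketch).
    trajectories: list of rollouts, each a list of (x_t, u_t, r_t).
    grad_logpi(x, u): d log pi(u|x)/dtheta as a numpy vector."""
    returns = np.array([sum(gamma**t * r for t, (_, _, r) in enumerate(tau))
                        for tau in trajectories])
    baseline = returns.mean()        # reduces variance, keeps the estimate unbiased
    grads = [sum(grad_logpi(x, u) for (x, u, _) in tau) for tau in trajectories]
    return np.mean([g * (R - baseline) for g, R in zip(grads, returns)], axis=0)
```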
18 Compatible Function Approximation. The state-action value function can be replaced by a compatible function approximation
$$Q^\pi(x,u) \approx f^\pi_w(x,u) = \left( \frac{d \log \pi(u|x)}{d\theta} \right)^{\!\top} w,$$
i.e., the log-policy derivative times the parameters $w$ of the function approximator, without biasing the gradient. Thus, the policy gradient becomes
$$\nabla_\theta J(\theta) = \int_X d^\pi(x) \int_U \nabla_\theta \pi(u|x) \left( f^\pi_w(x,u) - b^\pi(x) \right) du\, dx.$$
(Sutton et al., 2000; Konda & Tsitsiklis, 2000) 4. Likelihood Ratio Gradients
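For concreteness, a worked example of my own (an illustrative assumption, not from the slides): for a linear Gaussian policy with fixed exploration σ, the compatible features follow directly from the log-policy derivative:

```latex
% Illustrative example: compatible function approximation for a
% linear Gaussian policy (fixed sigma assumed).
\pi(u \mid x) = \mathcal{N}\!\left(u \mid \theta^{\top} x,\ \sigma^{2}\right)
\;\Rightarrow\;
\nabla_{\theta} \log \pi(u \mid x) = \frac{(u - \theta^{\top} x)\, x}{\sigma^{2}},
\qquad
f^{\pi}_{w}(x, u) = \frac{(u - \theta^{\top} x)\, x^{\top} w}{\sigma^{2}}.
```

Note that this $f^\pi_w$ is mean zero under the policy, since $E\{u - \theta^\top x\} = 0$; this is why, as shown later, it can only represent the advantage function.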
19 All-Action Gradient. By integrating over all possible actions in a state, the baseline can be integrated out, and the gradient becomes
$$\begin{aligned} \nabla_\theta J(\theta) &= \int_X d^\pi(x) \int_U \nabla_\theta \pi(u|x) \left( f^\pi_w(x,u) - b(x) \right) du\, dx,\\ &= \int_X d^\pi(x) \int_U \pi(u|x)\, \nabla_\theta \log \pi(u|x)\, \nabla_\theta \log \pi(u|x)^\top w\, du\, dx,\\ &= F(\theta)\, w, \end{aligned}$$
with the all-action matrix $F(\theta)$ and parameters $w$. (Peters et al., 2003) 4. Likelihood Ratio Gradients
20 Natural Gradients. A more efficient gradient in learning problems is the natural gradient (Amari, 1998):
$$\tilde{\nabla}_\theta J(\theta) = G^{-1}(\theta)\, \nabla_\theta J(\theta),$$
the inverse of the Fisher information matrix times the 'vanilla' gradient, where
$$G(\theta) = \int_X d^\pi(x) \int_U \pi(u|x)\, \nabla_\theta \log\left( d^\pi(x)\pi(u|x) \right) \nabla_\theta \log\left( d^\pi(x)\pi(u|x) \right)^\top du\, dx,$$
$$\nabla_\theta J(\theta) = \int_X d^\pi(x) \int_U \nabla_\theta \pi(u|x) \left( Q^\pi(x,u) - b^\pi(x) \right) du\, dx.$$
4. Likelihood Ratio Gradients
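A toy illustration of the effect (my own, under the assumption of a one-dimensional Gaussian policy with fixed σ): the Fisher information of N(u | θ, σ²) with respect to θ is G = 1/σ², so the natural gradient rescales the vanilla gradient by σ² and removes the dependence of the step on the exploration magnitude:

```python
import numpy as np

def vanilla_and_natural_gradient(reward, theta, sigma, n_samples=10000):
    """For pi(u) = N(u | theta, sigma^2) with fixed sigma, the Fisher
    information w.r.t. theta is G = 1/sigma^2, so the natural gradient
    is simply sigma^2 times the vanilla likelihood-ratio gradient."""
    u = theta + sigma * np.random.randn(n_samples)
    vanilla = np.mean((u - theta) / sigma**2 * reward(u))
    natural = sigma**2 * vanilla            # G^{-1} * vanilla gradient
    return vanilla, natural
```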
21 All Action = Fisher Information! So how does the all-action matrix
$$F(\theta) = \int_X d^\pi(x) \int_U \pi(u|x)\, \nabla_\theta \log \pi(u|x)\, \nabla_\theta \log \pi(u|x)^\top du\, dx$$
relate to the Fisher information matrix
$$G(\theta) = \int_X d^\pi(x) \int_U \pi(u|x)\, \nabla_\theta \log\left( d^\pi(x)\pi(u|x) \right) \nabla_\theta \log\left( d^\pi(x)\pi(u|x) \right)^\top du\, dx\,?$$
While Kakade (2002) suggested that $F$ is an average of point Fisher information matrices, we could prove that $F = G$. (Peters et al., 2003; 2005; Bagnell et al., 2003) 4. Likelihood Ratio Gradients
22 Natural Gradient Updates. As $G = F$, the natural gradient simplifies to
$$\tilde{\nabla}_\theta J(\theta) = G^{-1}(\theta)\, \nabla_\theta J(\theta) = G^{-1}(\theta)\, F(\theta)\, w = w,$$
and the policy parameter update becomes $\theta_{t+1} = \theta_t + \alpha_t w_t$. Important: the estimation of the gradient has been reduced to estimating the compatible function approximation / critic! (Kakade, 2002; Peters et al., 2003, 2005; Bagnell & Schneider, 2003) 4. Likelihood Ratio Gradients
23 Natural Policy Gradients: two examples, linear quadratic regulation and a two-state problem. (Figures omitted: the two-state MDP has transitions labeled with rewards r ∈ {0, 1, 2} under actions u ∈ {0, 1}.) 4. Likelihood Ratio Gradients
24 Compatible Function Approximation. To obtain the natural gradient $\tilde{\nabla}_\theta J(\theta) = w$, we need to estimate the compatible function approximation
$$f^\pi_w(x,u) = \left( \frac{d \log \pi(u|x)}{d\theta} \right)^{\!\top} w.$$
This function approximation is mean zero! Therefore it can ONLY represent the advantage function
$$f^\pi_w(x,u) = Q^\pi(x,u) - V^\pi(x) = A^\pi(x,u).$$
4. Likelihood Ratio Gradients
25 Compatible Function Approximation. The advantage function $f^\pi_w(x,u) = Q^\pi(x,u) - V^\pi(x) = A^\pi(x,u)$ is very different from the value functions! (Figure omitted: surface plots of the value function $Q^\pi(x,u)$ and the advantage function $A^\pi(x,u)$ over states $x$ and actions $u$.) ...and we cannot directly do temporal difference learning on this representation! 4. Likelihood Ratio Gradients
26 Natural Actor-Critic. We cannot do TD learning with $f^\pi_w(x,u) = Q^\pi(x,u) - V^\pi(x) = A^\pi(x,u)$ alone. But when we add further basis function approximators into the Bellman equation, $V^\pi(x) = \phi(x)^\top v$, so that
$$V^\pi(x_t) + \nabla_\theta \log \pi(u_t|x_t)^\top w = r(x_t, u_t) + \gamma V^\pi(x_{t+1}) + \epsilon_t,$$
we get a linear regression problem which can be solved with the LSTD($\lambda$) algorithm (Boyan, 1996) in one step! 4. Likelihood Ratio Gradients
27 Natural Actor-Critic. Critic: LSTD-Q($\lambda$) evaluation, i.e., Boyan's LSTD($\lambda$) with the new basis functions
$$\tilde{\Gamma}_t = [\phi(x_{t+1})^\top, \mathbf{0}^\top]^\top, \qquad \tilde{\Phi}_t = [\phi(x_t)^\top, \nabla_\theta \log \pi(u_t|x_t)^\top]^\top,$$
$$z_{t+1} = \lambda z_t + \tilde{\Phi}_t, \qquad A_{t+1} = A_t + z_{t+1} (\tilde{\Phi}_t - \gamma \tilde{\Gamma}_t)^\top, \qquad b_{t+1} = b_t + z_{t+1} r_{t+1},$$
$$[w_{t+1}^\top, v_{t+1}^\top]^\top = A_{t+1}^{-1} b_{t+1}.$$
Actor: natural policy gradient improvement, $\theta_{t+1} = \theta_t + \alpha_t w_t$. 4. Likelihood Ratio Gradients
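A minimal sketch of this critic recursion (my own illustration, assuming enough data for $A$ to be invertible; hypothetical helper names):

```python
import numpy as np

def lstd_q_critic(transitions, grad_logpi, phi, gamma=0.95, lam=0.9):
    """Sketch of the LSTD-Q(lambda) critic on this slide.

    transitions: list of (x_t, u_t, r_{t+1}, x_{t+1})
    grad_logpi(x, u): compatible features, d log pi(u|x)/dtheta
    phi(x): additional state basis functions for V(x) = phi(x)^T v
    """
    x0, u0 = transitions[0][0], transitions[0][1]
    n_v, n_w = len(phi(x0)), len(grad_logpi(x0, u0))
    n = n_v + n_w
    A, b, z = np.zeros((n, n)), np.zeros(n), np.zeros(n)
    for (x, u, r, x_next) in transitions:
        Phi_t = np.concatenate([phi(x), grad_logpi(x, u)])    # tilde-Phi_t
        Gam_t = np.concatenate([phi(x_next), np.zeros(n_w)])  # tilde-Gamma_t
        z = lam * z + Phi_t                                   # eligibility trace
        A += np.outer(z, Phi_t - gamma * Gam_t)
        b += z * r
    sol = np.linalg.solve(A, b)          # solution follows the feature order [v; w]
    v, w = sol[:n_v], sol[n_v:]
    return w, v
```

The actor then updates θ ← θ + α w, as on the slide.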
28 Algorithms Derivable from this Framework.
A Gibbs policy $\pi(u_t|x_t) = e^{\theta_{xu}} / \sum_b e^{\theta_{xb}}$ with additional basis functions $\phi(x) = [0, \ldots, 0, 1, 0, \ldots, 0]^\top$ in the NAC framework yields Sutton et al.'s (1983) actor-critic.
A linear Gaussian policy $\pi(u|x) = \mathcal{N}(u \mid \theta_{\text{gain}}^\top x, \theta_{\text{explore}})$ with additional basis functions $\phi(x) = x^\top P x + p$ in the NAC framework with learning rate $\alpha_i = 1/J(\theta_i)$ yields Bradtke & Barto's (1993) Q-learning for LQR.
4. Likelihood Ratio Gradients
29 Episodic Natural Actor-Critic. ...but in many cases we don't have any good additional basis functions for $V^\pi(x) = \phi(x)^\top v$! In this case, we can sum up the advantages along a trajectory and obtain one data point for a linear regression problem:
$$\underbrace{V^\pi(x_0)}_{J} + \underbrace{\left( \sum_{t=0}^{T} \nabla_\theta \log \pi(u_t|x_t) \right)^{\!\top}}_{\varphi_i^\top} w = \underbrace{\sum_{t=0}^{T} \gamma^t r(x_t, u_t)}_{R_i} + \underbrace{\gamma^{T+1} V^\pi(x_{T+1})}_{\approx\, 0},$$
...and an additional basis function of 1 suffices! 4. Likelihood Ratio Gradients
30 Episodic Natural Actor-Critic. Critic: episodic evaluation using the sufficient statistics
$$\Phi = \begin{bmatrix} \varphi_1 & \varphi_2 & \cdots & \varphi_N \\ 1 & 1 & \cdots & 1 \end{bmatrix}^{\top}, \qquad R = [R_1, R_2, \ldots, R_N]^\top,$$
solved by linear regression:
$$\begin{bmatrix} w \\ J \end{bmatrix} = (\Phi^\top \Phi)^{-1} \Phi^\top R.$$
Actor: natural policy gradient improvement, $\theta_{t+1} = \theta_t + \alpha_t w_t$. 4. Likelihood Ratio Gradients
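A minimal sketch of this episodic regression (my own illustration; `grad_logpi` is a hypothetical helper returning the score vector):

```python
import numpy as np

def episodic_nac_gradient(rollouts, grad_logpi, gamma=0.99):
    """Episodic Natural Actor-Critic regression (sketch).
    rollouts: list of trajectories, each a list of (x_t, u_t, r_t).
    Builds one regression row per trajectory:
        R_i = phi_i^T w + J,  with an extra constant basis of 1 for J."""
    Phi_rows, R = [], []
    for traj in rollouts:
        psi = sum(grad_logpi(x, u) for (x, u, _) in traj)   # sum of score vectors
        ret = sum(gamma**t * r for t, (_, _, r) in enumerate(traj))
        Phi_rows.append(np.concatenate([psi, [1.0]]))
        R.append(ret)
    sol, *_ = np.linalg.lstsq(np.asarray(Phi_rows), np.asarray(R), rcond=None)
    w, J = sol[:-1], sol[-1]        # natural gradient w and baseline return J
    return w, J
```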
31 Improving Motor Primitives. Ijspeert et al. (2002) suggested a nonlinear dynamics approach for motor primitives in imitation learning.
Trajectory plan dynamics:
$$\dot{z} = \alpha_z \left( \beta_z (g - y) - z \right), \qquad \dot{y} = \alpha_y \left( f(x, v) + z \right),$$
canonical dynamics:
$$\dot{v} = \alpha_v \left( \beta_v (g - x) - v \right), \qquad \dot{x} = \alpha_x v,$$
local linear model approximation:
$$f(x, v) = \frac{\sum_{i=1}^{k} w_i b_i v}{\sum_{i=1}^{k} w_i}, \qquad w_i = \exp\left( -\tfrac{1}{2} d_i (\tilde{x} - c_i)^2 \right), \qquad \tilde{x} = \frac{x - x_0}{g - x_0}.$$
The parameters $b$ can also be improved by reinforcement learning. 4. Evaluations
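As a rough sketch of how these primitives generate a trajectory (my own illustration; the gain values and the start $x_0 = y_0$ of the canonical state are assumptions, not given in the tutorial), one could integrate the dynamics with Euler steps:

```python
import numpy as np

def dmp_rollout(b, c, d, g, y0, dt=0.001, T=1.0,
                a_z=25.0, b_z=6.25, a_y=1.0, a_v=25.0, b_v=6.25, a_x=1.0):
    """Euler integration of the motor-primitive dynamics above (sketch;
    gains are illustrative). b, c, d: weights, centers and bandwidths
    of the k basis functions; canonical state assumed to start at y0."""
    y, z, x, v = y0, 0.0, y0, 0.0
    trajectory = []
    for _ in range(int(T / dt)):
        x_til = (x - y0) / (g - y0)                    # normalized phase x-tilde
        w = np.exp(-0.5 * d * (x_til - c) ** 2)        # basis activations w_i
        f = v * np.dot(w, b) / (np.sum(w) + 1e-10)     # forcing term f(x, v)
        z += dt * a_z * (b_z * (g - y) - z)            # trajectory plan dynamics
        y += dt * a_y * (f + z)
        v += dt * a_v * (b_v * (g - x) - v)            # canonical dynamics
        x += dt * a_x * v
        trajectory.append(y)
    return np.asarray(trajectory)

# Toy usage: 10 basis functions, reach from y0 = 0 to goal g = 1.
traj = dmp_rollout(b=np.zeros(10), c=np.linspace(0, 1, 10),
                   d=np.full(10, 100.0), g=1.0, y0=0.0)
```

Learning would then adjust the weights b, e.g., with the episodic Natural Actor-Critic above.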
32 Improving Motor Primitives. (Figures omitted: learning curves on a log scale for a minimum-motor-command policy and a two-goals policy.) 4. Evaluations
33 Outline 1. Reinforcement Learning for Motor Control? 2. Finite Difference vs Likelihood-Ratio Policy Gradients 3. Likelihood-Ratio Policy Gradients 4. Conclusion
34 Conclusions. If you can explore your complete state-action space sufficiently... use value function methods. If you have access to the policy and its derivatives... use likelihood ratio policy gradient methods. If you have good additional basis functions... use the Natural Actor-Critic. If not... use the Episodic Natural Actor-Critic. If you want to explore a problem quickly... use vanilla likelihood ratio policy gradient methods. If you can only access your parameters... use finite difference policy gradient methods. 6. Conclusion
35 Projects... Learning Motor Primitives with Reward-Weighted Regression: this will be a nearly new method... very enthusiastic people needed! Applying Policy Gradients (methods of your choice!) to Oscillator Optimization: here we can create several projects if you like :) 6. Conclusion