Reinforcement Learning. Donglin Zeng, Department of Biostatistics, University of North Carolina
1 Reinforcement Learning
2 Introduction
3 Introduction Unsupervised learning has no outcome (no feedback). Supervised learning has an outcome, so we know what to predict. Reinforcement learning is in between: it has no explicit supervision, so it uses a reward system to learn the feature-outcome relationship. The crucial advantage of reinforcement learning is its non-greedy nature: we do not aim to improve performance in the short term but to optimize long-term achievement.
4 RL terminology Reinforcement learning is a dynamic process in which, at each step, a decision rule or policy is updated based on new data and the reward system. Terminology used in reinforcement learning: Agent: whoever carries out the learned decisions during the process (the robot in AI). Action ($A$): a decision to be taken during the process. State ($S$): environment variables that may interact with the action. Reward ($R$): a value system to evaluate an action given the state. Note that $(A, S, R)$ is time-step dependent, so we write $(A_t, S_t, R_t)$ to reflect time-step $t$.
5 Reinforcement learning diagram
6 Maze example
7 Maze example: continue
8 Maze example: continue
9 Mountain car problem
10 RL Framework
11 RL Notation At time-step $t$, the agent observes a state $S_t$ from a state space $\mathcal{S}_t$ and selects an action $A_t$ from an action space $\mathcal{A}_t$. Both action and state result in a transition to a new state $S_{t+1}$. Given $(A_t, S_t, S_{t+1})$, the agent receives an immediate reward $R_t = r_t(S_t, A_t, S_{t+1}) \in \mathbb{R}$, where $r_t(\cdot, \cdot, \cdot)$ is called the immediate reward function.
12 RL mathematical formulation At time $t$, we assume a transition probability function from $(S_t = s, A_t = a)$ to $S_{t+1} = s'$:
$$p_t(s' \mid s, a) \ge 0, \qquad \int_{s'} p_t(s' \mid s, a)\, ds' = 1.$$
We also assume $A_t$ given $S_t$ follows a probability distribution:
$$\pi_t(a \mid s) \ge 0, \qquad \int_a \pi_t(a \mid s)\, da = 1.$$
A trajectory (training sample) $(s_1, a_1, s_2, \ldots, s_T, a_T, s_{T+1})$ is generated as follows: start from an initial state $s_1$ drawn from a probability distribution $p(s)$; then for $t = 1, 2, \ldots, T$ ($T$ is the total number of steps), (a) $a_t$ is chosen from $\pi_t(\cdot \mid s_t)$, and (b) the next state $s_{t+1}$ is drawn from $p_t(\cdot \mid s_t, a_t)$. The problem is called finite horizon if $T < \infty$ and infinite horizon if $T = \infty$.
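A minimal sketch of this generative scheme, using a small made-up finite MDP (the state/action counts, transition matrix, and uniform policy below are illustrative, not from the slides):

```python
# Sample a trajectory (s_1, a_1, s_2, ..., s_T, a_T, s_{T+1}) from a toy MDP.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, T = 3, 2, 5
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a] = p(.|s, a)
p0 = np.ones(n_states) / n_states                                  # initial distribution p(s)

def pi_t(s):
    """A uniform random policy pi_t(a|s); replace with any conditional distribution."""
    return np.ones(n_actions) / n_actions

def sample_trajectory():
    s = rng.choice(n_states, p=p0)                # s_1 ~ p(s)
    traj = []
    for t in range(T):
        a = rng.choice(n_actions, p=pi_t(s))      # a_t ~ pi_t(.|s_t)
        s_next = rng.choice(n_states, p=P[s, a])  # s_{t+1} ~ p_t(.|s_t, a_t)
        traj.append((s, a, s_next))
        s = s_next
    return traj

print(sample_trajectory())
```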
13 Goal of RL Define the return at time $t$ as
$$\sum_{j=t}^{T} \gamma^{j-t}\, r_j(S_j, A_j, S_{j+1}),$$
where $\gamma \in [0, 1)$ is called the discount factor (discounting long trajectories). An action policy, $\pi = (\pi_1, \ldots, \pi_T)$, is a sequence of probability distribution functions, where $\pi_t$ is a probability distribution for $A_t$ given $S_t$. The goal of RL is to learn the optimal action decision, the policy $\pi^* = (\pi_1^*, \pi_2^*, \ldots, \pi_T^*)$, to maximize the expected return
$$E_\pi\Big[\sum_{j=1}^{T} \gamma^{j-1}\, r_j(S_j, A_j, S_{j+1})\Big],$$
where $E_\pi(\cdot)$ means $A_t \mid S_t \sim \pi_t(\cdot \mid S_t)$.
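As a quick worked example, here is how the discounted return at time $t$ would be computed from a recorded trajectory (the reward function and the trajectory itself are illustrative):

```python
# Compute sum_{j=t}^{T} gamma^(j-t) * r(s_j, a_j, s_{j+1}); t is 0-indexed here.
def reward(s, a, s_next):
    """Illustrative reward: 1 when the next state is state 2, else 0."""
    return 1.0 if s_next == 2 else 0.0

def discounted_return(traj, t, gamma=0.9):
    return sum(gamma ** (j - t) * reward(*traj[j]) for j in range(t, len(traj)))

traj = [(0, 1, 2), (2, 0, 1), (1, 1, 2)]   # (s_t, a_t, s_{t+1}) triples
print(discounted_return(traj, t=0))        # 1 + 0.9*0 + 0.81*1 = 1.81
```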
14 Optimal policy RL aims to find the best action decision rules such that the average long-term reward is maximized when those rules are implemented. Note: $\pi^*$ is a function of states, so for any individual we only know what the action should be at time $t$ after observing its state at time $t$. This is related to so-called adaptive or dynamic decision making.
15 How is supervised learning framed in the RL context? We can imagine $S_t$ to be all data (both features and outcomes) collected by step $t$. Then $A_t$ is the prediction rule chosen from a class of prediction functions based on $S_t$ (it need not be a perfect prediction function; it can even be a random prediction), so $\pi_t$ is the probabilistic selection of which prediction function to use at step $t$. Based on $(S_t, A_t)$, the next state $S_{t+1}$ can be $S_t$ with additionally collected data, or $S_t$ with individual errors, or just $S_t$. $R_t$ is the prediction error evaluated on the data. The goal is to learn the best prediction rule, and RL methods can help!
16 State-Action and State Value Functions
17 Two important concepts in RL State-action value function (SAV): the expected return increment at time $t$ given state $S_t = s$ and action $A_t = a$:
$$Q_t^\pi(s, a) = E_\pi\Big[\sum_{j=t}^{T} \gamma^{j-t}\, r_j(S_j, A_j, S_{j+1}) \,\Big|\, S_t = s, A_t = a\Big].$$
$Q_t^*(s, a) = \max_\pi Q_t^\pi(s, a)$ is the optimal expected return at time $t$. State value function (SV): the expected return increment at time $t$ given state $S_t = s$:
$$V_t^\pi(s) = E_\pi\Big[\sum_{j=t}^{T} \gamma^{j-t}\, r_j(S_j, A_j, S_{j+1}) \,\Big|\, S_t = s\Big].$$
Similarly, $V_t^*(s) = \max_\pi V_t^\pi(s)$. Clearly, $V_t^\pi(s) = \int_a Q_t^\pi(s, a)\, \pi_t(a \mid s)\, da$.
18 Bellman equations The Bellman equation for the SV:
$$V_t^\pi(s) = E_\pi\big[r_t(s, A_t, S_{t+1}) + \gamma V_{t+1}^\pi(S_{t+1}) \,\big|\, S_t = s\big] = \int_{s'}\int_a \big[r_t(s, a, s') + \gamma V_{t+1}^\pi(s')\big]\, \pi_t(a \mid s)\, p_t(s' \mid s, a)\, da\, ds'.$$
The Bellman equation for the SAV:
$$Q_t^\pi(s, a) = E_\pi\big[r_t(s, a, S_{t+1}) + \gamma Q_{t+1}^\pi(S_{t+1}, A_{t+1}) \,\big|\, S_t = s, A_t = a\big] = \int_{s'}\int_{a'} \big[r_t(s, a, s') + \gamma Q_{t+1}^\pi(s', a')\big]\, \pi_{t+1}(a' \mid s')\, p_t(s' \mid s, a)\, da'\, ds'.$$
19 Optimal policy learning: Bellman equation The Bellman equation for the optimal policy:
$$V_t^*(s) = \max_a Q_t^*(s, a),$$
$$Q_t^*(s, a) = E\big[r_t(s, a, S_{t+1}) + \gamma V_{t+1}^*(S_{t+1}) \,\big|\, S_t = s, A_t = a\big],$$
$$\pi_t^*(s) = I\big\{a = \operatorname{argmax}_a Q_t^*(s, a)\big\}.$$
20 Reinforcement Learning for Finite Horizon
21 Value function given $\pi$ For finite $T$, the Bellman equations suggest a backward procedure to evaluate the value function associated with a particular policy. Start from time $T$: we can learn
$$Q_T^\pi(s, a) = E[R_T \mid S_T = s, A_T = a]\, I(a \sim \pi(\cdot \mid s)).$$
At time $T-1$, we learn $Q_{T-1}^\pi(s, a)$ as
$$E\big[R_{T-1} + \gamma E_\pi[Q_T^\pi(S_T, A_T) \mid S_T] \,\big|\, S_{T-1} = s, A_{T-1} = a\big]\, I(a \sim \pi(\cdot \mid s)).$$
We continue this learning backwards until time 1. Note that each step can be estimated using parametric, nonparametric, or machine learning methods.
22 Optimal policy learning for finite horizon (Q-learning) Start from time $T$: we can learn $Q_T^*(s, a) = E[R_T \mid S_T = s, A_T = a]$ and take $\pi_T^*(s)$ to put probability 1 at $a = \operatorname{argmax}_a Q_T^*(s, a)$. At time $T-1$, we learn $Q_{T-1}^*(s, a)$ as
$$E\big[R_{T-1} + \gamma \max_{a'} Q_T^*(S_T, a') \,\big|\, S_{T-1} = s, A_{T-1} = a\big],$$
and obtain $\pi_{T-1}^*$ as the policy with probability 1 at $a = \operatorname{argmax}_a Q_{T-1}^*(s, a)$. We perform the same learning procedure backwards until time 1 to learn all the optimal policies.
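A minimal tabular sketch of this backward recursion, assuming a small MDP with a known transition tensor and reward function (both made up here; in practice each conditional expectation would be estimated from data):

```python
# Backward induction for the finite-horizon optimal Q-function and policy.
import numpy as np

n_s, n_a, T, gamma = 3, 2, 4, 0.9
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(n_s), size=(n_s, n_a))  # P[s, a, s'] = p(s'|s, a)
R = rng.normal(size=(n_s, n_a, n_s))              # R[s, a, s'] = r(s, a, s')

Q = np.zeros((T + 1, n_s, n_a))                   # Q[t] holds Q*_t; Q[T] = 0 after the horizon
pi_star = np.zeros((T, n_s), dtype=int)

for t in reversed(range(T)):                      # t = T-1, ..., 0 (0-indexed time)
    future = Q[t + 1].max(axis=1)                 # max_a' Q*_{t+1}(s', a')
    # E[R_t | s, a] + gamma * E[max_a' Q*_{t+1}(S_{t+1}, a') | s, a]
    Q[t] = np.einsum('ijk,ijk->ij', P, R) + gamma * P @ future
    pi_star[t] = Q[t].argmax(axis=1)              # optimal action per state at time t

print(pi_star)
```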
23 Statistical models for the state-action value function Parametric/semiparametric models for $Q^\pi(s, a)$ are commonly used. We assume
$$Q^\pi(s, a) = \sum_{b=1}^{B} \theta_b\, \phi_b(s, a),$$
where $\phi_b(s, a)$ is a sequence of basis functions. In other words, the policy is indirectly represented by the $\theta_b$'s. From the Bellman equation, we note that the conditional mean of $R_t = r(S_t, A_t, S_{t+1})$ given $(S_t, A_t)$ is
$$Q^\pi(S_t, A_t) - \gamma E_\pi[Q^\pi(S_{t+1}, A_{t+1}) \mid S_t, A_t] = \theta^T \psi(S_t, A_t)$$
under policy $\pi$, where $\psi(s, a) = \phi(s, a) - \gamma E_\pi[\phi(S_{t+1}, A_{t+1}) \mid S_t = s, A_t = a]$.
24 Numerical implementation Suppose we have data from $n$ subjects, each with a training sample of $T$ steps (or $n$ training $T$-step samples from the same agent): $(S_{i1}, A_{i1}, S_{i2}, \ldots, S_{iT}, A_{iT}, S_{i,T+1})$. We estimate $\psi_b(s, a)$ by
$$\hat\psi_b(s, a) = \phi_b(s, a) - \gamma\, \frac{\sum_{i=1}^{n}\sum_{t=1}^{T} I(S_{it} = s, A_{it} = a)\, E_\pi[\phi_b(S_{i,t+1}, A_{i,t+1})]}{\sum_{i=1}^{n}\sum_{t=1}^{T} I(S_{it} = s, A_{it} = a)}.$$
We then perform a least-squares estimation
$$\min_\theta\, \frac{1}{nT} \sum_{i=1}^{n}\sum_{t=1}^{T} I(A_{it} \mid S_{it} \sim \pi)\, \big[\theta^T \hat\psi(S_{it}, A_{it}) - R_{it}\big]^2,$$
where $A_{it} \mid S_{it} \sim \pi$ means that the data on $A_{it}$ were obtained by following the policy.
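A minimal sketch of the least-squares step, assuming the features $\hat\psi(S_{it}, A_{it})$ have already been computed and stacked into a matrix (the synthetic features and rewards below are illustrative):

```python
# Least-squares fit of theta from stacked features psi-hat and rewards R.
import numpy as np

rng = np.random.default_rng(2)
n, T, B = 20, 10, 5                        # subjects, steps, number of basis functions

# Rows: psi-hat(S_it, A_it) over all (i, t); rows where A_it did not follow
# the policy would be dropped (the indicator in the criterion above).
Psi = rng.normal(size=(n * T, B))
R = Psi @ rng.normal(size=B) + 0.1 * rng.normal(size=n * T)  # synthetic rewards

theta_hat, *_ = np.linalg.lstsq(Psi, R, rcond=None)
print(theta_hat)   # Q^pi(s, a) is then recovered as theta_hat . phi(s, a)
```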
25 More on numerical implementation Regularization may be introduced to obtain a sparser solution. $L_2$-minimization can be replaced by $L_1$-minimization to gain robustness. Choice of basis functions: radial basis functions, where the kernel can be the usual Gaussian kernel (one possible definition of $d(s, s')$ is the shortest path from $s$ to $s'$ in the graph defined by the transition probabilities).
26 Alternative methods Modelling the transition probability functions. Active policy iteration (active learning): update the sampling policy actively.
27 Reinforcement Learning for Infinite Horizon
28 Value function learning given $\pi$ When $T = \infty$ or $T$ is large, the Q-learning method above may not be applicable. The remedy is to take advantage of process stability when $t$ is large, so we can assume the following Markov decision process (MDP): the state and action spaces are constant over time; $p_t(s' \mid s, a)$ is independent of $t$; and the reward function $r_t(s, a, s')$ is independent of $t$. The MDP assumption is plausible for a long horizon and after a certain number of steps.
29 Bellman equations under MDP for infinite horizon Under the MDP, $Q_t^\pi(s, a) = Q^\pi(s, a)$ and $V_t^\pi(s) = V^\pi(s)$. The Bellman equations become
$$V^\pi(s) = E_\pi\big[r(s, A_t, S_{t+1}) + \gamma V^\pi(S_{t+1}) \,\big|\, S_t = s\big],$$
$$Q^\pi(s, a) = E_\pi\big[r(s, a, S_{t+1}) + \gamma Q^\pi(S_{t+1}, A_{t+1}) \,\big|\, S_t = s, A_t = a\big].$$
30 On- and off-policy estimation We can still apply the least-squares learning algorithm for $Q^\pi(s, a)$:
$$E_\pi\Big[\sum_{t=1}^{T} \big(\theta^T \psi(S_t, A_t) - R_t\big)^2\Big]$$
using a history sample $(S_t, A_t)$ that follows the target policy $\pi$. This is called on-policy reinforcement learning. However, not every policy has been seen in the history sample. An alternative method is to use importance sampling:
$$E_\pi\Big[\sum_{t=1}^{T} \big(\theta^T \psi(S_t, A_t) - R_t\big)^2\Big] = E_{\tilde\pi}\Big[\sum_{t=1}^{T} \big(\theta^T \psi(S_t, A_t) - R_t\big)^2\, w_t\Big],$$
where
$$w_t = \prod_{j=1}^{t} \pi(A_j \mid S_j) \Big/ \prod_{j=1}^{t} \tilde\pi(A_j \mid S_j)$$
and $\tilde\pi$ is the policy that generated the history sample.
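A minimal sketch of computing the cumulative importance weights $w_t$, assuming discrete states and actions with known target and behavior policy tables (all numbers illustrative):

```python
# Cumulative importance weights: w_t = prod_{j<=t} pi(A_j|S_j) / pi_tilde(A_j|S_j).
import numpy as np

pi_target = np.array([[0.9, 0.1], [0.2, 0.8]])   # pi(a|s), rows indexed by state
pi_behave = np.array([[0.5, 0.5], [0.5, 0.5]])   # behavior policy from the history sample

traj_s = [0, 1, 1, 0]                            # observed states S_1..S_T
traj_a = [0, 1, 1, 0]                            # observed actions A_1..A_T

ratios = [pi_target[s, a] / pi_behave[s, a] for s, a in zip(traj_s, traj_a)]
w = np.cumprod(ratios)
print(w)
```

Each squared-error term in the least-squares criterion is then multiplied by the corresponding $w_t$.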
31 Off-policy iteration: more We need one assumption: the history-sample policy $\tilde\pi$ satisfies $\tilde\pi(a \mid s) > 0$ for all $(a, s)$. Adaptive importance weighting replaces $w_t$ by $w_t^\nu$ and chooses $\nu$ via cross-validation. When the history sample contains multiple policies $\tilde\pi$'s, we can obtain an estimate from importance weighting with respect to each policy and aggregate the estimates (sample-reuse policy iteration).
32 Reinforcement Learning for Optimal Policy The concept of RL is to make use of existing data from some given policies to learn potentially improved policies (EXPLOITATION); it then tries new policies to collect additional data evidence (EXPLORATION). Reinforcement learning methods mostly fall into two groups: (policy iteration) model-based or learning methods to approximate the optimal SAV; (policy search) model-based or learning methods to directly maximize the SV for estimating $\pi^*$.
33 Optimal policy learning: policy iteration procedure Start from a policy $\pi$. Policy evaluation: evaluate $Q^\pi(s, a)$ and thus $V^\pi(s)$. Policy improvement: update $\pi(a \mid s)$ to $I(a = a^\pi(s))$, where $a^\pi(s)$ is the action maximizing $Q^\pi(s, a)$. Iterate between the policy evaluation step and the policy improvement step.
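A minimal sketch of this loop for a small tabular MDP under the infinite-horizon discounted criterion, with exact policy evaluation via a linear solve (the model is made up; in practice $Q^\pi$ would be estimated as in the earlier slides):

```python
# Policy iteration: exact evaluation (solve V = r_pi + gamma P_pi V), then greedy improvement.
import numpy as np

n_s, n_a, gamma = 3, 2, 0.9
rng = np.random.default_rng(3)
P = rng.dirichlet(np.ones(n_s), size=(n_s, n_a))   # p(s'|s, a)
r = rng.normal(size=(n_s, n_a))                    # E[r(s, a, S')|s, a]

pi = np.zeros(n_s, dtype=int)                      # start from some deterministic policy
while True:
    # Policy evaluation
    P_pi = P[np.arange(n_s), pi]                   # transition matrix under pi, (n_s, n_s)
    r_pi = r[np.arange(n_s), pi]
    V = np.linalg.solve(np.eye(n_s) - gamma * P_pi, r_pi)
    # Policy improvement: greedy with respect to Q^pi
    Q = r + gamma * P @ V                          # Q[s, a]
    pi_new = Q.argmax(axis=1)
    if np.array_equal(pi_new, pi):
        break
    pi = pi_new

print(pi, V)
```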
34 Soft policy iteration procedure Selecting a deterministic policy update may be too greedy if the initial policy is far from the optimum. Softer policy updates include: $\pi(a \mid s) \propto \exp\{Q^\pi(s, a)/\tau\}$ (softmax with temperature $\tau$); ($\epsilon$-greedy policy improvement) $\pi(a \mid s)$ has probability $(1 - \epsilon + \epsilon/m)$ at $a = a(\pi)$ and probability $\epsilon/m$ at the other $a$'s, where $m$ is the number of possible actions.
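Both soft updates are sketched below for a single state, given fitted values $Q^\pi(s, \cdot)$ (the Q values, temperature, and $\epsilon$ are illustrative):

```python
# Softmax and epsilon-greedy policy updates from a fitted Q^pi(s, .).
import numpy as np

q = np.array([1.0, 2.5, 0.3])        # Q^pi(s, a) over m = 3 actions
m, tau, eps = len(q), 0.5, 0.1

softmax_pi = np.exp(q / tau) / np.exp(q / tau).sum()   # pi(a|s) proportional to exp{Q/tau}

greedy = q.argmax()
eps_greedy_pi = np.full(m, eps / m)                    # probability eps/m at the other actions
eps_greedy_pi[greedy] = 1 - eps + eps / m              # probability 1 - eps + eps/m at a(pi)

print(softmax_pi, eps_greedy_pi)
```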
35 Optimal policy learning: direct policy search The direct policy search approach aims to find the policy maximizing the expected return. Suppose we model the policy as $\pi(a \mid s; \theta)$. The expected return under $\pi$ is given by
$$J(\theta) = \int p(s_1) \prod_{t=1}^{T} p(s_{t+1} \mid s_t, a_t)\, \pi(a_t \mid s_t; \theta)\, \Big\{\sum_{t=1}^{T} \gamma^{t-1} r(s_t, a_t, s_{t+1})\Big\}\, ds_1 \cdots ds_{T+1}\, da_1 \cdots da_T.$$
We optimize $J(\theta)$ to find the optimal $\theta$. A gradient approach can be adopted for the optimization; EM-based policy search can also be used; and importance sampling can be used for evaluating $J(\theta)$.
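A minimal sketch of gradient-based policy search in the REINFORCE style, for a softmax policy $\pi(a \mid s; \theta)$ on a small made-up MDP; the gradient of $J(\theta)$ is estimated from each sampled trajectory as $G \sum_t \nabla_\theta \log \pi(a_t \mid s_t; \theta)$, where $G$ is the trajectory return:

```python
# REINFORCE-style gradient ascent on J(theta) for a tabular softmax policy.
import numpy as np

rng = np.random.default_rng(4)
n_s, n_a, T, gamma, lr = 3, 2, 10, 0.95, 0.05
P = rng.dirichlet(np.ones(n_s), size=(n_s, n_a))   # p(s'|s, a), illustrative
r = rng.normal(size=(n_s, n_a))                    # r(s, a), illustrative
theta = np.zeros((n_s, n_a))                       # pi(a|s; theta) = softmax(theta[s])

def policy(s):
    z = np.exp(theta[s] - theta[s].max())
    return z / z.sum()

for _ in range(200):
    s = rng.integers(n_s)                          # illustrative initial-state draw
    visited, rewards = [], []
    for t in range(T):
        p = policy(s)
        a = rng.choice(n_a, p=p)
        visited.append((s, a, p))
        rewards.append(r[s, a])
        s = rng.choice(n_s, p=P[s, a])
    G = sum(gamma ** t * rewards[t] for t in range(T))   # trajectory return
    grad = np.zeros_like(theta)
    for s_t, a_t, p in visited:
        grad[s_t] -= G * p                         # grad log softmax = one-hot(a_t) - p
        grad[s_t, a_t] += G
    theta += lr * grad                             # ascend the estimated gradient of J

print(policy(0), policy(1), policy(2))
```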
36 How does RL work in artificial intelligence? The agent (robot) starts with one initial policy, $\pi^{(0)}$, to yield a trial over a period (each trial is sometimes called an epoch). The agent uses RL algorithms (Q-learning, least-squares estimation) to learn the state-action value function for $\pi^{(0)}$. The agent uses policy iteration or a direct policy search method to obtain an improved policy $\pi^{(1)}$, then runs a new trial under this policy. The agent continues this process, where the SAV function learning can reuse all previous policies via importance sampling. It stops when the value or policy has negligible change.
37 What can statisticians do with RL? Improve the design of the initial policy (random policy or other choices). Pilot trials? Improve the learning methods in RL algorithms. Improve the policy update. Characterize convergence rates and so on. Design better reward systems.
38 Simulated examples
39 Robot-Arm control example
40 Robot-Arm control example: continue
41 Robot-Arm control example: continue
42 Mountain car example Action space: force applied to the car, $\{-0.2, 0.2, 0\}$. State space: $(x, \dot{x})$, where $x$ is the horizontal position ($\in [-1.2, 0.5]$) and $\dot{x}$ is the velocity ($\in [-1.5, 1.5]$). Transition:
$$x_{t+1} = x_t + \dot{x}_{t+1}\,\delta t, \qquad \dot{x}_{t+1} = \dot{x}_t + \big(-9.8\, w \cos(3 x_t) + a_t/w - k \dot{x}_t\big)\,\delta t,$$
where $w$ is the mass ($0.2$ kg), $k$ is the friction coefficient ($0.3$), and $\delta t$ is $0.1$ second. Reward:
$$r(s, a, s') = \begin{cases} 1 & x_{s'} \ge 0.5, \\ -0.01 & \text{otherwise.} \end{cases}$$
Policy iteration uses kernels with centers at $\{-1.2, -0.35, 0.5\} \times \{-1.5, -0.5, 0.5, 1.5\}$ and $\sigma = 1$.
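A minimal simulation of this transition, with the constants taken from the slide (the sign conventions follow the standard mountain-car formulation, since minus signs were lost in transcription):

```python
# One-step mountain-car dynamics: x_{t+1} = x_t + xdot_{t+1}*dt,
# xdot_{t+1} = xdot_t + (-9.8*w*cos(3x_t) + a/w - k*xdot_t)*dt.
import numpy as np

w, k, dt = 0.2, 0.3, 0.1          # mass (kg), friction coefficient, time step (s)

def step(x, xdot, a):
    """One transition; a is the applied force in {-0.2, 0.2, 0}."""
    xdot_next = xdot + (-9.8 * w * np.cos(3 * x) + a / w - k * xdot) * dt
    xdot_next = np.clip(xdot_next, -1.5, 1.5)    # keep velocity in the state space
    x_next = np.clip(x + xdot_next * dt, -1.2, 0.5)
    reward = 1.0 if x_next >= 0.5 else -0.01
    return x_next, xdot_next, reward

x, xdot = -0.5, 0.0               # start near the valley bottom
print(step(x, xdot, 0.2))
```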
43 Experiment results
44 Experiment results