1 Markov Decision Processes (March-May 2013)
2 Discrete-Time Finite Markov Decision Processes
A Markov decision process (MDP) is a Markov reward process with decisions. It models an environment in which all states are Markov and time is divided into stages.
Definition: A Markov Decision Process is a tuple ⟨S, A, P, R, γ, µ⟩ where
S is a (finite) set of states
A is a (finite) set of actions
P is a state transition probability matrix, $P(s' \mid s, a)$
R is a reward function, $R(s, a) = E[r \mid s, a]$
γ is a discount factor, γ ∈ [0, 1]
µ is a set of initial probabilities, $\mu_i^0 = P(X_0 = i)$ for all i
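To make the tuple concrete, here is a minimal Matlab sketch of how such an MDP could be stored; the two-state, two-action numbers are purely hypothetical and not taken from the slides.
Matlab code (illustrative sketch):
% A hypothetical 2-state, 2-action MDP <S, A, P, R, gamma, mu>
nS = 2; nA = 2;                     % |S| and |A|
P = zeros(nS, nS, nA);              % P(s, s', a): probability of s -> s' under action a
P(:,:,1) = [0.9 0.1; 0.2 0.8];      % each row must sum to 1
P(:,:,2) = [0.5 0.5; 0.0 1.0];
R = [1 0; -1 2];                    % R(s, a) = E[r | s, a]
gamma = 0.9;                        % discount factor in [0, 1]
mu = [1; 0];                        % initial state distribution mu_0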
3 Example: Student MDP
4 Goals and Rewards
Is a scalar reward an adequate notion of a goal?
Sutton hypothesis: all of what we mean by goals and purposes can be well thought of as the maximization of the cumulative sum of a received scalar signal (reward)
Probably ultimately wrong, but so simple and flexible we have to disprove it before considering anything more complicated
A goal should specify what we want to achieve, not how we want to achieve it
The same goal can be specified by (infinitely many) different reward functions
A goal must be outside the agent's direct control, thus outside the agent
The agent must be able to measure success: explicitly, and frequently during her lifespan
5 Return
Time horizon:
finite: finite and fixed number of steps
indefinite: until some stopping criterion is met (absorbing states)
infinite: forever
Cumulative reward:
total reward: $V = \sum_{i=1}^{\infty} r_i$
average reward: $V = \lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} r_i$
discounted reward: $V = \sum_{i=1}^{\infty} \gamma^{i-1} r_i$
mean-variance reward
6 Infinite Horizon Discounted Return
Definition: The return $v_t$ is the total discounted reward from time step t:
$v_t = r_{t+1} + \gamma r_{t+2} + \dots = \sum_{k=0}^{\infty} \gamma^k r_{t+k+1}$
The discount γ ∈ [0, 1) is the present value of future rewards
The value of receiving reward r after k + 1 time steps is $\gamma^k r$
Immediate reward vs delayed reward:
γ close to 0 leads to myopic evaluation
γ close to 1 leads to far-sighted evaluation
γ can also be interpreted as the probability that the process will go on
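As a quick numeric check of the definition, the sketch below computes the discounted return of a short, hypothetical reward sequence (the rewards are made up for illustration).
Matlab code (illustrative sketch):
% Discounted return v_t = sum_k gamma^k r_{t+k+1} for a finite reward sequence
r = [0, -1, -1, 10];                   % hypothetical rewards r_{t+1}, r_{t+2}, ...
gamma = 0.9;
v = sum(gamma.^(0:numel(r)-1) .* r)    % v_t for this truncated sequence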
7 Why discount? Most reward (and decision) processes are discounted, why? Mathematically convenient to discount rewards Avoids infinite returns in cyclic processes Uncertainty about the future may not be fully represented If the reward is financial, immediate rewards may earn more interest than delayed rewards Animal/human behavior shows preference for immediate reward It is sometimes possible to use undiscounted reward processes (i.e. γ = 1), e.g. if all sequences terminate
8 Policies
A policy, at any given point in time, decides which action the agent selects
A policy fully defines the behavior of an agent
Policies can be:
Markovian / History-dependent
Deterministic / Stochastic
Stationary / Non-stationary
9 Stationary Stochastic Markovian Policies
Definition: A policy π is a distribution over actions given the state: $\pi(a \mid s) = P[a \mid s]$
MDP policies depend on the current state (not the history), i.e., policies are stationary (time-independent)
Given an MDP M = ⟨S, A, P, R, γ, µ⟩ and a policy π:
the state sequence s₁, s₂, ... is a Markov process ⟨S, P^π, µ⟩
the state and reward sequence s₁, r₂, s₂, ... is a Markov reward process ⟨S, P^π, R^π, γ, µ⟩, where
$P^\pi = \sum_{a \in A} \pi(a \mid s) P(s' \mid s, a)$
$R^\pi = \sum_{a \in A} \pi(a \mid s) R(s, a)$
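Assuming the storage layout sketched earlier (P as an nS x nS x nA array, R as an nS x nA matrix, and a policy stored as an nS x nA matrix with policy(s, a) = π(a|s)), the induced MRP can be computed as follows.
Matlab code (illustrative sketch):
% Induced MRP <S, P^pi, R^pi, gamma, mu> from a stationary stochastic policy
Ppi = zeros(nS, nS);
for a = 1:nA
    % P^pi(s, s') = sum_a pi(a|s) P(s'|s, a): scale row s of P(:,:,a) by policy(s, a)
    Ppi = Ppi + diag(policy(:, a)) * P(:, :, a);
end
Rpi = sum(policy .* R, 2);     % R^pi(s) = sum_a pi(a|s) R(s, a)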
10 Value Functions
Given a policy π, it is possible to define the utility of each state (policy evaluation)
Definition: The state-value function $V^\pi(s)$ of an MDP is the expected return starting from state s and then following policy π:
$V^\pi(s) = E_\pi[v_t \mid s_t = s]$
For control purposes, rather than the value of each state, it is easier to consider the value of each action in each state
Definition: The action-value function $Q^\pi(s, a)$ is the expected return starting from state s, taking action a, and then following policy π:
$Q^\pi(s, a) = E_\pi[v_t \mid s_t = s, a_t = a]$
11 Example: Value Function of Student MDP
12 Bellman Expectation Equation
The state-value function can again be decomposed into immediate reward plus discounted value of the successor state:
$V^\pi(s) = E_\pi[r_{t+1} + \gamma V^\pi(s_{t+1}) \mid s_t = s] = \sum_{a \in A} \pi(a \mid s) \left( R(s, a) + \gamma \sum_{s' \in S} P(s' \mid s, a) V^\pi(s') \right)$
The action-value function can similarly be decomposed:
$Q^\pi(s, a) = E_\pi[r_{t+1} + \gamma Q^\pi(s_{t+1}, a_{t+1}) \mid s_t = s, a_t = a] = R(s, a) + \gamma \sum_{s' \in S} P(s' \mid s, a) V^\pi(s') = R(s, a) + \gamma \sum_{s' \in S} P(s' \mid s, a) \sum_{a' \in A} \pi(a' \mid s') Q^\pi(s', a')$
13 Bellman Expectation Equation (Matrix Form)
The Bellman expectation equation can be expressed concisely using the induced MRP, with direct solution:
$V^\pi = R^\pi + \gamma P^\pi V^\pi \quad \Rightarrow \quad V^\pi = (I - \gamma P^\pi)^{-1} R^\pi$
14 Bellman Operators for $V^\pi$
Definition: The Bellman operator for $V^\pi$ is defined as $T^\pi : \mathbb{R}^{|S|} \to \mathbb{R}^{|S|}$ (it maps value functions to value functions):
$(T^\pi V^\pi)(s) = \sum_{a \in A} \pi(a \mid s) \left( R(s, a) + \gamma \sum_{s' \in S} P(s' \mid s, a) V^\pi(s') \right)$
Using the Bellman operator, the Bellman expectation equation can be compactly written as $T^\pi V^\pi = V^\pi$
$V^\pi$ is a fixed point of the Bellman operator $T^\pi$
This is a linear equation in $V^\pi$ and $T^\pi$
If 0 < γ < 1 then $T^\pi$ is a contraction w.r.t. the maximum norm
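With the induced MRP quantities Ppi and Rpi from the earlier sketch, the operator is affine in V and its fixed point can be checked directly (again a sketch under the hypothetical layout assumed above).
Matlab code (illustrative sketch):
% Bellman expectation operator in matrix form: (T^pi V) = R^pi + gamma * P^pi * V
Tpi = @(V) Rpi + gamma * Ppi * V;
Vpi = (eye(nS) - gamma * Ppi) \ Rpi;   % exact solution of the linear system
max(abs(Tpi(Vpi) - Vpi))               % ~0 up to round-off: V^pi is the fixed point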
15 Bellman Operators for $Q^\pi$
Definition: The Bellman operator for $Q^\pi$ is defined as $T^\pi : \mathbb{R}^{|S| \times |A|} \to \mathbb{R}^{|S| \times |A|}$ (it maps action-value functions to action-value functions):
$(T^\pi Q^\pi)(s, a) = R(s, a) + \gamma \sum_{s' \in S} P(s' \mid s, a) \sum_{a' \in A} \pi(a' \mid s') Q^\pi(s', a')$
Using the Bellman operator, the Bellman expectation equation can be compactly written as $T^\pi Q^\pi = Q^\pi$
$Q^\pi$ is a fixed point of the Bellman operator $T^\pi$
This is a linear equation in $Q^\pi$ and $T^\pi$
If 0 < γ < 1 then $T^\pi$ is a contraction w.r.t. the maximum norm
16 Example: Value Function of Student MDP
Matlab code
% Problem definition
gamma = 1;
R = [-0.5; -1.5; -1; 5.5];
P = [ ; ; ; ];   % 4x4 transition matrix of the induced MRP
                 % (the numeric entries were lost in transcription; rows must sum to 1)
% Closed-form computation of the value function
I = eye(4);
Vcf = inv(I - gamma*P) * R
% Iterative computation of the value function
V = zeros(4,1);
for i = 1:10
    V = R + gamma*P*V;
    error(i) = max(abs(Vcf - V));
end
% Plotting how the error decreases with the number of iterations
plot(1:10, error);
17 Optimal Value Function
Definition: The optimal state-value function $V^*(s)$ is the maximum value function over all policies: $V^*(s) = \max_\pi V^\pi(s)$
The optimal action-value function $Q^*(s, a)$ is the maximum action-value function over all policies: $Q^*(s, a) = \max_\pi Q^\pi(s, a)$
The optimal value function specifies the best possible performance in the MDP
An MDP is solved when we know the optimal value function
18 Example: Optimal Value Function of Student MDP
19 Optimal Policy
Value functions define a partial ordering over policies: $\pi \geq \pi'$ if $V^\pi(s) \geq V^{\pi'}(s), \; \forall s \in S$
Theorem: For any Markov Decision Process
there exists an optimal policy $\pi^*$ that is better than or equal to all other policies: $\pi^* \geq \pi, \; \forall \pi$
all optimal policies achieve the optimal value function: $V^{\pi^*}(s) = V^*(s)$
all optimal policies achieve the optimal action-value function: $Q^{\pi^*}(s, a) = Q^*(s, a)$
There is always a deterministic optimal policy for any MDP
A deterministic optimal policy can be found by maximizing over $Q^*(s, a)$:
$\pi^*(a \mid s) = 1$ if $a = \arg\max_{a \in A} Q^*(s, a)$, and 0 otherwise
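Given an optimal action-value function stored as an nS x nA matrix Qstar (a hypothetical name and layout, consistent with the earlier sketches), extracting a deterministic optimal policy is a row-wise argmax.
Matlab code (illustrative sketch):
% Deterministic greedy policy from Q*
[Vstar, aStar] = max(Qstar, [], 2);             % V*(s) and the argmax action per state
piStar = zeros(nS, nA);
piStar(sub2ind([nS nA], (1:nS)', aStar)) = 1;   % pi*(a|s) = 1 iff a = argmax_a Q*(s, a)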
20 Example: Optimal Policy for Student MDP
21 Bellman Optimality Equation
Bellman optimality equation for $V^*$:
$V^*(s) = \max_a Q^*(s, a) = \max_a \left\{ R(s, a) + \gamma \sum_{s' \in S} P(s' \mid s, a) V^*(s') \right\}$
Bellman optimality equation for $Q^*$:
$Q^*(s, a) = R(s, a) + \gamma \sum_{s' \in S} P(s' \mid s, a) V^*(s') = R(s, a) + \gamma \sum_{s' \in S} P(s' \mid s, a) \max_{a'} Q^*(s', a')$
22 Example: Bellman Optimality Equation in Student MDP
23 Bellman Optimality Operator
Definition: The Bellman optimality operator for $V^*$ is defined as $T^* : \mathbb{R}^{|S|} \to \mathbb{R}^{|S|}$ (it maps value functions to value functions):
$(T^* V)(s) = \max_{a \in A} \left( R(s, a) + \gamma \sum_{s' \in S} P(s' \mid s, a) V(s') \right)$
Definition: The Bellman optimality operator for $Q^*$ is defined as $T^* : \mathbb{R}^{|S| \times |A|} \to \mathbb{R}^{|S| \times |A|}$ (it maps action-value functions to action-value functions):
$(T^* Q)(s, a) = R(s, a) + \gamma \sum_{s' \in S} P(s' \mid s, a) \max_{a'} Q(s', a')$
24 Properties of Bellman Operators
Monotonicity: if $f_1 \leq f_2$ component-wise, then $T^\pi f_1 \leq T^\pi f_2$ and $T^* f_1 \leq T^* f_2$
Max-norm contraction: for two vectors $f_1$ and $f_2$:
$\|T^\pi f_1 - T^\pi f_2\|_\infty \leq \gamma \|f_1 - f_2\|_\infty$
$\|T^* f_1 - T^* f_2\|_\infty \leq \gamma \|f_1 - f_2\|_\infty$
$V^\pi$ is the unique fixed point of $T^\pi$
$V^*$ is the unique fixed point of $T^*$
For any vector $f \in \mathbb{R}^{|S|}$ and any policy π, we have $\lim_{k \to \infty} (T^\pi)^k f = V^\pi$ and $\lim_{k \to \infty} (T^*)^k f = V^*$
25 Solving the Bellman Optimality Equation
The Bellman optimality equation is non-linear
No closed-form solution for the general case
Many iterative solution methods:
Dynamic Programming: Value Iteration, Policy Iteration
Linear Programming
Reinforcement Learning: Q-learning, SARSA
26 Extensions to MDPs
Undiscounted, average-reward MDPs
Infinite and continuous MDPs
Partially observable MDPs
Semi-MDPs
Non-stationary MDPs
Markov games
27 Exercises
Model the following problems as MDPs:
But Who's Counting
Blackjack
28 Brute Force
Solving an MDP means finding an optimal policy
A naive approach consists of
enumerating all the deterministic policies
evaluating each policy
returning the best one
The number of policies is exponential: $|A|^{|S|}$
We need a more intelligent search for the best policies:
restrict the search to a subset of the possible policies
use stochastic optimization algorithms
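For intuition, here is a brute-force sketch on the tiny hypothetical MDP defined earlier: it enumerates all |A|^|S| deterministic policies, evaluates each one exactly via the matrix-form Bellman equation, and keeps the best according to the initial distribution mu. This is only feasible for toy problems.
Matlab code (illustrative sketch):
% Enumerate all |A|^|S| deterministic policies (tractable only for tiny MDPs)
best = -inf;
for idx = 0:(nA^nS - 1)
    acts = mod(floor(idx ./ nA.^(0:nS-1)), nA) + 1;   % base-|A| digits = action per state
    Ppi = zeros(nS, nS); Rpi = zeros(nS, 1);
    for s = 1:nS
        Ppi(s, :) = P(s, :, acts(s));
        Rpi(s) = R(s, acts(s));
    end
    V = (eye(nS) - gamma * Ppi) \ Rpi;                % exact policy evaluation
    if mu' * V > best
        best = mu' * V; bestActs = acts;              % keep the best policy found
    end
end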
29 What is Dynamic Programming?
Dynamic: sequential or temporal component to the problem
Programming: optimizing a program, i.e., a policy (c.f. linear programming)
A method for solving complex problems by breaking them down into subproblems:
solve the subproblems
combine the solutions to the subproblems
30 Requirements for Dynamic Programming
Dynamic Programming is a very general solution method for problems which have two properties:
Optimal substructure: the principle of optimality applies; the optimal solution can be decomposed into subproblems
Overlapping subproblems: subproblems recur many times; solutions can be cached and reused
Markov decision processes satisfy both properties:
the Bellman equation gives a recursive decomposition
the value function stores and reuses solutions
31 Planning by Dynamic Programming
Dynamic Programming assumes full knowledge of the MDP; it is used for planning in an MDP
Prediction: Input: MDP ⟨S, A, P, R, γ, µ⟩ and policy π (i.e., MRP ⟨S, P^π, R^π, γ, µ⟩); Output: value function $V^\pi$
Control: Input: MDP ⟨S, A, P, R, γ, µ⟩; Output: optimal value function $V^*$ and optimal policy $\pi^*$
32 Other Applications of Dynamic Programming
Dynamic Programming is used to solve many other problems:
Scheduling algorithms
String algorithms (e.g., sequence alignment)
Graph algorithms (e.g., shortest path algorithms)
Graphical models (e.g., Viterbi algorithm)
Bioinformatics (e.g., lattice models)
33 Finite Horizon Dynamic Programming
Principle of optimality: the tail of an optimal policy is optimal for the tail problem
Backward induction (backward recursion):
$V_k^*(s) = \max_{a \in A_k} \left[ R_k(s, a) + \sum_{s' \in S_{k+1}} P_k(s' \mid s, a) V_{k+1}^*(s') \right], \quad \forall s, \; k = N-1, \dots, 0$
Optimal policy:
$\pi_k^*(s) \in \arg\max_{a \in A_k} \left[ R_k(s, a) + \sum_{s' \in S_{k+1}} P_k(s' \mid s, a) V_{k+1}^*(s') \right], \quad \forall s, \; k = 0, \dots, N-1$
Cost: $N |S| |A|$ vs the $|A|^{N |S|}$ of brute-force policy search
From now on, we will consider infinite-horizon discounted MDPs
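A backward-induction sketch follows, assuming for simplicity that A, P, and R do not depend on the stage k (the slides allow stage-dependent A_k, P_k, R_k) and reusing the hypothetical layout from before.
Matlab code (illustrative sketch):
% Backward recursion for an N-stage finite-horizon MDP
N = 10;
V = zeros(nS, 1);            % terminal value V_N = 0
piOpt = zeros(nS, N);        % optimal action per state and stage
for k = N:-1:1
    Qk = zeros(nS, nA);
    for a = 1:nA
        Qk(:, a) = R(:, a) + P(:, :, a) * V;   % V_k = max_a [R + sum_s' P V_{k+1}]
    end
    [V, piOpt(:, k)] = max(Qk, [], 2);
end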
34 Policy Evaluation
For a given policy π, compute the state-value function $V^\pi$
Recall the state-value function for policy π:
$V^\pi(s) = E_\pi\left[ \sum_{t=0}^{\infty} \gamma^t r_t \mid s_0 = s \right]$
Bellman equation for $V^\pi$:
$V^\pi(s) = \sum_{a \in A} \pi(a \mid s) \left[ R(s, a) + \gamma \sum_{s' \in S} P(s' \mid s, a) V^\pi(s') \right]$
A system of $|S|$ simultaneous linear equations
Solution in matrix notation (complexity $O(n^3)$):
$V^\pi = (I - \gamma P^\pi)^{-1} R^\pi$
35 Iterative Policy Evaluation
Iterative application of the Bellman expectation backup:
$V_0 \to V_1 \to \dots \to V_k \to V_{k+1} \to \dots \to V^\pi$
A full policy-evaluation backup:
$V_{k+1}(s) \leftarrow \sum_{a \in A} \pi(a \mid s) \left[ R(s, a) + \gamma \sum_{s' \in S} P(s' \mid s, a) V_k(s') \right]$
A sweep consists of applying a backup operation to each state
Using synchronous backups: at each iteration k + 1, for all states s ∈ S, update $V_{k+1}(s)$ from $V_k(s')$
36 Example: Small Gridworld
Undiscounted episodic MDP (γ = 1)
All episodes terminate in the absorbing terminal state
Transient states 1, ..., 14; one terminal state (shown twice as shaded squares)
Actions that would take the agent off the grid leave the state unchanged
Reward is −1 until the terminal state is reached
37 Policy Evaluation in Small Gridworld
38 Policy Evaluation in Small Gridworld
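The computation behind these gridworld slides can be sketched directly, under the standard setup described above: a 4x4 grid, uniform random policy over up/down/left/right, reward −1 per step, γ = 1, and terminal cells in two opposite corners.
Matlab code (illustrative sketch):
% Synchronous iterative policy evaluation for the 4x4 gridworld
V = zeros(4, 4);
isTerm = false(4, 4); isTerm(1, 1) = true; isTerm(4, 4) = true;
for sweep = 1:1000
    Vnew = V;
    for i = 1:4
        for j = 1:4
            if isTerm(i, j), continue; end
            moves = [i-1 j; i+1 j; i j-1; i j+1];   % up, down, left, right
            v = 0;
            for m = 1:4
                ni = min(max(moves(m, 1), 1), 4);   % off-grid moves leave the state unchanged
                nj = min(max(moves(m, 2), 1), 4);
                v = v + 0.25 * (-1 + V(ni, nj));    % uniform random policy, gamma = 1
            end
            Vnew(i, j) = v;
        end
    end
    if max(abs(Vnew(:) - V(:))) < 1e-8, break; end
    V = Vnew;
end
V    % converged values of the random policy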
39 Policy Improvement
Consider a deterministic policy π
For a given state s, would it be better to do an action a ≠ π(s)?
We can improve the policy by acting greedily:
$\pi'(s) = \arg\max_{a \in A} Q^\pi(s, a)$
This improves the value from any state s over one step:
$Q^\pi(s, \pi'(s)) = \max_{a \in A} Q^\pi(s, a) \geq Q^\pi(s, \pi(s)) = V^\pi(s)$
40 Policy Improvement Theorem
Theorem: Let π and π′ be any pair of deterministic policies such that $Q^\pi(s, \pi'(s)) \geq V^\pi(s), \; \forall s \in S$. Then the policy π′ must be as good as, or better than, π: $V^{\pi'}(s) \geq V^\pi(s), \; \forall s \in S$
Proof:
$V^\pi(s) \leq Q^\pi(s, \pi'(s)) = E_{\pi'}[r_{t+1} + \gamma V^\pi(s_{t+1}) \mid s_t = s]$
$\leq E_{\pi'}[r_{t+1} + \gamma Q^\pi(s_{t+1}, \pi'(s_{t+1})) \mid s_t = s]$
$\leq E_{\pi'}[r_{t+1} + \gamma r_{t+2} + \gamma^2 Q^\pi(s_{t+2}, \pi'(s_{t+2})) \mid s_t = s]$
$\leq \dots \leq E_{\pi'}[r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + \dots \mid s_t = s] = V^{\pi'}(s)$
41 Policy Iteration
What if improvement stops ($V^{\pi'} = V^\pi$)? Then
$Q^\pi(s, \pi'(s)) = \max_{a \in A} Q^\pi(s, a) = Q^\pi(s, \pi(s)) = V^\pi(s)$
But this is the Bellman optimality equation
Therefore $V^{\pi'}(s) = V^\pi(s) = V^*(s)$ for all s ∈ S, so π is an optimal policy!
Policy iteration alternates evaluation and improvement:
$\pi_0 \to V^{\pi_0} \to \pi_1 \to V^{\pi_1} \to \dots \to \pi^* \to V^{\pi^*}$
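Putting evaluation and greedy improvement together gives the policy-iteration loop; the sketch below uses exact evaluation via the linear system, with the same hypothetical P, R, gamma layout as earlier.
Matlab code (illustrative sketch):
% Policy iteration: exact evaluation + greedy improvement
policyDet = ones(nS, 1);                        % arbitrary initial deterministic policy
while true
    Ppi = zeros(nS, nS); Rpi = zeros(nS, 1);    % induced MRP of the current policy
    for s = 1:nS
        Ppi(s, :) = P(s, :, policyDet(s));
        Rpi(s) = R(s, policyDet(s));
    end
    V = (eye(nS) - gamma * Ppi) \ Rpi;          % policy evaluation
    Q = zeros(nS, nA);
    for a = 1:nA
        Q(:, a) = R(:, a) + gamma * P(:, :, a) * V;
    end
    [~, newPolicy] = max(Q, [], 2);             % greedy policy improvement
    if isequal(newPolicy, policyDet), break; end  % improvement stopped: policy is optimal
    policyDet = newPolicy;
end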
42 Modified Policy Iteration
Does policy evaluation need to converge to $V^\pi$?
Or should we introduce a stopping condition, e.g., ε-convergence of the value function?
Or simply stop after k iterations of iterative policy evaluation?
For example, in the small gridworld k = 3 was sufficient to achieve the optimal policy
Why not update the policy every iteration? I.e., stop after k = 1
43 Generalized Policy Iteration
Policy evaluation: estimate $V^\pi$ (e.g., iterative policy evaluation)
Policy improvement: generate $\pi' \geq \pi$ (e.g., greedy policy improvement)
44 Deterministic Value Iteration
If we know the solution to the subproblems $V^*(s')$,
then it is easy to construct the solution to $V^*(s)$:
$V^*(s) \leftarrow \max_{a \in A} \left[ R(s, a) + V^*(s') \right]$
The idea of value iteration is to apply these updates iteratively, e.g., starting from the goal and working backward
45 Value Iteration in MDPs
Many MDPs don't have a finite horizon; they are typically loopy
So there is no end to work backwards from
However, we can still propagate information backwards,
using the Bellman optimality equation to back up $V^*(s)$ from $V^*(s')$
Each subproblem is easier due to the discount factor γ
Iterate until convergence
46 Principle of Optimality in MDPs
Theorem: A policy π(a|s) achieves the optimal value from state s, $V^\pi(s) = V^*(s)$, if and only if, for any state s′ reachable from s, π achieves the optimal value from state s′: $V^\pi(s') = V^*(s')$
So an optimal policy $\pi^*(a \mid s)$ must consist of an optimal first action a*, followed by an optimal policy from the successor state s′:
$V^*(s) = \max_{a \in A} \left[ R(s, a) + \gamma \sum_{s' \in S} P(s' \mid s, a) V^*(s') \right]$
47 Value Iteration
Problem: find the optimal policy $\pi^*$
Solution: iterative application of the Bellman optimality backup:
$V_1 \to V_2 \to \dots \to V^*$
Using synchronous backups: at each iteration k + 1, for all states s ∈ S, update $V_{k+1}(s)$ from $V_k(s')$
Unlike policy iteration, there is no explicit policy; intermediate value functions may not correspond to any policy
demo: poole/demos/mdp/vi.html
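A value-iteration sketch, again over the hypothetical P, R, gamma layout, using the ε stopping rule justified on the next slide.
Matlab code (illustrative sketch):
% Value iteration: repeated Bellman optimality backups
V = zeros(nS, 1);
epsilon = 1e-6;
while true
    Q = zeros(nS, nA);
    for a = 1:nA
        Q(:, a) = R(:, a) + gamma * P(:, :, a) * V;   % back up V_{k+1} from V_k
    end
    Vnew = max(Q, [], 2);
    if max(abs(Vnew - V)) < epsilon                   % ||V_{i+1} - V_i|| < eps implies
        break;                                        % ||V_{i+1} - V*|| < 2*eps*gamma/(1-gamma)
    end
    V = Vnew;
end
V = Vnew;                                             % approximately V*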
48 Convergence and Contractions
Define the max-norm: $\|V\|_\infty = \max_s |V(s)|$
Theorem: Value iteration converges to the optimal state-value function: $\lim_{k \to \infty} V_k = V^*$
Proof: $\|V_{k+1} - V^*\|_\infty = \|T^* V_k - T^* V^*\|_\infty \leq \gamma \|V_k - V^*\|_\infty \leq \dots \leq \gamma^{k+1} \|V_0 - V^*\|_\infty \to 0$
Theorem: $\|V_{i+1} - V_i\|_\infty < \epsilon \;\Rightarrow\; \|V_{i+1} - V^*\|_\infty < \frac{2 \epsilon \gamma}{1 - \gamma}$
49 Synchronous Dynamic Programming Algorithms
Problem | Bellman Equation | Algorithm
Prediction | Bellman Expectation Equation | Iterative Policy Evaluation
Control | Bellman Expectation Equation + Greedy Policy Improvement | Policy Iteration
Control | Bellman Optimality Equation | Value Iteration
Algorithms are based on the state-value function $V^\pi(s)$ or $V^*(s)$: complexity $O(m n^2)$ per iteration, for m actions and n states
They could also apply to the action-value function $Q^\pi(s, a)$ or $Q^*(s, a)$: complexity $O(m^2 n^2)$ per iteration