Optimal Stopping Problems
2.997 Decision Making in Large Scale Systems (MIT, Spring 2004)
Handout #9, Lecture Note 5, March 3

In the last lecture, we analyzed the behavior of TD(λ) for approximating the cost-to-go function in autonomous systems. Recall that much of the analysis was based on the idea of sampling states according to their stationary distribution. This was done either explicitly, as assumed in approximate value iteration, or implicitly, through the simulation or observation of system trajectories.

It is unclear how this line of analysis can be extended to general controlled systems. In the presence of multiple policies there are, in general, multiple stationary state distributions to be considered, and it is not clear which one should be used. Moreover, the dynamic programming operator $T$ may not be a contraction with respect to any such distribution, which invalidates the argument used in the autonomous case.

However, there is a special class of control problems for which an analysis of TD(λ) along the same lines used in the case of autonomous systems is successful. These problems are called optimal stopping problems, and are characterized by a tuple $(S,\ P : S \times S \to [0,1],\ g_0 : S \to \mathbb{R},\ g_1 : S \to \mathbb{R})$, with the following interpretation. The problem involves a Markov decision process with state space $S$. In each state, there are two actions available: stop (action 0) or continue (action 1). Once action 0 (stop) is selected, the system is halted and a final cost $g_0(x)$ is incurred, based on the final state $x$. At each previous time stage, action 1 (continue) is selected and a cost $g_1(x)$ is incurred; in this case, the system transitions from state $x$ to state $y$ with probability $P(x,y)$. Each policy $u$ corresponds to a (random) stopping time $\tau_u$, given by

$$\tau_u = \min\{k : u(x_k) = 0\}.$$

Example 1 (American Options) An American call option is an option to buy stock at a price $K$, called the strike price, on or before an expiration date, the last time period in which the option can be exercised. The state of such a problem is the stock price $P_k$. Exercising the option corresponds to the stop action and yields a reward $\max(0, P_k - K)$; not exercising corresponds to the continue action and incurs no costs or rewards.

Example 2 (The Secretary Problem) In the secretary problem, a manager needs to hire a secretary. He interviews candidates sequentially and must decide about hiring each one immediately after the interview. Each interview incurs a certain cost for the hours spent meeting with the candidate, and hiring a given person yields a reward that is a function of that person's abilities.

In the infinite-horizon, discounted-cost case, each policy is associated with a discounted cost-to-go

$$J_u(x) = E\left[\sum_{t=0}^{\tau_u - 1} \alpha^t g_1(x_t) + \alpha^{\tau_u} g_0(x_{\tau_u}) \,\Big|\, x_0 = x\right].$$

We are interested in finding the policy $u$ with optimal cost-to-go

$$J^* = \min_u J_u.$$
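To make the definitions of $\tau_u$ and $J_u$ concrete, here is a minimal Monte Carlo sketch, not part of the original notes: the three-state chain, the costs, and the stopping rule below are hypothetical, chosen only so the code runs end to end.

```python
import numpy as np

def estimate_cost_to_go(P, g0, g1, u, alpha, x0, n_paths=10_000, seed=0):
    """Monte Carlo estimate of J_u(x0) for a stationary stopping rule u (0 = stop, 1 = continue)."""
    rng = np.random.default_rng(seed)
    n = len(g0)
    total = 0.0
    for _ in range(n_paths):
        x, cost, disc = x0, 0.0, 1.0
        while u[x] == 1 and disc > 1e-12:   # truncate once alpha^t is negligible
            cost += disc * g1[x]            # action 1 (continue): pay g1(x), then transition
            x = rng.choice(n, p=P[x])
            disc *= alpha
        cost += disc * g0[x]                # action 0 (stop): pay the final cost g0(x_tau)
        total += cost
    return total / n_paths

# Hypothetical three-state instance (illustrative only).
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.5, 0.5]])
g0 = np.array([1.0, 0.0, -2.0])   # final cost of stopping in each state
g1 = np.array([0.1, 0.1, 0.1])    # per-stage cost of continuing
u  = np.array([1, 1, 0])          # stop only in state 2
print(estimate_cost_to_go(P, g0, g1, u, alpha=0.9, x0=0))
```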
Bellman's equation for the optimal stopping problem is given by

$$J^* = \min(g_0,\ g_1 + \alpha P J^*) \equiv T J^*.$$

As usual, $T$ is a maximum-norm contraction, and Bellman's equation has a unique solution, corresponding to the optimal cost-to-go function $J^*$. Moreover, $T$ is also a weighted Euclidean norm contraction.

Lemma 1 Let $\pi$ be a stationary distribution associated with $P$, i.e., $\pi^T P = \pi^T$, and let $D = \mathrm{diag}(\pi)$ be a diagonal matrix whose diagonal entries correspond to the elements of the vector $\pi$. Then, for all $\bar{J}$ and $\bar{J}'$, we have

$$\|T\bar{J} - T\bar{J}'\|_{2,D} \le \alpha \|\bar{J} - \bar{J}'\|_{2,D}.$$

Proof: Note that, for any scalars $c_1$, $c_2$ and $c_3$, we have

$$|\min(c_1, c_3) - \min(c_2, c_3)| \le |c_1 - c_2|. \qquad (1)$$

It is straightforward to show Eq. (1). Assume without loss of generality that $c_1 \le c_2$. If $c_1 \le c_2 \le c_3$ or $c_3 \le c_1 \le c_2$, Eq. (1) holds trivially. The only remaining case is $c_1 \le c_3 \le c_2$, and we have

$$|\min(c_1, c_3) - \min(c_2, c_3)| = c_3 - c_1 \le c_2 - c_1 = |c_1 - c_2|.$$

Applying (1) componentwise, we have

$$|T\bar{J} - T\bar{J}'| = |\min(g_0,\ g_1 + \alpha P \bar{J}) - \min(g_0,\ g_1 + \alpha P \bar{J}')| \le \alpha |P(\bar{J} - \bar{J}')|.$$

It follows that

$$\|T\bar{J} - T\bar{J}'\|_{2,D} \le \alpha \|P(\bar{J} - \bar{J}')\|_{2,D} \le \alpha \|\bar{J} - \bar{J}'\|_{2,D},$$

where the last inequality follows from $\|PJ\|_{2,D} \le \|J\|_{2,D}$, a consequence of Jensen's inequality and the stationarity of $\pi$.

2.1 TD(λ)

Recall the Q function

$$Q(x,a) = g_a(x) + \alpha \sum_y P_a(x,y) J^*(y).$$

In general control problems, storing $Q$ may require substantially more space than storing $J$, since $Q$ is a function of state-action pairs. In the case of optimal stopping problems, however, storing $Q$ requires essentially the same space as $J$, since the Q value of stopping is trivially equal to $g_0(x)$. Hence, in the case of optimal stopping problems, we can set up TD(λ) to learn

$$Q^* = g_1 + \alpha P J^*,$$

the cost of choosing to continue and behaving optimally afterwards. Note that, assuming the one-stage costs $g_0$ and $g_1$ are known, we can derive an optimal policy from $Q^*$ by comparing the cost of continuing with the cost of stopping, which is simply $g_0$. We can express $J^*$ in terms of $Q^*$ as

$$J^* = \min(g_0, Q^*),$$

so that $Q^*$ also satisfies the recursive relation

$$Q^* = g_1 + \alpha P \min(g_0, Q^*).$$
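Since the operator involved is an $\alpha$-contraction, $Q^*$ (and hence $J^*$ and the optimal stopping rule) can be computed by simply iterating it. A small sketch, again not from the notes, reusing the hypothetical instance above; the tolerance and iteration cap are arbitrary choices.

```python
def solve_stopping(P, g0, g1, alpha, tol=1e-10, max_iter=100_000):
    """Iterate Q <- g1 + alpha P min(g0, Q) to its unique fixed point Q*.
    Each sweep shrinks the max-norm error by at least a factor alpha."""
    Q = np.zeros_like(g0, dtype=float)
    for _ in range(max_iter):
        Q_next = g1 + alpha * P @ np.minimum(g0, Q)
        done = np.max(np.abs(Q_next - Q)) < tol
        Q = Q_next
        if done:
            break
    J = np.minimum(g0, Q)          # J* = min(g0, Q*)
    u = np.where(g0 <= Q, 0, 1)    # optimal rule: stop exactly when stopping is no costlier
    return Q, J, u

Q_star, J_star, u_star = solve_stopping(P, g0, g1, alpha=0.9)
```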
This recursive relation leads to the following version of TD(λ): let

$$r_{k+1} = r_k + \gamma_k z_k \left[ g_1(x_k) + \alpha \min\big(g_0(x_{k+1}),\ \phi(x_{k+1})^T r_k\big) - \phi(x_k)^T r_k \right]$$

and

$$z_k = \alpha\lambda z_{k-1} + \phi(x_k).$$

We can rewrite TD(λ) in terms of the operator $H$, given by

$$HQ = g_1 + \alpha P \min(g_0, Q), \qquad (2)$$

and its multistep version

$$H^\lambda Q = (1-\lambda) \sum_{t=0}^{\infty} \lambda^t H^{t+1} Q,$$

as

$$r_{k+1} = r_k + \gamma_k z_k \left( H^\lambda \Phi r_k - \Phi r_k + w_k \right). \qquad (3)$$

The following property of $H$ implies that an analysis of TD(λ) for optimal stopping problems along the same lines as the analysis for autonomous systems suffices to establish convergence of the algorithm.

Lemma 2 $\|H\bar{Q} - H\bar{Q}'\|_{2,D} \le \alpha \|\bar{Q} - \bar{Q}'\|_{2,D}$.

Theorem 1 [Analogous to autonomous systems] Let $r_k$ be given by (3) and suppose that $P$ is irreducible and aperiodic. Then $r_k \to r^*$ w.p. 1, where $r^*$ satisfies $\Phi r^* = \Pi H^\lambda \Phi r^*$ and

$$\|\Phi r^* - Q^*\|_{2,D} \le \frac{1}{\sqrt{1-k^2}}\, \|\Pi Q^* - Q^*\|_{2,D}, \qquad (4)$$

where $k = \dfrac{\alpha(1-\lambda)}{1-\alpha\lambda}$.
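The update above translates nearly line for line into code. The sketch below is an illustrative rendering under stated assumptions: a single simulated trajectory, step sizes $\gamma_k = 1/(k+1)$, the noise $w_k$ arising implicitly from sampling, and a trace initialized at zero; none of these choices is prescribed by the notes.

```python
def td_lambda_stopping(P, g0, g1, phi, alpha, lam, n_steps=200_000, seed=1):
    """TD(lambda) for optimal stopping: learn r so that (Phi r)(x) approximates
    Q*(x), the cost of continuing now and behaving optimally afterwards."""
    rng = np.random.default_rng(seed)
    n, d = phi.shape
    r = np.zeros(d)
    z = np.zeros(d)                          # eligibility trace, z_{-1} = 0
    x = rng.integers(n)
    for k in range(n_steps):
        x_next = rng.choice(n, p=P[x])
        z = alpha * lam * z + phi[x]         # z_k = alpha*lambda*z_{k-1} + phi(x_k)
        td = g1[x] + alpha * min(g0[x_next], phi[x_next] @ r) - phi[x] @ r
        r = r + (1.0 / (k + 1)) * z * td     # gamma_k = 1/(k+1): one standard choice
        x = x_next
    return r
```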
We can also place a bound on the loss in performance incurred by using a policy that is greedy with respect to $\Phi r^*$, rather than the optimal policy. Specifically, consider the following stopping rule based on $\Phi r^*$:

$$\tilde{u}(x) = \begin{cases} \text{stop}, & \text{if } g_0(x) \le (\Phi r^*)(x) \\ \text{continue}, & \text{otherwise.} \end{cases}$$

The rule $\tilde{u}$ specifies a (random) stopping time $\tilde{\tau}$, given by

$$\tilde{\tau} = \min\{k : g_0(x_k) \le \phi(x_k)^T r^*\},$$

and the cost-to-go associated with $\tilde{u}$ is given by

$$\tilde{J}(x) = E\left[\sum_{t=0}^{\tilde{\tau}-1} \alpha^t g_1(x_t) + \alpha^{\tilde{\tau}} g_0(x_{\tilde{\tau}})\right].$$

The following theorem establishes a bound on the expected difference between $\tilde{J}$ and the optimal cost-to-go $J^*$.

Theorem 2

$$E[\tilde{J}(x) \mid x \sim \pi] - E[J^*(x) \mid x \sim \pi] \le \frac{2}{(1-\alpha)\sqrt{1-k^2}}\, \|\Pi Q^* - Q^*\|_{2,D}.$$

Proof: Let $\tilde{Q}$ be the cost of choosing to continue in the current time stage, followed by using policy $\tilde{u}$:

$$\tilde{Q} = g_1 + \alpha P \tilde{J}.$$

Then we have

$$E[\tilde{J}(x_0) - J^*(x_0) \mid x_0 \sim \pi] = \pi^T (\tilde{J} - J^*) = \pi^T P (\tilde{J} - J^*) = \sum_{x \in S} \pi(x) \big[P(\tilde{J} - J^*)\big](x) = \|P\tilde{J} - PJ^*\|_{1,\pi},$$

where the second equality uses the stationarity of $\pi$ and the last equality uses $\tilde{J} \ge J^*$. Moreover,

$$\|P\tilde{J} - PJ^*\|_{1,\pi} \le \|P\tilde{J} - PJ^*\|_{2,\pi} \qquad (5)$$

$$= \frac{1}{\alpha} \|g_1 + \alpha P\tilde{J} - (g_1 + \alpha PJ^*)\|_{2,\pi} = \frac{1}{\alpha} \|\tilde{Q} - Q^*\|_{2,\pi}. \qquad (6)$$

The inequality (5) follows from the fact that, for all $J$, we have

$$\|J\|_{1,\pi}^2 = \big(E[|J(x)| : x \sim \pi]\big)^2 \le E[J(x)^2 : x \sim \pi] = \|J\|_{2,\pi}^2,$$

where the inequality is due to Jensen's inequality.

Now define the operators $\tilde{H}$ and $K$, given by

$$\tilde{H}Q = g_1 + \alpha P K Q, \quad \text{where } (KQ)(x) = \begin{cases} g_0(x), & \text{if } g_0(x) \le (\Phi r^*)(x) \\ Q(x), & \text{otherwise.} \end{cases}$$

Recall the definition of the operator $H$ in (2). Then it is easy to verify the following identities:

$$\tilde{H}\Phi r^* = H\Phi r^*, \qquad HQ^* = Q^*, \qquad \tilde{H}\tilde{Q} = \tilde{Q}.$$

Moreover, it is also easy to show that $\tilde{H}$ is a contraction with respect to $\|\cdot\|_{2,\pi}$. Now we have

$$\|\tilde{Q} - Q^*\|_{2,\pi} \le \|\tilde{Q} - \tilde{H}\Phi r^*\|_{2,\pi} + \|\tilde{H}\Phi r^* - Q^*\|_{2,\pi}$$
$$= \|\tilde{H}\tilde{Q} - \tilde{H}\Phi r^*\|_{2,\pi} + \|H\Phi r^* - HQ^*\|_{2,\pi}$$
$$\le \alpha \|\tilde{Q} - \Phi r^*\|_{2,\pi} + \alpha \|\Phi r^* - Q^*\|_{2,\pi}$$
$$\le \alpha \big( \|\tilde{Q} - Q^*\|_{2,\pi} + \|Q^* - \Phi r^*\|_{2,\pi} \big) + \alpha \|\Phi r^* - Q^*\|_{2,\pi}$$
$$= \alpha \|\tilde{Q} - Q^*\|_{2,\pi} + 2\alpha \|\Phi r^* - Q^*\|_{2,\pi}.$$

Thus,

$$\|\tilde{Q} - Q^*\|_{2,\pi} \le \frac{2\alpha}{1-\alpha} \|\Phi r^* - Q^*\|_{2,\pi}. \qquad (7)$$

The theorem follows from Theorem 1 and equations (6) and (7).
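A hypothetical end-to-end check chaining the earlier sketches: form the greedy rule $\tilde{u}$ from a learned $r$ and compare its simulated cost with $J^*$. With lookup-table features ($\Phi = I$) we have $\Pi Q^* = Q^*$, so Theorem 2 predicts essentially no loss.

```python
phi = np.eye(3)                              # lookup-table features: Pi Q* = Q*
r = td_lambda_stopping(P, g0, g1, phi, alpha=0.9, lam=0.7)
u_tilde = np.where(g0 <= phi @ r, 0, 1)      # stop iff g0(x) <= phi(x)' r
J_tilde = estimate_cost_to_go(P, g0, g1, u_tilde, alpha=0.9, x0=0)
print(J_tilde, J_star[0])                    # should nearly coincide in this case
```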
2.2 Discounted Cost Problems with Control

Our analysis of TD(λ) is based on comparisons with approximate value iteration:

$$\Phi r_{k+1} = \Pi T^\lambda \Phi r_k,$$

where the projection $\Pi$ is defined with respect to the weighted Euclidean norm $\|\cdot\|_{2,\pi}$ (or, equivalently, $\|\cdot\|_{2,D}$) and $\pi$ is the stationary distribution associated with the transition matrix $P$. In controlled problems, there is no single stationary distribution to be taken into account; rather, for every policy $u$, there is a stationary distribution $\pi_u$ associated with the transition probabilities $P_u$. A natural way of extending AVI to the controlled case is to consider a scheme of the form

$$\Phi r_{k+1} = \Pi_{u_k} T_{u_k} \Phi r_k, \qquad (8)$$

where $\Pi_u$ represents the projection based on $\|\cdot\|_{2,\pi_u}$ and $u_k$ is the greedy policy with respect to $\Phi r_k$. Such a scheme is a plausible approximation, for instance, to an approximate policy iteration based on TD(λ) trained with system trajectories (a code sketch of this loop appears below):

1. Select a policy $u_0$. Let $k = 0$.
2. Fit $J_{u_k} \approx \Phi r_k$ (e.g., via TD(λ) for autonomous systems).
3. Choose $u_{k+1}$ to be greedy with respect to $\Phi r_k$. Let $k = k+1$. Go back to step 2.

Note that step 2 in the approximate policy iteration presented above involves training over an infinitely long trajectory, with a single policy, in order to perform policy evaluation. Drawing inspiration from asynchronous policy iteration, one may consider performing policy updates before the policy evaluation step is complete. As it turns out, none of these algorithms is guaranteed to converge. In particular, approximate value iteration (8) is not even guaranteed to have a fixed point. For an analysis of approximate value iteration, see [1].
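For concreteness, the three-step loop above can be sketched as follows for a small finite MDP. This is an illustrative rendering, not the notes' algorithm verbatim: policy evaluation is done exactly by solving $J_u = g_u + \alpha P_u J_u$ and projecting with unweighted least squares, rather than by running TD(λ) on trajectories, and, per the discussion above, the loop carries no convergence guarantee.

```python
import numpy as np

def approx_policy_iteration(P_a, g_a, Phi, alpha, n_iters=50):
    """Steps 1-3 above, with exact evaluation in place of trajectory-based TD(lambda).
    P_a[a] is the transition matrix and g_a[a] the one-stage cost vector of action a."""
    n_actions, n = len(P_a), P_a[0].shape[0]
    u = np.zeros(n, dtype=int)                               # step 1: initial policy u_0
    r = None
    for _ in range(n_iters):
        P_u = np.array([P_a[u[x]][x] for x in range(n)])     # rows follow the current policy
        g_u = np.array([g_a[u[x]][x] for x in range(n)])
        J_u = np.linalg.solve(np.eye(n) - alpha * P_u, g_u)  # step 2: J_u = g_u + alpha P_u J_u
        r = np.linalg.lstsq(Phi, J_u, rcond=None)[0]         # ... fitted as Phi r ~ J_u
        J_hat = Phi @ r
        Q = np.stack([g_a[a] + alpha * P_a[a] @ J_hat for a in range(n_actions)])
        u = np.argmin(Q, axis=0)                             # step 3: greedy w.r.t. Phi r
    return u, r
```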
A special situation where AVI and TD(λ) are guaranteed to converge occurs when the basis functions are constant over partitions of the state space.

Theorem 3 Suppose that $\phi_i(x) = 1\{x \in A_i\}$, where $A_i \cap A_j = \emptyset$ for all $i \ne j$. Then

$$\Phi r_{k+1} = \Pi T \Phi r_k$$

converges for any Euclidean projection $\Pi$.

Proof: We will show that, if $\Pi$ is a Euclidean projection and the $\phi_i$ satisfy the assumption of the theorem, then $\Pi$ is a maximum-norm nonexpansion:

$$\|\Pi \bar{J} - \Pi \bar{J}'\|_\infty \le \|\bar{J} - \bar{J}'\|_\infty.$$

Let $(\Pi \bar{J})(x) = \bar{K}_i$ if $x \in A_i$, where

$$\bar{K}_i = \arg\min_r \sum_{x \in A_i} w(x) \big( \bar{J}(x) - r \big)^2.$$

Thus

$$\bar{K}_i = E[\bar{J}(x) \mid x \sim \bar{w}_i], \quad \text{where } \bar{w}_i(x) = \frac{w(x)}{\sum_{x' \in A_i} w(x')}.$$

Therefore

$$|(\Pi \bar{J})(x) - (\Pi \bar{J}')(x)| = \big| E[\bar{J}(x) - \bar{J}'(x) \mid x \sim \bar{w}_i] \big| \le \|\bar{J} - \bar{J}'\|_\infty.$$

It follows that $\Pi T$ is a maximum-norm contraction, which ensures convergence of approximate value iteration.

References

[1] D.P. de Farias and B. Van Roy. On the existence of fixed points for approximate value iteration and temporal-difference learning. Journal of Optimization Theory and Applications, 105(3), 2000.