Infinite-Horizon Discounted Markov Decision Processes
1 Infinite-Horizon Discounted Markov Decision Processes

Dan Zhang
Leeds School of Business
University of Colorado at Boulder
Spring 2012
2 Outline

- The expected total discounted reward
- Policy evaluation
- Optimality equations
- Value iteration
- Policy iteration
- Linear programming
3 Expected Total Reward Criterion

Let $\pi = (d_1, d_2, \dots) \in \Pi^{HR}$. Starting at a state $s$, using policy $\pi$ leads to a sequence of state-action pairs $\{(X_t, Y_t)\}$. The sequence of rewards is given by $\{R_t \equiv r_t(X_t, Y_t) : t = 1, 2, \dots\}$. Let $\lambda \in [0, 1)$ be the discount factor. The expected total discounted reward of policy $\pi$ starting in state $s$ is given by

$$v_\lambda^\pi(s) \equiv \lim_{N \to \infty} E_s^\pi\left[\sum_{t=1}^{N} \lambda^{t-1} r(X_t, Y_t)\right].$$

The limit above exists when $r(\cdot)$ is bounded; i.e., $\sup_{s \in S,\, a \in A_s} |r(s, a)| = M < \infty$.
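Why boundedness guarantees existence of the limit (a short supplementary argument, implicit on the slide): the tail of the series is dominated by a geometric series,

$$\left| E_s^\pi\left[\sum_{t=N+1}^{\infty} \lambda^{t-1} r(X_t, Y_t)\right] \right| \le M \sum_{t=N+1}^{\infty} \lambda^{t-1} = \frac{M \lambda^{N}}{1 - \lambda} \to 0 \quad \text{as } N \to \infty,$$

so the partial sums form a Cauchy sequence and converge for every $s$ and every $\pi$.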
4 Expected Total Reward Criterion

Under suitable conditions (such as the boundedness of $r(\cdot)$), we have

$$v_\lambda^\pi(s) \equiv \lim_{N \to \infty} E_s^\pi\left[\sum_{t=1}^{N} \lambda^{t-1} r(X_t, Y_t)\right] = E_s^\pi\left[\sum_{t=1}^{\infty} \lambda^{t-1} r(X_t, Y_t)\right].$$

Let

$$v^\pi(s) \equiv E_s^\pi\left[\sum_{t=1}^{\infty} r(X_t, Y_t)\right].$$

We have $v^\pi(s) = \lim_{\lambda \uparrow 1} v_\lambda^\pi(s)$ whenever $v^\pi(s)$ exists.
5 Optimality Criteria

A policy $\pi^*$ is discount optimal for $\lambda \in [0, 1)$ if

$$v_\lambda^{\pi^*}(s) \ge v_\lambda^\pi(s), \quad \forall s \in S,\; \pi \in \Pi^{HR}.$$

The value of a discounted MDP is defined by

$$v_\lambda^*(s) \equiv \sup_{\pi \in \Pi^{HR}} v_\lambda^\pi(s), \quad s \in S.$$

Let $\pi^*$ be a discount optimal policy. Then $v_\lambda^{\pi^*}(s) = v_\lambda^*(s)$ for all $s \in S$.
6 Vector Notation for Markov Decision Processes

Let $V$ denote the set of bounded real-valued functions on $S$ with componentwise partial order and norm $\|v\| \equiv \sup_{s \in S} |v(s)|$. The corresponding matrix norm is given by

$$\|H\| \equiv \sup_{s \in S} \sum_{j \in S} |H(j \mid s)|,$$

where $H(j \mid s)$ denotes the $(s, j)$-th component of $H$. Let $e \in V$ denote the function with all components equal to 1; that is, $e(s) = 1$ for all $s \in S$.
7 Vector Notation for Markov Decision Processes

For $d \in D^{MD}$, let $r_d(s) \equiv r(s, d(s))$ and $p_d(j \mid s) \equiv p(j \mid s, d(s))$. Similarly, for $d \in D^{MR}$, let

$$r_d(s) \equiv \sum_{a \in A_s} q_{d(s)}(a)\, r(s, a), \qquad p_d(j \mid s) \equiv \sum_{a \in A_s} q_{d(s)}(a)\, p(j \mid s, a).$$

Let $r_d$ denote the $|S|$-vector with $s$-th component $r_d(s)$, and $P_d$ the $|S| \times |S|$ matrix with $(s, j)$-th entry $p_d(j \mid s)$. We refer to $r_d$ as the reward vector and $P_d$ as the transition probability matrix.
8 Vector Notation for Markov Decision Processes

Let $\pi = (d_1, d_2, \dots) \in \Pi^{MR}$. The $(s, j)$ component of the $t$-step transition probability matrix $P_\pi^t$ satisfies

$$P_\pi^t(j \mid s) = [P_{d_1} \cdots P_{d_{t-1}} P_{d_t}](j \mid s) = P^\pi(X_{t+1} = j \mid X_1 = s).$$

For $v \in V$, $E_s^\pi[v(X_t)] = \sum_{j \in S} P_\pi^{t-1}(j \mid s)\, v(j)$. We also have

$$v_\lambda^\pi = \sum_{t=1}^{\infty} \lambda^{t-1} P_\pi^{t-1} r_{d_t}.$$
9 Assumptions

- Stationary rewards and transition probabilities: $r(s, a)$ and $p(j \mid s, a)$ do not vary with time.
- Bounded rewards: $|r(s, a)| \le M < \infty$.
- Discounting: $\lambda \in [0, 1)$.
- Discrete state space: $S$ is finite or countable.
10 Policy Evaluation

Theorem. Let $\pi = (d_1, d_2, \dots) \in \Pi^{HR}$. Then for each $s \in S$, there exists a policy $\pi' = (d_1', d_2', \dots) \in \Pi^{MR}$ satisfying

$$P^{\pi'}(X_t = j, Y_t = a \mid X_1 = s) = P^\pi(X_t = j, Y_t = a \mid X_1 = s), \quad \forall t.$$

Consequently, for each $\pi \in \Pi^{HR}$ and each $s \in S$, there exists a policy $\pi' \in \Pi^{MR}$ such that $v_\lambda^{\pi'}(s) = v_\lambda^\pi(s)$. It therefore suffices to consider $\Pi^{MR}$:

$$v_\lambda^*(s) = \sup_{\pi \in \Pi^{HR}} v_\lambda^\pi(s) = \sup_{\pi \in \Pi^{MR}} v_\lambda^\pi(s).$$
11 Policy Evaluation

Let $\pi = (d_1, d_2, \dots) \in \Pi^{MR}$. Then

$$v_\lambda^\pi(s) = E_s^\pi\left[\sum_{t=1}^{\infty} \lambda^{t-1} r(X_t, Y_t)\right].$$

In vector notation, we have

$$\begin{aligned} v_\lambda^\pi &= \sum_{t=1}^{\infty} \lambda^{t-1} P_\pi^{t-1} r_{d_t} = r_{d_1} + \lambda P_\pi^1 r_{d_2} + \lambda^2 P_\pi^2 r_{d_3} + \cdots \\ &= r_{d_1} + \lambda P_{d_1} r_{d_2} + \lambda^2 P_{d_1} P_{d_2} r_{d_3} + \cdots \\ &= r_{d_1} + \lambda P_{d_1}\left(r_{d_2} + \lambda P_{d_2} r_{d_3} + \cdots\right) \\ &= r_{d_1} + \lambda P_{d_1} v_\lambda^{\pi'}, \end{aligned}$$

where $\pi' = (d_2, d_3, \dots)$.
12 Policy Evaluation

When $\pi$ is stationary, $\pi = (d, d, \dots) \equiv d^\infty$ and $\pi' = \pi$. It follows that $v_\lambda^d$ satisfies

$$v_\lambda^d = r_d + \lambda P_d v_\lambda^d \equiv L_d v_\lambda^d,$$

where $L_d : V \to V$ is the (affine) operator $L_d v \equiv r_d + \lambda P_d v$.

Theorem. Suppose $\lambda \in [0, 1)$. Then for any stationary policy $d^\infty$ with $d \in D^{MR}$, $v_\lambda^d$ is a solution in $V$ of $v = r_d + \lambda P_d v$. Furthermore, $v_\lambda^d$ may be written as

$$v_\lambda^d = (I - \lambda P_d)^{-1} r_d.$$
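The theorem reduces exact policy evaluation to a linear solve. Below is a minimal NumPy sketch; the two-state data at the bottom is an illustrative assumption, not from the slides:

```python
import numpy as np

def evaluate_policy(P_d, r_d, lam):
    """Value of the stationary policy d: solve (I - lam * P_d) v = r_d.

    P_d : (n, n) transition matrix of the policy (rows sum to 1).
    r_d : (n,) reward vector of the policy.
    lam : discount factor in [0, 1), so I - lam * P_d is invertible.
    """
    n = len(r_d)
    return np.linalg.solve(np.eye(n) - lam * P_d, r_d)

# Illustrative two-state example (made-up numbers).
P_d = np.array([[0.9, 0.1],
                [0.2, 0.8]])
r_d = np.array([1.0, 2.0])
print(evaluate_policy(P_d, r_d, lam=0.95))
```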
13 Optimality Equations

For any fixed $n$, the finite-horizon optimality equation is given by

$$v^n(s) = \sup_{a \in A_s}\left\{ r(s, a) + \sum_{j \in S} \lambda p(j \mid s, a)\, v^{n+1}(j) \right\}.$$

Taking limits on both sides leads to

$$v(s) = \sup_{a \in A_s}\left\{ r(s, a) + \sum_{j \in S} \lambda p(j \mid s, a)\, v(j) \right\}.$$

The equations above, for all $s \in S$, are the optimality equations. For $v \in V$, let

$$\mathcal{L} v \equiv \sup_{d \in D^{MD}} \{ r_d + \lambda P_d v \}, \qquad L v \equiv \max_{d \in D^{MD}} \{ r_d + \lambda P_d v \}.$$
14 Optimality Equations

Proposition. For all $v \in V$ and $\lambda \in [0, 1)$,

$$\sup_{d \in D^{MD}} \{ r_d + \lambda P_d v \} = \sup_{d \in D^{MR}} \{ r_d + \lambda P_d v \}.$$

Replacing $D^{MD}$ with $D$, the optimality equation can be written as $v = \mathcal{L} v$. When the supremum above is attained for all $v \in V$, it can be written as $v = L v$.
15 Solutions of the Optimality Equations

Theorem. Suppose $v \in V$.
(i) If $v \ge \mathcal{L} v$, then $v \ge v_\lambda^*$;
(ii) If $v \le \mathcal{L} v$, then $v \le v_\lambda^*$;
(iii) If $v = \mathcal{L} v$, then $v$ is the only element of $V$ with this property and $v = v_\lambda^*$.
16 Solutions of the Optimality Equations

Let $U$ be a Banach space (a complete normed linear space). A special case is the space of bounded measurable real-valued functions. An operator $T : U \to U$ is a contraction mapping if there exists a $\lambda \in [0, 1)$ such that $\|Tv - Tu\| \le \lambda \|v - u\|$ for all $u$ and $v$ in $U$.

Theorem (Banach Fixed-Point Theorem). Suppose $U$ is a Banach space and $T : U \to U$ is a contraction mapping. Then
(i) there exists a unique $v^*$ in $U$ such that $Tv^* = v^*$;
(ii) for arbitrary $v^0 \in U$, the sequence $\{v^n\}$ defined by $v^{n+1} = Tv^n = T^{n+1} v^0$ converges to $v^*$.
17 Solutions of the Optimality Equations

Proposition. Suppose $\lambda \in [0, 1)$. Then $L$ and $\mathcal{L}$ are contraction mappings on $V$.

Theorem. Suppose $\lambda \in [0, 1)$, $S$ is finite or countable, and $r(s, a)$ is bounded. The following results hold.
(i) There exists a $v^* \in V$ satisfying $\mathcal{L} v^* = v^*$ ($L v^* = v^*$). Furthermore, $v^*$ is the only element of $V$ with this property and equals $v_\lambda^*$.
(ii) For each $d \in D^{MR}$, there exists a unique $v \in V$ satisfying $L_d v = v$. Furthermore, $v = v_\lambda^d$.
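A one-step sketch of why the contraction constant is exactly $\lambda$ (a standard argument, not spelled out on the slide): for $u, v \in V$ and any $d$, each row of $P_d$ sums to one, so $\|P_d\| = 1$ and

$$\|L_d v - L_d u\| = \lambda \|P_d (v - u)\| \le \lambda \|P_d\| \, \|v - u\| = \lambda \|v - u\|;$$

comparing maximizing (resp. nearly maximizing) decision rules for $v$ and $u$ extends the same bound to $L$ and $\mathcal{L}$.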
18 Existence of Stationary Optimal Policies

A decision rule $d^*$ is conserving if

$$d^* \in \operatorname*{argmax}_{d \in D} \{ r_d + \lambda P_d v_\lambda^* \}.$$

Theorem. If there exists a conserving decision rule or an optimal policy, then there exists a deterministic stationary policy which is optimal.
19 Value Iteration

1. Select $v^0 \in V$, specify $\epsilon > 0$, and set $n = 0$.
2. For each $s \in S$, compute $v^{n+1}(s)$ by
$$v^{n+1}(s) = \max_{a \in A_s}\left\{ r(s, a) + \sum_{j \in S} \lambda p(j \mid s, a)\, v^n(j) \right\}.$$
3. If
$$\|v^{n+1} - v^n\| < \frac{\epsilon (1 - \lambda)}{2\lambda},$$
go to step 4. Otherwise, increment $n$ by 1 and return to step 2.
4. For each $s \in S$, choose
$$d_\epsilon(s) \in \operatorname*{argmax}_{a \in A_s}\left\{ r(s, a) + \sum_{j \in S} \lambda p(j \mid s, a)\, v^{n+1}(j) \right\}$$
and stop.
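A compact NumPy sketch of the algorithm, including the $\epsilon(1-\lambda)/(2\lambda)$ stopping rule; the array layout (P indexed by action, state, next state) is an illustrative convention, not from the slides:

```python
import numpy as np

def value_iteration(P, r, lam, eps=1e-6):
    """Value iteration returning an eps-optimal value vector and greedy rule.

    P : (A, n, n) array with P[a, s, j] = p(j | s, a).
    r : (n, A) array with r[s, a] = r(s, a).
    lam : discount factor in (0, 1).
    """
    n, A = r.shape
    v = np.zeros(n)  # step 1: v^0 = 0
    while True:
        # Step 2: Q[s, a] = r(s, a) + lam * sum_j p(j | s, a) v(j)
        Q = r + lam * np.einsum('asj,j->sa', P, v)
        v_next = Q.max(axis=1)
        # Step 3: stopping rule guaranteeing an eps-optimal policy.
        if np.max(np.abs(v_next - v)) < eps * (1 - lam) / (2 * lam):
            # Step 4: greedy decision rule with respect to v^{n+1}.
            Q_next = r + lam * np.einsum('asj,j->sa', P, v_next)
            return v_next, Q_next.argmax(axis=1)
        v = v_next
```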
20 Policy Iteration

1. Set $n = 0$ and select an arbitrary decision rule $d_0 \in D$.
2. (Policy evaluation) Obtain $v^n$ by solving $(I - \lambda P_{d_n}) v = r_{d_n}$.
3. (Policy improvement) Choose $d_{n+1}$ to satisfy
$$d_{n+1} \in \operatorname*{argmax}_{d \in D} \{ r_d + \lambda P_d v^n \},$$
setting $d_{n+1} = d_n$ if possible.
4. If $d_{n+1} = d_n$, stop and set $d^* = d_n$. Otherwise, increment $n$ by 1 and return to step 2.
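With the same array conventions as above, a self-contained policy-iteration sketch follows; the tie-breaking tolerance is an implementation assumption used to realize "set $d_{n+1} = d_n$ if possible":

```python
import numpy as np

def policy_iteration(P, r, lam, tol=1e-12):
    """Policy iteration for P : (A, n, n), r : (n, A), lam in [0, 1)."""
    n, A = r.shape
    d = np.zeros(n, dtype=int)  # step 1: arbitrary initial decision rule
    while True:
        # Step 2 (policy evaluation): solve (I - lam * P_d) v = r_d.
        P_d = P[d, np.arange(n), :]
        r_d = r[np.arange(n), d]
        v = np.linalg.solve(np.eye(n) - lam * P_d, r_d)
        # Step 3 (policy improvement), keeping d_n whenever it still maximizes.
        Q = r + lam * np.einsum('asj,j->sa', P, v)
        greedy = Q.argmax(axis=1)
        d_next = np.where(Q[np.arange(n), d] >= Q.max(axis=1) - tol, d, greedy)
        # Step 4: stop when the decision rule repeats.
        if np.array_equal(d_next, d):
            return d, v
        d = d_next
```

Because each improvement step weakly increases $v$ and $D$ is finite in the finite-state, finite-action case, the loop terminates after finitely many iterations.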
21 Linear Programming

Let $\alpha(s)$ be positive scalars such that $\sum_{s \in S} \alpha(s) = 1$. The primal linear program is given by

$$\begin{aligned} \min_v \quad & \sum_{j \in S} \alpha(j)\, v(j) \\ \text{s.t.} \quad & v(s) - \sum_{j \in S} \lambda p(j \mid s, a)\, v(j) \ge r(s, a), \quad s \in S,\; a \in A_s. \end{aligned}$$

The dual linear program is given by

$$\begin{aligned} \max_x \quad & \sum_{s \in S} \sum_{a \in A_s} r(s, a)\, x(s, a) \\ \text{s.t.} \quad & \sum_{a \in A_j} x(j, a) - \sum_{s \in S} \sum_{a \in A_s} \lambda p(j \mid s, a)\, x(s, a) = \alpha(j), \quad j \in S, \\ & x(s, a) \ge 0, \quad s \in S,\; a \in A_s. \end{aligned}$$
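A sketch of the primal LP for a finite MDP using scipy.optimize.linprog (an assumed solver choice, not from the slides; linprog takes constraints in $A_{ub} x \le b_{ub}$ form, so each "$\ge$" row is negated):

```python
import numpy as np
from scipy.optimize import linprog

def solve_primal_lp(P, r, lam, alpha):
    """Primal LP: min alpha' v  s.t.  v(s) - lam * sum_j p(j|s,a) v(j) >= r(s,a).

    P : (A, n, n), r : (n, A), alpha : (n,) positive weights summing to 1.
    """
    A, n = P.shape[0], P.shape[1]
    rows, rhs = [], []
    for s in range(n):
        for a in range(A):
            row = lam * P[a, s, :].copy()  # lam * p(. | s, a)
            row[s] -= 1.0                  # negated constraint: (lam p - e_s)' v <= -r(s, a)
            rows.append(row)
            rhs.append(-r[s, a])
    res = linprog(c=alpha, A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=[(None, None)] * n)  # v is a free variable
    return res.x  # equals v_lambda^* at optimality

```

At an optimal dual solution, $x(s, a)$ is the standard discounted state-action frequency under initial distribution $\alpha$, and a stationary policy that puts positive probability only on actions with $x(s, a) > 0$ is optimal.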
More information