Markov Decision Processes Chapter 17. Mausam
1 Markov Decision Processes Chapter 17 Mausam
2 Planning Agent. [diagram: an agent chooses "What action next?" while exchanging percepts and actions with its environment] Dimensions of a planning problem: Static vs. Dynamic; Fully vs. Partially Observable; Deterministic vs. Stochastic; Perfect vs. Noisy percepts; Instantaneous vs. Durative actions.
3 Classical Planning. [diagram: the same agent-environment loop] Static, Fully Observable, Deterministic, Instantaneous actions, Perfect percepts.
4 Stochastic Planning: MDPs. [diagram: the same agent-environment loop] Static, Fully Observable, Stochastic, Instantaneous actions, Perfect percepts.
5 MDP vs. Decision Theory. Decision theory: episodic decisions. MDP: sequential decisions.
6 Markov Decision Process (MDP). S: a set of states (possibly factored into state variables: a factored MDP); A: a set of actions; T(s,a,s′): transition model; C(s,a,s′): cost model; G: a set of goals (absorbing or non-absorbing); s0: start state; γ: discount factor; R(s,a,s′): reward model.
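As a concrete reference point, these components can be written down as a small data structure. A minimal sketch (the field names and dictionary encoding are my own, not from the chapter):

```python
from dataclasses import dataclass

@dataclass
class MDP:
    """A goal-directed MDP <S, A, T, C, G>.

    outcomes[(s, a)] is a list of (s_next, T(s,a,s'), C(s,a,s')) triples,
    one per possible successor s'.
    """
    states: frozenset
    actions: dict        # s -> set of actions applicable in s
    outcomes: dict       # (s, a) -> [(s_next, prob, cost), ...]
    goals: frozenset     # absorbing goal states

# The deterministic two-state example from the policy-evaluation slides below:
tiny = MDP(
    states=frozenset({"s0", "s1", "sg"}),
    actions={"s0": {"a0"}, "s1": {"a1"}},
    outcomes={("s0", "a0"): [("s1", 1.0, 5.0)],
              ("s1", "a1"): [("sg", 1.0, 1.0)]},
    goals=frozenset({"sg"}),
)
```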
7 Objective of an MDP. Find a policy π: S → A which optimizes: minimizes expected cost to reach a goal, or maximizes expected reward, or maximizes expected (reward minus cost); discounted or undiscounted; given a finite, infinite, or indefinite horizon; assuming full observability.
8 Role of Discount Factor (γ). Keeps the total reward/total cost finite, which is useful for infinite horizon problems (for γ < 1 the discounted total reward is bounded by r_max/(1−γ)). Intuition (economics): money today is worth more than money tomorrow. Total reward: r1 + γr2 + γ²r3 + … Total cost: c1 + γc2 + γ²c3 + …
9 Examples of MDPs. Goal-directed, indefinite horizon, cost-minimization MDP <S, A, T, C, G, s0>: most often studied in the planning and graph theory communities. Infinite horizon, discounted reward-maximization MDP <S, A, T, R, γ>: most often studied in the machine learning, economics, and operations research communities; the most popular model. Oversubscription planning (non-absorbing goals, reward maximization) MDP <S, A, T, G, R, s0>: a relatively recent model.
10 Acyclic vs. Cyclic MDPs. [figures: two MDPs rooted at P, with actions a and b leading through states Q, R, S, T and action c to goal G; in the second MDP, action a can loop back to P] C(a) = 5, C(b) = 10, C(c) = 1. Acyclic case: expectimin works; V(Q) = V(R) = V(S) = V(T) = 1, V(P) = 6 via action a. Cyclic case: expectimin doesn't work (infinite loop); V(R) = V(S) = V(T) = 1 and Q(P,b) = 11, but Q(P,a) = ? Suppose I decide to take a in P: Q(P,a) = 5 + 0.4·1 + 0.6·Q(P,a), and solving this self-referential equation gives Q(P,a) = 13.5.
11 Policy Evaluation. Given a policy π: compute V^π. V^π(s): expected cost of reaching the goal while following π.
12 Deterministic MDPs. Policy graph for π, where π(s0) = a0 and π(s1) = a1: s0 → (a0, C=5) → s1 → (a1, C=1) → s_g. V^π(s1) = 1, V^π(s0) = 6: add up the costs on the path to the goal.
13 Acyclic MDPs. Policy graph for π: a0 in s0 reaches s1 with Pr=0.6 at C=5 and s2 with Pr=0.4 at C=2; a1 in s1 reaches s_g at C=1; a2 in s2 reaches s_g at C=4. V^π(s1) = 1, V^π(s2) = 4, V^π(s0) = 0.6(5+1) + 0.4(2+4) = 6: a backward pass in reverse topological order.
14 General MDPs can be cyclic! Same graph, except a2 in s2 now reaches s_g with Pr=0.7 at C=4 and loops back to s0 with Pr=0.3 at C=3. V^π(s1) = 1, but V^π(s2) = ? (depends on V^π(s0)) and V^π(s0) = ? (depends on V^π(s2)): we cannot do a simple single pass.
15 General SSPs can be cyclic! V^π(s_g) = 0; V^π(s1) = 1 + V^π(s_g) = 1; V^π(s2) = 0.7(4 + V^π(s_g)) + 0.3(3 + V^π(s0)); V^π(s0) = 0.6(5 + V^π(s1)) + 0.4(2 + V^π(s2)). A simple system of linear equations.
16 Policy Evaluation (Approach 1): solve the system of linear equations. V^π(s) = 0 if s ∈ G; otherwise V^π(s) = Σ_{s′∈S} T(s, π(s), s′) [C(s, π(s), s′) + V^π(s′)]. |S| variables, O(|S|³) running time.
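A minimal sketch of this approach on the cyclic example above, rewriting the system as (I − T_π)V = c̄_π and solving it directly (numpy assumed; the state ordering is mine):

```python
import numpy as np

# States: index 0 = s0, 1 = s1, 2 = s2; the goal sg is folded into the constants.
# Fixed policy pi from the example: a0 in s0, a1 in s1, a2 in s2.
# T_pi[i][j] = probability of moving from state i to non-goal state j under pi.
T_pi = np.array([[0.0, 0.6, 0.4],    # s0 -a0-> s1 (0.6), s2 (0.4)
                 [0.0, 0.0, 0.0],    # s1 -a1-> sg (1.0)
                 [0.3, 0.0, 0.0]])   # s2 -a2-> sg (0.7), s0 (0.3)
# c_pi[i] = expected immediate cost of pi's action in state i.
c_pi = np.array([0.6 * 5 + 0.4 * 2,   # s0
                 1.0 * 1,             # s1
                 0.7 * 4 + 0.3 * 3])  # s2
# V = c_pi + T_pi V  <=>  (I - T_pi) V = c_pi
V = np.linalg.solve(np.eye(3) - T_pi, c_pi)
print(dict(zip(["s0", "s1", "s2"], V.round(3))))
```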
17 Iterative Policy Evaluation. [figure: the cyclic example again, with successive iterates of V^π(s2) and V^π(s0) converging toward the solution]
18 Policy Evaluation (Approach 2). V^π(s) = Σ_{s′∈S} T(s, π(s), s′) [C(s, π(s), s′) + V^π(s′)]. Iterative refinement: V^π_n(s) ← Σ_{s′∈S} T(s, π(s), s′) [C(s, π(s), s′) + V^π_{n−1}(s′)].
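A sketch of the iterative refinement on the same example (the outcome encoding and tolerance are my own choices; updates are done in place):

```python
# Iterative policy evaluation on the cyclic example; each entry of an
# outcome list is (next_state, probability, cost) under the fixed policy.
outcomes = {
    "s0": [("s1", 0.6, 5.0), ("s2", 0.4, 2.0)],   # action a0
    "s1": [("sg", 1.0, 1.0)],                     # action a1
    "s2": [("sg", 0.7, 4.0), ("s0", 0.3, 3.0)],   # action a2
}
V = {s: 0.0 for s in ["s0", "s1", "s2", "sg"]}    # V(sg) stays 0

for n in range(1000):
    residual = 0.0
    for s, outs in outcomes.items():
        backup = sum(p * (c + V[t]) for t, p, c in outs)
        residual = max(residual, abs(backup - V[s]))
        V[s] = backup
    if residual < 1e-6:        # epsilon-consistency: stop when iterates agree
        break
print({s: round(v, 3) for s, v in V.items()})
```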
19 Iterative Policy Evaluation (pseudocode): repeat the refinement above over all states at each iteration n; terminate when consecutive iterates differ by less than ε (the ε-consistency termination condition).
20 Policy Evaluation → Value Iteration (Bellman Equations for MDP₁ <S, A, T, C, G, s0>). Define V*(s), the optimal cost, as the minimum expected cost to reach a goal from this state. V* should satisfy: V*(s) = 0 if s ∈ G; otherwise V*(s) = min_{a∈A} Σ_{s′∈S} T(s, a, s′) [C(s, a, s′) + V*(s′)]. The inner sum is Q*(s,a), so V*(s) = min_a Q*(s,a).
21 Bellman Equations for MDP₂ <S, A, T, R, s0, γ>. Define V*(s), the optimal value, as the maximum expected discounted reward from this state. V* should satisfy: V*(s) = max_{a∈A} Σ_{s′∈S} T(s, a, s′) [R(s, a, s′) + γ V*(s′)].
22 Fixed Point Computation in VI. V(s) = min_{a∈A} Σ_{s′∈S} T(s, a, s′) [C(s, a, s′) + V(s′)]. Iterative refinement: V_n(s) ← min_{a∈A} Σ_{s′∈S} T(s, a, s′) [C(s, a, s′) + V_{n−1}(s′)]. Unlike policy evaluation, the min makes this non-linear.
23 Example. [figure: an MDP with states s0…s4 and goal s_g; actions a00, a01, a1, a20, a21, a3, a40, a41; costs C=2 and C=5 on two of the actions, and one stochastic action with outcomes Pr=0.6/0.4]
24 Bellman Backup (at s4, given V0(s_g) = 0 and V0(s3) = 2; a40 costs 5 and leads to s_g; a41 costs 2 and leads to s_g with Pr=0.6 and to s3 with Pr=0.4). Q1(s4, a40) = 5 + V0(s_g) = 5. Q1(s4, a41) = 2 + 0.6·V0(s_g) + 0.4·V0(s3) = 2 + 0.4·2 = 2.8. V1(s4) = min(5, 2.8) = 2.8; the greedy action is a_greedy = a41.
25 Value Iteration [Bellman 57]. No restriction on the initial value function; repeat Bellman backups over all states (iteration n) until the ε-consistency termination condition holds.
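A self-contained sketch of VI with an ε-consistency check, on the small cyclic example from the policy-evaluation slides plus one made-up alternative action (a2b) in s2 so the min over actions matters:

```python
# Value iteration sketch; A[s][a] is a list of (next_state, prob, cost).
A = {
    "s0": {"a0": [("s1", 0.6, 5.0), ("s2", 0.4, 2.0)]},
    "s1": {"a1": [("sg", 1.0, 1.0)]},
    "s2": {"a2":  [("sg", 0.7, 4.0), ("s0", 0.3, 3.0)],
           "a2b": [("sg", 1.0, 6.0)]},   # hypothetical alternative action
}
V = {s: 0.0 for s in ["s0", "s1", "s2", "sg"]}

def q(s, a):
    # Q(s,a) with respect to the current value function V
    return sum(p * (c + V[t]) for t, p, c in A[s][a])

for n in range(1000):
    residual = 0.0
    for s in A:
        best = min(q(s, a) for a in A[s])   # Bellman backup: min over actions
        residual = max(residual, abs(best - V[s]))
        V[s] = best
    if residual < 1e-6:                     # epsilon-consistency
        break
greedy = {s: min(A[s], key=lambda a: q(s, a)) for s in A}
print({s: round(V[s], 3) for s in A}, greedy)
```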
26 Example (all actions cost 1 unless otherwise stated). [figure and table: the example MDP above, with a table of iterates n, V_n(s0), V_n(s1), V_n(s2), V_n(s3), V_n(s4); the numeric entries were not captured in this transcription]
27 Comments. A decision-theoretic algorithm; dynamic programming; a fixed point computation; a probabilistic version of the Bellman-Ford algorithm for shortest path computation. MDP₁ is the Stochastic Shortest Path problem. Time complexity: one iteration is O(|S|² |A|); the number of iterations is poly(|S|, |A|, 1/(1−γ)). Space complexity: O(|S|).
28 Monotonicity. For all n > k: if V_k ≤ V* (componentwise) then V_n ≤ V* (V_n is monotonic from below); if V_k ≥ V* then V_n ≥ V* (V_n is monotonic from above).
29 Changing the Search Space. Value iteration: search in value space, then compute the resulting policy. Policy iteration: search in policy space, then compute the resulting value.
30 Policy Iteration [Howard 60]. Start with an arbitrary initial policy π0. Repeat: Policy evaluation: compute V_{n+1}, the evaluation of π_n (costly: O(n³)); Policy improvement: for all states s, compute π_{n+1}(s) = argmax_{a∈Ap(s)} Q_{n+1}(s,a); until π_{n+1} = π_n. Advantages: searches a finite (policy) space as opposed to an uncountably infinite (value) space, so convergence is faster; all other properties follow. Modified policy iteration: approximate the evaluation by value iteration using the fixed policy.
31 Modified Policy Iteration. Start with an arbitrary initial policy π0. Repeat: Policy evaluation: compute V_{n+1}, an approximate evaluation of π_n (a few backup sweeps); Policy improvement: for all states s, compute π_{n+1}(s) = argmax_{a∈Ap(s)} Q_{n+1}(s,a); until π_{n+1} = π_n. Advantage: probably the most competitive synchronous dynamic programming algorithm.
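A sketch of modified policy iteration on the same toy example. This is the cost formulation, so the improvement step is an argmin (the slides write argmax for the reward formulation); the action a2b and the 20-sweep evaluation budget are my own choices:

```python
# Modified policy iteration: evaluate the current policy with a fixed number
# of backup sweeps instead of an exact linear solve, then improve greedily.
A = {
    "s0": {"a0": [("s1", 0.6, 5.0), ("s2", 0.4, 2.0)]},
    "s1": {"a1": [("sg", 1.0, 1.0)]},
    "s2": {"a2":  [("sg", 0.7, 4.0), ("s0", 0.3, 3.0)],
           "a2b": [("sg", 1.0, 6.0)]},   # hypothetical alternative action
}
V = {s: 0.0 for s in list(A) + ["sg"]}
pi = {s: next(iter(A[s])) for s in A}    # arbitrary initial policy pi_0

def q(s, a):
    return sum(p * (c + V[t]) for t, p, c in A[s][a])

while True:
    for _ in range(20):                  # approximate evaluation of pi_n
        for s in A:
            V[s] = q(s, pi[s])
    new_pi = {s: min(A[s], key=lambda a: q(s, a)) for s in A}  # improvement
    if new_pi == pi:                     # until pi_{n+1} = pi_n
        break
    pi = new_pi
print(pi, {s: round(V[s], 2) for s in A})
```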
32 Applications. Stochastic games; robotics: navigation, helicopter maneuvers; finance: options, investments; communication networks; medicine: radiation planning for cancer; controlling workflows; optimizing bidding decisions in auctions; traffic flow optimization; aircraft queueing for landing; airline meal provisioning; optimizing software on mobiles; forest firefighting.
33 VI → Asynchronous VI. Is backing up all states in every iteration essential? No! States may be backed up any number of times, in any order. As long as no state gets starved, the convergence properties still hold!
34 Residual wrt Value Function V (Res_V). Residual at s with respect to V: the magnitude of the change in V(s) after one Bellman backup at s, i.e. Res_V(s) = |V(s) − min_{a∈A} Σ_{s′∈S} T(s, a, s′) [C(s, a, s′) + V(s′)]|. Residual with respect to V: the max residual, Res_V = max_s Res_V(s). Res_V < ε is ε-consistency.
35 (General) Asynchronous VI. [pseudocode figure not captured in this transcription]
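A sketch of asynchronous VI with residual-based termination; the random backup order is just one of many valid schedules, and the batch size is arbitrary:

```python
import random

# Asynchronous VI: back up one state at a time, in any order; as long as
# no state is starved, V still converges to V*.
A = {
    "s0": {"a0": [("s1", 0.6, 5.0), ("s2", 0.4, 2.0)]},
    "s1": {"a1": [("sg", 1.0, 1.0)]},
    "s2": {"a2": [("sg", 0.7, 4.0), ("s0", 0.3, 3.0)]},
}
V = {"s0": 0.0, "s1": 0.0, "s2": 0.0, "sg": 0.0}

def backup(s):
    """One Bellman backup at s; returns the residual Res_V(s)."""
    best = min(sum(p * (c + V[t]) for t, p, c in outs)
               for outs in A[s].values())
    res = abs(V[s] - best)
    V[s] = best
    return res

while True:
    for _ in range(50):                      # back up random states
        backup(random.choice(list(A)))
    if max(backup(s) for s in A) < 1e-6:     # Res_V < epsilon over all states
        break
print({s: round(V[s], 3) for s in A})
```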
36 Prioritization of Bellman Backups. Are all backups equally important? Can we avoid some backups? Can we schedule the backups more appropriately?
37 Useless Backups? [figure and table: the running example with its table of iterates V_n(s0)…V_n(s4); numeric entries not captured in this transcription]
38 Useless Backups? [the same figure and table, revisited to point out backups that did not change any value]
39 Asynchronous VI → Prioritized VI.
40 Which state to prioritize? [figure: three states s1, s2, s3 and their successors s′, whose values have changed by 0, 2, and 5 respectively, reachable with different probabilities] s1 has zero priority; s2 has higher priority; s3 has low priority.
41 Prioritized Sweeping. When the value of s′ changes, update the priority of each predecessor s: priority_PS(s) ← max{ priority_PS(s), max_{a∈A} T(s, a, s′) · Res_V(s′) }. Convergence [Li & Littman 08]: prioritized sweeping converges to the optimal value function in the limit if all initial priorities are non-zero (it does not need synchronous VI iterations).
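A heap-based sketch of prioritized sweeping on the toy example; duplicate heap entries stand in for the max-update of the priority formula, and the tolerance and encoding are mine:

```python
import heapq

A = {
    "s0": {"a0": [("s1", 0.6, 5.0), ("s2", 0.4, 2.0)]},
    "s1": {"a1": [("sg", 1.0, 1.0)]},
    "s2": {"a2": [("sg", 0.7, 4.0), ("s0", 0.3, 3.0)]},
}
V = {"s0": 0.0, "s1": 0.0, "s2": 0.0, "sg": 0.0}

# preds[s'][s] = max_a T(s, a, s'), precomputed from the transition model.
preds = {}
for s, acts in A.items():
    for outs in acts.values():
        for t, p, c in outs:
            preds.setdefault(t, {})
            preds[t][s] = max(preds[t].get(s, 0.0), p)

# heapq is a min-heap, so priorities are negated; all start non-zero.
heap = [(-1.0, s) for s in A]
heapq.heapify(heap)
while heap:
    _, s = heapq.heappop(heap)
    best = min(sum(p * (c + V[t]) for t, p, c in outs)
               for outs in A[s].values())
    res = abs(V[s] - best)                 # Res_V(s) for this backup
    V[s] = best
    if res > 1e-6:                         # push predecessors of s
        for pred, prob in preds.get(s, {}).items():
            heapq.heappush(heap, (-prob * res, pred))
print({s: round(V[s], 3) for s in A})
```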
42 Prioritized Sweeping (worked example). [figure and table: the running example, with columns V(s0)…V(s4) and rows for the initial V, priorities, and successive updates; the numeric entries were not captured in this transcription]
43 Limitations of VI / Extensions. Scalability: memory is linear in the size of the state space, and time is at least polynomial or more. Polynomial is good, no? But state spaces are usually huge: with n state variables there are 2ⁿ states! The curse of dimensionality!
44 Heuristic Search. Insight 1: use knowledge of a start state to save on computation (analogous to going from all-sources shortest path to single-source shortest path). Insight 2: use additional knowledge in the form of a heuristic function (analogous to going from dfs/bfs to A*).
45 Model. An MDP with an additional start state s0, denoted MDP_{s0}. What is the solution to an MDP_{s0}? A full policy (S → A)? Are states that are not reachable from s0 relevant? What about states that are never visited (even though reachable)?
46 Partial Policy. Define a partial policy as π: S′ → A, where S′ ⊆ S. Define a partial policy closed w.r.t. a state s as a partial policy π_s defined for all states s′ reachable by π_s starting from s.
47 Partial policy closed wrt s0. [figure: a graph over states s0…s9 with goal s_g]
48 Is this policy closed wrt s0? π_{s0}(s0) = a1; π_{s0}(s1) = a2; π_{s0}(s2) = a1
49 Is this policy closed wrt s0? Not yet: following it from s0 can reach a state (s6) where the policy is undefined
50 Adding π_{s0}(s6) = a1 makes the policy closed wrt s0
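A small sketch of the closure test implied by these slides. The successor sets below are assumptions chosen to reproduce the behavior of the example (the slides' actual graph is only partially recoverable):

```python
# A partial policy is closed wrt s0 iff every state reachable from s0 by
# following it has an assigned action.
succ = {                       # (state, action) -> possible successors (assumed)
    ("s0", "a1"): ["s1", "s2"],
    ("s1", "a2"): ["s2", "s6"],
    ("s2", "a1"): ["sg"],
    ("s6", "a1"): ["sg"],
}

def closed_wrt(pi, s0, goals=frozenset({"sg"})):
    stack, seen = [s0], set()
    while stack:
        s = stack.pop()
        if s in seen or s in goals:
            continue
        seen.add(s)
        if s not in pi:
            return False, s    # reachable but unassigned: not closed
        stack.extend(succ[(s, pi[s])])
    return True, None

pi = {"s0": "a1", "s1": "a2", "s2": "a1"}
print(closed_wrt(pi, "s0"))    # (False, 's6'): s6 is reachable, undefined
pi["s6"] = "a1"
print(closed_wrt(pi, "s0"))    # (True, None)
```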
51 Policy Graph of π_{s0}. [figure: the subgraph traversed by π_{s0}(s0) = a1, π_{s0}(s1) = a2, π_{s0}(s2) = a1, π_{s0}(s6) = a1, ending at s_g]
52 Greedy Policy Graph. Define the greedy policy: π^V(s) = argmin_a Q^V(s,a). Define the greedy partial policy rooted at s0: the partial policy rooted at s0 that is greedy, denoted π^V_{s0}. Define the greedy policy graph: the policy graph of π^V_{s0}, denoted G^V_{s0}.
53 Heuristic Function. h(s): S → ℝ estimates V*(s); it gives an indication of the goodness of a state; it is usually used in initialization, V0(s) = h(s); it helps us avoid seemingly bad states. Define an admissible heuristic as an optimistic one: h(s) ≤ V*(s).
54 A General Scheme for Heuristic Search in MDPs. Two (over)simplified intuitions: focus on states in the greedy policy wrt V rooted at s0, and focus on states with residual > ε. Find & Revise: repeat { find a state satisfying the two properties above; revise: perform a Bellman backup there } until no such state remains.
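A sketch of the Find & Revise loop on the toy example, with V initialized to an admissible (here all-zero) heuristic; the search over the greedy graph is a plain DFS and the tolerance is mine:

```python
A = {
    "s0": {"a0": [("s1", 0.6, 5.0), ("s2", 0.4, 2.0)]},
    "s1": {"a1": [("sg", 1.0, 1.0)]},
    "s2": {"a2": [("sg", 0.7, 4.0), ("s0", 0.3, 3.0)]},
}
h = {"s0": 0.0, "s1": 0.0, "s2": 0.0, "sg": 0.0}   # admissible heuristic
V = dict(h)                                        # V0 = h

def q(s, a):
    return sum(p * (c + V[t]) for t, p, c in A[s][a])

def greedy(s):
    return min(A[s], key=lambda a: q(s, a))

def find_inconsistent(s0, eps=1e-6):
    """DFS the greedy graph from s0 for a state with residual > eps."""
    stack, seen = [s0], set()
    while stack:
        s = stack.pop()
        if s == "sg" or s in seen:
            continue
        seen.add(s)
        if abs(V[s] - q(s, greedy(s))) > eps:
            return s
        stack.extend(t for t, p, c in A[s][greedy(s)])
    return None

s = find_inconsistent("s0")
while s is not None:                 # FIND
    V[s] = q(s, greedy(s))           # REVISE: one Bellman backup
    s = find_inconsistent("s0")
print({s: round(V[s], 3) for s in A})
```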
55 From A* to LAO*. A*: regular graph; solution: a (shortest) path. AO* [Nilsson 71]: acyclic AND/OR graph; solution: an (expected shortest) acyclic graph. LAO* [Hansen & Zilberstein 98]: cyclic AND/OR graph; solution: an (expected shortest) cyclic graph. All three algorithms are able to make effective use of reachability information!
56 LAO* Family. Add s0 to the fringe and to the greedy policy graph. Repeat: FIND: expand some states on the fringe (in the greedy graph); initialize all new states with their heuristic value; choose a subset of affected states; REVISE: perform some Bellman backups on this subset; recompute the greedy graph; until the greedy graph has no fringe and the residuals in the greedy graph are small. Output the greedy graph as the final policy.
57 LAO*. Add s0 to the fringe and to the greedy policy graph. Repeat: FIND: expand the best state s on the fringe (in the greedy graph); initialize all new states with their heuristic value; let the subset be all states in the expanded graph that can reach s; REVISE: perform VI on this subset; recompute the greedy graph; until the greedy graph has no fringe and the residuals in the greedy graph are small. Output the greedy graph as the final policy.
58-73 LAO* trace. [figures: sixteen animation frames on the example graph over s0…s8 and s_g. The trace starts by adding s0 to the fringe and the greedy graph with V(s0) = h(s0); each subsequent frame expands a fringe state of the greedy graph, initializes newly added states to their heuristic values h, performs VI on the expanded states that can reach the expanded state, and recomputes the greedy graph. Entries marked h are gradually replaced by backed-up values V, and the final frames output the greedy graph as the final policy.]
74 Note: s4 was never expanded and s8 was never touched.
75 Extensions. Heuristic search + dynamic programming: AO*, LAO*, RTDP. Factored MDPs: add planning-graph-style heuristics; use goal regression to generalize better. Hierarchical MDPs: a hierarchy of sub-tasks and actions, to scale better. Reinforcement learning: learning the probabilities and rewards; acting while learning; connections to psychology. Partially Observable Markov Decision Processes: noisy sensors and a partially observable environment; popular in robotics.