Point-Based Value Iteration for Constrained POMDPs
1 Point-Based Value Iteration for Constrained POMDPs
Dongho Kim, Jaesong Lee, Kee-Eung Kim (Department of Computer Science, KAIST) and Pascal Poupart (School of Computer Science, University of Waterloo). IJCAI 2011.
2 Motivation
- (Figure: agent-environment loop; the agent selects actions and receives observations while pursuing its goals.)
- Partially observable Markov decision processes (POMDPs) [Kaelbling98] model sequential decision making under partial or uncertain observations.
- A single reward function encodes the immediate utility of executing actions, so different objectives must be manually balanced into that one reward function.
- Constrained POMDPs (CPOMDPs) address problems with limited resources or multiple objectives: maximize one objective (reward) while constraining other objectives (costs).
- CPOMDPs have not received as much attention as constrained MDPs (CMDPs) [Altman99]. Exception: a DP method for finding deterministic policies [Isom08].
3 Motivation
- Resource-limited agents, e.g., a battery-equipped robot: accomplish as many goals as possible given a finite amount of energy.
- Spoken dialogue systems [Williams07]: e.g., minimize dialogue length while guaranteeing a 95% dialogue success rate.
  - Reward: -1 for each dialogue turn.
  - Cost: +1 for each unsuccessful dialogue, 0 for each successful dialogue.
  - Dialogue trajectory $s_0 \to s_1 \to s_2 \to \cdots \to s_T$, with $R = -1$, $C = 0$ at every turn, and a terminal cost $C = +1$ if the dialogue is unsuccessful ($C = 0$ if successful).
- Goal: maximize $E[\sum_t \gamma^t r_t]$ subject to $E[\sum_t \gamma^t c_t] \le \hat{c}$ (see the sketch below).
- We propose exact and approximate methods for solving CPOMDPs.
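To make the constrained objective concrete, here is a minimal Monte Carlo sketch (our illustration, not the authors' code; `policy`, `simulate_step`, and `sample_initial_state` are hypothetical stand-ins for a CPOMDP simulator and a belief-tracking policy):

```python
def evaluate(policy, simulate_step, sample_initial_state,
             gamma=0.95, horizon=100, n_rollouts=10_000):
    """Monte Carlo estimates of E[sum_t gamma^t r_t] and E[sum_t gamma^t c_t].

    `policy`, `simulate_step(s, a) -> (s', z, r, c)` and
    `sample_initial_state()` are hypothetical stand-ins for a CPOMDP simulator.
    """
    total_r = total_c = 0.0
    for _ in range(n_rollouts):
        s = sample_initial_state()
        policy.reset()                       # reset internal belief to b_0
        disc = 1.0
        for _ in range(horizon):
            a = policy.act()                 # may sample: randomized policies allowed
            s, z, r, c = simulate_step(s, a)
            policy.observe(a, z)             # belief update from (action, observation)
            total_r += disc * r
            total_c += disc * c
            disc *= gamma
    return total_r / n_rollouts, total_c / n_rollouts
```

A policy is feasible when the estimated discounted cost is at most the bound ĉ; a CPOMDP solver maximizes the estimated discounted reward over feasible, possibly randomized, policies.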
4 Suboptimality of deterministic policies in CPOMDPs
- Procrastinating student problem (states AdvisorHappy, AdvisorAngry, JobDone; initial belief $b_0 = [1,0,0]$; cost bound $\gamma < \hat{c} < 1$):
  - lazy in AdvisorHappy: stays with $p = 0.9$, moves to AdvisorAngry with $p = 0.1$; $R = 0$, $C = 0$. lazy in AdvisorAngry: stays; $R = 0$, $C = 0$.
  - work in AdvisorHappy: $R = 1$, $C = 1$, leads to JobDone. work in AdvisorAngry: $R = 0$, $C = 1$, leads to JobDone.
- Optimal deterministic policy: lazy at $t = 0$, work at $t = 1$. Value $= 0.9\gamma$, cumulative cost $= \gamma$.
- Optimal randomized policy: at $t = 0$, work with probability $\hat{c}$ and lazy with probability $1 - \hat{c}$; lazy for all $t \ge 1$. Value $= \hat{c}$, cumulative cost $= \hat{c}$ (a numeric check follows below).
- Reward and cost for executing work at each timestep $t$:

  t   belief            reward    cost
  0   [1, 0, 0]         1         1
  1   [0.9, 0.1, 0]     0.9γ      γ
  2   [0.81, 0.19, 0]   0.81γ²    γ²
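A quick numeric check of this example, with illustrative values γ = 0.95 and ĉ = 0.97 (any γ < ĉ < 1 gives the same conclusion):

```python
# Procrastinating student: randomized beats deterministic under the cost bound.
gamma, c_hat = 0.95, 0.97            # illustrative; any gamma < c_hat < 1 works

# Deterministic policy: lazy at t=0, then work at t=1.
# work pays R=1 only if the advisor is still happy (prob 0.9), but costs C=1 always.
det_value = 0.9 * gamma              # 0.855
det_cost = gamma                     # 0.95 <= c_hat, feasible

# Randomized policy: work with prob c_hat at t=0, otherwise lazy forever.
rand_value = c_hat                   # 0.97
rand_cost = c_hat                    # meets the bound exactly

assert det_cost <= c_hat and rand_cost <= c_hat
assert rand_value > det_value        # 0.97 > 0.855: randomization strictly wins
```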
5 Value iteration in CPOMDPs
- The value function of a CPOMDP is a set of α-vector pairs, $V = \{(\alpha_{i,r}, \alpha_{i,c})\}_i$, where $\alpha_{i,r}$ and $\alpha_{i,c}$ are the $i$-th vectors for cumulative reward and cumulative cost respectively. (Figure: reward α-vectors over the belief simplex, and the corresponding cost α-vectors against the cost bound ĉ.)
- Exact DP update via enumeration (sketched in code below), for every action $a$ and observation $z$:
  $\alpha^{a,z}_{i,r}(s) = R(s,a)/|Z| + \gamma \sum_{s' \in S} T(s,a,s')\, O(s',a,z)\, \alpha_{i,r}(s')$
  $\alpha^{a,z}_{i,c}(s) = C(s,a)/|Z| + \gamma \sum_{s' \in S} T(s,a,s')\, O(s',a,z)\, \alpha_{i,c}(s')$
  $V' = \bigcup_{a \in A} \bigoplus_{z \in Z} \{(\alpha^{a,z}_{i,r}, \alpha^{a,z}_{i,c})\}_i$
- This creates exponentially many α-vector pairs, $|V'| = |A|\,|V|^{|Z|}$, so pruning is needed.
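A sketch of the enumeration step in Python (our reading of the update above, assuming dense arrays `T[a,s,s']`, `O[a,s',z]`, `R[s,a]`, `C[s,a]`; the array layout and names are our assumptions, not the authors' code):

```python
import numpy as np
from itertools import product

def dp_update_enumerate(V, T, O, R, C, gamma):
    """One exact DP backup for a CPOMDP value function.

    V is a list of (alpha_r, alpha_c) pairs, each a length-|S| vector.
    Returns |A| * |V|^|Z| pairs; pruning must follow.
    """
    nA, nS, _ = T.shape
    nZ = O.shape[2]
    # Back-project every pair through every (a, z):
    # component s is R(s,a)/|Z| + gamma * sum_s' T(s,a,s') O(s',a,z) alpha_i(s')
    g = [[[(R[:, a] / nZ + gamma * T[a] @ (O[a][:, z] * ar),
            C[:, a] / nZ + gamma * T[a] @ (O[a][:, z] * ac))
           for (ar, ac) in V] for z in range(nZ)] for a in range(nA)]
    V_new = []
    for a in range(nA):
        # Cross-sum: pick one back-projected pair per observation and add them.
        for choice in product(range(len(V)), repeat=nZ):
            ar = sum(g[a][z][i][0] for z, i in enumerate(choice))
            ac = sum(g[a][z][i][1] for z, i in enumerate(choice))
            V_new.append((ar, ac))
    return V_new
```

The cross-sum over observations is what causes the $|A|\,|V|^{|Z|}$ blow-up, motivating the pruning methods on the next slides.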
6 Exact DP update for CPOMDPs
- Pruning by mixed integer linear program (MILP) [Isom08]: check whether $(\alpha_r, \alpha_c)$ is dominated by $V = \{(\alpha_{i,r}, \alpha_{i,c})\}_i$.
  - Not dominated at belief $b$: cost at most ĉ and higher value than every other vector with cost at most ĉ. (Figure: candidate $(\alpha_r, \alpha_c)$ against $(\alpha_{1,r}, \alpha_{1,c})$ and $(\alpha_{2,r}, \alpha_{2,c})$ in value and cost space.)
  - If there exists a belief $b$ where $(\alpha_r, \alpha_c)$ is not dominated, it will not be pruned. The Boolean variables in the dominance test make the program an MILP.
- Shortcomings of MILP pruning:
  - It considers only deterministic policies; randomized policies (convex combinations of α-vectors) must be considered as well.
  - It prunes α-vector pairs violating the cost constraint in each DP update, but satisfying the overall cost constraint does not require that the constraint be satisfied at every time step.
7 Exact DP update for CPOMDPs
- Pruning by a minimax quadratically constrained program (QCP), written out below:
  - Inner maximization: is $(\alpha_r, \alpha_c)$ dominated at belief $b$?
  - Outer minimization: where is $(\alpha_r, \alpha_c)$ not dominated?
  - Not dominated at $b$: no convex combination of pairs in $V$ has higher value and the same or lower cost.
- Inner maximization, for a fixed $b$: find the convex combination that dominates $(\alpha_r, \alpha_c)$ by maximizing the gap = (value of the convex combination) − (value of $\alpha_r$). If the gap is positive, $(\alpha_r, \alpha_c)$ is dominated at $b$.
- Outer minimization: find the belief $b$ where $(\alpha_r, \alpha_c)$ is not dominated by minimizing the gap. If the gap is negative at the resulting $b$, $(\alpha_r, \alpha_c)$ will not be pruned.
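Written out (our reconstruction from the description above, with $w$ the convex-combination weights over the pairs in $V$), the test for a candidate pair $(\alpha_r, \alpha_c)$ is:

```latex
\min_{b \in \Delta(S)} \;\; \max_{w \geq 0,\ \sum_i w_i = 1} \;
  \sum_i w_i \, (b \cdot \alpha_{i,r}) \;-\; b \cdot \alpha_r
\qquad \text{s.t.} \quad
  \sum_i w_i \, (b \cdot \alpha_{i,c}) \;\leq\; b \cdot \alpha_c
```

The bilinear products $w_i\, b(s)$ make the constraint quadratic in the joint decision variables, hence a QCP; the pair survives pruning exactly when the optimal gap is negative.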
8 Point-based DP for CPOMDPs
- Point-based value iteration (PBVI) for standard POMDPs [Pineau06] maintains the best α-vector for each belief in a sampled set $B = \{b_0, b_1, \dots, b_q\}$.
- Adapting standard PBVI to CPOMDPs in a simple way: enumerate α-vector pairs and perform pruning confined to $B$. The minimax QCP pruning becomes an LP for each $b \in B$: find a randomized policy that dominates $(\alpha_r, \alpha_c)$ at $b$ (see the sketch below).
- Remaining problems: still many α-vectors at each $b$, and no information about costs at $b$.
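At a fixed belief b the dominance test is a small LP. A sketch using scipy.optimize.linprog (our implementation choice; the function name and tolerance are ours):

```python
import numpy as np
from scipy.optimize import linprog

def dominated_at(b, alpha_r, alpha_c, V):
    """Check whether (alpha_r, alpha_c) is dominated at belief b by some
    convex combination (randomized policy) of the pairs in V:

    max_w  sum_i w_i (b . alpha_ir) - b . alpha_r
    s.t.   sum_i w_i (b . alpha_ic) <= b . alpha_c,  w >= 0,  sum_i w_i = 1
    """
    vals = np.array([b @ a_ir for a_ir, _ in V])    # candidate values at b
    costs = np.array([b @ a_ic for _, a_ic in V])   # candidate costs at b
    res = linprog(c=-vals,                          # linprog minimizes
                  A_ub=costs[None, :], b_ub=[b @ alpha_c],
                  A_eq=np.ones((1, len(V))), b_eq=[1.0],
                  bounds=(0, None), method="highs")
    return res.success and (-res.fun - b @ alpha_r) > 1e-9   # positive gap
```

A pair is pruned when it is dominated at every sampled belief in B.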
9 Admissible cost [Piunovskiy00]
- The admissible cost is the expected cumulative cost that can still be incurred in the future. Along a trajectory $s_0 \to s_1 \to \cdots \to s_t \to s_{t+1} \to s_{t+2} \to \cdots$, the discounted costs are $c_0, \gamma c_1, \dots, \gamma^t c_t, \gamma^{t+1} c_{t+1}, \gamma^{t+2} c_{t+2}, \dots$
- Expected cumulative cost up to time $t$: $W_t = \sum_{\tau=0}^{t} \gamma^\tau c_\tau$.
- Admissible cost at time $t+1$: $d_{t+1} = \frac{1}{\gamma^{t+1}} (\hat{c} - W_t)$.
- Recursive formulation (checked numerically below): $d_{t+1} = \frac{1}{\gamma} (d_t - c_t)$.
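A minimal sanity check that the recursive and closed forms agree (the cost sequence and parameter values are arbitrary illustrative choices of ours):

```python
# Check the recursion d_{t+1} = (d_t - c_t)/gamma against the closed form.
gamma, c_hat = 0.95, 10.0
costs = [1.0, 0.5, 2.0, 0.0, 1.5]             # c_0 ... c_4, arbitrary sample costs

d = c_hat                                     # d_0 = c_hat: the full budget remains
W = 0.0                                       # W_t = sum_{tau <= t} gamma^tau c_tau
for t, c in enumerate(costs):
    d = (d - c) / gamma                       # recursive form
    W += gamma**t * c
    closed = (c_hat - W) / gamma**(t + 1)     # closed form: (c_hat - W_t)/gamma^{t+1}
    assert abs(d - closed) < 1e-9
```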
10 PBVI with admissible cost for CPOMDPs
- Samples belief-admissible cost pairs $B = \{(b_0, d_0), (b_1, d_1), \dots, (b_q, d_q)\}$ and maintains the best randomized policy for each $(b, d) \in B$, using an LP to find the best convex combination at $(b, d)$.
- Point-based DP update: for each $(b, d) \in B$, find the best randomized policy at $(\tau(b,a,z), d_z)$ for each pair $(a, z)$, where $\tau(b,a,z)$ is the belief update.
- Heuristic: distribute the admissible cost in proportion to the observation probability, i.e., $d_z = \frac{1}{\gamma}\,(d - C(b,a))\,P(z \mid b, a)$ (a sketch follows below).
- The LP solution is a convex combination of at most 2 α-vector pairs, so at most $2|B|$ α-vector pairs are maintained.
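A sketch of the heuristic split, reusing the dense-array layout assumed in the earlier enumeration sketch (the formula follows the slide; the code around it is ours):

```python
import numpy as np

def split_admissible_cost(b, a, d, T, O, C, gamma):
    """Distribute admissible cost d across observation branches in
    proportion to P(z | b, a), per the slide's heuristic.
    Assumes T[a,s,s'], O[a,s',z], C[s,a] as in the enumeration sketch."""
    cost_ba = b @ C[:, a]               # C(b,a): expected immediate cost
    p_next = b @ T[a]                   # predicted distribution over s'
    p_z = p_next @ O[a]                 # P(z | b, a) for every observation z
    return (d - cost_ba) * p_z / gamma  # d_z = (1/gamma)(d - C(b,a)) P(z|b,a)
```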
11 Experiment: quickest change detection
- Quickest change detection [Isom08]: minimize the detection delay while constraining the probability of a false alarm. $|S| = 3$, $|A| = 2$, $|Z| = 3$.
- (Figure: states PreChange, PostChange, PostAlarm. NoAlarm in PreChange: stay with p = 0.99, R = 0, C = 0; move to PostChange with p = 0.01, R = 0, C = 0. NoAlarm in PostChange: R = −1, C = 0. Alarm in PostChange: R = 0, C = 0. Alarm in PreChange is a false alarm: R = 0, C = 1.)
- Comparison: MILP (deterministic) vs. QCP (randomized) vs. PBVI (randomized). MILP and QCP could not perform DP updates for more than 6 and 5 timesteps respectively, whereas PBVI scaled effectively beyond 10 timesteps and performed close to the exact methods.
12 Experiment: n-city ticketing problem
- n-city ticketing problem [Williams07]: figure out the user's origin and destination among $n$ cities, and submit the ticket purchase request once sufficient information has been gathered.
- Due to speech recognition errors, the observed user response can differ from the true response. Reward −1 for each timestep; cost 1 for a wrong ticket.
- PBVI results for $n = 3$ and error probability $p_{err} = 0.2$ ($|S| = 1945$, $|A| = 16$, $|Z| = 18$): more dialogue turns for smaller ĉ, since the agent needs more information-gathering steps to be more accurate.
13 Conclusion
- We showed that optimal policies in CPOMDPs can be randomized.
- We presented exact and approximate methods for CPOMDPs: an exact method with minimax QCP pruning, and an approximate method based on PBVI.
- Both can be extended to multiple constraints and to a different discount factor for each cost function.
- Future work: adopting state-of-the-art POMDP solvers with heuristic belief exploration; extension to average reward and cost criteria; extension to factored CPOMDPs.
14 References
[Altman99] E. Altman. Constrained Markov Decision Processes. Chapman & Hall/CRC, 1999.
[Isom08] J. D. Isom, S. P. Meyn, and R. D. Braatz. Piecewise linear dynamic programming for constrained POMDPs. In Proc. of AAAI, 2008.
[Kaelbling98] L. P. Kaelbling, M. L. Littman, and A. R. Cassandra. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101:99-134, 1998.
[Pineau06] J. Pineau, G. Gordon, and S. Thrun. Anytime point-based approximations for large POMDPs. JAIR, 27:335-380, 2006.
[Piunovskiy00] A. B. Piunovskiy and X. Mao. Constrained Markovian decision processes: the dynamic programming approach. Operations Research Letters, 27(3):119-126, 2000.
[Williams07] J. D. Williams and S. Young. Partially observable Markov decision processes for spoken dialog systems. Computer Speech and Language, 21(2):393-422, 2007.