Opponent Modelling by Sequence Prediction and Lookahead in Two-Player Games


1 Opponent Modelling by Sequence Prediction and Lookahead in Two-Player Games
Richard Mealing and Jonathan L. Shapiro
Machine Learning and Optimisation Group, School of Computer Science, University of Manchester, UK

2 The Problem
You play against an opponent. The opponent's actions are based on previous actions. How can you maximise your reward?
Applications: heads-up poker, auctions, P2P networking, path finding, etc.

3 Possible Approaches
You could use reinforcement learning to learn to take actions with high expected discounted rewards. However, we propose to:
- Model the opponent using sequence prediction methods
- Look ahead and take actions which probabilistically, according to the opponent model, lead to the highest reward
Which approach gives us the highest rewards?

4 Opponent Modelling using Sequence Prediction
Observe the opponent's action and the player's action $(a_{\mathrm{opp}}, a)$. Form a sequence over time $t$ (memory size $n$):
$(a^t_{\mathrm{opp}}, a^t), (a^{t-1}_{\mathrm{opp}}, a^{t-1}), \ldots, (a^{t-n+1}_{\mathrm{opp}}, a^{t-n+1})$
Predict the opponent's next action based on this sequence:
$\Pr\left( a^{t+1}_{\mathrm{opp}} \mid (a^t_{\mathrm{opp}}, a^t), (a^{t-1}_{\mathrm{opp}}, a^{t-1}), \ldots, (a^{t-n+1}_{\mathrm{opp}}, a^{t-n+1}) \right)$
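To make this concrete, here is a minimal sketch of one way such an order-n predictor over joint actions could be maintained and queried. It is a plain frequency-counting (N-Gram-style) model, and the class and method names (SequencePredictor, observe, predict) are illustrative rather than taken from the paper.

```python
from collections import defaultdict, Counter

class SequencePredictor:
    """Order-n frequency model over joint (opponent action, player action) pairs."""

    def __init__(self, memory_size):
        self.n = memory_size
        self.history = []                    # observed (a_opp, a) pairs
        self.counts = defaultdict(Counter)   # context tuple -> counts of next a_opp

    def observe(self, a_opp, a):
        """Record a joint action, crediting a_opp to the context that preceded it."""
        context = tuple(self.history[-self.n:])
        self.counts[context][a_opp] += 1
        self.history.append((a_opp, a))

    def predict(self, context):
        """Return Pr(a_opp | context) as a dict; empty if the context is unseen."""
        counter = self.counts.get(tuple(context))
        if not counter:
            return {}
        total = sum(counter.values())
        return {a_opp: c / total for a_opp, c in counter.items()}
```

With memory size 1 the context is just the last (a_opp, a) pair, so against a tit-for-tat opponent the predicted distribution concentrates on the player's last action.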

5 Sequence Prediction Methods
We tested a variety of sequence prediction methods:
- Lempel-Ziv-1978 (LZ78) [1]
- Knuth-Morris-Pratt (KMP) [2] (unbounded contexts)
- Prediction by Partial Matching (PPM) [3]
- ActiveLeZi [4]
- Transition Directed Acyclic Graph (TDAG) [5]
- Entropy Learned Pruned Hypothesis Space (ELPH) [6]
- N-Gram [7] (context blending, context pruning)
- Hierarchical N-Gram (H. N-Gram) [7] (collection of 1 to N-Grams)
- Long Short Term Memory (LSTM) [8] (implicit blending & pruning)

6 Sequence Prediction Method Lookahead
Predict with $k$ lookahead given a hypothesised context, i.e.
$\Pr\left( a^{t+k}_{\mathrm{opp}} \mid (a^{t+k-1}_{\mathrm{opp}}, a^{t+k-1}), (a^{t+k-2}_{\mathrm{opp}}, a^{t+k-2}), \ldots, (a^{t+k-n}_{\mathrm{opp}}, a^{t+k-n}) \right)$
A hypothesised context may contain unobserved (predicted) symbols.
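A sketch of how such hypothesised contexts might be built, assuming the frequency predictor sketched earlier: the real history is extended with predicted opponent actions paired with the player actions the lookahead is currently considering (predict_k_ahead and planned_player_actions are hypothetical names).

```python
def predict_k_ahead(predictor, history, planned_player_actions):
    """For each planned player action, predict the opponent's action at that step."""
    context = list(history)
    predictions = []
    for a in planned_player_actions:
        probs = predictor.predict(context[-predictor.n:])
        a_opp = max(probs, key=probs.get) if probs else None   # most likely symbol
        predictions.append((a_opp, probs))
        context.append((a_opp, a))          # hypothesised, unobserved pair
    return predictions
```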

7 Reinforcement Learning: Q-Learning
Learns an action-value function that, when input a state-action pair $(s, a)$, outputs the expected value of taking that action in that state and following a fixed strategy thereafter [9]:
$Q(s_t, a_t) \leftarrow \underbrace{(1 - \alpha)\, Q(s_t, a_t)}_{\text{fraction of old value}} + \underbrace{\alpha \left[ r_t + \gamma \max_{a_{t+1}} Q(s_{t+1}, a_{t+1}) \right]}_{\text{fraction of reward \& next max valued action}}$
where $\alpha$ is the learning rate, $\gamma$ the discount, and $r_t$ the reward. Select actions with high Q-values, with some exploration.
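A minimal tabular sketch of the update and of ε-greedy action selection, matching the formula above; the dictionary-based Q-table and function names are illustrative.

```python
import random

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One Q-learning update on a dict Q mapping (state, action) -> value."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    Q[(s, a)] = (1 - alpha) * Q.get((s, a), 0.0) + alpha * (r + gamma * best_next)

def select_action(Q, s, actions, epsilon=0.1):
    """Pick a high-valued action, exploring with probability epsilon."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((s, a), 0.0))
```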

8 Need for Lookahead (Prisoner's Dilemma Example)
Payoff matrix (row = player, column = opponent; payoffs listed as row, column):

           Defect  Cooperate
Defect     1,1     4,0
Cooperate  0,4     3,3

9 Need for Lookahead (Prisoner's Dilemma Example)
Defect is the dominant action. Cooperate-Cooperate is socially optimal (highest sum of rewards). Tit-for-tat (copy the opponent's last move) is good for iterated play. Can we learn tit-for-tat?

10 Need for Lookahead (Prisoner's Dilemma Example)
[Figure: one-step lookahead tree over the payoff matrix above, with the opponent model's predicted action at each branch.]

11 Need for Lookahead (Prisoner's Dilemma Example)
[Figure: one-step lookahead tree, as on the previous slide.]
A lookahead of 1 shows Defect has the highest reward. With a lookahead of 2, the sequence in which the player defects and the opponent cooperates at both steps has the highest total reward, but it is unlikely. Assume the opponent copies the player's last move (i.e. tit-for-tat).

12 Need for Lookahead (Prisoner's Dilemma Example)
[Figure: two-step lookahead tree over the payoff matrix, with predicted opponent actions at each branch.]

13 Need for Lookahead (Prisoner's Dilemma Example)
[Figure: two-step lookahead tree, as above.]
A lookahead of 2 against tit-for-tat shows Cooperate has the highest reward (3 per step: cooperating on both steps yields 3 + 3 = 6, whereas defecting on both yields 4 + 1 = 5).
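A small check of this point (illustrative, not the authors' evaluation code), assuming the payoff matrix above and an opponent that copies the player's previous move, starting from mutual cooperation: committing to Defect wins over a 1-step horizon, but committing to Cooperate wins once the horizon reaches 2.

```python
PAYOFF = {('D', 'D'): 1, ('D', 'C'): 4, ('C', 'D'): 0, ('C', 'C'): 3}

def total_reward(player_action, horizon, opp_first='C'):
    """Player's total reward for repeating one action against tit-for-tat."""
    total, opp = 0, opp_first
    for _ in range(horizon):
        total += PAYOFF[(player_action, opp)]
        opp = player_action              # tit-for-tat copies the player's last move
    return total

print(total_reward('D', 1), total_reward('C', 1))   # 4 vs 3: depth 1 favours Defect
print(total_reward('D', 2), total_reward('C', 2))   # 5 vs 6: depth 2 favours Cooperate
```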

14 Q-Learning's Implicit Lookahead
$Q(s_t, a_t) \leftarrow (1 - \alpha)\, Q(s_t, a_t) + \alpha \left[ r_t + \gamma \max_{a_{t+1}} Q(s_{t+1}, a_{t+1}) \right]$
Assume each state is an opponent action, i.e. $s = a_{\mathrm{opp}}$. The agent learns (player action, opponent action) values as:
- $\gamma = 0$: payoff matrix ($\arg\max_a Q(a^{t+1}_{\mathrm{opp}}, a)$ is the same as a max lookahead of 1)
- $0 < \gamma < 1$: payoff matrix + future rewards with exponential decay
- $\gamma = 1$: payoff matrix + future rewards
Increasing $\gamma$ increases lookahead; a minimal sketch of this state construction follows.
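A rough sketch of this construction (an assumption-laden illustration, not the authors' experiment): the state is the opponent's last action, the opponent plays tit-for-tat, and γ controls how far the learned values look beyond the immediate payoff matrix.

```python
import random

ACTIONS = ['C', 'D']
PAYOFF = {('D', 'D'): 1, ('D', 'C'): 4, ('C', 'D'): 0, ('C', 'C'): 3}

def run(gamma, steps=20000, alpha=0.1, epsilon=0.1):
    Q = {(s, a): 0.0 for s in ACTIONS for a in ACTIONS}
    opp = 'C'                                    # opponent's current (tit-for-tat) move
    for _ in range(steps):
        s = opp                                  # state = opponent's action this round
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda x: Q[(s, x)])
        r = PAYOFF[(a, opp)]
        opp = a                                  # tit-for-tat copies the player
        Q[(s, a)] += alpha * (r + gamma * max(Q[(opp, x)] for x in ACTIONS) - Q[(s, a)])
    return Q

print(run(gamma=0.0))    # values approach the payoff matrix, so Defect looks best
print(run(gamma=0.95))   # longer effective lookahead: Cooperate preferred in state 'C'
```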

15 Exhaustive Explicit Lookahead
We use exhaustive explicit lookahead, combining the opponent model with the action values, to greedily select actions (to a limited depth) that maximise total reward.
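A sketch of one way the exhaustive search could look (hypothetical code, reusing the predictor interface sketched earlier): enumerate the player's actions to a fixed depth, weight each branch by the opponent model's predicted probabilities, and return the root action with the highest expected total reward.

```python
def lookahead(predictor, context, depth, actions, payoff):
    """Return (best expected total reward, best action) over the given depth."""
    probs = predictor.predict(context[-predictor.n:]) or \
            {a_opp: 1.0 / len(actions) for a_opp in actions}   # unseen context: uniform
    best_value, best_action = float('-inf'), None
    for a in actions:
        expected = 0.0
        for a_opp, p in probs.items():
            reward = payoff[(a, a_opp)]
            if depth > 1:
                reward += lookahead(predictor, context + [(a_opp, a)],
                                    depth - 1, actions, payoff)[0]
            expected += p * reward
        if expected > best_value:
            best_value, best_action = expected, a
    return best_value, best_action

# e.g. value, action = lookahead(model, list(model.history), depth=2,
#                                actions=['C', 'D'], payoff=PAYOFF)
```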

16 Experiments
Iterated Rock-Paper-Scissors: the opponent's actions depend on its previous actions.
Iterated Prisoner's Dilemma: the opponent's actions depend on both players' previous actions.

    R     P     S
R   0,0   -1,1  1,-1
P   1,-1  0,0   -1,1
S   -1,1  1,-1  0,0

           Defect  Cooperate
Defect     1,1     4,0
Cooperate  0,4     3,3

Littman's Soccer [10]: direct competition.
Which approach has better performance?

17 Iterated Rock Paper Scissors
[Table: average payoff and average convergence time for ELPH (our approach), WoLF-PHC, PGA-APP, ε-greedy Q-Learner and WPL, with player memory sizes 1-3, against opponents repeating the sequences {R,P,S} (order 1), {R,R,P,P,S,S} (order 2) and {R,R,R,P,P,P,S,S,S} (order 3); the flattened values are not reliably recoverable here.]
Agents cannot learn the best response with memory size < model order. Our approach gains the highest payoffs at generally the fastest rates.

18 Iterated Prisoner's Dilemma
[Table: average payoff, average convergence time and league position for our approach (TDAG, and TDAG + Q-Learner), ε-greedy Q-Learner, WoLF-PHC, PGA-APP and WPL, with memory sizes 1-3, comparing Discount = 0 with Depth = 1 against Discount = 0.99 with Depth = 2; the flattened values are not reliably recoverable here.]
Increasing lookahead (discounting, search depth) increases rewards. Our approach + Q-Learning increases rewards but also increases time. Our approach gains the highest payoffs at generally the fastest rates.

19 Soccer
[Table: average payoff of our approach with each sequence predictor (PPM, LSTM, H. N-Gram, N-Gram, ActiveLeZi, TDAG, LZ78, ELPH, KMP, FP) against ε-greedy Q-Learner, WoLF-PHC, WPL and PGA-APP opponents; the flattened values are not reliably recoverable here.]
Our approach wins over 50% of the games using any predictor. PPM has the highest performance.

20 Conclusions
We proposed sequence prediction and lookahead to accurately model, and effectively respond to, opponents with memory. Empirical results show that, given sufficient memory and lookahead, our approach outperforms reinforcement learning algorithms.

21 Future Work
We will apply our approach to domains with:
- Larger state spaces
- Hidden information
where the challenges are:
- Deeper lookahead (e.g. sampling techniques)
- Sequence predictor configuration (e.g. 1 predictor per state)

22 References
[1] Lempel and Ziv. Compression of Individual Sequences via Variable-Rate Coding. 1978.
[2] Byron Knoll. Text Prediction and Classification Using String Matching.
[3] Alistair Moffat. Implementing the PPM Data Compression Scheme. In: IEEE Transactions on Communications 38 (1990).
[4] Karthik Gopalratnam and Diane J. Cook. ActiveLeZi: An incremental parsing algorithm for sequential prediction. In: 16th Int. FLAIRS Conf., 2003.
[5] Philip Laird and Ronald Saul. Discrete Sequence Prediction and Its Applications. In: Machine Learning 15 (1994).
[6] Jensen et al. Non-stationary policy learning in 2-player zero sum games. In: Proc. of 20th Int. Conf. on AI, 2005.
[7] Ian Millington. Artificial Intelligence for Games. Ed. by David H. Eberly. Morgan Kaufmann. Chap. Learning.
[8] Felix A. Gers, Nicol N. Schraudolph, and Jürgen Schmidhuber. Learning Precise Timing with LSTM Recurrent Networks. In: JMLR 3 (2002).
[9] C. J. C. H. Watkins. Learning from Delayed Rewards. PhD thesis, Cambridge, 1989.
[10] Michael L. Littman. Markov games as a framework for multi-agent reinforcement learning. In: Proc. of 11th ICML. Morgan Kaufmann, 1994.
