Reinforcement Learning
1 Reinforcement Learning, Lecture 6: RL algorithms 2.0
Alexandre Proutiere, Sadegh Talebi, Jungseul Ok
KTH, The Royal Institute of Technology
2 Objectives of this lecture
Present and analyse two online algorithms based on the optimism-in-the-face-of-uncertainty principle, and compare their regret to that of algorithms with random exploration:
- UCB-VI for episodic RL problems
- UCRL2 for ergodic RL problems
3 Lecture 6: Outline
1. Minimal exploration in RL
2. UCB-VI
3. UCRL2
5 Towards minimal exploration
The MDP model is unknown and has to be learnt. Solutions for on-policy algorithms:
1. Estimate the model, then optimise: poor regret and premature exploitation
2. ε-greedy exploration: undirected exploration (explores too many (state, action) pairs with low values)
3. Bandit-like optimal exploration-exploitation trade-off
But how much should a (state, action) pair be explored?
6 Regret lower bounds
In the case of ergodic RL problems:

Problem-specific lower bound (Burnetas and Katehakis, 1997):
$$\liminf_{T\to\infty} \frac{\mathbb{E}[N_{(s,a)}(T)]}{\log(T)} \ge \frac{1}{K_M(s,a)},$$
leading to an asymptotic regret lower bound scaling as $SA\log(T)$.

Minimax lower bound: $\Theta(\sqrt{SAT})$.

We don't know when the asymptotic problem-specific regret lower bound is representative; often only for very large $T$!

Read, for bandit optimisation: "Explore First, Exploit Next: The True Shape of Regret in Bandit Problems", Garivier et al.
7 Which regret lower bound should we target?
Example: $SA = 1000$, comparison of $\sqrt{SAT}$ and $SA\log(T)$ (plot omitted).
8 Which regret lower bound should we target?
Boundary: $SA = T/\log(T)^2$.
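To see where the two scalings cross, a few lines of Python (our own illustration, not from the slides) evaluate both bounds for $SA = 1000$:

```python
import math

SA = 1000  # number of (state, action) pairs, as in the example above

# Minimax scaling sqrt(SA*T) vs. problem-specific scaling SA*log(T).
# They cross where SA = T / log(T)^2.
for exp in range(3, 11):
    T = 10 ** exp
    minimax = math.sqrt(SA * T)
    specific = SA * math.log(T)
    marker = "minimax larger" if minimax > specific else "log bound larger"
    print(f"T = 1e{exp}: sqrt(SAT) = {minimax:.2e}, SA log(T) = {specific:.2e} ({marker})")
```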
9 Optimism in the face of uncertainty
Estimate the unknown system parameters (here $p(\cdot|\cdot,\cdot)$ and $r(\cdot,\cdot)$) and build an optimistic reward estimate to trigger exploration.
- Estimate: find confidence balls containing the true model w.h.p.
- Optimistic reward estimate: find the model within the confidence balls leading to the highest value.
10 Optimism in the face of uncertainty: generic algorithm
Algorithm (for infinite-horizon RL problems)
Initialise $\hat p$, $\hat r$, and $N(s,a)$
For $t = 1, 2, \dots$:
1. Build an optimistic reward model $(Q(s,a))_{s,a}$ from $\hat p$, $\hat r$, and $N(s,a)$
2. Select action $a(t)$ maximising $Q(s(t), a)$ over $\mathcal{A}_{s(t)}$
3. Observe the transition to $s(t+1)$ and collect reward $r(s(t), a(t))$
4. Update $\hat p$, $\hat r$, and $N(s,a)$
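As a minimal sketch, this generic loop might look as follows in Python; the environment interface (`reset`/`step`) and the `optimistic_q` routine are our assumptions, to be instantiated by UCB-VI or UCRL2:

```python
import numpy as np

def optimistic_rl(env, S, A, T, optimistic_q):
    """Generic optimism-in-the-face-of-uncertainty loop.

    Assumed interface (not from the slides): env.reset() -> s and
    env.step(a) -> (s_next, reward); optimistic_q(p_hat, r_hat, N)
    returns an optimistic Q table of shape (S, A)."""
    N = np.zeros((S, A))            # visit counts N(s, a)
    r_hat = np.zeros((S, A))        # empirical mean rewards
    p_counts = np.zeros((S, A, S))  # transition counts

    s = env.reset()
    for t in range(T):
        p_hat = p_counts / np.maximum(N[:, :, None], 1)
        Q = optimistic_q(p_hat, r_hat, N)   # 1. optimistic reward model
        a = int(np.argmax(Q[s]))            # 2. greedy action w.r.t. Q
        s_next, reward = env.step(a)        # 3. observe transition and reward
        N[s, a] += 1                        # 4. update estimates
        r_hat[s, a] += (reward - r_hat[s, a]) / N[s, a]
        p_counts[s, a, s_next] += 1
        s = s_next
    return r_hat, p_counts, N
```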
11 Examples
- UCB-VI: directly build a confidence ball for the Q-function based on the empirical estimates of the model.
- UCRL2: first build confidence balls for the reward and transition probabilities, and then identify Q.
12 Lecture 6: Outline
1. Minimal exploration in RL
2. UCB-VI
3. UCRL2
13 Finite-horizon MDP to episodic RL problems
- Initial state $s_0$ (could be a r.v.)
- Transition probabilities at time $t$: $p(s'|s,a)$
- Reward at time $t$: $r(s,a)$, and at time $H$: $r_H(s)$
- Unknown transition probabilities and reward function
Objective: quickly learn a policy $\pi$ maximising over $\pi_0 \in \Pi^{MD}$
$$V^{\pi_0}_H := \mathbb{E}\Big[\sum_{u=0}^{H-1} r(s^{\pi_0}_u, a^{\pi_0}_u) + r_H(s^{\pi_0}_H)\Big].$$
14 Finite-horizon MDP to episodic RL problems
- Data: $K$ episodes of length $H$ (actions, states, rewards)
- Learning algorithm $\pi$: data $\mapsto \pi_K \in \Pi^{MD}$
- Performance of $\pi$: how close $\pi_K$ is to the optimal policy $\pi^*$
15 UCB-VI
UCBVI is an extension of Value Iteration guaranteeing that the resulting value function is a (high-probability) upper confidence bound (UCB) on the optimal value function.
At the beginning of episode $k$, it computes state-action values using the empirical transition kernel and reward function. In step $h$ of backward induction (to update $Q_{k,h}(s,a)$ for any $(s,a)$), it adds a bonus $b_{k,h}(s,a)$ to the value, and ensures that $Q_{k,h}$ never exceeds $Q_{k-1,h}$.
Two variants of UCBVI, depending on the choice of bonus $b_{k,h}$:
- UCBVI-CH
- UCBVI-BF
16 UCB-VI algorithm
Variables to be maintained by the algorithm (for a known reward function):
- $\hat p = (\hat p(s'|s,a),\ s, s' \in S, a \in \mathcal{A}_s)$: estimated transition probabilities
- $Q = (Q_h(s,a),\ h \le H, s \in S, a \in \mathcal{A}_s)$: estimated Q-function
- $b = (b_h(s,a),\ h \le H, s \in S, a \in \mathcal{A}_s)$: Q-value bonus
- $N = (N(s,a),\ s \in S, a \in \mathcal{A}_s)$: number of visits to $(s,a)$ so far
- $N' = (N'_h(s,a),\ h \le H, s \in S, a \in \mathcal{A}_s)$: number of visits to $(s,a)$ in the $h$-th step of episodes so far
17 UCB-VI algorithm
Algorithm UCB-VI
Input: initial state distribution $\nu_0$, precision $\delta$
Initialise the variables $\hat p$, $N$, and $N'$
For episode $k = 1, 2, \dots$:
1. Optimistic reward:
   a. Compute the bonus: $b \leftarrow \mathrm{bonus}(N, N', \hat p, Q, \delta)$
   b. Estimate the Q-function: $Q \leftarrow \mathrm{bellmanopt}(Q, b, \hat p)$
2. Initialise the state $s(0) \sim \nu_0$
3. For $h = 1, \dots, H$, select action $a \in \arg\max_{a' \in \mathcal{A}_{s(h-1)}} Q_h(s(h-1), a')$
4. Observe the transitions and update $\hat p$, $N$, and $N'$
18 UCB-VI algorithm: bonus
UCBVI-CH:
$$b_h(s,a) = 7H\sqrt{\frac{\log(5SAT/\delta)}{N(s,a)}}$$
UCBVI-BF:
$$b_h(s,a) = \sqrt{\frac{8L\,\mathrm{Var}_{\hat p(\cdot|s,a)}\big(V_{h+1}(Y)\big)}{N(s,a)}} + \frac{14HL}{3N(s,a)} + \sqrt{\frac{8}{N(s,a)}\sum_{y}\hat p(y|s,a)\min\Big\{\frac{10^4 H^3 S^2 A L^2}{N'_{h+1}(y)},\ H^2\Big\}}$$
where $L = \log(5SAT/\delta)$.
19 UCB-VI algorithm: optimistic Bellman operator
$\mathrm{bellmanopt}(Q, b, \hat p)$ applies dynamic programming with a bonus.
Initialisation: $Q_H(s,a) = r_H(s)$ for all $(s,a)$
For step $h = H-1, \dots, 1$: for all $(s,a)$ visited at least once so far,
$$Q_h(s,a) \leftarrow \min\Big( Q_h(s,a),\ H,\ r(s,a) + \sum_{y} \hat p(y|s,a)\, V_{h+1}(y) + b_h(s,a) \Big)$$
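Combining the UCBVI-CH bonus with the optimistic Bellman operator, here is a Python sketch of one episode's backward induction (our simplification: known rewards, terminal reward $r_H \equiv 0$, and every pair updated, with unvisited pairs staying at the trivial bound $H$; array shapes are our convention):

```python
import numpy as np

def ucbvi_ch_backup(p_hat, r, N, H, T, delta, Q_prev):
    """Optimistic backward induction for one episode (UCBVI-CH).

    p_hat: (S, A, S) empirical transitions; r: (S, A) known rewards;
    N: (S, A) visit counts; Q_prev: (H, S, A) previous optimistic Q."""
    S, A = r.shape
    L = np.log(5 * S * A * T / delta)
    Q = np.full((H, S, A), float(H))
    V_next = np.zeros(S)                      # terminal value r_H(s) = 0
    for h in range(H - 1, -1, -1):
        bonus = 7 * H * np.sqrt(L / np.maximum(N, 1))   # UCBVI-CH bonus
        target = r + p_hat @ V_next + bonus             # optimistic backup
        # Clip by H and by the previous estimate, so Q decreases over episodes.
        Q[h] = np.minimum.reduce([target, np.full_like(target, H), Q_prev[h]])
        V_next = Q[h].max(axis=1)
    return Q
```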
20 UCB-VI: regret guarantees
Regret up to time $T = KH$:
$$R^{UCBVI}(T) = \sum_{k=1}^{K} \big(V^*(x_{k,1}) - V^{\pi_k}(x_{k,1})\big).$$
Theorem. For any $\delta > 0$, the regret of UCB-VI-CH($\delta$) is bounded w.p. at least $1-\delta$ by
$$R^{UCBVI\text{-}CH}(T) \le 20 H L \sqrt{SAT} + 250 H^2 S^2 A L^2,$$
with $L = \log(5HSAT/\delta)$.
For $T \ge H S^3 A$ and $SA \ge H$, the regret upper bound scales as $\tilde O(H\sqrt{SAT})$ (!?)
21 Sketch of proof
Notation:
- $\pi_k$ is the policy applied by UCBVI in the $k$-th episode
- $V_{k,h}$ is the optimistic value function computed by UCBVI in the $h$-th step of the $k$-th episode
- $V^\pi_h$ is the value function from step $h$ under $\pi$
- $P^\pi = (p(s'|s,\pi(s)))_{s,s'}$
- $\hat P^{\pi}_k = (\hat p_k(s'|s,\pi(s)))_{s,s'}$, where $\hat p_k$ is the estimated transition kernel in episode $k$
Claim 1: by construction, with high probability, $V_{k,h} \ge V^*_h$. Then:
$$R^{UCBVI}(T) \le \tilde R(T) = \sum_{k=1}^{K} \big(V_{k,1}(x_{k,1}) - V^{\pi_k}(x_{k,1})\big)$$
22 Sketch of proof
Let $\Delta_{k,h} = V_{k,h} - V^{\pi_k}_h$, so that $\tilde R(T) = \sum_{k=1}^{K} \Delta_{k,1}(x_{k,1})$.
Backward induction on $h$ to bound $\Delta_{k,1}$: introduce $\delta_{k,h} = \Delta_{k,h}(x_{k,h})$; then
$$\delta_{k,h} \le (\hat P^{\pi_k}_k - P^{\pi_k})\Delta_{k,h+1}(x_{k,h}) + \delta_{k,h+1} + \epsilon_{k,h} + b_{k,h} + e_{k,h}$$
where
$$\epsilon_{k,h} = P^{\pi_k}\Delta_{k,h+1}(x_{k,h}) - \Delta_{k,h+1}(x_{k,h+1}), \qquad e_{k,h} = (\hat P^{\pi_k}_k - P^{\pi_k})V^*_{h+1}(x_{k,h})$$
Conclude with concentration + martingale arguments (Azuma) + bounding the bonus.
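For reference, the martingale term is controlled by Azuma-Hoeffding's inequality: if $(X_t)_{t \le n}$ is a martingale difference sequence with $|X_t| \le c$, then
$$\mathbb{P}\Big(\sum_{t=1}^{n} X_t \ge \varepsilon\Big) \le \exp\Big(-\frac{\varepsilon^2}{2 n c^2}\Big).$$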
23 Numerical experiments
The river-swim example...
24 [Figure: regret vs. episode; 4 states, H = 2, δ = 0.05 (for UCBVI), ε-greedy: ε_t = min(1, 1000/t); curves: UCBVI-CH, DP, ε-greedy]
25 [Figure: regret vs. episode; 4 states, H = 3, δ = 0.05 (for UCBVI), ε-greedy: ε_t = min(1, 1000/t); curves: UCBVI-CH, DP, ε-greedy]
26 [Figure: optimistic Q-values, Q*(s,a) − Q_{k,1}(s,a) vs. episode; 4 states, H = 3, δ = 0.05 (for UCBVI); one curve per pair, s ∈ {1,...,4}, a ∈ {1,2}]
27 [Figure: value function convergence under UCBVI, V*(s) − V_k(s) vs. episode; 4 states, H = 3, δ = 0.05 (for UCBVI); one curve per state]
28 Lecture 6: Outline
1. Minimal exploration in RL
2. UCB-VI
3. UCRL2
29 Expected average reward MDP to ergodic RL problems
Stationary transition probabilities $p(s'|s,a)$ and rewards $r(s,a)$, uniformly bounded: $\forall a, s,\ r(s,a) \le 1$.
Objective: learn from data a policy $\pi \in \Pi^{MD}$ maximising (over all possible policies)
$$g^\pi = V^\pi_1(s_0) := \liminf_{T\to\infty} \frac{1}{T}\, \mathbb{E}_{s_0}\Big[\sum_{u=0}^{T-1} r(s^\pi_u, a^\pi_u)\Big]$$
30 Ergodic RL problems: preliminaries
Optimal policy. Recall Bellman's equation:
$$g^* + h^*(s) = \max_{a \in \mathcal{A}}\Big( r(s,a) + \sum_{s'} p(s'|s,a)\, h^*(s') \Big), \quad \forall s,$$
where $g^*$ is the maximal gain and $h^*$ is the bias function ($h^*$ is uniquely determined up to an additive constant).
Note: $g^*$ does not depend on the initial state for communicating MDPs.
Let $a^*(s)$ denote any optimal action for state $s$ (i.e., a maximiser in the above). Define the gap for a sub-optimal action $a$ at state $s$:
$$\varphi(s,a) := \big(r(s,a^*(s)) - r(s,a)\big) + \sum_{s'} \big(p(s'|s,a^*(s)) - p(s'|s,a)\big)\, h^*(s')$$
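For intuition, when the model is known, $g^*$ and $h^*$ can be computed by relative value iteration; a minimal sketch (our addition, assuming an aperiodic unichain MDP so the iterates converge):

```python
import numpy as np

def relative_value_iteration(p, r, iters=10_000, tol=1e-10, ref=0):
    """Solve g* + h*(s) = max_a [ r(s,a) + sum_y p(y|s,a) h*(y) ].

    p has shape (S, A, S), r has shape (S, A). The bias h is pinned
    at h(ref) = 0, which fixes the additive constant."""
    S, A, _ = p.shape
    h = np.zeros(S)
    g = 0.0
    for _ in range(iters):
        u = (r + p @ h).max(axis=1)          # one Bellman backup
        g_new, h_new = u[ref], u - u[ref]    # normalise at the reference state
        if np.max(np.abs(h_new - h)) < tol:
            return g_new, h_new
        g, h = g_new, h_new
    return g, h
```

Given $h^*$, the gaps $\varphi(s,a)$ above follow from one more backup.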
31 Ergodic RL problems: preliminaries
Diameter $D$: defined as
$$D := \max_{s \ne s'} \min_{\pi} \mathbb{E}[T^\pi_{s,s'}]$$
where $T^\pi_{s,s'}$ denotes the first time step at which $s'$ is reached under $\pi$ starting from initial state $s$.
Remark: all communicating MDPs have a finite diameter.
Important parameters impacting performance:
- Diameter $D$
- Gap $\Phi := \min_{s,\, a \ne a^*(s)} \varphi(s,a)$
- Gap $\Delta := \min_{\pi:\, g^\pi < g^*} (g^* - g^\pi)$
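The diameter of a known model can be computed by value iteration on minimal expected hitting times, one target state at a time; a sketch (our addition):

```python
import numpy as np

def diameter(p, iters=100_000, tol=1e-9):
    """D = max_{s != s'} min_pi E[time to reach s' from s].

    For each target state, runs value iteration on minimal expected
    hitting times, with the target made absorbing. p: (S, A, S)."""
    S = p.shape[0]
    D = 0.0
    for target in range(S):
        h = np.zeros(S)                        # h(s) = min expected hitting time
        for _ in range(iters):
            h_new = 1.0 + (p @ h).min(axis=1)  # one step plus best continuation
            h_new[target] = 0.0                # already at the target
            if np.max(np.abs(h_new - h)) < tol:
                h = h_new
                break
            h = h_new
        D = max(D, h.max())
    return D
```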
32 Ergodic RL problems: regret lower bounds
Problem-specific regret lower bound (Burnetas-Katehakis): for any algorithm $\pi$,
$$\liminf_{T\to\infty} \frac{R^\pi(T)}{\log(T)} \ge c_{bk} := \sum_{s,a} \frac{\varphi(s,a)}{\inf\{KL(p(\cdot|s,a), q) : q \in \Theta_{s,a}\}}$$
where $\Theta_{s,a}$ is the set of distributions $q$ s.t. replacing (only) $p(\cdot|s,a)$ by $q$ makes $a$ the unique optimal action in state $s$.
- asymptotic (valid as $T \to \infty$)
- valid for any ergodic MDP
- scales as $\Omega\big(\frac{DSA}{\Phi}\log(T)\big)$ for specific MDPs
Minimax regret lower bound: $\Omega(\sqrt{DSAT})$
- non-asymptotic (valid for all $T \ge DSA$)
- derived for a specific family of hard-to-learn communicating MDPs
33 Ergodic RL problems: state of the art
Two types of algorithms targeting different regret guarantees:
Problem-specific guarantees
- MDP-specific regret bound scaling as $O(\log(T))$
- Algorithms: B-K (Burnetas & Katehakis, 1997), OLP (Tewari & Bartlett, 2007), UCRL2 (Jaksch et al., 2010), KL-UCRL (Filippi et al., 2010)
Minimax guarantees
- Valid for a class of MDPs with $S$ states and $A$ actions, and (typically) diameter $D$
- Scaling as $\tilde O(\sqrt{T})$
- Algorithms: UCRL2 (Jaksch et al., 2010), KL-UCRL (Filippi et al., 2010), REGAL (Bartlett & Tewari, 2009), A-J (Agrawal & Jia, 2017)
34 Ergodic RL problems: state of the art

Problem-specific (logarithmic) guarantees:

Algorithm      | Setup                       | Regret
B-K            | ergodic MDPs, known rewards | $O(c_{bk}\log(T))$, asymptotic
OLP            | ergodic MDPs, known rewards | $O\big(\tfrac{D^2SA}{\Phi}\log(T)\big)$, asymptotic
UCRL           | unichain MDPs               | $O\big(\tfrac{S^5A^2}{\Delta}\log(T)\big)$
UCRL2, KL-UCRL | communicating MDPs          | $O\big(\tfrac{D^2S^2A}{\Delta}\log(T)\big)$
Lower bound    | ergodic MDPs, known rewards | $\Omega(c_{bk}\log(T))$, $\Omega\big(\tfrac{DSA}{\Phi}\log(T)\big)$

Minimax guarantees:

Algorithm   | Setup                            | Regret
UCRL2       | communicating MDPs               | $\tilde O(DS\sqrt{AT})$
KL-UCRL     | communicating MDPs               | $\tilde O(DS\sqrt{AT})$
REGAL       | weakly comm. MDPs, known rewards | $\tilde O(BS\sqrt{AT})$
A-J         | communicating MDPs, known rewards| $\tilde O(D\sqrt{SAT})$, for $T \ge S^5A$
Lower bound | known rewards                    | $\Omega(\sqrt{DSAT})$, for $T \ge DSA$

*$B$ denotes the span of the bias function of the true MDP, and $B \le D$.
35 UCRL2
UCRL2 is an optimistic algorithm that works in episodes of increasing lengths.
- At the beginning of each episode $k$, it maintains a set of plausible MDPs $\mathcal{M}_k$ (which contains the true MDP w.h.p.).
- It then computes an optimal policy $\pi_k$ with the largest gain over all MDPs in $\mathcal{M}_k$: $\pi_k \in \arg\max_{M' \in \mathcal{M}_k,\, \pi} g^\pi(M')$.
  - For computational efficiency, UCRL2 computes a $\frac{1}{\sqrt{t_k}}$-optimal policy, where $t_k$ is the starting step of episode $k$.
  - To find a near-optimal policy, UCRL2 uses Extended Value Iteration.
- It then follows $\pi_k$ within episode $k$ until the number of visits to some pair $(s,a)$ has doubled (at which point a new episode starts).
36 UCRL2
Notation:
- $k \in \mathbb{N}$: index of an episode
- $N_k(s,a)$: total number of visits to pair $(s,a)$ before episode $k$
- $\hat p_k(\cdot|s,a)$: empirical transition probability of $(s,a)$ from observations up to episode $k$
- $\hat r_k(s,a)$: empirical reward of $(s,a)$ from observations up to episode $k$
- $\pi_k$: policy followed in episode $k$
- $\mathcal{M}_k$: set of models for episode $k$ (defined next)
- $\nu_k(s,a)$: number of visits to pair $(s,a)$ so far in episode $k$
37 UCRL2: main ingredients
The set of plausible MDPs $\mathcal{M}_k$: for confidence parameter $\delta$, define
$$\mathcal{M}_k = \Big\{ M' = (S, \mathcal{A}, r', p') :\ \forall (s,a),\ |r'(s,a) - \hat r_k(s,a)| \le \sqrt{\tfrac{3.5\log(2SAt_k/\delta)}{\max(1, N_k(s,a))}},\ \ \|p'(\cdot|s,a) - \hat p_k(\cdot|s,a)\|_1 \le \sqrt{\tfrac{14 S \log(2At_k/\delta)}{\max(1, N_k(s,a))}} \Big\}$$
Optimistic gain: find in $\mathcal{M}_k$ the MDP that leads to the highest gain. We need to solve, for episode $k$:
maximise $g^\pi(M')$ over $(M', \pi)$, subject to $M' \in \mathcal{M}_k$
38 UCRL2 pseudo-code
Algorithm UCRL2
Input: initial state $s_0$, precision $\delta$; set $t = 1$
For each episode $k \ge 1$:
1. Initialisation: $t_k = t$ (start time of the episode); update $N_k(s,a)$, $\hat r_k(s,a)$, and $\hat p_k(s,a)$ for all $(s,a)$
2. Compute the set of plausible MDPs $\mathcal{M}_k$ (using $\delta$)
3. Compute the policy $\pi_k \leftarrow \mathrm{ExtendedValueIteration}(\mathcal{M}_k, 1/\sqrt{t_k})$
4. Execute $\pi_k$ and end the episode:
   While $\nu_k(s_t, \pi_k(s_t)) < \max(1, N_k(s_t, \pi_k(s_t)))$:
   - Play $\pi_k(s_t)$, observe the reward and the next state
   - Update $\nu_k(s_t, \pi_k(s_t)) \leftarrow \nu_k(s_t, \pi_k(s_t)) + 1$ and $t \leftarrow t + 1$
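A condensed Python sketch of this episode loop; the environment interface and the `extended_value_iteration` helper (sketched after the next two slides) are our assumptions:

```python
import numpy as np

def ucrl2(env, S, A, T, delta, extended_value_iteration):
    """UCRL2 skeleton following the pseudo-code above.

    env.reset()/env.step(a) and extended_value_iteration are assumed
    helpers, not library calls."""
    N = np.zeros((S, A))          # visits before the current episode
    R_sum = np.zeros((S, A))      # cumulative rewards
    P_cnt = np.zeros((S, A, S))   # transition counts
    s, t = env.reset(), 1
    while t <= T:
        t_k = t
        n_plus = np.maximum(N, 1)
        # Confidence radii defining the plausible set M_k.
        d_r = np.sqrt(3.5 * np.log(2 * S * A * t_k / delta) / n_plus)
        d_p = np.sqrt(14 * S * np.log(2 * A * t_k / delta) / n_plus)
        r_hat, p_hat = R_sum / n_plus, P_cnt / n_plus[:, :, None]
        # Optimistic rewards r_hat + d_r; precision 1/sqrt(t_k).
        pi = extended_value_iteration(r_hat + d_r, p_hat, d_p,
                                      eps=1.0 / np.sqrt(t_k))
        nu = np.zeros((S, A))     # visits within the episode
        while t <= T and nu[s, pi[s]] < max(1, N[s, pi[s]]):
            a = pi[s]
            s_next, reward = env.step(a)
            nu[s, a] += 1; R_sum[s, a] += reward; P_cnt[s, a, s_next] += 1
            s, t = s_next, t + 1
        N += nu                   # end of episode: fold in the new counts
    return N
```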
39 Extended value iteration
Set of plausible MDPs $\mathcal{M}_k$:
$$\mathcal{M}_k = \big\{ M' = (S, \mathcal{A}, r', p') :\ \forall(s,a),\ |r'(s,a) - \hat r_k(s,a)| \le d(s,a),\ \|p'(\cdot|s,a) - \hat p_k(\cdot|s,a)\|_1 \le d'(s,a) \big\}$$
We wish to find $M' \in \mathcal{M}_k$ and a policy $\pi_k$ maximising $g^\pi(M')$ over all possible $M' \in \mathcal{M}_k$ and policies $\pi$.
Ideas:
a. we can fix the reward to its maximum: $r(s,a) = \hat r(s,a) + d(s,a)$
b. solve a large MDP whose set of actions in state $s$ is $\mathcal{A}'_s$, where $(a, q) \in \mathcal{A}'_s$ if and only if $q \in P_k(s,a)$, with
$$P_k(s,a) = \{ q : \|q(\cdot) - \hat p_k(\cdot|s,a)\|_1 \le d'(s,a) \}$$
40 Extended value iteration
Solution: apply one of the known algorithms for finding an optimal policy in an MDP, e.g., value iteration.
Extended Value Iteration: for all $s \in S$, starting from $u_0(s) = 0$:
$$u_{i+1}(s) = \max_{a \in \mathcal{A}} \Big\{ r(s,a) + \max_{q \in P_k(s,a)} u_i^\top q \Big\}$$
- $P_k(s,a)$ is a polytope, and the inner maximisation can be done in $O(S)$ operations.
- To obtain an $\varepsilon$-optimal policy, the updates are stopped when
$$\max_s\big(u_{i+1}(s) - u_i(s)\big) - \min_s\big(u_{i+1}(s) - u_i(s)\big) \le \varepsilon$$
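A Python sketch of the inner maximisation over the $L_1$ ball and the resulting iteration (our own rendering of the procedure above; `r_opt` denotes the optimistic rewards $\hat r + d$):

```python
import numpy as np

def inner_max(p_hat_sa, d_sa, u):
    """max_{q in P_k(s,a)} u . q over the L1 ball of radius d_sa around
    p_hat_sa: put extra mass on the best state (largest u), then trim
    probability from the worst states until q sums to one."""
    order = np.argsort(-u)                       # states by decreasing u
    q = p_hat_sa.copy()
    q[order[0]] = min(1.0, p_hat_sa[order[0]] + d_sa / 2)
    for s in reversed(order):                    # worst states first
        excess = q.sum() - 1.0
        if excess <= 1e-12:
            break
        q[s] = max(0.0, q[s] - excess)
    return float(q @ u)

def extended_value_iteration(r_opt, p_hat, d_p, eps):
    """Returns a greedy policy for the optimistic MDP (sketch)."""
    S, A = r_opt.shape
    u = np.zeros(S)
    while True:
        vals = np.array([[r_opt[s, a] + inner_max(p_hat[s, a], d_p[s, a], u)
                          for a in range(A)] for s in range(S)])
        u_next = vals.max(axis=1)
        diff = u_next - u
        if diff.max() - diff.min() < eps:        # span stopping criterion
            return vals.argmax(axis=1)
        u = u_next - u_next.min()                # normalise to keep u bounded
```

The normalisation is harmless since the Bellman operator commutes with adding a constant, so the span criterion is unaffected.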
41 UCRL2: regret guarantees
Let $\pi = $ UCRL2. Regret up to time $T$:
$$R^\pi(T) = T g^* - \sum_{t=1}^{T} r(s^\pi_t, a^\pi_t),$$
a random variable capturing the learning cost and the mixing-time problems.
Theorem. W.p. at least $1-\delta$, the regret of UCRL2 satisfies, for any initial state and any $T > 1$,
$$R^\pi(T) \le 34\, D S \sqrt{AT \log(T/\delta)}.$$
For any initial state and any $T \ge 1$, we have w.p. at least $1 - 3\delta$, for any $\epsilon > 0$,
$$R^\pi(T) \le \frac{34^2 D^2 S^2 A \log(T/\delta)}{\epsilon} + \epsilon T.$$
42 [Figure: regret vs. time; 6 states, δ = 0.05 (for UCRL2), ε-greedy: ε_t = min(1, 1000/t); curves: UCRL2, KL-UCRL, ε-greedy]
43 [Figure: regret vs. time; 12 states, δ = 0.05 (for UCRL2); curves: UCRL2, KL-UCRL]
44 References
Episodic RL, UCBVI algorithm:
- M. Gheshlaghi Azar, I. Osband, and R. Munos, "Minimax regret bounds for reinforcement learning", Proc. ICML, 2017.
Ergodic RL:
- UCRL algorithm: P. Auer and R. Ortner, "Logarithmic online regret bounds for undiscounted reinforcement learning", Proc. NIPS, 2006.
- UCRL2 algorithm and minimax LB: T. Jaksch, R. Ortner, and P. Auer, "Near-optimal regret bounds for reinforcement learning", J. Machine Learning Research, 2010.