Bandit models: a tutorial

1 Gdt COS, December 3rd, 2015

2 Multi-Armed Bandit model: general setting
K arms: for $a \in \{1,\dots,K\}$, $(X_{a,t})_{t \in \mathbb{N}}$ is a stochastic process (unknown distributions).
Bandit game: at each round $t$, an agent
- chooses an arm $A_t \in \{1,\dots,K\}$
- receives a reward $X_t = X_{A_t,t}$
Goal: build a sequential strategy $A_t = F_t(A_1, X_1, \dots, A_{t-1}, X_{t-1})$ maximizing
$$\mathbb{E}\left[\sum_{t=1}^{\infty} \alpha_t X_t\right],$$
where $(\alpha_t)_{t \in \mathbb{N}}$ is a discount sequence. [Berry and Fristedt, 1985]

3 Multi-Armed Bandit model: the i.i.d. case
K independent arms: for $a \in \{1,\dots,K\}$, $(X_{a,t})_{t \in \mathbb{N}}$ is i.i.d. $\sim \nu_a$ (unknown distributions).
Bandit game: at each round $t$, an agent
- chooses an arm $A_t \in \{1,\dots,K\}$
- receives a reward $X_t \sim \nu_{A_t}$
Goal: build a sequential strategy $A_t = F_t(A_1, X_1, \dots, A_{t-1}, X_{t-1})$ maximizing
- Finite-horizon MAB: $\mathbb{E}\left[\sum_{t=1}^{T} X_t\right]$
- Discounted MAB: $\mathbb{E}\left[\sum_{t=1}^{\infty} \alpha^{t-1} X_t\right]$

4 Why MABs?
$\nu_1,\ \nu_2,\ \nu_3,\ \nu_4,\ \nu_5$
Goal: maximize one's gains in a casino? (HOPELESS)

5 Why MABs? Real motivations
Clinical trials: $\mathcal{B}(\mu_1), \mathcal{B}(\mu_2), \mathcal{B}(\mu_3), \mathcal{B}(\mu_4), \mathcal{B}(\mu_5)$
- choose a treatment $A_t$ for patient $t$
- observe a response $X_t \in \{0,1\}$: $\mathbb{P}(X_t = 1) = \mu_{A_t}$
Goal: maximize the number of patients healed.
Recommendation tasks: $\nu_1, \nu_2, \nu_3, \nu_4, \nu_5$
- recommend a movie $A_t$ for visitor $t$
- observe a rating $X_t \sim \nu_{A_t}$ (e.g. $X_t \in \{1,\dots,5\}$)
Goal: maximize the sum of ratings.

6 Bernoulli bandit models
K independent arms: for $a \in \{1,\dots,K\}$, $(X_{a,t})_{t \in \mathbb{N}}$ is i.i.d. $\sim \mathcal{B}(\mu_a)$.
Bandit game: at each round $t$, an agent
- chooses an arm $A_t \in \{1,\dots,K\}$
- receives a reward $X_t \sim \mathcal{B}(\mu_{A_t})$, $X_t \in \{0,1\}$
Goal: maximize
- Finite-horizon MAB: $\mathbb{E}\left[\sum_{t=1}^{T} X_t\right]$
- Discounted MAB: $\mathbb{E}\left[\sum_{t=1}^{\infty} \alpha^{t-1} X_t\right]$

7 Bernoulli bandit models
Same setting as above, under two statistical viewpoints:
- Frequentist model: $\mu_1,\dots,\mu_K$ are unknown parameters; arm $a$: $(X_{a,t})_t$ i.i.d. $\sim \mathcal{B}(\mu_a)$.
- Bayesian model: $\mu_1,\dots,\mu_K$ are drawn from a prior distribution $\mu_a \sim \pi_a$; arm $a$: $(X_{a,t})_t \mid \mu$ i.i.d. $\sim \mathcal{B}(\mu_a)$.
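
To make the protocol concrete, here is a minimal simulation sketch of a Bernoulli bandit game; the class name `BernoulliBandit` and the uniformly random agent are illustrative, not part of the slides:

```python
import random

class BernoulliBandit:
    """K independent Bernoulli arms with fixed unknown means mu_1, ..., mu_K."""
    def __init__(self, means):
        self.means = list(means)          # frequentist view: fixed parameters
        self.K = len(self.means)

    def pull(self, a):
        """Draw arm a (0-indexed) and return a reward in {0, 1}."""
        return 1 if random.random() < self.means[a] else 0

# one round of the bandit game, with a (bad) uniformly random agent
env = BernoulliBandit([0.3, 0.5, 0.7])
arm = random.randrange(env.K)             # A_t; a real agent uses past observations
reward = env.pull(arm)                    # X_t ~ B(mu_{A_t})
```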

8 Outline
1 Bayesian bandits: a planning problem
2 Frequentist bandits: asymptotically optimal algorithms
3 Non-stochastic bandits: minimax algorithms

9 Outline
1 Bayesian bandits: a planning problem
2 Frequentist bandits: asymptotically optimal algorithms
3 Non-stochastic bandits: minimax algorithms

10 A Markov Decision Process
Bandit model $(\mathcal{B}(\mu_1),\dots,\mathcal{B}(\mu_K))$, i.i.d. rewards.
- prior distribution: $\mu_a \sim \mathcal{U}([0,1])$
- posterior distribution: $\pi_a^t := \mathcal{L}(\mu_a \mid X_1,\dots,X_t)$, so that
$$\pi_a^t = \mathrm{Beta}\big(\underbrace{S_a(t)}_{\#\text{ones}} + 1,\ \underbrace{N_a(t) - S_a(t)}_{\#\text{zeros}} + 1\big),$$
where $S_a(t)$ is the sum of the rewards gathered from $a$ up to time $t$ and $N_a(t)$ is the number of draws of arm $a$ up to time $t$.
The state $\Pi^t = (\pi_a^t)_{a=1}^K$ evolves in an MDP.
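
Since rewards are Bernoulli and the prior is uniform, each posterior is fully described by two counters. A minimal update sketch (the class name is illustrative):

```python
class BetaPosterior:
    """Posterior on mu_a under a uniform prior: Beta(S_a(t)+1, N_a(t)-S_a(t)+1)."""
    def __init__(self):
        self.s = 0             # S_a(t): number of ones observed on this arm
        self.n = 0             # N_a(t): number of draws of this arm

    def update(self, reward):  # reward in {0, 1}
        self.s += reward
        self.n += 1

    def params(self):
        """(alpha, beta) parameters of the Beta posterior."""
        return self.s + 1, self.n - self.s + 1
```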

11 A Markov Decision Process
An example of transition: [figure: the posterior of the played arm $A_t = 2$ is updated according to whether $X_t = 1$ or $X_t = 0$].
Solving a planning problem: there exists an exact solution to
- the finite-horizon MAB: $\arg\max_{(A_t)} \mathbb{E}_{\mu \sim \pi}\left[\sum_{t=1}^{T} X_t\right]$
- the discounted MAB: $\arg\max_{(A_t)} \mathbb{E}_{\mu \sim \pi}\left[\sum_{t=1}^{\infty} \alpha^{t-1} X_t\right]$
Optimal policy = solution to dynamic programming equations.

12 A Markov Decision Process (continued)
Problem: the state space is too large!

13 A reduction of the dimension
[Gittins 79]: the solution of the discounted MAB reduces to an index policy:
$$A_{t+1} = \underset{a=1,\dots,K}{\arg\max}\ G_\alpha(\pi_a^t).$$
The Gittins indices:
$$G_\alpha(p) = \sup_{\text{stopping times } \tau > 0} \frac{\mathbb{E}_{Y_t \overset{\text{i.i.d.}}{\sim} \mathcal{B}(\mu),\ \mu \sim p}\left[\sum_{t=1}^{\tau} \alpha^{t-1} Y_t\right]}{\mathbb{E}_{Y_t \overset{\text{i.i.d.}}{\sim} \mathcal{B}(\mu),\ \mu \sim p}\left[\sum_{t=1}^{\tau} \alpha^{t-1}\right]}$$
= maximal rate of instantaneous reward when committing to an arm with prior $\mu \sim p$, when rewards are discounted by $\alpha$.

14 The Gittins indices
An alternative formulation:
$$G_\alpha(p) = \inf\{\lambda \in \mathbb{R} : V_\alpha(p,\lambda) = 0\},$$
with
$$V_\alpha(p,\lambda) = \sup_{\text{stopping times } \tau > 0} \mathbb{E}_{Y_t \overset{\text{i.i.d.}}{\sim} \mathcal{B}(\mu),\ \mu \sim p}\left[\sum_{t=1}^{\tau} \alpha^{t-1} (Y_t - \lambda)\right]:$$
the price worth paying for committing to an arm with prior $\mu \sim p$ when rewards are discounted by $\alpha$.

15 Gittins indices for finite horizon
The Finite-Horizon Gittins indices depend on the remaining time to play $r$:
$$G(p, r) = \inf\{\lambda \in \mathbb{R} : V_r(p,\lambda) = 0\},$$
with
$$V_r(p,\lambda) = \sup_{\text{stopping times } 0 < \tau \le r} \mathbb{E}_{Y_t \overset{\text{i.i.d.}}{\sim} \mathcal{B}(\mu),\ \mu \sim p}\left[\sum_{t=1}^{\tau} (Y_t - \lambda)\right]$$
= price worth paying for playing an arm with prior $\mu \sim p$ for at most $r$ rounds.
The Finite-Horizon Gittins algorithm $A_{t+1} = \arg\max_{a=1,\dots,K} G(\pi_a^t, T - t)$ does NOT coincide with the optimal solution [Berry and Fristedt 85]... but is conjectured to be a good approximation!
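
For a Beta$(a,b)$ posterior, $V_r(p, \lambda)$ can be computed by backward induction (at each state: stop, worth 0, or play one more round), and $G(p, r)$ recovered by bisection, since $\lambda \mapsto V_r(p, \lambda)$ is non-increasing. A sketch under these assumptions; the function names are illustrative:

```python
from functools import lru_cache

def retirement_value(a, b, lam, r):
    """V_r(Beta(a,b), lam): optimal expected sum of (Y_t - lam) when the arm
    must be played at least once and at most r times (r >= 1)."""
    @lru_cache(maxsize=None)
    def W(a, b, k):
        # value with k rounds left, when stopping immediately is allowed
        if k == 0:
            return 0.0
        p = a / (a + b)                   # posterior mean of the arm
        cont = p - lam + p * W(a + 1, b, k - 1) + (1 - p) * W(a, b + 1, k - 1)
        return max(0.0, cont)             # stop (worth 0) or continue
    p = a / (a + b)                       # tau > 0: the first round is forced
    return p - lam + p * W(a + 1, b, r - 1) + (1 - p) * W(a, b + 1, r - 1)

def fh_gittins(a, b, r, tol=1e-6):
    """G(Beta(a,b), r) = inf{lam : V_r = 0}, by bisection on [0, 1]."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if retirement_value(a, b, mid, r) > 0:
            lo = mid                      # price mid is still worth paying
        else:
            hi = mid
    return (lo + hi) / 2

# index of a fresh arm (uniform prior) with 10 rounds remaining
print(fh_gittins(1, 1, 10))               # exceeds the posterior mean 0.5
```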

16 Outline
1 Bayesian bandits: a planning problem
2 Frequentist bandits: asymptotically optimal algorithms
3 Non-stochastic bandits: minimax algorithms

17 Regret minimization
$\mu = (\mu_1,\dots,\mu_K)$ unknown parameters, $\mu^* = \max_a \mu_a$.
The regret of a strategy $\mathcal{A} = (A_t)$ is defined as
$$R_\mu(\mathcal{A}, T) = \mathbb{E}_\mu\left[\mu^* T - \sum_{t=1}^{T} X_t\right]$$
and can be rewritten
$$R_\mu(\mathcal{A}, T) = \sum_{a=1}^{K} (\mu^* - \mu_a)\,\mathbb{E}_\mu[N_a(T)],$$
where $N_a(t)$ is the number of draws of arm $a$ up to time $t$.
Maximizing rewards $\Leftrightarrow$ minimizing regret.
Goal: design strategies that have small regret for all $\mu$.
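
The decomposition makes regret easy to compute in simulations: it only needs the gaps and the draw counts. A small illustrative helper:

```python
def regret(means, draws):
    """R_mu(A, T) = sum_a (mu_star - mu_a) * N_a(T), from the decomposition above."""
    mu_star = max(means)
    return sum((mu_star - mu) * n for mu, n in zip(means, draws))

# a strategy that drew the arms 10, 20 and 970 times over T = 1000 rounds:
print(regret([0.3, 0.5, 0.7], [10, 20, 970]))   # 0.4*10 + 0.2*20 = 8.0
```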

18 Optimal algorithms for regret minimization
All the arms should be drawn infinitely often!
[Lai and Robbins, 1985]: a uniformly efficient strategy ($\forall \mu$, $\forall \alpha \in ]0,1[$, $R_\mu(\mathcal{A}, T) = o(T^\alpha)$) satisfies
$$\mu_a < \mu^* \;\Rightarrow\; \liminf_{T \to \infty} \frac{\mathbb{E}_\mu[N_a(T)]}{\log T} \ge \frac{1}{d(\mu_a, \mu^*)},$$
where
$$d(\mu, \mu') = \mathrm{KL}\big(\mathcal{B}(\mu), \mathcal{B}(\mu')\big) = \mu \log\frac{\mu}{\mu'} + (1-\mu)\log\frac{1-\mu}{1-\mu'}.$$
Definition: a bandit algorithm is asymptotically optimal if, for every $\mu$,
$$\mu_a < \mu^* \;\Rightarrow\; \limsup_{T \to \infty} \frac{\mathbb{E}_\mu[N_a(T)]}{\log T} \le \frac{1}{d(\mu_a, \mu^*)}.$$
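
The quantity $d(\mu, \mu')$ is the binary relative entropy; a direct sketch (the clipping guard against $\log(0)$ is an implementation convenience, not part of the slides):

```python
import math

def kl_bernoulli(p, q, eps=1e-12):
    """d(p, q) = KL(B(p), B(q)) = p log(p/q) + (1-p) log((1-p)/(1-q))."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))
```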

19 First algorithms
Idea 1: draw each arm $T/K$ times → EXPLORATION

20 First algorithms
Idea 2: always choose the empirical best arm → EXPLOITATION
$$A_{t+1} = \arg\max_a \hat{\mu}_a(t)$$

21 First algorithms
Idea 3: draw the arms uniformly during $T/2$ rounds, then draw the empirical best until the end → EXPLORATION followed by EXPLOITATION

22 First algorithms
All three ideas lead to linear regret...

23 Optimistic algorithms
For each arm $a$, build a confidence interval on $\mu_a$:
$$\mu_a \le \mathrm{UCB}_a(t) \quad \text{w.h.p.}$$
[Figure: confidence intervals on the arms at round $t$.]
Optimism principle: act as if the best possible model were the true model:
$$A_{t+1} = \arg\max_a \mathrm{UCB}_a(t).$$

24 A UCB algorithm in action!

25 The UCB1 algorithm
UCB1 [Auer et al. 02] is based on the index
$$\mathrm{UCB}_a(t) = \hat{\mu}_a(t) + \sqrt{\frac{\alpha \log(t)}{2 N_a(t)}}.$$
Hoeffding's inequality:
$$\mathbb{P}\left(\hat{\mu}_{a,s} + \sqrt{\frac{\alpha \log(t)}{2s}} \le \mu_a\right) \le \exp\left(-2s\,\frac{\alpha \log(t)}{2s}\right) = \frac{1}{t^\alpha}.$$
Union bound:
$$\mathbb{P}\big(\mathrm{UCB}_a(t) \le \mu_a\big) \le \mathbb{P}\left(\exists s \le t : \hat{\mu}_{a,s} + \sqrt{\frac{\alpha \log(t)}{2s}} \le \mu_a\right) \le \sum_{s=1}^{t} \frac{1}{t^\alpha} = \frac{1}{t^{\alpha-1}}.$$
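
A minimal sketch of UCB1 on the `BernoulliBandit` environment sketched earlier; the one-pull-per-arm initialization is a standard convention, and $\alpha = 2.1$ satisfies the $\alpha > 2$ condition of the theorem below:

```python
import math

def ucb1(env, T, alpha=2.1):
    """UCB1: play each arm once, then pick the argmax of
    hat_mu_a(t) + sqrt(alpha * log(t) / (2 * N_a(t)))."""
    counts = [0] * env.K                  # N_a(t)
    sums = [0.0] * env.K                  # S_a(t)
    for t in range(T):
        if t < env.K:
            a = t                         # initialization: one pull per arm
        else:
            a = max(range(env.K),
                    key=lambda i: sums[i] / counts[i]
                    + math.sqrt(alpha * math.log(t) / (2 * counts[i])))
        r = env.pull(a)
        counts[a] += 1
        sums[a] += r
    return counts                         # sub-optimal counts grow as O(log T)
```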

26 The UCB1 algorithm
Theorem: for every $\alpha > 2$ and every sub-optimal arm $a$, there exists a constant $C_\alpha > 0$ such that
$$\mathbb{E}_\mu[N_a(T)] \le \frac{2\alpha}{(\mu^* - \mu_a)^2}\log(T) + C_\alpha.$$

27 The UCB1 algorithm (continued)
Remark:
$$\frac{2\alpha}{(\mu^* - \mu_a)^2} > 4\alpha \cdot \frac{1}{d(\mu_a, \mu^*)}$$
(so UCB1 is not asymptotically optimal).

28 Proof: 1/3
Assume $\mu^* = \mu_1$ and $\mu_2 < \mu_1$.
$$N_2(T) = \sum_{t=0}^{T-1} \mathbb{1}_{(A_{t+1}=2)} = \sum_{t=0}^{T-1} \mathbb{1}_{(A_{t+1}=2,\ \mathrm{UCB}_1(t) \le \mu_1)} + \sum_{t=0}^{T-1} \mathbb{1}_{(A_{t+1}=2,\ \mathrm{UCB}_1(t) > \mu_1)}$$
$$\le \sum_{t=0}^{T-1} \mathbb{1}_{(\mathrm{UCB}_1(t) \le \mu_1)} + \sum_{t=0}^{T-1} \mathbb{1}_{(A_{t+1}=2,\ \mathrm{UCB}_2(t) > \mu_1)},$$
since $A_{t+1} = 2$ implies $\mathrm{UCB}_2(t) \ge \mathrm{UCB}_1(t)$.

29 Proof: 1/3 (continued)
Taking expectations:
$$\mathbb{E}[N_2(T)] \le \underbrace{\sum_{t=0}^{T-1} \mathbb{P}\big(\mathrm{UCB}_1(t) \le \mu_1\big)}_{A} + \underbrace{\sum_{t=0}^{T-1} \mathbb{P}\big(A_{t+1}=2,\ \mathrm{UCB}_2(t) > \mu_1\big)}_{B}.$$

30 Proof: 2/3
Term A: if $\alpha > 2$,
$$\sum_{t=0}^{T-1} \mathbb{P}\big(\mathrm{UCB}_1(t) \le \mu_1\big) \le 1 + \sum_{t=1}^{T-1} \frac{1}{t^{\alpha-1}} \le 1 + \zeta(\alpha-1) =: C_\alpha/2.$$

31 Proof: 3/3
Term B:
$$(B) = \sum_{t=0}^{T-1} \mathbb{P}\big(A_{t+1}=2,\ \mathrm{UCB}_2(t) > \mu_1\big) \le \sum_{t=0}^{T-1} \mathbb{P}\big(A_{t+1}=2,\ \mathrm{UCB}_2(t) > \mu_1,\ \mathrm{LCB}_2(t) \le \mu_2\big) + C_\alpha/2,$$
with
$$\mathrm{LCB}_2(t) = \hat{\mu}_2(t) - \sqrt{\frac{\alpha \log t}{2 N_2(t)}}.$$
Moreover,
$$\big(\mathrm{LCB}_2(t) < \mu_2 < \mu_1 < \mathrm{UCB}_2(t)\big) \;\Rightarrow\; \mu_1 - \mu_2 \le 2\sqrt{\frac{\alpha \log(T)}{2 N_2(t)}} \;\Rightarrow\; N_2(t) \le \frac{2\alpha}{(\mu_1 - \mu_2)^2}\log(T).$$

32 Proof: 3/3 (continued)
$$(B) \le \sum_{t=0}^{T-1} \mathbb{P}\left(A_{t+1}=2,\ N_2(t) \le \frac{2\alpha}{(\mu_1-\mu_2)^2}\log(T)\right) + C_\alpha/2 \le \frac{2\alpha}{(\mu_1-\mu_2)^2}\log(T) + C_\alpha/2,$$
since arm 2 can be drawn at most $\frac{2\alpha}{(\mu_1-\mu_2)^2}\log(T)$ times while its number of draws stays below that threshold.
Conclusion:
$$\mathbb{E}[N_2(T)] \le \frac{2\alpha}{(\mu_1-\mu_2)^2}\log(T) + C_\alpha.$$

33 The KL-UCB algorithm
A UCB-type algorithm, $A_{t+1} = \arg\max_a u_a(t)$, associated with the right upper confidence bounds:
$$u_a(t) = \max\left\{q \ge \hat{\mu}_a(t) : d\big(\hat{\mu}_a(t), q\big) \le \frac{\log(t)}{N_a(t)}\right\},$$
where $\hat{\mu}_a(t)$ is the empirical mean of the rewards from arm $a$ up to time $t$.
[Figure: $u_a(t)$ is the value $q \ge \hat{\mu}_a(t)$ at which $d(\hat{\mu}_a(t), q)$ reaches the level $\log(t)/N_a(t)$.]
[Cappé et al. 13]: KL-UCB satisfies
$$\mathbb{E}_\mu[N_a(T)] \le \frac{1}{d(\mu_a, \mu^*)}\log T + O\big(\sqrt{\log(T)}\big).$$
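
The index $u_a(t)$ has no closed form but, since $q \mapsto d(\hat{\mu}_a(t), q)$ is increasing on $[\hat{\mu}_a(t), 1]$, it can be computed by bisection. A sketch reusing the `kl_bernoulli` helper from earlier:

```python
import math

def kl_ucb_index(mu_hat, n, t):
    """u_a(t) = max{q >= mu_hat : kl(mu_hat, q) <= log(t) / n}, by bisection."""
    level = math.log(t) / n
    lo, hi = mu_hat, 1.0
    for _ in range(50):                    # 50 halvings: ~1e-15 precision
        mid = (lo + hi) / 2
        if kl_bernoulli(mu_hat, mid) <= level:
            lo = mid                       # mid is still a feasible q
        else:
            hi = mid
    return lo
```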

34 Bayesian algorithms for regret minimization?
Algorithms based on Bayesian tools can be good at solving the (frequentist) regret minimization problem. Ideas:
- use the Finite-Horizon Gittins indices
- use posterior quantiles
- use posterior samples

35 Thompson Sampling
$(\pi_1^t, \dots, \pi_K^t)$: posterior distribution on $(\mu_1, \dots, \mu_K)$ at round $t$.
Thompson Sampling is a randomized Bayesian algorithm:
$$\forall a \in \{1,\dots,K\},\ \theta_a(t) \sim \pi_a^t, \qquad A_{t+1} = \arg\max_a \theta_a(t):$$
draw each arm according to its posterior probability of being optimal. [Thompson 1933]
[Figure: posterior samples $\theta_1(t), \theta_2(t)$ drawn around $\mu_1, \mu_2$.]
Thompson Sampling is asymptotically optimal. [Kaufmann, Korda, Munos 2012]
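
A minimal sketch of Thompson Sampling for Bernoulli rewards with uniform priors, on the `BernoulliBandit` environment from earlier:

```python
import random

def thompson_sampling(env, T):
    """Sample theta_a(t) ~ Beta(S_a+1, N_a-S_a+1), play the best sample."""
    s = [0] * env.K                       # successes per arm
    f = [0] * env.K                       # failures per arm
    for _ in range(T):
        theta = [random.betavariate(s[a] + 1, f[a] + 1) for a in range(env.K)]
        a = max(range(env.K), key=theta.__getitem__)
        if env.pull(a) == 1:
            s[a] += 1
        else:
            f[a] += 1
    return s, f
```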

36 Outline
1 Bayesian bandits: a planning problem
2 Frequentist bandits: asymptotically optimal algorithms
3 Non-stochastic bandits: minimax algorithms

37 Minimax regret
In stochastic (Bernoulli) bandits, we exhibited algorithms satisfying
$$\forall \mu, \quad R_\mu(\mathcal{A}, T) = \left(\sum_{a=1}^{K} \frac{\mu^* - \mu_a}{d(\mu_a, \mu^*)}\right)\log(T) + o(\log(T)),$$
where the leading coefficient is problem-dependent and can be large.
For those algorithms, one can also prove that, for some constant $C$,
$$\forall \mu, \quad R_\mu(\mathcal{A}, T) \le C\sqrt{KT\log(T)} \qquad \text{(problem-independent bound).}$$
Minimax rate of the regret:
$$\inf_{\mathcal{A}} \sup_\mu R_\mu(\mathcal{A}, T) = O\big(\sqrt{KT}\big).$$

38 Removing the stochastic assumption...
A new bandit game: at round $t$,
- the player chooses arm $A_t$
- simultaneously, an adversary chooses the vector of rewards $(x_{t,1}, \dots, x_{t,K})$
- the player receives the reward $x_t = x_{t,A_t}$
Goal: maximize rewards, or minimize the regret
$$R(\mathcal{A}, T) = \max_a \mathbb{E}\left[\sum_{t=1}^{T} x_{t,a}\right] - \mathbb{E}\left[\sum_{t=1}^{T} x_t\right].$$

39 Full information: Exponentially Weighted Forecaster
The full-information game: at round $t$,
- the player chooses arm $A_t$
- simultaneously, an adversary chooses the vector of rewards $(x_{t,1}, \dots, x_{t,K})$
- the player receives the reward $x_t = x_{t,A_t}$ and observes the full reward vector $(x_{t,1}, \dots, x_{t,K})$
The EWF algorithm [Littlestone, Warmuth 1994]: with $\hat{p}_t$ the probability distribution used at round $t$, choose
$$\hat{p}_{a,t} \propto e^{\eta \sum_{s=1}^{t-1} x_{s,a}}, \qquad A_t \sim \hat{p}_t.$$
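
A one-round sketch of the exponential-weights update; the max-shift is a standard numerical-stability trick, not part of the slides, and the function name is illustrative:

```python
import math, random

def ewf_step(cum_rewards, eta):
    """p_{a,t} ∝ exp(eta * cumulative reward of a); draw A_t ~ p_t."""
    m = max(cum_rewards)                              # shift to avoid overflow
    w = [math.exp(eta * (g - m)) for g in cum_rewards]
    z = sum(w)
    p = [wi / z for wi in w]
    a = random.choices(range(len(p)), weights=p)[0]   # A_t ~ p_t
    return p, a
```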

40 Back to the bandit case: the EXP3 strategy
We don't have access to $x_{a,t}$ for all $a$, so we use the importance-weighted estimate
$$\hat{x}_{a,t} = \frac{x_{a,t}}{\hat{p}_{a,t}}\,\mathbb{1}_{(A_t = a)},$$
which satisfies $\mathbb{E}[\hat{x}_{a,t}] = x_{a,t}$.
The EXP3 strategy [Auer et al. 2003]: with $\hat{p}_t$ the probability distribution used at round $t$, choose
$$\hat{p}_{a,t} \propto e^{\eta \sum_{s=1}^{t-1} \hat{x}_{a,s}}, \qquad A_t \sim \hat{p}_t.$$
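
EXP3 is then the same exponential-weights step applied to the importance-weighted estimates. A sketch reusing the `ewf_step` function above, with the adversary's (hidden) reward vectors given as a list:

```python
def exp3(reward_vectors, eta):
    """EXP3: only x_{t, A_t} is observed, so cumulate x_hat = (x / p_a) 1(A_t = a)."""
    K = len(reward_vectors[0])
    x_hat_sums = [0.0] * K                  # cumulated estimates, one per arm
    total = 0.0
    for x in reward_vectors:                # (x_{t,1}, ..., x_{t,K}) each round
        p, a = ewf_step(x_hat_sums, eta)
        total += x[a]                       # only the played coordinate is revealed
        x_hat_sums[a] += x[a] / p[a]        # unbiased importance-weighted estimate
    return total
```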

41 Theoretical results
The EXP3 strategy [Auer et al. 2003]: $\hat{p}_{a,t} \propto e^{\eta \sum_{s=1}^{t-1} \hat{x}_{a,s}}$, $A_t \sim \hat{p}_t$.
[Bubeck and Cesa-Bianchi 12]: EXP3 with $\eta = \sqrt{\frac{\log(K)}{KT}}$ satisfies
$$R(\mathrm{EXP3}, T) \le 2\sqrt{KT\log K}.$$
Remarks:
- almost the same guarantees hold for the anytime choice $\eta_t = \sqrt{\frac{\log(K)}{Kt}}$
- extra exploration is needed to obtain high-probability results

42 Conclusions
Under different assumptions, different types of strategies achieve an exploration-exploitation tradeoff in bandit models:
- Index policies: Gittins indices, UCB-type algorithms
- Randomized algorithms: Thompson Sampling, exponential weights
More complex bandit models not covered today: restless bandits, contextual bandits, combinatorial bandits...

43 Books and surveys
Bayesian bandits:
- Bandit Problems: Sequential Allocation of Experiments. Berry and Fristedt. Chapman and Hall, 1985.
- Multi-armed Bandit Allocation Indices. Gittins, Glazebrook, Weber. Wiley, 2011.
Stochastic and non-stochastic bandits:
- Regret Analysis of Stochastic and Non-stochastic Bandit Problems. Bubeck and Cesa-Bianchi. Foundations and Trends in Machine Learning, 2012.
- Prediction, Learning and Games. Cesa-Bianchi and Lugosi. Cambridge University Press, 2006.
