Two generic principles in modern bandits: the optimistic principle and Thompson sampling

1 Two generic principles in modern bandits: the optimistic principle and Thompson sampling Rémi Munos INRIA Lille, France CSML Lunch Seminars, September 12, 2014

2 Outline Two principles: the optimistic principle and Thompson sampling. In 3 settings: the stochastic multi-armed bandit, linear bandits, bandits in graphs.

3 The stochastic multi-armed bandit problem Setting: a set of K arms, defined by distributions $\nu_k$, initially unknown. At each time t, choose an arm $I_t$ and receive an i.i.d. reward $x_t \sim \nu_{I_t}$. Goal: maximize the sum of rewards. Exploration-exploitation tradeoff: Explore: learn about the environment. Exploit: act optimally according to our current beliefs.

4 Definition of regret Definitions: Let $\mu_k = E[\nu_k]$ be the expected value of arm k, and let $\mu^* = \max_k \mu_k$ be the best expected value. The cumulative expected regret: $R_n \overset{\mathrm{def}}{=} \sum_{t=1}^{n} (\mu^* - \mu_{I_t}) = \sum_{k=1}^{K} \Delta_k T_k(n)$, where $\Delta_k \overset{\mathrm{def}}{=} \mu^* - \mu_k$ and $T_k(n)$ is the number of times arm k has been played up to round n. Equivalent goal: minimize $R_n$.
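
As a sanity check on this definition, here is a minimal Python sketch (not from the slides) that simulates a Bernoulli bandit under a uniform-random policy and evaluates $R_n$ through the second expression, $\sum_k \Delta_k T_k(n)$; the arm means and the policy are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Bernoulli bandit: K arms with means mu_k (unknown to the learner).
mu = np.array([0.3, 0.5, 0.7])
mu_star = mu.max()        # best expected value mu*
n = 1000                  # horizon

# Naive uniform-random policy, just to illustrate the regret definition.
pulls = np.zeros(len(mu), dtype=int)    # T_k(n)
for t in range(n):
    k = rng.integers(len(mu))           # arm I_t, chosen uniformly at random
    _ = rng.random() < mu[k]            # reward x_t ~ nu_{I_t} (unused by this policy)
    pulls[k] += 1

# R_n = sum_k Delta_k * T_k(n), with Delta_k = mu* - mu_k.
regret = np.sum((mu_star - mu) * pulls)
print(f"regret of uniform play over n={n} rounds: {regret:.1f}")
```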

5 Proposed solutions This is an old problem! [Robbins, 1952] Maybe surprisingly, not fully solved yet! Many proposed strategies: ɛ-greedy, Softmax exploration (e.g., EXP3), Follow the perturbed leader, Bayesian exploration (Gittins index).

6 Proposed solutions This is an old problem! [Robbins, 1952] Maybe surprisingly, not fully solved yet! Many proposed strategies: ɛ-greedy, Softmax exploration (e.g., EXP3), Follow the perturbed leader, Bayesian exploration (Gittins index). Here we will consider: Optimism in the face of uncertainty: play optimally in the best possible world. Thompson sampling: play optimally in a randomly selected world.

7 The UCB1 algorithm Upper Confidence Bound algorithm [Auer, Cesa-Bianchi, Fischer, 2002]: Select the arm with highest index $B_{k,t} \overset{\mathrm{def}}{=} \hat\mu_k(t) + \sqrt{\frac{2\log(t)}{T_k(t)}}$, where $\hat\mu_k(t)$ is the empirical mean of the rewards collected from arm k up to time t, and $T_k(t)$ is the number of times arm k has been pulled up to time t. Pull an arm either because it looks good or because it is uncertain.
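
A minimal Python sketch of this rule, assuming Bernoulli rewards and an initialization phase that pulls each arm once; the arm means and function name are illustrative, not part of the slides.

```python
import numpy as np

def ucb1(mu, n, rng=None):
    """Minimal UCB1 sketch on Bernoulli arms with (unknown) means `mu`."""
    rng = rng or np.random.default_rng(0)
    K = len(mu)
    counts = np.zeros(K)    # T_k(t)
    sums = np.zeros(K)      # total reward collected from arm k
    for t in range(1, n + 1):
        if t <= K:
            k = t - 1                                                 # play each arm once to initialize
        else:
            index = sums / counts + np.sqrt(2 * np.log(t) / counts)   # B_{k,t}
            k = int(np.argmax(index))
        reward = float(rng.random() < mu[k])                          # Bernoulli reward
        counts[k] += 1
        sums[k] += reward
    return counts

print(ucb1(np.array([0.3, 0.5, 0.7]), n=5000))   # sub-optimal arms get O(log n) pulls
```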

8 Can we keep playing a bad arm for a long time?

9 Can we keep playing a bad arm for a long time? No, since: The more we pull an arm k, the smaller the size of its confidence interval. But with high probability, it cannot be pulled once its UCB is smaller than $\mu^*$. Thus a sub-optimal arm k can only be pulled a number of times $T_k(n)$ such that $2\sqrt{\frac{2\log n}{T_k(n)}} \geq \mu^* - \mu_k = \Delta_k$.

10 Regret bound for UCB1 Proposition 1. Each sub-optimal arm k is visited on average at most $E[T_k(n)] \leq \frac{8\log n}{\Delta_k^2} + 1 + \frac{\pi^2}{3}$ times (where $\Delta_k \overset{\mathrm{def}}{=} \mu^* - \mu_k > 0$). Theorem 1. Thus the expected regret is bounded by: $E[R_n] = \sum_k \Delta_k\, E[T_k(n)] \leq 8\Big(\sum_{k:\Delta_k>0} \frac{1}{\Delta_k}\Big)\log n + K\Big(1 + \frac{\pi^2}{3}\Big)$.

11 Intuition of the proof Let k be a sub-optimal arm and $k^*$ an optimal arm. At time t, if arm k is selected, this means that $B_{k,t} \geq B_{k^*,t}$, i.e. $\hat\mu_{k,t} + \sqrt{\frac{2\log t}{T_k(t)}} \geq \hat\mu_{k^*,t} + \sqrt{\frac{2\log t}{T_{k^*}(t)}}$. With high probability, the left-hand side is at most $\mu_k + 2\sqrt{\frac{2\log t}{T_k(t)}}$ and the right-hand side is at least $\mu^*$, which gives $T_k(t) \leq \frac{8\log t}{\Delta_k^2}$. Thus, if $T_k(t) > \frac{8\log t}{\Delta_k^2}$, there is only a small probability that arm k can be selected.

12 Full proof of Proposition 1 Write $u = \lceil 8\log(n)/\Delta_k^2 \rceil$. We have: $T_k(n) \leq u + \sum_{t=u+1}^{n} \mathbb{1}\{I_t = k;\ T_k(t) > u\} \leq u + \sum_{t=u+1}^{n} \Big[ \mathbb{1}\big\{\hat\mu_{k,t} \geq \mu_k + \sqrt{\tfrac{2\log t}{T_k(t)}}\big\} + \mathbb{1}\big\{\hat\mu_{k^*,t} \leq \mu^* - \sqrt{\tfrac{2\log t}{T_{k^*}(t)}}\big\} \Big]$. Now, taking the expectation of both sides and bounding each indicator with a Chernoff-Hoeffding argument, $E[T_k(n)] \leq u + \sum_{t=u+1}^{n} \frac{2}{t^2} \leq \frac{8\log(n)}{\Delta_k^2} + 1 + \frac{\pi^2}{3}$.

13 Lower bound We have proven that UCB1 has a regret bounded as: $E[R_n] \leq 8\Big(\sum_{k:\Delta_k>0} \frac{1}{\Delta_k}\Big)\log n + O(1)$. Lower bound [Burnetas, Katehakis, 1996], [Lai, Robbins, 1985]: $E[R_n] \geq \Big(\sum_{k:\Delta_k>0} \frac{\Delta_k}{K_{\inf}(\nu_k,\mu^*)}\Big)\log n + o(\log n)$, where $K_{\inf}(\nu_k,\mu^*) = \inf\{KL(\nu_k,\nu') : \nu' \in \mathcal{D} \text{ and } E(\nu') > \mu^*\}$.

14 UCB with variance estimate Tighter bounds lead to better performance. UCB-V [Audibert, Munos, Szepesvári, 2007]: define the UCB as $B_{k,t} \overset{\mathrm{def}}{=} \hat\mu_{k,t} + \sqrt{\frac{2\hat\sigma_{k,t}^2\log(1.2t)}{T_k(t)}} + \frac{3\log(1.2t)}{T_k(t)}$. Then the expected regret is bounded as: $E[R_n] \leq 10\Big(\sum_{k:\Delta_k>0} \Big(\frac{\sigma_k^2}{\Delta_k} + 2\Big)\Big)\log n$.
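
A short sketch of the UCB-V index alone (the surrounding bandit loop would be the same as in the UCB1 sketch above); the function name and example values are illustrative.

```python
import numpy as np

def ucb_v_index(rewards_k, t):
    """UCB-V index of one arm, from the rewards observed on that arm up to time t."""
    T_k = len(rewards_k)
    mean = np.mean(rewards_k)
    var = np.var(rewards_k)               # empirical variance sigma_hat^2_{k,t}
    log_term = np.log(1.2 * t)
    return mean + np.sqrt(2 * var * log_term / T_k) + 3 * log_term / T_k

print(ucb_v_index(np.array([0.0, 1.0, 1.0, 0.0, 1.0]), t=100))
```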

15 KL-UCB Use the full empirical distribution. KL-UCB: given a class of distributions $\mathcal{D}$, $B_{k,t} \overset{\mathrm{def}}{=} \sup\Big\{ E[\nu] : \nu \in \mathcal{D} \text{ and } KL(\hat\nu_k(t),\nu) \leq \frac{\log t}{T_k(t)} \Big\}$. (Figure: the curve $x \mapsto kl(\hat\mu_k(t),x)$ crossing the level $\frac{\log t}{T_k(t)}$ between $\hat\mu_k(t)$ and $B_{k,t}$.)
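
For Bernoulli arms, $\mathcal{D}$ is the set of Bernoulli distributions and the index reduces to the largest $q \geq \hat\mu_k(t)$ such that $kl(\hat\mu_k(t), q) \leq \log(t)/T_k(t)$, which can be found by bisection; a minimal sketch under that Bernoulli assumption (names are illustrative):

```python
import numpy as np

def kl_bernoulli(p, q, eps=1e-12):
    """kl(p, q): KL divergence between Bernoulli(p) and Bernoulli(q)."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def kl_ucb_index(mu_hat, T_k, t, iters=30):
    """Largest q >= mu_hat with T_k * kl(mu_hat, q) <= log t, found by bisection."""
    target = np.log(t) / T_k
    lo, hi = mu_hat, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if kl_bernoulli(mu_hat, mid) <= target:
            lo = mid
        else:
            hi = mid
    return lo

print(kl_ucb_index(mu_hat=0.4, T_k=50, t=1000))   # the index B_{k,t} for that arm
```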

16 Regret of KL-UCB Theorem 2 (Cappé, Garivier, Maillard, Munos, Stoltz, 2013). The regret of KL-UCB is bounded as: $E[R_n] = \Big(\sum_{k:\Delta_k>0} \frac{\Delta_k}{K_{\inf}(\nu_k,\mu^*)}\Big)\log n + o(\log n)$. Reaches the lower bounds of [Lai, Robbins, 1985], [Burnetas, Katehakis, 1996]: for exponential families (Bernoulli, Gaussian, Gamma, Dirichlet, Poisson, ...) and for finitely supported distributions.

17 Thompson sampling The first bandit algorithm ever [Thompson, 1933]. Only recently rediscovered: efficient in practice [Chapelle, Li, 2011], ... Recent analyses: Frequentist: [Agrawal, Goyal], [Kaufmann, Korda, Munos]; Bayesian: [Russo, Van Roy, 2013], [Bubeck, Liu, 2013]. Algorithm: Choose a prior on the set of unknown parameters. Update the posterior according to the observed rewards. Draw a sample from the posterior and play optimally (as if the sample were the true parameter).
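
For Bernoulli rewards with a Beta prior, these three steps take a simple conjugate form; a minimal sketch with illustrative arm means (not from the slides):

```python
import numpy as np

def thompson_bernoulli(mu, n, rng=None):
    """Thompson sampling sketch for Bernoulli arms with a Beta(1, 1) prior on each mean."""
    rng = rng or np.random.default_rng(0)
    K = len(mu)
    a = np.ones(K)       # posterior Beta parameters: a = 1 + #successes
    b = np.ones(K)       #                            b = 1 + #failures
    pulls = np.zeros(K, dtype=int)
    for _ in range(n):
        theta = rng.beta(a, b)                 # one posterior sample per arm
        k = int(np.argmax(theta))              # act optimally in the sampled world
        reward = float(rng.random() < mu[k])
        a[k] += reward                         # conjugate posterior update
        b[k] += 1.0 - reward
        pulls[k] += 1
    return pulls

print(thompson_bernoulli(np.array([0.3, 0.5, 0.7]), n=5000))
```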

18 (Frequentist) Analysis of Thompson sampling Theorem 3 (Korda, Kaufmann, Munos, 2013). Assume the arm distributions belong to an exponential family, and use the Jeffreys prior. Then $E[R_n] = \Big(\sum_{k:\Delta_k>0} \frac{\Delta_k}{K_{\inf}(\nu_k,\mu^*)}\Big)\log n + o(\log n)$. Reaches the lower bounds of [Burnetas, Katehakis, 1996], [Lai, Robbins, 1985].

19 Three ingredients for analysing Thompson sampling 1. Concentration of the posterior distributions around the true means. 2. The optimal arm is often pulled: for any $b \in (0,1)$, $T_{k^*}(t) = \Omega(t^b)$. This is achieved by proving that $P(\theta^\pi_{k^*,t} > \mu^*) \geq c$ (anti-concentration of the posterior). 3. Comparison of $\theta^\pi_{k,t}$ to the quantile of $\pi_{k,t}$ at level $1 - \frac{1}{T_k(t)}$.

20 Conclusions on multi-armed bandits Two generic principles: Optimistic principle: act optimally in the best possible world compatible with the observations. Thompson sampling: act optimally in a world randomly selected from the posterior. KL-UCB and Thompson sampling are currently among the best algorithms for multi-armed bandits. Those principles extend to more complicated settings: linear bandits, bandits in graphs.

21 Linear bandits The set of arms $\mathcal{X}$ is a subset of $\mathbb{R}^D$. At each time step t: Select $x_t \in \mathcal{X}$, Observe $r_t = x_t^\top \alpha + \epsilon_t$, where $\alpha \in \mathbb{R}^D$ is unknown. Define the regret: $R_n = \sum_{t=1}^{n} (x^* - x_t)^\top \alpha$, with $x^* = \arg\max_{x\in\mathcal{X}} x^\top \alpha$.

22 The optimistic principle The reward $r_t = x_t^\top\alpha + \epsilon_t$ provides information about $\alpha$ along the direction $x_t$. Idea: Build a high-probability confidence set $\mathcal{E}_t$ such that $\alpha \in \mathcal{E}_t$ w.h.p. Play the arm $x \in \mathcal{X}$ that maximizes $\max_{\alpha' \in \mathcal{E}_t} x^\top \alpha'$. (Figure: the confidence ellipsoid $\mathcal{E}_t$ around $\hat\alpha_t$ in $\mathbb{R}^D$, the arm set $\mathcal{X}$, the played arm $x_t$ and the optimal arm $x^*$.)

23 A bit more precisely... UCB idea: Define a least-squares estimate $\hat\alpha_t$ of $\alpha$: $\hat\alpha_t = \arg\min_{\alpha' \in \mathbb{R}^D} \sum_{s=1}^{t-1} \big(r_s - x_s^\top \alpha'\big)^2 + \|\alpha'\|^2$, and a confidence ellipsoid $\mathcal{E}_t$ around $\hat\alpha_t$: $\mathcal{E}_t = \{\alpha' \in \mathbb{R}^D : \|\alpha' - \hat\alpha_t\|_{V_t} \leq \rho(t)\}$, where $\rho(t) = c\sqrt{D\log(t/\delta)}$ and $V_t = \sum_{s=1}^{t-1} x_s x_s^\top + I$. Property: w.p. $1-\delta$, $\alpha \in \mathcal{E}_t$ for all $t \geq 1$. Algorithm: $x_t = \arg\max_{x\in\mathcal{X}} \max_{\alpha' \in \mathcal{E}_t} x^\top \alpha'$ (or $x_t = \arg\max_{x\in\mathcal{X}} x^\top\hat\alpha_t + \rho(t)\|x\|_{V_t^{-1}}$).
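
A minimal sketch of the closed-form variant $x_t = \arg\max_x x^\top\hat\alpha_t + \rho(t)\|x\|_{V_t^{-1}}$; the arm set, noise level and choice of $\rho(t)$ below are illustrative assumptions rather than the tuning prescribed by the theory.

```python
import numpy as np

def linucb_step(X, V, b, rho):
    """One step of the optimistic rule: argmax_x x'alpha_hat + rho * ||x||_{V^-1}.
    X: (num_arms, D) arm vectors; V = sum_s x_s x_s' + I; b = sum_s r_s x_s."""
    V_inv = np.linalg.inv(V)
    alpha_hat = V_inv @ b                                         # regularized least-squares estimate
    widths = np.sqrt(np.einsum("id,de,ie->i", X, V_inv, X))       # ||x||_{V^-1} per arm
    return int(np.argmax(X @ alpha_hat + rho * widths))

# Illustrative run on synthetic data (alpha, noise level and rho(t) are assumptions).
rng = np.random.default_rng(0)
D, num_arms, n = 5, 20, 500
X = rng.normal(size=(num_arms, D))
alpha = rng.normal(size=D)
V, b = np.eye(D), np.zeros(D)
for t in range(1, n + 1):
    k = linucb_step(X, V, b, rho=1.0 + np.sqrt(np.log(t + 1)))
    r = X[k] @ alpha + 0.1 * rng.normal()           # r_t = x_t' alpha + eps_t
    V += np.outer(X[k], X[k])
    b += r * X[k]
print("arm played at the last round:", k, "| best arm:", int(np.argmax(X @ alpha)))
```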

24 Thompson sampling [Agrawal, Goyal, 2013] Use a Gaussian prior $G(0, I)$. At each t, draw a sample from the posterior: $\alpha_t \sim G(\hat\alpha_t, \rho(t)^2 V_t^{-1})$. Select $x_t = \arg\max_{x\in\mathcal{X}} x^\top \alpha_t$. Remarks: The Gaussian prior and Gaussian likelihood model are just there for the design of the TS algorithm. Computational complexity is generally lower than UCB. (Figure: the sampled parameter $\alpha_t$ near $\hat\alpha_t$ and $\alpha$ in $\mathbb{R}^D$, and the selected arm $x_t \in \mathcal{X}$.)
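
The sampling rule reuses the same quantities $\hat\alpha_t$ and $V_t$; a one-step sketch (the synthetic arm set and $\rho$ are illustrative):

```python
import numpy as np

def linear_ts_step(X, V, b, rho, rng):
    """One step of linear TS: sample alpha_t ~ G(alpha_hat, rho^2 V^{-1}), play argmax_x x'alpha_t."""
    V_inv = np.linalg.inv(V)
    alpha_hat = V_inv @ b
    alpha_t = rng.multivariate_normal(alpha_hat, rho ** 2 * V_inv)
    return int(np.argmax(X @ alpha_t))

# Usage mirrors the optimistic loop above; only the selection rule changes.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
V, b = np.eye(5), np.zeros(5)
print(linear_ts_step(X, V, b, rho=1.0, rng=rng))
```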

25 Regret analysis UCB algorithms: With high probability, $R_n = O(D\sqrt{n})$ or $O(\sqrt{Dn\log|\mathcal{X}|})$. Thompson sampling: With high probability, $R_n = O(D^{3/2}\sqrt{n})$ or $O(D\sqrt{n\log|\mathcal{X}|})$. Lower bound: There exists a set $\mathcal{X}$ such that for any algorithm, $R_n = \Omega(D\sqrt{n})$. Ref: [Auer, 2002], [Dani, Hayes, Kakade, 2008], [Rusmevichientong, Tsitsiklis, 2010], [Li, Chu, Langford, Schapire, 2010], [Abbasi-Yadkori, Pál, Szepesvári, 2011], [Agrawal, Goyal, 2013].

26 Bandits in graphs Examples: advertising campaigns, recommender systems, ... The number of arms (nodes) is larger than the number of rounds.

27 Bandit in a graph Let G be a known graph with K nodes {1, 2, ..., K}, and let f be an unknown function defined on the set of nodes. For t = 1 to n: Select a node $I_t$, Observe reward $r_t = f(I_t) + \epsilon_t$. Goal: maximize the sum of expected rewards. Equivalently, minimize the regret: $R_n = \sum_{t=1}^{n} (f^* - f(I_t))$, where $f^* = \max_{1\leq i\leq K} f(i)$. We care about the case when $K > n$.

28 Smooth graph function Neighboring nodes have similar values. Smoothness of the function: $S_G(f) = \frac{1}{2} \sum_{i,j \leq K} w_{i,j} (f_i - f_j)^2$, where $w_{i,j}$ is the weight of the edge between nodes i and j.

29 Graph Laplacian Graph Laplacian: $L = D - W$, where W is the adjacency matrix (edge weights $w_{i,j}$) and D is the diagonal matrix with entries $d_i = \sum_j w_{i,j}$. (Figure: a 5-node example with edge weights $w_{1,2}=1$, $w_{2,3}=3$, $w_{1,5}=2$, $w_{1,4}=1$, $w_{2,4}=4$, $w_{4,5}=5$, $w_{3,4}=2$.) Spectral decomposition: $L = Q\Lambda Q^\top$, where $\Lambda$ is diagonal, containing the eigenvalues of L, and Q is orthogonal, its columns being the eigenvectors of L.

30 Alternative representation Change of basis: let $f = Q\alpha$; then $\alpha = Q^\top f$. We can learn $\alpha$ instead of f. Smoothness of f: $S_G(f) = \frac{1}{2}\sum_{i,j\leq K} w_{i,j}(f_i - f_j)^2 = f^\top L f = f^\top Q\Lambda Q^\top f = \alpha^\top \Lambda \alpha = \|\alpha\|_\Lambda^2 = \sum_{i=1}^{K} \lambda_i \alpha_i^2$. f is smooth when $\alpha_i$ is small for large $\lambda_i$: $\alpha$ lies in a thin ellipsoid.
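
A quick numerical check of these identities on a small weighted graph (the edge weights mirror the 5-node example above; the node values are illustrative):

```python
import numpy as np

# Small weighted graph; the weights mirror the 5-node example two slides above.
K = 5
W = np.zeros((K, K))
edges = {(0, 1): 1, (1, 2): 3, (0, 4): 2, (0, 3): 1, (1, 3): 4, (3, 4): 5, (2, 3): 2}
for (i, j), w in edges.items():
    W[i, j] = W[j, i] = w

D = np.diag(W.sum(axis=1))                 # degree matrix
L = D - W                                  # graph Laplacian
lam, Q = np.linalg.eigh(L)                 # L = Q diag(lam) Q', eigenvalues in increasing order

f = np.array([0.0, 0.1, 0.2, 0.1, 0.0])    # a function on the nodes (illustrative values)
alpha = Q.T @ f                            # coordinates in the eigenbasis

s1 = 0.5 * np.sum(W * (f[:, None] - f[None, :]) ** 2)   # (1/2) sum_ij w_ij (f_i - f_j)^2
s2 = f @ L @ f                                          # f' L f
s3 = np.sum(lam * alpha ** 2)                           # sum_i lambda_i alpha_i^2
print(s1, s2, s3)                          # the three values coincide
```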

31 Problem reformulation Eigendecomposition of the graph Laplacian: $L = Q\Lambda Q^\top$, where $Q = (q_1 \cdots q_K)$ has rows $x_1^\top, \ldots, x_K^\top$. The $(q_i)_{1\leq i\leq K}$ are orthonormal, as are the $(x_i)_{1\leq i\leq K}$. Notice that $f_i = (Q\alpha)_i = x_i^\top \alpha$. Thus this is a linear bandit problem where the set of arms is $\{x_1,\ldots,x_K\} \subset \mathbb{R}^K$ and $\alpha$ is the unknown parameter.

32 Spectral UCB [Valko, Munos, Kveton, Kocák, 2014]. Follows the optimistic principle: Define a penalized least-squares estimate $\hat\alpha_t = \arg\min_{\alpha' \in \mathbb{R}^K} \sum_{s=1}^{t-1} \big(r_s - x_s^\top \alpha'\big)^2 + \|\alpha'\|_\Lambda^2$. Select the next point $x_t = \arg\max_{x\in\mathcal{X}} \underbrace{x^\top\hat\alpha_t + \rho(t)\|x\|_{V_t^{-1}}}_{\text{UCB on } x^\top\alpha}$, where $V_t \overset{\mathrm{def}}{=} \sum_{s=1}^{t-1} x_s x_s^\top + \Lambda$. Observe reward $r_t = x_t^\top \alpha + \epsilon_t$.
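
A one-step sketch of this selection rule; the random graph, the small ridge added to $\Lambda$ (whose smallest eigenvalue is 0) and $\rho$ are illustrative assumptions:

```python
import numpy as np

def spectral_ucb_step(X, V, b, rho):
    """One Spectral UCB step. V already contains the regularizer Lambda, so alpha_hat below
    is the Lambda-penalized least-squares estimate and the bonus uses ||x||_{V^-1}."""
    V_inv = np.linalg.inv(V)
    alpha_hat = V_inv @ b
    widths = np.sqrt(np.einsum("id,de,ie->i", X, V_inv, X))
    return int(np.argmax(X @ alpha_hat + rho * widths))

# Illustrative setup: the rows of Q are the arm vectors x_i; Lambda is regularized by a small
# multiple of the identity since the smallest Laplacian eigenvalue is 0 (an assumption here).
rng = np.random.default_rng(0)
W = np.triu(rng.random((6, 6)), 1)
W = W + W.T                                                # random weighted graph
L = np.diag(W.sum(axis=1)) - W
lam, Q = np.linalg.eigh(L)
V = np.diag(lam) + 1e-3 * np.eye(6)                        # V_1 = Lambda (+ eps * I)
b = np.zeros(6)
print(spectral_ucb_step(Q, V, b, rho=1.0))                 # after a pull: V += x x', b += r x
```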

33 Spectral TS [Kocák, Valko, Munos, Agrawal, 2014] Idea: Incorporate the smoothness assumption into the prior. Set $V_1 = \Lambda$, $\hat\alpha_1 = 0$. For t = 1 to n: Sample $\alpha_t \sim G(\hat\alpha_t, \rho(t)^2 V_t^{-1})$. Select $x_t = \arg\max_x x^\top \alpha_t$. Observe reward $r_t = x_t^\top \alpha + \epsilon_t$. Update the posterior mean $\hat\alpha_{t+1}$ and covariance matrix $V_{t+1}$.
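
The TS counterpart replaces the maximization over the confidence ellipsoid by a single posterior sample; a one-step sketch with the same conventions as the Spectral UCB snippet above (names are illustrative):

```python
import numpy as np

def spectral_ts_step(X, V, b, rho, rng):
    """One Spectral TS step: posterior G(alpha_hat, rho^2 V^{-1}) with V initialized to Lambda."""
    V_inv = np.linalg.inv(V)
    alpha_hat = V_inv @ b
    alpha_t = rng.multivariate_normal(alpha_hat, rho ** 2 * V_inv)   # sample alpha_t
    return int(np.argmax(X @ alpha_t))                               # greedy on the sample

# After observing r_t = x_t' alpha + eps_t, update V += np.outer(x_t, x_t) and b += r_t * x_t,
# exactly as in the Spectral UCB sketch above.
```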

34 Spectral UCB and Spectral TS regret bound Theorem 4. Both the regret of Spectral UCB and the regret of Spectral TS are bounded, with probability $1-\delta$, as $R_n = O\big((\sqrt{d} + \|\alpha\|_\Lambda)\sqrt{nd\log(n/\delta)}\big)$, where d is the effective dimension: the largest d such that $d\,\lambda_d \leq \frac{n}{\log n}$. d is small when the $(\lambda_i)$ grow rapidly. This is related to the number of non-negligible dimensions.
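
A sketch of the effective dimension as a direct computation, following the inequality as stated above (the exact normalization used in the published analysis may differ slightly; names are illustrative):

```python
import numpy as np

def effective_dimension(lam, n):
    """Largest d such that d * lambda_d <= n / log(n), for eigenvalues lam sorted increasingly."""
    budget = n / np.log(n)
    d = 0
    for i, l in enumerate(lam, start=1):
        if i * l <= budget:
            d = i
    return d

# Rapidly growing eigenvalues give a small effective dimension.
print(effective_dimension(np.array([0.0, 1.0, 4.0, 9.0, 16.0, 25.0]), n=100))   # -> 3
```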

35 Effective dimension vs. Ambient dimension (Figures: effective dimension as a function of the time horizon T for a Flixster graph and for a Barabási-Albert graph.) Usually $d \ll K$ in real-world graphs.

36 (note that )

37 Synthetic experiment (Figure: Barabási-Albert random graph results, K = 250, basis size = 3, effective d = 1; cumulative regret as a function of time T, and computational time in seconds, for SpectralTS, LinearTS, SpectralUCB and LinUCB.)

38 Experiments with the MovieLens dataset (Figure: results on the MovieLens dataset [Lam, Herlocker, 2012], K = 2019, average of 10 users, T = 200, d = 5; cumulative regret as a function of time t, and computational time in seconds, for SpectralTS, LinearTS, SpectralUCB and LinUCB.) Note: the graph has been learnt by low-rank matrix factorization of the $10^6$-ratings matrix.

39 Conclusion on graph bandits Given a known graph, we assume the unknown function to be smooth w.r.t. the graph structure. Spectral UCB and Spectral TS achieve a regret bound of order $\tilde O(d\sqrt{n})$, where the effective dimension $d \ll K$. Computational complexity per step is $O(K^3)$ for Spectral UCB and $O(K^2)$ for Spectral TS. Approximating the first J eigenvectors takes $O(Jm\log m)$ time, where m is the number of edges; then the complexity per step is $O(JK^2)$ for Spectral UCB and $O(JK)$ for Spectral TS.

40 Conclusions Bandits = a great source of inspiration: Optimistic approach: act optimally in the best possible world compatible with the observations. Thompson sampling: act optimally in a world randomly selected from the posterior. Multi-armed bandit = building block. Many extensions in bandits: linear, convex, Lipschitz, Gaussian, contextual, combinatorial, ... and in Reinforcement Learning.

41 Thanks!!!
