Bayesian and Frequentist Methods in Bandit Models


1 Bayesian and Frequentist Methods in Bandit Models Emilie Kaufmann, Telecom ParisTech Bayes In Paris, ENSAE, October 24th, 2013 Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 1 / 48

2 1 The multiarmed bandit problem 2 Bayesian bandits, frequentist bandits 3 Two Bayesian bandit algorithms Bayes-UCB Thompson Sampling 4 Bayesian algorithms for pure exploration? 5 Conclusion Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 2 / 48

3 The multiarmed bandit problem 1 The multiarmed bandit problem 2 Bayesian bandits, frequentist bandits 3 Two Bayesian bandit algorithms Bayes-UCB Thompson Sampling 4 Bayesian algorithms for pure exploration? 5 Conclusion Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 3 / 48

4 The multiarmed bandit problem Bandit model
A multi-armed bandit model is a set of K arms where:
  - each arm a is a probability distribution ν_a with mean µ_a,
  - drawing arm a is observing a realization of ν_a,
  - arms are assumed to be independent.
In a bandit game, at round t, a forecaster
  - chooses an arm A_t to draw based on past observations, according to its sampling strategy,
  - observes a reward X_t ∼ ν_{A_t}.
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 4 / 48
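To make the protocol concrete, here is a minimal Python sketch of the interaction loop described above; the Bernoulli arms and the class/function names are illustrative assumptions, not part of the talk.

import numpy as np

class BernoulliBandit:
    """A K-armed bandit where drawing arm a returns a Bernoulli(mu[a]) reward."""
    def __init__(self, means, seed=None):
        self.means = np.asarray(means, dtype=float)
        self.rng = np.random.default_rng(seed)

    def draw(self, a):
        # one realization of nu_a
        return float(self.rng.random() < self.means[a])

def play(bandit, strategy, n):
    """Bandit game: at each round the strategy picks A_t from the past (arm, reward) pairs."""
    history = []
    for _ in range(n):
        a = strategy(history)            # sampling strategy based on past observations
        x = bandit.draw(a)               # observed reward X_t ~ nu_{A_t}
        history.append((a, x))
    return history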

5 The multiarmed bandit problem The regret minimization problem Our bandit problem: regret minimization
Let µ* = max_a µ_a and a* = argmax_a µ_a. The forecaster wants to maximize the reward accumulated during learning, or equivalently to minimize its regret:
  R_n = n µ* − E[ Σ_{t=1}^n X_t ]
He has to find a sampling strategy (or bandit algorithm) that realizes a tradeoff between exploration and exploitation.
Applications (with arms being Bernoulli random variables): finding the best slot machine in a casino (just for the name!); initial motivation: sequential allocation of medical treatments.
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 5 / 48

6 The multiarmed bandit problem The regret minimization problem A recent motivation for bandits: Online advertisement
Yahoo!(c) has to choose, for each user (indexed by t ∈ N), which of K different advertisements to display on its webpage. Ad number a has an unknown probability of click p_a, and the best advertisement a* = argmax_a p_a is unknown. X_{t,a} ∼ B(p_a): (X_{t,a} = 1) = (user t has clicked on ad a).
Yahoo!(c): chooses ad A_t to display for user number t, observes whether the user has clicked or not: X_{t,A_t}, and wants to maximize the click-through rate.
How should Yahoo!(c) choose ad A_t to display depending on the clicks of the (t − 1) first users?
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 6 / 48

7 Bayesian bandits, frequentist bandits 1 The multiarmed bandit problem 2 Bayesian bandits, frequentist bandits 3 Two Bayesian bandit algorithms Bayes-UCB Thompson Sampling 4 Bayesian algorithms for pure exploration? 5 Conclusion Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 7 / 48

8 Bayesian bandits, frequentist bandits Two probabilistic modellings Two probabilistic models
K independent arms; µ* = µ_{a*} is the highest expected reward.
Frequentist: θ_1, ..., θ_K are unknown parameters; (Y_{a,t})_t is i.i.d. with distribution ν_{θ_a} of mean µ_a.
Bayesian: θ_a ∼ π_a i.i.d.; (Y_{a,t})_t is i.i.d. conditionally on θ_a, with distribution ν_{θ_a}.
At time t, arm A_t is chosen and reward X_t = Y_{A_t,t} is observed.
Two measures of performance:
  - minimize the regret R_n(θ) = E_θ[ Σ_{t=1}^n (µ* − µ_{A_t}) ]
  - minimize the Bayes risk Risk_n = E[ Σ_{t=1}^n (µ* − µ_{A_t}) ] = ∫ R_n(θ) dπ(θ)
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 8 / 48

9 Bayesian bandits, frequentist bandits Two probabilistic models Frequentist tools, Bayesian tools Bandit algorithms based on frequentist tools use: the MLE for the mean parameter of each arm, and confidence intervals for the parameter of each arm. Bandit algorithms based on Bayesian tools use: Π^t = (π_1^t, ..., π_K^t), the current posterior over (θ_1, ..., θ_K). Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 9 / 48

10 Bayesian bandits, frequentist bandits Frequentist tools, Bayesian tools Two probabilistic models Bandit algorithms based on frequentist tools use: the MLE for the mean parameter of each arm, and confidence intervals for the parameter of each arm. Bandit algorithms based on Bayesian tools use: Π^t = (π_1^t, ..., π_K^t), the current posterior over (θ_1, ..., θ_K). One can separate tools and objectives:
  Objective  | Frequentist algorithms | Bayesian algorithms
  Regret     | ?                      | ?
  Bayes risk | ?                      | ?
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 9 / 48

11 Bayesian bandits, frequentist bandits Bayesian algorithm and Bayes risk Bayesian algorithms minimizing Bayes risk
  Objective  | Frequentist algorithms | Bayesian algorithms
  Regret     | ?                      | ?
  Bayes risk | ?                      | ?
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 10 / 48

12 Bayesian bandits, frequentist bandits Bayesian algorithm and Bayes risk MDP formulation of the Bernoulli bandit game
Bernoulli bandits with uniform prior on the means: θ_a = µ_a, with θ_a ∼ U([0, 1]) = Beta(1, 1) i.i.d., so that π_a^t = Beta( S_a(t) + 1, N_a(t) − S_a(t) + 1 ).
The matrix S_t ∈ M_{K,2} summarizes the game: line a gives the parameters of the Beta posterior over arm a, π_a^t.
S_t can be seen as a state in a Markov Decision Process, and the optimal policy is (depending on the criterion)
  argmax_{(A_t)} E[ Σ_{t≥1} γ^{t−1} X_t ]   or   argmax_{(A_t)} E[ Σ_{t=1}^n X_t ]
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 11 / 48
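As a small illustration of the posterior state S_t above, here is a sketch assuming Bernoulli arms and the uniform Beta(1, 1) prior; the class name is made up.

import numpy as np

class BetaPosteriorState:
    """State S_t of the Bernoulli bandit MDP: row a holds the parameters of pi_a^t."""
    def __init__(self, K):
        self.params = np.ones((K, 2))   # uniform prior Beta(1, 1) on every mean

    def update(self, a, reward):
        # Beta(alpha, beta) -> Beta(alpha + x, beta + 1 - x) after observing x in {0, 1}
        self.params[a, 0] += reward
        self.params[a, 1] += 1 - reward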

13 Bayesian bandits, frequentist bandits The Finite-Horizon Gittins algorithm Bayesian algorithm and Bayes risk
Gittins ([1979]) shows that the optimal policy in the discounted case is an index policy: A_t = argmax_a ν_Disc(π_t(a)).
Similarly, in the finite-horizon case (our setting), the optimal policy has an explicit formulation: A_t = argmax_a ν_FH(π_t(a), n − t).
The Finite-Horizon Gittins algorithm minimizes the Bayes risk Risk_n and displays very good performance on frequentist problems!
But... FH-Gittins indices are hard to compute, and the algorithm is heavily horizon-dependent.
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 12 / 48

14 Bayesian bandits, frequentist bandits Frequentist algorithms and regret Frequentist algorithms minimizing regret
  Objective  | Frequentist algorithms | Bayesian algorithms
  Regret     | ?                      | ?
  Bayes risk | ?                      | Finite-Horizon Gittins algorithm
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 13 / 48

15 Bayesian bandits, frequentist bandits Frequentist optimality Asymptotically optimal algorithms in the frequentist setting
With N_a(t) the number of draws of arm a up to time t, R_n(θ) = Σ_{a=1}^K (µ* − µ_a) E_θ[N_a(n)].
[Lai and Robbins, 1985]: every consistent policy satisfies
  µ_a < µ*  ⇒  liminf_{n→∞} E_θ[N_a(n)] / ln n ≥ 1 / KL(ν_{θ_a}, ν_{θ*})
A bandit algorithm is asymptotically optimal if
  µ_a < µ*  ⇒  limsup_{n→∞} E_θ[N_a(n)] / ln n ≤ 1 / KL(ν_{θ_a}, ν_{θ*})
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 14 / 48

16 Bayesian bandits, frequentist bandits The optimism principle A family of frequentist algorithms The following heuristic defines a family of optimistic index policies: For each arm a, compute a confidence interval on the unknown parameter µ_a: µ_a ≤ UCB_a(t) w.h.p. Use the optimism-in-face-of-uncertainty principle: act as if the best possible model was the true model. The algorithm chooses at time t: A_t = argmax_a UCB_a(t). Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 15 / 48

17 Bayesian bandits, frequentist bandits UCB Towards optimal algorithms
Example for Bernoulli rewards: UCB [Auer et al. 02] uses Hoeffding bounds:
  UCB_a(t) = S_a(t)/N_a(t) + sqrt( α log(t) / (2 N_a(t)) )
and one has E[N_a(n)] ≤ K_1 / (2 (µ_a − µ*)²) · ln n + K_2, with K_1 > 1.
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 16 / 48
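A one-line implementation of this Hoeffding-based index (a sketch: it assumes every arm has already been drawn at least once, and alpha is the exploration parameter of the slide):

import numpy as np

def ucb_indices(S, N, t, alpha=2.0):
    """UCB_a(t) = S_a(t)/N_a(t) + sqrt(alpha * log(t) / (2 * N_a(t))), computed for all arms."""
    return S / N + np.sqrt(alpha * np.log(t) / (2.0 * N))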

18 Bayesian bandits, frequentist bandits KL-UCB KL-UCB: an asymptotically optimal frequentist algorithm
Example for Bernoulli rewards: KL-UCB [Cappé et al. 2013] uses the index
  u_a(t) = argmax { x > S_a(t)/N_a(t) : N_a(t) K( S_a(t)/N_a(t), x ) ≤ ln(t) + c ln ln(t) }
with K(p, q) = KL( B(p), B(q) ) = p log(p/q) + (1 − p) log( (1 − p)/(1 − q) ),
and one has E[N_a(n)] ≤ 1 / K(µ_a, µ*) · ln n + C.
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 17 / 48
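The KL-UCB index has no closed form, but since q -> K(p, q) is increasing on [p, 1] it can be found by bisection. A sketch for the Bernoulli case (the function names and tolerances are illustrative, and the ln ln(t) term is clipped at 0 for the very first rounds):

import numpy as np

def kl_bernoulli(p, q, eps=1e-12):
    """K(p, q) = KL(B(p), B(q))."""
    p, q = min(max(p, eps), 1 - eps), min(max(q, eps), 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def kl_ucb_index(s, n, t, c=3.0, tol=1e-6):
    """Largest x >= s/n such that n * K(s/n, x) <= ln(t) + c * ln(ln(t)), by bisection."""
    p = s / n
    threshold = (np.log(t) + c * np.log(max(np.log(t), 1.0))) / n
    lo, hi = p, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if kl_bernoulli(p, mid) <= threshold:
            lo = mid
        else:
            hi = mid
    return lo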

19 Bayesian bandits, frequentist bandits Frequentist algorithms and Bayes risk Frequentist algorithms minimizing Bayes risk
  Objective  | Frequentist algorithms | Bayesian algorithms
  Regret     | KL-UCB                 | ?
  Bayes risk | KL-UCB                 | Finite-Horizon Gittins algorithm
(at least in an asymptotic sense, see [Lai 1987])
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 18 / 48

20 Bayesian bandits, frequentist bandits Bayesian algorithms and regret Bayesian algorithms minimizing regret
  Objective  | Frequentist algorithms | Bayesian algorithms
  Regret     | KL-UCB                 | ?
  Bayes risk | KL-UCB                 | Finite-Horizon Gittins algorithm
We want to design Bayesian algorithms that are optimal with respect to the frequentist regret.
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 19 / 48

21 Two Bayesian bandit algorithms 1 The multiarmed bandit problem 2 Bayesian bandits, frequentist bandits 3 Two Bayesian bandit algorithms Bayes-UCB Thompson Sampling 4 Bayesian algorithms for pure exploration? 5 Conclusion Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 20 / 48

22 Two Bayesian bandit algorithms UCBs versus Bayesian algorithms Bayesian algorithms versus UCBs Figure: Confidence intervals on the arms means after t rounds of a bandit game Figure: Posterior over the means of the arms after t rounds of a bandit game Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 21 / 48

23 Two Bayesian bandit algorithms UCBs versus Bayesian algorithms Bayesian algorithms versus UCBs Figure: Confidence intervals on the arms means after t rounds of a bandit game Figure: Posterior over the means of the arms after t rounds of a bandit game How do we exploit the posterior in a Bayesian bandit algorithm? Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 21 / 48

24 Two Bayesian bandit algorithms Bayes-UCB 1 The multiarmed bandit problem 2 Bayesian bandits, frequentist bandits 3 Two Bayesian bandit algorithms Bayes-UCB Thompson Sampling 4 Bayesian algorithms for pure exploration? 5 Conclusion Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 22 / 48

25 Two Bayesian bandit algorithms The Bayes-UCB algorithm Bayes-UCB
Let Π^0 = (π_1^0, ..., π_K^0) be a prior distribution over (θ_1, ..., θ_K), and Λ^t = (λ_1^t, ..., λ_K^t) be the posterior over the means (µ_1, ..., µ_K) at the end of round t.
The Bayes-UCB algorithm chooses at time t
  A_t = argmax_a Q( 1 − 1/(t (log t)^c), λ_a^{t−1} )
where Q(α, π) is the quantile of order α of the distribution π.
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 23 / 48

26 Two Bayesian bandit algorithms The Bayes-UCB algorithm Bayes-UCB
Let Π^0 = (π_1^0, ..., π_K^0) be a prior distribution over (θ_1, ..., θ_K), and Λ^t = (λ_1^t, ..., λ_K^t) be the posterior over the means (µ_1, ..., µ_K) at the end of round t.
The Bayes-UCB algorithm chooses at time t
  A_t = argmax_a Q( 1 − 1/(t (log t)^c), λ_a^{t−1} )
where Q(α, π) is the quantile of order α of the distribution π.
Bernoulli rewards with uniform prior: θ = µ and Π^t = Λ^t, so
  A_t = argmax_a Q( 1 − 1/(t (log t)^c), Beta( S_a(t) + 1, N_a(t) − S_a(t) + 1 ) )
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 23 / 48
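A sketch of this Bernoulli Bayes-UCB rule using SciPy's Beta quantile function (the function name is made up; S and N are arrays of success counts and draw counts, and log t is clipped away from 0 for the very first rounds):

import numpy as np
from scipy.stats import beta

def bayes_ucb_choice(S, N, t, c=0):
    """Pick argmax_a Q(1 - 1/(t (log t)^c), Beta(S_a + 1, N_a - S_a + 1))."""
    level = 1.0 - 1.0 / (t * max(np.log(t), 1.0) ** c)
    quantiles = beta.ppf(level, S + 1, N - S + 1)
    return int(np.argmax(quantiles))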

27 Two Bayesian bandit algorithms Bayes-UCB Theoretical results for the Bernoulli case Bayes-UCB is asymptotically optimal in this case. Theorem [K., Cappé, Garivier 2012] Let ε > 0. The Bayes-UCB algorithm using a uniform prior over the arms and with parameter c ≥ 5 satisfies
  E_θ[N_a(n)] ≤ (1 + ε) / KL( B(µ_a), B(µ*) ) · log(n) + o_{ε,c}(log(n)).
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 24 / 48

28 Two Bayesian bandit algorithms Link to a frequentist algorithm Bayes-UCB
The Bayes-UCB index is close to the KL-UCB index: ũ_a(t) ≤ q_a(t) ≤ u_a(t), with
  u_a(t) = argmax { x > S_a(t)/N_a(t) : N_a(t) K( S_a(t)/N_a(t), x ) ≤ log(t) + c log(log(t)) }
  ũ_a(t) = argmax { x > S_a(t)/(N_a(t)+1) : (N_a(t)+1) K( S_a(t)/(N_a(t)+1), x ) ≤ log( t/(N_a(t)+2) ) + c log(log(t)) }
Bayes-UCB appears to build automatically confidence intervals based on the Kullback-Leibler divergence, that are adapted to the geometry of the problem in this specific case.
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 25 / 48

29 Two Bayesian bandit algorithms Bayes-UCB Where does it come from? We have a tight bound on the tail of posterior distributions (Beta distributions).
First element: link between the Beta and Binomial distributions (with X_{a,b} ∼ Beta(a, b) and S_{n,x} ∼ Bin(n, x)):
  P( X_{a,b} ≤ x ) = P( S_{a+b−1, 1−x} ≤ b − 1 )
Second element: Sanov inequality: for k > nx,
  e^{−n d(k/n, x)} / (n + 1) ≤ P( S_{n,x} ≥ k ) ≤ e^{−n d(k/n, x)}
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 26 / 48
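The Beta-Binomial link above is easy to check numerically; a quick sketch with illustrative values:

from scipy.stats import beta, binom

a, b, x = 4, 7, 0.3
lhs = beta.cdf(x, a, b)                    # P(X_{a,b} <= x)
rhs = binom.cdf(b - 1, a + b - 1, 1 - x)   # P(S_{a+b-1, 1-x} <= b - 1)
assert abs(lhs - rhs) < 1e-9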

30 Two Bayesian bandit algorithms Thompson Sampling 1 The multiarmed bandit problem 2 Bayesian bandits, frequentist bandits 3 Two Bayesian bandit algorithms Bayes-UCB Thompson Sampling 4 Bayesian algorithms for pure exploration? 5 Conclusion Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 27 / 48

31 Thompson Sampling Two Bayesian bandit algorithms Thompson Sampling A randomized Bayesian algorithm: for all a ∈ {1, ..., K}, draw θ_a(t) ∼ λ_a^t and choose A_t = argmax_a µ(θ_a(t)). (Recent) interest for this algorithm: a very old algorithm [Thompson 1933]; partial analyses proposed [Granmo 2010] [May, Korda, Lee, Leslie 2012]; extensive numerical study beyond the Bernoulli case [Chapelle, Li 2011]; first logarithmic upper bound on the regret [Agrawal, Goyal 2012]. Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 28 / 48
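For Bernoulli arms with a uniform prior this is essentially a two-line algorithm; a sketch (the function name is illustrative):

import numpy as np

def thompson_choice(S, N, rng=None):
    """Sample theta_a ~ Beta(S_a + 1, N_a - S_a + 1) for every arm and play the argmax."""
    rng = np.random.default_rng() if rng is None else rng
    samples = rng.beta(S + 1, N - S + 1)
    return int(np.argmax(samples))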

32 Two Bayesian bandit algorithms Thompson Sampling An optimal regret bound for Bernoulli bandits
Assume the first arm is the unique optimal arm and let Δ_a = µ_1 − µ_a. Known result [Agrawal, Goyal 2012]:
  E[R_n] ≤ C ( Σ_{a=2}^K 1/Δ_a ) ln(n) + o_µ(ln(n))
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 29 / 48

33 Two Bayesian bandit algorithms Thompson Sampling An optimal regret bound for Bernoulli bandits
Assume the first arm is the unique optimal arm and let Δ_a = µ_1 − µ_a. Known result [Agrawal, Goyal 2012]:
  E[R_n] ≤ C ( Σ_{a=2}^K 1/Δ_a ) ln(n) + o_µ(ln(n))
Our improvement [K., Korda, Munos 2012]: Theorem. For all ε > 0,
  E[R_n] ≤ (1 + ε) ( Σ_{a=2}^K Δ_a / KL( B(µ_a), B(µ_1) ) ) ln(n) + o_{µ,ε}(ln(n))
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 29 / 48

34 Two Bayesian bandit algorithms Thompson Sampling Two key elements in the proof
Introduce a quantile to replace the sample:
  q_a(t) := Q( 1 − 1/(t ln(n)), π_a^t )   such that   Σ_{t=1}^n P( θ_a(t) > q_a(t) ) ≤ 2
and use what we know about quantiles (cf. Bayes-UCB).
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 30 / 48

35 Two Bayesian bandit algorithms Two key elements in the proof Thompson Sampling
Introduce a quantile to replace the sample:
  q_a(t) := Q( 1 − 1/(t ln(n)), π_a^t )   such that   Σ_{t=1}^n P( θ_a(t) > q_a(t) ) ≤ 2
and use what we know about quantiles (cf. Bayes-UCB).
Prove separately that the optimal arm has to be drawn a lot. Proposition: there exist constants b = b(µ) ∈ (0, 1) and C_b < ∞ such that
  Σ_{t=1}^∞ P( N_1(t) ≤ t^b ) ≤ C_b.
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 30 / 48

36 Two Bayesian bandit algorithms Thompson Sampling Thompson Sampling in Exponential Family bandits
Arm a has distribution ν_{θ_a} with density f(x | θ_a) = A(x) exp( T(x) θ_a − F(θ_a) ).
The Jeffreys prior on arm a is π_J(θ) ∝ sqrt( F''(θ) ).
Practical implementation of TS with the Jeffreys prior:
  Name            | Distribution                             | Prior on λ      | Posterior on λ
  B(λ)            | λ^x (1−λ)^{1−x} δ_{0,1}                  | Beta(1/2, 1/2)  | Beta(s + 1/2, n − s + 1/2)
  N(λ, σ²)        | (1/sqrt(2πσ²)) exp( −(x−λ)²/(2σ²) )      | 1               | N(s/n, σ²/n)
  Γ(k, λ)         | λ^k x^{k−1} e^{−λx} / Γ(k) · 1_[0,+∞)    | 1/λ             | Γ(kn, s)
  Pareto(x_m, λ)  | λ x_m^λ / x^{λ+1} · 1_[x_m,+∞)           | 1/λ             | Γ(n + 1, s − n log x_m)
Posterior after n observations y_1, ..., y_n, with s = Σ_{i=1}^n T(y_i).
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 31 / 48
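As an illustration of one row of this table, here is a sketch of Thompson Sampling for Gaussian rewards with known variance σ², where the flat prior on the mean gives the posterior N(s_a/n_a, σ²/n_a); the function name is illustrative and every arm is assumed to have been drawn at least once.

import numpy as np

def ts_gaussian_flat_prior(sums, N, sigma, rng=None):
    """Sample each arm's mean from its posterior N(sums_a / N_a, sigma^2 / N_a), play the argmax."""
    rng = np.random.default_rng() if rng is None else rng
    samples = rng.normal(sums / N, sigma / np.sqrt(N))
    return int(np.argmax(samples))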

37 Two Bayesian bandit algorithms Thompson Sampling Thompson Sampling in Exponential Family bandits Theorem [Korda, K., Munos 13] If the rewards distributions belong to a 1-dimensional canonical exponential family, Thompson Sampling with the Jeffreys prior π_J satisfies
  lim_{n→∞} E[N_a(n)] / ln n = 1 / KL( ν_{θ_a}, ν_{θ_{a*}} ).
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 32 / 48

38 Two Bayesian bandit algorithms Thompson Sampling Thompson Sampling in Exponential Family bandits An idea of the proof
Let θ_a(t) be a sample from the posterior π_a(t) on θ_a; TS samples at round t the arm A_t = argmax_a θ_a(t). Introduce the events
  E_{a,t} = { | (1/N_a(t)) Σ_{s=1}^{N_a(t)} T(Y_{a,s}) − F'(θ_a) | ≤ δ_a },   E^θ_{a,t} = { µ(θ_a(t)) ≤ µ_a + Δ_a }
and decompose
  E[N_a(n)] = Σ_{t=1}^n P( A_t = a, E_{a,t}, E^θ_{a,t} ) + Σ_{t=1}^n P( A_t = a, E_{a,t}, (E^θ_{a,t})^c ) + Σ_{t=1}^n P( A_t = a, E^c_{a,t} )
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 33 / 48

39 Two Bayesian bandit algorithms Thompson Sampling Thompson Sampling in Exponential Family bandits Two key ingredients
Theorem (posterior concentration): there exist two constants C_{1,a}, C_{2,a} such that
  P( (E^θ_{a,t})^c | F_t ) 1_{E_{a,t}} ≤ C_{1,a} N_{a,t} e^{ −N_{a,t} (1 − δ_a C_{2,a}) KL( ν_{θ_a}, ν_{µ^{−1}(µ_a + Δ_a)} ) }
Proposition (number of draws of the optimal arm): for every b ∈ ]0, 1[, there exists C_b < ∞ such that
  Σ_{t=1}^∞ P( N_1(t) ≤ t^b ) ≤ C_b.
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 34 / 48

40 Two Bayesian bandit algorithms Thompson Sampling Understanding the deviation result
Recall the result: for every b ∈ ]0, 1[ there exists a constant C_b < ∞ such that Σ_{t=1}^∞ P( N_1(t) ≤ t^b ) ≤ C_b.
Where does it come from? { N_1(t) ≤ t^b } = { there exists a time range of length at least t^{1−b} − 1 with no draw of arm 1 }
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 35 / 48

41 Two Bayesian bandit algorithms Thompson Sampling
[Figure: levels µ_2, µ_2 + δ and µ* illustrating the argument below]
Assume that on I_j = [τ_j, τ_j + t^{1−b} − 1] there is no draw of arm 1, and that there exists J_j ⊂ I_j such that for all s ∈ J_j and all a ≠ 1, µ(θ_a(s)) ≤ µ_2 + δ. Then for all s ∈ J_j, µ(θ_1(s)) ≤ µ_2 + δ. This only happens with small probability.
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 36 / 48

42 Two Bayesian bandit algorithms Go Bayesian! Why use Bayesian algorithms in the frequentist setting?
[Figure: regret as a function of time for UCB, KL-UCB, Bayes-UCB and Thompson Sampling, in a ten-arm Bernoulli bandit problem with low rewards, horizon n = 20000, averaged over N trials.]
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 37 / 48

43 Two Bayesian bandit algorithms Go Bayesian! Why use Bayesian algorithms in the frequentist setting?
In the Bernoulli case, for each arm, KL-UCB requires solving an optimization problem:
  u_a(t) = argmax { x > S_a(t)/N_a(t) : N_a(t) K( S_a(t)/N_a(t), x ) ≤ ln(t) + c ln ln(t) }
Bayes-UCB requires computing one quantile of a Beta distribution. Thompson Sampling requires computing one sample of a Beta distribution.
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 38 / 48

44 Two Bayesian bandit algorithms Go Bayesian! Why use Bayesian algorithms in the frequentist setting?
In the Bernoulli case, for each arm, KL-UCB requires solving an optimization problem:
  u_a(t) = argmax { x > S_a(t)/N_a(t) : N_a(t) K( S_a(t)/N_a(t), x ) ≤ ln(t) + c ln ln(t) }
Bayes-UCB requires computing one quantile of a Beta distribution. Thompson Sampling requires computing one sample of a Beta distribution.
Other advantages of Bayesian algorithms: they easily generalize to more complex models... even when the posterior is not directly computable (using MCMC), and the prior can incorporate correlation between arms.
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 38 / 48

45 Bayesian algorithms for pure exploration? 1 The multiarmed bandit problem 2 Bayesian bandits, frequentist bandits 3 Two Bayesian bandit algorithms Bayes-UCB Thompson Sampling 4 Bayesian algorithms for pure exploration? 5 Conclusion Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 39 / 48

46 Bayesian algorithms for pure exploration? Setup and notations One bandit model, two bandit problems
Recall that a bandit model is simply a set of K unknown distributions. Assume µ_1 ≥ ... ≥ µ_m > µ_{m+1} ≥ ... ≥ µ_K, and let S*_m = {1, ..., m} denote the set of m best arms.
We have seen so far one bandit problem: regret minimization. We introduce here another bandit problem, pure exploration: the forecaster has to find the set of m best arms, using as few observations of the arms as possible, but without suffering a loss when drawing a bad arm.
The forecaster: draws the arms according to an exploration strategy, stops at time τ (stopping strategy) and recommends a set S of m arms.
His goal: P( S = S*_m ) ≥ 1 − δ and E[τ] as small as possible.
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 40 / 48

47 Bayesian algorithms for pure exploration? Algorithms for finding the m best KL-LUCB: an algorithm for finding the m best arms
At round t, the KL-LUCB algorithm ([K., Kalyanakrishnan, 13]) draws two well-chosen arms, u_t and l_t, stops when the confidence intervals for arms in Ŝ_m(t) and in (Ŝ_m(t))^c are separated, and recommends the set Ŝ_m(τ) of the m empirically best arms (a sketch of such a round is given below).
[Figure (K = 6, m = 3): set Ŝ_m(t) with arm l_t in bold; set (Ŝ_m(t))^c with arm u_t in bold.]
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 41 / 48
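A sketch of the round structure of such an LUCB-type algorithm, under my reading of the slide: the helper callables lower_bound/upper_bound stand for the KL-confidence bounds of the next slide, and all names are illustrative.

import numpy as np

def lucb_round(means_hat, N, m, lower_bound, upper_bound):
    """One round: return the two arms (l_t, u_t) to draw and whether to stop.
    means_hat, N: empirical means and draw counts; m: number of arms to identify."""
    top_m = np.argsort(-means_hat)[:m]                          # current empirical best m arms
    rest = np.setdiff1d(np.arange(len(means_hat)), top_m)
    l_t = top_m[np.argmin([lower_bound(a) for a in top_m])]     # weakest arm of the top set
    u_t = rest[np.argmax([upper_bound(a) for a in rest])]       # strongest challenger outside it
    stop = lower_bound(l_t) > upper_bound(u_t)                  # confidence intervals separated
    return l_t, u_t, stop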

48 Bayesian algorithms for pure exploration? Algorithms for finding the m best Bayesian algorithms for finding the m best?
KL-LUCB uses KL-confidence intervals:
  L_a(t) = min { q ≤ p̂_a(t) : N_a(t) K( p̂_a(t), q ) ≤ β(t, δ) }
  U_a(t) = max { q ≥ p̂_a(t) : N_a(t) K( p̂_a(t), q ) ≤ β(t, δ) }
We use β(t, δ) = log( k_1 K t^α / δ ) to make sure that P( S = S*_m ) ≥ 1 − δ.
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 42 / 48

49 Bayesian algorithms for pure exploration? Algorithms for finding the m best Bayesian algorithms for finding the m best?
KL-LUCB uses KL-confidence intervals:
  L_a(t) = min { q ≤ p̂_a(t) : N_a(t) K( p̂_a(t), q ) ≤ β(t, δ) }
  U_a(t) = max { q ≥ p̂_a(t) : N_a(t) K( p̂_a(t), q ) ≤ β(t, δ) }
We use β(t, δ) = log( k_1 K t^α / δ ) to make sure that P( S = S*_m ) ≥ 1 − δ.
How to propose a Bayesian algorithm that adapts to δ?
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 42 / 48
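These KL intervals can again be computed by bisection, since q -> K(p̂, q) increases as q moves away from p̂ on either side. A sketch for the Bernoulli case (names and tolerance are illustrative):

import numpy as np

def kl_ber(p, q, eps=1e-12):
    p, q = min(max(p, eps), 1 - eps), min(max(q, eps), 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def kl_confidence_interval(p_hat, n_draws, beta_val, tol=1e-7):
    """[L_a(t), U_a(t)]: the extreme q with n_draws * K(p_hat, q) <= beta_val, by bisection."""
    # upper end U_a(t): search on [p_hat, 1]
    lo, hi = p_hat, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if n_draws * kl_ber(p_hat, mid) <= beta_val else (lo, mid)
    upper = lo
    # lower end L_a(t): search on [0, p_hat]
    lo, hi = 0.0, p_hat
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        lo, hi = (lo, mid) if n_draws * kl_ber(p_hat, mid) <= beta_val else (mid, hi)
    lower = hi
    return lower, upper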

50 Conclusion 1 The multiarmed bandit problem 2 Bayesian bandits, frequentist bandits 3 Two Bayesian bandit algorithms Bayes-UCB Thompson Sampling 4 Bayesian algorithms for pure exploration? 5 Conclusion Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 43 / 48

51 Conclusion Conclusion for regret minimization Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 44 / 48

52 Work in progress Conclusion Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 45 / 48

53 Conclusion Conclusion Regret minimization: Go Bayesian! Bayes-UCB shows striking similarities with KL-UCB. Thompson Sampling is an easy-to-implement alternative to the optimistic approach. Both algorithms are asymptotically optimal with respect to the frequentist regret (and more efficient in practice) in the Bernoulli case. Thompson Sampling with the Jeffreys prior is asymptotically optimal when rewards belong to a one-dimensional exponential family, which matches the guarantees of the KL-UCB algorithm. Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 46 / 48

54 Conclusion Conclusion Regret minimization: Go Bayesian! Bayes-UCB shows striking similarities with KL-UCB. Thompson Sampling is an easy-to-implement alternative to the optimistic approach. Both algorithms are asymptotically optimal with respect to the frequentist regret (and more efficient in practice) in the Bernoulli case. Thompson Sampling with the Jeffreys prior is asymptotically optimal when rewards belong to a one-dimensional exponential family, which matches the guarantees of the KL-UCB algorithm. Natural open question: can Bayesian tools be used to build efficient algorithms for the pure-exploration objective? Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 46 / 48

55 References of the talk References (1/2)
S. Agrawal and N. Goyal. Analysis of Thompson Sampling for the multi-armed bandit problem. COLT 2012.
O. Cappé, A. Garivier, O. Maillard, R. Munos, and G. Stoltz. Kullback-Leibler upper confidence bounds for optimal sequential allocation. Annals of Statistics, 2013.
J.C. Gittins. Bandit processes and dynamic allocation indices. Journal of the Royal Statistical Society, Series B, 1979.
E. Kaufmann, O. Cappé, and A. Garivier. On Bayesian upper confidence bounds for bandit problems. AISTATS 2012.
E. Kaufmann, N. Korda, and R. Munos. Thompson Sampling: an asymptotically optimal finite-time analysis. ALT 2012.
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 47 / 48

56 References of the talk References (2/2)
E. Kaufmann and S. Kalyanakrishnan. Information Complexity in Bandit Subset Selection. COLT 2013.
N. Korda, E. Kaufmann, and R. Munos. Thompson Sampling for one-dimensional exponential families. To appear in NIPS 2013.
T. Lai. Adaptive treatment allocation and the multi-armed bandit problem. Annals of Statistics, 1987.
T. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 1985.
W. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 1933.
Emilie Kaufmann (Telecom ParisTech) Bayesian and Frequentist Bandits BIP, 24/10/13 48 / 48
