Subsampling, Concentration and Multi-armed bandits


1 Subsampling, Concentration and Multi-armed bandits. Odalric-Ambrym Maillard, R. Bardenet, S. Mannor, A. Baransi, N. Galichet, J. Pineau, A. Durand. Toulouse, November 09, 2015.

2 Roadmap.
1 Sub-sampling concentration: 1.1 Hoeffding-Serfling, 1.2 Bernstein-Serfling and 1.3 empirical Bernstein-Serfling bounds.
2 Sub-sampling for stochastic multi-armed bandits: 2.1 "Best empirical sub-sampled arm" strategy, 2.2 illustrative experiments, 2.3 cumulative regret bound and extensions.

3 Sub-sampling concentration: Introduction. "Concentration inequalities for sampling without replacement", Bardenet and Maillard, Bernoulli, 2015.

4 Sub-sampling. $X = (x_1, \ldots, x_N)$, a finite population of $N$ real points. Sub-sample of size $n \leq N$ from $X$: $X_1, \ldots, X_n$ picked uniformly at random without replacement from $X$. [diagram: the population $x_1, \ldots, x_N$ with the sub-sample $X_1, \ldots, X_n$ highlighted]
Simple problem: approximating the population mean $\mu = \frac{1}{N}\sum_{i=1}^N x_i$. Concentration for partial sums of $X_1, \ldots, X_n$. Careful: dependency.
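A minimal numerical illustration of this setting (not from the slides; the population, its size and the sub-sample size are arbitrary choices): draw a sub-sample without replacement and compare its mean to the population mean $\mu$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed finite population of N real points (arbitrary choice for illustration).
N = 1000
x = rng.lognormal(mean=1.0, sigma=1.0, size=N)
mu = x.mean()  # population mean we want to approximate

# Sub-sample of size n <= N drawn uniformly at random WITHOUT replacement.
n = 100
sub = rng.choice(x, size=n, replace=False)

print(f"population mean mu = {mu:.4f}")
print(f"sub-sample mean    = {sub.mean():.4f}")
```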

5 Hoeffding's reduction lemma.
Lemma (Hoeffding, 1963). Let $X = (x_1, \ldots, x_N)$ be a finite population of $N$ real points, let $X_1, \ldots, X_n$ denote a random sample without replacement from $X$ and let $Y_1, \ldots, Y_n$ denote a random sample with replacement from $X$. If $f : \mathbb{R} \to \mathbb{R}$ is continuous and convex, then
$\mathbb{E}\, f\!\left(\sum_{i=1}^n X_i\right) \leq \mathbb{E}\, f\!\left(\sum_{i=1}^n Y_i\right)$.
From sampling with to without replacement: we can thus transfer some results for sampling with replacement to the case of sampling without replacement (via Chernoff).

6 Comparing bounds on $P\!\left(\frac{1}{n}\sum_{i=1}^n X_i - \mu \geq 10^{-2}\right)$. [figure: probability bound vs. $n$ for the empirical estimate and the Hoeffding, Bernstein, Hoeffding-Serfling and Bernstein-Serfling bounds, on four populations: (a) Gaussian $\mathcal{N}(0,1)$, (b) log-normal $\ln\mathcal{N}(1,1)$, (c) Bernoulli $\mathcal{B}(0.1)$, (d) Bernoulli $\mathcal{B}(0.5)$]

7 Serfling's key observation. For $1 \leq k \leq N$ (considering fictitious $X_{n+1}, \ldots, X_N$), define
$Z_k = \frac{1}{k}\sum_{t=1}^k (X_t - \mu)$ and $\bar{Z}_k = \frac{1}{N-k}\sum_{t=1}^k (X_t - \mu)$.  (1)
Lemma (Serfling, 1974). The following forward martingale structure holds for $\{\bar{Z}_k\}_k$: $\mathbb{E}[\bar{Z}_k \mid \bar{Z}_{k-1}, \ldots, \bar{Z}_1] = \bar{Z}_{k-1}$. The following reverse martingale structure holds for $\{Z_k\}_k$: $\mathbb{E}[Z_k \mid Z_{k+1}, \ldots, Z_{N-1}] = Z_{k+1}$.
$\Rightarrow$ Structured dependency.
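A quick numerical check of the forward martingale step for $\bar{Z}_k$ (a sketch on an arbitrary small population): averaging $\bar{Z}_k$ over every possible choice of the $k$-th draw, given a fixed prefix, recovers $\bar{Z}_{k-1}$ exactly.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=20)          # small arbitrary population
N, mu = len(x), x.mean()

# Fix a prefix X_1, ..., X_{k-1} of a without-replacement sample.
k = 6
perm = rng.permutation(N)
prefix_idx, rest_idx = perm[:k - 1], perm[k - 1:]

def z_bar(values, k):
    # \bar Z_k = (1/(N-k)) * sum_{t<=k} (X_t - mu), with exactly k values passed in.
    return (values - mu).sum() / (N - k)

z_prev = z_bar(x[prefix_idx], k - 1)
# Average \bar Z_k over the uniformly random choice of the k-th point:
# this computes E[\bar Z_k | X_1, ..., X_{k-1}] exactly.
z_next = np.mean([z_bar(np.append(x[prefix_idx], x[j]), k) for j in rest_idx])
print(z_prev, z_next)            # equal up to floating-point error
```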

8 A useful result.
Theorem (Serfling, 1974). Let $a = \min_{1\leq i\leq N} x_i$ and $b = \max_{1\leq i\leq N} x_i$. Then for all $\lambda \in \mathbb{R}^+$ it holds
$\log \mathbb{E} \exp(\lambda n Z_n) \leq \frac{(b-a)^2}{8}\, \lambda^2\, n \left(1 - \frac{n-1}{N}\right)$.
Moreover,
$\log \mathbb{E} \exp\!\left(\lambda \max_{1\leq k\leq n} \bar{Z}_k\right) \leq \frac{(b-a)^2}{8}\, \frac{\lambda^2\, n}{(N-n)^2} \left(1 - \frac{n-1}{N}\right)$.

9 A useful result.
Theorem (Bardenet and Maillard, 2015). Let $a = \min_{1\leq i\leq N} x_i$ and $b = \max_{1\leq i\leq N} x_i$. Then for all $\lambda \in \mathbb{R}^+$ it also holds
$\log \mathbb{E} \exp(\lambda n Z_n) \leq \frac{(b-a)^2}{8}\, \lambda^2\, (n+1) \left(1 - \frac{n}{N}\right)$.
Moreover,
$\log \mathbb{E} \exp\!\left(\lambda \max_{1\leq k\leq n} \bar{Z}_k\right) \leq \frac{(b-a)^2}{8}\, \frac{\lambda^2\, n}{(N-n)^2} \left(1 - \frac{n-1}{N}\right)$,
$\log \mathbb{E} \exp\!\left(\lambda \max_{n\leq k\leq N-1} Z_k\right) \leq \frac{(b-a)^2}{8}\, \frac{\lambda^2\, (n+1)}{n^2} \left(1 - \frac{n}{N}\right)$.
(Slight) improvement when $n > N/2$.

10 A slightly improved Hoeffding-Serfling inequality. Trivial corollary:
Corollary (Bardenet and Maillard, 2015). For all $n \leq N$ and $\delta \in [0,1]$, with probability higher than $1-\delta$ it holds
$\frac{1}{n}\sum_{t=1}^n (X_t - \mu) \leq (b-a)\sqrt{\frac{\rho_n \log(1/\delta)}{2n}}$,
where we define
$\rho_n = 1 - \frac{n-1}{N}$ if $n \leq N/2$, and $\rho_n = \left(1 - \frac{n}{N}\right)\left(1 + \frac{1}{n}\right)$ if $n > N/2$.  (2)
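A small sketch of how this corollary can be evaluated numerically (function names and example values are mine, not from the slides): the factor $\rho_n$ shrinks the classical Hoeffding radius, noticeably so once $n$ is a large fraction of $N$.

```python
import numpy as np

def rho_n(n, N):
    # Without-replacement factor from the Hoeffding-Serfling corollary.
    if n <= N / 2:
        return 1.0 - (n - 1) / N
    return (1.0 - n / N) * (1.0 + 1.0 / n)

def hoeffding_serfling_radius(n, N, b_minus_a, delta):
    # (1/n) sum_t (X_t - mu) <= this radius with probability >= 1 - delta.
    return b_minus_a * np.sqrt(rho_n(n, N) * np.log(1.0 / delta) / (2.0 * n))

def hoeffding_radius(n, b_minus_a, delta):
    # Classical Hoeffding radius for comparison (no finite-population correction).
    return b_minus_a * np.sqrt(np.log(1.0 / delta) / (2.0 * n))

# Example: the correction matters most when n is a large fraction of N.
N, delta = 1000, 0.05
for n in (50, 500, 900):
    print(n, hoeffding_radius(n, 1.0, delta), hoeffding_serfling_radius(n, N, 1.0, delta))
```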

11 Sub-sampling concentration Bernstein-Serfling

12 Towards Bernstein-Serfling's inequality. Let $\sigma^2 = \frac{1}{N}\sum_{i=1}^N (x_i - \mu)^2$, and define
$Q_{k-1} = \frac{1}{N-k+1}\sum_{i=1}^{k-1}\left((X_i - \mu)^2 - \sigma^2\right)$, $\bar{Q}_{k+1} = \frac{1}{k+1}\sum_{i=1}^{k+1}\left((X_i - \mu)^2 - \sigma^2\right)$.
Lemma (Bardenet and Maillard, 2015). $\mathbb{E}\left[(X_k - \mu)^2 \mid Z_1, \ldots, Z_{k-1}\right] = \sigma^2 - Q_{k-1}$, where the $Z_i$'s are defined in (1). Likewise, $\mathbb{E}\left[(X_{k+1} - \mu)^2 \mid Z_{k+1}, \ldots, Z_{N-1}\right] = \sigma^2 + \bar{Q}_{k+1}$.

13 A Bernstein-Serfling inequality.
Corollary (Bardenet and Maillard, 2015). Let $n \leq N$ and $\delta \in [0,1]$. With probability larger than $1-2\delta$, it holds that
$\frac{1}{n}\sum_{t=1}^n (X_t - \mu) \leq \sigma\sqrt{\frac{2\rho_n \log(1/\delta)}{n}} + \frac{\kappa_n (b-a)\log(1/\delta)}{n}$,
where
$\rho_n = 1 - f_{n-1}$ if $n \leq N/2$, $\rho_n = (1 - f_n)(1 + 1/n)$ if $n > N/2$,
$\kappa_n = \frac{4}{3} + \sqrt{f_n/g_{n-1}}$ if $n \leq N/2$, $\kappa_n = \frac{4}{3} + \sqrt{g_{n+1}(1 - f_n)}$ if $n > N/2$,  (3)
with $f_n = n/N$ and $g_n = N/n - 1$.
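A sketch of the corresponding deviation radius (an illustrative implementation, assuming the constants displayed above; $\kappa_n$ is passed in rather than recomputed, since only its order of magnitude matters here, and the example values are arbitrary).

```python
import numpy as np

def bernstein_serfling_radius(n, N, sigma, b_minus_a, delta, kappa_n):
    """Deviation radius of the Bernstein-Serfling corollary (sketch).

    sigma is the population standard deviation; kappa_n is the second-order
    constant of the corollary, supplied by the caller."""
    rho = 1.0 - (n - 1) / N if n <= N / 2 else (1.0 - n / N) * (1.0 + 1.0 / n)
    return sigma * np.sqrt(2.0 * rho * np.log(1.0 / delta) / n) \
        + kappa_n * b_minus_a * np.log(1.0 / delta) / n

# Example with arbitrary values: for a low-variance population (sigma << b - a)
# the first term dominates and the bound improves on Hoeffding-Serfling.
print(bernstein_serfling_radius(n=200, N=1000, sigma=0.1, b_minus_a=1.0,
                                delta=0.05, kappa_n=2.0))
```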

14 A Bernstein-Serfling inequality. Improvement over Bernstein: the factor $\rho_n$ can give a dramatic improvement. Proof elements: self-bounded property of the variance: study of $Z = \frac{1}{(b-a)^2}\sum_{i=1}^n (X_i - \mu)^2$ (cf. Maurer and Pontil, 2006; via the tensorization inequality for the entropy), plus Hoeffding's reduction lemma.

15 Towards an empirical Bernstein-Serfling inequality.
$\hat{\sigma}_n^2 = \frac{1}{n}\sum_{i=1}^n (X_i - \hat{\mu}_n)^2 = \frac{1}{2n^2}\sum_{i,j=1}^n (X_i - X_j)^2$, where $\hat{\mu}_n = \frac{1}{n}\sum_{i=1}^n X_i$.
Lemma (Bardenet and Maillard, 2015). When sampling without replacement from a finite population $X = (x_1, \ldots, x_N)$ of size $N$, with range $[a,b]$ and variance $\sigma^2$, the empirical variance $\hat{\sigma}_n^2$ using $n < N$ samples satisfies
$P\!\left(\sigma \geq \hat{\sigma}_n + (b-a)\sqrt{\frac{\rho_n \log(3/\delta)}{2n}}\right) \leq \delta$.
Possible improvement. Conjecture: replace $\rho_n$ with $4\rho_n$. Difficulty: concentration for self-bounded random variables when sampling without replacement.
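The two expressions for $\hat{\sigma}_n^2$ are the same quantity; a quick numerical check of the identity (arbitrary sample):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(size=8)   # any sample works for the identity
n = len(x)

var_centered = np.mean((x - x.mean()) ** 2)                       # (1/n) sum (X_i - mu_hat)^2
var_pairwise = ((x[:, None] - x[None, :]) ** 2).sum() / (2 * n ** 2)

print(var_centered, var_pairwise)  # identical up to floating-point error
```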

16 An empirical Bernstein-Serfling inequality.
Corollary (Bardenet and Maillard, 2015). For all $\delta \in [0,1]$, with probability larger than $1-5\delta$, it holds
$\frac{1}{n}\sum_{t=1}^n (X_t - \mu) \leq \hat{\sigma}_n\sqrt{\frac{2\rho_n \log(1/\delta)}{n}} + \frac{\kappa (b-a)\log(1/\delta)}{n}$,
where we recall the definition of $\rho_n$:
$\rho_n = 1 - \frac{n-1}{N}$ if $n \leq N/2$, $\rho_n = \left(1 - \frac{n}{N}\right)\left(1 + \frac{1}{n}\right)$ if $n > N/2$,
and $\kappa$ is a numerical constant.

17 Serfling bounds. [figure: inverted bound vs. $n$ for the Hoeffding-Serfling, Bernstein-Serfling and empirical Bernstein-Serfling bounds, on four populations: (e) Gaussian $\mathcal{N}(0,1)$, (f) log-normal $\ln\mathcal{N}(1,1)$, (g) Bernoulli $\mathcal{B}(0.1)$, (h) Bernoulli $\mathcal{B}(0.5)$]

18 Sub-sampling recap. What we did: an improved Hoeffding-Serfling bound, new Bernstein-Serfling and empirical Bernstein-Serfling bounds; improvement over Hoeffding's reduction due to $\rho_n$. Improvement/open question: a tensorization inequality for the entropy in the case of sampling without replacement? Would lead to: $\rho_n$ replaced with $4\rho_n$.

19 Sub-sampling Bandits: Introduction. "Sub-sampling for multi-armed bandits", Baransi, Maillard and Mannor, ECML 2014.

20 Stochastic multi-armed bandit setting.
Setting: a set of choices $A$. Each $a \in A$ is associated with an unknown probability distribution $\nu_a \in D$ with mean $\mu_a$. At each round $t = 1, \ldots, T$ the player first picks an arm $A_t \in A$ based on past observations, then receives (and sees) a stochastic payoff $X_t \sim \nu_{A_t}$.
Goal and performance: minimize the regret at round $T$:
$R_T \stackrel{\mathrm{def}}{=} \mathbb{E}\left[T\mu^\star - \sum_{t=1}^T X_t\right] = \sum_{a \in A} (\mu^\star - \mu_a)\, \mathbb{E}\left[N^\pi_{T,a}\right]$,
where $\mu^\star = \max\{\mu_a : a \in A\}$, $a^\star \in \mathrm{argmax}\{\mu_a : a \in A\}$ and $N^\pi_{T,a} = \sum_{t=1}^T \mathbb{I}\{A_t = a\}$.
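A minimal simulation sketch of this setting and of the regret decomposition $R_T = \sum_a (\mu^\star - \mu_a)\,\mathbb{E}[N^\pi_{T,a}]$ (the interface, the Bernoulli instance and the random baseline policy are illustrative choices, not from the slides).

```python
import numpy as np

def run_bandit(policy, means, T, rng):
    """Play T rounds of a Bernoulli bandit; return pull counts per arm.

    policy(history) receives the list of (arm, reward) pairs so far
    and returns an arm index."""
    history, counts = [], np.zeros(len(means), dtype=int)
    for t in range(T):
        a = policy(history)
        r = float(rng.random() < means[a])
        history.append((a, r))
        counts[a] += 1
    return counts

# Regret estimated from one run of a uniformly random policy,
# via R_T = sum_a (mu* - mu_a) * N_{T,a}.
rng = np.random.default_rng(0)
means = np.array([0.1, 0.05, 0.02])
counts = run_bandit(lambda h: rng.integers(len(means)), means, T=10_000, rng=rng)
regret = np.sum((means.max() - means) * counts)
print(counts, regret)
```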

21 Lower performance bound.
Theorem (Burnetas and Katehakis, 1996). For any strategy $\pi$ that is consistent (for any bandit problem, any sub-optimal arm $a$ and any $\beta > 0$, it holds $\mathbb{E}[N^\pi_{T,a}] = o(T^\beta)$), and any $D \subset \mathcal{P}([0,1])$,
$\liminf_{T\to\infty} \frac{R_T}{\log T} \geq \sum_{a :\, \mu_a < \mu^\star} \frac{\mu^\star - \mu_a}{K_{\inf}(\nu_a, \mu^\star)}$,
where $K_{\inf}(\nu_a, \mu^\star) \stackrel{\mathrm{def}}{=} \inf\{\mathrm{KL}(\nu_a, \nu) : \nu \in D \text{ has mean} > \mu^\star\}$.
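For Bernoulli arms, $K_{\inf}(\nu_a, \mu^\star)$ reduces to the binary KL divergence $\mathrm{kl}(\mu_a, \mu^\star)$, so the constant in the lower bound can be computed explicitly; a small sketch (the instance below is arbitrary):

```python
import numpy as np

def kl_bernoulli(p, q):
    """kl(p, q): KL divergence between Bernoulli(p) and Bernoulli(q)."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

# For Bernoulli arms K_inf(nu_a, mu*) = kl(mu_a, mu*), so the bound reads
#   liminf R_T / log T  >=  sum_{a: mu_a < mu*} (mu* - mu_a) / kl(mu_a, mu*).
mu_star, sub_means = 0.1, [0.05, 0.02, 0.01]
constant = sum((mu_star - mu) / kl_bernoulli(mu, mu_star) for mu in sub_means)
print(f"asymptotic regret constant (per log T): {constant:.2f}")
```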

22 Optimality. Class of optimal algorithms: confidence-bound methods, e.g. KL-UCB (Lai and Robbins, 1985); Bayesian methods, e.g. Thompson Sampling (Thompson, 1933); sub-sampling? Provably optimal finite-time regret for some $D$: discrete or exponential families of dimension 1. They need to know $D$ in order to be optimal: a different algorithm for each $D$ (TS or KL-UCB for Bernoulli, for Poisson, for Exponential, etc.).

23 Puzzling experiments ($T = 20{,}000$, 50,000 replicates). 10-arm Bernoulli problem $(0.1,\, 3\times\{0.05\},\, 3\times\{0.02\},\, 3\times\{0.01\})$. Fraction of runs in which each competitor beats BESA: kl-UCB 1.6%, kl-UCB+ 35.4%, TS 3.1%. Relative run times reported: 13.9X, 2.8X, 3.1X, X. Others compared: UCB, MOSS, UCB-Tuned, DMED, UCB-V. [figure: regret vs. time for BESA, KL-UCB, KL-UCB+, Thompson] (Credit: Akram Baransi)

24 Puzzling experiments ($T = 20{,}000$, 50,000 replicates). Exponential problem $(\frac{1}{5}, \frac{1}{4}, \frac{1}{3}, \frac{1}{2}, 1)$. Fraction of runs in which each competitor beats BESA: KL-UCB-exp 5.7%, UCB-tuned 4.3%. Relative run times reported: 6X, 2.8X, X. Others compared: UCB, MOSS, kl-UCB, UCB-V. [figure: regret vs. time for BESA, BESAT, KLUCBexp, UCBtuned] (Credit: Akram Baransi)

25 Puzzling experiments ($T = 20{,}000$, 50,000 replicates). Poisson problem $(\{i/3\}_{i=1,\ldots,6})$. Fraction of runs in which each competitor beats BESA: KL-UCB-Poisson 4.1%, kl-UCB 0.7%. Relative run times reported: 3.5X, 1.2X, X. [figure: regret vs. time for BESA, BESAT, KLUCBpoisson, KLUCB] (Credit: Akram Baransi)

26 Puzzling experiments ($T = 20{,}000$, 50,000 replicates). Bernoulli problem with all arms at one half but one. Fraction of runs in which each competitor beats BESA: KL-UCB+ 41.6%, TS 40.8%. Relative run times reported: 19.6X, 2.8X, 3X, X. [figure: regret vs. time for BESA, KLUCB, KLUCB+, Thompson] (Credit: Akram Baransi)

27 A puzzling strategy: BESA. Competitive regret against the state of the art for various $D$. Same algorithm for all $D$. Not relying on upper confidence bounds, not Bayesian... and extremely simple to implement. Questions: How is this possible? Can we prove optimality? For which distributions is it optimal?

28 Sub-sampling Bandits Best Empirical Sub-sampling Average

29 Go back to "Follow the leader".
FTL:
1: Play each arm once.
2: At time $t$, define $\hat{\mu}_{t,a} = \hat{\mu}(X^a_{1:N_{t,a}})$ for all $a \in A$, where $\hat{\mu}(X)$ is the empirical average of the population $X$ and $X^a_{1:N_{t,a}} = \{X_s : A_s = a, s \leq t\}$.
3: Choose $A_t = \mathrm{argmax}_{a \in A}\, \hat{\mu}_{t,a}$ (break ties in favor of the smallest $N_{t,a}$).
Properties: generally bad: linear regret. A variant ($\varepsilon$-greedy) performs OK if well tuned (Auer et al., 2002). Optimal for very specific distributions (e.g. deterministic).
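A sketch of the FTL rule as a policy function (illustrative code, usable with the run_bandit sketch given earlier, whose interface is an assumption):

```python
import numpy as np

def ftl(history, n_arms):
    """Follow-the-leader: play the arm with the best empirical mean so far,
    breaking ties in favor of the least-pulled arm (sketch of the rule above)."""
    sums, counts = np.zeros(n_arms), np.zeros(n_arms)
    for a, r in history:
        sums[a] += r
        counts[a] += 1
    if np.any(counts == 0):                       # play each arm once first
        return int(np.argmin(counts))
    means = sums / counts
    best = np.flatnonzero(means == means.max())   # ties -> smallest count
    return int(best[np.argmin(counts[best])])
```

With the earlier sketch, run_bandit(lambda h: ftl(h, len(means)), means, T=10_000, rng=rng) typically exhibits the linear-regret behaviour the slide mentions on hard instances.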

30 Follow the FAIR leader (aka BESA). Compare two arms based on "equal opportunity", i.e. the same number of observations. BESA at time $t$ for two arms $a, b$:
1: Sample $I^a_t \sim \mathrm{Wr}(N_{t,a}; N_{t,b})$ and $I^b_t \sim \mathrm{Wr}(N_{t,b}; N_{t,a})$, where $\mathrm{Wr}(N; n)$ samples $n$ points from $\{1, \ldots, N\}$ without replacement (and returns the whole set if $n \geq N$).
2: Define $\hat{\mu}_{t,a} = \hat{\mu}(X^a_{1:N_{t,a}}(I^a_t))$ and $\hat{\mu}_{t,b} = \hat{\mu}(X^b_{1:N_{t,b}}(I^b_t))$.
3: Choose $A_t = \mathrm{argmax}_{c \in \{a,b\}}\, \hat{\mu}_{t,c}$ (break ties in favor of the smallest $N_{t,\cdot}$).
Questions: Why does it work? When can we prove $\log(T)$ regret? Optimality? When does it fail?

31 Follow the FAIR leader (aka BESA). Compare two arms based on "equal opportunity", i.e. the same number of observations. BESA at time $t$ for two arms $a, b$:
1: Sample $I^a_t \sim \mathrm{Wr}(N_{t,a}; N_{t,b})$ and $I^b_t \sim \mathrm{Wr}(N_{t,b}; N_{t,a})$. Example: $N_{t,a} = 3$, $N_{t,b} = 10$; then $I^a_t = \{1, 2, 3\}$ and $I^b_t$ consists of 3 indices sampled without replacement from $\{1, \ldots, 10\}$.
2: Define $\hat{\mu}_{t,a} = \hat{\mu}(X^a_{1:N_{t,a}}(I^a_t))$ and $\hat{\mu}_{t,b} = \hat{\mu}(X^b_{1:N_{t,b}}(I^b_t))$.
3: Choose $A_t = \mathrm{argmax}_{c \in \{a,b\}}\, \hat{\mu}_{t,c}$ (break ties in favor of the smallest $N_{t,\cdot}$).
Questions: Why does it work? When can we prove $\log(T)$ regret? Optimality? When does it fail?
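A sketch of the two-arm BESA step in the same illustrative interface (not the authors' code): both reward histories are reduced to the same number of observations by sampling without replacement, then compared by their means.

```python
import numpy as np

def besa_two_arms(history, rng):
    """Two-arm BESA step (sketch): compare the arms on equal-size sub-samples
    drawn without replacement from each arm's reward history."""
    rewards = {0: [], 1: []}
    for a, r in history:
        rewards[a].append(r)
    n0, n1 = len(rewards[0]), len(rewards[1])
    if n0 == 0 or n1 == 0:                    # play each arm once first
        return 0 if n0 == 0 else 1
    m = min(n0, n1)
    # Sub-sample both histories down to the common size m; for the arm with
    # fewer observations this keeps its whole history (up to a permutation).
    mu0 = np.mean(rng.choice(rewards[0], size=m, replace=False))
    mu1 = np.mean(rng.choice(rewards[1], size=m, replace=False))
    if mu0 == mu1:                            # ties -> arm with fewer observations
        return 0 if n0 <= n1 else 1
    return int(mu1 > mu0)
```

For instance, run_bandit(lambda h: besa_two_arms(h, rng), means[:2], T=10_000, rng=rng) runs it on the first two arms of the earlier illustrative instance.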

32 Intuition. Assume $\mu_b > \mu_a$, $N_{t,a} = n_a$, $N_{t,b} = n_b$ with $n_a > n_b$. The probability of making one mistake is approximately
$P\left[\hat{\mu}\left(X^a_{1:n_a}(I(n_a; n_b))\right) > \hat{\mu}\left(X^b_{1:n_b}\right)\right]$,  (4)
where $I(n_a; n_b) \sim \mathrm{Wr}(n_a; n_b)$. The probability of making $M$ consecutive mistakes is essentially
$P\left[\forall m \in [M]:\ \hat{\mu}\left(X^a_{1:n_a^{(m)}}(I_m(n_a^{(m)}; n_b))\right) > \hat{\mu}\left(X^b_{1:n_b}\right)\right]$,  (5)
where, for $m \leq M$, $I_m(n_a; n_b) \sim \mathrm{Wr}(n_a; n_b)$ and $n_a^{(m)} = n_a + m - 1$.
For deterministic $n_a, n_b$: (4) decreases with $e^{-2 n_b (\mu_b - \mu_a)^2}$, and (5) with $e^{-2 n_b \tilde{M} (\mu_b - \mu_a)^2}$, where $\tilde{M}$ is the number of non-overlapping sub-samples (independent chunks). Exponential decay of the probability of consecutive mistakes.

33 Regret bound (slightly simplified statement). Let $A = \{\star, a\}$ and define
$\alpha(M, n) = \mathbb{E}_{Z^\star \sim \nu_{\star,n}}\left[\left(P_{Z \sim \nu_{a,n}}(Z > Z^\star) + P_{Z \sim \nu_{a,n}}(Z = Z^\star)\right)^M\right]$.
Theorem (Regret of the BESA strategy). If there exist $\alpha \in (0,1)$ and $c > 0$ such that $\alpha(M, 1) \leq c\,\alpha^M$, then
$R_T \leq \frac{11 \log(T)}{\mu^\star - \mu_a} + C_{\nu_a,\nu^\star} + O(1)$,
where $C_{\nu_a,\nu^\star}$ depends on the problem, but not on $T$.
Example: Bernoulli $\mu_a, \mu^\star$: $\alpha(M, 1) = O\!\left(\left(\frac{\mu_a(1-\mu_a)}{2}\right)^M\right)$.

34 Failure of the BESA strategy. Uniform arms $X_a \sim U([0.2, 0.4])$, $X_\star \sim U([0, 1])$: $\alpha(M, n) \geq 0.2^n$ for all $M$. Consider BESA with an initial number of pulls $n_0 = 0, \ldots, 10$. [figure: regret of BESA for $n_0 \in \{0, 3, 7, 8, 9, 10\}$ and of UCB, kl-UCB, TS, FTL] Fraction of runs in which UCB, kl-UCB, TS beat BESA: with $n_0 = 0$: 24.3% and 24.7% (kl-UCB, TS); with $n_0 = 3$: 7.3%, 7.3%, 7.8%; with $n_0 = 7$: 1.6%, 1.6%, 1.8%; larger $n_0$ brings it down further (0.6%, 0.7%). (Credit: Akram Baransi)
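A small Monte Carlo check of the mechanism behind the bound above (a sketch; the simulation and sample size are illustrative): if every one of the $n$ observations of the optimal arm falls below 0.2, which happens with probability $0.2^n$, then any equal-size sub-sample comparison is lost to the $[0.2, 0.4]$ arm no matter how many times it is repeated.

```python
import numpy as np

rng = np.random.default_rng(4)
n, trials = 5, 200_000

# Optimal arm U([0, 1]) vs. suboptimal arm U([0.2, 0.4]), n observations each.
opt = rng.uniform(0.0, 1.0, size=(trials, n))

# If every observation of the optimal arm is below 0.2, every sub-sample mean of
# the suboptimal arm (values >= 0.2) exceeds every sub-sample mean of the optimal
# arm, so BESA keeps picking the suboptimal arm forever.
stuck = np.all(opt < 0.2, axis=1).mean()
print(stuck, 0.2 ** n)   # empirical frequency vs. the 0.2^n prediction
```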

35 Regret performance of BESA.
Theorem (Regret of the BESA strategy). If there exist $\alpha \in (0,1)$ and $c > 0$ such that $\alpha(M, 1) \leq c\,\alpha^M$, then
$R_T \leq \frac{11 \log(T)}{\mu^\star - \mu_a} + C_{\nu_a,\nu^\star} + O(1)$,
where $C_{\nu_a,\nu^\star}$ depends on the problem, but not on $T$. If there exist $\beta \in (0,1)$ and $c > 0$ such that $\alpha(1, n) \leq c\,\beta^n$, then BESA initialized with $n_{0,T} \simeq \ln(T)/\ln(1/\beta)$ pulls of each arm gets
$R_T \leq \frac{11 \log(T)}{\mu^\star - \mu_a} + n_{0,T} + C_{\nu_a,\nu^\star} + O(1)$.
Key points: the first condition holds for a large class: extends FTL. The initial number of pulls is less elegant. Alternatively: mixing with uniform exploration, like $\varepsilon$-greedy.

36 Sketch of regret analysis.
1. Basic concentration controls deviations at rate $\sqrt{\log(t)/\min\{N_{t,\star}, N_{t,a}\}}$: w.h.p., if $N_{t,a} < N_{t,\star}$ then $N_{t,a} \leq 2\log(t)/(\mu^\star - \mu_a)^2$; w.h.p., if $N_{t,a} > N_{t,\star}$ then $N_{t,\star} \leq 2\log(t)/(\mu^\star - \mu_a)^2 \stackrel{\mathrm{def}}{=} u_t$. Thus, on the event $\{N_{t,\star} > u_t\}$, w.h.p. $N_{t,a} \leq u_t$.
2. Show that $N_{t,\star} > u_t$ w.h.p. (like for Thompson Sampling). Let $\tau^\star_j$ be the delay between the $j$-th time $t_j$ and the $(j+1)$-th time we play $\star$. Then
$P[N_{t,\star} \leq u_t] \leq \sum_{j=1}^{u_t} P\!\left[\tau^\star_j \geq t/u_t - 1 =: l_t\right]$
$\leq \sum_{j=1}^{u_t} P\!\left[\forall s \in \{0, \ldots, l_t\}:\ a_{t_j+s} \neq \star\right]$
$\leq \sum_{j=1}^{u_t} P\!\left[\forall s \in \{l_t/2, \ldots, l_t\}:\ a_{t_j+s} \neq \star \text{ and } N_{t_j+s,a} > l_t/2\right]$ for $t \geq c$,
and on this last event $N_{t_j+s,a} > N_{t_j+s,\star}$ (which equals $j$).

38 BESA - Recap. Optimality and near-optimality regions in $\mathcal{P}([0,1])$? Properties: Flexible: doesn't need the class of distributions, nor the support. We can prove $\log(T)$ regret for certain classes. Optimality (constants) unknown yet, but we are close. Exhibited cases where it fails: why, and how to repair it.


40 Thank you. If you want to prove "adaptive" optimality of this strategy, or to extend it to contextual bandits, adversarial bandits, MDPs: come work with me!
