Bandits: optimality in exponential families


1 Bandits: optimality in exponential families. Odalric-Ambrym Maillard, IHES, January 2016.

2 Introduction
1. Stochastic multi-armed bandits
2. Boundary crossing probabilities
3. Analysis for exponential families

3 Stochastic multi-armed bandits

4 Application examples
Clinical trials: K possible treatments (unknown effects). Which treatment should be allocated to each patient, depending on the effects observed on previous patients?
Online advertising: K ads that can be displayed. Which ad should be shown to each user, depending on the clicks observed from past users?

5 The stochastic Multi-Armed Bandit setting
Setting: a set of choices (arms) $\mathcal{A}$; each $a \in \mathcal{A}$ is associated with an unknown probability distribution $\nu_a \in \mathcal{D}$ with mean $\mu_a$.
Game: the game is sequential. At each round $t \ge 1$, the player first picks an arm $A_t \in \mathcal{A}$ based on previous observations, then receives (and sees) a stochastic payoff $Y_t \sim \nu_{A_t}$.
Goal: maximize the cumulative payoff $\mathbb{E}\big[\sum_{t=1}^T Y_t\big]$, or equivalently minimize the regret at round $T$:
$R_T \stackrel{\text{def}}{=} T\mu^\star - \mathbb{E}\Big[\sum_{t=1}^T Y_t\Big]$, where $\mu^\star = \max\{\mu_a : a \in \mathcal{A}\}$ and $a^\star \in \operatorname{argmax}\{\mu_a : a \in \mathcal{A}\}$.
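The slides contain no code; the following minimal sketch (mine, with hypothetical arm means and a naive round-robin baseline) only illustrates the setting and the regret definition above.

```python
import numpy as np

rng = np.random.default_rng(0)
mus = np.array([0.3, 0.5, 0.7])          # unknown means mu_a (hypothetical values)
mu_star = mus.max()

def play(policy, T=10_000):
    """Run one trajectory and return the pseudo-regret T*mu_star - sum of expected payoffs."""
    counts = np.zeros(len(mus))          # N_a(t): number of pulls of each arm
    sums = np.zeros(len(mus))            # cumulated payoffs per arm
    regret = 0.0
    for t in range(T):
        a = policy(t, counts, sums)
        y = rng.binomial(1, mus[a])      # Y_t ~ nu_{A_t} (Bernoulli payoff)
        counts[a] += 1
        sums[a] += y
        regret += mu_star - mus[a]       # expected regret increment
    return regret

round_robin = lambda t, counts, sums: t % len(mus)   # naive baseline
print(play(round_robin))                 # grows linearly in T, unlike a good bandit policy
```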

6 Applications of Bandits
What book to display? (+ few interactions)
What drug is the most efficient? (+ economic/human cost)
What power to inject in the electrical network? (+ fast decisions)
What news to display? (+ short lifespan)
What action to choose? (+ risk aversion)
How to organize future cities? (+ long-term decisions)

7 Core problem
Decision Making under Uncertainty: exploration versus exploitation. Sample more information, or trust current estimates?
Optimism in the face of (modeled) uncertainty: build empirical confidence bounds on the value of each arm, and choose the point with the highest value.

8 Some historical facts
1933: introduction by Thompson (the "Bayesian" way)
1952: frequentist formalization by Robbins
1985: Lai and Robbins prove the first lower bounds on performance
1996: Burnetas and Katehakis obtain general lower bounds on performance
2002: the Upper-Confidence Bound (UCB) algorithm gets the first non-asymptotic guarantee
2010+: provably optimal strategies (KL-UCB, D-MED, Thompson Sampling) for specific distributions
This talk: extension to general exponential families

9 Typical performance guarantees
Lower bound (Burnetas and Katehakis, 1996): for every "consistent" algorithm and arbitrary $\mathcal{D} \subset \mathcal{P}([0,1])$,
$\liminf_{T\to\infty} \dfrac{R_T}{\log(T)} \ge \sum_{a\in\mathcal{A}} \dfrac{\mu^\star - \mu_a}{K_{\inf}(\nu_a, \mu^\star)}$, where $K_{\inf}(\nu_a, \mu^\star) = \inf\{\mathrm{KL}(\nu_a, \nu) : \nu \in \mathcal{D},\ \mathbb{E}_\nu[Y] > \mu^\star\}$.
Upper bound (Cappé, Garivier, Maillard, Munos, Stoltz, 2013): KL-UCB achieves, for specific $\mathcal{D}$ (e.g. 1-dimensional exponential families),
$R_T \le \sum_{a\in\mathcal{A}} \dfrac{\mu^\star - \mu_a}{K_{\inf}(\nu_a, \mu^\star)}\,\log(T) + O\big(\log(T)^{2/3}\big)$.
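For Bernoulli arms, $K_{\inf}(\nu_a,\mu^\star)$ has the closed form $\mathrm{kl}(\mu_a,\mu^\star)$ (the binary KL divergence), so the lower-bound constant can be evaluated directly; a small sketch of mine, with hypothetical means:

```python
import numpy as np

def kl_bernoulli(p, q, eps=1e-12):
    """Binary KL divergence kl(p, q) = p log(p/q) + (1-p) log((1-p)/(1-q))."""
    p, q = np.clip(p, eps, 1 - eps), np.clip(q, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

mus = np.array([0.3, 0.5, 0.7])     # hypothetical arm means
mu_star = mus.max()
# Lower-bound constant: sum over sub-optimal arms of (mu* - mu_a) / K_inf(nu_a, mu*)
c = sum((mu_star - m) / kl_bernoulli(m, mu_star) for m in mus if m < mu_star)
print(c)                            # R_T / log(T) cannot asymptotically beat this constant
```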

10 The KL-UCB algorithm
Pull the arm $A_{t+1}$ that maximizes the upper bound
$\sup\Big\{\mathbb{E}_\nu[Y] : \nu\in\mathcal{D},\ \mathrm{KL}\big(\Pi_{\mathcal{D}}(\hat\nu_{a,N_a(t)}),\,\nu\big) \le \dfrac{f(t)}{N_a(t)}\Big\}$
where $\hat\nu_{a,n} = \frac{1}{n}\sum_{m=1}^{n}\delta_{X_{a,m}}$, $N_a(t) = \sum_{s=1}^{t}\mathbb{I}\{A_s = a\}$, $\Pi_{\mathcal{D}} : \mathcal{M}_1(\mathbb{R}) \to \mathcal{D}$ is a projection on $\mathcal{D}$, and $f : \mathbb{N} \to \mathbb{R}$ is non-decreasing.
In words: build the empirical distribution of arm $a$, project it on $\mathcal{D}$, consider all $\nu \in \mathcal{D}$ close to this distribution (within the threshold), and compute the maximal mean of these candidate distributions.

14 The KL-UCB+ algorithm
Pull the arm $A_{t+1}$ that maximizes the upper bound
$\sup\Big\{\mathbb{E}_\nu[Y] : \nu\in\mathcal{D},\ \mathrm{KL}\big(\Pi_{\mathcal{D}}(\hat\nu_{a,N_a(t)}),\,\nu\big) \le \dfrac{f\big(t/N_a(t)\big)}{N_a(t)}\Big\}$
where $\hat\nu_{a,n} = \frac{1}{n}\sum_{m=1}^{n}\delta_{X_{a,m}}$, $N_a(t) = \sum_{s=1}^{t}\mathbb{I}\{A_s = a\}$, $\Pi_{\mathcal{D}} : \mathcal{M}_1(\mathbb{R}) \to \mathcal{D}$ is a projection on $\mathcal{D}$, and $f : \mathbb{N} \to \mathbb{R}$ is non-decreasing; only the exploration threshold changes from $f(t)$ to $f(t/N_a(t))$.
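A hedged sketch (mine) of how this index is usually computed in the Bernoulli case: the supremum is the largest mean q with kl(mu_hat, q) below the threshold, found by bisection. The simplest choice f(x) = log(x) is used here; the talk also considers f(x) = log(x) + xi*log log(x).

```python
import numpy as np

def kl_bernoulli(p, q, eps=1e-12):
    p, q = np.clip(p, eps, 1 - eps), np.clip(q, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def klucb_index(mu_hat, n, t, plus=False, iters=50):
    """Upper bound sup{ q : kl(mu_hat, q) <= f(.)/n } with f(x) = log(x)."""
    f = np.log(max(t / n, 1.0)) if plus else np.log(max(t, 1.0))  # KL-UCB+ vs KL-UCB
    level = f / n
    lo, hi = mu_hat, 1.0
    for _ in range(iters):          # bisection: kl(mu_hat, .) is increasing on [mu_hat, 1]
        mid = (lo + hi) / 2.0
        if kl_bernoulli(mu_hat, mid) <= level:
            lo = mid
        else:
            hi = mid
    return lo

print(klucb_index(0.4, 10, 1000), klucb_index(0.4, 10, 1000, plus=True))
```

Plugged into the simulation loop of the first sketch (pull each arm once, then the arm with the largest index), this gives a policy of the KL-UCB type.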

15 From Regret to Boundary Crossing Probabilities
The regret decomposes as $R_T = \sum_{a\in\mathcal{A}}(\mu^\star-\mu_a)\,\mathbb{E}[N_a(T)]$, and the target bound is $\sum_{a\in\mathcal{A}}\dfrac{(\mu^\star-\mu_a)\log(T)}{K_{\inf}(\nu_a,\mu^\star)} + O\big(\log(T)^{2/3}\big)$.
Theorem. Let $0 < \varepsilon < \min\{\mu^\star-\mu_a : a\in\mathcal{A},\ \mu_a < \mu^\star\}$. Then the number of pulls of a sub-optimal arm $a\in\mathcal{A}$ by algorithm KL-UCB satisfies
$\mathbb{E}[N_T(a)] \le \underbrace{\sum_{n=1}^{T}\mathbb{P}\big\{K_{\inf}(\Pi_{\mathcal{D}}(\hat\nu_{a,n}),\,\mu^\star-\varepsilon)\le f(T)/n\big\}}_{(a)\ \text{easy term, via Sanov}} + \underbrace{\sum_{t=|\mathcal{A}|}^{T}\mathbb{P}\big\{K_{\inf}\big(\Pi_{\mathcal{D}}(\hat\nu_{a^\star,N_{a^\star}(t)}),\,\mu^\star-\varepsilon\big)\ge f(t)/N_{a^\star}(t)\big\}}_{(b)\ \text{difficult: boundary crossing probability}}$

16 From Regret to Boundary Crossing Probabilities
The regret decomposes as $R_T = \sum_{a\in\mathcal{A}}(\mu^\star-\mu_a)\,\mathbb{E}[N_a(T)]$, and the target bound is $\sum_{a\in\mathcal{A}}\dfrac{(\mu^\star-\mu_a)\log(T)}{K_{\inf}(\nu_a,\mu^\star)} + O\big(\log(T)^{2/3}\big)$.
Theorem. Let $0 < \varepsilon < \min\{\mu^\star-\mu_a : a\in\mathcal{A},\ \mu_a < \mu^\star\}$. Likewise, the number of pulls of a sub-optimal arm $a\in\mathcal{A}$ by algorithm KL-UCB+ satisfies
$\mathbb{E}[N_T(a)] \le \underbrace{\sum_{n=1}^{T}\mathbb{P}\big\{K_{\inf}(\Pi_{\mathcal{D}}(\hat\nu_{a,n}),\,\mu^\star-\varepsilon)\le f(T/n)/n\big\}}_{(a)\ \text{easy term, via Sanov}} + \underbrace{\sum_{t=|\mathcal{A}|}^{T}\mathbb{P}\big\{K_{\inf}\big(\Pi_{\mathcal{D}}(\hat\nu_{a^\star,N_{a^\star}(t)}),\,\mu^\star-\varepsilon\big)\ge f\big(t/N_{a^\star}(t)\big)/N_{a^\star}(t)\big\}}_{(b)\ \text{difficult: boundary crossing probability}}$

17 Exponential families
$\mathcal{E}$: parametric family of distributions of the form $\nu_\theta(x) = \exp\big(\langle\theta, F(x)\rangle - \psi(\theta)\big)\,\nu_0(x)$, where
$F : \mathcal{X} \to \mathbb{R}^K$ and $\nu_0 \in \mathcal{P}(\mathcal{X})$: reference function/measure;
$\theta \in \Theta_{\mathcal{E}} \stackrel{\text{def}}{=} \{\theta\in\mathbb{R}^K : \psi(\theta) < \infty\}$: natural parameter;
$\psi(\theta) \stackrel{\text{def}}{=} \log\int\exp\big(\langle\theta, F(x)\rangle\big)\,\nu_0(dx)$: log-partition function.
Examples:
Bernoulli: $\mathcal{X}=\{0,1\}$, $F : x\mapsto x$ ($K=1$), $\Theta_{\mathcal{E}}=\mathbb{R}$, $\psi(\theta)=\log(1+e^\theta)$; $\mathcal{B}(\mu)$ has natural parameter $\theta=\log\big(\mu/(1-\mu)\big)$.
Gaussian: $\mathcal{X}=\mathbb{R}$, $F : x\mapsto(x,x^2)$ ($K=2$), $\Theta_{\mathcal{E}}=\mathbb{R}\times\mathbb{R}_-^\ast$, $\psi(\theta)=-\dfrac{\theta_1^2}{4\theta_2}+\dfrac12\log\Big(\dfrac{\pi}{-\theta_2}\Big)$; $\mathcal{N}(\mu,\sigma^2)$ has natural parameter $\theta=\Big(\dfrac{\mu}{\sigma^2},\,-\dfrac{1}{2\sigma^2}\Big)$.

18 Basic properties of regular exponential families
$\mathbb{E}_{\nu_\theta}[F(X)] = \nabla\psi(\theta)$ (the dual, or mean, parameter).
$\mathrm{KL}(\nu_\theta, \nu_{\theta'}) = B_\psi(\theta, \theta') = \langle\theta-\theta',\ \nabla\psi(\theta)\rangle - \psi(\theta) + \psi(\theta')$: the Bregman divergence associated with $\psi$, expressed in the natural parameters.
Estimation for a regular family: let $\{X_i\}_{i\le n} \sim \nu_\theta$ i.i.d. and $\hat\nu_n = \frac1n\sum_{i=1}^n\delta_{X_i}$. Then $\bar F_n \stackrel{\text{def}}{=} \frac1n\sum_{i=1}^n F(X_i) = \mathbb{E}_{\hat\nu_n}[F(X)]$.
In the sequel, we would like to have $\Pi_{\mathcal{E}}(\hat\nu_{a,n}) = \nu_{\hat\theta_n}$ for some $\hat\theta_n\in\Theta_{\mathcal{E}}$, i.e. $\nabla\psi(\hat\theta_n) = \bar F_n$.
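A quick numerical check of these identities in the Bernoulli case ($K = 1$); the helper names and the experiment are mine.

```python
import numpy as np

psi      = lambda th: np.log1p(np.exp(th))            # log-partition of Bernoulli
grad_psi = lambda th: 1.0 / (1.0 + np.exp(-th))       # dual (mean) parameter mu(theta)
theta_of = lambda mu: np.log(mu / (1.0 - mu))         # inverse map, mu -> theta

def bregman(th, th2):
    """B_psi(theta, theta') = <theta - theta', grad psi(theta)> - psi(theta) + psi(theta')."""
    return (th - th2) * grad_psi(th) - psi(th) + psi(th2)

# Estimation: theta_hat_n is read off the empirical sufficient statistic F_bar_n.
rng = np.random.default_rng(0)
theta_star = theta_of(0.7)
x = rng.binomial(1, grad_psi(theta_star), size=1000)
F_bar = np.clip(x.mean(), 1e-6, 1 - 1e-6)             # = grad_psi(theta_hat_n)
theta_hat = theta_of(F_bar)

# Check KL(nu_theta, nu_theta') = B_psi(theta, theta') on this example.
mu, mu2 = grad_psi(theta_hat), grad_psi(theta_star)
kl = mu * np.log(mu / mu2) + (1 - mu) * np.log((1 - mu) / (1 - mu2))
print(kl, bregman(theta_hat, theta_star))             # the two values coincide
```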

23 Technical assumption: well-defined empirical parameter $\hat\theta_n$
We assume that $\theta^\star\in\Theta$ and $\bar F_n \in \nabla\psi(\Theta_\rho)$, where $\Theta$ is a convex set whose neighborhood $\Theta_{2\rho}$ satisfies $\Theta_{2\rho}\subset\Theta_I$, with
$\Theta_{2\rho} = \{\theta' : \inf_{\theta\in\Theta}\|\theta'-\theta\| < 2\rho\}$,
$\Theta_I \stackrel{\text{def}}{=} \{\theta\in\Theta_{\mathcal{E}} : 0 < \lambda_{MIN}(\nabla^2\psi(\theta)) \le \lambda_{MAX}(\nabla^2\psi(\theta)) < \infty\}$ ($\lambda_{MIN/MAX}(M)$: min/max eigenvalues of a symmetric positive-definite matrix $M$),
and $2\rho$ smaller than the distance from $\Theta$ to the complement $\Theta_I^c$, so that $\theta^\star\in\Theta\subset\Theta_{2\rho}\subset\Theta_I\subset\Theta_{\mathcal{E}}$.
Then we can form $\Pi_{\mathcal{E}}(\hat\nu_{a,n}) = \nu_{\hat\theta_n}$ with $\hat\theta_n\in\Theta_\rho$, and
$K_{\inf}\big(\Pi_{\mathcal{E}}(\hat\nu_{a,n}),\,\mu^\star-\varepsilon\big) = \inf\{B_\psi(\hat\theta_n,\theta) : \mathbb{E}_{\nu_\theta}[X] \ge \mu^\star-\varepsilon\}$.

24 Boundary crossing probabilities for general exponential families

25 What it is about
$\mathbb{P}_{\theta^\star}\Big\{K_{\inf}\big(\Pi_{\mathcal{E}}(\hat\nu_{a,N_a(t)}),\,\mu^\star-\varepsilon\big) \ge f\big(t/N_a(t)\big)/N_a(t)\Big\} \le \mathbb{P}_{\theta^\star}\Big\{\exists\,n\le t : \inf\{B_\psi(\hat\theta_n,\theta) : \mathbb{E}_{\nu_\theta}[X]\ge\mu^\star-\varepsilon\} \ge f(t/n)/n\Big\}$
E.g. for a canonical exponential family of dimension 1: there is a unique $\theta^\star_\varepsilon\in\mathbb{R}$ such that $\mathbb{E}_{\nu_{\theta^\star_\varepsilon}}[X] = \mu^\star-\varepsilon$, thus:
$\mathbb{P}_{\theta^\star}\Big\{\exists\,n\le t : B_\psi(\hat\theta_n,\theta^\star_\varepsilon) \ge f(t/n)/n\Big\}$
We want this to be $o(1/t)$ for a well-chosen $f$.
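A Monte Carlo sketch (mine) of this quantity in the Bernoulli case, where $B_\psi(\hat\theta_n,\theta^\star_\varepsilon)$ reduces to $\mathrm{kl}(\hat\mu_n,\mu^\star-\varepsilon)$ on the side $\hat\mu_n < \mu^\star-\varepsilon$ where $K_{\inf}$ is nonzero; the parameters and the choice $f(x)=\log(x)$ are hypothetical.

```python
import numpy as np

def kl(p, q):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def crossing_prob(mu_star=0.7, eps=0.1, t=2000, n_mc=2000, seed=0):
    """Estimate P{ exists n <= t : kl(mu_hat_n, mu_star - eps) >= f(t/n)/n } with f = log."""
    rng = np.random.default_rng(seed)
    mu_eps = mu_star - eps
    n = np.arange(1, t + 1)
    f = np.log(t / n)                       # simplest threshold f(x) = log(x)
    hits = 0
    for _ in range(n_mc):
        mu_hat = np.cumsum(rng.binomial(1, mu_star, size=t)) / n
        low = mu_hat < mu_eps               # only this side crosses the boundary
        if np.any(low & (kl(mu_hat, mu_eps) >= f / n)):
            hits += 1
    return hits / n_mc

print(crossing_prob())                      # expected to be small and to shrink as t grows
```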

26 Known results
Theorem (Cappé, Garivier, Maillard, Munos, Stoltz, 2013). For canonical ($F(x)=x$, $\mathcal{X}\subset\mathbb{R}$) exponential families of dimension $K=1$:
$\mathbb{P}_\theta\Big\{\exists\,n\le t : B_\psi(\hat\theta_n,\theta)\ge f(t)/n\Big\} \le e\,\lceil f(t)\log(t)\rceil\,e^{-f(t)}$
For $f(x)=\log(x)+\xi\log\log(x)$, the bound is $O\big(\log^{2-\xi}(t)/t\big)$. A similar result is shown for discrete distributions.
Limitations:
Requires $\xi > 2$, while experiments show that $\xi = 0$ still works (and even better).
Only for the threshold $f(t)/n$; does not work for $f(t/n)/n$.
Considers $\theta^\star$: does not take advantage of $\varepsilon > 0$.

27 A forgotten result
Theorem (Lai, 1988; exponential family of dimension $K$). For $\gamma\in(0,1)$, let $\mathcal{C}_\gamma(\Delta) = \{\theta\in\mathbb{R}^K : \langle\theta,\Delta\rangle \ge \gamma\|\theta\|\,\|\Delta\|\}$. Then, for $f(x)=\log(x)+\xi\log\log(x)$, it holds for all $\theta\in\Theta_\rho$ such that $\|\theta-\theta^\star\|^2 \ge \delta_t$, where $\delta_t\to 0$ and $t\delta_t\to\infty$ as $t\to\infty$:
$\mathbb{P}_{\theta^\star}\Big\{\exists\,n\le t :\ \hat\theta_n\in\Theta_\rho,\ B_\psi(\hat\theta_n,\theta^\star)\ge f(t/n)/n,\ \nabla\psi(\hat\theta_n)-\nabla\psi(\theta^\star)\in\mathcal{C}_\gamma(\theta-\theta^\star)\Big\} = O\Big(\dfrac{\log^{K/2-\xi-1}\big(t\,\|\theta-\theta^\star\|^2\big)}{t\,\|\theta-\theta^\star\|^2}\Big)$
Pros: $\xi > K/2-1$ is enough; handles the threshold $f(t/n)$.
Cons: the extra cone condition (shown in blue on the slide); asymptotic statement only.
"Boundary crossing problems for sample means", Lai, 1988.

28 Illustrative experiment on Bernoulli distributions: regret as a function of time for KL-UCB and KL-UCB+ (mean and quantiles).

29 This talk
Explain the proof technique for this result:
1. Peeling (standard) and covering
2. Change of measure argument
3. Localization + change of measure
4. Concentration (standard)
5. Fine tuning (gain a $\log^2$ factor)
Show that Lai's theorem can be made non-asymptotic, and used to control the general case.

30 Analysis and proof technique (up to some details, for presentation purposes)
$\mathbb{P}_{\theta^\star}\Big\{\exists\,1\le n\le t :\ \underbrace{\hat\theta_n\in\Theta_\rho,\ K_{\inf}\big(\Pi_{\mathcal{E}}(\hat\nu_{a,n}),\,\mu^\star-\varepsilon\big)\ge f(t/n)/n}_{E_n}\Big\}$

31 I. Peeling
Let $n_0 = 1 < n_1 < \dots < n_{I_t} = t+1$ for some $I_t\in\mathbb{N}$:
$\mathbb{P}_{\theta^\star}\Big\{\exists\,1\le n\le t : E_n\Big\} \le \sum_{i=0}^{I_t-1}\mathbb{P}_{\theta^\star}\Big\{\exists\,n\in[n_i,n_{i+1}) : E_n\Big\}$
For some $b>1$ and $\beta\in(0,1)$, use $n_i = b^i$ if $i < I_t \stackrel{\text{def}}{=} \lceil\log_b(\beta t+\beta)\rceil$, and $n_{I_t} = t+1$ (a valid sequence since $n_{I_t-1} \le b^{\log_b(\beta t+\beta)} = \beta t+\beta < t+1 = n_{I_t}$).
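A literal sketch (mine) of this geometric grid; b and beta are the free parameters from the slide.

```python
import math

def peeling_grid(t, b=2.0, beta=1.0):
    """Geometric grid 1 = n_0 < n_1 < ... < n_{I_t} = t + 1 with n_i ~ b**i (a sketch)."""
    I_t = max(1, math.ceil(math.log(beta * t + beta, b)))
    grid = [1]
    for i in range(1, I_t):
        n_i = int(math.ceil(b ** i))
        if n_i > grid[-1] and n_i <= t:
            grid.append(n_i)
    grid.append(t + 1)
    return grid

print(peeling_grid(1000))   # e.g. [1, 2, 4, 8, ..., 512, 1001]
```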

32 II. Cone covering
First: $K_{\inf}\big(\Pi_{\mathcal{E}}(\hat\nu_{a,n}),\,\mu^\star-\varepsilon\big) = \inf\{B_\psi(\hat\theta_n,\theta) : \theta\in\Theta_{\mathcal{E}},\ \mu_\theta\ge\mu^\star-\varepsilon\}$.
Then, decompose the Bregman divergence around $\theta^\star$:
$B_\psi(\hat\theta_n,\theta) = B_\psi(\hat\theta_n,\theta^\star) - B_\psi(\theta,\theta^\star) + \underbrace{\langle\theta-\theta^\star,\ \nabla\psi(\theta)-\nabla\psi(\hat\theta_n)\rangle}_{\text{shift}}$

33 II. Cone covering
Consider the half-cone $\mathcal{C}_\gamma(\Delta) = \{\theta\in\mathbb{R}^K : \langle\theta,\Delta\rangle \ge \gamma\|\theta\|\,\|\Delta\|\}$, then fix a set of directions $\{\Delta_c\}_{1\le c\le C_{\gamma,K}}$ such that $\mathbb{R}^K = \bigcup_{c=1}^{C_{\gamma,K}}\mathcal{C}_\gamma(\Delta_c)$.
$\mathcal{C}_\gamma(\Delta)$ is invariant by rescaling of $\Delta$: we can choose $\|\Delta_c\|$ small enough so that $\theta^\star + \Delta_c \in \Theta_\rho$.
$C_{\gamma,1} = 2$, and $C_{\gamma,K}\to\infty$ when $\gamma\to 1$.

34 II. Cone covering
$K_{\inf}\big(\Pi_{\mathcal{E}}(\hat\nu_{a,n}),\,\mu^\star-\varepsilon\big) = \inf\{B_\psi(\hat\theta_n,\theta) : \theta\in\Theta_{\mathcal{E}},\ \mu_\theta\ge\mu^\star-\varepsilon\} \le B_\psi(\hat\theta_n,\theta_c)$ for any $\theta_c\in\Theta_\rho$ with $\mu_{\theta_c}\ge\mu^\star-\varepsilon$, $c\in[C_{\gamma,K}]$.
Thus
$\mathbb{P}_{\theta^\star}\Big\{\exists\,n\in[n_i,n_{i+1}) : E_n\Big\} \le \min_{c'}\sum_{c=1}^{C_{\gamma,K}}\mathbb{P}_{\theta^\star}\Big\{\exists\,n\in[n_i,n_{i+1}) : E_{n,c,c'}\Big\}$
where $E_{n,c,c'} = \Big\{\hat\theta_n\in\Theta_\rho,\ \nabla\psi(\theta_c)-\bar F_n\in\mathcal{C}_\gamma(\Delta_c),\ B_\psi(\hat\theta_n,\theta_c)\ge\dfrac{f(t/n)}{n}\Big\}$
$E_{n,c,c'}$ is almost the event appearing in Lai's theorem.

35 Recap of steps I-II
$\mathbb{P}_{\theta^\star}\Big\{\exists\,1\le n\le t : E_n\Big\} \le \sum_{i=0}^{I_t-1}\min_{c'}\sum_{c=1}^{C_{\gamma,K}}\mathbb{P}_{\theta^\star}\Big\{\exists\,n\in[n_i,n_{i+1}) : E_{n,c,c'}\Big\}$

36 III. Change of measure
Lemma (Change of measure). If $n\mapsto n\,f(t/n)$ is non-decreasing and $\theta_c\in\Theta_\rho$, then
$\mathbb{P}_{\theta^\star}\Big\{\exists\,n\in[n_i,n_{i+1}) : E_{n,c,c'}\Big\} \le C\,\exp\Big(-\dfrac{n_i}{2}v_\rho\delta_c^2 - \gamma\delta_c v_\rho\sqrt{\dfrac{2\,n_i f(t/n_i)}{V_\rho}}\Big)\ \mathbb{P}_{\theta_c}\Big\{\exists\,n\in[n_i,n_{i+1}) : E_{n,c,c'}\Big\}$
where $v_\rho = \inf_{\theta\in\Theta_\rho}\lambda_{MIN}(\nabla^2\psi(\theta))$, $V_\rho = \sup_{\theta\in\Theta_\rho}\lambda_{MAX}(\nabla^2\psi(\theta))$ and $\delta_c = \|\Delta_c\|$.
In words: we trade the concentration of $B_\psi(\hat\theta_n,\theta_c)$ under the distribution $\nu_{\theta^\star}$ for its concentration under the distribution $\nu_{\theta_c}$.

37 III. Change of measure: proof details
For $n$ i.i.d. variables, with $\Delta_c := \theta^\star - \theta_c$, look at the ratio
$\dfrac{d\mathbb{P}_{\theta^\star}}{d\mathbb{P}_{\theta_c}} = \dfrac{\prod_{k=1}^n\nu_{\theta^\star}(X_k)}{\prod_{k=1}^n\nu_{\theta_c}(X_k)} = \exp\Big(n\langle\Delta_c,\bar F_{a,n}\rangle - n\big(\psi(\theta^\star)-\psi(\theta_c)\big)\Big) = \exp\Big(-n\underbrace{\langle\Delta_c,\ \nabla\psi(\theta_c)-\bar F_{a,n}\rangle}_{(a)} - n\underbrace{B_\psi(\theta_c,\theta^\star)}_{(b)}\Big)$
For (b): $B_\psi(\theta_c,\theta^\star) \ge \frac12\,v_\rho\,\delta_c^2$.
For (a), up to going from $-\Delta_c$ to $\Delta_c$, the cone condition and strong convexity on $\Theta_\rho$ give, on the event:
$\langle\Delta_c,\ \nabla\psi(\theta_c)-\bar F_n\rangle \ \ge\ \gamma\,\|\Delta_c\|\,\|\nabla\psi(\theta_c)-\bar F_n\| \ \ge\ \gamma\,\delta_c\,v_\rho\,\|\hat\theta_n-\theta^\star+\Delta_c\| \ \ge\ \gamma\,\delta_c\,v_\rho\,\sqrt{\dfrac{2\,B_\psi(\hat\theta_n,\theta_c)}{V_\rho}} \ \ge\ \gamma\,\delta_c\,v_\rho\,\sqrt{\dfrac{2\,f(t/n)}{V_\rho\,n}}$
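A quick numerical check (mine, Bernoulli case) of the likelihood-ratio identity that the change of measure rests on:

```python
import numpy as np

psi = lambda th: np.log1p(np.exp(th))          # Bernoulli log-partition, F(x) = x
rng = np.random.default_rng(1)

theta_star, theta_c = 1.0, 0.2                 # hypothetical natural parameters
n = 50
x = rng.binomial(1, 1.0 / (1.0 + np.exp(-theta_star)), size=n)
F_bar = x.mean()

log_dens = lambda th: th * x - psi(th)         # log nu_theta(x_k) for each sample
lhs = np.sum(log_dens(theta_star) - log_dens(theta_c))            # log dP_theta*/dP_theta_c
rhs = n * (theta_star - theta_c) * F_bar - n * (psi(theta_star) - psi(theta_c))
print(lhs, rhs)                                # identical up to floating point error
```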

40 IV. Localization
By construction, $\nabla\psi(\theta_c)-\bar F_n\in\mathcal{C}_\gamma(\Delta_c)$ gives $\langle\Delta_c,\ \nabla\psi(\theta_c)-\bar F_n\rangle \ge \gamma\,\|\Delta_c\|\,\|\nabla\psi(\theta_c)-\bar F_n\|$; we thus look at $\|\nabla\psi(\theta_c)-\bar F_n\|$.
Decomposition, for some constant $\epsilon_{t,i,c} > 0$:
$\mathbb{P}_{\theta_c}\Big\{\exists\,n\in[n_i,n_{i+1}) : E_{n,c}\Big\} \le \underbrace{\mathbb{P}_{\theta_c}\Big\{\exists\,n\in[n_i,n_{i+1}) : E_{n,c},\ \|\nabla\psi(\theta_c)-\bar F_n\| < \epsilon_{t,i,c}\Big\}}_{(a)\ \text{requires some care}} + \underbrace{\mathbb{P}_{\theta_c}\Big\{\exists\,n\in[n_i,n_{i+1}) : E_{n,c},\ \|\nabla\psi(\theta_c)-\bar F_n\| \ge \epsilon_{t,i,c}\Big\}}_{(b)\ \text{"standard" concentration}}$

41 V. Localized change of measure and concentration
Lemma (Concentration; admitted). Under some technical assumption,
$(b) \le 2K\exp\Big(-\dfrac{n_i^2\,\epsilon_{t,i,c}^2}{2K\,V_{2\rho}\,n_{i+1}}\Big)$.
For $\epsilon_{t,i,c} = \sqrt{\dfrac{2K\,V_{2\rho}\,n_{i+1}}{n_i^2}\,f(t/n_{i+1})}$, this gives $(b) \le 2K\exp\big(-f(t/n_{i+1})\big)$.
Lemma (Localized change of measure). For the $\epsilon_{t,i,c}$ considered, we have
$(a) \le 2\,C_{b,\rho}\,\exp\big(-f(t/n_{i+1})\big)\,f(t/n_{i+1})^{K/2}$,
for an explicit constant $C_{b,\rho} = \max\Big\{\dfrac{2K b V_{2\rho}}{\rho^2 v_\rho^2},\ 4,\ \dfrac{8K^2 b^2 V_{2\rho}^2}{(K+2)v_\rho^2}\Big\}^{K/2}$.
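Term (b) is a deviation bound on the empirical sufficient statistic. The following toy Monte Carlo check (mine, Bernoulli case, compared against a generic Hoeffding bound rather than the lemma's constant) illustrates what "standard" concentration means here.

```python
import numpy as np

def tail_estimate(theta=0.3, n=200, eps=0.1, n_mc=20000, seed=2):
    """Monte Carlo estimate of P(|F_bar_n - grad psi(theta)| >= eps) for Bernoulli (F(x) = x)."""
    rng = np.random.default_rng(seed)
    mu = 1.0 / (1.0 + np.exp(-theta))        # grad psi(theta) = mean of nu_theta
    F_bar = rng.binomial(n, mu, size=n_mc) / n
    return float(np.mean(np.abs(F_bar - mu) >= eps))

# Generic Hoeffding reference 2*exp(-2*n*eps^2) for a [0,1]-valued statistic.
print(tail_estimate(), 2 * np.exp(-2 * 200 * 0.1 ** 2))
```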

42 Recap
$\mathbb{P}_{\theta^\star}\Big\{\exists\,1\le n\le t :\ \hat\theta_n\in\Theta_\rho,\ K_{\inf}\big(\Pi_{\mathcal{E}}(\hat\nu_{a,n}),\,\mu^\star-\varepsilon\big)\ge f(t/n)/n\Big\}$
End of the proof:
$\le \underbrace{\sum_{c=1}^{C_{\gamma,K}}\sum_{i=0}^{I_t-1}}_{\text{peeling + covering}}\ \underbrace{C\,\exp\Big(-\dfrac{n_i}{2}v_\rho\delta_c^2 - \gamma\delta_c v_\rho\sqrt{\dfrac{2\,n_i f(t/n_i)}{V_\rho}}\Big)}_{\text{change of measure}}\ \Big[\underbrace{e^{-f(t/n_{i+1})}\big(2\,C_{b,\rho}\,f(t/n_{i+1})^{K/2} + 2K\big)}_{\text{localized change of measure + concentration}}\Big]$
The "change of measure" factor kills the terms with $n_i\delta_c^2 \gtrsim \log(t\delta_c^2)$; what remains is of order
$e^{-f(t\delta_c^2)}\,f(t\delta_c^2)^{K/2}\,/\,\log(t\delta_c^2) \ \simeq\ \dfrac{1}{t\delta_c^2}\,\log(t\delta_c^2)^{K/2-\xi-1}$.

43 Illustrative experiment on Bernoulli distributions: regret as a function of time for KL-UCB and KL-UCB+ (mean and quantiles).

44 Back to V(a): localized change of measure
$\mathbb{P}_{\theta_c}\Big\{\exists\,n\in[n_i,n_{i+1}) : E_{n,c},\ \|\nabla\psi(\theta_c)-\bar F_n\| < \epsilon_{t,i,c}\Big\}$
where $\theta_c\in\Theta_\rho$ and $E_{n,c} \stackrel{\text{def}}{=} \{\hat\theta_n\in\Theta_\rho,\ B_\psi(\hat\theta_n,\theta_c)\ge f(t/n)/n\}$.

45 Back to V(a): localized change of measure, the idea
Change of measure from $\mathbb{P}_{\theta_c}$ to the mixture $\mathbb{Q}_B(\cdot) = \int_{\theta'\in B}\mathbb{P}_{\theta'}(\cdot)\,d\theta'$ where $B = \Theta_{2\rho}\cap\mathrm{Ball}\big(\theta_c,\ \tfrac{2\epsilon_{t,i,c}}{v_\rho}\big)$.
We have $\mathrm{Ball}\big(\hat\theta_n,\ \min\{\tfrac{\epsilon_{t,i,c}}{v_\rho},\rho\}\big)\subset B\subset\Theta_{2\rho}$.
Study, for $n$ i.i.d. points, the ratio
$\dfrac{d\mathbb{P}_{\theta_c}}{d\mathbb{Q}_B} = \Big[\int_{\theta'\in B}\dfrac{\prod_{k=1}^n\nu_{\theta'}(X_k)}{\prod_{k=1}^n\nu_{\theta_c}(X_k)}\,d\theta'\Big]^{-1} = \Big[\int_{\theta'\in B}\exp\Big(\underbrace{n\langle\theta'-\theta_c,\ \bar F_{a,n}\rangle - n\big(\psi(\theta')-\psi(\theta_c)\big)}_{h(\theta')}\Big)\,d\theta'\Big]^{-1}$

46 Back to V(a): localized change of measure, the idea (continued)
Studying $h$, one gets for $\theta'\in\Theta_{2\rho}$:
$h(\theta') \ge n\,B_\psi(\hat\theta_n,\theta_c) - \dfrac{n}{2}(\theta'-\hat\theta_n)^\top\nabla^2\psi(\tilde\theta)(\theta'-\hat\theta_n) \ \ge\ f(t/n) - \dfrac{n}{2}\,\|\theta'-\hat\theta_n\|^2\,V_{2\rho}$ (by convexity of $\Theta_{2\rho}$).
Thus, using that $\mathrm{Ball}\big(\hat\theta_n,\ \min\{\tfrac{\epsilon_{t,i,c}}{v_\rho},\rho\}\big)\subset B$:
$\dfrac{d\mathbb{P}_{\theta_c}}{d\mathbb{Q}_B} \le \Big[\int_{\theta'\in B}\exp\Big(f(t/n) - \dfrac{n}{2}\|\theta'-\hat\theta_n\|^2 V_{2\rho}\Big)\,d\theta'\Big]^{-1} \le e^{-f(t/n_{i+1})}\Big[\int_{x\in\mathrm{Ball}(0,\,\min\{\rho,\ \epsilon_{t,i,c}/v_\rho\})}\exp\Big(-\dfrac{n_{i+1}}{2}\|x\|^2 V_{2\rho}\Big)\,dx\Big]^{-1}$

47 Back to V(a): localized change of measure, the idea (continued)
Thus, for our event of interest $\Omega$:
$\mathbb{P}_{\theta_c}\{\Omega\} \le e^{-f(t/n_{i+1})}\Big[\int_{x\in\mathrm{Ball}(0,\,\min\{\rho,\ \epsilon_{t,i,c}/v_\rho\})} e^{-\frac{n_{i+1}}{2}\|x\|^2 V_{2\rho}}\,dx\Big]^{-1}\ \underbrace{\int_{\theta'\in B}\mathbb{P}_{\theta'}\{\Omega\}\,d\theta'}_{\le\ \mathrm{Vol}(B)}$
(the bracketed integral is the volume of a Gaussian ball). After some easy computations,
$\mathbb{P}_{\theta_c}\{\Omega\} \le 2\exp\big(-f(t/n_{i+1})\big)\,\min\Big\{\rho^2 v_\rho^2,\ \epsilon_{t,i,c}^2,\ \dfrac{(K+2)\,v_\rho^2}{K\,n_{i+1}\,V_{2\rho}}\Big\}^{-K/2}\,2^K\,\epsilon_{t,i,c}^K$
We conclude using $\epsilon_{t,i,c} = \sqrt{\dfrac{2K\,V_{2\rho}\,n_{i+1}}{n_i^2}\,f(t/n_{i+1})}$.

48 Conclusion
For a general, known exponential family $\mathcal{E}$, we can*:
Exhibit a concentration-of-measure result for the Bregman divergence induced by the family: in finite time, for every dimension $K$; a powerful proof technique.
Get a fine understanding of the threshold $f(x) = \log(x) + \xi\log\log(x)$: this improves over existing work ($\xi > K/2 - 1$ is enough, the result holds for all $K$ and for KL-UCB+ as well), closely follows what is observed in experiments, and shows that this was essentially known 30 years ago by Lai.
But we still need to know the family $\mathcal{E}$.
* We skipped some regularity restrictions on $\mathcal{E}$.

49 Teaser: regret as a function of time for KL-UCB, KL-UCB+ and BESA (mean and quantiles). BESA does not know the family $\mathcal{E}$. "Sub-sampling for the multi-armed bandit", A. Baransi et al., 2014.

50 Looking for Interns, PhD candidates, collaborations Thank you
