Stat 260/CS 294-102. Learning in Sequential Decision Problems. Peter Bartlett.


1. Stat 260/CS 294-102. Learning in Sequential Decision Problems. Peter Bartlett.
Multi-armed bandit algorithms. Exponential families. Cumulant generating function. KL-divergence. KL-UCB for an exponential family. KL vs. c.g.f. bounds. Bounded rewards: Bernoulli and Hoeffding. Empirical KL-UCB.
See Olivier Cappé, Aurélien Garivier, Odalric-Ambrym Maillard, Rémi Munos and Gilles Stoltz, "Kullback-Leibler Upper Confidence Bounds for Optimal Sequential Allocation", The Annals of Statistics, 41(3):1516-1541, 2013.

2. Recall: Concentration inequalities.
Definition: Cumulant-generating function: $\Gamma_X(\lambda) = \log \mathbb{E}\exp(\lambda X)$.
We consider upper bounds $\psi : \mathbb{R} \to \mathbb{R}$ satisfying $\psi(\lambda) \ge \Gamma_X(\lambda)$. The Legendre transform (convex conjugate) of $\psi$ is
$$\psi^*(\epsilon) = \sup_{\lambda \in \mathbb{R}} \left(\lambda\epsilon - \psi(\lambda)\right).$$
Theorem: For $\epsilon \ge 0$, $P(X - \mathbb{E}X \ge \epsilon) \le \exp\left(-\psi^*_{X-\mathbb{E}X}(\epsilon)\right)$.
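As a concrete instance (an added example, not on the original slide): for a quadratic c.g.f. bound $\psi(\lambda) = \sigma^2\lambda^2/2$, the supremum is attained at $\lambda = \epsilon/\sigma^2$, giving
$$\psi^*(\epsilon) = \sup_{\lambda\in\mathbb{R}}\left(\lambda\epsilon - \frac{\sigma^2\lambda^2}{2}\right) = \frac{\epsilon^2}{2\sigma^2}, \qquad\text{so}\qquad P(X - \mathbb{E}X \ge \epsilon) \le \exp\left(-\frac{\epsilon^2}{2\sigma^2}\right).$$
With $\sigma^2 = 1/4$ this is the exponent $2\epsilon^2$ of Hoeffding's inequality for $[0,1]$-valued variables, which reappears on slides 20-23.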

3. Recall: Concentration Inequalities.
Theorem: If $X_1, X_2, \ldots, X_n$ are mean zero, i.i.d. with c.g.f. upper bound $\psi$, then $\bar X_n = \frac{1}{n}\sum_{i=1}^n X_i$ satisfies
$$P\left(\bar X_n \ge \epsilon\right) \le \exp\left(-n\psi^*(\epsilon)\right),$$
and the exponent can't be improved.
Theorem [Cramér-Chernoff]: If $X_1, X_2, \ldots, X_n$ are i.i.d. and mean zero, and have c.g.f. $\Gamma$, then for $\epsilon > 0$ and $\bar X_n = \frac{1}{n}\sum_{i=1}^n X_i$,
$$\lim_{n\to\infty} \frac{1}{n}\log P\left(\bar X_n \ge \epsilon\right) = -\Gamma^*(\epsilon).$$
($\Gamma^*$ is sometimes called the Cramér function. The lower bound is a change-of-measure argument plus the central limit theorem.)

4. Outline.
For an exponential family, we can compute the c.g.f. exactly. Its convex conjugate corresponds to a KL-divergence.
For reward distributions from the exponential family, concentration inequalities involving the KL-divergence define an upper confidence bound strategy: KL-UCB.
If the reward distributions are bounded, the c.g.f. of a particular exponential family (a scaled, shifted Bernoulli) gives a bound on the c.g.f. And we can bound this, in turn, with a quadratic (like Hoeffding's inequality), which corresponds to another exponential family (a Gaussian). KL-UCB for Bernoulli improves on KL-UCB for Gaussian. (KL-UCB for Gaussian corresponds to the original UCB strategy.)
There's also a non-parametric version of KL-UCB (called empirical KL-UCB) for bounded rewards. It works with the set of distributions with finite support.

5. Exponential families.
Definition: The canonical exponential family defined with respect to a measure $P$:
$$\frac{dP_\theta}{dP}(x) = \exp\left(\theta x - A(\theta)\right), \qquad A(\theta) = \log\left(\int \exp(\theta x)\,dP(x)\right), \qquad \theta \in \Theta = \{\theta : A(\theta) < \infty\}.$$

6. Exponential families.
$\mu(\theta) := \mathbb{E}_\theta X = A'(\theta)$. $\theta(\mu)$ is defined on $\mu(\Theta)$ (one-to-one because $\mathrm{Var}_\theta X = A''(\theta) > 0$).
$$\Gamma_\theta(\lambda) = A(\theta+\lambda) - A(\theta),$$
$$\Gamma_{\theta_1}^*(\mu(\theta_2)) = A(\theta_1) - A(\theta_2) + \mu(\theta_2)(\theta_2 - \theta_1),$$
$$D_{KL}(P_{\theta_1}, P_{\theta_2}) = \Gamma_{\theta_2}^*(\mu(\theta_1)).$$

7. Exponential families.
$$A'(\theta) = \frac{\int x\exp(\theta x)\,dP(x)}{\exp(A(\theta))} = \int x\,dP_\theta(x) = \mathbb{E}_\theta X.$$
$$\Gamma_\theta(\lambda) = \log\left(\int \exp(\lambda x + \theta x - A(\theta))\,dP(x)\right) = \log\left(\int \exp((\lambda+\theta)x)\,dP(x)\right) - A(\theta) = A(\theta+\lambda) - A(\theta).$$

8. Exponential families.
$$\Gamma_{\theta_1}^*(\mu(\theta_2)) = \sup_\lambda\left(\lambda\mu(\theta_2) - \left(A(\theta_1+\lambda) - A(\theta_1)\right)\right).$$
The maximum has $\mu(\theta_2) = \mu(\theta_1+\lambda)$, that is, $\lambda = \theta_2 - \theta_1$, so
$$\Gamma_{\theta_1}^*(\mu(\theta_2)) = (\theta_2 - \theta_1)\mu(\theta_2) + A(\theta_1) - A(\theta_2).$$

9. Exponential families.
$$D_{KL}(P_{\theta_1}, P_{\theta_2}) = \int \log\frac{dP_{\theta_1}}{dP_{\theta_2}}\,dP_{\theta_1} = \int (\theta_1-\theta_2)x\,\exp\left(\theta_1 x - A(\theta_1)\right)dP(x) + A(\theta_2) - A(\theta_1) = \mu(\theta_1)(\theta_1-\theta_2) + A(\theta_2) - A(\theta_1) = \Gamma_{\theta_2}^*(\mu(\theta_1)).$$

10. Exponential families.
Example: Bernoulli:
$$P_\theta(x) = \exp(\theta x - A(\theta)), \quad A(\theta) = \log\left(1+e^\theta\right), \quad \mu(\theta) = P_\theta(1) = \frac{e^\theta}{1+e^\theta}, \quad \theta = \log\frac{\mu}{1-\mu},$$
$$\Gamma_\theta(\lambda) = \log\left(1-\mu(\theta)+\mu(\theta)e^\lambda\right), \qquad \Theta = \mathbb{R}.$$

11. Exponential families.
Example: Bernoulli:
$$\Gamma_{\theta_1}^*(\mu_2) = \sup_\lambda\left(\lambda\mu_2 - \log\left(1-\mu_1+\mu_1 e^\lambda\right)\right).$$
The maximum has $\mu_2 = \frac{\mu_1 e^\lambda}{1-\mu_1+\mu_1 e^\lambda}$, that is, $\lambda = \log\frac{\mu_2(1-\mu_1)}{\mu_1(1-\mu_2)}$, so
$$\Gamma_{\theta_1}^*(\mu_2) = \mu_2\log\frac{\mu_2}{\mu_1} + (1-\mu_2)\log\frac{1-\mu_2}{1-\mu_1} = D_{KL}(P_{\theta_2}, P_{\theta_1}).$$
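A quick numerical sanity check of this duality (a sketch of mine, not from the slides; it assumes numpy and scipy are available): the convex conjugate of the Bernoulli($\mu_1$) c.g.f., evaluated at $\mu_2$, should agree with $D_{KL}(P_{\theta_2}, P_{\theta_1})$.

import numpy as np
from scipy.optimize import minimize_scalar

def kl_bernoulli(p, q):
    # D_KL(Bernoulli(p), Bernoulli(q))
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def cgf_conjugate(mu1, mu2):
    # sup_lambda ( lambda*mu2 - Gamma_{theta(mu1)}(lambda) ), computed numerically
    cgf = lambda lam: np.log(1 - mu1 + mu1 * np.exp(lam))
    res = minimize_scalar(lambda lam: -(lam * mu2 - cgf(lam)),
                          bounds=(-30, 30), method="bounded")
    return -res.fun

print(kl_bernoulli(0.3, 0.6))    # approx 0.1838
print(cgf_conjugate(0.6, 0.3))   # should agree with the line above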

12. KL-UCB for exponential families: Use $\psi = \Gamma$.
Define the sample averages
$$\hat\mu_j(t) = \frac{1}{T_j(t)}\sum_{s=1}^t X_{I_s,s}\,\mathbb{1}[I_s=j], \qquad \hat\mu_{j,t} = \frac{1}{t}\sum_{s=1}^t X_{j,s}.$$
If $X_{j,s}$ has mean $\mu$ and c.g.f. $\Gamma_\mu$, and $a < \mu$, then $\Pr(\hat\mu_{j,n} \le a) \le e^{-n\Gamma_\mu^*(a)}$, that is,
$$\Pr\left(\hat\mu_{j,n} < \mu \text{ and } \Gamma_\mu^*(\hat\mu_{j,n}) \ge \frac{f(n)}{n}\right) \le e^{-f(n)},$$
or
$$\Pr\left(\hat\mu_{j,n} < \mu \text{ and } D_{KL}\left(P_{\hat\mu_{j,n}}, P_\mu\right) \ge \frac{f(n)}{n}\right) \le e^{-f(n)}.$$
(Note that $P_\mu$ denotes $P_{\theta(\mu)}$.)

13. KL-UCB for exponential families.
KL-UCB Strategy for an exponential family ($P_\mu$ denotes $P_{\theta(\mu)}$):
$I_t = t$ for $t = 1,\ldots,k$, and for $t > k$,
$$I_t = \arg\max_{1\le j\le k}\ \sup\left\{\mu(\theta) : \theta\in\Theta \text{ and } D_{KL}\left(P_{\hat\mu_j(t-1)}, P_{\mu(\theta)}\right) \le \frac{f(t)}{T_j(t-1)}\right\},$$
where $f(t) = \log t + 3\log\log(t)$.
Equivalent to UCB with $\psi = \Gamma_\mu$.
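In practice the supremum defining the index is a one-dimensional problem, since $d(\hat\mu,\mu)$ is increasing in $\mu$ for $\mu \ge \hat\mu$; here is a minimal bisection sketch (my own illustration, not from the lecture; the function name and the quadratic test divergence are assumptions):

def ucb_from_divergence(d, mu_hat, level, upper=1.0, iters=50):
    # Largest mu in [mu_hat, upper] with d(mu_hat, mu) <= level (d increasing in mu).
    lo, hi = mu_hat, upper
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if d(mu_hat, mid) <= level:
            lo = mid
        else:
            hi = mid
    return lo

# Sanity check with the quadratic divergence d(a, b) = 2(a - b)^2,
# whose confidence bound has the closed form mu_hat + sqrt(level / 2).
quad = lambda a, b: 2.0 * (a - b) ** 2
print(ucb_from_divergence(quad, 0.4, 0.05))   # approx 0.4 + (0.05 / 2) ** 0.5 = 0.5581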

14. KL-UCB for exponential families.
We can think of $D_{KL}\left(P_{\hat\mu_{j,t-1}}, P_\mu\right)$ as a divergence defined in terms of means: for any $\hat\mu, \mu \in \mu(\Theta)$,
$$d(\hat\mu,\mu) = D_{KL}\left(P_{\hat\mu}, P_\mu\right) = \left(\theta(\hat\mu) - \theta(\mu)\right)\hat\mu - A(\theta(\hat\mu)) + A(\theta(\mu)).$$
Then $d(\hat\mu,\mu) = 0$ iff $\hat\mu = \mu$, and $d$ is strictly convex and differentiable. We can extend it to the closure of $\mu(\Theta)$, by taking limits, allowing infinite values, and setting $d(\mu,\mu) = 0$ at boundaries. (Consider, for example, $\hat\mu = 0$ for a Bernoulli.)

15. KL-UCB for exponential families.
Theorem: KL-UCB for an exponential family satisfies:
$$\mathbb{E}T_j(n) \le \frac{\log n}{D_{KL}\left(P_{\mu_j}, P_{\mu^*}\right)} + O\left(\sqrt{\log n}\right).$$
And the leading term is optimal (including the constant).

16. KL-UCB for bounded rewards.
Theorem: For $X\in[0,1]$ with $\mathbb{E}X = \mu$, define $Y \sim \mathrm{Bernoulli}(\mu)$. Then $\Gamma_X(\lambda) \le \Gamma_Y(\lambda)$.
Notice that this gives a c.g.f. bound $\psi = \Gamma_Y$ for $X$, whose conjugate satisfies:
$$\psi^*(\mu') = \mu'\log\frac{\mu'}{\mu} + (1-\mu')\log\frac{1-\mu'}{1-\mu}.$$

17. KL-UCB for bounded rewards.
Proof: For $x\in[0,1]$, $\exp(\lambda x)$ lies below the line from $(0, e^0)$ to $(1, e^\lambda)$:
$$\exp(\lambda x) \le x\left(e^\lambda - e^0\right) + e^0,$$
so
$$\mathbb{E}\exp(\lambda X) \le \mu\left(e^\lambda - 1\right) + 1 = \mathbb{E}\exp(\lambda Y).$$
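A small simulation illustrating the dominance (my own check, not from the slides; the Beta(2,5) reward distribution is an arbitrary $[0,1]$-valued example, and numpy is assumed):

import numpy as np

rng = np.random.default_rng(0)
x = rng.beta(2.0, 5.0, size=100_000)           # bounded rewards in [0, 1]
for lam in (-2.0, 0.5, 3.0):
    lhs = np.mean(np.exp(lam * x))             # estimate of E exp(lambda X)
    rhs = x.mean() * (np.exp(lam) - 1.0) + 1.0 # Bernoulli m.g.f. at the same (sample) mean
    print(lam, lhs <= rhs)                     # True: the Bernoulli m.g.f. dominates

Because the inequality $\exp(\lambda x) \le x(e^\lambda - 1) + 1$ holds pointwise on $[0,1]$, it holds exactly for the sample averages as well, so every comparison prints True.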

18. KL-UCB-Bernoulli for bounded rewards.
KL-UCB-Bernoulli Strategy. For the Bernoulli family $P_\mu$:
$I_t = t$ for $t = 1,\ldots,k$, and for $t > k$,
$$I_t = \arg\max_{1\le j\le k}\ \sup\left\{\mu\in(0,1) : d(\hat\mu_j(t-1),\mu) \le \frac{f(t)}{T_j(t-1)}\right\},$$
where $d(\mu_1,\mu_2) = \mu_1\log\frac{\mu_1}{\mu_2} + (1-\mu_1)\log\frac{1-\mu_1}{1-\mu_2}$ and $f(t) = \log t + 3\log\log(t)$.
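Putting the pieces together, here is a minimal self-contained sketch of KL-UCB-Bernoulli arm selection for rewards in $[0,1]$ (an illustration under the slide's definitions; names such as select_arm are mine, not from the paper):

import math

def kl(p, q, eps=1e-12):
    # Bernoulli KL divergence d(p, q), clipped away from 0 and 1.
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb(mu_hat, pulls, t, iters=40):
    # Largest mu in [mu_hat, 1) with d(mu_hat, mu) <= f(t) / pulls, by bisection.
    level = (math.log(t) + 3 * math.log(math.log(t))) / pulls   # f(t) / T_j, t > k assumed
    lo, hi = mu_hat, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if kl(mu_hat, mid) <= level:
            lo = mid
        else:
            hi = mid
    return lo

def select_arm(sums, pulls, t):
    # Play each arm once, then maximize the KL-UCB index.
    for j, n in enumerate(pulls):
        if n == 0:
            return j
    return max(range(len(pulls)), key=lambda j: kl_ucb(sums[j] / pulls[j], pulls[j], t))

Here sums[j] is the total reward collected from arm j so far and pulls[j] its pull count; the bisection replaces the supremum over the KL ball, exactly as in the strategy above.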

19. KL-UCB-Bernoulli for bounded rewards.
Theorem: KL-UCB-Bernoulli satisfies:
$$\mathbb{E}T_j(n) \le \frac{\log n}{d(\mu_j,\mu^*)} + O\left(\sqrt{\log n}\right),$$
where $d(\mu_1,\mu_2) = \mu_1\log\frac{\mu_1}{\mu_2} + (1-\mu_1)\log\frac{1-\mu_1}{1-\mu_2}$.
The leading term is optimal for Bernoulli rewards, but might not be optimal, for example, if the variance is lower than $\mu(1-\mu)$.

20. KL-UCB: More concentration inequalities.
Now, Pinsker's inequality gives
$$\psi^*(\mu') = D_{KL}(\mu',\mu) = \mu'\log\frac{\mu'}{\mu} + (1-\mu')\log\frac{1-\mu'}{1-\mu} \ge 2(\mu'-\mu)^2,$$
which shows this is at least as good as Hoeffding's inequality:
$$P\left(\bar X_n \ge \mu'\right) \le \exp\left(-2n(\mu'-\mu)^2\right), \qquad\text{i.e.,}\qquad P\left(\bar X_n \ge \mu+\epsilon\right) \le \exp\left(-2n\epsilon^2\right).$$

21. Exponential families.
Example: Gaussian:
$$p_\theta(x) = \frac{\exp\left(-x^2/(2\sigma^2)\right)}{\sqrt{2\pi\sigma^2}}\exp\left(\frac{\mu}{\sigma^2}x - \frac{\mu^2}{2\sigma^2}\right),$$
$$\theta = \frac{\mu}{\sigma^2}, \qquad \mu(\theta) = \sigma^2\theta, \qquad A(\theta) = \frac{\sigma^2\theta^2}{2},$$
$$\Gamma_\theta(\lambda) = \frac{\sigma^2}{2}(\lambda+\theta)^2 - \frac{\sigma^2\theta^2}{2}, \qquad \Gamma_{\theta_1}^*(\mu_2) = \frac{1}{2\sigma^2}(\mu_2-\mu_1)^2.$$

22. Exponential families.
With $\sigma^2 = 1/4$, Pinsker's inequality corresponds to Hoeffding's inequality. So we can view the UCB strategy (based on Hoeffding's inequality) as a special case of KL-UCB, modeling the reward distributions on $[0,1]$ as $N(\mu, 1/4)$.

23. KL-UCB-Gaussian for bounded rewards.
KL-UCB-Gaussian Strategy. For the Gaussian family $P_\mu$:
$I_t = t$ for $t = 1,\ldots,k$, and for $t > k$,
$$I_t = \arg\max_{1\le j\le k}\ \sup\left\{\mu\in(0,1) : d(\hat\mu_j(t-1),\mu) \le \frac{f(t)}{T_j(t-1)}\right\},$$
where $f(t) = \log t + 3\log\log(t)$ and $d(\mu_1,\mu_2) = 2(\mu_1-\mu_2)^2$.
This is equivalent to the UCB strategy (based on Hoeffding) that we saw last time.
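To make the equivalence explicit (a step not spelled out on the slide): with the quadratic divergence the supremum has a closed form,
$$\sup\left\{\mu : 2\left(\hat\mu_j(t-1)-\mu\right)^2 \le \frac{f(t)}{T_j(t-1)}\right\} = \hat\mu_j(t-1) + \sqrt{\frac{f(t)}{2\,T_j(t-1)}},$$
which is the familiar UCB index with exploration bonus $\sqrt{f(t)/(2T_j(t-1))}$.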

24. UCB for bounded rewards.
Theorem: UCB satisfies:
$$\mathbb{E}T_j(n) \le \frac{\log n}{d(\mu_j,\mu^*)} + O\left(\sqrt{\log n}\right),$$
where $d(\mu_1,\mu_2) = 2(\mu_1-\mu_2)^2$.
This result is weaker (because of Pinsker's inequality) than the result for KL-UCB-Bernoulli.

25. KL-UCB regret bounds: upper versus lower.
Denote the canonical exponential family defined with respect to a measure $m$ by $\mathcal{E}_m$:
$$\mathcal{E}_m = \left\{P : \frac{dP}{dm}(x) = \exp\left(\theta x - A(\theta)\right), \text{ and } A(\theta) < \infty\right\}, \qquad\text{where } A(\theta) = \log\left(\int \exp(\theta x)\,dm(x)\right).$$
Write $P_{m,\theta}$ for the element of $\mathcal{E}_m$ with parameter $\theta$, and $P_{m,\mu}$ for the element of $\mathcal{E}_m$ with mean $\mu$ (there's a one-to-one map between $\theta$ and $\mu$, so it's well-defined).
And define for $\mathcal{E}_m$ the relevant divergence as a function of expectations:
$$d_m(\mu,\mu') := D_{KL}\left(P_{m,\mu}, P_{m,\mu'}\right).$$

26. KL-UCB regret bounds: upper versus lower.
We have derived bounds on $\Gamma_{P_j}$ in terms of $\Gamma_{P_{m,\mu_j}}$, for some exponential families $\mathcal{E}_m$. For instance, if we let $\mathcal{P}$ denote the set of distributions on $[0,1]$, and consider two exponential families, the Bernoulli (call it $\mathcal{E}_B$) and the Gaussian with variance $1/4$ (call it $\mathcal{E}_G$), then we have: for all $P\in\mathcal{P}$ with $PX = \mu$, and all $\lambda$,
$$\Gamma_P(\lambda) \le \Gamma_{P_{B,\mu}}(\lambda) \le \Gamma_{P_{G,\mu}}(\lambda).$$
And this is equivalent to: for all $\mu'$,
$$\Gamma_P^*(\mu') \ge \Gamma_{P_{B,\mu}}^*(\mu') \ge \Gamma_{P_{G,\mu}}^*(\mu'), \qquad\text{that is,}\qquad \Gamma_P^*(\mu') \ge d_B(\mu',\mu) \ge d_G(\mu',\mu).$$

27. KL-UCB regret bounds: upper versus lower.
We have seen upper bounds on regret based on these inequalities, of the form
$$R_n \le \sum_{j : \Delta_j > 0} \Delta_j\left(\frac{\log n}{d_m(\mu_j,\mu^*)} + O\left(\sqrt{\log n}\right)\right).$$
And we've seen lower bounds that are (roughly) of the form
$$R_n \ge \sum_{j : \Delta_j > 0} \Delta_j\,\log n\left(\frac{1}{D_{KL}(P_j, P_{j^*})} + o(1)\right).$$
To understand the gap between the upper bounds and the lower bounds, we can consider the I-projection of $P_{j^*} \in \mathcal{E}_{P_{j^*}}$ onto $\{P : PX = \mu_j\}$.

28. KL-UCB regret bounds: upper versus lower.
Theorem: Fix a measure $m$ and an exponential family $\mathcal{E}_m$. For all $Q\in\mathcal{E}_m$ and $P$ with $PX = \mu$,
$$D_{KL}(P,Q) = D_{KL}\left(P,P_{m,\mu}\right) + D_{KL}\left(P_{m,\mu},Q\right).$$
In particular, $\inf\{D_{KL}(P,Q) : PX = \mu\} = D_{KL}\left(P_{m,\mu},Q\right)$.
We say that $P_{m,\mu}$ is the I-projection of $Q\in\mathcal{E}_m$ onto $\{P : PX = \mu\}$.
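As a concrete check (my own example, not on the slides), take $\mathcal{E}_m$ to be the Gaussian family with variance $\sigma^2$, and let $P$ be any distribution with mean $\mu$ and finite $D_{KL}(P, P_{m,\mu})$. Then for $Q = N(\mu',\sigma^2)$,
$$D_{KL}\left(P, N(\mu',\sigma^2)\right) - D_{KL}\left(P, N(\mu,\sigma^2)\right) = \mathbb{E}_P\left[\frac{(X-\mu')^2 - (X-\mu)^2}{2\sigma^2}\right] = \frac{(\mu-\mu')^2}{2\sigma^2} = D_{KL}\left(N(\mu,\sigma^2), N(\mu',\sigma^2)\right),$$
which is exactly the identity in the theorem for this family.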

29. KL-UCB regret bounds: upper versus lower.
The negative KL-divergence
$$-D_{KL}(P,Q) = -\int \log\frac{dP}{dQ}\,dP = \int \log\frac{dQ}{dP}\,dP$$
is also called the entropy of $P$ (defined with respect to $Q$), $H_Q(P)$.
So the result says that among all distributions satisfying the mean constraint $PX = \mu$, the one with maximum entropy (with respect to any $Q$ in $\mathcal{E}_m$) is $P_{m,\mu}$, in the exponential family $\mathcal{E}_m$.

30. KL-UCB regret bounds: upper versus lower.
Using this fact, we can see that
$$D_{KL}\left(P_j, P_{j^*}\right) \ge \inf\left\{D_{KL}\left(P,P_{j^*}\right) : PX = \mu_j\right\} = D_{KL}\left(P_{P_{j^*},\mu_j},\,P_{j^*}\right) = \Gamma_{P_{j^*}}^*(\mu_j) \quad\text{(both distributions are in } \mathcal{E}_{P_{j^*}}\text{)}$$
$$\ge \Gamma_{P_{m,\mu^*}}^*(\mu_j) = d_m(\mu_j,\mu^*),$$
where $\mathcal{E}_m$ is one of the exponential families that give the upper bounds (Bernoulli or Gaussian).

31. KL-UCB regret bounds: upper versus lower.
So the upper bound might be loose because $P_j$ is further from $P_{j^*}$ than the I-projection of $P_{j^*}$ onto $\{P : PX = \mu_j\}$ is (i.e., because $P_j$ is not in $\mathcal{E}_{P_{j^*}}$), or because $\Gamma_{P_{m,\mu^*}}$ is a loose upper bound on $\Gamma_{P_{j^*}}$ (i.e., because $P_{j^*}$ is not in $\mathcal{E}_m$).

32. KL-UCB: Regret bounds.
The KL-UCB strategies choose $I_1 = 1,\ldots,I_k = k$, and then
$$I_{t+1} = \arg\max_{1\le j\le k} U_j(t), \qquad\text{where}\qquad U_j(t) = \sup\left\{\mu\in\mu(\Theta) \text{ s.t. } d(\hat\mu_j(t),\mu) \le \frac{f(t)}{T_j(t)}\right\}.$$
For a suboptimal arm $j$, we want to bound $\mathbb{E}T_j(n) = 1 + \sum_{t=k}^{n-1} P\{I_{t+1} = j\}$.
We might have $I_{t+1} = j$ if either $U_{j^*}(t)$ is not an upper bound on $\mu^*$ (for a suitable choice of $f(t)$, this has negligible probability), or it is an upper bound, but $U_j(t)$ is bigger (and so exceeds $\mu^*$; this can't happen too often).

33. KL-UCB: Regret bounds.
Writing $U^*(t)$ for $U_{j^*}(t)$,
$$\{I_{t+1} = j\} \subseteq \{\mu^* \ge U^*(t)\} \cup \{I_{t+1} = j \text{ and } \mu^* < U^*(t) \le U_j(t)\} \subseteq \{\mu^* \ge U^*(t)\} \cup \{I_{t+1} = j \text{ and } \mu^* < U_j(t)\}.$$
Also,
$$\{\mu^* < U_j(t)\} = \left\{\mu^* < \sup\left\{\mu\in\mu(\Theta) \text{ s.t. } d(\hat\mu_j(t),\mu) \le \frac{f(t)}{T_j(t)}\right\}\right\} \subseteq \left\{\hat\mu_j(t) \ge \mu^*_{f(t)/T_j(t)}\right\} \subseteq \left\{\hat\mu_j(t) \ge \mu^*_{f(n)/T_j(t)}\right\},$$
where $\mu^*_{f(n)/T_j(t)} := \min\left\{\mu : d(\mu,\mu^*) \le \frac{f(n)}{T_j(t)}\right\}$.

34. KL-UCB: Regret bounds.
And
$$\mathbb{E}T_j(n) = 1 + \sum_{t=k}^{n-1} P\{I_{t+1} = j\}.$$
The sum counting the times the upper bound is violated satisfies
$$\sum_{t=k}^{n-1} P\{\mu^* \ge U^*(t)\} \le 3 + 4e\log\log n.$$

35. KL-UCB: Regret bounds.
$$\sum_{t=k}^{n-1} P\left\{I_{t+1} = j \text{ and } \hat\mu_j(t) \ge \mu^*_{f(n)/T_j(t)}\right\} = \sum_{m=2}^{n-k+1} P\left\{\hat\mu_{j,m-1} \ge \mu^*_{f(n)/(m-1)} \text{ and arm } j \text{ is pulled an } m\text{th time}\right\}$$
$$\le \sum_{m=1}^{n-k} P\left\{\hat\mu_{j,m} \ge \mu^*_{f(n)/m}\right\} \le M + \sum_{m=M+1}^{n-k} P\left\{\hat\mu_{j,m} \ge \mu^*_{f(n)/m}\right\},$$
for $M = f(n)/d(\mu_j,\mu^*)$.

36. KL-UCB: Regret bounds.
$$\sum_{m=M+1}^{n-k} P\left\{\hat\mu_{j,m} \ge \mu^*_{f(n)/m}\right\} \le \sum_{m=M+1}^{n-k} \exp\left(-m\,d\left(\mu^*_{f(n)/m},\mu_j\right)\right) = O\left(\sqrt{f(n)}\right).$$
(Relate $d\left(\mu^*_{f(n)/m},\mu_j\right)$ to $d(\mu_j,\mu^*)$, bound the sum by an integral, and use Laplace's method.)

37. Empirical KL-UCB for rewards in $[0,1]$.
Empirical KL-UCB Strategy:
$I_t = t$ for $t = 1,\ldots,k$, and for $t > k$,
$$I_t = \arg\max_{1\le j\le k}\ \sup\left\{\mathbb{E}_P X : |\mathrm{supp}(P)| < \infty,\ D_{KL}\left(\hat P_j(t-1), P\right) \le \frac{f(t)}{T_j(t-1)}\right\},$$
where $\hat P_j(t-1)$ is the empirical distribution of the $T_j(t-1)$ pulls of arm $j$ up to time $t-1$, and $f(t) = \log t + 3\log\log(t)$.

38. Empirical KL-UCB for rewards in $[0,1]$.
It turns out that it's always a finite convex optimization:
$$\sup\left\{\mathbb{E}_P X : |\mathrm{supp}(P)| < \infty,\ D_{KL}\left(\hat P_j(t-1), P\right) \le \gamma\right\} = \sup\left\{\mathbb{E}_P X : \mathrm{supp}(P) \subseteq \mathrm{supp}\left(\hat P_j(t-1)\right) \cup \{1\},\ D_{KL}\left(\hat P_j(t-1), P\right) \le \gamma\right\}.$$

39. Empirical KL-UCB for rewards in $[0,1]$.
Empirical KL-UCB Strategy:
$I_t = t$ for $t = 1,\ldots,k$, and for $t > k$,
$$I_t = \arg\max_{1\le j\le k}\ \sup\left\{\mathbb{E}_P X : \mathrm{supp}(P) \subseteq \mathrm{supp}\left(\hat P_j(t-1)\right) \cup \{1\},\ D_{KL}\left(\hat P_j(t-1), P\right) \le \frac{f(t)}{T_j(t-1)}\right\},$$
where $\hat P_j(t-1)$ is the empirical distribution of the $T_j(t-1)$ pulls of arm $j$ up to time $t-1$, and $f(t) = \log t + 3\log\log(t)$.
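A naive numerical sketch of the inner optimization (my own illustration, not the method of the paper, which solves a one-dimensional dual; the function name and the use of scipy's SLSQP solver are assumptions):

import numpy as np
from scipy.optimize import minimize

def empirical_kl_ucb_index(samples, gamma):
    # sup{ E_P[X] : supp(P) within supp(P_hat) union {1}, D_KL(P_hat, P) <= gamma }
    x = np.asarray(samples, dtype=float)
    support = np.unique(np.append(x, 1.0))               # observed values plus the point 1
    q = np.array([(x == v).mean() for v in support])     # empirical distribution P_hat
    mask = q > 0

    def neg_mean(p):                                     # maximize E_P[X]
        return -np.dot(p, support)

    def kl_slack(p):                                     # gamma - D_KL(P_hat, P) >= 0
        return gamma - np.sum(q[mask] * np.log(q[mask] / np.maximum(p[mask], 1e-12)))

    res = minimize(neg_mean, q,
                   bounds=[(0.0, 1.0)] * len(support),
                   constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0},
                                {"type": "ineq", "fun": kl_slack}],
                   method="SLSQP")
    return -res.fun

Calling empirical_kl_ucb_index(rewards_of_arm_j, f(t) / T_j(t-1)) then plays the role of the supremum in the strategy above.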

40. Empirical KL-UCB for rewards in $[0,1]$.
Theorem: Empirical KL-UCB for rewards in $[0,1]$ satisfies:
$$\mathbb{E}T_j(n) \le \frac{\log n}{\inf\{D_{KL}(P_j,P) : PX > \mu^*\}} + O\left(\log^{4/5} n\,\log\log n\right),$$
provided $\mu_j > 0$ and $\mu^* < 1$.
The leading term is optimal (including the constant). But the remainder term is worse than in the parametric case.
