An adaptive algorithm for finite stochastic partial monitoring

Gábor Bartók, Navid Zolghadr, Csaba Szepesvári
Department of Computing Science, University of Alberta, AB, Canada, T6G 2E8

Abstract

We present a new anytime algorithm that achieves near-optimal regret for any instance of finite stochastic partial monitoring. In particular, the new algorithm achieves the minimax regret, within logarithmic factors, for both easy and hard problems. For easy problems, it additionally achieves logarithmic individual regret. Most importantly, the algorithm is adaptive in the sense that if the opponent strategy is in an easy region of the strategy space then the regret grows as if the problem was easy. As an implication, we show that under some reasonable additional assumptions, the algorithm enjoys an $O(\sqrt{T})$ regret in Dynamic Pricing, proven to be hard by Bartók et al. (2011).

1. Introduction

Partial monitoring can be cast as a sequential game played by a learner and an opponent. In every time step, the learner chooses an action and simultaneously the opponent chooses an outcome. Then, based on the action and the outcome, the learner suffers some loss and receives some feedback. Neither the outcome nor the loss is revealed to the learner. Thus, a partial-monitoring game with $N$ actions and $M$ outcomes is defined by the pair $G = (L, H)$, where $L \in \mathbb{R}^{N \times M}$ is the loss matrix and $H \in \Sigma^{N \times M}$ is the feedback matrix over some arbitrary set of symbols $\Sigma$. These matrices are announced to both the learner and the opponent before the game starts. At time step $t$, if $I_t \in \{1, 2, \ldots, N\}$ and $J_t \in \{1, 2, \ldots, M\}$ denote the (possibly random) choices of the learner and the opponent, respectively, then the loss suffered by the learner in that time step is $L[I_t, J_t]$, while the feedback received is $H[I_t, J_t]$.

(Appearing in Proceedings of the 29th International Conference on Machine Learning, Edinburgh, Scotland, UK, 2012. Copyright 2012 by the author(s)/owner(s).)

The goal of the learner (or player) is to minimize his cumulative loss $\sum_{t=1}^{T} L[I_t, J_t]$. The performance of the learner is measured in terms of the regret, defined as the excess cumulative loss he suffers compared to that of the best fixed action in hindsight:
$$R_T = \sum_{t=1}^{T} L[I_t, J_t] - \min_{i \in \{1,\ldots,N\}} \sum_{t=1}^{T} L[i, J_t].$$
The regret usually grows with the time horizon $T$. What distinguishes a successful learner from an unsuccessful one is the growth rate of the regret. A regret linear in $T$ means that the learner does not approach the performance of the optimal action. On the other hand, if the growth rate is sublinear, it is said that the learner can learn the game.

In this paper we restrict our attention to stochastic games, adding the extra assumption that the opponent generates the outcomes with a sequence of independent and identically distributed random variables. This distribution will be called the opponent strategy. As for the player, a player strategy (or algorithm) is a (possibly random) function from the set of feedback sequences (observation histories) to the set of actions. In stochastic games, we use a slightly different notion of regret: we compare the cumulative loss with that of the action with the lowest expected loss,
$$R_T = \sum_{t=1}^{T} L[I_t, J_t] - T \min_{i \in \{1,\ldots,N\}} \mathbb{E}\big[L[i, J_1]\big].$$
The hardness of a game is defined in terms of the minimax expected regret (or minimax regret for short),
$$R_T(G) = \min_{\mathcal{A}} \max_{p^* \in \Delta_M} \mathbb{E}[R_T],$$
where $\Delta_M$ is the space of opponent strategies and $\mathcal{A}$ ranges over the strategies of the player.

In other words, the minimax regret is the worst-case expected regret of the best algorithm. A question of major importance is how the minimax regret scales with the parameters of the game, such as the time horizon $T$, the number of actions $N$, and the number of outcomes $M$. In the stochastic setting, another measure of hardness is also worth studying, namely the individual or problem-dependent regret, defined as the expected regret given a fixed opponent strategy.

1.1. Related work

Two special cases of partial monitoring have been extensively studied for a long time: full-information games, where the feedback carries enough information for the learner to infer the outcome for any action-outcome pair, and bandit games, where the learner receives the loss of the chosen action as feedback. Since Vovk (1990) and Littlestone & Warmuth (1994) we know that for full-information games the minimax regret scales as $\Theta(\sqrt{T \log N})$. For bandit games, the minimax regret has been proven to scale as $\Theta(\sqrt{NT})$ (Audibert & Bubeck, 2009). (The Exp3 algorithm due to Auer et al. (2002) achieves almost the same regret, with an extra logarithmic term.) The individual regret of these kinds of games has also been studied: Auer et al. (2002) showed that given any opponent strategy, the expected regret can be upper bounded by $c \sum_{i:\,\delta_i > 0} \frac{\log T}{\delta_i}$, where $\delta_i$ is the expected difference between the loss of action $i$ and that of an optimal action.

Finite partial-monitoring problems were introduced by Piccolboni & Schindelhauer (2001). They proved that a game is either hopeless (that is, its minimax regret scales linearly with $T$), or the regret can be upper bounded by $O(T^{3/4})$. They also gave a characterization of hopeless games: a game is hopeless if it does not satisfy the global observability condition (see Definition 5 in Section 2). Their upper bound for non-hopeless games was tightened to $O(T^{2/3})$ by Cesa-Bianchi et al. (2006), who also showed that there exists a game with a matching lower bound. Cesa-Bianchi et al. (2006) posed the problem of characterizing partial-monitoring games with minimax regret less than $\Theta(T^{2/3})$. This problem has since been solved. The first steps towards classifying partial-monitoring games were made by Bartók et al. (2010), who characterized almost all games with two outcomes. They proved that there are only four categories: games with minimax regret $0$, $\Theta(\sqrt{T})$, $\Theta(T^{2/3})$, and $\Theta(T)$, named trivial, easy, hard, and hopeless, respectively. They also found that there exist games that are easy but cannot easily be transformed to a bandit or full-information game. Later, Bartók et al. (2011) proved the same classification for finite stochastic partial monitoring with any finite number of outcomes. The condition that separates easy games from hard games is the local observability condition (see Definition 6). The algorithm Balaton introduced there works by eliminating actions that are believed, with high confidence, to be suboptimal. They conjectured in their paper that the same classification holds for non-stochastic games, without changing the condition. Recently, Foster & Rakhlin (2011) designed the algorithm NeighborhoodWatch, which proves this conjecture to be true. Foster & Rakhlin prove an upper bound on a stronger notion of regret, called internal regret.

1.2. Contributions

In this paper, we extend the results of Bartók et al. (2011). We introduce a new algorithm, called CBP for Confidence Bound Partial monitoring, with various desirable properties.
First of all, while Balaton only works on easy games, CBP can be run on any non-hopeless game, and it achieves (up to logarithmic factors) the minimax regret rates for both easy and hard games (see Corollaries 2 and 3). (Note that these results do not concern the growth rate in terms of other parameters, like $N$.) Furthermore, it also achieves logarithmic problem-dependent regret for easy games (see Corollary 1). It is also an anytime algorithm, meaning that it does not have to know the time horizon, nor does it have to use the doubling trick, to achieve the desired performance. The final, and potentially most impactful, aspect of our algorithm is that through additional assumptions on the set of opponent strategies, the minimax regret of even hard games can be brought down to $\Theta(\sqrt{T})$! While this statement may seem to contradict the result of Bartók et al. (2011), in fact it does not; for the precise statement, see Theorem 2. We call this property adaptiveness to emphasize that the algorithm does not even have to know that the set of opponent strategies is restricted.

2. Definitions and notations

Recall from the introduction that an instance of partial monitoring with $N$ actions and $M$ outcomes is defined by the pair of matrices $L \in \mathbb{R}^{N \times M}$ and $H \in \Sigma^{N \times M}$, where $\Sigma$ is an arbitrary set of symbols. In each round $t$, the opponent chooses an outcome $J_t \in \{1,\ldots,M\}$ and simultaneously the learner chooses an action $I_t \in \{1,\ldots,N\}$.
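To make the interaction protocol concrete, the following sketch simulates one run of a stochastic partial-monitoring game. The particular matrices, the opponent distribution, and the uniformly random placeholder learner are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 3-action, 2-outcome game (not from the paper).
L = np.array([[0.0, 1.0],
              [1.0, 0.0],
              [0.5, 0.5]])           # loss matrix, N x M
H = np.array([["a", "b"],
              ["c", "c"],
              ["d", "e"]])           # feedback matrix, N x M
N, M = L.shape

p_star = np.array([0.3, 0.7])        # opponent strategy: i.i.d. outcome distribution
T = 1000

cum_loss = 0.0
for t in range(T):
    I_t = rng.integers(N)            # placeholder learner: uniformly random action
    J_t = rng.choice(M, p=p_star)    # outcome drawn i.i.d. from p_star
    cum_loss += L[I_t, J_t]          # suffered but NOT revealed to the learner
    feedback = H[I_t, J_t]           # the only information the learner receives

best_fixed = T * np.min(L @ p_star)  # expected loss of the best fixed action
print("regret of the random learner:", cum_loss - best_fixed)
```

The printed quantity corresponds to the stochastic notion of regret above: the realized cumulative loss minus $T$ times the smallest expected per-round loss.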

3 the feedback H[I t, J t ] is revealed and the learner suffers the loss L[I t, J t ]. It is important to note that the loss is not revealed to the learner. As it was previously mentioned, in this paper we deal with stochastic opponents only. In this case, the choice of the opponent is governed by a sequence J, J,... of i.i.d. random variables. The distribution of these variables p M is called an opponent strategy, where M, also called the probability simplex, is the set of all distributions over the M outcomes. It is easy to see that, given opponent strategy p, the expected loss of action i can be expressed as l i p, where l i is defined as the column vector consisting of the i th row of L. The following definitions, taken from Bartók et al. (), are essential for understanding how the structure of L and H determines the hardness of a game. Action i is called optimal under strategy p if its expected loss is not greater than that of any other action i N. That is, l i p l i p. Determining which action is optimal under opponent strategies yields the cell decomposition of the probability simplex M : Definition (Cell decomposition). For every action i N, let C i = {p M : action i is optimal under p}. The sets C,..., C N constitute the cell decomposition of M. Now we can define the following important properties of actions: Definition (Properties of actions). Action i is called dominated if C i =. If an action is not dominated then it is called non-dominated. Action i is called degenerate if it is nondominated and there exists an action i such that C i C i. If an action is neither dominated nor degenerate then it is called Pareto-optimal. The set of Pareto-optimal actions is denoted by P. From the definition of cells we see that a cell is either empty or it is a closed polytope. Furthermore, Paretooptimal actions have (M )-dimensional cells. The following definition, important for our algorithm, also uses the dimensionality of polytopes: Definition (Neighbors). Two Pareto-optimal actions i and j are neighbors if C i C j is an (M )- dimensional polytope. Let N be the set of unordered pairs over N that contains neighboring action-pairs. The neighborhood action set of two neighboring actions i, j is defined as N + = {k N : C i C j C k }. The concept of cell decomposition also appears in Piccolboni & Schindelhauer (). Note that the neighborhood action set N + naturally contains i and j. If N + contains some other action k then either C k = C i, C k = C j, or C k = C i C j. In general, the elements of the feedback matrix H can be arbitrary symbols. Nevertheless, the nature of the symbols themselves does not matter in terms of the structure of the game. What determines the feedback structure of a game is the occurrence of identical symbols in each row of H. To standardize the feedback structure, the signal matrix is defined for each action: Definition. Let s i be the number of distinct symbols in the i th row of H and let σ,..., σ si Σ be an enumeration of those symbols. Then the signal matrix S i {, } si M of action i is defined as S i [k, l] = I {H[i,l]=σk }. The idea of this definition is that if p M is the opponent s strategy then S i p gives the distribution over the symbols underlying action i. In fact, it is also true that observing H[I t, J t ] is equivalent to observing the vector S It e Jt, where e k is the k th unit vector in the standard basis of R M. From now on we assume without loss of generality that the learner s observation at time step t is the random vector Y t = S It e Jt. 
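A small sketch of how the signal matrix of Definition 4 can be computed from one row of the feedback matrix; the helper name and the example row are our own illustration:

```python
import numpy as np

def signal_matrix(feedback_row):
    """Build S_i in {0,1}^(s_i x M) from the i-th row of H (Definition 4)."""
    symbols = list(dict.fromkeys(feedback_row))   # distinct symbols, in order of appearance
    S = np.zeros((len(symbols), len(feedback_row)))
    for k, sym in enumerate(symbols):
        for l, h in enumerate(feedback_row):
            S[k, l] = 1.0 if h == sym else 0.0    # S_i[k, l] = I{H[i, l] = sigma_k}
    return S

# Example: a row with 3 outcomes and 2 distinct symbols.
print(signal_matrix(["a", "b", "b"]))
# [[1. 0. 0.]
#  [0. 1. 1.]]
```

Multiplying such a matrix by an outcome distribution $p$ gives the induced distribution over the observed symbols, exactly as noted above.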
Note that the dimensionality of this vector depends on the action chosen by the learner, namely $Y_t \in \mathbb{R}^{s_{I_t}}$.

The following two definitions play a key role in classifying partial-monitoring games based on their difficulty.

Definition 5 (Global observability (Piccolboni & Schindelhauer, 2001)). A partial-monitoring game $(L, H)$ admits the global observability condition if, for all pairs $i, j$ of actions, $\ell_i - \ell_j \in \bigoplus_{k \in \{1,\ldots,N\}} \operatorname{Im} S_k^\top$.

Definition 6 (Local observability (Bartók et al., 2011)). A pair of neighboring actions $i, j$ is said to be locally observable if $\ell_i - \ell_j \in \bigoplus_{k \in N^+_{i,j}} \operatorname{Im} S_k^\top$. We denote by $\mathcal{L} \subseteq \mathcal{N}$ the set of locally observable pairs of actions (the pairs are unordered). A game satisfies the local observability condition if every pair of neighboring actions is locally observable, i.e., if $\mathcal{L} = \mathcal{N}$.

The main result of Bartók et al. (2011) is that locally observable games have $\tilde{O}(\sqrt{T})$ minimax regret. It is easy to see that local observability implies global observability. Also, from Piccolboni & Schindelhauer (2001) we know that if global observability does not hold then the game has linear minimax regret. From now on, we only deal with games that admit the global observability condition.

A collection of the concepts and symbols introduced in this section is shown in Table 1.
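Under these definitions, checking observability is a linear-algebra test: the difference $\ell_i - \ell_j$ must lie in the span of the rows of the stacked signal matrices. A minimal sketch of such a test, under the assumption that a rank comparison is used (function names are our own; other equivalent formulations exist):

```python
import numpy as np

def in_row_space(stacked_S, diff, tol=1e-9):
    """Return True iff `diff` lies in the span of the rows of `stacked_S`."""
    A = stacked_S.T                                    # columns of A are the rows of the S_k's
    rank_A = np.linalg.matrix_rank(A, tol)
    rank_aug = np.linalg.matrix_rank(np.column_stack([A, diff]), tol)
    return rank_A == rank_aug                          # rank unchanged => diff is in the span

def locally_observable(L, signal_mats, i, j, neighborhood):
    """Local observability test for a neighboring pair {i, j} (Definition 6)."""
    stacked = np.vstack([signal_mats[k] for k in neighborhood])
    return in_row_space(stacked, L[i] - L[j])
```

Replacing `neighborhood` with the full action set gives the corresponding test for global observability (Definition 5).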

Table 1. List of basic symbols.

  N, M                      number of actions and outcomes
  {1, ..., N}               set of actions
  Δ_M ⊂ R^M                 M-dimensional probability simplex, set of opponent strategies
  p* ∈ Δ_M                  opponent strategy
  L ∈ R^{N×M}               loss matrix
  H ∈ Σ^{N×M}               feedback matrix
  ℓ_i ∈ R^M                 ℓ_i = L[i, :]^T, loss vector underlying action i
  C_i ⊆ Δ_M                 cell of action i (Definition 1)
  P                         set of Pareto-optimal actions (Definition 2)
  N                         set of unordered neighboring action-pairs (Definition 3)
  N^+_{i,j}                 neighborhood action set of {i, j} ∈ N (Definition 3)
  S_i ∈ {0,1}^{s_i×M}       signal matrix of action i (Definition 4)
  L ⊆ N                     set of locally observable action pairs (Definition 6)
  V_{i,j}                   observer actions underlying {i, j} ∈ N (Definition 7)
  v_{i,j,k} ∈ R^{s_k}       observer vectors, k ∈ V_{i,j} (Definition 7)
  W_k ∈ R                   confidence width of action k (Definition 7)

3. The proposed algorithm

Our algorithm builds on the core idea underlying the algorithm Balaton of Bartók et al. (2011), so we start with a brief review of Balaton. Balaton uses sweeps to successively eliminate suboptimal actions. This is done by estimating the differences between the expected losses of pairs of actions, i.e., $\delta_{i,j} = (\ell_i - \ell_j)^\top p^*$ for $i, j \in \{1,\ldots,N\}$. In fact, Balaton exploits that it suffices to keep track of $\delta_{i,j}$ for neighboring pairs of actions (i.e., for action pairs $i, j$ such that $\{i,j\} \in \mathcal{N}$). This is because if an action $i$ is suboptimal, it will have a neighbor $j$ that has a smaller expected loss, and so action $i$ will get eliminated when $\delta_{i,j}$ is checked. Now, to estimate $\delta_{i,j}$ for some $\{i,j\} \in \mathcal{N}$, one observes that under the local observability condition it holds that $\ell_i - \ell_j = \sum_{k \in N^+_{i,j}} S_k^\top v_{i,j,k}$ for some vectors $v_{i,j,k} \in \mathbb{R}^{s_k}$. This yields
$$\delta_{i,j} = (\ell_i - \ell_j)^\top p^* = \sum_{k \in N^+_{i,j}} v_{i,j,k}^\top S_k p^*.$$
Since $\nu_k = S_k p^*$ is the vector of the distribution of symbols under action $k$, which can be estimated by $\hat{\nu}_k(t)$, the empirical frequencies of the individual symbols observed under $k$ up to time $t$, Balaton uses $\sum_{k \in N^+_{i,j}} v_{i,j,k}^\top \hat{\nu}_k(t)$ to estimate $\delta_{i,j}$. Since none of the actions in $N^+_{i,j}$ can get eliminated before one of $\{i,j\}$ gets eliminated, the estimate of $\delta_{i,j}$ keeps getting refined until one of $\{i,j\}$ is eliminated.

The essence of why Balaton achieves low regret is as follows. When $i$ is not a neighbor of the optimal action $i^*$, one can show that it will be eliminated before all neighbors $j$ between $i$ and $i^*$ get eliminated; thus, the contribution of such far actions to the regret is minimal. When $i$ is a neighbor of $i^*$, it will be eliminated in time proportional to $1/\delta_i^2$, where $\delta_i = \delta_{i,i^*}$; thus, the contribution of such an action to the regret is proportional to $1/\delta_i$. It also holds that the contribution of $i$ to the regret cannot be larger than $\delta_i T$. Thus, the contribution of $i$ to the regret is at most $\min(\delta_i T, 1/\delta_i) \le \sqrt{T}$.

When some pairs $\{i,j\} \in \mathcal{N}$ are not locally observable, one needs to use actions other than those in $N^+_{i,j}$ to construct an estimate of $\delta_{i,j}$. Under global observability, $\ell_i - \ell_j = \sum_{k \in V_{i,j}} S_k^\top v_{i,j,k}$ for an appropriate subset $V_{i,j} \subseteq \{1,\ldots,N\}$ and an appropriate set of vectors $v_{i,j,\cdot}$. Thus, if the actions in $V_{i,j}$ are kept in play, one can estimate the difference $\delta_{i,j}$ as before, using $\sum_{k \in V_{i,j}} v_{i,j,k}^\top \hat{\nu}_k(t)$. This motivates the following definition:

Definition 7 (Observer sets and observer vectors). The observer set $V_{i,j} \subseteq \{1,\ldots,N\}$ underlying a pair of neighboring actions $\{i,j\} \in \mathcal{N}$ is a set of actions such that $\ell_i - \ell_j \in \bigoplus_{k \in V_{i,j}} \operatorname{Im} S_k^\top$. The observer vectors $(v_{i,j,k})_{k \in V_{i,j}}$ are defined to satisfy the equation $\ell_i - \ell_j = \sum_{k \in V_{i,j}} S_k^\top v_{i,j,k}$; in particular, $v_{i,j,k} \in \mathbb{R}^{s_k}$. In what follows, the choice of the observer sets and vectors is restricted so that $V_{i,j} = V_{j,i}$ and $v_{i,j,k} = -v_{j,i,k}$.
Furthermore, the observer set $V_{i,j}$ is constrained to be a superset of $N^+_{i,j}$, and in particular, when a pair $\{i,j\}$ is locally observable, $V_{i,j} = N^+_{i,j}$ must hold. Finally, for any action $k \in \bigcup_{\{i,j\} \in \mathcal{N}} V_{i,j}$, let $W_k = \max_{i,j:\, k \in V_{i,j}} \|v_{i,j,k}\|_\infty$ be the confidence width of action $k$.
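As an illustration of Definition 7, observer vectors can be obtained numerically by solving the linear system $\ell_i - \ell_j = \sum_{k \in V_{i,j}} S_k^\top v_{i,j,k}$ in a least-squares sense. The sketch below (helper names and data layout are our own assumptions) also shows how the confidence widths follow from the solved vectors:

```python
import numpy as np

def observer_vectors(L, signal_mats, i, j, V_ij):
    """Solve l_i - l_j = sum_{k in V_ij} S_k^T v_k and return {k: v_k} (Definition 7)."""
    A = np.hstack([signal_mats[k].T for k in V_ij])   # M x (sum of s_k) coefficient matrix
    diff = L[i] - L[j]
    sol, *_ = np.linalg.lstsq(A, diff, rcond=None)    # exact when the pair is observable via V_ij
    vectors, offset = {}, 0
    for k in V_ij:
        s_k = signal_mats[k].shape[0]
        vectors[k] = sol[offset:offset + s_k]
        offset += s_k
    return vectors

def confidence_width(all_observer_vectors, k):
    """W_k: largest sup-norm of v_{ij,k} over the pairs {i,j} whose observer set contains k."""
    return max(np.max(np.abs(vecs[k]))
               for vecs in all_observer_vectors.values() if k in vecs)
```

Here `all_observer_vectors` is assumed to map each neighboring pair to the dictionary returned by `observer_vectors`.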

The reason for the particular choice $V_{i,j} = N^+_{i,j}$ for locally observable pairs $\{i,j\}$ is that we plan to use $V_{i,j}$ (and the vectors $v_{i,j,\cdot}$) in the case of locally observable pairs, too. For pairs that are not locally observable, the whole action set $\{1,\ldots,N\}$ is always a valid observer set (thus, a valid $V_{i,j}$ can always be found). However, whenever possible, it is better to use a smaller set. The actual choice of $V_{i,j}$ (and $v_{i,j,k}$) is postponed until the effect of this choice on the regret becomes clear.

With the observer sets, the basic idea of the algorithm becomes the following: (i) eliminate the suboptimal actions in successive sweeps; (ii) in each sweep, enrich the set of remaining actions $\mathcal{P}(t)$ by adding the observer actions underlying the remaining neighboring pairs $\{i,j\} \in \mathcal{N}(t)$, that is, $\mathcal{V}(t) = \bigcup_{\{i,j\} \in \mathcal{N}(t)} V_{i,j}$; (iii) explore the actions in $\mathcal{P}(t) \cup \mathcal{V}(t)$ to update the symbol frequency estimate vectors $\hat{\nu}_k(t)$.

Another refinement is to eliminate the sweeps so as to give the algorithm an advantageous anytime property. This can be achieved by selecting only one action in each step. We propose that the action chosen should be the one that maximizes the reduction of the remaining uncertainty.

This algorithm could be shown to enjoy $\tilde{O}(\sqrt{T})$ regret for locally observable games. However, if we run it on a non-locally observable game and the opponent strategy is on $C_i \cap C_j$ for some $\{i,j\} \in \mathcal{N} \setminus \mathcal{L}$, it will suffer linear regret! The reason is that if both actions $i$ and $j$ are optimal, and thus never get eliminated, the algorithm will choose actions from $V_{i,j} \setminus N^+_{i,j}$ too often. Furthermore, even if the opponent strategy is not on the boundary, the regret can be too high: say action $i$ is optimal but $\delta_j$ is small, while $\{i,j\} \in \mathcal{N} \setminus \mathcal{L}$. Then a third action $k \in V_{i,j}$ with large $\delta_k$ will be chosen about $1/\delta_j^2$ times, causing high regret. To combat this, we restrict the frequency with which an action can be used for information-seeking purposes. For this, we introduce the set of rarely chosen actions,
$$\mathcal{R}(t) = \{k \in \{1,\ldots,N\} : n_k(t) \le \eta_k f(t)\},$$
where $\eta_k \in \mathbb{R}$ and $f : \mathbb{N} \to \mathbb{R}$ are tuning parameters to be chosen later. Then, the set of actions available at time $t$ is restricted to $\mathcal{P}(t) \cup \mathcal{N}^+(t) \cup (\mathcal{V}(t) \cap \mathcal{R}(t))$, where $\mathcal{N}^+(t) = \bigcup_{\{i,j\} \in \mathcal{N}(t)} N^+_{i,j}$. We will show that with these modifications the algorithm achieves $O(T^{2/3})$ regret in the general case, while it will also be shown to achieve an $O(\sqrt{T})$ regret when the opponent uses a benign strategy.

A pseudocode for the algorithm is given in Algorithm 1.

Algorithm 1 CBP
  Input: L, H, α, η_1, ..., η_N, f = f(·)
  Calculate P, N, V_{i,j}, v_{i,j,k}, W_k
  for t = 1 to N do
    Choose I_t = t and observe Y_t                                   {Initialization}
    n_{I_t} ← 1                                                      {# times the action is chosen}
    ν_{I_t} ← Y_t                                                    {Cumulative observations}
  end for
  for t = N+1, N+2, ... do
    for each {i, j} ∈ N do
      δ̃_{i,j} ← Σ_{k ∈ V_{i,j}} v_{i,j,k}^T ν_k / n_k                {Loss difference estimate}
      c_{i,j} ← Σ_{k ∈ V_{i,j}} ‖v_{i,j,k}‖_∞ sqrt(α log t / n_k)     {Confidence}
      if |δ̃_{i,j}| ≥ c_{i,j} then
        halfSpace(i, j) ← sgn δ̃_{i,j}
      else
        halfSpace(i, j) ← 0
      end if
    end for
    [P(t), N(t)] ← getPolytope(P, N, halfSpace)
    N^+(t) = ∪_{{i,j} ∈ N(t)} N^+_{i,j}
    V(t) = ∪_{{i,j} ∈ N(t)} V_{i,j}
    R(t) = {k ∈ {1,...,N} : n_k(t) ≤ η_k f(t)}
    S(t) = P(t) ∪ N^+(t) ∪ (V(t) ∩ R(t))
    Choose I_t = argmax_{i ∈ S(t)} W_i^2 / n_i and observe Y_t
    ν_{I_t} ← ν_{I_t} + Y_t
    n_{I_t} ← n_{I_t} + 1
  end for

It remains to specify the function getPolytope. It receives the array halfSpace as input. The array halfSpace stores which neighboring action pairs have a confident estimate of the difference of their expected losses, along with the sign of the difference (if confident). Each confident pair defines an open halfspace, namely
$$\Delta_{i,j} = \{p \in \Delta_M : \mathrm{halfSpace}(i,j)\,(\ell_i - \ell_j)^\top p > 0\}.$$
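To make the per-round test concrete, here is a small sketch of the loss-difference estimate, the confidence term, and the halfSpace assignment of Algorithm 1. The variable names and data layout are our own: `nu[k]` holds the cumulative observation vector and `n[k]` the play count of action k:

```python
import numpy as np

def halfspace_signs(pairs, observer_sets, observer_vecs, nu, n, t, alpha):
    """For each neighboring pair {i,j}, return the confident sign of the loss
    difference, or 0 if the confidence interval still contains zero."""
    signs = {}
    for (i, j) in pairs:
        delta, conf = 0.0, 0.0
        for k in observer_sets[(i, j)]:
            v = observer_vecs[(i, j)][k]
            delta += v @ (nu[k] / n[k])                                   # loss-difference estimate
            conf += np.max(np.abs(v)) * np.sqrt(alpha * np.log(t) / n[k]) # confidence width term
        signs[(i, j)] = int(np.sign(delta)) if abs(delta) >= conf else 0
    return signs
```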
The function getPolytope calculates the open polytope defined as the intersection of these halfspaces. Then, for every $i \in \mathcal{P}$, it checks whether $C_i$ intersects the open polytope; if so, $i$ will be an element of $\mathcal{P}(t)$. Similarly, for every $\{i,j\} \in \mathcal{N}$, it checks whether $C_i \cap C_j$ intersects the open polytope and puts the pair in $\mathcal{N}(t)$ if it does. Note that it is not enough to compute $\mathcal{P}(t)$ and then drop from $\mathcal{N}$ those pairs $\{k,l\}$ where one of $k$ or $l$ is excluded from $\mathcal{P}(t)$: it is possible that the boundary $C_k \cap C_l$ between the cells of two actions $k, l \in \mathcal{P}(t)$ is included in the rejected region. For an illustration of the cell decomposition and excluded cells, see Figure 1.

Computational complexity. The computationally heavy parts of the algorithm are the initial calculation of the cell decomposition and the function getPolytope; both require solving linear programs. In the preprocessing phase we need to solve $N + N^2$ linear programs to determine the cells and the neighboring pairs of cells. Then, in every round, at most $N^2$ linear programs are needed. The algorithm can be sped up by caching previously solved linear programs.
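A sketch of the kind of linear program getPolytope has to solve: testing whether a cell $C_i$ intersects the open polytope cut out by the confident halfspaces. The strict inequalities are handled here by maximizing a common slack variable; this formulation and the scipy-based helper are our own illustration, not the paper's implementation:

```python
import numpy as np
from scipy.optimize import linprog

def cell_intersects_polytope(L, i, confident_signs):
    """True iff C_i = {p : (l_i - l_j)^T p <= 0 for all j} meets the open polytope
    {p in the simplex : sign_(a,b) * (l_a - l_b)^T p > 0 for all confident pairs}."""
    N, M = L.shape
    # Variables x = (p_1, ..., p_M, s); we maximize the slack s.
    c = np.zeros(M + 1)
    c[-1] = -1.0                                       # linprog minimizes, so minimize -s
    A_ub, b_ub = [], []
    for j in range(N):                                 # p must lie in the cell C_i
        A_ub.append(np.append(L[i] - L[j], 0.0)); b_ub.append(0.0)
    for (a, b), sign in confident_signs.items():       # strict halfspaces: sign*(l_a-l_b)^T p >= s
        if sign != 0:
            A_ub.append(np.append(-sign * (L[a] - L[b]), 1.0)); b_ub.append(0.0)
    A_eq = [np.append(np.ones(M), 0.0)]
    b_eq = [1.0]                                       # p sums to one
    bounds = [(0, None)] * M + [(None, 1.0)]           # p >= 0, slack bounded above
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.success and -res.fun > 1e-9             # optimal slack s > 0
```

The same program, with the cell constraints replaced by those describing $C_i \cap C_j$, gives the test used for keeping a pair in $\mathcal{N}(t)$.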

[Figure 1. An example of cell decomposition (M = 3): (a) the cell decomposition; (b) gray indicates the excluded area.]

4. Analysis of the algorithm

The first theorem in this section is an individual upper bound on the regret of CBP.

Theorem 1. Let $(L, H)$ be an $N$ by $M$ partial-monitoring game. For a fixed opponent strategy $p^* \in \Delta_M$, let $\delta_i$ denote the difference between the expected loss of action $i$ and that of an optimal action. For any time horizon $T$, algorithm CBP with parameters $\alpha > 1$, $\eta_k = W_k^{2/3}$, $f(t) = \alpha^{1/3} t^{2/3} \log^{1/3} t$ has expected regret
$$
\begin{aligned}
\mathbb{E}[R_T] \le{}& 2\big(|\mathcal{V}|+1\big)\frac{\alpha}{\alpha-1}\sum_{\{i,j\}\in\mathcal{N}}|N^+_{i,j}|
\;+\;\sum_{k=1}^{N}\delta_k
\;+\;\sum_{k\in\mathcal{N}^{+}:\,\delta_k>0}\frac{d_k^2 W_k^2}{\delta_k}\,\alpha\log T\\
&+\;\sum_{k:\,\delta_k>0}\delta_k\min\!\left(\frac{d_k^2 W_k^2}{\delta_k^{2}}\,\alpha\log T,\;\alpha^{1/3}W_k^{2/3}T^{2/3}\log^{1/3}T\right)\\
&+\;\sum_{k\in\mathcal{V}\setminus\mathcal{N}^{+}:\,\delta_k>0}\frac{d_k^2 W_k^2}{\delta_k}\,\alpha\log T
\;+\;\sum_{k\in\mathcal{V}\setminus\mathcal{N}^{+}}\delta_k\,\alpha^{1/3}W^{2/3}T^{2/3}\log^{1/3}T,
\end{aligned}
$$
where $W = \max_{k} W_k$, $\mathcal{V} = \bigcup_{\{i,j\}\in\mathcal{N}} V_{i,j}$, $\mathcal{N}^+ = \bigcup_{\{i,j\}\in\mathcal{N}} N^+_{i,j}$, and $d_1, \ldots, d_N$ are game-dependent constants.

The proof is omitted for lack of space; for the complete proofs we refer the reader to the supplementary material. Here we give a short explanation of the different terms in the bound. The first term corresponds to the confidence interval failure event. The second term comes from the initialization phase of the algorithm. The remaining four terms come from categorizing the choices of the algorithm by two criteria: (1) Would $I_t$ be different if $\mathcal{R}(t)$ were defined as $\mathcal{R}(t) = \{1,\ldots,N\}$? (2) Is $I_t \in \mathcal{P}(t) \cup \mathcal{N}^+(t)$? These two binary events lead to four different cases in the proof, resulting in the last four terms of the bound.

An implication of Theorem 1 is an upper bound on the individual regret of locally observable games:

Corollary 1. If $G$ is locally observable then
$$\mathbb{E}[R_T] \le 2\big(|\mathcal{V}|+1\big)\frac{\alpha}{\alpha-1}\sum_{\{i,j\}\in\mathcal{N}}|N^+_{i,j}| + \sum_{k=1}^{N}\delta_k + \sum_{k:\,\delta_k>0}\frac{2\,d_k^2 W_k^2}{\delta_k}\,\alpha\log T.$$

Proof. If a game is locally observable then $\mathcal{V}\setminus\mathcal{N}^+ = \emptyset$, so the last two sums in the statement of Theorem 1 vanish; bounding each remaining minimum by its first argument gives the statement.

The following corollary is an upper bound on the minimax regret of any globally observable game.

Corollary 2. Let $G$ be a globally observable game. Then there exists a constant $c$ such that the expected regret can be upper bounded independently of the choice of $p^*$ as
$$\mathbb{E}[R_T] \le c\, T^{2/3} \log^{1/3} T.$$

The following theorem is an upper bound on the minimax regret of any globally observable game against benign opponents. To state the theorem, we need a new definition. Let $A$ be some subset of actions in $G$. We call $A$ a point-local game in $G$ if $\bigcap_{i \in A} C_i \ne \emptyset$.

Theorem 2. Let $G$ be a globally observable game. Let $\Delta' \subseteq \Delta_M$ be some subset of the probability simplex such that its topological closure satisfies $\mathrm{cl}\,\Delta' \cap C_i \cap C_j = \emptyset$ for every $\{i,j\} \in \mathcal{N}\setminus\mathcal{L}$. Then there exists a constant $c$ such that for every $p^* \in \Delta'$, algorithm CBP with parameters $\alpha > 1$, $\eta_k = W_k^{2/3}$, $f(t) = \alpha^{1/3} t^{2/3} \log^{1/3} t$ achieves
$$\mathbb{E}[R_T] \le c\, d_{\max} \sqrt{b\, T \log T},$$
where $b$ is the size of the largest point-local game and $d_{\max}$ is a game-dependent constant.

In a nutshell, the proof revisits the four cases of the proof of Theorem 1 and shows that the terms which would yield a $T^{2/3}$ upper bound can be non-zero only for a limited number of time steps.

Remark 1. Note that the above theorem implies that CBP does not need to have any prior knowledge about $\Delta'$ to achieve the $\sqrt{T}$ regret. This is why we say our algorithm is adaptive.

An immediate implication of Theorem 2 is the following minimax bound for locally observable games:

Corollary 3. Let $G$ be a locally observable finite partial-monitoring game. Then there exists a constant $c$ such that for every $p^* \in \Delta_M$, $\mathbb{E}[R_T] \le c\sqrt{T \log T}$.

Remark 2. The upper bounds in Corollaries 2 and 3 both have matching lower bounds up to logarithmic factors (Bartók et al., 2011), proving that CBP achieves near-optimal regret in both locally observable and non-locally observable games.

5. Experiments

We demonstrate the results of the previous sections using instances of Dynamic Pricing, as well as a locally observable game. We compare the results of CBP with two other algorithms: Balaton (Bartók et al., 2011), which, as mentioned earlier in the paper, is the first algorithm that achieves $\tilde{O}(\sqrt{T})$ minimax regret for all locally observable finite stochastic partial-monitoring games; and FeedExp3 (Piccolboni & Schindelhauer, 2001), which achieves $O(T^{2/3})$ minimax regret on all non-hopeless finite partial-monitoring games, even against adversarial opponents.

5.1. A locally observable game

The game we use to compare CBP with Balaton and FeedExp3 has 3 actions and 3 outcomes. The game is described by its loss matrix $L$ and the feedback matrix
$$H = \begin{pmatrix} a & b & b \\ b & a & b \\ b & b & a \end{pmatrix}.$$
We ran the algorithms several times for 5 different stochastic strategies, averaged the results for each strategy, and then took the pointwise maximum over the 5 strategies. Figure 2(a) shows the empirical minimax regret calculated in this way. In addition, Figure 2(b) shows the regret of the algorithms against one of the opponents, averaged over the runs. The results indicate that CBP outperforms both FeedExp3 and Balaton. We also observe that, although the asymptotic performance of Balaton is proven to be better than that of FeedExp3, a larger constant factor makes Balaton lose against FeedExp3 even at time step ten million.

5.2. Dynamic Pricing

In Dynamic Pricing, at every time step a seller (the player) sets a price for his product while a buyer (the opponent) secretly sets the maximum price he is willing to pay. The feedback for the seller is "buy" or "no-buy", while his loss is either a preset constant (no-buy) or the difference between the prices (buy). The finite version of the game can be described with the matrices
$$L = \begin{pmatrix} 0 & 1 & \cdots & M-1 \\ c & 0 & \cdots & M-2 \\ \vdots & \ddots & \ddots & \vdots \\ c & \cdots & c & 0 \end{pmatrix}, \qquad H = \begin{pmatrix} \mathrm{y} & \mathrm{y} & \cdots & \mathrm{y} \\ \mathrm{n} & \mathrm{y} & \cdots & \mathrm{y} \\ \vdots & \ddots & \ddots & \vdots \\ \mathrm{n} & \cdots & \mathrm{n} & \mathrm{y} \end{pmatrix}.$$
This game is not locally observable and thus it is hard (Bartók et al., 2011). Simple linear algebra gives that the locally observable action pairs are the consecutive actions ($\mathcal{L} = \{\{i, i+1\} : i \in \{1,\ldots,N-1\}\}$), while, quite surprisingly, all action pairs are neighbors.

We compare CBP with FeedExp3 on Dynamic Pricing with $N = M = 5$ and a fixed no-buy penalty $c$. Since Balaton is undefined on non-locally observable games, we cannot include it in this comparison. To demonstrate the adaptiveness of CBP, we use two sets of opponent strategies. The benign setting is a set of opponents which are far away from the dangerous regions, that is, from the boundaries between cells of non-locally observable neighboring action pairs. The harsh setting, however, includes opponent strategies that are close to or on the boundary between the cells of two such actions. For each setting we maximize over 5 strategies and average over the runs. We also compare the individual regret of the two algorithms against one benign and one harsh strategy, averaging over the runs and plotting 90 percent confidence intervals. The results (shown in Figures 3 and 4) indicate that CBP has a significant advantage over FeedExp3 on benign settings. Nevertheless, for the harsh settings FeedExp3 slightly outperforms CBP, which we think is a reasonable price to pay for the benefit of adaptivity.

[Figure 2. Comparing CBP with Balaton and FeedExp3 on the easy game: (a) pointwise maximum over the 5 settings; (b) regret against one opponent strategy.]
[Figure 3. Comparing CBP and FeedExp3 on the benign setting of the Dynamic Pricing game: (a) pointwise maximum over the 5 settings; (b) regret against one opponent strategy.]
[Figure 4. Comparing CBP and FeedExp3 on the harsh setting of the Dynamic Pricing game: (a) pointwise maximum over the 5 settings; (b) regret against one opponent strategy.]
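For completeness, here is a small sketch that builds the Dynamic Pricing matrices described above for given N, M, and no-buy penalty c (the function name and the particular value of c are our own illustration):

```python
import numpy as np

def dynamic_pricing_game(N, M, c):
    """Loss and feedback matrices of finite Dynamic Pricing:
    row i = asked price, column j = buyer's maximum price."""
    L = np.empty((N, M))
    H = np.empty((N, M), dtype=object)
    for i in range(N):
        for j in range(M):
            if i <= j:                 # the buyer buys: loss is the price difference
                L[i, j] = j - i
                H[i, j] = "y"
            else:                      # no sale: constant penalty c
                L[i, j] = c
                H[i, j] = "n"
    return L, H

L, H = dynamic_pricing_game(5, 5, 2.0)   # c = 2.0 is an arbitrary illustrative value
print(L)
print(H)
```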
References

Audibert, J.-Y. and Bubeck, S. Minimax policies for adversarial and stochastic bandits. In COLT 2009, Proceedings of the 22nd Annual Conference on Learning Theory, Montréal, Canada, June 2009. Omnipress, 2009.

Auer, P., Cesa-Bianchi, N., and Fischer, P. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235-256, 2002.

Auer, P., Cesa-Bianchi, N., Freund, Y., and Schapire, R.E. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48-77, 2002.

Bartók, G., Pál, D., and Szepesvári, Cs. Toward a classification of finite partial-monitoring games. In ALT 2010, Canberra, Australia, 2010.

Bartók, G., Pál, D., and Szepesvári, Cs. Minimax regret of finite partial-monitoring games in stochastic environments. In COLT 2011, July 2011.

Cesa-Bianchi, N., Lugosi, G., and Stoltz, G. Regret minimization under partial monitoring. In Information Theory Workshop (ITW 2006, Punta del Este). IEEE, 2006.

Foster, D.P. and Rakhlin, A. No internal regret via neighborhood watch. CoRR, abs/1108.6088, 2011.

Littlestone, N. and Warmuth, M.K. The weighted majority algorithm. Information and Computation, 108:212-261, 1994.

Piccolboni, A. and Schindelhauer, C. Discrete prediction games with arbitrary feedback and loss. In COLT 2001. Springer-Verlag, 2001.

Vovk, V.G. Aggregating strategies. In Annual Workshop on Computational Learning Theory, 1990.


More information

Tsinghua Machine Learning Guest Lecture, June 9,

Tsinghua Machine Learning Guest Lecture, June 9, Tsinghua Machine Learning Guest Lecture, June 9, 2015 1 Lecture Outline Introduction: motivations and definitions for online learning Multi-armed bandit: canonical example of online learning Combinatorial

More information

Optimization, Learning, and Games with Predictable Sequences

Optimization, Learning, and Games with Predictable Sequences Optimization, Learning, and Games with Predictable Sequences Alexander Rakhlin University of Pennsylvania Karthik Sridharan University of Pennsylvania Abstract We provide several applications of Optimistic

More information

A Second-order Bound with Excess Losses

A Second-order Bound with Excess Losses A Second-order Bound with Excess Losses Pierre Gaillard 12 Gilles Stoltz 2 Tim van Erven 3 1 EDF R&D, Clamart, France 2 GREGHEC: HEC Paris CNRS, Jouy-en-Josas, France 3 Leiden University, the Netherlands

More information

Conditional Swap Regret and Conditional Correlated Equilibrium

Conditional Swap Regret and Conditional Correlated Equilibrium Conditional Swap Regret and Conditional Correlated quilibrium Mehryar Mohri Courant Institute and Google 251 Mercer Street New York, NY 10012 mohri@cims.nyu.edu Scott Yang Courant Institute 251 Mercer

More information

Online Learning with Costly Features and Labels

Online Learning with Costly Features and Labels Online Learning with Costly Features and Labels Navid Zolghadr Department of Computing Science University of Alberta zolghadr@ualberta.ca Gábor Bartók Department of Computer Science TH Zürich bartok@inf.ethz.ch

More information

1 Overview. 2 Learning from Experts. 2.1 Defining a meaningful benchmark. AM 221: Advanced Optimization Spring 2016

1 Overview. 2 Learning from Experts. 2.1 Defining a meaningful benchmark. AM 221: Advanced Optimization Spring 2016 AM 1: Advanced Optimization Spring 016 Prof. Yaron Singer Lecture 11 March 3rd 1 Overview In this lecture we will introduce the notion of online convex optimization. This is an extremely useful framework

More information

Introducing strategic measure actions in multi-armed bandits

Introducing strategic measure actions in multi-armed bandits 213 IEEE 24th International Symposium on Personal, Indoor and Mobile Radio Communications: Workshop on Cognitive Radio Medium Access Control and Network Solutions Introducing strategic measure actions

More information

Downloaded 11/11/14 to Redistribution subject to SIAM license or copyright; see

Downloaded 11/11/14 to Redistribution subject to SIAM license or copyright; see SIAM J. COMPUT. Vol. 32, No. 1, pp. 48 77 c 2002 Society for Industrial and Applied Mathematics THE NONSTOCHASTIC MULTIARMED BANDIT PROBLEM PETER AUER, NICOLÒ CESA-BIANCHI, YOAV FREUND, AND ROBERT E. SCHAPIRE

More information

DIMACS Technical Report March Game Seki 1

DIMACS Technical Report March Game Seki 1 DIMACS Technical Report 2007-05 March 2007 Game Seki 1 by Diogo V. Andrade RUTCOR, Rutgers University 640 Bartholomew Road Piscataway, NJ 08854-8003 dandrade@rutcor.rutgers.edu Vladimir A. Gurvich RUTCOR,

More information