Bernoulli Rank-1 Bandits for Click Feedback

Sumeet Katariya¹, Branislav Kveton², Csaba Szepesvári³, Claire Vernade⁴, Zheng Wen²
¹University of Wisconsin-Madison, ²Adobe Research, ³University of Alberta, ⁴Telecom ParisTech
katariya@wisc.edu, kveton@adobe.com, szepesva@cs.ualberta.ca, claire.vernade@telecom-paristech.fr, zwen@adobe.com

Abstract

The probability that a user will click a search result depends both on its relevance and its position on the results page. The position-based model explains this behavior by ascribing to every item an attraction probability, and to every position an examination probability. To be clicked, a result must be both attractive and examined. The click probabilities of item-position pairs thus form the entries of a rank-1 matrix. We propose the learning problem of a Bernoulli rank-1 bandit where at each step, the learning agent chooses a pair of row and column arms, and receives the product of their Bernoulli-distributed values as a reward. This is a special case of the stochastic rank-1 bandit problem considered in recent work, which proposed an elimination-based algorithm, Rank1Elim, and showed that Rank1Elim's regret scales linearly with the number of rows and columns on benign instances. These are the instances where the minimum of the average row and column rewards, µ, is bounded away from zero. The issue with Rank1Elim is that it fails to be competitive with straightforward bandit strategies as µ → 0. In this paper we propose Rank1ElimKL, which replaces the crude confidence intervals of Rank1Elim with confidence intervals based on Kullback-Leibler (KL) divergences. With the help of a novel result concerning the scaling of KL divergences, we prove that with this change our algorithm will be competitive no matter the value of µ. Experiments with synthetic data confirm that on benign instances the performance of Rank1ElimKL is significantly better than that of Rank1Elim. Similarly, experiments with models derived from real data confirm that the improvements are significant across the board, regardless of whether the data is benign or not.

1 Introduction

When deciding which search results to present, click logs are of particular interest. A fundamental problem in click data is position bias: the probability of an element being clicked depends not only on its relevance, but also on its position on the results page. The position-based model (PBM), first proposed by Richardson et al. [2007] and then formalized by Craswell et al. [2008], models this behavior by associating with each item a probability of being attractive, and with each position a probability of being examined. To be clicked, a result must be both attractive and examined. Given click logs, the attraction and examination probabilities can be learned using maximum-likelihood estimation (MLE) or the expectation-maximization (EM) algorithm [Chuklin et al., 2015].

An online learning model for this problem, called the stochastic rank-1 bandit, is proposed in Katariya et al. [2017]. The objective of the learning agent is to learn the most rewarding item and position, which is the maximum entry of a rank-1 matrix. At time t, the agent chooses a pair of row and column arms, and receives the product of their values as a reward. The goal of the agent is to maximize its expected cumulative reward, or equivalently to minimize its expected cumulative regret with respect to the optimal solution, the most rewarding pair of row and column arms.
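To make the rank-1 structure of the PBM concrete, here is a minimal Python sketch (not from the paper; it assumes NumPy, and the probabilities are made-up illustrative values): the matrix of click probabilities is an outer product of the attraction and examination probabilities, so its maximum entry is the best item placed at the best position.

import numpy as np

# Hypothetical PBM parameters: one attraction probability per item (rows)
# and one examination probability per position (columns).
attraction = np.array([0.8, 0.5, 0.2, 0.1])
examination = np.array([0.9, 0.6, 0.3])

# Under the PBM, pair (i, j) is clicked iff item i is attractive AND position j
# is examined, so its click probability is attraction[i] * examination[j].
# The K x L matrix of click probabilities is therefore an outer product, i.e. rank 1.
click_prob = np.outer(attraction, examination)

# The most rewarding item-position pair is the maximum entry of this matrix.
i_star, j_star = np.unravel_index(np.argmax(click_prob), click_prob.shape)
print(click_prob)
print("best pair:", (i_star, j_star), "click probability:", click_prob[i_star, j_star])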
This learning problem is challenging because when the agent receives a reward of zero, it could mean either that the item was unattractive, or that the position was not examined, or both. Katariya et al. [2017] also proposed an elimination algorithm, Rank1Elim, whose regret is O((K + L) ∆⁻¹ µ⁻² log n), where K is the number of rows, L is the number of columns, ∆ is the minimum of the row and column gaps, and µ is the minimum of the average row and column rewards. When µ is bounded away from zero, the regret scales linearly with K + L, while it scales inversely with ∆. This is a significant improvement over using a standard bandit algorithm that (disregarding the problem structure) would treat item-position pairs as unrelated arms and would achieve a regret of O(KL ∆⁻¹ log n). The issue is that as µ gets small, the regret bound worsens significantly. As we verify in Section 5, this indeed happens on models derived from some real-world problems.

To illustrate the severity of this problem, consider an example where K = L, and the row and column rewards are Bernoulli distributed. Let the mean reward of row 1 and column 1 be 1, and the mean reward of all other rows and columns be 0. We refer to this setting as a needle in a haystack, because there is a single rewarding entry out of K² entries. For this setting, µ = 1/K, and consequently the regret of Rank1Elim is O(µ⁻² K log n) = O(K³ log n). However, a naive bandit algorithm that ignores the rank-1 structure and treats each row-column pair as unrelated arms has O(K² log n) regret. While a naive bandit algorithm is unable to exploit the rank-1 structure when µ is large, Rank1Elim is unable to keep up with a naive algorithm when µ is small. Our goal in this paper is to design an algorithm that performs well across all rank-1 problem instances, regardless of their parameters.

In this paper we propose that this improvement can be achieved by replacing the confidence intervals used by Rank1Elim with strictly tighter confidence intervals based on Kullback-Leibler (KL) divergences. This leads to our algorithm, which we call Rank1ElimKL. Based on the work of Garivier and Cappe [2011], we expect this change to lead to improved behavior, especially for extreme instances, as µ → 0. Indeed, in this paper we show that KL divergences enjoy a peculiar scaling. In particular, thanks to this improvement, for the needle in a haystack problem discussed above the regret of Rank1ElimKL becomes O(K² log n).

Our contributions are as follows. First, we propose the Bernoulli rank-1 bandit, which is a special class of the stochastic rank-1 bandit where the rewards are Bernoulli distributed. Second, we modify Rank1Elim to use KL-UCB confidence intervals for solving the Bernoulli rank-1 bandit; we call the resulting algorithm Rank1ElimKL. Third, we derive a O((K + L) ∆⁻¹ (µγ)⁻¹ log n) gap-dependent upper bound on the n-step regret of Rank1ElimKL, where K, L, ∆, and µ are as above, while γ = max{µ, 1 − p_max}, with p_max being the maximum of the row and column rewards; effectively, this replaces the µ⁻² term of the previous regret bound of Rank1Elim with (µγ)⁻¹. It follows that the new bound is a unilateral improvement over the previous one, and a strict improvement when µ < 1 − p_max, which is expected to happen quite often in practical problems. For the needle in a haystack problem, the new bound essentially matches that of the naive bandit algorithm, while never worsening the bound of Rank1Elim.¹ Our final contribution is the experimental validation of Rank1ElimKL, on both synthetic and real-world problems. The experiments indicate that Rank1ElimKL outperforms several baselines across almost all problem instances.

¹ Alternatively, the worst-case regret bound for Rank1Elim becomes O(Kn^(2/3) log n), while that of a naive bandit algorithm with a naive bound is O(Kn^(1/2) log n).

We denote random variables by boldface letters and define [n] = {1, ..., n}. For any sets A and B, we denote by A^B the set of all vectors whose entries are indexed by B and take values from A. We let d(p, q) = p log(p/q) + (1 − p) log((1 − p)/(1 − q)) denote the KL divergence between the Bernoulli distributions with means p, q ∈ [0, 1]. As usual, the formula for d(p, q) is defined through its continuous extension as p, q approach the boundaries of [0, 1].

2 Setting

The setting of the Bernoulli rank-1 bandit is the same as that of the stochastic rank-1 bandit [Katariya et al., 2017], with the additional requirement that the row and column rewards are Bernoulli distributed. We state the setting for completeness, and borrow the notation from Katariya et al. [2017] for ease of comparison. An instance of our learning problem is defined by a tuple (K, L, P_U, P_V), where K is the number of rows, L is the number of columns, P_U is a distribution over {0, 1}^K from which the row rewards are drawn, and P_V is a distribution over {0, 1}^L from which the column rewards are drawn. Let the row and column rewards be (u_t, v_t) ~ i.i.d. P_U ⊗ P_V, t = 1, ..., n. In particular, u_t and v_t are drawn independently at any time t.
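As an illustration of this feedback model (not from the paper; a minimal sketch assuming NumPy, with illustrative names and means), the environment draws fresh Bernoulli vectors u_t and v_t each round and reveals only the product for the chosen pair, so a zero reward does not tell the agent whether the row or the column failed:

import numpy as np

class BernoulliRank1Env:
    """Bernoulli rank-1 bandit: row and column rewards are independent Bernoullis."""

    def __init__(self, u_bar, v_bar, seed=0):
        self.u_bar = np.asarray(u_bar, dtype=float)  # mean row rewards, shape (K,)
        self.v_bar = np.asarray(v_bar, dtype=float)  # mean column rewards, shape (L,)
        self.rng = np.random.default_rng(seed)

    def step(self, i, j):
        # Draw full reward vectors u_t ~ P_U and v_t ~ P_V independently,
        # but reveal only the product for the chosen pair (i, j).
        u_t = self.rng.random(self.u_bar.shape) < self.u_bar
        v_t = self.rng.random(self.v_bar.shape) < self.v_bar
        return int(u_t[i] and v_t[j])

# The expected reward of pair (i, j) is u_bar[i] * v_bar[j].
env = BernoulliRank1Env(u_bar=[0.7, 0.3, 0.3], v_bar=[0.6, 0.2])
print(np.mean([env.step(0, 0) for _ in range(10000)]), "vs expected", 0.7 * 0.6)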
At time t, the learning agent chooses a row index i_t ∈ [K] and a column index j_t ∈ [L], and observes u_t(i_t) v_t(j_t) as its reward. The indices i_t and j_t chosen by the learning agent are allowed to depend only on the history of the agent up to time t. Let the time horizon be n. The goal of the agent is to maximize its expected cumulative reward in n steps. This is equivalent to minimizing the expected cumulative regret in n steps,

R(n) = E[ Σ_{t=1}^n R(i_t, j_t, u_t, v_t) ],

where R(i_t, j_t, u_t, v_t) = u_t(i*) v_t(j*) − u_t(i_t) v_t(j_t) is the instantaneous stochastic regret of the agent at time t, and

(i*, j*) = arg max_{(i,j) ∈ [K]×[L]} E[u(i) v(j)]

is the optimal solution in hindsight of knowing P_U and P_V.

3 Algorithm

The pseudocode of our algorithm, Rank1ElimKL, is in Algorithm 1. As noted earlier, this algorithm is based on Rank1Elim [Katariya et al., 2017], with the difference that we replace its confidence intervals with KL-based confidence intervals. For the reader's benefit, we explain the full algorithm.

Rank1ElimKL is an elimination algorithm that operates in stages, where the elimination is conducted with KL-UCB confidence intervals. The lengths of the stages quadruple from one stage to the next, and the algorithm is designed such that at the end of stage l, it eliminates with high probability any row and column whose gap, scaled by a problem-dependent constant, is at least ∆̃_l = 2^(−l). We denote the remaining rows and columns in stage l by I_l and J_l, respectively.

Every stage has an exploration phase and an exploitation phase. During row exploration in stage l (lines 12-16), every remaining row is played with a randomly chosen remaining column; column exploration (lines 17-21) is analogous, with every remaining column played with a randomly chosen remaining row. In the exploitation phase, we construct high-probability KL-UCB [Garivier and Cappe, 2011] confidence intervals [L_l^U(i), U_l^U(i)] for each row i ∈ I_l, and confidence intervals [L_l^V(j), U_l^V(j)] for each column j ∈ J_l. As noted earlier, this is where we depart from Rank1Elim. The elimination uses row i_l and column j_l, where

i_l = arg max_{i ∈ I_l} L_l^U(i),   j_l = arg max_{j ∈ J_l} L_l^V(j).
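The KL-UCB bounds L_l and U_l above can be computed by a one-dimensional search, because n_l d(p̂, q) grows as q moves away from the empirical mean p̂. A minimal sketch of this computation (not from the paper; standard-library Python only, helper names are ours):

import math

def kl_bernoulli(p, q, eps=1e-12):
    # Bernoulli KL divergence d(p, q), with q kept away from {0, 1} so the formula stays finite.
    q = min(max(q, eps), 1.0 - eps)
    d = 0.0
    if p > 0.0:
        d += p * math.log(p / q)
    if p < 1.0:
        d += (1.0 - p) * math.log((1.0 - p) / (1.0 - q))
    return d

def kl_ucb(p_hat, n_obs, delta, iters=50):
    # Largest q >= p_hat with n_obs * d(p_hat, q) <= delta (upper confidence bound).
    lo, hi = p_hat, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if n_obs * kl_bernoulli(p_hat, mid) <= delta:
            lo = mid
        else:
            hi = mid
    return lo

def kl_lcb(p_hat, n_obs, delta, iters=50):
    # Smallest q <= p_hat with n_obs * d(p_hat, q) <= delta (lower confidence bound).
    lo, hi = 0.0, p_hat
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if n_obs * kl_bernoulli(p_hat, mid) <= delta:
            hi = mid
        else:
            lo = mid
    return hi

# Example with the divergence budget that appears in Algorithm 1: delta_l = log n + 3 log log n.
n = 10**6
delta = math.log(n) + 3.0 * math.log(math.log(n))
print(kl_lcb(0.3, 1000, delta), kl_ucb(0.3, 1000, delta))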

Algorithm 1: Rank1ElimKL for Bernoulli rank-1 bandits.

 1: // Initialization
 2: t ← 1, ∆̃_0 ← 1, n_{-1} ← 0
 3: C_0^U ← 0_{K,L}, C_0^V ← 0_{K,L}   // zero matrices with K rows and L columns
 4: h_0^U ← (1, ..., K), h_0^V ← (1, ..., L)
 5:
 6: for l = 0, 1, ... do
 7:   n_l ← ⌈∆̃_l^{-2} log n⌉
 8:   I_l ← ∪_{i ∈ [K]} {h_l^U(i)}, J_l ← ∪_{j ∈ [L]} {h_l^V(j)}
 9:
10:   // Row and column exploration
11:   for n_l − n_{l−1} times do
12:     Choose uniformly at random column j ∈ [L]
13:     j ← h_l^V(j)
14:     for all i ∈ I_l do
15:       C_l^U(i, j) ← C_l^U(i, j) + u_t(i) v_t(j)
16:       t ← t + 1
17:     Choose uniformly at random row i ∈ [K]
18:     i ← h_l^U(i)
19:     for all j ∈ J_l do
20:       C_l^V(i, j) ← C_l^V(i, j) + u_t(i) v_t(j)
21:       t ← t + 1
22:
23:   // UCBs and LCBs on the expected rewards of all remaining rows and columns, with divergence constraint
24:   δ_l ← log n + 3 log log n
25:   for all i ∈ I_l do
26:     û_l(i) ← (1/n_l) Σ_{j=1}^L C_l^U(i, j)
27:     U_l^U(i) ← arg max_{q ∈ [û_l(i), 1]} {n_l d(û_l(i), q) ≤ δ_l}
28:     L_l^U(i) ← arg min_{q ∈ [0, û_l(i)]} {n_l d(û_l(i), q) ≤ δ_l}
29:   for all j ∈ J_l do
30:     v̂_l(j) ← (1/n_l) Σ_{i=1}^K C_l^V(i, j)
31:     U_l^V(j) ← arg max_{q ∈ [v̂_l(j), 1]} {n_l d(v̂_l(j), q) ≤ δ_l}
32:     L_l^V(j) ← arg min_{q ∈ [0, v̂_l(j)]} {n_l d(v̂_l(j), q) ≤ δ_l}
33:
34:   // Row and column elimination
35:   i_l ← arg max_{i ∈ I_l} L_l^U(i)
36:   h_{l+1}^U ← h_l^U
37:   for i = 1, ..., K do
38:     if U_l^U(h_l^U(i)) ≤ L_l^U(i_l) then
39:       h_{l+1}^U(i) ← i_l
40:
41:   j_l ← arg max_{j ∈ J_l} L_l^V(j)
42:   h_{l+1}^V ← h_l^V
43:   for j = 1, ..., L do
44:     if U_l^V(h_l^V(j)) ≤ L_l^V(j_l) then
45:       h_{l+1}^V(j) ← j_l
46:
47:   ∆̃_{l+1} ← ∆̃_l / 2, C_{l+1}^U ← C_l^U, C_{l+1}^V ← C_l^V

We eliminate any row i and column j such that

U_l^U(i) ≤ L_l^U(i_l),   U_l^V(j) ≤ L_l^V(j_l).

We also track the remaining rows and columns in stage l by h_l^U and h_l^V, respectively. When row i is eliminated by row i_l, we set h_l^U(i) = i_l. If row i_l is in turn eliminated by row i_{l'} at a later stage l' > l, we update h_{l'}^U(i) = i_{l'}. This is analogous for columns. The remaining rows I_l and columns J_l can then be defined as the unique values in h_l^U and h_l^V, respectively. The maps h_l^U and h_l^V help to guarantee that the tracked row and column means are non-decreasing.

The KL-UCB confidence intervals in Rank1ElimKL can be found by solving a one-dimensional convex optimization problem for every row (lines 27-28) and column (lines 31-32). They can be found efficiently using binary search, because the Kullback-Leibler divergence d(x, q) is convex and increasing in q as q moves away from x in either direction. The KL-UCB confidence intervals need to be computed only once per stage. Hence, Rank1ElimKL has to solve at most K + L convex optimization problems per stage, and hence O((K + L) log n) problems overall.

4 Analysis

In this section, we derive a gap-dependent upper bound on the n-step regret of Rank1ElimKL. The hardness of our learning problem is measured by two kinds of metrics. The first kind are gaps. The gaps of row i ∈ [K] and column j ∈ [L] are defined as

∆_i^U = ū(i*) − ū(i),   ∆_j^V = v̄(j*) − v̄(j),   (1)

respectively; and the minimum row and column gaps are defined as

∆_min^U = min_{i ∈ [K]: ∆_i^U > 0} ∆_i^U,   ∆_min^V = min_{j ∈ [L]: ∆_j^V > 0} ∆_j^V,   (2)

respectively. Roughly speaking, the smaller the gaps, the harder the problem. This inverse dependence on gaps is tight [Katariya et al., 2017]. The second kind of metrics are the extremal parameters

µ = min{ (1/K) Σ_{i=1}^K ū(i), (1/L) Σ_{j=1}^L v̄(j) },   (3)

p_max = max{ max_{i ∈ [K]} ū(i), max_{j ∈ [L]} v̄(j) }.   (4)

The first metric, µ, is the minimum of the averages of the entries of ū and v̄. This quantity appears in our analysis due to the averaging character of Rank1ElimKL. The smaller the value of µ, the larger the regret. The second metric, p_max, is the maximum entry in ū and v̄. As we shall see, the regret scales inversely with

γ = max{µ, 1 − p_max}.   (5)

Note that if µ → 0 and p_max → 1 at the same time, then the row and column gaps must also approach one.
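These hardness metrics are simple functions of the mean vectors ū and v̄. A small sketch computing them as defined in (1)-(5) above (not from the paper; assuming NumPy, with the needle-in-a-haystack instance from the introduction as the example):

import numpy as np

def hardness_metrics(u_bar, v_bar):
    u_bar, v_bar = np.asarray(u_bar, float), np.asarray(v_bar, float)
    gaps_u = u_bar.max() - u_bar            # row gaps, as in (1)
    gaps_v = v_bar.max() - v_bar            # column gaps, as in (1)
    mu = min(u_bar.mean(), v_bar.mean())    # minimum average reward, as in (3)
    p_max = max(u_bar.max(), v_bar.max())   # maximum mean reward, as in (4)
    gamma = max(mu, 1.0 - p_max)            # scaling constant, as in (5)
    return gaps_u, gaps_v, mu, p_max, gamma

# Needle in a haystack with K = L = 8: row 1 and column 1 have mean 1, all others 0.
K = 8
u_bar = np.zeros(K); u_bar[0] = 1.0
v_bar = np.zeros(K); v_bar[0] = 1.0
_, _, mu, p_max, gamma = hardness_metrics(u_bar, v_bar)
print("mu =", mu, "(= 1/K)", " p_max =", p_max, " gamma =", gamma)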
With this we are ready to state our main result.

Theorem 1. Let C = 6e + 82 and n ≥ 5. Then the expected n-step regret of Rank1ElimKL is bounded as

R(n) ≤ (16 / (µγ)) ( Σ_{i=1}^K 1/∆̃_i^U + Σ_{j=1}^L 1/∆̃_j^V ) log n + C(K + L),

where ∆̃_i^U = ∆_i^U + 1{∆_i^U = 0} ∆_min^V and ∆̃_j^V = ∆_j^V + 1{∆_j^V = 0} ∆_min^U.

The difference from the main result of Katariya et al. [2017] is that the first term in our bound scales with 1/(µγ) instead of 1/µ². Since µ ≤ γ, and in fact often µ ≪ γ, this is a significant improvement. We validate this empirically in the next section.

Due to the lack of space, we only provide a sketch of the proof of Theorem 1. At a high level, it follows the steps of the proof of Katariya et al. [2017]. Focusing on the source of the improvement, we first state and prove a new lemma, which allows us to replace one 1/µ in the regret bound with 1/γ. Recall from Section 1 that d(p, q) denotes the KL divergence between Bernoulli random variables with means p, q ∈ [0, 1].

Lemma 1. Let c, p, q ∈ [0, 1]. Then

c(1 − max{p, q}) d(p, q) ≤ d(cp, cq) ≤ c d(p, q).   (6)

In particular,

2c max(c, 1 − max{p, q}) (p − q)² ≤ d(cp, cq).   (7)

Proof. The proof of (6) is based on differentiation. The first two derivatives of d(cp, cq) with respect to q are

∂/∂q d(cp, cq) = c(q − p) / (q(1 − cq)),
∂²/∂q² d(cp, cq) = (c²(q − p)² + cp(1 − cp)) / (q²(1 − cq)²);

and the first two derivatives of c d(p, q) with respect to q are

∂/∂q [c d(p, q)] = c(q − p) / (q(1 − q)),
∂²/∂q² [c d(p, q)] = (c(q − p)² + cp(1 − p)) / (q²(1 − q)²).

The second derivatives show that both d(cp, cq) and c d(p, q) are convex in q for any p. The minima are at q = p. We fix p and c, and prove (6) for any q.

The upper bound is derived as follows. Since d(cp, cx) = c d(p, x) = 0 when x = p, the upper bound holds if c d(p, x) increases faster than d(cp, cx) for any p < x ≤ q, and if c d(p, x) decreases faster than d(cp, cx) for any q ≤ x < p. This follows from the expressions for ∂/∂x d(cp, cx) and ∂/∂x [c d(p, x)]. In particular, both derivatives have the same sign for any x, and 1/(1 − cx) ≤ 1/(1 − x) for x ∈ [min{p, q}, max{p, q}].

The lower bound is derived as follows. Note that the ratio of ∂/∂x [c d(p, x)] to ∂/∂x d(cp, cx) is bounded from above as

(∂/∂x [c d(p, x)]) / (∂/∂x d(cp, cx)) = (1 − cx) / (1 − x) ≤ 1 / (1 − max{p, q})

for any x ∈ [min{p, q}, max{p, q}]. Therefore, we get a lower bound on d(cp, cq) when we multiply c d(p, q) by 1 − max{p, q}.

To prove (7), note that by Pinsker's inequality, d(p, q) ≥ 2(p − q)² for any p, q. Hence, on one hand, d(cp, cq) ≥ 2c²(p − q)². On the other hand, we have from (6) that d(cp, cq) ≥ 2c(1 − max{p, q})(p − q)². Taking the maximum of the right-hand sides of these two inequalities gives (7).

Proof sketch of Theorem 1. We proceed along the lines of Katariya et al. [2017]. The key step in their analysis is the upper bound on the expected n-step regret of any suboptimal row i ∈ [K]. This bound is proved as follows. First, Katariya et al. [2017] show that row i is eliminated with high probability after O((µ ∆_i^U)⁻² log n) observations, for any column elimination strategy. Then they argue that the amortized per-observation regret before the elimination is O(∆_i^U). Therefore, the maximum regret due to row i is O(µ⁻² (∆_i^U)⁻¹ log n). The expected n-step regret of any suboptimal column j ∈ [L] is bounded analogously.

We modify the above argument as follows. Roughly speaking, due to the KL-UCB confidence interval, a suboptimal row i is eliminated with high probability after

O( (1 / d(µ(ū(i*) − ∆_i^U), µ ū(i*))) log n )

observations. Therefore, the expected n-step regret due to exploring row i is

O( (∆_i^U / d(µ(ū(i*) − ∆_i^U), µ ū(i*))) log n ).

Now we apply (7) of Lemma 1 to get that the regret is

O( (1 / (µγ ∆_i^U)) log n ).

The regret of any suboptimal column j ∈ [L] is bounded analogously.
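A quick numerical sanity check of Lemma 1 (not from the paper; a sketch in standard-library Python, under the assumption that (6) and (7) are stated as reconstructed above):

import math, random

def d(p, q, eps=1e-12):
    # Bernoulli KL divergence with the usual continuous extension near the boundary.
    q = min(max(q, eps), 1.0 - eps)
    out = 0.0
    if p > 0.0:
        out += p * math.log(p / q)
    if p < 1.0:
        out += (1.0 - p) * math.log((1.0 - p) / (1.0 - q))
    return out

random.seed(0)
for _ in range(10000):
    c = random.uniform(0.01, 0.99)
    p = random.uniform(0.01, 0.99)
    q = random.uniform(0.01, 0.99)
    mid = d(c * p, c * q)
    lower6 = c * (1.0 - max(p, q)) * d(p, q)                    # lower bound in (6)
    upper6 = c * d(p, q)                                        # upper bound in (6)
    lower7 = 2.0 * c * max(c, 1.0 - max(p, q)) * (p - q) ** 2   # lower bound in (7)
    assert lower6 <= mid + 1e-9 and mid <= upper6 + 1e-9
    assert lower7 <= mid + 1e-9
print("Lemma 1 inequalities hold on random (c, p, q)")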
5 Experiments

We conduct two experiments. In Section 5.1, we compare our algorithm to other algorithms from the literature on a synthetic problem. In Section 5.2, we evaluate the same algorithms on click models that are trained on a real-world dataset.

5.1 Comparison to Alternative Algorithms

Following Katariya et al. [2017], we consider the needle in a haystack class of problems, where only one item is attractive and one position is examined. We recall the problem here. The i-th entry of u_t, u_t(i), and the j-th entry of v_t, v_t(j), are independent Bernoulli variables with means

ū(i) = p_U + ∆_U 1{i = 1},   v̄(j) = p_V + ∆_V 1{j = 1},

for some (p_U, p_V) ∈ [0, 1]² and gaps (∆_U, ∆_V) ∈ (0, 1 − p_U] × (0, 1 − p_V]. Note that arm (1, 1) is optimal, with an expected reward of (p_U + ∆_U)(p_V + ∆_V). The goal of this experiment is to compare Rank1ElimKL with five other algorithms from the literature and to validate that its regret scales linearly with K and L, which implies that it exploits the problem structure. In this experiment, we set

p_U = p_V = 0.25, ∆_U = ∆_V = 0.5, and K = L,   (8)

so that µ = (1 − 1/K) 0.25 + 0.75/K = 0.25 + 0.5/K, 1 − p_max = 0.25, and γ = µ = 0.25 + 0.5/K.

Figure 1: The n-step regret of Rank1ElimKL, UCB1Elim, Rank1Elim and UCB1 on problem (8) for (a) K = L = 32, (b) K = L = 64, (c) K = L = 128. The results are averaged over 20 runs.

In addition to comparing to Rank1Elim, we also compare to UCB1Elim [Auer and Ortner, 2010], UCB1 [Auer et al., 2002], KL-UCB [Garivier and Cappe, 2011], and Thompson sampling [Thompson, 1933]. UCB1 is chosen as a baseline as it has been used by Katariya et al. [2017] in their experiments. UCB1Elim uses an elimination approach similar to Rank1Elim and Rank1ElimKL. KL-UCB is similar to UCB1, but it uses KL-UCB confidence intervals. Thompson sampling (TS) is a Bayesian algorithm that maximizes the expected reward with respect to a randomly drawn belief.

Figure 1 shows the n-step regret of the algorithms described above as a function of time n for K = L, the latter of which doubles from one plot to the next. We observe that the regret of Rank1ElimKL flattens in all three problems, which indicates that Rank1ElimKL learns the optimal arm. We also see that the regret of Rank1ElimKL doubles as K and L double, indicating that our bound in Theorem 1 has the right scaling in K + L, and that the algorithm leverages the problem structure. On the other hand, the regret of UCB1, UCB1Elim, KL-UCB and TS quadruples when K and L double, confirming that their regret is Ω(KL). Next, we observe that while KL-UCB and TS have smaller regret than Rank1ElimKL when K and L are small, the (K + L) scaling of Rank1ElimKL enables it to outperform these algorithms for large K and L (Figure 1c). Finally, note that Rank1ElimKL outperforms Rank1Elim in all three experiments, confirming the importance of tighter confidence intervals. It is worth noting that µ = γ for this problem, and hence µ² = µγ. According to Theorem 1, Rank1ElimKL should not perform better than Rank1Elim. Yet it is 4 times better, as seen in Figure 1a. This suggests that our upper bound is loose.

5.2 Models Based on Real-World Data

In this experiment, we compare Rank1ElimKL to other algorithms on click models that are trained on the Yandex dataset [Yandex, 2013], an anonymized search log of 35M search sessions. Each session contains a query, the list of displayed documents at positions 1 to 10, and the clicks on those documents. We select the 20 most frequent queries from the dataset, and estimate the parameters of the PBM using the EM algorithm [Markov, 2014; Chuklin et al., 2015].

Figure 2: (a) The sorted attraction probabilities of the items from 2 queries from the Yandex dataset. (b) The sorted examination probabilities of the positions for the same 2 queries. (c) The n-step regret in Query 1. (d) The n-step regret in Query 2. The results are averaged over 5 runs.

To illustrate our learned models, we plot the parameters of two queries, Queries 1 and 2. Figure 2a shows the sorted attraction probabilities of the items in the queries, and Figure 2b shows the sorted examination probabilities of the positions. Query 1 has L = 87 items and Query 2 has L = 87 items. K = 10 is the number of documents displayed per query. We illustrate the performance on these queries because they differ notably in their µ (3) and p_max (4), so we can study the performance of our algorithm in different real-world settings. Figures 2c and 2d show the regret of all algorithms on Queries 1 and 2, respectively.

We first note that KL-UCB and TS do better than Rank1ElimKL on both queries. As seen in Section 5.1, Rank1ElimKL is expected to improve over these baselines for large K and L, which is not the case here.

Figure 3: The average n-step regret over all 20 queries from the Yandex dataset, with 5 runs per query.

With respect to other algorithms, we see that Rank1ElimKL is significantly better than Rank1Elim and UCB1Elim, and no worse than UCB1 on Query 1, while for Query 2, Rank1ElimKL is superior to all of them. Note that p_max = 0.85 in Query 1 is higher than p_max = 0.66 in Query 2. Also, µ = 0.03 in Query 1 is lower than µ = 0.28 in Query 2. From (5), γ = 0.15 in Query 1, which is lower than γ = 0.34 in Query 2. Our upper bound (Theorem 1) on the regret of Rank1ElimKL scales as O((µγ)⁻¹), and so we expect Rank1ElimKL to perform better on Query 2. Our results confirm this expectation.

In Figure 3, we plot the average regret over all 20 queries, where the standard error is computed by repeating this procedure 5 times. Rank1ElimKL has the lowest regret of all algorithms except for KL-UCB and TS. Its regret is .9 percent lower than that of UCB1, and 79 percent lower than that of Rank1Elim. This is expected. Some real-world instances have a benign rank-1 structure, like Query 2, while others do not, like Query 1. Hence, we see a reduction in the average gains of Rank1ElimKL in Figure 3 as compared to Figure 2d. The high regret of Rank1Elim, which is also designed to exploit the problem structure, shows that it is more sensitive to unfavorable rank-1 structures. Thus, the good news is that Rank1ElimKL improves on this limitation of Rank1Elim. However, KL-UCB and TS perform better on average, and we believe this is due to the fact that 14 out of our 20 queries have L < 20, and hence KL < 200. This is in line with the results of Section 5.1, which suggest that the advantage of Rank1ElimKL over KL-UCB and TS will kick in only for much larger values of K and L.

6 Related Work

Our algorithm is based on Rank1Elim of Katariya et al. [2017]. The main difference is that we replace the confidence intervals of Rank1Elim, which are based on sub-Gaussian tail inequalities, with confidence intervals based on KL divergences. As discussed beforehand, this results in a unilateral improvement of their regret bound. The new algorithm is still able to exploit the problem structure of benign instances, while its regret is controlled on problem instances that are hard for Rank1Elim. As demonstrated in the previous section, the new algorithm is also a major practical improvement over Rank1Elim, while it remains competitive with alternatives on hard instances.

Several other papers studied bandits where the payoff is given by a low-rank matrix. Zhao et al. [2013] proposed a bandit algorithm for low-rank matrix completion, which approximates the posterior over latent item features by a single point. The authors do not analyze this algorithm. Kawale et al. [2015] proposed a bandit algorithm for low-rank matrix completion using Thompson sampling with Rao-Blackwellization. They analyze a variant of their algorithm whose n-step regret for rank-1 matrices is O((1/∆²) log n). This is suboptimal compared to our algorithm. Maillard and Mannor [2014] studied a bandit problem where the arms are partitioned into latent groups. In this work, we do not make any such assumptions, but our results are limited to rank 1. Gentile et al. [2014] proposed an algorithm that clusters users based on their preferences, under the assumption that the features of items are known. Sen et al. [2017] proposed an algorithm for contextual bandits with latent confounders, which reduces to a multi-armed bandit problem where the reward matrix is low-rank. They use an NMF-based approach and require that the reward matrix obeys a variant of the restricted isometry property. We make no such assumptions. Also, our learning agent controls both the row and the column, while in the above papers the rows are controlled by the environment.

Rank1ElimKL is motivated by the structure of the PBM [Richardson et al., 2007]. Lagree et al. [2016] proposed a bandit algorithm for this model, but they assume that the examination probabilities are known. Rank1ElimKL can be used to solve this problem without this assumption. The cascade model [Craswell et al., 2008] is an alternative way of explaining the position bias in click data [Chuklin et al., 2015].
Bandit algorithms for this class of models have been proposed in several recent papers [Kveton et al., 2015a; Combes et al., 2015; Kveton et al., 2015b; Katariya et al., 2016; Zong et al., 2016; Li et al., 2016].

7 Conclusions

In this work, we proposed Rank1ElimKL, an elimination algorithm that uses KL-UCB confidence intervals to find the maximum entry of a stochastic rank-1 matrix with Bernoulli rewards. The algorithm is a modification of Rank1Elim [Katariya et al., 2017], where the sub-Gaussian confidence intervals are replaced by ones based on KL divergences. As we demonstrate both empirically and analytically, this change results in a significant improvement. As a result, we obtain the first algorithm that is able to exploit the rank-1 structure without paying a significant penalty on instances where the rank-1 structure cannot be exploited.

We note that Rank1ElimKL uses the rank-1 structure of the problem and that there are no guarantees beyond rank 1. While the dependence of the regret of Rank1ElimKL on ∆ is known to be tight [Katariya et al., 2017], the question about the optimal dependence on µ is still open. Finally, we point out that TS and KL-UCB perform better than Rank1ElimKL in our experiments, especially for small K and L. This is because Rank1ElimKL is an elimination algorithm. Elimination algorithms tend to have higher regret initially than UCB-style algorithms because they explore more aggressively. It is not inconceivable to have UCB-style algorithms that leverage the rank-1 structure in the future.

References

[Auer and Ortner, 2010] Peter Auer and Ronald Ortner. UCB revisited: Improved regret bounds for the stochastic multi-armed bandit problem. Periodica Mathematica Hungarica, 61(1-2):55-65, 2010.

[Auer et al., 2002] Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47:235-256, 2002.

[Chuklin et al., 2015] Aleksandr Chuklin, Ilya Markov, and Maarten de Rijke. Click Models for Web Search. Morgan & Claypool Publishers, 2015.

[Combes et al., 2015] Richard Combes, Stefan Magureanu, Alexandre Proutiere, and Cyrille Laroche. Learning to rank: lower bounds and efficient algorithms. In Proceedings of the 2015 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems, 2015.

[Craswell et al., 2008] Nick Craswell, Onno Zoeter, Michael Taylor, and Bill Ramsey. An experimental comparison of click position-bias models. In Proceedings of the 1st ACM International Conference on Web Search and Data Mining, pages 87-94, 2008.

[Garivier and Cappe, 2011] Aurelien Garivier and Olivier Cappe. The KL-UCB algorithm for bounded stochastic bandits and beyond. In Proceedings of the 24th Annual Conference on Learning Theory, pages 359-376, 2011.

[Gentile et al., 2014] Claudio Gentile, Shuai Li, and Giovanni Zappella. Online clustering of bandits. In Proceedings of the 31st International Conference on Machine Learning, pages 757-765, 2014.

[Katariya et al., 2016] Sumeet Katariya, Branislav Kveton, Csaba Szepesvari, and Zheng Wen. DCM bandits: Learning to rank with multiple clicks. In Proceedings of the 33rd International Conference on Machine Learning, 2016.

[Katariya et al., 2017] Sumeet Katariya, Branislav Kveton, Csaba Szepesvari, Claire Vernade, and Zheng Wen. Stochastic rank-1 bandits. In AISTATS, 2017.

[Kawale et al., 2015] Jaya Kawale, Hung Bui, Branislav Kveton, Long Tran-Thanh, and Sanjay Chawla. Efficient Thompson sampling for online matrix-factorization recommendation. In Advances in Neural Information Processing Systems 28, pages 1297-1305, 2015.

[Kveton et al., 2015a] Branislav Kveton, Csaba Szepesvari, Zheng Wen, and Azin Ashkan. Cascading bandits: Learning to rank in the cascade model. In Proceedings of the 32nd International Conference on Machine Learning, 2015.

[Kveton et al., 2015b] Branislav Kveton, Zheng Wen, Azin Ashkan, and Csaba Szepesvari. Combinatorial cascading bandits. In Advances in Neural Information Processing Systems 28, pages 1450-1458, 2015.

[Lagree et al., 2016] Paul Lagree, Claire Vernade, and Olivier Cappe. Multiple-play bandits in the position-based model. CoRR, abs/1606.02448, 2016.

[Li et al., 2016] Shuai Li, Baoxiang Wang, Shengyu Zhang, and Wei Chen. Contextual combinatorial cascading bandits. In Proceedings of the 33rd International Conference on Machine Learning, pages 1245-1253, 2016.

[Maillard and Mannor, 2014] Odalric-Ambrym Maillard and Shie Mannor. Latent bandits. In Proceedings of the 31st International Conference on Machine Learning, pages 136-144, 2014.

[Markov, 2014] Ilya Markov. PyClick - click models for web search. https://github.com/markovi/pyclick, 2014.

[Richardson et al., 2007] Matthew Richardson, Ewa Dominowska, and Robert Ragno. Predicting clicks: Estimating the click-through rate for new ads. In Proceedings of the 16th International Conference on World Wide Web, pages 521-530, 2007.

[Sen et al., 2017] Rajat Sen, Karthikeyan Shanmugam, Murat Kocaoglu, Alexandros G. Dimakis, and Sanjay Shakkottai. Contextual bandits with latent confounders: An NMF approach. In AISTATS, 2017.

[Thompson, 1933] William R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3-4):285-294, 1933.

[Yandex, 2013] Yandex personalized web search challenge. https://www.kaggle.com/c/yandex-personalized-web-search-challenge, 2013.

[Zhao et al., 2013] Xiaoxue Zhao, Weinan Zhang, and Jun Wang. Interactive collaborative filtering. In Proceedings of the 22nd ACM International Conference on Information and Knowledge Management, pages 1411-1420, 2013.

[Zong et al., 2016] Shi Zong, Hao Ni, Kenny Sung, Nan Rosemary Ke, Zheng Wen, and Branislav Kveton. Cascading bandits for large-scale recommendation problems. In Proceedings of the 32nd Conference on Uncertainty in Artificial Intelligence, 2016.