Thompson Sampling for Complex Online Problems

Size: px
Start display at page:

Download "Thompson Sampling for Complex Online Problems"

Transcription

1 Thompson Sampling for Complex Online Problems Anonymous Author(s) Affiliation Address Abstract We study stochastic multi-armed bandit settings with complex actions over a set of basic arms, where the decision maker has to select a subset of the basic arms or a partition of the basic arms at every round (rather than only selecting a single basic arm). The reward of the complex action is some function of the basic arms rewards, and the feedback observed may not necessarily be the reward per-arm. For instance, when the complex actions are subsets of the arms, we may only observe the maximum reward over the chosen subset. We prove the first general frequentist regret bound for Thompson sampling applied to complex bandit problems. Our result makes no structural assumptions on the prior and holds for general discretely-supported prior distributions. The regret bound scales logarithmically with time but with a non-trivial, and often more improved, constant that captures the information structure of the complex bandit. As applications, we obtain corollaries that show improved regret bounds for a class of complex, subset-selection bandit problems. Using particle filters for computing posterior distributions without an explicit closed-form, we apply Thompson-sampling algorithms for subset selection and job-scheduling problems and present numerical results. 1 Introduction The Multi-Armed Bandit (MAB) is a classical framework in machine learning and optimization. In the basic MAB setting, there is a finite set of actions, each of which has a reward derived from some stochastic process, and a learner selects actions to optimize long-term performance. The MAB framework gives a crystallized abstraction of a fundamental decision problem whether to explore or exploit in the face of uncertainty. Bandit problems have been extensively studied, and several well-performing methods now exist for optimizing the reward [1,, 3, ]. However, the requirement that the actions rewards be independent is often a severe limitation, as seen in these examples: Web Advertising: Assume a publisher controlling advertisements on a web-site, selecting each time a (small) subset of ads to be displayed to the user. As the publisher is paid per click, it would like to maximize its revenue, but the dependency between different ads causes the problem not to decompose nicely. For example, showing two car ads might not significantly increase the click probability over a single car ad. Job Scheduling: Assume we have a small number of resources (say, machines) and in each time step we receive a set of jobs (the basic arms ), where the duration of each job follows some fixed but unknown distribution. The latency of a machine is the sum of the latencies of the jobs (basic arms) assigned to it, and the makespan of the system is the maximum latency over the machines. Here, the decision maker s complex action is to partition the jobs (basic arms) between the machines to minimize the makespan. These examples motivate settings where a more complex model than the simple MAB is required. Our high-level goal is to describe a methodology that can tackle such problems and also guarantee 1

2 high performance. An additional complication in the problems above is that it is unlikely that we will get to observe the reward of each basic action chosen. Rather, we can hope to receive only an aggregate reward for the complex action taken. Our approach to complex bandit problems stems from the idea that when faced with uncertainty, pretending to be Bayesian can be advantageous. A purely Bayesian view of the multi-armed bandit assumes that the model parameters (i.e., the arms distributions) are drawn from a prior distribution, from which one updates a posterior distribution based on the observations at hand. We argue that even in a frequentist setup, in which the stochastic model is unknown but fixed, working with a fictitious prior over the model (i.e., being pseudo- Bayesian) helps solve very general bandit problems with complex actions and observations. Algorithmically, the prescription is to perform Thompson sampling [, 6, 7]: Start with a fictitious prior distribution over the parameters of the basic arms of the model, whose posterior gets updated as actions are played. At suitable instants, a parameter is randomly drawn according to the posterior and the (complex) action optimal for the parameter is played. The intuition behind this approach is twofold: (1) Updating the posterior adds useful information about the true unknown parameter, so with more and more accumulated information the posterior typically shrinks to the true parameter. () Correlations among complex bandit actions (due to their dependence on the basic parameters) are implicitly captured by posterior updates on the space of basic parameters. The main advantage of a pseudo-bayesian approach, compared to other MAB methodologies such as UCB, is that it can handle a wide range of information models that go beyond observing the individual rewards alone. For example, suppose we observe only the total flow in the multi-commodity flow problem above. In Thompson sampling, we merely need to compute a posterior given this observation and use it. In contrast, it seems difficult to adapt an algorithm such as UCB to handle this case without having an exponential dependence on the number of arms 1. The Bayesian view taken by Thompson sampling also allows us to use efficient numerical algorithms such as particle filtering [9, 1] to estimate and track posterior distributions. Our main analytical result is a general regret bound for Thompson sampling in complex bandit settings. No specific structure is imposed on the initial (fictitious) prior, except that it be discretely supported and put a nonzero mass on the true model. The bound for this general setting scales logarithmically with time, as is standard in stochastic bandit results. But more interestingly, the preconstant for this logarithmic scaling can be explicitly characterized in terms of the bandit s KL divergence geometry and represents the information complexity of a bandit problem. The standard or basic multi-armed bandit imposes no structure among the actions, and its information complexity simply becomes a sum of terms, one for each separate action. However, in a complex bandit setting rewards are often informative about other parameters of the model, in which case the bound becomes more involved due to coupling across complex actions. 
Recent work has shown the regret-optimality of Thompson sampling for the basic MAB [7, 1], and has even provided regret bounds for a very special complex bandit setting the linear bandit case where the reward is a linear function of the actions [13]. However, the analysis of general complex bandits poses difficulties that cannot be circumvented using the techniques in existing work. Indeed, these existing proof techniques rely heavily on the structure of the prior and posterior distributions either product-form Beta distributions for the standard MAB or standard normal distributions for linear bandits. These methods break down when analyzing the evolution of complicated posterior distributions which often lack even a closed form expression. In contrast, we develop a new proof technique based on looking at the general form of the posterior. This allows us to track the posterior distributions that result from general action and feedback sets, 1 The work of Dani et al. [8] first extended the UCB framework to the case of linear cost functions. However, for more complex, nonlinear rewards (e.g., total multi-commodity flow, network shortest path or makespan for job scheduling), it is unclear how UCB-like algorithms can be applied apart from treating all the complex actions independently. More precisely, we obtain a bound of the form B + ClogT, in which C is a non-trivial preconstant that captures precisely the structure of correlations among actions and thus is often better than the decoupled sum-ofinverse-kl-divergences bounds seen in literature [11]. The additive constant (wrt time) B, though potentially large and depending on the total number of complex actions, appears to be merely an artifact of our proof technique tailored towards extracting the time scalingc. This is borne out, for instance, from numerical experiments on complex bandit problems in Section. We remark that such additive constants, in fact, often appear in regret analyses of basic Thompson sampling [1, 7].

3 and it is rather surprising that with almost no structural assumptions we can derive a regret upper bound that, in the standard case, reduces to Lai and Robbins classic lower bound, and gives nontrivial and improved regret scalings for complex bandits. In this way, our result can be viewed as a generalization of existing performance results for posterior sampling algorithms. We complement our theoretical findings with numerical studies of Thompson sampling. The algorithm is implemented using a simple particle filter [9] to maintain and sample from posterior distributions. We evaluate the performance of the algorithm on two complex bandit scenarios subset selection from a bandit and job scheduling. Related Work: Bayesian ideas for the multi-armed bandit date back nearly 8 years ago to the work of W. R. Thompson [], who introduced an elegant algorithm based on posterior sampling. However, there has been surprisingly meager work on using Thompson sampling in the control setup. A notable exception is [1] that develops general Bayesian control rules and demonstrates them for classic bandits and Markov decision processes (i.e., reinforcement learning). On the empirical side, a few recent works have demonstrated the success of Thompson sampling [6, 1]. Recent work has shown frequentist-style regret bounds for Thompson sampling in the standard bandit model [7, 1], and Bayes risk bounds in the purely Bayesian setting [16]. Our work differs from this literature in that we focus on complex actions and show frequentist regret bounds that take into account the structure of the problem. Regarding bandit problems with actions/rewards more complex than the basic MAB, a line of work that deserves particular mention is that of linear bandit optimization [17, 8, 18]. In this setting, actions are identified with decision vectors in a Euclidean space, and the obtained rewards are random linear functions of actions, drawn from an unknown distribution. Here, we typically see regret bounds for generalizations of the UCB algorithm that show polylogarithmic regret for this setting. However, the methods and bounds are highly tailored to the specific linear feedback structure and do not carry over to other kinds of feedback. Setup and Notation Consider a general stochastic model X 1,X,... of independent and identically distributed random variables drawn from R N (N represents, in a sense, the dimension of a standard MAB). The distribution of each X t is parametrized by θ Θ, where Θ denotes the set of candidate parameters. At each time t, an action A t is played from a set of candidate actions A, following which the decision maker obtains a stochastic observation Y t = f(x t,a t ) Y and a reward g(f(x t,a t )) R. Here, f andg are general fixed functions, and we will often denoteg f by the functionh. We denote by l(y;a,θ) the likelihood of observing y upon playing action a, when the distribution parameter isθ. Forθ Θ, leta (θ) be an action that yields the highest expected reward for a model with parameter θ, i.e., a (θ) := argmax a A E θ [h(x 1,a), with arbitrary tie-breaking 3. We use e (j) to denote the j-th unit vector in finite-dimensional Euclidean space. The goal is to play an action at each time t to minimize the (expected) regret over T rounds: R T := T t=1 h(x t,a (θ )) h(x t,a t ), or alternatively, the number of plays of suboptimal actions: T t=1 ½{A t a }. 3 Regret Performance: Overview We propose using Thompson sampling (Algorithm 1) to achieve low regret. 
Before stating the general regret bound, it is instructive to develop an intuitive understanding of how Thompson sampling learns to play good actions by adaptively shrinking the posterior distribution. To this end, let us assume that there are finitely many actions A. Let us also index the actions in A as {1,,..., A }, with the index A denoting the optimal action a (we will require this labeling later when we often associate each coordinate of A -dimensional space with the respective action). Denote by 3 The subscript θ denotes the probability measure parametrized by θ, and by default, the absence of a subscript is to be understood as working with the parameter θ. We refer to the latter objective as regret since, under bounded rewards, both the objectives scale similarly with the problem size. 3

4 Algorithm 1 Thompson Sampling Input: Parameter space Θ, action space A, output space Y, likelihood l(y;a,θ). Parameter: Distributionπ over Θ. Initialization: Set π = π. for each t = 1,,... end for 1. Draw θ t Θ according to the distributionπ t 1.. Play A t = a (θ t ). 3. Observe Y t = f(x t,a t ).. (Posterior Update) Set the distributionπ t over Θ to S Θ : π t (S) = S l(y t;a t,θ)π t 1 (dθ) Θ l(y t;a t,θ)π t 1 (dθ). D(θ a θ a ) the Kullback-Leibler divergence between the output distributions of parameters θ and θ upon playing action a, i.e., between the distributions{l(y;a,θ ) : y Y} and {l(y;a,θ) : y Y}. When ( action A t is played ) at time t, the prior density gets updated to the posterior as π t (dθ) exp π t 1 (dθ). Observe that the conditional expectation of the instantaneous log l(yt;at,θ ) l(y t;a t,θ) log-likelihood ratio log l(yt;at,θ ) l(y t;a t,θ), is simply the appropriate marginal KL divergence, i.e., [ ] E log l(yt;at,θ ) l(y t;a t,θ) A t = a A ½{A t = a}d(θa θ a ). Hence, up to a coarse approximation, log l(yt;at,θ ) l(y t;a t,θ) a A ½{A t = a}d(θa θ a ), with which we can write ( ) π t (dθ) exp a AN t (a)d(θ a θ a ) π (dθ), (1) with N t (a) := t i=1 ½{A i = a} denoting the play count of a. The quantity in the exponent can be interpreted as a loss suffered by the model θ up to time t, and each time an action a is played, θ incurs an additional loss of essentially the marginal KL divergence D(θ a θ a ). Upon closer inspection, the posterior approximation (1) yields detailed insights into the dynamics of posterior-based sampling. First, since exp ( a A N t(a)d(θ a θ a ) ) 1, the true model θ always retains a significant share of posterior mass: π t (dθ ) exp() π(dθ ) Θ 1 π(dθ) = π (dθ ). This means that Thompson sampling samplesθ, and hence playsa, with at least a constant probability each time, so that N t (a ) = Ω(t). Next, for each actiona A, let us defines a := {θ Θ : a (θ) = a} to be the decision region ofa, i.e., the set of models in Θ whose optimal action is a. Within S a, let S a be the models that exactly matchθ in the sense of the marginal distribution of actiona, i.e.,s a := {θ S a : D(θa θ a ) = }. Let S a be the remaining models ins a. Suppose we can show that each model in anys a,a a, is such thatd(θa θ a ) is bounded strictly away fromwith a gap ofξ >. Then, our preceding calculation immediately tells us that any such model is sampled at time t with a probability exponentially decaying in t: π t (dθ) e ξω(t) π (dθ) π (dθ ). Let us practically neglect the regret from suchs -sampling. On the other hand, how much does the algorithm have to work to make models in S a, a a suffer large (i.e., logt ) losses and thus rid them of significant posterior probability? A model θ S a suffers loss whenever the algorithm plays an action a for which D(θ a θ a ) >. Hence, several actions can help in making a bad model (or set of models) suffer large enough loss. Imagine that we track the play count vector N t := (N t (a)) a A in the integer lattice from t = Note: Plays of a do not help increase the losses of these models.

5 through t = T, from its initial value N = (,...,). There comes a first time τ 1 when some action a 1 a is eliminated (i.e., when all its models losses exceed logt ). The argument of the preceding paragraph indicates that the play count of a 1 will stay fixed at N τ1 (a 1 ) for the remainder of the horizon up to T. Moving on, there arrives a time τ τ 1 when another action a / {a,a 1 } is eliminated, at which point its play count ceases to increase beyond N τ (a ), and so on. The upshot of the calculation is this: Continuing until all actions a a (i.e., the regions S a) are eliminated, we get a worst-case combinatorial bound for the total number of times suboptimal actions can be played. If we let z k = N τk, i.e., the play counts of all actions at time τ k, then for all i k we must have the constraint z i (a k ) = z k (a k ) as plays of a k do not occur after time τ k. Moreover, min θ S ak z k,d θ logt : action a k is eliminated precisely at time τ k. A crude bound on the total number of bad plays is thus max z k 1 s.t. play count sequence {z k }, suboptimal action sequence {a k }, z i (a k ) = z k (a k ),i k, min θ S a k z k,d θ logt, k. The final constraint above ensures that an action a k is eliminated at time τ k, and the penultimate constraint encodes the fact that the eliminated action a k is not played after time τ k. The bound not only depends on logt. but also on the KL-divergence geometry of the bandit, i.e., the marginal divergencesd(θ a θ a ). Notice that no specific form for the prior or posterior was assumed to derive the bound, save the fact that π (dθ ), i.e., that the prior puts enough mass on the truth. The punchline is this: All our heuristic calculations leading up to the bound () can be made precise in a probabilistic sense. Theorem 1, to follow, states that under reasonable priors over a discrete set of models, the number of suboptimal plays scales with time as (), with high probability. We will also see how the bound (), though general-looking, is non-trivial in that (a) for the standard multi-armed bandit, it is essentially the optimum regret scaling, and (b) for a family of complex bandit problems, it can be significantly less than the decoupled bound in (a). Regret Performance: Formal Results Our main result is a high-probability regret bound for Thompson sampling for large enough time horizons. We prove the bound under the following assumptions about the parameter space Θ, action space A, observation space Y, and the fictitious prior π. Assumption 1 (Finitely many actions, observations). The action and observation spaces A and Y are finite: A, Y <. Assumption (Bounded rewards). x X,a A : h(x,a) [,1] 6. Assumption 3 (Discrete prior, and Grain of truth ). The prior distribution π is supported over a discrete set of particles: Θ = {θ 1,...,θ L }, with θ Θ and π(θ ) >. Furthermore, there exists Γ (,1/) such that Γ l(y;a,θ) 1 Γ θ Θ,a A,y Y. Assumption (Unique best action). 7 The optimal action in the sense of expected reward is unique, i.e., E[h(X 1,a )] > max a A,a a E[h(X 1,a)]. For each action a A, let S a := {θ Θ : a (θ) = a} be the set of parameters for which playing a is optimal. For any suboptimal action a a, let S a := {θ S a : D(θa θ a ) = }, S a := S a \S a, and ξ := inf θ S D(θ a a θ a ). We can now state our regret bound for Thompson sampling for general complex bandits. The bound is a refinement of the heuristic path-based bound derived in Section 3. Theorem 1 (Thompson Sampling, General Regret Bound). 
Under Assumptions 1-, the following holds for the Thompson Sampling algorithm. For δ,ǫ (,1), there exists T such that 6 In general, any upper bound on the absolute value of the reward function suffices. 7 This assumption is made only for the sake of notational convenience and does not affect the essence of this paper s results. ()

6 for all T T, with probability at least 1 δ, a a N T(a) B + C(logT), where B B(δ,ǫ,A,Y,Θ,π) is a problem-dependent constant that does not depend ont, and 8 : C(logT) := max s.t. z k (a k ) z k Z + {},a k A\{a },1 k, z i z k,z i (a k ) = z k (a k ),i k, 1 j,k : min z k,d θ 1+ǫ θ S a k logt, min z k e (j),d θ < 1+ǫ θ S a k logt. The proof may be found in Appendix A of the supplementary material, and uses a self-normalized concentration inequality to track the evolution of the posterior distribution in its general form. The usefulness of Theorem 1 lies in the fact that it can couple information across complex actions and give better leading constants for regret scaling than the standard decoupled case. We remark that the non-scaling additive constant B seems large in our proofs, yet we believe that this is an artifact of our proof technique tailored primarily to extract the time scaling of the regret. Indeed, numerical results in Section show practically no additive factor behaviour..1 Application: Playing Subsets of Bandit Arms, Full Information Let us take a standard N-armed Bernoulli bandit with arm parameters µ 1 µ µ N. Suppose the (complex) actions are all size M subsets of the N arms. Following the choice of a subset, we get to observe the rewards of all the M chosen arms (thus the output space is {,1} M ), and receive some bounded reward of the chosen arms. A natural (discrete) prior for this problem can be obtained by discretizing each of the N basic { ) N dimensions and putting uniform mass over all points: Θ = β,β,... ( β} 1β 1,β (,1), π(θ) = 1 Θ θ Θ. We can then show: Corollary 1. Suppose µ (µ 1,µ,...,µ N ) Θ and µ N M < µ N M+1. Then, the following holds for the Thompson sampling algorithm. For δ,ǫ (,1), there exists T such that for all T T, with probability at least1 δ, ( ) a a N N M 1+ǫ 1 T(a) B + i=1 D(µ logt, i µ N M+1) where B B (δ,ǫ,a,y,θ,π) is a problem-dependent constant that does not depend on T. This result, whose proof is in Appendix B of the supplementary material, illustrates the power of additional information from observing several arms of a bandit. Even though the total number of actions ( N M) can be exponential inm, the regret bound still scales aso((n M)logT). Note also that for M = 1 (the standard MAB), the regret scaling is essentially N M 1 i=1 D(µ logt, i µ N M+1) which is interestingly the best known regret bound for standard Bernoulli bandits obtained by specialized, regret-optimal algorithms such as KL-UCB [], and more recently, Thompson Sampling with the Beta prior [1].. Application: Playing Subsets of Bandit Arms, MAX Reward Using the same setting and size-m subset actions as before but not being able to observe all the individual arms rewards results in much more interesting bandit settings. Here, we assume that we get to observe as the reward only the maximum value of M chosen arms of the standard N- armed Bernoulli bandit. The feedback is still aggregated across basic arms but at the same time very 8 C(logT) C(T,δ,ǫ,A,Y,Θ,π) as well, but we suppress the dependence on the problem parameters since we are mainly concerned with the time scaling. (3) 6

7 Cumulative regret for MAX 1 1 N = 1 arms, M = Subset size =. Thompson Sampling UCB x 1 Cumulative regret for MAX 1 x N = 1 arms, M = Subset size = 3. 1 Thompson Sampling 1 UCB x 1 for Makespan Scheduling 1 jobs on machines. 3 Thompson Sampling Figure 1: Left and center: Cumulative regret with observing the maximum of a pair out of 1 arms (left), and that of a triple out of 1 arms (center), for (a) Thompson sampling using a particle filter, and (b) UCB treating each subset as a separate actions. The arm means are chosen to be equally spaced in [,1]. The regret is averaged across 1 runs, and the confidence intervals shown are±1 standard deviation. Right: Cumulative regret with respect to the best makespan with particle-filter-based Thompson sampling, for scheduling 1 jobs on machines. The job means are chosen to be equally spaced in [,1]. The best job assignment gives an expected makespan of 31. The regret is averaged across 1 runs, and the confidence intervals shown are ±1 standard deviation. different from the full information case observing a reward of is very uninformative whereas a value of 1 is highly informative about the constituent arms. We apply the general machinery of Theorem 1 to obtain a non-trivial regret bound for the MAX bandit.. Let β (,1), and suppose that Θ = {1 β R,1 β R 1,...,1 β,1 β} N, for positive integersrandn. As before, letµ Θ denote the basic arms parameters, and letµ min := min a A i a (1 µ i). Corollary. For M N,M N,δ,ǫ (,1), there existst such that for allt T, with probability at least 1 δ, ( )[ a a N 1+ǫ T(a) B 3 +(log) 1+ ( ) ] N 1 logt M This regret bound is of the order of ( N 1, which is significantly smaller than the usual, decoupled bound of A logt M ) logt ) logt µ min µ min (1 β). = ( N µ M by a multiplicative factor of (N 1 = min µ min ( M) N M N N, or by an additive factor of ( ) N 1 logt M 1. In fact, though this is a provable reduction in the regret scaling, the µ min actual reduction is likely to be much better in practice the experimental results in Section attest to this. The proof of the corollary uses sharp combinatorial estimates relating to vertices on the N-dimensional hypercube, and can be found in Appendix C, in the supplementary material. Numerical Experiments We evaluate the performance of Thompson sampling (Algorithm 1) on two complex bandit settings (a) Playing subsets of arms with the MAX reward function, and (b) Job scheduling over machines to minimize makespan. Where the posterior distribution is not closed-form, we approximate it using a particle filter [9, 1] that allows efficient updates after each play. 1. Subset Plays, MAX Reward: We assume the setup of Section. where one plays a size-m subset in each round and observes the maximum value. The individual arms reward parameters are taken to be equi-spaced in (, 1). It is observed that Thompson sampling outperforms standard decoupled UCB by a wide margin in the cases we consider (Figure 1, left and center). The differences are especially pronounced for the larger problem size N = 1,M = 3, where UCB, that sees ( N M) separate actions, appears be in the exploratory phase throughout. Figure affords a closer look at the regret for the above problem, and presents the results of using a flat prior over a uniformly discretized grid of models in[,1] 1 the setting of Theorem 1.. 
Subset Plays, Average Reward: We apply Thompson sampling again to the problem of choosing the best M out of N basic arms of a Bernoulli bandit, but this time receiving a reward that is the average value of the chosen subset. This specific form of the feedback makes it possible to use a continuous, Gaussian prior density over the space of basic parameters that is updated to a Gaussian posterior assuming a fictitious Gaussian likelihood model [13]. This is a fast, practical alternative to UCB-style deterministic methods [8, 18] which require performing a convex optimization every in- M ) 7

8 (a) N = 1, M = (b) N = 1, M = (c) N = 1, M = (d) N = 1, M = Figure : Cumulative regret with observing the maximum value of M out of N = 1 arms for Thompson sampling. The prior is uniform over the discrete domain {.1,.3,.,.7,.9}N, with the arms means lying in the same domain (setting of Theorem 1). The regret is averaged across 1 runs, and the confidence intervals shown are ±1 standard deviation stant. Figure 3 shows the regret of Thompson sampling with a Gaussian prior/posterior for choosing various size M subsets (3,, 1,, ) out of N = 1 arms. It is practically impossible to naively apply a decoupled bandit algorithm over such a problem due to the very large number of complex actions (e.g., there are 113 actions even for M = 1) 9. However, Thompson sampling merely samples from a N = 1 dimensional Gaussian and picks the best M coordinates of the sample, which yields a dramatic reduction in running time. The constant factors in the regret curves are seen to be modest when compared to the total number of complex actions x 1. 1 x x x 1 (a) (1, 3) 6 8 (b) (1, ) 1 x 1 x (c) (1, 1) 1 x (d) (1, ) 1 x x 1 (e) (1, ) Figure 3: Cumulative regret for (N, M ): Observing the average value of M out of N = 1 arms for Thompson sampling. The prior is a standard normal independent density over N dimensions, and the posterior is also normal under a Gaussian likelihood model. The regret is averaged across 1 runs. Confidence intervals are ±1 standard deviation. 3. Job Scheduling: We consider a stochastic job-scheduling problem in order to illustrate the versatility of Thompson sampling for bandit settings more complicated than subset actions. There are N = 1 types of jobs and machines. Every job type has a different, unknown mean duration, with the job means taken to be equally spaced in [, N ], i.e., NiN +1, i = 1,..., N. At each round, one job of each type arrives to the scheduler, with a random duration that follows the exponential distribution with the corresponding mean. All jobs must be scheduled on one of two possible machines. The loss suffered upon scheduling is the makespan, i.e., the maximum of the two job durations on the machines. Once the jobs in a round are assigned to the machines, only the total durations on the machines machines can be observed. Figure 1 (right) shows the results of applying Thompson sampling with an exponential prior for the jobs means along with a particle filter. 6 Discussion We applied Thompson sampling to balance exploration and exploitation in bandit problems where the action/observation space is complex. Our theoretical analysis provides a generic regret bound for Thompson sampling, scaling logarithmically in time but with improved constants that capture the structure of the problem. In practice, the algorithm is easy to implement using sequential MonteCarlo methods such as particle filters. Moving forward, it would be interesting to see if we can get a Thompson sampling-like algorithm that works for both stochastic and adversarial bandits. Another natural extension is to consider models where the dynamics are Markov processes or even Markov decision processes, where using Thompson sampling or algorithms akin to it has been shown to work well in practice [19, 1]. We can also hope to develop a general theory to handle complex spaces like the X-armed bandit problem [, 1] with a continuous state space. 9 Both the ConfidenceBall algorithm of Dani et al. [8] and the OFUL algorithm [18] account for linear feedback across coupled actions via tight confidence sets. 
However, as stated, they require searching over the space of all actions/subsets, so we are unclear about how one might efficiently apply them here. 8

9 References [1] J. C. Gittins, K. D. Glazebrook, and R. R. Weber, Multi-Armed Bandit Allocation Indices. Wiley, 11. [] P. Auer, N. Cesa-Bianchi, and P. Fischer, Finite-time analysis of the multiarmed bandit problem, Machine Learning, vol. 7, no. -3, pp. 3 6,. [3] J.-Y. Audibert and S. Bubeck, Minimax policies for adversarial and stochastic bandits, in Proceedings of the nd Annual Conference on Learning Theory, Omnipress, pp ,. [] A. Garivier and O. Cappé, The KL-UCB algorithm for bounded stochastic bandits and beyond, Journal of Machine Learning Research - Proceedings Track, vol. 19, pp , 11. [] W. R. Thompson, On the likelihood that one unknown probability exceeds another in view of the evidence of two samples, Biometrika, vol., no. 3, pp. 8 9, [6] S. Scott, A modern Bayesian look at the multi-armed bandit, Applied Stochastic Models in Business and Industry, vol. 6, pp , 1. [7] S. Agrawal and N. Goyal, Analysis of Thompson sampling for the multi-armed bandit problem., Journal of Machine Learning Research - Proceedings Track, vol. 3, pp , 1. [8] V. Dani, T. P. Hayes, and S. M. Kakade, Stochastic linear optimization under bandit feedback, in COLT, pp , 8. [9] B. Ristic, S. Arulampalam, and N. Gordon, Beyond the Kalman Filter: Particle Filters for Tracking Applications. Artech House,. [1] A. Doucet, N. D. Freitas, and N. Gordon, Sequential Monte Carlo Methods in Practice. Springer, 1. [11] T. L. Lai and H. Robbins, Asymptotically efficient adaptive allocation rules, Advances in Applied Mathematics, vol. 6, no. 1, pp., 198. [1] E. Kaufmann, N. Korda, and R. Munos, Thompson sampling: An asymptotically optimal finite-time analysis, in Proceedings of the Twenty-third International Conference on Algorithmic Learning Theory, 1. [13] S. Agrawal and N. Goyal, Thompson sampling for contextual bandits with linear payoffs, in Advances in Neural Information Processing Systems, pp. 31 3, 11. [1] P. A. Ortega and D. A. Braun, A minimum relative entropy principle for learning and acting, JAIR, vol. 38, pp. 7 11, 1. [1] O. Chapelle and L. Li, An empirical evaluation of Thompson sampling, in NIPS-11, 11. [16] D. Russo and B. V. Roy, Learning to optimize via posterior sampling, CoRR, vol. abs/131.69, 13. [17] P. Auer, Using confidence bounds for exploitation-exploration trade-offs, J. Mach. Learn. Res., vol. 3, pp. 397, 3. [18] Y. Abbasi-Yadkori, D. Pal, and C. Szepesvari, Improved algorithms for linear stochastic bandits, in Advances in Neural Information Processing Systems, pp. 31 3, 11. [19] P. Poupart, Bayesian reinforcement learning, in Encyclopedia of Machine Learning, pp. 9 93, 1. [] N. Srinivas, A. Krause, S. Kakade, and M. Seeger, Gaussian process optimization in the bandit setting: No regret and experimental design, in ICML, pp. 11 1, 1. [1] S. Bubeck, R. Munos, G. Stoltz, and C. Szepesvári, X-armed bandits, J. Mach. Learn. Res., vol. 1, pp , 11. [] R. Ahlswede, H. Aydinian, and L. Khachatrian, Maximum number of constant weight vertices of the unit n-cube contained in a k-dimensional subspace, Combinatorica, vol. 3, no. 1, pp., 3. 9

10 Appendices: Thompson Sampling for Complex Online Problems A Proof of Theorem 1 Sampling from the posterior as proportional to exponential weights: Let N t (a) be the number of times action a has been played up to (and including) time t. At any time t, the posterior distributionπ t over Θ is given by Bayes rule: S Θ : π t (S) = W t(s) W t (Θ), W t(s) := W t (θ)π(dθ), () with the weight W t (θ) of each θ being the likelihood of observing the history under θ: t [ ] l(yi ;A i,θ) W t (θ) := l(y i=1 i ;A i,θ = t ) a Ay Y i=1 = exp t ½{A i = a,y i = y}log l(y;a,θ ) l(y;a,θ) a Ay Y i=1 = exp N t (a) t i=1 ½{A i = a,y i = y} N t (a) a A y Y S [ ] ½{Ai=a,Y l(y;a,θ) i=y} l(y;a,θ ) log l(y;a,θ ) l(y;a,θ) where we set N t (a) := t i=1 ½{A t i=1 i = a}. Let Z t (a,y) := ½{Ai=a,Yi=y} N t(a), and Z t (a) := (Z t (a,y)) y Y R Y. Thus Z t (a) is the empirical distribution of the observations from playing action a up to timet. The expression forw t (θ) above becomes W t (θ) = exp N t (a)d(θa θ a ) N t (a) (Z t (a,y) l(y;a,θ ))log l(y;a,θ ). l(y;a,θ) a A a A y Y () Here, for any θ Θ and a A, θ a is used to denote the marginal probability distribution {l(y;a,θ)} y Y of the output of actionawhen the bandit has parameterθ. For probability measures ν,µ over Y,D(ν µ) measures the Kullback-Leibler (KL) divergence of ν wrtµ. Note that by definition, W t (θ ) = 1 at all times t a fact that we use often in the analysis. Instead of observing Y t = f(x t,a t ) at each round t, consider the following alternative probability space for the stochastic bandit in a time horizon 1,,... with probability measure P. First, for each action a A and each time k = 1,,..., an independent random variable Q a (k) Y, is drawn with P[Q a (k) = y] = l(y;a,θ ). Denote by Q {Q a (k)} a A,k 1 the A matrix of these independent random variables. Next, at each round t = 1,,..., playing action A t = a yields the observation Y t = Q a (N a (t)+1). Thus, in this space, Z t (a,y) = U Nt(a)(a,y), where U j (a,y) := 1 j j ½{Q a (k) = y}. The following lemma shows that the distribution of sample paths seen by a bandit algorithm in both probability spaces (i.e., associated with the measures P and P) is identical. This allows us to equivalently work in the latter space to make statements about the regret of an algorithm. Lemma 1. For any action-observation sequence (a t,y t ), t = 1,...,T of a bandit algorithm, P[ 1 t T (A t,y t ) = (a t,y t )] = P[ 1 t T (A t,y t ) = (a t,y t )]. Henceforth, we will drop the tilde on P and always work in the latter probability space, involving the matrixq., 1

11 Lemma. For any suboptimal action a a, δ a = min D(θa θ a ) >. θ S a Let N t(a) (resp. N t (a)) be the number of times that a parameter has been drawn from S a (resp. S a ), so that N t (a) = N t(a)+n t (a). The following self-normalized, uniform deviation bound controls the empirical distribution of each row Q a ( ) of the random reward matrixq. It is a version of a bound proved in [18]. Theorem. Let a A,y Y and δ (,1). Then, with probability at least1 δ, ( ) k 1 U k (a,y) l(y;a,θ ) 1 k k log. δ Put c := log Y A δ, and ρ(x) ρ c (x) := data event G G(c) := c+ logx for x >. It follows, then, that the good { a A y Y k 1 U k (a,y) l(y;a,θ ) ρ(k) } k occurs with probability at least 1 δ. Lemma 3. Fix ǫ (,1). There exist λ,n, not depending on T, so that the following is true. For any θ Θ, a A and y Y, under the event G, 1. At all timest 1, N t (a)d(θ a θ a )+N t (a) y Y. IfN t (a) n, then N t (a)d(θ a θ a )+N t (a) y Y Proof. Under G, we have (Z t (a,y) l(y;a,θ ))log l(y;a,θ ) l(y;a,θ) λ, (Z t (a,y) l(y;a,θ ))log l(y;a,θ ) l(y;a,θ) ()N t(a)d(θ a θ a ). N t (a)d(θa θ a )+N t (a) (Z t (a,y) l(y;a,θ ))log l(y;a,θ ) l(y;a,θ) y Y N t (a)d(θa θ a ) N t (a) Z t (a,y) l(y;a,θ ) log l(y;a,θ ) l(y;a,θ) y Y N t (a)d(θa θ a ) ρ(n t (a)) N t (a) log l(y;a,θ ) l(y;a,θ). (6) y Y For a fixed θ Θ, a A, the expression above diverges to +, viewed as a function of N t (a), as N t (a) (except when θ a = θ a, in which case the expression is identically.) Hence, the expression achieves a finite minimum λ θ,a (not depending on T ) over non-negative integers N t (a) Z +. Since there are only finitely many parameters θ Θ, it follows that if we set λ := max θ Θ,a A λ θ,a, then the above expression is bounded below by λ, uniformly across Θ. This proves the first part of the lemma. To show the second part, notice again that for fixed θ Θ and a A, there exists n θ,a such that ρ(x) x log l(y;a,θ ) l(y;a,θ) ǫxd(θ a θ a ), x n θ,a y Y since ρ(x) = o(x). Setting n := max θ Θ,a A n θ,a then completes the proof of the second part. 11

12 A.1 Regret due to sampling from S a The result of Lemma 3 implies that under the event G, and at all times t 1: π t (θ ) = W t(θ )π(θ ) Θ W t(θ)π(dθ) = π(θ ) Θ W t(θ)π(dθ) π(θ ) Θ exp(λ A )π(dθ) = π(θ )e λ A p, say. (7) Also, under the event G, the posterior probability of θ S a at all times t can be bounded above using Lemma 3 and the basic bound in (6): π t (θ) = W t(θ)π(θ) Θ W t(ψ)π(dψ) W t(θ)π(θ) π(θ ) = π(θ) π(θ ) exp N t (a)d(θa θ a ) a A a A π(θ)eλ A π(θ ) π(θ)eλ A π(θ ) N t (a) y Y exp N t (a )D(θa θ a ) N t(a ) y Y exp N t (a )D(θa θ a )+ρ(n t(a)) N t (a ) y Y (Z t (a,y) l(y;a,θ ))log l(y;a,θ ) l(y;a,θ) (Z t (a,y) l(y;a,θ ))log l(y;a,θ ) l(y;a,θ) log l(y;a,θ ) l(y;a,θ). In the above, the penultimate inequality is by Lemma 3 applied to all actions a a, and the final inequality follows in a manner similar to (6), for action a. Letting d := eλ A π(θ ), we have that under the event G, for a a and θ S a, π t (θ) dπ(θ)exp N t (a )D(θa θ a )+ρ(n t(a)) N t (a ) log l(y;a,θ ) l(y;a,θ). (8) y Y Recall that by definition, any θ S a with a a can be resolved apart from θ in the action a, i.e., D(θa θ a ) ξ. Moreover, the discrete prior assumption (Assumption 3) implies that ξ >. Using this, we can bound the right-hand side of (8) further under the event G: ( π t (θ) dπ(θ)exp ξn t (a )+ρ(n t (a)) N t (a )log 1 Γ ). (9) Γ Integrating (9) over θ S a and noticing that π(s a) 1 gives, under G, ( π t (S a) dexp ξn t (a )+ρ(n t (a)) N t (a )log 1 Γ ). (1) Γ We can now estimate, using the conditional version of Markov s inequality, the number of times that parameters from S a are sampled under good data G: [ P ½{θ t S a} > η ] G η 1 E [ ½{θ t S a} ] G = η 1 E [ π t (S a) ] G t=1 η 1 t=1 t=1 ( [ ( 1 E dexp ξn t (a )+ρ(n t (a)) N t (a )log 1 Γ ) ]) G, (11) Γ where the final inequality is by (1) and the fact that π t (S a) a bdenotes the minimum of a and b. t=1 1

13 At each time t, if we let F t denote the σ-algebra generated by the random variables {(θ i,a i,y i ) : i t}, then E [e ] [ [ ξnt(a ) G = E E e ] ξnt(a ) ] Ft 1,G G [ [ = E e ξnt 1(a ) E e ] ξ½{at=a } ] Ft 1,G G [ [ E e ξnt 1(a ) E e ] ξ½{θt=θ } ] Ft 1,G G (θ t = θ A t = a ) [ = E e ξnt 1(a ) ( π t (θ )e ξ +1 π t (θ ) ) ] G E [e ξnt 1(a ) ( p e ξ +1 p ) ] G = ( p e ξ +1 p ) E [e ] ξnt 1(a ) G, where, in the penultimate step, we use π t (θ ) p ½ G from (7). Iterating this estimate and using it in (11) together with the trivial bound N t (a ) t gives [ P ½{θ t S a} > η ] G η (1 d 1 ( p e ξ +1 p ) ( t exp ρ(t) tlog 1 Γ )). Γ t=1 t=1 Since p e ξ + 1 p < 1 and ρ(t) t = o(t), the sum above is dominated by a geometric series after finitely many t, and is thus a finite quantity α <, say. (Note that α does not depend on T.) Replacing δ by δ A and taking a union bound over alla a, this proves Lemma. There existsα < such that [ ] P G, a a ½{θ t S a} > α A δ. δ A. Regret due to sampling from S a For θ Θ,a A, define b θ,a : R + R by { λ, x < n b θ,a (x) := ()xd(θa θ a ), x n, t=1 where λ and n satisfy the assertion of Lemma 3. Thus, by Lemma 3, under G, and for all θ Θ, W t (θ) e a A b θ,a(n t(a)) e a A b θ,a(n t (a)), where the last inequality is because N t (a) = N t(a) + N t (a), and because b θ,a (x) is monotone non-decreasing inx. Note: In what follows, we assume that T > is large enough such that logt λ A ǫ holds. We proceed to define the following sequence of non-decreasing stopping times, and associated sets of actions, for the time horizon 1,,..., T. Let τ := 1 and A :=. For each k = 1,...,, let τ k := min τ k 1 t T s.t. a k / A k 1 {a }, k 1 min N θ S a τ m (a m )D(θa m θ am )+ N t(a)d(θ a θ a ) 1+ǫ k logt. m=1 a/ A k 1 In other words, for eachk,a k represents a set of eliminated suboptimal actions. τ k is the first time after τ k 1, when some suboptimal action (which is not already eliminated) gets eliminated in the sense of satisfying the inequality in (1). Essentially, the inequality checks whether the condition a a N t(a)d(θ a θ a ) logt (1) 13

14 is met for all particles θ S a k at time t, with a slight modification in that the play count N t(a) is frozen ton τm (a m ) if actionahas been eliminated at an earlier timeτ m t, and the introduction of the factor 1+ǫ to the right hand side. In case more than one suboptimal action is eliminated for the first time at τ k, we use a fixed tiebreaking rule inato resolve the tie. We then put A k := A k 1 {a k }. Thus, τ τ 1... τ T, and A A 1... A = A. For each action a a, by definition, there exists a unique τ k for which a is first eliminated at τ k, i.e.,a k \A k 1 = a. Let τ(a) := τ k. The following lemma states that after an action a is eliminated, the algorithm does not sample from S a more than a constant number of times. Lemma. IflogT λ A, then [ P G, k T t=τ k +1 ] ½{θ t S a k } > A δπ(θ δ. ) Proof. Observe that under G, whenever T t > τ k, every θ S a k satisfies ( ) W t (θ) exp exp ( a Ab θ,a (N t(a)) a A(()N t(a)d(θ a θ a ) λ) exp () k 1 m=1 ( exp () 1+ǫ logt +ǫlogt ) = exp ( () a AN t(a)d(θ a θ a )+λ A N τ m (a m )D(θa m θ am ) () N t(a)d(θ a θ a )+λ A a/ A k 1 ) = 1 T. The second inequality above is because the definition of b θ,a (x) implies that x (1 ǫ)xd(θa θ a ) b θ,a (x) λ. The penultimate inequality above is due to the fact that for any m k, we have τ m τ k t, implying that N t(a m ) N τ m (a m ). We now estimate E [ ½{t > τ k }½{θ t S a k } ] [ [ G = E E ½{t > τk }½{θ t S a k } ] ] G,Ft G which implies that [ T E Thus, = E [ ½{t > τ k }π t (S a k ) ] G = E ½{t > τ k } ] E [½{t > τ k } T 1 G π(θ T 1 ) π(θ ), t=τ k +1 ½{θ t S a k } G P [ T t=τ k +1 ] = T t=1 ½{θ t S a k } > W S a t (θ)π(dθ) k Θ W t(θ)π(dθ) G E [ ½{t > τ k }½{θ t S a k } G ] 1 π(θ ). ] 1 G δπ(θ δ. ) Replacing δ by δ A and taking a union bound over k = 1,,..., proves the lemma. ) 1

15 Now we bound the number of plays of suboptimal actions under the event H := G { } { } a a ½{θ t S a} α A T k ½{θ t S a δ k } A δπ(θ, ) t=1 t=τ k +1 which, according to the results of Theorem, Lemma and Lemma, occurs with probability at least 1 (δ +δ). Under the event H, we have a a N T(a) = = = N T(a k ) N τ k (a k )+ N τ k (a k )+ N τ k (a k )+ A δπ(θ ). (N T(a k ) N τ k (a k )) T t=τ k +1 Lemma 6. Under H, N τ k (a k ) C T, where C T solves C(logT) := max s.t. z k (a k ) ½{θ t S a k } z k Z + {},a k A\{a },1 k, z i z k,z i (a k ) = z k (a k ),i k, 1 j,k : min z k,d θ 1+ǫ θ S a k logt, min z k e (j),d θ < 1+ǫ θ S a k logt. Proof. With regard to the definition of the τ k and a k in (1), if we take and a k = a k, 1 k, z k (a) = { N τ(a) (a), τ(a) τ k, N τ k (a), τ(a) > τ k, then it follows, from (1), that the z k and a k satisfy all the constraints of the optimization problem (13). We also have z k (k) = N τ k (a k ). This proves the lemma. B Proof of Corollary 1 The optimal action (in this case a subset) is a = {N M +1,...,N}. It can be checked that the assumptions 1- are verified, thus the bound (3) applies and we will be done if we estimatec(logt). The essence of the proof is to first partition the space of suboptimal actions (subsets) according to the least-index basic arm that they contain, i.e., for i = 1,,...,N M, let A i := {a [N] : a a,min{j a} = i} be all the actions whose least-index (or weakest ) arm isi This covers all ofa\{a } since every suboptimal set must contain a basic arm of indexn M or lesser. (13) 1

16 Take any sequence {z k }, {a k} feasible for (3) Fix 1 i N M and consider the sum ( ) k:a k A i z k (a k ). We claim that this does not exceed 1 + logt. If, 1+ǫ 1 D(µ i µ N M+1) on the contrary, it does, put ˆk := max{k : a k A i }. Take any model θ S aˆk. We must have D(µ a θ a ) =, and since the KL divergence due to observing a tuple of M independent rewards is simply the sum of the M individual (binary) KL divergences, this forces θ j = µ j for all j N M +1. However, the optimal action forθ isaˆk containing the basic armi. Hence, we get that θ i µ N M+1 µ i, which implies that D(µ i θ i ) D(µ i µ N M+1 ). It now remains to estimate zˆk e (ˆk),D θ = N zˆk(a) δ j aˆk,d(µ j θ j ) j=1 a:j a ( ) zˆk(a) 1 D(µ i θ i ) a:i a ( ) zˆk(a) 1 D(µ i µ N M+1 ) a A i ( ) = z k (a k ) 1 D(µ i µ N M+1 ) k:a k A i > logt, by hypothesis. This violates the final inequality of (3) and yields the desired contradiction. Since the above argument is valid for any 1 i N M, summing over all such i completes the proof. C Proof of Corollary Lemma 7. Let T be large enough such that max θ Θ,a A D(θa θ a ) 1+ǫ logt. Then, the optimization problem (3) admits the following upper bound: C(logT) max z 1 s.t. z R {}, a A,a a, min θ S a z,d θ (1+ǫ) logt, ( ) 1+ǫ logt, â A,â a. z(â) δâ Proof. Take a feasible solution {z k,a k } for the optimization problem (3). We will show that z = z and a = a satisfy the constraints (1) above and yield the same objective function value in both optimization problems. First, z 1 = â A,â a z(â) = z (a k ) = z k (a k ), asz (a k ) z k (a k ), for allk, by (3). This shows that the objective functions of both (3) and (1) are equal at {z k,a k } and (z, a) respectively. (1) 16

17 Next, for any 1 j and the unit vector e (j), we have min θ S a z,d θ = min θ S a k z k,d θ min θ S a k z k e (j),d θ + max θ Θ,a A D(θ a θ a ) 1+ǫ 1+ǫ (1+ǫ) logt + logt = logt. This shows that the penultimate constraint in (1) is satisfied. For the final constraint in (1), fix 1 j, so that we have δ aj z(a j ) = δ aj z j (a j ) min θ S a exactly as in the preceding derivation. This implies that z(â) δâ z j,d θ (1+ǫ) logt, ( 1+ǫ Lemma 8. Let T satisfy Assumptions 1- and the hypothesis of Lemma 7. Suppose min a a δ a = min D(θa θ a ). a a,θ S a Suppose also that L Z + is such that for every a a and θ S a, {â A : â a,d(θ â θâ) } L, ) logt for all â a. i.e., at leastlcoordinates of D θ (excluding the A -th coordinate a ) are at least. Then, ( ) A L (1+ǫ) C(logT) logt. Proof. Consider a solution (z, a) to a relaxation of the optimization problem (1) obtained by replacing δâ ( ) with and D θ with D θ := min(d θ, ½) D 1 θ. We claim that z 1 ½,z A L χ where χ := (1+ǫ) logt. If not, lety = χ ( 1,..., 1,), and observe that But then, D θ,y z = D θ,y D θ,z ½,y z = ½,y ½,z χ L 1 χ = χ(l 1). < χ() χ( A L) = χ(l 1) D θ,y z ½,y z = ½,y z, since D θ ½ by definition and z y by hypothesis. This is a contradiction. Playing Subsets with Max reward: Let β (,1), and suppose that Θ = {1 β R,1 β R 1,...,1 β,1 β} N, for positive integers R and N. Consider an N armed Bernoulli bandit with arm parameters µ Θ. The complex actions are all size M subsets of the N basic arms, M N 1. Let µ min := min a A i a (1 µ i). 1 Here ½ represents an all-ones vector of dimension A, and the minimum is taken coordinatewise. Also, a solution exists since the objective is continuous and the feasible region is compact. 17

Complex Bandit Problems and Thompson Sampling

Complex Bandit Problems and Thompson Sampling Complex Bandit Problems and Aditya Gopalan Department of Electrical Engineering Technion, Israel aditya@ee.technion.ac.il Shie Mannor Department of Electrical Engineering Technion, Israel shie@ee.technion.ac.il

More information

Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems, Part I. Sébastien Bubeck Theory Group

Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems, Part I. Sébastien Bubeck Theory Group Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems, Part I Sébastien Bubeck Theory Group i.i.d. multi-armed bandit, Robbins [1952] i.i.d. multi-armed bandit, Robbins [1952] Known

More information

Bandit models: a tutorial

Bandit models: a tutorial Gdt COS, December 3rd, 2015 Multi-Armed Bandit model: general setting K arms: for a {1,..., K}, (X a,t ) t N is a stochastic process. (unknown distributions) Bandit game: a each round t, an agent chooses

More information

Multi-armed bandit models: a tutorial

Multi-armed bandit models: a tutorial Multi-armed bandit models: a tutorial CERMICS seminar, March 30th, 2016 Multi-Armed Bandit model: general setting K arms: for a {1,..., K}, (X a,t ) t N is a stochastic process. (unknown distributions)

More information

Online Learning and Sequential Decision Making

Online Learning and Sequential Decision Making Online Learning and Sequential Decision Making Emilie Kaufmann CNRS & CRIStAL, Inria SequeL, emilie.kaufmann@univ-lille.fr Research School, ENS Lyon, Novembre 12-13th 2018 Emilie Kaufmann Sequential Decision

More information

On the Complexity of Best Arm Identification in Multi-Armed Bandit Models

On the Complexity of Best Arm Identification in Multi-Armed Bandit Models On the Complexity of Best Arm Identification in Multi-Armed Bandit Models Aurélien Garivier Institut de Mathématiques de Toulouse Information Theory, Learning and Big Data Simons Institute, Berkeley, March

More information

Stratégies bayésiennes et fréquentistes dans un modèle de bandit

Stratégies bayésiennes et fréquentistes dans un modèle de bandit Stratégies bayésiennes et fréquentistes dans un modèle de bandit thèse effectuée à Telecom ParisTech, co-dirigée par Olivier Cappé, Aurélien Garivier et Rémi Munos Journées MAS, Grenoble, 30 août 2016

More information

Sparse Linear Contextual Bandits via Relevance Vector Machines

Sparse Linear Contextual Bandits via Relevance Vector Machines Sparse Linear Contextual Bandits via Relevance Vector Machines Davis Gilton and Rebecca Willett Electrical and Computer Engineering University of Wisconsin-Madison Madison, WI 53706 Email: gilton@wisc.edu,

More information

On Bayesian bandit algorithms

On Bayesian bandit algorithms On Bayesian bandit algorithms Emilie Kaufmann joint work with Olivier Cappé, Aurélien Garivier, Nathaniel Korda and Rémi Munos July 1st, 2012 Emilie Kaufmann (Telecom ParisTech) On Bayesian bandit algorithms

More information

Analysis of Thompson Sampling for the multi-armed bandit problem

Analysis of Thompson Sampling for the multi-armed bandit problem Analysis of Thompson Sampling for the multi-armed bandit problem Shipra Agrawal Microsoft Research India shipra@microsoft.com avin Goyal Microsoft Research India navingo@microsoft.com Abstract We show

More information

Two generic principles in modern bandits: the optimistic principle and Thompson sampling

Two generic principles in modern bandits: the optimistic principle and Thompson sampling Two generic principles in modern bandits: the optimistic principle and Thompson sampling Rémi Munos INRIA Lille, France CSML Lunch Seminars, September 12, 2014 Outline Two principles: The optimistic principle

More information

An Information-Theoretic Analysis of Thompson Sampling

An Information-Theoretic Analysis of Thompson Sampling Journal of Machine Learning Research (2015) Submitted ; Published An Information-Theoretic Analysis of Thompson Sampling Daniel Russo Department of Management Science and Engineering Stanford University

More information

The information complexity of sequential resource allocation
