Assortment Optimization under MNL

Haotian Song

April 30, 2017

1 Introduction

The assortment optimization problem aims to find the revenue-maximizing assortment of products to offer when the prices of the products are fixed. The problem has gained tremendous popularity and is by now well studied. This report focuses on the assortment optimization problem under the MNL model without any capacity constraints and is organized as follows. We first give the definition and two examples, one of them showing that a greedy scheme cannot work (original); then we state a crucial theorem and give four proofs from different aspects. Proofs 1 and 3 are original; Proof 2 is due to Tarek.

2 Assortment Optimization under MNL

Let $N$ denote the universe of $n$ items and let $0$ denote the no-purchase option. The assortment optimization problem is to figure out a set of products $S$ to offer so as to maximize the expected revenue $\text{Rev}(S)$. Under the MNL model, the probability of choosing item $i$ from $S$ after normalization is given by

\[ P(i \mid S) = \frac{v_i}{1 + \sum_{j \in S} v_j}, \]

where we have the standard setting that $U_i = V_i + \epsilon_i$ with $\epsilon_i \overset{\text{i.i.d.}}{\sim} \text{Gumbel}(0, \mu)$ and $v_i = \exp(\mu V_i)$. Let $r_i$ denote the revenue of item $i$; then the expected revenue of assortment $S$ is given by

\[ \text{Rev}(S) = \sum_{i \in S} r_i P(i \mid S) = \frac{\sum_{i \in S} r_i v_i}{1 + \sum_{j \in S} v_j}. \]

Without loss of generality we may assume the item revenues are in decreasing order, that is, $r_1 \geq r_2 \geq \cdots \geq r_n$. Since this is a decision problem, deciding for each item whether to select it or not, we can formulate it as a binary optimization problem.

Problem 2.1 (Binary Formulation). The assortment optimization problem under the MNL model can be written as the following unconstrained binary optimization problem:

\[ \max_{S \subseteq \{1, 2, \ldots, n\}} \text{Rev}(S) = \max_{x \in \{0,1\}^n} \frac{\sum_{i=1}^n r_i v_i x_i}{1 + \sum_{i=1}^n v_i x_i}. \]
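As an illustration of the objective (not part of the original report), the short Python sketch below computes $\text{Rev}(S)$ directly from the definition; the helper name expected_revenue is my own, and the data is taken from Example 2.2 below.

```python
def expected_revenue(S, r, v):
    """Expected revenue Rev(S) under the MNL model.

    r[i], v[i] are the revenue and preference weight of item i (0-indexed);
    S is any iterable of item indices; the no-purchase weight is normalized to 1.
    """
    num = sum(r[i] * v[i] for i in S)
    den = 1.0 + sum(v[i] for i in S)
    return num / den

# Data of Example 2.2 (items 1, 2, 3 in the report become indices 0, 1, 2 here).
r = [10.0, 6.0, 3.0]
v = [0.4, 1.2, 0.6]
print(expected_revenue({0, 1}, r, v))  # 56/13 = 4.3077..., i.e. Rev({1, 2})
```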

Before going into the properties of this problem, let us take a look at the example below.

Example 2.2. Consider items 0, 1, 2, 3, where item 0 is the no-purchase option, with the following revenues and preference weights:

    item   0     1     2     3
    r_i    0    10     6     3
    v_i    1   0.4   1.2   0.6

Starting with $S^{(0)} = \emptyset$, we consider adding items to $S$ in order to increase the average revenue. Note that

\[ \frac{r_2 v_2}{1 + v_2} = 36/11 > \frac{r_1 v_1}{1 + v_1} = 20/7 > \frac{r_3 v_3}{1 + v_3} = 9/8. \]

We may add item 2 to $S^{(0)}$ and take $S^{(1)} = S^{(0)} \cup \{2\}$ for the next step, with

\[ \text{Rev}(S^{(1)}) = \frac{r_2 v_2}{1 + v_2} = 36/11. \]

What should we add next, or should we stop? Think of $\text{Rev}(S)$ as the average revenue of the assortment $S$. Since $r_3 = 3 < 36/11 < r_1 = 10$, adding item 3 to $S^{(1)}$ will bring down the revenue, while adding item 1 will increase it. Taking $S^{(2)} = S^{(1)} \cup \{1\}$, we obtain

\[ \text{Rev}(S^{(2)}) = \frac{\text{Rev}(S^{(1)})(1 + v_2) + r_1 v_1}{1 + v_2 + v_1} = 56/13. \]

Since adding item 3 would only decrease the revenue, we stop. Indeed, by exhausting all the possibilities, we can confirm that $S^* = S^{(2)}$.

Example 2.2 gives the impression that we can compute the optimal assortment in a greedy manner. However, this is not always the case.

Example 2.3 (Greedy does not work). Consider the following items:

    item   0     1     2     3     4
    r_i    0    12    10     7     6
    v_i    1   0.5   0.9     2     5

It is easy to check that

\[ \max_i \frac{r_i v_i}{1 + v_i} = \frac{r_4 v_4}{1 + v_4} = 5, \]

so a greedy scheme would start by offering item 4. However, one can also easily check that $S^* = \{1, 2, 3\}$ with

\[ \text{Rev}(S^*) = \frac{12 \cdot 0.5 + 10 \cdot 0.9 + 7 \cdot 2}{1 + 0.5 + 0.9 + 2} = \frac{145}{22}, \]

and consequently item $4 \notin S^*$.

Although the greedy algorithm does not work, both examples give us the intuition that the optimal assortment $S^*$ seems to consist of the consecutive products from item 1 to item $i$ for some $1 \leq i \leq n$. Indeed, we have the following theorem.

Theorem 2.4. The optimal solution to Problem 2.1 must be of the form $x^* = [1, \ldots, 1, 0, \ldots, 0]$, which means the optimal assortment is $S^* = \{1, 2, \ldots, i\}$ for some $i$.

For example, if in Problem 2.1 we have $n = 3$ and $r_1 > r_2 > r_3$, we immediately know that none of $[1, 0, 1]$, $[0, 1, 1]$, $[0, 1, 0]$, $[0, 0, 1]$ can be optimal.
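If helpful, Example 2.3 (and the nested-by-revenue structure claimed in Theorem 2.4) can be checked by brute force; the sketch below, reusing the expected_revenue helper from Section 2 and the best_assortment name introduced here for illustration, enumerates all $2^n$ assortments.

```python
from itertools import combinations

def best_assortment(r, v):
    """Enumerate all nonempty assortments and return the revenue-maximizing one."""
    n = len(r)
    best_S, best_rev = set(), 0.0
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            rev = expected_revenue(S, r, v)
            if rev > best_rev:
                best_S, best_rev = set(S), rev
    return best_S, best_rev

# Data of Example 2.3 (items 1..4 in the report become indices 0..3 here).
r = [12.0, 10.0, 7.0, 6.0]
v = [0.5, 0.9, 2.0, 5.0]
print(best_assortment(r, v))  # ({0, 1, 2}, 6.5909...), i.e. S* = {1, 2, 3}, Rev = 145/22
```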

There are various proofs of this theorem, and here we demonstrate four of them. The four proofs give different formulations and insights: some are neat in themselves, while others are more complicated but compatible with additional constraints, so I want to cover them all here. The first one is quite straightforward.

Proof 1. If $i \in S^*$, then $\{j \in N : r_j > r_i\} \subseteq S^*$; otherwise, we could add such $j$'s to $S^*$ to increase the revenue. In fact, there is an underlying structure: $\text{Rev}(S \cup \{k\})$ is a convex combination of $\text{Rev}(S)$ and $r_k$ for any $k \notin S$:

\[
\text{Rev}(S \cup \{k\})
= \frac{r_k v_k + \sum_{i \in S} r_i v_i}{1 + v_k + \sum_{i \in S} v_i}
= \underbrace{\frac{v_k}{1 + v_k + \sum_{i \in S} v_i}}_{\alpha_k}\, r_k
+ \underbrace{\frac{1 + \sum_{i \in S} v_i}{1 + v_k + \sum_{i \in S} v_i}}_{1 - \alpha_k}\,
\underbrace{\frac{\sum_{i \in S} r_i v_i}{1 + \sum_{i \in S} v_i}}_{\text{Rev}(S)}.
\]

Then for any $i \in S^*$ we have

\[ \text{Rev}(S^*) = \alpha_i r_i + (1 - \alpha_i)\,\text{Rev}(S^* \setminus \{i\}). \]

Since $\text{Rev}(S^*) \geq \text{Rev}(S^* \setminus \{i\})$ and $\text{Rev}(S^*)$ lies between $r_i$ and $\text{Rev}(S^* \setminus \{i\})$, we must have $r_i \geq \text{Rev}(S^*) \geq \text{Rev}(S^* \setminus \{i\})$. Conversely, if $r_i > \text{Rev}(S^*)$ for some $i \notin S^*$, then adding $i$ would strictly increase the revenue by the same convex-combination identity. Therefore,

\[ \{i : r_i > \text{Rev}(S^*)\} \subseteq S^* \subseteq \{i : r_i \geq \text{Rev}(S^*)\}, \]

which, by the ordering of the revenues, has the claimed form.

Based on Theorem 2.4 and Proof 1, the algorithm to compute $S^*$ can be very simple:

Input: $r_1 \geq r_2 \geq \cdots \geq r_n$, $\{v_i\}_{i=1}^n$
Output: $S^*$
Initialization: $S = \emptyset$, $k = 1$;
while $\text{Rev}(S) < r_k$ and $k \leq n$ do
    $S = S \cup \{k\}$; $k = k + 1$;
end
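A minimal runnable version of this procedure, assuming the expected_revenue helper from Section 2 and items already sorted by decreasing revenue, might look as follows; the function name optimal_assortment is mine.

```python
def optimal_assortment(r, v):
    """Revenue-ordered algorithm from Proof 1.

    Assumes r is sorted in decreasing order and all v[i] > 0.
    Keeps adding the next-highest-revenue item while its revenue still
    exceeds the current expected revenue of the assortment.
    """
    S = []
    rev = 0.0
    for k in range(len(r)):
        if rev >= r[k]:
            break
        S.append(k)
        rev = expected_revenue(S, r, v)
    return set(S), rev

# Example 2.2 data: returns ({0, 1}, 56/13), i.e. S* = {1, 2} in the report's indexing.
print(optimal_assortment([10.0, 6.0, 3.0], [0.4, 1.2, 0.6]))
```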

The idea of the second proof is to look at the first-order condition, a typical strategy for an unconstrained differentiable optimization problem, of the relaxed objective function defined as

\[ R(x) = \frac{\sum_{i=1}^n r_i v_i x_i}{1 + \sum_{i=1}^n v_i x_i}, \qquad x \in [0, 1]^n. \]

Proof 2. Taking the partial derivative with respect to any $x_i$, we obtain

\[
\frac{\partial R(x)}{\partial x_i}
= \frac{r_i v_i \left(1 + \sum_{j=1}^n v_j x_j\right) - v_i \sum_{j=1}^n r_j v_j x_j}{\left(1 + \sum_{j=1}^n v_j x_j\right)^2}
= \frac{v_i}{1 + \sum_{j=1}^n v_j x_j}\,\bigl(r_i - R(x)\bigr).
\]

If at the current $x$ we have $r_i < R(x)$, then $\partial R(x)/\partial x_i < 0$, which gives a descent direction in $x_i$. Noting this, we claim that the optimal assortment must have the form

\[ S^* = \{i : r_i > R(x^*)\}, \qquad \text{where } x^* = \arg\max_{x \in [0,1]^n} R(x). \]

Otherwise, suppose there exists $i$ such that $r_i > R(x^*)$ but $x_i^* < 1$; then increasing $x_i$ by a sufficiently small $\epsilon$ leads to a higher revenue. On the other hand, suppose there exists $i$ such that $r_i < R(x^*)$ but $x_i^* > 0$; then decreasing $x_i$ by a sufficiently small $\epsilon$ leads to a higher revenue. Both contradict optimality.

Note that we did not claim $\max_{x \in [0,1]^n} R(x) = \max_{x \in \{0,1\}^n} R(x)$ in Proof 2, but the analysis tells us that

if $r_i > R(x^*)$ then $x_i^* = 1$, and if $r_i < R(x^*)$ then $x_i^* = 0$,

which implies that the optimal revenue is achieved at a vertex.

The next proof uses no derivatives and emphasizes the objective function value at the corner points, but essentially it is the same as Proof 2.

Proof 3. If $R(x) < r_j$ for some $j$, then

\[
R(x) - \frac{\sum_{i \neq j} r_i v_i x_i + r_j v_j}{1 + \sum_{i \neq j} v_i x_i + v_j}
= R(x) - \frac{\sum_{i=1}^n r_i v_i x_i + r_j v_j (1 - x_j)}{1 + \sum_{i=1}^n v_i x_i + v_j (1 - x_j)}
\]
\[
= \frac{\left(\sum_{i=1}^n r_i v_i x_i\right) v_j (1 - x_j) - r_j v_j (1 - x_j)\left(1 + \sum_{i=1}^n v_i x_i\right)}{\left(1 + \sum_{i=1}^n v_i x_i\right)\left[1 + \sum_{i=1}^n v_i x_i + v_j (1 - x_j)\right]}
= \frac{v_j (1 - x_j)}{1 + \sum_{i=1}^n v_i x_i + v_j (1 - x_j)}\,\bigl(R(x) - r_j\bigr) < 0,
\]

which implies that increasing $x_j$ to 1 gives a higher revenue. Similarly, if $R(x) > r_j$, then

\[
R(x) - \frac{\sum_{i \neq j} r_i v_i x_i}{1 + \sum_{i \neq j} v_i x_i}
= -\frac{v_j x_j}{1 + \sum_{i=1}^n v_i x_i - v_j x_j}\,\bigl(R(x) - r_j\bigr) < 0,
\]

which implies that decreasing $x_j$ to 0 gives a higher revenue. The same argument as in the last part of Proof 2 then yields $S^* = \{i : r_i > R(x^*)\}$.
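As a quick sanity check of the derivative formula in Proof 2 (an illustration I added, not part of the original report), the snippet below compares the closed form $\partial R/\partial x_i = v_i\,(r_i - R(x))/(1 + \sum_j v_j x_j)$ against a central finite difference at a random interior point, using the data of Example 2.3.

```python
import random

def R(x, r, v):
    """Relaxed objective R(x) on [0, 1]^n."""
    num = sum(ri * vi * xi for ri, vi, xi in zip(r, v, x))
    den = 1.0 + sum(vi * xi for vi, xi in zip(v, x))
    return num / den

r = [12.0, 10.0, 7.0, 6.0]
v = [0.5, 0.9, 2.0, 5.0]
x = [random.uniform(0.1, 0.9) for _ in r]

den = 1.0 + sum(vi * xi for vi, xi in zip(v, x))
for i in range(len(r)):
    analytic = v[i] * (r[i] - R(x, r, v)) / den
    h = 1e-6
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    numeric = (R(xp, r, v) - R(xm, r, v)) / (2 * h)
    print(i, round(analytic, 6), round(numeric, 6))  # the two columns should agree
```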

The last proof looks more complicated than the previous ones, but it is actually very important, as it gives an LP formulation. We define the following LP:

\[
\begin{aligned}
\max \quad & \sum_{i=1}^n r_i v_i y_i \\
\text{s.t.} \quad & y_0 + \sum_{i=1}^n v_i y_i = 1, \\
& y_i \leq y_0, \quad 1 \leq i \leq n, \\
& y \geq 0.
\end{aligned}
\]

Denote the optimal value of the above LP by $Y^*$; we want to show that $\text{Rev}(S^*) = Y^*$.

Proof 4. First, let

\[
\hat{y}_0 = \frac{1}{1 + \sum_{j \in S^*} v_j}, \qquad
\hat{y}_i = \frac{\mathbb{1}\{i \in S^*\}}{1 + \sum_{j \in S^*} v_j}, \quad 1 \leq i \leq n,
\]

so that $\sum_{i=1}^n r_i v_i \hat{y}_i = \text{Rev}(S^*)$. It is easy to check that $\hat{y}$ is a feasible solution to the LP, and consequently $\text{Rev}(S^*) \leq Y^*$. In fact, we can interpret $\hat{y}_0$ as the no-purchase probability and $v_i \hat{y}_i$ as the market share of product $i$.

To see $\text{Rev}(S^*) \geq Y^*$, we first write down the dual of the LP:

\[
\begin{aligned}
\min \quad & \theta \\
\text{s.t.} \quad & \theta - \sum_{i=1}^n \alpha_i \geq 0, \\
& \theta v_i + \alpha_i \geq r_i v_i, \quad 1 \leq i \leq n, \\
& \alpha \geq 0.
\end{aligned}
\]

By the duality theorem below, the optimal value of the dual problem is also $Y^*$.

Notation: $p^*$ is the primal optimal value; $d^*$ is the dual optimal value; $p^* = -\infty$ if the primal problem is infeasible; $d^* = +\infty$ if the dual problem is infeasible; $p^* = +\infty$ if the primal problem is unbounded; $d^* = -\infty$ if the dual problem is unbounded.

Duality Theorem. If the primal or the dual problem is feasible, then $p^* = d^*$. Moreover, if $p^* = d^*$ is finite, then both optima are attained.

By the trivial substitution $\alpha_i \to v_i \alpha_i$, we obtain an equivalent and neater form of the dual problem:

\[
\begin{aligned}
\min \quad & \theta \\
\text{s.t.} \quad & \theta \geq \sum_{i=1}^n v_i \alpha_i, \\
& \alpha_i \geq r_i - \theta, \quad 1 \leq i \leq n, \\
& \alpha \geq 0.
\end{aligned}
\]

Note that $r_1 - \theta \geq r_2 - \theta \geq \cdots \geq r_n - \theta$. Consequently, for any $\theta$ there exists $k$ such that $r_i - \theta \geq 0$ if $i \leq k$ and $r_i - \theta < 0$ otherwise. For $i > k$ the constraint $\alpha_i \geq r_i - \theta$ is never tight, since $\alpha_i \geq 0 > r_i - \theta$. For $i \leq k$, reducing $\alpha_i$ to $r_i - \theta$ only relaxes the first constraint and keeps everything feasible; since this does not change $\theta$, the objective value is unchanged. Consequently, there exists an optimal solution to the dual problem in which the first $\ell$ constraints of the form $\alpha_i \geq r_i - \theta$ are tight and the rest are not tight. By complementary slackness (using that $\alpha_i^* = r_i - \theta^* > 0$ for $i \leq \ell$), we have

\[
y_i^* = y_0^*, \quad 1 \leq i \leq \ell, \qquad y_i^* = 0, \quad \ell + 1 \leq i \leq n,
\]

which implies

\[
y_0^* = \frac{1}{1 + \sum_{j=1}^{\ell} v_j}, \qquad
y_i^* = \frac{1}{1 + \sum_{j=1}^{\ell} v_j}, \quad 1 \leq i \leq \ell.
\]

This shows that the assortment $S = \{1, 2, \ldots, \ell\}$ attains the optimal value of the LP. Therefore $\text{Rev}(S^*) \geq \text{Rev}(S) = Y^*$.
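The LP above is small enough to hand to an off-the-shelf solver. The sketch below is one way to confirm numerically, on the data of Example 2.3, that the LP value equals $\text{Rev}(S^*) = 145/22$; it assumes SciPy is available and is an illustration I added, not part of the original report.

```python
import numpy as np
from scipy.optimize import linprog

r = np.array([12.0, 10.0, 7.0, 6.0])
v = np.array([0.5, 0.9, 2.0, 5.0])
n = len(r)

# Variables: y = (y_0, y_1, ..., y_n). linprog minimizes, so negate the objective.
c = np.concatenate(([0.0], -r * v))

# Equality constraint: y_0 + sum_i v_i y_i = 1.
A_eq = np.concatenate(([1.0], v)).reshape(1, -1)
b_eq = [1.0]

# Inequality constraints: y_i - y_0 <= 0 for i = 1..n.
A_ub = np.zeros((n, n + 1))
A_ub[:, 0] = -1.0
A_ub[np.arange(n), np.arange(1, n + 1)] = 1.0
b_ub = np.zeros(n)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(-res.fun)           # about 6.5909 = 145/22
print(res.x[1:] > 1e-9)   # items 1, 2, 3 are offered; item 4 is not
```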

The nice thing about this LP formulation is that it is more flexible, in the sense that it allows the possibility of adding further constraints, such as capacity constraints, which I will not cover in this report.

References

J. Davis, G. Gallego, H. Topaloglu. Assortment planning under the multinomial logit model with totally unimodular constraint structures. Working paper.