Massachusetts Institute of Technology 6.854J/18.415J: Advanced Algorithms Friday, March 18, 2016 Ankur Moitra. Problem Set 6


Due: Wednesday, April 6, 2016, 7 pm, Dropbox outside Stata G5

Collaboration policy: collaboration is strongly encouraged. However, remember that:

1. You must write up your own solutions, independently.
2. You must record the name of every collaborator.
3. You must actually participate in solving all the problems. This is difficult in very large groups, so you should keep your collaboration groups limited to 3 or 4 people in a given week.
4. Write each problem on a separate sheet and write your name on top of every sheet.
5. No bibles. This includes solutions posted to problems in previous years.

1 Separation Oracles

Describe efficient separation oracles for each of the following families of convex sets. Here, "efficient" means linear time plus O(1) calls to any additional oracles provided to you.

(a) The set A ∩ B, given separation oracles for A and B.

(b) The ℓ₁ ball: ‖x‖₁ ≤ 1.

(c) Any convex set A, given a projection oracle for A. A projection oracle, given a point x, reports whether x is in A and, if it is not, additionally returns arg min_{y ∈ A} ‖x − y‖₂.

(d) The ε-neighborhood of a convex set A, namely {x : ∃ y ∈ A, ‖x − y‖₂ ≤ ε}, given a projection oracle for A.
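For intuition on part (b): if ‖x‖₁ > 1, the sign vector d = sign(x) separates x from the ball, since d·x = ‖x‖₁ > 1 while d·y ≤ ‖y‖₁ ≤ 1 for every feasible y. A minimal sketch (not part of the problem set; the function name is our own):

```python
def separate_l1_ball(x):
    """Separation oracle for the l1 ball {x : ||x||_1 <= 1}.

    Returns None if x is feasible; otherwise returns a direction d
    such that d.y <= 1 for all y in the ball, while d.x > 1.
    """
    if sum(abs(xi) for xi in x) <= 1:
        return None  # x is inside the ball; nothing to separate
    # sign vector of x: d.x = ||x||_1 > 1, but d.y <= ||y||_1 <= 1 for feasible y
    return [1.0 if xi >= 0 else -1.0 for xi in x]
```

This runs in linear time, matching the efficiency requirement above.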

2 Optimization with the Ellipsoid Method

Here we will show a way to optimize a linear function over a convex set using the ellipsoid algorithm. Recall that in class we showed how to find a feasible point in a convex set given a separation oracle, and then used this to optimize a linear function by doing binary search over the optimum value, turning the objective function into a linear constraint. Here we will explore an alternative method that uses the ellipsoid method directly, without performing binary search. The basic idea is to optimize over the set A ∩ {x : cᵀx ≤ v} without actually knowing the value v.

To avoid going through the analysis of the ellipsoid method again, we will give a more black-box description of its guarantees. In particular, we will view it as an interactive process, producing output and reading input in each round. The process is initialized with a radius R. At the i-th round, it outputs a point x_i. It then reads as input a direction d_i, and goes on to round i + 1. Let Ã be any convex set contained in B(0, R) such that Ã contains some ball of radius r. Furthermore, suppose that for every i with x_i ∉ Ã, the direction d_i defines a hyperplane that separates x_i from Ã. Then, as we proved, for m = O(n² log(R/r)) some x_i with i ≤ m must satisfy x_i ∈ Ã. Notably, this holds despite the ellipsoid method having no explicit knowledge of Ã beyond the conditions on the vectors d_i. The idea here is, in some sense, to change Ã as we go along.

Now consider the following algorithm for optimization. We assume a set A with a separation oracle, a bounding radius R such that A ⊆ B(0, R), and a linear objective cᵀx to be minimized.

Algorithm 1 Ellipsoid-Optimize
1: Initialize the ellipsoid method with radius R. Set v_best to +∞.
2: for i from 1 to m do
3:   Get x_i from the ellipsoid method.
4:   Use the separation oracle on x_i, finding whether x_i ∈ A and a separating hyperplane if not.
5:   if x_i ∈ A then
6:     if cᵀx_i < v_best then
7:       Set v_best to cᵀx_i.
8:       Set x_best to x_i.
9:     end if
10:    Set d_i = c.
11:  else
12:    Set d_i to the separating direction returned by the oracle.
13:  end if
14:  Send the direction d_i back to the ellipsoid method.
15: end for
16: return x_best

In other words, in each step we call the separation oracle for A, using the hyperplane it returns if x_i ∉ A, and a hyperplane defined by the objective c otherwise (i.e. if x_i ∈ A). After m iterations we simply return the best objective value out of all the points we have seen in A.

Prove that for any objective value v, as long as A ∩ {x : cᵀx ≤ v} contains a ball of radius r, Ellipsoid-Optimize will return an x ∈ A satisfying cᵀx ≤ v. Note that this guarantee holds without any knowledge of the value v.

3 Two Player Zero Sum Games

This problem proves the von Neumann theorem on zero-sum games mentioned in class. In a zero-sum two-player game, Alice has a choice of n so-called pure strategies and Bob has a choice of m pure strategies. If Alice picks strategy i and Bob picks strategy j, then the payoff is a_ij, meaning a_ij dollars are transferred from Alice to Bob. So Bob makes money if a_ij is positive, while Alice makes money if a_ij is negative. Thus Alice wants to pick a strategy that minimizes the payoff, while Bob wants a strategy that maximizes it. The matrix A = (a_ij) is called the payoff matrix.

It is well known that to play these games well, you need to use a mixed strategy: a random choice from among pure strategies. A mixed strategy is just a particular probability distribution over pure strategies: you flip coins and then play the selected pure strategy. If Alice has mixed strategy x, meaning she plays strategy i with probability x_i, and Bob has mixed strategy y, then it is easy to prove that the expected payoff of the resulting game is xAy. Alice wants to minimize this expected payoff, while Bob wants to maximize it.

Our goal is to understand what strategies each player should play. We will start by making the pessimal assumption for Alice that whichever strategy she picks, Bob will play the best possible strategy against her. In other words, given Alice's strategy x, Bob will pick a strategy y that achieves max_y xAy. Thus Alice wants to find a distribution x that minimizes max_y xAy. Similarly, Bob wants a y that maximizes min_x xAy.
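The pessimal assumption is easy to make concrete: if Alice's mixed strategy x is fixed, Bob's expected payoff from pure strategy j is (xA)_j, and a mixed y can only average these values, so some pure column is already optimal (this is the content of part (a) below). A small sketch, with our own naming, treating A as a list of rows:

```python
def best_response(A, x):
    """Given payoff matrix A (n x m) and Alice's mixed strategy x,
    return Bob's best pure strategy j and its expected payoff.

    Bob's expected payoff for pure strategy j is sum_i x[i] * A[i][j];
    any mixed strategy is a convex combination of these, so a pure
    strategy attains the maximum.
    """
    m = len(A[0])
    payoffs = [sum(x[i] * A[i][j] for i in range(len(A))) for j in range(m)]
    best_j = max(range(m), key=lambda j: payoffs[j])
    return best_j, payoffs[best_j]
```

For matching pennies, A = [[1, -1], [-1, 1]], playing x = (1, 0) lets Bob collect 1 by playing column 0, while the uniform x = (1/2, 1/2) caps Bob's expected payoff at 0.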
So we are interested in solving the following two problems:

    min_{x : Σᵢ xᵢ = 1, x ≥ 0}  max_{y : Σⱼ yⱼ = 1, y ≥ 0}  xAy        max_{y : Σⱼ yⱼ = 1, y ≥ 0}  min_{x : Σᵢ xᵢ = 1, x ≥ 0}  xAy

Unfortunately, these are nonlinear programs!

(a) Show that if Alice's mixed strategy is known, then Bob has a pure strategy serving as his best response.

(b) Show how to convert each program above into a linear program, and thus find an optimal strategy for both players in polynomial time.

(c) Give a plausible explanation of the meaning of your linear program (why does it give the optimum?).

(d) Use strong duality (applied to the LP you built in the previous part) to argue that the above two quantities are equal.

The second statement shows that the strategies x and y, besides being optimal, are in Nash equilibrium: even if each player knows the other's strategy, there is no point in changing strategies. This was proven by von Neumann and was in fact one of the ideas that led to the discovery of strong duality.

4 Weak Duality for Nonlinear Programs

Here we will look at a simple class of second-order cone programs, which are nonlinear convex programs. Your task will be to show that a form of weak duality still holds for these programs. The convex programs below are defined for an n-dimensional vector c, an m × n matrix A, an l × n matrix B, and an m-dimensional vector b.

The primal problem is to find an n-dimensional vector x:

    minimize_x  cᵀx
    subject to  Ax ≥ b,
                ‖Bx‖₂ ≤ 1.

This is similar to a linear program except for the extra ℓ₂ ball constraint ‖Bx‖₂ ≤ 1. The dual problem is to find an m-dimensional vector y and an l-dimensional vector z:

    maximize_{y,z}  bᵀy − ‖z‖₂
    subject to      Aᵀy + Bᵀz = c,
                    y ≥ 0.

Prove that for any primal solution and any dual solution, the objective value of the primal is at least the objective value of the dual (i.e. weak duality).

5 Two Definitions of Submodular

Let N be a set (the "universe") and F a function mapping subsets of N to real numbers. There are two different standard definitions of what it means for F to be submodular:

(i) For all A, B ⊆ N: F(A ∪ B) + F(A ∩ B) ≤ F(A) + F(B).

(ii) For all A ⊆ B and all j ∉ B: F(A ∪ {j}) − F(A) ≥ F(B ∪ {j}) − F(B).
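The two definitions can be sanity-checked by brute force on a small example. A coverage function F(S) = |∪_{i∈S} C_i| is a standard submodular function; the sketch below (our own, not part of the assignment) verifies that it satisfies both (i) and (ii):

```python
from itertools import combinations

def powerset(universe):
    s = list(universe)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def satisfies_i(F, N):
    """Definition (i): F(A u B) + F(A n B) <= F(A) + F(B) for all A, B."""
    return all(F(A | B) + F(A & B) <= F(A) + F(B) + 1e-12
               for A in powerset(N) for B in powerset(N))

def satisfies_ii(F, N):
    """Definition (ii): the marginal value of j shrinks as the set grows."""
    return all(F(A | {j}) - F(A) >= F(B | {j}) - F(B) - 1e-12
               for B in powerset(N) for A in powerset(B)
               for j in N if j not in B)

# Coverage function: F(S) = size of the union of the chosen sets C_i.
C = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c", "d"}}
F = lambda S: len(set().union(*(C[i] for i in S))) if S else 0
N = frozenset(C)
```

A brute-force check like this is no substitute for the proof, but it is a handy way to catch a misremembered inequality direction.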

Your task is to prove that these definitions are equivalent:

(a) Show that (i) implies (ii).

(b) Show that (ii) implies (i).

6 Fun with Lovász Extensions

Let F(S) be a submodular function. Recall that the Lovász extension f(x) of F is defined for nonnegative vectors x as

    f(x) = F(∅) + ∫₀^∞ (F({i : xᵢ ≥ a}) − F(∅)) da.

Note that this definition applies to all x ∈ ℝⁿ₊, not just x ∈ [0, 1]ⁿ. We will be looking at an interesting optimization problem involving the Lovász extension:

    x* = arg min_{x ≥ 0} f(x) + ½‖x‖².

Recall that f is not necessarily monotonically increasing. x* is always unique (due to the strong convexity of the objective). The aim will be to relate x* to the minimizers of

    F_a(S) = F(S) + a|S|.

Note that each F_a is itself a submodular function.

(a) Show that although F_a(S) may have multiple minimizers, there is a unique set S_a that minimizes F_a(S) and has higher cardinality than any other minimizer.

(b) Show that for all b > a, S_b ⊆ S_a.

(c) For any positive vector x, define X_a = {i : xᵢ ≥ a}. Show that

    f(x) + ½‖x‖² = F(∅) + ∫₀^∞ (F(X_a) − F(∅) + a|X_a|) da.

(d) Show that for any a > 0, {i : x*ᵢ ≥ a} = S_a. This means that solving one convex optimization problem lets us minimize all the submodular functions F_a simultaneously! You may assume without proof that the optimization problem defining x* has a unique optimum.
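To make the definition of f concrete: the threshold set {i : xᵢ ≥ a} only changes as a crosses a coordinate of x, so for nonnegative x the integral collapses to a finite sum over the coordinates sorted in decreasing order. A sketch of this evaluation (our own helper, representing F as a Python function on frozensets):

```python
def lovasz_extension(F, x):
    """Evaluate f(x) = F(empty) + integral over a > 0 of
    (F({i : x_i >= a}) - F(empty)) da, for a nonnegative vector x.

    The threshold set is piecewise constant in a: for a in
    (x_(k+1), x_(k)] it equals the indices of the k largest coordinates,
    so the integral is a sum of interval lengths times set values.
    """
    n = len(x)
    order = sorted(range(n), key=lambda i: -x[i])  # indices by decreasing x_i
    f_empty = F(frozenset())
    total = f_empty
    prefix = set()
    for k, i in enumerate(order):
        prefix.add(i)  # prefix now holds the (k+1) largest coordinates
        upper = x[i]
        lower = x[order[k + 1]] if k + 1 < n else 0.0
        total += (upper - lower) * (F(frozenset(prefix)) - f_empty)
    return total
```

A quick consistency check: on an indicator vector x = 1_S only the interval (0, 1] contributes, so f(1_S) = F(S); that is, f really extends F from {0,1}ⁿ to ℝⁿ₊.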